CHAPTER 6

ROBUST DESIGN IN INTEGRATED PRODUCT AND PROCESS DEVELOPMENT


Delivering reliable, high quality products and processes at low cost has become the key to survival in today's global economy. Driven by the need to compete on cost and performance, many quality-conscious organizations are increasingly focusing on the optimization of product/process design. This reflects the realization that quality cannot be achieved economically through inspection. Designing in quality is cheaper than trying to inspect and re-engineer it after a product hits the production floor or, worse, after it reaches the customer. Thus, new philosophies, technologies and advanced statistical tools must be employed to design high quality products at low cost. The purpose of this research work is to demonstrate, through industrial case studies, how integrating orthogonal array based DoE with risk management (the robust design method, which optimizes product/process design variables) in product development improves product performance, how functionally reliable those products are in the customer environment, and how cost effective they are for both producer and user.

6.1 Literature review on robust design

The collection of design principles and methods known as robust design is the philosophy of a Japanese industrial consultant, Genichi Taguchi. It is a more cost-conscious and effective way to realize robust, high-quality products than tightly controlling manufacturing processes. Instead of measuring quality via tolerance ranges (a common practice in industry), Taguchi proposed a mathematical model (equation 6.1) for the quality loss function. This quality loss function (L) is proportional to the square of the deviation of performance (y) from a target value (T).

L = k(y - T)^2   (6.1)

As illustrated in figure 6.1, any deviation from target performance results in a quality loss. This quality loss function represents Taguchi's philosophy of striving to deliver on-target products and processes rather than those that barely satisfy a corporate limit or tolerance level. From Taguchi's perspective, tolerance design (which involves tightening tolerances on product or process parameters) is expensive and should be utilized only when robustness cannot be designed into a product or process by selecting parameter levels that are least sensitive to variations. Robust design occurs during the parameter design stage, which precedes tolerance design but follows the system design in which a preliminary layout is specified for the product or process. Taguchi notes that too many tolerance-driven engineers jump directly from system design to tolerance design and ignore the critically important parameter design stage.
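As a small illustration of equation 6.1 (a sketch, not part of the case studies), the snippet below evaluates the quadratic loss for a few performance values. Consistent with the later remark in section 6.3 that the cost at the limit of the functional specification is needed to apply the loss function, the constant k is commonly fixed from the cost A0 incurred when performance deviates from target by the functional limit D0, giving k = A0/D0^2. All numbers and names here are invented for demonstration.

```python
def quality_loss(y, target, k):
    """Taguchi quadratic quality loss, L = k * (y - target)^2 (equation 6.1)."""
    return k * (y - target) ** 2

# Illustrative figures: a deviation of +/-0.5 V from a 115 V target is assumed
# to trigger a $50 adjustment, so k = A0 / D0^2 = 50 / 0.5^2 = 200 $/V^2.
A0, D0, target = 50.0, 0.5, 115.0
k = A0 / D0 ** 2

for y in (115.0, 115.2, 115.5):
    print(f"y = {y:6.1f} V -> loss = ${quality_loss(y, target, k):6.2f}")
# On-target output incurs zero loss; loss grows with the square of the
# deviation, so a unit sitting at the functional limit already carries
# the full $50 expected cost even though it is nominally "in spec".
```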

Taguchi's robust design approach for parameter design involves clearly separating control factors (design parameters that can be controlled easily) from noise factors (design parameters that are difficult or impossible to control). Designed experiments based on orthogonal arrays are conducted with control and noise factors to evaluate the effect of control factors on nominal response values and the sensitivity of responses to variations in noise factors. The overall quality of alternative designs is compared via signal-to-noise ratios that combine measures of the mean response and the standard deviation. Product or process designs, characterized by specific levels of control factors, are selected that maximize the signal-to-noise ratio. The intent is to minimize performance deviations from target values while simultaneously bringing mean performance on target. By this measure, a designer would search for solutions such as Product A in figure 6.1, which offers both on-target performance and minimal standard deviation, and therefore lower quality loss than Products B and C.

Figure 6.1 Taguchi's quality loss function (Taguchi & Clausing, 1990)

In robust design, it is important to take advantage of interactions and nonlinear relationships between control and noise factors to dampen the effect of noise factors and thus reduce variation in the responses (Wu and Chen, 1992). Control factor settings are chosen to minimize the sensitivity of a design to fluctuations in noise factors. Similarly, if control factors are expected to fluctuate, control factor settings are chosen that minimize the sensitivity of overall system performance to control factor variation. Compromises must typically be made between mean performance and performance variation. Robust solutions may not be optimal; conversely, optimal decisions are rarely robust. Undoubtedly, Taguchi initiated a paradigm shift in engineering design towards considering quality, robustness and variability earlier in the design process rather than exclusively in the final, detailed stages of design when tolerances are specified. He also encouraged designers to design quality into products and processes rather than

imposing it during the manufacturing process. Quality engineering that focuses exclusively on tolerancing has proven to be a very expensive approach relative to robust design. Precision manufacturing is costly. As a result of Taguchi's influence, statistical methods are more commonly used during the design process to consider the nondeterministic nature of many factors and assumptions in a systematic, mathematical manner. The alternative is to impose high factors of safety to ensure that a design can accommodate any potential variability. However, products with large factors of safety are often heavier, more expensive, and less attractive than their robustly designed counterparts. Overall, the potential benefits of implementing a robust design approach include increased customer satisfaction with products that exhibit consistently high rather than marginal quality, and decreased cost of rework and replacement of defective products. From this discussion, it is evident that Taguchi's robust design philosophy is appropriate as a partial foundation for a comprehensive, robust topology design method for multifunctional applications. However, it is still unclear how the robust design philosophy could be implemented for engineering design applications. Due to the intellectual and practical appeal of Taguchi's robust design philosophy, researchers and practitioners have been actively establishing and improving the methods and techniques needed to implement robust design for engineering applications. Important criticisms and extensions are reviewed in the following section.

6.2 Extensions of robust design methodology for engineering design applications

Although Taguchi's robust design principles are advocated widely in both industrial and academic settings, his statistical techniques, including orthogonal arrays and signal-to-noise ratio, have been criticized extensively, and improving the statistical methodology has been an active area of research (Myers and Montgomery, 1995; Nair, 1992; Tsui, 1992; Tsui, 1996). During the past decade, a number of researchers have extended robust design methods for a variety of applications in engineering design (Cagan and Williams, 1993; Chen et al., 1996a, 1996b; Chen and Lewis, 1999; Mavris et al., 1999; Otto and Antonsson, 1993a, 1993b; Parkinson et al., 1993; Su and Renaud, 1997; Yu and Ishii, 1994, 1998; Chang et al., 1994). Typically, the robustness of a design is related to variation in an objective function and constraint(s) caused by variation in environmental conditions or in the design parameters themselves, as well as to the feasibility and desirability of a design with respect to constraints and performance targets in light of this variation. In work that is foundational to the present research, Chen and coauthors (1996a, 1996b) used robust design techniques to determine ranged sets of preliminary design specifications that are both robust and flexible. They formulated their domain-independent systematic approach, the Robust Concept Exploration Method (RCEM), by

integrating statistical experimentation and approximate models, robust design techniques, multidisciplinary analyses, and multi-objective decisions. The computing infrastructure of the RCEM is illustrated in figure 6.2. As shown in the figure, design parameters are classified in stage A as noise factors, control factors, or responses. Statistical experiments are designed in stage B, and the results of the experiments are analyzed in stage D, based on data obtained from rigorous analysis models (stage C). Typically, experimentation is performed sequentially to explore and narrow the design space and to identify important factors for each response. In stage E, metamodels or surrogate models are constructed for each response. Multi-objective robust design decisions are modeled as compromise Decision Support Problems (DSPs) in stage F, and they are solved using the surrogate or response surface models directly rather than the computationally expensive analysis models.

Figure 6.2 Computing infrastructure for the RCEM (Chen et al., 1996a)

There are some cases in the early stages of design when the requirements themselves are uncertain and are most appropriately expressed as a range (i.e., smaller than a lower limit, larger than an upper limit, or between lower and upper limits) rather than as a target value, as shown in figure 6.3. In these cases, it may not be appropriate to bring the mean on target and minimize variation. Instead, it may be necessary to measure the extent to which a range or distribution of design performance (induced by a range of design specifications) satisfies a ranged set of design requirements. The design capability indices are a set of metrics, based on process capability indices, designed especially for assessing the capability of a ranged set of design specifications to satisfy a ranged set of design requirements. The design capability indices are incorporated as goals in the compromise DSP within the RCEM framework (Chen and coauthors, 1999a). In further work, Chen and Yuan (1999) and Chen and coauthors (1999b) introduced a design

preference index that allows a designer to specify varying degrees of desirability for ranged sets of performance, rather than specifying precise target values or limits for a range of requirements beyond which designs are considered worthless.

Figure 6.3 Comparing designs with respect to a range of requirements (Chen et al., 1999a)

6.3 Quality by design: Taguchi philosophy

Most of the robust design literature is focused on the latter portions of embodiment and detailed design, in which dimensions are adjusted to accommodate manufacturing variations; however, there has been some emphasis on infusing robust design techniques into the earlier stages of design, when decisions are made that profoundly impact product performance and quality. Primarily, this has been achieved by enhancing the robustness of design decisions with respect to subsequent variations in the designs themselves. Since the late 1950s, Dr. Genichi Taguchi has introduced several new statistical tools and concepts of quality improvement that depend heavily on the statistical theory of design of experiments. These methods of design optimization developed by Dr. Taguchi are referred to as robust design. This design method provides a systematic and efficient approach for finding the near-optimum combination of design parameters so that the product is functionally sound, exhibits a high level of performance, and is robust to noise factors. Robust design has been successfully implemented in Japan for designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. While it is commonly understood that quality is attacked by variability, it is the counterattack that makes the DoE approach different from other methods of QC. The discipline of off-line quality control enables the product development

or process engineer to do his or her job in a quality manner and at the same time produce a quality product or process at the lowest possible cost. Figure 6.4 illustrates the off-line and on-line QC process.

Figure 6.4 Off-line and on-line QC process

The three major boxes across the top of the figure define quality, relate design engineering to quality, and show how to engineer the production process for quality. These three items are the key to reducing performance variation by recognizing the existence of noises within the system and outside of the system. Once we recognize that some variation is within our control and some beyond it, we are in a position to compensate for variation in the design of the product or process. Quality loss is the financial loss imparted to society after a product is shipped. It is measured in monetary units and is related to quantifiable product characteristics. Two products that are designed to perform the same function may both meet specifications, but can impart different losses to society. Therefore, merely meeting specifications is a poor measure of quality. An example is the "we met specs and have zero defects" delusion.
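This point can be quantified with the standard decomposition of the expected quadratic loss, k*(variance + (mean - target)^2). The sketch below, a minimal illustration with invented numbers and function names (none of it from the original text), anticipates the Sony television comparison discussed next: a distribution clustered at the target versus one spread uniformly across the specification band, both entirely "in spec".

```python
import random
import statistics

def expected_loss(sample, target, k):
    """Average Taguchi loss of a sample: k * (variance + (mean - target)^2)."""
    mu = statistics.fmean(sample)
    var = statistics.pvariance(sample)
    return k * (var + (mu - target) ** 2)

random.seed(0)
target, k = 10.0, 50.0  # illustrative spec 10 +/- 1; $50 loss at the limit -> k = 50/1^2

# Process 1: output clustered tightly at the target (clipped to the spec band).
on_target = [min(max(random.gauss(target, 0.2), 9.0), 11.0) for _ in range(1000)]
# Process 2: output spread uniformly over the whole spec band.
uniform_in_spec = [random.uniform(9.0, 11.0) for _ in range(1000)]

print(f"clustered near target: ${expected_loss(on_target, target, k):5.2f} per unit")
print(f"uniform across spec  : ${expected_loss(uniform_in_spec, target, k):5.2f} per unit")
# Both samples meet the specification 100% of the time, yet the uniform one
# carries roughly k * (half-width)^2 / 3 ~ $16.7 average loss per unit versus
# about $2 for the clustered one: meeting specs is a poor measure of quality.
```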

The color density of Sony-Japan TV sets follows a normal distribution, with a large number in the A category and fewer in the B and C ranges. The Sony-USA distribution shows no non-conformities but follows a uniform distribution, and therefore more inferior-grade sets are produced. Further, it seems likely that D grade sets may have been made in the U.S., but that these sets were inspected out, accounting for the sharp edges of the distribution. The inspection and consequential scrap or rework adds even more loss than the customer dissatisfaction caused by the poor color quality (Barker, 1986). Taguchi defines quality loss via the loss function. He unites the financial loss with the functional specification through a quadratic relationship that comes from a Taylor series expansion. The quadratic takes the form of a parabola. If a more analytical loss function is available, the Taylor expansion approximation is unnecessary; however, we may use the quadratic if no other function is available. To apply the concept of the loss function, we need to know the cost of the loss at the limit of the functional specification. In the case of the TV sets, the loss may be the result of an adjustment or replacement of a part. Quality is infused into all aspects of a product's life; it becomes a philosophy that is integrated throughout the entire corporate structure. Even though it is difficult to quantify the parameters of a particular loss function, we should not give up our quality engineering efforts, but strive to apply the methods of quality engineering. The American Society for Quality Control (ASQC) has defined quality as "the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs." Taguchi, on the other hand, distinguishes between quality and features, arguing that adding features is not a way of improving the quality of a given product, despite what Madison Avenue would have us believe. For example, two pieces of paper can have different finish characteristics. A coated paper should not be considered of higher quality than plain 20 lb bond. A rag content paper has a different feature, but could be of lower quality than a plain bond paper of wood fiber if the quality characteristic we need for functionality is a lack of impregnated dirt fibers. A copy machine with a recirculating document handler should not be considered a higher quality device than one with a semiautomatic document handler. These machines have features aimed at different market segments and are priced accordingly. Variations from desired performance values (functional specifications) cause loss of quality. Customers become dissatisfied and service costs increase, for a direct loss. Indirect loss stems from loss of market share and from the unusual sales and marketing efforts needed to overcome uncompetitive quality. The ability to control performance variations comes from knowledge of the size

of this variation. We of course measure the standard deviation of the process via process capability studies. The ratio of the functional specification to the process spread tells us whether we are in a good practice zone or whether the process needs to be revised.

6.3.1 Noise sources

The undesirable and uncontrollable sources that can cause deviation from target values in a product's functional characteristics are called noise, and are divided into three types.

1. External noise: operating environment variables such as temperature and humidity, and conditions of use that disturb the functions of a product (human error).

2. Internal noise: changes that occur when a product deteriorates during storage, or by friction or wearing out of parts during use.

3. Unit-to-unit noise: differences between individual products because of manufacturing-process imperfections such as variation in machine settings.

The overall quality system should be designed to produce a robust product with respect to all noise factors. To achieve robustness, QC efforts must begin in product and process design (off-line QC) and must be continued through production operation (on-line QC). External and internal noise can be reduced most effectively at the R&D step. However, unit-to-unit noise can be handled in the overall off-line and on-line QC stages.

6.3.2 Key points in quality improvement planning

The variation of product quality characteristics from their target values should be reduced. The primary aim of quality improvement is to achieve a population distribution as close to the target as possible. To accomplish this, the Signal-to-Noise (SN) ratio is adopted (Box, 1988). Taguchi uses experimental designs as a tool to make products robust to noise factors, and to reduce the effects of variation on product and process quality characteristics. He especially uses constructed tables known as 'tables of orthogonal arrays', in which he allocates the noise factors to the 'outer array' and the design factors to the 'inner array' in parameter design. Classical applications of experimental design focused primarily on optimizing average product performance characteristics rather than considering effects on variation.

6.3.3 Signal-to-Noise

Since we need to control our performance characteristic (response variable) at both the mean level and the variation around this mean, it would be convenient to use an objective measure that combines both of these parameters in a single metric. Table 6.1 defines such a figure of merit, which Taguchi has called the signal-to-noise ratio, for

various types of performance characteristics. In its elemental form, the S/N is simply the ratio of the mean to the standard deviation. We may recognize this relationship as the inverse of the coefficient of variation (standard deviation/mean), which has been used extensively as a figure of merit in Western statistics. Whatever the type of quality or cost characteristic, the transformations are such that the S/N ratio is always interpreted in the same way: the larger the S/N ratio, the better. Taguchi has developed over 70 distinct signal-to-noise figure-of-merit metrics. Each of these is a customized measure of the performance characteristic in terms of location and dispersion. While many of these S/N metrics are unique to an industry or process, there are three S/N ratios that may be applied to a wide range of response variables; these three generic S/Ns are given in table 6.1.

Table 6.1 Useful Signal-to-Noise ratios

Type N: Nominal is best (dimensions, output voltage, etc.)

S/N_N = 10 log10 [(S_m - V_e) / (n V_e)]   (6.2)

where y_i is an observation, n is the number of observations, and

S_m = (Σ y_i)^2 / n,   V_e = [Σ y_i^2 - (Σ y_i)^2 / n] / (n - 1)

Type S: Smaller is better (noise, harmful material, contamination, etc.)

S/N_S = -10 log10 [(Σ y_i^2) / n]   (6.3)

Type B: Bigger is better (strength, power, etc.)

S/N_B = -10 log10 [(Σ (1/y_i^2)) / n]   (6.4)

If we examine type N (nominal is best), we observe that the numerator of the expression includes the sum of squares due to the mean (S_m) and the denominator has the variance (V_e). Therefore, S/N_N attains a maximum value when the mean is high and the variation is low. Should we have a performance characteristic that must be made low in value, S/N_N will not work, because the mean value goes the wrong way. Therefore, we utilize S/N_S for characteristics that must be minimized. Similar lines of reasoning were applied to develop S/N_B (bigger performance characteristic is better).
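The three generic ratios of table 6.1 are simple enough to state in a few lines of code. The sketch below is an illustrative implementation of equations 6.2 to 6.4; the observations are invented, not data from the case studies.

```python
import math

def sn_nominal_is_best(y):
    """Type N, equation 6.2: S/N_N = 10*log10((S_m - V_e) / (n * V_e))."""
    n = len(y)
    sm = sum(y) ** 2 / n                          # sum of squares due to the mean
    ve = (sum(v * v for v in y) - sm) / (n - 1)   # error variance
    return 10 * math.log10((sm - ve) / (n * ve))

def sn_smaller_is_better(y):
    """Type S, equation 6.3: S/N_S = -10*log10(mean of y^2)."""
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_bigger_is_better(y):
    """Type B, equation 6.4: S/N_B = -10*log10(mean of 1/y^2)."""
    return -10 * math.log10(sum(1 / (v * v) for v in y) / len(y))

# Invented observations of one response measured over four noise replicates.
y = [2.1, 2.0, 1.9, 2.0]
print(f"S/N_N = {sn_nominal_is_best(y):6.2f} dB")   # high mean, low spread -> large
print(f"S/N_S = {sn_smaller_is_better(y):6.2f} dB")
print(f"S/N_B = {sn_bigger_is_better(y):6.2f} dB")
```

In a parameter design study, each control factor combination (one row of the inner array) yields one such S/N value over its noise replicates, and the factor levels that maximize the appropriate ratio are selected.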

6.4 Statistical experiment design

Quality engineering using statistical experimental design adds the necessary substance to the up-front consideration of quality loss and the means to eliminate variation in products. Taguchi has two types of active engineering design beyond the selection of the system. The first finds the levels of the parameters that will infuse the least variation into the product, process, or service. This is called parameter design and is always done first in our engineering efforts. The second activity is used only if the variation in the product or process is beyond the tolerable limits. In this application of statistical experimental design, we make systematic changes of a tolerance magnitude to determine which of the factors contribute most to the variation in the end product. Instead of tightening all the tolerances in a system, this analysis tells us which tolerances to tighten and which tolerances can be eased. Of course, we could use process capability studies with models or Monte Carlo methods to optimize a process. However, none of these methods is as efficient as the orthogonal array based experiment design approach, which takes a small fraction of the possible experimental conditions and draws conclusions that identify the right conditions and tolerances for quality products. An example involves the butterfly, a small plastic part used in the carburetor of an I.C. engine. The plastic must withstand the solvent effects of fuel and the pressures of the return spring in the automatic choke mechanism. The current design for these parts is resulting in a rash of complaints and a considerable number of service-related activities. The goal is to find a system to prevent butterfly breakage using the robust design concept, since parameter design and tolerance design are key to the implementation of quality goals in product design and manufacturing.

6.4.1 Framework of experiment design

Experiments are carried out by researchers and engineers in all fields of study to compare the effects of several conditions or to discover something new. If an experiment is to be performed most efficiently, then a scientific approach to planning must be considered. The statistical design of experiments is the process of planning experiments so that appropriate data will be collected, the minimum number of experiments will be performed to acquire the necessary technical information, and suitable statistical methods will be used to analyze the collected data. The statistical approach to experimental design is necessary to draw meaningful conclusions from the data. Thus, there are two aspects to any experimental design: i) designing the experiment and ii) statistical analysis of the collected data. These two are closely related, since the method of statistical analysis

depends on the design employed. The recommended procedure for an experimental design is given in figure 6.5 and briefly explained below.

1. Statement of the experimental problem. It is necessary to develop a clear statement of the problem with objectives. A clear statement contributes to a better understanding of the phenomena under study and of the form of the final solution of the problem.

2. Understanding of the present situation. It is important to collect as much related past data for the experimental problem as possible, and to understand the present situation. It is useful to collect related information from the literature and from all concerned parties: engineering, quality assurance, manufacturing, marketing, operational personnel, and so on.

3. Choice of response variables. From the statement of the problem, an appropriate response variable should be selected. If necessary, multiple response variables may be chosen; in such a case, we should select not only yield but also utility. Thought must be given to how the response variables will be measured, and to the probable accuracy of such measurements.

Figure 6.5 Outline of experimental design procedure

4. Choice of factors and levels. The experimenter must select the independent variables (factors) which affect the chosen response variables. The values (levels) of

the factors to be used in the experiment should be carefully selected. Usually between two and five levels are appropriate, and the range of levels should be as large as possible within the region of experimental interest to the experimenter.

5. Selection of experimental design. This step is the backbone of the experimental design procedure. The experimenter must decide on an appropriate experimental design by considering the number of factors, the levels, all possible level combinations, the cost of the experiment and the time available. If there are too many factors, a fractional factorial design using orthogonal arrays is usually recommended. The experimenter should also decide on an appropriate number of replicates at each experimental condition (run) to guarantee the desired statistical accuracy, and on the order in which the experiments will be conducted (the method of randomization). A mathematical model for the experiment must also be proposed, so that a statistical analysis of the data may be performed.

6. Performing the experiments. This is the actual data-collection process. The experimenter should pay attention to the actual progress of the experiment to ensure that it is proceeding according to plan, and also pay attention to maintaining as uniform an experimental environment as possible.

7. Data analysis. Statistical methods such as analysis of variance and statistical estimation should be employed in analyzing the data from the experiment. It is desirable to obtain all possible information from this data analysis for the stated experimental problem.

8. Analysis of results and conclusions. Once the data set has been analyzed, the experimenter must draw physical inferences from the statistical results to evaluate their practical implications for the stated experimental problem. Then conclusions should be drawn for the stated problem.

9. Confirmation test. Before presenting the results to others and taking a practical course of action, the experimenter needs to carry out a confirmation test to evaluate the conclusions from the experiment.

10. Recommendations and follow-up management. The experimenter presents the results to others and takes any necessary action. In order to sustain the improvements suggested by the experiment, careful follow-up action is needed, such as standardization of operating conditions and the use of check sheets and control charts to evaluate the continuing implications of the experiment.

11. Planning of subsequent experiments. A further round of experiments is recommended if the experimental problem has not been completely solved. Experimentation is usually an iterative process, with one experiment solving the problem only partially and subsequent experiments dealing with the outstanding questions.

6.4.2 Classification of data types

Experimental data are usually divided into two types: discrete (attribute) and continuous (variable). Each of these is further divided into three categories (Taguchi, 1987). This classification is important in deciding on the number of replications that are necessary for the experiments, and then in determining a suitable method of data analysis. Discrete data: simple discrete, fixed marginal discrete and multi-discrete. Continuous data: simple continuous, multi-fractional continuous and multi-variable.

6.4.3 Classification of factors

The factors which are treated in experiments can be divided into two types: fixed and random. Fixed means that the factor levels are technically controllable and each level has some technical meaning; the levels of a fixed factor can also be reproduced and retested. Random means that the factor levels are not technically controllable and each level does not have any technical meaning; the levels of a random factor normally cannot be reproduced and retested. Fixed factors are subdivided into three classes:

1. Control factors: the design or process variables which are controllable and for which the designer may be interested in finding the level which is, in some sense, best. The major variables in an experiment, such as those related to temperature, pressure and time, are all control factors.

2. Indicative factors: factors that possess technical levels in the same way as control factors, but for which any notion of the best level may be meaningless. In a tire-wear experiment, for example, the position of the tire (front right, front left, back right and back left) is such an indicative factor.

3. Signal factors: factors that influence the average value but not the variability of a response. They are also called target-control factors.

Likewise, there are also three classes of random factor:

1. Block (group) factors: factors for which there are levels, but where there is no technical significance to such levels. Differences of geographic location, differences depending on the day, lot differences and differences between operators are block factors.

2. Supplementary factors: factors that comprise supplementary experimental or measured values, which result from recording the state of the experimental conditions. These factors are used as independent variables in covariance analysis.

3. Noise (error) factors: factors that have an influence over a response but cannot be controlled in actual applications. They are of three kinds: inner noise, outer noise and between-product noise.

6.4.4 Classification of experimental designs

There are numerous types of experimental design, classified according to the allocation of treatment (factor) combinations and the degree of randomization of the experiments. These classifications are outlined below, and the flowchart in figure 6.6 provides guidelines for the selection of experimental designs.

1. Factorial design. This is a design for investigating all possible treatment combinations formed from the factors under consideration. The order in which possible treatment combinations are selected is completely random. Single-factor, two-factor and three-factor factorial designs belong to this class, as do 2^k (k factors at 2 levels) and 3^k (k factors at 3 levels) factorial designs.

2. Fractional factorial design. This is a design for investigating a fraction of all possible treatment combinations formed from the factors under investigation. Once again, the order in which the treatment combinations are chosen is completely random. Designs using tables of orthogonal arrays, Plackett-Burman designs, Latin square designs (where factors are assigned to the rows and columns of a Latin square) and Graeco-Latin square designs are fractional factorial designs. This type of design is used when the cost of the experiment is high and the experiment is time-consuming.

3. Randomized complete block design, split-plot design and nested design. All possible treatment combinations are tested in these designs, but some form of restriction is imposed on randomization. A design in which each block contains all possible treatments, and the only randomization of treatments is within the blocks, is called the randomized complete block design. For an explanation of split-plot and nested designs, refer to Montgomery (1997) and John (1971), for instance.

Figure 6.6 Flow chart for selecting an experimental design

4. Incomplete block design. If every treatment is not present in every block of a randomized complete block design, it is an incomplete block design. This design is used when we may not be able to run all the treatments in each block because of a shortage of experimental apparatus or inadequate facilities. If each block contains the same number of treatments, and they are arranged so that every pair of treatments occurs together in the same number of blocks, the design is said to be balanced.

5. Response surface design and mixture design. This is a design whose objective is to explore a regression model to find a functional relationship between the response variable and the factors (independent variables) involved, and to find the optimal conditions of the factors. Central composite designs, rotatable designs, simplex designs, mixture designs and evolutionary operation (EVOP) designs belong to this class. Mixture designs are used for experiments in which the various components are mixed in proportions constrained to sum to 1. In a mixture experiment, the factors are the components or ingredients whose optimal proportions or levels are of interest; note that, because of the constraint just mentioned, the factor levels are not independent.

Most experimental designs used in industrial practice are covered by the above classification. Since there are many designs, it is sometimes difficult to choose an appropriate one in practice. (Readers who understand the Korean language are recommended to read Park, 1982; 1990; 1993. These books are quite useful for engineers and scientists who wish to employ experimental designs in their engineering and research activities.)

6.5 Robust design approach for cost and quality

The early design phase of a product/process has the greatest impact on life cycle cost and quality. Therefore, significant cost savings and improvements in quality can be realized by optimizing product/process designs. The three major steps in designing a quality product are system design, parameter design and tolerance design. System design is the process of applying scientific and engineering knowledge to produce a basic functional prototype design. The prototype model defines the configuration and attributes of the product undergoing analysis or development. The initial design may be functional, but it may be far from optimum in terms of quality and cost. Parameter design is an investigation conducted to identify the settings of the design/process parameters that optimize the performance characteristics and reduce the sensitivity of engineering designs to the sources of variation (noise). Parameter design requires some form of experimentation to evaluate the effect of noise factors on performance; it aims at a high level of performance under a wide range of conditions, i.e., a design that is robust to noise factors. Tolerance design is the process of determining tolerances around the nominal settings identified in the parameter design process. Tolerance design is required if robust design cannot produce the required performance without costly special components or high process accuracy. It involves tightening tolerances on parameters whose variability could have a large negative effect on the final system. Typically,

tightening tolerances leads to higher cost. Most American and European engineers focus on system and tolerance design to achieve performance. The common practice in product and process design is to base an initial prototype on the first feasible design (system design). Then the reliability and stability against noise factors are studied, and any problems are corrected by requesting costlier components with tighter tolerances (tolerance design). Experimenting with the design variables one at a time, or by trial and error until a first feasible design is found, is a common approach to design optimization. However, this approach can lead either to a very long and expensive time span for completing the design or to a premature termination of the design process due to budget and schedule pressures. The result in most cases is a product design which may be far from optimal. As an example, if the designer is studying 13 design parameters at 3 levels, studying all possible combinations of parameter values would require 1,594,323 (3^13) experimental configurations; this is the "full factorial" approach. Obviously, the time and cost involved in conducting such a detailed study during advanced design is prohibitive. In contrast, the robust design method provides the designer with a systematic and efficient approach for conducting experimentation to determine near-optimum settings of design parameters for performance and cost. As discussed in section 5.4, the robust design method uses orthogonal arrays (OA) to study the parameter space, usually containing a large number of decision variables, with a small number of experiments. Based on design of experiments theory, Taguchi's orthogonal arrays provide a method for selecting an intelligent subset of the parameter space, and their use significantly reduces the number of experimental configurations. Orthogonal arrays are not unique to Taguchi; they were discovered considerably earlier, in the 1930s, by R. A. Fisher and L. H. C. Tippett in England. However, Taguchi has simplified their use by providing tabulated sets of standard orthogonal arrays and corresponding linear graphs to fit a specific project. A typical L9(3^4) orthogonal array is shown in table 6.2.

Table 6.2 L9(3^4) Orthogonal Array

Run   A   B   C   D
 1    1   1   1   1
 2    1   2   2   2
 3    1   3   3   3
 4    2   1   2   3
 5    2   2   3   1
 6    2   3   1   2
 7    3   1   3   2
 8    3   2   1   3
 9    3   3   2   1
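The rows above are the standard form of the L9 array (the row listing was reconstructed from the canonical table). Assuming that form, the sketch below first checks the balance property explained in the next paragraph (every pair of columns contains each pair of levels exactly once), then runs a toy parameter design study: the inner control array is crossed with a few noise replicates standing in for an outer array, and the run with the best nominal-is-best S/N ratio (equation 6.2) is selected. The response model, its coefficients and the noise values are invented purely for illustration.

```python
import math
import random
from itertools import combinations

# Standard L9(3^4) array from table 6.2: 9 runs, 4 three-level factors.
L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]

# Balance property: for every pair of columns, each of the 9 possible
# level pairs (1,1)..(3,3) occurs exactly once across the 9 runs.
for c1, c2 in combinations(range(4), 2):
    assert len({(row[c1], row[c2]) for row in L9}) == 9
print("All 6 column pairs of L9 are balanced (mutually orthogonal).")

def response(levels, z):
    """Hypothetical process response: the mean depends on factors B, C, D,
    while factor A sets the sensitivity to the noise variable z."""
    a, b, c, d = levels
    mean = 10 + 0.8 * b - 0.5 * c + 0.2 * d
    return mean + {1: 1.5, 2: 0.6, 3: 0.2}[a] * z

def sn_nominal(y):
    """Nominal-is-best S/N ratio, equation 6.2 (repeated from section 6.3.3)."""
    n = len(y)
    sm = sum(y) ** 2 / n
    ve = (sum(v * v for v in y) - sm) / (n - 1)
    return 10 * math.log10((sm - ve) / (n * ve))

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4)]  # outer-array stand-in
best = max(L9, key=lambda lv: sn_nominal([response(lv, z) for z in noise]))
print("Most robust settings (A, B, C, D):", best)  # level 3 of A damps noise most
```

Only 9 of the 81 possible control settings are evaluated, yet the search still identifies the noise-damping level of factor A, which is the essence of parameter design with orthogonal arrays.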

In this array, the columns are mutually orthogonal. That is, for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. Here there are four factors A, B, C and D, each at three levels. This is called an L9 design, the 9 indicating the nine rows, configurations or prototypes to be tested, with the test characteristics of each defined by its row of the table. The number of columns of an OA represents the maximum number of factors that can be studied using that array. Note that this design reduces 81 (3^4) configurations to 9: using the L9 OA means that 9 experiments are carried out, in place of the 81 possible control factor combinations, in search of the combination that gives the near-optimal mean and the near-minimum variation about this mean. Some of the commonly used orthogonal arrays are given in table 6.3. As is evident from the table, the savings in testing are greater for the larger arrays.

Table 6.3 Commonly used OAs with number of equivalent full factorial experiments

Orthogonal Array   Factors and Levels        No. of full factorial experiments
L4                 3 factors at 2 levels     8
L8                 7 factors at 2 levels     128
L9                 4 factors at 3 levels     81
L16                15 factors at 2 levels    32,768
L27                13 factors at 3 levels    1,594,323
L64                21 factors at 4 levels    4.4 x 10^12

6.5.1 Structure of orthogonal arrays

Many designed experiments use matrices called orthogonal arrays for determining which combinations of factor levels to use for each experimental run and for analyzing the data. An orthogonal array is a fractional factorial matrix which assures a balanced comparison of the levels of any factor or interaction of factors. It is a matrix of numbers arranged in rows and columns, where each row represents the levels of the factors in each run. The array is called orthogonal because all columns can be evaluated independently of one another.

6.5.2 Classification of orthogonal arrays

There are several orthogonal arrays which are commonly used in practice. These OAs can be classified into standard orthogonal arrays, extended orthogonal arrays, mixed orthogonal arrays and column-merged orthogonal arrays. In this section we consider each of these in turn.

STANDARD ORTHOGONAL ARRAYS

The major two-level standard orthogonal arrays are L4(2^3), L8(2^7), L16(2^15), L32(2^31) and L64(2^63). The major three-level standard orthogonal arrays are L9(3^4), L27(3^13) and L81(3^40). The four-level and five-level orthogonal arrays L64(4^21) and L25(5^6) are also classified as standard orthogonal arrays.

EXTENDED ORTHOGONAL ARRAYS

When there are too many factors to be assigned and interactions can be ignored, the standard orthogonal arrays can be extended to have more columns so as to accommodate more factors. These arrays are called extended orthogonal arrays. Arrays such as L12(2^11) and L27(3^22) are typical extended orthogonal arrays. In fact, all Plackett-Burman designs, such as L12(2^11), L20(2^19) and L24(2^23), that do not belong to the standard orthogonal arrays may be considered extended orthogonal arrays. L36(3^13) is also classified as an extended orthogonal array.

MIXED ORTHOGONAL ARRAYS

Orthogonal arrays which contain columns at two different numbers of levels are called mixed orthogonal arrays. The ones most often used are L18(2^1 x 3^7), L32(2^1 x 4^9), L36(2^11 x 3^12), L36(2^3 x 3^13), L50(2^1 x 5^11) and L54(2^1 x 3^25). These arrays are used when there are many factors at two different numbers of levels, and interactions can be ignored.

COLUMN-MERGED ORTHOGONAL ARRAYS

The orthogonal arrays which can be constructed from the standard orthogonal arrays by the column merging method are called column-merged orthogonal arrays. These are L8(4^1 x 2^4), L16(4^1 x 2^12), L16(4^2 x 2^9), L16(4^3 x 2^6), L16(4^4 x 2^3), L16(4^5), L16(8^1 x 2^8), L32(4^1 x 2^28), L32(4^2 x 2^25) and L27(9^1 x 3^9).

In addition to the multi-level arrangements described above, other experimental techniques can be applied using OA experiments. Four of these techniques are the dummy-level technique, combination design, branching design and the idle column method; these techniques make the OAs more powerful in experimental practice. By making use of orthogonal arrays, the robust design approach improves the efficiency of generating the information necessary to design systems that are robust to variations in manufacturing processes and operating conditions. As a result, development time can be shortened and R&D costs can be reduced considerably. Furthermore, a near-optimum choice of parameters may result in wider tolerances, so that low cost components and production processes can be used.

6.6 Conclusion

Since around the year 2000, the use of Taguchi's quality design concepts has been increasing worldwide. Many companies are now realizing that new tools are required for survival in the increasingly competitive world-class market. Thus, it is expected that the application of these methods will become widespread as low life cycle cost, operability and quality issues replace performance as the driving design criteria.