Capability on Aggregate Processes



Capability on Aggregate Processes CVJ Systems AWD Systems Trans Axle Solutions edrive Systems

The Problem

(Diagram: one horizontal machining center feeding two fixtures, Fixture 1 and Fixture 2.)

With one machine and a couple of fixtures, it's a pretty easy problem. You do a study on each fixture and show both are capable. That's somewhere between 60 and 200 parts measured, depending on requirements. But as the copies of the process grow, so does the study. If we increase to 6 machines with 2 fixtures each, that's between 360 and 1,200 measurements, IF we agree that fixtures aren't switching around between machines. Add in more operations, and more opportunities for mixing, and this problem grows exponentially. If we study the AGGREGATE of this output (assuming there is representation from each subprocess), is this good enough?

The Problem, Formally Stated

There are two components to a capability study: centeredness (mean location) and spread (variance/standard deviation). We need to examine the effects of both.

1) Case Study: If I have 3 parallel processes that are capable (high Cp) but not centered (low Cpk), and I randomly draw parts from the AGGREGATE of these processes, what does the resulting study look like? We will create normal distributions with good Cp values (at least 2.5), center one up (Cpk about 2.5), and put the other two at the upper and lower limits (Cpk close to 1). All distributions will be within spec on purpose: if they were all over the place, it's clear the aggregate would be bad. We don't want to test extremes; we want to test borderline cases. The question is: given this, will the aggregate show a problem?

2) Case Study: If I have 3 parallel processes that are centered (Cpk ~ Cp), two with very capable distributions (low variance, good Cp) and a third with a large variance, and I randomly draw parts from the AGGREGATE of these processes, what does the resulting study look like? We will generate 3 centered processes (Cpk = Cp). Two distributions will be tight (Cpk = Cp near 2.5) and one noisy (Cpk = Cp near 1.33). Again, we don't want to test extremes; we want to test a reasonable case and examine the effects. Same question: will we see a problem in the aggregate?

The Problem: Steps to Making the Models

1) Use Excel to generate random normal data, specifying a mean and a standard deviation. This simulates the output of the individual processes.
2) Generate thousands of points, simulating a process run at these settings.
3) Grab a subset of these points, as if we were grabbing production parts off the end of the process for a study. We will grab 40 parts at random out of the thousands, with subgrouping.
4) Calculate the capabilities of these experimental draws and confirm they are close to the desired inputs.
5) Grab a subset from these selected parts at random, as if we then pulled aggregate parts.
6) Calculate the capabilities of these draws to see what a random study on the aggregate output shows us.
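The deck builds these models in Excel; for readers who want to reproduce them in code, here is a minimal Python sketch of steps 1 through 4. The spec limits, mean, and sigma match Case Study 1 below; the helper name cp_cpk is ours, not from the deck.

import numpy as np

rng = np.random.default_rng(seed=1)

USL, LSL = 30.02, 29.98   # spec limits used in Case Study 1

def cp_cpk(x, usl=USL, lsl=LSL):
    # Overall Cp/Cpk, with sigma estimated from the sample itself.
    mu, sigma = x.mean(), x.std(ddof=1)
    return (usl - lsl) / (6 * sigma), min(usl - mu, mu - lsl) / (3 * sigma)

# Steps 1-2: simulate thousands of parts from one process at chosen settings.
process = rng.normal(loc=29.99, scale=0.0024, size=5000)

# Step 3: grab 40 parts (8 subgroups of 5) as if pulled off the line.
study = rng.choice(process, size=40, replace=False)

# Step 4: confirm the draw roughly reproduces the intended capability.
print(cp_cpk(study))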

Case Study 1: Three capable distributions, not centered.

Let's assume we have 3 machines, each trying to make an inside diameter of 30 mm. We do a separate capability study on each. What happens if we take all the samples, mix them up, and do a capability study on the aggregate, essentially losing traceability to each of the 3 machines?

                    Set 1    Set 2    Set 3
Name                Mach 1   Mach 2   Mach 3
Upper Spec Limit    30.02    30.02    30.02
Lower Spec Limit    29.98    29.98    29.98
n                   40       40       40
mean                29.99    30.00    30.01
std dev             0.0023   0.0024   0.0026
Cp                  2.86     2.76     2.57
Cpk                 1.15     2.71     0.97

Details on the distributions: Notice the standard deviations are essentially all the same (green arrow): the processes all have the same noise. The random distributions were all generated with the same standard deviation as an input, and the calculated sigma of each study confirms it, close to the input value (0.0024) but still random. Notice the Cps (purple arrow): because they all have the same noise, the Cps are all roughly the same, as they should be. Now examine the Cpks (red arrow): only the centered process has Cpk ~ Cp. The other two have Cpk near 1 because those processes sit near the limits; their means were intentionally set there when the random data was generated. A few thousand points were generated, and 40 (8 subgroups of 5) were grabbed out of all these random points. Now, if we grab 40 points at random out of THESE distributions and do a study, what does the aggregate look like?

Case Study 1: Three capable distributions, not centered.

The final result looks like this.

                    Set 1    Set 2    Set 3    Set 4
Name                Mach 1   Mach 2   Mach 3   123 Comb
Upper Spec Limit    30.02    30.02    30.02    30.02
Lower Spec Limit    29.98    29.98    29.98    29.98
n                   40       40       40       40
mean                29.99    30.00    30.01    30.00
std dev             0.0023   0.0024   0.0026   0.0097
Cp                  2.86     2.76     2.57     0.69
Cpk                 1.15     2.71     0.97     0.67

The histogram bars from the root distributions (blue, red, green) have been removed for clarity; the purple histogram shown comes from the random pulls of the root distributions. You can sort of tell from the 3 higher bars in the purple histogram that three independent distributions generated this data. Notice that in the combination the Cp and Cpk are low (gold box), how high the standard deviation is (green box) compared to the three source distributions, and how non-normal the resulting aggregate distribution is. This random pull was 12 parts from Mach 1, 18 from Mach 2, and 10 from Mach 3. (The model was run repeatedly; results were similar.) In this case, source distributions with different centers had a clearly negative effect on the aggregate distribution. This suggests that if an aggregate distribution is capable with respect to centeredness, the underlying distributions are also capable in that respect.
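Here is a sketch of this aggregate draw under the same assumptions (three machines with the table's means and a common sigma); the exact indices will wander with the random seed, but the collapse of the aggregate Cp/Cpk is robust.

import numpy as np

rng = np.random.default_rng(seed=2)
USL, LSL = 30.02, 29.98

def cp_cpk(x):
    mu, s = x.mean(), x.std(ddof=1)
    return (USL - LSL) / (6 * s), min(USL - mu, mu - LSL) / (3 * s)

# Three capable processes centered low, on target, and high (the table's means).
machines = [rng.normal(m, 0.0024, 1000) for m in (29.99, 30.00, 30.01)]

# Lose traceability: pool everything and pull 40 parts at random.
aggregate = rng.choice(np.concatenate(machines), size=40, replace=False)

for m in machines:
    print("machine:", [round(v, 2) for v in cp_cpk(m)])        # high Cp, mixed Cpk
print("aggregate:", [round(v, 2) for v in cp_cpk(aggregate)])  # both collapse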

Case Study 1: The Thought Experiment

Let's recap: we had 3 root distributions, all very capable, but two of them needing centering. And the aggregate study showed us a very wide, but still centered, distribution we would have interpreted as a fail, where the sub-processes would have been a definite pass (for one of them) and possibly a pass with centering on the other two.

Key Points:
1) The result makes sense. Imagine all three aligned in the center; as you drag two of them left and right from center, you can see in your mind the total distribution getting wider but remaining centered.
2) With only 3 distributions, it's easy to imagine, but what happens if we have more? What if we had 20 sub-processes, 19 of them a bit right of nominal (perhaps well set up, accounting for tool wear) and the 20th flubbed, hovering at the lower limit? What would be the effect?

Case Study 1: The Thought Experiment

It might look something like this. (Sketch: 19 distributions collected well to the right of nominal, one poorly set-up process making a distribution down near the lower limit, with Lower Limit, Nominal, and Upper Limit marked; the pink curve is the resulting aggregate.)

Key Point: The more parallel processes that contribute to the main distribution, the less you will statistically notice the stray process.

A resulting aggregate distribution may look like the pink sketch: not a wide distribution, per se, but one showing two peaks. It may even have an acceptable Cp or Cpk. This is because the samples are heavily weighted toward the good distributions (19 good vs. 1 outlying process), and the effect would be even more muted as more good processes were added, because of this weighting.

Now, 100 parallel processes aren't seen too often in manufacturing, but 40 are (multi-cavity injection molding tools come to mind). There are a few takeaways:
1) Were one to attempt an aggregate experiment, one would have to ensure adequate representation from each sub-process (NOT random sampling). You would want 5 parts from each process at a minimum, and perhaps adjust quantities based on where the means of the initial 5 draws fell on a histogram.
2) It is unlikely the raw Cp/Cpk numbers alone would be enough to evaluate the results. You would definitely want to plot the results and convince yourself the numbers made sense, especially looking for multiple peaks, something you would NOT expect from subprocesses that were identical.
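For the curious, here is a quick simulation of the 19-vs-1 thought experiment. All settings are illustrative, not from the deck: 19 means slightly above nominal, one stray near the lower limit, and the Case Study 1 spec limits.

import numpy as np

rng = np.random.default_rng(seed=3)
USL, LSL, sigma = 30.02, 29.98, 0.0024

# 19 processes a bit above nominal, one stray near the lower limit.
# On its own the stray has Cpk = (29.985 - 29.98) / (3 * 0.0024) ~ 0.7.
means = [30.003] * 19 + [29.985]
pool = np.concatenate([rng.normal(m, sigma, 1000) for m in means])

sample = rng.choice(pool, size=40, replace=False)
mu, s = sample.mean(), sample.std(ddof=1)
print(round(min(USL - mu, mu - LSL) / (3 * s), 2))
# The aggregate Cpk often lands near or above common hurdles because the
# stray contributes only ~1/20 of the draws; more good processes, more masking.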

Case Study 2: One process is noisy.

OK. But what if 2 of the machines are good and one is really noisy? Let's keep the same parameters, but this time Machines 4 and 5 are well centered, and Machine 6 is centered but noisy; a cutter is loose. Will this still show up in an aggregate capability study?

                    Set 1    Set 2    Set 3
Name                Mach 4   Mach 5   Mach 6
Upper Spec Limit    30.02    30.02    30.02
Lower Spec Limit    29.98    29.98    29.98
n                   40       40       40
mean                30.00    30.00    30.00
std dev             0.0027   0.0026   0.0051
Cp                  2.51     2.52     1.31
Cpk                 2.47     2.47     1.24

The first two machines are running well: well centered, tight distributions (blue and maroon). The third machine is noisy (green). It is ALMOST capable (to a hurdle of 1.33); if we were judging Mach 6 alone, we would reject it and tell the supplier to work on the noise. But let's assume we have no knowledge of which machine a part came from, and we again take 40 pieces at random from this aggregate population. What would we get? Will the aggregate be out? Think about the answer before you continue.

Case Study 2: One process is noisy.

And here is the result.

                    Set 1    Set 2    Set 3    Set 4
Name                Mach 4   Mach 5   Mach 6   456 Comb
Upper Spec Limit    30.02    30.02    30.02    30.02
Lower Spec Limit    29.98    29.98    29.98    29.98
n                   40       40       40       40
mean                30.00    30.00    30.00    30.00
std dev             0.0027   0.0026   0.0051   0.0043
Cp                  2.51     2.52     1.31     1.54
Cpk                 2.47     2.47     1.24     1.49

The aggregate distribution (purple) is worse than the two good machines (4 and 5) because we are including data from the bad machine (6), but the two good machines helped the output of the bad one. So the aggregate is a pass even though one of the subprocesses is NOT. If we had studied each machine individually, we would have caught the bad apple; collectively, it did NOT ruin the bunch. It made the aggregate worse, but not enough to fail the study. The likelihood of drawing a tail reading from the bad (green) Mach 6 distribution is cut to a third by adding the two good machines into the mix, so a similar leveraging effect works here, too. With more and more good processes, the statistical significance of the bad apple would go down. This does not mean the bad apple is running well; it is just less likely that we would detect it.
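The dilution claim can be checked on the back of an envelope. This sketch uses scipy's normal CDF with the Case Study 2 settings and assumes an equal three-way mix of machines.

from scipy.stats import norm

USL, LSL = 30.02, 29.98

# Chance that a part from Mach 6 alone falls outside a spec limit.
p_tail_bad = (norm.cdf(LSL, loc=30.00, scale=0.0051)
              + norm.sf(USL, loc=30.00, scale=0.0051))

# In an equal three-way mix, only 1/3 of random draws come from Mach 6,
# and the two tight machines contribute essentially no tail parts.
p_tail_mix = p_tail_bad / 3

print(p_tail_bad, p_tail_mix)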

Resulting Effects

When one studies an aggregate sampling from parallel processes, the ability to detect a stray process in the aggregate depends on how extreme the errant process is (in either process noise or mean shift) and how many processes are in parallel. Picture a chart with "Number of parallel processes" (few to many) on one axis and "Difference of stray process to the rest of the group" (slight to extreme) on the other; ease of detection varies across its four corners:

Extreme difference, few processes: with two processes having very different means, you would easily notice a two-peaked distribution. Easy to detect.

Extreme difference, many processes: this is the danger corner. MAYBE you detect this; enough parallel processes that are good can very much mask one stray one in an aggregate study.

Slight difference, few processes: the trivial solution. Two processes with no detectable difference between them is exactly what you want.

Slight difference, many processes: also trivial. If I keep duplicating a process and cannot detect a difference between the copies, that is a good thing: multiple processes, statistically identical.

In other words: the more extreme the response of the stray process, the more likely you are to detect it, and the fewer processes in the study, the more likely you are to detect it. What this means is that you cannot necessarily say: "I have a passing capability study on the aggregate of my processes, therefore all my sub-processes are OK."

The Problem (Again)

We are still faced with the main problem. Given we have demonstrated that it is possible for an aggregate study to mask a stray process, do we then have to do 30+ piece studies on every combination? The answer is no, IF we do a structured experiment. If we treat the output of each subprocess as its own subgroup, we can still detect a stray process with an aggregate study, but we have to approach it in a structured, stratified way.

First, Here's a Good Study

Each trial is a different machine-and-fixture combination. We grab 5 parts from each combination and keep them controlled.

Trial   N1      N2      N3      N4      N5
1       20.003  19.986  20.001  19.995  20.004
2       19.998  20.019  19.999  20.017  19.988
3       19.995  19.997  20.003  19.998  19.997
4       20.005  19.996  20.012  20.009  20.009
5       20.008  19.993  19.989  19.994  20.002
6       20.019  19.985  20.003  19.997  20.000
7       19.994  20.005  19.998  19.994  20.012
8       20.000  19.992  19.993  19.998  20.004
9       20.002  19.998  20.002  20.013  20.000
10      20.000  19.988  19.995  20.005  20.000
11      20.007  20.010  19.993  20.010  19.997
12      20.016  19.998  20.009  19.995  19.989
13      19.999  20.001  20.003  20.000  19.998
14      20.009  20.011  20.000  19.990  20.008
15      20.008  20.008  20.001  19.995  20.002
16      19.985  20.001  20.013  20.008  19.992
17      19.995  20.008  20.001  20.004  19.986
18      19.995  19.997  19.999  19.995  20.007
19      20.005  19.998  19.991  20.003  19.993
20      20.003  20.003  19.996  19.999  20.001

We can generate some data. Let's assume we want a normal distribution of a feature at 20 ± 0.05, and we want Cp ~ Cpk = 2.5. We can generate random normal data with µ = 20 and σ = 0.00667 and get such a spread. Here are the results:

Capability Data
Ppk = 2.212   Pp = 2.233
Cpk = 2.305   Cp = 2.327

This is a little shy of our target Cp of 2.5, but good enough. It is all centered up. What happens if one of the processes strays? Let's take trial 5 and move its mean so that it is JUST capable (mean at the lower limit plus 3σ, Cpk = 1.0).

Process 5 Nudged

Each trial is a different machine-and-fixture combination; we grab 5 parts from each and keep them controlled.

Trial   N1      N2      N3      N4      N5
1       20.003  19.986  20.001  19.995  20.004
2       19.998  20.019  19.999  20.017  19.988
3       19.995  19.997  20.003  19.998  19.997
4       20.005  19.996  20.012  20.009  20.009
5       19.978  19.963  19.959  19.964  19.972
6       20.019  19.985  20.003  19.997  20.000
7       19.994  20.005  19.998  19.994  20.012
8       20.000  19.992  19.993  19.998  20.004
9       20.002  19.998  20.002  20.013  20.000
10      20.000  19.988  19.995  20.005  20.000
11      20.007  20.010  19.993  20.010  19.997
12      20.016  19.998  20.009  19.995  19.989
13      19.999  20.001  20.003  20.000  19.998
14      20.009  20.011  20.000  19.990  20.008
15      20.008  20.008  20.001  19.995  20.002
16      19.985  20.001  20.013  20.008  19.992
17      19.995  20.008  20.001  20.004  19.986
18      19.995  19.997  19.999  19.995  20.007
19      20.005  19.998  19.991  20.003  19.993
20      20.003  20.003  19.996  19.999  20.001

We know the lower limit in our case study (19.95) and we know the σ that generated the data was 0.00667, so if we set the mean of process 5 alone to 19.95 + 3 × 0.00667 = 19.97001, this is what we get:

Before:
Ppk = 2.212   Pp = 2.233
Cpk = 2.305   Cp = 2.327

With shifting:
Ppk = 1.567   Pp = 1.599
Cpk = 2.280   Cp = 2.327

The red arrows indicate process 5. It is still capable: one stray process riding right at the limit, but still in spec. You can almost see the effect in the histogram, but it is obvious in the run chart. The recommended course of action would be to pass the processes, BUT machine/fixture 5 needs a separate study.

Process 5 Nudged (continued)

(Same stratified data and capability results as the previous slide.)

Another key point: the shift is more detectable in Pp/Ppk than in Cp/Cpk, because the Cp/Cpk calculations use within-subgroup variation and are therefore less sensitive to subgroup mean shifts. And remember, all these distributions are capable; if a subprocess were badly errant, this method would have an excellent chance of detecting it.
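Here is a minimal sketch of that sensitivity difference, assuming the standard within-subgroup estimate sigma = Rbar/d2 (d2 = 2.326 for subgroups of 5). The generated data mimics, but is not, the deck's dataset.

import numpy as np

rng = np.random.default_rng(seed=5)
USL, LSL = 20.05, 19.95

# 20 subgroups of 5, then nudge subgroup 5's mean down to 19.97001.
subgroups = rng.normal(20.0, 0.00667, size=(20, 5))
subgroups[4] += 19.97001 - 20.0

mu = subgroups.mean()
overall_sigma = subgroups.std(ddof=1)                    # feeds Pp / Ppk
within_sigma = np.ptp(subgroups, axis=1).mean() / 2.326  # Rbar/d2, feeds Cp / Cpk

# The shifted subgroup widens the overall sigma (Ppk drops) but barely
# touches the within-subgroup sigma (Cpk stays high).
for name, s in [("Pp/Ppk", overall_sigma), ("Cp/Cpk", within_sigma)]:
    print(name, round((USL - LSL) / (6 * s), 2),
          round(min(USL - mu, mu - LSL) / (3 * s), 2))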

Why a Stratified Study?

Here is a random study drawn from all processes, with no stratification; each trial is no longer a single machine and fixture. What was process 5 is highlighted, sprinkled throughout the table.

Trial   N1      N2      N3      N4      N5
1       20.003  19.986  20.001  19.995  20.004
2       19.998  20.019  19.959  20.017  19.988
3       19.995  19.997  20.003  19.998  19.997
4       20.005  19.996  20.012  20.009  20.009
5       20.009  20.001  19.999  19.995  19.972
6       20.019  19.985  20.003  19.997  20.000
7       19.994  20.005  19.998  19.994  20.012
8       20.000  19.992  19.993  19.998  20.004
9       20.002  19.998  20.002  20.013  20.000
10      20.000  19.988  19.995  20.005  20.000
11      20.007  20.010  19.993  20.010  19.997
12      20.016  19.998  20.009  19.964  19.989
13      19.999  20.001  20.003  20.000  19.998
14      19.978  20.011  20.000  19.990  20.008
15      20.008  20.008  20.001  19.995  20.002
16      19.985  19.963  20.013  20.008  19.992
17      19.995  20.008  20.001  20.004  19.986
18      19.995  19.997  19.999  19.995  20.007
19      20.005  19.998  19.991  20.003  19.993
20      20.003  20.003  19.996  19.999  20.001

If we just took all these samples at random, the parts from machine/fixture 5 would be sprinkled throughout the data, similar to what is shown in this table.

Before:
Ppk = 2.212   Pp = 2.233
Cpk = 2.305   Cp = 2.327

With shifting (stratified):
Ppk = 1.567   Pp = 1.599
Cpk = 2.280   Cp = 2.327

Random pull (NOT stratified):
Ppk = 1.567   Pp = 1.599
Cpk = 1.779   Cp = 1.816

This hides the mean shift. The process looks less capable overall, and you lose sight of the fact that there IS a problem with process 5. If process 5 were actually NOT capable, you might pass this study as having a few outliers. The conclusion: stratification is key!

A Noisy Process

As opposed to a mean shift, a stray process that is merely noisy is harder to detect, because we only have 5 samples from each process. Below left is the original case study (good, Cpk target of 2.5). The middle column has subgroup 5 regenerated with a target Cp/Cpk of 1.33, and the right column has the same data but with subgroup 5 targeted at Cp/Cpk of 1.00.

Original:
Ppk = 2.212   Pp = 2.233
Cpk = 2.305   Cp = 2.327

Subgroup 5 at Cp/Cpk ~ 1.33:
Ppk = 2.073   Pp = 2.117
Cpk = 2.205   Cp = 2.252

Subgroup 5 at Cp/Cpk ~ 1.00:
Ppk = 2.085   Pp = 2.115
Cpk = 2.178   Cp = 2.209

The issue isn't detectable in the indices, histograms, or run charts, but it IS detectable in the R-chart, where subgroup 5's range stands out as suspicious. Again the conclusion: an aggregate study is possible IF it is structured and you are critical of the results. If this data were from ONE process, it would be acceptable.

Keys to Success

An aggregate study can be successful if:
1) It is controlled and stratified, with each subgroup representing at least 5 parts from a single subprocess, kept together.
2) You LOOK AT THE DATA and understand what it is trying to tell you. You cannot just look at the capability indices; the histogram, and especially the run and range charts, are crucial to detecting an errant process.
3) You are willing to investigate errant data points. In this example, we are proving out 20 parallel processes with 100 measurements. Don't balk at having to do an independent study on process 5; it looks very suspicious and warrants it.
4) You are reasonably capable overall. This is plain confidence-interval logic: if the run chart above were using more of the tolerance, that alone would justify doing a full study on a couple of machine/fixture combinations.

Steps to a Successful Aggregate Study

(Sketch: 5 parts from the first machine/fixture form subgroup 1; n=5 for subgroup 2, n=5 for subgroup 3, and so on.)

1) Create a structured, stratified experiment. Guidelines: 5 parts in each subgroup is the minimum. If that doesn't amount to many parts (with 2 machines it would be only 10), increase the subgroup size until the total number of parts is at least 40; 100 would be better. The minimum of 5 per subgroup must be maintained. If there were 100 processes in parallel, that's 500 total parts. (A sizing sketch follows below.)
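As a sketch of that sizing guideline (the helper name and its defaults are ours, not from the deck):

def subgroup_size(n_processes, min_per_subgroup=5, total_floor=40):
    # Grow the (equal) subgroup size until the whole study has enough parts.
    size = min_per_subgroup
    while n_processes * size < total_floor:
        size += 1
    return size, n_processes * size

print(subgroup_size(20))    # (5, 100): the worked example in this deck
print(subgroup_size(2))     # (20, 40): few processes, bigger subgroups
print(subgroup_size(100))   # (5, 500): many processes, 500 parts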

Example 1: Steps to a Successful Aggregate Study

2) Conduct the stratified aggregate study, maintaining subgroup organization, and pay close attention to the Xbar (run) chart and the R chart.

This example would be ideal: a nice, tight grouping. Looking at it, you are pretty confident all the processes are performing the same, and from this you could safely stop checking. The overall Cp is high and all sub-processes are within the control limits. You also want to compare the calculated range control limit of the aggregate (0.037) to the total tolerance (0.1); here the range limit uses a bit more than a third of the tolerance. Half would be a red flag. Remember, in the aggregate we have most likely grabbed parts within 1 sigma of the mean of each process.

Example 2: Steps to a Successful Aggregate Study

2) Conduct the stratified aggregate study, maintaining subgroup organization, and pay close attention to the Xbar (run) chart and the R chart.

This example is noisy. The aggregate has a bad Cp/Cpk: too noisy to draw a conclusion from the aggregate alone. But the problem is hard to detect on the run chart, where all the subgroup means sit within the control limits; problem noise is easier to detect in the range chart. Red flag: the range control limit is 0.074, which is more than half the tolerance. You would want to take the noisiest process (#20, in this case) and do a full study; more would be better. You should also pick the process on the run chart farthest from nominal and check that too. (A sketch of the range-limit check follows below.)
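The red-flag check can be made concrete. This sketch assumes the usual R-chart constant D4 = 2.114 for subgroups of 5, with Rbar values chosen to reproduce the two range control limits quoted in these examples.

def range_chart_flag(rbar, tolerance, d4=2.114):
    # R-chart upper control limit and its share of the total tolerance.
    ucl_r = d4 * rbar
    return round(ucl_r, 3), round(ucl_r / tolerance, 2)

print(range_chart_flag(rbar=0.0175, tolerance=0.1))  # ~(0.037, 0.37): Example 1
print(range_chart_flag(rbar=0.035, tolerance=0.1))   # ~(0.074, 0.74): red flag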

Example 3: Steps to a Successful Aggregate Study

2) Conduct the stratified aggregate study, maintaining subgroup organization, and pay close attention to the Xbar (run) chart and the R chart.

This example has one errant process, and you would want to conduct another study focused on it. Note that the errant subgroup falls outside the calculated control limits (and there may be more than one errant process). The range chart looks good, as it should: the spread of each subprocess is the same. This reiterates that you need to examine BOTH charts.

Example 4: Steps to a Successful Aggregate Study

2) Conduct the stratified aggregate study, maintaining subgroup organization, and pay close attention to the Xbar (run) chart and the R chart.

Here a number of points fall outside the control limits. Remember, these are all different processes, which means they are not all centered. The best solution is to center them up and repeat the aggregate study (a centering fix is the easy fix). Worst case, take the 3 subprocesses FARTHEST from center and do a full study on each of them; if they are capable, all of them should be. The range chart looks good in this example as well, and it should: this is a centering problem.

Steps to a Successful Aggregate Study

3) From the aggregate study, either conclude everything is OK, OR conduct your sub-process study or studies.
4) Draw your conclusion.

Final remark: There is no substitute for understanding what the graphs and data are telling you. Take the time to think about how well your model reflects your processes. One thing is clear: this approach is investigative, which means you need to conduct an intentional experiment, not a random one.