

 Reference: AIAG MSA Manual


What is the difference between "% contribution" and "% study" in terms of GRR performance?

% Contribution is the ratio of the GRR variance to the total study variance, multiplied by 100. % Study is the ratio of the GRR standard deviation to the total study standard deviation, multiplied by 100. Thus, a level of 20% Study is equivalent to a level of 4% Contribution (0.2 x 0.2 = 0.04).
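The relationship between the two metrics can be sketched in a few lines of Python. This is a minimal illustration, not part of the MSA manual; the function name and the example numbers are made up for demonstration:

```python
import math

def grr_metrics(grr_variance, total_variance):
    """Compute % Contribution and % Study from variance components.

    % Contribution is the ratio of variances times 100;
    % Study is the ratio of standard deviations times 100.
    """
    pct_contribution = 100.0 * grr_variance / total_variance
    pct_study = 100.0 * math.sqrt(grr_variance) / math.sqrt(total_variance)
    return pct_contribution, pct_study

# A GRR variance that is 4% of the total variance corresponds to 20% Study,
# because sqrt(0.04) / sqrt(1.0) = 0.2.
contribution, study = grr_metrics(grr_variance=0.04, total_variance=1.0)
print(contribution, study)
```

Because % Study works on standard deviations, it is always the square root (times 100) of the % Contribution fraction, which is why the two acceptance thresholds are not numerically equal.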


Why are the K1, K2, K3 factors for doing a GRR so different now from what they used to be?

The K factors used in the original MSA1 and MSA2 manuals included a 5.15 sigma multiplier that cancelled out of the final results. Since that multiplier essentially had no impact on the final results, it was decided to eliminate its presence in the formulas. [See also page vi in the front of the MSA3 manual.]
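The cancellation can be shown directly. The sketch below uses illustrative standard deviations (the numbers are made up); 5.15 is the sigma multiplier mentioned above:

```python
# Illustrative repeatability (EV) and total variation (TV) standard deviations.
ev_sigma = 0.5
tv_sigma = 2.0

SIGMA_MULTIPLIER = 5.15  # spread multiplier built into the old MSA1/MSA2 K factors

# Old-style: both numerator and denominator carry the 5.15 multiplier...
pct_ev_old = 100.0 * (SIGMA_MULTIPLIER * ev_sigma) / (SIGMA_MULTIPLIER * tv_sigma)

# ...new-style: the multiplier is dropped from the formulas entirely.
pct_ev_new = 100.0 * ev_sigma / tv_sigma

print(pct_ev_old, pct_ev_new)  # identical: the multiplier cancels in the ratio
```

Since every percent-of-total-variation result is a ratio of two quantities that both carried the 5.15 factor, removing it changes the tabulated K values but not any final percentage.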


Is the level of "GRR %" acceptability intended to include bias, linearity, and stability?

No. GRR % is merely the percent of the total variation in the GRR study as determined by the GRR methodology. Analyzing for bias or linearity requires separate, independent analyses. Stability requires long-term studies.


If a GRR study meets the "correct" level of performance, does that mean the measurement system is totally acceptable?

Not necessarily. GRR covers only the amount of variation due to measurement error, and technically only the short-term results gained from one study. That study also may not include all the sources of variation that can affect the measurement system over time, such as environmental effects and lot-to-lot differences.

GRR also covers only one characteristic, one of perhaps several characteristics in a total measurement system. Similarly, the Ppk or Cpk index covers only one characteristic of a part or process - a "good" Ppk or Cpk index does not necessarily mean the entire part or process is acceptable.

Also, GRR does not cover bias, linearity, or stability issues, since it does not generally study the measurement process over a long period of time.


Why did we drop the %Bias and %Linearity?

The reason we dropped the indices is that (1) there is no "correct" way to analyze them and (2) we want to focus on the understanding of the measurement system variability and sources of variation rather than on "acceptable" indices.

We took the position that bias and linearity should be the statistical equivalent of zero -- hence the confidence bounds and the test of hypothesis. If the bounds are wide (i.e., the natural measurement system variability is large), then the bias can be statistically zero even though it may not be "emotionally" zero (i.e., a large percentage in the old terms). However, because the variability is large, the system is unacceptable based on the other parameter evaluations. Furthermore, adjusting the bias (using this variation) can make the bias worse even as the calculated index improves, a la Deming's funnel experiments.
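The "statistical equivalent of zero" check amounts to a confidence interval on the observed bias. A minimal Python sketch with hypothetical data: the readings, the reference value, and the t critical value (t for 95% confidence, 9 degrees of freedom, approximately 2.262) are all illustrative, not from the manual:

```python
import math
from statistics import mean, stdev

# Hypothetical repeated readings of a reference standard with true value 10.00.
reference = 10.00
readings = [10.03, 9.98, 10.05, 9.97, 10.02, 10.04, 9.99, 10.01, 10.06, 9.95]

n = len(readings)
bias = mean(readings) - reference          # observed bias
se = stdev(readings) / math.sqrt(n)        # standard error of the mean

# 95% confidence bounds on the bias (t critical value for n-1 = 9 df).
t_crit = 2.262
lower, upper = bias - t_crit * se, bias + t_crit * se

# Bias is "the statistical equivalent of zero" when the bounds contain 0.
statistically_zero = lower <= 0.0 <= upper
print(round(bias, 4), round(lower, 4), round(upper, 4), statistically_zero)
```

Note how the width of the interval is driven by the measurement system's own variability: a noisy gage produces wide bounds, so a sizable observed bias can still be statistically zero, which is exactly the situation described above.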