Process Capability Indices
in order to compute conventional measures of skewness (asymmetry) and kurtosis (flat- or peaked-'topness'). These are

√β₁(X) = μ₃(X)/{μ₂(X)}^(3/2) = μ₃(X)/{σ(X)}³   (1.12 a)

(the symbols α₃(X) or λ₃(X) are also used for this parameter), and

β₂(X) = μ₄(X)/{μ₂(X)}² = μ₄(X)/{σ(X)}⁴   (1.12 b)

β₂ cannot be less than 1, and it is not possible to have β₂ − β₁ − 1 < 0. If a < 0, the sign of √β₁ is reversed, but β₂ remains unchanged. If √β₁ > 0 the distribution is said to be positively skew. For any normal distribution, √β₁ = 0 and β₂ = 3. If β₂ < 3 the distribution is said to be platykurtic ('flat-topped'); if β₂ > 3 the distribution is said to be leptokurtic ('peak-topped').
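As a quick numerical illustration of (1.12 a, b) (a sketch of ours, not part of the original text), the sample central moments of simulated normal data give √β₁ ≈ 0 and β₂ ≈ 3:

```python
import random

def skew_kurt(xs):
    """Return (sqrt(beta1) with sign, beta2) from sample central moments (1.12a, 1.12b)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

random.seed(1)
sample = [random.gauss(0.0, 2.0) for _ in range(200_000)]
sb1, b2 = skew_kurt(sample)
# For a normal distribution, sqrt(beta1) = 0 and beta2 = 3 (up to sampling error here).
print(round(sb1, 3), round(b2, 3))
```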
E[S²] = σ²   (1.14 b)

var(S²) = n⁻¹ {β₂ − (n−3)/(n−1)} σ⁴   (1.14 c)
We also note the simple but useful inequality

E[|X|] ≥ |E[X]|   (1.14 d)

(since E[|X|] ≥ E[X] and E[|X|] = E[|−X|] ≥ E[−X] = −E[X]). Note also the identity

min(a, b) = ½{(a + b) − |a − b|}   (1.14 e)
1.2 APPROXIMATIONS WITH STATISTICAL DIFFERENTIALS ('DELTA METHOD')
Much of our analysis of properties of PCIs will be based on the assumption that the process distribution is normal. In this case we can obtain exact formulas for moments of sample estimators of the PCIs we will discuss. In Chapter 4, however, we address problems arising from non-normality, and in most cases we will not have exact formulas for distributions of estimators of PCIs, or even for the moments. It is therefore useful to have some way of approximating these values, one of which we will now describe. (Even in the case of normal process distributions, our approximations may be useful when they replace a cumbersome exact formula by a simpler, though only approximate, formula. It is, of course, desirable to reassure ourselves, so far as possible, with regard to the accuracy of the latter.)

Suppose we have to deal with the distribution of the ratio

G = A/B

of two random variables. We take

E[A] = a,  E[B] = β
var(A) = σ_A²,  var(B) = σ_B²
cov(A, B) = ρ_AB σ_A σ_B

The deviations A − a, B − β of A and B from their expected values are called statistical differentials. Clearly

E[A − a] = 0,  E[B − β] = 0
E[(A−a)²] = σ_A²,  E[(B−β)²] = σ_B²
E[(A−a)(B−β)] = ρ_AB σ_A σ_B

i.e. the correlation coefficient between A and B is ρ_AB. (The notation A − a = δ(A); B − β = δ(B) is sometimes used, giving rise to the alternative name: delta method.) To approximate the expected value of G^r we write

G^r = [{a + (A−a)} / {β + (B−β)}]^r = (a/β)^r {1 + (A−a)/a}^r {1 + (B−β)/β}^(−r)   (1.15)

We now expand formally the second and third components of the right-hand side of (1.15). If r is an integer, the expansion of the second component terminates, but that of the third component does not. Thus we have (with r = 1)

G = (a/β) {1 + (A−a)/a} {1 − (B−β)/β + ((B−β)/β)² − ...}   (1.16)
The crucial step in the approximation is' to suppose that we can terminate the last expansion and neglect the remaining terms. This is unlikely to lead to satisfactory results unless I(B - 13)/ fJ I is small. Indeed, if I(B -- fJ)/fJ I exceeds 1, the series does not even converge. Setting aside our misgivings for the present, we take expected values of both sides of (1.16), obtaining
E[G] ≈ (a/β) {1 − ρ_AB σ_A σ_B/(aβ) + σ_B²/β²}   (1.17 a)

A similar analysis leads to

var(G) ≈ (a/β)² {σ_A²/a² − 2ρ_AB σ_A σ_B/(aβ) + σ_B²/β²}   (1.17 b)

Formulas (1.17 a) and (1.17 b) can be written conveniently in terms of the coefficients of variation of A and B (see p. 7):

E[G] ≈ (a/β) [1 − ρ_AB {C.V.(A)}{C.V.(B)} + {C.V.(B)}²]   (1.17 c)

var(G) ≈ (a/β)² [{C.V.(A)}² − 2ρ_AB {C.V.(A)}{C.V.(B)} + {C.V.(B)}²]   (1.17 d)
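A rough Monte Carlo check of the delta-method approximations (1.17 a, b) for a ratio of correlated normal variables can be sketched as follows (the parameter values are our own choices, purely illustrative):

```python
import math
import random

random.seed(2)
a, beta = 5.0, 10.0           # E[A], E[B]
sA, sB, rho = 0.5, 0.8, 0.4   # sigma_A, sigma_B, rho_AB

# Delta-method approximations (1.17a, 1.17b) via coefficients of variation
cvA, cvB = sA / a, sB / beta
e_approx = (a / beta) * (1 - rho * cvA * cvB + cvB ** 2)
v_approx = (a / beta) ** 2 * (cvA ** 2 - 2 * rho * cvA * cvB + cvB ** 2)

# Simulate correlated (A, B) and compare
gs = []
for _ in range(200_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    A = a + sA * z1
    B = beta + sB * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    gs.append(A / B)
m = sum(gs) / len(gs)
v = sum((g - m) ** 2 for g in gs) / len(gs)
print(round(e_approx, 4), round(m, 4), round(v_approx, 5), round(v, 5))
```

The approximation is good here because |(B−β)/β| is small with high probability; as the text warns, it degrades badly when the C.V. of B is large.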
Another application of the method is to approximate the expected value of a function g(A) in terms of moments of A. Expanding in a Taylor series
g(A) = g(a + A − a) = g(a) + (A−a)g′(a) + ½(A−a)²g″(a) + ...
     ≈ g(a) + (A−a)g′(a) + ½(A−a)²g″(a)

Taking expected values of both sides,

E[g(A)] ≈ g(a) + ½g″(a)σ_A²   (1.18 a)

and a similar analysis leads to

var(g(A)) ≈ {g′(a)}² var(A) = {g′(a)}² σ_A²   (1.18 b)

Example (see sections 1.4 and 1.5 for notation)

Suppose we want to approximate the expected value and variance of

{Σ_{i=1}^n (X_i − X̄)²}^(−1/2)

where X₁, ..., X_n are independent N(ξ, σ²) random variables and X̄ = n⁻¹ Σ_{i=1}^n X_i. We take

A = Σ_{i=1}^n (X_i − X̄)²

as our 'A', and g(A) = A^(−1/2). From (1.31) we will find that Σ_{i=1}^n (X_i − X̄)² is distributed as χ²_{n−1}σ². Hence

a = E[A] = (n−1)σ²  and  σ_A² = 2(n−1)σ⁴

Also

g′(A) = −½A^(−3/2),  g″(A) = ¾A^(−5/2)

Hence

E[A^(−1/2)] ≈ {(n−1)σ²}^(−1/2) + ½{2(n−1)σ⁴}{¾(n−1)^(−5/2)σ⁻⁵}
            = (n−1)^(−1/2) {1 + 3/(4(n−1))} σ⁻¹   (1.19 a)

and

var(A^(−1/2)) ≈ 2(n−1)σ⁴ × ¼(n−1)⁻³σ⁻⁶ = 1/{2(n−1)²σ²}   (1.19 b)
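The quality of the approximation (1.19 a) can be checked against the exact value, since E[(χ²_f)^(−1/2)] = Γ(½(f−1))/{√2 Γ(½f)} with f = n−1. A small sketch (ours, using only the standard library):

```python
import math

def exact_mean_inv_sqrt(n, sigma=1.0):
    # E[{sum (X_i - Xbar)^2}^(-1/2)] with A ~ sigma^2 * chi^2_{n-1}:
    # E[(chi^2_f)^(-1/2)] = Gamma((f-1)/2) / (sqrt(2) Gamma(f/2)),  f = n - 1
    f = n - 1
    return math.gamma((f - 1) / 2) / (math.sqrt(2) * math.gamma(f / 2)) / sigma

def approx_mean_inv_sqrt(n, sigma=1.0):
    # Delta-method approximation (1.19a)
    return (n - 1) ** -0.5 * (1 + 3 / (4 * (n - 1))) / sigma

for n in (10, 25, 50):
    print(n, round(exact_mean_inv_sqrt(n), 5), round(approx_mean_inv_sqrt(n), 5))
```

Already at n = 25 the two values agree to about 0.1%, illustrating why the delta method is serviceable for moderate sample sizes.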
Remembering the formulas (1.14 b, c) for E[S²] and var(S²), we find that

σ(S) ≈ ½ n^(−1/2) {β₂ − (n−3)/(n−1)}^(1/2) σ   (1.19 c)

for any distribution.

1.3 BINOMIAL AND POISSON DISTRIBUTIONS
Suppose that in a sequence of n observations we observe whether an event E (e.g. X ≤ x) occurs or not. We can represent the number of occasions on which E occurs by a random variable X. To establish an appropriate distribution for X, we need to make certain assumptions. The most common set of assumptions is:
1. for each observation the probability of occurrence of E has the same value, p, say (symbolically, Pr[E] = p); and
2. the results of the n observations are mutually independent.
Under the above stated conditions,

Pr[X = k] = (n choose k) p^k q^(n−k),  k = 0, 1, ..., n

where q = 1 − p (a common notation in the literature). (Recall that n! = 1 × 2 × ... × n and n^(k) = n × (n−1) × ... × (n−k+1).) We observe that Pr[X=0], Pr[X=1], Pr[X=2], ..., Pr[X=n] are the successive terms in the binomial expansion of (q + p)^n. The expected value and variance of the distribution are

μ′₁(X) = E[X] = np  and  μ₂(X) = var(X) = npq

respectively. If n is increased indefinitely and p decreased, keeping the expected value np constant (= θ, say), then Pr[X = k] approaches

θ^k exp(−θ)/k!

The distribution defined by Pr[X = k] = θ^k exp(−θ)/k!, k = 0, 1, 2, ..., is called a Poisson distribution; its expected value and variance are both equal to θ.
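The convergence of the binomial probabilities to the Poisson limit with np held fixed can be seen directly (an illustrative sketch of ours):

```python
import math

def binom_pmf(n, p, k):
    # Successive terms of the binomial expansion (q + p)^n, with q = 1 - p
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(theta, k):
    return theta ** k * math.exp(-theta) / math.factorial(k)

theta = 2.0
for n in (10, 100, 1000):
    p = theta / n  # keep np = theta fixed while n grows
    print(n, [round(binom_pmf(n, p, k), 4) for k in range(4)])
print('limit', [round(poisson_pmf(theta, k), 4) for k in range(4)])
```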
1.4 NORMAL DISTRIBUTIONS

A random variable U has a standard (or unit) normal distribution if its CDF is

F_U(u) = (2π)^(−1/2) ∫_{−∞}^{u} exp(−½t²) dt = Φ(u)   (1.22)

and its PDF is

f_U(u) = (2π)^(−1/2) exp(−½u²) = φ(u)   (1.23)

(We shall use the notation Φ(u), φ(u) throughout this book.) This distribution is also called a Gaussian distribution, and occasionally a bell-shaped distribution. The expected value and the variance of U are 0 and 1 respectively.

The CDF and PDF of X = ξ + σU, for σ > 0, are

F_X(x) = Φ((x − ξ)/σ)   (1.24 a)

and

f_X(x) = (σ√(2π))⁻¹ exp[−½{(x − ξ)/σ}²] = σ⁻¹ φ((x − ξ)/σ)   (1.24 b)

respectively. We write symbolically X ~ N(ξ, σ²). The expected value of X is E[X] = ξ and the variance of X is σ²; the standard deviation is σ. The distribution of U is symmetric about zero, i.e. f_U(−u) = f_U(u). The distribution of X is symmetric about its expected (mean) value ξ, i.e.

f_X(ξ + x) = f_X(ξ − x)   (1.25)

Tables of Φ(u) are available which are widely used. Typical values are shown in Table 1.1.

Table 1.1 Values of Φ(u), the standardized normal CDF

u       Φ(u)        u        Φ(u)
0       0.5000      0.6745   0.7500
0.5     0.6915      1.2816   0.9000
1.0     0.8413      1.6449   0.9500
1.5     0.9332      1.9600   0.9750
2.0     0.97725     2.3263   0.9900
2.5     0.9938      2.5758   0.9950
3.0     0.99865     3.0902   0.9990
                    3.2905   0.9995

Note that u₁ = 1.645 and u₂ = 1.96 correspond to Φ(u₁) = 0.95 and Φ(u₂) = 0.975 respectively, i.e. the area under the normal curve from −∞ up to 1.645 is 95%, and that from −∞ up to 1.96 is 97.5%.
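The entries of Table 1.1 can be reproduced from the error function, since Φ(u) = ½{1 + erf(u/√2)} (a sketch of ours, using only the standard library):

```python
import math

def Phi(u):
    # Standard normal CDF via the error function: Phi(u) = (1 + erf(u / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

for u in (0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0):
    print(u, round(Phi(u), 5))
```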
1.5.1 Central chi-squared distributions

If U₁, U₂, ..., U_ν are mutually independent unit normal variables, the distribution of

χ²_ν = U₁² + U₂² + ... + U_ν²   (1.26)

has the PDF

f_{χ²_ν}(x) = [2^(ν/2) Γ(½ν)]⁻¹ x^(ν/2 − 1) exp(−x/2),  x > 0   (1.27)

where Γ(α) is the gamma function defined by

Γ(α) = ∫₀^∞ y^(α−1) exp(−y) dy   (1.28)

This is the chi-squared distribution with ν degrees of freedom. Symbolically, if W ~ σ²χ²_ν, then W has expected value σ²ν and variance 2σ⁴ν. Generally

μ′_r(χ²_ν) = E[(χ²_ν)^r] = 2^r Γ(½ν + r)/Γ(½ν)   (1.29)

(= ν(ν+2)...(ν+2r−2) if r is a positive integer). If r ≤ −½ν, then μ′_r(χ²_ν) is infinite. Generally a distribution with PDF

f_X(x) = [β^α Γ(α)]⁻¹ x^(α−1) exp(−x/β),  x > 0; α > 0, β > 0   (1.30)

is called a gamma (α, β) distribution. A chi-squared random variable with ν degrees of freedom (χ²_ν) has a gamma (½ν, 2) distribution.

If X₁, X₂, ..., X_n are independent N(ξ, σ²) random variables, then 'the sum of squared deviations from the mean' is distributed as χ²_{n−1}σ². Symbolically (cf. p. 13)

Σ_{i=1}^n (X_i − X̄)² ~ χ²_{n−1}σ²   (1.31)

Moreover, Σ_{i=1}^n (X_i − X̄)² and X̄ = n⁻¹ Σ_{j=1}^n X_j (which is distributed N(ξ, σ²/n)) are mutually independent.

1.5.2 Non-central chi-squared distributions

Following a similar line of development to that in section 1.5.1, the distribution of

χ′²_ν(δ²) = (U₁ + δ₁)² + (U₂ + δ₂)² + ... + (U_ν + δ_ν)²   (1.33)

where the U_j ~ N(0, 1) are mutually independent, with

δ² = Σ_{j=1}^ν δ_j²

has the PDF

f_{χ′²_ν(δ²)}(x) = Σ_{j=0}^∞ p_j(½δ²) f_{χ²_{ν+2j}}(x)   (1.32)

where

p_j(½δ²) = (½δ²)^j exp(−½δ²)/j! = Pr[Y = j]

if Y has a Poisson distribution with expected value ½δ². This is a non-central chi-squared distribution with ν degrees of freedom and non-centrality parameter δ². Note that the distribution depends only on the sum δ² and not on the individual values of the summands δ_j². The distribution is a mixture (see section 1.9) of χ²_{ν+2j} distributions with weights p_j(½δ²) (j = 0, 1, 2, ...). Symbolically we write

χ′²_ν(δ²) ~ χ²_{ν+2J}  with  J ~ Poisson(½δ²)

The expected value and variance of the χ′²_ν(δ²) distribution are (ν + δ²) and 2(ν + 2δ²) respectively. Note that

Σ_{h=1}^k χ′²_{ν_h}(δ²_h) ~ χ′²_ν(Σ_{h=1}^k δ²_h),  with ν = Σ_{h=1}^k ν_h   (1.34)

if the χ′²'s in the summation are mutually independent.
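The stated mean and variance of the non-central chi-squared distribution are easy to confirm by simulation directly from the defining sum (1.33) (illustrative parameter values of our own choosing):

```python
import random

random.seed(6)
nu, deltas = 4, [0.5, 1.0, -0.7, 0.3]
delta2 = sum(d * d for d in deltas)  # non-centrality parameter delta^2

vals = []
for _ in range(200_000):
    vals.append(sum((random.gauss(0, 1) + d) ** 2 for d in deltas))
m = sum(vals) / len(vals)
v = sum((x - m) ** 2 for x in vals) / len(vals)
# Theory: E = nu + delta^2,  var = 2(nu + 2 delta^2)
print(round(m, 2), nu + delta2, round(v, 2), 2 * (nu + 2 * delta2))
```

Note that only δ² = Σδ_j² matters; permuting or re-splitting the δ_j leaves the simulated distribution unchanged, as the text asserts.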
1.6 BETA DISTRIBUTIONS

A random variable Y has a standard beta (α, β) distribution if its PDF is

f_Y(y) = {B(α, β)}⁻¹ y^(α−1) (1−y)^(β−1),  0 ≤ y ≤ 1 and α, β > 0   (1.35 a)

where B(α, β) is the beta function,

B(α, β) = Γ(α)Γ(β)/Γ(α+β)

Thus

f_Y(y) = [Γ(α+β)/{Γ(α)Γ(β)}] y^(α−1) (1−y)^(β−1),  0 ≤ y ≤ 1   (1.35 b)

If α = β = 1, the PDF is constant and this special case is called a standard uniform distribution.

If X₁ and X₂ are mutually independent and X_j ~ χ²_{ν_j} (j = 1, 2), then

X₁ + X₂ ~ χ²_{ν₁+ν₂}  and  X₁/(X₁ + X₂) ~ beta(½ν₁, ½ν₂)   (1.36)

moreover X₁/(X₁ + X₂) and X₁ + X₂ are mutually independent. (These results will be of importance when we derive distributions of various PCIs.)

The rth (crude) moment of Y is

μ′_r(Y) = E[Y^r] = Γ(α+r)Γ(α+β)/{Γ(α)Γ(α+β+r)} = α^[r]/(α+β)^[r]   (1.37)

where a^[r] = a(a+1)...(a+r−1). Hence

E[Y] = μ′₁(Y) = α/(α+β)   (1.38 a)

E[Y²] = μ′₂(Y) = α(α+1)/{(α+β)(α+β+1)}   (1.38 b)

and hence

var(Y) = E[Y²] − {E[Y]}² = αβ/{(α+β)²(α+β+1)}   (1.38 c)

We note here the following definitions:
1. the (complete) beta function

B(α, β) = ∫₀¹ y^(α−1)(1−y)^(β−1) dy,  α, β > 0

2. the incomplete beta function

B_t(α, β) = ∫₀^t y^(α−1)(1−y)^(β−1) dy,  0 ≤ t ≤ 1
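The moment formulas (1.37) and (1.38 a-c) can be verified numerically via the gamma function (a self-contained sketch of ours):

```python
import math

def beta_moment(alpha, beta_, r):
    # mu'_r(Y) = Gamma(alpha + r) Gamma(alpha + beta) / {Gamma(alpha) Gamma(alpha + beta + r)}  (1.37)
    return (math.gamma(alpha + r) * math.gamma(alpha + beta_)
            / (math.gamma(alpha) * math.gamma(alpha + beta_ + r)))

a, b = 2.5, 4.0
m1 = beta_moment(a, b, 1)
m2 = beta_moment(a, b, 2)
print(round(m1, 6), round(a / (a + b), 6))                 # (1.38a)
print(round(m2 - m1 ** 2, 6),
      round(a * b / ((a + b) ** 2 * (a + b + 1)), 6))      # (1.38c)
```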
1.11 SYSTEMS OF DISTRIBUTIONS

In addition to specific individual families of distributions, of the kind discussed in sections 1.3-1.10, there are systems including a wide variety of families of distributions, but based on some simple defining concept. Among these we will describe first the Pearson system and later the Edgeworth (Gram-Charlier) system. Both are systems of continuous distributions.

1.11.1 Pearson system distributions

This system includes the t distribution (section 1.7). These families we have already encountered, and further details are available, for example, in Johnson and Kotz (1970, pp. 9-15).

1.11.2 Edgeworth (Gram-Charlier) distributions

These will be encountered in Chapter 4, and will be described there, but a brief account is available in Johnson and Kotz (1970, pp. 15-22).
1.12 FACTS FROM STATISTICAL METHODOLOGY

This section contains information on more advanced topics of statistical theory. This information is needed for full appreciation of a few sections later in the book, but it is not essential to master (or even read) this material to understand and use the results.

1.12.1 Likelihood

For independent continuous variables X₁, ..., X_n possessing PDFs, the likelihood function is simply the product of their PDFs. Thus the likelihood of (X₁, ..., X_n), L(X), is given by

L(X) = f_{X₁}(X₁) f_{X₂}(X₂) ... f_{X_n}(X_n) = Π_{j=1}^n f_{X_j}(X_j)   (1.60)

(For discrete variables the f_{X_j}(x_j) would be replaced by the probability function P_{X_j}(x_j), where P_{X_j}(x_j) = Pr[X_j = x_j].)

1.12.2 Maximum likelihood estimators

If the distributions of X₁, ..., X_n depend on values of parameters θ₁, θ₂, ..., θ_s (= θ), then the likelihood function, L, is a function of θ. The values θ̂₁, θ̂₂, ..., θ̂_s (= θ̂) that maximize L are called maximum likelihood estimators of θ₁, ..., θ_s respectively.

1.12.3 Sufficient statistics

If the likelihood function can be expressed entirely in terms of functions of X ('statistics') T₁(X), ..., T_h(X), these statistics are called sufficient for the parameters in the PDFs of the Xs. It follows that maximum likelihood estimators of θ must be functions of the sufficient statistics (T₁, ..., T_h) = T. Also, the joint distribution of the Xs, given the values of T, does not depend on θ. This is sometimes described, informally, by the phrase 'T contains all the information on θ.' If it is not possible to find any function h(T) of T with expected value zero, except when Pr[h(T) = 0] = 1, then T is called a complete sufficient statistic (set).

1.12.4 Minimum variance unbiased estimators

It is also true that, among unbiased estimators of a parameter γ, an estimator determined by the complete set of sufficient statistics T₁, ..., T_h will have the minimum possible variance. Furthermore, given any unbiased estimator of γ, G say, such a minimum variance unbiased estimator (MVUE) can be obtained by deriving

G₀(T) = E[G | T]   (1.61)

i.e. the expected value of G, given T. This is based on the Blackwell-Rao theorem (Blackwell, 1947; Rao, 1945).

Example

If X₁, ..., X_n are independent N(ξ, σ²) random variables, then the likelihood function is

L(X) = Π_{i=1}^n (σ√(2π))⁻¹ exp[−½{(X_i − ξ)/σ}²]
     = (σ√(2π))⁻ⁿ exp[−(2σ²)⁻¹ {Σ_{i=1}^n (X_i − X̄)² + n(X̄ − ξ)²}]   (1.62)

The maximum likelihood estimator of ξ is

ξ̂ = X̄   (1.63 a)

The maximum likelihood estimator of σ is

σ̂ = {n⁻¹ Σ_{i=1}^n (X_i − X̄)²}^(1/2)   (1.63 b)

(ξ̂, σ̂) are sufficient statistics for (ξ, σ); they are also, in fact, complete sufficient statistics. E[ξ̂] = ξ, so ξ̂ is an unbiased estimator of ξ, and ξ̂ (= X̄) is also a minimum variance unbiased estimator of ξ. But since

E[σ̂] = (2/n)^(1/2) {Γ(½n)/Γ(½(n−1))} σ   (1.64)

σ̂ is not an unbiased estimator of σ. However,

(n/2)^(1/2) {Γ(½(n−1))/Γ(½n)} σ̂

is not only unbiased, but is a MVUE of σ. (There are more elaborate applications of the Blackwell-Rao theorem in Appendices 2A and 3A.)

1.12.5 Likelihood ratio tests

Suppose we want to choose between hypotheses H₀, H₁, specifying the values of the parameters θ = (θ₁, ..., θ_s) to be θ_j = (θ_{1j}, ..., θ_{sj}) (j = 0, 1). The two likelihood functions are

L(X | H_j) = Π_{i=1}^n f_{X_i}(x_i | θ_j),  j = 0, 1   (1.65)

The likelihood ratio test procedure is of the following form: if L(X | H₁)/L(X | H₀) exceeds a suitably chosen constant, H₁ is accepted; otherwise H₀ is accepted.
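The unbiasing multiplier √(n/2) Γ(½(n−1))/Γ(½n) for σ̂ in the example above can be checked by simulation (a sketch of ours, standard library only):

```python
import math
import random

def unbias_const(n):
    # Multiplier sqrt(n/2) * Gamma((n-1)/2) / Gamma(n/2) that makes
    # sigma_hat = {n^-1 sum (X_i - Xbar)^2}^(1/2) unbiased for sigma (cf. (1.64))
    return math.sqrt(n / 2) * math.gamma((n - 1) / 2) / math.gamma(n / 2)

random.seed(8)
n, sigma = 10, 3.0
est = []
for _ in range(100_000):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s_hat = math.sqrt(sum((x - xbar) ** 2 for x in xs) / n)  # MLE (1.63b), biased
    est.append(unbias_const(n) * s_hat)
print(round(sum(est) / len(est), 3), sigma)
```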
,
' r-
'"
'-":r'" ..... rr,
S
0..
'.:!(o.. 0 0
. for an. existing process, 1.33 . for a new process, 1.50
~ i7\
NIl'>
..,fV) """ II
For characteristics related to essential safety, strength or performance features - for example, in the manufacturing of bolts used in bridge construction - minimum values of 1.50 for existing, and 1.67 for new, processes are recommended. Generally, we prefer to emphasize the relations of values of PCIs to expected proportions of NC items, whenever this is possible.
2.3 ESTIMATION OF Cp
The only parameter in (2.1 a) which may need to be estimated is σ, the standard deviation of X. A natural estimator of σ is
S = {(n−1)⁻¹ Σ_{i=1}^n (X_i − X̄)²}^(1/2)   (2.3)

where

X̄ = n⁻¹ Σ_{j=1}^n X_j

If the distribution of X is normal, then S² is distributed as

(n−1)⁻¹ σ² × (chi-squared with (n−1) degrees of freedom)   (2.4 a)
or, symbolically,

S² ~ (n−1)⁻¹ σ² χ²_{n−1}   (2.4 b)

(see section 1.5). Estimation of σ using random samples from a normal population has been very thoroughly studied. From (2.4) and (2.7), we have

Pr[Ĉp > c·Cp] = Pr[χ²_{n−1} < (n−1)c⁻²]   (2.8)

The distribution of Ĉp when the distribution of X is not normal will be considered in Chapter 4. We note that when f = 24, for example (with n = 25, if f = n−1), the S.D. of Ĉp is very roughly 14-15% of the expected value (i.e. the coefficient of variation is 14-15%). This represents a quite considerable amount of dispersion. As a further example, take Montgomery's (1985) suggested minimum value of Cp for 'existing processes', which is 1.33 (see the end of section 2.2). The corresponding value of d/σ is 4. If a sample of size 20 is available, we see (from f = 19 in Table 2.3) that if Cp = 1.33, the expected value of Ĉp would be 1.389 and its standard deviation is 0.24. Clearly, the results of using Ĉp will be unduly favourable in over 50% of cases; actually we have from (2.8), with c = 1,

Pr[Ĉp > 1.33] = Pr[χ²₁₉ < 19] = 54.3%

If only a few observations are available (f = 4, say) and Cp = 1.33, then the standard deviation of Ĉp is 0.87 - over half the value of Cp! 'Estimation' in this situation is of very little use, but if n exceeds 50, fairly reasonable accuracy is attained. Another disturbing factor - possible effects of non-normality - will be the topic of Chapter 4. Bayesian estimation of Cp has been studied by Bian and Saw (1993).
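The probability in (2.8) can be evaluated without statistical tables via the regularized lower incomplete gamma function, since the χ²_k CDF at x equals P(k/2, x/2) (a sketch of ours, standard library only):

```python
import math

def chi2_cdf(x, k):
    # Regularized lower incomplete gamma P(a, z), a = k/2, z = x/2, via its power series:
    # P(a, z) = z^a e^-z / Gamma(a) * sum_{j>=0} z^j / {a (a+1) ... (a+j)}
    a, z = k / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    j = 0
    while term > 1e-16 * total:
        j += 1
        term *= z / (a + j)
        total += term
    return total * math.exp(a * math.log(z) - z - math.lgamma(a))

# Pr[Chi^2_19 < 19], i.e. Pr[Cp_hat > Cp] with c = 1 and n = 20 in (2.8)
print(round(100 * chi2_cdf(19.0, 19), 1))
```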
Ĉpk = min(Ĉpkl, Ĉpku)   (2.31)

where

Ĉpkl = (X̄ − LSL)/(3S)  and  Ĉpku = (USL − X̄)/(3S)
Some possibilities are shown in Table 2.9 (with m = ½(LSL + USL)). (See also Figs. 2.6 a-d.)

Table 2.9 Confidence interval for Cpk (σ known); ξ(X) and ξ̄(X) denote the lower and upper confidence limits for ξ

ξ(X)    ξ̄(X)    Confidence limits (× 3σ)
≤ m     ≤ m     (ξ(X) − LSL, ξ̄(X) − LSL)
≤ m     > m     (min(ξ(X) − LSL, USL − ξ̄(X)), ½(USL − LSL))
> m     > m     (USL − ξ̄(X), USL − ξ(X))

When σ is not known, but is estimated by S, the problem is more complicated. If we replace the values ξ(X), ξ̄(X) by

X̄ − t_{n−1, α/2} S/√n  and  X̄ + t_{n−1, 1−α/2} S/√n

((2.36 a) with f = n−1) we would still get 100(1−α)% confidence limits from Table 2.9, but computation of these limits would need the correct value of σ to evaluate the multiplier 1/(3σ).

Bayesian estimation of Cpk has been studied by Bian and Saw (1993).

2.6 COMPARISON OF CAPABILITIES

PCIs are measures of 'capability' of a process. They may also be used to compare capabilities of two (or more) processes. In particular, comparison of the estimators Ĉpl1 and Ĉpl2 may be regarded as a test of the hypothesis Cpl1 = Cpl2 against either two-sided (Cpl1 ≠ Cpl2) or one-sided (Cpl1 < Cpl2) alternative hypotheses, for the case when the sample sizes from which the estimators Ĉpl1, Ĉpl2 are calculated are the same (n, say). In an obvious notation, the statistics T₁ = 3√n Ĉpl1, T₂ = 3√n Ĉpl2 are independent noncentral t variables with (n−1) degrees of freedom, and noncentrality parameters √n(ξ₁ − LSL₁)/σ₁, √n(ξ₂ − LSL₂)/σ₂ respectively. Applying a generalized likelihood ratio test technique (section 1.12), Chou and Owen obtain the test statistic

U = [{T₁² + 2(n−1)}{T₂² + 2(n−1)}]^(1/2) − T₁T₂   (2.44)

To establish approximate significance limits for U they carried out simulation experiments, as a result of which they suggest the following approximation for the distribution of U when H₀ is valid: 'a_n log{½U/(n−1)} is approximately distributed as chi-squared with ν_n degrees of freedom.' Table 2.10 gives values of a_n and ν_n estimated by Chou and Owen from their simulations.
Table 2.10 Values of a_n and ν_n

n      a_n      ν_n
5      8.98     1.28
10     19.05    1.22
15     26.47    1.07
20     36.58    1.06

The progression of values is rather irregular, possibly reflecting random variation from the simulations (though there were 10000 samples in each case). [If the values of a₁₀ and a₁₅ were about 18.2 and 27.3 respectively, the progression would be smooth.]

2.7 CORRELATED OBSERVATIONS

The above theory, based on mutually independent observations of X, can be extended to cases in which the observations of the process characteristic are correlated, using results presented by Yang and Hancock (1990). They show that

E[S²] = (1 − ρ̄)σ²

where ρ̄ = average of the ½n(n−1) correlations between pairs of Xs. (See Appendix 2.A.) This implies that in the estimator

Ĉp = d/(3S)

of Cp, the denominator will tend to underestimate σ, and so Ĉp will tend to overestimate Cp. This reinforces the effect of the bias in 1/S as an estimator of 1/σ, which also tends to produce overestimation of Cp (see Table 2.3). There will be a similar effect for Ĉpk. We warn the reader, however, that in this case we cannot rely on independence of X̄ and S.

Correlation can arise naturally when σ² is estimated from data classified into several subgroups. Suppose that we have k subgroups, each containing m items, and denote by X_uv the value of the vth item in the uth subgroup. If the model

X_uv = Y_u + Z_uv,  u = 1, ..., k; v = 1, ..., m

applies, with all Ys and Zs mutually independent random variables, each with expected value zero, and with

var(Y_u) = σ_Y²,  var(Z_uv) = σ_Z²

then we have n = km, σ² = σ_Y² + σ_Z², and the correlation between X_uv and X_u′v′ is

0 if u ≠ u′;  σ_Y²/(σ_Y² + σ_Z²) if u = u′ (and v ≠ v′)

so that

ρ̄ = km(m−1)σ_Y² / {n(n−1)(σ_Y² + σ_Z²)} = (m−1)σ_Y² / {(n−1)σ²}   (2.45)
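The subgroup model and (2.45) can be checked by simulation: generating data from X_uv = Y_u + Z_uv and averaging S² over many replications should reproduce (1 − ρ̄)σ² (an illustrative sketch of ours; the parameter values are arbitrary):

```python
import random

random.seed(10)
k, m = 20, 5                 # k subgroups of m items, n = km
sY, sZ = 1.0, 2.0            # sigma_Y, sigma_Z
n = k * m
sigma2 = sY ** 2 + sZ ** 2
rho_bar = (m - 1) * sY ** 2 / ((n - 1) * sigma2)   # (2.45)

s2 = []
for _ in range(10_000):
    xs = []
    for _ in range(k):
        y = random.gauss(0.0, sY)
        xs.extend(y + random.gauss(0.0, sZ) for _ in range(m))
    xbar = sum(xs) / n
    s2.append(sum((x - xbar) ** 2 for x in xs) / (n - 1))
print(round(sum(s2) / len(s2), 3), round((1 - rho_bar) * sigma2, 3))
```

The simulated E[S²] falls below σ² by exactly the factor (1 − ρ̄), illustrating the overestimation of Cp described in the text.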
... Ĉpm when Cp² > 1.23 (Cp > 1.11), provided u is not too close to 0 or 1. If u = 0 (i.e. ξ = T = m) then Cp = Cpk = Cpm.

Johnson (1991) also approached the Cpm index from the point of view of loss functions, using the converse concept of 'worth'. He supposed that the maximum worth, W_T, of an item corresponds to the characteristic X having the target value, T. As the deviation of X from T increases, the worth becomes less, eventually becoming zero, and then negative. If the worth function is of the simple form

W_T − k(X − T)²

it becomes zero when |X − T| = √(W_T/k). In Johnson's (1991) approach, the values T ± √(W_T/k) play similar roles to those of the specification limits in defining Cpm, and we may define Δ = √(W_T/k). The ratio (worth)/(maximum worth),

1 − k(X − T)²/W_T = 1 − (X − T)²/Δ²  for W_T ≥ k(X − T)²

is termed the relative worth, and (X − T)²/Δ² is termed the relative loss. The expected relative loss is then (in Johnson's notation)

Le = Δ⁻² E[(X − T)²]

Recalling the definition of Cpm, we see that

Le = {d/(3Δ)}² C_pm⁻²

Hence Le is effectively equivalent to Cpm, although it has a different background. A natural unbiased estimator of Le is

L̂e = (nΔ²)⁻¹ Σ_{i=1}^n (X_i − T)²

The distribution of nΔ²L̂e is therefore that of

σ² χ′²_n(n(ξ − T)²/σ²)

discussed in section 1.5. The resulting analysis for construction of confidence intervals follows lines to be developed in section 3.5. Extension to nonsymmetric loss functions is direct.
3.3 COMPARISON OF Cpm AND C*pm WITH Cp AND Cpk
We first note that if T = ξ, then Cpm is the same as Cp. If, also, ξ = m, then both Cpm and C*pm are the same as Cpk. If, further, the process characteristic has a 'stable normal distribution', then a value of 1 (for all four indices) means that the expected proportion of product with values of X within specification limits is a little over 99.7%. Negative values are impossible for Cp and Cpm. They are impossible for C*pm if T falls within specification limits (and for Cpk if ξ falls within specification limits).

The principal distinction between Cpm and Cpk is, as Mirabella (1991) implicitly indicates, and Kushler and Hurley (1992) state explicitly, in the relative importance of the specification limits (USL and LSL) versus the target value, T. As explained in Chapter 2, the main function of Cpk is to indicate the degree to which the process is within specification limits. For LSL < ξ < USL, Cpk → ∞ as σ → 0, but large values of Cpk do not provide information about the
discrepancy between ξ and T. The index Cpm, however, purports to measure the degree of 'on-targetness' of the process, and Cpm → ∞ if ξ → T, as well as σ → 0. With Cpm, specification limits are used solely in order to scale the loss function (squared deviation) in the denominator. Kushler and Hurley (1992, p. 2) emphasize that, if T is not equal to m, the following paradox can arise: '... moving the process mean towards the target (which will increase the Cpk† and Cpm indices) can reduce the fraction of the distribution which is within specification limits.'

Finally, we compare plots of values of (ξ, σ) corresponding to fixed values of Cp, Cpk and Cpm. If Cpm = c, then

σ² + (ξ − T)² = {d/(3c)}²

These (ξ, σ) points lie on a semi-circle with centre at (T, 0) and radius ⅓d/c, restricted to non-negative σ. If Cp = c, we have (ξ, σ) points on the line

σ = d/(3c)

parallel to the ξ-axis, and tangential to the semi-circle corresponding to Cpm = c at the point (T, ⅓d/c). If Cpk = c we have

|ξ − m| + 3cσ = d

This corresponds to (ξ, σ) points on the pair of lines (with σ ≥ 0)

ξ = m + d − 3cσ  and  ξ = m − d + 3cσ

These lines intersect at the point (m, ⅓d/c) and meet the ξ-axis at the points (m−d, 0) and (m+d, 0). Boyles (1991, Figs. 4 and 5) provides diagrams representing these loci for the example already described in section 3.2 (USL = 65, LSL = 35, m = 50, 2d = 30) with several values of c. Figure 3.1 is a simplified version, giving (ξ, σ) loci for a general value of c exceeding ⅓, with T = m. (If c < ⅓, the lines Cpk = c would meet the ξ-axis between its points of intersection (ξ = m ± ⅓d/c) with the Cpm = c semi-circle.) If T ≠ m, the lines Cpk = c and Cp = c would be unchanged, but the Cpm = c semi-circle would be moved horizontally to centre at the point (T, 0), with points of intersection with the ξ-axis at ξ = T ± ⅓d/c.

†The Cpk index referred to here is the modified index described by Kane (1986), namely ⅓{d − |ξ − T|}/σ.
The corresponding estimator of C*pm is

Ĉ*pm = (d − |T − m|) / [3{n⁻¹ Σ_{i=1}^n (X_i − T)²}^(1/2)]   (3.16)
Fig. 3.1 Loci of points (ξ, σ) such that: (a) Cp = c; (b) Cpk = c; (c) Cpm = c; when T = m, c > ⅓.
From (3.50 c) and (3.50 d), the variance of Ĉpmk can be derived. Here

Cpmk = (d − |ξ − m|) / [3σ{1 + ((ξ − T)/σ)²}^(1/2)]   (3.51)

and the series expressions for the moments involve the ratios

R_i = Γ(½(n−1) + i) / Γ(½n + i)

Taking λ = 0, we find

E[Ĉpmk] = ⅓[(d/σ)(n/2)^(1/2) R − Γ(½n)/{√π Γ(½(n+1))}]

with R = Γ(½(n−1))/Γ(½n). Now taking r = 2, we obtain from (3.50 b), after some simplification, a corresponding (though more cumbersome) series expression (3.50 e) for E[Ĉpmk²]. Table 3.3 gives values of E[Ĉpmk] and S.D.(Ĉpmk); Table 3.4 gives values of Cpmk for representative values of d/σ and |ξ − m|/σ.
3.7 Cpm(a) INDICES
The index Cpm attempts to represent a combination of the effects of greater (or less) variability and of greater (or less) relative deviation of ξ from the target value. The class of indices now to be described attempts to provide a flexible choice of the relative importance to be ascribed to these two effects. From (3.8), we note that if ζ = |ξ − T|/σ is small, then

Cpm ≈ (1 − ½ζ²)Cp   (3.52)
Table 3.3 Moments of Ĉpmk: E = E[Ĉpmk]; S.D. = S.D.(Ĉpmk)

        |ξ−m|/σ = 0.0   0.5           1.0           1.5           2.0
d/σ     E     S.D.    E     S.D.    E     S.D.    E     S.D.    E     S.D.

n = 10
2       0.636 0.193   0.491 0.192   0.264 0.135   0.106 0.081   0.006 0.051
3       0.997 0.281   0.813 0.268   0.515 0.187   0.299 0.114   0.160 0.072
4       1.359 0.371   1.126 0.347   0.765 0.241   0.492 0.148   0.313 0.094
5       1.720 0.462   1.453 0.426   1.016 0.295   0.684 0.182   0.467 0.116
6       2.081 0.553   1.780 0.505   1.266 0.350   0.877 0.216   0.620 0.138

n = 20
2       0.633 0.124   0.470 0.129   0.249 0.087   0.099 0.054   0.003 0.035
3       0.979 0.180   0.779 0.176   0.491 0.121   0.288 0.076   0.154 0.049
4       1.326 0.237   1.089 0.225   0.734 0.155   0.476 0.098   0.305 0.063
5       1.672 0.294   1.398 0.275   0.977 0.190   0.665 0.120   0.457 0.078
6       2.019 0.352   1.708 0.325   1.220 0.225   0.854 0.142   0.608 0.093

n = 30
2       0.635 0.099   0.462 0.103   0.244 0.070   0.097 0.043   0.002 0.028
3       0.977 0.142   0.768 0.141   0.483 0.096   0.284 0.060   0.152 0.039
4       1.319 0.187   1.074 0.179   0.725 0.123   0.471 0.078   0.303 0.051
5       1.661 0.232   1.379 0.218   0.965 0.151   0.659 0.096   0.453 0.063
6       2.003 0.278   1.685 0.258   1.206 0.178   0.846 0.114   0.604 0.075

n = 40
2       0.637 0.084   0.459 0.088   0.242 0.060   0.096 0.037   0.002 0.024
3       0.977 0.121   0.762 0.120   0.481 0.082   0.282 0.052   0.152 0.034
4       1.317 0.160   1.066 0.153   0.720 0.105   0.469 0.067   0.302 0.044
5       1.656 0.198   1.370 0.186   0.959 0.129   0.656 0.082   0.452 0.054
6       1.996 0.237   1.673 0.220   1.199 0.152   0.843 0.097   0.602 0.064

n = 50
2       0.639 0.075   0.456 0.079   0.241 0.053   0.095 0.033   0.001 0.021
3       0.978 0.108   0.759 0.107   0.479 0.073   0.281 0.046   0.151 0.030
4       1.316 0.141   1.061 0.136   0.718 0.093   0.468 0.059   0.301 0.039
5       1.654 0.175   1.364 0.165   0.956 0.114   0.654 0.073   0.451 0.048
6       1.993 0.210   1.666 0.195   1.194 0.135   0.840 0.087   0.601 0.057

n = 60
2       0.641 0.068   0.455 0.071   0.240 0.048   0.095 0.030   0.001 0.019
3       0.978 0.098   0.756 0.097   0.478 0.066   0.281 0.042   0.151 0.027
4       1.316 0.128   1.058 0.123   0.716 0.085   0.467 0.054   0.301 0.036
5       1.653 0.159   1.360 0.150   0.954 0.104   0.653 0.066   0.450 0.044
6       1.991 0.190   1.662 0.177   1.192 0.123   0.839 0.079   0.600 0.052

n = 70
2       0.642 0.063   0.454 0.066   0.239 0.044   0.094 0.028   0.001 0.018
3       0.979 0.090   0.755 0.089   0.477 0.061   0.280 0.039   0.151 0.025
4       1.316 0.118   1.056 0.114   0.715 0.078   0.466 0.050   0.300 0.033
5       1.653 0.147   1.357 0.138   0.952 0.096   0.652 0.061   0.450 0.040
6       1.990 0.175   1.659 0.163   1.190 0.113   0.838 0.073   0.599 0.048

n = 80
2       0.643 0.058   0.453 0.061   0.239 0.041   0.094 0.026   0.001 0.017
3       0.980 0.084   0.754 0.083   0.476 0.057   0.280 0.036   0.150 0.024
4       1.316 0.110   1.055 0.106   0.714 0.073   0.466 0.047   0.300 0.031
5       1.653 0.137   1.355 0.129   0.951 0.089   0.651 0.057   0.449 0.038
6       1.989 0.163   1.656 0.152   1.188 0.105   0.837 0.068   0.599 0.045

n = 90
2       0.644 0.055   0.452 0.058   0.239 0.039   0.094 0.024   0.001 0.016
3       0.980 0.079   0.753 0.078   0.476 0.054   0.280 0.034   0.150 0.022
4       1.316 0.104   1.053 0.100   0.713 0.069   0.465 0.044   0.300 0.029
5       1.653 0.129   1.354 0.121   0.950 0.084   0.651 0.054   0.449 0.036
6       1.989 0.154   1.654 0.143   1.187 0.099   0.837 0.064   0.599 0.042

n = 100
2       0.645 0.052   0.452 0.055   0.238 0.037   0.094 0.023   0.001 0.015
3       0.981 0.075   0.752 0.074   0.475 0.051   0.279 0.032   0.150 0.021
4       1.317 0.098   1.052 0.094   0.712 0.065   0.465 0.042   0.300 0.027
5       1.653 0.122   1.353 0.115   0.949 0.079   0.651 0.051   0.449 0.034
6       1.988 0.146   1.653 0.135   1.186 0.094   0.836 0.060   0.599 0.040
Table 3.4 Values of Cpmk when T = m

        |ξ−m|/σ
d/σ     0.0     0.5     1.0     1.5     2.0
2       0.667   0.447   0.236   0.0925  0.000
3       1.000   0.745   0.471   0.277   0.149
4       1.333   1.043   0.707   0.462   0.298
5       1.667   1.342   0.943   0.647   0.447
6       2.000   1.640   1.179   0.832   0.596
In particular using (2.16a, b)
E[Cpm(a)]
= { 1-
and It is suggested that we consider the class of PCls defined hy
'
j Cpm(a)=(1-a(2)Cp
E[{C
(3.53)
pm
2 2a(n-l) } (a) ]= { 1-(I+n(Z)+
n(n-5)
X- T
Cpm(a)={ 1-a (
s-
Z
)
-
d X- T s} CP=3S { 1-a
(
Assuming normal (N(~, (JZ»variation, Cpm(a)is distributed as (n-1)1 Xn-l {
1- a(n; 1)(U+ '~)Z nXn-l
}
Cp
(3.55)
where U and Xn- 1 are mutually independent and U is a unit normal variable. From (3.55),the r-th moment of Cpm(a)is E[{ Cpm(a)}'] =
. '
-
~Z(n 5)(n -7) (3 + 6n(2 + n2(4) } E[C~J
(3.56c) It must be remembered that the Cpm(a)indices suffer from the same drawback as that noted for Cpmin section 3.2, namely, that if T is not equal to m=:hLSL+ USL), the same value of
Z
) } (3.54)
'.
(3.56 b)
a2(11-1)2
where a is a positive constant, chosen with regard tq the desired balance between the effects of variability and relative departure from target value. A natural estimator of Cpm(a)is
-
a(n -1) 2 n(n-4) (1 + n( )} E[Cp]
t,
Cpm(a),obtained
when c;=T-~ or c;=T+~, can correspond
to markedly different expected proportions of NC items. It is possible for a C"m(a)indcx to bc negative(unlikeC" or Cpn" but like Cpd. Sincc this occurs. only for large values of "I, the not relative from the target value T, this . would be a departure drawback. of c; GUpla and Kolz (1993) have labulated values of E[Cpm(1)J and S.D.(Cpm(!). , APPENDIX 3.A: A MINIMUM VARIANCE UNBIASED ESTIMATOR FOR W=(9C:,;r1
This appendix is for more mathematically-inclined readers. If the independent random variables X1, ..., Xn each have the normal distribution with expected value ξ and variance σ², then …

Now, the inequality Xi < T is equivalent to …

Fig. 3.2 The function gn(x) for various sample sizes.
Boyles, R.A. (1991) The Taguchi capability index, J. Qual. Technol., 23.
Boyles, R.A. (1992) Cpm for asymmetrical tolerances, Tech. Rep., Precision Castparts Corp., Portland, Oregon.
Chan, L.K., Cheng, S.W. and Spiring, F.A. (1988a) A new measure of process capability, Cpm, J. Qual. Technol., 20, 160-75.
Chan, L.K., Cheng, S.W. and Spiring, F.A. (1988b) A graphical technique for process capability, Trans. ASQC Congress, Dallas, Texas, 268-75.
Chan, L.K., Xiong, Z. and Zhang, D. (1990) On the asymptotic distributions of some process capability indices, Commun. Statist. - Theor. Meth., 19, 11-18.
Cheng, S.W. (1992) Is the process capable? Tables and graphs in assessing Cpm, Quality Engineering, 4, 563-76.
Chou, Y.-M., Owen, D.B. and Borrego, S.A. (1990) Lower confidence limits on process capability indices, Quality Progress, 23(7), 231-236.
David, H.A., Hartley, H.O. and Pearson, E.S. (1954) Distribution of the ratio, in a single normal sample, of range to standard deviation, Biometrika, 41, 482-93.
Grant, E.L. and Leavenworth, R.S. (1988) Statistical Quality Control (6th edn), McGraw-Hill: New York.
Gupta, A.K. and Kotz, S. (1993) Tech. Rep., Bowling Green State University, Bowling Green, Ohio.
Haight, F.A. (1967) Handbook of the Poisson Distribution, Wiley: New York.
Hsiang, T.C. and Taguchi, G. (1985) A Tutorial on Quality Control and Assurance - The Taguchi Methods, Amer. Statist. Assoc. Annual Meeting, Las Vegas, Nevada (188 pp.).
Johnson, N.L., Kotz, S. and Kemp, A.W. (1993) Distributions in Statistics - Discrete Distributions (2nd edn), Wiley: New York.
Johnson, T. (1991) A new measure of process capability related to Cpm, MS, Dept. Agric. Resource Econ., North Carolina State University, Raleigh, North Carolina.
Kane, V.E. (1986) Process capability indices, J. Qual. Technol., 18, 41-52.
Kushler, R. and Hurley, P. (1992) Confidence bounds for capability indices, J. Qual. Technol., 24, 188-195.
Marcucci, M.O. and Beazley, C.F. (1988) Capability indices: Process performance measures, Trans. ASQC Congress, 516-23.
Mirabella, J. (1991) Determining which capability index to use, Quality Progress, 24(8), 10.
Patnaik, P.B. (1949) The non-central χ²- and F-distributions and their applications, Biometrika, 36, 202-32.
Pearn, W.L., Kotz, S. and Johnson, N.L. (1992) Distributional and inferential properties of process capability indices, J. Qual. Technol., 24, 216-31.
Spiring, F.A. (1989) An application of Cpm to the tool wear problem, Trans. ASQC Congress, Toronto, 123-8.
Spiring, F.A. (1991a) Assessing process capability in the presence of systematic assignable cause, J. Qual. Technol., 23, 125-34.
Spiring, F.A. (1991b) The Cpm index, Quality Progress, 24(2), 57-61.
Subbaiah, P. and Taam, W. (1991) Inference on the capability index: Cpm, MS, Dept. Math. Sci., Oakland University, Rochester, Michigan.

4

Process capability indices under non-normality: robustness properties
4.1 INTRODUCTION

The effects of non-normality of the distribution of the measured characteristic, X, on properties of PCIs have not been a major research item until quite recently, although some practitioners have been well aware of possible problems in this respect. In his seminal paper, Kane (1986) devoted only a short paragraph to these problems, in a section dealing with 'drawbacks':

'A variety of processes result in a non-normal distribution for a characteristic. It is probably reasonable to expect that capability indices are somewhat sensitive to departures from normality. Also, it is possible to estimate the percentage of parts outside the specification limits, either directly or with a fitted distribution. This percentage can be related to an equivalent capability for a process having a normal distribution.'

A more alarmist and pessimistic assessment of the 'hopelessness' of meaningful interpretation of PCIs (in particular, of Cpk) when the process is not normal is presented in Parts 2 and 3 of Gunter's (1989) series of four papers. Gunter emphasizes the difference between 'perfect' (precisely normal)
and 'occasionally erratic' processes, e.g. contaminated processes with probability density functions of the form

    p φ(x; ξ1, σ1) + (1 − p) φ(x; ξ2, σ2)    (4.1)

where 0 < p < 1, (ξ1, σ1) ≠ (ξ2, σ2) and

    φ(x; ξ, σ) = {√(2π) σ}^(−1) exp{−(x − ξ)²/(2σ²)}

… will provide such an insight into several features of a distribution. Indeed one of us (NLJ) has some sympathy with the …

If … (δ > 0) and p1 = p3 = p, say, the noncentrality parameter is
    {n1 + n3 − … (n3 − n1)²} δ²    (4.13)
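For simulation studies of the kind reported below, draws from the contaminated density (4.1) are easy to generate: with probability p take a value from the first normal component, otherwise from the second. The following sketch is our own illustration (names and the example parameters are ours, not the book's):

```python
import random

def contaminated_normal(p, xi1, sd1, xi2, sd2, rng):
    """One draw from p*N(xi1, sd1^2) + (1 - p)*N(xi2, sd2^2),
    the contaminated process density (4.1)."""
    if rng.random() < p:
        return rng.gauss(xi1, sd1)
    return rng.gauss(xi2, sd2)

# A process that is N(0, 1) most of the time but, with probability
# 0.05, produces values from the wider component N(0, 3^2).
rng = random.Random(0)
sample = [contaminated_normal(0.95, 0.0, 1.0, 0.0, 3.0, rng)
          for _ in range(5000)]
```

The mixture variance here is 0.95 x 1 + 0.05 x 9 = 1.4, so a standard deviation estimated from such data is inflated relative to the 'main' component, which is the mechanism behind the biases in Ĉp discussed below.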
Some values of E[Ĉp]/Cp and S.D.(Ĉp)/Cp, calculated from (4.11a) and (4.11b), are shown in Table 4.2 (based on Kotz …).

[Table 4.2 is printed rotated in the original and is not legible in this copy.]

… to the situation when p = 0 (no contamination), in which the bias of Ĉp is always positive, though decreasing as n increases. Gunter (1989) also observed a negative bias when the contaminating variable has a larger variance than the main one. Kocherlakota et al. (1992) provide more detailed tables for the case k = 2. They also derived the moments of Ĉpu = (USL − X̄)/(3S). The distribution of Ĉpu is essentially a mixture of doubly noncentral t distributions; see section 1.7.
4.2.2 Edgeworth series distributions

The use of Edgeworth expansions to represent mild deviations from normality has been quite fashionable in recent years. There is a voluminous literature on this topic, including Subrahmaniam* (1966, 1968a, b), who has been a pioneer in this field. It has to be kept in mind that there are quite severe limitations on the allowable values of √β1 and β2 needed to ensure that the PDF (4.2) remains positive for all x; see, e.g., Johnson and Kotz (1970, p. 18) and, for a more detailed analysis, Balitskaya and Zolotuhina (1988). Kocherlakota et al. (1992) show that for the process distribution (4.2) the exp…

The cases studied are shown below; (M), (L) and (R) denote ξ = ½(LSL + USL), ξ > ½(LSL + USL) and ξ < ½(LSL + USL), respectively. In all cases, T = 50.

        (M)    (L)    (R)    (M)    (L)    (R)    (M)    (L)    (R)
LSL    48.5   45.5   48.5   47.0   44.0   47.0   44.0   41.0   44.0
USL    51.5   51.5   54.5   53.0   53.0   56.0   56.0   56.0   59.0
Cp      0.5    1.0    1.0    1.0    1.5    1.5    2.0    2.5    2.5
Cpk     0.5    0.5    0.5    1.0    1.0    1.0    2.0    2.0    2.0
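The positivity restriction on expansions of the type (4.2) can be explored numerically. The sketch below uses the standard first-order Edgeworth form, the N(0, 1) density multiplied by Hermite-polynomial corrections in √β1 and β2 − 3; this is our rendering, not necessarily the book's exact parametrization:

```python
import math

def edgeworth_pdf(x, sqrt_b1, b2):
    """phi(x) * [1 + (sqrt_b1/6) He3(x) + ((b2 - 3)/24) He4(x)]."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    he3 = x ** 3 - 3 * x           # Hermite polynomial He3(x)
    he4 = x ** 4 - 6 * x * x + 3   # Hermite polynomial He4(x)
    return phi * (1 + sqrt_b1 / 6 * he3 + (b2 - 3) / 24 * he4)

def goes_negative(sqrt_b1, b2, lo=-6.0, hi=6.0, steps=2401):
    """Scan a grid for points where the 'density' is negative."""
    step = (hi - lo) / (steps - 1)
    return any(edgeworth_pdf(lo + i * step, sqrt_b1, b2) < 0
               for i in range(steps))
```

For √β1 = 1.5 (with β2 = 3) the function is already negative near x = −3, so such a pair lies outside the allowable region; the Hermite correction terms are unbounded polynomials, which is exactly why the admissible (√β1, β2) region is so restricted.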
4.2.3 Miscellaneous

Price and Price (1992) present values, estimated by simulation, of the expected values of Ĉp and Ĉpk for the following process distributions, numbered [1] to [13].

The value of E[Ĉp/Cp] does not depend on ξ, so we do not need to distinguish (M), (L) and (R); nor does it depend on Cp. Hence the simple Table 4.5 suffices. In this table, the distributions are arranged in increasing order of √β1.
Table 4.5  Simulated values of E[Ĉp/Cp]

Distribution             n = 10    n = 30    n = 100
[1]  (normal)            1.1183    1.0318    1.0128
[2]  (uniform)           1.0420    1.0070    1.0017
[3], [4]  (beta)         1.1171    1.0377    1.0137
[5]  (gamma)             1.1044    1.027     1.0091
[6]  (gamma)             1.1155    1.0371    1.0143
[7]  (gamma)             1.1527    1.0474    1.0146
[8]  (gamma)             1.2714    1.0801    1.0242
[9]  (gamma)             1.3478    1.1155    1.0449
[10] (gamma)             1.5795    1.1715    1.0449
[11] (gamma)             1.6220    1.2051    1.0664
[12] (gamma)             1.8792    1.2595    1.0850
[13] (gamma)             2.2152    1.2869    1.0966

These values do not depend on T. The values for the two beta distributions have been combined, as they should be the same.

Comparing the estimates for the normal distribution ([1]) with the correct values (1.0942, 1.0268 and 1.0077 for n = 10, 30 and 100 respectively), we see that the estimates are in excess by about 2% for n = 10, and ½% for n = 30 or 100.
Sampling variation is also evident in the progression of values for the nine gamma distributions, especially in the n = 100 values for distributions [6] and [7], and [9] and [10]. As skewness increases, so does E[Ĉp/Cp], reaching quite remarkable values for high values of √β1. These, however, correspond to quite remarkable (and, we hope and surmise, rare) forms of process distributions, having even higher values of √β1 than does the exponential distribution. For moderate skewness, the values of E[Ĉp/Cp] are quite close to those of the normal distribution (cf. Table 4.3).

Table 4.6 presents values of E[Ĉpk] estimated from simulation for a number of cases, selected from those presented by Price and Price (1992). We again note the extremely high positive bias, this time of Ĉpk as an estimator of Cpk, for distribution [13] when n = 10, and only relatively smaller biases when n = 30 and n = 100. For the exponential distribution [8] there is a quite substantial positive bias in Ĉpk. The bias is larger when ξ is greater than ½(LSL + USL), case (L), than when it is smaller, case (R). The greater among these biases for the exponential distribution, in Table 4.6, are of order 25-35% (when n = 10), falling to 2½-5% when n = 100. As for Ĉp, the results for the extreme distribution [13] are sharply discrepant from those for normal, and mildly skew, distributions. Of course, [13] is included only to exhibit the possibility of such remarkable biases, not to imply that they are anything like everyday occurrences.

Coming to the variability of the estimators Ĉp and Ĉpk, we note that the standard deviation of Ĉp might be expected to be approximately proportional to √(β2 − 1), where β2 (= μ4/σ⁴) is the kurtosis shape factor (see section 1.1) for the process distribution. We would therefore expect lower standard deviations for uniform process distributions (β2 = 1.8) than for normal (β2 = 3), and higher standard deviations when β2 > 3 (e.g. for the exponential [8], with β2 = 9). Values of √(β2 − 1) are included in Table 4.7, and the estimated standard deviations support these conjectures.

It should be realized that Tables 4.5-4.7, being based on simulations, can give only a broad global picture of variation in bias and standard deviation of Ĉp and Ĉpk with changes in the shape of the process distribution. More precise values await completion of relevant mathematical analyses, which we hope interested readers might undertake.
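Entries like those in Table 4.5 are easy to re-create. Since Ĉp/Cp = σ/S, the specification limits cancel, and only the process shape and n matter. A minimal Monte-Carlo sketch (ours; seed and replication count are arbitrary choices):

```python
import math
import random

def mean_cp_ratio(draw, sigma, n, reps, rng):
    """Monte-Carlo estimate of E[Cp-hat / Cp] = E[sigma / S]."""
    total = 0.0
    for _ in range(reps):
        x = [draw(rng) for _ in range(n)]
        m = sum(x) / n
        s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
        total += sigma / s
    return total / reps

rng = random.Random(1)
# Normal process, n = 10: the exact value of E[Cp-hat]/Cp is 1.0942.
ratio = mean_cp_ratio(lambda r: r.gauss(0.0, 1.0), 1.0, 10, 4000, rng)
```

Passing a gamma or uniform `draw` instead reproduces the qualitative pattern of Table 4.5: the more skew the process distribution, the larger the expected ratio.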
4.3 CONSTRUCTION OF ROBUST PCIs

The PCIs described in this section are not completely distribution-free, but are intended to reduce the effects of non-normality.
Table 4.6  Values of E[Ĉpk] from simulation (Price and Price (1992))

[The body of this table, giving E[Ĉpk] for Cpk = 0.5, 1.0 and 2.0 and n = 10, 30 and 100, for process distributions [1], [2], [3]/[4], [7], [8] and [13] and their (L)/(R) variants, is too garbled in this copy to reproduce. The corresponding values of min(ξ − LSL, USL − ξ)/d are 1 for the (M) cases, and 1/2, 2/3 and 4/5 for the (L)/(R) cases with Cpk = 0.5, 1.0 and 2.0 respectively.]

Notes: (a) When min(ξ − LSL, USL − ξ)/d = 1, we have ξ = ½(LSL + USL), so only (M) applies. (b) Since [3] and [4] are mirror images, results can be merged as shown. (c) For symmetrical distributions (normal [1] and uniform [2]) the L and R values should be the same, and so they have been averaged.
Table 4.7  Estimates of S.D.(Ĉp) and S.D.(Ĉpk) from simulation (Price and Price (1992))

Distribution         √(β2 − 1)    sgn{ξ − ½(LSL + USL)}
Normal [1]            1.414        0 (M); 1 (L), −1 (R)
Uniform [2]           0.894        0 (M); 1 (L), −1 (R)
χ²(4.5) [7]           2.160        0 (M); 1 (L), −1 (R)
Exponential [8]       2.828        0 (M); 1 (L), −1 (R)

[The columns of simulated values of S.D.(Ĉp) and S.D.(Ĉpk) for n = 30 and n = 100 are too garbled in this copy to reproduce.] Here sgn(h) = 1 for h > 0, 0 for h = 0 and −1 for h < 0. Values for S.D.(Ĉpk) correspond to process distributions with Cpk = 1.
Table 4.8  0.135% and 99.865% points of standardized Pearson curves with positive skewness (√β1 ≥ 0). For each (√β1, β2) combination the upper row contains the 0.135% point and the lower row the 99.865% point.

β2 \ √β1   0.0     0.2     0.4     0.6     0.8     1.0     1.2     1.4     1.6     1.8

1.8      −1.727  −1.496  −1.230  −0.975  −0.747
          1.727   1.871   1.896   1.803   1.636

2.2      −2.210  −1.912  −1.555  −1.212  −0.927  −0.692
          2.210   2.400   2.454   2.349   2.108   1.822

2.6      −3.000  −2.535  −1.930  −1.496  −1.125  −0.841  −0.616
          3.000   2.869   2.969   2.926   2.699   2.314   1.928

3.0      −3.000  −2.689  −2.289  −1.817  −1.356  −1.000  −0.739  −0.531
          3.000   3.224   3.358   3.385   3.259   2.914   2.405   1.960

3.4      −3.261  −2.952  −2.589  −2.127  −1.619  −1.178  −0.865  −0.634
          3.261   3.484   3.639   3.175   3.681   3.468   2.993   2.398

3.8      −3.458  −3.118  −2.821  −2.396  −1.887  −1.381  −1.000  −0.736  −0.533
          3.458   3.678   3.844   3.951   3.981   3.883   3.861   2.945   2.322

4.2      −3.611  −3.218  −2.983  −2.616  −2.132  −1.602  −1.149  −0.840  −0.617
          3.611   3.724   3.997   4.124   4.194   4.177   3.496   3.529   2.798

4.6      −3.731  −3.282  −3.092  −2.787  −2.345  −1.821  −1.316  −0.950  −0.701  −0.510
          3.731   3.942   4.115   4.253   4.351   4.386   4.311   4.015   3.364   2.609

5.0      −3.828  −3.325  −3.167  −2.915  −2.524  −2.023  −1.494  −1.068  −0.785  −0.580
          3.828   4.034   4.208   4.354   4.468   4.539   4.532   4.372   3.907   3.095
Moment estimators for (4.17) and (4.18) are obtained by taking a random sample X1, ..., Xn of size n and calculating …

Chan et al. (1988) proposed the following method of obtaining 'distribution-free' PCIs. They note that the estimator S in the denominator of Ĉp is used solely to estimate the length (6σ) of the interval covering 99.73% of the values of X (on the twin assumptions of normality and perfect centring, at ξ = ½(LSL + USL)). They propose using distribution-free tolerance intervals, as defined, for example, in Guenther (1985) (not to be confused with Gunter (1989)!), to estimate this length. These tolerance intervals (widely used in statistical methodology for the last 50 years) are designed to include at least 100β% of a distribution with preassigned probability 100(1 − α)%, for given β (usually close to 1) and α (usually close to zero). In the PCI framework, a natural choice for β would be 0.9973, with, perhaps, α = 0.05. Unfortunately, construction of such intervals would require prohibitively large samples (of size 1000 or more). Chan et al. (1988) suggest that this difficulty can be overcome by taking

  β = 0.9546 and using 3/2 × (length of tolerance interval) in place of 6σ, or
  β = 0.6826 and using 3 × (length of tolerance interval) in place of 6σ,

since for a normal distribution the interval (ξ − 2σ, ξ + 2σ) of length 4σ contains 95.46% of the values, and the interval (ξ − σ, ξ + σ) of length 2σ contains 68.26%. It is here that we must part company with them, as it seems unreasonable to use relationships based on normal distributions to estimate values which are supposed to be distribution-free!

According to Hall (1992), a leading expert on bootstrap methodology, one of the difficulties in applying the bootstrap to process capability indices is that 'these indices are ratios of random variables, with a significant amount of variability in the variable in the denominator'. (Similar situations exist in regard to estimation of a correlation coefficient, and of the ratio of two expected values. The bootstrap performs quite poorly in these two (better-known) contexts.) Franklin and Wasserman (1991) distinguish three types of bootstrap confidence intervals for Cpk, discussed below.

One thousand bootstrap samples are obtained by re-sampling from the observed values X1, X2, ..., Xn (with replacement), and Ĉpk is calculated from each new 'sample'. This is repeated many (g) times and we obtain a set of values Ĉ[1]pk, Ĉ[2]pk, ..., Ĉ[g]pk, which we regard as approximating the distribution of Ĉpk in samples of size n; this estimate is the bootstrap distribution. (The theoretical basis of this method is that we use the empirical cumulative distribution from the first sample, assigning probability n⁻¹ to each value, as an approximation to the true CDF of X.) Practice has indicated that a minimum of 1000 bootstrap samples are needed for a reliable calculation of bootstrap confidence intervals for Cpk.

4.3.4a The 'standard' confidence interval

The arithmetic mean Ĉ*pk and standard deviation S*(Ĉpk) of the bootstrapped Ĉpks are calculated. The 100(1 − α)% confidence interval is then

    (Ĉ*pk − z_{1−α/2} S*(Ĉpk), Ĉ*pk + z_{1−α/2} S*(Ĉpk))    (4.19)

where Φ(z_{1−α/2}) = 1 − α/2.

4.3.4b The percentile confidence interval

The 1000 Ĉpks are ordered as Ĉpk(1) ≤ Ĉpk(2) ≤ ... ≤ Ĉpk(1000). The confidence interval is then (Ĉpk(⟨500α⟩), Ĉpk(⟨500(2 − α)⟩)), where ⟨ ⟩ denotes 'nearest integer to'.

4.3.4c The bias-corrected percentile confidence interval

This is intended to produce a shorter confidence interval by allowing for the skewness of the distribution of Ĉpk. Guenther (1985) pointed out the possibility of doing this in the general case, and Efron (1982) developed a method applicable in bootstrapping situations. The first step is to locate the observed Ĉpk in the bootstrap order statistics Ĉpk(1) ≤ ... ≤ Ĉpk(1000). For example, if we have Ĉpk = 1.42 from the original data, and among the bootstrapped values we find

    Ĉpk(365) = 1.41 and Ĉpk(366) = 1.43

then we estimate Pr[Ĉpk ≤ 1.41] as 365/1000 = 0.365 = p0, say. We then calculate z0 = Φ⁻¹(p0) (i.e. Φ(z0) = p0); in our example, z0 = Φ⁻¹(0.365) = −0.345. We next calculate

    p_l = Φ(2z0 − z_{1−α/2})  and  p_u = Φ(2z0 + z_{1−α/2})

(in our case, with α = 0.05, z_{1−α/2} = Φ⁻¹(0.975) = 1.960, p_l = Φ(−2.650) = 0.004 and p_u = Φ(1.270) = 0.898), and form the confidence interval (Ĉpk(1000 p_l), Ĉpk(1000 p_u)). In our example, this is (Ĉpk(4), Ĉpk(898)). The rationale of this method is that, since the observed Ĉpk is not at the median of the bootstrap distribution (in our case, it is below the median), the confidence limits should be adjusted appropriately (in our case, lowered, because z0 < 0).
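The three intervals of sections 4.3.4a-c can be sketched as follows. This is our illustration, not code from the book: the normal quantile is obtained by bisection rather than from tables, and rank rounding near the extremes is clamped.

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    lo, hi = -10.0, 10.0
    for _ in range(80):                 # bisection on the normal CDF
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cpk_hat(x, lsl, usl):
    """Cpk-hat = min(USL - Xbar, Xbar - LSL)/(3 S)."""
    n = len(x)
    xbar = sum(x) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
    return min(usl - xbar, xbar - lsl) / (3.0 * s)

def bootstrap_cis(x, lsl, usl, b=1000, alpha=0.05, seed=0):
    """'Standard' (4.19), percentile, and bias-corrected percentile
    bootstrap intervals for Cpk (after Franklin and Wasserman (1991))."""
    rng = random.Random(seed)
    n = len(x)
    boots = sorted(cpk_hat([rng.choice(x) for _ in range(n)], lsl, usl)
                   for _ in range(b))
    z = norm_ppf(1.0 - alpha / 2.0)
    # (a) standard: bootstrap mean +/- z * bootstrap standard deviation
    mean = sum(boots) / b
    sd = math.sqrt(sum((v - mean) ** 2 for v in boots) / (b - 1))
    standard = (mean - z * sd, mean + z * sd)
    # (b) percentile: order statistics at ranks b*alpha/2 and b*(1 - alpha/2)
    percentile = (boots[int(b * alpha / 2)], boots[int(b * (1 - alpha / 2)) - 1])
    # (c) bias-corrected: shift the ranks by z0 = Phi^{-1}(p0), where p0 is
    # the proportion of replicates at or below the observed Cpk-hat
    p0 = sum(v <= cpk_hat(x, lsl, usl) for v in boots) / b
    z0 = norm_ppf(min(max(p0, 1.0 / b), 1.0 - 1.0 / b))
    pl, pu = norm_cdf(2 * z0 - z), norm_cdf(2 * z0 + z)
    corrected = (boots[max(int(b * pl) - 1, 0)], boots[min(int(b * pu), b) - 1])
    return standard, percentile, corrected

rng = random.Random(42)
data = [rng.gauss(10.0, 1.0) for _ in range(50)]
cis = bootstrap_cis(data, 7.0, 13.0)   # true Cpk is 1 for this process
```

With skewed process data, the bias-corrected interval (c) shifts noticeably relative to (b), which is exactly the adjustment Efron's z0 is designed to make.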
[A further table, giving values of E[Ĉpk]/Cpk and S.D.(Ĉpk)/Cpk for various n, is printed rotated in the original and is not legible in this copy.]
4.5 ASYMPTOTIC PROPERTIES

There are some large-sample properties of PCIs which apply to a wide range of process distributions, and so contribute to our knowledge of the behaviour of PCIs under non-normal conditions. Utilization of some of these properties calls for knowledge of the shape factors λ3 and λ4, and the need to estimate the values of these parameters can introduce substantial errors, as we have already noted. Nevertheless, asymptotic properties, used with a clear understanding of their limitations, can provide valuable insight into the nature of the indices.

Large-sample approximations to the distributions of Ĉp, Ĉpk and Ĉpm have been studied by Chan et al. (1990). Since Ĉp = d/(3S), the asymptotic distribution of Ĉp can easily be approached by considering the asymptotic distribution of S. Chan et al. (1990) provided a rigorous proof that √n(Ĉp − Cp) has an asymptotic normal distribution with mean zero and variance

    σ²p = ¼(β2 − 1)C²p    (4.29)

If β2 = 3 (as for a normal process distribution), σ²p = ½C²p.

For Ĉpk, Chan et al. (1990) found that if ξ ≠ m the asymptotic distribution of √n(Ĉpk − Cpk) is normal, with expected value zero and variance equal to that of

    −{⅓U1 + ½(β2 − 1)^(1/2) Cpk U2}    (4.31)

where U1 and U2 are standardized bivariate normal variables with correlation coefficient √β1/(β2 − 1)^(1/2). If the process distribution is normal, √β1 = 0 and β2 = 3, so that U1 and U2 are independent and

    σ²pk = 1/9 + ½C²pk    (4.32)

In the very special case when ξ = m (note that Cp = Cpk in this case), the limiting distribution of √n(Ĉpk − Cpk) is that of

    −{⅓|U1| + ½(β2 − 1)^(1/2) Cpk U2}    (4.33)

which for a normal process distribution becomes −{⅓|U1| + Cpk U2/√2}, where U1 and U2 are independent standardized normal variables.

For Cpm, we have √n(Ĉpm − Cpm) asymptotically normally distributed with expected value zero and variance

    σ²pm = {¼(β2 − 1) + √β1 δ + δ²}(1 + δ²)⁻² C²pm    (4.34)

where δ = (ξ − m)/σ (taking T = m). For δ = 0 and β2 = 3 this reduces to ½C²pm, agreeing with (4.29).
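The variance formula (4.29) is easy to check by simulation. The sketch below (ours; seed and sample sizes are arbitrary) compares the Monte-Carlo standard deviation of √n(Ĉp − Cp) with the limiting value {¼(β2 − 1)}^(1/2) Cp, which is about 0.707 Cp for a normal process:

```python
import math
import random

def sd_root_n_cp(draw, sigma, cp, n, reps, rng):
    """Monte-Carlo s.d. of sqrt(n)*(Cp-hat - Cp), using Cp-hat = Cp*sigma/S."""
    vals = []
    for _ in range(reps):
        x = [draw(rng) for _ in range(n)]
        m = sum(x) / n
        s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
        vals.append(math.sqrt(n) * (cp * sigma / s - cp))
    mean = sum(vals) / reps
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / (reps - 1))

rng = random.Random(7)
# Normal process, Cp = 1: (4.29) gives variance (1/4)(3 - 1)(1)^2 = 0.5,
# i.e. a limiting standard deviation of about 0.707.
sd = sd_root_n_cp(lambda r: r.gauss(0.0, 1.0), 1.0, 1.0, 200, 2000, rng)
```

Replacing the normal draw by a heavier-tailed one (larger β2) raises the simulated standard deviation in line with the ¼(β2 − 1) factor.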
… Then a multivariate PCI can be defined, taking

    L(x) = (x − T)′A⁻¹(x − T)    (5.28a)

as a loss function (generalizing L(x) = k(x − T)²). We have

    E[L(X)] = tr(A⁻¹V0) + (ξ − T)′A⁻¹(ξ − T)    (5.28b)

where tr(M) denotes the trace of M, i.e. the sum of the diagonal elements of M. Large values of E[L(X)] are, of course, 'bad', and small values 'good'. For readers' convenience we summarize, in Table 5.1, the scalar indices introduced in this chapter.
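Equation (5.28b) involves only a trace and a quadratic form, so it can be computed directly. A small self-contained sketch (ours), with A⁻¹ supplied as a nested list:

```python
def expected_loss(a_inv, v0, xi, target):
    """E[L(X)] = tr(A^{-1} V0) + (xi - T)' A^{-1} (xi - T), as in (5.28b).

    a_inv is A^{-1} and v0 the process variance-covariance matrix, both as
    nested lists; xi is the process mean vector and target the target T.
    """
    v = len(xi)
    # trace of the matrix product A^{-1} V0
    trace = sum(a_inv[i][k] * v0[k][i] for i in range(v) for k in range(v))
    d = [xi[i] - target[i] for i in range(v)]
    quad = sum(d[i] * a_inv[i][j] * d[j] for i in range(v) for j in range(v))
    return trace + quad

# With A = I, V0 = diag(1, 2), xi = (1, 0) and T = (0, 0):
# tr(V0) = 3 and the quadratic form is 1, so E[L(X)] = 4.
```

The two terms separate the 'variability' contribution tr(A⁻¹V0) from the 'off-target' contribution, mirroring the univariate decomposition behind Cpm.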
3. Chen (1992) has noted that a broad class of PCIs can be defined in the following way. Suppose the specification region is of the form {x: h(x − T) ≤ r}. A multivariate PCI can then be defined as MCp = r/r0, where r0 satisfies Pr[h(X − T) > r0] = α. This ensures that if MCp = 1 the expected proportion of NC items is α. This definition includes, for example, the rectangular specification region |Xj − Tj| < … For v > 2, calculation of component (3) is cumbersome and, more importantly, there is no practically relevant …
I
i
Ii'
r
i \
I I
Let F be a v × v matrix such that

    F V0 F′ = I    (5.31a)

and

    F A F′ = diag(λ)    (5.31b)

where diag(λ) is a v × v diagonal matrix in which the diagonal elements λ1, ..., λv are the roots of the determinantal equation

    |A − λV0| = 0    (5.32)

and I = diag(1) is the v × v identity matrix. Note that (5.31a) can be rewritten

    F′F = V0⁻¹    (5.31c)

and (5.31b) as

    (F⁻¹)′A⁻¹F⁻¹ = diag(1/λ)    (5.31d)
If Y = F(X − ξ) then Y has a v-dimensional multinormal distribution with expected value 0 and variance-covariance matrix F V0 F′ = I, from (5.31a). Then, since X − ξ = F⁻¹Y, … (W is the weighted sum of v independent χ² variables.) The general distribution of W in this case has been studied by …, with percentage points of the distribution tabulated for v = 4 and v = 5 for various combinations of values of the λs, subject to the condition … These 100(1 − α)% percentage points are values of c_α such that Pr[W < c_α] = 1 − α. From the tables one can see how c_α varies with variation in the values of the λs. Some values for v = 4 are shown in Table 5.2.
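Percentage points of a weighted sum of independent χ² variables, such as the c_α above, can also be approximated by simulation when tables are not at hand. A rough sketch (ours; the replication count is an arbitrary choice):

```python
import random

def w_percentage_point(lambdas, alpha, reps=20000, seed=3):
    """Monte-Carlo c such that Pr[W < c] is approximately 1 - alpha,
    where W = sum_j lambda_j * Z_j^2 with independent standard normal Z_j."""
    rng = random.Random(seed)
    ws = sorted(sum(lam * rng.gauss(0.0, 1.0) ** 2 for lam in lambdas)
                for _ in range(reps))
    return ws[int((1.0 - alpha) * reps) - 1]

# Equal weights lambda_j = 1 for v = 4 reduce W to a chi-squared variable
# with 4 degrees of freedom, whose 95% point is 9.488.
c = w_percentage_point([1.0, 1.0, 1.0, 1.0], 0.05)
```

Unequal λs (e.g. [2.0, 1.0, 0.5, 0.5]) show how c_α moves as the weights spread out, which is what the tabulated values for v = 4 illustrate.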
… (n ≥ 6, say), even if the process distribution is not normal. We can therefore use a normal distribution for X̄ with some confidence. On the other hand, none of the sampling distributions of PCIs in this book is normal, even when the process distribution is perfectly normal.

Tables of the kind included in this book provide adequate background for interpreting estimates of PCIs when the process distribution is normal. In our opinion, objections voiced by some opponents of PCIs, based on noting that the distribution of estimates of PCIs when the process distribution is non-normal is unknown and may hide unpleasant surprises, are somewhat exaggerated. The results of studies like those of Price and Price (1992) and Franklin and Wasserman (1992), summarized in Chapter 4, provide some reassurance, at the same time indicating where real danger may lurk. In particular, the effects of kurtosis (β2 < 3, or more importantly β2 ≫ 3) are notable.

We appreciate greatly the efforts of those involved in these enquiries, although we have to remark, with all due respect, that their methods (of simulation and bootstrapping) do not, as yet, provide a sufficiently clearcut picture for theoreticians to guess the exact form of the sampling distribution, on the lines of W.S. Gosset ('Student')'s classical work on the distribution of X̄/S in 1908, wherein hand calculation coupled with perceptive intuition led to a correct conjecture, deriving the t-distribution.
Index

Approximations 10, 12, 46, 109
Asymmetry 113
Asymptotic distributions 170
Attributes 78
Bayesian methods 112, 151
Beta distributions 22, 118, 121, 128, 148, 172
  functions 22
    incomplete 23
  ratio, incomplete 23, 80, 129
Bias 33, 57, 98
Binomial distributions 15, 114, 172
Bivariate normal distributions 179
Blackwell-Rao theorem 33, 80, 133
Bootstrap 161
Capability indices, process, see Process capability indices
Chi-squared distributions 18, 45, 75, 137, 172, 181
  noncentral 20, 99, 111, 119, 141, 198
Clements' method 156
Coefficient of variation 7
Computer program 203
Confidence intervals 45, 70, 107
  for Cp 45, 47
  for Cpk 69
  for Cpm 107, 187
Contaminated distributions 136, 140
Correlated observations 76, 106, 179
Correlation 7, 29, 78, 82, 106, 181
Cumulative distribution function 4
Delta method, see Statistical differential method
Density function, see Probability density function
Distribution function, cumulative 4
Distribution-free PCIs 160
Edgeworth series 140, 146
Ellipsoid of concentration 184
Estimation 43, 98
Estimator 32
  maximum likelihood 32
  moment 32
  unbiased 79
  unbiased minimum variance (MVUE) 79, 127
Expected proportion NC 40, 53, 92, 183
Expected value 6
Exponential distributions 19, 138, 149
Folded distributions 26
  beta 27, 174
  normal 26, 56, 171
Gamma distributions 18, 149
  function 18
    incomplete 19
  ratio 19
    incomplete 20
Gram-Charlier series, see Edgeworth series
History 1, 54, 89
Hotelling's T² 195
Hypothesis 35
Independence 7
Interpercentile distance 164
Johnson-Kotz-Pearn method 156
Kurtosis 8
Likelihood 32
  maximum 32
  ratio test 35, 75
Loss function 91, 181, 192
Maximum likelihood 32
Mean deviation 47
Mean square error 92, 101
Mid-point 39
Mixture distributions 27, 99, 121, 169
Moments 7, 59, 102, 122, 168
Multinomial distributions 16, 141
Multinormal distributions 29, 181, 198
Noncentral chi-squared distributions, see Chi-squared distributions, noncentral
Noncentral t-distributions, see t-distributions, noncentral
Nonconforming (NC) product 38
Non-normality 135
Normal distributions 16, 53, 161
  bivariate, see Bivariate normal distributions
  folded, see Folded distributions, normal
Odds ratio 78
Parallelepiped 178, 194
Pearson-Tukey (stabilizing) results 157
Percentile (percentage) point 18, 190, 199
Performance indices, process, see Process performance indices
Poisson distributions 15, 21, 99, 107
Probability density function 5
Process capability indices (PCIs)
  Cjkp 165
  Cp 38; Ĉp 186
  Cpg 90
  Cpk 51; Ĉpk 67; C'pk 66; Cpkl 55; Cpku 55; Cpk(θ) 157
  Cpm 90, 187
  MCp 193; MĈp 188
  comparison of 74, 95
  flexible 164
  for attributes 78
  for multivariate data 177
  vector-valued 194
Process performance indices (PPIs)
  Pp 41
  Ppk 55
Quadratic form 30
Ratio 47
Robustness 151
Semivariance 116, 153
Shape factor 8, 67, 151
  kurtosis 8
  skewness 8, 149
Significance test 35
Simulation 75, 138
Skewness 8, 149
Specification limit 38