Signals A₁, A₂, ..., Aₘ are transmitted with equal probabilities through a communication channel. In the absence of noise, the symbol a_j corresponds to the signal A_j (j = 1, 2, ..., m). In the presence of noise, each symbol is received correctly with probability p and is distorted into another symbol with probability q = 1 − p. Evaluate the average quantity of information per symbol in the cases of absence and of presence of noise.
28.16 Signals A₁, A₂, ..., Aₘ are transmitted through a communication channel with equal probabilities. In the absence of noise, the symbol a_j corresponds to the signal A_j (j = 1, 2, ..., m). Because of the presence of noise, the signal A_j can be received correctly with probability p_jj or as the symbol a_i with probability p_ij (i, j = 1, 2, ..., m; Σ_{i=1}^m p_ij = 1). Estimate the average quantity of information per symbol transmitted through the channel whose noise is characterized by the matrix ‖p_ij‖.
VI
THE LIMIT THEOREMS
29. THE LAW OF LARGE NUMBERS

Basic Formulas

If a random variable X has a finite variance, then for any ε > 0 Chebyshev's inequality holds:

    P(|X − x̄| ≥ ε) ≤ D[X]/ε².
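A quick numerical illustration of Chebyshev's inequality (not from the original text; the exponential distribution and the value ε = 2 are arbitrary choices):

```python
# Compare the empirical probability P(|X - mean| >= eps) with the
# Chebyshev bound D[X]/eps^2 for an exponential random variable.
import math
import random

random.seed(0)

n = 200_000
lam = 1.0                 # rate of the (arbitrarily chosen) exponential law
mean = 1.0 / lam          # expectation
var = 1.0 / lam ** 2      # variance D[X]

eps = 2.0
samples = [random.expovariate(lam) for _ in range(n)]
p_emp = sum(abs(x - mean) >= eps for x in samples) / n
bound = var / eps ** 2

print(p_emp, bound)       # the empirical probability stays below the bound
```

For this distribution the exact probability is e⁻³ ≈ 0.05, far below the (deliberately crude) bound 0.25.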
If X₁, X₂, ..., Xₙ, ... is a sequence of pairwise independent random variables whose variances are bounded by the same constant, D[X_k] ≤ C, k = 1, 2, ..., then for any constant ε > 0,

    lim_{n→∞} P( |(1/n) Σ_{k=1}^n X_k − (1/n) Σ_{k=1}^n x̄_k| < ε ) = 1

(Chebyshev's theorem). If the random variables X₁, X₂, ..., Xₙ, ... all have the same distribution and have the finite expectation x̄, then for any constant ε > 0,

    lim_{n→∞} P( |(1/n) Σ_{k=1}^n X_k − x̄| < ε ) = 1

(Khinchin's theorem). For a sequence of dependent random variables X₁, X₂, ..., Xₙ, ... satisfying the condition

    lim_{n→∞} (1/n²) D[ Σ_{k=1}^n X_k ] = 0,

for any constant ε > 0 we have

    lim_{n→∞} P( |(1/n) Σ_{k=1}^n X_k − (1/n) Σ_{k=1}^n x̄_k| < ε ) = 1
(Markov's theorem). In order that the law of large numbers be applicable to any sequence of dependent random variables X₁, X₂, ..., Xₙ, ..., i.e., that for any constant ε > 0 the relation

    lim_{n→∞} P( |(1/n) Σ_{k=1}^n X_k − (1/n) Σ_{k=1}^n x̄_k| < ε ) = 1
be fulfilled, it is necessary and sufficient that the following equality hold true:

    lim_{n→∞} M{ [ (1/n) Σ_{k=1}^n (X_k − x̄_k) ]² / ( 1 + [ (1/n) Σ_{k=1}^n (X_k − x̄_k) ]² ) } = 0.
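A minimal simulation of the law of large numbers in the simplest setting of Chebyshev's theorem; the uniform distribution on [−1, 1] is an arbitrary choice with variance 1/3 ≤ C:

```python
# Running mean of independent, bounded-variance variables with zero
# expectation: it concentrates near 0 as n grows.
import random

random.seed(1)

n = 100_000
total = 0.0
for _ in range(n):
    total += random.uniform(-1.0, 1.0)   # X_k: expectation 0, variance 1/3

mean_n = total / n
print(mean_n)    # small compared to the standard deviation of one X_k
```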
SOLUTION FOR TYPICAL EXAMPLES
Example 29.1 Prove that if φ(x) is a monotonically increasing positive function and M[φ(X)] = m exists, then

    P(X > t) ≤ m/φ(t).

SOLUTION. Taking into account the properties of φ(x), we obtain the chain of inequalities

    P(X > t) = ∫_{x>t} f(x) dx ≤ (1/φ(t)) ∫_{x>t} φ(x)f(x) dx ≤ (1/φ(t)) ∫_{−∞}^{+∞} φ(x)f(x) dx = m/φ(t),

since m = M[φ(X)] = ∫_{−∞}^{+∞} φ(x)f(x) dx. This implies that P(X > t) ≤ m/φ(t), which we wished to prove. Similarly one can solve Problems 29.2 to 29.5.

Example 29.2 Given a sequence of independent random variables X₁, X₂, ..., Xₙ, ... with the same distribution function

    F(x) = 1/2 + (1/π) arctan (x/a),

determine whether Khinchin's theorem can be applied to this sequence.

SOLUTION. For the applicability of Khinchin's theorem it is necessary that the expectation of the random variable X exist; i.e., that ∫_{−∞}^{+∞} x (dF(x)/dx) dx converge absolutely. However,
    ∫_{−∞}^{+∞} |x| (dF(x)/dx) dx = (2a/π) ∫₀^∞ x dx/(x² + a²) = (a/π) lim_{A→∞} ln (1 + A²/a²) = ∞;
i.e., the integral does not converge, the expectation does not exist and Khinchin's theorem is not applicable.

Example 29.3 Can the integral

    J = ∫_a^∞ (sin x)/x dx  (a > 0),

after the change of variable y = a/x, be computed by a Monte-Carlo method according to the formula

    J_n = (1/n) Σ_{k=1}^n (1/y_k) sin (a/y_k),

where y_k are random numbers on the interval [0, 1]?
SOLUTION. Performing the indicated change of variable, we obtain

    J = ∫₀¹ (1/y) sin (a/y) dy.

The quantity J_n can be considered an approximate value of J only if the limit equality lim_{n→∞} P(|J_n − J| < ε) = 1 holds true. The random numbers y_k have equal distributions and, thus, the functions (1/y_k) sin (a/y_k) also have equal distributions. To apply Khinchin's theorem, one should make sure that the expectation M[(1/Y) sin (a/Y)] exists, where Y is a random variable uniformly distributed over the interval [0, 1]; i.e., one should prove that ∫₀¹ (1/y) sin (a/y) dy converges absolutely. However, if we denote by s the minimal integer satisfying the inequality sπ ≥ a, then

    ∫₀¹ (1/y) |sin (a/y)| dy = ∫_a^∞ (|sin x|/x) dx ≥ Σ_{k=s}^∞ ∫_{kπ}^{(k+1)π} (|sin x|/x) dx ≥ (2/π) Σ_{k=s}^∞ 1/(k + 1) = ∞;

i.e., the integral does not converge absolutely, the expectation does not exist, Khinchin's theorem is not applicable and J cannot be computed by the proposed formula.

29.12 A sequence of independent and identically distributed random variables X₁, X₂, ..., X_i, ... is specified by the distribution series

    P(X_i = k) = 1/(k³ ζ(3))  (k = 1, 2, 3, ...),

where ζ(3) = Σ_{k=1}^∞ 1/k³ ≈ 1.20206 is the value of the Riemann zeta function for argument 3. Is the law of large numbers applicable to this sequence?
29.13 Given a sequence of random variables X₁, X₂, ..., Xₙ, ... for which D[Xₙ] ≤ C and r_ij → 0 for |i − j| → ∞ (r_ij is the correlation coefficient between X_i and X_j), prove that the law of large numbers can be applied to this sequence (Bernstein's theorem).
29.14 A sequence of independent and identically distributed random variables X₁, X₂, ..., X_i, ... is specified by the distribution series
(k = 1, 2, ...); determine whether the law of large numbers applies to this sequence.
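The divergence argument of Example 29.2 can be seen numerically; in the sketch below (the value a = 1 and the quadrature step are arbitrary choices) the truncated absolute moment of the distribution F(x) = 1/2 + (1/π) arctan (x/a) grows by about (2a/π) ln 10 for every tenfold increase of the truncation point A, so it has no finite limit:

```python
# Truncated absolute moment of the Cauchy-type density
# f(x) = a / (pi (x^2 + a^2)); it grows like log A, so M[|X|] = infinity.
import math

a = 1.0

def trunc_abs_moment(A, steps=200_000):
    # midpoint rule for (2a/pi) * integral_0^A x/(x^2 + a^2) dx
    h = A / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        s += x / (x * x + a * a)
    return 2.0 * a / math.pi * s * h

i1 = trunc_abs_moment(10.0)
i2 = trunc_abs_moment(100.0)
i3 = trunc_abs_moment(1000.0)
print(i1, i2, i3)
# successive differences are close to (2a/pi) ln 10 ~ 1.466
```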
30. THE DE MOIVRE-LAPLACE AND LYAPUNOV THEOREMS

Basic Formulas
According to the de Moivre-Laplace theorem, for a series of n independent trials in each of which an event A occurs with the same probability p (0 < p < 1), there obtains the relation

    lim_{n→∞} P( a ≤ (m − np)/√(npq) < b ) = (1/√(2π)) ∫_a^b e^{−t²/2} dt = (1/2)[Φ(b) − Φ(a)],

where m is the number of occurrences of the event A in n trials, q = 1 − p and

    Φ(x) = (2/√(2π)) ∫₀^x e^{−t²/2} dt.
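A numerical check of the de Moivre-Laplace approximation, comparing an exact binomial probability with the normal integral (the parameters n, p, a, b are arbitrary choices):

```python
# P(a <= (m - np)/sqrt(npq) < b) for a binomial m, versus the
# standard normal probability of the same interval.
import math

n, p = 1000, 0.3
q = 1.0 - p
mu = n * p
sigma = math.sqrt(n * p * q)

def binom_pmf(n, k, p):
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

a, b = -1.0, 2.0
lo = math.ceil(mu + a * sigma)          # smallest k with (k - mu)/sigma >= a
hi = math.ceil(mu + b * sigma)          # first k with (k - mu)/sigma >= b
exact = sum(binom_pmf(n, k, p) for k in range(lo, hi))
approx = 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))
print(exact, approx)
```

For n = 1000 the two values agree to a few thousandths.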
    f(x₁ | t₁), f(x₁, x₂ | t₁, t₂), f(x₁, x₂, x₃ | t₁, t₂, t₃), ...,

where f(x₁, ..., xₙ | t₁, ..., tₙ) is the density of the joint distribution of the values of the random function at the times (t₁, t₂, t₃, ..., tₙ). The expectation x̄(t)¹

¹ When not otherwise specified, X(t) is real.
THE CORRELATION THEORY OF RANDOM FUNCTIONS
and the correlation function K_x(t₁, t₂) are expressed in terms of the functions f(x₁ | t₁) and f(x₁, x₂ | t₁, t₂) by the formulas (for continuous random functions)²

    x̄(t) = ∫_{−∞}^∞ x f(x | t) dx,

    K_x(t₁, t₂) = ∫_{−∞}^∞ ∫_{−∞}^∞ x₁x₂ f(x₁, x₂ | t₁, t₂) dx₁ dx₂ − x̄(t₁)x̄(t₂).
For a normal stochastic process, the joint distribution at n times is completely defined by the functions x̄(t) and K_x(t₁, t₂) through the formulas for the distribution of a system of normal random variables with the expectations x̄(t₁), x̄(t₂), ..., x̄(tₙ) and with the covariance matrix whose elements are k_lj = K_x(t_l, t_j), l, j = 1, 2, ..., n.
The mutual correlation function R_xy(t₁, t₂) of two random functions X(t) and Y(t) is specified by the formula

    R_xy(t₁, t₂) = M{[X*(t₁) − x̄*(t₁)][Y(t₂) − ȳ(t₂)]} = R*_yx(t₂, t₁).

For stationary processes,

    R_xy(t₁, t₂) = R_xy(t₂ − t₁).

The notion of correlation function extends to random functions of several variables. If, for example, the random function X(ξ, η) is a function of two nonrandom variables, then

    K_x(ξ₁, η₁; ξ₂, η₂) = M{[X*(ξ₁, η₁) − x̄*(ξ₁, η₁)][X(ξ₂, η₂) − x̄(ξ₂, η₂)]}.
SOLUTION FOR TYPICAL EXAMPLES
The problems of this section are of two types. Those of the first type ask for the correlation function of a random function and for the general properties of correlation functions; in solving these problems one should start directly from the definition of the correlation function. The problems of the second type ask for the probability that the ordinates of a random function assume certain values; to solve them, it is necessary to use the corresponding normal distribution law specified by its expectation and correlation function.

Example 31.1 Find the correlation function K_x(t₁, t₂) if

    X(t) = Σ_{j=1}^k [A_j cos ω_j t + B_j sin ω_j t],

where ω_j are known numbers, the real random variables A_j and B_j are mutually uncorrelated and have zero expectations and variances defined by the equalities

    D[A_j] = D[B_j] = σ_j²  (j = 1, 2, ..., k).

² X(t) is considered real.
SOLUTION. Since x̄(t) = Σ_{j=1}^k (ā_j cos ω_j t + b̄_j sin ω_j t) = 0, by the definition of the correlation function

    K_x(t₁, t₂) = M{ Σ_{j=1}^k Σ_{l=1}^k [A_j cos ω_j t₁ + B_j sin ω_j t₁][A_l cos ω_l t₂ + B_l sin ω_l t₂] }.

If we open the parentheses and apply the expectation theorem, we notice that all the terms containing factors of the form M[A_jA_l], M[B_jB_l] for j ≠ l and M[A_jB_l] for any j and l are zero, and M[A_j²] = M[B_j²] = σ_j². Therefore,

    K_x(t₁, t₂) = Σ_{j=1}^k σ_j² cos ω_j(t₂ − t₁).

Similarly one can solve Problems 31.3 to 31.6 and 31.10.

Example 31.2 Let X(t) be a normal stationary random function with zero expectation. Prove that if

    Z(t) = (1/2) [ 1 + X(t)X(t + τ)/|X(t)X(t + τ)| ],

then

    z̄(t) = (1/π) arccos [−k_x(τ)],
where kx( T) is the normalized correlation function of X(t). SOLUTION. Using the fact that X(t) is normal, we see that the distribution law of second order can be represented as f(x 1 , x 2 I t, t
+ T)
=
1
27Ta;
V1 -
k;(T)
exp
{
-
X~
+ X~[ - _ 2kx( T)X1X2} k2( )] · 2ax2 1
xT
The required expectation can be represented in the form

    z̄(t) = ∫_{−∞}^∞ ∫_{−∞}^∞ (1/2)[1 + x₁x₂/|x₁x₂|] f(x₁, x₂ | t, t + τ) dx₁ dx₂.

Since (1/2)[1 + x₁x₂/|x₁x₂|] is identically equal to zero if the signs of the ordinates x₁ and x₂ are different, and equal to one otherwise, we see that

    z̄(t) = ∫₀^∞ ∫₀^∞ f(x₁, x₂ | t, t + τ) dx₁ dx₂ + ∫_{−∞}^0 ∫_{−∞}^0 f(x₁, x₂ | t, t + τ) dx₁ dx₂
         = 2 ∫₀^∞ ∫₀^∞ f(x₁, x₂ | t, t + τ) dx₁ dx₂,

which by integration leads to the result mentioned in the example. (For the integration it is convenient to introduce the new variables r, φ, setting x₁ = r cos φ, x₂ = r sin φ.)

PROBLEMS

31.1
Prove that
31.2 Prove that |R_xy(t₁, t₂)| ≤ σ_x σ_y.
31.3 Prove that the correlation function does not change if any nonrandom function is added to a random function.
31.4 Find the variance of a random function X(t) whose ordinates vary stepwise by quantities Δ_j at random times. The number of steps during a time interval τ obeys a Poisson distribution with a constant λτ, the magnitudes of the steps Δ_j are mutually independent with equal variances σ² and zero expectations, and X(0) is a nonrandom variable.
31.5 Find the correlation function of a random function X(t) assuming two values, +1 and −1; the number of changes of sign of the function obeys a Poisson distribution with a constant temporal density λ, and x̄(t) can be assumed zero.
31.6 A random function X(t) consists of segments of horizontal lines of unit length whose ordinates can assume either sign with equal probability, and whose absolute values obey the distribution law

    f(|x|) = [|x|^{λ−1}/Γ(λ)] e^{−|x|}  (gamma-distribution).

Evaluate K_x(τ).
31.7 The correlation function of the heel angle Θ(t) of a ship has the form
Find the probability that at the time t₂ = t₁ + τ the heel angle Θ(t₂) will be greater than 15°, if Θ(t) is a normal random function, θ̄ = 0, Θ(t₁) = 5°, τ = 2 sec., a = 30 deg.², α = 0.02 sec.⁻¹ and β = 0.75 sec.⁻¹.
31.8 It is possible to use a sonic depth finder on a rolling ship whose heel angle Θ(t) satisfies |Θ(t)| ≤ θ₀. The time for the first measurement is selected so that this condition is satisfied. Find the probability that the second measurement can be performed after τ₀ sec. if Θ(t) is a normal function, θ̄ = 0, and the variance D[Θ(t)] = σ_θ² and the normalized correlation function k(τ) = K_θ(τ)/σ_θ² are known.
31.9 The correlation function of the heel angle Θ(t) of a ship is K_θ(τ) = a e^{−α|τ|} . . .

Passing to polar coordinates, one easily calculates both integrals and obtains the result, where the normalized correlation function k_x(τ) is given by the formula

    k_x(τ) = k₀(τ) = e^{−α|τ|} (cos βτ − (α/β) sin β|τ|).

The required variance is

    D[Ψ(t)] = (4b²/π) ∫₀^t (t − τ) arcsin [e^{−α|τ|} (cos βτ − (α/β) sin β|τ|)] dτ.

The problem can be solved by another method, too. If we use the formula

    sgn x = (1/iπ) ∫_{−∞}^∞ (e^{iux}/u) du,

substitute it in the initial differential equation, integrate with respect to time and estimate the expectation of Ψ²(t), we obtain

    D[Ψ(t)] = −(b²/4π²) ∫₀^t ∫₀^t dt₁ dt₂ ∫_{−∞}^∞ ∫_{−∞}^∞ E(u₁, u₂) (du₁ du₂)/(u₁u₂),

where E(u₁, u₂) is the characteristic function of the system of normal variables X(t₁) and X(t₂). If we substitute in the last integral the expression for E(u₁, u₂) and integrate it three times, we find for D[Ψ(t)] the same expression as just obtained.
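The key relation behind the computation above — for jointly normal zero-mean variables, M[sgn X₁ · sgn X₂] = (2/π) arcsin k, where k is their correlation coefficient — can be checked by simulation (a sketch; the value k = 0.6 is an arbitrary choice):

```python
# Monte-Carlo check of the "arcsine law" for the sign of a Gaussian pair.
import math
import random

random.seed(2)

k = 0.6
n = 200_000
acc = 0.0
for _ in range(n):
    u = random.gauss(0.0, 1.0)
    v = random.gauss(0.0, 1.0)
    x1 = u
    x2 = k * u + math.sqrt(1.0 - k * k) * v   # corr(x1, x2) = k
    acc += math.copysign(1.0, x1) * math.copysign(1.0, x2)

emp = acc / n
theory = 2.0 / math.pi * math.asin(k)
print(emp, theory)
```

Equivalently, the probability that the two ordinates have the same sign is (1/2)(1 + (2/π) arcsin k) = (1/π) arccos(−k), the result of Example 31.2.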
Example 32.3 Find the expectation and correlation function of the random function

    Y(t) = a(t)X(t) + b(t) dX(t)/dt,

where a(t) and b(t) are given (numerical) functions, X(t) is a differentiable random function and x̄(t), K_x(t₁, t₂) are known.

SOLUTION. The function Y(t) is the result of applying the linear operator [a(t) + b(t) d/dt] to the random function X(t). Therefore, the required result can be obtained by applying the general formulas. However, the solution can be found more easily by direct computation of ȳ(t) and K_y(t₁, t₂). We have

    ȳ(t) = M[ a(t)X(t) + b(t) dX(t)/dt ] = a(t)x̄(t) + b(t) dx̄(t)/dt,

    K_y(t₁, t₂) = M{ [ a*(t₁)(X*(t₁) − x̄*(t₁)) + b*(t₁)(dX*(t₁)/dt₁ − dx̄*(t₁)/dt₁) ]
                  × [ a(t₂)(X(t₂) − x̄(t₂)) + b(t₂)(dX(t₂)/dt₂ − dx̄(t₂)/dt₂) ] }
                = a*(t₁)a(t₂)K_x(t₁, t₂) + a*(t₁)b(t₂) ∂K_x(t₁, t₂)/∂t₂
                  + b*(t₁)a(t₂) ∂K_x(t₁, t₂)/∂t₁ + b*(t₁)b(t₂) ∂²K_x(t₁, t₂)/∂t₁∂t₂.
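The formula of Example 32.3 can be checked by simulation in the special case of constant real a, b and the harmonic random function X(t) = A cos ωt + B sin ωt of Example 31.1 (parameter values below are arbitrary); there K_x(t₁, t₂) = σ² cos ω(t₂ − t₁), and the formula reduces to K_y(t₁, t₂) = (a² + b²ω²)σ² cos ω(t₂ − t₁):

```python
# Empirical K_y(t1, t2) for Y = a X + b dX/dt versus the closed form.
import math
import random

random.seed(3)

a, b, w, s = 1.0, 0.5, 2.0, 1.0
t1, t2 = 0.2, 0.5
n = 200_000

acc = 0.0
for _ in range(n):
    A = random.gauss(0.0, s)
    B = random.gauss(0.0, s)
    def y(t):
        x = A * math.cos(w * t) + B * math.sin(w * t)
        xdot = -A * w * math.sin(w * t) + B * w * math.cos(w * t)
        return a * x + b * xdot
    acc += y(t1) * y(t2)            # y-bar = 0, so this estimates K_y

k_emp = acc / n
k_theory = (a * a + b * b * w * w) * s * s * math.cos(w * (t2 - t1))
print(k_emp, k_theory)
```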
32. LINEAR OPERATIONS WITH RANDOM FUNCTIONS

PROBLEMS

32.1 Find the correlation function of the derivative of a random function X(t) if K_x(τ) = a e^{−α|τ|} . . .

33.6 To remove the damage caused by a random exterior perturbation characterized by a normal random function X(t), it is necessary to use power W(t) proportional to X²(t): W(t) = kX²(t).
Estimate the average number of times per unit time that the power of the motor will be insufficient to remove the damage, if its maximal possible value is w₀, x̄ = 0,

    K_x(τ) = a e^{−α|τ|} (cos βτ + (α/β) sin β|τ|),
and k, w₀, a, α and β are known constants.
33.7 On an airplane there is a device (an accelerometer) that measures the accelerations normal to the axis of the fuselage and in the plane of the wing. The automatic pilot is programmed for a horizontal rectilinear flight with constant velocity. Because of errors in direction, the angle γ(t) made by the velocity vector with the fixed vertical plane is random. Estimate the average number of times per unit time that the sensitive element of the accelerometer will go off scale, if this event occurs when the instantaneous radius of curvature of the trajectory of the airplane in the horizontal plane becomes equal to the minimal admitted radius of circulation R₀. The velocity v of the plane can be assumed constant and

    K_γ(t₂ − t₁) = a e^{−α|τ|} (cos βτ + (α/β) sin β|τ|),
where τ = t₂ − t₁.
33.8 The altitude H(t) of an airplane directed by an automatic pilot is a random function whose expectation h̄ is the given altitude of flight and whose correlation function is

    K_h(τ) = a e^{−α|τ|} (cos βτ + (α/β) sin β|τ|).
Assuming that H(t) is normal, find the minimal altitude h̄ that can be established in the system of devices for pilotless flight so that during the time T the probability of failure caused by collision with the surface of the earth is less than δ = 0.01 per cent, if a = 400 sq. m., α = 0.01 sec.⁻¹, β = 0.1 sec.⁻¹ and T = 5 hours.
33.9 A radio control line ensures the transmission of a signal without distortion if the perturbation X(t) at the input of the receiver during transmission does not exceed in absolute value some level a. Find the probability Q of transmission without distortion if x̄ = 0 and the time of transmission is T.
33.10 Find the distribution law for the ordinates of the maxima of a normal random function X(t) if x̄ = 0.
33.11 Given a normal stochastic process X(t), find the distribution law for the ordinates of its minima if

    K_x(τ) = σ² e^{−α|τ|} (1 + α|τ| + (1/3)α²τ²).

33.12 Estimate the average number of inflection points of a normal random function X(t) during the time T if

    K_x(τ) = a e^{−ατ²}.
33.13 Estimate the average number of maxima n̄ per unit area of a normal random function of two variables Ξ(x, y) if its two-dimensional correlation function K_ξ(τ₁, τ₂) is a function of two variables and its two-dimensional spectral density is

    S_ξ(ω₁, ω₂) = (1/4π²) ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−i(ω₁τ₁ + ω₂τ₂)} K_ξ(τ₁, τ₂) dτ₁ dτ₂.
35. CHARACTERISTICS AT THE OUTPUT OF SYSTEMS
35.1 . . . is a random function X(t) whose spectral density in the frequency band |ω| ≤ ω₀, where ω₀ ≫ α, can be considered constant:

    S_x(ω) ≈ c².

Find the correlation function of Y(t) for t ≫ 1/α.
35.2 A dynamical system is described by the equation

    a₀ dY(t)/dt + a₁Y(t) = b₀ dX(t)/dt + b₁X(t),

where x̄ = const. is known, K_x(τ) = σ_x² e^{−α|τ|} and a₁/a₀ > 0. Find the expectation and variance of the stationary solution of this equation.
35.3 The deviation U(t) of a heel-meter located in the plane of the midship frame is defined by the equation

    Ü(t) + 2hU̇(t) + n²U(t) = F(t)  (n > h > 0),
where F(t) = (1/g)[η̈_c(t) − cΘ̈(t)]. The heel angle Θ(t) and the velocity η̇_c(t) of the lateral shift of the center of gravity of the ship, as a result of orbital motion, can be considered uncorrelated random functions with

    K_η̇c(τ) = a₁ e^{−α₁|τ|} (cos β₁τ + (α₁/β₁) sin β₁|τ|),

    K_Θ(τ) = a₂ e^{−α₂|τ|} (cos β₂τ + (α₂/β₂) sin β₂|τ|),
and all the constants contained in the formulas are known. Evaluate S_u(ω).
35.4 An astatic gyroscope with proportional correction is located on a ship in the plane of the midship frame. Find the variance of the deviation α of its axis from the direction given by the physical pendulum if the angle α is determined by the equation

    α̇(t) + εα(t) = εU(t)  (ε > 0).

Assume that the time elapsed since the start of the gyroscope is sufficiently great, so that α(t) can be considered stationary; determine the spectral density S_u(ω) by use of the result of Problem 35.3, where a₁ = 1.24 sq. m./sec.²; α₁ = 0.1 sec.⁻¹; β₁ = 1.20 sec.⁻¹; a₂ = 3.8·10⁻² rad.²; α₂ = 0.04 sec.⁻¹; β₂ = 0.42 sec.⁻¹; h = 0.6 sec.⁻¹; n = 6.28 sec.⁻¹; c = 10 m.; ε = 0.01 sec.⁻¹.
35.5 Find the spectral density and the correlation function of the stationary solution of the equation

    d²Y(t)/dt² + 2h dY(t)/dt + k²Y(t) = X(t)  (k > h > 0),

if X(t) has the properties of "white noise," that is, S_x(ω) = c² = const.
35.6 The angular deviation Θ(t) of the coil of a galvanometer from the equilibrium position in the case of an open circuit is defined by the equation

    I d²Θ(t)/dt² + r dΘ(t)/dt + DΘ(t) = M(t),

where I is the moment of inertia of the coil, r is the friction coefficient, D is the rigidity coefficient of the thread on which the coil is suspended and M(t) is the perturbing moment caused by the impacts of molecules of the surrounding medium. Find the spectral density and the correlation function of the angle Θ(t) if the spectral density of M(t) can be assumed constant and, according to the results of statistical mechanics, σ_θ²D = kT, where k is Boltzmann's constant and T is the absolute temperature of the medium.
35.7 Two stationary random functions Y(t) and X(t) are related by the equation
    d³Y(t)/dt³ + 6 d²Y(t)/dt² + 11 dY(t)/dt + 6Y(t) = 5X(t) + 7 d³X(t)/dt³.

Find the spectral density S_y(ω) of the stationary solution of the equation if S_x(ω) = 4/[π(ω² + 1)].
35.8 Does the equation

    Ÿ(t) − 2Ẏ(t) + 3Y(t) = X(t),
containing on its right-hand side the stationary function X(t), admit a stationary solution?
35.9 Find the variance of the ordinate ζ_c(t) of the center of gravity of a ship on a wavy sea if

    ζ̈_c(t) + 2hζ̇_c(t) + ω₀²ζ_c(t) = ω₀²X(t),

where the ordinate of the wave front X(t) has the correlation function

    K_x(τ) = a e^{−α|τ|} (cos βτ + (α/β) sin β|τ|);

h and ω₀ are constants defined by the parameters of the ship, α is a parameter characterizing the irregularity of the waves, β is the dominant frequency of the waves and ω₀ ≫ h > 0.
35.10 The error given by an accelerometer measuring the horizontal acceleration of an airplane is defined by the equation
    ε̈(t) + 2hε̇(t) + n²ε(t) = gn²γ(t),

where h = 0.6 sec.⁻¹, n = 6.28 sec.⁻¹, g = 9.81 m./sec.² and the heel angle γ(t) is a stationary normal random function with the known correlation function

    K_γ(τ) = 3·10⁻⁴ e^{−0.6|τ|} (cos 5τ + 0.12 sin 5|τ|).

Find the variance of ε(t) for the stationary operating mode of the accelerometer.
35.11 Prove that if the input signal of a stable linear dynamical system described by equations with constant coefficients is a random function X(t) with the properties of "white noise" (S_x(ω) = c²), then for a sufficiently long time elapsed after the start of operation the correlation function of the output signal Y(t) is defined by the equality

    K_y(τ) = 2πc² ∫₀^∞ p(t)p(t − τ) dt,

where p(t) is the Green's function of the system.
35.12 Find the variance of the heel angle Θ(t) of a ship defined by the equation

    Θ̈(t) + 2hΘ̇(t) + k²Θ(t) = PF(t)  (k > h > 0),

if the wave slope angle F(t) has a zero expectation and the rolling process can be considered stationary.
35.13 A stationary random function Y(t) is related to the stationary function X(t), whose spectral density is known, by an equation in which k ≥ h > 0. Find the mutual spectral density S_yx(ω) and the mutual correlation function R_yx(τ).
35.14 Given

    Ÿ(t) + 8Ẏ(t) + 7Y(t) = X(t),

find the correlation function of Y(t) for times exceeding the duration of the transient process.
35.15 The input signal to a dynamical system with Green's function p(t) is a stationary random function X(t) with zero expectation. Find the variance of the deviation of the output signal Y(t) from some stationary function Z(t) if K_x(τ) and R_xz(τ) are known, z̄ = 0 and the transient process of the system can be considered finished.
35.16 Using the spectral decomposition of a stationary random function X(t), find for time t ≫ 1/a the variance of the integral of the equation

    Ẏ(t) + aY(t) = tX(t)

with zero initial conditions, if
35.17 As a consequence of the random unbalance of the gyromotor placed on a platform with a random vertical acceleration W(t), the direction gyroscope precesses with the angular velocity

    α̇(t) = (PL/H)[1 + (1/g)W(t)].

Find the expectation and variance of the azimuthal departure α(t) at the time t if M[L] = 0, D[L] = σ_L², K_w(τ) and w̄ are known, P, H and g are known constants and L and W(t) are uncorrelated.
35.18 Find the correlation function of the particular solution Y₁(t) of the equation

    Ÿ(t) + 2hẎ(t) + k²Y(t) = e^{−at}X(t)

with zero initial conditions, if k ≥ h > 0.
35.19 Two random functions Y(t) and X(t) are related by the equation

    Ẏ(t) − tY(t) = X(t).

Find K_y(t₁, t₂) if K_x(τ) = a e^{−α|τ|} and if for t = 0, Y(t) = 0.
35.20 Find the expectation and the correlation function of the particular solution of the equation

    Ẏ(t) − a²tY(t) = bX(t)

with zero initial conditions, if x̄(t) = t.
35.21 Find the expectation and the correlation function of the solution of the differential equation

    Ẏ(t) + [1/(1 + t)]Y(t) = X(t),

if for t = t₀ ≠ 0, Y(t) = y₀, where y₀ is a nonrandom variable, and

    x̄(t) = 1/t,  K_x(t₁, t₂) = t₁t₂ e^{−α|t₂ − t₁|}.
35.22 Write the general expression for the expectation and correlation function of the solution Y(t) of a differential equation of nth order whose Green's function is p(t₁, t₂), if the right-hand side of the equation contains the random function X(t), x̄(t) and K_x(t₁, t₂) are known, and the initial values of Y(t) and of its first (n − 1) derivatives are random variables, uncorrelated with the ordinates of the random function X(t), with known expectations ē_j and correlation matrix ‖k_jl‖ (l, j = 1, 2, ..., n).
35.23 Given the system
    Ẏ₁(t) + 3Y₁(t) − Y₂(t) = X(t),
    Ẏ₂(t) + 2Y₁(t) = 0,
find the variance of Y₂(t) for t = 0.5 sec. if for t = 0, Y₁(t) and Y₂(t) are random variables uncorrelated with X(t); D[Y₁(0)] = 1, D[Y₂(0)] = 2, M{[Y₁(0) − ȳ₁(0)][Y₂(0) − ȳ₂(0)]} = −1 and

    S_x(ω) = 2/[π(ω² + 1)²] sec.
35.24 Find the variance of the solutions of the system of equations

    Ẏ₁(t) + 3Y₁(t) − Y₂(t) = tX(t),
    Ẏ₂(t) + 2Y₁(t) = 0

for the time t if the initial conditions are zero and S_x(ω) = 2/[π(ω² + 1)].
35.25 Find the variance of the solutions of the system of equations

    Ẏ₁(t) + 3Y₁(t) − Y₂(t) = tX(t),
    Ẏ₂(t) + 2Y₁(t) = 0

for t = 0.5 sec. if S_x(ω) = 2/[π(ω² + 1)] and the initial conditions are zero.
35.26 The input signal to an automatic friction clutch serving as a differential rectifier is a random function X(t). Find the variance of the rectified function Z(t) and the variance of the rectified velocity of its variation Y(t) if the operation of the friction clutch is described by the system of equations

    bẎ(t) + Y(t) = aẊ(t),
    bŻ(t) + Z(t) = X(t),

where a and b are constant scale coefficients, K_x(τ) = σ_x² e^{−α|τ|} and the transient process is finished.
35.27 For t = 1, find the distribution law for the solution of the equation

    Ÿ(t) + 3Ẏ(t) + 2Y(t) = X(t)

if for t = 0, Y(t) = Y₀ and Ẏ(t) = Ẏ₀, where Y₀, Ẏ₀ and X(t) are normal and mutually uncorrelated, M[Y₀] = M[Ẏ₀] = x̄ = 0, D[Y₀] = 1.5, D[Ẏ₀] = 0.2 and K_x(τ) = 2e^{−|τ|}.
35.28 The deviation U(t) from the vertical position of a plane physical pendulum whose plane of oscillation coincides with the diametral plane of a ship is defined by the equations

    Ü(t) + 2hU̇(t) + Y(t)U(t) = X(t),  X(t) = −(n²/g){ζ̈_c(t) + . . .

. . . for (a) τ > 0, (b) τ < 0.
SOLUTION. In this case,

    S_x(ω) = a²/(ω² + β²) + c² = c² (ω² + γ²)/(ω² + β²) = c² |P₁(ω)|²/|Q₁(ω)|²,

where

    P₁(ω) = ω − iγ,  Q₁(ω) = ω − iβ,  γ = (1/c) √(a² + c²β²).

(a) For τ > 0, the expression [Q₁*(ω)/P₁*(ω)] S_xz(ω) = a²/[(ω − iβ)(ω + iγ)] has one pole in the upper half-plane, ω = iβ; consequently, the realizable part of e^{iωτ} a²/[c(ω − iβ)(ω + iγ)] reduces to its residue term a² e^{−βτ}/[ic(β + γ)(ω − iβ)], and

    L(iω) = [(ω − iβ)/(c(ω − iγ))] · a² e^{−βτ}/[ic(β + γ)(ω − iβ)] = a² e^{−βτ}/[c²(β + γ)(γ + iω)].

(b) For τ < 0, [Q₁*(ω)/P₁*(ω)] S_xz(ω) has one pole in the lower half-plane, ω = −iγ; separating the term that corresponds to this pole, we obtain

    L(iω) = [(ω − iβ)/(c(ω − iγ))] (a²/c) { e^{iωτ}/[(ω − iβ)(ω + iγ)] + e^{γτ}/[i(β + γ)(ω + iγ)] }
          = [a²/(c²(ω² + γ²))] { e^{iωτ} + [(ω − iβ)/(i(β + γ))] e^{γτ} }.
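The spectral factorization at the start of this solution can be verified numerically (a sketch; the parameter values a, β, c are arbitrary, and γ is defined as in the solution):

```python
# Check that a^2/(w^2 + beta^2) + c^2 equals
# c^2 (w^2 + gamma^2)/(w^2 + beta^2) with gamma = sqrt(a^2 + c^2 beta^2)/c.
import math

a, beta, c = 2.0, 0.7, 1.5
gamma = math.sqrt(a * a + c * c * beta * beta) / c

for w in [0.0, 0.3, 1.0, 4.0, 10.0]:
    lhs = a * a / (w * w + beta * beta) + c * c
    rhs = c * c * (w * w + gamma * gamma) / (w * w + beta * beta)
    assert abs(lhs - rhs) < 1e-12
print("factorization holds")
```

The factorization collects the rational spectral density into a ratio of polynomials with all zeros and poles in one half-plane, which is what makes the residue separation in cases (a) and (b) possible.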
Example 36.3 The distance D(t) to an airplane, measured with the aid of a radar device with error V(t), is the input to a dynamical system that estimates the present value of the velocity by taking into account only the values measured during the time (t − T, t). Determine the optimal Green's function l(τ) if K_v(τ) = σ_v² e^{−α|τ|} . . .

. . . 𝒫_r = 𝒫′(𝒫″)^r (r = 0, 1, ..., K − 1). The matrix ‖p̄_ij‖ of mean limiting transition probabilities is defined by the corresponding formula. The column p of mean limiting unconditional probabilities is given by p = 𝒫̄′p(0). If h = 1 in the matrix 𝒫, then the mean limiting unconditional probabilities p_j (j = 1, 2, ..., m) are uniquely defined by the equalities

    𝒫′p = p,  Σ_{j=1}^m p_j = 1.
SOLUTION FOR TYPICAL EXAMPLES
Example 38.1 Some numbers are selected at random from a table of random numbers containing the integers 1 to m inclusive. The system is in state Q_j if the largest of the selected numbers is j (j = 1, 2, ..., m). Find the probabilities p_ik^(n) (i, k = 1, 2, ..., m) that after selecting n more random numbers from this table the largest number will be k if before it was i.
SOLUTION. Any integer from 1 to m appears in the table of random numbers with equal probability and, thus, any transition from the state Q₁ (the largest selected number is 1) to any state Q_j is equally probable. Then p_1j = 1/m (j = 1, 2, ..., m). The transition from Q₂ to Q₁ is impossible and, consequently, p₂₁ = 0. The system can remain in the state Q₂ in two cases: if the selected number is 1 or 2; consequently, p₂₂ = 2/m and p_2j = 1/m (j = 3, 4, ..., m). In the general case we find

    p_ij = 0 for i > j;  p_ii = i/m;  p_ij = 1/m for i < j  (i, j = 1, 2, ..., m).
The matrix of transition probabilities can be written as

        | 1/m  1/m  1/m   ⋯   1/m      1/m |
        |  0   2/m  1/m   ⋯   1/m      1/m |
    𝒫 = |  0    0   3/m   ⋯   1/m      1/m |
        |  ⋯    ⋯    ⋯    ⋯    ⋯        ⋯  |
        |  0    0    0    ⋯  (m−1)/m   1/m |
        |  0    0    0    ⋯    0        1  |
The characteristic equation

    |λE − 𝒫| = Π_{j=1}^m (λ − j/m) = 0

has the roots λ_j = j/m (j = 1, 2, ..., m). . . . Since 𝒫 = HΓH⁻¹, where Γ = ‖λ_k δ_ik‖, we find . . .

    p_ii^(n) = p_ii^n (i = 0, 1, ..., m);  p_ik^(n) = 0 for i > k (i, k = 0, 1, ..., m);

and for k > i

    p_ik^(n) = Σ_{v=i}^k p_v^n D_ki(p_v) / [(p_v − p_i)(p_v − p_{i+1}) ⋯ (p_v − p_{v−1})(p_v − p_{v+1}) ⋯ (p_v − p_{k−1})(p_v − p_k)],

where

              | p_{i,i+1}    p_{i,i+2}    p_{i,i+3}    ⋯  p_{i,k−1}    p_{i,k}   |
              | p_{i+1} − λ  p_{i+1,i+2}  p_{i+1,i+3}  ⋯  p_{i+1,k−1}  p_{i+1,k} |
    D_ki(λ) = |     0        p_{i+2} − λ  p_{i+2,i+3}  ⋯  p_{i+2,k−1}  p_{i+2,k} |
              |     ⋯            ⋯            ⋯        ⋯      ⋯           ⋯     |
              |     0            0            0        ⋯  p_{k−2,k−1}  p_{k−2,k} |
              |     0            0            0        ⋯  p_{k−1} − λ  p_{k−1,k} |
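As a cross-check of Example 38.1, the transition matrix 𝒫 can be built explicitly and one n-step probability compared against direct simulation (a sketch; the values m = 6, n = 3 and the chosen states are arbitrary):

```python
# Transition matrix of the "largest number drawn so far" chain and a
# Monte-Carlo check of one entry of P^n.
import random

random.seed(4)

m = 6
# p[i][j] for states 1..m stored at indices 0..m-1
P = [[0.0] * m for _ in range(m)]
for i in range(1, m + 1):
    for j in range(1, m + 1):
        if j > i:
            P[i - 1][j - 1] = 1.0 / m
        elif j == i:
            P[i - 1][j - 1] = i / m

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(m)) for c in range(m)]
            for r in range(m)]

n_steps = 3
Pn = P
for _ in range(n_steps - 1):
    Pn = matmul(Pn, P)

# Simulation: start with largest number i0 = 2, draw 3 more numbers.
i0, k0 = 2, 5
trials = 100_000
hits = 0
for _ in range(trials):
    largest = i0
    for _ in range(n_steps):
        largest = max(largest, random.randint(1, m))
    hits += (largest == k0)

print(Pn[i0 - 1][k0 - 1], hits / trials)
```

Both values agree with the direct count P(max ≤ 5)³ − P(max ≤ 4)³ = (5/6)³ − (4/6)³ = 61/216.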
38.7 Prove that if under the assumptions made in the preceding problem p_kk = p (k = 0, 1, ..., m − 1), then

    p_mm^(n) = 1,  p_kk^(n) = p^n  (k = 0, 1, ..., m − 1);

for i > k, p_ik^(n) = 0 (i, k = 0, 1, ..., m), and for k > i

    p_ik^(n) = [(m − k + i − 1)!/(m − 1)!] { d^{k−i}/dλ^{k−i} [λ^n D_ki(λ)] }_{λ=p},

    p_im^(n) = [(i − 1)!/(m − 1)!] { d^{m−i}/dλ^{m−i} [ (λ^n/(λ − 1)) D_mi(λ) ] }_{λ=p} + D_mi(1)/(1 − p)^{m−i},

where D_ki(λ) is determined by the formula of the preceding problem for p_k = p (k = 0, 1, ..., m − 1).
38.8 From an urn containing N white and black balls, m balls are drawn simultaneously. The black balls are used to replace the white balls that are drawn. Initially the urn contains m white balls, and after several drawings it contains i white balls. Determine the probabilities p_ik^(n) (i, k = 0, 1, ..., m) that after n additional drawings there will be k white balls in the urn. Evaluate these probabilities for N = 6, m = 3.
38.9 For a given series of shots, each marksman of a group scores any number of points from N + 1 to N + m with equal probabilities. Determine the probability that among the next n marksmen of this group at least one will score N + k points, if the maximal number of points scored by the previous marksmen is N + i (k ≥ i = 1, 2, ..., m).
MARKOV PROCESSES
38.10 Along a straight line AB in a horizontal plane there are placed identical vertical cylinders of radius r whose centers are a distance l apart. Perpendicular to this line, spheres of radius R are thrown, and the path of a moving sphere crosses AB with equal probability at any point of the interval L on which there stand m cylinders. The distance between the centers of the cylinders is l > 2(r + R); each time a sphere hits a cylinder, the number of cylinders decreases by one. Determine the probabilities p_ik^(n) (i, k = 0, 1, ..., m) that after n throws k cylinders will remain if before this there were i cylinders.
38.11 In a domain D partitioned into m equal parts, points are placed successively so that their positions are equally probable throughout the domain. Determine the probabilities p_ik^(n) (i, k = 1, 2, ..., m) that after placing a new series of n points the number of parts of D containing at least one point will increase from i to k.
38.12 At the times t₁, t₂, t₃, ... a ship can change its direction by selecting one out of m possible courses Q₁, Q₂, ..., Q_m. The probability p_ij that at the time t_r the ship changes from Q_i to Q_j is p_ij = a_{m−i+j+1}, where a_{m+k} = a_k ≠ 0 (k = 1, 2, ..., m) and Σ_{k=1}^m a_k = 1. Determine the probability p_jk^(n) that for t_n < t < t_{n+1} the direction of the ship will be Q_k if the initial direction was Q_j (j, k = 1, 2, ..., m). Find this probability for n = ∞.
38.13 Consider the following model of the diffusion process with a central force. A particle can lie only on the segment AB at the points with coordinates x_k = x_A + kΔ (k = 0, 1, ..., m), where x_m = x_B. It shifts stepwise from x_j to the next point toward A with probability j/m, and to the next point toward B with probability 1 − j/m. Determine the probabilities p_ik^(n) (i, k = 0, 1, ..., m) that after n steps the particle will be at the point x_k if initially it was at x_i.
38.14 The assumptions here are the same as in Example 38.2, but the machine does not turn off.
When there are no nickels in the container and a dime is inserted, or when there are m nickels and a nickel is inserted, the machine returns the last coin inserted without releasing a token. Find the probabilities p_ik^(n) (i, k = 0, 1, ..., m) that after n demands for tokens there will be k nickels in the container if initially there were i nickels.
38.15 Two marksmen A and B fire shots in turn so that after each hit A fires and after each failure B fires. The right to the first shot is determined on the same basis, by reference to the outcome of a preliminary shot fired by a randomly chosen marksman. Determine the probability of failure at the nth trial, independent of the previous outcomes, if the probabilities of failure at each trial for the two marksmen are α and β, respectively.
38.16 Given the matrix 𝒫 = ‖p_ij‖ of transition probabilities that is irreducible, nonperiodic and doubly stochastic (i.e., the sum of the elements of each column and of each row is unity), find the limiting probabilities p_j^(∞) (j = 1, 2, ..., m).
38.17 There are m white and m black balls that are mixed thoroughly and then equally distributed between two urns. From each urn one ball is randomly drawn and placed in the other. Find the probabilities p_ik (i, k = 0, 1, ..., m) that after an infinite number of such
MARKOV CHAINS
interchanges, the first urn will contain k white balls if initially it contained i white balls. 38.18 A segment AB is divided into m equal intervals. A particle can lie only at the midpoint of some interval and shifts stepwise, by an amount equal to the length of one interval, toward point B with probability p and toward point A with probability q = 1 - p. At the endpoints of AB reflecting screens are placed, so that upon reaching A or B the particle is reflected toward its initial position. Find the limiting unconditional probabilities p_k^{(inf)} (k = 1, 2, ..., m) that the particle is in each of the m intervals. 38.19 Given the following transition probabilities for a Markov chain with an infinite number of states,

   p_{i,i+1} = 1/(i + 1),   p_{i1} = i/(i + 1)   (i = 1, 2, ...),
determine the limiting probabilities p_j^{(inf)} (j = 1, 2, ...). 38.20 The transition probabilities for a Markov chain with an infinite number of states are defined by p_{ii} = q, p_{i,i+1} = p = 1 - q (i = 1, 2, ...). Find the limiting probabilities p_j^{(inf)} (j = 1, 2, ...). 38.21 A Markov chain with an infinite number of states has the following transition probabilities:

   p_{11} = 1/2,   p_{12} = 1/2,   p_{i1} = 1/i,   p_{i,i+1} = (i - 1)/i   (i = 2, 3, ...).
Find the limiting probabilities p_{ik}^{(inf)} (i, k = 1, 2, ...). 38.22 A particle makes a random walk on the positive portion of the x-axis. The particle can shift by one step Delta to the right with probability alpha, to the left with probability beta, or it can remain fixed; it can reach only points with coordinates x_j (j = 1, 2, ...). From the point with coordinate x_1 = Delta, the particle can move to the right with probability alpha or remain fixed with probability 1 - alpha. Find the limiting transition probabilities p_k^{(inf)} (k = 1, 2, ...). 38.23 The matrix of transition probabilities is given in the form
        | R   0 |
   P =  | U   W |,

where R is the matrix associated with the irreducible nonperiodic group C of essential states Q_1, Q_2, ..., Q_s, and the square matrix W is associated with the inessential states Q_{s+1}, Q_{s+2}, ..., Q_m. Determine the limiting probabilities p*_j (j = s + 1, s + 2, ..., m) that the system will pass into a state belonging to group C. 38.24 The matrix of transition probabilities is given in the form

        | R    0    0 |
   P =  | 0    R_1  0 |
        | U    U_1  W |,
where R is the matrix corresponding to the nonperiodic group C of
essential states Q_1, Q_2, ..., Q_s, and the square matrix W corresponds to the inessential states Q_{r+1}, Q_{r+2}, ..., Q_m. Find the probabilities p*_j (j = r + 1, r + 2, ..., m) that the system will pass into a state belonging to the group C if all the elements of W are equal to alpha and the sum of the elements of any row of the matrix U is beta. 38.25 Two players A and B continue a game until the complete financial ruin of one. Their probabilities of winning at each play are, respectively, p and q (p + q = 1). At each play the win of one player (loss for the other) is one dollar, and the total capital of the players is m dollars. Determine the probabilities of financial ruin for each if A has j dollars (j = 1, 2, ..., m - 1) before the game begins. 38.26 Given the transition probabilities p_{j,j+1} = 1 (j = 1, 2, ..., m - 1), p_{m1} = 1, determine the transition probabilities p_{jk}^{(n)} and the mean limiting transition probabilities p-bar_{jk} (j, k = 1, 2, ..., m). 38.27 The matrix of transition probabilities is
        | alpha  beta  gamma  delta |
   P =  |   0      0     1      0   |
        |   0      0     0      1   |
        |   0      1     0      0   |
where alpha != 1. Determine the transition probabilities p_{jk}^{(n)} and the mean limiting transition probabilities p-bar_{jk} (j, k = 1, 2, 3, 4). 38.28 Given the elements of the matrix of transition probabilities

   p_{j,j+1} = p   (j = 1, 2, ..., 2m - 1),    p_{j,j-1} = q = 1 - p   (j = 2, 3, ..., 2m),

find, without evaluating the eigenvalues of the matrix P, the limiting transition probabilities and the mean limiting unconditional probabilities. 38.29 A particle is displaced on a segment AB by random impacts and can be at the points with coordinates x_j = x_A + j*Delta (j = 0, 1, ..., m). Reflecting screens are placed at the endpoints A and B. Each impact can shift the particle to the right with probability p and to the left with probability q = 1 - p. If the particle is next to a screen, any impact shifts it away from the screen in question. Find the mean limiting unconditional probabilities that the particle is at each division point of the segment AB.
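Problem 38.25 is the classical gambler's-ruin chain, and its answer can be checked numerically. The sketch below is an editorial illustration, not part of the book: the function names are ours, and the closed form ((q/p)^j - (q/p)^m)/(1 - (q/p)^m) is the standard ruin probability for the player who wins each play with probability p.

```python
# Gambler's ruin (cf. Problem 38.25): A wins each play with probability p,
# the total capital is m dollars, and A starts with j dollars.
def ruin_closed_form(p, m, j):
    """Probability that A is ruined (reaches 0 before m)."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:                 # fair game: linear in j
        return (m - j) / m
    r = q / p
    return (r**j - r**m) / (1.0 - r**m)

def ruin_by_iteration(p, m, j, sweeps=20000):
    """Solve R_0 = 1, R_m = 0, R_k = q R_{k-1} + p R_{k+1} by relaxation."""
    q = 1.0 - p
    R = [1.0] + [0.5] * (m - 1) + [0.0]
    for _ in range(sweeps):
        for k in range(1, m):
            R[k] = q * R[k - 1] + p * R[k + 1]
    return R[j]

closed = ruin_closed_form(0.6, 10, 5)      # exactly 6752/58025
iterated = ruin_by_iteration(0.6, 10, 5)
```

The relaxation converges geometrically, so both routes agree to many digits; the closed form is what the chain-theoretic analysis of Problem 38.25 produces.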
39. MARKOV PROCESSES WITH A DISCRETE NUMBER OF STATES

Basic Formulas
The behavior of a system with possible states Q_0, Q_1, Q_2, ..., Q_m can be described by a random function X(t) assuming the value k if at time t the system is in state Q_k. If the passage from one state to another is possible at any
time t, and the probabilities P_{ik}(t, tau) of transition from state Q_i at time t to state Q_k at time tau (tau >= t) are independent of the behavior of the system before time t, then X(t) is a Markov stochastic process with a discrete number of states. (The number of states can be finite or infinite.) The transition probabilities P_{ik}(t, tau) satisfy the relation

   P_{ik}(t, tau) = Sum_{j=0}^{m} P_{ij}(t, s) P_{jk}(s, tau),   t <= s <= tau.

The process is homogeneous if

   P_{ik}(t, tau) = P_{ik}(tau - t).

In this case, for the Markov process,

   P_{ik}(tau - t) = Sum_{j=0}^{m} P_{ij}(s - t) P_{jk}(tau - s),   t <= s <= tau.

A Markov process is called regular if: (a) for each state Q_k there exists a limit

   c_k(t) = lim_{dt -> 0} (1/dt) [1 - P_{kk}(t, t + dt)];

(b) for each pair of states Q_i and Q_k there exists a temporal transition probability density p_{ik}(t), continuous in t, defined by

   p_{ik}(t) = lim_{dt -> 0} P_{ik}(t, t + dt) / (c_i(t) dt),

where the limit exists uniformly with respect to t and, for fixed k, uniformly with respect to i. For regular Markov processes the probabilities P_{ik}(t, tau) are determined by two systems of differential equations:

   dP_{ik}(t, tau)/dtau = -c_k(tau) P_{ik}(t, tau) + Sum_{j != k} P_{ij}(t, tau) c_j(tau) p_{jk}(tau)   (the first system),

   dP_{ik}(t, tau)/dt = c_i(t) P_{ik}(t, tau) - c_i(t) Sum_{j != i} p_{ij}(t) P_{jk}(t, tau)   (the second system)

   (i, j, k = 0, 1, 2, ..., m),

with initial conditions

   P_{ik}(t, t) = delta_{ik},

where

   delta_{ik} = 1 for i = k,   delta_{ik} = 0 for i != k.

For a homogeneous Markov process, c_i(t) and p_{ij}(t) are independent of time, P_{ik}(t, tau) = P_{ik}(tau - t), and the systems of differential equations become

   dP_{ik}(t)/dt = -c_k P_{ik}(t) + Sum_{j != k} c_j p_{jk} P_{ij}(t)   (the first system),

   dP_{ik}(t)/dt = -c_i P_{ik}(t) + c_i Sum_{j != i} p_{ij} P_{jk}(t)   (the second system)

   (i, j, k = 0, 1, 2, ..., m)
with initial conditions P_{ik}(0) = delta_{ik}, and the probability that the system will change its state during the time interval (t, t + dt) is [lambda dt + o(dt)]. 39.7 The customers of a repair shop form a simple queue with parameter lambda. Each customer is serviced by one repairman during a random time obeying an exponential distribution law with parameter mu. If there are no free repairmen, the customer leaves without service. How many repairmen should there be so that the probability that a customer will be refused immediate service is at most 0.015, if mu = lambda? 39.8 One repairman services m automatic machines, which need no care during normal operation. The failures of each machine form an independent simple flow with parameter lambda. To remove the defects, the repairman spends a random time distributed according to an exponential law with parameter mu. Find the limiting probabilities that k machines do not run (are being repaired or are waiting for repairs), and the expected number of machines waiting for repairs. 39.9 Solve Problem 39.8 under the assumption that the number of repairmen is r (r < m). 39.10 A computer uses either units of type A or units of type B. The failures of these units form a simple flow with parameters lambda_A = 0.1 units/hour and lambda_B = 0.01 units/hour. The total cost of all units of type A is a, and that of all units of type B is b (b > a). A defective unit causes a random delay obeying an exponential distribution law with an average time of two hours. The cost per hour of delay is c. Find the expectation of the saving achieved by using the more reliable units during 1000 hours of use. 39.11 The incoming calls for service in a system consisting of n homogeneous devices form a simple queue with parameter lambda. Service starts immediately if there is at least one free device, and each call requires a single free device whose servicing time is a random variable obeying an exponential distribution with parameter mu (mu*n > lambda). If a call finds no free device, it waits in line.
Determine the limiting values of (a) the probabilities p_k that there are exactly k calls in the system (being serviced and waiting in line); (b) the probability p* that all devices are busy; (c) the distribution function F(t) and the expected value t-bar of the time spent waiting in line; (d) the expected number m_1 of calls waiting in line, the expected number m_2 of calls in the servicing system, and the expected number m_3 of devices free from servicing. 39.12 The machines arriving at a repair shop that gives guaranteed service form a simple queue with parameter lambda = 10 units/hour. The servicing time for one unit is a random variable obeying an exponential distribution law with parameter mu = 5 units/hour. Determine the average time elapsed from the moment a machine arrives until it is repaired if there are four repairmen, each servicing only one machine at a time. 39.13 How many positions should an experimental station have so that an average of one per cent of items wait more than 2/3 of a shift to start, if the duration of the experiments is a random variable obeying an exponential distribution law with a mean of 0.2 shift, and the incoming devices used in these experiments form a simple queue with an average number of 10 units per shift? 39.14 A servicing system consists of n devices, each servicing only one call at a time. The servicing time is an exponentially distributed random variable with parameter mu. The incoming calls for service form a simple queue with parameter lambda (mu*n > lambda). A call is serviced immediately if at least one device is free. If all devices are busy and the number of calls in the waiting line is less than m, the call joins the waiting line; if there are m calls in the waiting line, a new call is refused service.
Find the limiting values of (a) the probabilities p_k that there will be exactly k calls in the servicing system; (b) the probability that a call will be denied service; (c) the probability that all servicing devices will be busy; (d) the distribution function F(t) of the time spent in the waiting line; (e) the expected number m_1 of calls in the waiting line, the expected number m_2 of calls in the servicing system, and the expected number m_3 of devices free from servicing. 39.15 A barbershop has three barbers. Each barber spends an average of 10 minutes with each customer. The customers form a simple queue with an average of 12 customers per hour. The customers stand in line if, when they arrive, there are fewer than three persons in the waiting line; otherwise, they leave. Determine the probability p_0 of no customers, the probability p that a customer will leave without having his hair cut, the probability p* that all barbers will be busy working, the average number of customers m_1 in the waiting line, and the average number of customers m_2 in the barbershop in general. 39.16 An electric circuit supplies electric energy to m identical machines, which need service independently. The probability that during the interval (t, t + dt) a machine stops using electric energy is mu dt + o(dt), and the probability that it will need energy during the same
interval is [lambda dt + o(dt)]. Determine the limiting probability that there will be n machines connected to the circuit. 39.17 A shower of cosmic particles is caused by one particle reaching the atmosphere at some given moment. Determine the probability that at time t after the first particle reaches the atmosphere there will be n particles, if each particle during the time interval (t, t + dt) can produce, with probability [lambda dt + o(dt)], a new particle with practically the same reproduction probability. 39.18 A shower of cosmic particles is produced by one particle reaching the atmosphere at some given moment. Estimate the probability that at time t after the first particle reaches the atmosphere there will be n particles, if each particle during the time interval (t, t + dt) can produce a new particle with probability [lambda dt + o(dt)] or disappear with probability [mu dt + o(dt)]. 39.19 In a homogeneous process of pure birth (birth without death), a population of n particles at time t can change into n + 1 particles during the interval (t, t + dt) with probability lambda_n(t) dt + o(dt), where

   lambda_n(t) = (1 + a n)/(1 + a t),

or it can fail to increase in number. Determine the probability that at time t there will be exactly n particles.
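Several of the queueing problems above (39.7, 39.8, 39.11, 39.14, 39.15) reduce to a birth-and-death scheme whose limiting probabilities follow from the balance relations p_{k+1} = p_k lambda_k / mu_{k+1}. The sketch below is our illustration, not the book's; the function name and the specific rates are our reading of the barbershop of Problem 39.15 (arrivals 12 per hour, each barber serving 6 per hour, at most three waiting).

```python
# Limiting probabilities of a finite birth-and-death process:
# p_{k+1}/p_k = lam[k]/mu[k]; normalize so the probabilities sum to one.
def birth_death_limits(lam, mu):
    # lam[k]: birth rate out of state k (k = 0..N-1)
    # mu[k]:  death rate out of state k+1 (k = 0..N-1)
    weights = [1.0]
    for k in range(len(lam)):
        weights.append(weights[-1] * lam[k] / mu[k])
    total = sum(weights)
    return [w / total for w in weights]

# Problem 39.15: states 0..6 = customers present; three barbers, rate 6/hour
# each, arrivals 12/hour, customers balk when three are already waiting.
lam = [12.0] * 6
mu = [6.0 * min(k, 3) for k in range(1, 7)]   # k customers -> min(k,3) busy barbers
p = birth_death_limits(lam, mu)
p0 = p[0]              # probability of no customers (about 0.122)
p_leave = p[6]         # customer finds the line full and leaves (about 0.048)
p_busy = sum(p[3:])    # all three barbers busy
```

The same helper solves the machine-repair and Erlang-loss variants by changing the rate lists.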
40. CONTINUOUS MARKOV PROCESSES

Basic Formulas
A continuous stochastic process U(t) is called a Markov process if the distribution function F(u_n | u_1, ..., u_{n-1}) of the ordinate of U(t) at time t_n, computed under the assumption that the values of the ordinates u_1, u_2, ..., u_{n-1} at times t_1, t_2, ..., t_{n-1} are known (t_1 < t_2 < ... < t_{n-1} < t_n), depends only on the value of the last ordinate; i.e.,

   F(u_n | u_1, ..., u_{n-1}) = F(u_n | u_{n-1}).

The conditional probability density f(u_n | u_{n-1}) is a function f(t, x; tau, y) of four variables, where for the sake of brevity one uses the notation

   U(t) = X,   U(tau) = Y,   t <= tau.

The function f(t, x; tau, y) satisfies the Kolmogorov equations(2)

   df/dt + a(t, x) df/dx + (1/2) b(t, x) d2f/dx2 = 0   (the first equation),

   df/dtau + d/dy [a(tau, y) f] - (1/2) d2/dy2 [b(tau, y) f] = 0   (the second equation),

(2) The second Kolmogorov equation is sometimes called the Fokker-Planck equation or the Fokker-Planck-Kolmogorov equation since, before it was rigorously proved by Kolmogorov, it had appeared in the works of these physicists.
where

   a(t, x) = lim_{tau -> t} (1/(tau - t)) M{[Y - X] | X = x},

   b(t, x) = lim_{tau -> t} (1/(tau - t)) M{[Y - X]^2 | X = x}.

The function f(t, x; tau, y) has the general properties of a probability density:

   f(t, x; tau, y) >= 0,   Integral_{-inf}^{+inf} f(t, x; tau, y) dy = 1,

and satisfies the initial condition

   f(t, x; tau, y) = delta(y - x)   for tau = t.

If the range of the ordinates of the random function is bounded, that is,

   alpha <= U(t) <= beta,

then in addition to the previously mentioned conditions, the function

   G(tau, y) = a(tau, y) f - (1/2) d/dy [b(tau, y) f]

should also satisfy the boundary conditions

   G(tau, alpha) = G(tau, beta) = 0   for any tau.
(G(tau, y) may be regarded as a "probability flow.") A set of n random functions U_1(t), ..., U_n(t) forms a Markov process if the probability density (distribution function) f of the ordinates Y_1, Y_2, ..., Y_n of these functions at time tau, calculated under the assumption that at time t the ordinates of the random functions assumed the values X_1, X_2, ..., X_n, is independent of the values of the ordinates of U_1(t), U_2(t), ..., U_n(t) for times previous to t. In this case, the function f satisfies the system of multidimensional Kolmogorov equations

   df/dt + Sum_{j=1}^{n} a_j(t, x_1, ..., x_n) df/dx_j + (1/2) Sum_{j,l=1}^{n} b_{jl}(t, x_1, ..., x_n) d2f/dx_j dx_l = 0   (first equation),

   df/dtau + Sum_{j=1}^{n} d/dy_j [a_j(tau, y_1, ..., y_n) f] - (1/2) Sum_{j,l=1}^{n} d2/dy_j dy_l [b_{jl}(tau, y_1, ..., y_n) f] = 0   (second equation),

where the coefficients a_j and b_{jl} are determined by the equations

   a_j(t, x_1, ..., x_n) = lim_{tau -> t} (1/(tau - t)) M[(Y_j - X_j) | X_1 = x_1, ..., X_n = x_n],

   b_{jl}(t, x_1, ..., x_n) = lim_{tau -> t} (1/(tau - t)) M[(Y_j - X_j)(Y_l - X_l) | X_1 = x_1, ..., X_n = x_n],
and the initial conditions: for tau = t,

   f(t, x_1, ..., x_n; tau, y_1, ..., y_n) = delta(y_1 - x_1) delta(y_2 - x_2) ... delta(y_n - x_n).
Given the differential equations for the components of a Markov process U_1(t), U_2(t), ..., U_n(t), to determine the coefficients a_j and b_{jl} (a and b in the one-dimensional case) one must compute the ratio of the increments of the ordinates of U_l(t) during a small time interval to (tau - t), find the conditional expectations of these increments and of their products, and pass to the limit as tau -> t. To any multidimensional Kolmogorov equation there corresponds a system of differential equations for the components of the process

   dU_l/dt = psi_l(t, U_1, ..., U_n) + Sum_{m=1}^{n} g_{lm}(t, U_1, ..., U_n) xi_m(t),   l = 1, 2, ..., n,

where the xi_m(t) are mutually independent random functions with independent ordinates ("white noise") whose correlation functions are K_m(tau) = delta(tau), and the functions psi_l and g_{lm} are uniquely determined by the system

   Sum_{j=1}^{n} g_{lj} g_{mj} = b_{lm},   g_{lj} = g_{jl}   (l, m = 1, 2, ..., n),
with psi_l = a_l. To solve the Kolmogorov equations one can use the general methods of the theory of parabolic differential equations (see, for example, Koshlyakov, Gliner and Smirnov, 1964). When a_j and b_{jl} are linear functions of the ordinates U_l(t), the solution can be obtained by passing from the probability density f(t, x_1, ..., x_n; tau, y_1, ..., y_n) to the characteristic function

   E(z_1, ..., z_n) = Integral_{-inf}^{+inf} ... Integral_{-inf}^{+inf} exp {i(z_1 y_1 + ... + z_n y_n)} f(t, x_1, ..., x_n; tau, y_1, ..., y_n) dy_1 ... dy_n,

which obeys a partial differential equation of first order that can be solved by general methods. If the coefficients a_j, b_{lm} are independent of t, then the problem of finding the stationary solutions of the Kolmogorov equations makes sense. To find the stationary solution of the second Kolmogorov equation, set df/dtau = 0 and look for the solution of the resulting equation as a function of y_1, y_2, ..., y_n only. In the particular case of a one-dimensional Markov process, the solution is obtained by quadratures. Any stationary normal process with a rational spectral density can be considered as a component of a multidimensional Markov process. The probability W(T) that the ordinate of a one-dimensional Markov process during a time T = tau - t after a time t will, with known probability density f_0(x) for the ordinates of the random function, remain within the limits of the interval (alpha, beta) is

   W(T) = Integral_{alpha}^{beta} w(tau, y) dy,   tau = t + T,
where the probability density w(tau, y) is the solution of the second Kolmogorov equation with the conditions:

   w(tau, y) = f_0(y)   for tau = t;
   w(tau, alpha) = w(tau, beta) = 0   for tau >= t.

When the initial value of the ordinate is known, f_0(y) = delta(y - x). The probability density f(T) of the sojourn time of the random function in the interval (alpha, beta) is defined by the equality

   f(T) = - Integral_{alpha}^{beta} [dw(tau, y)/dtau] dy,   T = tau - t.

The average sojourn time T-bar of the random function in the interval (alpha, beta) is related to W(T) by

   T-bar = Integral_{0}^{inf} W(T) dT.

For finite alpha and beta = inf, the last formulas give the probability W(T) of sojourn above a given level, the probability density f(T) of the passage time and the average passage time T-bar. The average number of passages beyond the level alpha per unit time for a one-dimensional Markov process is infinity. However, the average number n(T_0) of passages per unit time with duration greater than T_0 > 0 is finite, and for a stationary process it is defined by the formula
=
f(a) Loo v(T 0 , y) dy,
where f(a) is the probability density for the ordinate (corresponding to argument a) of the process and v( T, y) is the solution of the second Kolmogorov equation for a stochastic process with conditions: T < t, v(T, y)
=
0; T ): t, v(T, a)
=
ll(T- t),
which is equivalent to the solution of the equation for the Laplace-Carson transform v(p, y). For a stationary process
a ay
[12 aya (bv) -
J
v=p
av = pv,
for
v
y=a,
=
0 for
y
=
oo.
The transform of n( T 0 ) is ii(p)
1
= - - f(a)
p
TI 8(b-) Y
+ f(a)a(a).
Y=ct
The probability W(T) that the ordinate U_1(t) of a component of a multidimensional Markov process will remain within the interval (alpha, beta), if the initial distribution law of the components U_1(t), U_2(t), ..., U_n(t) is known, is defined by the equation

   W(T) = Integral_{alpha}^{beta} dy_1 Integral_{-inf}^{+inf} ... Integral_{-inf}^{+inf} w(tau, y_1, ..., y_n) dy_2 ... dy_n,   T = tau - t,

where w(tau, y_1, ..., y_n) is the probability density that the components of the process reach a volume element dy_1 ... dy_n at time tau under the assumption that during the interval (t, tau) the ordinate U_1(t) has never left the limits of the
interval (alpha, beta). The function w(tau, y_1, ..., y_n) is the solution of the second Kolmogorov equation with the conditions

   w(tau, y_1, ..., y_n) = f_0(y_1, ..., y_n)   for tau = t;

   w(tau, alpha, y_2, ..., y_n) = w(tau, beta, y_2, ..., y_n) = 0   for tau >= t.

The probability density f(T) of the sojourn time of U_1(t) in the interval (alpha, beta) is defined by the formula

   f(T) = - Integral_{alpha}^{beta} dy_1 Integral_{-inf}^{+inf} ... Integral_{-inf}^{+inf} [dw(tau, y_1, ..., y_n)/dtau] dy_2 ... dy_n,   T = tau - t.

In the last formula alpha can be -inf, or beta can be +inf; these cases correspond to the probabilities of sojourn below or above a given level.
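For a one-dimensional process the stationary solution mentioned above follows from setting the probability flow G to zero, which gives f(y) proportional to (1/b(y)) exp{Integral 2a(y)/b(y) dy}. The numerical sketch below is our illustration of this quadrature, not the book's; the drift a(y) = -alpha*y and constant b = 2D are arbitrarily chosen so that the exact stationary law is normal with variance D/alpha.

```python
import math

# Stationary density of the second Kolmogorov (Fokker-Planck) equation in 1D,
# by quadratures: f(y) = C/b(y) * exp( integral of 2a/b ),
# here with a(y) = -alpha*y and b(y) = 2*D (constant).
alpha, D = 2.0, 1.0
a = lambda y: -alpha * y
b = lambda y: 2.0 * D

step = 0.001
ys = [i * step for i in range(-6000, 6001)]
f = []
integral = 0.0
prev_y = ys[0]
prev_g = 2.0 * a(prev_y) / b(prev_y)
for y in ys:
    g = 2.0 * a(y) / b(y)
    integral += 0.5 * (g + prev_g) * (y - prev_y)   # trapezoidal rule
    f.append(math.exp(integral) / b(y))
    prev_y, prev_g = y, g

norm = sum(f) * step                     # normalize on the uniform grid
f = [v / norm for v in f]
variance = sum(y * y * v for y, v in zip(ys, f)) * step   # should equal D/alpha
```

Here the quadrature reproduces the Gaussian stationary law; a different a(y) or b(y), as in Problems 40.8-40.10 below, is handled by the same few lines.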
SOLUTION FOR TYPICAL EXAMPLES
Example 40.1 Write the Kolmogorov equations for a multidimensional Markov process whose components U_1(t), U_2(t), ..., U_n(t) satisfy the system of differential equations

   dU_j/dt = psi_j(t, U_1, ..., U_n) + c_j xi_j(t),   j = 1, 2, ..., n,

where the psi_j are known continuous functions, the c_j are known constants and the xi_j(t) are independent random functions with the property of "white noise"; that is,

   xi-bar_j(t) = 0,   K_{xi_j}(tau) = delta(tau).

SOLUTION. To write the Kolmogorov equations, it suffices to determine the coefficients a_j and b_{jl} of these equations. Denoting by X_j the ordinate of the random function U_j(t) at time t and by Y_j its ordinate at time tau, and integrating the initial equations, we obtain

   Y_j - X_j = Integral_{t}^{tau} psi_j[t_1, U_1(t_1), ..., U_n(t_1)] dt_1 + c_j Integral_{t}^{tau} xi_j(t_1) dt_1.

Considering the difference tau - t small, we can carry psi_j outside the first integral, with a precision up to second-order terms, setting t_1 = t, U_1 = X_1, U_2 = X_2, ..., U_n = X_n, which leads to

   Y_j - X_j = psi_j(t, X_1, ..., X_n)(tau - t) + c_j Integral_{t}^{tau} xi_j(t_1) dt_1;

that is,

   (Y_j - X_j)/(tau - t) = psi_j(t, X_1, ..., X_n) + [c_j/(tau - t)] Integral_{t}^{tau} xi_j(t_1) dt_1.

Assuming that the random variables X_1, ..., X_n take the values x_1, ..., x_n, finding the expectation of the last equality and passing to the limit as tau -> t, we obtain

   a_j(t, x_1, ..., x_n) = psi_j(t, x_1, ..., x_n).
Multiplying the expression for (Y_j - X_j) by that for (Y_l - X_l) and finding the expectation of the product obtained, we get M[(Y_j - X_j)(Y_l - X_l)] = 0 for j != l,

where beta_1 = (sqrt(2a)/c) beta. Solving this equation by the Fourier method and setting w(tau_1, y_1) = psi(tau_1) chi(y_1), we obtain for psi(tau_1) and chi(y_1) the equations

   (1/psi) dpsi/dtau_1 = -lambda^2,

   d2chi/dy_1^2 + 2 y_1 dchi/dy_1 + 2(lambda^2 + 1) chi = 0.

The first equation has the obvious solution psi(tau_1) = exp{-lambda^2 tau_1}, and the second one has solutions finite at infinity only if lambda^2 = n (n = 0, 1, 2, ...), when

   chi(y_1) = exp{-y_1^2} H_n(y_1),

where

   H_n(y_1) = (-1)^n exp{y_1^2} (d^n/dy_1^n) exp{-y_1^2}

is the Hermite polynomial. Consequently, the solution must be sought in the form

   w = exp{-y_1^2} Sum_{n=0}^{inf} a_n exp{-n tau_1} H_n(y_1).

Since for y_1 = 0, w must vanish for any tau_1, the series can contain only polynomials H_n(y_1) with odd indices (H_{2k+1}(0) = 0, H_{2k}(0) != 0 for any integer k >= 0). Therefore, the solution should be of the form

   w = exp{-y_1^2} Sum_{k=0}^{inf} a_{2k+1} exp{-(2k + 1) tau_1} H_{2k+1}(y_1).

To find the coefficients a_{2k+1} it is necessary to fulfill the initial condition; this condition is equivalent, for the range (-inf, +inf) of y_1, to the condition obtained by extending the initial density to an odd function of y_1. Multiplying both sides of the last equality by H_{2k+1}(y_1), integrating with respect to y_1 from -inf to +inf and considering that

   Integral_{-inf}^{+inf} exp{-x^2} H_n(x) H_m(x) dx = 2^n n! sqrt(pi) delta_{mn}   (delta_{nn} = 1, delta_{mn} = 0 for n != m),

we obtain

   a_{2k+1} = -(sqrt(2a)/c) [1/(2^{2k} (2k + 1)! sqrt(pi))] H_{2k+1}(beta_1).

Thus,

   w = -exp{-y_1^2} Sum_{k=0}^{inf} exp{-(2k + 1) tau_1} [sqrt(2a)/(c 2^{2k} (2k + 1)! sqrt(pi))] H_{2k+1}(beta_1) H_{2k+1}(y_1).

Returning to the variables y and tau, we find

   w(tau, y) = -(sqrt(2a)/(c sqrt(pi))) Sum_{k=0}^{inf} exp{-(2k + 1) a tau} exp{-2a y^2/c^2} [1/(2^{2k} (2k + 1)!)] H_{2k+1}((sqrt(2a)/c) beta) H_{2k+1}((sqrt(2a)/c) y).

Substituting the resulting series in the formula for W(tau) and considering that

   Integral_{-inf}^{0} exp{-2a y^2/c^2} H_{2k+1}((sqrt(2a)/c) y) (sqrt(2a)/c) dy = Integral_{-inf}^{0} exp{-y_1^2} H_{2k+1}(y_1) dy_1 = (-1)^{k+1} (2k)!/k!,

we obtain

   W(tau) = (1/sqrt(pi)) Sum_{k=0}^{inf} (-1)^k exp{-(2k + 1) a tau} [1/(2^{2k} (2k + 1) k!)] H_{2k+1}((sqrt(2a)/c) beta).
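The Hermite-polynomial facts used in this solution (the Rodrigues formula, H_{2k+1}(0) = 0, the orthogonality relation with weight exp(-x^2), and the half-line integral equal to (-1)^{k+1}(2k)!/k!) are easy to verify numerically from the three-term recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x). The sketch below is ours, not the book's.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
    H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def weighted_integral(n, m, lo=-8.0, hi=8.0, step=1e-3):
    """Trapezoidal approximation of Integral exp(-x^2) H_n(x) H_m(x) dx."""
    npts = int(round((hi - lo) / step))
    total = 0.0
    for i in range(npts + 1):
        x = lo + i * step
        w = 0.5 if i in (0, npts) else 1.0
        total += w * math.exp(-x * x) * hermite(n, x) * hermite(m, x)
    return total * step

odd_at_zero = hermite(5, 0.0)              # H_5(0) = 0
i33 = weighted_integral(3, 3)              # = 2^3 * 3! * sqrt(pi)
i23 = weighted_integral(2, 3)              # = 0 by orthogonality
half = weighted_integral(5, 0, lo=-8.0, hi=0.0)   # k = 2: (-1)^3 * 4!/2! = -12
```

The half-line value -12 is the k = 2 case of the identity invoked just before the final W(tau) series.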
PROBLEMS

40.1 Find the coefficients of the Kolmogorov equations for an n-dimensional Markov process if its components U_1(t), U_2(t), ..., U_n(t) are determined by the system of equations

   dU_j/dt = psi_j(t, U_1, ..., U_n) + phi_j(t, U_1, ..., U_n) xi_j(t),   j = 1, 2, ..., n,

where psi_j and phi_j are known continuous functions of their variables and the xi_j(t) are independent random functions with the properties of "white noise":

   xi-bar_j = 0,   K_{xi_j}(tau) = delta(tau).

40.2 Given the system of differential equations for U_j (j = 1, 2, ..., n), where the psi_j are known functions of their arguments and Z(t) is a normal stationary stochastic process with spectral density

   S_z(omega) = c^2/(omega^2 + alpha^2)^3,

add to the multidimensional process U_1(t), ..., U_n(t) the necessary number of components so that the process obtained is Markovian. Write the Kolmogorov equations for it. 40.3 Suppose U(t) is a stationary normal process with spectral density

   S_u(omega) = c^2 omega^2 / [(omega^2 + alpha^2 + beta^2)^2 - 4 beta^2 omega^2],

where c, alpha and beta are constants. Show that U(t) can be considered as a component of a multidimensional Markov process. Determine the number of dimensions of this process and the coefficients of the Kolmogorov equations.
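The recipe behind Problems 40.1-40.5 and Example 40.1 (form the increment ratios over a small interval and average) can be tried numerically. The sketch below is our illustration, not the book's: it simulates dU = -alpha*U dt + c*xi(t) dt by the Euler scheme with the chosen values alpha = c = 1, then recovers the drift coefficient a(x) = -alpha*x (as a regression slope) and the diffusion coefficient b = c^2 from the increments.

```python
import math
import random

random.seed(3)
alpha, c = 1.0, 1.0
dt, nsteps = 0.001, 200_000

u = 0.0
num = den = sumsq = 0.0
for _ in range(nsteps):
    # Euler increment: drift a(u)*dt plus white-noise term c*sqrt(dt)*N(0,1)
    du = -alpha * u * dt + c * math.sqrt(dt) * random.gauss(0.0, 1.0)
    num += u * du          # accumulates E[u dU] ~ -alpha * E[u^2] dt
    den += u * u * dt
    sumsq += du * du       # accumulates E[(dU)^2] ~ c^2 dt
    u += du

drift_slope = num / den            # estimate of -alpha
diffusion = sumsq / (nsteps * dt)  # estimate of b = c^2
```

Both estimates match the Kolmogorov coefficients a(x) = -alpha*x and b = c^2 that the analytic limit procedure yields for this equation.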
40.4 Determine the coefficients of the Kolmogorov equations of a multidimensional Markov process defined by the system of equations

   dU_j/dt = phi_j(t, U_1, ..., U_n) + Z_j(t),   j = 1, 2, ..., n,

where

   Z-bar_j(t) = 0,   M[Z_j(t) Z_l(t + tau)] = psi_{jl}(t) delta(tau),   j, l = 1, 2, ..., n,

and phi_j and psi_{jl} are known continuous functions of their arguments. 40.5 The random functions U_j(t) satisfy a system of differential equations (j = 1, 2, ..., r) in which the phi_j are known continuous functions of their arguments and Z(t) is a stationary normal random function with rational spectral density

   S_z(omega) = |P_m(i omega)|^2 / |Q_n(i omega)|^2,   n > m,

where the polynomials

   P_m(x) = beta_0 x^m + beta_1 x^{m-1} + ... + beta_m   and   Q_n(x)

have roots only in the upper half-plane. Show that U_1(t), ..., U_r(t) can be considered as components of a multidimensional Markov process; determine the number of dimensions and the coefficients of the Kolmogorov equations of this process. 40.6 Show that if the Kolmogorov equations,
where a_j, a_{jm}, b_{jm} (j, m = 1, 2, ..., n) are constants, hold for a multidimensional Markov process, then the stochastic process satisfies the corresponding system of linear differential equations (j = 1, 2, ..., n). 40.7 Derive the system of differential equations for the components of a two-dimensional Markov process U_1(t), U_2(t) if the conditional probability density f(t, x_1, x_2; tau, y_1, y_2) satisfies the equation

   df/dtau + (1/mu)[y_2 df/dy_1 + phi(y_1) df/dy_2] - (1/2)[mu d2(y_1 f)/dy_1^2 + K d2f/dy_2^2] = 0.
40.8 Determine the distribution law for the ordinate of a random function U(t) in the stationary mode if

   d2U/dt2 + 2 alpha^2 (dU/dt) + phi(U) = xi(t),

where alpha is a constant, phi(U) is a given function that ensures the existence of a stationary mode, and xi-bar(t) = 0. Solve the problem for the particular case in which phi(U) = beta^2 U^3. 40.9 Determine the stationary distribution law for the ordinate of a random function U(t) if

   dU/dt + phi(U) = psi(U) xi(t),

where phi(U) and psi(U) are known functions and xi(t) represents "white noise" with zero expectation and unit variance. 40.10 A diode detector consists of a nonlinear element with volt-ampere characteristic F(v) connected in series with a parallel RC circuit. A random input signal xi(t) is fed to the detector. Determine the stationary distribution law of the voltage U(t) across the RC circuit if the equation of the detector has the form

   dU/dt + (1/RC) U = (1/C) F(xi - U),

where R and C are constants and xi(t) is a normal stationary function for which xi-bar(t) = 0. Solve the problem for the particular case in which

   F(v) = kv for v >= 0,   F(v) = 0 for v < 0.

40.11 Determine the distribution law for the ordinate of a random function U(t) at time tau > 0 if

   dU/dt - (a^2/2) U = a xi(t),   K_xi(tau) = sigma delta(tau),

and at t = 0 the ordinate U(0) has the probability density

   f_0(x) = (x/sigma^2) exp{-x^2/(2 sigma^2)}   (x >= 0).

40.12 An input signal representing a normal stochastic process xi(t) with a small correlation time is received by an exponential detector whose voltage U(t) is defined by the equation

   dU/dt + (1/RC) U = (i_0/C) e^{a(xi - U)},

where R, C, a, i_0 are the constants of the detector, xi-bar = 0 and

   K_xi(tau) = sigma^2 e^{-alpha |tau|}.

Using the approximate representation e^{a xi(t)} ~ M[e^{a xi(t)}],

Determine the conditional distribution law of the ordinate of the stochastic process U(t) at time tau >= t if at time t, U(t) = x; xi-bar(t) = 0, K_xi(tau) = c^2 delta(tau); c, h, k are known constants. 40.17 The equation defining the operation of an element of a system of automatic control has the form

   dU/dt = -a sgn U + c xi(t),

where a and c are constants and

   xi-bar(t) = 0,   K_xi(tau) = delta(tau).

Write the Kolmogorov equation for the determination of the conditional probability density f(t, x; tau, y). 40.18 A moving charged particle is under the influence of three forces directed parallel to the velocity vector U(t): the force created by the electric field of intensity xi(t), the accelerating force created by a field whose intensity can be taken inversely proportional to the velocity of the particle, and a friction force proportional to the velocity. The equation of motion has the form

   dU/dt = -beta U + gamma/U + alpha xi(t).

Find the probability density f(t, x; tau, y) for the magnitude of the velocity U(t) if alpha, beta and gamma are constants and xi-bar(t) = 0, K_xi(tau) = delta(tau); the mass of the particle is m. 40.19 A radio receiver can detect a random input noise U(t) only if the absolute value of the signal is greater than the sensitivity level u_0 of the receiver. Determine the probability W(T) that during time T no false signal will be received if U(t) is a normal stochastic process with zero expectation and correlation function

   K_u(tau) = sigma^2 e^{-alpha |tau|}.
For the variance of the random variable sigma~_2 we have

   D[sigma~_2] = M[(sigma~_2)^2] - sigma^2,

where M[(sigma~_2)^2] = k^2 M[(Sum_{i=1}^{n} |x_i - x-bar|)^2]. Let z_i = x_i - (1/n) Sum_{j=1}^{n} x_j. Since z_i is a linear function of normal random variables, it also obeys a normal distribution law, with parameters

   z-bar_i = 0,

   D[z_i] = D[x_i - (1/n) Sum_{j=1}^{n} x_j] = D[((n - 1)/n) x_i - (1/n) Sum_{j != i} x_j]   (j != i).

Therefore,

   D[z_i] = ((n - 1)/n)^2 sigma^2 + ((n - 1)/n^2) sigma^2 = ((n - 1)/n) sigma^2.
METHODS OF DATA PROCESSING
Passing to polar coordinates, we find

   M[|z_i| |z_j|] = (2 sigma_z^2 / pi) [sqrt(1 - r^2) + r arcsin r].

Here r is the correlation coefficient between z_i and z_j, r = -1/(n - 1).
Finally we get

   D[sigma~_2] = M[(sigma~_2)^2] - sigma^2 = (sigma^2/n) [pi/2 + sqrt(n(n - 2)) - n + arcsin (1/(n - 1))].

The ratio between the variances of the random variables sigma~_2 and sigma~_1 for different n is shown in Table 28.

TABLE 28

   n                        5      10     20     50
   D[sigma~_2]/D[sigma~_1]  1.053  1.096  1.150  1.170
The solution of this example implies that the estimate

   sigma~_1 = sqrt[(1/(n - 1)) Sum_{i=1}^{n} (x_i - x-bar)^2]

has a smaller variance than the estimate sigma~_2 based on the sum of absolute deviations; that is, the estimate sigma~_1 is more efficient. Similarly one can solve Problems 41.7, 41.12 and 41.20. Example 41.3 From the current production of an automatic boring machine, a sample of 200 cylinders is selected. The measured deviations of the diameters of these cylinders from the rated value are given in Table 29. Determine the estimates for the expectation, variance, asymmetry and excess of these deviations.
41. MOMENTS OF RANDOM VARIABLES
SOLUTION. To simplify the intermediary calculations, we introduce the random variable

   z_j = (x_j* - C)/h,

where as "false zero" we take C = 2.5 microns and the class width is h = 5 microns.

TABLE 29

   Class No.                   1      2      3      4      5      6      7      8      9      10
   Limits (microns), from    -20    -15    -10     -5      0     +5    +10    +15    +20    +25
   Limits (microns), to      -15    -10     -5      0     +5    +10    +15    +20    +25    +30
   Mean value x_j* (microns) -17.5  -12.5   -7.5   -2.5   +2.5   +7.5  +12.5  +17.5  +22.5  +27.5
   Size                        7     11     15     24     49     41     26     17      7      3
   Frequency p_j*            0.035  0.055  0.075  0.120  0.245  0.205  0.130  0.085  0.035  0.015

Let us determine the estimates of the first four moments of the random variable, taking into account the Sheppard corrections. The calculations are summarized in Table 30.
TABLE 30

   Class No.  p_j*    x_j*    z_j   z_j^2  z_j^3  z_j^4   p_j* z_j   p_j* z_j^2   p_j* z_j^3   p_j* z_j^4
   1          0.035  -17.5    -4     16    -64     256     -0.140      0.560       -2.240        8.960
   2          0.055  -12.5    -3      9    -27      81     -0.165      0.495       -1.485        4.455
   3          0.075   -7.5    -2      4     -8      16     -0.150      0.300       -0.600        1.200
   4          0.120   -2.5    -1      1     -1       1     -0.120      0.120       -0.120        0.120
   5          0.245   +2.5     0      0      0       0      0          0            0            0
   6          0.205   +7.5     1      1      1       1      0.205      0.205        0.205        0.205
   7          0.130  +12.5     2      4      8      16      0.260      0.520        1.040        2.080
   8          0.085  +17.5     3      9     27      81      0.255      0.765        2.295        6.885
   9          0.035  +22.5     4     16     64     256      0.140      0.560        2.240        8.960
   10         0.015  +27.5     5     25    125     625      0.075      0.375        1.875        9.375

   Sums:  A = Sum p_j* z_j = 0.36;  B = Sum p_j* z_j^2 = 3.90;  D = Sum p_j* z_j^3 = 3.21;  E = Sum p_j* z_j^4 = 42.24.
METHODS OF DATA PROCESSING
Taking into account the Sheppard corrections, we obtain:

x̃ ≈ hA + C = 4.30 μ,
D̃[X] ≈ h²(B − A² − 1/12) = 92.25 μ²,
μ̃₃[X] ≈ h³(D − 3AB + 2A³) = −113.75 μ³,
μ̃₄[X] ≈ h⁴[E − 4AD + 6A²B − 3A⁴ − (1/2)(B − A²) + 7/240] = 24,215.62 μ⁴,
Sk[X] ≈ μ̃₃[X]/σ̃ₓ³ = −113.75/886.0 = −0.128,
Ẽx[X] ≈ μ̃₄[X]/σ̃ₓ⁴ − 3 = 24,215.62/8510.06 − 3 = −0.15.
For the same variables, but without the Sheppard corrections, we have (see Examples 43.2 and 43.4):

x̃ ≈ 4.30 μ,  D̃[X] ≈ 94.26 μ²,  μ̃₄[X] ≈ 25,375.00 μ⁴,  Sk[X] ≈ −0.125,  Ẽx[X] ≈ −0.145.

Problems 41.5, 41.8, 41.18 and 41.19 can be solved in a similar manner.
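The computation of Example 41.3 can be sketched in code. Everything below is taken from the worked data (the class midpoints and sizes of Table 29, the false zero C = 2.5, the class width h = 5); only the variable names are mine, and the last digits differ slightly from the printed values because the book rounds the tabulated sums.

```python
# Sketch: grouped-data moments with Sheppard's corrections (Example 41.3).
midpoints = [-17.5, -12.5, -7.5, -2.5, 2.5, 7.5, 12.5, 17.5, 22.5, 27.5]
counts = [7, 11, 15, 24, 49, 41, 26, 17, 7, 3]   # sample of n = 200 cylinders
C, h, n = 2.5, 5.0, sum(counts)

z = [(x - C) / h for x in midpoints]             # coded values z_j
p = [m / n for m in counts]                      # frequencies p_j*
A = sum(pj * zj for pj, zj in zip(p, z))         # Σ p* z
B = sum(pj * zj**2 for pj, zj in zip(p, z))      # Σ p* z²
D3 = sum(pj * zj**3 for pj, zj in zip(p, z))     # Σ p* z³
E4 = sum(pj * zj**4 for pj, zj in zip(p, z))     # Σ p* z⁴

mean = h * A + C
var = h**2 * (B - A**2 - 1/12)                   # Sheppard correction −h²/12
mu3 = h**3 * (D3 - 3*A*B + 2*A**3)
mu4 = h**4 * (E4 - 4*A*D3 + 6*A**2*B - 3*A**4
              - 0.5*(B - A**2) + 7/240)          # Sheppard: −m₂/2 + 7/240
skew = mu3 / var**1.5
excess = mu4 / var**2 - 3
```

With these data the sketch reproduces x̃ = 4.30 μ, D̃[X] ≈ 92.2 μ², Sk ≈ −0.128, Ex ≈ −0.15.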
PROBLEMS
41.1 In 12 independent measurements of a base of length 232.38 m., which were performed with the same instrument, the following results were obtained: 232.50, 232.48, 232.15, 232.53, 232.45, 232.30, 232.48, 232.05, 232.45, 232.60, 232.47 and 232.30 m. Assuming that the errors obey a normal distribution and contain no systematic errors, determine the unbiased estimate for the standard deviation.

41.2 The following are the results of eight independent measurements performed with an instrument with no systematic error: 369, 378, 315, 420, 385, 401, 372 and 383 m. Determine the unbiased estimate for the variance of the errors of measurement if (a) the length of the base being measured is known, x̄ = 375 m.; (b) the length of the measured base is unknown.

41.3 In processing the data obtained in 15 tests performed with a model airplane, the following values of its maximal velocity were obtained: 422.2, 418.7, 425.6, 420.3, 425.8, 423.1, 431.5, 428.2, 438.3, 434.0, 411.3, 417.2, 413.5, 441.3 and 423.0 m./sec. Determine the unbiased estimates for the expectation and standard deviation of the maximal velocity, assumed to obey a normal distribution law.

41.4 In processing the data of six tests performed with a motorboat, the following values of its maximal velocity were obtained: 27, 38, 30, 37, 35 and 31 m./sec. Determine the unbiased estimates for
the expectation and standard deviation of the maximal velocity, assuming that the maximal velocity of the boat obeys a normal distribution law.

41.5 The sensitivity of a television set to video signals is characterized by the data of Table 31.

TABLE 31

xⱼ*, μV | mⱼ     xⱼ*, μV | mⱼ     xⱼ*, μV | mⱼ
  200   | 10       350   | 20       550   |  3
  225   |  1       375   | 10       600   | 19
  250   | 26       400   | 29       625   |  3
  275   |  8       425   |  5       650   |  1
  300   | 23       450   | 26       700   |  6
  325   |  9       500   | 24       800   |  4
Find the estimates for the expectation and standard deviation of the sensitivity of the set.

41.6 A number n of independent experiments are performed to determine the frequency of an event A. Determine the value of P(A) that maximizes the variance of the frequency.

41.7 A number n of independent measurements of the same unknown constant quantity are performed. The errors obey a normal distribution law with zero expectation. To determine the estimates of the variance from the experimental data, the following formulas are applied:

σ̃₁² = (1/n) Σⱼ₌₁ⁿ (xⱼ − x̄)²,   σ̃₂² = (1/(n − 1)) Σⱼ₌₁ⁿ (xⱼ − x̄)².

Find the variances of the random variables σ̃₁² and σ̃₂².

41.8 The experimental values of a random variable X are divided into groups. The average value xⱼ* for the jth group and the number mⱼ of elements in the jth group are given in Table 32.

TABLE 32

xⱼ* | mⱼ     xⱼ* | mⱼ     xⱼ* | mⱼ
 44 |   7     47 | 48      50 |  1
 45 |  18     48 | 33      52 |  2
 46 | 120     49 |  5      58 |  1
Find the estimates for the asymmetry coefficient and the excess.

41.9 A sample x₁, x₂, …, xₙ selected from a population is processed by differences in order to determine the estimate of the variance. The formula used for processing the results of the experiment is

σ̃² = k Σⱼ₌₁ⁿ⁻¹ (xⱼ₊₁ − xⱼ)².

How large should k be so that σ̃² is an unbiased estimate of σ² if the random variable X is normal?

41.10 Let x₁, x₂, …, xₙ be the outcomes of independent measurements of an unknown constant. The errors of measurement obey the same normal distribution law. The standard deviation is determined by the formula

σ̃ = k Σⱼ₌₁ⁿ |xⱼ − x̄|,  where  x̄ = (1/n) Σⱼ₌₁ⁿ xⱼ.

Determine the value of k for which σ̃ is an unbiased estimate of σ.

41.11 Independent measurements of a known constant x are x₁, x₂, …, xₙ. The errors obey the same normal distribution law. To process the results of these observations in order to obtain the estimate of the standard deviation of the errors, the following formula is used:

σ̃ = k Σⱼ₌₁ⁿ |xⱼ − x|.

How large should k be so that the estimates are unbiased for (a) the standard deviation of the errors, (b) the variance of the errors?

41.12 Independent measurements x₁, x₂, …, xₙ with different accuracies of the same unknown constant are made. The estimate of the quantity being measured is determined from the formula

x̄ = Σⱼ₌₁ⁿ Aⱼxⱼ / Σⱼ₌₁ⁿ Aⱼ.

How large should Aⱼ be so that the variance of x̄ is minimal if the standard deviation of the errors of the jth measurement is σⱼ?

41.13 A system of two random variables with a normal distribution in the plane is subjected to n independent experiments, in which the values (xₖ, yₖ)
For a sufficiently large k (practically, k greater than 30), the limits of the confidence interval are determined approximately by normal-distribution formulas, where ε₀ is the solution of the equation α = 2Φ(ε₀). For large n (n > 50) and small r̃ (r̃ < 0.5), the limits r_H, r_B of the confidence interval for r are given approximately by analogous formulas, ε₀ again being the solution of α = 2Φ(ε₀). Computing according to the formulas at the beginning of this solution, we have (see Table 38)

ν₁ = 4n / (√(4n − 1) + ε₀)²,   ν₂ = 4n / (√(4n − 1) − ε₀)².
42. CONFIDENCE LEVELS AND CONFIDENCE INTERVALS

TABLE 38

n  |  20  |  30  |  40
ε₀ | 1.65 | 1.65 | 1.65
ν₁ | 0.72 | 0.76 | 0.79
ν₂ | 1.53 | 1.40 | 1.33
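Table 38 can be reproduced with a short sketch of the stated approximation; the only assumption is ε₀ = 1.65, the value from Table 8T for the level α = 0.9.

```python
# Sketch: the factors ν₁ = 4n/(√(4n−1) + ε₀)², ν₂ = 4n/(√(4n−1) − ε₀)²
# that multiply the estimate of σ to give the confidence limits.
from math import sqrt

def nu_limits(n, eps0=1.65):
    root = sqrt(4 * n - 1)
    return 4 * n / (root + eps0) ** 2, 4 * n / (root - eps0) ** 2

table = {n: nu_limits(n) for n in (20, 30, 40)}
```

Rounding to two decimals gives exactly the entries of Table 38.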
The quantity ε₀ is determined from Table 8T for the level α = 0.9. Figure 35 gives the graph of ν₁ and ν₂ as functions of n for the confidence level α = 0.9.

Example 42.5 Three types of devices (A, B and C) are subjected to 50 independent trials during a certain time interval; the numbers of failures are recorded in Table 39. Find the limits of the confidence intervals for the
TABLE 39

Number of failures | Number of observations in which this number of failures occurred
                   | for type A | for type B | for type C
        0          |     38     |      4     |     50
        1          |     12     |     16     |      0
        2          |      0     |     20     |      0
        3          |      0     |      6     |      0
        4          |      0     |      4     |      0
expected number of failures of each type during a selected time interval if the confidence level a = 0.9, and the number of failures for each type obeys a Poisson distribution law during this interval.
FIGURE 35. The quantities ν₁ and ν₂ as functions of n for the confidence level α = 0.9.
SOLUTION. To determine the limits of the confidence interval for the devices of type A, we make use of a chi-square distribution. From Table 18T, for k = 24 degrees of freedom and probability (1 + α)/2 = 0.95, we find χ²_q0 = 13.8; for k = 26 and probability δ = (1 − α)/2 = 0.05, we find χ²_q = 38.9. The upper limit a₂ and the lower limit a₁ of the confidence interval for the devices of type A are equal to

a₂ = χ²_q/2N = 38.9/100 = 0.389,   a₁ = χ²_q0/2N = 13.8/100 = 0.138.

To determine the limits of the confidence interval for the expected number of devices of type B that failed, one should also use the chi-square distribution, for k = 180 and k = 182 degrees of freedom. Table 18T contains data only for k ≤ 30. Therefore, considering that for a number of degrees of freedom greater than 30 a chi-square distribution practically coincides with a normal one, we have

a₁ ≈ (√(4Σmᵢ − 1) − ε₀)²/4N = (√(4·90 − 1) − 1.64)²/200 = 1.50,
a₂ ≈ (√(4Σmᵢ − 1) + ε₀)²/4N = (√(4·90 − 1) + 1.64)²/200 = 2.12.

For devices of type C, Σmᵢ = 0 and, therefore, the lower limit of the confidence interval is certainly zero. From Table 18T, for k = 2 and probability 1 − α = 0.1, we determine χ²_q = 4.6 and calculate the value of the upper limit: a₂ = χ²_q/2N = 4.6/100 = 0.046.
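The type-B computation above is easy to check with a sketch of the normal approximation; ε₀ = 1.64 is the example's value for the level α = 0.9.

```python
# Sketch: approximate Poisson confidence limits for many degrees of freedom,
# a ≈ (√(4·Σm − 1) ∓ ε₀)² / (4N), as used for the type-B devices.
from math import sqrt

def poisson_limits(total_failures, N, eps0=1.64):
    root = sqrt(4 * total_failures - 1)
    lower = (root - eps0) ** 2 / (4 * N)
    upper = (root + eps0) ** 2 / (4 * N)
    return lower, upper

a1, a2 = poisson_limits(90, 50)   # 90 failures observed in N = 50 trials
```

The result matches the values a₁ = 1.50, a₂ = 2.12 obtained in the example.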
Example 42.6 Ten items out of thirty tested are defective. Determine the limits of the confidence interval for the probability of a defect if the confidence level is 0.95 and the number of defective items obeys a binomial distribution law. Compare the results of the exact and approximate solutions.

SOLUTION. The exact solution can be obtained directly from Table 30T. For x = 10, n − x = 20 and a confidence level equal to 95 per cent, we have p₁ = 0.173, p₂ = 0.528.

For large np(1 − p), the equations from which we determine the limits of the confidence interval for p can be written approximately by using the normal distribution:

Σ_{x=m}^{n} Cₙˣ p₁ˣ (1 − p₁)ⁿ⁻ˣ ≈ (1/(σ₁√(2π))) ∫_{m−1/2}^{n+1/2} exp{−(z − z₁)²/2σ₁²} dz = (1 − α)/2,

Σ_{x=0}^{m} Cₙˣ p₂ˣ (1 − p₂)ⁿ⁻ˣ ≈ (1/(σ₂√(2π))) ∫_{−1/2}^{m+1/2} exp{−(z − z₂)²/2σ₂²} dz = (1 − α)/2,

where zᵢ = npᵢ, σᵢ² = npᵢ(1 − pᵢ). From this,

p₁,₂ = (1/(n + ε₀²)) [m ∓ 1/2 + ε₀²/2 ∓ ε₀ √((m ∓ 1/2)(n − m ± 1/2)/n + ε₀²/4)],

where p̃ = m/n = 1/3 and the quantity ε₀ can be determined from Table 8T for the level α = 0.95:

p₁ ≈ (1/33.84)[10 − 0.5 + 1.92 − 1.96 √(9.5·20.5/30 + 0.96)] = 0.180,
p₂ ≈ (1/33.84)[10 + 0.5 + 1.92 + 1.96 √(10.5·19.5/30 + 0.96)] = 0.529.

An approximation of the same kind is given by the formula

arcsin √p₁,₂ ≈ arcsin √((m ∓ 1/2)/n) ∓ ε₀/(2√n),

which, when applied, leads to p₁ ≈ 0.166, p₂ ≈ 0.526.

By a rougher approximation, p₁ and p₂ can be found if one considers that the frequency p̃ is approximately normally distributed about p with variance p̃(1 − p̃)/n. In this case p₁,₂ ≈ p̃ ∓ ε, where ε is the solution of the equation α = 2Φ(ε√n/√(p̃(1 − p̃))). Using Table 8T for α = 0.95, we get

ε√n/√(p̃(1 − p̃)) = 1.96,  hence  ε = 0.169,

and it follows that p₁ ≈ 0.333 − 0.169 = 0.164, p₂ ≈ 0.333 + 0.169 = 0.502.
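The two sharper approximations of Example 42.6 can be sketched as follows; the numbers n = 30, m = 10 and ε₀ = 1.96 are those of the example.

```python
# Sketch: approximate binomial confidence limits (normal approximation with
# continuity correction, and the arcsine-transformation variant).
from math import sqrt, asin, sin

n, m, e0 = 30, 10, 1.96
den = n + e0**2

# p_{1,2} = [m ∓ ½ + ε₀²/2 ∓ ε₀√((m ∓ ½)(n − m ± ½)/n + ε₀²/4)] / (n + ε₀²)
p1 = (m - 0.5 + e0**2/2
      - e0*sqrt((m - 0.5)*(n - m + 0.5)/n + e0**2/4)) / den
p2 = (m + 0.5 + e0**2/2
      + e0*sqrt((m + 0.5)*(n - m - 0.5)/n + e0**2/4)) / den

# arcsin √p ≈ arcsin √((m ∓ ½)/n) ∓ ε₀/(2√n)
q1 = sin(asin(sqrt((m - 0.5)/n)) - e0/(2*sqrt(n)))**2
q2 = sin(asin(sqrt((m + 0.5)/n)) + e0/(2*sqrt(n)))**2
```

Both variants land close to the exact tabulated interval (0.173, 0.528).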
P(K > K_α) = α, where α is a sufficiently small quantity (the significance level) whose value is determined by the nature of the problem. If the experimental value of the discrepancy measure K_q is greater than K_α, the deviation from the theoretical distribution law is considered significant and the assumption regarding the form of the distribution is rejected (the probability of rejecting a correct assumption about the form of the distribution is in this case equal to α). If K_q ≤ K_α, then the experimental data agree with the hypothesis made about the form of the distribution law. The test of a hypothesis about the character of the distribution by means of goodness-of-fit procedures can be performed in another order: from the value K_q one determines the probability α_q = P(K < K_q). If α_q < α, the deviations are significant; if α_q ≥ α, the deviations are insignificant. Values of α_q very close to 1 (a "too good" fit) correspond to an event with very small probability of occurrence and indicate that the sample is defective (for example, elements with large deviations from the average were eliminated from the initial sample without further justification).

In different goodness-of-fit tests, different quantities are taken as measures of the discrepancy between the statistical and theoretical distributions. In the chi-square (Pearson) test, the discrepancy measure is the quantity χ², whose experimental value χ²_q is given by the formula

χ²_q = Σᵢ₌₁ˡ (mᵢ − npᵢ)²/(npᵢ),

where l is the number of classes into which all experimental values of X are divided, n is the sample size, mᵢ is the number in the ith class and pᵢ is the probability, computed from the theoretical distribution law, that the random variable X falls in the ith class interval. As n → ∞, the distribution of χ²_q, regardless of the distribution of the random variable X, tends to a chi-square distribution with k = l − r − 1 degrees of freedom, where r is the number of parameters of the theoretical distribution law computed from the given sample. The values of the probabilities P(χ² ≥ χ²_q) as functions of χ²_q and k are given in Table 17T. To apply the chi-square test in the general case it is necessary that the sample size n and the class numbers mᵢ be sufficiently large (practically, it is considered sufficient that n ≥ 50–60, mᵢ ≥ 5–8).

The Kolmogorov test of goodness-of-fit is applicable only if the parameters of the theoretical distribution law are not determined from the data of the sample. The largest value D of the absolute difference between the statistical and theoretical distribution functions is selected as the discrepancy measure. The experimental value D_q of D is determined by the formula

D_q = max |F̃(x) − F(x)|,

where F̃ and F are the statistical and the theoretical distribution functions, respectively.
As n → ∞, the distribution law of λ = √n·D, regardless of the form of the distribution of the random variable X, tends to the Kolmogorov distribution. The values of the probabilities α_q = P(D ≥ D_q) = P(λ) = 1 − K(λ) are included in Table 25T.

The Kolmogorov test can also serve as a statistical test of the hypothesis that two samples of sizes n₁ and n₂ arise from a single population. In this case α_q = P(λ), where P(λ) is given in Table 25T, but

λ = √(n₁n₂/(n₁ + n₂)) · D_q,   D_q = max |F̃₁(x) − F̃₂(x)|,

where F̃₁(x) and F̃₂(x) are the statistical distribution functions for the first and second samples.

The form of the theoretical distribution is chosen either on the basis of data about the random variables selected or by qualitative analysis of the form of the distribution histogram. If the form of the distribution cannot be established from general considerations, then it is approximated by a distribution whose first few moments are the same as the estimates obtained from the sample. For approximating expressions one can use Pearson's curves (Gnedenko and Khinchin, 1962), which involve the first four moments, or the infinite Edgeworth series (Gnedenko and Khinchin, 1962). Here, for a small deviation of the statistical distribution from the normal, one can retain only the first terms, forming a Charlier-A series.
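The chi-square discrepancy measure described above can be sketched on hypothetical data (120 throws of a die tested against the uniform law; the counts below are invented for illustration).

```python
# Sketch: χ²_q = Σ (mᵢ − npᵢ)²/(npᵢ) and the degrees of freedom k = l − r − 1.
observed = [18, 22, 16, 25, 19, 20]   # class counts mᵢ (hypothetical)
n = sum(observed)                     # sample size, here 120
p = [1/6] * 6                         # theoretical class probabilities pᵢ
chi2 = sum((mi - n*pi)**2 / (n*pi) for mi, pi in zip(observed, p))
k = len(observed) - 1                 # r = 0: no parameters fitted from data
```

The value χ²_q = 2.5 with k = 5 would then be referred to Table 17T (or any chi-square tail table) to obtain α_q.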
The controlled quantity may be the proportion of defective items in the lot (with the limits l₀ and l₁ > l₀), the average value of t or λ (with the limits t₀ and t₁ > t₀, or λ₀ and λ₁ > λ₀), or (for the homogeneity control of the production) the variance of the parameter in the lot (with the limits σ₀² and σ₁² > σ₀²). In the case in which the quality of a lot improves with an increase of the parameter, the corresponding inequalities are reversed. There are different methods of control: single sampling, double sampling and sequential analysis. The determination of the size of the sample and of the criteria for acceptance or rejection of a lot according to given values of α and β constitutes planning. In the case of single sampling, one determines the sample size n₀ and the acceptance number ν; if the value of the controlled parameter in the sample is ≤ ν, the lot is accepted; if it is > ν, the lot is rejected. If one controls the number (proportion) of defective items in a sample of
45. STATISTICAL METHODS OF QUALITY CONTROL
size n₀, the total number of defective items in the lot being L and the size of the lot being N, then

α = P(M > ν | L = l₀) = 1 − Σ_{m=0}^{ν} C_{l₀}^m C_{N−l₀}^{n₀−m} / C_N^{n₀},

where the values Cₙᵐ can be taken from Table 1T or computed with the aid of Table 2T. For n₀ ≤ 0.1N, it is possible to pass approximately to a binomial distribution law:

α = 1 − Σ_{m=0}^{ν} C_{n₀}^m p₀^m (1 − p₀)^{n₀−m} = 1 − P(p₀, n₀, ν),

where p₀ = l₀/N, p₁ = l₁/N, and the values of P(p, n, d) can be taken from Table 4T or computed with the aid of Tables 2T and 3T. Moreover, if p₀ < 0.1, p₁ < 0.1, then letting a₀ = n₀p₀, a₁ = n₀p₁ (passing to the Poisson distribution law), we obtain

β = 1 − Σ_{m=ν+1}^{∞} (a₁^m/m!) e^{−a₁} = P(χ² ≥ χ²_q1),

where the sums Σ_{m=ν+1}^{∞} (a^m/m!) e^{−a} are given in Table 7T, and the probabilities P(χ² ≥ χ²_q) can be obtained from Table 17T for k = 2(ν + 1) degrees of freedom, with χ²_q0 = 2a₀, χ²_q1 = 2a₁ (similarly, α = Σ_{m=ν+1}^{∞} (a₀^m/m!) e^{−a₀} = 1 − P(χ² ≥ χ²_q0)). If 50 ≤ n₀ ≤ 0.1N and n₀p₀ ≥ 4, then one may use the more convenient approximate formulas based on the normal distribution.

In the case of double sampling, one determines the sizes n₁ and n₂ of the two samples and the acceptance numbers ν₁, ν₂, ν₃. If the value of the controlled parameter in the first sample is ≤ ν₁, the lot is accepted; if it is > ν₂, the lot is rejected; in the other cases the second sample is taken. If the value of the controlled parameter found for the combined sample of size (n₁ + n₂) is ≤ ν₃, then the lot is accepted; otherwise, it is rejected. If one controls by the number of defective items in a sample, the probabilities α and β are computed from the corresponding hypergeometric distributions.

As in the case of single sampling, in the presence of certain relations among the numbers n₁, n₂, N, l₀, l₁, an approximate passage is possible from the hypergeometric distribution to a binomial, normal or Poisson distribution law.
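The binomial formulas for a single-sampling plan can be sketched directly; the plan (n₀ = 47, ν = 2, p₀ = 0.02, p₁ = 0.10) is the one used later in Example 45.2, and the exact sums differ slightly from that example's table-interpolated values.

```python
# Sketch: exact single-sampling risks α and β under the binomial law.
from math import comb

def risks(n0, nu, p0, p1):
    cdf = lambda p: sum(comb(n0, m) * p**m * (1 - p)**(n0 - m)
                        for m in range(nu + 1))
    return 1 - cdf(p0), cdf(p1)   # α = P(reject | p₀), β = P(accept | p₁)

alpha, beta = risks(47, 2, 0.02, 0.10)
```

The exact values come out near 0.068 and 0.138, against 0.0686 and 0.1350 obtained by interpolation in Table 4T.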
If one controls by the average value x̄ of the parameter in a sample, then for a normal distribution of the parameter of one item with given variance σ², in the particular case when n₁ = n₂ = n, ν₁ = ν₃ = ν, ν₂ = ∞, we have

α = 1 − p₁ − 0.5(p₂ − p₁²),   β = p₃ + 0.5(p₄ − p₃²),

where

p₁ = 0.5 + Φ((ν − ξ₀)√n/σ),   p₂ = 0.5 + Φ((ν − ξ₀)√(2n)/σ),

and p₃, p₄ are given by the same expressions with ξ₁ in place of ξ₀.
For ξ₀ > ξ₁, the inequality signs appearing in the conditions of acceptance and rejection are reversed and, in the formulas for p₁, p₂, p₃, p₄, the plus sign in front of the second term is replaced by a minus sign.

If one controls by x̄ and the probability density of the parameter X of one item is exponential, f(x) = λe^{−λx}, with n₁ = n₂ = n, ν₁ = ν₃ = ν, ν₂ = ∞, then

α = 1 − p₁ − 0.5(p₂ − p₁²),   β = p₃ + 0.5(p₄ − p₃²),

where

p₁ = 1 − P(χ² ≥ χ²_q0),   p₂ = 1 − P(χ² ≥ χ²_q0),
p₃ = 1 − P(χ² ≥ χ²_q1),   p₄ = 1 − P(χ² ≥ χ²_q1),

χ²_q0 = 2nλ₀ν, χ²_q1 = 2nλ₁ν, and the probabilities P(χ² ≥ χ²_q) are computed according to Table 17T for k = 2n degrees of freedom (for p₁ and p₃) and k = 4n (for p₂ and p₄).

If one controls the homogeneity of the production when the controlled parameter is normally distributed, with n₁ = n₂ = n, ν₁ = ν₃ = ν, ν₂ = ∞, then
α = 1 − p₁ − 0.5(p₂ − p₁²),   β = p₃ + 0.5(p₄ − p₃²),

where p₁, p₂, p₃, p₄ are determined from Table 22T for q and k: q = q₀ for p₁ and p₂, q = q₁ for p₃ and p₄; for a known x̄, k = n for p₁ and p₃, and k = 2n for p₂ and p₄; for an unknown x̄, k = n − 1 for p₁ and p₃, and k = 2(n − 1) for p₂ and p₄.

In the sequential Wald analysis, for a variable sample size n and a random value of the controlled parameter in the sample, the likelihood ratio γ is computed and the control lasts until γ leaves the interval (B, A), where B = β/(1 − α), A = (1 − β)/α. If γ ≤ B, the lot is accepted; if γ ≥ A, the lot is rejected; and for B < γ < A the tests continue. If one controls by the number m of defective items in a sample, then
γ = γ(n, m) = (C_{l₁}^m C_{N−l₁}^{n−m}) / (C_{l₀}^m C_{N−l₀}^{n−m}).

For n ≤ 0.1N, a formula valid for a binomial distribution is useful:

γ(n, m) = [p₁^m (1 − p₁)^{n−m}] / [p₀^m (1 − p₀)^{n−m}],

where p₀ = l₀/N, p₁ = l₁/N.
In this case, the lot is accepted if m ≤ h₁ + nh₃, the lot is rejected if m ≥ h₂ + nh₃, and the tests continue if h₁ + nh₃ < m < h₂ + nh₃, where

h₁ = log B / [log(p₁/p₀) + log((1 − p₀)/(1 − p₁))],
h₂ = log A / [log(p₁/p₀) + log((1 − p₀)/(1 − p₁))],
h₃ = log((1 − p₀)/(1 − p₁)) / [log(p₁/p₀) + log((1 − p₀)/(1 − p₁))].

In Figure 37 the strip II gives the range of values of n and m for which the tests are continued, I being the acceptance region and III the rejection region. If n ≤ 0.1N, p₁ < 0.1, then the Poisson approximation may be used, where a₀ = np₀, a₁ = np₁. For the most part, the conditions for sequential control and the graphical method remain unchanged, but in the present case

h₁ = log B / log(p₁/p₀),   h₂ = log A / log(p₁/p₀),   h₃ = 0.4343(p₁ − p₀) / log(p₁/p₀).
If the binomial distribution law is acceptable, the expectation of the sample size is determined by the formulas

M[n | p₀] = [(1 − α) log B + α log A] / [p₀ log(p₁/p₀) − (1 − p₀) log((1 − p₀)/(1 − p₁))],

M[n | p₁] = [β log B + (1 − β) log A] / [p₁ log(p₁/p₀) − (1 − p₁) log((1 − p₀)/(1 − p₁))].

FIGURE 37. Sequential control by the number m of defectives: region I, acceptance; strip II, continuation of the tests; region III, rejection.
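The boundary constants of such a plan can be sketched in code. The plan parameters below (p₀ = 0.02, p₁ = 0.10, α = 0.0686, β = 0.1350) are the ones used later in Example 45.2; the variable names are mine.

```python
# Sketch: Wald sequential-plan constants for control by number of defectives,
# and the smallest sample sizes at which a decision is possible.
from math import log10, ceil

p0, p1, alpha, beta = 0.02, 0.10, 0.0686, 0.1350
B = beta / (1 - alpha)
A = (1 - beta) / alpha
den = log10(p1 / p0) + log10((1 - p0) / (1 - p1))
h1 = log10(B) / den
h2 = log10(A) / den
h3 = log10((1 - p0) / (1 - p1)) / den

n_accept = ceil(-h1 / h3)       # smallest n that can accept with m = 0
n_reject = ceil(h2 / (1 - h3))  # smallest n that can reject with m = n
```

These reproduce h₁ ≈ −1.140, h₂ ≈ 1.496, h₃ ≈ 0.0503 and the minimum sample sizes 23 and 2 found in Example 45.2.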
The expectation of the sample size becomes maximal when the number of defective items in the lot is l = Nh₃:

M[n]max = −(log B · log A) / [log(p₁/p₀) · log((1 − p₀)/(1 − p₁))],

where p₀ = l₀/N, p₁ = l₁/N.
If one controls by the average value x̄ of the parameter in the sample and the parameter of one item is a normal random variable with known variance σ², then the lot is accepted if nx̄ ≤ h₁ + nh₃, rejected if nx̄ ≥ h₂ + nh₃, and the tests are continued if h₁ + nh₃ < nx̄ < h₂ + nh₃, where

h₁ = 2.303 [σ²/(ξ₁ − ξ₀)] log B;   h₂ = 2.303 [σ²/(ξ₁ − ξ₀)] log A;   h₃ = (ξ₀ + ξ₁)/2.

The method of control in the present case can also be represented graphically as in Figure 37 if nx̄ is used in place of m on the y-axis. For ξ₀ > ξ₁, we shall have h₁ > 0, h₂ < 0, and the inequalities in the acceptance and rejection conditions change their signs.
The expected number of tests is determined by the formulas:

M[n | ξ₀] = [h₂ + (1 − α)(h₁ − h₂)] / (h₃ − ξ₀),
M[n | ξ₁] = [h₂ + β(h₁ − h₂)] / (h₃ − ξ₁).
If the parameter of an individual item has the probability density f(x) = λe^{−λx}, then the lot is accepted if nx̄ ≥ h₁ + nh₃, rejected if nx̄ ≤ h₂ + nh₃, and the tests are continued if h₂ + nh₃ < nx̄ < h₁ + nh₃, where

h₁ = −2.303 log B/(λ₁ − λ₀);   h₂ = −2.303 log A/(λ₁ − λ₀);   h₃ = 2.303 log(λ₁/λ₀)/(λ₁ − λ₀).
The graphical representation of the method of control differs from that of Figure 37 only in that region I is now the rejection region and region III the acceptance region. The expected number of tests is computed by the formulas

M[n | λ₀] = [(1 − α) log B + α log A] / [log(λ₁/λ₀) − 0.4343(λ₁ − λ₀)/λ₀],

M[n | λ₁] = [β log B + (1 − β) log A] / [log(λ₁/λ₀) − 0.4343(λ₁ − λ₀)/λ₁],

M[n]max = −h₁h₂/h₃².
If the production is checked for homogeneity (normal distribution law), then

γ = γ(n, σ̃) = (σ₀/σ₁)ⁿ exp{(nσ̃²/2)(1/σ₀² − 1/σ₁²)}.

The lot is accepted (for a known x̄) if nσ̃² ≤ h₁ + nh₃, rejected if nσ̃² ≥ h₂ + nh₃, and the tests are continued if h₁ + nh₃ < nσ̃² < h₂ + nh₃, where

h₁ = 4.606 log B / (1/σ₀² − 1/σ₁²),   h₂ = 4.606 log A / (1/σ₀² − 1/σ₁²),   h₃ = 2.303 log(σ₁²/σ₀²) / (1/σ₀² − 1/σ₁²).

The graphical representation is analogous to Figure 37, with the values of nσ̃² on the y-axis. If x̄ is unknown, then wherever n appears in the formulas it should be replaced by (n − 1). The expected number of tests is

M[n]max = −h₁h₂/(2h₃²).
If the total number of defects of the items in the sample is checked and the number of defects of one item obeys a Poisson law with parameter a, then all the preceding formulas are applicable for the Poisson distribution if we replace: m by nx̄, p₀ and p₁ by a₀ and a₁, a₀ and a₁ by na₀ and na₁, χ²_q0 by 2na₀ and χ²_q1 by 2na₁, where n is the size of the sample. For n ≥ 50, na ≥ 4, it is possible to pass to a normal distribution.
To determine the probability that the number of tests is n < n_g in a sequential analysis when α ≪ β or β ≪ α, one may apply Wald's distribution,

P(y < y_g) = W_c(y_g) = √(c/(2π)) ∫₀^{y_g} y^{−3/2} exp{−(c/2)(y + 1/y − 2)} dy,

where y is the ratio of the number of tests n to the expectation of n for some value of the controlled parameter of the lot (l, ξ, λ), y_g = n_g/M[n], and the parameter c of Wald's distribution is determined by the following formulas (natural logarithms):

(a) for a binomial distribution of the proportion of defective product,

c = K |p ln(p₁/p₀) − (1 − p) ln((1 − p₀)/(1 − p₁))| / [p(1 − p)(ln(p₁/p₀) + ln((1 − p₀)/(1 − p₁)))²],   p = l/N;

(b) for a normal distribution of the product parameter,

c = K |x̄ − (ξ₀ + ξ₁)/2| / (ξ₁ − ξ₀);

(c) for an exponential distribution of the product parameter,

c = K |ln(λ₁/λ₀) − (λ₁ − λ₀)/λ| / ((λ₁ − λ₀)/λ)²,

where

K = 2.303 |log B| if the selected value of the parameter is < h₃ (α ≪ β);
K = 2.303 log A if the selected value of the parameter is > h₃ (β ≪ α).
A special case of control by the number of defective products arises in reliability tests of duration t, where the time of reliable operation is assumed to obey an exponential distribution law. In this case, the probability p that an item fails during time t is given by the formula p = 1 − e^{−λt}. All the formulas of control for the proportion of defective products in the case of a binomial distribution remain valid if one replaces p₀ by 1 − e^{−λ₀t} and p₁ by 1 − e^{−λ₁t}. If λt < 0.1, then it is possible to pass to a Poisson distribution if, in the corresponding formulas, one replaces a₀ by nλ₀t, a₁ by nλ₁t, χ²_q0 by 2nλ₀t, χ²_q1 by 2nλ₁t.

The sequential analysis differs in the present case because, for a fixed number n₀ of tested items, the testing time t is random. The lot is accepted if t ≥ t₁ + mt₃, rejected if t ≤ t₂ + mt₃, and the tests are continued if t₂ + mt₃ < t < t₁ + mt₃, where

t₁ = −ln B / [n₀(λ₁ − λ₀)],   t₂ = −ln A / [n₀(λ₁ − λ₀)],   t₃ = ln(λ₁/λ₀) / [n₀(λ₁ − λ₀)],

and m is the number of failures during time t. To plot the graph, one represents m on the x-axis and t on the y-axis. The expectation of the testing time T for λt < 0.1 is determined by the formulas:

M[T | λ₀] = (t_H/n₀) M[n | p₀],   M[T]max = (t_H/n₀) M[n]max,

where t_H is a number chosen to simplify the computations and p₀ = λ₀t_H, p₁ = λ₁t_H. To determine the probability that the testing time T < t_g when α ≪ β or β ≪ α, one applies Wald's distribution, in which one should set y = t/M[T | λ] and find the parameter c by the formula valid for a binomial distribution with the previously chosen value of t_H.
SOLUTION FOR TYPICAL EXAMPLES
Example 45.1 A lot of N = 40 items is considered first grade if it contains at most l₀ = 8 defective items. If the number of defective items exceeds l₁ = 20, then the lot is returned for repairs.

(a) Compute α and β for a single sampling of size n₀ = 10 if the acceptance number is ν = 3; (b) find α and β for a double sampling for which n₁ = n₂ = 5, ν₁ = 0, ν₂ = 2, ν₃ = 3; (c) compare the efficiency of planning by the methods of single and double sampling according to the average number of items tested in 100 identical lots; (d) construct the sequential sampling plan for the α and β obtained in (a), and determine n_min for the lots with L = 0 and L ≈ N.

SOLUTION.
(a) We compute α and β by the formulas

α = 1 − Σ_{m=0}^{3} C_8^m C_{32}^{10−m} / C_{40}^{10},   β = Σ_{m=0}^{3} C_{20}^m C_{20}^{10−m} / C_{40}^{10}.

Using Table 1T for Cₙᵐ, we find

α = 0.089,   β = 0.136.

(b) Computing α and β by the analogous formulas for the double-sampling plan (hypergeometric distribution law), we obtain

α = 0.105,   β = 0.134.
(c) The probability that a first-grade lot in the case of double sampling will be accepted after the first sampling of five items is

P(m₁ ≤ ν₁) = P(m₁ = 0) = C_{32}^5/C_{40}^5 = 0.306.

The expected number of lots accepted after the first sampling, out of a total of 100 lots, is 100·0.306 = 30.6 lots; for the remaining 69.4 lots a second sampling is necessary. The average number of items used in double sampling is 30.6·5 + 69.4·10 = 847. In the method of single sampling, the number of items used is 100·10 = 1000. In comparing the efficiency of the control methods, we have neglected the differences between the values of α and β obtained by single and double sampling.

(d) For α = 0.089 and β = 0.136, the plan of sequential analysis is the following:

B = β/(1 − α) = 0.149,   A = (1 − β)/α = 9.71,   log B = −0.826,   log A = 0.987.
To determine n_min when all the items of the lot are nondefective, we compute the successive values of log γ(n; 0) by the formulas

log γ(1; 0) = log (N − l₁)! + log (N − l₀ + 1)! − log (N − l₀)! − log (N − l₁ + 1)!,
log γ(n + 1; 0) = log γ(n; 0) + log (N − l₁ − n) − log (N − l₀ − n).

We have:

log γ(1; 0) = 0.7959;   log γ(2; 0) = 0.5833;   log γ(3; 0) = 0.3614;   log γ(4; 0) = 0.1295;
log γ(5; 0) = −0.1136;  log γ(6; 0) = −0.3688;  log γ(7; 0) = −0.6377;  log γ(8; 0) = −0.9217.

Since the inequality log γ(n; 0) < log B is satisfied only for n ≥ 8, it follows that n_min = 8. For a lot consisting of defective items, n = m. We find log γ(1; 1) = 0.3979. For successive values of n, we make use of the formula

log γ(n + 1; m + 1) = log γ(n; m) + log (l₁ − m) − log (l₀ − m).

We obtain log γ(2; 2) = 0.8316; log γ(3; 3) = 1.3087 > log A = 0.987; consequently, in this case n_min = 3.

Similarly one can solve Problem 45.1.
Example 45.2 A large lot of tubes (N > 10,000) is checked. If the proportion of defective tubes is p ≤ p₀ = 0.02, the lot is considered good; if p ≥ p₁ = 0.10, the lot is considered defective. Using the binomial and Poisson distribution laws (confirm their applicability): (a) compute α and β for a single sampling (single control) if n₀ = 47, ν = 2; (b) compute α and β for a double sampling (double control), taking n₁ = n₂ = 25, ν₁ = 0, ν₂ = 2, ν₃ = 2; (c) compare the efficiency of the single and double controls by the number of items tested per 100 lots; (d) construct the plan of sequential control, plot the graph, determine n_min for the lots with p = 0 and p = 1, and compute the expectation of the number of tests in the case of sequential control.

SOLUTION. (a) In the case of a binomial distribution,

α = 1 − Σ_{m=0}^{2} C_{47}^m 0.02^m 0.98^{47−m},   β = Σ_{m=0}^{2} C_{47}^m 0.10^m 0.90^{47−m}.

Using Table 4T for the binomial distribution function and interpolating between n = 40 and n = 50, we get α = 0.0686, β = 0.1350. In the case of a Poisson distribution law, computing a₀ = n₀p₀ = 0.94, a₁ = n₀p₁ = 4.7, and using Table 7T, which contains the tail probabilities of a Poisson distribution, we find (interpolating with respect to a)

α = 0.0698,   β = 0.159.
(b) For a binomial distribution law, using Tables 1T and 4T, we find

α = 1 − Σ_{m₁=0}^{2} C_{25}^{m₁} 0.02^{m₁} 0.98^{25−m₁} + Σ_{m₁=1}^{2} [C_{25}^{m₁} 0.02^{m₁} 0.98^{25−m₁} (1 − Σ_{m₂=0}^{2−m₁} C_{25}^{m₂} 0.02^{m₂} 0.98^{25−m₂})] = 0.0704,

β = C_{25}^0 0.1⁰ 0.9²⁵ + Σ_{m₁=1}^{2} [C_{25}^{m₁} 0.1^{m₁} 0.9^{25−m₁} Σ_{m₂=0}^{2−m₁} C_{25}^{m₂} 0.1^{m₂} 0.9^{25−m₂}] = 0.1450.

In the case of a Poisson distribution law, using Tables 6T and 7T and computing a₀₁ = a₀₂ = n₁p₀ = 0.5, a₁₁ = a₂₁ = n₁p₁ = 2.5, we obtain

α = Σ_{m₁=3}^{∞} (0.5^{m₁}/m₁!) e^{−0.5} + Σ_{m₁=1}^{2} [(0.5^{m₁}/m₁!) e^{−0.5} Σ_{m₂=3−m₁}^{∞} (0.5^{m₂}/m₂!) e^{−0.5}] = 0.0715,

and the analogous computation with a = 2.5 gives the Poisson value of β = 0.162.
The essential difference between the values of β computed with the aid of the binomial and Poisson distributions is explained by the large value of p₁ = 0.10.

(c) The probability of acceptance of a good lot (p ≤ 0.02) after the first sampling in the case of double control (we compare the results for the binomial distribution) is

P(m₁ ≤ ν₁) = P(m₁ = 0) = C_{25}^0·0.02⁰·0.98²⁵ = 0.6035.

The average number of good lots accepted after the first sampling from the total number of 100 lots is 100·0.6035 = 60.35. For the remaining 39.65 lots, a second sampling will be necessary. The average expenditure in tubes for the double control of 100 good lots is 60.35·25 + 39.65·50 ≈ 3491.

In a defective lot, the probability of rejection after the first sampling in the case of double control is

P(m₁ > ν₂) = P(m₁ > 2) = 1 − Σ_{m₁=0}^{2} C_{25}^{m₁} 0.1^{m₁} 0.9^{25−m₁} = 0.4629.

The average number of lots rejected after the first sampling from a total of 100 lots is 100·0.4629 = 46.29. For the remaining 53.71 lots a second sampling will be necessary. The average expenditure in tubes for the double control of 100 defective lots will be 46.29·25 + 53.71·50 ≈ 3843. For single control, in all cases 100·50 = 5000 tubes will be consumed.

(d) For α = 0.0686 and β = 0.1350, using a binomial distribution for the sequential control, we get

B = β/(1 − α) = 0.1449,  log B = −0.8388;   A = (1 − β)/α = 12.61,  log A = 1.1007.

Furthermore, h₁ = −1.140, h₂ = 1.496, h₃ = 0.0503 (Figure 38). We find, for a good lot with p = 0,

n_min = −h₁/h₃ = 1.140/0.0503 = 22.7 ≈ 23;

for a defective lot, when p = 1, we have n = m, so that n_min = h₂ + n_min h₃, whence

n_min = h₂/(1 − h₃) = 1.496/0.9497 = 1.57 ≈ 2.

We determine the average numbers of tests for different p:

M[n | 0.02] = 31.7;   M[n | 0.10] = 22.9;   M[n]max = 35.7.
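The expected tube consumption in part (c) can be verified with a short sketch; the plan parameters (n₁ = n₂ = 25, ν₁ = 0, ν₂ = 2, p = 0.02 or 0.10) are those of the example, and the exact product for the good lots comes to about 3491 tubes.

```python
# Sketch: expected tube consumption per 100 lots under double control.
from math import comb

def binom_pmf(n, m, p):
    return comb(n, m) * p**m * (1 - p)**(n - m)

# good lots (p = 0.02): second sample needed unless m₁ = 0
p_first_accept = binom_pmf(25, 0, 0.02)
tubes_good = 100 * (25 * p_first_accept + 50 * (1 - p_first_accept))

# defective lots (p = 0.10): second sample needed unless m₁ > 2
p_first_reject = 1 - sum(binom_pmf(25, m, 0.10) for m in range(3))
tubes_bad = 100 * (25 * p_first_reject + 50 * (1 - p_first_reject))
```

Both figures stay well below the 5000 tubes consumed by the single-sampling plan.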
FIGURE 38. The sequential-control plan of Example 45.2: the boundary lines m = h₁ + nh₃ and m = h₂ + nh₃ (slope arctan 0.0503) divide the (n, m)-plane into the acceptance region I, the strip II of continued testing, and the rejection region III.
Problems 45.2 to 45.5, 45.7, 45.8 and 45.10 can be solved by following this solution. Example 45.3 A large lot of resistors, for which the time of reliable operation obeys an exponential distribution, is subjected to reliability tests. If the failure parameter ,\ ::::; ..\ 0 = 2 · 10- 6 hours - 1 , the lot is considered good; if ,\ ~ ..\ 1 = 1 · 10- 5 hours-\ the lot is considered defective. Assuming that ..\t0 < 0.1, where t 0 is a fixed testing time for each item in a sample of size n0 , determine for a = 0.005, (3 = 0.08, the value of n 0 • Use the method of single sampling for different t 0 , find 11 with the condition that t 0 = 1000 hours and also construct the plan of sequential control in the case n = n0 for t 0 = 1000 hours. Compute tmin for a good lot and a defective one and M[T I ..\], P(t < 1000), P(t < 500). SOLUTION. The size n 0 of the sample and the acceptance number 11 are determined by noting that ..\t 0 < 0.1, which permits use of the Poisson distribution and furthermore, permits passing from a Poisson distribution to a chisquare distribution. We compute the quotient ..\ 0 /..\ 1 = 0.2. Next, from Table 18T we find the values x~o for the entry quantities P(x 2 ~ x~o) = 1 - a = 0.995 and k; X~1 for P(x 2 ~ X~1) = (3 = 0.08 and k. By the method of sampling, we establish that for k = 15 2
for k = 15: χ²q0 = 4.48, χ²q1 = 23.22, χ²q0/χ²q1 = 0.1930;
for k = 16: χ²q0 = 5.10, χ²q1 = 24.48, χ²q0/χ²q1 = 0.2041.

Interpolating with respect to χ²q0/χ²q1 = 0.2, we find: k = 15.63, χ²q0 = 4.87, χ²q1 = 23.99. We compute ν = (k/2) - 1 = 6.815; we take ν = 6. From 2n0λ0t0 = 4.87
it follows that n0t0 = 4.87/(2·0.000002) = 1.218·10^6. The condition λ1t0 < 0.1 leads to t0 < 0.1/0.00001 = 10,000 hours (since λ1 = 0.00001). Taking different values t0 < 10,000 hours, we obtain the corresponding values of n0 given in Table 111.
TABLE 111

 t0 in hours |  100   |  500 | 1000 | 2500 | 5000
 n0          | 12,180 | 2436 | 1218 |  487 |  244
STATISTICAL METHODS OF QUALITY CONTROL
We compute B, A, t1, t2 for the method of sequential analysis: B = 0.08041, ln B = -2.5211; A = 184, ln A = 5.2161. Taking n0 = 1218, we have t1 = 258.7 hours, t2 = -535.3 hours, t3 = 165.2 hours (Figure 39).

The minimal testing time in the case when m = 0 for a good lot is tmin = 258.7 hours; for a defective lot, tmin = -535.3 + 165.2m > 0, whence m = 3.24 ≈ 4; for m = 4, tmin = 125.5 hours. If for t < 125.5 hours m ≥ 4, then the lot is rejected. To compute the average testing time for n = n0 = 1218, we take tH = t0 = 1000 hours. Then p0 = λ0tH = 0.002; p1 = λ1tH = 0.010;
λ*tH = tH/(n0t3) = 0.00497.

Furthermore, we find M[n | p0] = 505, M[n]max = 572, M[n | p1] = 1001;
FIGURE 39
then we compute M[T | λ0] = 415 hours, M[T]max = 470 hours, M[T | λ1] = 821 hours.
We find the probability that the testing time for a fixed number of items n = n0 = 1218 is less than 1000 hours and less than 500 hours. For tH = 1000 hours, we compute the value of the parameter c of Wald's distribution and the value of y = n0/M[n | p0] = tH/M[T | λ0], with the condition that p0 = λ0t0 = 0.002, p1 = λ1t0 = 0.01. Taking p = p0, since α « β, we obtain c = 2.37, y = 1000/415 = 2.406. We find (see Table 26T) that P(T < 1000) = P(n < 1218) = Wc(y) = 0.9599. For t = 500 hours we have y = 1.203, and P(T < 500) = 0.725.
One can solve Problem 45.9 similarly.
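Table 111 of the example follows from the single fixed product n0·t0 ≈ 1.218·10^6 item-hours derived above; a minimal sketch (the helper name is illustrative, the constants λ0 = 2·10^-6 and χ²q0 = 4.87 are those of the example):

```python
# n0*t0 is fixed by 2*n0*lambda0*t0 = chi2_q0 = 4.87, i.e.
# n0*t0 = 4.87/(2*2e-6) ≈ 1.218e6 item-hours (value as rounded in the text).
N0_T0 = 1.218e6

def sample_size(t0):
    """Items to place on test when each item is observed for t0 hours."""
    return round(N0_T0 / t0)

table_111 = {t0: sample_size(t0) for t0 in (100, 500, 1000, 2500, 5000)}
```

Dividing the fixed item-hours budget by the per-item test time reproduces the n0 column of Table 111 (12,180; 2436; 1218; 487; 244).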
Example 45.4 The quality of the disks produced on a flat-grinding machine is determined by the number of spots on a disk. If the average number of spots per 10 disks is at most one, then the disks are considered to be of good quality; if the average number is greater than five, then the disks are defective. A sample of 40 disks is selected from a large lot (N > 1000). Assuming that the number of spots on a disk obeys a Poisson distribution law: (a) determine α and β for ν = 9; (b) for these α and β construct the plan of sequential control, compute nmin for a good lot and a defective one and find the values of M[n | a]; (c) test a concrete sample, whose data appear in Table 112, by the methods of single and sequential control.
TABLE 112

 n:  1-8   Xn: 0 1 1 1 1 1 1 1
 n:  9-16  Xn: 1 1 1 1 2 2 2 2
 n: 17-24  Xn: 2 2 3 3 3 3 3 4
 n: 25-32  Xn: 4 4 4 4 4 4 4 4
 n: 33-40  Xn: 4 4 5 5 6 6 7 7

SOLUTION. (a) Using the Poisson distribution, we have a0 = 0.1, a1 = 0.5, na0 = 4, na1 = 20. Using Table 7T for the total probabilities of Xn occurrences
of spots on disks in the sample, we find

α = Σ (Xn = 10 to ∞) 4^Xn·e^-4/Xn! = 0.00813,    β = Σ (Xn = 0 to 9) 20^Xn·e^-20/Xn! = 0.00500.
(b) For α = 0.0081, β = 0.0050, the characteristics of the sequential control (Figure 40) are: B = 0.005041, log B = -2.298; A = 122.8, log A = 2.089;

h1 = log B/log (a1/a0) = -3.29;    h2 = log A/log (a1/a0) = 2.99;    h3 = 0.4343(a1 - a0)/log (a1/a0) = 0.248.

We compute nmin: for Xn = 0, nmin = 13.2 ≈ 14; for Xn = n, nmin = 18.7 ≈ 19.
The average number of tests in the case of sequential control is M[n | a0] = 21.8; M[n | a1] = 11.8; M[n]max = 39.5.
(c) In a sample with n0 = 40, it turns out that Xn = 7 < ν = 9; consequently, the lot is accepted. Applying the method of sequential control (see Figure 40), for n = 30 we obtain that the point with coordinates (n, Xn) lies below the lower line; that is, the lot should be accepted. Indeed,

for n = 29: Xn = 4, h1 + nh3 = 3.90, Xn > h1 + nh3;
for n = 30: Xn = 4, h1 + nh3 = 4.15, Xn < h1 + nh3.
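The sequential decision of part (c) can be replayed on the data of Table 112. The code below is a sketch with assumed variable names; it recomputes h1, h2, h3 from part (b) and walks the cumulative spot counts until one of the two lines is crossed.

```python
import math

# Poisson SPRT lines: accept once X_n <= h1 + n*h3, reject once
# X_n >= h2 + n*h3, where X_n is the cumulative spot count after n disks.
a0, a1, alpha, beta = 0.1, 0.5, 0.0081, 0.0050
B = beta / (1 - alpha)
A = (1 - beta) / alpha
denom = math.log10(a1 / a0)
h1 = math.log10(B) / denom               # ≈ -3.29
h2 = math.log10(A) / denom               # ≈  2.99
h3 = 0.4343 * (a1 - a0) / denom          # ≈  0.248 (0.4343 = log10 e)

# Cumulative spot counts X_n for n = 1..40 from Table 112.
xs = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2,
      2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4,
      4, 4, 5, 5, 6, 6, 7, 7]

decision, step = None, None
for n, x in enumerate(xs, start=1):
    if x <= h1 + n * h3:
        decision, step = "accept", n
        break
    if x >= h2 + n * h3:
        decision, step = "reject", n
        break
```

Running the walk accepts the lot at n = 30, in agreement with the graphical argument above.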
Similarly one can solve Problem 45.11.

Example 45.5 The quality of punchings made by a horizontal forging machine is determined by the dispersion of their heights X, known to obey a
FIGURE 40
normal distribution law with expectation x̄ = 32 mm (nominal dimension). If the standard deviation σ ≤ σ0 = 0.18 mm, the lot is considered good; if σ ≥ σ1 = 0.30 mm, the lot is defective. Find α and β for the method of single sampling if n0 = 39 and ν = 0.22 mm. Use the resulting values of α and β to construct a control plan by the method of sequential analysis. Compute nmin for a good lot and a defective one and find M[n | σ].

SOLUTION. We compute α and β by the formulas for k = n0 = 39, q0 = ν/σ0 = 1.221, q1 = ν/σ1 = 0.733. Interpolating according to Table 22T for the chi-square distribution, we find α = 0.0303, β = 0.0064.
We find the values of B, A, h1, h2, h3 for the method of sequential analysis: B = 0.006601, ln B = -5.021; A = 30.10, ln A = 3.405; h1 = -0.528, h2 = 0.345, h3 = 0.0518.
We find nmin. For the poorest among the good lots, σ̂² = σ0² = 0.0324 and nminσ0² = h1 + nminh3, whence nmin = 27.2 ≈ 28. For the best among the defective lots, σ̂² = σ1² = 0.0900 and nminσ1² = h2 + nminh3, whence nmin = 9.3 ≈ 10. We compute the average numbers of tests M[n | σ] for different σ: M[n | σ0] = 25.9; M[n | σ1] = 8.8; M[n]max = 34.0.
In a similar manner, one can solve Problem 45.12.

Example 45.6 The maximal pressure X in a powder chamber of a rocket is normally distributed with standard deviation σ = 10 kg/cm². The rocket is considered good if X ≤ ξ0 = 100 kg/cm²; if X ≥ ξ1 = 105 kg/cm², the rocket is returned to the plant for adjustment. Given the values α = 0.10 and β = 0.01, construct the plans for single control (n0, ν) and sequential control, and compute the probabilities P(n < n0) and P(n < (1/2)n0) that for the sequential control the number of tests will be less than n0 and (1/2)n0, respectively.

SOLUTION. To compute the sample size n0 and the acceptance number ν for a single control, we use the formulas; we obtain P(n < n0) = 0.99 (for p < 0.999) and P(n < 50) = 0.939. Similarly Problem 45.14 can be solved.

PROBLEMS
45.1 Rods in lots of 100 are checked for their quality. If a lot contains L ≤ l0 = 4 defective items, the lot is accepted; if L ≥ l1 = 28, the lot is rejected. Find α and β for the method of single sampling if n0 = 22, ν = 2, and for the method of double sampling for n1 = n2 = 15, ν1 = 0, ν2 = 3, ν3 = 3; compare their efficiencies according to the average number of tests; construct the sequential analysis plan and compute the minimal number of tests for a good lot and a defective one in the case of sequential control. Use the values of α and β obtained by the method of single sampling.

45.2 In the production of large lots of ball bearings, a lot is considered good if the number of defective items does not exceed 1.5 per cent and defective if it exceeds 5 per cent. Construct and compare
the efficiency of the plan of single control, for which the sample size n0 = 410 and acceptance number ν = 10, and the plan of double control, for which n1 = n2 = 220, ν1 = 2, ν2 = 7, ν3 = 11. Construct the sequential control plan with α and β as found for the plan of single control. Compare the efficiencies of all three methods according to the average number of tests and compute nmin for a good lot and a defective one for sequential control.

45.3 A large lot of punched items is considered good if the proportion of defective items p ≤ p0 = 0.10 and defective if p ≥ p1 = 0.20. Find α and β for the control by single sampling: use sample size n0 = 300 and acceptance number ν = 45. For the resulting values of α and β, construct the control plan by the method of sequential analysis and compute nmin for a good lot and a defective one; find M[n | p] and P(n < n0), P(n < (1/2)n0). Hint: Pass to the normal distribution.

45.4 For a large lot of items, construct the plan of single control (n0, ν) that guarantees (a) a supplier's risk of 1 per cent and a consumer's risk of 2 per cent, if the lot is accepted when the proportion of defective items is p ≤ p0 = 0.10 and rejected when p ≥ p1 = 0.20 (use the normal distribution); (b) α = 0.20, β = 0.10 for the same p0 and p1 applied to a Poisson distribution law. Construct the corresponding plans of sequential control and find the expectations of the number of tests.

45.5 For α = 0.05 and β = 0.10, construct the plans of single and sequential control for quality tests of large lots of rivets. The rivets are considered defective if their diameter X > 13.575 mm. A lot is accepted if the proportion of defective rivets is p ≤ p0 = 0.03 and rejected if p ≥ p1 = 0.08. Compute, for a Poisson distribution, the size n0 of the single sample and the acceptance number ν.
For the same α and β, construct the plan of sequential control, compute nmin for a good lot and a defective one and find the average number of tests M[n | p] in a sequential control.

45.6 Rivets with diameter X > 13.575 mm are considered defective. At most 5 per cent of the lots whose proportion of defective items is p < p0 = 0.03 may be rejected, and at most 10 per cent of the lots whose proportion of defective items is p ≥ p1 = 0.08 may be accepted. Assuming that the random variable X obeys a normal distribution whose estimates of the expectation x̄ and variance σ̂² are determined on the basis of sample data, find the general formulas for the size n0 of the single sample in dimension control and for z0 such that the following condition is satisfied:
P(x̄ + σ̂z0 > l | p = p0) = α,    P(x̄ + σ̂z0 > l | p = p1) = 1 - β.
Compute n0 and z0 for the conditions of the problem. Consider the fact that the quantity u = x̄ + σ̂z0 is approximately normally distributed with parameters M[u] = x̄ + σz0,
where k = n - 1. Compare the result with that of Problem 45.5.

45.7 Using the binomial and Poisson distributions, construct the plan of double control for n1 = n2 = 30, ν1 = 3, ν2 = 5, ν3 = 8, if a lot is considered good when the proportion of defective items is p ≤ p0 = 0.10 and defective when p ≥ p1 = 0.20. For the values of α and β found for the binomial distribution, construct the plans of single and sequential control, and compare all three methods according to the average number of tests. For the sequential control, find nmin for a good lot and a defective lot and compute the expectation of the number of tests M[n | p].

45.8 Construct the control plans by the methods of single and sequential sampling for large lots of radio tubes if a lot with proportion of defective items p ≤ p0 = 0.02 is considered good and with p ≥ p1 = 0.07 is considered defective. The producer's risk is α = 0.0001 and the consumer's risk is β = 0.01. For the plan of sequential control, determine nmin for a good lot and a defective one, find the average number of tests M[n | p] and the probabilities P(n ≤ M[n | p0]), P(n ≤ 2M[n | p0]).

45.9 The time of operation T (in hours) of a transformer obeys an exponential distribution with an intensity of failures λ. Assuming that λt0 < 0.1, construct the plans of control by single sampling and sequential analysis for α = 0.10, β = 0.10. For the single control, find the acceptance number ν and the size n0 of the sample if the testing period of each transformer is t0 = 500, 1000, 2000, 5000 hours. (Replace the Poisson distribution by a chi-square distribution.) For the sequential control, take a fixed sample size n0 corresponding to t0 = 1000 hours and find the average testing time of each transformer M[T | λ]. Assume that a lot of transformers is good if the intensity of failures λ ≤ λ0 = 10^-5 hours^-1 and defective if λ ≥ λ1 = 2·10^-5 hours^-1.
45.10 A large lot of electrical resistors is subjected to control for α = 0.005, β = 0.08; the lot is considered good if the proportion of defective resistors is p ≤ p0 = 0.02 and defective if p ≥ p1 = 0.10. Applying a chi-square distribution instead of a Poisson one, find the size n0 and the acceptance number ν for the method of single sampling; construct the plan of sequential control for a good lot and a defective lot; compute the expectation of the number of tested items and the probabilities P(n < n0), P(n < (1/2)n0).

45.11 Before planting, lots of seed potatoes are checked for rotting centers. A lot of seed potatoes is considered good for planting if in each group of 10 potatoes there is at most one spot and bad if there are five spots or more. Assuming that the number of spots obeys a Poisson distribution, compute α and β for the method of double sampling if n1 = 40, n2 = 20, ν1 = 4, ν2 = 12, ν3 = 14. For the resulting values of α and β, construct the plans of single and sequential control. Compare the
efficiencies of all three methods according to the mean expenditures of seed potatoes necessary to test 100 lots.

45.12 The quality characteristic in a lot of electrical resistors, whose random values obey a normal distribution law with a known mean of 200 ohms, is the standard deviation σ; the lot is accepted if σ ≤ σ0 = 10 ohms and defective if σ ≥ σ1 = 20 ohms. Construct the control plans by the method of single sampling with n0 = 16, ν = 12.92 and double sampling with n1 = n2 = 13, ν1 = ν3 = 12, ν2 = ∞. For the resulting values of α and β (in the case of single control), construct the plan of sequential control. Compare the efficiencies of all three methods of control according to the average number of tests. Compute nmin for the poorest among the good lots and the best among the defective lots.

45.13 Several lots of nylon are tested for strength. The strength characteristic X, measured in g/denier (specific strength of the fiber), obeys a normal distribution with standard deviation σ = 0.8 g/denier. A lot is considered good if X ≥ x0 = 5.4 g/denier and bad if X ≤ x1 = 4.9 g/denier. Construct the plan of strength control by single sampling with n0 = 100 and ν = 5.1. For the resulting values of α and β, construct the plan of control by the method of sequential analysis, compute the mean expenditure in fibers and the probabilities P(n < n0), P(n < (1/2)n0).

45.14 It is known that if the intensity of failures is λ ≤ λ0 = 0.01, then a lot of gyroscopes is considered reliable; if λ ≥ λ1 = 0.02, the lot is unreliable and should be rejected. Assuming that the time T of reliable operation obeys an exponential distribution and taking α = β = 0.001, construct the plans for single (n0, ν) and sequential controls according to the level of the parameter λ. Find the average number of tested gyroscopes M[n | λ] for the case of sequential control.

45.15 A large lot of condensers is being tested.
The lot is considered good if the proportion of unreliable condensers is p ≤ p0 = 0.01; for p ≥ p1 = 0.06 the lot is rejected. Construct the plan of single control (n0, ν) for the proportion of unreliable items so that α = 0.05, β = 0.05. To establish the reliability, each tested condenser belonging to the considered sample is subjected to a multiple sequential control for α' = 0.0001, β' = 0.0001, and a condenser is considered reliable if the intensity of failures λ ≤ λ0 = 0.0000012 hours^-1 and unreliable if λ ≥ λ1 = 0.0000020 hours^-1 (n is the number of tests used to establish the reliability of a condenser for given α' and β'). One assumes that the time of reliable operation of a condenser obeys an exponential distribution.

45.16 Construct the plans of single and sequential controls of complex electronic devices whose reliability is evaluated according to the average time T of unfailing (reliable) operation. If T ≥ T0 = 100 hours, a device is considered reliable, and if T ≤ T1 = 50 hours, unreliable. It is necessary that α = β = 0.10. Consider that for a fixed testing time tT a device is accepted if tT/m = T̄ ≥ ν and rejected if T̄ < ν, where m is the number of failures for time tT, and ν is the acceptance number in the case of single control (n0 = 1; in case of failure the device is repaired and the test is continued). In this case, tT/T obeys approximately a
Poisson distribution. In the case of sequential control, the quantity t depends on the progress of the test. (a) Determine the testing time tT and the acceptance number ν for a single control. (b) For the plan of sequential control, reduce the condition for continuation of the tests, ln B < ln γ(t, m) < ln A, to the form t1 + mt3 > t > t2 + mt3; for t1, t2, t3 obtain preliminary general formulas. (c) In the case of sequential control, determine the minimal testing time tmin for the poorest of the good lots and the best of the rejected ones.
46.
DETERMINATION OF PROBABILITY CHARACTERISTICS OF RANDOM FUNCTIONS FROM EXPERIMENTAL DATA Basic Formulas
The methods of determination of the expectation, the correlation function and the distribution laws of the ordinates of a random function by processing a series of sample functions do not differ from the methods of determination of the corresponding probability characteristics of a system of random variables. In processing the sample functions of stationary random functions, instead of averaging the sample functions one may sometimes average with respect to time; i.e., find the probability characteristics with respect to one or several sufficiently long realizations (the condition under which this is possible is called ergodicity). In this case, the estimates (approximate values) of the expectation and correlation function are determined by the formulas
x̄ = (1/T) ∫ (0 to T) x(t) dt,

K̃x(τ) = 1/(T - τ) ∫ (0 to T-τ) [x(t) - x̄][x(t + τ) - x̄] dt,

where T is the total time of recording of the sample function. Sometimes, instead of the last formula, one uses the practically equivalent formula

K̃x(τ) = 1/(T - τ) ∫ (0 to T-τ) x(t)x(t + τ) dt - x̄².

In the case when the expectation x̄ is known exactly,

K̃x(τ) = 1/(T - τ) ∫ (0 to T-τ) [x(t) - x̄][x(t + τ) - x̄] dt = 1/(T - τ) ∫ (0 to T-τ) x(t)x(t + τ) dt - x̄².
If x̄ and K̃x(τ) are determined from the ordinates of a sample function of a random function at discrete time instants tj = (j - 1)Δ, the corresponding formulas become

x̄ = (1/m) Σ (j = 1 to m) x(tj),

K̃x(τ) = 1/(m - l) Σ (j = 1 to m - l) [x(tj) - x̄][x(tj + τ) - x̄]

or

K̃x(τ) = 1/(m - l) Σ (j = 1 to m - l) x(tj)x(tj + τ) - x̄²,

where τ = lΔ, T = mΔ. For normal random functions, the variances of x̄ and K̃x(τ) may be expressed in terms of Kx(τ). In practical computations, the unknown correlation function Kx(τ) in the formulas for D[x̄] and D[K̃x(τ)] is replaced by the quantity K̃x(τ). When one determines the value of the correlation function by processing several sample functions of different durations, one should take as the approximate value of the ordinates of K̃x(τ) the sum of the ordinates obtained by processing individual realizations, taken with weights inversely proportional to the variances of these ordinates.
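A minimal sketch of the discrete-time formulas above (the function name is illustrative, and the toy sine realization merely stands in for a recorded sample function):

```python
import math

def estimate_mean_and_corr(samples, max_lag):
    """Time-average estimates: x_bar = (1/m) * sum x(t_j) and
    K(l*Delta) = 1/(m - l) * sum_j [x(t_j) - x_bar][x(t_{j+l}) - x_bar]."""
    m = len(samples)
    x_bar = sum(samples) / m
    corr = []
    for l in range(max_lag + 1):
        s = sum((samples[j] - x_bar) * (samples[j + l] - x_bar)
                for j in range(m - l))
        corr.append(s / (m - l))
    return x_bar, corr

# Toy realization sampled at equal steps Delta (stand-in for x(t_j)).
xs = [math.sin(0.3 * j) for j in range(200)]
x_bar, K = estimate_mean_and_corr(xs, max_lag=10)
```

By construction K[0] coincides with the sample variance of the realization, which is a quick sanity check on such an estimator.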
SOLUTION FOR TYPICAL EXAMPLES
Example 46.1 The ordinates of a stationary random function are determined by photographing the scale of the measuring instrument during equal time intervals Δ. Determine the maximal admitted value of Δ for which the increase in the variance of K̃x(0), compared with the variance obtained by processing the continuous graph of a realization of the random function, will be at most δ per cent, if the approximate value of Kx(τ) = ae^(-α|τ|) and the total recording time is T » 1/α. It is known that x̄ = 0 and the function X(t) can be considered normal.
SOLUTION. Since x̄ = 0, by use of the continuous recording the value of K̃x(0) is determined by the formula

K1(0) = (1/T) ∫ (0 to T) x²(t) dt.

For finding the variance of K1(0), we have

D[K1(0)] = M[K1²(0)] - {M[K1(0)]}² = (2/T²) ∫∫ (0 to T) Kx²(t2 - t1) dt1 dt2 = (4a²/T²) ∫ (0 to T) (T - τ)e^(-2ατ) dτ.

If after integration we eliminate the quantities containing the small (by assumption) factor e^(-αT), we get

D[K1(0)] = (a²/α²T²)(2αT - 1).
If the ordinates of the random function are discrete, the value of K̃x(0) is

K2(0) = (1/m) Σ (j = 1 to m) x²(jΔ).

Determining the variance of K2(0), we find that

D[K2(0)] = (1/m²){Σ (j = 1 to m) Σ (l = 1 to m) M[X²(jΔ)X²(lΔ)] - m²Kx²(0)} = (2/m²) Σ (j = 1 to m) Σ (l = 1 to m) Kx²(lΔ - jΔ),

where for the calculation of the expectation one uses a property of the moments of systems of normal random variables. Using the value of Kx(τ), we obtain

D[K2(0)] = (2a²/m²) Σ (j = 1 to m) Σ (l = 1 to m) e^(-2α|l - j|Δ) = (2a²/m²)[m + 2 Σ (r = 1 to m-1) (m - r)e^(-2αrΔ)] = 2a²Δ·[T(1 - e^(-4αΔ)) - 2Δe^(-2αΔ)]/[T²(1 - e^(-2αΔ))²].

The limiting value of Δ is found from the equation

D[K2(0)]/D[K1(0)] = 1 + 0.01δ;

that is, from the equation

2α²Δ[T(1 - e^(-4αΔ)) - 2Δe^(-2αΔ)]/[(2αT - 1)(1 - e^(-2αΔ))²] = 1 + 0.01δ.

For αΔ « 1, we obtain approximately

Δ = [(2αT - 1)/(2αT - 3)]·δ/(100α).
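The closed form for D[K1(0)] can be checked numerically under the stated assumptions (here a = α = 1, T = 50, so e^(-2αT) is negligible); the crude midpoint-rule integration below is only a sketch:

```python
import math

# Check D[K1(0)] = (a^2/(alpha^2 T^2))*(2*alpha*T - 1) against
# (4 a^2 / T^2) * integral_0^T (T - tau) e^{-2 alpha tau} d tau.
a, alpha, T = 1.0, 1.0, 50.0

n = 200000                       # midpoint-rule subdivisions
h = T / n
integral = sum((T - (i + 0.5) * h) * math.exp(-2 * alpha * (i + 0.5) * h) * h
               for i in range(n))
numeric = 4 * a ** 2 / T ** 2 * integral
closed = a ** 2 / (alpha ** 2 * T ** 2) * (2 * alpha * T - 1)
```

The two values agree to the accuracy of the discretization and of the neglected e^(-2αT) terms.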
PROBLEMS

46.1 Prove that the condition lim (τ → ∞) Kx(τ) = 0 is necessary in order that the function X(t) be ergodic.
46.2 Verify whether the expression

S̃x(ω) = (1/2πT)·|∫ (0 to T) e^(iωt) x(t) dt|²

may be taken as an estimate of the spectral density if X(t) is a normal stationary random function (x̄ = 0) and ∫ (0 to ∞) |Kx(τ)| dτ < ∞.
46.3 To determine the estimate of the correlation function of a stationary normal stochastic process X(t) (x̄ = 0), a correlator is used that operates according to the formula

K̃x(τ) = 1/(T - τ) ∫ (0 to T-τ) x(t)x(t + τ) dt.

Derive the formula for D[K̃x(τ)].
46.4 Determine the expectations and the variances of the estimates of correlation functions defined by one of the formulas

K1(τ) = 1/(T - τ) ∫ (0 to T-τ) x(t)x(t + τ) dt - (x̄)²,

K2(τ) = 1/(T - τ) ∫ (0 to T-τ) [x(t) - x̄][x(t + τ) - x̄] dt,
where x̄ = 1/(T - τ) ∫ (0 to T-τ) x(t) dt, if X(t) is a normal random function.
46.5 The correlation function of the stationary stochastic process X(t) is given. Find the variance for the estimate of the expectation defined by the formula

x̄ = (1/T) ∫ (0 to T) x(t) dt.
46.6 The spectral density S̃x(ω) is found by a Fourier inversion of the approximate value of the correlation function. Determine D[S̃x(ω)] as a function of ω if

K̃x(τ) = (1/T) ∫ x(t)x(t + τ) dt,    x̄ = 0,

the process is normal and, to solve the problem, one may use Kx(τ) = ae^(-α|τ|).

ANSWERS AND SOLUTIONS

5.15 P(A)/P(B) ≥ (a + b - 1)/b.
5.16 From Z = X + Y it follows that Z ≤ X + |Y| and Z ≥ X - |Y|; P(Z ≤ 11) ≥ P(X ≤ 10 and |Y| ≤ 1) = P(X ≤ 10) + P(|Y| ≤ 1) - P(X ≤ 10 or |Y| ≤ 1) ≥ 0.9 + 0.95 - 1 = 0.85; P(Z ≥ 9) ≥ 0.05, P(Z ≤ 9) ≤ 0.95.
5.17 0.44 and 0.35.
5.18 p(2 - p).
5.19 PB = 0.1 + 0.9·0.8·0.3 = 0.316; PC = 0.9(0.2 + 0.8·0.7·0.4) = 0.3816.
5.20 p = l/(n(n - 1)) + (1 - l)·ln n - (n² - n + 1)/(n²(n - 1)).
5.21 PB ≈ 0.8, PC ≈ 0.2.
5.22 G(m + n) = G(m) + [1 - G(m)]G(n | m); G(n | m) = [G(n + m) - G(m)]/[1 - G(m)].
5.23 p1 = 1/2 + 1/2³ + 1/2⁵ + ··· = 2/3, p2 = 1/2² + 1/2⁴ + ··· = 1/3. Another solution: p1 + p2 = 1, p2 = (1/2)p1; that is, p1 = 2/3, p2 = 1/3.
5.24 p1 + p2 + p3 = 1, p2 = (1/2)p1, p3 = (1/2)p2; i.e., p1 = 4/7, p2 = 2/7, p3 = 1/7.
5.25 p1 + p2 = 1, p2 = (1/2)p1; p = p1 = 2/3, q = 1/3.
5.26 p = (n + m)/(n + 2m).
5.27 p1 is the probability of hitting for the first marksman, p2 the probability of hitting for the second marksman; p1 + p2 = 1, 0.2p2 = 0.8·0.3p1; p = p1 = 0.455. 5.28 Use the condition of Problem 1.12. 5.29 If we calculate the number of identical terms, we get

P(Σ (k = 1 to n) Ak) = Cn¹P(A1) - Cn²P(A1A2) + Cn³P(A1A2A3) - ··· + (-1)^(n-1)·P(∏ (k = 1 to n) Ak).
5.30 Using the equality of ∏ (k = 1 to n) Ak and the complement of Σ (k = 1 to n) Āk from Problem 1.12 and the general formula for the probability of a sum of events, we obtain

P(∏ (k = 1 to n) Ak) = 1 - {Σ (k = 1 to n) P(Āk) - Σ (k = 1 to n-1) Σ (j = k+1 to n) P(ĀkĀj) + Σ Σ Σ P(ĀkĀjĀi) - ··· + (-1)^(n-1)·P(∏ (k = 1 to n) Āk)}.
However, according to Problem 1.12, for any s the events ∏ (k = 1 to s) Āk and Σ (k = 1 to s) Ak are complementary; hence P(∏ (k = 1 to s) Āk) = 1 - P(Σ (k = 1 to s) Ak). Also considering the equality 1 - Cn¹ + Cn² - ··· + (-1)ⁿ = 0, we get the formula indicated in the assumption of the problem.

5.31 Use the equality P(Ā0·∏ (k = 1 to n) Ak) = P(∏ (k = 1 to n) Ak) - P(∏ (k = 0 to n) Ak) and the formula from the condition of Problem 5.30.
5.32 p = Σ (k = 1 to n) (-1)^(k-1)/k!.
5.33 The probability that m persons out of n will occupy their seats is Cn^m·(n - m)!/n! = 1/m!. The probability that the remaining n - m persons will not sit in their seats is Σ (k = 0 to n-m) (-1)^k/k!;

p = (1/m!) Σ (k = 0 to n-m) (-1)^k/k!.
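The formula of 5.32 can be verified by brute force for a small n (here n = 5); this is an illustrative check, not part of the original solution:

```python
import math
from fractions import Fraction
from itertools import permutations

# Probability that at least one of n persons sits in his own seat,
# checked against sum_{k=1}^{n} (-1)^(k-1)/k! for n = 5.
n = 5
total = 0
hits = 0
for perm in permutations(range(n)):
    total += 1
    if any(perm[i] == i for i in range(n)):
        hits += 1

brute = Fraction(hits, total)
formula = sum(Fraction((-1) ** (k - 1), math.factorial(k))
              for k in range(1, n + 1))
```

Working in exact fractions makes the comparison an identity rather than a floating-point approximation.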
5.34 The event Aj means that no passenger will enter the jth car;

P(Aj) = (1 - 1/n)^r,  P(AiAj) = (1 - 2/n)^r,  P(AiAjAs) = (1 - 3/n)^r,

and so on. Using the formula from the answer to Problem 5.29, we obtain

p = 1 - Cn¹(1 - 1/n)^r + Cn²(1 - 2/n)^r - ··· + (-1)^(n-1)·Cn^(n-1)·(1 - (n - 1)/n)^r.
5.35 The first player wins in the following n cases: (1) in m games he loses no game; (2) in m games he loses one but wins the (m + 1)st game; (3) in m + 1 games he loses two, but wins the (m + 2)nd game; ...; (n) in m + n - 2 games he loses n - 1 and then wins the (m + n - 1)st game. p = p^m(1 + Cm¹q + C(m+1)²q² + ··· + C(m+n-2)^(n-1)·q^(n-1)). 5.36 The stack is divided in the ratio p1/p2 of the probabilities of winning for the first and the second player:
1 1 1 2 = 21m ( 1 + 2 Cm + 2 2 Cm+l + · · · +
P2
1 (1 + 2 1 cln + 22 1 = 2n
2
1 n-1 ) 2 n-1 Cm+n-2 ' 1
Cn+l + · · · + 2m-l
cm-1 ) m+n-2 ·
5.37 The event A means that the first told the truth, B means that the fourth told the truth; p
= P(A I B)= P(A)~~ I A)_
Let PJc be the probability that (in view of double distortions) the kth liar transmitted the correct information; P1 = 1/3, P2 = 5/9, Ps = 13/27, P4 = 41/81, P(A) = p 1 , P(B I A) = Ps, P(B) = p4; P = 13/41.
383
ANSWERS AND SOLUTIONS
5.38 We replace the convex contour by a polygon with n sides. The event A means that line Ai; will be crossed by the ith and jth sides; A = L:r~ 1 L:~ ~ i +1 Ai;, p' = L:r~1 L:J~i+1Pif, where Pi;= P(Ai;); p' = (1/2) L~~1P~, P~ = L:f~1Pki- Pkk being the probability that the parallel lines are crossed by the kth side of length lk. From the solution of Buffon's problem 3.22, it follows that p~ = 2lk/Lrr; p' = (1/Lrr) L:~ ~ 1 lk. Since this probability is independent of the number and size of the sides, we have p = s/Lrr. 6.
THE TOTAL PROBABILITY FORMULA 6 ·1 P =
11 1
1 2
12·TI + 12·TI
13 132.
=
6' 2 p
3 4
1 2
4. 9 + 4. 9
=
7
=
T8.
6.3 H 1 means that among the balls drawn there are no white balls, H 2 means that one ball is white and H 3 that both are white;
! ( m1 + m2 ) . - 2 n1 + m1 n2 + m2 H; 1 means that a white ball is drawn from the jth urn; p _
6.4
P(H11 ) = _!!!_, m+k k P(Hd = - - k ' m+ P(H. ) _ 21
P(Hd
-
m
+ k)
(m
= m
k
+
(m + 1) (m + k + 1)
k
+
(m
+
+
k) (m
m
k
+
_ m 1) - m + k'
k.
Consider P(H;1) = m
m
+
k,
P(Hd = m
k
+
k·
Then P(H(j+1),1) = m/(m + k). Therefore p = m/(m + k). 6.5 0.7. 6.6 2/9. 6.7 0.225. 6.8 0.75. 6.9 0.332. 6.10 The event A means getting a contact. The hypothesis Hk means that a contact is possible on the kth band (k = 1, 2). Let x be the position of the center of the hole and y the point of application of the contact. P(H1) = P(15 ≤ x ≤ 45) = 0.3, P(H2) = P(60 ≤ x ≤ 95) = 0.35. The contact is possible on the first band: for 25 ≤ x ≤ 35, |x - y| ≤ 5; for 15 ≤ x ≤ 25, 20 ≤ y ≤ x + 5; for 35 ≤ x ≤ 45, x - 5 ≤ y ≤ 45. Thus P(A | H1) = 1/15. Similarly, P(A | H2) = 1/14; p = 0.045. 6.11 The event A means that s calls come during the time interval 2t. The hypothesis Hk (k = 0, 1, ..., s) means that during the first interval k calls came, P(Hk) = Pt(k). The probability that s - k calls come during the second interval will be
Pt(s - k) (k = 0, 1, ..., s).

6.12 The hypothesis Hk means that there are k defective bulbs, P(Hk) = 1/6 (k = 0, 1, ..., 5). The event A means that all 100 bulbs are good,

P(A | Hk) = C(1000-k)¹⁰⁰/C1000¹⁰⁰ ≈ 0.9^k (k = 0, 1, ..., 5);  p = (1/6) Σ (k = 0 to 5) P(A | Hk) ≈ 0.78.
6.13 The hypothesis Hk means that there are k white balls in the urn (k = 0, 1, ..., n); the event A means that a white ball will be drawn from the urn;

P(Hk) = 1/(n + 1),  P(A | Hk) = (k + 1)/(n + 1);  p = (n + 2)/(2(n + 1)).
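The answer to 6.13 is easy to confirm exactly for a range of n; an illustrative check with an assumed helper name, using exact fractions:

```python
from fractions import Fraction

# Total probability with P(H_k) = 1/(n+1) and P(A | H_k) = (k+1)/(n+1)
# should equal (n+2)/(2(n+1)).
def prob_white(n):
    return sum(Fraction(1, n + 1) * Fraction(k + 1, n + 1)
               for k in range(n + 1))

check = all(prob_white(n) == Fraction(n + 2, 2 * (n + 1))
            for n in range(1, 20))
```

The identity holds for every n tested, since Σ(k+1) over k = 0, ..., n equals (n+1)(n+2)/2.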
6.14 The hypothesis Hk (k = 0, 1, 2, 3) means that k new balls are taken for the first game. The event A means that three new balls are taken for the second game; p ≈ 0.089.
6.15 p ≈ 0.58.
6.16 p = (25·24)/(30·29) + [(25·5 + 5·25)/(30·29)]·(24/28) = 190/203.
6.17 P(A) = P(AB) + P(AB̄) = P(B)P(A | B) + P(B̄)P(A | B̄). The equality is valid only in several particular cases: (a) A = V, (b) B = U, (c) B = A, (d) B = Ā, (e) B = V, where U denotes a certain event and V an impossible one.
6.18 By the formula from Example 6.2, it follows that m ≈ 13, p ≈ 0.67.
6.19 In the first region there are eight helicopters; p ≈ 0.74.
7. COMPUTATION OF THE PROBABILITIES OF HYPOTHESES AFTER A TRIAL (BAYES' FORMULA)

7.1 p = (0.1·5/6)/(0.9·1/2 + 0.1·5/6) = 5/32.
7.3 The hypothesis H1 means that the item is a standard one, H2 that it is nonstandard. The event A means that the item is found to be good; P(H1) = 0.96, P(H2) = 0.04, P(A | H1) = 0.98, P(A | H2) = 0.05, P(A) = 0.9428; p = P(H1 | A) = 0.998. 7.4 The hypotheses Hk (k = 0, 1, ..., 5) mean that there are k defective items. The event A means that one defective item is drawn;
P(Hk) = 1/6; P(Hk | A) = P(Hk)P(A | Hk)/P(A). The most probable hypothesis is H5; that is, there are five defective items.
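Posteriors such as the ones in 7.3 and 7.4 all come from the same Bayes computation; a small sketch with an assumed helper name, checked on the data of 7.3 (priors 0.96/0.04, likelihoods 0.98 and 0.05):

```python
# Bayes' formula: posterior_i = prior_i * likelihood_i / total probability.
def posterior(priors, likelihoods, i):
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[i] * likelihoods[i] / total

# Problem 7.3: probability that a good-looking item is in fact standard.
p_standard = posterior([0.96, 0.04], [0.98, 0.05], 0)
```

The total probability 0.96·0.98 + 0.04·0.05 = 0.9428 and the posterior 0.998 match the answer above.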
l 2;
P(A I Hl)
=
0.6
X
0.3 + (1 - 0.18)
X
0.7
X
=
1, 2)
0.5;
P(Hl I A) = 0.59; 0.3 + (1 - 0.06)0.4 X 0.7; P(H2 I A) = 0.41. 7.7 The second group. 7.8 The event A means that two marksmen score a hit, Hk means that the kth marksman fails; 6 p = P(H3 I A) = 13' P(A I H2)
=
0.2
X
7.9 The event A means that the boar is killed by the second bullet; P(A) = Σ (k = 1 to 3) P(Hk). The hypothesis Hk means that the kth marksman hit (k = 1, 2, 3); P(H1) = 0.048, P(H2) = 0.128, P(H3) = 0.288; P(H1 | A) = 0.103, P(H2 | A) = 0.277, P(H3 | A) = 0.620. 7.10 The fourth part. 7.12 The events are: M1 that the first twin is a boy, M2 that the second is also a boy. The hypotheses are: H1 that both are boys, H2 that there are a boy and a girl;

P(M1) = a + (1/2)[1 - (a + b)];
7.13 Ak means that the kth child born is a boy and Bk that it is a girl (k = 1, 2); P(A1A2) + P(B1B2) + 2P(A1B2) = 1, P(A1A2 + B1B2) = 4P(A1B2). Therefore, P(A1A2) + P(B1B2) = 2/3, P(A1B2) = 1/6, P(A1A2) = 0.51 - 1/6; p = P(A2 | A1) = 103/153.
7.14 5/11. 7.15 One occurrence. 7.16 Hypothesis H1 means that the first student is a junior, H2 that he is a sophomore. A denotes the event that the second student has been studying for a longer time than the first, B means that the second student is in the third year;

P(H1) = n1/(n - 1), P(H2) = n2/(n - 1), P(A) = [n1(n2 + n3) + n2n3]/(n - 1)², P(AB) = n2n3/(n - 1)²;

p = P(B | A) = n2n3/[n1(n2 + n3) + n2n3].
7.17 1/4 and 2/11. 7.18 The hypotheses Hk (k = 0, 1, ..., 8) mean that k out of eight items are nondefective. A denotes the event that three out of four selected items are nondefective:

P(Hk) = 1/9; P(Hk | A) = [P(Hk)·Ck³C(8-k)¹/C8⁴]/P(A) (k = 3, 4, 5, 6, 7), P(Hj | A) = 0 (j = 0, 1, 2, 8), P(A) = 1/5;

p = P(H4 | A)·(3/4) + P(H5 | A)·(1/2) = 3/14.

8.
EVALUATION OF PROBABILITIES OF OCCURRENCE OF AN EVENT IN REPEATED INDEPENDENT TRIALS

8.1 (a) 0.9⁴ = 0.656; (b) 0.9⁴ + 4·0.1·0.9³ = 0.948.
8.2 (a) C10⁵/2¹⁰ = 63/256; (b) 1 - (1/2¹⁰)(1 + C10¹ + C10² + C10³) = 53/64.
8.3 (a) p ≈ 0.17; (b) p ≈ 0.64.
8.4 p = C200³·0.01³·0.99¹⁹⁷ ≈ 0.18.
8.5 (a) p ≈ 1.35e⁻² = 0.18; (b) p ≈ 0.09.
8.6 (a) 0.163; (b) 0.353.
8.7 p = 1 - (0.8⁴ + 4·0.8³·0.2 + 5·0.8²·0.2² + 2·0.8·0.2³)·0.7²·0.6 = 0.718.
8.8 Wn = Σ (m) Cn^m p^m q^(n-m)·[1 - (1 - m/n)^r].
~r·
1- (1-
8.9 p = 1 - (0.7⁴ + 4·0.7³·0.3·0.4) = 0.595. 8.10 Hypothesis H1 means that the probability of hitting in one shot is 1/2, H2 that this probability is 2/3. The event A means that 116 hits occurred. P(H1 | A) ≈ 2P(H2 | A); that is, the first hypothesis is more probable. 8.11 See Table 113.
113
p
0.01
0.05
0.1
0.2
0.3
0.4
0.5
0.6
R1o: 1
0.0956
0.4013
0.6513
0.8926
0.9718
0.9940
0.9990
0.9999
8.12 0.2. 8.13 0.73.
8.14 R_n = 1 - e^{-0.02n} (n > 10). See Table 114.

TABLE 114

n     | 1    10   20   30   40   50   60   70   80   90   100
R_n;1 | 0.02 0.18 0.33 0.45 0.55 0.63 0.70 0.75 0.80 0.84 0.86
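Tables 113 and 114 both describe the probability of at least one occurrence in n independent trials; a short numerical check (the second row uses the exponential approximation 1 - e^{-np} from answer 8.14):

```python
import math

# Probability of at least one occurrence in n independent trials.
def at_least_once(p, n):
    return 1 - (1 - p) ** n

# Row of Table 113: n = 10, various p.
row_113 = [round(at_least_once(p, 10), 4)
           for p in (0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6)]
print(row_113)  # [0.0956, 0.4013, 0.6513, 0.8926, 0.9718, 0.994, 0.999, 0.9999]

# A few entries of Table 114 via the approximation 1 - exp(-0.02 n).
row_114 = [round(1 - math.exp(-0.02 * n), 2) for n in (1, 10, 20, 50, 100)]
print(row_114)  # [0.02, 0.18, 0.33, 0.63, 0.86]
```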
8.15 p = 1 - 0.95^10 = 0.40. 8.16 p = 1 - 0.9^5 = 0.41. 8.17 p = 0.0935. 8.18 (a) p = 0.311, (b) 0.243. 8.19 0.488.
8.20 A denotes the event that two good items are produced. The hypothesis Hk means that the kth worker produced the items (k = 1, 2, 3); p = Sum_{k=1}^{3} P(Hk) P(A | Hk) = 0.22.
8.21 (a) p = 0.794, (b) p = 0.614.
8.22 P_1 = p^4 + C_4^1 p^4 q + C_5^2 p^4 q^2 + C_6^3 p^3 q^3 (p^2 + 2p^2 q) = 0.723; P_2 = 0.784.
8.23 p = C_{2n-k}^{n} / 2^{2n-k}.
8.24 P = Sum_{m=k}^{2k-1} P_m = p^k Sum_{m=k}^{2k-1} C_{m-1}^{k-1} q^{m-k}.
8.25 The 200-watt bulbs (R_6;1 = 0.394; R_10;2 = 0.117).
8.26 0.64. 8.27 0.2816.
8.28 P_m = C_{m-1}^{k-1} p^k q^{m-k} for m >= k; P_m = 0 for m < k.
8.29 p = 0.277.
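If the formula in 8.23 is read as the Banach-matchbox expression C(2n-k, n)/2^(2n-k) (an interpretation of the garbled original, not a certainty), it must define a probability distribution over k = 0, ..., n; a quick exact check:

```python
from fractions import Fraction
from math import comb

# Banach-matchbox-type probabilities: P(k) = C(2n-k, n) / 2^(2n-k).
n = 8
probs = [Fraction(comb(2 * n - k, n), 2 ** (2 * n - k)) for k in range(n + 1)]
print(sum(probs))  # 1
```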
8.30 We require 0.1 >= 0.8^n [1 + n/4 + n(n - 1)/32]; hence n >= 25.
8.31 We require 0.99*5^10 <= 4^10 + C_10^1 * 4^9 + ... + C_10^n * 4^{10-n}; n = 5.
8.32 P_{4,0} = 0.3024, P_{4,1} = 0.4404, P_{4,2} = 0.2144, P_{4,3} = 0.0404, P_{4,4} = 0.0024.
8.33 0.26. 8.34 0.159. 8.35 95/144. 8.36 n = 29. 8.37 n >= 10. 8.38 n >= 16. 8.39 8. 8.40 8.
8.41 mu = 4; p = 0.251. 8.42 mu+ = 3, mu- = 1; p = 32/81.
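The five values in 8.32 look like the success-count distribution for four independent trials with probabilities 0.1, 0.2, 0.3, 0.4 (recovered from the numbers themselves, since the problem statement is not in this chunk). Multiplying the per-trial generating functions (q_i + p_i u) reproduces them:

```python
# Coefficients of the product of generating functions (q_i + p_i * u).
probs = [0.1, 0.2, 0.3, 0.4]   # assumed per-trial success probabilities
coeffs = [1.0]                  # polynomial in u, constant term first
for p in probs:
    q = 1 - p
    new = [0.0] * (len(coeffs) + 1)
    for m, c in enumerate(coeffs):
        new[m] += q * c         # trial fails: power of u unchanged
        new[m + 1] += p * c     # trial succeeds: multiply by u
    coeffs = new
print([round(c, 4) for c in coeffs])  # [0.3024, 0.4404, 0.2144, 0.0404, 0.0024]
```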
9. THE MULTINOMIAL DISTRIBUTION. RECURSION FORMULAS. GENERATING FUNCTIONS

9.1 p = P_{5;2,2,1} + 2P_{5;3,2,0} = 50/243.
9.2 p = P_{3;1,1,1} + P_{3;2,1,0} + P_{3;1,2,0} = 0.245.
9.3 (a) p = 9!/((3!)^3 * 3^9) = 0.085, (b) p = 6*9!/(4!*3!*2!*3^9) = 0.385.
9.5 p = (10!/(6!*3!*1!)) * 0.15^6 * 0.22^3 * 0.13 = 0.13*10^{-4}.
1 - 2(0.0664 4
+ ~ 0.2561 4 +
4·0.0664·0.256!3
+
9.6
6 · 0.0664 2 · 0.256P + 4 · 0.2561 · 0.0664 3 ) = 0.983. 12! 6! 12! 1 (a) p = 26 _6 r 2 = 0.00344, (b) p = 2 .. 21 2131 41 · 6 r 2 = 0.138.
9.7
(a) Pr = (l
+m+
+ mr + nr)! lr! mr! nr! ·(l
(/r (c) p =
9.8
n)lr +mr +n1'
1
p = Pn, P1c = P~c-r·2
+
+
(b) p = 6ph
[1rmmrnnr m + n) 1r+mr+n1
1 (1- Pie-r) 2 = 0.5; p = 0.5.
9.9 Let Pic be the probability of a tie when 2k resulting games have been played; Plc+r = (lj2)pk (k = 0, 1, ... ), Po= 1, Pn-r = (lj2)n-r; P = (1/2)Pn-r = 1/2n. 9.10 The number n should be odd. Let P1c be the probability that after 2k games the play is not terminated; Po = 1, Pic=
(~r(k
=
1, 2, ... ,n ~ 3 );
p =
~P; 2
=
+
1
~ (~r- 3 ll 2 •
9.11 Let p_k be the probability of ruin of the first player when he has k dollars. According to the formula of total probability, p_k = p*p_{k+1} + q*p_{k-1}. Moreover, p + q = 1, p_0 = 1, p_{n+m} = 0. Consequently, q(p_k - p_{k-1}) = p(p_{k+1} - p_k).
(1) p = q. Then p_k = 1 - kc, c = 1/(n + m); that is, p_n = m/(n + m), p_m = n/(n + m).
(2) p != q. Then p_k - p_{k-1} = (q/p)^{k-1}(p_1 - 1). Summing these equalities from 1 to n and from 1 to n + m, we obtain
1 - p_n = (1 - p_1)[1 - (q/p)^n]/[1 - q/p],  1 - p_{n+m} = (1 - p_1)[1 - (q/p)^{n+m}]/[1 - q/p].
Thus, 1 - p_1 = [1 - q/p]/[1 - (q/p)^{n+m}] and p_n = [(q/p)^n - (q/p)^{n+m}]/[1 - (q/p)^{n+m}].
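The closed forms in 9.11 can be exercised numerically; here n and m are the players' starting capitals and p is the first player's chance of winning a round (all values below are illustrative):

```python
# Ruin probability of the player holding n dollars against an opponent
# holding m, winning each round with probability p (answer 9.11).
def ruin_prob(n, m, p):
    q = 1 - p
    if abs(p - q) < 1e-12:
        return m / (n + m)                       # fair game
    r = q / p
    return (r ** n - r ** (n + m)) / (1 - r ** (n + m))

# Fair game: ruin probability is the opponent's share of the total capital.
print(ruin_prob(3, 7, 0.5))  # 0.7
# Boundary checks: no money means certain ruin; all the money means none.
print(ruin_prob(0, 10, 0.6), ruin_prob(10, 0, 0.6))  # 1.0 0.0
```

The two players' ruin probabilities always sum to one, which is a useful extra sanity check on the unfair-game branch.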
9.12 P = Pm;Pm = Oform ~ n;Pn = 1/2n-l;Pm = 1/2nforn < m < 2n- I. In the general case Pm is determined from the recurrent formula 1 1 1 Pm = 2Pm-1 + 22 Pm-2 +···+ 2 n_ 1 Pm-n+l• which is obtained by the formula of total probability. In this case, the hypothesis Hk means that the first opponent of the winner wins k games; 1)n-k (k = 1,2, ... ,n- 1). Pm-k = P(Hk) ( 2 9.13 Pk is the probability that exactly k games are necessary. Fork = 1, 2, 3, 4, 5, Pk = 0, P 6 = 2p 6 = 1/2 5 , P 7 = 2Cgp 6 q = 3/2 5 , Pa = 2C~p 6 q 2 = 21/2 7 , P 9 = 7/2 5 , P 10 = 63/2 9 ; (a) R = 2J~ 1 Pk = 193/256, (b) if n is odd, then Pn = 0. For even n, Pn = (1/2)Pcn-ll/2, where Pk is the probability that after 2k games the opponents have equal numbers of points; p 5 = 0(1/2 10 = 63/2 8, Pk+l = (1/2)pk; that is, 63 63 Pk = 2k+s (k = 5, 6, ... ), Pn = 2(n/2)+ 3 •
9.14 Expand (1 - u)^{-1} into a series and find the coefficient of u^m.
9.15 The same as in Problem 9.14.
9.16 The required probability is the constant term in the expansion of the generating function G(u) = (1/4^n)(u + 2 + 1/u)^n = (1 + u)^{2n}/(4^n u^n); P = C_{2n}^{n}/4^n.
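The constant-term claim in 9.16 is easy to verify by brute force for a small n, since each factor of (u + 2 + 1/u)/4 contributes an exponent of -1, 0, 0, or +1 with equal weight:

```python
from itertools import product
from math import comb

# Constant term of ((u + 2 + 1/u)/4)^n: count exponent sequences summing to 0
# among the 4^n equally weighted choices per factor.
n = 5
count = sum(1 for seq in product((-1, 0, 0, 1), repeat=n) if sum(seq) == 0)
print(count, comb(2 * n, n))  # 252 252
```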
9.17 The required probability is the sum of the coefficients of u raised to powers not less than m in the expansion of the function 1 2 G(u) = ( 16 u
1
3
+ 4u + 8 + P
=
1 42n
1 4u 4n
2
k = 2n +m
1
)n
+ 16u2
(1
+
u)4n.
(4u)2n
,
C~n·
For n = m = 3,p = 0.073. 9.18 The required probability is twice the sum of the coefficients of U 4 in the expansion of the function 1 ( G(u) = 520 u
1
+-u +
) 20 3 20!
p = 2 520
=
8
1 20 20-m 520
20! m. n.'(20- m - n)l. um-n320-m-n; 316-2k k)! k! (16 - 2k)! = 0· 104 ·
2 2
m~o
k~O (4 +
n~o
'
9.19 (a) The required probability Pchamp is the sum of the coefficients of nonnegative powers of u in the expansion of the function 1 1 1) 24 (1 + u) 48 G(u) = ( 4 u + 4u + 2 = 424u24 ; 48 Pasplrant = 0.4423. Pchamp = C~8 2. ~24 (2 48 + = 0.5577, 4 24 k~24
2
cw
(b) the probability of the complementary eve]\t is the sum of the coefficients of u whose powers range from - 4 to 3 in the expansion of the function
+
9.20 function
42o
=
p
1
+
u2
+ ... +
= 1
+
c~- 1 u
6" (u
=
Using the equality 1/(1 - u)n
uB)
Rm = 10,
+
C~:;:iu 2
=
~n (Cii;-
= 0.22.
+ · · · we obtain
= :L;;'~n P~c.
+ C~Ciii-12-
c;.Ciii-s
Using the equality
· · ·).
m = 20, R2o
=
1
610 ( C§g - C loCi2)
=
0.0029.
The desired probability is the coefficient of u21 in the expansion of the
G(u) = =
1~s
+
(1
u
+ ... +
1
106 (1 - C§u 10
+
1~6
ug)s =
( \ -_ u~or
Cgu 20 - · · ·)(1
1
p = 106 (C,f6 -
9.22
k == 16
Un(l - uB)n 6n(l - u)n .
=
and the series is cut off if m - 6k < n; (b) Rm 1 + c~- 1 + · · · + C'i~l = C';, we obtain
9.21 function
2: no
= 1 - 420
(a) The required probability Pm is found with the aid of the generating G(u)
For n
23
1
u)4o u2o ;
1 (1
G(u)
+
Cf,C:f6
+
Cgu
C~Cil)
=
+
C~u 2
+ · · ·);
0.04.
(a) PN is the coefficient of uN in the expansion of the function
~nn (\-=- u:r;
G(u) =
and the series is cut off when N - ms < n; (b)
P =
1
i
+ PN-
k==n
Pk = 1
+ PN- _!_ mn
C~CJJ-2m- · · ·)
(CJJ-
CkCJJ-m +
~3 (C~
- 3) = 0.1875;
(compare with Problem 9.20). 9.23
(a) G1(u)
= ~231
(\ -=_ :
v21
4
)
3
,
(b) G2(v) =
8"3 (1 +
v) 9 ,
(c)
G2(~)
p =
1
p = 83 C§ = 0.2461;
3 ~ 3 (1
G (u)
=
G1(u) x
P
=
3 ~ 3 (Cl2 + 3C,f2) =
=
+
0.1585.
u 2))1
+
u) 12 ,
9.24 Hypothesis Hk means that the numbers of heads for the two coins first become equal after k tosses of both coins (k = 1, 2,. 0 0, n); the event A means that after n throws the numbers of heads become equal (previous equality is not excluded).
P(A)
i, P(Hk)P(A
=
k~l
I Hk),
P(A
Hk)
1
P(Hn),
P =
=
4 }_k
P(A)
=
P(A I Ho),
ca;!2k·
Consequently, C2'n = l:~~1 4kCa;!2kP(Hk)o Using successful values for n, one can findp = P(Hn)o Let R(u) = 2:k'~ 1 ukP(Hk), Q(u) = l:f~o uip;, wherepn-; = P(A I H;). Adding together the terms containing un, we obtain: n
oo
Q(u)R(u) =
2
un
n=l
-
R(u)- 1-
2
oo
Pn-kP(Hk) =
2
unPn(A) = Q(u) - 1;
n=l
k=l
~ k 1- u- k~l u
.y-- -
22k
(2n - 2)!
(2k - 2)! . lk!(k- 1)!'
p
=
22 n
1
(n- 1)! n!.
9.25 Let JL be the number of votes cast for a certain candidate. The probability of this is P~ = C~p~qn-~0 The probability that at most JL votes are cast for this candidate is a~ = l:~~o P •. The probability that among k candidates l - 1 receive at least JL votes, k - l - 1 persons get no more than JL votes and two receive JL votes each is
k'
+
2(1- 1)! (k.- l - 1)! (1 kl
p
9.26 (a)
P~ - a~)l-1a~-l-1P~;
n
= 2(1- 1)! (k.- l - 1)! ~~o P~a~-l-1(1 + P~- a~)z-1.
The probability of winning one point for the serving team is 2/3. pk
= C[5 Gr GrS+k + Cf-1Cf5 Gr Grl+k +
C~-1Cr5 Gr (~r+k
+ .. + c~=~cr51 Grk-2 (~r7-k + Cf5 Grk Gr5-k 0
or Pk
(~r 5 '~k (4k- 1Cf5 + 4k- 2Cf-1Cr5 + 4k- 3 c~-1Cr5 + · · ·
=
Q ~.. __
+ 4Cii:-1Cr5 1 + Cf5); 4k + 4 k - 1c1c1 k 14 + 4k - 2c2c2 k 14 + ... , + 4c~-lcr4 1 + Cf4)
(1)3 (2)3 1( 14
6k
(k = 0, 1, ... , 13). The numbers Pk and Qk are given in Table 115. TABLE
115
k
0
1
2
pk Qk
0000228 0000114
0000571 0000342
0001047 0000695
0001623 0001159
0002260 0.01709
0002915 0002312
0003546 0002929
k
7
8
9
10
11
12
13
0004604 0004064
0004986 0004525
0005254 0004890
0005407 0005148
0005450 0005299
0005392 0005345
3
4
5
6
/
P,, Qk
0.04118 0003524
(b) P_I = Sum_{k=0}^{13} P_k = 0.47401, Q_I = Sum_{k=0}^{13} Q_k = 0.42056.
(c) let ak be the probability of scoring 14 + k points out of 28 + 2k for the first team (serving), which wins the last ball, f3k being the analogous probability for the second team;
+ C[3Cf4 (1)4 3 (2)24 3 + ···
1)2 (2)26 a 0 = C[4 ( 3 3
+ C[3Clj ( 31)26 (2)2 3 + (1)28 3 = 0.05198, that is, (ak ao
+ {3k)
+
f3o
Pk = ~ Pk
=
+
0.10543
1
3k (ao
=
(- 1)k gk+ 1
~-
+
(ak - {3k)
f3o), f3
(ao-
(e)
Prr
= .L
( -1)k0.00148
0.10543
qk = ~
gk+1
+
( -1)k0.00148 gk+1
;
00
k~o
P = P1
(-1)k
~ (ao - f3o);
_ ao + f3o ( -1)k {3 . qk- ~ - gk+ 1 (ao- a),
a),
00
(d)
=
Pk = 0.05257,
+ Prr =
Qrr
= .L
qk
k~o
+
Q = Q1
0.52658,
= 0.05286; Qrr
= 0.47342.
II RANDOM VARIABLES

10. THE PROBABILITY DISTRIBUTION SERIES, THE DISTRIBUTION POLYGON AND THE DISTRIBUTION FUNCTION OF A DISCRETE RANDOM VARIABLE

10.1 See Table 116.
TABLE 116

x_i | 0   1
p_i | 0.7 0.3

F(x) = 0 for x <= 0; 0.7 for 0 < x <= 1; 1 for x > 1.

10.2 See Table 117.

TABLE 117

x_i | 0     1     2     3
p_i | 0.125 0.375 0.375 0.125

F(x) = 0 for x <= 0; 0.125 for 0 < x <= 1; 0.500 for 1 < x <= 2; 0.875 for 2 < x <= 3; 1 for x > 3.

10.3 See Table 118.

TABLE 118

x_i | 1   2    3     4      5
p_i | 0.1 0.09 0.081 0.0729 0.6561

10.4 (a) P(X = m) = q^{m-1}p = 1/2^m, (b) one experiment.
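A small check of 10.4: the probabilities 1/2^m decrease with m, so a single experiment is indeed the most probable outcome, and the total mass adds to 1:

```python
from fractions import Fraction

# Geometric law with p = q = 1/2: P(X = m) = 1/2^m, m = 1, 2, ...
pmf = {m: Fraction(1, 2 ** m) for m in range(1, 61)}
print(max(pmf, key=pmf.get))     # 1  (the mode: "one experiment")
print(float(sum(pmf.values())))  # 1.0 (tail beyond m = 60 is negligible)
```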
10.5 X1 is the random number of throws for the basketball player who starts the throws and X2 is the same for the second player; P(X1 = m) = 0.6^{m-1} * 0.4^m, P(X2 = m) = 0.6^{m+1} * 0.4^{m-1}, for all m >= 1. 10.6
See Table 119.

TABLE 119

x_i | 2     3     8     9     14    15    19    20    25    30
p_i | 0.008 0.036 0.060 0.054 0.180 0.027 0.150 0.135 0.225 0.125
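The two families of probabilities in 10.5 should jointly account for every possible outcome; truncating the geometric sums far out confirms that they total 1:

```python
# P(X1 = m) = 0.6^(m-1) * 0.4^m and P(X2 = m) = 0.6^(m+1) * 0.4^(m-1)
# together exhaust all outcomes of the alternating throws in 10.5.
s1 = sum(0.6 ** (m - 1) * 0.4 ** m for m in range(1, 200))
s2 = sum(0.6 ** (m + 1) * 0.4 ** (m - 1) for m in range(1, 200))
print(round(s1 + s2, 10))  # 1.0
```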
10.7 P(X = m) = q^{m-4}p = 1/2^{m-3} for all m >= 4, since the minimal random number of inclusions is four and occurs if the first device included ceases to operate. 10.8 (a) P(X = 0) = q^{n-1}, and P(X = m) = pq^{n-m-1} for 0 < m <= n - 1; (b) P(X = m) = pq^{m-1} for 1 <= m <= n - 1, and P(X = n) = q^{n-1}.
10.9 P(X = m) = C_n^m p^m q^{n-m} for all 0 <= m <= n.
10.10 P(X = m) = ((np)^m / m!) e^{-np} for all m >= 0.
10.11 P(X = m) = 1 - 2*0.25^m for all m >= 1.
10.12 P(X = k) = (1 - p/w)^{k-1} p/w for all k >= 1.
10.13 See Table 120.
z,
0
~
0.
120
1
2
3
4
3
2
4
5 32
-
10 32
-
10 32
-
00
- -- -- - - - - - - -- l
P•
-
32
-
5 32
'
-
l
32
10.14 See Table 121.

TABLE 121

x_i     | 0 1 2 3  4  5  6  7  8  9  10 11 12 13
10^3p_i | 1 3 6 10 15 21 28 36 45 55 63 69 73 75

x_i     | 14 15 16 17 18 19 20 21 22 23 24 25 26 27
10^3p_i | 75 73 69 63 55 45 36 28 21 15 10 6  3  1
11. THE DISTRIBUTION FUNCTION AND THE PROBABILITY DENSITY FUNCTION OF A CONTINUOUS RANDOM VARIABLE

11.1 f(x) = 1 if x belongs to (0, 1), and f(x) = 0 if it does not.
11.2 f(x) = (1/sqrt(2 pi)) e^{-x^2/2}.
11.4
(a) p = ;•
11.5
(a) a, ·
11.6
(a) f(x) = ~ (c)
11.7
1
(b) f(x) =
(b) a
J
'2log2 -.--
loge
Te-ttr.
~
1.18a,
xm-le-xm;xo
(c) f(x)
(x;;. 0),
(b)
=
~ e- x2t2a2. a
Xp = {
-x0 In (1 - p)}i 1m,
(m;;; I xofm
(a) 10,
(b) F(x) =
2 V27T'
1
2, b =
1
[ 18 e- 1212 dt, where t 8 = log x - log
Jo
a
+
11.8
(a) a =
11.9
1 a(f3 - a) (c) P(a < X < (3) = -arctan 2 f3 · 7T' a + a 1 1 1 11.10 (a) F(x) = 2 + .;;:arctanx, a=-;=·
.;;:;
Xo
a
(b) F(x) = 7T(X2
a2);
(b) P(IXI < 1) =
1
2.
V7T'
11.11
p =
1
2'
11.12 p
=
2 3
-·
11.13 Introduce the random variable X denoting the time interval during which a tube ceases to operate. Write the differential equation for F(x) = P(X < x), the distribution function of the random variable X. The solution of this equation for x = l has the form F(l) = 1 - e-kz. 11.4
(a)
~: (6P 00
11.5 f(x) = 1 ~
- sLz
+ 3z
1
2i 8(x
- Xt).
2),
Cb) 1 -
(~
_
:r+l.
12. NUMERICAL CHARACTERISTICS OF DISCRETE RANDOM VARIABLES
12.1 x-bar = p. 12.2 x-bar_a = 1.8, x-bar_b = 1.7, x-bar_c = 2.0; the minimal number of weighings will be in the case of system (b). 12.3 M[X] = 2, D[X] = 1.1. 12.4 To prove this it is necessary to compute M[X] = dG(u)/du at u = 1, where G(u) = (q1 + p1 u)(q2 + p2 u)(q3 + p3 u). 12.5 We form the generating function G(u) = (q + pu)^n; M[X] = G'(1) = np. 12.6 N-bar = (2/n) Sum m_t k_t.
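The generating-function argument of 12.4 and 12.5 can be sketched numerically: differentiate the product of per-trial generating functions at u = 1 and compare with the sum of the p_i (the probabilities below are arbitrary illustrations):

```python
# Mean of a sum of independent indicator trials via G'(1), where
# G(u) is the product of the per-trial generating functions (q_i + p_i u).
def gf_mean(ps, h=1e-6):
    def G(u):
        prod = 1.0
        for p in ps:
            prod *= (1 - p) + p * u
        return prod
    return (G(1 + h) - G(1 - h)) / (2 * h)  # central difference at u = 1

print(round(gf_mean([0.2] * 5), 6))  # about 1.0, i.e. n*p = 5*0.2
```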
12.7 For the first, 7/11; for the second, -7/11 coins; that is, the game is lost by the second player. 12.8 Consider a, band cas the expected wins of players A, Band C under the assumption that A wins from B. For these quantities there obtain a = (m/2) + (b/2), c = a/2, b = c/2, forming a system of equations for the unknowns a, b and c. Solving the system, we obtain a = (4/7)m, b = (l/7)m, c = (2/7)m. In the second case, we obtain for the players A, Band C, (5/14)m, (5(14)m, (2(7)m, respectively. 12.9
~2
M[A] =
+(i4 +g5) +G7 +~B)+··· m
oo
= 3 M(C) = 2 2 M[X]
1 k(l - p)~ -) = -n/4 P(a < ii) = 0.544 a e ' P(a > a) 0.456
~.
13.7
M[X] =
hV'" D[X] = m + 1.
13.8
M[X] =
~ x0 ,
D[X] =
D[V] =
~ x5.
~ (~- ~)· h
2
'"
=
1 19 · ·
13.9 M[X] = 0, D[X] = 2.
1
13.10
A = ~a+ 1 r(a
13.11
A = 1'(a)r(b)' M[X] = a
13.12
A
rca
+
1)' M[X] =(a+ 1)~, D[X] = ~ 2 (a
+
a
b)
r(~) =.
(
v;r ~ '
J: (1
To calculate the integral x =
1).
ab
+ b'
D[X]
M[X] = 0, D[X]
)
+
= (a + b)2(a + b + 1) 1
= - -2 (n > 2). n-
+ x 2)- en+ 1>12 dx, use the change of variables
V y/(1
- y) leading to the B-function and express the latter in terms of the
13.13
A=
13.14
Use the relation
r- function.
2(n-3)/2r(n ; 1)
_ v2 r(~) ' M[X]-
r(n; 2)'
D[X] = n - 1 -
x
2•
f(x) = dF(x) = _ d[1 - F(x)] dx dx
13.15 M [T] = 1/y. Notice that p(t) is the distribution function of the random time of search (T) necessary to sight the ship. 13.16 m(t) = m 0 e-Pt. Consider the fact that the probability of decay of any fixed atom during the time interval (t, t + 11t) is pl1t and work out the differential equation for m(t). 13.17
T_n = (1/p) ln 2. Use the solution of Problem 13.16.
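A one-line check of 13.16-13.17: with m(t) = m0 e^{-pt}, the half-life T = (ln 2)/p indeed halves the mass (p = 0.25 is an arbitrary decay rate chosen for the check):

```python
import math

# Half-life of exponential decay m(t) = m0 * exp(-p*t): m(T)/m0 = 1/2.
p = 0.25
T = math.log(2) / p
print(round(math.exp(-p * T), 6))  # 0.5
```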
13.18 [P(T < T)]/[P(T > T)] = 0.79; that is, the number of scientific workers who are older than the average age (among the scientific workers) is larger than that younger than the average age. The average age among the scientific workers is T = 41.25 years. (2v - 1)(2v - 3) · · · 5 · 3 ·1 13.19 m2 v = ( )( ) ( ) n" for n > 2v + 1, m 2 v + 1 = 0. n- 2 n- 4 " · n - 2v
For the calculation of integrals of the form
l
oo
o
V n[y/(1 r -function. + q) + k).
make the change of variables x express the latter in terms of the r(p -:- k)r(p
+
=
13.20
mk = r(p)r(p
13.21
M[X] = 0, D[X] = 12
13.22
P.,k
q
x2)-Cn+1l
(
x 2 " 1 +n
77 2
dx,
y)] that leads to the B-function and
1
+ 2.
k
= 2: ( -l)k-i(x)k-imh where
m; = M[X 1].
j=O k
13.23
m"
=
2 0 C~(x)"- 1p.,;, where
t~
fLJ =
M[(X- x)i].
14. POISSON'S LAW

14.1 p = 1 - e^{-0.1} = 0.095. 14.2 p = (3^4/4!) e^{-3} = 0.17. 14.3 p = 1 - e^{-1} = 0.63. 14.4 p = e^{-0.5} = 0.61. 14.5 (1) 0.95958, (2) 0.95963. 14.6 0.9. 14.7 0.143. 14.8 p = Sum_{m=3}^{500} 1/(e*m!) = 1 - Sum_{m=0}^{2} 1/(e*m!) = 0.08. 14.9 p = 0.4. 14.10 Sk = 1/sqrt(a-bar).
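Two of the Poisson values above can be recomputed directly; identifying 14.2 with lambda = 3, m = 4 and 14.8 with lambda = 1, m >= 3 follows the reconstructed formulas, so treat this as a consistency check rather than an authority:

```python
import math

# Poisson probability P(X = m) with parameter lam.
def pois(m, lam):
    return lam ** m / math.factorial(m) * math.exp(-lam)

print(round(pois(4, 3), 2))                              # 0.17 (answer 14.2)
print(round(1 - sum(pois(m, 1) for m in range(3)), 2))   # 0.08 (answer 14.8)
```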
14.12 M[X] = D[X] = (log 2)/(log e)MN0 /ATn. Work out the differential equation for the average number of particles at the instant t. Equate the average number of particles with half the initial number. The resulting equation enables one to find the probability of decay of a given particle; multiplying it by the number of particles, we get M [X]. n1o
14.13
~ 1.02·10- 10 , p = 10! e -n ~
( a)
=
(b) p
1 - e-n- ne-n;;:;
~.673,
where
where s =
15.
:2:~
1
k;. In as much as
:2:f~ 1
,\ 1
and s is finite, then
15. THE NORMAL DISTRIBUTION LAW

15.1 p = 0.0536.
15.2 p_below = 0.1725, p_inside = 0.4846, p_above = 0.3429.
15.3 (a) 1372 sq. m., (b) 0.4105.
15.4 22 measurements.
15.5 E_x = 0.78 E_y.
15.6 See Table 122.

TABLE 122

x        | -65 -55 -45  -35  -25   -15   -5    +5    +15   +25   +35
10^5F(x) | 35  350 2150 8865 25000 50000 75000 91135 97850 99650 99965
15.7 E ~ 39 m. The resulting transcendental equation may be more simply solved by a graphical method. 15.8
E 1 =a
J!·
15.9 (a) 0.1587, 0.0228, 0.00135; (b) 0.3173, 0.0455, 0.0027. 15.10 p ~ 0.089. 15.11 p = 0.25. 15.12 (a) 0.5196, (b) 0.1281. 15.13 M [X] = 3 items. 15.14 Not less than 30 fL. 15.15 ~ 8.6 km. 15.16 (a) 1.25 mm., (b) 0. 73 mm.
X) _ Q) (b : x) for all x > b, 1_ Q)(b : X) Q)(x: x) _Q)(a: x) -'-:-----'---_;._ _.:... for a < x < b. Q)(b :X) _Q)(a: X) (x :
15.17
15.18
16.
(a) Fa(X)
=
(b) F6(x)
=
E= Jbln 2
P
a2 (bja) ·
CHARACTERISTIC FUNCTIONS 16.1
+ pe1u, where q = 1 -
E(u) = q
p.
n
= f1 (qk + pke 1u), where Pk + qk
16.2
E(u)
16.3
E(u) = (q
+ pe 1u)n;
16.4
E(u) = 1
+
16.5
E(u) = exp {a(e 1u
= 1.
i=l
M[X]
np, D[X] = npq.
1 a(l _ elu)' M[X] = a, D[X] = a(l +a). -
1)}, M [X] = D[X] = a.
a2u2)
T
16.6
E(u) = exp ( iux -
16.7
E(u)
16.8
E(u) = iu(b - a) , mk
16.9
E(u) = 1
1
= -1 - - . , -
IU
eiub -
+
=
mk
·
= k! bk+l -
eiua
=
(k
+
ak+l
1)(b - a)
vv; e-" 2 [i- $(v)], where v = uj2h and "'( ) -
..., v -
2 (" v; )a e
z2
d
z.
Integrate by parts and then use the formulas:
l
oo e-x 2 sm . 2px dx 16.10
V; e-P
= -
0
E(u) = ( 1 -
2
~)-A,
2
Ql(p),
Joo e-px 2 cos qx dx 0
mk = .:\(,\
+
1) · · ~~,\
+
k -
= -1
2
e-q 2 14 P
J;
2
-·
p
1).
2 See Jahnke, E., and Emde, R.: Tables of Functions with Formulae and Curves. 4th rev. ed. New York, Dover Publications, Inc., 1945.
16.11
E(u)
11"
e'a" cos"' dcp
= -
7T
l 0 (au). Pass to polar coordinates and use
=
0
one of the integral representations of the Bessel function. 2 16.12 E(u) = exp [ixu - a I u]. By a change of variables it is reduced to the form a eixu E(u) = eixu- 2 - - 2 dx. 7T -oo x +a The integral in this formula is computed with the aid of the theory of residues, for which it is necessary to consider the integral
I+
a
-
f
7T
(1j
etzu
---dz
z2
+
a2
over a closed contour. For positive u the integration is performed over the semicircle (closed by a diameter) in the upper half-plane and for negative n over a similar semicircle in the lower half-plane.
~2 a
Ey(u) = exp {iu(b +ax) -
16.13
16.14 f.L 2 k 16.15
= a 2 k(2k-
f(x) =
( 7T
1)!!,
a
+ x 2)
a2
2
a 2},
= 0.
/L2k+1
(the Cauchy law).
for x > 0, for x > 0, f.(x) = {0 2 for x < 0, ex for x < 0. Solve this with the aid of the theory of residues; consider separate the cases of positive and negative values of x. 16.17 P(X = k) = 2-k, where k = 1, 2, 3, .... Expand the characteristic function in a series of powers of (1(2)etu and use the analytic representation of the 8-function given in the introduction to Section 11, p. 49. f(x) ={e-x 1 0
16.16
17.
THE COMPUTATION OF THE TOTAL PROBABILITY AND THE PROBABILITY DENSITY IN TERMS OF CONDITIONAL PROBABILITY 17.1 p
= !(() 2
b_
() ) 1
( In tan ()2 2
-
()1) · In tan 2
17.2 Denoting the diameter of the circle by D and the interval between the points by l, we obtain p = D(2tl-; D)
17.3 p
=
17.4 p
=
0.15.
i [1- d>(L ~ x)] + 2L:V; x
17.5
= 0.4375.
[exp { ~2-2} -p2
-
exp
{
-)2}] + ~ [(L ~ x) + (;)] ~ 0.67. -p2
In both cases we get the same result, P1
17.6 p
=
1- ~ rll {1- [( 2;
i
(L £2x _
=
P2 = 0.4.
r)- (2~ r)]f dz.
17.7 F(w) = n Integral_{-inf}^{+inf} f(y) [Integral_{y}^{y+w} f(x) dx]^{n-1} dy.
17.8 p = 1 - 128/(45 pi^2) = 0.712.
J+_
17.9 Pt = 2:f~Yt1 rt, where rt = 17.10
00
oo
.f.(x)fp(x - x 0 ) dx.
2(2a)mo + 1 + I)! e- 2 ".
f(a I mo) = (mo
III SYSTEMS OF RANDOM VARIABLES 18.
DISTRIBUTION LAWS AND NUMERICAL CHARACTERISTICS OF SYSTEMS OF RANDOM VARIABLES
f(x. y)
18.1
~ {~ -
~
for a
a;(d- c)
~
x
b, c
~
y
~
d,
outside the rectangle, F(x, y) = F1(x)F2(y), where
F1(x) =
1 for x ~ b, { x-a b _ a for a ~ x ~ b, 0
for x
F2(y) =
{1y-c
do _ c
~a,
(b) F(x, y)
(~arctan~
+
~)(~arctan~
(a) A
18.3
f(x, y, z) = abce-(ax+by+cz).
18.4
The triangle with vertices having coordinates:
20,
(~In a~c, 0, 0);
( 0,
=
~In a~c,
0);
(a) F(i, j) = P(X < i, Y < j) = P(X
18.5
~
( 0, 0, i - 1, Y
~
d,
for c ~ y ~ d, for y
18.2
=
for y
~
c.
+
~) ·
~In a~c). ~
j -
1).
For the values of F(i, j) see Table 123. TABLE
~
0
0 1 2 3
0.202 0.202 0.202 0.202
1
J
(b) 1 - P(X
~
6, Y
2
123 3
4
5
6
- -- -------- -- -
~
1)
0.376 0.475 0.475 0.475
=
0.489 0.652 0.683 0.683
1 - 0.887
=
0.551 0.754 0.810 0.811
0.113;
0.600 0.834 0.908 0.911
0.623 0.877 0.964 0.971
0.627 0.887 0.982 1.000
(c)
M[X]
= 1.947;
= 0.504; Ilk11 I =
M[Y]
11
2 ' 610 0.561
0 ' 561 0.548
1/·
18.6 18.7
+ f(u,
P = f(u, v, w): [f(u, v, w)
18.8
w, v) + f(v, u, w) + f(v, w, u) + f(w, u, v)
+ f(w,
v, u)].
F(a1, b5) + F(a2, b1) - F(a2, bs) + F(a 3 , b4) - F(as, b2) + F(a4, b2) - F(a4, b4) + F(a5, b5) - F(a5, b1). a- 6 - a- 9 + a- 12 ,
18.9 P = F(a1o h 3 ) 18.10
P = a- 3
-
18.11
P=
7TR2 4ab R2 4 ab (7T - 2{3
4~: (7T -
+
sin 2{3)
2cx - 2{3
+
+
sin 2cx
sin 2{3)
for 0
~
R
~
b,
for b
~
R
~
a,
for a
~
R
~ Va + b
for R ~ where a
=
arc cos (a/R),
f3
V a 2 + b2 ,
~:
~~) ·
18.12
(a) c =
18 _13
(a) r
18.14
Consider the expectations of the squares of the expressions
=
xy
(b) p =
{+-11
ay{X- x)
+
(
1-
for n/m < 0, for n/m > 0,
ax(Y- y)
(b)
Ux
ay
=
\!!:._\· m
ay(X- x) - ax(Y- y).
and
18.15
Make use of the reaction kxy = M [XY] - xy.
18.16
llr;;ll =
18.17
(a) M[X] =ex+ y = 0.5,
-0.5 1 -0.5
1 -0.5 0.5
0.5 -0.5
(b) M[Y] =ex+ f3 = 0.45, D[X] = (ex + y)({3 + 8) = 0.25; D[ Y] = (ex + {3)(y + 8) = 0.2475; kxy = M[XY]- M[X]M[Y] =a- (a+ y)(a 1 18.18
M[X]
=
M[Y]
=
0;
2
llkt;ll =
0 18.19
f(x,y) = cosxcosy; M[X]
I kt; I
=
I\7T
~
3
2 ,
arc cos (b/R).
=
77~ 3 ,
2
7T
~ 311·
=
0
1
2
M[Y] = ~- 1;
+ {3)
=
0.175.
+ !::.l arc cos!::.]· l
18.20 p = }}__ [1 - j1 - L 2 TTL [2 l 18.21 p = -[2(a TTab
+
b) - !].
Hint. Use the formula P(A u B) = P(A) + P(b) - P(AB), where the event A means that the needle crosses the side a and B that it crosses side b.
19.
THE NORMAL DISTRIBUTION LAW IN THE PLANE AND IN SPACE. THE MULTIDIMENSIONAL NORMAL DISTRIBUTION
=
~ [ 1 + ci>(x ~xx)] [1 + ci>(Y E~
19.1
F(x, y)
19.2
f(x, y) = 182TTV3
1
x exp { 19.3
(a) c
= 1.39,
(b)
2 [(x - 26) 2 196
-3
jjk!JII =
+
(x - 26)(y 182
0.132 -0.026
II
+
-0.02611 , 0.105
12)
+
(c) Sez
(y
+
12) 2 ] } 169 ·
= 0.162.
1 = 0.00560. 2TTe 3 2
v
19.4 !(2, 2) =
19.5
y)] ·
1 f(x, y, z) = 27TV 230 7T
x exp {/max
=
27T
2 ~ 0 (39x
1 V 2307T
=
2
+ 36y 2 + 26z 2
-
44xy
+ 36xz - 38yz)};
0.00595.
19.6
(a)
llkti 1 l
=
2 -1 0 0 0
-1 2 -1 0 0
0 -1 2 -1 0
0 0 -1 2 -1
0 0 0 -1 2
0
0
0
0
0
0 0 0 0
19.7 10 0 2 0
0 10 0 2
2 0 10 0
0 2 0 10
19.10
fp2R2/E2 e-P 2 (d 2 /E 2 l Jo
P(R) =
(2 d'\;-)
10 ~ e-t dt,
where l 0 (x) is the Bessel function of an imaginary argument.
1
:2'
(a) P(X < Y) =
19.12
p =
19.13
(a) Pcirc = 1 - e-P
~
[
ID( c ~x x) - ID(a ~x x) ][ID(d ~/)
19.14 P =
o.s( 1 -
= 0.2035,
exp {- p 2
19.15
P
=
2: ( exp {- p 2
19.16
A
=
4dk, ex = Ex
19.17 P =
2
1D(V~l7r)~D(V~07T)
(c) Pun=
~D(~)~D(~)
19.18 Psat = 1 - q 3
-
1
(b) Psq =
iDe~/)]
-
['(,;;)]2 2
=
0.0335.
= 0.2030,
= 0.1411.
~:}).
~}
J
0) =
19.11
+
-
exp {- p2
2p2d2 3£; ,
fJ
~D(;J~D(;J
=
;~}).
Ez
J
1
2p2k"
+ 3 £~
since ex > Ex,
+ Ps) 2 +
3q 2(1 - q) - 3q[(p2
fJ
•
> Ez.
2p2p4] - P~ = 0.379,
Pexcel = P~ + 3p;(pa + P4) + 3p2p5 = 0.007, where P2 = 0.196, Ps = 0.198, P4 = 0.148, p5 = 0.055, q = 0.403.
19.19 p =
~ [iD(k)]2.
19.20 p =
~D(~)
- ~~~
exp {- P2 ;:} -
[
1D(2~)
r.
19.21
19.22 19.23 19.24 19.25
25(x1 16(x1
2: n
; =1
-
-
(x1 -
+ 36(x1 2 2) + 5x~ + 10) 2
Xt-1) 2 =
-
10)(x2 - 10)
16(x 3
+
2) 2
+
+
8(x1
[ 5 - -log n - 2 (27T) ] · loge 2
The problem has no solution for n > 12.
= 7484.6. + 2) = 805.1.
36(x 2 - 10) 2 -
2)(x 3
20. DISTRIBUTION LAWS OF SUBSYSTEMS OF CONTINUOUS RANDOM VARIABLES AND CONDITIONAL DISTRIBUTION LAWS

20.1
1 f(x, y, z)
= { (a2
C1 ~ z ~ c2,
- a1)(b2 - b1)(c2 - c1)
outside the parallelepiped;
0
(b
f(y, z) = {
b ;(
2 -
1 c2 - c1
)
for b1
o
lxl
~z~
c2,
for c1 ~ z ~ c2, outside the interval. The random variables X, Y, Z are independent.
_1_ f(z) = { c2 - c1
For
b2 , c1
outside the triangle;
0
20.2
~y~
IYl
~ R,
~ R, 2 ---x--;;2 2 V~R"'
fx(x) = Fx(x)
7TR2
=
Fy(y) =
,
J
~ [arcsin~
+
~
1 - ;:]
+
~·
~
+
i j1- ~:]
+
~;
for
lxl
for
lxl =
for
lxl
[arcsini
X and Yare independent, since f(x, y) i= fx(x)fy(y).
20.3
f(y I x)
1
2VR~- x
~ ~ [S(y +
2
+
R)
S(y
~
R)]
< R, R,
> R,
ll(z) being the ll-function. 20.4 X and Y are uncorrelated.
20.5
(a) f(x, y)
( b) f() x x
(c)
f(y
~ ~
inside the square,
{
outside the square. Inside the square;
= avl2-
I x) =
a2
2lxl ' Jy'() = avl2- 2IYI. Y a2 , 1
aV2 -
2lxl'
f(x
I y) =
1 aV2 -
21i
a2 (d) D[X] = D[Y) = 12 ,
(e) the random variables X and Yare dependent but uncorrelated.
lzl
20.6 < R.
fz(z) = [3(R
2- z2)/4R3] for lzl
20.7
k = 4.fx(x) f(y I x) = fy(y), M[X]
20.8
M[X]
=
D[X] =
< R, f(x, y
I z)
=
1/[1r(R
2- z2)]
for
2xe-x 2 (x ~ O),fy(y) = 2ye-Y 2 (y ~ O),f(x I y) =fAx), M[ Y] = v:;;./2, D[X] = D[ Y] = 1 - 7T/4, kxy = 0.
= =
J_"'"'
M[X I y]fy(y) dy;
J_"'"'
D[X 1 y]fy(y) dy
+ J_"',
{x- M[X 1 y]} 2fy(y) dy.
20.9 Since M[X] = 5, M[Y] = -2, ax= a, ay = 2a, r = -0.8, it follows that: (a) M[XI y] = 5- 0.8/2(y + 2) = 4.2- 0.4y, M[YI x] = - 2 - 0.8 x 2(x - 5) = 6 - 1.6x, axlY = 0.6a, ay 1 x = 1.2a, (b) fx(x) =
2 . 11 _ exp { - (x 2- 25) } av 27T a
(c) f(x
I y) =
f( y
I x) =
20.10
,
~ 27T exp {- (x + ~% ~ . a +
1.6x - 6) 2 } 2 2.88a
Aj~exp{ -(a- !:)x
fx(x) =
•
4 · 2) 2 } •
0.6a
1 exp { - (y 1.2aV 27T
2 .11 - exp { - (y 8+ 22) } 2av 27T a
fy(y) =
2},
•
Aj~exp{ -(c-
fy(y) =
For the independence of X and Y it is necessary that
V ac exp { _ b 2 4
1rA
(xc
2
_
4xy b
+
y 2 )} = 1 .
This condition is satisfied for b = 0. In this case A 20.11
k
f(y
20.12
3V3 k
=-
7T '
I x)
1 f(x 18 '
J;
=
I y)
= -XY
a =
V ac/7T.
2
= - e- 2m. a2
b + V a 2 + b2 b 2 a + V a 2 + b2 1 ..;--a + 6a In b + 3 a 2 + b2 •
21.20
6 b In
21.21
-2b a + b 2 + 3 a 2· ab2 In a + V ba 2 + b2 + a 2 3-a 22b 2 V 3
2
21.23
0.316g.
+ 225b 2 -
21.25
M[Z] = Sa; D[Z] = 100a2
21.26
M[Y]
21.27
M[ Y] = exp {- x(l - cos b)} cos (x sin b),
=
D[ Y] =
Ex_, D[Y] pV7T
~
[1
=
E~ p
+ exp {- x(l
- cos 2b)} cos (x sin 2b)] - y2.
21.29
M[Z] = 2a
21.30
M[Z] = 5(V3 - 1), D[Z] = 7600.
21.31
rxy
21.32
M[Z] = 0, D[Z] = 2L'. 2a 2 .
21.34
r
=
(b) 22.0 sq. m.,
E V7T, - D[Z] --7T 2p
(c) 10 sq. m. =
8) + -4p£22 (4- 7T).
a 2( 3 - 7T
n!!/V(2n- 2)!!, if n is even; rxy
a( 1
+
e;), D[R]
=
150ab.
2
(a) 26.7 sq. m.,
=
l /3; [2 /18.
(l_l). 7T
21.28
J2
21.24
a; 2 ( 1 -
21.33
~)
=
0 if n is odd. la
7T '
(l2 - 7T2..i.).
a2
(where e is the eccentricity).
22. THE DISTRIBUTION LAWS OF FUNCTIONS OF RANDOM VARIABLES
~b)
{
=
for a> 0,
b)
(y -
for a < 0.
1 - Fx - a
22.2 22.3
fy(y)
=
fx(eY)eY.
z } 1 exp { - 2aa2 fz(z) = { ~v 211"az
for z > 0, for z.:;; 0.
22.4
:-v;
2} { -p 2(y) 2p --exp
fy(y) = {
for y ;. 0,
E
for y < 0. 22.5 11" 1ry sin { 0 fy(y) =
for
1
2 .:;;
for y
; arctan e.
22.6 for 0 < v .:;; a3 ,
•
for v < 0 or v > a 3 • 22.7 fx(x)
1
l
=; [2 + x 2
(-CXJ.:;; x.:;; CXJ).
22.8 f~(y)
1
= { 7rY a2 - y2 0
22.9
(a) fy(y)
= 31r[l +
for IYl < a (the arcsine law)
for IYI ;;>a. 1 (1 _ y)sl2](l _ y)21s'
(b) if a > 0, then v~
fy(y)
= { ~(a + y)Vy
for y ;. 0, for y < 0;
if a < 0, then
{ fy(y) =
~(a
-Va for + y)Vy
y .:;; 0,
for y > 0; (c)
for
IYI.:;; ~·
for
IYI
h(y)- { :
> ::.
2
22.10
for even n, 2ay 0,
fy(y) = { mr(a2
0 22.11
(a) fy(y) = Iyle-Y 2
(
for·y:;;;: 0.
-oo :;;;: y :;;;: oo),
for y ~ 0, " 0 .or y < .
2ye-Y 2
(b) fy{y) = { 0
22.12
rck + 1.5) cos 2 k+.1. y --::::'---:__ Jy(y) = { v 7T rck + 1) 0
22.13
(a) fy(y) =
v1277 exp {
- y2} 2 ,
for
IYI :; ;: ~·
for
IYI
> ~-
1 { (y -a~y)2} · (b) fy(y) = ay V 2 277 exp -
22.14 fu(y) = {
r ) = 22.16 J.(z 22.17
.11 -exp { a;v 27T
(a) J.(z)
(b) f.(z)
22.18
= ("'! )a
1 = -2
X
1 for 0 :;;;: y :;;;: 1, 0 for y < 0 or y > 1 .
Z2} , -p a.
!(x, : _) dx - Jo ! !(x, !.) dx, X
exp { -lzl},
- ro X
(c) f,(z)
(d) f,(z) =
V~77 exp { -~}·
(a) f.(z) =
Sa'" yf(zy, y) dy -
(b) f,(z) = (1
J.(z)
f"'
X
= -
1 - exp
2axay
{-J:.L}, GxGy
yf(zy, y) dy,
2z
+
z2)2'
r(~) (c) f.(z) =
h 2 ax+ 2 ay.2 were a.=
2
r(~)-v;
(1
+ z 2)- 2;
~ {:~In IPI
f,CJii
for 1,81
~
1,
for 1,81 >1. 22.22
f(t, r:p) = 2 7T
v 1t -
r~y
t 2 (1 - rxy s:n 2rp)}· 2(1 - rxY)
exp {
For rxy = 0, :sin ex, where ,\ = 1 + 2D1 sine tan e cos ex + Di sin 2 e tan 2 e D'f_sin 2 e Disin 2 e
j
m-
vm-
V ENTROPY AND INFORMATION 27.
THE ENTROPY OF RANDOM EVENTS AND VARIABLES 27.1 H1
Since -
8
3
[8
5 log 5 H 2 = 15 Iog 4 - 15 15 log 3 - log 15 15 -
= -0.733 < 0, the outcome of the experiment for the first urn is more certain.
5 15
3]
15
H1 = -
3 ~ 3 log 3 ~ 3 -
3 ~3) log ( 1 - 3 ~ 3)
(1 -
= 0.297 decimal unit,
3v'3) ( 3v'3) ( 3v'3 H2 = - 3v'3 477 log 477 - 1 - 477 log 1 - 477 = 0.295 decimal unit; that is, the uncertainties are practically the same. 27.4
(a) H= -cos 2 ~log.cos 2 ~- sin 2 ~log.sin 2 ~, n n n n
(b) n = 4.
27.5 Since P(X = k) = p(1 - p)^{k-1}, H[X] = -[p log p + (1 - p) log (1 - p)]/p. When p decreases from 1 to 0, the entropy increases monotonically from 0 to infinity.
27.6 (a) H[X] = -n[p log p + (1 - p) log (1 - p)], (b) H[X] = 1.5 log 2.
27.7
(a) log. (d - c),
27.8
H[X] =log. (0.5V~).
27.9
(c) log. (ejc).
H[X I y] = Hy[X] = log. (ax Y27Te(1 - r 2 )), H[Yix] = Hx[Y] = log.(ayY27Te(l- r 2 )),
where ax and ay are the standard deviations and r is the correlation coefficient between X andY. 27.10
= log. v' (27Te)nlkl, where
lkl
is the determinant of the covariance matrix.
+
27.11
Hx[Y] = H[Y] - H[X]
27.12
The uniform distribution law: [(x) =
{b ~ a 0
27.13
Hy[X].
b,
for a :,:; x :,:; for x < a, x > b.
The exponential distribution law: [(x) =
{M~X] exp { -M~X]} 0
~
0, for x for x < 0.
{-~}· 2m2
1 exp V27Tm 2
27.14
f(x) =
27.15
The normal law: I 1 exp{- 211k 1 ,La11(x!- M[X;])(y 1 - M[Y1])}. \ (27T)nikl !, f ex l-ex 27.17 loga 1050 and loga 30. k:' P2; = -n-·
/(x1,x2,···,xn) = 27.16
PH =
27.18
H[ Y1, Y2, ... , Yn] - H[X1, X 2, ... , Xn] =
where
I(8rp~cl8x 1 )
f'oo ··-J_
00 00
fx(X!,
X2, ... ,
Xn) loga
II (::;)1 dx1 "· dxn,
is the Jacobian of the transformation from ( Y1, Y2, ... , Yn) to
(X1, X2, ... , Xn).
27.19 (a) The logarithm of the absolute value of the determinant ial
Coded notations
0000100 0000011 0000010 00000011 00000010 00000001 000000001 000000000
28.13 Use the fact that the coded notation of the letter A 1 will consist of k 1 symbols. 28.14 In the absence of noise, the amount of information is the entropy of the input communication system: I = - P(A 1 ) log 2 P(A 1 ) - P(A2) log2 P(A2) = 1 binary unit. In the presence of noise I= 0.919 binary unit; it decreases by an amount equal to the magnitude of the average conditional entropy, namely - P(a1)[P(A1 I a1) log2 P(A1 I a1) + P(A2 I a1) log2 P(A2 I a1)] - P(a2)[P(A1 I a2) log2 P(A1 I a2) + P(A2 I a2) log2 P(A2 I a2)], where
28.15 If the noise is absent, I = H_1 = log_2 m; when the noise is present, I = H_1 - H_2 = log_2 m + p log_2 p + q log_2 [q/(m - 1)].
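The formula in 28.15 also reproduces the numeric value quoted in 28.14 when m = 2 and p = 0.99 (the 0.99 is inferred from the 0.919 figure, since the problem's error probability is not stated in this chunk):

```python
import math

# 28.15: m equiprobable signals; a symbol survives with probability p and
# turns into each of the other m-1 symbols with probability q/(m-1).
def channel_information(m, p):
    q = 1 - p
    return math.log2(m) + p * math.log2(p) + q * math.log2(q / (m - 1))

print(round(channel_information(2, 0.99), 3))  # 0.919 (cf. answer 28.14)
```

A fully noisy symmetric binary channel (p = 1/2) carries zero information, which the same formula confirms.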
28.16 I = log_2 m + Sum_i Sum_j P(a_j) P(A_i | a_j) log_2 P(A_i | a_j).
VI THE LIMIT THEOREMS

29. THE LAW OF LARGE NUMBERS

29.1 (a) P(|X - x-bar| >= 4e) <= 0.1375, (b) P(|X - x-bar| >= 3 sigma) <= 1/9.
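Chebyshev's bound in 29.1(b) can be illustrated empirically; the exponential distribution below is only an example (its mean and standard deviation are both 1, so 3 sigma around the mean is the interval being tested):

```python
import random

# Empirical frequency of |X - m| >= 3*sigma for X ~ Exp(1), compared with
# the distribution-free Chebyshev bound 1/9.
random.seed(7)
n = 200_000
hits = sum(1 for _ in range(n) if abs(random.expovariate(1.0) - 1.0) >= 3.0)
freq = hits / n
print(freq <= 1 / 9)  # True (the true value e^-4 is about 0.018)
```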
29.2 It is proved in the same manner as one proves Chebyshev's inequality. For the proof make use of the obvious inequality
In
~
f(x) dx
L~
eex-t 2 f(x) dx,
where Q is the set of all x satisfying the condition t 2 + In J x> e
29.3 Using arguments analogous to those in the proof of the Chebyshev inequality, one obtains a chain of inequalities P(X;;.
29.4
e)~
1
---a;;
leax
e
~
dF(e"x)
e-•eM[e•x].
eax~eae
Use the Chebyshev inequality and note that
x=
m + 1, and M[X 2] =
(m + 1)(m + 2), hence,
P(O < X < 2(m
29.5
+
1)) = P'(l X -
xl
< m
+
1) > 1 -
(mDl~]) 2
Denoting by Xn the random number of occurrences of the event A in
n experiments, we have P(l Xn - 5001 < 100) > 1 - (250/100 2 ) = 0.975. Conse-
quently, all questions may be answered "yes." 29.6 The random variables XI< are mutually independent and have equal expectations x~< = 0 and variances D[X~ lim n(n ~ 1) n-oo 2n
=
!. 2
29.10 Applicable, since the inequality
0 ≤ D[(1/n) Σ_{k=1}^{n} X_k] = (1/n²) Σ_{k=1}^{n} D[X_k] ≤ C/n,
where C is the upper bound of D[X_k] for all k = 1, 2, ..., n, holds; the required relation follows from it.
29.11 To prove this, it suffices to estimate D[(1/n) Σ_{k=1}^{n} X_k]. Replacing all the a_k by their maximal value b, we obtain
D[(1/n) Σ_{k=1}^{n} X_k] < 3b²/n,
from which it follows immediately that lim_{n→∞} D[(1/n) Σ_{k=1}^{n} X_k] = 0.
29.12 Applicable, since all the assumptions of Khinchin's theorem are satisfied.
29.13 Consider
D[Z_n] = D[(1/n) Σ_{j=1}^{n} X_j] = (1/n²) |Σ_{i=1}^{n} Σ_{j=1}^{n} r_ij σ_i σ_j| ≤ (σ²/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} |r_ij|,
where σ_j is the standard deviation of the random variable X_j and σ is the largest of the σ_j. Since r_ij → 0 for |i - j| → ∞, then for any ε > 0 one may indicate an N such that the inequality |r_ij| < ε holds for all |i - j| > N. This means that in the matrix ||r_ij||, containing n² elements, at most Nn elements exceed ε (these elements are replaced by unity) and the rest are less than ε. From the preceding facts, we infer the inequality
(1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} |r_ij| < Nn/n² + (1/n²)(n² - Nn)ε = ε + (N/n)(1 - ε);
therefore, lim_{n→∞} D[Z_n] = 0, which proves the theorem.
29.14 The law of large numbers cannot be applied, since the series
(12/π²) Σ_{k=1}^{∞} (-1)^{k-1}/k,
defining M[X_1], is not absolutely convergent.
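The estimate in 29.13 can be illustrated numerically. The sketch below is our own: it takes the particular correlation model r_ij = ρ^|i-j| (which does vanish as |i - j| → ∞, as the theorem requires) and computes the exact variance of Z_n, showing that it tends to zero.

```python
def var_of_mean(n, rho, sigma2=1.0):
    """Exact variance of Z_n = (1/n) * sum(X_k) when
    Cov(X_i, X_j) = sigma2 * rho**|i - j| (e.g. a stationary AR(1) sequence),
    illustrating the hypothesis of Problem 29.13."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += sigma2 * rho ** abs(i - j)
    return total / n**2
```

For ρ = 0.8 the variance behaves roughly like 9/n, so D[Z_n] → 0 even though the summands are strongly dependent.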
30. THE DE MOIVRE-LAPLACE AND LYAPUNOV THEOREMS

30.1 P(0.2 ≤ m/n < 0.4) = 0.97.
30.2 P(70 ≤ m < 86) = 0.927.
30.3 (a) P(m > 20) = 0.5; (b) P(m < 28) = 0.9772; (c) P(14 ≤ m < 26) = 0.8664.
30.4 In the limiting equality of the de Moivre-Laplace theorem, set b = -a = ε√(n/(pq)) and then make use of the integral representation of the function Φ(x).
30.5 Because the probability of the event is unknown, the variance of the number of occurrences of the event should be taken as maximal; that is, pq = 0.25. In this case: (a) n ≥ 250,000; (b) n = 16,600.
30.6 In the problems in which the upper limit of the permitted number of occurrences is equal to the number of experiments performed, b turns out to be so large that Φ(b) ≈ 1.

For |τ| ≤ 1 we have
Kx(τ) = (1 - |τ|) M[X²] = (1 - |τ|) ∫_0^∞ x² (x^λ/Γ(λ + 1)) e^{-x} dx = (λ + 2)(λ + 1)(1 - |τ|).
Consequently,
Kx(τ) = (λ + 2)(λ + 1)(1 - |τ|) for |τ| ≤ 1, and Kx(τ) = 0 for |τ| > 1.
31.7 Letting θ_1 = Θ(t_1), θ_2 = Θ(t_1 + τ), for the conditional distribution law we get
f(θ_2 | θ_1 = 5°) = f(θ_1, θ_2)/f(θ_1),
where f(θ_1, θ_2) is the normal distribution law of a system of random variables with the correlation matrix
|| Kθ(0)  Kθ(τ) ||
|| Kθ(τ)  Kθ(0) ||.
Substituting the data from the assumption of the problem, we get
P = ∫_{5°}^{∞} f(θ_2 | θ_1 = 5°) dθ_2 = (1/2)[1 - Φ(2.68)] = 0.0037.
31.8 Denoting the heel angles at the instants t and t + τ by θ_1 and θ_2, respectively, and their distribution law by f(θ_1, θ_2), for the conditional distribution law of the heel angle at the instant of the second measurement we get
f(θ_2 | θ_1) = f(θ_1, θ_2) / ∫_{-∞}^{∞} f(θ_1, θ_2) dθ_2.
31.9 Denoting X_1 = Θ(t), X_2 = Θ'(t), X_3 = Θ(t + τ_0), the correlation matrix of the system X_1, X_2, X_3 becomes
|| Kθ(0)     0          Kθ(τ_0)  ||
|| 0         -K''θ(0)   -K'θ(τ_0) ||
|| Kθ(τ_0)   -K'θ(τ_0)  Kθ(0)    ||,
which after numerical substitution becomes
|| 36           0                 36e^{-0.5} ||
|| 0            36(0.25² + 1.57²) 0          ||
|| 36e^{-0.5}   0                 36         ||.
Determining the conditional distribution law according to
f(x_3 | x_1 = 2, x_2 > 0) = ∫_0^∞ f(x_1, x_2, x_3) dx_2 / ∫_{-∞}^{∞} ∫_0^∞ f(x_1, x_2, x_3) dx_2 dx_3, evaluated at x_1 = 2,
we obtain for the required probability
P = ∫_{-10}^{10} f(x_3 | x_1 = 2, x_2 > 0) dx_3 = 0.958.
31.10 ȳ(t) = a(t)x̄(t) + b(t); Ky(t_1, t_2) = a*(t_1)a(t_2)Kx(t_1, t_2).
31.11 f(x) dx = ∫∫_{x ≤ a cos θ < x + dx} f_a(a) f_θ(θ) da dθ; f(x) = (1/(σ√(2π))) exp{-x²/(2σ²)}.
31.12 The probability that the interval T will lie between τ and τ + dτ is the probability that there will be n points in the interval (0, τ) and one point in the interval (τ, τ + dτ). Since by assumption these events are independent, we have
P(τ ≤ T ≤ τ + dτ) = ((λτ)^n/n!) e^{-λτ} λ dτ;
that is,
f(τ) = (λ^{n+1} τ^n / n!) e^{-λτ}.
31.13 f(u) = λ e^{-λu} (u ≥ 0).
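The density derived in 31.12 can be confirmed by simulation. The sketch below is our illustration (function names are ours): the time to the (n + 1)-th point of a Poisson flow is generated as a sum of n + 1 exponential interarrival times, and its sample mean is compared with the Erlang mean (n + 1)/λ.

```python
import math
import random

def erlang_density(tau, lam, n):
    """f(tau) = lam**(n+1) * tau**n * exp(-lam*tau) / n!  (Problem 31.12)."""
    return lam ** (n + 1) * tau ** n * math.exp(-lam * tau) / math.factorial(n)

def sample_gap(lam, n, rng):
    """Time up to the (n + 1)-th point of a Poisson flow of intensity lam:
    a sum of n + 1 independent exponential interarrival times."""
    return sum(rng.expovariate(lam) for _ in range(n + 1))
```

A numerical integration of erlang_density over (0, ∞) should give 1, and the simulated mean should approach (n + 1)/λ.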
For τ ≥ 0 we shall have
R_yz(τ) = 2π(k_1k_2c)² e^{-h_2τ} [2ω_2(h_1 + h_2) cos ω_2τ - (ω_2² - ω_1² - (h_1 + h_2)²) sin ω_2τ] / {ω_2[(ω_2 - ω_1)² + (h_1 + h_2)²][(ω_2 + ω_1)² + (h_1 + h_2)²]},
and for τ ≤ 0,
R_yz(τ) = 2π(k_1k_2c)² e^{h_1τ} [2ω_1(h_1 + h_2) cos ω_1τ + (ω_2² - ω_1² + (h_1 + h_2)²) sin ω_1τ] / {ω_1[(ω_2 - ω_1)² + (h_1 + h_2)²][(ω_2 + ω_1)² + (h_1 + h_2)²]}.

36. OPTIMAL DYNAMICAL SYSTEMS
36.1 Determining Kx(τ) as the correlation function of a sum of correlated random functions and applying a Fourier inversion to the resulting equality, we get
Sx(ω) = Su(ω) + Sv(ω) + Suv(ω) + Svu(ω).
36.2 Sxz(ω) = iω[Su(ω) + Svu(ω)].
36.3 L(iω) = iω e^{-iωτ}; D[ε(t)] = 0.
36.4
L(iω) = b²(ω² + β²)² e^{-iωτ} / [(ω - iα)²(ω - iβ)²]
- (a²/2m){(m - in)[(m - in - iα)/(m - in + iβ)]² e^{-(n+im)τ}/(ω + m + in) + (m + in)[(m + in + iα)/(m + in - iβ)]² e^{-(n-im)τ}/(ω - m + in)},
where
m = √[(√(μ² + ν²) - μ)/2],  n = √[(√(μ² + ν²) + μ)/2],
μ = (a²β² + b²α²)/(a² + b²),  ν = ab(β² - α²)/(a² + b²).
36.5 L(iω) = [a²(α + β)(ω - iβ)] / [c²(α + d)(ω - id)], where
c = √(a² + b²),  d = (1/c)√(a²β² + b²α²).
36.6 D[ε(t)] = ∫_{-∞}^{∞} |N(iω)|² Su(ω) dω - ∫_{-∞}^{∞} |L(iω)|² [Su(ω) + Sv(ω) + Suv(ω) + Svu(ω)] dω.
36.7
L(iω) = (ia²/(2mc²)) [(m + in)(ω + m - in) - (-m + in)(ω - m - in)] / {([m + i(n + n_1)]² - m_1²)(ω - m_1 - in_1)(ω + m_1 - in_1)},
where
n = √(√(β⁴ + γ⁴) - β²),  n_1 = √(√(β⁴ + γ⁴ + a²/c²) - β²),
A = [m² - m_1² - (n + n_1)²] + 2im(n + n_1),
and
D[ε(t)] = (πa⁴/(2m²c²)) [|A|²/(2n) + Im(A/(m + in))].
36.8 L(iω) = e^{-ατ}.
36.9 L(iω) = e^{-ατ}[iωτ + (1 + ατ)].
36.10 L(iω) = (e^{-ατ}/β){ω[cos βτ - (1 - α/β) sin βτ] + i[(2β - α) sin βτ - α cos βτ]};
D[ε(τ)] = (σ²(α² + β²)/β²){1 - e^{-2ατ}[cos βτ + (1 - α/β) sin βτ]²}.
36.11 L(iω) = [a²(α + β) e^{-ατ}(ω - iβ)] / [c²(d + α)(ω - id)].
36.12 L(iω) = (c²/[a²(ω² + b²)]) {(ω² + β²) e^{-iωτ_0} - [(b - β)/(a - b)] e^{-bτ_0} (ω - iα)(ω - iβ)},
where
a² = (1/π)(ασ_1² + βσ_2²),  b² = (αβ/(πa²))(βσ_1² + ασ_2²),  c² = ασ_2²/π.
36.13 L(iω) = e^{-ατ}[cos βτ + (α/β) sin βτ + i(ω/β) sin βτ].
36.14
L(iω) = (e^{-ατ_0}/(2β(ω - iγ))) {e^{-iβτ_0}[β - i(α - γ)](ω - β - iα) + e^{iβτ_0}[β + i(α - γ)](ω + β - iα)};
D[ε(t)] = 2a²n_1²π [|A|²(α² + β²)/(2α) + Im(A/(β - iα))],
where A = (1/(2β)) e^{-(α+iβ)τ_0}[β - i(α - γ)].

For the chain with p_jj = j/m the eigenvalues are λ_k = k/m, and
p_ii^{(n)} = (i/m)^n;  p_ik^{(n)} = 0 for i > k;  p_ik^{(n)} = (k/m)^n - ((k - 1)/m)^n for i < k.
38.10 The state Q_j means that j cylinders (j = 0, 1, ..., m) remained on the segment of length L. The probability that the ball hits a cylinder is ja, where
a = 2(r + R)/L;
p_{j,j-1} = ja, p_{jj} = 1 - ja, and p_{ij} = 0 otherwise (i, j = 0, 1, ..., m). The eigenvalues are λ_k = 1 - ka (k = 0, 1, ..., m), and p_ik^{(n)} = 0 for i < k. For i ≥ k, Perron's formula yields
p_ik^{(n)} = (i!/k!) Σ_{j=k}^{i} (-1)^{j-k} (1 - ja)^n / [(j - k)!(i - j)!] = C_i^k Σ_{v=0}^{i-k} (-1)^v C_{i-k}^v [1 - (k + v)a]^n.
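The closed form for 38.10 can be verified against a direct matrix power. The sketch below is our illustration (function names ours): it builds the transition matrix of the cylinder chain for a small m and compares the n-step probabilities.

```python
from math import comb

def p_n(i, k, n, a):
    """Closed-form n-step probability for the cylinder chain of Problem 38.10
    (from i remaining cylinders to k <= i after n balls)."""
    if k > i:
        return 0.0
    return comb(i, k) * sum((-1) ** v * comb(i - k, v) * (1 - (k + v) * a) ** n
                            for v in range(i - k + 1))

def matrix_power_entry(i, k, n, a, m):
    """Same probability by n-fold multiplication with the transition matrix
    p[j][j-1] = j*a, p[j][j] = 1 - j*a on states 0..m."""
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    for j in range(m + 1):
        if j > 0:
            P[j][j - 1] = j * a
        P[j][j] = 1 - j * a
    row = [1.0 if s == i else 0.0 for s in range(m + 1)]  # start in state i
    for _ in range(n):
        row = [sum(row[s] * P[s][t] for s in range(m + 1)) for t in range(m + 1)]
    return row[k]
```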
38.11 State Q_j (j = 1, 2, ..., m) means that the selected points are located in j parts of the region D; p_jj = j/m, p_{j,j+1} = 1 - j/m. The eigenvalues are λ_r = r/m (r = 1, 2, ..., m). Solving 𝒫H = HΛ and H⁻¹𝒫 = ΛH⁻¹ for the matrices H and H⁻¹ of eigenvectors and using 𝒫ⁿ = HΛⁿH⁻¹, one finds p_ik^{(n)} = 0 for i > k and, for i ≤ k,
p_ik^{(n)} = C_{m-i}^{k-i} Σ_{r=0}^{k-i} (-1)^{k-i-r} C_{k-i}^{r} ((r + i)/m)^n
(for another solution see Problem 38.10).
38.12 Set ε = e^{2πi/m}. Then
λ_k = Σ_{r=1}^{m} a_r ε^{(r-1)(k-1)} (k = 1, 2, ..., m)
and
p_ij^{(n)} = (1/m) Σ_{k=1}^{m} λ_k^n ε^{-(k-1)(j-i)}.
38.13 p_j^{(∞)} = 1/m (j = 1, 2, ..., m).
38.17 Q_j describes the state in which the urn contains j white balls;
p_jj = 2j(m - j)/m²,  p_{j,j+1} = (m - j)²/m²,  p_{j,j-1} = j²/m²  (j = 0, 1, ..., m).
The chain is irreducible and nonperiodic, so p_ik^{(n)} → p_k^{(∞)}. From the system
Σ_{j=0}^{m} p_j p_jk = p_k (k = 0, 1, ..., m),
we get
p_ik^{(∞)} = p_k = (C_m^k)² / C_{2m}^m (k = 0, 1, ..., m).
38.18 Q_j describes the state in which the particle is located at the midpoint of the jth interval of the segment; p_11 = q, p_mm = p, p_{j,j+1} = p, p_{j,j-1} = q (j = 1, 2, ..., m). The chain is irreducible and nonperiodic. The probabilities p_k^{(∞)} can be determined from the system
q p_1^{(∞)} + q p_2^{(∞)} = p_1^{(∞)},
p p_{m-1}^{(∞)} + p p_m^{(∞)} = p_m^{(∞)},
p p_{k-1}^{(∞)} + q p_{k+1}^{(∞)} = p_k^{(∞)} (k = 2, 3, ..., m - 1).
Then p_k^{(∞)} = (p/q)^{k-1} p_1^{(∞)}. For p = q, p_k^{(∞)} = 1/m, and for p ≠ q,
p_k^{(∞)} = [1 - (p/q)] (p/q)^{k-1} / [1 - (p/q)^m] (k = 1, 2, ..., m).
The probabilities p_k^{(∞)} can also be obtained from p_ik^{(n)} as n → ∞ (see Problem 38.14).
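The limiting probabilities of 38.18 are easy to verify by checking invariance under one step of the chain. The sketch below is ours (the function name is hypothetical).

```python
def walk_stationary(m, p):
    """Limiting probabilities for the walk of Problem 38.18:
    p_k = (p/q)**(k-1) * (1 - p/q) / (1 - (p/q)**m) for p != q, else 1/m."""
    q = 1 - p
    if abs(p - q) < 1e-12:
        return [1.0 / m] * m
    r = p / q
    norm = (1 - r) / (1 - r ** m)
    return [norm * r ** (k - 1) for k in range(1, m + 1)]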
38.19 The chain is irreducible and nonperiodic. From the system
Σ_{i=1}^{∞} u_i p_ij = u_j (j = 1, 2, ...),
it follows that u_j = u_1/j!. Since Σ_{i=1}^{∞} i/(i + 1)! = 1, there is a nonzero solution. We also have
Σ_{j=1}^{∞} |u_j| = Σ_{j=1}^{∞} u_1/j! = u_1(e - 1) < ∞;
that is, the chain is ergodic, and
p_j^{(∞)} = 1/[(e - 1) j!] (j = 1, 2, ...).
38.20 The chain is irreducible and nonperiodic. From the system
Σ_{i=1}^{∞} u_i p_ij = u_j (j = 1, 2, ...),
it follows that u_j = (p/q)^{j-1} u_1 and
Σ_{j=1}^{∞} |u_j| = u_1 / (1 - p/q) < ∞;
consequently, the chain is ergodic; that is,
p_j^{(∞)} = (1 - p/q)(p/q)^{j-1} (j = 1, 2, ...).
38.21 The chain is irreducible and nonperiodic. From the system
Σ_{i=1}^{∞} u_i p_ij = u_j (j = 1, 2, ...),
it follows that
u_j = u_1 / (2(j - 1)) (j = 2, 3, ...).
The series
Σ_{j=1}^{∞} |u_j| = u_1 [1 + Σ_{j=2}^{∞} 1/(2(j - 1))]
is divergent; that is, the chain is nonergodic. This is a null-regular chain, for which p_ik^{(∞)} = 0 (i, k = 1, 2, ...).
38.22 Q_j means that the particle is located at the point with coordinate jΔ (j = 1, 2, ...); p_11 = 1 - α, p_{j,j+1} = α, p_{j+1,j} = β, p_jj = 1 - α - β (j = 2, 3, ...). The chain is irreducible and nonperiodic. From the system Σ_{i=1}^{∞} u_i p_ij = u_j it follows that u_k = (α/β)^{k-1} u_1 (k = 1, 2, ...). For α/β < 1, the series Σ u_k converges and, consequently, the chain is ergodic; that is,
p_k^{(∞)} = (1 - α/β)(α/β)^{k-1} (k = 1, 2, ...).
If α/β ≥ 1, the Markov chain is null-regular; p_jk^{(∞)} = 0 (j, k = 1, 2, ...).
38.23 Since p_jk^{(∞)} = 0 for k = 1, 2, ..., s, we have
p*_j = Σ_{k=s+1}^{m} p_jk^{(∞)} = 1 (j = s + 1, s + 2, ..., m).
38.24 From the system of equations for the limiting probabilities we obtain
p*_j = β / [1 - α(m - r)] (j = r + 1, r + 2, ..., m).
38.25 Q_j represents the state in which player A has j dollars (j = 0, 1, ..., m); p_00 = 1, p_mm = 1, p_{j,j+1} = p, p_{j,j-1} = q (j = 1, 2, ..., m - 1). The probabilities p*_j = p_{j0}^{(∞)} of ruin of player A are determined from the system
p*_1 = p p*_2 + q,
p*_{m-1} = q p*_{m-2},
p*_j = q p*_{j-1} + p p*_{j+1} (j = 2, 3, ..., m - 2).
Setting p*_j = a - b(q/p)^j, we find for p ≠ q that
p*_j = [1 - (p/q)^{m-j}] / [1 - (p/q)^m],
and for p = q that p*_j = 1 - j/m (j = 1, 2, ..., m - 1). The probabilities of ruin of B are p*_j(B) = 1 - p*_j(A). Another solution of this problem may be obtained from the expression for p_{j0}^{(n)} as n → ∞ (see Example 38.2).
38.26 The generating function G(t, u) satisfies the equation
∂G(t, u)/∂t = (λu - μ)(u - 1) ∂G(t, u)/∂u
with the initial condition G(0, u) = u. It has the solution
G(t, u) = {μK + u[1 - (λ + μ)K]} / (1 - uλK),
where K = (1 - e^{(λ-μ)t}) / (μ - λ e^{(λ-μ)t}).
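The ruin formula of 38.25 can be verified by checking that it satisfies the defining recursion. The sketch below is our illustration (the function name is ours).

```python
def ruin_probability(j, m, p):
    """Probability that player A, holding j of the m dollars in play and
    winning each game with probability p, is eventually ruined (Problem 38.25):
    [1 - (p/q)**(m-j)] / [1 - (p/q)**m] for p != q, and 1 - j/m for p = q."""
    q = 1 - p
    if abs(p - q) < 1e-12:
        return 1 - j / m
    r = p / q
    return (1 - r ** (m - j)) / (1 - r ** m)
```

Both closed forms satisfy p*_j = q p*_{j-1} + p p*_{j+1} with the boundary values p*_0 = 1 and p*_m = 0, which is what the test checks.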
D[K̃θ(2.09)] = 5.35 grad⁴ and D[K̃θ(16.72)] = 2.92 grad⁴; the corresponding standard deviations are 2.41, 2.32, 2.19 and 1.71 grad².
46.12 As t increases, the quotient t_1/t converges in probability to the probability P of coincidence of the signs of the ordinates of the random functions X(t) and X(t + τ), related, for a normal process, to the normalized correlation function k(τ) by
k(τ) = cos π(1 - P),
which can be proved by integrating the two-dimensional normal distribution law of the ordinates of the random function between the proper limits.
46.13 Denoting
Z(t) = (1/2)[1 + X(t)X(t + τ)/|X(t)X(t + τ)|]
and by P the probability that the signs of X(t) and X(t + τ) coincide, we get z̄ = P and
k̃x(τ) = cos π(1 - z̃) ≈ cos π(1 - z̄) + π(z̃ - z̄) sin π(1 - z̄).
Consequently,
D[k̃(τ)] ≈ π² D[z̃] sin² π(1 - P) = π² [1 - k_x²(τ)] D[z̃],
where D[z̃] is expressed in terms of integrals of f(x_1, x_2, x_3, x_4), the distribution law of the system of normal variables X(t_1), X(t_1 + τ), X(t_2), X(t_2 + τ).
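The sign-coincidence estimate of 46.12-46.13 is easy to demonstrate by simulation. The sketch below is our own (function name ours): it draws pairs of standard normal ordinates with a prescribed correlation k, estimates the sign-coincidence probability P, and recovers k from cos π(1 - P).

```python
import math
import random

def sign_coincidence_corr(k, samples=200000, seed=7):
    """Estimate a normal process's normalized correlation from the probability
    P that two ordinates have equal signs: k = cos(pi * (1 - P))."""
    rng = random.Random(seed)
    same = 0
    for _ in range(samples):
        x = rng.gauss(0.0, 1.0)
        # y is standard normal with correlation k to x
        y = k * x + math.sqrt(1.0 - k * k) * rng.gauss(0.0, 1.0)
        if (x >= 0.0) == (y >= 0.0):
            same += 1
    p_same = same / samples
    return math.cos(math.pi * (1.0 - p_same))
```

For bivariate normals, P = 1/2 + arcsin(k)/π, so cos π(1 - P) = k exactly; the simulation should reproduce k up to sampling noise.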
46.14 Kx(τ) = g_1K_1(τ) + g_2K_2(τ) + g_3K_3(τ), where approximately
g_j = (1/σ_j²) / (1/σ_1² + 1/σ_2² + 1/σ_3²) (j = 1, 2, 3);  σ_j² = (2/T_j²) ∫_0^{T_j} (T_j - τ) K_j(τ) dτ.
For T_j exceeding considerably the damping time of Kx(τ), it is approximately true that
σ_j² = (2/T_j)(a - b/T_j),
where a = ∫_0^∞ K(τ) dτ, b = ∫_0^∞ τ K(τ) dτ, and K(τ) is a sample function.
46.15 D[K̃x(lΔ)] = (2/(m - l)²) Σ_{s=1}^{m-l-1} (m - l - s)[K_x²(sΔ) + K_x(sΔ + lΔ) K_x(sΔ - lΔ)] + (1/(m - l))[K_x²(0) + K_x²(lΔ)].
46.16 By 9 per cent.
46.17 t_{x0} = (2/T) ∫_0^T Kx(τ) dτ;  σ_j² = (2/T) ∫_0^T Kx(τ) cos(2πjτ/T) dτ (j > 0).
46.18 Since J̃ = (1/T) ∫_0^T j(t) dt, we have
D[J̃] = (2σ_j²/(αT)) [1 - (1/(αT))(1 - e^{-αT})] = σ_J̃² = (0.86·10⁻⁸)² A².
The mean error is E_J = ρ√2 · σ_J̃ = 0.58·10⁻⁸ A.
SOURCES OF TABLES REFERRED TO IN THE TEXT*
1T. The binomial coefficients C_n^m: Beyer, W., pp. 339-340; Middleton, D., 1960; Kouden, D., 1961, pp. 564-567; Volodin, B. G., et al., 1962, p. 393.
2T. The factorials n! or logarithms of factorials log n!: Barlow, P., 1962; Beyer, W., pp. 449-450; Bronstein, I., and Semendyaev, K. A., 1964; Boev, G., 1956, pp. 350-353; Kouden, D., 1961, pp. 568-569; Segal, B. I., and Semendyaev, K. A., 1962, p. 393; Unkovskii, V. A., 1953, p. 311; Volodin, B. G., et al., 1962, p. 394.
3T. Powers of integers: Beyer, W., pp. 452-453.
4T. The binomial distribution function P(d < m + 1) = P(d ≤ m) = Σ_{k=0}^{m} C_n^k p^k (1 - p)^{n-k}: Beyer, W., pp. 163-173; Kouden, D., 1961, pp. 573-578.
5T. The values of the gamma-function Γ(x) or logarithms of the gamma-function log Γ(x): Beyer, W., p. 497; Bronstein, I., and Semendyaev, K. A., 1964; Hald, A., 1952; Middleton, D., 1960; Boev, G., 1956, p. 353; Segal, B. I., and Semendyaev, K. A., 1962, pp. 353-391; Shor, Ya., 1962, p. 528.
6T. The probabilities P(m, a) = (a^m/m!) e^{-a} for a Poisson distribution: Beyer, W., pp. 175-187; Gnedenko, B. V.; Saaty, T., 1957; Boev, G., 1956, pp. 357-358; Dunin-Barkovskii, I. V., and Smirnov, N. V., 1955, pp. 492-494; Segal, B. I., and Semendyaev, K. A., 1962.
7T. The total probabilities P(k ≥ m) = e^{-a} Σ_{k=m}^{∞} a^k/k! for a Poisson distribution: Beyer, W., pp. 175-187.
8T. The Laplace function (the probability integral) for an argument expressed in terms of the standard deviation, Φ(z) = (2/√(2π)) ∫_0^z e^{-x²/2} dx: Arley, N., and Buch, K., 1950; Beyer, W., pp. 115-124; Cramer, H., 1946; Gnedenko, B. V., and Khinchin, A., 1962; Milne, W. E., 1949; Pugachev, V. S., 1965; Saaty, T., 1957; Bernstein, S., 1946, pp. 410-411.
9T. The probability density of the normal distribution φ(z) = (1/√(2π)) e^{-z²/2} for an argument expressed in standard deviations: Beyer, W., pp. 115-124; Gnedenko, B. V., p. 383.
* More complete information on the references is found in the Bibliography, which follows this section.
10T. The derivatives of the probability density of the normal distribution φ(x): φ_2(x) = φ''(x) = (x² - 1)φ(x); φ_3(x) = φ'''(x) = -(x³ - 3x)φ(x): Beyer, W., pp. 115-124.
11T. The reduced Laplace function for an argument expressed in standard deviations, Φ̂(z) = (2ρ/√π) ∫_0^z e^{-ρ²x²} dx: see 8T.
12T. The probability density of the normal distribution for an argument expressed in standard deviations, φ̂(z) = (ρ/√π) e^{-ρ²z²}: see 9T.
13T. The function ρ(z) = (2ρ/√π) ∫_0^z e^{-ρ²x²} dx - 2z(ρ/√π) e^{-ρ²z²}: see 8T, 9T.
14T. The Student distribution law
P(T < t) = [Γ((k + 1)/2) / (Γ(k/2)√(kπ))] ∫_0^t (1 + x²/k)^{-(k+1)/2} dx
x2) -