Probability and Statistics
Second Edition

Morris H. DeGroot
Carnegie-Mellon University

ADDISON-WESLEY PUBLISHING COMPANY
Reading, Massachusetts / Menlo Park, California / Don Mills, Ontario / Wokingham, England / Amsterdam / Sydney / Singapore / Tokyo / Mexico City / Bogotá / Santiago / San Juan
This book is in the Addison-Wesley Series in Statistics.
Frederick Mosteller, Consulting Editor
Library of Congress Cataloging in Publication Data
DeGroot, Morris H.
Probability and statistics.
Bibliography: p.
Includes index.
1. Probabilities. 2. Mathematical statistics. I. Title.
QA273.D35 1984    519.2    84-6269
ISBN 0-201-11366-X
Reprinted with corrections, September 1989
Copyright © 1975, 1986 by Addison-Wesley Publishing Company, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America. Published simultaneously in Canada.
To Jenny and Jeremy
Preface

This book contains enough material for a one-year course in probability and statistics. The mathematical requirements for the course are a knowledge of the elements of calculus and a familiarity with the concepts and elementary properties of vectors and matrices. No previous knowledge of probability or statistics is assumed.
The book has been written with both the student and the teacher in mind. Special care has been taken to make sure that the text can be read and understood with few obscure passages or other stumbling blocks. Theorems and proofs are presented where appropriate, and illustrative examples are given at almost every step of the way. More than 1100 exercises are included in the book. Some of these exercises provide numerical applications of results presented in the text, and others are intended to stimulate further thought about these results. A new feature of this second edition is the inclusion of approximately 20 or 25 exercises at the end of each chapter that supplement the exercises given at the end of most of the individual sections of the book.
The first five chapters are devoted to probability and can serve as the text for a one-semester course on that topic. The elementary concepts of probability are illustrated by such famous examples as the birthday problem, the tennis tournament problem, the matching problem, the collector's problem, and the game of craps. Standard material on random variables and probability distributions is highlighted by discussions of statistical swindles, the use of a table of random digits, the elementary notions of life testing, a comparison of the relative advantages of the mean and the median as predictors, the importance of the central limit theorem, and the correction for continuity. Also included as special features of these chapters are sections on Markov chains, the Gambler's Ruin
problem, choosing the best, utility and preferences among gambles, and the Borel-Kolmogorov paradox. These topics are treated in a completely elementary fashion, but they can be omitted without loss of continuity if time is limited. Sections of the book that can be so omitted are indicated, in the traditional way, by asterisks in the Contents and in the text.
The last five chapters of the book are devoted to statistical inference. The coverage here is modern in outlook. Both classical and Bayesian statistical methods are developed in an integrated presentation. No single school of thought is treated in a dogmatic fashion. My goal is to equip the student with the theory and methodology that have proved to be useful in the past and promise to be useful in the future. These chapters contain a comprehensive but elementary survey of estimation, testing hypotheses, nonparametric methods, multiple regression, and the analysis of variance. The strengths and weaknesses and the advantages and disadvantages of such basic concepts as maximum likelihood estimation, Bayesian decision procedures, unbiased estimation, confidence intervals, and levels of significance are discussed from a contemporary viewpoint. Special features of these chapters include discussions of prior and posterior distributions, sufficient statistics, Fisher information, the delta method, the Bayesian analysis of samples from a normal distribution, unbiased tests, multidecision problems, tests of goodness-of-fit, contingency tables, Simpson's paradox, inferences about the median and other quantiles, robust estimation and trimmed means, confidence bands for a regression line, and the regression fallacy. If time does not permit complete coverage of the contents of these chapters, any of the following sections can be omitted without loss of continuity: 7.6, 7.8, 8.3, 9.6, 9.7, 9.8, 9.9, and 9.10.
In summary, the main changes in this second edition are new sections or subsections on statistical swindles, choosing the best, the Borel-Kolmogorov paradox, the correction for continuity, the delta method, unbiased tests, Simpson's paradox, confidence bands for a regression line, and the regression fallacy, as well as a new section of supplementary exercises at the end of each chapter. The material introducing random variables and their distributions has been thoroughly revised, and minor changes, additions, and deletions have been made throughout the text.
Although a computer can be a valuable adjunct in a course in probability and statistics such as this one, none of the exercises in this book requires access to a computer or a knowledge of programming. For this reason, the use of this book is not tied to a computer in any way. Instructors are urged, however, to use computers in the course as much as is feasible. A small calculator is a helpful aid for solving some of the numerical exercises in the second half of the book.
One further point about the style in which the book is written should be emphasized. The pronoun "he" is used throughout the book in reference to a person who is confronted with a statistical problem. This usage certainly does not mean that only males calculate probabilities and make decisions, or that only
males can be statisticians. The word "he" is used quite literally as defined in Webster's Third New International Dictionary to mean "that one whose sex is unknown or immaterial." The field of statistics should certainly be as accessible to women as it is to men. It should certainly be as accessible to members of minority groups as it is to the majority. It is my sincere hope that this book will help create among all groups an awareness and appreciation of probability and statistics as an interesting, lively, and important branch of science.
I am indebted to the readers, instructors, and colleagues whose comments have strengthened this edition. Marion Reynolds, Jr., of Virginia Polytechnic Institute and James Stapleton of Michigan State University reviewed the manuscript for the publisher and made many valuable suggestions. I am grateful to the Literary Executor of the late Sir Ronald A. Fisher, F.R.S., to Dr. Frank Yates, F.R.S., and the Longman Group Ltd., London, for permission to adapt Table III of their book Statistical Tables for Biological, Agricultural and Medical Research (6th Edition, 1974).
The field of statistics has grown and changed since I wrote a Preface for the first edition of this book in November, 1974, and so have I. The influence on my life and work of those who made that first edition possible remains vivid and undiminished; but with growth and change have come new influences as well, both personal and professional. The love, warmth, and support of my family and friends, old and new, have sustained and stimulated me, and enabled me to write a book that I believe reflects contemporary probability and statistics.

M. H. D.
Pittsburgh, Pennsylvania
October 1985
Contents

1  Introduction to Probability
   1.1  The History of Probability
   1.2  Interpretations of Probability
   1.3  Experiments and Events
   1.4  Set Theory
   1.5  The Definition of Probability
   1.6  Finite Sample Spaces
   1.7  Counting Methods
   1.8  Combinatorial Methods
   1.9  Multinomial Coefficients
   1.10 The Probability of a Union of Events
   1.11 Independent Events
   1.12 Statistical Swindles
   1.13 Supplementary Exercises

2  Conditional Probability
   2.1  The Definition of Conditional Probability
   2.2  Bayes' Theorem
  *2.3  Markov Chains
  *2.4  The Gambler's Ruin Problem
  *2.5  Choosing the Best
   2.6  Supplementary Exercises

3  Random Variables and Distributions
   3.1  Random Variables and Discrete Distributions
   3.2  Continuous Distributions
   3.3  The Distribution Function
   3.4  Bivariate Distributions
   3.5  Marginal Distributions
   3.6  Conditional Distributions
   3.7  Multivariate Distributions
   3.8  Functions of a Random Variable
   3.9  Functions of Two or More Random Variables
  *3.10 The Borel-Kolmogorov Paradox
   3.11 Supplementary Exercises

4  Expectation
   4.1  The Expectation of a Random Variable
   4.2  Properties of Expectations
   4.3  Variance
   4.4  Moments
   4.5  The Mean and the Median
   4.6  Covariance and Correlation
   4.7  Conditional Expectation
   4.8  The Sample Mean
  *4.9  Utility
   4.10 Supplementary Exercises

5  Special Distributions
   5.1  Introduction
   5.2  The Bernoulli and Binomial Distributions
   5.3  The Hypergeometric Distribution
   5.4  The Poisson Distribution
   5.5  The Negative Binomial Distribution
   5.6  The Normal Distribution
   5.7  The Central Limit Theorem
   5.8  The Correction for Continuity
   5.9  The Gamma Distribution
   5.10 The Beta Distribution
   5.11 The Multinomial Distribution
   5.12 The Bivariate Normal Distribution
   5.13 Supplementary Exercises

6  Estimation
   6.1  Statistical Inference
   6.2  Prior and Posterior Distributions
   6.3  Conjugate Prior Distributions
   6.4  Bayes Estimators
   6.5  Maximum Likelihood Estimators
   6.6  Properties of Maximum Likelihood Estimators
   6.7  Sufficient Statistics
   6.8  Jointly Sufficient Statistics
   6.9  Improving an Estimator
   6.10 Supplementary Exercises

7  Sampling Distributions of Estimators
   7.1  The Sampling Distribution of a Statistic
   7.2  The Chi-Square Distribution
   7.3  Joint Distribution of the Sample Mean and Sample Variance
   7.4  The t Distribution
   7.5  Confidence Intervals
  *7.6  Bayesian Analysis of Samples from a Normal Distribution
   7.7  Unbiased Estimators
  *7.8  Fisher Information
   7.9  Supplementary Exercises

8  Testing Hypotheses
   8.1  Problems of Testing Hypotheses
   8.2  Testing Simple Hypotheses
  *8.3  Multidecision Problems
   8.4  Uniformly Most Powerful Tests
   8.5  Selecting a Test Procedure
   8.6  The t Test
   8.7  Discussion of the Methodology of Testing Hypotheses
   8.8  The F Distribution
   8.9  Comparing the Means of Two Normal Distributions
   8.10 Supplementary Exercises

9  Categorical Data and Nonparametric Methods
   9.1  Tests of Goodness-of-Fit
   9.2  Goodness-of-Fit for Composite Hypotheses
   9.3  Contingency Tables
   9.4  Tests of Homogeneity
   9.5  Simpson's Paradox
  *9.6  Kolmogorov-Smirnov Tests
  *9.7  Inferences about the Median and Other Quantiles
  *9.8  Robust Estimation
  *9.9  Paired Observations
  *9.10 Ranks for Two Samples
   9.11 Supplementary Exercises

10 Linear Statistical Models
   10.1  The Method of Least Squares
   10.2  Regression
   10.3  Tests of Hypotheses and Confidence Intervals in Simple Linear Regression
  *10.4  The Regression Fallacy
   10.5  Multiple Regression
   10.6  Analysis of Variance
   10.7  The Two-Way Layout
   10.8  The Two-Way Layout with Replications
   10.9  Supplementary Exercises

References

Tables
   Binomial Probabilities
   Random Digits
   Poisson Probabilities
   The Standard Normal Distribution Function
   The χ² Distribution
   The t Distribution
   0.95 Quantile of the F Distribution
   0.975 Quantile of the F Distribution

Answers to Even-Numbered Exercises

Index
Introduction to Probability
1.1. THE HISTORY OF PROBABILITY

The concepts of chance and uncertainty are as old as civilization itself. People have always had to cope with uncertainty about the weather, their food supply, and other aspects of their environment, and have strived to reduce this uncertainty and its effects. Even the idea of gambling has a long history. By about the year 3500 B.C., games of chance played with bone objects that could be considered precursors of dice were apparently highly developed in Egypt and elsewhere. Cubical dice with markings virtually identical to those on modern dice have been found in Egyptian tombs dating from 2000 B.C. We know that gambling with dice has been popular ever since that time and played an important part in the early development of probability theory.
It is generally believed that the mathematical theory of probability was started by the French mathematicians Blaise Pascal (1623-1662) and Pierre Fermat (1601-1665) when they succeeded in deriving exact probabilities for certain gambling problems involving dice. Some of the problems that they solved had been outstanding for about 300 years. However, numerical probabilities of various dice combinations had been calculated previously by Girolamo Cardano (1501-1576) and by Galileo Galilei (1564-1642).
The theory of probability has been developed steadily since the seventeenth century and has been widely applied in diverse fields of study. Today, probability theory is an important tool in most areas of engineering, science, and management. Many research workers are actively engaged in the discovery and establishment of new applications of probability in fields such as medicine, meteorology, photography from spaceships, marketing, earthquake prediction, human behavior,
the design of computer systems, and law. In most legal proceedings involving antitrust violations or employment discrimination, both sides often present probability and statistical calculations to help support their cases.
References

The ancient history of gambling and the origins of the mathematical theory of probability are discussed by David (1962), Ore (1960), and Todhunter (1865).
Some introductory books on probability theory, which discuss many of the same topics that will be studied in this book, are Feller (1968); Hoel, Port, and Stone (1971); Meyer (1970); and Olkin, Gleser, and Derman (1980). Other introductory books, which discuss both probability theory and statistics at about the same level as they will be discussed in this book, are Brunk (1975); Devore (1982); Fraser (1976); Freund and Walpole (1980); Hogg and Craig (1978); Kempthorne and Folks (1971); Larson (1974); Lindgren (1976); Mendenhall, Scheaffer, and Wackerly (1981); and Mood, Graybill, and Boes (1974).
1.2. INTERPRETATIONS OF PROBABILITY

In addition to the many formal applications of probability theory, the concept of probability enters our everyday life and conversation. We often hear and use such expressions as: "It probably will rain tomorrow afternoon"; "It is very likely that the plane will arrive late"; or "The chances are good that he will be able to join us for dinner this evening." Each of these expressions is based on the concept of the probability, or the likelihood, that some specific event will occur.
Despite the fact that the concept of probability is such a common and natural part of our experience, no single scientific interpretation of the term probability is accepted by all statisticians, philosophers, and other authorities. Through the years, each interpretation of probability that has been proposed by some authorities has been criticized by others. Indeed, the true meaning of probability is still a highly controversial subject and is involved in many current philosophical discussions pertaining to the foundations of statistics. Three different interpretations of probability will be described here. Each of these interpretations can be very useful in applying probability theory to practical problems.
The Frequency Interpretation of Probability

In many problems, the probability that some specific outcome of a process will be obtained can be interpreted to mean the relative frequency with which that outcome would be obtained if the process were repeated a large number of times
under similar conditions. For example, the probability of obtaining a head when a coin is tossed is considered to be 1/2 because the relative frequency of heads should be approximately 1/2 when the coin is tossed a large number of times under similar conditions. In other words, it is assumed that the proportion of tosses on which a head is obtained would be approximately 1/2.
Of course, the conditions mentioned in this example are too vague to serve as the basis for a scientific definition of probability. First, a "large number" of tosses of the coin is specified, but there is no definite indication of an actual number that would be considered large enough. Second, it is stated that the coin should be tossed each time "under similar conditions," but these conditions are not described precisely. The conditions under which the coin is tossed must not be completely identical for each toss because the outcomes would then be the same, and there would be either all heads or all tails. In fact, a skilled person can toss a coin into the air repeatedly and catch it in such a way that a head is obtained on almost every toss. Hence, the tosses must not be completely controlled but must have some "random" features.
Furthermore, it is stated that the relative frequency of heads should be "approximately 1/2," but no limit is specified for the permissible variation from 1/2. If a coin were tossed 1,000,000 times, we would not expect to obtain exactly 500,000 heads. Indeed, we would be extremely surprised if we obtained exactly 500,000 heads. On the other hand, neither would we expect the number of heads to be very far from 500,000. It would be desirable to be able to make a precise statement of the likelihoods of the different possible numbers of heads, but these likelihoods would of necessity depend on the very concept of probability that we are trying to define.
Another shortcoming of the frequency interpretation of probability is that it applies only to a problem in which there can be, at least in principle, a large number of similar repetitions of a certain process. Many important problems are not of this type. For example, the frequency interpretation of probability cannot be applied directly to the probability that a specific acquaintance will get married within the next two years or to the probability that a particular medical research project will lead to the development of a new treatment for a certain disease within a specified period of time.
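As a small illustration of the frequency interpretation (this sketch is not part of the original text; the function name, the seed, and the choice of toss counts are illustrative assumptions), the following Python snippet simulates tossing a fair coin and reports the relative frequency of heads, which should be close to, but not exactly equal to, 1/2:

```python
import random

def relative_frequency_of_heads(num_tosses: int, seed: int = 0) -> float:
    """Toss a simulated fair coin num_tosses times and return the
    proportion of tosses on which a head is obtained."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# The relative frequency approaches 1/2 as the number of tosses grows,
# but exactly 500,000 heads in 1,000,000 tosses would be surprising.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency_of_heads(n))
```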
The Classical Interpretation of Probability

The classical interpretation of probability is based on the concept of equally likely outcomes. For example, when a coin is tossed, there are two possible outcomes: a head or a tail. If it may be assumed that these outcomes are equally likely to occur, then they must have the same probability. Since the sum of the probabilities must be 1, both the probability of a head and the probability of a tail must be 1/2. More generally, if the outcome of some process must be one of n different
outcomes, and if these n outcomes are equally likely to occur, then the probability of each outcome is 1/n.
Two basic difficulties arise when an attempt is made to develop a formal definition of probability from the classical interpretation. First, the concept of equally likely outcomes is essentially based on the concept of probability that we are trying to define. The statement that two possible outcomes are equally likely to occur is the same as the statement that two outcomes have the same probability. Second, no systematic method is given for assigning probabilities to outcomes that are not assumed to be equally likely. When a coin is tossed, or a well-balanced die is rolled, or a card is chosen from a well-shuffled deck of cards, the different possible outcomes can usually be regarded as equally likely because of the nature of the process. However, when the problem is to guess whether an acquaintance will get married or whether a research project will be successful, the possible outcomes would not typically be considered to be equally likely, and a different method is needed for assigning probabilities to these outcomes.
The Subjective Interpretation of Probability

According to the subjective, or personal, interpretation of probability, the probability that a person assigns to a possible outcome of some process represents his own judgment of the likelihood that the outcome will be obtained. This judgment will be based on that person's beliefs and information about the process. Another person, who may have different beliefs or different information, may assign a different probability to the same outcome. For this reason, it is appropriate to speak of a certain person's subjective probability of an outcome, rather than to speak of the true probability of that outcome.
As an illustration of this interpretation, suppose that a coin is to be tossed once. A person with no special information about the coin or the way in which it is tossed might regard a head and a tail to be equally likely outcomes. That person would then assign a subjective probability of 1/2 to the possibility of obtaining a head. The person who is actually tossing the coin, however, might feel that a head is much more likely to be obtained than a tail. In order that this person may be able to assign subjective probabilities to the outcomes, he must express the strength of his belief in numerical terms. Suppose, for example, that he regards the likelihood of obtaining a head to be the same as the likelihood of obtaining a red card when one card is chosen from a well-shuffled deck containing four red cards and one black card. Since the person would assign a probability of 4/5 to the possibility of obtaining a red card, he should also assign a probability of 4/5 to the possibility of obtaining a head when the coin is tossed.
This subjective interpretation of probability can be formalized. In general, if a person's judgments of the relative likelihoods of various combinations of outcomes satisfy certain conditions of consistency, then it can be shown that his
subjective probabilities of the different possible events can be uniquely determined. However, there are two difficulties with the subjective interpretation. First, the requirement that a person's judgments of the relative likelihoods of an infinite number of events be completely consistent and free from contradictions does not seem to be humanly attainable. Second, the subjective interpretation provides no "objective" basis for two or more scientists working together to reach a common evaluation of the state of knowledge in some scientific area of common interest.
On the other hand, recognition of the subjective interpretation of probability has the salutary effect of emphasizing some of the subjective aspects of science. A particular scientist's evaluation of the probability of some uncertain outcome must ultimately be his own evaluation based on all the evidence available to him. This evaluation may well be based in part on the frequency interpretation of probability, since the scientist may take into account the relative frequency of occurrence of this outcome or similar outcomes in the past. It may also be based in part on the classical interpretation of probability, since the scientist may take into account the total number of possible outcomes that he considers equally likely to occur. Nevertheless, the final assignment of numerical probabilities is the responsibility of the scientist himself.
The subjective nature of science is also revealed in the actual problem that a particular scientist chooses to study from the class of problems that might have been chosen, in the experiments that he decides to perform in carrying out this study, and in the conclusions that he draws from his experimental data. The mathematical theory of probability and statistics can play an important part in these choices, decisions, and conclusions. Moreover, this theory of probability and statistics can be developed, and will be presented in this book, without regard to the controversy surrounding the different interpretations of the term probability. This theory is correct and can be usefully applied, regardless of which interpretation of probability is used in a particular problem. The theories and techniques that will be presented in this book have served as valuable guides and tools in almost all aspects of the design and analysis of effective experimentation.
1.3. EXPERIMENTS AND EVENTS

Types of Experiments

The theory of probability pertains to the various possible outcomes that might be obtained and the possible events that might occur when an experiment is performed. The term "experiment" is used in probability theory to describe virtually any process whose outcome is not known in advance with certainty. Some examples of experiments will now be given.
1. In an experiment in which a coin is to be tossed 10 times, the experimenter might want to determine the probability that at least 4 heads will be obtained.
2. In an experiment in which a sample of 1000 transistors is to be selected from a large shipment of similar items and each selected item is to be inspected, a person might want to determine the probability that not more than one of the selected transistors will be defective.
3. In an experiment in which the air temperature at a certain location is to be observed every day at noon for 90 successive days, a person might want to determine the probability that the average temperature during this period will be less than some specified value.
4. From information relating to the life of Thomas Jefferson, a certain person might want to determine the probability that Jefferson was born in the year 1741.
5. In evaluating an industrial research and development project at a certain time, a person might want to determine the probability that the project will result in the successful development of a new product within a specified number of months.

It can be seen from these examples that the possible outcomes of an experiment may be either random or nonrandom, in accordance with the usual meanings of those terms. The interesting feature of an experiment is that each of its possible outcomes can be specified before the experiment is performed, and probabilities can be assigned to various combinations of outcomes that are of interest.
The Mathematical Theory of Probability

As was explained in Section 1.2, there is controversy in regard to the proper meaning and interpretation of some of the probabilities that are assigned to the outcomes of many experiments. However, once probabilities have been assigned to some simple outcomes in an experiment, there is complete agreement among all authorities that the mathematical theory of probability provides the appropriate methodology for the further study of these probabilities. Almost all work in the mathematical theory of probability, from the most elementary textbooks to the most advanced research, has been related to the following two problems: (i) methods for determining the probabilities of certain events from the specified probabilities of each possible outcome of an experiment and (ii) methods for revising the probabilities of events when additional relevant information is obtained.
These methods are based on standard mathematical techniques. The purpose of the first five chapters of this book is to present these techniques which, together, form the mathematical theory of probability.
1.4. SET THEORY

The Sample Space

The collection of all possible outcomes of an experiment is called the sample space of the experiment. In other words, the sample space of an experiment can be thought of as a set, or collection, of different possible outcomes; and each outcome can be thought of as a point, or an element, in the sample space. Because of this interpretation, the language and concepts of set theory provide a natural context for the development of probability theory. The basic ideas and notation of set theory will now be reviewed.
Relations of Set Theory

Let S denote the sample space of some experiment. Then any possible outcome s of the experiment is said to be a member of the space S, or to belong to the space S. The statement that s is a member of S is denoted symbolically by the relation s ∈ S.
When an experiment has been performed and we say that some event has occurred, we mean that the outcome of the experiment satisfied certain conditions which specified that event. In other words, some outcomes in the space S signify that the event occurred, and all other outcomes in S signify that the event did not occur. In accordance with this interpretation, any event can be regarded as a certain subset of possible outcomes in the space S. For example, when a six-sided die is rolled, the sample space can be regarded as containing the six numbers 1, 2, 3, 4, 5, 6. Symbolically, we write S = {1, 2, 3, 4, 5, 6}. The event A that an even number is obtained is defined by the subset A = {2, 4, 6}. The event B that a number greater than 2 is obtained is defined by the subset B = {3, 4, 5, 6}.
It is said that an event A is contained in another event B if every outcome that belongs to the subset defining the event A also belongs to the subset defining the event B. This relation between two events is expressed symbolically by the relation A ⊂ B. The relation A ⊂ B is also expressed by saying that A is a subset of B. Equivalently, if A ⊂ B, we may say that B contains A and may write B ⊃ A.
In the example pertaining to the die, suppose that A is the event that an even number is obtained and C is the event that a number greater than 1 is obtained. Since A = {2, 4, 6} and C = {2, 3, 4, 5, 6}, it follows that A ⊂ C. It should also be noted that A ⊂ S for any event A.
If two events A and B are so related that A ⊂ B and B ⊂ A, it follows that A and B must contain exactly the same points. In other words, A = B. If A, B, and C are three events such that A ⊂ B and B ⊂ C, then it follows that A ⊂ C. The proof of this fact is left as an exercise.
The Empty Set

Some events are impossible. For example, when a die is rolled, it is impossible to obtain a negative number. Hence, the event that a negative number will be obtained is defined by the subset of S that contains no outcomes. This subset of S is called the empty set, or null set, and it is denoted by the symbol ∅.
Now consider any arbitrary event A. Since the empty set ∅ contains no points, it is logically correct to say that any point belonging to ∅ also belongs to A, or ∅ ⊂ A. In other words, for any event A, it is true that ∅ ⊂ A ⊂ S.

Operations of Set Theory
Unions. If A and B are any two events, the union of A and B is defined to be the event containing all outcomes that belong to A alone, to B alone, or to both A and B. The notation for the union of A and B is A ∪ B. The event A ∪ B is sketched in Fig. 1.1. A sketch of this type is called a Venn diagram.
For any events A and B,

A ∪ B = B ∪ A,        A ∪ A = A,
A ∪ ∅ = A,            A ∪ S = S.

Furthermore, if A ⊂ B, then A ∪ B = B.
Figure 1.1 The event A ∪ B.
The union of n events A₁, . . . , Aₙ is defined to be the event that contains all outcomes which belong to at least one of these n events. The notation for this union is either A₁ ∪ A₂ ∪ ⋯ ∪ Aₙ or ⋃ᵢ₌₁ⁿ Aᵢ. Similarly, the notation for the union of an infinite sequence of events A₁, A₂, . . . is ⋃ᵢ₌₁^∞ Aᵢ. The notation for the union of an arbitrary collection of events Aᵢ, where the values of the subscript i belong to some index set I, is ⋃ᵢ∈I Aᵢ.
The union of three events A, B, and C can be calculated either directly from the definition of A ∪ B ∪ C or by first evaluating the union of any two of the events and then forming the union of this combination of events and the third event. In other words, the following associative relations are satisfied:

A ∪ B ∪ C = (A ∪ B) ∪ C = A ∪ (B ∪ C).
Intersections. If A and B are any two events, the intersection of A and B is defined to be the event that contains all outcomes which belong both to A and to B. The notation for the intersection of A and B is A ∩ B. The event A ∩ B is sketched in a Venn diagram in Fig. 1.2. It is often convenient to denote the intersection of A and B by the symbol AB instead of A ∩ B, and we shall use these two types of notation interchangeably.
For any events A and B, the intersection has the following properties:

A ∩ B = B ∩ A,        A ∩ A = A,
A ∩ ∅ = ∅,            A ∩ S = A.

Furthermore, if A ⊂ B, then A ∩ B = A.
The intersection of n events A₁, . . . , Aₙ is defined to be the event that contains the outcomes which are common to all these n events. The notation for this intersection is A₁ ∩ A₂ ∩ ⋯ ∩ Aₙ, or ⋂ᵢ₌₁ⁿ Aᵢ, or A₁A₂ ⋯ Aₙ. Similar notations are used for the intersection of an infinite sequence of events or for the intersection of an arbitrary collection of events.
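These set relations are easy to experiment with directly. The following short Python sketch (my own illustration, not taken from the book) uses the built-in set type and the die example of this section to check a few of the identities just listed:

```python
# Sample space and events for rolling a six-sided die, as in this section.
S = {1, 2, 3, 4, 5, 6}          # sample space
A = {2, 4, 6}                   # the event that an even number is obtained
B = {3, 4, 5, 6}                # the event that a number greater than 2 is obtained

print(A | B)                    # union A ∪ B  -> {2, 3, 4, 5, 6}
print(A & B)                    # intersection A ∩ B (also written AB) -> {4, 6}
print(A <= S, set() <= A)       # A ⊂ S and ∅ ⊂ A are always true
print((A | B) | {1} == A | (B | {1}))   # the associative relation for unions
```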
Figure 1.2 The event A ∩ B.
. . . and Pr(A ∪ B ∪ C) = 5 Pr(A). Determine the value of Pr(A).
7. Suppose that each of two dice is loaded so that when either die is rolled, the probability that the number k will appear is 0.1 for k = 1, 2, 5, or 6 and is 0.3 for k = 3 or 4. If the two loaded dice are rolled, what is the probability that the sum of the two numbers that appear will be 7?
8. Suppose that there is a probability of 1/50 that you will win a certain game. If you play the game 50 times, independently, what is the probability that you will win at least once?
9. Suppose that a certain precinct contains 350 voters, of which 250 are Democrats and 100 are Republicans. If 30 voters are chosen at random from the precinct, what is the probability that exactly 18 Democrats will be selected?
10. Three students A, B, and C are enrolled in the same class. Suppose that A attends class 30 percent of the time, B attends class 50 percent of the time, and C attends class 80 percent of the time. If these students attend class independently of each other, what is (a) the probability that at least one of them will be in class on a particular day and (b) the probability that exactly one of them will be in class on a particular day?
11. Suppose that a balanced die is rolled three times, and let Xᵢ denote the number that appears on the ith roll (i = 1, 2, 3). Evaluate Pr(X₁ > X₂ > X₃).
12. Consider the World Series of baseball, as described in Exercise 13 of Sec. 1.11. If there is probability p that team A will win any particular game, what is the probability that it will be necessary to play seven games in order to determine the winner of the Series?
13. Suppose that in a deck of 20 cards, each card has one of the numbers 1, 2, 3, 4, or 5 and there are 4 cards with each number. If 10 cards are chosen from the deck at random, without replacement, what is the probability that each of the numbers 1, 2, 3, 4, and 5 will appear exactly twice?
14. Suppose that three red balls and three white balls are thrown at random into three boxes, and all throws are independent. What is the probability that each box contains one red ball and one white ball?
15. If five balls are thrown at random into n boxes, and all throws are independent, what is the probability that no box contains more than two balls?
16. Bus tickets in a certain city contain four numbers, U, V, W, and X. Each of these numbers is equally likely to be any of the ten digits 0, 1, . . . , 9, and the four numbers are chosen independently. A bus rider is said to be lucky if U + V = W + X. What proportion of the riders are lucky?
17. Suppose that a box contains r red balls and w white balls. Suppose also that balls are drawn from the box one at a time, at random, without replacement. (a) What is the probability that all r red balls will be obtained before any white balls are obtained? (b) What is the probability that all r red balls will be obtained before two white balls are obtained?
18. Suppose that a box contains r red balls, w white balls, and b blue balls. Suppose also that balls are drawn from the box one at a time, at random, without replacement. What is the probability that all r red balls will be obtained before any white balls are obtained?
19. Suppose that 10 cards, of which 7 are red and 3 are green, are put at random into 10 envelopes, of which 7 are red and 3 are green, so that each envelope contains one card. Determine the probability that exactly k envelopes will contain a card with a matching color (k = 0, 1, . . . , 10).
20. Suppose that 10 cards, of which 5 are red and 5 are green, are put at random into 10 envelopes, of which 7 are red and 3 are green, so that each envelope contains one card. Determine the probability that exactly k envelopes will contain a card with a matching color (k = 0, 1, . . . , 10).
21. A certain group has eight members. In January, three members are selected at random to serve on a committee. In February, four members are selected at random and independently of the first selection to serve on another committee. In March, five members are selected at random and independently of the previous two selections to serve on a third committee. Determine the probability that each of the eight members serves on at least one of the three committees.
22. For the conditions of Exercise 21, determine the probability that two particular members A and B will serve together on at least one of the three committees.
23. Suppose that two players A and B take turns rolling a pair of balanced dice, and that the winner is the first player who obtains the sum of 7 on a given roll of the two dice. If A rolls first, what is the probability that B will win?
24. Three players A, B, and C take turns tossing a fair coin. Suppose that A tosses the coin first, B tosses second, and C tosses third; and this cycle is repeated indefinitely until someone wins by being the first player to obtain a head. Determine the probability that each of the three players will win.
25. Let A₁, A₂, and A₃ be three arbitrary events. Show that the probability that exactly one of these three events will occur is
Pr(A₁) + Pr(A₂) + Pr(A₃) − 2 Pr(A₁A₂) − 2 Pr(A₁A₃) − 2 Pr(A₂A₃) + 3 Pr(A₁A₂A₃).
26. Let A₁, . . . , Aₙ be n arbitrary events. Show that the probability that exactly one of these n events will occur is

Σᵢ₌₁ⁿ Pr(Aᵢ) − 2 Σᵢ<ⱼ Pr(AᵢAⱼ) + 3 Σᵢ<ⱼ<ₖ Pr(AᵢAⱼAₖ) − ⋯ + (−1)ⁿ⁺¹ n Pr(A₁A₂ ⋯ Aₙ).

2.1. THE DEFINITION OF CONDITIONAL PROBABILITY

If A and B are any two events such that Pr(B) > 0, then

Pr(A | B) = Pr(AB) / Pr(B).
The conditional probability Pr(A | B) is not defined if Pr(B) = 0.
The conditional probability Pr(A | B) has a simple meaning in terms of the frequency interpretation of probability presented in Sec. 1.2. According to that
Figure 2.1 The outcomes in the event B that also belong to the event A.
interpretation, if an experimental process is repeated a large number of times, then the proportion of repetitions in which the event B will occur is approximately Pr(B) and the proportion of repetitions in which both the event A and the event B will occur is approximately Pr(AB). Therefore, among those repetitions in which the event B occurs, the proportion of repetitions in which the event A will also occur is approximately equal to

Pr(A | B) = Pr(AB) / Pr(B).

Example 1: Rolling Dice. Suppose that two dice were rolled and it was observed that the sum T of the two numbers was odd. We shall determine the probability that T was less than 8.
If we let A be the event that T < 8 and let B be the event that T is odd, then AB is the event that T is 3, 5, or 7. From the probabilities for two dice given at the end of Sec. 1.6, we can evaluate Pr(AB) and Pr(B) as follows:

Pr(AB) = 2/36 + 4/36 + 6/36 = 12/36 = 1/3,
Pr(B) = 2/36 + 4/36 + 6/36 + 4/36 + 2/36 = 18/36 = 1/2.

Hence,

Pr(A | B) = Pr(AB) / Pr(B) = 2/3.
0
Example 2: Rolling Dice Repeatedly. Suppose that two dice are to be rolled repeatedly and the sum T of the two numbers is to be observed for each roll. We shall determine the probability p that the value T = 7 will be observed before the value T = 8 is observed.
The desired probability p could be calculated directly as follows: We could assume that the sample space S contains all sequences of outcomes which terminate as soon as either the sum T = 7 or the sum T = 8 is obtained. Then we could find the sum of the probabilities of all the sequences which terminate when the value T = 7 is obtained.
However, there is a simpler approach in this example. We can consider the simple experiment in which two dice are rolled. If we repeat the experiment until either the sum T = 7 or the sum T = 8 is obtained, the effect is to restrict the outcome of the experiment to one of these two values. Hence, the problem can be restated as follows: Given that the outcome of the experiment is either T = 7 or T = 8, determine the probability p that the outcome is actually T = 7.
If we let A be the event that T = 7 and let B be the event that the value of T is either 7 or 8, then AB = A and

p = Pr(A | B) = Pr(AB) / Pr(B) = Pr(A) / Pr(B).
From the probabilities for two dice given in Example 3 of Sec. 1.6, Pr(A) = 6/36 and Pr(B) = (6/36) + (5/36) = 11/36. Hence, p = 6/11. □
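Both of the examples above can be checked by direct enumeration. The following Python sketch (my own addition, not part of the original text; the helper name prob is illustrative) counts outcomes of two balanced dice and reproduces the values 2/3 and 6/11:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes (i, j) for two balanced dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event) -> Fraction:
    """Probability of an event, given as a predicate on an outcome (i, j)."""
    favorable = sum(1 for o in outcomes if event(o))
    return Fraction(favorable, len(outcomes))

# Example 1: Pr(T < 8 | T odd) = Pr(T < 8 and T odd) / Pr(T odd) = 2/3.
pr_B = prob(lambda o: (o[0] + o[1]) % 2 == 1)
pr_AB = prob(lambda o: (o[0] + o[1]) % 2 == 1 and o[0] + o[1] < 8)
print(pr_AB / pr_B)                                                   # 2/3

# Example 2: Pr(T = 7 | T is 7 or 8) = Pr(T = 7) / Pr(T in {7, 8}) = 6/11.
print(prob(lambda o: sum(o) == 7) / prob(lambda o: sum(o) in (7, 8)))  # 6/11
```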
Conditional Probability for Independent Events

If two events A and B are independent, then Pr(AB) = Pr(A) Pr(B). Hence, if Pr(B) > 0, it follows from the definition of conditional probability that

Pr(A | B) = Pr(A) Pr(B) / Pr(B) = Pr(A).

In other words, if two events A and B are independent, then the conditional probability of A when it is known that B has occurred is the same as the unconditional probability of A when no information about B is available. The converse of this statement is also true: If Pr(A | B) = Pr(A), then the events A and B must be independent.
Similarly, if A and B are two independent events and if Pr(A) > 0, then Pr(B | A) = Pr(B). Conversely, if Pr(B | A) = Pr(B), then the events A and B are independent. These properties of conditional probabilities for independent events reinforce the interpretations of the concept of independence that were given in Chapter 1.
The Multiplication Rule for Conditional Probabilities

In an experiment involving two events A and B that are not independent, it is often convenient to compute the probability Pr(AB) that both events will occur by applying one of the following two equations:

Pr(AB) = Pr(B) Pr(A | B)    or    Pr(AB) = Pr(A) Pr(B | A).
Example 3: Selecting Two Balls.
Suppose that two balls are to be selected at random, without replacement, from a box containing r red balls and b blue balls. We shall determine the probability p that the first ball will be red and the second ball will be blue.
Let A be the event that the first ball is red, and let B be the event that the second ball is blue. Obviously, Pr(A) = r/(r + b). Furthermore, if the event A has occurred, then one red ball has been removed from the box on the first draw. Therefore, the probability of obtaining a blue ball on the second draw will be

Pr(B | A) = b / (r + b − 1).

It follows that

Pr(AB) = [r / (r + b)] · [b / (r + b − 1)].
0
The principle that has just been applied can be extended to any finite number of events, as stated in the following theorem:

Theorem 1. Suppose that A₁, A₂, . . . , Aₙ are any events such that Pr(A₁A₂ ⋯ Aₙ₋₁) > 0. Then

Pr(A₁A₂ ⋯ Aₙ) = Pr(A₁) Pr(A₂ | A₁) Pr(A₃ | A₁A₂) ⋯ Pr(Aₙ | A₁A₂ ⋯ Aₙ₋₁).     (1)

Proof. The product of probabilities on the right side of Eq. (1) is equal to

Pr(A₁) · [Pr(A₁A₂) / Pr(A₁)] · [Pr(A₁A₂A₃) / Pr(A₁A₂)] ⋯ [Pr(A₁A₂ ⋯ Aₙ) / Pr(A₁A₂ ⋯ Aₙ₋₁)].
61
Since Pr( A 1 A 2 . . • A .. _ d > 0, each of the denominators i.n this product must be positive. An of the terms in the product cancel each other except the final numerator Pr( A , A , . . . A"), which is 'he left side of Eq. (I). 0
Example 4: Selecting Four Balls. Suppose that four balls are selected one at a
time. without replacement, from a box containing r red balls and b blue balls (r � 2, b � 2). We shall determine the probability of obtaining the sequence of outcomes red, blue, red, blue. H we let Rj denote the event that a red ball is obtained on the jth draw and let Bj denote the event that a blue ball is obtained on the jth draw (j = 1 , . . . , 4), then
r- I r+b
2
b-I r+b-3'
0
The Game 01 Craps
We shall conclude this section by discussing a popular gambling game called craps. One version of this game is played as foHows: A player rolls two dice, and the sum of the two numbers that appear is observed. If the sum on the first rol1 is 7 or 1 1 . the player wins the game immediately. If the sum on the first roU is 2, 3, or 12, the player loses the game immediately. If the sum on the first roll is 4, 5, 6, 8, 9, or 10, then the two dice are rolled again and again until the sum is either 7 or the original value. If the original value is obtained a second time before 7 is obtained, then the player wins. If the sum 7 is obtained before the original value is obtained a second time, then the player loses. We shall now compute the probability P ,hat the player will win. The probability 'iTo that the sum on the first roll will be either 7 or 1 1 is
•
If the sum obtained on the first roll is 4, the probability q, that the player will win is equal to the conditional probability that the sum 4 will be obtained again before the sum 7 is obtained. As dCl;cribed in Example 2, this probability is the same as the probability of obtaining the sum 4 when the outcome must be either 4 or 7. Hence,
q,
=
I 3/36 -;(3 :/ --:-:7 +'""" (6 -:-; ""3= '6) = "3 36 )'-'
.-'":� .� ..'
62
Conditional Probability
Since the probability P4 of obtaining the sum 4 on the first roll is P4 = 3/36 1/12, it folJows that the probability of obtaining the sum 4 on the first roll and then winning the game is =
7T4
Similarly, the probability P I O of obtaining the sum 1 0 on the first roll is 1/12, and the probability qlO of winning the game when the sum 10 has been obtained on the first roll is q lO 1/3. Hence, the probability 7T1O of obtaining the sum ]0 on the first roll and then winning the game is
PIO
=
=
7T1O = PlOqlO =
1
-
1
12 ' '3 = 36 .
The values of Pi ' will be found that
'" - P5 q,
1
flj,
and
'TT,
can be defined similarly for i
"-) -"7: +(
4/36 4 36 . ' ( :4/ ::: 3""" 6 "") 6:/::: 36
=
5, 6, 8, and 9. It
2 45
and
Also. ". - P. q. -
5/36 36 . (5/36) + (6/36) 5
25 396
- --
and
Since the probability P that the player will win in some way is the sum of all the probabilities just found, we obtain P - 170 + ( 17, + "10) + ( 175 + -
2
9
",
) + ( '"
+
w,)
2 25 244 1 + 2 . 36 + 2 . 45 + 2 · 396 - 495 - 0.493 .
Thus. the probabiljty of winning in the game of craps
is
slightly less than ]/2.
2.1. The Definition 01 Condilional Probability
•
63
EXERCISES 1 . If A and B are disjoint events and Pr( B )
>
0, what is the value of Pr( A I B )?
2. If S is the sample space of an experiment and A is any event in that space. what is the value of Pr( A I S )? 3. If A and B are independent events and Pr( B ) < 1, what is the value of Pr( A ' I B')?
4. A box contains r red balls and b blue balls. One ball is selected at random and its color is observed. The balL is then returned to the box and k additional balls of the same color are also put into the box. A second ball is then selected at random. its color is observed, and it is returned to the box together with k additional bans of the same color. Each time another ball is selected, the process is repeated. If four balls arc selected, what is the probability that the first three balls will be red and the fourth ball wil1 be blue?
5.
Each time a shopper purchases a tube of toothpaste. he chooses either brand A or brand B. Suppose that for each purchase after the first, the probability is 1/3 that he will choose the same brand that he chose on his preceding purchase and lhe probabililY is 2/3 lhal he will switch brands. If he is equally hkely to choose either brand A or brand B on his first purchase, what is the probability that both his first and second purchases will be brand A and both his third and fourth purchases will be brand B?
6. A box contains three cards. One card is red on both sides, one card is green on both sides, and one card is red on one side and green on the other. One card is selected from the box at random, and the color on one side is observed. If this side is green, what is' the probability that the other side of the card is also green?
7. Consider again the conditions of Exercise 6 of Sec. 1 . 1 1 . Two students A and B are both registered for a certain course. Student A attends class 80 percent of the time and student B attends class 60 percent of the lime, and the absences of the two students are independent. If at least one of the two students is in class on a given day, what is the probability that A is in class lhal day?
8. Consider again the conditions of Exercise 1 of Sec. 1.10. If a family selected at random from the city subscribes to newspaper A, what is the probability that the family also subscribes to newspaper B?
9. Consider again the conditions of Exercise l or-Sec. 1 . 10. If a family selected
at random from the city subscribes to at least one of the three newspapers A , B, and C, what is the probability that the family subscribes to newspaper A?
Conditional Probability
64
10. Suppose that a box contains one blue card and rour red cards, which are labeled A , B, C, and D. Suppose also that two of these five cards are selected at random, without replacement. (a) If it is known that card A has been selected, what is the probability that both cards are red? (b) If it is known that at least one red card has been selected, what is the probability that both cards are red? ) 1 . The probability that any child in a certain ramily will have blue eyes is 1/4, and this feature is inherited independently by different children in the family.
H there are five children in the family and it is known that at least one or these children has blue eyes, what is the probability that at least three of the children have blue eyes? 12. Consider the family with five children described in Exercise
11.
(a) I f i t j s known that the youngest child in the family has blue eyes, what is the probability that at least three of the children have blue eyes? (b) Explain why the answer in part (a) is different rrom the answer in Exercise 1 1 . dice. If the sum on the first roll is 7 or
1 3 . Consider the following version of the game of craps: The player rolls two
11,
the player wins the game
immediately. If the sum on the first roll is 2, 3,
or
12, the player loses the
game immediately. However, i f the sum on the first roll is 4,
5, 6, 8, 9, or 10,
then the two dice are rolled again and again until the sum is either 7 or 1 1 or
7 or 1 1 is obtained, then tht: player wins. Jf either 7 or 1 1 is obtained before
the original value. If the original value is obtained a second time before either
the original value is obtained a second lime, then the player loses. Determine the probability that the player wi1l wjn this game.
2.2. BAYES' THEOREM

Probability and Partitions

Let S denote the sample space of some experiment, and consider k events A₁, . . . , Aₖ in S such that A₁, . . . , Aₖ are disjoint and ⋃ⱼ₌₁ᵏ Aⱼ = S. It is said that these events form a partition of S.
If the k events A₁, . . . , Aₖ form a partition of S and if B is any other event in S, then the events A₁B, A₂B, . . . , AₖB will form a partition of B, as illustrated in Fig. 2.2. Hence, we can write

B = (A₁B) ∪ (A₂B) ∪ ⋯ ∪ (AₖB).

Figure 2.2 The intersections of B with events A₁, . . . , Aₖ of a partition.

Furthermore, since the k events on the right side of this equation are disjoint,

Pr(B) = Σⱼ₌₁ᵏ Pr(AⱼB).

Finally, if Pr(Aⱼ) > 0 for j = 1, . . . , k, then Pr(AⱼB) = Pr(Aⱼ) Pr(B | Aⱼ) and it follows that

Pr(B) = Σⱼ₌₁ᵏ Pr(Aⱼ) Pr(B | Aⱼ).

In summary, we have derived the following result: Suppose that the events A₁, . . . , Aₖ form a partition of the space S and that Pr(Aⱼ) > 0 for j = 1, . . . , k. Then, for any event B in S,

Pr(B) = Σⱼ₌₁ᵏ Pr(Aⱼ) Pr(B | Aⱼ).
given that the event B has occurred. Prove that if Pr( A , I B ) Pr( A i l B ) > Pr(A,) for at least one value of i (i � 2, . .
.
< Pr(A,), then
, k ).
5. Consider again the conditions of Example 3 in this section, in which an item was selected at random from a balch of manufactured items and was found to be defective. For which values of
i (i � 1 , 2, 3) is the posterior probability Mj larger than the prior probability
that the item was produced by machine
that the item was produced by machine Mi?
6.
Suppose that in Example 3 in this section, the item selected at random from the entire lot is found to be nondefective. Determine the posterior probability that it was produced by machine
M2.
7. A new test has been devised for detecting a particular type of cancer. If the test is applied to a person who has trus type or cancer, the probabi lity that the person will h�tve
a
positive reaction is 0.95 and the probability that the person
will have a negative reaction is
0.05.
If the test is applied to a person who
does not have this type of cancer, the probability that the person will have a positive reaction is
0.05
negative reaction is
0.95.
and the probability that the person will have a Suppose that in the general population, one person
out of every 100,000 people has this type uf L:anL:Cf. If a person selected at
random has a positive reaction to the test, what is the probability that he has this type of cancer? 8. In a certain city, 30 percent of the people are Conservatives,
50
percent are
Liberals, and 20 percent are Independents. R.ecords show that in a particular election,
65
voted, and
percent of the Conservatives voted, 82 percent of the Liberals
50
percent of the Independents voted. If a person in the city is
selected at random and it is learned that he did not vote in the last election, what is the probability that he is a Liberal'
9.
Suppose that when a machine is adjusted properly, produced by it are or rugh quality and the other
50
50
percent of the items
percent are of medium
72
Conditional Probability
quality. Suppose, however, that the machine is improperly adjusted during 10 percent of the time and that, under these conditions, 25 percent of the items produced by it are of high quality and 75 percent are of medium quality. (a) Suppose that five items produced by the machine at a certain time are selected at random and inspected. If four of these items are of high quality and one item is of medium quality, what is the probability that the machine was adjusted properly at that time? (b) Suppose that one additional item, which was produced by the machine at the same time as the other five items, is selected and found to be of medium quality. What is the new posterior probability that the machine was adjusted properly? 10. Suppose that a box contains five coins, and that for each coin there is a
different probability that a head will be obtained when the coin is tossed. Let p, denote the probability of a head when the ith coin is tossed (i � 1, . . . , 5), l. and suppose that PI � 0, p, � 1/4, p, � 1/2, p, - 3/4, and p, �
(a) Suppose that one coin is selected at random from the box and that when it is tossed once, a head is obtained. What is the posterior probability that the i th coin was selected (i = ], . . . , 5)? (b) If the same coin were tossed again, what would be the probability of obtaining another head?
(c) H a tail had been obtained on the first toss of the selected coin and the same coin were tossed agajn, what would be the probability of obtaining a head on the second toss? 1 1 . Consider again the box containing the five different coins described in
Exercise 10. Suppose that one coin is selected at random from the box and is tossed repeatedly until a head is obtained. (a) If the first head is obtained on the fourth toss, what is the posterior probability that the i th coin was selected (i � 1 , . . . , 5)? (b) If we continue to toss the same coin until another head is obtained, what� is the probability that exactly three additional tosses will be required?
*2.3.
MARKOV CHAINS
Stochastic Processes Suppose that a certain business office has five telephone lines and that any number of these lines may be in use at any given time. During a certain period of time. the telephone lines are observed at regular intervals of 2 minutes and the number of lines that are being used at each time is noted. Let XI denote the
:2.3. Markov Chains
73
number of lines · that are being used when the lines are first observed at the beginning of the period; let X2 denote the number of lines that are being used when they are observed the second time, 2 minutes later; and in general, for n = 1 , 2, . . , let Xn denote the number of lines that are being used when they are observed for the nth time. The sequence of observations Xl' X2, . . is called a stochastic process, or random process, because the values of these observations cannot be predkted precisely beforehand but probabilities can be specified for each of the different possible values at any particular time. A stochastic process like the one just described is called a process with a discrete time parameter because the lines are observed only at discrete or separated points in time, rather than continuously in time. In a stochastic process the first observation XI is caBed the initial slale of the process; and for n = 2, 3, . , the observation Xn is called the stale of Ihe process at time n. In the preceding example, the state of the process at any time is the number of lines being used at that time. Therefore, each state must be an integer between 0 and 5. In the remainder of this chapter we shan consider only stochastic processes (or which there are just a finite number of possible states at any given time. In a stochastic process with a discrete time parameter, the Slate of the process varies in a random manner from time to time. To describe a complete probability model for a particular process, it i$ necessary to specify a probability for each of the possible values of the initial state Xl and also to specify for each subsequent state X" + l (11 - 1 , 2, ) every conditional probability of the. following form: .
.
. .
. . .
In other words, for every time n , the probability model must spccify the conditional probability that the process will be in stale xn + l at time n + 1, given that at times 1, , n the process was in states X1, X ". . . .
. . . •
Markov Chains
A Markov chain is a special type of �tochastic process, which may be described as follows: At any given lime n, when the current state Xn and all previous states Xl Xn_ 1 of the process are known, the probabilities of all future states Xj (j > n) depend only on the current state Xn and do not depend on the earlier slates XI ' . . . ' X,, _ I. Formally, a Markov chain is a stochastic process such that for n = 1 , 2. . . and for any possible sequence of slates x t, x2 . · · · ' Xn + l • Pr( Xn + 1 = X n + l 1 X1 = Xl' Xl X2 , · . , Xn = x ,, ) Pr( XI1+1 = x,,+ I I X" = XII ) ·
Definition.
• • . .
'
.
=
-
.
Conditional Probability
74
It follows from the multiplication rule for conditional probabilities given in Sec.
2.1
that the probabilities in a Markov chain must satisfy the relation
Finite Markov Chains with Stationary Tranl'ition Probabilities.
We shaH now
consider a Markov chain for which there are only a finite number states
S i t . . . t Sk t
k
and at any time the chain must be in one of these
Markov chain of this type is called a
finite Markov chain.
of possible
k states.
A
The conditional probability Pr( X,, + 1 � s) Xn � sJ that the Markov chain
will be i n state
probability.
Sj
at time
n + 1 if it is in state
Si
at time
n
is caJled a
transition
If for a certain Markov chain this transition probability has the same
value for every time
II
(n
=
1 , 2, . . . ),
then it i s said that the Markov chain has
stationary transition probabilities. In other words, a Markov chain has
transition probabilities Pi} such that
if, for any states
Si
stationary
and si' there is a transition probability
for n � l , 2,
....
To illustrate the application of these definitions we shall consider again the example involving the office with five telephone lines. In order for this stochastic process to be a Markov chain, the specified probability for each possible number of lines that may be in use at any time must depend only on the number of lines that were i n use when the process was observed most recently
2
minutes earlier
and must nol depend on any other observed values previously obtained. For example, if three lines were in use at time
n , then the probability specified for time n + 1 must be the same regardless of whether 0, 1 , 2, 3, 4, or 5 lines were i n use at time
n - 1.
In reality, however, the observation at time
n- 1
might
provide some information in regard to the length of time for which each of the three lines in use at lime
n
had been occupied, and this information might be
helpful in determining lhe probability for lime
11 + 1 .
Nevertheless, we shall
suppose now that this proc�ss is a Markov chain. If this Markov chain is to have stationary transition probabilities. it must be true that the rates at which incomi.ng and outgoing telephone calls are made and the average duration of these telephone calls do not change during the entire period covered by the process. This requirement means that the overall period cannot include busy times when more calls are expected or quiet times when fewer calls are expected. For example, if only one line is in use at a particular observation time, regardless of when this time occurs during the entire period covered by the process, then there
2.3. Markov Chains
75
must be a specific. probability P lj that exactly j lines will be in use 2 minutes later.
The Transition Matrix
=
Consider a finite Markov chain with k possible states S 1 , . . . , Sk and stationary transition probabilities. For i 1, . . . , k and j = 1 , . . , k, we shall again let Pij denote the conditional probability that the process will be in state Sj at a given observation time if it is in state Si at the preceding observation time. The transition matrix of the Markov chain is defined to be the k X k matrix P with elements P;j . Thus, The
Tram'ition Matrix for a Single Step. .
p
-
[
:
]
�:: .• . �;: . . PkI . .
PH
( I)
Since each number Pij is a probability, then Pij > O. Furthermore, EJ-l Pij = 1 for i = 1, . . . • k, because if the chain is in state SI at a given observation time, then the sum of the probabilities that it will be in each of the states 5 1 ' . . . ' S k at the next observation time must be 1. A square matrix for which al1 elements are nonnegative and the sum of the elements in each row is 1 is called a stochastic matrix. It is seen that the transition matrix P for any finite Markov chain with stationary transition probabilities must be a stochastic matrix. Conversely, any k X k stochastic matrix can serve as the transition matrix of a finite Markov chain with k possible states and stationary transition probabilities.
£xomp/e
1: A
Transition Matrix Jor the Number of O£:cup;ed Telephone Lines.
Suppose that in the example involving the office with five telephone lines, the numbers of Lines bei_ng used at times 1, 2, . . . form a Markov chain with statjonary transition probabilities. This chain has six possible states boo hl, · . . , hs, where hi is the state in which exactly i lines are being· used at a given time (i = 0, 1 , . . . , 5). Suppose that the transition matrix P is as follows:
p-
bo
b,
b,
b,
b..
b,
bo b,
0.1
0.4
0.2
0.1
0.1
0.1
0.2
0.3
0.2
0.1
0.1
OJ
b,
0.1
0.2
0.3
0.2
0.1
0.1
b, b. b,
0.1
0.1
0.2
0.3
0.2
0.1
0.1
0.1
0.1
0.2
0.3
0.2
0.1
0.1
0.1
0.1
0.4
0.2
(2)
76
CondlUonal Probability
(a) Assuming that aU five lines are in use at a certain observation time. we shall determine the probability that exactly four lines will be in use at the next observation time. (b) Assuming that no lines are in use at a certain lime, we shall determine the probability that at least one line wil1 be in use at the next observation time. (a) This probability is the element in the matrix P in the row corresponding to ·the state b5 and the column corresponding to the state b4" Its value is seen to be 0.4. (b) If no lines are in use at a certain time, then the element in the upper left corner of the matrix P gives the probability that no lines will be in use at the "next observation time. Its value is seen to be 0.1. Therefore. the probability that at least one line wiU be in use at the next observation time is 1 - 0.1 = 0.9. 0 Consider again an arbitrary Markov chain with k possible states 51 , 51. and the transition matrix P given by Eq" (1), and assume that the chain is in state Sj at a given time n . We shall now determine the probability that the chain will be in state 5j at time n + 2. In other words, we shaH determine the probability of moving from state Si to Slate Sj in two steps. The notation for this probability is p}j). For n = 1 , 2, " . . , let Xn denote the state of the chain at time n. Then, if s, denotes the state to which the chain has moved at time n + 1, The Transition Matrix for Several Stepj·. •
.
.
.
k
=
L Pr{ Xn + 1 = s� and Xn+ 2 = sjlXn = Sj)
,-I
k
=
L Pr( XI! + 1 = SA XI! = si)Pr(Xn+2 = sj IX,I+1 = s� )
,-I k
=
L ,-I
P" Prj '
The value of p:J> can be determined in the following manner: If the transition matrix P is squared, i.e., if the matrix p2 = PP is constructed, then the element in the ith row and the jth column of the matrix p2 will be [�-l PirPrj' Therefore, p};) will be the element in the ith row and the jth column of p2. By a similar argument the probability that the chain will move from the state S, to the state S in three steps, or p'Y = Pre X,,+ 3 = 5jl XI! s;), can be found by j constructing the matrix p3 = p2p. Then the probability pU) will be the e1ement in the ith row and the jth column of the matrix pl. =
2.3. Markov Chains
77
m In general, for any value of m (m = 2, 3, . . . ), the mth power p of the matrix P will specify the probability p:t) that the chain will move from any state p m is called the m-step Si to any state Sj in m steps. For this reason the matrix transition matrix of the Markov chain.
Example 2: The Two-Step and Three-Step Transition Matricel' jor the Number of Occupied Telephone Lines. Consider again the transition matrix P given by Eq. (2) for the Markov chain based on five telephone lines. We shall assume first that
i lines are in use at a certain time, and we shall determine the probability that exactly j lines will be in use two time periods later. If we multiply the matrix P by itself, we obtain the following two-step transition matrix: bo b, b2 2 P b, b4 b,
bo 0.14 0.13 0.12 0.11 0.11 O.ll
b, 0.23 0.24 0.20 0.17 0.16 0.16
b2 0.20 0.20 0.21 0.19 0.16 0.15
b,
b,
b,
0.15 0.15 0.18 0.20 0.18 0.17
0.16 0.16 0.17 0.20 0.24 0.25
0.12 0.12 0.12 0.13 0.15 0.16
( 3)
From this matrix we can find any two-step transition probability for the chain, such as the fol1owing: (i) If two lines are in use at a certain time, then the probability . that four lines will be in use twO time periods later is 0.17.
(ii) If three lines are in use at a certain time, then the probability that three Lines wiU again be in use two time periods later is 0.20.
We shall now assume that i 1jnes are in use at a certain time, and we shall determine the probability that exactly j lines will be in use three time periods later. p2p we obtain the following three-step If we construct the matrix p' , transition matrix: _
bo b, , b2 P b, b4 b,
bo 0.123 0.124 0.120 0.117 0.116 0.116
b,
b,
0.208 0.207 0.197 0.186 0.181 0.180
0.192 0.192 0.192 0.186 0.177 0.174
b,
b4
b,
0.166 0.166 0.174 0.179 0.176 0.174
0.183 0.183 0.188 0.199 0.211 0.215
0.128 0.128 0.129 0.133 0.139 0.141
•
(4)
78
Conditional Probability
From this matrix we can find any three-step transition probability for the chain, such as the foUowing: (i) If all five lines are in use at a certain time, then the probability that no lines will be in use three time periods later is 0.116. (ii) If one line is in use at a certain time, then the probability that exactly one line will again be in use three time periods later is 0.207. 0
The Inilial Probability Vector Suppose that a finite Markov chain with stationary transition probabilities bas k possible states • . . . Sk and that the chain might be in any one of these k states at the initial observation time n - 1. Suppose also that, for i = 1 , . . . , k, the probability that the chain will be in state at the beginning of the process is vj, where Vi � 0 and V I + . + Vk = 1. Any vector w = (WI" ' " wk ) such that k and also � 0 for i = 1, = is called a probability vector. The probability vector v (V I ' · · · ' Uk)' [,;_ 1 Wi = 1 which specifies the probabilities of the various states of a chain at the initial observation time, is called the initial probability vector for the chain. The initial probabiJity vector and the transition matrix together determine the probability that the chain will be in any particular state at any particular time. If v is the initial probability vector for a chain, then Pc( XI = = for i = 1, . . . , k. If the transition matrix of the chain is the k X k matrix P having the elements indicated by Eq. (1), then for j - 1 , . . . , k,
Sl
'
.
Si
.
Wi
.
.
.
•
Sj) Vi
Pij
(
Pr X, =
J=
s
,
L Pr( X,
i- I ,
- L Pri X,
i-I
=
=
Si and X, = sJ (
= si)Pr X,
=
)X
s
,
= Si
)
,
L ViPi)'
i- I
ViPij
Since L�_l is the jth component of the vector vP. trus derivation shows that the probabilities for the state of the chain at the observation time 2 are specified by the probabiJjty vector vP. More generally, suppose that at some given time " the probabiJity that the chain will be in state Sj is Pr( XII = = Wi' for i = I , . , k. Then
Sj)
(
)
Pr X,, + I = Sj =
k
L WiPij
for j = l , . . . , k .
.
.
2.3. Markov Chains
79
]n other words, if the probabilities of the various states at time n are specified by the probability vector HI, then the probabilities at time n + 1 are specified by the probability vector wP. 1t fol1ows that if the initial probability vector for a chain with stationary transition probabilities is v, then the probabilities of the various states at time n + 1 are specified by the probability vector vP".
Example 3: Probabilities for the Number of Occupied Telephone Lines.
Consider again the office with five telephone lines and the Markov chain for which the transition matrix P is given by Eq. (2). Suppose that at the beginning of the observation process at time n = 1, the probability that no lines be in use is 0.5. the probability that one line will be in use is 0.3, and the probability that two lines will be in use is 0.2. Then the initial probability vector is v = (0.5, 0.3, 0.2, 0,0, 0). We shall first determine the probability that exactly j lines will be in use at time 2, one period later. By an elementary computation it will be found that
will
vP � (0.13, 0.33, 0.22, 0.12, 0.10,0.10). Since the first component of this probability vector is 0.13, the probability that no hnes will be in use at time 2 is 0.13; since the second component is 0.33, the probability that exactly one line will be in use at time 2 is 0.33; and so on. Next, we shaH determine the probability that exactly j lines wiJ1 be in use at time 3. By use of Eq. (3), it will be found that
Vp2 = (0.133, 0.227,0.202, 0.156,0.162, 0 .120). Since the first component of this probability vector is 0.133, the probability that no lines will be in use at time 3 is 0.133; since the second component is 0.227, the probability that exactly one line will be in use at time 3 is 0.227; and so on. 0
EXERCISES
•
1. Suppose that the weather can be only sunny or cloudy, and that the weather conditions on successive mornings form a Markov chain with stationary transition probabilities. Suppose also that the transition matrix is as follows: Sunny Cloudy Sunny
0.7
0.3
Cloudy
0.6
0.4
80
Conditional Probability
(a) If it is cloudy on a given day, what is the probability that it will also be cloudy the next day? (b) If it is sunny on a given day, what is the probability that it will be sunny on the next two days? (c) If it is cloudy on a given day. what is the probability that it on at least one of the next three days?
will be sunny
2. Consider again the Markov chain described in Exercise L
(a) If it is sunny on a certain Wednesday. what is the probability that it will be sunny on the following Saturday?
(b) If it is cloudy on a certain Wednesday, what is the probability that it will be sunny on the following Saturday?
3.
Consider again the conditions of Exercises
1
and
2.
be sunny on both the following Saturday and Sunday?
(a) If it is sunny on a certain Wednesday. what is the probability that it will (b) If it is cloudy on a certain Wednesday, what is the probability that it will be sunny on both the following Saturday and Sunday?
4.
Consider again the Markov chain described in Exercise
1.
Suppose that the
probability that it will be sunny on a certain Wednesday is probability that it will be cloudy is
0.2
and the
0.8.
(a) Determine the probability that it will be cloudy on the next day, Thursday. (b) Determine the probability that it will be cloudy on Friday. (c) Determine the probability that it will be cloudy on Saturday. 5 . Suppose that a student will be either on time or late for a particular class, and
that the events that he is on time or late for the class on successive days form a Markov chain with stationary transition probabilities. Suppose also that if he is late on a given day, then the probability that he will be on time the next day is
0.8.
Furthermore, if he is on time on a given day, then the probabihty
that he wil1 be late the next day is
0.5.
(a) If the student is late on a certain day, what is the probability that he will be on time on each of the next three days? (b) If the student is on time on a given day, what is the probability that he will be late on each of the next three days?
6.
Consider again the Markov chain described in Exercise 5. (a) If the student i s late on the first day of class, what i s the probability that he will be on time on the fourth day of class? (b) If the student is on time on the first day of class, what is the probability that he will be on time on the fourth day of dass?
2.3. Markoy Chains
81
7 . Consider again the conditions of Exercises 5 and 6. Suppose that the
probability that the student will be late on the first day of class is 0.7 and that the probability that he will be on time is 0.3. (a) Determine the probability that he will be late on the second day of class. (b) Determine the probability that he will be on time on the fourth day of class. 8. Suppose that a Markov chain has four states SI ' S2' $3 ' 54 and stationary transition probabilities as specified by the following transition matrix: s,
s,
S,
Ij4 0 1/2
1/4 1 0
Ij4
1/4
0 0 1/2 1/4
$,
1/2 0 0 1/4
will
(a) If the chain is in state $, at a given time n, what is the probability that it be in state S2 at time n + 21 (b) If the chain is in state SI at a given time n. what is the probability that it will be in state s) at time n
+ 31
9. Let X
denote the initial state at time 1 of the Markov chain for which the transition matrix is as specified in Exercise 8, and suppose that the initial probabilities are as fonows: I
Pr( X, = $, ) = 1/8, Pr(X, = S 2 ) Pr( X,
e,
$,
)
=
3/8, Pr( X, = $, )
=
1/4;
=
1/4.
Determine the probabilities that the chain wil1 be in states SI' S2' S3 ' and $4 n for each of the following values of n: (a) n = 2; (b) n = 3; (c) n = 4. 10. Each time that a shopper purchases a tube of toothpaste, he chooses either brand A or brand B. Suppose that the probability is 1/3 that he will choose the same brand chosen on his previous purcJiase, and the probability is 2/3 that he will switch brands. at time
(a) If his first purchase is brand A, what is the probability that his fifth purchase wil1 be brand B1 :+
(b) If his first purchase is brand B, what is the probability that his fifth purchase will be brand B?
11. Suppose that three boys A, B, and C are throwing a ball from one 10 another. Whenever A has the ball he throws it to B with a probability of 0.2 and to C with a probability of 0.8. Whenever B has the ball he throws it to A
82
Conditional Probability
with a probability of 0.6 and to C with a probability of 0.4. Whenever C has the ball he is equally likely to throw it to either A or B. (a) Consider this process to be a Markov chain and construct the transition matrix. (b) If each of the three boys is equally likely 10 have the ball at a certain time n, whjch boy is most likely 10 have the ball at time n + 2? 12. Suppose that a coin is tossed repeatedly in such a way that heads and tails are equally likely to appear on any given toss and that all tosses are independent, with the following exception: Whenever either three heads or three tails have been obtained on three successive tosses, then the outcome of the next toss is always of the opposite type. At time n (n � 3) let the state of this process be specified by the outcomes on tosses n 2, n - 1, and n. Show that this process is a Markov chain with stationary transition probabilities and con struct the transition matrix. 13. There are two boxes A and B, each containing red and green balls. Suppose that box A contains one red baLi and two green balJs and that box B contains eight red baJls and two green bal1s. Consider the following process: One ball is selected at random from box A, and one ball is selected at random from box B. The ball selected from box A is then placed in box B and the bal1 selected from box B is placed in box A . These operations are then repeated indefinitely. Show that the numbers of red balls in box A form a Marlcov chain with stationary transition probabilities, and construct the transition matrix of the Markov chain. -
*2.4. THE GAMBLER'S RUIN PROBLEM Statement of the Problem Suppose that two gamblers A and B are playing a game against each other. Let p be a given number (0 < p < 1), and suppose that on each play of the game, the probability that gambler A will win one dollar from gambler B is p and the probability that gambler B will win one dollar from gambler A is q = 1 p. Suppose also that the initial fortune of gambler A is i dollars and the initial fortune of gambler B is k i dollars, where i and k i are given positive integers. Thus, the total fortune of the two gamblers is k dollars. Finally. suppose that the gamblers continue playing the game until the fortune of one of them has been reduced to 0 dollars. We shall now consider this game from the point of view of gambler A . His initial fortune is i dollars and on each play of the game his fortune will either increase by one dollar with a probability of p or decrease by one dollar with a -
-
-
2.4. The Gamb�r's Ruin Problem
83
probability of q. If p > 1/2, the game is favorable to him; if p < 1 /2, the game is unfavorable to him; and if p = 1/2, the game is equally favorable to both gamblers. The game ends either when the fortune of gambler A reaches k dollars, in which case gambler B will have no money left, or when the fortune of gambler A reaches ° dollars. The problem is to determine the probability that the fortune of gambler A will reach k dolJars before it reaches 0 dollars. Because one of the gamblers will have no money left at the end of the game, th.is problem is called the Gambler 's Ruin problem.
Solullon ol lhe Problem We shall continue to assume that the total fortune of the gamblers A and B is k dollars, and we shall let a i denote the probability that the fortune of gambler A wiU reach k dollars berore it reaches ° dollars, given that his initiaJ fortune is i dollars. If i = 0, then gambler A is ruined; and if i = k, then gambler A has won the game. Therefore, we shall assume that a o = 0 and a k = 1 . We shall now determine the value of aj for i = 1, . . . • k - l . Let A I denote the event that gambler A wins one dollar on the first play of the game; let Bl denote the event that gambler A loses one dollar on the first play of the game; and let W denote the event that the fortune of gambler A ultimately reaches k dollars before it reaches 0 dollars. Then
Pr( W ) - Pr( A I )Pr( WIA,) + Pr( B I )Pr( WIB I ) - p Pr( WIAI) + q Pr(WIBI)· Since the initial fortune of gambler A is i dollars (i = J , . . , k 1), then Pr( W) = a i• Furthermore, if gambler A wins one dollar on the first play of the game, then his fortune becomes i + 1 dollars and the probability Pr( WIA1) that his fortune will ultimately reach k dollars is therefore a i+ I' If A loses one dollar .
-
on the first play of the game, then his fortune becomes i - I dollars and the probability Pr( WIB I ) that his fortune will ultimately reach k dollars is therefore ai _ I' Hence, by Eq. (1),
1 in Eq. (2). Then, since We shall let i = 1, . . . , k obtain the following k 1 equations: -
-
a1 a2 a3
pa 2• = pa3 + qal> = pa4 + qa 2 ,
00
= 0 and
(2)
a k. = 1, we
=
a /( _ 2 = pa k_ 1 + qa k - 3 • a k _ 1 = P + qa"_2 ·
(3)
84
CondlUonal Probability
If the value of a I on the Left side of the i th equation is rewritten in the form po; + qa; and some elementary algebra is performed, then these k - 1 equations can be rewritten as fonows:
(4)
(!L)k-2a ' p
i
By equating the sum of the left sides of these k - 1 equations with the sum of the right sides, we obtain the relation ( 5)
Solution/or a Fair Game. Suppose first that p - q - 1/2. Then qlp 1, and it follows from Eq. (5) that 1 - al (k - l)a,• from which al 11k. In turn, it 21k; it foHows from the second foHows from the first equation in (4) that a2 = equation in (4) that a, - 3/k; and so on. In this way, we obtain the following -
-
-
complete solution when p - q - 1/2:
OJ
=
I
k
for i = 1 , . . . , k - 1 .
(6)
Example I: The Probability of Winning in a Fair Game. Suppose that p = q = 1/2, in which case the game is equally favorable to both gamblers; and suppose that the initial fortune of gambler A is dollars and the initial fortune of gambler B is just 2 dollars. In this example, i = and k = 100. Therefore, it follows from Eq. (6) that there is a probability of that gambler A will win dollars from gambler A . two dollars from gambler B before gambler B wins o
98
98 0.98 98
2.4. The Gambler's Ruin Problem
Solution for un Unfair Game.
Suppose now that p
rewritten in the form
oF q.
Then Eq. (5)
85
can
be
(7)
Hence,
(8) Each of the other values of aj for i = 2, , k 1 can now be determined in turn from the equations in (4). In this way, we obtain the fol1owing complete solution: . . .
OJ =
( ; )' - 1 (: r - I
for i
- I,
. . .
,k
-
-
1.
(9)
Example 1: The Probability of Winning in un Unfavoruble Game. Suppose that p = 0.4 and q = 0. 6, in which case the probability that gambler A will win one
dollar on any given play is smalle·r than the probability that he will lose one dollar. Suppose also that the initial fortune of gambler A is 99 dollars and the initial fortune of gambler B is just one dollar. We shall determine the probability thai gambler A will win one dollar from gambler B before gambler B wins 99 dollars from gambler A. In lhis example. the required probability a, is given by Eq. (9). in which q/p - 3/2, i 99, and k - 100. Therefore, -
a,
-
ur - 1 ( ; )'"" - 1
...:: ::--....>.�
�
3 -
I
-
2 3 .
2
Hence, although the probability that gambler A will win one dollar on any given play is only 0.4, Ihe probability thai he will win one dollar before he loses 99 dollars is approximately 2/3. 0
86
Condilional Probability
EXERCISES
1 . Consider the following three different possible conditions in the gambler's ruin problem: (aJ The initial fortune of gambler A is 2 dollars and the initial fortune of gambler B is 1 dollar. (b) The initial fortune of gambler A is 20 dollars and the initial fortune of gambler B is 10 dollars. (c) The initial fortune of gambler A is 200 dollars and the initial fortune of gambler B is 100 dollars. Suppose that p � q � 1/2. For which of these three conditions is there the of gambler B greatest probability that gambler A will win the initial fortune . before he loses his own initial fortune? 2. Consider again the three different conditions (a), (b), and (c) given in Exercise I, but suppose now that p < q. For which of these three conditions is there the greatest probability that gambler A will win the initial fortune of gambler B before he loses his own initial fortune? 3. Consider again the three different conditions (a), (b), and (c) given in Exercise 1, but suppose now that p > q. For which of these three conditions is there the greatest probability thht gambler A will win the initial fortune of gambler B before he loses his own initial fortune? 4. Suppose that on each play of a certain game. a person is equally likely to win one dollar or lose one dollar. Suppose also that the person's goal is to win two dollars by playing this game. How large an initial fortune must the person have in order for the probability to be at least 0.99 that he will achieve his goal before he loses his initial fortune? 5. Suppose that one each play of a certai.n game, a person will either win onc dollar with probability 2/3 or lose one dollar with probability 1/3. Suppose also that the person's goal is to win two dollars by playing this game. How large an initial fortune must the person have in order for the probability 10 be at least 0.99 that he will achieve his goal before he loses his initial fortune? 6. Suppose that on each play of a certain game, a person will either win one dollar with probability 1/3 or lose one dollar with probability 2/3. Suppose also that the person's goal is to win two dollars by playing this game. Show that no matter how large the person's initial fortune might be, the probability that he will achieve his goal before he los·,. his initial fortune is less than 1/4. 7. Suppose that the probability of a head on any toss of a certain coin is p (0 < p < 1), and suppose that the coin is tossed repeatedly. Let X. denote the total number of heads that have been obtained on the first n tosses, and
2.5. Choosing the Best
87
n
let V,r = 11 Xn denote the total number of tails on the first 11 tosses. Suppose that the losses are stopped as soon as a number is reached such that either X" = Y" + 3 or Yn = X" + 3. Determine the probability that Xn = Y" + 3 when the tosses are stopped. -
8.
Suppose that a certain box A contains 5 bans and another box B contains 10 balls. One of these two boxes is selected at random, and one ball from the selected box is transferred to the other box. If this process of selecting a box at random and transferring one ball from that box to the other box is repeated indefinitely, what is the probability that box A will become empty before box B becomes empty?
*2.5.
CHOOSING THE BEST
Optimal Selection In this section we shall describe a special problem of decision-making that illustrates in a striking fashion how the basic concepts of probability that we have developed can be applied to achieve some surprising1y strong results. Suppose that you are an employer who must hire a new person to fill an available position from II candidates for the position. We shall make the following assumptions about the hiring process. The candidates will appear in a random order to be interviewed by you, and you will interview them sequentiaUy, i.e., one at a time, in the order in which they appear. After interviewing each candidate, you must decide immediately whether or not you want to hire that candidate. If you decide to hire that candidate, the process terminates. 1f you decide not to hire that candidate, then you proceed to interview the next one. The candidate you have just interviewed leaves and, we assume, accepts some other position. Thus, once you have decided not to hire a candidate, you cannot later change your mind. At the beginning of the process you have no information ahout the qualifica tions or abilities of any of the 11 candidates. However, after interviewing a candidate you can rank that candidate in relation to e�ch of the other candidates whom you have previously interviewed. Since the candidates appear in random order, the only information that you can obtain 1/2; (c) p = 1/2?
Rand om Vari ables and D istributi ons
3.1 .
RANDOM VARIABLES AND DISCRETE DISTRIBUTIONS
Dellnllion 01 a Random Variable Consider an experiment for which the sample space is denoted by S. A real· valued function that is defined on the space S is called a random variable. In other words, in a particular experiment a random variable X would be some function that assigns a real number X(s) 10 each possible outcome s E S.
Example J: Tossing a Coin. Consider an experiment in which a coin is tossed ten
times. In this experiment the sample space can be regarded as the set of outcomes consisting of the 210 different sequences of tcn heads and tails that are possible, and the random variable X could be the number of heads obtained on the ten tosses. For each possible sequence s consisting of tcn heads and tails, this random variable would then assign a number X( s) equal to the number of heads in the sequence. Thus, if s is the sequence HHlTTHlTTH , then X(s) � 4. 0
Example
2:
Choosing a Point in the Plane. Suppose that a point in the xy-pJane is
chosen in accordance with som� specified probability distribution. Thus each outcome in the sample space is a point of the form s = ( x, y). If the random variable X is taken as the x-coordinate of the cho'sen point, then X(s) = x for each outcome s. Another possible random variable Y for this experiment is the y-coordinate of the chosen point. A third possible random variable Z is the distance from the origin to the chosen point. These random variables are defined by the functions 1/' 0 Y( s ) � y and Z s x' + y
( )
�
(
' ) -.
97
98
Random Variables and Distributions
Example
Measuring a Person's Height.
3:
Consider an experiment
10
which a
person is selected at random from some population and his height in inches is measured. This height is a random variable.
0
The Distribution 01 a Random Variable When a probability distribution has been specified on the sample space of an experiment, we can determine a probability distribution for the possible values of any random variable
X.
Let
A
denote the probability that the value of
Pr( X E A )
X
will belong to the
is equal to the probability that the outcome
be such that
Xes) E
Pr( X E A) subset A. Then
be any subset of the real line, and let
s
of the experiment wiU
A . In symbols,
Pr( X E A l � Pr ( s : X(sl E A } .
Example 4: Tossing a Coin. tossed ten times, and let X
Consider again an experiment in which a coin is be the number of heads that are obtained. In this
X
experiment the possible vaJues of Example 2 of Sec.
Pr( X � x l
Example a point s and 0 ). For every p.d.f. it must be true that f�oo f(x) 1 . Therefore, in this example, 1o4cxdx 8c 1. =
=
Hence,
1 ,
c
-
=
1/8. It follows that
and Pr( X > 2)
-
,. J,
1
ilxdx
•
-
3
4 '
0
It is often convenient and useful to represent a continuous distribution by a p.d.f. that is positive over an unbounded interval of the real line. For example, in a practical problem, the voltage X in a certain electrical system might be a random variable with a continuous distribu-
Extlmple 2: Unbounded Rant/(}m Van·ubles.
106
Random Variables and Distributions
tion that can be approximately represented by the p.d.f.
f( X ) _
{
O
for x � 0,
I
(I +
x )'
for x > 0.
(6)
It can be verified that the properties (2) and (3) required of all p.d.f.'s are satisfied by f(x). Even though the voltage X may actually be bounded in the real situation, the p.d.f. (6) may provide a good approximation for the distribution of X over its full range of values. For example, suppose that it is known that the maximum possible value of X is 1000, in which case Pr( X > 1000) - O. When the p.d.f. (6) is used, it is found that Pr( X > 1000) - 0.001. If (6) adequately represents the variability of X over the interval (0, 1(00), then it may be more convenient to use the p.d.L (6) than a p.dJ. that is similar to (6) for x � 1000, except for a new normalizing constant, and is 0 for x > 1000. 0
Exampk 3: Unbounded p.d/. 's. Since a value of a p.d.f. is a probability density, rather than a probability, such a value can be larger than 1 . In fact, the values of the following p.d.f. are unbounded in the neighborhood of x - 0: fi x ) -
{;
2 - x - 1/3
for O < x < l . otherwise.
(7)
It can be verified that even though the p.d.f. (7) is unbounded, it satisfies the properties (2) and (3) required of a p.d.f. 0
Mixed Distributions Most distributions that are encountered in practical problems are either discrete or continuous. We shall show, however, that it may sometimes be necessary to consider a distribution that is a mixture of a discrete distribution and a continu ous distribution. Suppose that in the electrical system considered in Example 2, the voltage X is to be measured by a voltmeter which wi1l record the actual value of X if X .:::;; 3 simply record the value 3 if X > 3. If we let Y denote the value recorded but by the voltmeter, then the distribution of Y can be derived as follows: First, Pre y - 3) - Pr(X � 3) - 1/4. Since the single value Y - 3 has prob ability 1/4, it follows that Pr(O < Y < 3) - 3/4. Furthermore, since Y - X for o < X < 3, this probability 3/4 for Y is distributed over the interval 0 < Y < 3 according to the same p.dJ. (6) as that of X over the same interval. Thus. the
will
3.2. Continuous Distributions
107
distribution of Y is specified by the combination of a p.d.L over the interval Y < 3 and a positive probability at the point Y = 3.
o
; 4,
otherwise. (a) Find the value of t such that Pr( X .; (b) Find the value of
t
t ) � 1/4.
such that Pr( X ;, /) � 1/2.
5. Let X be a random variable for which the p.d.f. is as given in Exercise 4. After the value of X has been observed, let Y be the integer closest to X. Find the p.1. of the random variable Y.
108
6.
Random Variables and Distributions
Suppose that a random variable X has a uniform distribution on the interval ( - 2, 8). Find the p.d.f. of X and the value of Pr(O < X < 7).
7. Suppose that the p.d.f. of a random variable X is as follows:
f(x ) �
{
ce -2 x
o
for x > 0, otherwise.
(a) Find the value of the constant c and sketch the p.d.f. (b) Find the value of Pr(l < X < 2). 8. Show that there does not exist any number c such that the following function f(x) would be a p.d.!.: for x > 0, otherwise.
9. Suppose that the p.d.L of a random variable X is as follows: for a < x < 1 , otherwise. (a) Find the value of the constant c and sketch the p.d.f. (b) Find the value of Pr(X '" 1/2).
I O. Show that there does not exist any number c such that the following function f(x) would be a p.d.f.: for 0 < x < 1 , otherwise.
3.3.
THE DISTRIBUTION FUNCTION
Dellnilion and
Basic Properties
The distribution function F of a random variable X is a function defined for each real number x as follows: for
-
00 < x < 00 .
(1)
3.3. The DlatribuHon FuncUon
1 09
It should be emphasized that the distribution function is defined in this way for every random variable X, regardless of whether the distribution of X is discrete, continuous, or mixed. The abbreviation for distribution function is d.L Some authors use the term cumulative distribution junction , instead of distribution function. and use the abbreviation c.dJ. It follows from Eq. (1) that the dJ. of any random variable X is a function F defined on the real line. The value of F(x) at any point x must be a number in the interval 0 � F(x) � 1 because F(x) is the probability of the event { X � x } . Funhermore. it follows from Eq. (1) that the dJ. of any random variable X must have the following three properties:
The junction F(x) is nondecreasing as x increases; that < x2, then F(xt) � F(x2).
Property 1 . Xl
is,
if
Proof.
If Xl < x2, then the occurrence of the event { X � xd also impUes that the event { X � x 2 } has occurred. Hence, Pr{ X � xd � Pr{ X � x2}. 0
An ex.ample of a dJ. is sketched in Fig. 3.5. It is shown in that figure that o � F(x) � l over the entire real line. Also, F(x) is always nondecreasing as x increases, although F(x) is constant over the interval Xl � X � x2 and for x � x4 •
Property 2.
lim
X --
-
00
F(x)
-
0 and
lim
F(x)
-
I.
F(x)
� f' I
- - - - - - - - - - - - - - - - - - -- - - - -
l� Zl
- -- - - - - - - - - - - - -
- - - -
-
- -
- - - - - - - -- - - - - - - - - - - - - -
,
" ---------'!--
I
I I I t I
I
I I I I I
�+_--�----L---------L-�LI ----x XI
o
Figure
3.5
An example of
X2
a
d.f.
X3
X4
110
Random Variables and OlatrlbuUons
Proof. These limiting values follow directly from the fact that Pre X '" x) must approach 0 as x .... - 00 and Pr( X ", x) must approach 1 as x .... co . These relations can in turn be rigorously established by using Exercises 1 1 and 12 of Sec. 1.10. 0
The limiting values specified in Property 2 are indicated in Fig. 3.5. In this figure, the value of F(x) actually becomes 1 at x - x, and then remains 1 for x > x4• Hence, it may be concluded that PriX '" x , ) - 1 and Pr(X > x,) - O. On the other hand, according to the sketch in Fig. 3.5, the value of F(x) approaches 0 as x - - 00 , but does not actually become at any finite point x. Therefore, for any finite value of x, no matter how small, Pri X '" x ) > O. A dJ. need not be continuous. In fact, the value of F(x) may jump at any number of points. In Fig. 3.5, for instance, such jumps or points of discontinuity occur where x - x, and x - x,. For any fixed value x, we shall let F(x-) denote the limit of the values of F(y) as y approaches x from the left, that is, as y approaches x through values smaller than x. In symbols,
0
F(x - )
-
lim F( y ) .
y-x y <x
Similarly, we shall define F(x + ) as the limit of the values of F(y) as y approaches x from the right. Thus,
F(x +) = lim F( y ) . y -x y>x
If the d.f. i s continuous at a given point x, then F(x-) = F(x+) = F(x) at that point.
Property 3. A d.f. is a/ways continuous from the right; that is, F(x ) = F(x+) at every point x. Proof. Let YI > Y2 > . be a sequence of numbers that are decreasing such that lim Yn x. Then the event ( X .. x ) is the intersection of all the events n - oo ( X .. Yn ) for n = 1 , 2, . . Hence, by Exercise 1 2 of Sec. 1.10, . .
-
.
.
F( x ) = Pr( X '" x ) = lim Pr( X "' Yn ) = F( x+). 0 n - ",
It follows from Property 3 that at any point x at which a jump occurs,
111
3.3. The Distribution Function
In Fig. 3.5 this property is illustrated by the fact that at the points of discontinu ity x = Xl and X = x3, the value of F(x,) is taken as 21 and the value of F(X3) is taken as 2 3 ,
Determining Probabilities from the Distribution Function If the d.f. of a random variable X is known, then the probability that X will lie in any specified interval of the real line can be determined from the d.f. We shall derive this probability for four different types of intervals.
Theorem 1 . For any given value x,
(2)
Pr( X > x ) � 1 - F( x). Proof Since Pr(X > x) � 1
-
Pr(X " x), Eq . (2) follows from Eq. (1).
Theorem 2. For any given values Xl and X2 such that Xl < x2• Pri x, < x " x, )
�
F(x,) - F(x,).
Proof Pri x , < x " x,) � Pri X " x,)
rectly from Eq. (1).
0
-
(3) Pr( X " x,). Hence, Eq. (3) follows di
0
For example, if the d.f. of X is as -sketched in Fig. 3.5, it follows from Theorems 1 and that Pr( X > x,) � 1 z, and Pr(x, < X " x, ) z, z,. Also, since F(x) is constant over the interval Xl � X � x 2 , then Pr(xJ < X � x2) �
2
o.
-
-
-
It is important to distinguish carefulJy between the strict inequalities and the weak inequalities that appear in all the preceding relations and also in the next theorem. If there is a jump in F(x ) at a given value x, then the values of Pr( X " x) and Pr(X < x) will be different.
Theorem 3. For any given value x, Pr( X < x )
�
F(x - ) .
Proof. Let Yl < Yl < . .
lim YIl
n� �
=
(4)
be an increasing sequence of numbers such that x. Then it can be shown that
(X< x)
00
-
.
U { X " Yn } · n- l
112
Random Variables and DlsbibuUons
Therefore, it follows from Exercise 11 of Sec. 1.10 that Pr( X < x ) = lim Pr( X .; y") . - 00
=
lim
,,--+ 00
F(y.)
=
F(x- ). 0
For example, for the d.L sketched in Fig. 3.5, Pr( X < x3) = z, and Pr(X < x4 ) = I . Finally, we shall show that for any given value x, Pr( X = x ) is equal to the amount of the jump that occurs in F at the point x. If F is continuous at the point x, that is, if there is no jump in F at x, then Pri X = x) = O.
Theorem 4.
For any giDen value x,
(5) Proof It is always true that Pr( X = x) = Pr( X .; x) - Pr(X < x). The relation (5) follows from the fact that Pr( X .; x) F(x) = F(x + ) at every point and from Theorem 3. 0 =
In Fig. 3.5, for example, Pr( X X I ) ZI - zO ' Pr( X the probability of any other individual value of X is O. =
=
=
x3)
=
Z3 - Z,'
and
The d.1. 01 a Discrete Distribution From the defirtition and properties of a d.r. F(x) it follows that if Pr(a < X < b) = 0 for two numbers a and b (a < b), then F(x) will be constant and horizontal over the interval a < x < b. Furthermore, as we have just seen, at any point x such that Pri X x) > 0, the d.L will jump by the amount Pri X x). Suppose that X has a discrete distribution with the p.f. fix). Together, the properties of a d.r. imply that F( x) must have the following form: F( x) will have a jump of magnitude fix;) at each possible value x; of X; and F(x) will be constant between any two successive jumps. The distribution of a discrete random variable X can be represented equally well by either the p.f. or the d.r. of X.
=
=
The d.1. 01 a Continuous Distribution Consider now a random variable X with a continuous distribution, and let f(x) and F(x) denote the p.d.f. and the d.f., respectively, of X. Since the probability
3.3. The Distribution Function
113
of any individual point x is 0, the dJ. F(x) will have no jumps. Hence, F(x) will be a continuous function over the entire real hne. Furthermore, since F( x ) - Pr( X " x)
-r
(6 )
f(l) dl ,
- 00
it follows that, at any point x at which f(x) is continuous,
Thus, the distribution of a continuous random variable X can be represented equally well by either the p.d.L or the d.L of X.
Example 1: Calculating a p.d./. from a d.f. Suppose iliat in a certain electrical system the voltage X is a random variable for which the dJ. is as follows: F( x ) -
{O
1
x + x
O.
for x < 0, for x �
This function satisfies the three properties requjred of every dJ., as given in Sec. 3.1. Furthermore, since this dJ. is continuous over the entire real line and is differentiable at every point except x = 0, the distribution of X is continuous. Therefore, the p.d.f. of X can be found at any point other than x '" 0 by the relation (7). The value of f(x) at the single point x 0 can be assigned arbitrarily. When the derivative F'(x) is calculated, it is found that I(x) is as given by Eq. (6) of Sec. 3.2. Conversely, if the p.d.L of X is given by Eq. (6) of Sec. 3.2, then by using Eq. (6) of this section it is found that F(x) is as given in this example. 0 -
EXERCISES
Suppose that a random variable X can take 6nly the values - 2, 0, 1, and 4, and that the probabilities of these values are as follows: Pr( X = 2) = 0.4, Pr( X - 0) - 0.1, Pr( X I) - 0.3, and Pr(X - 4) - 0.2. Sketch the d.L of X. 2. Suppose that a coin is tossed repeatedly until a head is obtained for the first time, and let X denote the number of tosses that are required. Sketch the d.L of X.
1.
-
-
1 14
Random Variables and DlstribuUons
3. Suppose that the d.f.
F of
a random variable X is as sketched in Fig. 3.6.
Find each of the following probabilities: (a) Pr( X (d) Pr( X
�
�
(b) Pr( X
1)
(e) Pr(O < X -
y is sketched in Fig.
Hence,
0
J: Detennining a Joint p.dJ. by Geometric Methods. Suppose that a point (X. Y ) is selected al random rrom inside the circle x 2 + y 2 � 9. We shall determine the joint p.d.f. of X and Y. Let S denote the set of points in the circle x 2 + y2 .:s:;: 9. The statement that
Example
y
� ___
Figure
3.10
The subset
x
So where x � y and I( x, y ) > 0 in Example 2.
120
Random Variablea a nd Diatributions
the point ( X, Y ) is selected at random from inside the circle is interpreted to mean that the joint p.d.L of X and Y is constant over S and is 0 outside S. Thus, for ( x , y ) E S, otherwise. We must have
f fl( x , y ) dx dy � c x (area of S ) � 1 . s
o
Since the area of the circle S is 971', the value of the constant c must be 1/(971').
Mixed Bivariate Distributions So far in this section we have discussed bivariate distributions that were either discrete or continuous. Occasionally, a statistician must consider a mixed bi variate distribution in which the distribution of one of the random variables is discrete and the distribution of the other random variable is conti.nuous. The
Y)
then found by summing the values of I(x, y ) for one variable and integrating
probability that the pair ( X,
will belong to a certain region of the xy-plane is
y) Cor the other variable. A more complicated type of mixed distribution can also arise in a practical problem. For example, suppose that X and Y are the times at which two specific
I(x,
components in an electronic system fail. There might be a certain probability p
(0 < p < 1) that the two components will faiL at the same time and a certain probability
1
-
P that they will fail at different times. Furthermore, if they fail at
the same time, then their common failure time x might be distributed according to a certain p.d.f. I(x);
if they
fail at different times x and
might be distributed according to a certain joint p.d.f. The joint distribution of
X and Y
g(x,
y, y).
then these times
in this example is not continuous for the
following reason : For any continuous distrihution the probability that ( X, Y) will lie on the line y must be 0, whereas in this example the value of this probability is p.
x
-
Bivariate Distribution Functions joint distribution junction, or joint d./-.
of two random variabJes X and Y is defined as the function F such that for all values of x and y ( 00 < x < 00 and - 00 < y < oo),
The
-
F(x . y ) � Pr( X
"
x and Y " y ) .
3.4. Bivariate Distributions
121
If the joint dJ. of two arbitrary random variables X and Y is F, then the probability that the pair ( X, Y) will lie in a specified rectangle in the xy-plane can be found from F as fol1ows: For any given numbers a < b and c < d, Pr(a
< X .;; b and c < Y .;; d ) = Pr(a < x .;; b and Y .;; d ) - Pr(a < X .;; b and Y .;; e) = [Pr(X .;; b and Y .;; d) - Pr( X .;; a and Y .;; d ) ] - [ Pr( X .;; b and Y .;; c) Pr( X ';; a and Y .;; e)] = F( b , d ) - F(a, d ) - F(b , c ) + F(a, c )
(1)
-
.
Hence, the probability of the rectangle A sketched in Fig. 3.11 is given by the combination of values of F just derived. It should be noted that two sides of the rectangle are included in the set A and the other two sides are excluded. Thus, ir there are points on the boundary of A that have positive probability, it is important to distinguish between the weak inequalities and the strict inequalities in Eq. (1). The d.f. F1 of just the single random variable X can be derived from the joint dJ. F as follows, for - 00 < x < 00 : FI ( x ) - Pr ( X .;; x ) = lim Pr( X .;; x and Y .;; y ) ,- �
- lim F(x, y). , - 00
Similarly, if F2 denotes the dJ. of Y, then for - 00 < y < 00, F2( y ) = lim F(x, y ) . x - oo
.'
a
Figure
3.11
The probability of a rectangle.
122
Random Variables and Distributions
Other relationships involving the univariate distribution of X, the univariate distribution of Y, and their joint bivariate distribution will be presented in the next section. Finally, if X and Y have a continuous joint distribution with joint p.d.f. J, then the joint d.L ror any values of x and y is
F(X, Y ) �
r r - 00
j(r, s ) dr ds.
- oc
Here, the symbols r and s arc used simply as dummy variables of integration. The joint p.d.f. can be derived from the joint d.f. by using the relation
j( x , y ) � at every point
8'F(x, y) 8x 8y (x, y ) at which this seeo,nd-order derivative exists.
Suppose that X and Y are random variables that can only take values in the intervals 0 � X � 2 and o � Y .s.;; 2. Suppose also that the joint dJ, of X and Y, ror 0 .:::;;;; x � 2 and o � y .s.;; 2, is as rollows:
Example 4: Determining a Jomt p.d.f. /rom a Joint d.f.
1 F(x, y ) � 16 xy(x + y) .
(2)
We shall first determine the dJ. F) or just the random variable X and then detennine the joint p.d.f. j of X and Y. The value of F(x, y) at any point (x, y) in the xy-plane that does not represent a pair of possible values of X and Y can be calcu1ated from (2) and the fact that F(x, y ) � Pr( X ,,;;; x and Y ,,;;; y). Thus, if either x < 0 or y < 0, then F(x, y) � O. If both x > 2 and y > 2, then F(x, y ) � 1 . If 0 ,,;;; x ,,;;; 2 and y > 2, then F(x, y) � F(x, 2) and it follows from Eq. (2) that
1 F( x, y ) � g X ( x Similarly, if 0 ,,;;;
F( x, y ) The function
�
y
,,;;;
+
2) .
2 and
x
> 2, then
1 gY ( y + 2 ) .
F( x, y) has now been specified for every point in the X)I-plane.
3.4. Bivariate Distributions
By letting
y
...,
00 ,
we find that the dJ. of just the random variable
x < O.
for
for 0
Furthermore, for
0<x
0 and the subset So where x + are sketched in Fig. 3.14. Thus,
Pr( X + Y
.;
l
) f ff(x,y)dx dy = f' 1' -x4Xy dydx = !. =
0
SG
0
y
�1
0
Suppose next that X and Y have discrete distributions; that X can take the values 1 , 2, . . , r; that Y can take the va1ues 1, 2, . . . , s; and that
.
Pr( X = i and Y = j) = Pi, Then, for i
=
for i
= l,
. . .
, r and j
= 1,
1, . . . , r, let
= i) Also, for j = 1 , Pr{ X
,
=
L Pi} = Pi+'
)-1 . . . •
s , let
,
Pr( Y = j) = L Pij = P+}, i-I
y
s
...... _x Figure 3.14
The subset .so
where x + y � 1 in Example 3.
. . . •
s.
130
Random Variable. and Distributions
Therefore, X and Y will be independent if and only if the following relationship is satisfied for all values of i and j: (6)
Example 4: Determining Whether Random Variables Are Independent from a Table of Probabilities. Suppose that the joint p.f. of X and Y is specified by the table given in Example 1 of Sec. 3.4. We shall determine whether or not X and Y are independent. In Eq. (6), p_ij is the probability in the ith row and the jth column of the table, p_i+ is the sum of the probabilities in the ith row, and p_+j is the sum of the probabilities in the jth column. It is found from the table that p_11 = 0.1, p_1+ = 0.2, and p_+1 = 0.4. Hence, p_11 ≠ p_1+ p_+1. It follows immediately that X and Y cannot be independent.
Suppose next that the joint p.f. of X and Y is specified by the following table:

                      X
  Y        1      2      3      4    Total
  1      0.06   0.02   0.04   0.08   0.20
  2      0.15   0.05   0.10   0.20   0.50
  3      0.09   0.03   0.06   0.12   0.30
  Total  0.30   0.10   0.20   0.40   1.00
Since it can be found from this table that Eq. (6) is satisfied for all values of i and j, it follows that X and Y are independent.
It should be noted from Example 4 that X and Y will be independent if and only if the rows of the table specifying their joint p.f. are proportional to one another, or equivalently, if and only if the columns of the table are proportional to one another.
Now suppose that X and Y are random variables that have a continuous joint distribution for which the joint p.d.f. is f. Then X and Y will be independent if and only if f can be represented in the following form for -∞ < x < ∞ and -∞ < y < ∞:

f(x, y) = g1(x) g2(y),     (7)

where g1 is a nonnegative function of x alone and g2 is a nonnegative function of y alone. In other words, it is necessary and sufficient that, for all values of x and y, f can be factored into the product of an arbitrary nonnegative function of x and an arbitrary nonnegative function of y. However, it should be emphasized that, just as in Eq. (5), the factorization in Eq. (7) must be satisfied for all values of x and y (-∞ < x < ∞ and -∞ < y < ∞).
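For a joint p.f. given as a table, the criterion of Eq. (6) is easy to check numerically. The sketch below (numpy; the function name is ours, and the array is the second table of Example 4) compares each p_ij with the product of its row and column totals.

```python
import numpy as np

def is_independent(p, tol=1e-12):
    """Check p_ij = p_i+ * p_+j for every cell of a joint probability table."""
    p = np.asarray(p, dtype=float)
    row_totals = p.sum(axis=1, keepdims=True)   # p_i+
    col_totals = p.sum(axis=0, keepdims=True)   # p_+j
    return np.allclose(p, row_totals * col_totals, atol=tol)

# Joint p.f. from the second table of Example 4 (rows: Y = 1, 2, 3; columns: X = 1, 2, 3, 4).
table = [[0.06, 0.02, 0.04, 0.08],
         [0.15, 0.05, 0.10, 0.20],
         [0.09, 0.03, 0.06, 0.12]]

print(is_independent(table))   # True: every cell factors into its marginals
```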
In one important special case, in which f(x, y) = 0 for all values of x and y outside a rectangle having sides parallel to the x-axis and the y-axis, it is not actually necessary to check Eq. (7) for all values of x and y. In order to verify that X and Y are independent in this case, it is sufficient to verify that Eq. (7) is satisfied for all values of x and y inside the rectangle. Specifically, let a, b, c, and d be given values such that -∞ ≤ a < b ≤ ∞ and -∞ ≤ c < d ≤ ∞, and let S be the following rectangle in the xy-plane:

S = {(x, y): a ≤ x ≤ b and c ≤ y ≤ d}.     (8)

It should be noted that any of the endpoints a, b, c, and d can be infinite. Suppose that f(x, y) = 0 for every point (x, y) outside S. Then X and Y will be independent if and only if f can be factored as in Eq. (7) at all points in S.

Example 5: Verifying the Factorization of a Joint p.d.f. Suppose that the joint p.d.f. f of X and Y is as follows:

f(x, y) = k e^{-(x+2y)}  for x ≥ 0 and y ≥ 0,
f(x, y) = 0               otherwise,

where k is a constant. We shall first determine whether X and Y are independent and then determine their marginal p.d.f.'s.
In this example, f(x, y) = 0 outside a rectangle S which has the form specified in Eq. (8) and for which a = 0, b = ∞, c = 0, and d = ∞. Furthermore, at any point inside S, f(x, y) can be factored as in Eq. (7) by letting g1(x) = k e^{-x} and g2(y) = e^{-2y}. Therefore, X and Y are independent.
It follows that, in this case, except for constant factors, g1 and g2 must be the marginal p.d.f.'s of X and Y. The constant k must make the joint p.d.f. integrate to 1, which yields k = 2. Hence, the marginal p.d.f.'s are f1(x) = e^{-x} for x ≥ 0 and f2(y) = 2e^{-2y} for y ≥ 0, with each marginal p.d.f. equal to 0 elsewhere.
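As a quick check of Example 5, the constant k and the two marginal p.d.f.'s can be recovered by direct integration. The sketch below uses sympy; the symbol names are ours, and it simply repeats the factorization argument by computation.

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
f = k * sp.exp(-(x + 2*y))           # joint p.d.f. on x >= 0, y >= 0

# Choose k so that the joint p.d.f. integrates to 1 over the quadrant.
total = sp.integrate(f, (x, 0, sp.oo), (y, 0, sp.oo))
k_val = sp.solve(sp.Eq(total, 1), k)[0]
print(k_val)                          # 2

f = f.subs(k, k_val)
f1 = sp.integrate(f, (y, 0, sp.oo))   # marginal p.d.f. of X
f2 = sp.integrate(f, (x, 0, sp.oo))   # marginal p.d.f. of Y
print(sp.simplify(f1), sp.simplify(f2))   # exp(-x), 2*exp(-2*y)
print(sp.simplify(f - f1 * f2))           # 0, so the joint p.d.f. factors: X and Y are independent
```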
EXERCISES

...

(d) Pr(X + Y ≤ 1).

6. Suppose that the joint p.d.f. of X and Y is as follows:

f(x, y) = ...   for 0 ≤ x ≤ 1 and 0 < y < ∞,
f(x, y) = 0     otherwise.

Are X and Y independent?

7. Suppose that the joint p.d.f. of X and Y is as follows:
f(x, y) = 24xy  for x ≥ 0, y ≥ 0, and x + y ≤ 1,
f(x, y) = 0      otherwise.

Are X and Y independent?

8. Suppose that a point (X, Y) is chosen at random from the rectangle S defined as follows:

S = {(x, y): 0 ≤ x ≤ 2 and 1 ≤ y ≤ 4}.

(a) Determine the joint p.d.f. of X and Y, the marginal p.d.f. of X, and the marginal p.d.f. of Y. (b) Are X and Y independent?
9. Suppose that a point (X, Y) is chosen at random from the circle S defined as follows:

S = {(x, y): x² + y² ≤ 1}.

(a) Determine the joint p.d.f. of X and Y, the marginal p.d.f. of X, and the marginal p.d.f. of Y. (b) Are X and Y independent?

10. Suppose that two persons make an appointment to meet between 5 P.M. and 6 P.M. at a certain location, and they agree that neither person will wait more than 10 minutes for the other person. If they arrive independently at random times between 5 P.M. and 6 P.M., what is the probability that they will meet?

3.6. CONDITIONAL DISTRIBUTIONS
Discrete Conditional Distributions

Suppose that X and Y are two random variables having a discrete joint distribution for which the joint p.f. is f. As before, we shall let f1 and f2 denote the marginal p.f.'s of X and Y, respectively. After the value y of the random variable Y has been observed, the probability that the random variable X will take any particular value x is specified by the following conditional probability:

Pr(X = x | Y = y) = Pr(X = x and Y = y) / Pr(Y = y) = f(x, y) / f2(y).     (1)

In other words, if it is known that Y = y, then the distribution of X will be a discrete distribution for which the probabilities are specified in Eq. (1). This distribution is called the conditional distribution of X given that Y = y. It follows that, for any value of y such that f2(y) > 0, this conditional distribution of X can be represented by a conditional p.f. g1(x | y) defined as follows:

g1(x | y) = f(x, y) / f2(y).     (2)

Similarly, for any value of x such that f1(x) > 0, the conditional p.f. g2(y | x) of Y given that X = x is defined as follows:

g2(y | x) = f(x, y) / f1(x).     (3)

Continuous Conditional Distributions

Suppose now that X and Y have a continuous joint distribution for which the joint p.d.f. is f, and let f1 and f2 again denote the marginal p.d.f.'s of X and Y. Let y be a value such that f2(y) > 0. Then the conditional p.d.f. g1 of X given that Y = y may be defined as follows:

g1(x | y) = f(x, y) / f2(y)   for -∞ < x < ∞.     (4)
For each fixed value of y, the function g1(· | y) will be a p.d.f. for X over the real line, since g1(x | y) ≥ 0 and

∫_{-∞}^{∞} g1(x | y) dx = (1/f2(y)) ∫_{-∞}^{∞} f(x, y) dx = f2(y)/f2(y) = 1.

It should be noted that Eq. (2) and Eq. (4) are identical. However, Eq. (2) was derived as the conditional probability that X = x given that Y = y, whereas Eq. (4) was defined to be the value of the conditional p.d.f. of X given that Y = y. The definition given in Eq. (4) has an interpretation that can be understood by considering Fig. 3.15. The joint p.d.f. f defines a surface over the xy-plane for which the height f(x, y) at any point (x, y) represents the relative likelihood of that point. For instance, if it is known that Y = y0, then the point (x, y) must lie on the line y = y0 in the xy-plane, and the relative likelihood of any point (x, y0) on this line is f(x, y0). Hence, the conditional p.d.f. g1(x | y0) of X should be proportional to f(x, y0). In other words, g1(x | y0) is essentially the same as f(x, y0), but it includes a constant factor 1/[f2(y0)], which is required to make the conditional p.d.f. integrate to unity over all values of x.
Figure 3.15  The conditional p.d.f. g1(x | y0) is proportional to f(x, y0).
Similarly, for any value of x such that f1(x) > 0, the conditional p.d.f. of Y given that X = x is defined as follows:

g2(y | x) = f(x, y) / f1(x)   for -∞ < y < ∞.     (5)

This equation is identical to Eq. (3), which was derived for discrete distributions.
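In the discrete case, Eqs. (2) and (3) amount to renormalizing a row or a column of the joint table. The short sketch below (numpy; the small table is invented purely for illustration, not taken from the text) computes g1(x | y) this way.

```python
import numpy as np

# Hypothetical joint p.f. p[j, i] = Pr(X = i+1 and Y = j+1); rows correspond to values of Y.
p = np.array([[0.10, 0.05, 0.05],
              [0.20, 0.30, 0.30]])

f2 = p.sum(axis=1)            # marginal p.f. of Y
g1 = p / f2[:, None]          # g1(x | y) = f(x, y) / f2(y), one row per value of y

print(g1[0])                  # conditional p.f. of X given Y = 1: [0.5, 0.25, 0.25]
```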
Example 1: Calculating a Conditional p.d.f. from a Joint p.d.f. Suppose that the joint p.d.f. of X and Y is as specified in Example 2 of Sec. 3.4. We shall first determine the conditional p.d.f. of Y given that X = x and then determine some probabilities for Y given the specific value X = 1/2.
The set S for which f(x, y) > 0 was sketched in Fig. 3.9. Furthermore, the marginal p.d.f. f1 was derived in Example 2 of Sec. 3.5 and sketched in Fig. 3.12. It can be seen from Fig. 3.12 that f1(x) > 0 for -1 < x < 1 but not for x = 0. Therefore, for any given value of x such that -1 < x < 0 or 0 < x < 1, the conditional p.d.f. g2(y | x) of Y will be as follows:

g2(y | x) = 2y / (1 - x⁴)  for x² ≤ y ≤ 1,
g2(y | x) = 0               otherwise.

In particular, if it is known that X = 1/2, then

Pr(Y ≥ 3/4 | X = 1/2) = ∫_{3/4}^{1} g2(y | 1/2) dy = 7/15.
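The conditional probability in Example 1 reduces to a one-line integration. The sketch below (sympy; the variable names are ours) evaluates ∫ 2y/(1 - x⁴) dy from 3/4 to 1 at x = 1/2.

```python
import sympy as sp

x, y = sp.symbols('x y')
g2 = 2*y / (1 - x**4)         # conditional p.d.f. of Y given X = x, valid for x**2 <= y <= 1

prob = sp.integrate(g2.subs(x, sp.Rational(1, 2)), (y, sp.Rational(3, 4), 1))
print(prob)                   # 7/15
```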
Construction of the Joint Distribution

Basic Relations. It follows from Eq. (4) that for any value of y such that f2(y) > 0 and for any value of x,

f(x, y) = g1(x | y) f2(y).     (6)

Furthermore, if f2(y0) = 0 for some value y0, then it can be assumed without loss of generality that f(x, y0) = 0 for all values of x. In this case, both sides of Eq. (6) will be 0, and the fact that g1(x | y0) is not defined becomes irrelevant. Hence, Eq. (6) will be satisfied for all values of x and y. Similarly, it follows from Eq. (5) that the joint p.d.f. f(x, y) can also be represented as follows for all values of x and y:

f(x, y) = f1(x) g2(y | x).     (7)
Example 3: Choosing Points from Uniform Distributions. Suppose that a point X is chosen from a uniform distribution on the interval (0, 1), and that after the value X = x has been observed (0 < x < 1), a point Y is then chosen from a uniform distribution on the interval (x, 1). We shall derive the marginal p.d.f. of Y.
Since X has a uniform distribution, the marginal p.d.f. of X is as follows:

f1(x) = 1  for 0 < x < 1,
f1(x) = 0  otherwise.

Similarly, for any given value X = x (0 < x < 1), the conditional distribution of Y is a uniform distribution on the interval (x, 1). Since the length of this interval is 1 - x, the conditional p.d.f. of Y given that X = x will be

g2(y | x) = 1/(1 - x)  for x < y < 1,
g2(y | x) = 0           otherwise.

It follows from Eq. (7) that the joint p.d.f. of X and Y will be

f(x, y) = 1/(1 - x)  for 0 < x < y < 1,     (8)
f(x, y) = 0           otherwise.

Thus, for 0 < y < 1, the value of the marginal p.d.f. f2(y) of Y will be

f2(y) = ∫_{-∞}^{∞} f(x, y) dx = ∫_0^y 1/(1 - x) dx = -log(1 - y).

Furthermore, since Y cannot be outside the interval 0 < y < 1, f2(y) = 0 for y ≤ 0 or y ≥ 1. This marginal p.d.f. f2 is sketched in Fig. 3.16. It is interesting to note that in this example the function f2 is unbounded.

Independent Random Variables

Suppose that X and Y are two random variables having a continuous joint distribution. It is known from Sec. 3.5 that X and Y are independent if and only if their joint p.d.f. f(x, y) can be factored in the following form for -∞ < x < ∞ and -∞ < y < ∞:

f(x, y) = f1(x) f2(y).

It follows from Eq. (6) that X and Y are independent if and only if, for every value of y such that f2(y) > 0 and every value of x,

g1(x | y) = f1(x).     (9)

In other words, X and Y are independent if and only if the conditional p.d.f. of X for each given value of Y is the same as the marginal p.d.f. of X.
Figure 3.16  The marginal p.d.f. of Y in Example 3.
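Example 3 is also easy to confirm by simulation. The sketch below (numpy; all names are ours) draws X uniformly on (0, 1) and then Y uniformly on (X, 1), and checks two consequences of the marginal p.d.f. f2(y) = -log(1 - y): the mean E(Y) = 3/4 and the probability Pr(Y ≤ 1/2) = 1/2 - (1/2) log 2 ≈ 0.153.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.uniform(0.0, 1.0, size=n)     # X ~ Uniform(0, 1)
y = rng.uniform(x, 1.0)               # Y | X = x ~ Uniform(x, 1), broadcast over the array x

print(y.mean())                       # approximately 0.75
print(np.mean(y <= 0.5))              # approximately 0.5 - 0.5*log(2) ≈ 0.153
```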
Similarly, it follows from Eq. (7) that X and Y are independent if and only if, for every value of x such that f1(x) > 0 and every value of y,

g2(y | x) = f2(y).     (10)

If the joint distribution of two random variables X and Y is discrete, then Eq. (9) and Eq. (10) are still necessary and sufficient conditions for X and Y to be independent. In this case, however, the functions f1, f2, g1, and g2 should be interpreted as marginal and conditional p.f.'s rather than p.d.f.'s.
EXERCISES

1. Each student in a certain high school was classified according to his year in school (freshman, sophomore, junior, or senior) and according to the number of times that he had visited a certain museum (never, once, or more than once). The proportions of students in the various classifications are given in the following table:

              Never   Once   More than once
  Freshmen    0.08    0.10        0.04
  Sophomores  0.04    0.10        0.04
  Juniors     0.04    0.20        0.09
  Seniors     0.02    0.15        0.10
(a) If a student selected at random from the high school is a junior, what is the probability that he has never visited the museum? (b) If a student selected at random from the high school has visited the museum three times, what is the probability that he is a senior?

2. Suppose that a point (X,
Y)
is chosen at random from the circle S defined as
� { ( x, y ) : ( x _ I)' + ( y + 2)'
0 1 X 2). 3. Suppose that the joint p. d.!. of two random variables X and Y is as follows:
f( x , y ) �
{ �(x + y2)
for O
� x � 1 and 0 � y � 1 ,
otherwise.
Determine (a) the conditional p.d.L of X for any given value of
(
(b) pr X
� i y �). �
5 . Suppose that the joint p.d.!. of two random variables X and Y is as follows:
f( x, y)
6.
_ {�
for 0 � x � '1T/2 and 0 otherwise.
sin x
� y � 3,
�
Determine (a) the conditional p.d.L of Y for any given value of X, and (b) Pr(l < Y < 2 1 X 0.73). Suppose that the joint p.dJ. of two random variables X and Y is as follows:
f(x, y)
�
{/6(4 �
2x
- y)
for x > 0 , otherwise.
y > 0 , and 2 x + y
when the expression for v is complicated. Then the
Random Variables and Distributions
14 6
g
joint p.d.f.
of X" . . . , X. will be
>0
for
i
=
"
g(x" . . . , x,,)
=
1 , . . . , n,
i -I
Il /( x, )
g(x1, . . . , xlI )
Otherwise,
follows: If X i
as
=
O.
Any probability involving the
II
lifetimes Xl ' "
_
. XII can in principle be
R". For = 1 . . . _ , 11
determined by inlegraling Ihis joinl p.d.L over Ihe appropriate subsel of example. if A is the subset of points (xl" ' " XII) such that Xi and
E7_ l xj < a,
pr
(t
Xi
i- I
where
a
) I ... I( fIx. ) (
0 for
is a given positive number, then
exp -
A
1-1
tl Xi
j-
) .
Then
for
any
values
of
Xl •
. . .
' X II
such
that
3.7. Multivariate Distributions
fo(x " . . . , x. ) > 0, the conditional p.d.!. of X, given that X, is defined as follows:
- x" . . .
, X.
147
-
x.
More generally, suppose that the random vector X - ( X" . . . , X.) is divided into two subveccors Y and Z, where Y is a k-dimensional random vlXlur comprising k of the n random variables in X, and Z is an (n - k )-dimensional random vector comprising the mher n - k random variables in X. Suppose also that the "-dimensional p.d.!. of ( Y, Z ) is 1 and that the marginal (n - k) dimensional p.d.L of Z is /2' Then for any given point z E R ,, - k such that 1,(,) > 0, the conditional k-dimensional p.d.L g, of Y when Z - , is defined as follows:
g ( y I z ) - I( y , , ) 1, ( , ) ,
for y E Rk .
(4)
For example. suppose that the joint p.d.f. of five random variables Xl' · · · . Xs is /. and that the marginal joint p.d.f. of X2 and X4 is 124' If we assume that f24(x2• x4) > 0. then the conditional joint p..d.f. of XI' X). and Xs. given that X2 Xl and X4 = x4• will be =
If the random vectors Y and Z have a discrete joi_nt distribution for which the joint p.!. is I, and iF the marginal p.L of Z is I" then the conditional p.r. g,( I , ) of Y for any given value Z - z also will be specified by Eq. (4). y
Example 2: Determining a Marginal Joint p.d./. Suppose that Xl is a random variable for which the p.d.f. /1 is as follows: for x > 0. otherwise.
•
Suppose, furthermore, that for any given value XI - Xl (Xl > 0). two other random vari_ables X2 and X) are independent and identically distributed and the conditional p.d.f. of each or these variables is as follows: for
I > 0,
otherwise. We shall determine the marginal joint p.d.f. of X2 and X)'
Random Variables and Distributions
148
Since X2 and Xl are i.i.d. for any given value of Xl' their conditional joint p.d.f. when X, 0) is
- x, (x, >
for X2 > a and otherwise.
xJ > 0,
The joint p.d.f. I of X" X" and X, will be positive only at those points (Xl' X2' X l ) such that 0, 0, and X3 > O. It now follows that, at any such point,
Xl >
X2 >
xt, x2, x3 ) -- 11 ( Xl )g23 ( x 2 ' X ) I Xl ) -- Xl'e -" (' +" +" ) . For x, > 0 and x, > 0, the marginal joint p.d.f. I"(x,, x,) of X, and X, can be I(
determined
as follows:
I"(x,, x,) - l�/(x" x" x, ) dx, o
2
(1 +
x,
+
x, )
l
.
From this marginal joint p.d.f. we can evaluate probabiljties involving X2 and X" such as Pr( X, + X, < 4). We have Pr( X, + X, < 4)
X)
6 1 - 1414-. 0,
x
otherwise.
Suppose also that for any given value X - x (x > 0), the n random variables Y" . . . , Y. are i.i.d. and the conditional p.d.f. g of each of them is as follows: g( Y l x ) -
{;
for O < y < x, otherwise.
Determine (a) the marginal joint p.d.f. of Y" . . . , Y. and (b) the conditional p.d.L of X for any given values of Y1. · · · , Yn•
3.8. FUNCTIONS OF A RANDOM VARIABLE

Variable with a Discrete Distribution

Suppose that a random variable X has a discrete distribution for which the p.f. is f, and that another random variable Y = r(X) is defined as a certain function of X. Then the p.f. g of Y can be derived from f in a straightforward manner as follows: For any possible value y of Y,

g(y) = Pr(Y = y) = Pr[r(X) = y] = Σ_{x: r(x)=y} f(x).
Variable with a Continuous Distribution

If a random variable X has a continuous distribution, then the procedure for deriving the probability distribution of any function of X differs from that just given. Suppose that the p.d.f. of X is f and that another random variable is defined as Y = r(X). For any real number y, the d.f. G(y) of Y can be derived as follows:

G(y) = Pr(Y ≤ y) = Pr[r(X) ≤ y] = ∫_{{x: r(x) ≤ y}} f(x) dx.
If the random variable Y also bas a continuous distribution, its p.d.L g can be obtained from the relation
g
( y ) dG(y) dy _
-
This relation is satisfied at any point
y at which G is differentiable.
Example 1: Deriving the p.d/. 0/ X l When X Has a Uni/onn Distribution. Suppose lhal X has a uniform dislribution on lhe inlerval ( - 1, 1), so
for - 1 < x < 1 , otherwise. We shall delermine lhe p.d.f. of lhe random variable Y = Xl Since Y X2, then Y must belong to the interval 0 � y < 1. Thus, for any value of Y such Ihal O < y < 1, the d.f. G(y) of Y is =
- Pr( -ill ", X '" yl/2 )
f ""
= Y
,
j(x) dx = y'/2
_yl/l
y < 1, il follows thai lhe p.d.L g( y) of 1 g( y ) _ dG(y) dy 2yl/2 .
For 0 < I ·
, , ·
• ·
t.
t
; I
,
=
Y
is
--
-
This p.d.L of Y is skelched in Fig. 3.17. II should be nOled lhal allhough Y is simply the square of a random variable with a uniform distribution, the p.d.f. of Y is unbounded in lhe neighborhood of O. 0
y
-
1 52
Random Variables and Distributions
g(y)
-L __ __ __ __ __ __ L__ __ __ __ __ y __ __ __ __ __ __ __ __ o
Figure 3.17
The p.d.f. of Y =
X2 in Example 1 .
Direct Derivation of the Probability Density Function If a random variable
X
has a continuous distribution and if
suppose that
x .; b,
tion of
then it is
c is a constant, for all values of x in some interval and that Pr( a .; X .; b) > O. Then Prey � c) > O. Since the distribu
not necessarily true that a ';
Y = r( X),
r(x) =
Y
will also have a continuous distribution. For example,
c, where
c, this distribution Y in a case like this,
Y assigns positive probability to the value
continuous. In order to derive the distribution of
cannot be the d.f. of
Y must be derived by applying the method just described. For certain functions " Y will be continuous; and it wiH then be possible to derive the p.d.f. of Y directly without first deriving its d.L
however, the distribution of
�
Y r( X) and that the random variable X must lie in a certain b) over which the function r(x) is strictly increasing. This interval, for which Pr( a < X < h) 1 , might be a bounded interval, an unbounded interval, or the entire real line. As x varies over the interval a < x < b, the values of y = rex) wilJ vary over an interval a < y < /3. Furthermore, corresponding to each value of y in the interval a < y < /3, there be a unique value x in the interval a < x < b such that rex) = y. If we denote this value of x as x = s(y), Suppose that
interval ( a ,
-
will
then the function s will be the inverse of the function
values of
y �
r(x)
x
and y in the intervals a
if and only if
x � s(y).
<x XII Yn Xl> Xn . . . •
[Yl] [X,] :
=A
Y"
(7 )
.
:
X"
Thus, each variable Y; is a linear combination of the variables We shall also suppose that the matrix A is nonsingular and, therefore, that the matrix A - I exists. It then follows that the transformation in Eq. (7) is a one·to·one transformation of the entire space RII onto itself. At any point E R", the inverse transformation can be represented by the equation
XI" ' " X". (Yl" ' " Yn)
( 8) J
=
The Jacobian J of the transformation that is defined by Eq. (8) is simply det A I Also, it is known from the theory of determinants that -
.
I
det A - 1 = det A . Therefore, at any point E R", the joint p.d.f. g(y" . . . , Y.) of Yn can be evaluated in the following way: fTrst, for ; = 1, n , the I component Xi in f(xl XII) is replaced with the ith component of the vector
(Yl, " " Y.)
Y
• . . . •
. . . •
• . . . •
Then the result is divided by I det A I .
In vector notation, we let
= x
l�'] x"
and
Y
=
lY'].Y.
1 66
Random Variables and Distributions
Also, we let f(;r:) and Then
g( y) denote the joint p.d.!.'s of
Xl' . . . ' X" and YI, . . . , Y". (9)
The Sum 01 Two Random Variables
Suppose that two random variables XI and X2 have a given joint p.d.f. j and that it is desired to find the p.d.f. of the random variable Y = Xl + X2_ As a convenient device. we shall let Z = Xz. Then the transformation from Xl and X2 to Y and Z will be a one-to-one linear transformation. The inverse transformation is specified by the following equations: XI - Y X, Z.
-
The matrix
Z,
A -I
of coefficients of this transformation is
)
-1 . 1
Therefore,
It follows from Eq. (9) that the joint p.d.1. go of Y and (y, z) will be go( y , z ) = f( y
-
z, z
Z
at any given point
).
There£ore, the marginal p.d.f. g of Y can be obtained from the relation g(y)
- foo00f r y - z, z ) dz -
for
- 00 < y < 00 .
(10)
If we had initially defined Z by the relation Z = XI ' then we would have obtained the following alternative and equivalent form for the p.d.f. g: g(y)
- foo f(z, y - z ) dz - 00
for
-00 < y
y. Therefore, for y � 0,
g(y ) � [o /(Y Also,
-
z)/ ( z ) dz �
g (y) � 0 for y < O.
0
[e-(Y-' )e-' dz 0
Random Variables and DlstrlbutJona
168
The Range

Suppose now that the n random variables X1, ..., Xn form a random sample from a continuous distribution for which the p.d.f. is f and the d.f. is F. As earlier in this section, let the random variables Y1 and Yn be defined as follows:

Y1 = min{X1, ..., Xn}   and   Yn = max{X1, ..., Xn}.     (15)

The random variable W = Yn - Y1 is called the range of the sample. In other words, the range W is the difference between the largest value and the smallest value in the sample. We shall determine the p.d.f. of W.
The joint p.d.f. g(y1, yn) of Y1 and Yn was presented in Eq. (1). If we let Z = Y1, then the transformation from Y1 and Yn to W and Z will be a one-to-one linear transformation. The inverse transformation is specified by the following equations:

y1 = z   and   yn = w + z.

For this transformation, J = 1. Therefore, the joint p.d.f. h(w, z) of W and Z can be obtained by replacing y1 with z and replacing yn with w + z in Eq. (1). The result, for w > 0 and -∞ < z < ∞, is

h(w, z) = n(n - 1)[F(w + z) - F(z)]^{n-2} f(z) f(w + z).     (16)

Otherwise, h(w, z) = 0. The marginal p.d.f. h1(w) of the range W can be obtained from the relation

h1(w) = ∫_{-∞}^{∞} h(w, z) dz.     (17)
Example 3: The Range of a Random Sample from a Uniform Distribution. Suppose that the n variables X1, ..., Xn form a random sample from a uniform distribution on the interval (0, 1). We shall determine the p.d.f. of the range of the sample.
In this example, f(x) = 1 and F(x) = x for 0 < x < 1. By Eq. (16), h(w, z) = n(n - 1)w^{n-2} for w > 0 and 0 < z < 1 - w, and h(w, z) = 0 otherwise. It then follows from Eq. (17) that, for 0 < w < 1,

h1(w) = ∫_0^{1-w} n(n - 1)w^{n-2} dz = n(n - 1)w^{n-2}(1 - w),

and h1(w) = 0 otherwise.
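The density of the range found in Example 3 can be checked by simulation. The sketch below (numpy; names ours) draws many samples of size n from the uniform distribution and compares the average simulated range with the mean of h1(w) = n(n - 1)w^{n-2}(1 - w), which is (n - 1)/(n + 1).

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 200_000

samples = rng.uniform(size=(reps, n))        # each row: a random sample of size n from Uniform(0, 1)
ranges = samples.max(axis=1) - samples.min(axis=1)

print(ranges.mean())                          # simulated E(W)
print((n - 1) / (n + 1))                      # mean implied by h1(w): 0.666...
```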
3.10. THE BOREL-KOLMOGOROV PARADOX

Suppose that X1 and X2 are i.i.d. random variables, and that the p.d.f. of each of them is as follows:

f(x) = e^{-x}  for x > 0,
f(x) = 0        otherwise.     (1)

Then the joint p.d.f. of X1 and X2 is

f(x1, x2) = e^{-(x1+x2)}  for x1 > 0 and x2 > 0,
f(x1, x2) = 0              otherwise.     (2)

Let the random variable Z be defined by the relation

Z = (X2 - 1)/X1.     (3)
(4)
172
Random Variables and Distributions
The Jacobian of this transfonnation is
( 5)
y � Xl l y -{ ) ' Z +Y+ ) e { z g(y, ) �
Since > 0, it follows from Eq. (4) of Sec. 3 .9 and Eq. (2) of this section that the joint p.d.L of Y and Z is for y > 0 and yz > otherwise.
0
The value of this joint p.d.L at the point where Y
-
(6)
1,
�y
g(y,O) � ye-Iy+ l). Therefore, the value of the marginal p.d.f. 82(z) at z =
g,(O) � l�g(y,O)dy � e-1
a
> 0 and Z
�
0 is
is
o
It follows that the conditional p.d.f. of Y given that Z
_ g(y,O) g,(O) y O. _ -
Since Xl that Z =
�
ye
for
y O.
�
0 is
( 7)
>
Y, the conditional p.d.f. (7) is also the conditional p.d.f. of XI given Thus, if we let A denote the event that Z = 0, we may write for
Xl
>
O.
(8)
The derivation leading to Eq.. (8) is straightforward and follows the rules presented for conditional p.d.f.'s. However, there apparently is a much simpler way to derive I A). Since Xl > O. it follows from Eq. (3) that the event that Z = 0 is equivaJent to the event that X2 = 1. Hence, the event A could just as well be described as the event that X2 = 1. From this point of view, the conditional p.d.f. of Xl given A should be the same as the conditional p.d.L of Xl given that X2 = 1. Since Xl and X2 are independent, this conditional p.d.L is simply the marginal p.d.!. of X, as given by Eq. (1). Thus, from thjs point of view,
g\(x\
for
Xl O. >
(9)
Since we have here obtained two different expressions, (8) and (9), (or the same conditional p.d.!. A), we have arrived at the Borel-Kolmogorov paradox. Both expressions are correct, but they have different interpretations. If
gl(x, l
3.10. The Boral-Kolmogoroy Paradox
173
we regard the event A as one point in the sample space of the random variable Z, then Eq. (8) is correct. If we regard A as one point in the sample space of X2, then Eq. (9) is correct. The Borel-Kolmogorov paradox arises because Pr( A) - O. It emphasizes the fact that it is not possible to define a conditional distribution in a meaningful manner for just a single event having probability zero. Thus, a conditional distribution can have meaning only in the context of an entire family of conditional distributions that are defined in a l:unsistt::nt fashiou.
Conditioning on the Equality 01 Two Random Variables We shaU conclude this section with another example which is based on the joint p.d.f. of XI and X, given in (2). Suppose now that we wish to calculate the conditional p.d.f. of XI given that Xl = X2. One way to do this is to let Z - Xl - X, and to determine the conditional p.d.f. of Xl given that Z - O. It can be found in a straightforward fashion that the joint p.d.f. of and Z
Xl
IS for Xl
> 0, Z < Xl'
otherwise. Hence, for Z
-
0,
g(XI , O) - e- 'x,
for Xl
> 0,
and the value of the marginal p.d.L
. g,(O) - J(o
'"
Thus, if we let
(
}
t
�
gl ( X I I B )
(10)
e - 2x, dxl -
g,(z) at
g ( x l , O) g, (O)
_
-
0 is
"2I .
B denote the event that XI _
z
2 -2x,
e
-
X, Of) equivalently, that Z - 0, then
for X l
> O.
(ll)
Another way to approach the same problem is to let W - X,jXl. Then the event B is equivalent to the event that W = 1 . It can again be found in a straightforward fashion that the joint p.d.f. of XI and W is for XI
> 0 and w > 0,
otherwise.
(12
)
Random Variables and Distributions
174
Hence, for
1,
=
W
for X I > 0 and the value of the marginal p,d.f. h 2(w) at w II
2 (1)
�
=
1 is
1
1�
x 1e- 2"' dx1 � 4.
o
Thus, on the basis of this approach we find that
g , ( x1 1 B )
�
h ( x" I) h , (I)
,
for XI > 0 .
� 4x,e- 2.
.:S;
2,
Verify that X has a continuous distribution, and determine the p.d.L of X. 3. Suppose that the random variable X has a continuous distribution with the following p.d.f.: for - co < x < 00 . Determine the value
Xo
such that F(xo) � 0.9, where F(x) is the d.f. of X.
3.1 1 , Supplementary Exercises
175
4. Suppose that Xl and X2 are Li,d. random variables, and that each has a 2 uniform distribution on the interval (0, 1). Evaluate Pr(XI + xi � 1).
5. For any value of p > I, let
I
p.
� c{ p ) � L
X
x-l
Suppose that the random variable X has a discrete distribution with the following p.f.: fi x )
�
1
for x
c { p ) xp
=
1 , 2, . . . .
(al For any fixed positive integer n, determine the probability that X will be divisible by n. (b) Determine the probability that X will be odd. 6. Suppose that XI and X2 are i.i.d. random variables, each of which bas the p.f. f(x) specified in Exercise 5. Determine the probability that X, + X, wil1 be even. 7. Suppose that an electronic system comprises four components, and let Xj denote the time until component j fails to operate ( j � 1, 2, 3, 4). Suppose that Xl' x2 , X3, and X4 are Li.d. random variables, each of which has a continuous distribution with d.f. F(x). Suppose that the system will operate as long as both component 1 and at least one of the other three components operate. Determine the dJ. of the time until the system fails to operate. 8. Suppose that a box contains a large number of tacks, and that the probability X that a particular tack will land with its point up when it is tossed varies from tack to tack in accordance with the foHowing p.rl.f.: f{ x ) �
{�{I - X )
for O < x < l , otherwise.
•
Suppose that a tack is selected at random from the box and this tack is then tossed three times independently. Determine the probability that the tack will land with its point up on all three tosses. 9. Suppose that the radius X of a circle is a random variable having the following p.d.f.: fi x )
�
U
(3x
+
I)
for O < x < 2, otherwise.
Detennine the p.d.L of the area of the circ1e.
Random Variables and Distributions
1 76
10. Suppose that the random
variable X has the following p.d.!.:
for x > 0,
otherwise. Construct a random variable the interval (0,5).
Y
--
r( X) that has a uniform distribution on
11. Suppose that the 12 random variables Xl' . . ' X1 2 are i.i.d. and that each has a uniform distribution on the interval (0, 20). For j - 0, 1, . . . , 19, let Ij denote the interval (j, j + 1). Determine tbe probability that none of the 20 disjoint intervals I will contain more than one of the random variables j .
Xl. · · · ' X12·
12. Suppose tbat tbe joint distribution of X and Y is uniform over a set A in the xy-plane. For which of the following sets A are X and Y independent? (a)
A
circle with a radius of 1 and with its center at the origin.
ci rcl e with a radius of 1 and wi th its center at tbe poin t (3,5). (c) A square with vertices at the four points (1, 1), (I, - I), ( - 1, - I), and
(b)
A
( - 1, 1). (d)
A
rectangle with vertices at the four points (0, 0), (0,3), (1, 3), and (1, 0).
(e)
A
square with vertices at the four points (0, 0), (1, 1), (0, 2), and ( - 1, 1).
13. Suppose that X and Y are independent random variables with the following p d f.'s : .
.
fl ( x )
-
{6
< x < 1, otherwise, for 0
for 0
1 < y < 2'
otherwise. Determine the value of Pr( X > Y). 14. Suppose that on a particular day two persons A and B arrive at a certain store independently of each other. Suppose that A remains in the store for 15 minutes and B remains in the store for 1 0 minutes. I f the time of arrival of each person has a uniform distribution over the hour between 9:00 A.M. and 10:00 A.M., what is the probability that A and B will be in the store at the same time?
3.11.
Supplementary Exercises
177
15. Suppose that X and Y have the following joint p.d.f.: f{x, y ) =
{�{ x + y )
for 0 < x < y < 1, otherwise.
Detennin e (a) Pr(X < 1/2); (b) the marginal p.d.L of X; and (c) the conditional p.d.L of Y given that X = x. 16. Suppos� thaL X and Y are random variables. The marginal p.d.l. of X is
{
for 0 < x < 1, otherwise.
Also, the conditional p.d.f. of Y given that X = x is {
g Y l x) =
3y 2 ox'
for O < y < x, otherwise.
Determine (a) the marginal p.d.!. of Y and (b) the conditional p.d.L of X given that Y = y. ) 7. Suppose that the joint distribution of X and Y is uniform over the region in the xy-plane bounded by the four lines x = - 1, x - I, Y = x + I, and y=x I. Determine (a) Pre XY > 0) and (b) the conditional p.d.L of Y given that X = x. 18. Suppose that the random variables X, Y, and Z have the following joint p.d.L: -
f(x, y , z ) =
{�
for 0 < x < y < z < I , otherwise.
Determine the univariate marginal p.dJ.'s of X, Y, and Z. 19. Suppose that Ihe random variables X. Y, and Z have the following joint p.d.L: •
f( x, y, z ) =
{�
for 0 < x < y < 1 and 0 < z < 1 , otherwise.
Evaluate Pr(3X > Y 1 1 < 42 < 2). 20. Suppose that X and Y are i.i.d. random variables, and that each has the following p.d.L: for x > 0 , otherwise.
178
Random Variables and DistribYtion8
Also, let U =
X/( X + Y) and V = X + Y.
(a) Determine the joint p.d.f. of U and V. (b) Are U and V independent?
21. Suppose that the random variables /(x, Y ) = Also, let U
{�
XY
=
X and Y have the following joint p.d.L:
for 0 " x " Y " 1 ,
otherwise.
X/Y and V
=
Y.
(a) Determine the joint p.d.f. of U and V.
(b) Are
X and Y independent?
(c) Are U and V independent?
22. Suppose that
X" " " X, are i.i.d. random variables, each having the follow
ing d.f.: for x � 0, for x > O.
YI = mini Xl ' " ' ' Xn } and Yn maxi Xl " . . , Xn } · Determine the condi tional p.d.f. of Yl given lhat Y, Y.. Let
=
=
23. Suppose that
X" X" and Xl form a random sample of three observations
from a distribution having the following p.d.L: for O < x < l , otherwise. Determine the p.d.f. of the range of the sample.
Expectation
4.1. THE EXPECTATION OF A RANDOM VARIABLE

Expectation for a Discrete Distribution

Suppose that a random variable X has a discrete distribution for which the p.f. is f. The expectation of X, denoted by E(X), is a number defined as follows:

E(X) = Σ_x x f(x).     (1)
Example 1: Calculating an Expectation from a p.f. Suppose that a random variable X can have only the four different values -2, 0, 1, and 4, and that Pr(X = -2) = 0.1, Pr(X = 0) = 0.4, Pr(X = 1) = 0.3, and Pr(X = 4) = 0.2. Then

E(X) = -2(0.1) + 0(0.4) + 1(0.3) + 4(0.2) = 0.9.
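As a trivial numerical illustration of Eq. (1) applied to Example 1 (numpy; the array names are ours):

```python
import numpy as np

values = np.array([-2.0, 0.0, 1.0, 4.0])
probs = np.array([0.1, 0.4, 0.3, 0.2])

print(values @ probs)   # E(X) = 0.9
```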
It can be seen from Example 1 that the expectation E(X) is not necessarily equal to one of the possible values of X. If X can have only a finite number of different values, as in Example 1, then there will be only a finite number of terms in the summation in Eq. (1). However, if there is an infinite sequence of different possible values of X, then the summation in Eq. (1) consists of an infinite series of terms. Such a series may not converge for a given p.f. It is said that the expectation E(X) exists if and only if the summation in Eq. (1) is absolutely convergent, that is, if and only if

Σ_x |x| f(x) < ∞.     (2)

In other words, if relation (2) is satisfied, then E(X) exists and its value is given by Eq. (1). If relation (2) is not satisfied, then E(X) does not exist.
Expectation lor a Continuous Distribution If a random variable X has a continuous distribution for which the p.d.f. is f, then the expectation E( X) is defined as follows: E ( X ) � r xf( x ) dx.
( 3)
- 00
Example 2: Calculating an Expectation from a p.d./. Suppose that the p.d.!. of a random variable X with a continuous distribution is
x < I, otherwise. for O
1I x l'f(x} dx " Pr{lXI " I } + E(I XI ' ) . By hypothesis, E ( I X I ' ) < 00 . It therefore follows that E(IXV) < 00 . A similar proof holds for a discrete or a more general type of distribution. 0 In particular, it follows from Theorem mean of X and the variance of X exist.
I that if £( X')
1'(0)
Suppose that the m.gJ. of a random variable some interval around the point then exists at the point in Eq.
(1)
musl
the other hand, if
some values of I and mighl
t - 0 .nd
(1)
t=
0;
t = O.
=
X exists
for an values of
t
in
It can he shown that the derivative �'(l)
and that at
t
0
the derivative of the
expectation
must be equal to the expectation of the derivative. Thus,
>1" (0) -
[!!...dl E(e'X)]
1-0
-
E[(!!... e'X) ]. dt
r- O
But
( !!...dl e'X )
_
,-0
( Xe'X) (-0 - X.
It follows lbal
>1" (0 ) In
- E( X).
other words, the derivative o f the rn.g.L � ( t ) More generally, if lbe m.g.f. H t ) of
around the point exist
( =
(k "'" 1 , 2, . . . ).
X exists
0, then it can be shown
at 1 = 0 is
the mean of X.
for all values of
t in an inlerval
that all moments E( Xk ) of X must
Furthermore, it can be shown that it is possible to
ale H I ) any arbitrary number of limes al lhe poinl
I - O.
For n
differenti
- 1 , 2, . . . ,
lhe
202
Expectation
nth derivative 01-(')(0) at I � 0 will satisfy the following relation:
Thus, >/-'(0) � E( X), >/-"(0) - E( X'), >/- ''' (0) - E( X'), and so on.
Example 2: Calculating an m.g.f. Suppose that X is a random variable for which the p.d.f. is as follows: forx > O, otherwise. We shall determine the m.g.f. of X and also Var( X). For any rea] number t.
=
100o
l e{r- )x dx .
The final integral in this equation will be finite if and only if I < 1 . Therefore, >/- ( 1 ) exists only for I < 1 . For any such value of I, 1 01- ( 1 ) � -- . 1 -I Since >/-(1) is finite for all values of I in an interval around the point I � 0, all moments of X exist. The first two derivatives of Vt are >/-'(1) =
1 (1 - I)'
---;0 -
Therefore, E(X) � >/-'(0)
and �
>/-"( t ) �
2 (1
-
t)"
1 and E(X') � 01-"(0) � 2. I t now follows that
Var( X) � 01-"(0) - [ 01-'(0)] ' = 1 .
0
Properties of Moment Generating Functions
We shall now present three basic theorems pertaining to moment generating functions.
4.4. Moments
203
Theorem 2. Let X be a random variable for which the m . g.f. is 1/11; let aX + b. where a and b are given constams; and let .p2 denote the m . g./.
Y Y.
of
=
Then for any value of t such that �l(at) exists,
",,(I)
�
eb'",,(a/) .
. Proof. By the defini,ion of an m.g.f.,
Example 3: Calculating the mg.J. 0/ a Linear Function. Suppose that the distribu tion of X is as specified in Example 2. Then the m.g.f. of X for t < 1 is
"' , ( I ) If Y � 3
�
-
1
1
1
-
'
2 X, then
the m.g.f.
of Y
will
exist for
t > - 1/2 and will have the
value
The next theorem shows that the m.gJ. of the sum of any number of ) independent random variables has a very simple form. Because of this property,
the m.g.f. is an important tool in the study of such sums.
Suppose that Xl' . . . , X" are tl independent random variables; and for i = 1, . , n, let 0/; denote the m.g.J. of Xi ' Let = Xl + . + Xn• and let the m . g.f. of be denoled by ",. Then for any value of t such Ihat "',(t) exists for i = 1, . . . , n,
Theorem
r
" .' ,,
t
. .
i'
Y
. .
Y
.
, ,
3.
"
"' ( I )
�
n ",, ( t ) .
1-'
(2)
Expectation
204
Proof By definition, >/- ( 1 )
�
E ( e 'Y )
�
E [ e " X,+ . . . + X" ]
�
E ( n e ' X' ) . ,- ,
Since the random variables Xl" , . , X" are independent, it follows from Theorem 4 in Sec. 4.2 that
Hence,
>/- ( 1 )
"
�
n >/-, ( 1) .
i-I
0
The Moment Generating Function for the Binomial D;�'tribution. Suppose that a random variable X has a binomial distribution with parameters n an d p. In Sees. 4.2 and 4.3, the mean and the variance of X were determined by representing X as the sum of n independent random variables Xl> . . . , X". In this representation, the distribution of each variable Xi is as follows: and
Pr( Xi � l ) � p
Pr( Xi =
0)
=
q= 1
-
p.
We shaH now use this representation to determine the m.g.f. of X = Xl + . . . + X". Since each of the random variables XI " ' " X" has the same distribution, the m.g.[ of each variable will be the same. For i "'" 1, . . . , n, the m.g.f. of Xi IS >/-i ( l ) = =
E ( e ' X, ) = ( e ' ) Pr( Xi = 1) + ( l ) Pr( Xi - 0)
per
+ q.
It follows from Theorem 3 that the m .g.f. of X in this case is >/- ( 1 ) =
( pe'
+
q) ".
( 3)
Uniqueness of Moment Generating Functions. We shall now state one morc important property of the m.g.f. The proof of this property is beyond the scope of this book and is omitted.
Theorem 4. If the m .g.f.'s of two random variables XI and Xz are identical for all values of t in an interval around the point I = 0, l/ten the probability distributions of Xl and Xz must be identical.
i
•
4.4. Moments
205
Suppose that XI and X2 are Xl has a binomial distribution with parame ters n 1 and p; and that X2 has a binomial distribution with parameters " 2 and p. Here, the value of p must be the same for both distributions, but it is not necessarily true that n 1 = n2, We shall determine the distribution of Xl + X2• t If of i denotes the m.g.r. o[ Xi for ; = 1, 2, then it follows [rom Eq. (3) that
�
The Additive Property 0/ the Binomial Distribution.
� independent random variables; that
� �: � �i,.-
of/ ( t )
..
t �
{
,�.
j
U
=
( pc' +
q ) "' .
the m.g.r. o[ XI + X, is denoted by of, then by Theorem 3,
"' ( 1 )
=
( pe l +
q ) "l + "!,
It can be seen from Eq. (3) that this function ", is the m.gJ. of a binomial i distribution with parameters n1 + n 2 and p. Hence, by Theorem 4, the distribu � tion of XI + X2 must be that binomial distribution. Thus, we have established the :,. following result: If Xl and X2 are independent random variables and if Xi has a binomial distribution with parameters n i and p (i := 1. 2), then XI + X2 has a binomial distribution WiTh parameters n l + 11 2 and p.
EXERCISES I. Suppose that X is a random variable [or which £( X) £(X3)
=
1, £ ( X 2 ) = 2, and
5 . Find the value o[ the third central moment of X. 2. If X has a uniform distribution on the interval (a, b), what is the value of the fifth centra] moment of X? 3. Suppose that X is any random variable such that £( X') exists. (a) Show that £ ( X ' ) ;. [ £ ( X)]'. (b) Show that £ ( X ' ) = [ £ ( X ) ) ' if and only if there exists a constant c such that Pr( X = c) = 1 . ff;,If: Var( X ) ;. O. 2 4. Suppose that X is a random variable with mean IJ. and variance 0 , and that the fourth moment of X exists. Show that =
5. Suppose that X has a uniform distribution on the interval (a, b). Determine the m.g.r. of X. 6. Suppose that X is a random variable for which the m.gJ. is as follows: of ( t ) - ! (3e' + e- ' )
4
for - 00 < t < 00 .
Find the mean and the variance of X.
206
Expectallon
7. Suppose that X is a random variable for which the m.g.f. is as follows:
for - 00 < I < 00 . 8.
Find the mean and the variance of X. Let X be a random variable with mean p. and variance 02• and let '" I (t) denote the m.g.f. of X for - 00 < I < 00 . Let c be a given.positive constant, and let Y be a random variable for which the m.g.f. is for - 00 < I < 00 .
Find expressions for the mean and the variance of Y in terms of the mean and the variance of X. 9. Suppose that the random variables X and Y are i.i.d. and that the m.gJ. of each is for - 00 < I < 00 . Find the m.gJ. of Z � 2 X - 3Y + 4. 10. Suppose that X is a random variable for which the m.g.f. is as follows:
"' ( 1 )
=
2 4 2 1 s et + S e 1 + S e 81
for - 00 < ( < 00 .
Find the probability distribution of X. Hint: It is a simple discrete distribu tion. 11. Suppose that X is a random variable for which the m.g.f. is as follows: 1
(4 + e' + e ) .p ( t ) � 6 _
'
for - 00 < I < 00 .
Find the probability distribution of
4.5.
X.
THE MEAN AND THE MEDIAN
The Median
It was mentioned in Sec. 4.1 that the mean of a probability distribution on the real Hne will be at the center of gravity of that distribution. In this sense, the mean of a distribution can be regarded as the center of the distribution. There is another point on the line which might also be regarded as the center of the
4.5. The Mean and the Median
207
distribution. This is the point which divides the total probability into two equal parts, that is, the point mo such that the probability to the left of "' 0 is 1/2 and the probability to the right of mo is also 1/2. This point is called the median of the distribution. I t should be noted, however, that for some discrete distributions there will not be any point at which the total probability is divided into two parts that are exactly equal. Moreover, for other distributions, which may be either discrete or continuous, there will be more than one such point. Therefore, the formal definition of a median, which will now be given, must be general enough to include these possibilities.
For any random variable X, a median of the distribution of X is defined as a point m such that Pr( X .; m ) " 1/2 alld Pr( X " m ) " 1/2.
Tn other words, a median is a point m that satisfies the fol1owing two requirements: First, if m is included with the values of X to the left of m, then Pr{ X .; m ) " Pr{ X > m). Second, i f m is included with the values of X to the right of m, then Pr{ X " m } ,. Pr{X < m ) . According t o this definition, every distribution must have at least one median, and for some distributions every point in some interval can be a median. If there is a point m such that Pr( X < m ) = Pr( X > m), that is, if the point m does actually divide the total probability into two equal parts, then m will of course be a median of the distribution of X.
Example 1: The Median of a Di.\·crete Distribution. Suppose that X has the
following discrete distribution: 2) � 0.2,
Pr{ X � 1 ) � 0.1,
Pr{ X
Pr{ X � 3) � 0.3,
Pr{ X � 4}
�
� 0.4.
•
The value 3 is a median of this distribution because Pr( X � 3)
0.6, which is greater than 1/2, and Pr( X ;;, 3) � 0.7, which is also greater than 1/2. Further more� 3 is the unique median of this distribution. 0 =
Example 2: A Discrete D6·tributio/J for Which the Median Is Not Unique. Suppose
that X has the following discrete distribution: Pr{X � 1) � 0 . 1 ,
Pr{ X � 3) � 0.3,
Pr{ X � 2) � 0.4, Pr{X �
4) � 0.2.
208
, ,
ExpeetaUon
Here, Pr( X " 2) - 1/2 and Pr(X '" 3) - 1/2. Therefore, every value of m in the closed interval 2 � m :s;;; 3 will he a median of this distribution. 0
Example
3: The
Median oj a Continuous Distribution. Suppose that X has a
continuous distribution for which the p.d.f. is as folJows: for O < x < 1 , Olherwise.
The unique median of this distribution will he the number m such that
1m o
4x 3 dx =
I
/' m
4x 3 dx = - . 2
This number is m = 1/21/4,
Example
4: A
0
Continuous Diflribution for Which the Median Is Not Unique.
Suppose that X has a continuous distribution for which the p.d.f. is as follows:
{�
I/2
f(x)
-
for 0 � x
.:s;; 1 , x :s;; 3,
for 2.5 :s;; otherwise.
Here, for any value of m in the closed interval 1 � m :s;; 2.5, Pr( X :::::; m) =
Pr( X
� m) =
1/2. Therefore, every value of m in the interval 1 � m .:S; 2.5 is
median of this distribution.
0
a:
ComparIson of the Mean and the MedIan Either the mean or the median of a distribution can be used to represent the "average" value of a variable. Some important properties of the mean have already been described in this chapter. and several more properties will be given Later in the book. However. for many purposes the median is a more useful measure of the average than is the mean. As mentioned in Sec. 4.1, the mean of
a
distribution can be made very large by removing a small but positive amount of probability from any part of the distribution and assigning this amount
to a
sufficiently large value of x. On the other hand, the median may be unaffected by
a similar change in probabilities. If any amount of probability is removed from a value of x larger than the median and assigned to an arbitrarily large value of x, the median of the new distribution will b e (he same as that of the original distribution. For example. suppose that the mean annual income among the families in a certain community is $30,000. It is possible that only a few famihes in the
�.
4.5. The Mean and tI1e Median
209
t
i
community actually have an income as large as $30,000, but those few families r' have incomes that are very much larger than $30,000. If, however, the median ; annual income among the families is $30,()(X), then at least one-half of the families must have incomes of $30,000 or more. We shall now consider two specific problems in which the value of a random l variable X must be predicted. [n the first problem, the optimal prediction that r can be made is the mean. In the second problem, the optiinal prediction is the ;. median. ,
r
, , \
�. �
Minimizing the Mean Squared Error Suppose that X is a random variable with mean p. and variance 0 2 Suppose also • that the value of X is to be observed in some experiment, hut this value must be predicted before the observation can he made. One basis for making the predic tion is to select some number d for which the expected value of the square of the error X - d will be a minimum. The number E[(X - d)'] is called the mean squared error of the prediction d. The abbreviation M.S.E. is often used for the term mean squared error. We shaH now determine the number d for which the M.S.E. is minimized. For any value of d,
]
E [( X - d ) ' - E ( X' - 2dX + d') -
E(X') - 2dp. + d'.
(1)
The final expression in Eq. (1) is simply a quadratic function of d. By elementary differentiation it will be found that the minimum value of this function is attained when d = J1.. Hence, in order to minimize the M.S.E., the predicted value of X should be its mean p.. Furthermore, when this prediction is used. the M.S.E. is . simply E[(X p.)'] - 0 ' -
Minimizing the Mean Absolute Error Another possible basis for predicting the value of a random variable X is to choose some number d for which E(lX dl) will be a minimum. The number E(IX dl) is called the mean absolUle error of the prediction. We shall use the abbreviation M.A.E. for the lenn mean absolute error. We shall now show that the M.A.E. is minimized when the chosen value of d is a median of the distribution of X. -
-
21 0
Expectation
Theorem 1 . Let m be a median 0/ the distribution of X, and lei d be any other number. Then £ ( l x - m l ) ,;; £ ( I X - d l ) .
(2)
Furthermore, there will be equality in the relation (2) if and only if d is also a median of the distribution of X.
Proof. For (.;Onvt::nit:nt.;t:, wt:: .shall assumt: that
X has a oontinuous distribution for 1S similar.
which the p.d.L is f. The proof for any other type of distribution Suppose first that
d > m. Then
f-�� ( I x - d l - Ix - ml)f(x) dx fm ( d - m)f(x) dx + t( d + m - 2x)f(x) dx + fd� ( m - d )f(x) dx
E(IX - d l ) - E(IX - ml) =
=
- 00
n l
fm { d - m )f( x ) dx + fd( m - d )f(x) dx + f �( m - d )f(x) dx
'"
- 00
d
( d - m} [Pr(X ';; m ) - Pr( X > m )] .
=
Since
iii
m
is a median of the distribution of
Pr( X
,;; m ) '" 1/2 ",
Pr( X
X,
it follows that
( 4)
> m).
The final difference in the relation
(3)
(3)
is therefore nonnegative. Hence,
(5)
E ( I X - d l ) '" E ( l X - m l ) ·
Furthermore, there can be equality in the relation (5) only if the inequalities inequalities will be equalities only if
relations (3) and (4) are actually equalities. The proof for any value of
Example 5: Predicting the probability is values:
1/6
d
Value
d
such that
d<m
is similar.
0/ a Discrete Random Variable.
X
0
Suppose that the
will take each of the following six
shan determine the prediction for which the M.S.E. is
minimum and the prediction for wh.ich the M.A.E. is minimum. In this example,
E ( X)
=
in
careful analysis shows that these
is also a median of the distribution of X.
that a random variable
0, 1, 2, 3, 5, 7. We
A
�(O + 1 + 2 + 3 + 5 + 7)
�
3.
Therefore the M.S.E. will be minimized by the unique value
d = ·3.
4.5.
The Mean and the Median
211
'
.
j , )
1 ,
" Also, any number m in the closed interval 2 � m � 3 is a median of the , " given distribution. Therefore. the M.A.E. will be minimized by any value of d such that 2 � d � 3 and only by such a value of d. 0 J
EXERCISES 1. Suppose that a random variable X has a discrete distribution for which the p.r. is as follows: for x - 1 , 2, 3 , 4, 5 , 6 , otherwise.
I
I
Determine all the medians of this distribution.
2. Suppose that a random variable X has a continuous distribution for which the p.d.L is as follows: for x > 0, otherwise.
3.
Determine all the medians of this distribution.
[0 a small community consisting of 153 families, the number of families that have k children (k - 0, 1 , 2, . . . ) is given in the following table:
Number of children
Number of families
1
2
42
3
27 23
o
4 or more
21 40
Determine the mean and the median of the number of children per family. 4. Suppose that an observed value of X is equally likely to come from a continuous distribution for which the p.d.L is / or from one for which the ' p.d.f. is g. Suppose that f(x) > 0 for 0 < x < I and f(x) - 0 otherwise, and suppose also that g(x) > 0 for 2 < x < 4 and g( x ) - 0 otherwise. Dc termine: (a) the mean and (b) the median of the distribution of X.
S. Suppose that a random variable X has a continuous distribution for which the p.d.f. f is as follows: for 0 < x < 1 , otherwise. 2 Determine the value of d that minimizes (a) £[(X - d ) [ and (b) £(1 X -
dl).
6. Suppose that a person's score X on a certain examination will be a number in the interval 0 � X � 1 and that X has a continuous distribution for which
Expectation
212
0.4 0.2
0.2
0.1
-3
Figure
4.5
2
Probabilities for Exercise 8.
Rood
the p.d.L is as follows: for 0
.s;; x
� 1,
otherwise.
7.
Determine the prediction of X which minimizes Ca) the M.S.E. and Cb) the M.A.E. Suppose that the distribution of a random variable X is symmetric with respect to the point x 0 and that E(X') < 00 . Show that E[( X d ) ' J is minimized by the value d O. Suppose that a fire can occur at any one of five points along a road. These points are located at - 3, - 1, 0, 1 , and 2 in Fig. 4.5. Suppose also that the probability that each of these points will be the location of the next fire that occurs along the road is as specified in Fig. 4.5. (a) At what point along the road should a fire engine wait in order to minimize the expected value of the square of the distance that it must travel to the next fire? (b) Where should the fire engine wait to minimize the expected value of the distance that it must travel to the next fire? If n houses are located at various points along a straight road, at what point along the road should a store be located in order to mjnimize the sum of the distances from the n houses to the store? Let X be a random variable having a binomial distribution wilh parameters 7 and p 1/4, and let Y be a random variable having a binomial n distribution with parameters n 5 and p 1/2. Which of these two ran dom variables can be predicted with the smaller M.S.E.? Consider a coin for which the probability of obtaining a head on any given toss is 0.3. Suppose that the coin is to be tossed 15 times, and let X denote the number of heads that will be obtained. (a) What prediction of X has the smallest M.S.E.? Cb) What prediction of X has the smallest M.A.E.? -
8.
9.
10.
-
-
-
-
-
11.
-
4.6.
4.6.
Covariance and Correlation
213
COVARIANCE AND CORRELATION
Covariance When we consider the joint distribution of two random variables, the means, the medians, and the variances of the variables provide useful information about their marginal distributions. However, tbese values do not provide any information about the relationship between the two variables or about their tendency to vary together rather than independently. In this section and the next one, we shall introduce new quantities which enable us to measure the association between two random variables, to determine the variance of the sum of any number of dependent random variables, and to predict the value of one random variable by using the observed value of some other related variable. Let X and Y be random variables having a specified joint distribution; and let E(X) = /Lx, E(Y) = /Ly, Var( X) = 0;, and Var( Y) = o � . The covariance of X and Y, which is denoted hy Cov( X, Y), is defined as Follows: Cov( X,
Y) = E [ ( X
-
I' x ) ( Y
-
I' y
)]
( 1)
.
It can be shown (see Exercise 1 at the end of this section) that if oi < 00 and . � < 00 , Iben Ihe expeclalion in Eq. (1) will exist and Cov( X, Y) will he finite. However, the value of Cov( X, Y ) can be positive, negative, or zero.
Conelatlon If 0 < 0;' < 00 and 0 < o � < 00, then the denoted by p ( X, y), is defined as [ollows:
,
t[,we �
tc
p ( X, Y ) =
Cov( X, Y )
�.
.
X
and Y,
•
which
.15
(2)
In order to determi�e the range of possible values or the correlation p ( X, Y), shall need the [ollowlng result:
The Schwarz Inequality.
For any random variables U and V,
[E(UV ) J ' .. E ( U ' ) E ( V ' ) .
fifProof. H (U ' ) FPr(uV 0) 1 .
1:.
° XO y
correlation of
E
=
=
( 3)
lhen Pr(U = 0) - 1. There[ore, it musl also he Irue that Hence, E(UV) = 0 and the relation (3) is salisfied. Similarly, i[ =
0,
214
Expect8Uon
£( V') = 0, then the relation (3) will be satisfied. We can assume, therefore, that £(U') > 0 and £(V') > O. Moreover, if either E(U') or E(V') is infinite, then
the right side of the relation (3) will be infinite. In this case, the relation (3) will surely be satisfied. Thus, we can assume that ° < £(U2) < 00 and 0 < E(V2) < 00. For any numbers a and b,
and
0 " E [ ( a U + bV) ']
=
a'£(U') + b'£ ( V ' ) + 2ab£(UV)
0 " £ [ ( a U - bV)' ] = a'£(U') + b'£ ( V ' ) - 2ab£(UV).
If we let (4) that
(4)
( 5)
a = [£(V')]1/2 and b = [£(U')]I/', then it follows from the relation
£ ( UV ) "" _ [E(U') £ ( V , ) ] 1/2 It also follows from the relation (5) that
£ ( UV ) " [ £( U ' ) E ( V,)]I/' .
These two relations together imply that the relation (3) is satisfied. 0 If we let U = X inequality that
- P. x and V = Y - p. y, then it follows from the Schwarz.
' [Cov( X, Y ) ] " a; a � .
In turn, it follows from Eq. - ) " p ( X, Y ) " 1 .
(2) that [ p ( X, Y)]' " ) or, equivalently, that
It is said that X and Y are positively correlated if p ( X, Y ) > 0, that X and Y are negatively correlated if p( X, Y) < 0, and that ;< and Y are ullcorrelated if p ( X, Y ) = O. It can be seen [rom Eq. (2) that Cov(X, Y ) and p ( X, Y ) must have the same si&n; that is, both 3re positive, or both are negative, or both are zero.
Properties of Covariance and Correlation

We shall now present five theorems pertaining to the basic properties of covariance and correlation. The first theorem provides an alternative method for calculating the value of Cov(X, Y).
Theorem 1. For any random variables X and Y such that σ_X² < ∞ and σ_Y² < ∞,

Cov(X, Y) = E(XY) − E(X)E(Y).   (6)

Proof. It follows from Eq. (1) that

Cov(X, Y) = E(XY − μ_X Y − μ_Y X + μ_X μ_Y)
          = E(XY) − μ_X E(Y) − μ_Y E(X) + μ_X μ_Y.

Since E(X) = μ_X and E(Y) = μ_Y, Eq. (6) is obtained. □

The next result shows that independent random variables must be uncorrelated.
Theorem 2. If X and Y are independent random variables with 0 < σ_X² < ∞ and 0 < σ_Y² < ∞, then

Cov(X, Y) = ρ(X, Y) = 0.

Proof. If X and Y are independent, then E(XY) = E(X)E(Y). Therefore, by Eq. (6), Cov(X, Y) = 0. Also, it follows that ρ(X, Y) = 0. □

The converse of Theorem 2 is not true as a general rule. Two dependent random variables can be uncorrelated. Indeed, even though Y is an explicit function of X, it is possible that ρ(X, Y) = 0, as in the following example.

Example 1: Dependent but Uncorrelated Random Variables. Suppose that the random variable X can take only the three values −1, 0, and 1, and that each of these three values has the same probability. Also, let the random variable Y be defined by the relation Y = X². We shall show that X and Y are dependent but uncorrelated.

In this example X and Y are clearly dependent, since the value of Y is completely determined by the value of X. However, E(XY) = E(X³) = 0 and E(X) = 0. Since E(XY) = E(X)E(Y), it follows from Theorem 1 that Cov(X, Y) = 0 and that X and Y are uncorrelated. □
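The calculation in Example 1 can be checked by direct enumeration of the joint distribution. The following short Python sketch (an illustrative aside; the function and variable names are ours, not from the text) computes Cov(X, Y) exactly for X uniform on {−1, 0, 1} and Y = X², and also confirms that X and Y are dependent.

from fractions import Fraction

# Joint distribution of Example 1: X is -1, 0, 1 with equal probability, and Y = X**2.
pmf = {(-1, 1): Fraction(1, 3), (0, 0): Fraction(1, 3), (1, 1): Fraction(1, 3)}

def expect(f):
    """Expectation of f(X, Y) under the joint p.f."""
    return sum(p * f(x, y) for (x, y), p in pmf.items())

mean_x = expect(lambda x, y: x)          # 0
mean_y = expect(lambda x, y: y)          # 2/3
cov = expect(lambda x, y: (x - mean_x) * (y - mean_y))
print("Cov(X, Y) =", cov)                # 0, so X and Y are uncorrelated

# Dependence: Pr(X = 0, Y = 0) differs from Pr(X = 0) * Pr(Y = 0).
p_x0 = sum(p for (x, y), p in pmf.items() if x == 0)
p_y0 = sum(p for (x, y), p in pmf.items() if y == 0)
print(pmf[(0, 0)], "versus", p_x0 * p_y0)   # 1/3 versus 1/9, so X and Y are dependent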
The next result shows that if Y is a linear function of X, then X and Y must be correlated and, in fact, |ρ(X, Y)| = 1.
Theorem 3. Suppose that X is a random variable such that 0 < σ_X² < ∞, and that Y = aX + b for some constants a and b, where a ≠ 0. If a > 0, then ρ(X, Y) = 1. If a < 0, then ρ(X, Y) = −1.

Proof. If Y = aX + b, then μ_Y = aμ_X + b and Y − μ_Y = a(X − μ_X). Therefore, by Eq. (1),

Cov(X, Y) = aE[(X − μ_X)²] = aσ_X².

Since σ_Y = |a|σ_X, the theorem follows from Eq. (2). □
The value of ρ(X, Y) provides a measure of the extent to which two random variables X and Y are linearly related. If the joint distribution of X and Y is relatively concentrated around a straight line in the xy-plane that has a positive slope, then ρ(X, Y) will typically be close to 1. If the joint distribution is relatively concentrated around a straight line that has a negative slope, then ρ(X, Y) will typically be close to −1. We shall not discuss these concepts further here, but we shall consider them again when the bivariate normal distribution is introduced and studied in Sec. 5.12.

We shall now determine the variance of the sum of random variables that are not necessarily independent.

Theorem 4. If X and Y are random variables such that Var(X) < ∞ and Var(Y) < ∞, then

Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y).   (7)

Proof. Since E(X + Y) = μ_X + μ_Y, then

Var(X + Y) = E[(X − μ_X)² + (Y − μ_Y)² + 2(X − μ_X)(Y − μ_Y)]
           = Var(X) + Var(Y) + 2 Cov(X, Y). □
For any constants a and b, it can be shown that Cov(aX, bY) = ab Cov(X, Y) (see Exercise 4 at the end of this section). It therefore follows from Theorem 4 that

Var(aX + bY + c) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y).   (8)

In particular,

Var(X − Y) = Var(X) + Var(Y) − 2 Cov(X, Y).   (9)

Theorem 4 can also be extended easily to the variance of the sum of n random variables, as follows:
Theorem 5. If X₁, ..., X_n are random variables such that Var(X_i) < ∞ for i = 1, ..., n, then

Var(Σ_{i=1}^n X_i) = Σ_{i=1}^n Var(X_i) + 2 ΣΣ_{i<j} Cov(X_i, X_j).   (10)

Proof. For any random variable Y, Cov(Y, Y) = Var(Y). Therefore, by using the result in Exercise 7 at the end of this section, we can obtain the following relation:

Var(Σ_{i=1}^n X_i) = Σ_{i=1}^n Σ_{j=1}^n Cov(X_i, X_j).

We shall separate the final sum in this relation into two sums: (i) the sum of those terms for which i = j and (ii) the sum of those terms for which i ≠ j. Then, if we use the fact that Cov(X_i, X_j) = Cov(X_j, X_i), we obtain the relation

Var(Σ_{i=1}^n X_i) = Σ_{i=1}^n Var(X_i) + 2 ΣΣ_{i<j} Cov(X_i, X_j). □
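Eq. (10) is easy to verify numerically. The sketch below (an illustration we add here, not part of the text; the variables are arbitrary dependent quantities) compares the empirical variance of a sum with the right side of the formula, using the divide-by-N convention for both variances and covariances so that the identity holds exactly.

import numpy as np

rng = np.random.default_rng(0)
# Three dependent random variables observed over many simulated outcomes.
z = rng.normal(size=100_000)
x1 = z + rng.normal(size=z.size)
x2 = 2.0 * z + rng.normal(size=z.size)
x3 = rng.normal(size=z.size)
xs = [x1, x2, x3]

lhs = np.var(sum(xs))                                   # Var(X1 + X2 + X3)
rhs = sum(np.var(x) for x in xs) + 2 * sum(
    np.cov(xs[i], xs[j], bias=True)[0, 1]               # Cov(Xi, Xj), i < j
    for i in range(3) for j in range(i + 1, 3)
)
print(lhs, rhs)   # the two numbers agree up to floating-point rounding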
The Chebyshev Inequality. Let X be a random variable for which Var(X) exists. Then for any given number t > 0,

Pr(|X − E(X)| ≥ t) ≤ Var(X) / t².   (2)

Proof. Let Y = [X − E(X)]². Then Pr(Y ≥ 0) = 1 and E(Y) = Var(X). By applying the Markov inequality to Y, we obtain the following result:

Pr(|X − E(X)| ≥ t) = Pr(Y ≥ t²) ≤ Var(X) / t². □
It can be seen from this proof that the Chebyshev inequality is simply a special case of the Markov inequality. Therefore, the comments that were given following the proof of the Markov inequality can be applied as well to the Chebyshev inequality. Because of their generality, these inequalities are very useful. For example, if Var(X) = σ² and we let t = 3σ, then the Chebyshev inequality yields the result that

Pr(|X − E(X)| ≥ 3σ) ≤ 1/9.

In words, this result states that the probability that any given random variable will differ from its mean by more than 3 standard deviations cannot exceed 1/9. This probability will actually be much smaller than 1/9 for many of the random variables and distributions that will be discussed in this book. The Chebyshev inequality is useful because of the fact that this probability must be 1/9 or less for every distribution. It can also be shown (see Exercise 3 at the end of this section) that the upper bound in (2) is sharp in the sense that it cannot be made any smaller and still hold for all distributions.
Properties of the Sample Mean

Suppose that the random variables X₁, ..., X_n form a random sample of size n from some distribution for which the mean is μ and the variance is σ². In other words, suppose that the random variables X₁, ..., X_n are i.i.d. and that each has mean μ and variance σ². We shall let X̄_n represent the arithmetic average of the n observations in the sample. Thus,

X̄_n = (1/n)(X₁ + ··· + X_n).

This random variable X̄_n is called the sample mean.

The mean and the variance of X̄_n can easily be computed. It follows directly from the definition of X̄_n that

E(X̄_n) = (1/n) Σ_{i=1}^n E(X_i) = (1/n) · nμ = μ.

Furthermore, since the variables X₁, ..., X_n are independent,

Var(X̄_n) = (1/n²) Var(Σ_{i=1}^n X_i) = (1/n²) Σ_{i=1}^n Var(X_i) = (1/n²) · nσ² = σ²/n.

In words, the mean of X̄_n is equal to the mean of the distribution from which the random sample was drawn, but the variance of X̄_n is only 1/n times the variance of that distribution. It follows that the probability distribution of X̄_n will be more concentrated around the mean value μ than was the original distribution. In other words, the sample mean X̄_n is more likely to be close to μ than is the value of just a single observation X_i from the given distribution.

These statements can be made more precise by applying the Chebyshev inequality to X̄_n. Since E(X̄_n) = μ and Var(X̄_n) = σ²/n, it follows from the relation (2) that for any given number t > 0,

Pr(|X̄_n − μ| ≥ t) ≤ σ² / (n t²).   (3)
Example 1: Determining the Required Number 0/ Obseroat;ons.
Pr(l X .
-
=
1' 1
;.
4
1) " n
.
Since n must be chosen so that Pr( 1 X" - p. 1 < 1) � 0.99, it follows that n must be chosen so that 4/11 .:s;; 0.01. Hence, it is required that n � 400. 0 It should be emphasized that the use of the Chebyshev inequality in Example 1 guarantees that a sample for which " = 400 will be large enough to meet the specified probability requirements, regardless of the particular type of distribution
4.8. The Sampte Mean
229
from which the sample is to be taken. If further information about this distribu tion is avai.lable, then it can often be shown that a smaJler value for 11 will be sufficient. This property is illustrated in the next example. 2: Tossing a Coin. Suppose that a fair coin is to be tossed n Limes independently. For i = 1, . . . , 11, let Xi = 1 if a head is obtained on the jth loss and let Xi 0 if a tail is obtained on the jth toss. Then the sample mean Xn will simply be equal to the proportion of heads that are obtained on the 11 tosses. We shall determine the number of times the coin must be tossed in order to make Pr(O.4 � Xn � 0.6) � 0.7. We shall determine this number in two ways: first, by using the Chebyshev inequality; second, by using the exact probabilities for the binomial distribution of the total number of heads. Let T = E7_ 1 XI denote the total number of heads that are obtained when 11 losses are made. Then T has a binomial distribution with parameters 11 and 1/2. Therefore, it follows from Eq. (3) of Sec. 4.2 that £(T) = nl2 and it p follows from Eq. (2) of Sec. 4.3 that Var(T) - n14. Since X" = Tin, we can obtain the following relation from the Chebyshev inequality:
Example
=
-
Pr(O.4 .; X" .; 0.6)
=
Pr(O.4n .; T .; 0.6n)
�1 -
Ii
-'--;0
-
4(0.ln) '
-
1 -
2S /I
Hence, if n � 84, this probability will be at least 0.7, as required. However, from the tables of (he binomial distribution given at the end of this book, it is found that for 11 :ZII 15, Pr(O.4 .; Xn '; 0.6)
-
Pr(6 .; T .; 9) = 0.70.
Hence, 15 tosses would actually be sufficient to satisfy the specified probability requirement. 0 The Law 01 Large Numbers
The discussion in Example 2 indicates that the Chebyshev inequality may not be a practical tool for determining the appropriate sample size in a particular �. problem, because it may specify a much greater sample size than is actually �· needed [or the particular distribution from which the sample is bei.ng taken. 1. However, the Chebyshev inequality is a valuable theoretical tool, and it will be , used here to prove an important result known as the law oj large numbers. �
4.8.
The Sample Mean
231
It can also be shown that Eq. (4) is satisfied if the distribution from which the random sample is laken has a finite mean po but an infinite variance. However, the proof for this case is beyond the scope of this book. 0
Since X" converges to po in probability, it follows that there is high probability that X" will be close to po if the sample size n is large. Hence, if a large ranuom sample is taken from a distribution for which the mean is unknown, then the arithmetic average of the values in the sample will usually be a close estimate of the unknown mean. This topic will be discussed again in Chapter 5 after the central limit theorem has been derived. It will then be possible to present a more precise probability distribution for the difference between X" and p..
Weak Laws and Strong Laws. There are Olher concepts of the convergence of a sequence of random variables, in addition to tbe concept of convergence in probability that has been presented here. For example, it is said thal a sequence Zl' 22, . converges to a cOllslant b with prohability 1 if .
.
( lim Z"
Pr
1 1 - 00
-
b
)
-
l.
A careful investigation of the concept of convergence with probability 1 is beyond the scope of this book. It can be shown that if a sequence 21, Z2> ' . ' con verges to b with probability I, then the sequence will also converge to b in probability. For this reason, convergence with probability 1 is often called strong convergence, whereas convergence in probability is called weak convergence. In order to emphasize the distinction between these two concepts of convergence, the result which ht:re has been caUed simply the law of large numbers is often called the weak law of large numbers. The strong law of large numbers can then be stated as follows: If X" is the sample mean of a random sample of size n from a distribution wirh mean p" then
( lim X,. 11) = 1 .
Pr
,, - <Xl
=
The proof of this result will not be given here.
EXERCISES 1 . Suppose that X is a random variable for which
Pr( X ;' 0) - 1 and Pr( X ;' ]0) - 1/5. Prove that E( Xl ;.
2.
232
�'·
l
Expectation
2. Suppose that X is a random variable for which E( X) � 10, Pr( X '" 7) � 0.2, and Pr( X ;. 13) � 0.3. Prove that Var( X) ;. 9/2. 3. Let X be a random variable for which E(X) = p. and Var( X) = a 2• Con struct a probability distribution for X such that
Pr(l X - 1'1 " 30)
�
1/9.
4. How large a random sample must be taken from a given distribution in order for the probability to be at least 0.99 that the sample mean will be within 2 standard deviations of the mean of the distribution? 5. Suppose that Xl' . . . ' X" form a random sample of size n from a distribution for which the mean is 6.5 and the variance is 4. Determine how large the value of n muSt be in order for the following relation to be satisfied:
Pr(6 '" X. '" 7)
..
0.8.
6. Suppose that X is a random variable for which E(X) � =
{34. Prove that
Pr(IX - 1'1
..
t ) '"
I'
and £[(X - I')'J
/3r: .
7. Suppose that 30 percent of the items in a large manufactured lot are of poor quality. Suppose also that a random sample of n items is to be taken from the lot. and let Qn denote the proportion of the items in the sample that are of poor quality. Find a value of n such that Pr(0.2 '" Q. '" 0.4) ;. 0.75 by using (a) the Chebyshev inequality and (b) the tables of the binomial distribution at the end of this book. 8. Let ZI' Z2 be a sequence of random variables; and suppose that, for n = 1,2 , the distribution of Zft is as follows: • . . .
• . . .
Pr(Z. � n 2 ) � ;I
and
Pr( Z"
�
0)
-
1
1
- -.
n
Show that lim
£ ( Z.) - 00 bUl plim Z. � O.
9. It is said that a sequence of random variables
constal1t b in the quadratic mean if lim
" .... 00
£[(Z" - b )']
� O.
ZI' 22
• . . .
converges to a (5)
f Er
4.9. Utility
233
Show that Eq. (5) is satisfied if and only if lim
n-oo
£( Z,,)
=
b and
lim Var(Z,,) - O.
n - oo
Hint: Use Exercise 4 of Sec. 4.3. 10. Prove that if a sequence Zt, 22. " , converges to a constant b in the quadratic mean, then the sequence also converges to b in probability. 11. Let X" be the sample mean of a random sample of size n from a dislributi�n for which the mean is JL and the variance is 0 :2, where 0 2 < co. Show thal Xn converges to JA. in the quadratic mean as n - 00. 12. Let 21. 22 be a sequence of random variables; and suppose that for n = 2, 3, . . . , the distribution of 211 is as follows: • . • .
(
Pr Z"
�
1.) - 1 - ..!... n
n
2
and Pr(Z" = n )
1
= ,
n
.
(a) Does there exist a constant c to wh.ich the sequence converges m probability? (b) Does there exist a constant c to which the sequence converges in the quadratic mean?
*4.9. UTILITY

Utility Functions

Consider a gamble in which one of the three following outcomes will occur: A person will win 100 dollars with probability 1/5, will win 0 dollars with probability 2/5, or will lose 40 dollars with probability 2/5. The expected gain from this gamble is

(1/5)(100) + (2/5)(0) + (2/5)(−40) = 4.
In general, any gamble of this type, in which the possible gains or losses are different amounts of money, can be represented as a random variable X with a specified probability distribution. It is to be understood that a positive value of X represents an actual monetary gain to the person from the gamble, and that a negative value of X represents a loss (which is considered a negative gain). The expected gain from a gamble X is then simply E(X).

Although two different gambles X and Y may have the same expected gain, a person who is forced to accept one of the two gambles would typically prefer one of them to the other. For example, consider two gambles X and Y for which the gains have the following probability distributions:

Pr(X = 500) = Pr(X = −400) = 1/2   (1)

and

Pr(Y = 60) = Pr(Y = 50) = Pr(Y = 40) = 1/3.   (2)

Here, E(X) = E(Y) = 50. However, a person who does not desire to risk losing 400 dollars for the chance of winning 500 dollars would typically prefer Y, which yields a certain gain of at least 40 dollars.

The theory of utility was developed during the 1930's and 1940's to describe a person's preference among gambles like those just described. According to that theory, a person will prefer a gamble X for which the expectation of a certain function U(X) is a maximum, rather than a gamble for which simply the expected gain E(X) is a maximum. The function U is called the person's utility function. Roughly speaking, a person's utility function is a function that assigns to each possible amount x (−∞ < x < ∞) a number U(x) representing the actual worth to the person of gaining the amount x.

For example, suppose that a person's utility function is U and he must choose between the gambles X and Y defined by Eqs. (1) and (2). Then

E[U(X)] = (1/2)U(500) + (1/2)U(−400)   (3)

and

E[U(Y)] = (1/3)U(60) + (1/3)U(50) + (1/3)U(40).   (4)

The person would prefer the gamble for which the expected utility of the gain, as specified by Eq. (3) or Eq. (4), is larger.

Formally, a person's utility function is defined as a function U having the following property: When the person must choose between any two gambles X and Y, he will prefer X to Y if E[U(X)] > E[U(Y)], and will be indifferent between X and Y if E[U(X)] = E[U(Y)]. When the person is choosing from more than two gambles, he will choose a gamble X for which E[U(X)] is a maximum. We shall not consider here the problem of determining the conditions that must be satisfied by a person's preferences among all possible gambles in order to be certain that these preferences can be represented by a utility function. This problem and other aspects of the theory of utility are discussed by DeGroot (1970).
Examples of Utility Functions

Since it is reasonable to assume that every person prefers a larger gain to a smaller gain, we shall assume that every utility function U(x) is an increasing function of the gain x. However, the shape of the function U(x) will vary from person to person and will depend on each person's willingness to risk losses of various amounts in attempting to increase his gains. For example, consider two gambles X and Y for which the gains have the following probability distributions:

Pr(X = −3) = 0.5,   Pr(X = 2.5) = 0.4,   Pr(X = 6) = 0.1   (5)

and

Pr(Y = −2) = 0.3,   Pr(Y = 1) = 0.4,   Pr(Y = 3) = 0.3.   (6)

We shall assume that a person must choose one of the following three decisions: (i) accept gamble X, (ii) accept gamble Y, or (iii) do not accept either gamble. We shall now determine the decision that a person would choose for three different utility functions.
Example 1: Linear Utility Function. Suppose that U(x) = ax + b for some constants a and b, where a > 0. In this case, for any gamble X, E[U(X)] = aE(X) + b. Hence, for any two gambles X and Y, E[U(X)] > E[U(Y)] if and only if E(X) > E(Y). In other words, a person who has a linear utility function will always choose a gamble for which the expected gain is a maximum.

When the gambles X and Y are defined by Eqs. (5) and (6),

E(X) = (0.5)(−3) + (0.4)(2.5) + (0.1)(6) = 0.1

and

E(Y) = (0.3)(−2) + (0.4)(1) + (0.3)(3) = 0.7.

Furthermore, since the gain from not accepting either of these gambles is 0, the expected gain from choosing not to accept either gamble is clearly 0. Since E(Y) > E(X) > 0, it follows that a person who has a linear utility function would choose to accept gamble Y. If gamble Y were not available, then the person would prefer to accept gamble X rather than not to gamble at all. □
Example 2: Cubic Utility Function. Suppose that a person's utility function is U(x) = x³ for −∞ < x < ∞. Then for the gambles defined by Eqs. (5) and (6),

E[U(X)] = (0.5)(−3)³ + (0.4)(2.5)³ + (0.1)(6)³ = 14.35

and

E[U(Y)] = (0.3)(−2)³ + (0.4)(1)³ + (0.3)(3)³ = 6.1.

Furthermore, the utility of not accepting either gamble is U(0) = 0³ = 0. Since E[U(X)] > E[U(Y)] > 0, it follows that the person would choose to accept gamble X. If gamble X were not available, the person would prefer to accept gamble Y rather than not to gamble at all. □

Example 3: Logarithmic Utility Function. Suppose that a person's utility function is U(x) = log(x + 4) for x > −4. Since lim_{x→−4} log(x + 4) = −∞, a person who has this utility function cannot choose a gamble in which there is any possibility of his gain being −4 or less. For the gambles X and Y defined by Eqs. (5) and (6),

E[U(X)] = (0.5)(log 1) + (0.4)(log 6.5) + (0.1)(log 10) = 0.9790

and

E[U(Y)] = (0.3)(log 2) + (0.4)(log 5) + (0.3)(log 7) = 1.4355.

Furthermore, the utility of not accepting either gamble is U(0) = log 4 = 1.3863. Since E[U(Y)] > U(0) > E[U(X)], it follows that the person would choose to accept gamble Y. If gamble Y were not available, the person would prefer not to gamble at all rather than to accept gamble X. □
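Examples 1 through 3 can be reproduced in a few lines. The sketch below (an illustration we add here, not part of the text) evaluates the expected utility of gamble X, of gamble Y, and of not gambling under each of the three utility functions, and reports the preferred decision.

from math import log

# Gambles from Eqs. (5) and (6): lists of (gain, probability) pairs.
gamble_x = [(-3.0, 0.5), (2.5, 0.4), (6.0, 0.1)]
gamble_y = [(-2.0, 0.3), (1.0, 0.4), (3.0, 0.3)]

utilities = {
    "linear U(x) = x": lambda x: x,
    "cubic U(x) = x**3": lambda x: x ** 3,
    "logarithmic U(x) = log(x + 4)": lambda x: log(x + 4.0),
}

def expected_utility(gamble, u):
    return sum(p * u(x) for x, p in gamble)

for name, u in utilities.items():
    choices = {"accept X": expected_utility(gamble_x, u),
               "accept Y": expected_utility(gamble_y, u),
               "do not gamble": u(0.0)}
    best = max(choices, key=choices.get)
    print(name, {k: round(v, 4) for k, v in choices.items()}, "->", best)
# Agrees with Examples 1-3: Y is chosen under the linear and logarithmic utilities,
# X under the cubic utility.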
Selling a Lottery Ticket

Suppose that a person has a lottery ticket from which he will receive a random gain of X dollars, where X has a specified probability distribution. We shall determine the number of dollars for which the person would be willing to sell this lottery ticket.

Let U denote the person's utility function. Then the expected utility of his gain from the lottery ticket is E[U(X)]. If he sells the lottery ticket for x₀ dollars, then his gain is x₀ dollars and the utility of this gain is U(x₀). The person will prefer to accept x₀ dollars as a certain gain rather than to accept the random gain X from the lottery ticket if and only if U(x₀) > E[U(X)]. Hence, the person would be willing to sell the lottery ticket for any amount x₀ such that U(x₀) > E[U(X)]. If U(x₀) = E[U(X)], he would be indifferent between selling the lottery ticket and accepting the random gain X.

Example 4: Quadratic Utility Function. Suppose that U(x) = x² for x ≥ 0, and suppose that the person has a lottery ticket from which he will win either 36 dollars with probability 1/4 or 0 dollars with probability 3/4. For how many dollars x₀ would he be willing to sell this lottery ticket?

The expected utility of the gain from the lottery ticket is

E[U(X)] = (1/4)U(36) + (3/4)U(0) = (1/4)(1296) = 324.

Therefore, the person would be willing to sell the lottery ticket for any amount x₀ such that U(x₀) = x₀² > 324. Hence, x₀ > 18. In other words, although the expected gain from the lottery ticket in this example is only 9 dollars, the person would not sell the ticket for less than 18 dollars. □
5:
Square-Root Utility Function. Suppose now that U( x )
=
Xl /2 for
0, and consider again the lottery ticket described in Example 4. The expected
utility of the gain from the lottery ticket in this case is E [ U( X )] � �
! U(36) + ! U(O) 1 3 4 (6) + 4(0) � 1 .5 .
Therefore. the person would be willing to sell the lottery ticket for any amount X o such that U( x.) xl(' > 1.5. Hence, Xo > 2.25. In other words, although the expected gain from the lottery ticket in this ex:ample is 9 dollars, the person would be willing to sell the ticket for as little as 2.25 dollars. 0 �
EXERCISES 1. Consider three gambles X, Y, and Z for whicp the probability distributions of the gains are as follows:
Pr( X � 5 )
�
Pr( X = 25)
Pr(Y � 10) - Pr( Y -
= 1/2,
20) � 1/2,
Pr( Z = 1 5 ) � I . Suppose that a person's utility function has the form Vex) = x 2 for x > Which of the three gambles would he prefer?
o.
238
2.
ExpectaUon
person whose utility function is
3.
Xl/Z
Determjne which of the three gambles in Exercise ] would be preferred by a
U(x) =
O.
1 would be preferred by a U(x) = ax + b, where a and b
Determine which of the three gambles in Exercise person whose utility function has the form are constants
4.
for x >
(a
>
0).
Consider a utility function
U
for which
U(O) � 0
and
U(IOO) - I.
Suppose
that a person who has this utility function is indifferent between accepting a 0 dol1ars with probability 1/3 or 100
gamble from which his gain will be dollars with probability
U(50)?
the value of
5.
2/3 and accepting 50 dollars as a sure thing. What is
Consider a utility function
U
for which
U(O)
�
5, U(I) - 8,
and
U(2) - 10.
Suppose that a person who has this utility function is indifferent between two gambles X and
Y for which
the probability distributions of the gains are
follows: Pr( X Pr( Y -
-I)
-
0.6, Pr( X = 0) = 0.2, Pr( X = 2) = 0.2;
0) = 0.9, Pr( Y = I) = 0.1 .
What is the value of
6.
U( - I)?
Suppose that a person must accept a gamble X of the following form:
Pr( X - a ) = p where
p
and
Pr( X - I - a ) = I - p,
0 < p < 1. Suppose also that the person · a (0 � a � 1 ) to be used in this gamble.
is a given number such that
can choose and fix the value of Determine the value of a that the person would choose if rus utility function is
U(x) =
Xl/2
O. Determine the value of a that a person would utiljty function is U(x) for x � O. 8. Determine the value of a that a person would utility function is U(x) = x for x � O. 9. Consider four gambles Xl ' X2 • Xl ' and X4
7.
as
log x for x >
=
distributions of the gains are as follows : Pr( X,
= 0) = 0.2,
Pr( X, �
0) = 0.4,
Pr( X, = 0) = 0.3, Pr( X, -
0)
�
Pr( X,
= I)
Pr( X, P r ( X,
Pr( X, � 2)
�
6
if hjs
choose in Exercise
6
if his
for which the probability ·
0.5, Pr( X, = 2) - 0.3;
I) = 0.2, Pr( X, = 2) = 0.4;
= I) = 0.3, =
choose in Exercise
Pr( X,
=
2) = 0.4;
0.5.
Suppose that a person's utility function is such that he prefers
Xl
to Xz. If
the person were forced to accept either X} or X4• which one would he choose?
4.10. Supplementary Exercises
239
10. Suppose that a person has a given fortune A > 0 and can bet any amount b of this fortune in a certain game (0 � b � A). If he wins the bet, then his fortune becomes A + b: if he loses the bel, then his fortune becomes A - b.
In general, let X denote his fortune after he has won or lost. Assume that the probability of his winning is p ,(0 < p < I) and the probability of his losing is 1 - p. Assume also that his utility function, as a function of his final fortune x. is U(x) = 10gx for x > O. If the person wishes to bet an amount b for which the expected utility of his fortune £[ U( X)] will be a maximum, what amount b should he bet? 11. Determ.ine the amount b that the person should bet in Exercise 10 if his utility function is U(x) = x l /2 for x ;:: O. 12. Determine the amount b that the person should bet in Exercise 10 if his utility function is U(x ) = x for x " O. 13. Determine the amount b that the person should bet in Exercise 10 if his utility funcLion is U(x) = x2 for x � O. 14. Suppose that a person has a lottery ticket from which he will wi.n X dollars, where X has a uniform distribution on the interval (0,4). Suppose also that the person's utility function is U(x ) = xo. for x ;:: O. where a is a given positive constant. For how many dollars · Xo would the person be willing to sell this lottery ticket?
4.10. SUPPLEMENTARY EXERCISES
1 . Suppose that the random variable X has a continuous distribution with dJ. F(x). Suppose also that Pr( X " 0) = 1 and that £ ( X ) exists. Show that £(X)
=
- F(x)] dx. t[l o
I-Iint: You may use the fact that if E( X) exists'; then lim x [ 1 - F(x)]
x
.... oc
=
O.
2. Consider again the conditions of Exercise 1, but suppose now that X has a discrete distribution with dJ. F(x), rather than a continuous distribution. Show that the conclusion of Exercise 1 still holds. 3. Suppose that X, Y. and Z arc nonnegative random variables such that Pr( X + y + Z " 1.3) = I. Show that X, Y, and Z cannot possibly have a joint distribution under which each of their marginal distributions is a uniform distribution on the interval (0, 1).
240
Ex�Uon
4. Suppose that the random variable X has mean fJ. and variance 0 2, and that Y = aX + b. Determine the values of a and b for which £(Y) = 0 and Var( y) J . 5 . Determine the expectation of the range of a random sample of size n from a uniform distribution on the interval (0, 1). 6. Suppose that an automobile dealer pays an amount X (in thousands of dollars) for a used car and then sells it for an amount Y. Suppose that the random variables X and Y have the following joint p.d.f. : �
for 0
<x 1- ( 1 )
• . . .
.
-
n >1-,(1)
i= 1
=
Il eA,(f" -I} k
i -1
=
e (AI+ "
'
.
.
.
+AI1- ( I) denote the independent, it follows
+ Adft' - I)
It can be seen from Eq. (6) that this m.g.f. >I- ( t ) is the m.g.f. of a Poisson distribution with mean A I + . . . + AI 0 (nol necessari1y an integer) and any number p in the interval o < p < 1 . In other words, it can be verified that for r > ° and 0 < p < 1, •
(3)
In this section, however, we shall restrict our attention to negative binomial distributions for which the parameter r is a positive integer. It follows from the results given in Exercise 4 at the end of Sec. 5.3 that the p.L of the negative binomial distribution can be written in the following alterna tive form: for x
- 0, 1 , 2, . . . ,
otherwise.
(4)
260
Special DistribuUons
The Geometric Distribution
A
negative binomial distribution for which
r
= 1 is called a geometric distribu�
tion. In other words, it is said that a random variable
distribution with parameter p which the p.f. f(x 1 1, p) is as
(0 < � < 1) if
X
X
has a
geometric
has a discrete distribution for
follows:
for x = 0, 1 , 2, . . . ,
( 5)
otherwise.
Consider again an infinite sequence of Bernoulli trials in which the outcome of any trial is either a success or a failure and the probability of a success on any trial is
p.
If we let
Xl
denote the number of failures that occur before the first
Xl will have a geometric distribution with parameter p. More generally, for j � 2,3, , we shall let 10 denote the number of failures
success is obtained, then
. . .
obtained. Since all the trials are independent and the probability of obtaining a
that occur after j - 1 successes have been obtained but before the jth success is
success on any given lrial is
geometric
distribution
P . it fo1lows that each random variable Xi will have a
with
parameter
p
and that
the
random
variables
the sum Xl + Xl ' Xl" will be independent. Furthermore, for r = 1, 2 . . . + X,. will be equal to the total number of failures that OCCUll before exactly r . .
• . . . •
successes have been obtained. Therefore, this sum will have a negative binomial dislfibution with parameters
r
and
p. We have thus derived the following result:"
If Xl ' Xr are U.d. random variables and If each Xi has a geometri� distribution with parameter p. then the sum Xl + . + X, has a negative binomial distribution with parameters r and p. • . . .
. .
Other Properties 01 Negative Binomial and Geometric Distributions
The Moment Generating Function. If Xl has a geometric p, then the m.g.!. '!-t(I) of Xl is as follows:
distribution with
parameter
�
'1'.(1) � E( e'X,) � P I: ( qe ) x . x- o
(6)
'
t
o < qe' < I , that is. for t < log(l/q). It is known from elementary calculus that The infinite series in Eq. (6) will have a finite sum for any value of for any number a (0 < a < 1),
f: "x �
x-o
l l - a
such that U
5.5.
Therefore, for
< log(ljq),
1
p
-,, ( 1 ) =
1
261
The Negative Binomial Dlatr1butlon
-
qe'
.
(7)
3
It is known from Theorem
of Sec.
4.4
that if the random variables
XI ' " ' ' X, are i.i.d. and if the m.g.f. of each of them is _, , ( I ), then the m.g.f. of the sum Xl + . " + X,. is [ � I ( t )]'. Since the distribution of thesum XI + . . . + X,.
is a negative binomial distribution with parameters r and
p, we have established
the following result:
If X has a negaliue binomial distribution with parameters r and p, then Ihe m.g.f. of X is as follows: "' ( I ) -
(
P 1 - qe '
)
'
The Mean and Variance.
for t < IOg If
Xl
(�)
(8)
has a geometric distribution with parameter
then the mean and the variance of
Xl
p.
can be found by differentiating the m.g.f.
given in Eq. (6). The results are as follows: ,
E ( XI ) = ",;(0) =
'l.
(9)
P
and
(10) Now suppose that X has a negative binomial distribution with parameters .
and p. If
X
is represented as the sum
XI + .
.
variables. each having the same distribution as
(10)
' + X,. of
Xl'
r
r independent random
it follows from Eqs.
(9)
and
that the mean and the variance of X must be
r E(X) = q P
and
Var(
XI) =
rq • P2
•
The Memoryless Property 0/ the Geometric Distribution.
(11) We shall continue to
consider an infinite sequence of Bernoulli trials in which the outcome of any trial is either a success or a failure and the probability of a success on any trial is
p.
Then the distribution of the number of failures that will occur before the first
p. Suppose now that a 20 trials. Then, since all trials are indepen additional failures that will occur before the first
success occurs is a geometric distribution with parameter failure occurred on each of the first dent, the distribution of the
262
Special OlstribuUons
success is obtained will again be a geometric distribution with parameter p. In effect, the process begins anew with the twenty-first trial, and the long sequence of failures that were obtained on the first 20 trials can have no effect on the future outcomes of the process. This property is often called the memoryless property of the geometric distribution. At the beginning of the experiment, the expected number of failures that will occur before the first success is obtained is q/p, as given by Eq. (9). If it is known thal failures were obtained on the first 20 trials, then the expected total number of failures before the first success is simply 20 + ( q/p). In purely mathematical terms, the memoryless property can be stated as follows:
If X has a geometric distribution with parameter p, then for any nonnegative integers k and t,

Pr(X = k + t | X ≥ k) = Pr(X = t).   (12)
A simple mathematical proof of Eq. ( 1 2) could be given by using the p.r. f(x 1 1. p ) specified in Eq. (5). This proof is required in Exercise 7 at the end of this section.
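Eq. (12) is also easy to verify numerically from the p.f. in Eq. (5). The following Python sketch (illustrative only, not part of the original text; the value of p is arbitrary) checks the memoryless property for a few values of k and t.

p, q = 0.3, 0.7   # an arbitrary choice of the parameter p

def geom_pf(x):
    """p.f. of the geometric distribution in Eq. (5): f(x | 1, p) = p * q**x."""
    return p * q ** x

def tail(k):
    """Pr(X >= k) = q**k for the geometric distribution."""
    return q ** k

for k in (0, 3, 20):
    for t in (0, 1, 5):
        lhs = geom_pf(k + t) / tail(k)   # Pr(X = k + t | X >= k)
        rhs = geom_pf(t)                 # Pr(X = t)
        assert abs(lhs - rhs) < 1e-12
        print(f"k = {k:2d}, t = {t}:  {lhs:.6f} == {rhs:.6f}")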
EXERCISES 1. Suppose that a sequence of independent tosses are made with a coin lor which the probability of obtaining a head on any given toss is 1/30. (a) What is the expected number of tails that will be obtained before five heads have been obtained? (b) What is the variance of the number of tails that will be obtained before five heads have been obtained? 2. Consider the sequence of coin tosses described in Exercise 1 . (a) What is the expected number of tosses that will be required in order' to obtain five heads? (b) What is the variance of the number of tosses that be required in order to obtain five heads? 3. Suppose that two players A and B are trying to throw a basketball through a hoop. The probability that player A will succeed on any given throw is P. and he throws until he has succeeded r times. The probability that player B will succeed on any given throw is mp, where m is a given integer ( m 2. 3, . . . ) such that mp < 1 ; and he throws until he has succeeded mr times. (a) For which player is the expected number of throws smaller? (b) For which player is the variance of the number of throws smaller?
will
=
5.S. The Nonna'
Distribution
263
4.
Suppose that the random variables Xl • . . . • Xk are independent and that Xi has a negative binomial distribution with parameters r; and p (i = 1 , . . . , k). Prove that the sum Xl + . . . + Xk has a negative binomial distribution with parameters r = rl + . . . + rk and p.
5.
Suppose that X has a geometric distribution with parameter p. Determine the probability that the value of X will be one of the even integers 0, 2, 4, . . . .
6. Suppose that X has a geometric distribution with parameter p . Show that for
any nonnegative integer k, Pr(X ;;.. k ) = 7. Prove Eq. (12).
qN.
8.
Suppose that an elect.ronic system contains 11 components which function independently of each other, and suppose that these components are con nected in series, as defined in Exercise 4 of Sec. 3.7. Suppose also that each component will function properly for a certain number of periods and then will fail. Finally, suppose that for i = 1 , . . . , n , the number of periods for which component i wiJI function properly is a discrete random variable having a geometric distribution with parameter PI' Determine the distribution of the number of periods for which the system will function properly.
9.
Let I(x I r, p ) denote the p.L of the negative binomial distribution with parameters r and p; and let f(x 1 ;\ ) denote the p.e. of the Poisson distrihu tion with mean A, as defined by Eq. (1) of Sec. 5.4. Suppose r -+ co and q -+ 0 in such a way that the value of rq remains constant and is equal to A throughout the process. Show that for each fixed nonnegative integer x,
f ( x l r, p ) - f( x l ;\ ) ·
5.6.
THE NORMAL DISTRIBUTION
Importance of the Normal Distribution The normal distribution, which will be defined and discussed ion this section, is by far the single most important probability distribution in statistics. There are three main reasons for this preeminent position of the 'lormal distribution. The first reason is di_rectly related to the mathematical properties of the norma] distribution. We shall demonstrate in this section and in several later sections of this book that if a random sample is taken from a normal distribution, then the distributions of various important functions of the observations in the sample can be derived explicitly and will themseLves have simple forms. There fore, it is a mathematical convenience to be able to assume that tbe distribution from which a random sample is drawn is a normal distribution.
264
Special Distrlbulions
The second reason is that many scientists have observed that the random
studied in various physical experiments often have distributions which are approximately normal. For example. a normal distribution wi1l usually be a close approximation to the distribution of the heights or weigh ts of individuals in a homogeneous population of people. o f corn stalks or of mice, or to the distribution of the tensile strength of pieces of steel produced by a certain process. The third reason for the preeminence of the nonnal distribution is the central limit theorem, which will be stated and proved i n the next section. If a large random sample is taken from some distribution, then even though this distribu tion is not itself approximately normal, a consequence of the central limit theorem is that many important functions of the observations in the sample will have distributions which are approx.imately normal. In particular, for a large variables
,
random sample from any distribution that has a finite variance. the distribution
of the sample mean will be approximately the next section.
normal. We shall return to this topic in
Properties 01 the Normal Distribution
has a
normal
- 00 < x < 00 .
(1)
It is said that a random variable
distribution with mean IJ. and variance 0 2 ( - 00 < f.L < 00 and 0 > 0) if X has a continuous distribution for which the p.d.L I(x I lL, 0 2 ) is as follows: Definition of the Distribution.
f(x l l' , O 2 ) �
1 1/2
(2,, )
°
exp
[ - "21 ( X - )2] °
I'
for
We
X
shall now verify that the nonnegative function defined in Eq. proper p.d.f. by showing that
f� f(x l l',02) dx � 1 . -
�
(x - 1')/0, then
f(x 1 1', 02) dx f� f� -� �
�
IS
a
(2 )
�
If we le, y
(1)
I
(2,, )
l/2
(
exp -
-21
y
2
) dy.
W e shall now let
( 3) Then we must show that J =
(2 ?T )1/2.
5.6. The Normal Distribution
265
From Eq. (3), it follows that [ ' _ [ .. j
�
j� exp( - .!2.Y' ) dY JOO exp ( - .!2. Z' ) dZ - co
_ QO
.
We shall now change the variables in this integral from y and z to the polar coordinates r and 8 by letting y = rcos8 and z = r sinO. Then, since y2 + Z 2 = "
,
Therefore, 1 = (27T)1/2
and the
correctness of Eq. (2) has been
established.
The Moment Generating Fum:tion. In the definition of the normal distribution, i t is stated that the parameters J.L and ,, 2 are the mean and the variance of the distribution. In order to justify the use of these terms, we must verify \hat '" is actually the mean and 0- is actualJy the variance for the p.d.f. specified by Eq. (l). We shall do this after we have derived the rn.g.f. ""(r) of this normal distribution. By the definition of an m.g.f.,
2
"' ( I ) � E ( e 'X) �
\ exp foo - 00 (2,,) / 0 ,
[
tx -
]
( x - I' ) ' dx. . 20 ,
By completing the square inside the brackets, we obtain the relation tx -
(x
_
1')'
20'
=
I " ", I + a 2 I
2 ,, '
•
Therefore,
where c
_- jOO
- 00
J
.
1 exp (2,,) I2 a
{_ [X -(I' + "'I)]'} 2" 2
dx.
,-
266
Special
Dlstrlbutlons
If we now replace I' with I' +
a'l
in Eq. (I), it follows from Eq. (2) thai C
Hence. the m.g.f. of the normal distribution is as follows : for
X
If a random variable
given in
Eq. (I), it
- 00
- I. (4)
< t < 00 .
has a normal distribution for which the p.d.f. is as
follows from Eq. (4) that
£(X) - ,,'(0) - I' and Var( X )
- ,,"(0) - [ ,,'(O)J ' - a 2
Thus we have shown that the parameters p. and
variance of the normal distribution defined by Since the m.g.f.
(k
,, (t)
02
arc indeed the mean and the
Eq. (1).
is finite for all values of I, all the moments
- 1 , 2, . . . ) will also be finite.
The Shupe 01 the Normal Distribution.
I(x 1 1-'. ( 2 )
=
(12
I t can be seen from Eq. (I) that the p.d.L
of a norma] distribution with mean p. and variance
with respect to the point x
£(Xk)
is symmetric
11- . Therefore, Il is both the mean and the median of
the distribution. Furthermore, M is also the mode of the distribution. In other
I(x I ll. (1 2 ) attains its maximum value at the point x = ", . Finally, by differentiating f(x l lJ., CJ 2 ) twice, it can be found that there are points J.I. + C1 and at x = J.I. - o. of inflection a t x ' The p.d.L I(x I I', a ) is sketched in Fig. 5.1. It is seen that the curve is
words, the p.dJ.
"""
.. bell-shaped." However, it is not necessarily true that any arbitrary beU-shaped
p.d.L can be approximated by the p.d.f. of a normal distribution. For example, the p.d.f. of a Cauchy distribution, as sketched in
fjg.
4.3, is a symmetric
bell-shaped curve which apparently resembles the p.d.L sketched in Fig.
5.1.
However, since no moment o f the Cauchy distribution-not even the mean-ex ists, the tails of the Cauchy p.d.f. must be quite different from the tails of the normal p.d.f.
Linear T,.ansformations.
We shall now show that if a random variable
X
has a
normal distribution, then any linear function of X will also have a normal distribution.
Theorem 1 . If X has a normal distribUlion with mean jJ. and variance 0 2 and if Y = aX + b, where a and b are given constants and a *" 0, then Y has a
normal distribution with mean aJ! + b and variance a 20 2 .
5.6. The Norm.1 Di.trlbutlon
267
i • •
I /( ..;?; 0)
..--1--...
Figure S. t
The p.dJ. of a normal distribution.
Proof, The m.g.f. '" of X is given by Eq. (4). If '" y denotes the m.g.f. of Y, then for - 00 < I < 00 .
By comparing this expression for 0/ y with the m.g.f. of a normal distribution given in Eq. (4), we see that \f y is the m.g.!. of a normal distribution with mean Q� + b and variance a 20 2. Hence, Y must have this normal distribution. 0 The
Standard Nonnal Distribution
The normal distribution with mean 0 and variance 1 is called the standard normal distribution. The p.d.!. of the standard normal distribution is usuaHy denoted by the symbol . and the d.f. is denoted by the symbol . Thus.
for - 00 < x < 00
(5)
and
( x ) -
r
- �
where the symbol
( u ) du II
for - 00 < x < 00 ,
is used in Eq. (6) as a dummy variable of integration.
(6)
268
Special Distributions
The dJ. W(x) cannot be expressed in closed form in terms of elemental) functions. Therefore, probabilities for the Standard normal distribution or any other normal distribution can be found only by numerical approximations or by using a table of values of �(x) such as the one given at the end of this book. In that table, the values of �(x) are given only for x � O. Since the p.d.f. of the standard normal distribution is symmetric with respect to the point x = 0, it follows that Pre X .. x) - pre X ;,. - x ) for any number x ( - 00 < x < (0). Since Pr( X .. x ) - (x) and Pr( X ;,. -x) - 1 - ( - x), we have ( x )
+ '1> (
-
x)
-
for
1
�
00 < x < 00 .
(7)
It
foUows from Theorem 1 that if a random variable X has a normal distribution with mean p. and variance 0 2, then the variable Z = ( X - p.)/a will have a standard normal distribution. Therefore, probabilities fOT a normal distri bution with any specified mean and variance can be found from a table of the standard normal distributi61.
Example
1:
Detennining Probabilities lor a Normal Dstribution.. i Suppose that
has a normal distribution with mean determine th e value of Pr(1 < X < 8).
and
If we
let Z -
(X
-
=
Pr
Pr(l < X < 8)
5)/2, then
(1
-
2
5
Z
and standard deviation 2. We shall
5
will have a standard normal distribution
X-5
( 1 .2)
-1
Central Limit Theorem (Liopounovj lor the Variables. We shaH now state a central limit
Sum
We shall assume that £ ( Xi ) = IJ. ; and Var(X, ) =
0/
The
sequence of random variables
Xl ' X2 ,
. . .
-
0.7698.
0
of Independent Random
theorem which appJies to a
that are indepcndcnt but not necessarily
identicalJy distributed. This theorem was first proved by A. Liapounov in 1 901.
shall let
for i - 1 ,
Also, we . ( 2)
Y• Then
. . . , n.
E ( Y. ) - 0 and Var( Y.) - 1 . The theorem that is stated next gives a
sufficient condition for the distribution of this random variable Yn to be ap
proximately a standard normal distribution.
-
Suppose thai the random variables Xl. x2 , . are independe1l1 and that £(1 XI Il, I) < 00 for i = 1 , 2, Also, Suppoj'e lhal
Theorem 2.
_
.
.
.
.
_
( 3)
Finally, let the random variable number x , lim P r( Y. ...
x)
-
Yn be
as defined in Eq. (2). Then, for anyfixed
4> (x ) .
The interpretation of this theorem is as follows : If Eq.
any large va1ue of
n,
the distribution of
distribution with mean
(4) (3) is satisfied, then for
I:7_ 1 Xi will be approximately a normal E7_ 11J., and variance E7_10?, Il shou1d be noted that when
5.7. The Central Limit Theorem
277
the random variables XI ' X2 , . . . are identically distributed and the third moments of the variables exist, Eq. (3) will automatically be satisfied and Eq. (4) then reduces to Eq . (1). The distinction between the theorem of Lindeberg and Levy and the theorem of Liapounov should be emphasized. The theorem of Lindeberg and Levy applies to a sequence of i.i.d. random variables. In order for this theorem to be applicable, it is sufficient to assume only that the variance of each random variable is finite. The theorem of Liapounov applies to a sequence of independent random variables that are not necessarily identically distributed. In order for this theorem to be applicable, it must be assumed that the third moment of each random variable is finite and satisfies Eq. (3).
The Central Limit Theorem for Bernoulli Random Variables. theorem of Liapounov, we can establish the following result:
By applying the
X" are independent Theorem 3. Suppose that the random variables Xl" and that X; has a Bernoulli distribution with parameter p; ( i = 1 , 2, . . . ). . . •
Suppose also that the infinite series L;r: I y" -
:E7_ 1 X;
-
Piq; is divergent,
and let
E7_ 1 P;
(5)
(l:7_,p,q,) '/l
Then for any fixed number x,
lim Pr(Y"
"-�
..
x)
Proof. Here Pr(X, Var( Xj) = p,.qj' and
-
-
( x ) .
1)
-
p,
(6) and Pr( X,
-
0)
-
q,.
Therefore,
E(X,)
-
p"
It follows that
•
(7 )
Since the infinite series E�_ l p; qj is divergent, then '£7_1 Piq, -I> 00 as n - 00 and it can be seen from the relatlon (7) that Eq. (3) will be satisfied. In turn, it follows from Theorem 2 that Eq. (4) will be satisfied. Since Eq. (6) is simply a restatement of Eq. (4) for the particular random variables being considered here, the proof of the theorem is complete. 0
EC:-- IPjq,
Theorem 3 implies that if the infinite series is divergent, then the distribution of the sum E7_ 1 X; of a large number of independent Bernoulli
Speelal Distribution.
278
,
random variables will be approximately a normal distribution with mean [i-IPi and variance L:7_ 1 Pi qj' It should be kept in mind, however, that a typical practicaJ problem will involve only a finite number of random variables Xl" ' " XII rather than an infinite sequence of random variables. In such a problem, it is Dot meaningful to consider whether or not the infinite series r.r:.l p,ql is divergent, because only a finite number of values P" " " Pn will be specified in tbe problem. In a certain sense, therefore, the distribution of the sum Ei'_ I Xi can always be approximated by a normal distribution. The critical question is whether or not this normal distribution provides a good approximation to the actual distribution of Ei'_t Xj. The answer depends, of course, on the values of Pl" . • PI/ ' Since the normal distribution will be attained more and more closely as "£7-1Pj qj - 00, the normal distribution provides a good approximation when the value of L�_l p/ qj is large. Furthermore, since the value of each term Pi qj is a maximum when P, = 1 /2 , the approximation will be best when n is large and the values of P I " ' " Pn are close to 1/2. _
Example 3: Examination Questions. Suppose that an examination contains 99 questions arranged in a sequence from the easiest to the most difficult. Suppose that the probability that a particular student will answer the first question correctly is 0.99; the probability that he will answer the second question correctly is 0.98; and, in general, the probability that he will answer the ith question correctly is 1 − i/100 for i = 1, ..., 99. It is assumed that all questions will be answered independently and that the student must answer at least 60 questions correctly to pass the examination. We shall determine the probability that the student will pass.

Let X_i = 1 if the ith question is answered correctly and let X_i = 0 otherwise. Then E(X_i) = p_i = 1 − (i/100) and Var(X_i) = p_i q_i = (i/100)[1 − (i/100)]. Also,

Σ_{i=1}^{99} p_i = 99 − (1/100) Σ_{i=1}^{99} i = 99 − (1/100) · (99)(100)/2 = 49.5

and

Σ_{i=1}^{99} p_i q_i = (1/100) Σ_{i=1}^{99} i − (1/100)² Σ_{i=1}^{99} i² = 49.5 − (1/100)² · (99)(100)(199)/6 = 16.665.

It follows from the central limit theorem that the distribution of the total number of questions that are answered correctly, which is Σ_{i=1}^{99} X_i, will be approximately a normal distribution with mean 49.5 and standard deviation (16.665)^(1/2) ≈ 4.08. Therefore, the distribution of the variable

Z = (Σ_{i=1}^{99} X_i − 49.5) / 4.08

will be approximately a standard normal distribution. It follows that

Pr(Σ_{i=1}^{99} X_i ≥ 60) ≈ Pr(Z ≥ 2.5735) ≈ 1 − Φ(2.5735) ≈ 0.0050. □

Effect of the Central Limit Theorem

The central limit theorem provides a plausible explanation for the fact that the distributions of many random variables studied in physical experiments are approximately normal. For example, a person's height is influenced by many random factors. If the height of each person is determined by adding the values of these individual factors, then the distribution of the heights of a large number of persons will be approximately normal. In general, the central limit theorem indicates that the distribution of the sum of many random variables can be approximately normal, even though the distribution of each random variable in the sum differs from the normal.
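Returning to Example 3, the normal approximation computed there can be reproduced in a few lines. The sketch below (added for illustration; not part of the original text) obtains the mean and variance of the number of correct answers and the approximate probability of passing, expressing the standard normal d.f. through math.erf.

from math import erf, sqrt

def std_normal_cdf(x):
    """Standard normal d.f. Phi(x)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Question i is answered correctly with probability p_i = 1 - i/100, i = 1, ..., 99.
p = [1.0 - i / 100.0 for i in range(1, 100)]
mean = sum(p)                              # 49.5
var = sum(pi * (1.0 - pi) for pi in p)     # 16.665
sd = sqrt(var)                             # about 4.08

z = (60.0 - mean) / sd                     # about 2.57
print("mean =", round(mean, 3), "  variance =", round(var, 3))
print("approximate Pr(pass) =", round(1.0 - std_normal_cdf(z), 4))   # about 0.005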
Convergence In Distribution Let
Xl' X2 ,
. . •
be a sequence of random variables; and for
denote the dJ. of d.f. is
P*.
F*
lim
F. ( x }
X*
�
1 , 2, . . . , let P.,
is a continuous function over the entire real
line. Then it is said that the sequence
• - 00
=
X.,. Also, let X* denote another random variable for which the
We shall assume that
random variable
n
Xl' X2'
•
•
•
converges in distribution
to the
if
F *(x}
for
(8)
- 00 < x < 00 .
X., converges in distribution to X*, and the asymptotic distribution of X". Thus, according to
Sometimes. it is simply said that distribution of X * is called the
the central limit theorem of Lindeberg and. Levy, as' indicated in Eq. (I), the random variable
l n /2(X"
- IL)/O converges in disLribution to a random variable
having a standard normal distribution; or, equivalently, the asymptotic distribu tion of
nl/2(X"
- 1')/0 is a standard normal distribution.
Convergence 0/ the Moment Generating Function.f.
Moment generating functions
are important in the study of convergence in distribution because of the following theorem. the proof of which is too advanced to be presented here.
Let Xl' X2, . . . be a sequence of random variables; and for 1, 2, . . . , let F" denote the d.f. of x. and leI >Y. denole the m.g.f. of x".
Theorem 4.
n
=
280
Special Distributions
A lso, let X* denote another random variable wilh d./. F * and m.g.f. 1}*. Suppose lhal lhe m.g.j.'s "" and ,,' exiSI (n � 1 , 2, . . . ). If lim " _ oo,,,,( t ) '1/;*( I) for all values of t in some interval around the point t = 0, then the sequence Xl ' X2 converges in distribution to X*. III olh�r words, the sequence of d.L's FI, F2, must converge to the dJ. F * • . . .
• • •
i f the corresponding sequence of m.gJ.'s I/J ' . 1JI2
• . . .
converges to the m.g.f. Ij!*.
Outline o/ the Pro% / the Central Limit Theorem. We are now ready to outline a proof of Theorem 1 , which is the central limit theorem of Lindeberg and Levy. We shall assume that the variables Xl' . . . ' Xn form a random sample of size n from a distribution with mean I-L and variance 0 2. We shall also assume, for convenience, that the m.g.f. o[ this distribution exists, although the central limit theorem is true even without this assumption. For i = l, . . . , n, let 1"; ( Xi - J.I.)/o. Then the random variables Y" . . . • y.. are i.i.d., and each has mean 0 and variance 1. Furthermore, let IE:
We shall show that Zn converges in distribution to a random variable having a standard normal distribution, as indicated in Eq. (I), by showing that the m.g.f. of Zn converges to the m.gJ. of the standard normal distribution. 1 , . . . , n), then it If 1jI ( t ) denotes the m.g.r. of each random variable 1'; (i fol1ows from Theorem 3 of Sec. 4.4 that the m.gJ. of the sum E7_ 1 r; will be [,,(I)J". Also, it follows from Theorem 2 of Sec. 4.4 that the m.g.f. 1:,,(1) of Z" will be
=
In trus problem, ,,'(0) - E ( Y ,) = 0 and ,,"(0) series expansion of "-'(I) about the point t =
= + I' + �! " 1
Also,
,
2
I' 2n
-
+
'"
(0) +
1',, '''(0) ';:-"- + 3!n3/2
-'-;-
�
0
1.
Therefore, the Taylor E ( Y, ' ) = has Lhe following form:
"
5.7. The Central Limit Theorem
It is shown in advanced calculus that if lim"_ooa,, b, then lim
" - 00
=
281
b for some numbers an and
(1
But +
,I '-,", ' ( 0-,-) + . . . -,--::; 3!nl/2 "
j
�
/2 2 '
Hence, (9) Since the right side of Eq. (9) is the m.gJ. of the standard normal distribution, it [ollows from Theorem 4 that the asymptotic distribution of ZI1 must be the standard normal distribution. An outline of the proof of the centrai limit theorem of Liapounov can also be given by proceeding along similar lines, but we shall not consider this problem further here.
EXERCISES
1.
Suppose that 75 percent of the people in a certain metropolitan area Jive in the city and 25 percent of the people Jive in the suburbs. )f 1200 people attending a certain concert represent a random sample from the metropolitan area, what is the probability that the number of people from the suburbs attending the concert will be fewer than 270?
2. Suppose that the distribution of the number of defects on any given bolt of cloth is a Poisson distribution with mean 5, and that the number of defects on each bolt is counted for a random sample" of 125 bolts. Determine the probability that the average number of defects per bolt in the sample will be less than 5.5.
3.
Suppose that a random sample of size n is to be laken from a distribution for which the mean is ,... and the standard deviation is 3. Use the central limit theorem to determine approximately the smallest value of n for which the following relation will be satisfied: Pr( I X"
-
1'1
< 0.3)
;.
0.95.
282
Special Disb'ibutions
4. Suppose that the proportion of defective items in a large manufactured lot 0.1. What is the smallest random sample of items that must be taken from t lot in order for the probability to be at least 0.99 that the proportion defective items in the sample will be less than 0.13?
5. Suppose that three boys A, B, and C throw snowballs at a target. Suppo also that boy A throws 10 times and the probability that he will hit the tar! on any given throw is 0.3; boy B throws 15 times and the probability that will hit the target on any given throw is 0.2; and boy C throws 20 times aJ the probability that he will hit the target on any given throw is 0.1. Determt the probability that the target will be hit at least 12 times.
6. If 16 digits are chosen from a table of random digits, what is the probabili that their average will lie between 4 and 6?
7. Suppose that people attending a party pour drinks from a bottle containing 63 ounces of a certain liquid. Suppose also that the expected size of each drink is 2 ounces, that the standard deviation of each drink is 1/2 ounce, and that all drinks are poured independently. Determine the probability that the bottle will not be empty after 36 drinks have been poured.
8. A physicist makes 25 independent measurements of the specific gravity of a certain body. He knows that the limitations of his equipment are such that the standard deviation of each measurement is σ units.
(a) By using the Chebyshev inequality, find a lower bound for the probability that the average of his measurements will differ from the actual specific gravity of the body by less than σ/4 units.
(b) By using the central limit theorem, find an approximate value for the probability in part (a).
9. A random sample of n items is to be taken from a distribution with mean μ and standard deviation σ.
(a) Use the Chebyshev inequality to determine the smallest number of items n that must be taken in order to satisfy the following relation:

Pr(|X̄_n − μ| ≤ σ/4) ≥ 0.99.
(b) Use the central limit theorem to determine the smallest number of items that must be taken in order to satisfy the relation in part (a) approximately.
10. Suppose that, on the average, one third of the graduating seniors at a certain college have two parents attend the graduation ceremony, another third of these seniors have one parent attend the ceremony, and the remaining third of these seniors have no parents attend. If there are 600 graduating seniors in a particular class, what is the probability that not more than 650 parents will attend the graduation ceremony?
5.8. THE CORRECTION FOR CONTINUITY
Approximating a Discrete Distribution by a Continuous Distribution

Suppose that X_1, ..., X_n form a random sample from a discrete distribution, and let X = X_1 + ··· + X_n. It was shown in the previous section that even though the distribution of X will be discrete, this distribution can be approximated by a normal distribution, which is continuous. In this section, we shall describe a standard method for improving the quality of the approximation that is obtained when a probability based on a discrete distribution is approximated by one based on a continuous distribution.

Suppose, therefore, that the random variable X has a discrete distribution with p.f. f(x) and it is desired to approximate this distribution by a continuous distribution with p.d.f. g(x). For simplicity, we shall consider only a discrete distribution for which all possible values of X are integers. This condition is satisfied for the binomial, hypergeometric, Poisson, and negative binomial distributions described in this chapter.

If the p.d.f. g(x) provides a good approximation to the distribution of X, then for any integers a and b, we can simply approximate the probability
Pr(a ≤ X ≤ b) = Σ_{x=a}^{b} f(x)    (1)

by the integral

∫_a^b g(x) dx.    (2)
Indeed, this approximation was used in Examples 1 and 3 of Sec. 5.7, where g(x) was the appropriate normal p.d.f. specified by the central limit theorem. This simple approximation has the following shortcoming: Although Pr(X ≥ a) and Pr(X > a) will typically have different values for the discrete distribution, these probabilities will always be equal for the continuous distribution. Another way of expressing this shortcoming is as follows: Although Pr(X = x) > 0 for any integer x that is a possible value of X, this probability is necessarily 0 under the approximating p.d.f.
Approximating a Histogram

The p.f. f(x) of X can be represented by a histogram, or bar chart, as sketched in Fig. 5.4. For each integer x, the probability of x is represented by the area of a rectangle with a base that extends from x − 1/2 to x + 1/2 and with a height f(x).
Figure 5.4. Approximating a histogram by using a p.d.f.
Thus, the area of the rectangle for which the center of the base is at the integer x is simply f(x). An approximating p.d.f. g(x) is also sketched in Fig. 5.4. From this point of view it can be seen that Pr(a ≤ X ≤ b), as specified in Eq. (1), is the sum of the areas of the rectangles in Fig. 5.4 that are centered at a, a + 1, ..., b. It can also be seen from Fig. 5.4 that the sum of these areas is approximated by the integral

∫_{a−(1/2)}^{b+(1/2)} g(x) dx.    (3)
The adjustment from the integral (2) to the integral (3) is called the correction for continuity.
If we use the correction for continuity, we find that the probability f(a) of the single integer a can be approximated as follows:

Pr(X = a) = Pr(a − 1/2 ≤ X ≤ a + 1/2)
          ≈ ∫_{a−(1/2)}^{a+(1/2)} g(x) dx.    (4)
Similarly,

Pr(X > a) = Pr(X ≥ a + 1) ≈ Pr(X ≥ a + 1/2) ≈ ∫_{a+(1/2)}^{∞} g(x) dx.    (5)
Example 1: Examination Questions. To illustrate the use of the correction for continuity, we shall again consider Example 3 of Sec. 5.7. In that example, an examination contains 99 questions of varying difficulty and it is desired to determine Pr(X ≥ 60), where X denotes the total number of questions that a particular student answers correctly. Under the conditions of the example, it is found from the central limit theorem that the discrete distribution of X could be approximated by a normal distribution with mean 49.5 and standard deviation 4.08. If we use the correction for continuity, we obtain

Pr(X ≥ 60) = Pr(X ≥ 59.5) ≈ Pr(Z ≥ (59.5 − 49.5)/4.08) = 1 − Φ(2.4510) ≈ 0.007.

This value can be compared with the value 0.005 that was obtained in Sec. 5.7 without the correction. □
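For readers who wish to reproduce these numbers, here is a minimal sketch (assuming Python is available; the helper name phi is ours) that evaluates the normal approximation of Example 1 with and without the correction for continuity.

from math import erf, sqrt

def phi(z):
    # Standard normal d.f.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 49.5, 4.08
print(1.0 - phi((60.0 - mu) / sigma))    # without the correction: about 0.005
print(1.0 - phi((59.5 - mu) / sigma))    # with the correction: about 0.007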
Example 2: Coin Tossing. Suppose that a fair coin is tossed 20 times, and that all tosses are independent. What is the probability of obtaining exactly 10 heads? Let X denote the total number of heads obtained in the 20 tosses. According to the central limit theorem, the distribution of X will be approximately a normal distribution with mean 10 and standard deviation [(20)(1/2)(1/2)]^(1/2) = 2.236. If we use the correction for continuity,

Pr(X = 10) = Pr(9.5 ≤ X ≤ 10.5) ≈ Pr(−0.5/2.236 ≤ Z ≤ 0.5/2.236)
           = Φ(0.2236) − Φ(−0.2236) ≈ 0.177.

The exact value of Pr(X = 10) found from the table of binomial probabilities given at the back of this book is 0.1762. Thus, the normal approximation with the correction for continuity is quite good. □

EXERCISES
1. Let X denote the total number of successes in 15 Bernoulli trials, with probability of success p = 0.3 on each trial.
(a) Determine approximately the value of Pr(X = 4) by using the central limit theorem with the correction for continuity.
(b) Compare the answer obtained in part (a) with the exact value of this probability.
2. Using the correction for continuity, determine the probability required in Example 1 of Sec. 5.7.
3. Using the correction for continuity, determine the probability required in Exercise 1 of Sec. 5.7.
4. Using the correction for continuity, determine the probability required in Exercise 2 of Sec. 5.7.
5. Using the correction for continuity, determine the probability required in Exercise 5 of Sec. 5.7.
6. Using the correction for continuity, determine the probability required in Exercise 6 of Sec. 5.7.
5.9. THE GAMMA DISTRIBUTION
The Gamma Function
For any positive number α, let the value Γ(α) be defined by the following integral:

Γ(α) = ∫_0^∞ x^(α−1) e^(−x) dx.    (1)
It can be shown that the value of this integral will be finite for any value of α > 0. The function whose values are defined by Eq. (1) for α > 0 is called the gamma function. We shall now derive some properties of the gamma function.

Theorem 1. If α > 1, then

Γ(α) = (α − 1)Γ(α − 1).    (2)
Proof. We shall apply the method of integration by parts to the integral in Eq. (1). If we let u = x^(α−1) and dv = e^(−x) dx, then du = (α − 1)x^(α−2) dx and v = −e^(−x). Therefore,

Γ(α) = ∫_0^∞ u dv = [uv]_0^∞ − ∫_0^∞ v du = 0 + (α − 1)Γ(α − 1). □
It follows from Theorem 1 that for any integer n ≥ 2,

Γ(n) = (n − 1)Γ(n − 1) = (n − 1)(n − 2)Γ(n − 2)
     = (n − 1)(n − 2) ··· 1 · Γ(1)
     = (n − 1)! Γ(1).

Furthermore, by Eq. (1),

Γ(1) = ∫_0^∞ e^(−x) dx = 1.

Hence, Γ(n) = (n − 1)! for n = 2, 3, .... Moreover, since Γ(1) = 1 = 0!, we have established the following result:

Theorem 2. For any positive integer n,

Γ(n) = (n − 1)!.    (3)
In many statistical applications, Γ(α) must be evaluated when α is either a positive integer or of the form α = n + (1/2) for some positive integer n. It follows from Eq. (2) that for any positive integer n,

Γ(n + 1/2) = (n − 1/2)(n − 3/2) ··· (1/2) Γ(1/2).    (4)

Hence, it will be possible to determine the value of Γ(n + 1/2) if we can evaluate Γ(1/2). From Eq. (1),

Γ(1/2) = ∫_0^∞ x^(−1/2) e^(−x) dx.

If we let x = (1/2)y² in this integral, then dx = y dy and

Γ(1/2) = 2^(1/2) ∫_0^∞ exp(−y²/2) dy.    (5)
Since the integral of the p.d.f. of the standard normal distribution is equal to 1, it follows that
Therefore,

∫_0^∞ exp(−y²/2) dy = (1/2)(2π)^(1/2).

It now follows from Eq. (5) that

Γ(1/2) = π^(1/2).    (6)

For example, it is found from Eqs. (4) and (6) that

Γ(7/2) = (5/2)(3/2)(1/2)π^(1/2) = (15/8)π^(1/2).
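As an illustration of Eqs. (4) and (6), the following minimal sketch (assuming Python; the function name gamma_half is ours, not a library routine) evaluates Γ(n + 1/2) from the recursion and compares the result with the gamma function in the standard library.

import math

def gamma_half(n):
    # Gamma(n + 1/2) = (n - 1/2)(n - 3/2) ... (1/2) * pi^(1/2), by Eqs. (4) and (6)
    value = math.sqrt(math.pi)
    for k in range(1, n + 1):
        value *= k - 0.5
    return value

for n in (1, 2, 3):
    print(n, gamma_half(n), math.gamma(n + 0.5))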
The Gamma Distribution
It is said that a random variable X has a gamma distribution with parameters α and β (α > 0 and β > 0) if X has a continuous distribution for which the p.d.f. f(x | α, β) is specified as follows:

f(x | α, β) = [β^α / Γ(α)] x^(α−1) e^(−βx)   for x > 0,    (7)

and f(x | α, β) = 0 for x ≤ 0. The integral of this p.d.f. is 1, since it follows from the definition of the gamma function that

∫_0^∞ x^(α−1) e^(−βx) dx = Γ(α) / β^α.
If X has a gamma distribution with parameters α and β, then the moments of X are easily found from Eqs. (7) and (8). For k = 1, 2, ..., we have

E(X^k) = Γ(α + k) / [β^k Γ(α)] = α(α + 1) ··· (α + k − 1) / β^k.

In particular, therefore,

E(X) = α/β
and
Var(X) = α(α + 1)/β² − (α/β)² = α/β².
The m.g.f. ψ of X can be obtained similarly, as follows:

ψ(t) = E(e^(tX)) = [β^α / Γ(α)] ∫_0^∞ x^(α−1) e^(−(β−t)x) dx.

This integral will be finite for any value of t such that t < β. It therefore follows from Eq. (8) that, for t < β,

ψ(t) = [β / (β − t)]^α.

The gamma distribution with parameters α = 1 and β is the exponential distribution with parameter β, for which the p.d.f. is f(x | β) = βe^(−βx) for x > 0 and

Pr(X > t) = e^(−βt)   for t > 0.    (11)

The exponential distribution has a memoryless property: for any value of t > 0 and any other number h > 0,

Pr(X ≥ t + h | X ≥ t) = Pr(X ≥ t + h) / Pr(X ≥ t) = e^(−βh).    (12)
To illustrate the memoryless property, we shall suppose that X represents the number of minutes that elapse before some event occurs. According to Eq. (12), if
the event has not occurred in t minutes, then the probability that the event will not occur during the next h minutes is simply e^(−βh). This is the same as the probability that the event would not occur during an interval of h minutes starting from time 0. In other words, regardless of the length of time that has elapsed without the occurrence of the event, the probability that the event will occur during the next h minutes always has the same value. Theoretically, therefore, it is not necessary to consider past occurrences of an event in order to calculate probabilities for future occurrences of the event. This memoryless property will not strictly be satisfied in all practical problems. For example, suppose that X is the length of time for which a light bulb will burn before it fails. The length of time for which the bulb can be expected to continue to burn in the future will depend on the length of time for which it has been burning in the past. Nevertheless, the exponential distribution has been used effectively as an approximate distribution for such variables as the lengths of the lives of various products.
Life Tests

Suppose that n light bulbs are burning simultaneously in a test to determine the lengths of their lives. We shall assume that the n bulbs burn independently of one another and that the lifetime of each bulb has an exponential distribution with parameter β. In other words, if X_i denotes the lifetime of bulb i, for i = 1, ..., n, then it is assumed that the random variables X_1, ..., X_n are i.i.d. and that each has an exponential distribution with parameter β. We shall now determine the distribution of the length of time Y_1 until one of the n bulbs fails. Since the time Y_1 at which the first bulb fails will be equal to the smallest of the n lifetimes X_1, ..., X_n, we can write Y_1 = min{X_1, ..., X_n}. For any number t > 0,

Pr(Y_1 > t) = Pr(X_1 > t, ..., X_n > t)
            = Pr(X_1 > t) ··· Pr(X_n > t)
            = e^(−βt) ··· e^(−βt) = e^(−nβt).
By comparing this result with Eq. (11), we see that the distribution of Y_1 must be an exponential distribution with parameter nβ. In summary, we have established the following result:

Theorem 4. Suppose that the variables X_1, ..., X_n form a random sample from an exponential distribution with parameter β. Then the distribution of Y_1 = min{X_1, ..., X_n} will be an exponential distribution with parameter nβ.
Next, we shall determine the distribution of the interval of time Y_2 between the failure of the first bulb and the failure of a second bulb.
After one bulb has failed, n − 1 bulbs are still burning. Furthermore, regardless of the time at which the first bulb failed, it follows from the memoryless property of the exponential distribution that the distribution of the remaining lifetime of each of the other n − 1 bulbs is still an exponential distribution with parameter β. In other words, the situation is the same as it would be if we were starting the test over again from time t = 0 with n − 1 new bulbs. Therefore, Y_2 will be equal to the smallest of n − 1 i.i.d. random variables, each of which has an exponential distribution with parameter β. It follows from Theorem 4 that Y_2 will have an exponential distribution with parameter (n − 1)β. By continuing in this way, we find that the distribution of the interval of time Y_3 between the failure of the second bulb and the failure of a third bulb will be an exponential distribution with parameter (n − 2)β. Finally, after all but one of the bulbs have failed, the distribution of the additional interval of time until the final bulb fails will be an exponential distribution with parameter β.
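The conclusions about Y_1 and Y_2 can be checked by simulation. The following is a minimal sketch (assuming Python with NumPy; note that NumPy parameterizes the exponential distribution by its mean 1/β, and the variable names are ours): it compares the average observed failure intervals with 1/(nβ) and 1/[(n − 1)β].

import numpy as np

rng = np.random.default_rng(1)
n, beta, reps = 10, 2.0, 100_000

lifetimes = rng.exponential(scale=1.0 / beta, size=(reps, n))
ordered = np.sort(lifetimes, axis=1)

y1 = ordered[:, 0]                     # time until the first failure
y2 = ordered[:, 1] - ordered[:, 0]     # interval between the first and second failures

print(y1.mean(), 1.0 / (n * beta))
print(y2.mean(), 1.0 / ((n - 1) * beta))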
EXERCISES

1. Suppose that X has a gamma distribution with parameters α and β, and that c is a positive constant. Show that cX has a gamma distribution with parameters α and β/c.
2. Sketch the p.d.f. of the gamma distribution for each of the following pairs of values of the parameters α and β: (a) α = 1/2 and β = 1; (b) α = 1 and β = 1; (c) α = 2 and β = 1.
3. Determine the mode of the gamma distribution with parameters α and β.
4. Sketch the p.d.f. of the exponential distribution for each of the following values of the parameter β: (a) β = 1/2, (b) β = 1, and (c) β = 2.
5. Suppose that X_1, ..., X_n form a random sample of size n from an exponential distribution with parameter β. Determine the distribution of the sample mean X̄_n.
6. Suppose that the number of minutes required to serve a customer at the checkout counter of a supermarket has an exponential distribution for which the mean is 3. Using the central limit theorem, determine the probability that the total time required to serve a random sample of 16 customers will exceed 1 hour.
7. Suppose that the random variables X_1, ..., X_k are independent and that X_i has an exponential distribution with parameter β_i (i = 1, ..., k). Let Y = min{X_1, ..., X_k}. Show that Y has an exponential distribution with parameter β_1 + ··· + β_k.
8. Suppose that a certain system contains three components which function independently of each other and which are connected in series, as defined in
Exercise 4 of Sec. 3.7, so that the system fails as soon as one of the components fails. Suppose that the length of life of the first component, measured in hours, has an exponential distribution with parameter β = 0.001; the length of life of the second component has an exponential distribution with parameter β = 0.003; and the length of life of the third component has an exponential distribution with parameter β = 0.006. Determine the probability that the system will not fail before 100 hours.
9. Suppose that an electronic system contains n similar components which function independently of each other and which are connected in series, so that the system fails as soon as one of the components fails. Suppose also that the length of life of each component, measured in hours, has an exponential distribution with mean μ. Determine the mean and the variance of the length of time until the system fails.
10. Suppose that n items are being tested simultaneously; that the items are independent; and that the length of life of each item has an exponential distribution with parameter β. Determine the expected length of time until three items have failed. Hint: The required value is E(Y_1 + Y_2 + Y_3).
11. Consider again the electronic system described in Exercise 9; but suppose now that the system will continue to operate until two components have failed. Determine the mean and the variance of the length of time until the system fails.
12. Suppose that a certain examination is to be taken by five students indepen dently of one another, and that the number of minutes required by any particular student to complete the examination has an exponential distribution for which the mean is 80. Suppose that the examination begins at 9:00 A.M. Determine the probabilhy that at least one of the students will complete the examination before 9:40 A.M. 13 . Suppose again that the exam.ination considered in Exercise
12 is taken by five
students, and that the first student to complete the examination finishes at 9:25 A.M. Determine the probability that at least one other student will complete the examination before 10:00 A.M. 14. Suppose again th�t the examination considered in Exercise 12 is taken by five students. Determine the probability that no two students will complete the examination within 10 minutes of each other. 15. I t is said that a random variable has a Pareto distribution with parameters X o and a ( x o > 0 and a > 0) if has a continuous distribution for which the p.d.f. ./(x I xo, a) is as follows:
X X
for x � x o ,
294
Special
Distributions
Show that j f X has this Pareto distribution, then the random variable log( Xjxo), has an exponential distribution with parameter a. 16. Suppose that a random variable X has a normal distribution with mean Il and variance 0 2 . Determine the value of E[(X 1-4 ) 2n J for n = 1 , 2, . . . . 17. Consider a random variable X for which Pr( X 0) 1 , and for which the p.d.f. is and the dJ. is F. Consider also the function h defined as follows:
> =
-
hex)
18.
I
=
1
I( x )
_
for x > o.
F( x )
The function h is called the failure rate or the hazard junction of X. Show that if X has an exponential distribution, then the failure rate h(x) is constant for x > o. It is said that a random variable has a Weibull distribution with parameters a and b ( a 0 and b 0) if X has a continuous distribution for wruch the p.dJ. I a, b) is as follows:
>
I(x >
x > 0, for x 0.
for
�
Xb
Show that if X has this Weibull distribution, then the random variable has an exponential distribution with parameter f3 a-b. � 19. It is said that a random variable X has an increasingfai/ure rate if the failure rate h ( x ) defined in Exercise 17 is an increasing function of x for 0; and is a decreasing function it is said that X has a decreasing failure rate if h of x for Suppose that X has a Weibull distribution with parameters a and b, as defined in Exercise 18. Show that X has an increasing failure rate if 1 and that X has a decreasing failure rate if b < 1. b -
>
5.10.
(x)
x > O.
x>
THE BETA DISTRIBUTIO:N:'---.
_ _ _ _ _ _ _ _ _ _ _ _ _
Dellnltlon 01 the Beta Dlslrtbutlon
a
It is said that a random variable X has a beta distribUlion with parameters and P 0) if X has a continuous distribution for wruch the p.d.f. 0 and I a, is as follows:
f) (a > I(x f) > + f) x I(x l a,f) = �(a)r(f)
{ re a
. - 1(
1
- x ) P_I
for 0 < x < 1 , otherwise.
( 1)
5.10. The Beta Distribution
295
In order to verify that the integral of this p.d.f. over the real line has the value 1 , we must show that for a > 0 and fJ > 0,
1'x 0 - 1 ( 1 - x ) P - 1 dx = r( a) r( lll o re a + P ) .
( 2)
From the definition of the gamma function, it follows that
(3)
Now we shall Jet
x=
u u+v
and
y=
u + u.
Then = xy and v = (1 - x) y, and il can be found that the value of the Jacobian of this inverse transformation is y. Furthermore, as and vary over al1 positive values, x will vary over the interval (0, 1 ) and y will vary over aU positive values. From Eq. (3), we now obtain the relation
u
u
,
=
rea + III fo X O-1 ( }
v
- x ) P- 1 dx.
Therefore, Eq. (2) has been established. It can be seen from Eq. (1) that the beta distribution with parameters and /3 = 1 is simply the uniform distribution on the interval (0, 1).
ex = 1
•
Moments of the Beta Distribution When the p.d.f. of a rand9m variable X is given by Eq. (1), the moments of easily calculated. For k = 1 , 2, . . . ,
E ( X* ) = fo'x*/(X i a, P ) dX =
r( a + III 1' 0 + - 1 1 x * ( r ( a ) r ( 1ll 0
_
x ) P- ' dx.
X are
296
Special Distributions
Therefore, by Eq. (2), r ( a + /l ) r ( a ) r( /l )
E ( X, ) �
f( a + k ) f( /l ) f ( a + k + /l )
a( a + 1 ) . . . -'-( a k - 1) -;---; ;-;-;-' -;:-' ---;-;: ---:-; l) ' k . ( a + /l;:-'-+-;( + /l ) ( " + /l + J) . . --;-
-
"
-
+
-
It follows that E(X) �
a' a + /l
and ( a )' a( a + 1 ) Var( X ) � '---:--=;:77-.,-'-'; ;+ " 1 ) - a + /l ( a + /l ) ( a +-'- /l-:-C all . ( a + /l )'(a + /l + 1)
EXERCISES 1 . Determine the mode of the beta distribution with parameters 2.
-
4.
5.
and P.
assuming that a > 1 and {J > 1 . Sketch the p.d.f. of the beta distribution for each of the following pairs of values of the parameters: (a) a � 1/2 and /l � 1/2, (b) a � 1/2 and /l � 1, (c) a � 1/2 and /l - 2, (d) a - I and /l � 1 , (C) a - 2 and /l - 2, (e) a � 1 and /l � 2, (g) a � 25 and /l 100, (h) a 100 and /l � 25. Suppose that X has a beta distribution with parameters a and p. Show thafT 1 X has a beta distribution with parameters p and 0:'. Suppose that X has a beta distribution with parameters 0:' and P. and let r and s be given positive integers. Determine the value of £[ xr(1 - X)"J. Suppose that X and Y are independent random variables: that X has a gamma distribution with parameters a1 and /1; and that Y has a gamma distribution with parameters a2 and (J. Let U = X/(X + Y ) and V = X + Y. Show (a) that U has a beta distribution with parameters a1 and 0:' 2; and (b) thal U and V are independent. -
3.
0:'
-
5.11. The Multinomial Distribution
297
,.
6. Suppose that Xl and X2 form a random sample of two observed values from an exponential distribution with parameter fJ. Show that XI/(X1 + X2) has a uniform distribution on the interval (0, 1).
7. Suppose that the proportion X of defective items in a large lot is unknown, and that X has a beta distribution with parameters ex and fJ. (a) If one item is selected at random from the lot, what is the probability that it will be defective?
(b) If two items are selected at random from the lot, what is the probability that both will be defective?
5.11.
THE MULTINOMIAL DISTRIBUTION
Definition 01 the Multinomial DlstrlbuUon
Suppose that a population contains items of k different types (k � 2) and that the proportion of the items in the population that are of type 1 is Pi (I = 1, . . . , k). It is assumed that Pi > 0 for i = 1, . . . , k and that E7_ I PI = 1. Furthermore, suppose that n items are selected at random from the population, with replace · ment; and let ,Xi denote the number of selected items that are of type i (i = 1 , . . , k). Then it is said that the random vector X = ( Xl" . " Xk) has a multinomial distribution with parameters n and p = ( P I " ' " Pic). We shall now derive the p.e. of X. We can imagine that the n items are selected from the population one at a time, with replacement. Since the n selections are made independently of each other, the probability that the first item will be of type ii' the second item of type ;2. and so on is simply Pi Pi . . . Pi ' Therefore, the probability that the sequence of n outcomes will consisr of exactly Xl items of type I , X 2 items of type 2, and so on, selected in a particular prespecijied order. is- pflPi >:2 P'/. It follows that k ) is equal to the probability of obtaining exactly Xi items of type { (i :, 1, the probability pflp�� . . . Pk" multiplied by the total number of different ways in which the order of the n items can be specified. From the discussion given in Sec. 1.9, it fol1ows that the total number of different ways in which n items can be arranged when there are Xi items of type i (i = 1, . . . , k ) is given by the multinomial coefficient .
"
"
•
•
•
. . . •
•
n! Hence,
(I)
298
Special DlatribuUons
For any vector x following relation: I( x l n , p )
=
=
( x"
Pr( X = x)
. . . , x, ),
=
the p.r. I(x I n, p) of X is defined by the
Pr( X,
= x"
. . . , Xk
= x,
If X I ' _ _ _ , xk are nonnegative integers such that follows from Eq. (1) that
XI
+
).
- - - + Xk
=
n, then it (2)
Furthermore, I(x I n , p) = 0 for any other vector x.
Example 1: A ttendance at a Baseball Game. Suppose that 23 percent of the people attending a certain baseball game live within 10 miles of the stadium; 59 percent live between 10 and 50 miles from the stadium; and 18 percent live more than 50 miles from the stadium. Suppose also that 20 people are selected at random from the crowd attending the game. We shall determine the probability that seven of the people selected live within 10 miles of the stadium, eight of them live between 10 and 50 miles from the stadium, and five of them live more than 50 miles from the stadium_ We shall assume that the crowd attending the game is so large that it is . irrelevant whether the 20 people are selected with or without replacement. We can therefore assume that they were selected with replacement. It then follows from Eq. (1) or Eq. (2) that the required probability is 20! 7! 8 ' 5 ! ( 0.23 ) 7( 0.59 )'( 0.18 )'
=
0.0094.
0
Relation Between the Multinomial and Binomial Distributions When the population being sampled contains only two different types of items, that is, when k = 2. the multinomial distribution reduces to the binomial distri bution. The truth of this relationship can be demonstrated as foUows: Suppose that, for k - 2, the random vector X - ( Xl' X2 ) has a multinomial distribution with parameters IJ and p = ( P l' P2 )' Then it must be true that X2 - n - Xl and P 2 = 1 - P l ' Therefore. the random vector X is actually determined by the single random variable Xl' and the distribution of Xl depends only on the parameters n and PI - Furthermore, since Xl denotes the total number of items of type 1 that are selected in n Bernoulli trials, when the probability of selection on each trial is PI' it follows that Xl has a binomial distribution with parameters n and PtMore generally, for any given value of k (k 2, 3, . . . ); suppose that the random vector X = ( Xl' _ _ _ , Xk ) has a multinomial distribution with parameters =
r
5.1 1 . The Multinomial Distribution
r f
t
299
and p = ( P l p d . Since Xi can be regarded as the total number of items of type i that are selected in n Bernoulli trials, ",hen the probability of selection on each trial is Pi' it follows that the marginal distribution of each variable Xi ! (i 1, . . . , k ) must be a binomial distribution with parameters n and Pi -
r I
• . . . •
I ' ll
r
=
r
t Means, Variances, and Covariances
� r Suppose that a random vector X has a multinomial distribution with parameters � n and p. Since the marginal distribution of each component X, is a binomial
t>. distribution with parameters ,
f ,
£ ( X,)
-
and
lip,
n
and Pi' it follows that
Var( Xi ) - nPi ( 1
-
p, )
1, . . . , k.
(3)
the covariance
any
for i
=
t A similar of �r two different componentscanXIbeandusedXJ .toSince the sum Xi +of Xj can be regarded as i the tOlal number of items of either type i or type j that are selected in n Bernoulli � trials, when the probability of selection on each trial is Pi Pj' it follows that argument
derive the value
+
r Xi + Xj has a binomial distribution with parameters n and Pi + Pi " Hence,
(4) However, it is also true that Var( Xi +
XJ
-
Var( Xi ) + Var( X, )
+
2 Cov( X, . Xj )
- np, ( l - Pi ) + IIp, ( l - Pj ) + 2 Cov( X,. Xj ) .
(5 )
By equating the right sides of (4) and (5). we obtain the following result: Cov( Xi ' Aj) r
r i
-
- np,Pj '
(6)
Together. Eqs. (3) and (6) specify the values 01 the. means, the variances, and the covariances ror the multinomial distribution wiSh parameters n and p.
� ------
I EXERCISES • ,
f
F is a continuous dJ. on the real line; and let at and 02 be numbers such that F(at) 0.3 and F(a,) - 0.8. If 25 observations are
1. Suppose that
-
selected at random from the distribution for which the dJ. is F, what is the probability that six of the observed values will be less than ah len of the observed values will be between at and a 2 , and nine of the observed values will be grealer than 02?
300
Special DlatribuUons
2. If five balanced dice are rolled. what is the probability that the number 1
and
the number 4 will appear the same number of times? 3. Suppose that a die is loaded so that each of the numbers 1, 2, 3, 4, 5 , and 6 has a different probability of appearing when the die is roBed. For i = 1 6, let p, denote the probability that the number i will be obtained; and suppose that PI � 0.1 1 , p, � 0.30, p, 0.22, p, � 0.05, p, - 0.25, and p, - 0.07. Suppose also that the die is to be rolled 40 times. Let XI denote the number of rolls for which an e en number appears, and let X2 denote the number of rolls for which either the number 1 or the number 3 appears. Find the value of Pr( XI 20 and X2 15). 4. Suppose that 16 percent of the students in a certain high school are freshmen, 14 percent are sophomores, 38 percent are juniors, and 32 percent are seniors. If 1 5 students are selected at random from the school, what is the probability that at least 8 will be either freshmen or sophomores? 5. In Exercise 4, let XJ denote the number of juniors in the random sample of 1 5 students, and let X4 denote the number of seniors in the sample. Find the value of £( X, - X,) and the value of Var( X, - X,). 6. Suppose that the random variables Xl ' Xk are independent, and that Xi has a Poisson distribution with mean Ai ( i = 1 , . . , k). Show that for any fixed positive integer n, the conditional distribution of the random vector ' X.lJ. given that E7_ 1 X; = n, is a multinomial distribution with X ( Xl parameters n and p ( p 1, . . . , P k ), where • . . . •
�
v
�
�
• . . .
=
P,· �
5.12.
.
• . . .
-
for i
=
l , . . . , k.
THE BIVARIATE NORMAL DISTRIBUTION
;
Definition of the BIvarIate Normal DIstributIon
.
Suppose that 21 and Z2 are independent random variables each of which has a , standard normal distribution. Then the joint p.d.f. g(Z I ' Z2) of 21 and 22 is .1 specified (or any values of Z I and Z 2 by the equation
( 1) For any constants "' I ' JL2' °1, 02 ' and p such that 00 < l1 i < 00 ( i = 1 , 2), > 0 (i = 1. 2), and - 1 < p < 1. we shall now define two new random variables -
or
I ,
,
i
I
S.12. The Bivariate Normal
Distribution
301
Xl and X2 as follows:
x, �
[
a, p Z, + ( 1 -
,)1/' Z, 1
P
(2) + 1' , .
We shaU derive the joint p.d.f. !(x1, x,) o[ X, and X,. The transformation fwm ZI amI Z2 Lo Xl and X2 is a linear tJ·ansformation; and it will be found that the determinant 6. of the matrix of coefficients of 21 and 1 22 has the value 6. - (1 - p2) /2(J1(J2 . Therefore, as discussed in Sec. 3.9, the Jacobian J of the inverse transformation from XI and X2 to 21 and 22 is
1 (1 - p') '/' a1a,
( 3)
Since J > 0, the value o[ IJI is equal to the value o[ J itseif. If the relations (2) are solved for ZI and 22 in terms of XI and X2, then the joint p.d.f. !(xl, x 2 ) can be obtained by replacing Zt and Z 2 in Eq. (1) by their expressions in terms of x, and x " and then multiplying by IJI. It can be shown that the result is, for - 00 < Xl < 00 and - 00 < X 2 < 00,
When the joint p.d.f. of two random variables Xl and X2 is of the form in Eq. (4), it is said that XI and X2 have a bivariate normal distribuliol1. The means and the variances of the bivariate normal distribution specified by Eq. (4) are easily derived from the definitions in Eq. (2). Since 21 and 22 are independent and each has mean 0 and variance I, it follows that E(X1) = ""'1' E ( X2) = fJ.2' Var( X,) � a?, and Var(X,) � af. Furthermore, it can be shown by using Eq. (2) that Cov( XI ' X2 ) = P(Jl(J2' Therefore, the correlation of Xl and X2 is simply p. In summary, if Xl and X2 have a bivariate normal distribution for which the p.d.f. is specified by Eq. (4), then
£( Xi) � 1' ,
and
Also,
p ( X" X2)
-
p.
Var(X,) � a,'
[or i
�
1,2.
!
302
Special DlstrlbuUonl
It has been convenient for us to introduce the bivariate normal distribution as the joint distribution of certain linear combinations of independent random variables having standard normaJ distributions. It should be empbasized, how ever, that the bivariate normal distribution arises directly and naturally in many practical problems. For example, for many populations, the joint distribution of two physical characteristics such as the heights and the weights of the individuals in the population will be approximate1y a bivariate normal distribution. For other populations. the joint distribution of the scores of the individuals in the popula tion on two related tests will be approximately a bivariate normal distribution.
Marginal and CondlUonsl Distributions MarginllJ Distributions. We shaH continue to assume that the random variables Xl and X2 have a bivariate normal distribution and their joint p.d.f. is specified by Eq. (4). In the study of the properties of this distribution, it will be convenient to represent Xl and X2 as in Eq. (2), where ZI and 22 are independent random variables with standard normal distributions. In particular, since both Xl and Xl are linear combinations of 21 and Z2. it follows from this representation and from Corollary 1 of Sec. 5.6 that the marginal distributions of both Xl and X, are also normal distributions. Thus, for i = 1, 2, the marginal distribution of Xi is a normal distribution with mean JL j and variance al. Independence and CorrekltiolL If Xl and X2 are uncorre1ated. then p = O. In this case, it can be seen from Eq. (4) that the joint p.d.f. !(xl, x2) factors into the product of the marginal p.d.f. of Xl and the marginal p.d.f. of X,. Hence, Xl and Xl arc independent, and the following result has been established: Two random variables Xl and X2 that have a bivariate normal distribution are independent if and only if they are uncorrelated. We have already seen in Sec. 4.6 that two random variables Xl and X2 with an arbitrary joint distribution can be uncorrelated without being independent. Conditional Distributions. The condilional distribution of Xl given that Xl = Xl can also be derived from the representation in Eq. (2). If XI = Xl. then 21 (Xl J.LI)/OI· Therefore, the conditional distribution of X2 given that Xl = Xl is the same as the conditional distribution of =
-
(5) Since Z2 has a standard normal distribution and is independent of Xl' it follows from (5) that the conditional distribution of X2 given that Xl = Xl is a nonnal
; 5.12.
The Bivariate Normal Di,tribution
303
distribution for which the mean is
(6) and the variance is (1 - p2 )o{. The conditional distribution of XI given that X2 = X l cannot be derived so easily from Eq. (2) because of the different ways in which 21 and 22 enter Eq. (2). However. it is seen from Eq. (4) that the joint p.d.L I(x,. x , ) is symmetric in the two variables (Xl - f.L , )/o ] and ( X 2 - J.L2)/o2- Therefore, it follows that the conditional distribution of Xl given that X2 = Xl can be found from the conditional distribution of X2 given that Xl = XI (this distribution has just been derived) simply by interchanging Xl and x 2 • interchanging II I and JL 2 . and interchanging 0 , and 02. Thus, the conditional distribution of XI given that Xl = x 2 must be a normal distribution for which the mean is
(7) and the variance is (1 - P2)of. We have now shown that each marginal distribution and each conditional distribution of a bivariate normal distribution is a univariate normal distribution. Some particuJar features of the conditional distribution of X2 given that Xl - Xl should be noted. If p "* 0, then E(X2 1 Xl ) is a lin�ar function of the given value Xl' If P > 0, the slope of this linear function is positive. If p < O. the slope of the function is negative. However, the variance of the conditional distribution of X2 given that Xl - Xl is (1 - p2)al, and its value does not depend on the given value Xl' Furthermore, this variance of the conditional distribution of X2 is smaller than the variance at of the marginal distribution of
X2 · Example
I: Predicti"g a Person's Weight.
Let Xl denote the height of a person selected at random from a certain population, and let ,Xl denote the weight of the person. Suppose that these random variableS have a bivariate normal distribution • for which the p.d.L is specified by Eq. (4) and that the person's weight Xl must be predicted. We shall compare the smaUest M.S.E. that can be attained if the person's height Xl is known when his weight must be predicted with the smallest M.S.E. that can be attained if his height is not known. If the person's height is not known, then the best prediction of his weight is the mean E( X2) = ,",2; and the M.S.E. of this prediction is the variance or If it is known that the person's height is Xl' then the best prediction is the mean E( X2 I xl) of the conditional distribution of X2 given that XI XI� and the M.S.E. of this prediction is the variance (1 - pl)al of that conditional distribu=
304
Special Distributions
tion. Hence, when the value of XI is known, the M.S.E. is reduced from ol to- (l - p' ) af. 0
Si.nce the variance of the conditional distribution in Example 1 is (1 - p2)o£. regardless of the known height x I of the person, it follows that the difficulty of predicting the person's weight is the same for a tall person, a short person, or a person of medium height. Furthermore, since the variance (1 - p2 )al decreases as Ipl increases, it follows that it is easier to predict a person's weight from his height when the person is selected from a population in which height and weight are highly correlated.
Example 2: Determining a Marginal Distribution.
Suppose that a random variable X has a norma] distribution with mean p. and variance 0'2; and that for any number x, (he conditional distribution of another random variable Y given that X = x is a normal distribution with mean x and variance 72. We shall determine the marginal distribution of Y. We know that the marginal distribution of X is a normal distribution and that the conditional distribution of Y given that X = x is a normal distribution for which the mean is a linear function of x and the variance is constant. It follows that the joint distribution of X and Y must be a bivariate normal distribution. Hence, the marginal distribution of Y is also a normal distribution. The mean and the variance of Y must be determined. The mean of Y is E ( Y ) - E [ E( Y I X)] - £ ( X ) -
I t.
Furthermore, by Exercise 10 at the end of Sec. 4.7 , Var( Y ) - £ [ Var( Y I X)] + Var [ E ( Y I X)] - E ( " ' ) + Vari X )
Hence, the distribution of Y is a normal distribution with mean ,... and variance 7 2 + 0 2. 0
Linear Combinations Suppose again that two random variables XI and Xl have a bivariate normal distribution for which the p.d.f. is specified by Eq. (4). Now consider the random variable Y a l XI + a 2 X2 + h, where al• ti l ' and h are arbitrary given con stants. Both Xl and X can be represented, as in Eq. (2). as linear combinations 2 of independent and normally distributed random variables Zl and 2 , Since Y is 2 a linear combination of Xl and X2• it rollows that Y can also be represented as a linear combination of 21 and 22, Therefore, by Corollary 1 of Sec. 5.6, the =
5.12. The Bivariate Normal DistrlbuUon
305
distribution of Y will also be a normal distribution. Thus, the following im portant property has been established.
If two random variables XI and X2 have a bivariate normal distribution, then any linear combination Y = a l X1 + a2X2 + b will have a normal distribution.
The mean and variance of Y are as fo1Jows:
.nd V.r( Y ) � arVar(X,)
+
alVar(X,) + 2 a , a ,Cov( X, X, ) .
Example 3: Heights of HUl·mlnOs and Wives. Suppose that a married couple is selected at random from a certain population of married couples, and that the joint distribution of the height of the wife and the height of her husband is a bivariate normal distribution. Suppose that the heights of the wives have a mean of 66.8 inches and a standard deviation of 2 inches; that the heights of the husbands have a mean of 70 inches and a standard deviation of 2 inches; and that the correlation between these two heights is 0.68. We shall determine the probability that the wife will be taller than her husband. If we let X denote the height of the wife and let Y denote the height of her husband, then we must determine the value of Pr( X - Y > 0). Since X and Y have a bivariate. normal distribution, it follows that the distribution of X - Y will be a norma) distribution for which the mean is E ( X - Y ) - 66.8 - 70 � - 3 .2 and the variance is V.r ( X - Y )
�
�
Vari X ) + V.r( Y ) - 2 Cov(-"'. y )
4+4
-
2(0.68)(2)(2)
=
2.56.
Hence the standard deviation of X - Y is 1 .6. The random variable Z � ( X - Y + 3.2)/(1.6) will have a standard normal distribution. It can be found from the table given at the end of this book that Pr( X - Y > 0) � Pr(Z > 2) �
0.0227.
- 1
-
(2)
306
Special Distributions
Therefore, the probabiLity that the wife will be taller than her husband is 0.0227. o
EXERCLSES 1. Suppose that two different tests A and B are to be given to a student chosen at random from a certain population. Suppose also that the mean score on test A is 85 and the standard deviation is 10; that the mean score on test B is 90 and the standard deviation is 16; that the scores on the two tests have a bivariate normal distribution; and that the correlation of the two scores is 0.8. ' If the student s score on test A is 80, what is the probability that his score on test B will be higher than 90? 2. Consider again the two tests A and B described in Exercise ] . If a student is chosen at random. what is the probability that the sum of his scores on the two tests will be greater than 200? 3. Consider again the two tests A and B described in Exercise 1. If a student is chosen at random, what is the probability that his score on test A will be higher than his score on test B? 4. Consider again the two tests A and B described in Exercise 1. If a student is chosen at random and his score on test B is ] 00 what predicted value of his score on test A has the smallest M.S.E. and what is the value of this minimum M.S.E.? 5. Suppose that the random variables XI and X2 have a bivariate normal distribution for which the joint p.d.f. is specified by Eq. (4). Determine the value of the constant b for which Var(XI + bX,) will be a minimwn. 6. Suppose that Xl and X2 have a bivariate normal distribution for which E(XI I X,) � 3.7 - 0.15X" E(X, I XI) � 0.4 - 0.6X" and Var(X, 1 X,) 3.64. Find the mean and the variance of Xl. the mean and the variance of X2• and the correLation of XI and X,. 7. Let !(x" x,) denote the p.d.f. of the bivariate normal distribution specified by Eq. (4). Show that the maximum value of !(x1• x 2 ) is attained at the point at which X I = #'1 and x 2 = JJ2' 8. Let !(XI' x , ) denote the p.d.f. of the bivariate normal distribution specified by Eq. (4), and let k be a constant such that �
0 0, Pr( XI > kX, ) =
,� �-,fJ -, kfJ I + fJ, .
17. Suppose that 15,000 people in a city with a population of 500,000 are watching a certain television program. I f 200 people in the city are contacted at random, what is the probability, approximately, that fewer than four of them are watching the program? 18. Suppose that it is desired to estimate the proportion of persons in a large population who have a certain characteristic. A random sample of 100 persons is selected from the population without replacement, and the propor tion X of persons in the sample who have the characteristic is observed �Show that, no matter how large the population is, the standard deviation of X is at most 0.05.
19. Suppose that X has a binomial distribution with parameters n and p; and that Y has a negative binomial distribution with parameters r and p, where r is a positive integer. Show that Pr( X < r ) - Pr(Y > n - r ) by showing that both the left side and the right side of this equation can be regarded as the probability of the same event in a sequence of Bernoulli tri�ls with probabil ity p of success.
20. Suppose that X has a Poisson distribution with mean AI; and that Y has a gamma distribution with parameters a = k and /3 = A , where k is a positive integer. Show that Pr( X � k ) = Pr( Y .; I ) by showing that both the left side and the right side of this equation can be regarded as the probability of the same event in a Poisson process in which the expected number of occurrences per unit time is A.
21. Suppose that X has a Poisson distribution with a very large mean A. Explain why the distribution of X can be approximated by a normal distribution with mean A and variance A. In other words, explain why ( X - A)/AI/' converges in distribution, as A - 00, to a random varial:fIe having a standard normal distribution.
22. Suppose that X has a Poisson distribution with mean 10. Use the central limit theorem, both without and with the correction for continuity, to determine an approximate value for Pr(8 .::G;; X � 1 2). Use the table of Poisson probabilities given in the back of this book to assess the quality of these approximations.
23. Suppose that X is a random variable having a continuous distribution with 1. Let the failure rate p.d.f. f(x) and dJ. F(x), and for which Pr( X > 0) =
310
Special
hex)
Distributions
be as defined in Exercise 17 of Sec.
5.9. Show that
24. Suppose that 40 percent of the students in a large population are freshmen. 30 percent are sophomores. 20 percent are juniors. and 10 percent are seniors. Suppose that !O students are selected at random from the population; and let Xl' X,. X" X. denote. respectively. the numbers of freshmen. sophomores. juniors. and seniors that are obtained. (a) Determine p( X;.
"j) for each pair of values i and
j (i < j).
(b) For wbat values of i and j ( i < j) is p ( X;. X) most negative? (c) For what values of i and j(i < j ) is p ( X;. X) closest to O?
25. Suppose that Xl and X, have a bivariate normal distribution with means and Il,. variances Xl - 3X,.
�l
of and oi. and correlation p. Determine the distribution of
26. Suppose that X has a standard normal distribution. and that the conditional distribution of Y given X is a normal distribution with mean 2 X variance 12. Determine the marginal distribution of p ( X. Y).
Y
3 and
and the value of
27. Suppose that Xl and X, have a bivariate normal distribution with £( X,) Evaluate
£( X?X, ).
=
O.
Esti mation
6.1 .
STATISTICAL INFERENCE
Nature of Statistical Inference In the first five chapters of this book we discussed the theory and methods of probability. In the last five chapters we shall discuss the theory and methods of statistical inference. A prOblem of statistical inference or, more simply. a statistics problem is a problem in which data that have been generated in accordance with some unknown probability distribution must be analyzed and some type of inference about the unknown distribution must be made. In other words, in a statistics problem there are two or more probability distributions which might have generated some experimental data. In most real problems, there are an infinite number of different possible distributions which might have generated the data. By analyzing the data, we attempt to learn about the unknown distribution, to make some inferences about certain properties o.f the distribution, and to determine the relative likelihood that each possible distribution is actually the • correct one.
Parameters In many statistics problems, the probability distribution that generated the experimental data is completely known except for the values of one or more parameters. For example. it might be known that the length of life of a certain type of nuclear pacemaker has an exponential distribution with parameter /3, as
311
312
Estimation
defined in Sec. 5.9, but the exact value of f1 might be unknown. If the lifetimes 0 several pacemakers of this type can be observed, then from these observed vaJue and any other relevant information that might be available, it is possible to mak, an inference about the unknown value of the parameter p. For example, we migh wish to give our best estimate of the value of p, or to specify an interval in whicl we think the value of {3 is likely to lie, or to decide whether or not f1 is smalle than some specified value. It is typically not possible to determine the value of I I!xactly. As another example, suppose that the distribution of the heights of th individuals in a certain population is known to be a normal distribution wit mean f.l. and variance 02, but that the exact values of f.l. and 0 2 are unknown. I we can observe the heights of the individuals in a random sample selected frO! the given population, then from these observed heights and any other informatio we might have about tbe distribution of heights, we can make an inference abm the values of f.l. and 02. In a problem of statisticaJ inference, any characteristic of the distributio generating the experimental data which has an unknown value, such as the mea f.l. or the variance 0 2 in the example just presented, is caJled a parameter of t1 distribution. The set g of all possible values of a parameter or of a vector I parameters (61" . " 6k) is called the parameter space. In the first example we presented, the parameter {J of the expoilenti distribution must be positive. Therefore, unless certain positive values of {3 can 1 explicitly ruled out as possible values of {3, the parameter space will be the s of all positive numbers. In the second example we presented, the mean JL and tI variance 0 2 of the normal distribution can be regarded as a pair of parametel Here the value of f.l. can be any real Dumber and (12 must be positive. TherefoJ the parameter space n can be taken as the set of all pairs (f.l., (1 2 ) such tb 00 < f.l. < 00 and 0 2 > O. More specifically, if the normal distribution in it example represents the distribution of the heights in inches of the individuaJs some particular population, we might be certain that 30 < f.l. < 100 and 02 < � In this case, the parameter space n could be talen as the smaller set of all pa (jl, a 2 ) such that 30 < f' < 100 and 0 < 0 2 < 50. The important feature of the parameter space Q is that it must contain possible values of the parameters in a given problem, in order that we can certain that the actual value of the vector of parameters is a point in n .
(J
g
-
Statistical Decision Problems In many statistics problems, after the experimentaJ data have been analyzed, must choose a decision from some avaiJable class of decisions with the prope ahat the consequences of each available decision depend on the unknown va1ue some parameter. For example. we might have to estimate (he unknown value c
6.2. Prior and Posterior Distributions
31 3
parameter 8 when the consequences depend on how close our estimate is to the correct value (J. As another example, we might have to decide whether the unknown value of (J is larger or smaller than some specified constant when the consequences depend on whether the decision is right or wrong.
Experimental Design In some statistics problems. we have some control over the type or the amount of experimental data that wil1 be collected. For example, consider an experiment to determine the mean tensile strength of a certain type of alloy as a function of the pressure and temperature at which the alloy is produced. Within the limits of certain budgetary and time constraints, it may be possible for the experimenter to choose the levels of pressure and temperature at which experimental specimens of the alloy are to be produced, and also to specify the number of specimens to be produced at each of these levels. Such a problem. in which the experimenter can choose (at least to some extent) the particular experiment that is to be carried out, is called a problem of experimental design. Of course, the design of an experiment and the statistical analysis of the experimental data are closely related. One cannot design an effective experiment without considering the subsequent statistical analysis that is to be carried out on the data that will be obtained; and one cannot carry out a meaningful statistical analysis of experimental data without considering the particular type of experiment from which the data were derived.
References In the remainder of this book we shall consider many different problems of statistical inference, statistical decision, and experimental design. Some books that discuss statistical theory and methods at about the same level as they will be discussed in this book were mentioned at the end of Sec. 1.1. Some statistics books which are written at a more advanced level are Cramer (1946), Rao (1973), Zacks (1971) and (1981), DeGroot (1970), Ferguson (1967), Lehmann (1959, 1983), Bickel and Doksum (1977), and Rohatgi (1976). 6,2,
PRfOR AND POSTERIOR DISTRIBUTIONS
The Prior Distribution Specifying a Prior Distribution. Consider a problem of statistical inference in which observations are to be taken from a distribution for which the p.d.f. or the p.f. is f(x 1 8 ), where 8 is a parameter having an unknown value. It is assumed
314
Estimation
that the unknown value of 8 must lie in a specified parameter space Q. The problem of statistical inference introduced in Sec. 6.1 can be roughly described as the problem of trying to determine, on the basis of observations from the p.d.f. or p.r. f(x 1 8), where in the parameter space Q the actual value of () is likely to lie. In many problems, before any observations from f(x I 8) are available, the experimenter or statistician will be able to summarize his previous information and knowledge about where in Q the value of (J is likely to lie by constructing a probability distribution for B on the set n. In other words, before any experimen tal data have been collected or observed, the experimenter's past experience and knowledge will lead him to believe that 8 is more likely to lie in certain regions of Q than in others. We shall assume that the relative likelihoods of the different regions can be expressed in terms of a probability distribution on n. This distribution is called the prior distribution of () because it represents the relative likelihood that the true value of 8 lies in each of various regions of Q prior 10 the observation of any values from f(x 1 8 ).
Controversial Nature of Prior Distributions. The concept of a prior distribution is very controversial in statistics. This controversy is closely related to the controversy in regard to the meaning of probability, which was discussed in Sec. 1.2. Some statisticians believe that a prior distribution can be chosen for the parameter θ in every statistics problem. They believe that this distribution is a subjective probability distribution in the sense that it represents an individual experimenter's information and subjective beliefs about where the true value of θ is likely to lie. They also believe, however, that a prior distribution is no different from any other probability distribution used in the field of statistics, and that all the rules of probability theory apply to a prior distribution. It is said that these statisticians adhere to the Bayesian philosophy of statistics.

Other statisticians believe that in many problems it is not appropriate to speak of a probability distribution of θ because the true value of θ is not a random variable at all but is rather a certain fixed number whose value happens to be unknown to the experimenter. These statisticians believe that a prior distribution can be assigned to a parameter θ only when there is extensive previous information about the relative frequencies with which θ has taken each of its possible values in the past. It would then be possible for two different scientists to agree on the correct prior distribution to be used. For example, suppose that the proportion θ of defective items in a certain large manufactured lot is unknown. Suppose also that the same manufacturer has produced many such lots of items in the past and that detailed records have been kept about the proportions of defective items in past lots. The relative frequencies for past lots could then be used to construct a prior distribution for θ.

Both groups of statisticians agree that whenever a meaningful prior distribution can be chosen, the theory and methods to be described in this section are applicable and useful. In this section and Secs. 6.3 and 6.4, we shall proceed under the assumption that we can assign to θ a prior distribution which represents the probabilities that the unknown value of θ lies in various subsets of the parameter space. Beginning in Sec. 6.5, we shall consider techniques of estimation that are not based on the assignment of a prior distribution.
Discrete and Continuous Prior Distributions. In some problems, the parameter θ can take only a finite number of different values or, at most, an infinite sequence of different values. The prior distribution of θ will therefore be a discrete distribution. The p.f. ξ(θ) of this distribution is called the prior p.f. of θ. In other problems, the parameter θ can take any value on the real line or in some interval of the real line, and a continuous prior distribution is assigned to θ. The p.d.f. ξ(θ) of this distribution is called the prior p.d.f. of θ.

Example 1: Fair or Two-Headed Coin. Let θ denote the probability of obtaining a head when a certain coin is tossed; and suppose that it is known that the coin either is fair or has a head on each side. Therefore, the only possible values of θ are θ = 1/2 and θ = 1. If the prior probability that the coin is fair is p, then the prior p.f. of θ is ξ(1/2) = p and ξ(1) = 1 − p. □

Example 2: Proportion of Defective Items. Suppose that the proportion θ of defective items in a large manufactured lot is unknown, and that the prior distribution assigned to θ is a uniform distribution on the interval (0, 1). Then the prior p.d.f. of θ is

ξ(θ) = 1   for 0 < θ < 1,
       0   otherwise.                                        (1)

□

Example 3: Parameter of an Exponential Distribution. Suppose that the lifetimes of fluorescent lamps of a certain type are to be observed, and that the distribution of the lifetime of any particular lamp is an exponential distribution with parameter β, as defined in Sec. 5.9. Suppose also that the exact value of β is unknown and on the basis of previous experience the prior distribution of β is taken as a gamma distribution for which the mean is 0.0002 and the standard deviation is 0.0001. We shall determine the prior p.d.f. of β.

Suppose that the prior distribution of β is a gamma distribution with parameters α₀ and β₀. It was shown in Sec. 5.9 that the mean of this distribution is α₀/β₀ and the variance is α₀/β₀². Therefore, α₀/β₀ = 0.0002 and √α₀/β₀ = 0.0001. It can now be found that α₀ = 4 and β₀ = 20,000. It follows from Eq. (7) of Sec. 5.9 that the prior p.d.f. of β for β > 0 is as follows:

ξ(β) = [(20,000)⁴ / 3!] β³ e^(−20,000β).                     (2)

Also, ξ(β) = 0 for β ≤ 0. □
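The following short computation, written in Python with SciPy (which is, of course, not part of this text), sketches the calculation in Example 3: it recovers the hyperparameters α₀ and β₀ from the stated prior mean and standard deviation and then evaluates the prior p.d.f. of Eq. (2). The evaluation points are arbitrary illustrative choices.

from scipy.stats import gamma

# Recover the gamma hyperparameters of Example 3 from the stated mean and
# standard deviation:  mean = a0/b0,  sd = sqrt(a0)/b0.
mean, sd = 0.0002, 0.0001
a0 = (mean / sd) ** 2          # a0 = 4
b0 = mean / sd ** 2            # b0 = 20,000
print(a0, b0)                  # 4.0 20000.0

# Evaluate the prior p.d.f. of Eq. (2) at a few values of beta.
# SciPy parametrizes the gamma distribution by shape a0 and scale 1/b0.
prior = gamma(a=a0, scale=1.0 / b0)
for beta in (0.0001, 0.0002, 0.0004):
    print(beta, prior.pdf(beta))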
The Posterior Distribution

Suppose now that the n random variables X₁, …, Xₙ form a random sample from a distribution for which the p.d.f. or the p.f. is f(x | θ). Suppose also that the value of the parameter θ is unknown and the prior p.d.f. or prior p.f. of θ is ξ(θ). For simplicity, we shall assume that the parameter space Ω is either an interval of the real line or the entire real line; that ξ(θ) is a prior p.d.f. on Ω, rather than a prior p.f.; and that f(x | θ) is a p.d.f., rather than a p.f. However, the discussion that will be given here can be easily adapted for a problem in which ξ(θ) or f(x | θ) is a p.f.

Since the random variables X₁, …, Xₙ form a random sample from the distribution for which the p.d.f. is f(x | θ), it follows from Sec. 3.7 that their joint p.d.f. fₙ(x₁, …, xₙ | θ) will be given by the equation

fₙ(x₁, …, xₙ | θ) = f(x₁ | θ) ⋯ f(xₙ | θ).                  (3)

If we use the vector notation x = (x₁, …, xₙ), then the joint p.d.f. in Eq. (3) can be written simply as fₙ(x | θ). Since the parameter θ itself is now regarded as having a distribution for which the p.d.f. is ξ(θ), the joint p.d.f. fₙ(x | θ) should properly be regarded as the conditional joint p.d.f. of X₁, …, Xₙ for a given value of θ. If we multiply this conditional joint p.d.f. by the p.d.f. ξ(θ), we obtain the (n + 1)-dimensional joint p.d.f. of X₁, …, Xₙ and θ in the form fₙ(x | θ)ξ(θ). The marginal joint p.d.f. of X₁, …, Xₙ can now be obtained by integrating this joint p.d.f. over all values of θ. Therefore, the n-dimensional marginal joint p.d.f. gₙ(x) of X₁, …, Xₙ can be written in the form

gₙ(x) = ∫_Ω fₙ(x | θ)ξ(θ) dθ.                               (4)

Furthermore, the conditional p.d.f. of θ given that X₁ = x₁, …, Xₙ = xₙ, which we shall denote by ξ(θ | x), must be equal to the joint p.d.f. of X₁, …, Xₙ and θ divided by the marginal joint p.d.f. of X₁, …, Xₙ. Thus, we have

ξ(θ | x) = fₙ(x | θ)ξ(θ) / gₙ(x)    for θ ∈ Ω.              (5)

The probability distribution over Ω represented by the conditional p.d.f. in Eq. (5) is called the posterior distribution of θ because it is the distribution of θ after the values of X₁, …, Xₙ have been observed. Similarly, the conditional p.d.f. of θ in Eq. (5) is called the posterior p.d.f. of θ. We may say that a prior p.d.f. ξ(θ) represents the relative likelihood, before the values of X₁, …, Xₙ have been observed, that the true value of θ lies in each of various regions of Ω; and that the posterior p.d.f. ξ(θ | x) represents this relative likelihood after the values X₁ = x₁, …, Xₙ = xₙ have been observed.
The Likelihood Function

The denominator on the right side of Eq. (5) is simply the integral of the numerator over all possible values of θ. Although the value of this integral depends on the observed values x₁, …, xₙ, it does not depend on θ and it may be treated as a constant when the right side of Eq. (5) is regarded as a p.d.f. of θ. We may therefore replace Eq. (5) with the following relation:

ξ(θ | x) ∝ fₙ(x | θ)ξ(θ).                                   (6)

The proportionality symbol ∝ is used here to indicate that the left side is equal to the right side except possibly for a constant factor, the value of which may depend on the observed values x₁, …, xₙ but does not depend on θ. The appropriate constant factor which will establish the equality of the two sides in the relation (6) can be determined at any time by using the fact that ∫_Ω ξ(θ | x) dθ = 1, because ξ(θ | x) is a p.d.f. of θ.

When the joint p.d.f. or the joint p.f. fₙ(x | θ) of the observations in a random sample is regarded as a function of θ for given values of x₁, …, xₙ, it is called the likelihood function. In this terminology, the relation (6) states that the posterior p.d.f. of θ is proportional to the product of the likelihood function and the prior p.d.f. of θ.

By using the proportionality relation (6), it is often possible to determine the posterior p.d.f. of θ without explicitly performing the integration in Eq. (4). If we can recognize the right side of the relation (6) as being equal to one of the standard p.d.f.'s introduced in Chapter 5 or elsewhere in this book, except possibly for a constant factor, then we can easily determine the appropriate factor which will convert the right side of (6) into a proper p.d.f. of θ. We shall illustrate these ideas by considering again Examples 2 and 3.
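Before returning to those examples, the relation (6) can also be checked numerically. The sketch below, written in Python with NumPy and SciPy (not part of the text), normalizes the product of the likelihood and a uniform prior on a grid for a small Bernoulli sample and compares the result with the beta posterior that is derived in Example 4 below; the sample size and success count are arbitrary illustrative choices.

import numpy as np
from scipy.stats import beta

# Numerical illustration of the relation (6): the posterior p.d.f. is the
# likelihood times the prior, renormalized so that it integrates to 1 over
# the parameter space.  The data are n Bernoulli observations with y
# successes (hypothetical numbers) and the prior is uniform on (0, 1).
n, y = 10, 3
grid = np.linspace(0.001, 0.999, 999)            # a fine grid over Omega = (0, 1)

likelihood = grid**y * (1.0 - grid)**(n - y)      # f_n(x | theta)
prior = np.ones_like(grid)                        # xi(theta) = 1 on (0, 1)

unnormalized = likelihood * prior
step = grid[1] - grid[0]
posterior = unnormalized / (unnormalized.sum() * step)   # divide by g_n(x)

# Example 4 below shows that the exact posterior is a beta distribution with
# parameters y + 1 and n - y + 1; the grid version agrees closely with it.
exact = beta(y + 1, n - y + 1).pdf(grid)
print(np.max(np.abs(posterior - exact)))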
Example 4: Proportion of Defective Items. Suppose again, as in Example 2, that the proportion θ of defective items in a large manufactured lot is unknown and that the prior distribution of θ is a uniform distribution on the interval (0, 1). Suppose also that a random sample of n items is taken from the lot; and for i = 1, …, n, let Xᵢ = 1 if the ith item is defective and let Xᵢ = 0 otherwise. Then X₁, …, Xₙ form n Bernoulli trials with parameter θ. We shall determine the posterior p.d.f. of θ.

It follows from Eq. (2) of Sec. 5.2 that the p.f. of each observation Xᵢ is

f(x | θ) = θˣ(1 − θ)^(1−x)   for x = 0, 1,
           0                 otherwise.                      (7)

Hence, if we let y = Σᵢ₌₁ⁿ xᵢ, then the joint p.f. of X₁, …, Xₙ can be written in the following form for xᵢ = 0 or 1 (i = 1, …, n):

fₙ(x | θ) = θʸ(1 − θ)^(n−y).                                 (8)
Since the prior p.d.f. ξ(θ) is given by Eq. (1), it follows that for 0 < θ < 1,

ξ(θ | x) ∝ θʸ(1 − θ)^(n−y).                                  (9)

The right side of the relation (9) can be recognized as being, except for a constant factor, the p.d.f. of a beta distribution with parameters y + 1 and n − y + 1. Therefore, the posterior distribution of θ is that beta distribution, and for 0 < θ < 1 the posterior p.d.f. is

ξ(θ | x) = [Γ(n + 2) / (Γ(y + 1)Γ(n − y + 1))] θʸ(1 − θ)^(n−y).   (10)

□

Example 5: Parameter of an Exponential Distribution. Suppose again, as in Example 3, that the lifetimes of fluorescent lamps of a certain type have an exponential distribution with parameter β, and that the prior distribution of β is the gamma distribution whose p.d.f. is given by Eq. (2). Suppose now that the lifetimes X₁, …, Xₙ of a random sample of n lamps are observed, and let y = Σᵢ₌₁ⁿ xᵢ. Since the p.d.f. of each observation is

f(x | β) = β e^(−βx)   for x > 0,                            (11)

the joint p.d.f. of X₁, …, Xₙ has the following form for xᵢ > 0 (i = 1, …, n):

fₙ(x | β) = βⁿ e^(−βy).                                      (12)

Since the prior p.d.f. ξ(β) is given by Eq. (2), it follows that for β > 0,

ξ(β | x) ∝ βⁿe^(−βy) · β³e^(−20,000β) = β^(n+3) e^(−(y+20,000)β).   (13)

A constant factor which does not involve β has been omitted from the right side of the relation (13).
When we compare this expression with Eq. (7) of Sec. 5.9, we can see that, except for a constant factor, it has the same form as the p.d.f. of a gamma distribution with parameters n + 4 and y + 20,000. Since the posterior p.d.f. ξ(β | x) is proportional to fₙ(x | β)ξ(β), it follows that ξ(β | x) must be the p.d.f. of a gamma distribution with parameters n + 4 and y + 20,000. Therefore, for β > 0,

ξ(β | x) = [(y + 20,000)^(n+4) / (n + 3)!] β^(n+3) e^(−(y+20,000)β).   (14)

□
Sequential Observations

In many experiments, the observations X₁, …, Xₙ which form the random sample must be obtained sequentially, that is, one at a time. In such an experiment, the value of X₁ is observed first, the value of X₂ is observed next, the value of X₃ is then observed, and so on. Suppose that the prior p.d.f. of the parameter θ is ξ(θ). After the value x₁ of X₁ has been observed, the posterior p.d.f. ξ(θ | x₁) can be calculated in the usual way from the relation

ξ(θ | x₁) ∝ f(x₁ | θ)ξ(θ).                                   (15)

This p.d.f., in turn, serves as the prior p.d.f. of θ when the value of X₂ is to be observed. Thus, after the value x₂ of X₂ has been observed, the posterior p.d.f. ξ(θ | x₁, x₂) can be calculated from the relation

ξ(θ | x₁, x₂) ∝ f(x₂ | θ)ξ(θ | x₁).                          (16)

We can continue in this way, calculating an updated posterior p.d.f. of θ after each observation and using that p.d.f. as the prior p.d.f. of θ for the next observation. The posterior p.d.f. ξ(θ | x₁, …, xₙ₋₁) after the values x₁, …, xₙ₋₁ have been observed will ultimately be the prior p.d.f. of θ for the final observed value of Xₙ. The posterior p.d.f. after all n values x₁, …, xₙ have been observed will therefore be specified by the relation

ξ(θ | x) ∝ f(xₙ | θ)ξ(θ | x₁, …, xₙ₋₁).                      (17)

Alternatively, after all n values x₁, …, xₙ have been observed, we could calculate the posterior p.d.f. ξ(θ | x) in the usual way by combining the joint p.d.f. fₙ(x | θ) with the original prior p.d.f. ξ(θ), as indicated in Eq. (5). It can be shown (see Exercise 7) that the posterior p.d.f. ξ(θ | x) will be the same regardless of whether it is calculated directly by using Eq. (5) or it is calculated sequentially by using Eqs. (15), (16), and (17). This property was illustrated in Sec. 2.2 for a coin that is known either to be fair or to have a head on each side. After each toss of the coin, the posterior probability that the coin is fair is updated.
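As a small illustration of this equivalence (Python, not part of the text, with a hypothetical data sequence), the beta posterior derived in Example 4 can be updated one Bernoulli observation at a time and compared with the posterior computed from the whole sample at once.

a, b = 1.0, 1.0                      # uniform prior = beta with parameters 1 and 1
data = [1, 0, 0, 1, 1, 0, 1, 0]      # hypothetical sequence of Bernoulli observations

# One sequential update per observation: a defective (x = 1) increases the
# first beta parameter by 1, a nondefective increases the second by 1.
for x in data:
    a, b = a + x, b + (1 - x)

y, n = sum(data), len(data)
print((a, b))                        # parameters after the sequential updates
print((1.0 + y, 1.0 + n - y))        # parameters from the full sample at once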
EXERCISES

1. Suppose that the proportion θ of defective items in a large manufactured lot is known to be either 0.1 or 0.2, and that the prior p.f. of θ is as follows:

   ξ(0.1) = 0.7   and   ξ(0.2) = 0.3.

   Suppose also that when eight items are selected at random from the lot, it is found that exactly two of them are defective. Determine the posterior p.f. of θ.

2. Suppose that the number of defects on a roll of magnetic recording tape has a Poisson distribution for which the mean λ is either 1.0 or 1.5, and that the prior p.f. of λ is as follows:

   ξ(1.0) = 0.4   and   ξ(1.5) = 0.6.

   If a roll of tape selected at random is found to have three defects, what is the posterior p.f. of λ?

3. Suppose that the prior distribution of some parameter θ is a gamma distribution for which the mean is 10 and the variance is 5. Determine the prior p.d.f. of θ.

4. Suppose that the prior distribution of some parameter θ is a beta distribution for which the mean is 1/3 and the variance is 1/45. Determine the prior p.d.f. of θ.

5. Suppose that the proportion θ of defective items in a large manufactured lot is unknown, and that the prior distribution of θ is a uniform distribution on the interval (0, 1). When eight items are selected at random from the lot, it is found that exactly three of them are defective. Determine the posterior distribution of θ.

6. Consider again the problem described in Exercise 5, but suppose now that the prior p.d.f. of θ is as follows:

   ξ(θ) = 2(1 − θ)   for 0 < θ < 1,
          0          otherwise.

   As in Exercise 5, suppose that in a random sample of eight items exactly three are found to be defective. Determine the posterior distribution of θ.

7. Suppose that X₁, …, Xₙ form a random sample from a distribution for which the p.d.f. is f(x | θ), that the value of θ is unknown, and that the prior p.d.f. of θ is ξ(θ). Show that the posterior p.d.f. ξ(θ | x) is the same regardless of whether it is calculated directly by using Eq. (5) or it is calculated sequentially by using Eqs. (15), (16), and (17).

8. Consider again the problem described in Exercise 5, and assume the same prior distribution of θ. Suppose now, however, that instead of selecting a random sample of eight items from the lot, we perform the following experiment: Items from the lot are selected at random one by one until exactly three defectives have been found. If we find that we must select a total of eight items in this experiment, what is the posterior distribution of θ at the end of the experiment?

9. Suppose that a single observation X is to be taken from a uniform distribution on the interval (θ − 1/2, θ + 1/2); that the value of θ is unknown; and that the prior distribution of θ is a uniform distribution on the interval (10, 20). If the observed value of X is 12, what is the posterior distribution of θ?

10. Consider again the conditions of Exercise 9, and assume the same prior distribution of θ. Suppose now, however, that six observations are selected at random from the uniform distribution on the interval (θ − 1/2, θ + 1/2), and that their values are 11.0, 11.5, 11.7, 11.1, 11.4, and 10.9. Determine the posterior distribution of θ.
6.3. CONJUGATE PRIOR DISTRIBUTIONS
Sampling from a Bernoulli Distribution

The Basic Theorem. Certain prior distributions are particularly convenient for use with samples from certain other distributions. For example, suppose that a random sample is taken from a Bernoulli distribution for which the value of the parameter θ is unknown. If the prior distribution of θ is a beta distribution, then for any possible set of observed sample values, the posterior distribution of θ will again be a beta distribution. Specifically, the following result can be established:

Theorem 1. Suppose that X₁, …, Xₙ form a random sample from a Bernoulli distribution for which the value of the parameter θ is unknown (0 < θ < 1). Suppose also that the prior distribution of θ is a beta distribution with given parameters α and β (α > 0 and β > 0). Then the posterior distribution of θ, given that Xᵢ = xᵢ (i = 1, …, n), is a beta distribution with parameters α + Σᵢ₌₁ⁿ xᵢ and β + n − Σᵢ₌₁ⁿ xᵢ.
Proof. Let y = Σᵢ₌₁ⁿ xᵢ. Then the likelihood function, that is, the joint p.f. fₙ(x | θ) of X₁, …, Xₙ, is given by Eq. (8) of Sec. 6.2. Also, the prior p.d.f. ξ(θ) satisfies the following relation:

ξ(θ) ∝ θ^(α−1)(1 − θ)^(β−1)   for 0 < θ < 1.

Since the posterior p.d.f. ξ(θ | x) is proportional to the product fₙ(x | θ)ξ(θ), it follows that

ξ(θ | x) ∝ θ^(α+y−1)(1 − θ)^(β+n−y−1)   for 0 < θ < 1.

The right side of this relation can be recognized as being, except for a constant factor, equal to the p.d.f. of a beta distribution with parameters α + y and β + n − y. Therefore, the posterior distribution of θ is as specified in the theorem. □
Updating the Posterior Distribution. One implication of Theorem 1 is the following: Suppose that the proportion θ of defective items in a large shipment is unknown; that the prior distribution of θ is a beta distribution with parameters α and β; and that n items are selected one at a time at random from the shipment and inspected. If the first item inspected is defective, the posterior distribution of θ will be a beta distribution with parameters α + 1 and β. If the first item is nondefective, the posterior distribution will be a beta distribution with parameters α and β + 1. The process can be continued in the following way: Each time an item is inspected, the current posterior beta distribution of θ is changed to a new beta distribution in which the value of either the parameter α or the parameter β is increased by one unit. The value of α is increased by one unit each time a defective item is found, and the value of β is increased by one unit each time a nondefective item is found.

The family of beta distributions is called a conjugate family of prior distributions for samples from a Bernoulli distribution. If the prior distribution of θ is a beta distribution, then the posterior distribution at each stage of sampling will also be a beta distribution, regardless of the observed values in the sample. It is also said that the family of beta distributions is closed under sampling from a Bernoulli distribution.

Example 1: The Variance of the Posterior Beta Distribution. Suppose that the proportion θ of defective items in a large shipment is unknown; that the prior distribution of θ is a uniform distribution on the interval (0, 1); and that items are to be selected at random from the shipment and inspected until the variance of the posterior distribution of θ has been reduced to the value 0.01 or less. We shall determine the total number of defective and nondefective items that must be obtained before the sampling process is stopped.
As stated in Sec. 5.10, the uniform distribution on the interval (0, 1) is a beta distribution for which α = 1 and β = 1. Therefore, after y defective items and z nondefective items have been obtained, the posterior distribution of θ will be a beta distribution with α = y + 1 and β = z + 1. It was shown in Sec. 5.10 that the variance of a beta distribution with parameters α and β is αβ/[(α + β)²(α + β + 1)]. Therefore, the variance V of the posterior distribution of θ will be

V = (y + 1)(z + 1) / [(y + z + 2)²(y + z + 3)].

Sampling is to stop as soon as the number of defectives y and the number of nondefectives z that have been obtained are such that V ≤ 0.01. It can be shown (see Exercise 1) that it will not be necessary to select more than 22 items. □
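The bound of 22 items can be checked directly by a short computation (Python, not part of the text): for each total number m of inspected items, the largest possible posterior variance over all possible splits into y defectives and z = m − y nondefectives is evaluated, and the smallest m for which this worst case does not exceed 0.01 is reported.

# Worst-case check for Example 1: for a total of m inspected items of which
# y are defective and z = m - y are nondefective, the posterior variance is
#   V = (y + 1)(z + 1) / ((y + z + 2)**2 (y + z + 3)).
# We look for the smallest m that guarantees V <= 0.01 whatever y turns out
# to be; the answer should agree with the bound of 22 quoted in the text.
def worst_variance(m):
    return max((y + 1) * (m - y + 1) / ((m + 2) ** 2 * (m + 3))
               for y in range(m + 1))

m = 1
while worst_variance(m) > 0.01:
    m += 1
print(m, worst_variance(m))   # prints 22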
Sampling from a Poisson Distribution

When samples are taken from a Poisson distribution for which the value of the mean θ is unknown, the family of gamma distributions is a conjugate family of prior distributions, as is shown in the next theorem.

Theorem 2. Suppose that X₁, …, Xₙ form a random sample from a Poisson distribution for which the value of the mean θ is unknown (θ > 0). Suppose also that the prior distribution of θ is a gamma distribution with given parameters α and β (α > 0 and β > 0). Then the posterior distribution of θ, given that Xᵢ = xᵢ (i = 1, …, n), is a gamma distribution with parameters α + Σᵢ₌₁ⁿ xᵢ and β + n.

Proof. Let y = Σᵢ₌₁ⁿ xᵢ. Then the likelihood function fₙ(x | θ) satisfies the relation

fₙ(x | θ) ∝ e^(−nθ) θʸ.

In this relation, a factor that involves x but does not depend on θ has been dropped from the right side. Furthermore, the prior p.d.f. of θ has the form

ξ(θ) ∝ θ^(α−1) e^(−βθ)   for θ > 0.

Since the posterior p.d.f. ξ(θ | x) is proportional to fₙ(x | θ)ξ(θ), it follows that

ξ(θ | x) ∝ θ^(α+y−1) e^(−(β+n)θ)   for θ > 0.

The right side of this relation can be recognized as being, except for a constant factor, the p.d.f. of a gamma distribution with parameters α + y and β + n. Therefore, the posterior distribution of θ is as specified in the theorem. □
Example 2: The Variance of the Posterior Gamma Distribution. Consider a Poisson distribution for which the mean θ is unknown, and suppose that the prior p.d.f. of θ is as follows:

ξ(θ) = 2e^(−2θ)   for θ > 0,
       0          for θ ≤ 0.

Suppose also that observations are to be taken at random from the given Poisson distribution until the variance of the posterior distribution of θ has been reduced to the value 0.01 or less. We shall determine the number of observations that must be taken before the sampling process is stopped.

The given prior p.d.f. ξ(θ) is the p.d.f. of a gamma distribution for which α = 1 and β = 2. Therefore, after we have obtained n observed values x₁, …, xₙ, the sum of which is y = Σᵢ₌₁ⁿ xᵢ, the posterior distribution of θ will be a gamma distribution with α = y + 1 and β = n + 2. It was shown in Sec. 5.9 that the variance of a gamma distribution with parameters α and β is α/β². Therefore, the variance V of the posterior distribution of θ will be

V = (y + 1) / (n + 2)².

Sampling is to stop as soon as the sequence of observed values x₁, …, xₙ that has been obtained is such that V ≤ 0.01. □
Sampling from a Normal Distribution

When samples are taken from a normal distribution for which the value of the mean θ is unknown but the value of the variance σ² is known, the family of normal distributions is itself a conjugate family of prior distributions, as is shown in the next theorem.

Theorem 3. Suppose that X₁, …, Xₙ form a random sample from a normal distribution for which the value of the mean θ is unknown (−∞ < θ < ∞) and the value of the variance σ² is known (σ² > 0). Suppose also that the prior distribution of θ is a normal distribution with given values of the mean μ and the variance v². Then the posterior distribution of θ, given that Xᵢ = xᵢ (i = 1, …, n), is a normal distribution for which the mean μ₁ and the variance v₁² are as follows:

μ₁ = (σ²μ + nv²x̄ₙ) / (σ² + nv²)                              (1)

and

v₁² = σ²v² / (σ² + nv²).                                     (2)
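A quick numerical check of Eqs. (1) and (2) follows (Python with NumPy, not part of the text); the prior parameters, the known variance, and the observations are arbitrary illustrative values.

import numpy as np

# Check of Theorem 3: with a N(mu, v^2) prior and n observations from a
# N(theta, sigma^2) distribution with sigma^2 known, the posterior of theta
# is normal with the mean and variance computed below.
mu, v2 = 68.0, 1.0          # prior mean and prior variance
sigma2 = 4.0                # known variance of each observation
x = np.array([69.0, 70.5, 68.2, 71.1, 69.7])
n, xbar = len(x), x.mean()

mu1 = (sigma2 * mu + n * v2 * xbar) / (sigma2 + n * v2)   # Eq. (1)
v1_2 = (sigma2 * v2) / (sigma2 + n * v2)                  # Eq. (2)
print(mu1, v1_2)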
ξ(θ) > 0 and ∫₀^∞ ξ(θ) dθ = 1.

   (b) Consider the family of probability distributions that can be represented by a p.d.f. ξ(θ) having the given form for all possible pairs of constants α > 0 and β > 0. Show that this family is a conjugate family of prior distributions for samples from a normal distribution with a known value of the mean μ and an unknown value of the variance θ.

15. Suppose that in Exercise 14 the parameter is taken as the standard deviation of the normal distribution, rather than the variance. Determine a conjugate family of prior distributions for samples from a normal distribution with a known value of the mean μ and an unknown value of the standard deviation σ.
16. Suppose that the number of minutes a person must wait for a bus each morning has a uniform distribution on the interval (0, θ), where the value of the endpoint θ is unknown. Suppose also that the prior p.d.f. of θ is as follows:

   ξ(θ) = 192/θ⁴   for θ ≥ 4,
          0        otherwise.

   If the observed waiting times on three successive mornings are 5, 3, and 8 minutes, what is the posterior p.d.f. of θ?

17. The Pareto distribution with parameters x₀ and α (x₀ > 0 and α > 0) is defined in Exercise 15 of Sec. 5.9. Show that the family of Pareto distributions is a conjugate family of prior distributions for samples from a uniform distribution on the interval (0, θ), where the value of the endpoint θ is unknown.
18. Suppose that X₁, …, Xₙ form a random sample from a distribution for which the p.d.f. f(x | θ) is as follows:

   f(x | θ) = θx^(θ−1)   for 0 < x < 1,
              0          otherwise.

   Suppose also that the value of the parameter θ is unknown (θ > 0) and that the prior distribution of θ is a gamma distribution with parameters α and β (α > 0 and β > 0). Determine the mean and the variance of the posterior distribution of θ.
6.4. BAYES ESTIMATORS
Nature of an Estimation Problem

Suppose that a random sample X₁, …, Xₙ is to be taken from a distribution for which the p.f. or p.d.f. is f(x | θ), where the value of the parameter θ is unknown. Suppose also that the value of θ must lie in a given interval Ω of the real line. The interval Ω could be either bounded or unbounded; in particular, it could be the entire real line. Finally, suppose that the value of θ must be estimated from the observed values in the sample.

An estimator of the parameter θ, based on the random variables X₁, …, Xₙ, is a real-valued function δ(X₁, …, Xₙ) which specifies the estimated value of θ for each possible set of values of X₁, …, Xₙ. In other words, if the observed values of X₁, …, Xₙ turn out to be x₁, …, xₙ, then the estimated value of θ is δ(x₁, …, xₙ). Since the value of θ must belong to the interval Ω, it is reasonable to require that every possible value of an estimator δ(X₁, …, Xₙ) must also belong to Ω.

It is convenient to distinguish between the terms estimator and estimate. Since an estimator δ(X₁, …, Xₙ) is a function of the random variables X₁, …, Xₙ, the estimator itself is a random variable; and its probability distribution can be derived from the joint distribution of X₁, …, Xₙ. On the other hand, an estimate is a specific value δ(x₁, …, xₙ) of the estimator that is determined by using specific observed values x₁, …, xₙ. It will often be convenient to use vector notation and to let X = (X₁, …, Xₙ) and x = (x₁, …, xₙ). In this notation, an estimator is a function δ(X) of the random vector X, and an estimate is a specific value δ(x). It will often be convenient to denote an estimator δ(X) simply by the symbol δ.
Loss Functions

The foremost requirement of a good estimator δ is that it yield an estimate of θ which is close to the actual value of θ. In other words, a good estimator is one for which it is highly probable that the error δ(X) − θ will be close to 0. We shall assume that for each possible value of θ ∈ Ω and each possible estimate a ∈ Ω, there is a number L(θ, a) which measures the loss or cost to the statistician when the true value of the parameter is θ and his estimate is a. Typically, the greater the distance between a and θ, the larger will be the value of L(θ, a).

As before, let ξ(θ) denote the prior p.d.f. of θ on the interval Ω; and consider a problem in which the statistician must estimate the value of θ without being able to observe the values in a random sample. If the statistician chooses a particular estimate a, then his expected loss will be

E[L(θ, a)] = ∫_Ω L(θ, a)ξ(θ) dθ.                             (1)

We shall assume that the statistician wishes to choose an estimate a for which the expected loss in Eq. (1) is a minimum. In any estimation problem, a function L for which the expectation E[L(θ, a)] is to be minimized is called a loss function.

Definition of a Bayes Estimator
Suppose now that the statistician can observe the value x of the random vector X before estimating θ, and let ξ(θ | x) denote the posterior p.d.f. of θ on the interval Ω. For any estimate a that the statistician might use, his expected loss in this case will be

E[L(θ, a) | x] = ∫_Ω L(θ, a)ξ(θ | x) dθ.                     (2)

Hence, the statistician should now choose an estimate a for which the expectation in Eq. (2) is a minimum.

For each possible value x of the random vector X, let δ*(x) denote a value of the estimate a for which the expected loss in Eq. (2) is a minimum. Then the function δ*(X) for which the values are specified in this way will be an estimator of θ. This estimator is called a Bayes estimator of θ. In other words, for each possible value x of X, the value δ*(x) of the Bayes estimator is chosen so that

E[L(θ, δ*(x)) | x] = min_{a∈Ω} E[L(θ, a) | x].               (3)

In summary, we have considered an estimation problem in which a random sample X = (X₁, …, Xₙ) is to be taken from a distribution involving a parameter θ that has an unknown value in some specified interval Ω. For any given loss function L(θ, a) and any prior p.d.f. ξ(θ), the Bayes estimator of θ is the estimator δ*(X) for which Eq. (3) is satisfied for every possible value x of X. It should be emphasized that the form of the Bayes estimator will depend on both the loss function that is used in the problem and the prior distribution that is assigned to θ.
Different Loss Functions

The Squared Error Loss Function. By far the most commonly used loss function in estimation problems is the squared error loss function. This function is defined as follows:

L(θ, a) = (θ − a)².                                          (4)

When the squared error loss function is used, the Bayes estimate δ*(x) for any observed value of x will be the value of a for which the expectation E[(θ − a)² | x] is a minimum. It was shown in Sec. 4.5 that for any given probability distribution of θ, the expectation of (θ − a)² will be a minimum when a is chosen to be equal to the mean of the distribution of θ. Therefore, when the expectation of (θ − a)² is calculated with respect to the posterior distribution of θ, this expectation will be a minimum when a is chosen to be equal to the mean E(θ | x) of the posterior distribution. This discussion shows that when the squared error loss function (4) is used, the Bayes estimator is δ*(X) = E(θ | X).
Example 1: Estimating the Parameter of a Bernoulli Distribution. Suppose that a random sample X₁, …, Xₙ is to be taken from a Bernoulli distribution for which the value of the parameter θ is unknown and must be estimated, and that the prior distribution of θ is a beta distribution with given parameters α and β (α > 0 and β > 0). Suppose also that the squared error loss function is used, as specified by Eq. (4), for 0 < θ < 1 and 0 < a < 1. We shall determine the Bayes estimator of θ.

For any observed values x₁, …, xₙ, let y = Σᵢ₌₁ⁿ xᵢ. Then it follows from Theorem 1 of Sec. 6.3 that the posterior distribution of θ will be a beta distribution with parameters α + y and β + n − y. Since the mean of a beta distribution with parameters α₁ and β₁ is α₁/(α₁ + β₁), the mean of this posterior distribution of θ will be (α + y)/(α + β + n). The Bayes estimate δ*(x) will be equal to this value for any observed vector x. Therefore, the Bayes estimator δ*(X) is specified as follows:

δ*(X) = (α + Σᵢ₌₁ⁿ Xᵢ) / (α + β + n).                         (5)

□
Example 2: Estimating the Mean of a Normal Distribution. Suppose that a random sample X₁, …, Xₙ is to be taken from a normal distribution for which the value of the mean θ is unknown and the value of the variance σ² is known. Suppose also that the prior distribution of θ is a normal distribution with given values of the mean μ and the variance v². Suppose, finally, that the squared error loss function is to be used, as specified in Eq. (4), for −∞ < θ < ∞ and −∞ < a < ∞. We shall determine the Bayes estimator of θ.

It follows from Theorem 3 of Sec. 6.3 that for any observed values x₁, …, xₙ, the posterior distribution of θ will be a normal distribution for which the mean μ₁ is specified by Eq. (1) of Sec. 6.3. Therefore, the Bayes estimator δ*(X) is specified as follows:

δ*(X) = (σ²μ + nv²X̄ₙ) / (σ² + nv²).                           (6)

□
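The two Bayes estimates just derived are simple to compute. The sketch below (Python with NumPy, not part of the text) evaluates Eq. (5) and Eq. (6) for small hypothetical data sets; all numerical values are illustrative assumptions.

import numpy as np

# Bayes estimates under squared error loss for the two conjugate examples:
# Eq. (5) for Bernoulli data with a beta prior, and Eq. (6) for normal data
# with known variance and a normal prior.
def bayes_bernoulli(x, alpha, beta):
    # posterior mean of a beta distribution with parameters alpha + y, beta + n - y
    return (alpha + sum(x)) / (alpha + beta + len(x))

def bayes_normal_mean(x, mu, v2, sigma2):
    # posterior mean given in Eq. (1) of Sec. 6.3
    n, xbar = len(x), np.mean(x)
    return (sigma2 * mu + n * v2 * xbar) / (sigma2 + n * v2)

print(bayes_bernoulli([1, 0, 0, 1, 0, 0, 0, 1], alpha=2, beta=2))
print(bayes_normal_mean([4.1, 3.7, 4.4, 3.9], mu=3.0, v2=1.0, sigma2=0.25))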
The Absolute Error Loss Function. Another commonly used loss function in estimation problems is the absolute error loss function. This function is defined as follows:

L(θ, a) = |θ − a|.                                           (7)

For any observed value of x, the Bayes estimate δ*(x) will now be the value of a for which the expectation E(|θ − a| | x) is a minimum. It was shown in Sec. 4.5 that for any given probability distribution of θ, the expectation of |θ − a| will be a minimum when a is chosen to be equal to a median of the distribution of θ. Therefore, when the expectation of |θ − a| is calculated with respect to the posterior distribution of θ, this expectation will be a minimum when a is chosen to be equal to a median of the posterior distribution of θ. It follows that when the absolute error loss function (7) is used, the Bayes estimator δ*(X) is an estimator for which the value is always equal to a median of the posterior distribution of θ. We shall now consider Examples 1 and 2 again, but we shall use the absolute error loss function instead of the squared error loss function.
Example 4: Estimating the Mean of a Nonnal Distribution. Consider again the conditions of Example 2, but suppose now that the absolute error loss function is used. as specified by Eq. (7). For any observed values Xl' . . . ' X n , the Bayes estimate 8*(x) will be equal to the median of the posterior normal distribution of fJ. However, since the mean and the median of any normal distribution are equal, S*(x) is also equal to the mean of the posterior distribution. Therefore, the Bayes estimator with respect to the absolute error loss function is the same as the Bayes estimator with respect to the squared error loss function; and it is again given by
Eq. (6).
0
Other Loss Functions. Although the squared error loss function and, to a lesser extent, the absolute error loss function are the ones most commonly used in estimation problems, neither of these loss functions might be appropriate in a particular problem. In some problems, it might be appropriate to use a loss function having the form L(8, a) := - al k , where k is some positive number other than 1 or 2. In other problems, the loss that results when the error 18 - al has a given magnitude might depend on the actual value of fJ. In such a problem, it might be appropriate to use a loss function having the form L(8, a) O X 0 - a ) ' or L(O, a ) � >. ( - ai, where is a given positive function of O. In still other problems. it might be more costly to overestimate the value of (J by a certain amount than to underestimate the value of 8 by the same amount. One specific loss function which reflects this property is as follows:
10
0)10
>'(
L(O, a )
�
{ 3(( 00
t
a
- a)
for 8
>'(0)
:s;;;
a,
for 8 > a.
Various other types of loss functions might be relevant in specific estimation problems. However, in this book we shall consider only the squared error and absolute error loss functions.
The Bayes Estimate for Large Samples

Effect of Different Prior Distributions. Suppose that the proportion θ of defective items in a large shipment is unknown, and that the prior distribution of θ is a uniform distribution on the interval (0, 1). Suppose also that the value of θ must be estimated, and that the squared error loss function is used. Suppose, finally, that in a random sample of 100 items from the shipment, exactly ten items are found to be defective. Since the uniform distribution is a beta distribution with parameters α = 1 and β = 1, and since n = 100 and y = 10 for the given sample, it follows from Eq. (5) that the Bayes estimate is δ(x) = 11/102 = 0.108.

Next, suppose that the prior p.d.f. of θ has the form ξ(θ) = 2(1 − θ) for 0 < θ < 1, instead of being a uniform distribution; and that again in a random sample of 100 items, exactly ten items are found to be defective. Since ξ(θ) is the p.d.f. of a beta distribution with parameters α = 1 and β = 2, it follows from Eq. (5) that in this case the Bayes estimate of θ is δ(x) = 11/103 = 0.107.

The two prior distributions considered here are quite different. The mean of the uniform prior distribution is 1/2, and the mean of the other beta prior distribution is 1/3. Nevertheless, because the number of observations in the sample is so large (n = 100), the Bayes estimates with respect to the two different prior distributions are almost the same. Furthermore, the values of both estimates are very close to the observed proportion of defective items in the sample, which is x̄ₙ = 0.1.
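The two estimates can be reproduced in one line each (Python, not part of the text), which also makes it easy to see how quickly the influence of the prior fades as n grows.

# The two Bayes estimates discussed above, computed from Eq. (5): a
# Beta(1, 1) (uniform) prior and a Beta(1, 2) prior, each combined with
# n = 100 observations containing y = 10 defectives.
n, y = 100, 10
for a, b in [(1, 1), (1, 2)]:
    estimate = (a + y) / (a + b + n)
    print(a, b, round(estimate, 3))   # 0.108 and 0.107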
Consistency of the Bayes Estimator. Since the unknown value of θ is the mean of the Bernoulli distribution from which the observations are being taken, it follows from the law of large numbers discussed in Sec. 4.8 that X̄ₙ converges in probability to this unknown value as n → ∞. Since the difference between the Bayes estimator δ*(X) and X̄ₙ converges in probability to 0 as n → ∞, it can also be concluded that δ*(X) converges in probability to the unknown value of θ as n → ∞. A sequence of estimators which converges to the unknown value of the parameter being estimated, as n → ∞, is called a consistent sequence of estimators. Thus, we have shown that the Bayes estimators δ*(X) form a consistent sequence of estimators in the problem considered here. The practical interpretation of this result is as follows: When large numbers of observations are taken, there is high probability that the Bayes estimator will be very close to the unknown value of θ.

The results which have just been presented for estimating the parameter of a Bernoulli distribution are also true for other estimation problems. Under fairly general conditions and for a wide class of loss functions, the Bayes estimators of some parameter θ will form a consistent sequence of estimators as the sample size n → ∞. In particular, for random samples from any one of the various families of distributions discussed in Sec. 6.3, if a conjugate prior distribution is assigned to the parameter and the squared error loss function is used, the Bayes estimators will form a consistent sequence of estimators.
For example, consider again the conditions of Example 2. In that example, a random sample is taken from a normal distribution for which the value of the mean θ is unknown, and the Bayes estimator δ*(X) is specified by Eq. (6). By the law of large numbers, X̄ₙ will converge to the unknown value of the mean θ as n → ∞. It can now be seen from Eq. (6) that δ*(X) will also converge to θ as n → ∞. Thus, the Bayes estimators again form a consistent sequence of estimators. Other examples are given in Exercises 6 and 10 at the end of this section.
EXERCISES

1. Suppose that the proportion θ of defective items in a large shipment is unknown, and that the prior distribution of θ is a beta distribution for which the parameters are α = 5 and β = 10. Suppose also that 20 items are selected at random from the shipment, and that exactly one of these items is found to be defective. If the squared error loss function is used, what is the Bayes estimate of θ?

2. Consider again the conditions of Exercise 1. Suppose that the prior distribution of θ is as given in Exercise 1, and suppose again that 20 items are selected at random from the shipment. (a) For what number of defective items in the sample will the mean squared error of the Bayes estimate be a maximum? (b) For what number will the mean squared error of the Bayes estimate be a minimum?

3. Suppose that a random sample of size n is taken from a Bernoulli distribution for which the value of the parameter θ is unknown, and that the prior distribution of θ is a beta distribution for which the mean is μ₀. Show that the mean of the posterior distribution of θ will be a weighted average having the form γₙX̄ₙ + (1 − γₙ)μ₀, and show that γₙ → 1 as n → ∞.

4. Suppose that the number of defects in a 1200-foot roll of magnetic recording tape has a Poisson distribution for which the value of the mean θ is unknown, and that the prior distribution of θ is a gamma distribution with parameters α = 3 and β = 1. When five rolls of this tape are selected at random and inspected, the numbers of defects found on the rolls are 2, 2, 6, 0, and 3. If the squared error loss function is used, what is the Bayes estimate of θ? (See Exercise 4 of Sec. 6.3.)

5. Suppose that a random sample of size n is taken from a Poisson distribution for which the value of the mean θ is unknown, and that the prior distribution of θ is a gamma distribution for which the mean is μ₀. Show that the mean of the posterior distribution of θ will be a weighted average having the form γₙX̄ₙ + (1 − γₙ)μ₀, and show that γₙ → 1 as n → ∞.

6. Consider again the conditions of Exercise 5, and suppose that the value of θ must be estimated by using the squared error loss function. Show that the Bayes estimators, for n = 1, 2, …, form a consistent sequence of estimators of θ.

7. Suppose that the heights of the individuals in a certain population have a normal distribution for which the value of the mean θ is unknown and the standard deviation is 2 inches. Suppose also that the prior distribution of θ is a normal distribution for which the mean is 68 inches and the standard deviation is 1 inch. Suppose finally that ten people are selected at random from the population, and that their average height is found to be 69.5 inches. (a) If the squared error loss function is used, what is the Bayes estimate of θ? (b) If the absolute error loss function is used, what is the Bayes estimate of θ? (See Exercise 6 of Sec. 6.3.)

8. Suppose that a random sample is to be taken from a normal distribution for which the value of the mean θ is unknown and the standard deviation is 2; that the prior distribution of θ is a normal distribution for which the standard deviation is 1; and that the value of θ must be estimated by using the squared error loss function. What is the smallest random sample that must be taken in order for the mean squared error of the Bayes estimator of θ to be 0.01 or less? (See Exercise 9 of Sec. 6.3.)

9. Suppose that the time in minutes required to serve a customer at a certain facility has an exponential distribution for which the value of the parameter θ is unknown; that the prior distribution of θ is a gamma distribution for which the mean is 0.2 and the standard deviation is 1; and that the average time required to serve a random sample of 20 customers is observed to be 3.8 minutes. If the squared error loss function is used, what is the Bayes estimate of θ? (See Exercise 11 of Sec. 6.3.)

10. Suppose that a random sample of size n is taken from an exponential distribution for which the value of the parameter θ is unknown; that the prior distribution of θ is a specified gamma distribution; and that the value of θ must be estimated by using the squared error loss function. Show that the Bayes estimators, for n = 1, 2, …, form a consistent sequence of estimators of θ.

11. Let θ denote the proportion of registered voters in a large city who are in favor of a certain proposition. Suppose that the value of θ is unknown, and that two statisticians A and B assign to θ the following different prior p.d.f.'s ξ_A(θ) and ξ_B(θ), respectively:

   ξ_A(θ) = 2θ    for 0 < θ < 1,
   ξ_B(θ) = 4θ³   for 0 < θ < 1.

   In a random sample of 1000 registered voters from the city, it is found that 710 are in favor of the proposition.
   (a) Find the posterior distribution that each statistician assigns to θ.
   (b) Find the Bayes estimate for each statistician based on the squared error loss function.
   (c) Show that after the opinions of the 1000 registered voters in the random sample had been obtained, the Bayes estimates for the two statisticians could not possibly differ by more than 0.002, regardless of the number in the sample who were in favor of the proposition.

12. Suppose that X₁, …, Xₙ form a random sample from a uniform distribution on the interval (0, θ), where the value of the parameter θ is unknown. Suppose also that the prior distribution of θ is a Pareto distribution with parameters x₀ and α (x₀ > 0 and α > 0), as defined in Exercise 15 of Sec. 5.9. If the value of θ is to be estimated by using the squared error loss function, what is the Bayes estimator of θ? (See Exercise 17 of Sec. 6.3.)

13. Suppose that X₁, …, Xₙ form a random sample from an exponential distribution for which the value of the parameter θ is unknown (θ > 0). Let ξ(θ) denote the prior p.d.f. of θ; and let θ̂ denote the Bayes estimator of θ with respect to the prior p.d.f. ξ(θ) when the squared error loss function is used. Let ψ = θ², and suppose that instead of estimating θ, it is desired to estimate the value of ψ subject to the following squared error loss function:

   L(ψ, a) = (ψ − a)²   for ψ > 0 and a > 0.

   Let ψ̂ denote the Bayes estimator of ψ. Explain why ψ̂ > θ̂². Hint: Use the fact that for any random variable Z which can have two or more values, E(Z²) > [E(Z)]².
6.5. MAXIMUM LIKELIHOOD ESTIMATORS
Limitations of Bayes Estimators

The theory of Bayes estimators, as described in the preceding sections, provides a satisfactory and coherent theory for the estimation of parameters. Indeed, according to statisticians who adhere to the Bayesian philosophy, it provides the only coherent theory of estimation that can possibly be developed. Nevertheless, there are certain limitations to the applicability of this theory in practical statistical problems. To apply the theory, it is necessary to specify a particular loss function, such as the squared error or absolute error function, and also a prior distribution for the parameter. Meaningful specifications may exist, in principle, but it may be very difficult and very time-consuming to determine them. In some problems, the statistician must determine the specifications that would be appropriate for clients or employers who are unavailable or otherwise unable to communicate their preferences and knowledge. In other problems, it may be necessary for an estimate to be made jointly by members of a group or committee; and it may be difficult for the members of the group to reach agreement about an appropriate loss function and prior distribution.

Another possible difficulty is that in a particular problem the parameter θ may actually be a vector of real-valued parameters for which all the values are unknown. The theory of Bayes estimation which has been developed in the preceding sections can easily be generalized to include the estimation of a vector parameter θ. However, to apply this theory in such a problem it is necessary to specify a multivariate prior distribution for the vector θ and also to specify a loss function L(θ, a) that is a function of the vector θ and the vector a which will be used to estimate θ. Even though, in a given problem, the statistician may be interested in estimating only one or two components of the vector θ, he must still assign a multivariate prior distribution to the entire vector θ. In many important statistical problems, some of which will be discussed later in this book, θ may have a large number of components. In such a problem, it is especially difficult to specify a meaningful prior distribution on the multidimensional parameter space Ω.

It should be emphasized that there is no simple way to resolve these difficulties. Other methods of estimation that are not based on prior distributions and loss functions typically have not only serious defects in their theoretical structure but also severe practical limitations. Nevertheless, it is useful to be able to apply a relatively simple method of constructing an estimator without having to specify a loss function and a prior distribution. In this section we shall describe such a method, which is called the method of maximum likelihood. This method, which was introduced by R. A. Fisher in 1912, can be applied in most problems, has a strong intuitive appeal, and will often yield a reasonable estimator of θ. Furthermore, if the sample is large, the method will typically yield an excellent estimator of θ. For these reasons, the method of maximum likelihood is probably the most widely used method of estimation in statistics.
Definition of a Maximum Likelihood Estimator

Suppose that the random variables X₁, …, Xₙ form a random sample from a discrete distribution or a continuous distribution for which the p.f. or the p.d.f. is f(x | θ), where the parameter θ belongs to some parameter space Ω. Here, θ can be either a real-valued parameter or a vector. For any observed vector x = (x₁, …, xₙ) in the sample, the value of the joint p.f. or joint p.d.f. will, as usual, be denoted by fₙ(x | θ). As before, when fₙ(x | θ) is regarded as a function of θ for a given vector x, it is called the likelihood function.

Suppose, for the moment, that the observed vector x came from a discrete distribution. If an estimate of θ must be selected, we would certainly not consider any value of θ ∈ Ω for which it would be impossible to obtain the vector x that was actually observed. Furthermore, suppose that the probability fₙ(x | θ) of obtaining the actual observed vector x is very high when θ has a particular value, say θ = θ₀, and is very small for every other value of θ ∈ Ω. Then we would naturally estimate the value of θ to be θ₀ (unless we had strong prior information which outweighed the evidence in the sample and pointed toward some other value). When the sample comes from a continuous distribution, it would again be natural to try to find a value of θ for which the probability density fₙ(x | θ) is large, and to use this value as an estimate of θ. For any given observed vector x, we are led by this reasoning to consider a value of θ for which the likelihood function fₙ(x | θ) is a maximum and to use this value as an estimate of θ. This concept is formalized in the following definition: For each possible observed vector x, let δ(x) ∈ Ω denote a value of θ ∈ Ω for which the likelihood function fₙ(x | θ) is a maximum, and let θ̂ = δ(X) be the estimator of θ defined in this way. The estimator θ̂ is called the maximum likelihood estimator of θ. The expression maximum likelihood estimator or maximum likelihood estimate is abbreviated M.L.E.
Examples of Maximum Likelihood Estimators

It should be noted that in some problems, for certain observed vectors x, the maximum value of fₙ(x | θ) may not actually be attained for any point θ ∈ Ω. In such a case, an M.L.E. of θ does not exist. For certain other observed vectors x, the maximum value of fₙ(x | θ) may actually be attained at more than one point in the space Ω. In such a case, the M.L.E. is not uniquely defined, and any one of these points can be chosen as the estimate θ̂. In many practical problems, however, the M.L.E. exists and is uniquely defined. We shall now illustrate the method of maximum likelihood and these various possibilities by considering seven examples. In each example, we shall attempt to determine an M.L.E.
Example 1: Sampling from a Bernoulli Distribution. Suppose that the random variables X₁, …, Xₙ form a random sample from a Bernoulli distribution for which the parameter θ is unknown (0 ≤ θ ≤ 1). For any observed values x₁, …, xₙ, where each xᵢ is either 0 or 1, the likelihood function is

fₙ(x | θ) = ∏ᵢ₌₁ⁿ θ^(xᵢ)(1 − θ)^(1−xᵢ).                        (1)

The value of θ which maximizes the likelihood function fₙ(x | θ) will be the same as the value of θ which maximizes log fₙ(x | θ). Therefore, it will be convenient to determine the M.L.E. by finding the value of θ which maximizes

L(θ) = log fₙ(x | θ) = Σᵢ₌₁ⁿ [xᵢ log θ + (1 − xᵢ) log(1 − θ)]
     = (Σᵢ₌₁ⁿ xᵢ) log θ + (n − Σᵢ₌₁ⁿ xᵢ) log(1 − θ).           (2)

If we now calculate the derivative dL(θ)/dθ, set this derivative equal to 0, and solve the resulting equation for θ, we find that θ = x̄ₙ. It can be verified that this value does indeed maximize L(θ). Hence, it also maximizes the likelihood function defined by Eq. (1). It follows therefore that the M.L.E. of θ is θ̂ = X̄ₙ. □

It follows from Example 1 that if X₁, …, Xₙ are regarded as n Bernoulli trials, then the M.L.E. of the unknown probability of success on any given trial is simply the proportion of successes observed in the n trials.
Example 2: Sampling from a Normal Distribution. Suppose that X₁, …, Xₙ form a random sample from a normal distribution for which the mean μ is unknown and the variance σ² is known. For any observed values x₁, …, xₙ, the likelihood function fₙ(x | μ) will be

fₙ(x | μ) = (2πσ²)^(−n/2) exp[−(1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)²].     (3)

It can be seen from Eq. (3) that fₙ(x | μ) will be maximized by the value of μ which minimizes

Q(μ) = Σᵢ₌₁ⁿ (xᵢ − μ)² = Σᵢ₌₁ⁿ xᵢ² − 2μ Σᵢ₌₁ⁿ xᵢ + nμ².

If we now calculate the derivative dQ(μ)/dμ, set this derivative equal to 0, and solve the resulting equation for μ, we find that μ = x̄ₙ. It follows, therefore, that the M.L.E. of μ is μ̂ = X̄ₙ. □

It can be seen in Example 2 that the estimator μ̂ is not affected by the value of the variance σ², which we assumed was known. The M.L.E. of the unknown mean μ is simply the sample mean X̄ₙ, regardless of the value of σ². We shall see this again in the next example, in which both μ and σ² must be estimated.
Example 3: Sampling from a Normal Distribution with Unknown Variance. Suppose again that X₁, …, Xₙ form a random sample from a normal distribution, but suppose now that both the mean μ and the variance σ² are unknown. For any observed values x₁, …, xₙ, the likelihood function fₙ(x | μ, σ²) will again be given by the right side of Eq. (3). This function must now be maximized over all possible values of μ and σ², where −∞ < μ < ∞ and σ² > 0. Instead of maximizing the likelihood function fₙ(x | μ, σ²) directly, it is again easier to maximize log fₙ(x | μ, σ²). We have

L(μ, σ²) = log fₙ(x | μ, σ²)
         = −(n/2) log(2π) − (n/2) log σ² − (1/(2σ²)) Σᵢ₌₁ⁿ (xᵢ − μ)².   (4)

We shall find the values of μ and σ² for which L(μ, σ²) is maximum by finding the values of μ and σ² that satisfy the following two equations:

∂L(μ, σ²)/∂μ = 0,                                            (5a)
∂L(μ, σ²)/∂σ² = 0.                                           (5b)

From Eq. (4), we obtain the relation

∂L(μ, σ²)/∂μ = (1/σ²) Σᵢ₌₁ⁿ (xᵢ − μ).

Therefore, from Eq. (5a) we find that μ = x̄ₙ. Furthermore, from Eq. (4),

∂L(μ, σ²)/∂σ² = −n/(2σ²) + (1/(2σ⁴)) Σᵢ₌₁ⁿ (xᵢ − μ)².

When μ is replaced with the value x̄ₙ which we have just obtained, we find from Eq. (5b) that

σ̂² = (1/n) Σᵢ₌₁ⁿ (xᵢ − x̄ₙ)².                                  (6)

Just as X̄ₙ is called the sample mean, the statistic on the right side of Eq. (6) is called the sample variance. It is the variance of a distribution that assigns probability 1/n to each of the n observed values x₁, …, xₙ in the sample. It can be verified that the values of μ and σ² which satisfy Eqs. (5a) and (5b), and which have just been derived, actually yield the maximum value of L(μ, σ²). Therefore, the M.L.E.'s of μ and σ² are

μ̂ = X̄ₙ   and   σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄ₙ)².

In other words, the M.L.E.'s of the mean and the variance of a normal distribution are the sample mean and the sample variance. □
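A minimal numerical illustration of Example 3 follows (Python with NumPy, not part of the text; the data are arbitrary numbers used only to show the computation).

import numpy as np

# The M.L.E.'s of Example 3 are the sample mean and the sample variance
# (the version that divides by n).
x = np.array([2.3, 1.9, 2.8, 2.1, 2.6, 2.2])
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)   # equivalently x.var(ddof=0)
print(mu_hat, sigma2_hat)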
Example 4: Sampling from a Uniform Distribution. Suppose that X₁, …, Xₙ form a random sample from a uniform distribution on the interval (0, θ), where the value of the parameter θ is unknown (θ > 0). The p.d.f. f(x | θ) of each observation has the following form:

f(x | θ) = 1/θ   for 0 ≤ x ≤ θ,
           0     otherwise.                                  (7)

Therefore, the joint p.d.f. fₙ(x | θ) of X₁, …, Xₙ has the form

fₙ(x | θ) = 1/θⁿ   for 0 ≤ xᵢ ≤ θ (i = 1, …, n),
            0      otherwise.                                (8)

It can be seen from Eq. (8) that the M.L.E. of θ must be a value of θ for which θ ≥ xᵢ for i = 1, …, n and which maximizes 1/θⁿ among all such values. Since 1/θⁿ is a decreasing function of θ, the estimate will be the smallest value of θ such that θ ≥ xᵢ for i = 1, …, n. Since this value is θ = max(x₁, …, xₙ), the M.L.E. of θ is θ̂ = max(X₁, …, Xₙ). □

It should be remarked that in Example 4, the M.L.E. θ̂ does not seem to be a suitable estimator of θ. Since max(X₁, …, Xₙ) < θ with probability 1, it follows that θ̂ surely underestimates the value of θ. Indeed, if any prior distribution is assigned to θ, then the Bayes estimator of θ will surely be greater than θ̂. The actual amount by which the Bayes estimator exceeds θ̂ will, of course, depend on the particular prior distribution that is used and on the observed values of X₁, …, Xₙ.
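A small simulation (Python with NumPy, not part of the text; the true endpoint θ = 1 and the sample size n = 10 are arbitrary choices) illustrates the remark that θ̂ = max(X₁, …, Xₙ) always falls below the true endpoint.

import numpy as np

# The M.L.E. of Example 4 is max(X_1, ..., X_n).  Repeated samples show
# that the estimate always underestimates the true endpoint theta.
rng = np.random.default_rng(1)
theta, n = 1.0, 10
estimates = np.array([rng.uniform(0, theta, n).max() for _ in range(10000)])
print(estimates.mean())             # close to n/(n + 1) = 0.909
print((estimates < theta).mean())   # equals 1.0: every estimate is too small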
Example 5: Nonexistence of an M.L.E. Suppose again that X₁, …, Xₙ form a random sample from a uniform distribution on the interval (0, θ). However, suppose now that instead of writing the p.d.f. f(x | θ) of the uniform distribution in the form given in Eq. (7), we write it in the following form:

f(x | θ) = 1/θ   for 0 < x < θ,
           0     otherwise.                                  (9)

The only difference between Eq. (7) and Eq. (9) is that the value of the p.d.f. at each of the two endpoints 0 and θ has been changed by replacing the weak inequalities in Eq. (7) with strict inequalities in Eq. (9). Therefore, either equation could be used as the p.d.f. of the uniform distribution. However, if Eq. (9) is used as the p.d.f., then an M.L.E. of θ will be a value of θ for which θ > xᵢ for i = 1, …, n and which maximizes 1/θⁿ among all such values. It should be noted that the possible values of θ no longer include the value θ = max(x₁, …, xₙ), since θ must be strictly greater than each observed value xᵢ (i = 1, …, n). Since θ can be chosen arbitrarily close to the value max(x₁, …, xₙ) but cannot be chosen equal to this value, it follows that the M.L.E. of θ does not exist. □

Examples 4 and 5 illustrate one shortcoming of the concept of an M.L.E. In all our previous discussions about p.d.f.'s, we emphasized the fact that it is irrelevant whether the p.d.f. of the uniform distribution is chosen to be equal to 1/θ over the open interval 0 < x < θ or over the closed interval 0 ≤ x ≤ θ. Now, however, we see that the existence of an M.L.E. depends on this irrelevant and unimportant choice. This difficulty is easily avoided in Example 5 by using the p.d.f. given by Eq. (7) rather than that given by Eq. (9). In many other problems as well, a difficulty of this type in regard to the existence of an M.L.E. can be avoided simply by choosing one particular appropriate version of the p.d.f. to represent the given distribution. However, as we shall see in Example 7, the difficulty cannot always be avoided.
Example 6: Nonuniqueness of an M.L.E. Suppose that X_1, ..., X_n form a random sample from a uniform distribution on the interval (θ, θ + 1), where the value of the parameter θ is unknown (−∞ < θ < ∞). In this example, the joint p.d.f. f_n(x | θ) has the form

f_n(x | θ) = 1   for θ ≤ x_i ≤ θ + 1 (i = 1, ..., n),   and   0 otherwise.   (10)
The condition that θ ≤ x_i for i = 1, ..., n is equivalent to the condition that θ ≤ min(x_1, ..., x_n). Similarly, the condition that x_i ≤ θ + 1 for i = 1, ..., n is equivalent to the condition that θ ≥ max(x_1, ..., x_n) − 1. Therefore, instead of writing f_n(x | θ) in the form given in Eq. (10), we can use the following form:

f_n(x | θ) = 1   for max(x_1, ..., x_n) − 1 ≤ θ ≤ min(x_1, ..., x_n),   and   0 otherwise.   (11)
Thus, it is possible to select as an M.L.E. any value of θ in the interval

max(x_1, ..., x_n) − 1 ≤ θ ≤ min(x_1, ..., x_n).   (12)
In this example, the M.L.E. is not uniquely specified. In fact, the method of maximum likelihood provides no help at all in choosing an estimate of θ. The likelihood of any value of θ outside the interval (12) is actually 0. Therefore, no value of θ outside this interval would ever be estimated, and all values inside the interval are M.L.E.'s. □
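A tiny sketch (illustrative only; the data below are invented) makes the nonuniqueness concrete: every value of θ in the interval [max(x_i) − 1, min(x_i)] attains the same maximal likelihood.

```python
def mle_interval(sample):
    # For a uniform distribution on (theta, theta + 1), every theta in
    # [max(x) - 1, min(x)] maximizes the likelihood (it equals 1 there, 0 outside).
    return max(sample) - 1.0, min(sample)

x = [2.3, 2.7, 2.4, 2.9, 2.5]   # hypothetical observed values
low, high = mle_interval(x)
print("any theta in [%.2f, %.2f] is an M.L.E." % (low, high))
```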
Example 7: Sampling from a Mixture of Two Distributions. Consider a random variable X that can come with equal probability either from a normal distribution with mean 0 and variance 1 or from another normal distribution with mean μ and variance σ², where both μ and σ² are unknown. Under these conditions, the p.d.f. f(x | μ, σ²) of X will be the average of the p.d.f.'s of the two different normal distributions. Thus,
f(x | μ, σ²) = (1/2) { (2π)^{−1/2} exp(−x²/2) + (2π)^{−1/2} σ^{−1} exp[−(x − μ)²/(2σ²)] }.   (13)
Suppose now that X_1, ..., X_n form a random sample from the distribution for which the p.d.f. is given by Eq. (13). As usual, the likelihood function f_n(x | μ, σ²) has the form

f_n(x | μ, σ²) = ∏_{i=1}^n f(x_i | μ, σ²).   (14)
To find the M.L.E.'s of μ and σ², we must find values of μ and σ² for which f_n(x | μ, σ²) is maximized. Let x_k denote any one of the observed values x_1, ..., x_n. If we let μ = x_k and let σ² → 0, then the factor f(x_k | μ, σ²) on the right side of Eq. (14) will grow large without bound, while each factor f(x_i | μ, σ²) for x_i ≠ x_k will approach the positive value (1/2)(2π)^{−1/2} exp(−x_i²/2). Since the likelihood function can be made arbitrarily large by
choosing μ = x_k and choosing σ² arbitrarily close to 0, it follows that M.L.E.'s do not exist. If we try to correct this difficulty by allowing the value 0 to be a permissible estimate of σ², then we find that there are n different pairs of M.L.E.'s of μ and σ²; namely, μ̂ = x_k and σ̂² = 0 for k = 1, ..., n.

All of these estimates appear silly. Consider again the description, given at the beginning of this example, of the two normal distributions from which each observation might come. Suppose, for example, that n = 1000 and that we use the estimates μ̂ = x_3 and σ̂² = 0. Then we would be estimating the value of the unknown variance to be 0, and also, in effect, we would be concluding that exactly one observed value x_3 came from the given unknown normal distribution, whereas all the other 999 observed values came from the normal distribution with mean 0 and variance 1. In fact, however, since each observation was equally likely to come from either of the two distributions, it is much more probable that hundreds of observed values, rather than just one, came from the unknown normal distribution. In this example, the method of maximum likelihood is obviously unsatisfactory. □
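The unboundedness of the likelihood in Example 7 is easy to verify numerically. The sketch below is an illustration with made-up data, not part of the original text: it evaluates the log-likelihood of Eq. (14) at μ equal to one observed value for a sequence of variances shrinking toward 0, and the log-likelihood increases without bound.

```python
import math

def mixture_pdf(x, mu, sigma2):
    # Equal-weight mixture of N(0, 1) and N(mu, sigma2), as in Eq. (13).
    c = 1.0 / math.sqrt(2.0 * math.pi)
    part1 = c * math.exp(-x * x / 2.0)
    part2 = c / math.sqrt(sigma2) * math.exp(-(x - mu) ** 2 / (2.0 * sigma2))
    return 0.5 * (part1 + part2)

def log_likelihood(data, mu, sigma2):
    return sum(math.log(mixture_pdf(x, mu, sigma2)) for x in data)

data = [0.3, -1.2, 0.8, 2.5, -0.4]      # hypothetical observations
mu = data[0]                             # set mu equal to one observed value
for sigma2 in [1.0, 1e-2, 1e-4, 1e-8]:
    print("sigma2 = %g, log-likelihood = %.2f" % (sigma2, log_likelihood(data, mu, sigma2)))
```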
EXERCISES

1. It is not known what proportion p of the purchases of a certain brand of breakfast cereal are made by women and what proportion are made by men. In a random sample of 70 purchases of this cereal, it was found that 58 were made by women and 12 were made by men. Find the M.L.E. of p.

2. Consider again the conditions in Exercise 1, but suppose also that it is known that 1/2 ≤ p ≤ 2/3. If the observations in the random sample of 70 purchases are as given in Exercise 1, what is the M.L.E. of p?

3. Suppose that X_1, ..., X_n form a random sample from a Bernoulli distribution for which the parameter θ is unknown, but it is known that θ lies in the open interval 0 < θ < 1. Show that the M.L.E. of θ does not exist if every observed value is 0 or if every observed value is 1.
4. Suppose that X_1, ..., X_n form a random sample from a Poisson distribution for which the mean θ is unknown (θ > 0).
7. Suppose that X_1, ..., X_n form a random sample from a distribution for which the p.d.f. f(x | θ) is as follows:

f(x | θ) = e^{θ−x}   for x > θ,   and   0 for x ≤ θ.

Also, suppose that the value of θ is unknown (−∞ < θ < ∞).
(a) Show that the M.L.E. of θ does not exist.
(b) Determine another version of the p.d.f. of this same distribution for which the M.L.E. of θ will exist, and find this estimator.
8. Suppose that X_1, ..., X_n form a random sample from a distribution for which the p.d.f. f(x | θ) is as follows:

f(x | θ) = θ x^{θ−1}   for 0 < x < 1,   and   0 otherwise.

Also, suppose that the value of θ is unknown (θ > 0). Find the M.L.E. of θ.
9. Suppose that X_1, ..., X_n form a random sample from a distribution for which the p.d.f. f(x | θ) is as follows:

f(x | θ) = (1/2) e^{−|x−θ|}   for −∞ < x < ∞.

Also, suppose that the value of θ is unknown (−∞ < θ < ∞). Find the M.L.E. of θ.
10. Suppose that X_1, ..., X_n form a random sample from a uniform distribution on the interval (θ_1, θ_2), where both θ_1 and θ_2 are unknown (−∞ < θ_1 < θ_2 < ∞). Find the M.L.E.'s of θ_1 and θ_2.

11. Suppose that a certain large population contains k different types of individuals (k ≥ 2), and let θ_i denote the proportion of individuals of type i, for i = 1, ..., k. Here, 0 ≤ θ_i ≤ 1 and θ_1 + ··· + θ_k = 1. Suppose also that in a random sample of n individuals from this population, exactly n_i individuals are of type i, where n_1 + ··· + n_k = n. Find the M.L.E.'s of θ_1, ..., θ_k.
12. Suppose that the two-dimensional vectors (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) form a random sample from a bivariate normal distribution for which the means of X and Y are unknown but the variances of X and Y and the correlation between X and Y are known. Find the M.L.E.'s of the means.

13. Suppose that the two-dimensional vectors (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) form a random sample from a bivariate normal distribution for which the
means of X and Y, the variances of X and Y, and the correlation between X and Y are unknown. Show that the M.L.E.'s of these five parameters are as follows:

μ̂₁ = X̄_n   and   μ̂₂ = Ȳ_n,

σ̂₁² = (1/n) Σ_{i=1}^n (X_i − X̄_n)²   and   σ̂₂² = (1/n) Σ_{i=1}^n (Y_i − Ȳ_n)²,

ρ̂ = [Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n)] / [Σ_{i=1}^n (X_i − X̄_n)² Σ_{i=1}^n (Y_i − Ȳ_n)²]^{1/2}.
6.6. PROPERTIES OF MAXIMUM LIKELIHOOD ESTIMATORS
Invariance

Suppose that the variables X_1, ..., X_n form a random sample from a distribution for which either the p.f. or the p.d.f. is f(x | θ), where the value of the parameter θ is unknown, and let θ̂ denote the M.L.E. of θ. Thus, for any observed values x_1, ..., x_n, the likelihood function f_n(x | θ) is maximized when θ = θ̂. Suppose now that we change the parameter in the distribution as follows: Instead of expressing the p.f. or the p.d.f. f(x | θ) in terms of the parameter θ, we shall express it in terms of a new parameter τ = g(θ), where g is a one-to-one function of θ. We shall let θ = h(τ) denote the inverse function. Then, expressed in terms of the new parameter τ, the p.f. or p.d.f. of each observed value will be f[x | h(τ)] and the likelihood function will be f_n[x | h(τ)]. The M.L.E. τ̂ of τ will be equal to the value of τ for which f_n[x | h(τ)] is maximized. Since f_n(x | θ) is maximized when θ = θ̂, it follows that f_n[x | h(τ)] will be maximized when h(τ) = θ̂. Hence, the M.L.E. τ̂ must satisfy the relation h(τ̂) = θ̂ or, equivalently, τ̂ = g(θ̂). We have therefore established the following property, which is called the invariance property of maximum likelihood estimators:
If θ̂ is the maximum likelihood estimator of θ, then g(θ̂) is the maximum likelihood estimator of g(θ).

The invariance property can be extended to functions of a vector parameter θ. Suppose that θ = (θ_1, ..., θ_k) is a vector of k real-valued parameters. If τ = g(θ_1, ..., θ_k) is a real-valued function of θ_1, ..., θ_k, then τ can be regarded as a single component of a one-to-one transformation from the set of parameters
θ_1, ..., θ_k to a new set of k real-valued parameters. Therefore, if θ̂_1, ..., θ̂_k are the M.L.E.'s of θ_1, ..., θ_k, it follows from the invariance property that the M.L.E. of τ is τ̂ = g(θ̂_1, ..., θ̂_k).
Example 1: Estimating the Standard Deviation and the Second Moment. Suppose that the variables X_1, ..., X_n form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown. We shall determine the M.L.E. of the standard deviation σ and the M.L.E. of the second moment of the normal distribution, E(X²). It was found in Example 3 of Sec. 6.5 that the M.L.E.'s μ̂ and σ̂² of the mean and the variance are the sample mean and the sample variance, respectively. From the invariance property, we can conclude that the M.L.E. σ̂ of the standard deviation σ is the square root of the sample variance; and since E(X²) = μ² + σ², the M.L.E. of E(X²) is μ̂² + σ̂². □
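As a quick numerical illustration of the invariance property (a sketch with made-up data, not taken from the text), the M.L.E.'s of σ and of E(X²) can be computed directly from the M.L.E.'s of μ and σ².

```python
import math

def normal_mles(sample):
    n = len(sample)
    mu_hat = sum(sample) / n                              # sample mean
    var_hat = sum((x - mu_hat) ** 2 for x in sample) / n  # sample variance (the M.L.E. of sigma^2)
    return mu_hat, var_hat

x = [4.1, 5.3, 4.8, 6.0, 5.2, 4.5]            # hypothetical observations
mu_hat, var_hat = normal_mles(x)

sigma_hat = math.sqrt(var_hat)                # M.L.E. of the standard deviation, by invariance
second_moment_hat = mu_hat ** 2 + var_hat     # M.L.E. of E(X^2) = mu^2 + sigma^2, by invariance
print(mu_hat, var_hat, sigma_hat, second_moment_hat)
```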
Numerical Computation

In many problems there exists a unique M.L.E. θ̂ of a given parameter θ, but this M.L.E. cannot be expressed as an explicit algebraic function of the observations in the sample. For a given set of observed values, it is necessary to determine the value of θ̂ by numerical computation. We shall illustrate this condition by two examples.

Example 2: Sampling from a Gamma Distribution. Suppose that the variables
X_1, ..., X_n form a random sample from a gamma distribution for which the p.d.f. is as follows:

f(x | α) = [1/Γ(α)] x^{α−1} e^{−x}   for x > 0.   (1)

Suppose also that the value of α is unknown (α > 0) and is to be estimated. The likelihood function is

f_n(x | α) = [1/Γ^n(α)] (∏_{i=1}^n x_i)^{α−1} exp(−Σ_{i=1}^n x_i).   (2)
The M.L.E. of α will be the value of α which satisfies the equation

∂ log f_n(x | α) / ∂α = 0.   (3)
When we apply Eq. (3) in this example, we obtain the following equation:

Γ'(α)/Γ(α) = (1/n) Σ_{i=1}^n log x_i.   (4)

Tables of the function Γ'(α)/Γ(α), which is called the digamma function, are included in various published collections of mathematical tables. For any given values of x_1, ..., x_n, the unique value of α that satisfies Eq. (4) must be determined either by referring to these tables or by carrying out a numerical analysis of the digamma function. This value will be the M.L.E. of α. □
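In place of published tables, Eq. (4) can be solved numerically. The sketch below is illustrative only; it assumes SciPy is available and uses invented data. It finds the root of Γ'(α)/Γ(α) − (1/n) Σ log x_i with a standard bracketing root-finder, relying on the fact that the digamma function is increasing.

```python
import math
from scipy.special import digamma     # digamma(a) = Gamma'(a) / Gamma(a)
from scipy.optimize import brentq

def gamma_alpha_mle(sample):
    # Solve digamma(alpha) = mean(log x_i), i.e., Eq. (4).
    target = sum(math.log(x) for x in sample) / len(sample)
    f = lambda a: digamma(a) - target
    # digamma is increasing on (0, infinity), so the root inside this wide bracket is unique.
    return brentq(f, 1e-6, 1e6)

x = [1.2, 0.7, 2.5, 1.9, 0.4, 1.1]     # hypothetical observed values
print("M.L.E. of alpha:", gamma_alpha_mle(x))
```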
Example 3: Sampling from a Cauchy Distribution. Suppose that the variables X_1, ..., X_n form a random sample from a Cauchy distribution centered at an unknown point θ (−∞ < θ < ∞), for which the p.d.f. is as follows:

f(x | θ) = 1 / {π[1 + (x − θ)²]}   for −∞ < x < ∞.   (5)
Suppose also that the value of θ is to be estimated. The likelihood function is

f_n(x | θ) = 1 / {π^n ∏_{i=1}^n [1 + (x_i − θ)²]}.   (6)

Therefore, the M.L.E. of θ will be the value which minimizes

∏_{i=1}^n [1 + (x_i − θ)²].   (7)

For any given values of x_1, ..., x_n, the value of θ which minimizes the expression (7) must be determined by a numerical computation. □
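A numerical computation of this kind is routine. The sketch below is an illustration with invented data, not part of the original text; it assumes NumPy is available and minimizes the log of the expression (7) by a dense grid search, since the Cauchy likelihood can have several local maxima while its global maximum must lie between the smallest and largest observations.

```python
import math
import numpy as np

def neg_log_likelihood(theta, data):
    # Minimizing this is the same as minimizing prod_i [1 + (x_i - theta)^2], expression (7).
    return sum(math.log(1.0 + (x - theta) ** 2) for x in data)

data = [-1.1, 0.3, 0.8, 1.9, 0.2, 14.7]      # hypothetical sample; note one extreme value
grid = np.linspace(min(data), max(data), 20001)
values = [neg_log_likelihood(t, data) for t in grid]
theta_hat = grid[int(np.argmin(values))]
print("M.L.E. of theta (approx.):", theta_hat)
```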
Consistency

Consider an estimation problem in which a random sample is to be taken from a distribution involving a parameter θ. Suppose that for every sufficiently large sample size n, that is, for every value of n greater than some given minimum number, there exists a unique M.L.E. of θ. Then, under certain conditions which are typically satisfied in practical problems, the sequence of M.L.E.'s is a consistent sequence of estimators of θ. In other words, in most problems the
sequence of M.L.E.'s converges in probability to the unknown value of θ as n → ∞.
We have remarked in Sec. 6.4 that under certain general conditions the sequence of Bayes estimators of a parameter θ is also a consistent sequence of estimators. Therefore, for a given prior distribution and a sufficiently large sample size n, the Bayes estimator and the M.L.E. of θ will typically be very close to each other, and both will be very close to the unknown value of θ. We shall not present any formal details of the conditions that are needed to prove this result. We shall, however, illustrate the result by considering again a random sample X_1, ..., X_n from a Bernoulli distribution for which the parameter θ is unknown (0 ≤ θ ≤ 1). It was shown in Sec. 6.4 that if the given prior distribution of θ is a beta distribution, then the difference between the Bayes estimator of θ and the sample mean X̄_n converges to 0 as n → ∞. Furthermore, it was shown in Example 1 of Sec. 6.5 that the M.L.E. of θ is X̄_n. Thus, as n → ∞, the difference between the Bayes estimator and the M.L.E. will converge to 0. Finally, as remarked in Sec. 6.4, the sample mean X̄_n converges in probability to θ as n → ∞. Therefore both the sequence of Bayes estimators and the sequence of M.L.E.'s are consistent sequences.
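A simulation sketch (not part of the original text; the true parameter value and the beta prior parameters are arbitrary choices) shows the Bernoulli M.L.E. X̄_n and the posterior mean under a beta prior drawing together and approaching the true value of θ as n grows.

```python
import random

random.seed(2)
theta = 0.3                     # true parameter (arbitrary)
a, b = 2.0, 2.0                 # beta prior parameters (arbitrary)

for n in [10, 100, 1000, 10000]:
    xs = [1 if random.random() < theta else 0 for _ in range(n)]
    s = sum(xs)
    mle = s / n                           # M.L.E. of theta
    bayes = (a + s) / (a + b + n)         # posterior mean under a Beta(a, b) prior
    print("n = %5d   M.L.E. = %.4f   Bayes = %.4f" % (n, mle, bayes))
```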
Sampling Plans

Suppose that an experimenter wishes to take observations from a distribution for which the p.f. or the p.d.f. is f(x | θ) in order to gain information about the value of the parameter θ. The experimenter could simply take a random sample of a predetermined size from the distribution. Instead, however, he may begin by first observing a few values at random from the distribution, and noting the cost and the time spent in taking these observations. He may then decide to observe a few more values at random from the distribution, and to study all the values thus far obtained. At some point the experimenter will decide to stop taking observations and will estimate the value of θ from all the observed values that have been obtained up to that point. He might decide to stop either because he feels that he has enough information to be able to make a good estimate of θ or because he feels that he cannot afford to spend any more money or time on sampling.

In this experiment, the number n of observations in the sample is not fixed beforehand. It is a random variable whose value may very well depend on the magnitudes of the observations as they are obtained. Regardless of whether an experimenter decides to fix the value of n before any observations are taken or prefers to use some other sampling plan, such as the one just described, it can be shown that the likelihood function L(θ) based on the observed values x_1, ..., x_n can be taken to be

L(θ) = f_n(x | θ) = ∏_{i=1}^n f(x_i | θ).
It follows, therefore, that the M.L.E. of θ will be the same, no matter what type of sampling plan is used. In other words, the value of θ̂ depends only on the values x_1, ..., x_n that are actually observed and does not depend on the plan (if there was one) that was used by the experimenter to decide when to stop sampling.

To illustrate this property, suppose that the intervals of time, in minutes, between arrivals of successive customers at a certain service facility are i.i.d. random variables. Suppose also that each interval has an exponential distribution with parameter β, and that a set of observed intervals X_1, ..., X_n form a random sample from this distribution. It follows from Exercise 6 of Sec. 6.5 that the M.L.E. of β will be β̂ = 1/X̄_n. Also, since the mean μ of the exponential distribution is 1/β, it follows from the invariance property of M.L.E.'s that μ̂ = X̄_n. In other words, the M.L.E. of the mean is the average of the observations in the sample.

Consider now the following three sampling plans:

(i) An experimenter decides in advance to take exactly 20 observations, and the average of these 20 observations turns out to be 6. Then the M.L.E. of μ is μ̂ = 6.

(ii) An experimenter decides to take observations X_1, X_2, ... until he obtains a value greater than 10. He finds that X_i < 10 for i = 1, ..., 19 and that X_20 > 10. Hence, sampling terminates after 20 observations. If the average of these 20 observations is 6, then the M.L.E. is again μ̂ = 6.

(iii) An experimenter takes observations one at a time, with no particular plan in mind, either until he is forced to stop sampling or until he feels that he has gained enough information to make a good estimate of μ. If for either reason he stops after he has taken 20 observations and if the average of the 20 observations is 6, then the M.L.E. is again μ̂ = 6.

Sometimes, an experiment of this type must be terminated during an interval when the experimenter is waiting for the next customer to arrive. If a certain amount of time has elapsed since the arrival of the last customer, this time should not be omitted from the sample data, even though the full interval to the arrival of the next customer has not been observed. Suppose, for example, that the average of the first 20 observations is 6; that the experimenter waits another 15 minutes but no other customer arrives; and that he then terminates the experiment. In this case, we know that the M.L.E. of μ would have to be greater than 6, since the value of the 21st observation must be greater than 15, even though its exact value is unknown. The new M.L.E. can be obtained by multiplying the likelihood function for the first 20 observations by the probability that the 21st observation is greater than 15, and finding the value of μ which maximizes this new likelihood function (see Exercise 14; a numerical sketch is given below).

Other properties of M.L.E.'s will be discussed later in this chapter and in Chapter 7.
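The modification described in the last paragraph is easy to carry out numerically. The sketch below is illustrative only; it uses the numbers quoted in the text and the mean parametrization of the exponential distribution. It multiplies the likelihood of the 20 fully observed intervals by Pr(X_21 > 15) = e^{−15/μ} and maximizes over μ; the maximizer works out to (20·6 + 15)/20 = 6.75.

```python
import math

n, xbar, censor = 20, 6.0, 15.0   # 20 intervals averaging 6; one further interval known only to exceed 15

def log_likelihood(mu):
    # 20 exponential densities with mean mu, times Pr(X > 15) = exp(-15/mu) for the censored interval.
    return -n * math.log(mu) - n * xbar / mu - censor / mu

# Simple grid search over a reasonable range of mu.
grid = [0.01 * k for k in range(100, 2001)]          # mu from 1.00 to 20.00
mu_hat = max(grid, key=log_likelihood)
print("numerical M.L.E. of mu:", mu_hat)             # about 6.75
print("closed form (n*xbar + censor)/n:", (n * xbar + censor) / n)
```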
The Likelihood Principle
The values of both the M.L.E. of a parameter θ and the Bayes estimator of θ depend on the observed values in the sample only through the likelihood function f_n(x | θ) determined by these observed values. Therefore, if two different sets of observed values determine the same likelihood function, then the same value of the M.L.E. of θ will be obtained for both sets of observed values. Likewise, for a given prior p.d.f. of θ, the same value of the Bayes estimate of θ will be obtained for both sets of observed values. Now suppose that two different sets of observed values x and y determine likelihood functions which are proportional to each other. In other words, the likelihood functions differ only by a factor which may depend on x and y but does not depend on θ. In this case, it can be verified that the M.L.E. of θ will again be the same regardless of whether x or y is considered. Also, for any given prior p.d.f. of θ, the Bayes estimate of θ will be the same. Thus, both M.L.E.'s and Bayes estimators are compatible with the following principle of statistical inference, which is known as the likelihood principle:
Suppose that two different sets of observed values x and y that might be obtained either from the same experiment or from two different experiments have the property that they determine the same likelihood function for a certain parameter θ or determine likelihood functions which are proportional to each other. Then x and y furnish the same information about the unknown value of θ, and a statistician should obtain the same estimate of θ from either x or y.
For example, suppose that a statistician must estimate the unknown proportion θ of defective items in a large manufactured lot. Suppose also that the statistician is informed that ten items were selected at random from the lot, and that exactly two were defective and eight were nondefective. Suppose, however, that the statistician does not know which one of the following two experiments had been performed: (i) A fixed sample of ten items had been selected from the lot, and it was found that two of the items were defective. (ii) Items had been selected at random from the lot, one at a time, until two defective items had been obtained, and it was found that a total of ten items had to be selected. For each of these two possible experiments, the observed values determine a likelihood function that is proportional to θ²(1 − θ)⁸ for 0 ≤ θ ≤ 1. Therefore, if the statistician uses a method of estimation that is compatible with the likelihood principle, he does not need to know which one of the two possible experiments was actually performed. His estimate of θ would be the same for either case. We have already remarked that for any given prior distribution of θ and any loss function, the Bayes estimate of θ would be the same for either experiment. We have also remarked that the M.L.E. of θ would be the same for either experiment. However, in Chapter 7 we shall discuss a method of estimation called unbiased estimation. Although this method is widely used in statistical problems, it
violates the likelihood principle and specifies that a different estimate of θ should be used in each of the two experiments.
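The two experiments can be compared directly. In the sketch below (illustrative only, not taken from the text), the binomial likelihood for "two defectives in a fixed sample of ten" and the negative binomial likelihood for "ten items needed to obtain two defectives" are evaluated on a grid of θ values; their ratio is the same constant everywhere, and both are maximized at θ = 0.2.

```python
from math import comb

def binomial_likelihood(theta, n=10, k=2):
    # Fixed sample of n items, k defectives observed.
    return comb(n, k) * theta ** k * (1 - theta) ** (n - k)

def negative_binomial_likelihood(theta, k=2, n=10):
    # Items inspected one at a time until k defectives are found; n items were needed in total.
    return comb(n - 1, k - 1) * theta ** k * (1 - theta) ** (n - k)

grid = [i / 1000 for i in range(1, 1000)]
ratios = {round(binomial_likelihood(t) / negative_binomial_likelihood(t), 10) for t in grid}
print("distinct likelihood ratios over the grid:", ratios)                       # a single constant
print("binomial M.L.E.:", max(grid, key=binomial_likelihood))                    # 0.2
print("negative binomial M.L.E.:", max(grid, key=negative_binomial_likelihood))  # 0.2
```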
EXERCISES
1. Suppose that the variables X_1, ..., X_n form a random sample from a Poisson distribution for which the mean is unknown. Determine the M.L.E. of the standard deviation of the distribution.

2. Suppose that X_1, ..., X_n form a random sample from an exponential distribution for which the value of the parameter β is unknown. Determine the M.L.E. of the median of the distribution.

3. Suppose that the lifetime of a certain type of lamp has an exponential distribution for which the value of the parameter β is unknown. A random sample of n lamps of this type are tested for a period of T hours and the number X of lamps which fail during this period is observed, but the times at which the failures occurred are not noted. Determine the M.L.E. of β based on the observed value of X.

4. Suppose that X_1, ..., X_n form a random sample from a uniform distribution on the interval (a, b), where both endpoints a and b are unknown. Find the M.L.E. of the mean of the distribution.

5. Suppose that X_1, ..., X_n form a random sample from a normal distribution for which both the mean and the variance are unknown. Find the M.L.E. of the 0.95 quantile of the distribution, that is, of the point θ such that Pr(X < θ) = 0.95.

6. For the conditions of Exercise 5, find the M.L.E. of v = Pr(X > 2).

7. Suppose that X_1, ..., X_n form a random sample from a gamma distribution for which the p.d.f. is given by Eq. (1) in this section. Find the M.L.E. of Γ'(α)/Γ(α).

8. Suppose that X_1, ..., X_n form a random sample from a gamma distribution for which both parameters α and β are unknown. Find the M.L.E. of α/β.

9. Suppose that X_1, ..., X_n form a random sample from a beta distribution for which both parameters α and β are unknown. Show that the M.L.E.'s of α and β satisfy the following equation:

Γ'(α̂)/Γ(α̂) − Γ'(β̂)/Γ(β̂) = (1/n) Σ_{i=1}^n log [x_i/(1 − x_i)].
10. Suppose that X_1, ..., X_n form a random sample of size n from a uniform
distribution on the interval (0, θ), where the value of θ is unknown. Show that the sequence of M.L.E.'s of θ is a consistent sequence.

11. Suppose that X_1, ..., X_n form a random sample from an exponential distribution for which the value of the parameter β is unknown. Show that the sequence of M.L.E.'s of β is a consistent sequence.
12. Suppose that X_1, ..., X_n form a random sample from a distribution for which the p.d.f. is as specified in Exercise 8 of Sec. 6.5. Show that the sequence of M.L.E.'s of θ is a consistent sequence.
13. Suppose that a scientist desires to estimate the proportion p of monarch butterflies that have a special type of marking on their wings.
(a) Suppose that he captures monarch butterflies one at a time until he has found five that have this special marking. If he must capture a total of 43 butterflies, what is the M.L.E. of p?
(b) Suppose that at the end of a day the scientist had captured 58 monarch butterflies and had found only three with the special marking. What is the M.L.E. of p?

14. Suppose that 21 observations are taken at random from an exponential distribution for which the mean μ is unknown (μ > 0); that the average of 20 of these observations is 6; and that although the exact value of the other observation could not be determined, it was known to be greater than 15. Determine the M.L.E. of μ.

15. Suppose that each of two statisticians A and B must estimate a certain parameter θ whose value is unknown (θ > 0). Statistician A can observe the value of a random variable X which has a gamma distribution with parameters α and β, where α = 3 and β = θ; and statistician B can observe the value of a random variable Y which has a Poisson distribution with mean 2θ. Suppose that the value observed by statistician A is X = 2 and the value observed by statistician B is Y = 3. Show that the likelihood functions determined by these observed values are proportional, and find the common value of the M.L.E. of θ obtained by each statistician.

16. Suppose that each of two statisticians A and B must estimate a certain parameter p whose value is unknown (0 < p < 1). Statistician A can observe the value of a random variable X which has a binomial distribution with parameters n = 10 and p; and statistician B can observe the value of a random variable Y which has a negative binomial distribution with parameters r = 4 and p. Suppose that the value observed by statistician A is X = 4 and the value observed by statistician B is Y = 6. Show that the likelihood functions determined by these observed values are proportional, and find the common value of the M.L.E. of p obtained by each statistician.
6.7. SUFFICIENT STATISTICS
Definition of a Statistic

In many problems in which a parameter θ must be estimated, it is possible to find either an M.L.E. or a Bayes estimator that will be suitable. In some problems, however, neither of these estimators may be suitable. There may not be any M.L.E., or there may be more than one. Even when an M.L.E. is unique, it may not be a suitable estimator, as in Example 4 of Sec. 6.5, where the M.L.E. always underestimates the value of θ. Reasons why there may not be a suitable Bayes estimator were presented at the beginning of Sec. 6.5. In such problems, the search for a good estimator must be extended beyond the methods that have been introduced thus far.
used to expedite the search for a good estimator in many problems. We shall assume, as usual, that the random variables
XI , " ' , Xn
form a random sample
or the p.d.f. of this distribution. For simplicity, it will be
from either a discrete distribution or a continuous distribution, and we shall let
f(x I 0 ) denote the p.r.
convenient in most of the following discussion to assume that the distribution is
continuous and that f(x I 0 ) is a p.d.f. However, it should be kept in mind that the discussion applies equally well, except perhaps with some obvious changes. to
f(x I 0). We shall also assume that the.
unknown value of (j must belong to some specified parameter space a discrete distribution for which the p . L is Since the random variables
that their joint p.d.L
o
E
n:
Xl' "
_ .
X"
n.
form a random sample. it is known:
f"(x I 0) has the following form for some particular value of
f. ( x I O ) = f(xI I O)
..
·
(1 )
f(x" I O).
In other words, it is known that the joint p.d.f. of
XI" ' " X"
is a member of the
8E
The problem of estimating the value of (j can therefore be viewed as the problem family contaiuing all p.u.f:s having the form (1) for all possible values of
[I.
of selecting by inference the particular distribution in this family which generated the observations
Xl
• . . . �
X".
T = r(XI , X,,) of the observations in the ran dom sample is called a statistic. Three examples of statistics are the sample mean XII' the maximum Yn of the values of Xl XII' and the function r( XI • X,,) which has the constant value 3 for aU values of Xl" ' " XII' In any estimation Any real-valued function
• . . .
• . . . •
problem, we can say that an estimator of regarded as an estimate of the value of
8.
8
_ . . •
is a statistic whose value can be
For any fixed value of 8 E 0, the distribution of any given statistic T can be derived from the joint p.d.f. of Xl' . . . ' X" given in Eq. (1). In general, this distribution will depend on the value of fJ. Hence, there will be a family of possible distributions of T corresponding to the different possible values of 8 E n.
Definition of a Sufficient Statistic Suppose that in a specific estimation problem, two statisticians A and B must estimate the value of the parameter 8; that statistician A can observe the values of the observations Xl' . . . ' Xn in a random sample; and that statistician B cannot observe the individual values of Xl XII but can learn the value of a certain statistic T = r( XI , XII)' In this case statistician A can choose any function of the observations Xl' . . . ' X" as an estimator of fJ, whereas statistician B can use only a function of T. Hence, it follows that A will generally be able to find a better estimator than will B. In some problems, however, B will be able to do just as wel1 as A . In such a problem, the single function T = r( XI ' . . . , Xn) will in some sense summarize aU the information contained in the random sample, and knowledge of the individual values of Xl' . . . ' X" will be irrelevant in the search for a good estimator of fJ. A statistic T having this property is called a sufficient slalislic. We shall now present the formal definition of a sufficient statislic� If T is a statistic and t is any particular value of T, then the condi tional joint distribution of Xl' X"' given that T t, can be calculated from Eq. (1). In general, this conditional joint distribution Will depend on the value of 8. There fore, for each value of t, there will be a family of possible conditional distribu tions corresponding to the different possible values of 0 E Q. It may happen, however, that for each possible value of t, the conditional joint distribution of Xl X given that T = t, is the same for all values of fJ E U and therefore does not actually depend on the value of O. In this. case it is said that T is a sufficient statistic for the parameter O. Before we describe a simple method for fihding a sufficient statistic, and before we consider an example of a sufficient statistic, we shall indicate why a sufficient statistic T that satisfies the definition just given is regarded as summariz ing all the relevant information about 8 contained in the sample X, , X". Let us return to the case of the statistician B who can learn only the value of the statistic T and cannot observe the individual values of Xl" . " XII " If T is a sufficient statistic, then the conditional joint distribution of Xl' " . , X"' given that T (, is completely known for any observed value t and does not depend on the unknown value of 8. Therefore, for any value 1 that might be observed. statisti X,; in accord cian B could, in principle, generate n random variables Xl' ance with this conditional joint distribution. The process of generating random • . . .
'
• . . .
. . . •
• . . .
'
=
n•
• . . .
=
. . . •
358
Estimation
variables
xl' "
X� having a specified joint probability distribution is called an
,
_
auxiliary randomization.
When we use this process of first observing
T and tben generating Xl. " " x;
in accordance with the specified conditional joint distribution, it follows that for
0, the marginal joint distribution of Xl" ' " x� will be the same as the joint distribution of Xl' Xn• Hence, if statistician B can observe the value of a sufficient statistic T, then he can generate n random variables Xl • . . . • X� which have the same joint distribution as the original random sample any given value of 0
Xl" ' "
E
. . . •
Xn . The property that distinguishes a sufficient statistic
T from a statistic
which is not sufficient may be described as follows: The auxiliary randomization used to generate tbe random variables
Xi, . . . • X�
after the sufficient statistic
T
has been observed does not require any knowledge about the value of 0, since the when T is given does not depend on T were not sufficient, this auxiliary randomization could not be carried out because the conditional joint distribution of Xl" ' " X" for a given value of T would involve the value of 8 and this value is unknown. conditional joint distribution of the value of
O.
Xl" ' " X"
]f the statistic
We can now show why statistician B. who observes only the value of a
T, can nevertheless estimate 8 just as well as can statistician A. who observes the values of Xl ' X". Suppose that A plans to use a particular estimator 8 ( X1, , X,,) to estimate O. and that B observes the value of T and. generates Xi , X:. which have the same joint distribution as Xl Xn• If B uses the estimator 8 ( X{, . . . • X�), then it follows that the probability distribution sufficient statistic
• . . .
•
•
•
• . . .
• . . . •
" of B 's estimator will be the same as the probability distribution of A's estimator.
This discussion illustrates why, when searching for a good estimator, a statistician can restrict the search to estimators which are functions of a sufficient statistic We shall return to this point in Sec.
T.
6.9.
The Factorization Criterion We shall now present a simple method for finding a sufficient statistic which can be applied in many problems. This method is based on the following result, which was developed with increasing generality by
1935.
and P. R. Halmos and
L. J.
Savage in
R-. A. Fisher in 1922.
1949.
J. Neyman in
The Factorization Criterion. Let Xto . . . , X" form a random sample from either a continuous distribution or a discrete distribution for which the p.d./. or the p.f. is f(x 1 8 ), where the value of 8 is unknown and belongs to a given parameter space Q. A statistic T = r( Xl X,, ) is a sufficient statistic for 8 if and only if the joint p. d.f. or the joint pI f"(x I 0 ) of X,' X" can be " E R and all values of factored as follows for all values of x = (xt xn) • . . . •
. . . •
• . . .
,
H E Q:
f.{ x I O ) - u ( x ) v [ r ( x ) , O l .
(2)
6.7.
Sufficient Statistics
359
Here, the functions u and v are nonnegative; the function u may depend on x but does not depend on IJ; and the function v will depend on 0 but depends on the observed value x only through the value of the statistic r( x). Proof We shall give the proof only when the random vector X � ( XI " ' " X,)
has a discrete distribution, in which case
I,(x 1 0) � Pr(X - x 1 0).
Suppose first that 1,( x I 0) can be factored as in Eq. (2) [or aU values o[ x E Rn and 0 E n. For each possible value 1 of T, let A(I) denote the set of all points x E Rn such that rex) = I. For any given value of 0 E n, we shall determine the conditional distribution of X given that T = t. For any point x A(I), X - x I O) _ In( x I O) Pr( X = x I T � t , 0) - Pr( Pr( T = 1 1 0) Ly E At,) 1, ( y I O) ' E
Since r( y) � I for every point y E A(t), and since Eq. (2) that
Pr(X � x I T = I, O ) = ", ,,(x) ""yeA(t) U ( y ) Finally, for any point
x
x E A(I), it follows from
.
(3)
that does not belong to A(I),
Pr(X = x I T = I, O) � O.
(4)
It can be seen from Eqs. (3) and (4) that the conditional distribution of X does not depend on O. Therefore, T is a sufficient statistic. Conversely, suppose that T is a sufficient statistic. Then, for any given value t of T, any point x E A(I), and any value of 0 E n, ·the conditional probability Pr(X � x I T = 1, 0) will not depend on 0 and wiU therefore have the form
Pr(X � x l T - /,0) - u(x). If we let U(I, 0) = Pr(T = 1 1 8), it follows that M x I e) - Pr( X � x I e) - Pr( X � x I T - /,8 )Pr( T = I 1 8) � u(x) v(/,O). Hence, In(x 1 8) has been factored in the form specified in Eq. (2). The proof for a random sample Xl Xn from a continuous distribution • . . . •
requires somewhat different methods and will not be given here. 0
Estimation
360
x u(x)
f.(x 1 0)
0
0 for all values of E n , the value For any value of for which in Eq. (2) can be chosen to be O. Therefore, when the of the function factorization criterion is being applied, it is sufficient to verify that a factorization of the form given in Eq. (2) is satisfied for every value of such that >0 for at least one value of (J E O. We shall now illustrate the use of the factorization criterion by giving four examples. =
x
f.(x I 0)
Example 1: Sampling from a Poisson Distribution. Suppose that X_1, ..., X_n form a random sample from a Poisson distribution for which the value of the mean θ is unknown (θ > 0). We shall show that T = Σ_{i=1}^n X_i is a sufficient statistic for θ. For any set of nonnegative integers x_1, ..., x_n, the joint p.f. f_n(x | θ) of X_1, ..., X_n is as follows:
f_n(x | θ) = ∏_{i=1}^n [e^{−θ} θ^{x_i} / x_i!] = [1/(∏_{i=1}^n x_i!)] e^{−nθ} θ^y,

where y = Σ_{i=1}^n x_i. It can be seen that f_n(x | θ) has been expressed, as in Eq. (2), as the product of a function that does not depend on θ and a function that depends on θ but depends on the observed vector x only through the value of y. It follows that T = Σ_{i=1}^n X_i is a sufficient statistic for θ. □

Example 2: Applying the Factorization Criterion to a Continuous Distribution. Suppose that X_1, ..., X_n form a random sample from a continuous distribution with the following p.d.f.:

f(x | θ) = θ x^{θ−1}   for 0 < x < 1,   and   0 otherwise.

It is assumed that the value of the parameter θ is unknown (θ > 0). We shall show that T = ∏_{i=1}^n X_i is a sufficient statistic for θ. For 0 < x_i < 1 (i = 1, ..., n), the joint p.d.f. f_n(x | θ) of X_1, ..., X_n is as follows:

f_n(x | θ) = θ^n (∏_{i=1}^n x_i)^{θ−1}.   (5)

Furthermore, if at least one value of x_i is outside the interval 0 < x_i < 1, then f_n(x | θ) = 0 for every value of θ ∈ Ω. The right side of Eq. (5) depends on x only through the value of the product ∏_{i=1}^n x_i. Therefore, if we let u(x) = 1 and r(x) = ∏_{i=1}^n x_i, then f_n(x | θ) in Eq. (5) can be considered to be factored in the form specified in Eq. (2). It follows from the factorization criterion that the statistic T = ∏_{i=1}^n X_i is a sufficient statistic for θ. □

Example 3: Sampling from a Normal Distribution. Suppose that X_1, ..., X_n form a random sample from a normal distribution for which the mean μ is unknown and the variance σ² is known. We shall show that T = Σ_{i=1}^n X_i is a sufficient
•
f)
II
y.
0
2:
=
.
IS as
,-I
x,
(θ > 0); T = ∏_{i=1}^n X_i.

8. A uniform distribution on the integers 1, 2, ..., θ, as defined in Sec. 3.1, where the value of θ is unknown (θ = 1, 2, ...); T = max(X_1, ..., X_n).

9. A uniform distribution on the interval (a, b), where the value of a is known and the value of b is unknown (b > a); T = max(X_1, ..., X_n).

10. A uniform distribution on the interval (a, b), where the value of b is known and the value of a is unknown (a < b); T = min(X_1, ..., X_n).
6.7.
363
Sutftcient Statlstlcs
Koopman-Darmois family, i.f f(x I fJ) can be written as foHows for fJ E all values of x:
g and
f(x I O ) - a ( O ) b (x )exp [ e ( O ) d ( x ) ] . Here a(O) and e(O) are arbitrary functions of 0, and b(x) and d(x) are arbitrary fUDctions of x. Assuming that the random variables Xl X" form a random sample from a distribution which belongs to an exponential family of this type. show that T = 1:7_1 d( XI ) is a sufficient statistic for O. 12. Show that each of the following families of distributions is an exponential family. as defined in Exercise 11: (a) The family of Bernoulli distriburions with an unknown value of the parameter p. (b) The family of Poisson distributions with an unknown mean. (c) The family of negative binomial distributions for which the value of r is known and the value of p is unknown. (d) The family of normal distributions with an unknown mean and a known variance. (e) The family of normal distributions with an unknown variance and a known mean. • . . . •
(f) The family of gamma distributions for which the value of IX is unknown and the value of fJ is known. (g) The family of gamma distributions for wbjch the value of a is known and the value of fJ is unknown. (h) The family of beta distributions for which the value of a is unknown and the value of {3 is known. (i) The family of beta distributions f6r which the value of a is known and the value of {1 is unknown. 13. Suppose that Xl X" form a random sample from a distribution for which the p.d.1. is fIx 1 0), where the value of the parameter 0 belongs to a given parameter space !l. Suppose that T r( XI, . . . , X.) and T ' r'(XI , , X.) are two statistics such that T ' is a one-to-one runction or T; that is, the value of T' can be determined from the value of T without knowing the values of Xl X"' and the value of T can be determined rrom the value of T' without knowing the values of Xl Xn' Show that T' is a sufficient statistic [or 0 ir and only if T is a sufficient statistic for O. 14. Suppose that Xl" ' " X" form a random sample from the gamma distribution specified in Exercise 6. Show that the statistic T = 1:7_ 1 log Xi is a sufficient statistic for the parameter a. 15. Suppose that Xl Xn form a random sample from a beta distribution with parameters a and /3. where the value of a is known and the value of fJ is • . . .
'
-
• . . . •
• . . . •
• . . . •
-
.
. .
364
, .' ,
Estimation
unknown (β > 0). Show that the following statistic T is a sufficient statistic for β:

T = Σ_{i=1}^n log [1/(1 − X_i)].
6.8. JOINTLY SUFFICIENT STATISTICS
Definition 01 Jointly Sufficient Statistics We shall continue to suppose that the variables Xl Xn form a random sample from a distribution for which the p.d.r. or the p.r. is f(x I 0), where the parameter 8 must belong to some parameter space Q. However, we shall now explicitly consider the possibility that 0 may be a vector of real·valued parameters. For example, if the sample comes from a normal distribution for wh..ich both the mean J.l. and the variance (} 2 are unknown, then (j would be a two-dimensional vector whose components are p. and (}2. Similarly, if the sample comes from a uniform distribution on some interval (a, b) for which both endpoints a and b are unknown, then (J would be a two-dimensional vector whose components are a and b. We shall, of course, continue to include the possibility that 0 is a one-dimensional parameter. In almost every problem in which 0 is a vector, as well as in many problems in which (J is one-dimensional, there does not exist a single statistic T which is . sulficient. In such a problem it is necessary to find two or more statistics Tit . . . , Tk which together are jOintly sufficient statistics in a sense that will now be described. Suppose that in a given problem the statistics T), . . . , Tk are defined by k. different functions of the vector of observations X = (XI' . . . ' Xn)' Specifically. let 1"; = riC X) for i = 1, . . . , k. Loosely speaking, the statistics T1, · · · , Tic are jointly sufficient statistics for 0 if a statistician who learns only the values of the k functions rl(X), . . . , r/c( X ) can estimate any component of 0, or any function of the components of 8, as well as can a statistician who observes the n individual values of XI" . . , Xn. In terms of the factorization criterion, the following version can now be stated: • . . . •
The statistics T_1, ..., T_k are jointly sufficient statistics for θ if and only if the joint p.d.f. or the joint p.f. f_n(x | θ) can be factored as follows for all values of x ∈ R^n and all values of θ ∈ Ω:

f_n(x | θ) = u(x) v[r_1(x), ..., r_k(x), θ].   (1)
Here the functions u and v are nonnegative, the function u may depend on x but does not depend on θ, and the function v will depend on θ but depends on x only through the k functions r_1(x), ..., r_k(x).

Example 1: Jointly Sufficient Statistics for the Parameters of a Normal Distribution. Suppose that X_1, ..., X_n form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown. The joint p.d.f. of X_1, ..., X_n is given by Eq. (7) of Sec. 6.7, and it can be seen that this joint p.d.f. depends on x only through the values of Σ_{i=1}^n x_i and Σ_{i=1}^n x_i². Therefore, by the factorization criterion, the statistics T_1 = Σ_{i=1}^n X_i and T_2 = Σ_{i=1}^n X_i² are jointly sufficient statistics for μ and σ². □
Suppose now that in a given problem the statistics T_1, ..., T_k are jointly sufficient statistics for some parameter vector θ. If k other statistics T_1', ..., T_k' are obtained from T_1, ..., T_k by a one-to-one transformation, then it can be shown that T_1', ..., T_k' will also be jointly sufficient statistics for θ.
Example 2: Another Pair of Jointly Sufficient Statistics for the Parameters of a Normal Distribution. Suppose again that X_1, ..., X_n form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown; and let T_1' and T_2' denote the sample mean and the sample variance, respectively. Thus,

T_1' = X̄_n   and   T_2' = (1/n) Σ_{i=1}^n (X_i − X̄_n)².

We shall show that T_1' and T_2' are jointly sufficient statistics for μ and σ². Let T_1 and T_2 be the jointly sufficient statistics for μ and σ² derived in Example 1. Then

T_1' = (1/n) T_1   and   T_2' = (1/n) T_2 − (1/n²) T_1².

Also, equivalently,

T_1 = n T_1'   and   T_2 = n(T_2' + T_1'²).
Hence, the statistics T_1' and T_2' are obtained from the jointly sufficient statistics T_1 and T_2 by a one-to-one transformation. It follows, therefore, that T_1' and T_2' themselves are jointly sufficient statistics for μ and σ². □
We have now shown that the jointly sufficient statistics for the unknown mean and variance of a normal distribution can be chosen to be either T_1 and T_2, as given in Example 1, or T_1' and T_2', as given in Example 2.
Example 3: Jointly Sufficient Statistics for the Parameters of a Uniform Distribution. Suppose that X_1, ..., X_n form a random sample from a uniform distribution on the interval (a, b), where the values of both endpoints a and b are unknown (a < b). The joint p.d.f. f_n(x | a, b) of X_1, ..., X_n will be 0 unless all the observed values x_1, ..., x_n lie between a and b; that is, f_n(x | a, b) = 0 unless min(x_1, ..., x_n) ≥ a and max(x_1, ..., x_n) ≤ b. Furthermore, for any vector x such that min(x_1, ..., x_n) ≥ a and max(x_1, ..., x_n) ≤ b, we have

f_n(x | a, b) = 1/(b − a)^n.

For any two numbers y and z, we shall let h(y, z) be defined as follows:

h(y, z) = 1   for y ≤ z,   and   h(y, z) = 0   for y > z.

For any value of x ∈ R^n, we can then write

f_n(x | a, b) = h[a, min(x_1, ..., x_n)] h[max(x_1, ..., x_n), b] / (b − a)^n.
Since this expression depends on x only through the values of min(x_1, ..., x_n) and max(x_1, ..., x_n), it follows that the statistics T_1 = min(X_1, ..., X_n) and T_2 = max(X_1, ..., X_n) are jointly sufficient statistics for a and b. □
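A short sketch (illustrative only, with invented samples) shows what joint sufficiency means here in practice: two samples of the same size that share the same minimum and maximum produce exactly the same likelihood function for (a, b), so they lead to the same inferences about the two endpoints.

```python
def uniform_likelihood(a, b, sample):
    # Joint p.d.f. of a sample from the uniform distribution on (a, b).
    if a < b and min(sample) >= a and max(sample) <= b:
        return (b - a) ** (-len(sample))
    return 0.0

x = [2.1, 3.7, 2.9, 3.2, 2.5]            # hypothetical sample
y = [2.1, 2.2, 3.6, 3.7, 3.0]            # different sample with the same min, max, and size
for a, b in [(1.5, 4.0), (2.0, 3.8), (2.1, 3.7), (0.0, 10.0)]:
    print((a, b), uniform_likelihood(a, b, x), uniform_likelihood(a, b, y))
```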
Minimal Sufficient Statistics In a given problem we want to try to find a sufficient statistic, or a set of jointly sufficient statistics for 8, because the values of such statistics summarize all the relevant information about 8 contained in the random sample. When a set of jointly sufficient statistics are known, the search for a good estimator of 8 is simplified because we need only consider functions of these statistics as possible estimators. Therefore, in a given problem it is desirable to find, not merely any set of jointly sufficient statistics, but the simplesl set of jointly sufficient statistics. For example, it is correct but completely useless to say that in every problem the n observations Xl' " . , Xn are jointly sufficient statistics. We shall now describe another set of jointly sufficient statistics which exist in every problem and are slightly more useful. Suppose that X" . . . , X" form a random sample from some distribution. Let YI denote the smallest value in the random sample, let Y2 denote the next smaUest value, let YJ denote the thlrd smallest value, and so on. In this way, Yn denotes the largest value i.n the sample,
6.8.
Jointly Sufficient Statistics
367
and Y,,- l denotes the next largest value. The random variables YI , . . . , Y" are called the order statistics of the sample. Now let YI � h � . . . .:s;; y" denote the values of the order statistics for a given sample. If we are told the values of YI " ' " Yn ' then we know that these II values were obtained in the sample. However, we do not know which one of the observations Xl Xn actually yielded the value Y" which one actually yielded the value Y2. and so on. All we know is that the smallest of the values of Xl ' X" was YI ' the next smallest value was Y2 ' and so on. l f the variables Xl' . . . ' Xn form a random sample from a distribution for which the p.d.L or the p.f. is f(x I 0). then the joint p.d.f. or joint p.f. of Xl' " . , Xn has the folJowing form: • . . . •
• . . .
•
f_n(x | θ) = ∏_{i=1}^n f(x_i | θ).   (2)

Since the order of the factors in the product on the right side of Eq. (2) is irrelevant, Eq. (2) could just as well be rewritten in the form

f_n(x | θ) = ∏_{i=1}^n f(y_i | θ).
Hence. fA x 1 8 ) depends on x only through the values of YI Y,. It follows. therefore, that the order statistics YI, . . . , Y" are always jointly sufficient statistics for 8. In other words, it is sufficient to know the set of n numbers that were obtained in the sample, and it is not necessary to know which particular one of these numbers was, for example, the value of X). In each of the examples that have been given in this section and in Sec. 6.7, we considered a distribution for which either there was a single sufficient statistic or there were two statistics which were jointly sufficient. For some distributions, however, the order statistics YI , Yn are the simplest set of jointly sufficient statistics that exist, and no further reduction in terms of sufficient statistics is possible. • . . . •
• . . .
Example
4:
Sufficient Statistics for the Puranfeter of a Cauchy Di."tribution.
Suppose that Xl ' Xn form a random sample from a Cauchy distribution centered at an unknown point 8 ( 00 < 8 < 00). The p.d.L f(x I 0) of this distribution is given by Eq. (5) of Sec. 6.6. and the joint p.d.[. f,,( x 1 8) of X,. ' X" is given by Eq. (6) of Sec. 6.6. II can be shown that the only jointly sufficient statistics that exist in this problem are the order statistics Yl' . ' " Yn or some other set of 11 statistics TI , Tn that can be derived from the order statistics by a one-to-one transformation. The details of the argument will not be given here. 0 • . . .
-
. . •
• • • •
368
Estimation
These considerations lead us to the concepts of a minimal sufficient statistic and a minimal set of jointly sufficient statistics. Roughly speaking, a sufficient statistic T is a minimal sufficient statistic if it cannot be reduced further without destroying the property of sufficiency. Alternatively, a sufficient statistic T is a minimal sufficient statistic if every function of T which itself is a sufficient statistic is a one-to-one function of T. Formally, we shall use the following definition, which is equivalent to either of the informal definitions just given: A statistic T is a minimal sufficient statistic if T is a sufficient statistic and is a funclion of every other sufficient statistic. In any problem in which there is no single sufficient statistic, minimal jointly sufficient statistics are defined in a similar manner. Thus, in Ex.ample 4, the order statistics Yl' " . , Yn are minimal jointly sufficient statistics.
Maximum Likelihood Estimators and Bayes Esllmators as Sufficient Statistics Suppose again that Xl • . . . ' Xn form a random sample from a distribution for which the p.f. or the p.d.L is f(x I Ol, where the value of the parameter 0 is unknown, and also that T = r( Xl ' X.,) is a sufficient statistic for O. We shall ___ now show that the M.LE. B depends on the observations Xl' . . . • Xn only through the statistic T. It follows from the factorization criterion presented in Sec. 6.7 that the likelihood function fn(x I Ol can be written in the form • . . .
fn ( x I O )
=
u ( x ) u [ r ( x), O ] .
The M.L.E. fJ is the value of 8 for which I,, ( x I 0) is a maximum. It follows. therefore, that b will be the value of 0 for which v[r(xl, OJ is a maximum. Since u[r(xl, OJ depends on the observed vector x only through the function r(xl, it follows that b will also depend on x only through the function r(x). Thus, the estimator fJ is a function of T = r( Xl" ' " X,,). Since the estimator lJ is a function of the observations XI ' . . . , XII and is not a function of the parameter 8, the estimator is itself a statistic. In many problems jj is actual1y a sufficient statistic. Since 8 win always be a function of any other sufficient statistic. we may now state the following result:
II the M. L. E. B is a sufficient statistic, then it is a minimal sufficient statistic.
These properties can be ex.tended to a vector parameter O. If 0 = (01 (),J is a vector of k real-valued parameters, then the M.L.E. ' s 01" " , Ok will depend on the observations XI " ' " Xn only through the functions in any set of jointly sufficient statistics. ]n many problems, the estimators 81" " , Ok themselves will form a set of jointly sufficient statistics. When they are jointly sufficient statistics, they will be minimaljointly sufficient statistics. • • • • •
6.8. Jointly Sufficient Statiatica
369
Example 5: Minimal Jointly Sufficient Statistics lor the Parameters 0/ a Normal Distribution. Suppose that Xl Xn form a random sample from a normal distribution for which both the mean #J. and the variance 0 2 are unknown. It was shown in Example 3 of Sec. 6.5 that the M.L.E.'s it and a ' are the sample mean and the sample variance. Also, it was shown in Example 2 of this section that ji. and 8 2 are jointly sufficient statistics. Hence, p. and 0- 2 are minimal jointly sufficient statistics. 0 • . . .
'
The statistician can restrict the search for good estimators of f' and a 2 to functions of minimal jointly sufficient statistics. It follows, therefore, from Exam ple 5 that if the M.L.E.'s it and 8 2 themselves are not used as estimators of IL and 0 2, the only other estimators that need to be considered are functions of ji. and
., u .
The discussion just presented for M.L.E.'s pertains also to Bayes estimators. Suppose that a parameter a is to be estimated and that a prior p.d.f. �(a) is assigned to 8. If a statistic T = r( Xl ' XII) is a sufficient statistic, then it follows from relation (6) of Sec. 6.2 and the factorization criterion that the posterior p.d.f. �(a l x) will satisfy the following relation: • . . .
�(a l xl
0:
v [ r(x), a W a l .
I t can b e seen from this relation that the posterior p.d.L of 8 will depend on the observed vector x only through the value o f r(x ). Since the Bayes estimator of 8 with respect to any specified loss function is calculated from this posterior p.d.f., the estimator also will depend on the observed vector x only through the value of r(x). In other words, the Bayes estimator is a function of T = r( XI' . . . ' X.).
Since the Bayes estimator is itself a statistic and is a function of any sufficient statistic T, we may state the following result: 11 the Bayes estimator is a sufficient statistic, then it is a minimal sufficient statistic.
EXERCISES Instructions for Exercises 1 to 4: In each exerCLse assume that the random variables Xl" ' " XI1 form a random sample of size n from the distribution specified in the exercise, and show that the statistics TI and T specified in the 2 exercise are jointly sufficient statistics. 1 . A gamma distribution for which both parameters a and P are unknown ( a > 0 and fJ > 0); TI n:_ 1 Xi and T, = l:7_1 Xi . =
Estlmallon
370
2.
A
beta distribution for which both parameters
3.
A
Pareto distribution (see Exercise
and f3
> 0); TI
and
Xo
� n 7_ 1 X, and T, � n7_1 (1 - X,).
arc unknown
ex
a and {3 are unknown (a
T2 = ni'_ l Xi -
(xo > 0
15
>0
of Sec.
5.9) for which both parameters a > 0); Tl = min(X1, . . , Xli') and
and
_
(8, () +
3), where the value of 8 is . . , Xn) and T2 = max( Xl" ' " Xn ) ' 5. Suppose that the vectors ( Xl' YI). ( X2 ' Y2 )" ' " ( Xn• Yn ) form a random sam
4.
A
uniform distribution on the interval
unknown (
-
00
(b) Show that for n ≥ 2, R(θ, δ2) ≤ R(θ, δ1) for θ > 0. (c) Show that for n ≥ 3, the estimator δ2 dominates the estimator δ1.

4. Consider again the conditions of Exercises 1 and 3. Show that there exists a constant c* such that the estimator c*Yn dominates every other estimator having the form cYn for c ≠ c*.

5. Suppose that X1, . . . , Xn form a random sample of size n (n ≥ 2) from a gamma distribution with parameters α and β, where the value of α is unknown (α > 0) and the value of β is known. Explain why X̄n is an inadmissible estimator of the mean of this distribution when the squared error loss function is used.

6. Suppose that X1, . . . , Xn form a random sample from an exponential distribution for which the value of the parameter β is unknown (β > 0) and must be estimated by using the squared error loss function. Let δ be an estimator such that δ(X1, . . . , Xn) = 3 for all possible values of X1, . . . , Xn. (a) Determine the value of the M.S.E. R(β, δ) for β > 0. (b) Explain why the estimator δ must be admissible.

7. Suppose that a random sample of n observations is taken from a Poisson distribution for which the value of the mean θ is unknown (θ > 0), and that the value of p = e^{−θ} must be estimated by using the squared error loss function. Since p is equal to the probability that an observation from this Poisson distribution will have the value 0, a natural estimator of p is the proportion p̂ of observations in the random sample that have the value 0. Explain why p̂ is an inadmissible estimator of p.
8. For any random variable X, show that |E(X)| ≤ E(|X|).

9. Let X1, . . . , Xn form a random sample from a distribution for which the p.d.f. or the p.f. is f(x | θ), where θ ∈ Ω. Suppose that the value of θ must be estimated, and that T is a sufficient statistic for θ. Let δ be any given estimator of θ, and let δ0 be another estimator defined by the relation δ0 = E(δ | T). Show that for every value of θ ∈ Ω, the M.S.E. of δ0 is not greater than the M.S.E. of δ.

10. Suppose that the variables X1, . . . , Xn form a sequence of n Bernoulli trials for which the probability p of success on any given trial is unknown (0 ≤ p ≤ 1). Suppose that T = ∑_{i=1}^n Xi. Determine the form of the estimator E(X1 | T).

11. Suppose that X1, . . . , Xn form a random sample from a Poisson distribution for which the value of the mean θ is unknown (θ > 0). Let T = ∑_{i=1}^n Xi; and for i = 1, . . . , n, let the statistic Yi be defined as follows: Yi = 1 if Xi = 0, and Yi = 0 if Xi > 0. Determine the form of the estimator E(Yi | T).

12. Let X1, . . . , Xn form a random sample from a distribution for which the p.d.f. or the p.f. is f(x | θ), where θ ∈ Ω; and let θ̂ denote the M.L.E. of θ. Suppose also that the statistic T is a sufficient statistic for θ, and let the estimator θ0 be defined by the relation θ0 = E(θ̂ | T). Compare the estimators θ̂ and θ0.

13. Consider again the conditions of Exercises 7 and 12. Determine the form of the estimator E(p̂ | T).
6.10. SUPPLEMENTARY EXERCISES

1. Suppose that X1, . . . , Xn are i.i.d. with Pr(Xi = 1) = θ and Pr(Xi = 0) = 1 − θ, where θ is unknown (0 ≤ θ ≤ 1). Find the M.L.E. of θ².

2. Suppose that the proportion θ of bad apples in a large lot is unknown and has the following prior p.d.f.:

ξ(θ) = . . .   for 0 < θ < 1,   and ξ(θ) = 0 otherwise.

Suppose that a random sample of 10 apples is drawn from the lot, and it is found that 3 are bad. Find the Bayes estimate of θ with respect to the squared error loss function.
3. Suppose that X1, . . . , Xn form a random sample from a uniform distribution with the following p.d.f.:

f(x | θ) = 1/θ   for θ ≤ x ≤ 2θ,   and f(x | θ) = 0 otherwise.

Assuming that the value of θ is unknown (θ > 0), determine the M.L.E. of θ.

4. Suppose that X1 and X2 are independent random variables, and that Xi has a normal distribution with mean biμ and variance σi² for i = 1, 2. Suppose also that b1, b2, σ1², and σ2² are known positive constants, and that μ is an unknown parameter. Determine the M.L.E. of μ based on X1 and X2.

5. Let ψ(α) = Γ′(α)/Γ(α) for α > 0 (the digamma function). Show that

ψ(α + 1) = ψ(α) + 1/α.
6. Suppose that a regular light bulb, a long-life light bulb, and an extra-long-life light bulb are being tested. The lifetime X1 of the regular bulb has an exponential distribution with mean θ, the lifetime X2 of the long-life bulb has an exponential distribution with mean 2θ, and the lifetime X3 of the extra-long-life bulb has an exponential distribution with mean 3θ. (a) Determine the M.L.E. of θ based on the observations X1, X2, and X3. (b) Let ψ = 1/θ, and suppose that the prior distribution of ψ is a gamma distribution with parameters α and β. Determine the posterior distribution of ψ given X1, X2, and X3.

7. Consider a Markov chain with two possible states s1 and s2 and with stationary transition probabilities as given in the following transition matrix P:
P = [ θ      1 − θ ]
    [ 1/4    3/4   ]
where the value of θ is unknown (0 ≤ θ ≤ 1). Suppose that the initial state X1 of the chain is s1, and let X2, . . . , Xn+1 denote the state of the chain at each of the next n successive periods. Determine the M.L.E. of θ based on the observations X2, . . . , Xn+1.

8. Suppose that an observation X is drawn from a distribution with the following p.d.f.:

f(x | θ) = . . .   for 0 < x < θ,   and f(x | θ) = 0 otherwise.

Also, suppose that the prior p.d.f. of θ is

ξ(θ) = . . .   for θ > . . . ,   and ξ(θ) = 0 otherwise.
Determine the Bayes estimator of θ with respect to (a) the mean squared error loss function and (b) the absolute error loss function.

9. Suppose that X1, . . . , Xn form n Bernoulli trials with parameter θ = (1/3)(1 + β), where the value of β is unknown (0 ≤ β ≤ 1). Determine the M.L.E. of β.

10. The method of randomized response is sometimes used to conduct surveys on sensitive topics. A simple version of the method can be described as follows: A random sample of n persons is drawn from a large population. For each person in the sample there is probability 1/2 that the person will be asked a standard question and probability 1/2 that the person will be asked a sensitive question. Furthermore, this selection of the standard or the sensitive question is made independently from person to person. If a person is asked the standard question, then there is probability 1/2 that he will give a positive response; but if he is asked the sensitive question, then there is an unknown probability p that he will give a positive response. The statistician can observe only the total number X of positive responses that were given by the n persons in the sample. He cannot observe which of these persons were asked the sensitive question or how many persons in the sample were asked the sensitive question. Determine the M.L.E. of p based on the observation X.

11. Suppose that a random sample of four observations is to be drawn from a uniform distribution on the interval (0, θ), and that the prior distribution of θ has the following p.d.f.:

ξ(θ) = . . .   for θ ≥ 1,   and ξ(θ) = 0 otherwise.

Suppose that the values of the observations in the sample are found to be 0.6, 0.4, 0.8, and 0.9. Determine the Bayes estimate of θ with respect to the squared error loss function.

12. For the conditions of Exercise 11, determine the Bayes estimate of θ with respect to the absolute error loss function.

13. Suppose that X1, . . . , Xn form a random sample from a distribution with the following p.d.f.:

f(x | β, θ) = β e^{−β(x−θ)}   for x ≥ θ,   and f(x | β, θ) = 0 otherwise,
Sampling Distributions of Estimators
7.1. THE SAMPLING DISTRIBUTION OF A STATISTIC

Statistics and Estimators

Suppose that the random variables X1, . . . , Xn form a random sample from a distribution involving a parameter θ whose value is unknown. In Sec. 6.7, a statistic T was defined as any real-valued function T = r(X1, . . . , Xn) of the variables X1, . . . , Xn. Since a statistic T is a function of random variables, it follows that T is itself a random variable and its distribution can, in principle, be derived from the joint distribution of X1, . . . , Xn. This distribution is often called the sampling distribution of the statistic T because it is derived from the joint distribution of the observations in a random sample. As mentioned in Sec. 6.8, an estimator of θ is a statistic since it is a function of the observations X1, . . . , Xn. Therefore, in principle, it is possible to derive the sampling distribution of any estimator of θ. In fact, the distributions of many estimators and statistics have already been found in previous chapters of this book. For example, if X1, . . . , Xn form a random sample from a normal distribution with mean μ and variance σ², then it is known from Sec. 6.5 that the sample mean X̄n is the M.L.E. of μ. Furthermore, it was found in Corollary 2 of Sec. 5.6 that the distribution of X̄n is a normal distribution with mean μ and variance σ²/n.

In this chapter we shall derive, for random samples from a normal distribution, the distribution of the sample variance and the distributions of various functions of the sample mean and the sample variance. These derivations will lead us to the definitions of some new distributions that play an important part in
problems of statistical inference. In addition, we shall study certain general properties of estimators and their sampling distributions.
Purpose of the Sampling Distribution

It can be seen from the discussion given in Chapter 6 that a Bayes estimator or an M.L.E. in a given problem can be determined without calculating the sampling distribution of the estimator. Indeed, after the values in the sample have been observed and the Bayes estimate of the parameter θ has been found, the relevant properties of this estimate can be determined from the posterior distribution of θ. For example, the probability that the estimate does not differ from the unknown value of θ by more than a specified number of units, or the M.S.E. of the estimate, can be determined from the posterior distribution of θ.

However, before the sample is taken, we may wish to calculate the probability that the estimator will not differ from θ by more than a specified number of units, or to calculate the M.S.E. of the estimator. It is then typically necessary to determine the sampling distribution of the estimator for each possible value of θ. In particular, if the statistician must decide which one of two or more available experiments should be performed in order to obtain the best estimator of θ, or if he must choose the best sample size in a given experiment, then he will typically base his decision on the sampling distributions of the different estimators that might be used.

In addition, as mentioned in Secs. 6.2 and 6.5, many statisticians believe that it is either inappropriate or too difficult to assign a prior distribution to a parameter θ in certain problems. Therefore, in such a problem, it is not possible to assign a posterior distribution to the parameter θ. Hence, after the sample values have been observed and the numerical value of the estimate of θ has been calculated, it would not be possible or would not be appropriate to consider the posterior probability that this estimate is close to θ. Before the sample has been taken, the statistician can use the sampling distribution of the estimator to calculate the probability that the estimator will be close to θ. If this probability is high for every possible value of θ, then the statistician can conclude that the particular estimate which he has obtained from the observed sample values is likely to be close to θ, even though explicit posterior probabilities cannot be given.
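To make the notion of a sampling distribution concrete, here is a short simulation sketch (added for illustration; the parameter value, sample size, and number of repetitions are arbitrary). It approximates the sampling distribution of the sample mean X̄n for a normal distribution and compares its variance with the theoretical value σ²/n.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, sigma, n = 5.0, 2.0, 25      # true mean, known s.d., sample size (arbitrary)
    reps = 10000                        # number of simulated samples

    # Draw many samples and compute the estimator (the sample mean) for each one.
    samples = rng.normal(theta, sigma, size=(reps, n))
    xbar = samples.mean(axis=1)

    # The empirical variance of the estimator should be close to sigma^2 / n, and
    # Pr(|Xbar - theta| <= 0.1) can be approximated by the observed frequency.
    print(xbar.var(), sigma**2 / n)
    print(np.mean(np.abs(xbar - theta) <= 0.1))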
EXERCISES

1. Suppose that a random sample is to be taken from a normal distribution for which the value of the mean θ is unknown and the standard deviation is 2. How large a random sample must be taken in order that E_θ(|X̄n − θ|²) ≤ 0.1 for every possible value of θ?

2. For the conditions of Exercise 1, how large a random sample must be taken in order that E_θ(|X̄n − θ|) ≤ 0.1 for every possible value of θ?

3. For the conditions of Exercise 1, how large a random sample must be taken in order that Pr(|X̄n − θ| ≤ 0.1) ≥ 0.95 for every possible value of θ?

4. Suppose that a random sample is to be taken from a Bernoulli distribution for which the value of the parameter p is unknown. Suppose also that it is believed that the value of p is in the neighborhood of 0.2. How large a random sample must be taken in order that Pr(|X̄n − p| ≤ 0.1) ≥ 0.75 when p = 0.2?

5. For the conditions of Exercise 4, use the central limit theorem to find approximately the size of a random sample that must be taken in order that Pr(|X̄n − p| ≤ 0.1) ≥ 0.95 when p = 0.2.

6. For the conditions of Exercise 4, how large a random sample must be taken in order that E_p(|X̄n − p|²) ≤ 0.01 when p = 0.2?

7. For the conditions of Exercise 4, how large a random sample must be taken in order that E_p(|X̄n − p|²) ≤ 0.01 for every possible value of p (0 ≤ p ≤ 1)?

7.2. THE CHI-SQUARE DISTRIBUTION
hJn-
this section we shall introduce and discuss a particular type of gamma
�istribution known as the chi-square
(X 2 ) distribution. This distribution, which is
�':Closely related to random samples from a normal distribution, is widely applied in
Lthe field of statistics; and in the remainder of this book we shall see how it is
ta'PPlied in many important problems of statistical infer�nce. In this section we �all present the definition of the
bri'atical �1"" fior r ;. 1/2
properties.
X2
distribution and some of its basic mathe...
The gamma distribution with parameters any given positive integer
a
and
f3
was defined in Sec.
5.9.
n, the gamma distribution for which a = n/2 and ' is called the X distribution with n degrees of freedom. If a random variable X has a x 2 distribution with n degrees of freedom, it follows from � Qlq. (7) of Sec. 5.9 that the p.d.f. of X for x > 0 is ii> 17� (1 ) �
tfIt �
roo. f(x) - 0 ��. 1
for x '"
o.
384
Sampling Dlatrlbutlons of Estimators
A short table of probabilities for the X2 distribution for various values of tl is given at the end of this book. It follows from the definition of the X 2 distribution, and it can be seen from Eq. (1), that the X2 distribution with two degrees of freedom is an exponential distribution with parameter 1/2 or, equivalently, an exponential distribution for which the mean is 2. Thus. the following three distributions are all the same: the gamma distribution with parameters a = 1 and fJ 1 /2. the X 2 distribution with two degrees of freedom, and the exponential distribution for which the mean is 2. '""
Properties of the Distribution If a random variable X has a X2 distribution with n degrees of freedom, it follows from the expressions for the mean and the variance of the gamma distribution, as given in Sec. 5.9, that and
E( X) � I/
Var( X)
�
2n .
(2)
Furthermore, it follows from the moment generating function given ·in Sec. 5.9 that the m.g.f. of X is
>1- ( 1 ) �
(
1
1 - 21
) "/2
for t
" . ' X" are independent and if XI has a X 2 distribution with n degrees of freedom ( i 1 , . , k ), then the sum Xl + . . . + Xk has a X2 distribution with nl + . . . + n k degrees offreedom. j
=
We shall now establish" the basic relation between the Xl distribution and the normal distribution. We begin by showing that if a random variable X has a standard normal distribution, then the randGm variable . Y = X 2 will have a x 2 distribution with onc degree of freedom. For this purpose, we shaH let f(y) and F( y) denote the p.d.f. and the d.f. of Y. Also, since X has a standard normal distribution, we shall let 0,
F( y ) � Pr( Y '" y ) �
( y l/2)
Since fly) �
Ply)
_
- Pr( X2
'"
( y l/2 ) .
y) � Pr( _y l/2 " X ", yl/2 )
_
and O.
By comparing this equation with Eq. ( I ), it is seen that the p.d.!. of Y is indeed the p.d.f. of a x' distribution with one degree of freedom. We can now combine this result with Theorem 1 to obtain the following 2 theorem, which provides the main reason that the X distribution is important in statistics.
If the random variables Xl" ' " Xk are i.i.d. and if each of these variables has a standard normal distribution, then the sum of squares xt 2 + . . + Xl has a X distribution with k degrees of freedom.
Theorem 2. .
,
EXERCISES

1. Find the mode of the χ² distribution with n degrees of freedom (n = 1, 2, . . .).

2.
Sketch the p.d.!. of the X ' distribution with n degrees of freedom for each of the following values of n. Locate the mean, the median, and the mode on each sketch. (a) n = 1 ; (b) n 2 ;-(c) II = 3; (d) n = 4. =
3. Suppose that a point ( X, Y) is to be chosen at random in the xy-planc, where X and Y are independent random variables and each has a standard normal distribution. If a circle is drawn in the xy-plane with its center at the origin. what is the radius of the smallest circle that can be chosen in order for there to be probability 0.99 that the point ( X, Y) will li� inside the circle? 4. Suppose that a point ( X, Y, Z) is to be chosc;:n at random in thrc·c-dimenM sional space, where X, Y, and Z are independent random variables and each has a standard normal distribution. What is the probability that the distance from the origin to the point will be less than I unit? 5. When the motion of a microscopic particle in a liquid or a gas is observed, it is seen that the motion is irregular because the particle collides frequently with other particles. The probability model for this motion, which is called Brownian motiori, is as rollows: A coordinate system is chosen in the liquid or gas. Suppose that the particle is at .the origin of this coordinate system at time I 0; and let ( X, Y, Z) denote the coordinates of the particle at any time I > O. The random variables X, Y, and Z are i.i.d. and each of them has =
386
Sampling Distributions of estimators
a normal distribution with mean 0 and variance 0 2t. Find the probability that at time t = 2 the particle will lie within a sphere whose center i s at the origin and whose radius is 4 0. 6. Suppose that the random variables Xl. " X" arc independem, and that each random variable Xi has a continuous d.f. F;. Also, let the random variable Y be defined by the relation Y = - 2E;I_ I logFj ( X; ). Show that Y has a X2 distribution with 2n degrees of freedom. 7. Suppose that Xl X" form a random sample from a uniform distribution on the interval (0,1), and let W denote the range of the sample, as defined in Sec. 3.9. Also, let g.(x) denote the p.d.f. of the random variable 2n(1 W), 2 and let g(x) denote the p.d.L of the X distribution with four degrees of freedom. Show that . •
• . . . '
-
lim g. ( x ) - g ( x )
"-�
for x > O.
8. Suppose that Xl' . . . • X" form a random sample from a normal distribution
with mean
9.
Il
and variance 0 2. Find the distribution of
Suppose that six random variables X" . . . , X6 form a'random sample from a standard normal distribution, and let
y � ( X, + X, + X, ) ' + ( X, + X, + X.) ' .
Determine a value of c such that the random variable cY win have a X2 distribution. ]0. If a random variable X has a X l distribution with " degrees of freedom, then the distribution of XI/2 is called a chi (X) distribution with n degrees of freedom. Determine the mean of this distribution.
7.3.
JOINT DISTRIBUTION OF THE SAMPLE MEAN AND SAMPLE VARIANCE
Independence of the Sample Mean and Sample Variance

Suppose that the variables X1, . . . , Xn form a random sample from a normal distribution with an unknown mean μ and an unknown variance σ². Then, as was shown in Sec. 6.5, the M.L.E.'s of μ and σ² are the sample mean X̄n and the sample variance (1/n) ∑_{i=1}^n (Xi − X̄n)². In this section we shall derive the joint distribution of these two estimators. We already know that the sample mean itself has a normal distribution with mean μ and variance σ²/n. We shall establish the noteworthy property that the sample mean and the sample variance are independent random variables, even though both are functions of the same variables X1, . . . , Xn. Furthermore, we shall show that, except for a scale factor, the sample variance has a χ² distribution with n − 1 degrees of freedom. More precisely, we shall show that the random variable ∑_{i=1}^n (Xi − X̄n)²/σ² has a χ² distribution with n − 1 degrees of freedom. This result is also a rather striking property of random samples from a normal distribution, as the following discussion indicates. Since the random variables X1, . . . , Xn are independent and since each has a normal distribution with mean μ and variance σ², the random variables (X1 − μ)/σ, . . .

. . . (n > 2). Show that Var(X) = n/(n − 2). Hint: To evaluate E(X²), restrict the integral to the positive half of the real line and change the variable from x to y = (x²/n)/(1 + x²/n). Then compare the integral with the p.d.f. of a beta distribution.
-
•
Pr(ii >
k
I'
+
k8 ) � 0.95.
... ..... .
, ,
398
4.
Sampling Distributions of Estimators
Suppose that the five random variables
Xl
. "
"
Xs
are i.i.d. and that each has
a standard normal distribution. Determine a constant c such that the random
variable
C ( XI
+
will have a
S. By
t
X,)
distribution.
using the table of the
t
distribution given in the back of this book,
determine the value of the integral
j2.5
dx
- 00
6.
(12 + x')"
Suppose that the random variables
Xl and X2 arc independent and that each 0 and variance cr 2, Determine the value
has a normal distribution with mean of
pr
[
]
: 4 < .
( XI + X, ) ( Xl X,) -
Hint: ( Xl - X,) '
�
2
[(
Xl -
Xl
; X' l ' ( X, - Xl ; X' l 'l . +
7.5. CONFIDENCE INTERVALS
Confidence I ntervals 'or the Mean 0' a Normal Distribution We shaU continue to assume that
Xl
• . . . '
XII
form a random sample from a
2.
Let
degrees of freedom,
and
normal distribution with unknown mean p. and unknown variance
gn _ l (x) denote the p.d.f. of the let
c
I distribution with
n-1
(J
be a constant such that
r g" _ l ( x l dx -
0) + Pr( X, < O )Pr( X, � (1/2)(1/2) + (1/2)(1/2) � 1/2.
>
0)
(4)
7.5. Confidence InlerYals
401
It follows f.-om Eq. (4) that if the values Yl = Yl and Y2 = Y2 are observed, then the interval ( Yl ' Y2 ) will be a confidence interval for 0 with confidence coefficient 1/2. However, the analysis can be carried further. Since both ol,Jse.-vations Xl and X2 must be greater than 0 - (l/2) and both must be less than 8 + (1/2), we know with certainty that y, > 8 - (1/2) and 12 < 8 + (l/2). In other words, we know with certainty that y,
- ( 1/2) < 8 < y,
+
(1/2).
(5 )
Suppose now that ( y, - y,) ;;, 1/2. Then y, � y, - (1/2) and it follows from Eq. (5) that y, < 8. Moreover, since y, + (1/2) � y" it also follows from Eq. (5) that 8 < y,. Thus, if (y, - y,) ;;, 1/2, then y, < 8 < y,. In other words, if (Y2 - Yt) � 1 /2, then we know with certainty that the confidence interval ( Yl' Y2 ) includes the unknown value of 8, even though the confidence coefficient of this interval is only 1/2. Indeed, even when ( Yz - Yl) < 1/2, the closer the value of (Y2 - Yl) is to 1/2, the more certain we feel that the interval (y" y,) includes O. Also, the closer the value of ( y, - y,) is to 0, the more certain we feel that the interval (y" y,) does not include O. However, the confidence coefficient necessarily remains 1/2 and does not depend on tbe observed values Yl and Y2' ]n the next section we shall discuss Bayesian methods for analyzing a random sample from a normal distribution for which both the mean J.L and the variance (1 2 are unknown. We shall assign a joint prior distribution to J.I. and 0 2, and shall then calculate the posterior probability that I-' belongs to any given interval (a, b). It can be shown [see, e.g., DeGroot (1970)[ that if the joint prior p.d.L of I-' and 0 2 is fairly smooth and does not assign high probability to any particular small set of values of p. and 0 2, and if the sample size n is large, then the confidence coefficient assigned to a particular confidence interval (a, b) for the mean p. will be approximately equal to the posterior probability that p. lies in the interval (a, b). An example of this approximate equality is ineluded in the next section. Therefore, under these conditions, the differences between tbe results obtained by the practical application of methods based on confidence intervals and methods based on prior probabilities will be small. Nevertheless the philosophical differ ences between these methods will persist.
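For readers who want to compute such intervals, the following Python sketch (added here for illustration; the data vector is an arbitrary example) builds a confidence interval for the mean of a normal distribution with unknown variance by using the appropriate quantile of the t distribution with n − 1 degrees of freedom.

    import numpy as np
    from scipy import stats

    x = np.array([4.2, 3.9, 5.1, 4.7, 4.4, 3.8, 4.9, 4.6])   # arbitrary sample values
    n = len(x)
    gamma = 0.95                                             # confidence coefficient

    xbar = x.mean()
    s = x.std(ddof=1)                 # square root of sum (xi - xbar)^2 / (n - 1)
    c = stats.t.ppf((1 + gamma) / 2, df=n - 1)

    half_width = c * s / np.sqrt(n)
    print(xbar - half_width, xbar + half_width)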
EXERCISES 1 . Suppose that a random sample of eight observations is taken from a normal distribution for which both the mean p. and the variance (J 2 are unknown; and that the observed values are 3 1 , 3.5, 2.6, 3.4, 3.8, 3.0, 2.9, and 2.2. Find the shortest confidence interval for J.I. with each of the following three confidence coefficients: (a) 0.90, (b) 0.95, and (c) 0.99. .
402
Sampling Distributions 01 Estimators
2. Suppose that Xl" ' " X., form a random sample from a normal distribution for which both the mean I-' and variance a 2 are unknown; and let the random variable L denote the length of the shortest confidence interval for p. that can he constructed from the observed values in the sample. Find the value of E(L2) for the following vaJues of the sample size n and the confidence coefficient y : II
- 5 , y - 0.95. (a) (b) n - 10, y - 0.95. (c) n - 30, y - 0.95.
(d) II - 8, y - O.qO. (e) n - 8, y - 0.95. (f) II - 8, y - 0.99.
,
3. Suppose that Xl ' Xn form a random sample from a normal distribution for which the mean p. is unknown and the variance a 2 is known. How large a random sample must be taken in order that there will be a confidence interval for p. with confidence coefficient 0.95 and length less than O.Ola? 4. Suppose that Xl" ' " Xn form a random sample from a normal distribution for which both the mean JJ. and the variance CJ 2 are unknown. Describe a 2 method for constructing a confidence interval for 0 with a specified con· fidence coefficient y (0 < y < 1). Hint: Determine constants c 1 and "2 such that • . . .
5. Suppose that Xl. " ' , X" fonn a random sample from an exponential distri· bution with unknown mean IJ.. Describe a method for constructing a con· fidence interval for ;;. with a specified confidence coefficient y (0 < y < 1). Hint: Determine constants ci and C such that Pr[c, < (1/,Il)E7_1X.. < e21 2 -
y.
*7.6. BAYESIAN ANALYSIS OF SAMPLES FROM A NORMAL DISTRIBUTION

The Precision of a Normal Distribution

Suppose that the variables X1, . . . , Xn form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown. In this section we shall consider the assignment of a joint prior distribution to the parameters μ and σ², and shall study the posterior distribution that is then derived from the observed values in the sample.
2
,
· ._-
--� -_._--
-
.---.�
_.-
_.
-_._-
....---
. . ----
- . - _ . . _--- _ . _ ...-
-----
The precision r of a nonnal distribution is defined as the reciprocal of the variance; that is, 'T 1/0 2. In a Bayesian analysis of the type to be discussed in this section, it is convenient to specify a normal distribution by its mean p. and its precision r, rather than by its mean and its variance. Thus, if a random variable has a normal distribution with mean p. and precision 'T, then its p.d.L f(x I p., ) is specified as fol]ows, for 00 < x < 00: -
'T
-
(1 ) Similarly. if Xl X" fonn a random sample from a normal distribution with mean J.L and precision 'T, then their joint p.d_L j,,(x I IL, r) is as follows. for - 00 < Xi < 00 (i = 1, n): • . . .
'
. . . •
I
(2 )
A Conjugate Family of Prior Distributions We shall now describe a conjugate family of joint prior distributions of JJ. and 'T. We shall specify the joint distribution of I' and T by specifying botb tbe conditional distribution of JL given T and ' the marginal distribution of 'T. ]n particular, we sha11 assume that the conditional distribution of JL for any given value of T is a normal distribution for which the precision is proportional to the given value of T. and also that the marginal distribution of 'T is a gamma distribution. The family of all joint distributions of this type is a conjugate family of joint prior distributions. If the joint prior distribution of p. and T belongs to this family. then for any possible set of observed values in the random sample, the joint posterior distribution of JL and 'r will also belong to the family_ This result is established in the next theorem.
Theorem 1 . Suppose that Xl' X,. form a random sample from a normal distribution for which both the mean '" and the. precision T are unknown ( - 00 < Jl < 00 and T > 0)_ Suppose also that th.e joint prior distribution of I-" and 'T is as follows: The conditional distribution of p. given 'T is. a normal distribution wilh mean 11 0 and precision A OT ( 00 < 1-"0 < 00 and A O > 0); and the marginal distribution of T is a gamma distribution with parameters ao and f30 (ao > 0 and f30 > 0). Then the joint posterior distribution of Jl and T. given that XI xI (i = 1 , . . . , n), is as follows: The conditional distribution of J.L given 'T is a normal distribution with mean JJ.l and precision A1T, where . . . •
-
."",
p. t
=
AoJlo + nx" AO + n
and A,
=
Ao + n;
( 3)
404
Sampling Distributions of EsUmators
and the marginal distribution of T is a gamma distribution with parameters al and {31' where
" 1 " _ )2 and PI - Po + 2 '-- ( x, - x " i-I
+
n ", o ( x_ " - )2 2( '\ ) I o+n
1'0
(4)
Proof. The joint prior p.dJ. �(I', T) of I" and T can be found by multiplying the conditional p.d.f. �I{.u I T) of p. given ..,. by the marginal p.d.L �2( T) of T. By the conditions of the theorem we have. for - 00 < po < 00 and T > 0, ( 5) and
(6) A constant factor involving neither Jl nor T has been dropped from the right side of each of these r�lations. The joint posterior p.d.f. HI", T I x) of I' and T satisfies the relation
(7 ) If we use the definitio\of 1' 1 given in Eq. (3), then the following identity can be. established:
\
L ( X i - 1') 2 + h o t "
- 1' 0 ) 2 - ( ho + n ) ( 1' -
1"1 )' +
"
L ( X i - x,,) ' i-I
(8 )
It now follows from (2), (5), and (6) that the posterior p.dJ. �(I', T I x) can be written in the form
where hI' "" and P I are defined by Eqs. (3) and (4). When the expression inside the braces on the right side of Eq. (9) is regarded as a function of Jl for a fixed value of T, this expression can be. recognized as being (except for a constant factor) the p.d.f. of a normal distribution with mean "" 1 and precision AtT. Since the variable p. does not appear elsewhere on the right
7.6. Bayesian Analysis of Samples from a Normal Distribution
405
side of Eq. (9), it follows Ihat this p.d.f. must be the conditional posterior p.d.f. of IL given T. ]t now follows in turn that the expression outside the braces on the right side of Eq. (9) must be proportional to the marginal posterior p.d.f. of 7". This expression can be recognized as being (except for a constant factor) the p.dJ. of a gamma distribution with parameters al and 131, Hence. the joint posterior distribution of J1. and 7" is as specified in the theorem. 0
If the joint distribution of μ and τ belongs to the conjugate family described in Theorem 1, then it is said that μ and τ have a joint normal-gamma distribution. By choosing appropriate values of μ0, λ0, α0, and β0, it is usually possible in a particular problem to find a normal-gamma distribution which approximates an experimenter's actual prior distribution of μ and τ sufficiently well. It should be emphasized, however, that if the joint distribution of μ and τ is a normal-gamma distribution, then μ and τ are not independent. Thus, it is not possible to use a normal-gamma distribution as a joint prior distribution of μ and τ in a problem in which the experimenter's prior information about μ and his prior information about τ are independent and he wishes to assign a joint prior distribution under which μ and τ will be independent. Although this characteristic of the family of normal-gamma distributions is a deficiency, it is not an important deficiency because of the following fact: Even if a joint prior distribution under which μ and τ are independent is chosen from outside the conjugate family, it will be found that after just a single value of X has been observed, μ and τ will have a posterior distribution under which they are dependent. In other words, it is not possible for μ and τ to remain independent in the light of even one observation from the underlying normal distribution.
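The updating formulas (3) and (4) are easy to apply in practice. The following Python sketch (an added illustration; the prior hyperparameters and data are arbitrary choices) computes the posterior constants μ1, λ1, α1, and β1 from a sample.

    import numpy as np

    def normal_gamma_update(x, mu0, lam0, a0, b0):
        """Update the normal-gamma hyperparameters, following Eqs. (3) and (4)."""
        x = np.asarray(x, dtype=float)
        n, xbar = len(x), x.mean()
        mu1 = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam1 = lam0 + n
        a1 = a0 + n / 2.0
        b1 = (b0 + 0.5 * np.sum((x - xbar) ** 2)
              + (n * lam0 * (xbar - mu0) ** 2) / (2.0 * (lam0 + n)))
        return mu1, lam1, a1, b1

    # Arbitrary prior and data, purely for illustration.
    print(normal_gamma_update([4.8, 5.6, 5.1, 4.9], mu0=5.0, lam0=1.0, a0=2.0, b0=1.0))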
,".
f
,
"
The MargInal DistrIbution 01 the Mean When the joint distribution of J.l and 'T is a normal-gamma distribution of the type described in Theorem 1. then the conditional distribution of J.l for a given value of 7" is a certain normal distribution and the marginaJ distribution of T is a gamma distribution. It is not clear from this specification. however. what the marginal distribution of Jl will be. We shaH now derive this marginal distribution. Suppose that the joint distribution of Jl and 7" is the prior normal-gamma distribution described in Theorem 1 and specified by the constants J.lo. A O ' ao• and Po. We shall again let �(I-', T) denote the joint p.d.f. of I' and T, let �1(1' I T) denote the conditional p.d.L of I' given T, and leI �2(T) denote the marginal p.d.f. of T. Then, if �3(1') denotes the marginal p.d.L of I-' for - 00 < I-' < 00, (10)
Sampling Distributions of Estimator.
'1UO
If we make use of the proportionality symbol, then it follows from Eqs. ( and (6) that, for 00 < p. < 00, -
(1 This integral was evaluated in Eq. (8) of Sec. 5.9. By dropping a factor that d", not involve ft. we obtain the relation
or
( 13 We shall now define a new random variable Y by the relation Y
=
( A#:0 )
'/2
( p.
-
p. o ) .
(14
From this relation,
Then the p.d.f.
g(y) of Y will be specified by the following equation:
( Po ) g( y ) = A
1/2
tJ
0"'0
( g (y ) a: 1 + 2Y )
[( Po ) A 0"'0
1/2
y + p.o
It follows, therefore, from Eq. (13) that 2
"'0
- (2 ao + l)/2
]
for
-
00 < y < 00 .
(16)
( 17)
The p.d.f. of the , distribution with n degrees of freedom was given in Eq. (4) of Sec. 7.4. The p.d.f. in Eq. (17) can now be recognized as being (except for a constant factor) the p.d.f. of a I distribution with 20:0 degrees of freedom. Thus, we have established the fact that the distribution of Y, which is the linear function of p. given by Eg. (14), is a I distribution with 2ao degrees of freedom. In
7.6. Bayesian Analysis 01 Samples from a Normal Dislribution
407
'other words, the distribution of I' can be obtained from a ( distribution by ,\- iranslating the t distribution so that it is centered at lAo rather than at 0, and also cbanging the scale factor. The mean and the variance of the marginal distribution of I' can easily be obtained �rom the mean �nd the varia�ce of the I distribution th�t are given in . . t Sec. 7.4. Smce Y has a I d'stnbutlOn WIth 2ao degrees of freedom, It follows from Sec. 7.4 that E ( Y ) 0 if ao > 1/2 and that Var( y ) ao/(ao 1) if ao > 1. Therefore, we can obtain the following results from Eq. (IS). If "0 > 1/2, then £(1') � 1'0' Also, if ao > I, then
,\
-
Var( l' )
�
-
�
f30 . A o ( ao _ 1 )
(18)
Furthermore, the probability that I' lies in any specified interval can , in principle, be obtained from a table of the I distribution. It should be noted, however, that when the I distribution was introduced in Sec. 7.4, the number of degrees of freedom had to be a positive integer. In the present problem, the �umber of degrees of freedom is 2ao. Since ao can have any positive value, 2ao can be equal to any positive number and need not necessarily be an integer. When the joint prior distribution of p, and 'T is a normal-g3:mma distribution in which the constants are 1' 0' Ao, ao, and f3o, it was established in Theorem 1 that the joint posterior distribution of J.L and 'T is also a normai-gamma distribu .' tion in which the constants 1'" A" a,. and f3,. are given by Eqs. (3) and (4). It follows, therefore. that the marginal posterior distribution of I' can also be · reduced to a t distribution by using a linear transformation, like that in Eq. (14), I' with the constants of the posterior distribution. Hence, the mean and the variance U of this marginal posterior distribution, and also the probability that I' lies in any specified interval, can again be obtained from the corresponding I distribution. •
·
·
·
·
f
! A Numerical Example t
I: General Procedure. To illustrate the concepts that have been developed in this �. section, we shall consider again a normal distribution for which both the mean f and the precision are unknown. We shall now suppose Ihat we wish 10 l assign a joint normal-gamma prior distribution to I' and such that E(I'-) 10, �
7
7
�
Var(l') 8, E(7) � 2, and Var( 7 ) � 2. t We shall first delermine the values of the conslants 1'0' Ao, ao, and f30 for r which the prior distribution will satisfy these conditions. Since the marginal �. distribution of 'T is a gamma distribution with parameters ao and flo, we kn9w that E( 7 ) � lXo/f3o and Var(7) � ao/f3J. It follows from the given conditions I that ao � 2 and f30 � 1. Furthermore, we know that E(I') � 1'0 and that the I I
�
408
Sampling Distributions of EaUmators
value of Var(l'} is given by Eq. (18). Therefore, it follows from the given conditions that 1' 0 = 10 and AD = 1/8. The joint prior normal-gamma distribu tion of IL and 'T is completely specified by these values of 1'0' '\0' ao. and Po . Next, we shall determine an interval for J.1 centered at the point Po = 10 such that the probability that I' lies in this interval is 0.95 . Since the random variable Y defined by Eq. (14) has a I distribution with 2ao degrees of freedom, it follows that for the numerical values just obtained, the random variable (1/2)(1' - 10) has a I distribution with 4 degrees of freedom. From tbe table of the I distribution. it is found that pr
[ - 2.776 < ; ( I' - 10) < 2.776] = 0.95.
(19)
An equivalent statement is that Pr(4.448 < I'
a and I:.�_lPj 1.
total population who belong to stratum
=
We are interested in estimating the mean value p. of a certain characteristic among the total population. Among the individuals in stratum acteristic has mean the value of
o?
f.Li
and variance
0/.
i
this char
where the value of f.L i is unknown and
is known. Suppose that a
stratified sample is
taken from the
population as foUows: From each stratum j, a random samp'le of
n, individu
als is taken, and the characteristic is measured for each of these individuals. The samples from the
k
strata are taken independently of each other. Let
Xi
ni measurements in the sample from stratum i. p. ... E7_ 1 Pif.Li' and show also that jJ. = r..�_ I PjXj is an unbi�
denote the average of the (a) Show that
ased estimator of �.
(b) Let n
=
E�_ln J denote the total number of observations in the k
For a fixed value of
n, find the values of nl. _
. . •
samples.
n il; for which the variance
of A will be a minimum. 12. Suppose that
Xl' _ . . , X" form a random sample from a distribution for which
the p.d.f. or the p.f. is f(x 1 8 ), where the value of the parameter 8 is , r( Xl' _ . . • Xn ) is a sufficient statistic for unknown; and that the statistic T =
O.
Let 6 ( Xj,
• • •
, X"J denote an unbiased estimator of 8, and let Bo( T ) be
6.9. Bo( T ) is also an unbiased estimator of O.
another estimator which is defined by Eq. (3) of Sec. (a) Show that
(b) Show that Var.(Bo) '; Var. ( B ) for every possible value of
Hint:
Use Theorem
13. Suppose that
1
of Sec.
O.
6.9.
Xl Xn form a random sample from a uniform distribution (0. 8). where the value of the parameter 0 is unknown; and let • . . .
'
on the interval Y"
=
of
O.
max(X\, . . . , X" ). Show that [(n
+
l)/n)Y" is an unbiased estimalOr
." "
420
Sampling Distributions 01 Estimators
14. Suppose that a random variable X can take only the five values x
=
with the following probabilities:
1(1 1 0 ) 1(4 1 0 )
� �
0',
1(2 1 0 ) � 0 '(1 - 0),
0(1 - 0 ) ',
1(5 1 8 )
�
1, 2, 3, 4, 5
1(3 1 0 ) � 20(1 - 0),
(1 - 0)'.
Here the value of the parameter 8 is unknown (0 � 0 .:0:;;: 1). (a) Verify that the sum of the five given probabilities is 1 for any given value of O. (b) Consider an estimator 8,.( X ) that has the folJowing form: Show that for any given constant c, 8c< X) is an unbiased estimator of fJ. (c) Let 80 be a number such that 0 < 80 < 1. Determine a constant Co such that when 8 = 60 , the variance of /)co( X) is smaller than the variance of �c< X ) for any other value of c.
·7,8. FISHER INFORMATION De'inition and Properties o. Fisher In'ormatlon
The Fisher In/onnation in a Single Random Variable. In this section we shall' introduce a concept, called the Fisher information, which enters various aspects of the theory of statistical inference; and we shall describe a few uses of this concept. Consider a random variable X for which the p.r. or the p.d.f. is I(x I 0). It is assumed that f(x I 6 ) involves a parameter 6 whose vaJue is unknown but must' lie in a given open interval g of the real line. Furthennore, it is assumed that X takes values in a specified sample space S and that f(x I 6) > 0 for each value of x E S and each value of 0 E n. This assumption eliminates from consideration the uniform distribution on the interval (0, 6). where the value of 0 is unknown, because for that distribution I(x 1 0) > 0 only when x < 0 and I(x I 0) = 0 when x > fJ. The assumption does not eliminate any distribution where the set of values of x for which I(x I 0) > 0 is a fixed set that does not depend on O. Next, we define A(X 1 0) as follows:
A ( x I O) It
�
10g/( x I 0 ) .
(1)
is assumed that for each value of x E S, the p.f. or p.d.f. I(x I 0) is a twice
7.8. Fisher Information
421
differentiable function of 8, and we let A'(x I O ) =
a A(x I O) ao
and
A"( x I O )
=
a2 A(xI 0). ao 2
(2)
The Fisher information l( 8) in the random variable X is defined as follows: 1(0)
=
EO { [ A'( X I 0 )] 2 } .
(3)
Thus, if f(x I 0) is a p.d.f., then 1( 0 )
=
f
[A'( x I O ) ] 'f( x I O ) dx.
(4)
s
If f(x I 0 ) is a p.L, the integral in Eq. (4) is replaced by a sum over the points in S. Tn the discussion that follows, we shall assume for convenience that f(x I 0 ) is a p.d.!. However, all the results hold also when f(x I 0) is a p.!. We know that Is f(x I O ) dx = 1 for every value of 0 (l. Therefore, if the integral on the left side of this equation is differentiated with respect to 0, the result will be O. We shall assume that we can reverse the order in which we perform the integration with respect to x and the differentiation with respect to 8, and will still obtain the value O. In other words, we shall assume that we can take the derivative " inside the integral sign" and· obtain E
f f'( x I O ) dx
=
0
for 0 E (l.
(5)
s
Furthermore, we shall assume that we can take a second derivative with respect to o " inside the integral sign" and obtain
f r(x I O ) dx = 0 s
for 0 (l.
(6)
E
We can now give two additional forms for the Fisher information 1(0). First, since A'(X I 0 ) - f'(x I O )lf(x I 0 ) , then •
E. [ A'( X I O ) ]
=
f A'(X I O )f(x I O ) dx = f f'(X I O ) dx. s
s
Hence, it follows from Eq. (5) that E. [ A'( X I O ) ]
=
O.
(7)
422
Sampling Distributions of EsUmatora
Since the mean of I(D)
=
N( X I D) is 0, it follows from Eq. (3) that (8)
Var, [ h'( X I D)J .
Also, it should be noted that ;>""(x I 0 )
= j( x I e)f"( x I D ) - [rex I D)J2 [j( x I D) ]2 = r(x I D) - [h'(x I D)]'. j(x I D)
(9)
Therefore, E, [ ;>"" ( X I D ) J
=
! r(x I D) dx - / ( 0 ) .
(10)
s
(10) and (6) that I(D) - -Eo[h"( XI D)].
It follows from Eqs .
(11)
In many problems, it is easier to determine the value of from
I(D) from Eq. (11) than
Eq. (3).
Example 1: The BemouJli Distribution. Suppose that X has a Bernoulli distribu
(0
1).
tion for which the parameter P is unknown < P < We shall determine the Fisher information I( in X. In this example can have only the two values and l . For or 1,
p) X
;>,, ( x l p ) Hence,
=
log j ( x l p )
0
=
x log p + ( 1 - x )log(1 - p l.
N(x l p ) = p - 11 -- px x
and
h"(x lp) -
-[�p2
+
1
-x (1 p) 2 . 1
_
E(X) = p, the Fisher information is 1 -'- ' � "",O, I( p) = - E [ N' ( X I p )] = : + -:--=1 p p(1 p)
Since
_ _
x=0
7.8. Fisher Information
423
In this example it can be readily verified that the assumptions represented by Eqs. (5) and (6) are satisfied. Indeed, since X can take only the two values 0 and 1, the integrals in Eqs. (5) and (6) reduce to summations over the two values x = 0 and x = 1. Since it is always possible to take a derivative "inside a finite summation" and to differentiate the sum term by term, Eqs. (5) and (6) must be satisfied.
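As a numerical cross-check (an added sketch), the Fisher information I(p) = 1/[p(1 − p)] obtained in Example 1 can be recomputed directly from the definition I(p) = E{[λ′(X | p)]²} by summing over the two possible values of X.

    p = 0.3                                  # any value in (0, 1)

    def lambda_prime(x, p):
        # derivative of log f(x | p) = x log p + (1 - x) log(1 - p) with respect to p
        return x / p - (1 - x) / (1 - p)

    # E{[lambda'(X | p)]^2} computed as a sum over x = 0, 1 with Bernoulli weights.
    info = sum(f * lambda_prime(x, p) ** 2 for x, f in [(0, 1 - p), (1, p)])
    print(info, 1 / (p * (1 - p)))           # both equal 1/(p(1 - p))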
x
Suppose that X has a normal distribution for which the mean p. is unknown ( - 00 < p. < 00) and the variance a 2 is known. We shaH determine the Fisher information J(p.) in X. < 00, For - 00
0), tben it can be shown that tbe variance of any unbiased estimator of (J based on the single observation X is strictly greater than 1j/(0) for every value of (J > 0 (see Exercise 8). On the other hand, in many standard estimation problems there do exist efficient estimators. Of course, the estimator which is identically equal to a constant is an efficient estimator of that constant, since the variance of this estimator is O. However, as we shall now show, there are often efficient estimators of more interesting functions of 8 as well. There will be equality in the inequality (23) and, hence, there will be equality in the information inequality (24) if and only if the estimator T is a linear function of h'n(X 1 8). In other words, T will be an efficient estimator if and only if there exist functions u(O) and v(8) which may depend on 0 but do not depend on the observations Xl XII and which satisfy the relation • . . .
T
=
'
u(O)h'n ( X 1 0 ) + v(O).
(25)
It is possible that the only efficient estimators in a given problem will be constants. The reason is as follows: Since T is an estimator, it cannot involve the parameter O. Therefore, in order for T to be efficient, it must be possible to find functions u(O) and v(8) such that the parameter 8 will actually be canceled from the right side of Eq. (25) and the value of T will depend only on the observations Xl' ' X" and not on 8. . . .
Example 3: Sampling from a Poisson Distribution. Suppose that
XI " ' " Xn form a random sample from a Poisson distribution for which the mean 8 is unknown (8 > 0). We shall show that Xn is an efficient estimator of 8. The joint p.r. of Xl' ' Xn can be written in the form . . .
7.8. Fisher Information
427
Therefore. "
A,,( X I O )
=
-
n O + nX"log O - L 10g( Xj !) i-I
and
(26) If we now let
x"
=
u(O) = O/n and u(O) = 0, then it is found from Eq. (26) that
u ( O ) A',, ( X I O )
+
u(O).
Since the statistic X̄n has been represented as a linear function of λ′n(X | θ), it follows that X̄n will be an efficient estimator of its expectation θ. In other words, the variance of X̄n will attain the lower bound given by the information inequality, which in this example will be θ/n (see Exercise 2). This fact can also be verified directly.
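A brief simulation (an added sketch with arbitrary values of θ, n, and the number of repetitions) confirms that the variance of X̄n for a Poisson sample is indeed close to the bound θ/n given by the information inequality.

    import numpy as np

    rng = np.random.default_rng(2)
    theta, n, reps = 3.0, 50, 20000

    xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)
    print(xbar.var(), theta / n)    # empirical variance vs. the bound theta / n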
Unbiased Estimators with Minimum Variance Suppose that in a given problem a particul"ar estimator T is an efficient estimator of its expectation m(O), and let T, denote any other unbiased estimator of m(O). Then for every value of 0 E 0, Var,(T) will he equal to the lower bound provided by the information inequality and Var9(Tt) will be at least as large as that lower bound. Hence, Var,(T) .; Var,(T,) for 0 E O. In other words, if T is an efficient estimator of m(O), then among all unbiased estimators of m(O), T will have the smallest variance for every possible value of B. In particular, it was shown in Example 3 that Xn is an efficient estimator of the mean 0 of a Poisson distribution. Therefore, for every value of 0, Xn has the smallest variance among all unbiased estirmitors of 8. This discussion estab· lishes a result which had been stated without proof in Sec. 7.7.
(J >
Properties of Maximum Likelihood Estimators lor Large Samples Suppose that Xl ' Xn form a random sample from a distribution for which the p.d.f. or the p.r. is [(x 1 0), and suppose also that [(x 1 0 ) satisfies conditions similar to those which were needed to derive the inrormation inequality. For any given sample size n, let b" denote the M.L.E. of O. We shall show that if n is large, then the distribution of 8" will be approximately a normal distribution with mean 0 and variance 1/[nl( O)J. • . . .
428
Sampling Dlatributlona of eatimators
Asymptotic Distribution 0/
variable
N"( X I 0).
Since
"
an
Efficient Estimator.
A"(X I 0 ) � L7_1A( X, I O),
Consider first the random then
(27) Furthermore, since the n random variables Xl " ' " Xn are i.i.d., the n random variables A'(X, 1 0), . . . . A'(X" I 0 ) will also be Li.d. We know from Eqs. (7) and (8) that the mean of each of these variables is 0 and the variance of each is [(0). Hence, it follows from the central limit theorem of Lindeberg and Levy that the asymptotic distribution of the random variable N,( X I 0 )/[nI(0)]'/2 will be a standard normal distribution. Now suppose that an estimator T is an efficient estimator of 8. Then E,(T) � 0 and Var.(T) � l/[nI(O)]. Furthermore, there must exist functions u(O) and v(O) which satisfy Eq. (25). Since the random variable N"(X I 0) has mean 0 and variance nI(O), it follows from Eq. (25) that E.(T) � v(O) and Var.(T) � [u(O)] 'nI(O). When these values for the mean and the variance of T are compared with the values we gave first, it is found that v(O) � 0 and lu(O)1 � l/[n/(O)]. To be specific, we shall assume that u(O) � l/[nl(O)], al though the same conclusions would be obtained if u(8) � - l/[nl(O)]. When the values u(O) � l/[nI(O)] and v(o) � 0 are substituted into Eq. (25), we obtain
[nl( 0)] 1/2( T - 0 ) �
N,, ( X I O) [ nI( O)] 1/2 '
(28)
We have already shown that the .asymptotic distribution of the random variable on the right side of Eq. (28) is a standard normal distribution. Therefore, the asymptotic distribution of the random variable on the left side of Eq. (28) is also a standard normal distribution. It follows from the result just presented tha t if the M.L.E. Bn is an efficient estimator of 8 for each value of 11 , then the asymptotic distribution of [nl(O )]l/2(O" - 0) will be a standard normal distribu tion. However. it can be shown that even in an arbitrary problem in which 0" is not an efficient estimator, [nl(0)] 1/2(1'" - 0) will have this same asymptotic distribution under certain conditions. Without presenting all the required condi· tions in full detail, we can state the following result. Suppose that in an arbitrary problem the M.L.E. 0" is determined by solving the equation A',,(x I tJ ) = 0; and in addition both the second and third derivatives A'�(x I 8) and A;;' (X 1 8) exist and satisfy certain regularity conditions. Then the asymptotic distribution of [11/(0)]1/2(0. - 0) will be a standard normal distribu tion. The proof of this result is beyond the scope of this book and will not be given here. A symptotic Distribution 0/ an M.LE.
7.S. Fisher Information
429
In practical terms, the foregoing result states that in most problems in which the sample size n is large and the M.L.E. On is found by differentiating the likelihood function i.( x I 0 ) or its logarithm, the distribution of [nI( 0)]'/'( 0. - 0) will be approximately a standard normal distribution. Equivalently, the distribu tion of 0" will be approximately a normal distribu �ion with mean (J and variance l/Inl(O»). Under these conditions it is said that 0,. is an asymptotically efficient
estimator.
Example 4: Estimating the Standard DevIation 0/ a Normal Distribution. Suppose that Xl X" form a random sample from a normal distribution for which the mean is 0 and the standard deviation a is unknown ( 0" > 0). It can be shown that the M.LE. of 0 is • . . . •
a
-
[! f:. ] n
i- I
Xi'
'!' .
Also, it can be shown (see Exercise 3) that the Fisher information in a single observation is 1(0) = 2/02• Therefore, if the sample size n is large, the distribu tion of a wiU be approximately a normal distribution with mean (1 and variance
. '/(2n).
0
Another general property of the M.L.E. 0" pertains to making inferences about a parameter 0 from the Bayesian point of view. Suppose that the prior distribution of 0 is represented by a positive and differentiable p.d.r. over the interval n, and that the sample size n is large. Then under conditions similar to the regularity conditions that are needed to assure the asymptotic normality of the distribution of 0", it can be shown that the posterior distribution of 0, after the values of Xl" ' . ' X" have been observed, will be approximately a normal distribution with mean On and variance 1/[ nI( 8,,)].
The Bayesian Point of View.
5: The Posterior Distribution 0/ the StaN/art! Deviation. Suppose again that Xl' . . . ' X,. form a random sample from a normal distribution for which the mean is 0 and the standard deviation a is unknown. Suppose also that the prior p.d.L of a is a positive and differentiable function for a > 0 and that the sample size II is large. Since /(0) = 2/a2, it follows that the posterior distribution of a will be approximately a normal distribution with mean a and variance a 2/(2n), where a is the M.L.E. of a calculated from the observed values in the sample. 0
Example
The Delta Method Suppose that Xl • . . . ' XII form a random sample from a distribution for which the p.d.L or the p.L is I(x I 0). where again the parameter 8 must lie in a given interval n of the real line. Suppose that T" is an estima tor of 8 which is based on ,
430
Sampling Distributions of Estimators
-
the observations Xl Xn and has the following property: For some positive 0) is a standard function b(O), the asymptotic distribution of [nb(O)]'/2Cr" normal distribution. In other words, suppose that for a large sample, the distribu tion of T,/ is approximately normal with mean 8 and variance [nb(8)] -1. Now suppose that it is desired to estimate the function a(8), where is a differentiable function of 0 such that a'(O) '" 0 for every 0 E (l. It is natural to consider using the estimator We shall detennine the asymptotic distribu tion of this estimator by a method known in statistics as the delta method. It folJows from the asymptotic distribution of that, for a large sample, Tn will be close to 0 with high probability. Hence, 0 will be small with high probability. Therefore, we shall represent the function as a Taylor series in 0, and we shall ignore all the terms involving (T" 0)' and higher powers. Thus, • . . . •
a(T,/).
T" aCT") a(O) a ( O ) T �
'
+
a
TnT" a(Tn) -
( " - 0)
(29)
and
[n��r�r [ aC T") - a( O )]
� [nb (O)] '/2( T"
-
0).
(30)
We can conclude that the asymptotic distribution of the left side of Eq. (30) will be a standard normal distribution, since that is the asymptotic distribution of the right side. In other words, for a large sample, the distribution of a(Tn) will be approximately normal with mean a(θ) and variance [a′(θ)]²/[n b(θ)].
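The following Python sketch (an added illustration with arbitrary parameter values) checks the delta method numerically in a simple case: Tn = X̄n is computed from an exponential distribution with mean θ, so that b(θ) = 1/θ², and the function a(t) = 1/t is applied; the simulated standard deviation of a(Tn) is close to {[a′(θ)]²/[n b(θ)]}^{1/2} = 1/(θ √n).

    import numpy as np

    rng = np.random.default_rng(1)
    theta, n, reps = 2.0, 200, 20000          # exponential mean, sample size, repetitions

    xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)   # Tn for each sample
    est = 1.0 / xbar                                             # a(Tn) = 1 / Tn

    # Delta method: a(Tn) is approximately normal with mean 1/theta and
    # variance [a'(theta)]^2 * (theta^2 / n) = 1 / (n * theta^2).
    print(est.mean(), 1 / theta)
    print(est.std(), np.sqrt(1.0 / (n * theta**2)))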
EXERCISES 1 . Suppose that a random variable X has a normal distribution for which the mean J.L is unknown ( - 00 < I-' < 00) and the variance 0 2 is known. Let f(x 1 1') d enote the p.d.f. of X, and let f'(x 1 1') and /"(x 1 1') denote the first and second partial derivatives with respect to p.. Show that ) f� -�f'(x 1 1' dx
�
0 and
f� -�r(x l l' ) dx
�
,
o.
2 . Suppose that a random variable X has a Poisson distribution for which the mean 0 is unknown (0 > 0). Find the Fisher information /(0) in X. 3. Suppose that a random variable has a normal distribution for which the mean is 0 and the standard deviation 0 is unknown ( (J > 0). Find the Fisher information 'ra) in X.
J " .
7.8. FIIher Information
431
4. Suppose that a random variable X has a normal distribution for which the mean is 0 and the variance σ² is unknown (σ² > 0). Find the Fisher information I(σ²) in X. Note that in this exercise the variance σ² is regarded as the parameter, whereas in Exercise 3 the standard deviation σ is regarded as the parameter.

5. Suppose that X is a random variable for which the p.d.f. or the p.f. is f(x | θ), where the value of the parameter θ is unknown but must lie in an open interval Ω. Let I₀(θ) denote the Fisher information in X. Suppose now that the parameter θ is replaced by a new parameter μ, where θ = ψ(μ) and ψ is a differentiable function. Let I₁(μ) denote the Fisher information in X when the parameter is regarded as μ. Show that

I₁(μ) = [ψ′(μ)]² I₀(ψ(μ)).

6. Suppose that X₁, ..., Xₙ form a random sample from a Bernoulli distribution for which the parameter p is unknown. Show that X̄ₙ is an efficient estimator of p.

7. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the mean μ is unknown and the variance σ² is known. Show that X̄ₙ is an efficient estimator of μ.

8. Suppose that a single observation X is taken from a normal distribution for which the mean is 0 and the standard deviation σ is unknown. Find an unbiased estimator of σ, determine its variance, and show that this variance is greater than 1/I(σ) for every value of σ > 0. Note that the value of I(σ) was found in Exercise 3.

9. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the mean is 0 and the standard deviation σ is unknown (σ > 0). Find the lower bound specified by the information inequality for the variance of any unbiased estimator of log σ.

10. Suppose that X₁, ..., Xₙ form a random sample from an exponential family for which the p.d.f. or the p.f. f(x | θ) is as specified in Exercise 11 of Sec. 6.7. Suppose also that the unknown value of θ must belong to an open interval Ω of the real line. Show that the estimator T = Σ_{i=1}^{n} d(Xᵢ) is an efficient estimator. Hint: Show that T can be represented in the form given in Eq. (25).

11. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the mean is known and the variance is unknown. Construct an efficient estimator that is not identically equal to a constant, and determine the expectation and the variance of this estimator.

12. Determine what is wrong with the following argument: Suppose that the random variable X has a uniform distribution on the interval (0, θ), where
θ is unknown (θ > 0). Then f(x | θ) = 1/θ, λ(x | θ) = −log θ, and λ′(x | θ) = −(1/θ). Therefore,

I(θ) = 1/θ².

Since 2X is an unbiased estimator of θ, the information inequality states that

Var(2X) ≥ θ².

But

Var(2X) = 4 Var(X) = 4 · θ²/12 = θ²/3 < θ².

Hence, the information inequality is not correct.

13. Suppose that X₁, ..., Xₙ form a random sample from a gamma distribution for which the value of the parameter α is unknown and the value of β is known. Show that if n is large, the distribution of the M.L.E. of α will be approximately a normal distribution with mean α and variance

[Γ(α)]² / ( n{ Γ(α)Γ″(α) − [Γ′(α)]² } ).

14. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the mean μ is unknown and the variance σ² is known; and that the prior p.d.f. of μ is a positive and differentiable function over the entire real line. Show that if n is large, the posterior distribution of μ given that Xᵢ = xᵢ (i = 1, ..., n) will be approximately a normal distribution with mean x̄ₙ and variance σ²/n.
15. Suppose that X₁, ..., Xₙ form a random sample from a Bernoulli distribution for which the parameter p is unknown; that the prior p.d.f. of p is a positive and differentiable function over the interval 0 < p < 1; that n is large; and that the observed values of X₁, ..., Xₙ are x₁, ..., xₙ. Show that the posterior distribution of p will be approximately a normal distribution with mean x̄ₙ and variance x̄ₙ(1 − x̄ₙ)/n.

7.9. SUPPLEMENTARY EXERCISES

14. [...] θ is unknown (θ > 0). Determine the asymptotic distribution of the M.L.E. of θ. (Note: The M.L.E. was found in Exercise 8 of Sec. 6.5.)

15. Suppose that a random variable X has an exponential distribution for which the parameter β is unknown (β > 0). Find the Fisher information I(β) in X.

16. Suppose that X₁, ..., Xₙ form a random sample from a Bernoulli distribution for which the parameter p is unknown. Show that the variance of any unbiased estimator of (1 − p)² must be at least 4p(1 − p)³/n.

17. Suppose that X₁, ..., Xₙ form a random sample from an exponential distribution for which the parameter β is unknown. Construct an efficient estimator that is not identically equal to a constant, and determine the expectation and the variance of this estimator.

18. Suppose that X₁, ..., Xₙ form a random sample from an exponential distribution for which the parameter β is unknown. Show that if n is large, the distribution of the M.L.E. of β will be approximately a normal distribution with mean β and variance β²/n.

19. Consider again the conditions of Exercise 18, and let β̂ₙ denote the M.L.E. of β.
(a) Use the delta method to determine the asymptotic distribution of 1/β̂ₙ.
(b) Show that 1/β̂ₙ = X̄ₙ, and use the central limit theorem to determine the asymptotic distribution of 1/β̂ₙ.
Testing Hypotheses

8.1. PROBLEMS OF TESTING HYPOTHESES
The Null and Alternative Hypotheses

In this chapter we shall again consider statistical problems involving a parameter θ whose value is unknown but must lie in a certain parameter space Ω. We shall suppose now, however, that Ω can be partitioned into two disjoint subsets Ω₀ and Ω₁, and that the statistician must decide whether the unknown value of θ lies in Ω₀ or in Ω₁.
We shall let H₀ denote the hypothesis that θ ∈ Ω₀ and shall let H₁ denote the hypothesis that θ ∈ Ω₁. Since the subsets Ω₀ and Ω₁ are disjoint and Ω₀ ∪ Ω₁ = Ω, exactly one of the hypotheses H₀ and H₁ must be true. The statistician must decide whether to accept the hypothesis H₀ or to accept the hypothesis H₁. A problem of this type, in which there are only two possible decisions, is called a problem of testing hypotheses. If the statistician makes the wrong decision, he typically must suffer a certain loss or pay a certain cost. In many problems, he will have an opportunity to take some observations before he has to make his decision, and the observed values will provide him with information about the value of θ. A procedure for deciding whether to accept the hypothesis H₀ or to accept the hypothesis H₁ is called a test procedure or simply a test.
In our discussion up to this point, we have treated the hypotheses H₀ and H₁ on an equal basis. In most problems, however, the two hypotheses are treated quite differently. To distinguish between them, the hypothesis H₀ is called the null hypothesis and the hypothesis H₁ is called the alternative hypothesis. We shall
use this terminology in all the problems of testing hypotheses that will be discussed in the later sections of this chapter and in the rest of the book.
One way of describing the decisions available to the statistician is that he may accept either H₀ or H₁. However, since there are only two possible decisions, accepting H₀ is equivalent to rejecting H₁, and accepting H₁ is equivalent to rejecting H₀. We shall use all these descriptions in our discussions of testing hypotheses.

The Critical Region

Consider now a problem in which hypotheses having the following form are to be tested:

H₀: θ ∈ Ω₀,
H₁: θ ∈ Ω₁.

Suppose that before the statistician has to decide which hypothesis to accept, he can observe a random sample X₁, ..., Xₙ drawn from a distribution which involves the unknown parameter θ. We shall let S denote the sample space of the n-dimensional random vector X = (X₁, ..., Xₙ). In other words, S is the set of all possible outcomes of the random sample.
In a problem of this type, the statistician specifies a test procedure by partitioning the sample space S into two subsets. One subset contains the values of X for which he will accept H₀, and the other subset contains the values of X for which he will reject H₀ and therefore accept H₁. The subset for which H₀ will be rejected is called the critical region of the test. In summary, a test procedure is determined by specifying the critical region of the test. The complement of the critical region must then contain all the outcomes for which H₀ will be accepted.
The Power Function

The characteristics of a test procedure can be described by specifying, for each value of θ ∈ Ω, either the probability π(θ) that the procedure will lead to the rejection of H₀ or the probability 1 − π(θ) that it will lead to the acceptance of H₀. The function π(θ) is called the power function of the test. Thus, if C denotes the critical region of the test, then the power function π(θ) is determined by the relation

π(θ) = Pr(X ∈ C | θ)   for θ ∈ Ω.    (1)
Since the power function π(θ) specifies for each possible value of the parameter θ the probability that H₀ will be rejected, it follows that the ideal power function would be one for which π(θ) = 0 for every value of θ ∈ Ω₀ and π(θ) = 1 for every value of θ ∈ Ω₁. If the power function of a test actually had these values, then regardless of the actual value of θ, the test would lead to the correct decision with probability 1. In a practical problem, however, there would seldom exist any test procedure having this ideal power function.
For any value of θ ∈ Ω₀, the decision to reject H₀ is an incorrect decision. Therefore, if θ ∈ Ω₀, π(θ) is the probability that the statistician will make an incorrect decision. In many problems, a statistician will specify an upper bound α₀ (0 < α₀ < 1) and will consider only tests for which π(θ) ≤ α₀ for every value of θ ∈ Ω₀. An upper bound α₀ that is specified in this way is called the level of significance of the tests to be considered.
The size α of a given test is defined as follows:

α = sup_{θ ∈ Ω₀} π(θ).    (2)

In other words, the size of a test is the maximum probability, among all the values of θ which satisfy the null hypothesis, of making an incorrect decision. The relationship between the level of significance and the size can be summarized as follows: If a statistician specifies a certain level of significance α₀ in a given problem of testing hypotheses, then he will consider only tests for which the size α is such that α ≤ α₀.
Example 1: Testing Hypotheses About a Uniform Distribution. Suppose that a random sample X₁, ..., Xₙ is taken from a uniform distribution on the interval (0, θ), where the value of θ is unknown (θ > 0); and suppose also that it is desired to test the following hypotheses:

H₀: 3 ≤ θ ≤ 4,
H₁: θ < 3 or θ > 4.    (3)

We know from Sec. 6.5 that the M.L.E. of θ is Yₙ = max(X₁, ..., Xₙ). Although Yₙ must be less than θ, there is a high probability that Yₙ will be close to θ if the sample size n is fairly large. For illustrative purposes, we shall suppose that the hypothesis H₀ will be accepted if the observed value of Yₙ lies in the interval 2.9 ≤ Yₙ ≤ 4 and that H₀ will be rejected if Yₙ does not lie in this interval. Thus, the critical region of the test contains all the values of X₁, ..., Xₙ for which either Yₙ < 2.9 or Yₙ > 4. The power function of the test is specified by the relation

π(θ) = Pr(Yₙ < 2.9 | θ) + Pr(Yₙ > 4 | θ).
If 8 " 2.9, then Prey" < 2.9 1 8) � 1 and Pre y" " 4 1 8) � O. Therefore, 71(8) � 1. If 2.9 < 8 " 4, then Prey" < 2.9 1 0) = (2.9/0)" and Prey" > 4 1 0) O. In this case, 71( 0 ) � (2.9/0)". Finally, if 0 > 4, then Pr( Y" < 2.9 1 0) � (2.9/0) " and Pre y" > 4 1 8) � 1 - (4/0)". In this case, ,,(8) � (2.9/8)" + 1 - (4/8)". The power function 1T( 0 ) is sketched in Fig. 8.1. By Eq. (2), the size of the test is 0< � sup, . 0 < 4 1T(0). It can be seen from Fig. 8.1 and the calculations just given that a � ,,(3) � (29/30) " . In particular, if the sample size is n � 68, then the size of the test is (29/30)" � 0.100. 0 �
Simple and Composite Hypotheses

Suppose that X₁, ..., Xₙ form a random sample from a distribution for which the p.d.f. or the p.f. is f(x | θ), where the value of the parameter θ must lie in the parameter space Ω; that Ω₀ and Ω₁ are disjoint sets with Ω₀ ∪ Ω₁ = Ω; and that it is desired to test the following hypotheses:

H₀: θ ∈ Ω₀,
H₁: θ ∈ Ω₁.

For i = 0 or i = 1, the set Ωᵢ may contain just a single value of θ. It is then said that the hypothesis Hᵢ is a simple hypothesis. If the set Ωᵢ contains more than one value of θ, then it is said that the hypothesis Hᵢ is a composite hypothesis. Under a simple hypothesis, the distribution of the observations is completely specified. Under a composite hypothesis, it is specified only that the distribution of the observations belongs to a certain class. For example, suppose that in a given problem the null hypothesis H₀ has the form

H₀: θ = θ₀.

Since this hypothesis is simple, it follows from Eq. (2) that the size of any test procedure will just be π(θ₀).
Figure 8.1  The power function π(θ).
EXERCISES

1. Suppose that X₁, ..., Xₙ form a random sample from a uniform distribution on the interval (0, θ), and that the following hypotheses are to be tested:

H₀: θ ≥ 2,
H₁: θ < 2.

Let Yₙ = max(X₁, ..., Xₙ), and consider a test procedure such that the critical region contains all the outcomes for which Yₙ ≤ 1.5.
(a) Determine the power function of the test.
(b) Determine the size of the test.

2. Suppose that the proportion p of defective items in a large population of items is unknown, and that it is desired to test the following hypotheses:

H₀: p = 0.2,
H₁: p ≠ 0.2.

Suppose also that a random sample of 20 items is drawn from the population. Let Y denote the number of defective items in the sample, and consider a test procedure such that the critical region contains all the outcomes for which either Y ≥ 7 or Y ≤ 1.
(a) Determine the value of the power function π(p) at the points p = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1; and sketch the power function.
(b) Determine the size of the test.

3. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the mean μ is unknown and the variance is 1. Suppose also that μ₀ is a certain specified number, and that the following hypotheses are to be tested:

H₀: μ = μ₀,
H₁: μ ≠ μ₀.

Finally, suppose that the sample size n is 25, and consider a test procedure such that H₀ is to be accepted if |X̄ₙ − μ₀| < c. Determine the value of c such that the size of the test will be 0.05.

4. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown. Classify each
of the following hypotheses as either simple or composite:
(a) H₀: μ = 0 and σ = 1.
(b) H₀: μ > 3 and σ < 1.
(c) H₀: μ = −2 and σ² < 5.
(d) H₀: μ = 0.

5. Suppose that a single observation X is to be taken from a uniform distribution on the interval (θ − ½, θ + ½), and suppose that the following hypotheses are to be tested:

H₀: θ ≤ 3,
H₁: θ ≥ 4.

Construct a test procedure for which the power function has the following values: π(θ) = 0 for θ ≤ 3 and π(θ) = 1 for θ ≥ 4.
8.2. TESTING SIMPLE HYPOTHESES

Two Types of Errors

In this section we shall consider problems of testing hypotheses in which a random sample is taken from one of two possible distributions, and the statistician must decide from which distribution the sample actually came. In this type of problem, the parameter space Ω contains exactly two points, and both the null hypothesis and the alternative hypothesis are simple. Specifically, we shall assume that the variables X₁, ..., Xₙ form a random sample from a distribution for which the p.d.f. or the p.f. is f(x | θ), and we shall also assume that either θ = θ₀ or θ = θ₁, where θ₀ and θ₁ are two specific values of θ. The following simple hypotheses are to be tested:

H₀: θ = θ₀,
H₁: θ = θ₁.    (1)

For i = 0 or i = 1, we shall let

fᵢ(x) = f(x₁ | θᵢ) f(x₂ | θᵢ) ··· f(xₙ | θᵢ).    (2)
Thus, fᵢ(x) represents the joint p.d.f. or joint p.f. of the observations in the sample if the hypothesis Hᵢ is true (i = 0, 1). When a test of the hypotheses (1) is being carried out, two possible types of errors must be kept in mind. First, the test might result in the rejection of the null
hypothesis H₀ when, in fact, H₀ is true. It has become traditional to call this result an error of type 1, or an error of the first kind. Second, the test might result in the acceptance of the null hypothesis H₀ when, in fact, the alternative hypothesis H₁ is true. This result is called an error of type 2, or an error of the second kind. Furthermore, for any given test procedure δ, we shall let α(δ) denote the probability of an error of type 1 and shall let β(δ) denote the probability of an error of type 2. Thus,
α(δ) = Pr(Rejecting H₀ | θ = θ₀),
β(δ) = Pr(Accepting H₀ | θ = θ₁).

It is desirable to find a test procedure for which the probabilities α(δ) and β(δ) of the two types of error will be small. It is easy to construct a test procedure for which α(δ) = 0 by using a procedure that always accepts H₀. However, for this procedure β(δ) = 1. Similarly, it is easy to construct a test procedure for which β(δ) = 0 but α(δ) = 1. For a given sample size, it is typically not possible to find a test procedure for which both α(δ) and β(δ) will be arbitrarily small. Therefore, we shall now show how to construct a procedure for which the value of a specific linear combination of α and β will be minimized.
Optimal Tests

Minimizing a Linear Combination. Suppose that a and b are specified positive constants, and that it is desired to find a procedure δ for which aα(δ) + bβ(δ) will be a minimum. The following result shows that a procedure which is optimal in this sense has a very simple form.

Theorem 1. Let δ* denote a test procedure such that the hypothesis H₀ is accepted if af₀(x) > bf₁(x) and the hypothesis H₁ is accepted if af₀(x) < bf₁(x). Either H₀ or H₁ may be accepted if af₀(x) = bf₁(x). Then for any other test procedure δ,

aα(δ*) + bβ(δ*) ≤ aα(δ) + bβ(δ).    (3)
Proof. For convenience, we shall present the proof for a problem in which the random sample X₁, ..., Xₙ is drawn from a discrete distribution. In this case, fᵢ(x) represents the joint p.f. of the observations in the sample when Hᵢ is true (i = 0, 1). If the sample comes from a continuous distribution, in which case fᵢ(x) is a joint p.d.f., then each of the sums that will appear in this proof should be replaced by an n-dimensional integral.
If we let R denote the critical region of an arbitrary test procedure δ, then R contains every sample outcome x for which δ specifies that H₀ should be rejected,
and Rᶜ contains every outcome x for which H₀ should be accepted. Therefore,

aα(δ) + bβ(δ) = a Σ_{x∈R} f₀(x) + b Σ_{x∈Rᶜ} f₁(x)
             = a Σ_{x∈R} f₀(x) + b[ 1 − Σ_{x∈R} f₁(x) ]
             = b + Σ_{x∈R} [ af₀(x) − bf₁(x) ].    (4)
It follows from Eq. (4) that the value of the linear combination aα(δ) + bβ(δ) will be a minimum if the critical region R is chosen so that the value of the final summation in Eq. (4) is a minimum. Furthermore, the value of this summation will be a minimum if the summation includes every point x for which af₀(x) − bf₁(x) < 0 and does not include any point x for which af₀(x) − bf₁(x) > 0. In other words, aα(δ) + bβ(δ) will be a minimum if the critical region R is chosen to include every point x such that af₀(x) − bf₁(x) < 0 and to exclude every point x such that this inequality is reversed. If af₀(x) − bf₁(x) = 0 for some point x, then it is irrelevant whether or not x is included in R, since the corresponding term would contribute zero to the final summation in Eq. (4). It can be seen that this description of the critical region corresponds to the description of the test procedure δ* given in the statement of the theorem. ◻
The ratio f₁(x)/f₀(x) is called the likelihood ratio of the sample. Thus, Theorem 1 states that a test procedure for which the value of aα(δ) + bβ(δ) is a minimum rejects H₀ when the likelihood ratio exceeds a/b and accepts H₀ when the likelihood ratio is less than a/b.
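As a minimal sketch of how such a test is carried out in practice (not from the text; the normal model with means θ₀ = 0 and θ₁ = 1 is only an assumed illustration), the rule of Theorem 1 can be written as a comparison of af₀(x) with bf₁(x):

```python
# Sketch of the likelihood-ratio rule of Theorem 1: reject H_0 exactly when f_1(x)/f_0(x) > a/b.
import math

def joint_density(xs, mean):
    # joint normal density with the given mean and variance 1 (illustrative model only)
    return math.exp(sum(-0.5 * (x - mean) ** 2 for x in xs)) / (2 * math.pi) ** (len(xs) / 2)

def reject_H0(xs, a, b, theta0=0.0, theta1=1.0):
    f0, f1 = joint_density(xs, theta0), joint_density(xs, theta1)
    return a * f0 < b * f1          # equivalent to f1(x)/f0(x) > a/b

print(reject_H0([0.9, 1.2, 0.3], a=1.0, b=1.0))   # True: the sample mean is closer to theta1
```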
Minimizing the Probability of an Error of Type 2. Next, suppose that the probability α(δ) of an error of type 1 is not permitted to be greater than a specified level of significance α₀, and it is desired to find a procedure δ for which β(δ) will be a minimum.
In this problem we can apply the following result, which is closely related to Theorem 1 and is known as the Neyman-Pearson lemma in honor of the statisticians J. Neyman and E. S. Pearson who developed these ideas in 1933.
Neyman-Pearson Lemma. Suppose that δ* is a test procedure which has the following form for some constant k > 0: The hypothesis H₀ is accepted if f₀(x) > kf₁(x), and the hypothesis H₁ is accepted if f₀(x) < kf₁(x). Either H₀ or H₁ may be accepted if f₀(x) = kf₁(x). If δ is any other test procedure such that α(δ) ≤ α(δ*), then it follows that β(δ) ≥ β(δ*). Furthermore, if α(δ) < α(δ*), then β(δ) > β(δ*).
Proof. From the description of the procedure δ* and from Theorem 1, it follows that for any other test procedure δ,

α(δ*) + kβ(δ*) ≤ α(δ) + kβ(δ).    (5)

If α(δ) ≤ α(δ*), then it follows from the relation (5) that β(δ) ≥ β(δ*). Also, if α(δ) < α(δ*), then it follows that β(δ) > β(δ*). ◻
To illustrate the use of the Neyman-Pearson lemma, we shall suppose that a statistician wishes to use a test procedure for which α(δ) = 0.05 and β(δ) is a minimum. According to the lemma, he should try to find a value of k for which α(δ*) = 0.05. The procedure δ* will then have the minimum possible value of β(δ). If the distribution from which the random sample is taken is continuous, then it is usually (but not always) possible to find a value of k such that α(δ*) is equal to a specified value such as 0.05. However, if the distribution from which the random sample is taken is discrete, then it is typically not possible to choose k so that α(δ*) is equal to a specified value. These remarks are considered further in the following examples and in the exercises at the end of this section.
Example 1: Sampling from a Normal Distribution. Suppose that X₁, ..., Xₙ form a random sample from a normal distribution for which the value of the mean θ is unknown and the variance is 1; and that the following hypotheses are to be tested:

H₀: θ = 0,
H₁: θ = 1.    (6)
We shall begin by determining a test procedure for which β(δ) will be a minimum among all test procedures for which α(δ) = 0.05.
When H₀ is true, the variables X₁, ..., Xₙ form a random sample from a standard normal distribution. When H₁ is true, these variables form a random sample from a normal distribution for which both the mean and the variance are 1. Therefore,

f₀(x) = (2π)^{−n/2} exp( −(1/2) Σ_{i=1}^{n} xᵢ² )    (7)

and

f₁(x) = (2π)^{−n/2} exp[ −(1/2) Σ_{i=1}^{n} (xᵢ − 1)² ].    (8)
11(x)//o(x)
can be
446
Testing Hypotheses
writlen in the form
fl ( x ) fo( x )
=
[ (- - "21 l] .
exp n
(9)
x.
It now follows from Eq. (9) that rejecting the hypothesis Ho when the likelihood ratio is greater than a specified positive constanl k is equivalent to rejecting Ho when the sample mean X. is greater than (1/2) + (1jn)log k. Let k' = (1/2) + (l/n)logk, and suppose that we can find a value of k' such that Pr ( X. > k ' 1 8
=
0)
=
(10)
O.OS.
Then the procedure 8* which rejects Ho when X" > k ' will be such that a( S * ) = O.OS. Furthermore, by the Neyman-Pearson lemma, S * will be an optimal procedure in the sense of minimizing the value of P( S ) among all procedures for which a( S ) '" O.OS. I t is easy to find a value of k' that satisfies Eq. (10). When 8 = 0, the distribution of X" will be a normal distribution with mean 0 and variance lin. Therefore, if we let Z = n1/2X"' then Z will have a standard normal distribution.. and Eq. (10) can he rewritten in the form Pr(Z > nl/2k')
=
O.OS.
(ll)
From a table of the standard normal distribution, it is found that Eq. (11) will be satisfied, and therefore Eq. (10) will also be satisfied, when n1/2k' = l.64S or, equivalently, when k' = 1 .645n- I/2 . In summary, among all test procedures for whkh a( 8) � 0.05, the procedure which rejects Ho when X" > 1.645n- I/2 is optimal. Next, we shall determine the probability P(S*) of an error of type 2 for thiS procedure 8 + . Since /1(8*) is the probability of accepting Ho when HI is true,
P(S*)
=
Pr ( X. < 1 .64Sn- I/2 1 8
=
1).
(12)
When 8 = 1, the distribution of X. will be a normal distribution with mean 1 and variance l/n. If we let Z' = n I /2( X. - 1), then Z' will have a standard normal distribution. Hence,
P(S ")
=
Pr( Z' < 1 .64S - n I /2 ) .
For instance, when n distribution that
P(S")
=
=
(13)
9, it is found from a table of the standard normal
Pr( Z' < - 1 .3SS)
=
1 - cfJ(1 .3SS) = 0.0877.
( 14)
8.2. Teatlng Simple Hypotheaes
447
Finally, for this same random sample and the same hypotheses (6), we shall determine the test procedure 60 for which the vaIue of 2a(6) + fJ(6) is a minimum, and we shall calculate the value of 2,,( 60) + fJ( 60) when 11 = 9. Il follows from Theorem J that the procedure 60 For which 2a(6) + fJ(6) is a minimum rejects Ho when the likelihood ratio is greater than 2. By Eq. (9), this procedure is equivalent to rejecting Ho when X" > (1/2) + (1/1I)log2. Thus, when n = 9, the optimal procedure 60 rejects Ho when X'1 > 0.577. For this procedure we then have Pr( X" > 0.577 1 0
=
fJ( 60) = Pr( X" < 0.577 1 0
-
a ( 60)
=
0)
(15)
1).
( 16)
and
If Z and Z' are defined as earlier in this example, then it is found that
a ( 60)
=
Pr( Z > 1.731) = 0 .0417
(17)
and
fJ ( 60 )
=
Pr(Z' < - 1 .269)
=
1 - ( 1 .269)
=
0.1022.
(18)
The minimum value of 2a(6) + fJ(6) is therefore
2 ,, ( 60 ) + fJ ( 60 )
2(0.0417) + (0.1022)
=
=
0.1856.
0
(19)
Example 2: Sampling from a Bemoulli Distribution. Suppose that Xl' . . . ' X"
form a random sample from a Bernoulli distribution for which the value of the parameter p is unknown; and that the following hypotheses are to be tested: Ho: HI :
It
is
P P -
desired to find
mmlffiU.ffi. y =
0.2, 0.4.
(20) a
test procedure for which a( 15) = 0.05 and P( 8) is a
In this example, each observed value Xi must be either 0 or 1. If we let I:7_ ,xj, then the joint p.L of XI " ' " X" when p = 0.2 is
lo ( x )
=
(0.2), (0.8),,-'
and the joint p.L when II ( x )
=
P =
(0.4) '(0.6),, -'.
(21) 0.4 is
(22 )
Testing Hypotheses
448
Hence, the likelihood ratio is f, e x ) � fo( x )
(l)"(�)Y . 4 3
,
(23) ;
It follows that rejecting Ho when the likelihood ratio is greater than a specified positive constant k is equivalent to rejecting Ho when y is greater than k', where k' �
log k + n log(4/3) log(8/3)
(24) .
To find a test procedure for which a( 0) � 0.05 and f3( 0) is a minimum, we
and for 0 < x < otherwise.
1,
On the basis of the observation X, it must be decided whether fo or f, is the correct p.d.l. Suppose that the prior probability that fo is correct is 2/3 and
455
8.2. Taltlng Simple Hypotheses
the prior probability that 11 is correct is
12.
is correct
unit; and the loss from deciding that 10 is
1
correct when in fact II is correct is
X
minimized, for what values of
Suppose also that the loss from
0; the loss from deciding that 11
choosing the correct decision is when in fact 10 is correct is
1/3.
4
units. If the expected loss is to be
should it be decided that 10 is correct?
Suppose that a certain industrial process can be either in control or out of control ; and that at any specified time the prior probability that it will be in control
is 0.9
and the;: prior probability that it will be out of control is
single observation
X of the output of the process is
0.1. A
to be taken, and it must
be decided immediately whether the process is in control or out of control. If
X
the process is in control. then and variance
1.
will have a normal distribution with mean
If the process is out of control, then
distribution with mean
52
and variance
1.
X
50
will have a normal
If it is decided that the process is
oul of control when in fact it is in control, then the Joss from unnecessarily stopping the process will be
$1000 .
If it is decided tbat the process is in
control when in fact i t is out of control, then the loss from continuing the process will be
$18,000.
If a correct decision is made, then the loss will be
O.
It is desired to find a test procedure for which the expected loss will be a minimum. For what values of
X should i t
be decided that the process is out
of control? 1 3 . Suppose that the proportion
is unknown, and it is desired to test the following simple hypotheses:
HI :
Ho :
P
P
-- 00 .4..3, - 0.4
p
of defect.ive items j,n a large manufactured lot
Suppose that the prior probability that P
ity that p
decision is
1
is
- 0.3
is
1/4 and the prior probabil
3/4; and also that the loss from choosing an incorrect
unit and the loss from choosing a correct decision is
that a random sample of
O.
Suppose
items is selected from t.he lot. Show that the Bayes test procedure is to reject Ho i f and only if the proportion of defective items n
in the sample is greater than 10g
( �) � ( t) ( 194 ) +
IOg
10g
14.
Suppose that a failure in a certain electronic system can occur because of
80 percent of the 20 percent of the failures are caused n independent soundings Xl' · · · ' Xn
either a minor defect or a major defect. Suppose also that failures are caused by minor defects and by major defects. When a failure occurs,
are made on the system. If the failure was caused by a minor defect, these
456
T.sting Hypotheses
soundings form a random sample from a Poisson distribution for which the mean is 3 . If the failure was caused by a major defect, these soundings form a random sample from a Poisson distribution for which the mean is 7. The cost of deciding that the failure was caused by a major defect when it was actually caused by a minor defect is $400. The .cost of deciding that the failure was caused by a minor defect when it was actual1y caused by a major defect is $2500. The cost of choosing a correct decision is O. For a given set of observed values of Xl" ' " XI1• which decision minimizes the expected cost?
·S.3.
MULTIDECISION PROBLEMS
Finite Number 01 Parameter Values and Finite Number of Decisions In a problem of testing a pair of simple hypotheses, there are only two possible values of the parameter (J and there are only two possible decisions for the experimenter, namely, either to accept Ho or to accept HI ' This problem therefore belongs to the class of problems in which there are a finite number of possible values of the parameter (J and a finite number of possible decisions. Such problems are called multidecision problems. For a general multidecision problem we shall let 8j, , 8k denote the k possi le values of 8� and we shall let �l,�·dm denote the m possible decisions that can be chosen. Furthermore, for Y;; 1, k and ). - 1, . . . , m , we shall let imenter when 8 = 8; and decision dj is wi) denote the loss incurred by the£Xper chosen. Finally, for i 1 , . . . , k, e shaIl let �i denote the prior probability that 8 = 8i• Thus �i ;;, 0 and �j + . 1 + g, = 1 . I f the experimenter must choose one of the decisions d1, . . . , dm without being able to observe any relevant sample data, then the expected loss or risk Pj from choosing decision dj will be Pj Er_l �iWij' A decision for which the risk is a minimum is called a Bayes decision.
�
•
=
.
.
•
. . . •
/
=
Example k
=
J: Finding u Bayes Decision.
3 and m
=
Consider a multidecision problem in which 4, and the losses Wi} are given by the following table: d,
8,
1
2
3
4
8,
3
o
1
2
8,
4
2
1
o
8.3. Multldeclslon Problems
457
It follows from this table that the risks of the four possible decisions are as follows: �, + 3 � , + 4�3 ' P2 � 2�, + 2�" p, = 3�, + �2 + �" P4 � 4� , + 2 �2 '
p, �
(1 )
For any given prior probabilities ξ₁, ξ₂, and ξ₃, a Bayes decision is found simply by determining the decision for which the risk is smallest. As an illustration, if ξ₁ = 0.5, ξ₂ = 0.2, and ξ₃ = 0.3, then ρ₁ = 2.3, ρ₂ = 1.6, ρ₃ = 2.0, and ρ₄ = 2.4. Therefore, d₂ is the unique Bayes decision.
If θ = θ₁, it can be seen from the first row of the table of losses that d₁ has the smallest loss among the four decisions. Therefore, if the prior probability ξ₁ is sufficiently close to 1, then d₁ will be the Bayes decision. Similarly, if θ = θ₂, then d₂ has the smallest loss among the four decisions. Therefore, if ξ₂ is sufficiently close to 1, then d₂ will be the Bayes decision. Finally, if θ = θ₃, then d₄ has the smallest loss among the four decisions. Therefore, if ξ₃ is sufficiently close to 1, then d₄ will be the Bayes decision.
We shall now determine whether there are any prior probabilities ξ₁, ξ₂, and ξ₃ for which d₃ will be a Bayes decision. The following results can be obtained from the relations (1): ρ₂ < ρ₃ if and only if ξ₁ + ξ₂ > ξ₃; and ρ₄ < ρ₃ if and only if ξ₁ + ξ₂ < ξ₃. Therefore, the only condition under which d₃ could be a Bayes decision is when ξ₁ + ξ₂ = ξ₃. But if ξ₁ + ξ₂ = ξ₃, then it follows that ξ₁ + ξ₂ = 1/2 and ξ₃ = 1/2, and it can be verified from the relations (1) that ρ₂ = ρ₃ = ρ₄ = 1 + 2ξ₁ and ρ₁ = (5/2) + 2ξ₂ > 1 + 2ξ₁. We can conclude from this discussion that d₃ is a Bayes decision only if ξ₁ + ξ₂ = 1/2 and ξ₃ = 1/2. However, in this case d₂ and d₄ are also Bayes decisions, and any one of these three decisions could be chosen. ◻
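The arithmetic in this example is easy to check by machine. The following is a minimal sketch (not part of the text; the array layout and variable names are illustrative assumptions) that computes the risks ρⱼ = Σᵢ ξᵢ wᵢⱼ for the loss table above and selects a decision with smallest risk.

```python
# Sketch: Bayes decision for the loss table of Example 1.
import numpy as np

W = np.array([[1, 2, 3, 4],      # losses w_ij; rows correspond to theta_1, theta_2, theta_3
              [3, 0, 1, 2],
              [4, 2, 1, 0]])
xi = np.array([0.5, 0.2, 0.3])   # prior probabilities xi_1, xi_2, xi_3

risks = xi @ W                   # rho_1, ..., rho_4
print(risks)                     # [2.3, 1.6, 2.0, 2.4]
print("Bayes decision: d", risks.argmin() + 1, sep="")
```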
458
THUng Hypotheses
•
sample has been observed, the posterior probability t(x) that 0 for i
=
i
• . . .
=
0, will be
, k.
(2)
Thus, after the vector x of values in the sample has been observed, the risk Pj(x) from choosing decision dj will be k
Pj (x} = L U X } W'j i-I
for j = 1, . . . , m .
(3)
It follows that after x has been observed, a Bayes decision will be a decision for which the risk in Eq. (3) is a minimum. Such a decision is called a Bayes decision with respect 10 the poslerior distribution of o. In a multidecision problem of this type, a decision procedure is defined to be a function 8 which specifies, tor each possible vector x, the decision 8(x) tbat is, to be chosen if the observed vector is x. Thus, for each vector x, 8(x) must be one: of the m possible decisions d1 • · , dm. A decision procedure " is called a Bayes decision procedure if, for each, possible vector .t, the decision 8( x ) is a Bayes decision with respect to the posterior distribution of O. In other words, when a Bayes decision procedure is used, the decision that is chosen after the vector x has been observed is always a' decision for which the risk p/x) is a minimum. Before any observations have been taken, the risk that the experimenter faces by using a specific decision procedure 8 can be calculated as follows: For j = 1, . . . , m , let A) denote the set of all outcomes x for which 8(x) "'" d), that is, for which decision dj wilJ be chosen. For convenience, we shall assume that the observations Xl Xn have a discrete distribution and that !,.(x 1 8j) represents their joint pJ, wben 0 = 0,. If 1.( x I 0,) is actually a joint p.d.f" then the summations over values of x that appear in the development to be given here should be replaced by integrals. If 0 = 0" tbe risk p(S I 0 0,) from using the procedure 8 is .
• . . .
•
'
=
m P (8 I 0 = O,) = L w,jPr[ 8 (x) )- 1
=
dj I 0 = 0,1
m = L W'j L I. ( x 1 0. ) . j-I
(4)
XEAj
Since the prior probability that 0 = 0, is �" the overall risk p(8) from using tbe
8.3. Multldecision Problems
procedure
8
p(8) � This risk
459
wi II be k
k
m
L �,P( 8 1 0 � 8,) � L L L �ilViJ"(X 1 8i)·
i-1
(5)
p(8) will be minimized when 8 is a Bayes decision procedure.
Example 2: Detennining a Bayel' Decision Procedure. Suppose that in a large shipment of fruit, the only three possible values for the proportion 8 of bruised pieces are 0.1, 0.3, and 0.5, and that there are three possible decisions d" d" and d). Suppose also that the losses from these decisions are as follows:
d,
9
0.1
0
1
3
9 � 0.3 9 0.5
2
o
2
3
1
0
�
�
Furthermore, suppose that on the basis of previous shipments from the same supplier, it is believed that the prior probabilities of the three ·possible values of e are as follows: 0.1) � 0.5, Pr( e � 0.3) - 0.3, Pre 8 � 0.5) � 0.2. Pre 8
�
(6)
Finally, suppose that we can observe the number Y of bruised pieces of fruit in a random sample of 20 pieces selected from the shipment. We shall determine a Baye.... decision procedure, and shall calculate t!le risk from that procedure. When f) = 0,1, the distribution of Y is a binbmial distribution with parame ters 20 and 0.1. The p.1. g(y 1 8 � 0.1) is as follows:
g (Y 1 8 � 0.1) �
(�)(O.l)'(0.9) 20-Y
lor y � 0, 1 , . . . , 20.
(7)
When e � 0.3 or 0 0.5, the distribution of Y is a similar binomial distribution and expressions for g(y 1 8 � 0.3) and g(y 1 8 � will have a form similar to Eq. (7). The values of these p.f.'s for specific values of y can be obtained from the table of the binomial distribution given at the end of this book. �
0.5)
460
Testing Hypotheses
It follows from Eq. (2) that after the value Y � Y has been observed, the posterior probability that 0 � 0.1 will be Pr( O � O.l l Y - y ) (0.5)g( y 1 0 = 0.1) '--c -C .5 ) = 7: 0---= 0--= . 1")+7: (0-' ) "-' )+ 7:: . 2") -g -;(I OC:--== O-:7 I =.5" ) g ( y 1 -::. 37 .30(0:-: ( 0--= g (-'y0 -'y-'= 0':-
(8)
Similar expressions can be written for the posterior probabilities that 8 = 0. 3 and that () = 0.5. These posterior probabilities, for each possible value of y. are given in Table 8.1. After the value of y has been observed, the risk p/ y) from each possible decision dj (J = 1 , 2, 3) can be calculated by applying Eq. ( 3) and using these posterior probabilities and the table of losses given at the beginning of this example. Thus, the risk p,(y) from choosing decision d, will be p , ( y ) � 2 Pr( 0 = 0. 3 1 Y = y )
+ 3 Pr( O = 0.5 1 Y = y),
Table 8.1 y
Pr(O - O.l 1 Y = y )
Pr(O - 0.3 1 Y = y)
Pr(O = 0.5 1 Y - y)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
0.9961 0.9850 0.9444 0.8141 0.5285 0.2199 0.0640 0.0151 0.0031 0.0005 0.0001 0.0000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.0039 0.0150 0.0553 0.1840 0.4606 0.7393 0.8294 0.7575 0.5864 0.3795 0.2078 0.1011 0.046 0.020 0.009 0.004 0.000 0.000 0.000 0.000 0.000
0.0000 0.0000 0.0002 0.0019 0.0109 0.0408 0.1066 0.2273 0.4105 0.6200 0.7921 0.8989 0.954 0.980 0.991 0.996 1.000 1.000 1.000 1 .000 1.000
17
18 19 20
8,3. Multideclalon Problems
461
the risk from choosing d, will be
p,(y) � Pr( 8 � 0.1 1 Y - y ) + Pr( O � 0.5 1 md the risk from choosing d,
will
Y � y) ,
be
p, ( y ) � 3 Pr( 8 � 0.1 I y � y) + 2 Pr( 8 � 0.3 I Y � y ) . Ibe values of these risks for each possible value of y and each possible decision .re given in Table 8.2. The following conclusions can be drawn from the tabulated values: If y '" 3, hen the Bayes decision is dl; if 4 � Y � 9, then the Bayes decision is d2; and if ;?> 10, then the Bayes decision is d,. In other words, the Bayes decision Table 8.2
Y
PI (y)
p,( y )
p,(y)
,0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
0.0078 0.0300 0.1124 0.3737 0.9539 1.6010 1.9786 2.1969 2.4043 2.6190 2.7919 2.8989 2.954 2.980 2.991 2.996 3.00 3.00 3.00 3.00 3.00
0.9961 0.9850 0.9446 0.8160 0.5394 0.2607 0.1706 0.2428 0.4136 0.6205 0.7922 0.8989 0.954 0.980 0.991 0.996 1.00 1.00 1.00 1.00 1.00
2.9961 2.9850 2.9430 2.8103 2.5067 2.1383 1.8508 1.5603 1.1821 0.7605 0.4159 0.2022 0.092 0.040 0.018 0.008 0.00 0.00 0.00 0.00 0.00
•
462
Testing Hypotheses
procedure
{j
is defined as follows: if y = 0, 1 , 2, 3, if y = 4, 5 , 6 , 7 , 8,9, if y � 10, 1 1 , . . . , 20.
It now follows from Eq. (5) that the risk p(8)
�
9
"1
(9) , p ( 8 ) from using the procedure
8s i i
w
I
(0.5 ) I: g ( y I O = 0.1 ) + ( 1 .5 ) I: g ( Y I O � 0.1 ) ,
..
.j
20
+ ( 0.6 ) I: g(y l O = 0.3 ) + ( 0.6) I: g(y l O = 0.3 )
(10)
, .
..,-10
3
+ ( 0.6) I: g ( y l O y-O
-
•
,
:i
9
0.5 ) + (0.2) I: g(Y I O = 0.5 )
,
}' _ 4
,
= 0.2423. Thus, the risk of the Bayes decision procedure is
p(8)
�
0.2423.
0
Example 3: The Value oj Sample In/ormation. Suppose now that i t was necessary to choose one of the three decisions d1• d2• or d3 in Example 2 without being ' able to observe the number of bruised pieces in a random sample. In this case, it· is found from the table of losses wij and the prior probabilities (6) that the risks PI' P2' and P3 from choosing each of the decisions dl, d2• and d3 are as follows: P I � 2 ( 0.3 ) + 3 (0.2) = 1 .2, P 2 � ( 0.5) + (0.2) � 0.7, p, = 3 ( 0.5 ) + 2 (0.3 ) = 2.1.
,�
(11)
Hence, the Bayes decision without any observations would be d2 and the risk · from thai decision would be 0.7. By being able to observe the number of bruised pieces in a random sample of 20 pieces, we can reduce the risk from 0.7 to 0.2423. 0 EXERCISES 1. Suppose that a malfunction which causes a certain system to break down and
become inoperative can occur in either of two different parts of the system, part A or part B. Suppose also that when the system does become i.noper-
463
8.3. Multideclaion Problema
ative, it is not known immediately whether the malfunction causing the breakdown has occurred
in
part
A
or in part
B.
It is assumed that the repair
procedures are quite different for the two different parts. Therefore, when a breakdown occurs in the system, one of the following three decisions must be chosen : Decision d1: The repair procedure for a breakdown in part
A
is activated
immediately. If the malfunction causing the breakdown actually occurred in part
B. then the cost of this decision in
time is
terms of unnecessary labor and lost
$1000. If the malfun(.;lion actually occurred in part
A, then thi s
decision leads to the repair of the malfunction i n the most efficient manner and the cost is regarded as zero.
d2 :
B
activated immediately. If the malfunction actually occurred in part A, then Decision
The repair procedure for a breakdown in part
the cost of this decision is
$3000.
I f the malfunction occurred
in part
is
B, then
the cost is again regarded as zero. Decision dJ : A test is applied to the system that will determine with certainty whether the malfunction has occurred in part A or i n part The
B.
cost of applying this test is (aJ If
75
$300.
percent of all malfunctions occur in part A and only
occur in part
B,
25
percent
what is the Bayes decision when the system breaks
down? (b) Suppose that the breakdown in the system is always caused by a defect in one of
36 similar components, all of which are equally likely to be
defective.
H4
of these components are used in part
components are used in part
B.
A
and the other 32
what is the Bayes decision when the
system breaks down?
2.
Consider a multidecision problem in which
8
can take only two values, there
are four possible decisions, and the losses are as given in the following table:
d, 9,
For
i
o
10
1
6
10
o
8
6
�
I
or
2,
let �; denote the prior probability that 6
�
6;.
(a) Show that d4 is never a Bayes decision for any values of tl and � 2 . (b) For what values o f � I and � 2 i s the Bayes decision not unique?
3.
Suppose that an unmanned rocket is being launched, and that at the time of the launching a certain electronic component is either functioning or not functioning. In ahe control center there is a warning light that is not completely reliable. If the electronic component is not functioning, the
464
"-i
J•
Testing Hypotheses
"
warning light goes on with probability 1/2; if the component is functioning. the warning light goes on with probability 1/3. At the time of launching. an observer notes whether the warning light is on or off. I t must then be decided immediately whether or not to launch the rocket. Suppose that the losses, in millions of dollars, are as follows:
Do not launch rocket
Component runctioning
Launch rocket
o
2
Component not functioning
5
o
(a) Suppose that the prior probability that the component is not functioning is � - 2/5. If the warning light does not go on, is the Bayes decision 10 launch the rocket or not to launch it? (b) For what values of the prior probability � is the Bayes decision to launch the rocket, even if the warning light goes on?
4. Suppose that 10 percent of all the workers in a particular type of factory have a certain lung disease. Suppose also that a diagnostic test is available to help determine whether a particular worker has the disease, and that the outcome of the test is a random variable X with the following distribution: If the worker has the disease, then X has a normal distribution with mean 50 and variance 1 . If the worker does not have the disease, then X has a norma1 distribution with mean 52 and variance 1 . As a result of the outcome X, a ' worker may be required to undergo a complete medical examination. Suppose that the loss from requiring such an examination when the worker does not have the disease is $100; that the loss from not requiring an examination when the worker has the disease is $2000; and that otherwise the loss is O. If the test is administered to a worker selected at random from a factory of this type, for what values of X is the Bayes decision to require a complete medical examination? 5. On any given day, a production system may operate at a low level WI' at a medium level w2' or at a high level w3 • The output of the system during the first hour of a given day is measured as a number X between 0 and 2; and on the basis of the observed value of X i t must be decided at which of the three levels. Wl• W2• or w3• the system is operating on that day. When the system is operating at a low level, the p.d.f. of X is
1 /(x l w, ) - 2
for 0
" x
� 2.
8.3. Multideclslon Problems
When the system is operating at a medium level, the p.d.f. for 0 .; x
.;
465
IS
2.
When the system is operating at a high Jevel, the p.d.f. is f { x I w, )
-
3 ' il x
for 0
:e;;
x � 2.
Suppose that i t is known that the system operates at a low level on 10 percent of the days, at a medium level on 70 percent of the days, and at a high level on 20 percent of the days. Suppose finally that the loss from an incorrect decision is 1 unit, and that the loss from a correct decision is O. (a) Determine a Bayes decision procedure as a function of X, and calculate its risk. (b) Compare this risk with the minimum risk that could be attained if a decision had to be made without the observation X being available. 6. Suppose that it is mown that the probability p of a head when a certain coin is tossed is either 0.3 or 0.4; and that an experimenter must decide which value of p is the correct one after observ.ing the outcome. head or tail, of just a single toss of the coin. Suppose also that the prior probabilities are as follows: Pr{ p
-
0.3) - 0.8
and
Pr{ p - 0.4) - 0.2.
Finally, suppose that the loss from an incorrect decision is 1 unit. and the loss from a correct decision is O. Show that observing the outcome of a single toss is of no value in this problem because the risk of the Bayes decision procedure based on the observation is just as large as the risk of making a .. Bayes decision without the observation. 7 . Suppose that the variables Xl> . . . , XII form a random sample from a normal distribution with mean 0 and variance 1 , Suppose that it is known that either 0 - 0 or 0 - 1, and that the prior probabilities are Pr(O - 0) - Pr(O - 1 ) 1/2. Suppose also that one of three decisions dl, dz, and d) must be chosen, and that the losses from these decisions are as follows: d, 8-0
o
1
5
8
5
1
o
-
1
T••Ung
466
Hypotheses
(a) Show that a Bayes decision procedure has the following form: Choose decision dl if 'in decision
d3
� C1;
choose decision d2 if c1
80,
( 9)
We shall now show that if the joint p.d.f. or the joint p.f. of the observations in a random sample has a monotone likelihood ratio in some statistic T, then there will exist UMP tests of the hypotheses (9). Furthermore (see Exercise 1 1 ), there will exist UMP tests of the hypotheses obtained by reversing the inequalities in both Ho and H, in (9).
I"(x 1 8)
Theorem 1. Suppose that fₙ(x | θ) has a monotone likelihood ratio in the statistic T = r(X), and let c be a constant such that

Pr( T ≥ c | θ = θ₀ ) = α₀.    (10)

Then the test procedure which rejects H₀ if T ≥ c is a UMP test of the hypotheses (9) at the level of significance α₀.
8,
Proof.
Let be a specific value of procedure 0, let
.,(Il)
=
Pr( Rejecting
Ho l O =
8
such that
80) - ,,(80 1 1l )
is
8, > 80,
a
UMP
test of the
Also, for any test
469
8.4. Uniformly Most Powerful Tests
and let
It follows from the Neyman-Pearson lemma that among all procedures for which
a( 6) '" "0' the value of P( 6 ) will be minimized by a procedure which I"( x 1 0, )II"(x 1 00 ) ;, k. The constant k is to be chosen so tbat
rejects
Ho
when
Pr( ReJecting
Ho 1 0 - 80) - ao.
However, it follows from the assumptions presented i.n the theorem that the ratio
/,,( x 1 01}/f.,( x 1 00} which rejects
is an increasing function of
when this ratio is at Icast
Ho
rex}. equal to k
Therefore, a procedure will be equivalent to a
procedure which rejects Ho when rex} is at least equal to some other number c. The value of c is to be chosen so that Pr(Rejecting flo 1 0 = 80} = ao or, in other
words, so that Eq. (10) is satisfied.
In summary. we have established the following result: If the constant c is chosen so that Eq. (10) is satisfied, then for any value of (JI > (Jo, the procedure
P(6) - 1 - '17(0, 1 6 ) a(6) - '17(00 1 6 ) '" "0' In other words, among all procedures for which 1T(80 I 0) � ('t o , the procedure 8 * maximizes the value of w(0 1 6) at every value of 8 > 80, We may, therefore, state that S " is a UMP test among all procedures for which ,"«(Jo 1 8 ) � ao. However, in order to complete the proof of the theorem we must establish the result that {) * is a UMP test. not only in the class 'C of all procedures for which '17(80 1 8) '" ao but in the class 'C' of all procedures for which ,"(8 1 8) � 0'0 for every value of 0 � 80.
6*
which rejects Ho when T ;, c will minimize the value of
among all procedures for which
To establish tlus result, it should be noted first that 0.1.
( 1 1)
We shall show first that there exist UMP tests of the hypotheses (1 1). We shall then determine the form of these tests, and sha1l discuss the different levels of significance that can be attained with nonrandomized tests. Let Xl' . . . ' X20 denote the 20 observations in the sample. Then Xl' . . . ' X20 form a random sample of size 20 from a Bernoulli distribution with parameter p� and it is known from Example 1 that the joint p.L of XI ' . . . ' X20 has a monotone li.kelihood ratio i n the statistic Y = L�� 1 Xi' Therefore, by Theorem 1, a test procedure which rejects Ho when Y ;. c will be a UMP test of the hypothe·
.
= (1 1).
For any specified choice of the constant c, the level of signi ficance ao of the U MP test will be ao = Pr( Y ;. c I P = 0.1). When P = 0.1. the random variable Y has a binomial distribution with parameters n. = 20 and p = 0.1. Since Y has a discrete distribution and can have only a finite number of different possibie values, it follows that there are only a finite number of different possible values for Qo. To illustrate this remark, it is found from a table of the binomial distribution that if c = 7, then a o = Pr( Y ;. 7 I P = 0.1) = 0.0024; and if c = 6, then 010 = Pr( Y ;. 6 1 p = 0.1) = 0.01 1 3 . Therefore, if an experimenter wishes to use a level of significance that is approximately 0.01, he could choose either c = 7 and «0 = 0.0024 or c = 6 and "0 = 0.0113. T f the experimenter wishes to use a test for which the level of significance is exactly 0.01, then he can use a randomized test procedure of the type described in Sec. 8.2. However, it seems more reasonable for the experimenter to use onc of the levels of significance that can be attained with a non.randomized UMP test than to employ randomization for the sake of attaining some specific level of significance such as 0,01. 0
Example
4:
Testing Hypotheses About the Mean of a Nonnal DistributiQn. Sup·
pose that Xl' . . . ' X" form a random sample from a normal distribution for which the value of the mean ,", is unknown and the value of the variance a 2 is known. Let ,", 0 be a specified number, and suppose that the following hypotheses are to be
8.4. Uniformly Most Powerful Tests
471
tested : Ho :
Il ': /Lo'
H, :
We shan show firs t that for any specified level of significance ao (0 < ao < 1), there is a UMP test of the hypotheses (12). We shall then determine the power function of the UM P test. It is known frum Example 2 that the joint p. d . f. of Xl " ' " XII has a m onotone likelihood ratio in the statistic XII' Therefore, by Theorem 1, a test procedure 8 1 which rejects Ho when Xn ;;;:::' c will be a UMP test of the hypotheses (12). The level of significance of this test will be "0 = Pr( X" ;. c I /L = /L o ) · Let Z denote a random variable having a standard normal distribution; and for any specified value of "0' let r., denote the number such that Pr(Z ;. r.. ) = "0 ' For example, if "0 0.05, then r. = 1.645. When /L = /Lo, the random variable Z n lll( XII 110)/0 will have a ;tandard normal distribution, and =
-
=
If the preceding probability is to be equal to nl/2( 1-'0)/0 = L, or, equivalently,
c c=
"0'
then it must be true that
-
1-' 0
+
(1 3 )
r.,on- I/2•
We shall now determine the power funclion "(1-' 1 8,) of this UMP lest. By definition, •
For any given value of p.. the random variable Z' = nl/l(X" ,.,.)/0 will have a standard normal distribution. Therefore, if 11> denotes the d.f. of the standard normal distribution, then =
(15)
The power function "(!l 1 8,) is sketched in Fig. 8.2.
0
.
---" ,
472
Testing Hypotheses
" ==============�t= o ==�L po--------------------� "
Figure 8.2
The power function π(μ | δ₁) for the UMP test of the hypotheses (12).
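As a computational illustration (not part of the text, and assuming SciPy is available), the constant c in Eq. (13) and the power function in Eq. (15) can be evaluated as follows.

```python
# Sketch: critical value and power function of the UMP test of the hypotheses (12).
from scipy.stats import norm
import numpy as np

def power(mu, mu0=0.0, sigma=1.0, n=25, alpha0=0.05):
    c = mu0 + norm.ppf(1 - alpha0) * sigma / np.sqrt(n)      # Eq. (13)
    return 1 - norm.cdf(np.sqrt(n) * (c - mu) / sigma)       # Eq. (15)

print(power(0.0))   # equals alpha0 = 0.05 at mu = mu_0
print(power(0.5))   # the power increases as mu moves above mu_0
```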
In each of the pairs of hypotheses (9), (11), and (12), the alternative hypothesis HI is called a one-sided allernative because the set of possible values of the parameter under HI lies entirely on one side of the set of possible values under the null hypothesis RD. In particular, for the hypotheses (9), (11), or (12), every possible value of the parameter under HI is larger than every possible value under Ho. Suppose now that instead of testing the hypotheses (12) in Example 4, we are interested in testing the following hypotheses:
Ho : HI :
JJ � 1-'-0' p. < li o '
(16)
HI is again a one-sided alternative, and it can be: 1 1 ) that there exists a UMP test of the hypotheses (16) at any shown (see Exercise specified level of significance ao (0 < ao < 1). By analogy with Eq. (13), the UMP test 02 will reject Ho when X" � c, where ]n this case the hypothesis
(17)
� == ------------�o-----l,po :=�==== ====------�� p Ci a
Figure 8.3
- -
The power function 1T(1l 1 82 ) for the UMP test of the hypotheses (16).
473
8.4. Uniformly Most Powerful Tests
The power function "(1' 1 6, ) of the test 6, will be (18) This function is sketched in Fig. 8.3.
Two-Sided Alternatives Suppose, finally. that instead of teSling either the hypotheses or the hypotheses Ho : H, :
(16). we are interested in
(12)
in Example
4.
testing the following hypotheses :
J.l. o . I' * 1'0 '
J.I. =
(19)
In this case. Ho is a simple hypothesis and Hl is a two-sided alternative. Since Ho is a simple hypothesis, the level of significance a o of any test procedure 8 will simply be equal to the value "(1' 0 1 6 ) of the power function at the point I'
� 1'0'
It follows from the discussion in this secti9n that there will not be a UMP test
of the hypotheses for any
(19) for any given level of significance a o (0 < ao < 1). In fact. particular value of I' such that I' > 1'0. the value of "(1' 1 6 ) will be
maximized by the test procedure 8 1 ; whereas for any particular value of J.1 such
that I' < 1'0' the value of "(1' 1 6 ) will be maximized by the test procedure 6,. In the next section, we shall discuss the selection of an appropriate test procedure in
this problem. i n which there is no UMP test.
EXERCISES
1. 2.
Suppose that
form a random sample ffOm a Poisson distribution
for which
XI ! " " X'I the mean A
is unknown
Xl" ' " X"
has a monotone likelihood ratio in the statistic
Suppose that
Xl
• . . . •
X"
(A > 0), Show
that the joint p.f. of
E7_ 1 Xi '
form a random sample from a normal distribution
for which the value of the mean p. is known and the value of the variance (J 2 is unknown ( J 2 > 0). Show that the joint p.d.f. of XI " ' " X" has a monotone
likelihood ratio in the statistic 3. Suppose that
Xl
• . . . •
Xn
E7_ , ( Xi - 1') '.
form a random sample from a gamma distribution
for which the value of the parameter a is unknown (a > 0) and the value of the parameter f3 is known . Show that the joint p.d.f. of Xt• . . . • X" has a monotone likelihood ratio in the statistic
4.
Suppose that
Xl
• . . . •
Xn
n �'- l Xi'
form a random sample from a gamma distribution
for which the value of the parameter
a
is known and the value of the
474
Testing Hypoth....
parameter.B is unknown ( {J > 0). Show that the joint p.d.L of Xl • . . . ' X. has a monotone likelihood ratio in the statistic - Xn. 5. Suppose that Xl" ' " Xn fonn a random sample from a distribution which belongs to an exponential family, as defined in Exercise 1 1 of Sec. 6.7; and that the p.d.L or the p.f. of this distribution is f(x I 0 ) as given in that exercise. Suppose also that c( 8 ) is a strictly increasing function of 8. Show that the joint p.d.f. or the joint p.L of Xl X. has a monotone likelihood ratio in the statistic E?_ I d( XJ. • . . . •
6. Suppose that X1, ..., Xn form a random sample from a distribution involving a parameter θ whose value is unknown, and suppose that it is desired to test the following hypotheses:

H0: θ ≤ θ0,
H1: θ > θ0.

Suppose also that the test procedure to be used ignores the observed values in the sample and, instead, depends only on an auxiliary randomization in which an unbalanced coin is tossed so that a head will be obtained with probability 0.05 and a tail will be obtained with probability 0.95. If a head is obtained, then H0 is rejected; and if a tail is obtained, then H0 is accepted. Describe the power function of this randomized test procedure.

7. Suppose that X1, ..., Xn form a random sample from a normal distribution for which the mean is 0 and the variance is unknown, and suppose that it is desired to test the following hypotheses:

Show that there exists a UMP test of these hypotheses at any given level of significance α0 (0 < α0 < 1).

8. Show that the UMP test in Exercise 7 rejects H0 when Σ_{i=1}^{n} Xi² ≥ c, and determine the value of c when n = 10 and α0 = 0.05.

9. Suppose that X1, ..., Xn form a random sample from a Bernoulli distribution for which the parameter p is unknown, and suppose that it is desired to test the following hypotheses:

Show that if the sample size is n = 20, then there exists a nonrandomized UMP test of these hypotheses at the level of significance α0 = 0.0577 and at the level of significance α0 = 0.0207.

10. Suppose that X1, ..., Xn form a random sample from a Poisson distribution for which the mean λ is unknown, and suppose that it is desired to test the following hypotheses:

H0: λ ≤ 1,
H1: λ > 1.

Show that if the sample size is n = 10, then there exists a nonrandomized UMP test of these hypotheses at the level of significance α0 = 0.0143.

11. Suppose that X1, ..., Xn form a random sample from a distribution that involves a parameter θ whose value is unknown, and that the joint p.d.f. or the joint p.f. fn(x | θ) has a monotone likelihood ratio in the statistic T = r(X). Let θ0 be a specified value of θ, and suppose that the following hypotheses are to be tested:

H0: θ ≥ θ0,
H1: θ < θ0.

Let c be a constant such that Pr(T ≤ c | θ = θ0) = α0. Show that the test procedure which rejects H0 if T ≤ c is a UMP test at the level of significance α0.
12. Suppose that four observations are taken at random from a normal distribution for which the mean μ is unknown and the variance is 1. Suppose also that the following hypotheses are to be tested:

H0: μ ≥ 10,
H1: μ < 10.

(a) Determine a UMP test at the level of significance α0 = 0.1.
(b) Determine the power of this test when μ = 9.
(c) Determine the probability of accepting H0 if μ = 11.

13. Suppose that X1, ..., Xn form a random sample from a Poisson distribution for which the mean λ is unknown, and suppose that it is desired to test the following hypotheses:

H0: λ ≥ 1,
H1: λ < 1.

Suppose also that the sample size is n = 10. At what levels of significance α0 in the interval 0 < α0 < 0.03 do there exist nonrandomized UMP tests?

14. Suppose that X1, ..., Xn form a random sample from an exponential distribution for which the value of the parameter β is unknown, and suppose that it is desired to test the following hypotheses:

H0: β ≥ 1/2,
H1: β < 1/2.
it seems desirable that the probability of rejecting H0 should be at least as large when θ ∈ Ω1 as it is whenever θ ∈ Ω0. It can be seen that the test δ for which the power function is sketched in Fig. 8.5 is an unbiased test of the hypotheses (4). Also, among the four tests for which the power functions are sketched in Fig. 8.4, only δ3 is an unbiased test of the hypotheses (1). The requirement that a test is to be unbiased can sometimes narrow the selection of a test procedure. However, unbiased procedures should be sought only under relatively special circumstances. For example, when testing the hypotheses (4), the statistician should use the unbiased test δ represented in Fig. 8.5 only under the following conditions: He believes that, for any value a > 0, it is just as important to reject H0 when θ = 10 + a as to reject H0 when θ = 10 − a, and he also believes that these two values of θ are equally likely. In practice, the statistician might very well forego the use of an unbiased test in order to use a biased test which has higher power in certain regions of Ω1 that he regards as particularly important or most likely to contain the true value of θ.
Equivalence of Confidence Sets and Tests

Suppose again that a random sample X1, ..., Xn is to be taken from a distribution for which the p.d.f. is f(x | θ), where the value of the parameter θ is unknown but
must lie in a specified parameter space Ω. The parameter θ may be either a real number or a vector. The concept of a confidence set for θ, which we shall now introduce, is a generalization of the concept of a confidence interval introduced in Sec. 7.5.

In an estimation problem, after we observe the values of X1, ..., Xn, we choose a single point in the parameter space Ω to be our estimate of θ. In the present discussion, instead of choosing just one point, we shall choose an entire subset of Ω in which we think θ is likely to lie. We shall let ω(x1, ..., xn) denote the subset of Ω that is chosen after the values x1, ..., xn have been observed. Before the values of X1, ..., Xn have been observed, we can consider ω(X1, ..., Xn) to be a random subset of Ω. For any given value θ0 ∈ Ω, we can calculate the probability that the subset ω(X1, ..., Xn) will contain the point θ0 when θ0 is the actual value of θ. Suppose that this probability has the same value for every point θ0 ∈ Ω; that is, suppose that there is a number γ (0 < γ < 1) such that, for every point θ0 ∈ Ω,

Pr[θ0 ∈ ω(X1, ..., Xn) | θ = θ0] = γ.    (11)

In this case, after the values x1, ..., xn in the sample have been observed, it is said that the particular subset ω(x1, ..., xn) determined by these values is a confidence set for θ with confidence coefficient γ.

We shall now indicate why the theory of confidence sets and the theory of testing hypotheses are essentially equivalent theories. For any given point θ0 ∈ Ω, consider testing the following hypotheses:
H0: θ = θ0,
H1: θ ≠ θ0.    (12)

Suppose that for every point θ0 ∈ Ω and every value of α (0 < α < 1), we can construct a test δ(θ0) of the hypotheses (12) for which the size is α; that is, we can construct a test δ(θ0) such that

Pr[Rejecting H0 when using the test δ(θ0) | θ = θ0] = α.    (13)

For each possible set of values x1, ..., xn that might be observed in the random sample, let ω(x1, ..., xn) denote the set of all points θ0 ∈ Ω for which the test δ(θ0) specifies accepting the hypothesis H0 when the observed values are x1, ..., xn. We shall now show that the set ω(x1, ..., xn) is a confidence set for θ with confidence coefficient 1 − α. We must show that if we let γ = 1 − α, then Eq. (11) is satisfied for every point θ0 ∈ Ω. By the definition of ω(x1, ..., xn), a given point θ0 will lie in the subset ω(x1, ..., xn) if and only if the observed values x1, ..., xn lead to acceptance of the hypothesis that θ = θ0 when the test δ(θ0) is used. Therefore,
by Eq. (13),

Pr[θ0 ∈ ω(X1, ..., Xn) | θ = θ0]
  = Pr[Accepting H0 when using the test δ(θ0) | θ = θ0] = 1 − α.    (14)

Conversely, suppose that ω(x1, ..., xn) is a confidence set for θ with confidence coefficient γ. Then for any given point θ0 ∈ Ω, it can be shown that the test procedure which accepts the hypothesis H0 if and only if θ0 ∈ ω(x1, ..., xn) will have size 1 − γ. We have now demonstrated that constructing a confidence set for θ with confidence coefficient γ is equivalent to constructing a family of tests of the hypotheses (12) such that there is one test for each value of θ0 ∈ Ω and each test has size 1 − γ.
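As a concrete illustration of this equivalence, the following sketch (a minimal example, not from the text; it assumes a normal distribution with known variance 1 and uses the symmetric size-α test of H0: μ = μ0) inverts the family of tests to obtain a confidence set for the mean: the set ω(x1, ..., xn) collects exactly those values μ0 that the test accepts.

```python
import numpy as np
from scipy.stats import norm

def accepts(xbar, n, mu0, alpha=0.05):
    """Size-alpha test of H0: mu = mu0 for N(mu, 1) data; True means H0 is accepted."""
    z = np.sqrt(n) * (xbar - mu0)
    return abs(z) <= norm.ppf(1 - alpha / 2)

def confidence_set(xbar, n, alpha=0.05, grid=np.linspace(-5, 5, 100001)):
    """Invert the family of tests: keep every mu0 whose test accepts H0."""
    kept = grid[[accepts(xbar, n, mu0, alpha) for mu0 in grid]]
    return kept.min(), kept.max()

x = np.array([0.3, -0.1, 0.8, 0.4, 0.2])      # hypothetical sample
lo, hi = confidence_set(x.mean(), len(x))
# The endpoints agree with the usual interval  xbar -/+ 1.96 / sqrt(n).
print(lo, hi, x.mean() - 1.96 / np.sqrt(len(x)), x.mean() + 1.96 / np.sqrt(len(x)))
```

The grid search is only for illustration; in this model the accepted values form the familiar interval, which is why inverting the tests reproduces the confidence interval of Sec. 7.5.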
EXERCISES
1. Suppose that X1, ..., Xn form a random sample from a normal distribution for which the mean μ is unknown and the variance is 1, and that it is desired to test the following hypotheses for a given number μ0:

H0: μ = μ0,
H1: μ ≠ μ0.

Consider a test procedure δ such that the hypothesis H0 is rejected if either X̄n ≤ c1 or X̄n ≥ c2, and let π(μ | δ) denote the power function of δ. Determine the values of the constants c1 and c2 such that π(μ0 | δ) = 0.10 and the function π(μ | δ) is symmetric with respect to the point μ = μ0.

2. Consider again the conditions of Exercise 1. Determine the values of the constants c1 and c2 such that π(μ0 | δ) = 0.10 and δ is unbiased.

3. Consider again the conditions of Exercise 1, and suppose that c1 = μ0 − 1.96 n^(−1/2). Determine the value of c2 such that π(μ0 | δ) = 0.10.

4. Consider again the conditions of Exercise 1 and also the test procedure described in that exercise. Determine the smallest value of n for which π(μ0 | δ) = 0.10 and π(μ0 + 1 | δ) = π(μ0 − 1 | δ) ≥ 0.95.

5. Suppose that X1, ..., Xn form a random sample from a normal distribution for which the mean μ is unknown and the variance is 1, and that it is desired to test the following hypotheses:

H0: 0.1 ≤ μ ≤ 0.2,
H1: μ < 0.1 or μ > 0.2.

Consider a test procedure δ such that the hypothesis H0 is rejected if either X̄n ≤ c1 or X̄n ≥ c2, and let π(μ | δ) denote the power function of δ. Suppose that the sample size is n = 25. Determine the values of the constants c1 and c2 such that π(0.1 | δ) = π(0.2 | δ) = 0.07.

6. Consider again the conditions of Exercise 5, and suppose also that n = 25. Determine the values of the constants c1 and c2 such that π(0.1 | δ) = 0.02 and π(0.2 | δ) = 0.05.
7. Suppose that X1, ..., Xn form a random sample from a uniform distribution on the interval (0, θ), where the value of θ is unknown, and that it is desired to test the following hypotheses:

H0: θ ≤ 3,
H1: θ > 3.

(a) Show that for any given level of significance α0 (0 ≤ α0 < 1), there exists a UMP test which specifies that H0 should be rejected if max(X1, ..., Xn) ≥ c.
(b) Determine the value of c for each possible value of α0.

8. For a given sample size n and a given value of α0, sketch the power function of the UMP test found in Exercise 7.

9. Suppose that X1, ..., Xn form a random sample from the uniform distribution described in Exercise 7, but suppose now that it is desired to test the following hypotheses:

H0: θ ≥ 3,
H1: θ < 3.

(a) Show that at any given level of significance α0 (0 < α0 < 1), there exists a UMP test which specifies that H0 should be rejected if max(X1, ..., Xn) ≤ c.
(b) Determine the value of c for each possible value of α0.

10. For a given sample size n and a given value of α0, sketch the power function of the UMP test found in Exercise 9.

11. Suppose that X1, ..., Xn form a random sample from the uniform distribution described in Exercise 7, but suppose now that it is desired to test the following hypotheses:

H0: θ = 3,
H1: θ ≠ 3.

Consider a test procedure δ such that the hypothesis H0 is rejected if either max(X1, ..., Xn) ≤ c1 or max(X1, ..., Xn) ≥ c2, and let π(θ | δ) denote the power function of δ. Determine the values of the constants c1 and c2 such that π(3 | δ) = 0.05 and δ is unbiased.
8.6. THE t TEST

Testing Hypotheses About the Mean of a Normal Distribution When the Variance Is Unknown
In this section we shall consider the problem of testing hypotheses about the mean of a normal distribution when both the mean and the variance are unknown. Specifically, we shall suppose that the variables X1, ..., Xn form a random sample from a normal distribution for which the mean μ and the variance σ² are unknown, and we shall consider testing the following hypotheses:

H0: μ ≤ μ0,
H1: μ > μ0.    (1)
The parameter space Ω in this problem comprises every two-dimensional vector (μ, σ²), where −∞ < μ < ∞ and σ² > 0. The null hypothesis H0 specifies that the vector (μ, σ²) lies in the subset Ω0 of Ω comprising all vectors for which μ ≤ μ0 and σ² > 0, as illustrated in Fig. 8.6. The alternative hypothesis H1 specifies that (μ, σ²) belongs to the subset Ω1 of Ω comprising all the vectors that do not belong to Ω0. Suppose that it is desired to test the hypotheses (1) at a given level of significance α0.

It follows from Eq. (8) that r(x) ≥ k if and only if the relation (9) is satisfied,
where k′ is another constant whose value can be derived from k. Finally, for x̄n > μ0, the relation (9) is equivalent to the relation

n^(1/2) (x̄n − μ0) / [ Σ_{i=1}^{n} (xi − x̄n)² / (n − 1) ]^(1/2) ≥ c,    (10)

where c = [(n − 1)k′]^(1/2). Similarly, if the observed sample values are such that x̄n < μ0 and if k < 1, then it can again be shown that r(x) ≥ k if and only if the relation (10) is satisfied, where c is a constant whose value can be derived from the value of k.

As in Sec. 7.4, we shall let Sn² = Σ_{i=1}^{n} (Xi − X̄n)², and we shall let U denote the following statistic:

U = n^(1/2) (X̄n − μ0) / [ Sn² / (n − 1) ]^(1/2).    (11)
The likelihood ratio test procedure which we have just derived specifies that the hypothesis H0 should be rejected if U ≥ c and that H0 should be accepted if U < c. In this problem, this procedure is called a t test, for reasons that will now be explained.
Properties of the t Test

When μ = μ0, it follows from Sec. 7.4 that the distribution of the statistic U defined by Eq. (11) will be a t distribution with n − 1 degrees of freedom, regardless of the value of σ². Therefore, when μ = μ0, it is possible to use a table of the t distribution to choose a constant c such that Pr(U ≥ c) = α0, regardless of the value of σ².

Now suppose that μ < μ0, and let

U* = n^(1/2) (X̄n − μ) / [ Sn² / (n − 1) ]^(1/2)   and   W = n^(1/2) (μ0 − μ) / [ Sn² / (n − 1) ]^(1/2).

Then the random variable U* will have a t distribution with n − 1 degrees of freedom, and the value of the random variable W will be positive. Furthermore, it can be seen from Eq. (11) that U = U* − W. Therefore, for any given values of μ and σ² such that μ < μ0,

Pr(U ≥ c) = Pr(U* − W ≥ c) = Pr(U* ≥ c + W) < Pr(U* ≥ c) = α0.

Thus, when μ < μ0, it is seen that Pr(U ≥ c) < α0, regardless of the value of σ². It follows that the size of this test, which rejects H0 when U ≥ c, is α0, and the power function of the test satisfies the relation (2). Also, an argument similar to the one just given shows that when μ > μ0, then Pr(U ≥ c) > α0, regardless of the value of σ². Finally, when μ is very large, the numerator of U will tend to be very large, and the probability of rejecting H0 will be close to 1. Formally, it can be shown that for any value of σ² > 0,

lim (μ → ∞) π(μ, σ² | δ) = 1.

Similarly, it can be shown that for any value of σ² > 0,

lim (μ → −∞) π(μ, σ² | δ) = 0.
Example 1: Lengths of Fibers. Suppose that the lengths, in millimeters, of metal fibers produced by a certain process have a normal distribution for which both the mean μ and the variance σ² are unknown, and that the following hypotheses are to be tested:

H0: μ ≤ 5.2,
H1: μ > 5.2.    (12)

Suppose that the lengths of 15 fibers selected at random are measured, and it is found that the sample mean X̄15 is 5.4 and S15² = Σ_{i=1}^{15} (Xi − X̄15)² = 2.5. Based on these measurements, we shall carry out a t test at the level of significance α0 = 0.05.

Since n = 15 and μ0 = 5.2, the statistic U defined by Eq. (11) will have a t distribution with 14 degrees of freedom when μ = 5.2. It is found in the table of the t distribution that Pr(U ≥ 1.761) = 0.05. Hence, the null hypothesis H0 should be rejected if U ≥ 1.761. Since the numerical value of U calculated from Eq. (11) is 1.833, H0 should be rejected. □
Testing with a Two-Sided Alternative

We shall continue to assume that the variables X1, ..., Xn form a random sample from a normal distribution for which both the mean μ and the variance σ² are unknown, but we shall suppose now that the following hypotheses are to be tested:

H0: μ = μ0,
H1: μ ≠ μ0.    (13)

Here, the alternative hypothesis H1 is two-sided. By analogy with the t test which has been derived for a one-sided alternative, the standard procedure for testing the hypotheses (13) is to reject H0 if either U ≤ c1 or U ≥ c2, where c1 and c2 are appropriately chosen constants. For any given values of μ and σ² that satisfy the null hypothesis H0, that is, for μ = μ0 and σ² > 0, the statistic U has a t distribution with n − 1 degrees of freedom. Therefore, by using a table of this distribution, it is possible to choose c1 < 0 and c2 > 0 such that, when H0 is true,

Pr(U ≤ c1) + Pr(U ≥ c2) = α0.    (14)

As discussed in Sec. 8.5, there will be many pairs of values of c1 and c2 that satisfy Eq. (14). In most experiments, it is convenient to choose c1 and c2 symmetrically with respect to 0. For this choice, c1 = −c and c2 = c; and when H0 is true,

Pr(U < −c) = Pr(U > c) = (1/2) α0.    (15)
For this symmetric choice, the t test will be an unbiased test of the hypotheses (13).
Example 2: Lengths of Fibers. We shall consider again the problem discussed in Example 1, but we shall suppose now that, instead of the hypotheses (12), the following hypotheses are to be tested:

H0: μ = 5.2,
H1: μ ≠ 5.2.    (16)

We shall again assume that the lengths of 15 fibers are measured, and that the value of U calculated from the observed values is 1.833. We shall test the hypotheses (16) at the level of significance α0 = 0.05 by using a symmetric t test of the type specified in Eq. (15).

Since α0 = 0.05, each tail of the critical region is to have probability 0.025. Therefore, by using the column for p = 0.975 in the table of the t distribution with 14 degrees of freedom, we find that the t test specifies rejecting H0 if either U < −2.145 or U > 2.145. Since U = 1.833, the hypothesis H0 should be accepted. □
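For comparison with the hand computation in Example 2, here is a brief sketch (an illustration only, assuming SciPy is available) of the symmetric two-sided decision for the same value U = 1.833.

```python
from scipy.stats import t

u, df, alpha0 = 1.833, 14, 0.05
c = t.ppf(1 - alpha0 / 2, df)     # symmetric critical value, about 2.145

print("reject H0: mu = 5.2" if abs(u) >= c else "accept H0: mu = 5.2")   # accept H0
```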
The numerical values in Examples 1 and 2 emphasize the importance of deciding whether the appropriate alternative hypothesis in a given problem is one-sided or two-sided. When the hypotheses (12) were tested at the level of significance 0.05, the hypothesis H0 that μ ≤ 5.2 was rejected. When the hypotheses (16) were tested at the same level of significance and the same data were used, the hypothesis H0 that μ = 5.2 was accepted.
Confidence Intervals from the t Test

We can obtain a confidence interval for μ from the t test that we have developed for the hypotheses (13) by using the method described at the end of Sec. 8.5. The confidence interval for μ that would be obtained by this method is the same as the confidence interval for μ that was given in Sec. 7.5. Hence, we shall not consider this topic further at this time.
EXERCISES

1. Suppose that 9 observations are selected at random from a normal distribution for which both the mean μ and the variance σ² are unknown, and that for these 9 observations it is found that X̄9 = 22 and S9² = 72.
5. Suppose that the variables Xl" ' " Xn form a random sample from a normal distribution for which both the mean JL and the variance 0 2 are unknown; and that a I test at a given level of significance ao is to be carried out to test the following hypotheses:
Ho: IL ';; 1-' 0 ' HI : Jl > p.o· Let
w(J.1.. ( 2 1 8 ) denote the power function �f this t test, and assume that
8.6. The t Test
493
(1-'1' (J;) and (P, 2 ' of) are values of the parameters such that
Jl. l
1-' 0
-
P2
-
Il o
O2 Show that 11( 1'1, 0,' 1 8) = 11( 1',. O2' 1 8). 6. Consider a normal distribution for which both the mean 11 and the variance 0" 2 are unknown. and suppose that it is desired to test the following hy potheses: Ho : IJ. � Il o . HI : 1' > 1'0' Suppose that it is possible to observe onJy a single value of X from this distribution, hut that an independent random sample of n observations Y1 , Y" is available from another normal distribution for which the vari ance is also (J 2 and it is known that the mean is O. Show how to carry out a test of the hypotheses Ho and HI based on the I distribution with n degrees of freedom. 7. Suppose that the variables Xl " , Xn form a random sample from a normal distribution for which both the mean fL and the variance 0 2 are unknown. Let 05 be a given positive number, and suppose that it is desired to test the following hypotheses at a specified leveJ. of significance "0 (0 < "0 < 1): • • • .
.
.
Let Sn2 = L:7_1( Xj Xn) 2 , and suppose that the test procedure to be used specifies that Ho should be rejected if Sn'/o� � c. Also, let 11(1', 0 ' 1 8) denote the power function of this procedure. Explain how to choose the constant c so that. regardless of the value of fL. the following requirements are satisfied: 17(p" 0 2 1 S) < It'o if 0 2 < 06; 1T(p, (J 2 1 8) = It'o if 0 2 = 06; and 11(1', o' I �) > "0 if 0 ' > o�. 8. Suppose that a random samp]e of 10 observations Xl' .. XIO is taken from a normal distribution for which both the mean fL and the variance 0 2 are unknown; and that it is desired to test the following hypotheses: -
. . .
•
Ho : HI :
0 2 � 4, 0 2 > 4.
Suppose that a test of the form described in Exercise 7 is to be carried out at the level of significance "0 0.05. If the observed value of S; is 60, should the hypothesis Ho be accepted or rejected? 9. Suppose again, as in Exercise 8, that a random sample of 10 observations is taken from a normal distribution for which both the mean p and the variance
=
494
Testing Hypothoses
0 2 are unknown; but suppose now that the following hypotheses are to be tested at the level of significance 0.05:
Suppose that the null hypothesis Ho is to be rejected if either S1/2 � c} or S,.2 � '2. where the constants c1 and c2 are to be chosen so that, when the hypothesis Ho is true,
Pr( S'; ..; c, ) = Pr( S,
'
;.
Detennine the values of 8.7.
c, ) c1
= 0.025.
and
c2•
DISCUSSION OF THE METHODOLOGY OF TESTING HYPOTHESES
In many respects, the theory of testing hypotheses, as it has developed in statistical methodology, is misleading. According to this theory. in a problem of testing hypotheses only two decisions are available to the experimenter. He must ' either accept the null hypothesis Ho or reject Ho- It is true that there are problems of this type in statistical practice, and the theory of testing hypotheses can ' properly and usefully be applied to such problems. It is also true, however, that the methodology or testing hypotheses is applied to many situations in whlch the experimenter is mainly interested in determining the likelihood that the hypothe sis H0 is true and in which he does not necessarily have to choose one of two decisions. In this section we shall discuss the methodology of testing hypotheses as it is commonly practiced in various fields of application of statistics, in contrast to the theory of testing hypotheses which is presented elsewhere in this chapter.
Tall Areas In order to facilitate the discussion in this section, we shall again consider Example 1 of Sec. 8.6, in which the hypotheses (12) are to be tested by using an appropriate one-sided t test based on the statistic U. We shall assume that the sample data are as given in the example. In this example, the experimenter will typically realize that simply reporting whether the hypothesis Ho was accepted or rejected by the t test, carried out at the level of significance 0.05, does not convey all the information contained in the sample data in regard to the likelihood that Ho is true. If he reports only that Ho was rejected, then he is simply reporting that the observed value of U exceeded
8.7. Discussjon of the Methodology of Testing Hypotheses
495
the critical value 1.761. The resull of the test would be more useful if he reports whether this observed value was only slightly larger than 1 .761 or was very much larger than 1.761. Furthermore, the decision to accept Ho or to reject Ho in a given problem obviously depends on the level of significance 0:0 that is used in the problem. in most applications ao is chosen to be either 0.05 or 0.01, but there is no strong reaSon other than tradition for using one of these particular values. In our example, it was assumed that ao = 0.05 and Ho was rejected. If the value ao = 0.01 had been used, instead of 0.05, then IIo would have been accepted. For these reasons, an experimenter docs not typically choose a value of ao in advance of the experiment and then simply report whether Ho was accepted or rejected on the basis of the observed value of U. In many fields of application it has become standard practice to report the observed value of U and all the values of ao for which this observed value of U would lead to the rejection of Ho. Thus, if the observed value of U is 1.833, as in our example, it is found from the table of the I distribution given at the end of this book that the hypothesis Ho would be rejected for any level of significance ao � 0.05 and that Ho would not be rejected for any level of significance a o � 0.025. From a more complete table of the I distribution than the one given in this book, it can be found that the probability lying to the right of the value 1.833 in the tail of the I distribution with 14 degrees of freedom is 0.044. In other words, if Z denotes a random variable that has a I distribution with 14 degrees of freedom, then pc( Z ;;, 1.833) � 0.044. The value 0.044 is called the rail area or the p-ualue corresponding to the observed value of the stat.istic U. Thus, if the observed value of U is 1 .833, the hypothesis Ho should be rejected for any value of ao > 0.044 and should be accepted for any value of ao < 0.044. An experimenter, in his analysis of this experiment, would typically report that the observed value of U was 1.833 and the corresponding tail area is 0.044. It is then said that the observed value of U is just significant at the level of significance 0.044. One advantage to the experimenter, when he reports his experimental results in this manner, is that he does not have to select beforehand an arbitrary level of significance ao at which the I test is to be carried out. Also, when a reader of the experimenter's report learns that the observed value of U was just significant at the level of significance 0.044, he immediately knows tbat Ho should be rejected for any larger value of ao and should not be rejected for any smaller value. Tall Areas lor a Two-Sided Alternative Hypothesis N ow consider Example 2 of Sec. 8.6, in which the hypotheses (16) are to be tested
by using a symmetric two-sided I test based on the statistic U. In this example, the hypothesis Ho is rejected if either U � - c or U � c. If the test is to be
496
Testing Hypotheses
carried out at the level of significance 0.05, the value of c is chosen to be 2.145, because Pr(U ,,; - 2.145) + Pr(U ;;. 2.145) - 0.05. Because the appropriate test procedure in this example is a two-sided t test, the appropriate tail area corresponding to the observed value U 1.833 will be the sum of the following two probabilities: (i) the probability lying to the right of 1.833 in the right-hand tail of the I distribution and (ii) the probability lying to the left of 1.833 in the left-hand tail of tltis distribution. Because of the symmetry of the t distribution, these two probabilities are equal. Therefore, the tail area corresponding to the observed value U - 1.833 is 2(0.044) - 0.088. In ' other words, the observed value of U is just significant at the level of significance 0.088. The hypothesis Ho : 11 5.2 should be rejected for any level of significance ao > 0.088 and should be accepted for any level of significance a o < 0.088. It should be emphasized that when the null hypothesis Ho is accepted at a specified level of significance, it does not necessarily mean that the experimenter has become convinced that Ho is true. Rather. it usually means merely that the data do not provide strong evidence that Ho is not truc. -
-
-
Statistically Slgnlllcant Results We shall continue to consider Example 2 of Sec. 8.6, in wltich the hypotheses (16) are to be tested. As the value of the tail area corresponding to the observed value of U decreases. we regard the weight of evidence against the hypothesis Ho as becoming greater. Thus, if we had found that the observed value of U was just significant at the level of significance 0.00088, rather than at the level 0.088, tbert we would have regarded the sample as providing much stronger evidence against Ho . It is often said that an observed value of U is statistically significant iJ the corresponding tai] area is smaUer than the traditional value 0.05 or 0.01. . Although an experimenter does not know with certainty whether or not the hypothesis Ho in a given problem is true, he would conclude that a statistically significant observed value of U provides, at the very least, strong evidence against Ho. lt is extremely important for the experimenter to distinguish between an observed va1ue of U that is statistically significant and an actual value of the parameter 11 that is significantly different from the value 11 - 5.2 specified by the null hypothesis Ho. Although a statistically significant observed value of U provides strong evidence that Il is not equal to 5.2, it does not necessarily provide strong evidence that the actual value of Il is significantly different from 5.2. In a given problem, the tail area corresponding to the observed value of U might be very small; and yet the actual value of Il might be so close to 5.2 that. for practical purposes, the experimenter would not regard Il as being significantly different from 5.2. The situation just described can arise when the statistic U is based on a very large random sample. Suppose, for instance, that in Example 2 of Sec. 8.6 the
8.7. Discussion of the Methodology 01 Testing Hypotheses
497
lengths of 20,000 fibers in a random sample are measured, rather than the lengths of only 15 fibers. For a given level of significance, say ao = 0.05, let '11(p., 02 1 8) denote the power function of the t test based on these 20,000 observations. Then ,,(5.2, " ' , S) � 0.05 for every value of ,, 2 > O. However, because of the very large number of observations on which the test is based, the power '11 (p., 0 2 1 �) wiJl be very close to 1 for any value of p. that differs only slightly from 5.2 and for a moderate value of 0 2. In other words, even if the value of p. differs only slightly from 5.2, the probability is close to 1 that the observed value of U will be statistically significant. As explained in Sec. 8.5, it is inconceivable that the mean length f.L of all the fibers in the entire population will be exactly 5.2. However, p. may be very close to 5.2; and when it is, the experimenter will want to accept the null hypothesis Bo. Nevertheless, it is very likely that the I test based on the sample of 20,000 fibers will lead to a statistically significant value of U. Therefore, when an experimenter analyzes a powerful test based on a very large sample, he must exercise caution in interpreting the actual significance of a "statistically signifi cant" result. He knows in advance that there is a high probability of rejecting Ho even when the true value of p. differs only slightly from the value 5.2 specified under Ho. One way to handle this situation, as discussed in Sec. 8.2, is to recognize that a level of significance much smaller than the traditional value of 0.05 or 0.01 is appropriate for a problem with a large sample size. Another way is to regard the statistical problem as one of estimation rather than one of testing hypotheses. When a large random sample is available, the sample mean and the sample variance will be excellent estimators of the parameters p. and 0 2, Before the experimenter chooses any decision involving the unknown values of p. and 0 2, he should calculate and consider the values of these estimators as well as the value of the statistic U.
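The tail areas quoted above, and the large-sample effect just described, are easy to verify numerically. The following sketch (an illustration only, assuming SciPy is available; the value σ = 0.42 used in the power calculation is an arbitrary assumption, not a figure from the text) computes the one-sided and two-sided tail areas for U = 1.833 with 14 degrees of freedom, and then shows how the power of the one-sided size-0.05 t test grows with the sample size when μ is only slightly larger than 5.2.

```python
from scipy.stats import t, nct
import math

# Tail areas for the observed statistic in Examples 1 and 2 of Sec. 8.6.
u, df = 1.833, 14
print(f"one-sided tail area: {t.sf(u, df):.3f}")        # about 0.044
print(f"two-sided tail area: {2 * t.sf(u, df):.3f}")    # about 0.088

# Power of the one-sided test when mu is barely above 5.2 (sigma is assumed).
mu0, mu, sigma = 5.2, 5.21, 0.42
for n in [15, 500, 20000]:
    c = t.ppf(0.95, n - 1)                               # critical value at alpha0 = 0.05
    noncentrality = math.sqrt(n) * (mu - mu0) / sigma
    print(f"n = {n:6d}   power = {nct.sf(c, n - 1, noncentrality):.3f}")
```

Even though μ = 5.21 differs from 5.2 by an amount that may be of no practical importance, the power climbs toward 1 as n becomes very large, which is the caution raised in the preceding paragraph.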
The Bayesian Approach

When an experimenter or a statistician is testing hypotheses, he is most interested in the use of the sample data to determine the probability that H0 is true. It must be emphasized that the methodology of testing hypotheses which has been discussed in this section does not, and cannot, deal directly with this probability. The tail area or p-value that is calculated from the observed sample provides no indication, by itself, of the probability that H0 is true, although the p-value is sometimes misinterpreted in this way. In fact, it is sometimes incorrectly stated by experimenters that the rejection of H0 at some specified level of significance α0 indicates that the probability that H0 is true is less than α0. No such interpretation can logically be made.

In order to be able to determine the probability that H0 is true, the experimenter must adopt a Bayesian approach. If a prior distribution is assigned to the parameter θ being
tested, it is possible to calculate the posterior distribution of θ, given the sample data, and Pr(H0 is true) = Pr(θ ∈ Ω0) can be determined from this posterior distribution. Although the discussion on the methodology of testing hypotheses has been presented here in the context of the t test, it should be emphasized that this discussion pertains quite generally to all problems of testing hypotheses.
EXERCISES 1 . Suppose that a random sample Xl" ' " Xn is to be taken from a normal distribution for which both the mean p. and the variance 0 2 are unknown; and that the following hypotheses are to be tested:
Ho: I' '" 3 , Ht : 1' > 3 . Suppose also that the sample size n is 17, and it is found from the observed values in the sample that X" � 3.2 and (1/n)I:7_,(Xi - X,Y - 0.09. Calcu late the value of the statistic V, and find the value of the corresponding tail area. 2. Consider again the conditions of Exercise I , but suppose now that the sample : size n is 170 and that it is again found from the observed values in the sample that X" 3.2 and (1/n)I:7_ ,( Xi - X,, )' 0.09. Calculate the value of the . statistic V, and find the value of the corresponding tail area. �
�
3. Consider again the conditions of Exercise 1, but suppose now that the following hypotheses are to be teSled: Ho:
H, :
I' � 3 . 1 , 1' '' 3.1 .
Suppose, as in Exercise 1, that the sample size 11 is 17 , and it is found from the observed values in the sample that X" � 3.2 and (l/n)I:7_ 1 ( Xi - X,,)' � 0.09. Calculate the value of lhe statistic U, and find the value of the corresponding tail area.
4. Consider again the conditions of Exercise 3, but suppose now that the sample size n is 170 and that it is again found from the observed va]ues in the sample that X" � 3.2 and (l/n) I:;'_ , ( X, - X,, )' 0.09. Calculate the value of the statistic U, and find the value of the corresponding tail area. 5. Consider again the conditions of Exercise 3. Suppose, as in Exercise 3, that the sample size n is 17, but suppose now that it is found from the observed values in the sample that X" - 3.0 and (1/n)I:7_1( Xi - X,, )' � 0.09. Calcu late the value of the statistic U, and find the value of the corresponding tail �
area.
B.B. The F Distribution
499
6. Suppose that a single observation X is taken from a normal distribution for which the mean J.t is unknown and the variance is
1.
Suppose that it is known
that the value of I' must be - 5 , 0, or 5 ; and that it is desired to test the following hypotheses at the level of significance 0.05: I' � 0 I' � - 5 0r l' � 5 .
Ho : HI :
Suppose also that the test procedure to be used specifies rejecting I XI >
c,
where the constant
c
Ho
when
is chosen so that Pr(1 XI > c I I' � 0) � 0.05.
(a) Find the value of c; and show that if X � 2, then Ho will be rejected.
(b) Show that if X � 2, then the value of the likelihood function at fl � 0 is 12.2 times as large as its value at IJ. = 5 and is 5.9 X 10 9 times as large as
7.
its value at IJ. = - 5 .
Suppose that a random sample of 10,000 observations is taken from a normal distribution for which the mean Ii is unknown and the variance i s 1 ; and that it is desired to test the following hypotheses at the level of significance 0.05:
Ho: HI :
I' � 0 ,
1' ''' O.
Suppose also that the test procedure specifies rejecting where the constant
c
is chosen so that Pr(j X" 1 >
probabili ty that the test will reject
Ho
c
I I'
Ho
when I X" I > c,
� 0) � 0.05. Find the
if (a) the actual value of I' is 0.01 and
(b) the actual value of I' is 0.02.
S. Consider again the conditions of Exercise 7, but suppose now that it is desired to test the following hypotheses:
Ho : HI :
Il � 0 , I' > O . •
Suppose also that in the random sample of 10,000 observations, the sample mean
8.8.
X"
THE
is 0.03. At what level of significance is this result just significant?
F DISTRIBUTION
Definition of the F Distribution I n this section we shall introduce a probability distribution, called the F distribu tion, that arises in many important problems of testing hypotheses in which two or more normal distributions are to be compared on the basis of random samples
.� ,
Testing Hypotheses
500
!
For any test procedure δ, we shall let π(μ1, μ2, σ1², σ2² | δ) denote the power function of δ. We must find a procedure δ such that π(μ1, μ2, σ1², σ2² | δ) ≤ α0 for σ1² ≤ σ2², and such that π(μ1, μ2, σ1², σ2² | δ) is as large as possible for σ1² > σ2². There is no UMP test of the hypotheses (8), but it is common practice to use one particular procedure, called the F test. The F test, which we shall now derive, has the specified level of significance α0 and also has the following five properties:
.
.
(i) (ii) (iii) (iv) (v)
?T(I',,1' 2 , oi, 0,' I S) w(JJ,, /A 2 , a,2, al l o) < w(Il1' /A 2 , al, al l 0) > "(1' ,, 1' 2 ' oi, 0,' I S) "().I" 1' " 0,', 0,' I S)
-->
-->
when of Qo when a,2 < Qo when al > 0 as of/o,' 1 as oi/o,'
ao
-->
-->
0,', al, ai, 0, 00 .
It follows immediately from properties (i), (ii), and (iii) that the F test is unbiased. Furthermore, it can be shown that the F test is actua1ly a UMP test within the class of all unbiased tests. The proof is beyond the scope of this book [see Lehmann (1959)]. Derivation 01 the
F Test
After the values X l • . . . ' xm and Yl " ' " y" in the two samples have been observed, the likelihood funclion g(x, y I I'" 1' 2 ' oi, oil is (9)
B.B. The F Olstribution
503
Here, both I ( x 1 1'1' al) and In( Y 1 1' 2 ' ai) have the general form given in Eq. (3) of Sec. 8.6. The likelihood ratio test procedure is based on a comparison of the following two values: the maximum value attained by the likelihood function (9) when the point ( P l' IL 2 . 012, ai) varies over the subset 00 specified by Ho, and the maximum value attained by the likelihood function when (1-'1' 1l 2 ' 012, Of) varies over the subset 01 specified by HI' For the hypotheses (8), no contains every point ( IL l , "'2 ' 0 12 , of) such that of � oi. and 01 contains every point such that of > or Therefore, we shall consider the following ratio: ..
(10) The procedure then specifies that Ho should be rejected if r(x. y) � k. where k is some chosen constant, and that Ho should be accepted if rex, y) < k. It can be shown by methods similar (0 those given in Sec. 8.6 that r(x, y) � k if and only if
l:7�I (Xi
l:;'_ I(Yi
-
-"no )'/( m 1) � c, 1 n y,, )'/( ) -
(11)
-
where c i s another constant whose value can be derived from this derivation will not be given here. Now we shall let n
and
S�
-
L ( r;
-
y..)'.
k.
The details of
(12)
Also, in accordance with the relation (11), we shall let the statistic V be defined . by the foUowing relation: v-
S;/(m S}/( n
-
1) 1)
.
•
(13)
The likelihood ratio test procedure which we have just described specifies that the hypothesis Ho should be rejected if V ;. c and that Ho should be accepted if V < c. In this problem. this procedure is called an F test, for reasons that will now be explained.
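Before turning to those properties, a short numerical sketch may be helpful (an illustration only, not an example from the text; the two samples are made up and SciPy is assumed to be available). It computes the statistic V of Eq. (13) from two samples and compares it with the critical value of the F distribution with m − 1 and n − 1 degrees of freedom.

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=12)    # hypothetical first sample, m = 12
y = rng.normal(loc=0.0, scale=1.0, size=10)    # hypothetical second sample, n = 10

m, n = len(x), len(y)
sx2 = np.sum((x - x.mean()) ** 2)              # S_X^2
sy2 = np.sum((y - y.mean()) ** 2)              # S_Y^2

V = (sx2 / (m - 1)) / (sy2 / (n - 1))          # the statistic of Eq. (13)
c = f.ppf(0.95, m - 1, n - 1)                  # critical value for alpha0 = 0.05

print(f"V = {V:.3f}, critical value = {c:.3f}")
print("reject H0" if V >= c else "accept H0")
```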
PropertIes 01 the
F Test
We know from Sec. 7.3 that the random variable s;/ol has a X 2 distribution with m 1 degrees of freedom and the random variable sf/ol has a X 2 di.stribution wilh n 1 degrees of freedom. Furthermore, these two random -
-
504
T••Ung Hypotheses
variables will b e independent, sinee they are calculated from two different samples. Therefore, the following random variable V · will have an F distribution with
m-1 =
v*
and
-1
n
degrees of freedom:
m:...----, Si/ CJ[ (-,-" -. l") :. " l+.; l -,,; � - 1 SU[( n ) ",>]
It can be seen from Eqs.
( 14) (13) and (14) that V =
(",'/"iJV*. If " ,' = ",', then
V = V * . Therefore. when 012 = Of the statistic V will have an F distribution with 1 and 1 degrees of freedom. In this case it is possible to use a table of
n-
m-
the F distribution to choose a constant c; such that Pr( V ;;:?; c) = ao> regardless of the corrunon value of (112 and ol and regardless of the values of Jl l and 1l2 ' Now suppose that
pr( v ;, c) = pr
Thus. when
l
o
(:�
r
I-' Z '
(1 )
For any test procedure 8. we shall let 1f(1-' 1 . f1 2 . 0 2 1 8) denote the power funclion of 8. We must find a procedure 8 such that 7T(JA.l. J1. 2 . o Z 1 8) .:s;; "0 for 1J. l .:s;; JA. z and such that 1T(IL1' J1.z. (1z 1 8 ) is as large as possible for 1-' 1 > I-'z. There is no UMP test of the hypotheses (I), but it is common practice to use a certain t test. This I test, which we shall now derive, has the specified level of significance dO and also has the following five properties: (i) (ii) (iii) (iv) (v)
"(1'1, 1'2' 0 2 1 6 ) � ao when 1'1 � 1'2' 1f(IlI' 1-' 2. O" Z I 8 ) < "0 when IL l < 1-'2' "( 1',, 1' 2 , 0 2 1 6) > ao when IL, > 1' 2 ' " ( 1'1 , 1' 2 , 0 2 1 6) � 0 as 1'1 - 1' 2 � - 00, " (IL" 1'2 ' 0 2 1 6) � 1 as 1', 1'2 � 00: -
It follows immediately from properties (i), (ii), and (iii) that the test 6 is unbiased. Furthermore, it can be shown that S is actually UMP within the class of all unbiased tests. The proof is beyond the scope of this book [see Lehmann (1959)]. After the values Xl. " " x'" and YI • . . . • YII in the two samples have been observed, the likelihood function g(x, y 1 1-'1 , 1-' 2 ' 0 2 ) is
g(x, y
•
1 1'1' ", , 0 2 ) � fm { x 1 1' 1 , 0 2 )1.( Y 1 1' , , 0 2 ) .
(2)
Here, both fm( x I lL" 0 2 ) and f.( y 1 1' 2 , 0 2 ) have the form given in Eq. (3) of Sec. 8.6, and the value of a 2 is the same in both terms. The likelihood ratio test procedure is based on a comparison of the following two values: the maximum value attained by the likelihood function (2) when the point (ILl ' 1' 2' 0 2 ) varies over the subset no specified by HO I and the maximum value auained by the likelihood function when (Il l' f1 z . aZ) varies over the subset 0l specified by H1· z For the hypotheses (I), 00 contains every point (1-'1. Ill> O" ) such that ILl � Il z . and QI contains every point such that 1' 1 > J.'z. Therefore. we shall consider the
508
Testing Hypotheses
following ratio: (3)
The procedure then specifies that Ho should be rejected if r(x, y) " k, where k is some chosen constant, and that Ho should be accepted if r(x, y ) < k. It can be shown by methods similar to those given in Sec. 8.6 that r(x, y) " k if and only if
( 1 1 )lfl[ m m + -;;
.1: (x ,
,-1
]
- xm )' + 1: ( y, - y. )' n
,-1
/2
1
" C
(4)
'
where c is another constant whose value can be derived from k. The details of thi s derivation wiIJ not be given here. We shall again let Si and S� be the sums of squares defined in Eq. (12) of Sec. 8.8. Also, in accordance with the relation (4), we shall lel the statistic U be defined by the following relation: U-
( m + 11
- 2) 1/'( Xm
( ! + ! ((S;
- }Tn ) .
(5)
+ sn lfl
The likelihood ratio test procedure which we have just described specifies that the. hypothesis Ho should be rej ected if U " c, and that Ho should be accepted if U < c. ]n this problem, this procedure is called a two-sample t test, for reasons that will now be explained. Properties of the Two-Sample t Test
We shall derive the distribution of the statistic U when ILl = 1L 2 ' For any given values of Il l ' Il l. and 0 2 , the sample mean Xm has a normaJ distribution with mean ILl and variance 0 21rn, and the sample mean Yn has a normal distribution with mean 11 2 and variance 0 21n. Since Xm and Yn are independent, it foUows that the difference Xm - Yn has a normal dis trib ution with mean 1' 1 1'2 and variance [(I/m) + (l/n»),, 2 Therefore when 1' 1 1' " the following random variable ZI will have a standard normal distribution :
,
ZI -
(
X:",m!..._ - :..: _� Yn�
) -+1
1 II'
m
n
"
-
-
(6)
r
t i \,
r
.
,
8.9. Comparing the Means 01 Two Normal Distributions
509
Also, for any values of 1-'1' 1-' 2' and 0'2, the random variable S;:/a2 has a X 2 distribution with m - ] degrees of freedom; the random variable S}jo 2 has a X 2 distribution with n - 1 degrees of freedom; and these two random variables will be independent. Therefore, the following random variable 2 has a X2 distribu 2 tion with m + n - 2 degrees of freedom:
S; + S� Z2 = ---"-----;:----'02 :. :
(7)
Furthermore, the four random variables X",, Y". si. and S� are indepen dent. This result is implied by the following two facts: (i) If one random variable is a function of Xh , Xm only and if another random variable is a function of Y, , Y" only. then these two variables must be independent. (ii) By Theorem 1 of Sec. 7.3, Xm and S; are independent, and YII and SJ are also independent. It follows that the two random variables 21 and 2 are independent. The random 2 variable 21 has a standard normal distribution when P,l = P, . and the random 2 variable 22 has a x2 distribution with m + n - 2 degrees of freedom for any values of IL l ' P, 2' and 02. The statistic U can now be represented in the form .
.
.
• . . .
U
_
Z, [Z';(m +
n
-
. 2)J'/2
(8)
Therefore, when JI. } = 1-'2' it follows from the definition of the t distribution given in Sec. 7.4 that U will have a t distribution with m + n - 2 degrees of freedom. Thus, when JJ.I = 1' 2 . it is possible to use a table of this I distribution to choose a constant c such that Pr(U � c) = dO' regardless of the common value of Jl.1 and }l 2 and regardless of the value of 0 2. If Jl.l < Jl.2' then it can be shown by an argument similar to that given in Sec. 8.6 that Pr(U " c) < "'0' regardless of the value of 02. It follows that tbe size of this test, which rejects flo when U � c, is ao. Also, it can he shown that if }l l > 1-'2. then Pr(U $!: c) > ao' regardless of the value of (1 2 . Furthermore, Pr(U � c) 0 as ILl - J.L 2 ---+ - 00 and Pr(U � c) ---+ 1 as ILl - J.L 2 - co. ---.
Example 1: Perfonning a Two-Sample t Test. Suppose that a random sample of
8 specimens of ore is collected from a certain location in a copper mine, and that the amount of copper in each of the specimens is measured in grams. We shall denote these 8 amounts by X" . . . , X" and shall suppose that the observed values 2 are such that X, 2.6 and S; = E�_ ,( Xi - X,) - 0.32. Suppose also that a second random sample of 10 specimens of ore is collected from another part of the mine. We shall denote the amounts of copper in these specimens by YI, , YIO ' and shall suppose that the observed values in grams are such that YlO = 2.3, and S� = L!� l(� - YIO)2 = 0.22. Let PO I denote the mean amount of copper in all =
•
.
.
Testing Hypotheses
5 10
the ore at the first location in the mine� let #l 2 denote the mean amount of copper in all the orc at the second location; and suppose that the hypotheses (1) are to be tested. We sha1l assume that all the observations have a normal distribution and that the variance is the same at both locations in the mine, even though the means may be different. In this example, the sample sizes are m = 8 and n = 10, and the value of the statistic U defined by Eq. (5) is 3.442. Also, by use of a table of the I distribution with 16 degrees of freedom, it is found that the tail area correspond ing to this observed value of U is less than 0.005. Hence, the null hypothesis will be rejected for any specified level of significance ao ;3 0.005. 0
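The computation reported in this example is easy to verify. The following sketch (a minimal check of the example's numbers, assuming SciPy is available) evaluates the statistic U of Eq. (5) from the summary values m = 8, n = 10, X̄8 = 2.6, Ȳ10 = 2.3, S_X² = 0.32, S_Y² = 0.22, and finds the corresponding one-sided tail area.

```python
from scipy.stats import t
import math

m, n = 8, 10
xbar, ybar = 2.6, 2.3
sx2, sy2 = 0.32, 0.22        # sums of squared deviations within each sample

U = (math.sqrt(m + n - 2) * (xbar - ybar)
     / (math.sqrt(1 / m + 1 / n) * math.sqrt(sx2 + sy2)))
tail_area = t.sf(U, m + n - 2)

print(f"U = {U:.3f}, tail area = {tail_area:.4f}")   # U is about 3.44, tail area below 0.005
```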
Two-Sided Alternatives and Unequal Variances The procedure based on the statistic U which has just been described can easily be adapted to testing the following hypotheses at a specified level of signif Icance 0:0:
HHI:o: 11'1'1 ¢- 1'1',". Ho
(9)
Since the alternative hypothesis in this case is two-sided, the test procedure would be to reject if either U < c 1 or U > C2 . where the constants c l and c2 are chosen so that when Ho is true, Pr(U < cl) + Pr(U > c,) More importanlly, the basic procedure can be extended to a problem in which the variances of the two normal distributions are not equal, but the ratio of one variance to the other is known. Specifically, suppose that Xl ' . . . ' Xm form a random sample from a normal distribution with mean II I and variance al; and that , form an independent random sample from another normal distri bution with mean 11 2 and variance or Suppose also that the values of Il l ' IL 2 ' all, and 01 are unknown but that of = ko)2 , where k is a known positive constant. Then it can be shown (see Exercise 3 at the end of this section) that when 1'" the following random variable U will have a I distribution with m + n 2 degrees of freedom:
- ao.
Y1 Y" • • • •
1'1 -
-
U-
- 2)1/'( Xm- - -Y"1 ) .
s� ) /' ( -1 + -k ) 1/' ( Sx, + -
(m + n m
n
(10)
k
Hence, the statistic U defined by Eq. (10) can be used for testing either the hypotheses (1) or the hypotheses (9). Finally, if the values of all four parameters Il l ' 1L 2 ' ol, and 0] are unknown and if the value of the ratio a�/ai is also unknown, then the problem of testing
B.9.
Comparing the Means of Two Normal Distributions
511
the hypotheses (1) or the hypotheses (9) becomes very difficult. This problem is known as the Behrens-Fisher problem. Various test procedures have been pro posed, but most of them have been the subject of controversy in regard to their appropriateness or usefulness. No single procedure has found widespread accep tance among statisticians. This problem will not be discussed in this book.
EXERCISES
1. Suppose that a cenain drug A was administered to eight patients selected at random; and that after a fixed time period, the concentration of the drug in certain body cells of each patient was measured in appropriate units. Suppose that these concentrations for the eight patients were found to be as follows: 1 .23, 1 .42, 1 .4 1 , 1 .62, 1 .55, 1 .51, 1 .60, and 1 .76. Suppose a1so that a second drug B was administered to six different patients selected at random; and that when the concentration of drug B was mea sured in a similar way for these six patients, the results were as follows: 1 .76. 1 .41 , 1 .87, 1 .49, 1 .67, and 1 .81 .
Assuming that all the observations have a norm�l distribution with a common unknown variance, test the following hypotheses at the level of significance 0.10: The null hypothesis is that the mean concentration of drug A among all patients is at least as large as the mean concentration of drug B. The alternative hypothesis is that the mean concentration of drug B is larger tban that of drug A . , '"'
2. Consider again the conditions of Exercise 1 , but suppose now that it is desired to test the following hypotheses: The null hypothesis is that the mean concentration of drug A among aU patients is the same as the mean concentration of drug B. The alternative h)"pothesis, which is two-sided, is that the mean concentrations of the two drugs are not the same. Suppose that the test procedure specifies rejecti ng Ho if either U < c, or U > (2' where U is defined by Eq. (5) and ci and c, are chosen so that when Ho is true, Pr(U < cl) = Pr(U > c,) 0.05. Determine the values of CI and c" and determine whether the hypothesis Ho will be accepted or rejected when the sample data are as given in Exercise 1 . =
3. Suppose that Xl ' . ' " Xm form a random sample from a normal distribution with mean f.Ll and variance CJ12; and that Y1, Yn form an independent random sample from a normal distribution with mean J.' 2 and variance ol. Show that if f.L, = 11 2 and ol = ko?" then the random variable U defined by Eq. (10) has a t distribution with m + n 2 degrees of freedom. • • •
-
,
512
4.
TesUng Hypolh.....
Consider again the conditions and observed values of Exercise 1. However, suppose now that each observation for drug A has an unknown variance known that
al
=
(6/ 5 )of.
level of significance
5.
0.10.
B
ai,
Test the hypotheses described in Exercise 1 a1
and each observation for drug
has an unknown variance
af
but it is
the
Suppose that Xl • . . . • Xm form a random sample from a normal distribution with unknown mean J.LI and unknown variance
0 2;
and that Y] , . . . , Y" form
an independent random sample from another normal distribution with un
11 2
A ( - 00 < A < (0), construct a I m + n - 2 degrees of freedom:
known mean
Consider again the conditions of Exercise
B.
For any given constant
1.
Let Pl denote the mean of each
It is assumed, as in Exercise 1, that all the observations have a
observation for drug drug
A,
0 2,
test of the following hypotheses with
1'1 - 1' 2 � A . 1'1 - 1'2 oF A .
Ho : HI : 6.
and the same unknown variance
and let J1. 2 denote the mean of each observation for
5 to - J.L 2 with confidence coefficient 0.90.
common unknown variance. Use the results of Exercise confidence interval for f..' t
construct a
8.10. SUPPLEMENTARY EXERCISES 1. Suppose that a sequence of Bernoulli trials unknown probability
8
IS
to be carried out with an
of success on each trial, and that the following
hypotheses are to be tested:
Ho : HI :
0.1. fJ � O.2.
fJ
�
Let X denote the number of trials required to obtain a success, and suppose
2.
Ho
type 1 and type
that
is to be rejected if X
� 5.
Determine the probabilities of errors of
2.
errors of type 1 and type
Consider again the conditions of Exercise L Suppose that the 10sses from
and
HI
2
are equal, and that the prior probabilities that Ho
are true are equal. Determine the Bayes test procedure based on
observation X.
3.
the
Suppose that a single observation X is to be drawn from the following p.d.f.:
/(x l fJ)
�
{�(I
-
fJ ) x +
fJ
for
0�
x � 1.
otherwise,
8.10. Supplementary Exercl...
where the value of 8 is unknown (0 hypotheses are to be tested:
Ho :
H, :
51 3
'" 8 '" 2). Suppose also that the following
8 � 2,
8 � 0.
Determine the test procedure 8 for which a(lj) + 2P( f, ) is a minimum, and calculate this minimum value. 4. Consider again the conditions of Exercise 3, and suppose that the value of a is given (0 < a < 1). Determine the test procedure " for which P( 8) will be a minimum. and calculate this minimum value. 5. Consider again the conditions of Exercise 3, but suppose now that the following hypotheses are to be tested:
Ho: H, :
(J � 1,
8 < 1.
(a) Determine the power function of the test " that specifies rejecting Ho if
X > 0.9.
(b) What is the size of the test 6?
6. Consider again the conditions of Exercise 3. Show that the p.d.l. !(x 1 8) has a monotone likelihood ratio in the sta.tistic r( X ) = X, and determine a UMP test of the following hypotheses at the level of significance Uo � 0.05: -
1 � (J 2. '
1 8>
-
2'
7. Suppose that a box contains a large number of chips of three different colors, red, brown, and blue; and that it is desired to test the null hypothesis Ho that chips of the three colors are present in equal proportions against the alternative hypothesis HI that they are nor present in equal proportions. Suppose that three chips are to be drawn at random from the box, and that Ho is to be rejected if and only if at least two of the chlps have the same color. (a) Determine the size of the test. (b) Determine the power of the test if 1/7 of the chips are red, 2/7 are brown, and 4/7 are blue.
8. Suppose that a single observation X is to be drawn from an unknown distribution
Ho: HI :
P, and that the following simple hypotheses are to be tested:
P is a uniform distribution on the interval (0, 1). P is a standard normal distribution.
514
Testing Hypotheses
Determine the most powerful test of size 0.01 and calculate the power of the test when HI is true. 9. Suppose tbat the 12 observations Xl" ' " X12 form a random sample from a normal distribution for which both the mean p. and variance 0 2 are un known. Describe how to carry out a I test of the following hypotheses at the level of significance "'0 = 0.005:
Ho : J1. � 3 . HI : 1' < 3 . 10. Suppose that Xh
, X" form a random sample from a normal distribution with unknown mean 8 and variance 1 ; and that it is desired to test the following hypotheses:
Ho: HI :
• . .
8 � 0, 8 > 0.
Suppose also that it is decided to use a UMP test for which the power is 0.95 when 8 = 1. Determine the size of this test if IJ = 16. 11. Suppose that eight observations Xl Xs are drawn at random from a distribution with the following p.d.f.: • . . .
'
for 0 < x < 1, otherwise. Suppose also that the value of 8 is unknown (0 ) 0), and that it is desired to test the following hYPolheses:
Ho: 8 .:;;:; I , HI : 0 > l . Show that the UMP test at the level of significance ao = 0.05 specifies rejecting Ho if E�_llog Xi :? 3.981. 12. Suppose that Xl ' . . . ' X" form a random sample from a X2 distribution for which the degrees of freedom 8 is unknown (8 = 1, 2, . . ), and that it is desired to test the following hypotheses at a given level of significance ao -
.
(0
> 0 for i = 1, 2, or 3 and that p1 + p2 + p3 = 1. In this problem, a random sample is taken from the population, and the statistician must use the observed numbers of individuals who have each of the three genotypes to determine whether it is reasonable to believe that p1, p2, and p3 can be represented in the hypothesized form (1) for some value of θ in the interval 0 < θ < 1.

In other genetics problems, each individual in the population must have one of six possible genotypes, and it is assumed that the probabilities p1, ..., p6 of the different genotypes can be represented in the following form, for some values of θ1 and θ2:
p1 = θ1²,               p4 = 2θ1θ2,
p2 = θ2²,               p5 = 2θ1(1 − θ1 − θ2),                    (2)
p3 = (1 − θ1 − θ2)²,    p6 = 2θ2(1 − θ1 − θ2).
Again, for any values of θ1 and θ2 satisfying the stated conditions, it can be verified that pi > 0 for i = 1, ..., 6 and Σ_{i=1}^{6} pi = 1. On the basis of the observed numbers N1, ..., N6 of individuals having each genotype in a random sample, the statistician must decide either to accept or to reject the null hypothesis that the probabilities p1, ..., p6 can be represented in the form (2) for some values of θ1 and θ2.

In formal terms, in a problem of the type being considered, we are interested in testing the hypothesis that for i = 1, ..., k, each probability pi can be represented as a particular function πi(θ) of a vector of parameters θ = (θ1, ..., θs). It is assumed that s < k − 1 and that no component of the vector θ can be expressed as a function of the other s − 1 components. We shall let Ω denote the s-dimensional parameter space of all possible values of θ. Furthermore, we shall assume that the functions π1(θ), ..., πk(θ) always form a feasible set of values of p1, ..., pk in the sense that for every value of θ ∈ Ω, πi(θ) > 0 for i = 1, ..., k and Σ_{i=1}^{k} πi(θ) = 1. The hypotheses to be tested can be written in the following form:

H0: There exists a value of θ ∈ Ω such that pi = πi(θ) for i = 1, ..., k,
H1: The hypothesis H0 is not true.                                        (3)
The assumption that s < k − 1 guarantees that the hypothesis H0 actually restricts the values of p1, ..., pk to a proper subset of the set of all possible values of these probabilities. In other words, as the vector θ runs through all the values in the set Ω, the vector [π1(θ), ..., πk(θ)] runs through only a proper subset of the possible values of (p1, ..., pk).
The χ² Test for Composite Null Hypotheses

In order to carry out a χ² test of goodness-of-fit of the hypotheses (3), the statistic Q defined by Eq. (2) of Sec. 9.1 must be modified, because the expected number npi⁰ of observations of type i in a random sample of n observations is no longer completely specified by the null hypothesis H0. The modification that is used is simply to replace npi⁰ by the M.L.E. of this expected number under the assumption that H0 is true. In other words, if θ̂ denotes the M.L.E. of the
parameter vector θ based on the observed numbers N1, ..., Nk, then the statistic Q is defined as follows:
Q = Σ_{i=1}^{k} [Ni − nπi(θ̂)]² / [nπi(θ̂)].                    (4)

Again, it is reasonable to base a test of the hypotheses (3) on this statistic Q by rejecting H0 if Q > c, where c is an appropriate constant. In 1924, R. A. Fisher showed that if the null hypothesis H0 is true and certain regularity conditions are satisfied, then as the sample size n → ∞, the d.f. of Q converges to the d.f. of the χ² distribution with k − 1 − s degrees of freedom. Thus, when the sample size n is large and the null hypothesis H0 is true, the distribution of Q will be approximately a χ² distribution. To determine the number of degrees of freedom, we must subtract s from the number k − 1 used in Sec. 9.1 because we are now estimating the s parameters θ1, ..., θs when we compare the observed number Ni with the expected number nπi(θ̂) for i = 1, ..., k. In order that this result will hold, it is necessary to satisfy the following regularity conditions: First, the M.L.E. θ̂ of the vector θ must be found in the usual way by taking the partial derivatives of the likelihood function with respect to each of the parameters θ1, ..., θs; setting each of these s partial derivatives equal to 0; and then solving the resulting set of equations for θ1, ..., θs. Furthermore, these partial derivatives must satisfy certain conditions of the type alluded to in Sec. 7.8 when we discussed the asymptotic properties of M.L.E.'s.

To illustrate the use of the statistic Q defined by Eq. (4), we shall consider again the two types of genetics problems described earlier in this section. In a problem of the first type, k = 3 and it is desired to test the null hypothesis H0 that the probabilities p1, p2, and p3 can be represented in the form (1) against the alternative H1 that H0 is not true. In this problem, s = 1. Therefore, when H0 is true, the distribution of the statistic Q defined by Eq. (4) will be approximately a χ² distribution with one degree of freedom. In a problem of the second type, k = 6 and it is desired to test the null hypothesis H0 that the probabilities p1, ..., p6 can be represented in the form (2) against the alternative H1 that H0 is not true. In this problem, s = 2. Therefore, when H0 is true, the distribution of Q will be approximately a χ² distribution with 3 degrees of freedom.
Determining the Maximum Likelihood Estimates

When the null hypothesis H0 in (3) is true, the likelihood function L(θ) for the observed numbers N1, ..., Nk will be

L(θ) = [π1(θ)]^{N1} [π2(θ)]^{N2} ··· [πk(θ)]^{Nk}.                    (5)
Thus,

log L(θ) = Σ_{i=1}^{k} Ni log πi(θ).                    (6)
The M.L.E. θ̂ will be the value of θ for which log L(θ) is a maximum. For example, when k = 3 and H0 specifies that the probabilities p1, p2, and p3 can be represented in the form (1), then

log L(θ) = N1 log(θ²) + N2 log[2θ(1 − θ)] + N3 log[(1 − θ)²]
         = (2N1 + N2) log θ + (2N3 + N2) log(1 − θ) + N2 log 2.                    (7)
It can be found by differentiation that the value of θ for which log L(θ) is a maximum is

θ̂ = (2N1 + N2) / (2n) = (2N1 + N2) / [2(N1 + N2 + N3)].                    (8)
The value of the statistic Q defined by Eq. (4) can now be calculated from the observed numbers N1, N2, and N3. As previously mentioned, when H0 is true and n is large, the distribution of Q will be approximately a χ² distribution with one degree of freedom. Hence, the tail area corresponding to the observed value of Q can be found from that χ² distribution.
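As a worked illustration of Eqs. (4), (7), and (8), the following sketch computes θ̂ and Q for the first genetics problem. It assumes the hypothesized form behind Eq. (7), namely p1 = θ², p2 = 2θ(1 − θ), p3 = (1 − θ)², assumes the SciPy library is available for the χ² tail area, and uses made-up observed counts purely for illustration.

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical observed genotype counts N1, N2, N3 (illustrative values only).
    N = np.array([26, 52, 22])
    n = N.sum()

    # M.L.E. of theta from Eq. (8).
    theta_hat = (2 * N[0] + N[1]) / (2 * n)

    # Hypothesized cell probabilities evaluated at the M.L.E.
    pi_hat = np.array([theta_hat**2,
                       2 * theta_hat * (1 - theta_hat),
                       (1 - theta_hat)**2])

    # Statistic Q from Eq. (4); k = 3 cells and s = 1 estimated parameter.
    Q = np.sum((N - n * pi_hat)**2 / (n * pi_hat))
    tail_area = chi2.sf(Q, df=3 - 1 - 1)   # chi-square with one degree of freedom
    print(theta_hat, Q, tail_area)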
Testing Whether a Distribution Is Normal

Consider now a problem in which a random sample X1, ..., Xn is taken from some continuous distribution for which the p.d.f. is unknown, and it is desired to test the null hypothesis H0 that this distribution is a normal distribution against the alternative hypothesis H1 that the distribution is not normal. A χ² test of goodness-of-fit can be applied to this problem if we divide the real line into k subintervals and count the number Ni of observations in the random sample which fall into the ith subinterval (i = 1, ..., k). If H0 is true and if μ and σ² denote the unknown mean and variance of the normal distribution, then the parameter vector θ is the two-dimensional vector θ = (μ, σ²). The probability πi(θ), or πi(μ, σ²), that an observation will fall within the ith subinterval is the probability assigned to that subinterval by the normal distribution with mean μ and variance σ². In other words, if the ith subinterval is the interval from ai to bi, then

πi(μ, σ²) = ∫_{ai}^{bi} (2πσ²)^{−1/2} exp[−(x − μ)²/(2σ²)] dx.                    (9)
It is important to note that in order to calculate the value of the statistic Q defined by Eq. (4), the M.L.E.'s μ̂ and σ̂² must be found by using the numbers N1, ..., Nk of observations in the different subintervals; they should not be found by using the observed values of X1, ..., Xn themselves. In other words, μ̂ and σ̂² will be the values of μ and σ² which maximize the likelihood function

L(μ, σ²) = Π_{i=1}^{k} [πi(μ, σ²)]^{Ni}.                    (10)

Because of the complicated nature of the function πi(μ, σ²), as given by Eq. (9), a lengthy numerical computation would usually be required to determine the values of μ and σ² which maximize L(μ, σ²). On the other hand, we know that the M.L.E.'s of μ and σ² based on the n observed values X1, ..., Xn in the original sample are simply the sample mean X̄n and the sample variance Sn²/n. Furthermore, if the estimators which maximize the likelihood function L(μ, σ²) are used to calculate the statistic Q, then we know that when H0 is true, the distribution of Q will be approximately a χ² distribution with k − 3 degrees of freedom. On the other hand, if the M.L.E.'s X̄n and Sn²/n based on the observed values in the original sample are used to calculate Q, then this χ² approximation to the distribution of Q will not be appropriate. Because of the simple nature of the estimators X̄n and Sn²/n, we shall use these estimators to calculate Q; but we shall describe how their use modifies the distribution of Q.
In 1954, H. Chernoff and E. L. Lehmann established the following result: If the M.L.E.'s X̄n and Sn²/n are used to calculate the statistic Q, and if the null hypothesis H0 is true, then as n → ∞, the d.f. of Q converges to a d.f. which lies between the d.f. of the χ² distribution with k − 3 degrees of freedom and the d.f. of the χ² distribution with k − 1 degrees of freedom. It follows that if the value of Q is calculated in this simplified way, then the tail area corresponding to this value of Q is actually larger than the tail area found from a table of the χ² distribution with k − 3 degrees of freedom. In fact, the appropriate tail area lies somewhere between the tail area found from a table of the χ² distribution with k − 3 degrees of freedom and the larger tail area found from a table of the χ² distribution with k − 1 degrees of freedom. Thus, when the value of Q is calculated in this simplified way, the corresponding tail area will be bounded by two values that can be obtained from a table of the χ² distribution.
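The following sketch illustrates this simplified procedure, assuming SciPy is available; the sample, the subinterval endpoints, and hence the counts are made-up illustrative values. The two tail areas printed at the end are the bounds just described, from the χ² distributions with k − 3 and k − 1 degrees of freedom.

    import numpy as np
    from scipy.stats import norm, chi2

    x = np.random.default_rng(1).normal(size=100)        # illustrative sample
    edges = np.array([-np.inf, -1.0, -0.5, 0.0, 0.5, 1.0, np.inf])   # k = 6 subintervals

    N, _ = np.histogram(x, bins=edges)                    # counts N1, ..., Nk
    n, k = x.size, N.size

    mu_hat = x.mean()                                     # sample mean
    sigma2_hat = ((x - mu_hat)**2).mean()                 # Sn^2 / n

    # pi_i(mu_hat, sigma2_hat) from Eq. (9), via the normal c.d.f.
    cdf = norm.cdf(edges, loc=mu_hat, scale=np.sqrt(sigma2_hat))
    pi_hat = np.diff(cdf)

    Q = np.sum((N - n * pi_hat)**2 / (n * pi_hat))        # Eq. (4)

    print(Q, chi2.sf(Q, df=k - 3), chi2.sf(Q, df=k - 1))  # Q and its two bounding tail areas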
Testing Composite Hypotheses About an Arbitrary Distribution

The procedure just described can be applied quite generally. Consider again a problem in which a random sample of n observations is taken from some continuous distribution for which the p.d.f. is unknown. Suppose now that it is desired to test the null hypothesis H0 that this distribution belongs to a certain
family of distributions indexed by the s-dimensional parameter vector θ = (θ1, ..., θs) against the alternative hypothesis H1 that the distribution does not belong to that particular family. Furthermore, suppose as usual that the real line is divided into k subintervals.

If the null hypothesis H0 is true and the vector θ is estimated by maximizing the likelihood function L(θ) given by Eq. (5), then the statistic Q will have approximately a χ² distribution with k − 1 − s degrees of freedom. However, if H0 is true and the M.L.E. of θ that is found from the n observed values in the original sample is used to calculate the statistic Q, then the appropriate approximation to the distribution of Q is a distribution which lies between the χ² distribution with k − 1 − s degrees of freedom and the χ² distribution with k − 1 degrees of freedom. Therefore, corresponding to this calculated value of Q, the tail area found from a table of the χ² distribution with k − 1 − s degrees of freedom will be a lower bound for the appropriate tail area, and the tail area found from a table of the χ² distribution with k − 1 degrees of freedom will be an upper bound for the appropriate tail area.

The results just described also apply to discrete distributions. Suppose, for example, that a random sample of n observations is taken from a discrete distribution for which the possible values are the nonnegative integers 0, 1, 2, .... Suppose also that it is desired to test the null hypothesis
H0 that this distribution is a Poisson distribution against the alternative hypothesis H1 that the distribution is not Poisson. Finally, suppose that the nonnegative integers 0, 1, 2, ... are divided into k classes such that each observation will lie in one of these classes.

It is known from Exercise 4 of Sec. 6.5 that if H0 is true, then the sample mean X̄n is the M.L.E. of the unknown mean θ of the Poisson distribution based on the n observed values in the original sample. Therefore, if the estimator X̄n is used to calculate the statistic Q defined by Eq. (4), then the approximate distribution of Q when H0 is true lies between a χ² distribution with k − 2 degrees of freedom and a χ² distribution with k − 1 degrees of freedom.
EXERCISES
L At the fifth hockey game of the season at a certain arena, 200 people were
selected at random and asked how many of the previous four games they had
attended. The results are given in Table 9.2. Test the hypothesis that these 200 observed values can be regarded as a random sample from a binomial distribution; that is, that there ex.ists a number
e
(0
<e
then it follows that the expected number of observations in the ith row and the jth column is simply Eij = n pi+ p+j. Next, since pi+ is the probability that an
Table 9.8

27.14   18.88   12.98
22.08   15.36   10.56
17.48   12.16    8.36
25.30   17.60   12.10
observation will be classified in the ith row, p̂i+ is simply the proportion of observations in the sample that are classified in the ith row; that is, p̂i+ = Ni+/n. Similarly, p̂+j = N+j/n, and it follows that

Êij = n (Ni+/n)(N+j/n) = Ni+ N+j / n.                    (7)

If we substitute this value of Êij into Eq. (6), we can calculate the value of Q from the observed values of Nij. The null hypothesis H0 should be rejected if Q > c, where c is an appropriately chosen constant. When H0 is true and the sample size n is large, the distribution of Q will be approximately a χ² distribution with (R − 1)(C − 1) degrees of freedom.
-
the data in Table 9.7. By using the totals given in the table, we find that
-
NJ + =
-
N.+=
N+ I
N, +
59,
N2 + = 48, 92, N+ 2 = 64, and N+ 3 44. Since n 200, it follows from Eq. (7) that the 4 X 3 table of values of Eij is as given in Table 9.8. The values of Nij given in Table 9.7 can now be compared with the values of Table 9.8. The value of Q defined by Eq. (6) turns out to be 6.68. Since in /iii R = 4 and C = 3, the corresponding tail area is -to be found from a table of the X' distribution with (R - I)(C - I) - 6 degrees of freedom. Its value is larger 38, and
55; and also that
�
than 0.3. Therefore, in the absence of other information, the observed values do not provide any evidence that lIo is not true.
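A minimal sketch of this calculation, assuming SciPy is available; the observed counts below are made-up illustrative values, not the entries of Table 9.7.

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical R x C table of observed counts Nij (illustrative values only).
    N = np.array([[30, 20, 9],
                  [25, 15, 8],
                  [20, 10, 8],
                  [17, 19, 19]])

    n = N.sum()
    row_totals = N.sum(axis=1)          # N_{i+}
    col_totals = N.sum(axis=0)          # N_{+j}

    # Expected counts under independence, Eq. (7): E_ij = N_{i+} N_{+j} / n.
    E = np.outer(row_totals, col_totals) / n

    Q = np.sum((N - E)**2 / E)          # Eq. (6)
    R, C = N.shape
    print(Q, chi2.sf(Q, df=(R - 1) * (C - 1)))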
•
EXERCISES

1. Show that the statistic Q defined by Eq. (6) can be rewritten in the form

Q = [ Σ_{i=1}^{R} Σ_{j=1}^{C} Nij² / Êij ] − n.
Table 9.9

                     Wears a moustache    Does not wear a moustache
Between 18 and 30           12                       28
Over 30                      8                       52

2. Show that if C = 2, the statistic Q defined by Eq. (6) can be rewritten in the form
3.
Suppose that an experiment is carried out to see if there is any relation between a man's age and whether he wears a moustache. Suppose that 100 men,
18
years of age or older. are selected at random; and tbat each man is
classified according to whether or not he is between
18
and
30
years of age,
and also according to whether or not he wears a moustache. The observed numbers are given in Table
9.9.
Test the hypothesis that there is no relation
ship between a man's age and whether he wears a moustache.
4.
300
and that each person in the sample is classified according to whether his Suppose that
persons are selected at random from a large population;
blood type is 0, A , B, or AB, and also according to whether his blood type is
Rh
positive or
Rh
negative. The observed numbers are given in
Table 9.10.
Test the hypothesis that the two classifications of blood types are indepen dent.
5.
Suppose that a store carries two different brands, A and B, of a certain type of breakfast cereal. Suppose that during
a
one-week period the store noted
whether each package of this type of cereal that was purchased was of brand A or of brand B and aJso noted whether the purchaser was a man or a woman.
(A purchase made by a child or by a man and a woman together was
Table 9.\0 Rh Rh
o
A
B
AB
positive
82
89
19
negative
13
27
54
7
9
9.3.
Table 9.11 Brand A
Men Women
ConUngency Tables
539
Brand B
9
6
13
16
not counted.) Suppose that 44 packages were purchased, and that the results were as given in Table 9.11. Test the hypothesis that the brand purchased and the sex of the purchaser are independent.
6. Consider a two-way contingency table with three rows and three columns. Suppose that. for i � 1, 2, 3, and j � 1, 2, 3, the probability Pi} that an
individual selected at random from a given population will be classified in the ith row and the jth column of the table is as given in Table 9.12 (a) Show that the rows and columns of this table are independent by verifying that the values P;j satisfy the null hypothesis Ho in Eq. (5).
(b) Generate a random sample of 300 observations from the given population by choosing 300 pairs of digits from a table of random digits and classifying each pair in some cell of the contingency table in the following manner: Since P I I = 0.15, classify a pair of digits in the first cell if it is one of the first fifteen pairs 01, 02, . . . , 15. Since P I2 � 0.09, classify a pair of digits in the second cell if it is one of the next nine pairs 16, 17, . . . , 24. Continue in this way for all nine cells. Thus, since the last cell of the table has probability pJ) = 0.08, a pair of digits will be classified in that cell if it is one of the last eight pairs 93, 94, . . . , 99,00. (c) Consider the 3 X 3 table of observed values Ni} generated in part (b). Pretend that the probabilities Pi} were unknown, and test the hypothe ses (5).
7.
If all the students in a class carry out Exercise"6 independently of each other and use different sets of random digits, then the different values of the statistic Q obtained by the different students should form a random sampLe from a X' distribution with 4 degrees of freedom. If the values of Q for all
Table 9.12

0.15   0.09   0.06
0.15   0.09   0.06
0.20   0.12   0.08
the students in the class are available to you, test the hypothesis that these values form such a random sample.
8. Consider a three-way contingency table of size R × C × T. For i = 1, ..., R, j = 1, ..., C, and k = 1, ..., T, let pijk denote the probability that an individual selected at random from a given population will fall into the (i, j, k) cell of the table. Let

pi++ = Σ_{j=1}^{C} Σ_{k=1}^{T} pijk,     p+j+ = Σ_{i=1}^{R} Σ_{k=1}^{T} pijk,     p++k = Σ_{i=1}^{R} Σ_{j=1}^{C} pijk.
On the basis of a random sample of n observations from the given popula tion, construct a test of the following hypotheses:
Ho: HI :
9.
Pijk
= Pi+ +P+j+P++k
for all values of i, j , and k ,
The hypothesis Ho is not true.
Consider again the conditions of Exercise 1, C let .
.
.
,
8. For
i - I, . . . , R and j =
,
T
Pij+ "'" L Pijk· '-1
On the basis of a random sample of n observations from the given popula:" tion, construct a test of the following hypotheses:
Ho : HI :
9.4.
Pijk = Pij +P++k
for all values of i , } .
and k ,
The hypothesis Ho is not true.
TESTS OF HOMOGENEITY
Samples from Several Populations

Consider again the problem described at the beginning of Sec. 9.3 in which each student in a random sample from the entire enrollment at a large university is classified in a contingency table according to the curriculum in which he is enrolled and according to his preference for either of two political candidates A and B. The results for a random sample of 200 students were presented in Table 9.7. Suppose that we are still interested in investigating whether there is a relationship between the curriculum in which a student is enrolled and the
candidate he prefers. Now suppose, however, that instead of selecting 200 students at random from the entire enrollment at the university and classifying them in a contingency table, the experiment is carried out in the following manner: First. 59 students are selected at random from all of those who are enrolled in engineering and science, and each of these 59 students in the random sample is classified according to whether he prefers candidate A , prefers candidate B. or is undecided. We shaU suppose. for convenience, that the results are as given in the first row of Table 9.7. Also, 48 students are selected at random from all of those who are enrolled in humanities and social sciences. Again, for convenience, we shall suppose that the resuhs are as given in the second row of Table 9.7. Similarly, 38 students are selected at random from all of those enrolled in fine arts, and 5 5 students are selected at random from all of those enrolled in industrial and public administration; and we shall suppose that the results are as given in the boltom two rows of Table 9.7 . Thus, we are assuming that we have again obtained a table of values identical to Table 9.7; but we are assuming now that this table was obtained by taking four different random samples from the different populations of students defined by the four rows of the table. In this context, we are interested in testing the hypothesis that, in all four popUlations, the. same proportion of students prefers candidate A . the same proportion prefers candidate B. and the .same proportion is undecided. In general, we shall consider a problem in which random samples are taken from R different populations and each observation in each sample can be classified as one of C different types. Thus, the data obtained from the R samples can be represented in an R X C table. For i = 1 , . . , R and j 1 , . . . , C. we shaH let P ij denote the probability that an observation chosen at :-andom from the ith population will be of type j. Thus, =
.
c
I: P ij
j-I
�
1
(1)
for i = l , . . . , R .
The hypotheses to be tested are as follows:

H0: p1j = p2j = ··· = pRj     for j = 1, ..., C,
H1: The hypothesis H0 is not true.                    (2)
The null hypothesis H0 in (2) states that all the distributions from which the R different samples are drawn are actually alike; that is, that the R distributions are homogeneous. For this reason, a test of the hypotheses (2) is called a test of homogeneity of the R distributions.

For i = 1, ..., R, we shall let Ni+ denote the number of observations in the random sample from the ith population; and for j = 1, ..., C, we shall let Nij denote the number of observations in this random sample that are of type j. Thus,

Σ_{j=1}^{C} Nij = Ni+      for i = 1, ..., R.                    (3)
Furthermore, if we let n denote the total number of observations in all R samples and we let N+j denote the total number of observations of type j in the R samples, then all the relations in Eqs. (3) and (4) of Sec. 9.3 will again be satisfied.

The χ² Test of Homogeneity
We shall now develop a test procedure for the hypotheses (2). Suppose for the moment that the probabilities pij are known, and consider the following statistic calculated from the observations in the ith random sample:

Σ_{j=1}^{C} (Nij − Ni+ pij)² / (Ni+ pij).                    (4)

This statistic is just the standard χ² statistic, introduced in Eq. (2) of Sec. 9.1, for the random sample of Ni+ observations from the ith population. Therefore, when the sample size Ni+ is large, the distribution of this statistic will be approximately a χ² distribution with C − 1 degrees of freedom. If we now sum this statistic over the R different samples, we obtain the following statistic:
Σ_{i=1}^{R} Σ_{j=1}^{C} (Nij − Ni+ pij)² / (Ni+ pij).                    (5)

Since the observations in the R samples are drawn independently, the distribution of the statistic (5) will be the distribution of the sum of R independent random variables, each of which has approximately a χ² distribution with C − 1 degrees of freedom. Hence, the distribution of the statistic (5) will be approximately a χ² distribution with R(C − 1) degrees of freedom. Since the probabilities pij are not actually known, their values must be estimated from the observed numbers in the R random samples. When the null hypothesis H0 is true, the R random samples are actually drawn from the same distribution. Therefore, the M.L.E. of the probability that an observation in any one of these samples will be of type j is simply the proportion of all the observations in the R samples that are of type j. In other words, the M.L.E. of pij
-
9.4. T••ts of Homogeneity
is the same for all values of i (i = 1 , . . . , R), and this estimator is When this M.L.E. is substituted into (5), we obtain the statistic
543
Pi} = N +j/n. (6)
where
(7) It can be seen that Eqs. (6) and (7) are precisely the same as Eqs. (6) and (7) of Sec. 9.3. Thus, the statistic Q to be used for the test of homogeneity in this section is precisely the same as the statistic Q to be used for the test of independence in Sec. 9.3. We shall now show that the number of degrees of freedom is also precisely the same for the test of homogeneity as for the test of independence. Since the distributions of the R populations are alike when Ho is true, and since E;_ I P'j = 1 for this common distribution, we have estimated C - 1 param� eters in this problem. Therefore, the statistic Q will have approximately a Xl distribution with R(C - 1) - ( C - 1) � ( R - l)(C - I) degrees of freedom. This number is the same as that found in Sec. 9.3. In summary, consider Table 9.7 again. The statistical analysis of this table will be the same for either of the following two procedures: The 200 observations are drawn as a single random sample from the entire enrollment of the university, and a test of independence is carried out; or the 200 observations are drawn as separate random samples from four different groups of students, and a test of homogeneity is carried out. In either case, in a problem of this type with R rows and C columns, we should calculate the statistic Q defined by Eqs. (6) and (7); and we should assume that its distribution when Ho is true will be approximately a X' distribution with ( R - I)(C - 1) degrees of.freedom.
Comparing Two or More Proportions Consider a problem in which it is desired to find oul whether the proportion of adults who watched a certain television program was the same in R different cities ( R ;t. 2). Suppose that for i = I, R , a random sample of Ni+ adults is selected from city i; that the number in the sample who watched the program is Nil; and that the number who did not watch the program is N/2 = N, + - NjJ > These data can be presented in an R x 2 table such as Table 9.13. The hypotheses to be tested will have the same form as the hypotheses (2). Hence. . . . •
544
Categorical Data and Nonparametrlc Methods
Did not
Sample
Ci ty
Watched program
watch
size
1
Ni l
NI 2
N1 .J.
2
N21
N"
N, +
NRO
NR,
NR+
. .
R
when the null hypothesis H0 is true, that is, when the proportion of adults who watched the program is the same in all R cities, the statistic Q defined by Eqs. (6) and (7) will have approximately a χ² distribution with R − 1 degrees of freedom.
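A brief sketch of this comparison of proportions, assuming SciPy is available; the counts of viewers and non-viewers in each city are made-up illustrative values.

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical numbers who watched / did not watch the program in R = 3 cities.
    N = np.array([[63, 137],
                  [88, 112],
                  [71, 129]])

    n = N.sum()
    E = np.outer(N.sum(axis=1), N.sum(axis=0)) / n   # expected counts, Eq. (7)
    Q = np.sum((N - E)**2 / E)                        # Eq. (6)

    R = N.shape[0]
    print(Q, chi2.sf(Q, df=R - 1))                    # R - 1 degrees of freedom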
Correlated 2 x 2 Tables We sha1l now describe a type of problem in which the use of the X l lest of homogeneity would not be appropriate. Suppose that 100 persons were selected at . random in a certain city, and that each person was asked whether he thought the service provided by the Fire Department in the city was satisfactory. Shortly after ' this survey was carried out, a large fire occurred in the city. Suppose that after this fire, the same 100 persons were again asked whether they thought that the ' service provided by the Fire Department was satisfactory. The results are pre sented in Table 9.14. Table 9.14 has the same general appearance as other tables we have been considering in this section. However, it would not be appropriate to carry out a 2 X test of homogeneity for this table because the observations taken before the fire and the observations taken after the fire are not independent. Although the total number of observations in Table 9.14 is 200, only 100 independently chosen persons were questioned in the surveys. I t is reasonable to believe th at a particular person's opinion before the fire and his opinion after the fire are dependent. For this reason, Table 9.14 is called a correlated 2 x 2 table. Table 9.14
Satisfactory
I
Unsatisfactory
Before the fire
80
20
After the fire
72
28
9.4.
Tests of Homogeneity
545
Table 9.15
After the
{ Before the fire
fire
Satisfactory
Unsatisfactory
Satisfactory
70
10
Unsatisfactory
2
18
--------+--
The proper way to display the opinions of the 100 persons in the random sample is shown in Table 9.15. It is not possible to construct Table 9.15 from the data in Table 9.]4 alone. The entries in Table 9.14 are simply the marginal totals of Table 9.15. However, in order to construct Table 9.15, it is necessary to go back to the original data and, Cor each person in the sample, to consider both his opinion before the fire and his opinion after the fire. Furthermore, it usually is not appropriate to carry out either a x2 test of independence or a X 2 test of homogeneity for Table 9.15, because the hypotheses that are tested by either of these procedures usually are not those in which a statistician would be interested in this type of problem. In fact, in this problem a statistician would basically be interested in the answers to one or both of the following two questions: First, what proportion of the persons in the city changed their opinions about the Fire Department after the fire occurred? Second, ampng those persons in the city who did change their opinions after the fire, were the changes predominantly in one direction rather than the other? Table 9.15 provides information pertaining to both these questions. Accord ing to Table 9.15, the number of persons in the sample who changed their opinions after the fire was 10 + 2 = 12. Furthermore. among the 12 persons who did change their opinions, the opinions of 10 of them were changed from satisfactory to unsatisfactory and the opinions of 2 of them were chabged from unsatisfactory to satisfactory_ On the basis of these statistics, it is possible to make inferences about the corresponding proportions· for the entire population of • the city. In this example. the M.L.E. iJ of the proportion of the population who changed their opinions after the fire is 0.12. Also, among those who did change their opinions, the M.L.E. P12 of the proportion who changed from satisfactory to unsatisfactory is 5/6. Of course, if b is very small in a particular problem, then there is little interest in the value of P12 '
\
,
EXERCISES 1. An examination was given to 500 high-school seniors in each of two large cities, and their grades were recorded as low, medium, or high. The results are
..
Categortcal Data and Nonparametric Methods
546
Table
9.16
City
A
City
B
Table
Low
Medium
High
103
145
252
140
136
224
9.17 Number of lectures attended 0
I
1
I
2
I
3
I
4
Freshmen
10
Sophomores
14
19
20
4
13
Juniors
15
15
I7
4
9
Seruors
19
8
6
5
12
as given in Table
9.16.
16
27
6
11
Test the hypothesls that the distributions o f scores.
among seniors in the two cities are the same.
2.
in a visiting speaker to present a lecture on some topic of current interest. On . Every Tuesday afternoon during the school year, a certain university brought the day after the fourth lecture of the year, random samples
70
sophomores,
60
juniors, and
body at the university; and each
SO
of 70 freshmen,
seniors were selected from the student
of these students was asked how many of the ·
four lectures he had attended. The results are given in Table
9.17.
Test the
hypothesis that freshmen, sophomores. juniors. and seniors at the university attended the lectures with equal frequency.
3.
Suppose that five persons shoot at a target. Suppose also that for person
i
shoots
n;
i=1
• . . . •
times and hits the target Yi times; and that the values of
and y, are as given i n Table
9.18. Test the hypothesis that
5, n;
the five persons are
equally good marksmen.
4. A
manufacturing plant has preliminary contracts with three different sup
of machines. Each supplier delivered 15 machines, which were used in the plant for 4 months in preliminary production. It turned out that one of the machines from supplier 1 was defective; that seven of the machines from
pliers
supplier 2 were defective; and that seven of the machines from supplier
3
were defective. The p1ant statistician decided to lest the null hypothesis Ho
9.4. Tests of Homogeneity
547
Table 9.18
i
n,
y,
1
17
8
2 3
16
4
4
10 24
7 13
5
16
10
Table 9.19 Supplier
I
I Number of defectives
N,.
Expected number of defectives E; under Ho
( N, - E, ) E,
,
1
5 16
5
2 7
5 4
5
I
3
7
5 4
-
5
that the three suppliers provided the same quality. Therefore, he set up Table 9.19 and carried out a x2 test. By summing the values in the bottom row of Table 9.19, he found that the value of the X' statistic was 24/5 with 2 degrees of freedom. He then found from a table of the x2 d.istribution that Ho should be accepted when the level of significance is 0.05. Criticize this procedure, and provide a meaningful analysis of the observed data. 5. Suppose that 100 students in a physical-education class shoot at a target with a bow and arrow, and that 27 students hit the target. These 100 students are then given a demonstration on the proper technique for shooting with the bow and arrow. After the demonstration, they again shoot at the target. This time 35 students hit the target. Investigate the hypothesis that the demonstra tion was helpful. What add.itional information, if any, is needed? 6. As people entered a certain meeting, /I persons were selected at random; and each was asked either to name one of two political candidates he favored in a forthcoming election or to say " undecided" if he had no real preference. During the meeting, the people heard a speech on behalf · of one of the
548
Categorical Data and Nonparametric Methods
candidates. After the meeting, each of the same n persons was again asked to express his opinion. Describe a method for evaluating the effectiveness of the speaker.
9.5.
SIMPSON'S PARADOX
Comparing Treatments

Suppose that an experiment is carried out in order to compare a new treatment for a particular disease with the standard treatment for the disease. In the experiment 80 subjects suffering from the disease are treated, 40 subjects receiving the new treatment and 40 receiving the standard treatment. After a certain period of time, it is observed how many of the subjects in each group have improved and how many have not. Suppose that the overall results for all 80 patients are as shown in Table 9.20. According to this table, 20 of the 40 subjects who received the new treatment improved, and 24 of the 40 subjects who received the standard treatment improved. Thus, 50 percent of the subjects improved under the new treatment, whereas 60 percent improved under the standard treatment. On the basis of these results, the new treatment appears inferior to the standard treatment.

Aggregation and Disaggregation

In order to investigate the efficacy of the new treatment more carefully, we might compare it with the standard treatment just for the men in the sample and, separately, just for the women in the sample. The results in Table 9.20 can thus be partitioned into two tables, one pertaining just to men and the other just to women. This process of splitting the overall data into disjoint components pertaining to different subgroups of the population is called disaggregation. Suppose that, when the values in Table 9.20 are disaggregated by considering the men and the women separately, the results are as given in Table 9.21. It can be verified that when the data in these separate tables are combined, or aggre-
Table 9.20

All patients           Improved    Not improved    Percent improved
New treatment             20            20                50
Standard treatment        24            16                60
Table 9.21

Men only               Improved    Not improved    Percent improved
New treatment             12            18                40
Standard treatment         3             7                30

Women only             Improved    Not improved    Percent improved
New treatment              8             2                80
Standard treatment        21             9                70
gated, we again obtain Table 9.20. However, Table 9.21 contains a big surprise because the new treatment appears to be superior to the standard treatment both for men and for women. Specifically, 40 percent of the men (12 out of 30) who received the new treatment improved, but only 30 percent of the men (3 out of 10) who received the standard treatment improved. Furthermore, 80 percent of the women (8 out of 10) who received the new treatment improved, but only 70 percent of the women (21 out of 30) who received the standard treatment improved. Thus, Tables 9.20 and 9.21 together yield somewhat anomalous results. According to Table 9.21, the new treatment is superior to the standard treatment both for men and for women; but according to Table 9.20, the new treatment is inferior to the standard treatment when all the subjects are aggregated. This type of result is known as Simpson's paradox. It should be emphasized that Simpson's paradox is not a phenomenon that occurs because we are working with small samples. The small numbers in Tables 9.20 and 9.21 were used merely for convenience in this explanation. Each of the entries in these tables could be multiplied by 1000 or by 1,000,000 without changing the results.
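The reversal can be checked directly from Tables 9.20 and 9.21; the short sketch below (plain Python, no external libraries) recomputes the improvement rates by sex and in the aggregate.

    # Counts (improved, not improved) from Table 9.21.
    men = {"new": (12, 18), "standard": (3, 7)}
    women = {"new": (8, 2), "standard": (21, 9)}

    def rate(improved, not_improved):
        return improved / (improved + not_improved)

    for treatment in ("new", "standard"):
        m_imp, m_not = men[treatment]
        w_imp, w_not = women[treatment]
        # Aggregating the two subgroups reproduces Table 9.20.
        print(treatment,
              round(rate(m_imp, m_not), 2),                   # men only
              round(rate(w_imp, w_not), 2),                   # women only
              round(rate(m_imp + w_imp, m_not + w_not), 2))   # all patients

    # new:      0.40 (men), 0.80 (women), 0.50 (all)
    # standard: 0.30 (men), 0.70 (women), 0.60 (all)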
The Paradox Explained Of course, Simpson's paradox is not actually a paradox; it is merely a result that is surprising and puzzling to someone who has not seen or thought about it before. It can be seen from Table 9.21 that in the example we are considering, women have a higher rate of improvement from tbe disease than men have,
550
Categorical Data and Nonparametrtc Methods
regardless of which treatment they receive. Furthermore, among the 40 men in the sample, 30 received the new treatment and only 10 received the standard treatment; whereas among the 40 women in the sample, these numbers are reversed. Thus, although the numbers of men and women in the experiment were equal, a high proportion of the women and a low proportion of the men received the standard treatment. Since women have a much higher rate of improvement than men, it is found in the aggregated Table 9.20 that the standard treatment manifests a higher overall rate of improvement than does the new treatment. Simpson's paradox demonstrates dramatically the dangers in making inferences from an aggregated table like Table 9.20. To make sure that Simpson's paradox cannot occur in an experiment like that just described, the relative proportions of men and women among the subjects who receive the new treat ment must be the same, or approximately the same, as the relative proportions of men and women among the subjects who receive the standard treatment. It is not necessary that there be equal numbers of men and women in the sample. We can express Simpson's paradox in probability terms. Let A denote the event that a subject chosen for the experiment will be a man, and let AC denote the event that the subject will be a woman. Also, let B denote the event that a subject will receive the new treatment, and let Be denote the event that the subject will receive the standard treatment. Finally, let J denote the event that a subject will improve. Simpson's paradox then reflects the fact that it is possible for all three of the following inequalities to hold simultaneously: Pr(J I A , B ) > Pr{ i I A , B
0,
�
-
_ i - l - ,lt2 2 L ( 1) e 2
(6)
Thus, if the null hypothesis H0 is true, then as n → ∞, the d.f. of n^{1/2}Dn* will converge to the d.f. given by the infinite series on the right side of Eq. (6). For any value of t > 0, we shall let H(t) denote the value on the right side of Eq. (6). The values of H(t) are given in Table 9.23. A test procedure which rejects H0 when n^{1/2}Dn* > c is called a Kolmogorov-Smirnov test. It follows from Eq. (6) that when the sample size n is large, the constant c can be chosen from Table 9.23 to achieve, at least approximately, any specified level of significance α0 (0 < α0 < 1). For example, it is found from Table 9.23 that H(1.36) = 0.95. Therefore, if the null hypothesis H0 is true, then Pr(n^{1/2}Dn* > 1.36) = 0.05. It follows that the level of significance of a Kolmogorov-Smirnov test with c = 1.36 will be 0.05.
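A small check of these values, assuming Eq. (6) gives H(t) = 1 − 2 Σ_{i=1}^{∞} (−1)^{i−1} exp(−2i²t²); the sketch below evaluates the series with plain Python.

    import math

    def H(t, terms=100):
        # Partial sum of the series 1 - 2 * sum_{i>=1} (-1)^(i-1) exp(-2 i^2 t^2).
        return 1 - 2 * sum((-1)**(i - 1) * math.exp(-2 * i * i * t * t)
                           for i in range(1, terms + 1))

    print(round(H(1.36), 3))   # approximately 0.95, as quoted above
    print(round(H(0.886), 3))  # approximately 0.59, close to the value 0.6 used in Example 1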
Example 1: Testing Whether a Sample Comes from a Standard Normal Distribution. Suppose that it is desired to test the null hypothesis that a certain random sample of 25 observations was drawn from a standard normal distribution against the alternative that the random sample was drawn from some other continuous distribution. The 25 observed values in the sample, in order from the smallest to the largest, are designated as y1, ..., y25 and are listed in Table 9.24. The table also includes the value Fn(yi) of the sample d.f. and the value Φ(yi) of the d.f. of the standard normal distribution. By examining the values in Table 9.24, we find that Dn*, which is the largest difference between Fn(x) and Φ(x), occurs when we pass from i = 4 to i = 5; that is, as x increases from the point x = −0.99 toward the point x = −0.42. The comparison of Fn(x) and Φ(x) over this interval is illustrated in Fig. 9.3, from which we see that Dn* = 0.3372 − 0.16 = 0.1772. Since n = 25 in this example, it follows that n^{1/2}Dn* = 0.886. From Table 9.23, we find that H(0.886) = 0.6. Hence, the tail area corresponding to the observed value of n^{1/2}Dn* is 0.4, and we may conclude that the sample d.f. Fn(x) agrees very closely with the hypothesized d.f. Φ(x).
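A sketch of the calculation in Example 1, assuming SciPy is available; the 25 ordered values y1, ..., y25 are those listed in Table 9.24.

    import numpy as np
    from scipy.stats import norm

    y = np.array([-2.46, -2.11, -1.23, -0.99, -0.42, -0.39, -0.21, -0.15, -0.10,
                  -0.07, -0.02, 0.27, 0.40, 0.42, 0.44, 0.70, 0.81, 0.88, 1.07,
                  1.39, 1.40, 1.47, 1.62, 1.64, 1.76])
    n = y.size
    i = np.arange(1, n + 1)

    # Dn* = sup over x of |Fn(x) - Phi(x)|; for ordered data it occurs at a jump of Fn.
    d_plus = np.max(i / n - norm.cdf(y))
    d_minus = np.max(norm.cdf(y) - (i - 1) / n)
    D_star = max(d_plus, d_minus)

    print(D_star, np.sqrt(n) * D_star)   # about 0.1772 and 0.886, as in the text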
]t is important to emphasize again that when the sample size 11 is large. even a small value of the tail area corresponding to the observed value of n1/2D,,· would not necessarily indicate that the true dJ. F(x) was much different from the hypothesized dJ. (x). When II itself is large, even a small difTerence between the d.f. F(x) and the dJ. (x) would be sufficient to generate a large value of
9.8. Kolmogorov-Smlmov Tests
Table 9.24

  i      yi     Fn(yi)    Φ(yi)
  1    -2.46     0.04    0.0069
  2    -2.11     0.08    0.0174
  3    -1.23     0.12    0.1093
  4    -0.99     0.16    0.1611
  5    -0.42     0.20    0.3372
  6    -0.39     0.24    0.3483
  7    -0.21     0.28    0.4168
  8    -0.15     0.32    0.4404
  9    -0.10     0.36    0.4602
 10    -0.07     0.40    0.4721
 11    -0.02     0.44    0.4920
 12     0.27     0.48    0.6064
 13     0.40     0.52    0.6554
 14     0.42     0.56    0.6628
 15     0.44     0.60    0.6700
 16     0.70     0.64    0.7580
 17     0.81     0.68    0.7910
 18     0.88     0.72    0.8106
 19     1.07     0.76    0.8577
 20     1.39     0.80    0.9177
 21     1.40     0.84    0.9192
 22     1.47     0.88    0.9292
 23     1.62     0.92    0.9474
 24     1.64     0.96    0.9495
 25     1.76     1.00    0.9608

Figure 9.3  [Plot of Fn(x) and Φ(x) between x = -0.99 and x = -0.42, showing Dn* = 0.3372 - 0.16.]
1/2. Hence, a reasonable procedure is to reject H0 if X > c, where c is an appropriate constant. This procedure is called the sign
test.
In summary, a sign test is carried out as follows: In each pair, the difference between the response to drug A and the response to drug B is measured, and the number of pairs for which this difference is positive is counted. The decision to accept or reject H0 is then based solely on this number of positive differences. For example, suppose that the number of pairs is 15, and it is found that drug A yields a larger response than drug B in exactly 11 of the 15 pairs. Then n = 15 and X = 11. When p = 1/2, it is found from the table of the binomial distribution that the corresponding tail area is 0.0593. Thus, the null hypothesis H0 should be rejected at any level of significance greater than this number.

The only information which the sign test utilizes from each pair of observations is the sign of the difference between the two responses. To apply the sign test, the experimenter only has to be able to observe whether the response to drug A or drug B is larger. He does not have to be able to obtain a numerical measurement of the magnitude of the difference between the two responses. However, if the magnitude of the difference for each pair can be measured, it is useful to apply a test procedure which not only considers the sign of the difference but also recognizes the fact that a large difference between the responses is more important than a small difference. We shall now describe a procedure based on the relative magnitudes of the differences.
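A quick check of the tail area quoted above (n = 15 pairs, X = 11 positive differences, p = 1/2), assuming SciPy is available:

    from scipy.stats import binom

    n, x = 15, 11
    # Pr(X >= 11) when X has the binomial distribution with parameters 15 and 1/2.
    tail_area = binom.sf(x - 1, n, 0.5)
    print(round(tail_area, 4))   # approximately 0.0593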
The Wilcoxon Signed-Ranks Test •
We shall continue to assume that drugs A and B are administered to n matched pairs of patients, and we shall assume that the response of each patient can be measured in appropriate units on some numerical scale. For i = 1 , 11 , we shall let Ai denote the response of the patient in pair i who receives drug A ; we shall let Hi denote the response of the patient in pair i who receives drug B; and we Bj• shall let Dj = A I , D" pertain to different pairs of patients, these Since the n differences DI differences will be independent random variables. Furthermore, because the patients in each pair have been carefully matched, we shall assume that all the , D" have the same continuous distribution. Finally, we shall differences DI , . . . •
-
• . . .
.
.
•
574
Categorical Data and Nonpar.metrlc Methods
assume that this distribution is symmetric with respect to some unknown point O. , Dn are i.i.d. and form a In summary, we shall assume that the differences D1 random sample from a continuous distribution which is symmetric with respect to the point O. The nuB hypothesis Ho that drug A is not more effective than drug B is equivalent to the statement that Pre D; :s;;; 0) � 1/2. [n turn, this statement is equivalent to the statement that 8 � O. Similarly, the alternative hypothesis HI that drug A is more effective than drug R 15 equivalent to the statement that o > O. Thus, we must test the following hypotheses: • • • •
Ho: HI :
0 .; 0, 0 > 0.
(2)
Since 8 is the median of the distribution of each difference D;, confidence intervals for 0 and tests of the hypotheses (2) can be developed by using the methods described in Sec. 9.7. We shall now describe a different procedure fOT testing the hypotheses (2) which was proposed by F. Wilcoxon in 1945 and is known as the Wilcoxon signed·ranks test. First, the absolute values I DI l , . . . , ID"I are arranged in order from the smallest absolute value to the largest. It is assumed that no two of these absolute values are equal, and that each is different from O. Second, each absolute value IDJ I is assigned a rank corresponding to its position in this ordering. Thus, the smallest absolute va1ue is assigned the rank 1; the second smallest absolute value is assigned the rank 2; and so on. The largest absolute value is assigned the rank n. Third, each of the ranks 1, If is assigned either a plus sign or a minus sign, the assignment depending on whether the original difference D; which yielded' that rank was positive or negative. Final1y, the statistic SI! is defined to be the sum of those ranks to which a plus sign was assigned. The Wilcoxon signed-ranks test is based on the value of the statistic S". Before we proceed further in our description of the procedure for carrying out this test, we shall consider an example. Suppose that the number of pairs is 15, and that the obsc::rved responses A j and Bi• for i = 1, . . . , 15, are as given in Table 9.34. The absolute values IVil have been ranked from 1 to 1 5 in column (5) of the table. Then in column (6) each rank has been given the same sign as the corresponding value of Dj• The value of the statistic Sn is the sum of the positive ranks in column (6). In this example, it is found that S" = 93. We shall now consider the distribution of the statistic Sw Suppose that the null hypothesis Ho is true. If 0 0, then the drugs A and B are equally effective. In this case, each of the ranks 1 , . . . , n is equally likely to be given a plus sign or a minus sign, and the n assignments of plus signs and minus signs are independent of each other. Furthermore, if 0 < 0, then drug A is less effective than drug B. In . . . •
=
9.9. Paired ObMrYaUon.
575
Table 9.34

  i    (1) Ai   (2) Bi   (3) Di   (4) |Di|   (5) Rank   (6) Signed rank
  1     3.84     3.03     0.81      0.81         4            +4
  2     6.27     4.91     1.36      1.36         7            +7
  3     8.75     7.65     1.10      1.10         6            +6
  4     4.39     5.00    -0.61      0.61         3            -3
  5     9.24     7.42     1.82      1.82        10           +10
  6     6.59     4.20     2.39      2.39        13           +13
  7     9.73     7.21     2.52      2.52        14           +14
  8     5.61     7.59    -1.98      1.98        11           -11
  9     2.75     3.64    -0.89      0.89         5            -5
 10     8.83     6.23     2.60      2.60        15           +15
 11     4.41     4.34     0.07      0.07         1            +1
 12     3.82     5.27    -1.45      1.45         8            -8
 13     7.66     5.33     2.33      2.33        12           +12
 14     2.96     2.82     0.14      0.14         2            +2
 15     2.86     1.14     1.72      1.72         9            +9
this case, each rank is more likely to receive a minus sign than a plus sign. and the statistic S.. will tend to be smaller than it would be if fJ = O. On the other hand. if the alternative hypothesis HI is true and 0 > 0, then drug A is actually more effective than drug B and each rank is more likely to receive a plus sign than a minus sign. In this case, the statistic S.. .will tend to be larger than it would be under the hypothesis Ho . For tIus reason, the Wilcoxon signed-ranks test specifies rejecting Ho when SII � c, where the constant is chosen appropriately. When 0 = 0, the mean and the variance or SII can be derived as follows: For = 1 , . . . , n , let � = 1 if the rank i receives a plus �gn and let w,. = 0 ir the rank i receives a minus sign. Then the statistic Sn can be represented in the form
c
i
n
( 3)
If e � 0, we have Pr(W, � 0) � Pr( W, � 1) � 1/2. Hence, E(W,) � 1/2 and Var( W;) = 1/4. Furthermore. the random variables WI ' w" are independent. It now follows from Eq. (3) that when 0 � 0, • . . .
E ( Sn ) �
1/
L: iE( W, )
i-I
1 /I L: i
� 2
i- I
(4 )
;'11
�..
Categorical Data and Nonparametric Methods
576
1'
:, � ,i
and ,., Yar(Sn) = I: i'Yar(W,)
=
i- I
1 i'. 4 iI: -I /I
( 5)
The final sum in Eq. (4), whkh is the sum of the integers from 1 to n , is equal to (1/2)n(n + 1). The final sum in Eq. (5), which is the sum of the squares of the integers from 1 to n, is equal to (1/6)n(" + 1)(2n + 1). Therefore, when 8 - 0, and
Yar( S" )
=
n ( n + 1)(2n + 1) . 24
(6)
Furthermore, it can be shown that as the number of pairs n � co, the distribution of S" converges to a normal distribution. More precisely. if JJ. " and 0,,2 represent the mean and the variance of SrI as given by the relations (6), then as n --+ 00, the d.L of (S" - J.L")/a" converges to the d.f. of the standard normal distribution. The practical interpretation of this result is as follows: When the number of pairs n is large and () = 0, the distribution of S" will be approximately a normal distribution for which the mean and variance are given by the rela tions (6). Suppose that the hypotheses (2) are to be tested at a specified level of significance 0'0 (0 < 0'0 < ]); and that the constant c is determined so that when o = 0, Pr(Sn � c) 0'0' Then the procedure which rejects Ho when Sn � c will, satisfy the specified level of significance. For example, consider again the data in Table 9.34. ]n this example, n = 15 and it is found from the relations (6) that E(S.) = 60 and Yar(S.) 310. Hence, · the standard deviation of Sn is an = /3]0 = 17.6. ]t follows that when 0 0, the· random variable Z" = (Sn - 60)/17.6 will have approximately a standard normal distribution. If we suppose that it is desired to test the hypotheses (2) at the level of significance 0.05, then Ho should be rejected if Z" � 1.645. For the data in Table 9.34 it was found that S. = 93. Therefore, Z. = 1.875, and it follows that the null hypothesis Ho should be rejected. In fact, the tail area corresponding to this observed value of S" is found from the table of the standard normal distribution to be 0.03. For a small value of n , the normal approximation is not applicable. In this case, the exact distribution of S" when 8 = 0 is given in many published collections of statistical tables. "'"
=
=
Ties The theory under discussion is based on the assumption that the values of D1, D" will be distinct nonzero numbers. Since the measurements in an actual experiment may be made with only limited precision, however, there may actually • • • •
9.9. Paired Observations
577
be ties or zeros among the observed values of Dl, , Dfl' Suppose that a sign test is to be performed and it is found that Dj = 0 for one or more values of i. In this case, the sign test should be carried out twice. In the first test, it should be assumed that each 0 is actually a positive difference. In the second test, each 0 should be treated as a negative difference. If the tail areas found from the two tests are roughJy equal, then the zeros are a relatively unimportant part of the data. If. on the other hand, the tail areas arc quite different, then the zeros can seriously affect the inferences that are to be made. In this case the experimenter should try to obtain additio'1al measurements or more refined measurements. Similar comments pertain to the Wilcoxon signed-rank test. If Dj = 0 for one or more values of i, these zeros should be assigned the lowest ranks and the test should be carried out twice. In the first test, plus signs should be assigned to these ranks. In the second test, minus signs should be assigned to them. A small difference in the tail areas would indicate that the zeros are relatively unim portant. A large difference would indicate that the data may be too unreliable to be used. The same type of reasoning can be applied to two differences Di and Dj which have different signs but the same absolute value. These pairs will occupy successive positions. say k and k + I, in the ranking of absolute values. However, since there is a tie, it is not clear which of the two ranks should be assigned the plus sign. Therefore, the test should be carried out twice. First, the rank k + 1 should be assigned a plus sign and (he rank . k should be assigned a minus sign. Then these signs should be interchanged. . Other reasonable methods for handling ties have been proposed. When two or more absolute values are the same, one simple method is to consider the successive ranks that are to be assigned to these absolute values and then to assign the average of these ranks to each of the tied values. The plus and minus signs are then assigned in the usual way. When th.is method is used, the value of Var(Sn) must be corrected because of the ties. •
EXERCISES
•
•
"
In an experiment to compare two different types of long-lasting razor blades, A and B, 20 men were asked to shave with a blade of type A for one week and then with a blade of type B for one week. At the end of this period, 15 men thought that blade A gave a smoother shave than blade B. and the other 5 men thought that blade B gave a smoother shave. Test the null hypothesis that blade A does not tend to give a smoother shave than blade B against the alternative that blade A tends to give a smoother shave than blade B. 2. Consider again the conditions of Exercise 1 . Discuss how the design of this experiment might be improved by considering how the 20 men who par ticipate might be selected and also by considering the possible effect of 1.
578
.,
Categorical Data and Nonparametrlc Methods
having each man shave first with a blade of type type
3.
r;j
D1,
.
•
of
, Dl� form a random sample from a normal distribution for which
both the mean po and the variance following hypotheses:
Ho : H, :
and then with a blade
A.
Consider the data presented i n Table .
B
1' '; I'
0'
2
9.34,
and assume that the
15
are unknown. Carry out a
t
differenees test
of the
0,
> O.
4. Consider again the data presented in Table 9.34.
(a) Compare the tail areas that are obtained from applying the sign test, the Wilcoxon signed-ranks test, and the t test to the differences D_1, ..., D_15.
(b) Discuss the assumptions that are needed to apply each of these three tests.
(c) Discuss the inferences that can be drawn in regard to the relative effectiveness of drug A and drug B because the three tail areas obtained in part (a) have approximately the same magnitude; and discuss the inferences that could be drawn if these tail areas were widely different.

5. In an experiment to compare two different diets A and B for pigs, a pair of pigs was selected from each of 20 different litters. One pig in each pair was selected at random and was fed diet A for a fixed period of time, while the other pig in the pair was fed diet B. At the end of the fixed period, the gain in weight of each pig was noted. The results are presented in Table 9.35. Test the null hypothesis that pigs do not tend to gain more weight from diet A than from diet B against the alternative that pigs do tend to gain more weight from diet A, by using (a) the sign test and (b) the Wilcoxon signed-ranks test.

Table 9.35
Pair   Gain from diet A   Gain from diet B      Pair   Gain from diet A   Gain from diet B
  1         21.5               14.7               11        19.0               19.4
  2         18.0               18.1               12        18.8               13.6
  3         14.7               15.2               13        19.0               19.2
  4         19.3               14.6               14        15.8                9.1
  5         21.7               17.5               15        19.6               13.2
  6         22.9               15.6               16        22.0               16.6
  7         22.3               24.8               17        13.4               10.8
  8         19.1               20.3               18        16.8               13.3
  9         13.3               12.0               19        18.4               15.4
 10         19.8               20.9               20        24.9               21.7
6. Consider again the experiment described in Exercise 5 and the data presented in Table 9.35.
(a) Test the hypotheses described in Exercise 5 by assuming that in each of the 20 pairs of pigs, the difference between the gain from diet A and the gain from diet B has a normal distribution with an unknown mean μ and an unknown variance σ².
(b) Test the hypotheses described in Exercise 5 by assuming that the gain in weight for each pig that is fed diet A has a normal distribution with an unknown mean μ_A and an unknown variance σ², and the gain in weight for each pig that is fed diet B has a normal distribution with an unknown mean μ_B and the same unknown variance σ².
(c) Compare the results obtained in parts (a) and (b) of this exercise and parts (a) and (b) of Exercise 5; and discuss the interpretation of these results.
(d) Discuss methods for investigating whether the assumptions made in part (b) of this exercise are reasonable.
7. In an experiment to compare two different materials A and B that might be used for manufacturing the heels of men's dress shoes, 15 men were selected and fitted with a new pair of shoes on which one heel was made of material A and one heel was made of material B. At the beginning of the experiment, each heel was 10 millimeters thick. After the shoes had been worn for one month, the remaining thickness of each heel was measured. The results are given in Table 9.36. Test the null hypothesis that material A is not more durable than material B against the alternative that material A is more durable than material B, by using (a) the sign test and (b) the Wilcoxon signed-ranks test.

Table 9.36
Pair   Material A   Material B
  1       6.6          7.4
  2       7.0          5.4
  3       8.3          8.8
  4       8.2          8.0
  5       5.2          6.8
  6       9.3          9.1
  7       7.9          6.3
  8       8.5          7.5
  9       7.8          7.0
 10       7.5          6.6
 11       6.1          4.4
 12       8.9          7.7
 13       6.1          4.2
 14       9.4          9.4
 15       9.1          9.1
8. Consider again the conditions of Exercise 7, and suppose that for each pair of shoes it was decided by an auxiliary randomization whether the left shoe or the right shoe would receive the heel made of material A.
(a) Discuss this method of designing the experiment, and consider in particular the possibility that in every pair the left shoe receives the heel made of material A.
(b) Discuss methods for improving the design of this experiment. In addition to the data presented in Table 9.36, would it be helpful to know which shoe had the heel made of material A, and would it be helpful to have each man in the experiment also wear a pair of shoes in which both heels were made of material A or both were made of material B?
*9.10. RANKS FOR TWO SAMPLES

Comparing Two Distributions

In this section we shall consider a problem in which a random sample of m observations X_1, ..., X_m is taken from a continuous distribution for which the p.d.f. f(x) is unknown, and an independent random sample of n observations Y_1, ..., Y_n is taken from another continuous distribution for which the p.d.f. g(x) is also unknown. We shall assume that either the distribution of each observation Y_j in the second sample is the same as the distribution of each observation X_i in the first sample, or else there exists a constant θ such that the distribution of each random variable Y_j + θ is the same as the distribution of each X_i. In other words, we shall assume that either f(x) = g(x) for all values of x, or else there exists a constant θ such that f(x) = g(x − θ) for all values of x. Finally, we shall assume that the following hypotheses are to be tested:
H_0: f(x) = g(x) for −∞ < x < ∞,
H_1: There exists a constant θ (θ ≠ 0) such that
     f(x) = g(x − θ) for −∞ < x < ∞.                                      (1)

It should be noted that the common form of the p.d.f.'s f(x) and g(x) is not specified by the hypothesis H_0, and that the value of θ is not specified by the hypothesis H_1.
Two methods for testing the hypotheses (1) have already been proposed in this chapter. One method is to use the χ² test of homogeneity described in Sec. 9.4, which can be applied by grouping the observations in each sample into C intervals. The other method is to use the Kolmogorov-Smirnov test for two samples described in Sec. 9.6. Furthermore, if we are willing to assume that the two samples are actually drawn from normal distributions, then testing the hypotheses (1) is the same as testing whether two normal distributions have the same mean when it is assumed that they have the same unknown variance. Therefore, under this assumption, we could use a t test based on m + n − 2 degrees of freedom as described in Sec. 8.9.

In this section we shall present another procedure for testing the hypotheses (1) that does not require any assumptions about the form of the distributions from which the samples are drawn. This procedure, which was introduced separately by F. Wilcoxon and by H. B. Mann and D. R. Whitney in the 1940's, is known as the Wilcoxon-Mann-Whitney ranks test.

The Wilcoxon-Mann-Whitney Ranks Test
In this procedure we begin by arranging the m + n observations in the two samples in a single sequence from the smallest value that appears in the two samples to the largest value that appears. Since all the observations come from continuous distributions, it may be assumed that no two of the m + n observations have the same value. Thus, a total ordering of these m + n values can be obtained. Each observation in this total ordering is then assigned a rank corresponding to its position in the ordering. Thus, the smallest observation among the m + n observations is assigned the rank 1 and the largest observation is assigned the rank m + n.

The Wilcoxon-Mann-Whitney ranks test is based on the property that if the null hypothesis H_0 is true and the two samples are actually drawn from the same distribution, then the observations X_1, ..., X_m will tend to be dispersed throughout the ordering of all m + n observations, rather than be concentrated among the smaller values or among the larger values. In fact, when H_0 is true, the ranks that are assigned to the m observations X_1, ..., X_m will be the same as if they were a random sample of m ranks drawn at random without replacement from a box containing the m + n ranks 1, 2, ..., m + n.

Let S denote the sum of the ranks that are assigned to the m observations X_1, ..., X_m. Since the average of the ranks 1, 2, ..., m + n is (1/2)(m + n + 1), it follows from the discussion just given that when H_0 is true,

E(S) = m(m + n + 1)/2.                                                    (2)

Also, it can be shown that when H_0 is true,

Var(S) = mn(m + n + 1)/12.                                                (3)

Furthermore, when the sample sizes m and n are large and H_0 is true, the distribution of S will be approximately a normal distribution for which the mean and the variance are given by Eqs. (2) and (3).
Suppose now that the alternative hypothesis H_1 is true. If θ < 0, then the observations X_1, ..., X_m will tend to be smaller than the observations Y_1, ..., Y_n. Therefore, the ranks that are assigned to the observations X_1, ..., X_m will tend to be among the smaller ranks, and the random variable S will tend to be smaller than it would be if H_0 were true. Similarly, if θ > 0, then the ranks that are assigned to the observations X_1, ..., X_m will tend to be among the larger ranks, and the random variable S will tend to be larger than it would be if H_0 were true.

Because of these properties, the Wilcoxon-Mann-Whitney ranks test specifies rejecting H_0 if the value of S deviates very far from its mean value given by Eq. (2). In other words, the test specifies rejecting H_0 if |S − (1/2)m(m + n + 1)| ≥ c, where the constant c is chosen appropriately. In particular, when the approximate normal distribution of S is used, the constant c can be chosen so that the test is carried out at any specified level of significance α_0.
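A minimal sketch, not part of the text, of this test based on the normal approximation is given below. The function name is hypothetical, numpy and scipy are assumed, and ties are handled here by average ranks rather than by the two-pass procedure recommended later in this section.

```python
import numpy as np
from scipy.stats import norm, rankdata

def wmw_ranks_test(x, y):
    """Return S, Z, and the two-sided tail area based on Eqs. (2) and (3)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = x.size, y.size
    ranks = rankdata(np.concatenate([x, y]))   # average ranks if ties occur
    S = ranks[:m].sum()                        # sum of ranks of the x sample
    mean = m * (m + n + 1) / 2.0               # Eq. (2)
    var = m * n * (m + n + 1) / 12.0           # Eq. (3)
    Z = (S - mean) / np.sqrt(var)
    tail = 2 * norm.sf(abs(Z))                 # two-sided tail area
    return S, Z, tail
```

Applied to the two samples of Example 1 below, such a computation should reproduce the values S, Z, and the tail area reported there.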
Example 1: Carrying out a Wilcoxon-Mann-Whitney Ranks Test. Suppose that the size m of the first sample is 20 and that the observed values are given in Table 9.37. Suppose also that the size n of the second sample is 36 and that these observed values are given in Table 9.38. We shall test the hypotheses (1) by carrying out a Wilcoxon-Mann-Whitney ranks test.

The 56 values in the two samples are ordered from smallest to largest in Table 9.39. Each observed value in the first sample is identified by the symbol x, and each observed value in the second sample is identified by the symbol y. The sum S of the ranks of the 20 observed values in the first sample is found to be 494. Since m = 20 and n = 36 in this example, it follows from Eqs. (2) and (3) that if H_0 is true, then S has approximately a normal distribution with mean 570 and variance 3420. The standard deviation of S is therefore (3420)^{1/2} = 58.48.

Table 9.37
 0.730   1.411   0.039   0.247
 1.033   1.420   1.352  -0.779
 0.362   1.073   1.171   0.477
 0.859   1.427  -0.174   1.016
 0.911   1.166   1.214   0.273
Table 9.38
 1.520   0.931   1.268   0.756   0.542   1.705  -0.127   0.939
 0.876   0.664   1.039   2.589   1.532   1.277   1.160   1.148
 1.952   0.912   1.281  -1.079   0.065   1.010   1.633   0.482
 2.632   0.274   1.676   1.733   1.428   0.566   0.279   1.267
-0.078   0.789   0.709   1.372

Table 9.39
Rank  Observed value  Sample     Rank  Observed value  Sample
  1      -1.079         y          29      1.016         x
  2      -0.779         x          30      1.033         x
  3      -0.174         x          31      1.039         y
  4      -0.127         y          32      1.073         x
  5      -0.078         y          33      1.148         y
  6       0.039         x          34      1.160         y
  7       0.065         y          35      1.166         x
  8       0.247         x          36      1.171         x
  9       0.273         x          37      1.214         x
 10       0.274         y          38      1.267         y
 11       0.279         y          39      1.268         y
 12       0.362         x          40      1.277         y
 13       0.477         x          41      1.281         y
 14       0.482         y          42      1.352         x
 15       0.542         y          43      1.372         y
 16       0.566         y          44      1.411         x
 17       0.664         y          45      1.420         x
 18       0.709         y          46      1.427         x
 19       0.730         x          47      1.428         y
 20       0.756         y          48      1.520         y
 21       0.789         y          49      1.532         y
 22       0.859         x          50      1.633         y
 23       0.876         y          51      1.676         y
 24       0.911         x          52      1.705         y
 25       0.912         y          53      1.733         y
 26       0.931         y          54      1.952         y
 27       0.939         y          55      2.589         y
 28       1.010         y          56      2.632         y
Hence, if H_0 is true, the random variable Z = (S − 570)/(58.48) will have approximately a standard normal distribution. Since S = 494 in this example, it follows that Z = −1.300. In other words, the observed value of S lies 1.3 standard deviations to the left of its mean.

The Wilcoxon-Mann-Whitney ranks test specifies rejecting H_0 if |Z| > c, where c is an appropriate constant. Therefore, the tail area corresponding to any observed value of Z is the sum of the area of the standard normal distribution to the right of |Z| and the area of that distribution to the left of −|Z|. The tail area corresponding to the observed value Z = −1.3 is found in this way from a table of the standard normal distribution to be 0.1936. Hence, the null hypothesis H_0 should be accepted at any level of significance α_0 < 0.1936.

For small values of m and n, the normal approximation to the distribution of S will not be appropriate. Tables of the exact distributions of S for small sample sizes are given in many published collections of statistical tables.

Ties

Since we have again assumed that both samples come from continuous distributions, there is probability 0 that any two observations will be equal. Nevertheless, because the measurements in an actual problem are made with only limited precision, it may be found that some of the recorded observed values are equal. Suppose that a group of two or more tied values includes at least one x and one y. One procedure for handling these tied values is to assign to each observation in the group the average of the ranks that would be assigned to these observations. When this procedure is used, the value of Var(S) as given in Eq. (3) must be changed to take into account the ties that are present in the data.

We shall not consider this procedure further here. Rather, we shall repeat the recommendation made at the end of Sec. 9.9 that the test be carried out twice. In the first test, the smaller ranks in each group of tied observations should be assigned to the x's and the larger ranks should be assigned to the y's. In the second test, these assignments should be reversed. If the decision to accept or reject H_0 is different for the two assignments, or if the calculated tail areas are very different, the data must be regarded as inconclusive.

EXERCISES

1. Consider again the data in Example 1. Test the hypotheses (1) by applying the Kolmogorov-Smirnov test for two samples.

2. Consider again the data in Example 1. Test the hypotheses (1) by assuming that the observations are taken from two normal distributions with the same variance, and applying a t test of the type described in Sec. 8.9.
3. In an experiment to compare the effectiveness of two drugs A and B in reducing blood glucose concentrations, drug A was administered to 25 patients and drug B was administered to 15 patients. The reductions in blood glucose concentrations for the 25 patients who received drug A are given in Table 9.40. The reductions in concentrations for the 15 patients who received drug B are given in Table 9.41. Test the hypothesis that the two drugs are equally effective in reducing blood glucose concentrations by using the Wilcoxon-Mann-Whitney ranks test.

4. Consider again the data in Exercise 3. Test the hypothesis that the two drugs are equally effective by applying the Kolmogorov-Smirnov test for two samples.

5. Consider again the data in Exercise 3. Test the hypothesis that the two drugs are equally effective by assuming that the observations are taken from two normal distributions with the same variance and applying a t test of the type described in Sec. 8.9.
6. Suppose that X_1, ..., X_m form a random sample of m observations from a continuous distribution for which the p.d.f. f(x) is unknown; and that Y_1, ..., Y_n form an independent random sample of n observations from another continuous distribution for which the p.d.f. g(x) is also unknown. Suppose also that f(x) = g(x − θ) for −∞ < x < ∞, where the value of the parameter θ is unknown (−∞ < θ < ∞). Describe how to use the Wilcoxon-Mann-Whitney ranks test to test the following hypotheses:

H_0: θ ≤ 0,
H_1: θ > 0.
Table 9.40
 1.78   1.82   0.68   0.89   1.26
 1.25   1.95   1.48   0.86   1.07
 1.01   1.81   1.59   1.63   1.31
 0.13   1.38   0.75   0.83   0.72
 0.77   0.39   0.71   0.28   0.91

Table 9.41
 0.35   0.16   0.58   1.64   1.50
 1.12   1.20   0.04   0.49   1.73
 1.54   0.40   0.44   0.90   1.15
7. Consider again the conditions of Exercise 6. Describe how to use the Wilcoxon-Mann-Whitney ranks test to test the following hypotheses for a specified value of θ_0:

H_0: θ = θ_0,
H_1: θ ≠ θ_0.
8. Consider again the conditions of Exercises 6 and 7. Describe how to use the Wilcoxon-Mann-Whitney ranks test to determine a confidence interval for θ with confidence coefficient γ (0 < γ < 1).
9. Consider again the conditions of Exercises 6 and 7. Determine a confidence interval for θ with confidence coefficient 0.90 based on the values given in Example 1.
10. Let X_1, ..., X_m and Y_1, ..., Y_n be the observations in two samples, and suppose that no two of these observations are equal. Consider the mn pairs

(X_1, Y_1), (X_1, Y_2), ..., (X_1, Y_n),
(X_2, Y_1), (X_2, Y_2), ..., (X_2, Y_n),
...
(X_m, Y_1), (X_m, Y_2), ..., (X_m, Y_n).

Let U denote the number of these pairs for which the value of the X component is greater than the value of the Y component. Show that

U = S − (1/2)m(m + 1),

where S is the sum of the ranks assigned to X_1, ..., X_m, as defined in this section.

9.11.
SUPPLEMENTARY EXERCISES
1. Suppose that 400 persons are chosen at random from a large population, and that each person in the sample specifies which one of five breakfast cereals he most prefers. For i = 1, ..., 5, let p_i denote the proportion of the population that prefers cereal i, and let N_i denote the number of persons in the sample who prefer cereal i. It is desired to test the following hypotheses at the level of significance 0.01:

H_0: p_1 = p_2 = ... = p_5,
H_1: The hypothesis H_0 is not true.

For what values of Σ_{i=1}^{5} N_i² should H_0 be rejected?
2. Consider a large population of families that have exactly three children; and suppose that it is desired to test the null hypothesis H_0 that the distribution of the number of boys in each family is a binomial distribution with parameters n = 3 and p = 1/2 against the general alternative H_1 that H_0 is not true. Suppose also that in a random sample of 128 families it is found that 26 families have no boys, 32 families have one boy, 40 families have two boys, and 30 families have three boys. At what levels of significance should H_0 be rejected?

3. Consider again the conditions of Exercise 2, including the observations in the random sample of 128 families, but suppose now that it is desired to test the composite null hypothesis H_0 that the distribution of the number of boys in each family is a binomial distribution for which n = 3 and the value of p is not specified, against the general alternative H_1 that H_0 is not true. At what levels of significance should H_0 be rejected?
4. In order to study the genetic history of three different large groups of native Americans, a random sample of 50 persons is drawn from group 1; a random sample of 100 persons is drawn from group 2; and a random sample of 200 persons is drawn from group 3. The blood type of each person in the samples is classified as A, B, AB, or O, and the results are as given in Table 9.42. Test the hypothesis that the distribution of blood types is the same in all three groups at the level of significance 0.1.

Table 9.42
            A     B     AB    O     Total
Group 1     24     6     5    15      50
Group 2     43    24     7    26     100
Group 3     69    47    22    62     200

5. Consider again the conditions of Exercise 4. Explain how to change the numbers in Table 9.42 in such a way that each row total and each column total remains unchanged, but the value of the χ² test statistic is increased.

6. Consider a χ² test of independence that is to be applied to the elements of a 2 × 2 contingency table. Show that the quantity (N_ij − E_ij)² has the same value for each of the four cells of the table.

7. Consider again the conditions of Exercise 6. Show that the χ² statistic Q can be written in the form

Q = n(N_11 N_22 − N_12 N_21)² / (N_{1+} N_{2+} N_{+1} N_{+2}).
Table 9.43
n + a    n − a
n − a    n + a

Table 9.44
α n         (1 − α)n
(1 − α)n    α n

8. Suppose that a χ² test of independence at the level of significance 0.01 is to be applied to the elements of a 2 × 2 contingency table containing 4n observations, and that the data have the form given in Table 9.43. For what values of a should the null hypothesis be rejected?

9. Suppose that a χ² test of independence at the level of significance 0.05 is to be applied to the elements of a 2 × 2 contingency table containing 2n observations, and that the data have the form given in Table 9.44. For what values of α should the null hypothesis be rejected?
10. In a study of the health effects of air pollution, it was found that the proportion of the total population of city A that suffered from respiratory diseases was larger than the proportion for city B. Since city A was generally regarded as being less polluted and more healthful than city B, this result was considered surprising. Therefore, separate investigations were made for the younger population (under age 40) and for the older population (age 40 or older). It was found that the proportion of the younger population suffering from respiratory diseases was smaller for city A than for city B; and also that the proportion of the older population suffering from respiratory diseases was smaller for city A than for city B. Discuss and explain these results.

11. Suppose that an achievement test in mathematics was given to students from two different high schools A and B. When the results of the test were tabulated, it was found that the average score for the freshmen at school A was higher than the average for the freshmen at school B; and that the same relationship existed for the sophomores at the two schools, for the juniors, and for the seniors. On the other hand, it was found also that the average score of all the students at school A was lower than that of all the students at school B. Discuss and explain these results.

12.
A random sample of 100 hospital patients suffering from depression received a particular treatment over a period of three months. Prior to the beginning
of the treatment, each patient was classified as being at one of five levels of depression, where level 1 represented the most severe level of depression and level 5 represented the mildest level. At the end of the treatment each patient was again classified according to the same five levels of depression. The results are given in Table 9.45. Discuss the use of this table for determining whether the treatment has been helpful in alleviating depression.
13.
Suppose that a random sample of three observations is drawn from a distribution with the following p.d.f.:

f(x | θ) = ...   for 0 < x < 1,
f(x | θ) = 0     otherwise,

where θ > 0. Determine the p.d.f. of the sample median.

14. Suppose that a random sample of n observations is drawn from a distribution for which the p.d.f. is as given in Exercise 13. Determine the asymptotic distribution of the sample median.

15. Suppose that a random sample of n observations is drawn from a t distribution with a degrees of freedom. Show that the asymptotic distribution of both the sample mean X̄_n and the sample median X̃_n is normal; and determine the positive integers a for which the variance of this asymptotic distribution is smaller for X̄_n than for X̃_n.

16. Suppose that X_1, ..., X_n form a large random sample from a distribution for which the p.d.f. is

h(x | θ) = α f(x | θ) + (1 − α) g(x | θ).

Here f(x | θ) is the p.d.f. of the normal distribution with unknown mean θ and variance 1; g(x | θ) is the p.d.f. of the normal distribution with the same unknown mean θ and variance σ²; and 0 ≤ α ≤ 1. Let X̄_n and X̃_n denote the sample mean and the sample median, respectively.
(a) For σ² = 100, determine the values of α for which the M.S.E. of X̃_n will be smaller than the M.S.E. of X̄_n.
(b) For α = 1/2, determine the values of σ² for which the M.S.E. of X̃_n will be smaller than the M.S.E. of X̄_n.

Table 9.45
                                Level of depression after treatment
Level of depression
before treatment        1      2      3      4      5
       1                7      3      0      0      0
       2                1     27     14      2      0
       3                0      0     19      8      2
       4                0      1      2     12      0
       5                0      0      1      1      0
•
.
.
.
•
g(y" " " y, )
_
{ � ! f( YI ) ' "
f ( y. )
.
.
for Y1 < Y2 < . . . < YM otherwise.
18. Let Y1 < Y2 < Y3 denote the order statistics of a random sample of three observations from a uniform distribution on the interval (0,1). Deter mine the conditiona1 distribution of Y2 given that Y1 = YI and Y3 Y3 (0 < Y, < Y, < 1). 19. Suppose that a random sample of 20 observations is drawn from an unknown continuous distribution, and let Y1 < . . . < Y20 denote the order statistics of the sample. Also, let 0 denote the 0.3 quantile of the distribution, and suppose that it is desired to present a confidence interval for 8 which has the form ( Y" Y" ,). Determine the value or r (r � 1 , 2, . . , 17) ror which this interval will have the largest confidence coefficient y. and determine the value of y. 20. Suppose that Xl ' . . . ' X", form a random sample from a continuous distribu- . tion for which the p.d.f. f(x) is unknown; that Yl, . . . , YII form an indepen dent random sample from another continuous distribution for which the . p.d.!. g(x) also is unknown; and that f(x) � g(x - 0) ror - 00 < x < 00, where the value of the parameter 8 is unknown ( - 00 < 8 < 00). Suppose · that it is desired to carry out a Wilcoxon-Mann-Whitney ranks test of the following hypotheses at a specified level or significance a (0 < a < 1): =
.
Ho : H, :
8 = 8o • 0 * 00,
Assume that no two of the observations are equal; and Jet VSo denote the number of pairs ( Xi' Yj) such that X, - Yj > (Jo, where i = 1 , m and j � 1, . . . , n. Show that for large values of m and n, the hypothesis Ho should be accepted if and only if • . . .
mn
< U < 8, 2
+
(
c 1
_
�2 )[ mn (m 1 2
+ n
+ 1)
]'/2
'
where c(y) denotes the y quantile of the standard normal distribution. Hint; See Exercise 10 or Sec. 9.10.
21. Consider again the conditions of Exercise 20. Show that a confidence interval for θ with confidence coefficient 1 − α can be obtained by the following procedure: Let k be the largest integer less than or equal to

mn/2 − c(1 − α/2)[mn(m + n + 1)/12]^{1/2}.

Also, let A be the kth smallest of the differences X_i − Y_j, where i = 1, ..., m and j = 1, ..., n; and let B be the kth largest of these mn differences. Then the interval A < θ < B is a confidence interval of the required type.
Linear Statistical Models

10.1. THE METHOD OF LEAST SQUARES

Fitting a Straight Line

Suppose that each of ten patients is treated first with a certain amount of a standard drug A and then with an equal amount of a new drug B, and that the change in blood pressure induced by each drug is observed for each patient. This change in blood pressure will be called the response of the patient. For i = 1, ..., 10, we shall let x_i denote the response, measured in appropriate units, of the ith patient when he receives drug A, and we shall let y_i denote his response when he receives drug B. We shall suppose also that the observed values of the responses are as given in Table 10.1. The ten points (x_i, y_i) for i = 1, ..., 10 are plotted in Fig. 10.1.

Suppose now that we are interested in describing the relationship between the response y of a patient to drug B and his response x to drug A. In order to obtain a simple expression for this relationship, we might wish to fit a straight line to the ten points plotted in Fig. 10.1. Although these ten points obviously do not lie exactly on a straight line, we might believe that the deviations from such a line are caused by the fact that the observed change in the blood pressure of each patient is affected not only by the two drugs but also by various other factors. In other words, we might believe that if it were possible to control all these other factors, the observed points would actually lie on a straight line. We might believe further that if we measured the responses to the two drugs for a very large number of patients, instead of for just ten patients, we would then find that the observed points tend to cluster along a straight line. Perhaps we might also wish
Table 10.1
  i      x_i     y_i
  1      1.9     0.7
  2      0.8    -1.0
  3      1.1    -0.2
  4      0.1    -1.2
  5     -0.1    -0.1
  6      4.4     3.4
  7      4.6     0.0
  8      1.6     0.8
  9      5.5     3.7
 10      3.4     2.0
Figure 10.1  A plot of the observed values in Table 10.1.
to be able to predict the response y of a future patient to the new drug B on the basis of his response x to the standard drug A. One procedure for making such a prediction would be to fit a straight line to the points in Fig. 10.1, and to use this line for predicting the value of y corresponding to any given value of x. It can be seen from Fig. 10.1 that if we did not have to consider the point (4.6, 0.0), which is obtained from the patient for whom i = 7 in Table 10.1, then the other nine points lie roughly along a straight line. One arbitrary line which fits reasonably well to these nine points is sketched in Fig. 10.2. However, if we wish
to fit a straight line to all ten points, it is not clear just how much the line in Fig. 10.2 should be adjusted in order to accommodate the anomalous point. We shall now describe a method for fitting such a line.

Figure 10.2  A straight line fitted to nine of the points in Table 10.1.
The Least-Squares Line

We shall assume here that we are interested in fitting a straight line to the points plotted in Fig. 10.1 in order to obtain a simple mathematical relationship for expressing the response y of a patient to the new drug B as a function of his response x to the standard drug A. In other words, our main objective is to be able to predict closely a patient's response y to drug B from his response x to drug A. We are interested, therefore, in constructing a straight line such that, for each observed response x_i, the corresponding value of y on the straight line will be as close as possible to the actual observed response y_i. The vertical deviations of the ten plotted points from the line drawn in Fig. 10.2 are sketched in Fig. 10.3.

One method of constructing a straight line to fit the observed values is called the method of least squares. According to this method, the line should be drawn so that the sum of the squares of the vertical deviations of all the points from the line is a minimum. We shall now study this method in more detail.

Consider an arbitrary straight line y = β_1 + β_2 x, in which the values of the constants β_1 and β_2 are to be determined. When x = x_i, the height of this line is β_1 + β_2 x_i. Therefore, the vertical distance between the point (x_i, y_i) and the line is |y_i − (β_1 + β_2 x_i)|.

Figure 10.3  Vertical deviations of the plotted points from a straight line.

Suppose that the line is to be fitted to n points, and let Q denote the sum of the squares of the vertical distances at the n points. Then
Q = Σ_{i=1}^{n} [y_i − (β_1 + β_2 x_i)]².                                 (1)

The method of least squares specifies that the values of β_1 and β_2 must be chosen so that the value of Q is minimized. It is not difficult to minimize the value of Q with respect to β_1 and β_2. We have

∂Q/∂β_1 = −2 Σ_{i=1}^{n} (y_i − β_1 − β_2 x_i)                            (2)

and

∂Q/∂β_2 = −2 Σ_{i=1}^{n} (y_i − β_1 − β_2 x_i) x_i.                       (3)

By setting each of these two partial derivatives equal to 0, we obtain the following pair of equations:

β_1 n + β_2 Σ_{i=1}^{n} x_i = Σ_{i=1}^{n} y_i,
β_1 Σ_{i=1}^{n} x_i + β_2 Σ_{i=1}^{n} x_i² = Σ_{i=1}^{n} x_i y_i.         (4)
The equations (4) are called the normal equations for β_1 and β_2. By considering the second-order derivatives of Q, we can show that the values of β_1 and β_2 which satisfy the normal equations will be the values for which the sum of squares Q in Eq. (1) is minimized. If we denote these values by β̂_1 and β̂_2, then the equation of the straight line obtained by the method of least squares will be y = β̂_1 + β̂_2 x. This line is called the least-squares line. As usual, we shall let x̄_n = (1/n) Σ_{i=1}^{n} x_i and ȳ_n = (1/n) Σ_{i=1}^{n} y_i. By solving the normal equations (4) for β_1 and β_2, we obtain the following results:

β̂_2 = Σ_{i=1}^{n} (x_i − x̄_n)(y_i − ȳ_n) / Σ_{i=1}^{n} (x_i − x̄_n)²,
β̂_1 = ȳ_n − β̂_2 x̄_n.                                                    (5)

For the values given in Table 10.1, n = 10 and it is found from Eq. (5) that β̂_1 = −0.786 and β̂_2 = 0.685. Hence, the equation of the least-squares line is y = −0.786 + 0.685x. This line is sketched in Fig. 10.4.
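As a rough numerical check, not part of the text, Eq. (5) can be evaluated directly from the data of Table 10.1; numpy is assumed to be available.

```python
import numpy as np

x = np.array([1.9, 0.8, 1.1, 0.1, -0.1, 4.4, 4.6, 1.6, 5.5, 3.4])
y = np.array([0.7, -1.0, -0.2, -1.2, -0.1, 3.4, 0.0, 0.8, 3.7, 2.0])

beta2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
beta1 = y.mean() - beta2 * x.mean()
print(beta1, beta2)   # approximately -0.786 and 0.685, as stated above
```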
Figure 10.4  The least-squares straight line.

Fitting a Polynomial by the Method of Least Squares

Suppose now that instead of simply fitting a straight line to n plotted points, we wish to fit a polynomial of degree k (k ≥ 2). Such a polynomial will have the following form:

y = β_1 + β_2 x + β_3 x² + ... + β_{k+1} x^k.                             (6)

The method of least squares specifies that the constants β_1, ..., β_{k+1} should be chosen so that the sum Q of the squares of the vertical deviations of the points from the curve is a minimum. In other words, these constants should be chosen so as to minimize the following expression for Q:

Q = Σ_{i=1}^{n} [y_i − (β_1 + β_2 x_i + ... + β_{k+1} x_i^k)]².           (7)
If we calculate the k + 1 partial derivatives ∂Q/∂β_1, ..., ∂Q/∂β_{k+1}, and we set each of these derivatives equal to 0, we obtain the following k + 1 linear equations involving the k + 1 unknown values β_1, ..., β_{k+1}:

β_1 n + β_2 Σ x_i + ... + β_{k+1} Σ x_i^k = Σ y_i,
β_1 Σ x_i + β_2 Σ x_i² + ... + β_{k+1} Σ x_i^{k+1} = Σ x_i y_i,
  ..............................................................
β_1 Σ x_i^k + β_2 Σ x_i^{k+1} + ... + β_{k+1} Σ x_i^{2k} = Σ x_i^k y_i,   (8)

where each sum runs over i = 1, ..., n. As before, these equations are called the normal equations. There will be a unique set of values of β_1, ..., β_{k+1} which satisfy the normal equations if and only if the determinant of the (k + 1) × (k + 1) matrix formed from the coefficients of β_1, ..., β_{k+1} is not zero. If there are at least k + 1 different values among the n observed values x_1, ..., x_n, then this determinant will not be zero and there will be a unique solution to the normal equations. We shall assume that this condition is satisfied. It can be shown by the methods of advanced calculus that the unique values of β_1, ..., β_{k+1} which satisfy the normal equations will then be the values which minimize the value of Q in Eq. (7). If we denote these values by β̂_1, ..., β̂_{k+1}, then the least-squares polynomial will have the form y = β̂_1 + β̂_2 x + ... + β̂_{k+1} x^k.
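A brief sketch, not part of the text, of obtaining the least-squares polynomial numerically is given below; it builds the matrix of powers of x and solves the least-squares problem directly rather than writing out Eq. (8). The function name is hypothetical and numpy is assumed. Applied with k = 2 to the data of Table 10.1, it should agree, up to rounding, with the parabola obtained in Example 1 below.

```python
import numpy as np

def fit_polynomial(x, y, k):
    """Least-squares coefficients (beta_1, ..., beta_{k+1}) of Eq. (6)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    Z = np.vander(x, k + 1, increasing=True)   # columns 1, x, x**2, ..., x**k
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta
```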
Example 1: Fitting a Parabola. Suppose that we wish to fit a polynomial of the form y = β_1 + β_2 x + β_3 x² (which represents a parabola) to the ten points given in Table 10.1. In this example, it is found that the normal equations (8) are as follows:

10 β_1 + 23.3 β_2 + 90.37 β_3 = 8.1,
23.3 β_1 + 90.37 β_2 + 401.0 β_3 = 43.59,                                 (9)
90.37 β_1 + 401.0 β_2 + 1892.7 β_3 = 204.55.

The unique values of β_1, β_2, and β_3 that satisfy these three equations are β̂_1 = −0.744, β̂_2 = 0.616, and β̂_3 = 0.013. Hence, the least-squares parabola is

y = −0.744 + 0.616x + 0.013x².                                            (10)

This curve is sketched in Fig. 10.5 together with the least-squares straight line. Because the coefficient of x² in Eq. (10) is so small, the least-squares parabola and the least-squares straight line are very close together over the range of values included in Fig. 10.5.

Figure 10.5  The least-squares parabola and the least-squares straight line.

Fitting a Linear Function of Several Variables

We shall now consider an extension of the example discussed at the beginning of this section, in which we were interested in representing a patient's response to a new drug B as a linear function of his response to drug A. Suppose that we wish
to represent a patient's response to drug B as a linear function involving not only his response to drug A but also some other relevant variables. For example, we may wish to represent the patient's response y to drug B as a linear function involving his response x_1 to drug A, his heart rate x_2, his blood pressure x_3 before he receives any drugs, and other relevant variables x_4, ..., x_k. Suppose that for each patient i (i = 1, ..., n) we measure his response y_i to drug B, his response x_{i1} to drug A, and also his values x_{i2}, ..., x_{ik} for the other variables. Suppose also that in order to fit these observed values for the n patients, we wish to consider a linear function having the form

y = β_1 + β_2 x_1 + ... + β_{k+1} x_k.                                    (11)

In this case, also, the values of β_1, ..., β_{k+1} can be determined by the method of least squares. For each given set of observed values x_{i1}, ..., x_{ik}, we again consider the difference between the observed response y_i and the value β_1 + β_2 x_{i1} + ... + β_{k+1} x_{ik} of the linear function given in Eq. (11). As before, it is required to minimize the sum Q of the squares of these differences. Here,

Q = Σ_{i=1}^{n} [y_i − (β_1 + β_2 x_{i1} + ... + β_{k+1} x_{ik})]².       (12)

A set of k + 1 normal equations can again be obtained by setting each of the partial derivatives ∂Q/∂β_j equal to 0 for j = 1, ..., k + 1. These equations will have the following form:

β_1 n + β_2 Σ x_{i1} + ... + β_{k+1} Σ x_{ik} = Σ y_i,
β_1 Σ x_{i1} + β_2 Σ x_{i1}² + ... + β_{k+1} Σ x_{i1} x_{ik} = Σ x_{i1} y_i,
  ..............................................................
β_1 Σ x_{ik} + β_2 Σ x_{ik} x_{i1} + ... + β_{k+1} Σ x_{ik}² = Σ x_{ik} y_i,   (13)

where each sum runs over i = 1, ..., n. If the determinant of the (k + 1) × (k + 1) matrix formed from the coefficients of β_1, ..., β_{k+1} in these equations is not zero, then there will be a unique solution β̂_1, ..., β̂_{k+1} of the equations. The least-squares linear function will then be y = β̂_1 + β̂_2 x_1 + ... + β̂_{k+1} x_k.
Example 2: Fitting a Linear Function of Two Variables. Suppose that we expand Table 10.1 to include the values given in the third column in Table 10.2. Here, for each patient i (i = 1, ..., 10), x_{i1} denotes his response to the standard drug A; x_{i2} denotes his heart rate; and y_i denotes his response to the new drug B.

Table 10.2
  i     x_{i1}   x_{i2}    y_i
  1      1.9      66       0.7
  2      0.8      62      -1.0
  3      1.1      64      -0.2
  4      0.1      61      -1.2
  5     -0.1      63      -0.1
  6      4.4      70       3.4
  7      4.6      68       0.0
  8      1.6      62       0.8
  9      5.5      68       3.7
 10      3.4      66       2.0

Suppose also that to these values we wish to fit a linear function having the form

y = β_1 + β_2 x_1 + β_3 x_2.

In this example, it is found that the normal equations (13) are as follows:

10 β_1 + 23.3 β_2 + 650 β_3 = 8.1,
23.3 β_1 + 90.37 β_2 + 1563.6 β_3 = 43.59,                                (14)
650 β_1 + 1563.6 β_2 + 42,334 β_3 = 563.1.

The unique values of β_1, β_2, and β_3 which satisfy these three equations are β̂_1 = −11.4527, β̂_2 = 0.4503, and β̂_3 = 0.1725. Hence, the least-squares linear function is

y = −11.4527 + 0.4503 x_1 + 0.1725 x_2.                                   (15)
It should be noted that the problem of fitting a polynomial of degree k involving only one variable, as specified by Eq. (6), can be regarded as a special case of the problem of fitting a linear function involving several variables, as specified by Eq. (11). To make Eq. (11) applicable to the problem of fitting a polynomial having the form given in Eq. (6), we define the k variables x_1, ..., x_k simply as x_1 = x, x_2 = x², ..., x_k = x^k.

A polynomial involving more than one variable can also be represented in the form of Eq. (11). For example, suppose that the values of four variables r, s, t, and y are observed for several different patients, and that we wish to fit to these observed values a function having the following form:

y = β_1 + β_2 r + β_3 r² + β_4 rs + β_5 t² + β_6 t³ + β_7 rst.            (16)

We can regard the function in Eq. (16) as a linear function having the form given in Eq. (11) with k = 6 if we define the six variables x_1, ..., x_6 as follows: x_1 = r, x_2 = r², x_3 = rs, x_4 = t², x_5 = t³, and x_6 = rst.
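A hypothetical check, not part of the text, of Example 2 can be carried out by assembling the design matrix for Eq. (11) and solving the least-squares problem directly; numpy is assumed.

```python
import numpy as np

x1 = np.array([1.9, 0.8, 1.1, 0.1, -0.1, 4.4, 4.6, 1.6, 5.5, 3.4])       # drug A
x2 = np.array([66, 62, 64, 61, 63, 70, 68, 62, 68, 66], dtype=float)     # heart rate
y  = np.array([0.7, -1.0, -0.2, -1.2, -0.1, 3.4, 0.0, 0.8, 3.7, 2.0])    # drug B

Z = np.column_stack([np.ones_like(y), x1, x2])      # columns 1, x1, x2
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(beta)   # approximately (-11.45, 0.450, 0.173), as in Eq. (15)
```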
EXERCISES

1. Show that the value of β̂_2 in Eq. (5) can be rewritten in each of the following three forms:
(a) β̂_2 = Σ_{i=1}^{n} (x_i − x̄_n)(y_i − ȳ_n) / Σ_{i=1}^{n} (x_i − x̄_n)²,
(b) β̂_2 = Σ_{i=1}^{n} (x_i − x̄_n) y_i / Σ_{i=1}^{n} (x_i − x̄_n)²,
(c) ...

..., and that the distribution of n σ̂²/σ² is a χ² distribution with n − 2 degrees of freedom.
Tests of Hypotheses about the Regression Coefficients

Tests of Hypotheses about β_1. Let β_1* be a specified number (−∞ < β_1* < ∞), and suppose that it is desired to test the following hypotheses about the regression coefficient β_1:

H_0: β_1 = β_1*,
H_1: β_1 ≠ β_1*.                                                          (17)
We shaH construct a I test of these hypotheses which will reject Ho when the least-squares estimator /11 is far from the hypothesized value 13 t and will accept JJ0 otherwise. It follows from Eqs. (9) and (10) of Sec. 10.2 that when the null hypothesis H0 is true, the following random variable WI will have a standard normal distribution: (18) Since the value of 0 is unknown, a test of the hypotheses (17) cannot be based simply on the random variable WI' However, the random variable S 2jo2 has a X2 distribution with n - 2 degrees of freedom for all possible values of the parameters /31' /32 ' and 0 2. Moreover, since PI and S2 are independent random variables, it follows that WI and S2 are also indepe�dent. Hence, when the hypothesis Ho is true, the following random variable VI will have a l distribution ,. with n - 2 degrees of freedom:
or
61 8
Linear StaUsticai Models
It can be seen from Eq. (19) that the random variable Vi is a statistic, since it , (x", V,,) and it is not a function is a function of only the observations (x l' Yl), 2 of the parameters (3,. (32' and 0 Thus. a reasonable test of the hypotheses (17) specifies rejecting Ho if IVI I > CI • where C1 is a suitable constant whose value can be chosen to obtain any specified level of significance 0 0 (0 < 0 0 < 1). This same test procedure will also be the likelihood ratio test procedure for the hypotheses (17). After the values Xl' YI" . . , Xn, y" have been observed, the 2 likelihood function /,,( y I X, f31' f32 . 0 ) is as given in Eq. (2) of Sec. 10.2. The likelihood ratio test procedure is to compare the foHowing two values: the max.imum value attained by this likelihood function when P2 and a 2 vary over all their possible values but P1 can only have the value f3t; and the maximum value attained by the likelihood function when all three parameters PI' P2' and a 2 vary over all their possible values. Therefore, we consider the fonowing ratio: • • .
r ( Y l x) -
SUPP1 ,Pl,U 1 j,,( y I X, f3u f3l ' 0 2 )
SUPPl,.,2 jn ( y l x , f3i. f32 , o 2 ) .
(20)
The procedure then specifies that Ho should be rejected if r( Y I x) > k. where k is some chosen constant, and that Ho should be accepted if r( y I x) � k. It can be shown that this procedure is equivalent to the procedure which specifies rejecting Ho if lUll > Ct. The derivation of this result will not be given. As an illustration of the use of this test procedure, suppose that in a proble� of simple linear regression we are interested in testing tbe null hypothesis that Ihe regression line y = /11 + f32x passes through the origin against the alternative hypothesis that the line does not pass through the origin. These hypotheses can be stated in the following form:
H_0: β_1 = 0,
H_1: β_1 ≠ 0.                                                             (21)
Here the hypothesized value f3{ is o. Let U1 denote the value of VI calculated from a given set of observed values ( XI ' Yi ) for i = 1, . . . , n. Then the tail area corresponding to this value is the two-sided tail area
(22) For example, suppose that n = 20 and the calculated value of U, is 2.1. It is found from a table of the t distribution with 18 degrees of freedom that the corresponding tail area is 0.05. Hence, the null hypothesis Ho should be accepted at any level of significance ao < 0.05, and it should be rejected at any level of significance 00 > 0.05.
10.3. Tests of Hypotheses and Confidence Interval. In Simple Linear Regression
Let P2* be a specified number
Tests of Hypotheses about fl2'
( - 00
y, the value of y corresponding to any specified level of significance ao (0 < ao < 1) can be determined from a table of this F distribution. -
-
Confidence Intervals and Confidence Sets A confidence interval for PI can be obtained from the test of the hypotheses (17) based on the statistic UI defined by Eq. (19). It was explained in Sec. 8.5 that for any given observed values (Xi' Yi) for i 1, . , n , the set of all values of Pi for which the null hypothesis Ho in (17) would be accepted at the level of significance a o will form a confidence interval for PI with confidence coefficient 1 ao. Specifically, let gn_'(x) denote the p.d.f. of the , distribution with n 2 degrees of freedom� and let c denote a constant such that =
. .
-
-
,
(41) Then the set of all values of Pi such that IUd < c will form a confidence interval for PI with confidence coefficient 1 «0' Similarly, if U2 is defined by Eq. (25) and if c satisfies Eq. (41), then the set of all values of Pi such that IU,I < c will form a confidence interval for p, with confidence coefficient 1 ao. A confidence interval for a particular linear combination having the form al P ) + a 2 p2 can be constructed in the same way from the statistic U12 defined by Eq. (30). Specifically, suppose that we wish to construct a confidence interval for -
-
624
Linear Statislical Models
the height b = /3, + /32X of the regression line at a given point x. If we let 01 = 1 and a, � x in Eq. (30), then the set of all values of b such that IUd < c will form a confidence interval for b with confidence coefficient 1 - 0:0, It follows from some algebra that for each value of x, the upper and lower limits of this confidence interval will lie on the curves defined by the following relations (note the sign ± before c):
[ ,
]
"' )2 '/2 S L.. t _ 1 ( X i - X Y � hI + h, x ± c n ( n 2) . "' ' XII ) ' '-' , - I ( XI "
_
"
.
_
_
(42)
where c and S' are defined by Eqs. (41) and (11). In other words, with confidence coefficient 1 - ao for any given value of x, the actual value /31 + f32x of the regression line will lie between the value obtained by using the plus sign in (42) and the value obtained by using the minus sign. Next, consider the problem of constructing a confidence set for the pair of unknown regression coefficients /31 and /32' Such a confidence set can be obtained from the statistic U 2 defined by Eq. (40), which was used to test the hypotheses (32). Specifically, let h,, "_,(x) denote the p.d.!. of the F distribution with 2 and n - 2 degrees of freedom, and let y be a constant such that (43)
Then the set of all pairs of values of Pi and /32* such that U 2 < y will form a confidence set for the pair PI and /32 with confidence coefficient 1 - 0: 0 . It can be shown (see Exercise 17) that this confidence set will contain aU the points (Ill ' Il, ) inside a certain ellipse in the f3IP2-plane. [n other words, this confidence set will actually be a confidence ellipse. The confidence eflip'" that has just been derived for III and 112 can be used to construct a confidence set for the entire regression line y = /31 + /32X. Corre sponding to each point (Pl ' /32) inside the ellipse, we can draw a straight line Y = PI + P2 x in the xy-plane. The collection of all these straight lines corre sponding to all the points (/JI ' fJ2) inside the elljpse will be a confidence set with confidence coefficient 1 - ao for the actual regression line. A rather lengthy and detaiJed analysis, which will not be presented here [see Kendall and Stuart (1973)], shows that the upper and lower limits of this confidence set are the curves defined by the following relations: Y � hI + h, x
±
[ ,
( ) 2yS . ""', - I x, - X ) n ( 1I - 2) "'" '-'.- 1 ( XI XII ' "' "
'
.
_
-
]
1/2 ,
(44)
where y and S' are defined by Eqs. (43) and (11). In other words, with confidence coefficient 1 - ao• the actua1 regression line y � III + Il,x will lie
1 0.3. Teats of Hypotheses and Confidence Intervals In Simple Linear Regression
625
Table 10.5 ao
0.05
n
-
2 5 10 20 60
120 00
0.025
2 5 10 20 60 120 00
2
c
4.30 2.57 2.23 2.09 2.00 1.98 1.96
(2y)'/2
... each of the estimators β̂_1, ..., β̂_p will be a linear combination of the components Y_1, ..., Y_n of the vector Y. Since each of these components has a normal distribution, it follows that each estimator β̂_j will also have a normal distribution. We shall now derive the means, variances, and covariances of these estimators.
.
Mean Vector and Covariance Matrix

Suppose that Y is an n-dimensional random vector with components Y_1, ..., Y_n. Thus,

Y = [Y_1, ..., Y_n]'.                                                     (12)
The expectation E(Y) of this random vector is defined to be the n-dimensional vector whose components are the expectations of the individual components of Y. Hence,

E(Y) = [E(Y_1), ..., E(Y_n)]'.                                            (13)

The vector E(Y) is called the mean vector of Y. The covariance matrix of the random vector Y is defined to be the n × n matrix such that, for i = 1, ..., n and j = 1, ..., n, the element in the ith row and jth column is Cov(Y_i, Y_j). We shall let Cov(Y) denote this covariance matrix. Thus, if Cov(Y_i, Y_j) = σ_ij, then

Cov(Y) = [σ_ij], the n × n matrix whose entry in the ith row and jth column is σ_ij.     (14)

For i = 1, ..., n, Var(Y_i) = Cov(Y_i, Y_i) = σ_ii. Therefore, the n diagonal elements of the matrix Cov(Y) are the variances of Y_1, ..., Y_n. Furthermore, since Cov(Y_i, Y_j) = Cov(Y_j, Y_i), then σ_ij = σ_ji. Therefore, the matrix Cov(Y) must be symmetric.

The mean vector and the covariance matrix of the random vector Y in the general linear model can easily be determined. It follows from Eq. (1) that

E(Y) = Zβ.                                                                (15)

Also, the components Y_1, ..., Y_n of Y are independent, and the variance of each of these components is σ². Therefore,

Cov(Y) = σ² I,                                                            (16)

where I is the n × n identity matrix.
Theorem 1. Suppose that Y is an n-dimensional random vector as specified by Eq. (12), for which the mean vector E(Y) and the covariance matrix Cov(Y) exist. Suppose also that A is a p × n matrix whose elements are constants, and that W is a p-dimensional random vector defined by the relation W = AY. Then E(W) = AE(Y) and Cov(W) = A Cov(Y) A'.

Proof. Let the elements of the matrix A be denoted as follows:

A = [a_ir]   for i = 1, ..., p and r = 1, ..., n.                         (17)

Then the ith component of the vector E(W) is

E(W_i) = E(Σ_{r=1}^{n} a_ir Y_r) = Σ_{r=1}^{n} a_ir E(Y_r).               (18)

It can be seen that the final summation in Eq. (18) is the ith component of the vector AE(Y). Hence, E(W) = AE(Y). Next, for i = 1, ..., p and j = 1, ..., p, the element in the ith row and jth column of the p × p matrix Cov(W) is

Cov(W_i, W_j) = Cov(Σ_{r=1}^{n} a_ir Y_r, Σ_{t=1}^{n} a_jt Y_t).          (19)

Therefore, by Exercise 7 of Sec. 4.6,

Cov(W_i, W_j) = Σ_{r=1}^{n} Σ_{t=1}^{n} a_ir a_jt Cov(Y_r, Y_t).          (20)

It can be verified that the right side of Eq. (20) is the element in the ith row and jth column of the p × p matrix A Cov(Y) A'. Hence, Cov(W) = A Cov(Y) A'.
The means, the variances, and the covariances of the estimators β̂_1, ..., β̂_p can be obtained by applying Theorem 1. It is known from Eq. (11) that β̂ can be represented in the form β̂ = AY, where A = (Z'Z)^{-1} Z'. Therefore, it follows from Theorem 1 and Eq. (15) that

E(β̂) = (Z'Z)^{-1} Z' E(Y) = (Z'Z)^{-1} Z'Zβ = β.                          (21)

In other words, E(β̂_j) = β_j for j = 1, ..., p. Also, it follows from Theorem 1 and Eq. (16) that

Cov(β̂) = (Z'Z)^{-1} Z' Cov(Y) Z (Z'Z)^{-1} = (Z'Z)^{-1} Z' (σ² I) Z (Z'Z)^{-1} = σ² (Z'Z)^{-1}.     (22)

Thus, for j = 1, ..., p, Var(β̂_j) will be equal to σ² times the jth diagonal entry of the matrix (Z'Z)^{-1}. Also, for i ≠ j, Cov(β̂_i, β̂_j) will be equal to σ² times the entry in the ith row and jth column of the matrix (Z'Z)^{-1}.
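The following sketch, not part of the text, illustrates the relations β̂ = (Z'Z)^{-1}Z'y and Cov(β̂) = σ²(Z'Z)^{-1} on simulated data; the design matrix and parameter values are hypothetical and numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 50, 2.0
Z = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, -2.0, 0.5])
y = Z @ beta_true + rng.normal(scale=sigma, size=n)

ZtZ_inv = np.linalg.inv(Z.T @ Z)
beta_hat = ZtZ_inv @ Z.T @ y          # least-squares estimators, Eq. (11)
cov_beta = sigma ** 2 * ZtZ_inv       # Eq. (22), using the true sigma^2
print(beta_hat)
print(np.sqrt(np.diag(cov_beta)))     # standard deviations of the estimators
```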
The Gauss-Markov Theorem for the General Linear Model

Suppose that the observations Y_1, ..., Y_n are uncorrelated; that E(Y_i) is given by Eq. (1); that Var(Y_i) = σ² for i = 1, ..., n; and that no further assumptions are made about the distribution of Y_1, ..., Y_n. In particular, it is not assumed that Y_1, ..., Y_n necessarily have normal distributions. Suppose also that it is desired to estimate the value of θ = c_1 β_1 + ... + c_p β_p + c_{p+1}, where c_1, ..., c_{p+1} are given constants, and consider the estimator θ̂ = c_1 β̂_1 + ... + c_p β̂_p + c_{p+1}. In this case, just as in the problem of simple linear regression discussed in Sec. 10.2, the following result, which is known as the Gauss-Markov theorem, can be established:

Among all unbiased estimators of θ which are linear combinations of the observations Y_1, ..., Y_n, the estimator θ̂ has the smallest variance for all possible values of β_1, ..., β_p and σ².

In particular, for j = 1, ..., p, the least-squares estimator β̂_j will have the smallest variance among all linear unbiased estimators of β_j. Furthermore, if it is also assumed that the observations Y_1, ..., Y_n are independent and have normal distributions, as in the general linear model, then θ̂ will have the smallest variance among all unbiased estimators of θ, including unbiased estimators that are not linear functions of Y_1, ..., Y_n.
The Joint Distribution of the Estimators

We shall let the entries in the symmetric p × p matrix (Z'Z)^{-1} be denoted as follows:

(Z'Z)^{-1} = [ζ_ij]   for i = 1, ..., p and j = 1, ..., p.                (23)

It has been shown earlier in this section that the joint distribution of the estimators β̂_1, ..., β̂_p has the following properties: For j = 1, ..., p, the estimator β̂_j has a normal distribution with mean β_j and variance ζ_jj σ². Furthermore, for i ≠ j, we have Cov(β̂_i, β̂_j) = ζ_ij σ².

For i = 1, ..., n, we shall let Ŷ_i denote the M.L.E. of E(Y_i). It follows from Eq. (1) that

Ŷ_i = β̂_1 z_{i1} + ... + β̂_p z_{ip}.                                      (24)

Also, we shall let the random variable S² be defined as follows:

S² = Σ_{i=1}^{n} (Y_i − Ŷ_i)².                                             (25)

This sum of squares S² can also be represented in the following form:

S² = (Y − Zβ̂)'(Y − Zβ̂).                                                    (26)

It can be shown by methods beyond the scope of this book that the random variable S²/σ² has a χ² distribution with n − p degrees of freedom. Furthermore, it can be shown that the random variable S² and the random vector β̂ are independent.

From Eq. (6), we see that σ̂² = S²/n. Hence, the random variable n σ̂²/σ² has a χ² distribution with n − p degrees of freedom; and the estimators σ̂² and β̂ are independent. The description of the joint distribution of the estimators β̂_1, ..., β̂_p and σ̂² is now complete.
Testing Hypotheses

Suppose that it is desired to test the hypothesis that one of the regression coefficients β_j has a particular value β_j*. In other words, suppose that the following hypotheses are to be tested:

H_0: β_j = β_j*,
H_1: β_j ≠ β_j*.                                                           (27)

Since Var(β̂_j) = ζ_jj σ², it follows that when H_0 is true, the following random variable W_j will have a standard normal distribution:

W_j = (β̂_j − β_j*) / (ζ_jj^{1/2} σ).                                       (28)

Furthermore, since the random variable S²/σ² has a χ² distribution with n − p degrees of freedom, and since S² and β̂_j are independent, it follows that when H_0 is true, the following random variable U_j will have a t distribution with n − p degrees of freedom:

U_j = W_j / [S² / ((n − p) σ²)]^{1/2},

or

U_j = (β̂_j − β_j*) / [ζ_jj S² / (n − p)]^{1/2}.                            (29)

The test of the hypotheses (27) specifies that the null hypothesis H_0 should be rejected if |U_j| > c, where c is a suitable constant whose value can be chosen to obtain any specified level of significance α_0 (0 < α_0 < 1). Thus, if u is the value of U_j observed in a given problem, the corresponding tail area is the two-sided area

Pr(U > |u|) + Pr(U < −|u|),                                                (30)

where U has a t distribution with n − p degrees of freedom.
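A tentative sketch, not part of the text, of the statistic in Eq. (29) and the tail area in Eq. (30) is given below; the function name is hypothetical and numpy and scipy are assumed.

```python
import numpy as np
from scipy.stats import t as t_dist

def coefficient_test(Z, y, j, beta_star=0.0):
    """t statistic of Eq. (29) and two-sided tail area of Eq. (30) for beta_j."""
    n, p = Z.shape
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    beta_hat = ZtZ_inv @ Z.T @ y
    S2 = np.sum((y - Z @ beta_hat) ** 2)                   # Eq. (25)
    U_j = (beta_hat[j] - beta_star) / np.sqrt(ZtZ_inv[j, j] * S2 / (n - p))
    tail = 2 * t_dist.sf(abs(U_j), n - p)                  # two-sided area
    return U_j, tail
```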
Problems of testing hypotheses which specify the values of two coefficients β_i and β_j are discussed in Exercises 16 to 20 at the end of this section.

Multiple Linear Regression

In a problem of multiple linear regression, in which the regression of Y on the k variables X_1, ..., X_k is given by Eq. (1) of Sec. 10.2, the mean E(Y_i), for i = 1, ..., n, is given by Eq. (3) of this section. We are often interested in testing the hypothesis that a particular one of the variables X_1, ..., X_k does not actually appear in the regression function. In other words, we are often interested in testing the following hypotheses for some particular value of j (j = 2, ..., k + 1):

H_0: β_j = 0,
H_1: β_j ≠ 0.                                                              (31)

Because of this interest in knowing whether or not β_j = 0, it has become standard practice in the analysis of a problem of multiple linear regression to present not only the M.L.E.'s β̂_1, ..., β̂_{k+1} but also the values of the statistics U_2, ..., U_{k+1} and the corresponding tail areas that are found from the expression (30).

Also, in a problem of multiple linear regression, we are typically interested in determining how well the variables X_1, ..., X_k explain the observed variation in the random variable Y. The variation among the n observed values y_1, ..., y_n of Y can be measured by the value of Σ_{i=1}^{n} (y_i − ȳ_n)², which is the sum of the squares of the deviations of y_1, ..., y_n from the average ȳ_n. Similarly, after the regression of Y on X_1, ..., X_k has been fitted from the data, the variation among the n observed values of Y that is still present can be measured by the sum of the squares of the deviations of y_1, ..., y_n from the fitted regression. This sum of squares will be equal to the value of S² calculated from the observed values. It can be written in the form (y − Zβ̂)'(y − Zβ̂).

It now follows that the proportion of the variation among the observed values y_1, ..., y_n that remains unexplained by the fitted regression is

(y − Zβ̂)'(y − Zβ̂) / Σ_{i=1}^{n} (y_i − ȳ_n)².                              (32)

In turn, the proportion of the variation among the observed values y_1, ..., y_n that is explained by the fitted regression is given by the following value R²:

R² = 1 − (y − Zβ̂)'(y − Zβ̂) / Σ_{i=1}^{n} (y_i − ȳ_n)².                     (33)

The value of R² must lie in the interval 0 ≤ R² ≤ 1. When R² = 0, the least-squares estimates have the values β̂_1 = ȳ_n and β̂_2 = ... = β̂_{k+1} = 0. In this case, the fitted regression function is just the constant function y = ȳ_n. When R² is close to 1, the variation of the observed values of Y around the fitted regression function is much smaller than their variation around ȳ_n.
Screening Regression Equations

A common practice in many areas of application is as follows:

(i) Begin with a large number of variables X_1, ..., X_k.
(ii) Calculate the regression of Y on X_1, ..., X_k.
(iii) Drop out each variable X_i for which the regression estimate β̂_i is relatively small.
(iv) Recalculate the regression of Y on just the variables that remain.

Under this two-stage process, the vector β̂ of estimators at the second stage can no longer appropriately be regarded as having the mean vector and the covariance matrix developed in this section. If the hypotheses (31) are tested by using the usual t test of size α, then the probability of rejecting H_0 will in fact be larger than α, even if Y and X_1, ..., X_k are independent.

A similar comment pertains to the common practice of trying several different forms for the multiple regression. A typical first step might be to include X_i² and X_i³ as well as X_i, or to replace X_i with log X_i. The final step is to report the regression equation that provides the "best fit" in some sense. If the usual t test of the hypotheses (31) is applied to a regression coefficient in the equation that is ultimately selected, the probability of rejecting H_0 will again be larger than the nominal size α of the test, even when Y and X_1, ..., X_k are independent. In summary, if we screen many different regression equations involving Y and
10.5. Multiple Regression
641
Xl" ' " XI and if we select just one equation that seems to provide a good fit, then this equation will usually suggest that there is a much stronger relation between Y and Xl Xk than actually exists. • . . . •
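The following small Monte Carlo sketch (my own illustration, not from the text) shows the screening effect: Y is generated independently of k candidate regressors, only the regressors with the largest first-stage estimates are kept, and the usual t test of (31) is then applied to the refitted equation. The rejection rate observed in such a simulation typically exceeds the nominal 5 percent level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, keep, alpha = 30, 10, 2, 0.05
rejections = 0
n_sims = 2000

for _ in range(n_sims):
    X = rng.standard_normal((n, k))
    y = rng.standard_normal(n)                    # Y is independent of every X_j
    Z = np.column_stack([np.ones(n), X])

    # Stage 1: full regression; keep the regressors with the largest |beta_hat|.
    beta_full = np.linalg.lstsq(Z, y, rcond=None)[0]
    kept = np.argsort(-np.abs(beta_full[1:]))[:keep]

    # Stage 2: refit on the survivors and apply the usual t test to the top survivor.
    Z2 = np.column_stack([np.ones(n), X[:, kept]])
    beta2 = np.linalg.lstsq(Z2, y, rcond=None)[0]
    resid = y - Z2 @ beta2
    df = n - Z2.shape[1]
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(Z2.T @ Z2)
    u = beta2[1] / np.sqrt(cov[1, 1])
    if 2 * stats.t.sf(abs(u), df) < alpha:
        rejections += 1

print("observed rejection rate:", rejections / n_sims, "nominal level:", alpha)
```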
EXERCISES

1. Show that the M.L.E. of σ² in the general linear model is given by Eq. (6).

2. Consider a regression problem in which, for any given value x of a certain variable X, the random variable Y has a normal distribution with mean βx and variance σ², where the values of β and σ² are unknown. Suppose that n independent pairs of observations (x_i, Y_i) are obtained. Show that the M.L.E. of β is

β̂ = Σ_{i=1}^{n} x_i Y_i / Σ_{i=1}^{n} x_i².

3. For the conditions of Exercise 2, show that E(β̂) = β and Var(β̂) = σ²/(Σ_{i=1}^{n} x_i²).

4. Suppose that when a small amount x of an insulin preparation is injected into a rabbit, the percentage decrease Y in blood sugar has a normal distribution with mean βx and variance σ², where the values of β and σ² are unknown. Suppose that when independent observations are made on ten different rabbits, the observed values of x_i and Y_i for i = 1, …, 10 are as given in Table 10.7. Determine the values of the M.L.E.'s β̂ and σ̂², and the value of Var(β̂).

5. For the conditions of Exercise 4 and the data in Table 10.7, carry out a test of the following hypotheses:

H₀: β = 10,
H₁: β ≠ 10.
Table 10.7 ;
12 45 3
x,
0.1.06 1.1.77 2.2
y,
83 11105
67 910 8
x,
2.2.82 3.3.55 4.2
y,
19 221422 9
6. Consider a regression problem in which a patient's response Y to a new drug B is to be related to his response X to a standard drug A. Suppose that for any given value x of X, the regression function is a polynomial of the form E(Y) = β_1 + β_2 x + β_3 x². Suppose also that ten pairs of observed values are as given in Table 10.1. Under the standard assumptions of normality, independence, and homoscedasticity of the observations, determine the values of the M.L.E.'s β̂_1, β̂_2, β̂_3, and σ̂².

7. For the conditions of Exercise 6 and the data in Table 10.1, determine the values of Var(β̂_1), Var(β̂_2), Var(β̂_3), Cov(β̂_1, β̂_2), Cov(β̂_1, β̂_3), and Cov(β̂_2, β̂_3).

8. For the conditions of Exercise 6 and the data in Table 10.1, carry out a test of the following hypotheses:

H₀: β_3 = 0,
H₁: β_3 ≠ 0.

9. For the conditions of Exercise 6 and the data in Table 10.1, carry out a test of the following hypotheses:

H₀: β_1 = 4,
H₁: β_1 ≠ 4.

10. For the conditions of Exercise 6 and the data given in Table 10.1, determine the value of R², as defined by Eq. (33).

11. Consider a problem of multiple linear regression in which a patient's response Y to a new drug B is to be related to his response X_1 to a standard drug A and to his heart rate X_2. Suppose that for any given values X_1 = x_1 and X_2 = x_2, the regression function has the form E(Y) = β_1 + β_2 x_1 + β_3 x_2; and that the values of ten sets of observations (x_{i1}, x_{i2}, Y_i) are as given in Table 10.2. Under the standard assumptions of multiple linear regression, determine the values of the M.L.E.'s β̂_1, β̂_2, β̂_3, and σ̂².

12. For the conditions of Exercise 11 and the data in Table 10.2, determine the values of Var(β̂_1), Var(β̂_2), Var(β̂_3), Cov(β̂_1, β̂_2), Cov(β̂_1, β̂_3), and Cov(β̂_2, β̂_3).

13. For the conditions of Exercise 11 and the data in Table 10.2, carry out a test of the following hypotheses:

H₀: β_2 = 0,
H₁: β_2 ≠ 0.

14. For the conditions of Exercise 11 and the data in Table 10.2, carry out a test of the following hypotheses:

H₀: β_3 = −1,
H₁: β_3 ≠ −1.
15. For the conditions of Exercise 11 and the data in Table 10.2, determine the value of R², as defined by Eq. (33).

16. Consider the general linear model in which the observations Y_1, …, Y_n are independent and have normal distributions with the same variance σ², and in which E(Y_i) is given by Eq. (1). Let the matrix (Z′Z)⁻¹ be defined by Eq. (23), and let ζ_ij denote the element in row i and column j of that matrix. For any given values of i and j such that i ≠ j, let the random variable A_ij be defined as follows:

A_ij = β̂_i − (ζ_ij / ζ_jj) β̂_j.

Show that Cov(β̂_j, A_ij) = 0, and explain why β̂_j and A_ij are therefore independent.

17. For the conditions of Exercise 16, show that Var(A_ij) = [ζ_ii − (ζ_ij² / ζ_jj)] σ². Also show that the following random variable W² has
10.8. The Two-Way Layout with Replications

For the two-way layout with K observations in each cell, let Ȳ_{ij+} denote the average of the K observations in the (i, j) cell, let Ȳ_{i++} and Ȳ_{+j+} denote the averages of all the observations in row i and in column j, respectively, and let Ȳ_{+++} denote the average of all IJK observations. The M.L.E.'s, or least-squares estimators, of the parameters are then

μ̂ = Ȳ_{+++},   α̂_i = Ȳ_{i++} − Ȳ_{+++}   for i = 1, …, I,   β̂_j = Ȳ_{+j+} − Ȳ_{+++}   for j = 1, …, J,    (6)

γ̂_{ij} = Ȳ_{ij+} − Ȳ_{i++} − Ȳ_{+j+} + Ȳ_{+++}   for i = 1, …, I and j = 1, …, J.    (7)

For all values of i and j, it can then be verified (see Exercise 3) that E(μ̂) = μ, E(α̂_i) = α_i, E(β̂_j) = β_j, and E(γ̂_ij) = γ_ij.
Partitioning the Sum of Squares

Consider now the following sum Q_1:

Q_1 = (1/σ²) Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − μ − α_i − β_j − γ_ij)².    (8)

Since Q_1 is the sum of squares of IJK independent random variables, each of which has a standard normal distribution, Q_1 has a χ² distribution with IJK degrees of freedom. We shall now indicate how Q_1 can be partitioned into five smaller independent sums of squares, each of which itself has a χ² distribution and each of which is associated with a particular type of variation among the observations. These sums of squares are defined as follows:

Q_2 = (1/σ²) Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})²,
Q_3 = (1/σ²) IJK (μ̂ − μ)²,
Q_4 = (1/σ²) JK Σ_{i=1}^{I} (α̂_i − α_i)²,
Q_5 = (1/σ²) IK Σ_{j=1}^{J} (β̂_j − β_j)²,
Q_6 = (1/σ²) K Σ_{i=1}^{I} Σ_{j=1}^{J} (γ̂_ij − γ_ij)².    (9)

It can be shown (see Exercise 6) that

Q_1 = Q_2 + Q_3 + Q_4 + Q_5 + Q_6.    (10)

Each random variable Q_i has a χ² distribution. Since μ̂ has a normal distribution with mean μ and variance σ²/(IJK), it follows that Q_3 has one degree of freedom. Also, Q_4 has I − 1 degrees of freedom, and Q_5 has J − 1 degrees of freedom. The number of degrees of freedom for Q_6 can be determined as follows: Although there are IJ estimators γ̂_ij of the interactions, these estimators satisfy
the following I + J equations (see Exercise 3):

Σ_{j=1}^{J} γ̂_ij = 0   for i = 1, …, I,

Σ_{i=1}^{I} γ̂_ij = 0   for j = 1, …, J.
However, if any I + J − 1 of these I + J equations are satisfied, then the remaining equation must also be satisfied. Therefore, since the IJ estimators γ̂_ij actually must satisfy I + J − 1 constraints, we are, in effect, estimating only IJ − (I + J − 1) = (I − 1)(J − 1) interactions. It can be shown that Q_6 has a χ² distribution with (I − 1)(J − 1) degrees of freedom.

It remains to determine the degrees of freedom for Q_2. Since Q_1 has IJK degrees of freedom and the sum of the degrees of freedom for Q_3, Q_4, Q_5, and Q_6 is IJ, it is anticipated that the degrees of freedom remaining for Q_2 must be IJK − IJ = IJ(K − 1). Since Q_2 is a sum over IJ independent samples, each of size K, it can be shown that Q_2 has a χ² distribution with IJ(K − 1) degrees of freedom. Finally, it can be shown that the random variables Q_2, Q_3, Q_4, Q_5, and Q_6 are independent (see Exercise 7 for a related result).

These properties are summarized in Table 10.15, which is the ANOVA table for the two-way layout with K observations per cell.

Table 10.15

Source of variation    Degrees of freedom    Sum of squares
Overall mean           1                     IJK (Ȳ_{+++} − μ)²
Main effects of A      I − 1                 JK Σ_{i=1}^{I} (Ȳ_{i++} − Ȳ_{+++} − α_i)²
Main effects of B      J − 1                 IK Σ_{j=1}^{J} (Ȳ_{+j+} − Ȳ_{+++} − β_j)²
Interactions           (I − 1)(J − 1)        K Σ_{i=1}^{I} Σ_{j=1}^{J} (Ȳ_{ij+} − Ȳ_{i++} − Ȳ_{+j+} + Ȳ_{+++} − γ_ij)²
Residuals              IJ(K − 1)             Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})²
Total                  IJK                   Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − μ − α_i − β_j − γ_ij)²

As discussed in Sec. 10.6, this table is different from, and more flexible than, the ANOVA tables presented in most textbooks, because the parameters are included in the sums of squares given in this table.
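As a quick numerical check of the partition in Eq. (10), the following sketch (an illustration of mine, not part of the text) simulates a balanced two-way layout with known parameters and verifies that Q_1 equals the sum Q_2 + Q_3 + Q_4 + Q_5 + Q_6 defined in Eq. (9).

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, sigma = 3, 4, 5, 2.0

# True parameters satisfying the usual constraints (the alphas, betas, and gammas each sum to 0).
mu = 10.0
alpha = np.array([1.0, -0.5, -0.5])
beta = np.array([2.0, -1.0, 0.5, -1.5])
gamma = np.zeros((I, J))

mean = mu + alpha[:, None] + beta[None, :] + gamma
Y = mean[:, :, None] + sigma * rng.standard_normal((I, J, K))

# Cell, row, column, and overall averages.
Yij = Y.mean(axis=2)
Yi = Y.mean(axis=(1, 2))
Yj = Y.mean(axis=(0, 2))
Yall = Y.mean()

Q1 = np.sum((Y - mean[:, :, None]) ** 2) / sigma ** 2
Q2 = np.sum((Y - Yij[:, :, None]) ** 2) / sigma ** 2
Q3 = I * J * K * (Yall - mu) ** 2 / sigma ** 2
Q4 = J * K * np.sum((Yi - Yall - alpha) ** 2) / sigma ** 2
Q5 = I * K * np.sum((Yj - Yall - beta) ** 2) / sigma ** 2
gamma_hat = Yij - Yi[:, None] - Yj[None, :] + Yall
Q6 = K * np.sum((gamma_hat - gamma) ** 2) / sigma ** 2

print(Q1, Q2 + Q3 + Q4 + Q5 + Q6)   # the two values agree up to rounding error
```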
Testing Hypotheses

As mentioned before, the effects of the factors A and B are additive if and only if all the interactions γ_ij vanish. Hence, to test whether the effects of the factors are additive, we must test the following hypotheses:

H₀: γ_ij = 0   for i = 1, …, I and j = 1, …, J,
H₁: The hypothesis H₀ is not true.    (11)

It follows from Table 10.15 and the discussion leading up to it that when the null hypothesis H₀ is true, the random variable K Σ_{i=1}^{I} Σ_{j=1}^{J} γ̂_ij² / σ² has a χ² distribution with (I − 1)(J − 1) degrees of freedom. Furthermore, regardless of whether or not H₀ is true, the independent random variable Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})² / σ² has a χ² distribution with IJ(K − 1) degrees of freedom. Thus, when H₀ is true, the following random variable U²_AB has an F distribution with (I − 1)(J − 1) and IJ(K − 1) degrees of freedom:

U²_AB = [IJK(K − 1) Σ_{i=1}^{I} Σ_{j=1}^{J} (Ȳ_{ij+} − Ȳ_{i++} − Ȳ_{+j+} + Ȳ_{+++})²] / [(I − 1)(J − 1) Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})²].    (12)
The null hypothesis H₀ should be rejected if U²_AB > c, where c is a suitable constant which is chosen to obtain any specified level of significance. An alternative form for U²_AB that is sometimes useful for computation is (see Exercise 8):

U²_AB = [IJK(K − 1) (Σ_{i=1}^{I} Σ_{j=1}^{J} Ȳ_{ij+}² − J Σ_{i=1}^{I} Ȳ_{i++}² − I Σ_{j=1}^{J} Ȳ_{+j+}² + IJ Ȳ_{+++}²)] / [(I − 1)(J − 1) (Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} Y_{ijk}² − K Σ_{i=1}^{I} Σ_{j=1}^{J} Ȳ_{ij+}²)].    (13)
If the null hypothesis H₀ in (11) is rejected, then we conclude that at least some of the interactions γ_ij are not 0. Therefore, the means of the observations for certain combinations of i and j will be larger than the means of the observations for other combinations, and both factor A and factor B affect these means. In this case, since both factor A and factor B affect the means of the observations, there is not usually any further interest in testing whether either the main effects α_1, …, α_I or the main effects β_1, …, β_J are zero. On the other hand, if the null hypothesis H₀ in (11) is not rejected, then it is possible that all the interactions are 0. If, in addition, all the main effects α_1, …, α_I were 0, then the mean value of each observation would not depend in any way on the value of i. In this case, factor A would have no effect on the observations. Therefore, if the null hypothesis H₀ in (11) is not rejected, we might be interested in testing the following hypotheses:

H₀: α_i = 0 and γ_ij = 0   for i = 1, …, I and j = 1, …, J,
H₁: The hypothesis H₀ is not true.    (14)

It follows from Table 10.15 and the discussion leading up to it that when H₀ is true, the following random variable will have a χ² distribution with (I − 1) + (I − 1)(J − 1) = (I − 1)J degrees of freedom:

[JK Σ_{i=1}^{I} α̂_i² + K Σ_{i=1}^{I} Σ_{j=1}^{J} γ̂_ij²] / σ².    (15)
Also, regardless of whether or not H₀ is true, the independent random variable Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})² / σ² has a χ² distribution with IJ(K − 1) degrees of freedom. Hence, when H₀ is true, the following random variable U²_A has an F distribution with (I − 1)J and IJ(K − 1) degrees of freedom:

U²_A = [IK(K − 1) (J Σ_{i=1}^{I} (Ȳ_{i++} − Ȳ_{+++})² + Σ_{i=1}^{I} Σ_{j=1}^{J} (Ȳ_{ij+} − Ȳ_{i++} − Ȳ_{+j+} + Ȳ_{+++})²)] / [(I − 1) Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})²].    (16)

The null hypothesis H₀ should be rejected if U²_A > c, where c is a suitable constant. Similarly, we may want to find out whether all the main effects of factor B, as well as the interactions, are 0. In this case, we would test the following hypotheses:
H₀: β_j = 0 and γ_ij = 0   for i = 1, …, I and j = 1, …, J,
H₁: The hypothesis H₀ is not true.    (17)

By analogy with Eq. (16), it follows that when H₀ is true, the following random variable U²_B has an F distribution with I(J − 1) and IJ(K − 1) degrees of freedom:

U²_B = [JK(K − 1) (I Σ_{j=1}^{J} (Ȳ_{+j+} − Ȳ_{+++})² + Σ_{i=1}^{I} Σ_{j=1}^{J} (Ȳ_{ij+} − Ȳ_{i++} − Ȳ_{+j+} + Ȳ_{+++})²)] / [(J − 1) Σ_{i=1}^{I} Σ_{j=1}^{J} Σ_{k=1}^{K} (Y_{ijk} − Ȳ_{ij+})²].    (18)
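As a computational illustration (mine, not from the text), the statistics U²_AB, U²_A, and U²_B of Eqs. (12), (16), and (18) can be evaluated directly from a data array with I rows, J columns, and K replications per cell; the F tail areas then follow from SciPy. The data below are invented.

```python
import numpy as np
from scipy import stats

# Invented balanced layout: I = 2, J = 3, K = 4 observations per cell.
rng = np.random.default_rng(3)
I, J, K = 2, 3, 4
Y = 10 + rng.standard_normal((I, J, K))

Yij = Y.mean(axis=2)                  # cell averages   Y-bar_{ij+}
Yi = Y.mean(axis=(1, 2))              # row averages    Y-bar_{i++}
Yj = Y.mean(axis=(0, 2))              # column averages Y-bar_{+j+}
Yall = Y.mean()                       # overall average Y-bar_{+++}

resid_ss = np.sum((Y - Yij[:, :, None]) ** 2)
inter_ss = np.sum((Yij - Yi[:, None] - Yj[None, :] + Yall) ** 2)
rowdev_ss = np.sum((Yi - Yall) ** 2)
coldev_ss = np.sum((Yj - Yall) ** 2)

U2_AB = I * J * K * (K - 1) * inter_ss / ((I - 1) * (J - 1) * resid_ss)        # Eq. (12)
U2_A = I * K * (K - 1) * (J * rowdev_ss + inter_ss) / ((I - 1) * resid_ss)     # Eq. (16)
U2_B = J * K * (K - 1) * (I * coldev_ss + inter_ss) / ((J - 1) * resid_ss)     # Eq. (18)

print("U^2_AB =", U2_AB, "tail area:", stats.f.sf(U2_AB, (I - 1) * (J - 1), I * J * (K - 1)))
print("U^2_A  =", U2_A, "tail area:", stats.f.sf(U2_A, (I - 1) * J, I * J * (K - 1)))
print("U^2_B  =", U2_B, "tail area:", stats.f.sf(U2_B, I * (J - 1), I * J * (K - 1)))
```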
Again, the hypothesis H₀ should be rejected if U²_B > c. In a given problem, if the null hypothesis in (11) is not rejected and the null hypotheses in both (14) and (17) are rejected, then we may be willing to proceed with further studies and experimentation by using a model in which it is assumed that the effects of factor A and factor B are approximately additive and the effects of both factors are important.

One further consideration should be emphasized. Suppose that the hypotheses (14) or the hypotheses (17) are tested after the null hypothesis in (11) has been accepted at some given level of significance α₀. Then the size of this second test should no longer be regarded simply as the usual value α chosen by the experimenter. More appropriately, the size should now be regarded as the conditional probability that H₀ in (14) or (17) will be rejected by a test procedure of nominal size α, given that H₀ is true and the sample data are such that the null hypothesis in (11) was accepted by the first test.

Example 1: Estimating the Parameters in a Two-Way Layout with Replications. Suppose that an experiment is carried out by an automobile manufacturer to investigate whether a certain device, installed on the carburetor of an automobile, affects the amount of gasoline consumed by the automobile. The manufacturer produces three different models of automobiles, namely, a compact model, an intermediate model, and a standard model. Five cars of each model, which were equipped with this device, were driven over a fixed route through city traffic; and the gasoline consumption of each car was measured. Also, five cars of each model, which were not equipped with this device, were driven over the same route; and the gasoline consumption of each of these cars was measured. The results, in liters of gasoline consumed, are given in Table 10.16. In this example, I = 2, J = 3, and K = 5.

The average value Ȳ_{ij+} for each of the six cells in Table 10.16 is presented in Table 10.17, which also gives the average value Ȳ_{i++} for each of the two rows, the average value Ȳ_{+j+} for each of the three columns, and the average value Ȳ_{+++} of all 30 observations.
Table 10.16
Compact
Intermediate
Standard
model
model
model
8.3 8.9 7.8 8.5 9A
Equipped with device
ll.6 lO.2 1 0.7
9.5 ll.3 lOA
8.7 10.0 9.7
IL9 ILO
8.2 10.6 10.1 11.3 10.8
7.9
Not equipped with device
9.2 10.2
8.4
12.4 11.7 10.0 ILl
ll.8
Table 10.17

                           Compact model      Intermediate model     Standard model      Average for row
Equipped with device       Ȳ_{11+} = 8.58     Ȳ_{12+} = 10.12        Ȳ_{13+} = 11.08     Ȳ_{1++} = 9.9267
Not equipped with device   Ȳ_{21+} = 8.94     Ȳ_{22+} = 10.20        Ȳ_{23+} = 11.40     Ȳ_{2++} = 10.1800
Average for column         Ȳ_{+1+} = 8.76     Ȳ_{+2+} = 10.16        Ȳ_{+3+} = 11.24     Ȳ_{+++} = 10.0533
It follows from Table 10.17 and Eqs. (6) and (7) that the values of the M.L.E.'s, or least-squares estimators, in this example are:

μ̂ = 10.0533,
α̂_1 = −0.1267,   α̂_2 = 0.1267,
β̂_1 = −1.2933,   β̂_2 = 0.1067,   β̂_3 = 1.1867,
γ̂_11 = −0.0533,   γ̂_12 = 0.0867,   γ̂_13 = −0.0333,
γ̂_21 = 0.0533,   γ̂_22 = −0.0867,   γ̂_23 = 0.0333.

In this example, the estimates of the interactions γ̂_ij are small for all values of i and j. □
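The estimates above can be reproduced from the cell averages in Table 10.17 alone. The following sketch (an added illustration, not part of the text) applies Eqs. (6) and (7) to those six cell means.

```python
import numpy as np

# Cell averages Y-bar_{ij+} from Table 10.17: rows = equipped / not equipped,
# columns = compact / intermediate / standard model.
cell_means = np.array([[8.58, 10.12, 11.08],
                       [8.94, 10.20, 11.40]])

mu_hat = cell_means.mean()                               # Eq. (6): overall average
alpha_hat = cell_means.mean(axis=1) - mu_hat             # Eq. (6): row effects
beta_hat = cell_means.mean(axis=0) - mu_hat              # Eq. (6): column effects
gamma_hat = (cell_means - cell_means.mean(axis=1, keepdims=True)
             - cell_means.mean(axis=0, keepdims=True) + mu_hat)   # Eq. (7)

print("mu_hat    =", round(mu_hat, 4))                   # 10.0533
print("alpha_hat =", np.round(alpha_hat, 4))             # [-0.1267  0.1267]
print("beta_hat  =", np.round(beta_hat, 4))              # [-1.2933  0.1067  1.1867]
print("gamma_hat =", np.round(gamma_hat, 4))
```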
Example 2: Testing for Additivity. Suppose now that it is desired to use the observed values in Table 10.16 to test the null hypothesis that the effects of equipping a car with the device and of using a particular model are additive, against the alternative that these effects are not additive. In other words, suppose that it is desired to test the hypotheses (11). It is found from Eq. (12) that U²_AB = 0.0711. The corresponding tail area, as found from a table of the F distribution with 2 and 24 degrees of freedom, is much larger than 0.05. Hence, the null hypothesis that the effects are additive would not be rejected at the usual levels of significance. □

Example 3: Testing for an Effect on Gasoline Consumption. Suppose next that it is desired to test the null hypothesis that the device has no effect on gasoline consumption for any of the models tested, against the alternative that the device does affect gasoline consumption. In other words, suppose that it is desired to test the hypotheses (14). It is found from Eq. (16) that U²_A = 0.262. The corresponding tail area, as found from a table of the F distribution with 3 and 24 degrees of freedom, is much larger than 0.05. Hence, the null hypothesis would not be rejected at the usual levels of significance. Of course, this analysis does not take into account the conditioning effect, described just before Example 1, of testing the hypotheses (14) after first testing the hypotheses (11) with the same data. □

The results obtained in Example 3 do not provide any indication that the device is effective. Nevertheless, it can be seen from Table 10.17 that for each of the three models, the average consumption of gasoline for the cars that were equipped with the device is smaller than the average consumption for the cars that were not so equipped. If we assume that the effects of the device and the model of automobile are additive, then regardless of the model of the automobile that is used, the M.L.E. of the reduction in gasoline consumption over the given route that is achieved by equipping an automobile with the device is α̂_2 − α̂_1 = 0.2534 liter.
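The tail areas cited in Examples 2 and 3 can be checked directly. The sketch below is mine, and it assumes the reading U²_AB = 0.0711 of the partially garbled figure in the source; it simply evaluates the two F tail areas with SciPy.

```python
from scipy import stats

# Example 2: U^2_AB with (I-1)(J-1) = 2 and IJ(K-1) = 24 degrees of freedom.
print(stats.f.sf(0.0711, 2, 24))   # about 0.93, far above 0.05

# Example 3: U^2_A with (I-1)J = 3 and IJ(K-1) = 24 degrees of freedom.
print(stats.f.sf(0.262, 3, 24))    # about 0.85, far above 0.05
```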
The Two-Way Layout with Unequal Numbers of Observations in the Cells

Consider again a two-way layout with I rows and J columns; but suppose now that instead of there being K observations in each cell, some cells have more observations than others. For i = 1, …, I and j = 1, …, J, we shall let K_ij denote the number of observations in the (i, j) cell. Thus, the total number of observations is Σ_{i=1}^{I} Σ_{j=1}^{J} K_ij. We shall assume that every cell contains at least one observation, and we shall again let Y_ijk denote the kth observation in the (i, j) cell. For any given values of i and j, the values of the subscript k are 1, …, K_ij. We shall also assume, as before, that all the observations Y_ijk are independent; that each has a normal distribution; that Var(Y_ijk) = σ² for all values of i, j, and k; and that E(Y_ijk) = μ + α_i + β_j + γ_ij, where these parameters satisfy the conditions given in Eq. (5).

As usual, we shall let Ȳ_{ij+} denote the average of the observations in the (i, j) cell. It can then be shown that for i = 1, …, I and j = 1, …, J, the M.L.E.'s, or least-squares estimators, are as follows:

μ̂ = (1/IJ) Σ_{i=1}^{I} Σ_{j=1}^{J} Ȳ_{ij+},
α̂_i = (1/J) Σ_{j=1}^{J} Ȳ_{ij+} − μ̂,
β̂_j = (1/I) Σ_{i=1}^{I} Ȳ_{ij+} − μ̂,
γ̂_ij = Ȳ_{ij+} − μ̂ − α̂_i − β̂_j.    (19)

These estimators are intuitively reasonable and analogous to those given in Eqs. (6) and (7). Suppose now, however, that it is desired to test hypotheses such as (11), (14), or (17). The construction of appropriate tests becomes somewhat more difficult because, in general, the sums of squares analogous to those given in Eq. (9) will not be independent when there are unequal numbers of observations in the different cells. Hence, the test procedures presented earlier in this section cannot directly be copied here. It is necessary to develop other sums of squares that will be independent and that will reflect the different types of variations in the data in which we are interested. We shall not consider this problem further in this book. This problem and other problems of ANOVA are described in the advanced book by Scheffé (1959).
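A short sketch of Eq. (19) for unequal cell sizes follows (an added illustration, not from the text): each estimator is built from the cell averages Ȳ_{ij+}, so the raw observations enter only through those averages. The ragged data below are invented.

```python
import numpy as np

# Invented observations for a 2 x 3 layout with unequal numbers per cell.
cells = [[[8.1, 8.6, 7.9],     [10.0, 10.3],        [11.2, 10.9, 11.5, 11.0]],
         [[9.0, 8.7],          [10.1, 10.4, 10.2],  [11.6, 11.3]]]

I, J = len(cells), len(cells[0])
cell_means = np.array([[np.mean(cells[i][j]) for j in range(J)] for i in range(I)])

mu_hat = cell_means.mean()                               # Eq. (19)
alpha_hat = cell_means.mean(axis=1) - mu_hat
beta_hat = cell_means.mean(axis=0) - mu_hat
gamma_hat = cell_means - mu_hat - alpha_hat[:, None] - beta_hat[None, :]

# The constraints stated in Exercise 16 hold: these sums are all (numerically) zero.
print(alpha_hat.sum(), beta_hat.sum(), gamma_hat.sum(axis=0), gamma_hat.sum(axis=1))
```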
EXERCISES

1. Show that for any given set of numbers θ_ij (i = 1, …, I and j = 1, …, J), there exists a unique set of numbers μ, α_i, β_j, and γ_ij (i = 1, …, I and j = 1, …, J) that satisfy Eqs. (4) and (5).

2. Suppose that in a two-way layout, the values of θ_ij are as given in each of the four matrices presented in parts (a), (b), (c), and (d) of Exercise 1 of Sec. 10.7. For each matrix, determine the values of μ, α_i, β_j, and γ_ij that satisfy Eqs. (4) and (5).

3. Verify that if α̂_i, β̂_j, and γ̂_ij are as given by Eqs. (6) and (7), then Σ_{i=1}^{I} α̂_i = 0; Σ_{j=1}^{J} β̂_j = 0; Σ_{i=1}^{I} γ̂_ij = 0 for j = 1, …, J; and Σ_{j=1}^{J} γ̂_ij = 0 for i = 1, …, I.
4. Verify that if μ̂, α̂_i, β̂_j, and γ̂_ij are as given by Eqs. (6) and (7), then E(μ̂) = μ; E(α̂_i) = α_i; E(β̂_j) = β_j; and E(γ̂_ij) = γ_ij for all values of i and j.

5. Show that if μ̂, α̂_i, β̂_j, and γ̂_ij are as given by Eqs. (6) and (7), then the following results are true for all values of i and j:

Var(μ̂) = σ²/(IJK),   Var(α̂_i) = (I − 1)σ²/(IJK),
Var(β̂_j) = (J − 1)σ²/(IJK),   Var(γ̂_ij) = (I − 1)(J − 1)σ²/(IJK).

6. Verify Eq. (10).

7. In a two-way layout with K observations in each cell, show that for any values of i, i_1, i_2, j, j_1, j_2, and k, the following five random variables are uncorrelated with one another:

μ̂,   α̂_{i_1},   β̂_{j_1},   γ̂_{i_2 j_2},   Y_{ijk} − Ȳ_{ij+}.

8. Verify the fact that the right sides of Eqs. (12) and (13) are equal.

9. Suppose that in an experimental study to determine the combined effects of receiving both a stimulant and a tranquilizer, three different types of stimulants and four different types of tranquilizers are administered to a group of rabbits. Each rabbit in the experiment receives one of the stimulants and then, 20 minutes later, receives one of the tranquilizers. After 1 hour, the response of the rabbit is measured in appropriate units. In order that each possible pair of drugs may be administered to two different rabbits, 24 rabbits are used in the experiment. The responses of these 24 rabbits are given in Table 10.18. Determine the values of μ̂, α̂_i, β̂_j, and γ̂_ij for i = 1, 2, 3 and j = 1, 2, 3, 4, and determine also the value of σ̂².

10. For the conditions of Exercise 9 and the data in Table 10.18, test the hypothesis that every interaction between a stimulant and a tranquilizer is 0.
11. For the conditions of Exercise 9 and the data in Table 10.18, test the hypothesis that all three stimulants yield the same responses.

12. For the conditions of Exercise 9 and the data in Table 10.18, test the hypothesis that all four tranquilizers yield the same responses.

13. For the conditions of Exercise 9 and the data in Table 10.18, test the following hypotheses:

H₀: μ = 8,
H₁: μ ≠ 8.
Table 10.18 -
1
Stimulanl
2
�
-
3
4
1
11.2 11.6
7.4
8.1
7.1 7.0
9.6 7.6
2
l2.7 14.0
10 3
7.9
R.R
8.5
11 3
10.1
5.5
5.0
6.5
9.6
6.9
7.3
5.7
3
14.
Tranquilizer
10.8
For the conditions of Exercise 9 and the data in Table 10.18, test the following hypotheses:

H₀: σ² ≤ 1,
H₁: σ² > 1.
15. In a two-way layout with unequal numbers of observations in the cells, show that if μ̂, α̂_i, β̂_j, and γ̂_ij are as given by Eq. (19), then E(μ̂) = μ, E(α̂_i) = α_i, E(β̂_j) = β_j, and E(γ̂_ij) = γ_ij for all values of i and j.

16. Verify that if μ̂, α̂_i, β̂_j, and γ̂_ij are as given by Eq. (19), then Σ_{i=1}^{I} α̂_i = 0; Σ_{j=1}^{J} β̂_j = 0; Σ_{i=1}^{I} γ̂_ij = 0 for j = 1, …, J; and Σ_{j=1}^{J} γ̂_ij = 0 for i = 1, …, I.

17. Show that if μ̂ and α̂_i are as given by Eq. (19), then for i = 1, …, I,

10.9. SUPPLEMENTARY EXERCISES
1. Suppose that (X_i, Y_i), i = 1, …, n, form a random sample of size n from a bivariate normal distribution with means μ_1 and μ_2, variances σ_1² and σ_2², and correlation ρ; and let μ̂_i, σ̂_i², and ρ̂ denote their M.L.E.'s. Also, let β̂_2 denote the M.L.E. of β_2 in the regression of Y on X. Show that

β̂_2 = ρ̂ σ̂_2 / σ̂_1.

Hint: See Exercise 13 of Sec. 6.5.

2. Suppose that (X_i, Y_i), i = 1, …, n, form a random sample of size n from a bivariate normal distribution with means μ_1 and μ_2, variances σ_1² and σ_2², and correlation ρ. Determine the mean and the variance of the following statistic T, given the observed values X_1 = x_1, …, X_n = x_n:

T = Σ_{i=1}^{n} (x_i − x̄_n) Y_i / Σ_{i=1}^{n} (x_i − x̄_n)².
3. Let θ_1, θ_2, and θ_3 denote the unknown angles of a triangle, measured in degrees (θ_i > 0 for i = 1, 2, 3, and θ_1 + θ_2 + θ_3 = 180). Suppose that each angle is measured by an instrument that is subject to error, and that the measured values of θ_1, θ_2, and θ_3 are found to be y_1 = 83, y_2 = 47, and y_3 = 56, respectively. Determine the least-squares estimates of θ_1, θ_2, and θ_3.

4. Suppose that a straight line is to be fitted to n points (x_1, y_1), …, (x_n, y_n) such that x_2 = x_3 = ⋯ = x_n but x_1 ≠ x_2. Show that the least-squares line will pass through the point (x_1, y_1).

5. Suppose that a least-squares line is fitted to the n points (x_1, y_1), …, (x_n, y_n) in the usual way by minimizing the sum of squares of the vertical deviations of the points from the line; and that another least-squares line is fitted by minimizing the sum of squares of the horizontal deviations of the points from the line. Under what conditions will these two lines coincide?

6. Suppose that a straight line y = β_1 + β_2 x is to be fitted to the n points (x_1, y_1), …, (x_n, y_n) in such a way that the sum of the squared perpendicular (or orthogonal) distances from the points to the line is a minimum. Determine the optimal values of β_1 and β_2.

7. Suppose that twin sisters are each to take a certain mathematics examination. They know that the scores they will obtain on the examination have the same mean μ, the same variance σ², and positive correlation ρ. Assuming that their scores have a bivariate normal distribution, show that after each twin learns her own score, she expects her sister's score to be closer to μ.

8. Suppose that a sample of n observations is formed from k subsamples containing n_1, …, n_k observations (n_1 + ⋯ + n_k = n). Let x_ij (j = 1, …, n_i) denote the observations in the ith subsample; and let x̄_{i+} and v_i² denote the sample mean and the sample variance of that subsample. Finally, let x̄_{++} and v² denote the sample mean and the sample variance of the entire sample of n observations:

x̄_{++} = (1/n) Σ_{i=1}^{k} Σ_{j=1}^{n_i} x_ij.

Determine an expression for v² in terms of x̄_{++}, x̄_{i+}, and v_i² (i = 1, …, k).
9. Consider the linear regression model

Y_i = β_1 w_i + β_2 x_i + ε_i   for i = 1, …, n,

where (w_1, x_1), …, (w_n, x_n) are given pairs of constants and ε_1, …, ε_n are i.i.d. random variables, each of which has a normal distribution with mean 0 and variance σ². Determine explicitly the M.L.E.'s of β_1 and β_2.

10. Determine an unbiased estimator of σ² in a two-way layout with K observations in each cell (K ≥ 2).

11. In a two-way layout with one observation in each cell, construct a test of the null hypothesis that all the effects of both factor A and factor B are 0.

12. In a two-way layout with K observations in each cell (K ≥ 2), construct a test of the null hypothesis that all the main effects for both factor A and factor B, and also all the interactions, are 0.

13. Suppose that each of two different varieties of corn is treated with two different types of fertilizer in order to compare the yields, and that K independent replications are obtained for each of the four combinations. Let X_ijk denote the yield on the kth replication of the combination of variety i with fertilizer j (i = 1, 2; j = 1, 2; k = 1, …, K). Assume that all the observations are independent and normally distributed; that each distribution has the same unknown variance; and that E(X_ijk) = μ_ij for k = 1, …, K. Describe how to carry out a test of the following hypotheses:

H₀: μ_11 − μ_12 = μ_21 − μ_22,
H₁: The hypothesis H₀ is not true.

14. Suppose that W_1, W_2, and W_3 are independent random variables, each of which has a normal distribution with the following mean and variance:

E(W_1) = θ_1 + θ_2,   Var(W_1) = σ²,
E(W_2) = θ_1 + θ_2 − 5,   Var(W_2) = σ²,
E(W_3) = 2θ_1 − 2θ_2,   Var(W_3) = 4σ².

Determine the M.L.E.'s of θ_1, θ_2, and σ², and determine also the joint distribution of these estimators.

15. Suppose that it is desired to fit a curve of the form y = αx^β to a given set of n points (x_i, y_i) with x_i > 0 and y_i > 0 for i = 1, …, n. Explain how this curve can be fitted either by direct application of the method of least squares or by first transforming the problem into one of fitting a straight line to the n points (log x_i, log y_i) and then applying the method of least squares. Discuss the conditions under which each of these methods is appropriate.
16. Consider a problem of simple linear regression, and let Z_i = Y_i − β̂_1 − β̂_2 x_i denote the residual of the observation Y_i (i = 1, …, n), as defined by Eq. (45) of Sec. 10.3. Evaluate Var(Z_i) for given values of x_1, …, x_n, and show that it is a decreasing function of the distance between x_i and x̄_n.

17. Consider a general linear model with n × p design matrix Z, and let W = Y − Zβ̂ denote the vector of residuals. (In other words, the ith component of W is Y_i − Ŷ_i, where Ŷ_i is given by Eq. (24) of Sec. 10.5.)

(a) Show that W = DY, where D = I − Z(Z′Z)⁻¹Z′.
(b) Show that the matrix D is idempotent; that is, DD = D.
(c) Show that Cov(W) = σ²D.
18. Consider a two-way layout in which the effects of the factors are additive, so that Eq. (1) of Sec. 10.7 is satisfied; and let v_1, …, v_I and w_1, …, w_J be arbitrary given positive numbers. Show that there exist unique numbers μ, α_1, …, α_I, and β_1, …, β_J such that

Σ_{i=1}^{I} v_i α_i = Σ_{j=1}^{J} w_j β_j = 0

and

E(Y_ij) = μ + α_i + β_j   for i = 1, …, I and j = 1, …, J.

19. Consider a two-way layout in which the effects of the factors are additive, as in Exercise 18; and suppose that there are K_ij observations per cell, where K_ij > 0 for i = 1, …, I and j = 1, …, J. Let v_i = K_{i+} for i = 1, …, I, and w_j = K_{+j} for j = 1, …, J. Assume that E(Y_ijk) = μ + α_i + β_j for k = 1, …, K_ij, i = 1, …, I, and j = 1, …, J, where Σ_{i=1}^{I} v_i α_i = Σ_{j=1}^{J} w_j β_j = 0, as in Exercise 18. Verify that the least-squares estimators of μ, α_i, and β_j are as follows:

μ̂ = Ȳ_{+++},
α̂_i = Ȳ_{i++} − Ȳ_{+++}   for i = 1, …, I,
β̂_j = Ȳ_{+j+} − Ȳ_{+++}   for j = 1, …, J.

20. Consider again the conditions of Exercises 18 and 19, and let the estimators μ̂, α̂_i, and β̂_j be as given in Exercise 19. Show that Cov(μ̂, α̂_i) = Cov(μ̂, β̂_j) = 0.
21. Consider again the conditions of Exercises 18 and 19, and suppose that the numbers K_ij have the following proportionality property:

K_ij = K_{i+} K_{+j} / n   for i = 1, …, I and j = 1, …, J,

where n = Σ_{i=1}^{I} Σ_{j=1}^{J} K_ij.
Show that Cov(α̂_i, β̂_j) = 0, where the estimators α̂_i and β̂_j are as given in Exercise 19.

22. In a three-way layout with one observation in each cell, the observations Y_ijk (i = 1, …, I; j = 1, …, J; k = 1, …, K) are assumed to be independent and normally distributed, with a common variance σ². Suppose that E(Y_ijk) = θ_ijk. Show that for any given set of numbers θ_ijk, there exists a unique set of numbers μ, α_i^A, α_j^B, α_k^C, β_ij^{AB}, β_ik^{AC}, β_jk^{BC}, and γ_ijk such that
Table of the Standard Normal Distribution Function

Φ(x) = ∫_{−∞}^{x} (2π)^{−1/2} exp(−u²/2) du

The table gives Φ(x) for values of x from 0.00 to 4.00.

Donald B. Owen, HANDBOOK OF STATISTICAL TABLES, © 1962, Addison-Wesley Publishing Company, Reading, Massachusetts. Reprinted with permission.
Table of the χ² Distribution

If X has a χ² distribution with n degrees of freedom, this table gives the value of x such that Pr(X ≤ x) = p, for selected degrees of freedom n from 1 to 100 and selected probabilities p from 0.005 to 0.995.

Adapted with permission from Biometrika Tables for Statisticians, Vol. 1, 3rd ed., Cambridge University Press, 1966, edited by E. S. Pearson and H. O. Hartley; and from "A new table of percentage points of the chi-square distribution," Biometrika, Vol. 51 (1964), pp. 231-239, by H. L. Harter, Aerospace Research Laboratories.
Table of the χ² Distribution (Continued)
Table of the t Distribution

If X has a t distribution with n degrees of freedom, the table gives the value of x such that Pr(X ≤ x) = p, for n = 1, 2, …, 20 and selected probabilities p from 0.55 to 0.995.
Answers to Even-Numbered Exercises

(b) (x: 1 ≤ x ≤ 7); (c) B; (d) (x: 0 < x < 1 or x > 7).
Sec. 1.5 2 2. 5 · 4. 0.4. 8 . 0.4 if A
10. (a)
1
c:
6. (a)
1
1
1: ; (b) 6" ;
(c)
83 '
B and 0.1 if Pr(A u B)
- 4" ; (b) 43 ; (c) 32 ; (d) o.
-
I.
Sec. 1.6 2.
7' 4
4.
1 2
.
6.
2 3 '�
Sec. 1.7 2. 5 !.
5 4. 18 '
6.
201
8!2012
.
' 8. ( 3 ') 61
'
Sec. 1.8 2. They are equal. 4. This number is
( 4��1 ) and therefore it must be an integer.
697
698
Answers 10 Even-Numbered Exercises
6. n
16.
+
l
-
(Z)
k
8.
(en)
4
II
+ 1
10.
( 2;' )
1m
( \�)
131 121 6!2!41 416131 25! 1018!7!
6.
Sec. 1.10
9 4. 24'
2. 45 percent.
8.
10.
12.
+
52 4
Sec. 1.9 300! 2. 5!8!2871 '
6.
( �) ( i�) U�) 2
10
1
( \� ) {[( �) ,(�) 1 m)
I
4.
II ! 6"n. 11l2 ! · · · 116! '
8.
4!( 13!)' 52!
(�� )l
-[(i�) +(�� ) +(;�) +U�) +(�) +(i�)l m) m)]} + [(�) 10. ( ; )((�O) 5 : r) 2" 0,2, . . . , 10. +
11 =
+
,
where
r =
x
and
x
...
Sec. J.ll
1 4. 1 - - . 10'
12. 11 >
log(0.2) log( 0.99)
6. 0. 92.
14.
.
16. ( 0.8)" + (0.7)'"J - [(0.2)" Sec. 1.13 2. No. 4. (aJ If and onJy if
8.
A u B
-
+ (0.3)"'(. S. (b) Always.
1 7' 1
12 '
10. 10(0.01)(0.99)'-
Answers to Even-Numbered Exercises
6.
1 6'
8. 1
10. (a) 0.93. 14. 20. 22.
(b) 0.38.
4 81 '
16. 0.067.
C)(( 1�) 5 :J ,
w here k
- 2 '- 2 J
and J' -
699
- ( �6 t
12 .
(�) P'(l - P)'.
18.
( � W) r
1
.
, ,.
2 3, 4 5
PI + P2 + P3 - P,Pz - pz p'J - PIP) + PIPZP3' where
24. Pr(A wins)
- � ; P« B wins) - �; Pr(C wins) - �.
Chapter 2 Sec. 21
2 . Pr(A).
4.
2 6. ' 3
(r +
r( r + k ) ( r + 2 k ) b b)( r + b + k)( r + b + 2k)( r + b + 3k) . 3 1 8. 10. (a) 4 ; '3
Sec. 22
2. 0.47.
1 2 3 3 4 10. (a) 0, ; (b) 4 ; ' ' ' 10 10 10 10 Sec. 2.3
2. (a) 0.667;
(b) 0.666.
6. (a) 0. 632; (b) 0.605. 40
10. (a) 81 ;
(b)
41 81 .
1 (e) 4 '
6. 0.301.
8.
4. (a) 0.38 ; (b) 0.338; 1 1 8. (a) 'S ; (b) 'S .
18 59 '
(e) 0.3338.
(b)
5'
3
700
Answers to Even·Numbered Exercises
HHH
HHT
HTH
THH
TTH
THT
0 0 0
1 0 0
0
0 0
0 0
1
0 0 0 0
! 0 0 0 0
0 0 0 0
0
I
2. Condition (a).
4.
12. HHH HHT HTH THH TTH THT HTT
,
TIT
Sec.
,
0 0 0
1
! 0 0
0 0 0
l
0 1
,
II 12 '
6.
10. (a) 0.659. (b) 0.051.
1 4. (aJ 1 - Po - PI 1 Po
, where
1
Po
�
i;l (( i�)
and P I
! 0 0
0 0
1
�
0 0
, 0
,
12.
,
0
0
2 3'
8. 0.372.
en
"
,
1
8.
j ;" 198.
0
1
0
2.4
Sec. 1.6 4.
"
( ) (1 1 l 1: ) 4( 1� . ( i;)
.. 1 - 1 1:
I
"
(b) 1 - Pi7 18. 9 ' 22. (a) The second condition. (b) The first condition. (c) Equal probability under both conditions.
Chapter 3 Sec. 1.1 .
2. /(0) - 6 ' /( l ) - 18 ' /(2) - 9 ' /( 3)
4. /(x) -
1
.
W( 5 � xl 10 (5)
o 6. 0.806.
5
2
-
61 ' /(4)
�
for x - 2,3,4, 5 ,
otherwise. 8.
6 �, .
1 9 ' /(5)
�
HTT
1 18 '
TIT
0 0
0
0 0 0 "
,
0
Answers to Even-Numbered Exercises
Sec. 3.2
2; 1
2. (a)
4. (a)
2 13 ; (e) ' 27 27
(b)
1 -
6. fix) -
2; (b)
{
1 - /8.
for - 2 .; x .; 8,
1� 0
and probability is
otherwise,
7
10 '
Sec. 3.3 4. fi x) - (2/9)x for 0 .; x .; 3; fix) - 0 otherwise.
6. F(X) -
{>
for x < - 2 . < X + 2)
Cor - 2 ", x < S, (or x > 8 .
Sec. 3.4
7 7 1 1 2. (a) 4 ; (b) : (e) 40 : (d) 10 ' 0 20 13 79 5 4. (a) 4 ; (b) : (e) 16 ; (d) O. 256 6. (a) 0.55: (b) 0.8.
Sec. 3.5 2. (a) /,( x ) -
{g
(b) Yes; (e) Yes.
4. (a) /( x , y )
-{�.I'"
(b) 0,3: (e) 0.35,
6. Yes.
8, (a) /(x , y )
/, ( x ) (b) Yes. 10,
11
36
.
-
-(J {g
for 0
�x�
2,
for 0 " y " 1 ,
otherwise.
otherwise.
.
lor x - 0, 1 , 2 , 3 and y -, 0, 1 , 2 , 3 , otherwise.
lor ( x, y) otherwise.
E
for 0 " x " 2, otherwise.
S,
/' (Y) -
{g
for 1
� Y " 4,
otherwise.
701
702
Answers; to Even-Numbered Exercises
Sec. 3.6
2. (a)
- 2 < x < 4,
For
g,(ylx) �
4. 6.
- /2
2
(b)
4
(a) Fo' O
(b) '2 1
I {:[9 _ ( X _ I ) 'j 'f2
{ < y < I , g, (xly) -
1
8. (a) f, e x )
8
IT '
(b)
_{ �
X(2 +
0
�
1 (1 -
0
- 2x - Y 2(2 _ x)'
for 0
={3 e-C,ol +3o'J)
6.
whe,e
3.7
f: ( '; )p'(1 - p)"-',
o
Sec. 2.
3.8
g( y )
6.
- { :(l
Cry) - 1
-
(a) g(y) -
(b) (e)
8. Y
(I
_
y)'/'
_ly ) lf2
{ gI y
_ 1/'
I _'I' g(y) - { tY I
g(y) - { � y
-
2 X1/3.
for O
-
for
a
otherwise.
< y < I,
otherwise. -
1
< y < 0,
otherwise.
< y < 1,
otherwise .
for
0 22.5 + "51 1 1 exp{ - : ( x - 25)'} + g exp{ - : ( x - 20)' } 2 2 exp
12.
14. J(x) �
1
(2,,)
1/2
ax
exp
{- I
2 (logx
2a
- It)
'} for x > 0,
and
log 9.
f(x) = 0 for x :s;;: O.
Sec. 5.7
2. 0.9938 4. II � 542. 8.
Y" .
- �[min( XI ' ' ' ' X,, } + max( XI ' " ' ' X.. }]. 14. - 6.75. x" .
4. �
2. "1 " X" log2.
Sec. 6.8
-
11
3 4
� -'
l. 4
14. mint XI ' " ' ' X,,).
Answers
16. lS.
to
Even·Numbered Exercises
_
709
=( �E;'_110gxl 10gXO) x _I The smallest integer greater than � 1 . If � - is itself an integer. both P P P x
x() = min(Xj,
• • •
, X,, ), and &.
-1.
J
-
and - are M.L.E:s. p
20. 16.
Chapter 7 Sec. 7.1
2.
4. n 10.
255.
'I ;>
6.
=
II >
16.
Sec. 7. 1 4.
0.20. 8. Xl distribution with one degree of freedom. n 1)/2] 10. 2'/'r[( f( '1/2) +
Sec. 7.3
6.
(a) 'I
Sec. 7.4 4. c -
21; (b) " - 13.
-
(aJ 6.160 ' ; (b)
Sec. 7.6 4. J.l. o =
6. 8.
-
5;
2.050'; (e) 0.560': (dJ 1.80.'; (e) 2.80.': (1) 6.120'.
'\0 "'"
4; ao G' 2; Po "" 4.
The conditions imply that
ao
- 1, and H(p.) exists only for
(7.084, 7.948); (b) (7.031, 7.969).
(a)
The same for both samples.
. 6. 0.70.
0/3/2 .
Sec. 7.5
2.
8.
ao
>
10. (0.446, 1.530). 12. (0.724, 3.336).
Sec. 7. 7
2. 10.
1 " - L.. X, 11 i - I
- -1 -1 "L.." (X, .1',,) . (a) All values; (b) a = + 411 ' '"
II -
i_I
m
m
,
4. .J4.
Set·. 7.8
2. 16 .
1(0)
-
NormaJ
O'
1
.
Wit h mean
�.
4. 0 ' and vanance -. 9U4(J2 . 'I
8(X)
(e)
'",,
2x. - 31 (1
_
I , . 1( 0 - ) - 2 (J 4
+
00)'
710
Answers to Even-Numbered Exercises
Sec. 7.9
- 1) + 2p(n - 1) - I. (b) a - -.-,7+-'�'----O;-2 ' - 2( rn +\ 2) · Y 2 [�l' n - 1 /2 · 0.99 t 10. - c[ n (��� 1) ] '/', 12. 6.3.CUI - 1.96)11' + l.96/1d. (2) 14. (1n2 .
6. 8.
(a) For a(m
P
X,
where
c
degrees of freedom,
(a)
is the
quantile of the
_
distribution with n
where 1-41 and /11 are given by Eqs. (1) and
III
of Sec.
Normal. with mean (} and variance
Chapter 8
2. 11(0) -) -1,0.11(0. 1) -,,(0.0.43941, 11(0. 2) -,,(0.0.15558, 9423,,00 ,,(0. 3 3 996, ) 0. 7 505, ) -- 0.1.00 0. 9 935, ,,(0. 7 ) 0. 9 998, ,,(0. 8 ) 11(0. 6 ) ,,(0.9) - 1.0000; 0.1558. 4. 4. a( - 0.0877. 5 - 1.645n- 1/2 ; 6. - 31.02. 8. ( �r 12. 50.653. 10. 0.6170; 0.3174; 0.0454; 0.0026. 4"":+ 'i! 10g(0. 6=4) ;. = S' caused 14. log(7/3) Sec. 8.1 (a)
(b)
1 .0000, ,,(1)
(a) Simple; (b) Composite; (c) Composite; (d) Composite.
Sec. 8.2
(a) Reject Ho when X" >
2. (b) I .
8)
(b)
(b) (a)
c
(b)
P( 8 ) -
(e)
by a major defect if L X >
Decide that failure was
2. (b) E, - 178
X>
(d)
;-1
917 E, - 32 - 3·1 0.05 6. 12. 9.359; 0.7636; 0.9995. 2. - 1.645"-1/2 + 1.645n- 1/2. 4. n 11. 6. - -0.424 - 0.531.
Sec. 8.j
and E, -
and E,
or
Sec. 8.4
The power function is
for every value of (J.
(a) Reject Ho if X, "
(b)
(e)
Sec. 8.5
c1 = P.o =
and C2 c1
'""
P.o
and
C2
'
4.
X"
51.40.
8. - 36.62. c
1
Answers to Even·Numbered Exercises
71 1
Sec. 8.6 2. Since U . 8 . Smce
=
,
4 S,�
4. Accept Ho .
- 1.809, do not reject the claim. < 1 6.92, accept H(j'
Sec. 8. 7 2. U = 236 ;
: ; the corresponding tail area is very small.
1
4. U =
c -
6. (a)
the corresponding tail area is very small.
8. 0.0013.
1.%.
Sec. 8.8 -
2.
c
8.
CI
=
2. c,
=
4. I.
6. (a) o? 7.625 and (b) Accept Ho.
3.77.
10. 0.26SV < , < 3.12V.
1 .228. 0.321 and
c2
=
�
61 =
3.96;
Sec. 8.9 - 1.782 and
c2 =
1.782; No will be accepted.
4. Since U = - 1.672, reject 110,
6.
- 0.320
2 (6.635n)'12.
14. Normal, with mean
(.!.2 ) 1/' and variance
11 0
21411
8'
16. (a) 0.031 < a < 0.994. (b) 0 < 0.447 or 0 > 2.237. 18. Uniform on the interval ( Yl. y)).
Chapter
10
Sec. 10.1
4. Y - - 1.670
+
1.064x.
6. (a) y - 40.893
8.
y
-
+ 0.548x; (b) y - 38.483 + 3.44Ox - 0.643x'.
3.7148 + 1.I013x,
+
1.8517x,.
10. The sum of the squares of the deviations of the observed values. from the fitted curve is smaller in Exercise 9. Sec. 10. 2
B. - 0.775
10. c2
14. - 0.891.
16. c2
Sec. 10.3
2. Since VI
�
12. x '" x" - 2.33. 18. x "'" x" - 2.25.
3x" - 6.99.
- -i"
=
- 6.695, reject Ho . 6. Since lU12 I - 0.664, accept Ho. =
- 2.25.
- 6.894, reject Ho. 4. Since Uz 2 10. Since U = 24.48, reject No. ...
14. 0.284 " Y < 0.880. 12. 0.246 < /l, < 0.624. lB. 10(/1, - 0.147)' + 10.16(/l, - 0.435)' + 8.4(/1, -'0.147)(/1, - 0.435) Sec. 10.5
4.
/1 -
5.126, .'
6. /I � - 0.744,
�
" " '-.. '
tribution. 102, 150 Continuous joint distribution, 117, 143, 159 C:ontrnl variahle, 604 Convergence absolute, 180 in distribution. 279-280 in probability, 230 with probability one, 231 in quadratic mean, 232-233 Convolution, 167 Correction Cor continuity, 283-285 Correlated 2 x 2 table, 544-545 Correlation, 213-215 Covariance, 213-216 Covariance matrix, 635
Craig, A. T., 2, 680 Cramer, H., 313, 425, 679 Cramer-Rao inequality, 425 Craps. 61-62 Critical region. 438 Crossover design, 572 Cumulative distribution function, 109 Daniell, P., 565 David. F. N 2. 679 Decision procedure, 4S 7-462 failure rate, 294 "Creasing . D... Degrees of freedom of X2 distribution. 383 of F distribution, 500 of t distribution, 394 DeGroot. M. H., 234-, 313, 679 Delta method, 429-430 de Moivrc. A.. 275 Derman, C, 2, 680 Design matrix, 633 Devore, J. 1., 2, 679 d.i., see Distribution function Digamma function, 350 Disaggregation. 548 Discrete distribution. 99, 150 Discrete joint distribution. 115, 142. 158 Discrete time parameter, 73 Disjoint events, 10 Distribution of maximum and minimum values. 159-161 of a random variable, 98 of Slim of random variables, 166-167 Distribution function joint, 120, 142 marginal, 144 of a random variable, 108 Doksum, K. A., 313, 679 Dominating estimator, 374 Draper, N. R., 632, 679 .•
Efficient estimator, 426-427 Empirical distribution function, 552 Empty set, 8 Equally likely outcomes, 3 EJTor types, 442-443 Estimate, 331 Estimation problem, 330 Estimator. 330, 381 Bayes, 331-336 efficient, 426 improvement of. 371 -374
Index
inadmissible, 374
maximum likelihood estimators for, 632-634, 637-638
maximum likelihood, 338-354 sampling distribution of, 381-382 unbiased. 411-417 Expectation, 179-180 conditional, 219-221 of a linear function, 187 for a nonnegative discrete distribution. 192
of a product. 191 of a sum, 188
Expected value, 180
see also Expectation
normal equations for, 633 tests of hypotheses for, 638-639, 643 Gentle. I. E., 157, 680
Geometric distribution, 260-262 Gieser, L. J., 2, 680
Glivenko-Cantelli lemma. 553-554 Goodness-of-fit test, 521, 526 Gosset, W. S., 394 Grand mean, 654, 664 Graybill, F. A., 2, 680
Experiment, 5-6
Guaranteed winners. 53-54
Experimental design, 313
Guttman, I., 632. 679
Explained variation, 640 Exponential distribution, 289-291, 384 confidence interval for, 402 conjugate prior for, 327 estimator for, 337, 347, 352, 354 testing hypotheses about. 476 Exponential family, 362-363. 370. 474 F distribution, 499-502 F test. 502-505
Factorization criterion, 358-362 Failure rolle, 294 Fair game, 84 Favorable game, 83 Feller. W., 2, 679 Ferguson, T. S., 313, 679 Fermat, Pierre, 1
Fisher,
R. A,
339, 356, 396. 528, 644
Fisher information, 420-424 Folks. L., 2, 680 Fraser, D. A. 5., 2, 679
Freund, 1. E., 2, 679 Functions
of more than one random variable, 158-167 expectation of, 185
of one random variable. 150-154 expectation of, 183-184 Galilei. Galileo. 1 Gambler's ruin problem, 82-85 Gamma distribution, 286-289, 383
Halmos, P. R., 358 Hazard function, 294 Hetcroscedasticity, 606 Histogram. 283 Hoel, P. G.. 2, 679 Hogg.
R.
V" 2, 680
Homosccdastieity. 606 Hypergeometric distribution, 247-250 liypoihesis, 437 Li.d., 145 Inadmissible estimator, 374 Increasing failure rate, ,294 Independence of events, 43-49 of random variables, 127-132, 138-139, 144-145 Information inequality, 424-426 Lnitial probability vector, 78 Initial state. 73 Interaction, 664 Intersection, 9 Jacobian, 162y · ,t. of linear transformation, 165 Joint diStribution function, 120, 142 Joint probability density function, 117, 143 Ioint probability function, 115, 142 Jointly sufficicnt statistics, 364-369
Gamma function. 286-288 Gaus�Markov theorem
Kcmplbome, 0., 2, 680
for general lincar model, 637
Kendall, M. G., 624, 680
for simple linear regression, 608
Kennedy. W. J., Jr., 157, 680
General linear model, 631-632 design matrix for, 633 Gauss-Markov theorem for. 637
719
Kolmogorov. A. N., 556 Kolmogorov-Smirnov test. 554-559 of simple hypotbesis, 554-558
720
Index
Kolmogorov-Smimov test, (COlli.) for two samples. 558-559 Koopman-Darmois farnily, 362-363, 370
Mean absolute error, 209-211 Mean squared error, 209 Mean vector, 635 Median, 207-211
Larson, H. J.t 2, 679
Law of large numbers, 229-231 weak and strong, 231
Least-squares method, 593-602
confidence interval and test for, 561-563 Memoryless property of exponential distribution, 290 of geometric distribution, 261-262
for fitting a linear function. 599-602
Mendenhall. W., 2, 680
for titting a polynomial, 597-599
m.gJ .
for fitting a straight line, 595-597
Meyer. P. L., 2, 6&1 .
see
Moment generating function
Lehmann, E. L.t 313, 486, 502, 507, 530.680
Minimal sufficient statistic, 366-368
Level of significance, 439. 449-450, 495-496
Mixed distribution, 106, 120, 143
Levy, P., 275, 277, 279 Liapounov, A, 276-277, 281
Mode. 247
Life lests, 291-292 Lifetime. 145-146
MJ"E., see Maximum likelihood estimator Moment, 199
central. 200
Likelihood function, 317-319, 339
Moment-generating function, 201-204
Likelihood principle. 353-354, 416
Monotone likelihood ratio, 467
Likelihood ratio, 444
Mood. A. M., 2, 680
monotone, 467 Likelihood ratio test procedure. 487 for deriving F test, 502-503
for deriving 1 test, 486-489, 506-508
in simple linear resression. 618-619, 621-623
M.S.E., see Mean squared error
Multideeision problem, 4$6-462 Multinomial coefficient, 32-35
Multinomial distribution, 297-299 Multinomial theorem, 33
Lindeberg, J. W., 275, 277, 279 Lindgren, B. W., 2, 680
Multiple regression, 631-641
Linear transformations, 165-166
Multiplication rule, 21
orthogonal, 388-389
Multiple linear regression, 631, 639-640
for conditional probabilities, 60-61
Loss function, 331-334
Multivariate distribution, 142
absolute error, 333
Mutually exclusive events, 10
squared error, 332
Lottery ticket, 236-237
Negative binomial distribution, 258-262
conjugate prior for, 329
M.A.E., see Mean absolute error
Main effects, 664
Mann, H. B., 581
Marginal distribution, 125-127, 143-144
Neyman, J., 358, 444
Neyman-Pearson lemma, 444-445
Nonparametric problem, 520
Normal distribution, 263-272
Markov chain, 73-79
Bayes estimator for mean of, 333, 334
Markov inequality, 226
confidence interval for, 398-399, 402, 409
Matched pair, 571-572
conjugate prior for, 324-326, 329, 402-409
Matching problem, 40-42, 190
estimator for variance of, 414-415
Maximum likelihood estimator, 339-352
F test for, 502-505
asymptotic distribution of, 428-429
goodness-of-fit test for, 529-530
consistency of, 350-351
Kolmogorov-Smirnov test for, 556
for general linear model, 632-634, 637
maximum likelihood estimator for, 341-343, 346, 354, 391-392
invariance of, 348
numerical computation of, 349
as sufficient statistic, 368
Mean, 180, 208
see also Expectation
precision of, 403
relation to t distribution, 395, 396
robust estimator for, 566-569
t test for mean of, 485-491
for comparing two means, 506-511
testing hypotheses about, 445-447, 470-471, 474, 477, 483
Normal equations
Poisson distribution, 252-257
conjugate prior for, 323-324
estimator for, 336, 346, 354, 426-427
testing hypotheses about, 475-476
for general linear model, 633
Poisson process, 254-256
for linear function, 600
Port, S., 2, 679
for polynomial, 598
Posterior distribution, 316-317
for straight line, 597
Posterior probability, 68-70
Null hypothesis, 437
composite, 479
Power function, 438
Precision, 403
Null set, 8
Prediction, 209-210, 221-224
Olkin, I., 2, 680
Prior distribution, 313-315
in simple linear regression, 610-611
One-way layout, 644-650
ANOVA table for, 648-649
Prior probability, 68
Probability
estimators for, 646
additive property of, 13
tests of hypotheses for, 649-650
axioms of, 13
Optimal selection, 87-88
classical interpretation of, 3-4
Optimal tests, 443-445
frequency interpretation of, 2-3
Order statistics, 367
history of, 1-2
Ore, O., 2, 680
subjective interpretation of, 4-5
Orthogonal matrix, 387-388
Outlier, 566
mathematical definition of, 13
Probability density function, 102, 152-154
Overall mean, 654, 664
conditional, 135-137, 147-148
joint, 117, 137, 143
Paired observations, 571-577
Parameter, 311-312
marginal, 125-126, 143-144
nonuniqueness of, 102-103
of Bernoulli distribution, 244
Probability distribution, 13
of Bernoulli trials, 244
Probability function, 99
of beta distribution, 294
of a random variable, 98
of binomial distribution, 101, 245
conditional, 134
of exponential distribution, 289
joint, 115, 142
of gamma distribution, 288
marginal, 125
of geometric distribution, 260
Probability integral transformation, 154-155
of hypergeometric distribution, 248
Probability vector, 78
of multinomial distribution, 297
Proportionality symbol, 317
of negative binomial distribution, 259
of Pareto distribution, 293
of Weibull distribution, 294
Parameter space, 312
Parametric family, 519
Pareto distribution, 293, 330
Parlor games, 93-94
Pascal, Blaise, 1
p.d.f., see Probability density function
Quantile, 563
of F distribution, 501
R², 640, 644
Random digits, 155
Random numbers, 155
Random process, 73
Random sample, 145
Pearson, E. S., 444
Random variable, 97
Percentile, 563
Random vector, 142
Pearson, Karl, 521
distribution of, 98
Perfect forecasts, 52-53
Randomized response, 379
Permutations, 22-24
Randomized test, 448-449
p.f., see Probability function
Range, 168
effects of factors in, 654
conjugate prior for, 330
estimators for, 654-655
estimator for, 338, 343, 347, 354-355
tests of hypotheses for, 657-658
with replications, 662-672
ANOVA table for, 666
estimators for, 664
testing hypotheses about, 439-440, 484
Union, 8
probability of, 36-40
Utility function, 233-236
interactions in, 664
main effects of factors in, 664
tests of hypotheses for, 667-669
with unequal numbers of observations, 671-672
Variance, 194-195
properties of, 195-198
Vector, 142
Venn diagram, 8
UMP test, see Test, uniformly most powerful
Unbiased estimation, 353, 411-417
Wackerly, D. D., 2, 680
Unbiased estimator, 411-417
Walpole, R. E., 2, 679
with minimum variance, 419, 427
Weibull distribution, 294
Unbiased tests, 481
Whitney, D. R., 581
Unexplained variation, 640
Wilcoxon, F., 574, 581
Unfavorable game, 83, 85
Wilcoxon-Mann-Whitney ranks test, 581-584
Uniform distribution
Wilcoxon signed-ranks test, 573-576
on integers, 100
on an interval, 104-105
Zacks, S., 313, 680