BIOSTATISTICAL ANALYSIS
Fifth Edition
Jerrold H. Zar
Department of Biological Sciences
Northern Illinois University
Prentice Hall is an imprint of Pearson
Upper Saddle River, New Jersey 07458
Library of Congress Cataloging-in-Publication Data

Zar, Jerrold H.
Biostatistical analysis / Jerrold H. Zar. - 5th ed.
p. cm.
ISBN: 978-0-13-100846-5
1. Biometry. I. Title.
QH323.5.Z37 2010
570.1'5195-dc22
2009035156

Editor-in-Chief, Statistics and Mathematics: Deirdre Lynch
Acquisitions Editor: Christopher Cummings
Editorial Project Manager: Christine O'Brien
Assistant Editor: Christina Lepre
Project Manager: Raegan Keida Heerema
Associate Managing Editor: Bayani Mendoza de Leon
Senior Managing Editor: Linda Mihatov Behrens
Senior Operations Supervisor: Diane Peirano
Marketing Manager: Alex Gay
Marketing Assistant: Kathleen DeChavez
Art Director: Jayne Conte
Cover Designer: Bruce Kenselaar

Prentice Hall is an imprint of Pearson

© 2010, 1999, 1996, 1984, 1974 by Prentice Hall, Inc.
Pearson Prentice Hall
Pearson Education, Inc.
Upper Saddle River, New Jersey 07458

All rights reserved. No part of this book may be reproduced in any form or by any means, without permission in writing from the publisher.

Pearson Prentice Hall™ is a trademark of Pearson Education, Inc.

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ISBN-10: 0-13-100846-3
ISBN-13: 978-0-13-100846-5

Pearson Education Ltd., London
Pearson Education Singapore, Pte. Ltd.
Pearson Education Canada, Inc.
Pearson Education-Japan
Pearson Education Australia PTY, Limited
Pearson Education North Asia Ltd., Hong Kong
Pearson Education de Mexico, S.A. de C.V.
Pearson Education Malaysia, Pte. Ltd.
Pearson Education, Upper Saddle River, New Jersey
Contents

Preface
Acknowledgments

1 Data: Types and Presentation
1.1 Types of Biological Data
1.2 Accuracy and Significant Figures
1.3 Frequency Distributions
1.4 Cumulative Frequency Distributions

2 Populations and Samples
2.1 Populations
2.2 Samples from Populations
2.3 Random Sampling
2.4 Parameters and Statistics
2.5 Outliers

3 Measures of Central Tendency
3.1 The Arithmetic Mean
3.2 The Median
3.3 The Mode
3.4 Other Measures of Central Tendency
3.5 Coding Data

4 Measures of Variability and Dispersion
4.1 The Range
4.2 Dispersion Measured with Quantiles
4.3 The Mean Deviation
4.4 The Variance
4.5 The Standard Deviation
4.6 The Coefficient of Variation
4.7 Indices of Diversity
4.8 Coding Data

5 Probabilities
5.1 Counting Possible Outcomes
5.2 Permutations
5.3 Combinations
5.4 Sets
5.5 Probability of an Event
5.6 Adding Probabilities
5.7 Multiplying Probabilities
5.8 Conditional Probabilities

6 The Normal Distribution
6.1 Proportions of a Normal Distribution
6.2 The Distribution of Means
6.3 Introduction to Statistical Hypothesis Testing
6.4 Confidence Limits
6.5 Symmetry and Kurtosis
6.6 Assessing Departures from Normality

7 One-Sample Hypotheses
7.1 Two-Tailed Hypotheses Concerning the Mean
7.2 One-Tailed Hypotheses Concerning the Mean
7.3 Confidence Limits for the Population Mean
7.4 Reporting Variability Around the Mean
7.5 Reporting Variability Around the Median
7.6 Sample Size and Estimation of the Population Mean
7.7 Sample Size, Detectable Difference, and Power in Tests Concerning the Mean
7.8 Sampling Finite Populations
7.9 Hypotheses Concerning the Median
7.10 Confidence Limits for the Population Median
7.11 Hypotheses Concerning the Variance
7.12 Confidence Limits for the Population Variance
7.13 Power and Sample Size in Tests Concerning the Variance
7.14 Hypotheses Concerning the Coefficient of Variation
7.15 Confidence Limits for the Population Coefficient of Variation
7.16 Hypotheses Concerning Symmetry and Kurtosis

8 Two-Sample Hypotheses
8.1 Testing for Difference Between Two Means
8.2 Confidence Limits for Population Means
8.3 Sample Size and Estimation of the Difference Between Two Population Means
8.4 Sample Size, Detectable Difference, and Power in Tests for Difference Between Two Means
8.5 Testing for Difference Between Two Variances
8.6 Confidence Limits for Population Variances and the Population Variance Ratio
8.7 Sample Size and Power in Tests for Difference Between Two Variances
8.8 Testing for Difference Between Two Coefficients of Variation
8.9 Confidence Limits for the Difference Between Two Coefficients of Variation
8.10 Nonparametric Statistical Methods
8.11 Two-Sample Rank Testing
8.12 Testing for Difference Between Two Medians
8.13 Two-Sample Testing of Nominal-Scale Data
8.14 Testing for Difference Between Two Diversity Indices
8.15 Coding Data

9 Paired-Sample Hypotheses
9.1 Testing Mean Difference Between Paired Samples
9.2 Confidence Limits for the Population Mean Difference
9.3 Power, Detectable Difference, and Sample Size in Paired-Sample Testing of Means
9.4 Testing for Difference Between Variances from Two Correlated Populations
9.5 Paired-Sample Testing by Ranks
9.6 Confidence Limits for the Population Median Difference

10 Multisample Hypotheses and the Analysis of Variance
10.1 Single-Factor Analysis of Variance
10.2 Confidence Limits for Population Means
10.3 Sample Size, Detectable Difference, and Power in Analysis of Variance
10.4 Nonparametric Analysis of Variance
10.5 Testing for Difference Among Several Medians
10.6 Homogeneity of Variances
10.7 Homogeneity of Coefficients of Variation
10.8 Coding Data
10.9 Multisample Testing for Nominal-Scale Data

11 Multiple Comparisons
11.1 Testing All Pairs of Means
11.2 Confidence Intervals for Multiple Comparisons
11.3 Testing a Control Mean Against Each Other Mean
11.4 Multiple Contrasts
11.5 Nonparametric Multiple Comparisons
11.6 Nonparametric Multiple Contrasts
11.7 Multiple Comparisons Among Medians
11.8 Multiple Comparisons Among Variances

12 Two-Factor Analysis of Variance
12.1 Two-Factor Analysis of Variance with Equal Replication
12.2 Two-Factor Analysis of Variance with Unequal Replication
12.3 Two-Factor Analysis of Variance Without Replication
12.4 Two-Factor Analysis of Variance with Randomized Blocks or Repeated Measures
12.5 Multiple Comparisons and Confidence Intervals in Two-Factor Analysis of Variance
12.6 Sample Size, Detectable Difference, and Power in Two-Factor Analysis of Variance
12.7 Nonparametric Randomized-Block or Repeated-Measures Analysis of Variance
12.8 Dichotomous Nominal-Scale Data in Randomized Blocks or from Repeated Measures
12.9 Multiple Comparisons with Dichotomous Randomized-Block or Repeated-Measures Data
12.10 Introduction to Analysis of Covariance

13 Data Transformations
13.1 The Logarithmic Transformation
13.2 The Square-Root Transformation
13.3 The Arcsine Transformation
13.4 Other Transformations

14 Multiway Factorial Analysis of Variance
14.1 Three-Factor Analysis of Variance
14.2 The Latin-Square Experimental Design
14.3 Higher-Order Factorial Analysis of Variance
14.4 Multiway Analysis of Variance with Blocks or Repeated Measures
14.5 Factorial Analysis of Variance with Unequal Replication
14.6 Multiple Comparisons and Confidence Intervals in Multiway Analysis of Variance
14.7 Power, Detectable Difference, and Sample Size in Multiway Analysis of Variance

15 Nested (Hierarchical) Analysis of Variance
15.1 Nesting Within One Factor
15.2 Nesting in Factorial Experimental Designs
15.3 Multiple Comparisons and Confidence Intervals
15.4 Power, Detectable Difference, and Sample Size in Nested Analysis of Variance

16 Multivariate Analysis of Variance
16.1 The Multivariate Normal Distribution
16.2 Multivariate Analysis of Variance Hypothesis Testing
16.3 Further Analysis
16.4 Other Experimental Designs

17 Simple Linear Regression
17.1 Regression versus Correlation
17.2 The Simple Linear Regression Equation
17.3 Testing the Significance of a Regression
17.4 Interpretations of Regression Functions
17.5 Confidence Intervals in Regression
17.6 Inverse Prediction
17.7 Regression with Replication and Testing for Linearity
17.8 Power and Sample Size in Regression
17.9 Regression Through the Origin
17.10 Data Transformations in Regression
17.11 The Effect of Coding Data

18 Comparing Simple Linear Regression Equations
18.1 Comparing Two Slopes
18.2 Comparing Two Elevations
18.3 Comparing Points on Two Regression Lines
18.4 Comparing More Than Two Slopes
18.5 Comparing More Than Two Elevations
18.6 Multiple Comparisons Among Slopes
18.7 Multiple Comparisons Among Elevations
18.8 Multiple Comparisons of Points Among Regression Lines
18.9 An Overall Test for Coincidental Regressions

19 Simple Linear Correlation
19.1 The Correlation Coefficient
19.2 Hypotheses About the Correlation Coefficient
19.3 Confidence Intervals for the Population Correlation Coefficient
19.4 Power and Sample Size in Correlation
19.5 Comparing Two Correlation Coefficients
19.6 Power and Sample Size in Comparing Two Correlation Coefficients
19.7 Comparing More Than Two Correlation Coefficients
19.8 Multiple Comparisons Among Correlation Coefficients
19.9 Rank Correlation
19.10 Weighted Rank Correlation
19.11 Correlation with Nominal-Scale Data
19.12 Intraclass Correlation
19.13 Concordance Correlation
19.14 The Effect of Coding

20 Multiple Regression and Correlation
20.1 Intermediate Computational Steps
20.2 The Multiple-Regression Equation
20.3 Analysis of Variance of Multiple Regression or Correlation
20.4 Hypotheses Concerning Partial Regression Coefficients
20.5 Standardized Partial Regression Coefficients
20.6 Selecting Independent Variables
20.7 Partial Correlation
20.8 Predicting Y Values
20.9 Testing Difference Between Two Partial Regression Coefficients
20.10 "Dummy" Variables
20.11 Interaction of Independent Variables
20.12 Comparing Multiple Regression Equations
20.13 Multiple Regression Through the Origin
20.14 Nonlinear Regression
20.15 Descriptive Versus Predictive Models
20.16 Concordance: Rank Correlation Among Several Variables

21 Polynomial Regression
21.1 Polynomial Curve Fitting
21.2 Quadratic Regression

22 Testing for Goodness of Fit
22.1 Chi-Square Goodness of Fit for Two Categories
22.2 Chi-Square Correction for Continuity
22.3 Chi-Square Goodness of Fit for More Than Two Categories
22.4 Subdividing Chi-Square Goodness of Fit
22.5 Chi-Square Goodness of Fit with Small Frequencies
22.6 Heterogeneity Chi-Square Testing for Goodness of Fit
22.7 The Log-Likelihood Ratio for Goodness of Fit
22.8 Kolmogorov-Smirnov Goodness of Fit

23 Contingency Tables
23.1 Chi-Square Analysis of Contingency Tables
23.2 Visualizing Contingency-Table Data
23.3 2 × 2 Contingency Tables
23.4 Contingency Tables with Small Frequencies
23.5 Heterogeneity Testing of 2 × 2 Tables
23.6 Subdividing Contingency Tables
23.7 The Log-Likelihood Ratio for Contingency Tables
23.8 Multidimensional Contingency Tables

24 Dichotomous Variables
24.1 Binomial Probabilities
24.2 The Hypergeometric Distribution
24.3 Sampling a Binomial Population
24.4 Goodness of Fit for the Binomial Distribution
24.5 The Binomial Test and One-Sample Test of a Proportion
24.6 The Sign Test
24.7 Power, Detectable Difference, and Sample Size for the Binomial and Sign Tests
24.8 Confidence Limits for a Population Proportion
24.9 Confidence Interval for a Population Median
24.10 Testing for Difference Between Two Proportions
24.11 Confidence Limits for the Difference Between Two Proportions
24.12 Power, Detectable Difference, and Sample Size in Testing Difference Between Two Proportions
24.13 Comparing More Than Two Proportions
24.14 Multiple Comparisons for Proportions
24.15 Trends Among Proportions
24.16 The Fisher Exact Test
24.17 Paired-Sample Testing of Nominal-Scale Data
24.18 Logistic Regression

25 Testing for Randomness
25.1 Poisson Probabilities
25.2 Confidence Limits for the Poisson Parameter
25.3 Goodness of Fit for the Poisson Distribution
25.4 The Poisson Distribution for the Binomial Test
25.5 Comparing Two Poisson Counts
25.6 Serial Randomness of Nominal-Scale Categories
25.7 Serial Randomness of Measurements: Parametric Testing
25.8 Serial Randomness of Measurements: Nonparametric Testing

26 Circular Distributions: Descriptive Statistics
26.1 Data on a Circular Scale
26.2 Graphical Presentation of Circular Data
26.3 Trigonometric Functions
26.4 The Mean Angle
26.5 Angular Dispersion
26.6 The Median and Modal Angles
26.7 Confidence Limits for the Population Mean and Median Angles
26.8 Axial Data
26.9 The Mean of Mean Angles

27 Circular Distributions: Hypothesis Testing
27.1 Testing Significance of the Mean Angle
27.2 Testing Significance of the Median Angle
27.3 Testing Symmetry Around the Median Angle
27.4 Two-Sample and Multisample Testing of Mean Angles
27.5 Nonparametric Two-Sample and Multisample Testing of Angles
27.6 Two-Sample and Multisample Testing of Median Angles
27.7 Two-Sample and Multisample Testing of Angular Distances
27.8 Two-Sample and Multisample Testing of Angular Dispersion
27.9 Parametric Analysis of the Mean of Mean Angles
27.10 Nonparametric Analysis of the Mean of Mean Angles
27.11 Parametric Two-Sample Analysis of the Mean of Mean Angles
27.12 Nonparametric Two-Sample Analysis of the Mean of Mean Angles
27.13 Parametric Paired-Sample Testing with Angles
27.14 Nonparametric Paired-Sample Testing with Angles
27.15 Parametric Angular Correlation and Regression
27.16 Nonparametric Angular Correlation
27.17 Goodness-of-Fit Testing for Circular Distributions
27.18 Serial Randomness of Nominal-Scale Categories on a Circle

APPENDICES
A The Greek Alphabet
B Statistical Tables and Graphs
Table B.1 Critical Values of the Chi-Square (χ²) Distribution
Table B.2 Proportions of the Normal Curve (One-Tailed)
Table B.3 Critical Values of the t Distribution
Table B.4 Critical Values of the F Distribution
Table B.5 Critical Values of the q Distribution, for the Tukey Test
Table B.6 Critical Values of q' for the One-Tailed Dunnett's Test
Table B.7 Critical Values of q' for the Two-Tailed Dunnett's Test
Table B.8 Critical Values of dmax for the Kolmogorov-Smirnov Goodness of Fit for Discrete or Grouped Data
Table B.9 Critical Values of D for the Kolmogorov-Smirnov Goodness-of-Fit Test for Continuous Distributions
Table B.10 Critical Values of Dδ for the δ-Corrected Kolmogorov-Smirnov Goodness-of-Fit Test for Continuous Distributions
Table B.11 Critical Values of the Mann-Whitney U Distribution
Table B.12 Critical Values of the Wilcoxon T Distribution
Table B.13 Critical Values of the Kruskal-Wallis H Distribution
Table B.14 Critical Values of the Friedman χ²r Distribution
Table B.15 Critical Values of Q for Nonparametric Multiple-Comparison Testing
Table B.16 Critical Values of Q' for Nonparametric Multiple-Comparison Testing with a Control
Table B.17 Critical Values of the Correlation Coefficient, r
Table B.18 Fisher's z Transformation for Correlation Coefficients, r
Table B.19 Correlation Coefficients, r, Corresponding to Fisher's z Transformation
Table B.20 Critical Values of the Spearman Rank-Correlation Coefficient, rs
Table B.21 Critical Values of the Top-Down Correlation Coefficient, rT
Table B.22 Critical Values of the Symmetry Measure, √b1
Table B.23 Critical Values of the Kurtosis Measure, b2
Table B.24 The Arcsine Transformation, p'
Table B.25 Proportions, p, Corresponding to Arcsine Transformations, p'
Table B.26a Binomial Coefficients, nCX
Table B.26b Proportions of the Binomial Distribution for p = q = 0.5
Table B.27 Critical Values of C for the Sign Test or the Binomial Test with p = 0.5
Table B.28 Critical Values for Fisher's Exact Test
Table B.29 Critical Values of u for the Runs Test
Table B.30 Critical Values of C for the Mean Square Successive Difference Test
Table B.31 Critical Values, u, for the Runs-Up-and-Down Test
Table B.32 Angular Deviation, s, as a Function of Vector Length, r
Table B.33 Circular Standard Deviation, s0, as a Function of Vector Length, r
Table B.34 Circular Values of z for Rayleigh's Test for Circular Uniformity
Table B.35 Critical Values of u for the V Test of Circular Uniformity
Table B.36 Critical Values of m for the Hodges-Ajne Test for Circular Uniformity
Table B.37 Correction Factor, K, for the Watson and Williams Test
Table B.38a Critical Values of Watson's Two-Sample U²
Table B.38b Critical Values of Watson's One-Sample U²
Table B.39 Critical Values of R' for the Moore Test for Circular Uniformity
Table B.40 Common Logarithms of Factorials
Table B.41 Ten Thousand Random Digits
Figure B.1 Power and Sample Size in Analysis of Variance
C The Effects of Coding Data
D Analysis-of-Variance Hypothesis Testing
D.1 Determination of Appropriate Fs and Degrees of Freedom
D.2 Two-Factor Analysis of Variance
D.3 Three-Factor Analysis of Variance
D.4 Nested Analysis of Variance

Answers to Exercises
Literature Cited
Author Index
Subject Index
Preface

Beginning with the first edition of this book, the goal has been to introduce a broad array of techniques for the examination and analysis of a wide variety of data that may be encountered in diverse areas of biological studies. As such, the book has been called upon to fulfill two purposes. First, it has served as an introductory textbook, assuming no prior knowledge of statistics. Second, it has functioned as a reference work consulted long after formal instruction has ended.

Colleges and universities have long offered an assortment of introductory statistics courses. Some of these courses are without concentration on a particular field in which quantitative data might be collected (and often emphasize mathematics and statistical theory), and some focus on statistical methods of utility to a specific field (such as this book, which has an explicit orientation to the biological sciences). Walker (1929: 148-163) reported that, although the teaching of probability has a much longer history, the first statistics course at a U.S. university or college probably was at Columbia College (renamed Columbia University in 1896) in 1880 in the economics department; followed in 1887 by the second-the first in psychology-at the University of Pennsylvania; in 1889 by the first in anthropology, at Clark University; in 1897 by the first in biology, at Harvard University; in 1898 by the first in mathematics, at the University of Illinois; and in 1900 by the first in education, at Teachers College, Columbia University. In biology, the first courses with statistical content were probably taught by Charles B. Davenport at Harvard (1887-1899), and his Statistical Methods in Biological Variation, first published in 1899, may have been the first American book focused on statistics (ibid.: 159).

The material in this book requires no mathematical competence beyond very elementary algebra, although the discussions include many topics that appear seldom, if at all, in other general texts. Some statistical procedures are mentioned though not recommended. This is done for the benefit of readers who may encounter them in research reports or computer software. Many literature references and footnotes are given throughout most chapters, to provide support for material discussed, to provide historical points, or to direct the reader to sources of additional information. More references are given for controversial and lesser-known topics.

The data in the examples and exercises are largely fictional, though generally realistic, and are intended to demonstrate statistical procedures, not to present actual research conclusions. The exercises at the end of chapters can serve as additional examples of statistical methods, and the answers are given at the back of the book. The sample sizes of most examples and exercises are small in order to conserve space and to enhance the ease of presentation and computation. Although the examples and exercises represent a variety of areas within the biological sciences, they are intended to be understood by biology students and researchers across a diversity of fields.

There are important statistical procedures that involve computations so demanding that they preclude practical execution without appropriate computer software.
Basic principles and aspects of the underlying calculations are presented to show how results may be obtained; for even if laborious calculations will be performed by computer, the biologist should be informed enough to interpret properly the computational results. Many statistical packages are available, commercially or otherwise, addressing various subsets of the procedures in this book; but no single package is promoted herein.
A final contribution toward achieving a book with self-sufficiency for most biostatistical needs is the inclusion of a comprehensive set of statistical tables, more extensive than those found in similar texts.

To be useful as a reference, and to allow for differences in content among courses for which it might be used, this book contains much more material than would be covered during one academic term. Therefore, I am sometimes asked to recommend what I consider to be the basic topics for an introduction to the subject. I suggest these book sections (though not necessarily in their entirety) as a core treatment of biostatistical methods, to be augmented or otherwise amended with others of the instructor's preference: 1.1-1.4, 2.1-2.4, 3.1-3.3, 4.1, 4.4-4.6, 6.1-6.4, 7.1-7.4, 7.6-7.7, 8.1-8.5, 8.10-8.11, 9.1-9.3, 10.1-10.4, 11.1-11.4, 12.1-12.4, 14.1, 15.1, 17.1-17.7, 18.1-18.3, 19.1-19.3, 19.9, 20.2-20.4, 22.1-22.3, 22.5, 23.1-23.4; and the introductory paragraph(s) to each of these chapters.

Jerrold H. Zar
DeKalb, Illinois
Acknowledgments

A book of this nature requires, and benefits from, the assistance of many. For the preparation of the early editions, the outstanding library collections of the University of Illinois at Urbana-Champaign were invaluable, and for all editions I am greatly indebted to the library materials and services of Northern Illinois University and its vast book- and journal-collection networks. I also gratefully acknowledge the cooperation of the computer services at Northern Illinois University, which assisted in executing many of the computer programs I prepared to generate some of the appendix tables. For the tables taken from previously published sources, thanks are given for the permission to reprint them, and acknowledgment of each source is given immediately following the appearance of the reprinted material. Additionally, I am pleased to recognize the editorial and production staff at Pearson Education and Laserwords for their valued professional assistance in transforming my manuscript into the published product in hand.

Over many years, teachers, students, and colleagues have aided in leading me to important biostatistical questions and to the material presented in this volume. Special recognition must be made of S. Charles Kendeigh (1904-1986) of the University of Illinois at Urbana-Champaign, who, through considerate mentorship, first made me aware of the value of quantitative analysis of biological data, leading me to produce the first edition; Edward Batschelet (1914-1979), of the University of Zurich, who provided me with kind encouragement and inspiration on statistical matters during the preparation of much of the first two editions; Arthur W. Ghent (1927-2001), University of Illinois at Urbana-Champaign, who was constantly supportive through the first four editions, offering statistical and biological commentary both stimulating and challenging; and Carol J. Feltz (1956-2001), Northern Illinois University, who provided substantive consultation on some major new material for the fourth edition.

Prior to publication, the book drew upon the expertise and insights of reviewers, including Raid Amin, University of West Florida; Franklin R. Ampy, Howard University; Sulekha Anand, San Jose State University; Roxanne Barrows, Hocking College; Thomas Beitinger, University of North Texas; Mark Butler, Old Dominion University; William Caire, University of Central Oklahoma; Gary Cobbs, University of Louisville; Loveday L. Conquest, University of Washington; Todd A. Crowl, Utah State University; Matthew Edwards, University of California, Santa Cruz; Todd Fearer, University of Arkansas-Monticello; Avshalom Gamliel, WaveMetrics, Inc.; Peter Homann, Western Washington University; Robert Huber, Bowling Green State University; James W. Kirchner, University of California, Berkeley; Gary A. Lamberti, University of Notre Dame; David Larsen, University of Missouri, Columbia; David R. McConville, Saint Mary's University of Minnesota; J. Kelly McCoy, Angelo State University; David J. Moriarty, California State Polytechnic University; Mark Rizzardi, Humboldt State University; Michael A. Romano, Western Illinois University; and Thomas A. Wolosz, the State University of New York, Plattsburgh. I also acknowledge my wife, Carol, for her prolonged and consistent patience during the thirty-five years of production of the five editions of this book.

Jerrold H. Zar
DeKalb, Illinois
CHAPTER 1

Data: Types and Presentation

1.1 TYPES OF BIOLOGICAL DATA
1.2 ACCURACY AND SIGNIFICANT FIGURES
1.3 FREQUENCY DISTRIBUTIONS
1.4 CUMULATIVE FREQUENCY DISTRIBUTIONS
Scientific study involves the systematic collection, organization, analysis, and presentation of knowledge. Many investigations in the biological sciences are quantitative, where knowledge is in the form of numerical observations called data. (One numerical observation is a datum.*) In order for the presentation and analysis of data to be valid and useful, we must use methods appropriate to the type of data obtained, to the design of the data collection, and to the questions asked of the data; and the limitations of the data, of the data collection, and of the data analysis should be appreciated when formulating conclusions. This chapter, and those that follow, will introduce many concepts relevant to this goal.

The word statistics is derived from the Latin for "state," indicating the historical importance of governmental data gathering, which related principally to demographic information (including census data and "vital statistics") and often to their use in military recruitment and tax collecting.† The term statistics is often encountered as a synonym for data: One hears of college enrollment statistics (such as the numbers of newly admitted students, numbers of senior students, numbers of students from various geographic locations), statistics of a basketball game (such as how many points were scored by each player, how many fouls were committed), labor statistics (such as numbers of workers unemployed, numbers employed in various occupations), and so on. Hereafter, this use of the word statistics will not appear in this book. Instead, it will be used in its other common manner: to refer to the orderly collection, analysis, and interpretation of data with a view to objective evaluation of conclusions based on the data. (Section 2.4 will introduce another fundamentally important use of the term statistic.)

Statistics applied to biological problems is simply called biostatistics or, sometimes, biometry‡ (the latter term literally meaning "biological measurement"). Although

*The term data is sometimes seen as a singular noun meaning "numerical information." This book refrains from that use.
†Peters (1987: 79) and Walker (1929: 32) attribute the first use of the term statistics to a German professor, Gottfried Achenwall (1719-1772), who used the German word Statistik in 1749, and the first published use of the English word to John Sinclair (1754-1835) in 1791.
‡The word biometry, which literally means "biological measurement," had, since the nineteenth century, been found in several contexts (such as demographics and, later, quantitative genetics; Armitage, 1985; Stigler, 2000), but using it to mean the application of statistical methods to biological information apparently was conceived between 1892 and 1901 by Karl Pearson, along with the name Biometrika for the still-important English journal he helped found; and it was first published in the inaugural issue of this journal in 1901 (Snedecor, 1954). The Biometrics Section of the American
their magnitudes relative to each other; or success in learning to run a maze may be recorded as A, B, or C. It is often true that biological data expressed on the ordinal scale could have been expressed on the interval or ratio scale had exact measurements been obtained (or obtainable). Sometimes data that were originally on interval or ratio scales will be changed to ranks; for example, examination grades of 99, 85, 73, and 66% (ratio scale) might be recorded as A, B, C, and D (ordinal scale), respectively.

Ordinal-scale data contain and convey less information than ratio or interval data, for only relative magnitudes are known. Consequently, quantitative comparisons are impossible (e.g., we cannot speak of a grade of C being half as good as a grade of A, or of the difference between cell sizes 1 and 2 being the same as the difference between sizes 3 and 4). However, we will see that many useful statistical procedures are, in fact, applicable to ordinal data.

(d) Data in Nominal Categories. Sometimes the variable being studied is classified by some qualitative measure it possesses rather than by a numerical measurement. In such cases the variable may be called an attribute, and we are said to be dealing with nominal, or categorical, data. Genetic phenotypes are commonly encountered biological attributes: The possible manifestations of an animal's eye color might be brown or blue; and if human hair color were the attribute of interest, we might record black, brown, blond, or red. As other examples of nominal data (nominal is from the Latin word for "name"), people might be classified as male or female, or right-handed or left-handed. Or, plants might be classified as dead or alive, or as with or without fertilizer application. Taxonomic categories also form a nominal classification scheme (for example, plants in a study might be classified as pine, spruce, or fir). Sometimes, data that might have been expressed on an ordinal, interval, or ratio scale of measurement may be recorded in nominal categories. For example, heights might be recorded as tall or short, or performance on an examination as pass or fail, where there is an arbitrary cut-off point on the measurement scale to separate tall from short and pass from fail. As will be seen, statistical methods useful with ratio, interval, or ordinal data generally are not applicable to nominal data, and we must, therefore, be able to identify such situations when they occur.

(e) Continuous and Discrete Data. When we spoke previously of plant heights, we were dealing with a variable that could be any conceivable value within any observed range; this is referred to as a continuous variable. That is, if we measure a height of 35 cm and a height of 36 cm, an infinite number of heights is possible in the range from 35 to 36 cm: a plant might be 35.07 cm tall or 35.988 cm tall, or 35.3263 cm tall, and so on, although, of course, we do not have devices sensitive enough to detect this infinity of heights. A continuous variable is one for which there is a possible value between any other two values. However, when speaking of the number of leaves on a plant, we are dealing with a variable that can take on only certain values. It might be possible to observe 27 leaves, or 28 leaves, but 27.43 leaves and 27.9 leaves are values of the variable that are impossible to obtain. Such a variable is termed a discrete or discontinuous variable (also known as a meristic variable). The number of white blood cells in 1 mm³ of blood, the number of giraffes visiting a water hole, and the number of eggs laid by a grasshopper are all discrete variables. The possible values of a discrete variable generally are consecutive integers, but this is not necessarily so. If the leaves on our plants are always formed in pairs, then only even integers are possible values of the variable. And the ratio of number of wings to number of legs of insects is a discrete variable that may only have the value of 0, 0.3333..., or 0.6666... (i.e., 0/6, 2/6, or 4/6, respectively).* Ratio-, interval-, and ordinal-scale data may be either continuous or discrete. Nominal-scale data by their nature are discrete.

*The ellipsis marks (...) may be read as "and so on." Here, they indicate that 2/6 and 4/6 are repeating decimal fractions, which could just as well have been written as 0.3333333333333... and 0.6666666666666..., respectively.

1.2 ACCURACY AND SIGNIFICANT FIGURES
Accuracy is the nearness of a measurement to the true value of the variable being measured. Precision is not a synonymous term but refers to the closeness to each other of repeated measurements of the same quantity. Figure 1.1 illustrates the difference between accuracy and precision of measurements.
FIGURE 1.1: Accuracy and precision of measurements. A 3-kilogram animal is weighed 10 times. The 10 measurements shown in sample (a) are relatively accurate and precise; those in sample (b) are relatively accurate but not precise; those of sample (c) are relatively precise but not accurate; and those of sample (d) are relatively inaccurate and imprecise.
Human error may exist in the recording of data. For example, a person may miscount the number of birds in a tract of land or misread the numbers on a heart-rate monitor. Or, a person might obtain correct data but record them in such a way (perhaps with poor handwriting) that a subsequent data analyst makes an error in reading them. We shall assume that such errors have not occurred, but there are other aspects of accuracy that should be considered.

Accuracy of measurement can be expressed in numerical reporting. If we report that the hind leg of a frog is 8 cm long, we are stating the number 8 (a value of a continuous variable) as an estimate of the frog's true leg length. This estimate was made using some sort of a measuring device. Had the device been capable of more accuracy, we might have declared that the leg was 8.3 cm long, or perhaps 8.32 cm long. When recording values of continuous variables, it is important to designate the accuracy with which the measurements have been made. By convention, the value 8 denotes a measurement in the range of 7.50000... to 8.49999..., the value 8.3 designates a range of 8.25000... to 8.34999..., and the value 8.32 implies that the true value lies within the range of 8.31500... to 8.32499.... That is, the reported value is the midpoint of the implied range, and the size of this range is designated by the last decimal place in the measurement. The value of 8 cm implies an ability to
determine length within a range of 1 cm, 8.3 cm implies a range of 0.1 cm, and 8.32 cm implies a range of 0.01 cm. Thus, to record a value of 8.0 implies greater accuracy of measurement than does the recording of a value of 8, for in the first instance the true value is said to lie between 7.95000... and 8.04999... (i.e., within a range of 0.1 cm), whereas 8 implies a value between 7.50000... and 8.49999... (i.e., within a range of 1 cm). To state 8.00 cm implies a measurement that ascertains the frog's limb length to be between 7.99500... and 8.00499... cm (i.e., within a range of 0.01 cm). Those digits in a number that denote the accuracy of the measurement are referred to as significant figures. Thus, 8 has one significant figure, 8.0 and 8.3 each have two significant figures, and 8.00 and 8.32 each have three.

In working with exact values of discrete variables, the preceding considerations do not apply. That is, it is sufficient to state that our frog has four limbs or that its left lung contains thirteen flukes. The use of 4.0 or 13.00 would be inappropriate, for as the numbers involved are exactly 4 and 13, there is no question of accuracy or significant figures. But there are instances where significant figures and implied accuracy come into play with discrete data. An entomologist may report that there are 72,000 moths in a particular forest area. In doing so, it is probably not being claimed that this is the exact number but an estimate of the exact number, perhaps accurate to two significant figures. In such a case, 72,000 would imply a range of accuracy of 1000, so that the true value might lie anywhere from 71,500 to 72,500. If the entomologist wished to convey the fact that this estimate is believed to be accurate to the nearest 100 (i.e., to three significant figures), rather than to the nearest 1000, it would be better to present the data in the form of scientific notation,* as follows: If the number 7.2 × 10⁴ (= 72,000) is written, a range of accuracy of 0.1 × 10⁴ (= 1000) is implied, and the true value is assumed to lie between 71,500 and 72,500. But if 7.20 × 10⁴ were written, a range of accuracy of 0.01 × 10⁴ (= 100) would be implied, and the true value would be assumed to be in the range of 71,950 to 72,050. Thus, the accuracy of large values (and this applies to continuous as well as discrete variables) can be expressed succinctly using scientific notation.

Calculators and computers typically yield results with more significant figures than are justified by the data. However, it is good practice, to avoid rounding error, to retain many significant figures until the last step in a sequence of calculations, and on attaining the result of the final step to round off to the appropriate number of figures. A suggestion for the number of figures to report is given at the end of Section 6.2.
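The implied-range convention just described is easy to compute. Below is a minimal Python sketch (not from the book; the function name and the printed examples are illustrative) that recovers the range implied by the number of decimal places in a recorded value:

```python
from decimal import Decimal

def implied_range(recorded):
    """Implied (low, high) range of a recorded measurement.

    By the convention described above, a reported value is the midpoint of
    an interval whose width is one unit of the last reported digit:
    "8" implies 7.5 to 8.5, "8.3" implies 8.25 to 8.35, and so on.
    """
    value = Decimal(recorded)
    # One unit in the last reported decimal place (e.g., "8.3" -> 0.1), halved
    half_width = Decimal(1).scaleb(value.as_tuple().exponent) / 2
    return value - half_width, value + half_width

for reported in ["8", "8.0", "8.3", "8.32"]:
    low, high = implied_range(reported)
    print(f"{reported} cm implies a true length between {low} and {high} cm")
```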
1.3 FREQUENCY DISTRIBUTIONS
When collecting and summarizing large amounts of data, it is often helpful to record the data in the form of a frequency table. Such a table simply involves a listing of all the observed values of the variable being studied and how many times each value is observed. Consider the tabulation of the frequency of occurrence of sparrow nests in each of several different locations. This is illustrated in Example 1.1, where the observed kinds of nest sites are listed, and for each kind the number of nests observed is recorded. The distribution of the total number of observations among the various categories is termed a frequency distribution. Example 1.1 is a frequency table for nominal data, and these data may also be presented graphically by means of a bar graph (Figure 1.2), where the height of each bar is proportional to the frequency in the class represented.
physicists-can
be traced back to at least the 18605 (Miller,
Section 1.3 EXAMPLE 1.1 Nominal Data
The Location of Sparrow
Frequency Distributions
7
Nests: A Frequency Table of
The variable is nest site, and there are four recorded categories of this variable. The numbers recorded in these categories constitute the frequency distribution. Nest Site
Number of Nests Observed
A. Vines B. Building eaves C. Low tree branches D. Tree and building cavities
56 60 46 49
60
50 .;;!: v:
v Z 40 '~
...
v
.D
E
30
i
20
10
0
A
B
C
D
Nest Site FIGURE 1.2: A bar graph of the sparrow nominal data.
nest data of Example 1.1. An example of a bar graph for
that the eye of the reader is not distracted from the differences in bar heights; this also makes the area of each bar proportional to the frequency it represents. Also, the frequency scale on the vertical axis should begin at zero to avoid the apparent differences among bars. If, for example, a bar graph of the data of Example 1.1 were constructed with the vertical axis representing frequencies of 45 to 60 rather than 0 to 60, the results would appear as in Figure 1.3. Huff (1954) illustrates other techniques that can mislead the readers of graphs. It is good practice to leave space between the bars of a bar graph of nominal data, to emphasize the distinctness among the categories represented. A frequency tabulation of ordinal data might appear as in Example 1.2, which presents the observed numbers of sunfish collected in each of five categories, each category being a degree of skin pigmentation. A bar graph (Figure 1.4) can be prepared for this frequency distribution just as for nominal data.
8
Chapter 1
Data: Types and Presentation 60
r-'-
FIGURE 1.3: A bar graph of the sparrow nest data of Example 1.1, drawn with the vertical axis starting at 45. Compare this with Figure 1.1, where the axis starts at O.
EXAMPLE 1.2 Numbers of Sunfish, Tabulated According to Amount of Black Pigmentation: A Frequency Table of Ordinal Data The variable is amount of pigmentation, which is expressed by numerically ordered classes. The numbers recorded for the five pigmentation classes compose the frequency distribution. ' Pigmentation Class
Amount of Pigmentation
Number of Fish
o
No black pigmentation Faintly speckled Moderately speckled Heavily speckled Solid black pigmentation
13 68 44 21 8
1 2 3 4
70 60
.;;: G:: 50 4-
0
I-
100
u
t:
U
::l
2"
w:
80
v
.:-: ~ '3 E
60
::l
U
40
_/-
20
/
-/
-/
,,--
/-
-
1.00 0.90 0.80
:;0 (')
0.70
§' :;::' (])
0.60
n t:
0.50
§'
:3c
:;::.
0.40 0.30
~ .., (]) s: t:
(1;
::l n
0.20
/'
'
80kg) = 0.1587, therefore P(Xi < 80) = 1.0000 - 0.1587" does not take into account the case of Xi = 80 kg. But, as we are considering the distribution at hand to be a continuous one, the probability of Xi being exactly 80.000 ... kg (or being exactly any other stated value) is practically nil, so these types of probability statements offer no practical difficulties. *Some old literature avoided referring to negative Z's by expressing the quantity, Z + 5, called a prohit. This term was introduced in 1934 by C. I. Bliss (David, 1995).
70
Chapter 6
The Normal Distribution
EXAMPLE 6.1a Calculating Proportions of a Normal Distribution of Bone lengths, Where I.t = 60 mm and (T = 10 mm
y
X, in millimeters
1. What proportion of the population of bone lengths is larger than 66 mm? Z = Xi -
f.L
= 66 mm - 60 mm = 0.60 10 mm
(T
P(Xi
> 66 mm) = P(Z > 0.60) = 0.2743 or 27.43%
2. What is the probability of picking, at random from this population, a bone larger than 66 mm? This is simply another way of stating the quantity calculated in part (1). The answer is 0.2743. 3. If there are 2000 bone lengths in this population, how many of them are greater than 66 mm? (0.2743)(2000) = 549 4. What proportion of the population is smaller than 66 mm? P(Xi
< 66 mm) = 1.0000 - P(Xi > 66 mm) = 1.0000 - 0.2743 = 0.7257
5. What proportion of this population lies between 60 and 66 mm? Of the total population, 0.5000 is larger than 60 mm and 0.2743 is larger than 66 mm. Therefore, 0.5000 - 0.2743 = 0.2257 of the population lies between 60 and 66 mm. That is, P( 60 mm < Xi < 66 mm) = 0.5000 - 0.2743 = 0.2257. 6. What portion of the area under the normal curve lies to the right of 77.5 mm? Z
= 77.5 mm - 60 mm = 1.75 lOmm
P(Xi>
77.5mm)
= P(Z
> 1.75) = 0.0401 or 4.01 %
7. If there are 2000 bone lengths in the population, how many of them are larger than 77.5 mm? (0.0401 )(2000) = 80 8. What is the probability of selecting at random from this population measuring between 66 and 77.5 mm in length? P(66 mm
a bone
< Xi < 77.5 mm) = P(0.60 < Z < 1.75) = 0.2743 - 0.0401 =
0.2342
Section 6.1
Proportions of a Normal Distribution
71
EXAMPLE 6.1b Calculating Proportions of a Normal Distribution of Sucrose Concentrations, Where f.L 65 mg/100 ml and (T 25 mg/100 ml
=
15
25
=
45
65
85
105
115
X, in mg/l 00 ml
1. What proportion
of the population is greater than 85 mg/lOO ml?
Z = (Xi
a
P(Xi
fL)
= 85 mg/100 ml - 65 mg/IOO ml = 0.8 25 mg/100 ml
> 85 mg/100 ml)
2. What proportion
=
P(Z
> 0.8)
=
0.2119 or 21.19%
of the population is less than 45 mg/IOO ml?
Z = 45 mg/100 ml - 65 mg/100 ml = -0.80 25 mg/IOO ml P(Xi
< 45 mg/IOO ml) = P(Z
< -0.80)
= P(Z
> 0.80) = 0.2119
That is, the probability of selecting from this population an observation less than 0.80 standard deviations below the mean is equal to the probability of obtaining an observation greater than 0.80 standard deviations above the mean. 3. What proportion of the population lies between 45 and 85 mgllOO ml? P(45mg/lOOml
< Xi < 85mg/100ml)
= =
< Z < 0.80) 1.0000 - P(Z < -0.80
P(-0.80
=
or Z > 0.80) 1.0000 (0.2119 1.0000 - 0.4238
=
0.5762
=
+ 0.2119)
Using the preceding considerations of the table of normal deviates (Table B.2), we can obtain the following information for measurements in a normal population: The interval of u. ± a will contain 68.27% of the measurements.* The interval of fL ± Za will contain 95.44% of the measurements. The interval of fL ± 2.5(T will contain 98.76% of the measurements. The interval of fL ± 3u will contain 99.73% of the measurements. 50% of the measurements lie within fL ± 0.67u. 95% of the measurements lie within fL ± 1.96u. 97.5% of the measurements lie within fL ± 2.24u. *Thc symbol "J." indicates 1631 (Cajori, 1928: 245).
"plus or minus"
and was first published
by William
Oughtred
in
72
Chapter 6
The Normal Distribution 99% of the measurements 99.5% of the measurements 99.9% of the measurements
6.2
lie within Il- ± 2.58IT. lie within Il- ± 2.81IT. lie within Il- ± 3.29IT.
THE DISTRIBUTION OF MEANS If random samples of size n are drawn from a normal population, the means of these samples will conform to normal distribution. The distribution of means from a nonnormal population will not be normal but will tend to approximate a normal distribution as n increases in size.* Furthermore, the variance of the distribution of means will decrease as n increases; in fact, the variance of the population of all possible means of samples of size n from a population with variance IT2 is IT2 (T~
x
The quantity
IT~
=-
(6.4)
n
is called the variance of the mean. A distribution
of sample statistics
is
called a sampling distribution t; therefore, we are discussing the sampling distribution of means. Since IT~ has square units, its square root, ITx, will have the same units as the original measurements (and, therefore, the same units as the mean, u; and the standard deviation, IT). This value, ITx' is the standard deviation of the mean. The standard deviation of a statistic is referred to as a standard error; thus, ITx is frequently called the standard error of the mean (sometimes abbreviated SEM), or simply the standard error (sometimes abbreviated SEyl::
"x ~ Just as Z = (Xi - Il-)/ IT (Equation distribution of Xi values,
~:2
or
ITx
6.3) is a normal
Z = _X_----'---Il-
=
(6.5) deviate
that refers to the normal (6.6)
ITx
is a normal deviate referring to the normal distribution of means (X values). Thus, we can ask questions such as: What is the probability of obtaining a random sample of nine measurements with a mean larger than 50.0 em from a population having a mean of 47.0 em and a standard deviation of 12.0 em? This and other examples of the use of normal deviates for the sampling distribution of means are presented in Example 6.2. As seen from Equation 6.5, to determine ITx one must know IT2 (or IT), which is a population parameter. Because we very seldom can calculate population parameters, we must rely on estimating them from random samples taken from the population. The best estimate of IT~, the population variance of the mean, is s2
s~= -
x
n
(6.7)
"This result is known as the centra/limit theorem. tThis term was apparently first used by Ronald Aylmer Fisher in 1922 (Miller, 2004a). +This relationship between the standard deviation of the mean and the standard deviation was published by Karl Friedrich Gauss in 1809 (Walker, 1929: 23). The term standard error was introduced in 1897 by G. U. Yule (David, 19(5), though in a different context (Miller, 2004a).
Section 6.2 EXAMPLE 6.2
Proportions
The Distribution
of a Sampling Distribution
of Means
73
of Means
1. A population of one-year-old children's chest circumferences has /.L = 47.0 em and U = 12.0 em, what is the probability of drawing from it a random sample of nine measurements that has a mean larger than 50.0 em?
Ux -Z
= X -
12.0 em - 40
.J§
/.L = 50.0 em
Ux P(X
>
-
. em
-
47.0 em
= 0.75
4.0 em
50.0 em)
= P(Z >
=
0.75)
0.2266
2. What is the probability of drawing a sample of 25 measurements from the preceding population and finding that the mean of this sample is less than 40.0 em? =x -- 12.0 em -- 24. em
J25
Z
P(X
=
40.0 em - 47.0 cm 2.4 cm
< 40.0 cm) = P(Z
< -2.92)
=
=
-2.92
P(Z
> 2.92)
= 0.0018
3. If 500 random samples of size 25 are taken from the preceding how many of them would have means larger than 50.0 ern? _ -
x -
U
12.0 cm - 24
J25
- .
population,
em
Z = 50.0 cm - 47.0 cm = 1.25 2.4 g P(X
> 50.0 em) = P(Z
Therefore, (0.1056) (500) larger than 50.0 cm.
the sample
variance
> 1.25) = 0.1056
= 53 samples would be expected
to have means
of the mean. Thus,
(6.8) is an estimate of Ux and is the sample standard error of the mean. Example 6.3 demonstrates the calculation of sx' The importance of the standard error in hypothesis testing and related procedures will be evident in Chapter 7. At this point, however, it can be noted that the magnitude of Sx is helpful in determining the precision to which the mean and some measures of variability may be reported. Although different practices have been followed by many, we shall employ the following (Eisenhart, 1968). We shall state the standard error to two significant figures (e.g., 2.7 mm in Example 6.3; see Section 1.2 for an explanation
74
Chapter 6
The Normal Distribution
of significant figures). Then the standard deviation and the mean will be reported with the same number of decimal places (e.g., X = 137.6 mm in Example 6.3*). The variance may be reported with twice the number of decimal places as the standard deviation. EXAMPLE 6.3
The Calculation
of the Standard Error of the Mean,
Sx
The Following are Data for Systolic Blood Pressures, in mm of Mercury, 12 Chimpanzees. n = 12 121 125 X = 1651 mm = 137.6 mm 128 12 2 134 SS = 228111 mm/ _ (1651 mm) , 12 136 = 960.9167 mm? 138 139 s2 = 960.9167 mm2 = 87.3561 mm2 141 11 144 s = J87.3561 mm-' = 9.35 mm 145 149 Sx = - s = 9.35 mm = 27. mm or 151 In v'12 = 1651 mm
of
2:X
2: X2 6.3
INTRODUCTION
=
228,111 mm2
Sx
=
\j
/s2
=
n
\j
/87.3561 mm2 = b.2797 12
mm2 = 2.7 mm
TO STATISTICAL HYPOTHESIS TESTING
A major goal of statistical analysis is to draw inferences about a population by examining a sample from that population. A very common example of this is the desire to draw conclusions about one or more population means. We begin by making a concise statement about the population mean, a statement called a null hypothesis (abbreviated Ho)t because it expresses the concept of "no difference." For example, a null hypothesis about a population mean (fL) might assert that fL is not different from zero (i.e., fL is equal to zero); and this would be written as Ho:
fL
=
o.
Or, we could hypothesize that the population mean is not different from (i.e., is equal to) 3.5 em, or not different from 10.5 kg, in which case we would write Ho: fL = 3.5 em or He: fL = 10.5 kg, respectively. *In Example 6.3, s is written with more deeimal plaees than the Eisenhart recommendations indicate because it is an intermediate, rather than a final, result: and rounding off intermediate computations may lead to serious rounding error. Indeed, some authors routinely report extra decimal places, even in final results, with the consideration that readers of the results may use them as intermediates in additional calculations. tThe term null hypothesis was first published by R. A. Fisher in 1935 (David, 1995; Miller, 2004a; Pearson, 1947). J. Neyman and E. S. Pearson were the first to use the symbol" Ho" and the term alternate hypothesis, in 1928 (Pearson, 1947; Miller, 2004a, 2004c). The concept of statistical testing of something akin to a null hypothesis was introduced 300 years ago by John Arbuthnot (1667 ~ 1725), a Scottish~ English physician and mathematician (Stigler, 1986: 225~226).
Section 6.3
Introduction to Statistical Hypothesis Testing
If statistical analysis concludes that it is likely then an alternate hypothesis (abbreviated HA or least tentatively). One states a null hypothesis and statistical test performed, and all possible outcomes hypotheses. So, for the preceding examples,*
He:
/1-
Ho: /1-
that a null hypothesis is false, HI) is assumed to be true (at an alternate hypothesis for each are accounted for by this pair of
= 0,
HA:
/1-
=;t:.
0;
= 3.5 em,
HA:
/1-
=;t:.
3.5 em;
= 10.5 kg, HA:
/1-
=;t:.
10.5 kg.
Ho:
/1-
75
It must be emphasized that statistical hypotheses arc to be stated before data are collected to test them. To propose hypotheses after examination of data can invalidate a statistical test. One may, however, legitimately formulate hypotheses after inspecting data if a new set of data is then collected with which to test the hypotheses. (a) Statistical Testing and Probability. Statistical testing of a null hypothesis about involves calculating X, the mean of a random sample from that population. As noted in Section 2.1, X is the best estimate of /1-; but it is only an estimate, and we can ask, What is the probability of an X at least as far from the hypothesized /1- as is the X in the sample, if Ho is true? Another way of visualizing this is to consider that, instead of obtaining one sample (of size n) from the population, a large number of samples (each sample of size n) could have been taken from that population. We can ask what proportion of those samples would have had means at least as far as our single sample's mean from the /1- specified in the null hypothesis. This question is answered by the considerations of Section 6.2 and is demonstrated in Example 6.4. /1-, the mean of a population,
EXAMPLE 6.4
Hypothesis Testing of
Ho: /.t = 0 and HA: /.t
*" 0
The variable, Xi, is the weight change of horses given an antibiotic for two weeks. The following measurements of Xi are those obtained from 17 horses (where a positive weight change signifies a weight gain and a negative weight change denotes a weight loss): 2.0, 1.1, 4.4, -3.1, -1.3, 3.9, 3.2, -1.6 ,3.5 1.2, 2.5, 2.3, 1.9, 1.8, 2.9, -0.3, and -2.4 kg. For these 17 data, the sample mean (X) is 1.29 kg. Although the population variance (a2) is typically not known, for the demonstration purpose of this example, a2 is said to be 13.4621 kg2. Then the population standard error of the mean would be 13.4621 kg2 17
"The symbol "*" denotes early, if not first, use (though equal sign).
=
JO.7919 kg2
=
0.89 kg
"is not equal to": Ball (1935: 242) credits Leonhard Euler with its it was first written with a vertical, not a diagonal, line through the
76
Chapter 6
The Normal Distribution
and 1.29 kg - 0 0.89 kg
= 1.45.
Using Table B.2, P(X
~ 1.29 kg)
= P(Z ~ 1.45) = 0.0735
and, because the distribution of Z is symmetrical, P(X::;
-1.29 kg)
=
P(Z::;
-1.45)
=
0.0735.
Therefore, P(X
~ 1.29 kg or X ::; -1.29 kg)
= P(Z ~ 1.45 or Z ::; -1.45) = 0.0735 + 0.0735 = 0.1470. As 0.1470 > 0.05, do not reject Hi;
In Example 6.4, it is desired to ask whether treating horses with an experimental antibiotic results in a change in body weight. The data shown (Xi values) are the changes in body weight of 17 horses that received the antibiotic, and the statistical hypotheses to be tested are Ho:!.L = 0 kg and HA:!.L oF 0 kg. (As shown in this example, we can write "0" instead of "0 kg" in these hypotheses, because they are statements about zero weight change, and zero would have the same meaning regardless of whether the horses were weighed in kilograms, milligrams, pounds, ounces, etc.) These 17 data have a mean of X = 1.29 kg and they are considered to represent a random sample from a very large number of data, namely the body-weight changes that would result from performing this experiment with a very large number of horses. This large number of potential Xi's is the statistical population. Although one almost never knows the actual parameters of a sampled population, for this introduction to statistical testing let us suppose that the variance of the population sampled for this example is known to be 0'2 = 13.462] kg2. Thus, for the population of means that could be drawn from this population of measurements, the standard error of the rnean is o-j- = ~O'2jn = ~13.4621 kg2j17 = ~0.7919kg2 = 0.89 kg (by Equation 6.5). We shall further assume that the population of possible means follows a normal distribution, which is generally a reasonable assumption even when the individual data in the population are not normally distributed. This hypothesis test may be conceived as asking the following: If we have a normal population with fL = 0 kg, and (J'y{ = 0.89 kg, what is the probability of obtaining a random sample of 17 data with a mean (X) at least as far from 0 kg as 1.29 kg (i.e., at least 1.29 kg larger than 0 kg or at least 1.29 kg smaller than 0 kg)?
Section 6.2 showed that probabilities for be ascertained through computations of Z hypothesis is tested in Example 6.4, in which (a computed quantity for which a probability is calculated to be 1.45, and Appendix Table
a distribution of possible means may (by Equation 6.6). The preceding null Z may be referred to as our test statistic will be determined). In this example, Z B.2 informs us that the probability of a
Section 6.3
Introduction to Statistical Hypothesis Testing
77
2 ~ 1.45 is 0.0735.* The null hypothesis asks about the deviation of the mean in either direction from 0 and, as the normal distribution is symmetrical, we can also say that P( - 2 :S 1.45) = 0.0735 and, therefore, P( 121 ~ 1.45) = 0.0735 + 0.0735 = 0.1470. This tells us the probability associated with a 121 (absolute value of 2) at least as large as the 121obtained; and this is the probability of a 2 at least as extreme as that obtained, if the null hypothesis is true. It should be noted that this probability,
P( 121 ~ [computed 21, if Ho is true), is not the same as
P(Ho is true, if 121 ~ [computed 21), for these are conditional probabilities, discussed in Section 5.8. In addition to the playing-card example in that section, suppose a null hypothesis was tested 2500 times, with results as in Example 6.5. By Equation 5.23, the probability of rejecting Ho, if Ho is true, is P( rejecting He; if Ho is true) = (number ofrejections of true Ho 's )/(number of true Ho's) = 100/2000 = 0.05. And the probability that Ho is true, if Ho is rejected, is P(Ho true, if Hi, is rejected) = (number of rejections of true Ho's)/(number of rejections of Ho's) = 100/550 = 0.18. These two probabilities (0.05 and 0.18) are decidedly not the same, for they are probabilities based on different conditions. EXAMPLE 6.5
Probability
of Rejecting a True Null Hypothesis
Hypothetical outcomes of testing the same null hypothesis for 2500 random samples of the same size from the same population (where the samples are taken with replacement). If Hi, is true
If Ho is false
Row total
If Ho is rejected If Ho is not rejected
1900
450 50
550 1950
Column total
2000
500
2500
100
Probability that Hi, is rejected if Ho is true Probability that Hi, is true if Ho is rejected
=
=
100/2000 = 0.05. 100/550 = 0.18.
In hypothesis testing, it is correct to say that the calculated probability (for example, using 2) is P(the data, given Ho is true) and it is not correct to say that the calculated probability is
P(Ho is true, given the data). Furthermore, in reality we may not be testing Hi: 11- = 0 kg in order to conclude that the population mean is exactly zero (which it probably is not). Rather, we "Note that "~" and respectively.
":5"
are symbols for "greater than or equal to" and "less than
or
equal to,"
78
Chapter 6
The Normal Distribution
are interested in concluding whether there is a very small difference between the population mean and a kg; and what is meant by very small will be discussed in Section 6.3( d). (b) Statistical Errors in Hypothesis Testing. It is desirable to have an objective criterion for drawing a conclusion about the null hypothesis in a statistical test. Even if Hi, is true, random sampling might yield a sample mean (X) far from the population mean (IL), and a large absolute value of Z would thereby be computed. However, such an occurrence is unlikely, and the larger the IZI, the smaller the probability that the sample came from a population described by Hi; Therefore, we can ask how small a probability (which is the same as asking how large a IZI) will be required to conclude that the null hypothesis is not likely to be true. The probability used as the criterion for rejection of Ho is called the significance level, routinely denoted by Q' (the lowercase Greek letter alpha).* As indicated below, an Q' of 0.05 is commonly employed. The value of the test statistic (in this case, Z) corresponding to Q' is termed the critical value of the test statistic. In Appendix Table B.2 it is seen that P( Z 2:: 1.96) = 0.025; and, inasmuch as the normal distribution is symmetrical, it is also the case that P( Z S -1.96) = 0.025. Therefore, the critical value for testing the above Ho at the 0.05 level (i.e., 5% level) of significance is Z = 1.96 (see Figure 6.4). These values of Z may be denoted as Z0025( I) = 1.96 and ZO.05(2) = 1.96, where the parenthetical number indicates whether one or two tails of the normal distribution are being referred to.
y
FIGURE 6.4: A normal curve showing (with shading) the 5% of the area under the curve that is the rejection region for the null hypothesis of Example 6.4. This rejection region consists of 2.5% of the curve in the right tail (demarcated by ZO.05(2) = 1.96) and 2.5% in the left tail (delineated by -Z005(2)
=
-1.96). The calculated test statistic in this example, Z
=
1.45, does not lie within either tail;
so Ho is not rejected.
So, a calculated Z greater than or equal to 1.96, or less than or equal to -1.96, would be reason to reject Ho, and the shaded portion of Figure 6.4 is known as the "rejection region." The absolute value of the test statistic in Example 6.4 (namely, IZI = 1.45) is not as large as the critical value (i.e., it is neither 2:: 1.9 nor S-1.96), so in this example the null hypothesis is not rejected as a statement about the sampled population. *David (1955) credits R. A. Fisher as the first to refer to "level of significance:' in 1925. Fisher (1925b) also was the first to formally recommend use of the 5% significance level as guidance for drawing a conclusion about the propriety of a null hypothesis (Cowles and Davis, 1982), although he later argued that a fixed significance level should not be used. This use of the Greek "(1''' first appears in a 1936 publication of J. Neyman and E. S. Pearson (Miller, 2004c).
Section 6.3
Introduction
to Statistical Hypothesis Testing
79
It is very important to realize that a true null hypothesis will sometimes be rejected, which of course means that an error has been committed in drawing a conclusion about the sampled population. Moreover, this error can be expected to be committed with a frequency of 0'. The rejection of a null hypothesis when it is in fact true is what is known as a Type 1 error (or "Type 1 error" or "alpha error" or "error of the first kind"). On the other hand, a statistical test will sometimes fail to detect that a Ho is in fact false, and an erroneous conclusion will be reached by not rejecting Ho. The probability of committing this kind of error (that is, not rejecting Hi, when it is false) is represented by {3(the lowercase Greek letter beta). This error is referred to as a Type 11error (or "Type 2 error" or "beta error" or "error of the second kind"). The power of a statistical test is defined as 1 - (3: the probability of correctly rejecting the null hypothesis when it is false.* If Ho is not rejected, some researchers refer to it as having been "accepted," but most consider it better to say "not rejected," for low statistical power often causes failure to reject, and "accept" sounds too definitive. Section 6.3(c) discusses how both the Type I and the Type II errors can be reduced. Table 6.1 summarizes these two types of statistical errors, and Table 6.2 indicates their probabilities. Because, for a given n, a relatively small probability of a Type I error is associated with a relatively large probability of a Type II error, it is appropriate to ask what the acceptable combination of the two might be. By experience, and by convention, an 0' of 0.05 is typically considered to be a "small enough chance" of committing a Type I error while not being so small as to result in "too large a chance"of a Type II error (sometimes considered to be around 20%). But the 0.05 level of significance is not sacrosanct. It is an arbitrary, albeit customary, threshold for concluding that there is significant evidence against a null hypothesis. And caution should be exercised in emphatically rejecting a null hypothesis if p = 0.049 and not rejecting if p = 0.051, for in such borderline cases further examination -and perhaps repetition-of the experiment would be recommended. TABLE 6.1:
The Two Types of Errors in Hypothesis Testing
If Ho is rejected: If Ho is not rejected:
If Ho is true
If Ho is false
Type I error No error
No error Type II error
Although 0.05 has been the most widely used significance level, individual researchers may decide whether it is more important to keep one type of error "The distinction between these two fundamental kinds of statistical errors, and the concept of power, date back to the pioneering work, in England, of Jerzy Neyman (1894-1981; Russian-born, of Polish roots, emigrating as an adult to Poland and then to England, and spending the last half of his life in the United States) and the English statistician Egon S. Pearson (1895-1980) (Lehmann and Reid, 1982; Neyman and Pearson, 1928a; Pearson, 1947). They conceived of the two kinds of errors in 1928 (Lehmann, 1999) and named them, and they formulated the concept of power in 1933 (David, 1995). With some influence by W. S. Gosset ("Student") (Lehmann, 1999), their modifications (e.g., Neyman and Pearson, 1933) of the ideas of the colossal British statistician (1890-1962) R. A. Fisher (1925b) provide the foundations of statistical hypothesis testing. However. from the mid-1930s until his death, Fisher disagreed intensely with the Neyman-Pearson approach, and the hypothesis testing commonly used today is a fusion of the Fisher and the Neyman-Pearson procedures (although this hybridization of philosophies has received criticism-e.g., by Hubbard and Bayarri, 2003). Over the years there has been further controversy regarding hypothesis testing, especially-but not entirely-within the social sciences (e.g .. Harlow, Mulaik, and Steiger. 1997). The most extreme critics conclude that hypothesis tests should never be used, while most others advise that they may be employed but only with care to avoid abuse.
80
Chapter
6
The Normal
Distribution TABLE 6.2: The Long-Terr~
Probabilities
of ~utcomes
in Hy~othe_~_~esting_
11/111 is true If //1) is rejected If 110 is not rejected
-
It'
I -
111111is l;d'L"
(I'
f:J ("Jlo\\cr"
j
r;
------------------_._-_._--_._------
or the other low. In S()l11e instances. we may he willing to test with 45 see, L] would be 45.21 see - (1.895)(O.58sec) = 45.21 see - 1.1Osec = 44.11 and L2 = 00. And the hypothesized J.LO (45 see) lies within the confidence interval, indicating that the null hypothesis is not rejected. (b) Prediction Limits. While confidence limits express the precision with which a population characteristic is estimated, we can also indicate the precision with which future observations from this population can be predicted. After calculating X and 052 from a random sample of n data from a population, we can ask what the mean would be from an additional random sample, of an additional m data, from the same population. The best estimate of the mean of those m additional data would be X, and the precision of that estimate may be expressed by this two-tailed prediction interval (abbreviated PI): -
X ±
t,,(2).v
~s2
-
m
+ s2 -, n
(7.7)
where v = n - 1 (Hahn and Meeker, 1991: 61-62). If the desire is to predict the value of one additional datum from that population (i.e., m = 1), then Equation 7.7 becomes -~
X ± ta(2).v\/
The prediction the confidence demonstrated One-tailed in Hahn and
s-
+ --;;.
(7.8)
interval will be wider than the confidence interval and will approach interval as m becomes very large. The use of Equations 7.7 and 7.8 is in Example 7.6. prediction intervals are not commonly obtained but are presented Meeker (1991: 63), who also consider another kind of interval: the
108
Chapter 7
One-Sample Hypotheses
tolerance interval, which may be calculated to contain at least a specified proportion (e.g., a specified percentile) of the sampled population; and in Patel (1989), who also discusses simultaneous prediction intervals of means from more than one future sample. The procedure is very much like that of Section 7.3a. If the desire is to obtain only a lower prediction limit, L, (while L2 is considered to be 00), then the first portion of Equation 7.7 (or 7.8) would be modified to be X ta(,), v (i.e., the one-tailed t would be used); and if the intent is to express only an upper prediction limit, L2 (while regarding L, to be -00), then we would use X + ta(,). v' As an example, Example 7.6 might have asked what the highest mean body temperature is that would be predicted, with probability a, from an additional sample. This would involve calculating L2 as indicated above, while L, = -00. EXAMPLE 7.6 Prediction Limits for Additional lation Sampled in Example 7.1
Sampling from the Popu-
From Example 7.1, which is a sample of 25 crab body temperatures, n = 25, X = 25.03°C, and s/ = 1.80(oC)2. (a) If we intend to collect 8 additional
crab body temperatures from the same population from which the 25 data in Example 7.1 came, then (by Equation 7.7) we can be 95% confident that the mean of those 8 data will be within this prediction interval: "C
25 .03
/1.80( °C)2 ± to 05(2).24\1
+
8
1.80( oC)2 2
= 25.03°C ± 2.064(O.545°C) = 25.03° C ± 1.12° C. Therefore, the 95% prediction limits for the predicted mean of these additional data are L, = 23.91 °C and L2 = 26.15°C. (b) If we intend to collect 1 additional crab body temperature from the same population from which the 25 data in Example 7.1 came, then (by Equation 7.8) we can be 95% confident that the additional datum will be within this prediction interval: o
/
25.03 C ± [0.05(2),24\1 1.80(cC)2
+
1.80( C)2 2 0
= 25.03°C ± 2.064(1.368°C) = 25.03°C ± 2.82°C. Therefore, the 95% prediction limits for this predicted datum are L, and L2 = 27.85°C. 7.4
=
22.21 "C
REPORTINGVARIABILITY AROUND THE MEAN It is very important to provide the reader of a research paper with information concerning the variability of the data reported. But authors of such papers are often unsure of appropriate ways of doing so, and not infrequently do so improperly.
Section 7.4
Reporting
Variability
around
the Mean
109
If we wish to describe the population that has been sampled, then the sample mean (X) and the standard deviation (s) may be reported. The range might also be reported, but in general it should not be stated without being accompanied by another measure of variability, such as s. Such statistics are frequently presented as in Table 7.1 or 7.2. TABLE 7.1: Tail Lengths (in mm) of Field Mice from Different
Location
n
Bedford, Indiana
18
Rochester, Minnesota
12
Fairfield, Iowa
16
Pratt, Kansas
16
Mount Pleasant, Michigan
13
Localities
X±SD (range in parentheses) 56.22 ± 1.33 (44.8 to 68.9) 59.61 ± 0.82 (43.9 to 69.8) 60.20 ± 0.92 (52.4 to 69.2) 53.93 ± 1.24 (46.1 to 63.6) 55.85 ± 0.90 (46.7 to 64.8)
TABLE 7.2: Evaporative
Water Loss of a Small Mammal at Various Air Temperatures. Statistics Are Mean ± Standard Deviation, with Range in Parentheses
Air Temperature
Sample size Evaporative water loss (mg/g/hr)
16.2
24.8
10 0.611 ± 0.164 (0.49 to 0.88)
13 0.643 ± 0.194 (0.38 to 1.13)
30.7
Sample
e C) 36.8
10 8 0.890 ± 0.212 1.981 ± 0.230 (0.64 to 1.39) (1.50 to 2.36)
40.9 9 3.762 ± 0.641 (3.16 to 5.35)
If it is the author's intention to provide the reader with a statement about the precision of estimation of the population mean, the use of the standard error (sx) is appropriate. A typical presentation is shown in Table 7.3a. This table might instead be set up to show confidence intervals, rather than standard errors, as shown in Table 7.3b. The standard error is always smaller than the standard deviation. But this is not a reason to report the former in preference to the latter. The determination should be made on the basis of whether the desire is to describe variability within the population or precision of estimating the population mean. There are three very important points to note about Tables 7.1, 7.2, 7.3a, and 7.3b. First, n should be stated somewhere in the table, either in the caption or in the body of the table. (Thus, the reader has the needed information to convert from SD to SE or from SE to SD, if so desired.) One should always state n when presenting sample statistics (X, s, sx, range, etc.), and if a tabular presentation is prepared, it is very good practice to include n somewhere in the table, even if it is mentioned elsewhere in the paper. Second, the measure of variability is clearly indicated. Not infrequently, an author will state something such as "the mean is 54.2 ± 2.7 g," with no explanation of what "± 2.7" denotes. This renders the statement worthless to the reader, because "± 2.7" will be assumed by some to indicate ± SD, by others to indicate ± SE, by others to
110
Chapter
7
One-Sample
Hypotheses TABLE 7.3a: Enzyme Activities
Animals.
Data Are
X
in the Muscle of Various ± SE, with n in Parentheses
Enzyme Activity (umole/min/g of tissue) Animal
Isomerase
Mouse Frog Trout Crayfish
0.76 1.53 1.06 4.22
± ± ± ±
0.09 0.08 0.12 0.30
Transketolase (4) (4) (4) (4)
0.39 0.18 0.24 0.26
± ± ± ±
0.04(4) 0.02(4) 0.04 (4) 0.05 (4)
TABLE 7.3b: Enzyme Activities
Animals.
in the Muscle of Various Data Are X ± 95% Confidence Limits
Enzyme Activity of tissue)
(fLmole/min/g
Animal
n
Mouse Frog Trout Crayfish
4 4 4 4
Isomerase 0.76 1.53 1.06 4.22
± ± ± ±
0.28 0.25 0.38 0.98
Transketolase 0.39 ± 0.13 0.18±O.O5 0.24 ± O.ll 0.26 ± 0.15
indicate the 95% (or 99%, or other) confidence interval, and by others to indicate the range.* There is no widely accepted convention; one must state explicitly what quantity is meant by this type of statement. If such statements of '±" values appear in a table, then the explanation is best included somewhere in the table (either in the caption or in the body of the table), even if it is stated elsewhere in the paper. Third, the units of measurement of the variable must be clear. There is little information conveyed by stating that the tail lengths of 24 birds have a mean of 8.42 and a standard error of 0.86 if the reader does not know whether the tail lengths were measured in centimeters, or inches, or some other unit. Whenever data appear in tables, the units of measurement should be stated somewhere in the table. Keep in mind that a table should be self-explanatory; one should not have to refer back and forth between the table and the text to determine what the tabled values represent. Frequently, the types of information given in Tables 7.1,7.2, 7.3a, and 7.3b are presented in graphs, rather than in tables. In such cases, the measurement scale is typically indicated on the vertical axis, and the mean is indicated in the body of the graph by a short horizontal line or some other symbol. The standard deviation, standard error, or a confidence interval for the mean is commonly indicated on such graphs via a vertical line or rectangle. Often the range is also included, and in such instances the SD or SE may be indicated by a vertical rectangle and the range by a vertical line. Some authors will indicate a confidence interval (generally 95 %) in * In older literature the ± symbol referred to yet another measure, known as the "probable error" (which fell into disuse in the early twentieth century). In a normal curve, the probable error (PE) is 0.6745 times the standard error, because X ± PE includes 50% of the distribution. The term probable error was first used in 1815 by German astronomer Friedrich Wilhelm Bessel (1784-1846) (Walker, 1929: 24, 51,186).
Section 7.4
Reporting Variability around the Mean
111
RO
E E
70
.S ..ci OJ,
~ 60
16
12
..J
18
-iO f-
13 16
50
40L------L----~------~----~------J------Bedford. Indiana
Rochester. Fairfield. Minnesota Iowa
Pratt. Mount Pleasant. Kansas Michigan
FIGURE 7.4: Tail lengths of male field mice from different localities, indicating the mean, the mean ± standard deviation (vertical rectangle), and the range (vertical line), with the sample size indicated for each location. The data are from Table 7.1.
5.0
~ OJ
:J
.q·E'"
4.0
>'"-" 0
- Mo. Any rank equal to Mo is ignored in this procedure. The sum of the ranks with a plus sign is called T + and the sum of the ranks with a minus sign is T _, with the test then proceeding as described in Section 9.5. The Wilcoxon test assumes that the sampled population is symmetric (in which case the median and mean are identical and this procedure becomes a hypothesis test about the mean as well as about the median, but the one-sample t test is typically a more powerful test about the mean). Section 9.5 discusses this test further. 7.10
CONFIDENCE LIMITS FOR THE POPULATION MEDIAN
The sample median (Section 3.2) is used as the best estimate of M, the population median. Confidence limits for M may be determined by considering the binomial distribution, as discussed in Section 24.9. *Here M represents
the Greek capital letter mu.
Section 7.11
Hypotheses Concerning the Variance
121
HYPOTHESESCONCERNINGTHE VARIANCE The sampling distribution of means is a symmetrical distribution, approaching the normal distribution as n increases. But the sampling distribution of variances is not symmetrical, and neither the normal nor the t distribution may be employed to test hypotheses about u2 or to set confidence limits around u2. However, theory states that (7.15) (if the sample came from a population with a normal distribution), where X2 represents a statistical distribution" that, like t, varies with the degrees of freedom, u, where v = n - 1. Critical values of X~.v are found in Appendix Table B.1. Consider the pair of two-tailed hypotheses, Ho: u2 = u~ and HA: u2 -t= u6, where u6 may be any hypothesized population variance. Then, simply calculate . Icnt Iy, or, equiva
X2
=
SS 2' uo
(7.16)
and if the calculated X2 is 2: X;/2,v or ::; X~I-a/2).v' then Ho is rejected at the a level of significance. For example, if we wished to test Ho: u2 = 1.0( oC)2 and HA: u2 -t= 1.0( C)2 for the data of Example 7.1, with a = 0.05, we would first calculate X2 = SSj u6' In this example, v = 24 and s2 = 1.80( C)2, so SS = vs2 = 43.20( o C)2. Also, as u2 is hypothesized to be 1.0(oC)2, X2 = SSju~ = 43.20(OC)2j1.0(oC)2 = 43.20. Two critical values are to be obtained from the chi-square table (Appendix Table B.1): X~005/2),24 = X6.025.24 = 39.364 and X~I-(l.()5/2).24 = X6.975.24 = 12.401. As the calculated X2 is more extreme than one of these critical values (i.e., the calculated X2 is > 39.364), Ho is rejected, and we conclude that the sample of data was obtained from a population having a variance different from 1.0( C)2. It is more common to consider one-tailed hypotheses concerning variances. For the hypotheses Ho: u2 ::; u6 and HA: u2 > u6, Ho is rejected if the X2 calculated from Equation 7.16 is 2: X~,v' For Ho: u2 2: u6 and HA: u2 < u6, a calculated X2 that is ::; X~ I-a).v is grounds for rejecting Ho. For the data of Example 7.4, a manufacturer might be interested in whether the variability in the dissolving times of the drug is greater than a certain value-say, 1.5 sec. Thus, Ho: 2 ::; 1.5 sec2 and 2 HA: u > 1.5 see- might be tested, as shown in Example 7.11. 0
0
0
(T
EXAMPLE 7.11 A One-Tailed Test for the Hypotheses Ho: and HA: (T2 > 1.5 sec2, Using the Data of Example 7.4
(T2
s 1.5 sec2
SS = 18.8288 see/ u = 7 s2 = 2.6898 see2 X2 = SS = 18.82X8 sec = 12.553 (T0 1.5 seeX6.05,7
= 14.067
"The Greek letter "chi" (which in lowercase
is X) is pronounced
as the "k y" in "sky."
122
Chapter 7
One-Sample Hypotheses
< 14.067, Ho is not rejected.
Since 12.553
We conclude
0.05
< P < 0.10
that the variance
of dissolving
[P
= 0.084]
times is no more than 1.5 sec/.
As is the case for testing hypotheses about a population mean, fL (Sections 7.1 and 7.2), the aforementioned testing of hypotheses about a population variance, o, depends upon the sample's having come from a population of normally distributed data. However, the F test for variances is not as robust as the t test for means: that is, it is not as resistant to violations of this underlying assumption of normality. The probability of a Type I error will be very different from the specified a if the sampled population is nonnormal, even if it is symmetrical. And, likewise, a will be distorted if there is substantial asymmetry (say, \ Jbl \ > 0.6), even if the distribution is normal (Pearson and Please, 1975). 7.12
CONFIDENCE LIMITS FOR THE POPULATION VARIANCE Confidence intervals may be determined for many parameters other than the population mean, in order to express the precision of estimates of those parameters. By employing the X2 distribution, we can define an interval within which there is a 1 - a chance of including u2 in repeated sampling. Appendix Table B.l tells us the probability of a calculated X2 being greater than that in the Table. If we desire to know the two X2 values that enclose 1 - a of the chi-square curve, we want the portion of the curve between XZ1-a/2).v and X;/2,v (for a 95% confidence interval, this would mean the area between that
X6.975,ll
and X6025.J.lt
2
2
< 1J5
45 sec-, for 95 % confidence, L I would be SS/X60S.7 = 18.8188 sec2/14.067 = 1.34 see/ and L2 = 00. The hypothesized a6 (45 see") lies within the confidence interval, indicating that the null hypothesis would not be rejected. If the desire is to estimate a population's standard deviation (a) instead of the population variance (a2), then simply substitute a for a2 and ao for a6 above and use the square root of LJ and L2 (bearing in mind that = (0).
roo
(b) Prediction Limits. We can also estimate the variance that would be obtained from an additional random sample of m data from the same population. To do so, the following two-tailed 1 - (l' prediction limits may be determined: LI
= -----
S2
(7.21 )
Fa(2).n-I.II1-1 L2 =
s2 Fa(2).m-l.n-1
(7.22)
(Hahn, 1972; Hahn and Meeker, 1991: 64, who also mention one-tailed prediction intervals; Patel, 1989). A prediction interval for s would be obtained by taking the square roots of the prediction limits for s2. The critical values of F, which will be employed many times later in this book, are given in Appendix Table B.4. These will be written in the form Fa.vl.v2' where VI and V2 are termed the "numerator degrees of freedom" and "denominator degrees of freedom," respectively (for a reason that will be apparent in Section 8.5). So, if we wished to make a prediction about the variance (or standard deviation) that would be obtained from an additional random sample of 10 data from the population from which the sample in Example 7.1 came, n = 25, n - 1 = 24, m = 10, and m - 1 = 9; and to compute the 95% two-tailed prediction interval, we would consult Table B.4 and obtain FO'(2).n-l.m-J = FO.OS(2).24,9 = 3.61 and Fa(2).m-l.n-l = FO.OS(2).9.24 = 2.79. Thus, the prediction limits would be LI = 1.80( C)2/3.61 = 0.50(oC)2 and L2 = [1.80('C)2][2.79] = 5.02( oC)2.
124 7.13
Chapter 7
One-Sample Hypotheses
POWER AND SAMPLE SIZE IN TESTS CONCERNING THE VARIANCE
(a) Sample Size Required. We may ask how large a sample would be required to perform the hypothesis tests of Section 7.12 at a specified power. For the hypotheses He: (J2 S (J6 versus HA: (J2 > (J6, the minimum sample size is that for which 2
Xl-{3,v
----
_
2 (Jo
2
(7.23 )
2 '
Xa,v
S
and this sample size, n, may be found by iteration (i.e., by a directed trial and error), as shown in Example 7.12. The ratio on the left side of Equation 7.23 increases in magnitude as n increases. EXAMPLE 7.12 versus HA:
0'2
Estimation of Required Sample Size to Test
>
Ho:
0'2
s
O'~
O'~
How large a sample is needed to reject Hi, : (J2 S 1.50 sec/, using the data of Example 7.11, if we test at the 0.05 level of significance and with a power of 0.90? (Therefore, a = 0.05 and f3 = 0.10.) From Example 7.11, s2 = 2.6898 sec/. As we have specified (J6 = 1.75 sec-, (J61 s2 = 0.558. To begin the iterative process of estimating n, let us guess that a sample size of 30 would be required. Then,
= 19.768 = 0.465.
X6.90.29
42.557
X2
0,05,29
Because 0.465 n = 50 is required:
< 0.558, our estimate of n is too low. So we might guess that 2 XO.90,49
X
2 0.05.49
=
36.818 66.339
=
0555 . _.
Because 0.555 is a little less than 0.558, n = 50 is a little too low and we might guess n = 55, for which X6.90,541 X6.05,54 = 41.183/70.153 = 0.571. Because 0.571 is greater than 0.558, our estimate of n is high, so we could try n = 51, for which X6 0.10). One-tailed testing could be employed if the interest were solely in whether the distribution is skewed to the right (Ho: $I :50 vs. Ho: $I > 0), in which case Ho would be rejected if ,JIiI ? (,J7]J),Y( I).11'Or, a one-tailed test of Ho: $I ? 0 versus Hi; $I < 0 could be used to test specifically whether the distribution is skewed to the left; and Ho would be rejected if ,J7]J :5 -( ,JIiI)a(l),1/" If the sample size, n, does not appear in Table B.22, a conservative approach (i.e., one with lowered power) would be to use the largest tabled n that is less than the n of our sample; for example, if n were 85, we would use critical values for n = 80. Alternatively, a critical value could be estimated, from the table's critical values for n's. immediately above and below n under consideration, using linear or harmonic interpolation (see the introduction to Appendix B), with harmonic interpolation appearing to be a little more accurate. There is also a method (D'Agostino, 1970, 1986; D' Agostino, Belanger, and D' Agostino, 1990) by which to approximate the exact probability of Hi;
o
"*
(b) Testing Kurtosis. Our estimate of a population's kurtosis (f32) is h2, given by Equation 6.17. We can ask whether the population is not mesokurtic by the two-tailed hypotheses Hi; f32 = 3 versus Ho: f31 3. Critical values for this test are presented in Table B.23, and Ho is rejected if b: is either less than the lower-tail critical value for (b2 )a(2).11 or greater than the upper-tail critical value for (b2 )a(2),nFor the data of Example 6.7, b: = 2.25. To test the above Ho at the 5 % level of significance, we find that critical values for n = 70 do not appear in Table B.23. A conservative procedure (i.e., one with lowered power) is to employ the critical values for the
"*
Section 7.16
Hypotheses Concerning Symmetry and Kurtosis
127
tabled critical values for the largest n that is less than our sample's n. In our example, this is n = 50, and Ho is rejected if b: is either less than the lower-tail (b2 )005(2).50 = 2.06 or greater than the upper-tail (b2 )0.05(2).50 = 4.36. In the present example, b2 = 2.25 is neither less than 2.06 nor greater than 4.36, so Ho is not rejected. And, from Table B.23, we see that 0.05 < P < 0.10. Rather than using the nearest lower n in Table B.23, we could engage in linear or harmonic interpolation between tabled critical values (see introduction to Appendix B), with harmonic interpolation apparently a little more accurate. There is also a method (D'Agostino, 1970, 1986; D'Agostino, Belanger, and D'Agostino, 1990) to approximate the exact probability of Hi; One-tailed testing could be employed if the interest is solely in whether the population's distribution is leptokurtic, for which H«: {32 :5 3 versus Ho: {32 > 3 would apply; and Hi, would be rejected if bi 2: the upper-tail (b2 )a( I ).n- Or, if testing specifically whether the distribution is platykurtic, a one-tailed test of Ho: {32 2: 3 versus He: f32 < 3 would be applicable; and Ho would be rejected if b: :5 the lower-tail (b2)a(I).n.
EXAMPLE7.13 Two-Tailed Nonparametric Testing of Symmetry Around the Median, Using the Data of Example 6.7 and the Wilcoxon Test of Section 9.5 The population of data from which this sample came is distributed symmetrically around its median. HA: The population is not distributed symmetrically around its median. n = 70; median = X(7o+ I )/2 = X35.5 = 70.5 in. Ho:
X
d
(in.)
(in.)
63 64 65 66 67 68 69 70 71 72 73 74 75 76
-7.5 -6.5 -5.5 -4.5 -3.5 -2.5 -1.5 -0.5 0.5 1.5 2.5 3.5 4.5 5.5
Idl
Rank of
f
(in.)
lell
Signed rank of Idl
2 2 3 5 4 6 5 8 7 7 10 6 3 2
7.5 6.5 5.5 4.5 3.5 2.5 1.5 0.5 0.5 1.5 2.5 3.5 4.5 5.5
69.5 67.5 64 57.5 48.5 35.5 21.5 8 8 21.5 35.5 48.5 57.5 64
-69.5 -67.5 -64 -57.5 -48.5 -35.5 -21.5 -8 8 21.5 35.5 48.5 57.5 64
(n(Signed -139 -135 -192 -287.5 -194 -213 -107.5 -64 56 160.5 355 291 172.5 128
70 T_ = 1332 T+ = 1163 TO.05(2),70 = 907 (from Appendix Table B.12) As neither T_ nor T + < To 05(2 ),7(), do not reject Hi; [P > 0.50]
rank)
128
Chapter 7
One-Sample Hypotheses
(c) Testing Symmetry around the Median. Symmetry of dispersion around the median instead of the mean may be tested non parametrically by using the Wilcoxon paired-sample test of Section 9.5 (also known as the Wilcoxon signed-rank test). For each datum (Xi) we compute the deviation from the median (d, = Xi - median) and then analyze the d/s as in Section 9.5. For the two-tailed test (considering both L and T + in the Wilcoxon test), the null hypothesis is He: The underlying distribution is symmetrical around (i.e., is not skewed from) the median. For a one-tailed test, T _ is the critical value for Ho: The underlying distribution is not skewed to the right of the median; and T + is the critical value for He: The underlying distribution is not skewed to the left of the median. This test is demonstrated in Example 7.13. EXERCISES 7.1. The following data are the lengths of the menstrual cycle in a random sample of 15 women. Test the hypothesis that the mean length of human menstrual cycle is equal to a lunar month (a lunar month is 29.5 days). The data are 26, 24, 29, 33, 25, 26, 23, 30, 31, 30, 28,27,29,26, and 28 days. 7.2. A species of marine arthropod lives in seawater that contains calcium in a concentration of 32 mmole/kg of water. Thirteen of the animals are collected and the calcium concentrations in their coelomic fluid are found to be: 28, 27, 29, 29, 30, 30,31,30,33,27,30,32, and 31 mmole/kg. Test the appropriate hypothesis to conclude whether members of this species maintain a coelomic calcium concentration less than that of their environment. 7.3. Present the following data in a graph that shows the mean, standard error, 95% confidence interval, range, and number of observations for each month. Table of Caloric Intake (kcal/g of Body Weight) of Squirrels
Month January February March
Number Standard Error of Data Mean
Range
0.458 0.413 0.327
0.289-0.612 0.279-0.598 0.194-0.461
13 12 17
0.026 0.027 0.018
7.4. A sample of size 18 has a mean of 13.55 cm and a variance of 6.4512 cm-. (a) Calculate the 95% confidence interval for the population mean. (b) How large a sample would have to be taken from this population to estimate u. to within 1.00 ern, with 95% confidence? (c) to within 2.00 cm with 95% confidence? (d) to within 2.00 ern with 99% confidence? (e) For the data of Exercise 7.4, calculate the 95% prediction interval for what the mean would
be of an additional sample of 10 data from the same population. 7.5. We want to sample a population of lengths and to perform a test of Ho: fL = fLa versus /-I A: u. -:j= fLO, at the 5% significance level, with a 95% probability of rejecting /-10 when IfL - fLol is at least 2.0 em. The estimate of the population variance, (J2, is 52 = 8.44cm2. (a) What minimum sample size should be used? (b) What minimum sample size would be required if a were 0.01 ? (c) What minimum sample size would be required if a = 0.05 and power = 0.99? (d) If n = 25 and a = 0.05, what is the smallest difference, IfL - fLO I, that can be detected with 95% probability? (e) If n = 25 and a = 0.05, what is the probability of detecting a difference, IfL - fLol, as small as 2.0 ern? 7.6. There are 200 members of a state legislature. The ages of a random sample of SO of them are obtained, and it is found that X = 53.87 yr and s = 9.89 yr. (a) Calculate the 95% confidence interval for the mean age of all members of the legislature. (b) If the above X and 5 had been obtained from a random sample of 100 from this population. what would the 95% confidence interval for the population mean have been? 7.7. For the data of Exercise 7.4: (a) Calculate the 95% confidence interval for the population variance. (b) Calculate the 95% confidence interval for the population standard deviation. (c) Using the 5% level of significance, test /-10: (J2 :s 4.4000 cm2 versus /-I A: (J2 > 4.4000 em". (d) Using the 5% level of significance, test /-10: (J 2 3.00 ern versus HA : (J < 3.00 cm. (e) How large a sample is needed to test /-10: (J2 :s 5.0000 cm2 if it is desired to test
Exercises at the 0.05 level of significance with 75% power? ) For the data of Exercise 7.4, calculate the 95% prediction interval for what the variance and standard deviation would be of an additional sample of 20 data from the same population.
129
7.8. A sample of 100 body weights has .../li1 = 0.375 and b: = 4.20. (a) Test He: .jfJl = 0 and HA: .jfJl # 0, at the 5% significance level. (b) Test Ho: /32 = 3 and HA: /32 # 3, at the 5% significance level.
CHAPTER
8
Two-Sample Hypotheses 8.1 8.2 8.3
TESTING FOR DIFFERENCE BETWEEN TWO MEANS CONFIDENCE LIMITS FOR POPULATION MEANS SAMPLE SIZE AND ESTIMATION OF THE DIFFERENCE BETWEEN TWO POPULATION
8.4
SAMPLE SIZE, DETECTABLE DIFFERENCE, AND POWER IN TESTS
8.5
TESTING FOR DIFFERENCE BETWEEN TWO VARIANCES
8.6 8.7
CONFIDENCE LIMITS FOR POPULATION VARIANCES SAMPLE SIZE AND POWER IN TESTS FOR DIFFERENCE BETWEEN TWO VARIANCES
MEANS
8.8 TESTING FOR DIFFERENCE BETWEEN TWO COEFFICIENTS OF VARIATION 8.9 CONFIDENCE LIMITS FOR THE DIFFERENCE BETWEEN TWO COEFFICIENTS OF VARIATION 8.10 NONPARAMETRIC STATISTICAL METHODS 8.11 TWO-SAMPLE RANK TESTING 8.12 TESTING FOR DIFFERENCE BETWEEN TWO MEDIANS 8.13 TWO-SAMPLE TESTING OF NOMINAL-SCALE DATA 8.14 TESTING FOR DIFFERENCE BETWEEN TWO DIVERSITY INDICES 8.15
CODING DATA
Among the most commonly employed hiostatistical procedures is the comparison of two samples to infer whether differences exist between the two populations sampled. This chapter will consider hypotheses comparing two population means, medians, variances (or standard deviations), coefficients of variation, and indices of diversity. In doing so, we introduce another very important sampling distrihution, the F distrihution-named for its discoverer, R. A. Fisher-and will demonstrate further use of Student's t distribution. The objective of many two-sample hypotheses is to make inferences ahout population parameters by examining sample statistics. Other hypothesis-testing procedures, however, draw inferences about populations without referring to parameters. Such procedures are called nonparametric methods, and several will be discussed in this and following chapters. 8.1
TESTING FOR DIFFERENCE BETWEEN TWO MEANS A very common situation for statistical testing is where a researcher desires to infer whether two population means are the same. This can be done by analyzing the difference between the means of samples taken at random from those populations. Example 8.1 presents the results of an experiment in which adult male rabbits were divided at random into two groups, one group of six and one group of seven." The members of the first group were given one kind of drug (called "B"), and the "Sir Ronald Aylmer Fisher (IX90-llJ02) is credited with the first explicit recommendation of the important concept or assigning subjects a/ random to groups for different experimental treatments (Bartlett. 1')65: Fisher. 19250: Rubin, 1990).
130
Section 8.1
Testing for Difference between Two Means
131
members of the second group were given another kind of drug (called "G "). Blood is to be taken from each rabbit and the time it takes the blood to clot is to be recorded. EXAMPLE8.1 A Two-Sample t Test for the Two-Tailed Hypotheses, Ho: J-t2 and HA: J-t1 J-t2 (Which Could Also Be Stated as Ho: J-t1 - J-t2 0 and HA: J-t1 - J-t2 0). The Data Are Blood-Clotting Times (in Minutes) of Male Adult Rabbits Given One of Two Different Drugs
J-t1
'* '*
=
Ho: HA:
J.LI J.LI
=
=
J.L2
*- J.L2 Given drug B
Given drug G
8.8 8.4 7.9 8.7 9.1 9.6
9.9 9.0 11.1 9.6 8.7 10.4 9.5
n: = 7
6 VI = 5 XI = 8.75 min
fll
=
SSI
= l.6950 min2 l.6950 5
= 6 = 9.74 min SS2 = 4.0171 min2
V2
X2
+ 4.0171 = 5.7121 = 0.5193 min2 + 6 11 0.5193 --7
t
JO.0866
+ 0.0742
= XI - X2 = 8.75 - 9.74 = -0.99 = -2.475 0.40
sx) -X2 to.05(2),lJ
= tn.o5(2),11
0.40
2.201
=
Therefore, reject Hn. 0.02 < P(
It I
2
2.475) < 0.05
[P
= 0.031]
We conclude that mean blood-clotting time is not the same for subjects receiving drug B as it is for subjects receiving drug G. We can ask whether the mean of the population of blood-clotting times of all adult male rabbits who might have been administered drug B (Iefs call that mean J.LI) is the same as the population mean for blood-clotting times of al\ adult male rabbits who might have been given drug G (call it J.L2). This would involve the two-tailed hypotheses Ho: J.LI - J.L2 = 0 and HA: J.LI - J.L2 *- 0; and these hypotheses are commonly expressed in their equivalent forms: He; J.LI = J.L2 and HA: J.LI *- J.L2. The data from this experiment are presented in Example 8.1.
132
Chapter 8
Two-Sample Hypotheses In this example, a total of 13 members of a biological population (adult male rabbits) were divided at random into two experimental groups, each group to receive treatment with one of the drugs. Another kind of testing situation with two independent samples is where the two groups are predetermined. For example, instead of desiring to test the effect of two drugs on blood-clotting time, a researcher might want to compare the mean blood-clotting time of adult male rabbits to that of adult female rabbits, in which case one of the two samples would be composed of randomly chosen males and the other sample would comprise randomly selected females. In that situation, the researcher would not specify which rabbits will be designated as male and which as female; the sex of each animal (and, therefore, the experimental group to which each is assigned) is determined before the experiment is begun. Similarly, it might have been asked whether the mean blood-clotting time is the same in two strains (or two ages, or two colors) of rabbits. Thus, in Example 8.1 there is random allocation of animals to the two groups to be compared, while in the other examples in this paragraph, there is random sampling of animals within each of two groups that are already established. The statistical hypotheses and the statistical testing procedure are the same in both circumstances. If the two samples came from two normally distributed populations, and if the two populations have equal variances, then a t value to test such hypotheses may be calculated in a manner analogous to its computation for the one-sample ( test introduced in Section 7.1. The t for testing the preceding hypotheses concerning the difference between two population means is t
=
XI
-
X2
(8.1 )
SXI-X2
The quantity X I X 2 is the difference between the two sample means; and SXI -X2 is the standard error of the difference between the sample means (explained further below), which is a measure of the variability of the data within the two samples. Therefore, Equation 8.1 compares the differences between two means to the differences among all the data (a concept to be enlarged upon when comparing more than two means-in Chapter 10 and beyond). The quantity Sx I -x 2 ' along with s~X'1- -x'2 the variance of the difference between the means, needs to be considered further. Both s~x v and Sx -x are statistics that can 1-/\2 1 2 be calculated from the sample data and are estimates of the population parameters, O"~x -x2 and O"xI -x 2 ' respectively. It can be shown mathematically that the variance 1of the difference between two independent variables is equal to the sum of the variances of the two variables, so that O"~x -x2 = O"~ + O"~t . Independence means 1-'I -2 that there is no association correlation between the data in the two populations. * As O"~ = 0"2/ n, we can write 2
2
0"1
0"-
XI-X2
= -
nl
2
+ -0"2
Because the two-sample t test requires that we assume O"T 2 0"-
0"2 -
XI-X2
"If there is a unique relationship between another sample, then the data are considered instead of the methods of the present chapter.
= nl
(8.2)
n:
0"2
+ -
= O"~,
we can write (8.3)
n2
each datum in one sample and a specific datum in paired and the considerations of Chapter 9 apply
Section 8.1
Testing for Difference between Two Means
Thus, to calculate the estimate of
sT
(7~I-X2'
133
we must have an estimate of (72.Since both
s~
and are assumed to estimate (72, we compute the pooled variance, 5~, which is then used as the best estimate of (72: S2 P
and
= SSI + SS2 VI + v2 2
s~
52
Sp
-
XI-X2
I
and Equation 8.1 becomes
(8.5)
n2
-:
sX -X2
P
+
nl
Thus,*
(8.4)
~.2 ~
+
(8.6)
n2
{=
which for equal sample sizes (i.e., to as n),
(8.7a)
nl
so each sample size may be referred
= nz-
{=
(8.7b)
Example 8.1 summarizes the procedure for testing the hypotheses under consideration. The critical value to be obtained from Appendix Table B.3 is (a(2),(v] +V2)' the two-tailed { value for the a significance level, with VI + V2 degrees of freedom. We shall also write this as {a(2),v, defining the pooled degrees of freedom to be V
=
+
VI
or, equivalently,
V2
IJ
=
nl
+
n2
-
2.
(8.8)
In the two-tailed test, Ho will be rejected if either { :::::ta(2),v or t s; -ta(2),vAnother way of stating this is that Ho will be rejected if It I ::::: ta(2),v' This statistical test asks what the probability is of obtaining two independent samples with means (XI and X2) at least this different by random sampling from populations whose means (ILl and IL2) are equal. And, if that probability is a or less, then He: ILl = IL2 is rejected and it is declared that there is good evidence that the two population means are different." Ho: /-LI= /-L2may be written Ho: ILl - /-L2= 0 and HA: /-LI=F IL2 as H1: ILl - IL2 =F 0; the generalized two-tailed hypotheses are Ho: ILl - IL2 = /-LO and HA: ILl - IL2 =F /-LO, tested as t = IXI - X21 - /-LO , (8.9) SX]-X2
where /-LOmay be any hypothesized difference between population means. *The standard
JNs~j(nln2)'
error
of the difference
where N =
/11
+
between
means
may also be calculated
as s x] -X2
112·
"Instead of testing this hypotheses, a hypothesis of "correlation" tested, which would ask whether there is a significant linear relationship X and the group from which it came. This is not commonly done.
(Section between
19.11b) could be the magnitude of
134
Chapter 8
Two-Sample Hypotheses
By the procedure of Section 8.9, one can test whether the measurements population are a specified amount as large as those in a second population.
in one
(a) One-Tailed Hypotheses about the Difference between Means. One-tailed hypotheses can be tested in situations where the investigator is interested in detecting a difference in only one direction. For example, a gardener may use a particular fertilizer for a particular kind of plant, and a new fertilizer is advertised as being an improvement. Let us say that plant height at maturity is an important characteristic of this kind of plant, with taller plants being preferable. An experiment was run, raising ten plants on the present fertilizer and eight on the new one, with the resultant eighteen plant heights shown in Example 8.2. If the new fertilizer produces plants that are shorter than, or the same height as, plants grown with the present fertilizer, then we shall decide that the advertising claims are unfounded; therefore, the statements of ILl > IL2 and ILl = IL2 belong in the same hypothesis, namely the null hypothesis, Ho, If, however, mean plant height is indeed greater with the newer fertilizer, then it shall be declared to be distinctly better, with the alternate hypothesis (H A: ILl < J.L2) concluded to be the true statement. The t statistic is calculated by Equation 8.1, just as for the two-tailed test. But this calculated t is then compared with the critical value ta( I),v, rather than with fa(2)Y· A Two-Sample t Test for the One-Tailed Hypotheses, Ho:
EXAMPLE8.2
P-1 ~ P-2 and HA: P-1 < P-2 (Which Could Also Be Stated as Ho: P-1 - P-2 ~ 0 and HA: P-1 - P-2 < 0). The Data Are Heights of Plants, Each Grown with
One of Two Different Fertilizers He: HA:
J.LI 2: J.L2 J.LI
J-L2 may be written as H«: J-Ll - J-L2 ::s: 0 and HA: J-Ll - J-L2 > 0, respectively. The generalized hypotheses for this type of one-tailed test are He: J-Ll - J-L2 ::s: J-LO and HA: J-Ll - f.L2 > J-LO, for which the t is t
= X
I
X2
-
-
J-LO
(8.10)
SX1-X2
and J-LO may be any specified value of J-LI - J-L2. Lastly, He: J-LI ~ J-L2 and HA: J-LI < J-L2 may be written as Ho: J-Ll - J-L2 ~ 0 and HA: J-LI - J-L2 < 0, and the generalized one-tailed hypotheses of this type are Ho: J-Ll - J-L2 ~ J-LO and HA: J-Ll - J-L2 < J-LO, with the appropriate t statistic being that of Equation 8.10. For example, the gardener collecting the data of Example 8.2 may have decided, because the newer fertilizer is more expensive than the other, that it should be used only if the plants grown with it averaged at least 5.0 cm taller than plants grown with the present fertilizer. Then, J-LO = J-LI - J-L2 = -5.0 em and, by Equation 8.10, we would calculate t = (51.91 - 56.55 + 5.0)/1.55 = 0.36/1.55 = 0.232, which is not ~ the critical value shown in Example 8.2; so Ho: J-LI - J-L2 ~ -5.0 em is not rejected. The following summary of procedures applies to these general hypotheses: if It I ~
For HA:
J-LI
J-L2 -=F J-LO,
For HA:
J-LI
J-L2
J-LO,
if
t ~
then reject Hi;
ta(2),v'
tex( I ),1I'
tex( I ),1I'
then reject Ho.
then reject Ho·
*For this one-tailed hypothesis test, probabilities of t up to 0.25 are indicated in Appendix Table B.3. If t = 0, then P = 0.50; so if -/O.2S( I).v < I < 0, then 0.25 < P < 0.50; and if t > 0 then P > 0.50. tFor this one-tailed hypothesis test, I = 0 indicates P = 0.50; therefore, if 0 < t < 10.25( I).lI' then 0.25 < P < 0.50; and if t < 0, then P > 0.50.
136
Chapter 8
Two-Sample Hypotheses
(b) Violations of the Two-Sample t-test Assumptions. The validity of two-sample t testing depends upon two basic assumptions: that the two samples came at random from normal populations and that the two populations had the same variance. Populations of biological data will not have distributions that are exactly normal or variances that are exactly the same. Therefore, it is fortunate that numerous studies, over 70 years, have shown that this t test is robust enough to withstand considerable nonnormality and some inequality of variances. This is especially so if the two sample sizes are equal or nearly equal, particularly when two-tailed hypotheses are tested (e.g., Boneau, 1960; Box, 1953; Cochran, 1947; Havlicek and Peterson, 1974; Posten, Yen, and Owen, 1982; Srivastava, 1958; Stonehouse and Forrester, 1998; Tan, 1982; Welch, 1938) but also in one-tailed testing (Posten, 1992). In general, the larger and the more equal in size the samples are, the more robust the test will be; and sample sizes of at least 30 provide considerable resistance effects of violating the r-test assumptions when testing at a = 5% (i.e., the 0.05 level of signficance), regardless of the disparity between and u~ (Donaldson, 1968; Ramsey, 1980; Stonehouse and Forrester, 1998); larger sample sizes are needed for smaller a's, smaller n 's will suffice for larger significance levels, and larger samples are required for larger differences between Uj and U2. Hsu (1938) reported remarkable robustness, even in the presence of very unequal variances and very small samples, if n j = n: + 1 and > u~. So, if it is believed (by inspecting and s~) that the population variances and u~) are dissimilar, one might plan experiments that have samples that are unequal in size by 1, where the larger sample comes from the population with the larger variance. But the procedure of Section 8.1c, below, has received a far greater amount of study and is much more commonly employed. The two-sample [ test is very robust to non normality if the population variances are the same (Kohr and Games, 1974; Posten, 1992; Posten, Yeh, and Owen, 1982; Ramsey, 1980; Stonehouse and Forrester, 1998; Tomarkin and Serlin, 1986). If the two populations have the same variance and the same shape, the test works well even if that shape is extremely nonnormal (Stonehouse and Forrester, 1998; Tan, 1982). Havlicek and Peterson (1974) specifically discuss the effect of skewness and leptokurtosis. If the population variances are unequal but the sample sizes are the same, then the probability of a Type I error will tend to be greater than the stated a (Havlicek and Peterson, 1974; Ramsey, 1980), and the test is said to be liberal. As seen in Table 8.1a, this departure from a will be less for smaller differences between and u~ and for larger sample sizes. (The situation with the most heterogeneous variances is where is zero (0) or infinity ((X).) If the two variances are not equal and the two sample sizes are not equal, then the probability of a Type I error will differ from the stated a. 
If the larger u2 is associated with the larger sample, this probability will be less than the stated a (and the test is called conservative) and this probability will be greater than the stated a (and the test is called liberal) if the smaller sample came from the population with the larger variance (Havlicek and Peterson, 1974; Ramsey, 1980; Stonehouse and Forrester, 1998; Zimmerman, 1987).* The greater the difference between variances, the greater will be the disparity between the probability of a Type I error and the specified a, larger differences will also result in greater departure from a. Table 8.1b
uf
uf
sf
(uf
uf
uVu~
"The reason for this can be seen from Equations larger n., then the numerator associated produces
of s~ (which is VjST
+
8.4-8.7a: V2S~)
If the larger
is greater
with the smaller n. This makes s~ larger, which translates a smaller t, resulting
in a probability
S{
is coupled
with the
than if the larger variance into a larger
S~l
of a Type I error lower than the stipulated
is
-X2' which a.
Section 8.1
Testing for Difference
between
Two Means
137
Maximum Probabilities of Type I Error when Applying the Two-Tailed (or, t Test to Two Samples of Various Equal Sizes (n, = n2 = n), Taken from Normal Populations Having Various Variance Ratios, (TtI(T~
TABLE 8.1a:
One-Tailed)
a2I/a2 2
n:
3
5
10 For
3.33 or 0.300 5.00 or 0.200 10.00 or 0.100 00 or 0 00 or 0
0.059 0.064
20
30
00
= 0_05
ll'
0.109
0.056 0.061 0.068 0.082
0.054 0.056 0.059 0.065
0.052 0.054 0.056 0.060
0.052 0.053 0.055 0.057
0.051 0.052 0.053 0.055
0.050 0.050 0.050 0.050
(0.083)
(0.068)
(0.058)
(0.055)
(0.054)
(0.053)
(0.050)
For 3.33 or 0.300 5.00 or 0.200 10.00 or 0.100 00 or 0 00 or 0
15 [16]
ll'
= 0.01
0.013 0.015 0.020 0.(l44
0.013 0.015 0.019 0.028
0.012 0.013 0.015 0.018
[0.011] [0.012] [(U1l3] [0.015]
0.011 0.011 0.(112 0.014
0.011 0.011 0.012 0.013
0.010 0.010 0.010 0.010
(0.032)
(0.022)
(0.015)
(0.014)
(0.013)
(0.012)
(0.010)
These probabilities are gleaned from the extensive analysis of Ramsey (1980), and from Table I of Posten, Yeh, and Owen (1982).
shows this for various sample sizes. For example, Table 8.1a indicates that if 20 data are distributed as n 1 = nz = 10 and the two-tailed t test is performed at the 0.05 significance level, the probability of a Type 1 error approaches 0.065 for greatly divergent population variances. But in Table 8.1b we see that if a = 0.05 is used and 20 data are distributed as = 9 and rn = 11, then the probability of a Type I error can be as small as 0.042 (if the sample of 11 came from the population with the larger variance) or as large as 0.096 (if the sample of 9 came from the population with the smaller variance). Section 6.3b explained that a decrease in the probability of the Type I error (a) is associated with an increase in the probability of a Type II error (f3); and, because power is 1 - {3, an increase in {3 means a decrease in the power of the test (1 - (3). Therefore, for situations described above as conservative-that is, P(Type 1 error) 1 and 17, > 172, then this error is near the a specified for the significance test; when r > 1 and 17, < 172, then the error diverges from that a to an extent reflecting the magnitude of r and the difference betwen 17 I and 172. And, if r < 1, then the error is close to the stated a if 171 < 172, and it departs from that a if 17, > 172 (differing to a greater extent as the difference between the sample sizes is larger and the size of r is greater). But, larger sample sizes result in less departure from the a used in the hypothesis test. The effect of heterogeneous variances on the t test can be profound. For example, Best and Rayner (1987) estimated that a t test with 17, = 5 and 172 = 15, and uJ/ U2 = 4, has a probabilty of a Type I error using t of about 0.16; and, for those sample sizes when u,1 U2 = 0.25, P(Type I error) is about 0.01; but the probability of that error in those cases is near 0.05 if t' is employed. When the two variances are unequal, the Brown-Forsythe test mentioned in Section 10.1g could also be employed and would be expected to perform similarly to the Behrens-Fisher test, though generally not as well. If the Behrens-Fisher test concludes difference between the means, a confidence interval for that difference may be obtained in a manner analogous to that in Section 8.2: The procedure is to substitute s '-x -x for Sy -x and to use v ' instead of u in 12 1 2 Equation 8.14. Because the t test is adversely affected by heterogeneity of variances, some authors have recommended a two-step testing process: (1) The two sample variances are compared, and (2) only if the two population variances are concluded to be similar should the t test be employed. The similarity of variances may be tested by the procedures of Section 8.5. However, considering that the Behrens-Fisher t' test is so robust to variance inequality (and that the most common variance-comparison test performs very poorly when the distributions are nonnormal or asymmetrical), the routine test of variances is not recommended as a precursor to the testing of means by either t or t (even though some statistical software packages perform such a test). Gans (1991) and Markowski and Markowski (1990) enlarge upon this conclusion; Moser and Stevens (1992) explain that there is no circumstance when the testing of means using either t or t' is improved by preliminary testing of variances; and Sawilowski (2002) and Wehrhahn and Ogawa (1978) state that the t test's probability of a Type 1 error may differ greatly from the stated a if such two-step testing is employed.
uf
sf
I
(d) Which Two-Sample Test to Use. It is very important to inform the reader of a research report specifically what statistical procedures were used in the presentation and analysis of data. It is also generally advisable to report the size (17), the mean (X), and the variability (variance, standard deviation, or standard error) of each group of data; and confidence limits for each mean and for the difference between the means (Section 8.2) may be expressed if the mean came from a normally distributed population. Visualization of the relative magnitudes of means and measures of variability may be aided by tables or graphs such as described in Section 7.4.
142
Chapter 8
Two-Sample Hypotheses
Major choices of statistical methods for comparing two samples are as follows: • If the two sampled populations are normally distributed and have identical variances (or if they are only slightly to moderately nonnormal and have similar variances): The t test for difference between means is appropriate and preferable. (However, as samples nearly always come from distributions that are not exactly normal with exactly the same variances, conclusions to reject or not reject a null hypothesis should not be considered definitive when the probability associated with t is very near the specified cr. For example, if testing at the 5% level of significance, it should not be emphatically declared that Hi, is false if the probability of the calculated tis 0.048. The conclusion should be expressed with caution and, if feasible, the experiment should be repeated-perhaps with more data.) • If the two sampled populations are distributed normally (or are only slightly to moderately nonnormal), but they have very dissimilar variances: The Behrens-Fisher test of Section 8.1c is appropriate and preferable to compare the two means. • If the two sampled populations are very different from normally distributed, but they have similar distribution shapes and variances: The Mann-Whitney test of Section 8.11 is appropriate and preferable. • If the two sampled populations have distributions greatly different from normal and do not have similar distributions and variances: (1) Consider the procedures of Chapter 13 for data that do not exhibit normality and variance equality but that can be transformed into data that are normal and homogeneous of variance; or (2) refer to the procedure mentioned at the end of Section 8.11, which modifies the Mann-Whitney test for Behrens-Fisher situations; or (3) report the mean and variability for each of the samples, perhaps also presenting them in tables and/or graphs (as in Section 7.4), and do not perform hypothesis testing.* (e) Replication of Data. It is important to use data that are true replicates of the variable to be tested (and recall that a replicate is the smallest experimental unit to which a treatment is independently applied). Tn Example 8.1 the purpose of the experiment was to ask whether there is a difference in blood-clotting times between persons administered two different drugs. This necessitates obtaining a blood measurement on each of n1 individuals in the first sample (receiving one of the drugs) and n: individuals in the second sample (receiving the other drug). It would not be valid to use n1 measurements from a single person and ti: measurements from another person, and to do so would be engaging in what Hurlbert (1984), and subsequently many others, discuss as pseudoreplication. 8.2
CONFIDENCE LIMITS FOR POPULATION MEANS
In Section 7.3, we defined the confidence interval for a population mean as X ± ta(2).vsy, where sy is the best estimate of ify and is calculated as ~s2/n. For the * Another procedure, seldom encountered but highly recommended by Yuen (1974). is to perform the Behrens-Fisher test on trimmed means (also known as "truncated means"). A trimmed mean is a sample mean calculated after deleting data from the extremes of the tails of the data distribution. There is no stipulated number of data to be deleted. but it is generally the same number for each tail. The degrees of freedom are those pertaining to the number of data remaining after the deletion.
Section 8.2
Confidence Limits for Population Means
143
(]'1
two-sample situation where we assume that = (]'~, the confidence interval for either ILl or IL2 is calculated using s~ (rather than either s1 or s~) as the best estimate of (]'2,and we use the two-tailed tabled t value with v = VI + V2 degrees of freedom. Thus, for ILi (where iis either 1 or 2, referring to either of the two samples), the 1 - a confidence interval is (8.13) For the data of Example 8.1,
~S~/112
=
JO.5193 min2/7
=
0.27 min. Thus, the 95%
confidence interval for IL2 would be 9.74 min ± (2.201 )(0.27 min) = 9.74 min ± 0.59 min, so that LI (the lower confidence limit) = 9.15 min and L2 (the upper confidence limit) = 10.33 min, and we can declare with 95% confidence that, for the population of blood-clotting times after treatment with drug G, the population mean, IL2, is no smaller than 9.15 min and no larger than 10.33 min. This may be written as P(9.15 min :::; IL2 :::; 10.33 min) = 0.95. The confidence interval for the population mean of data after treatment with drug B would be 8.75 min ±(2.201)JO.5193 min2/6 = 8.75 min ± O.64min; so LI = 8.11 min and L2 = 9.39 min. Further interpretation of the meaning of the confidence interval for each of these two population means is in Section 7.3. Confidence limits for the difference between the two population means can also be computed. The 1 - a confidence interval for ILl - IL2 is (8.14) Thus, for Example 8.1, the 95% confidence interval for ILl - IL2 is (8.75 min 9.74 min) ± (2.201 )(0.40 min) = -0.99 min ± 0.88 min. Thus, LI = -1.87 min and L2 = -0.11 min, and we can write P( -1.87 min :::; ILl - IL2 :::; -0.11 min) = 0.95. If He: ILl = IL2 is not rejected, then both samples are concluded to have come from populations having identical means, the common mean being denoted as IL. The best estimate of IL is the "pooled" or "weighted" mean: X
-
P -
I1IXI 111
+ +
112X2
(8.15)
,
112
which is the mean of the combined data from the two samples. Then the 1 - a confidence interval for IL is _ Xp ± ta(2).v ~
s~ 111
+
.
(8.16)
112
If Ho is not rejected, it is the confidence interval of Equation 8.16, rather than those of Equations 8.13 and 8.14, that one would calculate. As is the case with the t test, these confidence intervals are computed with the assumption that the two samples came from normal populations with the same variance. If the sampled distributions are far from meeting these conditions, then confidence intervals should be eschewed or, if they are reported, they should be presented with the caveat that they are only approximate. If a separate 1 - a confidence interval is calculated for ILl and for IL2, it may be tempting to draw a conclusion about He: ILl = IL2 by observing whether the two confidence intervals overlap. Overlap is the situation where L, for the larger mean is less than L2 for the smaller mean, and such conclusions are made visually
144
Chapter 8
Two-Sample Hypotheses
enticing if the confidence intervals are presented in a graph (such as in Figure 7.5) or in a table (e.g., as in Table 7.3b). However, this is not a valid procedure for hypothesis testing (e.g., Barr, 1969; Browne, 1979; Ryan and Leadbetter, 2002; Schenker and Gentleman, 2001). If there is no overlap and the population means are consequently concluded to be different, this inference will be associated with a Type I error probability less than the specified 0' (very much less if the two standard errors are similar); and if there is overlap, resulting in failure to reject Ho, this conclusion will be associated with a probability of a Type II error greater than (i.e., a power less than) if the appropriate testing method were used. As an illustration if this, the data of Example 8.1 yield L, = 8.11 min and L2 = 9.39 min for the mean of group Band L, = 9.15 min and L2 = 10.33 min for the mean of group G; and the two confidence intervals overlap even though the null hypothesis is rejected. (a) One-Tailed Confidence Limits for Difference between Means. If the two-sample t test is performed to assess one-tailed hypotheses (Section 7.2), then it is appropriate to determine a one-tailed confidence interval (as was done in Section 7.3a following a one-tailed one-sample t test). Using one-tailed critical values of t, the following confidence limits apply: For Ho:
ILl
::; IL2
versus
L, =X For He:
ILl
>
or
IL2,
- (to'(I).l')(Sxl-xJ
:2: IL2
L, =
ILl
versus
-00
ILl
ILO:
and L2 =00. or
IL2,
ILl
-
IL2 :2: ILO
+ (to'( I ).l') (sx1 -xJ
and L2 = X
In Example 8.2, one-tailed (1.746)(1.55) = 2.71 cm.
ILl
confidence
ILl
IL2
< ILo:
.
limits would be L,
-00
and L2
(b) Confidence Limits for Means when Variances Are Unequal. If the population variances arc judged to be different enough to warrant using the Behrens-Fisher test (Section 8.1c) for H«: ILl = IL2, then the computation of confidence limits is altered from that shown above. If this test rejects the null hypothesis, a confidence interval for each of the two population means (ILl and IL2) and a CI for the difference between the means (ILl - IL2) should be determined. The 1 - 0' confidence interval for ILi is obtained as -
Xi ± to'(2)yl
)
S2.
-2
-
+, n, which is Xi ± to'(2).l' \jIsx'
(8.17)
1
I
rather than by Equation 8.13, where v ' is from Equation 8.12. The confidence interval for the difference between the two population means is computed to be (8.18) rather than by Equation 8.14, where s-x' -x is from Equation 8.11a or 8.11b. 12 One-tailed confidence intervals are obtained as shown in Section 8.2a above, but using s-x'1 - -x2 instead of sx 1 -x 2 . A confidence interval (two-tailed or one-tailed) for ILl - IL2 includes zero when the associated Ho is not rejected.
Section 8.2
Confidence Limits for Population Means
145
In Example 8.2a, Hi, is rejected, so it is appropriate to determine a 95% CI for ILl, which is 37.4 ± 2.274.J137 = 37.4 days ± 2.7 days; for IL2, which is 46.0 ± 2.274 )10.93 = 46.0 days ± 7.5 days; and for ILl - IL2, which is 37.4 - 46.0 ± (2.274 )(3.51) = -8.6 days ± 7.98 days. If He: ILl = IL2 is not rejected, then a confidence interval for the common mean, Xp (Equation 8.15), may be obtained by using the variance of the combined data from the two samples (call it 5~) and the degrees of freedom for those combined data (Vt = 111 + 112 - 1): (8.19)
(c) Prediction Limits. As introduced in Section 7.3b, we can predict statistical characteristics of future sampling from populations from which samples have previously been analyzed. Such a desire might arise with data from an experiment such as in Example 8.1. Data were obtained from six animals treated with one drug and from seven animals treated with a second drug; and the mean blood-clotting times were concluded to be different under these two treatments. Equations 8.14 and S.18 showed how confidence intervals can be obtained for the difference between means of two samples. It could also be asked what the difference between the means would be of an additional sample of ml animals treated with the first drug and a sample of an additional m2 animals treated with the second. For those two additional samples the best prediction of the difference between the two sample means would be XI - X2,whichinExampleS.1is8.75min - 9.74min = -0.99 min; and there would be a 1 - 0' probability that the difference between the two means would be contained in this prediction interval: Xj
where 52 c
=
-
ml
X2
2
52 L
-± sp
+
111
+
ta(2).v
52 p
+
m2
N
5c'
52 ...E...
(8.19a)
(8.19b)
112
(Hahn, 1977). For example, if an additional sample of 10 data were to be obtained for treatment with the first drug and an additional sample of 12 were to be acquired for treatment with the second drug, the 95% prediction limits for the difference between means would employ 5~ = 0.5193 min", 11I = 6, 112= 7, ml = 10, mz = 12, to.05(2).v=2.201. and u = 11; and 5~ = 0.51 min, so the 95% prediction limits would be LI = -2.11 min and L2 = 0.13 min. As the above procedure uses the pooled variance, 5~, it assumes that the two sampled populations have equal variances. If the two variances are thought to be quite different (the Behrens-Fisher situation discussed in Section 8.1c), then it is preferable to calculate the prediction interval as Xj
where 52 c
-
mj
X2
a(2).v'
')
52
= ~j
-±t
+
51 I1j
+
52 2
m2
N
5c'
(8.19c)
\.2
+
~ 112
(8.19d)
148
Chapter 8
Two-Sample Hypotheses
2. Sample sizes not large enough to result in detection of a difference of biological importance can expend resources without yielding useful results, and sample sizes larger than needed to detect a difference of biological importance can result in unnecessary expenditure of resources. 3. Sample sizes not large enough to detect a difference of biological importance can expose subjects in the study to potentially harmful factors without advancing knowledge, and sample sizes larger than needed to detect a difference of biological importance can expose more subjects than necessary to potentially harmful factors or deny them exposure to potentially beneficial ones. Assuming each sample comes from a normal population and the population variances are similar, we can estimate the minimum sample size to use to achieve desired test characteristics: 2S2 n;:::
8;
(ta.v
+
t/3( 1).1')2
(8.22)
(Cochran and Cox, 1957: 19-21).* Here, 8 is the smallest population difference we wish to detect: 8 = fLI - fL2 for the hypothesis test for which Equation 8.1 is used; 8 = IfLI - fL21 - fLO when Equation 8.9 is appropriate; 8 = fL I - fL2 - fLa when performing a test using Equation 8.10. In Equation 8.22, ta.v may be either ta( I ).1' or ta(2).v, depending, respectively, on whether a one-tailed or two-tailed test is to be performed. Note that the required sample size depends on the following four quantities: • 8, the minimum detectable difference between population means.l If we desire to detect a very small difference between means, then we shall need a larger sample than if we wished to detect only large differences. • 0.2, the population variance. If the variability within samples is great, then a larger sample size is required to achieve a given ability of the test to detect differences between means. We need to know the variability to expect among the data; assuming the variance is the same in each of the two populations sampled, 0.2 is estimated by the pooled variance, s~' obtained from similar studies. • The significance level, lX. If we perform the t test at a low lX, then the critical value, fa.v, will be large and a large n is required to achieve a given ability to detect differences between means. That is, if we desire a low probability of committing a Type I error (i.e., falsely rejecting Ho), then we need large sample SIzes. • The power of the test, 1 - f3. If we desire a test with a high probability of detecting a difference between population means (i.e., a low probability of committing a Type II error), then f3( 1) will be small, t/3( I) will be large, and large sample sizes are required. Example 8.4 shows how the needed sample size may be estimated. As ta(2).'J and depend on n, which is not yet known, Equation 8.22 must be solved iteratively, as we did with Equation 7.10. It matters little if the initial guess for n is inaccurate. Each iterative step will bring the estimate of n closer to the final result (which is
t/3( I).1'
*The method of Section 10.3 may also be used for estimation of sample size, but it offers no substantial advantage over the present procedure. '1'8 is lowercase Greek delta. If /-LO in the statistical hypotheses is not zero (see discussion surrounding Equations 8.9 and iI.10), then 8 is the amount by which the absolute value of the difference between the population means differs from /-L()-
Section 8.4
Sample Size, Detectable Difference, and Power in Tests
149
declared when two successive iterations fail to change the value of n rounded to the next highest integer). In general, however, fewer iterations are required (i.e., the process is quicker) if one guesses high instead of low.
EXAMPLE 8.4
Estimation of Required Sample Size for a Two-Sample t
Test We desire to test for significant difference between the mean blood-clotting times of persons using two different drugs. We wish to test at the 0.05 level of significance, with a 90% chance of detecting a true difference between population means as small as 0.5 min. The within-population variability, based on a previous study of this type (Example 8.1), is estimated to be 0.52 mirr'. Let us guess that sample sizes of 100 will be required. Then, u = 2( n - 1) = 2(100 - 1) = 198,to.05(2).198::::;1.972,{3 = 1 - 0.90 = 0.10,tO.l0(1),198 = 1.286, and we calculate (by Equation 8.22): n
2
2( 0.52) (1.972 (0.5 )2
Let us now use n = 45 to determine to.10(1).88= 1.291, and n2
u
2(0.52) (1.987 (0.5 )2
+ 1.286)2 = 44.2. = 2(n - 1) = 88,
to.05(2),88
1.987,
+ 1.291)2 = 44.7.
Therefore, we conclude that each of the two samples should contain at least 45 data. If n] were constrained to be 30, then, using Equation 8.21, the required ri: would be (44.7) (30) n2 = 2(30) _ 44.7 = 88.
For a given total number of data (n] + 112),maximum test power and robustness occur when 111= 112(i.e., the sample sizes are equal). There are occasions, however, when equal sample sizes are impossible or impractical. If, for example, 111were fixed, then we would first determine 11by Equation 8.22 and then find the required size of the second sample by Equation 8.21, as shown in Example 8.4. Note, from this example, that a total of 45 + 45 = 90 data are required in the two equal-sized samples to achieve the desired power, whereas a total of 30 + 88 = 118 data are needed if the two samples are as unequal as in this example. If 2n - 1 ~ 0, then see the discussion following Equation 8.21. (b) Minimum Detectable
how small a population given sample size:
Difference. Equation 8.22 can be rearranged to estimate difference (B, defined above) would be detectable with a
(8.23) The estimation of B is demonstrated
in Example 8.5.
150
Chapter 8
Two-Sample Hypotheses
EXAMPLE8.5 Sample t Test
Estimation of Minimum Detectable Difference in a Two-
In two-tailed testing for significant difference between mean blood-clotting times of persons using two different drugs, we desire to use the O.OSlevel of significance and sample sizes of 20. What size difference between means do we have a 90% chance of detecting? Using Equation 8.23 and the sample variance of Example 8.1, we calculate: 8
=
-V/2(0.SI93) 20
+
(to.05(2).3X
[0.10(1),38)
= (0.2279) (2.024 + 1.304) = 0.76 min. In a Behrens-Fisher
situation (i.e., if we don't assume that uT = u~), Equation 8.23
would employ \j sTf n
c=:": + s~/ n instead of \J2s~/ n.
(c) Power of the Test.
Further rearrangement
I
of Equation 8.22 results in
8 t{3( 1).1/ :::;
~
.
~
(8.24)
fay,
2s~ n
which is analogous to Equation 7.12 in Section 7.7. On computing f{3(I).'" one can consult Appendix Table 8.3 to determine f3( 1), whereupon 1 ~ f3( 1) is the power. But this generally will only result in declaring a range of power (c.g., 0.7S < power < 0.90). Some computer programs can provide the exact probability of f3( 1), or we may, with only slight overestimation of power (as noted in the footnote in Section 7.7) consider f{3( I) to be approximated by a normal deviate and may thus employ Appendix Table B.2. If the two population variances arc not assumed to be the same, then would be used in place of
J2s~/ n in Equation
)ST / n
+ s~/ n
8.24.
The above procedure for estimating power is demonstrated in Example 8.6, along with the following method (which will be expanded on in the chapters on analysis of variance). We calculate
4> =
/ n82 4s~
-V
(8.25)
(derived from Kirk, 1995: 182) and 4> (lowercase Greek phi) is then located in Appendix Figure B.la, along the lower axis (taking care to distinguish between 4>'s for 0' = 0.01 and 0' = O.OS).Along the top margin of the graph are indicated pooled degrees of freedom, u, for 0' of either 0.01 or n.os (although the symbol V2 is used on the graph for a reason that will be apparent in later chapters). By noting where 4> vertically intersects the curve for the appropriate u, one can read across to either the left or right axis to find the estimate of power. As noted in Section 7.7c, the calculated power is an estimate of the probability of rejecting a false null hypothesis in future statistical tests; it is not the probability of rejecting Ho in tests performed on the present set of data.
Section 8.5 EXAMPLE 8.6
Testing for Difference between Two Variances
Estimation of the Power of a Two-Sample
151
t Test
What would be the probability of detecting a true difference of 1.0 min between mean blood-clotting times of persons using the two drugs of Example 8.1, if n] = n: = 15, and a(2) = 0.05? For n = 15, u = 2(n - 1) = 28 and (005(2).28 = 2.048. Using Equation 8.24: 1.0
tf3(]).2R
S
~;====
- 2.048 = 1.752.
2(0.5193 ) 15 Consulting Appendix v = 28: 0.025 < P( t
2:
Table B.3, we see that, for one-tailed 1.752) < 0.0.s, so 0.025 < f3 < 0.05.
probabilities
and
Power = 1 - f3, so 0.95 < power < 0.975. Or, by the normal approximation, we can estimate f3 by P( Z 2: 1.752) = 0.04. So power = 0.96. [The exact figures are f3 = 0.045 and power = O.95S.] To use Appendix Figure B.l, we calculate c/J=
/n8
2
\j
=
4s~
\j
/(15)(1.0)
=2.69.
4(0.5193)
In the first page of Appendix Figure B.l, we find that c/J= 2.69 and v( = V2) = 28 are associated with a power of about 0.96.
(d) Unequal Sample Sizes. For a given total number of data, n] + n2, the twosample t test has maximum power and robustness when n] = na. However, if nl =F na, the above procedure for determining minimum detectable difference (Equation 8.23) and power (Equations 8.24 and 8.25) can be performed using the harmonic mean of the two sample sizes (Cohen, 1988: 42): n=
2n]n2
Thus, for example, if nl
(8.26)
+ nz
nl
= 6 and n: = 7, then n = 2(6)(7)
= 6.46.
6 + 7 ~.5 TESTING FOR DIFFERENCE BETWEEN TWO VARIANCES If we have two samples of measurements, each sample taken at random from a normal population, we might ask if the variances of the two populations are equal. Consider the data of Example 8.7, where the estimate of is 21.87 rnoths/, and s~, the estimate of u~, is 12.90 moths-. The two-tailed hypotheses can be stated as Ho: = and HA: =F and we can ask, What is the probability of taking two samples from two populations having identical variances and having the two sample variances be as different as are and s~? If this probability is rather low (say s O.OS, as in previous chapters), then we reject the veracity of Ho and conclude that the two samples came from populations having unequal variances. If the probability is greater
sy,
uy u~
uy u~,
sy
uy,
152
Chapter 8
Two-Sample Hypotheses
than 0', we conclude that there is insufficient evidence to conclude that the variances of the two populations are not the same. (a) Variance-Ratio Test. The hypotheses variance-ratio test, for which one calculates
may be submitted
to the two-sample
2
or
F
=
s~, whichever is larger."
(8.27)
s[
That is, the larger variance is placed in the numerator and the smaller in the denominator. We then ask whether the calculated ratio of sample variances (i.e., F) deviates so far from 1.0 as to enable us to reject Hi, at the 0' level of significance. For the data in Example 8.7, the calculated F is 1.70. The critical value, FO.05(2),[O,9, is obtained from Appendix Table BA and is found to be 3.59. As 1.70 < 3.59, we do not reject Hot. Note that we consider degrees of freedom associated with the variances in both the numerator and denominator of the variance ratio. Furthermore, it is important to realize that Fa,IIl,II2 and Fa,II2,IIl are not the same (unless, of course, V[ = V2), so the numerator and denominator degrees of freedom must be referred to in the correct order. If Ho: O"T = O"~ is not rejected, then sT and s~ are assumed to be estimates of the same population variance, 0"2. The best estimate of this 0"2 that underlies both samples is called the pooled variance (introduced as Equation 804): V[ST VI
+ V2S~ + v:
(8.28)
One-tailed hypotheses may also be submitted to the variance ratio test. For He: O"T 2::: O"~ and H A: O"T < O"~,s~ is always used as the numerator of the variance ratio; for Ho: O"T ::; O"~ and HA: O"T > O"~,sT is always used as the numerator. (A look at the alternate hypothesis tells us which variance belongs in the numerator of F in order to make F > 1.) The critical value for a one-tailed test is Fa( I)'v1.1I2 from Appendix Table BA, where VI is the degrees of freedom associated with the numerator of F and V2 is the degrees of freedom associated with the denominator. Example 8.8 presents the data submitted to the hypothesis test for whether seeds planted in a greenhouse have less variability in germination time than seeds planted outside. The variance-ratio test is not a robust test, being severely and adversely affected by sampling nonnormal populations (e.g., Box, 1953; Church and Wike, 1976; Markowski and Markowski, 1990; Pearson, 1932; Tan, 1982), with deviations from mesokurtosis somewhat more important than asymmetry; and in cases of nonnormality the probability of a Type I error can be very much greater than 0'. *What we know as the F statistic is a ratio of the variances of two normal distributions and was first described by R. A. Fisher in 1924 (and published in 1928) (Lehmann, 1999); the statistic was named in his honor by G. W. Snedecor (1934: 15). "Some calculators and many computer programs have the capability of determining the probability of a given F. For the present example, we would thereby find that P( F 2: 1.70) = 0.44.
Section 8.5
Testing for Difference between Two Variances
153
EXAMPLE 8.7 The Two-Tailed Variance Ratio Test for the Hypothesis Ho: u~ u~ and HA: u~ u~. The Data Are the Numbers of Moths Caught During the Night by 11 Traps of One Style and 10 Traps of a Second Style
*"
=
Ho: HA:
a
= 0.05 Trap type 1
Trap type 2
41
52 57 62 55 64 57
35 33 36 40 46 31 37 34 30 38 n1 =
11
=
10
VI
SSI
sf F =
sf2
052
55 60 59
n: = 10 V2 = 9
= 218.73 moths/
= 21.87
= 21.87
56
=
12.90
moths/
SS2 = 116.10 moths/ s~
= 12.90 moths2
1 70 .
FO.05(2).IO.9 = 3.96
Therefore, do not reject Hi; P(0.20 < F < O.50)[P
s2 p
= 218.73 moths
2·
= 0.44]
?
+ 116.10 moths- = 17.62moths2 10 + 9
The conclusion is that the variance of numbers of moths caught is the same for the two kinds of traps.
(b) Other Two-Sample Tests for Variances. A large number of statistical procedures to test differences between variances have been proposed and evaluated (e.g., Brown and Forsythe, 1974c; Church and Wike, 1976; Draper and Hunter, 1969; Levene, 1960; Miller, 1972; O'Neill and Mathews, 2000), often with the goal of avoiding the
154
Chapter 8
Two-Sample Hypotheses EXAMPLE 8.8 A One-Tailed Variance-Ratio Test for the Hypothesis That the Germination Time for Pine Seeds Planted in a Greenhouse Is Less Variable Than for Pine Seeds Planted Outside
Ho:
rr- 2 :> •.•. 2 vI - "2
a = 0.05
Germination Time (in Days) of Pine Seeds Greenhouse
Outside
69.3 75.5
69.5 64.6 74.0 84.8 76.0 93.9
81.0
74.7 72.3 78.7 76.4
81.2
73.4 88.0
SSI = 90.57 days''
nz = 9 V2 = 8 SS2 = 700.98
sT
s~
nl
=
7
VI = 6
F
=
87.62 15.10
FO.05( I ).8.6
Therefore,
15.10 days-
=
=
87.62 days-
= 5.80 =
4.15
reject H«.
0.01 < P(F ;::::5.80) < 0.025 The conclusion the greenhouse
days2
is that the variance in germination than in those grown outside.
[P
=
0.023]
time is less in plants
grown in
lack of robustness of the variance-ratio test when samples come from non normal populations of data. A commonly encountered one is Levene's test, and its various modifications, which is typically less affected by non normal distributions than the variance-ratio test is. The concept is to perform a two-sample t test (two-tailed or one-tailed, as the situation warrants; see Section 8.1), not on the values of X in the two samples but on values of the data after conversion to other quantities. A common conversion is to employ the deviations of each X from its group mean or median; that is, the two-sample t test is performed on IXij - Xii or on IXij - median of group il. Other
Section 8.5
Testing for Difference between Two Variances
155
data conversions, such as the square root or the logarithm of IXij - Xii, have also been examined (Brown and Forsythe, 1974c). Levene's test is demonstrated in Example 8.9 for two-tailed hypotheses, and X' is used to denote IXi - XI. This procedure may also be employed to test one-tailed hypotheses about variances, either Ho: (TT ;::::(T~ vs. HA: (TT < (T~, or He: (TT :::; (T~ vs. HA: (TT > (T~. This would be done by the one-tailed t-testing described in Section 8.1, using (T2 in place of JL in the hypothesis statements and using IXi - XI instead of Xi in the computations.
EXAMPLE 8.9 The Two-Sample Levene Test for Ho: tT~ =1= tT~. The Data Are Those of Example 8.7
a
tT~
=
tT~
and HA:
X'
=
= 0.05 = 401 moths, n = 11, = 577 moths, n = 10,
For group 1: LX For group 2: LX
u v
= 10, X = 36.45 moths. = 9, X = 57.70 moths.
Trap Type 1
Trap Type 2
X' = IXi -
41 35 33 36 40 46 31 37 34 30 38
XI
4.55 1.45 3.45 0.45 3.55 9.55 5.45 0.55 2.45 6.45 1.55
LXi
LIXi
401 moths
52 57 62 55 64 57 56 55 60 59
-
XI
LXi
= 39.45 moths
= 39.45 moths/Tl
=
X'2
= 3.59 moths SSi
= 77 .25 moths/
=
LX;
577 moths
LIXi
= 28.40 moths/In =
SS;
2.84 moths
= 35.44 moths/
-
= 28.40
For the absolute values of the deviations from the mean: X' 1
XI
5.70 0.70 4.30 2.70 6.30 0.70 1.70 2.70 2.30 1.30
=
LX;
=
IXi -
XI
moths
158
Chapter 8
Two-Sample Hypotheses
the calculated confidence intervals are only approximations, with the approximation poorer the further from normality the populations are. Meeker and Hahn (1980) discuss calculation of prediction limits for the variance ratio and provide special tables for that purpose. 8.7
SAMPLE SIZE AND POWER IN TESTS FOR DIFFERENCE BETWEEN TWO VARIANCES
(a) Sample Size Required. In considering the variance-ratio test of Section 8.5, we may ask what minimum sample sizes are required to achieve specified test characteristics. Using the normal approximation recommended by Desu and Raghavarao (1990: 35), the following number of data is needed in each sample to test at the a level of significance with power of 1 - f3: 2
+ 2.
n=
(8.32)
For analysts who prefer performing calculations with "common logarithms" (those employing base 10) to using "natural logarithms" (those in base e),* Equation 8.32 may be written equivalently as 2
z;
n=
+
(2.30259)
Zf3(I)
+ 2.
IOgGD
(8.33)
This sample-size estimate assumes that the samples are to be equal in size, which is generally preferable. If, however, it is desired to have unequal sample sizes (which will typically require more total data to achieve a particular power), one may specify that VI is to be m times the size of V2; then (after Desu and Raghavarao, 1990: 35): m =
n: =
(m
nl
1
(8.34)
n: - t
+ 1 )(n - 2) 2m
+ 2,
(8.35)
and
".
nl
= m(n2 - 1) + 1.
(8.36)
"In this book, in will denote the natural, or Naperian, logarithm, and log will denote the common, or Briggsian, logarithm. These are named for the Scottish mathematician John Napier (1550-1617), who devised and named logarithms, and the English mathematician Henry Briggs (1561-1630). who adapted this computational method to base 10; the German astronomer Johann Kepler (1550-1617) was the first to use the abbreviation "Log," in 1624, and Italian mathematician Bonaventura Cavalieri (1598-1647) was the first to use "log" in 1632 (Cajori, 1928/9, Vol. II: 105-106; Gullber, 1997: 152). Sometimes loge and loglO will be seen instead of In and log, respectively.
Section 8.8
Testing for Difference between Two Coefficients of Variation
159
As in Section 8.5, determination of whether sf or s~ is placed in the numerator of the variance ratio in Equation 8.32 depends upon the hypothesis test, and Za is either a one-tailed or two-tailed normal deviate depending upon the hypothesis to be tested; nl and n2 correspond to sf and s~, respectively. This procedure is applicable if the variance ratio is > 1. (b) Power of the Test. We may also estimate what the power of the variance ratio test would be if specified sample sizes were used. If the two sample sizes are the same (i.e., n = nl = n2), then Equations 8.32 and 8.33 may be rearranged, respectively, as follows: (8.37)
Z~(l} ~
~(2.30259)IOgGD
-
(8.38)
Z.
After Z{3( I) is calculated, f3( 1) is determined from the last line of Appendix Table B.3, or from Appendix Table B.2, or from a calculator or computer that gives probability of a normal deviate; and power = 1 - f3( 1 ). If the two sample sizes are not the same, then the estimation of power may employ Z(3(I) -- )2m(n2 m
(sf)
- 2) In s~
+ 1
-
Za
(8.39)
or Z{3(I)
=
\j
!2m(n2 m
+
- 2)(2.30259)IOg(s221) 1
-
Za,
(8.40)
S2
where m is as in Equation 8.34. 1.8 TESTING FOR DIFFERENCE BETWEEN TWO COEFFICIENTS OF VARIATION
A very useful property of coefficients of variation is that they have no units of measurement. Thus, V's may be compared even if they are calculated from data having different units, as is the case in Example 8.10. And it may be desired to test the null hypothesis that two samples came from populations with the same coefficients of variation. EXAMPLE 8.10 A Two-Tailed cients of Variation
Test for Difference
Between
Two Coeffi-
Ho: The intrinsic variability of male weights is the same as the intrinsic variability of male heights (i.e., the population coefficients of variation of weight and height are the same, namely Ho: a I I ILl = a21 IL2)· Ho: The intrinsic variability of male weight is not the same as the intrinsic variability of male heights (i.e., the population coefficients of variation of weight and height are not the same, namely Ho: all ILl #- a21 IL2).
160
Chapter 8
Two-Sample Hypotheses
(a) The variance-ratio
test.
Weight (kg)
Log of weight
Height (em)
Log of height
72.5 71.7 60.8 63.2 71.4
1.86034 1.85552 1.78390 1.80072 1.85370 1.86392 1.89154 1.87910 1.85733 1.83885
183.0 172.3 180.1 190.2 191.4 169.6 166.4 177.6 184.7 187.5 179.8
2.26245 2.23629 2.25551 2.27921 2.28194 2.22943 2.22115 2.24944 2.26647 2.27300 2.25479
73.1 77.9 75.7 72.0 69.0
nl
= 10
n2
= 11
Vj
= 9
V2
= 10
Xl = 70.73 kg
X2 = 180.24 em
SSj
SS2 = 678.9455 crrr'
=
246.1610 kg2
sf
= 27.3512 kg2
s~
Sl
= 5.23 kg
S2 =
VI
= 0.0739
V2
= 0.00987026
(SSlog)l
67.8946 cm2 8.24 em
= 0.0457
(SSlog)z
(Sfog)l = 0.0010967 F = 0.0010967
=
(sfog)z
= 0.00400188 = 0.00040019
= 2.74
0.00040019 FO.05(2).9.10
=
3.78
Therefore, do not reject He: 0.10
< P < 0.20
[P = 0.13]
It is concluded that the coefficient of variation is the same for the population of weights as it is for the population of heights. (b) The Z test. V p
=
VI VI Vj
+ +
V2 V2
= 9(0.0739) 9
V2
+ +
10(0.0457) 10
V~ = 0.003493 VI
-
V2
Z = r======::::::::::::=== V~ ( Vj
+ V~)(0.5 + V2
= 1.1221 = 0.0591 19
Testing for Difference between Two Coefficients of Variation
Section 8.8
161
0.0739 - 0.0457
) (000:493
+ 0.003493) (0.5 + 0.003493) 10
= 0.0282 = 1.46 0.0193 20.05(2)
=
to.05(2).00
= 1.960
Do not reject Hi; 0.10 < P < 0.20
[P = 0.14]
It is concluded that the coefficient of variation is the same for the population of weights as it is for the population of heights.
Lewontin (1966) showed that
(8.41 )
may be used for a variance-ratio 8.41,
(Sfog\
test, analogously
to Equation
8.27. In Equation
refers to the variance of the logarithms of the data in Sample i, where
logarithms to any base may be employed. This procedure is applicable only if all of the data are positive (i.e., > 0), and it is demonstrated in Example 8.1Oa. Either two-tailed or one-tailed hypotheses may be tested, as shown in Section 8.5. This variance-ratio test requires that the logarithms of the data in each sample come from a normal distribution. A procedure advanced by Miller (1991) allows testing when the data, not their logarithms, are from normal distributions (that have positive means and variances). The test statistic, as demonstrated in Example 8.1Ob, is VI -
V2
2 = r============== V~ + V~)(0.5 ( VI V2
(8.42)
where
vp --
VI VI VI
+ +
V2 V2
(8.43)
V2
is referred to as the "pooled coefficient of variation," which is the best estimate of the population coefficient of variation, a] /-t, that is common to both populations if the null hypothesis of no difference is true. This procedure is shown, as a two-tailed test, in Example 8.10b. Recall that critical values of 2 may be read from the last line of the table of critical values of t (Appendix Table B.3), so 2(1'(2) = t(l'(2),00' One-tailed testing is also possible, in which case the alternate hypothesis would declare a specific direction of difference and one-tailed critical values (too( I),(1') would be consulted. This test works best if there are at least 10 data in each sample and each population's coefficient of variation is no larger than 0.33. An estimate of the power of the test is given by Miller and Feltz (1997).
162
Chapter 8
Two-Sample Hypotheses
8.9
CONFIDENCE LIMITS FOR THE DIFFERENCE BETWEEN TWO COEFFICIENTS OF VARIATION
Miller and Feltz (1997) have provided this 1 - a confidence interval for where the two sampled populations are normally distributed:
CT)/ ILl
-
(T2/ jL2,
(8.44) 8.10
NONPARAMETRIC
STATISTICAL METHODS
There is a large body of statistical methods that do not require the estimation of population parameters (such as IL and CT) and that test hypotheses that are not statements about population parameters. These statistical procedures are termed nonparametric tests. * These are in contrast to procedures such as t tests, which are called parametric tests and which do rely upon estimates of population parameters and upon the statement of parameters in the statistical hypotheses. Although they may assume that the sampled populations have the same dispersion or shape, nonparametric methods typically do not make assumptions about the nature of the populations' distributions (e.g., there is no assumption of normality); thus they are sometimes referred to as distribution-free tests."!Both parametric and nonparametric tests require that the data have come at random from the sampled populations. Nonparametric tests (such as the two-sample testing procedure described in Section 8.11) generally may be applied to any situation where we would be justified in employing a parametric test (such as the two-sample t test), as well as in some instances where the assumptions of the latter are untenable. If either the parametric or nonparametric approach is applicable, then the former will generally be more powerful than the latter (i.e., the parametric method will typically have a lower probability of committing a Type II error). However, often the difference in power is not great and can be compensated by a small increase in sample size for the nonparametric test. When the underlying assumptions of a parametric test are seriously violated, then the non parametric counterpart may be decidedly more powerful. Most nonparametric statistical techniques convert observed data to the ranks of the data (i.e., their numerical order). For example, measurements of 2.1, 2.3, 2.9, 3.6, and 4.0 kg would be analyzed via their ranks of 1, 2, 3, 4, and 5. A possible disadvantage of this rank transformation of data is that some information is lost (for example, the same ranks would result from measurements of 1.1, 1.3, 2.9, 4.6, and 5.0 kg). A possible advantage is that outliers (see Section 2.5) will have much less influence (for example, the same ranks would result from measurements of 2.1,2.3, 2.9,3.6, and 25.0 kg). It is sometimes counseled that only nonparametric testing may be employed when dealing with ordinal-scale data, but such advice is based upon what Gaito (1980) calls "an old misconception"; this issue is also discussed by Anderson (1961), Gaito (1960), Savage (1957), and Stevens (1968). Interval-scale or ratio-scale measurements are not intrinsically required for the application of parametric testing procedures. Thus parametric techniques may be considered for ordinal-scale data if the assumptions of such methods are met-typically, random sampling from normally distributed populations with homogeneity of variances. But ordinal data often come *The term nonparametric was first used by Jacob Wolfowitz in 1942 (David, 1995; Noether, 19/;4). tThe terms nonparametric and distribution-free are commonly used interchangeably, but they do not both define exactly the same set of statistical techniques (Noether, 1984).
Section 8.11
Two-Sample Rank Testing
163
from nonnormal populations, in which case properly subjecting them to parametric analysis depends upon the robustness of the test to the extent of nonnormality present. 8.11 TWO-SAMPLE
RANK TESTING
Several nonparametric procedures, with various characteristics and assumptions, have been proposed for testing differences between the dispersions, or variabilities, of two populations (e.g., see Hettmansperger and McKean, 1998: 118-127; Hollander and Wolfe, 1999: 141-188; Sprent and Smeeton, 2001: 175-185). A far more common desire for nonparametric testing is to compare two populations' central tendencies (i.e., locations on the measurement scale) when underlying assumptions of the t test are not met. The most frequently employed such test is that originally proposed, for equal sample sizes, by Wilcoxon (1945)* and independently presented by Mann and Whitney (1947), for equal or unequal n's. It is called the Wilcoxon-Mann-Whitney test or, more commonly, the Mann-Whitney test. (a) The Mann-Whitney Test. For this test, as for many other nonparametric procedures, the actual measurements are not employed, but we use instead the ranks of the measurements. The data may be ranked either from the highest to lowest or from the lowest to the highest values. Example 8.11 ranks the measurements from highest to lowest: The greatest height in either of the two groups is given rank 1, the second greatest height is assigned rank 2, and so on, with the shortest height being assigned rank N, where (8.45) N = n) + n2. A Mann- Whitney statistic is then calculated as (8.46) where na and n2 are the number of observations in samples 1 and 2, respectively, and R) is the sum of the ranks in sample 1. The Mann-Whitney statistic can also be calculated as (8.47) (where R2 is the sum of the ranks of the observations in sample 2), because the labeling of the two samples as 1 and 2 is arbitrary." If Equation 8.46 has been used to calculate U, then U' can be obtained quickly as (8.48) *Wilcoxon may have proposed this test primarily to avoid the drudgery of performing numerous t tests in a time before ubiquitous computer availability (Noether, 1984). Kruskal (1957) gives additional history, including identification of seven independent developments of the procedure Wilcoxon introduced, two of them prior to Wilcoxon, the earliest being by the German psychologist Gustav Dcuchler in 1914. i'The Wilcoxon two-sample tcst (sometimes referred to as the Wilcoxon rank-sum test) uses a test statistic commonly called W, which is R) or R2; the test is equivalent to the Marin-Whitney test, for V = R2 - n2(n2 + 1 )/2 and V' = R) - n) (n) + 1 )/2. V (or V') is also equal to the number of data in one sample that are exceeded by each datum in the other sample. Note in Example 8.11: For females. ranks 7 and 8 each exceed 6 male ranks and ranks 10, 11, and 12 each exceed all 7 males ranks, for a total of 6 + 6 + 7 + 7 + 7 = 33 = V; for males, rank 9 exceeds 2 female ranks for a total of 2 = V'.
164
Chapter 8
Two-Sample Hypotheses
EXAMPLE 8.11 The Mann-Whitney Test for Nonparametric Testing of the Two-Tailed Null Hypothesis That There Is No Difference Between the Heights of Male and Female Students He: Male and female students are the same height.
HA: a
Male and female students are not the same height.
= 0.05
Heights of males
Heights of females
193 ern 188 185 183 180 175 170
178 cm 173 168 165 163
n( = 7
ni
V
=
n)n2
=
(7)(5)
= 35 +
+
n) (n,
+
Ranks of male heights
Ranks of female heights
1
6 8 10 11 12
2
3 4
5 7 9 R( = 31
= 5
R2 = 47
+ 1 ) - R)
2 (7)(8)
31
2 28 - 31
= 32 V' = n(n2 - V =
(7)( 5) - 32
=3 V005(2).7,5
= VO.05(2),5.7
= 30
As 32 > 30, Ho is rejected. 0.01 < P( V
2:
32 or V' ::::;3) < 0.02
[P
=
0.018]*
Therefore, we conclude that height is different for male and female students. and if Equation 8.47 has been used to compute V', then V can he ascertained as (8.49) * In many of the examples in this book, the exact probability of a statistic from a non parametric test (such as U) will be given within brackets. Tn some cases, this probability is obtainable from published sources (e.g., Owen, 1962). It may also be given by computer software, in which case there are two cautions: The computer result may not be accurate to the number of decimal places given, and the computer may have used an approximation (such as the normal approximation in the case of U; see Section 8.11 d), which may result in a probability departing substantially from the exact probability, especially if the sample sizes are small.
Section 8.11
Two-Sample Rank Testing
165
For the two-tailed hypotheses, He: male and female students are the same height and HA: male and female students are not the same height, the calculated V or V'-whichever is larger-is compared with the two-tailed value of Vo:(2).I1j,n2 found in Appendix Table B.I1. This table is set up assuming nj ::; n2, so if n] > na, simply use Vo:(2).112,nj as the critical value. If either V or V'is as great as or greater than the critical value, Ho is rejected at the a level of significance. A large V or V' will result when a preponderance of the large ranks occurs in one of the samples. As shown in Example 8.11, neither parameters nor parameter estimates are employed in the statistical hypotheses or in the calculations of V and V'. The values of V in the table are those for probabilities less than or equal to the column headings. Therefore, the V of 32 in Example 8.11 is seen to have a probability of 0.01 < P ::; 0.02. If the calculated V would have been 31, its probability would have been expressed as 0.02 < P < 0.05. We may assign ranks either from large to small data (as in Example 8.11), or from small to large, calling the smallest datum rank 1, the next largest rank 2, and so on. The value of V obtained using one ranking procedure will be the same as the value of V' using the other procedure. In a two-tailed test both V and V' are employed, so it makes no difference from which direction the ranks are assigned. In summary, we note that after ranking the combined data of the two samples, we calculate V and V' using either Equations 8.46 and 8.48, which requires the determination of R], or Equations 8.47 and 8.49, which requires R2. That is, the sum of the ranks for only one of the samples is needed. However, we may wish to compute both R] and R2 in order to perform the following check on the assignment of ranks (which is especially desirable in the somewhat more complex case of assigning ranks to tied data, as will be shown below):
+
R]
R2
=
+ 1)
N(N 2
(8.50)
Thus, in Example 8.11,
+
R]
R2
= 30 + 48 = 78
should equal N(N
+ 1) = 12(12 + 1) =78. 2
2
This provides a check on (although it does not guarantee the accuracy of) the assignment of ranks. Note that hypotheses for the Mann- Whitney test are not statements about parameters (e.g., means or medians) of the two populations. Instead, they address the more general, less specific question of whether the two population distributions of data are the same. Basically, the question asked is whether it is likely that the two samples came at random from the two populations described in the null hypothesis. If samples at least that different would occur with a probability that is small (i.e., less than the significance level, such as 0.05), then Ho is rejected. The Mann- Whitney procedure serves to test for difference between medians under certain circumstances (such as when the two sampled populations have symmetrical distributions), but in general it addresses the less specific hypothesis of similarity between the two populations' distributions. The Watson test of Section 26.6 may also be employed when the Marin-Whitney test is applicable, but the latter is easier to perform and is more often found in statistical software.
166
Chapter
8
Two-Sample
Hypotheses
(b) The Mann-Whitney Test with Tied Ranks. Example 8.12 demonstrates an important consideration encountered in tests requiring the ranking of observations. When two or more observations have exactly the same value, they are said to be tied. The rank assigned to each of the tied ranks is the mean of the ranks that would have been assigned to these ranks had they not been tied.* For example, in the present set of data, which are ranked from low to high, the third and fourth lowest values are tied at 32 words per minute, so they are each assigned the rank of (3 + 4 )/2 = 3.5. The eighth, ninth, and tenth observations are tied at 44 words per minute, so each of them receives the rank of (8 + 9 + 10)/3 = 9. Once the ranks have been assigned by this procedure, V and V' are calculated as previously described. (c) The One-Tailed Mann-Whitney Test. For one-tailed hypotheses we need to declare which tail of the Mann-Whitney distribution is of interest, as this will determine whether V or V'is the appropriate test statistic. This consideration is presented in Table 8.2. In Example 8.12 we have data that were ranked from lowest to highest and the alternate hypothesis states that the data in group 1 are greater in magnitude than those in group 2. Therefore, we need to compute V' and compare it to the one-tailed critical value, V0'( l).nl.n2' from Appendix Table B.11. TABLE 8.2: The Appropriate
Test Statistic for the One-Tailed Mann-Whitney
Test
Ho: Group 1 2': Group 2 HA: Group 1 < Group 2
He; Group 1 ::; Group 2 HA: Group 1 > Group 2
v
V'
V'
V
Ranking done from low to high Ranking done from high to low
(d) The Normal Approximation to the Mann-Whitney Test. Note that Appendix Table B.ll can be used only if the size of the smaller sample does not exceed twenty and the size of the larger sample does not exceed forty. Fortunately, the distribution of V approaches the normal distribution for larger samples. For large n, and n2 we use the fact that the V distribution has a mean of (8.51) which may be calculated, equivalently, as /LV
=
V
+ V'
(8.51a)
2
and a standard error of (8.52)
* Although other procedures have been proposed to deal with ties, assigning the rank mean has predominated for a long time (e.g., Kendall, 1945).
Two-Sample Rank Testing
Section 8.11
167
EXAMPLE 8.12 The One-Tailed Mann-Whitney Test Used to Determine the Effectiveness of High School Training on the Typing Speed of College Students. This Example Also Demonstrates the Assignment of Ranks to Tied Data H«: Typing speed is not greater in college students having had high school typing HA:
training. Typing speed is greater in college students having had high school typing training.
a
0.05
=
Typing Speed (words per minute) With training Without training (rank in parentheses) (rank in parentheses) 44 (9) 48 (12) 36 (6) 32 (3.5) 51 (13) 45 (11) 54 (14) 56 (15) nl
=
R,
= 83.5
32 (3.5) 40 (7) 44 (9) 44 (9) 34 (5) 30 (2) 26 (1)
8
Because ranking was done from low to high and the alternate hypothesis states that the data of group one are larger than the data of group two, use V' as the test statistic (as indicated in Table 8.2).
ir
VO.05(
1).8.7
= VO.05(
1).7.8
= n2nl +
n2(n2 2
+ 1) - R2
= (7)(8)
+ (7)(8)
+
2 28 - 36.5
=
56
=
47.5
- 36.5
= 43
As 47.5 > 43, reject Hi; 0.01 < P < 0.025
[P
Consequently, it is concluded that college-student students who had typing training in high school.
=
0.012] typing speed is greater
for
where N = nl + na, as used earlier. Thus, if a V, or a V', is calculated from data where either nl or n2 is greater than that in Appendix Table B.11, its significance can be determined by computing Z = V - flU (8.53) (J'U
168
Chapter 8
Two-Sample Hypotheses
or, using a correction for continuity, by
z;
=
,-I
V_------'---fL--"-v--'--1 __
0_.5
(8.54)
The continuity correction is included to account for the fact that Z is a continuous distribution, but V is a discrete distribution. However, it appears to be advisable only if the two-tailed P is about 0.05 or greater (as seen from an expansion of the presentation of Lehmann, 1975: 17). Recalling that the t distribution with v = 00 is identical to the normal distribution, the critical value, Za, is equal to the critical value, ta,oo. The normal approximation is demonstrated in Example 8.13. When using the normal approximation for two-tailed testing, only V or V' (not both) need be calculated. If Viis computed instead of U, then Viis simply substituted for V in Equation 8.53 or 8.54, the rest of the testing procedure remaining the same. EXAMPLE 8.13 The Normal Approximation to a One-Tailed Mann-Whitney Test to Determine Whether Animals Raised on a Dietary Supplement Reach a Greater Body Weight Than Those Raised on an Unsupplemented Diet In the experiment, 22 animals (group 1) were raised on the supplemented diet, and 46 were raised on the unsupplemented diet (group 2). The body weights were ranked from 1 (for the smallest weight) to 68 (for the largest weight), and V was calculated to be 282.
Ho: Body weight of animals on the supplemented
diet are not greater than those
on the un supplemented diet. Body weight of animals on the supplemented the unsupplemented diet.
diet are greater than those on
HA:
nI
= 22, nz = 46, N = 68
V
=
282 V' = nin;
fLV
=
= (22)(46) 2
nln2
2 =
if
u
V = (22)( 46) - 282 = 1012 - 282 = 730
+ 1) =
/nln2(N
\I
= 506
12
Z = V' -
fLV
oo
For a one-tailed test at
ex
/(22)(46)(68
\I
+ 1) = 76.28
12
= 224 = 2.94 76.28
= 0.05, to.05(
As Z = 2.94 > 1.6449, reject H«. So we conclude that the supplemental
1),00
=
ZO.05( I)
= 1.6449.
[P = 0.0016] diet results in greater body weight.
One-tailed testing may also be performed using the normal approximation. Here one computes either V or V', in accordance with Table 8.2, and uses it in either Equation 8.55 or 8.56, respectively, inserting the correction term (-0.5) if P is about
Section 8.11
Two-Sample Rank Testing
169
0.025 or greater: Zc
=
V -
!-LV
-
0.5 , I'f V IS . use d,or
(8.55)
0.5 , Iif V' IS . use.d
(8.56)
(TV
Z c -- V' -
!-LV
-
(TV
The resultant Z; is then compared to the one-tailed critical value, Za( I), or, equivalently, taCI ),00; and if Z ~ the critical value, then Ho is rejected.* If tied ranks exist and the normal approximation is utilized, the computations are slightly modified as follows. One should calculate the quantity (8.57) where Ii is the number of ties in a group of tied values, and the summation is performed over all groups of ties. Then, (8.58) and this value is used in place of that from Equation 8.52. (The computation of ~ tis demonstrated, in a similar context, in Example 10.11.) The normal approximation is best for a( 2) = 0.10 or 0.05 [or for a( 1) = 0.05 or 0.025] and is also good for a(2) = 0.20 or 0.02 [or for a(1) = 0.10 or 0.01]' with the approximation improving as sample sizes increase; for more extreme significance levels it is not as reliable, especially if nl and n2 are dissimilar. Fahoome (2002) determined that the normal approximation (Equation 8.53) performed well at the two-tailed 0.05 level of significance (i.e., the probability of a Type I error was between 0.045 and 0.055) for sample sizes as small as 15, and at a(2) = 0.01 (for P(Type I error) between 0.009 and 0.011) for n] and n2 of at least 29. Indeed, in many cases with even smaller sample sizes, the normal approximation also yields Type I error probabilities very close to the exact probabilities of V obtained from specialized computer software (especially if there are few or no ties)." Further observations on the accuracy of this approximation are given at the end of Appendix Table B.l1. Buckle, Kraft, and van Eeden (1969) propose another distribution, which they refer to as the "uniform approximation." They show it to be more accurate for n] n2, especially when the difference between n] and n2 is great, and especially for small a. Fix and Hodges (1955) describe an approximation to the Mann-Whitney distribution that is much more accurate than the normal approximation but requires very involved computation. Hodges, Ramsey, and Wechsler (1990) presented a simpler method for a modified normal approximation that provides very good results for probabilities of about 0.001 or greater. Also, the two-sample t test may be applied to the ranks of the data (what is known as using the rank transformation of the data), with the probability of the resultant t approaching the exact probability for very large n. But these procedures do not appear to he generally preferable to the normal approximation described above, at least for the probabilities most often of interest.
'*
"By this procedure, Z must be positive in order to reject H(). If it is negative, then of Ho being true is.? > 0.50. + As a demonstration of this, in Example 8.11 the exact probahility is 0.018 and by the normal approximation is 0.019; and for Example 8.12, the exact probahility approximation are both 0.012. ln Exercise R.12. P for U is 0.53 and P for Z is 0.52; 8.13, P for U is 0.41 and P for Z; is 0.41.
the probability the probability and the normal and in Exercise
170
Chapter 8
Two-Sample Hypotheses
(e) The Mann-Whitney Test with Ordinal Data. The Marin-Whitney test may also be used for ordinal data. Example 8.14 demonstrates this procedure. In this example, 25 undergraduate students were enrolled in an invertebrate zoology course. Each student was guided through the course by one of two teaching assistants. but the same examinations and grading criteria were applied to all students. On the basis of the students' final grades in the course, we wish to test the null hypothesis that students (students in general, not just these 25) perform equally well under both teaching assistants. The variable measured (i.e., the final grade) results in ordinal data, and the hypothesis is amenable to examination by the Mann-Whitney test.
d
-,
(y, I)
s'J Therefore. we do not use the original measurements for the two samples. but only the difference within each 2air of measurements, One deals. then. with a sample of dj values. whose mean is d and whose variance. standard deviation. and standard error arc denoted as s~/' .\/. and ,\'il respectively, Thus. the p{{ireq-smllp/e ( test. as this procedure may he called. is essentially a one-sample (test. analogous to that described 179
180
Chapter 9
Paired-Sample Hypotheses EXAMPLE 9.1
0' =
The Two-Tailed
He:
/Ld = 0
HA:
/Lei
Paired-Sample
t Test
*0
0.05
Deer
Hindleg length (em)
Foreleg length (em)
(j)
(X1j)
(X2j)
1 2 3 4 5 6 7
142 140 144 144 142 146 149 150 142 148
8 9 10
d
s~
s{[ =
9.3444 crrr'
v=n-l=9
t
= 2.262 0.005
=
Xlj
-
X2j)
4 4
-3 5 -1 5 6 5 6
2
3.3 em
=
0.97 em
= -d = -3.3 = 3.402 s{[
t005(2).9
(dj
138 136 147 139 143 141 143 145 136 146
n = 10 =
Difference (em)
Therefore,
0.97 reject Hi;
< P( It I ;::: 3.402)
< 0.01
[P = 0.008]
in Sections 7.1 and 7.2. In the paired-sample t test, n is the number of differences (i.e., the number of pairs of data), and the degrees of freedom are u = n - 1. Note that the hypotheses used in Example 9.1 are special cases of the general hypotheses Ho: /Lei = /LO and H/\: /Lei * /LO, where /LO is usually, but not always, zero. For one-tailed hypotheses with paired samples, one can test either He: /Ld ;::: /La and H/\: /Ld < /LO, or He: /Lei :::; /LO and HA: /Ld > /LO, depending on the question to be asked. Example 9.2 presents data from an experiment designed to test whether a new fertilizer results in an increase of more than 250 kg/ha in crop yield over the old fertilizer. For testing this hypothesis, 18 test plots of the crop were set up. It is probably unlikely to find 18 field plots having exactly the same conditions of soil, moisture, wind, and so on, but it should be possible to set up two plots with similar environmental conditions. If so, then the experimenter would be wise to set up nine pairs of plots, applying the new fertilizer .randomly to one plot of each pair and the old fertilizer to the other plot of that pair. As Example 9.2 shows, the statistical hypotheses to be tested are H«: /Ld :::; 250 kg/ha and H A: /Ld > 250 kg/ha. Paired-sample t-testing assumes that each datum in one sample is associated with one, but only one, datum in the other sample. So, in the last example, each yield using
Section 9.1 EXAMPLE 9.2
Ho: H A: a
=
Testing Mean Difference between Paired Samples
A One-Tailed
J.Ld :::; J.Ld
181
t Test
Paired-Sample
250 kg/ha
> 250 kg/ha
0.05
Crop Yield (kg/ha) Plot
With new fertilizer
(j) I 2
3 4 5 6
7 8 9
n
=
s~,
=
With oldfertilizer
(Xli)
(X2;)
2250 2410 2260 2200 2360 2320 2240 2300 2090
1920 2020 2060 1960 1960 2140 19XO 1940 1790
d =
9 6502.78 (kg/ha}'
sJ
=
t =
v=n-I=X
330 3l)0 200 240 400 IXO 260 360 300
295.6 kgjha 26.9 kgjha d -
250
=
1.695
sJ to.1l5( I ).X =
Therefore,
I.R60 0.05
do not reject Ho.
< P < 0.10
[P
=
0.0641
new fertilizer is paired with only one yield using old fertilizer: and it would have been inappropriate to have some tracts of land large enough to collect two or more crop yields using each of the fertilizers. The paired-sample t test does not have the normality and equality of varianoes assumptions of the two-sample t test, but it does assume that the differences. d., come from a normally distributed population of differences. If a nonnormal distribution of differences is doubted, the non parametric test of Section 9.5 should be considered. If there is. in fact. pairwise association of data from the two samples, then analysis by the two-sample t test will often be less powerful than if the paired-sample t test was employed, and the two-sample test will not have a probability of a Type I error equal to the specified significance level. a. It appears that the latter probability will be increasingly less than a for increasingly large correlation between the pairwise data (and, in the less common situation where there is a negative correlation between the data, the probability will be greater than a): and only a small relationship is needed
182
Chapter 9
Paired-Sample Hypotheses to make the paired-sample test advantageous (Hines, 1996; Pollak and Cohen, 1981; Zimmerman, 1997). If the data from Example 9.1 were subjected (inappropriately) to the two-sample t test, rather than to the paired-sample t test, a difference would not have been concluded, and a Type II error would have been committed.
9.2
CONFIDENCE LIMITS FOR THE POPULATION MEAN DIFFERENCE
In paired-sample testing we deal with a sample of differences, dj, so confidence limits for the mean of a population of differences, fLd, may be determined as in Section 7.3. In the manner of Equation 7.6, the 1 - ex confidence interval for fLd is
(9.2) For example, interval for fLd confidence limits Furthermore, 1 - ex confident 9.3
for the data in Example 9.1, we can compute the 95 % confidence to be 3.3 cm ± (2.262)(0.97 cm ) = 3.3 em ± 2.2 ern; the 95% are L] = 1.1 cm and L2 = 5.5 ern. we may ask, as in Section 7.5, how large a sample is required to be in estimating fLd to within ± d (using Equation 7.7).
POWER, DETECTABLE DIFFERENCE AND SAMPLE SIZE IN PAIRED-SAMPLE TESTING OF MEANS By considering the paired-sample test to be a one-sample t test for a sample differences, we may employ the procedures of Section 7.7 to acquire estimates required sample size (n), minimum detectable difference (8), and power (1 using Equations 7.10, 7.11, and 7.12, respectively.
9.4
of of
m,
TESTING FOR DIFFERENCE BETWEEN VARIANCES FROM TWO CORRELATED POPULATIONS The tests of Section 8.5 address hypotheses comparing O'T to O'~ when the two samples of data are independent. For example, if we wanted to compare the variance of the lengths of deer forelegs with the variance of deer hindlegs, we could measure a sample of foreleg lengths of several deer and a sample of hindleg lengths from a different group of deer. As these are independent samples, the variance of the foreleg sample could be compared to the variance of the hindleg sample by the procedures of Section 8.5. However, just as the paired-sample comparison of means is more powerful than independent-sample comparison of means when the data are paired (i.e., when there is an association between each member of one sample and a member of the other sampIe), there is a variance-comparison test more powerful than those of Section 8.5 if the data are paired (as they arc in Example 9.1). This test takes into account the amount of association between the members of the pairs of data, as presented by Snedecor and Cochran (1989: 192-193) based upon a procedure of Pitman (1939). We compute: (9.3) Here, n is the sample size common to both samples, r is the correlation coefficient described in Section 19.1 (Equation 19.1), and the degrees of freedom associated with t arc v = n - 2. For a two-tailed test (Ho: O'T = O'~ vs. HA: O'T i= O'~), either F = sTls~ or F = s~/sT may be used, as indicated in Equation 8.29, and H« is rejected if It I 2 ta(2),l" This is demonstrated in Example 9.3. For the one-tailed hypotheses,
Ho:
O'T :::;(T~ versus = s~/sT; and
use F
HA:
(TT
>
a one-tailed
(T~, use
F =
test rejects
sTls~;
for Ho:
Hi, if t
O'T 2 O'~ versus
2 ta(]),l"
HA:
O'T
T_. [man (1974a) presents an approximation based on Student's t: t
T
=
In2~n
\j
+
1)(2n
2( n -
1)
}LT
+ 1) _
(9.15) (T
-
Wr)2'
n -
1
with n - 1 degrees of freedom. As shown at the end of Appendix Table B.12, this performs slightly better than the normal approximation (Equation 9.10). The test with a correction for continuity is performed by subtracting 0.5 from IT - J.LTI in both the numerator and denominator of Equation 9.15. This improves the test for er( 2) from 0.001 to 0.10, but the uncorrected t is better for er( 2) from 0.20 to 0.50. One-tailed t-testing is effected in a fashion similar to that described for Z in the preceding paragraph. * Fellingham and Stoker (1964) discuss a more accurate approximation, but it requires more computation, and for sample sizes beyond those in Table B.12 the increased accuracy is of no great consequence. (c) The Wilcoxon Paired-Sample Test for Ordinal Data. The Wilcoxon test nonparametrically examines differences between paired samples when the samples consist of interval-scale or ratio-scale data (such as in Example 9.4), which is legitimate because the paired differences can be meaningfully ordered. However, it may not work well with samples comprising ordinal-scale data because the differences between ordinal scores may not have a meaningful ordinal relationship to each other. For example, each of several frogs could have the intensity of its green skin color recorded on a scale of 1 (very pale green) to 10 (very deep green). Those data would represent an ordinal scale of measurement because a score of 10 indicates a more intense green than a score of 9, a 9 represents an intensity greater than an 8, and so on. Then the skin-color "When Appendix Table 8.12 cannot be used, a slightly improved approximation is effected comparing the mean of 1 and Z to the mean of the critical values of 1 and Z (Irnan, 1974a).
by
188
Chapter 9
Paired-Sample Hypotheses
intensity could be recorded for these frogs after they were administered hormones for a period of time, and those data would also be ordinal. However, the differences between skin-color intensities before and after the hormonal treatment would not necessarily be ordinal data because, for example, the difference between a score of 5 and a score of 2 (a difference of 3) cannot be said to represent a difference in skin color that is greater than the difference between a score of 10 and a score of 8 (a difference of 2). To deal with such a situation, Kornbrot (1990) presented a modification of the Wilcoxon paired-sample test (which she called the "rank difference test"), along with tables to determine statistical significance of its results. (d) Wilcoxon Paired-Sample Test Hypotheses about a Specified Difference Other Than Zero. As indicated in Section 9.1, the paired-sample t test can be used for hypotheses proposing that the mean difference is something other than zero. Similarly, the Wilcoxon paired-sample test can examine whether paired differences are centered around a quantity other than zero. Thus, for data such as in Example 9.2, the nonparametric hypotheses could be stated as He: crop yield does not increase more than 250 kg/ha with the new fertilizer, versus HA : crop yield increases more than 250 kg/ha with the new fertilizer. In that case, each datum for the old-fertilizer treatment would be increased by 250 kg/ha (resulting in nine data of 1920, 2020, 2060 kg/ha, etc.) to be paired with the nine new-fertilizer data of 2250,2410,2260 kg/ha, and so on. Then, the Wilcoxon paired-sample test would be performed on those nine pairs of data. With ratio- or interval-scale data, it is also possible to propose hypotheses considering a multiplication, rather than an addition, constant. This concept is introduced at the end of Section 8.11f. 9.6
CONFIDENCE LIMITS FOR THE POPULATION
MEDIAN
DIFFERENCE
In Section 9.2, confidence limits were obtained for the mean of a population of differences. Given a population of differences, one can also determine confidence limits for the population median. This is done exactly as indicated in Section 7.10; simply consider the observed differences between members of pairs (dj) as a sample from a population of such differences. EXERCISES 9.1. Concentrations of nitrogen oxides carbons (recorded in JLg/m3) were a certain urban area. (a) Test the hypothesis that both pollutants were present in the tration. Day
and of hydrodetermined in classes of air same concen-
Nitrogen oxides
Hydroca rhons
6 7 8 9 10
104 116 84 77 61 84 81 72 61 97
11
84
108 118 89 71 66 83 88 76 68 96 81
1 2 3 4 5
(b) Calculate the 95% confidence interval for JLd. 9.2. Using the data of Exercise 9.1,test the appropriate hypotheses with Wilcoxon's paired-sample test. 9.3. Using the data of Exercise 9.1,test for equality of the variances of the two kinds of air pollutants.
C HAP
T E R
10
Multisample Hypotheses and the Analysis of Variance 10.1 SINGLE-FACTOR ANALYSIS OF VARIANCE 10.2 CONFIDENCE LIMITS FOR POPULATION MEANS 10.3 SAMPLE SIZE, DETECTABLE DIFFERENCE, AND POWER 10.4 NONPARAMETRIC ANALYSIS OF VARIANCE 10.5 TESTING FOR DIFFERENCE AMONG SEVERAL MEDIANS 10.6 HOMOGENEITY OF VARIANCES 10.7 HOMOGENEITY OF COEFFICIENTS OF VARIATION 10.8 CODING DATA 10.9 MULTISAMPLE TESTING FOR NOMINAL-SCALE
DATA
When measurements of a variable are obtained for each of two independently collected samples, hypotheses such as those described in Chapter 8 are appropriate. However, biologists often obtain data in the form of three or more samples, which are from three or more populations, a situation calling for multisarnplc analyses, as introduced in this chapter. It is tempting to some to test multisarnple hypotheses by applying two-sample tests to all possible pairs of samples. In this manner, for example, one might proceed to test the null hypothesis H(): fLI = fL2 = fL3 by testing each of the following hypotheses by the two-sample t test: 110: fLI = fL2, H(): fLI = fL3. Ho: fL2 = fL3. But such a procedure, employing a series of two-sample tests to address a multisample hypothesis, is invalid. The calculated lest statistic, t, and the critical values we find in the t table are designed to test whether the two sample statistics, XI and X2, are likely to have come from the same population (or from two populations with identical means). In properly employing the two-sample test, we could randomly draw two sample means from the same population and wrongly conclude that they are estimates of two different populations' means; but we know that the probability of this error (the Type I error) will he no greater than a. However, consider that three random samples were taken from a single population. In performing the three possible two-sample t tests indicated above, with a = 0.05, the probability of wrongly concluding that two of the means estimate different parameters is 14'7'0,considerably greater than a. Similarly, if a is set at 5% and four means are tested, two at a time, by the two-sample t test, there arc six pairwise 110 's to be tested in this fashion, and there is a 26% chance of wrongly concluding a difference between one or more of the means. Why is this? For each two-sample t test performed at the 5% level ofsignificanee, there is a 95°1 a], there will generally be more power than if the variances were equal. If the sample sizes are all equal, nonnormality generally affects the power of the analysis of variance to only a small extent (Clinch and Keselman, 1982; Glass, Peckham, and Sanders, 1972; Harwell et al., 1992; Tan, 1982), and the effect decreases with increased n (Donaldson, 1968). However, extreme skewness or kurtosis can severely alter (and reduce) the power (Games and Lucas, 1966), and non normal kurtosis generally has a more adverse effect than skewness (Sahai and Ageel, 2000: 85). With small samples, for example, very pronounced platykurtosis in the sampled populations will decrease the test's power, and strong Ieptokurtosis will increase it (Glass, Peckham, and Sanders, 1972). When sample sizes are not equal, the power is much reduced, especially when the large samples have small means (Boehnke, 1984). The robustness of random-effects (i.e .. Model II) analysis of variance (Section 10.If) has not been studied as much as that of the fixed-effects (Model I) ANOV A. However, the test appears to be robust to departures of normality within the k populations, though not as robust as Model I ANOV A (Sahai, 2000: 86), provided the k groups (levels) of data can be considered to have been selected at random from all possible groups and that the effect of each group on the variable can be considered to be from a normally distributed set of group effects. 
When the procedure is nonrobust, power appears to be affected more than the probability of a Type I error; and the lack of robustness is not very different if sample sizes are equal or unequal (Tan, 1982; Tan and Wong, 1980). The Model II analysis of variance (as is the case with the Model I ANOV A) also assumes that the k sampled populations have equal variances. (h) Testing of Multiple Means when Variances Are Unequal. Although testing hypotheses about means via analysis of variance is tolerant to small departures from the assumption of variance homogeneity when the sample sizes are equal. it can yield
Section 10.1
Single-Factor Analysis of Variance
203
very misleading results in the presence of more serious heterogeneity of variances and/or unequal sample sizes. Having unequal variances represents a multisample Behrens-Fisher problem (i.e., an extension of the two-sample Behrens-Fisher situation discussed in Section S.lc). Several approaches to this analysis have been proposed (e.g., see Keselman et aI., 2000; Lix, Keselman, and Keselman, 1996). A very good one is that described by Welch (1951), which employs k
2: Ci(Xi F'
-
Xw)2
i=1
=
1) [1 +
(k
k2 -
(10.22)
2)]'
2A (k -
1
where (10.23)
(10.24)
(10.25)
A =
2: (1 k
i=1
- cjC)2 I,
where
Vi
=
ru -
1
(10.26)
Vi
and F' is associated with degrees of freedom of k2 -
VI
1
3A
=
k -
1 and (10.27)
which should be rounded to the next lower integer when using Appendix Table B.4. This procedure is demonstrated in Example 10.3. A modified ANOYA advanced by Brown and Forsythe (1974a, b) also works well: F"
=
groups SS B
(10.28)
EXAMPLE 10.3 Welch's Test tor an Analysis-at-Variance Experimental Design with Dissimilar Group Variances The potassium content (mg of potassium per 100 mg of plant tissue) was measured in five seedlings of each of three varieties of wheat.
Ho: HA: a
= 0.05
ILl
=
IL2
=
IL3·
The mean potassium content is not the same for seedlings of all three wheat varieties.
204
Chapter 10
Multisample Hypotheses and the Analysis of Variance
Variety G
Variety A
Variety L
27.9 27.0 26.0 26.5 27.0 27.5
24.2 24.7 25.6 26.0 27.4 26.1
29.1 27.7 29.9 30.7 28.8 31.1
1
2
3
ni
6
6
6
Vi
5
5
5
Xi
26.98
25.67
29.55
s2
0.4617
1.2787
1.6070
12.9955
4.6923
3.7337
C
= ~ c, = 21.4215
350.6186
120.4513
110.3308
~
CiXi
0.0309
0.1220
0.1364
I
c,
=
ni/s7
CiXi
(1
- ~y
A ~
= 581.4007
L (1
Vi
=
= 0.2893
Vi ~CiXi
Xw
- ~y
i
c F'
581.4007 21.4215
=
= 27.14
"'V c.i X, - X w)2 = --~~~~--~~~~~ (k _ + 2A (k -
1) [1
k2 -
12.9955(26.98
- 27.14)2
(3 - 1) [1
2)]
1
+ 4.6923(25.67 - 27.14)2 + 3.7337(29.55 - 27.14)
+ 2(0.2893)(3 32
-
- 2)] 1
= 0.3327 + 10.1396 + 21.6857 = 32.4144 = 17.5 2(0.9268)
1.8536
For critical value of F: III
= k - 1= 3 - 1= 2 k2 -
V2 =
1
=
3A By harmonic interpolation FO.05( I ),2.9.22=
2 3 - 1 = _8_ = 9.22 3( 0.2893) 0.8679
in Appendix Table B.4 or by computer program:
4.22. So, reject
Ho.
0.0005 < P < 0.001
[P
= 0.0073]
Section 10.1
where
Single-Factor Analysis of Variance
( l--.!.sn.)
b=
N
1
2 I'
205
(10.28a)
and k
B
=
2. h.
(10.29)
i=l
F" has degrees of freedom of
VI
= k - 1 and B2
V2
= -----;;--?
2. bi
(10.30)
i=l Vi
If k = 2, both the F' and F" procedures are equivalent to the t' test. The Welch method (F') has been shown (by Brown and Forsythe, 1974a; BOning, 1997; Dijkstra and Werter, 1981; Harwell et al., 1992; Kohr and Games, 1974; Levy, 1978a; Lix, Keselman, and Keselman, 1996) to generally perform better than F or F" when population variances are unequal, especially when n/s are equal. However, the Welch test is liberal if the data come from highly skewed distributions (Clinch and Keselman, 1992; Lix, Keselman, and Keselman, 1996). Browne and Forsythe (1974a) reported that when variances are equal, the power of F is a little greater than the power of F', and that of F' is a little less than that of F". But if variances are not equal, F' has greater power than F" in cases where extremely low and high means are associated with low variances, and the power of F" is greater than that of F' when extreme means are associated with large variances. Also, in general, F' and F" are good if all n; 2 10 and F' is reasonably good if all n, 2 5. (i) Which Multisample Test to Use. As with all research reports, the reader should be informed of explicitly what procedures were used for any statistical analysis. And, when results involve the examination of means, that reporting should include the size (n), mean (X), and variability (e.g., standard deviation or standard error) of each sample. If the samples came from populations having close to normal distributions, then presentation of each sample's confidence limits (Section 10.2) might also be included. Additional interpretation of the results could include displaying the means and measures of variability via tables or graphs such as those described in Section 7.4. Although it not possible to generalize to all possible situations that might be encountered, the major approaches to comparing the means of k samples, where k is more than two, are as follows: • If the k sampled populations are normally distributed and have identical variances (or if they are only slightly to moderately non normal and have similar variances): The analysis of variance, using F, is appropriate and preferable to test for difference among the means. (However, samples nearly always come from distributions that are not exactly normal with exactly the same variances, so conclusions to reject or not reject a null hypothesis should not be considered definitive when the probability associated with F is very near the a specified for the hypothesis test; in such a situation the statistical conclusion should be expressed with some caution and, if feasible, the experiment should be repeated (perhaps with more data).
206
Chapter 10
Multisample Hypotheses and the Analysis of Variance
• If the k sampled populations are distributed normally (or are only slightly to moderately nonnormal), but they have very dissimilar variances: The Behrens-Fisher testing of Section 10.1h is appropriate and preferable to compare the k means. If extremely high and low means are associated with small variances, F' is preferable; but if extreme means are associated with large variances, then F" works better. • If the k sampled populations are very different from normally distributed, but they have similar distributions and variances: The Kruskal-Wallis test of Section 10.4 is appropriate and preferable. • If the k sampled populations have distributions greatly different from normal and do not have similar distributions and variances: (1) Consider the procedures of Chapter 13 for data that do not exhibit normality and variance equality but that can be transformed into data that are normal and homogeneous of variance; or (2) report the mean and variability for each of the k samples, perhaps also presenting them in tables and/or graphs (as in Section 7.4), but do not perform hypothesis testing. (j) Outliers. A small number of data that are much more extreme than the rest of the measurements are called outliers (introduced in Section 2.5), and they may cause a sample to depart seriously from the assumptions of normality and variance equality. If, in the experiment of Example 10.1, a pig weight of 652 kg, or 7.12 kg, or 149 kg was reported, the researcher would likely suspect an error. Perhaps the first two of these measurements were the result of the careless reporting of weights of 65.2 kg and 71.2 kg, respectively; and perhaps the third was a weight measured in pounds and incorrectly reported as kilograms. If there is a convincingly explained error such as this, then an offending datum might be readily corrected. Or, if it is believed that a greatly disparate datum is the result of erroneous data collection (e.g., an errant technician, a contaminated reagent, or an instrumentation malfunction), then it might be discarded or replaced. In other cases outliers might be valid data, and their presence may indicate that one should not employ statistical analyses that require population normality and variance equality. There are statistical methods that are sometimes used to detect outliers, some of which are discussed by Barnett and Lewis (1994: Chapter 6), Snedecor and Cochran (1989: 280-281), and Thode (2002: Chapter 6). True outliers typically will have little or no influence on analyses employing non parametric two-sample tests (Sections 8.11 and 8.12) or muItisample tests (Section 10.4). 10.2
CONFIDENCE LIMITS FOR POPULATION MEANS
When k > 2, confidence limits for each of the k population means may be computed in a fashion analogous to that for the case where k = 2 (Section 8.2, Equation 8.13), under the same assumptions of normality and homogeneity of variances applicable to the ANOV A. The 1 - CI' confidence interval for /Li is (10.31) where s2 is the error mean square and u is the error degrees of freedom from the analysis of variance. For example, let us consider the 95% confidence interval for /L4
Section 10.3
in Example 10.1. Here, X4
=
Sample Size, Detectable Difference, and Power
63.24 kg, s2
=
9.383 kg2, n4
=
5, and
to.05(2),15
207
2.13I.
= 2
Therefore, the lower 95% confidence limit, LI' is 63.24 kg - 2.131~9.383 kg /5 63.24 kg 3.999 kg = 59.24 kg, and L2 is 63.24 kg + 2.131~9.383 kg2/5 = 63.24 kg + 3.999 kg = 67.24 kg. Computing a confidence interval for J.Li would only be warranted if that population mean was concluded to be different from each other population mean. And calculation of a confidence interval for each of the k J.L'S may be performed only if it is concluded that J.LI # J.L2 # ... # J.Lk. However, the analysis of variance does not enable conclusions as to which population means are different from which. Therefore, we must first perform multiple comparison testing (Chapter 11), after which confidence intervals may be determined for each different population mean. Confidence intervals for differences between means may be calculated as shown in Section 11.2. SAMPLE SIZE, DETECTABLE DIFFERENCE, AND POWER IN ANALYSIS OF VARIANCE
In Section 8.3, dealing with the difference between two means, we saw how to estimate the sample size required to predict a population difference with a specified level of confidence. When dealing with more than two means, we may also wish to determine the sample size necessary to estimate difference between any two population means, and the appropriate procedure will be found in Section 11.2. In Section 8.4, methods were presented for estimating the power of the twosample t test, the minimum sample size required for such a test, and the minimum difference between population means that is detectable by such a test. There are also procedures for analysis-of-variance situations, namely for dealing with more than two means. (The following discussion begins with consideration of Model I ~ fixed-effects model-s-analyses of variance.) If Hi, is true for an analysis of variance, then the variance ratio of Equation 10.18 follows the F distribution, this distribution being characterized by the numerator and denominator degrees of freedom (VI and V2, respectively). If, however, Ho is false, then the ratio of Groups MS to error MS follows instead what is known as the noncentral F distribution, which is defined by VI, V2, and a third quantity known as the noncentrality parameter. As power refers to probabilities of detecting a false null hypothesis, statistical discussions of the power of ANOV A testing depend upon the noncentral F distribution. A number of authors have described procedures for estimating the power of an ANOV A, or the required sample size, or the detectable difference among means (e.g., Bausell and Li, 2002; Cohen, 1988: Ch.8; Tiku, 1967, 1972), but the charts prepared by Pearson and Hartley (1951) provide one of the best of the methods and will be described below. (a) Power of the Test. Prior to performing an experiment and collecting data from it, it is appropriate and desirable to estimate the power of the proposed test. (Indeed, it is possible that on doing so one would conclude that the power likely will be so low that the experiment needs to be run with many more data or with fewer groups or, perhaps, not run at all.) Let us specify that an ANOV A involving k groups will be performed at the a significance level, with n data (i.e., replications) per group. We can then estimate the power of the test if we have an estimate of 0"2, the variability within the k populations (e.g., this estimate typically is s2 from similar experiments, where s2 is the error MS), and an estimate of the variability among the populations. From this information we
208
Chapter 10
Multisample Hypotheses and the Analysis of Variance
may calculate a quantity called cjJ (lowercase Greek phi), which is related to the non centrality parameter. The variability among populations might be expressed in terms of deviations of the k population means, Jki, from the overall mean of all populations, Jk, in which case k
n
2: (Jki
Jk)2
i=1
cjJ=
(10.32)
(e.g., Guenther, 1964: 47; Kirk, 1995: 182). The grand population
mean is
k
2:
Jki
i=1 fL =--
(10.33)
k
if all the samples are the same size. In practice, we employ the best available estimates of these population means. Once cjJ has been obtained, we consult Appendix Figure B.l. This figure consists of several pages, each with a different VI (i.e., groups DF) indicated at the upper left of the graph. Values of cjJ are indicated on the lower axis of the graph for both a = 0.01 and a = 0.05. Each of the curves on a graph is for a different V2 (i.e., error DF), for a = 0.01 or 0.05, identified on the top margin of a graph. After turning to the graph for the V1 at hand, one locates the point at which the calculated cjJ intersects the curve for the given V2 and reads horizontally to either the right or left axis to determine the power of the test. This procedure is demonstrated in Example 10.4.
EXAMPLE 10.4 Estimating the Power of an Analysis of Variance When Variability among Population Means Is Specified A proposed analysis of variance of plant root elongations is to comprise ten roots at each of four chemical treatments. From previous experiments, we estimate (T2 to be 7.5888 mm2 and estimate that two of the population means are 8.0 mrn, one is 9.0 mm, and one is 12.0 mm. What will be the power of the ANOV A if we test at the 0.05 level of significance? k=4 n = 10 VI = k - 1 = 3 V2 = k( n - 1) = 4( 9) = 36 fL = 8.0 + 8.0 + 9.0 + 12.0 4
=
9.25
9.25)2
+ (9.0 - 9.25)2 + (12.0 - 9.25)2]
4(7.5888)
Section 10.3
=
Sample Size, Detectable Difference, and Power
209
/10(10.75)
-V 4(7.5888)
= J3.5414 =
1.88
In Appendix Figure B.1c, we enter the graph for VI = 3 with cp = 1.88, Cl' = 0.05, and V2 = 36 and read a power of about 0.88. Thus, there will be a 12% chance of committing a Type II error in the proposed analysis. An alternative, and common, way to estimate power is to specify the smallest difference we wish to detect between the two most different population means. Calling this minimum detectable difference 8, we compute (10.34) and proceed to consult Appendix Figure B.l as above, and as demonstrated in Example 10.5. This procedure leads us to the statement that the power will be at least that determined from Appendix Figure B.1 (and, indeed, it typically is greater). EXAMPLE 10.5 Estimating the Power of an Analysis of Variance When Minimum Detectable Difference Is Specified For the ANOV A proposed in Example 10.3, we do not estimate the population means, but rather specify that, using ten data per sample, we wish to detect a difference between population means of at least 4.0 mm. k=4 VI
n V2
8
= 3 =
10
= 36 =
4.0mm
s2 = 7.5888 mnr'
/
10(4.0)2
-V 2(4)(7.5888) = J2.6355 =
1.62
In Appendix Figure B.1, we enter the graph for VI = 3 with CP = 1.62, Cl' = 0.05, and V2 = 36 and read a power of about 0.72. That is, there will be a 28% chance of committing a Type II error in the proposed analysis. It can be seen in Appendix Figure B.l that power increases rapidly as cp increases, and Equations 10.32 and 10.34 show that the power is affected in the following ways: • Power is greater for greater differences among group means (as expressed by '2.(/-Li ~ /-L)2 or by the minimum detectable difference, 8) . • Power is greater for larger sample sizes, n, (and it is greater when the sample sizes are equal).
210
Chapter
10
Multisample
Hypotheses
and the Analysis
of Variance
• Power is greater for fewer groups, k. • Power is greater for smaller within-group variability, u2 (as estimated which is the error mean square). • Power is greater for larger significance levels, a.
by s2,
These relationships are further demonstrated in Table 1O.3a (which shows that for a given total number of data, N, power increases with increased 8 and decreases with increased k) and Table 10.3b (in which, for a given sample size, ru, power is greater for larger 8's and is less for larger k's). The desirable power in performing a hypothesis test is arbitrary, just as the significance level (a) is arbitrary. A goal of power between 0.75 and 0.90 is often used, with power of 0.80 being common.
TABLE 10.3a: Estimated
Power of Analysis of Variance Comparison of Means, with k Samples, with Each Sample of Size n, = 20, with k
N =
2:
ni Total Data, and with a Pooled Variance (52) of 2.00, for i=1 Several Different Minimum Detectable Differences (8)
k:
2
3
4
5
6
N:
40
60
80
100
120
1.0 1.2 1.4 1.6
0.59 0.74 0.86 0.94
0.48 0.64 0.78 0.89
0.42 0.58 0.73 0.84
0.38 0.53 0.68 0.81
0.35 0.49 0.64 0.78
1.8 2.0 2.2 2.4
0.97 0.99 >0.99 >0.99
0.95 0.98 0.99 >0.99
0.92 0.97 0.99 >0.99
0.90 0.95 0.98 0.99
0.88 0.94 0.98 0.99
8
The values of power were obtained from UNIST AT (2003: 473-474). TABLE 10.3b: Estimated
Power of Analysis of Variance Comparison of Means, with k Samples, with the k Sample Sizes (ni) Totaling k
N =
2: n, =
60 Data, and with a Pooled Variance (52) of 2.00, for i=1 Several Different Minimum Detectable Differences (8)
8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4
k:
2
3
4
5
6
n. :
30
20
15
12
10
0.77 0.90 0.96 0.99
0.48 0.64 0.78 0.89
0.32 0.44 0.58 0.71
0.23 0.32 0.43 0.54
0.17 0.24 0.32 0.42
G.95 0.98 0.99 >0.99
0.82 0.90 0.95 0.98
0.66 0.76 0.85 0.91
0.52 0.63 0.72 0.81
>0.99 >0.99 >0.99 >0.99
The values of power were obtained from UNISTAT (2003: 473-474).
Section 10.3
Sample Size, Detectable Difference, and Power
211
Estimating the power of a proposed ANOV A may effect considerable savings in time, effort, and expense. For example, such an estimation might conclude that the power is so very low that the experiment, as planned, ought not to be performed. The proposed experimental design might be revised, perhaps by increasing n, or decreasing k, so as to render the results more likely to be conclusive. One may also strive to increase power by decreasing s2, which may be possible by using experimental subjects that are more homogeneous. For instance, if the 19 pigs in Example 10.1 were not all of the same age and breed and not all maintained at the same temperature, there might well be more weight variability within the four dietary groups than if all 19 were the same in all respects except diet. As noted for one-sample (Section 7.7) and two-sample (Section 8.4) testing, calculations of power (and of minimum required sample size and minimum detectable difference) and estimates apply to future samples, not to the samples already subjected to the ANOV A. There are both theoretical and practical reasons for this (Hoenig and Heisey, 2001). (b) Sample Size Required. Prior to performing an analysis of variance, we might ask how many data need to be obtained in order to achieve a desired power. We can specify the power with which we wish to detect a particular difference (say, a difference of biological significance) among the population means and then ask how large the sample from each population must be. This is done, with Equation 10.34, by iteration (i.e., by making an initial guess and repeatedly refining that estimate), as shown in Example 10.6. How well Equation 10.34 performs depends upon how good an estimate s2 is of the population variance common to all groups. As the excellence of s2 as an estimate improves with increased sample size, one should strive to calculate this statistic from a sample with a size that is not a very small fraction of the n estimated from Equation 10.34.
EXAMPLE10.6 ysis of Variance
Estimation of Required Sample Size for a One-Way Anal-
Let us propose an experiment such as that described in Example 10.1. How many replicate data should be collected in each of the four samples so as to have an 80% probability of detecting a difference between population means as small as 3.5 kg, testing at the 0.05 level of significance? In this situation, k = 4, VI = k - 1 = 3, 8 = 3.5 kg, and we shall assume (from the previous experiment in Example 10.1) that s2 = 9.383 kg2 is a good estimate of cr2. We could begin by guessing that n = 15 is required. Then, V2 = 4( 15 - 1) = 56, and by Equation 10.34, cjJ
=
I n8
2
\j
=
2ks2
\j
I
15(3.5)2 2( 4 )(9.383)
= 1.56.
Consulting Appendix Figure B.1, the power for the above VI, V2, a, and cjJ is approximately 0.73. This is a lower power than we desire, so we guess again with a larger n, say n = 20: cjJ
=
\j
I
20(3.5)2 2(4)(9.383)
= 1.81.
212
Chapter 10
Multisample
Hypotheses and the Analysis of Variance
Appendix Figure B1 indicates that this cp, for V2 = 4(20 - 1) = 76, is associated with a power of about 0.84. This power is somewhat higher than we specified, so we could recalculate power using n = 18: cp =
/ 18(3.5)2 \) 2(4)(9.383)
= 1.71
and, for V2 = 4( 18 - 1) = 68, Appendix Figure B.1 indicates a power slightly above 0.80. Thus, we have estimated that using sample sizes of at least 18 will result in an ANOV A of about 80% for the described experiment. (It will be seen that the use of Appendix Figure B.1 allows only approximate determinations of power; therefore, we may feel more comfortable in specifying that n should be at least 19 for each of the four samples.)
(c) Minimum Detectable Difference. If we specify the significance level and sample size for an ANOV A and the power that we desire the test to have, and if we have an estimate of 0'2, then we can ask what the smallest detectable difference between population means will be. This is sometimes called the "effect size." By entering on Appendix Figure B.1 the specified a, VI, and power, we can read a value of cp on the bottom axis. Then, by rearrangement of Equation 10.34, the minimum detectable difference is (10.35) Example 10.7 demonstrates
this estimation procedure.
EXAMPLE 10.7 Estimation of Minimum Detectable Difference in a OneWay Analysis of Variance In an experiment similar to that in Example 10.1, assuming that s/ = 9.3833 (kg)2 is a good estimate of 0'2, how small a difference between /L'S can we have 90% confidence of detecting if n = 10 and a = 0.05 are used? Ask = 4andn = 10,v2 = 4(10 - 1) = 36. For VI = 3,V2 = 36,1 - f3 = 0.90, and a = 0.05, Appendix Figure B.l c gives a cp of about 2.0, from which we compute an estimate of 8
/2ks2cp2
=
\)
n
=
\)
/2(4)(9.3833)(2.0)2 10
=
5.5 kg.
(d) Maximum Number of Groups Testable. For a given a, n, 8, and 0'2, power will decrease as k increases. It may occur that the total number of observations, N, will be limited, and for given ANOV A specifications the number of experimental groups, k, may have to be limited. As Example 10.8 illustrates, the maximum k can be determined by trial-and-error estimation of power, using Equation 10.34.
Section 10.3
Sample Size, Detectable Difference, and Power
213
EXAMPLE 10.8 Determination of Maximum Number of Groups to be Used in a One-Way Analysis of Variance Consider an experiment such as that in Example 10.1. Perhaps we have six feeds that might be tested, but we have only space and equipment to examine a total of 50 pigs. Let us specify that we wish to test, with a = 0.05 and {3::; 0.20 (i.e., power of at least 80%), and to detect a difference as small as 4.5 kg between population means. Ifk = 6wereused,thenn = 50/6 = 8.3 (call it Sj.v. = 5,V2 = 6(8 - 1) = 42, and (by Equation 10.34) cJ>
=
\j
I
= 1 20
(8)(4.5)2 2(6)(9.3833)
.,
for which Appendix Figure B.1e indicates a power of about 0.55. lfk = 5 were used,n = 50/5 = to,vl = 4,V2 = 5(10 - 1) = 45, and cJ>
=
\j
I
= 1.47,
(10)(4.5)2 2(5)(9.3833)
for which Appendix Figure B.ld indicates a power of about 0.70. If k = 4 were used, n = 50/4 = 12.5 (call it 12), VI = 3,V2 = 4(12 - 1) = 44, and cJ> =
\j
I
(12)(4.5)2 2( 4)( 9.3833)
=
1.80,
for which Appendix Figure B.lc indicates a power of about 0.84. Therefore, we conclude that no more than four of the feeds should be tested in an analysis of variance if we are limited to a total of 50 experimental pigs.
(e) Random-Effects Analysis of Variance. If the analysis of variance is a randomeffects model (described in Section 10.1f), the power, 1 - {3, may be determined from (10.36) (after Scheffe, 1959: 227; Winer, Brown, and Michels, 1979: 246). This is shown in Example 10.9. As with the fixed-effects ANOV A, power is greater with larger n, larger differences among groups, larger a, and smaller .'12. EXAMPLE 10.9 Estimating the Power of the Random-Effects Analysis of Variance of Example 10.2 Groups MS = 3.00: 52 Fa(I),v\.V2
=
=
FO.05(1).3,16
1.25; =
VI
3.24
=
3, V2 = 16
214
Chapter 10
Multisample
Hypotheses and the Analysis of Variance
(V2
-
2)(groups
(16)(1.25)(3.24) (14)(3.00)
MS)
= 1.54
By consulting Appendix Table B.4, it is seen that an F of 1.54, with degrees of freedom of 3 and 16, is associated with a one-tailed probability between 0.10 and 0.25. (The exact probability is 0.24.) This probability is the power. To determine required sample size in a random-effects analysis, one can specify values of (Y, groups MS, s2, and k. Then, VI = k - 1 and V2 = k( n - 1); and, by iterative trial and error, one can apply Equation 10.36 until the desired power (namely, 1 - (3) is obtained. 10.4
NONPARAMETRIC
ANALYSIS OF VARIANCE
If a set of data is collected according to a completely randomized design where k > 2, it is possible to test nonparametrically for difference among groups. This may be done by the Kruskal-Wallis test" (Kruskal and Wallis, 1952), often called an "analysis of variance by ranks. "t This test may be used in any situation where the parametric single-factor ANOV A (using F) of Section 10.1 is applicable, and it will be 3/7T (i.e., 95.5%) as powerful as the latter; and in other situations its power, relative to F, is never less than 86.4 % (Andrews, 1954; Conover 1999: 297). It may also be employed in instances where the latter is not applicable, in which case it may in fact be the more powerful test. The nonparametric analysis is especially desirable when the k samples do not come from normal populations (Kesel man, Rogan, and Feir-Walsh, 1977; Krutchkoff, 1998). It also performs acceptably if the populations have no more than slightly different dispersions and shapes; but if the k variances are not the same, then (as with the Mann-Whitney test) the probability of a Type I error departs from the specified in accordance with the magnitude of those differences (Zimmerman, 2000).j: As with the parametric analysis of variance (Section 10.1), the Kruskal- Wallis test tends to be more powerful with larger sample sizes, and the power is less when the n/s are not equal, especially if the large means are associated with the small ni's (Boehnke, 1984); and it tends to be conservative if the groups with large ni's have high within-groups variability and liberal if the large samples have low variability (Keselman, Rogan, and Feir-Walsh, 1997). Boehnke (1984) advises against using the Kruskal-Wallis test unless N > 20. If k = 2, then the Kruskal- Wallis test is equivalent to the Marin-Whitney test of Section 8.11. Like the Marin-Whitney test, the Kruskal- Wallis procedure does (Y
*William Henry Kruskal (b. 1919), American statistician, and Wilson Allen Wallis (b. 1912), American statistician and econometrician. t As will be seen, this procedure does not involve variances, but the term nonparametric analysis of variance is commonly applied to it in recognition that the test is a non parametric analog to the parametric ANOV A. +Modifications of the Kruskal-Wallis test have been proposed for nonparametric situations where the k variances are not equal (the "Behrens-Fisher problem" addressed parametrically in Section 10.1 h) but the k populations are symmetrical (Rust and Fligner, 1984; Conover 1999: 223-224).
Section 10.4
Nonparametric
Analysis of Variance
215
not test whether means (or medians or other parameters) may be concluded to be different from each other, but instead addresses the more general question of whether the sampled populations have different distrubutions. However, if the shapes of the distributions are very similar, then the test does become a test for central tendency (and is a test for means if the distributions are symmetric). The Type I error rate with heterogeneous variances is affected less with the Kruskal- Wallis test than with the parametric analysis of variance if the groups with large variances have small sample sizes (Keselman, Rogan, and Feir-Walsh, 1977; Tomarkin and Serlin, 1986). Example 10.10 demonstrates the Kruskal- Wallis test procedure. As in other nonparametric tests, we do not use population parameters in statements of hypotheses, and neither parameters nor sample statistics are used in the test calculations. The Kruskal-Wallis test statistic, H, is calculated as H=
12
k
---L+ 1)
N(N
1
i=1
R2
3(N
-
n,
+ I),
( 10.37)
L.7=
where n, is the number of observations in group i. N = 1 n, (the total number of observations in all k groups), and R, is the sum of the ranks of the n, observations in group i.* The procedure for ranking data is as presented in Section 8.11 for the Mann- Whitney test. A good check (but not a guarantee) of whether ranks have been assigned correctly is to see whether the sum of all the ranks equals N(N + 1)/2. Critical values of H for small sample sizes where k :s; 5 are given in Appendix Table B.13. For larger samples and/or for k > 5, H may be considered to be approximated by X2 with k - 1 degrees of freedom. Chi-square, X2, is a statistical distribution that is shown in Appendix Table B.l, where probabilities are indicated as column headings and degrees of freedom (v) designate the rows. If there are tied ranks, as in Example 10.11, H is a little lower than it should be, and a correction factor may be computed as C
=
1 _
N3 and the corrected value of H is
Lf
(10.40)
N
H
He
(10.41 )
=-
C * Interestingly,
H (or He of Equation
10.41) could also be computed If
=
groups SS, total MS
as (10.38)
applying the procedures of Section 10.1 to the ranks of the data in order to obtain the Groups SS and Total MS. And, because the Total MS is the variance of all N ranks, if there are no ties the Total MS is the variance of the integers from 1 to N, which is
N(N The following alternate differences among the R = N( N - 1)/2:
K
formula groups'
+
1 )(2N
+
1)/6
-
N -
1
N2(N
+
1 )2/4N
( 1O.3Xa)
(Pearson and Hartley. 1976: 49) shows that H is expressing the mean ranks CR.i = Ri] ni) and the mean of all N ranks, which is
±
12 ni(Ri - R)2 H = ----'1'---'=--=.' _ N(N - I)
(10.39)
216
Chapter 10
Multisample
Hypotheses and the Analysis of Variance
EXAMPLE 10.10 Ranks
The Kruskal-Wallis
Single-Factor Analysis of Variance
by
An entomologist is studying the vertical distribution of a fly species in a deciduous forest and obtains five collections of the flies from each of three different vegetation layers: herb, shrub, and tree. He: H A:
The abundance of the flies is the same in all three vegetation layers. The abundance of the flies is not the same in all three vegetation layers.
a = 0.05 The data are as follows (with ranks of the data in parentheses):*
Numbers of Flies/nr' of Foliage Herbs Shrubs Trees 14.0 12.1 9.6 8.2 10.2
=
nj
R[
8.4(11) 5.1 (2) 5.5 (4) 6.6 (7) 6.3 (6)
(15) (14) (12) (10) (13)
=
5 64
6.9 7.3 5.8 4.1 5.4
(8) (9) (5) (1) (3)
= 5 R3 = 26
n: = 5 R2 = 30
n3
N=5+5+ H
=
12
-
N(N + 2 12 [64 15(16) 5
= ~[1134.400]
2
3(N
62
+ 3 0 + 25 ]5
+ 1)
3(16)
- 48
240
= 56.720 - 48 = 8.720 Ho.os,s.s,s = 5.780 Reject Ho. 0.005 "To check whether
< P < 0.01
ranks were assigned correctly,
sums: 64 + 30 + 26 = 120) is compared to N(N not guarantee that the ranks were assigned properly,
the sum of the ranks (or sum of the rank
+
1)/2 = 15(16)/2 = 120. This check will but it will often catch errors of doing so.
Section 10.4 EXAMPLE 10.11
Nonparametric
The Kruskal-Wallis
A limnologist obtained eight containers
Analysis of Variance
217
Test with Tied Ranks
of water from each of four ponds. The
pH of each water sample was measured. The data are arranged in ascending order within each pond. (One of the containers from pond 3 was lost, so n3 = 7, instead of 8; but the test procedure does not require equal numbers of data in each group.) The rank of each datum is shown parenthetically.
He: pH is the same in all four ponds. HA: pH is not the same in all four ponds. a
= 0.05 Pond J
Pond 2 7.71 7.73 7.74 7.74 7.78 7.78 7.80 7.81
7.68 (1) 7.69 (2) 7.70 (3.5*) 7.70 (3.5*) 7.72 (8) 7.73 (10*) 7.73 (10*) 7.76 (17) *Tied ranks.
= 8
nl
7.74 7.75 7.77 7.78 7.80 7.81 7.84
(6*) (10*) (13.5*) (13.5*) (20*) (20*) (23.5*) (26*)
n2 = 8 R2 = 132.5
R, = 55 N=8+8+7
+ 8 12
Pond 4
(13.5*) (16) (18) (20*) (23.5*) (26*) (28)
7.71 7.71 7.74 7.79 7.81 7.85 7.87 7.91
(6*) (6*) (13.5*) (22) (26*) (29) (30) (31)
n4 = 8 R4 = 163.5
ns = 7 = 145
R3
= 31 R2
k
---2:N(N + 1) n;
H
Pond 3
1
-3(N+l)
i=l
=
2
12 [55 31 (32) 8
2
+ 132.5 8
2
+ 145 7
2
+ 163.5 8
3(32)
]_
= 11.876 Number of groups of tied ranks = m = 7.
L t = L (t( - ti) = (23
-
+(33
2) + (33 - 3) + (23
3) + (33 - 2) + (33
3) + (43 - 3)
-
4)
= 168 = 1 -
C
Lt N3 -
= 1 - __ 16_8_ = 1 - ~ N
= H = 11.876 = 11.943
H c
C
v=k-l=3
0.9944
313
-
31
29760
= 0.9944
218
Chapter 10
Multisample X6.0S,3
Hypotheses and the Analysis of Variance
= 7.815
Reject H«. 0.005
< P
2) to ask whether all k sample variances estimate the same population variance. The null and alternate hypotheses would then be Ho: (J'T = = ... = and He: the k population variances are not all the same. The equality of variances is called homogeneity of variances, or homoscedasticity; variance heterogeneity is called heteroscedasticity" (J'~
(J'~
(a) Bartlett's Test. A commonly encountered method employed to test for homogeneity of variances is Bartlett's test! (Bartlett, 1937a, 1937b; based on a principle of Neyman and Pearson, 1931). In this procedure, the test statistic is B ~ where
Vi
(Insf,{~
v) -
i~
Vi
Insi,
(10,44)
= n, - 1 and n, is the size of sample i. The pooled variance, s~, is calculated
as before as 2:,7= 1 SSj 2:,7= 1 vi. Many researchers prefer to operate with common logarithms (base 10), rather than with natural logarithms (base e);+ so Equation 10.44 may be written as R ~ 2,30259[ (log
sf,
{~Vi) - ~ Vi
log 5i],
The distribution of B is approximated by the chi-square distribution.t 1 degrees of freedom (Appendix Table B.1), but a more accurate
(lOA5) with k chi-square
"The two terms were introduced by K. Pearson in 1905 (Walker, 1929: 181); since then they have occasionally been spelled homoskedasticity and heteroskedasticity ; respectively. tMaurice Stevenson Bartlett (1910-2002), English statistician. :fSee footnote in Section 8.7. A summary of approximations is given by Nagasenker (1984).
*
Section 10.7
approximation
Homogeneity of Coefficients of Variation
221
is obtained by computing a correction factor,
c=
1
+
1 3( k - 1)
1
k
~i=l
Vi
1 k
(10.46)
~Vi i=l
with the corrected test statistic being Be
=
CB
(10.47)
Example 10.13 demonstrates these calculations. The null hypothesis for testing the homogeneity of the variances of four populations may be written symbolically as He: O"T = O"~ = O"~ = (T~, or, in words, as "the four population variances are homogeneous (i.e., are equal)." The alternate hypothesis can be stated as "The four population variances are not homogeneous (i.e., they are not all equal)," or "There is difference (or heterogeneity) among the four population variances." If Hi, is rejected, the further testing of Section 11.8 will allow us to ask which population variances are different from which. Bartlett's test is powerful if the sampled populations are normal, but it is very badly affected by nonnormal populations (Box, 1953; Box and Anderson, 1955; Gartside, 1972). If the population distribution is platykurtic, the true Q' is less than the stated Q' (i.e., the test is conservative and the probability of a Type II error is increased); if it is leptokurtic, the true Q' is greater than the stated Q' (i.e., the probability of a Type I error is increased). When k = 2 and nl = 172, Bartlett's test is equivalent to the variance-ratio test of Section 8.5a. However, with two samples of unequal size, the two procedures may yield different results; one will be more powerful in some cases, and the other more powerful in others (Maurais and Ouimet, 1986). (b) Other Multisample Tests for Variances. Section 8.5b noted that there are other tests for heterogeneity (Levene's test and others) but that all are undesirable in many situations. The Bartlett test remains commendable when the sampled populations are normal, and no procedure is especially good when they are not. Because of the poor performance of tests for variance homogeneity and the robustness of analysis of variance for multisample testing among means (Section 10.1), it is not recommended that the former be performed as tests of the underlying assumptions of the latter. HOMOGENEITY OF COEFFICIENTS OF VARIATION
The two-sample procedure of Section 8.8 has been extended by Feltz and Miller (1996) for hypotheses where k ;::::3 and each coefficient of variation ( Vi) is positive:
k
~vivl i=l X2
k ~Vi i=_l__
= VF(O.5
+ VF)
(10.48)
222
Chapter 10
Multisample
Hypotheses and the Analysis of Variance
EXAMPLE 10.13
Bartlett's Test for Homogeneity of Variances
Nineteen pigs were divided into four groups, and each group was raised on a different food. The data, which are those of Example 10.1, are weights, in kilograms, and we wish to test whether the variance of weights is the same for pigs fed on all four feeds. a21 = a22 = a2J = a24
Ho' .
H A: The four population variances are not all equal (i.e., are heterogeneous).
= 0.05
(l'
Feed I
Feed 2
Feed 3
60.8 67.0 65.0 68.6 61.7
68.7 67.7 75.0 73.3 71.8
69.6 77.1 75.2 71.5
Feed 4 61.9 64.2 63.1 66.7 60.3
ni
1 5
2 5
3 4
4 5
Vi
4
4
3
4
k
L: Vi
= 15
i=1 k
44.768
SSi
37.660
34.970
L: SSj = 140.750
23.352
i=1
52 I
log Vi
57
log
11.192 1.0489
9.415 0.9738
11.657 1.0666
5.838 0.7663
4.1956
3.8952
3.1998
3.0652
k
s7
L: Vi log s7 = 14.3558 i=1 k
11vi
0.250
0.250
0.333
L: Ilvi
0.250
= 1.083
i=1
52
L: SSj L: Vi
=
P
log B
5~
=
= 140.750 = 9.3833 15
= 1 +
C
x(L:l-
= 0.9724
2.30259 [(log
3( k - 1) Vi
s~)(L:
_1)
LVi
= 1 + 3/3) (1.083 - 1~)
Vi)
= 1.113
- L: Vi log 57 )]
Ii
= 2.30259[(0.9724)( 15) - 14.3558]
B, =
= 2.30259( 0.2302)
X60S3 = 7.815
= 0.530
Do not reject Ho. 0.90
c
=
C
< P < 0.95 [P = 0.92]
0.530 1.113
=
0.476
Exercises
225
EXERCISES The following data are weights of food (in kilograms) consumed per day by adult deer collected at different times of the year. Test the null hypothesis that food consumption is the same for all the months tested. Feb.
May
Aug.
Nov.
4.7 4.9 5.0 4.8 4.7
4.6 4.4 4.3 4.4 4.1 4.2
4.8 4.7 4.6 4.4 4.7 4.8
4.9 5.2 5.4 5.1 5.6
• An experiment is to have its results examined by analysis of variance. The variable is temperature (in degrees Celsius), with 12 measurements to be taken in each of five experimental groups. From previous experiments, we estimate the withingroups variability, a-2, to be 1.54(oC)2. If the 5% level of significance is employed, what is the probability of the ANOV A detecting a difference as small as 2.0GC between population means? . For the experiment of Exercise 10.2, how many replicates are needed in each of the five groups to detect a difference as small as 2.0 C between population means, with 95% power? For the experiment of Exercise 10.2, what is the smallest difference between population means that we are 95% likely to detect with an ANOV A using 10replicates per group? 0
10.5. Using the Kruskal-Wallis test, test nonpararnctrically the appropriate hypotheses for the data of Exercise 10.1. 10.6. Three different methods were used to determine the dissolved-oxygen content of lake water. Each of the three methods was applied to a sample of water six times, with the following results. Test the null hypothesis that the three methods yield equally variable results (a-~ = a~ = a-~). Method J (mg/kg)
Method 2 (mg/kg)
Method 3 (mg/kg)
10.96 10.77 10.90 10.69 10.87 10.60
10.88 10.75 10.80 10.81 10.70 10.82
10.73 10.79 10.78 10.82 10.88 10.81
10.7. The following statistics were obtained from measurements of the circumferences of trees of four species. Test whether the coefficients of variation of circumferences are the same among the four species. Species B Species A n:
X (m): s2 (rn/):
Species
Q
Species H
40 54 58 32 2.126 1.748 1.350 1.392 0.488219 0.279173 0.142456 0.203208
Homogeneity of Coefficients of Variation
Section 10.7
223
where the common coefficient of variation is k
2: Vp
=
ViVi
(10.49)
l_·=_~__
2:
Vi
i=l
This test statistic approximates the chi-square distribution with k - 1 degrees of freedom (Appendix Table B.l) and its computation is shown in Example 10.14. When k = 2, the test yields results identical to the two-sample test using Equation 8.42 (and X2 = Z2). As with other tests, the power is greater with larger sample size; for a given sample size, the power is greater for smaller coefficients of variation and for greater differences among coefficients of variation. If the null hypothesis of equal population coefficients of variation is not rejected, then Vp is the best estimate of the coefficient of variation common to all k populations.
EXAMPLE 10.14
Testing for Homogeneity of Coefficients of Variation
For the data of Example 10.1: Ho:
The coefficients of the fOUTsampled populations
(TV J.L 1 HA:
=
(T~/J.L2
(TV J.L3
=
are the same; i.e.,
(T~/J.L4·
The coefficients of variation of the fOUTpopulations same.
Feed 1
Feed 2
Feed 3
ni
5
5
4
5
Vi
4
4
3
4
68.30
73.35
11.192
16.665
11.657
9.248
si (kg)
3.35
4.08
3.41
3.04
Vi
0.0518
0.0597
0.0465
0.0456
(kg)
s7 (kg2)
are not all the
Feed 4
64.62
Xi
n .LVi
=
66.64
= 4 + 4 + 3 + 4 = 15
i=l
n .LViVi i=1
= (4)(0.0518)
+ (4)(0.0597)
+ (3)(0.0465)
V~
+ (4)(0.0456)
= (0.7679)2 = 0.002620
= 0.7679
224
Chapter 10
Multisample
Hypotheses and the Analysis of Variance
11
2:
IJV;
=
(4)(0.0518)2
+ (4)(0.0597)2
+ (3)(0.0465)2
+ (4)(0.0456)2
i=l
= 0.03979
0.03979 _ (0.7679)2 15 0.002620( 0.5 + 0.(02620) For chi-square:
IJ
0.0004786 = 0.363 0.001317
= 4 - 1 = 3: X6.0S.3 = 7.815. Do not reject Ho. 0.90 < P < 0.95 [P
= 0.948]
Miller and Feltz (1997) reported that this test works best if each sample size (ni: is at least 10 and each coefficient of variation (Vi) is no greater than 0.33; and the; describe how the power of the test (and, from such a calculation, the minimurr detectable difference and the required sample size) may be estimated. 10.8
CODING DATA
In the parametric ANOV A, coding the data by addition or subtraction of a constan causes no change in any of the sums of squares or mean squares (recall Section 4.8) so the resultant F and the ensuing conclusions are not affected at all. If the coding i: performed by multiplying or dividing all the data by a constant, the sums of square: and the mean squares in the ANOV A each will be altered by an amount equal to the square of that constant, but the F value and the associated conclusions will remair unchanged. A test utilizing ranks (such as the Kruskal- Wallis procedure) will not be affectec at all by coding of the raw data. Thus, the coding of data for analysis of variance either parametric or nonparametric, may be employed with impunity, and cod. ing frequently renders data easier to manipulate. Neither will coding of data altei the conclusions from the hypothesis tests in Chapter 11 (multiple comparisons) OJ Chapters 12, 14, 15, or 16 (further analysis-of-variance procedures). Bartlett's tesi is also unaffected by coding. The testing of coefficients of variation is unaffectec by coding by multiplication or division, but coding by addition or subtractior may not be used. The effect of coding is indicated in Appendix C for man; statistics. 10.9
MULTISAMPLE TESTING FOR NOMINAL-SCALE
DATA
A 2 x c contingency table may be analyzed to compare frequency distributions 01 nominal data for two samples. In a like fashion, an r X c contingency table maj be set up to compare frequency distributions of nominal-scale data from r samples Contingency table procedures are discussed in Chapter 23. Other procedures have been proposed for multisample analysis of nominal-scalf data (e.g., Light and Margolin, 1971: Windsor, 1948).
C HAP
T E R
11
Multiple Comparisons 11.1 TESTING ALL PAIRS OF MEANS 11.2 CONFIDENCE INTERVALS FOR MULTIPLE COMPARISONS 11.3 TESTING A CONTROL MEAN AGAINST EACH OTHER MEAN 11.4 MULTIPLE CONTRASTS 11.5 NONPARAMETRIC MULTIPLE COMPARISONS 11.6 NONPARAMETRIC MULTIPLE CONTRASTS 11.7 MULTIPLE COMPARISONS AMONG MEDIANS 11.8 MULTIPLE COMPARISONS
AMONG
VARIANCES
The Model I single-factor analysis of variance (ANOY A) of Chapter 10 tests the null hypothesis Ho: f..L1 = f..L2 = ... = f..Lk. However, the rejection of Ho does not imply that all k population means are different from one another, and we don't know how many differences there are or where differences lie among the k means. For example, if k = 3 and Hi; f..L1 = V2 = f..L3 is rejected, we are not able to conclude whether there is evidence of f..L1 V2 = f..L3 or of f..L1 = V2 f..L3 or of f..L1 V2 f..L3· The introduction to Chapter 10 explained that it is invalid to employ multiple two-sample t tests to examine the difference among more than two means, for to do so would increase the probability of a Type I error (as shown in Table 10.1). This chapter presents statistical procedures that may be used to compare k means with each other; they are called multiple-comparison procedures" (MCPs). Except for the procedure known as the least significance difference test, all of the tests referred to in this chapter may be performed even without a preliminary analysis of variance. Indeed, power may be lost if a multiple-comparison test is performed only if the ANOY A concludes a significant difference among means (Hsu, 1996: 177-178; Myers and Well, 2003: 261). And all except the Scheffe test of Section 11.4 are for a set of comparisons to be specified before the collection of data. The most common principle for multiple-comparison testing is that the significance level. Q', is the probability of committing at least one Type I error when making all of the intended comparisons for a set of data. These are said to be a family of comparisons, and this error is referred to as [amilywise error (FWE) or, sometimes, experimentwise error. Much less common are tests designed to express comparison wise error, the probability of a Type I error in a single comparison. A great deal has been written about numerous multiple-comparison tests with various objectives, and the output of many statistical computer packages enhances misuse of them (Hsu, 1996: xi). Although there is not unanimity regarding what the "best" procedure is for a given situation, this chapter will present some frequently encountered highly regarded tests for a variety of purposes. If the desire is to test for differences between members of all possible pairs of means, then the procedures of Section 11.1 would be appropriate, using Section 11.1a
*
"The term multiple comparisons
*
was introduced
* *
by D. E. Duncan
in 1951 (David.
1995).
Section 11.1
Testing All Pairs of Means
227
if sample sizes are unequal and Section 11.1b if variances are not the same. If the data are to be analyzed to compare the mean of one group (typically called the control) to each of the other group means, then Section 11.3 would be applicable. And if the researcher wishes to examine sample means after the data are collected and compare specific means, or groups of means, of interest, then the testing in Section 11.4 is called for. Just as with the parametric analysis of variance, the testing procedures of Sections 11.1-11.4 are premised upon there being a normal distribution of the population from which each of the k samples came; but, like the A OVA, these tests are somewhat robust to deviations from that assumption. However, if it is suspected that the underlying distributions are far from normal, then the analyses of Section 11.5. or data transformations (Chapter 13), should be considered. Multiple-comparison tests are adversely affected by heterogeneous variances among the sampled populations. in the same manner as in ANOVA (Section 10.1g) (Keselman and Toothaker, 1974; Petrinovich and Hardyck, 1969). though to a greater extent (Tukey, 1993). In multiple-comparison testing-except when comparing means to a controlequal sample sizes are desirable for maximum power and robustness, but the procedures presented can accommodate unequal n's. Petrinovich and Hardyck (1969) caution that the power of the tests is low when sample sizes are less than 10. This chapter discusses multiple comparisons for the single-factor ANOV A experimental design (Chapter 10).* Applications for other situations are found in Section 12.5 (for the two-factor ANOV A design), 12.7b (for the nonparametric randomizedblock ANOV A design), 12.9 (for dichotomous data in randomized blocks), 14.6 (for the multiway ANOYA design), 18.6 and 18.7 (for regression), and 19.8 (for correlation ). 11.1 TESTING ALL PAIRS OF MEANS There are k( k - 1)/2 different ways to obtain pairs of means from a total of k means." For example, if k = 3. the k(k - 1)/2 = 3(2)/2 = 3 pairs are ILl and IL2, ILl and IL3, and }L2 and IL3; and for k = 4. the k(k - 1)/2 = 4(3)/2 = 6 pairs are ILl and IL2, ILl and IL3, ILl and IL4, IL2 and IL3, IL2 and IL4, and IL3 and IL4· SO each of k( k - 1 )/2 null hypotheses may be tested, referring to them as Ho: ILB = ILA, where the subscripts A and B represent each pair of subscripts; each corresponding alternate hypothesis is Ho: IL8 -=FILA. An excellent way to address these hypotheses is with the Tukey test (Tukey, 1953). also known as the honestly significant difference test (HSD test) or wholly significant difference test (WSD test). Example 11.1 demonstrates the Tukey test, utilizing an ANOV A experimental design similar to that in Example 10.1, except that all groups have equal numbers of data (i.e., all of the ni's are equal). The first step in examining these multiple-comparison hypotheses is to arrange and number all five sample means in order of increasing magnitude. Then pairwise differences between the means, X A - X 8, are tabulated. Just as a difference between means, divided by *For nonpararnetric testing, Conover and Iman (1981) recommend applying methods as those in Sections 11.1-11.4 on the ranks of the data. However Hsu (1996: 177); Sawilowsky. Blair, and Higgins (1999): and Toothaker (1991: 1(9) caution against doing so. tThe number
of combinations
c k
2
= 2!(k
of k groups taken 2 at a time is (by Equation k! = k(k - I )(k _ 2)! 2!(k - 2)'
2)! = k(k
-
2
I)
5.10): (11.1)
228
Chapter 11
Multiple Comparisons EXAMPLE 11.1
Tukey Multiple
Comparison Test with Equal Sample Sizes.
The data are strontium concentrations (mg/ml) in five different bodies of water. First an analysis of variance is performed. Ho:
fLl
=
fL2
=
fLJ
=
fL4
=
fLs·
H A: Mean strontium concentrations water. a
=
are not the same in all five bodies of
0.05
Grayson '.\'Pond
Beaver Lake
2K2
39.6
46.3
33.2
40.H
42.1
44.1
54.1 59.4
XI
=
Angler's
Cove
Appletree
Llike
Rock River
41.0
56.3
36.4
37.9
43.5
46.4
34.6
37.1
4H.H
40.2
62.7
29.1
43.6
43.7
3H.6
60.0
31.0
42.4
40.1
36.3
57.3
32.1 mg/ml
111
X 2 = 40.2 mg/ml
=6
112
=
X3
6
=
113
= 6
Source of variation Total Groups Error
k = 5, n
=
X4 = 41.1 mg/ml
44.1 mg/ml
f/4
= 6
DF
SS
X S
=
58.3 mg/ml
115
= 6
MS
2437.5720 2193.4420
29 4
548.3605
244.1300
25
9.7652
6
Samples number (i) of ranked means: Ranked sample mean (Xi):
To test each He:
2
4
3
5
40.2
41.1
44.1
58.3
I 32.l
fLH = fLA,
SE = )9.7:52
=
.J1.6275
=
1.28.
As QO.OS2S.k does not appear in Appendix Table B.5, the critical value with the next lower OF is used: QO.05.24.S=4.166.
Section 11.1
Comparison
Difference
B vs. A
(XH
5 vs. 1 5 vs. 2 5 vs. 4 5 vs. 3 3 vs. I 3 vs. 2 3 vs. 4 4 vs. 1 4 vs. 2 2 vs. 1
58.3 58.3 58.3 58.3 44.1 44.1 Do not 44.1 Do not 40.2 -
-
32.1 40.2 41.1
XA)
= 26.2 = 18.1 17.2 14.2 12.0 3.9
44.1 = 32.1 = 40.2 = test 32.1 = 9.0 test 32.1 = 8.1
Thus, we conclude that fLl is different from the other means, and that fL2, other: fLl * fL2 = fL4 = fLo, * Wi· the appropriate
q, is calculated
229
Testing All Pairs of Means
SE
q
Conclusion
1.28 1.28 1.28 1.28 1.28 1.28
20.4 7
Reject Hi; fLs = fLl Reject Hi: Wi = fL2 Reject He: Wi = fL4 Reject Ho: fLS = fLo, Reject Ho: fLo, = fLl Do not reject Ho: fLo, =
1.28
7.03
Reject
H«:
fL4
=
fLl
1.28
6.33
Reject
Ho:
fL2
=
fLl
14.14 13.44 11.09 9.38 3.05
fL2
from the other means, that fLS is different and fLo, are indistinguishable from each
fL4,
standard error, yields a t value (Section 8.1), the Tukey by dividing a difference between two means by
SF ~ /::
test statistic,
'
(11.2)
where n is the number of data in each of groups B and A, and s2 is the error square by ANOYA computation (Equation 10.14). Thus XB
-
q=----
XII
mean
(11.3)
SE which is known as the studentized range" (and is sometimes designated as T). The null hypothesis He; X B = X II is rejected if q is equal to or greater than the critical value, qo:.//.k, from Appendix Table B.5, where u is the error degrees of freedom (via Equation 10.15, which is N - k). The significance level, a, is the probability of committing at least one Type I error (i.e., the probability of incorrectly rejecting at least one Ho) during the course of comparing all pairs of means. And the Tukey test has good power and maintains the probability of the familywise Type I error at or below the stated a. The conclusions reached by this multiple-comparison testing may depend upon the order in which the pairs of means are compared. The proper procedure is to compare first the largest mean against the smallest. then the largest against the next smallest. and so on, until the largest has been compared with the second largest. Then one compares the second largest with the smallest, the second largest with the next smallest. and so on. Another important procedural rule is that if no significant difference is found *E. S. Pearson
and H. O. Hartley first used this term in 1953 (David,
1995).
230
Chapter 11
Multiple Comparisons
between two means, then it is concluded that no significant difference exists between any means enclosed by those two, and no differences between enclosed means are tested for. Thus, in Example 11.1, because we conclude no difference between population means 3 and 2, no testing is performed to judge the difference between means 3 and 4, or between means 4 and 2. The conclusions in Example 11.1 are that Sample 1 came from a population having a mean different from that of any of the other four sampled populations; likewise, it is concluded that the population mean from which Sample 5 came is different from any of the other population means, and that samples 2, 4, and 3 came from populations having the same means. Therefore, the overall conclusion is that Ml of- fL2 = M4 = M3 of- MS, As a visual aid in Example 11.1, each time a null hypothesis was not rejected, a line was drawn beneath means to connect the two means tested and to encompass any means between them. The null hypothesis Ho: MB = MA may also be written as MB - MA = O. The hypothesis MB - MA = MO, where MO of- 0, may also be tested; this is done by replacing X B - X A with I X B - X A I - MO in the numerator of Equation 11.3. Occasionally, a multiple-comparison test, especially if ne of- nA, will yield ambiguous results in the form of conclusions of overlapping spans of nonsignificance. For example, one might arrive at the following:
XJ
X2
X3
X4
for an experimental design consisting of four groups of data. Here the four samples seem to have come from populations among which there were two different population means: Samples 1 and 2 appear to have been taken from one population, and Samples 2,3, and 4 from a different population. But this is clearly impossible, for Sample 2 has been concluded to have come from both populations. Because the statistical testing was not able to conclude decisively from which population Sample 2 came, at least one Type II error has been committed. Therefore, it can be stated that Ml of- M3 of- M4, but it cannot be concluded from which of the two populations Sample 2 came (or if it came from a third population). Repeating the data collection and analysis with a larger number of data might yield more conclusive results. (a) Multiple Comparisons with Unequal Sample Sizes. If the sizes of the k samples are not equal, the Tukey-Krarner procedure (Kramer, 1956; supported by Dunnett, 1980a; Stoline, 1981; Jaccard, Becker, and Wood, 1984)* is desirable to maintain the probability of a Type I error near a and to operate with good power. For each comparison involving unequal n's, the standard error for use in Equation 11.3 is calculated as SE =
52 (
2
1 ns
+
1 ) nA
(1 \.4) '
which is inserting the harmonic mean of ne and I'lA (Section 3.4b) in place of n in Equation 11.2;t and Equation 11.4 is equivalent to 11.2 when ns = nA. This test is shown in Example 11.2, using the data of Example 10.1.
____________
*This procedure has been shown to be excellent (e.g., Dunnett, 19ROa; Hayter. 1984; Keselman, Murray, and Rogan, 1976; Smith, 1971; Somerville. 1993: Stoline, 1981). with the probability of a familywise Type I error no greater than the stated a. tSome researchers have replaced n in Equation l1.2 with the harmonic mean of all k samples or with the median or arithmetic mean of the pair of means examined. Dunnett (1980a); Keselman, Murray, and Rogan (1976); Keselman and Rogan (1977); and Smith (1971) concluded the Kramer approach to be superior to those methods, and it is analogous to Equation R.7a. which is used for ..I.Ctwa:.!o~sa;umllLokLtestin!!:.
Testing All Pairs of Means
Section 11.1
EXAMPLE 11.2
231
The Tukey-Kramer Test with Unequal Sample Sizes
The data (in kg) are those from Equation
10.1.
k=4 s2 = Error MS Error OF QOOS,15.4
=
=
= 9.383
15
4.076
Sample number (i) of ranked means:
4
Ranked sample mean (Xi):
1
2
3
63.24
64.62
71.30
73.35
4
5
5
5
I s2 = \j/9.383 5
=
J2.111
Sample sizes (ni): If ne = nA (call it n), then SE
If ne i:- nA, then SE
= \
=
\In
s2 (1 nB
:2
+ 1) nA
_ -
10.383 (1
\1-2- 5
= 1.453.
+ 1)
4
= ~1.877 = 1.370. Comparison
Difference
B vS.A
3 vs. 4 3 vs. 1 3 vs. 2 2 vs. 4 2 vs. 1 1 vs. 4
(Xs
73.35 73.35 73.35 71.30 71.30 64.62
-
SE
Xii )
63.24 64.62 71.30 63.24 64.62 63.24
= = = = = =
10.11 8.73 2.05 8.06 6.68 1.38
1.453 1.370 1.370 1.453 1.370 1.453
Q
Conclusion
6.958 Reject He: IL3 6.371 Reject Hi; IL3 1.496 Do not reject He: 5.547 Reject He: IL2 4.876 Reject Hi: IL2 0.950 Do not reject Hi:
Thus, we conclude that IL4 and ILl are indistinguishable, that are indistinguishable, and that IL4 and ILl are different from
IL2 IL2
= = IL3
= = ILl
IL4 ILl
=
IL2
IL4 ILl = IL4
and and
IL3 IL3:
IL4 = ILl i:- IL2 = IL3·
(b) Multiple Comparisons with Unequal Variances. Although the Tukey test can withstand some deviation from normality (e.g., Jaccard, Becker, and Wood, 1984), it is less resistant to heterogeneous variances. especially if the sample sizes are not equal. The test is conservative if small n's are associated with small variances and undesirably liberal if small samples come from populations with large variances, and in the presence of both nonnormality and heteroscedasticity the test is very liberal. Many investigations* have determined that the Tukey-Kramer test is also adversely affected by heterogeneous variances. "These include those of Dunnett (19S0b, 19S2): Games and Howell (1976): Jaccard, Becker, and Wood (1984): Keselrnan, Games, and Rogan (1979): Keselman and Rogan (197S): Keselman and Toothaker (1974): Keselman, Toothaker, and Shooter (1975): Ramseyer and Tcheng (1973): Jenkdon and Tamhane (1979).
232
Chapter 11
Multiple Comparisons
As a solution to this problem, Games and Howell (1976) proposed the use of the Welch approximation (Section 8.1b) to modify Equation 11.4 to be appropriate when the k population variances are not assumed to be the same or similar: SE
=
~ (051 + 2
ne
s~),
(11.5)
nA
and q will be associated with the degrees of freedom of Equation 8.12; but each sample size should be at least 6. This test maintains the probability of a familywise Type I error around a (though it is sometimes slightly liberal) and it has good power (Games, Keselman, and Rogan, 1981; Keselman, Games, and Rogan, 1979; Keselman and Rogan, 1978; Tamhane, 1979). If the population variances are the same, then the Tukey or Tukey-Kramer test is preferable (Kirk, 1995: 147-148). If there is doubt about whether there is substantial heteroscedacity, it is safer to use the Games and Howell procedure, for if the underlying populations do not have similar variances, that test will be far superior to the Tukey-Kramer test; and if the population variances are similar, the former will have only a little less power than the latter (Levy, 1978c). (c) Other Multiple-Comparison Methods. Methods other than the Tukey and Tukey-Kramer tests have been employed by statisticians to examine pairwise differences for more than two means. The Newman-Keuls test (Newman, 1939; Keuls, 1952), also referred to as the Student-Newman-Keuls test, is employed as is the Tukey test, except that the critical values from Appendix Table B.5 are those for qa.v,p instead of qa.v.k, where p is the range of means for a given Hi; So, in Example 11.2, comparing means 3 and 4 would use p = 4, comparing means 3 and 1 would call for p = 3, and so on (with p ranging from 2 to k). This type of multiple-comparison test is called a multiple-range test. There is considerable opinion against using this procedure (e.g., by Einot and Gabriel, 1975; Ramsey, 1978) because it may falsely declare differences with a probability undesirably greater than a. The Duncan test (Duncan, 1955) is also known as the Duncan new multiple range test because it succeeds an earlier procedure (Duncan, 1951). It has a different theoretical basis, one that is not as widely accepted as that of Tukey's test, and it has been declared (e.g., by Carmer and Swanson, 1973; Day and Quinn, 1899) to perform poorly. This procedure is executed as is the Student-Newman-Keuls test, except that different critical-value tables are required. Among other tests, there is also a procedure called the least significant difference test (LSD), and there are other tests, such as with Dunn or Bonjerroni in their names (e.g., Howell, 2007: 356-363). The name wholly significant difference test (WSD test) is sometimes applied to the Tukey test (Section 11.1) and sometimes as a compromise between the Tukey and Student-Newman-Keuls procedures by employing a critical value midway between qa,v,k and qa,v,p' The Tukey test is preferred here because of its simplicity and generally good performance with regard to Type I and Type II errors. 11.2
CONFIDENCE INTERVALS FOR MULTIPLE COMPARISONS
Expressing a 1 - a confidence interval using a sample mean denotes that there is a probability of 1 - a that the interval encloses its respective population mean. Once multiple-comparison testing has concluded which of three or more sample
Section 11.2
Confidence Intervals for Multiple Comparisons
233
means are significantly different, confidence intervals may be calculated for each different population mean. If one sample mean (Xi) is concluded to be significantly different from all others, then Equation 10.31 (introduced in Section 10.2) is used: (10.31) In these calculations, s2 (essentially a pooled variance) is the same as the error mean square would be for an analysis of variance for these groups of data. If two or more sample means are not concluded to be significantly different, then a pooled mean of those samples is the best estimate of the mean of the population from which those samples came: (11.6) where the summation is over all samples concluded population. Then the confidence interval is
~ x;
/?
± ta(2).V\l2:
ni'
to have come from the same
(11.6a)
again summing over all samples whose means are concluded to be indistinguishable. This is analogous to the two-sample situation handled by Equation 8.16, and it is demonstrated in Example 11.3. If a pair of population means, /LB and /LA, are concluded to be different, the 1 - a confidence interval for the difference (/LB - /LA) may be computed as ( 11.7) Here, as in Section 11.1, 1J k is the total number of Equation 11.4, depending the underlying population demonstrated in Example
is the error degrees of freedom appropriate to an ANOVA, means, and SE is obtained from either Equation 11.2 or upon whether ne and nA are equal, or Equation 11.5 if variances are not assumed to be equal. This calculation is 11.3 for the data in Example 11.1.
(a) Sample Size and Estimation of the Difference between Two Population Means. Section 8.3 showed how to estimate the sample size required to obtain a confidence interval of specified width for a difference between the two population means associated with the two-sample t test. In a multisample situation, a similar procedure may be used with the difference between population means, employing q instead of the t statistic. As in Section 8.3, iteration is necessary, whereby n is determined such that n
=
s2(qa,v,k)2 d2
.
(11.8)
Here, d is the half-width of the 1 - a confidence interval, s2 is the estimate of error variance, and k is the total number of means; 1J is the error degrees of freedom with the estimated n, namely 1J = k( n - 1).
234
Chapter 11
Multiple Comparisons EXAMPLE 11.3 Example 11.1
Confidence
Intervals (CI) for the Population
Means from
It was concluded in Example 11.1 that ILl -=1=IL2 = IL4 = IL3 -=1=ILs. Therefore, we may calculate confidence intervals for ILlfor IL2.4,3 and for IL5 (where IL2.4,3 indicates the mean of the common population from which Samples 2, 4, and 3 came). Using Equation 10.31:
= Xl ±
95% CIforILI
to.05(2).25
\j/.\2
= 32.1 ± (2.060)~9.7:52
nl
= 32.1 mg/ml ± 2.6 mg/ml.
Again using Equation 10.31: -
95% CI for
IL5
--
=
±
X5
) 52
to.05(2),25
~ n5
= 58.3 mg/ml ± 2.6 mg/ml.
Using Equation 11.6: Xp
=
_-
X2.4,3
n2X2 ~~-~~-~~ + n4X4 n: + n4
_ (6)(40.2)
+ n3X3 + ns
+ (6)(41.1) + (6)(44.1) 6 + 6 + 6
= 41.8mg/ml.
Using Equation 11.6a: 95% CI for
IL2.4,3
=
s2
X2.4,3
±
to 05(2),25 )
6
+ 6+ 6
= 41.8 mg/ml ±1.5 mg/ml.
Using Equation 11.7:
95% CI for
IL5 -
IL2.4,3
=
--
--
X 5 -
X 2.4,3
±
s2 (
--2
QO.05,25.5 \
-
1
+
n5
= 58.3 - 41.8 ± (4.166)(1.04) = 16.5 mg/ml ± 4.3 mg/ml. Using Equation 11.7: 2
95% CI for
IL2.4,3
-
ILl =
X2.4,3
-
Xl ±
s2 (
QO.05,25.5 \
11.3
=
41.8 - 32.1 ± (4.166)(1.04)
=
9.7 mg/ml ± 4.3 mg/ml.
n2
+
1 n4
+
+ 1) n3
nl
TESTING A CONTROL MEAN AGAINST EACH OTHER MEAN
Sometimes means are obtained from k groups with the a priori objective of concluding whether the mean of one group, commonly designated as a control, differs significantly from each of the means of the other k - 1 groups. Dunnett (1955) provided
Section 11.3
Testing a Control Mean Against Each Other Mean
235
an excellent procedure for such testing. Thus, whereas the data described in Section 11.1 were collected with the intent of comparing each sample mean with each other sample mean, the Dunnett test is for multisample data where the objective of the analysis was stated as comparing the control group's mean to the mean of each other group. Tukey's test could be used for this purpose. but it would be less powerful (Myers and Well, 2003: 255). If k = 2, Dunnett's test is equivalent to the two-sample {test (Section 8.1). As in the previous section, s2 denotes the error mean square, which is an estimate of the common population variance underlying each of the k samples. The Dunnett's test statistic (analogous to that of Equation 11.3) is q' =
Xcontrol
XA
-
SE
(\1.9)
where the standard error, when the sample sizes are equal, is SE =
\j
!2s2, n
(11.10)
and when the sample sizes are not equal, it is
(1l.lJ) and when the variances are not equal: SE
=
\j
!s~ I1A
+
.2 '\ontrol
(1l.11a)
I1control
For a two-tailed test, critical values, q~(2).v.k' are given in Appendix Table B.7. If Iq'l 2:::q~(2).v.k' then H«: fLcolltrol = fLA is rejected. Critical values for a one-sample test, q~( I )./,.k' are given in Appendix Table 8.6. In a one-tailed test, Hi; fLconlrol :::; fLA is rejected if q' 2::: q~(I).v.k: and Ho: fLcontrnl 2::: fLA is rejected if Iq'l 2:::q~(I).v.k and
< fLA (i.e., if q :::; -q~(I).lI.k)' This is demonstrated in Example 11.4. These critical values ensure that the familywise Type I error = 0'. The null hypothesis Ho: fLcontrol = fLA is a special case of Ho: fLcontrol - 0 = fL() where fLO = O. However, other values of fLU may be placed in the hypothesis, and Dunnett's test would proceed by placing I X control - X A - fLO I in the numerator of the q' calculation. In an analogous manner, H«: fLcontrol - fLO :::; fL (or H«: fLcontrol - fL() 2::: fL) may be tested. When comparison of group means to a control mean is the researcher's stated desire, the sample from the group designated as the control ought to contain more observations than the samples representing the other groups. Dunnett (1955) showed that the optimal size of the control sample typically should be a little less than ~ times the size of each other sample. Xcontrol
(a) Sample Size and Estimation of the Difference between One Population Mean and the Mean of a Control Population. This situation is similar to that discussed in Section 11.2a, but it pertains specifically to one of the k means being designated as
236
Chapter 11
Multiple Comparisons
EXAMPLE11.4 Dunnett's Test for Comparing the Mean of a Control Group to the Mean of EachOther Group The yield (in metric tons per hectare) of each of several plots (24 plots, as explained below) of potatoes has been determined after a season's application of a standard fertilizer. Likewise, the potato yields from several plots (14 of them) were determined for each of four new fertilizers. A manufacturer wishes to promote at least one of these four fertilizers by claiming a resultant increase in crop yield. A total of 80 plots is available for use in this experiment. Optimum allocation of plots among the five fertilizer groups will be such that the control group (let us say that it is group 2) has a little less than = J4 = 2 times as many data as each of the other groups. Therefore, it was decided to use n : = 24 and nl = n3 = n4 = m-, = 14, for a total N of 80. Using analysis-of-variance calculations, the error MS (s2) was found to be 10.42 (metric tons/ha)? and the error OF = 75.
v"7 XI, do not reject Ho: /-L2 Reject Hi; /-L2 2: /-L5 Reject Hi; /-L2 2: /-L4
2: /-LI
Do not test
We conclude that only fertilizer control fertilizer (fertilizer 2).
from a control
5 produces
group. The procedure
a yield greater
uses this modification
than the yield from the
of Equation
1] .8:
(11.l2) (b) Confidence Intervals for Differences between Control and Other Group Means. Using Dunnett's q' statistic and the SE of Equation 11.10, 11.11, or 11.11a, two-tailed confidence limits can be calculated each of the other group means: 1 -
Cl'
CI for
/-Lcontrol
-
for the difference
/-LA = (Xcontrol
between
the control
mean and
(11.13)
Section 11.4
Multiple
Contrasts
237
One-tailed confidence limits are also possible. The 1 ~ 0' confidence can be expressed that a difference, ILcontrol ~ ILA, is not less than (i.e., is at least as large as) (Xcontrol
~ XA)
(11.14)
~ (q~(I),v,k)(SE),
or it might be desired to state that the difference is no greater than (Xcontrol
XA)
~
(11.15)
+ (q~(I),v,k)(SE).
MULTIPLE CONTRASTS Inspecting the sample means after performing an analysis of variance can lead to a desire to compare combinations of samples to each other, by what are called multiple contrasts. The method of Scheffe" (1953; 1959: Sections 3.4, 3.5) is an excellent way to do this while ensuring a familywise Type I error rate no greater than 0'. The data in Example 11.1 resulted in ANOV A rejection of the null hypothesis He: ILl = ILZ = IL3 = IL4 = ILS; and, upon examining the five sample means, perhaps by arranging them in order of magnitude (X1 < Xz < X4 < X3 < Xs), the researcher might then want to compare the mean strontium concentration in the river (group 5) with that of the bodies of water represented by groups 2, 4, and 3. The relevant null hypothesis would be He: (ILZ + IL4 + IL3 )/3 = ILs, which can also be expressed as He: ILz/3 + IL4/3 + IL3/3 ~ ILS = O. The Scheffe test considers that each of the four IL'S under consideration is associated with a coefficient, c.: Cz = l, C4 = l, q = l, and Cs = ~ 1 (and the sum of these coefficients is always 3 3' 3 . zero). The test statistic, S, is calculated as
(11.16)
S= where
(11.17) and the critical value of the test is Sa =
)(k ~
1 )Fa(I),k-I,N-k
.
(11.18)
Also, with these five groups of data, there might be an interest in testing H«: ~ (ILZ + IL4 + IL3 )/3 = 0 or Ho: (ILl + ILS )/2 ~ (ILZ + IL4 + IL3 )/3 = 0 or Ho: (ILl + IL4 )/2 ~ (ILZ + IL3 )/2 = 0, or other contrasts. Any number of such hypotheses may be tested, and the familywise Type I error rate is the probability of falsely rejecting at least one of the possible hypotheses. A significant F from an ANOV A of the k groups indicates that there is at least one significant contrast among the groups, although the contrasts that are chosen may not include one to be rejected by the Scheffe test. And if F is not significant, testing multiple contrasts need not be done, for the probability of a Type I error in that testing is not necessarily at 0' (Hays, 1994: 458). The testing of several of these hypotheses is shown in Example 11.5. In employing the Scheffe test, the decision of which means to compare with which others occurs after inspecting the data, so this is referred to as an a posteriori, or post hoc test. ILl
* Henry
Scheffe (1907 -1977),
American
statistician.
238
Chapter 11
Multiple Comparisons
EXAMPLE11.5
Scheffe's Test for Multiple Contrasts, Using the Data of Example 11.1
= 0.05, the critical value, Sa, for each contrast is (via Equation 11.1H) J(k - 1 )F005(
For a
Example]
1.1 showed s2
= 9.7652 and
=
J( 5
=
J4(2.76)
=
3.32.
n
= 6.
= 41.8
+
X4
-
!
-
Xs
i9.7li52i~ i
-
I ).4.25
s
SE
Contrast
Xl + X~ :1
I) FO.()5(
-
58.3
'CY 1
+
Ii
(~Y Gt +
6
6
(I
+
)21
--6!
= 1.47
Conclusion
Rejeci 110:
11.22
j
~
fJ.2
I =
+ -fJ..'
X2
X3
GY GY GY~
X
+ + 4 ---=-----'-'---'"
+ --+--
3
6
+ -----
6
= 1.47
6.60
Reject 110: fJ.1
-
fJ.2
I (1)2
\
.' -
l
+
+
9.76521~
6
( I )2 ( I )2
+
fJ.4
( I )2
~-+-'-"' - + _._"'- 6
6
1.16
2.9:1
Accept
110:
6
+
fJ.1
6
fJ.5
2 _ fJ..1.._+.~3
41.8
+ I!:i
:1
=:1.4
+ 2
.•. fJ.'
=()
= -9.7
XI
fJ.4
= ()
6
:12.1 - 41.8
=45.2
+
fJ.1,
:1
= -16.5
X
1).k-I.N-k
=0
X4
= 36.6
_
X2
_~
X:1
9.7652
2 -
= -5S'i
42.15
.- 1.28
04
Reject "11: _ fJ.2
\
+ 2
fJ.1
+
fJ.4
2 fJ.~ = ()
The Scheffe test may also be used to compare one mean with one other. It is then testing the same hypotheses as is the Tukey test. It is less sensitive than the Tukey test to non normality and heterogeneity of variances (Hays. 1994: 458,458: Sahai and Agecl, 2000: 77): but is less powerful and it is recommended that it not be used for pairwise comparisons (e.g., Carmer and Swanson, 1973: Kirk, 1995: 154: Toothaker, 1991: 51,77,89-90). Shaffer (1977) described a procedure, more powerful than Scheffe's, specifically for comparing a combination of groups to a group specified as a control. (a) Multiple Contrasts with Unequal Variances. The Scheffc test is suitable when the samples in the contrast each came from populations with the same variance.
Section 11.5
Nonparametric
Multiple Comparisons
239
When the population variances differ but the sample sizes are equal, the probability of a Type T error can be different from 0.05 when a is set at 0.05. If the variances and the sample sizes are unequal, then (as with the ANOV A, Section 10.1 g) the test will be very conservative if the large variances are associated with large sample sizes and very liberal if the small samples come from the populations with the large variances (Keselman and Toothaker, 1(74). If the (J"~'s cannot be assumed to be the same or similar, the procedure of Brown and Forsythe (I974b) may be employed. This is done in a fashion analogous to the two-sample Welch modification of the t test (Section S.lc), using (11.19)
['
with degrees
of freedom
of
~ (.}~})2
( v'
l1i
(11.20)
=
(b) Confidence Intervals for Contrasts. lishment
of 1 -
0'
confidence
The Scheffc limits for a contrast:
procedure
enables
the estab-
(11.21) (with SE from Equation 11.17). Shaffer's (1977) method produces confidence intervals for a different kind of contrast, that of a group of means with the mean of a control group. Example 11.6 demonstrates the determination of confidence intervals for two of the statistically significant contrasts of Example 11.5. 11.5 NONPARAMETRIC MULTIPLE COMPARISONS In the multisample situation where the nonpararnetric Kruskal-Wallis test (Section 10.4) is appropriate, the researcher usually will desire to conclude which of the samples are significantly different from which others, and the experiment will be run with that goal. This may be done in a fashion paralleling the Tukey test of Section 11.1, by using rank sums instead of means, as demonstrated in Example 11.7. The rank sums, determined as in the Kruskal-Wallis test, are arranged in increasing order of magnitude. Pairwise differences between rank sums are then tabulated, starting with the difference between the largest and smallest rank sums, and proceeding in the same sequence as described in Section 11.1. The standard error is calculated as SE
=
\j
I n(nk)(nk
12
+ 1)
(11.22)
240
Chapter 11
Multiple Comparisons
(Nernenyi, 1963; Wilcoxon and Wilcox, 1964: 10),* and the Studentized (Appendix Table B.5 to be used is qa.oo.K') EXAMPLE 11.6
Confidence
Intervals for Multiple
range
Contrasts
The critical value, Sa, for each confidence interval is that of Equation 11.8: ~(k - 1)Fa(I),k-I,N-b and for a = 0.05, Sa = 3.32 and s2 = 9.7652 as in Example 11.5. (a)
A confidence interval for
fL2
+
~3
+
fL4
-
fL5
would employ SE
=
1.47
=
1.47
from Example 11.5, and the 95% confidence interval is
e"
+
~3
+ X4 -
X,)
± SaSE ~ -16.5
± (3.32)( 1.47)
= -16.5 mg/rnl ± 4.9 mg/rnl LJ = -21.4 mg/rnl L2 = -11.6 mg/rnl. (b)
+ fL4 would employ SE 3 from Example 11.5, and the 95% confidence interval is
A confidence interval for
(Xl -
X, +
~3
fLI
-
fL2
+
fL3
+ X4) ± SaSE ~ -9.7 ± (3.32)( 1.47) = -9.7 mg/rnl ± 4.9 mg/rnl LI
= -14.6 mg/rnl
L2 = - 4.8 mg/ ml.
(a) Nonparametric Multiple Comparisons with Unequal Sample Sizes. Multiplecomparison testing such as in Example 11.7 requires that there be equal numbers of data in each of the k groups. If such is not the case, then we may use the procedure of Section 11.7, but a more powerful test is that proposed by Dunn (1964), using a standard error of SE =
+ 1)
N(N
(J....
12
nA
+
J....)
(11.24)
ne
for a test statistic we shall call (11.25)
*Some authors (e.g., Miller 1981: 166) perform this test in an equivalent fashion by considering the difference between mean ranks (RA and R8) rather than rank sums (RA and R8), in which case the appropriate standard error would be
SE =
\j
I k( nk
+ 1). 12
(11.23)
Section 11.5
Nonparametric Multiple Comparisons
241
EXAMPLE 11.7 Nonparametric Tukey- Type Multiple Comparisons, Using the Nemenyi Test The data are those from Example 10.10. SE =
+ 1) =
n(nk)(nk
12
/5(15)(16) \j 12
Sample number (i) of ranked rank sums: Rank sum (Ri): Comparison
(8 vs. A)
= 10.00
J100 321 26
30
64
Difference (RB -
RA)
SE
q
QO.05.oo,3
1 vs. 3
64 - 26 = 38 10.00 3.80
3.314
1 vs. 2
64 - 30
= 34 10.00 3.40
3.314
2 vs. 3
30 - 26
4 10.00 0.40
3.314
=
Conclusion
Reject He: Fly abundance is the same at vegetation heights 3 and 1. Reject Ho: Fly abundance is the same at vegetation heights 2 and 1. Do not reject H«: Fly abundance is the same at vegetation heights 3 and 2.
Overall conclusion: Fly abundance is the same at vegetation heights 3 and 2 but is different at height 1. where R indicates a mean rank (i.e., RA = RAI nil and RB = RBI nu). Critical values for this test, Qa,k, are given in Appendix Table B.15. Applying this procedure to the situation of Example 11.7 yields the same conclusions, but this will not always be the case as this is only an approximate method and conclusions based upon a test statistic very near the critical value should be expressed with reservation. It is advisable to conduct studies that have equal sample sizes so Equation 11.22 or 11.23 may be employed. If tied ranks are present, then the following is an improvement over Equation 11.24 (Dunn, 1964):
SE =
N(N (
+ 1) _ 12
Lt 12(N
)( 1 -
1)
nA
1)
+
nB
(11.26) .
In the latter equation, L t is used in the Kruskal-Wallis test when ties are present and is defined in Equation 10.42. The testing procedure is demonstrated in Example 11.8; note that it is the mean ranks (Ri), rather than the ranks sums (Ri), that are arranged in order of magnitude. A procedure developed independently by Steel (1960, 1961b) and Dwass (1960) is somewhat more advantageous than the tests of Nemenyi and Dunn (Critchlow and Fligner, 1991; Miller, 1981: 168-169), but it is less convenient to use and it tends to be very conservative and less powerful (Gabriel and Lachenbruch, 1969). And
242
Chapter 11
Multiple Comparisons
Nonparametric
EXAMPLE 11.8
Multiple Comparisons with Unequal Sam-
pie Sizes The data are those from Example 10.11, where the Kruskal- Wallis test rejected the null hypothesis That water pH was the same in all four ponds examined
Lt
=
For
i1A =
168. as in Example 10.11. 8 and ns
8,
12(~'IJ("~+ I)
_)(N(~2'1)
SE
=
\j
liB
I(JI(J2)
_ ~)(~ 12
nA
SE =
= 7 and
+ ~)
12(30)
J20.5500 For
=
= 118
X
K
4.53 = 8.
\j1(31(J2)12 - ~)(~ 12(J0)
7
+ ~) = J22.0179 = 4.69. 0
Sample number (i) of ranked means: Rank sum (Rj): Sample size (lIj): Mean rank (Rj)
1
63.24 8 6.88
2 64.62 5 16.56
4 71.30 8 20.44
To test at the 0.05 significance level, the critical value is Qo.o).4
(Rn
B vs. A
3 vs. 2
= 2.639.
Difference
Comparison
3 vs. I
3 73.35 7 20.71
20.71 20.71
-
-
-
RA)
SE
Q
Conclusion
6.88 = 13.831 4.69 2.95 Reject Ho: Water pH is the 16.56 = 4.15
same in ponds 3 and 1. 4.69 0.88 Do not reject Hi; Water pH is the same in ponds 3 and 2.
3 vs. 4 4 vs. 1 4 VS. 2 2 vs. 1
Do not test 20.44
-
6.88 = 13.56
Do not test 16.56 - 6.88 = 9.68
4.53 2.99 Reject H« Water pH is the same in ponds 4 and 1. 4.53 2.14 Do not reject Ho: Water pH is the same in ponds 2 and 1.
Overall conclusion: Water pH is the same in ponds 4 and 3 but is different in pond I. and the relationship of pond 2 to the others is unclear.
Section 11.6
Nonparametric Multiple Contrasts
243
this test can lose control of Type J error if the data come from skewed populations (Toothaker, 1991: 108). (b) Nonparametric Comparisons of a Control to Other Groups. Subsequent to a Kruskal-Wallis test in which Ho is rejected, a non parametric analysis may be performed to seek either one-tailed or two-tailed significant differences between one group (designated as the "control") and each of the other groups of data. This is done in a manner paralleling that of the procedure of Section 11.4, but using group rank sums instead of group means. The standard error to be calculated is (11.27) (Wilcoxon and Wilcox, 1964: 11), and one uses as critical values either q~( 1).oo.k or q~(2).oo.k (from Appendix Table B.6 or Appendix Table B.7, respectively) for one-tailed or two-tailed hypotheses, respectively.* The preceding nonparametric test requires equal sample sizes. If the n's are not all equal, then the procedure suggested by Dunn (1964) may be employed. By this method, group B is considered to be the control and uses Equation 11.27, where the appropriate standard error is that of Equation 11.26 or 11.28, depending on whether there are ties or no ties, respectively. We shall refer to critical values for this test, which may be two tailed or one tailed, as Q~.k; and they are given in Appendix Table B.16. The test presented by Steel (1959) has drawbacks compared to the procedures above (Miller, 1981: 133). 1.6 NONPARAMETRIC
MULTIPLE CONTRASTS
Multiple contrasts, introduced in Section 11.4, can be tested nonparametrically using the Kruskal- Wallis H statistic instead of the F statistic. As an analog of Equation 11.16, we compute
s=
(11.29)
where c, is as in Section 11.4, and
(11.30) unless there are tied ranks, in which cases we use
SE =
N(N (
+ 1) _ 12
~t 12(N
)(~ -
1)
CT)
(11.31)
ni'
"If mean ranks, instead of rank sums, are used, then
(11.28)
244
Chapter 11
Multiple Comparisons where
) Ha.
IlJ
L [ is JI2.
' n
as in Equation 10.42. The critical value for these multiple contrasts is using Appendix Table B.n to obtain the critical value of H. If the needed
critical value of H is not on that table, then X;.(k 11.7
MULTIPLE COMPARISONS
AMONG
may be used.
-I)
MEDIANS
If the null hypothesis is rejected in a multisample median test (Section 10.5), then it is usually desirable to ascertain among which groups significant differences exist. A Tukey-type multiple comparison test has been provided by Levy (1979), using q
=
fIB
-
fiA.
( 11.32)
SE As shown
in Example 11.19, we employ the values off'lj for each group, where of data in group j that are greater than the grand median. (The values of flj are the observed frequencies in the first row in the contingency table used in the multisample median test of Section 10.5.) The values of flj are ranked, and pairwise differences among the ranks are examined as in other Tukey-type tests. The appropriate standard error, when N (the total number of data in all groups) is an even number. is
fli is the number
SE
and, when N is an odd number,
=
\j
I n( N
the standard SE
=
\j
+ I),
(11.33)
4N
I
error is nN
4( N - 1)
.
( 11.34)
The critical values to be used are q".-x.k. This multiple-comparison test appears to possess low statistical power. If the sample sizes are slightly unequal, as in Example 1l.9, the test can be used by employing the harmonic mean (see Section 3.4b) of the sample sizes. k ( 11.35) n = k •
~l i= 1 for an approximate 11.8
MULTIPLE COMPARISONS
nj
result.
AMONG
VARIANCES
If the null hypothesis that k population variances are all equal (see Section 10.6) is rejected. then we may wish to determine which at' the variances differ from which others. Levy (1975a, 1975c) suggests multiple-comparison procedures for this purpose based on a logarithmic transformation of sample variances. A test analogous to the Tukey test of Section 11.1 is performed by caleulating q
lns~
-
Ins;\
= ~--------
SE
(11.36)
Section 11.8
Multiple Comparisons among Variances
EXAMPLE 11.9 Tukey-Type Multiple Comparison Medians. Using the Data of Example 10.12 Sample
number
(j) of samples
Ranked jj, : Sample size (l1j) k = 4 N = 11
ranked
by flj :
for Differences
4
2
2 12
:
3 I]
245
among
3
6
9
12
11
+ 12 + 11 + 12 = 46
11 = ]2
By Equation
11.35, 4
n=
11
12 By Equation
11.48
+ I + 1 + 1
1
12
11
11.36,
SE =
\j
!(11.48) ( 46) 4(46
Hi;
Median
of population
B
H II:
Median
of population
B
Comparison
.fIR
-
3 vs. 2
9
2=7
3 vs. 1
9 - 3 = 6
3 vs. 4
Do not test
4 vs. 2
6
4 vs. 1
Do not test
1 vs. 2
Do not test
-
-
fIll
2=4
=
-
= 1.713.
1)
Median
*' Median
of population of population
A. A.
Conclusion
SE
q
Q(UI5.4.cx)
1.713
4'()86
3.633
1.713
3.503
3.633
Do not reject H().
1.713
2.335
3.633
Do not reject
Reject
Ho.
H().
Overall conclusion: The medians of populations 3 and 2 (i.c., south and east-see Example 10.12) are not the same; but the test lacks the power to allow clear conclusions about the medians of populations 4 and 1.
where SE if both samples
being compared
=
\
II
(11.37)
v'
arc of equal size. If v A SE
=
\j
!~ + VB
_1 . VII
*'
VU.
we can employ ( 11.38)
246
Chapter 11
Multiple Comparisons EXAMPLE 11.10 Tukey-Type Multiple among Four Variances (i.e., k = 4) 52
Comparison
In
57
I
ni
Vi
2.74 g2
50
49
r.coso
2
2.83 g2
48
47
1.0403
3
2.20 g2
50
49
0.7885
4
6.42 g2
50
49
1.8594
Sample ranked by variances (i): Logorithm of ranked sample variance
(In
1
Sample
degrees
4
(U885
1 1.0080
2 1J1403
1.8594
49
49
47
49
(vd:
of freedom
Compo rison
3
57 ):
Test for Differences
Difference
(Bvs.A)
(In'7:1 -
SE
q
qo.O:'i.:)c.4
0.202*
5.301
3.633
Reject Ho:
(T~
= (T~
In.I~)
4 vs. 3
I.H594 -
0.7HH5
4 vs. I
I.H594 -
1.00HO = O.H514 0.202
4.215
3.633
Reject Ho:
(T~
= (TT
4 vs. 2
I.H594 -
1.0403
4.01 5
3.h:n
Reject Ho:
(T~
= (T~
2 vs. 3
1.0403 -
O.7HH5 = O.251H 0.204
1.234
3.h33
Do not reject lIo:
2 vs. I I vs. 3
Do not tcst Do not test
* As .t.
t
V4 = V1 :
As u 4
Overall
.
*
V2 :
SE
=
SE
=
conclusion:
\j
IIv
1.0709
Conclusions
=
=
O.H191 0.204;
=
\j
I492
=
(T~
+ -
=
V2
= uT = u~
=
(T~
0.202.
)1 1 )1 V4
(T~
-
+ -1 =
49
0.204.
47
* u~.
Just as in Sections 11.1 and 11.2, the subscripts A and B refer to the pair of groups being compared; and the sequence of pairwise comparisons must follow that given in those sections. This is demonstrated in Example 11.10.* The critical value for this test is q".x.k (from Appendix Table 8.5). A Newman-Keuls-type test can also be performed using the logarithmic transformation. For this test, we calculate q using Equation 11.36; but the critical value,
* Recall (as in Section IO.h) that "In" refers to natural logarithms (i.e .. logarithms If one prefers using common logarithms Clog"; logarithms in base 10). then 2.30259( log .17:1q -
SE
log
I~ )
using base e).
(11.39)
248
Chapter 11
Multiple Comparisons
4 are the same as the means of groups 2 and 3. (b) Test the hypothesis that the means of groups 2 and 4 are the same as the mean of group 3. 11.6. The following ranks result in a significant KruskalWallis test. Employ nonparametric multiple-range testing to conclude between which of the three groups population differences exist.
Group 1
Group 2
Group 3
8 4 3 5 1
10 6
14 13 7 12 15
9
11 2
C HAP
T E R
12
Two-Factor Analysis of Variance 12.1 TWO-FACTOR 12.2 TWO-FACTOR 12.3 TWO-FACTOR
ANALYSIS OF VARIANCE WITH EQUAL REPLICATION ANALYSIS OF VARIANCE WITH UNEQUAL REPLICATION ANALYSIS OF VARIANCE WITHOUT REPLICATION
12.4 ANALYSIS WITH RANDOMIZED
BLOCKS OR REPEATED MEASURES
12.5 MULTIPLE COMPARISONS AND CONFIDENCE INTERVALS 12.6 SAMPLE SIZE, DETECTABLE DIFFERENCE, AND POWER 12.7 NONPARAMETRIC 12.8 DICHOTOMOUS 12.9 DICHOTOMOUS 12.10 INTRODUCTION
RANDOMIZED-BLOCK
OR REPEATED-MEASURES
ANALYSIS OF VARIANCE
NOMINAL-SCALE DATA IN RANDOMIZED BLOCKS RANDOMIZED-BLOCK OR REPEATED-MEASURES DATA TO ANALYSIS OF COVARIANCE
Section 10.1 introduced methods for one-way analysis of variance, which is the analysis of the effects of a factor (such as the type of feed) on a variable (such as the body weight of pigs). The present chapter will discuss how the effects of two factors can he assessed using a single statistical procedure. The simultaneous analysis to he considered, of the effect of more than one factor on population means, is termed a factorial analysis of variance," and there can he important advantages to such an experimental design. Among them is the fact that a single set of data can suffice for the analysis and it is not necessary to perform a one-way ANOV A for each factor. This may he economical with respect to time, effort, and money: and factorial analysis of variance also can test for the interactive effect of factors. The two-factor analysis of variance is introduced in this chapter. Examination of the effects of more than two factors will he discussed in Chapter 14. There have been attempts to devise dependable nonparamctric statistical tests for experimental designs with two or more factors. For a one-factor ANOV A. the Marin-Whitney test or an ANOV A on the ranks of the data may he employed for nonparamctric testing (Section 10.4). But, except for the situation in Section 12.7. nonpararnctric procedures with more than one factor have not been generally acceptable. For multi factor analyses (this chapter. Chapter 14. and Chapter 15). it has been proposed that a parametric ANOV A may he performed on the ranks of the data (and this rank transformation is employed by some computer packages) or that the Kruskal- Wallis test of Section 10.4 may he expanded. However, Akritas (I L)L)O): Blair, Sawilowsky. and Higgins (I L)~7): Brunner and Neumann (19~6): McKean and Vidmar (I L)L)4): Sawilowsky, Blair. and Higgins (l L)~L)): Seaman ct al. (1 L)L)4): Toothaker and *Somc concepts on two-factor analysis of variance were discussed as early as I~99 (Thiele. I ~l)9). In 1l)26. R. A. Fisher was the first to present compelling arguments for factorial analysis (Box. 1l)7~: I ss. Street. Il)l)()).
250
Chapter 12
Two-Factor Analysis of Variance Chang (1980); and Toothaker and Newman perform poorly and should not be employed.
12.1
TWO-FACTOR
(1994)
found
that
these
procedures
ANALYSIS OF VARIANCE WITH EQUAL REPLICATION Example 12.1 presents data from an experiment suited to a two-way analysis of variance. The two factors are fixed (as defined in Section 10.1f and discussed further in Section 12.1d). The variable under consideration is blood calcium concentration in birds, and the two factors being simultaneously tested arc hormone treatment and sex. Because there are two levels in the first factor (hormone-treated and non treated) and two levels in the second factor (female and male). this experimental design" is termed a 2 x 2 (or 22) factorial. The two factors arc said to be "crossed" because each level of one factor is found in combination with each level of the second factor." There are 11 = 5 replicate observations (i.e., calcium determinations on each of five birds) for each of the 2 X 2 = 4 combinations of the two factors: therefore, there are a total of N = 2 x 2 x 5 = 20 data in this experiment. In general. it is advantageous to have equal replication (what is sometimes called a "balanced" or "orthogonal" experimental design). but Section 12.2 will consider cases with unequal numbers of data per cell. and Section 12.3 will discuss analyses with only one datum per combination of factors. For the general case of the two-way factorial analysis of variance. we can refer to one factor as A and to the other as B. Furthermore. let us have a represent the number of levels in factor A, b the number of levels in factor 8, and 11 the number of replicates. A triple subscript on the variable. as Xijl, will enable us to identify uniquely the value that is replicate I of the combination of level i of factor A and leveljoUactorB.ln Example 12.I,X213 = 32.3mgj100m1.X115 = 9.5mgj100ml, and so on. Each combination of a level of factor A with a level of factor B is called a cell. The cells may be visualized as the "groups" in a one-factor A OVA (Section 10.1). There are four cells in Example 12.1: females without hormone treatment. males without hormone treatment, females with hormone treatment. and males with hormone treatment. And there are 11 replicate data in each cell. For the cell formed by the combination of level i of factor A and level j of factor B. Xij denotes the cell mean; for the data in Example 12.1, the mean of a cell is the cell total divided by 5, so X" = 14.8R X 12 = 12.12. X 21 = 32.52. and X22 = 27.78 (with the units for each mean being mg/lOO ml). The mean of all bn data in level i of factor A is Xi, and the mean of all an data in level j of factor B is Xj. That is, the mean for the 10 non-hormone-treated birds is XI, which is an estimate of the population mean, ILl ; the mean for the hormonetreated birds is X2. which estimates IL2'; the mean of the female birds is XI: which estimates ILl: and the mean of the male birds is X2, which estimates IL2. There are a total of abn = 20 data in the experiment, and (just as in the single-factor ANOV A of Section 10.1) the mean of all N data (the "grand mean")
* R. A. Fisher (H\ 0.25
[P = 0.62]
256
Cha pter 12
Two-Factor
Analysis
of Variance
plasma calcium) that is in addition to the sum of the effects of each factor considered separately. For the one-factor ANOYA it was shown (Section 1O.1a) how alternative formulas, referred to as "machine formulas," make the sum-of-squares calculations easier because they do not require computing deviations from means, squaring those deviations, and summing the squared deviations. There are also machine formulas for two-factor analyses of variance that avoid the need to calculate grand, cell, and factor means and the several squared deviations associated with them. Familiarity with these formulas is not necessary if the ANOV A calculations are done by an established computer program, but they can be very useful if a calculator is used. These are shown in Section 12.1b. Table 12.1 summarizes the sums of squares, degrees of freedom, and mean squares for the two-factor analysis of variance. (b) Machine Formulas. Just as with one-factor analysis of variance (Section 10.1), there are so-called machine formulas for two-factor ANOV A that allow the computation of sums of squares without first calculating overall, cell, within-cell, and factor means. These calculations are shown in Example 12.2a, and they yield the same sums of squares as shown in Example 12.1a.
TABLE 12.1: Summary
of the Calculations Effects and Equal Replication Source of variation Total
Sum of squares
[XUI
-
Analysis of Variance with Fixed
Degrees of freedom (0 F)
(SS)
Xl
Equation
Mean square (MS)
N-
Equation 12.2 or 12.17
Xl
Cells [Xii
for a Two-Factor
12.4
cells SS
ab -
cells DF or 12.19
Factor
A
[Xi
-
Factor
B
[Xi
Equation
Xl
- Xl
A X B interaction
12.10
a -
12.12
b -
[Xijl
cells (Error)
- Xii]
factor A OF
or 12.20 Equation
I
factor
B SS
factor
B DF
or 12.21 cells SS - factor A SS - factor
Within
factor ASS
I
Equation
(a -
1)(b
-
1)
A X B SS A X B OF
B SS
ab( n -
12.6
or total SS - cells SS
I)
or total OF - cells OF
error
SS
error
OF
Note: For each source of variation, the bracketed quantity indicates the variation being assessed; of levels in factor A; h is the number of factors in factor B; II is the number of replicate data in each cell; N is the total number of data (which is aIm): Xijl is datum I in the cell
a is the number
formed by level i of factor A and level j of factor B; Xi- is the mean of the data in level i of factor A; X-j is the mean of the data in level j of factor B; Xi} is the mean of the data in the cell formed by level i of factor A and level j of factor B: and
X
is the mean of all N data.
Section 12.1
Two-Factor Analysis of Variance with Equal Replication
EXAMPLE 12.2a Example 12.2 h
1I
Using Machine Formulas for the Sums of Squares in
1
LLL i=lj=II=1 a b
Xijl
=
436.5
1
L L L X3, = 11354.31
i=lj=II=1
b
1
L L X1jl
total for no hormone =
= 74.4 + 60.6 = 135.0
j= 1/=1 2 I
L L X2jl
total for hormone =
1
1I
L LXiii
total for females =
= 162.6 + 138.9 = 301.5
1
j=ll=
= 74.4 + 162.6 = 237.0
i=I/=1
a
total for males =
1
L L Xi2/
= 60.6 + 138.9 = 199.5
i=II=1
a
b
2
n
( i~j~~Xijl
c=
)
(436.5 )2 = 9526.6125 20
N a
total SS =
n
b
L L L X3, -
c=
11354.31 - 9526.6125 = 1827.6975
i=1 j=1 1=1
cells SS7 =
± ( LL -
Xijl)2
i=1 j=1
n
a
b
_ (74.4)2
1-1
-
C
+ (60.6)2 + (162.6)2 + (138.9)2 _ 9526.6125 5
=
257
1461.3255
within-cells (i.e., error) SS
= total SS - cells SS = 1827.6975 - 1461.3255 = 366.3720
factor A (hormone group) SS _ (sum without hormone)2+
=
± (± ±
i=1
2 Xijl)
j= 11= 1 hn
(sum with hormone)2
number of data per hormone group
-
C _ C
258
Chapter 12
Two-Factor Analysis of Variance
+ (301.5)2
(135.0)2
(2)(5)
- 9526.6125
= 1386.1125
factor B (sex) SS
=
±(± ±Xijl)2 j= 1/= 1 i=
C
1
an
+ (sum for males)2
(sum for females)2
number of data per sex (237.0)2 + (199.5)2 (2)(5) A
_ 9526.6125=70.3125
x B interaction SS = cells SS - factor ASS = 1461.3255 -
- C
- factor B SS
1386.1125 - 70.3125 = 4.9005
The total variability is expressed by II
total SS
=
b
II
2: 2: 2: XBI
-
C,
(12.17)
i=lj=l/=l
where
(12.18)
C= The variability among cells is
± ±(±Xi )2 F
cells SS
=
i=lj=l
/=1
-
C.
(12.19)
n And the variability among levels of factor A is
factor ASS
=
± (±j=l/=l±
i=l
bn
XijI)
2
) -co
(12.20)
Simply put, the factor A SS is calculated by considering factor A to be the sale factor in a single-factor analysis of variance of the data. That is, we obtain the sum for each level of factor A (ignoring the fact that the data are also categorized into levels of factor B); the sum of a level is what is in parentheses in Equation 12.20. Then we square each of these level sums and divide the sum of these squares by the number of data per level (i.e., bn). On subtracting the "correction term," C, we arrive at
Section 12.1
Two-Factor Analysis of Variance with Equal Replication
259
the factor A SS. If the data were in fact analyzed by a single-factor ANOV A, then the groups SS would indeed be the same as the factor A SS just described, and the groups DF would be what the two-factor ANOV A considers as factor A DF; but the error SS in the one-way ANOV A would be what the two-factor ANOY A considers as the within-cells SS plus the factor B SS and the interaction sum of squares, and the error DF would be the sum of the within-cells, factor B, and interaction degrees of freedom. For factor B computations, we simply ignore the division of the data into levels of factor A and proceed as if factor B were the single factor in a one-way ANOV A:
±(±
f actor BSS
iXijl)2
j=1 i=l/=l = -------
-c.
(12.21)
an (c) Graphical Display. The cell, column, and row means of Example 12.1 are summarized in Table 12.2. Using these means, the effects of each of the two factors, and the presence of interaction, may be visualized by a graph such as Figure 12.1. We shall refer to the two levels of factor A as AI and A2, and the two levels of Row, and Column Means of the Data of Example 12.2 (in mg/100 ml)
TABLE 12.2: Cell,
No hormone (Ad Hormone (A2)
'"c
.S:
~ C 0)
u c~ 0-
u~
Female (Bd
Male (B2 )
14.9
12.1
13.5
32.5
27.8
30.2
23.7
20.0
35 30 25
So
20
~E u~
15
'"~
10
Effect of A
;::l::::
ee
c::: :>.i
5 0 AI
FIGURE 12.1: The means of the two-factor ANOVA data of Example 12.1, as given in Table 12.1. The Ai are the levels of factor A, the Bj are the levels of factor B. A plus sign indicates the mean of an Ai over all (i.e., both) levels of factor B, and an open circle indicates the mean of a Bj over all (i.e., both) levels of factor A.
260
Chapter 12
Two-Factor Analysis of Variance
factor B as B, and B2. The variable, X, is situated on the vertical axis of the figure and on the horizontal axis we indicate A, and A2. The two cell means for B, (14.9 and 32.5 mg/lOO ml, which are indicated by black circles) are plotted and connected by a line; and the two cell means for B2 (12.1 and 27.8 mg/100 ml, which are indicated by black squares) are plotted and connected by a second line. The mean of all the data in each level of factor A is indicated with a plus sign, and the mean of all the data in each level of factor B is denoted by an open circle. Then the effect of factor A is observed as the vertical distance between the plus signs; the effect of factor B is expressed as the vertical distance between the open circles; and nonparallelism of the lines indicates interaction between factors A and B. Thus, the ANOV A results of Example 12.1 are readily seen in this plot: there is a large effect of factor A (which is found to be significant by the F statistic) and a small effect of factor B (which is found to be nonsignificant). There is a small interaction effect, indicated in the figure by the two lines departing a little from being parallel (and this effect is also concluded to be nonsignificant). Various possible patterns of such plots are shown in Figure 12.2. Such figures may be drawn for situations with more than two levels within factors. And one may place either factor A or factor B on the horizontal axis; usually the factor with the larger number of levels is placed on this axis, so there are fewer lines to examine.
x
-----€O-----t 8 ·-----€O_--•. ..... · B2
+ •.•
1
x
AI
A2
(a)
o
X
+
(b)
181
+
o
• B2 A2
(e)
X
AI (d)
FIGURE 12.2: Means in a two-factor AN OVA, showing various effects of the two factors and their interaction. (a) No effect of factor A (indicated by the plus signs at the same vertical height on the X axis), a small effect of factor 8 (observed as the circles being only a small distance apart vertically), and no interaction of factors A and 8 (seen as the lines being parallel). (b) Large effect of factor A, small effect of factor 8, and no interaction (which is the situation in Figure 12.1). (c) No effect of A, large effect of 8, and no interaction. (d) Large effect of A, large effect of 8, and no interaction. (e) No effect of A, no effect of 8, but interaction between A and B. (f) Large effect of A, no effect of 8, with slight interaction. (g) No effect of A, large effect of 8, with large interaction. (h) Effect of A, large effect of B, with large interaction.
Section 12.1
Two-Factor Analysis of Variance with Equal Replication
x
261
x
H] B] A]
II]
112
(e)
A2 (f)
B]
B]
x
+
X
+
+ B2
A]
A2
-
9
A]
-B2 112
(h)
(g)
FIGURE 12.2: (continued)
(d) Model I ANOVA. Recall, from Section 10.H, the distinction between fixed and random factors. Example 12.1 is an ANOYA where the levels of both factors are fixed; we did not simply pick these levels at random. A factorial analysis of variance in which all (in this case both) factors are fixed effects is termed a Modell ANOVA. In such a model, the null hypothesis of no difference among the levels of a factor is tested using F = factor MS/error MS. In Example 12.2, the appropriate F tests conclude that there is a highly significant effect of the hormone treatment on the mean plasma calcium content, and that there is not a significantly different mean plasma calcium concentration between males and females. In addition, we can test for significant interaction in a Model I ANOY A by F = interaction MS/error MS and find, in our present example, that there is no significant interaction between the sex of the bird and whether it had the hormone treatment. This is interpreted to mean that the effect of the hormone treatment on calcium is not different in males and females (i.e., the effect of the hormone is not dependent on the sex of the bird). This concept of interaction (or its converse, independence) is analogous to that employed in the analysis of contingency tables (see Chapter 23). If, in a two-factor analysis of variance, the effects of one or both factors are significant, the interaction effect mayor may not be significant. Tn fact, it is possible to encounter situations where there is a significant interaction even though each of the individual factor effects is judged to be insignificant. A significant interaction implies that the difference among levels of one factor is not constant at all levels of the second factor. Thus, it is generally not useful to speak of a factor effect-even if its F is significant-if there is a significant interaction effect.
262
Chapter
12
Two-Factor
Analysis of Variance
TABLE 12.3: Computation
of the F Statistic for Tests of Significance in a Two-Factor ANOVA with Replication
Hypothesized
effect
Factor A Factor B A
X
B interaction
Model I (factors A and B both fixed)
Model II (factors A and B both random)
Model III (factor A fixed: factor B random)
factor A MS error MS factor B MS error MS A X BMS error MS
factor A MS A x BMS factor B MS A X BMS A X BMS error MS
factor A MS A x BMS factor B MS error MS A X BMS error MS
(e) Model II ANOVA. If a factorial design is composed only of factors with random levels, then we are said to be employing a Model I! ANOVA (a relatively uncommon situation). In such a case, where two factors are involved, the appropriate hypothesis testing for significant factor effects is accomplished by calculating F = factor MS/interaction MS (see Table 12.3). We test for the interaction effect, as before, by F = interaction MS/error MS, and it is generally not useful to declare factor effects significant if there is a significant interaction effect. The Model II ANOV A for designs with more than two factors will be discussed in Chapter 14. (f) Model III ANOV A. If a factorial design has both fixed-effect and random-effect factors, then it is said to be a mixed-model." or a Model II! ANOVA. The appropriate F statistics are calculated as shown in Table 12.3. Special cases of this will be discussed in Section 12.4. This book observes Voss's (1999) "resolution" of a controversy over the appropriate F for testing a factor effect in mixed models.
(g) Underlying Assumptions. The assumptions underlying the appropriate application of the two-factor analysis of variance are basically those for the single-factor ANOV A (Section 1O.lg): The data in each cell came at random from a normally distributed population of measurements and the variance is the same in all of the populations represented by the cells. This population variance is estimated by the within-cells mean square (i.e., the error mean square). Although these hypothesis tests are robust enough that minor deviations from these assumptions will not appreciably affect them, the probabilities associated with the calculated F values lose dependability as the sampled populations deviate from normality and homoscedasticity, especially if the populations have skewed distributions. If there is doubt about whether the data satisfy these assumptions, then conclusions regarding the rejection of a null hypothesis should not be made if the associated P is near the a specified for the test. Few alternatives to the ANOV A exist when the underlying assumptions are seriously violated. In a procedure analogous to that described in Section lO.lg for single-factor analysis of variance with heterogeneous group variances, Brown and Forsythe (1974b) present a two-factor ANOV A procedure applicable when the cell variances are not assumed to have come from populations with similar variances. In some cases an appropriate data transformation (Chapter 13) can convert a set of data so the extent of nonnormality and heteroscedasticity is small. As indicated at the end *The term mixed model was introduced by A. M. Mood in 1950 (David, 1995).
Section 12.1
Two-Factor Analysis of Variance with Equal Replication
263
of the introduction to this chapter, there appears to be no nonparametric procedures to be strongly recommended for factorial ANOV A, with the exception of that of Section 12.7. (h) Pooling Mean Squares. If it is not concluded that there is a significant interaction effect, then the interaction MS and the within-cells (i.e., the error) MS are theoretically estimates of the same population variance. Because of this, some authors suggest the pooling of the interaction and within-cells sums of squares and degrees of freedom in such cases. From these pooled SS and DF values, one can obtain a pooled mean square, which then should be a better estimate of the population random error (i.e., within-cell variability) than either the error MS or the interaction MS alone; and the pooled MS will always be a quantity between the interaction MS and the error MS. The conservative researcher who does not engage in such pooling can be assured that the probability of a Type I error is at the stated a level. But the probability of a Type II error may be greater than is acceptable to some. The chance of the latter type of error is reduced by the pooling described, but confidence in stating the probability of committing a Type I error may be reduced (Brownlee, 1965: 509). Rules of thumb for deciding when to pool have been proposed (e.g., Paull, 1950; Bozivich, Bancroft, and Hartley, 1956), but statistical advice beyond this book should be obtained if such pooling is contemplated. The analyses in this text will proceed according to the conservative nonpooling approach, which Hines (1996), Mead, Bancroft, and Han (1995), and Myers and Well (2003: 333) conclude is generally advisable. (i) Multiple Comparisons. If significant differences are concluded among the levels of a factor, then the multiple comparison procedures of Section 11.1, 11.2, 11.3, or 11.4 may be employed. For such purposes, s2 is the within-cells MS, u is the within-cells DF, and the n of Chapter 11 is replaced in the present situation with the total number of data per level of the factor being tested (i.e., what we have noted in this section as bn data per level of factor A and an data per level of factor B). If there is significant interaction between the two factors, then the means of levels should not be compared. Instead, multiple comparison testing may be performed among cell means.
G) Confidence Limits for Means. We may compute confidence intervals for population means of levels of a fixed factor by the methods in Section 10.2. The error mean square, s2, is the within-cells MS of the present discussion; the error degrees of freedom, u, is the within-cells DF; and n in Section 10.2 is replaced in the present context by the total number of data in the level being examined. Confidence intervals for differences between population means are obtained by the procedures of Section 11.2. This is demonstrated in Example 12.3. EXAMPLE 12.3
Confidence Limits for the Results of Example 12.2
We concluded that mean plasma calcium concentration with the hormone treatment and those without.
is different between birds
total for nonhormone group = 135.0 mg/lOO ml = 13.50 mg/IOO ml number in nonhormone group 10 total for hormone group number in hormone group
301.5 mg/ 100 ml 10
= 30.15 mg/lOO ml
266
Chapter 12
Two-Factor Analysis of Variance
where N is the total number of data in all cells. * (For example, in Figure 12.3c, ther are two data on row 3, column 1; and (16)(9)/72 = 2. The appropriate hypothesis tests are the same as those in Section 12.1. The sums of squares, degrees of freedom, and mean squares may be calculated by some factorial ANOY A computer programs. Or, the machine formulas referred to in Table 12.1 may be applied with the following mod· ifications: For sums of squares, substitute nij for 11 in Equations 12.17 and 12.18, and use
a
cells SS
(~/-1 Xij/)2
b
= ~ ~
-
i=lj=1
factor A SS
=
t~
± I~ XUI)
=
2 : _
± (± ~Xijl)' i= 1/= 1
j= 1
= ~
C,
(12.24)
: -
C,
(12.25)
l1ij
a
within-cells (error) OF
(12.23)
l1ij
i= 1
factor B SS
:- C,
l1ij
b
1)
~(l1ij
(12.26)
i=1 j=1
(b) Disproportional Replication; Missing Data. In factorial analysis of variance, it is generally advisable to have data with equal replication in the cells (Section 12.1), or at least to have proportional replication (Section 12.2a). If equality or proportionality is not the case, we may employ computer software capable of performing such analyses of variance with disproportional replication (see Section 14.5). Alternatively, if only a very few cells have numbers of data in excess of those representing equal or proportional replications, then data may be deleted, at random, within such cells, so that equality or proportionality is achieved. Then the ANOY A can proceed as usual, as described in Section 12.1 or 12.2a. If one cell is one datum short of the number required for equal or proportional replication, a value may be estimated" for inclusion in place of the missing datum, as follows (Shearer, 1973): a
aAi
b
lliJ
+ bBj - ~ ~ ~Xi;t
-
Xijl = --------'---N+1-a-b
an
i=lj=l/=1
( 12.27)
"The number of replicates in each of the cells need not be checked against Equation 12.22 to determine whether proportional replication is present. One need check only one cell in each of a-I levels of factor A and one in each of b - 1 levels of factor B (Huck and Layne. 1974). tThe estimation of missing values is often referred to as imputation and is performed by some computer routines. However, there are many different methods for imputing missing values, especially when more than one datum is missing, and these methods do not all yield the same results.
Section 12.3
Two-Factor Analysis of Variance without Replication
267
where Xij! is the estimated value for replicate I in level i of factor A and level j of factor B; Ai is the sum of the other data in level i of factor A; B, is the sum of the other data in level j of factor B; L L L Xijl is the sum of all the known data, and N is the total number of data (including the missing datum) in the experimental design. For example. if datum XI24 had been missing in Example 12.1. it could have had a quantity inserted in its place, estimated by Equation 12.27. where a = 2. b = 2. N = 20, Ai = A I = the sum of all known data from animals receiving no hormone treatment; B, = B2 = the sum of all known data from males; and L L L Xijl = the sum of all 19 known data from both hormone treatments and both sexes. After the missing datum has been estimated, it is inserted into the data set and the ANOV A computations may proceed, with the provision that a missing datum is not counted in determining total and within-cells degrees of freedom. (Therefore. if a datum were missing in Example 12.1. the total OF would have been 18 and the within-cells OF would have been 15.) If more than one datum is missing (but neither more than 10% of the total number of data nor more data than the number of levels of any factor), then Equation 12.27 could be used iteratively to derive estimates of the missing data (e.g., using cell means as initial estimates). The number of such estimates would not enter into the total or within-cells degrees-of-freedom determinations. If only a few cells (say, no more than the number of levels in either factor) are each one datum short of the numbers required for equal or proportional replication, then the mean of the data in each such cell may be inserted as an additional datum in that cell. In the latter situation, the analysis proceeds as usual but with the total OF and the within-cells OF each being determined without counting such additional inserted data. Instead of employing these cell means themselves. however, they could be used as starting values for employing Equation 12.27 in iterative fashion. Another procedure for dealing with unequal. and nonproportional, replication is by so-called unweighted means analysis, which employs the harmonic mean of the nil'S. This will not be discussed here. None of these procedures is as desirable as when the data are equally or proportionally distributed among the cells. TWO-FACTOR ANALYSIS OF VARIANCE WITHOUT
REPLICATION
It is generally advisable that a two-factor experimental design have more than one datum in each cell, but situations are encountered in which there is only one datum for each combination of factors (i.e., n = 1 for all cells). It is sometimes feasible to collect additional data, to allow the use of the procedures of Section 12.1 or 12.2, but it is also possible to perform a two-factorial ANOV A with nonreplicated data. In a situation of no replication, each datum may be denoted by a double subscript, as Xij. where i denotes a level of factor A and j indicates a level of factor B. For a levels of factor A and b levels of factor B. the appropriate computations of sums of squares, degrees of freedom, and mean squares are shown directly below. These are analogous to equations in Section 12.1, modified by eliminating n and any summation within cells. II
total SS =
2
b
2: 2: (Xij i= Ij= I
-
X)
,
(12.28)
268
Chapter
12
Two-Factor
Analysis
of Variance
where the mean of all N data (and N = ab) is {/
X
b
LLXij i=lj= 1
= --'----
(12.29)
N Further, a
factor A SS = b
L (x,
(12.30)
i=1
factor B SS =
a:± (x,
(12.31)
j= 1
When there is no replication within cells (i.e., n = I), the cells SS of Section 12.1 is identical to the total SS, and the cells DF is the same as the total DF. Consequently, the within-cells sum of squares and degrees of freedom are both zero; that is, with only one datum per cell, there is no variability within cells. The variability among the N data that is not accounted for by the effects of the two factors is the remainder' variability: remainder SS remainder DF
= =
total SS - factor A SS - factor B SS total DF - factor A DF - factor B DF
(12.32) (12.33)
These sums of squares and degrees of freedom, and the relevant mean squares, are summarized in Table 12.4. Note that Equations 12.32 and 12.33 are what are referred to as "interaction" quantities when replication is present; with no replicates it is not possible to assess interaction in the population that was sampled. Table 12.5 TABLE 12.4: Summary
of the Calculations
for a Two-Factor
Analysis of Variance with No
Source of variation
Sum of squares (SS)
Degrees of freedom (OF)
Total [Xi; - Xl
Equation 12.27 or 12.34
N -
Factor A
Equation 12.29
a-I
Replication
[Xi
- Xl
Factor B [Xj
- Xl
Remainder
Mean square (MS)
1 factor ASS factor A OF
or 12.35 Equation
12.30
b -
12.31
(a-l)(b-l)
1
factor B SS factor B OF
or 12.26 Equation
or total OF - factor A OF - factor B OF
remainder SS remainder OF
Note: For each source of variation, the bracketed quantity indicates the variation being assessed; a is the number of levels in factor A; b is the number of factors in factor B; N is the total number of data (which is ab); Xij is the datum in level i of factor A and level j of factor B: Xi' is the mean of the
data in level i of factor A; X,; is the mean of the data in level j of factor B; and X is the mean of all N data. *Somc authors refer to "remainder" as "error" or "residual."
Section 12.3
Two-Factor
Analysis of Variance
without
Replication
269
TABLE12.5: Computation of the F Statistic for Tests of Significance in a Two-Factor ANOVA without Replication
(a) If It Is Assumed That There May Be a Significant Interaction Effect Model I Model II Modellll (factors A and B (factors A and B (factor A fixed; Hypothesized effect both fixed) both random) factor B random) Factor A
Test with caution"
factor A MS remainder MS
factor A MS remainder MS
Factor B
Test with caution"
factor B MS remainder MS
Test with caution"
A X B interaction
No test possible
No test possible
No test possible
* Analysis can be performed as in Model II, but with increased chance of Type II error. (b) If It Is Correctly Assumed That There Is No Significant Interaction Effect Hypothesized effect Model I Model II Model III Factor A
factor A MS remainder MS
factor A MS remainder MS
factor A MS remainder MS
Factor B
factor B MS remainder MS
factor B MS remainder MS
factor B MS remainder MS
A
No test possible
No test possible
No test possible
X
B interaction
summarizes the significance tests that may be performed to test hypotheses about each of the factors. Testing for the effect of each of the two factors in a Model I analysis (or testing for the effect of the random factor in a Model III design) is not advisable if there may, in fact, be interaction between the two factors (and there will be decreased test power); hut if a significant difference is concluded, then that conclusion may he accepted. The presence of interaction, also called nonadditivity, may be detectable by the testing procedure of Tukey (1949). (a) "Machine Formulas." If there is no replication in a two-factor ANOV A, the machine formulas for sums of squares are simplifications of those in Section 12.1b:
(12.34) a
total SS =
b
2: 2: XB -
(12.35)
C,
i=! j=!
t, Ct,XD) factor : ASS
=
factor : B SS =
C,
-
b
±(±XB) j=!
i=!
-
C,
a
and the remainder sum of squares is as in Equation 12.32.
(12.36)
(12.37)
270
Chapter 12
Two-Factor Analysis of Variance
(b) Multiple Comparisons and Confidence Limits. Tn a two-factor ANOY A with no replication, the multiple-comparison and confidence-limit considerations of Sections 12.1i and 12.1j may be applied. However, there is no within-cells (error) mean square or degrees of freedom in the absence of replication. If it can be assumed that there is no interaction between the two factors, then the remainder MS may be employed where s2 is specified in those sections, the remainder DF is used in place of u, a ~ the same as an, and b is the same as bn. If, however, there may be interaction, then multiple comparisons and confidence limits should be avoided. 12.4
TWO-FACTOR MEASURES
ANALYSIS
OF VARIANCE
WITH
RANDOMIZED
BLOCKS OR REPEATED
The analysis of variance procedure of Section 10.1 (the completely randomized design) is for situations where the data are all independent of each other and the experimental units (e.g.,the pigs in Example 10.1) are assigned to the k treatments in a random fashion (that is, random except for striving for equal numbers in each treatment). The following discussion is of two types of ANOY A, for the same null and alternate hypotheses as in Section 10.1, in which each datum in one of the k groups is related to one datum in each of the other groups. (a) Randomized Blocks. To address the hypotheses of Example 10.1, each of fo animals from the same litter could be assigned to be raised on each of four diets. Th body weights of each set of four animals (i.e., the data for each litter) would be sai to constitute a block, for the data in a litter are related to each other (namely b having the same mother). With a experimental groups (denoted as k in ChapterlO and b blocks, there would be N = ab data in the analysis. The concept of block is an extension, for more than two groups, the concept of pairs (Section 9.1, whic deals with two groups). This experimental plan is called a randomized-complete-bloc design, and each block contains a measurement for each of the a treatments. Onl complete blocks will be considered here, so this will simply be called a randomized block design," When the analysis employs blocks of data within which the data ar related, the hypothesis testing of differences among groups can be more powe than in the completely randomized design. An illustration of a randomized-block ANOY A is in Example 12.4. The inten of the experiment shown is to determine whether there is a difference among thr anesthetic drugs in the time it takes for the anesthetic to take effect when injecte intramuscularly into cats of a specified breed. Three cats are obtained from each 0 five laboratories; because the laboratories may differ in factors such as the food an exercise the animals have had, the three from each laboratory are considered to b a block. Thus, the experiment has a = 3 treatment groups and b = 5 blocks, an the variable in anesthetic group i and block j is indicated as Xij. The sum of the dat in group i can be denoted as LJ=1 Xij, the total of the measurements in block j
Ljl= I Xij,
and the sum of all ab data as N = Lj'= I LJ= I Xij. In this example, there' only one datum in each of the ab cells (i.e., one per combination of treatment an block), a very common situation when working with randomized blocks. In Example 12.4, the interest is whether there is any difference among the effects the different anesthetic drugs, not whether there is any difference due to laborato source of the animals. (Indeed, the factor defining the blocks is sometimes referred t "The randomized-block (1926; David, 1995).
experimental
design was developed
and so named
by R. A. Fish
Section 12.4
Analysis with Randomized Blocks or Repeated Measures
271
EXAMPLE 12.4 A Randomized Complete Block Analysis of Variance (Model III Two-Factor Analysis of Variance) without Within-Cell Replication He: The mean time for effectiveness is the same for all three anesthetics (i.e., ILl = IL2 = IL3)' H A: The mean time for effectiveness is not the same for all three anesthetics. Q'
=
0.05
Each block consists of three cats from a single source, and each block is from a different source. Within a block, the cats are assigned one of the anesthetics at random, by numbering the cats 1,2, and 3 and assigning each of them treatment 1, 2, or 3 at random. For this experiment the randomly designated treatments, from 1 to 3, for each block were as follows, with the anesthetic's time for effect (in minutes) given in parentheses: Animal 2
Animal I
Animal 3
Block l:
Treatment 3 (10.75)
Treatment (8.25)
1
Treatment 2 (11.25)
Block 2:
Treatment 1 (10.00)
Treatment 3 (11.75)
Treatment 2 (12.50)
Block 3:
Treatment 3 (11.25)
Treatment 1 (10.25)
Treatment 2 (12.00)
Block 4:
Treatment (9.50)
1
Treatment (9.75)
2
Treatment (9.00)
Block 5:
Treatment 2 (11.00)
Treatment (8.75)
1
Treatment 3 (10.00)
3
These data are rearranged as follows in order to tabulate the treatment, block, and grand totals (and, if not using the machine formulas, the treatment, block, and grand means). Treatment (i) Block
U)
1
2
3
Block Total
Block Mean
(iXiJ)
(X-j)
1=1
1 2 3 4 5
8.25 11.00 10.25 9.50 8.75
11.25 12.50 12.00 9.75 11.00
10.75 11.75 11.25 9.00 10.00
47.75
56.50
52.75
9.55
11.30
10.55
b
Treatment Treatment
LXi; /=1 mean: Xi-
total:
30.25 35.25 33.50 28.25 29.75
10.08 11.75 11.17 9.42 9.92
272
Chapter 12
Two-Factor Analysis of Variance a
Grand total
=
/,
2: 2: Xii
= 157.00
Grand mean
=
X = 10.47
i=I/=1
The sums of squares required in the following table may be obtained equations in Section 12.3, as referenced in Table 12.4. Source of variation Total
SS 21.7333
14
Treatments Blocks
DF
7.7083
2
11.0667
4
2.9583
8
Remainder
using the
MS
3.8542 0.3698
F = treatments MS = 3.8542 = 10.4 remainder MS 0.3698
Fo.os(
1).2.8
= 4.46, so reject Ho·
0.005 < P < O.Ol [P = 0.0060] as a "nuisance factor" or "nuisance variable.") The anesthetics, therefore, are three levels of a fixed-effects factor, and the laboratories are five levels of a random-effects factor. So the completely randomized experimental design calls for a mixed-model (i.e., Model Ill) analysis of variance (Section 12.1f). Blocking by laboratory is done to account for more of the total variability among all N data than would be accounted for by considering the fixed-factor effects alone. This will decrease the mean square in the denominator of the F that assesses the difference among the treatments, with the intent of making the ANOV A more powerful than if the data were not collected in blocks. The assignment of an experimental unit to each of the animals in a block should be done at random. For this purpose, Appendix Table B.41 or other source of random numbers may be consulted. In the present example, the experimenter could arbitrarily assign numbers 1, 2, and 3 to each cat from each laboratory. The random-number table should then be entered at a random place, and for each block a random sequence of the numerals 1, 2, and 3 (ignoring all other numbers and any repetition of a l, 2, or 3) will indicate which treatments should be applied to the animals numbered 1,2, and 3 in that block. So, in Example 12.4, a sequence of animals numbered 3, 1,2 was obtained for the first block; 1,3,2 for the second; 3, 2, 1 for the third; and so on. The randomized-block experimental design has found much use in agricultural research, where b plots of ground are designated as blocks and where the environmental (for example, soil and water) conditions are very similar within each block (though not necessarily among blocks). Then a experimental treatments (e.g., fertilizer of pesticide treatment) are applied to random portions of each of the b blocks. (b) Randomized Blocks with Replication. It is common for the randomizedcomplete-block experimental design to contain only one datum per cell. That is what is demonstrated in Example 12.4, with the calculation of F executed as shown in the last column of Tables 12.5a and 12.5b. If there are multiple data per cell (what is known as the generalized randomized-block design), this would be handled
Section 12.4
Analysis with Randomized Blocks or Repeated Measures
273
as a mixed-model two-factor ANOV A with replication. Doing so would obtain the mean squares and degrees of freedom as indicated earlier in this chapter; and the appropriate F's would be those indicated in the last column of Table 12.3. A case where this would be applicable is where the experiment of Example 12.4 employed a total of six cats-instead of three-from each laboratory, assigning two of the six at random to each of the three treatments. (c) Repeated Measures. The hypotheses of Example 12.4 could also be tested using an experimental design differing from, but related to, that of randomized blocks. In an experimental procedure using what are called repeated measures, each of b experimental animals would be tested with one of the a anesthetic drugs; then, after the effects of the drug had worn off, one of the other anesthetics would be applied to the same animal; and after the effects of that drug were gone, the third drug would be administered to that animal. Thus, b experimental animals would be needed, far fewer than the ab animals required in the randomized-block experiment of Example 12.4, for each block of data contains a successive measurements from the same experimental animal (often referred to as an experimental "subject"). If possible, the application of the a treatments to each of the b subjects should be done in a random sequence, comparable to the randomization within blocks in Section 12.4a. Also, when collecting data for a repeated-measures analysis of variance, sufficient time should be allowed between successive treatments so the effect of a treatment is not contaminated with the effect of the previous treatment (i.e., so there is no "carryover effect" from treatment to treatment).* Thus, the arrangement of data from a repeated-measures experiment for the hypotheses of Example 12.4 would look exactly like that in that example, except that each of the five blocks of data would be measurements from a single animal, instead of from a animals, from a specified laboratory. There are some repeated-measures studies where the treatments are not administered in a random sequence to each subject. For example, we might wish to test the effect of a drug on the blood sugar of horses at different times (perhaps at 1, 2, and 5 hours) after the drug's administration. Each of b horses ("subjects") could be given the drug and its blood sugar measured before administering the drug and subsequently at each of the three specified times (so a would be 4). In such a situation, the levels of factor A (the four times) are fixed and are the same for all blocks, and carryover effects are a desired part of the study.' The repeated-measures experimental design is commonly used by psychological researchers, where the behavioral response of each of several subjects is recorded for each of several experimental circumstances. (d) Randomized-Block and Repeated-Measures Assumptions. In a randomizedblock or repeated-measures experiment, we assume that there are correlations among the measurements within a block or among measurements repeated on a subject. 
For the randomized-block data in Example 12.4, it may be reasonable to suppose that if an animal is quickly affected by one anesthetic, it will be quickly affected by each * Although the experiment should be conducted to avoid carryover effects, the times of administering the drug (first, second, or third) could be considered the levels of a third factor, and a three-factor ANOV A could be performed in what is referred to as a "crossover experimental design" (described in Section 14.1a). '[Kirk (1995: 255) calls randomized levels of factor A within blocks a "subjects-by-treatment" experimental design and uses the term subjects-bv-trials to describe a design where the sequence of application of the levels of factor A to is the same in each block.
274
Chapter
12
Two-Factor
Analysis of Variance
of the others. And for the repeated-measures situation described in Section 12.4c, it might well be assumed that the effect of a drug at a given time will be related to the effect at a previous time. However, for the probability of a calculated F to be compared dependably to tabled values of F, there should be equal correlations among all pairs of groups of data. So, for the experiment in Example 12.4, the correlation between the data in groups 1 and 2 is assumed to be the same as the correlation between the data in groups 1 and 3, and the same as that between data in groups 2 and 3. This characteristic, referred to as compound symmetry, is related to what statisticians call sphericity (e.g., Huynh and Feldt, 1970), or circularity (e.g., Rouanet and Lepine, 1970), and it-along with the usual ANOV A assumptions (Section l2.1g)-is an underlying assumption of randomized-block and repeated-measures analyses of variance. Violation of this assumption is, unfortunately, common but difficult to test for, and the investigator should be aware that the Type J error in such tests may be greater than the specified a. An alternative procedure for analyzing data from repeated-measures experiments, one that does not depend upon the sphericity assumption, is multivariate analysis of variance (see Chapter 16), which has gained in popularity with the increased availability of computer packages to handle the relatively complex computations. This assumption and this alternative are discussed in major works on analysis of variance and multivariate analysis (e.g., Girden, 1992; Kirk, 1995; Maxwell and Delaney, 2004; O'Brien and Kaiser, 1985; and Stevens, 2002). If there are missing data, the considerations of Section 12.2b apply. If the experimental design has only one datum for each combination of the factors, and one of the data is missing, then the estimation of Equation 12.26 becomes {/
aAi A
Xij
+
bBj -
= --------------~---
b
L LXij i=lj=!
(12.38)
(a-l)(b-l)
If more than one datum is missing in a block, the entire block can be deleted from the analysis. (e) More Than One Fixed-Effects Factor. There are many possible experimental designs when the effects of more than one factor are being assessed. One other situation would be where the experiment of Example 12.4 employed blocks as a random-effects factor along with two fixed-effects factors, perhaps the drug and the animal's sex. The needed computations of sums of squares, degrees of freedom, and mean squares would likely be performed by computer, and Appendix D (Section D.3b for this hypothetical example) would assist in testing the several hypotheses. 12.5
MULTIPLE COMPARISONS OF VARIANCE
AND CONFIDENCE INTERVALS IN TWO-FACTOR
ANALYSIS
If a two-factor analysis of variance reveals a significant effect among levels of a fixed-effects factor having more than two levels, then we can determine between which levels the significant difference(s) occur(s). If the desire is to compare all pairs of means for levels in a factor, this may be done using the Tukey test (Section 11.1). The appropriate SE is calculated by Equation 11.2, substituting for n the number of data in each level (i.e., there are bn data in each level of factor A and an data in levels of factor B); s2 is the within-cells MS and v is the within-cells degrees of freedom. If there is no replication in the experiment, then we are obliged to use the remainder MS in place of the within-cells MS and to use the remainder DF as u.
Section 12.6
Sample Size, Detectable Difference, and Power
275
The calculation of confidence limits for the population mean estimated by each significantly different level mean can be performed by the procedures of Section 11.2, as can the computation of confidence limits for differences between members of pairs of significantly different level means. If it is desired to compare a control mean to each of the other level means, Dunnett's test, described in Section 11.3, may be used; and that section also shows how to calculate confidence limits for the differences between such means. Scheffe's procedure for multiple contrasts (Section 11.4) may also be applied to the levels of a factor, where the critical value in Equation 11.18 employs either a or b in place of k (depending, respectively, on whether the levels of factor A or B are being examined), and the within-cells DF is used in place of N - k. In all references to Chapter 11, n in the standard-error computation is to be replaced by the number of data per level, and s2 and 1J are the within-cells MS and OF, respectively. Multiple-comparison testing and confidence-interval determination are appropriate for levels of a fixed-effects factor but are not used with random-effects factors. (a) If Interaction Is Significant. On concluding that there is a significant interaction between factors A and B, it is generally not meaningful to test for differences among levels of either of the factors. However, it may be desired to perform multiple comparison testing to seek significant differences among cell means. This can be done with any of the above-mentioned procedures, where n (the number of data per cell) is appropriate instead of the number of data per level. For the Scheffe test critical value (Equation 1].] 8), k is the number of cells (i.e., k = ab) and N - k is the withincells DF. (b) Randomized Blocks and Repeated Measures. In randomized-block and repeated-measures experimental designs, the sphericity problem mentioned in Section 12.4d is reason to recommend that multiple-comparison testing not use a pooled variance but, instead, employ the Games and Howell procedure presented in Section 11.1b (Howell, 1997: 471). In doing so, the two sample sizes (ns and nA) for calculating SE (in Equation 11.5) will each be b. An analogous recommendation when the Dunnett test (Section 11.3) is performed would be to use Equation 11.11a in favor of Equation 11.11. And for multiple contrasts, the procedures of Section 11.4a would be followed. A similar recommendation for confidence limits for each mean is to use the variance associated with that mean instead of a pooled variance. Confidence limits for the difference between two means would employ Equation 11.7.
12.6 SAMPLE SIZE, DETECTABLE DIFFERENCE, AND POWER IN TWO-FACTOR OF VARIANCE
ANALYSIS
The concepts and procedures of estimating power, sample size, and rrurnmum detectable difference for a single-factor ANOV A are discussed in Section] 0.3, and the same considerations can be applied to fixed-effects factors in a two-factor analysis of variance. (The handling of the fixed factor in a mixed-model ANOV A will be explained in Section 12.6e.) We can consider either factor A or factor B (or both, but one at a time). Let us say k' is the number of levels of the factor being examined. (That is, k' = a for factor A: k' = b for factor B.) Let us define n' as the number of data in each level. (That is, n' = bn for factor A; n' = an for factor B.) We shall also have s2 refer
276
Chapter 12
Two-Factor Analysis of Variance
to the within-cells MS. The mean of the population denoted as fLm. (a) Power of the Test.
from which level m came is
We can now generalize equation 10.32 as k'
11.'
2:
(fLm
m=1
cjJ=
(12.39)
Equation 10.33 as k'
2: fL=
fLm
m=1
(12.40)
k'
and Equation 10.34 as
-
cjJ =
) n'[j2
-
2k's2 '
(12.41)
in order to estimate the power of the analysis of variance in detecting differences among the population means of the levels of the factor under consideration. After any of the computations of cjJ have taken place, either as above or as below, then we proceed to employ Appendix Figure B.l just as we did in Section 10.3, with VI being the factor OF (i.e., k' 1), and V2 referring to the within-cells (i.e., error) OF. Later in this book there are examples of ANOV As where the appropriate denominator for F is some mean square other than the within-cells MS. In such a case, s2 and V2 will refer to the relevant MS and OF. (b) Sample Size Required. By using Equation 12.41 with a specified significance level, and detectable difference between means, we can determine the necessary minimum number of data per level, n', needed to perform the experiment with a desired power. This is done iteratively, as it was in Example 10.6. (c) Minimum Detectable Difference. In Example 10.7 we estimated the smallest detectable difference between population means, given the significance level, sample size, and power of a one-way ANOV A. We can pose the same question in the two-factor experiment, generalizing Equation 10.35 as (12.42)
8= n'
(d) Maximum Number of Levels Testable. The considerations of Example 10.8 can be applied to the two-factor case by using Equation 12.41 instead of Equation 10.34. (e) Mixed-Model ANOVA. All the preceding considerations of this section can be applied to the fixed factor in a mixed-model (Model III) two-factor analysis of variance with the following modifications. For factor A fixed, with replication within cells, substitute the interaction MS for the within-cells MS, and use the interaction DF for V2.
For factor A fixed, with no replication (i.e., a randomized-block experimental design), substitute the remainder MS for the within-cells MS, and use the remainder DF for \nu_2. If there is no replication, then n = 1 and n' = b. (Recall that if there is no replication, we do not test for interaction effect.)

12.7 NONPARAMETRIC RANDOMIZED-BLOCK OR REPEATED-MEASURES ANALYSIS OF VARIANCE
Friedman's* test (1937, 1940) is a nonparametric analysis that may be performed on a randomized-block experimental design, and it is especially useful with data that do not meet the parametric analysis-of-variance assumptions of normality and homoscedasticity, namely that the k samples (i.e., the k levels of the fixed-effect factor) come from populations that are each normally distributed and have the same variance. Kepner and Robinson (1988) showed that the Friedman test compares favorably with other nonparametric procedures. If the assumptions of the parametric ANOVA are met, the Friedman test will be 3k/[\pi(k + 1)] as powerful as the parametric method (van Elteren and Noether, 1959). (For example, the power of the nonparametric test ranges from 64% of the power of the parametric test when k = 2, to 72% when k = 3, to 87% when k = 10, to 95% when k approaches infinity.) If the assumptions of the parametric test are seriously violated, it should not be used and the Friedman test is typically advisable. Where k = 2, the Friedman test is equivalent to the sign test (Section 24.6).

In Example 12.5, Friedman's test is applied to the data of Example 12.4. The data within each of the b blocks are assigned ranks. The ranks are then summed for each of the a groups, each rank sum being denoted as R_i. The test statistic, \chi_r^2, is calculated as†

\chi_r^2 = \frac{12}{b a (a + 1)} \sum_{i=1}^{a} R_i^2 - 3b(a + 1).    (12.44)
Critical values of \chi_r^2, for many values of a and b, are given in Appendix Table B.14. When a = 2, the Wilcoxon paired-sample test (Section 9.5) should be used; if b = 2, then the Spearman rank correlation (Section 19.9) should be employed. Appendix Table B.14 should be used when the a and b of an experimental design are contained therein. For a and b beyond this table, the distribution of \chi_r^2 may be considered to be approximated by the \chi^2 distribution (Appendix Table B.1), with a - 1 degrees of freedom. Fahoome (2002) advised that the chi-square approximation is acceptable when b is at least 13 when testing at the 0.05 level of significance and at least 23 when \alpha = 0.01 is specified. However, Iman and Davenport (1980) showed that this commonly used approximation tends to be conservative (i.e., it may have a probability of a Type I error smaller than the stated \alpha).

* Milton Friedman (1912-2006), American economist and winner of the 1976 Nobel Memorial Prize in Economic Science. He is often credited with popularizing the statement, "There's no such thing as a free lunch," and in 1975 he published a book with that title; F. Shapiro reported that Friedman's statement had been in use by others more than 20 years before (Hafner, 2004).

† An equivalent formula is

\chi_r^2 = \frac{ 12 \sum_{i=1}^{a} (R_i - \bar{R})^2 }{ b a (a + 1) }    (12.43)

(Pearson and Hartley, 1976: 52), showing that we are assessing the difference between the rank sums (R_i) and the mean of the rank sums (\bar{R}).
EXAMPLE 12.5  Friedman's Analysis of Variance by Ranks Applied to the Randomized-Block Data of Example 12.4

H0: The time for effectiveness is the same for all three anesthetics.
HA: The time for effectiveness is not the same for all three anesthetics.
\alpha = 0.05
The data from Example 12.4, for the three treatments and five blocks, are shown here, with ranks (1, 2, and 3) within each block shown in parentheses.

                          Treatment (i)
Block (j)         1            2            3
    1         8.25 (1)    11.25 (3)    10.75 (2)
    2        11.00 (1)    12.50 (3)    11.75 (2)
    3        10.25 (1)    12.00 (3)    11.25 (2)
    4         9.50 (2)     9.75 (3)     9.00 (1)
    5         8.75 (1)    11.00 (3)    10.00 (2)

Rank sum (R_i)     6           15            9
Mean rank (R̄_i)  1.2          3.0          1.8

a = 3, b = 5

\chi_r^2 = \frac{12}{b a (a + 1)} \sum_{i=1}^{a} R_i^2 - 3b(a + 1)
        = \frac{12}{(5)(3)(3 + 1)} (6^2 + 15^2 + 9^2) - 3(5)(3 + 1)
        = 0.200(342) - 60 = 8.400

(\chi_r^2)_{0.05,3,5} = 6.400

Reject H0.   P < 0.01 [P = 0.0085]

F_F = \frac{ (b - 1)\chi_r^2 }{ b(a - 1) - \chi_r^2 } = \frac{ (5 - 1)(8.4) }{ 5(3 - 1) - 8.4 } = \frac{33.6}{1.6} = 21.0

F_{0.05(1),2,4} = 6.94

Reject H0.   0.005 < P < 0.01
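The rank computation above is easy to check with off-the-shelf software. The short Python sketch below applies SciPy's Friedman routine to the Example 12.5 data, with each argument holding one treatment's measurements across the five blocks; the variable names are illustrative only.

```python
# A minimal check of Example 12.5 with SciPy's Friedman test.
# Each list holds one anesthetic's times across the five blocks.
from scipy.stats import friedmanchisquare

treatment_1 = [8.25, 11.00, 10.25, 9.50, 8.75]
treatment_2 = [11.25, 12.50, 12.00, 9.75, 11.00]
treatment_3 = [10.75, 11.75, 11.25, 9.00, 10.00]

statistic, p_value = friedmanchisquare(treatment_1, treatment_2, treatment_3)
print(statistic)  # 8.4, matching the hand calculation of chi-square-r
print(p_value)    # P from the chi-square approximation with a - 1 = 2 DF
```

Note that SciPy reports the probability from the chi-square approximation, whereas the exact-table probability quoted in the example (P = 0.0085) is somewhat smaller, consistent with the conservatism of the approximation noted above.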
("2
h2 h-; h,
b2 h~ h,
h2
bJ
h,
h2
hJ
4.4 4.3 4.7 4.1
3.7 3.9 4.0 4.4
4.94.3 4.7 4.1 4.9 3.8 5.3 4.7
4.1 4.6 4.3 4.9 4.5 4.2 3.H 4.5
3.7 3.9 4.1 4.5
4.9 4.6 5.3 5.0
5.2 4.7 5.6 4.7 5.H 5.0 5.4 4.5
5.06.1 5.4 6.2 5.7 6.5 5.3 5.7
5.5 5.9 5.6 5.0
3.9 3.3 3.4 3.7
4.8 4.5 5.0 4.6
5.0 5.2 4.6 4.9
4.9 5.5 5.5 5.3
5.9 5.3 5.5 5.7
6.0 5.7 5.5 5.7
6.1 5.3 5.5 5.8
4.1 3.9 4.3 4.0
5.6 5.8 5.4 6.1
5.0 5.4 4.7 5.1
6.0 6.3 5.7 5.9
14.2. Use an appropriate computer program to test for the effects of all factors and interactions in the following 2 × 2 × 2 × 3 Model I analysis-of-variance design, where ai is a level of factor A, bi is a level of factor B, ci is a level of factor C, and di is a level of factor D. [The column arrangement of the factor-level combinations below is inferred from the partially legible headings.]

                    a1                          a2
              b1          b2              b1          b2
           c1    c2    c1    c2        c1    c2    c1    c2
d1        12.2  13.4  12.2  13.1      10.9  12.1  10.1  11.2
          12.6  13.1  12.4  13.0      11.3  12.0  10.2  10.8
          12.5  13.5  12.3  13.4      11.2  11.7   9.8  10.7
d2        11.9  12.8  11.8  12.7      10.6  11.3  10.0  10.9
          11.8  12.6  11.9  12.5      10.4  11.1   9.8  10.6
          12.1  12.4  11.6  12.3      10.3  11.2   9.8  10.7
d3        12.6  13.0  12.5  13.0      11.1  11.9  10.0  10.9
          12.8  12.9  12.7  12.7      11.1  11.8  10.4  10.5
          12.9  13.1  12.4  13.2      11.4  11.7  10.1  10.8
14.3. Using an appropriate computer program, test for all factor and interaction effects in the following Model I 3 × 2 analysis of variance with unequal replication.

              b1                            b2
a1    34.1  36.9  33.2  35.1        35.6  36.3  34.7  35.8
a2    38.6  39.1  41.3  41.4        40.3  41.3  42.7  41.9  40.8
a3    41.0  41.4  43.0  43.4        42.1  42.7  43.1  44.8  44.5
14.4. A Latin-square experimental design was used to test for the effect of four hormone treatments (A1, A2, A3, and A4) on the blood calcium levels (measured in mg Ca per 100 ml of blood) of adult farm-raised male ostriches. The two blocking factors are farms (factor B) and analytical methods (factor C). Test the null hypothesis H0: The mean blood-calcium level is the same with all four hormone treatments.
[The 4 × 4 Latin-square data table for this exercise (farms B1-B4 as rows, analytical methods C1-C4 as columns, with treatments A1-A4 assigned within the square) is too garbled in this copy to reconstruct cell by cell; the recoverable blood-calcium values include 12.5, 9.7, 11.2, 12.0, 10.3, 13.1, 9.5, 13.0, 9.4, 8.8, 11.7, 12.4, 14.3, 10.6, 7.1, and 10.1.]
CHAPTER 15

Nested (Hierarchical) Analysis of Variance

15.1 NESTING WITHIN ONE FACTOR
15.2 NESTING IN FACTORIAL EXPERIMENTAL DESIGNS
15.3 MULTIPLE COMPARISONS AND CONFIDENCE INTERVALS
15.4 POWER, DETECTABLE DIFFERENCE, AND SAMPLE SIZE IN NESTED ANALYSIS OF VARIANCE
Chapters 12 and 14 dealt with analysis-of-variance experimental designs that the statistician refers to as crossed. A crossed experiment is one where all possible combinations of levels of the factors exist; the cells of data are formed by each level of one factor being in combination with each level of every other factor. Thus, Example 12.1 is a two-factor crossed experimental design, for each sex is found in combination with each hormone treatment. In Example 14.1, each of the three factors is found in combination with each of the other factors.

In some experimental designs, however, we may have some levels of one factor occurring in combination with the levels of one or more other factors, and other distinctly different levels occurring in combination with others. In Example 15.1a, where blood-cholesterol concentration is the variable, there are two factors: drug type and drug source. Each drug was obtained from two sources, but the two sources are not the same for all the drugs. Thus, the experimental design is not crossed; rather, we say it is nested (or hierarchical). One factor (drug source) is nested within another factor (drug type). A nested factor, as in the present example, is typically a random-effects factor, and the experiment may be viewed as a modified one-way ANOVA where the levels of this factor (drug source) are samples and the cholesterol measurements within a drug source are called a "subsample."

Sometimes experiments are designed with nesting in order to test a hypothesis about difference among the samples. More typical, however, is the inclusion of a random-effects nested factor in order to account for some within-groups variability and thus make the hypothesis testing for the other factor (usually a fixed-effects factor) more powerful.

15.1 NESTING WITHIN ONE FACTOR

In an experimental design such as in Example 15.1a, the primary concern is to detect population differences among levels of the fixed-effects factor (drug type). We can often employ a more powerful test by nesting a random-effects factor that can account for some of the variability within the groups of interest. The partitioning of the variability in a nested ANOVA may be observed in this example.

(a) Calculations for the Nested ANOVA. Testing the hypotheses in Example 15.1a involves calculating relevant sums of squares and mean squares. This is often done by computer; it can also be accomplished with a calculator as follows
EXAMPLE 15.1a  A Nested (Hierarchical) Analysis of Variance
The variable is blood cholesterol concentration in women (in mg/100 ml of plasma). This variable was measured after the administration of one of three different drugs to each of 12 women, and each administered drug was obtained from one of two sources.

                              Drug 1              Drug 2              Drug 3
                         Source A  Source Q  Source D  Source B  Source L  Source S

                            102       103       110       109       104       105
                            104       104       108       108       106       107

n_ij                          2         2         2         2         2         2
\sum_l X_ijl                206       207       218       217       210       212
\bar{X}_ij                  103     103.5       109     108.5       105       106

n_i                                4                   4                   4
\sum_j \sum_l X_ijl              413                 435                 422
\bar{X}_i                     103.25              108.75               105.5

N = 12;   \sum_i \sum_j \sum_l X_ijl = 1270;   \bar{X} = 105.8333
(see Example 15.1b). In the hierarchical design described, we can uniquely designate each datum by using a triple-subscript notation, where X_ijl indicates the lth datum in subgroup j of group i. Thus, in Example 15.1a, X_222 = 108 mg/100 ml, X_311 = 104 mg/100 ml, and so on. For the general case, there are a groups, numbered 1 through a, and b is the number of subgroups in each group. For Example 15.1a, there are three levels of factor A (drug type), and b (the number of levels of factor B, i.e., sources for each drug) is 2. The number of data in subgroup j of group i may be denoted by n_ij (2 in this experiment), and the total number of data in group i is n_i (in this example, 4). The total number of observations in the entire experiment is N = \sum_{i=1}^{a} n_i (which could also be computed as N = \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij}). The sum of the data in subgroup j of group i is calculated as \sum_{l=1}^{n_{ij}} X_{ijl}; the sum of the data in group i is \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl}; and the mean of group i is

\bar{X}_i = \frac{ \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} }{ n_i }.    (15.1)

The grand mean of all the data is

\bar{X} = \frac{ \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} }{ N }.    (15.2)
EXAMPLE 15.1b  Computations for the Nested ANOVA of Example 15.1a

                              Drug 1              Drug 2              Drug 3
                         Source A  Source Q  Source D  Source B  Source L  Source S

\left( \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 / n_{ij}:
                          21218.0   21424.5   23762.0   23544.5   22050.0   22472.0

\sum_{i=1}^{a} \frac{ \left( \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ n_i } = 42642.25 + 47306.25 + 44521.00 = 134469.50

\sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl}^2 = 134480.00

C = \frac{ \left( \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ N } = \frac{1270^2}{12} = 134408.33

total SS = 134480.00 - 134408.33 = 71.67

among all subgroups SS = \sum_{i=1}^{a} \sum_{j=1}^{b} \frac{ \left( \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ n_{ij} } - C = 134471.00 - 134408.33 = 62.67

error SS = total SS - among all subgroups SS = 71.67 - 62.67 = 9.00

groups SS = \sum_{i=1}^{a} \frac{ \left( \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ n_i } - C = 134469.50 - 134408.33 = 61.17

subgroups SS = among all subgroups SS - groups SS = 62.67 - 61.17 = 1.50

Source of variation                     SS     DF      MS
Total                                 71.67    11
Among all subgroups (Sources)         62.67     5
Groups (Drugs)                        61.17     2    30.58
Subgroups                              1.50     3     0.50
Error                                  9.00     6     1.50

H0: There is no difference among the drug sources in affecting mean blood cholesterol concentration.
HA: There is difference among the drug sources in affecting mean blood cholesterol concentration.

F = 0.50/1.50 = 0.33.   F_{0.05(1),3,6} = 4.76.   Do not reject H0.   P > 0.50 [P = 0.80]

H0: There is no difference in mean cholesterol concentrations owing to the three drugs (i.e., \mu_1 = \mu_2 = \mu_3).
HA: There is difference in mean cholesterol concentrations owing to the three drugs.

F = 30.58/0.50 = 61.16.   F_{0.05(1),2,3} = 9.55.   Reject H0.   0.0025 < P < 0.005 [P = 0.0037]
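The sums of squares in Example 15.1b lend themselves to direct computation. The Python sketch below reproduces the partitioning for the drug data using the machine formulas (Equations 15.3a, 15.6a, 15.8a, 15.10, and 15.12); the nested-dictionary layout and variable names are just one convenient arrangement, not the text's notation.

```python
# A minimal sketch of the nested-ANOVA sums of squares for Example 15.1a.
# Groups are drugs; each group contains two subgroups (sources) of data.
data = {
    "drug1": {"A": [102, 104], "Q": [103, 104]},
    "drug2": {"D": [110, 108], "B": [109, 108]},
    "drug3": {"L": [104, 106], "S": [105, 107]},
}

all_values = [x for grp in data.values() for sub in grp.values() for x in sub]
N = len(all_values)
C = sum(all_values) ** 2 / N                       # correction term (Eq. 15.4)

total_ss = sum(x * x for x in all_values) - C      # Eq. 15.3a
among_sub_ss = sum(sum(sub) ** 2 / len(sub)
                   for grp in data.values()
                   for sub in grp.values()) - C    # Eq. 15.8a
groups_ss = sum(sum(x for sub in grp.values() for x in sub) ** 2
                / sum(len(sub) for sub in grp.values())
                for grp in data.values()) - C      # Eq. 15.6a
subgroups_ss = among_sub_ss - groups_ss            # Eq. 15.10
error_ss = total_ss - among_sub_ss                 # Eq. 15.12

print(total_ss, among_sub_ss, groups_ss, subgroups_ss, error_ss)
# Expected: 71.67, 62.67, 61.17, 1.50, 9.00 (to two decimal places)
```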
The total sum of squares for this ANOVA design considers the deviations of the X_ijl from \bar{X} and may be calculated as

total SS = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} (X_{ijl} - \bar{X})^2    (15.3)

or by this "machine formula":

total SS = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl}^2 - C,    (15.3a)

where

C = \frac{ \left( \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ N }.    (15.4)

For the total variability,

total DF = N - 1.    (15.5)
The variability among groups (i.e., the deviations \bar{X}_i - \bar{X}) is expressed as the "among groups SS" or, simply,

groups SS = \sum_{i=1}^{a} n_i (\bar{X}_i - \bar{X})^2    (15.6)

or

groups SS = \sum_{i=1}^{a} \frac{ \left( \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ n_i } - C;    (15.6a)

and

groups DF = a - 1.    (15.7)
There is a total of ab subgroups in the design, and, considering them as if they were groups in a one-way ANOVA, we can calculate a measure of the deviations \bar{X}_{ij} - \bar{X} as

among all subgroups SS = \sum_{i=1}^{a} \sum_{j=1}^{b} n_{ij} (\bar{X}_{ij} - \bar{X})^2    (15.8)

or

among all subgroups SS = \sum_{i=1}^{a} \sum_{j=1}^{b} \frac{ \left( \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 }{ n_{ij} } - C;    (15.8a)

and

among all subgroups DF = ab - 1.    (15.9)
The variability due to the subgrouping within groups is evidenced by the deviations of subgroup means from their group means, \bar{X}_{ij} - \bar{X}_i, and the appropriate sum of squares is the "among subgroups within groups" SS, which will be referred to as

subgroups SS = among all subgroups SS - groups SS;    (15.10)

and

subgroups DF = among all subgroups DF - groups DF = a(b - 1).    (15.11)
The within-subgroups, or "error," variability expresses the deviations X_{ijl} - \bar{X}_{ij}, namely the deviations of data from their subgroup means; it is essentially the within-cells variability encountered in Chapters 12 and 14. The appropriate sum of squares is obtained by difference:

error SS = total SS - among all subgroups SS,    (15.12)

with

error DF = total DF - among all subgroups DF = N - ab.    (15.13)
The summary of this hierarchical analysis of variance is presented in Table 15.1. Recall that MS = SS/DF. Some similarities may be noted between Tables 12.1 and 15.1, but in the nested ANOVA of Table 15.1 we cannot speak of interaction between the two factors. Calculations for the data and hypotheses of Example 15.1a are shown in Example 15.1b.

(b) Hypothesis Testing in the Nested ANOVA. For the data in Example 15.1a, we can test the null hypothesis that no difference in cholesterol occurs among subgroups (i.e., the source of the drugs has no effect on the mean concentration of blood cholesterol). We do this by examining

F = \frac{ \text{subgroups MS} }{ \text{error MS} }.    (15.14)

For Example 15.1b, this is F = 0.50/1.50 = 0.33; since F_{0.05(1),3,6} = 4.76, H0 is not rejected. (The exact probability of an F at least this large if H0 is true is 0.80.) The null hypothesis that there is no difference in cholesterol with the administration of the three different drugs can be tested by

F = \frac{ \text{groups MS} }{ \text{subgroups MS} },    (15.15)

which in the present example is F = 30.58/0.50 = 61.16. As F_{0.05(1),2,3} = 9.55, H0 is rejected. (The exact probability is 0.0037.) In an experimental design having subgroups nested within groups, as shown here, the groups most often represent a
TABLE 15.1: Summary of Hierarchical (Nested) Single-Factor Analysis of Variance

Source of variation                                            SS                                                                              DF

Total [X_{ijl} - \bar{X}]                                      \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl}^2 - C                 N - 1
Among all subgroups [\bar{X}_{ij} - \bar{X}]                   \sum_{i=1}^{a} \sum_{j=1}^{b} \left( \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 / n_{ij} - C      ab - 1
Groups (i.e., Among groups) [\bar{X}_i - \bar{X}]              \sum_{i=1}^{a} \left( \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 / n_i - C        a - 1
Subgroups (i.e., Among subgroups within groups) [\bar{X}_{ij} - \bar{X}_i]      among all subgroups SS - groups SS                             a(b - 1)
Error (i.e., Within subgroups) [X_{ijl} - \bar{X}_{ij}]        total SS - among all subgroups SS                                               N - ab

Note: C = \left( \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{l=1}^{n_{ij}} X_{ijl} \right)^2 / N; a = number of groups; b = number of subgroups within each group; n_i = number of data in group i; n_{ij} = number of data in subgroup j of group i; N = total number of data in the entire experiment.
fixed-effects factor. But the hypothesis testing is the same if, instead, the groups are a random-effects factor.

If we do not reject the null hypothesis of no difference among subgroups within groups, then the subgroups MS might be considered to estimate the same population variance as does the error MS. Thus, some statisticians suggest that in such cases a pooled mean square can be calculated by pooling the sums of squares and pooling the degrees of freedom for the subgroups variability and the error variability, for this will theoretically provide the ability to perform a more powerful test for differences among groups. (Pooling was previously discussed in Section 12.1h.) However, there is not widespread agreement on this matter, so the suggested procedure is to be conservative and not engage in pooling, at least not without consulting a statistician.

If there are unequal numbers of subgroups in each group, then the analysis becomes more complex, and the preceding calculations are not applicable. This situation is generally submitted to analysis by computer, perhaps by a procedure referred to in Glantz and Slinker (2001).

A hierarchical experimental design might have two (or more) layers of nesting, with each subgroup composed of sub-subgroups, thus involving an additional step in the hierarchy. For instance, for the data of Example 15.1a, the different drugs define the groups, the different sources define the subgroups, and if different technicians or different instruments were used to perform the cholesterol analyses within each subgroup, then these technicians or instruments would define the sub-subgroups. Sokal and Rohlf (1995: 288-292) describe the calculations for a design with sub-subgroups, although one generally resorts to computer calculation for hierarchical designs with more than the two steps in the hierarchy discussed in the preceding paragraphs. See Appendix D.4b for assistance in hypothesis testing for one such nested design.
Brits and Lemmer (1990) discuss nonparametric ANOVA with nesting within a single factor.
15.2 NESTING IN FACTORIAL EXPERIMENTAL DESIGNS
Experimental designs are encountered where there are two or more crossed factors as well as one or more nested factors. For example, in Example 12.1 the two crossed factors are sex and hormone treatment, and five birds of each sex were given each hormone treatment. In addition, the experimenter might have obtained three syringes of blood (that is, three subsamples) from each bird, so that individual birds would be samples and the triplicate blood collections would be subsamples. The birds represent a nested, rather than a crossed, factor because the same animal is not found at every combination of the other two factors. The analysis-of-variance table would then look like that in Example 15.2. The computation of sums of squares could be obtained by computer, and the appropriate hypothesis testing will be that indicated in Appendix D.4c. Some available computer programs can operate with data where there is not equal replication. As shown in Appendix Table D.4c, in a factorial ANOVA with nesting, the determination of F's for hypothesis testing depends upon whether the crossed factors are fixed effects or random effects.

The concept of hierarchical experimental designs could be extended further in this example by considering that each subsample (i.e., each syringe of blood) in Example 15.2 was subjected to two or more (i.e., replicate) chemical analyses. Then chemical analysis would be a factor nested within the syringe factor, syringe nested within animal, and animal nested within the two crossed factors.

EXAMPLE 15.2  An Analysis of Variance with a Random-Effects Factor (Animal) Nested within the Two-Factor Crossed Experimental Design of Example 12.1
For each of the four combinations of two sexes and two hormone treatments (a = 2 and b = 2), there are five animals (c = 5), from each of which three blood collections are taken (n = 3). Therefore, the total number of data collected is N = abcn = 60.

Source of variation                        SS     DF                      MS

Total                                             N - 1 = 59
Cells                                      *      ab - 1 = 3
  Sex (Factor A)                           *      a - 1 = 1               †
  Hormone treatment (Factor B)             *      b - 1 = 1               †
  A × B                                    *      (a - 1)(b - 1) = 1      †
Among all animals                          *      abc - 1 = 19
  Cells                                           ab - 1 = 3
  Animals (within cells) (Factor C)               ab(c - 1) = 16          †
Error (within animals)                            abc(n - 1) = 40         †

* These sums of squares can be obtained from appropriate computer software; the other sums of squares in the table might not be given by such a program, or MS might be given but not SS.
† The mean squares can be obtained from an appropriate computer program. Or, they may be obtained from the sums of squares and degrees of freedom (as MS = SS/DF). The degrees of freedom might appear in the computer output, or they may have to be determined by hand. The appropriate F statistics are those indicated in Appendix D.4c.
H0: There is no difference in mean blood calcium concentration between males and females.
HA: There is a difference in mean blood calcium concentration between males and females.

F = \frac{ \text{factor A MS} }{ \text{factor C MS} };   F_{0.05(1),1,16} = 4.49

H0: The mean blood calcium concentration is the same in birds receiving and not receiving the hormone treatment.
HA: The mean blood calcium concentration is not the same in birds receiving and not receiving the hormone treatment.

F = \frac{ \text{factor B MS} }{ \text{factor C MS} };   F_{0.05(1),1,16} = 4.49

H0: There is no interactive effect of sex and hormone treatment on mean blood calcium concentration.
HA: There is interaction between sex and hormone treatment in affecting mean blood calcium concentration.

F = \frac{ \text{A × B MS} }{ \text{factor C MS} };   F_{0.05(1),1,16} = 4.49

H0: There is no difference in blood calcium concentration among animals within combinations of sex and hormone treatment.
HA: There is difference in blood calcium concentration among animals within combinations of sex and hormone treatment.

F = \frac{ \text{factor C MS} }{ \text{error MS} };   F_{0.05(1),16,40} = 1.90

15.3 MULTIPLE COMPARISONS AND CONFIDENCE INTERVALS
Whenever a fixed-effects factor is concluded by an ANOVA to have a significant effect on the variable, we may turn to the question of which of the factor's levels are different from which others. If there are only two levels of the factor, then of course we have concluded that their population means are different by the ANOVA. But if there are more than two levels, then a multiple-comparison test must be employed.

The multiple-comparison procedures usable in nested experimental designs are discussed in Chapter 11, with slight modifications such as those we saw in Sections 12.5 and 14.6. Simply keep the following in mind when employing the tests of Sections 11.1, 11.3, and 11.4:

1. k refers to the number of levels being compared. (In Example 15.1a, k = a, the number of levels in factor A. In Example 15.2, k = a when comparing levels of factor A, and k = b when testing levels of factor B.)
2. The sample size, n, refers to the total number of data from which a level mean is calculated. (In Example 15.1a, the sample size bn = 4 would be used in place of n. In Example 15.2, we would use bcn = 30 to compare level means for factor A and acn = 30 for factor B.)
3. The mean square, s2, refers to the MS in the denominator of the F ratio appropriate to testing the effect in question in the ANOVA. (In Example 15.1a, the subgroups [sources] MS would be used. In Example 15.2, the factor C MS would be used.)
4. The degrees of freedom, ν, for the critical value of q or q' are the degrees of freedom associated with the mean square indicated in item 3. (In Examples 15.1a and 15.2, these would be 3 and 16, respectively.)
5. The critical value of F in the Scheffé test has the same degrees of freedom as it does in the ANOVA for the factor under consideration. (In Example 15.1a, these are 2 and 3. In Example 15.2, they are 1 and 16.)

Once a multiple-comparison test has determined where differences lie among level means, we can express a confidence interval for each different mean, as was done in Sections 11.2, 12.5, and 14.6, keeping in mind the sample sizes, mean squares, and degrees of freedom defined in the preceding list.

15.4 POWER, DETECTABLE DIFFERENCE, AND SAMPLE SIZE IN NESTED ANALYSIS OF VARIANCE
In Sections 12.6 and 14.7, power and sample size for factorial analyses of variance were discussed. The same types of procedures may be employed for a fixed-effects factor within which nesting occurs. As previously used, k' is the number of levels of the factor, n' is the total number of data in each level, and ν1 = k' - 1. The appropriate mean square, s2, is that appearing in the denominator of the F ratio used to test that factor in the ANOVA, and ν2 is the degrees of freedom associated with s2.

Referring to Section 12.6, the power of a nested ANOVA to detect differences among level means may be estimated using Equations 12.40-12.42 (as in Section 12.6a). Equation 12.41 may be used to estimate the minimum number of data per level that would be needed to achieve a specified power (see Section 12.6b), and Equation 12.42 allows estimation of the smallest detectable difference among level means (see Section 12.6c). Section 12.6d describes how to estimate the maximum number of level means that can be tested.

EXERCISES

15.1. Using the data and conclusions of Example 15.1, perform the following:
(a) Use the Tukey test to conclude which of the three drug means are statistically different from which.
(b) Determine the 95% confidence interval for each significantly different drug mean.

15.2. Three water samples were taken from each of three locations. Two determinations of fluoride content were performed on each of the nine samples. The data are as follows, in milligrams of fluoride per liter of water:

Locations:        1               2               3
Samples:      1    2    3     1    2    3     1    2    3
            1.1  1.3  1.2   1.3  1.3  1.4   1.8  2.1  2.2
            1.2  1.1  1.0   1.4  1.5  1.2   2.0  2.0  1.0

(a) Test the hypothesis that there is no difference in mean fluoride content among the samples within locations.
(b) Test the hypothesis that there is no difference in mean fluoride content among the locations.
(c) If the null hypothesis in part (b) is rejected, use the Tukey test to conclude which of the three population means differ from which.
(d) If the null hypothesis in part (b) is rejected, determine the 95% confidence interval for each different population mean.
CHAPTER 16

Multivariate Analysis of Variance

16.1 THE MULTIVARIATE NORMAL DISTRIBUTION
16.2 MULTIVARIATE ANALYSIS OF VARIANCE HYPOTHESIS TESTING
16.3 FURTHER ANALYSIS
16.4 OTHER EXPERIMENTAL DESIGNS
Chapters 10, 12, 14, and 15 discussed various experimental designs categorized as analysis of variance (ANOVA), wherein a variable is measured in each of several categories, or levels, of one or more factors. The hypothesis testing asked whether the population mean of the variable differed among the levels of each factor. These are examples of what may be termed univariate analyses of variance, because they examine the effect of the factor(s) on only one variable.

An expansion of this concept is an experimental design where more than one variable is measured on each experimental subject. In Example 10.1, 19 animals were allocated at random to four experimental groups, and each group was fed a different diet. Thus, diet was the experimental factor, and there were four levels of the factor. In that example, only one variable was measured on each animal: the body weight. But other measurements might have been made on each animal, such as blood cholesterol, blood pressure, or body fat. If two or more variables are measured on each subject in an ANOVA design, we have a multivariate analysis of variance (abbreviated MANOVA).*

There are several uses to which multivariate analysis of variance may be put (e.g., Hair et al., 2006: 399-402). This chapter presents a brief introduction to this type of analysis, a multifaceted topic often warranting consultation with knowledgeable practitioners. More extensive coverage is found in many texts on the subject (e.g., Bray and Maxwell, 1985; Hair et al., 2006: Chapter 6; Hand and Taylor, 1987: Chapter 4; Johnson and Wichern, 2002: Chapter 6; Marcoulides and Hershberger, 1997: Chapters 3-4; Sharma, 1996: Chapters 11-12; Srivastava, 2002: Chapter 6; Stevens, 2002: Chapters 4-6; and Tabachnik and Fidell, 2001: Chapter 9). Other multivariate statistical methods are discussed in Chapter 20.

The multivariate analysis-of-variance experimental design discussed here deals with a single factor. There are also MANOVA procedures for blocked, repeated-measures, and factorial experimental designs, as presented in the references just cited.

16.1 THE MULTIVARIATE NORMAL DISTRIBUTION

Recall that univariate analysis of variance assumes that the sample of data for each group of data came from a population of data that were normally distributed, and univariate normal distributions may be shown graphically as in Figures 6.1 and 6.2.

*Sometimes the variables are referred to as dependent variables or criterion variables and the factors as independent variables.

FIGURE 16.1: A bivariate normal distribution, where X1 and X2 have identical standard deviations.
In such two-dimensional figures, the height of the curve, Y, representing the frequency of observations at a given magnitude of the variable X, is plotted against that value of X; and the highest point of the normal curve is at the mean of X. The simplest multivariate case is where there are two variables (call them X1 and X2) and they can be plotted on a graph with three axes representing Y, X1, and X2. The two-variable extension of the single-variable normal curve is a surface representing a bivariate normal distribution, such as that shown in Figure 16.1.* The three-dimensional normal surface rises like a hill above a flat floor (where the floor is the plane formed by the two variables, X1 and X2), and the highest point of the curved surface is at the means of X1 and X2. A plot of more than three dimensions would be required to depict multivariate distributions with more than two measured variables. Multivariate normality requires, among other characteristics (e.g., Stevens, 2002: 262), that for each Xi (in this example, X1 and X2) there is a normal distribution of Y values.

As shown in Figure 6.2a, univariate normal distributions with smaller standard deviations form narrower curves than those with larger standard deviations. Similarly, the hill-shaped bivariate graph of Figure 16.1 will be narrow when the standard deviations of X1 and X2 are small and broad when they are large.

Rather than drawing bivariate normal graphs such as Figure 16.1, we may prefer to depict these three-dimensional plots using two dimensions, just as mapmakers represent an elevated or depressed landscape using contour lines. Figure 16.2a shows the distribution of Figure 16.1 with a small plane passing through it parallel to the X1-X2 plane at the base of the graph. A circle is delineated where the small plane intersects the normal-distribution surface. Figure 16.2b shows two planes passing through the normal surface of Figure 16.1 parallel to its base, and their intersections form two concentric circles. If Figures 16.2a and 16.2b are viewed from above the surface, looking straight down toward the plane of X1 and X2, those circles would appear as in Figures 16.3a and 16.3b, respectively. Three such intersecting planes would result in three circles, and so on. If the standard deviations of X1 and X2 are not the same, then parallel planes will intersect the bivariate-normal surface to form ellipses instead of circles. This is shown, for three planes, in Figure 16.4. In such plots, the largest ellipses (or circles, if X1 and X2 have equal standard deviations) will be formed nearest the tails of the distribution. Only ellipses (and not circles) will be discussed hereafter, for it is unlikely that the two variables will have exactly the same variances (that is, the same standard deviations).

If an increase in magnitude of one of the two variables is not associated with a change in magnitude of the other, it is said that there is no correlation between X1 and X2. If an increase in magnitude of one of the variables is associated with either an increase or a decrease in the other, then the two variables are said to be correlated. (Chapter 19 provides a discussion of correlation.) If X1 and X2 are not correlated, graphs such as Figure 16.4 will show the long axis of all the ellipses parallel to either the X1 or X2 axis of the graph (Figures 16.4a and 16.4b, respectively). If, however, X1 and X2 are positively correlated, the ellipses appear as running from the lower left to the upper right of the graph (Figure 16.4c); and if the two variables are negatively correlated, the ellipses run from the lower right to the upper left (Figure 16.4d).

*This three-dimensional surface is what gave rise to the term bell curve, named by E. Jeuffret in 1872 (Stigler, 1999: 404).

FIGURE 16.2: The bivariate normal distribution of Figure 16.1, with (a) one intersecting plane parallel to the X1-X2 plane, and (b) two intersecting planes parallel to the X1-X2 plane.

FIGURE 16.3: Representations of Figures 16.2a and 16.2b, showing the circles defined by the intersecting planes.
Multivariate Analysis of Variance Hypothesis Testing
319
FIGURE 16.4: Representations of bivariate normal distributions where the standard deviations of X, and X2 are not the same. (a) X, and X2 are not correlated. (b) X, and X2 are not correlated. (c) X, and X2 are positively correlated. (d) X, and X2 are negatively correlated.
MULTIVARIATE ANALYSIS OF VARIANCE HYPOTHESIS TESTING
At the beginning of the discussion of univariate analysis of variance (Chapter 10), it was explained that, when comparing a variable's mean among more than two groups, to employ multiple r-tests would cause a substantial inflation of a, the probability of a Type I error. In multivariate situations, we desire to compare two or more variables' means among two or more groups, and to do so with multiple ANOV As would also result in an inflated chance of a Type I error". Multivariate analysis of variance is a method of comparing the population means for each of the multiple variables of interest at the same time while maintaining the chosen magnitude of Type I error. A second desirable trait of MANOV A is that it considers the correlation among multiple variables, which separate ANOV As cannot do. Indeed, if the variables *For m variables, the probability of a Type I error will range from a, if all of the variables are perfectly correlated, to 1 - (1 - a)l1I, if there is no correlation among them. So, for example, if testing with two variables at the 5% significance level, P(Type I error) = 0.10 (Hair et al., 2006: 4(0), which can be seen from Table 10.1 by substituting m for C.
320
Chapter 16
Multivariate Analysis of Variance
FIGURE 16.5: Two bivariate
normal groups of positively correlated
data differing
in both dimensions.
are correlated, MANOY A may provide more powerful testing than performing a series of separate ANOY As. (However, if the variables are not correlated, separate ANOYAs may be more powerful than MANOY A.) For example, there are two bivariate distributions depicted in Figure 16.5, with the distribution of variable Xl being very similar in the two groups and the distribution of X2 also being very similar in the two groups. Therefore, a univariate ANOY A (or a two-sample t-test) will be unlikely to conclude a difference between the means of the groups for variable Xl or for variable X2, but MANOY A may very well conclude the means of the two bivariate distributions to be different. Third, sometimes group differences for each of several variables are too small to be detected with a series of ANOY As, but a MANOY A will conclude the groups different by considering the variables jointly. Stevens (2002: 245) cautions to include in the analysis only variables for which there is good rationale, because very small differences between means for most of them may obscure substantial differences for some of them. In univariate ANOY A with k groups, a typical null hypothesis is Ho:
J.L1
=
J.L2
= ... =
J.Lk,
which says that all k population means are the same. And the corresponding hypothesis is
alternate
HA: The k: population means are not all equal. Recall that HA does not say that all means are different, only that at least one is different from the others. Thus, for example, Example 10.1 presented an experiment to ask whether the mean body weight of pigs is the same when the animals are raised on four different feeds. And He: H/l:
J.L1
=
J.L2
=
J.L3
=
J.L4
All four population means are not equal.
In a MANOY A with two variables (Xl and X2) and k groups, the null hypothesis may be stated as Ho:
J.Lll
=
fLl2
= ... =
fLlk
and
J.L2l
=
fL22
= ... =
J.L2b
where \mu_{ij} denotes the population mean of variable i in group j. This H0 says that the means of variable 1 are the same for all k groups and the means of variable 2 are the same for all k groups. The corresponding MANOVA alternate hypothesis is

HA: The k populations do not have the same group means for variable 1 and the same group means for variable 2.

Thus, H0 is rejected if any of the \mu_{1j}'s are concluded to differ from each other or if any of the \mu_{2j}'s are concluded to differ from each other.

Example 16.1 is similar to Example 10.1, except it comprises two variables (X1 and X2): the weight of each animal's body fat and the dry weight of each animal without its body fat. The null hypothesis is that mean weight of body fat is the same on all four diets and the mean fat-free dry body weight is the same on all of these diets. (It is not being hypothesized that mean body-fat weight is the same as mean fat-free dry body weight!) In this example,

H0: \mu_{11} = \mu_{12} = \mu_{13} = \mu_{14} and \mu_{21} = \mu_{22} = \mu_{23} = \mu_{24}

and

HA: The four feeds do not result in the same mean weight of body fat and the same mean fat-free dry body weight.
If, in the sampled populations, one or more of the six equals signs in H0 is untrue, then H0 should be rejected. There are several methods for comparing means to test MANOVA hypotheses. This chapter will refer to four test statistics employed for this purpose and encountered in MANOVA computer programs. None of these four is considered "best" in all situations. Each captures different characteristics of the differences among means; thus, the four have somewhat different abilities to detect differences in various circumstances. The computations of these test statistics are far from simple and, especially for more than two variables, cannot readily be expressed in algebraic equations. (They are represented with much less effort as matrix calculations, which are beyond the scope of this book.) Therefore, we shall depend upon computer programs to calculate these statistics and shall not attempt to demonstrate the numerical manipulations. It can be noted, however, that the necessary calculations involve total, groups, and error sums of squares (SS, introduced in Section 10.1) and sums of cross products (to be introduced in Section 17.2). And, just as mean squares are derived from sums of squares, quantities known as "covariances" are derived from sums of cross products.

The four common MANOVA test statistics are the following.* They are all given by most MANOVA computer software; they often result in the same or very similar conclusions regarding H0 and operate with similar power (especially with large samples); and they yield identical results when only one variable is being analyzed (a univariate ANOVA) or when k = 2.

• Wilks' lambda. Wilks' Λ (capital Greek lambda), also called Wilks' likelihood ratio (or Wilks' U),† is the oldest and most commonly encountered multivariate analysis-of-variance statistic, dating from the original formulation of the

*Each of the four MANOVA statistics is a function of what are called eigenvalues, or roots, of matrices. Matrix algebra is explained in many texts on multivariate statistics, and an introduction to it is given in Section 20.1.
†Named for American statistician Samuel Stanley Wilks (1906-1964), who made numerous contributions to theoretical and applied statistics, including to MANOVA (David and Morrison, 2006).
EXAMPLE 16.1  A Bivariate Analysis of Variance
Several members of a species of sparrow were collected at the same location at four different times of the year. Two variables were measured for each bird: the fat content (in grams) and the fat-free dry weight (in grams). For the statement of the null hypothesis, \mu_{ij} denotes the population mean for variable i (where i is 1 or 2) and month j (i.e., j is 1, 2, 3, or 4).

H0: \mu_{11} = \mu_{12} = \mu_{13} = \mu_{14} and \mu_{21} = \mu_{22} = \mu_{23} = \mu_{24}
HA: Sparrows do not have the same weight of fat and the same weight of fat-free dry body tissue at these four times of the year.

α = 0.05

            December            January             February            March
         Fat     Lean dry   Fat     Lean dry    Fat     Lean dry   Fat     Lean dry
         weight  weight     weight  weight      weight  weight     weight  weight

         2.41    4.57       4.35    5.30        3.98    5.05       1.98    4.19
         2.52    4.11       4.41    5.61        3.48    5.09       2.05    3.81
         2.61    4.79       4.38    5.83        3.36    4.95       2.17    4.33
         2.42    4.35       4.51    5.75        3.52    4.90       2.00    3.70
         2.51    4.36                           3.41    5.38       2.02    4.06

\bar{X}: 2.49    4.44       4.41    5.62        3.55    5.07       2.04    4.02

Computer computation yields the following output:

Wilks' Λ = 0.0178,   F = 30.3,   DF = 6, 28,   P << 0.0001.   Reject H0.
Pillai's trace = 1.0223,   F = 5.23,   DF = 6, 30,   P = 0.0009.   Reject H0.
Lawley-Hotelling trace = 52.9847,   F = 115,   DF = 6, 26,   P << 0.0001.   Reject H0.
Roy's maximum root = 52.9421,   F = 265,   DF = 3, 15,   P << 0.0001.   Reject H0.
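Output like that in Example 16.1 can be produced with standard software. The Python sketch below runs the same one-factor, two-variable MANOVA with statsmodels; the column names are invented for the sketch, and the data are the sparrow measurements tabled above.

```python
# A minimal sketch of the Example 16.1 MANOVA using statsmodels.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

fat = [2.41, 2.52, 2.61, 2.42, 2.51,        # December
       4.35, 4.41, 4.38, 4.51,              # January
       3.98, 3.48, 3.36, 3.52, 3.41,        # February
       1.98, 2.05, 2.17, 2.00, 2.02]        # March
lean = [4.57, 4.11, 4.79, 4.35, 4.36,
        5.30, 5.61, 5.83, 5.75,
        5.05, 5.09, 4.95, 4.90, 5.38,
        4.19, 3.81, 4.33, 3.70, 4.06]
month = (["Dec"] * 5) + (["Jan"] * 4) + (["Feb"] * 5) + (["Mar"] * 5)

df = pd.DataFrame({"fat": fat, "lean": lean, "month": month})
fit = MANOVA.from_formula("fat + lean ~ month", data=df)
print(fit.mv_test())  # reports Wilks' lambda, Pillai's trace,
                      # Hotelling-Lawley trace, and Roy's greatest root
```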
MANOVA procedure (Wilks, 1932). Wilks' Λ is a quantity ranging from 0 to 1; that is, a measure of the amount of variability among the data that is not explained by the effect of the levels of the factor.* So, unlike typical test statistics (e.g., F or t or \chi^2), H0 is rejected for small, instead of large, values of Λ. Tables of critical values have been published (e.g., Rencher, 2002: 161, 566-573), but computer programs may present Λ transformed into a value of F or \chi^2 (tables of which are far more available) with the associated probability (P); as elsewhere, large values of F or \chi^2 yield small P's. In Example 16.1, Λ is an expression of the amount of variability among fat weights and among lean dry body weights that is not accounted for by the effect of the four times of year.

• Pillai's trace. This statistic, based on Pillai (1955), is also called the Pillai-Bartlett trace (or V).† Many authors recommend this statistic as the best test for general use (see Section 16.2a). Large values of V result in rejection of H0. There are tables of critical values of this statistic (e.g., Rencher, 2002: 166-167, 578-581), but it is often transformed to a value of F.

• Lawley-Hotelling trace. This statistic (sometimes symbolized by U) was developed by Lawley (1938) and modified by Hotelling (1951). Tables of critical values exist for this (e.g., Rencher, 2002: 167, 582-586), but it is commonly transformed into values of F.

• Roy's maximum root. This is also known by similar names, such as Roy's largest, or greatest, root, and is sometimes denoted by θ (lowercase Greek theta). This statistic (Roy, 1945) may be compared to critical values (e.g., Rencher, 2002: 165, 574-577), or it may be converted to an F.

Wilks' Λ is a very widely used test statistic for MANOVA, but Olson (1974, 1976, 1979) concluded that Pillai's trace is usually more powerful (see Section 16.2b), and others have found that the Pillai statistic appears to be the most robust and most desirable for general use. If the four MANOVA test statistics do not result in the same conclusion about H0, further scrutiny of the data may include examination of scatter plots (see Section 17.1), with axes as in Figures 16.5 and 16.6, for correlation among variables. Correlation will suggest favoring the conclusion reached by Pillai's trace, and noncorrelation will suggest relying on Roy's statistic for superior power. In Example 16.1, all four test statistics conclude that H0 is to be rejected.

(a) Assumptions. As in univariate ANOVA (Section 10.1), the underlying mathematical foundations of MANOVA depend upon certain assumptions. Although it is unlikely that all of the assumptions will be exactly met for a given set of data, it is important to be cognizant of them and of whether the statistical procedures employed are robust to departures from them. A very important underlying assumption in MANOVA is that the data represent random samples from the populations of interest and that the observations on each subject are independent. In Example 16.1, the body-fat weights of sparrows in each month are assumed to have come at random from the population of body-fat weights of all sparrows from that month from that location, and the lean dry weights at each month are assumed to have come randomly from a population of such weights. Also, the body-fat weight of each subject (i.e., each sparrow) must be independent of the body-fat weight of each other subject, and the lean dry body weights of all the subjects must be independent of each other. MANOVA is invalidated by departure from the assumption of random and independent data.

*Thus, a measure of the proportion of the variability that is explained by the levels of the factor is

\eta^2 = 1 - \Lambda,    (16.1)

and \eta^2 has a meaning like that of R^2 in another kind of multivariate analysis, that of multiple regression or multiple correlation (Section 20.3).
†A trace is the result of a specific mathematical operation on a matrix. See Section 20.1 for more information on matrices.
There are statistical tests for this assumption, such as that which is analogous to the Bartlett test of the homogeneity of variances in the univariate ANOVA (Section 10.6) (Box, 1949, 1950), but that test is seriously affected by nonnormality (e.g., Olson, 1974; Stevens, 2002: 271) and thus is not generally recommended. Data transformations (Chapter 13) may be useful to reduce nonnormality or to reduce heterogeneity of variances and covariances. Although MANOVA is typically robust to departures from the variability and variance-correlations assumptions, the Pillai trace (according to Olson, 1974, 1976) is generally the most robust of the four methods.

(b) Power. The power of a MANOVA depends upon a complex set of characteristics, including the extent to which the underlying assumptions (see Section 16.2a) are met. In general, increased sample size is associated with increased power, but power decreases with increase in the number of variables. Thus, if it is desired to employ an experimental design with several variables, larger samples will be needed than would be the case if there were only two variables. Also, as with ANOVA, the power of MANOVA is greater when the population differences between means are larger and when the variability within groups is small. The magnitude of correlations between variables can cause MANOVA to be either more or less powerful than separate ANOVAs. Some MANOVA computer programs calculate power, and Rencher (2002: Section 4.4) and Stevens (2002: 192-202) discuss the estimation of power and of the sample size required in MANOVA. Many computer programs provide a calculation of power, but recall (e.g., Section 10.3) that power estimated from a set of data should be considered as applying to future data sets.

Differences in power among the four test statistics are often not great, but there can be differences. If the group means differ in only one direction (i.e., they are uncorrelated, as in Figure 16.6), a relatively uncommon situation, Roy's statistic is the most powerful of the four, followed, in order of power, by the Lawley-Hotelling trace, Wilks' Λ, and Pillai's trace. However, in the more common situation where the group means differ among more than one dimension (i.e., the variables are correlated, as in Figure 16.5), the relative powers of these statistics are in the reverse order: The Pillai trace is the most powerful, followed by Wilks' Λ, the Lawley-Hotelling trace, and then Roy's statistic. In intermediate situations, the four statistics tend more toward the latter ordering than the former.
(c) Two-Sample Hypotheses. The preceding four procedures may be used in the case of only two groups, for when k = 2 all four will yield the same results. But another test encountered for two-group multivariate analysis is Hotelling's T^2 (Hotelling, 1931). This is analogous to the univariate two-group situation, where either ANOVA (Section 10.1) or Student's t (Section 8.1) may be employed. T^2 is related to the MANOVA statistics of Section 16.2 as follows (Rencher, 2002: 130)*:

T^2 = (n_1 + n_2 - 2) \left( \frac{1 - \Lambda}{\Lambda} \right),    (16.3)

T^2 = (n_1 + n_2 - 2) \left( \frac{V}{1 - V} \right),    (16.4)

T^2 = (n_1 + n_2 - 2) U,    (16.5)

T^2 = (n_1 + n_2 - 2) \left( \frac{\theta}{1 - \theta} \right),    (16.6)

where m is the number of variables. Upon calculation of T^2 (usually by computer), tables of critical values of T^2 (e.g., Rencher, 2002: 558-561) may be consulted, or

F = \frac{ (n_1 + n_2 - m - 1) T^2 }{ (n_1 + n_2 - 2) m }    (16.7)

may be used, with m and n_1 + n_2 - m - 1 degrees of freedom or, equivalently,

F = \frac{ T^2 }{ \dfrac{ (n_1 + n_2 - 2) m }{ n_1 + n_2 - m - 1 } }    (16.7a)

*T^2 is also related to the multiple-regression coefficient of determination, R^2 (Equation 20.19), as

T^2 = (n_1 + n_2 - 2) \left( \frac{R^2}{1 - R^2} \right).    (16.2)
FURTHER ANALYSIS
When a MANOY A rejects H«, there are procedures that might be employed to expand the analysis of difference among groups (e.g., Bray and Maxwell, 1985: 40-45; Hand and Taylor, 1987: Chapter 5; Hair et al., 2006: 422-426); Hummel and Sligo, 1971; Stevens, 2002: 217-225; Weinfurt, 1995). One approach is to perform a univariate ANOY A on each of the variables (Rencher, 2002: 162-164, followed perhaps by multiple comparisons; see Chapter 11) to test the difference among means for each variable separately. However, this procedure will ignore relationships among the variables, and other criticisms have been raised (e.g., Weinfurt, 1995). Multiplecomparison tests are described in some of the references cited in this discussion. In univariate ANOY A, one can reject Ho and have none of the !-L'S declared different by further analysis (Chapter 11). Similarly, a MANOYA may reject Ho with subsequent ANOYAs detecting no differences (either because of lack of power or because the interrelations among variables are important in rejection of the multivariate Ho). Some computer programs perform ANOY As along with a MANOY A. If k = 2 and Ho is rejected by MANOY A or Hotelling's T2 test, then two-sample t tests and univariate ANOY As will yield identical results. 16.4
OTHER EXPERIMENTAL DESIGNS
The data of Example 16.1 are subjected to a multivariate analysis of variance composed of one factor (time of year) and two variables (weight of fat and fat-free dry body weight). This is the same experimental design as the ANOY A of Example 10.1 except that two variables, instead of one, are measured on each animal. A MANOY A may also involve more than two variables. For example, the blood· cholesterol concentration might have been a third variable measured for each of the animals, and the null hypothesis regarding the four factor levels (months) would be Ho:
!-Ll1
=
!-L12 = !-L13
and
=
!-L31
!-L14
=
!-L32
and
=
!-L21 !-L33
=
=
!-L22
!-L34·
=
!-L23
=
!-L24
Exercises
327
Other sets of data may be the result of consideration of more than one variable and more than one factor. For example, measurements of two or more variables, such as those in Example 16.1, might have been collected for sparrows collected at more than one time of year and at more than one geographic location. This would be a multivariate factorial experimental design (which could include an examination of factor interactions), and multivariate versions of repeated-measures and hierarchical analyses of variance are also possible, as is multivariate analysis of covariance (MANCOV A). These are multivariate extensions of the considerations of Chapters 12, 14, and 15 and are discussed in some of the references cited immediately preceding Section 16.1. Multivariate one-sample testing (analogous to the univariate testing in Section 7.1) and paired-sample testing (analogous to Section 9.1) are possible (e.g., Rencher, 2002: Sections 5.3.2 and 5.7.1). Analysis of covariance (ANCOVA), introduced in Section 12.9, also may be extended to experimental designs with multiple dependent variables. This is done via MANCOV A, for which computer routines are available. EXERCISES 1 Using multivariate analysis of variance, analyze the following data for the concentration of three amino acids in centipede hemolymph (mg/lOO ml), asking whether the mean concentration of these , amino acids is the same in males and females: Male lanine
Aspartic Acid
7.0 '7.3 ( 8.0 J 8.1 7.9
17.0
,6.4 6.6
15.1 15.9 18.2
8.0
17.2 19.3 19.8 18.4
Female Tyrosine 19.7 20.3 22.6 23.7 22.0 18.1 18.7 21.5
Alanine 7.3 7.7 8.2 8.3 6.4 7.1 6.4 8.6
Aspartic Acid 17.4 19.8 20.2 22.6 23.4 21.3 22.1 18.8
Tyrosine 22.5 24.9 26.1 27.5 28.1 25.8 26.9 25.5
16.2. The following data for deer are for two factors (species and sex), where for each combination of factors there is a measurement of two variables (rate of oxygen consumption, in ml 02/g/hr, and rate of evaporative water loss, in mg/min). Perform a multivariate analysis of variance to test for equality of the population means of these two variables for each of the two factors and the factor interaction. Species 2
Species 1 Male
Female 0.165 0.184 0.127 0.140 0.128
76 71 64 66 69
0.145 0.110 0.108 0.143 0.100
Female 80 72 77 69 74
0.391 0.262 0.213 0.358 0.402
Male 71 70 63 59 60
0.320 0.238 0.288 0.250 0.293
65 69 67 56 52
C HAP
T E R
17
Simple Linear Regression 17.1
REGRESSION VERSUS CORRElATION
17.2 17.3
THE SIMPLE LINEAR REGRESSION EQUATION TESTING THE SIGNIFICANCE OF A REGRESSION
17.4 17.5 17.6
INTERPRETATIONS OF REGRESSION FUNCTIONS CONFIDENCE INTERVALS IN REGRESSION INVERSE PREDICTION
17.7
REGRESSION WITH REPLICATION AND TESTING FOR LINEARITY
17.8 17.9 17.10
POWER AND SAMPLE SIZE IN REGRESSION REGRESSION THROUGH THE ORIGIN DATA TRANSFORMATIONS IN REGRESSION
17.11
THE EFFECT OF CODING DATA
Techniques that consider relationships between two variables are described in this and the following two chapters. Chapter 20 presents the expansion of such techniques to analyze situations where more than two variables may be related to each other. 17.1
REGRESSION VERSUS CORRELATION* The relationship between two variables may be one of functional dependence of one on the other. That is, the magnitude of one of the variables (the dependent variable) is assumed to be determined by-that is, is a function of-the magnitude of the second variable (the independent variable). whereas the reverse is not true. For example, in the relationship between blood pressure and age in humans, blood pressure may be considered the dependent variable and age the independent variable; we may reasonably assume that although the magnitude of a person's blood pressure might be a function of age, age is not determined by blood pressure. This is not to say that age is the only biological determinant of blood pressure, but we do consider it to be one determining factor." The term dependent docs not necessarily in.ply a cause-and-effect relationship between the two variables. (See Section 17.4.) Such a dependence relationship is called a regression. The term simple regression refers to the simplest kind of regression, one in which only two variables are considered. :!: "The historical developments of regression and correlation are strongly related, owing their discovery-the latter following the former-to Sir Francis Galton. who first developed these procedures during I ~75-1 ~Wi (Walker. 1929: 103-104, 1~7): see also the first footnote in Section 19.1. He first used the term regression in I HilS (Desmond. 2()()()). "Some authors refer to the independent variable as the predictor. regressor. explanatory. or exogenous variable and the dependent variable as the response, criterion, or endogenous variable. :i In the case of simple regression. the adjective linear may be used to refer to the relationship between the two variables being a straight line. but to a statistician it describes thc relationship of the parameters discussed in Section 17.2.
328
Section 17.1
Regression versus Correlation
329
Data amenable to simple regression analysis consist of pairs of data measured on a ratio or interval scale. These data are composed of measurements of a dependent, variable (Y) that is a random effect and an independent variable (X) that is either a fixed effect or a random effect. * (See Section JO.lf for a review of these concepts.) It is convenient and informative to graph simple regression data using the ordinate (Y axis) for the dependent variable and the abscissa (X axis) for the independent variable. Such a graph is shown in Figure 17.1 for the n = 13 data of Example 17.1, where the data appear as a scatter of 13 points, each point representing a pair of X and Y values. t One pair of X and Y data may be designated as (Xl, Yj), another as (X2, Y2), another as (X3, Y3), and so on, resulting in what is called a scatter plot of all n of the (Xi, Yi) data. (The line passing through the data in this figure will be explained in Section 17.2.) EXAMPLE 17.1 Wing Lengths of 13 Sparrows of Various Ages. The Data Are Plotted in Figure 17.1.
Age (days) (X)
Wing length (em) ( Y)
3.0 4.0 5.0 6.0 8.0 9.0 10.0 11.0 12.0 14.0 15.0 16.0 17.0
1.4 1.5 2.2 2.4 3.1 3.2 3.2 3.9 4.1 4.7 4.5 5.2 5.0 n
=
13
*On rare occasions, we want to describe a regression relationship where the dependent variable (Y) is recorded on a nominal scale. This requires logistic regression, a procedure discussed in Section 24.1R. "Royston (1956) observed that "the basic idea of using co-ordinates to determine the location of
a point in space dates back to the Greeks at least, although it was not until the time of Descartes that mathematicians systematically developed the idea." The familiar system of specifying the location of a point by its distance from each of two perpendicular axes (now commonly called the X and Y axes) is referred to as Cartesian coordinates, after the French mathematician and philosopher Rene Descartes (1596-1650), who wrote under the Latinized version of his name, Renatus Cartesius. His other enduring mathematical introductions included (in 1637) the use of numerals as exponents, the square root sign with a vinculum (i.e., with a horizontal line: F), and the use of letters at the end of the alphabet (e.g., X, Y, Z) to denote variables and those near the beginning (e.g., a, b, c) to represent constants (Asirnov, 19R2: 117: Cajori, 1928: 205, 20R. 375).
\ "
Chapter 17 \
330
Simple Linear Regression
) 6
)
5 E c
,,-- ..-~
4 cm).
birds is greater
than 4 cm
DO
From Example t
17.5b, YI:l.' ()
= 4.225 - 4 = 0.225 = 3.082 0.073
to.05( 1).11
Therefore,
=
0.073 1.796 reject Ho. 0.005
17.6
= 4.225 cm and s y I~ = 0.073 ern,
< P < 0.01
[P
=
0.0052]
INVERSE PREDICTION Situations exist where we desire to predict the value of the (Xi) that is to be expected in the population at a specified dent variable (Yi), a procedure known as inverse prediction. instance, we might ask, "How old is a bird that has a wing 4.5 algebraic rearrangement of the linear regression relationship obtain
x
= _Y-,-i_-_a
From Fi~ure 17.8, it is clear that, although Yi arAesymmetrical
above
(17.30)
b
I
predicted
independent variable value of the depenIn Example 17.1, for em long?" By simple of Equation 17.8, we
conildence
and below
limits calculated
Yi, confidence
around
the
limits aAssociated with
the predicted Xi are not symmetrical to the left and to the right of Xi. The 1 - a confidence limits for the X predicted at a given Y may be calculated as follows, which is demonstrated in Example 17.7:
(17.31)
X
where" K = b2 - t2s~. This computation is a special case of the prediction of the associated with multiple values of Y at that X. For the study of Example 17.1, age can be predicted of m birds to be taken from the population and having a mean body weight of Yi: (17.32)
"Recall t~(2).(11-2)
equivalent,
that Fa( I ).1.v
=
t~(2).v'
Therefore,
we could compute
= Fa(I).I.(1l-2)· Snedecor and Cochran computation of these confidence limits.
K
=
b2 -
(1989: 171) presented
FI'J" where
F =
an alternative,
yet
348
Chapter 17
Simple Linear Regression EXAMPLE 17.7
Inverse Prediction
We wish to estimate, with 95% confidence, the age of a bird with a wing length of 4.5 cm. Predicted age: = _Y-,-i _-_a b 4.5 - 0.715 0.270
x
= 14.019 days To compute 95% confidence interval: t =
to.05(2),11
= 2.201 K
b2
=
-
t2s~ 2
= 0.270
(2.201
-
f (0.0135)2
= 0.0720 95% confidence interval:
x
+
bey,;
stx[(Y'i)')'
Y) ± ~\
+
K(l
+
D]
= 10.0 + 0.270(4.5 - 3.415) 0.0720 ± 2.201 0.047701 [( 4.5 - 3.415 )2 0.0720 \ 262.00 =
10.0
+ 0.0720 (1 + 11 )] 3
+ 4.069 ± 30.569 JO.003913
= 14.069 ± 1.912 days L1
=
12.157 days
L2
=
15.981 days
where Yi is the mean of the m values of Yi; calculated as
x
+ b(Yi - Y) ±.!...K
where" t = tcx(2).(n+m-3),K
(s2
K\
= b2
)'
[(Yi - y)2 + K(ml
2:x2
Y-X
-
(2)' Sb
+111 +
3)'
+
limits would be
-n1)],
(17.33)
t2(s~)',
_ (s~-X)' -
* Alternatively, we may compute K = b2 Fa( 1).I.(n
and the confidence
2:x2 -
(17.34)
'
F(s~
h, where F
(2 a(2).(n+II1-3)
=
Section 17.7
Regression with Replication and Testing for Linearity
and
349
nz
(s~.x)'
+ ~(Yij
= residual SS
-
+ m - 3)
Yif/(n
(17.35)
/=1
(Ostle and Malone, 1988: 241; Seber and Lee, 2003: 147-148).
,7.7 REGRESSION WITH REPLICATION AND TESTING FOR LINEARITY If, in Example 17.1, we had wing measurements for more than one bird for at least some of the recorded ages, then we could test the null hypothesis that the population regression is linear.* (Note that true replication requires that there are multiple birds at a given age, not that there are multiple wing measurements on the same bird.) Figure 17.9 presents the data of Example 17.8a. A least-squares, best-fit, linear regression equation can be calculated for any set of at least two data, but neither the equation itself nor the testing for a significant slope (which requires at least three data) indicates whether Y is, in fact, a straight-line function of X in the population sampled. EXAMPLE 17.8a Regression Data Where There Are Multiple for Each Value of X
Age (yr)
Systolic blood pressure (mm Hg)
Xi
Yi/
k
=
" L.J " X~iJ = L.J x2 =
X
= 52.5 ~xy
=
Y -
1 to 5;
j
=
1 to
N
ni:
=
ni
Yi
3 4 3 5 5
108.0 120.5 134.3 147.2 159.8
Y
20
Yij = 2744
~~
~ ~ Yu = 383,346
~ ~ Xii Yij
~l
~xy
= 6869.20
= 149,240
= 5180.00
Y = 137.2 =
~x2
a
=
59 ' 100
3975.00
~
=
i
5;
= 1050
~~Xij
b
108, 110, 106 125,120,118,119 132, 137, 134 148, 151, 146, 147, 144 162, 156, 164, 158, 159
30 40 50 60 70
]
2 3 4 5
Values of
bX
5180.00 3975.00 =
=
137.2 -
Therefore, the least-squares
1.303 mm H / r g Y (1.303 )(52.5)
=
68.79 mm Hg
regression line is Yij = 68.79
+ ] .303Xij'
We occasionally encounter the suggestion that for data such as those in Figure 17.9 the mean Y at each X be utilized for a regression analysis. However, to do so would be to discard information, and such a procedure is not recommended (Freund, 1971). "Thornby (1972) presents a procedure to test the hypothesis of linearity not multiple observations of Y. But the computation is rather tedious.
even when there are
350
Chapter 17
Simple Linear Regression 170
••
160
eo
::r: E E
150
c::
;:...:
140
2 :::l
v: v:
130
2
0..
"'0 0
120
.2
co
.~
110
0
Vi >, Vl
100 l)()
30
20
Age. X. in yr
FIGURE 17.9: A regression where there are multiple
EXAMPLE 17.8b 17.8a
Statistical
values of y for each value of X.
Analysis of the Regression
Ho:
The population
regression
is linear.
HA:
The population
regression
is not linear.
total SS =
Li
k
among-groups
C~ r
= 6869.20
SS
=
total OF = N -
Y"
OF
within-groups
SS
within-groups
=
k
1
=
376,476.80 = 6751.93 4
= total SS
among-groups
= 6869.20
6751.93 = 117.27
=
OF
Yi,
N
= 383,228.73 among-groups
(,tt,
n,
r
1 = 19
L ---'-------'--i=!
Data of Example
total OF
-
among-groups
SS
OF
= 19 - 4 = 15 deviations-from-linearity
SS
= among-groups
SS -
regression
SS
= 6751.93 - 6750.29 = 1.64 deviations-from-linearity
OF
=
among-groups
=4-1=3
OF
-
regression
OF
Section 17.7
Regression with Replication and Testing for Linearity
Source of variation Total Among groups Linear regression Deviations from linearity Within groups F
SS
DF
MS
6869.20 6751.93 6750.29 1.64 117.27
19 4 1 3 15
0.55 7.82
351
0.55 = 0.070 7.82 Since F < 1.00, do not reject Hi; =
P Ho: HA:
=
[P
= 0.975]
o. f3 -=I- o.
f3
=
~l
regression SS ~ ~
F
> 0.25
6750.29 6.61
FOOS(I).1.18
Source of variation
SS
DF
MS
Total Linear regression Residual
6869.20 6750.29 118.91
19 1 18
6750.29 6.61
=
=
(5180.00 )2 = 6750.29 3975.00
1021.2
4.41
Therefore, reject Ho. P
« 0.0005
[P < 0.00000000001]
r2 = 6750.29 = 0.98 6869.20 Sy·x
=
.)6.61
=
2.57 mm Hg
Example 17.8b appropriately analyzes data consisting of multiple Y values at each X value, and Figure 17.9 presents the data graphically. For each of the k: unique Xi values, we can speak of each of ru values of Y (denoted by Yij) using the double subscript on i exactly as in the one-way analysis of variance (Section 10.1). In Example 17.8a, nl = 3, n2 = 4, n3 = 3, and so on; and XII = 50 em, Yii = 108 mm; X12 = 50 em, YI2 = 110 mm; X13 = 50 em, Y13 = 106 mm; and so on through XS5 = 70 em, Y55 = 159 mm. Therefore, k
n,
2: xy = 2: 2: (Xij i=1 j=1
-
X) (Yij
-
Y)
=
2: 2: Xij Yii
-
(17.36)
352
Chapter 17
Simple Linear Regression
where N
=
2,7= I n, the
total number of pairs of data. Also, (17.37)
and k
total SS
=
n,
2: 2:(Yij
-
y)2
=
i=lj=1
2:2: Yv
-
(17.38)
C,
where k
and
N =
2: ru.
(17.39)
i=1
Examples 17.8a and 17.8b show the calculations of the regression coefficient, b, the Y intercept a, and the regression and residual sums of squares, using Equations 17.4, 17.7, 17.11, and 17.13, respectively. The total, regression, and residual degrees of freedom are N - 1, 1, and N - 2, respectively. As shown in Section 17.3, the analysis of variance for significant slope involves the partitioning Aofthe total variability of Y (i.e., Yij - Y) into that variability due to regression (Yi - Y) ~nd that variability remaining (i.e., residual) after the regression line is fitted (Yij - Vi). However, by considering the k groups of Y values, we can also partition the total variability exactly as we did in the one-way analysis of variance (Sections 10.1a and 10.1b), by describing variability among groups (Y, - Y) and within groups (Yij - Vi):
k
among-groups
SS
=
2: n, ( Yi
-C,
(17.40)
i=l
among-groups
DF = k -
(17.41)
1,
within-groups SS = total SS - among-groups within-groups DF = total DF - among-groups
(17.42)
SS, DF = N - k.
(17.43)
The variability among groups (Yi - Y) can also be partitioned. Part of this variability ("Vi - Y) results from the linear regression fit to the data, and the rest (Yi - 1\) is due to the deviation of each group of data from the regression line, as shown in Figure 17.10. Therefore, deviations-from-linearity
SS = among-groups
SS - regression SS
(17.44)
and deviations-from-linearity
DF
= among-groups DF =
k - 2.
regression DF ( 17.45)
Table 17.2 summarizes this partitioning of sums of squares. Alternatively, and with identical results, we may consider the residual variability (Yij - Vi) to be divisible into two components: within-groups variability (Yij - Yi)
Section
17.7
Regression
with
Replication
and Testing
for Linearity
353
140 Oil
:r:
E E 138
.5
130 L....l_---l..._-J.L_-'--_l..._---"_---'-_--'-_~_l..._____'__ 45 50
55
Age, X, in yr FIGURE 17.10: An enlarged portion of Figure 17.9, showing the partitioning of Y deviations. The mean Y at X = 50 yr is 134.3 mm Hg, shown by the symbol "0"; the mean of all Y's is Y, shown by a dashed line; Yij is the jth Y at the ith X; and Yj is the mean of the n, Y's at Xj.
TABLE 17.2: Summary
of the Analyses of Variance Calculations Regression Is Linear, and for Testing Ho: f3 = 0
Population
Sum of squares (SS)
Source of variation Total [Yjj
N - 1 regression SS regression DF
Linear regression [Yi - V] Residual [Yij
-
total SS - regression SS
A
i: ( ±
Yij)
Deviations from linearity [Vi - Y;J Within groups -
V;J
k -
1
N
ru
(Lxyt Lx2 among-groups
residual SS residual DF
2
---'----j=-I---'-----
i=1
Lin~ar regression [Yj - V]
[Yij
N - 2
Y;J
Among groups [Vi - V]
the
Mean Square (MS)
DF
V]
-
for Testing Ho:
1
SS - regression SS
total SS - among-groups
SS
k -
2
N - k
deviations SS deviations DF within-groups SS within-groups DF
Note: To test Ho: the population regression is linear, we use F = deviations MS/within-groups MS, with a critical value of Fa(I),(k-2),(N-k)' If the null hypothesis of linearity is rejected, then Ho: f3 = 0 is tested using F = regression MS/within-groups MS, with a critical value of Fa(l),I,(N-k)'
354
Chapter
17
Simple
Linear Regression TABLE 17.3: Summary
of Analysis of Variance Partitioning of Sources of Variation for Testing Linearity, as an Alternative to That in Table 17.2
DF
Source of variation Total [Yij - Yj Among groups [Vi - YJ Within groups [Yij - Y;]
1 k - 1
N -
N-k
J
Linear regression Y; - Yj Residual [Yij - Y;] Within groups [Yij - V;] Deviations from linearity [Vi
N-2 Y;]
N k -
k 2
Note: Sums of squares and mean squares are as in Table 17.2.
and deviations-from-linearity (Yi )7i). This partitioning of sums of squares and degrees of freedom is summarized in Table 17.3.* If the population relationship between Y and X is a straight line (i.e., "He The population regression is linear" is a true statement), then the deviations-fromlinearity MS and the within-groups MS will be estimates of the same variance; if the relationship is not a straight line (Ho is false), then the deviations-from-linearity MS will be significantly greater than the within-groups MS. Thus, as demonstrated in Example 17.8b, F = deviations-from-linearity within-groups MS
MS
(17.46)
provides a one-tailed test of the null hypothesis of linearity. (If all ni's are equal, then performing a regression using the k Y's will result in the same b and a as will the calculations using all N Y/s but the significance test for f3 will be much less powerful and the preceding test for linearity will not be possible.) The power for the test for linearity will be greater for larger numbers of replicate Y's at each X. If the null hypothesis of linearity is not rejected, then the deviations-from-linearity MS and the within-groups MS may be considered to be estimates of the same population variance. The latter will be the better estimate, as it is based on more degrees of freedom; but an even better estimate is the residual MS, which is s~-x, for it constitutes a pooling of the deviations MS and the within-groups MS. Therefore, if a regression is assumed to be linear, s~-x is the appropriate variance to use in the computation of standard errors (e.g., by Equations 17.20,17.26-17.29) and confidence intervals resulting from them, and this residual mean square (s~-x) is also appropriate in testing the hypothesis He: f3 = 0 (either by Equation 17.14 or by Equations 17.20 and 17.21), as demonstrated in Example 17.8b. If the population regression is concluded not to be linear, then the investigator can consider the procedures of Section 17.10 or 20.14 or of Chapter 21. If, however, it is desired to test He: f3 = 0, then the within-groups MS should be substituted for the residual MS (s~.x); but it would not be advisable to engage in predictions with the linear-regression equation. *Some authors refer to deviations from linearity as "lack of fit" and to within-groups variability as "error" or "pure error."
Section 17.9
Regression through the Origin
355
(a) Regression versus Analysis of Variance. Data consisting of replicate values of Y at each of several values of X (such as in Example I7.X) could also he submitted to a single-factor analysis of variance (Chapter 10). This would be done by considering the X's as levels of the factor and the Y's as the data whose means are to he compared (i.e., Y here is the same as X in Chapter 10). This would test Hi; fil = fi2 = ... = fit: instead of H«: (3 = 0, where k is the number of different values of X (c.g., k = ) in Example 17.X). When there are only two levels of the ANOYA factor (i.e .. two different X's in regression), the power or testing these two hypotheses is the same. Otherwise. the regression analysis will he more powerful than the ANOY A (Cottingham, Lennon, and Brown, 2(0)). POWER AND SAMPLE SIZE IN REGRESSION Although there are hasic differences between regression and correlation (see Section 17.1), a set of data for which there is a statistically significant regression coefficient (i.c., He: (3 = 0 is rejected. as explained in Section 17.3) would also yield a statistically significant correlation coefficient (i.e .. we would reject H(): p = 0, to be discussed in Section 19.2). In addition, conclusions about the power of a significance test for a regression coefficient can be obtained by estimating power associated with the significance test for the correlation coefficient that would have been obtained from the same set of data. After performing a regression analysis for a set of data, we may obtain the sample correlation coefficient. r, either from Equation 19.1. or. more simply. as
' & c --'-,.
r=h
( 17.47)
2:y-
or we may take the square root of the coefficient of determination /"2 (Equation 17.1S). assigning to it the sign of h. Then, with r in hand, the procedures of Section 1l).4 may be employed (Cohen, 198X: 76-77) to estimate power and minimum required sample size for the hypothesis test for the regression coefficient, HI): (3 = O. 17.9 REGRESSION THROUGH THE ORIGIN Although not of common hiological importance. a special type of regression procedure is called for when we are faced with sets of data for which we know, a priori, that in the population Y will he zero when X is zero (i.c .. the population Y intercept is known to be zero). Since the point on the graph with coordinates (0.0) is termed the origin of the graph, this regression situation is known as regression through the origin. In this type of regression analysis. both variables must be measured on a ratio scale. for only such a scale has a true zero (see Section 1.1). For regression through the origin. the linear regression equation is (l7.4X) and some of the calculations
pertinent
to such a regression
arc as follows: ( 17.49)
total SS
=
2: yr.
with total DF
= 11,
(17.)())
356
Chapter 17
Simple Linear Regression
rezression SS
=
(~xYl I
~xl
b
residual SS
I
with regression DF
'
= total SS - regression SS, with residual DF =
=
(17.51)
1,
n -
1,
(17.52)
s2
= ~
s2
(17.53)
~xl'
b
st.x
where is residual mean square (residual SS/residual DF). Tests of hypotheses about the slope of the line are performed, as explained earlier in this chapter, with the exception that the preceding values are used; n - 1 is used as degrees of freedom whenever n - 2 is used for regressions not assumed to pass through the origin. Some statisticians (e.g., Kvalseth, 1985) caution against expressing a coefficient of determination, ,2, for this kind of regression. Bissell (1992) discusses potential difficulties with, and alternatives to, this regression model. A regression line forced through the origin does not necessarily pass through point (X, Y). (a) Confidence Intervals. For regressions passing intervals may be obtained in ways analogous to the is, a confidence interval for the population regression Equation 17.53 for s~ and An - 1 degree of freedom
through the origin, confidence procedures in Section 17.5. That coefficient, f3, is calculated using in place of n - 2. A confidence
interval for an estimated Y is
(Xl)
2 SYX
sy =
2:,X2
(17.54)
'
stx
using the as in Equation 17.53; a confidence interval for of m additional measurements at Xi is
L predicted
as the mean
(17.55) and a confidence interval for the Yi predicted for one additional measurement 2
(
sX'Y
Xl + ~X2
1
)
of Xi is (17.56)
(Seber and Lee, 2003: 149). (b) Inverse Prediction. For inverse prediction (see Section 17.6) with a regression passing through the origin,
Y
A
X=~
(17.57)
b'
I
and the confidence interval for the Xi predicted at a given Y is
X K
bYK
i
and" K = b2
where t = ta(2).(n-l) * Alternatively,
+
=
b2
-
± ~ K -
F:sr" where F
s2
YX
(-.!L 2:,xl
+
K) '
s; (Seber and Lee, 2003: 149).
t2
= f~(2).(n-l)
=
Fa(
I ).l.(n-l)·
(17.58)
Section 17.10
Data Transformations
in Regression
357
If X is to be predicted for multiple values of Y at that X, then
~ Y X=--.!. I b '
(17.59)
where Yi is the me.m of m values of Y; and the confidence limits would be calculated as
X + bY
i
K
where ( =
(a(2).(n+IIl-2)
V 1(5 Y·x
± ~ K
2
and* K = b2
),
(
Y;
2:X2
+ K)
(17.60)
m'
r2(5~)';
-
i'~2;
(52
(5~)' = and
)'
(17.61 )
III
residual SS + )' (~.2 . YX -
2: (Yij
iz:
_
n + m - 2
(17.62)
(Seber and Lee, 2003: 149). '.10
DATA TRANSFORMATIONS
IN REGRESSION
As noted in Section 17.2d, the testing of regression hypotheses and the computation of confidence intervals-though not the calculation of a and h-depend upon the assumptions of normality and homoscedasticity, with regard to the values of Y, the dependent variable. Chapter 13 discussed the logarithmic, square-root, and arcsine transformations of data to achieve closer approximations to these assumptions. Consciously striving to satisfy the assumptions often (but without guaranty) appeases the others. The same considerations are applicable to regression data. Transformation of the independent variable will not affect the distribution of Y, so transformations of X generally may be made with impunity, and sometimes they conveniently convert a curved line into a straight line. However, transformations of Y do affect least-squares considerations and will therefore be discussed. Acton (1966: Chapter 8); Glantz and Slinker (2001: 150-154 ); Montgomery, Peck, and Vining (2001: 173-193); and Weisberg (2005: Chapter 7) present further discussions of transformations in regression. If the values of Yare from a Poisson distribution (i.e., the data are counts, especially small counts), then the square-root transformation is usually desirable: Y' =
Jy
+ 0.5,
(17.63)
where the values of the variable after transformation (Y') are then submitted regression analysis. (Also refer to Section 13.2.) If the Y values are from a binomial distribution (e.g.. they are proportions percentages), then the arcsine transformation is appropriate: Y'
=
arcsin
v'Y.
to or
(17.64)
(See also Section 13.3.) Appendix Table B.24 allows for ready use of this transformation. *Alternatively.
K
=
02 -
F(.\'~)'. where F
=
t~(2).(Il+I1I-2)
=
Fa(J).I.(n+m-2j-
358
Chapter 17
Simple Linear Regression
The most commonly used transformation in regression is the logarithmic transformation (see also Section 13.1), although it is sometimes employed for the wrong reasons. This transformation, Y' = logY, (17.65) or Y'
= 10g(Y + 1),
(17.66)
is appropriate when there is heteroscedasticity owing to the standard deviation of Y at any X increasing in proportion to the value of X. When this situation exists, it implies that values of Y can be measured more accurately at low than at high values of X. Figure 17.11 shows such data (from Example 17.9) before and after the transformation. EXAMPLE 17.9 mation of Y
Regression Data Before and After
Logarithmic
Transfor-
Original data (as plotted in Figure 17.11a), indicating the variance of Y (namely, s~ ) at each X: X
5 10 15 20 25
s.2
Y 10.72,11.22,11.75,12.31 14.13,14.79,15.49,16.22 18.61,19.50,20.40,21.37 24.55,25.70,26.92,28.18 32.36,33.88,35.48,37.15
.y
0.4685 0.8101 1.4051 2.4452 4.2526
Transformed data (as plotted in Figure 17.11b), indicating the variance of log Y (namely, g Y ) at each X:
sro
x
logY
5 10 15 20 25
1.03019, 1.04999, 1.07004, 1.09026 1.15014,1.16997,1.19005,1.21005 1.26975, 1.29003, 1.30963, 1.32980 1.39005, 1.40993, 1.43008, 1.44994 1.51001, 1.52994, 1.54998, 1.56996
S2
log Y
0.000668 0.000665 0.000665 0.000665 0.000666
Many scatter plots of data imply a curved, rather than a straight-line, dependence of Yon X (e.g., Figure 17.11a). Often, logarithmic or other transformations of the values of Y and/or X will result in a straight-line relationship (as Figure 17.11b) amenable to linear regression techniques. However, if original, nontransformed values of Yagree with our assumptions of normality and homoscedasticity, then the data resulting from any of the preceding transformations will not abide by these assumptions. This is often not considered, and many biologists employing transformations do so simply to straighten out a curved line and neglect to consider whether the transformed data might indeed be analyzed legitimately by least-squares regression methods. If a transformation may not be used validly to straighten out a curvilinear regression, then Section 20.15 (or perhaps Chapter 21) may be applicable.
Section 17.10
Data Transformations
in Regression
359
40.00
• • •
35.00
•
30.00
•
•••
25.00
>-
•• ••
20.00 15.00
I
10.00
I
5.00
0
5
10
15
20
25
X (a) 1.6(){)()0
•• ••
1.50000
• • •
1.40000
•
>-
•
• ••
OIl .2 1.30000
•• • •
1.20000
1.10000
•• ••
1.00000 0
5
10
15
20
25
X (b) FIGURE 17.11: Regression data (of Example 17.9) exhibiting an increasing variability magnitude of X. (a) The uriginal data. (b) The data after logarithmic transformation
of Y with increasing of Y.
Section 13.4 mentions some other, less commonly employed, data transformations. Iman and Conover (1979) discuss rank transformation (i.e., performing a regression of the ranks of Y on the ranks of X). (a) Examination of Residuals. Since the logarithmic transformation is frequently proposed and employed to try to achieve homoscedasticity, we should consider how a justification for such a transformation might be obtained. If a regressi~n is fitted by least squares, then the sample residuals (i.e., the values of Yi - Yi) may be plotted against their corresponding X's, as in Figure 17.12 (see Draper and Smith, 1998: 62-64). If homoscedasticity exists, then the residuals should be distributed evenly above and below zero (i.e., within the shaded area in Figure 17.12a).
360
Chapter 17
Simple Linear Regression
X (a)
x
X
(e)
(eI)
FIGURE 17.12: The plotting of residuals. (a) Data exhibiting homoscedasticity. (b) Data with heteroscedasticity ofthe sort in Example 17.9. (c) Data for which there was likely an error in the regression calculations, or an additional variable is needed in the regression model. (d) Data for which a linear regression does not accurately describe the relationship between Y and X, and a curvilinear relationship should be considered.
If there is heteroscedasticity due to increasing variability in Y with increasing values of X. then the residuals will form a pattern such as in Figure 17.12h. and a logarithmic transformation might he warranted. If the residuals form a pattern such as in Figure 17.12c. we should suspect that a calculation error has occured or. that an additional important variable should he added to the regression model (see Chapter 20). The pattern in Figure 17.12d indicates that a linear regression is an improper model to describe the data; for example. a quadratic regression (see Section 21.2) might he employed. Glejser (1969) suggests fitting the simple linear regression (17.67) where E, = lYi - Yil. A statistically significant b greater than zero indicates Figure 17.12h to he the case, and the logarithmic transformation may be attempted. Then. after the ap~tion of the transformation. a plot of the new residuals (i.e .. log Yi - log Yi) should be examined and Equation 17.67 fitted. where E,
=
Ilog Yi -
logY; I· If this regression
has a b not significantly different from zero,
then we may assume that the transformation was justified. An outlier (see Section I7.2d) will appear on plots such as Figure 17.12 as a point very far outside the pattern indicated by the shaded area. Tests for normality in the distribution ~of residuals may be made by using the methods of Section 6.6 (employing Yi - Yi in place of Xi in that section); graphical examination of normality (as in Figure 6.11) is often convenient.
f·"
Section 17.11
The Effect of Coding Data
361
THE EFFECT OF CODING DATA Either X or Y data, or both, may be coded prior to the application of regression analysis, and coding may facilitate computations, especially when the data are very large or very small in magnitude. As shown in Sections 3.5 and 4.8, coding may consist of adding a constant to (or subtracting it from) X, or multiplying (or dividing) X by a constant; or both addition (or subtraction) and multiplication (or division) may be applied simultaneously. Values of Y may be coded in the same fashion; this may even be done simultaneously with the coding of X values, using either the same or different coding constants. If we let M x and My represent constants by which X and Y, respectively, are to be multiplied, and let Ax and A y be constants then to be added to MxX and MyY, respectively, then the transformed variables, [Xl and [Y], are ( 17.(8) [X] = MxX + Ax and [Y]
=
MyY
+ Ay.
(17.69)
As shown in Appendix C, the slope, /J, will not be changed by adding constants to X and/or Y, for such transformations have the effect of simply sliding the scale of one or both axes. But if multiplication factors are used in coding, then the resultant slope, [17], will be equal to (17 ) (M yj M x). Note that coding in no way alters the value of r2 or the tor F statistics calculated for hypothesis testing. A common situation involving multiplicative coding factors is one where the variables were recorded using certain units of measurement, and we want to determine what regression statistics would have resulted if other units of measurement had been used. For the data in Examples 17.1, 17.2, and 17.3, a = 0.715 em, and b = 0.270 ern/day, and Syx = 0.218 ern. If the wing length data were measured in inches, instead of in centimeters, there would have to be a coding by multiplying by 0.3937 in./cm (for there are 0.3937 inches in one centimeter). By consulting Appendix C, with My = 0.3937 in./cm, Ay = 0, Mx = I, and Ax = 0, we can calculate that if a regression analysis were run on these data, where X was recorded in inches, the slope would be [b] = (0.270 cm/day)(O.3937 in./cm) = 0.106 in.zday: the Y intercept would be [a] = (0.715 cm)(0.3937 in./cm) = 0.281 in.: and the standard error of estimate would be Sy·x = (0.3937 in./cm)(0.218 cm) = 0.086. A situation employing coding by both adding a constant and multiplying a constant is when we have temperature measurements in degrees Celsius (or Fahrenheit) and wish to determine the regression equation that would have resulted had the data been recorded in degrees Fahrenheit (or Celsius). The appropriate coding constants for use in Appendix C are determined by knowing that Celsius and Fahrenheit temperatures are related as follows: degrees
Celsius degrees
= (~)
9
Fahrenheit
This is summarized elsewhere on logarithmically transformed
(degrees ~ = (~)
Fahrenheit)
-
(~) 9
(degrees
Celsius)
(32)
+
32.
(Zar, 19(8), as are the effects of multiplicative data (Zar, 19(7).
coding
362
Simple Linear Regression
Chapter 17
EXERCISES 17.1. The following
data are the rates of oxygen consumption of birds. measured at different environmental temperatures:
Temperature ( .:C)
Oxygen consumption (ml/g/hr)
-18.
5.2
-15. -10.
4.7 4.5 3.6 3.4
5.
O. 5. 10. 19.
3.1 2.7 Us
(a) Calculate a and b for the regression of oxygen consumption rate on temperature. (b) Test, by analysis of variance, the hypothesis
No: f3
=
O.
(e) Test. by the I test, the hypothesis Ho: f3 = O. (d) Calculate the standard error of estimate of the regression. (e) Calculate the coefficient of determination of the regression. (f) Calculate the 95% confidence limits for f3.
17.2. Utilize
the regression equation computed for the data of Exercise 17. I. (a) What is the mean rate of oxygen consumption in the population for hirds at IS' C?
What is the 95'Yo confidence interval for this mean rate? (e) If we randomly chose one additional bird at 15c C from the population, what would its rate of oxygen consumption he estimated to he? (d) We can be 95% confident of this value lying between what limits? 17.3. The frequency of electrical impulses emitted from electric fish is measured from three fish at each of several temperatures. The resultant data are as follows: (b)
Temperature
CC) 20 22 23 25 27 28 30
Impulse frequency (number/see) 225,230.239 251. 259, 265 266,273, 2110 287,295,302 301.310.317 307, 313, 325 324.330.338
(a) Compute a and b for the linear regression equation relating impulse frequency to ternperature. (b) Test, by analysis of variance 110: f3 = O. (e) Calculate the standard error of estimate of the re gressi on. (d) Calculate the coefficient of determination of the regression. (e) Test lio: The population regression is linear.
C HAP
T E R
18
Comparing Simple Linear Regression Equations 18.1COMPARING TWO SLOPES 18.2COMPARING TWO ELEVATIONS 18.3COMPARING POINTS ON TWO REGRESSION LINES 18.4COMPARING MORE THAN TWO SLOPES 18.5COMPARING MORE THAN TWO ELEVATIONS 18.6MULTIPLE COMPARISONS AMONG SLOPES 18.7MULTIPLE COMPARISONS AMONG ELEVATIONS 18.8MULTIPLE COMPARISONS OF POINTS AMONG REGRESSION LINES 18.9AN OVERALL TEST FOR COINCIDENTAL REGRESSIONS
A regression equation may be calculated for each of two or more samples of data to compare the regression relationships in the populations from which the samples came. We may ask whether the slopes of the regression lines arc significantly different (as opposed to whether they may he estimating the same population slope, (3). Then, if it is concluded that the slopes of the lines are not significantly different. we may want to test whether the several sets of data are from populations in which the population Y intercepts, as well as the slopes, are the same. In this chapter, procedures for testing differences among regression lines will be presented, as summarized in Figure 18.1. 18.1 COMPARING TWO SLOPES
l
The comparison of the slopes of two regression lines is demonstrated in Example 1H.l. The regression relationship to he studied is the amount of water lost by salamanders maintained at various environmental temperatures. Using the methods of Section 17.2 or 17.7, a regression line is determined using data from each of two species of salamanders. The regression line for 26 animals of species I is 10.57 + 2.97X, and that for 30 animals of species 2 is 24.91 + 2.17 X; these two regression lines are shown in Figure IH.2. Temperature, the independent variable (X), is measured in degrees Celsius, and the dependent varia hie (Y) is measured in microliters (J.LI) of water per gram of body weight per hour. Example 18.1 shows the calculations of the slope of each of the two regression lines. In this example, the slope of the line expresses water loss. in J.Ll/g/hr. for each temperature increase of L'C. The raw data (the 26 X and Y data for species I and the 30 pairs of data for species 2) are not shown. hut the sums of squares (2:x2 and 2:i) and sum of crossproducts (LXY) for each line are given in this example. (The calculation of the Y intercepts is not shown.) As shown is Example 18.1, a simple method for testing hypotheses about equality of two population regression coefficients involves the use of Student's t in a fashion analogous to that of testing for differences between two population means (Section 8.1). The test statistic is r = hI - h2 (18.1 ) S"I =l>:
364
Chapter 18
Comparing Simple Linear Regression Equations How many lines? k>2
k=2
Test H,,: 131 = 132 = ... = 13k (Section 18.4)
Test Hn: 131 = 132 (Section 18.1) H"not
I/n not rCje~ejected
reje~ejected
I. Compute common slope (Equation 18.9) 2. Test II,,: Both population elevations are equal (Section 18.2) II"
STOP
''0' "j'~*"'d
I. Compute common slope (Equation 18.30) 2. Tcst Hn: All k population elevations are equal (Section 18.5) H"not
I. Compute common regression equation (Equation 18.24) 2. STOP
I. Multiple comparison testing of slopes (Section 18.n) 2. STOP
reje~CjCcted
STOP I. Compute regression (Equation 2. STOP
common equation 18.24)
FIGURE 18.1: Flow chart for the comparison
EXAMPLE 18.1 sion Coefficients
I. Multiple comparison testing of elevations (Section 18.7) 2. STOP of regression lines.
Testing for Difference Between Two Population Regres-
For each of two species of salamanders, the data are for water loss (Y, measured as ILlig/hr) and environmental temperature (X, in QC).
n«.
{31
HA :
{31
= {32 =1=
{32
For Species 1:
For Species 2:
n =26
n =30
2: xy
= 4363.1627
2: x2 = 2272.4750 2: xy 4928.8100
2:l
= 13299.5296
2:l
2:x2
b
= 1470.8712
=
= 10964.0947
b = 4928.8100 = 2.17 2272.4750 residual SS = 10964.0947
= 4363.1627 = 2.97
1470.8712 residual SS = 13299.5296 (4363.1627 )2
(4928.8100)2 2272.4750
1470.8712
= 356.7317 residual DF S2
(
YX
Sb) =b:
=
)
P
=
=
26 - 2
356.7317 24 12.1278 1470.8712
= 273.9142 =
24
+ 273.9142 + 28 +
residual DF =
12.1278 2272.4750
12.1278
= 0.1165
=
30 - 2
=
28
Section 18.1
t
Comparing Two Slopes
365
= 2.97 - 2.17 = 6.867
v =
0.1165 24 + 28
Reject Ho if to.05(2),52 =
=
52
It I ~
ta(2).v
2.007; Reject H«. P < 0.001
= 0.0000000081 J
[P
Calculation not shown: al
= 10.57
o: = 24.91
where the standard error of the difference between regression coefficients is
(lR.2)
and the pooled residual mean square is calculated as 2 ( Sy·x
)
_ p -
+ (residual SSh (residual DF)I + (residual DFh (residual SS)
1
(18.3)
,
the subscripts 1 and 2 referring to the two regression lines being compared. The critical value of t for this test has (n 1 - 2) + (n2 - 2) degrees of freedom (i.e., the sum of the two residual degrees of freedom), namely (18.4)
ay a~,
Just as the t test for difference between means assumes that = the preceding t test assumes that (atx) 1 = (atx h- The presence of the latter condition can be tested by the variance ratio test, F = (stx )larger! (st.x )smaller; but this is usually not done due to the limitations of that test (see Section 8.5). The 1 - a confidence interval for the difference between two slopes, /31 and /32, is (bl
-
b2)
±
(18.5)
ta(2),vSbl-b2'
where v is as in Equation 18.4. Thus, for Example 18.1, 95% confidence interval for /31 - /32 = (2.97 - 2..17) ±
(to.05(2),52
)(0.1165)
= 0.80 ± (2.007)(0.1165) = 0.80 lLl/g/hrrC ± 0.231Ll/g/hrrC; and the upper and lower 95% confidence limits for /31 - /32 are Ll = 0.57 ILl/g/hr/OC and L2 = 1.03 1L1/g/hr/o C. If He: /31 = /32 is rejected (as in Example 18.1), we may wish to calculate the point where the two lines intersect. The intersection is at (18.6)
366
Chapter 18
Comparing Simple Linear Regression Equations
at which the value of Y may be computed either as
or
The point of intersection of the two lines in Example 18.1 is at
x I -
__ 1_0_.5_7 = 17.92 °C _24_.9_1 2.97 2.17
and YI
=
1.057
+ (2.97)(17.92)
=
63.79 JLl/g/hrrC.
Figure 18.2 illustrates this intersection. 120
•..
100
..c: f32, or He: f31 - f32 ~ f30 versus HA: f31 - f32 > f3o, then we reject Hi, if t ~ tar I),v- In either case, I is computed by Equation 18.1, or by Equation 18.11 if f30 #- O. An alternative method of testing Ho: f31 = f32 is by the analysis of covariance procedure of Section 18.4. However, if a computer program is not used, the preceding t test generally involves less computational effort. (a) Power and Sample Size in Comparing Regressions. In Section 17.8 it was explained that the procedure for consideration of power in correlation analysis (Section 19.4) could be used to estimate power and sample size in a regression analysis. Section 19.6 presents power and sample-size estimation when testing for difference between two correlation coefficients. Unfortunately, utilization of that procedure for comparing two regression coefficients is not valid-unless one has the 2 rare case of (Lx = (Lx2)2 and = (Ll)2 (Cohen, 1988: 110).
}1
(Ll)1
COMPARING TWO ELEVATIONS
If He: f31 = f32 is rejected, we conclude that two different populations of data have been sampled. However, if two population regression lines are not concluded to have different slopes (i.e., Ho: f31 = f32 is not rejected), then the two lines are assumed to be parallel. In the latter case, we often wish to determine whether the two population regressions have the same elevation (i.e., the same vertical position on a graph) and thus coincide. To test the null hypothesis that the elevations of the two population regression lines are the same, the following quantities may be used in a t test, as shown in Example 18.2: sum of squares of X for common regression (18.12) sum of crossproducts
for common regression (18.13)
sum of squares of Y for common regression (18.14) residual SS for common regression (18.15)
368
Chapter 18
Comparing Simple Linear Regression Equations
residual OF for common regression = OFc =
nl
+
n2
-
3,
(18.16)
and residual MS for common regression Then, the appropriate t
=
(stx)c
=
SSe OFc
(18.l7)
test statistic is
=
(18.18)
and the relevant critical value of t is that for v = OF(" Example 18.2 and Figure 18.3 consider the regression of human systolic blood pressure on age for men over 40 years old. A regression equation was fitted for data for men in each of two different occupations. The two-tailed null hypothesis is that in the two sampled populations the regression elevations are the same. This also says that blood pressure is the same in both groups, after accounting for the effect of age. In the example, the Ho of equal elevations is rejected, so we conclude that men in these two occupations do not have the same blood pressure. As an alternative to this t-testing procedure, the analysis of covariance of Section 18.4 may be used to test this hypothesis, but it generally requires more computational effort unless a computer package is used.
EXAMPLE 18.2 sion Coefficients
Testing for Difference and Elevations
Between
Two Population
Regres-
The data are for systolic blood pressure (the dependent variable, Y, in millimeters of mercury [i.e., mm Hg]) and age (the independent variable, X, in years) for men over 40 years of age; the two samples are from different occupations. For Sample 1: n
= 13
For Sample 2: n
=
15
X
= 54.65 yr
X
= 56.93 yr
Y
= 170.23 mm Hg
Y
= 162.93 mm Hg
:Lx2
=
1012.1923
:Lx2
=
:Lxy
=
1585.3385
:L xy
= 2475.4333
:Li = 2618.3077 b
= 1.57 mm Hg/yr
a = 84.6mm Hg residual SS residual OF Ho:
f31=f32
HA:
f31
* f32
= 135.2833 = 11
:Li = b
1659.4333
3848.9333
= 1.49 mm Hg/yr
a = 78.0 mm Hg residual SS = 156.2449 residual OF
= 13
Section 18.2
369
= 135.2833 + 156.2449 = 12.1470 11 + 13 = 11 + 13 = 24
( 2 5yX
v
Comparing Two Elevations
)
p
=
Sbl -b2
0.1392
1.57-1.49 = 0.575 0.1392 t005(2).24 = 2.064; do not reject Ho. t =
P
> 0.50
[P
= 0.57]
Ho: The two population regression lines have the same elevation. HA: The two population regression lines do not have the same elevation.
+ 1659.4333 = 2671.6256 Be = 1585.3385 + 2475.4333 = 4060.7718 Ce = 2618.3077 + 3848.9333 = 6467.2410
Ac
1012.1923
=
= 4060.7718 = 1.520 mm H / r
b
g Y
2671.6256
c
ss.
6467.2410 _ (4060.7718)2 2671.6256 DFc = 13 + 15 - 3 = 25 =
=
295.0185
c
(s~.x)c f
=
= 295.0185 = 11.8007
25 (170.23 - 162.93) 11.8007
.l [ 13
+
.l
1.520(54.65 - 56.93)
= 10.77 = 8.218
+ (54.65 - 56.93) 2]
15
1.3105
2671.6256
(0.05(2),25= 2.060; reject Ho· P < 0.001
[P
= 0.0000000072]
If it is concluded that two population regressions do not have different slopes but do have different elevations, then the slopes computed from the two samples are both estimates of the common population regression coefficient, and the Y intercepts of the two samples are (18.19) and (18.19a) and the two regression equations may be written as (18.20) and (18.20a)
370
Chapter 18
Comparing Simple Linear Regression Equations 20() OIl
:r::
190
E E
.=
IXO
~ 170 :::l
'"'"
'" 160 Q: "0
o .2
co
150
.~
o ~ 140 o:
130~~--~--~--~---L--~--~ 40 45 50 55 60 6S
70
7S
Age, X, in yr
FIGURE 18.3: The two regression
lines of Example
18.2.
(though this can be misleading if X = 0 is far from the range of X's in the sample). For the two lines in Example 18.2 and Figure 18.3, A
Yi
=
84.6
+ 1.52Xi
Yi
=
78.0
+ 1.52Xi.
and
If it is concluded that two population regressions have neither different slopes nor different elevations, then both sample regressions estimate the same population regression, and this estimate may be expressed using the common regression coefficient, be, as well as a common Y intercept:
(18.21) where the pooled sample means of the two variables may be obtained as Xp
=
n2X2
nl
+ +
n2Y2
nl
+ +
nlXI
(18.22)
n2
and
Yp --
n'YI
(18.23)
n2
Thus, when two samples have been concluded to estimate the same population regression, a single regression equation representing the regression in the sampled population would be (18.24) We may also use t to test one-tailed hypotheses about elevations. For data such as those in Example 18.2 and Figure 18.3, it might have been the case that one occupation was considered to be more stressful, and we may want to determine whether men in that occupation had higher blood pressure than men in the other occupation. ....••
Section 18.3
Comparing Points on Two Regression Lines
371
This t test of elevations is preferable to testing for difference between the two population Y intercepts. Difference between Y intercepts would be tested with the null hypothesis Ho : al = a2, using the sample statistics al and a2, and could proceed with t = al - a2
(18.25)
Sll\-(/2
where
( 18.26)
(the latter two equations are a special case of Equations 18.27 and 18.28). However, a test for difference between Y intercepts is generally not as advisable as a test for difference between elevations because it uses a point on each line that may lie far from the observed range of X's. There are many regressions for which the Y intercept has no importance beyond helping to define the line and in fact may be a sample statistic prone to misleading interpretation. In Figure 18.3, for example, discussion of the Y intercepts (and testing hypotheses about them) would require a risky extrapolation of the regression lines far below the range of X for which data were obtained. This would assume that the linear relationship that was determined for ages above 40 years also holds between X = 0 and X = 40 years, a seriously incorrect assumption in the present case dealing with blood pressures. Also, because the Y intercepts are so far from the mean values of X, their standard errors would be very large, and a test of He: al = a2 would lack statistical power. COMPARING POINTS ON TWO REGRESSION LINES
If the slopes of two regression lines and the elevations of the two lines have not been concluded to be different, then the two lines are estimates of the same population regression line. If the slopes of two lines are not concluded to be different, but their elevations are declared different, the~ the population lines are assumed to be parallel, and for a given Xi, the corresponding Yi on one line is different from that on the other line. If the slopes of two population regression lines are concluded different, then the lin~s are intersecting rather than para!leJ. In such cases we may wish to test whether a Y on one line is the same as the Y on the second line at a particular X. For a two-tailed test, we can state the null hypothesis as Ho: fLy\ = fLY2 and the alternate as HA: fLy\ i:- fLy2' The test statistic is ~ t
= _Y--,-I __
~ Y-=.2
(18.27)
where
and the degrees of freedom are the pooled degrees of freedom of Equation] a test is demonstrated in Example 18.3.
8.4. Such
372
Chapter 18
Comparing Simple Linear Regression Equations
EXAMPLE 18.3 Testing for Difference Between Points on the Two Nonparallel Regression Lines of Example 18.1 and Figure 18.2. We Are Testing Whether the Volumes (Y) Are Different in the Two Groups at X = 12°C
=
Ho:
ILY1
HA:
IL}/I"*
ILY2 IL}/2
Beyond the statistics given in Example 18.1, we need to know the following: XI
= 22.93°C
and
X2 = 18.95°C.
We then compute: A
+ (2.97) ( 12) = 46.21 ILlIg/hr Y2 = 24.91 + (2.17)(12) = 50.95 ILI/g/hr
YI =10.57
s: Y1
,= -
Y2
\
12.1278[-.L 26
= J2.1l35 t = 46.21 -
+
-.L
+ (12 - 22.93)2 + (12 - 18.95)2]
30
1470.8712
2272.4750
= 1.45 ILlIg/hr
50.95
=
-3.269
1.45 v = 26
+ 30 - 4 = 52
to.05(2).52= 2.007 As
It I > to.05(2).52,reject
H«.
0.001 < P < 0.002
[P = 0.0019]
One-tailed testing is also possible. However, it should be applied with caution, as it assumes that each of the two predicted Y's has associated with it the same variance. Therefore, the test works best when the two lines have the same X, the same ~x2, and the same n. 18.4
COMPARING MORE THAN TWO SLOPES If the slopes of more than two regression equations are to be compared, the null hypothesis Ho: 131 = 132 = ... = 13k may be tested, where k is the number of regressions. The alternate hypothesis would be that, in the k sampled populations, all k slopes are not the same. These hypotheses are analogous to those used in testing whether the means are the same in k samples (Chapter 10). The hypothesis about equality of regression slopes may be tested by a procedure known as analysis of covariance (which was introduced in Section 12.10). Analysis of covariance (AN COY A) encompasses a large body of statistical methods, and various kinds of A COY A are presented in many comprehensive texts, including some of the books cited in the introduction of Chapter 16. The following version of analysis of covariance suffices to test for the equality (sometimes called homogeneity) of regression coefficients (i.e., slopes). Just as an analysis of variance for H«: ILl = IL2 = ... = ILk assumes that all k population variances are equal
Section
18.4
Comparing
373
More Than Two Slopes
(i.e., (TT = (T~ = ... = (TD, the testing of 131 = 132 = ... = 13k assumes that the residual mean squares in the k populations are all the same (i.e., ((Tt.x ) I = ((Tt-x h = ... = ((Tt-x h)· Heterogeneity of the k residual mean squares can be tested by Bartlett's test (Section 1O.6a), but this generaIIy is not done for the same reasons that the test is not often employed as a prelude to analysis-of-variance procedures. The basic calculations necessary to compare k regression lines require quantities already computed: L x2, L xy, L y2 (i.e., total SS), and the residual SS and DF for each computed line (Table 18.1). The values of the k residual sums of squares may then be summed, yielding what we shaIl caIl the pooled residual sum of squares, SSp; and the sum of the k residual degrees of freedom is the pooled residual degrees of freedom, DFp. The values OfLX2, LXY, and Ll for the k regressions may each be summed, and from these sums a residual sum of squares may be calculated. The latter quantity will be termed the common residual sum of squares, SSe. TABLE 18.1: Calculations
for Testing for Significant Elevations of k Simple Linear Regression Lines
Regression
I
Regression 2
Regression k
Differences
Among
LX2
LXY
Al
BI
CI
SSI = CI
-
B2
C2
SS2 = C2
-
Bk
Ck
SSk = C;
-
A2
Ak
Residual SS
Li
B2 ...--l Al B2 --.2 A2
8k2
Slopes and
Residual OF OFI = nl - 2 OF2 = n: - 2
OFk = nk
2
-
Ak k SS!, = LSS; ;~I
Pooled regression
k
OF!, = L(n;
-
2)
;~1 k =
k
Common regression Total regression*
Ae = LA; ;~I At
k
Be = L B;
c,
;~1
k
= L C;
BZ Ae
;~ I
c,
Bt
SSe = Ce· -
SSt
= C,
-
B2t At
2k
-
Ln ;~I
k
OFe = Ln; ;~I
-
k
-
k
OFt = Ln; ;~I
-
2
* See Section 18.5 for explanation.
To test He: 131 = 132 = ... = 13k, we may calculate
F
SSe - SSp) k - 1 -'----~~-=----'SSp (
=
(18.29)
DFp a statistic with numerator and denominator degrees of freedom of k - 1 and DFp, respectively.* Example 18.4 demonstrates this testing procedure for three regression lines calculated from three sets of data (i.e., k = 3). *The quantity SSe - SSp is an expression of variability hence, it is associated with k - I degrees of freedom.
among the k regression
coefficients;
374
Chapter 18
Comparing Simple Linear Regression Equations
If He: 131 = 132 = ... = 13k is rejected, then we may wish to employ a multiple comparison test to determine which of the k population slopes differ from which others. This is analogous to the multiple-comparison testing employed after rejecting Ho: ILl = IL2 = ... = ILk (Chapter 11), and it is presented in Section 18.6. If Ho: 131= 132 = ... = 13k is not rejected, then the common regression coefficient, be, may be used as an estimate of the 13 underlying all k samples: k
2: (2:xY)i be
=
i=1
(18.30)
-k----·
~(2:x2\ For Example 18.4, this is be EXAMPLE 18.4
=
2057.66/1381.10
=
1.49.
Testing for Difference Among Three Regression Functions·
2:x2 2:xy 2:l
n
b
Residual SS
Residual DF
Regression 1
430.14
648.97
1065.34
24
1.51
86.21
22
Regression 2
448.65
694.36
1184.12
29
1.55
109.48
27
Regression 3
502.31
714.33
1186.52
30
1.42
170.68
28
366.37
77
370.33
79
427.10
81
Pooled regression Common regression Total regression
1381.10
2057.66
3435.98
2144.06
3196.78
5193.48
1.49
83
* The italicized values are those computed from the raw data; all other values are derived from them. To test for differences among slopes: He; 131 = 132 = 133; HA: not equal. 370.33 F
=
-
All three f3's are
366.37
3 -
1
----::3:-76~6.-;;:-;37:O-----= 0.42 77
As Fo.os( I ).2,77
:::=
3.13, do not reject Ho.
P > 0.25 b:
=
c
2057.66 1381.10
=
[P
=
0.66]
1.49
To test for differences among elevations, Hr: HA:
The three population regression lines have the same elevation. The three lines do not have the same elevation.
Section 18.6
Multiple Comparisons Among Slopes
375
427.10 - 370.33 3 - 1 F = -----;;3~7~O.=33~- = 6.06 79 As
FO.05( 1).2.79
== 3.13, reject Hi; 0.0025 < P < 0.005
[P
= 0.0036]
.5 COMPARING MORE THAN TWO ELEVATIONS
Consider the case where it has been concluded that all k population slopes underlying our k samples of data are equal (i.e., Ho: 131 = 132 = ... = 13k is not rejected). In this situation, it is reasonable to ask whether all k population regressions are, in fact, identical; that is, whether they have equal elevations as well as slopes, and thus the lines all coincide. The null hypothesis of equality of elevations may be tested by a continuation of the analysis of covariance considerations outlined in Section 18.4. We can combine the data from all k samples and from the summed data compute ~ x2, ~ xy, ~ a residual sum of squares, and residual degrees of freedom; the latter will be called the total residual sum of squares (SSt) and total residual degrees of freedom (OFt). (See Table 18.1.) The null hypothesis of equal elevations is tested with
l,
SSt - SSe k - 1
F=
SSe
(18.31 )
OFe with k - 1 and OFc degrees of freedom. An example of this procedure is offered in Example 18.4. If the null hypothesis is rejected, we can then employ multiple comparisons to determine the location of significant differences among the elevations, as described in Section 18.6. If it is not rejected, then all k sample regressions are estimates of the same population regression, and the best estimate of that underlying population regression is given by Equation 18.24 using Equations 18.9 and 18.21. .6 MULTIPLE COMPARISONS
AMONG
SLOPES
If an analysis of covariance concludes that k population slopes are not all equal, we may employ a multiple-comparison procedure (Chapter 11) to determine which f3's are different from which -others, For example, the Tukey test (Section 11.1) may be employed to test for differences between each pair of 13 values, by Ho: 138 = f3A and HA: f3B =F f3A, where A and B represent two of the k regression lines. The test statistic is _ bs -
q -
bA
SE
(18.32)
If ~ x2 is the same for lines A and B, use the standard error SE
=
(18.33)
376
Chapter 18
Comparing Simple Linear Regression Equations
If
2: x2
is different for lines A and B, then use SE
(18.34)
=
The degrees of freedom for determining the critical value of q are the pooled residual DF (i.e., DFp in Table 18.1). Although it is not mandatory to have first performed the analysis of covariance before applying the multiple-comparison test, such a procedure is commonly followed. The confidence interval for the difference between the slopes of population regressions A and B is (18.35) where qa.lf.k is from Appendix Table 18.1). If one of several regression other lines is to be compared, Section 11.3) are appropriate.
Table B.5 and v is the pooled residual OF (i.e., DFp in lines is considered to be a control to which each of the then the procedures of Dunnett's test (introduced in Here, SE=
\j
12(s~.x)p 2:x2
(18.36)
if 2: x2 is the same for the control line and the line that is compared to the control line (line A), and
SE
=
(2)[ sy·x
12
P
(2: x ) A
+
(2: x
1
21
( 18.37)
) control
if it is not. Either two-tailed or one-tailed hypotheses may be thus tested. The 1 - CI' confidence interval for the difference between the slopes of the control line and the line that is compared to it (line A) is (18.38) where q~(2).lf.k is from Appendix Table B.6. To apply Scheffe's procedure (Section 11.4), calculate SE as Equation 18.37, depending on whether 2: x2 is the same for both lines. 18.7
MULTIPLE COMPARISONS
AMONG
18.36 or
ELEVATIONS
If the null hypothesis H« : 131 = 132 = ... = 13k has not been rejected and the null hypothesis of all k elevations being equal has been rejected, then multiple-comparison procedures may be applied (see Chapter 11) to conclude between which elevations there are differences in the populations sampled. The test statistic for the Tukey test (Section 11.1) is
_ I(VA q -
-
VB) - be(XA SE
-
XB)I '
(18.39)
with DFc degrees of freedom (see Table 18.1), where the subscripts A and B refer to the two lines the elevations of which are being compared, be is from Equation 18.30,
Section 18.8
Multiple Comparisons of Points Among Regression Lines
377
and
(18.40)
SE =
If Dunnett's test (Section 11.3) is used to compare the elevation of a regression line (call it line A) and another line considered to be for a control set of data,
(18.41)
SE =
Equation 18.41 would also be employed if Scheffe's test (Section 11.4) were being performed on elevations.
'18.8 MULTIPLE COMPARISONS OF POINTS AMONG REGRESSION LINES If it is concluded that there is no significant difference among the slopes of three or more regression lines (i.e., Hi; f31 = f32 = ... = f3k is not rejected; see Section 18.4), then it would be appropriate to test for differences among elevations (see Sections 18.5 and 18.7). Occasionally, when the above null hypothesis is rejected it is desired to ask whether points on the several regression lines differ at a specific value of X. This can be done, as a multisample extensio? of Section 18.3, by modifying Equations 18.27 and 18.28. For each line the value of Y is computed at the specified X, as
= a, +
Yi
and a Tukey test is performed for Ho:
fL)'H
(18.42)
b-X
= fLYA as (18.43)
where
SE
(18.44)
=
with DFp degrees of freedom. An analogous Dunnett or Scheffe test would employ
SE
(18.45)
=
A special case of this testing is ,:,here we wish to test for differences among the Y intercepts (i.e., the values of Y when X = 0), although such a test is rarely appropriate. Equations 18.43 and 18.44 for the Tukey test would become as
q
=
-
SE
aA
'
(l8.46~_~
378
Chapter 18
Comparing Simple Linear Regression Equations
and
SE
(18.47)
=
respectively. The analogous Dunnett or Scheffe test for Y intercepts would employ
SE
18.9
=
( 18.48)
AN OVERALL TEST FOR COINCIDENTAL REGRESSIONS It is also possible to perform a single test for the null hypothesis that all k regression lines are coincident; that is, that the {3's are all the same and that all of the a's are identical. This test would employ SS, - SSp
F=
2( k - 1) SSp
(18.49)
DFp
with 2( k - 1) and DFp degrees of freedom. If this F is not significant, then all k sample regressions are concluded to estimate the same population regression, and the best estimate of that population regression is that given by Equation 18.24. Some statistical workers prefer this test to those of the preceding sections in this chapter. However, if the null hypothesis is rejected, it is still necessary to employ the procedures of the previous sections if we wish to determine whether the differences among the regressions are due to differences among slopes or among elevations. EXERCISES 18.1. Given: For Sample 1: n = 28, 69.47, Li = 108.77. X For Sample 2: n = 30, 97.40, L;i = 153.59, X
L;x2 = 142.35, L;xy = 14.7, Y = 32.0. L;x2 = 181.32, L;xy = 15.8, Y = 27.4.
(a) TestHo: f31 =f32vs.H;\: f31 i=f32· (b) If Ho in part (a) is not rejected, test Ho: The elevations of the two population regressions are the same, versus H A: The two elevations are not the same. 18.2. Given: For Sample 1: n = 33, 2341.37, L;i = 7498.91.
L; x2
744.32,
L;xy
For Sample 2: n = 34, L;x2 973.14, L;xy 3147.68, L;i = 10366.97. For Sample 3: n = 29, L;x2 664.42. L;xy 2047.73, L;i = 6503.32. For the total of all 3 samples: n = 96, L;x2 3146.72, L;xy = 7938.25, L;i = 20599.33. (a) Test Ho: f31
= = =
= f32 = f33, vs. H A: All three {:l's
are not equal. (b) If Ho: in part (a) is not rejected, test Ho: The
three population regression lines have the same elevation, versus H A: The lines do not have the same elevation.
C HAP
T E R
19
Simple Linear Correlation 19.1 THE CORRELATION COEFFICIENT 19.2 HYPOTHESES ABOUT THE CORRElATION COEFFICIENT 19.3 CONFIDENCE INTERVALS FOR THE POPULATION CORRElATION
COEFFICIENT
19.4 POWER AND SAMPLE SIZE IN CORRElATION 19.5 COMPARING TWO CORRElATION COEFFICIENTS 19.6 POWER AND SAMPLE SIZE IN COMPARING TWO CORRELATION 19.7 COMPARING MORE THAN TWO CORRElATION COEFFICIENTS 19.8 MULTIPLE COMPARISONS AMONG 19.9 RANK CORRELATION 19.10 WEIGHTED RANK CORRELATION 19.11 CORRELATION
CORRElATION
WITH NOMINAL-SCALE
COEFFICIENTS
COEFFICIENTS
DATA
19.12 INTRAClASS CORRELATION 19.13 CONCORDANCE CORRELATION 19.14 THE EFFECT OF CODING
Chapter 17 introduced simple linear regression, the linear dependence of one variable (termed the dependent variable, Y) on a second variable (called the independent variable, X). In simple linear correlation, we also consider the linear relationship between two variables. but neither is assumed to be functionally dependent upon the other. An example of a correlation situation is the relationship between the wing length and tail length of a particular species of bird. Section 17.1 discussed the difference between regression and correlation. Recall that the adjective simple refers to there being only two variables considered simultaneously. Chapter 20 discusses correlation involving more than two variables. Coefficients of correlation are sometimes referred to as coefficients of association.
19.1 THE CORRELATION COEFFICIENT
Some authors refer to the two variables in a simple correlation analysis as XI and X2. Here we employ the more common designation of X and y, which does not. however, imply dependence of Y on X as it does in regression; nor does it imply a cause-and-effect relationship between the two variables. Indeed, correlation analysis yields the same results regardless of which variable is labeled X and which is Y. The correlation coefficient (sometimes called the simple correlation coefficient.* indicating that the relationship of only two variables is being examined) is "It is also called the Pearson product-moment correlation coefficient because of the algebraic expression of the coefficient. and the pioneering work on it, by Karl Pearson (I X57 -1936). who in I X96 was the lirst to refer to this measure as a correlation coefficient (David, 1995: Seal. 1967). This followed the major elucidation of the concept of correlation by Sir Francis Galton (1822-1911,
380
Chapter 19
Simple Linear-Correlation
calculated as" r=
~xy
(19.1
i,
(see Section 17.2a for the definition of the abbreviated symbols L x2, L and LXY). Among other methods (e.g., Symonds, 1926), Equation 19.1 may be computed by this "machine formula":
(19.2)
Although the denominator of Equations 19.1 and 19.2 is always positive, the numerator may be positive, zero, or negative. thus enabling r to be either positive, zero, or negative, respectively. A positive correlation implies that for an increase in the value of one of the variables, the other variable also increases in value; a negative correlation indicates that an increase in value of one of the variables is accompanied by a decrease in value of the other variable." If L xy = 0, then r = 0, and there is zero correlation, denoting that there is no linear association between the magnitudes of the two variables; that is, a change in magnitude of one does not j imply a change in magnitude of the other. Figure 19.1 presents these considerations· graphically. :1: Also important is the fact that the absolute value of the numerator of Equation 19.1 can never be larger than the denominator. Thus, r can never be greater than 1.0 nor cousin of Charles Darwin and proponent of human eugenics) in IXX8 (who published it first with the terms co-relation and reversion). The symbol r can be traced to Galton's I X77-18X8 discussion of regression in heredity studies (he later used r to indicate the slope of a regression line), and Galton developed correlation from regression. Indeed, in the early history of correlation, correlation coefficients were called Galton functions. The basic concepts of correlation, however, predated Galton's and Pearson's work by several decades (Pearson, 1920; Rodgers and Nicewander, 1988; Stigler, 1989; Walker, 1929: 92-102, 106, 109-110. 187). The term coefficient of correlation was used as early as 1X92 by Francis Ysidro Edgeworth (I X45-l926; Irish statistician and economist, whose uncle and grand-uncle [sic] was Sir Francis Beaufort, 1774-IX57; Beaufort conceived the Beaufort Wind Scale) (Desmond, 2000; Pearson, 1920). "The computation depicted in Equation 19.2 was first published by Harris (1910). The correlation coefficient may also be calculated as r = Lxy/[(n - I )SXsy] (Walker, 1929: 111). It is also the case that I r 1= Jbyb X, where by is the regression coefficient if Y is treated as the dependent variable (Section 17.2a) and b X is the regression coefficient if X is treated as the dependent variable; that is, r is the geometric mean (Section 3.4a) of by and b X: also, following from
J(
Equation 17.15, I r I = Regression SS)/ (Total SS ); see also Rodgers and Nicewander (1988). In literature appearing within a couple of decades of Pearson's work, it was sometimes suggested that a correlation coefficient be computed using deviations from the median instead of from the mean (Eells, 1926; Pearson, 1920), which would result in a quantity not only different from r but without the latter's theoretical and practical advantages. tThe first explanation of negative correlation was in an lX92 paper (on shrimp anatomy) by English marine biologist Walter Frank Raphael Weldon (IX60-1906) (Pearson, 1920). tGalton published the first two-variable scatter plot of data in I XX5(Rodgers and Nicewander, 19XX).
380
Chapter 19
Simple Linear' Correlation
calculated as" (19.1
l,
(see Section 17.2a for the definition of the abbreviated symbols 2: x2, 2: and 2: xy). Among other methods (e.g., Symonds, 1926), Equation 19.1 may be computed by this "machine formula":
r=
(19.2)
Although the denominator of Equations 19.1 and 19.2 is always positive, the numerator may be positive, zero, or negative, thus enabling r to be either positive, zero, or negative, respectively. A positive correlation implies that for an increase in the value of one of the variables, the other variable also increases in value; a negative correlation indicates that an increase in value of one of the variables is accompanied by a decrease in value of the other variable." If 2: xy = 0, then r = 0, and there is zero correlation, denoting that there is no linear association between the magnitudes of the two variables; that is, a change in magnitude of one does not, imply a change in magnitude of the other. Figure 19.1 presents these considerations graphically. :j: Also important is the fact that the absolute value of the numerator of Equation 19.1 can never be larger than the denominator. Thus, r can never be greater than 1.0 nor cousin of Charles Darwin and proponent of human eugenics) in 1888 (who published it first with the terms co-relation and reversion). The symbol r can be traced to Galton's 1877-1888 discussion of regression in heredity studies (he later used r to indicate the slope of a regression line), and Galton developed correlation from regression. Indeed, in the early history of correlation, correlation coefficients were called Galton functions. The basic concepts of correlation, however, predated Galton's and Pearson's work by several decades (Pearson, 1920; Rodgers and Nicewander, 1988; Stigler, 1989; Walker, 1929: 92-102, 106, 109-110,187). The term coefficient of correlation was used as early as 1892 by Francis Ysidro Edgeworth (1845-1926; Irish statistician and economist, whose uncle and grand-uncle [sic] was Sir Francis Beaufort, 1774-1857; Beaufort conceived the Beaufort Wind Scale) (Desmond, 2000; Pearson, 1920). *The computation depicted in Equation 19.2 was first published by Harris (1910). The correlation coefficient may also be calculated as r = 2: xy] [( n - l)s xsy] (Walker, 1929: 111). It is also the case that I r 1= )byb X, where by is the regression coefficient if Y is treated as the dependent variable (Section 17.2a) and b X is the regression coefficient if X is treated as the dependent variable; that is, r is the geometric mean (Section 3.4a) of by and bX; also, following from Equation 17.15, I r I = )( Regression SS)/ (Total SS); see also Rodgers and Nicewander (1988). In literature appearing within a couple of decades of Pearson's work, it was sometimes suggested that a correlation coefficient be computed using deviations from the median instead of from the mean (Eells, 1926; Pearson, 1920), which would result in a quantity not only different from r but without the latter's theoretical and practical advantages. tThe first explanation of negative correlation was in an 1892 paper (on shrimp anatomy) by English marine biologist Walter Frank Raphael Weldon (1860-1906) (Pearson, 1920). :J:Galtonpublished the first two-variable scatter plot of data in 1885 (Rodgers and Nicewander, 1988).
Section 19.1
The Correlation Coefficient
381
....... .. .... ~e. nx» y
y
...... ..
.:.•f.•••\-.. ~-:: :.~ ..::-. ...~.:
..
X (b)
X (a)
......~-..
y
·I.~:: i:~.~·: .... :... :::::: ..•....
y
..... -.:x
X
(c)
(d)
FIGURE 19.1: Simple linear correlation. tion. (d) No correlation.
(a) Positive correlation.
(b) Negative correlation.
(c) No correla-
less than -1.0. Inspection of this equation further will reveal also that r has no units of measurement, for the units of both X and Y appear in both the numerator and denominator and thus cancel out arithmetically. A regression coefficient, b, may lie in the range of -00 :s b :s 00, and it expresses the magnitude of a change in Yassociated with a unit change in X. But a correlation coefficient is unitless and -1 :s r :s 1. The correlation coefficient is not a measure of quantitative change of one variable with respect to the other, but it is a measure of strength of association between the two variables. That is, a large value of I r I indicates a strong association between X and Y. The coefficient of determination, r2, was introduced in Section 17.3 as a measure of how much of the total variability in Y is accounted for by regressing Yon X. In a correlation analysis, r2 (occasionally called the correlation index) may be calculated simply by squaring the correlation coefficient, r. It may be described as the amount of variability in one of the variables (either Y or X) accounted for by correlating that variable with the second variable.* As in regression analysis, r2 may be considered to be a measure of the strength of the straight-line relationship." The calculation of rand r2 is demonstrated in Example 19.1a. Either r or r2 can be used to express the strength of the relationship between the two variables. * As in Section 17.3a, 1 - ,2 may be referred to as the coefficient of nondetermination. A term found in older literature is coefficient of alienation: ~,given by Galton in 1889 and named by T. L. Kelley in 1919 (Walker, 1929: 175). t Ozer (1985) argued that there are circumstances where Irl is a better coefficient of determination than ,.2.
384
Chapter 19
Simple Linear Correlation where the standard error of r is calculated hy Equation 19.3, and the degrees of freedom :II"C 1/ = n - 2. (Fisher. 192 L 1925h: 157).* The null hypothesis is rejected if 1(1 2: (,,(2).1" Alternatively, this two-tailed hypothesis may be tested using F
(Cacoullos.
I + I -
=
19(5), where the critical value is
Irl Irl
( 19.5) (See Example
F"(2).,,.I'"
19.1 b.) OL critical
values of Irl (namely, 1',,(2).,,) may he read directly from Appendix Table B.17.1 One-tailed hypotheses about the population correlation coefficient may also be tested by the aforementioned procedures. For the hypotheses Ho: p ::::;0 and H II: P > D. compute either lor F (Equations 19.4 or Ill.S. respectively) and reject Hi, if r is positive and either ( 2: f,,( I ).1" or F 2: Fa( I ).1r. 1" or r 2: rat I ).1" To test Ho: p 2: 0 vs. H/\: p < 0, reject 110 if r is negative and either If I 2: {a( I ).1" or F 2: r:r( I ).1" or [r] 2: ra( 1).1" If we wish to test He: (J = Po for any Po other than zero, however, Equations 19.4 and 19.5 and Appendix Table B.17 are not applicable. Only for PI) = () can I' be considered to have come from a distribution approximated by the normal, and if the distribution of r is not normal, then the { and F statistics may not be validly employed. Fisher (I <J2L 1925b: 1(2) dealt with this problem when he proposed a transformation enabling r to he converted to a value. called z, which estimates a population parameter, , (lowercase Greek zeta), that is normally distributed. The transformation" is Z
~ (I + r)
= O.)ln
--
1 -
.
( 19.9)
r
For values of r bet ween () and I, the corresponding values or Fisher's z will lie bel ween () and + : and for r's from 0 to - 1. the corresponding zs will fall between () and -00. "As an aside. I Illay also be computed as follows (Marun Andres. Hcrranz Tejedor. and Silva Mato. I SlSlS): Consider all N data (where N = 1/1 + 1/2) to be a sample of measurements. and associate with each datum either O. if the datum is a value of X. or I. if it is a value of Y: consider this set of N zcrox and ones to he a second sample of data. Then calculate / for the two samples. as would be done in a t we-sample I-test (Section K.I). This concert will be used in Section 1l).11h. 'Critical values of r mav also be calculated as
'7L/J +
I~.I'
( ISl.6) 1J
where (Y may be either one tailed or two tailed, and /) = n - 2. If a regression analysis is performed, rather than a correlation analysis. the probability of rejection of lit): f3 = () is identical to the probability of rejecting 110: (J = O. Also. r is related to /J as
r
= SX·h.
(I Sl.7)
s>, where
Sx
and
S)(
me the standard
:::z is also equal to r + r'j3 the inverse hyperbolic tangent Appendix Table B.ll). is
----------------------
deviations
+
/'/5 ...
or X
and y, respectively. and is a quantity that mathematicians
of r, narncly r = tanh
r = tanh ;
or
I r. The transformation
recounizc
"- e:.. r = ('2;
as
of ::: to r. given in
+
•••••••••••••••
( ISl.X)
1
Section 19.2
Hypotheses about the Correlation Coefficient
385
For convenience, we may utilize Appendix Table B.18 to avoid having to perform the computation of Equation 19.8 to transform r to z." For the hypothesis H«: P = PO, then we calculate a normal deviate, as 2
=
z - ~o ,
(19.10)
O'z
where z is the transform of r; ~o is the transform of the hypothesized and the standard error of z is approximated by O'z=
coefficient, PO;
(19.11)
~
\j~
(Fisher, 1925b: 162), an approximation that improves as n increases. TnExample 19.1a, r = 0.870 was calculated. If we had desired to test Ho: p = 0.750, we would have proceeded as shown in Example 19.2. Recall that the critical value of a normal deviate may be obtained readily from the bottom line of the t table (Appendix Table B.3), because 20'(2) = (a(2).00' EXAMPLE 19.2 r
Testing Ho: P
= PO,
Where Po
'* 0
= 0.870
n = 12 Ho: P = 0.750;
z =
HA: P
*-
0.750.
0.5 In (_11_+_0._87_0) = 1.3331 0.870
~o = 0.9730 2
=
z - ~o -----.=====
1.3331 - 0.9730
) n ~ 3
20.05(2) =
to.05(2),oo
)I
0.3601 0.3333
= 1.0803
= ] .960
Therefore, do not reject Hi; 0.20 < P < 0.50
[P
= 0.28]
One-tailed hypotheses may also be tested, using 2a( I) (or to'( l),oo) as the critical value. For He: P :::; PO and HA: P > PO, Hi, is rejected if 2;::: 20'(1), and for He: P ;::: PO versus HA: P < PO, Ho is rejected if 2 :::; - 20'( I)' If the variables in correlation analysis have come from a bivariate normal distribution, as often may be assumed, then we may employ the aforementioned procedures, as well as those that follow. Sometimes only one of the two variables may be assumed to have been obtained randomly from a normal population. It may be possible to employ a data transformation (see Section 17.10) to remedy this situation. If that cannot be done, then the hypothesis Ho: P = 0 (or its associated one-tailed hypotheses) may be tested, but none of the other testing procedures of this chapter (except for the * As noted at the end of Section 19.7, there is a slight and correctable small, however, this correction will be insignificant and may be ignored.
bias in z. Unless n is very
386
Chapter 19
Simple Linear Correlation
methods of Section 19.9) are valid. If neither variable came from a normal population and data transformations do not improve this condition, then we may turn to the procedures of Section 19.9. 19.3
CONFIDENCE INTERVALS FOR THE POPULATION CORRELATION COEFFICIENT
Confidence limits on P may be determined the lower and upper confidence limits are L
by a procedure
related to Equation 19.5;
- (1 (1
+ Fa)r + (1 - Fa) + Fa) + (1 - Fa)r
(19.12)
_ (1 (1
+ Fa)r - (1 - Fa) + Fa) - (1 - Fa),'
(I 9.13)
1 -
and L
2 -
respectively, where Fa = Fa(2),II,II and v = n - 2 (Muddapur, 1988).* This is shown in Example 19.3. Fisher's transformation may be used to approximate these confidence limits, although the confidence interval will generally be larger than that from the foregoing procedure, and the confidence coefficient may occasionally (and undesirably) be less than 1 - a (Jeyaratnam, 1992). By this procedure, we convert r to z (using Equation 9.8 or Appendix Table B.18); then the 1 a confidence limits may be computed for (19.16)
r
or, equivalently, (19.17) The lower and upper confidence limits, L, and L2, are both z values and may be transformed to r values, using Appendix Table B.19 or Equation 19.9. Example 19.3 demonstrates this procedure. Note that although the confidence limits for ~ are symmetrical, the confidence limits for p are not. 19.4
POWER AND SAMPLE SIZE IN CORRELATION
(a) Power in Correlation. If we test He: p = 0 at the a significance level, with a sample size of n, then we may estimate the probability of correctly rejecting Ho when PO is in fact a specified value other than zero. This is done by using the Fisher z transformation for the critical value of r and for the sample r (from Appendix Table B.18 or Equation 19.8); let us call these two transformed values Za and z, respectively. Then, the power of the test for Ho: P = 0 is 1 f3( 1 ), where f3( 1) is the one-tailed probability of the normal deviate Z{3(I)
* Jeyaratnam
= (z -
(1992) asserts that the same confidence LI L
where w is ra,II from Equation
r -
w
1 -
rw
= ---
2 -
r 1
+ +
(19.18)
Za)~,
limits are obtained and
w rw'
19.6 using the two-tailed
by (19.14) (19.15)
ta(2),v-
Section 19.4
Power and Sample Size in Correlation
387
EXAMPLE19.3 Setting Confidence Limits for a Correlation Coefficient. This Example Usesthe Data of Example 19.1a r
= 0.870,
n
= 12,
/J
= 10,
a
= 0.05
For the 95% confidence interval for p: Fa
= FO.05(2).10.]O = 3.72; so
t.,
=
L2
(1 + Fa)r - (1 - Fa) _ 4.11 + 2.72 = 0.963. = -'----------=~----'---~
(1 + Fa)r + (1 - Fa) _ 4.11 - 2.72 (1 + Fa) + (1 - Fa)r 4.72 2.37 (1 + Fa)
-
= 0.592
4.72 + 2.37
(1 -- Fa)r
For the Fisher approximation: r
= 0.870; therefore, z = 1.3331 (from Appendix Table B.I8).
(T t.
=)
1 n - 3
=
0.3333
95% confidence interval for ~ =
z ±
ZO.05(2)(Tz
=
z ±
t().05(2),(~Pz
= =
L1
=
0.680; L2
=
1.3331 ± (1. 9600) (0.3333 ) 1.3331 ± 0.6533
1.986
These confidence limits are in terms of z. For the 95% confidence limits for p, transform L1 and L2 from z to r (using Appendix Table B.19): L1 = 0.592, L2 = 0.963. Instead of using the Appendix table, this confidence-limit to r may be done using Equation 19.9: L]
= e2(O.680)
-
1
= e2(1.986) e2(
1',)86)
-
from z
= 2.8962 = 0.592
e2(O.680) + I L2
transformation
4.8962
1 = 52.0906 = 0.963. 54.0906
+ 1
(Cohen, 1988: 546), as demonstrated in Example 19.4. This procedure may be used for one-tailed as well as two-tailed hypotheses, so a may be either 0'(1) or 0'(2), respectively. (b) Sample Size for Correlation Hypothesis Testing. If the desired power is stated, then we can ask how large a sample is required to reject He: P = 0 if it is truly false with a specified PO O.This can be estimated (Cohen, 1988: 546) by calculating
*
n=(Z,B(])
+ ~o
Za)2
+ 3,
(19.19)
where ~o is the Fisher transformation of the PO specified, and the significance level, a, can be either one-tailed or two-tailed. This procedure is shown in Example 19.5a.
388
Chapter 19
Simple Linear Correlation
EXAMPLE19.4 Example 19.1b
of Power of the Test of Ho: p = 0 in
Determination
= 12; v = 10 r = 0.870, so z = 1.3331
n
r005(2).lO = 0.576, so ZO.05= 0.6565 Z{3(l)
=
(1.3331 - 0.6565)J12
-
3
= 2.03 From Appendix Table B.2, P( Z the test is 1 - f3 = 0.98.
EXAMPLE19.5a Ho: p = 0
2:
2.03)
Determination
=
0.0212
=
f3. Therefore,
the power of
of Required Sample Size in Testing
We desire to reject Ho: P = a 99% of the time when Ipi 2: 0.5 and the hypothesis is tested at the 0.05 level of significance. Therefore, f3( 1) = 0.01 and (from the last line of Appendix Table B.3) and Z{3(l) = 2.3263; 0'(2) = 0.05 and Z,,(2) = 1.9600; and, for r = 0.5, Z = 0.5493. Then + 1.9600)2 n = (2.32630.5493 + 3 = 63.9, so a sample of size at least 64 should be used.
(c) Hypothesizing p Other Than O. For the two-tailed hypothesis Ho: p PO i= 0, the power of the test is determined from Z{3( 1)
= IZ -
Zo
I - Za(2).Jrl=3
= Po, where (19.20)
instead of from Equation 19.18; here, zo is the Fisher transformation of PO. One-tailed hypotheses may be addressed using a( 1) in place of 0'(2) in Equation 19.20. (d) Sample Size for Confidence Limits. After we calculate a sample correlation coefficient, r, as an estimate of a population correlation coefficient, p, we can estimate how large a sample would be needed from this population to determine a confidence interval for p that is no greater than a specified size. The confidence limits in Example 19.5b, determined from a sample of n = 12, define a confidence interval having a width of 0.963 - 0.592 = 0.371. We could ask how large a sample from this population would be needed to state a confidence interval no wider than 0.30. As shown in Example 19.5b, this sample size may be estimated by the iterative process (introduced in Section 7.7a) of applying Equations 19.12 and 19.13. Because the desired size of the confidence interval (0.30) is smaller than the confidence-interval width obtained from the sample of size 12 in Example 19.3 (0.371), we know that a sample larger than 12 would be needed. Example 19.5b shows the confidence interval calculated for n = 15 (0.31), which is a little larger than desired; a confidence interval is calculated for n = 16, which is found to be the size desired (0.30). If the same process is used for determining the sample size needed to obtain a confidence interval no larger than 0.20, it is estimated that n would have to be at least 30.
Section 19.4
Power and Sample Size in Correlation
389
EXAMPLE 19.5b Determination of Required Sample Size in Expressing Confidence Limits for a Correlation Coefficient If a calculated r is 0.870 (as in Example 19.1a), and a 95% confidence interval no wider than 0.30 is desired for estimating p, the following iterative process may be employed: If n = 15 were used, then u = 13, and FO.05(2).13.13 = 3.12, so L,
= (1 + Fa)r + (1 - Fa) = (4.12)(0.870) (1 + Fa)
+ (1 - Fa)r
4.12 -
- 2.12 (2.12)(0.870)
= 3.584 - 2.12 4.12 -
1.844
= 1.464 = 0.643 2.276 L2
= (1 + Fa)r - (I - Fa) = (4.12)(0.870) (1 + Fa)
-
(1 - Fa)r
+ 2.12 = 3.584 + 2.12 4.12 + (2.12)(0.870) 4.12 + 1.844
= 5.704 = 0.956 5.964 and the width of the confidence interval is L2 - LI = 0.956 - 0.643 which is a little larger than desired, so larger n is needed. If n = 20 were used, then u = 18, and FO.05(2).IH,IH = 2.60, so LI
= (1 + Fa)r + (I - Fa) = (3.60)(0.870) (1 + Fa)
+ (I - Fa)r
3.60 -
- 1.60 (1.60)(0.870)
= 0.31,
= 3.132 - 1.60 3.60 -
1.392
= 1.532 = 0.694 2.208 L2
= (1 + Fa)" - (I - Fa) = 3.132 + 1.60 = 4.732 = 0.948 (I + Fa)
-
(I - Fa)r
3.60 + 1.392
4.992
and the width of the confidence interval is L2 - LI = 0.948 - 0.694 which is smaller than that desired, so a smaller n may be used. If n = 16 were used, then l/ = 14, and FO.05(2).14.14 = 2.98, so LI
= (1 + Fa)r + (I - Fa) = (3.98)(0.870) (1 + Fa)
+ (1 - Fa)"
- 1.98 3.98 - (1.98)(0.870)
=
0.25,
= 3.463 - 1.98 3.98 - 1.723
= 1.483 = 0.657 2.257 L2
= (1 + Fa)r - (I - Fa) = 3.463 + 1.98 = 5.443 = 0.954 (1 + Fa) - (1 - Fa)r 3.98 + 1.723 5.703
and the width of the confidence interval is L2 - LI = 0.954 - 0.657 = 0.30, so it is estimated that a sample size of at least 16 should be used to obtain the desired confidence interval. To calculate the desired confidence interval using the Fisher transformation, r = 0.870; z = 1.3331 (e.g., from Appendix Table B.I8); 20.05(2) = 1.9600. If n
= 15 were used, then
~ ~ = \j~
CT-
= 0.2887.
The 95 % confidence interval for? is 1.3331 ± (1.9600)( 0.2887)
= 1.3331 ± 0.5658.
390
Chapter 19
Simple Linear Correlation
The confidence limits are LI = 0.7672: L2 = 1.8990 For the 95% confidence interval for P, transform this LI and L2 for for r (e.g., using Appendix Table B.19):
= 0.645:
LI
L2
z
to LI and L2
= 0.956
and the width of the confidence interval is estimated to be 0.956 - 0.645 which is a little larger than that desired, so a larger n should be used. If n
=
16 were used, then
(T
z
.
=
I
1
\j 16 - 3
=
=
0.31,
0.2774.
For the 95% confidence interval for ~ = 1.3331 ± (1.9600)(0.2774) = 1.3331 ± 0.5437 and L
I
= 0.7894;
L2
= l.8768
For the 95% confidence interval for P, transform the za to
r':
The confidence limits are LI
=
0.658;
L2
=
0.954
and the width of the confidence interval is estimated to be 0.954 - 0.658 = 0.30, so it is estimated that a sample size of at least 16 should be used. 19.5
COMPARING
TWO CORRELATION COEFFICIENTS
Hypotheses (either one-tailed or two-tailed) about two correlation be tested by the use of
coefficients may (19.21)
where (19.22) 3 If
nl = n i-
then Equation 19.22 reduces to (T Z I - Z2
= ) n : 3'
(19.23)
where n is the size of each sample. The use of the Fisher z transformation both normalizes the underlying distribution of each of the correlation coefficients, '1 and r2, and stabilizes the variances of these distributions (Winterbottom, 1979). The multisample hypothesis test recommended by Paul (1988), and presented in Section 19.7, may be used for the two-tailed two-sample hypothesis mentioned previously. It tends to result in a probability of Type I error that is closer to the specified a; but, as it also tends to be larger than 0', I do not recommend it for the two-sample casco The preferred procedure for testing the two-tailed hypotheses, Ho: PI = P2 versus HA: PI = P2, is to employ Equation 19.21, as shown in Example 19.6.* One-tailed hypotheses may be tested using one-tailed critical values, namely Za( I). *A IZI -
null hypothesis such as 110: PI - P2 = PO, where Po # 0, might be tested hy substituting (0 for the numerator in Equation 19.21, but no utility for such a test is apparent.
z21-
Section 19.5
Comparing Two Correlation Coefficients
391
Testing the Hypothesis Ho: P1 = P2
EXAMPLE 19.6
For a sample of 98 bird wing and tail lengths, a correlation coefficient of 0.78 was calculated. A sample of 95 such measurements from a second bird species yielded a correlation coefficient of 0.84. Let us test for the equality of the two population correlation coefficients. He: PI = P2; rl
=
0.78
ZI
=
1.0454
nl
= 98
Z =
HA: PI ri
Z2 =
n2 1.0454 -
I
-V nl ZO.OS(2)
=
=
-
P2
0.84 1.2212
= 95 -0.1758 0.1463
1.2212
+
1
=I:-
3
to.05(2).CXJ
1 n: -
-1.202
3
= 1.960
Therefore, do not reject He: 0.20 < P < 0.50
[P
=
0.23]
The common correlation coefficient may then be computed as
+ (n2 (nl - 3) + (n2
_ ~----~~--~~--~~ (nl - 3)zl
Zw -
rw
3)Z2
(95)(1.0454) 95
3)
+ (92)(1.2212) + 92
=
1.1319
= 0.81.
Occasionally we want to test for equality of two correlation coefficients that are not independent. For example, if Sample 1 in Example 19.6 were data from a group of 98 young birds, and Sample 2 were from 95 of these birds when they were older (three of the original birds having died or escaped), the two sets of data should not be considered to be independent. Procedures for computing rl and ri- taking dependence into account. are reviewed by Steiger (1980). (a) Common Correlation Coefficient. As in Example 19.6, a conclusion that PI = P2 would lead us to say that both of our samples came from the same population of data, or from two populations with identical correlation coefficients. In such a case, we may combine the information from the two samples to calculate a better estimate of a single underlying p. Let us call this estimate the common, or weighted, correlation coefficient. We obtain it by converting
(nl
- 3)
+ (n2
3)Z2 3)
(19.24)
to its corresponding r value, rl\', as shown in Example 19.6. If both samples are of equal size (i.e., nl = n2), then the previous equation reduces to _
Zw -
ZI
+
Z2
(19.25)
392
Chapter 19
Simple Linear Correlation
Appendix Table B.19 gives the conversion of Zw to the common correlation coefficient, rw (which estimates the common population coefficient, p). Paul (1988) has shown that if p is less than about 0.5, then a better estimate of that parameter utilizes (nl
-
+ (n2
1)Z'I
Zw = -----'---------=-
(nl
-
1) + (n2
where Z;
=
3Zi Zi
+
1)z2 1)
ri
(19.26)
(19.27)
4(ni - 1)
and z, is the z in Equation 19.8 (Hotelling, 1953). We may test hypotheses about the common correlation coefficient (Ho: P = 0 versus HA: P ::j:. 0, or Ho: P = PO versus HA: P ::j:. PO, or similar one-tailed tests) by Equation 19.33 or 19.34. 19.6
POWER AND SAMPLE SIZE IN COMPARING
TWO CORRELATION COEFFICIENTS
The power of the preceding test for difference between two correlation coefficients is estimated as 1 - {3,where {3is the one-tailed probability of the normal deviate calculated as Z (J( I) -- IZI - z21 - Z a (19.28) O"ZI-Z2
(Cohen, 1988: 546-547), where a may be either one-tailed or two-tailed and where Zo'( I) or ZO'(2) is most easily read from the last line of Appendix Table B.3. Example 19.7 demonstrates EXAMPLE 19.7 Example 19.6
this calculation for the data of Example 19.6. Determination
= 1.0454
ZI
O"ZI-Z2
Zo'
=
Z (J( I)
Z2
of the Power of the Test of Ho: PI =
oz
in
= 1.2212
= 0.1463 = 1.960 - 11.0454 - 1.22121 _ 1.960 0.1463 = 1.202 - 1.960
ZO.05(2)
= -0.76 From Appendix Table B.2, (3
= P(Z
2':
= 1 - P(Z :s -0.76)
-0.76)
= 1 - 0.2236 = 0.78.
Therefore, power
= I - {3 = 1 - 0.78 = 0.22.
If we state a desired power to detect a specified difference between transformed correlation coefficients, then the sample size required to reject Hi, when testing at the a level of significance is
+
n = 2(ZO' Z\
-
Z{J(\
»)2
Z2
(Cohen, 1988: 547). This is shown in Example 19.8.
+ 3
(19.29)
Section 19.7
Comparing More Than Two Correlation Coefficients
393
The test for difference between correlation coefficients is most powerful for = rn, and the proceding estimation is for a sample size of n in both samples. Sometimes the size of one sample is fixed and cannot be manipulated, and we then ask how large the second sample must be to achieve the desired power. If nl is fixed and n is determined by Equation 19.29, then (by considering n to be the harmonic mean of nl and n2), nl (n + 3) - 6n n2 = -'-------'--(19.30) 2nl - n - 3
nl
(Cohen, 1988: 137).*
EXAMPLE 19.8
Ho: P1
Estimating
the
Sample
Size Necessary
for
the Test of
= P2
Let us say we wish to be 90% confident of detecting a difference, ZI - Z2, as small as 0.5000 when testing He: PI = P2 at the 5% significance level. Then ,B(1) = 0.10,a(2) = 0.05, and n = 2 (1.9600
+ 1.2816)2 + 3 0.5000
= 87.1. So sample sizes of at least 88 should be used.
19.7 COMPARING MORE THAN TWO CORRELATION COEFFICIENTS If k samples have been obtained and an r has been calculated for each, we often want to conclude whether or not all samples came from populations having identical p's. If Ho: PI = P2 = '" = Pk is not rejected, then all samples might be combined and one value of r calculated to estimate the single population p. As Example 19.9 shows, the testing of this hypothesis involves transforming each r to a Z value. We may then calculate
[t.(n, - 3)Z.]'
k
X2
2: (ni
=
- 3)
zT -
i=1
(19.31)
k
2:(ni
- 3)
i=1
which may be considered to be a chi-square value with k -
1 degrees of freedom."
(a) Common Correlation Coefficient. If Ho is not rejected, then all k sample correlation coefficients are concluded to estimate a common population p. A common r (also known as a weighted mean of r) may be obtained from transforming the "If the denominator in Equation 19.30 is ~ 0, then we must either increase nl or change the desired power, significance level, or detectable difference in order to solve for n2. "[Equation 19.31 is a computational convenience for (19.31a) where
Zw
is a weighted
mean of
Z
shown in Equation
19.32.
394
Chapter 19
Simple Linear Correlation EXAMPLE 19.9 tion Coefficients
Testing a Three-Sample
Hypothesis
Concerning
Correla-
Given the following: nl = 24 rl = 0.52
n2 = 29 r: = 0.56
n3
= 32
rs
=
0.87
To test: He: HA:
1 2 3
PI
P2
=
=
P3·
All three population correlation coefficients are not equal. r,
Zi
Z2
ni
ni - 3
(ni - 3)Zi
in, - 3)zT
0.52 0.56 0.87
0.5763 0.6328 1.3331
0.3321 0.4004 1.7772
24 29 32
21 26 29
12.1023 16.4528 38.6599
6.9741 10.4104 51.5388
76
67.2150
68.9233
I
Sums:
( 67.2150f
= 68.9233 -
76
= 9.478 v=k-l=2 X6.05.2 = 5.991 Therefore, reject Ho. 0.005 < P < 0.01
[P
=
0.0087]
If Ho had not been rejected, it would have been appropriate common correlation coefficient:
to calculate the
= 67.2150 = 0.884 76
weighted mean
Z
value, k
2:(ni Zw
- 3)Zi
i=1 = k
2:(ni
(19.32) - 3)
i=1
to its corresponding r value (let's call it rw), as shown in Example 19.9. This transformation is that of Equation 19.8 and is given in Appendix Table B.19. If Ho is not rejected, we may test He: P = 0 versus HA: P -=F0 by the method attributed to Neyman
Section 19.7
Comparing More Than Two Correlation Coefficients
395
(1959) by Paul (1988): k
Lnm Z=~
(19.33)
~'
where N = 2:,7= I ru, and rejecting Ho if IZI 2: Za(2). For one-tailed testing, Ho: P :S 0 versus HA: P > 0 is rejected if Z 2: Z",( I); and H«: P 2: 0 versus HA: P < 0 is rejected if Z :S -Z",(I). For the hypotheses Ho: P = PO versus H A: P PO, the transformation of Equation 19.8 is applied to convert PO to ~o. Then (from Paul, 1988),
"*
n
L(ni-3)
Z=(zw-~o)
(19.34)
i=l
is computed and Ho is rejected if IZI 2: Z",(2). For one-tailed testing, Ho: P :S PO is rejected if Z 2: Z",(I) or H«: P 2: PO is rejected if Z :s - Z",( 1). If correlations are not independent, then they may be compared as described by Steiger (1980). (b) Overcoming Bias. Fisher (1958: 205) and Hotelling (1953) have pointed out that the z transformation is slightly biased, in that each z will be a little inflated. This minor systematic error is likely to have only negligible effects on our previous considerations, but it is inclined to have adverse effects on the testing of multisample hypotheses, for in the latter situations several values of z.. and therefore several small errors, are being summed. Such a hypothesis test and the estimation of a common correlation coefficient are most improved by correcting for bias when sample sizes are small or there are many samples in the analysis. Several corrections for bias are available. Fisher recommended subtracting r
2(n - 1) from z, whereas Hotelling determined as subtracting
that better corrections to 3z 4(n
However, Paul (1988) recommends such corrections for bias. It uses
+
Z
are available, such
r
1)
a test that performs better than one employing
(19.35) with k - 1 degrees of freedom. Example 19.10 demonstrates this test. If the multisample null hypothesis is not rejected, then p, the underlying population correlation coefficient, may be estimated by calculating Zw via Equation 19.32 and converting it to rw. As an improvement, Paul (1988) determined that n, - 1 should be used in place of n, - 3 in the latter equation if P is less than about 0.5. Similarly, to compare P to a specified value, PO, ru - 1 would be used instead of n, - 3 in Equation 19.34 if P is less than about 0.5.
396
Chapter 19
Simple Linear Correlation
EXAMPLE 19.10 The Hypothesis Correction for Bias
24(0.52 - 0.71 )2 [1 -
(0.52)(0.71)]2
+
Testing
of Example 19.9, Employing
29(0.56 - 0.71 )2
11 -
(0.56)(0.71)]2
+
32( 0.87 - 0.71 )2 [1 -
(0.87)(0.71)]2
= 2.1774 + 1.7981 + 5.6051 = 9.5806 X6.o'i.2
= 5.991
Therefore, reject Ho. 0.005 < P < 0.01 19.8
= 0.0083]
[P
MULTIPLE COMPARISONS AMONG CORRELATION COEFFICIENTS If the null hypothesis of the previous section (Ho: PI = P2 = ... = pd is rejected, it is typically of interest to determine which of the k correlation coefficients are different from which others. This can be done, again using Fisher's z transformation (Levy, 1976). In the fashion of Section 11.1 (where multiple comparisons were made among means), we can test each pair of correlation coefficients, re and rA, by a Tukey-type test, if ne = nil: q
=
ZB ZA -=-------'--'-
(19.36)
SE where SE=
1
-\In -
(19.37) 3
and n is the size of each sample. If the sizes of the two samples, A and E, are not equal, then we can use
(19.38)
The appropriate critical value for this test is qa.oo.k (from Appendix Table B.5). This test is demonstrated in Example 19.11. It is typically unnecessary in multiple comparison testing to employ the correction for bias described at the end of Section 19.7. (a) Comparing a Control Correlation Coefficient to Each Other Correlation Coefficient. The foregoing methods enable us to compare each correlation coefficient with each other coefficient. If, instead, we desire only to compare each coefficient to one
Section 19.8
Multiple Comparisons Among Correlation Coefficients
397
EXAMPLE 19.11 Tukey-Type Multiple Comparison Testing Among the Three Correlation Coefficients in Example 19.9 Samples ranked by correlation coefficient (i): Ranked correlation coefficients (ri): Ranked transformed Sample size (ni):
0.52
2 0.56
3 0.87
0.5763 24
0.6328 29
1.3331 32
1
coefficients (Zi):
Comparison
Difference
B vs. A
ZB
3 vs. 1 3 vs. 2 2 vs. 1
-
SE
ZA
1.3331 0.5763 0.6328 1.3331 0.6328 - 0.5763
qo.os.oo,3
0.7568 0.203 3.728 0.7003 0.191 3.667 0.0565 0.207 0.273
= = =
Overall conclusion: PI = P2
q
f=.
3.314 3.314 3.314
Conclusion Reject Ho: P3 = PI Reject Ho: P3 = P2 Do not reject Ho: P2 = PI
P3
particular coefficient (call it the correlation coefficient of the "control" set of data), then a procedure analogous to the Dunnett test of Section 11.3 may be employed (Huitema,1974). Let us designate the control set of data as B, and each other group of data, in turn, as A. Then we compute _
ZB
q for each A, in the same sequence standard error is
-
ZA
SE
as described
SE
=)
(19.39)
'
2
n -
in Section 11.3. The appropriate (19.40) 3
if samples A and B are of the same size, or SE
=
\j
I
1
ne
3
+
1
(19.41)
if nA f=. n p: The critical value is q'a (I) .oo,p (from Appendix Table B.6) or q'a (2) ..p (from Appendix Table B.7) for the one-tailed or two-tailed test, respectively. (b) Multiple the concepts comparisons be examined employ the Z
Contrasts Among Correlation Coefficients. Section 11.4 introduced and procedures of multiple contrasts among means; these are multiple involving groups of means. In a similar fashion, multiple contrasts may among correlation coefficients (Marascuilo, 1971: 454-455). We again transformation and calculate, for each contrast, the test statistic
2">iZi S
i = "--------'-
SE
(19.42)
398
Chapter 19
Simple Linear Correlation
where SE =
)~c~a~;
(19.43
and c; is a contrast coefficient, as described in Section 11.4. (For example, if we wishe to test the hypothesis He: (PI + P2)/2 - P3 = 0, then CI = ~,C2 = ~,andC3 =-1. The critical value for this test is (19.44)* 19.9
RANK CORRELATION
If we have data obtained from a bivariate population that is far from normal, then the correlation procedures discussed thus far are generally inapplicable. Instead, we may operate with the ranks of the measurements for each variable. Two different rank correlation methods are commonly encountered, that proposed by Spearman (1904) and that of Kendall' (1938). And, these procedures are also applicable if the data are ordinal. Example 19.12 demonstrates Spearman's rank correlation procedure. After each measurement of a variable is ranked, as done in previously described nonparametric testing procedures, Equation 19.1 can be applied to the ranks to obtain the Spearman rank correlation coefficient, rs. However, a computation that is often simpler is 11
6 Ts
xz'.v =
*Because
19.44 is preferable
= ~(k
because
i=1
1 -
vFa( I ).v.oo' it is equivalent
Sa
but Equation
=
-
2: d~ (19.46)~
to write 1 )Fa( I).(k-I
it engenders
(19.45)
).00'
less rounding
error in the calculations.
+ Charles Edward
Spearman (1863-1945), English psychologist and statistician, an important researcher on intelligence and on the statistical field known as factor analysis (Cattell, 1978). Sir Maurice George Kendall (1907-1983), English statistician contributing to many fields (Bartholomew, 1983; David and Fuller, 2007; Ord, 1984; Stuart, 1984). Kruskal (1958) noted that the early development of the method promoted by Kendall began 41 years prior to the 1938 paper. Karl Pearson observed that Sir Francis Galton considered the correlation of ranks even before developing correlation of variables (Walker. 1929: 128).
+ As
+
the sum of n ranks is n( n
L
1 )/2. Equation
19.1 may be rewritten
11
(rankofXi)(rankofYi)
-
n(n
= -r=================================================== 11 (
~1 (rank of Xi)2
-
n(n+1)2)(11 4
~I
as
1)2
4
i=1
G
+
for rank correlation
(19.47)
(rank of Yi)2
Instead of using differences between ranks of pairs of X and Y, we may use the sums of the ranks for each pair, where S, = rank of Xi + rank of Yi (Meddis, 1984: 227; Thomas, 1989): -
7n + 5 n - 1
---
(19.48)
Section 19.9
Rank Correlation
399
EXAMPLE 19.12 Spearman Rank Correlation for the Relationship Between the Scores of Ten Students on a Mathematics Aptitude Examination and a Biology Aptitude Examination
Student (i)
Mathematics examination score (Xi)
Rank of Xi
57 45 72 78 53 63 86 98 59 71
1
2 3 4 5 6 7 8 9 10
Biology examination score (Yj)
3 1 7 8 2 5 9 10 4 6
83 37 41 84 56 85 77 87 70 59
1
)0.05(2 ),10
=
0; HA: Ps
-4 0 5 0 -1 -4 3 0 -1 2
d2I
16 0 25 0 1 16 9 0 1 4
6(72) 103 - 10 0.436
= 1
(r.
7 1 2 8 3 9 6 10 5 4
di
r, = 1
n = 10
To test Ho: Ps
Rank of Yj
=
1
=
0.564
"* O.
= 0.648 (from Appendix Table B.20)
Therefore, do not reject Ho. P
=
0.10
where d, is a difference between X and Y ranks: d, = rank of Xi - rank of Yj.* The value of r., as an estimate of the population rank correlation coefficient, PI' may range from -1 to + L and it has no units; however, its value is not to be expected to be the same as the value of r that might have been calculated for the original data instead of their ranks. Appendix Table B.20 may be used to assess the significance of r., A comment following that table refers to approximating the exact probability of r., If n is greater than that provided for in that table, then r, may be used in place of r in the hypothesis *Spearman (1904) also presented a rank-correlation method, later (Spearman, 1906) called the "footrule ' coefficient. Instead of using the squares of the eli's, as r; does, this coefficient employs the absolute values of the dj's:
rr. -_
rr
1 -
3~ n
2
Idjl -
1
.
(19.4Ra)
However, typically does not range from -1 to 1;its lower limit is -0.5 if n is odd and, if n is even, it is -I when n = 2 and rapidly approaches -0.5 as n increases (Kendall, 1970: 32-33).
400
Chapter 19
Simple Linear Correlation
testing procedures of Section 19.2.* If either the Spearman or the parametric correlation analysis (Section 19.2) is applicable, the former is 9j 7T2 = 0.91 times as powerful as the latter (Daniel, 1990: 362; Hotel1ing and Pabst, 1936; Kruskal, 1958).t (a) Correction for Tied Data. If there are tied data, then they are assigned average ranks as described before (e.g., Section 8.11) and is better calculated either by Equation 19.1 applied to the ranks (Iman and Conover, 1978) or as
'I'
(r~)c-
,
(n3
_
)[(n3
-
-
n)j6
n)j6
- LdT
- Ltx
- 2Ltx][(n3
- Lty
- n)j6
(19.50)
- 2LtyJ
(Kendal1, 1962: 38; Kendall and Gibbons, 1990: 44; Thomas, 1989). Here,
"l x L (0 12' 1
=
-
L.J
where
Ii
rd
(19.51)
is the number of tied values of X in a group of ties, and
"L.J ty _L (tF -
- ti) (19.52) , 12 where ti is the number of tied V's in a group of ties; this is demonstrated in Example 19.13. If L t x and L ty are zero, then Equation 19.50 is identical to Equation 19.46. Indeed, the two equations differ appreciably only if there are numerous tied data. Computationally, it is simpler to apply Equation 19.1 to the ranks to obtain (rs)c when ties are present. (b) Other Hypotheses,
Confidence Limits, Sample Size, and Power. If n ;:::10 and z transformation may be used for Spearman coefficients, just as it was in Sections 19.2 through 19.6, for testing several additional kinds of hypotheses (including multiple comparisons), estimating power and sample size, and setting confidence limits around Ps. But in doing so it is recommended that 1.060j(n - 3) be used instead of lj(n - 3) in the variance of z (Fieller, Hartley, and Pearson, 1957, 1961). That is, Ps :s 0.9, then the Fisher
(
O'z
.060 n - 3
) s _)-
should be used for the standard error of
--I,
(19.53)
z (instead of Equation 19.11).
(c) The Kendall Rank Correlation Coefficient. In addition to some rarely encountered rank-correlation procedures (e.g., see Kruskal, 1958), the Kendall *In this discussion, r,1 will be referred to as an unbiased estimate of a population correlation coefficient, PI', although that is not strictly true (Daniel, 1990: 365; Gibbons and Chakraborti, 2003: 432; Kruskall, 1958), i"Zimmerman (l994b) presented a rank-correlation procedure that he asserted is slightly more powerful than the Spearman r,1 method. Martin Andres, Herranz Tejedor, and Silva Mato (1995) showed a relationship between the Spearman rank correlation and the Wilcoxon-Mann-Whitney test of Section 8.11. The Spearman statistic is related to the coefficient of concordance, W (Section 19.13), for two groups of ranks: W = (rs
+
1)/2.
( 19.49)
Section 19.9
Rank Correlation
401
EXAMPLE 19.13 The Spearman Rank Correlation Coefficient, Computed for the Data of Example 19.1
X
Rank of X
10.4 10.8 11.1 10.2
4 8.5 10 1.5
10.3 10.2 10.7 10.5
3 1.5 7 5 8.5 11 6 12
10.8 11.2 10.6 11.4
n = 12
t's
= 42.00
Ldf
Y
Rank ofY
7.4 7.6 7.9 7.2 7.4 7.1 7.4 7.2
5 1 5 2.5
7.8 7.7 7.8 8.3
9.5 8 9.5 12
d2I
di -1 1.5 -1 -1 -2 0.5 2 2.5 -1 3 -3.5 0
5 7 11 2.5
1 2.25 1 1
4 0.25 4 6.25 1 9 12.25 0
6~df
= 1 -
n3 - n 6(42.00)
= 1-
1716
= 1 - 0.147 = 0.853 To test Ho: Ps = 0; HA: Ps (rs
)005(2).12
* 0,
= 0.587 (from Appendix Table B.20)
Therefore, reject Ho. p
< 0.001
To employ the correction for ties (see Equation 19.50): among the X's there are two measurements ~ tX
=
3
(2
-
2)
of 10.2 em and two of 10.8 cm, so
+ (23
-
2)
=
1;
12 among the Y's there are two measurements and two at 7.8 em, so ~ty
=
tied at 7.2 cm, three at 7.4 em,
(23 - 2) + (33 - 3) + (23 - 2) = 3; 12
therefore, (rs)c
=
3
(12 - 12)/6 - 42.00 - 1 - 3 ~[(123 - 12)/6 - 2(1)][(123 - 12)/6 - 2(3)]
and the hypothesis test proceeds exactly as above.
=
242 284.0
=
D.852;
402
Chapter 19
Simple Linear Correlation
rank-correlation method is often used* (see, e.g., Daniel, 1990: 365-381; Kend and Gibbons. 1938. 1990: 3-8; Siegel and Castellan, 1988: 245-254). The sampl Kendall correlation coefficient is commonly designated as T (lowercase Greek tau, an exceptional use of a Greek letter to denote a sample statistic)." The correlation statistic T for a set of paired X and Y data is a measure of the extent to which the order of the X's differs from the order of the Y's. Its calculation will not be shown here, but this coefficient may be determined-with identical results-either from the data or from the ranks of the data. For example, these six ranks of X's and six ranks of V's are in exactly the same order: X: Y:
1 1
2
3
4
5
6
2 3 4 5 6
and T would be calculated to be 1, just as rs would be 1, for there is perfect agreement of the orders of the ranks of X's and V's. However, the following Y ranks are in the reverse sequence of the X ranks: X:
1
2
3
4
Y:
6
5
4
3
5 2
6 I
and T would be - 1,just as r~ would be. for there is an exact reversal of the relationship between the X's and Y's. And. just as with r., T will be closer to zero the further the X and Y ranks are from either perfect agreement or an exact reversal of agreement; but the values of T and r\ will not be the same except when T = - 1 or 1. The performances of the Spearman and Kendall coefficients for hypothesis testing are very similar, but the former may be a little better, especially when n is large (Chow. Miller, and Dickinson. 1974). and for a large n the Spearman measure is also easier to calculate than the Kendall. Jolliffe (1981) describes the use of the runs-up-and-down test of Section 25.8 to perform nonparametric correlation testing in situations where rs and T are ineffective. 19.10
WEIGHTED RANK CORRELATION
The rank correlation of Section 19.9 gives equal emphasis to each pair of data. There are instances, however, when our interest is predominantly in whether there is correlation among the largest (or smallest) ranks in the two populations. In such cases we should prefer a procedure that will give stronger weight to intersample agreement on which items have the smallest (or largest) ranks, and Quade and Salama (1992) refer to such a method as weighted rank correlation (a concept they introduced in Salama and Quade, 1982). In Example 19.14, a study has determined the relative importance of eight ecological factors (e.g., aspects of temperature and humidity, diversity of ground cover, abundance of each of several food sources) in the success of a particular species of bird in a particular habitat. A similar study ranked the same ecological factors for a second species in that habitat, and the desire is to ask whether the same ecological factors arc most important for both species. We want to ask whether there is a positive correlation between the factors most important to one species and the factors most important to the other species. Therefore, a one-tailed weighted correlation analysis is called for. "The idea of this correlation measure was presented as early as IR99 in a posthumous publication of the German philosopher. physicist. and psychologist Gustav Theodor Fechner (lHOl-ISH7) (KruskaL 19SH). :'It is much less frequently designated as T. t, ri, or T.
Section 19.10
Weighted Rank Correlation
403
EXAMPLE 19.14 A Top-Down Correlation Analysis, Where for Each of Two Bird Species, Eight Ecological Factors Are Weighted in Terms of Their Importance to the Successof the Birds in a Given Habitat He: HA:
The same ecological factors are most important to both species. The same ecological factors are not most important to both species. Rank
Savage number (5j)
h
Factor (i)
Species 1
Species 2
Species 1
Species 2
A B C 0
1 2 3 4
1 2 3 7
2.718 1.718 1.218 0.885
E F G H
5 6 7 8
8 6 5 4
0.635 0.435 0.268 0.125
2.718 1.718 1.218 0.268 0.125 0.435 0.635 0.885
7.388 2.952 1.484 0.237 0.079 0.189 0.170 0.111
8.002
8.002
12.610
Sum
(Si)1 (Si
n
n
= 20
rT
n
:L(S;)I(S;)2 i=1
=
:L(Si)1 i=1
cs, h
(n -
-
n
SJ)
12.610 - 8 = 0.873 8 - 2.718
= 12.610 0.005
< P < 0.01
A correlation analysis performed on the pairs of ranks would result in a Spearman rank correlation coefficient of rs = 0.548, which is not significantly different from zero. (The one-tailed probability is 0.05 < P < 0.10.) Iman (1987) and 1man and Conover (1987) propose weighting the ranks by replacing them with the sums of reciprocals known as Savage scores (Savage, 1956). For a given sample size. n, the ith Savage score is
s,
n
1
=:L ~.
(19.54)
j=i ]
Thus, for example, if n = 4, then SI = 1/1 + 1/2 + 1/3 + 1/4 = 2.083,S2 = 1/2 + 1/3 + 1/4 = 1.083,S3 = 1/3 + 1/4 = 0.583, and S4 = 1/4 = 0.250. A check on arithmetic is that 2:.7=1S, = n; for this example, n = 4 and 2.083 + 1.083 + 0.583 + 0.250 = 3.999. Table 19.1 gives Savage scores for n of 3 through 20. Scores for larger n are readily computed; but. as rounding errors will be compounded in the summation, it is wise to employ extra decimal places in such calculations. If there are tied ranks, then we may use the mean of the Savage scores for the positions of the tied data. For example. if n = 4 and ranks 2 and 3 are tied. then use (1.083 + 0.583 )/2 = 0.833 for both S2 and S3.
404
Chapter
19
Simple Linear Correlation TABLE 19.1: Savage Scores, Si, for Various n
]()
II 12 13 14 15 16 17 18 19 20
n 11 12 13 14 15 16 17 18 19 20
2
3
4
5
6
7
8
9
10
1.833 2.083 2.283 2.450 2.593 2.718 2.829 2.929 3.020 3.103 3.180 3.252 3.318 3.381 3.440 3.495 3.548 3.598
0.833 1.083 1.283 1.450 1.593 1.718 1.829 1.929 2.020 2.103 2.180 2.251 2.318 2.381 2.440 2.495 2.548 2.598
0.333 0.583 0.783 0.950 1.093 1.218 1.329 1.429 1.520 1.603 1.680 1.752 1.818 1.881 1.940 1.995 2.048 2Jl98
0.250 0.450 0.617 0.756 0.885 0.996 1.096 1.187 1.270 1.347 1.418 1.485 1.547 1.606 1.662 1.714 1.764
0.200 0.367 0.510 0.635 0.746 0.846 0.937 1.020 I Jl97 1.168 1.235 1.297 1.356 1.412 1.464 1.514
0.167 0.310 0.435 0.546 0.646 0.737 0.820 0.897 0.968 1.035 1.097 1.156 1.212 1.264 1.314
0.143 0.268 0.379 0.479 0.570 0.653 0.730 0.802 0.868 0.931 0.990 1.045 1.098 1.148
0.125 0.236 0.336 0.427 0.510 0.587 0.659 0.725 0.788 0.847 0.902 0.955 1.005
0.111 0.211 0.302 0.385 0.462 0.534 0.600 0.663 0.722 0.777 0.830 0.880
0.100 0.191 0.274 0.351 0.423 0.489 0.552 0.611 0.666 0.719 0.769
11
12
13
14
15
16
17
18
19
20
0.091 0.174 0.251 0.323 0.389 0.452 0.510 0.566 0.619 0.669
OJ)83 0.160 0.232 0.298 0.361 0.420 0.475 O.52R 0.578
0.077 0.148 0.215 0.278 0.336 0.392 0.445 0.495
0.071 0.138 0.201 0.259 0.315 0.368 0.418
0.067 0.129 0.188 0.244 0.296 0.346
OJl62 0.121 0.177 0.230 0.280
0.059 0.114 0.167 0.217
0.056 0.108 0.158
0.053 0.103
0.050
1=
3 4 5 6 7 8 9
1=
Sample Sizes, n
The Pearson correlation coefficient of Equation 19.1 may then he calculated using the Savage scores, a procedure that Iman and Conover (1985,1987) call "top-down correlation ": we shall refer to the top-down correlation cocfficien t as r i. Alternatively, if there are no ties among the ranks of either of the two samples, then n
2: ts, ) (Si)2 I
i=1
(/1 -
SJ)
- n (19.55)
where (Si) I and (Si h are the ith Savage scores in Samples 1 and 2, respectively; this is demonstrated in Example 19.14, where it is concluded that there is significant agreement between the two rankings for the most important ecological factors. (As indicated previously, if all factors were to receive equal weight in the analysis of this
Section 19.11
Correlation with Nominal-Scale Data
405
set of data, a nonsignificant Spearman rank correlation coefficient would have been calculated. )* Significance testing of rr refers to testing He: PT ::s; 0 against H /I: PI > 0 and may be effected by consulting Appendix Table B.2L which gives critical values for r t , For sample sizes greater than those appearing in this table, a one-tailed normal approximation may be employed (Iman and Conover, 1985, 1987): Z
=
rr
In-=!'
(19.56)
The top-down correlation coefficient, r j , is 1.0 when there is perfect agreement among the ranks of the two sets of data. If the ranks arc completely opposite in the two samples, then rr = -1.0 only if n = 2; it approaches -0.645 as n increases. If we wished to perform a test that was especially sensitive to agreement at the bottom, instead of the top, of the list of ranks, then the foregoing procedure would be performed by assigning the larger Savage scores to the larger ranks. If there are more than two groups of ranks. then see the procedure at the end of Section 20.16. CORRELATION WITH NOMINAL-SCALE
DATA
(a) Both Variables Are Dichotomous. Dichotomous normal-scale data are data recorded in two nominal categories (e.g., observations might be recorded as male or female, dead or alive, with or without thorns), and Chapters 23 and 24 contain discussions of several aspects of the analysis of such data. Data collected for a dichotomous variable may be presented in the form of a table with two rows and two columns (a "2 X 2 contingency table"; see Section 23.3). The data of Example 19.15, for instance, may be cast into a 2 X 2 table, as shown. We shall set up such tables by havingf andf22 be the frequencies of agreement between the two variables (where hi is the frequency in row i and column j). Many measures of association of two dichotomous variables have been suggested (e.g., Conover, 1999: Section 4.4: Everitt, 1992: Section 3.6; Gibbons and Chakraborti, 2003: Section 14.3). So-called contingency coefficients, such as (19.57) and n
+ X
2 '
( 19.57a)
n
* Procedures other than the use of Savage scores may be used to assign differential weights to the ranks to be analyzed; some give more emphasis to the lower ranks and some give less (Quade and Salama. 1l)l)2). Savage scores are recommended as an intermediate strategy. ;-A term coined by Karl Pearson (Walker. 1l)5R).
406
Chapter 19
Simple Linear Correlation
EXAMPLE 19.15 Correlation for Dichotomous Nominal-Scale Data. Data Are Collected to Determine the Degree of Association, or Correlation, Between the Presence of a Plant Disease and the Presence of a Certain Species of Insect
Presence of plant disease
Case
Presence of insect
+ +
+ +
+
+ + +
8 9 10
+
+ +
11 12
+
+
13
+
+ +
1 2
3 4 5
6 7
14 The data may be tabulated
in the following
2 X 2 contingency
table:
Plant Disease
(6)(4)
-
Insect
Present
Present Absent
6 0
4 4
10 4
Total
6
8
14
Absent
Total
(4)(0)
)(6)(8)( 10)( 4) = 0.55 Q = fllf22 fnf22
(fll (fll
- fl2hl
+ + +
fl2h
I
f22)
-
h2)
+
(6)(4) (6)(4)
-
+
(4)(0) (4)(0)
(f12 - f21) _ (6 (f12 + f21) (6
= 1.00 + 4) - (4 + 0) = 10 - 4 = + 4) + (4 + 0) 10 + 4
0.43
Section 19.11
Correlation with Nominal-Scale Data
407
°
employ the X2 statistic of Section 23.3. However, they have drawbacks, among them the lack of the desirable property of ranging between and 1. [They are indeed zero when X2 = (i.e., when there is no association between the two variables), but the coefficients can never reach 1, even if there is complete agreement between the two varia bles.] The Cramer, or phi, coefficient" (Cramer, 1946: 44),
°
(19.58)
°
does range from to 1 (as does cf>2, which may also be used as a measure of associationj." It is based upon X2 (uncorrected for continuity), as obtained from Equation 23.1, or, more readily, from Equation 23.6. Therefore, we can write (19.59) where K is the sum of the frequencies in row i and Cj is the sum of column j. This measure is preferable to Equation 19.58 because it can range from - 1 to + 1, thus expressing not only the strength of an association between variables but also the direction of the association (as does r). If cf> = 1, all the data in the contingency table lie in the upper left and lower right cells (i.e.,/12 = 121 = 0). In Example 19.15 this would mean there was complete agreement between the presence of both the disease and the insect; either both were always present or both were always absent. If III = [n = 0, all the data lie in the upper right and lower left cells of the contingency table, and cf> = -1. The measure cf> may also be considered as a correlation coefficient, for it is equivalent to the r that would be calculated by assigning a numerical value to members of one category of each variable and another numerical value to members of the second category. For example, if we replace each" +" with 0, and each" -" with 1, in Example 19.15, we would obtain (by Equation 19.1) r = 0.55.+ *Harald Cramer (1893-1985) was a distinguished Swedish mathematician (Leadbetter, 1988). This measure is commonly symbolized by the lowercase Greek phi 4>-pronounced "fy" as in "simplifyt= and is a sample statistic, not a population parameter as a Greek letter typically designates. (It should, of course, not be confused with the quantity used in estimating the power of a statistical test, which is discussed elsewhere in this book.) This measure is what Karl Pearson called "mean square contingency" (Walker, 1929: 133). 'f 4>may be used as a measure of association between rows and columns in contingency tables larger than 2 X 2, as
4>=
X ~ n(k
2 -
I)'
( 19.58a)
where k is the number of rows or the number of columns, whichever is smaller (Cramer, 1946: 443); 4>2is also known as the mean square contingency (Cramer 1946: 282). :fIf the two rows and two columns of data are arranged so C2 :::: R2 (as is the case in Example 19.15), the maximum possible 4>is 4>max =
~
C R2 l: RIC2
(19.59a)
4> = 0.55 is, in fact, 4>max for marginal totals of 10,4,6, and 8, but if the data in Example 19.15 had been.fl1 = 5, Iv: = 5, hI = I, and.f22 = 3, 4>would have been 0.23, resulting in 4>/4>1Il0X = 0.42. Some researchers have used 4>/4>max as an index of association, but Davenport and EI-Sanhurry (1991) identified a disadvantage of doing so.
408
Chapter 19
Simple Linear Correlation
The statistic cp is also preferred over the previous coefficients of this section beca it is amenable to hypothesis testing. The significance of cp (i.e., whether it indicat that an association exists in the sampled population) can be assessed by consideri the significance of the contingency table. If the frequencies are sufficiently large ( Section 23.6), the significance of X~ (chi-square with the correction for continuity may be determined. The variance of cp is given by Kendall and Stuart (1979: 572). The Yule coefficient of association (Yule 1900, 1912),*
Q = II d22 -
A2!2! ,
(19.60
+ !I 2!2I ranges from -1 (if either III or 122 is zero) to + 1 (if either !I2 or hi is zero). Th II d22
variance of Q is given by Kendall and Stuart (1979: 571). A better measure is that of lves and Gibbons (1967). It may be expressed as a correlation coefficient, rn
=
(fll + 122) - (f12 + 121) .
(19.61)
(.fll + 122) + (.f12 + hd of positive and negative values of rn (which can range from -1 to
The interpretation + 1) is just as for cp. The expression of significance of rn involves statistical testing which will be described in Chapter 24. The binomial test (Section 24.5) may be utilized, with a null hypothesis of He: p = 0.5, using cases of perfect agreement and cases of disagreement as the two categories. Alternatively, the sign test (Section 24.6), the Fisher exact test (Section 24.16), or the chi-square contingency test (Section 23.3) could be applied to the data. Tetrachoic correlation is a situation where each of two nominal-scale variables has two categories because of an artificial dichotomy (Glass and Hopkins, 1996: 136-137; Howell, 2007: 284-285; Sheskin, 2004: 997-1000).; For example, data might be collected to ask whether there is a correlation between the height of children, recorded as "tall" or "short," and their performance on an intelligence test, recorded as "high" or "low." Underlying each of these two dichotomous variables is a spectrum of measurements, and observations are placed in the categories by an arbitrary definition of tall, short, high, and low. (If there are more than two categories of one or both variables, the term polychoric correlation may be used.) Therefore, the categories of X and the categories of Y may be considered to represent ordinal scales of measurement. The tetrachoic correlation coefficient, r, is an estimate of what the correlation coefficient, r, of Section 9.1 would be if the continuous data (ratio-scale or intervalscale) were known for the underlying distributions; it ranges from -1 to 1. It is rarely encountered, largely because it is a very poor estimate (it has a large standard error and is adversely affected by non normality). The calculation of r., and its use in hypothesis testing, is discussed by Sheskin (2004: 998-1000). (b) One Variable Is Dichotomous. Point-biserial correlation is the term used for a correlation between Y, a variable measured on a continuous scale (i.e., a ratio or *Q is one of several measures of association discussed by British statistician George Udny Yule (1871-1951). He called it Q in honor of Lambert Adolphe Jacques Quetelet (1796-1874), a pioneering Belgian statistician and astronomer who was a member of more than 100 learned societies, including the American Statistical Association (of which he was the first foreign member elected after its formation in 1839); Quetelet worked on measures of association as early as 1832 (Walker, 1929: 130-131). tThis coefficient was developed by Karl Pearson (1901).
Section 19.11
Correlation with Nominal-Scale Data
409
interval scale, not an ordinal scale), and X, a variable recorded in two nominal-scale categories. * Although this type of correlation analysis has been employed largely in the behavioral sciences (e.g., Glass and Hopkins, 1996: 364-365, 368-369; Howell, 1997: 279-283,2007: 277-281; Sheskin, 2004: 990-993), it can also have application with biological data. If it is Y, instead of X, that has two nominal-scale categories, then logistic regression (Section 24.18) may be considered. Example 19.16 utilizes point-biserial correlation to express the degree to which the blood-clotting time in humans is related to the type of drug that has been administered. In this example, the data of Example 8.1 are tabulated denoting the use of one drug (drug B) by an X of 0 and the use of the other drug (drug G) by an X of l. The dichotomy may be recorded by any two numbers, with identical results, but employing 0 and 1 provides the simplest computation. Then a point-biserial correlation coefficient, Fpb, is calculated by applying Equation 19.1 or, equivalently, Equation 19.2 to the pairs of X and Y data. The sign of Fpb depends upon which category of X is designated as 0; in Example 19.16, rpb is positive, but it would have been negative if drug B had been recorded as 1 and drug Gas O. The coefficient rpb can range from -1 to 1, and it is zero when the means of the two groups of Y's are the same (i.e., Y I = Yo; see the next paragraph). A computation with equivalent results is
_ YI
Fpb -
-
Yo)
nInO
N(N
Sy
1)
-
,
(19.62)
where Yo is the mean of all no of the Y data associated with X = 0, YI is the mean of the n I Y data associated with X = 1, N = no + n I, and Sy is the standard deviation of all N values of y.t By substituting rph for rand N for n, Equations 19.3 and 19.4 may be used for a point-biserial correlation coefficient, and hypothesis testing may proceed as in Section 19.2. Hypothesis testing involving the population point-biserial correlation coefficient, Pph, yields the same results as testing for the difference between two population means (Section 8.1). If an analysis of this kind of data has been done by a t test on sample means, as in Example 8.1, then determination of the point-biserial correlation coefficient may be accomplished by
rpb
=
'VIt
2
2
t +N-2
.
(19.63)
If variable X consists of more than two nominal-scale categories and Y is a continuous variable, then the expression of association between Y and X may be called a point-polyserial correlation (Olsson, Orasgow, and Oorans, 1982), a rarely encountered analytical situation not discussed here. Biserial correlation involves a variable (Y) measured on a continuous scale and differs from point-biserial correlation in the nature of nominal-scale variable X (Glass and Hopkins, 1996: 134-136; Howell, 1997: 286-288,2007: 284-285; Sheskin, 2004: 995-997). In this type of correlation, the nominal-scale categories are artificial. For "This procedure was presented and named by Karl Pearson in 1901 (Glass and Hopkins. 1996: 133). -t Although this is a correlation, not a regression, situation, it can be noted that a regression line (calculated via Section 17.1) would run from Yo to Y I. and it would have a slope of Y I - Yo and a Y intercept of Yo.
410
Chapter 19
Simple Linear Correlation EXAMPLE 19.16
Point-Biserial
Correlation,
Using Data of Example 8.1
Hi; There is no correlation between blood-clotting time and drug. (Ho: Pph = 0) H A: There is correlation between blood-clotting time and drug. (H A: Ppb =1=0) For variable X, drug B is represented time (in minutes) for blood to clot.
X
o
= 0 and drug G by X = 1; Y is the
Y
o
8.8 8.4
o
7.9
o
8.7
no =
9.1
1
9.9
1
1
9.0 11.1
1
9.6
1 1
8.7 10.4
1
9.5
9.6
rpb
=
nl
3.2077 ~( 3.2308) (8.8969)
(= /1-
\j t005(2).11
6
=
0.5983 0.3580 13 - 2
= 7
2: Y 2: y2
2:X =7 2:X2 = 7 2:x2 = 3.2308 2:XY = 68.2 2:xy = 3.2077
o o
Therefore,
by X
2:l
N=13
=
120.70
= 1129.55 =
= 0.5983,
8.8969
I~b
= 0.3580
= 0.5983 = 2.476 0.2416
2.201
reject Ho. 0.02
< P < 0.05 [P
This is the same result as obtained two drug groups (Example 8.1).
comparing
=
0.031] the mean
clotting
times of the
example, mice in a diet experiment might be recorded as heavy or light in weight by declaring a particular weight as being the dividing point between "heavy" and "light" (and X behaves like an ordinal-scale variable). This correlation coefficient, rh, may be obtained by Equation 9.1 or 9.2, as is rpb, and will be larger than rph except that when rph is zero, rb is also zero. Because X represents an ordered measurement scale, X = 0 should be used for the smaller measurement (e.g., "light" in the previous example) and X = 1 for the larger measurement (e.g., "heavy') If there are more than two ranked categories of
Section 19.12
Intraclass Correlation
411
X, the term polyserial correlation may be applied (Olsson, Drasgow, and Dorans, 1982). However, the calculated biserial correlation coefficient is adversely affected if the distribution underlying variable X is not normal; indeed, with nonnormality, Irbl can be much greater than 1. The correlation ri. is an estimate of the correlation coefficient of Section 9.1 that would have been obtained if X were the measurements from the underlying normal distribution. The calculation of, and hypothesis testing with, this coefficient is discussed by Sheskin (2004: 995-996). INTRAClASS CORRElATION
In some correlation situations it is not possible to designate one variable as X and one as Y. Consider the data in Example 19.17, where the intent is to determine whether there is a relationship between the weights of identical twins. Although the weight data clearly exist in pairs, the placement of a member in each pair in the first or in the second column is arbitrary, in contrast to the paired-sample testing of Section 9.1, where all the data in the first column have something in common and all the data in the second column have something in common. When pairs of data occur as in Example 19.17, we may employ intraclass correlation, a concept generally approached by analysis-of-variance considerations (specifically, Model II single-factor ANOV A (Section 10.1£). Aside from assuming random sampling from a bivariate normal distribution, this procedure also assumes that the population variances are equal. If we consider each of the pairs in our example as groups in an ANOV A (i.e., k = 7), with each group containing two observations (i.e., n = 2), then we may calculate mean squares to express variability both between and within the k groups (see Section 10.1). Then the intraclass correlation coefficient is defined as _ groups MS T]
-
groups MS
error MS
+ error MS '
(19.64)
this statistic being an estimate of the population intra class correlation coefficient, PI. To test Ho: PI = 0 versus He: PI oF 0, we may utilize F
= groups MS error MS '
(19.65)
a statistic associated with groups DF and error DF for the numerator and denominator, respectively.* If the measurements are equal within each group, then error MS = 0, and n = 1 (a perfect positive correlation). If there is more variability within groups than there is between groups, then rt will be negative. The smallest it may be, however, is - 1/ (n - 1); therefore, only if n = 2 (as in Example 19.17) can rt be as small as -1. We are not limited to pairs of data (i.e., situations where n = 2) to speak of intraclass correlation. Consider, for instance, expanding the considerations of Example 19.17 into a study of weight correspondence among triplets instead of twins. Indeed, n need not even be equal for all groups. We might, for example, ask whether there is a relationship among adult weights of brothers; here, some families might "If desired,
F may be calculated
first. followed by computing
,,=
(F - l)j(F
+ 1).
(19.66)
414
Chapter 19
Simple Linear Correlation
Also, if n > 2, then (19.74) (Fisher, 1958: 219), and n k]
(Zerbe and Goldgar, 1980). Nonparametric been proposed (e.g., Rothery, 1979). 19.13
n 2 k2 - 2 2(n - 1)
+
(19.75)
measures of intraclass correlation have
CONCORDANCE CORRELATION
If the intent of collecting pairs of data is to assess reproducibility or agreement of data sets, an effective technique is that which Lin (1989, 1992, 2000) refers to as concordance correlation. For example, the staff of an analytical laboratory might wish to know whether measurements of a particular substance are the same using two different instruments, or when performed by two different technicians. Tn Example 19.18, the concentration of lead was measured in eleven specimens of brain tissue, where each specimen was analyzed by two different atomic-absorption spectrophotometers. These data are presented in Figure 19.2. If the scales of the two axes are the same, then perfect reproducibility of assay would be manifested by the data falling on a 45° line intersecting the origin of the graph (the line as shown in Figure 19.2), and concordance correlation assesses how well the data follow that 45° line. The concordance correlation coefficient, r., is _ 2~xy re - -----=----~'-------=-
~x2
+
~i
+
n(X
_
y)2'
(19.76)
This coefficient can range from -1 to +1, and its absolute value cannot be greater than the Pearson correlation coefficient, r; so it can be stated that - 1 :::;- I r I :S rc:::; I r I :::; 1; and rc = 0 only if r = O. Hypothesis testing is not recommended with rc (Lin, 1992), but a confidence interval may be obtained for the population parameter Pc of which rc is an estimate. To do so, the Fisher transformation (Equation 19.8) is applied to rc to obtain a transformed value we shall call Zc; and the standard error of Zc is obtained as
(J'
z-
(1
2~(1
(1
r(l
- rc)U -
,.n
2
(19.77)
n - 2
where U =
In(X )~x2~i
y)2
(19.78)
.
This computation is shown in Example 19.18. Furthermore, we might ask whether two concordance correlations are significantly different. For example, consider that the between-instrument reproducibility analyzed in Example 19.18 was reported for very experienced technicians, and a set of data
Section 19.13
Concordance Correlation
415
EXAMPLE 19.18 Reproducibility of Analyses of Lead Concentrations in Brain Tissue (in Micrograms of Lead per Gram of Tissue), Using Two Different Atomic-Absorption Spectrophotometers
Tissue Lead (p,glg) Spectrophotometer A Spectrophotometer (Xi) (Yj)
Tissue sample (i) 1 2 3 4
0.22 0.26 0.30 0.33
0.21 0.23 0.27 0.27
5 6 8
0.36 0.39 0.41 0.44
0.31 0.33 0.37 0.38
9 10 11
0.47 0.51 0.55
0.40 0.43 0.47
7
n
11
=
2:X = 4.24 2:X2 = 1.7418 2:x2 0.10747
2:Y 3.67 2: y2 1.2949 2:i = 0.07045
X = 0.385
Y = 0.334
=
=
=
r, =
2:x2
22:xy 2:i + n(X
+
_
2:XY = 1.5011 2:xy = 0.08648
y)2
2(0.08648)
=
r
For
rc
=
0.10747
+ 0.07045 + 11(0.385 - 0.334)2
0.8375;
rz.
0.7014
=
2: y x
= 0.9939;
)2:x22:i
= 0.8375,
Zc
0.17296 0.20653
r2
= 0.9878
= 1.213 (from Appendix Table B.18, by interpolation) -
U = In(X
-
- Y)
)2:x22:i
2
=
(3.31662)(0.051) ~(0.10747)(0.07045)
2
= 0.09914
B
416
Chapter 19
Simple Linear Correlation
2r~(1 - rc ) [; r(l - rz,)2
(1 (1 (Tzc =
2
n -
(1
0.9878)(0.7014)
(J
0.7014)(0.9878)
+ 2(0.8375)3(1
- 0.8375)(0.09914)
(0.9939)(1 (0.8375)4 (0.09914
2(0.9878)(1
- 0.70]4)
f
- 0.7014)
11 - 2
= ) 0.0291 + 0.06~77
0.002447 = 0.0245
95% confidence interval for ~c:
z, ± LI
=
ZO.05(2)(Tz,
= 1.213 ± (1.960)(0.0610)
1.093;
=
L2
= 1.213 ± 0.120
1.333.
For the 95% confidence limits for oc. the foregoing confidence limits for transformed (as with Appendix Table B.19) to L1 = 0.794;
~c
are
L2 = 0.870.
was also collected for novice analysts. In order to ask whether the measure of reproducibility (namely, rc) is different for the highly experienced and the less experienced workers, we can employ the hypothesis testing of Section 19.5. For this, we obtain rc for the data from the experienced technicians (call it rd and another rc (call it r2) for the data from the novices. Then each rc is transformed
s::
0.50
C5'
;:
E
0.40
S
..::'" ~ 0.30
~
-'
~ ~ ~ c: ~ c:
8
0.20
0.10
es " ...J
'"
o
0.10
0.20
0.30
0.40
Lead Content (/1-g1g) Using Instrument
0.50
0.60
A(X)
FIGURE 19.2: lead concentrations in brain tissue (/1-g/g), determined by two different analytical instruments. The data are from Example 19.18 and are shown with a 45° line through the origin.
Exercises
to its corresponding Equation 19.21 is
Zc
(namely, z\ and
Z2)
and the standard
417
error to be used in (19.79)
where each O"~ is obtained as the square of the O"z, in Equation 19.77. Lin (1989) has shown that this method of assessing reproducibility is superior to comparison of coefficients of variation (Section 8.8), to the paired-t test (Section 9.1), to regression (Section 17.2), to Pearson correlation (Section 19.1), and to intrac1ass correlation (Section 19.12). And he has shown the foregoing hypothesis test to be robust with n as small as 10; however (Lin and Chinchilli, 1996), the two coefficients to be compared should have come from populations with similar ranges of data. Lin (1992) also discusses the sample-size requirement for this coefficient; and Barnhart, Haber, and Song (2002) expand concordance correlation to more than two sets of data. 14 THE EFFECT OF CODING Except for the procedures in Sections 19.11 and 19.12, coding of the raw data will have no effect on the correlation coefficients presented in this chapter, on their z transformations, or on any statistical procedures regarding those coefficients and transformations. See Appendix C for information about the use of coding in Sections 19.11 and 19.12. EXERCISES .1. Measurements of serum cholesterol (mg/IOO ml) and arterial calcium deposition (mg/l 00 g dry weight of tissue) were made on 12 animals. The data are as follows: Calcium (X)
Cholesterol (Y)
59 52 42 59 24 24 40 32 63 57 36 24
298 303 233 287 236 245 265 233 286 290 264 239
19.3. Given: rl = -0.44, nl = 24,"2 = -0.40, n: = 30. (a) Test He: PI = P2 versus HA: PI =F P2. (b) If Ho in part (a) is not rejected, compute the common correlation coefficient. 19.4. Given: rl = 0.45,n I = 18, r: = 0.56,n: = 16.Test Ho: PI 2: P2 versus HA: PI < P2· 19.5. Given: rl = 0.85,nl = 24,r2 = 0.78,n2 = 32"3 = 0.86,n3 = 31. (a) Test H«: PI = P2 = P3, stating the appropriate alternate hypothesis. (b) If Ho in part (a) is not rejected, compute the common correlation coefficient. 19.6. (a) Calculate the Spearman rank correlation coefficient for the data of Exercise 19.1. (b) Test Ho: p, = 0 versus HA: Ps =I- O. 19.7. Two different laboratories evaluated the efficacy of each of seven pharmaceuticals in treating hypertension in women, ranking them as shown below. Drug
(a) (b) (c) (d)
Calculate the correlation coefficient. Calculate the coefficient of determination. Test Ho: P = 0 versus HA: P =F O. Set 95% confidence limits on the correlation coefficien t. Z. Using the data from Exercise 19.1: (a) Test Ho: p :S 0 versus HA: P > O. (b) Test Ho: P = 0.50 versus HA: P =F 0.50.
L P Pr 0 E A H
Lab 1 rank
Lab 2 rank
I
1
2 3 4 5 6 7
3 2 4
7 6 5
C HAP
T E R
20
Multiple Regression and Correlation 20.1 20.2 20.3
INTERMEDIATE COMPUTATIONAL STEPS THE MULTIPLE-REGRESSION EQUATION ANALYSIS OF VARIANCE OF MULTIPLE REGRESSION OR CORRELATION
20.4 20.5
HYPOTHESES CONCERNING PARTIAL REGRESSION COEFFICIENTS STANDARDIZED PARTIAL REGRESSION COEFFICIENTS
20.6
SElECTING INDEPENDENT
20.7 20.8 20.9
PARTIAL CORRelATION PREDICTING Y VALUES TESTING DIFFERENCE BETWEEN TWO PARTIAL REGRESSION COEFFICIENTS
VARIABLES
20.10 "DUMMY" VARIABLES 20.11 INTERACTION OF INDEPENDENT 20.12 COMPARING 20.13 20.14 20.15 20.16
VARIABLES
MULTIPLE REGRESSION EQUATIONS
MULTIPLE REGRESSION THROUGH THE ORIGIN NONLINEAR REGRESSION DESCRIPTIVE VERSUS PREDICTIVE MODelS CONCORDANCE: RANK CORRElATION AMONG
SEVERAL VARIABLES
The previous three chapters discussed the analyses of regression and correlation relationships between two variables tsinuik: regression and correlation). This chapter will extend those kinds of analyses to regressions and correlations examining the intcrrclationships among three or more variables tmultiple regression and correlation). In multiple regression, one of the variables is considered to be functionally dependent upon at least one of the others. Multiple correlation is a situation when none of the variables is deemed to he dependent on another." The computations required for multiple-regression and multiple-correlation analyses would be very arduous. and in many cases prohibitive. without the computer capability that is widely available for this task. Therefore. the mathematical operations used to obtain regression and correlation coefficients. and to perform relevant hypothesis tests. will not he emphasized here. Section 20.1 summarizes the kinds of calculations that a computer program will typically perform and present. but that information is not necessary for understanding the statistical procedures discussed in the remainder of this chapter. including the interpretation of the results of the computer's work. Though uncommon. there are cases where the dependent varia hie is recorded on a nominal scale (not on a ratio or interval scale), most often a scale with two nominal categories (c.g .. male and female. infected and not infected. successful and unsuccessful). The analyses of this chapter arc not applicable for such data. Instead. a procedure known as logistic regression (Section 24.18) may he considered. 'Much development in multiple correlation thcory began in the late nineteenth century hy several pioneers. including Karl Pearson (IH.'i7-IY36) and his colleague. George Udny Yule ( I X71- 195 I ) (Pearson 1% 7). (Pearson lirst called partia 1 regression coefficients "double regression coefficients.' and Yule later called them "net regression coefficients.") Pearson was the tirst to use the terms tnultipl« correlation, in IY()K, and multiple correlation coefficiellt. in 1914 (David. 1995).
420
Chapter 20
Multiple Regression and Correlation
Multiple regression is a major topic in theoretical and applied statisncs; only an introduction is given here, and consultation with a statistical expert is often advisable.* 20.1
INTERMEDIATE
COMPUTATIONAL
STEPS
There are certain quantities that a computer program for multiple regression and/or correlation must calculate. Although we shall not concern ourselves with the mechanics of computation, intermediate steps in the calculating procedures are indicated here so the user will not be a complete stranger to them if they appear in the computer output. Among the many different programs available for multiple regres- . sion and correlation, some do not print all the following intermediate results, or they may do so only if the user specifically asks for them to appear in the output. Consider n observations of M variables (the variables being referred to as Xl through XM: (see Example 20.1a)). If one of the M variables is considered to be dependent upon the others, then we may eventually designate that variable as Y, but the program will perform most of its computations simply considering all M variables as X's numbered 1 through M. The sum of the observations of each of the M variables is calculated as 11
11
~Xlj
~X2j
j=l
j=l
11
...
(20.1)
~XMjj=l
For simplicity, let us refrain from indexing the ~'s and assume that summations are always performed over all n sets of data. Thus, the sums of the variables could be denoted as (20.2) Sums of squares and sums of cross products are calculated just as for simple regression, or correlation, for each of the M variables. The following sums. often referred to as raw sums of squares and raw sums of cross products, may be presented in computer output in the form of a matrix, or two-dimensional array: ~Xf
~X1X2
~X1X3
~X1XM
~X2Xl
~xi
~X2X3
~X2XM
~X3Xl
~X3X2
~X5
~X3XM
(20.3)
As ~ XiXk = ~ XkXi. this matrix is said to be symmetrical about the diagonal running from upper left to lower right.' Therefore, this array, and those that follow, "Greater discussion of multiple regression and correlation, often with explanation or the underlying mathematical procedures and alternate methods. can be found in many texts, such as Birkes and Dodge (1993); Chatterjee and Hadi (2006); Draper and Smith (1998); Glantz and Slinker (2001); Hair et al. (2006: Chapter 4); Howell (2007: Chapter 15); Kutner, Nachtshcim. and Neter (2004); Mickey, Dunn, and Clark (2004); Montgomery, Peck. and Vining (2006); Pedhazur (1997); Seber and Lee (2003); Tabaehnik and Fidell (2001: Chapter 5); and Weisberg (2005). ';'We shall refer to the values of a pair of variables as Xi and Xk.
Section 20.1
Intermediate Computational
EXAMPLE 20.1a The n x M Data Matrix Regression or Correlation (n = 33; M = 5) Variable
1 ("C)
]
1 2 3 4 5 6 7 8 9
6 I
-2 11
-1 2 5 1 1
3 11 9 5 -3
]()
11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 are sometimes
I
H -2
.,'"'
6 10 4 5 5 3 8 8 6 6 3 5 1
3 (mm)
9.9 9.3 9.4 9.1 6.9 9.3 7.9 7.4 7.3 RR 9.H 10.5 9.1 10.1 7.2 11.7 R7 7.6 8.6 10.9 7.6 7.3 9.2 7.0 7.2
5.7 6.4 5.7 6.1 6.0 5.7 5.9 6.2 5.5 5.2 5.7 6.1 6.4 5.5 5.5 6.0 5.5 6.2 5.9 5.6 5.H 5.8 5.2 6.0 5.5 6.4 6.2 5.4 5.4 6.2 6.8 6.2 6.4
7.0
8.8 10.1 12.1 7.7 7.8 11.5 10.4
8
10
presented
2 (em)
as a half-matrix.
LXr LX2X LX:\X j
j
for
a Hypothetical
Steps
421
Multiple
(i)
4 (min)
5 (ml)
1.6 3.0 3.4 3.4 3.0 4.4 2.2 2.2 1.9 0.2 4.2 2.4 3.4 3.0 0.2 3.9 2.2 4.4
2.12 3.39 3.61 1.72 I.HO 3.21 2.59 3.25 2.H6 2.32 1.57 1.50 2.69 4.06 1.98 2.29 3.55 3.31 1.83 1.69 2.42 2.98 1.84 2.48 2.83 2.41 1.78 2.22 2.72 2.36 2.81 1.64 1.82
0.2
2.4 2.4 4.4 1.6 1.9 1.6 4.1 1.9 2.2 4.1 1.6 2.4 1.9 2.2
such as
Lxi LX:\X2 Lxj
(20.4)
422
Chapter 20
Multiple Regression and Correlation
If a raw sum of squares, ~ Xl, is reduced by (~Xi)2 / n, we have a sum of squares that has previously (Section 17.2) been symbolized as x2, referring to 2:2:(Xij - Xi)2. Similarly, a raw sum of cross products, 2:XiXk, if diminished
2:
by2:Xi2:Xk/n,yields2:xiXk,whichrepresents2:(Xij - Xi)(Xkj - Xk)·These quantities are known as corrected sums of squares and corrected sums of crossproducts, respectively, and they may be presented as the following matrix: 2:xT
2:XIX2
2:XIX3
2:XIXM
2: X2XI 2: X3X\
2: x~ 2: X2X 2: X3X2 2: x~
2: X2XM 2: X3XM
2:XMXI
2:XMX2
2:x~.
3
2:XMX3
(20.5)
From Matrix 20.5, it is simple to calculate a matrix of simple correlation coefficients, for rik (representing the correlation between variables i and k) = ~ xixi. / )~ x~ ~x~ (Equation
19.1):
ru
rl2
r21
r22 r32
r31
rJ3 r23 r33
riM r2M r3M (20.6)
rMI
rM2
rM3
rMM·
Each clement in the diagonal of this matrix (i.e., rii) is equal to 1.0, for there will always be a perfect positive correlation between a variable and itself (see Example 20.1b). EXAMPLE 20.1b A Matrix of Simple Correlation Coefficients, as It Might Appear as Computer Output (from the Data of Example 20.1a) 2
3
1 1.00000 0.32872 0.16767 2 0.32872 1.00000 - 0.14550 3 0.16767 - 0.14550 1.00000 4 0.05191 0.18033 0.24134 5 -0.73081 -0.21204 -0.05541
4
5
0.05191 -0.73081 0.18033 -0.21204 0.24134 - 0.05541 1.00000 0.31267 0.31267 1.00000
The final major manipulation necessary before the important regression or correlation statistics of the following sections can be obtained is the computation of the inverse of a matrix. The process of inverting a matrix will not be explained here; it is to two-dimensional algebra what taking the reciprocal is to ordinary, onedimensional algebra.* While inverting a matrix of moderate size is too cumbersome to be performed easily by hand, it may be readily accomplished by computer. A "The plural of matrix is matrices. As a shorthand notation, statisticians may refer to an entire matrix by a boldface letter, and the inverse of the matrix by that letter's reciprocal. So, Matrices
The Multiple-Regression
Section 20.2
Equation
423
multiple-regression or correlation program may invert the corrected sum of squares and crossproducts matrix, Matrix 20.5, resulting in a symmetrical matrix symbolized CII
CI2
clJ
CIM
C2I
C22
C23
ClM
(20.7)
CMI
CM2
CM3
CMM·
Or the correlation matrix, Matrix 20.6, may be inverted, yielding a different array of values, which we may designate di, d21 d31
dv:
dl3
dIM
d22
d23
d2M
d32
d:13
d:w (20.8)
dMI
dM2
dM3
dMM.
Computer routines might compute either Matrix 20.7 or 20.8; the choice is unimportant because the two are interconvertible: cik
=
(20.9)
dik
12: 722 x
x(
or, equivalently, (20.10) From manipulations of these types of arrays, a computer program can derive the sample statistics and components of analysis of variance described in the following sections. If partial correlation coefficients are desired (Section 20.7), the matrix inversion takes place as shown. If partial regression analysis is desired (Sections 20.2-20.4), then inversion is performed only on the M - 1 rows and M - 1 columns corresponding to the independent variables in either Matrix 20.5 or 20.6. THE MULTIPLE-REGRESSION
EQUATION
Recall, from Section 17.2, that a simple linear regression for a population variables is the relationship Yi
= (\' +
f3Xi.
of paired (17.1)
In this relationship, Y and X represent the dependent and independent variables, respectively; f3 is the regression coefficient in the sampled population; and (\' (the Y intercept) is the predicted value of Y in the population when X is zero. And the subscript i in this equation indicates the ith pair of X and Y data in the sample. In some situations, however, Y may be considered dependent upon more than one variable. Thus, (20.11 ) 20.3,20.5, and 20.6 might be referred to by the symbols X, x, and r, respectively; and Matrices 20.7 and 20.R could be written, respectively, as c = x-I and d = r-I. David (2006) gives A. C. Aitken primary credit for introducing matrix algebra into statistics in 1931.
424
Chapter 20
Multiple Regression and Correlation
may be proposed. implying that one variable (Y) is linearly dependent upon a second variable (XI) and that Y is also linearly dependent upon a third variable (X2). Here.: denotes the ith independent varia blc. and Xi; specifics the /t h obscrvat ion of variable i. In this particular multiple regression model. we have one dependent variable and two independent variables.' The two population parameters f31 and {32are termed portia! regression coejficients; {31 expresses how much Y would change for a unit change in XI, if X2 were held constant. It is sometimes said that {31 is a measure of the relationship of Y to XI after "controlling for" X2: that is. it is a measure of the extent to which Y is related to XI after removing the effect of X2. Similarly. {32describes the rate of change of Y as X2 changes. with XI being held constant. {31and {32are called partial regression coefficients. then. because each expresses only part of the dependence relationship. The Y intercept. a. is the value of Y when both XI and X2 are zero. Whereas Equation 17.1 mathematically represents a line (which may be presented on a two-dimensional graph), Equation 20.11 defines a plane (which may be plotted on a three-dimensional graph). ;\ regression with 171 independent variables defines an /11dimensional surface. sometimes referred to as a "response surface" or "hyperplane." The population data whose relationship is described by Equation 20.11 will probably not all lie exactly on a plane, so this equation may be expressed as (20.12)
the "residual." or "error." is the amount by which Y; differs from what is predicted by Cy + {3IXlj + {3?X2;. where the sum of all f'S is zero. the f's arc assumed to be normally distributed. and each partial regression coefficient. {3i. estimates the change of Y in the population when there is a change of one unit (c.g., a change of I centimeter. I minute. or I milliliter) in Xi and no change in the other X's. If we sample the population containing the three varia hies (Y. XI. and X2) in Equation 20.11. we can compute sample statistics to estimate the population parameters in the model. The multiple-regression function derived from a sample of data would he fj.
A
Y;
= ({
+
hlX1j
+ 1)2X~;.
(20.13)
The sample statistics a. b I. and tr: are estimates of the population parameters IT. {31, and {32. respectively. where each partial regression coefficient hi is the expected change in Y in the population for a change of one unit in Xi if all of the other 11/ - I independent variables are held constant. and ({ is the expected population value or Y when each Xi is zero. (Often. the sample Y intercept. a, is represented by hlJ and the population Y intercept is represented as {31J instead of ex.) Theoretically, in multiple-regression analyses there is no limit to /II. the number of independent variables (X;) that can be proposed as influencing the dependent variable (Y). as long as II 2': 111 + 2. (There will be computational limitations. however.) The general population model. of which Equation 2!l.12 is the special case Ior III = 2. is: (20.14)
"Dependence in a regression context refers to murhcm.u ical. not necessarily biological, dcpcndeuce. Sometimes the independent varia hies arc called "predictor" or ··regrcssor" or "cxplanutory" or "exogenous" variables. and the dependent variable lll,iY he referred to as the "response" or "criterion" or "endogenous" variable. 'This equation reflects t hat mult iplc regression is a special case of what mathematical call the g('//('ml linear model. Multiple correlation. simple regression and correlation. .......................
\.' 2. is to obtain an equation with which to predict the population value of Y at a specified X. Polynomial regression is discussed in greater detail in Cohen ct al. (2003: Section 6.2), von Eye and Schuster (1998: Chapter 7). and some of the books noted in the introduction to ChaptcrZtl (e.g., Draper and Smith, 1998: Chapter 12; Glantz and Slinker. 2001: 91-96; Kutner. Nachtsheim, and Netcr. 2004: Section 8.1). 21.1
POLYNOMIAL
CURVE
FITTING
A polynomial equation may be analyzed by submitting values of Y. X. Xl. X3. and so on to multiple regression computer programs. * There are also computer programs "Serious rounding errors can readily arise when dealing with powers of Xi. and these problems can often be reduced by coding (see Appendix C). A commonly recommended coding is to subtract X (i.c .. to use Xi - X in place of Xi); this is known as centering the data (e.g .. Cohen ct al., 2D03: Section 0.2.3; Ryan. 1l)l)7: Sections 3.2.4 and 4.2.1). Coding. such as described in Appendix C. should be attempted with rounding-error in polynomial regression.
Section 21.1 EXAMPLE 21.1
Stepwise Polynomial
Polynomial Curve Fitting
459
Regression
The following shows the results of a polynomial-regression analysis, by forward addition of terms, of data collected from a river, where X is the distance from the mouth of the river (in kilometers) and Y is the concentration of iron in the water (in micrograms per liter). X (km)
Y (J-Lg/L)
1.22 1.34 1.51 1.66 1.72
40.9 41.8 42.4 43.0 43.4
1.93 2.14 2.39 2.51 2.78 2.97 3.17 3.32 3.50 3.53
43.9 44.3 44.7 45.0 45.1 45.4 46.2 47.0 48.6 49.0
3.85 3.95 4.11 4.18
49.7 50.0 50.8 51.1
n = 19 First, a ~near regresSi~n is fit to the data a - 37.389,
b - 3.1269,
and
To test H«: f3 = 0 against HA: f3
~1
Sh -
"* O,t
= 1), result~. gin 0.15099.
= !!.- = 20.7 9, with
11
= 17.
Sh
As
= 2.110, Ho is rejected.
to.05(2).17
Then, a quadratic resulting in
(second-power) regression / a 40.302, b, = 0.66658, b: = 0.45397, /
i=
is fit to the data 0.91352
Sbl
=
Sh2
= 0.16688.
"*
To test Ho: f32 = 0 against lJA: f32 0, t = 2.720, with As to.05(2).16 = 2.120, Hi, is rejected. Then, a cubic (third-power) regression is fit to the data (m a = 32.767,
= 10.411, b: = -3.3868, b3 = 0.47011, bl
Sb1 Sb2 Sb,
(m
11
= 16.
=
3), resulting in
= 3.9030 = 1.5136 = 0.18442.
2),
460
Chapter 21
Polynomial Regression
°
To test He: f33 = against HA: f33 ::j:. 0, t = 2.549, with u = 15. As (0.05(2),15 = 2.131, Ho is rejected. Then, a quartic (fourth-power) regression is fit to the data (m = 4), resulting in a
To test Ho: f34 As to.05(2).14
= 6.9265,
°
b)
= 55.835,
b2
=
h= b4 =
-31.487, 7.7625, -0.67507,
Sb1 Sh2 Sb3 Sb4
= 12.495 = 7.6054 = 1.9573
= 0.18076.
= against HA: f34 ::j:. O,t = 3.735, with = 2.145, Hi, is rejected.
u
= 14.
Then, a quintic (fifth-power) regression is fit to the data (m = 5), resulting in a
= 36.239,
°
b) = -9.1615, b2 = 23.387, b3 = -14.346, b4 = 3.5936, bs = -0.31740,
= = sb, = Sb~ = si, = Sb,
Sh2
49.564 41.238 16.456 3.1609 0.23467.
To test He: f3s = against HA: f35 ::j:. 0, t = 1.353, with v = 13. As (005(2),13 = 2.160, do not reject Ho. Therefore, it appears that a quartic polynomial is an appropriate regression function for the data. But to be more confident, we add one more term beyond the quintic to the model (i.e., a sex tic, or sixth-power, polynomial regression is fit to the data; m = 6), resulting in a
= 157.88,
b:
= -330.98, = 364.04,
h
=
b,
b4
To test He: f36 As to.05(2),12
°
bs b6
-199.36,
= 58.113, = - 8.6070, =
0:50964,
= 192.28 = 201.29 Sh, = 108.40 Sh~ = 31.759 Sbs = 4.8130 Shr, = 0.29560. Sb,
Sh2
= against HA: f36 ::j:. O,t = 1.724, with = 2.179, do not reject H().
v
= 12.
In concluding that the quartic regression is a desirable fit to the data, we have = 6.9265 + 55.835X - 31.487X2 + 7.7625X3 - 0.67507X4. See Figure 21.1 for graphical presentation of the preceding polynomial equations.
Y
that will perform polynomial regression with the input of only Y and X data (with the program calculating the powers of X instead of the user having to submit them as computer input). The power, m, for fitting a polynomial to the data may be no greater than n - 1*; but m's larger than 4 or 5 are very seldom warranted. The appropriate maximum m may be determined in one of two ways. One is the backward-elimination multiple-regression procedure of Section 20.6b. This would involve beginning with the highest-order term (the term with the largest m) in which
* If In = n - 1, the curve will fit perfectly to the data (i.e., R2 = 1). For example, it can be observed that for two data (n = 2), a linear regression line (m = 1) will pass perfectly through the two data points; for n = 3, the quadratic curve from a second-order polynomial regression (m = 2) will fit perfectly through the three data points; and so on.
Section 21.1
Polynomial Curve Fitting
461
we have any interest. But, except for occasional second- or third-order equations, this m is difficult to specify meaningfully before the analysis.
The other procedure, which is more commonly used, is that of f(~rward-selection multiple regression (Section 20.6c). A simple linear regression (Yi = a + bXi) is fit to the data as in Figure 21.1 a. Then a second-degree polynomial (known as a quadratic equation, "Vi = a + b i X, + b2X;) is fit, as shown in Figure 21.1b. The next step would be to fit a third-degree polynomial (called a cubic equation, = a + b, Xi + b2X; + b3XI), and the stepwise process of adding terms could continue beyond that. But at each step we ask whether adding the last term significantly improved the polynomial-regression equation. This "improvement" may be assessed by the t test for Ho: f3j = 0 (Section 20.4), where bj, the sample estimate of {3j, is the partial-regression coefficient in the last term added.* At each step of adding a term, rejection of H«: f3 = 0 for the last term added indicates that the term significantly improves the model; and it is recommended practice that, at each step, each previous (i.e., lower-order) term is retained even if its b is no longer significant. If the Ho is not rejected, then the final model might be expressed without the last term, as the equation assumed to appropriately describe the mathematical relationship between Y and X. But, as done in Example 21.1, some would advise carrying the analysis one or two terms beyond the point where the preceding Ho is not rejected, to reduce the possibility that significant terms are being neglected inadvertently. For example, it is possible to not reject Ho: f33 = 0, but by testing further to reject He: f34 = O. A polynomial regression may be fit through the origin using the considerations of Section 20.13. After arriving at a final equation in a polynomial regression analysis, it may be desired to predict values of Y at a given value of X. This can be done by the procedures of Section 20.8, by which the precision of a predicted Y (expressed by a standard error or confidence interval) may also be computed. Indeed, prediction is often the primary goal of a polynomial regression (see Section 20.15) and biological interpretation is generally difficult, especially for m > 2. It is very dangerous to extrapolate by predicting Y's beyond the range of the observed X's, and this is even more unwise than in the case of simple regression or other multiple regression. It should also be noted that use of polynomial regression can be problematic, especially for m larger than 2, because Xi is correlated with powers of Xi (i.e., with X;, and so on), so the analysis may be very adversely affected by multicollinearity (Section 20.4a). The concept of polynomial regression may be extended to the study of relationships of Y to more than one independent variable. For example, equations such as these may be analyzed by considering them to be multiple regt~ssions:
Yi
A
Xl,
A
Y
=
A
Y
=
+ hXj a + bjXl
a
+ b2Xr + b:,x2 + b4XjX2 + b2Xr + b3X2 + b4xi + bSX1X2.
"This hypothesis may also be tested by F
= (Regression SS for model of degree
m) - (Regression SS for model of degree m Residual MS for the model of degree m
I) (21.4 )
with a numerator OF of 1 and a denominator OF that is the residual OF for the m-degree model, and this gives results the same as from the t test.
462
Chapter 21
Polynomial Regression
50
50
::!
....l
ef)
CIj
:l. c:
C
2
:l. c c:
45
o 45
.:::
~. Distance.
in km
Distance.
50
50
....l
~
::!
:l. c:
:l. c
OJ)
C
2
in km
(b)
(a)
OJ>
C
45
2
40~-~---L1__~ __LI__ ~ __ LI~ 100 200 300 400 Distance.
45
40~~--~~--~----~~ 100 200 300
in km
Distance,
(c)
400
in km
(d)
50 !:::I CI:.
:l.
C
C
2
45
40L-.-J.._-'-_.l...-----'._--'-_L-_ 100 200 400 300 Distance.
in km
(e) FIGURE 21.1: 19
data
of
Y
0.66658X
+
polynomial
regression
Example 21.1.
models.
(a) Linear:
Y
Y
Each of the following =
37.389
regressions
+
3.1269X. - 3.3868X2
(b)
is fit to the
Quadratic:
Y
=
OA5397X2. (c) Cubic: = 32.767 + 10A11X + OA7011X3 (d) Quartic: = 6.9265 + 55.835X - 31.487X2 + 7.7625X3 - O.67507X4 (e) Quintic = 36.239 - 9.1615X + 23.387X2 - 14.346X3 + 3.5936X4 - O.31740XS The stepwise analysis ofExample 21.1 concludes that the quartic equation provides the appropriate fit; that is, the quintic expression does not provide a significant improvement in fit over the quartic.
40.302
+
Fitting
points
Y
Section 21.2
Quadratic Regression
463
In these examples, the term XI X2 represents interaction between the two independent variables. Because there is more than one independent variable, there is no clear sequence of adding one term at a time in a forward-selection procedure, and some other method (such as in Section 20.6e) would have to be employed to strive for the best set of terms to compose the multiple-regression model. QUADRATIC REGRESSION
The most common polynomial regression is the second-order,
or quadratic, regression: (21.5)
with three population parameters, 0', {31, and {32, to be estimated by three regression statistics, a, bi, and b2, respectively, in the quadratic equation (21.6) The geometric shape of the curve represented by Equation 21.6 is a parabola. An example of a quadratic regression line is shown in Figure 21.2. If bz is negative as shown in Figure 21.2, the parabola will be concave downward. If b: is positive (as shown in Figure 21.1 b), the curve will be concave upward. Therefore, one-tailed hypotheses may be desired: Rejection of Hi: {32 ::::: 0 would conclude a parabolic relationship in the population that is concave downward ({32 < 0), and rejecting H«: {32 :5 0 would indicate the curve is concave upward in the population ({32 > 0). (a) Maximum and Minimum Values of Vi. A common interest in polynomial regression analysis, especially where m = 2 (quadratic), is the determination of a maximum or minimum value of Yi (Bliss 1970: Section 14.4; Studier, Dapson, and Bigelow, 1975). A maximum value of Yi is defined as one that is greater than those Yi's that are close to it; and a minimum Yi is one that is less than the nearby Vi's. If, in a quadratic regression (Equation 21.6), the coefficient b2 is negative, then there will be a maximum, as shown in Figure 21.2. If b2 is positive, there will be a minimum (as is implied in Figure 21.1 b). It may be desired to determine what the maximum or minimum value of Yi is and what the corresponding value of Xi is. The maximum or minimum of a quadratic equation is at the following value of the independent variable: =b, (21.7) Xo= -2b2 A
A
Placing Xo in the quadratic equation (Equation 21.6), we find that Yo
=
(21.S)
a
Thus, in Figure 21.2, the maximum is at A
Xo=
-17.769
-----
1.15 hr,
2( -7.74286)
at which Yo
= 1.39
(17 .769 )2 4( - 7.74286)
11.58 mg/IOO ml.
464
Chapter 21
Polynomial Regression 12 11
IO E
~
9
0
Oil
8
E .S
7
>-' C
S?
5
~ 5o
4
0
3
t:
U
/
6
•
2
o
0.25 0.50 0.75
i.oo
1.25 l.50 1.75 2.00
Time, X, in hr FIGURE 21.2: Quadratic
fit to eight data points resulting in the equation
Vi
1.39 + 17.769Xi -
7.74286Xr.
A confidence interval for a maximum procedures of Section 20.8.
or minimum
Yo may be computed
by the
EXERCISES 21.1. The following measurements are the concentrations of leaf stomata (in numbers of stomata per square millimeter) and the heights of leaves above the ground (in centimeters). Subject the data to a polynomial regression analysis by stepwise addition of terms. Y (number/mm") 4.5 4.4 4.6 4.7 4.5 4.4 4.5 4.2 4.4 4.2 3.8
X
(ern) 21.4 21.7 22.3 22.9 23.2 23.8 24.8 25.4 25.9 27.2 27.4
3.4 3.1 3.2 3.0
28.0 28.9 29.2 29.8
21.2. Consider the following data, where X is temperature (in degrees Celsius) and Y is the concentration of a mineral in insect hemolymph (in millimoles per liter). X
Y
(0C)
(mmole/L)
3.0 5.0 8.0 14.0 21.0 25.0 28.0
2.8 4.9 6.7 7.6 7.2 6.1 4.7
Exercises (a) Fit a quadratic equation to these data. (b) Test for significance of the quadratic ter~. (c)
Estimate the mean population value of Yi at Xi = 1O.O°C and compute the 95% confidence interval for the estimate.
465
(d) Determine the values of X and Y at which the quadratic function is maximum.
C HAP
T E R
22
Testing for Goodness of Fit 22.1 22.2 22.3 22.4 22.5 22.6 22.7 22.8
CHI-SQUARE GOODNESS OF FIT FOR TWO CATEGORIES CHI-SQUARE CORRECTION FOR CONTINUITY CHI-SQUARE GOODNESS OF FIT FOR MORE THAN TWO CATEGORIES SUBDIVIDING CHI-SQUARE GOODNESS OF FIT CHI-SQUARE GOODNESS OF FIT WITH SMAll FREQUENCIES HETEROGENEITY CHI-SQUARE TESTING FOR GOODNESS OF FIT THE lOG-LIKELIHOOD RATIO FOR GOODNESS OF FIT KOlMOGOROV-SMIRNOV
GOODNESS OF FIT
This chapter and the next concentrate on some statistical methods designed for use with nominal-scale data. As nominal data are counts of items or events in each of several categories, procedures for their analysis are sometimes referred to as enumeration statistical methods. This chapter deals with methods that address how well a sample of observations from a population of data conforms to the population's distribution of observations expressed by a null hypothesis. These procedures, which compare frequencies in a sample to frequencies hypothesized in the sampled population. are called goodness-of-fit tests. In testing such hypotheses, the widely used chi-square statistic" (X2) will be discussed, as will the more recently developed loglikelihood ratio introduced in Section 22.7. Goodness of fit for ordered categories (as contrasted with nominal-scale categories) is addressed by the Kolrnogorov-Smirnov test of Section 22.8 or by the Watson test of Section 27.5.
"The symbol for chi-square is X2, where the Greek lowercase letter chi (X) is pronounced as the "ky" in "sky" (see Appendix A). Some authors use the notation X2 instead of X2. which avoids employing a Greek letter for something other than a population parameter; hut this invites confusion with the designation of X2 as the square or an observation; X: so the symbol X2 will he used in this book. Karl Pearson (llJ()O) pioneered the use of this statistic for goodness-or-fit analysis, and David (1995) credits him with the first use of the terms chi-squared and goodness offif at that time. Pearson and R. A Fisher subsequently expanded the theory and application of chi-square (Lancaster. I%\): Chapter I). Chi-squured is the term commonly preferred to chi-square by British writers. Karl Pearson (I K57 -llJ36) was a remarkable British mathematician. Walker (195K) notes that Pearson has been referred to as "the founder or the science of statistics": she called Pearson's development of statistical thinking and practice "an achievement of fantastic proportions" and said of his influence on others: "Few men in all the history of science have stimulated so many other people to cultivate and enlarge the fields they had planted.' Karl Pearson, Walter Frank, and Francis Galton founded the British journal Biometrika, which was first issued in October IlJOI and which still influences statistics in many areas. Pearson edited this journal for 35 years. succeeded for 30 years by his son, Egon Sharpe Pearson, himself a powerful contributor to statistical theory and application (see Bartle: t. IlJKI).
Section 22.1
Chi-Square Goodness of Fit for Two Categories
467
CHI-SQUARE GOODNESS OF FIT FOR TWO CATEGORIES It is often desired to obtain a sample of nominal scale data and to infer whether the population [rom which it carne conforms to a specified distribution. For example. a plant geneticist might raise 100 progeny from a cross that is hypothesized to result in a 3: I phenotypic ratio of yellow-flowered to green-flowered plants. Perhaps this sample of 100 is composed of H4 yellow-flowered plants and 16 greenflowered plants. although the hypothesis indicates 1 for several f; 'soFor such data, or for ordinal data, the test appropriate for ungrouped continuous data (using D) is conservative (e.g., Noether, 1963; Pettitt and Stephens, 1977), meaning that the testing is occurring at an a smaller-perhaps much smaller-than that stated, and the probability of a Type II error is inflated; that is, the power of the test is reduced. Therefore, use of dmax is preferred to using D for grouped or ordinal data." , Example 22.13 shows how the data of Example 22.10 would look had the investigator recorded them in 5-meter ranges of trunk heights. Note that power is lost (and Ho is not rejected) by grouping the data, and grouping should be avoided or minimized whenever possible. When applicable (that is, when the categories are ordered), the Kolmogorov: Smirnov test is more powerful than the chi-square test when n is small or when f;
EXAMPLE 22.13 Kolmogorov-Smirnov Goodness-of-Fit Test for Continuous, But Grouped, Data

The hypotheses are as in Example 22.10, with that example's data recorded in 5-meter segments of tree height (where, for example, 5-10 m denotes a height of at least 5 m but less than 10 m).

i    Trunk height (Xi)    fi    f̂i    Fi    F̂i    di
1    0-5 m                 5     3      5     3    2
2    5-10 m                5     3     10     6    4
3    10-15 m               3     3     13     9    4
4    15-20 m               1     3     14    12    2
5    20-25 m               1     3     15    15    0
n = 15; k = 5
dmax = 4
(dmax)0.05,5,15 = 5
Therefore, do not reject H0.
[0.10 < P < 0.20]
"If n is not evenly divisible by k, then, conservatively, the critical value for the nearest larger n in the table may be used. (However, that critical value might not exist in the table.) tThe first footnote of this section refers to the Kolmogorov-Smirnov two-sample test, which also yields conservative results if applied to discrete data (Noether, 1963).
values are small, and often in other cases.* Another advantage of the Kolmogorov-Smirnov test over chi-square is that it is not adversely affected by small expected frequencies (see Section 22.5).

EXERCISES
22.1. Consult Appendix Table B.1. (a) What is the probability of computing a χ² at least as large as 3.452 if DF = 2 and the null hypothesis is true? (b) What is P(χ² ≥ 8.668) if ν = 5? (c) What is χ²0.05,4? (d) What is χ²0.01,8?
22.2. Each of 126 individuals of a certain mammal species was placed in an enclosure containing equal amounts of each of six different foods. The frequency with which the animals chose each of the foods was:

Food item (i)    fi
N                13
A                26
W                31
G                14
M                28
C                14

(a) Test the hypothesis that there is no preference among the food items. (b) If the null hypothesis is rejected, ascertain which of the foods are preferred by this species.
22.3. A sample of hibernating bats consisted of 44 males and 54 females. Test the hypothesis that the hibernating population consists of equal numbers of males and females.
22.4. In attempting to determine whether there is a 1:1 sex ratio among hibernating bats, samples were taken from four different locations in a cave:

Location    Males    Females
V            44        54
O            31        40
E            12        18
M            15        16

By performing a heterogeneity chi-square analysis, determine whether the four samples may justifiably be pooled. If they may, pool them and retest the null hypothesis of equal sex frequencies.
22.5. Test the hypothesis and data of Exercise 22.2 using the log-likelihood G.
22.6. A straight line is drawn on the ground perpendicular to the shore of a body of water. Then the locations of ground arthropods of a certain species are measured along a 1-meter-wide band on either side of the line. Use the Kolmogorov-Smirnov procedure on the following data to test the null hypothesis of uniform distribution of this species from the water's edge to a distance of 10 meters inland.

Distance from water (m), with one individual observed at each distance: 0.3, 0.6, 1.0, 1.1, 1.2, 1.4, 1.6, 1.9, 2.1, 2.2, 2.4, 2.6, 2.8, 3.0, 3.1, 3.4, 4.1, 4.6, 4.7, 4.8, 4.9, 4.9, 5.3, 5.8, 6.4, 6.8, 7.5, 7.7, 8.8, 9.4

22.7. For a two-tailed Kolmogorov-Smirnov goodness-of-fit test with continuous data at the 5% level of significance, how large a sample is necessary to detect a difference as small as 0.25 between cumulative relative frequency distributions?
*A chi-square goodness-of-fit test performed on the data of Example 22.12, for H0: There is no preference among the five food categories, would disregard the order of the categories and would yield χ² = 4.250; and the log-likelihood goodness of fit would result in G = 4.173. Each of those statistics would be associated with a probability between 0.25 and 0.50.
22.8. A bird feeder is placed at each of six different heights. It is recorded which feeder was selected by each of 18 cardinals. Using the Kolmogorov-Smirnov procedure for discrete data, test the null hypothesis that each feeder height is equally desirable to cardinals.

Feeder height    Number observed
1 (lowest)        2
2                 3
3                 3
4                 4
5                 4
6 (highest)       2
CHAPTER 24

Contingency Tables

23.1 CHI-SQUARE ANALYSIS OF CONTINGENCY TABLES
23.2 VISUALIZING CONTINGENCY-TABLE DATA
23.3 2 × 2 CONTINGENCY TABLES
23.4 CONTINGENCY TABLES WITH SMALL FREQUENCIES
23.5 HETEROGENEITY TESTING OF 2 × 2 TABLES
23.6 SUBDIVIDING CONTINGENCY TABLES
23.7 THE LOG-LIKELIHOOD RATIO FOR CONTINGENCY TABLES
23.8 MULTIDIMENSIONAL CONTINGENCY TABLES
Enumeration data may be collected simultaneously for two nominal-scale variables. These data may be displayed in what is known as a contingency table, where the r rows of the table represent the r categories of one variable and the c columns indicate the c categories of the other variable; thus, there are rc "cells" in the table. (This presentation of data is also known as a cross tabulation or cross classification.) Example 23.1a is of a contingency table of two rows and four columns, and may be referred to as a 2 × 4 ("two by four") table having (2)(4) = 8 cells. A sample of 300 people has been obtained from a specified population (let's say members of an actors' professional association), and the variables tabulated are each person's sex and each person's hair color. In this 2 × 4 table, the number of people in the sample with each of the eight combinations of sex and hair color is recorded in one of the eight cells of the table. These eight data could also be recorded in a 4 × 2 contingency table, with the four hair colors appearing as rows and the two sexes as columns, and that would not change the statistical hypothesis tests or the conclusions that result from them. As with previous statistical tests, the total number of data in the sample is designated as n.
EXAMPLE 23.1 A 2 × 4 Contingency Table for Testing the Independence of Hair Color and Sex in Humans

(a) H0: Human hair color is independent of sex in the population sampled.
    HA: Human hair color is not independent of sex in the population sampled.
    α = 0.05

                          Hair color
Sex       Black    Brown    Blond    Red    Total
Male        32       43       16       9    100 (= R1)
Female      55       65       64      16    200 (= R2)
Total    87 (= C1)  108 (= C2)  80 (= C3)  25 (= C4)  300 (= n)

(b) The observed frequency, fij, in each cell is shown, with the frequency expected if H0 is true (i.e., f̂ij) in parentheses.

                            Hair color
Sex       Black         Brown         Blond          Red           Total
Male     32 (29.0000)  43 (36.0000)  16 (26.6667)   9 (8.3333)    100 (= R1)
Female   55 (58.0000)  65 (72.0000)  64 (53.3333)  16 (16.6667)   200 (= R2)
Total    87 (= C1)     108 (= C2)    80 (= C3)      25 (= C4)     300 (= n)

χ² = Σ Σ (fij − f̂ij)²/f̂ij
   = (32 − 29.0000)²/29.0000 + (43 − 36.0000)²/36.0000 + (16 − 26.6667)²/26.6667 + (9 − 8.3333)²/8.3333 + (55 − 58.0000)²/58.0000 + (65 − 72.0000)²/72.0000 + (64 − 53.3333)²/53.3333 + (16 − 16.6667)²/16.6667
   = 0.3103 + 1.3611 + 4.2667 + 0.0533 + 0.1552 + 0.6806 + 2.1333 + 0.0267
   = 8.987

ν = (r − 1)(c − 1) = (2 − 1)(4 − 1) = 3
χ²0.05,3 = 7.815
Therefore, reject H0.
0.025 < P < 0.05 [P = 0.029]
The hypotheses to be tested in this example may be stated in any of these three ways:
H0: In the sampled population, a person's hair color is independent of that person's sex (that is, a person's hair color is not associated with the person's sex), and HA: In the sampled population, a person's hair color is not independent of that person's sex (that is, a person's hair color is associated with the person's sex); or
H0: In the sampled population, the ratio of males to females is the same for people having each of the four hair colors, and HA: In the sampled population, the ratio of males to females is not the same for people having each of the four hair colors; or
H0: In the sampled population, the proportions of people with the four hair colors is the same for both sexes, and HA: In the sampled population, the proportions of people with the four hair colors is not the same for both sexes.
In order to test the stated hypotheses, the sample of data in this example could have been collected in a variety of ways:
• It could have been stipulated, in advance of collecting the data, that a specified number of males would be taken at random from all the males in the population and a specified number of females would be taken at random from all the females in the population. Then the hair color of the people in the sample would be recorded for each sex. That is what was done for Example 23.1a, where it was decided, before the data were collected, that the sample would consist of 100 males and 200 females.
• It could have been stipulated, in advance of collecting the data, that a specified number of people with each hair color would be taken at random from all persons in the population with that hair color. Then the sex of the people in the sample would be recorded for each hair color.
• It could have been stipulated, in advance of collecting the data, that a sample of n people would be taken at random from the population, without specifying how many of each sex or how many of each hair color would be in the sample. Then the sex and hair color of each person would be recorded.
For most contingency-table situations, the same statistical testing procedure applies to any one of these three methods of obtaining the sample of n people, and the same result is obtained. However, when dealing with the smallest possible contingency table, namely one with only two rows and two columns (Section 23.3), an additional sampling strategy may be encountered that calls for a different statistical procedure. Section 23.8 will introduce procedures for analyzing contingency tables of more than two dimensions, where frequencies are tabulated simultaneously for more than two variables.
23.1 CHI-SQUARE ANALYSIS OF CONTINGENCY TABLES
The most common procedure for analyzing contingency-table data uses the chi-square statistic.* Recall that for the computation of chi-square one utilizes observed and expected frequencies (and never proportions or percentages). For the goodness-of-fit analysis introduced in Section 22.1, fi denoted the frequency observed in category i of the variable under study. In a contingency table, we have two variables under consideration, and we denote an observed frequency as fij. Using the double subscript, fij refers to the frequency observed in row i and column j of the contingency table. In Example 23.1, the value in row 1, column 1 is denoted as f11, that in row 2, column 3 as f23, and so on. Thus, f11 = 32, f12 = 43, f13 = 16, ..., f23 = 64, and f24 = 16. The total frequency in row i of the table is denoted as Ri and is obtained as Ri = Σj fij (summing over the c columns). Thus, R1 = f11 + f12 + f13 + f14 = 100, which is the total number of males in the sample, and R2 = f21 + f22 + f23 + f24 = 200, which is the total number of females in the sample. The column totals, Cj, are obtained by analogous summation over the rows.
*The early development of chi-square analysis of contingency tables is credited to Karl Pearson (1904) and R. A. Fisher (1922). In 1904, Pearson was the first to use the term "contingency table" (David, 1995).
and it is in this way that the f̂ij values in Example 23.1b were obtained. Note that we can check for arithmetic errors in our calculations by observing that Ri = Σj f̂ij = Σj fij and Cj = Σi f̂ij = Σi fij. That is, the row totals of the expected frequencies equal the row totals of the observed frequencies, and the column totals of the expected frequencies equal the column totals of the observed frequencies.
Once χ² has been calculated, its significance can be ascertained from Appendix Table B.1, but to do so we must determine the degrees of freedom of the contingency table. The degrees of freedom for a chi-square calculated from contingency-table data are*
ν = (r − 1)(c − 1).   (23.5)
In Example 23.1, which is a 2 × 4 table, ν = (2 − 1)(4 − 1) = 3. The calculated statistic is 8.987 and the critical value is χ²0.05,3 = 7.815, so the null hypothesis is rejected. It is good practice to calculate expected frequencies and other intermediate results to at least four decimal places and to round to three decimal places after arriving at the value of χ². Barnett and Lewis (1994: 431-440) and Simonoff (2003: 228-234) discuss outliers in contingency-table data.
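The expected-frequency and chi-square arithmetic above is mechanical and easy to script. The following Python sketch (the variable names are ours, not the book's) reproduces the computation of f̂ij = RiCj/n and of χ² for the 2 × 4 table of Example 23.1.

```python
# Chi-square analysis of an r x c contingency table (Example 23.1's data).
observed = [
    [32, 43, 16, 9],    # males:   black, brown, blond, red
    [55, 65, 64, 16],   # females: black, brown, blond, red
]

row_totals = [sum(row) for row in observed]            # R_i
col_totals = [sum(col) for col in zip(*observed)]      # C_j
n = sum(row_totals)                                    # grand total

# Expected frequency in cell (i, j) is R_i * C_j / n.
expected = [[R * C / n for C in col_totals] for R in row_totals]

chi_square = sum(
    (observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
    for i in range(len(observed))
    for j in range(len(observed[0]))
)

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi_square:.3f} with {df} degrees of freedom")
# Prints: chi-square = 8.987 with 3 degrees of freedom
```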
(a) Comparing Proportions. Hypotheses for data in a contingency table with only two rows (or only two columns) often refer to ratios or proportions. In Example 23.1, the null hypothesis could have been stated as "In the sampled population, the sex ratio is the same for each hair color" or as "In the sampled population, the proportion of males is the same for each hair color." The comparison of two proportions is discussed in Sections 23.3b and 24.10; and the comparison of more than two proportions is further discussed in Sections 24.13-24.15.
23.2 VISUALIZING CONTINGENCY-TABLE DATA
Among the ways to present contingency-table data in graphical form is a method known as a mosaic display.† In Chapter 1, nominal-scale data were presented in a bar graph in Figure 1.2. The categories of the nominal-scale variable appear on one axis of the graph (typically the horizontal axis, as in Figure 1.2), and the number of observations is on the other axis.
*In the early days of contingency-table analysis, K. Pearson and R. A. Fisher disagreed vehemently over the appropriate degrees of freedom to employ; Fisher's (1922) view has prevailed (Agresti, 2002: 622; Savage, 1976), as has his use of the term degrees of freedom.
†The current use of mosaic displays is attributed to Hartigan and Kleiner (1981). In an historical review of rectangular presentations of data, Friendly (2002) credits the English astronomer Edmond (a.k.a. Edmund) Halley (1656-1742), famous for his 1682 observation of the comet that bears his name, with the first use of rectangular areas in the data representation for two independent variables (which, however, were not variables for a contingency table). Further developments in the visual use of rectangular areas took place in France and Germany in the early 1780s; a forerunner of mosaic graphs was introduced in 1844 by French civil engineer Charles Joseph Minard (1791-1870), and what resembled the modern mosaic presentation was first used in 1877 by German statistician Georg von Mayr (1841-1925).
[Figure 23.3 appears here: a diagram of a three-dimensional contingency table, with three species as rows, four locations as columns, and disease occurrence (with disease, without disease) as tiers.]
FIGURE 23.3: A three-dimensional contingency table, where the three rows are species, the four columns are locations, and the two tiers are occurrence of a disease. An observed frequency, fijl, will be recorded in each combination of row, column, and tier.
and Simonoff (2003: 329) discuss mosaic displays for contingency tables with more than two dimensions, and such graphical presentations can make multidimensional contingency-table data easier to visualize and interpret than if they are presented only in tabular format.
Example 23.8 presents a 2 × 2 × 2 contingency table where data (fijl) are collected as described previously, but only for two species and two locations. Note that throughout the following discussions the sum of the expected frequencies for a given row, column, or tier equals the sum of the observed frequencies for that row, column, or tier.
Test for Mutual
Independence
in a 2
x 2 x
2 Contin-
He: Disease occurrence, species, and location are all mutually independent population sampled. HA:
in the
Disease occurrence, species, and location are not all mutually independent the population sampled.
The observed frequencies
(fijl):
Disease present
Species I Species 2 Disease totals (t = 2): Location totals: (c = 2):
Disease absent
Species totals
(r
Location I
Location 2
Location I
Location 2
44
12 22
3K
10
104
20
18
88
28
in
=
2)
Grand total: n
=
192
The expected frequencies (f̂ijl):

                 Disease present           Disease absent
              Location 1  Location 2    Location 1  Location 2    Species totals
Species 1      38.8759     18.5408       31.5408     15.0425       R1 = 104
Species 2      32.8950     15.6884       26.6884     12.7283       R2 = 88
Disease totals: T1 = 106, T2 = 86
Location totals: C1 = 130, C2 = 62
Grand total: n = 192

χ² = Σ Σ Σ (fijl − f̂ijl)²/f̂ijl
   = (44 − 38.8759)²/38.8759 + (12 − 18.5408)²/18.5408 + (38 − 31.5408)²/31.5408 + (10 − 15.0425)²/15.0425 + (28 − 32.8950)²/32.8950 + (22 − 15.6884)²/15.6884 + (20 − 26.6884)²/26.6884 + (18 − 12.7283)²/12.7283
   = 0.6754 + 2.3075 + 1.3228 + 1.6903 + 0.7284 + 2.5392 + 1.6762 + 2.1834
   = 13.123

ν = rct − r − c − t + 2 = (2)(2)(2) − 2 − 2 − 2 + 2 = 4
χ²0.05,4 = 9.488
Reject H0.
0.01 < P < 0.025 [P = 0.011]
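Under the mutual-independence hypothesis each expected frequency uses only the three one-way margins, f̂ijl = RiCjTl/n² (Equation 23.10 below). The short Python sketch that follows (our own variable names, using the data of Example 23.8) makes that computation concrete.

```python
# Mutual-independence chi-square for a 2 x 2 x 2 table (Example 23.8's data).
# Index order: f[species i][location j][disease tier l]; tier 0 = present, 1 = absent.
f = [
    [[44, 38], [12, 10]],   # species 1: (loc 1: present, absent), (loc 2: present, absent)
    [[28, 20], [22, 18]],   # species 2
]

R = [sum(sum(loc) for loc in sp) for sp in f]                                 # species totals
C = [sum(sum(f[i][j]) for i in range(2)) for j in range(2)]                   # location totals
T = [sum(f[i][j][l] for i in range(2) for j in range(2)) for l in range(2)]   # disease totals
n = sum(R)

chi_square = 0.0
for i in range(2):
    for j in range(2):
        for l in range(2):
            expected = R[i] * C[j] * T[l] / n**2   # expected under mutual independence
            chi_square += (f[i][j][l] - expected) ** 2 / expected

print(f"chi-square = {chi_square:.3f}")   # 13.123, with df = rct - r - c - t + 2 = 4
```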
(a) Mutual Independence. We can test more than one null hypothesis using multidimensional contingency-table data. An overall kind of hypothesis is that which states mutual independence among all the variables. Another way of expressing this H0 is that there are no interactions (either three-way or two-way) among any of the variables. For this hypothesis, the expected frequency in row i, column j, and tier l is
f̂ijl = RiCjTl/n²,   (23.10)
where n is the total of all the frequencies in the entire contingency table. In Example 23.8 this null hypothesis would imply that presence or absence of the disease occurred independently of species and location. For three dimensions, this
null hypothesis is tested by computing
χ² = Σi Σj Σl (fijl − f̂ijl)²/f̂ijl,   (23.11)
which is a simple extension of the chi-square calculation for a two-dimensional table (by Equation 23.1). The degrees of freedom for this test are the sums of the degrees of freedom for all interactions:
ν = (r − 1)(c − 1)(t − 1) + (r − 1)(c − 1) + (r − 1)(t − 1) + (c − 1)(t − 1),   (23.12)
which is equivalent to
ν = rct − r − c − t + 2.   (23.13)
(b) Partial Independence. If the preceding null hypothesis is not rejected, then we conclude that all three variables are mutually independent and the analysis proceeds no further. If, however, H0 is rejected, then we may test further to conclude between which variables dependencies and independencies exist. For example, we may test whether one of the three variables is independent of the other two, a situation known as partial independence.* For the hypothesis of rows being independent of columns and tiers, we need total frequencies for rows and total frequencies for combinations of columns and tiers. Designating the total frequency in column j and tier l as (CT)jl, expected frequencies are calculated as
f̂ijl = Ri(CT)jl/n,   (23.14)
and Equation 23.11 is used with degrees of freedom
ν = (r − 1)(c − 1)(t − 1) + (r − 1)(c − 1) + (r − 1)(t − 1),   (23.15)
which is equivalent to
ν = rct − ct − r + 1.   (23.16)
For the null hypothesis of columns being independent of rows and tiers, we compute expected frequencies using column totals, Cj, and the totals for row and tier combinations, (RT)il:
f̂ijl = Cj(RT)il/n,   (23.17)
and
ν = rct − rt − c + 1.   (23.18)
And, for the null hypothesis of tiers being independent of rows and columns, we use tier totals, Tl, and the totals for row and column combinations, (RC)ij:
f̂ijl = Tl(RC)ij/n,   (23.19)
and
ν = rct − rc − t + 1.   (23.20)
*A different hypothesis is that of conditional independence, where two of the variables are said to be independent in each level of the third (but each may have dependence on the third). This is discussed in the references cited at the beginning of this section.
In Example 23.9, all three pairs of hypotheses for partial independence are tested. In one of the three (the last), H0 is not rejected; thus we conclude that presence of disease is independent of species and location. However, the hypothesis test of Example 23.8 concluded that all three variables are not independent of each other. Therefore, we suspect that species and location are not independent. The independence of these two variables may be tested using a two-dimensional contingency table, as described earlier, in Section 23.3, and demonstrated in Example 23.10. In the present case, the species-location interaction is tested by way of a 2 × 2 contingency table, and we conclude that these two factors are not independent (i.e., species occurrence depends on geographic location).
In general, hypotheses to be tested should be stated before the data are collected. But the hypotheses proposed in Example 23.10 were suggested after the data were examined. Therefore, instead of accepting the present conclusion of the analysis in Example 23.10, such a conclusion should be reached by testing this pair of hypotheses upon obtaining a new set of data from the population of interest and stating the hypotheses in advance of the testing.
EXAMPLE 23.9 Tests for Partial Independence in a 2 × 2 × 2 Contingency Table. As the H0 of Overall Independence Was Rejected in Example 23.8, We May Test the Following Three Pairs of Hypotheses

H0: Species is independent of location and disease.
HA: Species is not independent of location and disease.

The expected frequencies (f̂ijl):

                 Disease present           Disease absent
              Location 1  Location 2    Location 1  Location 2    Species totals
Species 1      39.0000     18.4167       31.4167     15.1667       R1 = 104
Species 2      33.0000     15.5833       26.5833     12.8333       R2 = 88
Location and disease totals: (CT)11 = 72, (CT)21 = 34, (CT)12 = 58, (CT)22 = 28
Grand total: n = 192

χ² = (44 − 39.0000)²/39.0000 + (12 − 18.4167)²/18.4167 + (38 − 31.4167)²/31.4167 + ... + (18 − 12.8333)²/12.8333
   = 0.6410 + 2.2357 + 1.3795 + 1.7601 + 0.7576 + 2.6422 + 1.6303 + 2.0801
   = 13.126

ν = rct − ct − r + 1 = (2)(2)(2) − (2)(2) − 2 + 1 = 3
χ²0.05,3 = 7.815
Reject H0. Species is not independent of location and presence of disease.
0.001 < P < 0.005 [P = 0.0044]

H0: Location is independent of species and disease.
HA: Location is not independent of species and disease.

The expected frequencies (f̂ijl):

                  Disease present          Disease absent
               Species 1   Species 2    Species 1   Species 2    Location totals
Location 1      37.9167     33.8542      32.5000     25.7292      C1 = 130
Location 2      18.0833     16.1458      15.5000     12.2708      C2 = 62
Species and disease totals: (RT)11 = 56, (RT)21 = 50, (RT)12 = 48, (RT)22 = 38
Grand total: n = 192

χ² = (44 − 37.9167)²/37.9167 + (28 − 33.8542)²/33.8542 + ... + (18 − 12.2708)²/12.2708
   = 0.9760 + 1.0123 + 0.9308 + 1.2757 + 2.0464 + 2.1226 + 1.9516 + 2.6749
   = 12.990

ν = rct − rt − c + 1 = (2)(2)(2) − (2)(2) − 2 + 1 = 3
χ²0.05,3 = 7.815
Reject H0. Location is not independent of species and presence of disease.
0.001 < P < 0.005 [P = 0.0047]

H0: Presence of disease is independent of species and location.
HA: Presence of disease is not independent of species and location.

The expected frequencies (f̂ijl):

                       Species 1                 Species 2
                 Location 1  Location 2    Location 1  Location 2    Disease totals
Disease present    45.2708     12.1458       26.5000     22.0833      T1 = 106
Disease absent     36.7292      9.8542       21.5000     17.9167      T2 = 86
Species and location totals: (RC)11 = 82, (RC)12 = 22, (RC)21 = 48, (RC)22 = 40
Grand total: n = 192

χ² = (44 − 45.2708)²/45.2708 + (12 − 12.1458)²/12.1458 + ... + (18 − 17.9167)²/17.9167
   = 0.0357 + 0.0018 + 0.0849 + 0.0003 + 0.0440 + 0.0022 + 0.1047 + 0.0004
   = 0.274

ν = rct − rc − t + 1 = (2)(2)(2) − (2)(2) − 2 + 1 = 3
χ²0.05,3 = 7.815
Do not reject H0.
0.95 < P < 0.975 [P = 0.96]

EXAMPLE 23.10 Test for Independence of Two Variables, Following Tests for Partial Dependence
of location.
H A: Species occurrence is not independent
of location.
Location 1
Location 2
Total
Species 1 Species 2
82 48
22 40
104 88
Total
130
62
192
X2
=
12.874
v=(r-l)(c-l)=1 X60SJ
= 3.841
Reject
n; P
< 0.001
[P
= (J.(10033]
(c) The Log-Likelihood Ratio. The log-likelihood ratio of Section 23.7 can be expanded to contingency tables with more than two dimensions. While some authors have chosen this procedure over chi-square testing and it is found in some statistical computer packages, others (e.g., Haber, 1984; Hosmane, 1987; Koehler, 1986; Larntz, 1978; Rudas, 1986; and Stelzl, 20(0) have concluded that X2 is preferable. With X2 in contrast to G, the probability of a Type I error is generally closer to CI'.
EXERCISES
23.1. Consider the following data for the abundance of a certain species of bird. (a) Using chi-square, test the null hypothesis that the ratio of numbers of males to females was the same in all four seasons. (b) Apply the G test to that hypothesis.

            Spring   Summer   Fall   Winter
Males        135      163      71      43
Females       77       86      40      38

23.2. The following data are frequencies of skunks found with and without rabies in two different geographic areas. (a) Using chi-square, test the null hypothesis that the incidence of rabies in skunks is the same in both areas. (b) Apply the G test to that hypothesis.

Area    With rabies   Without rabies
E           14              29
W           12              38

23.3. Data were collected as in Exercise 23.2, but with the additional tabulation of the sex of each skunk recorded, as follows. Test for mutual independence; and, if H0 is rejected, test for partial independence.

             With rabies         Without rabies
Area       Male    Female       Male    Female
E           42       84          33       51
W           55       63          48       34

23.4. A sample of 150 was obtained of men with each of three types of cancer, and the following data are the frequencies of blood types for the men. (a) Using chi-square, test the null hypothesis that, in the sampled population, the frequency distribution of the three kinds of cancer is the same for men with each of the four blood types (which is the same as testing the H0 that the frequency distribution of the four blood types is the same in men with each of the three kinds of cancer). (b) Apply the G test to the same hypothesis.

                     Blood type
Cancer type      A     B     O    AB    Total
Colon           61    65    18     6    R1 = 150
Lung            69    57    15     9    R2 = 150
Prostate        73    60    12     5    R3 = 150
CHAPTER 24

Dichotomous Variables

24.1 BINOMIAL PROBABILITIES
24.2 THE HYPERGEOMETRIC DISTRIBUTION
24.3 SAMPLING A BINOMIAL POPULATION
24.4 GOODNESS OF FIT FOR THE BINOMIAL DISTRIBUTION
24.5 THE BINOMIAL TEST AND ONE-SAMPLE TEST OF A PROPORTION
24.6 THE SIGN TEST
24.7 POWER, DETECTABLE DIFFERENCE, AND SAMPLE SIZE FOR THE BINOMIAL TEST AND SIGN TESTS
24.8 CONFIDENCE LIMITS FOR A POPULATION PROPORTION
24.9 CONFIDENCE INTERVAL FOR A POPULATION MEDIAN
24.10 TESTING FOR DIFFERENCE BETWEEN TWO PROPORTIONS
24.11 CONFIDENCE LIMITS FOR THE DIFFERENCE BETWEEN PROPORTIONS
24.12 POWER, DETECTABLE DIFFERENCE, AND SAMPLE SIZE IN TESTING DIFFERENCE BETWEEN TWO PROPORTIONS
24.13 COMPARING MORE THAN TWO PROPORTIONS
24.14 MULTIPLE COMPARISONS FOR PROPORTIONS
24.15 TRENDS AMONG PROPORTIONS
24.16 THE FISHER EXACT TEST
24.17 PAIRED-SAMPLE TESTING OF NOMINAL-SCALE DATA
24.18 LOGISTIC REGRESSION
This chapter will concentrate on nominal-scale data that come from a population with only two categories. As examples, members of a mammal litter might be classified as male or female, victims of a disease as dead or alive, trees in an area as "deciduous" or "evergreen," or progeny as color-blind or not color-blind. A nominal-scale variable having two categories is said to be dichotomous. Such variables have already been discussed in the context of goodness of fit (Chapter 22) and contingency tables (Chapter 23). The proportion of the population belonging to one of the two categories is denoted as p (here departing from the convention of using Greek letters for population parameters). Therefore, the proportion of the population belonging to the second class is 1 − p, and the notation q = 1 − p is commonly employed. For example, if 0.5 (i.e., 50%) of a population were male, then we would know that 0.5 (i.e., 1 − 0.5) of the population were female, and we could write p = 0.5 and q = 0.5; if 0.4 (i.e., 40%) of a population were male, then 0.6 (i.e., 60%) of the population were female, and we could write p = 0.4 and q = 0.6. If we took a random sample of ten from a population where p = q = 0.5, then we might expect that the sample would consist of five males and five females. However, we should not be too surprised to find such a sample consisting of six males and four females, or four males and six females, although neither of these combinations would be expected with as great a frequency as samples possessing the population sex ratio of 5:5. It would, in fact, be possible to obtain a sample of ten with nine males and one female, or even one consisting of all males, but the probabilities of such samples being encountered by random chance are relatively low.
If we were to obtain a large number of samples from the population under consideration, the frequency of samples consisting of no males, one male, two males, and so on would be described by the binomial distribution (sometimes referred to as the "Bernoulli distribution"*). Let us now examine binomial probabilities.
24.1 BINOMIAL PROBABILITIES
Consider a population consisting of two categories, where p is the proportion of individuals in one of the categories and q = 1 − p is the proportion in the other. Then the probability of selecting at random from this population a member of the first category is p, and the probability of selecting a member of the second category is q.† For example, let us say we have a population of female and male animals, in proportions of p = 0.4 and q = 0.6, respectively, and we take a random sample of two individuals from the population. The probability of the first being a female is p (i.e., 0.4) and the probability of the second being a female is also p. As the probability of two independent events both occurring is the product of the probabilities of the two separate events (Section 5.7), the probability of having two females in a sample of two is (p)(p) = p² = 0.16; the probability of the sample of two consisting of two males is (q)(q) = q² = 0.36. What is the probability of the sample of two consisting of one male and one female? This could occur by the first individual being a female and the second a male (with a probability of pq) or by the first being a male and the second a female (which would occur with a probability of qp). The probability of either of two mutually exclusive outcomes is the sum of the probabilities of each outcome (Section 5.6), so the probability of one female and one male in the sample is pq + qp = 2pq = 2(0.4)(0.6) = 0.48. Note that 0.16 + 0.36 + 0.48 = 1.00. Now consider another sample from this population, one where n = 3. The probability of all three individuals being female is ppp = p³ = (0.4)³ = 0.064. The probability of two females and one male is ppq (for a sequence of ♀♀♂) + pqp (for ♀♂♀) + qpp (for ♂♀♀), or 3p²q = 3(0.4)²(0.6) = 0.288. The probability of one female and two males is pqq (for ♀♂♂) + qpq (for ♂♀♂) + qqp (for ♂♂♀), or 3pq² = 3(0.4)(0.6)² = 0.432. And, finally, the probability of all three being males is qqq = q³ = (0.6)³ = 0.216. Note that p³ + 3p²q + 3pq² + q³ = 0.064 + 0.288 + 0.432 + 0.216 = 1.000 (meaning that there is a 100% probability, that is, it is certain, that the three animals will be in one of these four combinations of sexes). If we performed the same exercise with n = 4, we would find that the probability of four females is p⁴ = (0.4)⁴ = 0.0256, the probability of three females (and one male) is 4p³q = 4(0.4)³(0.6) = 0.1536, the probability of two females is 6p²q² = 0.3456,
*The binomial formula in the following section was first described, in 1676, by English scientist-mathematician Sir Isaac Newton (1642-1727), more than 10 years after he discovered it (Gullberg, 1997: 776). Its first proof, for positive integer exponents, was given by the Swiss mathematician Jacques (also known as Jacob, Jakob, or James) Bernoulli (1654-1705), in a 1713 posthumous publication; thus, each observed event from a binomial distribution is sometimes called a Bernoulli trial. Jacques Bernoulli's nephew, Nicholas Bernoulli (1687-1759), is given credit for editing that publication and writing a preface for it, but Hald (1984) explains that in 1713 Nicholas also presented an improvement to his uncle's binomial theorem. David (1995) attributes the first use of the term binomial distribution to G. U. Yule, in 1911.
†This assumes "sampling with replacement."
That is, each individual in the sample is taken at random from the population and then is returned to the population before the next member of the sample is selected at random. Sampling without replacement is discussed in Section 24.2. If the population is very large compared to the size of the sample. then sampling with and without replacement are indistinguishable in practice.
the probability of one female is 4pq³ = 0.3456, and the probability of no females (i.e., all four are male) is q⁴ = 0.1296. (The sum of these five terms is 1.0000, a good arithmetic check.)
If a random sample of size n is taken from a binomial population, then the probability of X individuals being in one category (and, therefore, n − X individuals in the second category) is
P(X) = (nCX) p^X q^(n−X).   (24.1)
In this equation, p^X q^(n−X) refers to the probability of a sample consisting of X items, each having a probability of p, and n − X items, each with probability q. The binomial coefficient,
nCX = n! / [X!(n − X)!],   (24.2)
is the number of ways X items of one kind can be arranged with n − X items of a second kind; in other words, it is the number of possible combinations of n items divided into one group of X items and a second group of n − X items. (See Section 5.3 for a discussion of combinations and Equation 5.3 explaining the factorial notation, "!".) Therefore, Equation 24.1 can be written as
P(X) = [n! / (X!(n − X)!)] p^X q^(n−X).   (24.3)
Thus, (nCX) p^X q^(n−X) is the Xth term in the expansion of (p + q)^n, and Table 24.1 shows this expansion for powers up through 6. Note that for any power, n, the sum of the two exponents in any term is n. Furthermore, the first term will always be p^n, the second will always contain p^(n−1)q, the third will always contain p^(n−2)q², and so on, with the last term always being q^n. The sum of all the terms in a binomial expansion will always be 1.0, for p + q = 1, and (p + q)^n = 1^n = 1. As for the coefficients of these terms in the binomial expansion, the Xth term of the nth power expansion can be calculated by Equation 24.3. Furthermore, the examination of these coefficients as shown in Table 24.2 has been deemed interesting for centuries. This arrangement is known as Pascal's triangle.* We can see from this
TABLE 24.1: Expansion of the Binomial, (p + q)^n

n    (p + q)^n
1    p + q
2    p² + 2pq + q²
3    p³ + 3p²q + 3pq² + q³
4    p⁴ + 4p³q + 6p²q² + 4pq³ + q⁴
5    p⁵ + 5p⁴q + 10p³q² + 10p²q³ + 5pq⁴ + q⁵
6    p⁶ + 6p⁵q + 15p⁴q² + 20p³q³ + 15p²q⁴ + 6pq⁵ + q⁶
*Blaise Pascal (1623-1662), French mathematician and physicist and one of the founders of probability theory (in 1654, immediately before abandoning mathematics to become a religious recluse). He had his triangular binomial coefficient derivation published in 1665, although knowledge of the triangular properties appears in Chinese writings as early as 1303 (Cajori, 1954; David, 1962; Gullberg, 1997: 141; Struik, 1967: 79). Pascal also invented (at age 19) a mechanical adding and subtracting machine which, though patented in 1649, proved too expensive to be practical to construct (Asimov, 1982: 130-131). His significant contributions to the study of fluid pressures have been honored by naming the international unit of pressure the pascal, which is a pressure of one newton per square meter (where a newton, named for Sir Isaac Newton, is the unit of force representing a one-kilogram mass accelerating at the rate of one meter per second per second). Pascal is also the name of a computer programming language developed in 1970 by Niklaus Wirth. The relationship of Pascal's triangle to nCX was first published in 1685 by the English mathematician John Wallis (1616-1703) (David, 1962: 123-124).
TABLE 24.2: Binomial Coefficients, nCX

n    X=0    1    2    3    4    5    6    7    8    9   10    Sum of coefficients
1     1     1                                                   2 = 2^1
2     1     2    1                                              4 = 2^2
3     1     3    3    1                                         8 = 2^3
4     1     4    6    4    1                                   16 = 2^4
5     1     5   10   10    5    1                              32 = 2^5
6     1     6   15   20   15    6    1                         64 = 2^6
7     1     7   21   35   35   21    7    1                   128 = 2^7
8     1     8   28   56   70   56   28    8    1              256 = 2^8
9     1     9   36   84  126  126   84   36    9    1         512 = 2^9
10    1    10   45  120  210  252  210  120   45   10    1   1024 = 2^10
triangular array that any binomial coefficient is the sum of two coefficients on the line above it, namely,
nCX = (n−1)C(X−1) + (n−1)CX.   (24.4)
This can be more readily observed if we display the triangular array as follows:

1   1
1   2   1
1   3   3   1
1   4   6   4   1
1   5  10  10   5   1

Also note that the sum of all coefficients for the nth power binomial expansion is 2^n. Appendix Table B.26a presents binomial coefficients for much larger n's and X's, and they will be found useful later in this chapter.
Thus, we can calculate probabilities of category frequencies occurring in random samples from a binomial population. If, for example, a sample of five (i.e., n = 5) is taken from a population composed of 50% males and 50% females (i.e., p = 0.5 and q = 0.5), then Example 24.1 shows how Equation 24.3 is used to determine the probability of the sample containing 0 males, 1 male, 2 males, 3 males, 4 males, and 5 males. These probabilities are found to be 0.03125, 0.15625, 0.31250, 0.31250, 0.15625, and 0.03125, respectively. This enables us to state that if we took 100 random samples of five animals each from the population, about three of the samples [i.e., (0.03125)(100) = 3.125 of them] would be expected to contain all females, about 16 [i.e., (0.15625)(100) = 15.625] to contain one male and four females, 31 [i.e., (0.31250)(100)] to consist of two males and three females, and so on. If we took 1400 random samples of five, then (0.03125)(1400) = 43.75 [i.e., about 44] of them would be expected to contain all females, and so on. Figure 24.1a shows graphically
EXAMPLE 24.1 Computing Binomial Probabilities, P(X), Where n = 5, p = 0.5, and q = 0.5 (Following Equation 24.3)

X    P(X)
0    [5!/(0!5!)](0.5⁰)(0.5⁵) = (1)(1.0)(0.03125) = 0.03125
1    [5!/(1!4!)](0.5¹)(0.5⁴) = (5)(0.5)(0.0625) = 0.15625
2    [5!/(2!3!)](0.5²)(0.5³) = (10)(0.25)(0.125) = 0.31250
3    [5!/(3!2!)](0.5³)(0.5²) = (10)(0.125)(0.25) = 0.31250
4    [5!/(4!1!)](0.5⁴)(0.5¹) = (5)(0.0625)(0.5) = 0.15625
5    [5!/(5!0!)](0.5⁵)(0.5⁰) = (1)(0.03125)(1.0) = 0.03125
[Figure 24.1 appears here: three bar graphs of P(X) against X for n = 5.]
FIGURE 24.1: The binomial distribution, for n = 5. (a) p = q = 0.5. (b) p = 0.3, q = 0.7. (c) p = 0.1, q = 0.9. These graphs were drawn utilizing the proportions given by Equation 24.1.
the binomial distribution for p = q = 0.5, for n = 5. Note, from Figure 24.1a and Example 24.1, that when p = q = 0.5 the distribution is symmetrical [i.e., P(0) = P(n), P(1) = P(n − 1), etc.], and Equation 24.3 becomes
P(X) = [n!/(X!(n − X)!)] 0.5^n.   (24.5)
Appendix Table B.26b gives binomial probabilities for n = 2 to n = 20, for p = 0.5.
Example 24.2 presents the calculation of binomial probabilities for the case where n = 5, p = 0.3, and q = 1 − 0.3 = 0.7. Thus, if we were sampling a population consisting of 30% males and 70% females, 0.16807 (i.e., 16.807%) of the samples would be expected to contain no males, 0.36015 to contain one male and four females, and so on. Figure 24.1b presents this binomial distribution graphically, whereas Figure 24.1c shows the distribution where p = 0.1 and q = 0.9.
EXAMPLE 24.2 Computing Binomial Probabilities, P(X), Where n = 5, p = 0.3, and q = 0.7 (Following Equation 24.3)

X    P(X)
0    [5!/(0!5!)](0.3⁰)(0.7⁵) = (1)(1.0)(0.16807) = 0.16807
1    [5!/(1!4!)](0.3¹)(0.7⁴) = (5)(0.3)(0.2401) = 0.36015
2    [5!/(2!3!)](0.3²)(0.7³) = (10)(0.09)(0.343) = 0.30870
3    [5!/(3!2!)](0.3³)(0.7²) = (10)(0.027)(0.49) = 0.13230
4    [5!/(4!1!)](0.3⁴)(0.7¹) = (5)(0.0081)(0.7) = 0.02835
5    [5!/(5!0!)](0.3⁵)(0.7⁰) = (1)(0.00243)(1.0) = 0.00243
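Equation 24.3 is simple to evaluate by machine for any n, p, and X. The brief Python sketch below (the helper function is our own, not from the book) reproduces the probabilities of Examples 24.1 and 24.2.

```python
from math import comb

def binomial_probability(n: int, x: int, p: float) -> float:
    """P(X = x) for a binomial distribution: (n choose x) * p^x * q^(n-x)."""
    q = 1.0 - p
    return comb(n, x) * p**x * q**(n - x)

# Example 24.1: n = 5, p = 0.5 -> 0.03125, 0.15625, 0.31250, 0.31250, 0.15625, 0.03125
print([round(binomial_probability(5, x, 0.5), 5) for x in range(6)])

# Example 24.2: n = 5, p = 0.3 -> 0.16807, 0.36015, 0.30870, 0.13230, 0.02835, 0.00243
print([round(binomial_probability(5, x, 0.3), 5) for x in range(6)])
```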
For calculating binomial probabilities for large n, it is often convenient to employ logarithms. For this reason, Appendix Table B.40, a table of logarithms of factorials, is provided. Alternatively, it is useful to note that the denominator of Equation 24.3 cancels out much of the numerator, so that it is possible to simplify the computation of P(X), especially in the tails of the distribution (i.e., for low X and for high X), as shown in Example 24.3. If p is very small, then the use of the Poisson distribution (Section 25.1) should be considered.*
The mean of a binomial distribution of counts X is
μX = np,   (24.6)
*Raff (1956) and Molenaar (1969a, 1969b) discuss several approximations to the binomial distribution, including the normal and Poisson distributions.
EXAMPLE 24.3 Computing Binomial Probabilities, P(X), with n = 400, p = 0.02, and q = 0.98

(Many calculators can operate with large powers of numbers; otherwise, logarithms may be used.)

X    P(X)
0    [n!/(0!(n − 0)!)] p⁰q^(n−0) = q^n = 0.98^400 = 0.00031
1    [n!/(1!(n − 1)!)] p¹q^(n−1) = npq^(n−1) = (400)(0.02)(0.98^399) = 0.00253
2    [n!/(2!(n − 2)!)] p²q^(n−2) = [n(n − 1)/2!] p²q^(n−2) = [(400)(399)/2](0.02²)(0.98^398) = 0.01028
3    [n!/(3!(n − 3)!)] p³q^(n−3) = [n(n − 1)(n − 2)/3!] p³q^(n−3) = [(400)(399)(398)/((3)(2))](0.02³)(0.98^397) = 0.02784
and so on.
The variance* of X is
σ²X = npq,   (24.8)
and the standard deviation of X is
σX = √(npq).   (24.9)
Thus, if we have a binomially distributed population where p (e.g., the proportion of males) = 0.5 and q (e.g., the proportion of females) = 0.5 and we take 10 samples from that population, the mean of the 10 X's (i.e., the mean number of males per sample) would be expected to be np = (10)(0.5) = 5 and the standard deviation of the 10 X's would be expected to be √(npq) = √((10)(0.5)(0.5)) = 1.58. Our concern typically is with the distribution of the expected probabilities rather than the expected X's, as will be explained in Section 24.3.
24.2 THE HYPERGEOMETRIC DISTRIBUTION
Binomial probabilities (Section 24.1) may result from what is known as "sampling with replacement." This means that after an item is randomly removed from the
*A measure of symmetry (see Section 6.5a) for a binomial distribution is
γ1 = (q − p)/√(npq),   (24.7)
so it can be seen that γ1 = 0 only when p = q = 0.5; γ1 > 0 implies a distribution skewed to the right (as in Figures 24.1b and 24.1c), and γ1 < 0 indicates a distribution skewed to the left.
population to be part of the sample, it is returned to the population before randomly selecting another item for inclusion in the sample. (This assumes that after the item is returned to the population it has the same chance of being selected again as does any other member of the population; in many biological situations, such as catching a mammal in a trap, this is not so.) Sampling with replacement ensures that the probability of selecting an item belonging to a specific one of the binomial categories remains constant. If sampling from an actual population is performed without replacement, then selecting an item from the first category reduces p and increases q (and, if the selected item were from the second category, then q would decrease and p would increase). Binomial probabilities may also arise from sampling "hypothetical" populations (introduced in Section 2.2), such as proportions of heads and tails from all possible coin tosses or of males and females in all possible fraternal twins.
Probabilities associated with sampling without replacement follow the hypergeometric distribution instead of the binomial distribution. The probability of obtaining a sample of n items from a hypergeometric distribution, where the sample consists of X items in one category and n − X items in a second category, is
P(X) = (N1CX)(N2C(n−X)) / (NTCn)   (24.10)
     = [N1! N2! n! (NT − n)!] / [X!(N1 − X)!(n − X)!(N2 − n + X)! NT!].   (24.11)
Here, NT is the total number of items in the population, N1 in category 1 and N2 in category 2. For example, we could ask what the probability is of forming a sample consisting of three women and two men by taking five people at random from a group of eight women and six men. As N1 = 8, N2 = 6, NT = 14, n = 5, and X = 3, the probability is
P(X) = (8! 6! 5! 9!) / (3! 5! 2! 4! 14!) = 0.4196.
If the population is very large compared to the size of the sample, then the result of sampling with replacement is indistinguishable from that of sampling without replacement, and the hypergeometric distribution approaches, and is approximated by, the binomial distribution. Table 24.3 compares the binomial distribution with p = 0.01 and n = 100 to three hypergeometric distributions with the same p and n but with different population sizes. It can be seen that for larger NT the hypergeometric is closer to the binomial distribution.
TABLE 24.3: The Hypergeometric Distribution Where N1, the Number of Items of One Category, Is 1% of the Population Size, NT; and the Binomial Distribution with p = 0.01; the Sample Size, n, Is 100 in Each Case

        P(X), hypergeometric:   P(X), hypergeometric:   P(X), hypergeometric:   P(X), binomial:
X       NT = 1000, N1 = 10      NT = 2000, N1 = 20      NT = 5000, N1 = 50      p = 0.01
0       0.34693                 0.35669                 0.36235                 0.36603
1       0.38937                 0.37926                 0.37347                 0.36973
2       0.19447                 0.18953                 0.18670                 0.18486
3       0.05691                 0.05918                 0.06032                 0.06100
4       0.01081                 0.01295                 0.01416                 0.01494
5       0.00139                 0.00211                 0.00258                 0.00290
6       0.00012                 0.00027                 0.00038                 0.00046
>6      0.00000                 0.00001                 0.00004                 0.00008
Total   1.00000                 1.00000                 1.00000                 1.00000

24.3 SAMPLING A BINOMIAL POPULATION
Let us consider a population of N individuals: Y individuals in one category and N − Y in the second category. Then the proportion of individuals in the first category is
p = Y/N   (24.12)
and the proportion in the second is
q = 1 − p   or   q = (N − Y)/N.   (24.13)
If a sample of n observations is taken from this population, with replacement, and X observations are in one category and n − X are in the other, then the population parameter p is estimated by the sample statistic
p̂ = X/n,   (24.14)
which is the proportion of the sample that is in the first category.* The estimate of q is
q̂ = 1 − p̂   or   q̂ = (n − X)/n,   (24.15)
which is the proportion of the sample occurring in the second category. In Example 24.4 we have X = 4 and n = 20, so p̂ = 4/20 = 0.20 and q̂ = 1 − p̂ = 0.80. If our sample of 20 were returned to the population (or if the population were extremely large), and we took another sample of 20, and repeated this multiple sampling procedure many times, we could obtain many calculations of p̂, each estimating the population parameter p. If, in the population, p = 0, then obviously any sample from that population would have p̂ = 0; and if p = 1.0, then each and
*Placing the symbol "^" above a letter is statistical convention for denoting an estimate of the quantity which that letter denotes. Thus, p̂ refers to an estimate of p, and the statistic q̂ is a sample estimate of the population parameter q. Routinely, p̂ is called "p hat" and q̂ is called "q hat."
EXAMPLE 24.4 Sampling a Binomial Population

From a population of male and female spiders, a sample of 20 is taken, which contains 4 males and 16 females.
n = 20
X = 4
By Equation 24.14,
p̂ = X/n = 4/20 = 0.20.
Therefore, we estimate that 20% of the population are males and, by Equation 24.15,
q̂ = 1 − p̂ = 1 − 0.20 = 0.80
or
q̂ = (n − X)/n = (20 − 4)/20 = 16/20 = 0.80,
so we estimate that 80% of the population are females.
The variance of the estimate p̂ (or of q̂) is, by Equation 24.17,
s²p̂ = p̂q̂/(n − 1) = (0.20)(0.80)/(20 − 1) = 0.008421.
If we consider that the sample consists of four 1's and sixteen 0's, then ΣX = 4, ΣX² = 4, and the variance of the twenty 1's and 0's is, by Equation 4.17, s² = (4 − 4²/20)/(20 − 1) = 0.168421, and the variance of the mean, by Equation 6.7, is s²X̄ = 0.168421/20 = 0.008421.
The standard error (or standard deviation) of p̂ (or of q̂) is, by Equation 24.21,
sp̂ = √0.008421 = 0.092.
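The estimates in Example 24.4 follow directly from Equations 24.14-24.21; a small Python sketch (the function name is our own) makes the arithmetic explicit.

```python
from math import sqrt

def proportion_estimates(x: int, n: int):
    """Return (p_hat, q_hat, variance of p_hat, standard error of p_hat).

    Uses p_hat = X/n (Eq. 24.14) and s^2 = p_hat*q_hat/(n - 1) (Eq. 24.17).
    """
    p_hat = x / n
    q_hat = 1.0 - p_hat
    var_p = p_hat * q_hat / (n - 1)
    return p_hat, q_hat, var_p, sqrt(var_p)

# Example 24.4: 4 males in a sample of 20 spiders.
print(proportion_estimates(4, 20))
# (0.2, 0.8, 0.008421..., 0.0917...)
```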
every p̂ would be 1.0. However, if p is neither 0 nor 1.0, then all the many samples from the population would not have the same values of p̂. The variance of all possible p̂'s is
σ²p̂ = pq/n,   (24.16)
which can be estimated from our sample as
s²p̂ = p̂q̂/(n − 1).   (24.17)
This variance is essentially a variance of means, so Equation 24.16 is analogous to Equation 6.4, and Equation 24.17 to Equation 6.7. In Example 24.4 it is shown that the latter is true. The variance of q̂ is the same as the variance of p̂; that is,
σ²q̂ = σ²p̂   (24.18)
and
s²q̂ = s²p̂.   (24.19)
The standard error of p̂ (or of q̂), also called the standard deviation, is
σp̂ = √(pq/n),   (24.20)
which is estimated from a sample as*
sp̂ = √(p̂q̂/(n − 1)).   (24.21)
The possible values of σ²p̂, σ²q̂, σp̂, and σq̂ range from a minimum of zero when either p or q is zero, to a maximum when p = q = 0.5; and s²p̂, s²q̂, sp̂, and sq̂ can range from a minimum of zero when either p̂ or q̂ is zero, to a maximum when p̂ = q̂ = 0.5.
s? = P
pq
n -
1
(1 -
Nn)
(24.23)
and (24.24) when n/ N is called the sampling fraction, and 1 - n] N is the finite population correction, the latter also being written as (N - n)/ N. As N becomes very large compared to n, Equation 24.23 approaches Equation 24.17 and Equation 24.24 approaches 24.2l. We can estimate Y, the total number of occurrences in the population in the first category, as (24.25) Y =pN; and the variance and standard error of this estimate are S~ = _N....o..(_N __
n--,)-=-p-,-q
n - 1
Y
(24.26)
and
Sy =
\j
fN(N
- n)pq, n - 1
(24.27)
respectively.
*We often see (24.22) used to estimate up. Although it is an underestimate, when n is large the difference between Equations 24.21 and 24.22 is slight. TThese procedures are from Cochran (1977: 52). When sampling from finite populations, the data follow the hypergeometric (Section 24.2), rather than the binomial, distribution.
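Under these finite-population formulas, the correction shrinks the variance as the sampling fraction grows. The sketch below illustrates Equations 24.23-24.27 with hypothetical numbers (the sample counts and the function name are our own, chosen only for illustration).

```python
from math import sqrt

def finite_population_estimates(x: int, n: int, N: int):
    """p-hat, its finite-population-corrected variance (Eq. 24.23),
    and the estimated category total Y-hat with its SE (Eqs. 24.25, 24.27)."""
    p_hat = x / n
    q_hat = 1.0 - p_hat
    var_p = (p_hat * q_hat / (n - 1)) * (1 - n / N)   # corrected variance of p-hat
    y_hat = p_hat * N                                  # estimated total in category 1
    se_y = sqrt(N * (N - n) * p_hat * q_hat / (n - 1))
    return p_hat, var_p, y_hat, se_y

# Hypothetical illustration: 40 in category 1 among n = 100 sampled from N = 400.
print(finite_population_estimates(40, 100, 400))
```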
24.4 GOODNESS OF FIT FOR THE BINOMIAL DISTRIBUTION
(a) When p Is Hypothesized to Be Known. In some biological situations the population proportions, p and q, might be postulated, as from theory. For example, theory might tell us that 50% of mammalian sperm contain an X chromosome, whereas 50% contain a Y chromosome, and we can expect a 1:1 sex ratio among the offspring. We may wish to test the hypothesis that our sample came from a binomially distributed population with equal sex frequencies. We may do this as follows, by the goodness-of-fit testing introduced in Chapter 22.
Let us suppose that we have tabulated the sexes of the offspring from 54 litters of five animals each (Example 24.5). Setting p = q = 0.5, the proportion of each possible litter composition can be computed by the procedures of Example 24.1, using Equation 24.3, or they can be read directly from Appendix Table B.26b. From these proportions, we can tabulate expected frequencies, and then can subject observed and expected frequencies of each type of litter to a chi-square goodness-of-fit analysis (see Section 22.1), with k − 1 degrees of freedom (k being the number of classes of X). In Example 24.5, we do not reject the null hypothesis, and therefore we conclude that the sampled population is binomial with p = 0.5.
EXAMPLE 24.5 Goodness of Fit of a Binomial Distribution, When p Is Postulated

The data consist of observed frequencies of females in 54 litters of five offspring per litter. X = 0 denotes a litter having no females, X = 1 a litter having one female, and so on; f is the observed number of litters, and f̂ is the number of litters expected if the null hypothesis is true. Computation of the values of f̂ requires the values of P(X), as obtained in Example 24.1.

H0: The sexes of the offspring reflect a binomial distribution with p = q = 0.5.
HA: The sexes of the offspring do not reflect a binomial distribution with p = q = 0.5.

Xi    fi    f̂i
0      3    (0.03125)(54) = 1.6875
1     10    (0.15625)(54) = 8.4375
2     14    (0.31250)(54) = 16.8750
3     17    (0.31250)(54) = 16.8750
4      9    (0.15625)(54) = 8.4375
5      1    (0.03125)(54) = 1.6875

χ² = (3 − 1.6875)²/1.6875 + (10 − 8.4375)²/8.4375 + (14 − 16.8750)²/16.8750 + (17 − 16.8750)²/16.8750 + (9 − 8.4375)²/8.4375 + (1 − 1.6875)²/1.6875
   = 1.0208 + 0.2894 + 0.4898 + 0.0009 + 0.0375 + 0.2801
   = 2.1185

ν = k − 1 = 6 − 1 = 5
χ²0.05,5 = 11.070
Therefore, do not reject H0.
0.75 < P < 0.90 [P = 0.83]
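The whole procedure of Example 24.5 (binomial expected frequencies, then a chi-square statistic) can be scripted in a few lines; the Python sketch below uses our own function name and the example's data.

```python
from math import comb

def binomial_gof_chi_square(observed, p):
    """Chi-square statistic for goodness of fit to a binomial with known p.

    'observed' holds the frequencies of litters with X = 0, 1, ..., n females.
    """
    n = len(observed) - 1          # litter size
    total = sum(observed)          # number of litters
    chi_square = 0.0
    for x, f in enumerate(observed):
        expected = comb(n, x) * p**x * (1 - p)**(n - x) * total
        chi_square += (f - expected) ** 2 / expected
    return chi_square

# Example 24.5: 54 litters of five offspring; H0: p = q = 0.5.
print(round(binomial_gof_chi_square([3, 10, 14, 17, 9, 1], 0.5), 4))
# 2.1185, compared against chi-square with k - 1 = 5 degrees of freedom
```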
To avoid bias in this chi-square computation, no expected frequency should be less than 1.0 (Cochran, 1954). If such small frequencies occur, then frequencies in the appropriate extreme classes of X may be pooled to arrive at sufficiently large f̂i values. Such pooling was not necessary in Example 24.5, as no f̂i was less than 1.0. But it will be shown in Example 24.6.
EXAMPLE 24.6 Goodness of Fit of a Binomial Distribution, When p Is Estimated from the Sample Data

The data consist of observed frequencies of left-handed persons in 75 samples of eight persons each. X = 0 denotes a sample with no left-handed persons, X = 1 a sample with one left-handed person, and so on; f is the observed number of samples, and f̂ is the number of samples expected if the null hypothesis is true. Each f̂ is computed by multiplying 75 by P(X), where P(X) is obtained from Equation 24.3 by substituting p̂ and q̂ for p and q, respectively.

H0: The frequencies of left- and right-handed persons in the population follow a binomial distribution.
HA: The frequencies of left- and right-handed persons in the population do not follow a binomial distribution.

Xi    fi    fiXi    f̂i
0     21      0     [8!/(0!8!)](0.16⁰)(0.84⁸)(75) = (0.24788)(75) = 18.59
1     26     26     (0.37772)(75) = 28.33
2     19     38     (0.25181)(75) = 18.89
3      6     18     (0.09593)(75) = 7.19
4      2      8     (0.02284)(75) = 1.71
5      0      0     (0.00348)(75) = 0.26
6      1      6     (0.00033)(75) = 0.02
7      0      0     (0.00002)(75) = 0.00
8      0      0     (0.00000)(75) = 0.00

Σfi = 75; ΣfiXi = 96
X̄ = ΣfiXi/Σfi = 96/75 = 1.2800
p̂ = X̄/n = 1.2800/8 = 0.16 = probability of a person being left-handed
q̂ = 1 − p̂ = 0.84 = probability of a person being right-handed

Note: The extremely small f̂ values of 0.00, 0.00, 0.02, and 0.26 are each less than 1.00. So they are combined with the adjacent f̂ of 1.71. This results in an f̂ of 1.99 for a corresponding f of 3.

χ² = (21 − 18.59)²/18.59 + (26 − 28.33)²/28.33 + (19 − 18.89)²/18.89 + (6 − 7.19)²/7.19 + (3 − 1.99)²/1.99
   = 1.214

ν = k − 2 = 5 − 2 = 3
χ²0.05,3 = 7.815
Therefore, do not reject H0.
0.50 < P < 0.75 [P = 0.7496]
The G statistic (Section 22.7) may be calculated in lieu of chi-square, with the summation being executed over all classes except those where not only fi = 0 but also all more extreme fi's are zero. The Kolmogorov-Smirnov statistic of Section 22.8 could also be used to determine the goodness of fit. Heterogeneity testing (Section 22.6) may be performed for several sets of data hypothesized to have come from a binomial distribution.
If the preceding null hypothesis had been rejected, we might have looked in several directions for a biological explanation. The rejection of H0 might have indicated that the population p was, in fact, not 0.5. Or, it might have indicated that the underlying distribution was not binomial. The latter possibility may occur when membership of an individual in one of the two possible categories is dependent upon another individual in the sample. In Example 24.5, for instance, identical twins (or other multiple identical births) might have been a common occurrence in the species in question. In that case, if one member of a litter was found to be female, then there would be a greater-than-expected chance of a second member of the litter being female.
(b) When p Is Not Assumed to Be Known. Commonly, we do not postulate the value of p in the population but estimate it from a sample of data. As shown in Example 24.6, we may do this by calculating
p̂ = X̄/n.   (24.28)
It then follows that q̂ = 1 − p̂. The values of p̂ and q̂ may be substituted in Equation 24.3 in place of p and q, respectively. Thus, expected frequencies may be calculated for each X, and a chi-square goodness-of-fit analysis may be performed as it was in Example 24.5. In such a procedure, however, ν is k − 2 rather than k − 1, because two constants
Chapter 24
532
Dichotomous Variables
(n and j)) must be obtained from the sample, and v is, in general, determined as k minus the number of such constants. The G statistic (Section 22.7) may be employed when p is not known, but the Kolmogorov-Smirnov test (Section 22.8) is very conservative in such cases and should be avoided. The null hypothesis for such a test would be that the sampled population was distributed binomially, with the members of the population occuring independently of one another. 24.5
THE BINOMIAL TEST AND ONE-SAMPLE TEST OF A PROPORTION
With the ability to determine binomial probabilities, a simple procedure may be employed for goodness-of-fit testing of nominal data distributed between two categories. This method is especially welcome as an alternative to chi-square goodness of fit where the expected frequencies are small (see Section 22.5). If p is very small, then the Poisson distribution (Section 25.l) may be used; and it is simpler to employ when n is very large. Because the binomial distribution is discrete, this procedure is conservative in that the probability of a Type I error is s; 0:'. (a) One-Tailed Testing. Animals might be introduced one at a time into a passageway at the end of which each has a choice of turning either to the right or to the left. A substance, perhaps food, is placed out of sight to the left or right; the direction is randomly determined (as by the toss of a coin). We might state a null hypothesis, that there is no tendency for animals to turn in the direction of the food, against the alternative, that the animals prefer to turn toward the food. If we consider p to be the probability of turning toward the food, then the hypothesis (one-tailed) would be stated as Ho: P s; 0.5 and HA: p > 0.5, and such an experiment might be utilized, for example, to determine the ability of the animals to smell the food. We may test Hi, as shown in Example 24.7. In this procedure, we determine the probability of obtaining, at random, a distribution of data deviating as much as, or more than, the observed data. In Example 24.7, the most likely distribution of data in a sample of twelve from a population where p ; in fact, was 0.5, would be six left and six right. The samples deviating from a 6 : 6 ratio even more than our observed sample (having a 10 : 2 ratio) would be those possessing eleven left, one right, and twelve left, zero right.
EXAMPLE 24.7
A One-Tailed
Binomial Test
Twelve animals were introduced, one at a time, into a passageway at the end of which they could turn to the left (where food was placed out of sight) or to the right. We wish to determine if these animals came from a population in which animals would choose the left more often than the right (perhaps because they were able to smell the food). Thus, n = 12, the number of animals; X is the number of animals turning left; and p is the probability of animals in the sampled population that would turn left. /
He: p
s;
0.5 and HA: P
>5
In this example, P( X) is obtained either from Appendix Equation 24.3.
Tahle B.26b or by
Section 24.5
The Binomial Test and One-Sample Test of a Proportion
533
(a) The test using binomial probabilities
x
o I 2 3 4 5 6 7 8 9 10 II 12
P(X) 0.00024 0.00293 0'()1611 0.0537l 0.12085 0.19336 0.22559 0.19336 0.12085 0.05371 0.0l6rI 0.00293 0.00024
On performing the experiment, ten of the twelve animals turned to the left and two turned to the right. If Ho is true, P( X 2': 10) = 0.01611 + 0.00293 + 0.00024 = 0.01928. As this probability is less than 0.05, reject Ho. (b) The test using a confidence limit This test could also be performed by using the upper confidence limit of P as a critical value. For example, by Equation 24.35, with a one-tailed F,
x
= pn = (0.5)( 12) = 6,
Vi, =
v2
+ I)
=
14,
= 2 (12 - 6) = 12,
Fo.os( L2
2( 6
=
1).14.12
=
(6 12 - 6
2.64, and
+ 1)(2.64) + (6 + 1)(2.64)
p
Because the observed (namely X / n value (0.755), we reject Hi;
=
=
0.755. 10(12
=
0.833) exceeds the critical
(c) A simpler alternative Appendix Table B.27 can be consulted for n - 12 and a( 1) = 0.05 and to find an upper critical value of n - CO.O,)(I).n = 12 - 2 = 10. As an X of 10 falls within the range of 2': n - CO.OS(I).12,n, lIo is rejected; and (by examining the column headings) 0.01 < P(X 2': 10) < 0.25).
p
The general one-tai led hypotheses are H«: P ::; Po and H II: P > Po, or Ho: P 2': Po and HII: p < Po, where po need not be 0.5. The determination of the probability of p
534
Chapter 24
Dichotomous Variables
as extreme as, or more extreme than, that observed is shown in Example 24.8, where the expected frequencies, P(X), are obtained either from Appendix Table B.26b or by Equation 24.3. If the resultant probability is less than or equal to a, then Ho is rejected. A simple procedure for computing this P when Po = 0.5 is shown in Section 24.6. Alternatively, a critical value for the one-tailed binomial test may be found using the confidence-limit determinations of Section 24.8. This is demonstrated in Example 24.7b, using a one-tailed F for the confidence interval presented in Section 24.8a. If H»: P > 0.5 (as in Example 24.7), then Equation 24.35 is used to obtain the upper critical value as the critical value for the test. If the alternative hypothesis had been H A: P > 0.5, then Equation 24.29 would have been appropriate, calculating the lower confidence limit to be considered the critical value. Or the needed upper or lower confidence limit could be obtained using a one-tailed Z for the procedure of Section 24.8c. Employing the confidence limits of Section 24.8b is not recommended. A simpler alternative, demonstrated as Example 24.8c, is to consult Appendix Table B.27 to obtain the lower confidence limit, Ca(, ).n' for one-tailed probabilities, 0'( 1).If Hj\: P < 0.5, then Ho is rejected if X::::; Ca(, ).n' where X = np. If HA: P > 0.5 (as in Example 24.7), then Ho is rejected if X > n - Ca(, )./1' (b) Two-Tailed Testing. The preceding experiment might have been performed without expressing an interest specifically in whether the animals were attracted toward the introduced substance. Thus, there would be no reason for considering a preference for only one of the two possible directions, and we would be dealing with two-tailed hypotheses, Ho: P = 0.5 and H A: P 0.5. The testing procedure would be identical to that in Example 24.7, except that we desire to know P( X ::::;2 or X ;::::10). This is the probability of a set of data deviating in either direction from the expected as much as or more than those data observed. This is shown in Example 24.8. The general two-tailed hypotheses are He: P = Po and H A: P PO. If Po = 0.5, a simplified computation of P is shown in Equation 24.5. Instead of enumerating the several values of P( X) required, we could determine critical values for the two-tailed binomial test as the two-tailed confidence limits described in Section 24.8. If the observed p lies outside the interval formed by L, and L2, then Ho is rejected. This is demonstrated in Example 24.8b, using the confidence limits of Section 24.8a. If the hypothesized P is 0.5, then VI and V2 are the same for L, and L2; therefore, the required critical value of F is the same for both confidence limits. This is shown in Example 24.8a, where VI = v!'. V2 = vi, and FO.05(2).14.12 is used for both L I and L2. The confidence-limit calculation of Section 24.8c can also be used, but employing Section 24.8b is not recommended. A simpler two-tailed binomial test is possible using Appendix Table B.27, as demonstrated in Example 24.8c. If the observed count, X = pn, is either es Ca(2 ).v or ;::::n - Ca(2).n, then Hi, is rejected.
"*
"*
/
(c) Normal and Chi-Square Approximations. Some researchers have used the normal approximation to the binomial distribution to perform the two-tailed test (for He: P = Po versus H j\: p po) or the one-tailed test (either Ho: P ::::;Po versus HA: P > Po, or Ho: p ;::::Po versus H»: p < po). The test statistic is
"*
Z = X - nPn ,Jnpoqo
(24.29)
Section 24.5 EXAMPLE 24.8
The Binomial Test and One-Sample Test of a Proportion A Two-Tailed
535
Binomial Test
The experiment is as described in Example 24.7, except that we have no a priori interest in the animals' turning either toward or away from the introduced substance. Ho: p HA: P n
= 0.5
* 0.5
= 12
(a)
P(X;:::: 10 or X ~ 2)
(b)
= 0 through X = 12, are given in Example 24.7.
The probabilities of X, for X
+ 0.00293 + 0.00024 + 0.01611 + 0.00293 + 0.00024
=
0.01611
=
0.03856
As this probability is less than 0.05, reject Ho. Alternatively, this test could be performed by using the confidence limits as critical values. By Equations 24.34 and 24.35, we have X
=
pn
= (0.5)(12)
= 6,
and for LI we have VI
=2(12
- 6
V2
= 2(6) = 12
= 3.21
FO.05(2),14.12 LI
=
6
+
+ 1)=14
6 (12 - 6
+ 1)(3.21)
=0.211,
and for L2 we have VI
= 2( 6 + 1) = 14
v;=2(12-6)=12 FO.05(2),14,12 L2 =
= 3.21
(6 12 - 6
+ 1)(3.21) = 0.789. + (6 + 1)(3.21)
As the observed p (namely X[n 0.211 to 0.789, we reject Ho. (c)
= 10/12 = 0.833) lies outside the range of
A simpler procedure uses Appendix Table B.27 to obtain critical values of CO.05(2),6 = 2andn - CO.05(2),6 = 12 - 2 = 1O.AsX = 10,Hoisrejected; and 0.02 < P(X ~ 2 or X > 10) < 0.05.
where X = np, the number of observations in one of the two categories, and npo is the number of observations expected in that category if Ho is true. Equivalently, this
536
Chapter 24
Dichotomous Variables
may be expressed as Z = P - Po ~poqoln .
(24.30)
The two-tailed null hypothesis, Ho : P = Po (tested in Example 24.9), is rejected if 1Z 1 >ZO'(2); the one-tailed Ho : P ::; Po is rejected if 1Z 1 >Zo'( 1), and Ho : P 2: Po is rejected if Z Po, the critical value is the smallest value of X for which the probability of that X or a larger X is :S 0'. (In Example 24.7a this is found to be X = 10.) For a one-tailed test of He: P :S Po versus HA: P > Po, the critical value is the largest X for which the probability of that X or of a smaller X is :SO'. Then we examine the binomial distribution for the observed proportion, from our sample. The power of the test is 2: the probability of an X at least as extreme as the critical value referred to previously.* This is demonstrated in Example 24.11.
p,
EXAMPLE 24.11 Test of Example
Determination 24.7a
of the Power of the One-Tailed
Binomial
p
In Example 24.7, Ho: P :S 0.5 and = Xf n = 10/12 = 0.833. And X = 10 is the critical value, because P(X 2: 10) < 0.05, but P(X 2: 11) > 0.05. Using Equation 24.3, P(X) for X's of 10 through 12 is calculated for the binomial distribution having P = 0.833 and n = 12:
X
P(X)
10 11 12
0.296 0.269 0.112
Thus, the power when performing this test on a future set of such data is estimated to be 2:0.296 + 0.269 + 0.112 = 0.68.
"*
For a two-tailed test of H«: P = Po versus HA: P Po, there are two critical values of X, one that cuts off 0'/2 of the binomial distribution in each tail. Knowing these two X's, we examine the binomial distribution for and the power of the test is the probability in the latter distribution that X is at least as extreme as the critical values. This is demonstrated in Example 24.12. Cohen (1988: Section 5.4) presents tables to estimate sample size requirements in the sign test.
p,
(b) Normal Approximation for Power. Having performed a binomial test or a sign test, it may be estimated what the power would be if the same test were "If the critical X delineates a probability of exactly ex in the tail of the distribution, then the power is equal to that computed; if the critical value defines a tail of less than ex, then the power is greater than that calculated.
540
Chapter 24
Dichotomous Variables
EXAMPLE 24.12 of Example 24.10
Determination of the Power of the Two-Tailed SignTest'
In Example 24.10, He: P = 0.50, the critical values are 1 and 9, and p = Xjn = 8/10 = 0.800. Using Equation 24.3, P(X) is calculated for all X's equal to or more extreme than the critical values, for the binomial distribution having P = 0.800 and n = 10:
x
P(X)
a 1 9 10
0.000 0.000 0.269 0.112
Therefore, the power of performing this test on a future set of data is 0.000 + 0.269 + 0.112 = 0.38.
2:
0.000
+
performed on a future set of data taken from the same population. As noted earlier (e.g., Sections 7.7 and 8.7), this is not an estimate of the power of the test already performed; it is an estimate of the power of this test performed on a new set of data obtained from the same population. It has been noted in the preceding discussions of the binomial distribution that normal approximations to that distribution are generally poor and inadvisable. However, rough estimates of power are often sufficient in planning data collection. If n is not small and the best estimate of the population proportion, p ; is not near 0 or 1, then an approximation of the power of a binomial or sign test can be calculated as
power ~ P Z '"
Po - P
)Pnq
jPOqO
- Z'(2l\j pq
+ P Z
2:
Po - P
)Pnq
+
Z~(2)
~
jPOqO
'II
pq
(24.31) (Marascuilo and McSweeney, 1977: 62). Here Po is the population proportion in the hypothesis to be tested, qo = 1 - Po,P is the true population proportion (or our best estimate of it), q = 1 - p, ZO'(2) = [0'(2),00' and the probabilities of Z are found in Appendix Table B.2, using the considerations of Section 6.1. This is demonstrated in Example 24.13. For the one-tailed test, Ho: P :::;Po versus H A: P > Po, the estimated power is
power ~ p
Z" P",ftf'
+
Z.(1l) P;:"
(24.31a)
Section 24.7
Power, Detectable Difference, and Sample Size for Tests
541
EXAMPLE 24.13 Estimation of Power in a Two-Tailed Binomial Test. Using the Normal Approximation To test Ho: P = 0.5 versus HA: P -=F0.5, using a = 0.05 (so ZO.05(2) = 1.9600) and a sample size of 50, when P in the population is actually 0.5: Employing Equation 24.31,
power
=
P
Z :::; Po - P
-
)P:
Za(2) )POqO -pq
+ P Z ;::::po - P + Za(2)
jPOqO
\j pq
)Pnq
= P Z :::; 0.5 - 0.4
I (0.5)(0.5)
Z
(0.4)(0.6)
-
a(2)'V
(0.4)(0.6)
50
0.5 - 0.4 j (0.4 )(0.6)
\j
I (0.5)(0.5)
Z
+
a(2)'V
(0.4 )(0.6)
50
= P[ Z :::; 1.4434 - (1.9600) (1.0206)] + P[Z ;::::1.4434 + (1.9600)(1.0206)] = P[Z :::; 1.4434 - 2.0004] + P[Z :::; 1.4434 + 2.0004] = P[Z :::; -0.56] + P[Z ;::::3.44] = P[Z ;::::0.56] + P[Z ;::::3.44] = 0.29 + 0.00 = 0.29. and for the one-tailed hypotheses, Ho: P ;::::Po versus HA: P < Po,
power
=
P Z '"
PJif - z,,(l)~P;:O .
(24.31b)
(c) Sample Size Required and Minimum Detectable Difference. Prior to designing an experiment, an estimate of the needed sample size may be obtained by specifying
542
Chapter 24
Dichotomous Variables a and the minimum difference between Po and P that is desired to be detected with a given power. If P is not very near 0 or 1, this may be done with a normal approximation, with the understanding that this will result only in a rough estimate. Depending upon the hypothesis to be tested, Equation 24.31, 24.31a, or 24.31b may be used to estimate power for any sample size (n) that appears to be a reasonable guess. If the calculated power is less than desired, then the calculation is repeated with a larger n; if it is greater than that desired, the calculation is repeated with a smaller n. This repetitive process (called iteration) is performed until the specified power is obtained from the equation, at which point n is the estimate of the required sample size. An estimate of the required n may be obtained without iteration, when one-tailed testing is to be performed. Equations 24.31 a and 24.31 b can be rearranged as follows (Simonoff, 2003, 59-60): ForHo:p$poversusHA:p >po,
n
=
(Za( l)..;poqo - Z{3( 1)-JiHi)2
(24.32)
Po - P and for H«: P ;:::Po versus HA: p < po, n
= (Za(
1)
-JPOCiO + Z{3( 1) -JiHi)2 Po -
(24.32a)
P
Levin and Chen (1999) have shown that these two equations provide values of n that tend to be underestimates, and they present estimates that are often better. If p is not very near 0 or 1, we can specify a, power, and n, and employ iteration to obtain a rough estimate of the smallest difference between Po and p that can be detected in a future experiment. A reasonable guess of this minimum detectable difference, Po - p, may be inserted into Equation 24.31, 24.31a, or 24.31b (depending upon the hypothesis to be tested), and the calculated power is then examined. If the power is less than that specified, then the calculation is performed again, using a larger value of Po - P; if the calculated power is greater than the specified power, the computation is performed again, using a smaller Po - p. This iterative process, involving increasing or decreasing Po - P in the equation, is repeated until the desired power is achieved, at which point the PO - P used to calculate that power is the minimum detectable difference sought. Or, to estimate the minimum detectable difference for one-tailed testing, either Equation 24.32 or 24.32a (depending upon the hypotheses) may be rearranged to give
_ Za( Po -
P -
Po -
p -
or
respectively.
1)
-JiiOZiO - Z{3( 1) -JiHi
In
_ Za( I)JPOZiO + Z{3(1)-JiHi
In
'
(24.33)
'
(24.33a)
Section 24.8
Confidence Limits for a Population Proportion
543
.8 CONFIDENCE LIMITS FOR A POPULATION PROPORTION
Confidence intervals for the binomial parameter, p, can be calculated by a very large number of methods. * Among them are the following: (a) Clopper-Pearson Interval. A confidence interval for p may be computed (Agresti and Coull, 1998; Bliss, 1967: 199-201; Brownlee, 1965: 148-149; Fleiss, Levin, and .Paik, 2003: 25) using a relationship between the F distribution and the binomial distribution (Clopper and Pearson, 1934). As demonstrated in Example 24.14a, the lower confidence limit for P is LI
X
= ------------
(24.34)
X + (n - X + 1)Fa(2).I'!.'12 where
VI =
2(n
- X + 1) and L2
V2 =
(X
2X. And the upper confidence limit for pis
+ 1 )Fa(2)''''!'''2
= -----------'--=---
n - X + (X
(24.35)
+ 1 )F,,(2)''''!'''2
with v; = 2(X + 1), which is the same as V2 + 2, and v; = 2(n - X), which is equal to VI - 2. The interval specified by L I and L2 is one of many referred to as an "exact" confidence interval, because it is based upon an exact distribution (the binomial distribution) and not upon an approximation of a distribution. But it is not exact in the sense of specifying an interval that includes p with a probability of exactly 1 - £Y. Indeed, the aforementioned interval includes p with a probability of at least I - £Y, and the probability might be much greater than 1 - £Y. (So a confidence interval calculated in this fashion using £Y = 0.05, such as in Example 24.14a, will contain p with a probability of 95% or greater.) Because this interval tends to be larger than necessary for 1 - £Y confidence, it is said to be a conservative confidence interval (although the conservatism is less when n is large). (b) Wald Interval. This commonly encountered approximation for the confidence interval, based upon the normal distribution," is shown in Exercise 24.14b: A
P ± Z,,(2)
~
pq ----;;.
(24.36)
But this approximation can yield unsatisfactory results, especially when p is near 0 or 1 or when n is small. (Although the approximation improves somewhat as n or pq increases, it still performs less well than the method discussed in Section 24.8c.) One problem with this confidence interval is that it overestimates the precision of estimating p (and thus is said to be "liberal." That is, the interval includes pless "Many of these are discussed by Agresti and Coull (1998); Blyth (1986); Bohning (1994); Brown, Cai. and DasGupta (2002); Fujino (1980); Newcombe (1998a); and Vollset (1993). "Brownlee (1965: 136) credits Abraham de Moivre as the first to demonstrate, in 1733, the approximation of the binomial distribution by the normal distribution. Agresti (20()2: 15) refers to this as one of the first confidence intervals proposed for any parameter. citing Laplace (1812: 283).
544
Chapter 24
Dichotomous Variables
EXAMPLE 24.14 Determination of 95% Confidence Interval for the Binomial Population Parameter, p One hundred fifty birds were randomly collected from a population, and there were 65 females in the sample. What proportion of the population is female? n = 150,
X
= 65
~ X 65 P = -;; = 150 = 0.4333,
~ ~ q = 1 - P = 1 - 0.4333 = 0.5667
(a) The Clopper-Pearson Confidence Interval:
For the lower 95% confidence limit, VI =2(n V2
-
+ 1)=2(150
X
= 2X = 2(65)
F:0.052,172,130 ()
+ 1)=172
- 65
= 130
~ F:0.05 ()2
,140,120
= 1. 42
X
LI
(n - X
X
+
65
+ (150 - 65 + 1)( 1.42)
+ 1 )FO.05(2),172,130
65
~
= 0.347.
For the upper 95% confidence limit,
VI
or
2
V
+ 1) = 2( 65 + 1) = 132
= 2( X
VI
= 2(n or v2
V2
-
X)
= VI
FO.05(2),132,170 L2
+ 2 = 130 + 2 = 132
=
~
(X
= 2(150 - 65) = 170 2 = 172 - 2 FO.05(2).120.160
=
170
= 1.39
+ 1)FO.05(2)J32,170 + (X + 1)FO.05(2)J32,170
= ---------'--'------
n - X
+ 1)(1.39) = 0.519. 150 - 65 + (65 + 1) ( 1.39) (65
~------'------'!'--'----'------
Therefore, we can state the 95% confidence interval as P(0.347 ::::;p ::::;0.519)
=
0.95,
which is to say that there is 95% confidence that the interval between 0.347 and 0.519 includes the population parameter p. Note: In this example, the required critical values of F have degrees of freedom (172 and 130 for LI, and 132 and 170 for L2) that are not in Appendix Table B.4. So the next lower available degrees of freedom were used, which is generally an acceptable procedure. Exact critical values from an appropriate computer program
Section 24.8
Confidence Limits for a Population Proportion
are FO.05(2)J72J30 = 1.387 and FO.05(2).132,170 = 1.376, yielding L2 = 0.514, which are results very similar to those just given.
L1
545
= 0.353 and
(b) The Wald Confidence Interval: ZO.05(2) = 1.9600
P
±
Z"(2)~
= 0.4333 ± 1.9600 L1
= 0.354,
L2
\j
I (0.4333)
(0.5667) = 0.4333 ± 0.0793 150
= 0.513
(c) The Adjusted Wald Confidence Interval: Z~.05(2/2 = 1.96002/2 = 1.9208
Z~.05(2) = 1.96002 = 3.8416; adjusted X is X = X adjusted n is n = n adijuste d p
A.
IS
+ Z~.05(2/2 = 65 + 1.9208 = 66.9208
+ Z6.05(2) = 150 + 3.8416 = 153.8416
P = -X = 66.9208 =.,04350 153.8416
= 0.4350 ± 1.9600 L2
A
n
95% confidence interval for pis
L, = 0.357,
q
~
p
± ZO.05(2))
(0.4350) (0.5650) 153.8416
= 05650 .
Pi
= 0.4350 ± 0.0783
= 0.513
than 1 - a of the time (e.g., less than 95% of the time when a = 0.05). Another objection is that the calculated confidence interval is always symmetrical around p (that is, it has a lower limit as far from p as the upper limit is), although a binomial distribution is skewed (unless P = 0.50; see Figure 24.1). This forced symmetry can result in a calculated L1 that is less than 0 or an L2 greater than 1, which would be unreasonable. Also, when p is 0 or 1, no confidence interval is calculable; the upper and lower confidence limits are both calculated to be p. Though it is commonly encountered, many authors* have noted serious disadvantages of Equation 24.36 (even with application of a continuity correction), have strongly discouraged its use, and have discussed some approximations that perform much better. (c) Adjusted Wald Interval. A very simple, and very good, modification of the Wald interval (called an "adjusted Wald interval" by Agresti and Caffo, 2000, and Agresti "See, for example; Agresti and Caffo (2000); Agresti and Coull (1998); Blyth (1986); Brown, Cai, and DasGupta (200t, 2002); Fujino (1980); Newcombe (1998a); Schader and Schmid (1990); and Vollset (1993). Fleiss, Levin, and Paik (2003: 28-29) present a normal approximation more complicated than, but apparently more accurate than, Equation 24.36.
546
Chapter 24
Dichotomous Variables
and Coull, 1998) substitutes*
X
= X
+
Z~(2/2
for X and
n
= n
+
Z~(2) for n in
Equation 24.36,1 as shown in Example 24.14c; then, p = X/n, and q = 1 - p. The probability of this confidence interval containing the population parameter is much closer to 1 - a than the intervals in Sections 28.4a and 28.4b, although that probability may be a little below or a little above 1 - a in individual cases. And . neither L\ nor L2 is as likely as with the nonadjusted Wald interval to appear less than 0 or greater than 1. (d) Which Interval to Use. Because n is fairly large in Example 24.14, and ji and q are close to 0.5, the confidence limits do not vary appreciably among the aforementioned procedures. However, the following general guidelines emerge from the many studies performed on confidence intervals for the binomial parameter, p, although there is not unanimity of opinion: • If it is desired that there is probability of at least 1 - a that the interval from L, to L2 includes p, even if the interval might be very conservative (i.e., the probability might be much greater than 1 - a), then the Clopper-Pearson interval (Section 24.8a) should be used. • If it is desired that the probability is close to 1 - a that the interval from L, to L2 includes p ; even though it might be either a little above or a little below 1 - a, then the adjusted Wald interval (Section 24.8c) is preferable. Another approximation is that of Wilson (1927) and is sometimes called the "score interval" (with different, but equivalent, formulas given by Agresti and Coull, 2000 and Newcombe, 1998a); and it yields results similar to those of the adjusted Wald interval. • The nonadjusted Wald interval (Section 24.8b) should not be used. • None of these confidence-interval calculations works acceptably when X = 0 (that is, when p = 0) or when X = 1 (that is, when p = 1). The following exact confidence limits (Blyth, 1986; Fleiss, Levin, and Paik, 2003: 23; Sprent and Smeeton, 2001: 81; Vollset, 1993) should be used in those circumstances': IfX=O: IfX=l:
(24.37) (24.38)
One-tailed confidence intervals can be determined via the considerations presented in Sections 6.4a and 7.3a and the use of a one-tailed critical value for F or Z. *The symbol above the X and n is called a tilde (pronounced "til-duh"); X is read as "X tilde" n as "n tilde." tFor a 95% confidence interval, Za(2) = 1.9600, Zk05(2) = 3.8416, and Z3.05(2/2 = 1.9208; so it is often recommended to simply use X + 2 in place of X and n + 4 in place of n, which yields very good results. For the data of Example 24.14, this would give the same confidence limits as in Example 24.14c: Ll = 0.357 and L2 = 0.513. +The notation 'ifX represents the nth root of X, which may also be written as X 1/", so 'Va( 1) may be seen written as r a( 1)]1/ ", It can also be noted that negative exponents represent reciprocals: x :» = 1/ X", This modern notation for fractional and negative powers was introduced by Sir Isaac ewton (1642-1727) in 1676, although the notion of fractional powers was conceived much earlier, such as by French writer icole Oresme (ca. 1323-1382) (Cajori, 1928-1929: Vol. I: 91,354, 355).
and
Section 24.8
Confidence Limits for a Population Proportion
547
There are computer programs and published tables that provide confidence limits for p, but users of them should be confident in the computational method employed to obtain the results. (e) Confidence Limits with Finite Populations. It is usually considered that the size of a sampled population (N) is very much larger than the size of the sample (n), as if the population size was infinite. If, however, n is large compared to N (i.e., the sample is a large portion of the population), it is said that the population is finite. As n approaches N, the estimation of p becomes more accurate, and the calculation of confidence limits by the adjusted Wald method (or by the Wald method) improves greatly by converting the lower confidence limit, L J, to i L, )c and the upper confidence limit, L2, to (L2 )c, as follows (Burstein, 1975): (Ld, ~
X ;
(L ). = X 2c
0.5 -
+
~
X/n
(X -" 0.5 +
n
(L _ 2
LI)~ Z = 7
-
X
+
~
X/n))N
N
n-
(24.39)
- n.1
(24.40)
The more (N - n) / (N - 1) differs from 1.0, the more the confidence limi ts from Equations 24.39 and 24.40 will be preferable to those that do not consider the sample population to be finite. (f) Sample Size Requirements. A researcher may wish to estimate, for a given how large a sample is necessary to produce a confidence interval of a specified width. Section 24.8b presents a procedure (although often a crude one) for calculating a confidence interval for p when p is not close to 0 or 1. That normal approximation can be employed, with p and q obtained from an existing set of data, to provide a rough estimate of the number of data (n) that a future sample from the same population must contain to obtain a confidence interval where both LJ and L2 are at a designated distance, 8, from p:
p,
2
n
~~
ZO.05(2)PQ
(24.41)
= ----'--'-82
(Cochran, 1977: 75-76; Hollander and Wolfe, 1999: 30), where ZO.05(2) is a two-tailed normal deviate. If we do not have an estimate of p, then a conservative estimate of the required n can be obtained by inserting 0.5 for and for in Equation 24.41. If the sample size, n, is not a small portion of the population size, N, then the required sample size is smaller than the n determined by Equation 24.41 and can be estimated as n (24.42) nl = ------1 + (n - 1)/ N
p
q
(Cochran, 1977: 75-76). For the confidence intervals of Sections 24.8a and 24.8c, a much better estimate of the needed sample size may be obtained by iteration. For a given a value of n may be proposed, from Equation 24.41 or otherwise. (Equation 24.41 will yield an underestimate of n.) Then the confidence limits are calculated. If the confidence interval is wider than desired, perform the calculation again with a larger n; and if it is
p,
548
Chapter 24
Dichotomous Variables
narrower than desired, try again with a smaller n. This process can be repeated unti an estimate of the required n is obtained for the interval width desired. 24.9
CONFIDENCE INTERVAL FOR A POPULATION MEDIAN
The confidence limits for a population median" may be obtained by considering a binomial distribution with p = 0.5. The procedure thus is related to the binomial and sign tests in earlier sections of this chapter and may conveniently use Appendix Table B.27. That table gives Ca,/z, and from this we can state the confidence interval for a median to be P( Xi :::;population median :::;Xi)
2"
1 -
(24.43)
0',
where
i=
C,,(2).n
(24.44)
+ 1
and (24.45) (e.g., MacKinnon, 1964), if the data are arranged in order of magnitude (so that Xi is the smallest measurement and X, is the largest). The confidence limits, therefore, are LI = Xi and L2 = Xj. Because of the discreteness of the binomial distribution, the confidence will typically be a little greater than the 1 - 0' specified. This procedure is demonstrated in Example 24.14a. EXAMPLE 24.14a
A Confidence
Interval for a Median
Let us determine a 95% confidence interval for the median of the population from which each of the two sets of data in Example 3.3 came, where the population median was estimated to be 40 mo for species A and 52 mo for species B. For species A, n = 9, so (from Appendix Table B.27) CO.05(2).9 = 1 and n - C005(2).9 = 9 - 1 = 8. The confidence limits are, therefore, Xi and Xi, where i = 1 + 1 = 2 and j = 8; and we can state P(X2:::;
population median rs Xs)
2"
0.95
or P(36 mo :::; population median rs 43 mo)
2"
0.95.
For species B,n = 10, and Appendix Table B.27 informs us that CO.05(2),10 = 1; therefore, n - CO.05(2).IO = 10 - 1 = 9. The confidence limits are Xi and Xj, where i = 1 + 1 = 2 and j = 9; thus, P(X2:::;
population median rs X9)
2"
0.95.
or P(36 mo :::; population median rs 69 mo)
Hutson (1999) discussed calculation than the median. *Such confidence 1984).
intervals
2"
0.95.
of confidence intervals for quantiles
were first discussed
by William
R. Thompson
other
in 1936 (Noether,
Section 24.10
Testing for Difference Between Two Proportions
549
(a) A Large-Sample Approximation. For samples larger than appearing in Appendix Table B.27, an excellent approximation of the lower confidence limit (based on Hollander and Wolfe, 1999: Section 3.6), is derived from the normal distribution as (24.46) where n
-
Fn
Za(2)
i = ----'---'--
(24.47)
2 rounded to the nearest integer, and Za(2) is the two-tailed normal deviate read from Appendix Table B.2. (Recall that Za(2) = ta(2),cxJ' and so may be read from the last line of Appendix Table B.3.) The upper confidence limit is
By this method we approximate confidence 2 1 - ll'.
(24.48)
= Xn-i+l.
L2
a confidence interval for the population median with
TESTING FOR DIFFERENCE BETWEEN TWO PROPORTIONS Two proportions may be compared by casting the underlying data in a 2 x 2 contingency table and considering that one margin of the table is fixed (Section 23.3b). For example, in Example 23.3 the column totals (the total data for each species) are fixed and the proportion of mice afflicted with the parasite are PI = 18/24 = 0.75 for species 1 and P2 = 10/25 = 0.40 for species 2. The null hypothesis iH«: PI = P2) may be tested using the normal distribution (as shown in Example 24.15), by computing A
Z =
A
PI
-
!pq
\j Here,
P2
(24.49)
+ pq .
n1
n2
15 is the proportion of parasitized mice obtained by pooling all n data: XI nl
+ +
X2
(24.50)
n2
or, equivalently, as A
nlPI nl
+ +
A
n2P2
(24.51 )
n2
and Zj = 1 - 15. A null hypothesis may propose a difference other than zero between two proportions. With Po the specified difference, H«: I PI - P2 I = Po may be tested by replacing the numerator* of Equation 24.49 with IpI - P21 - PO. One-tailed testing is also possible (with Po = 0 or Po 0) and is effected in a fashion analogous to that used for testing difference between two means (Section 8.1a). This is demonstrated in Example 24.15b, where the alternate hypothesis is that the parasite infects a higher proportion of fish in species 1 than in species 2, and also in Example 24.15c.
"*
*If Po is not zero in Ho and HA, in some cases a somewhat reported if the denominator of Equation 77; Eberhardt and Fligncr, 1977).
24.49 is replaced
by
more
)p I q I /1/1
+
powerful P2Q2/I/2
test has been (Agresti,
2002:
550
Chapter 24
Dichotomous Variables
EXAMPLE 24.15
Testing for Difference Between Two Proportions
Using the data of Example 23.3,
PI
= XI!nl
P2
=
X2/
= 18/24 = 0.75
n: = 10/25 = 0.40
J5 = (18 + 10)/(24 Zf
=
1 ~ 0.5714
+ 25) = 28/49 = 0.5714 0.4286.
=
(a) The two-tailed test for Ho: PI = P2 versus HA: PI A
"* P2
A
PI ~ P2
Z
jpq
-v nj
+ pq n:
0.75 ~ 0.40 /(0.5714)(0.4286)
For
Q'
= 0.05,
~
24
0.35 JO.1414
= 2.4752
ZO.05(2)
+ (0.5714)(0.4286) 25
= 1.9600 and Hi, is rejected.
0.01 < P < 0.02 (b) The one-tailed test for Ho: PI
S
[P = 0.013]
P2 versus HA: PI
> P2
For Q' = 0.05, ZO.05( I) = 1.6449. Because Z > 1.6449 and the difference (0.35) is in the direction of the alternate hypothesis, Ho is rejected. 0.005 < P < 0.01
[P
=
(c) The one-tailed test for Ho: PI ;:::: P2 versus HA: PI
0.007]
< P2
For Q' = 0.05, ZO.05( I) = 1.6449. Because Z > 1.6449 butthe difference (0.35) is not in the direction of the alternate hypothesis, Hi, is not rejected. Important note: Three pairs of hypotheses are tested in this example. This is done only for demonstration of the test, for in practice it would be proper to test only one of these pairs of hypotheses for a given set of data. The decision of which one of the three pairs of hypotheses to use should be made on the basis of the biological question being asked and is to be made prior to the collection of the data.
If the preceding hypotheses pertain to a sample of n I proportions and a second sample of n2 proportions, then the t test of Section 8.1 can be used in conjunction with the data transformation of Section 13.3.
Section 24.11
4.11
Confidence Limits for the Difference Between Proportions
551
CONFIDENCE LIMITS FOR THE DIFFERENCE BETWEEN PROPORTIONS
When a proportion (jJ I) is obtained by sampling one population, and a proportion (jJ2) is from another population, confidence limits for the difference between the two population proportions (PI - P2) can be calculated by many methods.* The most common are these: (a) Wald Interval. A confidence interval may be expressed in a fashion analogous to the Wald interval for a single proportion (which is discussed in Section 24.8b): PI A
-
+ P2q2 --
pA 2 ± Z a(2) ~Plql -nl
(24.52)
,
o:
Pz.
where PI = XI/nl,P2 = X2/n2,qAI = 1 - pAI,andqA2 = 1 Though commonly used, this calculation of confidence limits yields poor results, even when sample sizes are large. These confidence limits include PI - P2 less than 1 - Q' of the time (e.g., less than 95% of the time when expressing a 95% confidence interval), and thus they are said to be "liberal." The confidence limits include PI - P2 much less than 1 - Q' of the time when PI and P2 are near 0 or 1. Also, Equation 24.52 produces confidence limits that are always symmetrical around PI - P2, whereas (unless PI - P2 = 0.50) the distance between the lower confidence limit (LJ) and PI - P2 should be different than the distance between PI - P2 and the upper limit (L2). This unrealistic symmetry can produce a calculated LI that is less than 0 or greater than 1, which would be an unreasonable result. Tn addition, when both PI and /h are either 0 or 1, a confidence interval cannot be calculated by this equation. Therefore, this calculation (as is the case of Equation 24.36 in Section 24.8b) generally is not recommended. (b) Adjusted Wald Interval. Agresti and Caffo (2000) have shown that it is far preferable to employ an "adjusted" Wald interval (analogous to that in the onesample situation of Section 24.8c), where Equation 24.52 is employed by substituting" Xi = Xi + Z~(2)/ 4 for Xi and fii = niZ~(2j12 for ni. As shown in Example 24.15a, the adjusted confidence interval is obtained by using Pi = Xjni in place of Pi in Equation 24.52. This adjusted Wald confidence interval avoids the undesirable severe liberalism obtainable with the unadjusted interval, although it can be slightly conservative (i.e., have a probability of a little greater than 1 of containing PI - P2) when PI and P2 are both near 0 or 1. Newcombe (1998b) discussed a confidence interval that is a modification of the one-sample interval based upon Wilson (1927) and mentioned in Section 24.8d. It is said to produce results similar to those of the adjusted Wald interval. Q'
(c) Sample Size Requirements. If the statistics PI and P2 are obtained from sampling two populations, it may be desired to estimate how many data must be collected from those populations to calculate a confidence interval of specified width for PI - P2· The following may be derived from the calculation of the Wald interval * A large number of them arc discussed by Agresti and Caffo (2000), Agresti and Coull ()998), Blyth (1986), Hauck and Anderson (1986), Newcombe (1998b), and Upton (1982). iFora95%confidenceintcrval,Z;(2)/4 = (1.9600)2/4 = 0.9604andZ;(2/2 = (1.9600)2/2 = 1.9208, so using Xi + 1 in place of Xi and n, + 2 in place of n, yields very good results.
552
Chapter 24
Dichotomous Variables EXAMPLE 24.1Sa Confidence Population Proportions
Interval
for the Difference
Between
Two
For the 95% adjusted Wald confidence interval using the data of Example 23.3, ZO.05(2) = 1.9600,
= 0.9604 and (1.9600)2/2
so (1.9600)2/4 XI
= 18, so
Xl
= 1.9208
= 18 + 0.9604 = 18.9604
+ 0.9604 = 10.9604 = 24, so fll = 24 + 1.9208 = 25.9208
X2 = 10, so X2 = 10 nl
n: =
25, so fl2 = 25
+ 1.9208 = 26.9208
PI
= 18.9604/25.9208 = 0.7315 and 711 = 1 - 0.7315 = 0.2685
P2
= 10.9604/26.9208 = 0.4071 and?/J = 1 - 0.4071 = 0.5929
95% CI for PI
-
P2
=
PI
-
P2 ± ZO.05(2)
\j
Ip~711 + nl
= 0.7315 _ 0.4071 ± 1.9600 / (0.7315)(0.2685)
\j = 0.3244 ± 1.9600JO.0076 LI
=
0.07;
L2
=
7
P2 12 fl2
+ (0.4071 )(0.5929)
25.9208
26.9208
+ 0.0090 = 0.3244 ± 0.2525
0.58.
(Section 24.11a) for equal sample sizes n = Z2 a(2)
(n
=
[PIll! (~ PI
=
nl
+ -
n2):
P2(12. ~ )2
(24.53)
P2
This is an underestimate of the number of data needed, and a better estimate may be obtained by iteration, using the adjusted Wald interval (Section 24.11b). For the given PI and a speculated value of n is inserted into the computation of the adjusted Wald interval in place of nl and n2. If the calculated confidence interval is wider than desired, the adjusted Wald calculation is performed again with a larger n; if it is narrower than desired, the calculation is executed with a smaller n. This process is repeated until n is obtained for the interval width desired. The iteration using the adjusted Wald equation may also be performed with one of the future sample sizes specified, in which case the process estimates the size needed for the other sample.
P2,
24.12
POWER, DETECTABLE DIFFERENCE, AND SAMPLE SIZE IN TESTING DIFFERENCE BETWEEN TWO PROPORTIONS
(a) Power of the Test. If the test of He: PI = P2 versus H A: PI i=- P2 is to be performed at the a significance level, with nl data in sample 1 and n2 data in sample 2, and if the two samples come from populations actually having proportions of PI
Section 24.12
Power, Detectable Difference, and Sample Size
553
and P2, respectively, then an estimate of power is
power
= P
[
-Zo:(2)~pq/nl + pq/n2 - (PI - P2)] Z::;: --'-'-'-r========----~Plql/nl + P2q2/n2
+P
[z ~
(24.54)
+
Zo:(2)~pZj/nl
pZj/n2
+
~Plqdnl
-
(PI
-
P2)]
P2q2/n2
(Marascuilo and McSweeney, 1977: 111), where
+ +
J5 = nlPI nl
n2P2 n2
(24.55)
ql = 1 - PI,
(24.56)
= 1-
P2,
(24.57)
p.
(24.58)
q: and
Zj=1
The calculation is demonstrated in Example 24.16. For the one-tailed test of Ho: PI ~ P2 versus HA: PI < P2, the estimated power is power = P Z::;: [
+
-Zo:(I)~pq/nl
pq/n2
+
~Plql/nl
-
(PI
-
[z
~
Zo:(I)~pZj/nl
+
~Plql/nl
pZj/n2
+
;
(24.59)
P2q2/n2
and for the one-tailed hypotheses, Ho: PI ::;:P2 versus H A: PI power = P
P2)]
-
(PI
-
>
P2,
P2)]
.
(24.60)
P2q2/n2
These power computations are based on approximations to the Fisher exact test (Section 24.16) and tend to produce a conservative result. That is, the power is likely to be greater than that calculated. (b) Sample Size Required and Minimum Detectable Difference. Estimating the sample size needed in a future comparison of two proportions, with a specified power, has been discussed by several authors,* using a normal approximation. Such an estimate may also be obtained by iteration analogous to that of Section 24.7c.
"See, for example, Casagrande, Pike, and Smith (1978); Cochran and Cox (1957: 27); and Fleiss, Levin, and Paik (2003: 72). Also, Hornick and Overall (1980) reported that the following computation of Cochran and Cox (1957: 27) yields good results and appears not to have a tendency to be conservative: (24.61) where the arcsines are expressed in radians.
554
Chapter 24
Dichotomous Variables
EXAMPLE 24.16 Two Proportions
Estimation of Power in a Two-Tailed Test Comparing
We propose to test He: PI = P2 versus HA: PI =I- P2, with a = 0.05, nl = 50, and = 45, where in the sampled populations PI = 0.75 and P2 = 0.50. The power of the test can be estimated as follows. We first compute (by Equation 24.55): n2
p = (50)(0.75) 50
+ (45)(0.50) + 45
= 0.6316
and
q = 1 - P = 0.3684.
Then
l?..!L = (0.6316)(0.3684)
= 0.0047;
P q = 0.0052;
50
nl
Plql nl
n2
= -",-(0_.7_5-,---,)(_0._25-'..) = 0.0038;
Z005(2)
P2Q2 = (0.50) (0.50)
n:
50
=
0.0056;
45
= 1.9600;
and, using Equation 24.64, ower
=
p[z -:; -1.9600JO.0047
+ 0.0052 - (0.75 - 0.50)] JO.0038 + 0.0056
p
+
p
[z::::
1.9600JO.0047 + 0.0052 - (0.75 - 0.50)] JO.0038 + 0.0056
= P(Z -:; -4.59) = P(Z :::: -4.59)
+ P(Z + [1 -
:::: -0.57) P(Z
:::: 0.57)]
= 0.0000 + [1.0000 - 0.2843] = 0.72. The estimation procedure uses Equation 24.54, 24.59, or 24.60, depending upon the null hypothesis to be tested. The power is thus determined for the difference (PI - P2) desired to be detected between two population proportions. This is done using equal sample sizes (nj = n2) that are a reasonable guess of the sample size that is required from each of the two populations. If the power thus calculated is less than desired, then the calculation is repeated using a larger sample size. If the calculated power is greater than the desired power, the computation is performed again but using a smaller sample size. Such iterative calculations are repeated until the specified power has been obtained, and the last n used in the calculation is the estimate of the sample size required in each of the two samples. The samples from the two populations should be of the same size (n I = n2) for the desired power to be calculated with the fewest total number of data (nj + n2). FIeiss, Tytun, and Ury (1980), Levin and Chen (1999), and Ury and Fleiss (1980) discuss the estimation of n I and tt: when acquiring equal sample sizes is not practical.
Comparing More Than Two Proportions
Section 24.13
555
In a similar manner, n, a, and power may be specified, and the minimum detectable difference (PI - P2) may be estimated. This is done by iteration, using Equation 24.54,24.59, or 24.60, depending upon the null hypothesis to be tested. A reasonable guess of the minimum detectable difference can be entered into the equation and, if the calculated power is less than that desired, the computation is repeated with a larger (PI - P2); if the calculated power is greater than that desired, the computation is repeated by inserting a smaller PI - P2 into the equation; and when the desired power is obtained from the equation, the PI - P2 last used in the calculation is an expression of the estimated minimum detectable difference. Ury (1982) described a procedure for estimating one of the two population proportions if n, a, the desired power, and the other population proportion are specified. ~4.13 COMPARING MORE THAN TWO PROPORTIONS
I I
Comparison of proportions may be done by contingency-table analysis. For example, the null hypothesis of Example 23.1 could be stated as, "The proportions of males and females are the same among individuals of each of the four hair colors." Alternatively, an approximation related to the normal approximation is applicable (if n is large and neither P nor q is very near 1). Using this approximation, one tests He: Pl = P2 = ... = Pk against the alternative hypothesis that all k proportions are not the same, as (24.62) (Pazer and Swanson, 1972: 187-190). Here, k
2: Xi
[5=
i=1
(24.63)
k
2:ni
i= I
is a pooled proportion, q = 1 - Zj, and X2 has k - 1 degrees of freedom. Example 24.17 demonstrates this procedure, which is equivalent to X2 testing of a contingency table with two rows (or two columns). We can instead test whether k p's are equal not only to each other but to a specified constant, Po (i.e., H«: PI = P2 = ... = Pk = po)· This is done by computing X2
=
±
i=l
(Xi - niPo)2, niPo(l
(24.64)
- po)
which is then compared to the critical value of X2 for k (rather than k - 1) degrees of freedom (Kulkarni and Shah, 1995, who also discuss one-tailed testing of Ha, where HA is that Pi =1=Po for at least one i). If each of the several proportions to be compared to each other is the mean of a set of proportions, then we can use the multisample testing procedures of Chapters 10, 11, 12, 14, and 15. To do so, the individual data should be transformed as suggested in Section 13.3, preferably by Equation 13.7 or 13.8, if possible.
556
Chapter 24
Dichotomous Variables
Comparing Four Proportions, Using the Data of Exam·
EXAMPLE 24.17
pie 23.1 32
A
nl
= 87, Xl = 32, P: = -
87
A
= 0.368, ql = 0.632
43 n2 = 108, X2 = 43, P2 = 108 = 0.398, qz = 0.602 A
A
16 n3 = 80, X3 = 16, P3 = - = 0.200, q3 = 0.800 80 A
A
9
A
A
n4 = 25, X4 = 9, P4 = - = 0.360, q4 = 0.640 25
15 = LXi = 32 + 43 + 16 + 9 = 100 = ! L n, 87 + 108 + 80 + 25 300 3 q=1-J5=~
3
mr
[132 - (87) (87) (~)
[43 - (108)
+ (108) (~)
(~)
[9 - (25)
+ (25) (~)
mr (~)
[16 -
(80)
+ (80) (~)
mr (~)
mr (~)
= 0.4655 + 2.0417 + 6.4000 + 0.0800
= 8.987 (which is the same
X2 as in Example 23.1)
v=k-1=4-1=3 X6.05,3 = 7.815 Therefore, reject Ho. 0.025 < P < 0.05
[P
= 0.029]
Note how the calculated X2 compares with that in Example 23.1; the two procedures yield the same results for contingency tables with two rows or two columns.
Finally, it should be noted that comparing several p's yields the same results as if one compared the associated q's.
Section 24.14 MULTIPLE COMPARISONS
Multiple Comparisons for Proportions
557
FOR PROPORTIONS
(a) Comparisons of All Pairs of Proportions. If the null hypothesis Ho: PI = = ... = Pk (see Section 24.13) is rejected, then we may desire to determine specifically which population proportions are different from which others. The following procedure (similar to that of Levy, 1975a) allows for testing analogous to the Tukey test introduced in Section 11.1. An angular transformation (Section 13.3) of each sample proportion is to be used. If but not X and n, is known, then Equation 13.5 may be used. If, however, X and n are known, then either Equation 13.7 or 13.8 is preferable. (The latter two equations give similar results, except for small or large where Equation 13.8 is probably better.) As shown in Example 24.18, the multiple comparison procedure is similar to that in Chapter 11 (the Tukey test being in Section 11.1). The standard error for each comparison is, in degrees,*
P2
p,
p,
I
SE =
\j EXAMPLE 24.18 Four Proportions
820.70 + 0.5
Tukey-Type Multiple of Example 24.17
Samples ranked by proportion Ranked sample proportions
(i):
(pi
(24.65)
n
= XJni):
Comparison
Testing
Among
the
3
4
1
2
16/80
9/25
32/87
43/108
= 0.200 = 0.360 = 0.368 = 0.398 Ranked transformed (pi, in degrees): Comparison B vs. A
proportions 26.85
37.18
37.42
39.18
Difference
Pa - PA
SE
q
2 vs. 3 2 vs. 4
39.18 - 26.85 = 12.33 2.98 4.14 39.18 - 37.18 = 2.00 4.46 0.45
2 vs. 1 1 vs. 3
Do not test 37.42 - 26.85 = 10.57 3.13 3.38
QO.05,oo,4
3.633 3.633
Conclusion Reject Ho: P2 = P3 Do not reject He: P2
3.633
P4
Do not reject Ho: PI
1 vs. 4 4 vs. 3
=
=
P3
Do not test Do not test
Overall conclusion: P4 = PI = P2 and P3 = P4 = PI, which is the kind of ambiguous result described at the end of Section 11.1. By chi-square analysis (Example 23.6) it was concluded that P3 =1=P4 = PI = P2; it is likely that the present method lacks power for this set of data. Equation 13.8 is used for the transformations. For sample 3, for example, X/en + 1) = 16/81 = 0.198 and (X + l)/(n + 1) = 17/81 = 0.210, so P3 = ~[arcsin .)0.198 + arcsin .)0.210] = ~[26.4215 + 27.2747] = 26.848. If we use Appendix Table B.24 to obtain the two needed arcsines, we have 1 P3 = 2[26.42 + 27.27] = 26.845. "The constant 820.70 square degrees results from (1800 /27T)2, which follows from the variances reported by Anscombe (1948) and Freeman and Tukey (1950).
558
Chapter 24
Dichotomous Variables
if the two samples being compared are the same size, or 410.35
SE =
+
+ 0.5
nA
410.35
ne + 0.5
(24.66)
if they are not. The critical value is qa, ,k (from Appendix Table B.5). Use of the normal approximation to the binomial is possible in multiple-comparison testing (e.g., Marascuilo, 1971: 380-382); but the preceding procedure is preferable, even though it-and the methods to follow in this section-may lack desirable power. (b) Comparison of a Control Proportion to Each Other Proportion. A procedure analogous to the Dunnett test of Section 11.3 may be used as a multiple comparison test where instead of comparing all pairs of proportions we desire to compare one proportion (designated as the "control") to each of the others. Calling the control group B, and each other group, in turn, A, we compute the Dunnett test statistic: q =
p'
B
- p'
A
(24.67)
SE Here, the proportions have been transformed appropriate standard error is SE=
as earlier in this section, and the
/1641.40 0.5
(24.68)
-V n + if Samples A and B are the same size, or SE =
I V
+
820.70 nA + 0.5
820.70
ne + 0.5
(24.69)
if nA f:- n p: The critical value is q'a (1) ,oo,p (from Appendix Table B.6) or q'a (2) ,oo,p (from Appendix Table B.7) for one-tailed or two-tailed testing, respectively. (c) Multiple Contrasts Among Proportions. The Scheffe procedure for multiple contrasts among means (Section 11.4) may be adapted to proportions by using angular transformations as done earlier in this section. For each contrast, we calculate
~CiP; S
i = -'-------'-
(24.70)
SE where SE
=
c2 i
820.70~ ~
i
n,
+ 0.5
,
(24.71)
and c; is a contrast coefficient as described in Section 11.4. For example, if we wished to test the hypothesis Hn: (P1 + P2 + P4)/3 - P3 = O,thencl = ~,C2 = ~,C3 = -1, and
C4
= -31 .
Section 24.15
Trends Among Proportions
559
TRENDSAMONG PROPORTIONS In a 2 x c contingency table (2 rows and c columns), the columns may have a natural quantitative sequence. For example, they may represent different ages, different lengths of time after a treatment, different sizes, different degrees of infection, or different intensities of a treatment. In Example 24.19, the columns represent three age classes of women, and the data are the frequencies with which the 104 women in the sample exhibit a particular skeletal condition. The chi-square contingency-table analysis of Section 23.1 tests the null hypothesis that the occurrence of this condition is independent of age class (i.e., that the population proportion, p, of women with the condition is the same for all three age classes). It is seen, in Example 24.19a, that this hypothesis is rejected, so we conclude that in the sampled population there is a relationship between the two variables (age class and skeletal condition).
EXAMPLE24.19 Testing for Linear Trend in a 2 x 3 Contingency Table. The Data Are the Frequencies of Occurrence of a Skeletal Condition in Women, Tabulated by Age Class Age class Condition present Condition absent Total
~ Pj
XJ
Young
Medium
Older
Total
6 22
16 28
18 14
40 64
28 0.2143 -1
44 0.3636 0
32 0.5625 1
104
(a) Comparison of proportions Ho: In the sampled population, the proportion of women with this condition is the same for all three age classes. H A: In the sampled population, the proportion of women with this condition is not the same for all three age classes. ~
/
~
fll
= (40)(28)/104
= 10.7692, [v: = (40)(44)/104
f23
= (64)(28)/104
= 19.6923
=
± ± U;j ~ J;j i= 1 j= 1
= 16.9231, ... ,
)2
fij
= (6 - 10.7692)2 + (16 - 16.9231)2 + ... + (14 - 19.6923)2 10.7692
16.9231
19.6923
= 2.1121 + 0.0504 + 2.6327 + 1.3200 + 0.0315 + 1.6454 =
7.7921
(b) Test for trend He: In the sampled population, there is a linear trend among these three age categories for the proportion of women with this condition.
560
Chapter 24
Dichotomous Variables HA:
Tn the sampled population, there is not a linear trend among these three age categories for the proportion of women with this condition.
n (njt,ftJXJ - R"t,CJXj)
2
XI
2
=--.
RjR2
()
nj~ CjXl
-
j~
2
c.x,
+ (16) ( 0) 104 + (44) (0) (40)(64) ·104[(28)( _12) + (44)(02) - [( 28 ) ( -1) + (44)( 0) {104[ ( 6 ) ( - 1) - 40[ (28) ( -1)
+ (18) ( 1 )] + (32) ( 1)]} 2 + (32)(12)] + (32) ( 1 )]2
= 0.04062. (1248 - 160 )2 =
6240 - 16 (0.04062)( 190.1902) = 7.726 Chi-square
Total Linear trend Departure from linear trend
X2 X~
= 7.792 = 7.726
v
P
2 1
0.01 < P < 0.025 [P = 0.020]
X~ = 0.066
0.75
< P < 0.90 [P = 0.80]
In addition, we may ask whether the difference among the three age classes follows a linear trend; that is, whether there is either a greater occurrence of the condition of interest in women of greater age or a lesser occurrence with greater age. The question of linear trend in a 2 x n contingency table may be addressed by the method promoted by Armitage (1955, 1971: 363-365) and Armitage, Berry, and Matthews (2002: 504-509). To do so, the magnitudes of ages expressed by the age classes may be designated by consecutive equally spaced ordinal scores: X. For example, the "young," "medium," and "older" categories in the present example could be indicated by X's of 1, 2, and 3; or by 0, 1, and 2; or by - 1, 0, and 1; or by 0.5, 1, and 1.5; or by 3, 5, and 7; and so on. The computation for trend is made easier if the scores are consecutive integers centered on zero, so scores of -1,0, and 1 are used in Example 24.19. (If the number of columns, c, were 4, then X's such as - 2, -1,1, and 2, or 1, 0, 2, and 3 could be used.) The procedure divides the contingency-table chi-square into component parts, somewhat as sum-of-squares partitioning is done in analysis of variance. The chisquare of Equation 23.1 may be referred to as the total chi-square, a portion of which can be identified as being due to a linear trend:
r
(n,t,flJXj - Rl~CjXj chi-square for linear trend
=
X7
= _n_ RIR2
c nj~ CjXj
2
(c -
j~
\2'
c.x, )
(24.72)
Section 24.16
The Fisher Exact Test
561
and the remainder is identified as not being due to a linear trend: chi-square for departure from linear trend
= x~ =
X2 -
x7.
(24.73)
Associated with these three chi-square values are degrees of freedom of c 1 for X2, 1 for X~ and c - 2 for X~.* This testing for trend among proportions is more powerful than the chi-square test for difference among the c proportions, so a trend might be identified even if the latter chi-square test concludes no significant difference among the proportions. If, in Example 24.19, the data in the second column (16 and 28) were for older women and the data in the third column (18 and 14) were for medium-aged women, the total chi-square would have been the same, but X~ would have been 0.899 [P = 0.34], and it would have been concluded that there was no linear trend with age. In Example 24.19, the presence of a physical condition was analyzed in reference to an ordinal scale of measurement ("young," "medium," and "older"). In other situations, an interval or ratio scale may be encountered. For example, the three columns might have represented age classes of known quantitative intervals. If the three categories were equal intervals of "20.0-39.9 years," "40.0-59.9 years,"and "60.0-79.9 years," then the X's could be set as equally spaced values (for example, as -1,0, and 1) the same as in Example 24.19. However, if the intervals were unequal in size, such as "20.0-29.9 years," "30.0-49.9 years,"and "50.0-79.9 years," then the X's should reflect the midpoints of the intervals, such as by 25, 40, and 65; or (with the subtraction of25 years) by 0,15, and 40; or (with subtraction of 40 years) by -15,0, and 25. THE FISHER EXACT TEST In the discussion of 2 X 2 contingency tables, Section 23.3c described contingency tables that have two fixed margins, and Section 23.3d recommended analyzing such tables using a contingency-corrected chi-square (X~ or X~), or a procedure known as the Fisher exact test,' The test using chi-square corrected for continuity is an approximation of the Fisher exact test, with X~ the same as X~ or routinely a better approximation than X~. The Fisher exact test is based upon hypergeometric probabilities (see Section 24.2). The needed calculations can be tedious, but some statistical computer programs (e.g., Zar, 1987, and some statistical packages) can perform the test. Although this book recommends this test only for 2 X 2 tables having both margins fixed, some researchers use it for the other kinds of 2 X 2 tables (see Section 23.3d).
* Armitage (1955) explained that this procedure may be thought of as a regression of the sample proportions, p_j, on the ordinal scores, X_j, where the p_j's are weighted by the column totals, C_j; or as a regression of n pairs of Y and X, where Y is 1 for each of the observations in row 1 and is 0 for each of the observations in row 2.
†Named for Sir Ronald Aylmer Fisher (1890-1962), a monumental statistician recognized as a principal founder of modern statistics, with extremely strong influence in statistical theory and methods, including many areas of biostatistics (see, for example, Rao, 1992). At about the same time he published this procedure (Fisher, 1934: 99-101; 1935), it was also presented by Yates (1934) and Irwin (1935), so it is sometimes referred to as the Fisher-Yates test or Fisher-Irwin test. Yates (1984) observed that Fisher was probably aware of the exact-test procedure as early as 1926. Although often referred to as a statistician, R. A. Fisher also had a strong reputation as a biologist (e.g., Neyman, 1967), publishing, from 1912 to 1962, 140 papers on genetics as well as 129 on statistics and 16 on other topics (Barnard, 1990).
The probability of a given 2 × 2 table is

$$P = \frac{\dbinom{R_1}{f_{11}}\dbinom{R_2}{f_{21}}}{\dbinom{n}{C_1}}, \tag{24.74}$$

which is identical to

$$P = \frac{\dbinom{C_1}{f_{11}}\dbinom{C_2}{f_{12}}}{\dbinom{n}{R_1}}. \tag{24.75}$$

From Equation 24.11, both Equations 24.74 and 24.75 reduce to

$$P = \frac{R_1!\,R_2!\,C_1!\,C_2!}{f_{11}!\,f_{12}!\,f_{21}!\,f_{22}!\,n!}, \tag{24.76}$$

and it will be seen that there is advantage in expressing this as

$$P = \frac{R_1!\,R_2!\,C_1!\,C_2!/n!}{f_{11}!\,f_{12}!\,f_{21}!\,f_{22}!}. \tag{24.77}$$
(a) One-Tailed Testing. Consider the data of Example 23.4. If species 1 is naturally found in more rapidly moving waters, it would be reasonable to propose that it is better adapted to resist current, and the test could involve one-tailed hypotheses: H0: The proportion of snails of species 1 resisting the water current is no greater than (i.e., less than or equal to) the proportion of species 2 withstanding the current, and HA: The proportion of snails of species 1 resisting the current is greater than the proportion of species 2 resisting the current. The Fisher exact test proceeds as in Example 24.20.
EXAMPLE 24.20    A One-Tailed Fisher Exact Test, Using the Data of Example 23.4

H0: The proportion of snails of species 1 able to resist the experimental water current is no greater than the proportion of species 2 snails able to resist the current.
HA: The proportion of snails of species 1 able to resist the experimental water current is greater than the proportion of species 2 snails able to resist the current.

             Resisted   Yielded   Total
Species 1       12          7       19
Species 2        2          9       11
Total           14         16       30
Expressing the proportion of each species resisting the current in the sample,

             Resisted   Yielded   Total
Species 1      0.63       0.37     1.00
Species 2      0.18       0.82     1.00
The sample data are in the direction of HA, in that the species 1 sample has a higher proportion of resistant snails than does the species 2 sample. But are the data significantly in that direction? (If the data were not in the direction of HA, the conclusion would be that H0 cannot be rejected, and the analysis would proceed no further.) The probability of the observed table of data is

P = (R1! R2! C1! C2!/n!) / (f11! f12! f21! f22!)
  = (19! 11! 14! 16!/30!) / (12! 7! 2! 9!)
  = antilog [(log 19! + log 11! + log 14! + log 16! − log 30!) − (log 12! + log 7! + log 2! + log 9!)]
  = antilog [(17.08509 + 7.60116 + 10.94041 + 13.32062 − 32.42366) − (8.68034 + 3.70243 + 0.30103 + 5.55976)]
  = antilog [16.52362 − 18.24356]
  = antilog [−1.71994]
  = antilog [0.28006 − 2.00000]
  = 0.01906.

There are two tables with data more extreme than the observed data; they are as follows:

Table A:
             13    6    19
              1   10    11
             14   16    30
P = (19! 11! 14! 16!/30!) / (13! 6! 1! 10!)
  = antilog [16.52362 − (log 13! + log 6! + log 1! + log 10!)]
  = antilog [−2.68776]
  = 0.00205
Table B:
             14    5    19
              0   11    11
             14   16    30

P = (19! 11! 14! 16!/30!) / (14! 5! 0! 11!)
  = antilog [16.52362 − (log 14! + log 5! + log 0! + log 11!)]
  = antilog [−4.09713]
  = 0.00008

To summarize the probability of the original table and of the two more extreme tables (where f0 in each table is the smallest of the four frequencies in that table),
                         f0        P
Original table            2    0.01906
More extreme table A      1    0.00205
More extreme table B      0    0.00008
Entire tail                    0.02119
Therefore, if the null hypothesis is true, the probability of the array of data in the observed table or in more extreme tables is 0.02119. As this probability is less than 0.05, H0 is rejected. Note that if the hypotheses had been H0: Snail species 2 has no greater ability to resist current than species 1 and HA: Snail species 2 has greater ability to resist current than species 1, then we would have observed that the sample data are not in the direction of HA and would not reject H0, without even computing probabilities. Instead of computing this exact probability of H0, we may consult Appendix Table B.28, for n = 30, m1 = 11, m2 = 14; and the one-tailed critical values of f, for α = 0.05, are 2 and 8. As the observed f in the cell corresponding to m1 = 11 and m2 = 14 is 2, H0 may be rejected.

The probability of the observed contingency table occurring by chance, given the row and column totals, may be computed using Equation 24.76 or 24.77. Then the probability is calculated for each possible table having observed data more extreme than those of the original table. If the smallest observed frequency in the original table is designated as f0 (which is 2 in Example 24.20), the more extreme tables are those that have smaller values of f0 (which would be 1 and 0 in this example). (If the smallest observed frequency occurs in two cells of the table, then f0 is designated to be the one with the smaller frequency diagonally opposite it.) The null hypothesis is tested by examining the sum of the probabilities of the observed table and of all the more extreme tables. This procedure yields the exact probability (hence the name of the test) of obtaining this set of tables by chance if the null hypothesis is true; and if this probability is less than or equal to the significance level, α, then H0 is rejected.
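This procedure translates directly into a short program. The following is a minimal sketch in Python; the function names are illustrative, binomial coefficients are used in place of the log-factorial arithmetic of Example 24.20 (the two are algebraically equivalent, via Equation 24.75), and the tie rule for f0 mentioned above is omitted for brevity.

```python
from math import comb

def table_probability(f11, f12, f21, f22):
    """Probability of a 2 x 2 table with the given margins (Equation 24.76),
    written with binomial coefficients to avoid very large factorials."""
    n = f11 + f12 + f21 + f22
    return comb(f11 + f12, f11) * comb(f21 + f22, f21) / comb(n, f11 + f21)

def fisher_one_tailed(f11, f12, f21, f22):
    """Sum the probabilities of the observed table and of every more
    extreme table (the smallest cell, f0, reduced toward zero with all
    margins held fixed)."""
    cells = [f11, f12, f21, f22]      # positions 0, 1, 2, 3; position
    i = cells.index(min(cells))       # 3 - i is diagonally opposite i
    p = 0.0
    for d in range(cells[i] + 1):     # d = amount removed from f0
        # f0 and its diagonal cell shrink by d; the other two grow by d,
        # which keeps every row and column total unchanged:
        t = [cells[j] - d if j in (i, 3 - i) else cells[j] + d
             for j in range(4)]
        p += table_probability(*t)
    return p

print(fisher_one_tailed(12, 7, 2, 9))   # 0.02119 for Example 24.20
```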
Note that the quantity R1! R2! C1! C2!/n! appears in each of the probability calculations using Equation 24.76 and therefore need be computed only once. It is only the value of f11! f12! f21! f22! that needs to be computed anew for each table. To undertake these computations, the use of logarithms is advised for all but the smallest tables; and Appendix Table B.40 provides logarithms of factorials. It is also obvious that, unless the four cell frequencies are small, this test calculation is tedious without a computer.

An alternative to computing the exact probability in the Fisher exact test of 2 × 2 tables is to consult Appendix Table B.28 to obtain critical values with which to test null hypotheses for n up to 30. We examine the four marginal frequencies, R1, R2, C1, and C2; and we designate the smallest of the four as m1. If m1 is a row total, then we call the smaller of the two column totals m2; if m1 is a column total, then the smaller row total is m2. In Example 24.20, m1 = R2 and m2 = C1; and the one-tailed critical values in Appendix Table B.28, for α = 0.05, are 2 and 8. The observed frequency in the cell corresponding to marginal totals m1 and m2 is called f; and if f is equal to or more extreme than 2 or 8 (i.e., if f ≤ 2 or f ≥ 8), then H0 is rejected. However, employing tables of critical values results in expressing a range of probabilities associated with H0; and a noteworthy characteristic of the exact test (namely, the exact probability) is absent.

Bennett and Nakamura (1963) published tables for performing an exact test of 2 × 3 tables where the three column (or row) totals are equal and n is as large as 60. Computer programs have been developed to perform exact testing of r × c tables where r and/or c is greater than 2. Feldman and Kluger (1963) demonstrated a simpler computational procedure for obtaining the probabilities of tables more extreme than those of the observed table. It will not be presented here because the calculations shown in this section and in Section 24.16c are straightforward and because the Fisher exact test is so often performed via computer programs.
(b) Two-Tailed Testing.
For data in a 2 × 2 contingency table, the Fisher exact test may also be used to test two-tailed hypotheses, particularly when both margins of the table are fixed. Example 24.21 demonstrates this for the data and hypotheses of Example 23.4. What is needed is the sum of the probabilities of the observed table and of all tables more extreme in the same direction as the observed data. This is the probability obtained for the one-tailed test shown in Example 24.20. If either R1 = R2 or C1 = C2, then the two-tailed probability is two times the one-tailed probability. Otherwise, it is not, and the probability for the second tail is computed as follows.* Again designating f0 to be the smallest of the four observed frequencies and m1 to be the smallest of the four marginal frequencies in the original table, a 2 × 2 table is formed by replacing f0 with m1 − f0, and this is the most extreme table in the
second tail. This is shown as Table C in Example 24.21. The probability of that table is calculated with Equation 24.76 or 24.77; if it is greater than the probability of the original table, then the two-tailed probability equals the one-tailed probability and the computation is complete. If the probability of the newly formed table is not greater than that of the original table, then it contributes to the probability of the second tail and the calculations continue.

*Some (e.g., Dupont, 1986) recommend that the two-tailed probability should be determined as two times the one-tailed probability. Others (e.g., Lloyd, 1999) argue against that calculation, and that practice is not employed here; and it can be noted that the second tail may be much smaller than the first, and such a doubling procedure could result in a computed two-tailed probability that is greater than 1.
EXAMPLE 24.21    A Two-Tailed Fisher Exact Test, Using the Data and Hypotheses of Example 23.4

The probability of the observed table was found, in Example 24.20, to be 0.01906, and the one-tailed probability was calculated to be 0.02119. In determining the one-tailed probability, the smallest cell frequency (f0) in the most extreme table (Table B) was 0, and the smallest marginal frequency (m1) was 11. So m1 − f0 = 11 − 0 = 11 is inserted in place of f0 to form the most extreme table in the opposite tail:

Table C:
              3   16    19
             11    0    11
             14   16    30

the probability of which is

P = (19! 11! 14! 16!/30!) / (3! 16! 11! 0!)
  = 0.00000663, which is rounded to 0.00001.

The less extreme tables that are in the second tail, and have probabilities less than the probability of the observed table, are these two:

Table D:
              4   15    19
             10    1    11
             14   16    30

Table E:
              5   14    19
              9    2    11
             14   16    30

The next less extreme table is this:

Table F:
              6   13    19
              8    3    11
             14   16    30
The cumulative second-tail probability through Table F (0.03549) is larger than the one-tailed probability of the original table (0.02119), so the Table F probability is not considered a relevant part of the second tail. The second tail consists of the following:
                      f0        P
Table C                3    0.00001
Table D                4    0.00029
Table E                5    0.00440
Entire second tail          0.00470
and the two-tailed P is, therefore, 0.02119 + 0.00470 = 0.02589. As this is less than 0.05, we may reject H0. Note that the continuity-corrected chi-square in Example 23.4 has a probability close to that of this Fisher exact test. If using Appendix Table B.28: n = 30, m1 = 11, m2 = 14, and the f corresponding to m1 and m2 in the observed table is 2. As the two-tailed critical values of f, for α = 0.05, are 2 and 9, H0 is rejected.

The probability of the next less extreme table is then determined; that table (Table D in Example 24.21) has cell frequency f0 increased by 1, keeping the marginal frequencies the same. The two probabilities calculated for the second tail are summed and, if the sum is no greater than the probability of the original table, that cell frequency is again increased by 1 and a new probability computed. This process is continued as long as the sum of the probabilities in that tail is no greater than the probability of the original table.

(c) Probabilities Using Binomial Coefficients. Ghent (1972), Leslie (1955), Leyton (1968), and Sakoda and Cohen (1957) have shown how the use of binomial coefficients can eliminate much of the laboriousness of Fisher-exact-test computations, and Ghent (1972) and Carr (1980) have expanded these considerations to tables with more than two rows and/or columns. Using Appendix Table B.26a, this computational procedure requires much less effort than the use of logarithms of factorials, and it is at least as accurate. It may be employed for moderately large sample sizes, limited by the number of digits on one's calculator.

Referring back to Equation 24.75, the probability of a given 2 × 2 table is seen to be the product of two binomial coefficients divided by a third. The numerator of Equation 24.75 consists of one binomial coefficient representing the number of ways C1 items can be combined f11 at a time (or f21 at a time, which is equivalent) and a second coefficient expressing the number of ways C2 items can be combined f12 at a time (or, equivalently, f22 at a time). And the denominator denotes the number of ways n items can be combined R1 at a time (or R2 at a time). Appendix Table B.26a provides a large array of binomial coefficients, and the proper selection of those required leads to simple computation of the probability of a 2 × 2 table. (See Section 5.3 for discussion of combinations.)

The procedure is demonstrated in Example 24.22, for the data in Example 24.20. Consider the first row of the contingency table and determine the largest f11 and the smallest f12 that are possible without exceeding the row totals and column totals. These are f11 = 14 and f12 = 5, which sum to the row total of 19. (Other frequencies, such as 15 and 4, also add to 19, but the frequencies in the first column are limited to 14.) In a table where f11 < f12, switch the two columns of the 2 × 2 table before performing these calculations.
EXAMPLE 24.22    The Fisher Exact Tests of Examples 24.20 and 24.21, Employing the Binomial-Coefficient Procedure

The observed 2 × 2 contingency table is

             12    7    19
              2    9    11
             14   16    30

The top-row frequencies, f11 and f12, of all contingency tables possible with the observed row and column totals, and their associated binomial coefficients and coefficient products, are as follows. The observed contingency table is indicated by "*".

                 Binomial coefficient
 f11   f12     C1 = 14      C2 = 16      Coefficient product
 14     5           1   ×     4,368   =          4,368  ┐
 13     6          14   ×     8,008   =        112,112  │ 1,157,520
 12*    7*         91   ×    11,440   =      1,041,040* ┘
 11     8         364   ×    12,870   =      4,684,680
 10     9       1,001   ×    11,440   =     11,451,440
  9    10       2,002   ×     8,008   =     16,032,016
  8    11       3,003   ×     4,368   =     13,117,104
  7    12       3,432   ×     1,820   =      6,246,240
  6    13       3,003   ×       560   =      1,681,680
  5    14       2,002   ×       120   =        240,240  ┐
  4    15       1,001   ×        16   =         16,016  │   256,620
  3    16         364   ×         1   =            364  ┘
                                        Sum: 54,627,300

One-tailed probability:                        1,157,520/54,627,300 = 0.02119
Probability associated with the opposite tail:   256,620/54,627,300 = 0.00470
Two-tailed probability:                                                0.02589
We shall need to refer to the binomial coefficients for what Appendix Table B.26a refers to as n = 14 and n = 16, for these are the two column totals (C1 and C2) in the contingency table in Example 24.20. We record, from Appendix Table B.26a, the binomial coefficient for n = C1 = 14 and X = f11 = 14 (which is 1), the coefficient for n = C2 = 16 and X = f12 = 5 (which is 4,368), and the product of the two coefficients (which is 4,368). Then we record the binomial coefficients of the next less extreme table, that is, the one with f11 = 13 and f12 = 6 (that is, coefficients of 14 and 8,008) and their product (i.e., 112,112). This process is repeated for each possible table until f11 can be no
smaller (and f12 can be no larger): that is, f11 = 3 and f12 = 16. The sum of all the coefficient products (54,627,300 in this example) is the number of ways n things may be combined R1 at a time (where n is the total of the frequencies in the 2 × 2 table); this is the binomial coefficient for n (the total frequency) and X = R1 (and in the present example this coefficient is 30C19 = 54,627,300). Determining this coefficient is a good arithmetic check against the sum of the products of the several coefficients of individual contingency tables.

Dividing the coefficient product for a contingency table by the sum of the products yields the probability of that table. Thus, the table of observed data in Example 24.20 has f11 = 12 and f12 = 7, and we may compute 1,041,040/54,627,300 = 0.01906, exactly the probability obtained in Example 24.20 using logarithms of factorials. The probability of the one-tailed test employs the sum of those coefficient products equal to or smaller than the product for the observed table and in the same tail as the observed table. In the present example, this tail would include the products 4,368, 112,112, and 1,041,040, the sum of which is 1,157,520, and 1,157,520/54,627,300 = 0.02119, which is the probability calculated in Example 24.20. To obtain the probability for the two-tailed test, we add to the one-tailed probability the probabilities of all tables in the opposite tail that have coefficient products equal to or less than that of the observed table. In our example these products are 240,240, 16,016, and 364, their sum is 256,620, and 256,620/54,627,300 = 0.00470; the probabilities of the two tails are 0.02119 and 0.00470, which sum to the two-tailed probability of 0.02589 (which is what was calculated in Example 24.21).
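The entire enumeration of Example 24.22 can be expressed compactly in code. The following is a minimal sketch in Python; the function name is illustrative, and determining the direction of the observed tail by comparing f11 with its expected value R1C1/n is an assumption of this sketch, not a step stated in the text.

```python
from math import comb

def fisher_by_coefficients(f11, f12, f21, f22):
    """One- and two-tailed Fisher exact probabilities via the
    coefficient-product enumeration of Example 24.22."""
    R1, C1, C2 = f11 + f12, f11 + f21, f12 + f22
    n = C1 + C2
    # Coefficient product for every possible top-left frequency k:
    products = {k: comb(C1, k) * comb(C2, R1 - k)
                for k in range(max(0, R1 - C2), min(R1, C1) + 1)}
    denom = comb(n, R1)           # also equals sum(products.values())
    observed = products[f11]
    # The observed tail runs from f11 away from the expected frequency:
    in_tail = (lambda k: k >= f11) if f11 >= R1 * C1 / n else (lambda k: k <= f11)
    one = sum(v for k, v in products.items() if in_tail(k))
    # Opposite tail: tables whose products do not exceed the observed one.
    two = one + sum(v for k, v in products.items()
                    if not in_tail(k) and v <= observed)
    return one / denom, two / denom

print(fisher_by_coefficients(12, 7, 2, 9))   # (0.02119, 0.02589)
```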
24.17 PAIRED-SAMPLE TESTING OF NOMINAL-SCALE DATA

(a) Data in a 2 × 2 Table. Nominal-scale data may come from paired samples. A 2 × 2 table containing data that are dichotomous (i.e., the nominal-scale variable has two possible values) may be analyzed by the McNemar test (McNemar, 1947). For example, assume that we wish to test whether two skin lotions are equally effective in relieving a poison-ivy rash. Both of the lotions might be tested on each of 50 patients with poison-ivy rashes on both arms, by applying one lotion to one arm and the other lotion to the other arm (using, for each person, a random selection of which arm gets which lotion). The results of the experiment can be summarized in a table such as in Example 24.23, where the results for each of the 50 patients consist of a pair of data (i.e., the outcomes of using the two lotions on each patient). As with other 2 × 2 tables (such as in Section 23.3), the datum in row i and column j will be designated as f_ij. Thus, f11 = 12, f12 = 5, f21 = 11, and f22 = 22; and the total of the four frequencies is n = 50.

The two-tailed null hypothesis is that, in the sampled population of people who might be treated with these two medications, the proportion of them that would obtain relief from lotion A (call it p1) is the same as the proportion receiving relief from lotion B (call it p2); that is, H0: p1 = p2 (vs. HA: p1 ≠ p2). In the sample, 12 patients (f11) experienced relief from both lotions and 22 (f22) had relief from neither lotion. The proportion of people in the sample who experienced relief from lotion A is p̂1 = (f11 + f21)/n = (12 + 11)/50 = 0.46, and the proportion benefiting from lotion B is p̂2 = (f11 + f12)/n = (12 + 5)/50 = 0.34. The sample estimate of p1 − p2 is
$$\hat{p}_1 - \hat{p}_2 = \frac{f_{11} + f_{21}}{n} - \frac{f_{11} + f_{12}}{n} = \frac{f_{21}}{n} - \frac{f_{12}}{n}. \tag{24.78}$$
That is, of the four data in the 2 × 2 table, only f12 and f21 are needed to test the hypotheses. The test is essentially a goodness-of-fit procedure (Section 22.1) where we ask whether the ratio of f12 to f21 departs significantly from 1 : 1. Thus, the hypotheses could also be stated as H0: ψ = 1 and HA: ψ ≠ 1, where* ψ is the population ratio estimated by f12/f21. The goodness-of-fit test for this example could proceed using Equation 22.1, but for this hypothesis it is readily performed via

$$\chi^2 = \frac{(f_{12} - f_{21})^2}{f_{12} + f_{21}}, \tag{24.79}$$

which is equivalent to using the normal deviate for a two-tailed test:

$$Z = \frac{|f_{12} - f_{21}|}{\sqrt{f_{12} + f_{21}}}. \tag{24.80}$$

Because χ² and Z are continuous distributions and the data to be analyzed are counts (i.e., integers), some authors have employed corrections for continuity. A common one is the Yates correction for continuity (introduced in Section 22.2). This is accomplished using Equation 22.3 or, equivalently, with

$$\chi_c^2 = \frac{(|f_{12} - f_{21}| - 1)^2}{f_{12} + f_{21}}, \tag{24.81}$$

which is the same as employing

$$Z_c = \frac{|f_{12} - f_{21}| - 1}{\sqrt{f_{12} + f_{21}}}. \tag{24.82}$$

The calculation of χ²_c for this test is demonstrated in Example 24.23. A McNemar test using χ² operates with the probability of a Type I error much closer to α than testing with χ²_c, although that probability will occasionally be a little greater than α. That is, the test can be liberal, rejecting H0 more often than it should at the α level of significance. Use of χ²_c will routinely result in a test that is conservative, rejecting H0 less often than it should and having less power than employing χ². Often, as in Example 24.23, the same conclusion is reached using the test with and without a correction for continuity. Bennett and Underwood (1970) and others have advised that the continuity correction should not generally be used.

Because this test employs only two of the four tabulated data (f12 and f21), the results are the same regardless of the magnitude of the other two counts (f11 and f22), which are considered as tied data and ignored in the analysis. If f12 and f21 are small, this test does not work well. If f12 + f21 ≤ 10, the binomial test (Section 24.5) is recommended, with n = f12 + f21 and X = f12 (or f21).
Another type of data amenable to McNemar testing results from the situation where experimental responses are recorded before and after some event, in which case the procedure may be called the "McNemar test for change." For example, we might record whether students say they plan to pursue a career in microbiology before and after an internship experience in a microbiology laboratory. The column headings could be "yes" and "no" before the internship, and the row designations then would be "yes" and "no" after the internship.

*The symbol ψ is the lowercase Greek letter psi. (See Appendix A.)
EXAMPLE 24.23    McNemar's Test for Paired-Sample Nominal-Scale Data

H0: The proportion of persons experiencing relief is the same with both lotions (i.e., H0: p1 = p2).
HA: The proportion of persons experiencing relief is not the same with both lotions (i.e., HA: p1 ≠ p2).
α = 0.05
n = 50

                     Lotion A
Lotion B       Relief   No relief
Relief           12          5
No relief        11         22

χ² = (f12 − f21)²/(f12 + f21) = (5 − 11)²/(5 + 11) = 2.250
ν = 1, χ²_0.05,1 = 3.841. Therefore, do not reject H0.
0.10 < P < 0.25   [P = 0.13]

Alternatively, and with the same result,

Z = |f12 − f21|/√(f12 + f21) = |5 − 11|/√(5 + 11) = 1.500,
Z_0.05(2) = 1.9600. Do not reject H0.
0.10 < P < 0.20   [P = 0.13]

With a correction for continuity,

χ²_c = (|f12 − f21| − 1)²/(f12 + f21) = (|5 − 11| − 1)²/(5 + 11) = 1.562.
Do not reject H0.
0.10 < P < 0.25   [P = 0.21]

Alternatively, and with the same result,

Z_c = (|f12 − f21| − 1)/√(f12 + f21) = (|5 − 11| − 1)/√(5 + 11) = 1.250.
Do not reject H0.
0.20 < P < 0.50   [P = 0.21]
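A minimal sketch, in Python, of the four statistics of Equations 24.79 through 24.82, applied to the lotion data above (the function name is illustrative):

```python
from math import sqrt

def mcnemar(f12, f21):
    """McNemar test statistics for paired dichotomous data."""
    diff, total = f12 - f21, f12 + f21
    chi2   = diff ** 2 / total                # Equation 24.79
    z      = abs(diff) / sqrt(total)          # Equation 24.80
    chi2_c = (abs(diff) - 1) ** 2 / total     # Equation 24.81
    z_c    = (abs(diff) - 1) / sqrt(total)    # Equation 24.82
    return chi2, z, chi2_c, z_c

print(mcnemar(5, 11))   # (2.25, 1.5, 1.5625, 1.25), as in Example 24.23
```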
The McNemar test should not be confused with 2 × 2 contingency-table analysis (Section 23.3). Contingency-table data are analyzed using a null hypothesis of independence between rows and columns, whereas in the case of data subjected to the McNemar test, there is intentional association between the row and column data.

(b) The One-Tailed McNemar Test. Using the normal deviate (Z) as the test statistic, one-tailed hypotheses can be tested. So, for example, the hypotheses for a poison-ivy-treatment experiment could be H0: The proportion of people experiencing relief with lotion A is not greater than (i.e., is less than or equal to) the proportion having relief with lotion B, versus HA: The proportion of people experiencing relief with lotion A is greater than the proportion obtaining relief with lotion B. And H0 would be rejected if Z (or Zc, if using the continuity correction) were greater than or equal to Z_α(1) and f21 > f12.

(c) Power and Sample Size for the McNemar Test. The ability of the McNemar test to reject a null hypothesis, when the hypothesis is false, may be estimated by computing
$$Z_{\beta(1)} = \frac{\sqrt{np}\,(\psi - 1) - Z_\alpha\sqrt{\psi + 1}}{\sqrt{(\psi + 1) - p(\psi - 1)^2}} \tag{24.83}$$
(Connett, Smith, and McHugh, 1987). Here, n is the number of pairs to be used (i.e., n = f11 + f12 + f21 + f22); p is an estimate, as from a pilot study, of the proportion f12/n or f21/n, whichever is smaller; ψ is the magnitude of difference desired to be detected by the hypothesis test, expressed as the ratio in the population of either f12 to f21, or f21 to f12, whichever is larger; and Zα is Zα(2) or Zα(1), depending upon whether the test is two-tailed or one-tailed, respectively. Then, using Appendix Table B.2 or the last line in Appendix Table B.3 (i.e., for t with ν = ∞), determine β(1); and the estimated power of the test is 1 − β(1). This estimation procedure is demonstrated in Example 24.24.

Similarly, we can estimate the sample size necessary to perform a McNemar test with a specified power:
$$n = \frac{\left[Z_\alpha\sqrt{\psi + 1} + Z_{\beta(1)}\sqrt{(\psi + 1) - p(\psi - 1)^2}\right]^2}{p(\psi - 1)^2} \tag{24.84}$$
(Connett, Smith, and McHugh, 1987). This is demonstrated in Example 24.25.
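Both estimates are simple to compute directly, as in the following minimal sketch in Python; the function names are illustrative, and the normal deviates supplied as arguments would still be obtained from Appendix Table B.2 or B.3.

```python
from math import sqrt

def mcnemar_power_z(n, p, psi, z_alpha):
    """Z_beta(1) of Equation 24.83; beta (and power = 1 - beta) is then
    read from a table of the normal distribution."""
    numerator = sqrt(n * p) * (psi - 1) - z_alpha * sqrt(psi + 1)
    return numerator / sqrt((psi + 1) - p * (psi - 1) ** 2)

def mcnemar_sample_size(p, psi, z_alpha, z_beta):
    """Number of pairs, n, of Equation 24.84."""
    numerator = (z_alpha * sqrt(psi + 1)
                 + z_beta * sqrt((psi + 1) - p * (psi - 1) ** 2)) ** 2
    return numerator / (p * (psi - 1) ** 2)

print(mcnemar_power_z(200, 0.1176, 2, 1.9600))         # 0.86 (Example 24.24)
print(mcnemar_sample_size(0.1176, 2, 1.9600, 1.2816))  # 263.9 (Example 24.25)
```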
(d) Data in Larger Tables. The McNemar test may be extended to square tables larger than 2 × 2 (Bowker, 1948; Maxwell, 1970). What we test is whether the upper right corner of the table is symmetrical with the lower left corner. This is done by ignoring the data along the diagonal containing f11 (i.e., row 1, column 1; row 2, column 2; etc.). We compute
$$\chi^2 = \sum_{i=1}^{r}\sum_{j>i}\frac{(f_{ij} - f_{ji})^2}{f_{ij} + f_{ji}}, \tag{24.85}$$

where, as before, f_ij is the observed frequency in row i and column j, and the degrees of freedom are

$$\nu = \frac{r(r - 1)}{2}, \tag{24.86}$$
EXAMPLE 24.24    Determination of Power of the McNemar Test

Considering the data of Example 24.23 to be from a pilot study, what would be the probability of rejecting H0 if 200 pairs of data were used in a future study, if the test were performed at the 0.05 level of significance, and if the population ratio of f21 to f12 were at least 2?

From the pilot study, using n = 51 pairs of data, f12/n = 6/51 = 0.1176 and f21/n = 10/51 = 0.1961; so p = 0.1176. We specify α(2) = 0.05, so Z_0.05(2) = 1.9600 (from the last line of Appendix Table B.3). And we also specify a new sample size of n = 200 and ψ = 2. Therefore,

Z_β(1) = [√200 √0.1176 (2 − 1) − 1.9600 √(2 + 1)] / √[(2 + 1) − 0.1176(2 − 1)²]
       = [(14.1421)(0.3429)(1) − (1.9600)(1.7321)] / √2.8824
       = (4.8493 − 3.3949)/1.6978
       = 1.4544/1.6978
       = 0.86.

From Appendix Table B.2, if Z_β(1) = 0.86, then β(1) = 0.19; therefore, power [i.e., 1 − β(1)] is 1 − 0.19 = 0.81. From Appendix Table B.3, if Z_β(1) [i.e., t_β(1),∞] is 0.86, then β(1) lies between 0.25 and 0.10, and the power [i.e., 1 − β(1)] lies between 0.75 and 0.90. [β(1) = 0.19 and power = 0.81.]
[i.e., I -13(1)]isl - 0.19=0.81. From Appendix Table B.3, if Zf3( 1) [i.e., tf3( 1).ooJ is 0.86, then 13(1) lies between 0.25 and 0.10, and the power [i.e., 1 - 13(1)]lies between 0.75 and 0.90. [13( 1) = 0.19 and power = 0.81.] and where r is the number of rows (or, equivalently, the number of columns) in the table of data. This is demonstrated in Example 24.26. Note that Equation 24.85 involves the testing of a series of 1 : 1 ratios by what is essentially an expansion of Equation 24.79. Each of these 1 : 1 ratios derives from a unique pairing of the r categories taken two at a time. Recall (Equation 5.10) that the number of ways that r items can be combined two at a time is rC2 = r!/[2(r - 2)!]. So, in Example 24.26, where there are three categories, there are 3C2 = 3!/[2(3 - 2)!] = 3 pairings, resulting in three terms in the X2 summation. If there were four categories of religion, then the summation would involve 4C2 = 4!/[2(4 - 2)!] = 6 pairings, and 6 X2 terms; and so on. For data of this type in a 2 x 2 table, Equation 24.85 becomes Equation 24.79, and Equation 24.86 yields lJ = 1. (e) Testing for Effect of Treatment Order. If two treatments are applied sequentially to a group of subjects, we might ask whether the response to each treatment depended on the order in which the treatments were administered. For example, suppose we have two medications for the treatment of poison-ivy rash, but, instead of the situation in Example 24.23, they are to be administered orally rather than by external
EXAMPLE 24.25    Determination of Sample Size for the McNemar Test

Considering the data of Example 24.23 to be from a pilot study, how many pairs of data would be needed to have a 90% probability of rejecting the two-tailed H0 if a future test were performed at the 0.05 level of significance and the ratio of f21 to f12 in the population were at least 2?

As in Example 24.24, p = 0.1176 and Z_α(2) = 1.9600. In addition, we specify that ψ = 2 and that the power of the test is to be 0.90 [so β(1) = 0.10 and Z_β(1) = 1.2816]. Therefore, the required sample size is

n = [Z_α(2) √(ψ + 1) + Z_β(1) √((ψ + 1) − p(ψ − 1)²)]² / [p(ψ − 1)²]
  = [1.9600 √(2 + 1) + 1.2816 √((2 + 1) − (0.1176)(2 − 1)²)]² / [(0.1176)(2 − 1)²]
  = [(1.9600)(1.7321) + (1.2816)(1.6978)]² / 0.1176
  = (5.5708)² / 0.1176
  = 263.9.

Therefore, at least 264 pairs of data should be used.
application to the skin. Thus, in this example, both arms receive medication at the same time, but the oral medications must be given at different times. Gart (1969a) provides the following procedure to test for the difference in response between two sequentially applied treatments and to test whether the order of application had an effect on the response. The following 2 × 2 contingency table is used to test for a treatment effect:

                                Order of Application of Treatments A and B
                                 A, then B    B, then A    Total
Response with first treatment      f11           f12         R1
Response with second treatment     f21           f22         R2
Total                              C1            C2          n

By redefining the rows, the following 2 × 2 table may be used to test the null hypothesis of no difference in response due to order of treatment application:

                                Order of Application of Treatments A and B
                                 A, then B    B, then A    Total
Response with treatment A          f11           f12         R1
Response with treatment B          f21           f22         R2
Total                              C1            C2          n
EXAMPLE 24.26    McNemar's Test for a 3 × 3 Table of Nominal-Scale Data

H0: Of men who adopt a religion different from that of their fathers, a change from one religion to another is as likely as a change from the latter religion to the former.
HA: Of men who adopt a religion different from that of their fathers, a change from one religion to another is not as likely as a change from the latter religion to the former.

                        Man's Father's Religion
Man's Religion    Protestant   Catholic   Jewish
Protestant            173          20        7
Catholic               15          51        2
Jewish                  5           3       24

χ² = (f12 − f21)²/(f12 + f21) + (f13 − f31)²/(f13 + f31) + (f23 − f32)²/(f23 + f32)
   = (20 − 15)²/(20 + 15) + (7 − 5)²/(7 + 5) + (2 − 3)²/(2 + 3)
   = 0.7143 + 0.3333 + 0.2000
   = 1.248

ν = r(r − 1)/2 = 3(2)/2 = 3
χ²_0.05,3 = 7.815
Do not reject H0.
0.50 < P < 0.75   [P = 0.74]
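A minimal sketch, in Python, of Equations 24.85 and 24.86, reproducing the computation of Example 24.26 (the function name is illustrative):

```python
def bowker_symmetry(table):
    """Chi-square (Equation 24.85) and degrees of freedom (Equation 24.86)
    for the McNemar test extended to a square r x r table."""
    r = len(table)
    chi2 = sum((table[i][j] - table[j][i]) ** 2 / (table[i][j] + table[j][i])
               for i in range(r) for j in range(i + 1, r))
    return chi2, r * (r - 1) // 2

# The religion data of Example 24.26: chi-square = 1.248 with nu = 3.
print(bowker_symmetry([[173, 20, 7],
                       [15, 51, 2],
                       [5, 3, 24]]))
```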
These two contingency tables have one fixed margin (the column totals are fixed; Section 23.3b), and they may be tested by chi-square, which is shown in Example 24.27. One-tailed hypotheses may be tested as described in Section 23.3e.

EXAMPLE 24.27    Gart's Test for Effect of Treatment and Treatment Order

H0: The two oral medications have the same effect on relieving poison-ivy rash.
HA: The two oral medications do not have the same effect on relieving poison-ivy rash.
                                Order of Application of Medications A and B
                                 A, then B    B, then A    Total
Response with 1st medication        14            6          20
Response with 2nd medication         4           12          16
Total                               18           18          36

Using Equation 23.6,

χ² = n(f11 f22 − f12 f21)² / (R1 R2 C1 C2)
   = 36[(14)(12) − (6)(4)]² / [(20)(16)(18)(18)]
   = 7.200.

χ²_0.05,1 = 3.841; reject H0.

That is, it is concluded that there is a difference in response to the two medications, regardless of the order in which they are administered.
0.005 < P < 0.01   [P = 0.0073]

H0: The order of administration of the two oral medications does not affect their abilities to relieve poison-ivy rash.
HA: The order of administration of the two oral medications does affect their abilities to relieve poison-ivy rash.

                                Order of Application of Medications A and B
                                 A, then B    B, then A    Total
Response with medication A          14           12          26
Response with medication B           4            6          10
Total                               18           18          36

χ² = 36[(14)(6) − (12)(4)]² / [(26)(10)(18)(18)]
   = 0.554

χ²_0.05,1 = 3.841; do not reject H0.

That is, it is concluded that the effects of the two medications are not affected by the order in which they are administered.
0.25 < P < 0.50   [P = 0.46]
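Both chi-square values in Example 24.27 come from the same 2 × 2 statistic (Equation 23.6), so one small function suffices; the following is a minimal sketch in Python with an illustrative function name.

```python
def chi_square_2x2(f11, f12, f21, f22):
    """Chi-square of Equation 23.6 for a 2 x 2 contingency table."""
    n = f11 + f12 + f21 + f22
    R1, R2 = f11 + f12, f21 + f22    # row totals
    C1, C2 = f11 + f21, f12 + f22    # column totals
    return n * (f11 * f22 - f12 * f21) ** 2 / (R1 * R2 * C1 * C2)

print(chi_square_2x2(14, 6, 4, 12))   # 7.200: difference between medications
print(chi_square_2x2(14, 12, 4, 6))   # 0.554: no effect of treatment order
```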
24.18 LOGISTIC REGRESSION

Previous discussions of regression (Chapters 17, 20, and 21) considered data measured on a continuous (i.e., ratio or interval) scale, where there are measurements of a dependent variable (Y) associated with measurements of one or more independent variables (X's). However, there are situations (commonly involving clinical, epidemiological, or sociological data) where the dependent variable is measured on a nominal scale; that is, where the data are in two or more categories. For example, a sample of men could be examined for the presence of arterial plaque, and this information is recorded together with the age of each man. The mean age of men with plaque and the mean of those without plaque could be compared via a two-sample t test (Section 8.1). But the data can be analyzed in a different fashion, deriving a quantitative expression of the relationship between the presence of plaque and the age of the subject and allowing for the prediction of the probability of plaque at a specified age.

Regression data with Y recorded on a dichotomous scale do not meet the assumptions of the previously introduced regression methods, assumptions such as that the Y's and residuals (ε's; Section 20.2) have come from a normal distribution at each value of X and have the same variance at all values of X. So another statistical procedure must be sought. The most frequently employed analysis of such data is logistic regression.* A brief introduction to logistic regression is given here, employing terminology largely analogous to that in Chapters 17 and 20.† Because of the intense calculations required, users of this regression technique will depend upon computer programs, which are found in several statistical computer packages, and will typically benefit from consultation with a statistician familiar with the procedures.

(a) Simple Logistic Regression. The simplest, and most common, logistic regression situation is where the categorical data (Y) are binomial, also known as dichotomous (i.e., the data consist of each observation recorded as belonging in one of two categories). Each value of Y is routinely recorded as "1" or "0" and might, for example, refer to a characteristic as being "present" (1) or "absent" (0), or to subjects being "with" (1) or "without" (0) a disease.‡

Logistic regression considers the probability (p) of encountering a Y of 1 at a given X in the population that was sampled. So, for example, p could be the probability of

*A similar procedure, but one that is less often preferred, is known as discriminant analysis. What statisticians refer to as the general linear model underlies several statistical techniques, including analysis of variance, analysis of covariance, multivariate analysis of variance, linear regression, logistic regression, and discriminant analysis.
†Logistic regression is a wide-ranging topic and is covered in portions of comprehensive books on regression (e.g., Chatterjee and Hadi, 2006: Chapter 12; Glantz and Slinker, 2001: Chapter 12; Hair et al., 2006: Chapter 5; Kutner, Nachtsheim, and Neter, 2004: Chapter 14; Meyers, Gamst, and Guarino, 2006: Chapters 6A and 6B; Montgomery, Peck, and Vining, 2001: Section 14.2; Pedhazur, 1997: Chapter 17; Vittinghoff et al., 2005: Chapter 6); in books on the analysis of categorical data (e.g., Agresti, 2002: Chapters 5 and 6; Agresti, 2007: Chapters 4 and 5; Fleiss, Levin, and Paik, 2003: Chapter 11); and in works that concentrate specifically on logistic regression (e.g., Garson, 2006; Hosmer and Lemeshow, 2000; Kleinbaum and Klein, 2002; Menard, 2002; Pampel, 2000).
‡Designating observations of Y as "0" or "1" is thus an example of using a "dummy variable," first described in Section 20.10. Any two integers could be used, but 0 and 1 are almost always employed, and this results in the mean of the Y's being the probability of Y = 1.
encountering a member of the population in which a specified characteristic is present, or the proportion that has a specified disease. The logistic regression relationship in a population is

$$p = \frac{e^{\alpha + \beta X}}{1 + e^{\alpha + \beta X}}, \tag{24.87}$$

where e is the mathematical constant introduced in Chapter 6. This equation may also be written, equivalently, using the abbreviation "exp" for an exponent on e:

$$p = \frac{\exp(\alpha + \beta X)}{1 + \exp(\alpha + \beta X)}, \tag{24.87a}$$

or, equivalently, as

$$p = \frac{1}{1 + e^{-(\alpha + \beta X)}} \quad\text{or}\quad p = \frac{1}{1 + \exp[-(\alpha + \beta X)]}. \tag{24.87b}$$

The parameter α is often seen written as β0. The sample regression equations corresponding to these expressions of population regression are, respectively,

$$\hat{p} = \frac{e^{a + bX}}{1 + e^{a + bX}}, \tag{24.88}$$

$$\hat{p} = \frac{\exp(a + bX)}{1 + \exp(a + bX)}, \tag{24.88a}$$

$$\hat{p} = \frac{1}{1 + e^{-(a + bX)}} \quad\text{or}\quad \hat{p} = \frac{1}{1 + \exp[-(a + bX)]}, \tag{24.88b}$$
and a is often written as b0.

Logistic regression employs the concept of the odds of an event (mentioned briefly in Section 5.5), namely the probability of the event occurring expressed relative to the probability of the event not occurring. Using the designations p = P(Y = 1) and q = 1 − p = P(Y = 0), the odds can be expressed by these four equivalent statements:

$$\frac{P(Y=1)}{1 - P(Y=1)} \quad\text{or}\quad \frac{P(Y=1)}{P(Y=0)} \quad\text{or}\quad \frac{p}{1-p} \quad\text{or}\quad \frac{p}{q}, \tag{24.89}$$
and the fourth will be employed in the following discussion. A probability, p, must lie within a limited range (0 ≤ p ≤ 1).* However, odds, p/q, have no upper limit. For example, if p = 0.1, odds = 0.1/0.9 = 0.11; if p = 0.5, odds = 0.5/0.5 = 1; if p = 0.9, odds = 0.9/0.1 = 9; if p = 0.97, odds = 0.97/0.03 = 32.3; and so on. Expanding the odds terminology, if, for example, a population contains 60% females and 40% males, it is said that the odds are 0.60/0.40 = "6 to 4" or "1.5 to 1" in favor of randomly selecting a female from the population, or 1.5 to 1 against selecting a male.

In order to obtain a linear model using the regression terms α and βX, statisticians utilize the natural logarithm of the odds, a quantity known as a logit; this is sometimes

*If a linear regression were performed on p versus X, predicted values of p could be less than 0 or greater than 1, an untenable outcome. This is another reason why logistic regression should be used when dealing with a categorical dependent variable.
referred to as a "logit transformation" of the dependent variable:

$$\text{logit} = \ln(\text{odds}) = \ln\!\left(\frac{p}{q}\right). \tag{24.90}$$
For 0 < odds < 1.0, the logit is a negative number (and it becomes farther from 0 the closer the odds are to 0); for odds = 1.0, the logit is 0; and for odds > 1.0, the logit is a positive number (and it becomes farther from 0 the larger the odds). Using the logit transformation, the population linear regression equation and sample regression equation are, respectively,

$$\text{logit for } p = \alpha + \beta X \tag{24.91}$$

and

$$\text{logit for } \hat{p} = a + bX. \tag{24.92}$$
This linear relationship is shown in Figure 24.2b. Determining a and b for Equation 24.92 is performed by an iterative process termed maximum likelihood estimation, instead of by the least-squares procedure used in the linear regressions of previous chapters. The term maximum likelihood refers to arriving at the a and b that are most likely to estimate the population parameters underlying the observed sample of data. As with the regression procedures previously discussed, the results of a logistic regression analysis will include a and b as estimates of the population parameters α and β, respectively. Computer output will routinely present these, with the standard error of b (namely, s_b), confidence limits for β, and a test of H0: β = 0. In logistic regression there are no measures corresponding to the coefficient of determination (r²) in linear regression analysis (Section 17.3), but some authors have suggested statistics to express similar concepts.

For a logistic relationship, a plot of p versus X will display an S-shaped curve rising from the lower left to the upper right, such as that in Figure 24.2a. In such a graph, p is near zero for very small values of X, it increases gradually as X increases, then it increases much more rapidly with further increase in X, and then it increases at a slow rate for larger X's, gradually approaching 1.0. Figure 24.2a is a graph for a logistic equation with α = −2.0 and β = 1.0. The graph would shift to the left by 1 unit of X for each increase of α by 1, and the rise in p would be steeper with a larger β. If β were negative instead of positive, the curve would be a reverse S shape, rising from the lower right to the upper left of the graph, instead of from the lower left to the upper right.*

For a 1-unit increase in X, the odds increase by a factor of e^β. If, for example, β = 1.0, then the odds for X = 4 would be e^1.0 (namely, 2.72) times the odds for X = 3. And a 1-unit increase in X will result in an increase of β in the logit of p. So, if β = 1.0, then the logit for X = 4 would be 1.0 larger than the logit for X = 3. If β is negative, then the odds and logit decrease instead of increase.

Once the logistic-regression relationship is determined for a sample of data, it may be used to predict the probability of Y = 1 at a given X. Equation 24.92 yields a logit of p̂ for the specified X; then

$$\text{odds} = e^{\text{logit}} \tag{24.93}$$
*Another representation of binary Y data in an S-shaped relationship to X is that of probits, based upon a cumulative normal distribution. The use of logits is simpler and is preferred by many.
FIGURE 24.2: Logistic regression, where α = −2.0 and β = 1.0. (a) The relationship of p = 1/[1 + e^−(α+βX)] to X. (b) The relationship of the logit of p to X.
and

$$\hat{p} = \frac{\text{odds}}{1 + \text{odds}}. \tag{24.94}$$

If n is large, the hypothesis H0: β = 0 may be tested using

$$Z = \frac{b}{s_b}, \tag{24.95}$$
which is analogous to the linear-regression test of that hypothesis using t (Section 17.3b). This is known as the Wald test*; sometimes, with exactly the same result, the test statistic used is χ² = Z², with 1 degree of freedom. In further analogy with linear regression, confidence limits for β are obtainable as

$$b \pm Z_{\alpha(2)}\,s_b. \tag{24.96}$$

Assessing H0: β = 0 and expressing a confidence interval for β may also be done using a computer-intensive process known as likelihood-ratio, or log-likelihood, testing. It has been noted (e.g., by Hauck and Donner, 1977; Menard, 2002: 43; Pampel, 2000: 30) that when b is large, its standard error (s_b) is inflated, thereby increasing the probability of a Type II error in a Wald test (and thus decreasing the power of the test). Therefore, it is sometimes recommended that the likelihood-ratio test be used routinely in preference to the commonly encountered Wald test. For the same reason, likelihood-ratio confidence limits can be recommended over those obtained by Equation 24.96. The interpretation of logistic-regression coefficients is discussed more fully in the references cited in the second footnote in Section 24.18. In recommending an adequate number of data for logistic-regression analysis, some authors have suggested that n be large enough that there are at least 10 observations of Y = 1 and at least 10 of Y = 0.

*Named for Hungarian-born, Vienna-educated, American mathematician and econometrician Abraham Wald (1902-1950).
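For readers who wish to see the maximum-likelihood iteration in miniature, the following is a bare-bones sketch in Python (using numpy) of Newton-Raphson fitting for Equation 24.92, together with the Wald statistic of Equation 24.95. The function name is illustrative, the data are simulated with α = −2.0 and β = 1.0 as in Figure 24.2, and no convergence checks or safeguards are included; serious use should rely on the statistical packages mentioned above.

```python
import numpy as np

def logistic_fit(x, y, iterations=25):
    """Return the estimates (a, b) and their standard errors for the
    model logit(p) = a + bX, by Newton-Raphson maximum likelihood."""
    X = np.column_stack([np.ones_like(x), x])     # design matrix
    beta = np.zeros(2)
    for _ in range(iterations):
        p = 1 / (1 + np.exp(-X @ beta))           # Equation 24.88b
        W = p * (1 - p)                           # binomial variances
        info = X.T @ (X * W[:, None])             # information matrix
        beta = beta + np.linalg.solve(info, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(info)))    # standard errors
    return beta, se

rng = np.random.default_rng(1)
x = rng.uniform(-4, 8, 200)
p_true = 1 / (1 + np.exp(-(-2.0 + 1.0 * x)))
y = (rng.uniform(size=200) < p_true).astype(float)
(a, b), (se_a, se_b) = logistic_fit(x, y)
print(a, b, b / se_b)    # b / s_b is the Wald Z of Equation 24.95
```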
(b) Multiple Logistic Regression. Just as the concepts and procedures of linear regression with one independent variable, X (Chapter 17), can be expanded into those of linear regression with more than one X (Chapter 20), the basic ideas of logistic regression with one X (Section 24.18a) can be enlarged to those of logistic regression with more than one X. Expanding Equations 24.87-24.87b to equations for multiple logistic regression with m independent variables, α + βX is replaced with α + Σβ_iX_i, which is α + β_1X_1 + β_2X_2 + ··· + β_mX_m; and, in Equations 24.88-24.88b, a + bX is replaced with a + Σb_iX_i, which is a + b_1X_1 + b_2X_2 + ··· + b_mX_m. Analogous to multiple linear regression, β_i expresses the change in the logit of p for a 1-unit change in X_i, with the effects of the other X's held constant. Further interpretation of logistic partial-regression coefficients is discussed in the references cited in the second footnote of Section 24.18.

The statistical significance of the overall multiple logistic regression model is tested via H0: β_1 = β_2 = ··· = β_m = 0, which is analogous to the analysis-of-variance testing in multiple linear regression (Section 20.3). The significance of each partial-regression coefficient is tested via H0: β_i = 0. As with multiple linear regression, H0: β_1 = β_2 = ··· = β_m = 0 can be rejected while none of the significance tests of individual partial-regression coefficients result in rejection of H0: β_i = 0 (especially when the conservative Wald test is used).

The standardized partial-regression coefficients of multiple linear regression (Section 20.5) cannot be computed for logistic regression, but some authors have proposed similar coefficients. Also, there is no perfect logistic-regression analog to the coefficients of determination (R² and R²_a in Section 20.3). Several measures have been suggested by various authors to express the concept of such a coefficient; sometimes they are referred to as "pseudo-R²" values (and some do not have 1.0 as their maximum).

Just as with multiple linear-regression analysis, multiple logistic regression is adversely affected by multicollinearity (see Section 20.4a). Logistic analysis does not work well with small numbers of data, and some authors recommend that the sample be large enough that there are at least 10m 0's and at least 10m 1's for the dependent variable. And, as is the case with multiple linear regression, multiple logistic regression is adversely affected by outliers.
(c) Other Models of Logistic Regression. Though not commonly encountered, the dependent variable can be one that has more than two categories. This is known as a polytomous (or "polychotomous" or "multinomial") variable. Also, the dependent variable can be one that is measured on an ordinal scale but recorded in nominal-scale categories. For example, subjects could be classified as "underweight," "normal weight," and "overweight."

Logistic regression may also be performed when an independent variable (X) is recorded on a dichotomous nominal scale. For example, the dependent variable could be recorded as hair loss (Y = 1) or no hair loss (Y = 0) in men, with the independent variable (X) being exposed (X = 1) or not exposed (X = 0) to a certain drug or radiation treatment. Such data can be subjected to a 2 × 2 contingency-table analysis (Section 23.3), where the null hypothesis is that the proportion (p) of men with loss of hair is the same in the treated and nontreated groups. Using logistic regression, however, it is asked whether hair loss is statistically dependent upon receiving the treatment, and the relationship between p and whether treatment was applied is quantified.

In multiple logistic regression, one or more of the X's may be recorded on a dichotomous scale. So, for example, Y could be recorded as hair loss or no hair loss, X1 could
be age (measured on a continuous scale), and X2 could be sex (a dichotomous variable: male or female). Indeed, using dummy variables (introduced in Section 20.10), X's can be recorded on a nominal scale with more than two categories. If all m of the independent variables are nominal and Y is dichotomous, the data could be arranged in a 2 × m contingency table, but different hypotheses would be tested thereby.

It is also possible to perform stepwise multiple logistic regression (analogous to the procedures of Section 20.6) in order to determine which of the X's should be in the final regression model, logistic regression with polynomial terms (analogous to Chapter 21), and logistic regression with interaction of independent variables (analogous to Section 20.11).

EXERCISES

24.1. If, in a binomial population, p = 0.3 and n = 6, what proportion of the population does X = 2 represent?
24.2. If, in a binomial population, p = 0.22 and n = 5, what is the probability of X = 4?
24.3. Determine whether the following data, where n = 4, are likely to have come from a binomial population with p = 0.25:
X    f
0   30
1   51
2   33
3   10
4    2
24.4. Determine whether the following data, where n = 4, are likely to have come from a binomial population:
X    f
0   20
1   41
2   33
3   11
4    4
24.5. A randomly selected male mouse of a certain species was placed in a cage with a randomly selected male mouse of a second species, and it was recorded which animal exhibited dominance over the other. The experimental procedure was performed, with different pairs of animals, a total of twenty times, with individuals from species 1 being dominant six times and those from species 2 being dominant fourteen times. Test the null hypothesis that there is no difference in the ability of members of either species to dominate.
24.6. A hospital treated 412 skin cancer patients over a period of time. Of these, 197 were female. Using the normal approximation to the binomial test, test the hypothesis that equal numbers of males and females seek treatment for skin cancer.
24.7. Test the null hypothesis of Exercise 22.3, using the normal approximation to the binomial test.
24.8. Ten students were given a mathematics aptitude test in a quiet room. The same students were given a similar test in a room with background music. Their performances were as follows. Using the sign test, test the hypothesis that the music has no effect on test performance.
Student    Score without music    Score with music
   1              114                   112
   2              121                   122
   3              136                   141
   4              102                   107
   5               99                    96
   6              114                   109
   7              127                   121
   8              150                   146
   9              129                   127
  10              130                   128
24.9. Estimate the power of the hypothesis test of Exercise 24.5 if α = 0.05.
24.10. Using the normal approximation, estimate the power of the hypothesis test of Exercise 24.6 if α = 0.05.
24.11. In a random sample of 30 boys, 18 have curly hair. Determine the 95% confidence limits for the proportion of curly-haired individuals in the population of boys that was sampled.
(a) Determine the Clopper-Pearson interval. (b) Determine the Wald interval. (c) Determine the adjusted Wald interval.
24.12. In a random sample of 1215 animals, 62 exhibited a certain genetic defect. Determine the 95% confidence interval for the proportion of the population displaying this defect. (a) Determine the Clopper-Pearson interval. (b) Determine the Wald interval. (c) Determine the adjusted Wald interval.
24.13. From this sample of 14 measurements, determine the 90% confidence limits for the population median running speed of monkeys: 28.3, 29.1, 29.5, 30.1, 30.2, 31.4, 32.2, 32.8, 33.1, 33.2, 33.6, 34.5, 34.7, and 34.8 km/hr.
24.14. Using the data of Exercise 23.3, test H0: p1 = p2 versus HA: p1 ≠ p2.
24.15. Using the data of Example 23.3, determine the 95% adjusted Wald confidence limits for p1 − p2.
24.16. Using the data of Exercise 23.1, test the null hypothesis that there is the same proportion of males in all four seasons.
24.17. If the null hypothesis in Exercise 24.16 is rejected, perform a Tukey-type multiple-comparison test to conclude which population proportions are different from which. (Use Equation 13.7 for the standard error.)
24.18. A new type of heart valve has been developed and is implanted in 63 dogs that have been raised on various levels of exercise. The numbers of valve transplants that succeed are tabulated as follows. (a) Is the proportion of successful implants the same for dogs on all exercise regimens? (b) Is there a trend with amount of exercise in the proportion of successful implants?
None Slight Moderate Vigorous Total 8 7
9 3
17 3
14 2
48 15
15
12
20
16
63
, In investigating the cold tolerance of adults of a species of tropical butterfly, 68 of the butterflies (32 females and 36 males) were subjected to a cold temperature until half of the 68 had died. Twenty of the females survived, as did 14 of the males, with the data tabulated as follows:
583
Females Males 34 Alive 20 14 Dead r---,-;12..----------,;2"'2------1 34 68 32 36 Prior to performing the experiment and collecting the data, it was stated that Ho: females are as likely as males to survive the experimental temperature, and HA: Females and males are not equally likely to survive. (a) Use the Fisher exact test for the one-tailed hypotheses. (b) Use chi-square with the Yates correction for continuity (Sections 23.3c and 23.3d) for the two-tailed hypotheses. (c) Use chi-square with the Cochran-Haber correction for continuity (Sections 23.3c and 23.3d) for the two-tailed hypotheses. (d) Use the Fisher exact test for the two-tailed hypotheses. 24.20. Thirteen snakes of species Sand 17 of species E were placed in an enclosure containing 14 mice of species M and 16 of species U. Each of the 30 snakes ate one of the 30 mice, and the following results were recorded: Snakes S Snakes E 11 14 MiceM 3 16 Mice U 10 6 17 13 30 Prior to performing the experiment and collecting the data, it was decided whether the interest was in a one-tailed or two-tailed test. The one-tailed test hypotheses would be Ho: Under the conditions of this experiment, snakes of species E are not more likely than species S to eat mice of species M (i.e., they are less likely or equally likely to do so), and HA: Snakes of species E are more likely to eat mice of species M. The two-tailed hypotheses would be Ho: Under the conditions of this experiment, snakes of species S and species E are equally likely to cat mice of species M, and H A: Snakes of species S and species E are not equally likely to eat mice of species M. (a) Use the Fisher exact test for the one-tailed hypotheses. (b) Use chi-square with the Yates correction for continuity (Sections 23.3c and 23.3d) for the two-tailed hypotheses. (c) Use chi-square with the Cochran-Haber correction for continuity. (d) Use the Fisher exact test for the two-tailed hypotheses.
584
Chapter 24
Dichotomous Variables
24.21. One hundred twenty-two pairs of brothers, one member of each pair overweight and the other of normal weight, were examined for presence of varicose veins. Use the McNemar test for the data below to test the hypothesis that there is no relationship between being overweight and developing varicose veins (i.e., that the same proportion of overweight men as normal weight men possess
varicose veins). In the following data tabula' "v.v." stands for "varicose veins." Normal Weight With V.v. Without v. v.
Overweight With v. v. Without v.v.
19
5
12
86
'I
n
= 1~
e
C HAP
T E R
25
Testing for Randomness 25.1 POISSON PROBABILITIES 25.2 CONFIDENCE LIMITS FOR THE POISSON PARAMETER 25.3 GOODNESS OF FIT FOR THE POISSON DISTRIBUTION 25.4 THE POISSON DISTRIBUTION FOR THE BINOMIAL TEST 25.5 COMPARING TWO POISSON COUNTS 25.6 SERIAL RANDOMNESS OF NOMINAL-SCALE
CATEGORIES
25.7 SERIAL RANDOMNESS
OF MEASUREMENTS:
PARAMETRIC
25.8 SERIAL RANDOMNESS
OF MEASUREMENTS:
NONPARAMETRIC
TESTING TESTING
A random distribution of objects in space is one in which each one of equal portions of the space has the same probability of containing an object, and the occurrence of an object in no way influences the occurrence of any of the other objects. A biological example in one-dimensional space could be the linear distribution of blackbirds along the top of a fence, an example in two-dimensional space could be the distribution of cherry trees in a forest, and an example in three-dimensional space could be the distribution of unicellular algae in water." A random distribution of events in time is one in which each period of time of given length (e.g., an hour or a day) has an equal chance of containing an event, and the occurrence of anyone of the events is independent of the occurrence of any of the other events. An example of events in periods of time could be the numbers of heart-attack patients entering a hospital each day. 25.1 POISSON PROBABILITIES
The Poisson distribution: is important in describing random occurrences when the probability of an occurrence is small. The terms of the Poisson distribution are e-l1-/Lx
P(X)
=
Xl
(25.1a)
or, equivalently, P(X) '"An extensive Finglcton (ILJ~5).
coverage
of the description
=
/LX
el1-X!
and analysis of spatial pattern
(25.1b) is given by Upton and
iAlso known as Poisson's law and named for Simeon Denis Poisson (17Hl-IH40), a French mathematician, astronomer. and physicist (Feron, ILJ7X). He is often credited with the first report of this distribution in a I X37 publication. However. Dale (ILJXLJ) reported that it appeared earlier in an I X30 memoir of an I X2LJpresentation by Poisson. and Abraham de Moivre (1667 -1754) apparently described it in 171 X (David. 1%2: 16X; Stigler. ILJX2). It was also descrihed independently by others. including "Student" (W. S. Gosset. ILJ7o-ILJ37) during I LJ06-ILJ()LJ (Boland, 2000; Haight, 1%7: 117). Poisson's name might have first been attached to this distribution. in contrast to his being merely cited. by H.E. Soper in ILJ14 (David. ILJLJ5).
585
586
Chapter 25
Testing for Randomness
where P( X) is the probability of X occurrences in a unit of space (or time) and IL is the population mean number of occurrences in the unit of space (or time). Thus,
= «», = e-!1-J-L,
P(O) P( 1)
=
P( 2)
(25.2) (25.3) i 2
-!1-
2J-L ,
(25.4)
e:» J-L3 P( 3) - -------'- (3)(2)'
(25.5)
e
e-!1-J-L4
P( 4) - -----'--(4)(3)(2)'
(25.6)
and so on, where P( 0) is the probability of no occurrences in the unit space, P( 1) is the probability of exactly one occurrence in the unit space, and so on. Figure 25.1 presents some Poisson probabilities graphically. In calculating a series of Poisson probabilities, as represented by the preceding five equations, a simple computational expedient is available: P(X)
=
P(X
X
1)J-L.
(25.7)
Example 25.1 demonstrates these calculations for predicting how many plants will have no beetles, how many will have one beetle, how many will have two beetles, and so on, if 80 beetles are distributed randomly among 50 plants.
0.40 0.35
P(X)
0.30
0.30
0.25
0.25
0.20
0.20
JL
= 2.0
JL
= 4.0
P(X)
0.15
0.15
0.10
0.10
0.05
0.05
0
0
0
>4
3
0
X
0.25
JL
0.25
= 3.0
0.20 P(X)
0.20
0.15
P(X)
0.15
0.10
0.10
0.05
0.05
0
0
1
2
3
4
5
6
X FIGURE 25.1: The Poisson distribution Equation 25.1.
7
>7
0
0
1
2
3
4
5
6
7
8 9 >9
X for various values of !1-. These graphs were prepared
by using
Section 25.2
Confidence Limits for the Poisson Parameter
587
Frequencies from a Poisson Distribution
EXAMPLE 25.1
There are 50 plants in a greenhouse. If ~O leaf-eating beetles are introduced and they randomly land on the plants. what are the expected numbers of beetles per plant? n
=
j.L
=
50 HO beetles 50 plants
=
1.6 beetles/plant
Using Equation 25.2. P(O) = e-IL = e-I.n = 0.20190; thus. 20.190'Yo of the plants-that is. (0.20190)(50) = 10.10 (ahout 1O)-are expected to have no beetles. The prohahilities of plants with Xi = 1,2,3 .... beetles are as follows. using Equation 25.7 (although Equation 25.1a or 25.lb could be used instead): Number of beetles X 0 I 2 ,., .)
4 5 :::;5 2:6
Estimated number of plants Poisson probability P(X)
A
0.20190 (0.20190)(1.6)/1 (0.32304) ( 1.6 )/2 (0.25843)( 1.6)/3 (0.13783)( 1.6)/4 (0,(15513 )( 1.6 )/5
f
f = [P(X)][n]
= 0.32304 = 0.25843 = 0.13783
= 0.05513 = 0.0\ 764
0.99397 1.00000 - 0.99397 = 0.00603
(0.20190)(50) (0.32304)(50) (0.25843)(50) (0.137~3 ) (50) (0.05513)(50) (0.01764) (50)
= = = = = =
rounded
10.10 16.15 12.92 6.89 2.76 0.88
10 16 13 7 3
49.70 (0.00603) (50) = 0.30
50 0
50.00
50
I
The Poisson distrihution is appropriate when there is a small probahility of a single event. as reflected in a small u; and this distrihution is very similar to the hinomial distribution where n is large and p is small. For example, Table 25.1 compares the Poisson distribution where j.L = 1 with the hinomial distribution where n = 100 and p = O.tll (and. therefore. j.L = np = I). Thus. the Poisson distribution has importance in describing binomially distributed events having low probability. Another interesting property of the Poisson distribution is that 2 = u; that is. the variance and the mean are equal. (T
CONFIDENCE LIMITS FOR THE POISSON PARAMETER
Confidence limits for the Poisson distrihution follows. The lower I - a confidence limit is
parameter.
u; may be obtained as
2 X( l-a/2). u LI = ----'--------'---2
where
1I =
(25.8)
2X; and the upper I - a confidence limit is X2 - trf2. L2---. 2
v
(25.9)
588
Chapter 25
Testing for Randomness Where f.L = 1 Compared with the Binomial Distribution Where n = 100 and p = 0.01 (i.e., with f.L = 1) and the Binomial Distribution Where n = 10 and p = 0.1 (i.e., with f.L = 1)
TABLE 25.1: The Poisson Distribution
X
P(X) for Poisson: f.L = I
P(X) for binomial: n = 100, p = 0.01
P(X) for binomial: n = 10, p = 0.1
0 1 2 3 4 5 6 7 >7
0.36788 0.36788 0.18394 0.06131 0.01533 0.00307 0.00050 0.00007 O.OOOO!
0.36603 0.36973 0.18486 0.06100 0.01494 0.00290 0.00046 0.00006 0.00002
0.34868 0.38742 0.19371 0.05740 0.01116 0.00149 0.00014 0.00001 0.00000
Total
1.00000
1.00000
1.00001
where v = 2(X + 1) (Pearson and Hartley, 1966: 81). This is demonstrated in Example 25.2. L] and L2 are the confidence limits for the population mean and for the population variance. Confidence limits for the population standard deviation, (T, are simply the square roots of L] and L2. The confidence limits, L) and L2 (or their square roots), are not symmetrical around the parameter to which they refer. This procedure is a fairly good approximation. If confidence limits are desired to be accurate to more decimal places than given by the available critical values of we may engage in the more tedious process of examining the tails of the Poisson distribution (e.g., see Example 25.3) to determine the value of X that cuts off al2 of each tail. Baker (2002) and Schwertman and Martinez (1994) discuss several approximations to L] and L2, but the best of them require more computational effort than do the exact limits given previously, and they are all poor estimates of those limits.
i,
EXAMPLE 25.2
Confidence
Limits for the Poisson Parameter
An oak leaf contains four galls. Assuming that there is a random occurrence of galls on oak leaves in the population, estimate with 95% confidence the mean number of galls per leaf in the population. The population mean,
fL,
is estimated as X = 4 galls/leaf.
The 95% confidence limits for
fL
are
2
X(1-a/2).v
L]
where
2
L] =
X2 0.975,8
2
2.180
= --
= 2X = 2( 4) = 8
= 1.1 galls/leaf
2
L2
=
X2 a~2' v ,
where
L2
=
X2 0.025. ]0
= --
2
u
20.483
2
v = 2( X
+ 1) = 2( 4 + 1) = 10
= 10.2 galls/leaf
Section 25.3
Goodness of Fit for the Poisson Distribution
589
Therefore, we can state P(l.1
galls/leaf' es
J.L ::;
10.2 galls/leaf)
2::
0.95
10.2 galls/leaf)
2::
0.95;
and P( 1.1 galls/leaf rs
(T2
::;
and, using the square roots of L] and L2, P( l.0 galls/leaf es
(T
::;
3.2 galls/leaf)
2::
0.95.
GOODNESS OF FIT FOR THE POISSON DISTRIBUTION
The goodness of fit of a set of data to the Poisson distribution is a test of the null hypothesis that the data are distributed randomly within the space that was sampled. This may be tested by chi-square (Section 22.3), as was done with the binomial distribution in Section 34.4. When tabulating the observed frequencies (Ji) and the expected freq~encies (f;), the frequencies in the tails of the distribution should be pooled so no fi is less than l.0 (Cochran, 1954). The degrees of freedom are k - 2 (where k is the number of categories of X remaining after such pooling). Example 25.3 fits a set of data to a Poisson distribution, using the sample mean, X, as an estimate of J.L in Equations 25.2 and 25.7. The G statistic (Section 22.7) may be used for goodness-of-fit analysis instead of chi-square. It will give equivalent results when nj k is large; if nj k is very small, G is preferable to X2 (Rao and Chakravarti, 1956). If fL were known for the particular population sampled, or if it were desired to assume a certain value of u, then the parameter would not have to be estimated by X, and the degrees of freedom for X2 for G goodness-of-fit testing would be k - l. For example, if the 50 plants in Example 25.1 were considered the only plants of interest and 80 beetles were distributed among them, fL would be 80/50 = 1.6. Then the observed number of beetles Pc!' plant could be counted and those numbers compared to the expected frequencies (f) determined in Example 25.1 for a random distribution. It is only when this parameter is specified that the Kolmogorov-Smirnov goodness-of-fit procedure (Section 22.8) may be applied (Massey, 1951).
EXAMPLE 25.3
Fitting the Poisson Distribution
Thirty plots of ground were examined within an abandoned golf course, each plot being the same size; a total of74 weeds were counted in the 30 plots. The frequency f is the number of plots found to contain X weeds; P( X) is the probability of X weeds in a plot if the distribution of weeds is random within the golf course.
Ho: The weeds are distributed randomly. HA: The weeds are not distributed randomly. n
= 30
X =
74 weeds = 2.47 weeds/plot 30 plots
P(O) = e-x = e-2.47
= 0.08458
590
Chapter 25
Testing for Randomness
X
f
fX
0 1 2 3 4 5 6
2 1 13 10 3 1 0
0 1 26 30 12 5 0
30
74
P(X)
f
= = = = = =
(0.08458)(2.47)/1 (0.20891 )(2.47)/2 (0.25800)(2.47)/3 (0.21242)(2.47)/4 (0.13116)(2.47)/5 (0.06479)(2.47)/6
= [P(X)][n]
0.08458 0.20891 0.25800 0.21242 0.13116 0.06479 0.02667
2.537 6.267 7.740 6.373 3.935 1.944 0.800
0.98653
29.596
1.9,
A
A
The last f calculated (for X 6) is less than so the calculation of f's proceeds no further. The sum ~f the seven calculated f's is 25.596, so P( X > 6) = 30 ;: 25.596 = 0.404 and the f's of 0.800 and 0.404 are summed to 1.204 to obtain anf that is no smaller than 1.0.* Then the chi-square goodness of fit would proceed as in Section 22.3:
2 _
X
-
X:
o
1
2
3
4
f: f:
2
1
13 7.740
10 6.373
3
1
0
3.935
1.944
1.204
2.537
(2 - 2.537)2 2.537
6.267
5
;?:
6
n 30
+ (1 - 6.267)2 + (13 - 7.740)2 6.267
7.740
+ (10 - 6.373)2 + (3 - 3.935)2 + (1 - 1.944)2 + (0 - 1.204? =
6.373 0.114 + 4.427
+
3.935 3.575 + 2.064
+
1.944 0.222 + 0.458
1.204
+ 1.204
= 12.064 v=k
- 2=7
X6.05,5
= 11.070.
- 2=5
Therefore, reject Ho. 0.025
< P < 0.05
* P( X) > 6 could also have been obtained by adding all of the P( X)'s in the preceding table, which would result in a sum of 0.98653; and (0.98653) (30) = 29.596. The null hypothesis in Poisson goodness-of-fit objects in space (or events in time) is random .
testing is that the distribution of
• A random distribution of objects in a space is one in which each object has the same probability of occurring in each portion of the space; that is, the occurrence of each object is independent of the occurrence of any other object. There are
Section 25.3
Goodness of Fit for the Poisson Distribution
591
these two kinds of deviation from randomness that will cause rejection of the null hypothesis: • A uniform distribution of objects in a space is one in which there is equal distance between adjacent objects, as if they are repelling each other. • A contagious distribution* (also referred to as a "clumped," "clustered," "patchy," or "aggregated" distribution) in a space is one in which objects are more likely than in a random distribution to occur in the vicinity of other objects, as if they are attracting each other. Figures 25.2 and 25.3 show examples of these three kinds of distributions. If a population has a random (Poisson) distribution, the variance is the same as the mean: that is, 0'2 = J-t and 0'2/ J-t = 1.0. t If the distribution is more uniform than random (said to be "underdispersed"), 0'2 < J-t and 0'2/ J-t < 1.0; and if the distribution is distributed contagiously ("overdispersed"), 0'2 > J-t and 0'2/ J-t > 1.0.
(a) FIGURE 25.2: Distributions (T2
= u;
e•••
(b) uniform,
•• • • • •
•
•
••
•
• •• •• •
• • ••
•• • • •• • • • • •• • ••
• •• • • • • • (a)
in one-dimensional
in which
• ••
(T2
(e)
(b)
space (i.e., along a line):
< iJ.; (c) contagious,
in which
• • • • • • • •
• • • •
• • • • • • • • • • • •
• • •
• • • •
• • • •
• • • •
• • • • • • • • •
(T2
>
(a) random (Poisson), in which u,
•
• •••
••
••
•••
• ••
• ••
(b)
FIGURE 25.3: Distributions in two-dimensional space: (a) random (Poisson), in which in which (T2 < iJ.; (c) contagious, in which (T2 > u-
•• •••
••• •
••
•• • • ••••
•
.~ .
• • •• • • (e)
(T2
= p; (b) uniform,
* A mathematical distribution that is sometimes used to describe contagious distributions of biological data is the negative binomial distribution, which is described, for example, by Ludwig and Reynolds (1988: 24-26, 32-35) and PieIou (1977: 278-281), and by Ross and Preece (1985), who credit a 1930 French paper by G. Polya with the first use of the term contagious in this context. David (1995) reported that "negative binomial distribution" is a term first used by M. Greenwood and G. U. Yule, in 1920. t Although a population with a random distribution will always have its mean equal to its variance, Hurlbert (1990), Pie Iou (1967: 155), and others have emphasized that not every population with f.L = u2 has a random distribution.
592
Chapter 25
Testing for Randomness The investigator generally has some control over the size of the space, or the length of the time interval, from which counts are recorded. So a plot size twice as large as that in Example 25.3 might have been used, in which case each f would most likely have been twice the size as in this example, with X of 4.94, instead of 2.47. In analyses using the Poisson distribution, it is desirable to use a sample distribution with a fairly small mean-let us say certainly below 10, preferably below 5, and ideally in the neighborhood of 1. If the mean is too large, then the Poisson too closely resembles the binomial, as well as the normal, distribution. If it is too small, however, then the number of categories, k, with appreciable frequencies will be too small for sensitive analysis. Graphical testing of goodness of fit is sometimes encountered. The reader may consult Gart (1969b) for such considerations.
25.4
THE POISSON DISTRIBUTION
FOR THE BINOMIAL
TEST
The binomial test was introduced in Section 24.5 as a goodness-of-fit test for counts in two categories. If n is large, the binomial test is unwieldy. If p is small, it may be convenient to use the Poisson distribution, for it becomes very similar to the binomial distribution at such p's. (a) One-Tailed Testing. Let us consider the following example. It is assumed (as from a very large body of previous information) that a certain type of genetic mutation naturally occurs in an insect population with a frequency of 0.0020 (i.e., on average in 20 out of 10,000 insects). On exposing a large number of these insects to a particular chemical, we wish to ask whether that chemical increases the rate of this mutation. Thus, we state Ho: p :::;:0.0020 and H II: p > 0.0020. (The general one-tailed hypotheses of this sort would be Ho: p :::;:Po and H A: P > Po, where Po is the proportion of interest in the statistical hypotheses. If we had reason to ask whether some treatment reduced the natural rate of mutations, then the one-tailed test would have used He: P :2- Po and HI\:p < po.) As an example, if performing this exposure experiment for the hypotheses Ho: Po :::;:0.0020 and H II: Po > 0.0020 yielded 28 of the mutations of interest in 8000 insects observed, then the sample mutation rate is p = X / n= 28/8000 = 0.0035. The question is whether the rate of 0.0035 is significantly greater than 0.0020. If we conclude that there is a low probability (i.e.,:::;: a) of a sample rate being at least as large as 0.0035 when the sample is taken at random from a population having a rate of 0.0020, then Ho is to be rejected. The hypotheses could also be stated in terms of numbers, instead of proportions, as He: j1- :::;: j1-O and H A: j1- > j1-0, where j1-0 = pon (which is 0.0020 X 8000 = 16 in this example ). By substituting pon for j1- in Equation 25.1, we determine the probability of observing X = 28 mutations if our sample came from a population with Po = 0.0020. To test the hypothesis at hand, we determine the probability of observing X 2 28 mutations in a sample. (If the alternate hypothesis being considered were HA: p < Po, then we would compute the probability of mutations less than or equal to the number observed.) If the one-tailed probability is less than or equal to a, then Ho is rejected at the a level of significance. This process is shown in Examples 25.4 and 25.5a.
Section 25.4
The Poisson Distribution for the Binomial Test
593
EXAMPLE 25.4 Poisson Probabilities for Performing the Binomial Test with a Very Small Proportion 0.0020 n = 8000 We substitute pon following": Po
=
=
(0.0020)(8000)
For lower tail of distribution X 0 1 2 3 4 5 6 7 8 9 10
P(X) 0.00000 0.00000 0.00001 0.00008 0.00031 0.00098 0.00262 0.00599 0.01199 0.02131 0.03410
Cumulative P(X) 0.00000 0.00000 0.00001 0.00009 0.00040 0.00138 0.00400 0.00999 0.02198 0.04329 0.07739
=
16 for
J1.-
in Equation 25.1 to compute the
For upper tail of distribution
X 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
Cumulative P(X)
P(X) 0.02156 0.01437 0.00920 0.00566 0.00335 0.00192 0.00106 0.00056 0.00029 0.00015 0.00007 0.00003 0.00002 0.00001 0.00000
0.05825 0.03669 0.02232 0.01312 0.00746 0.00411 0.00219 0.00113 0.00057 0.00028 0.00013 0.00006 0.00003 0.00001 0.00000
The cumulative probability is the probability in the indicated tail. For example, 25) = 0.02232. This series of computations terminates when we reach a P( X) that is zero to the number of decimal places used.
P( X :::; 10) = 0.07739 and P( X:2:
*For example,
e
-161L29
"
using Equation
25.1a, P( X = 28) =
e-161028
= 0.00192; and P( X = 29) =
28!
= 0.00100.
29!
(b) Two-Tailed Testing. If there is no reason, a priori, to hypothesize that a change in mutation rate would be in one specified direction (e.g., an increase) from the natural rate, then a two-tailed test is appropriate. The probability of the observed number of mutations is computed as shown in Example 25.4. Then we calculate and sum all the probabilities (in both tails) that are equal to or smaller than that of the observed. This is demonstrated in Example 25.5b. (c) Power of the Test. Recall that the power of a statistical test is the probability of that test rejecting a null hypothesis that is in fact a false statement about the
594
Chapter 25
Testing for Randomness
EXAMPLE 25.5a A One-Tailed Binomial Test for a Proportion from a Poisson Population, Using the Information of Example 25.4 He: P HA: P
0.0020 > 0.0020 Q' = 0.05 n = 8000 X = 28 Pan = (0.0020)(8000) :S:
=
16
Therefore, we could state He: /-L
HA:
/-L
:S:
16
> 16.
From Example 25.4, we see that P(X = 28) = 0.00192 and P(X
2':
28) = 0.00411.
As 0.00411 < 0.05, reject Hi;
EXAMPLE 25.5b A Two-Tailed Binomial Test for a Proportion from a Poisson Population, Using the Information of Example 25.4 He: P
=
0.0020
HA: P
-=I-
0.0020
Q'
=
0.05
n
= 8000
X
=
Pan
28
= (0.0020)(8000)
= 16
Therefore, we can state Ho: /-L = 16 HA: /-L -=I- 16. From Example 25.4, we see that P(X = 28) = 0.00192. The sum of the probabilities in one tail that are :S: 0.00192 is 0.00411; the sum of the probabilities in the other tail that are :S: 0.00192 is 0.00138. Therefore, the probability of obtaining these data from a population where Ha is true is 0.00411 + 0.00138 = 0.00549. As 0.00549 < 0.05, reject n-:
population. We can determine the power of the preceding test when it is performed with a sample size of n at a significance level of Q'. For a one-tailed test, we first determine the critical value of X (i.e., the smallest X that delineates a proportion of the Poisson distribution :S: Q'). Examining the distribution of Example 25.4, for example, for Q' = 0.05, we see that the appropriate X is 24 [for P(X 2': 24) = 0.037), while P( X 2': 23) = 0.058]. We then examine the Poisson distribution having the
Section 25.5
Comparing Two Poisson Counts
595
sample X replace J.Lin Equation 25.1. The power of the test is 2': the probability of an X at least as extreme as the critical value of X.* For a two-tailed hypothesis, we identify one critical value of X as the smallest X that cuts off :5 al2 of the distribution in the upper tail and one as the largest X that cuts off :5 al2 of the lower tail. In Example 25.6, these two critical values for a = 0.05 (i.e., al2 = 0.025) are X = 25 and X = 8 las P( X 2': 25) = 0.022 and P(X :5 8) = 0.022]. Then we examine the Poisson distribution having the sample X replace J.Lin Equation 25.1. As shown in Example 25.6, the power of the two-tailed test is at least as large as the probability of X in the latter Poisson distribution being more extreme than either of the critical values. That is, power 2': P( X2': upper critical value) + P(X:5lower critical value). COMPARING TWO POISSON COUNTS
If we have two counts, XI and X2, each from a population with a Poisson distribution, we can ask whether they are likely to have come from the same population (or from populations with the same mean). The test of Ho: J.LI = J.L2(against HA: J.Ll -=I=- J.L2) is related to the binomial test with p = 0.50 (Przyborowski and Wilenski, 1940; Pearson and Hartley, 1966: 78-79), so that Appendix Table B.27 can be utilized, using n = XI + X2. For the two-tailed test, Ho is rejected if either XI or X2 is:5 the critical value, Ca(2).n' This is demonstrated in Example 25.7. For a one-tailed test of Ho: J.LI:5 J.L2against HA: J.LI> J.L2,we reject Ho if XI > X2 and X2 :5 Ca(l ),n, where n = XI + X2. For He: J.LI2': J.L2and HA: ILl < J.L2,Hi, is rejected if XI < X2 and XI :5 C,,( I ).n, where n = XI + X2· This procedure results in conservative testing, and if n is at least 5, then a normal approximation should be used (Detre and White, 1970; Przyborowski and Wilenski, 1940; Sichel, 1973). For the two-tailed test, Z = IXI ~XI
-
+
X21
(25.10)
X2
is considered a normal deviate, so the critical value is Za(2) (which can be read as at the end of Appendix Table B.3). This is demonstrated in Example 25.7. For a one-tailed test,
la(2),
(25.11) For Hi; J.Ll :5J.L2versusHA:J.L1 > J.L2,HoisrejectedifXI > X2andZ2': Za(I)·For HO:J.Ll2': J.L2versusHA:J.Ll < J.L2,Ho is rejected ifXj < X2andZ:5 -Za(I)' The normal approximation is sometimes seen presented with a correction for continuity, but Pirie and Hamdan (1972) concluded that this produces results that are excessively conservative and the test has very low power. An alternative normal approximation, based on a square-root transformation (Anscom be, 1948) is (25.12)
* If the critical value delineates exactly a of the tail of the Poisson distribution, then the test's power is exactly what was calculated; if the critical value cuts off that calculated.
596
Chapter
25
Testing for Randomness
EXAMPLE 25.6 Estimation of the Power of the Small-Probability Binomial Tests of Examples 25.5a and 25.5b, Using a 0.05
=
Substituting X
=
28 for
J.L
in Equation 25.1, we compute the following*:
For lower tail of distribution X
P(X)
0
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
]
2 3 3 5 6 7 8
Cumulative P(X) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
For upper tail of distribution X
P(X)
Cumulative P(X)
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46
0.060 0.067 0.072 0.075 0.075 0.073 0.068 0.061 0.054 0.045 0.037 0.030 0.023 0.018 0.013 0.009 0.007 0.004 0.003 0.002 0.001 0.001 0.000
0.798 0.738 0.671 0.599 0.524 0.449 0.376 0.308 0.247 0.193 0.148 0.111 0.081 0.058 0.040 0.027 0.018 0.011 0.007 0.004 0.002 0.001 0.000
The critical value for the one-tailed test of Example 25.5a is X = 24. The power of this test is > P( X 2:: 24) in the preceding distribution. That is, the power is >0.798. The critical values for the two-tailed test of Example 25.5b are 25 and 8. The power of this test is >P(X 2:: 25) + P(X :s 8) = 0.738 + 0.000. That is, the power is >0.738. *For example,
e
-2112825
using Equation
25.1 a,
PC X = 24) =
e-2112824 24!
=
0.0601 0; and
PC X =
25) =
= 0.06731.
25!
(Best, 1975). It may be used routinely in place of Equation 25.10, and it has superior power when testing at a < 0.05. Equation 25.12 is for a two-tailed test; for one-tailed testing, use (25.13) with the same procedure to reject Ho as with Equation 25.11.
Section 25.6 EXAMPLE 25.7
Serial Randomness of Nominal-Scale Categories
A Two-Sample
597
Test with Poisson Data
One fish is found to be infected with 13 parasites and a second fish with 220. Assuming parasites are distributed randomly among fish, test whether these two fish are likely to have come from the same population. (If the two were of different species, or sexes, then we could ask whether the two species, or sexes, are equally infected.) The test is two-tailed for hypotheses Ho: ILl == IL2 and HA: ILl
i= IL2·
Using Appendix Table B.27 for n = XI + X2 == 13 + 22 == 35, we find a critical value of CO.05(2).35 = 11. Because neither XI nor X2 is :::; 11, Hi, is not rejected. Using the smaller of the two X's, we conclude that the probability is between 0.20 and 0.50 that a fish with 13 parasites and one with 22 parasites come from the same Poisson population (or from two Poisson populations having the same mean). Using the normal approximation of Equation 25.10, 2 == IXI - X21 == 113 - 221 ~XI + X2 v13 + 22
20.05(2) == to 05(2),
9 == 1.521 5.916
= 1.960.
Therefore, do not reject
Hi;
0.10 < P < 0.20
SERIAL RANDOMNESS
OF NOMINAL-SCALE
[P
= 0.13]
CATEGORIES
Representatives of two different nominal-scale categories may appear serially in space or time, and their randomness of occurrence may be assessed as in the following example. Members of two species of antelopes are observed drinking along a river, and their linear order is as shown in Example 25.8. We may ask whether the sequence of occurrence of members of the two species is random (as opposed to the animals either forming groups with individuals of the same species or shunning members of the same species). A sequence of like elements, bounded on either side by either unlike elements or no elements, is termed a run. Thus, any of the following arrangements of five members of antelope species A and seven members of species B would be considered to consist of five runs: BAABBBAAABBB,
or
BAAABAABBBBB,
and so on.
BBAAAABBBBAB,
or
BABAAAABBBBB,
or
To test the null hypothesis of randomness, we may use the runs lest. * If n I is the total number of elements of the first category (in the present example, the number of antelope of species A), n2 the number of antelope of species B, and u the number of runs in the entire sequence, then the critical values, Ua(2),fll,fl2' can be read from Appendix Table B.29 for cases where both nl :::;30 and n2 :::; 30. The critical values in this table are given in pairs; if the u in the sample is :::;the first member of the pair or e: the second, then Ho is rejected. "From its inception the runs test has also been considered to be a nonparametric test of whether two samples come from the same population (e.g., Wald and Wolfowitz, 1940), but as a two-sample test it has very poor power and the Mann-Whitney test of Section 8.11 is preferable.
598
Chapter 25
Testing for Randomness EXAMPLE 25.8
The Two-Tailed
Runs Test with Elements of Two Kinds
Members of two species of antelopes (denoted as species A and B) are drinking along a river in the following order: AABBAABBBBAAABBBBAABBB. Hi: The distribution of members of the two species along the river is random. HA: The distribution of members of the two species along the river is not random.
For species A, n] = 9; for species B, n : = 13; and u = 9. UO.05(2),9.I3
=
As u is neither
6 and 17 (from Appendix Table B.29) s,
6 nor
2':
17, do not reject He: 0.10 :s P :s 0.20
The power of the runs test increases with sample size.* Although Appendix Table B.29 cannot be employed if either n] or n2 is larger than 30, for such samples the distribution of u approaches normality with a mean of fLu
= 2n]n2 + 1
(25.14)
N
and a standard deviation of =
U
u
where N = fl] statistic
+
tt:
/2n]n2(2fl]n2 N2(N -
-V
-
N)
(25.15)
1)
(Brownlee, 1965: 226-230; Wald and Wolfowitz, 1940). And the Z; =
lu -
fLu
I - 0.5
(25.16)
Uu
may be considered a normal deviate, with Zo:(2) being the critical value for the test. (The 0.5 in the numerator of Z; is a correction for continuity.) Using Equation 25.16, the runs test may be extended to data with more than two categories (Wallis and Roberts, 1956: 571), for, in general, fLu
=
+ 1) -
N (N
2: n7
(25.17)
N
and 2:n7[2:n7 Uu
+
N(N
+
1)]
=
N2(N
-
t -
2N2:n
1)
3
N
(25.18)
where ri; is the number of items in category i, N is the total number of items (i.e., N = ni), and the summations are over all categories. (For two categories,
2:
*Mogull (1994) has shown that the runs test should not used in the unusual case of a sample consisting entirely of runs of two (for example, a sample consisting of BBAABBAABBAABB). In such situations the runs test is incapable of concluding departures from randomness; it has very low power, and the power decreases with increased sample size.
Section 25.7
Serial Randomness of Measurements: Parametric Testing
599
Equations 25.17 and 25.18 are equivalent to Equations 25.14 and 25.15, respectively.) O'Brien (1976) and O'Brien and Dyck (1985) present a runs test, for two or more categories, that utilizes more information from the data, and is more powerful, than the above procedure. (a) One-Tailed Testing. There are two ways in which a distribution of nominal-scale categories can be nonrandom: (a) The distribution may have fewer runs than would occur at random, in which case the distribution is more clustered, or contagious, than random; (b) the distribution may have more runs than would occur at random, indicating a tendency toward a uniform distribution. To test for the one-tailed situation of contagion, we state He: The elements in the population are not distributed contagiously, versus HA: The elements in the population are distributed contagiously; and Hi, would be rejected at the 0'( 1) significance level ifll :::;the lower of the pair of critical values in Appendix Table B.29. Thus, had the animals in Example 25.8 been arranged AAAAABBBBBBAAAABBBBBBB, then u = 4 and the one-tailed 5% critical value would be the lower value of llO.05(1 ).9.13, which is 7; as 4 < 7, Hi, is rejected and the distribution is concluded to be clustered. In using the normal approximation, Hi, is rejected if Z; 2=: Za( 1) and u :::;fLl/' To test for uniformity, we use H«: The clements in the population are not uniformly distributed versus H A: The elements in the population arc uniformly distributed. If It2=: the upper critical value in Appendix Table B.29 for 0'( 1), then Hi, is rejected. If the animals in Example 25.8 had been arranged as ABABABBABABBABABBABBAB, then II = 18, which is greater than the upper critical value of II (J.()5 ( 1 ).9,13 (which is 16); therefore, Hi, would have been rejected. If the normal approximation were used, Ho would be rejected if Z; 2=: Za( 1) and 1I 2=: fLl/' (b) Centrifugal and Centripetal Patterns. Occasionally, nonrandomness in the sequential arrangement of two nominal-scale categories is characterized by one of the categories being predominant toward the ends of the series and the other toward the center. In the following sequence, for example, AAAABAABBBBBBABBBBBAABBAAAAA
the A's are more common toward the termini of the sequence, and the B's are more common toward the center of the sequence. Such a situation might be the pattern of two species of plants along a line transect from the edge of a marsh, through the center of the marsh, to the opposite edge. Or we might observe the occurrence of diseased and healthy birds in a row of cages, each cage containing one bird. Ghent (1993) refers to this as a centrifugal pattern of A 's and a centripetal pattern of B'« and presents a statistical test to detect such distributions of observations. SERIAL RANDOMNESS
OF MEASUREMENTS:
PARAMETRIC TESTING
Biologists may encounter continuous data that have been collected serially in space or time. For example, rates of conduction might be measured at successive lengths along a nerve. A null hypothesis of no difference in conduction rate as one examines successive portions essentially is stating that all the measurements obtained arc a random sample from a population of such measurements. Example 25.9 presents data consisting of dissolved oxygen measurements of a water solution determined on the same instrument every five minutes. The desire
600
Chapter 25
Testing for Randomness EXAMPLE 25.9
The Mean Square Successive Difference
Test
An instrument for measuring dissolved oxygen is used to record a measurement every five minutes from a container of lake water. It is desired to know whether the differences in measurements are random or whether they are systematic. (If the latter, it could be due to the dissolved oxygen content in the water changing, or the instrument's response changing, or to both.) The data (in ppm) are as follows, recorded in the sequence in which they were obtained: 9.4, 9.3, 9.3, 9.2, 9.3, 9.2, 9.1,9.3,9.2,9.1,9.1. He: Consecutive measurements H A:
obtained on the lake water with this instrument have random variability. Consecutive measurements obtained on the lake water with this instrument have nonrandom variability and are serially correlated.
n = 11 s2
= 0.01018 (ppmj-
2 (9.3 ~ 9.4)2 + (9.3 ~ 9.3)2 + (9.2 ~ 9.3)2 + ... + (9.1 ~ 9.1)2 s = ~------~--~--------~--~------~--------~------~ * 2( 11 ~ 1) = 0.00550 C = 1 ~ 0.00550 = 1 ~ 0.540 = 0.460 0.01 018 CO.OS.!l = 0.452 Therefore, reject Hi; 0.025 < P < 0.05 is to conclude whether fluctuations in measurements are random or whether they indicate a nonrandom instability in the measuring device (or in the solution). The null hypothesis that the sequential variability among measurements is random may be subjected to the mean square successive difference test, a test that assumes normality in the underlying distribution. In this procedure, we calculate the sample variance, s2, which is an estimate of the population variance, u2, as introduced in Section 4.4: n
L(Xi s2
=
,---i
~ X)2
=---'1
_
(4.15)
n ~ 1 or n
LX; s2
=
n
~i=~l
_
n ~ 1
(4.17)
If the null hypothesis is true, then another estimate of u2 is n-I
L (Xi
+l
s2
*
~
Xi)2
= ~i=~l=--
2( n ~ 1)
_
(25.19)
Section 25.8
Serial Randomness of Measurements: Nonparametric
(von Neumann et aI., 1941). Therefore, the ratio s;/ Using Young's (1941) notation, the test statistic is
s2 should
Testing
601
equal 1 when Ho is true.
(25.20) and if this value equals or exceeds the critical value Co•n, in Appendix Table B.30, we reject the null hypothesis of serial randomness.* The mean square successive difference test considers the one-tailed alternate hypothesis that measurements are serially correlated. For n larger than those in Appendix Table B.30, the hypothesis may be tested by a normal approximation:
Z=
In
C
Ij n2
(25.22)
- 2 1
(von Neumann et al., 1941), with the value of the calculated Z being compared with the critical value of Za( I) = ta( 1).00' This approximation is very good for a = 0.05, for n as small as 10; for a = 0.10,0.25, or 0.025, for n as small as 25; and for a = 0.01 and 0.005, for n of at least 100. SERIAL RANDOMNESS
OF MEASUREMENTS:
NONPARAMETRIC
TESTING
If we do not wish to assume that a sample of serially obtained measurements came from a normal population, then the procedure of Section 25.7 should not be employed. Instead, there are non parametric methods that address hypotheses about serial patterns. (a) Runs Up and Down: Two-Tailed Testing. We may wish to test the null hypothesis that successive directions of change in serial data tend to occur randomly, with the alternate hypothesis stating the directions of change occur either in clusters (that is, where an increase from one datum to another is likely to be followed by another increase, and a decrease in the magnitude of the variable is likely to be followed by another decrease) or with a tendency toward regular alte-rnation of increases and decreases (i.e., an increase is likely to be followed by a decrease, and vice versa). In the series of n data, we note whether datum i + 1 is larger than datum i (and denote this as a positive change, indicated as "+ ") or is smaller than datum i (which is referred to as a negative change, indicated as "- "). By a nonparametric procedure presented by Wallis and Moore (1941), the series of +'s and -'s is examined and we determine the number of runs of +'s and - 's, calling this number u as we did in Section 25.6. Appendix Table B.31 presents pairs of critical values for u, where for the two-tailed test for deviation from randomness one would reject Hi, if u were either :s the first member of the pair or 2: the second member of the pair. "Equations
4.15 and 25.19 may be combined
so that C can be computed
2: (Xi
as
11-1
C = 1 -
- Xi+I)2
i=1
(25.21 ) 2(SS)
where SS is the numerator
of either Equation
4.15 or 4.17.
602
Chapter 25
Testing for Randomness
For sample sizes larger than those in Table B.31, a normal approximation may be employed (Edgington, 1961; Wallis and Moore, 1941) using Equation 25.16, where ILl(
and (TI(
=
2n - 1
(25.23)
3
_ )16n -
29
(25.24)
•
90
The runs-up-and-down test may be used for ratio, interval, or ordinal data and is demonstrated in Example 25.10. It is most powerful when no adjacent data are the same; if there are identical adjacent data, as in Example 25.10, then indicate the progression from each adjacent datum to the next as "0" and determine the mean of all the u's that would result from all the different conversions of the O's to either +'8 or - 'so Levene (1952) discussed the power of this test. EXAMPLE 25.10
Testing of Runs Up and Down
Data are measurements days.
of temperature
in a rodent burrow at noon on successive
He: The successive positive and negative changes in temperature measurements are random. H A: The successive positive and negative changes in the series of temperature measurements are not random.
n
Day
Temperature (0C)
1 2 3 4 5 6 7 8 9 10 11 12
20.2 20.4 20.1 20.3 20.5 20.7 20.5 20.4 20.8 20.8 21.0 21.7
Difference
+ + + + + 0
+ +
= 12
If the difference of 0 is counted as +, then u = 5; if the 0 is counted as -, then u = 7; mean u = 6. For a = 0.05, the critical values are uO.05(2),12= 4 and 11. Hi, is not rejected; 0.25 < P -::;0.50.
Section 25.8
Serial Randomness of Measurements: Nonparametric
Testing
603
(b) Runs Up and Down: One-Tailed Testing. In a fashion similar to that in Section 25.6a, a one-tailed test would address one of two situations. One is where /10: In the sampled population, the successive positive and negative changes in the series of data are not clustered (i.c .. are not contagious), and fill: In the sampled population, the successive positive and negative changes in the series of data are clustered (i.eo, are contagious). For this test. Ho would he rejected if /I::; the first member of the pair of one-tailed critical values in Appendix Table 8.31: if the test is performed using the normal approximation. Ho would he rejected if ILc ~ Z",( 1)\ and II ::; jL/I' The other one-tailed circumstance is where Hi: In the sampled population, the successive positive and negative changes in the series of data do not alternate regularly (i.c., are not uniform), versus HI1: In the sampled population, the series of data do alternate regularly (i.e., are uniform). Ho would he rejected if /I ~ the second member of the pair of one-tailed critical values: if the test uses the normal approximation, Ho would be rejected if Z; ~ Z,,( I) and /I ~ jL/I. (c) Runs Above and Below the Median. Another method of assessing randomness of ratio-, intcrval-, or ordinal-scale measurements examines the pattern of their distribution with respect to the set of data. We first determine the median of the sample (as explained in Section 3.2). Then we record each datum as heing either above (+) or below (-) the median. If a sample datum is equal to the median, it is discarded from the analysis. We then record II, the number of runs, in the resulting sequence of + 's and - 'so The test then proceeds as the runs test of Section 25.6. This is demonstrated, for two-tailed hypotheses, in Example 25.11: common one-tailed hypotheses are those inquiring into contagious (i.e., clumped) distributions of data above or below the median.
EXAMPLE 25.11
Runs Above and Below the Median
The data and hypotheses are those of Example 25.10. The median of the 12 data is determined to he 20.5 C. The sequence of data, indicating whether they are above ( + ) or below ( - ) the median, is - - - - 0 + 0 - + + + +. For the runs test, 1/1 = 5./12 = 5,/1 = 4. The critical values are IIO.O:'i(2).:'i..'i = 2 and 10: therefore, do not reject Ho: 0.20 < P ::; 0.50.
Although the test for runs up and down and the test for runs above and below the median may both he considered nonparametric alternatives to the parametric means square successive difference test of Section 25.7, the latter runs test often resembles the parametric test more than the former runs test does. The test for runs up and down works well to detect long-term trends in the data, unless there are short-term random fluctuations superimposed upon those trends. The other Iwo tests tend to perform better in detecting long-term patterns in the presence of short-term randomness (A. W. Ghent, personal communication).
602
Chapter 25
Testing for Randomness For sample sizes larger than those in Table B.31, a normal approximation may be employed (Edgington, 1961; Wallis and Moore, 1941) using Equation 25.16, where fLlI
2n ~ = --3
(25.23)
and (25.24) The runs-up-and-down test may be used for ratio, interval, or ordinal data and is demonstrated in Example 25.10. It is most powerful when no adjacent data are the same; if there are identical adjacent data, as in Example 25.10, then indicate the progression from each adjacent datum to the next as "0" and determine the mean of all the u'« that would result from all the different conversions of the O's to either + 's or ~ 'so Levene (1952) discussed the power of this test. EXAMPLE 25.10
Testing of Runs Up and Down
Data are measurements days.
of temperature
He: The successive
positive ments are random.
in a rodent
=
changes
in temperature
positive and negative ture measurements are not random.
changes
in the series of tempera-
Day
Temperature
1 2 3 4 5 6 7 8 9 10 11 12
20.2 20.4
( C)
measure-
Difference
+
20.1 20.3 20.5 20.7 20.5 20.4
+ + + +
20.8 20.8 21.0 21.7
°
+ +
12
If the difference of 0 is counted then II = 7; mean u = 6. For
at noon on successive
and negative
H A: The successive
n
burrow
Cl'
= 0.05, the critical values are
Hi, is not rejected;
0.25
+,
as
then
UO.05(2),12
< P ::; 0.50.
II
=
5; if the 0 is counted
= 4 and 11.
as -,
Section 25.8
Serial Randomness of Measurements: Nonparametric Testing
603
(b) Runs Up and Down: One-Tailed Testing. [n a fashion similar to that in Section 25.63. a one-tailed test would address one of two situations. One is where H«: In the sampled population. the successive positive and negative changes in the series of data are not clustered (i.e .. arc not contagious). and If A : In the sampled population. the successive positive and negative changes in the series of data are clustered (i.e., arc contagious). For this test. H« would he rejected if II::; the first member of the pair of one-tailed critical values in Appendix Table B.3I; if the test is performed using the normal approximation. fill would he rejected if IZc ~ Z,,( 1)1 and 1I ::; flil' The other one-tailed circumstance is where HI): In the sampled population, the successive positive and negative changes in the series of data do not alternate regularly (i.e .. are not uniform). versus If/I: In the sampled population. the series of data do alternate regularly (i.c., are uniform). HI) would be rejected if II ~ the second member of the pair of one-tailed critical values; if the test uses the normal approximation, Hi, would be rejected if Z; ~ Zu( I) and II ~ flil' (c) Runs Above and Below the Median. Another method of assessing randomness of ratio-, intcrval-. or ordinal-scale measurements examines the pattern of their distribution with respect to the set of data. We first determine the median of the sample (as explained in Section 3.2). Then we record each datum as being either above (+) or below (-) the median. If a sample datum is equal to the median. it is discarded from the analysis. We then record II. the number of runs, in the resulting sequence of +'s and - 's. The test then proceeds as the runs test of Section 25.6. This is demonstrated. for two-tailed hypotheses. in Example 25.11; common one-tailed hypotheses are those inquiring into contagious (i.c .. clumped) distributions of data above or below the median.
EXAMPLE 25.11
Runs Above and Below the Median
The data and hypotheses arc those of Example 25.10. The median of the 12 data is determined to he 20SC. The sequence of data. indicating whether they are above ( + ) or below ( - ) the median, is - - - - () + 0 - + + + +. For the runs test. 111 = 5.112 = 5. II = 4. The critical values are 1/0.05(2).5.5 = 2 and 10; therefore. do not reject He: 0.20 < P ::; 0.50.
Although the test for runs up and down and the test for runs above and below the median may both be considered nonpararnetric alternatives to the parametric means square successive difference test of Section 25.7. the latter runs test often resembles the parametric test more than the former runs test does. The test for runs up and down works well to detect long-term trends in the data, unless there are short-term random fluctuations superimposed upon those trends. The other two tests tend to perform better in detecting long-term patterns in the presence of short-term randomness (A. W. Ghent. personal communication).
604
Chapter 25
Testing for Randomness
EXERCISES 25.1. If, in a Poisson distribution, fL = 1.5, what is P( O)? What is P(5)? 25.2. A solution contains bacterial viruses in a concentration of 5 X 108 bacterial-virus particles per milliliter. In the same solution are 2 X lORbacteria per milliliter. If there is a random distribution of virus among the bacteria, (a) What proportion of the bacteria will have no virus particles? (b) What proportion ofthe bacteria will have virus particles? (c) What proportion of the bacteria will have at least two virus particles? (d) What proportion of the bacteria will have three virus particles? 25.3. Fifty-seven men were seated in an outdoor area with only their arms exposed. After a period of time, the number of mosquito bites (X) on each man's arms was recorded, as follows, where.f is the number of men with X bites. Test the null hypothesis that mosquitoes bite these men at random.
X
o
8
1 2
17 18 11 3 0
3 2:
.f
4 5
25.4. We wish to compile a list of certain types of human metabolic diseases that occur in more than 0.01 % of the population. A random sample of 25,000 infants reveals five infants with one of these diseases. Should that disease be placed on our list? 25.5. A biologist counts 112 diatoms in a milliliter of lake water, and 134 diatoms are counted in a milliliter of
a second collection of lake water. Test the hypothesis that the two water collections came from the same lake (or from lakes with the same mean diatom concentrations). 25.6. An economic entomologist rates the annual incidence of damage by a certain beetle as mild (M) or heavy (H). For a 27-year period he records the following: H M M M H H M M H M H H H M M H H H H M M H H M M M M. Test the null hypothesis that the incidence of heavy damage occurs randomly over the years. 25.7. The following data are the magnitudes of fish kills along a certain river (measured in kilograms of fish killed) over a period of years. Test the null hypothesis that the magnitudes of the fish kills were randomly distributed over time. Year
Kill (kg)
1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970
147.4 159.8 155.2 161.3 173.2 191.5 198.2 166.0 171.7 184.9 177.6 162.8 177.9 189.6 206.9 221.5
25.8. Analyze the data of Exercise 25.7 nonparametrically to test for serial randomness.
C HAP
T E R
26
Circular Distributions: Statistics
Descriptive
26.1 DATA ON A CIRCULAR SCALE 26.2 GRAPHICAL PRESENTATION OF CIRCULAR DATA 26.3TRIGONOMETRIC FUNCTIONS 26.4THE MEAN ANGLE 26.5ANGULAR DISPERSION 26.6THE MEDIAN AND MODAL ANGLES 26.7 CONFIDENCE LIMITS FOR THE POPULATION
MEAN AND MEDIAN
ANGLES
26.8 AXIAL DATA 26.9THE MEAN OF MEAN ANGLES
26.1 DATA ON A CIRCULAR SCALE
In Section 1.1b, an interval scale of measurement was defined as a scale with equal intervals but with no true zero point. A special type of interval scale is a circular scale, where not only is there no true zero, but any designation of high or low values is arbitrary. A common example of a circular scale of measurement is compass direction (Figure 26.1a). where a circle is said to be divided into 360 equal intervals, called degrees," and for which the zero point is arbitrary. There is no physical justification for a direction of north to be designated 0 (or 360) degrees, and a direction of 270" cannot be said to be a "larger" direction than 90'.'1' Another common circular scale is time of day (Fig. 26.1 b), where a day is divided into 24 equal intervals, called hours, but where the designation of midnight as the zero or starting point is arbitrary. One hour of a clay corresponds to 150' (i.e .. 360"/24) of a circle, and 1 of a circle corresponds to four minutes of a day. Other time divisions, such as weeks and years (see Figure 26.1c), also represent circular scales of measurement. 0
* A degree is divided into 60 minutes (i.e .. I' = 60') and a minute into 60 seconds (I' = 60"). A number system based upon 60 is termed sexagesimal: and we owe the division of the circle into 360 degrees-and the 60-minute hour and 60-second minute-to the ancient Babylonians (about 3000 years ago). The use of the modern symbols ( and' and ") appears to date from the 1570s (Cajori, 1928-1929, Vol. II: 146). "Occasionally one will encounter angular measurements expressed in radians instead or in degrees. A radian is the angle that is subtendcd by an arc of a circle equal in length to the radius of the circle. As a circle's circumference is 27T times the radius, a radian is 360"/27T = 180"/7T = 57.295779510 (or 57 dcg, 17 min. 44.8062 see). The term radian was first used, in 1873. by James Thomson, brother of Baron William Thomson (Lord Kelvin). the famous Scottish mathematician and physicist (Cajori, 1928-1929, Vol. II: 147). A direction measured clockwise, from OC at north. is called an arimuth, Rarely, a direction is recorded as an angular measurement called a grad: a right angle (90°) is divided into 100 grads, so a grad is 0.9 of a degree.
605
606
Chapter 26
Circular Distributions:
Descriptive Statistics North.O"
West:270° I---~r-----I
00:00 hr
East:90°
18:00 hr
f-----'"
::=----1 06:00 hr
South:18(f
12:00 hr
(a)
(b)
Jan.
Oct. f----~I!E_----I
Apr.
Jul. (c) FIGURE 26.1: Common circular scales of measurement. of the year (with the first day of each month shown).
(a) Compass directions.
(b) Times of day. (c) Days
In general, X time units may be converted to an angular direction (a, in degrees), where X has been measured on a circular scale having k time units in the full cycle: (360 )(X) 0
a=
k
.
(26.1)
For example, to convert a time of day (X, in hours) to an angular direction, k = 24 hr; to convert a day of the week to an angular direction, number the seven days from some arbitrary point (e.g., Sunday = day 1) and use Equation 26.1 with k = 7; to convert the Xth day of the year to an angular direction, k = 365 (or, k = 366 in a leap year); to convert a month of the year, k = 12; and so on.* Such conversions are demonstrated in Example 26.1. Data from circular distributions generally may not be analyzed using the statistical methods presented earlier in this book. This is so for theoretical reasons as well as for empirically obvious reasons stemming from the arbitrariness of the zero point on the circular scale. For example, consider three compass directions-lO°, 30°, and 350°, *Equation 26.1 gives angular directions corresponding to the ends of time periods (e.g., the end of the Xth day of the year). If some other point in a time period is preferred, the equation can be adjusted accordingly. For example, noon can be considered on the Xth day of the year by using X -0.5 in place of X. If the same point is used in each time period (e.g., always using either noon or midnight), then the statistical procedures of this and the following chapter will be unaffected by the choice of point. (However, graphical procedures, as in Section 26.2, will of course be affected, in the form of a rotation of the graph if Equation 26.1 is adjusted. If, for example, we considered noon on the Xth day of the year, the entire graph would be rotated about half a degree counterclockwise.)
.2
Section 26.2
Graphical Presentation of Circular Data
EXAMPLE 26.1 Conversions of Times Measured Corresponding Angular Directions
607
on a Circular Scale to
By Equation 26.1, a=
(360
(X)
0 )
k
.
1. Given a time of day of 06:00 hr (which is one-fourth of the 24-hour clock and should correspond, therefore, to one-fourth of a circle), X = 6hr,k a
=
= 24 hr. and
(360 )(6 U
hr)/24 hr
=
90°.
=
93.75
2. Given a time of day of 06: 15 hr, X a
= 6.25 hr, k = 24 hr, and =
(360 )(6.25 C
hr)/24 hr
C •
3. Given the 14th day of February, being the 45th day of the year, X a
=
45 days, k
=
365 days, and
= (360°)( 45 days)/365 days = 44.38°.
for which we wish to calculate an arithmetic mean. The arithmetic mean calculation of (10° + 30° + 350°)/3 = 390°/3 = 130° is clearly absurd, for all data are northerly directions and the computed mean is southeasterly. This chapter introduces some basic considerations useful in calculating descriptive statistics for circular data, and Chapter 27 discusses tests of hypotheses.* Statistical methods have also been developed for data that occur on a sphere (which are of particular interest to earth scientists}! GRAPHICAL PRESENTATION OF CIRCULAR DATA
Circular data are often presented as a scatter diagram, where the scatter is shown on the circumference of a circle. Figure 26.2 shows such a graph for the data of Example 26.2. If frequencies of data are too large to be plotted conveniently on a scatter diagram, then a bar graph, or histogram, may be drawn. This is demonstrated in Figure 26.3, for the data presented in Example 26.3. Recall that in a histogram, the length, as well as the area, of each bar is an indication of the frequency observed at each plotted value of the variable (Section 1.3). Occasionally, as shown in Figure 26.4, a histogram is seen presented with sectors, instead of bars, composing the graph; this is sometimes called a rose diagram. Here. the radii forming the outer boundaries "More extensive reviews of methods for circular data include Batschelet t (1965. 1972. 19S1), Fisher (1993), Jammalamadaka and SenGupta (2001), Mardia (1972a. 19S1). and Mardia and Jupp (2000). TEdward Batschelet (1914-1979), Swiss biomathernatician, was one of the most influential writers in developing, explaining, and promulgating circular statistical methods, particularly among biologists. tNotable discussions of the statistical analysis of sperical data are as follows: Batschelet (1981: Chapter 11); Fisher, Lewis, and Embleton (19S7): Mardia (1972a: Chapters Sand 9): Mardia and Jupp (2000: Chapters 9. I (l, ctc.); Upton and Fingleton (19S9: Chapter 10): and Watson (1983).
608
Chapter 26
Circular Distributions:
Descriptive Statistics
270° I------'~~::------l
90°
IHO° FIGURE 26.2: A circular scatter diagram median as explained in Section 26.6.)
EXAMPLE 26.2 Figure 26.2
A Sample
for the data of Example 26.2. (The dashed line defines the
of Circular Data. These Data Are Plotted in
Eight trees are found leaning in the following compass directions: 45°, 55°,81°, 96°,110°,117°,132°,154°. of the sectors are proportional to the frequencies being represented, but the areas of the sectors are not. Since it is likely that the areas will be judged by eye to represent the frequencies, the reader of the graph is being misled, and this type of graphical presentation is not recommended. However, a true-area rose diagram can be obtained by plotting the square roots of frequencies as radii.* 0°
0°
180
180°
(a)
(b)
0
FIGURE 26.3: (a) Circular histogram for the data of Example 26.3 where the concentric circles represent frequency increments of 5. (b) A relative frequency histogram for the data of Example 26.3 with the concentric circles representing relative frequency increments of 0.05.
*The earliest user of rose diagrams was the founder of modern nursing and pioneer social and health statistician, Florence Nightingale (1820-1910), in 1858. She employed true-area colored diagrams, which she termed "coxcombs," to indicate deaths from various causes over months of the year (Fisher, 1993: 5-6). (Nightingale gained much fame for her work with the British Army during the Crimean War.)
Section 26.2
Graphical Presentation of Circular Data
609
EXAMPLE26.3 A Sample of Circular Data, Presented as a Frequency Table, Where aj Is an Angle and fj Is the Observed Frequency of aj. These Data Are Plotted in Figure 26.3
a, (deg)
Ii
0-30 30-60 60-90 90-120 120-150 150-180 180-210 210-240 240-270 270-300 300-330 330-360
0 6 9 13 15 22 17 12 8 3 0 0
n
Relative
Ii
0.00 0.06 0.09 0.12 0.14 0.21 0.16 0.11 0.08 0.03 0.00 0.00
= 105
Total
= 1.00
(}O
FIGURE 26.4: A rose diagram of the data of Example 26.3, utilizing sectors instead of bars. This procedure is not recommended unless square roots of the frequencies are employed (see Section 26.2).
Another manner of expressing circular frequency distributions graphically is shown in Figure 26.5. Here. the length of each bar of the histogram represents a frequency, as in Figure 26.3(a), but the bars extend from the circumference of a circle instead of from the center. In addition, an arrow extending from the circle's center toward the circumference indicates both the direction and the length of the mean vector, and this expresses visually both the mean angle and a measure of data concentration (as explained in Sections 26.4 and 26.5).
610
Chapter 26
Circular Distributions: Descriptive Statistics
\
FIGURE 26.5: Circular histogram for the data of Example 26.3, including angle (il) and a measure of dispersion (r).
an arrow depicting
the mean
A histogram of circular data can also be plotted as a linear histogram (see Section 1.3), with degrees on the horizontal axis and frequencies (or relative frequencies) on the vertical axis. But the impression on the eye may vary with the arbitrary location of the origin of the horizontal axis, and (unless the range of data is small-say, no more than 180°) the presentation of Figure 26.3 or Figure 26.5 is preferable. 26.3
TRIGONOMETRIC
FUNCTIONS
A great many of the procedures that follow in this chapter and the next require the determination of basic trigonometric functions. Consider that a circle (perhaps representing a compass face) is drawn on rectangular coordinates (as on common graph paper) with the center as the origin (i.e., zero) of both a vertical X axis and a horizontal Y axis; this is what is done in Figure 26.6. There are two methods that can be used to locate any point on a plane (such as a sheet of paper). One is to specify X and Y (as done previously in discussing regression and correlation in Chapters 17 and 19). However, with circular data it is conventional to use a vertical, instead of a horizontal, X axis. This second method
sine cosine + sine + cosine +
__ ~~
__ -,,- __ ~~
__ -L __
-,-+
y
SIne -
cosme sine + cosme -
FIGURE 26.6: A unit circle, showing coordinates.
four points and their
polar (a and r) and rectangular
(X and Y)
Section 26.3
Trigonometric
Functions
611
specifies both the angle, a, with respect to some starting direction (say, clockwise from the top of the X axis, namely "north") and the straight-line distance, r, from some reference point (the center of the circle). This pair of numbers, a and r, is known as the "polar coordinates" of a point." Thus, for example, in Figure 26.6, point 1 is uniquely identified by polar coordinates a = 30° and r = 1.00, point 2 by a = 120° and r = 1.00, and so on. If the radius of the circle is specified to be 1 unit, as in Figure 26.6, the circle is called a unit circle. If a is negative, it is expressing a direction counterclockwise from zero. It may be added to 360° to yield the equivalent positive angle; thus, for example, -60° = 360° - 60° = 300°. An angle greater than 360° is equivalent to the number of degrees by which it exceeds 360° or a multiple of 360°. So, for example, 450° = 450° - 360° = 90° and 780° = 780° - 360° - 360° = 60°. The first-mentioned method of locating points on a graph referred to the X and Y axes. By this method, point 1 in Figure 26.6 is located by the "rectangular coordinates" X = 0.87 and Y = 0.50, point 2 by X = -0.50 and Y = 0.87, point 3 by X = -0.87 and Y = -0.50, and point 4 by X = 0.50 and Y = -0.87. The cosine (abbreviated "cos") of an angle is defined as the ratio of the X and the r associated with the circular measurement: cosa =
X
(26.2)
r
while the sine (abbreviated
"sin") of the angle is the ratio of the associated Y and r:
.
Y
sma
(26.3)
= -.
r
Thus, for example, the sine of (JI in Figure 26.6 is sin 30° = 0.50/1.00 = 0.50, and its cosine is cos 30° = 0.87/1.00 = 0.87. Also, sin 120 = 0.87/1.00 = 0.87, cas 120° = -0.50/1.00 = -0.50, and so on. Sines and cosines (two of the most used "trigonometric;' functions") are readily available in published tables, and many electronic calculators give them (and sometimes convert between polar and rectangular coordinates as well). The sines of 0° and 180° are zero, angles between 0° and 180° have sines that are positive, and the sines are negative for 180° < a < 360°. The cosine is zero for 90° and 270°, with positive cosines obtained for 0° < a < 90° and for 270 < a < 360°, and negative cosines for angles between 90° and 270°. A third trigonometric function is the tangent+ 0
0
tana
=
Y
X
sina cosa
(26.5)
*This use of the symbol r has no relation to the r that denotes a sample correlation coefficient (Section 19.1). t Trigonometry refers, literally, to the measurement of triangles (such as the triangles that emanate from the center of the circle in Figure 26.6). tThe angle having a tangent of Y / X is known as the arctangent (abbreviated arctan) of Y / X. As noted at the end of Section 26.3, a given tangent can be obtained from either of two different angles. If X ~ 0, then a = arctan (Y / X); if X < 0, then a = arctan (Y / X) + 18()0. The cotangent is the reciprocal of the tangent, namely cota
cosa = -X = --.
Y
sina
(26.4)
612
Chapter 26
Circular Distributions:
Descriptive Statistics
On the circle, two different angles have the same sine, two have the same cosine, and two have the same tangent: sin a
= sin (180° - a)
cos a
=
a) tan a = tan (180° + a). cos (360°
We shall see later that rectangular coordinates, X and Y, may also be used in conjunction with mean angles just as they are with individual angular measurements.* 26.4
THE MEAN ANGLE If a sample consists of n angles, denoted as a, through an, then the mean of these angles, ii, is to be an estimate of the mean angle, /La in the sampled population. To compute the sample mean angle, ii, we first consider the rectangular coordinates of the mean angle: n ~
COSai
i='
X=
(26.6)
n
and
n
~sinai
=
y
_1=_' __
(26.7)
n Then, the quantity r
=
+
)X2
y2
(26.8)
is computed." this is the length of the mean vector, which will be further discussed in Section 26.5. The value of Zi is determined as the angle having the following cosine and sine: X (26.9) cosZi = r
and sin ii =
Y
(26.10)
r
Example 26.4 demonstrates
these calculations. It is also true that _
Y
tana = -
X
sin Zi
= --.
(26.11)
cosZi
If r = 0, the mean angle is undefined and we conclude that there is no mean direction. "Over time, many different symbols and abbreviations have been used for trigonometric functions. The abbreviations sin. and tan. were established in the latter half of the sixteenth century, and the periods were dropped early in the next century (Cajori, 1928-1929, Vol. II: 150,158). The cosine was first known as the "sine of the complement" -because the cosine of a equals the sine of 90° - a for angles from 0° to 90°-and the English writer E. Gunter changed "complementary sine" to "cosine" and "complementary tangent" to "cotangent" in 1620 (ibid.: 157). tThis use of the symbol r has no relation to the r that denotes a sample correlation coefficient (Section 19.1).
Section 26.4
The Mean Angle
613
If the circular data are times instead of angles, then the mean time corresponding to the mean angle may be determined from a manipulation of Equation 26.1: ka X=-. 360
(26.12) 0
EXAMPLE 26.4
Calculating the Mean Angle for the Data of Example 26.2
a, (deg)
sin ai
cos a,
45 55 81 96 110 117 132 154
0.70711 0.81915 0.98769 0.99452 0.93969 0.89101 0.74315 0.43837
0.70711 0.57358 0.15643 -0.10453 -0.34202 -0.45399 -0.66913 -0.89879
L sin a, = 6.52069 Y
L cos a, = - 1.03134
= Lsin a,
X
=
2: cos o,
n
n
= 0.81509
= -0.12892
n=8
r
=
JX2
cosa- -- -X r
+
y2
= )(
-0.12892)2
-_ -0.12892 0.82522
. - -- -y -- 0.8]509 sma r 0.82522
+ (0.81509)2 =
JO.68099
= 0.82522
-- - 015623 . -- 098772 .
The angle with this sine and cosine is ii
=
99°.
So, to determine a mean time of day, X, from a mean angle, ii, X = (24 hr)(Zi)/360°. For example, a mean angle of 270° on a 24-hour clock corresponds to X = (24 hr) (270°)/360° = 18:00 hr (also denoted as 6:00 P.M.). If the sine of a is S, then it is said that the arcsine of S is a; for example, the sine of 30° is 0.50, so the arcsine of 0.50 is 30°. If the cosine of a is C, then the arccosine of C is a; for example, the cosine of 30° is 0.866, so the arccosine of 0.866 is 30°. And, if the tangent of a is T, then the arctangent of T is a; for example, the tangent of 30° is 0.577, so the arctangent of 0.577 is 30 * 0
(a) Grouped Data. Circular data are often recorded in a frequency table (as in Example 26.3). For such data, the following computations are convenient alternatives "The arcsine is also referred to as the "inverse sine" and can be abbreviated "arcsin" or sin -I; the arccosine can be designated as "arccos" or cos-I , and the arctangent as "arctan" or tan -I.
614
Chapter
26
Circular Distributions:
Descriptive
Statistics
to Equations 26.6 and 25.7, respectively: X
=
"
F.
~Jl
cos a'I
(26.13)
n y
= "~Jl -F'sina I
(26.14)
n (which are analogous to Equation 3.3 for linear data). In these equations, a, is the midpoint of the measurement interval recorded (e.g., a: = 45° in Example 26.3, which is the midpoint of the second recorded interval, 30 - 60°), [t is the frequency of occurrence of data within that interval (e.g.,j2 = 6 in that example), and n = "Lk Example 26.5 demonstrates the determination of ii for the grouped data (where Ii is not 0) of Example 26.3. EXAMPLE 26.5
ai
fi
45° 75° 105° 135° 165° 195° 225° 2550 285°
6 9 13 15 22 17 12 8 3
Calculating the Mean Angle for the Data of Example 26.3
. I
0.707l1 0.96593 0.96593 0.707l1 0.25882 -0.25882 -0.70711 -0.96593 -0.96593
4.24266 8.69337 12.55709 10.60665 5.69404 -4.39994 -8.48532 -7.72744 -2.89779
Lfi sin ai
n = 105
ccs a,
~.sin a I
sinai
y
[i
. I
0.70711 0.25882 -0.25882 -0.70711 -0.96593 -0.96593 -0.707l1 -0.25882 0.25882
= 18.28332
= LJ;I sina
I
ccs:« I
4.24266 2.32938 -3.36466 -10.60665 -21.25046 -16.42081 -8.48532 -2.07056 0.77646
Lfi cos ai X =
= - 54.84996
LJ;
n
+ y2 =
= JX2
cosa- --
-X r
-_
J( -0.52238)2
-0.52238
--
-
cos c.I n
= 0.17413 r
I
= -0.52238
+ (0.17413)2 = 0.55064
094868 .
0.55064
. a- -- -y -- 0.17413 -- 031623 sm . r
0.55064
The angle with this cosine and sine is ii = 162°. There is a bias in computing r from grouped data, in that the result is too small. A correction for this is available (Batschelet, 1965: 16-17, 1981: 37-40; Mardia, 1972a: 78-79; Mardia and Jupp, 2000: 23; Upton and Fingleton, 1985: 219), which may be applied when the distribution is unimodal and does not deviate greatly from symmetry. For data grouped into equal intervals of d degrees each, rc
= cr,
(26.15)
Section 26.5
where
rc
Angular Dispersion
615
is the corrected r, and c is a correction factor, d7T
c=
360°
(26.16)
sin (~)" The correction is insignificant for intervals smaller than 30°. This correction is for the quantity r; the mean angle, 71, requires no correction for grouping. ANGULAR DISPERSION
When dealing with circular data, it is desirable to have a measure, analogous to those of Chapter 4 for a linear scale, to describe the dispersion of the data. We can define the range in a circular distribution of data as the smallest arc (i.e., the smallest portion of the circle 's circumference) that contains all the data in the distribution. For example, in Figure 26.7a, the range is zero; in Figure 26.7b, the shortest arc is from the data point at 38° to the datum at 60°, making the range 22°; in Figure 26.7c, the data are found from 10° to 93 with a range of 83°; in Figure 26.7d, the data run from 322° to 135°, with a range of 173°; in Figure 26.7e, the shortest arc containing all the data is that running clockwise from 285° to 171°, namely an arc of 246°; and in Figure 26.7f, the range is 300°. For the data of Example 26.4, the range is 109° (as the data run from 45° to 154°). Another measure of dispersion is seen by examining Figure 26.7; the value of r (by Equation 26.8, and indicated by the length of the broken line) varies inversely with the amount of dispersion in the data. Therefore, r is a measure of concentration. It has no units and it may vary from 0 (when there is so much dispersion that a mean angle cannot be described) to 1.0 (when all the data are concentrated at the same direction). (An r of 0 does not, however, necessarily indicate a uniform distribution. For example, the data of Figure 26.8 would also yield r = 0). A line specified by both its direction and length is called a vector, so r is sometimes called the length of the mean vector. In Section 3.1 the mean on a linear scale was noted to be the center of gravity of a group of data. Similarly, the tip of the mean vector (i.e., the quantity r), in the direction of the mean angle (71) lies at the center of gravity. (Consider that each circle in Figure 26.7 is a disc of material of negligible weight, and each datum is a dot of unit weight. The disc, held parallel to the ground, would balance at the tip of the arrow in the figure. In Figure 26.7f, r = 0 and the center of gravity is the center of the circle.) Because r is a measure of concentration, 1- r is a measure of dispersion. Lack of dispersion would be indicated by 1- r = 0, and maximum dispersion by 1- r = 1.0. As a measure of dispersion reminiscent of those for linear data, Mardia (1972a: 45), Mardia and Jupp (2000: 18), and Upton and Fingleton (1985: 218) defined circular variance: 52 = 1 - r. (26.17) D
,
Batschelet (1965, 1981: 34) defined angular variance: S2 =
2( 1 -
r)
(26.18)
as being a closer analog to linear variance (Equation 4.15). While 52 may range from I, and s2 from 0 to 2, an 52 of loran s2 of 2 does not necessarily indicate a uniform distribution of data around the circle because, as noted previously, r = 0
o to
616
Chapter 26
Circular Distributions:
Descriptive Statistics
2700 f-----"*-----j
900
1800
1800
(a)
(b)
1800
1800
(e)
(d)
2700 f-----"*-----j
900
1800
(e) FIGURE 26.7: Circular distributions with various amounts of dispersion. The direction of the broken-line arrow indicates the mean angle, which is 50° in each case, and the length of the arrow expresses r (Equation 26.8), a measure of concentration. The magnitude of r varies inversely with the amount of dispersion, and that the values of 5 and So vary directly with the amount of dispersion. (a) r = 1.00,5 = 0°,50 = 0°. (b) r = 0.99,5 = 8.10°,50 = 8.12°. (c) r = 0.90,5 = 25.62°,50 = 26.30°. (d) r = 0.60,5 = 51.25°,50 = 57.9P. (e) r = 0.30,5 = 67.79°,50 = 88.91. (f) r = 0.00,5 = 81.03°,50 = 00. (By the method of Section 27.1, the magnitude of r is statistically significant in Figs. a, b. and c, but not in d, e, and t.)
does not necessarily indicate a uniform distribution. The variance measure s~
=
-2lnr
(26.19)
is a statistic that ranges from 0 to 00 (Mardia, 1972a: 24). These three dispersion measures are in radians squared. To express them in degrees squared, multiply each by (180o/1T )2.
Section 26.6
The Median and Modal Angles
617
Measures analogous to the linear standard deviation include the "mean angular deviation," or simply the angular deviation, which is 180 -~2(1 0
S =
r--.---
- r),
(26.20)
1T
in degrees.* This ranges from a minimum of zero (e.g., Fig. 26.7a) to a maximum of 81.03 ° (e.g., Fig. 26.7f). t Mardia (1972a: 24, 74) defines circular standard deviation as 1800
So = -J-21nr
(26.21)
1T
degrees; or, employing common, instead of natural, logarithms: 180
0
So = -~-4.60517logr
(26.22)
1T
degrees. This is analogous to the standard deviation, S, on a linear scale (Section 4.5) in that it ranges from zero to infinity (see Fig. 26.7). For large r, the values of S and So differ by no more than 2 degrees for r as small as 0.80, by no more than 1 degree for r as small as 0.87, and by no more than 0.1 degree for r as small as 0.97. It is intuitively reasonable that a measure of angular dispersion should have a finite upper limit, so S is the dispersion measure preferred in this book. Appendix Tables B.32 and B.33 convert r to S and So, respectively. If the data are grouped, then s and So are biased in being too high, so rc (by Equation 26.15) can be used in place of r. For the data of Example 26.4 (where r = 0.82522), s = 34° and So = 36°; for the data of Example 26.5 (where r = 0.55064), s = 54° and So = 63°. Dispersion measures analogous to the linear mean deviation (Section 4.3) utilize absolute deviations of angles from the mean or median (e.g., Fisher, 1993: 36). Measures of symmetry and kurtosis on a circular scale, analogous to those that may be calculated on a linear scale (Section 6.5), are discussed by Batschelet (1965: 14-15,1981: 54-44); Mardia (1972a: 36-38,74-76), and Mardia and Jupp (2000: 22, 31,145-146). THE MEDIAN AND MODAL ANGLES
In a fashion analogous to considerations for linear scales of measurement (Sections 3.2 and 3.3), we can determine the sample median and mode of a set of data on a circular scale. To find the median angle, we first determine which diameter of the circle divides the data into two equal-sized groups. The median angle is the angle indicated by that diameter's radius that is nearer to the majority of the data points. If n is even, the median is nearly always midway between two of the data. Tn Example 26.2, a diameter extending from 103° to 289 divides the data into two groups of four each (as indicated by the dashed line in Fig. 26.2). The data are concentrated around 103°, rather than 289°, so the sample median is 103°. If n is odd, the median will almost always be one of the data points or 180 opposite from one. If the data in Example 26.2 had been seven in number-with the 45° lacking-then the diameter line would have run through 110° and 290°, and the median would have been 110°. Though uncommon, it is possible for a set of angular data to have more than one angle fit this definition of median. In such a case, Otieno and Anderson-Cook (2003) 0
0
*Simply delete the constant, UlOo/'1T. in this and in the following equations is desired in radians rather than degrees. IThis is a range of 0 to 1.41 radians.
if the measurement
618
Chapter 26
Circular Distributions:
Descriptive Statistics
recommend calculating the estimate of the population median as the mean of the two or more medians fitting the definition. Mardia (1972a: 29-30) shows how the median is estimated, analogously to Equation 3.5, when it lies within a group of tied data. If a sample has the data equally spaced around the circle (as in Fig. 26.7f), then the median, as well as the mean, is undefined. The modal angle is defined as is the mode for linear scale data (Section 3.3). Just as with linear data, there may be more than one mode or there may be no modes.
26.7
CONFIDENCE LIMITS FOR THE POPULATION MEAN AND MEDIAN ANGLES
The confidence limits of the mean of angles may be expressed as
a ± d.
(26.23)
That is, the lower confidence limit is L, = a- d and the upper confidence limit is Lz = a + d. For n as small as 8, the following method may be used (Upton, 1986). For r s 0.9,
d = arccos
and for r
?
R
(26.24)
0.9, (26.25)
where R
=
nr.
(26.26)
This is demonstrated in Examples 26.6 and 27.3.* As this procedure is only approximate, d-and confidence limits-should not be expressed to fractions of a degree. This procedure is based on the von Mises distribution, a circular analog of the normal distribution.' Batschelet (1972: 86; Zar, 1984: 665-666) presents nomograms that yield similar results. * As shown in these examples, a given cosine is associated with two different angles: a and 3600 - a; the smaller of the two is to be used. t Richard von Mises (1883-1953), a physicist and mathematician, was born in the AustroHungarian Empire and moved to Germany, Turkey, and the United States because of two world wars (Geiringer, 1978). He introduced this distribution (von Mises, 1918), and it was called "circular normal" by Gumbel, Greenwood, and Durand (1953), and later by others, because of its similarity to the linear-scale normal distribution. It is described mathematically by Batschelet (1981: 279-282), Fisher (1993: 48-56), Jammalamadaka and SenGupta (2001: 35-42), Mardia (1972a: 122-127), Mardia and Jupp (2000: 36, 68-71, 85-88, 167-173), and Upton and Fingleton (1985: 277-229).
Section 26.8 EXAMPLE 26.6
The 95% Confidence Interval forthe
Axial Data
619
Data of Example 26.4
n=8
a
= 99°
r = 0.82522 R
= nr =
(8)(0.82522)
6.60108
=
X6.05.l = 3.841 Using Equation 26.24:
2n (2R2 - nx;.l) d = arccos
\
4n -
X;.l
R
= arccos
/2( 8) [2( 6.60108)2 - 8( 3.841)] 4(8) - 3.841
-V
6.60108 = arccos 0.89883
= 26° or 360° - 26° = 334°. The 95% confidence interval is 99° ± 26°; L,
=
73° and L2
=
125°.
Confidence limits for the median angle may be obtained by the procedure of Section 24.8. The median is determined as in Section 26.6. Then the data are numbered 1 through n, with 1 representing the datum farthest from the median in a counterclockwise direction and n denoting the datum farthest in a clockwise direction. AXIAL DATA
Although not common, circular data may be encountered that are bimodal and have their two modes diametrically opposite on the circle. An example of such data is shown in Figure 26.8, where there is a group of seven angular data opposite a group of eight data (with the data shown as small black circles). Such measurements are known as axial data, and it is desirable to calculate the angle that best describes the circle diameter that runs through the two groups of data. Determining the mean angle (a) of the diameter in one direction means that the mean angle of the diameter in the other direction is a + 180°. These 15 measurements are given in Example 26.7 and resulted from the following experiment: A river flows in a generally southeasterly-northwesterly direction. Fifteen fish, of a species that prefers shallow water at river edges, were released in the middle of the river. Then it was recorded which direction from the point of release each of the fish traveled.
620
Chapter 26
Circular Distributions: Descriptive Statistics
••
NW
270°
SE
•
FIGURE 26.8: A bimodal circular distribution,
showing the data of Example 26.7.
The Data of Fig. 26.8, and Their Axial Mean
EXAMPLE 26.7
ai (degrees)
modulo 2ai (degrees)
35 40 40 45 45 55 60 215 220 225 225 235 235 240 245
70 80 80 90 90 110 120 70 80 90 90 110 110 120 130 n = 15
sin Za, 0.93964 0.98481 0.98481 1.00000 1.00000 0.93969 0.86602 0.93964 0.98481 1.00000 1.00000 0.93969 0.93969 0.86602 0.76604
0.34202 0.17365 0.17365 0.00000 0.00000 -0.34202 -0.50000 0.34202 0.17365 0.00000 0.00000 -0.34202 -0.34202 -0.50000 -0.64279
2: sin 2ai
2: cos Za,
= 14.15086 Y = -0.94339
r
= 0.94842
sin d = 0.99470 cos z = -0.10290
cos2ai
= -1.46386 X
= -0.09759
Section 26.9
The Mean of Mean Angles
621
The angle (2a) with this sine and cosine is 95.9°; so a = 95.9°/2 = 48°. Also: tan2a = Y/X = -9.66687; and, since C < 0, a = arctan - 9.66687 + 180° = -84.P+ 180 = 95.9°; so a = 48°. 0
The statistical procedure is to calculate the mean angle (Section 26.4) after doubling each of the data (that is, to find the mean of 2ai). It will be noted that doubling of an angle greater than 180 will result in an angle greater than 360°; in that case, 360 is subtracted from the doubled angle. (This results in angles that are said to be "modulo 360°.") The doubling of angles for axial data is also appropriate for calculating other statistics, such as those in Sections 26.5-26.7, and for the statistical testing in Chapter 27. The mean angle of 48° determined in Example 26.7 indicates that a line running from 48 to 228 (that is, from 48 ° to 48 + 180°) is the axis of the bimodal data (shown as a dashed line in Fig. 26.8). 0
0
0
0
0
THE MEAN OF MEAN ANGLES
If a mean is determined for each of several groups of angles, then we have a set of mean angles. Consider the data in Example 26.8. Here, a mean angle, a, has been calculated for each of k samples of circular data, using the procedure of Section 26.4. If, now, we desire to determine the grand mean of these several means, it is not appropriate to consider each of the sample means as an angle and employ the method of Section 26.4. To do so would be to assume that each mean had a vector length, r, of 1.0 (i.e., that an angular deviation, s, of zero was the case in each of the k samples), a most unlikely situation. Instead, we shall employ the procedure promulgated by Batschelet" (1978, 1981: 201-202), whereby the grand mean has rectangular coordinates k
~Xj j='
-
X=~-
(26.27)
k and
(26.28)
k
where X, and Yj are the quantities X and Y, respectively, applying Equations 26.6 and 26.7 to sample j; k is the total number of samples. If we do not have X and Y for each sample, but we have a and r (polar coordinates) for each sample, then k
~
X
rjcosaj
= J_"=_'~~~
(26.29)
k and k
~
Y
rj sinaj
= J_"=_'
_
k
(26.30)
* Batschelet (1981: 1(8) refers to statistical analysis of a set of angles as a first-order analysis and the analysis of a set of mean angles as a second-order analysis.
622
Chapter 26
Circular Distributions:
Descriptive Statistics
Having obtained X and Y, we may substitute them for X and Y, respectively, in Equations 26.8, 26.9, and 26.10 (and 26.11, if desired) in order to determine a, which is the grand mean. For this calculation, all n/s (sample sizes) should be equal, although unequal sample sizes do not appear to seriously affect the results (Batschelet, 1981: 202). Figure 26.9 shows the individual means and the grand mean for Example 26.8. (By the hypothesis testing of Section 27.1, we would conclude that there is in this example no significant mean direction for Samples 5 and 7. However, the data from these two samples should not be deleted from the present analysis.) Batschelet (1981: 144,262-265) discussed confidence limits for the mean of mean angles. EXAMPLE 26.8
The Mean of a Set of Mean Angles
Under particular light conditions, each of seven butterflies is allowed to fly from the center of an experimental chamber ten times. From the procedures of Section 26.4, the values of Zi and r for each of the seven samples of data are as follows. k
= 7;
= 10
n
Sample U)
aj
1 2 3 4 5 6 7
160 169 117 140 186 134 171
X
=
2:
rj
Y
=
2:
rj
0
cos aj k sin Zij
x,
0.8954 0.7747 0.4696 0.8794 0.3922 0.6952 0.3338
-3.69139 7
1.94906
7
. - -- -Y -- 0.27844 -- 0 .46691 sma r 0.59634
Therefore, ii = 152°.
Y=rsinZi } 1
-3.69139
= -0.52734
-- - 088429 .
rj cosaj
0.30624 0.14782 0.41842 0.56527 -0.04100 0.50009 0.05222
+ y2 = JO.35562 ;:::0.59634
cosa- -- -X _ - -0.52734 r 0.59634
=
-0.84140 -0.76047 -0.21319 -0.67366 -0.39005 -0.48293 -0.32969
= 1.94906 = 0.27844
k r = ~X2
rj
}
Exercises
270° ----------i.::---------
623
90°
FIGURE 26.9: The data of Example 26.8. Each of the seven vectors in this sample is itself a mean vector. The mean of these seven means is indicated by the broken line.
EXERCISES ~.1. Twelve nests of a particular bird species were recorded on branches extending in the following directions from the trunks of trees:
(c) Determine 95% confidence limits for the population mean. (d) Determine the sample median direction. 26.2. A total of 15 human births occurred as follows:
Direction N: NE: E: SE:
s.
SW: W: NW:
0° 45°
90° 135° 180° 225° 270° 315°
Frequency 2 4 3 1 1 1
0 0
(a) Compute the sample mean direction. (b) Compute the angular deviation for the data.
1:15 2:00 4:30 6:10
A.M. A.M. A.M. A.M.
4:40 11:00 5:15 2:45
A.M. A.M. A.M. A.M.
5:30 4:20 10:30 3:10
A.M. A.M. A.M.
6:50 5:10 8:55
A.M. A.M. A.M.
A.M.
(a) Compute the mean time of birth. (b) Compute the angular deviation for the data. (c) Determine 95% confidence limits for the population mean time. (d) Determine the sample median time.
C HAP
T E R
27
Circular Distributions: Hypothesis Testing 27.1 27.2 27.3
TESTING SIGNIFICANCE OF THE MEAN ANGLE TESTING SIGNIFICANCE OF THE MEDIAN ANGLE TESTING SYMMETRY AROUND THE MEDIAN ANGLE
27.4 27.5
TWO-SAMPLE AND MULTISAMPLE TESTING OF MEAN ANGLES NONPARAMETRIC TWO-SAMPLE AND MULTISAMPLE TESTING OF ANGLES
27.6 27.7
TWO-SAMPLE TWO-SAMPLE
27.8 27.9 27.10
TWO-SAMPLE AND MULTISAMPLE TESTING OF ANGULAR DISPERSION PARAMETRIC ANALYSIS OF THE MEAN OF MEAN ANGLES NONPARAMETRIC ANALYSIS OF THE MEAN OF MEAN ANGLES
27.11 27.12
PARAMETRIC TWO-SAMPLE ANALYSIS OF THE MEAN OF MEAN ANGLES NONPARAMETRIC TWO-SAMPLE ANALYSIS OF THE MEAN OF MEAN ANGLES
27.13 27.14
PARAMETRIC PAIRED-SAMPLE TESTING WITH ANGLES NONPARAMETRIC PAIRED-SAMPLE TESTING WITH ANGLES
27.15 27.16 27.17
PARAMETRIC ANGULAR CORRELATION AND REGRESSION NONPARAMETRIC ANGULAR CORRelATION GOODNESS-OF-FIT TESTING FOR CIRCULAR DISTRIBUTIONS
27.18
SERIAL RANDOMNESS
AND MULTISAMPLE AND MULTISAMPLE
TESTING OF MEDIAN ANGLES TESTING OF ANGULAR DISTANCES
OF NOMINAL-SCALE
CATEGORIES ON A CIRCLE
in Chapter 26 and the information contained in the basic Armed with the procedures statistics of circular distributions (primarily ii and r). we can now examine a number of methods for testing hypotheses about populations measured on a circular scale. 27.1
TESTING SIGNIFICANCE OF THE MEAN ANGLE (a) The Rayleigh Test for Uniformity. We can place more confidence in ii as an estimate of the population mean angle, fLa, if s is small, than if it is large. This is identical to stating that ii is a better estimate of fL{/ if r is large than if r is small. What is desired is a method of asking whether there is, in fact, a mean direction [or the population of data that were sampled, for even if there is no mean direction (i.e., the circular distribution is uniform) in the population, a random sample might still display a calculable mean. The test we require is that concerning Ho: The sampled population is uniformly distributed around a circle versus H i\: The population is not a uniform circular distribution. This may be tested by the Rayleigh test", As circular uniformity implies there is no mean direction, the Rayleigh test may also be said to test lio: p = 0 versus H A: p *- 0, where p is the population mean vector length. "Named for Lord Rayleigh [John William Strutt, Third Baron Rayleigh (1/142- IlJ IlJ)]. a physicist and applied mathematician who gained his greatest fame for discovering and isolating the chemical clement argon (winning him the obcl Prize in physics in IlJ(4), although some of his other contributions to physics were at least as important (Lindsay. IlJ76). He was a pioneering worker with directional data beginning in 18/10(Fisher, 1993: 10: Moore. 1980: Rayleigh, 19(9).
624
Section 27.1
Testing Significance of the Mean Angle
625
The Rayleigh test asks how large a sample r must be to indicate confidently a nonuniform population distribution. A quantity referred to as "Rayleigh's R" is obtainable as (27.1 ) R = nr, and the so-called "Rayleigh's population mean direction:
z" may be utilized for testing the null hypothesis of no
R2
Z
= -
n
or
z
Appendix Table B.34 presents critical values of of the probability of Rayleigh's R is"
=
nr'".
Za.n.
(27.2)
Also an excellent approximation
(27.4) (derived from Greenwood and Durand, 1955). This calculation is accurate to three decimal places for n as small as 10 and to two decimal places for n as small as 5.t The Rayleigh test assumes sampling from a von Mises distribution, a circular analog of the linear normal distribution. (See von Mises footnote to Section 26.7.) If Hi, is rejected by Rayleigh's test, we may conclude that there is a mean population direction (see Example 27.1), and if Ho is not rejected, we may conclude the population distribution to be uniform around the circle; but only if we may assume that the population distribution does not have more than one mode. (For example, the data in Example 26.7 and Figure 26.8 would result in a Rayleigh test failing to reject Hi; While these data have no mean direction, they are not distributed uniformly around the circle, and they are not unimodal.) EXAMPLE 27.1 Rayleigh's Test for Circular Uniformity, Applied to the Data of Example 26.2 These data are plotted in Figure 26.2. Ho: P = 0 (i.e., the population is uniformly distributed around the circle). HA: P :f. 0 (i.e., the population is not distributed uniformly around the circle ). Following Example 26.4: n = 8 r = 0.82522 R = nr = (8)(0.82522) Z
=
R2
= (6.60176)2 = 5.448.
n 8 Using Appendix Table B.34,
* Recall
= 6.60176
Z005.8 =
2.899. Reject Ho.
O.OOl < P < 0.002
the following notation: (27.3)
t A simpler, but less accurate, approximation for P is to consider Zz as a chi-square with 2 degrees of freedom (Mardia, 1972a: 113; Mardia and Jupp, 2000: 92). This is accurate to two decimal places for n as small as about 15.
626
Chapter 27
Circular Distributions: Hypothesis Testing
27()' f-------------7I':
FIGURE 27.1: The data for the V test of Example 27.2. The broken angle (94°).
line indicates the expected mean
Section 26.8 explains how axially bimodal data-such as in Figure 26.8-can be transformed into unimodal data, thereafter to be subjected to Rayleigh testing and other procedures requiring unimodality. What is known as "Rae's spacing test" (Batschelet, 1981: 66-69; Rao, 1976) is particularly appropriate when circular data are neither unimodal nor axially bimodal, and Russell and Levitin (1994) have produced excellent tables for its use. (b) Modified Rayleigh Test for Uniformity versus a Specified Mean Angle. The Rayleigh test looks for any departure from uniformity. A modification of that test (Durand and Greenwood, 1958; Greenwood and Durand, 1955) is available for use when the investigator has reason to propose, in advance, that if the sampled distribution is not uniform it will have a specified mean direction. In Example 27.2 (and presented graphically in Figure 27.1), ten birds were released at a site directly west of their home. Therefore, the statistical analysis may include the suspicion that such birds will tend to fly directly east (i.e., at an angle of 90°). The testing procedure considers Hr: The population directions are uniformly distributed versus HA: The directions in the population are not uniformly distributed and /La = 90°. By using additional information, namely the proposed mean angle, this test is more powerful than Rayleigh's test (Batschelet, 1972; 1981: 60). The preceding hypotheses are tested by a modified Rayleigh test that we shall refer to as the V test, in which the test statistic is computed as
v where
/LO
= R cos(a - /Lo),
(27.5)
is the mean angle proposed. The significance of V may be ascertained from u = V
I~
"\ n·
(27.6)
Appendix Table B.35 gives critical values of ua.l1, a statistic which, for large sample sizes, approaches a one-tailed normal deviate, Za( I), especially in the neighborhood of probabilities of 0.05. If the data are grouped, then R may be determined from (Equation 26.15) rather than from r.
'c
(c) One-Sample Test for the Mean Angle. The Rayleigh test and the V test are non parametric methods for testing for uniform distribution of a population
Section 27.1
Testing Significance of the Mean Angle
627
EXAMPLE 27.2 The V Test for Circular Uniformity Under the Alternative of Nonuniformity and a Specified Mean Direction H«: The population p = 0).
is uniformly distributed
around the circle (i.e., H«:
HA: The population is not uniformly distributed HA: P *- 0), but has a mean of 90°. a, (deg)
sm c,
cos s,
66 75 86 88 88 93 97 101 118 130
0.91355 0.96593 0.99756 0.99939 0.99939 0.99863 0.99255 0.98163 0.88295 0.76604
0.40674 0.25882 0.06976 0.03490 0.03490 0.05234 0.12187 0.19081 0.46947 0.64279
2: sin a, = 9.49762
2:cosai = -0.67216
n y
around the circle (i.e.,
= 10
= 9.49762 = 0.94976 10
X
= - 0.67216 = -0.06722 10
r = ~( -0.06722)2
+ (0.94976)2 = 0.95214
sin a = 1:::= 0.99751 r
cos z
= .r =
R
=
(10)(0.95214)
=
V
=
R cos ( 94 ° -
90 0)
r
-0.07060
9.5214
= 9.5214cos(4°) = (9.5214 )(0.99756) = 9.498 u
=
v)~
= (9.498) =
12
\110
4.248
Using Appendix Table B.35,
UO.05,10
= 1.648. Reject Ho.
P < 0.0005.
628
Chapter 27
Circular Distributions:
Hypothesis Testing
of data around the circle. (See Batschelet, 1981: Chapter 4, for other tests of the null hypothesis of randomness.) If it is desired to test whether the population mean angle is equal to a specified value, say /LO, then we have a one-sample test situation analogous to that of the one-sample t test for data on a linear scale (Section 7.1). The hypotheses are He: /La
=
HA:
"* /LO,
and /La
/LO
and Ho is tested simply by observing whether /LO lies within the 1 - a confidence interval for /La. If /La lies outside the confidence interval, then Ho is rejected. Section 26.7 describes the determination of confidence intervals for the population mean angle, and Example 27.3 demonstrates the hypothesis-testing procedure.* EXAMPLE 27.3 of Example 27.2
The One-Sample Test for the Mean Angle, Using the Data
He: The population has a mean of 90° (i.e., /La = 90°). HA: The population mean is not 90° (i.e., /La 90°).
"*
The computation r = 0.95
of the following is given in Example 27.2:
7i = 94° Using Equation 26.25, for a = 0.05 and n = 10: R
=
nr
=
(10)(0.95)
=
9.5
X6051 = 3.841
= arccos
(Hi' - 9.52 )e3.R4I/1O
~102
1
9.5
= arccos[0.9744] = 13°, or 360 0
-
Thus, the 95% confidence interval for
13° = 34r.
/La
is 94° ± 13°.
As this confidence interval does contain the hypothesized not reject Hi;
mean (/LO
*For demonstration purposes (Examples 27.2 and 27.3) we have applied the V test and the one-sample test for the mean angle to the same set of data. In practice this would not be done. Deciding which test to employ would depend, respectively, on whether the intention is to test for circular uniformity or to test whether the population mean angle is a specified value.
Section 27.2
Testing Significance
of the Median Angle
629
TESTING SIGNIFICANCE OF THE MEDIAN ANGLE
(a) The Hodges-Ajne Test for Uniformity. A simple alternative to the Rayleigh test (Section 27.1) is the so-called Hodges-Ajne test." which does not assume sampling from a specific distribution. This is called an "omnibus test" because it works well for unimodal, bimodal, and multimodal distributions. If the underlying distribution is that assumed by the Rayleigh test, then the latter procedure is the more powerful. Given a sample of circular data, we determine the smallest number of data that occur within a range of 180°. As shown in Example 27.4, this is readily done by drawing a line through the center of the circle (i.e., drawing a diameter) and rotating that line around the center until there is the greatest possible difference between the numbers of data on each side of the line. If, for example, the diameter line were vertical (i.e., through 0° and 180°), there would be 10 data on one side of it and 14 on the other; if the line were horizontal (i.e., through 90° and 270G), then there would be 3.5 points on one side and 20.5 points on the other; and if the diameter were rotated slightly counterclockwise from horizontal (shown as a dashed line in the figure in Example 27.4), then there would be 3 data on one side and 21 on the other, and no line will split the data with fewer data on one side and more on the other. The test statistic, which we shall call m, is the smallest number of data that can be partitioned on one side of a diameter; in Example 27.4, m = 3. The probability of an m at least this small, under the null hypothesis of circular uniformity, is
p =
(n - 2m) ( n) m 211-1
(n - 2m)
=
n! m!(n
211-1
m)!
(27.7)
(Hodges, 1955), using the binomial coefficient notation of Equation 24.2. Instead of computing this probability, we may refer to Appendix Table B.36, which gives critical values for m as a function of nand 0'. (It can be seen from this table that in order to test at the 5% significance level, we must have a sample of at least nine data.) For n > 50, P may be determined by the following approximation:
[_7T2] --
.j2;
P ~ --exp A where
8A2'
7TJfi
A = 2(n
-
(27.8)
(27.9) 2m)
(Ajne, 1968); the accuracy of this approximation Table B.36.
is indicated at the end of Appendix
(b) Modified Hodges-Ajne Test for Uniformity versus a Specified Angle. Just as (in Section 27.1b) the V test is a modification of the Rayleigh test to test for circular uniformity against an alternative that proposes a specified angle, a test presented by Batschelet (1981: 64-66) is a modification of the Hodges-Ajne test to test "This procedure was presented by Ajne (196S). Shortly thereafter, Bhattacharyya and Johnson (1969) showed that his test is identical to a test given by Hodges (1955) for a different purpose.
630
Chapter 27
Circular Distributions: EXAMPLE 27.4
Hypothesis Testing The Hodges-Ajne
Test for Circular Uniformity
Ho: The population is uniformly distributed around the circle. HA: The population is not uniformly distributed around the circle. This sample of 24 data is collected: 10°,150,250,30°,30°,30°,35°,45°,500,60°, 75°,80°, 100°, 110°,255°,270°,280°,280°,300°,320°,330°,350°,350°,355°
.
.• 9(jD
27(jD
n = 24; m = 3 For a = 0.05, the critical value (from Appendix Table B.36) is mO.05.24 reject u.; 0.002 < P :s 0.005. (n Exact probability
= P
2m)(;)
(n -
=
(24 ___
2m) m!(n n'.
=
4;
m)!
6) 24! --""3 ! =21! = 0.0043 223
For comparison, the Rayleigh test for these r = 0.563, R = 13.513, z = 7.609, P < 0.001.
data
would
yield
a
12°,
nonparametrically for uniformity against an alternative that specifies an angle. For the Batschelet test, we count the number of data that lie within ±90° of the specified angle; let us call this number m' and the test statistic is C = n - m'.
(27.10)
We may proceed by performing a two-tailed binomial test (Section 24.5), with p = 0.5 and with C counts in one category and m' counts in the other. As shown in the figure in Example 27.5, this may be visualized as drawing a diameter line perpendicular to the radius extending in the specified angle and counting the data on either side ofthat line. (c) A Binomial Test. A nonparametric test to conclude whether the population median angle is equal to a specified value may be performed as follows. Count the number of observed angles on either side of a diameter through the hypothesized angle and subject these data to the binomial test of Section 24.5, with p = 0.5.
Section 27.3 EXAMPLE 27.5
Testing Symmetry Around the Median Angle
631
The Batschelet Test for Circular Uniformity
He: The population is uniformly distributed around the circle. H A: The population is not uniformly distributed around the circle, but is concentrated around 4SC). The data are those of Example 27.4. rt = 24; p = 0.5; m' = 19; C = 5
••
,,
,, ,, ,, ,,
/
/ / / / / / / /
'/
,,
,, ,, ,, ,,
-,
180"
For the binomial test of Section 24.5, using Appendix Table B.27, CO.05(2).24
= 6,
reject Ho,0.005 < P :::;0.01; by the procedure shown in Example 24.8a, the exact probability would be P = 0.00661.
TESTING SYMMETRY
AROUND THE MEDIAN ANGLE
The symmetry of a distribution around the median may be tested nonparametrically using the Wilcoxon paired-sample test (also known as the Wilcoxon signed-rank test) of Section 9.5. For each angle (Xi) we calculate the deviation of Xi from the median (i.e., d, = Xi - median), and we then analyze the di's as explained in Section 9.5. This is shown in Example 27.6 for a two-tailed test, where Ho: The underlying distribution is not skewed from the median. A one-tailed test could be used to ask whether the distribution was skewed in a specific direction from the median. (T _ would be the test statistic for Ho: The distribution is not skewed clockwise from the median, and T + would be the test statistic for Ho: The distribution is not skewed counterclockwise from the median.)
EXAMPLE 27.6 Testing for Symmetry Data of Example 27.6
Ho: The underlying distribution H A: The underlying distribution
Around the Median
Angle, for the
is symmetrical around the median. is not symmetrical around the median.
For the 8 data below, the median is 161.5 Using the Wilcoxon signed-rank test:
Cl •
632
Chapter 27
Circular Distributions:
Hypothesis Testing d;
=
Signed rank of
Idil
Xi- median 97° 104° 121 159° 164 172° 195 213
-64.5° -57.5° -40.5° -2.5° 2.5° 10.5° 33.5° 51.5°
0
0
0
0
1.5 + 3 + 4 + 6
=
T_
= 8 + 7 + 5 + 1.5 = 21.5
TO.05(2),8
= 3
Neither T + nor T _ is
27.4
TWO-SAMPLE
-1.5 1.5 3 4 6
14.5
T+
=
-8 -7 -5
8 7 5 1.5 1.5 3 4 6
< TO.05(2),8, so do not reject Hi; P > 0.50
AND MULTISAMPLE TESTING OF MEAN ANGLES
(a) Two-Sample Testing. It is common to consider the null hypothesis H«: ILl = J.L2, where ILl and IL2 are the mean angles for each of two circular distributions (see Example 27.7). Watson and Williams (1956, with an improvement by Stephens, 1972) proposed a test that utilizes the statistic F = K(N
+ R2 - R)
- 2)(RI N -
RI -
R2
(27.11) '
where N = nl + n2. In this equation, R is Rayleigh's R calculated by Equation 27.1 with the data from the two samples being combined; RI and R2 are the values of Rayleigh's R for the two samples considered separately. K is a factor, obtained from Appendix Table B.37, that corrects for bias in the F calculation; in that table we use the weighted mean of the two vector lengths for the column headed r:
= nlrl
r
+ nzrz = -----'-------=RJ + R2 N
W
N
The critical value for this test is F a( J ),1.N t=
\j
IK(N
-2'
Alternatively,
- 2)(Rl N -
(27.12)
+ R2 - R) RI -
(27.13)
R2
may be compared with la(2),N-2' This test may be used for rw as small as 0.75, if N /2 2 25. (Batschelet, 1981: 97, 321; Mardia 1972a: 155; Mardia and Jupp, 2000: 130). The underlying assumptions of the test are discussed at the end of this section.
Section 27.4 EXAMPLE 27.7
The Watson-Williams
Ho:
ILl
=
HA:
ILl
"* IL2
94 65 45 52 38 47 73 82 90 40 87
Y
Test for Two Samples
Sample 2
cos a;
0.99756 0.90631 0.70711 0.78801 0.61566 0.73135 0.95630 0.99027 1.00000 0.64279 0.99863
= 11
a:
-0.06976 0.42262 0.70711 0.61566 0.78801 0.68200 0.29237 0.13917 0.00000 0.76604 0.05234
2: sinai
2: coe a,
= 9.33399
= 4.39556
= 0.84854,
X
77 70 61 45 50 35 48 65 36
n2
= 0.39960
Y
0.22495 0.34202 0.48481 0.70711 0.64279 0.81915 0.66913 0.42262 0.80902
= 0.78585,
rl = 0.93792
r: =
sin a]
=
sin a2
cosal
= 0.42605
X
= 5.12160 = 0.56907
0.97026
al
= 65°
= 0.80994 cos a2 = 0.58651 a2 = 54°
RI
=
R2
0.90470
10.31712
=
16.40664
2:cosai
=
+ 5.12160
=
9.51716
N = 11
+ 9 = 20
4.39556
= 16.40664 = 0.82033 20
= 9.51716 = 0.47586 20
R
0.97437 0.93969 0.87462 0.70711 0.76604 0.57358 0.74314 0.90631 0.58779
= 7.07265
+ 7.07265
r
cos c,
2: sinai
= 9
2: sin a, = 9.33399
X
sinai
(deg)
By combining the twenty data from both samples:
Y
633
IL2
Sample 1 sm e,
a, (deg)
nl
Two-Sample and Multisample Testing of Mean Angles
= 0.94836 = 18.96720
rw = 10.31712
+ 8.73234 = 0.952; K = 1.0251 20
=
8.73234
634
Chapter 27
Circular Distributions:
F = K(N
Hypothesis Testing
-
+ Rz - R)
2)(RI N -
= (1.0351)(20
RI
Rz
-
- 2)(10.31712 + 8.73234 - 18.96720) 20 - 10.31712 - 8.73234
= (1.0351)1.48068 0.95054 = 1.61
= 4.41.
FO.05(1).l,18
Therefore, do not reject Ho. 0.10 < P < 0.25
[P
=
0.22]
Thus, we conclude that the two sample means estimate the same population mean, and the best estimate of this population mean is obtained as sin a
=
1::: r
=
0.86500
cos z = :":'= 0.50177 r
The data may be grouped as long as the grouping interval is:::; 10 See Batschelet (1972; 1981: Chapter 6) for a review of other two-sample testing procedures. Mardia (1972a: 156-158) and Mardia and Jupp (2000: 130) give a procedure for an approximate confidence interval for ILl - ILZ. 0
•
(b) MuItisample Testing. The Watson-Williams test can be generalized to a multisample test for testing Ho: ILl = ILZ = ... = ILk, a hypothesis reminiscent of analysis of variance considerations for linear data (Section 10.1). In multi sample tests with circular data (such as Example 27.8),
(27.14)
Here, k is the number of samples, R is the Rayleigh's R for all k samples combined, an.d N = '2,7= I nj. The correction factor, K, is obtained from Appendix Table B.37, usmg k
k
2: njrj _
r
j=1
-----W
_
N
2: Rj
j=1
N'
(27.15)
The critical value for this test is Fa( 1 ),k-l,N -u- Equation 27.15 (and, thus this test) may be used for rw as small as 0.45 if N / k ;:::6 (Batschelet, 1981: 321; Mardia, 1972a: 163; Mardia and Jupp, 2000: 135). If the data are grouped, the grouping interval should be
Section 27.4
Two-Sample and Multisample Testing of Mean Angles
The Watson-Williams
EXAMPLE 27.8
635
Test for Three Samples
He: All three samples are from populations with the same mean angle. HA: All three samples are not from populations with the same mean angle.
Sample 1 sm e,
a\ (deg) 135 145 125 140 165 170 n\
0.70711 0.57358 0.81915 0.64279 0.25882 0.17365
= 6
ccs a:
a, (deg)
-0.70711 -0.81915 -0.57358 -0.76604 -0.96593 -0.98481
150 130 175 190 180 220
2: sinai
2: cos a,
= 3.17510 /11 = 147 Tl = 0.96150 R\ = 5.76894
= -4.81662
n: = 6
0.50000 0.76604 0.08716 -0.17365 0.00000 -0.64279
Sample 3 sin ai
a, (deg) 140 165 185 180 125 175 140
= 0.53676 /12 = 174 T2 = 0.88053 R2 = 5.28324
= -5.25586
-0.76604 -0.96593 -0.99619 -1.00000 -0.57358 -0.99619 -0.76604
2: cos a,
2:
=7
= 2.36356 /13
=
-6.06397
= 159
T3 =
0
0.92976
R3 = 6.50832 k=3 N
= 6 + 6 + 7 = 19
For all 19 data:
2: sin
a;
=
3.17510
2:cosai = -4.81662 Y = 0.31976 X
= -0.84929
+ 0.53676 + 2.36356 = 6.07542 - 5.25586 - 6.06397
-0.86603 -0.64279 -0.99619 -0.98481 -1.00000 -0.76604
2: cos a,
cos s,
0.64279 0.25882 -0.08715 0.00000 0.81915 0.08716 0.64279 sin a;
cos s,
2: sin a, 0
0
n3
Sample 2 sinai
=
-16.13645
636
Chapter 27
Circular Distributions:
Hypothesis Testing
r = 0.90749 R = 17.24231 5.76894
rw
+ 5.28324 + 6.50832
=
=
0.924
19
F = K-( N_-_k_)----'( L=-R-,---j _-_R_) (k - 1) (N Rj)
L
= (1.0546)(19
- 3)(5.76894 + 5.28324 + 6.50832 - 17.24231) (3 - 1)(19 - 5.76894 - 5.28324 - 6.50832)
= (1.0546)5.09104 2.87900
= 1.86 VI = V2
k -
1
=
2
= N - k = 16
FO.05( 1 ),2,16
=
3.63.
Therefore, do not reject Ho. 0.10 < P < 0.25
[P = 0.19]
Thus, we conclude that the three sample means estimate the same population mean, and the best estimate of that population mean is obtained as sin d =
rr
= 0.35236
cos d = ~ = -0.93587 r
no larger than 10°. Upton (1976) presents an alternative to the Watson-Williams test that relies on X2, instead of F, but the Watson-Williams procedure is a little simpler to use. The Watson-Williams tests (for two or more samples) are parametric and assume that each of the samples came from a population conforming to what is known as the von Mises distribution, a circular analog to the normal distribution of linear data. (See the second footnote in Section 26.7.) In addition, the tests assume that the population dispersions are all the same. Fortunately, the tests are robust to departures from these assumptions. But if the underlying assumptions are known to be violated severely (as when the distributions are not unimodal), we should be wary of their use. In the two-sample case, the nonparametric test of Section 27.5 is preferable to the Watson- Williams test when the assumptions of the latter are seriously unmet. Stephens (1982) developed a test with characteristics of a hierarchical analysis of variance of circular data, and Harrison and Kanji (1988) and Harrison, Kanji, and Gadsden (1986) present two-factor ANOYA (including the randomized block design). Batschelet (1981: 122-126), Fisher (1993: 131-133), Jammalamadake and SenGupta (2001: 128-130), Mardia (1972a: 158-162, 165-166), and Stephens (1972) discuss testing of equality of population concentrations.
Section 27.5 NONPARAMETRIC
Nonparametric
TWO-SAMPLE
Two-Sample and Multisample
Testing of Angles
637
AND MULTISAMPLE TESTING OF ANGLES
If data are grouped-Batschelet (1981: 110) recommends a grouping interval larger than 10° - then contingency-table analysis may be used (as introduced in Section 23.1) as a two-sample test. The runs test of Section 27.18 may be used as a two-sample test, but it is not as powerful for that purpose as are the procedures below, and it is best reserved for testing the hypothesis in Section 27.18. (a) Watson's Test. Among the nonparametric procedures applicable to two samples of circular data (e.g., see Batschelet, 1972, 1981: Chapter 6; Fisher, 1993: Section 5.3; Mardia, 1972a: Section 2.4; Mardia and Jupp, 2000: Section 8.3) are the median test of Section 27.6 and the Watson test. The Watson test, a powerful procedure developed by Watson* (1962), is recommended in place of the Watson-Williams two-sample test of Section 27.4 when at least one of the sampled populations is not unimodal or when there are other considerable departures from the assumptions of the latter test. It may be used on grouped data if the grouping interval is no greater than 5° (Batschelet, 1981: 115). The data in each sample are arranged in ascending order, as demonstrated in Example 27.9. For the two sample sizes, n] and n2, let us denote the ith observation in Sample 1 as ali and the jth datum in Sample 2 as ou. Then, for the data in Example 27.9, all = 35°, a21 = 75°, al2 = 45'\ a22 = 80 and so on. The total number of data is N = n] + n2. The cumulative relative frequencies for the observations in Sample 1 are i/ nl, and those for Sample 2 are jj nz. As shown in the present example, we then define values of dk (where k runs from 1 through N) as the differences between the two cumulative relative frequency distributions. The test statistic, called the Watson U2, is computed as 0
,
(27.16a)
where
d
=
'Zdk/ N; or, equivalently, as
(27.16b)
Critical values of 2 2 Ua,I1I,112 = Ua,112.f11·
U;
11 112
, I,
are given in Appendix Table B.38a bearing in mind that
Watson's U2 is especially useful for circular data because the starting point for determining the cumulative frequencies is immaterial. It may also be used in any situation with linear data that are amenable to Mann-Whitney testing (Section 8.11), but it is generally not recommended as a substitute for the Mann-Whitney test; the latter is easier to perform and has access to more extensive tables of critical values, and the former may declare significance because group dispersions are different.
* Geoffrey Stuart Watson (1921-1998), outstanding
Australian-born
statistician
(Mardia,
1998).
638
Chapter 27
Circular Distributions: Hypothesis Testing
EXAMPLE 27.9
U2 Test for Nonparametric
Watson's
Two-Sample
Testing
He: The two samples came from the same population,
or from two populations having the same direction. The two samples did not come from the same population, or from two populations having the same directions.
HA:
Sample 1
Sample 2 ~i
i aIi (deg)
j
j
a2j (deg)
nl
1 2 3 4 5 6
35 45 50 55 60 70
7
85
8
95
9
105
10
120
nl
n2
0.1000 0.2000 0.3000 0.4000 0.5000 0.6000 0.6000 1 0.6000 2 0.7000 0.7000 3 0.8000 0.8000 4 0.9000 0.9000 5 1.0000 1.0000 6 1.0000 7 1.0000 8 1.0000 9 1.0000 10 1.0000 11
= 10
75 80 90 100 110 130 135 140 150 155 165
= nln2
1011 =
0.1856
Do not reject Ho. 0.10 < P < 0.20
i
j
nl
n2
~
0.1000 0.2000 0.3000 0.4000 0.5000 0.6000 0.5091 0.4182 0.5182 0.4273 0.5273 0.4364 0.5364 0.4454 0.5454 0.4545 0.3636 0.2727 0.1818 0.0909 0.0000
d2k 0.0100 0.0400 0.0900 0.1600 0.2500 0.3600 0.2592 0.1749 0.2685 0.1826 0.2780 0.1904 0.2877 0.1984 0.2975 0.2066 0.1322 0.0744 0.0331 0.0083 0.0000
2. dk = 7.8272 2. d~ =
[2.d~ _ (2.dk)2]
N2
N
= (10)(11) 212
[3.5018 _ (7.8272)2] 21
= 0.1458
U5 os
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0909 0.1818 0.1818 0.2727 0.2727 0.3636 0.3636 0.4546 0.4546 0.5455 0.6364 0.7273 0.8182 0.9091 1.0000
n: = 11
u2
dk =
3.5018
Section 27.5
Nonparametric Two-Sample and Multisample Testing of Angles
639
(b) Watson's Test with Ties. If there are some tied data (i.e., there are two or more observations having the same numerical value), then the Watson two-sample test is modified as demonstrated in Example 27.10. We define tli as the number of data in EXAMPLE 27.10
Watson's
U2 Test for Data Containing
Ties
H«: The two samples came from the same population, or from two populations having the same directions. HA: The two samples did not come from the same population, or from two populations having the same directions. i
mi,
tli
Gli
mli
J
mu t2j m2j
G2j
nl
1 2 3 4
40° 45 50 55
1 2 3 4
1 1 1 1
5
70
1
5
6
80
2
7
7
95
1
8
8 105 9 110 10 120 nl
1 9 2 11 1 12
0.0000 0.0000 0.0833 0.1667 0.2500 0.3333 0.3333 0.3333 0.4167 0.4167 0.5833 0.5833 0.6667 0.6667 0.7500 0.9167 1.0000
1 2
30° 1 35 1
3
50
1
4 5
60 65
1 2
6 7 8
75 80 90
1 1 1
9
100
1
= 12
N = 12
n2
k
1 0.1000 2 0.2000 0.2000 0.2000 3 0.3000 0.3000 4 0.4000 6 0.6000 0.6000 7 0.7000 8 0.8000 9 0.9000 0.9000 10 1.0000 1.0000 1.0000 1.0000
= 10
=
mli
":v~2[L: tkdj
= (12)(10) 222 = 12
= 0.2246
Do not reject P > 0.50
Hi;
0.0612
-
_
m2j
2 dk
tk
-0.1000 -0.2000 -0.1167 -0.0333 -0.0500 0.0333 -0.0667 -0.2677 -0.1833 -0.2833 -0.2167 -0.3167 -0.2333 -0.3333 -0.2500 -0.0833 0.0000
0.0100 0.0400 0.0136 0.0011 0.0025 0.0011 0.0044 0.0711 0.0336 0.0803 0.0470 0.1003 0.0544 0.1111 0.0625 0.0069 0.0000
1 1 1 1 2 1 1 2 1 1 3 1 1 1 1 2 1
2: tkdk
2: tkdi
= -3.5334
=0.8144
nl
+ 10 = 22 U' ~
UJ 0510
d
n2
(2: ~dk )2]
[0.8144 _ (-3.5334)2] 22
n2
640
Chapter 27
Circular Distributions:
Hypothesis Testing
Sample 1 with a value of au and tu as the number of data in Sample 2 that have a value of a2j. Additionally, ms, and m2j are the cumulative number of data in Samples 1 and 2, respectively; so the cumulative relative frequencies are m iii n! and m2j/ n2, respectively. As in Section 27.5a, dk represents a difference between two cumulative distributions; and each tk is the total number of data, in both samples, at each aij. The test statistic is
(27.17)
(c) Wheeler and Watson Test. Another nonparametric test for the null hypothesis of no difference between two circular distributions is one presented by Wheeler and Watson (1964; developed independently by Mardia, 1967). This procedure ranks all N data and for each a calculates what is termed a uniform score or circular rank: d
=
0
(360 )(rankofa). N
(27.18)
This spaces all of the data equally around the circle. Then
c, =
n,
2: cos d,
(27.19)
j=!
and
n,
s,
=
2: sin dj,
(27.20)
j=!
where i refers to either sample 1 or 2; it does not matter which one of the two samples is used for this calculation. The test statistic is W
= 2( N - 1) (CT + Sf).
(27.21)
n]n2
Critical values of W have been published for some sample sizes (Batschelet, 1981: 344; Mardia, 1967; Mardia and Jupp, 2000: 375-376; Mardia and Spurr, 1973). It has also been shown that W approaches a X2 distribution with 2 degrees of freedom for large N. This approximation works best for significance levels no less than 0.025 and has been deemed acceptable by Batschelet (1981: 103) if N is larger than 17, by Fisher (1993: 123) if n] and nz are each at least 10, and by Mardia and Spurr (1973) if N is greater than 20. This approximation should not be used if there are tied data or if the two sample dispersions are very different (Batschelet 1981: 103). An approximation related to the F distribution has been proposed (Mardia, 1967; Mardia and Jupp, 2000: 148-149) as preferable for some sample sizes. This test is demonstrated in Example 27.11. This example shows C, and S, for each of the two samples, but C and S are only needed from one of the samples in order to perform the test.
Section 27.5
Nonparametric Two-Sample and Multisample Testing of Angles
EXAMPLE 27.11 of Example 27.9
The Wheeler
and Watson Two-Sample
641
Test for the Data
Ho: The two samples came from the same population, or from two populations having the same directions. H/I: The two samples did not come from the same population, or from two populations having the same directions. nl
= 1O,n2 = 11, and N = 21
360" = 360~ = 17.14290 N 21 Sample 1 Direction (degrees)
Rank of direction
Sample 2
Circular rank (degrees)
I 2 3 4 5 6
35 45 50 55 60 70
Rank of direction
Circular rank (degrees)
17.14 34.29 51.43 68.57 85.71 102.86
85
9
154.29
95
11
188.57
105
13
222.86
120
15
257.14
= -0.2226 51 = 3.1726 2(N
-
75 80
7 8
120.00 137.14
90
10
171.43
100
12
205.71
110
14
240.00
130 135 140 150 160 165
16 17 18 19 20 21
274.29 291.43 308.57 325.71 342.86 360.00
= 0.2226 52 = -3.1726
C,
W=
Direction (degrees)
C2
1)(CT
+ 5T)
nl/12 2(21 -
1 H( -0.2226)2 (10)(11)
+ (3.1726)2] = 3.678
642
Chapter 27
Circular Distributions: u
=
Hypothesis Testing
2
= 5.991
X6.05.2
Do not reject Ho. 0.10 < P < 0.25 (b) MuItisample Testing. The Wheeler and Watson test may also be applied to more than two samples. The procedure is as before, where all N data from all k samples are ranked and (by Equation 27.18) the circular rank, d, is calculated for each datum. Equations 27.19 and 27.20 are applied to each sample, and
w
= 2
± [Cf
i=l
+
Sf].
(27.22)
n,
Some critical values for k = 3 have been published (Batschelet, 1981: 345; Mardia, 1970b; Mardia and Spurr, 1973). For large sample sizes, W may be considered to approximate X2 with 2( k - 1) degrees of freedom. This approximation is considered adequate by Fisher (1993: 123) if each ru is at least 10 and by Mardia and Spurr (1973) if N is greater than 20. Maag (1966) extended the Watson U2 test to k > 2, but critical values are not available. Comparison of more than two medians may be effected by the procedure in Section 27.6. If the data are in groups with a grouping interval larger than 10, then an r X c contingency table analysis may be performed, for r samples in c groups; see Section 23.1 for this analytical procedure. 27.6
TWO-SAMPLE
AND MULTISAMPLE TESTING OF MEDIAN ANGLES
The following comparison oftwo or more medians is presented by Fisher (1933: 114), who states it as applicable if each sample size is at least 10 and all data are within 90 of the grand median (i.e., the median of all N data from all k samples). If we designate m, to be the number of data in sample i that lie between the grand median 0
k
and the grand median
-
90°, and M = ~ mi, then i=l
N2
----~-' M(N - M)
k
NM
m2
(27.23)
-
i=l
n;
N - M
is a test statistic that may be compared to X2 with k - 1 degrees of freedom.* If "H«: All k population medians are equal" is not rejected, then the grand median is the best estimate of the median of each of the k populations. 27.7
TWO-SAMPLE
AND MULTISAMPLE TESTING OF ANGULAR DISTANCES
Angular distance is simply the shortest distance, in angles, between two points on a circle. For example, the angular distance between 95° and 120 is 25 between 340 and 30 is 50 and between 190 and 5° is 175°. In general, we shall refer to the angular distance between angles al and a2 as dill-il2• (Thus, dY5-120" = 25(\ and so on.) 0
0
,
0
0
0
,
"The same results are obtained if m, is defined as the number of data in sample i that lie between the grand median and the grand median + 9(F.
0
Section 27.7
Two-Sample and Multisample Testing of Angular Distances
643
Angular distances are useful in drawing inferences about departures of data from a specified direction. We may observe travel directions of animals trained to travel in a particular compass direction (perhaps "homeward"), or of animals confronted with the odor of food coming from a specified direction. If dealing with times of day we might speak of the time of a physiological or behavioral activity in relation to the time of a particular stimulus. (a) Two-Sample Testing. If a specified angle (e.g., direction or time of day) is /-LO, we may ask whether the mean of a sample of data, ii, significantly departs from /-LO by testing the one-sample hypothesis, Ho: /-La = /-LO, as explained in Section 27.1c. However, we may have two samples, Sample 1 and Sample 2, each of which has associated with it a specified angle of interest, /-LI and /-L2, respectively (where /-LI and /-L2 need not be the same). We may ask whether the angular distances for Sample 1 (dOli-ILl) are significantly different from those for Sample 2 (da2i-IL2)' As shown in Example 27.12, we can rank the angular distances of both samples combined and then perform a Mann-Whitney test (see Section 8.11). This was suggested by Wallraff (1979). Two-Sample Testing of Angular Distances
EXAMPLE 27.12
Birds of both sexes are transported away from their homes and released, with their directions of travel tabulated. The homeward direction for each sex is 135°. Ho: Males and females orient equally well toward their homes. HA: Males and females do not orient equally well toward their homes. Males Direction traveled
Females
Angular distance
Rank
10° 20 5 10 10 25 5
6 11 2.5 6 6 12.5 2.5
145" 155 130 145 l45 160 140
Direction traveled
Angular distance
Rank
160 135 145 150 125 120
25° 0 10 15 10 15
12.5 1 6 9.5 6 9.5
0
46.5 For the two-tailed Mann-Whitney nl
44.5
test:
= 7, RI = 46.5
n2 = 6, R2 = 44.5 V
=
I11n2
+
V' = nln2 U,0.05(2 ),7,6
nl (nl
+ 1)
2 V = (7)(6)
RI
= (7)(6)
23.5
= 18.5
= V O.05(2 ),6,7 = 36 .
Do not reject Ho.
P > 0.20
+
7(8) 2
- 46.5 = 23.5
644
Chapter 27
Circular Distributions:
Hypothesis Testing
The procedure could be performed as a one-tailed instead of a two-tailed test, if there were reason to be interested in whether the angular distances in one group were greater than those in the other. (b) Multisample Testing. If more than two samples are involved, then the angular deviations of all of them are pooled and ranked, whereupon the Kruskal- Wallis test (Section 10.4) may be applied, followed if necessary by nonpararnetric multiplecomparison testing (Section 11.5). 27.8
TWO-SAMPLE
AND MULTISAMPLE TESTING OF ANGULAR DISPERSION
The Wallraff (1979) procedure of analyzing angular distances (Section 27.7) may be applied to testing for angular dispersion. The angular distances of concern for Sample 1 are dOli-al and those for Sample 2 are da2i-li2' Thus, just as measures of dispersion for linear data may refer to deviations of the data from their mean (Sections 4.3 and 4.4), here we consider the deviations of circular data from their mean. The angular distances of the two samples may be pooled and ranked for application of the Mann-Whitney test, which may be employed for either two-tailed (Example 27.13) or one-tailed testing. EXAMPLE 27.13
Two-Sample
Testing for Angular
Dispersion
The times of day that males and females are born are tabulated. The mean time of day for each sex is determined (to the nearest 5 min) as in Section 26.4. (For males.ji, = 7:55 A.M.; for females, 02 = 8:15 A.M.)
Ho: The times of day of male births are as variable as the times of day of HA:
female births. The times of day of male births do not have the same variability as the times of day of female births. Male
Female
Time of day
Angular distance
Rank
Time of day
Angular distance
Rank
05:10 hr 06:30 09:40 10:20 04:20 11:15
2:45 hr 1:25 1:45 2:25 3:35 3:20
11 4 6 10 13 12
08:15 hr 10:20 09:45 06:10 04:05 07:50 09:00 10:10
0:00 hr 2:05 1:30 2:05 4:10 0:25 0:45 1:55
1 8.5 5 8.5 14 2 3 7 R2 = 49
For the two-tailed Mann-Whitney
test:
nl = 6,R, = 56 n2 = 8, R2 = 49 U
= nln2 +
+ 1)
nl(nl 2
R, = (6)(8)
+ 6(7) - 56 = 13 2
Section 27.9
V' = nln2 VO.05(2).6.8
Parametric Analysis of the Mean of Mean Angles
U = (6)(8)
-
645
13 = 35
= 40.
Do not reject Ho. P = 0.20 If we wish to compare the dispersions of more than two samples, then the aforementioned Mann-Whitney procedure may be expanded by using the Kruskal-Wallis test (Section 10.4), followed if necessary by nonparametric multiple comparisons (Section 11.5). PARAMETRIC ANALYSIS OF THE MEAN OF MEAN ANGLES
A set of n angles, ai, has a mean angle, ii, and an associated mean vector length, r. This set of data may be referred to as a first-order sampLe. A set of k such means may be referred to as a second-order sample. Section 26.9 discussed the computation of the mean of a second-order sample, namely the mean of a set of means. We can also test the statistical significance of a mean of means. For a second-order sample of k mean angles, X can be obtained with either Equation 26.27 or 26.29 and Y with either Equation 26.28 or 26.30. Assuming that the second-order sample comes from a bivariate normal distribution (i.e., a population in which the X/s follow a normal distribution, and the Y/s are also normally distributed), a testing procedure due to Hotelling* (1931) may be applied. The sums of squares and crossproducts of the k means are
(~Xj)2
(27.24)
k (~
Yj
k
)2
(27.25)
and (27.26) where L in each instance refers to a summation over all k means (i.e., L = Lj=I)' Then, we can test the null hypothesis that there is no mean direction (i.e., Ho: p = 0) in the population from which the second-order sample came by using as a test statistic
(27.27)
with the critical value being the one-tailed F with degrees of freedom of 2 and k - 2 (Batschelet, 1978; 1981: 144-150). This test is demonstrated in Example 27.14, using the data from Example 26.8. *Harold Hotelling (1895-1973), American mathematical economist and statistician. He owed his life, and thus the achievements of an impressive career, to a zoological mishap. While attending the University of Washington he was called to military service in World War I and appointed to care for mules. One of his charges (named Dynamite) broke Hotelling's leg, thus preventing the young soldier from accompanying his division to France where the unit was annihilated in battle (Darnell, 1988).
646
Chapter 27
Circular Distributions:
Hypothesis Testing
EXAMPLE 27.14 The Second-Order Analysis for Testing the Significance of the Mean of the Sample Means in Example 26.8 direction (i.e., p = 0). There is a mean population direction (i.e., p -=F 0).
H«: There is no mean population
HA: k
= 7,
X
= -0.52734,
Y
= 0.27844
2:Xj = -3.69139, 2:Xl = 2.27959, 2:x2 = 0.33297 2: Yj = 1.94906, 2: Yl = 0.86474, 2: l = 0.32205 2: x, Yj F = 7(7 -
=
-1.08282,
2) [( -0.52734)2(0.32205)
2 =
2: xy
=
-0.05500
- 2( -0.52734)(0.27844)( -0.055(0) + (0.27844)2(0.33297) (0.33297)(0.32205) - (-0.055(){))2
1 J
16.66
FO.05( 1),2,5
= 5.79
Reject Ho. 0.005 < P < 0.01 And, from Example 26.8, we see that the population mean angle is estimated to be 152°. This test assumes the data are not grouped. The assumption of bivariate normality is a serious one. Although the test appears robust against departures due to kurtosis, the test may be badly affected by departures due to extreme skewness. rejecting a true H« far more often than indicated by the significance level, a (Everitt. 1979; Mardia, 1970a). 27.10
NONPARAMETRIC ANALYSIS OF THE MEAN OF MEAN ANGLES The Hotelling testing procedure of Section 27.9 requires that the k X's come from a normal distribution, as do the k Y's. Although we may assume the test to be robust to some departure from this bivariate normality, there may be considerable nonnormality in a sample, in which case a nonparametric method is preferable. Moore (1980) has provided a non parametric modification of the Rayleigh test, which can be used to test a sample of mean angles; it is demonstrated in Example 27.15. The k vector lengths are ranked, so that rl is the smallest and rk is the largest. We shall call the ranks i (where i ranges from 1 through k) and compute k
X
=
2: iCOSai _i=_I
_
k
(27.28)
k
2: isinai Y =
,--i=--=-I __
k
(27.29) (27.30)
Section 27.11
Parametric Two-Sample Analysis of the Mean of Mean Angles
EXAMPLE 27.15 Nonparametric Second-Order Analysis Direction in the Sample of Means of Example 26.8
for
647
Significant
He: The population from which the sample of means came is uniformly distributed around the circle (i.e., p = 0). H A: The population of means is not uniformly distributed around the circle (i.e., p 0).
*
rank (i)
Sample
r,
1 2 3 4 5 6 7
n,
cas
ai
0.3338 171 0.3922 186 0.4696 117 0.6962 134 0.7747 169 0.8794 140 0.8954 160
0
-0.98769 -0.99452 -0.45399 -0.69466 -0.98163 -0.76604 -0.93969
sin ii,
i cas Zii
i sin ai
0.15643 -0.10453 0.04570 0.71934 0.19081 0.64279 0.34202
-0.98769 -1.98904 -1.36197 -2.77863 -4.90814 -4.59627 -6.57785
0.15643 -0.20906 2.67302 2.87736 0.95404 3.85673 2.39414
-23.19959
12.70266
k=7 X
Y
2: = 2: =
i cas
ai = -23.19959
i sin Zii
= 12.70266 = 1.81467
k
R(J.()S.7
=
7 +k y2 =
R' ~ ~X2
= -3.31423
7
k
+
\j/(-3.31423)2
(1.81467)2
7
J2.03959
= 1.428
1.066
Therefore, reject He: P
< 0.001
The test statistic, R', is then compared Appendix Table B.39.
to the appropriate
critical value, R~Jl' in
(a) Testing with Weighted Angles. The Moore modification of the Rayleigh test can also be used when we have a sample of angles, each of which is weighted. We may then perform the ranking of the angles by the weights, instead of by the vector lengths, r. For example, the data of Example 26.2 could be ranked by the amount of leaning. Or, if we are recording the direction each of several birds flies from a release point, the weights could be the distances flown. (If the birds disappear at the horizon, then the weights of their flight angles are all the same.) PARAMETRIC TWO-SAMPLE
ANALYSIS OF THE MEAN OF MEAN ANGLES
BatscheJet (1978, 1981: 150-154) explained how the Hotelling (1931) procedure of Section 27.9 can be extended to consider the hypothesis of equality of the means of
648
Chapter 27
Circular Distributions:
Hypothesis Testing
two populations of means (assuming each population to be bivariate normal). We proceed as in Section 27.9, obtaining an X and Y for each of the two samples eX,,! and Y I for Sample 1, and X 2 and Y2 for Sample 2). Then, we apply Equations 27.24, 27.25, and 27.26 to each of the two samples, obtaining (2:x2)t, (2:xY)I' and (l:i)t for Sample 1, and (l: (l: and (l: for Sample 2. Then we calculate
x2)2' xY)2'
i)2
(Lx2)1 (Li\ = (LxY)1
(Lx2t (Lit (Lxy)c
(Lx2)2; + (Li)2; + (LxY)2;
(27.31)
+
=
=
(27.32) (27.33)
and the null hypothesis of the two population mean angles being equal is tested by F
=
N - 3
2(~ k,
[(Xl - xd(~Y)c
- 2(Xl - X2)(YI (Lx2))Li)c
+ ~)
- Y2)(LXY)c
+ (Yl
-
YdC~>2)c
- (LXY);
,
k2
(27.34) where N = k, + k2, and F is one-tailed with 2 and N - 3 degrees of freedom. This test is shown in Example 27.16, using the data of Figure 27.2. EXAMPLE 27.16 Parametric Two-Sample Second-Order Analysis for Testing the Difference Between Mean Angles We have two samples, each consisting of mean directions and vector lengths, as shown in Figure 27.2. Sample 1 is the data from Examples 26.8 and 27.14, where k, = 7;
(LX2)1
XI
= -0.52734;
(Li),
= 0.33297;
YI = 0.27844;
111
= 152°;
= 0.32205;
(LxY)l
= -0.05500.
Sample 2 consists of the following 10 data: J
11J
rj
1 2 3 4 5 6 7 8 9 10
115° 127 143 103 130 147 107 137 127 121
0.9394 0.6403 0.3780 0.6671 0.8210 0.5534 0.8334 0.8139 0.2500 0.8746
Applying the calculations of Examples 26.8 and 27.14, we find k2 = 10;
L rj cos aj
X2 = -0.36660;
= -3.66655;
Y2 = 0.54720;
L rj sin aj 112
= 124°.
= 5.47197;
Section 27.12
Nonparametric Two-Sample Analysis ofthe Mean of Mean Angles
(2:x2)2
= 0.20897;
(2:i)2
= 0.49793;
(2:xY)2
649
= -0.05940.
Then, we can test He:
HA:
/1-] = /1-2 (The means of the populations from which these two samples came are equal.) /1-] =F /1-2 (The two population means are not equal.)
+ 10
N = 7
(2:x2)c (2:i)c (2:xy)c F=
= 0.33297 + 0.20897
=
0.54194
= 0.32205 + 0.49793 = 0.81998 + (-0.05940)
= -0.05500
= -0.11440
(17-3) 2
(!7 + ~)10 [-0.52734
-
(-0.36660)]2(0.81998)
2[ -0.52734
x
-
(-0.36660)](0.27844
- 0.54720)( -0.11440)
+ (0.27844 - 0.54720)2(0.54194) (0.54194 )(0.81998)
-
(-0.11440)2
= 4.69 Foose]
),2.]4
= 3.74.
Reject Ho. 0.025 < P < 0.05
[P = 0.028]
The two-sample Hotelling test is robust to departures from the normality assumption (far more so than is the one-sample test of Section 27.9), the effect of nonnormality being slight conservatism (i.e., rejecting a false Ho a little less frequently than indicated by the significance level, a) (Everitt, 1979). The two samples should be of the same size, but departure from this assumption does not appear to have serious consequences (Batschelet, 1981: 202). NON PARAMETRIC TWO-SAMPLE
ANALYSIS OF THE MEAN OF MEAN ANGLES
The parametric test of Section 27.11 is based on sampled populations being bivariate normal and the two populations having variances and covariances in common, unlikely assumptions to be strictly satisfied in practice. While the test is rather robust to departures from these assumptions, employing a non parametric test may be employed to assess whether two second-order populations have the same directional orientation. Batschelet (1978; 1981: 154-156) presented the following non parametric procedure (suggested by Mardia, 1967) as an alternative to the Hotelling test of Section 27.11. First compute the grand mean vector, pooling all data from both samples. Then, the X coordinate of the grand mean is subtracted from the X coordinate of each of the data in both samples, and the Y of the grand mean is subtracted from the Y of each of the data. (This maneuver determines the direction of each datum from the
650
Chapter 27
Circular Distributions: Hypothesis Testing
270° -------t---------
FIGURE 27.2: The data of Example 27.16. The open circles indicate the ends of the seven mean vectors of Sample 1 (also shown in Figure 26.9), with the mean of these seven indicated by the broken-line vector. The solid circles indicate the 10 data of Sample 2, with their mean shown as a solid-line vector. (The "+" indicates the grand mean vector of all seventeen data, which is used in Example 27 .17.)
grand mean.) As shown in Example 27.17, the resulting vectors are then tested by a non parametric two-sample test (as in Section 27.5). This procedure requires that the data not be grouped.
EXAMPLE 27.17 Nonparametric Two-Sample Second-Order Analysis, Using the Data of Example 27.16
Ho: The two samples came from the same population, or from two popuHA:
lations with the same directions. The two samples did not come from the same population, two populations with the same directions.
nor from
Total number of vectors = 7 + 10 = 17 To determine the grand mean vector (which is shown in Figure 27.2): 2>jcosaj
2:
rj
sin aj
= (-3.69139) + (-3.66655) = -7.35794 = 1.94906 + 5.47197 = 7.42103
X = -7.35794 = -0.43282 17 y = 7.42103 = 0.43653 17 X and Yare all that we need to define the grand mean; however, if we wish we can also determine the length and direction of the grand mean vector: r = ~X2
+ y2 = ~( -0.43282)2
cos z = -0.43282 0.61473
= -0.70408
+ (0.43653)2 = 0.61473
Section 27.12
sin ii
=
n=
Nonparametric Two-Sample Analysis of the Mean of Mean Angles
0.43653 0.61473 135°.
=
651
0.71012
Returning to the hypothesis test, we subtract the foregoing X from the X, and the Y from the Y, for each of the 17 data, arriving at 17 new vectors, as follows: Sample 1 Datum
X
1 2 3 4 5 6 7
-0.84140 -0.76047 -0.21319 -0.67366 -0.39005 -0.48293 -0.32969
X-X
Y
-0.40858 -0.32765 0.21963 -0.24084 0.04277 -0.05011 0.10313
0.30624 0.14782 0.41842 0.56527 -0.04100 0.50009 0.05222
y-y
New a
-0.13029 -0.28871 -0.01811 0.12874 -0.47753 0.06356 -0.38431
184° 210 20 137 276 107 290
Sample 2 Datum
X
1 2 3 4 5 6 7 8 9 10
-0.39701 -0.38534 -0.30188 -0.15006 -0.52773 -0.46412 -0.24366 -0.59525 -0.15045 -0.45045
y
X-X 0.03581 0.04748 0.13084 0.28276 -0.09491 -0.03130 0.18916 -0.16243 0.28237 -0.01763
0.85139 0.51137 0.22749 0.65000 0.62892 0.30140 0.79698 0.55508 0.19966 0.74968
y -
Y
0.41485 0.07484 -0.20904 0.21347 0.19239 -0.13513 0.36045 0.11855 -0.23687 0.31315
New a 86° 75 320 48 108 230 68 127 334 92
Now, using Watson's two-sample test (Section 27.5) on these new angles: Sample 1 il n, ali 1
2
20
107
3 4 5
137 184 210
6
276
0.1429 0.1429 0.1429 0.1429 0.1429 0.1429 0.2857 0.2857 0.2857 0.4286 0.5714 0.7143 0.7143 0.8571
Sample 2 j
au
1 2 3 4 5
48 68 75 86 92
6 7
108 127
8
230
j/n2
dk
d~
0.0000 0.1000 0.2000 0.3000 0.4000 0.5000 0.5000 0.6000 0.7000 0.7000 0.7000 0.7000 0.8000 0.8000
0.1429 0.0429 -0.0571 -0.1571 -0.2571 -0.3571 -0.2143 -0.3143 -0.4143 -0.2714 -0.1286 0.0143 -0.0857 0.0571
0.0204 0.0018 0.0033 0.0247 0.0661 0.1275 0.0459 0.0988 0.1716 0.0737 0.0165 0.0002 0.0073 0.0033
652
Chapter 27
Circular Distributions:
7
290
nj
Hypothesis Testing
1.0000 1.0000 1.0000
9 10
320 334
0.8000 0.9000 1.0000
= -1.6998
N = 7
0.0400 0.0100 0.0000 2:.d2k
2:.dk
n: = 10
=7
0.2000 0.1000 0.0000
=
0.7111
+ 10 = 17
U2 ~ n~2
[L + (I - p)(C(/ - CI». For example, let us consider the above example of desiring !-()()'i(2).1.2()()' We calculate p = (100/200 100/260)/(100/200 - IOOj3(0) = 0.6lJ2:then the estimate is Fo OS(2).1.260 = C"" = 5.07 + (I - O.6lJ2)(5.IO5.07) = 5.07 + 0.01 = S.OR. Similarly, we can use harmonic interpolation to estimate the probability (let's call it 1') associated with a test statistic (Ices call it C). If, for example. we wish to know the probability of a chi-square at least as large as X2 = 7.R45, for 7 degrees of freedom (IJ = 7). Table 13.1 shows us that P lies between 0.25 and 0.50. This table tells us that X~ ,07 = 6.346 (let's call this II and use P" to designate the probability, 0.50, associated with
II)
and X~02.,.7
.~ ·lJ.037 (Ices
call this b and use Ph to designate
follows:p = (100/a - 100/C)/(lOO/1l (IS.75RO - 12.7470)/ (15.75RO - 11.0(56) Ph)
= 0.25 + (I - 0.(42) (0.50 - 0.25) = 0.34. That is, harmonic I iarmonic
interpolation
is especially
its probability,
0.25). We proceed
100/h) = (100/6.346 - IOO/n'45)/(IOO/6.346 = 0.642, and the estimated probability is I' = Ph
useful when"
=
interpolation
00. For example.
yields
to determine
-
+
= 7.R45.
X6:14.7
to.O! (2).2XOO'
between to.O! (2).1000 = 2.5RI and fO.O! (! ).e,-, = 2.575R in Appendix Table 13.3, we calculate IOO/2ROO)/( 100/1000 - 100/(0) = O.0643/1l.101l0 = 0.6430. Then, fOO!(2).2XOO = C" 0.(430)(2.SRI - 2.575R) = 2.575R + O.OllllJ = 2.57R. (Note that 100/00 = 0.)
P =
as
100/lJ.(37) (I - p)( I'll which lies
(100/ lOOO 2.575R + (I =
671
a> ....• N
TABLE B.1: Critical Values of the Chi-Square /I
I
a: 0.999
0.995
0.99
0.975
(X2) Distribution
0.95
0.90
0.75
0.50
0.25
0.10
0.05
0.025
0.01
0.005
0.001
0.004 0.103 0.352 0.711 1.145
0.016 0.2l! 0.584 1,064 1.610
0.455 1.386 2.366 3.357 4.351
1.323 2.773 4.108 5.385 6,626
2.706 4.605 6.251 7,779 9.236
3.841 5.991 7.815 9.488 11.070
2.204 2.833 3.490 4,168 4,865
7,841 9,037 10,219 11.389 12,549
10.645 12.017 13.362 14.684 15.987
12.592 14,067 15.507 16,919 18.307
5.024 7.378 9.348 11.143 12,833 14.449 16,013 17.535 19,023 20.483
6.635 9.210 11.345 13.277 15,086
1.635 2.167 2,733 3.325 3,940
0.t02 0.575 1.213 1.923 2.675 3.455 4,255 5.071 5,899 6.737
16,812 18.475 20,090 21.666 23.209
7.879 to.597 12.838 14.860 16.750 18.548 20,278 21.955 23.589 25.188
13.701 14.845 15.984 17.117 18,245
19.675 21.026 22.362 23.685 24,996
21.920 23.337 24.736 26.119 27.488
24.725 26.217 27.688 29.141 30.578
26.757 28.300 29.819 31.319 32.801
10.828 13.816 16.266 18.467 20,515 22.458 24.322 26,124 27,877 29.588 31.264 32.909 34.528 36.123 37,697
26.296 27.587 28.869 30.144 31.410
28,845 30,191 31.526 32.852 34.170 35.479 36.781 38.076 39.364 40.646 41.923 43.195 44.461 45,722 46,979 48.232 49.480 50.725 51.966 53.203 54.437 55.668 56.896 58.120 59.342
32,000 33.409 34.805 36.191 37.566
46,797 48.268 49.728 51.179 52.620
45,642 46.963 48.278 49.588 50.892
34.267 35.718 37.156 38.582 39.997 41.401 42,796 44.181 45.559 46.928 48.290 49.645 50.993 52.336 53.672
52.191 53.486 54.776 56.061 57.342
55.003 56.328 57.648 58.964 60.275
61.098 62.487 63.870 65.247 66.619
58.619 59.893 61.162 62.428 63.691
61.581 62.883 64.181 65.476 66.766
67.985 69.346 70.703 72.055 73.402
64.950 66.206 67.459 68.710 69.957 71.201 72.443
68,053 69.336 70.616 71.893 73.166 74.437 75.704
74,745 76,084 77.419 78,750 80.077 81.400 82.720
'73.6&3
76.969
84.037
1 2 3 4 5
O.fXIO 0.002 0.024 0.091 0.210
0.000 0.010 0.072 0.207 0.412
0.000 0.020 0.115 0.297 0.554
6 7 8 9 10
0.381 0.599 0.857 1.152 1.479
0,676 0,989 1.344 1.735 2.156
0.872 1.239 1.646 2.088 2.558
u
1.834 2,214 2.617 3.041 3.483
2.603 3.074 3.565 4.075 4,601
3.053 3.571 4.107 4.660 5.229
3.816 4.404 5.009 5.629 6.262
4.575 5.226 5.892 6.571 7.261
5.578 6.304 7,042 7.790 8.547
7.584 8.438 9,299 10,165 11.037
5.348 6.346 7.344 8.343 9.342 10.341 11.340 12.340 13.339 14.339
5,142 5,697 6.265 6.844 7.434
5,812 6.408 7.015 7.633 8.260
6.908 7.564 8.231 8.907 9.591
11,912 12.792 13.675 14.562 15.452
15.338 16.338 17.338 18.338 19.337
19.369 20.489 21.605 22.718 23,828
8.897 9.542 10.196 10.856 11.524
10,283 10.982 11.689 12.401 13.120
13,240 14,041 14,848 15.659 16.473
16.344 17.240 18,137 19.037 19.939
20.337 21.337 22,337 23.337 24.337
24.935 26.039 27.141 28.241 29.339
29.615 30.813 32,007 33.196 34,382
32.671 33.924 35.172 36.415 37.652
26 27 28 29 30
9.222 9,803 10.391 to,986 11.588
12.198 12,879 13.565 14,256 14.953
13.844 14.573 15.308 16.047 16.791
15.379 16.151 16.928 17.708 18.493
25.336 26,336 27.336 28.336 29.336
30.435 31.528 32,620 33,711 34,800
15.655 16,362 17,074 17.789 18,509
19.281 20.072 20.867 21.664 22.465
25.390 26.304 27.219 28.136 29.054
30.336 31.336 32.336 33,336 34.336
36 37 38 39 40
15.324 15.965 16.611 17.262 17,916
17.539 18.291 19,047 19.806 20.569 21.336 22.106 22.878 23.654 24.433
25.643 26.492 27.343 28.196 29,051
29.973 30.893 31.815 32.737 33,660
35.336 36.336 37.335 38.335 39.335
35.887 36,973 38.058 39.141 40.223 41.304 42.383 43.462 44.539 45.616
35.563 36.741 37.916 39.087 40.256 41.422 42.585 43.745 44.903 46,059
38.885 40,113 41.337 42.557 43,773
12.196 12.811 13.431 14.057 14.688
17.292 18,114 18,939 19,768 20,599 21.434 22.271 23,110 23.952 24.797
20.843 21.749 22,657 23.567 24.478
31 32 33 34 35
8.034 8,643 9.260 9,886 10.520 11.160 11.808 12.461 13.121 13,787 14.458 15.134 15,815 16,501 17,192
7.962 8.672 9.390 10,117 10.851 11.591 12.338 13.091 13.848 14.611
9.312 10.085 10,865 11.651 12.443
21 22 23 24 25
3.942 4.416 4.905 5.407 5.921 6.447 6,983 7.529 8,085 11.649
17,275 18.549 19,812 21.064 22.307 23.542 24.769 25.989 27.204 28.412
47.212 48,363 49.513 50.660 51.805
50.998 52.192 53.384 54.572 55.758
41 42 43 44 45 46 47
18,576 19.239 19.906 20.576 21.251 21.929 22.610
29,907 30,765 31.625 32.487 33.350 34.215 35.081
34.585 35.510 36.436 37.363 38.291 39.220 40.149
40.335 41.335 42.335 43.335 44.335
52.949 54.090 55.230 56.369 57.505 58.641 59,774
56.942 58,124 59.304 60.481 61.656
45.335 46.335
46,692 47,766 48,840 49,913 50,985 52.056 53.127
62.830 64.001
60.561 61.777 62,990 64.20] 65.410 66.617 67.821
.949
41.079
47.335
54.196
60.907
65.171
69.023
12 13 14 15 16 17 18 19 20
17.887 lR586 19.289 19.996 20,707 21.421 22.138 22.859 23.584 24.311 25,041 25.775
19.233 19.960 20.691 21.426 22,164 22,906 23,650 24.398 25.148 25.901 26.657 27.416
0.001 0.051 0.216 0.484 0.831 1.237 1.690 2.180 2,700 3.247
25.215 25.999 26.785 27.575 28.366 29.160 29.956
23.269 24.075 24.884 25.695 26.509 27.326 28,144 28.965 29.787 30.612 31.439 32.268
44.985 46.194 47.400 48.602 49.802
38.932 40.289 41.638 42.980 44.314
39.252 40,790 42312 43,820 45.315
54.052 55.476 56,892 58,301 59,703
l> -c -c m ::J Q.
x' to
Vl r+
'~. " r+
;::;.
~ --I
'"
c-
m-
'"
:Q. :J C\
~
'a-"
-c
TABLE B.1 (cont.): Critical Values of the Chi-Square
<x2)
Distribution
a: 0.999
0.995
0.99
0.975
0.95
0.90
0.75
0.50
0.25
0.10
0.05
0.025
0.01
0.005
0.001
49 50
23.983 24.674
27.249 27.991
28.941 29.707
31.555 32.357
33.930 34.764
36.818 37.689
42.010 42.942
48.335 49.335
55.265 56.334
62.038 63.167
66.339 67.505
70.222 71.420
74.919 76.154
78.231 79.490
85.351 86.661
51 52 53 54 55
25.368 26.065 26.765 27.468 28.173
28.735 29.481 30.230 30.981 31.735
30.475 31.246 32.019 32.793 33.570
33.162 33.968 34.776 35.586 36.398
35.600 36.437 37.276 38.116 38.958
38.560 39.433 40.308 41.183 42.060
43.874 44.808 45.741 46.676 47.610
50.335 51.335 52.335 53.335 54.335
57.401 58.468 59.534 60.600 6l.665
64.295 65.422 66.548 67.673 68.796
68.669 69.832 70.993 72.153 73.311
72.616 73.810 75.002 76.192 77.380
77.386 78.616 79.843 81.069 82.292
80.747 82.001 83.253 84.502 85.749
87.968 89.272 90.573 91.872 93.168
56 57 58 59 60
28.881 29.592 30.305 31.021 31.738
32.491 33.248 34.008 34.770 35.535
34.350 35.131 35.913 36.698 37.485
37.212 38.027 38.844 39.662 40.482
39.801 40.646 41.492 42.339 43.188
42.937 43.816 44.696 45.577 46.459
48.546 49.482 50.419 51.356 52.294
55.335 56.335 57.335 58.335 59.335
62.729 63.793 64.857 65.919 66.981
69.919 71.040 72.160 73.279 74.397
74.468 75.624 76.778 77.931 79.082
78.567 79.752 80.936 82.117 83.298
83.513 84.733 85.950 87.166 88.379
86.994 88.236 89.477 90.715 91.952
94.461 95.751 97.039 98.324 99.607
61 62 63 64 65
32.459 33.181 33.906 34.633 35.362
36.301 37.068 37.838 38.610 39.383
38.273 39.063 39.855 40.649 41.444
41.303 42.\26 42.950 43.776 44.603
44.038 44.889 45.741 46.595 47.450
47.342 48.226 49.111 49.996 50.883
53.232 54.171 55.110 56.050 56.990
60.335 61.335 62.335 63.335 64.335
68.043 69.104 70.165 71.225 72.285
75.514 76.630 77.745 78.860 79.973
80.232 81.381 82.529 83.675 84.821
84.476 85.654 86.830 88.004 89.177
89.591 90.802 92.010 93.217 94.422
93.186 94.419 95.649 96.878 98.105
100.888 102.166 103.442 104.716 105.988
66 67 68 69 70
36.093 36.826 37.561 38.298 39.036
40.158 40.935 41.713 42.494 43.275
42.240 43.038 43.838 44.639 45.442
45.431 46.261 47.092 47.924 48.758
48.305 49.162 50.020 50.879 51.739
51.770 52.659 53.548 54.438 55.329
57.931 58.872 59.814 60.756 61.698
65.335 66.335 67.335 68.334 69.334
73.344 74.403 75.461 76.519 77.577
81.085 82.197 83.308 84.418 85.527
85.965 87.108 88.250 89.391 90.531
90.349 91.519 92.689 93.856 95.023
95.626 96.828 98.028 99.228 100.425
99.330 100.554 101.776 102.996 104.215
107.258 108.526 109.791 111.055 112.317
71 72 73 74 75
39.777 40.520 41.264 42.010 42.757
44.058 44.843 45.629 46.417 47.206
46.246 47.051 47.858 48.666 49.475
49.592 50.428 51.265 52.103 52.942
52.600 53.462 54.325 55.189 56.054
56.221 57.113 58.006 58.900 59.795
62.641 63.585 64.528 65.472 66.417
70.334 71.334 72.334 73.334 74.334
78.634 79.690 80.747 81.803 82.858
86.635 87.743 88.850 89.956 91.061
91.670 92.808 93.945 95.081 96.217
96.189 97.353 98.516 99.678 100.839
101.621 102.816 104.010 105.202 106.393
105.432 106.648 107.862 109.074 110.286
113.577 114.835 116.092 117.346 118.599
76 77 78 79 80 81 82 83 84 85
43.507 44.258 45.010 45.764 46.520 47.277 48.036 48.796 49.557 50.320
47.997 48.788 49.582 50.376 51.172 51.969 52.767 53.567 54.368 55.170
50.286 51.097 51.910 52.725 53.540 54.357 55.174 55.993 56.813 57.634
53.782 54.623 55.466 56.309 57.153 57.998 58.845 59.692 60.540 61.389
56.920 57.786 58.654 59.522 60.391 61.261 62.132 63.004 63.876 64.749
60.690 61.586 62.483 63.380 64.278 65.176 66.076 66.976 67.876 68.777
67.362 68.307 69.252 70.198 71.145 72.091 73.038 73.985 74.933 75.881
75.334 76.334 77.334 78.334 79.334 80.334 81.334 82.334 83.334 84.334
83.913 84.968 86.022 87.077 88.130 89.184 90.237 91.289 92.342 93.394
92.166 93.270 94.374 95.476 96.578 97.680 98.780 99.880 100.980 102.079
97.351 98.484 99.617 100.749 101.879 103.010 104.139 105.267 106.395 107.522
101.999 103.158 104.316 105.473 106.629 107.783 108.937 1l0.090 111.242 112.393
107.583 108.771 109.958 111.144 112.329 113.512 114.695 115.876 117.057 118.236
111.495 112.704 113.911 115.117 116.321 117.524 118.726 119.927 121.126 122.325
119.850 121.100 122.348 123.594 124.839 126.083 127.324 128.565 129.804 131.041
86 87 88 89 90
51.085 51.850 52.617 53.386 54.155
55.973 56.777 57.582 58.389 59.196
58.456 59.279 60.103 60.928 61.754
62.239 63.089 63.941 64.793 65.647
65.623 66.498 67.373 68.249 69.126
69.679 70.58\ 71.484 72.387 73.291
76.829 77.777 78.726 79.675 80.625
85.334 86.334 87.334 88.334 89.334
94.446 95.497 96.548 97.599 98.650
103.177 104.275 105.372 106.469 107.565
108.648 109.773 110.898 112.022 113.145
113.544 114.693 115.841 116.989 118.136
119.414 120.591 121.767 122.942 124.116
123.522 124.718 125.913 127.106 128.299
132.277 133.512 134.745 135.978 137.208
v
91 92 93 94 95
I
54.926 55.698 56.472 57.246 58.022
60.005 60.815 61.625 62.437 63.250
62.581 63.409 64.238 65.068 65.898
66.501 67.356 68.211 69.068 69.925
70.003 70.882 71.760 72.640 73.520
74.196 75.100 76.006 76.912 77.818
81.574 82.524 83.474 84.425 85.376
90.334 91.334 92.334 93.334 94.334
99.700 100.750 101.800 102.850 103.S99
108.661 109.756 110.850 111.944 113.038
114.268 115.390 116.511 117.632 118.752
119.282 120.427 121.571 122.715 123.858
125.289 126.462 127.633 128.803 129.973
129.491 130.681 131.871 133.059 134.247
138.438 139.666 140.893 142.119 143.344
:t> "0 "0 II> ::::J
a.
x'
ao
..'" ~
~: ;:;. ~ -i Q)
0-
iD
'" Q)
::::J
a. Cl
Q; "0
aen ...•
w
...• '" ..,. TABLE B.1 (cont.): Critical Values of the Chi-Square
v
I
<x
2) Distribution
»
"0 "0
a: 0.999
0.995
0.99
0.975
0.95
0.90
0.75
0.50
0.25
0.10
0.05
0.025
0.01
0.005
0.001
70.783 71.642 72.501 73.361 74.222 75.083 75.946 76.809 77.672 78.536
74.401 75.282 76.164 77.046 77.929 78.813 79.697 80.582 81.468 82.354
78.725 79.633 80.541 81.449 82.358 83.267 84.177 85.088 85.998 86.909
86.327 87.278 88.229 89.181 90.133 91.085 92.038 92.991 93.944 94.897
95.334 96.334 97.334 98.334 99.334 100.334 101.334 102.334 103.334 104.334
104.948 105.997 107.045 108.093 109.141 110.189 111.236 112.284 113.331 114.37R
114.131 115.223 116.315 117.407 118.498
125.000 126.141 127.282 128.422 129.561
135.433 136.619 137.803 138.987 140.169 141.351 142.532 143.712 144.891 146.070
x'
130.700 131.838 132.975 134.111 135.247
131.141 132.309 133.476 134.642 135.807 136.971 138.134 139.297 140.459 141.620
144.567 145.789 147.010 148.230 149.449
119.589 120.679 121.769 122.858 123.947
119.871 120.990 122.108 123.225 124.342 125.458 126.574 127.689 128.804 129.918
150.667 151.884 153.099 154.314 155.528
;:;.
:":> 'a.
96 97 98 99 100 101 102 103 104 105
58.799 59.577 60.356 61.137 61.918
64.063 64.878 65.694 66.510 67.328
62.701 63.484 64.269 65.055 65.841
68.146 68.965 69.785 70.606 71.428
66.730 67.562 68.396 69.230 70.065 70.901 71.737 72.575 73.413 74.252
106 107 108 109 110
66.629 67.418 68.207 68.998 69.790
72.251 73.075 73.899 74.724 75.550
75.092 75.932 76.774 77.616 78.458
79.401 80.267 81.133 82.000 82.867
83.240 84.127 85.015 85.903 86.792
87.821 88.733 89.645 90.558 91.471
95.850 96.804 97.758 98.712 99.666
105.334 106.334 107.334 108.334 109.334
115.424 116.471 117.517 118.563 119.608
125.035 126.123 127.211 128.298 129.385
131.031 132.144 133.257 134.369 135.480
136.382 137.517 138.651 139.784 140.917
142.780 143.940 145.099 146.257 147.414
147.247 148.424 149.599 150.774 151.948
156.740 157.952 159.162 160.372 161.581
111 112 113 114 115
70.582 71.376 72.170 72.965 73.761
76.377 77.204 78.033 78.862 79.692
79.302 80.146 80.991 81.836 82.682
83.735 84.604 85.473 86.342 87.213
87.681 88.570 89.461 90.351 91.242
92.385 93.299 94.213 95.128 96.043
100.620 101.575 102.530 103.485 104.440
110.334 111.334 112.334 113.334 114.334
120.654 121.699 122.744 123.789 124.834
130.472 131.558 132.643 133.729 134.813
136.591 137.701 138.811 139.921 141.030
142.049 143.180 144.311 145.441 146.571
148.571 149.727 150.882 152.037 153.191
153.122 154.294 155.466 156.637 157.808
162.788 163.995 165.201 166.406 167.610
116 117 118 119 120
74.558 75.356 76.155 76.955 77.755
80.522 81.353 82.185 83.018 83.852
83.529 84.377 85.225 86.074 86.923
88.084 88.955 89.827 90.700 91.573
92.134 93.026 93.918 94.811 95.705
96.958 97.874 98.790 99.707 100.624
105.396 106.352 107.307 108.263 109.220
115.334 116.334 117.334 118.334 119.334
125.878 126.923 127.967 129.011 130.055
135.898 136.982 138.066 139.149 140.233
142.138 143.246 144.354 145.461 146.567
147.700 148.829 149.957 151.084 152.211
154.344 155.496 156.648 157.800 158.950
158.977 160.146 161.314 162.481 163.648
168.813 170.016 171.217 172.418 173.617
121 122 123 124 125
78.557 79.359 80.162 80.965 81.770
84.686 85.521 86.356 87.192 88.029
87.773 88.624 89.475 90.327 91.180
92.446 93.320 94.195 95.070 95.946
96.598 97.493 98.387 99.283 100.178
101.541 102.458 103.376 104.295 105.213
110.176 111.133 112.089 113.046 114.004
120.334 121.334 122.334 123.334 124.334
131.098 132.142 133.185 134.228 135.271
141.315 142.398 143.480 144.562 145.643
147.674 148.779 149.885 150.989 152.094
153.338 154.464 155.589 156.714 157.839
160.100 161.250 162.398 163.546 164.694
164.814 165.980 167.144 168.308 169.471
174.816 176.014 177.212 178.408 179.604
126 127 128 129 130
82.575 83.381 84.188 84.996 85.804
88.866 89.704 90.543 91.383 92.223
92.033 92.887 93.741 94.596 95.451
96.822 97.698 98.576 99.453 100.331
101.074 101.971 102.867 103.765 104.662
106.132 107.051 107.971 108.891 109.811
114.961 115.918 116.876 117.834 118.792
125.334 126.334 127.334 128.334 129.334
136.313 137.356 138.398 139.440 140.482
146.724 147.805 148.885 149.965 151.045
153.198 154.302 155.405 156.508 157.610
158.962 160.086 161.209 162.331 163.453
165.841 166.987 168.133 169.278 170.423
170.634 171.796 172.957 174.118 175.278
180.799 181.993 183.186 184.379 185.571
131 132 133 134 135
86.613 87.423 88.233 89.044 89.856
93.063 93.904 94.746 95.588 96.431
96.307 97.163 98.021 98.878 99.736
101.210 102.089 102.968 103.848 104.729
105.560 106.459 107.357 108.257 109.156
110.732 111.652 112.573 113.495 114.417
119.750 120.708 121.667 122.625 123.584
130.334 131.334 132.334 133.334 134.334
141.524 142.566 143.608 144.649 145.690
152.125 153.204 154.283 155.361 156.440
158.712 159.814 160.915 162.016 163.116
164.575 165.696 166.816 167.936 169.056
171.567 172.711 173.854 174.996 176.138
176.438 177.597 178.755 179.913 181.070
186.762 187.953 189.142 190.331 191.520
136 137 138 139
90.669 91.482 92.296 93.111
97.275 98.119 98.964 99.809
100.595 101.454 102.314 103.174 4
105.609 106.491 107.372 108.254 109.137
110.056 110.956 111.857 112.758 113659
115.338 116.261 117.183 118.106 119029
124.543 125.502 126.461 127.421 128380
135.334 136.334 137.334 138.334 139334
146.731 147.772 148.813 149.854 150894
157.518 158.595 159.673 160.750 161.827
164.216 165.316 166.415 167.514 168.613
170.175 171.294 172.412 173.530 174.
177.280 178.421 179.561 180.701
182.226 183.382 184.538 185.693
192.707 193.894 195.080 196.266
--,_,-,--"
'-=r·"'-''':",-~_-."~''',-_,,_o.c-.",,-,c·
_-_-" Cl
ii] "0
:::l"
'"
>-n-
Appendix B
Statistical Tables and Graphs
675
Table B.l was prepared using Equation 26.4.6 of Zelen and Severo (1964). The chi-square values were calculated to six decimal places and then rounded to three decimal places. Examples:
X6.05,12= 21.026
Ai.O,13S= 61.162
and
For large degrees of freedom (v), critical values of X2 can be approximated very well by
(Wilson and Hilferty, 1931). It is for this purpose that the values of Za(l) are given below (from White, 1970). a: 0.999
Za(lf
-3.09023
a: 0.50
0.995
0.99
0.975
0.95
0.90
0.75
-2.57583
-2.32635
-1.95996
-1.64485
-1.28155
-0.67449
0.10
0.25
oms
0.05
0.01
0.005
0.001
Za( If 0.0סס00 0.67449 1.28155 1.64485 1.95996 2.32635 2.57583 3.09023
The percent error, that is, (approximation approximation is as follows: v
a: 0.999
30 -0.7 100 -0.1 ~.O' 140
- true value)/true value x 100%, resulting from the use of this
0.995 0.99 0.975 0.95 0.90 0.75 0.50 0.25 0.10 0.05 0.025 0.01 0.005 0.001 -0.3 -0.2 -0.1 ~.O' ~.O' D.O' ~.O' ~.O' ~.O' ~.O' ~.O' ~.O' D.O' ~.O' ~.O' ~.O' ~.O' ~.O' ~.O' 0.0' ~.O' 0.0' 0.0' ~.O' ~.O' 0.0' 0.0' 0.0' ~.O' 0.0'
~.O' ~.O' 0.0' 0.0' ~.O' ~.O'
OJ 0.0' D.O'
0.2 ~.O' 0.0'
where the asterisk indicates a percent error the absolute value of which is less than 0.05%. Zar (1978) and Lin (1988a) discuss this and other approximations for For one degree of freedom, the X2 distribution is related to the normal distribution (Appendix Table B.2) and the t distribution (Appendix Table B.3) as'
X;
X;,l
V'
= (Za(2))2
=
(ta(2),oot·
For example, X6.05,l = 3.841, and (ZO.05(2)2 = (to.05(2),(x,)2 = (1.9600)2 = 3.8416. The relationship between X2 and F (Appendix Table B.4) is 2 Xa,v --
Forexample,x6.05,9
=
16.919,and (9)(F005(1),9.oo)
F a(l),v,oo'
II
=
(9)(l.88)
=
16.92.
676
Appendix B
Statistical Tables and Graphs TABLE B.2: Proportions
of the Normal Curve (One-Tailed) This table gives the proportion of the normal curve in the right-hand tail that lies at or beyond (i.e., is at least as extreme as) a given normal deviate; for example, Z = I(Xi - f.L)I/U' or Z = I(X - f.L)I/U'x. For example, the proportion of a normal distribution for which Z ~ 1.51 is 0.0655.
z
0
1
2
3
4
5
6
7
8
9
Z
0.0 0.1 0.2 0.3 0.4
0.5000 0.4602 0.4207 0.3821 0.3446
0.4960 0.4562 0.4168 0.3783 0.3409
0.4920 0.4522 0.4129 0.3745 0.3372
0.4880 0.4483 0.4090 0.3707 0.3336
0.4840 0.4443 0.4052 0.3669 0.3300
0.4801 0.4404 0.4013 0.3632 0.3264
0.4761 0.4364 0.3974 0.3594 0.3228
0.4721 0.4325 0.3936 0.3557 0.3192
0.4681 0.4286 0.3897 0.3520 0.3156
0.4641 0.4247 0.3859 0.3483 0.3121
0.0 0.1 0.2 0.3 0.4
0.5 0.6 0.7 0.8 0.9
0.3085 0.2743 0.2420 0.2119 0.1841
0.3050 0.2709 0.2389 0.2090 0.1814
0.3015 0.2676 0.2358 0.2061 0.1788
0.2981 0.2643 0.2327 0.2033 0.1762
0.2946 0.2611 0.2297 0.2005 0.1736
0.2912 0.2578 0.2266 0.1977 0.1711
0.2877 0.2546 0.2236 0.1949 0.1685
0.2843 0.2514 0.2207 0.1922 0.1660
0.2810 0.2483 0.2177 0.1894 0.1635
0.2776 0.2451 0.2148 0.1867 0.1611
0.5 0.6 0.7 0.8 0.9
1.0 1.1 1.2 1.3 1.4
0.1587 0.1357 0.1151 0.0968 0.0808
0.1562 0.1335 0.1131 0.0951 0.0793
0.1539 0.1314 0.1112 0.0934 0.0778
0.1515 0.1292 0.1093 0.0918 0.0764
0.1492 0.1271 0.1075 0.0901 0.0749
0.1469 0.1251 0.1056 0.0885 0.0735
0.1446 0.1230 0.1038 0.0869 0.0721
0.1423 0.1210 0.1020 0.0853 0.0708
0.1401 0.1190 0.1003 0.0838 0.0694
0.1379 0.1170 0.0985 0.0823 0.0681
1.0 1.1 1.2 1.3 1.4
1.5 1.6 1.7 1.8 1.9
0.0668 0.0548 0.0446 0.0359 0.0287
0.0655 0.0537 0.0436 0.0351 0.0281
0.0643 0.0526 0.0427 0.0344 0.0274
0.0630 0.0516 0.0418 0.0336 0.0268
0.0618 0.0505 0.0409 0.0329 0.0262
0.0606 0.0495 0.0401 0.0322 0.0256
0.0594 0.0485 0.0392 0.0314 0.0250
0.0582 0.0475 0.0384 0.0307 0.0244
0.0571 0.0465 0.0375 0.0301 0.0239
0.0559 0.0455 0.0367 0.0294 0.0233
1.5 1.6 1.7 1.8 1.9
2.0 2.1 2.2 2.3 2.4
0.0228 0.0179 0.0139 0.0107 0.0082
0.0222 0.0174 0.0136 0.0104 0.0080
0.0217 0.0170 0.0132 0.0102 0.0078
0.0212 0.0166 0.0129 0.0099 0.0075
0.0207 0.0162 0.0125 0.0096 0.0073
0.0202 0.0158 0.0122 0.0094 0.0071
0.0197 0.0154 0.0119 0.0091 0.0069
0.0192 0.0150 0.0116 0.0089 0.0068
0.0188 0.0146 om 13 O.OOS7 0.0066
0.0183 0.0143 om 10 0.0084 0.0064
2.0 2.1 2.2
2.5 2.6 2.7 2.8 2.9
0.0062 0.0047 0.0035 0.0026 0.0019
0.0060 0.0045 0.0034 0.0025 0.0018
0.0059 0.0044 0.0033 0.0024 0.0018
0.0057 0.0043 0.0032 0.0023 0.0017
0.0055 0.0041 0.0031 0.0023 0.0016
0.0054 0.0040 0.0030 0.0022 0.0016
0.0052 0.0039 0.0029 0.0021 0.0015
0.0051 0.0038 0.0028 0'()021 0.0015
0.0049 0.0037 0.0027 0.0020 0.0014
0.0048 0.0036 0.0026 0.0019 0.0014
2.5 2.6
3.0 3.1 3.2 3.3 3.4
0.0013 0.0010 0.0007 0.0005 0.0003
0.0013 0.0009 0.0007 0.0005 0.0003
0.0013 0.0009 0.0006 0.0005 0.0003
0.0012 0.0009 0.0006 0.0004 0.0003
0.0012 0.0008 0.0006 0.0004 0.0003
0.0011 0.0008 0.0006 0.0004 0.0003
0.0011 0.0008 0.0006 0.0004 0.0003
0.0011 0.0008 0.0005 0.0004 0.0003
0.0010 0.0007 0.0005 0.0004 0.0003
0.0010 0.0007 0.0005 0.0003 0.0002
3.0 3.1 3.2 3.3 3.4
3.5 3.6 3.7 3.8
0.0002 0.0002 0.0001 0.0001
0.0002 0.0002 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 (lOOO1 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
0.0002 0.0001 0.0001 0.0001
3.5 3.6 3.7 3.8
2.3 2.4
2.7 2.8 2.9
Table B.2 was prepared using an algorithm of Hastings (1955: 187). Probabilities for values of Z in between those shown in this table may be obtained by either linear or harmonic interpolation. David (2005) presented a brief history of tables related to the normal distribution. Critical values of Z may be found in Appendix Table B.3 as Zo: = to:.oo. For example, ZO.05(2) = 10.05(2),00 = 1.9600. These critical values are related to those of X2 and F as
(first described by R. A. Fisher in 1924 and published in 1928; Lehmann, 1999). Many computer programs and calculators can generate these proportions. Also, there are many quick and easy approximations. For example, we can compute
Appendix
Statistical
B
Tables and Graphs
677
where setting c = 0.806z( I - 0.01 Rz) (Hamaker. 1978) yields P dependable to the third decimal place for z as small as about 0.2, and using c = zl( 1.237 + 0.0249z) (Lin, 1988b) achieves that accuracy for z as small as about 0.1. Hawkes's (1982) formulas are accurate to within I in the fifth decimal place, though they require more computation.
Table B.3 was prepared using Equations 26.7.3 and 26.7.4 of Zelen and Severo (1964), except for the values at infinity degrees of freedom, which arc adapted from White (1970). Except for the values at infinity degrees of freedom, f was calculated to eight decimal places and then rounded to three decimal places.
Examples: fO.05(2).13
= 2.160
and
fO.OJ(J).JY
= 2.539
If a critical value is needed for degrees of freedom not on this table, one may conservatively employ the next smaller v that is on the table. Or. the needed critical value, for v < 1000, may be calculated by linear interpolation, with an error of no more than 0.001. If a little more accuracy is desired, or if the needed v is > 1000, then harmonic interpolation should be used. Critical values of I for infinity degrees of freedom are related to critical values of Z and X2 as and The accuracy below.
of arithmetic
0(2): 0(1): Arithmetic Harmonic
and harmonic
0.50 D.25 345 234
0.20 0.10
I
a(2).00
interpolation
0.10 0.05
of
f
=
Z a(2)
_!2""' -
VXa.l·
is within 0.002 for vat least as large as that shown
0.05 0.025
O.D2 0.01
0.01 ODDS
0.005 0.0025
0.002 0.001
6
7
7
9
9
4
5
5
6
7
o.ooi (l.OOOS 10 7
678
Appendix
B
Statistical Tables and Graphs TABLE B.3: Critical
Values of the t Distribution
0.20 0.10
0.10 0.05
0.05 0.025
0.02 0.01
0.01 0.005
0.005 0.0025
0.002 0.001
0.001 0.0005
1.000 0.816 0.765 0.741 0.727
3.078 1.886 1.638 1.533 1.476
6.314 2.920 2.353 2.132 2.015
12.706 4.303 3.182 2.776 2.571
31.821 6.965 4.541 3.747 3.365
63.657 9.925 5.841 4.604 4.032
127.321 14.089 7.453 5.598 4.773
318.309 22.327 10.215 7.173 5.893
636.619 31.599 12.924 8.610 6.869
6 7 8 9 10
0.718 0.711 0.706 0.703 0.700
1.440 1.415 1.397 1.383 1.372
1.943 1.895 1.860 1.833 1.812
2.447 2.365 2.306 2.262 2.228
3.143 2.998 2.896 2.821 2.764
3.707 3.499 3.355 3.250 3.169
4.317 4.029 3.833 3.690 3.581
5.208 4.785 4.501 4.297 4.144
5.959 5.408 5.041 4.781 4.587
11 12 13 14 15 16 17 18 19 20
0.697 0.695 0.694 0.692 0.691 0.690 0.689 0.688 0.688 0.687
1.363 1.356 1.350 1.345 1.341 1.337 1.333 1.330 1.328 1.325
1.796 1.782 1.771 1.761 1.753 1.746 1.740 1.734 1.729 1.725
2.201 2.179 2.160 2.145 2.131 2.120 2.110 2.101 2.093 2.086
2.718 2.681 2.650 2.624 2.602 2.583 2.567 2.552 2.539 2.528
3.106 3.055 3.012 2.977 2.947 2.921 2.898 2.878 2.861 2.845
3.497 3.428 3.372 3.326 3.286 3.252 3.222 3.197 3.174 3.153
4.025 3.930 3.852 3.787 3.733 3.686 3.646 3.610 3.579 3.552
4.437 4.318 4.221 4.140 4.073 4.015 3.965 3.922 3.883 3.850
21 22 23 24 25
0.686 0.686 0.685 0.685 0.684
1.323 1.321 1.319 1.318 1.316
1.721 1.717 1.714 1.711 1.708
2.080 2.074 2.069 2.064 2.060
2.518 2.508 2.500 2.492 2.485
2.831 2.819 2.807 2.797 2.787
3.135 3.119 3.104 3.091 3.078
3.527 3.505 3.485 3.467 3.450
3.819 3.792 3.768 3.745 3.725
26 27 28 29 30
0.684 0.684 0.683 0.683 0.683
1.315 1.314 1.313 1.3ll 1.310
1.706 1.703 1.701 1.699 1.697
2.056 2.052 2.048 2.045 2.042
2.479 2.473 2.467 2.462 2.457
2.779 2.771 2.763 2.756 2.750
3.067 3.057 3.047 3.038 3.030
3.435 3.421 3.408 3.396 3.385
3.707 3.690 3.674 3.659 3.646
31 32 33 34 35
0.682 0.682 0.682 0.682 0.682
1.309 1.309 1.308 1.307 1.306
1.696 1.694 1.692 1.691 1.690
2.040 2.037 2.035 2.032 2.030
2.453 2.449 2.445 2.441 2.438
2.744 2.738 2.733 2.728 2.724
3.022 3.Cl15 3.008 3.002 2.996
3.375 3.365 3.356 3.348 3.340
3.633 3.622 3.611 3.601 3.591
36 37 38 39 40
0.681 0.681 0.681 0.681 0.681
1.306 1.305 1.304 1.304 1.303
1.688 1.687 1.686 1.685 1.684
2.028 2.026 2.024 2.023 2.021
2.434 2.431 2.429 2.426 2.423
2.719 2.715 2.712 2.708 2.704
2.990 2.985 2.980 2.976 2.971
3.333 3.326 3.319 3.313 3.307
3.582 3.574 3.566 3.558 3.551
41 42 43 44 45
0.681 0.680 0.680 0.680 0.680
1.303 1.302 1.302 1.301 1.301
1.683 1.682 1.681 1.680 1.679
2.020 2.018 2.017 2.015 2'(114
2.421 2.418 2.416 2.414 2.412
2.701 2.698 2.695 2.692 2.690
2.967 2.963 2.959 2.956 2.952
3.301 3.296 3.291 3.286 3.281
3.544 3.538 3.532 3.526 3.520
46 47 48 49 50
0.680 0.680 0.680 0.680 0.679
1.300 1.300 1.299 1.299 1.299
1.679 1.678 1.677 1.677 1.676
2.013 2.012 2.011 2.010 2.009
2.410 2.408 2.407 2.405 2.403
2.687 2.685 2.682 2.680 2.678
2.949 2.946 2.943 2.940 2.937
3.277 3.273 3.269 3.265 3.261
3.515 3.510 3.505 3.500 3.496
v
a(2): 0.50 a(I): 0.25
1 2 3 4 5
Appendix TABLE B.3 (cont.): Critical
B
Statistical Tables and Graphs
Values of the t Distribution
0.20 0.10
0.10 0.05
0.05 0.025
0.02 0.01
0.01 0.005
0.005 0.0025
0.002 0.001
0.001 0.0005
0.679 0.679 0.679 0.679 0.679
1.298 1.297 1.297 1.296 1.296
1.675 1.674 1.673 1.672 1.671
2.007 2.005 2.003 2.002 2.000
2.400 2.397 2.395 2.392 2.390
2.674 2.670 2.667 2.663 2.660
2.932 2.927 2.923 2.918 2.915
3.255 3.248 3.242 3.237 3.232
3.488 3.480 3.473 3.466 3.460
62 64 66 68 70
0.678 0.678 0.678 0.678 0.678
1.295 1.295 1.295 1.294 1.294
1.670 1.669 1.668 1.668 1.667
1.999 1.998 1.997 1.995 1.994
2.388 2.386 2.384 2.382 2.381
2.657 2.655 2.652 2.650 2.648
2.911 2.908 2.904 2.902 2.899
3.227 3.223 3.218 3.214 3.211
3.454 3.449 3.444 3.439 3.435
72 74 76 78 80
0.678 0.678 0.678 0.678 0.678
1.293 1.293 1.293 1.292 1.292
1.666 1.666 1.665 1.665 l.(i64
1.993 1.993 1.992 1.991 1.990
2.379 2.378 2.376 2.375 2.374
2.646 2.644 2.642 2.640 2.639
2.896 2.894 2.891 2.889 2.887
3.207 3.204 3.201 3.198 3.195
3.431 3.427 3.423 3.420 3.416
82 84 86 88 90
0.677 0.677 0.677 0.677 0.677
1.292 1.292 1.291 1.291 1.291
1.664 1.663 1.663 1.662 1.062
1.989 1.989 1.988 1.987 1.987
2.373 2.372 2.370 2.369 2.368
2.637 2.636 2.634 2.633 2.632
2.885 2.883 2.881 2.880 2.878
3.193 3.190 3.188 3.185 3.183
3.413 3.410 3.407 3.405 3.402
92 94 96 98 100
0.677 0.677 0.677 0.677 0.677
1.291 1.291 1.290 1.290 1.290
1.662 1.661 1.661 1.661 1.660
1.986 1.986 1.985 1.984 1.984
2.368 2.367 2.366 2.365 2.364
2.630 2.629 2.628 2.627 2.626
2.876 2.875 2.873 2.872 2.871
3.181 3.179 3.177 3.175 3.174
3.399 3.397 3.395 3.393 3.390
105 110 115 120 125
0.677 0.677 0.677 0.677 0.676
1.290 1.289 1.289 1.289 1.288
1.659 1.659 1.658 1.658 1.657
1.983 1.982 1.981 1.980 1.979
2.362 2.361 2.359 2.358 2.357
2.623 2.621 2.619 2.617 2.616
2.868 2.865 2.862 2.860 2.858
3.170 3.166 3.163 3.160 3.157
3.386 3.381 3.377 3.373 3.370
130 135 140 145 150
0.676 0.676 0.676 0.676 0.676
1.288 1.288 1.288 1.287 1.287
1.657 1.656 1.656 1.655 1.655
1.978 1.978 1.977 1.976 1.976
2.355 2.354 2.353 2.352 2.351
2.614 2.613 2.611 2.610 2.609
2.856 2.854 2.852 2.851 2.849
3.154 3.152 3.149 3.147 3.145
3.367 3.364 3.361 3.359 3.357
160 170 180 190 200
0.676 0.676 0.676 0.676 0.676
1.287 1.287 1.286 1.286 1.286
1.654 1.654 1.653 1.653 1.653
1.975 1.974 1.973 1.973 1.972
2.350 2.348 2.347 2.346 2.345
2.607 2.605 2.603 2.602 2.601
2.846 2.844 2.842 2.840 2.839
3.142 3.139 3.136 3.134 3.131
3.352 3.349 3.345 3.342 3.340
250 300 350 400 450
0.675 0.675 0.675 0.675 0.675
1.285 1.284 1.284 1.284 1.283
1.651 1.650 1.649 1.649 1.648
1.969 1.968 1.967 1.966 1.965
2.341 2.339 2.337 2.336 2.335
2.596 2.592 2.590 2.588 2.587
2.832 2.828 2.825 2.823 2.821
3.123 3.118 3.114 3.111 3.108
3.330 3.323 3.319 3.315 3.312
500 600 700 800 900
0.675 0.675 0.675 0.675 0.675
1.283 1.283 1.283 1.283 1.282
1.648 1.647 1.647 1.647 1.647
1.965 1.964 1.963 1.963 1.963
2.334 2.333 2.332 2.331 2.330
2.586 2.584 2.583 2.582 2.581
2.820 2.817 2.816 2.815 2.814
3.107 3.104 3.102 3.100 3.099
3.310 3.307 3.304 3.303 3.301
1000 ∞
0.675 0.6745
1.282 1.2816
1.646 1.6449
1.962 1.9600
2.330 2.3263
2.581 2.5758
2.813 2.8070
3.098 3.0902
3.300 3.2905
TABLE B.4: Critical Values of the F Distribution
v1 = Numerator DF = 1; v2 = Denominator DF
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
1 2 3 4 5
5.83 2.57 2.02 1.81 1.69
39.9 8.53 5.54 4.54 4.06
161. 18.5 10.1 7.71 6.61
648. 38.5 17.4 12.2 10.0
6 7 8 9 10
1.62 1.57 1.54 1.51 1.49
3.78 3.59 3.46 3.36 3.29
5.99 5.59 5.32 5.12 4.96
8.81 8.07 7.57 7.21 6.94
11 12 13 14 15
1.47 1.46 1.45 1.44 1.43
3.23 3.18 3.14 3.10 3.07
4.84 4.75 4.67 4.60 4.54
6.72 6.55 6.41 6.30 6.20
16 17 18 19 20
1.42 1.42 1.41 1.41 1.40
3.05 3.03 3.01 2.99 2.97
4.49 4.45 4.41 4.38 4.35
21 22 23 24 25
1.40 1.40 1.39 1.39 1.39
2.96 2.95 2.94 2.93 2.92
26 27 28 29 30
1.38 1.38 1.38 1.38 1.38
35 40 45 50 60
4050. 98.5 34.1 21.2 16.3
16200. 199. 55.6 31.3 22.8
64800. 399. 89.6 45.7 31.4
405000. 999. 167. 74.1 47.2
1620000. 2000. 267. 106. 63.6
13.7 12.2 11.3 10.6 10.0
18.6 16.2 14.7 13.6 12.8
24.8 21.1 18.8 17.2 16.0
35.5 29.2 25.4 22.9 21.0
46.1 37.0 31.6 28.0 25.5
9.65 9.33 9.07 8.86 8.68
12.2 11.8 11.4 11.1 10.8
15.2 14.5 13.9 13.5 13.1
19.7 18.6 17.8 17.1 16.6
23.7 22.2 21.1 20.2 19.5
6.12 6.04 5.98 5.92 5.87
8.53 8.40 8.29 8.18 8.10
10.6 10.4 10.2 10.1 9.94
12.8 12.6 12.3 12.1 11.9
16.1 15.7 15.4 15.1 14.8
18.9 18.4 17.9 17.5 17.2
4.32 4.30 4.28 4.26 4.24
5.83 5.79 5.75 5.72 5.69
8.02 7.95 7.88 7.82 7.77
9.83 9.73 9.63 9.55 9.48
11.8 11.6 11.5 11.4 11.3
14.6 14.4 14.2 14.0 13.9
16.9 16.6 16.4 16.2 16.0
2.91 2.90 2.89 2.89 2.88
4.23 4.21 4.20 4.18 4.17
5.66 5.63 5.61 5.59 5.57
7.72 7.68 7.64 7.60 7.56
9.41 9.34 9.28 9.23 9.18
11.2 11.1 11.0 11.0 10.9
13.7 13.6 13.5 13.4 13.3
15.8 15.6 15.5 15.3 15.2
1.37 1.36 1.36 1.35 1.35
2.85 2.84 2.82 2.81 2.79
4.12 4.08 4.06 4.03 4.00
5.48 5.42 5.38 5.34 5.29
7.42 7.31 7.23 7.17 7.08
8.98 8.83 8.71 8.63 8.49
10.6 10.4 10.3 10.1 9.96
12.9 12.6 12.4 12.2 12.0
14.7 14.4 14.1 13.9 13.5
70 80 90 100 120
1.35 1.34 1.34 1.34 1.34
2.78 2.77 2.76 2.76 2.75
3.98 3.96 3.95 3.94 3.92
5.25 5.22 5.20 5.18 5.15
7.01 6.96 6.93 6.90 6.85
8.40 8.33 8.28 8.24 8.18
9.84 9.75 9.68 9.62 9.54
11.8 11.7 11.6 11.5 11.4
13.3 13.2 13.0 12.9 12.8
140 160 180 200 300
1.33 1.33 1.33 1.33 1.33
2.74 2.74 2.73 2.73 2.72
3.91 3.90 3.89 3.89 3.87
5.13 5.12 5.11 5.10 5.07
6.82 6.80 6.78 6.76 6.72
8.14 8.10 8.08 8.06 8.00
9.48 9.44 9.40 9.38 9.30
11.3 11.2 11.2 11.2 11.0
12.7 12.6 12.6 12.5 12.4
500 ∞
1.33 1.32
2.72 2.71
3.86 3.84
5.05 5.02
6.69 6.64
7.95 7.88
9.23 9.14
11.0 10.8
12.3 12.1
TABLE B.4 (cont.): Critical Values of the F Distribution
v1 = Numerator DF = 2; v2 = Denominator DF
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
1 2 3 4 5
7.50 3.00 2.28 2.00 1.85
49.5 9.00 5.46 4.32 3.78
200. 19.0 9.55 6.94 5.79
800. 39.0 16.0 10.6 8.43
5000. 99.0 30.8 18.0 13.3
6 7 8 9 10
1.76 1.70 1.66 1.62 1.60
3.46 3.26 3.11 3.01 2.92
5.14 4.74 4.46 4.26 4.10
7.26 6.54 6.06 5.71 5.46
10.9 9.55 8.65 8.02 7.56
11 12 13 14 15
1.58 1.56 1.55 1.53 1.52
2.86 2.81 2.76 2.73 2.70
3.98 3.89 3.81 3.74 3.68
5.26 5.10 4.97 4.86 4.77
16 17 18 19 20
1.51 1.51 1.50 1.49 1.49
2.67 2.64 2.62 2.61 2.59
3.63 3.59 3.55 3.52 3.49
21 22 23 24 25
1.48 1.48 1.47 1.47 1.47
2.57 2.56 2.55 2.54 2.53
26 27 28 29 30
1.46 1.46 1.46 1.45 1.45
35 40 45 50 60
20000. 199. 49.8 26.3 18.3
80000. 399. 79.9 38.0 25.0
500000. 999. 149. 61.2 37.1
2000000. 2000. 237. 87.4 49.8
14.5 12.4 11.0 10.1 9.43
19.1 15.9 13.9 12.5 11.6
27.0 21.7 18.5 16.4 14.9
34.8 27.2 22.7 19.9 17.9
7.21 6.93 6.70 6.51 6.36
8.91 8.51 8.19 7.92 7.70
10.8 10.3 9.84 9.47 9.17
13.8 13.0 12.3 11.8 11.3
16.4 15.3 14.4 13.7 13.2
4.69 4.62 4.56 4.51 4.46
6.23 6.11 6.01 5.93 5.85
7.51 7.35 7.21 7.09 6.99
8.92 8.70 8.51 8.35 8.21
11.0 10.7 10.4 10.2 9.95
12.7 12.3 11.9 11.6 11.4
3.47 3.44 3.42 3.40 3.39
4.42 4.38 4.35 4.32 4.29
5.78 5.72 5.66 5.61 5.57
6.89 6.81 6.73 6.66 6.60
8.08 7.96 7.86 7.77 7.69
9.77 9.61 9.47 9.34 9.22
11.2 11.0 10.8 10.6 10.5
2.52 2.51 2.50 2.50 2.49
3.37 3.35 3.34 3.33 3.32
4.27 4.24 4.22 4.20 4.18
5.53 5.49 5.45 5.42 5.39
6.54 6.49 6.44 6.40 6.35
7.61 7.54 7.48 7.42 7.36
9.12 9.02 8.93 8.85 8.77
10.3 10.2 10.1 9.99 9.90
1.44 1.44 1.43 1.43 1.42
2.46 2.44 2.42 2.41 2.39
3.27 3.23 3.20 3.18 3.15
4.11 4.05 4.01 3.97 3.93
5.27 5.18 5.11 5.06 4.98
6.19 6.07 5.97 5.90 5.79
7.14 6.99 6.86 6.77 6.63
8.47 8.25 8.09 7.96 7.77
9.52 9.25 9.04 8.88 8.65
70 80 90 100 120
1.41 1.41 1.41 1.41 1.40
2.38 2.37 2.36 2.36 2.35
3.13 3.11 3.10 3.09 3.07
3.89 3.86 3.84 3.83 3.80
4.92 4.88 4.85 4.82 4.79
5.72 5.67 5.62 5.59 5.54
6.53 6.46 6.41 6.37 6.30
7.64 7.54 7.47 7.41 7.32
8.49 8.37 8.28 8.21 8.10
140 160 180 200 300
1.40 1.40 1.40 1.40 1.39
2.34 2.34 2.33 2.33 2.32
3.06 3.05 3.05 3.04 3.03
3.79 3.78 3.77 3.76 3.73
4.76 4.74 4.73 4.71 4.68
5.50 5.48 5.46 5.44 5.39
6.26 6.22 6.20 6.17 6.11
7.26 7.21 7.18 7.15 7.07
8.03 7.97 7.93 7.90 7.80
500 ∞
1.39 1.39
2.31 2.30
3.01 3.00
3.72 3.69
4.65 4.61
5.35 5.30
6.06 5.99
7.00 6.91
7.72 7.60
TABLE B.5: Critical Values of the q Distribution, for the Tukey Test
a = 0.01
k: 11 12 13 14
v = 1: 253.2 260.0 266.2 271.8
v = 2: 32.59 33.40 34.13 34.81
v = 3: 17.13 17.53 17.89 18.22
v = 4: 12.57 12.84 13.09 13.32
v = 5: 10.48 10.70 10.89 11.08
0.23875 0.38345
0.35477 0.53584
0.41811 0.63150
0.46702 0.70760
0.53456 0.78456
0.57900 0.82900
0.61428 0.86428
0.63000 0.90000
0.67063 0.92063
0 I
0.23261 0.33126
0.33435 0.46154
0.39075 0.53829
0.44641 0.60468
0.50495 0.68377
0.54210 0.71409
0.57722 0.77639
0.62216 0.82217
0.65046 0.H5047
0 I
0.315511 0041172
O.37W) 0048153
0.30244 0.37706
0.35522 0044074
0.42174 0.34273 0.40045 0.49569
OA7('i')2
I
0.22!i115 0.29930 0.21803 0.27516
0.61133 0.45440 0.55969
0.51576 0.63692 0.48988 0.60287
0.54981 0.69887 0.52240 0.64 I «t
0.58914 0.74881 0.56231 0.68777
0.616H2 0.76133 0.58954 0.7 I9lili
7
0 I
0.20935 0.25645
0.28991 0.35066
0.33905 0.40892
0.38294 0.46010
0.43337 0.51968
0.46761 0.55970
0.49932 0.59646
0.53714 0.64081
0.56345 0.67126
8
0 I 0 I
0.20148 0.24149 0.19475 0.22919
0.27828 0.32925 0.26794 0.31157
0.32538 0.38365 0.31325 0.36287
0.36697 0.43160 0.35277 0.40794
0.41522 0.48732 0.39922 0.46067
0.44819 0.52119 0.43071 0.49652
0.47834 0.56000 0.45983 0.52953
0.51499 0.60194 0.49525 0.56963
0.54065 0.63114 0.51997 0.59760
10
0 1
0.18913 0.21881
0.25884 0.29668
0.30221 0.34525
0.34022 0.38798
0.38481 0.43809
0.41517 0.47220
0.44329 0.50373
0.47747 0.54207
0.50148 0.56889
11
0 1 0 1
0.18381 0.20981 0.17878 0.20190
0.25071 0.28388 0.24325 0.27269
0.29227 0.33008
0.37187 0.41864
0.28330 0.31686
0.32894 0.37084 0.31869 0.35588
0.36019 0.40167
0.40122 0.45127 0.38856 0.43208
0.42835 0.48146 0.41484 0.46197
0.46147 0.51823 0.44696 0.49737
0.48475 0.54408 0.46954 0.52235
13
0 I
0.17410 0.19487
0.23639 0.26279
0.27515 0.30520
0.30935 0.34265
0.34954 0.38668
0.37703 0.41680
0.40254 0.44475
0.43372 0.47889
0.45170 0.50292
14
0 I 0 I
0.16976 0.18859 0.16575 0.18293
0.23010 0.25395 0.22430 0.24600
0.26767 0.29478 0.26077 0.28541
0.30081 0.33086
0.36649 0.40238
0.39127 0.42936
0.29296 0.32026
0.33980 0.37331 0.33085 0.36128
0.35679 0.38940
0.38090 0.41551
0.42159 0.46237 0.41043 0.44748
0.44298 0.48563 0.43129 0.47001
16
0 I
0.16204 0.17778
O.2IS'!S 0.23879
O.2343lJ O.27f>92
0.28570 0.31065
0.32256 0.35039
0.34784 0.37764
0.37132 0.40296
0.40012 0.43398
0.42043 0.45591
17
0 I 0 I
0.15859 O.1930H 0.15537 0.16H75
0.21397 0.23221 0.20933 0.22617
0.24847 0.26918 0.24296 0.26208
0.27897 0.30189 0.27270 0.29386
0.31489 0.34045 0.30773 0.33134
0.33953 0.36691
0.3'i055 0.42168
0.33181 0.35707
0.36245 0.39151 0.35419 0.38101
0.38164 0.41037
0.41041 0.44298 0.40106 0.43114
19
0 I
0.15235 0.16475
0.20498 0.22060
0.23781 0.25553
0.26685 0.28646
0.30108 0.32295
0.32459 0.34801
0.34647 0.37133
0.37332 0.39995
0.39235 0.42020
20
0
21
1 0 1
0.14948 0.16104 0.14678 0.15758
0.20089 0.21544 0.19705 0.21064
0.23298 0.24947 0.22844 0.24384
0.26137 0.27961 0.25622 0.27325
0.29484 0.31518 0.28898 0.30796
0.31784 0.33962 0.31149 0.33182
0.33924 0.36237 0.33246 0.35404
0.36633 0.39031 0.35822 0.38134
0.38414 0.41008 0.37645 0.40067
22
0 I
0.14422 0.15435
0.19343 0.20616
0.22416 0.23859
0.25136 0.267-'2
0.28346 0.30123
0.30552 0.32456
0.32607 0.34628
0.35133 0.37298
0.36921 0.39189
23
0 I
0.14179 0.15132 0.13949 0.14848
0.27825 0.29494 0.27333 0.28904
0.34483 0.36516
I
0.24679 0.26176 0.24245 0.25656
0.32005 0.33902
()
0.22012 0.23367 0.21630 0.22906
0.299~9 0.31776
24
0.19001 0.20197 0.18677 0.19804
0.29456 0.31138
0.31435 0.33221
0.33868 0.35782
0.36239 0.38365 0.35593 0.37597
0 I 0 I
0.13730 0.14579 0.13522 0.14326
0.18370 0.19433 0.18077 0.19084
0.21268 0.21472 0.20924 0.22063
0.23835 0.25166 0.23445 0.24704
0.26866 0.28349 0.26423 0.27825
0.28951 0.30539 0.28472 0.29973
0.30895 0.32580 0.30382 0.31980
0.33285 0.35091 0.32733 0.34441
0.34980 0.36875 0.34398 0.36187
0 1
0.13323 0.14086
0.17799 0.18753
0.20596 0.21676
0.23074 0.24267
0.26001 0.27330
0.28016 0.29439
0.29895 0.31403
0.32206 0.33823
0.33845 0.35541
TABLE B.10 (cont.): Critical Values of Dδ for the δ-Corrected Kolmogorov-Smirnov Goodness-of-Fit Test for Continuous Distributions
n δ
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
25 26 27
O.J3LH O.J3X5g
0.17533 0.18440
0.20283 0.21309
0.22721 0.23853
0.25600 0.26861
0.27582 0.28933
0.29430 0.30604
0.31704 0.33242
0.33319 0.34927
0 1 0 1
0.12951 0.13041 0.12777 0.13435
0.17280 0.18142 0.17037 0.17859
0.19985 0.20961 0.19700 0.20630
0.22383 0.23461 0.22061 0.23088
0.25217 0.26417 0.24851 0.25994
0.27168 0.28452 0.26772 0.27996
0.28987 0.30351 0.28564 0.29863
0.31227 0.32688 0.30770 0.32162
0.32818 0.34349 0.32335 0.33794
31
0 I
0.12610 0.13238
0.16805 0.17589
0.19427 0.20314
0.21752 0.22732
0.24501 0.25391
0.26393 0.27561
0.28157 0.29399
0.30333 0.31662
0.31875 0.33268
32
0 1 0 1
0.12450 0.13051 0.12295 0.12871
0.16582 0.17332 0.16368 0.17086
0.19166 0.20014 0.18915 0.19726
0.21457 0.22393 0.21173 0.22069
0.24165 0.25207 0.23843 0.24840
0.26030 0.27146 0.25683 0.26750
0.27771 0.28955 0.27399 0.28532
0.29915 0.31184 0.29513 0.30728
0.31436 0.32764 0.31015 0.32286
34
0 1
0.12147 0.12699
0.16162 0.16850
0.18674 0.19451
0.20901 0.21759
0.23534 0.24490
0.25348 0.26371
0.27042 0.28127
0.29127 0.30296
0.30609 0.31827
35
0 1 0 1
0.12004 0.12534 0.11866 0.12375
0.15964 0.16625 0.15774 0.16408
0.18442 0.19188 0.18218 0.18935
0.20639 0.21462 0.20387 0.21278
0.23237 0.24154 0.22951 0.23831
0.25027 0.26008
0.28757 0.29873 0.28401 0.29472
0.30219 0.3\388
0.24718 0.25660
For n > 100, we may use the normal approximation (Section 9.5). The accuracy of this approximation is expressed below as follows: critical T from approximation − true critical T. In parentheses is the accuracy of the approximation with the continuity correction.
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.001
n = 20: 0(0) 1(1) 0(-1) 0(0) -1(-1) -2(-2) -3(-3) -5(-5)
n = 40: 1(1) 1(1) 0(0) 0(-1) -2(-2) -2(-3) -3(-4) -7(-8)
n = 60: 1(0) 1(1) 0(-1) 1(0) -2(-2) -2(-3) -4(-4) -9(-9)
n = 80: 1(0) 1(1) 1(1) 1(1) -2(-2) -4(-4) -5(-5) -10(-10)
n = 100: 1(1) 1(0) 1(1) -1(-1) -2(-3) -4(-4) -6(-7) -11(-11)
The accuracy of the t approximation of Section 9.5 is similarly expressed here:
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.001
n = 20: 0(-1) 0(0) 0(0) 1(0) 1(1) 2(1) 2(1) 4(3)
n = 40: 1(0) 0(-1) 1(0) 1(0) 1(1) 2(2) 3(3) 5(4)
n = 60: 1(0) 0(0) 1(0) 1(1) 2(1) 3(2) 4(3) 6(5)
n = 80: 0(0) 0(0) 0(0) 1(0) 2(2) 3(2) 4(4) 7(7)
n = 100: 0(-1) 0(-1) 1(0) 0(0) 2(2) 3(3) 4(4) 8(7)
TABLE B.13: Critical Values of the Kruskal-Wallis H Distribution
n1 n2 n3 n4
a: 0.10 0.05 0.02 0.01 0.005 0.002 0.001
4.714 5.143 5.361
6.250
7.200
7.200
2 3 3 3 3
2 2 2 3 3
2 1 2 1 2
4.571 4.286 4.500 4.571 4.556
3 4 4 4 4
3 2 2 3 3
3 1 2 1
5.600
6.489
5.333 5.208 5.444
6.000
2
4.622 4.500 4.458 4.056 4.511
6.144
6.444
7.000
4 4 4 4 4
3 4 4 4 4
3 I 2 3 4
4.709 4.167 4.555 4.545 4.654
5.791 4.967 5.455 5.598 5.692
6.564 6.667 6.600 6.712 6.962
6.745 6.667 7.036 7.144 7.654
7.318
8.018
7.282 7.598 8.000
7.855 8.227 8.654
8.909 9.269
5 5 5 5 5
2 2 3 3 3
I 2 I 2 3
4.200 4.373 4.018 4.651 4.533
5.000 5.160 4.960 5.251 5.648
6.000 6.044 6.124 6.533
6.533 6.909 7.079
7.182 7.636
8.048
8.727
5 5 5 5 5
4 4 4 4 5
I 2 3 4 1
3.987 4.541 4.549 4.619 4.109
4.985 5.273 5.656 5.657 5.127
6.431 6.505 6.676 6.953 6.145
6.955 7.205 7.445 7.760 7.309
7.364 7.573 7.927 8.189 8.182
8.114 8.481 8.868
8.591 8.795 9.168
5 5 5 5 6
5 5 5 5 I
2 3 4 5 1
4.623 4.545 4.523 4.940
5.338 5.705 5.666 5.780
6.446 6.866 7.000 7.220
7.338 7.578 7.823 8.000
8.131 8.316 8.523 8.780
6.446 8.809 9.163 9.620
7.338 9.521 9.606 9.920
6 6 0 6 6
2 2 3 3 3
I 2 I 2 3
4.200 4.545 3.909
6.182 6.236 6.227 6.590
6.982
4.682 4.538
4.822 5.345 4.855 5.348 5.615
6.970 7.410
7.515 7.872
8.182 8.628
9.346
6 6 6 0 6
4 4 4 4 5
1 2 3 4 1
4.038 4.494 4.604 4.595 4.128
4.947 5.340 5.610 5.681 4.990
6.174 6.571 6.725 6.900 6.138
7.100 7.340 7.500 7.795 7.182
7.614 7.846 8.033 8.381 8.077
8.494 8.918 9.167 8.515
8.827 9.170 9.861
6 6 6 6 6
5 5 5 5 6
2 3 4 5 1
4.596 4.535 4.522 4.547 4.000
5.338 5.602 5.661 5.729 4.945
6.585 6.829 7.018 7.110 6.286
7.376 7.590 7.936 8.028 7.121
8.196 8.314 8.643 8.859 8.165
8.967 9.150 9.458 9.771 9.077
9.189 9.669 9.960 10.271 9.692
6 6 6 6 6
6 6 6 6 6
2 3 4 5 6
4.438 4.558 4.548 4.542 4.643
5.410 5.625 5.724 5.765 5.801
6.667 6.900 7.107 7.152 7.240
7.467 7.725 8.124 8.222
8.210 8.458 8.754 8.987 9.170
9.219 9.458 9.662 9.948 10.187
9.752 10.150 10.342 10.524 10.889
7 8 2 2 2 3 3
7 8 2 2 2 1 2 2 2 3
7 8 1 2 2 1 1 2 2 1
4.594 4.595
5.819 5.H05
7.332 7.355
8.378 8.465
9.373 9.495
10.516 10.805
11.310 11.705
5.357 5.667
5.679 6.167
6.667
6.667
5.143 5.556 5.544 5.333
5.833 6.333 6.333
6.500 6.978
7.133
3 3 3
1 1 2 1 1 1 2 1
smo
7.533
TABLE B.13 (cont.): Critical Values of the Kruskal-Wallis H Distribution
n1
3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
n2
3 3 3 3 3 1 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4 4 4 4 4 1 2 2 2 2 1 2 2 2 2 3 3 3 3 3 3 3 3 3 3
n3
2 2 3 3 3 1 1 2 2 1 2 2 3 3 3 1 2 2 3 3 3 4 4 4 4 1 1 2 2 2 1 1 2 2 2 1 2 2 2 3 3 3 3 3 3
n4
1 2 1 2 3 1 1 1 2 1 1 2 1 2 3 1 1 2 1 2 3 1 2 3 4 1 1 1 2 2 1 1 1 2 2 1 1 2 2 1 2 2 3 3 3
1 1 1 1 2 1 1 1 1 2 1 1 1 2 1 1 2 1 2 3
a: 0.10 0.05 0.02 0.01 0.005 0.002 0.001
5.689 5.745 5.055 5.879 6.026
6.244 6.527 6.000 6.727 7.000
6.689 7.182 7.109 7.036 7.872
7.200 7.636 7.400 8.105 8.538
7.400 7.873 8.055 8.379 8.897
8.018 8.345 8.803 9.462
8.455
5.250 5.533 5.755 5.067 5.591 5.750 5.589 5.872 6.016 5.182 5.568 5.808 5.692 5.901 6.019 5.564
The critical values of H in this table, and those for n-values of 5 and 6, were taken, with permission of the publisher, from Martin, LeBlanc, and Toan (1993). Examples: H0.05,4,3 = 7.400 and (X²)0.986 = 7.000
TABLE B.15: Critical Values of Q for Nonparametric Multiple-Comparison Testing
k
a: 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
1.960 2.394 2.639 2.807
2.327 2.713 2.936 3.091
2.576 2.936 3.144 3.291
2.807 3.144 3.342 3.481
3.091 3.403 3.588 3.719
3.291 3.588 3.765 3.891
2 3 4 5
0.674 1.383 1.732 1.960
1.282 1.834 2.128 2.327
1.645 2.128 2.394 2.576
6 7 8 9 10
2.128 2.261 2.369 2.461 2.540
2.475 2.593 2.690 2.773 2.845
2.713 2.823 2.914 2.992 3.059
2.936 3.038 3.124 3.197 3.261
3.209 3.304 3.384 3.453 3.512
3.403 3.494 3.570 3.635 3.692
3.588 3.675 3.748 3.810 3.865
3.820 3.902 3.972 4.031 4.083
3.988 4.067 4.134 4.191 4.241
11 12 13 14 15
2.609 2.671 2.726 2.777 2.823
2.908 2.965 3.016 3.062 3.105
3.119 3.172 3.220 3.264 3.304
3.317 3.368 3.414 3.456 3.494
3.565 3.613 3.656 3.695 3.731
3.743 3.789 3.830 3.868 3.902
3.914 3.957 3.997 4.034 4.067
4.129 4.171 4.209 4.244 4.276
4.286 4.326 4.363 4.397 4.428
16 17 18 19 20
2.866 2.905 2.942 2.976 3.008
3.144 3.181 3.215 3.246 3.276
3.342 3.376 3.409 3.439 3.467
3.529 3.562 3.593 3.622 3.649
3.765 3.796 3.825 3.852 3.878
3.935 3.965 3.993 4.019 4.044
4.098 4.127 4.154 4.179 4.203
4.305 4.333 4.359 4.383 4.406
4.456 4.483 4.508 4.532 4.554
21 22 23 24 25
3.038 3.067 3.094 3.120 3.144
3.304 3.331 3.356 3.380 3.403
3.494 3.519 3.543 3.566 3.588
3.675 3.699 3.722 3.744 3.765
3.902 3.925 3.947 3.968 3.988
4.067 4.089 4.110 4.130 4.149
4.226 4.247 4.268 4.287 4.305
4.428 4.448 4.469 4.486 4.504
4.575 4.595 4.614 4.632 4.649
Appendix Table B.15 was prepared using Equation 26.2.23 of Zelen and Severo (1964), which determines Qa such that P(Q ≥ Qa) ≤ a(1)/[k(k − 1)], where Qa is a normal deviate (Appendix Table B.2). (This procedure is a form of what is known as a Bonferroni calculation.)
Examples: Q0.05,4 = 2.639 and Q0.10,5 = 2.576
TABLE B.16: Critical Values of Q' for Nonparametric Multiple-Comparison Testing with a Control
k
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
2 3 4 5
0.674 1.150 1.383 1.534
1.282 1.645 1.834 1.960
1.645 1.960 2.128 2.242
1.960 2.242 2.394 2.498
2.327 2.576 2.713 2.807
2.576 2.807 2.936 3.024
2.807 3.024 3.144 3.227
3.091 3.291 3.403 3.481
3.291 3.481 3.588 3.662
6 7 8 9 10
1.645 1.732 1.803 1.863 1.915
2.054 2.128 2.190 2.242 2.287
2.327 2.394 2.450 2.498 2.540
2.576 2.639 2.690 2.735 2.773
2.879 2.936 2.983 3.024 3.059
3.091 3.144 3.189 3.227 3.261
3.291 3.342 3.384 3.421 3.453
3.540 3.588 3.628 3.662 3.692
3.719 3.765 3.803 3.836 3.865
11 12 13 14 15
1.960 2.001 2.037 2.070 2.101
2.327 2.362 2.394 2.424 2.450
2.576 2.609 2.639 2.666 2.690
2.807 2.838 2.866 2.891 2.914
3.091 3.119 3.144 3.168 3.189
3.291 3.317 3.342 3.364 3.384
3.481 3.506 3.529 3.551 3.570
3.719 3.743 3.765 3.785 3.803
3.891 3.914 3.935 3.954 3.972
16 17 18 19 20
2.128 2.154 2.178 2.201 2.222
2.475 2.498 2.520 2.540 2.558
2.713 2.735 2.755 2.773 2.791
2.936 2.955 2.974 2.992 3.008
3.209 3.227 3.245 3.261 3.276
3.403 3.421 3.437 3.453 3.467
3.588 3.605 3.621 3.635 3.649
3.820 3.836 3.851 3.865 3.878
3.988 4.003 4.018 4.031 4.044
21 22 23 24 25
2.242 2.261 2.278 2.295 2.311
2.576 2.593 2.609 2.624 2.639
2.807 2.823 2.838 2.852 2.866
3.024 3.038 3.052 3.066 3.078
3.291 3.304 3.317 3.330 3.342
3.481 3.494 3.506 3.518 3.529
3.662 3.675 3.687 3.698 3.709
3.891 3.902 3.914 3.924 3.935
4.056 4.067 4.078 4.088 4.098
Appendix Table B.16 was prepared using Equation 26.2.23 of Zelen and Severo (1964), which determines Q'a such that P(Q' ≥ Q'a) ≤ a(2)/[2(k − 1)] = a(1)/(k − 1), where Q'a is a normal deviate. (This procedure is a form of what is known as a Bonferroni calculation.)
Examples: Q'0.05(2),9 = 2.735 and Q'0.01(1),12 = 3.119
TABLE B.17: Critical Values of the Correlation Coefficient, r
v
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
1 2 3 4 5
0.707 0.500 0.404 0.347 0.309
0.951 0.800 0.687 0.608 0.551
0.988 0.900 0.805 0.729 0.669
0.997 0.950 0.878 0.811 0.755
1.000 0.980 0.934 0.882 0.833
1.000 0.990 0.959 0.917 0.875
1.000 0.995 0.974 0.942 0.906
1.000 0.998 0.986 0.963 0.935
1.000 0.999 0.991 0.974 0.951
6 7 8 9 10
0.281 0.260 0.242 0.228 0.216
0.507 0.472 0.443 0.419 0.398
0.621 0.582 0.549 0.521 0.497
0.707 0.666 0.632 0.602 0.576
0.789 0.750 0.715 0.685 0.658
0.834 0.798 0.765 0.735 0.708
0.870 0.836 0.805 0.776 0.750
0.905 0.875 0.847 0.820 0.795
0.925 0.898 0.872 0.847 0.823
11 12 13 14 15
0.206 0.197 0.189 0.182 0.176
0.380 0.365 0.351 0.338 0.327
0.476 0.457 0.441 0.426 0.412
0.553 0.532 0.514 0.497 0.482
0.634 0.612 0.592 0.574 0.558
0.684 0.661 0.641 0.623 0.606
0.726 0.703 0.683 0.664 0.647
0.772 0.750 0.730 0.711 0.694
0.801 0.780 0.760 0.742 0.725
16 17 18 19 20
0.170 0.165 0.160 0.156 0.152
0.317 0.308 0.299 0.291 0.284
0.400 0.389 0.378 0.369 0.360
0.468 0.456 0.444 0.433 0.423
0.542 0.529 0.515 0.503 0.492
0.590 0.575 0.561 0.549 0.537
0.631 0.616 0.602 0.589 0.576
0.678 0.662 0.648 0.635 0.622
0.708 0.693 0.679 0.665 0.652
21 22 23 24 25
0.148 0.145 0.141 0.138 0.136
0.277 0.271 0.265 0.260 0.255
0.352 0.344 0.337 0.330 0.323
0.413 0.404 0.396 0.388 0.381
0.482 0.472 0.462 0.453 0.445
0.526 0.515 0.505 0.496 0.487
0.565 0.554 0.543 0.534 0.524
0.610 0.599 0.588 0.578 0.568
0.640 0.629 0.618 0.607 0.597
26 27 28 29 30
0.133 0.131 0.128 0.126 0.124
0.250 0.245 0.241 0.237 0.233
0.317 0.311 0.306 0.301 0.296
0.374 0.367 0.361 0.355 0.349
0.437 0.430 0.423 0.416 0.409
0.479 0.471 0.463 0.456 0.449
0.515 0.507 0.499 0.491 0.484
0.559 0.550 0.541 0.533 0.526
0.588 0.579 0.570 0.562 0.554
31 32 33 34 35
0.122 0.120 0.118 0.116 0.115
0.229 0.225 0.222 0.219 0.216
0.291 0.287 0.283 0.279 0.275
0.344 0.339 0.334 0.329 0.325
0.403 0.397 0.392 0.386 0.381
0.442 0.436 0.430 0.424 0.418
0.477 0.470 0.464 0.458 0.452
0.518 0.511 0.504 0.498 0.492
0.546 0.539 0.532 0.525 0.519
36 37 38 39 40
0.113 0.111 0.110 0.108 0.107
0.213 0.210 0.207 0.204 0.202
0.271 0.267 0.264 0.261 0.257
0.320 0.316 0.312 0.308 0.304
0.376 0.371 0.367 0.362 0.358
0.413 0.408 0.403 0.398 0.393
0.446 0.441 0.435 0.430 0.425
0.486 0.480 0.474 0.469 0.463
0.513 0.507 0.501 0.495 0.490
41 42 43 44 45
0.106 0.104 0.103 0.102 0.101
0.199 0.197 0.195 0.192 0.190
0.254 0.251 0.248 0.246 0.243
0.301 0.297 0.294 0.291 0.288
0.354 0.350 0.346 0.342 0.338
0.389 0.384 0.380 0.376 0.372
0.420 0.416 0.411 0.407 0.403
0.458 0.453 0.449 0.444 0.439
0.484 0.479 0.474 0.469 0.465
46 47 48 49 50
0.100 0.099 0.098 0.097 0.096
0.188 0.186 0.184 0.182 0.181
0.240 0.238 0.235 0.233 0.231
0.285 0.282 0.279 0.276 0.273
0.335 0.331 0.328 0.325 0.322
0.368 0.365 0.361 0.358 0.354
0.399 0.395 0.391 0.387 0.384
0.435 0.431 0.427 0.423 0.419
0.460 0.456 0.451 0.447 0.443
52 54 56 58 60
0.094 0.092 0.090 0.089 0.087
0.177 0.174 0.171 0.168 0.165
0.226 0.222 0.218 0.214 0.211
0.268 0.263 0.259 0.254 0.250
0.316 0.310 0.305 0.300 0.295
0.348 0.341 0.336 0.330 0.325
0.377 0.370 0.364 0.358 0.352
0.411 0.404 0.398 0.391 0.385
0.435 0.428 0.421 0.414 0.408
TABLE B.17 (cont.): Critical Values of the Correlation Coefficient, r
v
a(2): 0.50 0.20 0.10 0.05 0.02 0.01 0.005 0.002 0.001
a(1): 0.25 0.10 0.05 0.025 0.01 0.005 0.0025 0.001 0.0005
62 64 66 68 70
0.086 0.084 0.083 0.082 0.081
0.290 0.286 0.282 0.278 0.274
0.320 0.315 0.310 0.306 0.302
0.402 0.396 0.390 0.385 0.380
0.229 0.226 0.223 0.220 0.217
0.270 0.266 0.263 0.260 0.257
0.298 0.294 0.290 0.286 0.283
0.347 0.342 0.337 0.332 0.327 0.323 0.319 0.315 0.311 0.307
0.379 0.374 0.368 0.363 0.358
0.080 0.079 0.078 0.077 0.076
0.207 0.204 0.201 0.198 0.195 0.193 0.190 0.188 0.185 0.183
0.246 0.242 0.239 0.235 0.232
72 74 76 78 80
0.162 0.160 0.157 0.155 0.153 0.151 0.149 0.147 0.145 0.143
0.354 0.349 0.345 0.340 0.336
0.375 0.370 0.365 0.361 0.357
82 84 86 88 90
0.075 0.074 0.073 0.072 0.071
0.141 0.140 0.138 0.136 0.135
0.181 0.179 0.177 0.174 0.173
0.253 0.251 0.248 0.245 0.242
0.280 0.276 0.273 0.270 0.267
0.304 0.300 0.297 0.293 0.290
0.333 0.329 0.325 0.321 0.318
0.328 0.349 0.345 0.341 0.338
92 94 96 98 100
0.070 0.070 0.069 0.068 0.068
0.133 0.132 0.131 0.129 0.128
0.171 0.169 0.167 0.165 0.164
0.215 0.212 0.210 0.207 0.205 0.203 0.201 0.199 0.197 0.195
0.240 0.237 0.235 0.232 0.230
0.264 0.262 0.259 0.256 0.254
0.287 0.284 0.281 0.279 0.276
0.315 0.312 0.308 0.305 0.303
0.334 0.331 0.327 0.324 0.321
105 110 115 120 125
0.066 0.064 0.063 0.062 0.060
0.160 0.156 0.153 0.150 0.147
0.190 0.186 0.182 0.178 0.174
0.225 0.220 0.215 0.210 0.206
0.248 0.242 0.237 0.232 0.228
0.314 0.307 0.300 0.294 0.289
0.059 0.058 0.057 0.056 0.055
0.144 0.141 0.139 0.136 0.134
0.101 0.098 0.095 0.093 0.091
0.130 0.126 0.122 0.119 0.116
0.202 0.196 0.190 0.185 0.181
0.220 0.213 0.207 0.202 0.197
0.267 0.262 0.257 0.253 0.249 0.241 0.234 0.228 0.222 0.216
0.283 0.278 0.273 0.269 0.264
0.053 0.052 0.050 0.049 0.048
0.202 0.199 0.195 0.192 0.189 0.183 0.177 0.172 0.168 0.164
0.223 0.219 0.215 0.212 0.208
160 170 180 190 200
0.171 0.168 0.165 0.162 0.159 0.154 0.150 0.145 0.142 0.138
0.270 0.264 0.258 0.253 0.248 0.243 0.239 0.234 0.230 0.227
0.296 0.289 0.283 0.277 0.272
130 135 140 145 150
0.125 0.122 0.119 0.117 0.114 0.112 0.110 0.108 0.106 0.105
250 300 350 400 450
0.043 0.039 0.036 0.034 0.032
0.081 0.074 0.068 0.064 0.060
0.104 0.095 0.088 0.082 0.077
0.124 0.113 0.105 0.098 0.092
0.146 0.134 0.124 0.116 0.109
0.162 0.148 0.137 0.128 0.121
0.176 0.161 0.149 0.140 0.132
0.194 0.177 0.164 0.154 0.145
0.206 0.188 0.175 0.164 0.154
500 600 700 800 900
0.030 0.028 0.026 0.024 0.022
0.057 0.052 0.048 0.045 0.043
0.074 0.067 0.062 0.058 0.055
0.088 0.080 0.074 0.069 0.065
0.104 0.095 0.088 0.082 0.077
0.115 0.105 0.097 0.091 0.086
0.125 0.114 0.106 0.099 0.093
0.138 0.126 0.116 0.109 0.103
0.146 0.134 0.124 0.116 0.109
1000
0.021
0.041
0.052
0.062
0.073
0.081
0.089
0.098
0.104
0.256 0.249 0.242 0.236 0.230
The values in Appendix Table B.17 were computed using Equation 19.4 and Table B.3. Examples: r0.05(2),25 = 0.381 and r0.01(1),30 = 0.409
If we require a critical value for degrees of freedom not on this table, the critical value for the next lower degrees of freedom in the table may be conservatively used. Or, linear or harmonic interpolation may be used. If v > 1000, then use harmonic interpolation, setting the critical value equal to zero for v = ∞. Also note:
$r_{\alpha,\nu} = \dfrac{t_{\alpha,\nu}}{\sqrt{t_{\alpha,\nu}^{2} + \nu}}$
TABLE B.18: Fisher's z Transformation for Correlation Coefficients, r
r: 0 1 2 3 4 5 6 7 8 9 (the column headings give the third decimal place of r)
0.000 0.010 0.020 0.030 0.040
0.0000 0.0100 0.0200 0.0300 0.0400
0.0010 0.0110 0.0210 0.0310 0.0410
0.0020 0.0120 0.0220 0.0320 0.0420
0.0030 0.0130 0.0230 0.0330 0.0430
0.0040 0.0140 0.0240 0.0340 0.0440
0.0050 0.0150 0.0250 0.0350 0.0450
0.0060 0.0160 0.0260 0.0360 0.0460
0.0070 0.0170 0.0270 0.0370 0.0470
0.0080 0.0180 0.0280 0.0380 0.0480
0.0090 0.0190 0.0290 0.0390 0.0490
0.000 0.010 0.020 0.030 0.040
0.050 0.060 0.070 0.080 0.090
0.0500 0.0601 0.0701 0.0802 0.0902
0.0510 0.0611 0.0711 0.0812 0.0913
0.0520 0.0621 0.0721 0.0822 0.0923
0.0530 0.0631 0.0731 0.0832 0.0933
0.0541 0.0641 0.0741 0.0842 0.0943
0.0551 0.0651 0.0751 0.0852 0.0953
0.0561 0.0661 0.0761 0.0862 0.0963
0.0571 0.0671 0.0772 0.0872 0.0973
0.0581 0.0681 0.0782 0.0882 0.0983
0.0591 0.0691 0.0792 0.0892 0.0993
0.050 0.060 0.070 0.080 0.090
0.100 0.110 0.120 0.130 0.140
0.1003 0.1104 0.1206 0.1307 0.1409
0.1013 0.1115 0.1216 0.1318 0.1419
0.1024 0.1125 0.1226 0.1328 0.1430
0.1034 0.1135 0.1236 0.1338 0.1440
0.1044 0.1145 0.1246 0.1348 0.1450
0.1054 0.1155 0.1257 0.1358 0.1460
0.1064 0.1165 0.1267 0.1368 0.1470
0.1074 0.1175 0.1277 0.1379 0.1481
0.1084 0.1186 0.1287 0.1389 0.1491
0.1094 0.1196 0.1297 0.1399 0.1501
0.100 0.110 0.120 0.130 0.140
0.150 0.160 0.170 0.180 0.190
0.1511 0.1614 0.1717 0.1820 0.1923
0.1522 0.1624 0.1727 0.1830 0.1934
0.1532 0.1634 0.1737 0.1840 0.1944
0.1542 0.1645 0.1748 0.1851 0.1955
0.1552 0.1655 0.1758 0.1861 0.1965
0.1563 0.1665 0.1768 0.1872 0.1975
0.1573 0.1675 0.1779 0.1882 0.1986
0.1583 0.1686 0.1789 0.1892 0.1996
0.1593 0.1696 0.1799 0.1903 0.2006
0.1604 0.1706 0.1809 0.1913 0.2017
0.150 0.160 0.170 0.180 0.190
0.200 0.210 0.220 0.230 0.240
0.2027 0.2132 0.2237 0.2342 0.2448
0.2038 0.2142 0.2247 0.2352 0.2458
0.2048 0.2153 0.2258 0.2363 0.2469
0.2059 0.2163 0.2268 0.2374 0.2480
0.2069 0.2174 0.2279 0.2384 0.2490
0.2079 0.2184 0.2289 0.2395 0.2501
0.2090 0.2195 0.2300 0.2405 0.2512
0.2100 0.2205 0.2310 0.2416 0.2522
0.2111 0.2216 0.2321 0.2427 0.2533
0.2121 0.2226 0.2331 0.2437 0.2543
0.200 0.210 0.220 0.230 0.240
0.250 0.260 0.270 0.280 0.290
0.2554 0.2661 0.2769 0.2877 0.2986
0.2565 0.2672 0.2779 0.2888 0.2997
0.2575 0.2683 0.2790 0.2899 0.3008
0.2586 0.2693 0.2801 0.2909 0.3018
0.2597 0.2704 0.2812 0.2920 0.3029
0.2608 0.2715 0.2823 0.2931 0.3040
0.2618 0.2726 0.2833 0.2942 0.3051
0.2629 0.2736 0.2844 0.2953 0.3062
0.2640 0.2747 0.2855 0.2964 0.3073
0.2650 0.2758 0.2866 0.2975 0.3084
0.250 0.260 0.270 0.280 0.290
0.300 0.310 0.320 0.330 0.340
0.3095 0.3205 0.3316 0.3428 0.3541
0.3106 0.3217 0.3328 0.3440 0.3552
0.3117 0.3228 0.3339 0.3451 0.3564
0.3128 0.3239 0.3350 0.3462 0.3575
0.3139 0.3250 0.3361 0.3473 0.3586
0.3150 0.3261 0.3372 0.3484 0.3598
0.3161 0.3272 0.3383 0.3496 0.3609
0.3172 0.3283 0.3395 0.3507 0.3620
0.3183 0.3294 0.3406 0.3518 0.3632
0.3194 0.3305 0.3417 0.3530 0.3643
0.300 0.310 0.320 0.330 0.340
0.350 0.360 0.370 0.380 0.390
0.3654 0.3769 0.3884 0.4001 0.4118
0.3666 0.3780 0.3896 0.4012 0.4130
0.3677 0.3792 0.3907 0.4024 0.4142
0.3689 0.3803 0.3919 0.4036 0.4153
0.3700 0.3815 0.3931 0.4047 0.4165
0.3712 0.3826 0.3942 0.4059 0.4177
0.3723 0.3838 0.3954 0.4071 0.4189
0.3734 0.3850 0.3966 0.4083 0.4201
0.3746 0.3861 0.3977 0.4094 0.4213
0.3757 0.3873 0.3989 0.4106 0.4225
0.350 0.360 0.370 0.380 0.390
0.400 0.410 0.420 0.430 0.440
0.4236 0.4356 0.4477 0.4599 0.4722
0.4248 0.4368 0.4489 0.4611 0.4735
0.4260 0.4380 0.4501 0.4624 0.4747
0.4272 0.4392 0.4513 0.4636 0.4760
0.4284 0.4404 0.4526 0.4648 0.4772
0.4296 0.4416 0.4538 0.4660 0.4784
0.4308 0.4428 0.4550 0.4673 0.4797
0.4320 0.4441 0.4562 0.4685 0.4809
0.4332 0.4453 0.4574 0.4698 0.4822
0.4344 0.4465 0.4587 0.4710 0.4834
0.400 0.410 0.420 0.430 0.440
0.450 0.460 0.470 0.480 0.490
0.4847 0.4973 0.5101 0.5230 0.5361
0.4860 0.4986 0.5114 0.5243 0.5374
0.4872 0.4999 0.5126 0.5256 0.5387
0.4885 0.5011 0.5139 0.5269 0.5400
0.4897 0.5024 0.5152 0.5282 0.5413
0.4910 0.5037 0.5165 0.5295 0.5427
0.4922 0.5049 0.5178 0.5308 0.5440
0.4935 0.5062 0.5191 0.5321 0.5453
0.4948 0.5075 0.5204 0.5334 0.5466
0.4960 0.5088 0.5217 0.5347 0.5480
0.450 0.460 0.470 0.480 0.490
0.500 0.510 0.520 0.530 0.540
0.5493 0.5627 0.5763 0.5901 0.6042
0.5506 0.5641 0.5777 0.5915 0.6056
0.5520 0.5654 0.5791 0.5929 0.6070
0.5533 0.5668 0.5805 0.5943 0.6084
0.5547 0.5681 0.5818 0.5957 0.6098
0.5560 0.5695 0.5832 0.5971 0.6112
0.5573 0.5709 0.5846 0.5985 0.6127
0.5587 0.5722 0.5860 0.5999 0.6141
0.5600 0.5736 0.5874 0.6013 0.6155
0.5614 0.5750 0.5888 0.6027 0.6169
0.500 0.510 0.520 0.530 0.540
TABLE B.18 (cont.): Fisher's z Transformation for Correlation Coefficients, r
r: 0 1 2 3 4 5 6 7 8 9
0.550 0.560 0.570 0.580 0.590
0.6184 0.6328 0.6475 0.6625 0.6777
0.6198 0.6343 0.6490 0.6640 0.6792
0.6213 0.6358 0.6505 0.6655 0.6807
0.6227 0.6372 0.6520 0.6670 0.6823
0.6241 0.6387 0.6535 0.6685 0.6838
0.6256 0.6401 0.6550 0.6700 0.6854
0.6270 0.6416 0.6565 0.6716 0.6869
0.6285 0.6431 0.6580 0.6731 0.6885
0.6299 0.6446 0.6595 0.6746 0.6900
0.6314 0.6460 0.6610 0.6761 0.6916
0.550 0.560 0.570 0.580 0.590
0.600 0.610 0.620 0.630 0.640
0.6931 0.7089 0.7250 0.7414 0.7582
0.6947 0.7105 0.7266 0.7431 0.7599
0.6963 0.7121 0.7283 0.7447 0.7616
0.6978 0.7137 0.7299 0.7464 0.7633
0.6994 0.7153 0.7315 0.7481 0.7650
0.7010 0.7169 0.7332 0.7497 0.7667
0.7026 0.7185 0.7348 0.7514 0.7684
0.7042 0.7201 0.7365 0.7531 0.7701
0.7057 0.7218 0.7381 0.7548 0.7718
0.7073 0.7234 0.7398 0.7565 0.7736
0.600 0.610 0.620 0.630 0.640
0.650 0.660 0.670 0.680 0.690
0.7753 0.7928 0.8107 0.8291 0.8480
0.7770 0.7946 0.8126 0.8310 0.8499
0.7788 0.7964 0.8144 0.8328 0.8518
0.7805 0.7981 0.8162 0.8347 0.8537
0.7823 0.7999 0.8180 0.8366 0.8556
0.7840 0.8017 0.8199 0.8385 0.8576
0.7858 0.8035 0.8217 0.8404 0.8595
0.7875 0.8053 0.8236 0.8422 0.8614
0.7893 0.8071 0.8254 0.8441 0.8634
0.7910 0.8089 0.8273 0.8460 0.8653
0.650 0.660 0.670 0.680 0.690
0.700 0.710 0.720 0.730 0.740
0.8673 0.8872 0.9076 0.9287 0.9505
0.8693 0.8892 0.9097 0.9309 0.9527
0.8712 0.8912 0.9118 0.9330 0.9549
0.8732 0.8933 0.9139 0.9352 0.9571
0.8752 0.8953 0.9160 0.9373 0.9594
0.8772 0.8973 0.9181 0.9395 0.9616
0.8792 0.8994 0.9202 0.9417 0.9639
0.8812 0.9014 0.9223 0.9439 0.9661
0.8832 0.9035 0.9245 0.9461 0.9684
0.8852 0.9056 0.9266 0.9483 0.9707
0.700 0.710 0.720 0.730 0.740
0.750 0.760 0.770 0.780 0.790
0.9730 0.9962 1.0203 1.0454 1.0714
0.9752 0.9986 1.0228 1.0479 1.0741
0.9775 1.0010 1.0253 1.0505 1.0768
0.9798 1.0034 1.0277 1.0531 1.0795
0.9822 1.0058 1.0302 1.0557 1.0822
0.9845 1.0082 1.0327 1.0583 1.0849
0.9868 1.0106 1.0352 1.0609 1.0876
0.9891 1.0130 1.0378 1.0635 1.0903
0.9915 1.0154 1.0403 1.0661 1.0931
0.9938 1.0179 1.0428 1.0688 1.0958
0.750 0.760 0.770 0.780 0.790
0.800 0.810 0.820 0.830 0.840
1.0986 1.1270 1.1568 1.1881 1.2212
1.1014 1.1299 1.1599 1.1914 1.2246
1.1042 1.1329 1.1630 1.1946 1.2280
1.1070 1.1358 1.1660 1.1979 1.2314
1.1098 1.1388 1.1691 1.2011 1.2349
1.1127 1.1417 1.1723 1.2044 1.2384
1.1155 1.1447 1.1754 1.2077 1.2419
1.1184 1.1477 1.1786 1.2111 1.2454
1.1212 1.1507 1.1817 1.2144 1.2490
1.1241 1.1538 1.1849 1.2178 1.2526
0.800 0.810 0.820 0.830 0.840
0.850 0.860 0.870 0.880 0.890
1.2561 1.2933 1.3331 1.3758 1.4219
1.2598 1.2972 1.3372 1.3802 1.4268
1.2634 1.3011 1.3414 1.3847 1.4316
1.2671 1.3050 1.3456 1.3892 1.4365
1.2707 1.3089 1.3498 1.3938 1.4415
1.2744 1.3129 1.3540 1.3984 1.4465
1.2782 1.3169 1.3583 1.4030 1.4516
1.2819 1.3209 1.3626 1.4077 1.4566
1.2857 1.3249 1.3670 1.4124 1.4618
1.2895 1.3290 1.3713 1.4171 1.4670
0.850 0.860 0.870 0.880 0.890
0.900 0.910 0.920 0.930 0.940
1.4722 1.5275 1.5890 1.6584 1.7380
1.4775 1.5334 1.5956 1.6658 1.7467
1.4828 1.5393 1.6022 1.6734 1.7555
1.4882 1.5453 1.6089 1.6811 1.7645
1.4937 1.5513 1.6157 1.6888 1.7736
1.4992 1.5574 1.6226 1.6967 1.7828
1.5047 1.5636 1.6296 1.7047 1.7923
1.5103 1.5698 1.6366 1.7129 1.8019
1.5160 1.5762 1.6438 1.7211 1.8116
1.5217 1.5825 1.6510 1.7295 1.8216
0.900 0.910 0.920 0.930 0.940
0.950 0.960 0.970 0.980 0.990
1.8318 1.9459 2.0923 2.2975 2.6466
1.8421 1.9588 2.1095 2.3234 2.6995
1.8527 1.9721 2.1273 2.3507 2.7587
1.8635 1.9856 2.1457 2.3795 2.8257
1.8745 1.9996 2.1648 2.4101 2.9030
1.8857 2.0139 2.1847 2.4426 2.9944
1.8972 2.0287 2.2054 2.4774 3.1062
1.9090 2.0439 2.2269 2.5147 3.2502
1.9210 2.0595 2.2494 2.5549 3.4531
1.9333 2.0756 2.2729 2.5987 3.7997
0.950 0.960 0.970 0.980 0.990
Appendix Table B.18 was produced using Equation 19.8. Example: For r = 0.641, z = 0.7599.
TABLE B.19: Correlation Coefficients, r, Corresponding to Fisher's z Transformation
z: 0 1 2 3 4 5 6 7 8 9 (the column headings give the third decimal place of z)
0.00 0.01 0.02 0.03 0.04
0.0000 0.0100 0.0200 0.0300 0.0400
0.0010 0.0110 0.0210 0.0310 0.0410
0.0020 0.0120 0.0220 0.0320 0.0420
0.0030 0.0130 0.0230 0.0330 0.0430
0.0040 0.0140 0.0240 0.0340 0.0440
0.0050 0.0150 0.0250 0.0350 0.0450
0.0060 0.0160 0.0260 0.0360 0.0460
0.0070 0.0170 0.0270 0.0370 0.0470
0.0080 0.0180 0.0280 0.0380 0.0480
0.0090 0.0190 0.0290 0.0390 0.0490
0.00 0.01 0.02 0.03 0.04
0.05 0.06 0.07 0.08 0.09
0.0500 0.0599 0.0699 0.0798 0.0898
0.0510 0.0609 0.0709 0.0808 0.0907
0.0520 0.0619 0.0719 0.0818 0.0917
0.0530 0.0629 0.0729 0.0828 0.0927
0.0539 0.0639 0.0739 0.0838 0.0937
0.0549 0.0649 0.0749 0.0848 0.0947
0.0559 0.0659 0.0759 0.0858 0.0957
0.0569 0.0669 0.0768 0.0868 0.0967
0.0579 0.0679 0.0778 0.0878 0.0977
0.0589 0.0689 0.0788 0.0888 0.0987
0.05 0.06 0.07 0.08 0.09
0.10 0.11 0.12 0.13 0.14
0.0997 0.1096 0.1194 0.1293 0.1391
0.1007 0.1105 0.1204 0.1303 0.1401
0.1016 0.1115 0.1214 0.1312 0.1411
0.1026 0.1125 0.1224 0.1322 0.1420
0.1036 0.1135 0.1234 0.1332 0.1430
0.1046 0.1145 0.1244 0.1342 0.1440
0.1056 0.1155 0.1253 0.1352 0.1450
0.1066 0.1165 0.1263 0.1361 0.1460
0.1076 0.1175 0.1273 0.1371 0.1469
0.1086 0.1184 0.1283 0.1381 0.1479
0.10 0.11 0.12 0.13 0.14
0.15 0.16 0.17 0.18 0.19
0.1489 0.1586 0.1684 0.1781 0.1877
0.1499 0.1596 0.1694 0.1790 0.1887
0.1508 0.1606 0.1703 0.1800 0.1897
0.1518 0.1616 0.1713 0.1810 0.1906
0.1528 0.1625 0.1723 0.1820 0.1916
0.1538 0.1635 0.1732 0.1829 0.1926
0.1547 0.1645 0.1742 0.1839 0.1935
0.1557 0.1655 0.1752 0.1849 0.1945
0.1567 0.1664 0.1761 0.1858 0.1955
0.1577 0.1674 0.1771 0.1868 0.1964
0.15 0.16 0.17 0.18 0.19
0.20 0.21 0.22 0.23 0.24
0.1974 0.2070 0.2165 0.2260 0.2355
0.1983 0.2079 0.2175 0.2270 0.2364
0.1993 0.2089 0.2184 0.2279 0.2374
0.2003 0.2098 0.2194 0.2289 0.2383
0.2012 0.2108 0.2203 0.2298 0.2393
0.2022 0.2117 0.2213 0.2308 0.2402
0.2031 0.2127 0.2222 0.2317 0.2412
0.2041 0.2137 0.2232 0.2327 0.2421
0.2051 0.2146 0.2241 0.2336 0.2430
0.2060 0.2156 0.2251 0.2346 0.2440
0.20 0.21 0.22 0.23 0.24
0.25 0.26 0.27 0.28 0.29
0.2449 0.2543 0.2636 0.2729 0.2821
0.2459 0.2552 0.2646 0.2738 0.2831
0.2468 0.2562 0.2655 0.2748 0.2840
0.2477 0.2571 0.2664 0.2757 0.2849
0.2487 0.2580 0.2673 0.2766 0.2858
0.2496 0.2590 0.2683 0.2775 0.2867
0.2506 0.2599 0.2692 0.2784 0.2876
0.2515 0.2608 0.2701 0.2794 0.2886
0.2524 0.2618 0.2711 0.2803 0.2895
0.2534 0.2627 0.2720 0.2812 0.2904
0.25 0.26 0.27 0.28 0.29
0.30 0.31 0.32 0.33 0.34
0.2913 0.3004 0.3095 0.3185 0.3275
0.2922 0.3013 0.3104 0.3194 0.3284
0.2931 0.3023 0.3113 0.3203 0.3293
0.2941 0.3032 0.3122 0.3212 0.3302
0.2950 0.3041 0.3131 0.3221 0.3310
0.2959 0.3050 0.3140 0.3230 0.3319
0.2968 0.3059 0.3149 0.3239 0.3328
0.2977 0.3068 0.3158 0.3248 0.3337
0.2986 0.3077 0.3167 0.3257 0.3346
0.2995 0.3086 0.3176 0.3266 0.3355
0.30 0.31 0.32 0.33 0.34
0.35 0.36 0.37 0.38 0.39
0.3364 0.3452 0.3540 0.3627 0.3714
0.3373 0.3461 0.3549 0.3636 0.3722
0.3381 0.3470 0.3557 0.3644 0.3731
0.3390 0.3479 0.3566 0.3653 0.3739
0.3399 0.3487 0.3575 0.3662 0.3748
0.3408 0.3496 0.3584 0.3670 0.3757
0.3417 0.3505 0.3592 0.3679 0.3765
0.3426 0.3514 0.3601 0.3688 0.3774
0.3435 0.3522 0.3610 0.3696 0.3782
0.3443 0.3531 0.3618 0.3705 0.3791
0.35 0.36 0.37 0.38 0.39
0.40 0.41 0.42 0.43 0.44
0.3799 0.3885 0.3969 0.4053 0.4136
0.3808 0.3893 0.3978 0.4062 0.4145
0.3817 0.3902 0.3986 0.4070 0.4153
0.3825 0.3910 0.3995 0.4078 0.4161
0.3834 0.3919 0.4003 0.4087 0.4170
0.3842 0.3927 0.4011 0.4095 0.4178
0.3851 0.3936 0.4020 0.4103 0.4186
0.3859 0.3944 0.4028 0.4112 0.4194
0.3868 0.3952 0.4036 0.4120 0.4203
0.3876 0.3961 0.4045 0.4128 0.4211
0.40 0.41 0.42 0.43 0.44
0.45 0.46 0.47 0.48 0.49
0.4219 0.4301 0.4382 0.4462 0.4542
0.4227 0.4309 0.4390 0.4470 0.4550
0.4235 0.4317 0.4398 0.4478 0.4558
0.4244 0.4325 0.4406 0.4486 0.4566
0.4252 0.4333 0.4414 0.4494 0.4574
0.4260 0.4342 0.4422 0.4502 0.4582
0.4268 0.4350 0.4430 0.4510 0.4590
0.4276 0.4358 0.4438 0.4518 0.4598
0.4285 0.4366 0.4446 0.4526 0.4605
0.4293 0.4374 0.4454 0.4534 0.4613
0.45 0.46 0.47 0.48 0.49
0.50 0.51 0.52 0.53 0.54
0.4621 0.4699 0.4777 0.4854 0.4930
0.4629 0.4707 0.4785 0.4861 0.4937
0.4637 0.4715 0.4792 0.4869 0.4945
0.4645 0.4723 0.4800 0.4877 0.4953
0.4653 0.4731 0.4808 0.4884 0.4960
0.4660 0.4738 0.4815 0.4892 0.4968
0.4668 0.4746 0.4823 0.4900 0.4975
0.4676 0.4754 0.4831 0.4907 0.4983
0.4684 0.4762 0.4838 0.4915 0.4990
0.4692 0.4769 0.4846 0.4922 0.4998
0.50 0.51 0.52 0.53 0.54
0.55 0.56 0.57 0.58 0.59
0.5005 0.5080 0.5154 0.5227 0.5299
0.5013 0.5087 0.5161 0.5234 0.5306
0.5020 0.5095 0.5168 0.5241 0.5313
0.5028 0.5102 0.5176 0.5248 0.5320
0.5035 0.5109 0.5183 0.5256 0.5328
0.5043 0.5117 0.5190 0.5263 0.5335
0.5050 0.5124 0.5198 0.5270 0.5342
0.5057 0.5132 0.5205 0.5277 0.5349
0.5065 0.5139 0.5212 0.5285 0.5356
0.5072 0.5146 0.5219 0.5292 0.5363
0.55 0.56 0.57 0.58 0.59
TABLE B.19 (cont.): Correlation Coefficients, r, Corresponding to Fisher's z Transformation
z: 0 1 2 3 4 5 6 7 8 9
0.60 0.61 0.62 0.63 0.64
0.5370 0.5441 0.5511 0.5581 0.5649
0.5378 0.5448 0.5518 0.5587 0.5656
0.5385 0.5455 0.5525 0.5594 0.5663
0.5392 0.5462 0.5532 0.5601 0.5669
0.5399 0.5469 0.5539 0.5608 0.5676
0.5406 0.5476 0.5546 0.5615 0.5683
0.5413 0.5483 0.5553 0.5622 0.5690
0.5420 0.5490 0.5560 0.5629 0.5696
0.5427 0.5497 0.5567 0.5635 0.5703
0.5434 0.5504 0.5574 0.5642 0.5710
0.60 0.61 0.62 0.63 0.64
0.65 0.66 0.67 0.6R 0.69
0.5717 0.5784 0.5850 0.5915 0.5980
0.5723 0.5790 0.5856 0.5922 0.5986
0.5730 0.5797 0.5863 0.5928 0.5993
0.5737 0.5804 0.5869 0.5935 0.5999
0.5744 0.5810 0.5876 0.5941 0.6005
0.5750 0.5817 0.5883 0.5948 0.6012
0.5757 0.5823 0.5889 0.5954 0.6018
0.5764 0.5830 0.5896 0.5961 0.6025
0.5770 0.5837 0.5902 0.5967 0.6031
0.5777 0.5843 0.5909 0.5973 0.6037
0.65 0.66 0.67 0.68 0.69
0.70 0.71 0.72 0.73 0.74
0.6044 0.6107 0.6169 0.6231 0.6291
0.6050 0.6113 0.6175 0.6237 0.6297
0.6056 0.6119 0.6181 0.6243 0.6304
0.6063 0.6126 0.6188 0.6249 0.6310
0.6069 0.6132 0.6194 0.6255 0.6316
0.6075 0.6138 0.6200 0.6261 0.6322
0.6082 0.6144 0.6206 0.6267 0.6328
0.6088 0.6150 0.6212 0.6273 0.6334
0.6094 0.6157 0.6218 0.6279 0.6340
0.6100 0.6163 0.6225 0.6285 0.6346
0.70 0.71 0.72 0.73 0.74
0.75 0.76 0.77 0.78 D.79
0.6351 0.6411 0.6469 0.6527 0.6584
0.6357 0.6417 0.6475 0.6533 0.6590
0.6363 0.6423 0.6481 0.6539 0.6595
0.6369 0.6428 0.6487 0.6544 0.6601
0.6375 0.6434 0.6492 0.6550 0.6607
0.6381 0.6440 0.6498 0.6556 0.6612
0.6387 0.6446 0.6504 0.6561 0.6618
0.6393 0.6452 0.6510 0.6567 0.6624
0.6399 0.6458 0.6516 0.6573 0.6629
0.6405 0.6463 0.6521 0.6578 0.6635
0.75 0.76 0.77 0.78 0.79
0.80 0.81 0.82 0.83 0.84
0.6640 0.6696 0.6751 0.6805 0.6858
0.6646 0.6701 0.6756 0.6810 0.6863
0.6652 0.6707 0.6762 0.6815 0.6869
0.6657 0.6712 0.6767 0.6821 0.6874
0.6663 0.6718 0.6772 0.6826 0.6879
0.6668 0.6723 0.6778 0.6832 0.6884
0.6674 0.6729 0.6783 0.6837 0.6890
0.6679 0.6734 0.6789 0.6842 0.6895
0.6685 0.6740 0.6794 0.6847 0.6900
0.6690 0.6745 0.6799 0.6853 0.6905
0.80 0.81 0.82 0.83 0.84
0.85 0.86 0.87 0.88 0.89
0.6911 0.6963 0.7014 0.7064 0.7114
0.6916 0.6968 0.7019 0.7069 0.7119
0.6921 0.6973 0.7024 0.7074 0.7124
0.6926 0.6978 0.7029 0.7079 0.7129
0.6932 0.6983 0.7034 0.7084 0.7134
0.6937 0.6988 0.7039 0.7089 0.7139
TABLE B.26a (cont.): Binomial Coefficients, nCX
X: n = 39 40 41 42 43 44 45 46
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
1 39 741 9139 82251 575757 3262623 15380937 61523748 211915132 635745396 1676056044 3910797436 8122425444 15084504396 25140840660 37711260990 51021117810 62359143990 68923264410 68923264410 62359143990 51021117810 37711260990 25140840660 15084504396
1 40 780 9880 91390 658008 3838380 18643560 76904685 273438880 847660528 2311801440 5586853480 12033222880 23206929840 40225345056 62852101650 88732378800 113380261800 131282408400 137846528820 131282408400 113380261800 88732378800 62852101650 40225345056
1 41 820 10660 101270 749398 4496388 22481940 95548245 350343565 1121099408 3159461968 7898654920 17620076360 35240152720 63432274896 103077446706 151584480450 202112640600 244662670200 269128937220 269128937220 244662670200 202112640600 151584480450 103077446706
1 42 861 11480 111930 850668 5245786 26978328 118030185 445891810 1471442973 4280561376 11058116888 25518731280 52860229080 98672427616 166509721602 254661927156 353697121050 446775310800 513791607420 538257874440 513791607420 446775310800 353697121050 254661927156
1 43 903 12341 123410 962598 6096454 32224114 145008513 563921995 1917334783 5752004349 15338678264 36576848168 78378960360 151532656696 265182149218 421171648758 608359048206 800472431850 960566918220 1052049481860 1052049481860 960566918220 800472431850 608359048206
1 44 946 13244 135751 1086008 7059052 38320568 177232627 708930508 2481256778 7669339132 21090682613 51915526432 114955808528 229911617056 416714805914 686353797976 1029530696964 1408831480056 1761039350070 2012616400080 2104098963720 2012616400080 1761039350070 1408831480056
1 45 990 14190 148995 1221759 8145060 45379620 215553195 886163135 3190187286 10150595910 28760021745 73006209045 166871334960 344867425584 646626422970 1103068603890 1715884494940 2438362177020 3169870830126 3773655750150 4116715363800 4116715363800 3773655750150 3169870830126
1 46 1035 15180 163185 1370754 9366819 53524680 260932815 1101716330 4076350421 13340783196 38910617655 101766230790 239877544005 511738760544 991493848554 1749695026860 2818953098830 4154246671960 5608233007146 6943526580276 7890371113950 8233430727600 7890371113950 6943526580276
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
Shown are binomial coefficients, nCX = (n choose X), which are the numbers of combinations of n items taken X at a time. For each n, the array of binomial coefficients is symmetrical; the middle of the array is shown in italics, and the coefficient for X is the same as for n − X. For example, 18C13 = 18C5 = 8568 and 46C44 = 46C2 = 1035.
TABLE B.26b: Proportions of the Binomial Distribution for p = q = 0.5
X: n = 1 2 3 4 5 6 7 8 9 10
0 1 2 3 4 5
n = 1: 0.50000 0.50000
0.25000 0.50000 0.25000
0.12500 0.37500 0.37500 0.12500
0.06250 0.25000 0.37500 0.25000 0.06250
0.03125 0.15625 0.31250 0.31250 0.15625 0.03125
0.01563 0.09375 0.23438 0.31250 0.23438 0.09375
0.00781 0.05469 0.16406 0.27344 0.27344 0.16406
0.00391 0.03125 0.10938 0.21875 0.27344 0.21875
0.00195 0.01758 0.07031 0.16406 0.24609 0.24609
0.00098 0.00977 0.04395 0.11719 0.20508 0.24609
0 1 2 3 4 5
0.01563
0.05469 0.00781
0.10938 0.03125 0.00391
0.16406 0.07031 0.01758 0.00195
0.20508 0.11719 0.04395 0.00977 0.00098
6 7 8 9 10
6 7 8 9 10 n = 11
12
l3
14
15
16
17
18
19
20
X
0 1 2 3 4 5
0.00049 0.00537 0.02686 0.08057 0.16113 0.22559
0.00024 0.00293 0.01611 0.05371 0.12085 0.19336
0.00012 0.00159 0.00952 0.03491 0.08728 0.15710
0.00006 0.00085 0.00555 0.02222 0.06110 0.12219
0.00002 0.00024 0.00183 0.00854 0.02777 0.06665
0.00001 0.00013 0.00104 0.00519 0.01816 0.04721
0.00000 0.00007 0.00058 0.00311 0.01167 0.03268
0.00000 0.00004 0.00033 0.00185 0.00739 0.02218
0.00000 0.00002 0.00018 0.00109 0.00462 0.01479
0 1 2 3 4 5
6 7 8 9 10
0.22559 0.16113 0.08057 0.02686 0.00537
0.22559 0.19336 0.12085 0.05371 0.01611
0.20947 0.20947 0.15710 0.08728 0.03491
0.18329 0.20947 0.18329 0.12219 0.06110
0.00003 0.00046 0.00320 0.01389 0.04166 0.09164 0.15274 0.19638 0.19638 0.15274 0.09164
0.12219 0.17456 0.19638 0.17456 0.12219
0.09442 0.14838 0.18547 0.18547 0.14838
0.07082 0.12140 0.16692 0.18547 0.16692
0.03696 0.07393 0.12013 0.16018 0.17620
6 7 8 9 10
11 12 l3 14 15
0.00049
0.00293 0.00024
0.00952 0.00159 0.00012
0.02222 0.00555 0.00085 0.00006
0.04166 0.0l389 0.00320 0.00046 0.00003
0.06665 0.02777 0.00854 0.00183 0.00024
0.09442 0.04721 0.01816 0.00519 0.00104
0.12140 0.07082 0.03268 0.01167 0.00311
0.05175 0.09611 0.14416 0.17620 0.17620 0.14416 0.09611 0.05175 0.02218 0.00739
0.16018 0.12013 0.07393 0.03696 0.01479
11 12 13 14 15
0.00002
0.00013 0.00001
0.00058 0.00007 0.00000
0.00185 0.00033 0.00004 0.00000
0.00462 0.00109 0.00018 0.00002 0.00000
16 17 18 19 20
X
16 17 18 19 20
$P(X) = \dfrac{n!}{X!\,(n - X)!}\, p^{X} (1 - p)^{n - X}$
TABLE B.27: Critical Values of C for the Sign Test or the Binomial Test with p = 0.5
n
a(2): .50 .20 .10 .05 .02 .01 .005 .002 .001
a(1): .25 .10 .05 .025 .01 .005 .0025 .001 .0005
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 101 102 103 104 105 106 107 108 109 110
-
0 0 1 1 2 2 2 3 3 4 4 5 5 6 6 7 7 7 8 8 9 9
-
-
-
-
-
0 0
-
-
0 1 1 2 2 2 3 3 4 4 4 5 5 6 6 7 7 7 8 8
10 10 11 11 12 12
10 10 10
13 13 14 14 15
11 11 12 12 13
15 15 16 16 17 17 18 18 19 19 20 20 21 21 22 46 47 47 48 48 49 49 49 50 50
9 9
0 0 0 1 1 1
2 2 3 3 3 4 4 5 5 5 6
6 7 7 7 8 8 9 9 10
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
5 5 6 6 7 7 7 8 8 9
4 5 5 5 6 6 7 7 7 8
4 4 4 5 5
9 9
8 8
0 0 0 0 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 8
-
3 4 4 4 5
0 0 0 0 1 1 1 2 2 2 3 3 4 4
-
0 0 0 1 1 1 2 2 2 3
-
-
-
-
-
13 14 14 15 15
10 10 11 11 12 12 13 13 13 14
11 12 12 12 13
9 10 10 10 11 11 12
15 16 16 17 17
14 15 15 16 16
13 14 14 15 15
18 18 19 19 19 43 44 44 44 45 45 46 46 47 47
16 17 17 18 18 41 42 42 43 43 44 44 44 45 45
10 10 11
9
0 0 0 0 1
1 1
2 2 2 3 3 3
6 6 6 7 7 7 8 8 9 9 9 10 10
8
0 0 0 0 1 1
1 1
2 2 2 3
3 3 4 4 4 5 5 5 6 6 6
7 7
8
8
11
9 9 9 10 10
8 8 9 9 9
12 13 13 13 14
11 12 12 13 13
11 11 11 12 12
10 10
15 16 16 17 17
14 15 15 15 16
13 14 14 15 15
13 13 13 14 14
40 40 41 41 41
38 38 39 39 40 40 41 41 41 42
37 37 37 38 38
35 36 36 37 37
12 12 12 13 13 34 35 35 35 36
39 39 40 40 41
38 38 38 39 39
36 37 37 37 38
42 42 43 43 44
TABLE B.27 (cont.): Critical Values of C for the Sign Test or the Binomial Test with p = 0.5
a(2): .50 .20 .10 .05 .02 .01 .005 .002 .001
a(1): .25 .10 .05 .025 .01 .005 .0025 .001 .0005
11
11 II
-
0 0 0 0 1
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 7 7 7 8 8 8 9 9 10 10 10 11 11 11 12 12 13 33 34 34 34 35 35 36 36 36 37
n
51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
a(2): .50 a(l): .25
22 23 23 24 24 24 25 25 26 26 27 27 28 28 29 29 30 30 31 31
20 20 21 21 22
19 19 20 20 20
22 23 23 24 24 24 25 25 26 26
21 21 22 22 23 23 24 24 24 25
18 18 18 19 19 20 20 21 21 21 22 22 23 23 24
27 27 28 28 29
23 26 26 27 27
24 25 25 25 26
16 17 17 18 18 18 19 19 20 20 20 21 21 22 22 23 23 23 24 24
32 32 33 33 34 34 35 35 36 36
29 30 30 30 31 31 32 32 33 33 34 34 35 35 36 36 37 37 37 38
28 28 28 29 29 30 30 31 31 32
26 27 27 28 28 28 29 29 30 30
25 25 26 26 26 27 27 28 28 29
31 31 32 32 32
29 30 30 30 31
33 33 34 34 35 35 36 36 37 37
31 32 32 33 33
38 39 39 40 40
32 33 33 33 34 34 35 35 36 36 37 37 38 38 38
36 37 37 38 38
91 92 93 94 95
39 39 40 40 41 41 42 42 43 43
96 97 98 99 100 151 152 153 154 155
44 44 45 45 46 70 71 71 72 72
41 41 42 42 43 67 67 68 68 69
39 39 40 40 41 64 65 65 66 66
37 38 38 39 39
156 157 158 159 160
73 73 74 74 75
69 69 70 70 71
67 67 68 68 69
65 65 66 66 67
62 63 63 64 64
33 34 34 35 35 36 36 37 37 37 60 61 61 62 62 63 63 63 64 64
15 16 16 17 17 17 18 18 19 19 20 20 20 21 21
15 15 15 16 16 17 17 17 18 18 19 19 19 20 20
22 22 22 23 23 24 24 25 25 25 26 26 27 27 28
21 21 22 22 22 23 23 24 24 24 25 25 26 26 27
28 28 29 29 30 30 31 31 31 32
27 27 28 28 29 29 29 30 30 31
32 33 33 34 34 34 35 35 36 36 59 59 60 60 61 61 61 62 62 63
31 32 32 32 33 33 34 34 35 35 57 58 58 59 59 60 60 60 61 61
14 14 14 15 15 16 16 16 17 17 18 18 18 19 19 20 20 20 21 21 22 22 22 23 23 24 24 24 25 25 26 26 27 27 27 28 28 29 29 29 30 30 31 31 32 32 32 33 33 34 56 56 56 57 57 58 58 59 59 60
13 13 14 14 14 15 15 16 16 16 17 17 18 18 18 19 19 20 20 20 21 21 22 22 22 23 23 24 24 24 25 25 26 26 26 27 27 28 28 29 29 29 30 30 31 31 31 32 32 33 54 55 55 56 56 57 57 57 58 58
TABLE B.27 (cont.): Critical Values of C for the Sign Test or the Binomial Test with p = 0.5
65 66 66 67 67 68 68 69 69 70 95 95 96 96 97 97 98 98 99 99 100 100 101 101 102
.20 .10 .05 .10 .05 .025 48 46 44 48 46 45 49 47 45 49 47 46 50 48 46 46 50 48 47 51 49 47 51 49 52 50 48 52 50 48 52 50 49 49 53 51 53 51 50 54 52 50 54 52 51 51 55 53 55 53 51 56 54 52 56 54 52 57 55 53 53 57 55 54 58 56 54 58 56 55 59 56 59 57 55 60 57 56 60 58 56 60 58 57 61 59 57 61 59 57 62 60 58 62 60 58 63 61 59 63 61 59 64 62 60 64 62 60 65 63 61 65 63 61 66 63 62 66 64 62 90 88 86 91 88 86 91 89 87 92 89 87 92 90 87 88 93 90 93 91 88 94 91 89 94 92 89 95 92 90 95 93 90 96 93 91 96 94 91 92 97 94 92 97 94
102 103 103 104 104
98 98 99 99 99
n
a(2): .50 a(1): .25
111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220
51 51 52 52 53 53 54 54 55 55 56 56 57 57 58 58 59 59 60 60 61 61 62 62 63 63 64 64 65 65
95 95 96 96 97
93 93 94 94 94
.02 .01 .005 .002 .001 .01 .005 .0025 .001 .0005 42 41 40 38 37 43 41 40 39 38 43 41 42 39 38 44 41 42 40 39 44 42 43 40 39 45 43 42 40 39 44 45 42 41 40 44 45 43 41 40 46 45 43 42 41 44 46 42 41 45 44 47 42 45 43 47 46 45 43 42 48 46 45 42 43 48 47 45 44 43 49 47 46 44 43
161 162 163 164 165 166 167 168 169 170 171 172 173 174 175
75 76 76 77 77
49 49 50 50 51
48 48 48 49 49
51 52 52 53 53 53 54 54 55 55
50 50 51 51 52
56 56 57 57 58
54 55 55 56 56 56 57 57 58 58 81 82 82 83 83 84 84 84 85 85 86 86 87 87 88 88 89 89 89 90
58 58 59 59 60 83 83 84 84 85 85 86 86 87 87 88 88 89 89 89 90 90 91 91 92
52 52 53 53 54
46 47 47 48 48 49 49 49 50 SO 51 51 52 52 52
n
.10 .05
78 78 79 79 80 80 81 81 82 82
74 74 75 75 76
79 79 79 80 80 81 81 82 82 83 83 84 84 85 85
71 72 72 73 73 74 74 75 75 76 76 77 77 78 78
a(1): .25
45 45 46 46 46
44 44 45 45 45
176 177 178 179 180
83 83 83 84 84
47 47 48 48 49 49 50 50 50 51
46 46 47 47 48 48 48 49 49 50
181 182 183 184 185
85 85 86 86 87 87 88 88 89 89
50 51 51 51 52 52 53 53 54 54
191 192 193 194 195
82 82 83 83 84 84 85 85 86 86
51 52 52 53 53 53 54 54 55 55 78 78 79 79 79 80 80 81 81 82 82 83 83 83 84
86 87 87 88 88
84 85 85 86 86
83 83 84 84 85
53 53 54 54 55 55 56 56 56 57 80 80 81 81 81
76 77 77 78 78 78 79 79 80 80 81 81 82 82 82
186 187 188 189 190
196 197 198 199 200 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270
90 90 91 91 92 92 93 93 94 94 119 120 120 121 121 122 122 123 123 124 124 125 125 126 126 126 127 127 128 128
76 77 77 78 78
71 72 72 73 73
69 70 70 70 71
78 79 79 80 80 81 81 82 82 83
86 83 84 86 87 84 87 85 88 85 88 85 89 86 89 86 89 87 90 87 114 111 115 112 115 112 116 113 116 113 117 114 117 114 118 115 118 115 119 116 119 116 120 117 120 117 121 118 121 118 122 119 122 119 123 120 123 120 123 120
.05 .025 67 68 68 68 69 69 70 70 71 71 72 72 73 73 74 74 74 75 75 76 76 77 77 78 78 79 79 80 80 81 81 81 82 82 83 83 84 84 85 85 109 109 110 110 111
.02 .01
05 65 60 66 67 67 68 68 68 69 69 70 70 71 71 72 72 73 73 73 74 74 75 75 76 76 77 77 78 78 78 79 79 80 80
77 77 78 78 79
III
79 79 80 80 81 104 105 105 106 106 \06 107 107 108 108 109 109 110 110 111 111 111 112 112 113
114 114 115 115 116
81 82 82 83 106 107 107 107 108 108 109 109 110 110 111 111 112 112 113
116 117 117 117 118
113 114 114 114 115
111 112 112 113 113
.01 .005 .002 .001 .005 .0025 .001 .0005 62 59 63 60 64 62 59 60 64 63 61 60 63 61 60 65 64 62 60 65 64 61 65 62 64 63 61 66 66 65 62 63 67 62 65 63 07 66 64 63 66 64 63 68 68 64 67 65 64 67 69 65 68 64 69 66 68 65 70 66 70 68 67 65 70 69 66 67 71 69 67 66 71 67 70 68 72 67 70 68 72 71 67 69 73 71 68 69 73 72 70 68 74 72 70 69 74 72 69 71 74 73 71 70 75 73 71 70 75 74 72 71 76 74 72 71 76 71 75 73 72 72 73 73 74
75 76 76 77 77 77 78 78 79 79 102 103 103 104 104
73 74 74 75 75 75 76 76 77 77 100 101 101 101 102
74 75 75 75 76 99 99 99 100 100
105 105 106 106 106 107 107 108 108 109 109 110 110
102 103 103 104 104 105 105 106 106 106 107 107 108 108 109
101 101 102 102 103 103 103 104 104 105 105 106 106 107 107
III
111
TABLE B.27 (cont.): Critical Values of C for the Sign Test or the Binomial Test with p = 0.5
a(2): .50 a(1): .25
.20 .10
.10 .05 .05 .025
221 222 223 224 225
104 105 105 106 106
100 100 101 101 102
97 98 98 99 99
95 95 96 96 97
226 227 228 229 230
102 103 103 104 104
100 100 101 101 102
231 232 233 234 235
107 107 108 108 109 109 110 110 111 111
105 105 106 106 107
102 102 103 103 104
97 98 98 99 99 100 100 101 101 101
236 237 238 239 240 241 242 243 244 245 246 247 248 249 250
112 112 113 113 114 114 115 115 116 116 117 117 118 118 119
107 108 108 109 109 110 110 111 111 111
301 302 303 304 305 306 307 308 309 310
144 144 145 145 146 146 147 147 148 148
138 139 139 140 140 141 141 142 142 143
104 105 105 106 106 107 107 108 108 109 109 110 110 111 111 135 136 136 137 137 138 138 139 139 140
311 312 313 314 315
149 149 150 150 151
316 317 318 319 320 321 322 323 324 325 326 327 328 329 330
151 151 152 152 153 153 154 154 155 155 156 156 157 157 158
143 144 144 145 145 146 146 147 147 148 148 149 149 149 150 150 151 151 152 152
n
112 112 113 113 114
140 140 141 141 142 142 143 143 144 144 145 145 146 146 147 147 148 148 149 149
.02 .01 .005 .002 .001 .01 .005 .0025 .001 .0005 92 90 89 87 85 91 93 89 87 86 93 91 90 88 86 94 92 90 88 86 94 92 91 88 87
n
95 95 95 96 96 97 97 98 98 99 99 100 100 101 101 101 102 102 103 103 104 104 105 105 106
93 93 94 94 95 95 95 96 96 97 97 98 98 99 99 100 100 100 101 101
91 91 92 92 93 93 94 94 95 95 95 96 96 97 97
89 89 90 90 91 91 92 92 92 93 93 94 94 95 95
87 88 88 89 89 90 90 90 91 91
271 272 273 274 275 276 277 278 279 280 281 282 283 284 285
92 92 93 93 94
286 287 288 289 290
98 98 99 99 100 100 100 101 101 102
94 94 95 95 96 96 97 97 98 98
291 292 293 294 295
102 102 103 103 104
96 96 96 97 97 98 98 99 99 100
133 133 133 134 134 135 135 136 136 137 137 138 138 139 139
129 130 130 131 131 132 132 133 133 134
127 128 128 129 129 130 130 130 131 131
125 126 126 127 127 127 128 128 129 129
123 123 124 124 125 125 125 126 126 127
121 121 122 122 123 123 124 124 125 125
134 134 135 135 136
140 140 141 141 141
136 137 137 138 138
132 132 133 133 134 134 135 135 136 136
130 130 131 131 132 132 133 133 133 134
127 128 128 129 129 130 130 131 131 131
126 126 126 127 127 128 128 129 129 130
142 142 143 143 144 144 145 145 146 146
139 139 140 140 141 141 141 142 142 143
136 137 137 138 138 139 139 140 140 141
134 135 135 136 136
132 132 133 133 134
137 137 138 138 139
134 135 135 136 136
130 131 131 131 132 132 133 133 134 134
102 102 103 103 104 104 105 105 106 106 107 107 108 108 109
296 297 298 299 300 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380
",(2):.50 (1):.25 129 129 130 130 131 131 132 132 133 133 134 134 135 135 136 136 137 137 138 138
.20 .10 124 124 125 125 126
139 139 140 140 141 141 142 142 143 143 168 169 169 170 170 171 171 172 172 173 173 174 174 175 175 176 176 177 177 178 178 178 179 179 180 180 181 181 182 182
Test with p = 0.5
.10 .05 121 121 122 122 123 123 124 124 125 125
.05 .025 118 119 119 120 120 121 121 122 122 123
129 129 130 130 131 131 132 132 133 133
126 126 127 127 128
123 124 124 124 125
128 129 129 130 130
134 134 135 135 135
130 131 131 132 132 133 133 134 134 135 159 160 160 161 161
125 126 126 127 127 128 128 129 129 130 130 131 131 132 132
126 127 127 128 128
136 136 137 137 138 162 163 163 164 164 165 165 166 166 167
161 162 162 163 163
.02 .01 115 116 116 117 117 118 118 119 119 120 120 120 121 121 122 122 123 123 124 124
.01 .005 .002 .001 .005 .0025 .001 .0005 113 111 109 107 114 112 110 108 114 112 110 108 113 110 115 109 115 113 111 109 116 116 117 117 117 118 118 119 119 120 120 121 121 122 122
114 114 115 115 116 116 116 117 117 118 118 119 119 120 120
111 112 112 113 113 114 114 115 115 115 116 116 117 117 118
110 110 111 111 112 112 112 113 113 114 114 115 115 116 116
125 125 126 126 127 127 127 128 128 129
123 123 123 124 124 125 125 126 126 127
121 121 122 122 122 123 123 124 124 125
118 119 119 120 120 120 121 121 122 122
117 117 117 118 118 119 119 120 120 121
156 157 157 158 158
153 153 154 154 155 155 156 156 156 157 157 158 158 159 159
150 151 151 152 152 153 153 154 154 155 155 156 156 156 157
148 149 149 150 150 151 151 151 152 152 153 153 154 154 155
146 146 147 147 147 148 148 149 149 150 150 151 151 152 152
144 144 145 145 146 146 146 147 147 148 148 149 149 150 150
157 158 158 159 159 160 160 161 161 162
155 156 156 157 157 158 158 158 159 159
152 153 153 154 154 155 155 156 156 157
151 151 152 152 152 153 153 154 154 155
162 163 163 163 164
160 160 161 161 162
157 158 158 158 155
155 156 156 157 157
167 168 168 169 169 170 170 171 171 172 172 173 173 174 174
169 169 170 170 171
159 159 159 160 160 161 161 162 162 163 163 164 164 165 165 166 166 167 167 168
175 175 176 176 177
171 172 172 172 173
168 168 169 169 170
164 164 165 165 166 166 167 167 168 168
160 160 161 161 162 162 163 163 164 164 164 165 165 166 166
Appendix TABLE B.27 (cont.): Critical
a(2): .50 a( I):.25
.20 .10
.10 .05
331 332 333 334 335
158 159 159 160 160
153 153 154 154 155
150 150 150 151 151
336 337 338 339 340 341 342 343 344 345
161 161 162 162 163
155 156 156 157 157
163 164 164 165 165
158 158 159 159 160
152 152 153 153 154 154 155 155 156 156
166 166 167 167 168 193 193 194 194 195 195 196 196 197 197
160 161 161 162 162
/I
346 347 348 349 350 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440
198 198 199 199 200 200 201 201 202 202 203 203 204 204 205 205 206 206 207 207 207 208 208 209 209 210 210 211 211 212
187 187 188 188 189 189 190 190 191 191 192 192 192 193 193 194 194 195 195 196 196 197 197 198 198 199 199 200 200 201 201 202 202 203 203 204 204 205 205 206
157 157 158 158 159 183 184 184 184 185 185 186 186 187 187 188 188 189 189 190 190 191 191 192 192 193 193 194 194 195 195 196 196 196 197 197 198 198 199 199 200 200 201 201 202
.05 .02 .025 .01 147 143 147 144 148 144 148 145 149 145
Values of C for the Sign Test or the Binomial
381 382 383 384 385
151 152 152 153 153
146 146 147 147 148 148 149 149 149 150
.01 .005 .002 .001 .005 .0025 .001 .0005 141 139 136 135 142 139 137 135 142 140 137 136 142 140 138 136 143 141 138 136 143 141 139 137 144 142 139 137 144 142 140 138 145 143 140 138 143 145 141 139 146 144 141 139 146 144 141 140 147 145 142 140 147 145 142 141 148 145 143 141
154 154 155 155 156
150 151 151 152 152
148 149 149 149 150
180 180 181 181 182
176 177 177 178 178
182 183 183 184 184
179 179 180 180 180
174 174 175 175 176 176 177 177 177 178
185 185 186 186 187
181 181 182 182 183 183 184 184 185 185 186 186 187 187 188 188 188 189 189 190 190 191 191 192 192
149 150 150 ISO
151
187 187 188 188 189 189 190 190 191 191 192 192 193 193 194 194 195 195 196 196 197 197 198 198 198
193 193 194 194 195
178 179 179 180 180 181 181 182 182 183 183 184 184 185 185 185 186 186 187 187 188 188 189 189 190 190 191 191 192 192
146 146 147 147 148 171 172 172 173 173 174 174 175 175 176 176 177 177 177 178 178 179 179 180 180
B
183 183 184 184 185
.10 .05 173 174 174 175 175
386 387 388 389 390 391 392 393 394 395
185 186 186 187 187 188 188 189 189 190
179 180 180 181 181 182 182 183 183 184
176 176 177 177 178
173 173 174 174 175
178 179 179 180 180
175 176 176 177 177
181 181 182 182 183 207 208 208 208 209
178 178 178 179 179
n
789
Test with p = 0.5
.20 .10 177 177 178 178 179
a(2): .50 a(l): .25
143 144 144 145 145
141 142 142 143 143
396 397 398 399 400
190 191 191 192 192
184 185 185 186 186
169 169 170 170 170 171 171 172 172 173
167 167 168 168 168 169 169 170 170 171
451 452 453 454 455 456 457 458 459 460
217 218 218 219 219 220 220 221 221 222
211 211 212 212 213 213 214 214 215 215
209 210 210 211 211
173 174 174 175 175 176 176 176 177 177
171 172 172 173 173 174 174 174 175 175
222 223 223 224 224
216 216 217 217 218
212 212 213 213 214
181 181 182 182 183
178 178 179 179 180
225 225 226 226 227 227 228 228 229 229
218 219 219 220 220 221 221 222 222 223
214 215 215 216 216 217 217 218 218 219
183 184 184 184 185 185 186 186 187 187 188 188 189 189 190
180 181 181 182 182
176 176 177 177 178 178 179 179 179 180
461 462 463 464 465 466 467 468 469 470 471 472 473 474 475
180 181 181 182 182
476 477 478 479 480 481 482 483 484 485
230 230 231 231 232 232 233 233 234 234
183 183 184 184 185
486 487 488 489 490
235 235 236 236 237
223 224 224 224 225 225 226 226 227 227 228 228 229 229 230
219 220 220 221 221 221 222 222 223 223 224 224 225 225 226
182 183 183 184 184 185 185 186 186 187
Statistical Tables and Graphs
.OS
.025 170 171 171 172 172
204 204 205 205 206 206 207 207 208 208
.02 .01 167 167 168 168 169
.01 .005 .002 .001 .005 .0025 .001 .0005 164 162 159 157 165 163 160 158 165 163 160 158 166 164 161 159 164 166 161 159
169 170 170 171 171 172 172 172 173 173
167 167 168 168 169 169 170 170 170 171
174 174 175 175 176 200 200 201 201 202
171 172 172 173 173
202 203 203 204 204
200 200 200 201 201 202 202 203 203 204 204 205 205 206 206
208 209 209 210 210 211 211 212 212 213
205 205 205 206 206
213 214 214 215 215
209 210 210 211 211
216 216 217 217 218 218 218 219 219 220
212 212 213 213 214 214 214 215 215 216
220 221 221 222 222
216 217 217 218 218
207 207 208 208 209
197 198 198 199 199
207 207 208 208 208 209 209 210 210 211 211 212 212 213 213 214 214 215 215 216
164 165 165 166 166
162 162 163 163 164 164 164 165 165 166
167 167 168 168 169 169 170 170 171 171 195 195 196 196 197 197 198 198 198 199 199 200 200 201 201
192 192 193 193 194
160 160 161 161 162 162 162 163 163 164
194 195 195 195 196
164 165 165 166 166 190 190 191 191 191 192 192 193 193 194
196 197 197 198 198
194 195 195 196 196
202 202 203 203 204
199 199 200 200 201
197 197 197 198 198
204 205 205 205 206
201 201 202 202 203 203 204 204 205 205 206 206 207 207 208 208 208 209 209 210
199 199 200 200 201 201 202 202 203 203
206 207 207 208 208 209 209 210 210 211 211 212 212 212 213
166 167 167 Hi8 168
203 204 204 205 205 206 206 207 207 208
790
Appendix B
Statistical Tables arid Graphs
.. Critical TABLE B 27 (cont.): n
(2):.50 (1):.25
441 442 443 444 445 446 447 448 449 450 501 502 503 504 505
212 213 213 214 214
506 507 508 509 510 511 512 513 514 515 516 517 518 519 520
244 245 245 246 246 247 247 248 248 249 249 250 250 251 251 252 252 253 253 254 254 255 255 256 256 257 257 258 258 259 259 260 260 261 261 262 262 263 263 264
521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550
215 215 216 216 217 242 242 243 243 244
264 265 265 266 266
.20 .10 206 207 207 207 208
.10 .05 202 203 203 204 204
208 209 209 210 210
Values of C for the Sign Test or the Binomial
.02 .01 195 196 196 197 197
.01 .005 .002 .001 .005 .0025 .001 .0005 192 190 187 185 193 191 188 185 193 191 188 186 194 191 188 186 194 192 189 187
205 205 206 206 207
.05 .025 199 199 200 200 201 201 202 202 203 203
197 198 198 199 199
195 195 196 196 197
192 193 193 194 194
235 236 236 237 237
231 232 232 233 233
228 228 229 229 229
223 224 224 225 225
221 221 222 222 223
238 238 239 239 240 240 241 241 241 242 242 243 243 244 244 245 245 246 246 247 247 248 248 249 249 250 250 251 251 252 252 253 253 254 254 255 255 256 256 257 257 258 258 258 259
234 234 234 235 235
230 230 231 231 232
226 226 227 227 228
236 236 237 237 238 238 239 239 240 240 241 241 242 242 243 243 244 244 245 245 246 246 247 247 247 248 248 249 249 250 250 251 251 252 252 253 253 254 254 255
232 233 233 234 234 235 235 236 236 237 237 238 238 239 239 240 240 240 241 241 242 242 243 243 244 244 245 245 246 246 247 247 248 248 249 249 250 250 251 251
n
(2):.50 (1):.25 237 238 238 239 239 239 240 240 241 241 267 267 268 268 269
.20 .10
218 219 219 220 220
189 190 190 191 191 215 215 216 216 217
187 188 188 189 189 213 213 214 214 215
223 224 224 224 225
220 221 221 222 222
217 218 218 219 219
215 216 216 216 217
491 492 493 494 495 496 497 498 499 500 551 552 553 554 555 556 557 558 559 560
228 229 229 230 230 231 231 232 232 232 233 233 234 234 235 235 236 236 237 237
225 226 226 227 227 228 228 229 229 230 230 231 231 232 232 232 233 233 234 234
223 223 224 224 225 225 226 226 227 227
217 218 218 219 219 220 220 221 221 222
561 562 563 564 565 566 567 568 569 570
272 272 272 273 273 274 274 275 275 276
227 228 228 229 229 230 230 231 231 232
220 220 221 221 221 222 222 223 223 224 224 225 225 226 226 227 227 228 228 228
222 222 223 223 224 224 225 225 226 226
571 572 573 574 575
269 270 270 271 271 272 272 273 273 274
238 238 239 239 240 240 241 241 242 242 242 243 243 244 244
235 235 236 236 237 237 238 238 239 239 240 240 241 241 241
232 233 233 234 234
229 229 230 230 231
227 227 228 228 229
276 277 277 278 278 279 279 280 280 281 281 282 282 283 283
235 235 235 236 236 237 237 238 238 239
231 232 232 233 233 234 234 235 235 235
229 229 230 230 231 231 232 232 233 233
284 284 285 285 286 286 287 287 288 288
276 277 277 278 278 279 279 280 280 281
245 245 246 246 247
242 242 243 243 244
239 240 240 241 241
236 236 237 237 238
234 234 235 235 235
289 289 290 290 291
281 282 282 283 283
576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600
269 270 270 271 271
230 231 231 232 232 233 233 234 234 235 259 260 260 261 261 262 262 263 263 264 264 265 265 266 266 267 267 268 268 269
274 275 275 276 276
Test with p = 0 5
.10 .05 226 227 227 228 228 229 229 230 230 231 255 256 256 257 257 258 258 259 259 260
.05 .025
.02 .01
223 223 224 224 225
219 219 220 220 221
225 226 226 227 227 252 252 252 253 253
221 222 222 223 223 247 248 248 249 249
254 254 255 255 256
250 250 251 251 251
.01 .005 .002 .001 .005 .0025 .001 .0005 213 216 210 208 211 216 214 209 217 214 211 209 217 215 212 209 218 215 212 210 218 216 213 210 219 216 213 211 219 217 214 211 220 217 214 212 220 218 214 212 244 242 238 236 245 242 239 236 245 243 239 237 246 243 240 237 246 243 240 238 247 244 241 238 247 244 241 239 248 242 245 239 248 245 242 240 249 246 242 240
260 261 261 261 262 262 263 263 264 264
256 257 257 258 258 259 259 260 260 261
252 252 253 253 254 254 255 255 256 256
249 249 250 250 251 251 252 252 253 253
246 247 247 248 248 249 249 250 250 251
243 243 244 244 245 245 246 246 247 247
241 241 242 242 242 243 243 244 244 245
265 265 266 266 267 267 268 268 269 269 270 270 271 271 272 272 273 273 274 274
261 262 262 263 263 263 264 264 265 265 266 266 267 267 268
257 257 258 258 259 259 260 260 261 261 261 262 262 263 263
254 254 255 255 256 256 257 257 258 258 258 259 259 260 260
251 251 252 252 253 253 254 254 255 255 256 256 257 257 258
248 248 249 249 249 250 250 251 251 252 252 253 253 254 254
245 246 246 247 247 248 248 249 249 249 250 250 251 251 252
264 264 265 265 266 266 267 267 268 268
258 259 259 259 260 260 261 261 262 262
277 277 278 278 279
273 274 274 275 275
269 269 270 270 271
261 261 262 262 263 263 264 264 265 265 266 266 267 267 267
255 255 256 256 257 257 257 258 258 259 259 260 260 261 261
252 253 253 254 254
275 275 275 276 276
268 269 269 270 270 271 271 272 272 273
263 263 264 264 265
255 255 255 256 256 257 257 258 258 259
Appendix
..
TABLE B 27 (cont)· Critical
II
601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 701 702 703 704 705 706 707 708 709 7\0
0'(2): .50 0'(1): .25
.20 .10
291 292 292 293 293 294 294 295 295 296
284 284 285 285 286
296 297 297 298 298
289 289 290 290 291 291 292 292 293 293 294 294 295 295 295 296 296 297 297 298
299 299 300 300 301 301 302 302 303 303 304 304 305 305 306 306 307 307 308 308 308 309 309 310 310 311 311 312 312 313 313 314 314 315 315 341 341 342 342 343 343 344 344 345 345
286 287 287 288 288
298 299 299 300 300 301 301 302 302 303 303 304 304 305 305 306 306 307 307 308 333 333 334 334 334 335 335 336 336 337
.10 .05 .05 .025 279 275 280 276 280 276 281 277 281 277 282 278 282 278 283 279 283 279 284 280 284 280 285 281 285 281 286 282 286 282
.02 .01 271 271 272 272 273
287 287 288 288 289
278 279 279 280 280
289 289 290 290 291 291 292 292 293 293 294 294 295 295 296 296 297 297 298 298 299 299 300 300 301 301 302 302 303 303 328 328 329 329 330 330 331 331 332 332
283 283 284 284 285 285 286 286 287 287 287 288 288 289 289 290 290 291 291 292 292 293 293 294 294 295 295 296 296 297 297 298 298 299 299 324 324 325 325 325 326 326 327 327 328
273 274 274 275 275 276 276 277 277 278
281 281 281 282 282 283 283 284 284 285 285 286 286 287 287 288 288 289 289 290 290 291 291 291 292 292 293 293 294 294 319 319 320 320 321 321 322 322 323 323
Values of C for the Sign Test or the Binomial
.01 .005 .005 .0025 265 268 266 268 269 266 267 269 270 267 267 270 271 268 271 268 272 269 272 269 270 273 270 273 274 271 274 271 275 272 275 272 276 273 273 276 274 276 277 274 277 275 278 275 276 278 276 279 276 279 277 280 280 277 278 281 281 278 279 282 282 283 283 284 284 285 285 285 286 286 287 287 288 288 289 289 290 290 291 291 315 316 316 317 317 318 318 319 319 320
.002 .001 .001 .0005
n
0'(2): .50 0'(1): .25
.20
.02 .01 295 295 296 296 297
302 302 303 303 304
311 311 312 312 313
307 307 308 308 309
297 298 298 299 299 300 300 301 301 302 302 302 303 303 304
318 3111 319 319 320 320 321 321 322 322
313 314 314 315 315 316 316 317 317 318
309 310 310 311 311 312 312 312 313 313
304 305 305 306 306
323 323 324 324 325 325 326 326 327 327 328 328 329 329 330
318 319 319 319 320 320 321 321 322 322 323 323 324 324 325 325 326 326 327 327
314 314 315 315 316 316 317 317 318 318
309 310 310 311 311 312 312 313 313 313
319 319 320 320 321
314 314 315 315 316
321 322 322 323 323
352 352 353 353 354
348 348 349 349 350 350 351 351 352 352
316 317 317 318 318 343 343 344 344 345 345 346 346 346 347
651 652 653 654 655
316 316 317 317 318
308 309 309 310 310
264 264 265 265 266 266 267 267 268 268 269 269 270 270 271
262 262 262 263 263
656 657 658 659 660 661 662 663 664 665 666 667 668 669 670
318 319 319 320 320
311 311 312 312 313 313 314 314 314 315 315 316 316 317 317
269 269 269 270 270 271 271 272 272 273 273 274 274 275 275 276 276 276 277 277
284 284 285 285 286
271 272 272 272 273 273 274 274 275 275 276 276 277 277 278 278 279 279 279 280 280 281 281 282 282
321 321 322 322 323 323 324 324 325 325 326 326 327 327 328 328 329 329 330 330 331 331 332 332 333 333 334 334 335 335
286 287 287 288 288 312 313 313 314 314 315 315 316 316 317
283 283 284 284 285 309 309 310 310 311 311 311 312 312 313
279 280 280 281 281 282 282 283 283 284
278 278 279 279 280 280 281 281 282 2R2 306 306 307 307 308 308 309 309 310 310
671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 751 752 753 754 755 756 757 758 759 760
336 336 337 337 338 338 339 339 340 340 365 366 366 367 367 368 368 369 369 370
330 331 331 332 332 357 357 358 358 359 359 360 360 361 361
791
Test with p = 0 5
.05 .025 300 300 300 301 301
.to
259 260 260 261 261
266 267 267 268 268
Statistical Tables and Graphs
.10 .05 304 304 304 305 305 306 306 307 307 308 308 309 309 310 310
262 262 263 263 264
264 264 265 265 266
B
354 355 355 356 356
304 305 305 306 306
307 307 308 308 309
.01 .005 .005 .0025 292 289 292 289 293 290 290 293 294 291 294 291 295 292 295 292 293 295 293 296 293 296 297 294 297 294 295 298 295 298 296 299 299 296 297 300 297 300 301 298 298 301 299 302 299 302 300 303 303 300 304 301 304 301 304 301 302 305 302 305 303 306 303 306 307 304 304 307 308 305 305 308 309 306 306 309 310 307 310 307 311 311 312 312 313 313 314 314 314 315 339 340 340 341 341 342 342 343 343 344
308 308 309 309 310 310 310 311 311 312 336 337 337 337 338 338 339 339 340 340
.002 .001 .001 .0005 285 283 286 283 286 284 287 284 284 287 285 287 285 288 288 286 289 286 287 289 287 290 290 288 288 291 289 291 292 289 290 292 290 293 293 291 294 291 291 294 295 295 295 296 296 297 297 298 298 299 299 300 300 301 301 302 302 303 303 303 304 304 305 305 306 306 307 307 308 308 332 333 333 334 334 335 335 336 336 336
292 292 293 293 294 294 295 295 296 296 297 297 298 298 298 299 299 300 300 301 301 302 302 303 303 304 304 305 305 306 329 330 330
:m 331 332
332 333 333 334
792
Appendix B
Statistical Tables and Graphs TABLE B.27 (cont.): Critical
n
a(2): .50 a(l): .25
.20 .10
.10 .05
346 346 346 347 347
333 333 334 334 335
Values of C for the Sign Test or the Binomial
.05 .025 328 329 329 330 330 331 331 332 332 333 333 334 334 335 335
.02 .01 324 324 324 325 325
.01 .005 .005 .0025 320 317 321 318 321 318 322 319 322 319
326 326 327 327 328 328 329 329 330 330
323 323 324 324 324
319 320 320 321 321
325 325 326 326 327
322 322 323 323 324
.002 .001 .001 .0005 313 311 314 311 314 312 315 312 315 313 316 313 316 313 317 314 317 314 318 315 318 315 319 316 319 316 319 317 320 317
711 712 713 714 715 716 717 718 719 720
348 348 349 349 350
337 338 338 339 339 340 340 341 341 342
721 722 723 724 725
350 351 351 352 352
342 343 343 344 344
335 335 336 336 337 337 338 338 339 339
726 727 728 729 730
353 353 354 354 355
345 345 346 346 347
340 340 341 341 342
336 336 337 337 338
331 331 332 332 333
327 328 328 329 329
324 325 325 326 326
320 321 321 322 322
731 732 733 734 735
355 356 356 357 357
347 348 348 349 349
342 343 343 344 344
338 338 339 339 340
333 334 334 335 335
330 330 331 331 332
327 327 328 328 328
736 737 738 739 740
358 358 359 359 360
350 350 351 351 352
345 345 346 346 347
340 341 341 342 342
335 336 336 337 337
332 333 333 334 334
741 742 743 744 745
360 361 361 362 362
352 353 353 354 354
347 348 348 349 349
343 343 344 344 345
338 338 339 339 340
746 747 748 749 750
363 363 364 364 365
354 355 355 356 356
350 350 351 351 351
345 346 346 347 347
801 802 803 804 805 806 807 808 809 810
390 390 391 391 392
381 382 382 383 383 384 384 385 385 386
376 377 377 378 378 379 379 380 380 381
386 387 387 388 388 389 389 390 390 391
381 382 382 383 383 384 384 384 385 385
811 812 813 814 815 816 817 818 819 820
392 393 393 394 394 395 395 396 396 397 397 398 398 399 399
n
761 762 763 764 765 766 767 768 769 770
a(2): .50 a(l): .25
370 371 371 372 372
771 772 773 774 775
373 373 374 374 375 375 376 376 377 377
318 318 319 319 320
776 777 778 779 780
323 323 324 324 325
320 321 321 321 322
329 329 330 330 331
325 326 326 327 327
334 335 335 336 336
331 332 332 333 333
340 341 341 342 342
337 337 338 338 339
372 372 373 373 374 374 375 375 376 376
367 367 368 368 369
377 377 378 378 379 379 379 380 380 381
.20 .10 362 362 363 363 364 364 365 365 366 366
Test with p = 0 5
.10 .05
.05 .025
.02 .01
357 357 358 358 359
352 353 353 354 354
359 360 360 361 361
347 348 348 349 349 350 350 351 351 352 352 353 353 354 354
.01 .005 .005 .0025 344 341 344 341 345 342 345 342 346 343 346 343 347 344 347 344 348 345 348 345 349 346 349 346 350 347 347 350 351 347
.002 .001 .001 .0005 337 334 337 335 338 335 338 336 339 336 339 337 340 337 340 337 341 338 341 338 342 339 342 339 343 340 343 340 344 341
367 367 368 368 369
362 362 363 363 364
355 355 356 356 357 357 358 358 359 359
378 378 379 379 380
369 370 370 371 371
364 365 365 366 366
360 360 361 361 362
355 355 356 356 357
351 352 352 353 353
348 348 349 349 350
344 344 345 345 346
341 342 342 343 343
781 782 783 784 785
380 381 381 382 382
372 372 373 373 374
367 367 367 368 368
362 363 363 364 364
357 357 358 358 359
354 354 354 355 355
350 351 351 352 352
346 347 347 348 348
344 344 345 345 345
322 323 323 324 324
786 787 788 789 790
383 383 384 384 385
374 375 375 376 376
369 369 370 370 371
365 365 365 366 366
359 360 360 361 361
356 356 357 357 358
353 353 354 354 355
349 349 350 350 351
346 346 347 347 348
327 328 328 329 329
325 325 326 326 327
791 792 793 794 795
385 386 386 386 387
376 377 377 378 378
371 372 372 373 373
367 367 368 368 369
362 362 363 363 364
358 359 359 360 360
355 356 356 356 357
351 352 352 353 353
348 349 349 350 350
334 334 335 335 336
330 330 331 331 332
327 328 328 329 329
796 797 798 799 800
387 388 388 389 389
379 379 380 380 381
374 374 375 375 376
369 370 370 371 371
364 365 365 366 366
361 361 362 362 363
357 358 358 359 359
353 354 354 355 355
351 351 352 352 353
360 360 361 361 362 362 363 363 364 364
356 356 357 357 358
353 353 354 354 355
379 380 380 381 381
377 377 377 378 378
403 403 404 404 405
391 391 392 392 393 393 393 394 394 395
384 384 385 385 385
408 409 409 410 410
396 396 397 397 398 398 399 399 400 400
387 387 388 388 389
355 356 356 357 357
415 415 416 416 417 417 418 418 419 419
401 401 401 402 402
358 359 359 360 360
851 852 853 854 855 856 857 858 859 860
406 406 407 407 408
369 369 370 370 371
363 364 364 365 365 365 366 366 367 367
389 390 390 391 391
386 386 387 387 388
382 382 383 383 384
379 379 380 380 381
371 372 372 373 373 374 374 375 375 376
368 368 369 369 370 370 371 371 372 372
365 365 366 366 366 367 367 368 368 369
361 361 361 362 362 363 363 364 364 365
358 358 359 359 360
861 862 863 864 865
405 406 406 407 407 408 408 409 409 410
395 396 396 397 397 398 398 399 399 400
392 392 393 393 394 394 395 395 396 396
388 389 389 390 390 391 391 392 392 393
384 385 385 386 386
866 867 868 869 870
411 411 412 412 413 413 414 414 415 415
401 401 402 402 403
360 361 361 361 362
420 420 421 421 422 422 423 423 424 424
381 382 382 383 383 384 384 385 385 386
403 404 404 405 405
387 387 388 388 388
r}
Appendix TABLE B.27 (cont.): Critical
II
821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836
837 838 839 840 841 842 843 844 845 846 847 848 849 850 901 902 903 904 905 906 907 908 909 910 911 912 913 914 9\5 916 917 918 919 920 921 922 923 924 925
.20 .10
400 400 401 401 402 402 403 403 404 404 405 405 406 406 407 407 4U8 40R 409 4U9 410 410 411 411 412
391 392 392 393 393 394 394 395 395 396 396 397 397 397 39R
.025 386 381 31';6 382 31';7 382 387 383 388 383 388 384 389 384 31N 385 390 385 390 386 391 3R6 391 387 392 387 392 381\ 393 3RK
398 399 399 400 400
393 394 394 395 395
412 413 413 414 414 439 440 440 441 441 442 442 443 443 444 444 445 445 440 446 447 447 448 44R 449 449 450 450 451 45\
.10 .05
.os
(2):.50 (I):.25
389 389 390 390 391 401 396 391 401 396 392 4()2 397 392 402 397 393 403 398 393 403 398 394 404 39,} 394 404 399 394 405 400 395 405 400 395 430 425 420 431 425 421 431 426 421 432 426 422 432 427 422 433 427 423 433 428 423 434 428 423 434 429 424 435 429 424 435 430 425 436 430 425 436 431 426 437 431 426 437 432 427 438 432 427 438 433 428 439 433 428 439 434 429 440 434 429 440 435 430 441 435 430 441 436 431 442 436 431 442 436 432
Values of C for the Sign Test or the Binomial
376 377 377 378 378
.01 .oos .002 .001 .005 .0025 .001 .0005 373 369 365 362 373 370 366 363 374 370 366 303 374 371 367 364 375 371 367 364
379 379 380 380 381
375 375 376 376 377
381 381 382 382
.02 .01
383
377 378 378 379 37,}
383 384 384 385 385
380 380 381 3R\ 382
386 386 387 387 388 388 389 389 390 390 415 415 416 416 417
382 383 383 384 3R4 385 385 386 386 3R6
411 411 412 412 413 417 413 417 414 418 414 418 41S 41l) 415 419 416 420 416 420 417 421 417 421 41i1 422 422 423 423 424 424 425 425 426 426
418 419 419 419 420 420 421 421 422 422
372 372 373 373 374 374 375 375 375 376 376 377 377 378 378 379 379 31';0 380 381 3111 382 3H2 383 383 407 408 408 409 409 410 410 411 411 412 412 413 413 414 414 415 415 416 416 416 417 417 418 418 419
368 368 369 369 370 370 370 371 37\ 372 372 373 373 374 374
365 365 366 366 367 367 368 368 369 369 369 370 :l70 371 371
375 375 370 376 377 377 378 378 379 379 403 404 404 405 405
372 372 373 373 374
406 406 406 407 407 40i; 408 409 409 410 410 411 411 412 412 413 413 414 414 415
374 375 375 376 376 4(Xl 401 401 402 402 403 403 403 404 404
(2):.50 n (1):.25 871 425 '1',72 425 873 420 874 420 875 427 876 427 877 428 878 428 879 429 880 429 881 882 883 884 K85 886 887 888 R89 890 8l)1 892
~m 894 895 896 illJ7 898 899 <JOO
951 l)52 953 954 955 956 957 958 959 ,}60
405 405 406 406 407 407 408 408 409 409
961 962 963 964 965 966 967 968 '}6'1 970
410 410 411 411 412
971 972 973 974 975
.20 .10
B
Statistical Tables and Graphs
793
Test with p = 0.5
.10 .05 416 410 41() 411 417 411 417 412 418 412 418 413 419 413 419 414 420 414 420 415 429 420 415 43() 421 416 430 421 416 431 422 417 431 422 417 432 423 418 432 423 418 433 424 418 433 424 419 434 425 411.) 434 425 420 435 426 420 435 426 421 436 427 421 436 427 422 437 428 422 437 42R 423 438 429 423 438 429 424 439 430 424 464 455 449 4fl5 455 450 465 456 450 466 456 451 466 457 451 467 457 452 467 458 452 468 458 453 468 459 453 469 4S,} 454 469 460 454 470 460 454 470 461 455 471 461 455 471 462 456 472 462 456 472 463 457 473 463 457 473 4M 45R 473 464 458 474 465 459 474 465 459 475 466 460 475 466 460 476 466 461
.05 .025 406 40() 407 407 408 408 408 409 409 410 410 411 411 412 412 413 413 414 414 415 415 416 416 417 417 418 418 419 419 420 444 445 445 446 446 447 447 448 448 44'1 449 450 450 451 451 452 452 453 453 453 454 454 455 455 456
.02 .01 4(X) 401 401 402 402 403 403 404 404 405 405 40:; 406 406 407 407 408 40R 409 40l) 410 410 411 411 412 412 413 413 414 414 439 439 440 440 441 441 442 442 442 443 443 444 444 445 445 446 446 447 447 448 448 449 449 450 450
.01 .00:; .002 .UOI .(Xl5 .!Xl25 .001 .0005 397 386 393 389 3'1',9 386 397 394 397 394 390 387 31(7 398 395 390 398 395 391 388 399 391 388 395 399 392 389 396 400 392 396 389 400 397 393 390 401 397 393 :l'JO 401 394 391 398 402 398 394 391 39') 392 402 399 403 399 395 392 3l)3 4(~l 396 403 4()4 400 396 393 404 401 397 394 405 397 394 401 405 394 402 397 406 402 398 395 406 403 398 395 407 403 399 396 407 404 399 396 408 404 400 397 408 405 400 397 408 405 401 398 409 401 405 39H 39l) 4()9 4()6 402 39,} 410 402 406 410 407 403 400 435 435 436 436 437 437 438 438 439 439 440 440 441 441 442 442 442 443 443 444 444 445 445 446 446
431 432 432 433 433 434 434 435 435 4~6 436 436 437 437 438 438 439 439 440 440 441 441 442 442 443
427 427 428 428 429 429 43() 430 431 431 432 432 433 433 434 434 434 435 435 436 436 437 437 438 438
424 424 425 425 426 426 427 427 428 428 429 429 429 430 430 431 431 432 432 433 433 434 434 435 435
794
Appendix
B
n
a(2): .50 01(1): .25
452 452 453 453 454
.20 .10 443 443 443 444 444
454 455 455 456 456
445 445 446 446 447
457 457 458 458 459
447 448 448 449 449
459 460 460 461 461
450 450 451 451 452
462 462 463 463 464
452 453 453 454 454
Statistical
Tables and Graphs
TABLE B 27 (cont.): .. Critical Values of C for the Sign Test or the Binomial
926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950
.10 .05 .02 .01 .005 .05 .025 .01 .005 .0025 437 432 427 423 419 437 433 427 423 420 438 433 428 424 420 438 434 428 424 421 439 434 429 425 421 439 435 429 425 422 440 435 430 426 422 440 436 430 426 423 441 436 430 427 423 441 437 431 427 424 424 442 437 431 428 442 438 432 428 425 425 443 438 432 429 443 438 433 429 426 444 439 433 430 426 444 439 434 430 426 445 440 434 430 427 445 440 435 431 427 446 441 435 431 428 446 441 436 432 428 447 442 436 432 429 447 442 437 433 429 448 443 437 433 430 448 443 438 434 430 449 444 438 434 431
.002 .001 .001 .0005 415 412 415 412 416 413 416 413 417 414 417 418 418 419 419
414 415 415 416 416
420 420 421 421 422
417 417 418 418 419
422 423 423 424 424
419 420 420 420 421
425 425 425 426 426
421 422 422 423 423
n
976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000
Appendix Table B.27 was prepared by considering where the probability of a lower critical value, Ca,n, is
Test with p = .10 .05 .02 .05 .025 .01 461 456 451 462 457 451 462 457 452 463 458 452 463 458 453 464 459 453 464 459 454 465 460 454 465 460 455 466 461 455
0 5
476 477 477 478 478
.20 .10 467 467 468 468 469
479 479 480 480 481 481 482 4S2 483 483
469 470 470 471 471 472 472 473 473 474
466 467 467 468 468
461 462 462 463 463
455 456 456 457 457
484 484 485 485 486
474 475 475 476 476 477 477 478 478 479
469 469 470 470 471 471 472 472 473 473
464 464 465 465 466
458 458 459 459 460
.01 .005 .005 .0025 443 447 447 444 448 444 448 445 449 445 446 449 446 450 450 447 447 451 451 447 448 452 448 452 449 453 449 453 453 450 454 450 454 451 455 451 455 452 452 456
466 467 467 468 468
460 461 461 462 462
456 457 457 458 458
a(2): .50 a(1): .25
486 487 487 488 488
453 453 454 454 455
.002 .001 .001 .0005 439 436 439 436 440 437 440 437 441 438 441 438 442 438 ~ 442 439 443 439 .~ 443 440 ~ 444 440 444 441 ~ 444 441 ~ 445 442 ~ 445 442 ~ I 446 443 11 446 443 .f 447 444 ~ 447 444 \. 448 445 .~ 448 445 tl 449 446 .1 449 446 450 447 450 447
binomial probabilities, such as those in Table upper critical values are n - Caft·
< a(2);
Example:
j
,
.I
1
B.26b,
l
.~
:;1
II
CO.Ol,950= 434, with 950 - 434 = 516 as the upper critical value.~
:~
I
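Under that construction, the tabled values can be regenerated directly from the binomial
distribution. Below is a minimal sketch in Python (assuming SciPy is available; the function
name is ours, and the criterion "each tail < α(2)/2" is our reading of the note above, chosen
because it reproduces the printed example):

    from scipy.stats import binom

    def sign_test_critical_value(n, alpha2):
        """Largest C with P(X <= C) < alpha2/2 for X ~ binomial(n, p = 0.5).

        Returns -1 when no such C exists (a dash in Table B.27).
        """
        c = int(binom.ppf(alpha2 / 2.0, n, 0.5))   # start at or just above the boundary
        while c >= 0 and binom.cdf(c, n, 0.5) >= alpha2 / 2.0:
            c -= 1                                 # step down until the tail probability is < alpha(2)/2
        return c

    c = sign_test_critical_value(950, 0.01)
    print(c, 950 - c)   # expected: 434 516, matching the example above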
TABLE B.28: Critical Values for Fisher's Exact Test

[For each combination of n, m1, and m2 (in the portion shown, n = 2 through 30), the table
gives a pair of lower and upper critical values at α = 0.50, 0.20, 0.10, 0.05, 0.02, 0.01,
0.005, 0.002, and 0.001; a dash means that no critical value exists at that significance
level. The tabled pairs were scrambled beyond reliable reconstruction during extraction and
are omitted here.]
TABLE B.29: Critical Values of u for the Runs Test
α(2): 0.50, 0.20, 0.10, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001
α(1): 0.25, 0.10, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001, 0.0005
[Table body omitted: pairs of lower and upper critical values of u, indexed by n1 and n2 from 2 through 30; a dash indicates that no critical value exists at that significance level.]
Appendix Table B.29 was prepared using the procedure described by Brownlee (1965: 225-226) and Swed and Eisenhart (1943).

Example: u0.05(2),24,30 = 20 and 36

The pairs of critical values are consulted as described in Section 25.6. The probability of a u found in the table is less than or equal to its column heading and greater than the next smaller column heading. For example, the two-tailed probability for n1 = 25, n2 = 26, and u = 20 is 0.05 < P ≤ 0.10; the two-tailed probability for n1 = 20, n2 = 30, and u = 15 is 0.002 < P ≤ 0.005; and the one-tailed probability for n1 = 24, n2 = 25, and u = 21 is 0.10 < P ≤ 0.25. For n larger than 30, use the normal approximation of Section 25.6.
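The normal approximation is easily programmed. The following Python sketch assumes the standard large-sample result for the runs test (the number of runs u has mean 2n1n2/N + 1 and variance 2n1n2(2n1n2 - N)/[N²(N - 1)], with N = n1 + n2); Section 25.6 is not reproduced here, so the exact form used in the text (e.g., any continuity correction) may differ.

from math import sqrt
from statistics import NormalDist

def runs_test_normal(u, n1, n2):
    # Large-sample normal approximation for the runs test.
    # Assumes the standard result: u has mean 2*n1*n2/N + 1 and
    # variance 2*n1*n2*(2*n1*n2 - N) / (N**2 * (N - 1)), with N = n1 + n2.
    N = n1 + n2
    mu = 2 * n1 * n2 / N + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - N) / (N ** 2 * (N - 1))
    z = (u - mu) / sqrt(var)
    return 2 * NormalDist().cdf(-abs(z))  # two-tailed P

# n1 = 25, n2 = 26, u = 20: P ~= 0.066, consistent with the tabled
# interval 0.05 < P <= 0.10 quoted above.
print(runs_test_normal(20, 25, 26))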
TABLE B.30: Critical Values of C for the Mean Square Successive Difference Test
α: 0.25, 0.10, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001, 0.0005
[Table body omitted: one-tailed critical values of C for n = 8 through 150.]
Table B.30 was prepared by the method outlined in Young (1941).

Examples: C0.05,60 = 0.209 and C0.01,68 = 0.276

For n greater than shown in the table, the normal approximation (Equation 25.22) may be utilized. This approximation is excellent, especially for α near 0.05, as shown in the following tabulation, which considers the absolute difference between the exact critical value of C and the value of C calculated from the normal approximation. Given in the table are the minimum sample sizes necessary to achieve absolute differences of the specified magnitudes.

Absolute Difference   α: 0.25   0.10   0.05   0.025   0.01   0.005   0.0025   0.001   0.0005
≤ 0.002                   30     35      8     35      70     90      120       -        -
≤ 0.005                   20     20      8     20      45     60       80      100      120
≤ 0.010                   15     10      8     15      30     40       50       60       80
≤ 0.020                    8      8      8      8      20     25       30       40       45
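Equation 25.22 itself is not reproduced in this excerpt; the following Python sketch assumes the common form C_α ≈ z_α √[(n - 2)/((n - 1)(n + 1))], which reproduces the tabled examples above to within about 0.002.

from math import sqrt
from statistics import NormalDist

def c_critical_approx(alpha, n):
    # Approximate one-tailed critical value of C, assuming Equation 25.22
    # has the common form z_alpha * sqrt((n - 2) / ((n - 1) * (n + 1))).
    z = NormalDist().inv_cdf(1 - alpha)  # one-tailed normal deviate
    return z * sqrt((n - 2) / ((n - 1) * (n + 1)))

print(round(c_critical_approx(0.05, 60), 3))  # 0.209 (table: C0.05,60 = 0.209)
print(round(c_critical_approx(0.01, 68), 3))  # 0.278 (table: C0.01,68 = 0.276)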
TABLE B.31: Critical Values, u, for the Runs-Up-and-Down Test
α(2): 0.50, 0.20, 0.10, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001
α(1): 0.25, 0.10, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001, 0.0005
[Table body omitted: pairs of lower and upper critical values of u for n = 4 through 50; a dash indicates that no critical value exists at that significance level.]
Appendix Table B.31 was prepared using the recursion algorithm attributed to André (1883; Bradley, 1968: 281) in order to ascertain the probability of a particular u, after which it was determined what value of u would yield a cumulative tail probability ≤ α. Note: The α represented by each u in the table is less than or equal to that in the column heading and greater than the next smaller column heading. For example, if n = 9 and u = 3 for a two-tailed test, 0.05 < P ≤ 0.10; if n = 15 and u = 7 for a one-tailed test for uniformity, 0.05 < P ≤ 0.10. The normal approximation of Section 25.8 will correctly reject H0 at the indicated one-tailed and two-tailed significance levels for sample sizes as small as these:

α(2): 0.50  0.20  0.10  0.05   0.02  0.01   0.005   0.002  0.001
α(1): 0.25  0.10  0.05  0.025  0.01  0.005  0.0025  0.001  0.0005
n:       5     4     9    11     13    15     19      20     20
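Section 25.8 is not reproduced here; the following Python sketch assumes the standard large-sample result for the runs-up-and-down test, namely that u has mean (2n - 1)/3 and variance (16n - 29)/90.

from math import sqrt
from statistics import NormalDist

def runs_up_down_normal(u, n):
    # Normal approximation for the runs-up-and-down test, assuming the
    # standard result: u has mean (2n - 1)/3 and variance (16n - 29)/90.
    mu = (2 * n - 1) / 3
    sd = sqrt((16 * n - 29) / 90)
    return NormalDist().cdf((u - mu) / sd)  # lower-tail (one-tailed) P

# n = 15, u = 7: P ~= 0.041 without a continuity correction, or ~0.078
# using u + 0.5; the exact table places the one-tailed P in 0.05 < P <= 0.10.
print(runs_up_down_normal(7, 15))
print(runs_up_down_normal(7.5, 15))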
TABLE B.32: Angular Deviation, s, as a Function of Vector Length, r
r: 0.00 through 1.00; the row stub gives r to two decimal places, and the columns, 0 through 9, give the third decimal place of r
[Table body omitted: values of s in degrees, from s = 81.0285 at r = 0.00 down to s = 0.0000 at r = 1.00.]
Values of s are given in degrees; s is computed by Equation 26.20.
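Equation 26.20 is not reproduced in this excerpt, but the tabled values are consistent with the standard definition s = √[2(1 - r)] radians, expressed in degrees; a minimal Python sketch under that assumption:

from math import sqrt, degrees

def angular_deviation(r):
    # Assumed standard form of Equation 26.20: s = sqrt(2 * (1 - r)) radians,
    # converted to degrees.
    return degrees(sqrt(2 * (1 - r)))

print(round(angular_deviation(0.00), 4))  # 81.0285, as tabled for r = 0.00
print(round(angular_deviation(0.50), 4))  # 57.2958, as tabled for r = 0.50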
TABLE B.33: Circular Standard Deviation, s0, as a Function of Vector Length, r
r: 0.00 through 1.00; the row stub gives r to two decimal places, and the columns, 0 through 9, give the third decimal place of r
[Table body omitted: values of s0 in degrees, from s0 = ∞ at r = 0.00 down to s0 = 0.0000 at r = 1.00.]
Values of s0 are given in degrees; s0 is computed by Equation 26.21.
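Equation 26.21 is likewise not reproduced here, but the tabled values are consistent with the standard definition s0 = √(-2 ln r) radians, expressed in degrees; a minimal Python sketch under that assumption:

from math import sqrt, log, degrees

def circular_standard_deviation(r):
    # Assumed standard form of Equation 26.21: s0 = sqrt(-2 * ln(r)) radians,
    # converted to degrees; s0 is infinite at r = 0.
    return degrees(sqrt(-2.0 * log(r)))

print(round(circular_standard_deviation(0.50), 4))  # 67.4606, as tabled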
TABLE B.34: Critical Values of z for Rayleigh's Test for Circular Uniformity
α: 0.50, 0.20, 0.10, 0.05, 0.02, 0.01, 0.005, 0.002, 0.001
[Table body omitted: critical values of z for n = 6 through 500, ending with the limiting row]
n = ∞: 0.6931, 1.6094, 2.3026, 2.9957, 3.9120, 4.6052, 5.2983, 6.2146, 6.9078
The values in Appendix Table B.34 were computed using Durand and Greenwood's Equation 6 (1958). This procedure was found to give slightly more accurate results than Durand and Greenwood's Equation 4, distinctly better results than the Pearson curve approximation, and very much better results than the chi-square approximation (Stephens, 1969a). By examining the exact critical values of Greenwood and Durand (1955), we see that the preceding tabled values for α = 0.05 are accurate to the third decimal place for n as small as 8, and for α = 0.01 for n as small as 10. For n as small as 6, none of the tabled values for α = 0.05 or 0.01 has a relative error greater than 0.3%.

Examples: z0.05,80 = 2.986 and z0.01,32 = 4.510

As n increases, the critical values become closer to χ²α,2/2.
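In use, Rayleigh's z is computed from a sample of angles as z = n r², where r is the length of the mean vector, and compared with the tabled critical value for the sample size; a minimal Python sketch (the function name and example data are illustrative only):

from math import sin, cos, radians

def rayleigh_z(angles_deg):
    # z = n * r**2, where r is the length of the mean vector of the sample.
    n = len(angles_deg)
    x = sum(cos(radians(a)) for a in angles_deg) / n
    y = sum(sin(radians(a)) for a in angles_deg) / n
    return n * (x ** 2 + y ** 2)

# Hypothetical sample of 8 concentrated bearings (degrees); a z this large
# exceeds all the tabled critical values for n = 8, so uniformity is rejected.
print(rayleigh_z([10, 20, 15, 25, 18, 12, 22, 17]))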
TABLE B.35: Critical Values of u for the V Test of Circular Uniformity
α: 0.25, 0.10, 0.05, 0.025, 0.01, 0.005, 0.0025, 0.001, 0.0005
[Table body omitted: critical values of u, indexed by n beginning at n = 8.]
TABLE B.40 (cont.): Common Logarithms of Factorials
X: the row stub gives X in tens (350 through 490); the columns, 0 through 9, give the units digit of X
[Table body omitted: values of log X! for X = 350 through 499.]
If log X! is needed for X > 499, consult Lloyd, Zar, and Karr (1968) or Pearson and Hartley (1966: Table 51), or note that "Stirling's approximation" is excellent, being accurate to about two decimal places for n as small as 10:

log X! = (X + 0.5) log X - 0.434294X + 0.39909;

and this approximation is even better, having half the error:

log X! = (X + 0.5) log(X + 0.5) - 0.434294(X + 0.5) + 0.399090

(Kemp, 1989; Tweddle, 1984). Walker (1934) notes that Abraham de Moivre was aware of this approximation before Stirling. Thus we have another example of "Stigler's Law of Eponymy" (see the second footnote in Chapter 6).
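Both approximations are easily programmed; the following Python sketch implements them alongside the exact value obtained from the log-gamma function, for comparison:

from math import lgamma, log, log10

def log10_factorial_stirling(x):
    # First approximation from the note above.
    return (x + 0.5) * log10(x) - 0.434294 * x + 0.39909

def log10_factorial_improved(x):
    # Second approximation (Kemp, 1989; Tweddle, 1984), about half the error.
    return (x + 0.5) * log10(x + 0.5) - 0.434294 * (x + 0.5) + 0.399090

def log10_factorial_exact(x):
    # Exact value via the log-gamma function: log10(x!) = lgamma(x + 1)/ln(10).
    return lgamma(x + 1) / log(10)

for x in (10, 100, 499):
    print(x, log10_factorial_stirling(x),
          log10_factorial_improved(x),
          log10_factorial_exact(x))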
856
Appendix
B
Statistical Tables and Graphs TABLE B.41: Ten Thousand
Random Digits
00-04
05-09
10-14
15-19
20-24
25-29
30-34
35-39
40-44
45-49
00 01 02 03 04
22808 49305 81934 10840 99555
04391 36965 19920 13508 73289
45529 44849 73316 48120 59605
53968 64987 69243 22467 37105
57136 59501 69605 54505 24621
98228 35141 17022 70536 44100
85485 50159 53264 91206 72832
13801 57369 83417 81038 12268
68194 76913 55193 22418 97089
56382 75739 92929 34800 68112
05 06 07 08 09
32677 09401 73424 37075 02060
45709 75407 31711 81378 37158
62337 27704 65519 59472 55244
35132 11569 74869 71858 44812
45128 52842 56744 86903 45369
96761 83543 40864 66860 78939
08745 44750 75315 03757 08048
53388 03177 89866 32723 28036
98353 50511 96563 54273 40946
46724 15301 75142 45477 03898
10 11 12 13 14
94719 70234 07972 58521 32580
43565 48272 71752 64882 45202
40028 59621 92745 26993 21148
79866 88778 86465 48104 09684
43137 16536 01845 61307 39411
28063 36505 27416 73933 04892
52513 41724 50519 17214 02055
66405 24776 48458 44827 75276
71511 63971 68460 88306 51831
66135 01685 63113 78177 85686
15 16 17 18 19
88796 31525 02747 46651 43598
30829 82746 35989 28987 14436
35009 78935 70387 20625 33521
22695 82980 89571 61347 55637
23694 61236 34570 63981 39789
11220 28940 17002 41085 26560
71006 96341 79223 67412 66404
26720 13790 96817 29053 71802
39476 66247 31681 00724 18763
60538 33839 15207 14841 80560
20 21 22 23 24
30596 56198 68266 31107 37555
92319 64370 67544 28597 05069
11474 85771 06464 65102 38680
64546 62633 84956 75599 87274
60030 78240 18431 17496 55152
73795 05766 04015 87590 21792
60809 32419 89049 68848 77219
24016 35769 15098 33021 48732
29166 14057 12018 69855 03377
36059 80674 89338 54015 01160
25 26 27 28 29
90463 99189 37631 73829 15634
27249 88731 74016 21651 89428
43845 93531 89072 50141 47090
94391 52638 59598 76142 12094
12145 54989 55356 72303 42134
36882 04237 27346 06694 62381
48906 32978 80856 61697 87236
52336 59902 80875 76662 90118
00780 05463 52850 23745 53463
74407 09245 36548 96282 46969
30 31 32 33 34
00571 83374 78666 47890 56238
45172 10184 85645 88197 13559
78532 56384 13181 21368 79344
63863 27050 08700 65254 83198
98597 77700 08289 35917 94642
15742 13875 62956 54035 35165
41967 96607 64439 83028 40188
11821 76479 39150 84636 21456
91389 80535 95690 38186 67024
07476 17454 18555 50581 62771
35 36 37 38 39
36369 42934 09010 83897 82206
32234 34578 15226 90073 01230
38129 28968 43474 72941 93252
59963 74028 30174 85613 89045
99237 42164 26727 85569 25141
72648 56647 39317 24183 91943
66504 76806 48508 08247 75531
99065 61023 55438 15946 87420
61161 33099 85336 02957 99012
16186 48293 40762 68504 80751
40 41 42 43 44
14175 58968 62601 97030 89074
32992 88367 04595 71165 31587
49046 70927 76926 47032 21360
41272 74765 11007 85021 41673
94040 18635 67631 65554 71192
44929 85122 64641 66774 85795
98531 27722 07994 21560 82757
27712 95388 04639 04121 52928
05106 61523 39314 57297 62586
35242 91745 83126 85415 02179
45 46 47 48 49
07806 91540 99279 63224 98361
81312 86466 27334 05074 97513
81215 13229 33804 83941 27529
99858 76624 77988 25034 66419
26762 44092 93592 43516 35328
28993 96604 90708 22840 19738
74951 08590 56780 35230 82366
64680 89705 70097 66048 38573
50934 03424 39907 80754 50967
32011 48033 51006 46302 72754
TABLE B.41 (cont.): Ten Thousand Random Digits

Columns:  00-04  05-09  10-14  15-19  20-24  25-29  30-34  35-39  40-44  45-49
50 51 52 53 54
27791 33147 67243 78176 70199
82504 46058 10454 70368 70547
33523 92388 40269 95523 94431
27623 10150 44324 09134 45423
16597 63224 46013 31178 48695
32089 26003 00061 33857 01370
81596 56427 21622 26171 68065
78429 29945 68213 07063 61982
14111 44546 47749 41984 20200
68245 50233 76398 99310 27066
55 56 57 58 59
19840 32970 43233 08514 28595
01143 28267 53872 23921 51196
18606 17695 68520 16685 96108
07622 20571 70013 89184 84384
77282 50227 31395 71512 80359
68422 69447 60361 82239 02346
70767 45535 39034 72947 60581
33026 16845 59444 69523 01488
15135 68283 17066 75618 63177
91212 15919 07418 79826 47496
60 61 62 63 64
83334 66112 25245 21861 74506
81552 95787 14749 22155 40569
88223 84997 30653 41576 90770
29934 91207 42355 15238 40812
68663 67576 88625 92294 57730
23726 27496 37412 50643 84150
18429 01603 87384 69848 91500
84855 22395 09392 48020 53850
26897 41546 11273 19785 52104
94782 68178 28116 41518 37988
65 66 67 68 69
23271 08545 14236 55270 02301
39549 16021 80869 49583 05524
33042 64715 90798 86467 91801
10661 08275 85659 40633 23647
37312 50987 10079 27952 51330
50914 67327 28535 27187 35677
73027 11431 35938 35058 05972
21010 31492 10710 66628 90729
76788 86970 67046 94372 26650
64037 47335 74021 75665 81684
70 71 72 73 74
72843 49248 62598 27510 84167
03767 43346 99092 69457 66640
62590 29503 87806 98616 69100
92077 22494 42727 62172 22944
91552 08051 30659 07056 19833
76853 09035 10118 61015 23961
45812 75802 83000 22159 80534
15503 63967 96198 65590 37418
93138 74257 47155 51082 42284
87788 00046 00361 34912 12951
75 76 77 78 79
14722 46696 13938 48778 00571
58488 05477 09867 56434 71281
54999 32442 28949 42495 01563
55244 18738 94761 07050 66448
03301 43021 38419 35250 94560
37344 72933 38695 09660 55920
01053 14995 90165 56192 31580
79305 30408 82841 34793 26640
94771 64043 75399 36146 91262
95215 67834 09932 96806 30863
80 81 82 83 84
96050 30870 59153 78283 12175
57641 81575 29135 70379 95800
21798 14019 00712 54969 41106
14917 07831 73025 05821 93962
21536 81840 14263 26485 06245
15053 25506 17253 28990 00883
33566 29358 95662 40207 65337
51177 88668 75535 00434 75506
91786 42742 26170 38863 66294
12610 62048 95240 61892 62241
85 86 87 88 89
14192 69060 46154 93419 13201
39242 38669 11705 54353 04017
17961 00849 29355 41269 68889
29448 24991 71523 07014 81388
84078 84252 21377 28352 60829
14545 41611 36745 77594 46231
39417 62773 00766 57293 46161
83649 63024 21549 59219 01360
26495 57079 51796 26098 25839
41672 59283 81340 63041 52380
90 91 92 93 94
62264 58030 81242 16372 54191
99963 30054 26739 70531 04574
98226 27479 92304 92036 58634
29972 70354 81425 54496 91370
95169 12351 29052 50521 40041
07546 33761 37708 83872 77649
01574 94357 49370 30064 42030
94986 81081 46749 67555 42547
06123 74418 59613 40354 47593
52804 74297 50749 23671 07435
95 96 97 98 99
15933 21518 34524 46557 31929
92602 77770 64627 67780 13996
19496 53826 92997 59432 05126
18703 97114 21198 23250 83561
63380 82062 14976 63352 03244
58017 34592 07071 43890 33635
14665 87400 91566 07109 26952
88867 64938 44335 07911 01638
84807 75540 83237 85956 22758
44672 54751 24335 62699 26393
TABLE B.41 (cont.): Ten Thousand Random Digits

Columns:  50-54  55-59  60-64  65-69  70-74  75-79  80-84  85-89  90-94  95-99
00 01 02 03 04
53330 96990 30385 75252 52615
26487 62825 16588 66905 66504
85005 97110 63609 60536 78496
06384 73006 09132 13408 90443
13822 32661 53081 25158 84414
83736 63408 14478 35825 31981
95876 03893 50813 10447 88768
71355 10333 22887 47375 49629
31226 41902 03746 89249 15174
56063 69175 10289 91238 99795
05 06 07 08 09
39992 51788 88569 14513 50257
51082 87155 35645 34794 53477
74547 13272 50602 44976 24546
31022 92461 94043 71244 01377
71980 06466 35316 60548 20292
40900 25392 66344 03041 85097
84729 22330 78064 03300 00660
34286 17336 89651 46389 39561
96944 42528 89025 25340 62367
49502 78628 12722 23804 61424
10 11 12 13 14
35170 22225 90103 68240 01589
69025 83437 12542 89649 18335
46214 43912 97828 85705 24024
27085 30337 85859 18937 39498
83416 75784 85859 30114 82052
48597 77689 64101 89827 07868
19494 60425 00924 89460 49486
49380 85588 89012 01998 25155
28469 93438 17889 81745 61730
77549 61343 01154 31281 08946
15 16 17 18 19
36375 11237 48667 99286 44651
61694 60921 68353 42806 48349
90654 51162 40567 02956 13003
16475 74153 79819 73762 39656
92703 94774 48551 04419 99757
59561 84150 26789 21676 74964
45517 39274 07281 67533 00141
90922 10089 14669 50553 21387
93357 45020 00576 21115 66777
00207 09624 17435 26742 68533
20 21 22 23 24
83251 41551 68990 63393 93317
70164 54630 51280 76820 87564
05732 88759 51368 33106 32371
66842 10085 73661 23322 04190
77717 48806 21764 16783 27608
25305 08724 71552 35630 40658
36218 50685 69654 50938 11517
85600 95638 17776 90047 19646
23736 20829 51935 97577 82335
06629 37264 53169 27699 60088
25 26 27 28 29
48546 31435 56405 70102 92746
41090 57566 29392 50882 32004
69890 99741 76998 85960 52242
58014 77250 66849 85955 94763
04093 43165 29175 03828 32955
39286 31150 11641 69417 39848
12253 20735 85284 55854 09724
55859 57406 89978 63173 30029
83853 85891 73169 60485 45196
15023 04806 62140 00327 67606
30 31 32 33 34
67737 35606 64836 86319 90632
34389 76646 28649 92367 32314
57920 14813 45759 37873 24446
47081 51114 45788 48993 60301
60714 52492 43183 71443 31376
04935 46778 25275 22768 13575
48278 08156 25300 69124 99663
90687 22372 21548 65611 81929
99290 59999 33941 79267 39343
18554 43938 66314 49709 17648
35 36 37 38 39
83752 56755 14100 69227 77718
51966 21142 28857 24872 56967
43895 86355 60648 48057 36560
03129 33569 86304 29318 87155
37539 63096 97397 74385 26021
72989 66780 97210 02097 70903
52393 97539 74842 63266 32086
45542 75150 87483 26950 11722
70344 25718 51558 73173 32053
96712 33724 52883 53025 63723
40 41 42 43 44
09550 12404 07985 58124 46173
38799 42453 27418 53830 77223
88929 88609 92734 08705 75661
80877 89148 20916 57691
87779 85892 58969 46048 24055
99905 96045 99011 30342 27568
17122 10310 73815 86530 41227
25985 45021 49705 72608 58542
16866 62023 68076 93074 73196
76005 70061 69605 80937 44886
45 46 47 48 49
13476 82472 55370 89274 55242
72301 98647 63433 74795 74511
85793 17053 80653 82231 62992
80516 94591 30739 69384 17981
59479 36790 68821 53605 17323
66985 42275 46854 67860 79325
24801 51154 41939 01309 35238
84009 77765 38962 27273 21393
71317 01115 20703 76316 13114
87321 09331 69424 54253 70084
Many computer programs can generate random numbers (actually pseudorandom numbers, for the sequence of digits will repeat, though commonly only after about two billion uses of the random-number generator; McCullough, 1998).
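As a modern alternative to reading digits from the printed table, such pseudorandom digits can be generated directly. This is a minimal sketch using Python's standard random module (the seed and the five-digit grouping merely mimic the layout of Table B.41):

    import random

    rng = random.Random(2024)  # fixed seed gives a reproducible "table"

    def random_digit_rows(n_rows=5, groups_per_row=10):
        # Each group is five pseudorandom digits, as in Table B.41.
        for _ in range(n_rows):
            row = [''.join(str(rng.randrange(10)) for _ in range(5))
                   for _ in range(groups_per_row)]
            print(' '.join(row))

    random_digit_rows()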
FIGURE B.1a: Power and sample size in analysis of variance; ν1 = 1.
FIGURE B.1b: Power and sample size in analysis of variance; ν1 = 2.

[Each chart plots power (from about 0.10 up to 0.99) against the noncentrality parameter φ, with one family of curves for α = 0.05 and one for α = 0.01, and individual curves for ν2 = 6, 7, 8, 9, 10, 12, 15, 20, 30, 60, and ∞.]
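Where the printed charts are unavailable, the power they display can be computed from the noncentral F distribution. The sketch below assumes SciPy is available and uses the usual chart convention that the noncentrality parameter is λ = (ν1 + 1)φ² (k = ν1 + 1 groups); the function name is ours:

    from scipy import stats

    def anova_power(phi, nu1, nu2, alpha=0.05):
        # lambda = (nu1 + 1) * phi^2 under the charts' definition of phi
        lam = (nu1 + 1) * phi ** 2
        f_crit = stats.f.ppf(1 - alpha, nu1, nu2)
        # power = P(noncentral F exceeds the alpha-level critical F)
        return stats.ncf.sf(f_crit, nu1, nu2, lam)

    # Exercise 10.2: k = 5 groups, nu1 = 4, nu2 = 55, phi = 1.77
    print(round(anova_power(1.77, 4, 55), 2))  # about 0.88, as read from Figure B.1d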
…; 10.3c: Minimum detectable difference, δ; 10.4: Kruskal-Wallis H and Hc; 10.6: Bartlett's B; 11.1: Tukey-test studentized range, q; 11.3: Dunnett's multiple-comparison statistic, q′; 11.4: Scheffé's multiple-contrast statistic, S; 11.5: Dunn's multiple-comparison statistic, Q; 12.7: Friedman's χr² and (χr²)c; 16.2: Wilks' Λ; Pillai's trace, V; Lawley-Hotelling trace, U; Roy's maximum root, θ; 16.2c: Hotelling's T²; 17.3a: Coefficient of determination, r²; Chapter 19: All correlation coefficients (e.g., r), except those in Sections 19.11a,† 19.11b, 19.12,† and their standard errors (e.g., sr); also, all Fisher z transformations of correlation coefficients and standard errors of z; 20.1: Element of inverse correlation matrix, dik; 20.3: Multiple coefficient of determination, R²; adjusted multiple coefficient of determination, Ra²; multiple correlation coefficient, R; 20.5: Standardized partial regression coefficient, b′i; standard error of bi, sbi; 20.16: Kendall coefficient of concordance, W, Wc, and ΣT; 25.7: Mean square successive difference statistic, C; 25.8: Nonparametric runs-test statistic, u. In general, nominal-scale variables (most of Chapters 22-25) and circular variables (Chapters 26-27) should not be submitted to data coding.

* For regression through the origin (Section 17.9), both AX and AY must be zero; that is, coding by an addition constant may not be used.
† In these cases, coding may only be performed with A = 0.
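The coding rules above are easy to demonstrate numerically: a statistic listed as codable, such as r, is unchanged by linear coding X′ = MX + A, whereas regression through the origin is broken by an additive constant. A minimal sketch with hypothetical data (NumPy assumed available):

    import numpy as np

    x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])   # hypothetical measurements
    y = np.array([1.1, 2.3, 2.8, 4.2, 5.1])

    r_raw = np.corrcoef(x, y)[0, 1]
    x_coded = 10.0 * x + 3.0                   # coding with M = 10, A = 3
    r_coded = np.corrcoef(x_coded, y)[0, 1]
    print(r_raw, r_coded)                      # identical: r is unaffected by coding

    # For regression through the origin, the slope b = sum(xy)/sum(x^2)
    # survives multiplicative coding but not an additive constant:
    b = (x * y).sum() / (x * x).sum()
    b_shift = ((x + 3.0) * y).sum() / ((x + 3.0) ** 2).sum()
    print(b, b_shift)                          # different fits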
APPENDIX D

Analysis-of-Variance Hypothesis Testing

D.1 DETERMINATION OF APPROPRIATE Fs AND DEGREES OF FREEDOM
D.2 TWO-FACTOR ANALYSIS OF VARIANCE
D.3 THREE-FACTOR ANALYSIS OF VARIANCE
D.4 NESTED ANALYSIS OF VARIANCE

When an analysis-of-variance (ANOVA) experimental design has more than one factor, the appropriate F values (and degrees of freedom) depend upon which of the factors are fixed and which are random effects (see Section 12.1), and which factors (if any) are nested within which (see Chapter 15). Shown below in Section D.1 is a procedure that enables us to determine the appropriate Fs and DFs for a given experimental design. It is a simplification of the procedures outlined by Bennett and Franklin (1954).

Answers to Exercises

… < 27.587, so do not reject H0; 0.05 < P < 0.10.
Chapter 8
8.1. H0: μ1 = μ2, HA: μ1 ≠ μ2; n1 = 7, SS1 = 108.6171 (mg/100 ml)², X̄1 = 224.24 mg/100 ml, ν1 = 6; n2 = 6, SS2 = 74.7533 (mg/100 ml)², X̄2 = 225.67 mg/100 ml, ν2 = 5; sp² = 16.6700 (mg/100 ml)², s(X̄1−X̄2) = 2.27 mg/100 ml; t = −0.630, t0.05(2),11 = 2.201; therefore, do not reject H0; P > 0.50 [P = 0.54].
8.2. H0: μ1 ≥ μ2, HA: μ1 < μ2; n1 = 7, SS1 = 98.86 mm², ν1 = 6, X̄1 = 117.9 mm; n2 = 8, SS2 = 62.88 mm², ν2 = 7, X̄2 = 118.1 mm; sp² = 12.44 mm², s(X̄1−X̄2) = 1.82 mm; t = −0.11, t0.05(1),13 = 1.771; therefore, do not reject H0; P > 0.25 [P = 0.46].
8.3. H0: μ1 ≥ μ2, HA: μ1 < μ2; X̄1 = 4.6 kg, s1² = 11.02 kg², n1 = 18, ν1 = 17; X̄2 = 6.0 kg, s2² = 4.35 kg², n2 = 26, ν2 = 25; s(X̄1−X̄2) = 0.88 kg; t = −1.59, t0.05(1),42 = 1.682; therefore, do not reject H0; 0.05 < P < 0.10 [P = 0.060].
8.4. H0: μ2 − μ1 ≤ 10 g, HA: μ2 − μ1 > 10 g; X̄1 = 334.6 g, SS1 = 364.34 g², n1 = 19, ν1 = 18; X̄2 = 349.8 g, SS2 = 286.78 g², n2 = 24, ν2 = 23; sp² = 15.88 g², s(X̄1−X̄2) = 1.22 g; t = 4.26, t0.05(1),41 = 1.683; therefore, reject H0 and conclude that μ2 is at least 10 g greater than μ1; P < 0.0005 [P = 0.00015].
8.5. (a) H0 is not rejected; X̄p = 224.90 mg/100 ml; t0.05(2),11 = 2.201; sp² = 16.6700 (mg/100 ml)²; 95% confidence interval = 224.90 ± √(16.6700/13) = 224.90 ± 1.13; L1 = 223.77 mg/100 ml, L2 = 226.03 mg/100 ml. (b) X̄1 − X̄2 = −3.43 mg/100 ml; s² = 6.6601 (mg/100 ml)²; 95% prediction interval = −3.43 ± 2.201√6.6601 = −3.43 ± 5.68; L1 = −9.29 mg/100 ml, L2 = 2.25 mg/100 ml.
8.6. sp² = (244.66 + 289.18)/(13 + 13) = 20.53 (km/hr)²; δ = 2.0 km/hr. If we guess n = 50, then ν = 2(50 − 1) = 98, t0.05(2),98 = 1.984, and n = 40.4. Then, guess n = 41; ν = 80, t0.05(2),80 = 1.990, and n = 40.6. So, the desired n = 41.
8.7. (a) If we guess n = 25, then ν = 2(24) = 48, t0.05(2),48 = 2.011, t0.10(1),48 = 1.299, and n = 18.0. Then, guess n = 18; ν = 34, t0.05(2),34 = 2.032, t0.10(1),34 = 1.307, and n = 18.3. So, the desired sample size is n = 19. (b) n = 20.95, ν = 40, t0.05(2),40 = 2.021, t0.10(1),40 = 1.303, and δ = 4.65 km/hr. (c) n = 50, ν = 98, t0.05(2),98 = 1.984, and tβ(1),98 = 0.223; β > 0.25, so power < 0.75 (or, by the normal approximation, β = 0.41, so power = 0.59).
8.8. H0: σ1² = σ2², HA: σ1² ≠ σ2²; n1 = 7, ν1 = 6, s1² = 18.1029 (mg/100 ml)²; n2 = 6, ν2 = 5, s2² = 14.9507 (mg/100 ml)²; F = 1.21, F0.05(2),6,5 = 6.98; do not reject H0; P > 0.50 [P = 0.85].
8.9. n1 = 7, ν1 = 6, s1² = 16.48 mm²; n2 = 8, ν2 = 7, s2² = 8.98 mm²; F = 1.83, F0.05(2),6,7 = 3.87; do not reject H0; 0.10 < P < 0.25 [P = 0.22].
8.10. n1 = 21, ν1 = 20, s1² = 38.71 g²; n2 = 20, ν2 = 19, s2² = 21.35 g²; s1²/s2² = 1.81. (a) F0.05(2),20,19 = 2.51, F0.05(2),19,20 = 2.48; L1 = 0.72 g², L2 = 4.49 g². (b) Z0.05(1) = 1.6449, Z0.10(1) = 1.2816, n = 26.3 (so a sample of at least 27 should be taken from each population). (c) Zβ(1) = 0.87, β(1) = 0.19, power = 0.81.
8.11. H0: σ1/μ1 = σ2/μ2, HA: σ1/μ1 ≠ σ2/μ2; s1 = 3.82 cm, V1 = 0.356; s2 = 2.91 cm, V2 = 0.203; Vp = 0.285; Z = 2.533, Z0.05(2) = 1.960; reject H0; 0.01 < P < 0.02 [P = 0.013].
8.12. H0: Male and female turtles have the same serum cholesterol concentrations; HA: Male and female turtles do not have the same serum cholesterol concentrations. Male ranks: 2, 1, 11, 10, 4, 7, 9; female ranks: 5, 3, 12, 8, 6, 13. R1 = 44, n1 = 7; R2 = 47, n2 = 6; U = 26; U′ = (7)(6) − 26 = 16; U0.05(2),7,6 = U0.05(2),6,7 = 36; therefore, do not reject H0; P > 0.20 [P = 0.53].
8.13. H0: Northern birds do not have shorter wings than southern birds; HA: Northern birds have shorter wings than southern birds. Northern ranks: 11.5, 1, 15, 8.5, 5, 2.5, 10; southern ranks: 5, 5, 8.5, 14, 11.5, 7, 13, 2.5. R1 = 53.5, n1 = 7, n2 = 8; U = 30.5; U′ = 25.5; U0.05(1),7,8 = 43; therefore, do not reject H0; P > 0.10 [P ≈ 0.41].
8.14. H0: Intersex cells have 1.5 times the volume of normal cells; HA: Intersex cells do not have 1.5 times the volume of normal cells. 1.5X for the normal cells: 372, 354, 403.5, 381, 373.5, 376.5, 390, 367.5, 358.5, 382.5, with ranks 4, 1, 16, 10, 5, 7, 12, 3, 2, 11; intersex cells: 380, 391, 377, 392, 398, 374, with ranks 9, 13, 8, 14, 15, 6. R1 = 71, n1 = 10, n2 = 6; U = 44, U′ = 16; U0.05(2),10,6 = 49; therefore, do not reject H0; 0.10 < P < 0.20.
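The pooled-variance t computations above are easy to cross-check. A minimal sketch (Python, SciPy assumed available; variable names are ours) reproduces the t and two-tailed P of Exercise 8.1 from the summary statistics alone:

    import math
    from scipy import stats

    # Exercise 8.1 summary statistics
    n1, ss1, mean1 = 7, 108.6171, 224.24
    n2, ss2, mean2 = 6, 74.7533, 225.67
    nu = (n1 - 1) + (n2 - 1)                 # 11
    sp2 = (ss1 + ss2) / nu                   # pooled variance, 16.6700
    se = math.sqrt(sp2 / n1 + sp2 / n2)      # 2.27 mg/100 ml
    t = (mean1 - mean2) / se                 # -0.630
    p = 2 * stats.t.sf(abs(t), nu)           # two-tailed P, about 0.54
    print(round(t, 3), round(p, 2))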
Chapter 9
9.1. H0: μd = 0, HA: μd ≠ 0; d̄ = −2.09 Mg/m³, s(d̄) = 1.29 Mg/m³. (a) t = −1.62, n = 11, ν = 10, t0.05(2),10 = 2.228; therefore, do not reject H0; 0.10 < P < 0.20 [P = 0.14]. (b) 95% confidence interval for μd = −2.09 ± (2.228)(1.29) = −2.09 ± 2.87; L1 = −4.96 Mg/m³, L2 = 0.78 Mg/m³.
9.2. The differences, di: −4, −2, −5, 6, −5, 1, −7, −4, −7, 1, 3; their signed ranks: −5.5, −3, −7.5, 9, −7.5, 1.5, −10.5, −5.5, −10.5, 1.5, 4. T = 9 + 1.5 + 1.5 + 4 = 16; T0.05(2),11 = 10; since T is not ≤ 10, do not reject H0; 0.10 < P < 0.20.
9.3. s1² = 285.21 (μg/m³)², s2² = 270.36 (μg/m³)²; F = 1.055; r = 0.9674; t = 0.317; t0.05(2),9 = 2.262; do not reject H0: σ1² = σ2²; P > 0.50 [P = 0.76].
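The signed-rank T of Exercise 9.2 can be reproduced directly from the differences. A minimal sketch in plain Python (mean ranks are assigned to tied absolute differences, as in the hand calculation):

    # Signed-rank T computed from the differences of Exercise 9.2
    d = [-4, -2, -5, 6, -5, 1, -7, -4, -7, 1, 3]
    absd = sorted(abs(v) for v in d)
    rank_of = {}
    i = 0
    while i < len(absd):
        j = i
        while j < len(absd) and absd[j] == absd[i]:
            j += 1
        rank_of[absd[i]] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    t_plus = sum(rank_of[abs(v)] for v in d if v > 0)
    print(t_plus)                            # 16, matching T in the answer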
Chapter 10
10.1. H0: μ1 = μ2 = μ3 = μ4; HA: The mean food consumption is not the same for all four months; F = 0.7688/0.0348 = 22.1; F0.05(1),3,18 = 3.16; reject H0; P < 0.0005 [P = 0.0000029].
10.2. k = 5, ν1 = 4, n = 12, ν2 = 55, σ² = 1.54 (°C)², δ = 2.0°C; φ = 1.77; from Appendix Figure B.1d we find that the power is about 0.88.
10.3. n = 16, for which ν2 = 75 and φ = 2.04. (The power is a little greater than 0.95; for n = 15 the power is about 0.94.)
10.4. ν2 = 45, power = 0.95, φ = 2.05; the minimum detectable difference is about 2.5°C.
10.5. H0: The amount of food consumed is the same during all four months; HA: The amount of food consumed is not the same during all four months; n1 = 5, n2 = 6, n3 = 6, n4 = 5; R1 = 69.5, R2 = 23.5, R3 = 61.5, R4 = 98.5; N = 22; H = 17.08; χ²0.05,3 = 7.815; reject H0; P ≪ 0.001. Hc (i.e., H corrected for ties) would be obtained with Σt = 120, C = 0.9887, Hc = 17.28. F = 27.9, F0.05(1),3,17 = 3.20; reject H0; P ≪ 0.0005 [P = 0.00000086].
10.6. H0: σ1² = σ2² = σ3²; HA: The three population variances are not all equal; B = 5.94517, C = 1.0889, Bc = B/C = 5.460; χ²0.05,2 = 5.991; do not reject H0; 0.05 < P < 0.10 [P = 0.065].
10.7. H0: σ1/μ1 = σ2/μ2 = σ3/μ3 = σ4/μ4; s1 = 0.699, V1 = 0.329; s2 = 0.528, V2 = 0.302; s3 = 0.377, V3 = 0.279; s4 = 0.451, V4 = 0.324; Vp = 0.304; χ² = 1.320, χ²0.05,3 = 7.815; do not reject H0; 0.50 < P < 0.75 [P = 0.72].
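The Kruskal-Wallis H of Exercise 10.5 follows directly from the rank sums. A minimal sketch in plain Python (the tie-correction factor C is taken from the answer; small discrepancies with the printed H reflect rounding of the rank sums):

    # Kruskal-Wallis H from the rank sums of Exercise 10.5
    R = [69.5, 23.5, 61.5, 98.5]
    n = [5, 6, 6, 5]
    N = sum(n)
    H = 12 / (N * (N + 1)) * sum(r * r / m for r, m in zip(R, n)) - 3 * (N + 1)
    C = 0.9887
    # about 17.1 and 17.3; the answer, from unrounded sums, gives 17.08 and 17.28
    print(round(H, 2), round(H / C, 2))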
Chapter 11
11.1. (a, b) Ranked sample means: 14.8, 16.2, 20.2; k = 3, n = 8, α = 0.05, s² = 8.46, ν = 21 (which is not in Appendix Table B.5, so use ν = 20, which is in the table); reject H0: μ2 = μ1; reject H0: μ2 = μ3; do not reject H0: μ3 = μ1. Therefore, the overall conclusion is μ1 = μ3 ≠ μ2. (c) X̄p = X̄1,3 = 15.5, t0.05(2),21 = 2.080, n1 + n3 = 16; 95% CI for μ1,3 = 15.5 ± 1.5; 95% CI for μ2 = 20.2 ± 2.1; X̄1,3 − X̄2 = −4.7, SE = 1.03; 95% CI for μ1,3 − μ2 = −4.7 ± 3.2.
11.2. X̄1 = 4.82, n1 = 5; X̄2 = 4.33, n2 = 6; X̄3 = 4.67, n3 = 6; X̄4 = 5.24, n4 = 5; s² = 0.0348; ν = 18; q0.05,18,4 = 3.997; conclusion: μ2 ≠ μ3 = μ1 ≠ μ4.
11.3. Means, sample sizes, and q0.05,18,4 as in Exercise 11.2; s1² = 0.0170, s2² = 0.0307, s3² = 0.0227, s4² = 0.0730; conclusion: μ2 ≠ μ3 = μ1 ≠ μ4.
11.4. Ranked sample means: 60.62, 69.30, 86.24, 100.35; sample sizes of 5, 5, 5, and 4, respectively; k = 4, ν = 15, α = 0.05, s² = 8.557; the control group is group 1; q′0.05(2),15,4 = 2.61; reject H0: μ4 = μ1; reject H0: μ3 = μ1; reject H0: μ2 = μ1. Overall conclusion: The mean of the control population is different from the mean of each other population.
11.5. Ranked sample means, sample sizes, k, ν, α, and s² as in Exercise 11.4; the critical value of S is 3.14; for H0: (μ1 + μ4)/2 − (μ2 + μ3)/2 = 0, S = 8.4, reject H0; for H0: (μ2 + μ4)/2 − μ3 = 0, S = 13.05, reject H0.
11.6. R1 = 21, R2 = 38, R3 = 61. Overall conclusion: The variable being measured is of the same magnitude in populations 1 and 2; the variable is of different magnitude in population 3.
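A Tukey (here, Tukey-Kramer, since the sample sizes are unequal) comparison from Exercise 11.2 can be verified as follows; this is a sketch in plain Python for the largest-versus-smallest pair only:

    import math

    # Tukey-Kramer comparison of means 4 and 2 in Exercise 11.2
    s2 = 0.0348
    mean4, n4 = 5.24, 5
    mean2, n2 = 4.33, 6
    se = math.sqrt((s2 / 2) * (1 / n4 + 1 / n2))
    q = (mean4 - mean2) / se
    print(round(q, 2), "vs q_0.05,18,4 = 3.997")  # q far exceeds the critical value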
Chapter 12
12.1. (a) H0: There is no difference in mean hemolymph alanine among the three species; HA: There is a difference in mean hemolymph alanine among the three species; F = 27.6304/2.1121 = 13.08; F0.05(1),2,18 = 3.55; reject H0; P < 0.0005 [P = 0.00031]. (b) H0: There is no difference in mean hemolymph alanine between males and females; HA: There is a difference in mean hemolymph alanine between males and females; F = 138.7204/2.1121 = 65.68; F0.05(1),1,18 = 4.41; reject H0; P ≪ 0.0005 [P = 0.00000020]. (c) H0: There is no species × sex interaction in mean hemolymph alanine; HA: There is a species × sex interaction in mean hemolymph alanine; F = 3.4454/2.1121 = 1.63; F0.05(1),2,18 = 3.55; do not reject H0; 0.10 < P < 0.25 [P = 0.22]. (d) [Graph of mean hemolymph alanine (mg/100 ml) plotted by species and sex: the wide vertical distance between the sexes' lines indicates the difference between sexes, the vertical distances among the species' points indicate the difference among the three species, and the near-parallel lines are consistent with no interaction.]

…

… do not reject H0; P > 0.25 [P = 0.44]; bc = 3.16. (b) H0: The three population regression lines have the same elevation; HA: The three lines do not all have the same elevation; F = 4.61; as F0.05(1),2,90 = 3.10, reject H0; 0.01 < P < 0.025 [P = 0.012].
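The F ratios of Exercise 12.1 illustrate the Appendix D point that, with both factors fixed, every mean square is tested against the error mean square. A minimal sketch (Python, SciPy assumed available) reproduces the three F values and P values from the printed mean squares:

    from scipy import stats

    # Exercise 12.1: both factors fixed, so MS_error is every denominator
    ms_error, nu2 = 2.1121, 18
    for label, ms, nu1 in [("species", 27.6304, 2),
                           ("sex", 138.7204, 1),
                           ("interaction", 3.4454, 2)]:
        F = ms / ms_error
        print(label, round(F, 2), round(stats.f.sf(F, nu1, nu2), 7))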
Chapter 19
19.1. (a) r = 0.86. (b) r² = 0.73. (c) H0: ρ = 0; HA: ρ ≠ 0; sr = 0.16; t = 5.38; as t0.05(2),10 = 2.228, reject H0; P < 0.001 [P = 0.00032]. Or: r = 0.86, r0.05(2),10 = 0.576; reject H0; P < 0.001. Or: F = 13.29, F0.05(2),10,10 = 3.72; reject H0; P < 0.001. (d) L1 = 0.56, L2 = 0.96.
19.2. (a) H0: ρ ≤ 0; HA: ρ > 0; r = 0.86; t = 5.38; t0.05(1),10 = 1.812; reject H0; P < 0.0005 [P = 0.00016]. Or: r0.05(1),10 = 0.497; reject H0; P < 0.0005. Or: F = 13.29; F0.05(1),10,10 = 2.98; reject H0; P < 0.0005. (b) H0: ρ = 0.50; HA: ρ ≠ 0.50; r = 0.86; z = 1.2933; ζ0 = 0.5493; σz = 0.3333; Z = 2.232; Z0.05(2) = 1.960; reject H0; 0.02 < P < 0.05 [P = 0.026].
19.3. (a) H0: ρ1 = ρ2; HA: ρ1 ≠ ρ2; z1 = −0.4722, z2 = −0.4236; σz1−z2 = 0.2910; Z = −0.167; Z0.05(2) = 1.960; do not reject H0; P > 0.50 [P = 0.87]. (b) zw = −0.4449; rw = −0.42.
19.4. H0: ρ1 ≥ ρ2; HA: ρ1 < ρ2; z1 = 0.4847, z2 = 0.6328; σz1−z2 = 0.3789; Z = −0.3909; Z0.05(1) = 1.645; do not reject H0; P > 0.25 [P = 0.35].
19.5. (a) H0: ρ1 = ρ2 = ρ3; HA: The three population correlation coefficients are not all the same; χ² = 111.6607 − (92.9071)²/78 = 0.998; χ²0.05,2 = 5.991; do not reject H0; 0.50 < P < 0.75 [P = 0.61]. χc² = 1.095, 0.50 < P < 0.75 [P = 0.58]. (b) zw = 92.9071/78 = 1.1911; rw = 0.83.
19.6. (a) Σdi² = 88.00, rs = 0.69. (b) H0: ρs = 0; HA: ρs ≠ 0; as (rs)0.05(2),12 = 0.587, reject H0; 0.01 < P < 0.02.
19.7. (a) rT = 0.914; P < 0.01. (b) reject H0; 0.005 < P < 0.01.
19.9. … = 0.9991; … /0.001213 = 0.78; since F0.05(1),3,4 = 6.59, do not reject H0; P > 0.25 [P = 0.56].
19.10. (a) rc = 0.9672. (b) zc = 2.0470, … = 0.2502, σzc = 0.2135; for ζc: L1 = 1.6285, L2 = 2.4655; for ρc: L1 = 0.926, L2 = 0.986.

Chapter 20
20.1. (a) Ŷ = −30.14 + 2.07X1 + 2.58X2 + 0.64X3 + 1.11X4. (b) H0: No population regression; HA: There is a population regression; F = 90.2, F0.05(1),4,9 = 3.63; reject H0; P ≪ 0.0005 [P = 0.00000031]. (c) H0: βi = 0, HA: βi ≠ 0; t0.05(2),9 = 2.262; "*" below denotes significance:

 i     bi     sbi    t = bi/sbi    Conclusion
 1    2.07   0.46      4.50*       Reject H0.
 2    2.58   0.74      3.49*       Reject H0.
 3    0.64   0.46      1.39        Do not reject H0.
 4    1.11   0.76      1.46        Do not reject H0.

(d) sY·1,2,3,4 = 3.11 g; R² = 0.9757. (e) Ŷ = 61.73 g. (f) sŶ = 2.9549 g, L1 = 55.0 g, L2 = 68.4 g. (g) H0: μY ≤ 50.0 g, HA: μY > 50.0 g; t = 3.970; t0.05(1),9 = 1.833; reject H0; 0.001 < P < 0.0025 [P = 0.0016].
20.2. (1) With X1, X2, X3, and X4 in the model, see Exercise 20.1c. (2) Delete X3. With X1, X2, and X4 in the model, t0.05(2),10 = 2.228 and: …

… 0.25 [P = 0.51].
20.5. (a) W = 0.675. (b) H0: There is no agreement among the four faculty reviewers; HA: There is agreement among the four faculty reviewers; χr² = 10.800; (χr²)0.05,4,5 = 7.800; reject H0; 0.005 < P < 0.01.
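The Fisher z arithmetic used in Exercises 19.2 through 19.5 is a one-liner to verify: z = arctanh(r). A minimal sketch (Python, SciPy assumed available for the normal tail) reproduces Exercise 19.2(b):

    import math
    from scipy import stats

    # Exercise 19.2(b): testing H0: rho = 0.50 with r = 0.86, n = 12
    r, rho0, n = 0.86, 0.50, 12
    z = math.atanh(r)               # 1.2933
    zeta0 = math.atanh(rho0)        # 0.5493
    sigma = 1 / math.sqrt(n - 3)    # 0.3333
    Z = (z - zeta0) / sigma         # 2.232
    print(round(Z, 3), round(2 * stats.norm.sf(abs(Z)), 3))  # P about 0.026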
Chapter 21
21.1. In each step, H0: βi = 0 versus HA: βi ≠ 0 is tested, where i is the highest term in the polynomial expression. An asterisk indicates H0 is rejected. (1) Linear regression: Ŷ = 8.8074 − 0.18646X; t = 7.136*; t0.05(2),13 = 2.160. (2) Quadratic regression: Ŷ = −14.495 + 1.6595X − 0.036133X²; t = 5.298*; t0.05(2),12 = 2.179. (3) Cubic regression: Ŷ = −33.810 + 3.9550X − 0.12649X² + 0.0011781X³; t = 0.374; t0.05(2),11 = 2.201. (4) Quartic regression: Ŷ = 525.30 − 84.708X + 5.1223X² − 0.13630X³ + 0.0013443X⁴; t = 0.911; t0.05(2),10 = 2.228. Therefore, the quadratic expression is concluded to be the "best."
21.2. (a) Ŷ = 1.00 + 0.851X − 0.0259X². (b) H0: β2 = 0; HA: β2 ≠ 0; F = 69.4; F0.05(1),1,4 = 7.71; reject H0; 0.001 < P < 0.0025 [P = 0.0011]. (c) Ŷ = 6.92 eggs/cm²; sŶ = 0.26 eggs/cm²; 95% confidence interval = 6.92 ± 0.72 eggs/cm². (d) X0 = 16.43°C; Ŷ0 = 7.99 eggs/cm². (e) For X0: 95% confidence interval = 16.47 ± 0.65°C; for Ŷ0: 95% confidence interval = 7.99 ± 0.86 eggs/cm².

Chapter 22
22.1. (a) For ν = 2, P(χ² ≥ 3.452) is between 0.10 and 0.25 [P = 0.18]. (b) For ν = 5, 0.10 < P(χ² ≥ 8.668) < 0.25 [P = 0.12]. (c) χ²0.05,4 = 9.488. (d) χ²0.01,8 = 20.090.
22.2. (a) χ² = 16.000, ν = 5, 0.005 < P < 0.01 [P = 0.0068]. As P < 0.05, reject H0 of equal food item preference. (b) By grouping food items N, G, and C: n = 41, and for H0: Equal food preference, χ² = 0.049, ν = 2, 0.975 < P < 0.99 [P = 0.98]; as P > 0.05, H0 is not rejected. By grouping food items A, W, and M: n = 85, and for H0: Equal food preference, χ² = 0.447, ν = 2, 0.75 < P < 0.90 [P = 0.80]; as P > 0.05, H0 is not rejected. By considering food items N, G, and C as one group and items A, W, and M as a second group, and H0: Equal preference for the two groups, χc² = 14.675, ν = 1, P < 0.001 [P = 0.00013]; H0 is rejected.
22.3. χc² = 0.827, ν = 1, 0.25 < P < 0.50 [P = 0.36]. As P > 0.05, do not reject H0: The population consists of equal numbers of males and females.
22.4.

Location   Males   Females     χ²     ν
   1         44       54      1.020   1
   2         31       40      1.141   1
   3         12       18      1.200   1
   4         15       16      0.032   1
Total of chi-squares                 3.393   4
Pooled chi-square (102, 128)         2.939   1
Heterogeneity chi-square             0.454   3

For the heterogeneity chi-square, 0.90 < P < 0.95. Because P(heterogeneity χ²) > 0.05, the four samples may be pooled, with the following results: χc² = 2.717, ν = 1, 0.05 < P < 0.10 [P = 0.099]; P > 0.05, so do not reject H0: There are equal numbers of males and females in the population.
22.5. G = 16.188, ν = 5, 0.005 < P < 0.01 [P = 0.0063]; P < 0.05, so reject H0 of no difference in food preference.
22.6. H0: There is a uniform distribution of the animals from the water's edge to a distance of 10 meters upland; max D+ = 0.2433, max D− = 0.2033, D = 0.2433; D0.05(2),31 = 0.23788; reject H0; 0.02 < P < 0.05.
22.7. D0.05,27 = 0.25438 and D0.05,28 = 0.24993, so a sample size of at least 28 is called for.
22.8. dmax = 1; (dmax)0.05,6,15 = 6; do not reject H0: The feeders are equally desirable to the birds; P > 0.50.

Chapter 23
23.1. (a) f̂11 = 157.1026, f̂12 = 133.7580, f̂13 = 70.0337, f̂14 = 51.1057, f̂21 = 91.8974, f̂22 = 78.2420, f̂23 = 40.9663, f̂24 = 29.8943; R1 = 412, R2 = 241; C1 = 249, C2 = 212, C3 = 111, C4 = 81; n = 653; χ² = 0.2214 + 0.0115 + 0.0133 + 1.2856 + 0.3785 + 0.0197 + 0.0228 + 2.1978 = 4.151; ν = (2 − 1)(4 − 1) = 3; χ²0.05,3 = 7.815; 0.10 < P(χ² ≥ 4.151) < 0.25 [P = 0.246]; P > 0.05, do not reject H0. (b) G = 4.032, ν = 3, χ²0.05,3 = 7.815; 0.25 < P(χ² ≥ 4.032) < 0.50 [P = 0.26]; P > 0.05, do not reject H0.
23.2. (a) f11 = 14, f12 = 29, f21 = 12, f22 = 38; R1 = 43, R2 = 50; C1 = 26, C2 = 67; n = 93; χ² = 0.8407, ν = 1; χ²0.05,1 = 3.841; 0.25 < P(χ² ≥ 0.8407) < 0.50; as P > 0.05, do not reject H0 [P = 0.36]. (b) G = 0.8395, ν = 1; χ²0.05,1 = 3.841; 0.25 < P(χ² ≥ 0.8395) < 0.50; as P > 0.05, do not reject H0 [P = 0.36].
23.3. H0: Sex, area, and occurrence of rabies are mutually independent; χ² = 33.959; ν = 4; χ²0.05,4 = 9.488; reject H0; P < 0.001 [P = 0.00000076]. H0: Area is independent of sex and rabies; χ² = 23.515; ν = 3; χ²0.05,3 = 7.815; reject H0; P < 0.001 [P = 0.000032]. H0: Sex is independent of area and rabies; χ² = 11.130; ν = 3; reject H0; 0.01 < P < 0.025 [P = 0.011]. H0: Rabies is independent of area and sex; χ² = 32.170; ν = 3; reject H0; P < 0.001 [P = 0.00000048].
23.4. (a) R1 = R2 = R3 = 150; C1 = 203, C2 = 182, C3 = 45, C4 = 20; n = 450; f̂11 = f̂21 = f̂31 = 67.6667, f̂12 = f̂22 = f̂32 = 60.6667, f̂13 = f̂23 = f̂33 = 15.0000, f̂14 = f̂24 = f̂34 = 6.6667; χ² = 4.141; ν = (2)(3) = 6; χ²0.05,6 = 12.592; 0.50 < P < 0.75 [P = 0.66]; do not reject H0. (b) G = 4.141, with the same probability and conclusion as in part (a).

Chapter 24
24.1. P(X = 2) = 0.32413.
24.2. P(X = 4) = 0.00914.
24.3. H0: The sampled population is binomial with p = 0.25; HA: The sampled population is not binomial with p = 0.25; Σfi = 126; F̂1 = (0.31641)(126) = 39.868, F̂2 = 53.157, F̂3 = 26.578, F̂4 = 5.907, F̂5 = 0.493; combine F̂4 and F̂5 and combine f4 and f5; χ² = 11.524, ν = k − 1 = 3, χ²0.05,3 = 7.815; reject H0; 0.005 < P < 0.01 [P = 0.0092].
24.4. H0: The sampled population is binomial; HA: The sampled population is not binomial; p̂ = 156/(4 · 109) = 0.3578; χ² = 3.186, ν = k − 2 = 3, χ²0.05,3 = 7.815; do not reject H0; 0.25 < P < 0.50 [P = 0.36].
24.5. H0: p = 0.5; HA: p ≠ 0.5; n = 20; P(X ≤ 6 or X ≥ 14) = 0.11532; since this probability is greater than 0.05, do not reject H0.
24.6. H0: p = 0.5; HA: p ≠ 0.5; p̂ = 197/412 = 0.4782; Z = −0.888; Zc = 0.838; Z0.05(2) = t0.05(2),∞ = 1.960; therefore, do not reject H0; P ≈ 0.37 [P = 0.40].
24.7. H0: p = 0.5; HA: p ≠ 0.5; X = 44; Z = −1.0102; Z0.05(2) = t0.05(2),∞ = 1.960; do not reject H0; 0.20 < P < 0.50 [P = 0.30].
24.8. H0: p = 0.5; HA: p ≠ 0.5; number of positive differences = 7; for n = 10 and p = 0.5, P(X ≤ 3 or X ≥ 7) = 0.34378; since this probability is greater than 0.05, do not reject H0.
24.9. n = 20, p0 = 0.50; the critical values are 5 and 15; p = 6/20 = 0.30; power = 0.00080 + 0.00684 + 0.02785 + 0.07160 + 0.13042 + 0.17886 + 0.00004 + 0.00001 = 0.42.
24.10. p0 = 0.50, p = 0.4782, n = 412; power = P(Z < −1.08) + P(Z > 2.84) = 0.1401 + 0.0023 = 0.14.
24.11. X = 18, n = 30, p̂ = 0.600. (a) F0.05(2),26,36 ≈ F0.05(2),26,35 = 2.04, L1 = 0.404; F0.05(2),38,24 ≈ F0.05(2),30,24 = 2.21, L2 = 0.778. Using exact probabilities of F: F0.05(2),26,36 = 2.025, L1 = 0.406; F0.05(2),38,24 = 2.156, L2 = 0.773. (b) Z0.05(2) = 1.9600; the confidence interval is 0.600 ± (1.9600)(0.0894); L1 = 0.425, L2 = 0.775. (c) X̃ = 19.92, ñ = 33.84, p̃ = 0.589; the confidence interval is 0.589 ± (1.9600)(0.0846); L1 = 0.423, L2 = 0.755.
24.12. X = 62, n = 1215, p̂ = 0.051. (a) F0.05(2),2308,124 ≈ F0.05(2),∞,120 = 1.31, L1 = 0.039; F0.05(2),126,2306 ≈ F0.05(2),120,∞ = 1.27, L2 = 0.065. Using exact probabilities of F: F0.05(2),2308,124 = 1.312, L1 = 0.039; F0.05(2),126,2306 = 1.271, L2 = 0.061. (b) Z0.05(2) = 1.9600; the confidence interval is 0.051 ± (1.9600)(0.0063); L1 = 0.039, L2 = 0.063. (c) X̃ = 63.92, ñ = 1218.84, p̃ = 0.052; the confidence interval is 0.052 ± (1.9600)(0.00638); L1 = 0.039, L2 = 0.065.
24.13. sample median = 32.5 km/hr; i = 4, j = 1; P(29.5 km/hr ≤ population median ≤ 33.6 km/hr) = 0.90.
24.14. p̂1 = 0.7500, p̂2 = 0.4000, p̄ = 0.5714, q̄ = 0.4286, SE = 0.1414, Z = 2.475; 0.01 < P < 0.02; reject H0 [P = 0.013].
24.15. X̃1 = 18.96, ñ1 = 25.92, p̃1 = 0.7315; X̃2 = 10.96, ñ2 = 26.92, p̃2 = 0.4071; SE = 0.1286; 95% confidence interval = 0.3244 ± 0.2521; L1 = 0.072, L2 = 0.576.
24.16. H0: p1 = p2 = p3 = p4; HA: All four population proportions are not equal; X1 = 163, X2 = 135, X3 = 71, X4 = 43; n1 = 249, n2 = 212, n3 = 111, n4 = 81; p̂1 = 0.6546, p̂2 = 0.6368, p̂3 = 0.6396, p̂4 = 0.5309; p̄ = 412/653 = 0.6309; χ² = 0.6015 + 0.0316 + 0.0364 + 3.4804 = 4.150, χ²0.05,3 = 7.815; do not reject H0; 0.10 < P < 0.25 [P = 0.246].
24.17. H0: p1 = p2 = p3 = p4 is not rejected, so multiple-comparison testing is not performed.
24.18. (a) P of the original table = 0.02965; P of the next more extreme table (i.e., where f11 = 21) = 0.01037; P of the next more extreme table (i.e., where f11 = 22) = 0.00284; and so on, with total P for that tail = 0.0435; H0 is rejected. (b) χc² = 2.892, 0.05 < P < 0.10 [P = 0.089]; H0 is not rejected. (c) χc² = 2.892, 0.05 < P < 0.10 [P = 0.089]; H0 is not rejected. (d) Since R1 = R2, the two-tailed P is 2 times the one-tailed P; two-tailed P = 2(0.0435) = 0.087; H0 is not rejected.
24.19. (a) P of the original table = 0.02034; P of the next more extreme table (i.e., where f21 = 2) = 0.00332; P of the next more extreme table (i.e., where f21 = 1) = 0.00021; and so on, with a total P for that tail = 0.02787; H0 is rejected. (b) χc² = 3.593, 0.05 < P < 0.10 [P = 0.057]; H0 is not rejected. (c) χ² = 4.909, 0.025 < P < 0.05 [P = 0.027]; H0 is rejected. (d) For the most extreme table in the tail opposite from that in part (a), f11 = 13 and f21 = 1, P = 0.00000; for the next more extreme table, f11 = 12, P = 0.00001; for the next more extreme table, f11 = 11, P = 0.00027; and so on through each of the tables in this tail with a probability less than 0.02034; the sum of the tables' probabilities in the second tail = 0.00505; the sum of the probabilities of the two tails = 0.02787 + 0.00505 = 0.03292; H0 is rejected.
24.20. H0: There is no difference in frequency of occurrence of varicose veins between overweight and normal-weight men; HA: There is a difference in frequency of occurrence of varicose veins between overweight and normal-weight men; f11 = 19, f12 = 5, f21 = 12, f22 = 86; n = 122; χc² = 2.118; χ²0.05,1 = 3.841; do not reject H0; 0.10 < P < 0.25 [P = 0.15].
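The three styles of binomial confidence interval in Exercise 24.11 can be reproduced as follows; this is a sketch (Python, SciPy assumed available) in which the Clopper-Pearson "exact" limits are obtained from the beta distribution rather than from the F table:

    import math
    from scipy import stats

    # Exercise 24.11: X = 18 successes in n = 30 trials
    X, n, z = 18, 30, 1.9600
    p = X / n

    # (a) Clopper-Pearson "exact" limits via the beta distribution
    L1 = stats.beta.ppf(0.025, X, n - X + 1)
    L2 = stats.beta.ppf(0.975, X + 1, n - X)
    print(round(L1, 3), round(L2, 3))                        # about 0.406, 0.773

    # (b) Normal-approximation (Wald) interval
    se = math.sqrt(p * (1 - p) / n)
    print(round(p - z * se, 3), round(p + z * se, 3))        # 0.425, 0.775

    # (c) Adjusted (Agresti-Coull) interval: add z^2/2 successes, z^2 trials
    Xt, nt = X + z * z / 2, n + z * z                        # 19.92, 33.84
    pt = Xt / nt                                             # 0.589
    se_t = math.sqrt(pt * (1 - pt) / nt)                     # 0.0846
    print(round(pt - z * se_t, 3), round(pt + z * se_t, 3))  # 0.423, 0.755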
Chapter 25
25.1. If μ = 1.5, P(X = 0) = 0.2231 and P(X = 5) = 0.0141.
25.2. μ = 2.5 viruses per bacterium. (a) P(X = 0) = 0.0821. (b) P(X > 0) = 1.0000 − P(X = 0) = 1.0000 − 0.0821 = 0.9179. (c) P(X ≥ 2) = 1.0000 − P(X = 0) − P(X = 1) = 1.0000 − 0.0821 − 0.2052 = 0.7127. (d) P(X = 3) = 0.2138.
25.3. H0: Biting mosquitoes select the men randomly; HA: Biting mosquitoes do not select the men randomly. X̄ = ΣfiXi/n = 98/57 = 1.7193; χ² = 3.060, ν = 6 − 2 = 4, χ²0.05,4 = 9.488; do not reject H0; 0.50 < P < 0.75 [P = 0.55].
25.4. H0: p ≤ 0.00010; HA: p > 0.00010; p0 = 0.00010; n = 25,000; p0n = 2.5; X = 5; P(X ≥ 5) = 0.1088; do not reject H0; do not include this disease on the list.
25.5. H0: μ1 = μ2; HA: μ1 ≠ μ2; X1 = 112, X2 = 134; Z = 1.40; Z0.05(2) = 1.9600; do not reject H0; 0.10 < P < 0.20 [P = 0.16].
25.6. H0: The incidence of heavy damage is random over the years; HA: The incidence of heavy damage is not random over the years; n1 = 14, n2 = 13, u = 12; u0.05,14,13 = 9 and 20. As 12 is neither ≤ 9 nor ≥ 20, do not reject H0; P ≥ 0.50.
25.7. H0: The magnitude of fish kills is randomly distributed over time; HA: The magnitude of fish kills is not randomly distributed over time; n = 16, s² = 400.25; the mean square successive difference = 3126.77/30 = 104.22; C = 0.740, C0.05,16 = 0.386; reject H0; P < 0.0005.
25.8. H0: The data are sequentially random; HA: The data are not sequentially random; n = 16, u = 7; the critical values are 6 and 14; do not reject H0; 0.05 < P ≤ 0.10.

Chapter 26
26.1. n = 12, Ȳ = 0.48570, X̄ = 0.20118, r = 0.52572 (c = 1.02617, rc = 0.53948). (a) ā = 68°. (b) s = 56° (using the correction for grouping, s = 55°); s0 = 65° (using the correction for grouping, s0 = 64°). (c) 68° ± 47° (using the correction for grouping, 68° ± 46°). (d) median = 67.5°.
26.2. n = 15, Ȳ = 0.76319, X̄ = 0.12614, r = 0.77354. (a) ā = 5:22 A.M. (b) s = 2:34 hr. (c) 5:22 hr ± 1:38 hr. (d) median = 5:10 A.M.

Chapter 27
27.1. H0: ρ = 0; HA: ρ ≠ 0; r = 0.526; R = 6.309; z = 3.317, z0.05,12 = 2.932; reject H0; 0.02 < P < 0.05.
27.2. H0: ρ = 0; HA: ρ ≠ 0; r = 0.774; R = 11.603; z = 8.975, z0.05,15 = 2.945; reject H0; P < 0.001.
27.3. H0: ρ = 0; HA: ρ ≠ 0; n = 11, Ȳ = −0.88268, X̄ = 0.17138, r = 0.89917, ā = 281°, R = 9.891, μ0 = 270°. (a) V = 9.709, u = 4.140, u0.05,11 = 1.648; reject H0; P < 0.0005. (b) H0: μa = 270°; HA: μa ≠ 270°; the 95% confidence interval for μa = 281° ± 19°, so do not reject H0.
27.4. n = 12, m = 2, m0.05,12 = 0; do not reject H0; 0.20 < P ≤ 0.50.
27.5. n = 11, m′ = 0, C = 11, C0.05(2),11 = 1; reject H0; P < 0.001.
27.6. H0: The mean flight direction is the same under the two sky conditions; HA: The mean flight direction is not the same under the two sky conditions; n1 = 8, n2 = 7, R1 = 7.5916, R2 = 6.1130, ā1 = 352°, ā2 = 305°, N = 15, rw = 0.914, R = 12.5774; F = 12.01, F0.05(1),1,13 = 4.67; reject H0; 0.0025 < P < 0.005 [P = 0.004].
27.7. H0: The flight direction is the same under the two sky conditions; HA: The flight direction is not the same under the two sky conditions; n1 = 8, n2 = 7, N = 15; Σdk = −2.96429, Σdk² = 1.40243; U² = 0.2032, U²0.05,8,7 = 0.1817; reject H0; 0.02 < P < 0.05.
27.8. H0: Members of all three hummingbird species have the same mean time of feeding at the feeding station; HA: Members of all three species do not have the same mean time of feeding at the feeding station; n1 = 6, n2 = 9, n3 = 7, N = 22; R1 = 2.965, R2 = 3.938, R3 = 3.868; ā1 = 10:30 hr, ā2 = 11:45 hr, ā3 = 11:10 hr; rw = 0.490, F = 0.206, F0.05(1),2,19 = 3.54; do not reject H0; P > 0.25 [P = 0.82]. Therefore, all three āi's estimate the same μa, the best estimate of which is 11:25 hr.
27.9. H0: Birds do not orient better when skies are sunny than when cloudy; HA: Birds do orient better when skies are sunny than when cloudy. Angular distances for group 1 (sunny): 10, 20, 45, 10, 20, 5, 15, and 0°; for group 2 (cloudy): 20, 55, 105, 90, 55, 40, and 25°. For the one-tailed Mann-Whitney test: n1 = 8, n2 = 7, R1 = 40, U = 52, U0.05(1),8,7 = 43; reject H0; P = 0.0025.
27.10. H0: Variability in flight direction is the same under both sky conditions; HA: Variability in flight direction is not the same under both sky conditions; ā1 = 352°, ā2 = 305°; angular distances for group 1 (sunny): 2, 12, 37, 18, 28, 3, 7, and 8°; for group 2 (cloudy): 35, 0, 50, 35, 0, 15, and 30°. For the two-tailed Mann-Whitney test: R1 = 58, U = 34, U′ = 22, U0.05(2),8,7 = 46; do not reject H0.
27.11. (a) H0: ρaa = 0; HA: ρaa ≠ 0; … = 0.9244; raa = 0.9236; s² = 0.0004312; L1 = 0.9169, L2 = 0.9440; reject H0. (b) H0: (ρaa)s = 0; HA: (ρaa)s ≠ 0; r′ = 0.453, r″ = 0.009, (raa)s = 0.365; (n − 1)(raa)s = 2.92; for α(2) = 0.05 the critical value is 3.23, so do not reject H0; for α(2) = 0.10 the critical value is 2.52, so 0.05 < P < 0.10.
27.12. H0: ρal = 0; HA: ρal ≠ 0; ral = 0.833, nr²al = 6.24, χ²0.05,2 = 5.991; reject H0.
27.13. H0: The distribution is not contagious; HA: The distribution is contagious; n1 = 8, n2 = 8, u = 6, u′ = 3; using Appendix Table B.28: m1 = 7, m2 = 7, f = 2, n = 15; the critical values are 1 and 6, so do not reject H0; P ≥ 0.50.
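Several of the computations above are quick to verify. A minimal sketch in plain Python checks the Poisson probabilities of Exercises 25.1 and 25.2, the circular descriptive statistics underlying Chapter 26, and the V-test arithmetic of Exercise 27.3 (function names are ours; u is referred to Table B.36):

    import math

    # Poisson probabilities (Exercises 25.1 and 25.2)
    def pois(mu, k):
        return math.exp(-mu) * mu ** k / math.factorial(k)

    print(round(pois(1.5, 0), 4), round(pois(1.5, 5), 4))       # 0.2231, 0.0141
    print(round(pois(2.5, 0), 4), round(1 - pois(2.5, 0), 4))   # 0.0821, 0.9179

    # Circular summary statistics, as used in Exercises 26.1 and 27.1-27.3
    def circular_summary(angles_deg):
        n = len(angles_deg)
        X = sum(math.cos(math.radians(a)) for a in angles_deg) / n
        Y = sum(math.sin(math.radians(a)) for a in angles_deg) / n
        r = math.hypot(X, Y)                 # mean vector length
        a_bar = math.degrees(math.atan2(Y, X)) % 360
        z = n * r * r                        # Rayleigh's z (= R^2/n)
        return a_bar, r, n * r, z

    # V test of Exercise 27.3: V = R cos(a_bar - mu0), u = V * sqrt(2/n)
    R, a_bar, mu0, n = 9.891, 281.0, 270.0, 11
    V = R * math.cos(math.radians(a_bar - mu0))   # 9.709
    u = V * math.sqrt(2 / n)                      # 4.140; cf. u_0.05,11 in Table B.36
    print(round(V, 3), round(u, 3))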
Literature Cited
ACTON, F. S. 1966. Analysis of Straight Line Data. Dover, New York. 267 pp.

ADRIAN, Y. E. O. 2006. The Pleasures of Pi, e and Other Interesting Numbers. World Scientific Publishing, Singapore. 239 pp.

AGRESTI, A. 1992. A survey of exact inference for contingency tables. Statist. Sci. 7: 131-153. Also Comment, by E. J. Bedrick and J. R. Hill, ibid. 7: 153-157; D. E. Duffy, ibid. 7: 157-160; L. D. Epstein and S. E. Fienberg, ibid. 7: 160-163; S. Kreiner, Exact inference in multidimensional tables, ibid. 7: 163-165; D. Y. Lin and L. J. Wei, ibid. 7: 166-167; C. R. Mehta, An interdisciplinary approach to exact inference in contingency tables, ibid. 7: 167-170; S. Suissa, ibid. 7: 170-172; Rejoinder, by A. Agresti, ibid. 7: 173-177.

AGRESTI, A. 2002. Categorical Data Analysis. Wiley-Interscience, New York. 710 pp.

AGRESTI, A. 2007. An Introduction to Categorical Data Analysis. 2nd ed. Wiley-Interscience, New York. 372 pp.

AGRESTI, A. and B. CAFFO. 2000. Simple and effective confidence intervals for proportions and differences of proportions result from adding two successes and two failures. Amer. Statist. 54: 280-288.

AGRESTI, A. and B. A. COULL. 1998. Approximate is better than "exact" for interval estimation of binomial proportions. Amer. Statist. 52: 119-126.

AGRESTI, A. and M.-C. YANG. 1987. An empirical investigation of some effects of sparseness in contingency tables. Computa. Statist. Data Anal. 5: 9-21.

AIKEN, L. S. and S. G. WEST. 1991. Multiple Regression: Testing and Interpreting Interactions. Sage Publications, Newbury Park, CA. 212 pp.

AJNE, B. 1968. A simple test for uniformity of a circular distribution. Biometrika 55: 343-354.

AKRITAS, M. G. 1990. The rank transform method in some two-factor designs. J. Amer. Statist. Assoc. 85: 73-78.

ALGINA, J. and S. SEAMAN. 1984. Calculation of semipartial correlations. Educ. Psychol. Meas. 44: 547-549.

ANDERSON, N. H. 1961. Scales and statistics: parametric and non-parametric. Psychol. Bull. 58: 305-316.

ANDRÉ, D. 1883. Sur le nombre de permutations de n éléments qui présentent s séquences. Compt. Rend. (Paris) 97: 1356-1358. [Cited in Bradley, 1968: 281.]

ANDREWS, F. C. 1954. Asymptotic behavior of some rank tests for analysis of variance. Ann. Math. Statist. 25: 724-735.

ANSCOMBE, F. J. 1948. The transformation of Poisson, binomial, and negative binomial data. Biometrika 35: 246-254.

ARMITAGE, P. 1955. Tests for linear trends in proportions and frequencies. Biometrics 11: 375-386.

ARMITAGE, P. 1971. Statistical Methods in Medical Research. John Wiley, New York. 504 pp.

ARMITAGE, P. 1985. Biometry and medical statistics. Biometrics 41: 823-833.

ARMITAGE, P., G. BERRY, and J. N. S. MATTHEWS. 2002. Statistical Methods in Medical Research. Blackwell Science, Malden, MA. 817 pp.

ASIMOV, I. 1982. Asimov's Biographical Encyclopedia of Science and Technology. 2nd rev. ed. Doubleday, Garden City, NY. 941 pp.

BAHADUR, R. R. 1967. Rates of convergence of estimates and test statistics. Ann. Math. Statist. 38: 303-324.

BAKER, L. 2002. A comparison of nine confidence intervals for a Poisson parameter when the expected number of events is ≤ 5. Amer. Statist. 56: 85-89.

BALL, W. W. R. 1935. A Short Account of the History of Mathematics. Macmillan, London. 522 pp.

BARNARD, G. A. 1947. Significance tests for 2 × 2 tables. Biometrika 34: 123-138.

BARNARD, G. A. 1979. In contradiction to J. Berkson's dispraise: Conditional tests can be more efficient. J. Statist. Plan. Inf. 2: 181-187.

BARNARD, G. A. 1984. Discussion of Dr. Yates' paper. J. Roy. Statist. Soc. Ser. A, 147: 449-450.

BARNARD, G. 1990. Fisher: A retrospective. CHANCE 3: 22-28.

BARNETT, V. and T. LEWIS. 1994. Outliers in Statistical Data. 3rd ed. John Wiley, Chichester. 584 pp.

BARNHART, H. X., M. HABER, and J. SONG. 2002. Overall concordance correlation coefficient for evaluating agreement among multiple observers. Biometrics 58: 1020-1027.

BARR, D. R. 1969. Using confidence intervals to test hypotheses. J. Qual. Technol. 1: 256-258.

BARTHOLOMEW, D. J. 1983. Sir Maurice Kendall FBA. Statistician 32: 445-446.

BARTLETT, M. S. 1936. The square root transformation in analysis of variance. J. Royal Statist. Soc. Suppl. 3: 68-78.

BARTLETT, M. S. 1937a. Some examples of statistical methods of research in agriculture and applied biology. J. Royal Statist. Soc. Suppl. 4: 137-170.

BARTLETT, M. S. 1937b. Properties of sufficiency and statistical tests. Proc. Roy. Statist. Soc. Ser. A, 160: 268-282.

BARTLETT, M. S. 1939. A note on tests of significance in multivariate analysis. Proc. Cambridge Philos. Soc. 35: 180-185.

BARTLETT, M. S. 1947. The use of transformations. Biometrics 3: 39-52.

BARTLETT, M. S. 1965. R. A. Fisher and the last 50 years of statistical methodology. J. Amer. Statist. Assoc. 60: 395-409.

BARTLETT, M. S. 1981. Egon Sharpe Pearson, 1895-1980. Biometrika 68: 1-12.

BASHARIN, G. P. 1959. On a statistical estimate for the entropy of a sequence of independent random variables. Theory Prob. Appl. 4: 333-336.

BATES, D. M. and D. G. WATTS. 1988. Nonlinear Regression Analysis and Its Applications. John Wiley, New York. 365 pp.

BATSCHELET, E. 1965. Statistical Methods for the Analysis of Problems in Animal Orientation and Certain Biological Rhythms. American Institute of Biological Sciences, Washington, DC. 57 pp.

BATSCHELET, E. 1972. Recent statistical methods for orientation data, pp. 61-91. (With discussion.) In S. R. Galler, K. Schmidt-Koenig, G. J. Jacobs, and R. E. Belleville (eds.), Animal Orientation and Navigation. National Aeronautics and Space Administration, Washington, DC.

BATSCHELET, E. 1974. Statistical rhythm evaluations, pp. 25-35. In M. Ferin, F. Halberg, R. M. Richart, and R. L. Vande Wiele (eds.), Biorhythms and Human Reproduction. John Wiley, New York.

BATSCHELET, E. 1976. Mathematics for Life Scientists. 2nd ed. Springer-Verlag, New York. 643 pp.

BATSCHELET, E. 1978. Second-order statistical analysis of directions, pp. 1-24. In K. Schmidt-Koenig and W. T. Keeton (eds.), Animal Migration, Navigation, and Homing. Springer-Verlag, Berlin.

BATSCHELET, E. 1981. Circular Statistics in Biology. Academic Press, New York. 371 pp.

BAUSELL, R. B. and Y.-F. LI. 2002. Power Analysis for Experimental Research. Cambridge University Press, Cambridge, UK. 363 pp.

BEALL, G. 1940. The transformation of data from entomological field experiments. Can. Entomol. 72: 168.

BEALL, G. 1942. The transformation of data from entomological field experiments so that the analysis of variance becomes applicable. Biometrika 32: 243-262.

BECKMANN, P. 1977. A History of π. 4th ed. Golem Press, Boulder, Colorado. 202 pp.

BECKMAN, R. J. and R. D. COOK. 1983. Outlier..........s. Technometrics 25: 119-149, and discussion and response.

BEHRENS, W. V. 1929. Ein Beitrag zur Fehlerberechnung bei wenigen Beobachtungen. Landwirtsch. Jahrbücher 68: 807-837.

BENARD, A. and P. VAN ELTEREN. 1953. A generalization of the method of m rankings. Indagationes Mathematicae 15: 358-369.

BENNETT, B. M. and E. NAKAMURA. 1963. Tables for testing significance in a 2 × 3 contingency table. Technometrics 5: 501-511.

BENNETT, B. M. and R. E. UNDERWOOD. 1970. On McNemar's test for the 2 × 2 table and its power function. Biometrics 26: 339-343.

BERKSON, J. 1978. In dispraise of the exact test. J. Statist. Plan. Infer. 2: 27-42.

BERNHARDSON, C. S. 1975. Type I error rates when multiple comparison procedures follow a significant F test of ANOVA. Biometrics 31: 229-232.

BERRY, K. J. and P. W. MIELKE JR. 1988. Monte Carlo comparisons of the asymptotic chi-square and likelihood-ratio tests for sparse r × c tables. Psychol. Bull. 103: 256-264.

BERRY, W. D. and S. FELDMAN. 1985. Multiple Regression in Practice. Sage Publications, Beverly Hills, CA. 95 pp.

BERTRAND, P. V. and R. L. HOLDER. 1988. A quirk in multiple regression: the whole regression can be greater than the sum of its parts. Statistician 37: 371-374.

BEST, D. J. 1975. The difference between two Poisson expectations. Austral. J. Statist. 17: 29-33.

BEST, D. J. and J. C. W. RAYNER. 1987. Welch's approximate solution for the Behrens-Fisher problem. Technometrics 29: 205-210.

BEST, D. J. and D. E. ROBERTS. 1975. The percentage points of the χ² distribution. J. Royal Statist. Soc. Ser. C. Appl. Statist. 24: 385-388.

BHATTACHARJEE, G. P. 1970. The incomplete gamma integral. J. Royal Statist. Soc. Ser. C. Appl. Statist. 19: 285-287.

BHATTACHARYYA, G. K. and R. A. JOHNSON. 1969. On Hodges's bivariate sign test and a test for uniformity of a circular distribution. Biometrika 56: 446-449.

BIRKES, D. and Y. DODGE. 1993. Alternative Methods of Regression. John Wiley, New York. 228 pp.

BIRNBAUM, Z. W. and F. H. TINGEY. 1951. One-sided confidence contours for probability distribution functions. Ann. Math. Statist. 22: 592-596.

BISSELL, A. F. 1992. Lines through the origin - is NO INT the answer? J. Appl. Statist. 19: 192-210.

BLAIR, R. C. and J. J. HIGGINS. 1980a. A comparison of the power of Wilcoxon's rank-sum statistic to that of Student's t statistic under various nonnormal distributions. J. Educ. Statist. 5: 309-335.

BLAIR, R. C. and J. J. HIGGINS. 1980b. The power of t and Wilcoxon statistics: A comparison. Eval. Rev. 4: 645-656.

BLAIR, R. C. and J. J. HIGGINS. 1985. Comparison of the power of the paired samples t test to that of Wilcoxon's signed-ranks test under various population shapes. Psychol. Bull. 97: 119-128.

BLAIR, R. C., J. J. HIGGINS, and W. D. S. SMITLEY. 1980. On the relative power of the U and t tests. Brit. J. Math. Statist. Psychol. 38: 114-120.

BLAIR, R. C., S. S. SAWILOWSKY, and J. J. HIGGINS. 1987. Limitations of the rank transform statistic in tests for interactions. Communic. Statist.-Simula. 16: 1133-1145.

BLATNER, D. 1997. The Joy of π. Walker, New York. 130 pp.

BLISS, C. I. 1967. Statistics in Biology, Vol. 1. McGraw-Hill, New York. 558 pp.

BLISS, C. I. 1970. Statistics in Biology, Vol. 2. McGraw-Hill, New York. 639 pp.

BLOOMFIELD, P. 1976. Fourier Analysis of Time Series: An Introduction. John Wiley, New York. 258 pp.

BLOOMFIELD, P. and W. L. STEIGER. 1983. Least Absolute Deviations: Theory, Applications, and Algorithms. Birkhäuser, Boston, MA. 349 pp.

BLYTH, C. R. 1986. Approximate binomial confidence limits. J. Amer. Statist. Assoc. 81: 843-855.

BOEHNKE, K. 1984. F- and H-test assumptions revisited. Educ. Psychol. Meas. 44: 609-615.

BÖHNING, D. 1994. Better approximate confidence intervals for a binomial parameter. Can. J. Statist. 22: 207-218.

BOLAND, P. J. 1984. A biographical glimpse of William Sealy Gosset. Amer. Statist. 38: 179-183.

BOLAND, P. J. 2000. William Sealy Gosset - alias 'Student' 1876-1937, pp. 105-122. In Houston, K. (ed.), Creators of Mathematics: The Irish Connection. University College Dublin Press, Dublin.

BONEAU, C. A. 1960. The effects of violations of assumptions underlying the t test. Psychol. Bull. 57: 49-64.

BONEAU, C. A. 1962. A comparison of the power of the U and t tests. Psychol. Rev. 69: 246-256.

BOOMSMA, A. and I. W. MOLENAAR. 1994. Four electronic tables for probability distributions. Amer. Statist. 48: 153-162.

BOWKER, A. H. 1948. A test for symmetry in contingency tables. J. Amer. Statist. Assoc. 43: 572-574.

BOWLEY, A. L. 1920. Elements of Statistics. 4th ed. Charles Scribner's Sons, New York. 459 pp.

BOWMAN, K. O., K. HUTCHESON, E. P. ODUM, and L. R. SHENTON. 1971. Comments on the distribution of indices of diversity, pp. 315-366. In G. P. Patil, E. C. Pielou, and W. E. Waters (eds.), Statistical Ecology, Vol. 3. Many Species Populations, Ecosystems, and Systems Analysis. Pennsylvania State University Press, University Park.

BOWMAN, K. O. and L. R. SHENTON. 1975. Omnibus test contours for departures from normality based on √b1 and b2. Biometrika 62: 243-250.

BOWMAN, K. O. and L. R. SHENTON. 1986. Moment (√b1, b2) techniques, pp. 279-329. In R. B. D'Agostino and M. A. Stephens (eds.), Goodness-of-Fit Techniques. Marcel Dekker, New York.

BOX, G. E. P. 1949. A general distribution theory for a class of likelihood criteria. Biometrika 36: 317-346.

BOX, G. E. P. 1950. Problems in the analysis of growth and wear curves. Biometrics 6: 362-389.

BOX, G. E. P. 1953. Non-normality and tests on variances. Biometrika 40: 318-335.

BOX, G. E. P. and S. L. ANDERSON. 1955. Permutation theory in the derivation of robust criteria and the study of departures from assumption. J. Royal Statist. Soc. B17: 1-34.

BOX, G. E. P. and D. R. COX. 1964. An analysis of transformations. J. Royal Statist. Soc. B26: 211-243.

BOX, J. F. 1978. R. A. Fisher: The Life of a Scientist. John Wiley, New York. 512 pp.

BOZIVICH, H., T. A. BANCROFT, and H. O. HARTLEY. 1956. Power of analysis of variance test procedures for certain incompletely specified models. Ann. Math. Statist. 27: 1017-1043.

BRADLEY, R. A. and M. HOLLANDER. 1978. Wilcoxon, Frank, pp. 1245-1250. In Kruskal and Tanur (1978).

BRAY, J. S. and S. E. MAXWELL. 1985. Multivariate Analysis of Variance. Sage Publications, Beverly Hills, CA. 80 pp.

BRILLOUIN, L. 1962. Science and Information Theory. Academic Press, New York. 351 pp.

BRITS, S. J. M. and H. H. LEMMER. 1990. An adjusted Friedman test for the nested design. Communic. Statist.-Theor. Meth. 19: 1837-1855.

BROWER, J. E., J. H. ZAR, and C. N. VON ENDE. 1998. Field and Laboratory Methods for General Ecology. 4th ed. McGraw-Hill, Boston. 273 pp.

BROWN, L. D., T. T. CAI, and A. DASGUPTA. 2001. Interval estimation for a binomial proportion. Statist. Sci. 16: 101-117. Comment by A. Agresti and B. A. Coull, ibid. 16: 117-120; G. Casella, ibid. 16: 120-122; C. Corcoran and C. Mehta, ibid. 16: 122-124; M. Ghosh, ibid. 16: 124-125; T. J. Santner, ibid. 16: 126-128. Rejoinder by L. D. Brown, T. T. Cai, and A. DasGupta, ibid. 16: 128-133.

BROWN, L. D., T. T. CAI, and A. DASGUPTA. 2002. Confidence intervals for a binomial proportion and asymptotic expansions. Ann. Statist. 30: 160-201.

BROWN, M. B. and A. B. FORSYTHE. 1974a. The small sample behavior of some statistics which test the equality of several means. Technometrics 16: 129-132.

BROWN, M. B. and A. B. FORSYTHE. 1974b. The ANOVA and multiple comparisons for data with heterogeneous variances. Biometrics 30: 719-724.

BROWN, M. B. and A. B. FORSYTHE. 1974c. Robust tests for the equality of variances. J. Amer. Statist. Assoc. 69: 364-367.

BROWNE, R. H. 1979. On visual assessment of the significance of a mean difference. Biometrics 35: 657-665.

BROWNLEE, K. A. 1965. Statistical Theory and Methodology in Science and Engineering. 2nd ed. John Wiley, New York. 590 pp.

BRUNNER, E. and N. NEUMANN. 1986. Rank tests in 2 × 2 designs. Statist. Neerland. 40: 251-272. [Cited in McKean and Vidmar, 1994.]

BUCKLE, N., C. KRAFT, and C. VAN EEDEN. 1969. An approximation to the Wilcoxon-Mann-Whitney distribution. J. Amer. Statist. Assoc. 64: 591-599.

BUDESCU, D. V. and M. I. APPELBAUM. 1981. Variance stabilizing transformations and the power of the F test. J. Educ. Statist. 6: 55-74.

BÜNING, H. 1997. Robust analysis of variance. J. Appl. Statist. 24: 319-332.

BURR, E. J. 1964. Small-sample distributions of the two-sample Cramér-von Mises' W² and Watson's U². Ann. Math. Statist. 35: 1091-1098.

BURSTEIN, H. 1971. Attribute Sampling. Tables and Explanations. McGraw-Hill, New York. 464 pp.

BURSTEIN, H. 1975. Finite population correction for binomial confidence limits. J. Amer. Statist. Assoc. 70: 67-69.

BURSTEIN, H. 1981. Binomial 2 × 2 test for independent samples with independent proportions. Communic. Statist.-Theor. Meth. A10: 11-29.

CACOULLOS, T. 1965. A relation between the t and F distributions. J. Amer. Statist. Assoc. 60: 528-531.

CAJORI, F. 1928-1929. A History of Mathematical Notations. Vol. I: Notations in Elementary Mathematics, 451 pp. Vol. II: Notations Mainly in Higher Mathematics, 367 pp. Open Court Publishing, LaSalle, Illinois. As one volume: Dover, New York, 1993.

CAJORI, F. 1954. Binomial formula, pp. 588-589. In The Encyclopaedia Britannica, Vol. 3. Encyclopaedia Britannica Inc., New York.

CALITZ, F. 1987. An alternative to the Kolmogorov-Smirnov test for goodness of fit. Communic. Statist.-Theor. Meth. 16: 3519-3534.

CAMILLI, G. 1990. The test of homogeneity for 2 × 2 contingency tables: A review of and some personal opinions on the controversy. Psychol. Bull. 108: 135-145.

CAMILLI, G. and K. D. HOPKINS. 1978. Applicability of chi-square to 2 × 2 contingency tables with small expected frequencies. Psychol. Bull. 85: 163-167.

CAMILLI, G. and K. D. HOPKINS. 1979. Testing for association in 2 × 2 contingency tables with very small sample sizes. Psychol. Bull. 86: 1011-1014.

CARMER, S. G. and M. R. SWANSON. 1973. An evaluation of ten pairwise multiple comparison procedures by Monte Carlo methods. J. Amer. Statist. Assoc. 68: 66-74.

CARR, W. E. 1980. Fisher's exact test extended to more than two samples of equal size. Technometrics 22: 269-270. See also: 1981. Corrigendum. Technometrics 23: 320.

CASAGRANDE, J. T., M. C. PIKE, and P. G. SMITH. 1978. An improved approximate formula for calculating sample sizes for comparing binomial distributions. Biometrics 34: 483-486.

CATTELL, R. B. 1978. Spearman, C. E., pp. 1036-1039. In Kruskal and Tanur (1978).

CAUDILL, S. B. 1988. Type I errors after preliminary tests for heteroscedasticity. Statistician 37: 65-68.

CHAPMAN, J.-A. W. 1976. A comparison of the χ², −2 log R, and multinomial probability criteria for significance tests when expected frequencies are small. J. Amer. Statist. Assoc. 71: 854-863.

CHATTERJEE, S., A. S. HADI, and B. PRICE. 2006. Regression Analysis by Example. 4th ed. John Wiley, Hoboken, NJ. 375 pp.

CHOW, B., J. E. MILLER, and P. C. DICKINSON. 1974. Extensions of Monte-Carlo comparison of some properties of two rank correlation coefficients in small samples. J. Statist. Computa. Simula. 3: 189-195.

CHRISTENSEN, R. 1990. Log-Linear Models. Springer-Verlag, New York. 408 pp.

CHURCH, J. D. and E. L. WIKE. 1976. The robustness of homogeneity of variance tests for asymmetric distributions: A Monte Carlo study. Bull. Psychomet. Soc. 7: 417-420.

CICCHITELLI, G. 1989. On the robustness of the one-sample t test. J. Statist. Computa. Simula. 32: 249-258.

CLEVELAND, W. S., C. L. MALLOWS, and J. E. McRAE. 1993. ATS methods: Nonparametric regression for non-Gaussian data. J. Amer. Statist. Assoc. 88: 821-835.

CLINCH, J. J. and H. J. KESELMAN. 1982. Parametric alternatives to the analysis of variance. J. Educ. Statist. 7: 207-214.

CLOPPER, C. J. and E. S. PEARSON. 1934. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika 26: 404-413. [Cited in Agresti and Coull, 1998.]

COAKLEY, C. W. and M. HEISE. 1996. Versions of the sign test in the presence of ties. Biometrics 52: 1242-1251.

COCHRAN, W. G. 1937. The efficiencies of the binomial series tests of significance of a mean and of a correlation coefficient. J. Roy. Statist. Soc. 100: 69-73.

COCHRAN, W. G. 1942. The χ² correction for continuity. Iowa State Coll. J. Sci. 16: 421-436.

COCHRAN, W. G. 1947. Some consequences when the assumptions for analysis of variance are not satisfied. Biometrics 3: 22-38.

COCHRAN, W. G. 1950. The comparison of percentages in matched samples. Biometrika 37: 256-266.

COCHRAN, W. G. 1952. The χ² test for goodness of fit. Ann. Math. Statist. 23: 315-345.

COCHRAN, W. G. 1954. Some methods for strengthening the common χ² tests. Biometrics 10: 417-451.

COCHRAN, W. G. 1977. Sampling Techniques. 3rd ed. John Wiley, New York. 428 pp.

COCHRAN, W. G. and G. M. COX. 1957. Experimental Designs. 2nd ed. John Wiley, New York. 617 pp.

COHEN, A. 1991. Dummy variables in stepwise regression. Amer. Statist. 45: 226-228.

COHEN, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Lawrence Erlbaum Associates, Hillsdale, NJ. 567 pp.

COHEN, J., P. COHEN, S. G. WEST, and L. S. AIKEN. 2003. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. 3rd ed. Lawrence Erlbaum Associates, Mahwah, NJ. 703 pp.

CONNETT, J. E., J. A. SMITH, and R. B. McHUGH. 1987. Sample size and power for pair-matched case-control studies. Statist. Med. 6: 53-59.

CONOVER, W. J. 1973. On the methods of handling ties in the Wilcoxon signed-rank test. J. Amer. Statist. Assoc. 68: 985-988.

CONOVER, W. J. 1974. Some reasons for not using the Yates continuity correction on 2 × 2 contingency tables. J. Amer. Statist. Assoc. 69: 374-376. Also: Comment, by C. F. Starmer, J. E. Grizzle, and P. K. Sen, ibid. 69: 376-378; Comment and a suggestion, by N. Mantel, ibid. 69: 378-380; Comment, by O. S. Miettinen, ibid. 69: 380-382; Rejoinder, by W. J. Conover, ibid. 69: 382.
892
Literature Cited CONOVER,W. J. 1999. Practical Nonparametric Statistics. 3rd ed. John Wiley, New York. 584 pp. CONOVER,W. J. and R. L. [MAN. 1976. On some alternative procedures using ranks for the analysis of experimental designs. Cummunic. Statist.- Theor. Meth. A5: 1349-1368. CONOVER,W. J. and R. L. IMAN. 1981. Rank transformations nonparametric statistics. Amer. Statist. 85: 124-129.
as a bridge between
parametric
and
COTTINGHAM,K. L., J. T. LENNON, and B. L. BROWN. 2005. Knowing where to draw the line: Designing more informative ecological experiments. Front. Ecol. Environ. 3: 145-152. COWLES, S. M. and C. DAVIS. 1982. On the origins of the .05 level of statistical Psycho!. 37: 553-558.
Amer.
significance.
Cox, D. R. 1958. Planning of Experiments. John Wiley, New York. 308 pp. CRAMER,E. M. 1972. Significance 26-30.
Amer. Statist. 26(4):
test and tests of models in multiple regression.
CRAMER,H. 1946. Mathematical Methods of Statistics. Princeton 575 pp.
University
Press, Princeton,
CRESSIE, N. and T. R. C. READ. 1989. Pearson's X2 and the loglikelihood comparative review. Internat. Statist. Rev. 57: 19-43.
N.J.
G2: A
ratio statistic
CHATTERJEE,S., A. S. HAD!, and B. PRICE. 2006. Regression Analysis by Example. 4th ed. Wiley, Hoboken, N.J. 375 pp. CRITCHLOW, D. E. and M. A. FLiGNER. 1991. On distribution-free multiple comparisons one-way analysis of variance. Communic. Statist. - Theor. Meth. 20: 127 -139. CURETON, E. E. 1967. The normal approximation to the signed-rank zero differences are present. 1. A mer. Statist. Assoc. 62: 1068-1069. D' AGOSTINO,R. B. 1970. Transformation 679-681.
to normality
sampling
in the
distribution
when
of gl' Biometrika 57:
of the null distribution
D'AGOSTINO, R. B. 1986. Tests for the normal distribution, pp. 367-419. In R. B. D'Agostino M. A. Stephens (eds.), Goodness-of-fit Techniques. Marcel Dekker, New York.
and
D' AGOSTINO,R. B., A. BELANGER,and R. B. D' AGOSTINO,JR. 1990. A suggestion and informative tests of normality. Amer. Statist. 44: 316-321.
for using powerful
D'AGOSTI 0, R. B., W. CHASE, and A. BELANGER. 1988. The appropriateness procedures for testing the equality of two independent binomial populations. 198-202.
of some common Amer. Statist. 42:
D' AGOSTINO,R. B. and G. E. NOETHER.1973. On the evaluation Statist. 27: 81-82.
of the Kolmogorov
D'AGOSTINO, R. B. and E. S. PEARSON. 1973. Tests of departure for the distribution of b2 and .Jbl. Biometrika 60: 613-622.
from normality.
D' AGOSTINO,R. B. and G. L. TIETJEN. 1973. Approaches 60: 169-173. DALE, A. 1. 1989. An early occurrence DANIEL, W. W. 1990. Applied
for statistical
Statistics. 2nd ed. PWS-kent, usage in age-estimation
Hotelling
1895-1973.
DAVENPORT,J. M. and J. T. WEBSTER.1975. The Behrens-Fisher Metrika 22: 47-54. .1962. Games, Gods and Gambling. Hafner
DAVID, H. A. 1995. First (?) occurrence 49: 121-133.
of common
Biometrika
Boston, MA. 635 pp.
New York. 542 pp.
Statist. Sci. 3: 57 -62.
DAVENPORT,E. C, JR. and N. A. EL-SANHURRY. 199\. Phi/Phimax: Psychol. Meas. 51: 821-828.
DAVID, F.
of.Jbl.
results
technics. 1. Wildlife Manage.
DARLINGTON,R. B. 1990. Regression and Linear Models. McGraw-Hill, DARNELL, A. C. 1988. Harold
Empirical
Statist. Prob. Lett. 7: 21-22.
of the Poisson distribution.
Nonparametric
DAPSON, R. W. 1980. Guidelines 44: 541-548.
to the null distribution
statistic. Amer.
Review
problem,
and Synthesis.
an old solution
Educ.
revisited.
Press, New York. 275 pp.
terms in mathematical
statistics. Amer. Statist.
Literature Cited DAVID, H. A. 1998a. First (?) occurrence of common list with corrections. Amer. Statist. 52: 36-40. DAVID, H. A. 1998b. Early sample measures DAVID, H. A. 2005. Tables related 309-311. DAVID, H. A. 2006. The introduction
terms in probability
of variability.
to the normal
and statistics-A
second
Statist. Sci. 13: 368-377.
distribution:
Amer. Statist. 59:
A short history.
of matrix algebra into statistics. Amer. Statist. 60: 162.
DAVID, H. A. and D. F. MORRISON. 2006. Samuel 46-49.
Stanley
DAVID, H. A. and H. A. FULLER. 2007. Sir Maurice Kendall Amer. Statist. 61: 41-46. DAY, R. W. and G. P. QUINN. 1989. Comparisons ecology. Eco!. Monogr. 59: 433-463.
DEMPSTER, A. P. 1983. Reflections 321-322.
on
W. G.
(1907-1983):
of treatments
Cochran,
A centenary
after an analysis of Spearman's
of two Poisson-distributed A comment
of variance S when n
=
in 12.
78-87. In Houston, K. College Dublin Press,
DESU, M. M. and D. RAGHAVARAO.1990. Sample Size Methodology. Academic 135 pp.
of quantiles:
appreciation.
Intern. Statist. Rev. 51:
1909-1980.
DESMOND, A. E. 2000. Francis Ysidro Edgeworth 1845-1926, pp. (ed.), Creators of Mathematics: The Irish Connection. University Dublin.
DETRE, K. and C. WHITE. 1970. The comparison 26: 851-854.
A mer. Statist. 60:
Wilks (1906-1964).
DE JONGE, C. and M. A. J. VANMONTFORT.1972. The null distribution Statist. Neerland. 26: 15-17.
DHARMADHIKARI,S. 1991. Bounds 257-258.
893
Press, Boston, MA.
observations.
on O'Cinneide.
Biometrics
Amer. Statist. 45:
DUKSTRA,J. B. and S. P. J. WERTER. 1981. Testing the equality of several means when the population variances are unequal. Communic. Statist.-Simula. Computa. BIO: 557-569. DIXON, W. J. and F. J. MASSEY,JR. 1969. Introduction to Statistical Analysis. 3rd ed. McGraw-Hill, New York. 638 pp.
Amer. Statist. 30: 181-183. generatoLlnternat. Statist. Rev. 64: 329-344. DONALDSON, T. S. 1968. Robustness of the F-test to errors of both kinds and the correlation between the numerator and denominator of the f-ratio. J. Amer. Statist. Assoc. 63:
DOANE, D. P. 1976. Aesthetic DODGE, Y. 1996. A natural
frequency
random
classifications.
number
660-676. DONNER, A. and G. WELLS. 1986. A comparison correlation coefficient. Biometrics 42: 401-412.
of confidence
DRAPER,N. R. and W. G. HUNTER. 1969. Transformations: 11: 23-40.
interval
methods
Some examples
for the interclass
revisited.
Technometrics
DRAPER,N. R. and H. SMITH.1998. Applied Regression Analysis. 3rd ed. John Wiley, New York. 706 pp. DUNCAN, D. B. 1951. A significance test for difference variance. Virginia J. Sci. 2: 171-189.
between
ranked
treatments
in an analysis of
F tests. Biometrics 11: 1-42. DUNN, O. J. 1964. Multiple comparisons using rank sums. Technometrics 6: 241-252. DUNN, O. J. and V. A. CLARK. 1987. Applied Sciences: Analysis of Variance and Regression. 2nd ed.
DUNCAN, D. B. 1955. Multiple
range and multiple
John Wiley, New York. 445 pp. DUNNETT, C. W. 1955. A multiple comparison procedure control. J. Amer. Statist. Assoc. SO: 1096-1121. DUNNETT, C. W. 1964. New tables for multiple DUNNETT, C. W. 1970. Multiple
comparison
comparisons
for comparing
several treatments
with a
with a control. Biometrics 20: 482-491.
tests. Biometrics 26: 139-141.
894
Literature Cited DUNNETf, C. W. 1980a. Pairwise multiple comparisons size case. J. A mer. Statist. Assoc. 75: 789-795.
in the homogeneous
DUNNETf, C. W. 1980b. Pairwise multiple comparisons Assoc. 75: 796-800.
in the unequal
DUNNE'IT, C. W. 1982. Robust 2611-2629.
multiple
DUPONT, W. D. 1986. Sensitivity of Fisher's tables. Statist. in Med. 5: 629-635.
equal sample
variance case. 1. Amer. Statist.
Communic.
comparisons.
variance,
Staiist==Theor.
exact test to minor perturbations
DURAND, D. and J. A. GREENWOOD. 1958. Modifications of the Rayleigh analysis of two-dimensional orientation data. J. Geol. 66: 229-238.
x
in 2
Meth. 11:
2 contingency
test for uniformity
in
DWASS, M. 1960. Some k-sample rank-order tests, pp.198-202. In I. Olkin, S. G. Ghurye, W. Hoeffding, W. G. Madow, and H. B. Mann. Contributions to Probability and Statistics. Essays in Honor of Harold Hotelling. Stanford University Press, Stanford, CA. DYKE, G. 1995. Obituary:
Frank Yates. J. Roy. Statist. Sac. Ser. A, 158: 333-338.
EASON, G., C. W. COLES, and G. GErI'lNBY. 1980. Mathematics and Statistics for the Bio-Sciences. Ellis Horwood, Chichester, England. 578 pp. EBERHARDT,K. R. and M. A. FLIGNER. 1977. A comparison Amer. Statist. 31: 151-155. EDGINGTON,E. S. 1961. Probability table for number series. 1. Amer. Statist. Assoc. 56: 156-159. Galton,
EELLS, W. C. 1926. A plea for a standard 45-52.
of runs of signs of first differences
EISENHART,C. 1947. The assumptions
definition
underlying
of the standard
EISENHART,C. 1978. Gauss, Carl Friedrich,
J. Educ. Res. 13:
deviation.
of several
methods
of multiple
Biometrics 3: 1-21.
the analysis of variance.
of the uncertainties
EISENHART,C. 1979. On the transition
in ordered
and Fisher. Austral. 1. Statist. 35: 129-140.
EINOT, I. and K. R. GABRIEL. 1975. A survey of the powers comparisons. J. Amer. Statist. Assoc. 70: 574-583. EISENHART,C. 1968. Expression
of proportions.
results really too close? BioI. Rev. 61: 295-312.
EDWARDS, A. W. F. 1986. Are Mendel's EDWARDS, A. W. F. 1993. Mendel,
of two tests for equality
of final results. Science 160: 1201-1204.
pp. 378-386.
from "Student's"
In Kruskal and Tanur (1978). to "Student's" t. A mer. Statist. 33: 6-10.
z
EVERITr, B. S. 1979. A Monte Carlo investigation of the robustness two-sample tests. J. ArneI'. Statist. Assoc. 74: 48-51.
of Hotelling's
EVERITf, B. S. 1992. The Analysis of Contingency Tables. 2nd ed. Chapman pp.
one-
and
& Hall, New York. 164
EZEKIEL, M. 1930. Methods of Correlation Analysis. John Wiley, New York. 427 pp. FAHOOME, G. 2002. Twenty
nonparametric
statistics
and their large sample
J.
approximations.
Modern Appl. Statist. Meth. 1: 248-268. FELDMAN, S. E. and E. KLUGER. 1963. Short Psychometrika 28: 289-291.
cut calculation
of the Fisher-Yates
"exact
FELLINGHAM,S. A. and D. J. STOKER. 1964. An approximation for the exact distribution Wilcoxon test for symmetry. J. Amer. Statist. Assoc. 59: 899-905. FELTZ, C. J. 1998. Generalizations of the delta-corrected Austral. & New Zealand 1. Statist. 40: 407-413. FELTZ, C. J. and G. E. MILLER. 1996. An asymptotic from k populations. Statist. in Med. 15: 647-658. PERON, R. 1978. Poisson, Simeon Denis, pp. 704-707. FIELLER, E.
c., H.
FIELLER, E.
c., H.
Kolmogorov-Smirnov
test for the equality
of the
goodness-of-fit
of coefficients
test.
of variation
In Kruskal and Tanur (1978).
O. HARTLEY,and E. S. PEARSO . 1957. Tests for rank correlation Biometrika 44: 470-481. O. HARTLEY,and E. S. PEARSON.1961. Tests for rank correlation
Biometrika 48: 29-40.
test."
coefficients. coefficients.
I. II.
Literature Cited FIENBERG,S. E. 1970. The analysis of multidimensional FIENBERG, S. E. 1972. The analysis ]77-202.
of incomplete
tables. Ecology 51: 419-433.
contingency multiway
895
contingency
Biometrics
tables.
28:
FIENBERG, S. E. 1980. The Analysis of Cross-Classified Categorical Data. 2nd ed. MIT Press, Cambridge, MA. 198 pp. FISHER, N. I. 1993. Statistical Analysis of Circular Data. Cambridge England. 277 pp. FISHER, N. I. and A. J. LEE. 1982. Nonparametric Biometrika 69: 315-321. FISHER, N. I. and A. J. LEE. 1983. A correlation 327-332.
measures coefficient
University
Press, Cambridge,
of angular-angular for circular
data.
association.
Biometrika 70:
FISHER,N. I., T. LEWIS, and B. J. J. EMBLETON.1987. Statistical Analysis of Spherical Data. Cambridge University Press, Cambridge, England. 329 pp. FISHER,R. A. 1918a. The correlation between Trans. Roy. Soc. Edinburgh 52: 399-433.
relatives on the supposition
error"
FISHER, R. A. 1922. On the interpretation Royal Statist. Soc. 85: 87 -94.
of a coefficient
of correlation
of X2 from contingency
FISHER, R. A. 1922a. The goodness of fit of regression coefficients. J. Roy. Statist. Soc. 85: 597 -612. FISHER, R. A. 1925a. Applications
of "Student's"
inheritance.
Eugenics Rev. 10: 213-220.
FISHER, R. A. 1918b. The causes of human variability. FISHER, R. A. 1921. On the "probable sample. Metron 1: 3-32.
of Mendelian
deduced
from a small
tables and the calculation
formulas, and the distribution
of P. J.
of regression
Metron 5: 90-104.
distribution.
FISHER, R. A. 1925b. Statistical Methods for Research Workers. [1st ed.] Oliver and Boyd, Edinburgh, Scotland. 239 pp. + 6 tables. FISHER, R. A. 1926. The arrangement Sahai and Ageel, 2000: 488.]
of field experiments.
J.
Ministry Agric. 33: 503-513.
[Cited in
FISHER, R. A. 1928. On a distribution yielding the error functions of several well known statistics, pp. 805-813. In: Proc. Intern. Math. Congr., Toronto, Aug. 11-16, 1924, Vol. II. University of Toronto Press, Toronto. [Cited in Eisenhart, 1979.] FISHER, R. A. 1934. Statistical Methods for Research Workers. 5th ed. Oliver and Boyd, Edinburgh, Scotland. FISHER, R. A. 1935. The logic of inductive FISHER, R. A. 1936. Has Mendel's FISHER, R. A. 1939a. "Student."
inference.
J.
Roy. Statist. Soc. Ser. A, 98: 39-54. Ann. Sci. 1: 115-137.
work been rediscovered?
Ann. Eugen. 9: 1-9.
FISHER, R. A. 1939b. The comparison 174-180.
of samples
with possibly unequal
variances.
FISHER, R. A. 1958. Statistical Methods for Research Workers. 13th ed. Hafner,
Ann. Eugen. 9:
New York. 356 pp.
FISHER, R. A. and F. YATES. 1963. Statistical Tables for Biological, Agricultural, and Medical Research. 6th ed. Hafner, New York. 146 pp. FISZ, M. 1963. Probability Theory and Mathematics Statistics. 3rd ed. John Wiley, New York. 677 pp. FIX, E. and J. L. HODGES, JR. 1955. Significance 26: 301-312.
probabilities
of the Wilcoxon
test. Ann. Math. Statist.
FLEISS, J. L., E. LEVIN, and M. C. PAIK. 2003. Statistical Methods [or Rates and Proportions. John Wiley, Hoboken, NJ. 760 pp. FLEISS, J. L., A. TYTUN, and H. K. URY. 1980. A simple approximation for calculating for comparing independent proportions. 3rd ed. Biometrics 36: 343-346. FLIGNER, M. A. and G. E. POLICELLO II. 1981. Robust problem. J. Amer. Statist. Assoc. 76: 162-168.
rank procedures
sample sizes
for the Behrens-Fisher
896
Literature Cited FaNG, D. Y. T., C. W. KWAN, K. F. LAM, and K. S. L. LAM. 2003. Use of the sign test for the meadian in the presence of ties. Amer. Statist. 57: 237 -240. Correction: 2005, Amer. Statist. 59: 119. FRANKLIN, L. A. 1988a. The complete exact distribution Statist. Computa. Simula. 29: 255-269.
rho for n = 12( 1)18. J.
of Spearman's
FRANKLIN,L. A. 1988b. A note on approximations and convergence in distribution rank correlation coefficient. Communic. Statist.- Theor. Meth. 17: 55-59.
for Spearman's
FRANKLIN,L. A. 1989. A note on the Edgeworth approximation to the distribution of Spearman's rho with a correction to Pearson's approximation. Communic. Statist==Simula. Computa. 18: 245-252. FREEMAN.M. F. and J. W. TUKEY. 1950. Transformations Ann. Marh. Statist. 21: 607-611. FREU D, R. J. 1971. Some observations 29-30.
related to the angular and the square root.
on regressions
with grouped
FRIEDMAN,M. 1937. The use of ranks to avoid the assumption of variance. J. Amer. Statist. Assoc. 32: 675-701. FRIEDMAN,M. 1940. A comparison Ann. Math. Statist. 11: 86-92.
of alternate
FRIENDLY,M. 1994. Mosaic displays 190-200. FRIENDLY.M. 1995. Conceptual
of normality
tests of significance
for multi way contingency
confidence
conditional,
and partial views of categorical
GAITO, J. 1980. Measurement Bull. 87: 564-567.
and statistics.
Graph. Statist. 11: 89-107. of the boxplot. Amer. Statist.
limits. Biometrika 67: 677 -681.
GABRIEL, K. R. and P. A. LACHENBRUCH. 1969. Nonparametric A Monte Carlo study of the adequacy of the asymptotic 593-596. GAITO, J. 1960. Scale classification
Statist. Assoc. 89:
data. Amer. Statist. 49: 153-160.
FRIENDLY,M. 2002. A brief history of the mosaic display . .I. Comput.
binomial
of m rankings.
for the problem
FRIGGE,M., D. C. HOAGLIN,and B. IGLEWICZ.1989. Some implications 43: 50-54. FUJI NO, Y. 1980. Approximate
implicit in the analysis
tables. 1. Amer.
and visual models for categorical
FRIENDLY,M. 1999. Extendingmosaic displays: Marginal, data. 1. Comput. Graph. Statistics 8: 373-395.
data. Amer. Statist. 25(3):
ANOVA in small samples: approximation. Biometrics 25:
Psychol. Bull. 67: 277-278.
scales and statistics:
Resurgence
of an old misconception.
GAMES, P. A. and J. F. HOWELL. 1976. Pairwise multiple comparison procedures and/or variances: A Monte Carlo study. J. Educ. Statist. 1: 113-125. GAMES, P. A., H. J. KESELMAN, and J. C. ple comparison procedures for means when 594-598.
with unequal
test on variances.
n's
ROGAN. 1981. Simultaneous pairwise multisample sizes are unequal. Psychol. Bull. 90:
GAMES, P. A. and P. A. LUCAS. 1966. Power of the analysis of variance of independent nonnormal and normally transformed data. Educ. Psychol. Meas. 26: 311-327. GANS, D. J. 1991. Preliminary
Psychol.
groups on
Amer. Statist. 45: 258.
GARSIDE, G. R. and C. MACK. 1976. Actual type 1 error probabilities for various homogeneity case of the 2 x 2 contingency table. Amer. Statist. 30: 18-21.
tests in the
GARSON, G. D. 2006. Logistic Regression. www2.chass.ncsu.edu/garson/PA765/logistic.htm. GART, J. J. 1969a. An exact test for comparing 56: 75-80. GARr, J. J. 1969b. Graphically 37th Session, pp. 119-121.
oriented
GARTSIDE,P. S. 1972. A study of methods 67: 342-346.
matched
proportions
tests of the Poisson for comparing
GEARY, R. C. and C. E. V. LESER. 1968. Significance 20-21.
in crossover
distribution.
several variances.
designs. Biometrika
Bull. Intern. Statist. Inst., J. Amer. Statist. Assoc.
tests in multiple regression.
Amer. Statist. 22(1):
Literature Cited GEIRINGER,H. 1978. Von Mises, Richard,
pp. 1229-1231.
GEORGE, E. O. 1987. An approximation Left. 5: 169-173.
897
In Kruskal and Tanur (1978).
of F distribution
by binomial
probabilities.
Statist. Prob.
GHENT, A. W. 1972. A method for exact testing of 2 X 2, 2 X 3, 3 X 3, and other contingency tables, employing binomial coefficients. Amer. Midland Natur. 88: 15-27. GHENT, A. W. 1991. Insights into diversity and niche breadth analysis from exact small-sample of the equal abundance hypothesis. Amer. Midland Natur. 126: 213-255. GHENT, A. W. 1993. An exact test and normal approximation for centrifugal and centripetal in line and belt transects in ecological studies. Amer. Midland Natur. 130: 338-355.
tests
patterns
GHENT, A. W. and J. H. ZAR. 1992. Runs of two kinds of elements on a circle: A redevelopment, with corrections, from the perspective of biological research. Amer. Midland Natur. 128: 377 -396. GIBBONS, J. D. and S. CHAKRABORTI.2003. Nonparumetric Dekker, New York. 645 pp. GILL, J. L. 1971. Analysis of data with heterogeneous GIRDEN, E. R. 1992. ANOVA:
Repeated
Statistical
Inference.
4th ed. Marcel
data: A review . .I. Dairy Sci. 54: 369-373.
Measures. Sage Publications,
GLANTZ, S. A. and B. K. SLINKER.2001. Primer of Applied ed. McGraw-Hili, New York. 949 pp.
Regression
GLASS, G. V. and K. D. HOPKINS. 1996. Statistical Methods Bacon, Boston. 674 pp.
Newbury
Park, CA, 77 pp.
and Analysis
of Variance. 2nd
and Psyhology. Allyn and
in Education
GLASS, G. V., P. D. PECKHAM, and J. R. SANDERS. 1972. Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance. Rev. Educ. Res. 42: 239-288. J. Amer. Statist. Assoc. 64: 316-323.
GLEJSER, H. 1969. A new test for heteroscedasticity.
GOODMAN, L. A. 1970. The multivariate analysis of qualitative c1assifications . .I. Amer. Statist. Assoc. 65: 226-256. GORMAN,J. W. and R. J. TOMAN. 1966. Selection metrics 8: 27-51.
of variables
GREENWOOD,J. A. and D. DURAND. 1955. The distribution n random unit vectors. Ann. Math. Statist. 26: 233-246. GRIZZLE, J. E. 1967. Continuity
correction
GREEN, S. B. 1991. How many subjects Res. 26: 499-510.
data: Interactions
among multiple
for fitting equations
to data. Techno-
of length and components
in the x2-test for 2 X 2 tables. A mer. Statist. 21(4): 28-32. does it take to do a regression
GROENEVELD, R. A. and G. MEEDEN. 1984. Measuring 391-399.
skewness
GUENTHER,W. C. 1964. Analysis
of Variance. Prentice
GULLBERG,J. 1997. Mathematics:
From the Birth of Numbers.
analysis?
and
Hall, Englewood
approximation
HABER, M. 1980. A comparison of some continuity tables . .1. A mer. Statist. Assoc. 75: 510-515.
corrections
correction
and statistical
HABER, M. 1987. A comparison contingency tables. Communic.
Statistician
33:
New York. 1093 pp. distribution:
for unbiased
estimation
for the chi-squared
Theory of the
test on 2 X 2
testing. Intern. Statist. Rev. 50: 135-144.
HABER, M. 1984. A comparison of tests for the hypothesis of no three-factors contingency tables. 1. Statist. Comput. Simula 20: 205-215. HABER, M. 1986. An exact unconditional 129-132.
Behav.
Cliffs, NJ. 199 pp.
W. W. Norton,
GURLAND, J. and R. C. TRIPATHI. 1971. A simple standard deviation. Amer. Statist. 25(4): 30-32.
Multivar.
kurtosis.
GUMBEL, E. J., J. A. GREENWOOD,and D. DURAND. 1953. The circular normal and tables. 1. Amer. Statist. Assoc. 48: 131-152.
HABER, M. 1982. The continuity
of the sum of
test for the 2 X 2 comparative
of some conditional and unconditional Statist. -Simllla. Computa. 16: 999-1013.
HABER, M. 1990. Comments on "The test of homogeneity review of and some personal opinions of the controversy" 146-149.
interaction
2 X 2 X 2
trial. Psychol. Bull. 99: exact
tests for 2 X 2
for 2 X 2 contingency tables: A by G. Camilli. Psychol. Bull. 108:
898
Literature Cited HAFNER, K. 2001. A new way of verifying old and familiar sayings. New York Times, Feb. 1, 200l. Section G, p. 8. HAHN, G. J.1972. Simultaneous prediction intervals to contain the standard deviations future samples from a normal distribution. 1. Amer. Statist. Assoc. 67: 938-942.
or ranges of
HAHN, G. J. 1977. A prediction interval on the difference between two future sample means and its application to a claim of product superiority. Technometrics 19: 131-134. HAHN, G. J. and W. Q. MEEKER. 1991. Statistical Intervals: A Guide for Practitioners. John Wiley, New York. 392 pp. HAIGHT, F. A. 1967. Handhook of the Poisson Distrihution. John Wiley, New York. 168 pp. HAIR, J. F., JR., W. C. BLACK, B. J. BABIN, R. E. ANDERSON, and R. L. TATHAM.2006. Multivariate Data Analysis. 6th ed. Prentice Hall, Upper Saddle River, NJ. 899 pp. L. In
HALBERG, F. and l-K. LEE. 1974. Glossary of selected chronobiologic terms, pp.xXXVIlL. E. Scheving, F. Halberg, and J. E. Pauly (eds.), Chronology, Igaku Shoin, Tokyo. HALO, T. N. 1981. Thiele's
contributions
HALO, A. 1984. Nicolas Bernoulli's
to Statistics. Internat. Statist. Rev. 4: 1-20.
Theorem.
HAMAKER,H. C. 1978. Approximating Statist. 27: 76-77.
Internat. Statist. Rev. 52: 93-99.
the cumulative
normal
distribution
and its inverse.
Appl.
R2 > r;X\ + r;xz. Amer. Statist. 41: 129-132. [See also Hamilton's to other writers on this topic: Amer. Statist. 42: 90-91.] HAND, D. J. and C. C. TAYLOR. 1987. Multivariate Analysis of Variance and Repealed Measures. HAMILTON,D. 1987. Sometimes (1988) reply to and reference Chapman
and Hall, London.
262 pp.
HARDLE, W. 1990. Applied Nonparametric Regression. Cambridge England. 333 pp.
University
HARDY, M. A. 1993. Regression with Dummy Variables. Sage Publications, pp.
Press, Cambridge,
Newbury
Park, CA. 90
HARLOW, L. L., S. A. MULAIK, and J. H. STEIGER(eds.). 1997. What If There Were No Significance Tests? Lawrence Erlbaum Associates, Mahwah, NJ. 446 pp. HARRIS, J. A. 1910. The arithmetic of the product correlation. Amer. Natur. 44: 693-699.
moment
HARRISON,D. and G. K. KANJI. 1988. The development Appl. Statist. 15: 197 -223.
method
of calculating
of analysis of variance
HARRISON,D., G. K. KANJI, and R. J. GADSDEN. 1986. Analysis of variance Statist. 13: 123-138.
the coefficient of for circular data. 1.
for circular data. 1. Appl.
HARTER,H. L. 1957. Error rates and sample sizes for range tests in multiple comparisons. 13: 511-536. HARTER,H. L. 1960. Tables of range and studentized
Biometrics
range. Ann. Math. Statist. 31: 1122-1147.
HARTER,H. L.1970. Order Statistics and Their Use in Testing and Estimation, Vol. 1. Tests Based on Range and Studentized Range of Samples from a Normal Population. U.S. Government Printing Office, Washington, DC. 761 pp. HARTER, H. L., H. J. KHAMIS, and R. E. LAMB. 1984. Modified Kolmogorov-Smirnov goodness of fit. Communic. Statist. =Simuia. Computa. 13: 293-323.
tests for
tables, pp. 268-273. In W. F. Eddy (ed.), Computer Science and Statistics: Proceedings of the 13th Symposium on the Interface.
HARTIGAN,J. A. and B. KLEINER. 1981. Mosaics for contingency Springer-Verlag,
New York.
HARTIGAN,J. A. and B. KLEINER. 1984. A mosaic of television
ratings. Amer. Statist. 38: 32-35.
HARWELL, M. R., E. N. RUBINSTEIN,W. S. HAYES, and C. C. OLDS. 1992. Summarizing Monte Carlo results in methodological research: The one- and two-factor fixed effects ANOV A cases. 1. Educ. Statist. 17: 315-339. HASTINGS,C., JR. 1955. Approximations for Digital Computers. Princeton ton, NJ. 201 pp.
University
Press, Prince-
Literature Cited HAUCK, W. W. and S. ANDERSON. 1986. A comparison of large-sample confidence for the difference of two binomial populations. Amer. Statist. 20: 318-322. HAUCK, W. W., JR. and A. Do ER. 1977. Wald's Amer. Statist. Soc. 72: 851-853. HAVILAND,M. B. 1990. Yate's correction for Statist. Med. 9: 363-367. Also: Comment, 9: 371-372; G. Barnard, ibid. 9: 373-375; 9: 379-382; Rejoinder, by M. B. Haviland,
test as applied
899
interval methods in logit analysis. J.
to hypotheses
continuity and the analysis of2 x 2 contingency tables. by N. Mantel, ibid. 9: 369-370; S. W. Greenhouse, ibid. R. B. 0' Agostino, ibid. 9: 377 -378; J. E. Overall, ibid. ibid. 9: 383.
HAVLICEK,L. L. and N. L. PETERSON. 1974. Robustness of the t test: A guide for researchers effect of violations of assumptions. Psychol. Reports 34: 1095-1114. HAWKINS, D. M. 1980. A note on fitting a regression 233. HAYS, W. L. 1994. Statistics, 5th ed. Harcourt
term. Amer. Statist. 34(4):
without an intercept
Brace College Publishers,
Fort Worth, TX. 1112 pp.
HAYTER, A. J. 1984. A proof of the conjecture that the Tukey-Krarner procedure is conservative. Ann. Statist. 12: 61-75. HAYTER,A. J. 1986. The maximum familywise 1. Amer. Statist. Assoc. 81: 1000-1004.
error rate of Fisher's
HEALY, M. J. R. 1984. The use of R2 as a measure 608-609.
of goodness
HENDERSON, D. A. and D. R. DENISON. 1989. Stepwise research. Psycho!. Reports 64: 251-257.
on
multiple
least significant
comparisons difference
test.
of fit. J. Roy. Statist. Soc. Ser. A, 147:
regression
in social and psychological
HETrMANSPERGER,T. P. and J. W. McKEAN. 1998. Robust Nonparametric Statistical Methods. John Wiley, New York. 467 pp. HEYDE, C. C. and E. SENETA.(eds.). 2001. Statisticians of the Centuries. Springer-Verlag, 500 pp. HICKS, C. R. 1982. Fundamental Concepts in Design of Experiments. 3rd ed. Holt, Winston, New York. 425 pp. HINES, W. G. S. 1996. Pragmatics
&
Rinehart,
of pooling in ANOV A tables. Amer. Statist. 50: 127 -139.
HODGES, J. L., JR. and E. L. LEHMANN. 1956. The efficiency of some nonparametric the t-test. Ann. Math. Statist. 27: 324-335. HODGES, J. L., JR. 1955. A bivariate
New York.
competitors
of
sign test. Ann. Math. Statist. 26: 523-527.
HODGES, J. L.JR., P. H. RAMSEY,and S. WECHSLER.1990. Improved Wilcoxon test. 1. Educ. Statist. 15: 249-265.
significance
HOENIG, J. M. and D. M. HEISEY:· 2001. The abuse of power: calculations for data analysis. Amer. Statist. 55: 19-24.
The pervasive
HOLLANDER, M. and J. SETHURAMAN.1978. Testing Biometrika 65: 403-411.
for agreement
between
probabilities
of the
fallacy of power
two groups
of judges.
HOLLANDER, M. and D. A. WOLFE. 1999. Nonparametric Statistical Methods. 3rd ed. John Wiley, New York. 787 pp. HORNICK, C. W. and J. E. OVERALL. 1980. Evaluation contingency tables. 1. Educ. Statist. 5: 351-362.
of three
sample-size
formulae
for 2
x
2
HOSMANE, B. 1986. Improved likelihood ratio tests for the hypothesis of no three-factor interaction in two dimensional contingency tables. Communic. Statist. - Theor. Meth. 15: 1875-1888. HOSMANE, B. 1987. An empirical investigation of chi-square tests for the hypothesis of no three-factor interaction in i x J x K contingency tables. J. Statist. Computa. Simula. 28: 167-178. HOSMER,D. W., JR. and S. LEMESHOW.2000. Applied Logistic Regression. 2nd ed. John Wiley, New York. 375 pp. HOTELLlNG,H. 1931. The generalization
of Student's
ratio. Ann. Math. Statist. 2: 360-378.
900
Literature Cited HOTELLlNG,H. 1951. A generalized T test and measure of generalized disperion, pp. 23-41. In Neyman, J. (ed.), Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, Berkeley. HOTELLlNG, H. 1953. New light on the correlation coefficient and its transform. J. Roy. Statist. Soc. B15: 193-232. HOTELLlNG,H. and M. R. PABST.1936. Rank correlation and tests of significance involving no assumption of normality. Ann. Math. Statist. 7: 29-43. HOTELLlNG, H. and L. M. SOLOMONS. 1932. The limits of a measure of skewness. Ann. Math. Statist. 3: 141-142. HOWELL,D. C. 2007. Statistical Methods for Psychology. 6th ed. Thomson Wadsworth, Belmont, CA. 739 pp. Hsu, J. C. 1996. Multiple Comparisons: Theory Methods. Chapman and Hall, London. 277 pp. Hsu, P. L. 1938. Contributions to the theory of "Student's" t-test as applied to the problem of two samples. Statist. Res. Mem. 2: 1-24. [Cited in Ramsey, 1980.] HUBBARD,R. and M. L. BAYARRI. 2003. Confusion over measures of evidence (P's) versus errors (a's) in classical statistical testing. Amer. Statist. 57: 171-178. Also: Discussion, by K. N. Berk, ibid., 57: 178-179; M. A. Carlton, ibid., 5: 179-181; Rejoinder, by R. Hubbard and M. L. Bayarri, ibid., 181-182. HUBER,P. 1964. Robust estimation of a location parameter. Ann. Math. Statist. 35: 73-101. HUBER,P. 2004. Robust Statistics. John Wiley, New York. 308 pp. HUBERTY,C. J. and S. A. MOURAD.1980. Estimation in multiple correlation/prediction. Educ. Psychol. Meas. 40: 101-112. HUCK,S. W. and B. H. LAYNE.1974. Checking for proportional n's in factorial ANOV A's. Educ. Psychol. Meas. 34: 281-287. HUFF,D. 1954. How to Lie with Statistics. W. W. Norton, New York. 142 pp. HUITEMA,B. E. 1974. Three multiple comparison procedures for contrasts among correlation coefficients. Proc. Soc. Statist. Sect., Amer. Statist. Assoc., 1974, pp. 336-339. HUITEMA,B. E. 1980. The Analysis of Covariance and Alternatives. John Wiley, New York. 445 pp. HUMMEL,T. J. and J. R. SLIGO.1971. Empirical comparisons of univariate and multivariate analysis of variance procedures. Psychol. Bull. 76: 49-57. HURLBERT,S. H. 1984. Psudoreplication and the design of ecological field experiments. Ecol. Monogr. 54: 187-211. (Designated a "Citation Classic," 1993. Current Contents 24(12): 18.) HURLBERT, S. H. 1990. Spatial distribution of the montane unicorn. Oikos 58: 257-271. HUTCHESON, K. 1970. A test for comparing diversities based on the Shannon formula. J. Theoret. BioI. 29: 151-154. HUTCHINSON, T. P. 1979. The validity of the chi-squared test when expected frequencies are small: A list of recent research references. Communic. Statist. - Theor. Meth. A8: 327-335. HUTSON,A. D. 1999. Calculating non parametric confidence intervals using fractional order statistics. J. Appl. Statist. 26: 343-353. HUYNH,H. and FELDT, L. S. 1970. Conditions under which mean square ratios in repeated measurements designs have exact F-distributions. J. Amer. Statist. Assoc. 65: 1582-1589. IMAN,R. L. 1974a. Use of a t-statistic as an approximation to the exact distribution of the Wilcoxon signed ranks test statistic. Communic. Statist.- Theor. Meth. A3: 795-806. IMAN,R. L. 1974b. A power study of a rank transform for the two-way classification model when interaction may be present. Can. J. Statist. 2: 227-229. IMAN, R. L. 1987. Tables of the exact quantiles of the top-down correlation coefficient for n = 3(1 )14. Communic. Statist==Theor. Meth. 16: 1513-1540. IMAN,R. L. and W. J. CONOVER. 
1978. Approximation of the critical region for Spearman's rho with and without ties present. Communic. Statist==Simula. Computa. 7: 269-282. IMAN,R. L. and W. J. CONOVER. 1979. The use of the rank transform in regression. Technometrics 21: 499-509.
Literature
Technical IMAN, R. L. and J. M. CONOVER. 1985. A measure of top-down correlation. SAND85-0601, Sandia National Laboratories, Albuquerque, New Mexico. 44 pp. IMAN, R. L. and W. J. CONOVER. 1987. A measure of top-down 351-357. Correction: Technometrics 31: 133 (1989).
correlation.
Report
Technometrics 29:
IMAN, R. L. and J. M. DAVENPORT. 1976. New approximations to the exact distribution Kruskal- Wallis test statistic. Communic. Statist. - Theor. Meth. A5: 1335-1348. IMAN, R. L. and J. M. DAVENPORT.1980. Approximations of the critical statistic. Cummunic. Statist. - Theor. Meth. A9: 571-595.
901
Cited
region
of the
of the Friedman
IMAN, R. L., S. C. HORA, and W. 1. CONOVER.1984. Comparison of asymptotically distribution-free procedures for the analysis of complete blocks. 1. A mer. Statist. Assoc. 79: 674-685. IMAN, R. L., D. QUADE, and D. A. ALEXANDER.1975. Exact probability levels for the Kruskal-Wallis test, pp. 329-384. In H. L. Harter and D. B. Owen, Selected Tables in Mathematical Statistics, Volume HI. American Mathematical Society, Providence, R1. INTERNATIONALBUSINESS MACHINES CORPORATION.1968. System/360 Scientific Subroutine Package (360A-CM-03X) Version III. Programmer's Manual. 4th ed. White Plains, NY. 454 pp. IRWIN,J. O. 1935. Tests of significance Metron 12: 83-94.
for differences
between
percentages
based on small numbers.
In Kruskal and Tanur (1978). measure for nominal data. Amer. Statist. 21(5):
IRWIN, J. O. 1978. Gosset, William Sealy, pp. 409-413. IVES, K. H. and J. D. GIBBONS. 1967. A correlation 16-17.
JACCARD,J., M. A. BECKER,and G. WOOD. 1984. Pairwise multiple comparison Psycho!. Bull. 96: 589-596. JACQUES, J. A. and M. NORUSIS. 1973. Sampling requirements heteroscedastic linear regression. Biometrics 29: 771-780.
procedures:
on the estimation
of parameters
JAMMALAMADAKA,S. R. and A. SENGUPTA. 2001. Topics in Circular Statistics. World Publishing, Singapore. 322 pp.
J EYARATNAM,S. 1992. Confidence
intervals
for the correlation
coefficient.
A review. in
Scientific
Statist. Prob. Leu. 15:
389-393. JOHNSON, M. E. and V. W. LOWE, JR. 1979. Bounds Technometrics 21: 377 -378.
on the sample
JOHNSON, B. R. and D. J. LEEMING. 1990. A study of the digits of numbers. Sankhya: Indian J. Statist. 52B: 183-189.
tt ,
skewness
and kurtosis.
e, and certain other irrational
JOHNSON,R. A. and D. W. WICHERN.2002. Applied Multivariate Statistical Analysis. 5th ed. Prentice Hall, Upper Saddle River, NJ. 767 pp. JOLLIFFE, I. T. 1981. Runs test for detecting 137-14l.
dependence
Jur>, P. E. and K. V. MARDIA.1980. A general correlation regression problems. Biometrika 67: 163-173. KEMP, A. W. 1989. A note on Stirling's expansion KEMPTHORNE,O. 1979. In dispraise 199-213. KENDALL, M. G. 1938. A new measure
between coefficient
Statistician 30:
two variances. for directional
data and related
for factorial n. Statist. Prob. Leu. 7: 21-22.
of the exact
test:
of rank correlation.
Reactions.
J.
Statist. Plan. Inference 3:
Biometrika 30: 81-93.
KENDALL, M. G. 1943. The Advanced Theory of Statistics. Vol. 1. Charles Griffin, London, [Cited in Eisenhart, 1979.] KENDALL, M. G. 1945. The treatment
of ties in ranking
problems.
England.
Biometrika 33: 239-251.
KENDALL, M. G. 1962. Rank Correlation Methods. 3rd ed. Charles pp.
Griffin, London,
England.
199
KENDALL, M. G. 1970. Rank Correlation Methods. 4th ed. Charles pp.
Griffin, London,
England.
202
KENDALL,M. G. and B. BABINGTONSMITH. 1939. The problem 275-287.
of m rankings. Ann. Math. Statist. 10:
902
Literature Cited KENDALL,M. and J. D. GIBBONS. 1990. Rank Correlation England. 260 pp. KENDALL,M. G. and A. STUART.1966. The Advanced 552 pp.
KENNEDY, J. J. 1992. Analyzing Qualitative ed. Praeger, New York. 299 pp.
5th ed. Edward
Arnold, London,
Theory of Statistics, Vol. 3. Hafner, New York.
KENDALL, M. G. and A. STUART.1979. The Advanced London, England. 748 pp. KENNARD, R. W. 1971. A note on the Cp statistics.
Methods.
Theory of Statistics, Vol. 2. 4th ed. Griffin,
Technometrics
Data. Log-linear
13: 899-900.
Analysis for Behavioral
KEPNER, J. L. and D. H. ROBINSON.1988. Nonparametric methods for detecting repeated-measures designs . .T. Amer. Statist. Assoc. 83: 456-461.
Research. 2nd
treatment
effects in
KESELMAN,H. J., P. A. GAMES, and J. C. ROGAN. 1979. An addendum to "A comparison of the modified-Tukey and Scheffe methods of multiple comparisons for pairwise contrasts." .T. Amer. Statist. Assoc. 74: 626-627. KESELMAN,H. J., R. MURRAY,and J. C. ROGAN. 1976. Effect of very unequal multiple comparison test. Educ. Psychol. Meas. 36: 263-270. KESELMAN,H. J. and J. C. ROGAN. 1977. The Tukey multiple Bull. 84: 1050-1056.
comparison
group sizes on Tukey's
test: 1953-1976.
Psycho!.
KESELMAN,H. J. and J. C. ROGAN. 1978. A comparison of the modified-Tukey and Scheffe methods of multiple comparisons for pairwise contrasts . .T. Amer. Statist. Assoc. 73: 47 -52. KESELMAN,H. J., J. C. ROGAN, and B. J. FEIR-WALSH. 1977. An evaluation of some non-parametric and parametric tests for location equality. Brit . .T. Math. Statist. Psycho I. 30: 213-221. KESELMAN, H. J. and L. E. TOOTHAKER.1974. Comparisons of Tukey's T-method and Scheffc's S-method for various numbers of all possible differences of averages contrasts under violation of assumptions. Educ. Psycho!. Meas. 34: 511-519. KESELMAN,H. J., L. E. TOOTHAKER,and M. SHOOTER.1975. An evaluation of two unequal of the Tukey multiple comparison statistic . .T. Amer. Statist. Assoc. 70: 584-587.
nk forms
KESELMAN,H. J., R. WILCOX, J. TAYLOR,and R. K. KOWALCHUK.2000. Tests for mean equality that do not reuire honogeneity of variances: Do they really work? Communic. Statist. -Simula. 29: 875-895. KEULS, M. 1952. The use of the "studentized Euphytica 1: 112-122.
range"
in connection
KHAMIS, H. J. 1990. The a-corrected Infer. 24: 317-335.
Kolmogorov-Smirnov
KHAMIS, H. J. 1993. A comparative Statist. 20: 401-421.
study of the a-corrected
KHAMIS, H. J. 2000. The two-stage 439-450.
a-corrected
test for goodness
boundedness
KIRBY,W. 1981. [Letter to the editor.] KIRK, R. E. 1995. Experimental Pacific Grove, CA. 921 pp.
of sample statistics. Technometrics
Statist. 27:
root transformation
revisited .
problem:
A review . .T. Educ. Behav.
Water Resources
Sciences. 3rd ed, Brooks/Cole,
KLEINBAUM,D. G. and M. KLEIN. 2002. Logistic Regression: A Self-Learning Verlag, New York. 513 pp. KOEHLER, K. J. 1986. Goodness-of-fit Amer. Statist. Assoc. 81: 483-492.
Res. 10: 220-222.
23: 215-216.
Design: Procedures for the Behavioral
KNOKE, D. and P. J. BURKE. 1980. Log-Linear
test. .T. App!.
test. .T. Appl.
Kolmogorov-Smirnov
KIM, S.-H. and A. S. COHEN. 1998. On the Behrens-Fisher Statist. 23: 356-377.
of fit. .T. Statist. Plan.
Kolmogorov-Smirnov
KIHLBERG, J. K., J. H. HERSON, and W. E. SCHUTZ. 1972. Square .T. Royal Statist. Soc. Ser. C. Appl. Statist. 21: 76-81.
KIRBY,W. 1974. Algebraic
with an analysis of variance.
Models. Sage Publications,
tests for log-linear
models
Text. 2nd cd, SpringerBeverly
in sparse
Hills, CA. 80 pp.
contingency
tables. 1.
Literature Cited KOEHLER, K. J. and K. LARNTZ. 1980. An empirical investigation sparse multinomials. J. Amer. Statist. Assoc. 75: 336-344.
of goodness-of-fit
statistics
903 for
KOHR, R. L. and P. A. GAMES. 1974. Robustness of the analysis of variance, the Welch procedure, and a Box procedure to heterogeneous variances. J. Exper. Educ. 43: 61-69. KOLMOGOROV,A. 1933. Sul\a determinazione Instituto Italiano degli Attuari 4: 1-11.
empirica
di una legge di distribuzione.
KORNBROT,D. E. 1990. The rank difference test: A new and meaningful alternative signed ranks test for ordinal data. Brit. 1. Math. Statist. Psycho!. 43: 241-264. KORNER,T. W. 1996. The Pleasures of Counting. 534 pp.
Cambridge
University
KROLL, N. E. A. 1989. Testing 47-79.
KRUSKAL, W. H. 1957. Historical Statist. Assoc. 52: 356-360. KRUSKAL,W. H. 1958. Ordinal
notes
measures
in 2 X 2 contingency
on the Wilcoxon of association.
tables.
unpaired
KRUSKAL, W. H. and W. A. WALLIS. 1952. Use of ranks 1. Amer. Statist. Assoc. 47: 583-621.
in one-criterion
KRUTCHKOFF,R. G. 1988. One-way fixed effects analysis of variance be unequal. 1. Statist. Computa. Simula. 30: 259-271.
Statist. 14:
test. J. A mer.
of Statistics. Free Press, analysis
KUTNER, M. H., C. J. NACHTSHEIM,and J. NETER. 2004. Applied McGraw-Hall/irwin, New York. 701 pp.
of variance.
when the error variances
points on a circle. Ned. Akad.
KULKARNI, P. M. and A. K. SHAH. 1995. Testing the equality prespecified standard. Statist. Prob. Leu. 25: 213-219.
KVALSETH,T. O. 1985. Cautionary
of
1. A mer. Statist. Assoc. 53: 814-861. Encyclopedia
random
product-
numbers
1. Educ.
two-sample
KRUSKAL,W. H. and J. M. TANUR (eds.). 1978. International New York. 1350 pp.
KUIPER, N. H. 1960. Tests concerning 63: 38-47.
England.
of the sample
range tests to group means with unequal
independence
dell
to the Wilcoxon
Press, Cambridge,
KOWALSKI,C. J. 1972. On the effects of non-normality on the distribution moment correlation coefficient. 1. Roy. Statist. Soc. C21: 1-12. KRAMER,C. Y. 1956. Extension of multiple replications. Biometrics 12: 307 -310.
Giornalle
may
Wetensch. Proc. Ser.A
of several binomial
proportions
Linear Regression
to a
Models. 4th ed.
note about R2. Amer. Statist. 39: 279-285.
LANCASTER,H. O. 1949. The combination Biometrika 36: 370-382.
of probabilities
LANCASTER,H. O. 1969. The Chi-Squared
Distribution.
LANDRY, L. and Y. LEPAGE. 1992. Empirical Statist.-Simula. 21: 971-999. LAPLACE,P. S. 1812. Theeorie Analytique
arising from data in discrete distributions. John Wiley, New York. 356 pp.
behavior
of some tests for normality.
de Probabilites.
Coursier,
London.
Communic.
[Cited in Agresti, 2002:
] 5.] LARGEY, A. and J. E. SPENCER. 1996. F- and t-tests 'conflicting' outcomes. Statistician 45: ] 02-109. LARNTZ,K. 1978. Small-sample comparisons J. Amer. Statist. Assoc. 73: 253-263.
in multiple
regression:
of exact levels for chi-squared
LAUBSCHER, N. F., F. W. STEFFANS, and E. M. DE Mood's distribution-free statistic for dispersion and its 447-507. LAWAL, H. B. ] 984. Comparisons of the X2, 2 improved C test statistics in small samples of
The possibility
goodness-of-fit
LANGE. 1968. Exact normal approximation.
415-418. LAWLEY,D. N. 1938. A generalization LEADBETTER,M. R.1988.
of Fisher's
Harald Cramer,
z test. Biometrika
1893-1985.
statistics.
critical values Technometrics
y2, Freernan-Tukey one-way multinomials. 30: ]80-187.
Intern. Statist. Rev. 56: 89-97.
of
for 10:
and Williams's Biometrika 71:
904
Literature Cited LEE, A. F. S. and N. S. FINEBERG.1991. A fitted test for the Behrens-Fisher problem. Communic. Statist. - Theor. Meth. 20: 653-666. LEE, A. F. S. and J. GURLAND.1975. Size and power of tests for equality of means of two normal populations with unequal variances. J. Amer. Statist. Assoc. 70: 933-941. LEHMANN, E. L. 1999. "Student" and small-sample theory. Statist. Sci. 14: 418-426. LEHMANN, E. L. and C. REID. 1982. Jerzy Neyman 1894-1981. Amer. Statist. 36: 161-162. LENTH,R A. 2001. Some practical guidelines for effective sample size determination. Amer. Statist. 55: 187-193. LEONHARDT, D. 2000. John Tukey, 85, statistician who coined 2 crucial words. The New York Times, July 28, 2000, Section A, p. 19. LESLIE,P. H. 1955. A simple method of calculating the exact probability in 2 X 2 contingency tables with small marginal totals. Biometrika 42: 522-523. LEVENE,H. 1952. On the power function of tests of randomness based on runs up and down. Ann. Math. Statist. 23: 34-56. LEVENE,H. 1960. Robust tests for equality of variances, pp. 278-292. In Iolkin, S. G. Ghurye, W. Hoeffding, W. G. Madow, and H. B. Mann (eds.), Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press, Stanford, CA. LEVIN,B. and X. CHEN.1999. Tsthe one-half continuity correction used once or twice to derive a wellknown approximate sample size formula to compare two independent binomial distributions? Amer. Statist. 53: 62-66. LEVIN,J. R, R C. SERLIN,and L. WEBNE-BEHRMAN. 1989. Analysis of variance through simple correlation. Amer. Statist. 43: 32-34. LEVY,K. J. 1975a. Some multiple range tests for variances. Educ. Psycho!. Meas. 35: 599-604. LEVY,K. J. 1975b. Comparing variances of several treatments with a control. Educ. Psycho!. Meas. 35: 793-796. LEVY,K. J. 1975c. An empirical comparison of several multiple range tests for variances. 1. Amer. Statist. Assoc. 70: 180-183. LEVY,K. J. 1976. A multiple range procedure for independent correlations. Educ. Psycho!. Meas. 36: 27-31. LEVY,K. J. 1978a. An empirical comparison of the ANaYA F-test with alternatives which are more robust against heterogeneity of variance. J. Statist. Computa. Simula. 8: 49-57. LEVY,K. J. 1978b. An empirical study of the cube root test for homogeneity of variances with respect to the effects of a non-normality and power. J. Statist. Computa. Simula. 7: 71-78. LEVY,K. J. 1978c. A priori contrasts under conditions of variance heterogeneity. J. Exper. Educ. 47: 42-45. LEVY,K. J. 1979. Pairwise comparisons associated with the K independent sample median test. Amer. Statist. 33: 138-139. LEWONTIN, R C. 1966. On the measurement of relative variability. Systematic Zool. 15: 141-142. LEYTON, M. K.1968. Rapid calculation of exact probabilities for 2 X 3 contingency tables. Biometrics 24: 714-717. LI, L. and W. R. SCHUCANY. 1975. Some properties of a test for concordance of two groups of rankings. Biometrika 62: 417-423. LIDDELL,D. 1976. Practical tests of 2 X 2 contingency tables. Statistician 25: 295-304. LIDDELL,D. 1980. Practical tests for comparative trials: a rejoinder to N. L. Johnson. Statistician 29: 205-207. LIGHT,R J. and B. H. MARGOLIN. 1971. Analysis of variance for categorical data. 1. Amer. Statist. Assoc. 66: 534-544. LIN, J.-T. 1988a. Approximating the cumulative chi-square distribution and its inverse. Statistician 37: 3-5. LIN, J.-T. 1988b. Alternatives to Hamaker's approximations to the cumulative normal distribution and its inverse. Statistician 37: 413-414.
Literature Cited LIN, L. J-K. 1989. A concordance 255-268.
correlation
LIN, L. J-K. 1992. Assay validation 599-604.
coefficient
to evaluate
using the concordance
LIN, L. I-K. 2000. A note on the concordance
correlation
reproducibility.
correlation coefficient.
coefficient.
Biometrics
905
Biometrics
45:
Biometrics
48:
56: 324-325.
LIN, L. I-K. and V. CHINCILLI. 1996. Rejoinder to the letter to the editor from Atkinson and Klevill [Comment on the use of concordance correlation to assess the agreement between two variables]. Biometrics 53: 777 -778. LINDSAY, R. B. 1976. John William Strutt, Third Baron Rayleigh, pp. 100-107. In C. C. Gillispie (ed.), Dictionary of Scientific Biography, Vol. XIII. Charles Scribner's Sons, New York. LING, R. F. 1974. Comparison of several J. Amer. Statist. Assoc. 69: 859-866.
algorithms
for computing
LITTLE,R. J. A. and D. B. RUBIN. 2002. Statistical Analysis NJ.381 pp.
sample
means and variances.
with Missing Data. John Wiley, Hoboken,
LIX, L. M., J. C. KESELMAN, and H. J. KESELMAN. 1996. Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance F test. Rev. Educ. Res. 66: 579-619. LLOYD, C. J. 1988. Doubling the one-sided P-value in testing independence a two-sided alternative. Statist. Med. 7: 1297-3006. LLOYD,M., J. H. ZAR, and J. R. KARR. 1968. On the calculation of diversity. Amer. Midland Natur. 79: 257-272.
in 2 X 2 tables against
of information-theoretical
measures
LOCKHART,R. A. and M. A. STEPHE s. 1985. Tests of fit for the van Mises distribution. 72: 647-652.
Biometrika
LUDWIG, J. A. and J. F. REYNOLDS.1988. Statistical Ecology. LUND, U. 1999. Least 723-733.
circular
distance
MAGURRAN, A. E. 2004. Measuring 256 pp. MAAG, U. R. 1966. A k-sample
regression
Biological
analogue
John Wiley, New York. 337 pp.
for directional
Diversity.
of Watson's
Blackwell
Publishing,
U2 statistic. Biometrika
MACKINNON, W. J. 1964. Table for both the sign test and distribution-free the median for sample sizes to 1,000. J. Amer. Statist. Assoc. 59: 935-956. MAlTY, A. and M. SHERMAN.2006. The two-sample 60: 1163-1166. MALLOWS,C. L. 1973. Some comments
1. Appl.
data.
Malden,
confidence
MANTEL, N. 1966. F-ratio probabilities MANTEL, N. 1974. Comment 378-380.
from binomial
procedures
and a suggestion.
selection.
Amer. Statist.
MANTEL, N. and S. W. GREENHOUSE.1968. What is the continuity 27-30. MAOR, E. 1994. e: The Story of a Number.
Princeton
University
correction?
Statist. Assoc. Amer.
69:
Statist. 22(5):
NJ. 223 pp. McGraw-Hill,
post hoc comparisons
MARASCUlLO,L. A. and M. MCSWEENEY.1977. Nonparametric and Distribution-free Social Sciences. Brooks/Cole, Monterey, California. 556 pp. MARCOULIDES,G. A. and S. L. HERSHBERGER.1997. Multivariate Lawrence Earlbaum Associates, Mahwah, NJ. 322 pp.
is
12: 621-625.
Press, Princeton,
Science Research.
MARASCUILO,L. A. and M. MCSWEENEY. 1967. Nonparametric Psycho/. Bull. 67: 401-412.
variables
22: 404-407.
Technometrics
1974.] J. Amer.
[Re Conover,
MARASCUlLO,L. A. 1971. Statistical Methods for Behavioral York. 578 pp.
of
15: 661-675.
tables. Biometrics
in variable
MA.
intervals
MANN, H. B. and D. R. WHITNEY. 1947. On a test of whether one of two random stochastically larger than the other. Ann. Math. Statist. 18: 50-60. MANTEL, N. 1970. Why stepdown
26:
53: 579-583.
T rest with one variance unknown.
on Cp, Technometrics
Statist.
New
for trend.
Methods for the
Statistical Methods: A First Course.
906
Literature Cited MARDlA, K. V. 1967. A non-parametric Statist. Soc. B29: 320-342.
test for the bivariate
two-sample
MARDlA, K. V. 1969. On the null distribution of a non-parametric problem. 1. Royal Statist. Soc. B31: 98-102. MARDIA,K. V. 1970a. Measures 57: 519-530.
of multivariate
MARDlA, K. V. 1970b. A bivariate
MARDlA, K. V. 1975. Statistics 349-393.
of directional
MARDlA, K. V. 1976. Linear-angular 403-405. MARDIA, K. V. 1981. Directional 1523-1543.
Royal
two-sample
Biometrika
Press, New York. 357 pp.
scores test on a circle and its parametric
coefficients
competitor.
J. Royal Statist. Soc. B37:
data. (With discussion.)
correlation
statistics
test for the bivariate
J.
c-sample test. 1. Royal Statist. Soc. 832: 74-87.
MARDIA, K. V. 1972a. Statistics of Directional Data. Academic uniform
problem.
skewness and kurtosis with applications.
non-parametric
MARDIA, K. V. 1972b. A multisample J. Royal Statist. Soc. 834: 102-113.
location
and rhythmometry.
Biometrika 63:
Communic. Statist. - Theor. Meth. A10:
in geosciences.
MARDlA, K. V. 1990. Obituary:
Professor
8. L. Welch. J. Roy. Statist. Sac. Ser. A, 153: 253-254.
MARDIA, K. V. 1998. Obituary:
Geoffrey
Stuart Watston.
Statistician 47: 701-702.
MARDIA, K. V. and P. E. JuPP. 2000. Directional Statistics. John Wiley, New York. 414 pp. MARDIA, K. V. and B. D. SPURR. 1973. Multisample populations. J. Roy. Statist. Soc. 835: 422-436.
tests
for multimodal
MARGOLIN, 8. H. and R. J. LIGHT. 1974. An analysis of variance Small sample comparisons with chi square and other competitors. 69: 755-764. MARKOWSKI,C. A and E. P. MARKOWSKI.1990. Conditions test of variance. Amer. Statist. 44: 322-326.
and axial circular
for categorical J.
for the effectiveness
MARTIN, L., R. LEBLANC, and N. K. TOAN. 1993. Tables for the Friedman 21: 39-43. MARTIN ANDRES, A 1991. A review of classic non-asymptotic methods tions by means of independent samples. Communic. Statist. -Simula. MARTINANDRES, A. and I. HERRANZTEJEDOR. 1995. Is Fisher's Statist. Data Anal. 19: 579-591.
data,
II:
Amer. Statist. Assoc. of a preliminary
rank test. Can. J. Statist.
for comparing two proporComputa. 20: 551-583.
exact test very conservative?
MARTIN ANDRES, A. and I. HERRANZ TEJEDOR. 2000. On the minimum expected validity of the chi-squared test in 2 X 2 tables . .I. Appl. Statist. 27: 807 -820.
quantity
Compo for the
MARTIN ANDRES, A., I. HERRANZ TEJEDOR, and J. D. LUNA DEL CASTILLO. 1992. Optimal correction for continuity in the chi-squared test in 2 X 2 tables (conditioned method). Communic. Statist==Simula. 21: 1077 -1101. MARTIN ANDRES, A, I. HERRANZ TEJEDOR, and A SILVA MATO. 1995. The Wilcoxon, Fisher, X2_, Student and Pearson tests and 2 X 2 tables. Statistician 44: 441-450.
Spearman,
MARTIN ANDRES, A and A. SILVA MAIO. 1994. Choosing the optimal unconditioned comparing two independent proportions. Comput. Statist. & Data Anal. 17: 555-574.
test for
MARTIN ANDRES, A, A SILVAMATO, and I. HERRANZ TEJEDOR. 1992. A critical review of asymptotic methods for comparing two proportions by means of independent samples. Communic. Statist==Simula. Computa. 21: 551-586. MARTINANDRES, A and J. M. TAPIA GARCIA.2004. Optimal multinomial trials. Communic. Statist. 33: 83-97. MASSEY, F. J., JR. 1951. The Kolmogorov-Smirnov 46: 68-78.
unconditional
test for goodness
asymptotic
test in 2 X 2
of fit. 1. Amer. Statist. Assoc.
MAURAIS,J. and R. OUIMET. 1986. Exact critical values of Bartlett's test of homogeneity of variances for unequal sample sizes for two populations and power of the test. Metrika 33: 275-289.
MAXWELL, A. E. 1970. Comparing the classification of subjects by two independent judges. Brit. J. Psychiatry 116: 651-655.
MAXWELL, S. E. 1980. Pairwise multiple comparisons in repeated measures designs. J. Educ. Statist. 5: 269-287.
MAXWELL, S. E. and H. D. DELANEY. 2004. Designing Experiments and Analyzing Data. A Model Comparison Perspective. 2nd ed. Lawrence Erlbaum Associates, Mahwah, NJ. 1079 pp.
MCCORNACK, R. L. 1965. Extended tables of the Wilcoxon matched pair signed rank statistic. J. Amer. Statist. Assoc. 60: 864-871.
MCCULLOUGH, B. D. 1998. Assessing the reliability of statistical software: Part I. Amer. Statist. 52: 358-366.
MCCULLOUGH, B. D. 1999. Assessing the reliability of statistical software: Part II. Amer. Statist. 53: 149-159.
MCCULLOCH, C. E. 1987. Tests for equality of variances with paired data. Communic. Statist.-Theor. Meth. 16: 1377-1391.
MCGILL, R., J. W. TUKEY, and W. A. LARSEN. 1978. Variations of box plots. Amer. Statist. 32: 12-16.
MCKAY, A. T. 1932. Distribution of the coefficient of variation and the extended "t" distribution. J. Roy. Statist. Soc. A95: 695-698.
MCKEAN, J. W. and T. J. VIDMAR. 1994. A comparison of two rank-based methods for the analysis of linear models. Amer. Statist. 48: 220-229.
MCNEMAR, Q. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 12: 153-157.
MEAD, R., T. A. BANCROFT, and C. HAN. 1975. Power of analysis of variance test procedures for incompletely specified mixed models. Ann. Statist. 3: 797-808.
MEDDIS, R. 1984. Statistics Using Ranks: A Unified Approach. Basil Blackwell, Oxford, England. 449 pp.
MEE, W. M., A. K. SHAH, and J. J. LEFANTE. 1987. Comparing k independent sample means with a known standard. J. Qual. Technol. 19: 75-81.
MEEKER, W. Q., JR. and G. J. HAHN. 1980. Prediction intervals for the ratios of normal distribution sample variances and exponential distribution sample means. Technometrics 22: 357-366.
MEHTA, C. R. and J. F. HILTON. 1993. Exact power of conditional and unconditional tests: Going beyond the 2 × 2 contingency table. Amer. Statist. 47: 91-98.
MEHTA, J. S. and R. SRINIVASAN. 1970. On the Behrens-Fisher problem. Biometrika 57: 649-655.
MENARD, S. 2002. Applied Logistic Regression Analysis. 2nd ed. Sage Publications, Thousand Oaks, CA. 111 pp.
MENDEL, G. 1865. Versuche über Pflanzen-Hybriden [Experiments on Plant Hybrids]. Verhandlungen des naturforschenden Vereines in Brünn 4, Abhandlungen, pp. 3-47 (which appeared in 1866, and which has been published in English translation many times beginning in 1901, including as Mendel, 1933, and as pp. 1-48 in Stern and Sherwood, 1966).
MENDEL, G. 1933. Experiments in Plant Hybridization. Harvard University Press, Cambridge, MA. 353 pp.
MEYERS, L. S., G. GAMST, and A. J. GUARINO. 2006. Applied Multivariate Research: Design and Interpretation. Sage Publications, Thousand Oaks, CA. 722 pp.
MICKEY, R. M., O. J. DUNN, and V. A. CLARK. 2004. Applied Statistics: Analysis of Variance and Regression. John Wiley, New York. 448 pp.
MILLER, A. J. 1972. [Letter to the editor.] Technometrics 14: 507.
MILLER, G. E. 1991. Asymptotic test statistics for coefficients of variation. Communic. Statist.-Theor. Meth. 20: 2251-2262.
MILLER, G. E. and C. J. FELTZ. 1997. Asymptotic inference for coefficients of variation. Communic. Statist.-Theor. Meth. 26: 715-726.
MILLER, J. 2001. "Earliest Uses of Symbols for Variables": members.aol.com/jeff570/variables.html
MILLER, J. 2004a. "Earliest Known Use of Some of the Words in Mathematics": members.aol.com/jeff570/mathword.html
MILLER, J. 2004b. "Earliest Uses of Symbols of Operation": members.aol.com/jeff570/operation.html
MILLER, J. 2004c. "Earliest Uses of Symbols in Probability and Statistics": members.aol.com/jeff570/stat.html
MILLER, L. H. 1956. Table of percentage points of Kolmogorov statistics. J. Amer. Statist. Assoc. 51: 111-121.
MILLER, R. G., JR. 1981. Simultaneous Statistical Inference. 2nd ed. McGraw-Hill, New York. 299 pp.
MILTON, R. C. 1964. An extended table of critical values for the Mann-Whitney (Wilcoxon) two-sample statistic. J. Amer. Statist. Assoc. 59: 925-934.
MOGULL, R. G. 1994. The one-sample runs test: A category of exception. J. Educ. Behav. Statist. 19: 296-303.
MOLENAAR, W. 1969a. How to poison Poisson (when approximating binomial tails). Statist. Neerland. 23: 19-40.
MOLENAAR, W. 1969b. Additional remark on "How to poison Poisson (when approximating binomial tails)." Statist. Neerland. 23: 241.
MONTGOMERY, D. C. 2005. Design and Analysis of Experiments. 6th ed. John Wiley, Hoboken, NJ. 643 pp.
MONTGOMERY, D. C., W. A. PECK, and G. G. VINING. 2006. Introduction to Linear Regression Analysis. John Wiley Interscience, New York. 612 pp.
MOOD, A. M. 1950. Introduction to the Theory of Statistics. McGraw-Hill, New York. 433 pp.
MOOD, A. M. 1954. On the asymptotic efficiency of certain non-parametric two-sample tests. Ann. Math. Statist. 25: 514-522.
MOORE, B. R. 1980. A modification of the Rayleigh test for vector data. Biometrika 67: 175-180.
MOORE, D. S. 1986. Tests of chi-squared type, pp. 63-95. In R. B. D'Agostino and M. A. Stephens (eds.), Goodness-of-Fit Techniques. Marcel Dekker, New York.
MOORS, J. J. A. 1986. The meaning of kurtosis: Darlington revisited. Amer. Statist. 40: 283-284.
MOORS, J. J. A. 1988. A quantile alternative for kurtosis. Statistician 37: 25-32.
MOSER, B. K. and G. R. STEVENS. 1992. Homogeneity of variance in the two-sample means test. Amer. Statist. 46: 19-21.
MOSIMANN, J. E. 1968. Elementary Probability for the Biological Sciences. Appleton-Century-Crofts, New York. 255 pp.
MUDDAPUR, M. V. 1988. A simple test for correlation coefficient in a bivariate normal population. Sankhya: Indian J. Statist. Ser. B. 50: 60-68.
MYERS, J. L. and A. D. WELL. 2003. Research Design and Statistical Analysis. 2nd ed. Lawrence Erlbaum Associates, Mahwah, NJ. 760 pp.
NAGASENKER, P. B. 1984. On Bartlett's test for homogeneity of variances. Biometrika 71: 405-407.
NEAVE, H. R. and P. L. WORTHINGTON. 1988. Distribution-Free Tests. Unwin-Hyman, London, England. 430 pp.
NELSON, W., Y. L. TONG, J.-K. LEE, and F. HALBERG. 1979. Methods for cosinor-rhythmometry. Chronobiologia 6: 305-323.
NEMENYI, P. 1963. Distribution-Free Multiple Comparisons. State University of New York, Downstate Medical Center. [Cited in Wilcoxon and Wilcox (1964).]
NEWCOMBE, R. G. 1998a. Two-sided confidence intervals for the single proportion: Comparison of seven methods. Statist. Med. 17: 857-872.
NEWCOMBE, R. G. 1998b. Interval estimation for the difference between independent proportions: Comparison of eleven methods. Statist. Med. 17: 873-890.
NEWMAN, D. 1939. The distribution of range in samples from a normal population, expressed in terms of an independent estimate of standard deviation. Biometrika 31: 20-30.
NEYMAN, J. 1959. Optimal asymptotic tests of composite hypotheses, pp. 213-234. In U. Grenander (ed.), Probability and Statistics: The Harald Cramér Volume. John Wiley, New York.
NEYMAN, J. 1967. R. A. Fisher: An appreciation. Science 156: 1456-1460.
NEYMAN, J. and E. S. PEARSON. 1928a. On the use and interpretation of certain test criteria for purposes of statistical inference. Part I. Biometrika 20A: 175-240.
NEYMAN, J. and E. S. PEARSON. 1928b. On the use and interpretation of certain test criteria for purposes of statistical inference. Part II. Biometrika 20A: 263-294.
NEYMAN, J. and E. S. PEARSON. 1931. On the problem of k samples. Bull. Acad. Polon. Sci. Lett. Ser. A, 3: 460-481.
NEYMAN, J. and E. S. PEARSON. 1933. On the problem of the most efficient tests of statistical hypotheses. Phil. Trans. Roy. Soc. London, Ser. A, 231: 239-337.
NOETHER, G. E. 1963. Note on the Kolmogorov statistic in the discrete case. Metrika 7: 115-116.
NOETHER, G. E. 1984. Nonparametrics: The early years-impressions and recollections. Amer. Statist. 38: 173-178.
NORRIS, R. C. and H. F. HJELM. 1961. Non-normality and product moment correlation. J. Exp. Educ. 29: 261-270.
NORTON, H. W. 1939. The 7 × 7 squares. Ann. Eugen. 9: 269-307.
NORWOOD, P. K., A. R. SAMPSON, K. MCCARROLL, and R. STAUM. 1989. A multiple comparisons procedure for use in conjunction with the Benard-van Elteren test. Biometrics 45: 1175-1182.
O'BRIEN, P. C. 1976. A test for randomness. Biometrics 32: 391-401.
O'BRIEN, P. C. and P. J. DYCK. 1985. A runs test based on run lengths. Biometrics 41: 237-244.
O'BRIEN, R. G. and M. KAISER. 1985. MANOVA method for analyzing repeated measures designs: An extensive primer. Psychol. Bull. 97: 316-333.
O'CINNEIDE, C. A. 1990. The mean is within one standard deviation of any median. Amer. Statist. 44: 292-293.
O'CONNOR, J. J. and E. F. ROBERTSON. 1996. "Christopher Clavius." School of Mathematics and Statistics, St. Andrews University, Scotland: www-history.mcs.st-andrews.ac.uk/history/BiogIndex.html
O'CONNOR, J. J. and E. F. ROBERTSON. 1997. "Christian Kramp" (ibid.).
O'CONNOR, J. J. and E. F. ROBERTSON. 1998. "Charles Babbage" (ibid.).
O'CONNOR, J. J. and E. F. ROBERTSON. 2003. "John Venn" (ibid.).
ODEH, R. E. and J. O. EVANS. 1974. The percentage points of the normal distribution. J. Royal Statist. Soc. Ser. C. Appl. Statist. 23: 98-99.
OLDS, E. G. 1938. Distributions of sums of squares of rank differences for small numbers of individuals. Ann. Math. Statist. 9: 133-148.
OLSON, C. L. 1974. Comparative robustness of six tests in multivariate analysis of variance. J. Amer. Statist. Assoc. 69: 894-908.
OLSON, C. L. 1976. On choosing a test statistic in multivariate analysis of variance. Psychol. Bull. 83: 579-586.
OLSON, C. L. 1979. Practical considerations in choosing a MANOVA test statistic: A rejoinder to Stevens. Psychol. Bull. 86: 1350-1352.
OLSSON, U., F. DRASGOW, and N. J. DORANS. 1982. The polyserial correlation coefficient. Psychometrika 47: 337-347.
OLVER, F. W. J. 1964. Bessel functions of integer order, pp. 355-433. In M. Abramowitz and I. Stegun (eds.), Handbook of Mathematical Functions, National Bureau of Standards, Washington, DC. (Also, Dover, New York, 1965.)
O'NEILL, M. E. and K. MATHEWS. 2000. A weighted least squares approach to Levene's test of homogeneity of variance. Austral. and New Zeal. J. Statist. 42: 81-100.
ORD, K. 1984. Maurice George Kendall, 1907-1983. Amer. Statist. 38: 36-37.
OSTLE, B. and L. C. MALONE. 1988. Statistics in Research. 4th ed. Iowa State University Press, Ames, Iowa. 664 pp.
OTIENO, B. S. and C. M. ANDERSON-COOK. 2003. A more efficient way of obtaining a unique median estimate for circular data. J. Modern Appl. Statist. Meth. 2: 168-176.
OTTEN, A. 1973. The null distribution of Spearman's S when n = 13(1)16. Statist. Neerland. 27: 19-20.
OVERALL, J. E., H. M. RHOADES, and R. R. STARBUCK. 1987. Small-sample tests for homogeneity of response probabilities in 2 × 2 contingency tables. Psychol. Bull. 98: 307-314.
OWEN, D. B. 1962. Handbook of Statistical Tables. Addison-Wesley, Reading, MA. 580 pp.
OZER, D. J. 1985. Correlation and the coefficient of determination. Psychol. Bull. 97: 307-315.
PAGE, W. and V. N. MURTY. 1982. Nearness relations among measures of central tendency and dispersion: Part 1. Two-Year Coll. Math. J. 13: 315-327.
PAMPEL, F. C. 2000. Logistic Regression: A Primer. Sage Publications, Thousand Oaks, CA. 86 pp.
PAPANASTASIOU, B. C. 2003. Greek letters in measurement and statistics. Is it all Greek to you? STATS 36: 17-18.
PARSHALL, C. G. and J. D. KROMREY. 1996. Tests of independence in contingency tables with small samples: A comparison of statistical power. Educ. Psychol. Meas. 56: 26-44.
PATEL, J. K. 1989. Prediction intervals - A review. Communic. Statist.-Theor. Meth. 18: 2393-2465.
PATIL, K. D. 1975. Cochran's Q test: Exact distribution. J. Amer. Statist. Assoc. 70: 186-189.
PAUL, S. R. 1988. Estimation of and testing significance for a common correlation coefficient. Communic. Statist.-Theor. Meth. 17: 39-53.
PAULL, A. E. 1950. On a preliminary test for pooling mean squares in the analysis of variance. Ann. Math. Statist. 21: 539-556.
PAZER, H. L. and L. A. SWANSON. 1972. Modern Methods for Statistical Analysis. Intext Educational Publishers, Scranton, PA. 483 pp.
PEARSON, E. S. 1932. The analysis of variance in cases of nonnormal variation. Biometrika 23: 114-133.
PEARSON, E. S. 1939. Student as a statistician. Biometrika 30: 210-250.
PEARSON, E. S. 1947. The choice of statistical tests illustrated on the interpretation of data classed in a 2 × 2 table. Biometrika 34: 139-167.
PEARSON, E. S. 1967. Studies in the history of probability and statistics. XVII. Some reflections on continuity in the development of mathematical statistics, 1885-1920. Biometrika 54: 341-355.
PEARSON, E. S., R. B. D'AGOSTINO, and K. O. BOWMAN. 1977. Tests for departure from normality: Comparison of powers. Biometrika 64: 231-246.
PEARSON, E. S. and H. O. HARTLEY. 1951. Charts for the power function for analysis of variance tests, derived from the non-central F-distribution. Biometrika 38: 112-130.
PEARSON, E. S. and H. O. HARTLEY. 1966. Biometrika Tables for Statisticians, Vol. 1. 3rd ed. Cambridge University Press, Cambridge, England. 264 pp.
PEARSON, E. S. and H. O. HARTLEY. 1976. Biometrika Tables for Statisticians, Vol. 2. 2nd ed. Cambridge University Press, Cambridge, England. (1972, reprinted 1976.) 385 pp.
PEARSON, E. S., R. L. PLACKETT, and G. A. BARNARD. 1990. 'Student': A Statistical Biography of William Sealy Gosset. Clarendon Press, Oxford, England. 142 pp.
PEARSON, E. S. and N. W. PLEASE. 1975. Relation between the shape of population distribution and the robustness of four simple test statistics. Biometrika 62: 223-241.
PEARSON, K. 1900. On a criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen in random sampling. Phil. Mag. Ser. 5, 50: 157-175.
PEARSON, K. 1901. On the correlation of characters not quantitatively measured. Philosoph. Trans. Roy. Soc. London A 195: 1-47. [Cited in Sheskin, 2004: 997.]
PEARSON, K. 1904. Mathematical contributions to the theory of evolution. XIII. On the theory of contingency and its relation to association and normal correlation. Draper's Co. Res. Mem., Biometric Ser. 1. 35 p. [Cited in Lancaster (1969).]
PEARSON, K. 1905. "Das Fehlergesetz und Seine Verallgemeinerungen durch Fechner und Pearson." A rejoinder. Biometrika 4: 169-212.
PEARSON, K. 1920. Notes on the history of correlation. Biometrika 13: 25-45.
PEARSON, K. 1924. I. Historical notes on the origin of the normal curve of errors. Biometrika 16: 402-404.
PEDHAZUR, E. J. 1997. Multiple Regression in Behavioral Research: Explanation and Prediction. Harcourt Brace, Fort Worth, TX. 1058 pp.
PETERS, W. S. 1987. Counting for Something. Springer-Verlag, New York. 275 pp.
PETRINOVICH, L. F. and C. D. HARDYCK. 1969. Error rates for multiple comparison methods: Some evidence concerning the frequency of erroneous conclusions. Psychol. Bull. 71: 43-54.
PETTITT, A. N. and M. A. STEPHENS. 1977. The Kolmogorov-Smirnov goodness-of-fit statistic with discrete and grouped data. Technometrics 19: 205-210.
PFANZAGL, J. 1978. Estimation: Confidence intervals and regions, pp. 259-267. In Kruskal and Tanur (1978).
PIELOU, E. C. 1966. The measurement of diversity in different types of biological collections. J. Theoret. Biol. 13: 131-144.
PIELOU, E. C. 1974. Population and Community Ecology. Gordon and Breach, New York. 424 pp.
PIELOU, E. C. 1977. Mathematical Ecology. John Wiley, New York. 385 pp.
PILLAI, K. C. S. 1955. Some new test criteria in multivariate analysis. Ann. Math. Statist. 26: 117-121.
PIRIE, W. R. and M. A. HAMDAN. 1972. Some revised continuity corrections for discrete distributions. Biometrics 28: 693-701.
PITMAN, E. J. G. 1939. A note on normal correlation. Biometrika 31: 9-12.
PLACKETT, R. L. 1964. The continuity correction in 2 × 2 tables. Biometrika 51: 327-338.
POLLAK, M. and J. COHEN. 1981. A comparison of the independent-samples t-test and the paired-samples t-test when the observations are nonnegatively correlated pairs. J. Statist. Plan. Inf. 5: 133-146.
POSTEN, H. O. 1992. Robustness of the two-sample t-test under violations of the homogeneity of variance assumption, part II. Communic. Statist.-Theor. Meth. 21: 2169-2184.
POSTEN, H. O., H. C. YEH, and D. B. OWEN. 1982. Robustness of the two-sample t-test under violations of the homogeneity of variance assumption. Communic. Statist.-Theor. Meth. 11: 109-126.
PRATT, J. W. 1959. Remarks on zeroes and ties in the Wilcoxon signed rank procedures. J. Amer. Statist. Assoc. 54: 655-667.
PRZYBOROWSKI, J. and H. WILENSKI. 1940. Homogeneity of results in testing samples from Poisson series. Biometrika 31: 313-323.
QUADE, D. 1979. Using weighted rankings in the analysis of complete blocks with additive block effects. J. Amer. Statist. Assoc. 74: 680-683.
QUADE, D. and I. SALAMA. 1992. A survey of weighted rank correlation, pp. 213-224. In P. K. Sen and I. Salama (eds.), Order Statistics and Nonparametrics: Theory and Applications. Elsevier, New York.
QUENOUILLE, M. H. 1950. Introductory Statistics. Butterworth-Springer, London. [Cited in Thoni (1967: 18).]
QUENOUILLE, M. H. 1956. Notes on bias in estimation. Biometrika 43: 353-360.
QUINN, G. P. and M. J. KEOUGH. 2002. Experimental Design and Data Analysis for Biologists. Cambridge University Press, Cambridge, England. 537 pp.
RACTLIFFE, J. F. 1968. The effect on the t-distribution of non-normality in the sampled population. J. Royal Statist. Soc. Ser. C. Appl. Statist. 17: 42-48.
RADLOW, R. and E. F. ALF, JR. 1975. An alternate multinomial assessment of the accuracy of the X2 test of goodness of fit. J. Amer. Statist. Assoc. 70: 811-813.
RAFF, M. S. 1956. On approximating the point binomial. J. Amer. Statist. Assoc. 51: 293-303.
RAHE, A. J. 1974. Table of critical values for the Pratt matched pair signed rank statistic. J. Amer. Statist. Assoc. 69: 368-373.
RAHMAN, M. and Z. GOVINDARAJULU. 1997. A modification of the test of Shapiro and Wilk for normality. J. Appl. Statist. 24: 219-236.
RAMSEY, P. H. 1978. Power differences between pairwise multiple comparisons. J. Amer. Statist. Assoc. 73: 479-485.
RAMSEY, P. H. 1980. Exact Type I error rates for robustness of Student's t test with unequal variances. J. Educ. Statist. 5: 337-349.
RAMSEY, P. H. 1989. Determining the best trial for the 2 × 2 comparative trial. J. Statist. Computa. Simula. 34: 51-65.
RAMSEY, P. H. and P. P. RAMSEY. 1988. Evaluating the normal approximation to the binomial test. J. Educ. Statist. 13: 173-182.
RAMSEYER, G. C. and T.-K. TCHENG. 1973. The robustness of the studentized range statistic to violations of the normality and homogeneity of variance assumptions. Amer. Educ. Res. J. 10: 235-240.
RANNEY, G. B. and C. C. THIGPEN. 1981. The sample coefficient of determination in simple linear regression. Amer. Statist. 35: 152-153.
RAO, C. R. 1992. R. A. Fisher: The founder of modern statistics. Statist. Sci. 7: 34-48.
RAO, C. R. and I. M. CHAKRAVARTI. 1956. Some small sample tests of significance for a Poisson distribution. Biometrics 12: 264-282.
RAO, C. R., S. K. MITRA, and R. A. MATTHAI. 1966. Formulae and Tables for Statistical Work. Statistical Publishing Society, Calcutta, India. 233 pp.
RAO, J. S. 1976. Some tests based on arc-lengths for the circle. Sankhya: Indian J. Statist. Ser. B. 26: 329-338.
RAYLEIGH. 1919. On the problems of random variations and flights in one, two, and three dimensions. Phil. Mag. Ser. 6, 37: 321-347.
RENCHER, A. C. 1998. Multivariate Statistical Inference and Applications. John Wiley, New York. 559 pp.
RENCHER, A. C. 2002. Methods of Multivariate Analysis. John Wiley, New York. 708 pp.
RHOADES, H. M. and J. E. OVERALL. 1982. A sample size correction for Pearson chi-square in 2 × 2 contingency tables. Psychol. Bull. 91: 418-423.
RICHARDSON, J. T. E. 1990. Variants of chi-square for 2 × 2 contingency tables. Brit. J. Math. Statist. Psychol. 43: 309-326.
RICHARDSON, J. T. E. 1994. The analysis of 2 × 1 and 2 × 2 contingency tables: An historical review. Statist. Meth. Med. Res. 3: 107-133.
RODGERS, J. L. and W. L. NICEWANDER. 1988. Thirteen ways to look at the correlation coefficient. Amer. Statist. 42: 59-66.
ROGAN, J. C. and H. J. KESELMAN. 1977. Is the ANOVA F-test robust to variance heterogeneity when sample sizes are equal?: An investigation via a coefficient of variation. Amer. Educ. Res. J. 14: 493-498.
ROSCOE, J. T. and J. A. BYARS. 1971. Sample size restraints commonly imposed on the use of the chi-square statistic. J. Amer. Statist. Assoc. 66: 755-759.
ROSS, G. J. S. and D. A. PREECE. 1985. The negative binomial distribution. Statistician 34: 323-336.
ROTHERY, P. 1979. A nonparametric measure of intraclass correlation. Biometrika 66: 629-639.
ROUANET, H. and D. LEPINE. 1970. Comparison between treatments in a repeated-measurement design: ANOVA and multivariate methods. Brit. J. Math. Statist. Psychol. 23: 147-163.
ROUTLEDGE, R. D. 1990. When stepwise regression fails: correlated variables some of which are redundant. Intern. J. Math. Educ. Sci. Technol. 21: 403-410.
ROY, S. N. 1945. The individual sampling distribution of the maximum, minimum, and any intermediates of the p-statistics on the null-hypothesis. Sankhya 7(2): 133-158.
ROY, S. N. 1953. On a heuristic method of test construction and its use in multivariate analysis. Ann. Math. Statist. 24: 220-238.
ROYSTON, E. 1956. Studies in the history of probability and statistics. III. A note on the history of the graphical presentation of data. Biometrika 43: 241-247.
ROYSTON, J. P. 1982a. An extension of Shapiro and Wilk's W test for normality to large samples. Appl. Statist. 31: 115-124.
ROYSTON, J. P. 1982b. The W test for normality. Appl. Statist. 31: 176-180.
ROYSTON, J. P. 1986. A remark on AS181. The W test for normality. Appl. Statist. 35: 232-234.
ROYSTON, J. P. 1989. Correcting the Shapiro-Wilk W for ties. J. Statist. Computa. Simulat. 31: 237-249.
RUBIN, D. B. 1990. Comment: Neyman (1923) and causal inference in experiments and observational studies. Statist. Sci. 5: 472-480.
RUDAS, T. 1986. A Monte Carlo comparison of the small sample behaviour of the Pearson, likelihood ratio and the Cressie-Read statistics. J. Statist. Computa. Simula. 24: 107-120.
RUSSELL, G. S. and D. J. LEVITIN. 1994. An expanded table of probability values for Rao's spacing test. Technical Report No. 94-9, Institute of Cognitive and Decision Sciences, University of Oregon, Eugene. 13 pp.
RUST, S. W. and M. A. FLIGNER. 1984. A modification of the Kruskal-Wallis statistic for the generalized Behrens-Fisher problem. Communic. Statist.-Theor. Meth. 13: 2013-2028.
RYAN, G. W. and S. D. LEADBETTER. 2002. On the misuse of confidence intervals for two means in testing for the significance of the difference between the means. J. Modern Appl. Statist. Meth. 1: 473-478.
RYAN, T. P. 1997. Modern Regression Methods. John Wiley, New York. 515 pp.
SAHAI, H. and M. I. AGEEL. 2000. The Analysis of Variance: Fixed, Random and Mixed Models. Birkhäuser, Boston. 742 pp.
SAKODA, J. M. and B. H. COHEN. 1957. Exact probabilities for contingency tables using binomial coefficients. Psychometrika 22: 83-86.
SALAMA, I. and D. QUADE. 1982. A nonparametric comparison of two multiple regressions by means of a weighted measure of correlation. Communic. Statist.-Theor. Meth. A11: 1185-1195.
SATTERTHWAITE, F. E. 1946. An approximate distribution of estimates of variance components. Biometrics Bull. 2: 110-114.
SAVAGE, I. R. 1956. Contributions to the theory of rank order statistics - the two-sample case. Ann. Math. Statist. 27: 590-615.
SAVAGE, I. R. 1957. Non-parametric statistics. J. Amer. Statist. Assoc. 52: 331-334.
SAVAGE, L. J. 1976. On rereading R. A. Fisher. Ann. Statist. 4: 441-483. Also: Discussion by B. Efron, ibid. 4: 483-484; C. Eisenhart, ibid. 4: 484; B. de Finetti, ibid. 4: 485-488; D. A. S. Fraser, ibid. 4: 488-489; V. P. Godambe, ibid. 4: 490-492; I. J. Good, ibid. 4: 492-495; O. Kempthorne, ibid. 4: 495-497; S. M. Stigler, ibid. 4: 498-500.
SAVILLE, D. J. 1990. Multiple comparison procedures: The practical solution. Amer. Statist. 44: 174-180.
SAWILOWSKY, S. S. 2002. Fermat, Schubert, Einstein, and Behrens-Fisher: The probable difference between two means when σ1² ≠ σ2². J. Modern Appl. Statist. Meth. 1: 461-472.
SAWILOWSKY, S. S., C. C. BLAIR, and J. J. HIGGINS. 1989. An investigation of the Type I error and power properties of the rank transform procedure in factorial ANOVA. J. Educ. Statist. 14: 255-267.
SCHADER, M. and F. SCHMID. 1990. Charting small sample characteristics of asymptotic confidence intervals for the binomial proportion p. Statist. Papers 31: 251-264.
SCHEFFE, H. 1953. A method of judging all contrasts in the analysis of variance. Biometrika 40: 87-104.
SCHEFFE, H. 1959. The Analysis of Variance. John Wiley, New York. 477 pp.
SCHEFFE, H. 1970. Practical solutions of the Behrens-Fisher problem. J. Amer. Statist. Assoc. 65: 1501-1508.
SCHENKER, N. and J. F. GENTLEMAN. 2001. On judging the significance of differences by examining the overlap between confidence intervals. Amer. Statist. 55: 182-186.
SCHLITTGEN, R. 1979. Use of a median test for a generalized Behrens-Fisher problem. Metrika 26: 95-103.
SCHUCANY, W. R. and W. H. FRAWLEY. 1973. A rank test for two group concordance. Psychometrika 38: 249-258.
SCHWERTMAN, N. C. and R. A. MARTINEZ. 1994. Approximate Poisson confidence limits. Communic. Statist.-Theor. Meth. 23: 1507-1529.
SEAL, H. L. 1967. Studies in the history of probability and statistics. XV. The historical development of the Gauss linear model. Biometrika 54: 1-14.
SEAMAN, J. W., JR., S. C. WALLS, S. E. WISE, and R. G. JAEGER. 1994. Caveat emptor: Rank transform methods and interaction. Trends Ecol. Evol. 9: 261-263.
SEBER, G. A. F. and A. J. LEE. 2003. Linear Regression Analysis. 2nd ed. John Wiley, New York. 557 pp.
SEBER, G. A. F. and C. J. WILD. 1989. Nonlinear Regression. John Wiley, New York. 768 pp.
SENDERS, V. L. 1958. Measurement and Statistics. Oxford University Press, New York. 594 pp.
SERLIN, R. C. and M. R. HARWELL. 1993. An empirical study of eight tests of partial correlation coefficients. Communic. Statist.-Simula. Computa. 22: 545-567.
SERLIN, R. C. and L. A. MARASCUILO. 1983. Planned and post hoc comparisons in tests of concordance and discordance for G groups of judges. J. Educ. Statist. 8: 187-205.
SHAFFER, J. P. 1977. Multiple comparison emphasizing selected contrasts: An extension and generalization of Dunnett's procedure. Biometrics 33: 293-303.
SHANNON, C. E. 1948. A mathematical theory of communication. Bell System Tech. J. 27: 379-423, 623-656.
SHAPIRO, S. S. 1986. How to Test Normality and Other Distributional Assumptions. Volume 3. The ASQC Basic References in Quality Control: Statistical Techniques, E. J. Dudewicz (ed.). American Society for Quality Control, Milwaukee, WI. 67 pp.
SHAPIRO, S. S. and M. B. WILK. 1965. An analysis of variance test for normality (complete samples). Biometrika 52: 591-611.
SHAPIRO, S. S., M. B. WILK, and H. J. CHEN. 1968. A comparative study of various tests for normality. J. Amer. Statist. Assoc. 63: 1343-1372.
SHAPIRO, S. S. and R. S. FRANCIA. 1972. An approximate analysis of variance test for normality. J. Amer. Statist. Assoc. 67: 215-216.
SHARMA, S. 1996. Applied Multivariate Techniques. John Wiley, New York. 493 pp.
SHEARER, P. R. 1973. Missing data in quantitative designs. J. Royal Statist. Soc. Ser. C. Appl. Statist. 22: 135-140.
SHESKIN, D. J. 2004. Handbook of Parametric and Nonparametric Statistical Procedures. 3rd ed. Chapman & Hall/CRC, Boca Raton, FL. 1193 pp.
SICHEL, H. S. 1973. On a significance test for two Poisson variables. J. Royal Statist. Soc. Ser. C. Appl. Statist. 22: 50-58.
SIEGEL, S. and N. J. CASTELLAN, JR. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, New York. 399 pp.
SIEGEL, S. and J. TUKEY. 1960. A nonparametric sum of ranks procedure for relative spread in unpaired samples. J. Amer. Statist. Assoc. 55: 429-445. (Correction in 1961 in J. Amer. Statist. Assoc. 56: 1005.)
SIMONOFF, J. S. 2003. Analyzing Categorical Data. Springer, New York. 496 pp.
SIMPSON, G. G., A. ROE, and R. C. LEWONTIN. 1960. Quantitative Zoology. Harcourt, Brace, and Jovanovich, New York. 440 pp.
SKILLINGS, J. H. and G. A. MACK. 1981. On the use of a Friedman-type statistic in balanced and unbalanced block designs. Technometrics 23: 171-177.
SMIRNOV, N. V. 1939a. Sur les écarts de la courbe de distribution empirique. Recueil Mathematique N. S. 6: 3-26. [Cited in Smirnov (1948).]
SMIRNOV, N. V. 1939b. On the estimation of the discrepancy between empirical curves of distribution for two independent samples. (In Russian.) Bull. Moscow Univ. Intern. Ser. (Math.) 2: 3-16. [Cited in Smirnov (1948).]
SMITH, D. E. 1951. History of Mathematics. Vol. I. Dover Publications, New York. 596 pp.
SMITH, D. E. 1953. History of Mathematics. Vol. II. Dover Publications, New York. 725 pp.
SMITH, H. 1936. The problem of comparing the results of two experiments with unequal means. J. Council Sci. Industr. Res. 9: 211-212. [Cited in Davenport and Webster (1975).]
SMITH, R. A. 1971. The effect of unequal group size on Tukey's HSD procedure. Psychometrika 36: 31-34.
SNEDECOR, G. W. 1934. Calculation and Interpretation of Analysis of Variance and Covariance. Collegiate Press, Ames, Iowa. 96 pp.
SNEDECOR, G. W. 1954. Biometry, its makers and concepts, pp. 3-10. In O. Kempthorne, T. A. Bancroft, J. W. Gowen, and J. L. Lush (eds.), Statistics and Mathematics in Biology. Iowa State College Press, Ames, Iowa.
SNEDECOR, G. W. and W. G. COCHRAN. 1989. Statistical Methods. 8th ed. Iowa State University Press, Ames, Iowa. 503 pp.
SOKAL, R. R. and F. J. ROHLF. 1995. Biometry. 3rd ed. W. H. Freeman & Co., New York. 887 pp.
SOMERVILLE, P. N. 1993. On the conservatism of the Tukey-Kramer multiple comparison procedure. Statist. Prob. Lett. 16: 343-345.
SOPER, H. E. 1914. Tables of Poisson's exponential binomial limit. Biometrika 10: 25-35.
SPEARMAN, C. 1904. The proof and measurement of association between two things. Amer. J. Psychol. 15: 72-101.
SPEARMAN, C. 1906. 'Footrule' for measuring correlation. Brit. J. Psychol. 2: 89-108.
SPRENT, P. and N. C. SMEETON. 2001. Applied Nonparametric Statistics. 3rd ed. Chapman & Hall/CRC, Boca Raton, FL. 461 pp.
SRIVASTAVA, A. B. L. 1958. Effect of non-normality on the power function of the t-test. Biometrika 45: 421-429.
SRIVASTAVA, A. B. L. 1959. Effects of non-normality on the power of the analysis of variance test. Biometrika 46: 114-122.
SRIVASTAVA, M. S. 2002. Methods of Multivariate Statistics. John Wiley, New York. 697 pp.
STARMER, C. F., J. E. GRIZZLE, and P. K. SEN. 1974. Some reasons for not using the Yates continuity correction on 2 × 2 contingency tables: Comment. J. Amer. Statist. Assoc. 69: 376-378.
STEEL, R. G. D. 1959. A multiple comparison rank sum test: Treatments versus control. Biometrics 15: 560-572.
STEEL, R. G. D. 1960. A rank sum test for comparing all pairs of treatments. Technometrics 2: 197-207.
STEEL, R. G. D. 1961a. Error rates in multiple comparisons. Biometrics 17: 326-328.
STEEL, R. G. D. 1961b. Some rank sum multiple comparison tests. Biometrics 17: 539-552.
STEEL, R. G. D., J. H. TORRIE, and D. A. DICKEY. 1997. Principles and Procedures of Statistics. A Biometrical Approach. 3rd ed. WCB/McGraw-Hill, Boston. 666 pp.
STEIGER, J. H. 1980. Tests for comparing elements in a correlation matrix. Psychol. Bull. 87: 245-251.
STELZL, I. 2000. What sample sizes are needed to get correct significance levels for log-linear models? - A Monte Carlo study using the SPSS procedure "Hiloglinear." Meth. Psychol. Res. 5: 95-116.
STEPHENS, M. A. 1969a. Tests for randomness of directions against two circular alternatives. J. Amer. Statist. Assoc. 64: 280-289.
STEPHENS, M. A. 1969b. A goodness-of-fit statistic for the circle, with some comparisons. Biometrika 56: 161-168.
STEPHENS, M. A. 1972. Multisample tests for the von Mises distribution. J. Amer. Statist. Assoc. 67: 456-461.
STEPHENS, M. A. 1982. Use of the von Mises distribution to analyze continuous proportions. Biometrika 69: 197-203.
STEVENS, G. 1989. A nonparametric multiple comparison test for differences in scale parameters. Metrika 36: 91-106.
STEVENS, J. 2002. Applied Multivariate Statistics for the Social Sciences. 4th ed. Lawrence Erlbaum, Mahwah, NJ. 699 pp.
STEVENS, S. S. 1946. On the theory of scales of measurement. Science 103: 677-680.
STEVENS, S. S. 1968. Measurement, statistics, and the schemapiric view. Science 161: 849-856.
STEVENS, W. L. 1939. Distribution of groups in a sequence of alternatives. Ann. Eugen. 9: 10-17.
STIGLER, S. M. 1978. Francis Ysidro Edgeworth, statistician. J. Roy. Statist. Soc. Ser. A, 141: 287-322.
STIGLER, S. M. 1980. Stigler's law of eponymy. Trans. N. Y. Acad. Sci. Ser. II, 39: 147-157.
STIGLER, S. M. 1982. Poisson on the Poisson distribution. Statist. Prob. Lett. 1: 33-35.
STIGLER, S. M. 1986. The History of Statistics: The Measurement of Uncertainty Before 1900. Belknap Press, Cambridge, MA. 410 pp.
STIGLER, S. M. 1989. Francis Galton's account of the invention of correlation. Statist. Sci. 4: 73-86.
STIGLER, S. M. 1999. Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press, Cambridge, MA. 488 pp.
STIGLER, S. M. 2000. The problematic unity of biometrics. Biometrics 56: 653-658.
STOLINE, M. R. 1981. The status of multiple comparisons: Simultaneous estimation of all pairwise comparisons in one-way ANOVA designs. Amer. Statist. 35: 134-141.
STONEHOUSE, J. M. and G. J. FORRESTER. 1998. Robustness of t and U tests under combined assumption violations. J. Appl. Statist. 25: 63-74.
STORER, B. E. and C. KIM. 1990. Exact properties of some exact test statistics for comparing two binomial populations. J. Amer. Statist. Assoc. 85: 146-155.
STREET, D. J. 1990. Fisher's contributions to agricultural statistics. Biometrics 46: 937-945.
STRUIK, D. J. 1967. A Concise History of Mathematics. 3rd ed. Dover, NY. 299 pp.
STUART, A. 1984. Sir Maurice Kendall, 1907-1983. J. Roy. Statist. Soc. 147: 120-122.
STUDENT. 1908. The probable error of a mean. Biometrika 6: 1-25.
STUDENT. 1927. Errors of routine analysis. Biometrika 19: 151-164. [Cited in Winer, Brown, and Michels, 1991: 182.]
STUDIER, E. H., R. W. DAPSON, and R. E. BIGELOW. 1975. Analysis of polynomial functions for determining maximum or minimum conditions in biological systems. Comp. Biochem. Physiol. 52A: 19-20.
SUTHERLAND, C. H. V. 1992. Coins and coinage: History of coinage: Origins; ancient Greek coins, pp. 530-534. In R. McHenry (general ed.), The New Encyclopædia Britannica, 15th ed., Vol. 16. Encyclopedia Britannica, Chicago, IL.
SUTTON, J. B. 1990. Values of the index of determination at the 5% significance level. Statistician 39: 461-463.
SWED, F. S. and C. EISENHART. 1943. Tables for testing randomness of grouping in a sequence of alternatives. Ann. Math. Statist. 14: 66-87.
SYMONDS, P. M. 1926. Variations of the product-moment (Pearson) coefficient of correlation. J. Educ. Psychol. 17: 458-469.
TABACHNICK, B. G. and L. S. FIDELL. 2001. Using Multivariate Statistics. 4th ed. Allyn and Bacon, Boston. 966 pp.
TAMHANE, A. C. 1979. A comparison of procedures for multiple comparisons. J. Amer. Statist. Assoc. 74: 471-480.
TAN, W. Y. 1982. Sampling distributions and robustness of t, F and variance-ratio in two samples and ANOVA models with respect to departure from normality. Communic. Statist.-Theor. Meth. 11: 2485-2511.
TAN, W. Y. and S. P. WONG. 1980. On approximating the null and nonnull distributions of the F ratio in unbalanced random-effects models from nonnormal universes. J. Amer. Statist. Assoc. 75: 655-662.
TATE, M. W. and S. M. BROWN. 1964. Tables for Comparing Related-Sample Percentages and for the Median Test. Graduate School of Education, University of Pennsylvania, Philadelphia, PA.
TATE, M. W. and S. M. BROWN. 1970. Note on the Cochran Q test. J. Amer. Statist. Assoc. 65: 155-160.
TATE, M. W. and L. A. HYER. 1973. Inaccuracy of the X2 goodness of fit when expected frequencies are small. J. Amer. Statist. Assoc. 68: 836-841.
THIELE, T. N. 1897. Elementær Iagttagelseslære. Gyldendal, København, Denmark. 129 pp. [Cited in Hald (1981).]
THIELE, T. N. 1899. Om Iagttagelseslærens Halvinvarianter. Overs. Vid. Sels. Forh. 135-141. [Cited in Hald (1981).]
THODE, H. C., JR. 2002. Testing for Normality. Marcel Dekker, New York. 479 pp.
THOMAS, G. E. 1989. A note on correcting for ties with Spearman's ρ. J. Statist. Computa. Simula. 31: 37-40.
THONI, H. 1967. Transformation of Variables Used in the Analysis of Experimental and Observational Data: A Review. Tech. Rep. No. 7, Statistical Laboratory, Iowa State University, Ames, IA. 61 pp.
THORNBY, J. I. 1972. A robust test for linear regression. Biometrics 28: 533-543.
TIKU, M. L. 1965. Chi-square approximations for the distributions of goodness-of-fit statistics U²N and W²N. Biometrika 52: 630-633.
TIKU, M. L. 1967. Tables of the power of the F-test. J. Amer. Statist. Assoc. 62: 525-539.
TIKU, M. L. 1972. More tables of the power of the F-test. J. Amer. Statist. Assoc. 67: 709-710.
TOLMAN, H. 1971. A simple method for obtaining unbiased estimates of population standard deviations. Amer. Statist. 25(1): 60.
TOMARKEN, A. J. and R. C. SERLIN. 1986. Comparison of ANOVA alternatives under variance heterogeneity and specific noncentrality structures. Psychol. Bull. 99: 90-99.
TOOTHAKER, L. E. 1991. Multiple Comparisons for Researchers. Sage Publications, Newbury Park, CA. 167 pp.
TOOTHAKER, L. E. and H. CHANG. 1980. On "The analysis of ranked data derived from completely randomized designs." J. Educ. Statist. 5: 169-176.
TOOTHAKER, L. E. and D. NEWMAN. 1994. Nonparametric competitors to the two-way ANOVA. J. Educ. Behav. Statist. 19: 237-273.
TUKEY, J. W. 1949. One degree of freedom for non-additivity. Biometrics 5: 232-242.
TUKEY, J. W. 1953. The problem of multiple comparisons. Department of Statistics, Princeton University. (unpublished)
TUKEY, J. W. 1993. Where should multiple comparisons go next?, pp. 187-207. In Hoppe, F. M. (ed.), Multiple Comparisons, Selection, and Applications in Biometry. Marcel Dekker, New York.
TWAIN, M. (S. L. CLEMENS). 1950. Life on the Mississippi. Harper & Row, New York. 526 pp.
TWEDDLE, I. 1984. Approximating n!: Historical origins and error analysis. Amer. J. Phys. 52: 487-488.
UNISTAT LTD. 2003. UNISTAT: Statistical Package for MS Windows. Version 5.5. UNISTAT Ltd., London, England. 932 pp.
UPTON, G. J. G. 1976. More multisample tests for the von Mises distribution. J. Amer. Statist. Assoc. 71: 675-678.
UPTON, G. J. G. 1982. A comparison of alternative tests for the 2 × 2 comparative trial. J. Roy. Statist. Soc., Ser. A, 145: 86-105.
UPTON, G. J. G. 1986. Approximate confidence intervals for the mean direction of a von Mises distribution. Biometrika 73: 525-527.
UPTON, G. J. G. 1992. "Fisher's exact test." J. Roy. Statist. Soc. Ser. A, 155: 395-402.
UPTON, G. J. G. and B. FINGLETON. 1985. Spatial Data Analysis by Example. Volume 1. Point Pattern and Quantitative Data. John Wiley, New York. 410 pp.
UPTON, G. J. G. and B. FINGLETON. 1989. Spatial Data Analysis by Example. Volume 2. Categorical and Directional Data. John Wiley, New York. 416 pp.
URY, H. K. 1982. Comparing two proportions: Finding p2 when p1, n, α, and β are specified. Statistician 31: 245-250.
URY, H. K. and J. L. FLEISS. 1980. On approximate sample sizes for comparing two independent proportions with the use of Yates' correction. Biometrics 36: 347-351.
VAN ELTEREN, P. and G. E. NOETHER. 1959. The asymptotic efficiency of the χ²r-test for a balanced incomplete block design. Biometrika 46: 475-477.
VITTINGHOFF, E., D. V. GLIDDEN, S. C. SHIBOSKI, and C. E. MCCULLOCH. 2005. Regression Methods in Biostatistics: Linear, Logistic, Survival, and Repeated Measures. Springer Science+Business Media, New York. 340 pp.
VOLLSET, S. E. 1993. Confidence intervals for a binomial proportion. Statist. Med. 12: 809-824.
VON EYE, A. and C. SCHUSTER. 1998. Regression Analysis for Social Sciences. Academic Press, San Diego, CA. 386 pp.
VON MISES, R. 1918. Über die "Ganzzahligkeit" der Atomgewichte und verwandte Fragen. Physikal. Z. 19: 490-500.
VON NEUMANN, J., R. H. KENT, H. R. BELLINSON, and B. I. HART. 1941. The mean square successive difference. Ann. Math. Statist. 12: 153-162.
VOSS, D. T. 1999. Resolving the mixed models controversy. Amer. Statist. 53: 352-356.
WALD, A. and J. WOLFOWITZ. 1940. On a test whether two samples are from the same population. Ann. Math. Statist. 11: 147-162.
WALKER, H. M. 1929. Studies in the History of Statistical Method with Special Reference to Certain Educational Problems. Williams and Wilkins, Baltimore, MD. 229 pp.
WALKER, H. M. 1934. Abraham de Moivre. Scripta Math. 3: 316-333.
WALKER, H. M. 1958. The contributions of Karl Pearson. J. Amer. Statist. Assoc. 53: 11-22.
WALLIS, W. A. 1939. The correlation ratio for ranked data. J. Amer. Statist. Assoc. 34: 533-538.
WALLIS, W. A. and G. H. MOORE. 1941. A significance test for time series analysis. J. Amer. Statist. Assoc. 36: 401-409.
WALLIS, W. A. and H. V. ROBERTS. 1956. Statistics: A New Approach. Free Press, Glencoe, IL. 646 pp.
for robust non-parametric regression. 1. Amer. Statist. Assoc. 89: 65- 76. W ANG,Y. H. 2000. Fiducial intervals: What are they? Amer. Statist. 54: 105-111. WANG, Y. Y. 1971. Probabilities of the Type I errors of the Welch tests for the Behrens-Fisher problem . ./. A mer. Statist. Assoc. 66: 605-608. WATSON, G. S. 1961. Goodness of fit tests on a circle. Biometrika 48: 109-114. WATSON, G. S. 1962. Goodness of fit tests on a circle. II. Biometrika 49: 57-63. WATSON,G. S. 1982. William Gemmell Cochran 1909-1980. Ann. Statist. 10: 1-10. WArSON, G. S. 1983. Statistics on Spheres. John Wiley, New York. 238 pp. WATSON,G. S. 1994. The mean is within one standard deviation from any median. Amer. Statist. 48: 268-269. WATSON,G. S. 1995. U~ test for uniformity of discrete distributions. 1. Appl. Statist. 22: 273-276. WATSON, G. S. and E. J. WILLIAMS. 1956. On the construction of significance tests on the circle and the sphere. Biometrika 43: 344-352. WEHRHAHN, K. and J. OGAWA. 1978. On the use of the t-statistic in preliminary testing procedures for the Behrens-Fisher problem. J. Statist. Plan. Inference 2: 15-25. WEINFURT, K. P. 1995. Multivariate analysis of variance, pp. 245-276. In Grimm, L. G. and P. R. Yarnold (eds.), Reading and Understanding Multivariate Statistics. American Psychological Association, Washington,
DC.
WEISBERG,S. 2005. Applied Linear Regression. John Wiley, Hoboken,
NJ. 303 pp.
WELCH, B. L. 1936. Specification of rules for rejecting too variable a product, with particular reference to an electric lamp problem. J. Royal Statist. Soc., Suppl. 3: 29-48.
WELCH, B. L. 1938. The significance of the difference between two means when the population variances are unequal. Biometrika 29: 350-361.
WELCH, B. L. 1947. The generalization of "Student's" problem when several different population variances are involved. Biometrika 34: 28-35.
WELCH, B. L. 1951. On the comparison of several mean values: An alternative approach. Biometrika 38: 330-336.
WHEELER, S. and G. S. WATSON. 1964. A distribution-free two-sample test on the circle. Biometrika 51: 256-257.
WHITE, J. S. 1970. Tables of normal percentile points. J. Amer. Statist. Assoc. 65: 635-638.
WILCOXON, F. 1945. Individual comparisons by ranking methods. Biometrics Bull. 1: 80-83.
WILCOXON, F., S. K. KATTI, and R. A. WILCOX. 1970. Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test, pp. 171-259. In H. L. Harter and D. B. Owen (eds.), Selected Tables in Mathematical Statistics, Vol. I. Markham Publishing, Chicago.
WILCOXON, F. and R. A. WILCOX. 1964. Some Rapid Approximate Statistical Procedures. Lederle Laboratories, Pearl River, NY. 59 pp.
WILKINSON, L. and G. E. DALLAL. 1977. Accuracy of sample moments calculations among widely used statistical programs. Amer. Statist. 31: 128-131.
WILKS, S. S. 1932. Certain generalizations in the analysis of variance. Biometrika 24: 471-494.
WILKS, S. S. 1935. The likelihood test of independence in contingency tables. Ann. Math. Statist. 6: 190-196.
WILLIAMS, K. 1976. The failure of Pearson's goodness of fit statistic. Statistician 25: 49.
WILSON, E. B. 1927. Probable inference, the law of succession, and statistical inference. J. Amer. Statist. Assoc. 22: 209-212.
WILSON, E. B. 1941. The controlled experiment and the four-fold table. Science 93: 557-560.
WILSON, E. B. and M. M. HILFERTY. 1931. The distribution of chi-square. Proc. Nat. Acad. Sci., Washington, DC. 17: 684-688.
WINSOR, C. P. 1948. Factorial analysis of a multiple dichotomy. Human Biol. 20: 195-204.
WINER, B. J., D. R. BROWN, and K. M. MICHELS. 1991. Statistical Principles in Experimental Design. 3rd ed. McGraw-Hill, New York. 1057 pp.
WINTERBOTTOM, A. 1979. A note on the derivation of Fisher's transformation of the correlation coefficient. Amer. Statist. 33: 142-143.
WITTKOWSKI, K. M. 1998. Versions of the sign test in the presence of ties. Biometrics 54: 789-791.
YATES, F. 1934. Contingency tables involving small numbers and the X2 test. J. Royal Statist. Soc. Suppl. 1: 217-235.
YATES, F. 1964. Sir Ronald Fisher and the Design of Experiments. Biometrics 20: 307-321.
YATES, F. 1984. Tests of significance for 2 × 2 contingency tables. J. Royal Statist. Soc., Ser. A, 147: 426-449. Also: Discussion by G. A. Barnard, ibid. 147: 449-450; D. R. Cox, ibid. 147: 451; G. J. G. Upton, ibid. 147: 451-452; I. D. Hill, ibid. 147: 452-453; M. S. Bartlett, ibid. 147: 453; M. Aitkin and J. P. Hinde, ibid. 147: 453-454; D. M. Grove, ibid. 147: 454-455; G. Jagger, ibid. 147: 455; R. S. Cormack, ibid. 147: 455; S. E. Fienberg, ibid. 147: 456; D. J. Finney, ibid. 147: 456; M. J. R. Healy, ibid. 147: 456-457; F. D. K. Liddell, ibid. 147: 457; N. Mantel, ibid. 147: 457-458; J. A. Nelder, ibid. 147: 458; R. L. Plackett, ibid. 147: 458-462.
YOUNG, L. C. 1941. On randomness in ordered sequences. Ann. Math. Statist. 12: 293-300.
YUEN, K. K. 1974. The two-sample trimmed t for unequal population variances. Biometrika 61: 165-170.
YULE, G. U. 1900. On the association of attributes in statistics. Phil. Trans. Royal Soc. Ser. A 194: 257-319.
YULE, G. U. 1912. On the methods of measuring the association between two attributes. J. Royal Statist. Soc. 75: 579-642.
ZABELL, S. L. 2008. On Student's 1908 article "The probable error of a mean." J. Amer. Statist. Assoc. 103: 1-7. Also: Comment, by S. M. Stigler, ibid. 103: 7-8; J. Aldrich, ibid. 103: 8-11; A. W. F. Edwards, ibid. 103: 11-13; E. Seneta, ibid. 103: 13-15; P. Diaconis and E. Lehmann, ibid. 103: 16-19. Rejoinder, by S. L. Zabell, ibid. 103: 19-20.
ZAR, J. H. 1967. The effect of changes in units of measurement on least squares regression lines. BioScience 17: 818-819.
ZAR, J. H. 1968. The effect of the choice of temperature scale on simple linear regression equations. Ecology 49: 1161.
ZAR, J. H. 1972. Significance testing of the Spearman rank correlation coefficient. J. Amer. Statist. Assoc. 67: 578-580.
ZAR, J. H. 1978. Approximations for the percentage points of the chi-squared distribution. J. Royal Statist. Soc. Ser. C. Appl. Statist. 27: 280-290.
ZAR, J. H. 1984. Biostatistical Analysis. 2nd ed. Prentice Hall, Englewood Cliffs, NJ. 718 pp.
ZAR, J. H. 1987. A fast and efficient algorithm for the Fisher exact test. Behav. Res. Meth., Instrum., & Comput. 19: 413-415.
ZELEN, M. and N. C. SEVERO. 1964. Probability functions, pp. 925-995. In M. Abramowitz and I. Stegun (eds.), Handbook of Mathematical Functions, National Bureau of Standards, Washington, DC. (Also, Dover, New York, 1965.)
ZERBE, G. O. and D. E. GOLDGAR. 1980. Comparison of intraclass correlation coefficients with the ratio of two independent F-statistics. Communic. Statist.-Theor. Meth. A9: 1641-1655.
ZIMMERMAN, D. W. 1987. Comparative power of Student T test and Mann-Whitney U test for unequal sample sizes and variances. J. Exper. Educ. 55: 171-179.
ZIMMERMAN, D. W. 1994a. A note on the influence of outliers on parametric and nonparametric tests. J. Gen. Psychol. 121: 391-401.
ZIMMERMAN, D. W. 1994b. A note on modified rank correlation. J. Educ. Behav. Statist. 19: 357-362.
ZIMMERMAN, D. W. 1996. A note on homogeneity of variances of scores and ranks. J. Exper. Educ. 64: 351-362.
ZIMMERMAN, D. W. 1997. A note on interpretation of the paired-samples t test. J. Educ. Behav. Statist. 22: 349-360.
ZIMMERMAN, D. W. 1998. Invalidation of parametric and nonparametric tests by concurrent violation of two assumptions. J. Exper. Educ. 67: 55-68.
ZIMMERMAN, D. W. 2000. Statistical significance levels of nonparametric tests biased by heterogeneous variances of treatment groups. J. Gen. Psychol. 127: 354-364.
ZIMMERMAN, D. W. and B. D. ZUMBO. 1993. Rank transformations and the power of the Student t test and Welch t' test for non-normal populations with unequal variances. Can. J. Exper. Psychol. 47: 523-539.
Author Index
Acton, F. S., 357
Adrian, Y. E. O., 67
Ageel, M. I., 202, 238
Agresti, A., 494, 510, 543, 545-546, 549, 551, 577
Aiken, L. S., 430, 431, 439, 440, 444, 458
Ajne, B., 629, 846
Akritas, M. G., 249
Alexander, D. A., 218, 761
Alf, E. F. Jr., 474
Algina, J., 440
Anderson, N. H., 162
Anderson, R. E., 316, 319, 324, 326, 420, 431, 432, 434, 577
Anderson, S., 551
Anderson, S. L., 200, 221
Anderson-Cook, C. M., 617
Andre, D., 837
Andrews, F. C., 214
Anscombe, F. J., 293, 295, 557, 595
Appelbaum, M. I., 291
Armitage, P., 1, 560, 561
Asimov, I., 3, 49, 329, 520
Babin, B. J., 316, 319, 324, 326, 420, 431, 432, 434, 577
Babington Smith, B., 280, 449, 452
Bahadur, R. R., 171
Baker, L., 588
Ball, W. W. R., 75
Bancroft, T. A., 263
Barnard, G. A., 99, 497, 502, 561
Barnett, V., 20, 206, 494
Barnhart, H. X., 417
Barr, D. R., 144
Bartholomew, D. J., 398
Bartlett, M. S., 130, 220, 287, 291, 293, 295, 296, 466
Basharin, G. P., 174
Bates, D. M., 448
Batschelet, E., 49, 607, 614, 615, 617, 618, 621, 622, 626, 628, 629, 632, 634, 636, 637, 640, 642, 645, 647, 649, 658, 662, 664
Bausell, R. B., 207
Bayarri, M. L., 79
Beall, G., 295
Becker, M. A., 230, 231
Beckman, R. J., 20
Beckmann, P., 67
Behrens, W. V., 137
Belanger, A., 94, 126, 127, 502
Bellinson, H. R., 601
Benard, A., 280
Bennett, B. M., 565, 570, 869
Berkson, J., 502
Bernoulli, J., 49
Berry, G., 560
Berry, K. J., 510
Berry, W. D., 432, 448
Bertrand, P. V., 432
Best, D. J., 138, 141, 596, 852
Bhattacharjee, G. P., 852
Bhattacharyya, G. K., 629
Bigelow, R. E., 463
Birkes, D., 332, 337, 420, 425, 430
Birnbaum, Z. W., 743
Bissell, A. F., 356
Black, W. C., 316, 319, 324, 326, 420, 431, 432, 434, 577
Blair, C. C., 227, 250
Blair, R. C., 171, 172, 184, 249-250
Blatner, D., 67
Bliss, C. I., 41, 463, 543, 660
Bloomfield, P., 425, 660
Blyth, C. R., 543, 546, 551
Boehnke, K., 202, 214
Bohning, D., 543
Boland, P. J., 99, 585
Boneau, C. A., 136, 137, 172
Boomsma, A., 69, 101
Bowker, A. H., 572
Bowley, A. L., 90
Bowman, K. O., 43, 94, 95, 174
Box, G. E. P., 136, 152, 200, 221, 249, 287, 325
Box, J. F., 190
Bozivich, H., 263
Bradley, J. V., 837
Bradley, R. A., 183
Bray, J. S., 326
Brillouin, L., 45
Brits, S. J. M., 312
Brower, J. E., 42, 43, 46
Brown, B. L., 355
Brown, D. R., 213, 302, 303, 869
Brown, L. D., 543
Brown, M. B., 153, 155, 202, 203, 205, 239, 262
Brown, S. M., 283
Browne, R. H., 144
Brownlee, K. A., 263, 543, 598, 834
Brunner, E., 250
Buckle, N., 169
Budescu, D. V., 291
Büning, H., 200, 202, 205
Burke, P. J., 510
Burr, E. J., 852
Burstein, H., 547
Byars, J. A., 474, 503
Cacoullos, T., 384
Caffo, B., 545, 551
Cai, T. T., 543
Cajori, F., 22, 23, 28, 31, 37, 41, 45, 53, 55, 67, 71, 100, 158, 291, 293, 329, 520, 546, 605, 612
Calitz, F., 483
Camilli, G., 502
Carmer, S. G., 232, 238
Carr, W. E., 567
Casagrande, J. T., 553
Castellan, N. J. Jr., 402, 482
Cattell, R. B., 398
Caudill, S. B., 337
Chakraborti, S., 400, 405, 483
Chakravarti, I. M., 589
Chang, H., 250
Chapman, J.-A. W., 480
Chase, W., 502
Chatterjee, S., 420, 430, 434, 443, 577
Chen, H. J., 95
Chen, X., 542, 554
Chinchilli, V., 417
Chow, B., 402
Christensen, R., 510
Church, J. D., 152, 153
Cicchitelli, G., 102
Clark, V. A., 304, 420
Cleveland, W. S., 337, 430
Clinch, J. J., 200, 202, 205
Clopper, C. J., 543
Coakley, C. W., 537
Cochran, W. G., 136, 148, 182, 206, 281, 302, 303, 304, 347, 474, 475, 501, 503, 506, 528, 530, 547, 553, 589
Cohen, A., 437, 443
Cohen, A. S., 138
Cohen, B. H., 567
Cohen, J., 117, 151, 182, 207, 355, 367, 387, 392, 430, 431, 439, 440, 458, 539
Cohen, P., 430, 431, 439, 440, 458
Coles, C. W., 49
Connett, J. E., 572
Conover, J. M., 775
Conover, W. J., 156, 171, 172, 184, 214, 227, 279, 359, 400, 403, 404, 405, 502, 762
Cook, R. D., 20
Cottingham, K. L., 355
Coull, B. A., 543, 545-546, 551
Cowden, D. J., 29
Cowles, S. M., 78
Cox, D. R., 287, 502
Cox, G. M., 148, 302, 303, 553
Cramer, E. M., 431
Cramer, H., 407
Cressie, N., 477, 480
Critchlow, D. E., 241
Croxton, F. E., 29
Cureton, E. E., 187
D'Agostino, R. B., 91, 94, 95, 126, 127, 482, 502
D'Agostino, R. B. Jr., 94, 126, 127
Dale, A. I., 585
Dallal, G. E., 39
Daniel, W. W., 337, 402, 482
Dapson, R. W., 340, 463
Darlington, R. B., 429, 440
Darnell, A. C., 645
DasGupta, A., 543
Davenport, E. C. Jr., 407
Davenport, J. M., xi, 138, 141, 218, 277, 279
David, F. N., 49, 50
David, H. A., 2, 12, 24, 27, 33, 36, 37, 38, 39, 42, 68, 69, 72, 74, 78, 79, 86, 88, 89, 102, 106, 112, 114, 162, 226, 229, 253, 262, 270, 286, 291, 296, 300, 321, 331, 379, 398, 419, 423, 466, 469, 471, 492, 510, 519, 520, 521, 585, 591, 656, 676
Davis, C., 78
Day, R. W., 232
De Jonge, C., 774
De Lange, E. M., 156
Delaney, H. D., 202, 274, 302, 304
Dempster, A. P., 281
Denison, D. R., 437
Desmond, A. E., 328, 380, 438
Desu, M. M., 158
Detre, K., 595
Dharmadhikari, S., 41
Dickey, D. A., 302, 304
Dickinson, P. C., 402
Dijkstra, J. B., 205
Dixon, W. J., 30, 41, 138
Doane, D. P., 10
Dodge, Y., 67, 332, 337, 420, 425, 430
Donaldson, T. S., 136, 200, 202
Donner, A., 413, 580
Dorans, N. J., 409, 411
Draper, N. R., 153, 332, 359, 420, 430, 431, 433, 434, 443, 458
Drasgow, F., 409, 411
Duncan, D. B., 232
Dunn, O. J., 240, 241, 243, 304, 420
Dunnett, C. W., 230, 231, 234, 235, 734
Dupont, W. D., 565
Durand, D., 618, 625, 626, 843, 844
Dwass, M., 241
Dyck, P. J., 599
Dyke, G., 469
Eason, G., 49
Eberhardt, K. R., 549
Edgington, E. S., 602
Edwards, A. W. F., 474
Eells, W. C., 41, 42, 380
Einot, I., 232
Eisenhart, C., 73, 99, 200, 331, 834
El-Sanhurry, N. A., 407
Embleton, B. J. J., 607
Evans, J. O., 852
Everitt, B. S., 510, 646, 649
Ezekiel, M., 428
Fahoome, G., 169, 186, 218, 277, 757, 774
Feir-Walsh, B. J., 214, 215
Feldman, S. E., 565
Feldman, S., 432, 448
Feldt, L. S., 274
Fellingham, S. A., 187
Feltz, C. J., 125, 161, 162, 221, 224, 485
Feron, R., 585
Fidell, L. S., 316, 420, 510
Fieller, E. C., 400
Fienberg, S. E., 510
Fineberg, N. S., 138
Fingleton, B., 585, 607, 614, 615, 618, 656
Fisher, N. I., 607, 608, 618, 624, 636, 637, 640, 642, 654, 656, 658, 659, 660, 661, 662, 663, 664
Fisher, R. A., 2, 78, 79, 99, 130, 136, 137, 138, 270, 284, 302, 341, 384, 395, 413, 414, 474, 494, 502, 503, 561, 676
Fisz, M., 482
Fix, E., 169
Fleiss, J. L., 543, 546, 553, 554, 577
Fligner, M. A., 172, 174, 214, 241, 549
Fong, D. Y. T., 537
Forrester, G. J., 136, 141, 202
Forsythe, A. B., 153, 155, 202, 203, 205, 239, 262
Francia, R. S., 95
Franklin, L. A., 774, 869
Frawley, W. H., 454
Freeman, M. F., 291, 557
Freund, R. J., 349
Friedman, M., 277
Friendly, M., 494, 496, 510
Frigge, M., 114
Fujino, Y., 543
Fuller, H. A., 398
Gabriel, K. R., 218, 232, 241
Gadsden, R. J., 636
Gaito, J., 162
Games, P. A., 136, 202, 205, 231, 232
Gamst, G., 577
Gans, D. J., 141
Garside, G. R., 502
Garson, G. D., 577
Gart, J. J., 574, 592
Gartside, P. S., 221
Geary, R. C., 431
Geiringer, H., 618
Gentleman, J. F., 144
George, E. O., 716
Gettinby, G., 49
Ghent, A. W., 43, 46, 567, 599, 603, 666
Gibbons, J. D., 400, 402, 405, 408, 449, 452, 483
Gill, J. L., 138
Girden, E. R., 274
Glantz, S. A., 304, 312, 357, 420, 432, 434, 458, 577
Glass, G. V., 137, 200, 202, 408, 409
Glejser, H., 360
Glidden, D. V., 577
Goldgar, D. E., 413, 414
Goodman, L. A., 510
Gorman, J. W., 437
Govindarajulu, Z., 95
Green, S. B., 431
Greenhouse, S. W., 716
Greenwood, J. A., 618, 625, 626, 843, 844
Grizzle, J. E., 502
Groeneveld, R. A., 90
Guarino, A. J., 577
Guenther, W. C., 208
Gullberg, J., 22, 23, 28, 37, 45, 58, 67, 100, 158, 519, 520
Gumbel, E. J., 618
Gurland, J., 41, 138
Haber, M., 417, 501, 502, 516
Hadi, A. S., 420, 430, 434, 443, 577
Hafner, K., 277
Hahn, G. J., 123, 158
Haight, F. A., 585
Hair, J. F. Jr., 316, 319, 324, 326, 420, 431, 432, 434, 577
Halberg, F., 660
Hald, A., 519
Hamaker, H. C., 677
Hamdan, M. A., 595
Hamilton, D., 432, 437
Han, C., 263
Hand, D. J., 316, 326
Härdle, W., 337
Hardy, M. A., 443
Hardyck, C. D., 227
Harlow, L. L., 79
Harris, J. A., 380
Harrison, D., 636
Hart, B. I., 601
Harter, H. L., 483, 732
Hartigan, J. A., 494, 510
Hartley, H. O., 46, 207, 215, 263, 277, 400, 588, 595, 776, 777, 855, 866
Harwell, M. R., 200, 202, 205, 439
Hastings, C. Jr., 676
Hauck, W. W., 551, 580
Haviland, M. B., 502
Havlicek, L. L., 136
Hawkes, A. G., 677
Hawkins, D. M., 446
Hayes, W. S., 200, 202, 205
Hays, W. L., 237, 238
Hayter, A. J., 230
Healy, M. J. R., 428
Heise, M., 537
Heisey, D. M., 211
Henderson, D. A., 437
Herranz Tejedor, I., 384, 400
Hershberger, S. L., 316
Herson, J. H., 291
Hettmansperger, T. P., 163
Heyde, C. C., 332
Hicks, C. R., 869
Higgins, J. J., 171, 172, 184, 227, 249-250
Hilferty, M. M., 675
Hilton, J. F., 502
Hines, W. G. S., 182, 263
Hjelm, H. F., 383
Hoaglin, D. C., 114
Hodges, J. L. Jr., 169, 172, 629, 846
Hoenig, J. M., 211
Holder, R. L., 432
Hollander, M., 156, 163, 172, 183, 337, 430, 455, 482, 547, 549
Hopkins, K. D., 408, 409, 502
Hora, S. C., 279
Hornick, C. W., 553
Hosmane, B., 510, 516
Hosmer, D. W. Jr., 577
Hotelling, H., 41, 323, 325, 326, 392, 395, 400, 645, 647
Howell, D. C., 232, 275, 408, 409, 420, 440, 510
Howell, J. F., 231, 232
Hsu, J. C., 226, 227
Hsu, P. L., 136
Hubbard, R., 79
Huber, P., 332, 430
Huberty, C. J., 428
Huck, S. W., 266
Huff, D., 7
Huitema, B. E., 397
Hummel, T. J., 326
Hunter, W. G., 153
Hurlbert, S. H., 103, 142, 591
Hutcheson, K., 43, 174
Hutchinson, T. P., 480, 510
Hutson, A. D., 548
Huygens, C., 49
Huynh, H., 274
Hyer, L. A., 474
Iglewicz, B., 114
Iman, R. L., 187, 218, 227, 277, 279, 359, 400, 403, 404, 405, 762, 775
International Business Machines Corporation (IBM), 852
Irwin, J. O., 99, 561
Ives, K. H., 408
Jaccard, J., 230, 231
Jacques, J. A., 337
Jaeger, R. G., 250
Jammalamadaka, S. R., and SenGupta, A., 607, 618, 636
Jeyaratnam, S., 386
Johnson, B. R., 67
Johnson, M. E., 89
Johnson, R. A., 315, 324, 629
Jolliffe, I. T., 402
Jupp, P. E., 607, 614, 615, 617, 618, 625, 632, 634, 637, 640, 654, 662, 664
Kaiser, M., 274
Kanji, G. K., 636
Karr, J. R., 43, 46, 174, 175, 855
Katti, S. K., 757
Kemp, A. W., 46, 855
Kempthorne, O., 502
Kendall, M. G., 99, 166, 287, 398, 399, 400, 402, 408, 432, 449, 452, 502
Kennard, R. W., 437
Kennedy, J. J., 510
Kent, R. H., 601
Keough, M. J., 304
Kepner, J. L., 277
Keselman, H. J., 200, 202, 203, 205, 214, 215, 227, 230, 231, 232, 239
Keselman, J. C., 200, 203, 205
Keuls, M., 232
Khamis, H. J., 483, 484, 485
Kihlberg, J. K., 291
Kim, C., 502
Kim, S.-H., 138
Kirby, W., 89
Kirk, R. E., 150, 208, 232, 238, 273, 274, 302, 869
Klein, M., 577
Klein, S., 29
Kleinbaum, D. G., 577
Kleiner, B., 494, 510
Kluger, E., 565
Knoke, D., 510
Koehler, K. J., 474, 510, 516
Kohr, R. L., 136, 202, 205
Kolmogorov, A., 481
Kornbrot, D. E., 188
Korner, T. W., 482
Kowalchuk, R. K., 203
Kowalski, C. J., 383, 912
Kraft, C., 169
Kramer, C. Y., 230
Kroll, N. E. A., 502
Kromrey, J. D., 502
Kruskal, W. H., 163, 214, 218, 398, 400, 402
Krutchkoff, R. G., 214
Kuiper, N. H., 94, 664
Kulkarni, P. M., 255
Kutner, M. H., 420, 430, 434, 458, 577
Kvalseth, T. O., 356, 428, 448
Kwan, C. W., 537
Lachenbruch, P. A., 218, 241
Lam, K. F., 537
Lam, K. S. L., 537
Lamb, R. E., 483
Lancaster, H. O., 466, 475, 506
Landry, L., 91
Laplace, P. S., 543
Larntz, K., 474, 480, 510, 516
Larsen, W. A., 111
Laubscher, N. F., 156
Lawal, H. B., 480
Lawley, D. N., 323
Layne, B. H., 266
Leadbetter, M. R., 407
Leadbetter, S. D., 144
LeBlanc, R., 763
Lee, A. F. S., 138
Lee, A. J., 349, 356, 420, 434, 654, 661, 662
Lee, J.-K., 660
Leeming, D. J., 67
Lefante, J. J., 192
Lehmann, E. L., 79, 99, 152, 168, 172, 429, 431, 676
Lemeshow, S., 577
Lemmer, H. H., 312
Lennon, J. T., 355
Leonhardt, D., 98
Lenth, R. A., 116, 147
Lepage, Y., 91
Lepine, D., 274
Leser, C. E. V., 431
Leslie, P. H., 567
Levene, H., 153, 602
Levin, B., 542, 554
Levin, E., 543, 546, 553, 577
Levin, J. R., 197
Levitin, D. J., 626
Levy, K. J., 205, 232, 244, 247, 396, 557
Lewis, T., 20, 206, 494, 607
Lewontin, R. C., 42, 161
Leyton, M. K., 567
Li, L., 454
Li, Y.-F., 207
Liddell, D., 502
Light, R. J., 224, 510
Lin, J.-T., 675
Lin, L. I-K., 414, 417
Lindsay, R. B. (1976), 624
Ling, R. F., 39
Lix, L. M., 200, 203, 205
Lloyd, C. J., 565
Lloyd, M., 43, 46, 174, 175, 855
Lockhart, R. A., 664
Lowe, V. W. Jr., 89
Lucas, P. A., 202
Ludwig, J. A., 591
Lund, U., 660
Maag, U. R., 641
Mack, C., 502
Mack, G. A., 280, 281
MacKinnon, W. J., 548
Magurran, A. E., 42, 46
Maity, A., 139
Morrison, D. F., 321 Moser, B. K., 141 Mosimann, J. E., 49 Mourad, S. A., 428 Muddapur, M. Y., 386 Mulaik, S. A., 79 Murray, R., 230 Murty, Y. N., 41 Myers,J. L.,200, 226, 235,263,
Index
927
302, 303, 304
Nachtsheim, C. J., 420, 430, 434, 458, 577
Nagasenker, P. B., 220
Nakamura, E., 565
Neave, H. R., 337
Nelson, W., 660
Nemenyi, P., 240, 280
Neter, J., 420, 430, 434, 458, 577
Neumann, N., 250
Newcombe, R. G., 543, 546, 551
Newman, D., 232, 250
Neyman, J., 79, 101, 220, 394, 478, 508-509, 561
Nicewander, W. L., 380
Noether, G. E., 162, 163, 277, 482, 487, 537, 548
Norris, R. C., 383
Norton, H. W., 300
Norusis, M., 337
Norwood, P. K., 281
O'Brien, P. C., 599
O'Brien, R. G., 274
O'Cinneide, C. A., 41
O'Connor, J. J., 21, 31, 45, 58
O'Neill, M. E., 153
Odeh, R. E., 852
Odum, E. P., 43, 174
Ogawa, J., 141
Olds, C. C., 200, 202, 205
Olds, E. G., 774
Olson, C. L., 323, 324, 325
Olsson, U., 409, 411
Olver, F. W. J., 846
Ord, K., 398
Ostle, B., 349
Otieno, B. S., 617
Otten, A., 774
Ouimet, R., 221
Overall, J. E., 502, 553
Owen, D. B., 136, 763, 774
Ozer, D. J., 381
Pabst, M. R., 400
Page, W., 41
Paik, M. C., 543, 546, 553, 577
Pampel, F. C., 577, 580
Papanastasiou, B. C., 669
Parshall, C. G., 502
Patel, J. K., 108, 123
Patil, K. D., 283
Paul, S. R., 390, 392, 395, 396
Paull, A. E., 263
Pazer, H. L., 555
Pearson, E. S., 46, 74, 79, 94, 95, 99, 101, 102, 152, 207, 215, 220, 277, 400, 408, 419, 466, 469, 478, 501, 502, 508-509, 543, 588, 595, 776, 777, 855, 866
Pearson, K., 17, 46, 66, 89, 380, 492
Peck, E. A., 337, 357, 420, 430, 577
Peckham, P. D., 137, 200, 202
Pedhazur, E. J., 420, 440, 443, 577
Peters, W. S., 1, 304
Peterson, N. L., 136
Petrinovich, L. F., 227
Pettitt, A. N., 487, 734, 740
Pfanzagl, J., 86, 106
Pielou, E. C., 43, 44, 591
Pike, M. C., 553
Pillai, K. C. S., 323
Pirie, W. R., 595
Pitman, E. J. G., 182
Plackett, R. L., 99, 502
Please, N. W., 102
Policello, G. E. II, 172, 174
Pollak, M., 182
Posten, H. O., 136
Pratt, J. W., 184
Preece, D. A., 591
Price, B., 434, 443
Przyborowski, J., 595
Quade, D., 218, 279, 402, 405, 762
Quenouille, M. H., 293, 656
Quinn, G. P., 232, 304
Ractliffe, J. F., 102
Radlow, R., 474
Raff, M. S., 523
Raghavarao, D., 158
Rahe, A. J., 184
Rahman, M. M., 95
Ramsey, P. H., 136, 137, 141, 169, 232, 537
Ramsey, P. P., 537
Ramseyer, G. C., 231
Ranney, G. B., 340
Rao, C. R., 413, 561, 589
Rao, J. S., 626
Rayleigh, 624
Rayner, J. C. W., 138, 141
Read, T. R. C., 474, 480
Reid, C., 79
Rencher, A. C., 323, 324, 325, 326, 327
Reynolds, J. F., 591
Rhoades, H. M., 502
Richardson, J. T. E., 501, 502
Roberts, D. E., 852
Roberts, H. V., 598
Robertson, E. F., 21, 31, 45, 58
Robinson, D. H., 277
Rodgers, J. L., 380
Roe, A., 42
Rogan, J. C., 200, 214, 215, 230, 231, 232
Rohlf, F. J., 312
Roscoe, J. T., 474, 503
Ross, G. J. S., 591
Rothery, P., 414
Rouanet, H., 274
Routledge, R. D., 432
Roy, S. N., 323
Royston, E., 329
Royston, J. P., 95
Rubin, D. B., 130
Rubinstein, E. N., 200, 202, 205
Rudas, T., 480, 516
Russell, G. S., 626
Rust, S. W., 214
Ryan, G. W., 144
Ryan, T. P., 459
Sahai, H., 202, 238
Sakoda, J. M., 567
Salama, I., 402, 405
Sampson, A. R., 281
Sanders, J. R., 137, 200, 202
Satterthwaite, F. E., 138
Savage, I. R., 162, 403
Savage, L. J., 18, 250, 494
Sawilowsky, S. S., 141, 227, 249-250
Schader, M., 545
Scheffé, H., 138, 213, 237, 869
Schenker, N., 144
Schlittgen, R., 174
Schmid, F., 545
Schucany, W. R., 454
Schuster, C., 458
Schutz, W. E., 291
Schwertman, N. C., 588
Scott, D. W., 337, 430
Seal, H. L., 331, 379
Seaman, J. W. Jr., 250
Seaman, S., 440
Seber, G. A. F., 349, 356, 420, 434, 448
Sen, P. K., 502
Senders, V. L., 2
Seneta, E., 332
Serlin, R. C., 136, 197, 202, 215, 439, 455
Sethuraman, J., 455
Severo, N. C., 675, 677, 715, 764, 765
Shaffer, J. P., 238, 239
Shah, A. K., 192, 555
Shannon, C. E., 43
Shapiro, S. S., 91, 95
Sharma, S., 316, 324
Shearer, P. R., 266, 304
Shenton, L. R., 43, 94, 174
Sherman, M., 139
Sheskin, D. J., 408, 409, 411
Shiboski, S. C., 577
Shooter, M., 231
Sichel, H. S., 595
Siegel, S., 2, 156, 402, 482
Silva Mato, A., 384, 400, 502
Simonoff, J. S., 494, 510, 511, 542
Simpson, G. G., 42
Skillings, J. H., 280, 281
Sligo, J. R., 326
Slinker, B. K., 304, 312, 357, 420, 432, 434, 458, 577
Smeeton, N. C., 163, 482, 546
Smirnov, N. V., 481
Smith, D. E., 55, 67
Smith, H., 138, 332, 359, 420, 430, 431, 433, 434, 443, 458
Smith, J. A., 572
Smith, P. G., 553
Smith, R. A., 230
Smitley, W. D. S., 172
Snedecor, G. W., 1, 152, 182, 190, 206, 302, 304, 347, 438, 447
Sokal, R. R., 312
Solomons, L. M., 41
Somerville, P. N., 230
Song, J., 417
Soper, H. E., 585
Spearman, C., 398, 399
Sprent, P., 163, 482, 546
Spurr, B. D., 640, 642
Srinivasan, R., 138
Srivastava, A. B. L., 136, 137, 200
Srivastava, M. S., 316, 324
Starbuck, R. R., 502
Starmer, C. F., 502
Staum, R., 281
Steel, R. G. D., 241, 243, 302, 304
Steffens, F. W., 156
Steiger, J. H., 79, 391, 395
Steiger, W. L., 425
Stelzl, I., 510, 516
Stephens, M. A., 487, 632, 636, 664, 734, 740, 843
Stevens, G., 247
Stevens, G. R., 141
Stevens, J., 274, 316, 317, 320, 324, 325, 326
Stevens, S. S., 2, 162
Stevens, W. L., 665
Stigler, S. M., 1, 66, 74, 197, 317, 380, 438, 585
Stoker, D. J., 187
Stoline, M. R., 230
Stonehouse, J. M., 136, 141, 202
Storer, B. E., 502
Street, D. J., 190, 249
Struik, D. J., 520
Stuart, A., 287, 398, 408, 432, 502
Student, 99
Studier, E. H., 463
Sutherland, C. H. V., 50
Sutton, J. B., 340, 428
Swanson, L. A., 555
Swanson, M. R., 232, 238
Swed, F. S., 834
Symonds, P. M., 380
Tabachnick, B. G., 316, 420, 510
Tamhane, A. C., 232
Tan, W. Y., 136, 152, 202
Tang, P. C., 866
Tapia Garcia, J. M., 400
Tate, M. W., 283, 474
Tatham, R. L., 316, 319, 324, 326, 420, 431, 432, 434, 577
Taylor, C. C., 316, 326
Taylor, J., 203
Tcheng, T.-K., 231
Thiele, T. N., 197, 249
Thigpen, C. C., 340
Thode, H. C. Jr., 20, 90, 91, 92, 94, 95, 206
Thomas, G. E., 398, 400
Thöni, H., 287, 290, 293
Thornby, J. I., 349
Tiku, M. L., 200, 207, 852, 853
Tingey, F. H., 743
Toan, N. K., 763
Tolman, H., 41
Toman, R. J., 437
Tomarken, A. J., 136, 202, 215
Tong, Y. L., 660
Toothaker, L. E., 227, 231, 238, 239, 243, 250
Torrie, J. H., 302, 304
Tripathi, R. C., 41
Tukey, J., 156
Tukey, J. W., 111, 227, 269, 291, 557
Twain, M. (S. L. Clemens), 336
Tweddle, I., 855
Tytun, A., 554
Underwood, R. E., 570
UNISTAT Ltd., 210
Upton, G. J. G., 501, 502, 510, 551, 585, 607, 614, 615, 618, 656
Ury, H. K., 554, 555
van Eeden, C., 169
van Elteren, P., 277, 280
van Montfort, M. A. J., 774
Vidmar, T. J., 250
Vining, G. G., 337, 357, 420, 430, 577
Vittinghoff, E., 577
Vollset, S. E., 543, 546
von Ende, C. N., 42, 43, 46
von Eye, A., 458
von Mises, R., 618
von Neumann, J., 601
Voss, D. T., 262
Wald, A., 597
Walker, H. M., xi, 1, 21, 24, 26, 36, 37, 38, 41, 46, 49, 51, 68, 69, 72, 88, 89, 110, 220, 328, 380, 381, 398, 407, 408, 429, 438, 466, 855
Wallis, W. A., 214, 218, 449, 598, 601, 602
Wallraff, H. G., 644
Walls, S. C., 250
Wang, F. T., 337, 430
Wang, Y. H., 86, 106
Wang, Y. Y., 138
Watson, B., 852
Watson, G. S., 41, 281, 607, 632, 637, 640, 664
Watts, D. G., 448
Webne-Behrman, L., 197
Webster, J. T., 138, 141
Webster's Third New International Dictionary, Unabridged (Merriam-Webster, 2002), 669
Wechsler, S., 169
Wehrhahn, K., 141
Weinfurt, K. P., 326
Weisberg, S., 356, 420
Welch, B. L., 136, 138, 203
Well, A. D., 200, 226, 263, 302, 303, 304
Wells, G., 413
Werter, S. P. J., 205
West, S. G., 430, 431, 439, 440, 444, 458
Wheeler, S., 640
White, C., 595
White, J. S., 675, 677
Whitney, D. R., 163
Wichern, D. W., 315, 324
Wike, E. L., 152, 153
Wilcox, R. A., 183, 240, 280, 757
Wilcox, R. R., 203
Wilcoxon, F., 163, 183, 240, 280, 757
Wild, C. J., 448
Wilenski, H., 595
Wilk, M. B., 95
Wilkinson, L., 39
Wilks, S. S., 322, 478, 509
Williams, E. J., 632
Williams, K., 480
Wilson, E. B., 546, 551, 675
Windsor, C. P., 224
Winer, B. J., 213, 302, 303, 869
Winterbottom, A., 390
Wise, S. E., 250
Wittkowski, K. M., 537
Wolfe, D. A., 156, 163, 172, 337, 430, 482, 547, 549
Wolfowitz, J., 597
Wong, S. P., 202
Wood, G., 230, 231
Worthington, O. L., 337
Yang, M.-C., 510
Yates, F., 136, 138, 284, 302, 469, 501, 502, 561
Yeh, H. C., 136
Young, L. C., 601, 836
Yuen, K. K., 142
Yule, G. U., 41, 408
Zabell, S. L., 99
Zar, J. H., 42, 43, 46, 94, 174, 175, 361, 561, 618, 666, 675, 774, 855
Zelen, M., 675, 677, 715, 764, 765
Zerbe, G. O., 413, 414
Zimmerman, D. W., 136, 137, 138, 141, 172, 182, 214, 400
Zumbo, B. D., 137, 138, 141
Subject Index

Page numbers are given for the first or major occurrences of terms, topics, and symbols, and italicized page numbers refer to especially informative sites. In this index, the abbreviation "approx." means "approximation"; "c.l." means "confidence limit(s)"; "coef." means "coefficient(s)"; "correl." means "correlation"; "c.v." means "critical value"; "dist." means "distribution(s)"; "et al." (for Latin et alibi) means "and elsewhere"; "ff" means "and following"; "hyp." means "hypotheses"; "int. calc." means "intermediate calculation"; "p." means "population"; "prop." means "proportion(s)"; "regr." means "regression"; "s." means "sample"; and "stat." means "statistic(s)." As indicated in the text, a bar ("‾") over a symbol designates its mean, a caret ("ˆ") over a symbol designates its estimated or predicted value, and brackets ("[ ]") surrounding a single symbol refer to its value after coding.
A, A_X, A_Y (coding constant), 31, 47, 361, 867; A_c (SS of X for common regr.), 367, 373; A_i (SS of X for regr. i), 371; A_t (SS of X for total regr.), 373; A (int. calc.), 203, 660
a (number of levels), 250; a (number of groups), 283; a, a_i (s. Y intercept), 332, 369, 424, et al.; a_c (common Y intercept), 370; a, a_i (angle), 606, 612, et al.; ā (mean angle), 612; a (int. calc.), 671
Abscissa, 329
| | (absolute value), 37
Accuracy, 5-6
Achenwall, Gottfried, 1
Acrophase, 660
Additivity, 286
Airy, George Biddell, 38
Aitken, A. C., 423
Al-Hassar, 22
Alienation, coef. of (See Coefficient of nondetermination)
Allokurtosis, 89
American Statistical Association, 408
α (alpha), 669; α (significance level, probability of Type I error), 78-80; α(1), α(2) (1-tailed & 2-tailed α), 82; α (p. Y intercept), 330, 334, 423, et al.
Amplitude of cycle, 660
Analysis, Fourier, harmonic, & time-series, 660
Analysis of covariance, 284
  multivariate, 327
Analysis of variance, 189-218 (See also Mean, multisample hyp.)
  angular-data analog, 634-636
  balanced (orthogonal), 250
  blocks, 270-274, 304
    for angles, 636
    nonparametric, 277-283
  completely randomized, 192
  Class I & II, 200
  components of variance model, 200
  for correl., 426-430
  crossover, 302
  detectable difference, 213, 275-276, 305
  DFs & Fs in experimental designs, 869ff
  factorial, 249, 296-306, 872-873
    for angles, 636
    with nesting, 313
    with blocks or repeated measures, 304
  fixed effects (Model I) & random effects (Model II), 199-200, 261-262 (See also 2-factor)
  graphical display, 259-261
  Greco-Latin square, 303
  hierarchical (See nested)
  Kruskal-Wallis, 214-218
  Latin square, 299-303
  mixed-model (Model III), 262, 276-277
  multivariate, 274, 316-327
  nested, 299, 307-315, 873-874
    for angles, 636
  nonparametric, by ranks, 214-219
  number of testable groups, 212-213, 276
  power, 207-211, 213-214, 275-276, 305
    figure, 858ff
  random effects (Model II) & fixed effects (Model I), 199-200, 261-262 (See also 2-factor)
  for regr., 338-341
    multiple, 426-430
  repeated measures, 273-274, 304
    nonparametric, 277-283
  sample size, 211-212, 275-276, 305
    figure, 859ff
  single-factor (1-way), 190-218, 355
  subjects-by-treatment (subjects-trial), 273
  2-factor, 249-281, 872
    for angles, 636
  univariate, 316
ANCOVA (analysis of covariance), 284, 372ff
Andrews, D. F., 37
Angular data, 605-610ff
  correl., 654-659 (See also Correlation, angular)
  deviation, 617
    table, 838ff
  distance, 642
    hyp., 643-644
  dispersion, 615-618
    hyp., 644-645
  distance, 642-644
  1st-order & 2nd-order sample, 645
  goodness of fit, 662-665
  mean, 612-615, 621 (See also Mean angle)
  median & mode, 617-618 (See also Median angle)
  multisample hyp., 634-636, 644-645
    nonparametric, 642
  1-sample hyp., 624-632, 645-647
  paired-sample hyp., 652-654
  phase, 660
  randomness testing, 665-667
  regr., 659-660
  SD, 617
  2-sample hyp., 632-634, 644-645, 647-649
    nonparametric, 637-642, 649-652
  variance, 615
  weighted, 647
    hyp., 644-645
ANOVA, ANOV, AOV (See Analysis of variance)
Antilogarithm, 28, 290
Arbogast, Louis Francois Antoine, 45
Arbuthnot, John, 74, 537
Arcsine (arcsin), 291, 293
Arcsine transformation, 291-294, 357
Arctangent (atan), 611
Archimedes of Syracuse, 67
Association (See Contingency table; Correlation)
Assumptions (See individual tests)
*, ** (asterisk), statistical significance, 80
Attribute, 4
Average, 18, 21
Axis (See Coordinates)
Azimuth, 605
B (Bartlett stat.), 220; B_i (block sum), 283; B_c (sum of crossproducts for common regr.), 367, 373; B_i (sum of crossproducts for regr. i), 373; B_t (sum of crossproducts for total regr.), 373; B (int. calc.), 203, 661
b (number of levels), 250; b (number of blocks), 283; b, b_i (s. regr. coef.), 332, 365; b_c (common or weighted regr. coef.), 366, 374; b_i (s. partial regr. coef.), 424, et al.; b'_i (standardized s. partial regr. coef.), 433; b_X (s. coef. of regr. of X on Y), 380; b_Y (s. coef. of regr. of Y on X), 380; √b_1 (s. symmetry), 88; (√b_1)_{α,ν} (c.v. of √b_1), 126; √b_1 c.v. table, 776; b_2 (s. kurtosis), 88-89; (b_2)_{α,ν} (c.v. of b_2), 126; b_2 c.v. table, 777; b, b_i (int. calc.), 205, 671
Babbage, Charles, 21
Bartlett, Maurice Stevenson, 220
Bartlett test, 220-221
Batschelet, Edward, xiii, 607
Batschelet test, 630-631
Beaufort, Francis, 380
Behrens-Fisher testing, 137-142
  for ANOVA, 202-205, 214
Bernoulli, Jacques, 49, 51, 519
Bernoulli, Nicholas, 519
Bernoulli trial, 519
Bertin, Jacques, 494
Bessel, Friedrich Wilhelm, 110
Best linear unbiased estimate, 425
β (beta), 669; β (probability of Type II error), 79-80; β_j (p. regr. coef.), 330, 332, 372, et al.; β_j (p. partial regr. coef.; beta coefficient), 423; β'_j (standardized p. partial regr. coef.), 433; β_0 (p. Y intercept), 332; √β_1 (p. symmetry), 88; β_2 (p. kurtosis), 88
Bias, 18
Bickel, P. J., 37
Bimodality, 27
Biometrics & Biometrics Bulletin, 1
Biometrika, 1, 466
Binomial, 291, 498 (See also Dichotomous)
  coefficient, 520-521, 567-569
    table, 782ff
  c.l., 543-548
  dist., 291, 498, 519-524
    goodness of fit, 529-532
  negative, 195
  probability, 519-524
  proportions table, 785
  sampling, 526-528
Binomial test, 532-537
  with angles, 630
  c.v. table, 786ff
  power, detectable difference, & sample size, 539-542
  and Poisson dist., 592-595
  and sign test, 537-538
    power, detectable difference, & sample size, 539-542
Biometry & biostatistics, 1
Bishop, Y. M. M., 510
Bliss, C. I., 69
Block, 270-274, 277-283
BLUE (best linear unbiased estimate), 425
Bonferroni test, 232
Boscovich, Roger Joseph (Ruggiero Giuseppe), 332
Bouguer, Pierre, 28
Box, G. E. P., 102
Box plot, 112-114
Briggs, Henry, 158
Brillouin index, 45-46
C ("correction" factor), 215, 221, 258-259, 279; C (mean square successive difference stat.), 601; C_{α,n} (c.v. of C), 601; C c.v. table, 835ff; C (cosine), 613; C_{α,n} (sign test c.v.), 538; sign test c.v. table, 786ff; C_c (SS for common regr.), 367, 373; C_i (SS for regr. i), 373; C_t (SS for total regr.), 373; C_j (sum of cosines), 640; C_j (sum of column frequencies), 407, 492, 510; C, c (estimated c.v.), 671; C_p (Mallows stat.), 437; C_T (top-down concordance stat.), 455; nCX (binomial coef.) table, 782ff; nCX, $\binom{n}{X}$ (combinations), 55-57, 520, 562; C (int. calc.), 203, 660, 671
c_i (coef. for Scheffé test), 237, et al.; c_ij (element in inverted corrected SS & crossproducts matrix), 423; c (inverse of x), 423; c_i (int. calc.), 203
Cardano, Girolamo, 49
Carryover, 273, 302
Cartesian, 329
Cartesius, Renato, 329
Cavalieri, Bonaventura, 158
Cayley, A., 300
Celsius, Anders, 3
Center of gravity, 24, 615
Centile, 36
Central limit theorem, 72, 324
Central tendency, 18, 21-30
Chi square (See, e.g., Contingency table; Goodness of fit)
χ (chi), 669; χ² (chi square), 121, 366; χ²_{α,ν} (c.v. of χ²), 122, 468, et al.; χ² c.v. table & approx., 672ff; χ²_c (χ² corrected for continuity), 469, et al.; χ²_d (χ² for departure from trend), 561; χ²_c (bias-corrected χ²), 395-396; χ²_r (Friedman's χ²), 277, 452; (χ²_r)_{α,a,b} (c.v. of χ²_r), 278; χ²_r c.v. table, 763; (χ²_r)_c (corrected χ²_r), 279; χ²_{C-H} (Cochran-Haber χ²), 501; χ²_t (χ² for trend), 560
933
Comparative trial, 498 Concentration (See Disperson) Concordance, coef. of, 279-280, 449-456 multivariate correl., 449-456 top-down, 455-456 Confidence interval, 86,342 (See a/so Confidence limit) level, coef., probability, 86, 106 limit, 85-87 (See a/so Coef. of variation; Correlation; Correlation, angular; Logistic regr.; Median; Mean; Mean difference; Proportion; Regression; Variance) Conservative test, 136, 172, et al. Consistency of statistic, 19 Constant, 239 Contagious distribution, 591 Contamination of sample, 19 Contingency coef., 405-407 Contingency table, 405-408, 490-5/7 chi-square, 492-494 continuity correction, 500-502 heterogeneity, 504-506 multidimensional. 510-516 log-likelihood ratio (G test), 508-510, 516 with small frequencies, 503-504 subdividing, 506-508 2 X 2,405-407,497-503,504-50~ 510 (See also Fisher exact test; McNemar test) visualizing, 494-496 Continuity correction, 469-470, 480, 500-502, 510, et al. Control (See Multiple comparisons)
Coordinates,329,610-61J Correction term, 196,215,221 Correlation, 379-418 angular, 654-660 nonpararnetric, 660-663 biserial, 409-411 coef., 379-382ft coded data, 417 common, 391-392, 393-395 concordance, 449-456 c.I.,386, 388-390 angular, 656 Cramer