Bootstrap methods and their application
Cambridge Series on Statistical and Probabilistic Mathematics

Editorial Board: R. Gill (Utrecht), B. D. Ripley (Oxford), S. Ross (Berkeley), M. Stein (Chicago), D. Williams (Bath)

This series of high-quality upper-division textbooks and expository monographs covers all areas of stochastic applicable mathematics. The topics range from pure and applied statistics to probability theory, operations research, mathematical programming, and optimization. The books contain clear presentations of new developments in the field and also of the state of the art in classical methods. While emphasizing rigorous treatment of theoretical methods, the books contain important applications and discussions of new techniques made possible by advances in computational methods.
Bootstrap methods and their application

A. C. Davison
Professor of Statistics, Department of Mathematics, Swiss Federal Institute of Technology, Lausanne

D. V. Hinkley
Professor of Statistics, Department of Statistics and Applied Probability, University of California, Santa Barbara

Cambridge University Press

Published by the Press Syndicate of the University of Cambridge
The Pitt Building, Trumpington Street, Cambridge CB2 1RP, United Kingdom
The Edinburgh Building, Cambridge CB2 2RU, United Kingdom
40 West 20th Street, New York, NY 10011-4211, USA
10 Stamford Road, Oakleigh, Melbourne 3166, Australia

© Cambridge University Press 1997

This book is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 1997
Printed in the United States of America
Typeset in TeX Monotype Times

A catalogue record for this book is available from the British Library

Library of Congress Cataloguing in Publication data
Davison, A. C. (Anthony Christopher)
Bootstrap methods and their application / A.C. Davison, D.V. Hinkley.
p. cm. Includes bibliographical references and index.
ISBN 0 521 57391 2 (hb). ISBN 0 521 57471 4 (pb)
1. Bootstrap (Statistics) I. Hinkley, D. V. II. Title.
QA276.8.D38 1997 519.5'44 dc21 96-30064 CIP

ISBN 0 521 57391 2 hardback
ISBN 0 521 57471 4 paperback
Contents

Preface

1 Introduction

2 The Basic Bootstraps
   2.1 Introduction
   2.2 Parametric Simulation
   2.3 Nonparametric Simulation
   2.4 Simple Confidence Intervals
   2.5 Reducing Error
   2.6 Statistical Issues
   2.7 Nonparametric Approximations for Variance and Bias
   2.8 Subsampling Methods
   2.9 Bibliographic Notes
   2.10 Problems
   2.11 Practicals

3 Further Ideas
   3.1 Introduction
   3.2 Several Samples
   3.3 Semiparametric Models
   3.4 Smooth Estimates of F
   3.5 Censoring
   3.6 Missing Data
   3.7 Finite Population Sampling
   3.8 Hierarchical Data
   3.9 Bootstrapping the Bootstrap
   3.10 Bootstrap Diagnostics
   3.11 Choice of Estimator from the Data
   3.12 Bibliographic Notes
   3.13 Problems
   3.14 Practicals

4 Tests
   4.1 Introduction
   4.2 Resampling for Parametric Tests
   4.3 Nonparametric Permutation Tests
   4.4 Nonparametric Bootstrap Tests
   4.5 Adjusted P-values
   4.6 Estimating Properties of Tests
   4.7 Bibliographic Notes
   4.8 Problems
   4.9 Practicals

5 Confidence Intervals
   5.1 Introduction
   5.2 Basic Confidence Limit Methods
   5.3 Percentile Methods
   5.4 Theoretical Comparison of Methods
   5.5 Inversion of Significance Tests
   5.6 Double Bootstrap Methods
   5.7 Empirical Comparison of Bootstrap Methods
   5.8 Multiparameter Methods
   5.9 Conditional Confidence Regions
   5.10 Prediction
   5.11 Bibliographic Notes
   5.12 Problems
   5.13 Practicals

6 Linear Regression
   6.1 Introduction
   6.2 Least Squares Linear Regression
   6.3 Multiple Linear Regression
   6.4 Aggregate Prediction Error and Variable Selection
   6.5 Robust Regression
   6.6 Bibliographic Notes
   6.7 Problems
   6.8 Practicals

7 Further Topics in Regression
   7.1 Introduction
   7.2 Generalized Linear Models
   7.3 Survival Data
   7.4 Other Nonlinear Models
   7.5 Misclassification Error
   7.6 Nonparametric Regression
   7.7 Bibliographic Notes
   7.8 Problems
   7.9 Practicals

8 Complex Dependence
   8.1 Introduction
   8.2 Time Series
   8.3 Point Processes
   8.4 Bibliographic Notes
   8.5 Problems
   8.6 Practicals

9 Improved Calculation
   9.1 Introduction
   9.2 Balanced Bootstraps
   9.3 Control Methods
   9.4 Importance Resampling
   9.5 Saddlepoint Approximation
   9.6 Bibliographic Notes
   9.7 Problems
   9.8 Practicals

10 Semiparametric Likelihood Inference
   10.1 Likelihood
   10.2 Multinomial-Based Likelihoods
   10.3 Bootstrap Likelihood
   10.4 Likelihood Based on Confidence Sets
   10.5 Bayesian Bootstraps
   10.6 Bibliographic Notes
   10.7 Problems
   10.8 Practicals

11 Computer Implementation
   11.1 Introduction
   11.2 Basic Bootstraps
   11.3 Further Ideas
   11.4 Tests
   11.5 Confidence Intervals
   11.6 Linear Regression
   11.7 Further Topics in Regression
   11.8 Time Series
   11.9 Improved Simulation
   11.10 Semiparametric Likelihoods

Appendix A. Cumulant Calculations
Bibliography
Name Index
Example Index
Subject Index
Preface
The publication in 1979 of Bradley Efron's first article on bootstrap methods was a major event in Statistics, at once synthesizing some of the earlier resampling ideas and establishing a new framework for simulation-based statistical analysis. The idea of replacing complicated and often inaccurate approximations to biases, variances, and other measures of uncertainty by computer simulations caught the imagination of both theoretical researchers and users of statistical methods. Theoreticians sharpened their pencils and set about establishing mathematical conditions under which the idea could work. Once they had overcome their initial skepticism, applied workers sat down at their terminals and began to amass empirical evidence that the bootstrap often did work better than traditional methods. The early trickle of papers quickly became a torrent, with new additions to the literature appearing every month, and it was hard to see when would be a good moment to try to chart the waters. Then the organizers of COMPSTAT '92 invited us to present a course on the topic, and shortly afterwards we began to write this book.

We decided to try to write a balanced account of resampling methods, to include basic aspects of the theory which underpinned the methods, and to show as many applications as we could in order to illustrate the full potential of the methods, warts and all. We quickly realized that in order for us and others to understand and use the bootstrap, we would need suitable software, and producing it led us further towards a practically oriented treatment. Our view was cemented by two further developments: the appearance of two excellent books, one by Peter Hall on the asymptotic theory and the other on basic methods by Bradley Efron and Robert Tibshirani; and the chance to give further courses that included practicals.
Our experience has been that hands-on computing is essential in coming to grips with resampling ideas, so we have included practicals in this book, as well as more theoretical problems. As the book expanded, we realized that a fully comprehensive treatment was beyond us, and that certain topics could be given only a cursory treatment because too little is known about them. So it is that the reader will find only brief accounts of bootstrap methods for hierarchical data, missing data problems, model selection, robust estimation, nonparametric regression, and complex data. But we do try to point the more ambitious reader in the right direction.

No project of this size is produced in a vacuum. The majority of work on the book was completed while we were at the University of Oxford, and we are very grateful to colleagues and students there, who have helped shape our work in various ways. The experience of trying to teach these methods in Oxford and elsewhere (at the Université de Toulouse I, Université de Neuchâtel, Università degli Studi di Padova, Queensland University of Technology, Universidade de São Paulo, and University of Umeå) has been vital, and we are grateful to participants in these courses for prompting us to think more deeply about the
material. Readers will be grateful to these people also, for unwittingly debugging some of the problems and practicals. We are also grateful to the organizers of COMPSTAT '92 and CLAPEM V for inviting us to give short courses on our work.

While writing this book we have asked many people for access to data, copies of their programs, papers or reprints; some have then been rewarded by our bombarding them with questions, to which the answers have invariably been courteous and informative. We cannot name all those who have helped in this way, but D. R. Brillinger, P. Hall, M. P. Jones, B. D. Ripley, H. O'R. Sternberg and G. A. Young have been especially generous. S. Hutchinson and B. D. Ripley have helped considerably with computing matters. We are grateful to the mostly anonymous reviewers who commented on an early draft of the book, and to R. Gatto and G. A. Young, who later read various parts in detail. At Cambridge University Press, A. Woollatt and D. Tranah have helped greatly in producing the final version, and their patience has been commendable.

We are particularly indebted to two people. V. Ventura read large portions of the book, and helped with various aspects of the computation. A. J. Canty has turned our version of the bootstrap library functions into reliable working code, checked the book for mistakes, and has made numerous suggestions that have improved it enormously. Both of them have contributed greatly, though of course we take responsibility for any errors that remain in the book. We hope that readers will tell us about them, and we will do our best to correct any future versions of the book; see its WWW page, at URL http://dmawww.epf1.ch/davison.mosaic/BMA/

The book could not have been completed without grants from the UK Engineering and Physical Sciences Research Council, which in addition to providing funding for equipment and research assistantships, supported the work of A. C. Davison through the award of an Advanced Research Fellowship. We also acknowledge support from the US National Science Foundation.

We must also mention the Friday evening sustenance provided at the Eagle and Child, the Lamb and Flag, and the Royal Oak. The projects of many authors have flourished in these amiable establishments.

Finally, we thank our families, friends and colleagues for their patience while this project absorbed our time and energy. Particular thanks are due to Claire Cullen Davison for keeping the Davison family going during the writing of this book.

A. C. Davison and D. V. Hinkley
Lausanne and Santa Barbara
May 1997
1 Introduction
The explicit recognition of uncertainty is central to the statistical sciences. Notions such as prior information, probability models, likelihood, standard errors and confidence limits are all intended to formalize uncertainty and thereby make allowance for it. In simple situations, the uncertainty of an estimate may be gauged by analytical calculation based on an assumed probability model for the available data. But in more complicated problems this approach can be tedious and difficult, and its results are potentially misleading if inappropriate assumptions or simplifications have been made.

For illustration, consider Table 1.1, which is taken from a larger tabulation (Table 7.4) of the numbers of AIDS reports in England and Wales from mid-1983 to the end of 1992. Reports are cross-classified by diagnosis period and length of reporting delay, in three-month intervals. A blank in the table corresponds to an unknown (as yet unreported) entry. The problem was to predict the state of the epidemic in 1991 and 1992, which depends heavily on the values missing at the bottom right of the table.

The data support the assumption that the reporting delay does not depend on the diagnosis period. In this case a simple model is that the number of reports in row j and column k of the table has a Poisson distribution with mean

   μ_jk = exp(α_j + β_k).

If all the cells of the table are regarded as independent, then the total number of unreported diagnoses in period j has a Poisson distribution with mean

   Σ_k μ_jk = exp(α_j) Σ_k exp(β_k),

where the sum is over columns with blanks in row j. The eventual total of as yet unreported diagnoses from period j can be estimated by replacing α_j and β_k by estimates derived from the incomplete table, and thence we obtain the predicted total for period j. Such predictions are shown by the solid line in
Diagnosis period          Reporting delay interval (quarters)            Total reports
Year   Quarter    0+     1     2     3     4     5     6   ...  ≥14     to end of 1992
1988      1       31    80    16     9     3     2     8   ...    6          174
1988      2       26    99    27     9     8    11     3   ...    3          211
1988      3       31    95    35    13    18     4     6   ...    3          224
1988      4       36    77    20    26    11     3     8   ...    2          205
1989      1       32    92    32    10    12    19    12   ...    2          224
1989      2       15    92    14    27    22    21    12   ...    1          219
1989      3       34   104    29    31    18     8     6                     253
1989      4       38   101    34    18     9    15     6                     233
1990      1       31   124    47    24    11    15     8                     281
1990      2       32   132    36    10     9     7     6                     245
1990      3       49   107    51    17    15     8     9                     260
1990      4       44   153    41    16    11     6     5                     285
1991      1       41   137    29    33     7    11     6                     271
1991      2       56   124    39    14    12     7    10                     263
1991      3       53   175    35    17    13    11                           306
1991      4       63   135    24    23    12                                 258
1992      1       71   161    48    25                                       310
1992      2       95   178    39                                             318
1992      3       76   181                                                   273
1992      4       67                                                         133
Figure 1.1, together with the observed total reports to the end of 1992. How good are these predictions? It would be tedious but possible to put pen to paper and estimate the prediction uncertainty through calculations based on the Poisson model. But in fact the data are much more variable than that model would suggest, and by failing to take this into account we would believe that the predictions are more accurate than they really are. Furthermore, a better approach would be to use a semiparametric model to smooth out the evident variability of the increase in diagnoses from quarter to quarter; the corresponding prediction is the dotted line in Figure 1.1. Analytical calculations for this model would be very unpleasant, and a more flexible line of attack is needed. While more than one approach is possible, the one that we shall develop based on computer simulation is both flexible and straightforward.
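The fitting step for the Poisson model μ_jk = exp(α_j + β_k) described above can be sketched numerically. The book does not prescribe an algorithm at this point, and its practicals use S-Plus; the following Python fragment is purely our illustration. It fits the independence model to the observed cells of a small triangular toy table (not the real data) by iterating the row and column likelihood equations, then predicts the unreported total for each row.

```python
import math

# Toy incomplete table: rows = diagnosis periods, columns = delay intervals.
# None marks a cell that is still unreported (a blank, as in Table 1.1).
table = [
    [31, 80, 16, 9],
    [26, 99, 27, None],
    [31, 95, None, None],
    [36, None, None, None],
]
J, K = len(table), len(table[0])

alpha = [0.0] * J
beta = [0.0] * K
for _ in range(200):  # coordinate iteration of the Poisson likelihood equations
    for j in range(J):
        obs = [k for k in range(K) if table[j][k] is not None]
        alpha[j] = math.log(sum(table[j][k] for k in obs) /
                            sum(math.exp(beta[k]) for k in obs))
    for k in range(K):
        obs = [j for j in range(J) if table[j][k] is not None]
        beta[k] = math.log(sum(table[j][k] for j in obs) /
                           sum(math.exp(alpha[j]) for j in obs))

# Predicted number of as-yet-unreported diagnoses in each period j:
# exp(alpha_j) times the sum of exp(beta_k) over the blank columns of row j.
unreported = [
    math.exp(alpha[j]) * sum(math.exp(beta[k])
                             for k in range(K) if table[j][k] is None)
    for j in range(J)
]
print([round(u, 1) for u in unreported])
```

At convergence the fitted means reproduce the observed row and column totals, which is what the likelihood equations for this log-linear model require; the first row is fully observed, so its predicted unreported count is zero.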
Purpose of the Book

Our central goal is to describe how the computer can be harnessed to obtain reliable standard errors, confidence intervals, and other measures of uncertainty for a wide range of problems. The key idea is to resample from the original data, either directly or via a fitted model, to create replicate datasets, from
Table 1.1 Numbers of AIDS reports in England and Wales to the end of 1992 (De Angelis and Gilks, 1994), extracted from Table 7.4. A + indicates a reporting delay less than one month.
Figure 1.1 Predicted quarterly diagnoses from a parametric model (solid) and a semiparametric model (dots) fitted to the AIDS data, together with the actual totals to the end of 1992 (+). [Figure; horizontal axis: time.]
which the variability of the quantities of interest can be assessed without long-winded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Another name for them is bootstrap methods, because to use the data to generate more data seems analogous to a trick used by the fictional Baron Munchausen, who when he found himself at the bottom of a lake got out by pulling himself up by his bootstraps. In the simplest nonparametric problems we do literally sample from the data, and a common initial reaction is that this is a fraud. In fact it is not. It turns out that a wide range of statistical problems can be tackled this way, liberating the investigator from the need to oversimplify complex problems. The approach can also be applied in simple problems, to check the adequacy of standard measures of uncertainty, to relax assumptions, and to give quick approximate solutions. An example of this is random sampling to estimate the permutation distribution of a nonparametric test statistic.

It is of course true that in many applications we can be fairly confident in a particular parametric model and the standard analysis based on that model. Even so, it can still be helpful to see what can be inferred without particular parametric model assumptions. This is in the spirit of robustness of validity of the statistical analysis performed. Nonparametric bootstrap analysis allows us to do this.
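The "literal sampling from the data" can be made concrete in a few lines. The book's practicals use S-Plus routines; as an illustration only, here is a minimal Python sketch of the nonparametric bootstrap estimate of the standard error of a sample average, using the air-conditioning data of Table 1.2 (the number of replicates, R = 999, is our choice, not taken from the text).

```python
import random
import statistics

random.seed(1)
data = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]  # Table 1.2
n = len(data)
R = 999

# Each bootstrap sample draws n values from the data with replacement,
# i.e. a sample from the empirical distribution function.
t_star = []
for _ in range(R):
    resample = [random.choice(data) for _ in range(n)]
    t_star.append(statistics.mean(resample))

boot_se = statistics.stdev(t_star)  # bootstrap standard error of the mean
print(round(boot_se, 2))
```

The resulting standard error is close to the textbook formula s/√n, which is the point: for the mean the bootstrap merely reproduces a known answer, but the same recipe applies unchanged to statistics with no simple variance formula.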
3   5   7   18   43   85   91   98   100   130   230   487
Despite its scope and usefulness, resampling must be carefully applied. Unless certain basic ideas are understood, it is all too easy to produce a solution to the wrong problem, or a bad solution to the right one. Bootstrap methods are intended to help avoid tedious calculations based on questionable assumptions, and this they do. But they cannot replace clear critical thought about the problem, appropriate design of the investigation and data analysis, and incisive presentation of conclusions.

In this book we describe how resampling methods can be used, and evaluate their performance, in a wide range of contexts. Our focus is on the methods and their practical application rather than on the underlying theory, accounts of which are available elsewhere. This book is intended to be useful to the many investigators who want to know how and when the methods can safely be applied, and how to tell when things have gone wrong. The mathematical level of the book reflects this: we have aimed for a clear account of the key ideas without an overload of technical detail.
Examples

Bootstrap methods can be applied both when there is a well-defined probability model for data and when there is not. In our initial development of the methods we shall make frequent use of two simple examples, one of each type, to illustrate the main points.

Example 1.1 (Air-conditioning data)  Table 1.2 gives n = 12 times between failures of air-conditioning equipment, for which we wish to estimate the underlying mean or its reciprocal, the failure rate. A simple model for this problem is that the times are sampled from an exponential distribution. The dotted line in the left panel of Figure 1.2 is the cumulative distribution function (CDF)

   F_μ(y) = 0 for y < 0,   F_μ(y) = 1 − exp(−y/μ) for y ≥ 0,
for the fitted exponential distribution with mean μ set equal to the sample average, ȳ = 108.083. The solid line on the same plot is the nonparametric equivalent, the empirical distribution function (EDF) for the data, which places equal probabilities n⁻¹ = 0.083 at each sample value. Comparison of the two curves suggests that the exponential model fits reasonably well. An alternative view of this is shown in the right panel of the figure, which is an exponential
Table 1.2 Service hours between failures of the air-conditioning equipment in a Boeing 720 jet aircraft (Proschan, 1963).
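The two curves compared in the left panel of Figure 1.2 are easy to reproduce. This short Python sketch (our own, not the book's software) evaluates the EDF and the fitted exponential CDF at a few points:

```python
import math

y = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]  # Table 1.2
n = len(y)
mu = sum(y) / n  # sample average, 108.083

def edf(t):
    """Empirical distribution function: proportion of sample values <= t."""
    return sum(v <= t for v in y) / n

def exp_cdf(t):
    """CDF of the fitted exponential distribution with mean mu."""
    return 1 - math.exp(-t / mu) if t > 0 else 0.0

for t in (50, 100, 200):
    print(t, round(edf(t), 3), round(exp_cdf(t), 3))
```

The EDF jumps by n⁻¹ = 1/12 ≈ 0.083 at each data value, matching the description in the text.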
Figure 1.2 Summary displays for the air-conditioning data. The left panel shows the EDF for the data, F̂ (solid), and the CDF of a fitted exponential distribution (dots). The right panel shows a plot of the ordered failure times against exponential quantiles, with the fitted exponential model shown as the dotted line. [Figure; axes: failure time y, quantiles of standard exponential.]
Q-Q plot: a plot of the ordered data values y_(j) against the standard exponential quantiles

   F⁻¹( j/(n+1) ) = −log( 1 − j/(n+1) ).
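The plotting positions for the right panel follow directly from this formula; a small Python check (ours, not the book's code) computes the coordinate pairs:

```python
import math

y = sorted([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487])
n = len(y)

# Pairs (standard exponential quantile, ordered data value); an adequate
# exponential fit should make these roughly linear with slope near the mean.
pairs = [(-math.log(1 - j / (n + 1)), y[j - 1]) for j in range(1, n + 1)]
for q, v in pairs[:3]:
    print(round(q, 3), v)
```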
Although these plots suggest reasonable agreement with the exponential model, the sample is rather too small to have much confidence in this. In the data source the more general gamma model with mean μ and index κ is used; its density is

   f_{μ,κ}(y) = (1/Γ(κ)) (κ/μ)^κ y^{κ−1} exp(−κy/μ),   y > 0,  μ, κ > 0.   (1.1)
For our sample the estimated index is κ̂ = 0.71, which does not differ significantly (P = 0.29) from the value κ = 1 that corresponds to the exponential model. Our reason for mentioning this will become apparent in Chapter 2.

Basic properties of the estimator T = Ȳ for μ are easy to obtain theoretically under the exponential model. For example, it is easy to show that T is unbiased and has variance μ²/n. Approximate confidence intervals for μ can be calculated using these properties in conjunction with a normal approximation for the distribution of T, although this does not work very well: we can tell this because Ȳ/μ has an exact gamma distribution, which leads to exact confidence limits. Things are more complicated under the more general gamma model, because the index κ is only estimated, and so in a traditional approach we would use approximations, such as a normal approximation for the distribution of T, or a chi-squared approximation for the log likelihood ratio statistic.
The parametric simulation methods of Section 2.2 can be used alongside these approximations, to diagnose problems with them, or to replace them entirely. ■
Example 1.2 (City population data)  Table 1.3 reports n = 49 data pairs, each corresponding to a city in the United States of America, the pair being the 1920 and 1930 populations of the city, which we denote by u and x. The data are plotted in Figure 1.3. Interest here is in the ratio of means, because this would enable us to estimate the total population of the USA in 1930 from the 1920 figure. If the cities form a random sample with (U, X) denoting the pair of population values for a randomly selected city, then the total 1930 population is the product of the total 1920 population and the ratio of expectations θ = E(X)/E(U). This ratio is the parameter of interest.

In this case there is no obvious parametric model for the joint distribution of (U, X), so it is natural to estimate θ by its empirical analog, T = X̄/Ū, the ratio of sample averages. We are then concerned with the uncertainty in T. If we had a plausible parametric model, for example that the pair (U, X) has a bivariate lognormal distribution, then theoretical calculations like those in Example 1.1 would lead to bias and variance estimates for use in a normal approximation, which in turn would provide approximate confidence intervals for θ. Without such a model we must use nonparametric analysis. It is still possible to estimate the bias and variance of T, as we shall see, and this makes normal approximation still feasible, as well as more complex approaches to setting confidence intervals. ■

Example 1.1 is special in that an exact distribution is available for the statistic of interest and can be used to calculate confidence limits, at least under the exponential model. But for parametric models in general this will not be true.
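Estimating the bias and variance of the ratio T = X̄/Ū nonparametrically amounts to resampling whole (u, x) pairs. The sketch below is our Python illustration using only the first few pairs from Table 1.3; the book carries out the full analysis with its own software in later chapters.

```python
import random
import statistics

random.seed(3)
# A subset of the (1920, 1930) population pairs from Table 1.3.
pairs = [(138, 143), (93, 104), (61, 69), (179, 260), (48, 75),
         (37, 63), (29, 50), (23, 48), (30, 111), (2, 50)]

def ratio(ps):
    """Ratio of sample averages, the empirical analog of E(X)/E(U)."""
    u = statistics.mean(p[0] for p in ps)
    x = statistics.mean(p[1] for p in ps)
    return x / u

t = ratio(pairs)
R = 999
# Resample whole pairs with replacement, recompute the ratio each time.
t_star = [ratio([random.choice(pairs) for _ in pairs]) for _ in range(R)]

bias = statistics.mean(t_star) - t   # bootstrap estimate of bias
var = statistics.variance(t_star)    # bootstrap estimate of variance
print(round(t, 3), round(bias, 4), round(var, 4))
```

Resampling pairs rather than u and x separately preserves the dependence between the two census figures, which is essential for a ratio estimator.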
In Section 2.2 we shall show how to use parametric simulation to obtain approximate distributions, either by approximating moments for use in normal approximations, or, when these are inaccurate, directly. In Example 1.2 we make no assumptions about the form of the data distribution. But still, as we shall show in Section 2.3, simulation can be used to obtain properties of T, even to approximate its distribution. Much of Chapter 2 is devoted to this.
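To preview the parametric simulation idea: under the fitted exponential model we can generate many replicate samples of size n with mean ȳ and look directly at the spread of the replicate averages. This Python sketch is our own illustration of the general idea, not the specific algorithms of Section 2.2.

```python
import math
import random
import statistics

random.seed(2)
y = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]
n = len(y)
ybar = sum(y) / n
R = 2000

# Parametric bootstrap: resample from the *fitted* exponential distribution,
# whose rate is 1/ybar so that its mean equals the sample average.
t_star = []
for _ in range(R):
    sample = [random.expovariate(1 / ybar) for _ in range(n)]
    t_star.append(statistics.mean(sample))

# Under the exponential model T = Ybar has mean mu and variance mu^2/n,
# so the simulated standard deviation should be near ybar / sqrt(n).
print(round(statistics.stdev(t_star), 1), round(ybar / math.sqrt(n), 1))
```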
Layout of the Book

Chapter 2 describes the properties of resampling methods for use with single samples from parametric and nonparametric models, discusses practical matters such as the numbers of replicate datasets required, and outlines delta methods for variance approximation based on different forms of jackknife. It
Table 1.3 Populations in thousands of n = 49 large US cities in 1920 (u) and in 1930 (x) (Cochran, 1977, p. 152).

    u     x        u     x        u     x
  138   143       76    80       67    67
   93   104      381   464      120   115
   61    69      387   459      172   183
  179   260       78   106       66    86
   48    75       60    57       46    65
   37    63      507   634      121   113
   29    50       50    64       44    58
   23    48       77    89       64    63
   30   111       64    77       56   142
    2    50       40    60       40    64
   38    52      136   139      116   130
   46    53      243   291       87   105
   71    79      256   288       43    61
   25    57       94    85       43    50
  298   317       36    46      161   232
   74    93       45    53       36    54
   50    58

Figure 1.3 Populations of 49 large United States cities (in 1000s) in 1920 and 1930. [Figure; axes: 1920 population, 1930 population.]
also contains a basic discussion of confidence intervals and of the ideas that underlie bootstrap methods.

Chapter 3 outlines how the basic ideas are extended to several samples, semiparametric and smooth models, simple cases where data have hierarchical structure or are sampled from a finite population, and to situations where data are incomplete because censored or missing. It goes on to discuss how the simulation output itself may be used to detect problems, so-called bootstrap diagnostics, and how it may be useful to bootstrap the bootstrap.

In Chapter 4 we review the basic principles of significance testing, and then describe Monte Carlo tests, including those using Markov chain simulation, and parametric bootstrap tests. This is followed by discussion of nonparametric permutation tests, and the more general methods of semi- and nonparametric bootstrap tests. A double bootstrap method is detailed for improved approximation of P-values.

Confidence intervals are the subject of Chapter 5. After outlining basic ideas, we describe how to construct simple confidence intervals based on simulations, and then go on to more complex methods, such as the studentized bootstrap, percentile methods, the double bootstrap and test inversion. The main methods are compared empirically in Section 5.7, then there are brief accounts of confidence regions for multivariate parameters, and of prediction intervals.

The three subsequent chapters deal with more complex problems. Chapter 6 describes how the basic resampling methods may be applied in linear regression problems, including tests for coefficients, prediction analysis, and variable selection. Chapter 7 deals with more complex regression situations: generalized linear models, other nonlinear models, semi- and nonparametric regression, survival analysis, and classification error.
Chapter 8 details methods appropriate for time series, spatial data, and point processes. Chapter 9 describes how variance reduction techniques such as balanced simulation, control variates, and importance sampling can be adapted to yield improved simulations, with the aim of reducing the amount of simulation needed for an answer of given accuracy. It also shows how saddlepoint methods can sometimes be used to avoid simulation entirely. Chapter 10 describes various semiparametric versions of the likelihood function, the ideas underlying which are closely related to resampling methods. It also briefly outlines a Bayesian version of the bootstrap.

Chapters 2-10 contain problems intended to reinforce the reader's understanding of both methods and theory, and in some cases problems develop topics that could not be included in the text. Some of these demand a knowledge of moments and cumulants, basic facts about which are sketched in the Appendix.

The book also contains practicals that apply resampling routines written in
the S language to sets of data. The practicals are intended to reinforce the ideas in each chapter, to supplement the more theoretical problems, and to give examples on which readers can base analyses of their own data.

It would be possible to give different sorts of course based on this book. One would be a "theoretical" course based on the problems and another an "applied" course based on the practicals; we prefer to blend the two.

Although a library of routines for use with the statistical package S-Plus is bundled with it, most of the book can be read without reference to particular software packages. Apart from the practicals, the exception to this is Chapter 11, which is a short introduction to the main resampling routines, arranged roughly in the order with which the corresponding ideas appear in earlier chapters. Readers intending to use the bundled routines will find it useful to work through the relevant sections of Chapter 11 before attempting the practicals.
Notation

Although we believe that our notation is largely standard, there are not enough letters in the English and Greek alphabets for us to be entirely consistent. Greek letters such as θ, β and ν generally denote parameters or other unknowns, while α is used for error rates in connection with significance tests and confidence sets. English letters X, Y, Z, and so forth are used for random variables, which take values x, y, z. Thus the estimator T has observed value t, which may be an estimate of the unknown parameter θ. The letter V is used for a variance estimate, and the letter p for a probability, except for regression models, where p is the number of covariates. Script letters are used to denote sets.

Probability, expectation, variance and covariance are denoted Pr(·), E(·), var(·) and cov(·,·), while the joint cumulant of Y1, Y1Y2 and Y3 is denoted cum(Y1, Y1Y2, Y3). We use I{A} to denote the indicator random variable, which takes value one if the event A is true and zero otherwise. A related function is the Heaviside function H(u), which equals zero for u < 0 and one for u ≥ 0.
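As a concrete reading of these two definitions, here is a trivial Python sketch of ours (the convention of H(u) = 1 at u = 0 is the one assumed here):

```python
def indicator(A):
    """I{A}: one if the event A is true, zero otherwise."""
    return 1 if A else 0

def heaviside(u):
    """Heaviside function: zero for u < 0, one for u >= 0."""
    return 1 if u >= 0 else 0
```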
We use #{A} to denote the number of elements in the set A, and #{A_r} for the number of events A_r that occur in a sequence A_1, A_2, .... We use ≈ to mean "is approximately equal to", usually corresponding to asymptotic equivalence as sample sizes tend to infinity. We use ∼ to mean "is distributed as" or "is distributed according to"; embellished versions of ∼ mean "is distributed approximately as" and "is a sample of independent identically distributed random variables from". The symbol ≡ has its usual meaning of "is equivalent to".
The data values in a sample of size n are typically denoted by y1, ..., yn, the observed values of the random variables Y1, ..., Yn; their average is ȳ = n⁻¹ Σ yj.
We mostly reserve Z for random variables that are standard normal, at least approximately, and use Q for random variables with other (approximately) known distributions. As usual, N(μ, σ²) represents the normal distribution with mean μ and variance σ², while z_α is often the α quantile of the standard normal distribution, whose cumulative distribution function is Φ(·).

The letter R is reserved for the number of replicate simulations. Simulated copies of a statistic T are denoted T*_r, r = 1, ..., R, whose ordered values are T*_(1) ≤ ··· ≤ T*_(R). Expectation, variance and probability calculated with respect to the simulation distribution are written Pr*(·), E*(·) and var*(·).

Where possible we avoid boldface type, and rely on the context to make it plain when we are dealing with vectors or matrices; a^T denotes the matrix transpose of a vector or matrix a.

We use PDF, CDF, and EDF as shorthand for "probability density function", "cumulative distribution function", and "empirical distribution function". The letters F and G are used for CDFs, and f and g are generally used for the corresponding PDFs. An exception to this is that f*_j denotes the frequency with which y_j appears in the rth resample. We use MLE as shorthand for "maximum likelihood estimate" or sometimes "maximum likelihood estimation".

The end of each example is marked ■, and the end of each algorithm is marked •.
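To make the starred notation concrete, a small Python sketch of ours generates R replicates T*_r, orders them, and computes a simulation probability Pr*(T* ≤ t); the data and R = 199 are illustrative choices only.

```python
import random
import statistics

random.seed(5)
data = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]
n, R = len(data), 199

# R simulated copies of T = sample average, from resamples of the data.
t_star = [statistics.mean(random.choices(data, k=n)) for _ in range(R)]
t_ordered = sorted(t_star)   # the ordered values T*_(1) <= ... <= T*_(R)

t = statistics.mean(data)
pr_star = sum(ts <= t for ts in t_star) / R   # Pr*(T* <= t)
print(round(t_ordered[0], 1), round(t_ordered[-1], 1), round(pr_star, 2))
```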
2 The Basic Bootstraps
2.1 Introduction

In this chapter we discuss techniques which are applicable to a single, homogeneous sample of data, denoted by y_1, ..., y_n. The sample values are thought of as the outcomes of independent and identically distributed random variables Y_1, ..., Y_n whose probability density function (PDF) and cumulative distribution function (CDF) we shall denote by f and F, respectively. The sample is to be used to make inferences about a population characteristic, generically denoted by θ, using a statistic T whose value in the sample is t. We assume for the moment that the choice of T has been made and that it is an estimate for θ, which we take to be a scalar.

Our attention is focused on questions concerning the probability distribution of T. For example, what are its bias, its standard error, or its quantiles? What are likely values under a certain null hypothesis of interest? How do we calculate confidence limits for θ using T?

There are two situations to distinguish, the parametric and the nonparametric. When there is a particular mathematical model, with adjustable constants or parameters ψ that fully determine f, such a model is called parametric and statistical methods based on this model are parametric methods. In this case the parameter of interest θ is a component of or function of ψ. When no such mathematical model is used, the statistical analysis is nonparametric, and uses only the fact that the random variables Y_j are independent and identically distributed. Even if there is a plausible parametric model, a nonparametric analysis can still be useful to assess the robustness of conclusions drawn from a parametric analysis.

An important role is played in nonparametric analysis by the empirical distribution, which puts equal probabilities n^{-1} at each sample value y_j.
The corresponding estimate of F is the empirical distribution function (EDF) F̂, which is defined as the sample proportion

    F̂(y) = #{y_j ≤ y} / n,

where #{A} means the number of times the event A occurs. More formally

    F̂(y) = n^{-1} ∑_{j=1}^n H(y − y_j),   (2.1)

where H(u) is the unit step function which jumps from 0 to 1 at u = 0. Notice that the values of the EDF are fixed (0, 1/n, 2/n, ..., 1), so the EDF is equivalent to its points of increase, the ordered values y_{(1)} ≤ ··· ≤ y_{(n)} of the data. An example of the EDF was shown in the left panel of Figure 1.2.

When there are repeat values in the sample, as would often occur with discrete data, the EDF assigns probabilities proportional to the sample frequencies at each distinct observed value y. The formal definition (2.1) still applies.

The EDF plays the role of fitted model when no mathematical form is assumed for F, analogous to a parametric CDF with parameters replaced by their estimates.
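In code, definition (2.1) amounts to counting the sample values no greater than y. The following minimal sketch (Python, with made-up data values; not part of the original text) illustrates this, including the behaviour at repeated observations:

```python
import numpy as np

def edf(data):
    """Return the EDF F_hat of (2.1): F_hat(y) is the proportion of
    sample values y_j satisfying y_j <= y."""
    data = np.sort(np.asarray(data, dtype=float))
    n = len(data)

    def F_hat(y):
        # searchsorted with side='right' counts the y_j that are <= y
        return np.searchsorted(data, y, side="right") / n

    return F_hat

F_hat = edf([3.2, 1.5, 4.8, 1.5, 2.9])  # n = 5, with one repeated value
print(F_hat(1.5))  # 0.4: the repeated value receives probability 2/5
print(F_hat(0.0))  # 0.0
print(F_hat(5.0))  # 1.0
```

As the text notes, repeats automatically receive probability proportional to their sample frequency.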
2.1.1 Statistical functions

Many simple statistics can be thought of in terms of properties of the EDF. For example, the sample average ȳ = n^{-1} ∑ y_j is the mean of the EDF; see Example 2.1 below. More generally, the statistic of interest t will be a symmetric function of y_1, ..., y_n, meaning that t is unaffected by reordering the data. This implies that t depends only on the ordered values y_{(1)} ≤ ··· ≤ y_{(n)}, or equivalently on the EDF F̂. Often this can be expressed simply as t = t(F̂), where t(·) is a statistical function — essentially just a mathematical expression of the algorithm for computing t from F̂.

Such a statistical function is of central importance in the nonparametric case because it also defines the parameter of interest θ through the “algorithm” θ = t(F). This corresponds to the qualitative idea that θ is a characteristic of the population described by F. Simple examples of such functions are the mean and variance of Y, which are respectively defined as

    t(F) = ∫ y dF(y),   t(F) = ∫ y² dF(y) − {∫ y dF(y)}².   (2.2)
The same definition of θ applies in parametric problems, although then θ is more usually defined explicitly as one of the model parameters ψ. The relationship between the estimate t and F̂ can usually be expressed as t = t(F̂), corresponding to the relation θ = t(F) between the characteristic of interest and the underlying distribution. The statistical function t(·) defines both the parameter and its estimate, but we shall use t(·) to represent the function, and t to represent the estimate of θ based on the observed data y_1, ..., y_n.
Example 2.1 (Average) The sample average, ȳ, estimates the population mean

    μ = ∫ y dF(y).

To show that ȳ = t(F̂), we substitute F̂ for F in the defining function at (2.2) to obtain

    t(F̂) = ∫ y dF̂(y) = n^{-1} ∑_{j=1}^n y_j = ȳ,

because ∫ a(y) dH(y − x) = a(x) for any continuous function a(·). ■
Example 2.2 (City population data) For the problem outlined in Example 1.2, the parameter of interest is the ratio of means θ = E(X)/E(U). In this case F is the bivariate CDF of Y = (U, X), and the bivariate EDF F̂ puts probability n^{-1} at each of the data pairs (u_j, x_j). The statistical function version of θ simply uses the definition of mean for both numerator and denominator, so that

    θ = t(F) = ∫ x dF(u, x) / ∫ u dF(u, x).

The corresponding estimate of θ is

    t = t(F̂) = ∫ x dF̂(u, x) / ∫ u dF̂(u, x) = x̄ / ū,

with x̄ = n^{-1} ∑ x_j and ū = n^{-1} ∑ u_j. ■

[A quantity A_n is said to be O(n^d) if lim_{n→∞} n^{-d} A_n = a for some finite a, and o(n^d) if lim_{n→∞} n^{-d} A_n = 0.]
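The plug-in recipe t = t(F̂) replaces integrals over F by averages over the sample, since F̂ puts mass n^{-1} on each observation. A minimal illustrative sketch (not from the text; the numerical values are made up) for the mean and variance functions of (2.2) and the ratio of Example 2.2:

```python
import numpy as np

# Plug-in estimates: integrals against dF_hat become sample averages,
# because F_hat places probability 1/n on each observation.
def t_mean(y):                     # t(F) = integral of y dF(y)
    return np.mean(y)

def t_var(y):                      # second moment minus squared mean, (2.2)
    y = np.asarray(y, dtype=float)
    return np.mean(y ** 2) - np.mean(y) ** 2

def t_ratio(u, x):                 # theta = mean of x over mean of u
    return np.mean(x) / np.mean(u)

y = [2.0, 4.0, 9.0]                # arbitrary illustrative values
print(t_mean(y))                   # 5.0
print(t_var(y))                    # about 8.667 (= 101/3 - 25)
```

Note that t_var is the plug-in variance, which is (n − 1)/n times the usual unbiased sample variance, exactly the small discrepancy discussed below.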
It is quite straightforward to show that (2.1) implies convergence of F̂ to F as n→∞ (Problem 2.1). Then if t(·) is continuous in an appropriate sense, the definition T = t(F̂) implies that T converges to θ as n→∞, which is the property of consistency.

Not all estimates are exactly of the form t(F̂). For example, if t(F) = var(Y) then the usual unbiased sample variance is n t(F̂)/(n − 1). Also the sample median is not exactly F̂^{-1}(1/2). Such small discrepancies are fairly unimportant as far as applying the bootstrap techniques discussed in this book. In a very formal development we could write T = t_n(F̂) and require that t_n → t as n→∞, possibly even that t_n − t = O(n^{-1}). But such formality would be excessive here, and we shall assume in general discussion that T = t(F̂). (One case that does require special treatment is nonparametric density estimation, which we discuss in Example 5.13.)

The representation θ = t(F) defines the parameter and its estimator T in a robust way, without any assumption about F, other than that θ exists. This guarantees that T estimates the right thing, no matter what F is. Thus the sample average ȳ is the only statistic that is generally valid as an estimate of the population mean μ: only if Y is symmetrically distributed about μ will statistics such as trimmed averages also estimate μ. This property, which guarantees that the correct characteristic of the underlying distribution is estimated, whatever that distribution is, is sometimes called robustness of specification.
2.1.2 Objectives

Much of statistical theory is devoted to calculating approximate distributions for particular statistics T, on which to base inferences about their estimands θ. Suppose, for example, that we want to calculate a (1 − 2α) confidence interval for θ. It may be possible to show that T is approximately normal with mean θ + β and variance v; here β is the bias of T. If β and v are both known, then we can write

    Pr(T ≤ t | F) ≐ Φ{(t − θ − β)/v^{1/2}},   (2.3)

where Φ(·) is the standard normal integral. If the α quantile of the standard normal distribution is z_α = Φ^{-1}(α), then (2.3) leads to approximate (1 − 2α) confidence limits for θ of the form t − β − z_{1−α} v^{1/2} and t − β − z_α v^{1/2}. In practice the bias β = b(F) and variance v = v(F) are unknown functions of F, and must be estimated. Substituting the fitted model F̂ for F gives the estimates

    B = b(F̂),   V = v(F̂).   (2.6)

Example 2.3 (Air-conditioning data) For the data of Example 1.1 under the fitted exponential model, b(F) = 0 and v(F) = μ²/n, and these are estimated by 0 and ȳ²/n. Since n = 12, ȳ = 108.083, and z_{0.025} = −1.96, a 95% confidence interval for μ based on the normal approximation (2.3) is ȳ ± 1.96 n^{-1/2} ȳ = (46.93, 169.24). ■

Estimates such as those in (2.6) are bootstrap estimates. Here they have been used in conjunction with a normal approximation, which sometimes will be adequate. However, the bootstrap approach of substituting estimates can be applied more ambitiously to improve upon the normal approximation and other first-order theoretical approximations. The elaboration of the bootstrap approach is the purpose of this book.
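The interval of Example 2.3 is easy to reproduce numerically; the sketch below (illustrative, not from the text) just evaluates ȳ ± 1.96 n^{-1/2} ȳ:

```python
import math

# Normal-approximation 95% confidence interval for the air-conditioning
# mean (Example 2.3): bias estimate 0, variance estimate ybar**2 / n.
n, ybar = 12, 108.083
z = 1.96                                   # z_{0.975}; z_{0.025} = -1.96
half = z * ybar / math.sqrt(n)
lower, upper = ybar - half, ybar + half
print(round(lower, 2), round(upper, 2))    # 46.93 169.24
```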
2.2 Parametric Simulation

In the previous section we pointed out that theoretical properties of T might be hard to determine with sufficient accuracy. We now describe the sound practical alternative of repeated simulation of data sets from a fitted parametric model, and empirical calculation of relevant properties of T.

Suppose that we have a particular parametric model for the distribution of the data y_1, ..., y_n. We shall use F_ψ(y) and f_ψ(y) to denote the CDF and PDF respectively. When ψ is estimated by ψ̂ — often but not invariably its maximum likelihood estimate — its substitution in the model gives the fitted model, with CDF F̂(y) = F_ψ̂(y), which can be used to calculate properties of T, sometimes exactly. We shall use Y* to denote the random variable distributed according to the fitted model F̂, and the superscript * will be used with E, var and so forth when these moments are calculated according to the fitted distribution. Occasionally it will also be useful to write ψ̂ = ψ* to emphasize that this is the parameter value for the simulation model.

Example 2.4 (Air-conditioning data) We have already calculated the mean and variance under the fitted exponential model for the estimator T = Ȳ of Example 1.1. Our sample estimate for the mean μ is t = ȳ. So here Y* is exponential with mean ȳ. In the notation just introduced, we have by
theoretical calculation with this exponential distribution that

    E*(Ȳ*) = ȳ,   var*(Ȳ*) = ȳ²/n.

Note that the estimated bias of Ȳ is zero, being the difference between E*(Ȳ*) and the value μ = ȳ for the mean of the fitted distribution. These moments were used to calculate an approximate normal confidence interval in Example 2.3.

If, however, we wished to calculate the bias and variance of T = log Ȳ under the fitted model, i.e. E*(log Ȳ*) − log ȳ and var*(log Ȳ*), exact calculation is more difficult. The delta method of Section 2.7.1 would give approximate values −(2n)^{-1} and n^{-1}. But more accurate approximations can be obtained using simulated samples of Y*'s.

Similar results and comments would apply if instead we chose to use the more general gamma model (1.1) for this example. Then Y* would be a gamma random variable with mean ȳ and index κ. ■
2.2.1 Moment estimates

So now suppose that theoretical calculation with the fitted model is too complex. Approximations may not be available, or they may be untrustworthy, perhaps because the sample size is small. The alternative is to estimate the properties we require from simulated datasets. We write such a dataset as Y*_1, ..., Y*_n, where the Y*_j are independently sampled from the fitted distribution F̂. When the statistic of interest is calculated from a simulated dataset, we denote it by T*. From R repetitions of the data simulation we obtain T*_1, ..., T*_R. Properties of T − θ are then estimated from T*_1, ..., T*_R. For example, the estimator of the bias b(F) = E(T | F) − θ of T is

    B = b(F̂) = E(T | F̂) − t = E*(T*) − t,

and this in turn is estimated by

    B_R = R^{-1} ∑_{r=1}^R T*_r − t = T̄* − t.   (2.7)
Note that in the simulation t is the parameter value for the model, so that T* − t is the simulation analogue of T − θ. The corresponding estimator of the variance of T is

    V_R = (R − 1)^{-1} ∑_{r=1}^R (T*_r − T̄*)²,   (2.8)
with similar estimators for other moments. These empirical approximations are justified by the law of large numbers. For example, B_R converges to B, the exact value under the fitted model, as R
Figure 2.1 Empirical biases and variances of Ȳ* for the air-conditioning data from four repetitions of parametric simulation. Each line shows how the estimated bias and variance for R = 10 initial simulations change when further simulations are successively added. Note how the variability decreases as the simulation size increases, and how the simulated values converge to the exact values under the fitted exponential model, given by the horizontal dotted lines.
increases. We usually drop the subscript R from B_R, V_R, and so forth unless we are explicitly discussing the effect of R. How to choose R will be illustrated in the examples that follow, and discussed in Section 2.5.2.

It is important to recognize that we are not estimating absolute properties of T, but rather of T relative to θ. Usually this involves the estimation error T − θ, but we should not ignore the possibility that T/θ (equivalently log T − log θ) or some other relevant measure of estimation error might be more appropriate, depending upon the context. Bootstrap simulation methods will apply to any such measure.

Example 2.5 (Air-conditioning data) Consider Example 1.1 again. As we have seen, simulation is unnecessary in practice for this problem because the moments are easy to calculate theoretically, but the example is useful for illustration. Here the fitted model is an exponential distribution for the failure times, with mean estimated by the sample average ȳ = 108.083. All simulated failure times Y* are generated from this distribution.

Figure 2.1 shows the results from several simulations, four for each of eight values of R, in each of which the empirical biases and variances of T* = Ȳ* have been calculated according to (2.7) and (2.8). On both panels the “correct” values, namely zero and ȳ²/n = (108.083)²/12 = 973.5, are indicated by horizontal dotted lines. Evidently the larger is R, the closer is the simulation calculation to the right answer.

How large a value of R is needed? Figure 2.1 suggests that for some purposes R = 100 or 200 will be adequate, but that R = 10 will not be large enough. In this problem the accuracy of the empirical approximations is quite easy to determine from the fact that nȲ/μ has a gamma distribution with
index n. The simulation variances of B_R and V_R are

    var(B_R) = t²/(nR),   var(V_R) = (t⁴/n²) {2/(R − 1) + 6/(nR)},

and we can use these to say how large R should be in order that the simulated values have a specified accuracy. For example, the coefficients of variation of V_R at R = 100 and 1000 are respectively 0.16 and 0.05. However, for a complicated problem where simulation was really necessary, such calculations could not be done, and general rules are needed to suggest how large R should be. These are discussed in Section 2.5.2. ■
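The moment estimates (2.7) and (2.8) are straightforward to compute by parametric simulation. The sketch below (illustrative, not from the text; the seed is arbitrary) redoes the air-conditioning calculation, for which the exact answers 0 and ȳ²/n ≈ 973.5 are available for comparison:

```python
import numpy as np

# Parametric bootstrap for Example 2.5: simulate R exponential samples
# of size n with mean ybar, and apply (2.7) and (2.8) to the averages.
rng = np.random.default_rng(1)             # arbitrary seed
n, ybar, R = 12, 108.083, 1000

t_star = rng.exponential(scale=ybar, size=(R, n)).mean(axis=1)
b_R = t_star.mean() - ybar                 # (2.7): empirical bias
v_R = t_star.var(ddof=1)                   # (2.8): empirical variance

print(b_R, v_R)                            # near 0 and ybar**2/n = 973.5
```

With R = 1000 the coefficient of variation of V_R is about 0.05, as computed in the text, so v_R should land within a few percent of 973.5.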
2.2.2 Distribution and quantile estimates

The simulation estimates of bias and variance will sometimes be of interest in their own right, but more usually would be used with normal approximations for T, particularly for large samples. For situations like those in Examples 1.1 and 1.2, however, the normal approximation is intrinsically inaccurate. This can be seen from a normal Q-Q plot of the simulated values t*_1, ..., t*_R, that is, a plot of the ordered values t*_{(1)} ≤ ··· ≤ t*_{(R)} against expected normal order statistics. It is the empirical distribution of these simulated values which can provide a more accurate distributional approximation, as we shall now see.

If as is often the case we are approximating the distribution of T − θ by that of T* − t, then cumulative probabilities are estimated simply by the empirical distribution function of the simulated values t*_r − t. More formally, if G(u) = Pr(T − θ ≤ u), then the simulation estimate of G(u) is

    Ĝ_R(u) = #{t*_r − t ≤ u} / R = R^{-1} ∑_{r=1}^R I{t*_r − t ≤ u},

where I{A} is the indicator of the event A, equal to 1 if A is true and 0 otherwise. As R increases, so this estimate will converge to Ĝ(u), the exact CDF of T* − t under sampling from the fitted model. Just as with the moment approximations discussed earlier, so the approximation Ĝ_R to G contains two sources of error, i.e. that between Ĝ and G due to data variability and that between Ĝ_R and Ĝ due to finite simulation.

We are often interested in quantiles of the distribution of T − θ, and these are approximated using ordered values of t* − t. The underlying result used here is that if X_1, ..., X_N are independently distributed with CDF K and if X_{(j)} denotes the jth ordered value, then

    E{K(X_{(j)})} = j/(N + 1).
This implies that a sensible estimate of K^{-1}(p) is X_{((N+1)p)}, assuming that (N + 1)p is an integer. So we estimate the p quantile of T − θ by the (R + 1)p-th ordered value of t* − t, that is t*_{((R+1)p)} − t. We assume that R is chosen so that (R + 1)p is an integer.

The simulation approximation Ĝ_R and the corresponding quantiles are in principle better than results obtained by normal approximation, provided that R is large enough, because they avoid the supposition that the distribution of T* − t has a particular form.

Example 2.6 (Air-conditioning data) The simulation experiments described in Example 2.5 can be used to study the simulation approximations to the distribution and quantiles of Ȳ − μ. First, Figure 2.2 shows normal Q-Q plots of t* values for R = 99 (top left panel) and R = 999 (top right panel). Clearly a normal approximation would not be accurate in the tails, and this is already fairly clear with R = 99. For reference, the lower half of Figure 2.2 shows corresponding Q-Q plots with exact gamma quantiles. The nonnormality of T* is also reasonably clear on histograms of t* values, shown in Figure 2.3, at least at the larger value R = 999. Corresponding density estimate plots provide smoother displays of the same information.

We look next at the estimated quantiles of Ȳ − μ. The p quantile is approximated by y*_{((R+1)p)} − ȳ for p = 0.05 and 0.95. The values of R are 19, 39, 99, 199, ..., 999, chosen to ensure that (R + 1)p is an integer throughout. Thus at R = 19 the 0.05 quantile is approximated by y*_{(1)} − ȳ and so forth. In order to display the magnitude of simulation error, we ran four independent simulations at R = 19, 39, 99, ..., 999. The results are plotted in Figure 2.4. Also shown by dotted lines are the exact quantiles under the model, which the simulations approach as R increases. There is large variability in the approximate quantiles for R less than 100 and it appears that 500 or more simulations are required to get accurate results.

The same simulations can be used in other ways.
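The quantile recipe t*_{((R+1)p)} − t is a one-liner once the simulated values are sorted. An illustrative sketch for the exponential fit (not from the text; the seed is arbitrary):

```python
import numpy as np

# Estimate the p = 0.05 and 0.95 quantiles of Ybar - mu by the ordered
# values of t*_r - t, with R chosen so that (R + 1)p is an integer.
rng = np.random.default_rng(7)             # arbitrary seed
n, ybar, R = 12, 108.083, 999

t_star = rng.exponential(scale=ybar, size=(R, n)).mean(axis=1)
diffs = np.sort(t_star - ybar)
q05 = diffs[round((R + 1) * 0.05) - 1]     # 50th ordered value
q95 = diffs[round((R + 1) * 0.95) - 1]     # 950th ordered value
print(q05, q95)   # asymmetric, since the distribution of Ybar* is skewed
```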
For example, we might want to know about log Ȳ − log μ, in which case the empirical properties of log ȳ* − log ȳ are relevant. ■

The illustration used here is very simple, but essentially the same methods can be used in arbitrarily complicated parametric problems. For example, distributions of likelihood ratio statistics can be approximated when large-sample approximations are inaccurate or fail entirely. In Chapters 4 and 5 respectively we show how parametric bootstrap methods can be used to calculate significance tests and confidence sets.

It is sometimes useful to be able to look at the density of T, for example to see if it is multimodal, skewed, or otherwise differs appreciably from normality. A rough idea of the density g(u) of U = T − θ, say, can be had from a histogram of the values of t* − t. A somewhat better picture is offered by a kernel density
Figure 2.2 Normal (upper) and gamma (lower) Q-Q plots of t* values based on R = 99 (left) and R = 999 (right) simulations from the fitted exponential model for the air-conditioning data.
estimate, defined by

    ĝ_h(u) = (Rh)^{-1} ∑_{r=1}^R w{(u − (t*_r − t))/h},

where w is a symmetric PDF with zero mean and h is a positive bandwidth that determines the smoothness of ĝ_h. The estimate ĝ_h is non-negative and has unit integral. It is insensitive to the choice of w(·), for which we use the standard normal density. The choice of h is more important. The key is to produce a smooth result, while not flattening out significant modes. If the choice of h is quite large, as it may be if R < 100, then one should rescale the density
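The definition of ĝ_h transcribes directly into code. The sketch below (illustrative, not from the text) uses a standard normal kernel w, an arbitrary bandwidth, and synthetic stand-in values for the differences t*_r − t:

```python
import numpy as np

def g_hat(u, diffs, h):
    """Kernel density estimate (Rh)^{-1} * sum_r w{(u - d_r)/h}, where
    d_r = t*_r - t and w is the standard normal density."""
    diffs = np.asarray(diffs, dtype=float)
    z = (np.asarray(u, dtype=float) - diffs[:, None]) / h   # (R, len(u))
    w = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)        # normal PDF
    return w.sum(axis=0) / (len(diffs) * h)

rng = np.random.default_rng(0)             # arbitrary seed
diffs = rng.normal(size=500)               # stand-in for t*_r - t values
u = np.linspace(-4.0, 4.0, 201)
dens = g_hat(u, diffs, h=0.4)
print(dens.sum() * (u[1] - u[0]))          # close to 1: unit integral
```

As the text says, the result is non-negative with unit integral, and varying h trades smoothness against flattening of modes.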
Figure 2.3 Histograms of t* values based on R = 99 (left) and R = 999 (right) simulations from the fitted exponential model for the air-conditioning data.
Figure 2.4 Empirical quantiles (p = 0.05, 0.95) of T* − t under resampling from the fitted exponential model for the air-conditioning data. The horizontal dotted lines are the exact quantiles under the model.
estimate to make its mean and variance agree with the estimated mean b_R and variance v_R of T − θ; see Problem 3.8. As a general rule, good estimates of density require at least R = 1000: density estimation is usually harder than probability or quantile estimation.

Note that the same methods of estimating density, distribution function and quantiles can be applied to any transformation of T. We shall discuss this further in Section 2.5.
2.3 Nonparametric Simulation

Suppose that we have no parametric model, but that it is sensible to assume that Y_1, ..., Y_n are independent and identically distributed according to an unknown distribution function F. We use the EDF F̂ to estimate the unknown CDF F. We shall use F̂ just as we would a parametric model: theoretical calculation if possible, otherwise simulation of datasets and empirical calculation of required properties. In only very simple cases are exact theoretical calculations possible, but we shall see in Section 9.5 that good theoretical approximations can be obtained in many problems involving sample moments.

Example 2.7 (Average) In the case of the average, exact moments under sampling from the EDF are easily found. For example,

    E*(Ȳ*) = E*(Y*) = n^{-1} ∑_{j=1}^n y_j = ȳ,

and similarly

    var*(Ȳ*) = n^{-1} var*(Y*) = n^{-1} E*{Y* − E*(Y*)}² = n^{-1} × n^{-1} ∑_{j=1}^n (y_j − ȳ)² = ((n − 1)/n) × {n(n − 1)}^{-1} ∑_{j=1}^n (y_j − ȳ)².

Apart from the factor (n − 1)/n, this is the usual result for the estimated variance of Ȳ. ■

Other simple statistics such as the sample variance and sample median are also easy to handle (Problems 2.3, 2.4).

To apply simulation with the EDF is very straightforward. Because the EDF puts equal probabilities on the original data values y_1, ..., y_n, each Y* is independently sampled at random from those data values. Therefore the simulated sample Y*_1, ..., Y*_n is a random sample taken with replacement from the data. This simplicity is special to the case of a homogeneous sample, but many extensions are straightforward. This resampling procedure is called the nonparametric bootstrap.

Example 2.8 (City population data) Here we look at the ratio estimate for the problem described in Example 1.2. For convenience we consider a subset of the data in Table 1.3, comprising the first ten pairs. This is an application with no obvious parametric model, so nonparametric simulation makes good sense. Table 2.1 shows the data and the first simulated sample, which has been drawn by randomly selecting subscript j* from the set {1, ..., n} with equal probability and taking (u*_j, x*_j) = (u_{j*}, x_{j*}). In this sample j* = 1 never occurs
Table 2.1 The dataset for ratio estimation, and one synthetic sample. The values j* are chosen randomly with equal probability from {1, ..., n}, with replacement; the simulated pairs are (u_{j*}, x_{j*}).
    j     1    2    3    4    5    6    7    8    9   10
    u   138   93   61  179   48   37   29   23   30    2
    x   143  104   69  260   75   63   50   48  111   50

    j*    6    7    2    2    3    3   10    7    2    9
    u*   37   29   93   93   61   61    2   29   93   30
    x*   63   50  104  104   69   69   50   50  104  111
Table 2.2 Frequencies with which each original data pair appears in each of R = 9 nonparametric bootstrap samples for the data on US cities, together with the ratio t*_r for each sample; the data give t = 1.520. The first replicate is the sample of Table 2.1, with frequencies 0, 3, 2, 0, 0, 1, 2, 0, 1, 1 for pairs j = 1, ..., 10. The nine replicate statistics are

    t*_1 = 1.466, t*_2 = 1.761, t*_3 = 1.951, t*_4 = 1.542, t*_5 = 1.371, t*_6 = 1.686, t*_7 = 1.378, t*_8 = 1.420, t*_9 = 1.660.
and j* = 2 occurs three times, so that the first data pair is never selected, the second is selected three times, and so forth. Table 2.2 shows the same simulated sample, plus eight more, expressed in terms of the frequencies of original data pairs. The ratio t* for each simulated sample is recorded in the table.

After the R sets of calculations, the bias and variance estimates are calculated according to (2.7) and (2.8). The results are, for the R = 9 replicates shown,

    b = 1.582 − 1.520 = 0.062,   v = 0.03907.

A simple approximate distribution for T − θ is N(b, v). With the results so far, this is N(0.062, 0.0391), but this is unlikely to be accurate enough and a larger value of R should be used. In a simulation with R = 999 we obtained b = 1.5755 − 1.5203 = 0.0552 and v = 0.0601. The latter is appreciably bigger than the value 0.0325 given by the delta method variance estimate
    v_L = n^{-2} ∑_{j=1}^n (x_j − t u_j)² / ū²,
Figure 2.5 City population data. Histograms of t* and z* under nonparametric resampling for sample of size n = 10, R = 999 simulations. Note the skewness of both t* and z*.
which is based on an expansion that is explained in Section 2.7.2; see also Problem 2.9. The discrepancy between v and v_L is due partly to a few extreme values of t*, an issue we discuss in Section 2.3.2. The left panel of Figure 2.5 shows a histogram of t*, whose skewness is evident: use of a normal approximation here would be very inaccurate.

We can use the same simulations to estimate distributions of related statistics, such as transformed estimates or studentized estimates. The right panel of Figure 2.5 shows a histogram of studentized values z* = (t* − t)/v*_L^{1/2}, where v*_L is the delta method variance estimate based on a simulated sample. That is,
    v*_L = n^{-2} ∑_{j=1}^n (x*_j − t* u*_j)² / ū*².

The corresponding theoretical approximation for Z is the N(0, 1) distribution, which we would judge also inaccurate in view of the strong skewness in the histogram. We shall discuss the rationale for the use of z* in Section 2.4.

One natural question to ask here is what effect the small sample size has on the accuracy of normal approximations. This can be answered in part by plotting density estimates. The left panel of Figure 2.6 shows three estimated densities for T* − t with our sample of n = 10: a kernel density estimate based on our simulations, the N(b, v) approximation with moments computed from the same simulations, and the N(0, v_L) approximation. The right panel shows corresponding density approximations for the full data with n = 49; the empirical bias and variance of T are b = 0.00118 and v = 0.001290, and the
Figure 2.6 Density estimates for T* − t based on 999 nonparametric simulations for the city population data. The left panel is for the sample of size n = 10 in Table 2.1, and the right panel shows the corresponding estimates for the entire dataset of size n = 49. Each plot shows a kernel density estimate (solid), the N(b, v) approximation (dashes), with these moments computed from the same simulations, and the N(0, v_L) approximation (dots).
delta method variance approximation is v_L = 0.001166. At the larger sample size the normal approximations seem very accurate. ■
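The nonparametric resampling of Example 2.8 amounts to sampling subscripts with replacement. The sketch below (illustrative, not from the text; the seed is arbitrary, so another run gives slightly different values than the book's b ≈ 0.055 and v ≈ 0.060) applies it to the ten city pairs of Table 2.1:

```python
import numpy as np

# Nonparametric bootstrap of the ratio t = xbar/ubar for the ten pairs
# of Table 2.1, using (2.7) and (2.8) on the replicate ratios.
u = np.array([138, 93, 61, 179, 48, 37, 29, 23, 30, 2], dtype=float)
x = np.array([143, 104, 69, 260, 75, 63, 50, 48, 111, 50], dtype=float)
n, R = len(u), 999
t = x.mean() / u.mean()                    # 1.5203

rng = np.random.default_rng(3)             # arbitrary seed
jstar = rng.integers(0, n, size=(R, n))    # resampled subscripts j*
t_star = x[jstar].mean(axis=1) / u[jstar].mean(axis=1)

b = t_star.mean() - t                      # bias estimate (2.7)
v = t_star.var(ddof=1)                     # variance estimate (2.8)
print(round(t, 4), b, v)
```

Note that v varies noticeably between runs because a few extreme replicates, of the kind discussed in Section 2.3.2, can inflate it.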
2.3.1 Comparison with parametric methods

A natural question to ask is how well the nonparametric resampling methods might compare to parametric methods, when the latter are appropriate. Equally important is the question as to which parametric model would produce results like those for nonparametric resampling: this is another way of asking just what the nonparametric bootstrap does. Some insight into these questions can be gained by revisiting Example 1.1.

Example 2.9 (Air-conditioning data) We now look at the results of applying nonparametric resampling to the air-conditioning data. One might naively expect to obtain results similar to those in Example 2.5, where exponential resampling was used, since we found in Example 1.1 that the data appear compatible with an exponential model.

Figure 2.7 is the nonparametric analogue of Figure 2.4, and shows quantiles of T* − t. It appears that R = 500 or so is needed to get reliable quantile estimates; R = 100 is enough for the corresponding plot for bias and variance. Under nonparametric resampling there is no reason why the quantiles should approach the theoretical quantiles under the exponential model, and it seems that they do not do so. This suggestion is confirmed by the Q-Q plots in Figure 2.8. The first panel compares the ordered values of t* from R = 999 nonparametric simulations with theoretical quantiles under the fitted exponential model, and the second panel compares the t* with theoretical quantiles
Figure 2.7 Empirical quantiles (p = 0.05, 0.95) of T* − t under nonparametric resampling from the air-conditioning data. The horizontal lines are the exact quantiles based on the fitted exponential model.
Figure 2.8 Q-Q plots of ȳ* under nonparametric resampling from the air-conditioning data, first against theoretical quantiles under the fitted exponential model (left panel) and then against theoretical quantiles under the fitted gamma model (right panel).
under the best-fitting gamma model with index κ̂ = 0.71. The agreement in the second panel is strikingly good. On reflection this is natural, because the EDF is closer to the larger gamma model than to the exponential model. ■
2.3.2 Effects of discreteness

For intrinsically continuous data, a major difference between parametric and nonparametric resampling lies in the discreteness of the latter. Under nonparametric resampling, T* and related quantities will have discrete distributions, even though they may be approximating continuous distributions. This makes results somewhat “fuzzy” compared to their parametric counterparts.

Example 2.10 (Air-conditioning data) For the nonparametric simulation discussed in the previous example, the right panels of Figure 2.9 show the scatter plots of sample standard deviation versus sample average for R = 99 and R = 999 simulated datasets. Corresponding plots for the exponential simulation are shown in the left panels. The qualitative feature to be read from any one of these plots is that data standard deviation is proportional to data average. The discreteness of the nonparametric model (the EDF) adds noise whose peculiar banded structure is evident at R = 999, although the qualitative structure is still apparent. ■

For a statistic that is symmetric in the data values, there are up to
_ f i n — 1\ _ (2n — 1)! \ n—1) n\(n — 1)!
possible values o f t*, depending upon the sm oothness o f the statistical function t( ). Even for m oderately small sam ples the support o f the distribution o f T* will often be fairly dense: values o f m„ for n = 7 and 11 are 1716 and 352 716 (Problem 2.5). It would therefore usually be harm less to think o f there being a P D F for T*, and to approxim ate it, either using sim ulation results as in Figure 2.6 o r theoretically (Section 9.5). There are exceptions, however, m ost n otably when T is a sam ple quantile. The case o f the sam ple m edian is discussed in Exam ple 2.16; see also Problem 2.4 and Exam ple 2.15. For m any practical applications o f the sim ulation results, the effects o f discreteness are likely to be fairly m inim al. However, one possible problem is th at outliers are m ore likely to occur in the sim ulation output. F or example, in Exam ple 2.8 there were three outliers in the sim ulation, and these inflated the estim ate v ‘ o f the variance o f T*. Such outliers should be evident on a norm al Q -Q plot (or com parable relevant plot), and when found they should be om itted. M ore generally, a statistic th at depends heavily on a few quantiles can be sensitive to the repeated values th a t occur under nonparam etric sampling, an d it can be useful to sm ooth the original d a ta when dealing with such statistics; see Section 3.4.
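The count m_n of distinct unordered resamples can be checked directly; `m_atoms` is our own helper name for this sketch.

```python
from math import comb

def m_atoms(n):
    """Number of distinct unordered resamples of size n from n distinct
    values: the multiset count C(2n - 1, n - 1)."""
    return comb(2 * n - 1, n - 1)

print(m_atoms(7), m_atoms(11))  # -> 1716 352716
```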
2.4 Simple Confidence Intervals

The major application for distributions and quantiles of an estimator T is in the calculation of confidence limits. There are several ways of using bootstrap simulation results in this context, most of which will be explored in Chapter 5. Here we describe briefly two basic methods.
Figure 2.9 Scatter plots of sample standard deviation versus sample average for samples generated by parametric simulation from the fitted exponential model (left panels) and by nonparametric resampling (right panels). Top line is for R = 99 and bottom line is for R = 999.
The simplest approach is to use a normal approximation to the distribution of T. As outlined in Section 2.1.2, this means estimating the limits (2.4), which require only bootstrap estimates of bias and variance. As we have seen in previous sections, a normal approximation will not always suffice. Then if we use the bootstrap estimates of quantiles for T − θ as described in Section 2.2.2, an equitailed (1 − 2α) confidence interval will have limits

    t − (t*_((R+1)(1−α)) − t),    t − (t*_((R+1)α) − t).    (2.10)

This is based on the probability implication

    Pr(a ≤ T − θ ≤ b) = 1 − 2α  implies  Pr(T − b ≤ θ ≤ T − a) = 1 − 2α,

with the quantiles a and b of T − θ estimated by the corresponding quantiles of T* − t.
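In code, the basic bootstrap limits (2.10) amount to reflecting the order statistics of t* about t. A minimal sketch with a hypothetical sample, the mean as the statistic, and arbitrary seed and R:

```python
import random

random.seed(7)
# Hypothetical sample; any statistic t(.) could replace the mean.
y = [random.gauss(10, 2) for _ in range(25)]
n, t = len(y), sum(y) / len(y)

R, alpha = 999, 0.025
tstar = sorted(sum(random.choices(y, k=n)) / n for _ in range(R))

# Basic bootstrap limits (2.10): t - (t*_((R+1)(1-a)) - t) and
# t - (t*_((R+1)a) - t), i.e. 2t minus the upper/lower order statistics.
lo = 2 * t - tstar[int((R + 1) * (1 - alpha)) - 1]
hi = 2 * t - tstar[int((R + 1) * alpha) - 1]
print(round(lo, 2), round(hi, 2))
```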
We shall refer to the limits (2.10) as the basic bootstrap confidence limits. Their accuracy depends upon R, of course, and one would typically take R ≥ 1000 to be safe. But accuracy also depends upon the extent to which the distribution of T* − t agrees with that of T − θ. Complete agreement will occur if T − θ has a distribution not depending on any unknowns. This special property is enjoyed by quantities called pivots, which we discuss in more detail in Section 2.5.1.

If, as is usually the case, the distribution of T − θ does depend on unknowns, then we can try alternative expressions contrasting T and θ, such as differences of transformed quantities, or studentized comparisons. For the latter, we define the studentized version of T − θ as

    Z = (T − θ) / V^{1/2},    (2.11)

where V is an estimate of var(T | F): we give a fairly general form for V in Section 2.7.2. The idea is to mimic the Student-t statistic, which has this form, and which eliminates the unknown standard deviation when making inference about a normal mean. Throughout this book we shall use Z to denote a studentized statistic.

Recall that the Student-t (1 − 2α) confidence interval for a normal mean μ has limits

    ȳ − v^{1/2} t_{n−1}(1 − α),    ȳ − v^{1/2} t_{n−1}(α),

where v is the estimated variance of the mean and t_{n−1}(α), t_{n−1}(1 − α) are quantiles of the Student-t distribution with n − 1 degrees of freedom, the distribution of the pivot Z. More generally, when Z is defined by (2.11), the (1 − 2α) confidence interval limits for θ have the analogous form

    t − v^{1/2} z_{1−α},    t − v^{1/2} z_α,

where z_p denotes the p quantile of Z. One simple approximation, which can often be justified for large sample size n, is to take Z as being N(0, 1). The result would be no different in practical terms from using a normal approximation for T − θ, and we know that this is often inadequate. It is more accurate to estimate the quantiles of Z from replicates of the studentized bootstrap statistic, Z* = (T* − t)/V*^{1/2}, where T* and V* are based on a simulated random sample Y*_1, ..., Y*_n. If the model is parametric, the Y*_j are generated from the fitted parametric distribution, and if the model is nonparametric, they are generated from the EDF F̂, as outlined in Section 2.3. In either case we use the (R + 1)α-th order statistic of the simulated values z*_1, ..., z*_R, namely z*_((R+1)α), to estimate z_α. Then the studentized bootstrap confidence interval for θ has limits

    t − v^{1/2} z*_((R+1)(1−α)),    t − v^{1/2} z*_((R+1)α).    (2.12)
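For a sample mean, where V = s²/n is available in closed form, the studentized interval (2.12) can be sketched as follows. The skewed data, R and the seed are arbitrary choices.

```python
import math
import random

random.seed(3)
# Hypothetical right-skewed sample; the mean is the estimator T and
# V = s^2/n is its usual variance estimate.
y = [random.expovariate(0.1) for _ in range(30)]
n = len(y)

def mean_var(d):
    m = sum(d) / len(d)
    v = sum((x - m) ** 2 for x in d) / (len(d) * (len(d) - 1))
    return m, v

t, v = mean_var(y)

R, alpha = 999, 0.025
z = []
for _ in range(R):
    m_r, v_r = mean_var(random.choices(y, k=n))
    z.append((m_r - t) / math.sqrt(v_r))   # studentized replicate z*
z.sort()

# Studentized limits (2.12): note the 1-alpha quantile sets the lower limit.
lo = t - math.sqrt(v) * z[int((R + 1) * (1 - alpha)) - 1]
hi = t - math.sqrt(v) * z[int((R + 1) * alpha) - 1]
print(round(lo, 1), round(hi, 1))
```

For skewed data the interval is typically asymmetric about t, unlike the normal-approximation interval.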
This studentized bootstrap method is most likely to be of use in nonparametric problems. One reason for this is that with parametric models we can sometimes find "exact" solutions (as with the exponential model for Example 1.1), and otherwise we have available methods based on the likelihood function. This does not necessarily rule out the use of parametric simulation, of course, for approximating the distribution of the quantity used as basis for the confidence interval.

Example 2.11 (Air-conditioning data) Under the exponential model for the data of Example 1.1, we have T = Ȳ, and since var(T | F_μ) = μ²/n, we would take V = Ȳ²/n. This gives

    Z = (T − μ)/V^{1/2} = n^{1/2}(1 − μ/Ȳ),

which is an exact pivot because Q = Ȳ/μ has the gamma distribution with index n and unit mean. Simulation to construct confidence intervals is unnecessary, because the quantiles of the gamma distribution are available from tables. Parametric simulation would be based on Q* = Ȳ*/t, where Ȳ* is the average of a random sample Y*_1, ..., Y*_n from the exponential distribution with mean t. Since Q* has the same distribution as Q, the only error incurred by simulation would be due to the randomness of the simulated quantiles. For example, the estimates of the 0.025 and 0.975 quantiles of Q based on R = 999 simulations are 0.504 and 1.608, compared to the exact values 0.517 and 1.640; these lead to estimated and exact 95% confidence intervals (67.2, 214.6) and (65.9, 209.2) respectively. We shall discuss these intervals more fully in Chapter 5. ■

Example 2.12 (City population data) For the sample of n = 10 pairs analysed in Example 2.8, our estimate of the ratio θ is t = x̄/ū = 1.52. The 0.025 and 0.975 quantiles of the 999 values of t* are 1.236 and 2.059, so the 95% basic bootstrap confidence interval (2.10) for θ is (0.981, 1.804).
To apply the studentized interval, we use the delta method approximation to the variance of T, which is (Problem 2.9)

    v_L = n^{-2} Σ_{j=1}^{n} (x_j − t u_j)² / ū²,

and base confidence intervals for θ on (T − θ)/V_L^{1/2}, using simulated values of z* = (t* − t)/v_L*^{1/2}. The simulated values in the right panel of Figure 2.5 show that the density of the studentized bootstrap statistic Z* is not close to normal. The 0.025 and 0.975 quantiles of the 499 simulated z* values are −3.063 and 1.447, and since v_L = 0.0325, an approximate 95% equitailed confidence interval based on (2.12) is (1.260, 2.072). This is quite different from the interval above.

The usefulness of these confidence intervals will depend on how well F̂
estimates F and the extent to which the distributions of T − θ and of Z depend on F. We cannot judge the former, but we can check the latter using the methods outlined in Section 3.9.2; see Examples 3.20 and 9.11. ■
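The numbers t = 1.52 and v_L = 0.0325 of Example 2.12 can be checked directly from the data pairs. The values below are taken as the n = 10 city population pairs of the source's earlier example and are an assumption of this sketch; substitute the source's values if they differ (they do reproduce both quoted figures).

```python
# City population pairs: u = earlier census, x = later census (thousands).
u = [138, 93, 61, 179, 48, 37, 29, 23, 30, 2]
x = [143, 104, 69, 260, 75, 63, 50, 48, 111, 50]
n = len(u)

ubar, xbar = sum(u) / n, sum(x) / n
t = xbar / ubar                       # ratio estimate

# Delta method approximation: v_L = n^{-2} sum_j (x_j - t u_j)^2 / ubar^2.
vL = sum((xj - t * uj) ** 2 for xj, uj in zip(x, u)) / (n ** 2 * ubar ** 2)
print(round(t, 2), round(vL, 4))  # -> 1.52 0.0325
```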
2.5 Reducing Error

The error in resampling methods is generally a combination of statistical error and simulation error. The first of these is due to the difference between F̂ and F, and the magnitude of the resulting error will depend upon the choice of T. The simulation error is wholly due to the use of empirical estimates of properties under sampling from F̂, rather than exact properties. Figure 2.7 illustrates these two sources of error in quantile estimation. The decreasing simulation error shows as reduced scatter of the quantile estimates for increased R. Statistical error due to an inappropriate model for T is reflected by the difference between the simulated nonparametric quantiles for large R and the dotted lines that indicate the quantiles under the exponential model. The further statistical error due to the difference between F̂ and F cannot be illustrated, because we do not know the true model underlying the data. However, other samples of the same size from that model would yield different estimates of the true quantiles, quite apart from the variability of the quantile estimates obtained from each specific dataset by simulation.

2.5.1 Statistical error

The basic bootstrap idea is to approximate a quantity c(F), such as var(T | F), by the estimate c(F̂), where F̂ is either a parametric or a nonparametric estimate of F based on data y_1, ..., y_n. The statistical error is then the difference between c(F̂) and c(F), and as far as possible we wish to minimize this or remove it entirely. This is sometimes possible by careful choice of c(·). For example, in Example 1.1 with the exponential model, we have seen that working with T/θ removes statistical error completely.

For both confidence interval and significance test calculation, we usually have a choice as to what T is and how to use it. Significance testing raises special issues, because we then have to deal with a null hypothesis sampling distribution, so here it is best to focus on confidence interval calculation. For simplicity we also assume that the estimate T is decided upon. Then the quantity c(F) will be a quantile or a moment of some quantity Q = q(F̂, F) derived from T, such as h(T) − h(θ) or (T − θ)/V^{1/2} where V is an estimated variance, or something more complicated such as a likelihood ratio. The statistical problem is to choose among these possible quantities so that the resulting Q is as nearly pivotal as possible, that is, it has (at least approximately) the same distribution under sampling from both F and F̂.
Provided that Q is a monotone function of θ, it will be straightforward to obtain confidence limits. For example, if Q = h(T) − h(θ) with h(t) increasing in t, and if a_α is an approximate lower α quantile of h(T) − h(θ), then

    1 − α = Pr{h(T) − h(θ) ≥ a_α} = Pr[θ ≤ h^{-1}{h(T) − a_α}],    (2.13)

where h^{-1}(·) is the inverse transformation. So h^{-1}{h(T) − a_α} is an upper (1 − α) confidence limit for θ.

Parametric problems

In parametric problems F = F_ψ and F̂ = F_ψ̂ have the same form, differing only in parameter values. The notion of a pivot is quite simple here, meaning constant behaviour under all values of the model parameters. More formally, we define a pivot as a function Q = q(T, θ) whose distribution does not depend on the value of ψ: for all q,

    Pr{q(T, θ) ≤ q | ψ} is independent of ψ.

(In general Q may also depend on other statistics, as when Q is the studentized form of T.) One can check, sometimes theoretically and always empirically, whether or not a particular quantity Q is exactly or nearly pivotal, by examining its behaviour under the model form with varying parameter values. For example, in the context of Example 1.1 we could simultaneously examine properties of T − θ, log T − log θ and the studentized version of the former, by simulation under several exponential models close to the fitted model. This might result in plots of variance or selected quantiles versus parameter values, from which we could diagnose the nonpivotal behaviour of T − θ and the pivotal behaviour of log T − log θ.

A special role for transformation h(T) arises because sometimes it is relatively easy to choose h(·) so that the variance of h(T) is approximately or exactly independent of θ, and this stability of variance is the primary feature of stability of distribution. Suppose that T has variance v(θ), and write ḣ(θ) for the first derivative dh(θ)/dθ. Then provided the function h(·) is well behaved at θ, Taylor series expansion as described in Section 2.7.1 leads to

    var{h(T)} ≐ {ḣ(θ)}² v(θ),

which in turn implies that the variance is made approximately constant (equal to 1) if

    h(t) = ∫^t du / {v(u)}^{1/2}.    (2.14)

This is known as the variance-stabilizing transformation. Any constant multiple of h(T) will be equally effective: often in one-sample problems where v(θ) = n^{-1} σ²(θ), equation (2.14) would be applied with σ(u) in place of {v(u)}^{1/2}, in which case h(·) is independent of n and var{h(T)} ≐ n^{-1}.

For a problem where v(θ) varies strongly with θ, use of this transformation
Figure 2.10 Log-log plot of estimated variance of Ȳ against θ for the air-conditioning data with an exponential model. The plot suggests strongly that var(Ȳ | θ) ∝ θ².
in conjunction with (2.13) will typically give more accurate confidence limits than would be obtained using direct approximations of quantiles for T − θ. If such use of the transformation is appropriate, it will sometimes be clear from theoretical considerations, as in the exponential case. Otherwise the transformation would have to be identified from a scatter plot of simulation-estimated variance of T versus θ for a range of values of θ.

Example 2.13 (Air-conditioning data) Figure 2.10 shows a log-log plot of the empirical variances of t* = ȳ* based on R = 50 simulations for each of a range of values of θ. That is, for each value of θ we generate R values t*_1, ..., t*_R corresponding to samples y*_1, ..., y*_n from the exponential distribution with mean θ, and then plot log{(R − 1)^{-1} Σ_r (t*_r − t̄*)²} against log θ. The linearity and slope of the plot confirm that var(T | F_θ) ∝ θ², where θ = E(T | F_θ). ■

Nonparametric problems

In nonparametric problems the situation is more complicated. It is now unlikely (but not strictly impossible) that any quantity can be exactly pivotal. Also we cannot simulate data from a distribution with the same form as F, because that form is unknown. However, we can simulate data from distributions near to and similar to F̂, and this may be enough since F̂ is near F. A rough idea of what is possible can be had from Example 2.10. In the right-hand panels of Figure 2.9 we plotted sample standard deviation versus sample average for a series of nonparametrically resampled datasets. If the EDFs of those datasets are thought of as models near both F and F̂, then although the pattern is obscured by the banding, the plots suggest that the true model has standard deviation proportional to its mean, which is indeed the case for the most
likely true model. There are conceptual difficulties with this argument, but there is little question that the implication drawn is correct, namely that log Ȳ will have approximately the same variance under sampling from both F and F̂. A more thorough discussion of these ideas for nonparametric problems will be given in Section 3.9.2.

A major focus of research on resampling methods has been the reduction of statistical error. This is reflected particularly in the development of accurate confidence limit methods, which are described in Chapter 5. In general it is best to remove as much of the statistical error as possible in the choice of procedure. However, it is possible to reduce statistical error by a bootstrap technique described in Section 3.9.1.
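The log-log diagnostic of Example 2.13 is easy to reproduce by simulation: since var(T | θ) = θ²/n under the exponential model, the least-squares slope of log variance on log θ should be close to 2. A minimal sketch; n = 12 echoes the example, while the grid of θ values, R and the seed are arbitrary choices.

```python
import math
import random

random.seed(11)
n, R = 12, 200

logtheta, logvar = [], []
for theta in (40, 60, 90, 135, 200):
    tstar = [sum(random.expovariate(1 / theta) for _ in range(n)) / n
             for _ in range(R)]
    m = sum(tstar) / R
    v = sum((ts - m) ** 2 for ts in tstar) / (R - 1)  # empirical var of T*
    logtheta.append(math.log(theta))
    logvar.append(math.log(v))

# Least-squares slope of log(var) on log(theta); var(T) = theta^2/n
# implies a slope of 2.
k = len(logtheta)
mx, my = sum(logtheta) / k, sum(logvar) / k
slope = (sum((a - mx) * (b - my) for a, b in zip(logtheta, logvar))
         / sum((a - mx) ** 2 for a in logtheta))
print(round(slope, 2))
```

A slope near 2 on the log scale is exactly the pattern drawn in Figure 2.10, and points to the log transformation as variance-stabilizing.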
2.5.2 Simulation error

Simulation error arises when Monte Carlo simulations are performed and properties of statistics are approximated by their empirical properties in these simulations. For example, we approximate the estimate B = E*(T* | F̂) − t of the bias β = E(T) − θ by the average

    B_R = R^{-1} Σ_{r=1}^{R} T*_r − t = T̄* − t,

using the independent replicates T*_1, ..., T*_R, each based on a random sample from our data EDF F̂. The Monte Carlo variability in R^{-1} Σ T*_r can only be removed entirely by an infinite simulation, which seems both impossible and unnecessary in practice. The practical question is, how large does R need to be to achieve reasonable accuracy, relative to the statistical accuracy of the quantity (bias, variance, etc.) being approximated by simulation? While it is not possible to give a completely general and firm answer, we can get a fairly good sense of what is required by considering the bias, variance and quantile estimates in simple cases. This we now do.

Suppose that we have a sample y_1, ..., y_n from the N(μ, σ²) distribution, and that the parameter of interest θ = μ is estimated by the sample average t = ȳ. Suppose that we use nonparametric simulation to approximate the bias, variance and the p quantile a_p of T − θ = Ȳ − μ. Then the first step, as described in Section 2.3, is to take R independent replicate samples from y_1, ..., y_n, and calculate their means Ȳ*_1, ..., Ȳ*_R. From these we calculate the bias, variance and quantile estimators as described earlier. Of course the problem is so simple that we know the real answers, namely 0, n^{-1}σ² and n^{-1/2}σ z_p.
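The R^{-1/2} decay of simulation error in B_R can be seen directly by repeating the whole bootstrap many times at two values of R and comparing the spread of the resulting bias estimates. A minimal sketch with a hypothetical normal sample; the sample size, repeat count and seed are arbitrary choices.

```python
import random

random.seed(5)
y = [random.gauss(0, 1) for _ in range(20)]
n, t = len(y), sum(y) / len(y)

def bias_est(R):
    """B_R = average of the T*_r minus t, the Monte Carlo bias estimate."""
    return sum(sum(random.choices(y, k=n)) / n for _ in range(R)) / R - t

def spread(R, repeats=100):
    """Standard deviation of B_R over independent repetitions."""
    bs = [bias_est(R) for _ in range(repeats)]
    m = sum(bs) / repeats
    return (sum((b - m) ** 2 for b in bs) / (repeats - 1)) ** 0.5

s_small, s_large = spread(50), spread(1000)
print(round(s_small, 4), round(s_large, 4))
```

The spread at R = 1000 should be smaller than at R = 50 by a factor of roughly √20, illustrating that simulation error, unlike statistical error, can be driven down simply by increasing R.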
    ∫ h(u) dG_{F,∞}(u)

for all integrable functions h(·). Under these conditions the bootstrap is consistent, meaning that for any q and ε > 0,

    Pr{|G_{F̂,n}(q) − G_{F,∞}(q)| > ε} → 0  as n → ∞.
2.6 · Statistical Issues
The first condition ensures that there is a limit for G_{F̂,n} to converge to, and would be needed even in the happy situation where F̂ equalled F for every n ≥ n′, for some n′. Now as n increases, F̂ changes, so the second and third conditions are needed to ensure that G_{F̂,n} approaches G_{F,∞} along every possible sequence of F̂s. If any one of these conditions fails, the bootstrap can fail.

Example 2.15 (Sample maximum) Suppose that Y_1, ..., Y_n is a random sample from the uniform distribution on (0, θ). Then the maximum likelihood estimate of θ is the largest sample value, T = Y_(n), where Y_(1) ≤ ··· ≤ Y_(n) are the sample order statistics. Consider nonparametric resampling. The limiting distribution of Q = n(θ − T)/θ is standard exponential, and this suggests that we take our standardized quantity to be Q* = n(t − T*)/t, where t is the observed value of T, and T* is the maximum of a bootstrap sample of size n taken from y_1, ..., y_n. As n → ∞, however,

    Pr(Q* = 0 | F̂) = Pr(T* = t | F̂) = 1 − (1 − n^{-1})^n → 1 − e^{-1},

and consequently the limiting distribution of Q* cannot be standard exponential. The problem here is that the second condition fails: the distributional convergence is not uniform on useful neighbourhoods of F. Any fixed order statistic Y_(k) suffers from the same difficulty, but a statistic like a sample quantile, where we would take k = pn for some fixed 0 < p < 1, does not. ■

Asymptotic accuracy

(Here and below we say X_n = O_p(n^d) when Pr(n^{-d}|X_n| > ε) → p for some constant p as n → ∞, and X_n = o_p(n^d) when Pr(n^{-d}|X_n| > ε) → 0 as n → ∞, for any ε > 0.)
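Returning to Example 2.15, the defect can be checked numerically: the chance that the bootstrap maximum simply reproduces the observed maximum is 1 − (1 − 1/n)^n, close to 1 − e^{-1} ≈ 0.632 even for moderate n, so Q* has an atom at 0 that the standard exponential limit lacks. A minimal sketch; the sample size, R and seed are arbitrary choices.

```python
import random

random.seed(2)
n = 50
# Exact probability that the bootstrap maximum equals the sample maximum.
p_exact = 1 - (1 - 1 / n) ** n

y = [random.uniform(0, 10) for _ in range(n)]
t = max(y)
R = 5000
# Fraction of bootstrap samples whose maximum reproduces t exactly.
hits = sum(max(random.choices(y, k=n)) == t for _ in range(R)) / R
print(round(p_exact, 3), round(hits, 3))
```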
Consistency is a weak property, for example guaranteeing only that the true probability coverage of a nominal (1 − 2α) confidence interval is 1 − 2α + o_p(1). Standard normal approximation methods are consistent in this sense. Once consistency is established, meaning that the resampling method is "valid", we need to know whether the method is "good" relative to other possible methods. This involves looking at the rate of convergence to nominal properties. For example, does the coverage of the confidence interval deviate from (1 − 2α) by O_p(n^{-1/2}) or by O_p(n^{-1})? Some insight into this can be obtained by expansion methods, as we now outline. More detailed calculations are made in Section 5.4.

Suppose that the problem is one where the limiting distribution of Q is standard normal, and where an Edgeworth expansion applies. Then the distribution of Q can be written in the form

    Pr(Q ≤ q | F) = Φ(q) + n^{-1/2} a(q) φ(q) + O(n^{-1}),    (2.22)

where Φ(·) and φ(·) are the CDF and PDF of the standard normal distribution, and a(·) is an even quadratic polynomial. For a wide range of problems it can be shown that the corresponding approximation for the bootstrap version of Q is

    Pr(Q* ≤ q | F̂) = Φ(q) + n^{-1/2} â(q) φ(q) + O_p(n^{-1}),    (2.23)
where â(·) is obtained by replacing unknowns in a(·) by estimates. Now typically â(q) = a(q) + O_p(n^{-1/2}), so

    Pr(Q* ≤ q | F̂) − Pr(Q ≤ q | F) = O_p(n^{-1}).    (2.24)

Thus the estimated distribution for Q differs from the true distribution by a term that is O_p(n^{-1}), provided that Q is constructed in such a way that it is asymptotically pivotal. A similar argument will typically hold when Q has a different limiting distribution, provided it does not depend on unknowns.

Suppose that we choose not to standardize Q, so that its limiting distribution is normal with variance v. An Edgeworth expansion still applies, now with form

    Pr(Q ≤ q | F) = Φ(q/v^{1/2}) + n^{-1/2} a′(q/v^{1/2}) φ(q/v^{1/2}) + O(n^{-1}),    (2.25)

where a′(·) is a quadratic polynomial that is different from a(·). The corresponding expansion for Q* is

    Pr(Q* ≤ q | F̂) = Φ(q/v̂^{1/2}) + n^{-1/2} â′(q/v̂^{1/2}) φ(q/v̂^{1/2}) + O_p(n^{-1}).    (2.26)

Typically v̂ = v + O_p(n^{-1/2}), which would imply that

    Pr(Q* ≤ q | F̂) − Pr(Q ≤ q | F) = O_p(n^{-1/2}),    (2.27)

because the leading terms on the right-hand sides of (2.25) and (2.26) are different.

The difference between (2.24) and (2.27) explains our insistence on working with approximate pivots whenever possible: use of a pivot will mean that a bootstrap distribution function is an order of magnitude closer to its target. It also gives a cogent theoretical motivation for using the bootstrap to set confidence intervals, as we now outline.

We can obtain the α quantile of the distribution of Q by inverting (2.22), giving the Cornish-Fisher expansion

    q_α = z_α + n^{-1/2} a″(z_α) + O(n^{-1}),

where z_α is the α quantile of the standard normal distribution, and a″(·) is a further polynomial. The corresponding bootstrap quantile has the property that q*_α − q_α = O_p(n^{-1}). For simplicity take Q = (T − θ)/V^{1/2}, where V estimates the variance of T. Then an exact one-sided confidence interval for θ based on Q would be I_α = [T − V^{1/2} q_α, ∞), and this contains the true θ with probability α. The corresponding bootstrap interval is I*_α = [T − V^{1/2} q*_α, ∞), where q*_α is the α quantile of the distribution of Q*, which would often be estimated by simulation, as we have seen. Since q*_α − q_α = O_p(n^{-1}), we have

    Pr(θ ∈ I_α) = α,    Pr(θ ∈ I*_α) = α + O(n^{-1}),
so that the actual probability that I*_α contains θ differs from the nominal probability by only O(n^{-1}). In contrast, intervals based on inverting (2.25) will contain θ with probability α + O(n^{-1/2}). Such an interval is in principle no more accurate than the interval [T − V^{1/2} z_α, ∞) obtained by assuming that the distribution of Q is standard normal. Thus one-sided confidence intervals based on quantiles of Q* have an asymptotic advantage over the use of a normal approximation. Similar comments apply to two-sided intervals. The practical usefulness of such results will depend on the numerical value of the difference (2.24) at the values of q of interest, and it will always be wise to try to decrease this statistical error, as outlined in Section 2.5.1.

The results above based on Edgeworth expansions apply to many common statistics: smooth functions of sample moments, such as means, variances, higher moments, and eigenvalues and eigenvectors of covariance matrices; smooth functions of solutions to smooth estimating equations, such as most maximum likelihood estimators, estimators in linear and generalized linear models, and some robust estimators; and many statistics calculated from time series.
2.6.2 Rough statistics: unsmooth and unstable

What typically validates the bootstrap is the existence of an Edgeworth expansion for the statistic of interest, as would be the case when that statistic is a differentiable function of sample moments. Some statistics, such as sample quantiles, depend on the sample in an unsmooth or unstable way, so that standard expansion theory does not apply. Often the nonparametric resampling method will still be valid, in the sense that it is consistent, but for finite samples it may not work very well. Part of the reason for this is that the set of possible values for T* may be very small, and very vulnerable to unusual data points. A case in point is that of sample quantiles, the most familiar of which, the sample median, is discussed in the next example. Example 2.15 gives a case where naive resampling fails completely.

Example 2.16 (Sample median) Suppose that the sample size is odd, n = 2m + 1, so that the sample median is y_(m+1). In large samples the median is approximately normally distributed about the population median μ, but standard nonparametric methods of variance estimation (jackknife and delta method) do not work here (Example 2.19, Problem 2.17). Nonparametric resampling does work to some extent, provided the sample size is quite large and the data are not too dirty. Crucially, bootstrap confidence limits work quite well.

Note first that the bootstrap statistic Y* is concentrated on the sample values y_(k), which makes the estimated distribution of the median very discrete and very vulnerable to unusual observations. Problem 2.4 shows that the exact
Table 2.4  Theoretical, empirical and mean bootstrap estimates of variance (×10⁻²) of the sample median, based on 10000 datasets of sizes n = 11, 21. The effective degrees of freedom of the bootstrap variances uses a χ² approximation to their distribution.

                     Normal          t₃             Cauchy
    n               11     21     11     21      11       21
    Theoretical     14.3   7.5    16.8   8.8     22.4     11.7
    Empirical       13.9   7.3    19.1   9.5     38.3     14.6
    Mean bootstrap  17.2   8.8    25.9   11.4    14000    22.8
    Effective df    4.3    5.4    3.2    4.9     0.002    0.5
distribution of Y* is

    Pr(Y* = y_(k) | F̂) = Σ_{j=0}^{m} C(n, j) { p_{k−1}^j (1 − p_{k−1})^{n−j} − p_k^j (1 − p_k)^{n−j} },    (2.28)

for k = 1, ..., n, where p_k = k/n; simulation is not needed in this case. The moments of this bootstrap distribution, including its mean and variance, converge to the correct values as n increases. However, the convergence can be very slow. To illustrate this, Table 2.4 compares the average bootstrap variance with the empirical variance of the median for data samples of sizes n = 11 and 21 from the standard normal distribution, the Student-t distribution with three degrees of freedom, and the Cauchy distribution; also shown are the theoretical variance approximations, which are incalculable when the true distribution F is unknown. We see that the bootstrap variance can be very poor for n = 11 when distributions are long-tailed. The value 1.4 × 10⁴ for average bootstrap variance with Cauchy data is not a mistake: the bootstrap variance exceeds 100 for about 1% of datasets, and for some samples the bootstrap variance is huge. The situation stabilizes when n reaches 40 or more.

The gross discreteness of Y* could also affect the simple confidence limit method described in Section 2.4. But provided the inequalities used to justify (2.10) are taken to be ≤ and ≥ rather than < and >, the method works well. For example, for Cauchy samples of size n = 11 the coverage of the 90% basic bootstrap confidence interval (2.10) is 90.8% in 1000 samples; see Problem 2.4. We suggest adopting the same practice for all problems where t* is supported on a small number of values. ■

The statistic T will certainly behave wildly under resampling when t(F) does not exist, as happens for the mean when F is a Cauchy distribution. Quite naturally over repeated samples the bootstrap will produce silly and useless results in such cases. There are two points to make here. First, if data are taken from a real population, then such mathematical difficulties cannot arise.
Secondly, the standard approaches to data analysis include careful screening of data for outliers, nonnormality, and so forth, which leads either to deletion of disruptive data elements or to sensible and reliable choices of estimators
T. In short, the mathematical pathology of nonexistence is unlikely to be a practical problem.
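Equation (2.28) can be evaluated directly from the binomial-count argument behind it: the bootstrap median is at most y_(k) exactly when at least m + 1 of the n draws fall at or below y_(k). A minimal sketch; `median_pmf` is our own name.

```python
from math import comb

def median_pmf(n):
    """Exact Pr(Y* = y_(k)), k = 1..n, for the bootstrap median when
    n = 2m + 1; a direct evaluation of equation (2.28)."""
    m = (n - 1) // 2

    def cdf_le(k):
        # Pr(bootstrap median <= y_(k)) = Pr{Binomial(n, k/n) >= m + 1}.
        p = k / n
        return 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j)
                       for j in range(m + 1))

    return [cdf_le(k) - cdf_le(k - 1) for k in range(1, n + 1)]

pmf = median_pmf(11)
print([round(p, 3) for p in pmf])
```

The probabilities are heavily concentrated on the few central order statistics, which is the gross discreteness discussed in Example 2.16.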
2.6.3 Conditional properties

Resampling calculations are based on the observed data, and in that sense resampling methods are conditional on the data. This is especially so in the nonparametric case, where nothing but the data is used. Because of this, the question is sometimes asked: "Are resampling methods therefore conditional in the inferential sense?" The short answer is: "No, at least not in any useful way, unless the relevant conditioning can be made explicit."

Conditional inference arises in parametric inference when the sufficient statistic includes an ancillary statistic A whose distribution is free of parameters. Then we argue that inferences about parameters (e.g. confidence intervals) should be based on sampling distributions conditional on the observed value of A; this brings inference more into line with Bayesian inference. Two examples are the configuration of residuals in location models, and the values of explanatory variables in regression models. The first cannot be accommodated in nonparametric bootstrap analysis because the effect depends upon the unknown F. The second can be accommodated (Chapter 6) because the effect does not depend upon the stochastic part of the model.

It is certainly true that the bootstrap distribution of T* will reflect ancillary features of the data, as in the case of the sample median (Example 2.16), but the reflection is pale to the point of uselessness. There are situations where it is possible explicitly to condition the resampling so as to provide conditional inference. Largely these situations are those where there is an experimental ancillary statistic, as in regression. One other situation is discussed in Example 5.17.
2.6.4 When might the bootstrap fail?

Incomplete data

So far we have assumed that F is the distribution of interest and that the sample y_1, ..., y_n drawn from F has nothing removed before we see it. This might be important in several ways, not least in guaranteeing statistical consistency of our estimator T. But in some applications the observation that we get may not always be y itself. For example, with survival data the ys might be censored, meaning that we may only learn that y was greater than some cut-off c because observation of the subject ceased before the event which determines y. Or, with multiple measurements on a series of patients it may be that for some patients certain measurements could not be made because the patient did not consent, or the doctor forgot.
2 • The Basic Bootstraps
Under certain circumstances the resampling methods we have described will work, but in general it would be unwise to assume this without careful thought. Alternative methods will be described in Section 3.6.

Dependent data

In general the nonparametric resampling method that we have described will not work for dependent data. This can be illustrated quite easily in the case where the data form one realization of a correlated time series. For example, consider the sample average ȳ and suppose that the data come from a stationary series {Y_j} whose marginal variance is σ² = var(Y_j) and whose autocorrelations are ρ_h = corr(Y_j, Y_{j+h}) for h = 1, 2, .... In Example 2.7 we showed that the nonparametric bootstrap estimate of the variance of Ȳ is approximately s²/n, and for large n this will approach σ²/n. But for a stationary series the actual variance of Ȳ is

var(Ȳ) = n^{-1} σ² {1 + 2 Σ_{h=1}^{n-1} (1 − h/n) ρ_h},

which can differ substantially from σ²/n when the autocorrelations ρ_h are appreciable.
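This failure is easy to see numerically. The sketch below is not from the book: it assumes an AR(1) model with arbitrarily chosen n = 200 and ρ = 0.6, and compares the average of the bootstrap variance estimate s²/n with the true variance of Ȳ.

```python
import math
import random

random.seed(1)

def ar1_series(n, rho, sigma=1.0):
    # stationary AR(1): Y_t = rho * Y_{t-1} + e_t, marginal variance sigma^2/(1 - rho^2)
    y = [random.gauss(0, sigma / math.sqrt(1 - rho**2))]
    for _ in range(n - 1):
        y.append(rho * y[-1] + random.gauss(0, sigma))
    return y

def true_var_mean(n, rho, sigma=1.0):
    # var(Ybar) = n^{-1} sigma_Y^2 {1 + 2 sum_{h=1}^{n-1} (1 - h/n) rho^h}
    s2 = sigma**2 / (1 - rho**2)
    return (s2 / n) * (1 + 2 * sum((1 - h / n) * rho**h for h in range(1, n)))

n, rho, reps = 200, 0.6, 1000
boot_est = []
for _ in range(reps):
    y = ar1_series(n, rho)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)
    boot_est.append(s2 / n)   # the nonparametric bootstrap variance estimate, s^2/n

avg_boot = sum(boot_est) / reps
print(avg_boot, true_var_mean(n, rho))
```

With ρ = 0.6 the true variance is several times larger than the average bootstrap estimate, so ordinary case resampling would give badly overconfident intervals here.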
A simple illustration is Example 2.20, where t is determined by the estimating function u(y; θ) = x − θu. For some purposes it is useful to go beyond the first derivative term in the expansion of t(F̂) and obtain the quadratic approximation

t(F̂) ≈ t(F) + ∫ L_t(y; F) dF̂(y) + ½ ∫∫ Q_t(y, z; F) dF̂(y) dF̂(z),    (2.41)

where the second derivative Q_t(y, z; F) is defined by

Q_t(y, z; F) = ∂²/∂ε₁∂ε₂ t{(1 − ε₁ − ε₂)F + ε₁H_y + ε₂H_z} |_{ε₁=ε₂=0}.

This derivative satisfies ∫ Q_t(x, y; F) dF(x) = ∫ Q_t(x, y; F) dF(y) = 0, but in general ∫ Q_t(x, x; F) dF(x) ≠ 0. The values q_jk = Q_t(y_j, y_k; F̂) are empirical second derivatives of t(·) analogous to the empirical influence values l_j. In principle (2.41) will be more accurate than (2.35).
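The ε-derivative definitions can be checked numerically for a simple statistic. This sketch is illustrative only (the statistic, a biased sample variance expressed as a function of probability weights p, and the data are arbitrary choices); it compares finite differences with the closed-form values l_j = (y_j − ȳ)² − t and q_jk = −2(y_j − ȳ)(y_k − ȳ) of Problem 2.15.

```python
import random

random.seed(2)
y = [random.gauss(0, 1) for _ in range(10)]   # arbitrary data
n = len(y)
phat = [1.0 / n] * n

def t_of_p(p):
    # biased sample variance as a functional of weights p on the data values
    m1 = sum(pj * yj for pj, yj in zip(p, y))
    m2 = sum(pj * yj * yj for pj, yj in zip(p, y))
    return m2 - m1 * m1

def t_mix(e1, e2, i, j):
    # t evaluated at the contaminated distribution (1 - e1 - e2) phat + e1 1_i + e2 1_j
    q = [(1 - e1 - e2) * pj for pj in phat]
    q[i] += e1
    q[j] += e2
    return t_of_p(q)

t0 = t_of_p(phat)
eps = 1e-6
# first derivative: empirical influence values
l_num = [(t_mix(eps, 0.0, j, 0) - t0) / eps for j in range(n)]
ybar = sum(y) / n
l_exact = [(yj - ybar) ** 2 - t0 for yj in y]

# mixed second difference: empirical second derivative q_01
e = 1e-4
q01_num = (t_mix(e, e, 0, 1) - t_mix(e, 0.0, 0, 1)
           - t_mix(0.0, e, 0, 1) + t0) / e**2
q01_exact = -2 * (y[0] - ybar) * (y[1] - ybar)
print(max(abs(a - b) for a, b in zip(l_num, l_exact)), q01_num - q01_exact)
```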
2.7.3 Jackknife estimates

Another approach to approximating the influence function, but only at the sample values y_1, ..., y_n themselves, is the jackknife. Here l_j is approximated by

l_jack,j = (n − 1)(t − t_{−j}),    (2.42)

where t_{−j} is the estimate calculated with y_j omitted from the data. In effect this corresponds to numerical approximation (2.37) using ε = −(n − 1)^{-1}; see Problem 2.18.
2.7 • Nonparametric Bias and Variance
The jackknife approximations to the bias and variance of T are

b_jack = −(1/n) Σ_{j=1}^n l_jack,j,    v_jack = {n(n − 1)}^{-1} (Σ_{j=1}^n l²_jack,j − n b²_jack).    (2.43)

It is reasonably straightforward to apply (2.33) with F̂_{−j} and F̂ in place of G and F, respectively, to show that l_jack,j ≈ l_j; see Problem 2.15.

Example 2.21 (Average)  For the sample average t = ȳ the case deletion values are t_{−j} = (nȳ − y_j)/(n − 1), and so l_jack,j = y_j − ȳ. This is the same as the empirical influence function because t is linear. The variance approximation in (2.43) reduces to {n(n − 1)}^{-1} Σ (y_j − ȳ)² because b_jack = 0; the denominator n − 1 in the formula for v_jack was chosen to ensure that this happens. ■

One application of (2.43) is to show that in large samples the jackknife bias approximation gives

b_jack ≈ E*(T*) − t ≈ ½ n^{-2} Σ_{j=1}^n q_jj;

see Problem 2.15. So far we have seen two ways to approximate the bias and variance of T using approximations to the influence function, namely the nonparametric delta method and the jackknife method. One can generalize the basic approximation by using alternative numerical derivatives in these two methods.
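In code the delete-one jackknife is only a few lines. The sketch below uses made-up data and the standard combinations b_jack = −n^{-1} Σ l_jack,j and v_jack = {n(n−1)}^{-1}(Σ l²_jack,j − n b²_jack); for the average it confirms that b_jack = 0 and that v_jack reduces to {n(n−1)}^{-1} Σ(y_j − ȳ)², as in Example 2.21.

```python
def jackknife(stat, y):
    """Delete-one jackknife: influence values l_jack,j = (n-1)(t - t_{-j}),
    plus the jackknife bias and variance approximations."""
    n = len(y)
    t_full = stat(y)
    t_del = [stat(y[:j] + y[j + 1:]) for j in range(n)]
    l_jack = [(n - 1) * (t_full - tj) for tj in t_del]
    b_jack = -sum(l_jack) / n
    v_jack = (sum(l * l for l in l_jack) - n * b_jack ** 2) / (n * (n - 1))
    return l_jack, b_jack, v_jack

y = [4.2, 1.0, 3.7, 5.5, 2.8, 3.1]          # made-up data
mean = lambda v: sum(v) / len(v)
l, b, v = jackknife(mean, y)

ybar = mean(y)
v_direct = sum((x - ybar) ** 2 for x in y) / (len(y) * (len(y) - 1))
print(b, v, v_direct)   # b is 0 and v equals v_direct for the average
```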
2.7.4 Empirical influence values via regression

The approximation (2.35) can also be applied to the bootstrap estimate T*. If the EDF of the bootstrap sample is denoted by F̂*, then the analogue of (2.35) is

t(F̂*) ≈ t(F̂) + n^{-1} Σ_{j=1}^n L_t(y*_j; F̂),

or in simpler notation

t*_L = t + n^{-1} Σ_{j=1}^n f*_j l_j,    (2.44)

say, where f*_j is the number of times that y_j occurs in the bootstrap sample, for j = 1, ..., n. The linear approximation (2.44) will be used several times in future chapters. Under the nonparametric bootstrap the joint distribution of the f*_j is multinomial (Problem 2.19). It is easy to see that var*(T*_L) = n^{-2} Σ l²_j = v_L, showing
Figure 2.12  Plots of linear approximation t*_L against t* for the ratio applied to the city population data, with n = 10 (left panel) and n = 49 (right panel).
that the bootstrap estimate of variance should be similar to the nonparametric delta method approximation.

Example 2.22 (City population data)  The right panels of Figure 2.11 show how 999 resampled values of t* depend on n^{-1} f*_j for four values of j, for the data with n = 10. The lines with slope l_j summarize fairly well how t* depends on f*_j, but the correspondence is not ideal. A different way to see this is to plot t* against the corresponding t*_L. Figure 2.12 shows this for 499 replicates. The line shows where the values for an exactly linear statistic would fall. The linear approximation is poor for n = 10, but it is more accurate for the full dataset, where n = 49. In Section 3.10 we outline how such plots may be used to find a suitable scale on which to set confidence limits. ■

Expression (2.44) suggests a way to approximate the l_j using the results of a bootstrap simulation. Suppose that we have simulated R samples from F̂ as described in Section 2.3. Define f*_{rj} to be the frequency with which the data value y_j occurs in the rth bootstrap sample. Then (2.44) implies that

t*_r ≈ t + n^{-1} Σ_{j=1}^n f*_{rj} l_j,    r = 1, ..., R.

This can be viewed as a linear regression equation for "responses" t*_r with "covariate values" n^{-1} f*_{rj} and "coefficients" l_j. We should, however, adjust for the facts that E*(T*) ≠ t in general, that Σ_j l_j = 0, and that Σ_j f*_{rj} = n. For the first of these we add a general intercept term, or equivalently replace t with the average t̄* = R^{-1} Σ_r t*_r.
For the second two we drop the term in l_n, resulting in the regression equation

t*_r − t̄* = Σ_{j=1}^{n-1} n^{-1} f*_{rj} l_j + ε_r,    r = 1, ..., R.    (2.45)

So the vector l = (l_1, ..., l_{n-1}) of approximate values of the l_j is obtained with the least-squares regression formula

l = (F*ᵀF*)^{-1} F*ᵀ d*,    (2.46)
where F* is the R × (n − 1) matrix with (r, j) element n^{-1} f*_{rj}, and the rth row of the R × 1 vector d* is t*_r − t̄*. In fact (2.45) is related to an alternative, orthogonal expansion of T in which the "remainder" term is uncorrelated with the "linear" piece. The several different versions of influence produce different estimates of var(T). In general v_L is an underestimate, whereas use of the jackknife values or the regression estimates of the ls will typically produce an overestimate. We illustrate this in Section 2.7.5.

Example 2.23 (City population data)  For the previous example of the ratio estimator, Table 2.5 gives regression estimates of empirical influence values, obtained from R = 1000 samples. The exact estimate v_L for var(T) is 0.036, compared to the value 0.043 obtained from the regression estimates. The bootstrap variance is 0.042. For n = 49 the corresponding values are 0.00119, 0.00125 and 0.00125. Our experience is that R must be in the hundreds to give a good regression approximation to the empirical influence values. ■
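The regression idea can be tried directly. The sketch below is an illustration rather than the book's implementation: it uses the sample average, whose influence values y_j − ȳ are known exactly, fits the frequency regression with an explicit intercept, and then recovers all n values from the constraint Σ_j l_j = 0, so the bookkeeping differs slightly from (2.46).

```python
import random

random.seed(3)
y = [2.0, 3.5, 1.1, 4.8, 2.9, 3.3, 0.7, 5.2]   # illustrative data
n = len(y)
t = sum(y) / n            # statistic: the average, with l_j = y_j - ybar
R = 400

rows, resp = [], []
for _ in range(R):
    f = [0] * n
    for _ in range(n):
        f[random.randrange(n)] += 1                  # resample frequencies f*_{rj}
    rows.append([1.0] + [fj / n for fj in f[:-1]])   # intercept + n-1 covariates
    resp.append(sum(fj * yj for fj, yj in zip(f, y)) / n)   # t*_r

p = n  # number of coefficients: intercept plus n-1 slopes
A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
b = [sum(r[i] * d for r, d in zip(rows, resp)) for i in range(p)]

# solve the normal equations A beta = b by Gaussian elimination with pivoting
for c in range(p):
    piv = max(range(c, p), key=lambda r: abs(A[r][c]))
    A[c], A[piv] = A[piv], A[c]
    b[c], b[piv] = b[piv], b[c]
    for r in range(c + 1, p):
        m = A[r][c] / A[c][c]
        for k in range(c, p):
            A[r][k] -= m * A[c][k]
        b[r] -= m * b[c]
beta = [0.0] * p
for r in range(p - 1, -1, -1):
    beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]

# the slopes estimate l_j - l_n; recover the l_j using sum_j l_j = 0
slopes = beta[1:]
l_n = -sum(slopes) / n
l_hat = [s + l_n for s in slopes] + [l_n]

l_exact = [yj - t for yj in y]
print(max(abs(u - w) for u, w in zip(l_hat, l_exact)))
```

Because the average is exactly linear, the regression recovers the influence values essentially exactly; for a nonlinear statistic the estimates would only approximate the l_j.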
2.7.5 Variance estimates

In previous sections we have outlined the merits of studentized quantities

Z = (T − θ)/V^{1/2},    (2.47)

where V = v(F̂) is an estimate of var(T | F). One general way to obtain a value for V is to set

v = (M − 1)^{-1} Σ_{m=1}^M (t*_m − t̄*)²,

where t*_1, ..., t*_M are calculated by bootstrap sampling from F̂. Typically we would take M in the range 50–200. Note that resampling is needed to produce a standard error for the original value t of T.
Now suppose that we wish to estimate the quantiles of Z, using empirical quantiles of bootstrap simulations

z*_r = (t*_r − t)/v*_r^{1/2},    r = 1, ..., R.    (2.48)

Since M bootstrap samples from F̂ were needed to obtain v, M bootstrap samples from F̂*_r are needed to produce v*_r. Thus with R = 999 and M = 50, we would require R(M + 1) = 50949 samples in all, which seems prohibitively large for many applications. This suggests that we should replace v*^{1/2} with a standard error that involves no resampling, as follows.

When a linear approximation (2.44) applies, we have seen that var(T* | F̂) can be estimated by v_L = n^{-2} Σ l²_j, where the l_j = L_t(y_j; F̂) are the empirical influence values for t based on the EDF F̂ of y_1, ..., y_n. The corresponding variance estimate for var(T* | F̂*_r) is v*_{Lr} = n^{-2} Σ_j L²_t(y*_{rj}; F̂*_r), based on the empirical influence values for t*_r at the EDF F̂*_r of y*_{r1}, ..., y*_{rn}. Although this requires no further simulation, the L_t(y*; F̂*_r) must be calculated for each of the R samples. If an analytical expression is known for the empirical influence values, it will typically be straightforward to calculate the v*_{Lr}. If not, numerical differentiation can be used, though this is more time-consuming. If neither of these is feasible, we can use the further approximation

v*_{Lr} ≈ n^{-2} Σ_{j=1}^n {L_t(y*_{rj}; F̂) − n^{-1} Σ_{k=1}^n L_t(y*_{rk}; F̂)}²,    (2.49)

which is exact for a linear statistic. In effect this uses the usual formula, with l_j replaced by L_t(y*_{rj}; F̂) − n^{-1} Σ_k L_t(y*_{rk}; F̂) in the rth resample. However, the right-hand side of (2.49) can badly underestimate v*_{Lr} if the statistic is not close to linear. An improved approximation is outlined in Problem 2.20.

Example 2.24 (City population data)  Figure 2.13 compares the variance approximations for n = 10. The top left panel shows v* with M = 50 plotted against the values

v*_{Lr} = n^{-2} Σ_{j=1}^n L²_t(y*_{rj}; F̂*_r)

for R = 200 bootstrap samples. The top right panel shows the values of the approximate variance on the right of (2.49), also plotted against v*_L. The lower panels show Q-Q plots of the corresponding z* values, with (t* − t)/v*_L^{1/2} on the horizontal axis. Plainly v*_L underestimates v*, though not so severely as to have a big effect on the studentized bootstrap statistic. But the right of (2.49) underestimates v*_L to an extent that greatly changes the distribution of the corresponding studentized bootstrap statistics.
Figure 2.13  Variance approximations for the city population data, n = 10. The top panels compare the bootstrap variance v* calculated with M = 50, and the right of (2.49), with v*_L for R = 200 samples. The bottom panels compare the corresponding studentized bootstrap statistics.
The right-hand panels of the corresponding plots for the full data show more nearly linear relationships, so it appears that (2.49) is a better approximation at sample size n = 49. In practice the sample size cannot be increased, and it is necessary to seek a transformation of t to attain approximate linearity. The transformation outlined in Example 3.25 greatly increases the accuracy of (2.49), even with n = 10. ■
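For an exactly linear statistic the delta method value v_L = n^{-2} Σ l²_j and the bootstrap variance estimate essentially coincide, which is a useful sanity check on any implementation. A minimal sketch with arbitrary illustrative data:

```python
import random

random.seed(4)
y = [random.expovariate(1.0) for _ in range(30)]   # arbitrary sample
n = len(y)
ybar = sum(y) / n

# delta method: influence values of the average are l_j = y_j - ybar
vL = sum((v - ybar) ** 2 for v in y) / n**2

# bootstrap variance of the same statistic, for comparison
R = 4000
tstar = [sum(y[random.randrange(n)] for _ in range(n)) / n for _ in range(R)]
tbar = sum(tstar) / R
vboot = sum((ts - tbar) ** 2 for ts in tstar) / (R - 1)
print(vL, vboot)   # close agreement, since the average is exactly linear
```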
2.8 Subsampling Methods

Before and after the development of nonparametric bootstrap methods, other methods based on subsamples were developed to deal with special problems.
We briefly review three such methods here. The first two are in principle superior to resampling for certain applications, although their competitive merits in practice are largely untested. The third method provides an alternative to the nonparametric delta method for variance approximation.
2.8.1 Jackknife methods

In Section 2.7.3 we mentioned briefly the jackknife method in connection with estimating the variance of T, using the values of t obtained when each case is deleted in turn. Generalized versions of the jackknife have also been proposed for estimating the distribution of T − θ, as alternatives to the bootstrap. For this to work, the jackknife must be generalized to multiple case deletion. For example, suppose that we delete d observations rather than one, there being N = (n choose d) ways of doing this; this is the same thing as taking all subsets of size n − d. The full set of group-deletion estimates is t†_1, ..., t†_N, say. The empirical distribution of t† − t will approximate the distribution of T − θ only if we renormalize to remove the discrepancy in sample sizes, n − d versus n. So if T − θ = O_p(n^{-a}), we take the empirical distribution of

z† = (n − d)^a (t† − t)    (2.50)

as the delete-d jackknife approximation to the distribution of Z = n^a(T − θ). In practice we would not use all N subsamples of size n − d, but rather R random subsamples, just as with ordinary resampling. In principle this method will apply much more generally than bootstrap resampling. But to work in practice it is necessary to know a and to choose d so that n − d → ∞ and d/n → 1 as n increases. Therefore the method will work only in rather special circumstances. Note that if n − d is small relative to n, then the method is not very different from a generalized bootstrap that takes samples of size n − d rather than n.
Example 2.25 (Sample maximum)  We referred earlier to the failure of the bootstrap when applied to the largest order statistic t = y_(n), which estimates the upper limit θ of a distribution on [0, θ]. The jackknife method applies here with a = 1, as n(θ − T) is approximately exponential with mean θ for uniformly distributed ys. However, empirical evidence suggests that the jackknife method requires a very large sample size in order to give good results. For example, if we take samples of n = 100 uniform variables, for values of d in the range 80–95 the distribution of (n − d)(t − T†) is close to exponential, but the mean is wrong by a factor that can vary from 0.6 to 2. ■
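A small simulation, not from the book and with sample size and deletion number chosen purely for illustration, shows both halves of this story: the nonparametric bootstrap places an atom of probability about 1 − e^{−1} ≈ 0.632 on T* = t, whereas delete-d subsampling with n − d = 10 produces a continuous approximation on the correct scale.

```python
import random

random.seed(5)
n, theta, d, R = 100, 1.0, 90, 2000
y = [random.uniform(0, theta) for _ in range(n)]
t = max(y)

# ordinary bootstrap: T* equals the observed maximum whenever y_(n) is
# resampled at least once, probability 1 - (1 - 1/n)^n -> 1 - e^{-1}
hits = sum(max(random.choice(y) for _ in range(n)) == t for _ in range(R))

# delete-d jackknife: z = (n - d)(t_sub - t) from subsamples of size n - d = 10
zs = []
for _ in range(R):
    sub = random.sample(y, n - d)
    zs.append((n - d) * (max(sub) - t))

print(hits / R, sum(zs) / R)   # an atom near 0.632; continuous negative values
```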
2.8.2 All-subsamples method

A different type of subsampling consists of taking all N = 2^n − 1 non-empty subsets of the data. This can be applied to a limited type of problem, including M-estimation where a mean μ is estimated by the solution t to the estimating equation Σ c(y_j − t) = 0. If the ordered estimates from subsets are denoted by t†_(1), ..., t†_(N), then remarkably μ is equally likely to be in any of the N + 1 intervals

(−∞, t†_(1)), (t†_(1), t†_(2)), ..., (t†_(N), ∞).
Hence confidence intervals for μ can be determined. In practice one would take a random selection of R such subsets, and attach equal probability (R + 1)^{-1} to the R + 1 intervals defined by the R ordered t† values. It is unclear how efficient this method is, and to what extent it can be generalized to other estimation problems.
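The typical-value property can be checked by simulation. In this sketch (all choices are illustrative: Gaussian data, n = 12, R = 19) the estimating function is c(u) = u, so each subset estimate is the subset average, and the interval between the smallest and largest of the R subset averages should cover μ with probability close to (R − 1)/(R + 1) = 0.9 for symmetrically distributed data.

```python
import random

rng = random.Random(7)

def subset_extremes(y, R):
    # estimates on R random non-empty subsets; with c(u) = u each
    # subset estimate is simply the subset average
    ests = []
    n = len(y)
    while len(ests) < R:
        idx = [j for j in range(n) if rng.random() < 0.5]
        if idx:
            ests.append(sum(y[j] for j in idx) / len(idx))
    return min(ests), max(ests)

mu, n, R, sims = 0.0, 12, 19, 2000
cover = 0
for _ in range(sims):
    y = [rng.gauss(mu, 1) for _ in range(n)]
    lo, hi = subset_extremes(y, R)
    cover += (lo < mu < hi)
print(cover / sims)   # should be near (R - 1)/(R + 1) = 0.9
```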
2.8.3 Half-sampling methods

The jackknife method for estimating var(T) can be extended to deal with estimates based on many samples, but in one special circumstance there is another, simpler subsampling method. Originally this was proposed for sample-survey data consisting of stratified samples of size 2. To fix ideas, suppose that we have samples of size 2 from each of m strata, and that we estimate the population mean μ by the weighted average t = Σ_{i=1}^m w_i ȳ_i; these weights reflect stratum sizes. The usual estimate for var(T) is v = Σ w²_i s²_i, with s²_i the sample variance for the ith stratum. The half-sampling method is designed to reproduce this variance estimate using only subsample values of t, just as the jackknife does. Then the method can be applied to more complex problems.

In the present context there are N = 2^m half-samples formed by taking one element from each stratum sample. If t† denotes the estimator calculated on such a half-sample, then clearly t† − t equals ½ Σ_i w_i (y_{i1} − y_{i2}) c†_i, where c†_i = ±1 according to which of y_{i1} and y_{i2} is in the half-sample. Direct calculation shows that for a random half-sample E(T† − T)² = ½ var(T), so that an unbiased estimate of var(T) is obtained by doubling the average of (t† − t)² over all N half-samples: this average equals the usual estimate given earlier. But it is unnecessary to use all N half-samples. If, say, we use R half-samples, then we require that

(2/R) Σ_{r=1}^R (t†_r − t)²

equals v. From the earlier representation for t† − t we see that this implies that

(1/R) Σ_{r=1}^R { ¼ Σ_{i=1}^m w²_i (y_{i1} − y_{i2})² + ¼ Σ_{i≠j} w_i w_j c†_{ri} c†_{rj} (y_{i1} − y_{i2})(y_{j1} − y_{j2}) }

equals

¼ Σ_{i=1}^m w²_i (y_{i1} − y_{i2})².
For this to hold for all data values we must have Σ_{r=1}^R c†_{ri} c†_{rj} = 0 for all i ≠ j. This is a standard problem arising in factorial design, and is solved by what are known as Plackett-Burman designs. If the rth half-sample coefficients c†_{ri} form the rth row of the R × m matrix C†, and if every observation occurs in exactly ½R half-samples, then C†ᵀC† = R I_{m×m}. In general the ith column of C† can be expressed as (c_{1i}, ..., c_{R−1,i}, −1)ᵀ with the first R − 1 elements obtained by i − 1 cyclic shifts of c_{11}, ..., c_{R−1,1}. For example, one solution for m = 7 with R = 8 is

       ( +1 −1 −1 +1 −1 +1 +1 )
       ( +1 +1 −1 −1 +1 −1 +1 )
       ( +1 +1 +1 −1 −1 +1 −1 )
C† =   ( −1 +1 +1 +1 −1 −1 +1 )
       ( +1 −1 +1 +1 +1 −1 −1 )
       ( −1 +1 −1 +1 +1 +1 −1 )
       ( −1 −1 +1 −1 +1 +1 +1 )
       ( −1 −1 −1 −1 −1 −1 −1 )

This solution requires that R be the first multiple of 4 greater than or equal to m. The half-sample designs for m = 4, 5, 6, 7 are given by the first m columns of this C† matrix. In practice it would be common to double the half-sampling design by adding its complement −C†, which adds further balance.

It is fairly clear that the half-sampling method extends to stratum sample sizes k larger than 2. The basic idea can be seen clearly for linear statistics of the form

t = μ + Σ_{i=1}^m k^{-1} Σ_{j=1}^k a_{ij} = μ + Σ_{i=1}^m ā_i,

say. Suppose that in the rth subsample we take one observation from each stratum, as specified by the zero-one indicator c†_{r,ij}. Then

t†_r − t = Σ_{i=1}^m Σ_{j=1}^k c†_{r,ij} (a_{ij} − ā_i),

which is a linear regression model without error in which the a_{ij} − ā_i are coefficients and the c†_{r,ij} are covariate values to be determined. If the a_{ij} − ā_i
can be calculated, then the usual estimate of var(T) can be calculated. The choice of c†_{r,ij} values corresponds to selection of a fractional factorial design, with only main effects to be calculated, and this is solved by a Plackett-Burman design. Once the subsampling design is obtained, the estimate of var(T) is a formula in the subsample values t†_r. The same formula works for any statistic that is approximately linear. The same principles apply for unequal stratum sizes, although then the solution is more complicated and makes use of orthogonal arrays.
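The m = 7, R = 8 construction is easy to verify numerically. The sketch below builds a design of this type from cyclic shifts of the generator (+ + + − + − −) plus a final row of −1s (one valid choice among several), checks the orthogonality condition Σ_r c†_{ri} c†_{rj} = 0, and confirms that doubling the average of (t† − t)² over the R balanced half-samples reproduces v = Σ w²_i s²_i exactly; the weights and data are made up.

```python
import random

# a +/-1 generator whose cyclic autocorrelations are all -1
g = [1, 1, 1, -1, 1, -1, -1]
m, R = 7, 8
# column i is the i-fold cyclic shift of g, with a final entry of -1 in every column
C = [[g[(r - i) % m] for i in range(m)] for r in range(m)]
C.append([-1] * m)   # C is R x m

# Plackett-Burman balance: columns are mutually orthogonal
cross = [sum(C[r][i] * C[r][j] for r in range(R))
         for i in range(m) for j in range(m) if i != j]

# half-sampling for m strata of size 2, with made-up weights and data
rng = random.Random(8)
w = [rng.uniform(0.5, 1.5) for _ in range(m)]
y = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(m)]

t = sum(w[i] * (y[i][0] + y[i][1]) / 2 for i in range(m))        # weighted average
v_usual = sum(w[i] ** 2 * (y[i][0] - y[i][1]) ** 2 / 2 for i in range(m))

half = []
for r in range(R):
    # c = +1 selects y_{i1}, c = -1 selects y_{i2}
    t_half = sum(w[i] * (y[i][0] if C[r][i] == 1 else y[i][1]) for i in range(m))
    half.append((t_half - t) ** 2)
v_half = 2 * sum(half) / R   # doubled average over the R balanced half-samples

print(max(abs(c) for c in cross), v_usual, v_half)
```

Given the orthogonality of the columns, the agreement between v_half and v_usual is exact, not merely approximate, which is the point of the balanced design.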
2.9 Bibliographic Notes

There are two key aspects to the methods described in this chapter. The first is that in order for statistical inference to proceed, an unknown distribution F must be replaced by an estimate. In a parametric model, the estimate is a parametric distribution F_ψ̂, whereas in a nonparametric situation the estimate is the empirical distribution function or some modification of it (Section 3.3). Although the use of the EDF to estimate F may seem novel at first sight, it is a natural development of replacing F by a parametric estimate. We have seen that in essence the EDF will produce results similar to those for the "nearest" parametric model.

The second aspect is the use of simulation to estimate quantities of interest. The widespread availability of fast cheap computers has made this a practical alternative to analytical calculation in many problems, because computer time is increasingly plentiful relative to the number of hours in a researcher's day. Theoretical approximations based on large samples can be time-consuming to obtain for each new problem, and there may be doubt about their reliability in small samples. Contrariwise, simulations are tailored to the problem at hand and a large enough simulation makes the numerical error negligible relative to the statistical error due to the inescapable uncertainty about F.

Monte Carlo methods of inference had already been used for many years when Efron (1979) made the connection to standard methods of parametric inference, drew the attention of statisticians to their potential for nonparametric inference, and originated the term "bootstrap". This work and subsequent developments such as his 1982 monograph made strong connections with the jackknife, which had been introduced by Quenouille (1949) and Tukey (1958), and with other subsampling methods (Hartigan, 1969, 1971, 1975; McCarthy, 1969).
Miller (1974) gives a good review of jackknife methods; see also Gray and Schucany (1972). Young and Daniels (1990) discuss the bias in the nonparametric bootstrap introduced by using the empirical distribution function in place of the true distribution. Hall (1988a, 1992a) strongly advocates the use of the studentized bootstrap
statistic for confidence intervals and significance tests, and makes the connection to Edgeworth expansions for smooth statistics. The empirical choice of scale for resampling calculations is discussed by Chapman and Hinkley (1986) and Tibshirani (1988). Hall (1986) analyses the effect of discreteness on confidence intervals. Efron (1987) discusses the numbers of simulations needed for bias and quantile estimation, while Diaconis and Holmes (1994) describe how simulation can be avoided completely by complete enumeration of bootstrap samples; see also the bibliographic notes for Chapter 9.

Bickel and Freedman (1981) were among the first to discuss the conditions under which the bootstrap is consistent. Their work was followed by Bretagnolle (1983) and others, and there is a growing theoretical literature on modifications to ensure that the bootstrap is consistent for different classes of awkward statistics. The main modifications are smoothing of the data (Section 3.4), which can improve matters for nonsmooth statistics such as quantiles (De Angelis and Young, 1992), subsampling (Politis and Romano, 1994b), and reweighting (Barbe and Bertail, 1995). Hall (1992a) is a key reference to Edgeworth expansion theory for the bootstrap, while Mammen (1992) describes simulations intended to help show when the bootstrap works, and gives theoretical results for various situations. Shao and Tu (1995) give an extensive theoretical overview of the bootstrap and jackknife.

Athreya (1987) has shown that the bootstrap can fail for long-tailed distributions. Some other examples of failure are discussed by Bickel, Götze and van Zwet (1996). The use of linear approximations and influence functions in the context of robust statistical inference is discussed by Hampel et al. (1986).
Fernholz (1983) describes the expansion theory that underlies the use of these approximation methods. An alternative and orthogonal expansion, similar to that used in Section 2.7.4, is discussed by Efron and Stein (1981) and Efron (1982). Tail-specific approximations are described by Hesterberg (1995a). The use of multiple-deletion jackknife methods is discussed by Hinkley (1977), Shao and Wu (1989), Wu (1990), and Politis and Romano (1994b), the last with numerous theoretical examples. The method based on all non-empty subsamples is due to Hartigan (1969), and is nicely put into context in Chapter 9 of Efron (1982). Half-sample methods for survey sampling were developed by McCarthy (1969) and extended by Wu (1991). The relevant factorial designs for half-sampling were developed by Plackett and Burman (1946).
2.10 Problems

1
Let F̂ denote the EDF (2.1). Show that E{F̂(y)} = F(y) and that var{F̂(y)} = F(y){1 − F(y)}/n. Hence deduce that provided 0 < F(y) < 1, F̂(y) has a limiting
normal distribution for large n, and that Pr(|F̂(y) − F(y)| < ε) → 1 as n → ∞ for any positive ε. (In fact the much stronger property sup_{−∞<y<∞} |F̂(y) − F(y)| → 0 holds with probability one.)
(Section 2.1)

2
Suppose that Y_1, ..., Y_n are independent exponential with mean μ, and that their average is Ȳ = n^{-1} Σ_{j=1}^n Y_j.
(a) Show that Ȳ has the gamma density (1.1) with κ = n, so its mean and variance are μ and μ²/n.
(b) Show that log Ȳ is approximately normal with mean log μ and variance n^{-1}.
(c) Compare the normal approximations for Ȳ and for log Ȳ in calculating 95% confidence intervals for μ. Use the exact confidence interval based on (a) as the baseline for the comparison, which can be illustrated with the data of Example 1.1.
(Sections 2.1, 2.5.1)

3
Under nonparametric simulation from a random sample y_1, ..., y_n in which T = n^{-1} Σ(Y_j − Ȳ)² takes value t, show that

E*(T*) = (n − 1)t/n,    var*(T*) = (n − 1)² [m_4/n + (3 − n)t²/{n(n − 1)}] / n²,

where m_4 = n^{-1} Σ(y_j − ȳ)⁴.
(Section 2.3; Appendix A)

4
Let t be the median of a random sample of size n = 2m + 1 with ordered values y_(1) ≤ ··· ≤ y_(n); t = y_(m+1).
(a) Show that T* > y_(k) if and only if fewer than m + 1 of the Y*_j are less than or equal to y_(k).
(b) Hence show that

Pr*(T* = y_(k)) = Pr{Bin(n, (k − 1)/n) ≤ m} − Pr{Bin(n, k/n) ≤ m},

where Bin(n, p) denotes a binomial variable with n trials and success probability p. This specifies the exact resampling density (2.28) of the sample median. (The result can be used to prove that the bootstrap estimate of var(T) is consistent as n → ∞.)
(c) Use the resampling distribution to show that for n = 11

Pr*(T* ≤ y_(3)) = Pr*(T* ≥ y_(9)) = 0.051,

and apply (2.10) to deduce that the basic bootstrap 90% confidence interval for the population median θ is (2y_(6) − y_(9), 2y_(6) − y_(3)).
(d) Examine the coverage of the confidence interval in (c) for samples from normal and Cauchy distributions.
(Sections 2.3, 2.4; Efron, 1979, 1982)

5
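The binomial computation behind part (c) takes only a few lines; this sketch reproduces the 0.051 values for n = 11 using the relation in (a).

```python
from math import comb

n, m = 11, 5   # n = 2m + 1, so t = y_(6)

def p_median_le(k):
    # P*(T* <= y_(k)) = P{Bin(n, k/n) >= m + 1}
    p = k / n
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(m + 1, n + 1))

lo = p_median_le(3)          # P*(T* <= y_(3))
hi = 1 - p_median_le(8)      # P*(T* >= y_(9))
print(round(lo, 3), round(hi, 3))   # both 0.051
```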
Consider nonparametric simulation of Ȳ* based on distinct observations y_1, ..., y_n.
(a) Show that there are m_n = (2n−1 choose n−1) ways that n − 1 red balls can be put in a line with n white balls. Explain the connection to the number of distinct values taken by Ȳ*.
(b) Suppose that the value y* taken by Ȳ* is n^{-1} Σ f*_j y_j, where each f*_j can be one of 0, 1, ..., n and Σ_j f*_j = n. Find Pr(Ȳ* = ȳ), and deduce that the most likely value of Ȳ* is ȳ, with probability p_n = n!/n^n.
(c) Use Stirling's approximation, i.e. n! ~ (2π)^{1/2} e^{−n} n^{n+1/2} as n → ∞, to find approximate formulae for m_n and p_n.
(d) For the correlation coefficient T calculated from distinct pairs (u_1, x_1), ..., (u_n, x_n),
show that T* is indeterminate with probability n^{1−n}. What is the probability that |T*| = 1? Discuss the implications of this when n < 10.
(Section 2.3; Hall, 1992a, Appendix I)

6
Suppose that Y_1, ..., Y_n are independently distributed with a two-parameter density f_{μ,κ}(y). What simulation experiment would you perform to check whether or not Q = q(Y_1, ..., Y_n; θ) is a pivot? If f is the gamma density (1.1), let μ̂ be the MLE of μ, let

ℓ_p(μ) = max_κ Σ_{j=1}^n log f_{μ,κ}(y_j)

be the profile log likelihood for μ and let Q = 2{ℓ_p(μ̂) − ℓ_p(μ)}. In theory Q should be approximately a χ²₁ variable for large n. Use simulation to examine whether or not Q is approximately pivotal for n = 10 when κ is in the range (0.5, 2).
(Section 2.5.1)

7
The bootstrap normal approximation for T − θ is N(b_R, v_R), so that the p quantile a_p of T − θ can be approximated by â_p = b_R + z_p v_R^{1/2}. Show that the simulation variance of this estimate is approximately

var(â_p) ≈ (v/R){1 + z_p κ₃/v^{3/2} + ¼ z_p²(2 + κ₄/v²)},

where κ₃ and κ₄ are the third and fourth cumulants of T* under bootstrap resampling. If T is asymptotically normal, κ₃/v^{3/2} = O(n^{-1/2}) and κ₄/v² = O(n^{-1}). Compare this variance to that of the bootstrap quantile estimate t*_{((R+1)p)} − t in the special case T = Ȳ.
(Sections 2.2.1, 2.5.2; Appendix A)

8
Suppose that estimator T has expectation equal to θ(1 + γ), so that the bias is θγ. The bias factor γ can be estimated by C = E*(T*)/T − 1. Show that in the case of the variance estimate T = n^{-1} Σ(Y_j − Ȳ)², C is exactly equal to γ. If C were approximated from R resamples, what would be the simulation variance of the approximation?
(Section 2.5)

9
Suppose that the random variables U = (U_1, ..., U_m) have means ζ_1, ..., ζ_m and covariances cov(U_k, U_l) = n^{-1} ω_{kl}(ζ), and that T_1 = g_1(U), ..., T_q = g_q(U). Show that

E(T_i) ≈ g_i(ζ) + ½ n^{-1} Σ_{k,l} ω_{kl}(ζ) ∂²g_i(ζ)/∂ζ_k ∂ζ_l,
cov(T_i, T_j) ≈ n^{-1} Σ_{k,l} ω_{kl}(ζ) {∂g_i(ζ)/∂ζ_k}{∂g_j(ζ)/∂ζ_l}.

How are these estimated in practice? Show that

v = (nū)^{-2} Σ_{i=1}^n (x_i − t u_i)²

is a variance estimate for t = x̄/ū, based on independent pairs (u_1, x_1), ..., (u_n, x_n).
(Section 2.7.1)
10
(a) Show that the influence function for a linear statistic t(F) = ∫ a(x) dF(x) is a(y) − t(F). Hence obtain the influence functions for a sample moment μ′_r = ∫ x^r dF(x), for the variance μ′_2(F) − {μ′_1(F)}², and for the correlation coefficient (Example 2.18).
(b) Show that the influence function for {t(F) − θ}/v(F)^{1/2} evaluated at θ = t(F) is v(F)^{-1/2} L_t(y; F). Hence obtain the empirical influence values l_j for the studentized quantity {t(F̂) − t(F)}/v_L(F̂)^{1/2}, and show that they have the properties Σ l_j = 0 and n^{-2} Σ l²_j = 1.
(Section 2.7.2; Hinkley and Wei, 1984)
11
The pairs (U_1, X_1), ..., (U_n, X_n) are independent bivariate normal with correlation θ. Use the influence function of Example 2.18 to show that the sample correlation T has approximate variance n^{-1}(1 − θ²)². Then apply the delta method to show that ½ log{(1 + T)/(1 − T)}, called Fisher's z-transform, has approximate variance n^{-1}.
(Section 2.7.1; Appendix A)
12
Suppose that a parameter θ = t(F) is determined implicitly through the estimating equation

∫ u(y; θ) dF(y) = 0.

(a) Write the estimating equation as

∫ u{y; t(F)} dF(y) = 0,

replace F by (1 − ε)F + εH_y, and differentiate with respect to ε to show that the influence function for t(·) is

L_t(y; F) = u(y; θ) / {−∫ u̇(x; θ) dF(x)},    u̇(x; θ) = ∂u(x; θ)/∂θ.

Hence show that with θ̂ = t(F̂) the jth empirical influence value is

l_j = u(y_j; θ̂) / {−n^{-1} Σ_{k=1}^n u̇(y_k; θ̂)}.

(b) Let ψ̂ be the maximum likelihood estimator of the (possibly vector) parameter of a regular parametric model f_ψ(y) based on a random sample y_1, ..., y_n. Show that the jth empirical influence value for ψ̂ at y_j may be written as n Î^{-1} Ŝ_j, where

Î = −Σ_{j=1}^n ∂² log f_ψ̂(y_j)/∂ψ ∂ψᵀ,    Ŝ_j = ∂ log f_ψ̂(y_j)/∂ψ.

Hence show that the nonparametric delta method variance estimate for ψ̂ is the so-called sandwich estimator

Î^{-1} (Σ_{j=1}^n Ŝ_j Ŝ_jᵀ) Î^{-1}.

Compare this to the usual parametric approximation when y_1, ..., y_n is a random sample from the exponential distribution with mean ψ.
(Section 2.7.2; Royall, 1986)
13
The α trimmed average is defined by
t(F) = (1 − 2α)^{-1} ∫_{q_α(F)}^{q_{1−α}(F)} u dF(u),

computed at the EDF F̂. Express t(F̂) in terms of order statistics, assuming that nα is an integer. How would you extend this to deal with non-integer values of nα?
Suppose that F is a distribution symmetric about its mean μ. Starting from this representation of t(F), use the result of Example 2.19 to show that the influence function of t(F) is

L_t(y; F) = {q_α(F) − μ}(1 − 2α)^{-1},    y < q_α(F),
L_t(y; F) = (y − μ)(1 − 2α)^{-1},    q_α(F) ≤ y ≤ q_{1−α}(F),
L_t(y; F) = {q_{1−α}(F) − μ}(1 − 2α)^{-1},    y > q_{1−α}(F).

14
Suppose that the random vector Y has mean μ and p × p covariance matrix Ω, with eigenvalues λ_1 > ··· > λ_p and corresponding orthogonal eigenvectors e_j, where e_jᵀe_j = 1. Let F_ε = (1 − ε)F + εH_y. Show that the influence function for Ω is L_Ω(y; F) = (y − μ)(y − μ)ᵀ − Ω, and by considering the identities

Ω(F_ε) e_j(F_ε) = λ_j(F_ε) e_j(F_ε),    e_j(F_ε)ᵀ e_j(F_ε) = 1,

or otherwise, show that the influence function for λ_j is {e_jᵀ(y − μ)}² − λ_j.
(Section 2.7.2)

15
Consider the biased sample variance t = n^{-1} Σ(y_j − ȳ)².
(a) Show that the empirical influence values and second derivatives are

l_j = (y_j − ȳ)² − t,    q_jk = −2(y_j − ȳ)(y_k − ȳ).

(b) Show that the exact case-deletion values of t are

t_{−j} = (n − 1)^{-1} {nt − n(n − 1)^{-1}(y_j − ȳ)²}.

Compare these with the result of the general approximation

t − t_{−j} ≈ (n − 1)^{-1} l_j − ½(n − 1)^{-2} q_jj,

which is obtained from (2.41) by substituting F̂ for F and F̂_{−j} for F̂.
(c) Calculate jackknife estimates of the bias and variance of T. Are these sensible estimates?
(Section 2.7.3; Appendix A)
16
The empirical influence values l_j can also be defined in terms of distributions supported on the data values. Suppose that the support of F is restricted to y_1, ..., y_n, with probabilities p = (p_1, ..., p_n) on those values. For such distributions t(F) can be re-expressed as t(p).
(a) Show that

l_j = ∂/∂ε t{(1 − ε)p̂ + ε1_j} |_{ε=0},

where p̂ = (1/n, ..., 1/n) and 1_j is the vector with 1 in the jth position and 0 elsewhere. Hence or otherwise show that

l_j = ṫ_j(p̂) − n^{-1} Σ_{k=1}^n ṫ_k(p̂),

where ṫ_j(p) = ∂t(p)/∂p_j.
(b) Apply this result to derive the empirical influence values l_j = (x_j − t u_j)/ū for the estimate t = Σ p_j x_j / Σ p_j u_j of the ratio of two means.
(c) The empirical second derivatives q_ij can be defined similarly. Show that

q_ij = ∂²/∂ε₁∂ε₂ t{(1 − ε₁ − ε₂)p̂ + ε₁1_i + ε₂1_j} |_{ε₁=ε₂=0}.

18
The empirical influence values l_j can be approximated by the numerical derivative {t{(1 − ε)F̂ + εH_{y_j}} − t(F̂)}/ε, as in (2.37), with (a) ε → 0, (b) ε = −(n − 1)^{-1}, (c) ε = (n + 1)^{-1}, which respectively give the infinitesimal jackknife, the ordinary jackknife, and the positive jackknife.
18 Show that in (b) and (c) the squared distance (dF̂ − dF_ε)^T (dF̂ − dF_ε) from F̂ to F_ε = (1 − ε)F̂ + εH_{y_j} is of order O(n^{-2}), but that if F̂* is generated by bootstrap sampling,

E*{ (dF̂* − dF̂)^T (dF̂* − dF̂) } = O(n^{-1}).

Hence discuss the results you would expect from the butcher knife, which uses ε = n^{-1/2}. How would you calculate it? (Section 2.7.3; Efron, 1982; Hesterberg, 1995a)

19
The cumulant generating function of a multinomial random variable with denominator n and probability vector (π_1, ..., π_n) is

K(ξ) = n log{ Σ_{j=1}^n π_j exp(ξ_j) },

where ξ = (ξ_1, ..., ξ_n).
(a) Show that with π_j = n^{-1}, the first cumulants of the f*_j are

E*(f*_j) = 1,    cov*(f*_i, f*_j) = δ_ij − n^{-1},    cum*(f*_i, f*_j, f*_k) = n^{-2}{ n²δ_ijk − n(δ_ij + δ_jk + δ_ki) + 2 },

where δ_ij equals one if i = j and zero otherwise, and similarly for δ_ijk.

where ȳ*_1 is the average of n_1 observations generated with equal probability from the first sample, y_11, ..., y_{1n_1}, and ȳ*_2
is the average of n_2 observations generated with equal
3 ■ Further Ideas

Table 3.1 The gravity data: eight series of measurements of the acceleration due to gravity, g (see Example 3.2).

Series 1:  76  82  83  54  35  46  87  68
Series 2:  87  95  98 100 109 109 100  81  75  68  67
Series 3: 105  83  76  75  51  76  93  75  62
Series 4:  95  90  76  76  87  79  77  71
Series 5:  76  76  78  79  72  68  75  78
Series 6:  78  78  78  86  87  81  73  67  75  82  83
Series 7:  82  79  81  79  77  79  79  78  79  82  76  73  64
Series 8:  84  86  85  82  77  76  77  80  83  81  78  78  78
probability from the second sample, y_21, ..., y_{2n_2}. The corresponding unbiased estimate of variance for t* based on these samples would be

v* = {n_1(n_1 − 1)}^{-1} Σ_{j=1}^{n_1} (y*_{1j} − ȳ*_1)² + {n_2(n_2 − 1)}^{-1} Σ_{j=1}^{n_2} (y*_{2j} − ȳ*_2)².
Example 3.2 (Gravity data) Between May 1934 and July 1935, a series of experiments to determine the acceleration due to gravity, g, was performed at the National Bureau of Standards in Washington DC. The experiments, made with a reversible pendulum, led to eight successive series of measurements. The data are given in Table 3.1. Figure 3.1 suggests that the variance decreases from one series to the next, that there is a possible change in location, and that mild outliers may be present. The measurements for the later series seem more reliable, and although we would wish to estimate g from all the data, it seems inappropriate to pool the series. We suppose that each of the series is taken from a separate population, F_1, ..., F_8, but that each population has mean g; for a check on this see Example 4.14. Then the appropriate form of estimator is a weighted combination
t = ( Σ_{i=1}^8 ȳ_i / v_i ) / ( Σ_{i=1}^8 1 / v_i ),    v_i = {n_i(n_i − 1)}^{-1} Σ_{j=1}^{n_i} (y_{ij} − ȳ_i)²,

with estimated variance v = ( Σ_{i=1}^8 1/v_i )^{-1}, similar to the usual "pooled variance" formula. ■
The various comments made about calculation in Section 2.7 apply here with obvious modifications. Thus the empirical influence values can be approximated accurately by numerical differentiation, which here means

l_ij ≈ [ t(F̂_1, ..., (1 − ε)F̂_i + εH_{y_ij}, ..., F̂_k) − t ] / ε

for small ε.
We can also use the generalization of (2.44), namely

t* ≈ t + Σ_{i=1}^k n_i^{-1} Σ_{j=1}^{n_i} f*_{ij} l_{ij},

where f*_{ij} denotes the frequency of data value y_{ij} in the bootstrap sample. Then given simulated values we can approximate the l_{ij} by regression, generalizing the method outlined in Section 2.7.4. Alternative ways to calculate the l_{ij} and v_L are described in Problems 3.6 and 3.7. The multisample analogue of the jackknife method of Section 2.7.3 involves the case-deletion values

l_jack,ij = (n_i − 1)(t − t_{−ij}),

where t_{−ij} is the estimate obtained by omitting the jth case in the ith sample. Then

v_jack = Σ_{i=1}^k {n_i(n_i − 1)}^{-1} Σ_{j=1}^{n_i} (l_jack,ij − l̄_jack,i)².
One can also generalize the discussion of bias approximation in Section 2.7.3. However, the extension of the quadratic approximation (2.41) is not straightforward, because there are "cross-population" terms. The same approximation (3.1) could be used even when the samples, and hence the F̂_i, are correlated. But this would have to be taken into account in (3.3), which as stated assumes mutual independence of the samples. In general it would be safer to incorporate dependence through the use of appropriate multivariate EDFs.
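The numerical-differentiation approximation to the l_ij described above amounts to reweighting the ith sample and re-evaluating the statistic. A minimal sketch follows; the statistic (difference of two weighted means) and all names are illustrative assumptions, not the book's code:

```python
def weighted_stat(samples, weights):
    # Illustrative statistic: difference of the two weighted sample means.
    (y1, y2), (w1, w2) = samples, weights
    m1 = sum(w * v for w, v in zip(w1, y1)) / sum(w1)
    m2 = sum(w * v for w, v in zip(w2, y2)) / sum(w2)
    return m1 - m2

def empirical_influence(samples, stat, eps=1e-6):
    # l_ij ~ [ t(F1,..., (1-eps)Fi + eps*H_{y_ij}, ..., Fk) - t ] / eps:
    # shrink the i-th set of weights by (1-eps) and put extra mass eps
    # on the j-th case, then difference the statistic values.
    base_w = [[1 / len(y)] * len(y) for y in samples]
    t0 = stat(samples, base_w)
    infl = []
    for i, y in enumerate(samples):
        row = []
        for j in range(len(y)):
            w = [list(wi) for wi in base_w]
            w[i] = [(1 - eps) * wij for wij in w[i]]
            w[i][j] += eps
            row.append((stat(samples, w) - t0) / eps)
        infl.append(row)
    return infl

samples = [[3.0, 5.0, 7.0], [2.0, 4.0]]
l = empirical_influence(samples, weighted_stat)
# For a difference of means, l_1j = y_1j - ybar_1 and l_2j = -(y_2j - ybar_2)
```

For the difference of means the statistic is linear in ε, so the finite difference reproduces the exact influence values up to rounding.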
3.3 Semiparametric Models

In a semiparametric model, some aspects of the data distribution are specified in terms of a small number of parameters, but other aspects are left arbitrary. A simple example would be the characterization Y = μ + σε, with no assumption on the distribution of ε except that it has centre zero and scale one. Usually a semiparametric model is useful only when we have nonhomogeneous data, with only the differences characterized by parameters, common elements being nonparametric. In the context of Section 3.2, and especially Example 3.2, we might for example be fairly sure that the distributions F_i differ only in scale or, more cautiously, in scale and location. That is, Y_ij might be expressed as

Y_ij = μ_i + σ_i ε_ij,

where the ε_ij are sampled from a common distribution with CDF F_0, say. The normal distribution is a parametric model of this form. The form can be checked to some extent by plotting standardized residuals such as

e_ij = (y_ij − μ̂_i)/σ̂_i
for appropriate estimates μ̂_i and σ̂_i, to verify homogeneity across samples. The common F_0 will be estimated by the EDF of all Σ n_i of the e_ij, or better by the EDF of the standardized residuals e_ij/(1 − n_i^{-1})^{1/2}. The resampling algorithm will then be

Y*_ij = μ̂_i + σ̂_i ε*_ij,    j = 1, ..., n_i,  i = 1, ..., k,

where the ε*_ij are randomly sampled from the EDF, i.e. randomly sampled with replacement from the standardized e_ij; see Problem 3.1. In another context, with positive data such as lifetimes, it might be appropriate to think of distributions as differing only by multiplicative effects, i.e. Y_ij = μ_i ε_ij, where the ε_ij are randomly sampled from some baseline distribution with unit mean. The exponential distribution is a parametric model of this form. The principle here would be essentially the same: estimate the ε_ij by residuals such as e_ij = y_ij/μ̂_i, then define Y*_ij = μ̂_i ε*_ij with the ε*_ij randomly sampled with replacement from the e_ij. Similar ideas apply in regression situations. The parametric part of the model concerns the systematic relationship between the response y and explanatory variables x, e.g. through the mean, and the nonparametric part concerns the random variation. We consider this in detail in Chapters 6 and 7. Resampling plans such as those just outlined will give more accurate answers when their assumptions about the relationships between the F_i are correct, but they are not robust to failure of these assumptions. Some pooling of information across samples may be essential in order to avoid difficulties when the samples are small, but otherwise it is usually unnecessary. If we widen the meaning of semiparametric to include any partial modelling, then features less tangible than parameters come into play. The following two examples illustrate this.
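Before turning to the examples, the location-scale resampling scheme described above can be sketched in code. The estimator choices (sample mean and standard deviation) and all names here are illustrative assumptions:

```python
import random
import statistics

def resample_location_scale(samples, rng):
    # Semiparametric resampling: Y*_ij = mu_i + sigma_i * eps*, with
    # eps* drawn with replacement from the pooled residuals
    # e_ij = (y_ij - mu_i)/sigma_i, standardized by (1 - 1/n_i)^{1/2}.
    mus = [statistics.mean(y) for y in samples]
    sigmas = [statistics.stdev(y) for y in samples]
    pooled = [((u - m) / s) / (1 - 1 / len(y)) ** 0.5
              for y, m, s in zip(samples, mus, sigmas)
              for u in y]
    return [[m + s * rng.choice(pooled) for _ in y]
            for y, m, s in zip(samples, mus, sigmas)]

rng = random.Random(1)
samples = [[5.1, 4.8, 5.4, 5.0], [9.7, 10.2, 10.1, 9.9, 10.3]]
star = resample_location_scale(samples, rng)
```

Because the residuals are pooled across samples, information about the shape of F_0 is shared even when individual series are small.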
Example 3.4 (Symmetric distribution) Suppose that with our simple random sample it was appropriate to assume that the distribution was symmetric about its mean or median. Using this assumption could be critical to correct statistical analysis; see Example 3.26. Without a parametric model it is hard to see a clear choice for F̂. But we can argue as follows: under F the distributions of Y − μ and −(Y − μ) are the same, so under F̂ the distributions of Y* − μ̂ and −(Y* − μ̂) should be the same. This will be true if we symmetrize the EDF about μ̂, meaning that we take F̂ to be the EDF of y_1, ..., y_n, 2μ̂ − y_1, ..., 2μ̂ − y_n. A robust choice for μ̂ would be the median. (For discrete distributions we could equivalently average sample proportions for appropriate pairs of data values.) The mean, median and other symmetrically defined location estimates of the resulting estimated distribution are all equal. ■
Example 3.5 (Equal marginal distributions) Suppose that Y is bivariate, say Y = (U, X), and that it is appropriate from the context to assume that U and X have the same marginal distribution. Then F̂ can be forced to have the same margins by defining it as the EDF of the 2n pairs (u_1, x_1), ..., (u_n, x_n), (x_1, u_1), ..., (x_n, u_n). ■

In both of these examples the resulting estimate will be more efficient than the EDF. This may be less important than producing a model which satisfies the practical assumptions and makes intuitive sense.
Example 3.6 (Mixed discrete-continuous distributions) There will be situations where the raw EDF is not suitable for resampling because it is not a credible model. Such a situation arises in classification, where we have a binary response y and covariates x which are used to predict y. If the observed covariate values x_1, ..., x_n are distinct, then the conditional probabilities π(x) = Pr(Y = 1 | x) estimated from the EDF are all 0 or 1. This is clearly not credible, so the EDF should not be used as a resampling model if the focus of interest is a property that depends critically on the conditional probabilities π(x). A natural modification of the EDF is to keep the marginal EDF of x, but to replace the 0-1 values of the conditional distribution by a smooth estimate of π(x). This is discussed further in Example 7.9. ■
3.4 Smooth Estimates of F
For nonparametric situations we have so far mostly assumed that the EDF F̂ is a suitable estimate of F. But F̂ is discrete, and it is natural to ask if a smooth estimate of F might be better. The most likely situation for improvement is where the effects of discreteness (Section 2.3.2) are severe, as in the case of the sample median (Example 2.16) or other sample quantiles. When it is reasonable to suppose that F has a continuous PDF, one possibility is to use kernel density estimation. For scalar y we take

f̂_h(y) = (nh)^{-1} Σ_{j=1}^n w{ (y − y_j)/h },    (3.6)

where w(·) is a continuous and symmetric PDF with mean zero and unit variance, and do calculations or simulations based on the corresponding CDF F̂_h, rather than on the EDF F̂. This corresponds to simulation by setting

Y*_j = y_{I_j} + hε_j,    j = 1, ..., n,
where the I_j are independent and uniformly distributed on the integers 1, ..., n and the ε_j are a random sample from w(·), independent of the I_j. This is the smoothed bootstrap. Note that h = 0 recovers the EDF. The variance of an observation generated from (3.6) is n^{-1} Σ (y_j − ȳ)² + h², and it may be preferable for the samples to have the same variance as for the unsmoothed bootstrap. This is implemented via the shrunk smoothed bootstrap, under which h smooths between F̂ and a model in which data are generated from density w(·) centred at the mean and rescaled to have the variance of F̂; see Problem 3.8. Having decided which smoothed bootstrap is to be used, we estimate the required property of F, a(F), by a(F̂_h) rather than a(F̂). So if T is an estimator of θ = t(F), and we intend to estimate a(F) = var(T | F) by simulation, we would obtain values t*_1, ..., t*_R calculated from samples generated from F̂_h, and then estimate a(F) by (R − 1)^{-1} Σ_r (t*_r − t̄*)². Notice that it is a(F), not t(F), that is estimated using smoothing. To see when a(F̂_h) is better than a(F̂), suppose that a(F) has linear approximation (2.35). Then
a(F̂_h) − a(F) = n^{-1} Σ_{j=1}^n ∫ L_a(y_j + hε; F) w(ε) dε + ···
             = n^{-1} Σ_{j=1}^n L_a(y_j; F) + ½h² n^{-1} Σ_{j=1}^n L''_a(y_j; F) + ···

for large n and small h, where L''_a(u; F) = ∂²L_a(u; F)/∂u². It follows that the
Table 3.2 Root mean squared error (×10^{-2}) for estimation of n^{1/2} times the standard deviation of the transformed correlation coefficient for bivariate normal data with correlation 0.7, for usual and smoothed bootstraps with R = 200 and smoothing parameter h.

        Usual      Smoothed, h
  n     h = 0    0.1    0.25   0.5    1.0
  20    18.9     18.6   16.6   11.9   6.6
  80    11.4     11.2   10.4   8.5    6.4
mean squared error of a(F̂_h), MSE(h) = E[{a(F̂_h) − a(F)}²], roughly equals

n^{-1} ∫ L_a(y; F)² dF(y) + h² n^{-1} ∫ L_a(y; F) L''_a(y; F) dF(y) + ¼h⁴ { ∫ L''_a(y; F) dF(y) }².    (3.7)

Smoothing is not beneficial if the coefficient of h² is positive, but if it is negative (3.7) can be reduced by choosing a positive value of h that trades off the last two terms. The leading term in (3.7) is unaffected by the choice of h, which suggests that in large samples any effect of smoothing will be minor for such statistics.

Example 3.7 (Sample correlation) To illustrate the discussion above, we take a(F) to be the scaled standard deviation of T = ½ log{(1 + C)/(1 − C)}, where C is the correlation coefficient for bivariate normal data. We extend (3.6) to bivariate y by taking w(·) to be the bivariate normal density with mean zero and variance matrix equal to the sample variance matrix. For each of 200 samples, we applied the smoothed bootstrap with different values of h and R = 200 to estimate a(F). Table 3.2 shows results for two sample sizes. For n = 20 there is a reduction in root mean squared error by a factor of about three, whereas for n = 80 the factor is about two. Results for the shrunk smoothed bootstrap are the same, because of the scale invariance of C and the form of w(·). ■

Smoothing is potentially more valuable when the quantity of interest depends on the local behaviour of F, as in the case of a sample quantile.

Example 3.8 (Sample median) Suppose that t(F̂) is the sample median, and that we wish to estimate its variance a(F). In Example 2.16 we saw that the discreteness of the median posed problems for the ordinary, unsmoothed, bootstrap. Does smoothing improve matters? Under regularity conditions on F and h, detailed calculations show that the mean squared error of n a(F̂_h) is proportional to

(nh)^{-1} c_1 + h⁴ c_2,
(3.8)
where c_1 and c_2 depend on F and w(·) but not on n. Provided that c_1 and c_2 are non-zero, (3.8) is minimized at h ∝ n^{-1/5}, and (3.8) is then of order n^{-4/5},
Table 3.3 Root mean squared error for estimation of n times the variance of the median of samples of size n from the t_3 and exponential densities, for usual, smoothed and shrunk smoothed bootstraps with R = 200 and smoothing parameter h.

             Usual     Smoothed, h                    Shrunk smoothed, h
        n    h = 0   0.1    0.25   0.5    1.0      0.1    0.25   0.5    1.0
  t_3   11   2.27    2.08   2.17   3.59   10.63    2.06   2.00   2.72   4.91
        81   0.97    0.76   0.77   1.81   6.07     0.75   0.67   1.17   2.30
  Exp   11   1.32    1.15   1.02   1.18   7.53     1.13   0.92   0.76   0.93
        81   0.57    0.48   0.37   0.41   1.11     0.47   0.34   0.27   0.27
whereas it is O(n^{-1/2}) in the unsmoothed case. Thus there are advantages to smoothing here, at least in large samples. Similar results hold for other quantiles. Table 3.3 shows results of simulation experiments where 1000 samples were taken from the exponential and t_3 distributions. For each sample smoothed and shrunk smoothed bootstraps were performed with R = 200 and several values of h. Unlike in Table 3.2, the advantage due to smoothing increases with n, and the shrunk smoothed bootstrap improves on the smoothed bootstrap, particularly at larger values of h. As predicted by the theory, as n increases the root mean squared error decreases more rapidly for smoothed than for unsmoothed bootstraps; it decreases fastest for shrunk smoothing. For the t_3 data the root mean squared error is not much reduced. For the exponential data smoothing was performed on the log scale, leading to a reduction in root mean squared error by a factor of two or so. Too large a value of h can lead to large increases in root mean squared error, but the choice of h is less critical for shrunk smoothing. Overall, a small amount of shrunk smoothing seems worthwhile here, provided the data are well-behaved. But similar experiments with Cauchy data gave very poor results made worse by smoothing, so one must be sure that the data are not pathological. Furthermore, the gains in precision are not large enough to be critical, at least for these sample sizes.
■

The discussion above begs the important question of how to choose the smoothing parameter for use with a particular dataset. One possibility is to treat the problem as one of choosing among possible estimators a(F̂_h) and use the nested bootstrap, as in Example 3.26. However, the use of an estimated h is not sure to give improvement. When the rate of decrease of the optimal value of h is known, another possibility is to use subsampling, as in Example 8.6.
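A minimal sketch of the smoothed bootstrap of (3.6), with a standard normal kernel for w and names that are illustrative assumptions:

```python
import random
import statistics

def smoothed_sample(y, h, rng):
    # One sample from F_h: Y*_j = y_{I_j} + h * eps_j, with the I_j
    # uniform on 1..n and the eps_j drawn from the kernel w (here N(0,1)).
    return [rng.choice(y) + h * rng.gauss(0.0, 1.0) for _ in y]

rng = random.Random(2)
y = sorted(rng.gauss(0.0, 1.0) for _ in range(11))

# Estimate a(F) = var(median) under the smoothed model with h = 0.25
meds = [statistics.median(smoothed_sample(y, 0.25, rng))
        for _ in range(200)]
vhat = statistics.variance(meds)
```

Setting h = 0 recovers the ordinary bootstrap; the shrunk smoothed variant would additionally rescale the generated values to match the variance of F̂.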
3.5 Censoring

3.5.1 Censored data

Censoring is present when data contain a lower or upper bound for an observation rather than the value itself. Such data often arise in medical and industrial reliability studies. In the medical context, the variable of interest might represent the time to death of a patient from a specific disease, with an indicator of whether the time recorded is exact or a lower bound due to the patient being lost to follow-up or to death from other causes. The commonest form of censoring is right-censoring, in which case the value observed is Y = min(Y°, C), where C is a censoring value, and Y° is a nonnegative failure time, which is known only if Y° < C. The data themselves are pairs (Y, D), where D is a censoring indicator, which equals one if Y° is observed and equals zero if C is observed. Interest is usually focused on the distribution F° of Y°, which is obscured if there is censoring. The survivor function and the cumulative hazard function are central to the study of survival data. The survivor function corresponding to F°(y) is Pr(Y° > y) = 1 − F°(y), and the cumulative hazard function is A°(y) = −log{1 − F°(y)}. The cumulative hazard function may be written as ∫_0^y dA°(u), where for continuous y the hazard function dA°(y)/dy measures the instantaneous rate of failure at time y, conditional on survival to that point. A constant hazard λ leads to an exponential distribution of failure times with survivor and cumulative hazard functions exp(−λy) and λy; departures from these simple forms are often of interest. The simplest model for censoring is random censorship, under which C is a random variable with distribution function G, independent of Y°. In this case the observed variable Y has survivor function

Pr(Y > y) = {1 − F°(y)}{1 − G(y)}.

Other forms of censoring also arise, and these are often more realistic for applications.
Suppose that the data available are a homogeneous random sample (y_1, d_1), ..., (y_n, d_n), and that censoring occurs at random. Let y_1 < ··· < y_n, so there are no tied observations. A standard estimate of the failure-time survivor function, the product-limit or Kaplan-Meier estimate, may then be written as

1 − F̂°(y) = Π_{j: y_j ≤ y} { (n − j)/(n − j + 1) }^{d_j}.    (3.9)

If there is no censoring, all the d_j equal one, and F̂°(y) reduces to the EDF of y_1, ..., y_n (Problem 3.9). The product-limit estimate changes only at successive failures, by an amount that depends on the number of censored observations
between them. Ties between censored and uncensored data are resolved by assuming that censoring happens instantaneously after a failure might have occurred; the estimate is unaffected by other ties. A standard error for 1 − F̂°(y) is given by Greenwood's formula,

{1 − F̂°(y)} { Σ_{j: y_j ≤ y} d_j / [(n − j)(n − j + 1)] }^{1/2}.    (3.10)

In setting confidence intervals this is usually applied on a transformed scale. Both (3.9) and (3.10) are unreliable where the numbers at risk of failure are small.
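Formulas (3.9) and (3.10) transcribe directly into code; the sketch below assumes the (y_j, d_j) are sorted, with a failure placed before a censored value at the same time, and the function name is illustrative:

```python
import math

def product_limit(times, events, y):
    # 1 - F0hat(y) by (3.9), with Greenwood's standard error (3.10).
    # events[j] = 1 for an observed failure, 0 for censoring.
    n = len(times)
    surv, gw = 1.0, 0.0
    for j, (t, d) in enumerate(zip(times, events), start=1):
        if t > y:
            break
        if d == 1:
            surv *= (n - j) / (n - j + 1)
            gw += 1 / ((n - j) * (n - j + 1))
    return surv, surv * math.sqrt(gw)

# Group 1 of the AML data (Table 3.4); 0 marks right-censoring
times  = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
events = [1,  1,  0,  1,  1,  0,  1,  1,  0,  1,   0]
s20, se20 = product_limit(times, events, 20)
# s20 is about 0.716 and se20 about 0.14, in line with the values
# 0.71 and 0.14 quoted in Example 3.9
```

The same loop structure yields the censoring estimate (3.11) if the exponent 1 − d_j is used instead of d_j.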
■-*M- n Gr^Hj:yj< y v
J/
T he cum ulative h azard function m ay be estim ated by the Nelson-Aalen estim ate H{u) is the Heaviside function, which equals zero if u < 0 and equals one otherwise.
----- —y
(3.12)
Since y\ < • ■• < y„, the increase in A (> at yj is dA°(yj) = dj /( n — j + 1). The in terp retatio n o f (3.12) is th a t at each failure the hazard function is estim ated by the num b er observed to fail, divided by the num ber o f individuals at risk (i.e. available to fail) im m ediately before th a t time. In large sam ples the increm ents o f A 0, the d A0(yj), are approxim ately independent binom ial variables with denom inators (n + 1 — j ) and probabilities dj /( n — j + 1). The product-lim it estim ate m ay be expressed as 1 - F 0( y ) =
J ] {l-dA °(yj)} j-yj^y
(3.13)
in terms of the components of (3.12).

Example 3.9 (AML data) Table 3.4 contains data from a clinical trial conducted at Stanford University to assess the efficacy of maintenance chemotherapy for the remission of acute myelogeneous leukaemia (AML). After reaching a state of remission through treatment by chemotherapy, patients were divided randomly into two groups, one receiving maintenance chemotherapy and the other not. The objective of the study was to see if maintenance chemotherapy lengthened the time of remission, that is, the time until the symptoms recur. The data in the table were gathered for preliminary analysis before the study ended.
Table 3.4 Remission times (weeks) for two groups of patients with acute myelogeneous leukaemia (AML), one receiving maintenance chemotherapy (Group 1) and the other not (Miller, 1981, p. 49). > indicates right-censoring.

Group 1:  9  13  >13  18  23  >28  31  34  >45  48  >161
Group 2:  5   5    8   8  12  >16  23  27   30  33   43  45
The left panel of Figure 3.3 shows the estimated survivor functions for the times of remission. A plus on one of the lines indicates a censored observation. There is some suggestion that maintenance prolongs the time to remission, but the samples are small and the evidence is not overwhelming. The right panel shows the estimated survivor functions for the censoring times. Only one observation in the non-maintained group is censored, but the censoring distributions seem similar for both groups. The estimated probabilities that remission will last beyond 20 weeks are respectively 0.71 and 0.59 for the two groups, with standard errors from (3.10) both equal to 0.14. ■
3.5.2 Resampling plans

Cases

When the data are a homogeneous sample subject to random censorship, the most direct way to bootstrap is to set Y* = min(Y°*, C*), where Y°* and C* are independently generated from F̂° and Ĝ respectively. This implies that

Pr*(Y* > y) = {1 − Ĝ(y)}{1 − F̂°(y)} = Π_{j: y_j ≤ y} (n − j)/(n − j + 1),

which corresponds to the EDF that places mass n^{-1} on each of the n cases (y_j, d_j). That is, ordinary bootstrap sampling under the random censorship model is equivalent to resampling cases from the original data.

Conditional bootstrap

A second sampling scheme starts from the premise that since the censoring variable C is unrelated to Y°, knowledge of the quantities C_1, ..., C_n alone would tell us nothing about F°. They would in effect be ancillary statistics. This suggests that simulations should be conditional on the pattern of censorship, so far as practicable. To allow for the censoring pattern, we argue that although the only values of c_j known exactly are those with d_j = 0, the observed values of the remaining observations are lower bounds for the censoring variables, because C_j > y_j when d_j = 1. This suggests the following algorithm.
Figure 3.3 Product-limit survivor function estimates for two groups of patients with AML, one receiving maintenance chemotherapy (solid) and the other not (dots). The left panel shows estimates for the time to remission, and the right panel shows the estimates for the time to censoring. In the left panel, + indicates times of censored observations; in the right panel + indicates times of uncensored observations. (Both panels plot survival probability against time in weeks.)
Algorithm 3.1 (Conditional bootstrap for censored data)

For r = 1, ..., R:
1 generate Y°*_1, ..., Y°*_n independently from F̂°;
2 for j = 1, ..., n, make simulated censoring variables by setting C*_j = y_j if d_j = 0, and if d_j = 1, generating C*_j from {Ĝ(y) − Ĝ(y_j)}/{1 − Ĝ(y_j)}, which is the estimated distribution of C_j conditional on C_j > y_j; then
3 set Y*_j = min(Y°*_j, C*_j), for j = 1, ..., n.
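Algorithm 3.1 can be sketched as follows. Sampling from F̂° and Ĝ is done via the jump masses of the product-limit estimates, with any leftover mass placed on a notional point beyond the data (see the remark below on the largest observation); names and details are illustrative assumptions:

```python
import random

def km_masses(times, events, failure=True):
    # Jump masses of the product-limit estimate (3.9) for F0hat
    # (failure=True) or (3.11) for Ghat (failure=False); leftover
    # mass goes on a notional point beyond the largest observation.
    n = len(times)
    surv, masses = 1.0, {}
    for j, (t, d) in enumerate(zip(times, events), start=1):
        if (d == 1) == failure:
            new = surv * (n - j) / (n - j + 1)
            masses[t] = masses.get(t, 0.0) + surv - new
            surv = new
    if surv > 1e-12:
        masses[max(times) + 1] = surv
    return masses

def conditional_resample(times, events, rng):
    # One resample under Algorithm 3.1.
    f_mass = km_masses(times, events, failure=True)
    g_mass = km_masses(times, events, failure=False)
    f_pts, f_w = zip(*f_mass.items())
    out = []
    for t, d in zip(times, events):
        y0 = rng.choices(f_pts, weights=f_w)[0]        # step 1
        if d == 0:
            c = t                                      # step 2, d_j = 0
        else:                                          # C* given C > y_j
            cond = {u: w for u, w in g_mass.items() if u > t}
            pts, w = zip(*cond.items()) if cond else ((max(times) + 1,), (1.0,))
            c = rng.choices(pts, weights=w)[0]
        out.append((min(y0, c), int(y0 <= c)))         # step 3
    return out

rng = random.Random(3)
times  = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
events = [1,  1,  0,  1,  1,  0,  1,  1,  0,  1,   0]
star = conditional_resample(times, events, rng)
```

Each resample keeps the observed censoring pattern fixed for the censored cases, as the algorithm requires.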
If the largest observation is censored, it is given a notional failure time to the right of the observed value, and conversely if the largest observation is uncensored, it is given a notional censoring time to the right of the observed value. This ensures that the observation can appear in bootstrap resamples. Both the above sampling plans can accommodate more complicated patterns of censoring, provided it is uninformative. For example, it might be decided at the start of a reliability experiment on independent and identical components that if they have not already failed, items will be censored at fixed times c_1, ..., c_n. In this situation an appropriate resampling plan is to generate failure times Y°*_j from F̂°, and then to take Y*_j = min(Y°*_j, c_j), for j = 1, ..., n. This amounts to having separate censoring distributions for each item, with the jth putting mass one at c_j. Or in a medical study the jth individual might be subject to random censoring up to a time c°_j, corresponding to a fixed calendar date for the end of the study. In this situation, Y_j = min(Y°_j, C_j, c°_j), with the indicator D_j equalling zero, one, or two according to whether C_j, Y°_j, or c°_j was observed. Then an appropriate conditional sampling plan would generate
Y°*_j and C*_j as in the conditional plan above, but take Y*_j = min(Y°*_j, C*_j, c°_j) and make D*_j accordingly.

Weird bootstrap

The sampling plans outlined above mimic how the data are thought to arise, by generating individual failure and censoring times. When interest is focused on the survival or hazard functions, a third and quite different approach uses direct simulation from the Nelson-Aalen estimate (3.12) of the cumulative hazard. The idea is to treat the numbers of failures at each observed failure time as independent binomial variables with denominators equal to the numbers of individuals at risk, and means equal to the numbers that actually failed. Thus when y_1 < ··· < y_n, we take the simulated number to fail at time y_j, N*_j, to be binomial with denominator n − j + 1 and probability of failure d_j/(n − j + 1). A simulated Nelson-Aalen estimate is then

Â°*(y) = Σ_{j=1}^n H(y − y_j) N*_j/(n − j + 1),    (3.14)
which can be used to estimate the uncertainty of the original estimate Â°(y). In this weird bootstrap the failures at different times are unrelated, the number at risk does not depend on previous failures, there are no individuals whose simulated failure times underlie Â°*(y), and no explicit assumption is made about the censoring mechanism. Indeed, under this scheme the censored individuals are held fixed, but the number of failures is a sum of binomial variables (Problem 3.10). The simulated survivor function corresponding to (3.14) is obtained by substituting dÂ°*(y_j) = N*_j/(n − j + 1)
into (3.13) in place of dÂ°(y_j).

Example 3.10 (AML data) Figure 3.3 suggests that the censoring distributions for both groups of data in Table 3.4 are similar, but that the survival distributions themselves are not. To compare the resampling schemes described above, we consider estimates of two parameters, the probability of remission beyond 20 weeks and the median survival time, both for Group 1. These estimates are 1 − F̂°(20) = 0.71 and inf{t : F̂°(t) ≥ ½} = 31. Table 3.5 compares results from 499 simulations using the ordinary, conditional, and weird bootstraps. For the survival probabilities, the ordinary and conditional bootstraps give similar results, and both standard errors are similar to that from Greenwood's formula; the weird bootstrap probabilities are significantly higher and are less variable. The schemes give infinite estimates
Table 3.5 Results for 499 replicates of censored data bootstraps of Group 1 of the AML data: average (standard deviation) for estimated probability of remission beyond 20 weeks, average (standard deviation) for estimated median survival time, and the number of resamples in which case 3 occurs 0, 1, 2 and 3 or more times.

                                          Frequency of case 3
              Probability    Median       0     1     2     ≥3
  Cases       0.72 (0.14)    32.5 (8.5)   180   182   95    42
  Conditional 0.72 (0.14)    32.8 (8.5)   75    351   71    3
  Weird       0.73 (0.12)    33.3 (7.2)   0     499   0     0

Figure 3.4 Comparison of distributions of differences in median survival times for censored data bootstraps applied to the AML data (conditional and weird results plotted against the case-resampling results). The dotted line is the line x = y.
of the median 21, 19, and 2 times respectively. The weird bootstrap results for the median are less variable than the others. The last columns of the table show the numbers of samples in which the smallest censored observation appears 0, 1, 2, and 3 or more times. Under the conditional scheme the observation appears more often than under the ordinary bootstrap, and under the weird bootstrap it occurs once in each resample. Figure 3.4 compares the distributions of the difference of median survival times between the two groups, under the three schemes. Results for the conditional and ordinary bootstraps are similar, but the weird bootstrap again gives results that are less variable than the others. This set of data gives an extreme test of methods for censored data, because quantiles of the product-limit estimate are very discrete. The weird bootstrap also gave results less variable than the other schemes for a larger set of data. In general it seems that case resampling and conditional resampling give quite similar and reliable results, both differing from the weird bootstrap. ■
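The weird bootstrap simulation of (3.14) can be sketched as follows, with the binomial sampling written out directly; function names are illustrative assumptions:

```python
import random

def weird_bootstrap_hazard(times, events, rng):
    # Simulated Nelson-Aalen increments: N*_j ~ Binomial(n - j + 1,
    # d_j/(n - j + 1)), so dA0*(y_j) = N*_j/(n - j + 1). Censored
    # positions (d_j = 0) contribute nothing, as in (3.14).
    n = len(times)
    incs = []
    for j, (t, d) in enumerate(zip(times, events), start=1):
        m = n - j + 1
        nstar = sum(rng.random() < d / m for _ in range(m))
        incs.append((t, nstar / m))
    return incs

rng = random.Random(4)
times  = [9, 13, 13, 18, 23, 28, 31, 34, 45, 48, 161]
events = [1,  1,  0,  1,  1,  0,  1,  1,  0,  1,   0]
incs = weird_bootstrap_hazard(times, events, rng)
# The simulated cumulative hazard at y is the sum of increments with
# y_j <= y; the survivor function follows from (3.13) as the product
# of the factors (1 - increment).
```

Note that no individual failure or censoring times are generated: only the increments of the hazard are simulated.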
3.6 Missing Data

The expression "missing data" relates to datasets of a standard form for which some entries are missing or incomplete. This happens in a variety of different ways. For example, censored data as described in Section 3.5 are incomplete when the censoring value c is reported instead of y°. Or in a factorial experiment a few factor combinations may not have been used. In such cases estimates and inferences would take a simple form if the dataset were "complete". But because part of the standard form is missing, we have two problems: how to estimate the quantities of interest, and how to make inferences about them. We have already discussed ways of dealing with censored data. Now we examine situations where each response has several components, some of which are missing for some cases. Suppose, then, that the fictional or potential complete data are y° and that the corresponding observed data are y, with some components taking the value NA to represent "not available".

Parametric problems

For parametric problems the situation is relatively straightforward, at least in principle. First, in defining estimators there is a general framework within which complete-data MLE methods can be applied using the iterative EM algorithm, which essentially works by estimating missing values. Formulae exist for computing approximate standard errors of estimators, but simulation will often be required to obtain accurate answers. One extra component that must be specified is the mechanism which takes complete data y° into observed data y, i.e. f(y | y°). The methodology is simplest when data are missing at random. The corresponding Bayesian methodology is also relatively straightforward in principle, and numerous general algorithms exist for using complete-data forms of posterior distribution. Such algorithms, although they involve simulation, are somewhat removed from the general context of bootstrap methods and will not be discussed here.

Nonparametric problems

Nonparametric analysis is somewhat more complicated, in part because of the difficulty of defining appropriate estimators. The following artificial example illustrates some of the key ideas.

Example 3.11 (Mean with missing data) Suppose that responses y° had been obtained from n randomly chosen individuals, but that m randomly selected values were then lost. So the observed data are
Such algorithms, although they involve simulation, are somewhat removed from the general context of bootstrap methods and will not be discussed here.

Nonparametric problems

Nonparametric analysis is somewhat more complicated, in part because of the difficulty of defining appropriate estimators. The following artificial example illustrates some of the key ideas.

Example 3.11 (Mean with missing data) Suppose that responses y° had been obtained from n randomly chosen individuals, but that m randomly selected values were then lost. So the observed data are y_1, ..., y_n = y°_1, ..., y°_{n−m}, NA, ..., NA.
The EM (expectation–maximization) algorithm is widely used in incomplete-data problems.
To estimate the population mean μ we should of course use the average response ȳ = (n − m)^{-1} Σ_{j=1}^{n−m} y_j, whose variance we would estimate by

v = (n − m)^{-2} Σ_{j=1}^{n−m} (y_j − ȳ)².

But think of this as a prototype missing data problem, to which resampling methods are to be applied. Consider the following two approaches:

1 First estimate μ by t = ȳ, the average of the non-missing data. Then (a) simulate samples y*_1, ..., y*_n by sampling with replacement from the n observations y_1, ..., y_{n−m}, NA, ..., NA; then (b) calculate t* as the average of the non-missing values.
2 First estimate the missing values y°_{n−m+1}, ..., y°_n by ŷ°_j = ȳ for j = n − m + 1, ..., n, and estimate μ as the mean of y°_1, ..., y°_{n−m}, ŷ°_{n−m+1}, ..., ŷ°_n. Then (a) sample with replacement from y°_1, ..., y°_{n−m}, ŷ°_{n−m+1}, ..., ŷ°_n to get y*°_1, ..., y*°_n; (b) duplicate the data-loss procedure by replacing a randomly chosen m of the y*° with NA; finally (c) duplicate the data estimation of μ to get t*.

In the first approach, we choose the form of t to take account of the missing data. Then in the resampling we get a random number of missing values, M* say, whose mean is m. The effect of this is to make the variance of T* somewhat larger than the variance of T.
Assuming that we discard all resamples with M* = n (all data missing), the bootstrap variance will overestimate var(T) by a factor which ranges from 15% for n = 10, m = 5 to 4% for n = 30, m = 15.

In the second approach, the first step was to fix the data so that the complete-data estimation formula μ̂ = n^{-1} Σ_{j=1}^n y°_j for t could be used. Then we attempted to simulate data according to the two steps in the original data-generation process. Unfortunately the EDF of y°_1, ..., y°_{n−m}, ŷ°_{n−m+1}, ..., ŷ°_n is an underdispersed estimate of the true CDF F. Even though the estimate t is not affected in this particularly simple problem, the bootstrap distribution certainly is, and in particular the bootstrap variance is too small.
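The variance inflation under the first approach is easy to check numerically. The sketch below is our own construction, not from the book: it assumes standard normal responses and uses numpy, resampling all n slots (missing values included) and comparing the bootstrap variance of T* with the direct estimate v.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, R = 30, 15, 5000

# Observed data: n - m responses survive, m values are lost at random (NA -> nan).
y_obs = rng.normal(size=n - m)
y = np.concatenate([y_obs, np.full(m, np.nan)])

# Approach 1: t = average of the non-missing values; resample all n entries
# (missing ones included), then recompute t on each resample.
t_star = []
for _ in range(R):
    y_star = rng.choice(y, size=n, replace=True)
    kept = y_star[~np.isnan(y_star)]
    if kept.size:                       # discard resamples with all data missing
        t_star.append(kept.mean())

boot_var = np.var(t_star, ddof=1)

# Direct variance estimate v = (n - m)^-2 * sum (y_j - ybar)^2; the bootstrap
# variance exceeds it because the number of missing values M* varies.
v = np.sum((y_obs - y_obs.mean()) ** 2) / (n - m) ** 2
print(boot_var / v)
```

With n = 30 and m = 15 the printed ratio should be in the region of the 4% inflation quoted in the text, though it fluctuates with the simulation.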
Both approaches can be repaired. In the first, we can stratify the resampling, with complete and incomplete data as strata. In the second approach, we can add variability to the estimates of missing values. This device, called multiple imputation, replaces the single estimate ŷ°_j = ȳ by the set ŷ°_j + e_1, ..., ŷ°_j + e_{n−m}, where e_k = y_k − ȳ for k = 1, ..., n − m. Where the estimate ŷ°_j was previously given weight 1, the n − m imputed values for the jth case are now given equal weights (n − m)^{-1}. The implication is that F̂ is modified to put probability n^{-1} on each complete-data value, and n^{-1} × (n − m)^{-1} on each of the m(n − m) values ŷ°_j + e_k. In this simple case ŷ°_j + e_k = y_k, so F̂ reduces to the EDF of the non-missing data y_1, ..., y_{n−m}, as a consequence of which t(F̂) = ȳ and the bootstrap distribution of T* is correct. ■

This example suggests two lessons. First, if the complete-data estimator can be modified to work for incomplete data, then resampling cases will work reasonably well provided the proportion of missing data is small: stratified resampling would reduce variation in the amount of missingness. Secondly, the complete-data estimator and full simulation of data observation (including the data-loss step) cannot be based on single imputation estimation of missing values, but may work if we use multiple imputation appropriately.

One further point concerns the data-loss mechanism, which in the example we assumed to be completely random. If data loss is dependent upon the response value y, then resampling cases should still be valid: this is somewhat similar to the censored-data problem. But the other approach via multiple imputation will become complicated because of the difficulty of defining appropriate multiple imputations.

Example 3.12 (Bivariate missing data) A more realistic example concerns the estimation of a bivariate correlation when some cases are incomplete. Suppose that Y is bivariate with components U and X. The parameter of interest is θ = corr(U, X).
A random sample of n cases is taken, such that m cases have x missing, but no cases have both u and x missing or just u missing. If it is safe to assume that X has a linear regression on U, then we can use the fitted regression to make single imputations of the missing values. That is, we estimate each missing x_j by x̂_j = x̄ + b(u_j − ū), where x̄, ū and b are the averages and the slope of the linear regression of x on u from the n − m complete pairs. It is easy to see that it would be wrong to substitute these single imputations in the usual formula for the sample correlation: the result would be biased away from zero if b ≠ 0. Only if we can modify the sample correlation formula to remove this effect will it be sensible to use simple resampling of cases. The other strategy is to begin with multiple imputation to obtain a suitable bivariate F̂, next estimate θ with the usual sample correlation t(F̂), and then resample appropriately. Multiple imputation uses the regression residuals from
Figure 3.5 Scatter plot of bivariate sample and multiple imputation values. Left panel shows observed pairs (o) and cases where only u is observed (•). Right panel shows observed pairs (o) and multiple imputation values (+). Dotted line is imputation regression line obtained from observed pairs.
the complete pairs, e_j = x_j − x̂_j = x_j − {x̄ + b(u_j − ū)}, for j = 1, ..., n − m. Then each missing x_j is imputed by x̂_j plus a randomly selected residual e_k. Our estimate F̂ is the bivariate distribution which puts weight n^{-1} on each complete pair, and weight n^{-1} × (n − m)^{-1} on each of the n − m multiple imputations for each incomplete case. There are two strong implicit assumptions being made here. First, as throughout our discussion, it is assumed that values are missing at random. Secondly, homogeneity of the conditional variances is being assumed, so that pooling of residuals makes sense.

As an illustration, the left panel of Figure 3.5 shows a scatter plot for a sample of n = 20 where m = 5 cases have x components missing. Complete cases appear as open circles, and incomplete cases as filled circles — only the u components are observed. In the right panel, the dotted line is the imputation line which gives x̂_j for j = 16, ..., 20, and the multiple imputation values are plotted with symbol +. The multiple imputation EDF will put probability 1/20 on each open circle, and probability 1/300 on each +.

The results in Table 3.6 illustrate the effectiveness of the multiple imputation EDF. The table shows simulation averages and standard deviations for estimates of the correlation θ and σ²_x = var(X) using the standard complete-data forms of the estimators, when half of the x values are missing in a sample of size n = 20 from the bivariate normal distribution. In this problem there would be little gain from using incomplete cases, but in more complex situations there might be so few complete cases that multiple imputation would be highly effective or even essential.
Table 3.6

          Full data      Observed data estimates
          estimates      Complete case only   Single imputation   Multiple imputation
σ²_x      1.00 (0.33)    1.01 (0.49)          0.79 (0.44)         0.96 (0.46)
θ         0.69 (0.13)    0.68 (0.20)          0.79 (0.18)         0.70 (0.19)
Having set up an appropriate multiple imputation EDF F̂, resampling proceeds in an obvious way, first creating a full set of n pairs by random sampling from F̂, and then selecting m cases randomly without replacement for which the x values are "lost". The first stage is equivalent to random sampling with replacement from n − m copies of the complete data plus all m × (n − m) possible multiple imputation values. ■
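As a concrete illustration, here is a small sketch of building F̂ and evaluating t(F̂) as a weighted correlation. It is our own construction with numpy: the bivariate data are synthetic, and the helper names mi_edf and weighted_corr are ours, not the book's.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 5
obs = n - m                              # number of complete (u, x) pairs

# Synthetic bivariate data; the last m cases have x missing.
u = rng.normal(size=n)
x = 0.7 * u + rng.normal(scale=0.7, size=n)

def mi_edf(u, x, obs):
    """Support points and weights of the multiple-imputation EDF F-hat."""
    n = len(u)
    uc, xc = u[:obs], x[:obs]
    C = np.cov(uc, xc)
    b = C[0, 1] / C[0, 0]                          # slope of regression of x on u
    e = xc - (xc.mean() + b * (uc - uc.mean()))    # residuals from complete pairs
    pts = list(zip(uc, xc))
    w = [1.0 / n] * obs                            # weight 1/n per complete pair
    for uj in u[obs:]:                             # incomplete cases: xhat_j + e_k
        xhat = xc.mean() + b * (uj - uc.mean())
        pts += [(uj, xhat + ek) for ek in e]
        w += [1.0 / (n * obs)] * obs               # weight 1/{n(n-m)} each
    return np.array(pts), np.array(w)

def weighted_corr(pts, w):
    mu = w @ pts
    d = pts - mu
    cov = (w[:, None] * d).T @ d
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

pts, w = mi_edf(u, x, obs)
t = weighted_corr(pts, w)                # t(F-hat), the estimate of theta
print(round(t, 2))
```

Resampling then draws n points from these weighted support points, deletes x from m randomly chosen cases, and recomputes t* by the same imputation procedure.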
3.7 Finite Population Sampling

Basics

The simplest form of finite population sampling is when a sample is taken randomly without replacement from a population 𝒴 with values 𝒴_1, ..., 𝒴_N, with N > n known. The statistic t(y_1, ..., y_n) is used to estimate the corresponding population quantity θ = t(𝒴_1, ..., 𝒴_N). The data are one of the (N choose n) possible samples Y_1, ..., Y_n from the population, and the without-replacement sampling means that the Y_j are exchangeable but not independent; the sampling fraction is defined to be f = n/N. If θ is the population average, then c = (n − 1)^{-1} Σ_{j=1}^n (y_j − ȳ)² is an unbiased estimate of γ, and the usual standard error for ȳ under without-replacement sampling is obtained from the second line of (3.15) by replacing γ with c. Normal approximation to the distribution of Ȳ then gives approximate (1 − 2α) confidence limits ȳ + (1 − f)^{1/2} c^{1/2} n^{-1/2} z_α for θ, where z_α is the α
Table 3.6 Average (standard deviation) of estimators for the variance σ²_x and correlation θ from bivariate normal data (u, x) with sample size n = 20 and m = 10 x values missing at random. True values σ²_x = 1 and θ = 0.7. Results from 1000 simulated datasets.
quantile of the standard normal distribution. Such confidence intervals are a factor (1 − f)^{1/2} shorter than for sampling with replacement.

The lack of independence affects possible resampling plans, as is seen by applying the ordinary bootstrap to Ȳ. Suppose that Y*_1, ..., Y*_n is a random sample taken with replacement from y_1, ..., y_n. Their average Ȳ* has variance var*(Ȳ*) = n^{-2} Σ (y_j − ȳ)², and this has expected value n^{-2}(n − 1)γ over possible samples y_1, ..., y_n. This only matches the second line of (3.15) if f = n^{-1}. Thus for the larger values of f generally met in practice, ordinary bootstrap standard errors for ȳ are too large and the confidence intervals for θ are systematically too wide. ■

Modified sample size

The key difficulty with the ordinary bootstrap is that it involves with-replacement samples of size n and so does not capture the effect of the sampling fraction, which is to shrink the variance of an estimator. One way to deal with this is to take resamples of size n′, resampling with or without replacement. The value of n′ is chosen so that the estimator variance is matched, at least approximately. For with-replacement resamples the average Ȳ* of Y*_1, ..., Y*_{n′} has variance var*(Ȳ*) = (n − 1)c/(n′n), which is an unbiased estimate of (1 − f)γ/n only when n′ = (n − 1)/(1 − f); this usually exceeds n. For without-replacement resampling, a similar argument implies that we should take n′ = fn. One obvious difficulty with this is that if f is small, then n′ is much smaller than n and the resamples carry little information.

For the city population data, with pairs (u_j, x_j) and θ the population mean of x, the ratio estimate of θ and its estimated variance are

t_rat = ū_𝒰 x̄/ū,   v_rat = (1 − f) n^{-1} (n − 1)^{-1} Σ_{j=1}^n (x_j − u_j x̄/ū)²,   (3.16)

where ū_𝒰 = N^{-1} Σ_{j=1}^N u_j is the known population average of u. For our data t_rat = 156.8 and v_rat = 10.85². The regression estimate is based on the straight-line regression x = β₀ + β₁u fitted to the data (u_1, x_1), ..., (u_n, x_n), using least squares estimates β̂₀ and β̂₁. The regression estimate of θ and its estimated variance are

t_reg = β̂₀ + β̂₁ū_𝒰,   v_reg = (1 − f) n^{-1} (n − 2)^{-1} Σ_{j=1}^n (x_j − β̂₀ − β̂₁u_j)²;   (3.17)

for our data t_reg = 138.3 and v_reg = 8.32².

Table 3.7 contains 95% confidence intervals for θ based on normal approximations to t_rat and t_reg, and on the studentized bootstrap applied to (3.16) and (3.17). Normal approximations to the distributions of t_rat and t_reg are poor, and intervals based on them are considerably shorter than the other intervals. The population and superpopulation bootstraps give rather similar intervals. The sampling fraction is f = 10/49, so the estimate of the distribution of T using modified sample size and without-replacement resampling uses
Table 3.7 City population data: 95% confidence limits for the mean population per city in 1930 based on the ratio and regression estimates, using normal approximation and various resampling methods with R = 999.

                          Ratio             Regression
Scheme                    Lower    Upper    Lower    Upper
Normal                    137.8    174.7    123.7    152.0
Modified size, n′ = 2      58.9    298.6      —        —
Modified size, n′ = 11    111.9    196.2             258.2
Mirror-match, m = 2       115.6    196.0    112.8    258.7
Population                118.9    193.3    116.1    240.7
Superpopulation           120.3    195.9    114.0    255.4

Table 3.8

                          Coverage (%)               Length
Scheme                    Lower    Upper    Overall    Average    SD
Normal                      7       89        82          23       8.2
Modified size, n′ = 2       1       98        98         151      142
Modified size, n′ = 11      2       91        89          34       19
Mirror-match, m = 2         3       91        88          33       19
Population                  2       91        89          36       21
Superpopulation             1       92        91          41       24
samples of size n′ = 2. Not surprisingly, without-replacement resamples of size n′ = 2 from 10 observations give a very poor idea of what happens when samples of size 10 are taken without replacement from 49 observations, and the corresponding confidence interval is very wide. Studentized bootstrap confidence limits cannot be based on t_reg, because with n′ = 2 we have v*_reg = 0. For with-replacement resampling, we take n′ = (n − 1)/(1 − f) = 11, giving intervals quite close to those for the mirror-match, population and superpopulation bootstraps.

Figure 3.6 shows why the upper endpoints of the ratio and regression confidence intervals differ so much. The variance estimate v*_reg is unstable because of resamples in which case 4 does not appear and case 9 appears just once or not at all; then z* takes large negative values. The right panel of the figure explains this: the regression slope changes markedly when case 4 is deleted. Exclusion of case 9 further reduces the regression sum of squares and hence v*_reg. The ratio estimate is much less sensitive to case 4. If we insisted on using t_reg, one solution would be to exclude from the simulation samples in which case 4 does not appear. Then the 0.025 and 0.975 quantiles of z*_reg using the population bootstrap are −1.30 and 3.06, and the corresponding confidence interval is [112.9, 149.1].
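The population bootstrap itself is easy to sketch in code. The following is our own minimal illustration with numpy: synthetic (u, x) pairs stand in for the actual city data, the "known" population mean of u is invented, and we take N = 5n so that N/n is an integer (the book's N = 49 needs a small modification of the replicated population).

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 5                       # sample size; fake population of size N = k*n
N = k * n

# Synthetic (u, x) pairs standing in for the city data; u_pop_mean plays the
# role of the known population mean of u used by the ratio estimator.
u = rng.uniform(50.0, 300.0, size=n)
x = u * rng.uniform(0.9, 1.3, size=n)
u_pop_mean = 120.0                 # assumed known, for illustration only

def t_ratio(u, x, u_pop_mean):
    return u_pop_mean * x.mean() / u.mean()

t = t_ratio(u, x, u_pop_mean)

# Population bootstrap: replicate the sample N/n times to build a fake
# population, then draw n cases *without* replacement from it.
pop_idx = np.tile(np.arange(n), k)
t_star = np.empty(999)
for r in range(t_star.size):
    idx = rng.choice(pop_idx, size=n, replace=False)
    t_star[r] = t_ratio(u[idx], x[idx], u_pop_mean)

se = t_star.std(ddof=1)
print(t, se)
```

A studentized version would also recompute the variance estimate within each resample, as in the intervals of Table 3.7.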
Table 3.8 City population data. Empirical coverages (%) and average and standard deviation of length of 90% confidence intervals based on the ratio estimate of the 1930 total, based on 1000 samples of size 10 from the population of size 49. The nominal lower, upper and overall coverages are 5, 95 and 90.
Figure 3.6 Population bootstrap results for the regression estimator based on the city data with n = 10. The left panel shows values of z*_reg and v*^{1/2} for resamples in which case 4 appears at least once (dots), and in which case 4 does not appear and case 9 appears zero times (0), once (1), or more times (+). The right panel shows the sample and the regression lines fitted to the data with case 4 (dashes) and without it (dots); the vertical line shows the value of u at which θ is estimated.
To compare the performances of the various methods in setting confidence intervals, we conducted a numerical experiment in which 1000 samples of size n = 10 were taken without replacement from the population of size N = 49. For each sample we calculated 90% confidence intervals [L, U] for θ using R = 999 bootstrap samples. Table 3.8 contains the empirical values of Pr(θ < L), Pr(θ < U), and Pr(L < θ < U). The normal intervals are short and their coverages are much too small, while the modified intervals with n′ = 2 have the opposite problem. Coverages for the modified sample size with n′ = 11 and for the population and superpopulation bootstraps are close to their nominal levels, though their endpoints seem to be slightly too far left. The 80% and 95% intervals and those for the regression estimator have similar properties. In line with other studies in the literature, we conclude that the population and superpopulation bootstraps are the best of those considered here. ■

Stratified sampling

In most applications the population is divided into k strata, the ith of which contains N_i individuals from which a sample of size n_i is taken without replacement, independently of the other strata. The ith sampling fraction is f_i = n_i/N_i and the proportion of the population in the ith stratum is w_i = N_i/N, where N = N_1 + ··· + N_k. The estimate of θ and its standard error are found by combining quantities from each stratum. Two different setups can be envisaged for mathematical discussion. In the first — the "small-k" case — there is a small number of large strata: the asymptotic regime takes k fixed and n_i, N_i → ∞ with f_i → π_i, where 0 < π_i < 1.
Apart from there being k strata, the same ideas and results will apply as above, with the chosen resampling scheme applied separately in each stratum. The second setup — the "large-k" case — is where there are many small strata; in mathematical terms we suppose that k → ∞ but that N_i and n_i are bounded. This situation is more complicated, because biases from each stratum can combine in such a way that a bootstrap fails completely.

Example 3.16 (Average) Suppose that the population 𝒴 comprises k strata, and that the jth item in the ith stratum is labelled 𝒴_ij; the average for that stratum is θ_i = N_i^{-1} Σ_{j=1}^{N_i} 𝒴_ij. Then the population average is θ = Σ_i w_i θ_i, which is estimated by T = Σ_i w_i Ȳ_i, where Ȳ_i is the average of the sample Y_{i1}, ..., Y_{in_i} from the ith stratum. The variance of T is

var(T) = Σ_{i=1}^k w_i² (1 − f_i) n_i^{-1} × (N_i − 1)^{-1} Σ_{j=1}^{N_i} (𝒴_ij − θ_i)²,   (3.18)

an unbiased estimate of which is

v = Σ_{i=1}^k w_i² (1 − f_i) n_i^{-1} × (n_i − 1)^{-1} Σ_{j=1}^{n_i} (Y_ij − Ȳ_i)².   (3.19)
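In code, (3.19) is just a weighted combination of the usual within-stratum variance estimates. A minimal sketch of T and v, our own with numpy (the stratum sizes, sample sizes and normal means are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Three strata with known sizes N_i; a sample of size n_i is taken without
# replacement from each (simulated here as normal draws for illustration).
N = np.array([40, 60, 100])
n = np.array([10, 15, 20])
w = N / N.sum()                    # stratum weights w_i = N_i / N
f = n / N                          # sampling fractions f_i
strata = [rng.normal(mu, 1.0, size=ni) for mu, ni in zip([0.0, 1.0, 2.0], n)]

# T = sum_i w_i * ybar_i, and v from (3.19), where var(ddof=1) is the
# divisor-(n_i - 1) sample variance within stratum i.
ybar = np.array([s.mean() for s in strata])
t = w @ ybar
v = sum(w[i] ** 2 * (1 - f[i]) * strata[i].var(ddof=1) / n[i] for i in range(3))
print(t, v)
```

Each resampling scheme for this setting is then applied stratum by stratum, and t and v recomputed on the resample.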
Suppose for the sake of simplicity that each N_i/n_i is an integer, and that the population bootstrap is applied to each stratum independently. Then the variance of the bootstrap version of T is

var*(T*) = Σ_{i=1}^k w_i² (1 − f_i) n_i^{-1} × N_i {n_i(N_i − 1)}^{-1} Σ_{j=1}^{n_i} (y_ij − ȳ_i)²,   (3.20)

the mean of which is obtained by replacing the last sum on the right by its expectation, (n_i − 1) × (N_i − 1)^{-1} Σ_{j=1}^{N_i} (𝒴_ij − θ_i)². If k is fixed and N_i → ∞ while f_i → π_i, (3.20) will converge to v, but this will not be the case if n_i and N_i are bounded and k → ∞. The bootstrap bias estimate also may fail for the same reason (Problem 3.12). ■

For setting confidence intervals using the studentized bootstrap the key issue is not the performance of bias and variance estimates, but the extent to which the distribution of the resampled quantity Z* = (T* − t)/V*^{1/2} matches that of Z = (T − θ)/V^{1/2}. Detailed calculations show that when the population and superpopulation bootstraps are used, Z and Z* have the same limiting distribution under both asymptotic regimes, and that under the fixed-k setup the approximation is better than that using the other resampling plans.

Example 3.17 (Stratified ratio) For empirical comparison of the more promising of these finite population resampling schemes with stratified data, we generated a population with N pairs (u, x) divided into strata of sizes N_1, ..., N_k
Table 3.9 Empirical coverages (%) of nominal 90% confidence intervals using the ratio estimate for a population average, based on 1000 stratified samples from populations with k strata of size N, from each of which a sample of size n = N/3 was taken without replacement. The nominal lower (L), upper (U) and overall (O) coverages are 5, 95 and 90.
                     k = 20, N = 18      k = 5, N = 72      k = 3, N = 18
                     L    U    O         L    U    O        L    U    O
Normal               5    93   88        4    94   90       7    93   86
Modified size        6    94   89        4    94   90       6    96   90
Mirror-match         9    92   83        8    90   82       6    94   88
Population           6    95   89        5    95   90       6    95   89
Superpopulation      3    97   95        2    98   96       3    98   96
according to the ordered values of u. The aim was to form 90% confidence intervals for

θ = N^{-1} Σ_{i=1}^k Σ_{j=1}^{N_i} x_ij,

where x_ij is the value of x for the jth element of stratum i. We took independent samples (u_ij, x_ij) of sizes n_i without replacement from the ith stratum, and used these to form the ratio estimate of θ and its estimated variance, given by

t = Σ_{i=1}^k w_i ū_i t_i,   v = Σ_{i=1}^k w_i² (1 − f_i) n_i^{-1} × (n_i − 1)^{-1} Σ_{j=1}^{n_i} (x_ij − t_i u_ij)²,

where

t_i = Σ_{j=1}^{n_i} x_ij / Σ_{j=1}^{n_i} u_ij,   ū_i = N_i^{-1} Σ_{j=1}^{N_i} u_ij;

these extend (3.16) to stratified sampling. We used bootstrap resamples with R = 199 to compute studentized bootstrap confidence intervals for θ based on 1000 different samples from simulated datasets. Table 3.9 shows the empirical coverages of these confidence intervals in three situations: a "large-k" case with k = 20, N_i = 18 and n_i = 6; a "small-k" case with k = 5, N_i = 72 and n_i = 24; and a "small-k" case with k = 3, N_i = 18 and n_i = 6. The modified sampling method used sampling with replacement, giving samples of size n′ = 7 when n = 6 and size n′ = 34 when n = 24, while the corresponding values of m for the mirror-match method were 3 and 8. Throughout, f_i = 1/3.

In all three cases the coverages for normal, population and modified sample size intervals are close to nominal, while the mirror-match method does poorly. The superpopulation method also does poorly, perhaps because it was applied to separate strata rather than used to construct a new population to be stratified at each replicate. Similar results were obtained for nominal 80% and 95% confidence limits. Overall the population bootstrap and modified sample
size methods do best in this limited comparison, and coverage is not improved by using the more complicated mirror-match method. ■
3.8 Hierarchical Data

In some studies the variation in responses may be hierarchical or multilevel, as happens in repeated-measures experiments and the classical split-plot experiment. Depending upon the nature of the parameter being estimated, it may be important to take careful account of the two (or more) sources of variation when setting up a resampling scheme. In principle there should be no difficulty with parametric resampling: having fitted the model parameters, resample data will be generated according to a completely defined model. Nonparametric resampling is not straightforward: certainly it will not make sense to use simple nonparametric resampling, which treats all observations as independent. Here we discuss some of the basic points about nonparametric resampling in a relatively simple context.

Perhaps the most basic problem involving hierarchical variation can be formulated as follows. For each of a groups we obtain b responses y_ij such that

y_ij = x_i + z_ij,   i = 1, ..., a,   j = 1, ..., b,   (3.21)

where the x_i are randomly sampled from F_x and, independently, the z_ij are randomly sampled from F_z, with E(Z) = 0 to force uniqueness of the model. Thus there is homogeneity of variation in Z between groups, and the structure is additive. The feature of this model that complicates resampling is the correlation between observations within a group:

var(Y_ij) = σ²_x + σ²_z,   cov(Y_ij, Y_ik) = σ²_x,   j ≠ k.   (3.22)
For data having this nested structure, one might be interested in parameters of F_x or F_z or some combination of both. For example, when testing for the presence of variation in X the usual statistic of interest is the ratio of between-group and within-group sums of squares. How should one resample nonparametrically for such a data structure? There are two simple strategies, for both of which the first stage is to randomly sample groups with replacement. At the second stage we randomly sample within the groups selected at the first stage, either without replacement (Strategy 1) or with replacement (Strategy 2). Note that Strategy 1 keeps selected groups intact.

To see which strategy is likely to work better, we look at the second moments of the resampled data y*_ij to see how well they match (3.22). Consider selecting y*_{i1}, ..., y*_{ib}. At the first stage we select a random integer I* from {1, 2, ..., a}. At the second stage, we select random integers from {1, 2, ..., b}, either without replacement (Strategy 1) or with replacement (Strategy 2): the
sampling without replacement is equivalent to keeping the I*th group intact. Under both strategies E*(Y*_ij | I* = i′) = ȳ_{i′}. However, for j ≠ k,

E*(Y*_ij Y*_ik | I* = i′) = {b(b − 1)}^{-1} Σ_{l≠m} y_{i′l} y_{i′m}   (Strategy 1),
E*(Y*_ij Y*_ik | I* = i′) = b^{-2} Σ_{l=1}^b Σ_{m=1}^b y_{i′l} y_{i′m}   (Strategy 2).

Therefore E*(Y*_ij) = ȳ,

var*(Y*_ij) = SS_B/a + SS_W/(ab),   (3.23)

and

cov*(Y*_ij, Y*_ik) = SS_B/a − SS_W/{ab(b − 1)}   (Strategy 1),
cov*(Y*_ij, Y*_ik) = SS_B/a   (Strategy 2),   (3.24)

where ȳ = a^{-1} Σ_i ȳ_i, SS_B = Σ_{i=1}^a (ȳ_i − ȳ)² and SS_W = Σ_{i=1}^a Σ_{j=1}^b (y_ij − ȳ_i)². To see how well the resampling variation mimics (3.22), we calculate expectations of (3.23) and (3.24), using

E(SS_B) = (a − 1)(σ²_x + b^{-1}σ²_z),   E(SS_W) = a(b − 1)σ²_z.

This gives

E{var*(Y*_ij)} = (1 − a^{-1})σ²_x + {1 − (ab)^{-1}}σ²_z,

and

E{cov*(Y*_ij, Y*_ik)} = (1 − a^{-1})σ²_x − (ab)^{-1}σ²_z   (Strategy 1),
E{cov*(Y*_ij, Y*_ik)} = (1 − a^{-1})σ²_x + (1 − a^{-1})b^{-1}σ²_z   (Strategy 2).

On balance, therefore, Strategy 1 more closely mimics the variation properties of the data, and so is the preferable strategy. Resampling should work well so long as a is moderately large, say at least 10, just as resampling homogeneous data works well if n is moderately large. Of course both strategies would work well if both a and b were very large, but this is rarely the case. An application of these results is given in Example 6.9.

The preceding discussion would apply to balanced data structures, but not to more complex situations, for which a more general approach is required. A direct, model-based approach would involve resampling from suitable estimates of the two (or more) data distributions, generalizing the resampling from F̂ in Chapter 2. Here we outline how this might work for the data structure (3.21).
Estimates of the two CDFs F_x and F_z can be formed by first estimating the xs and zs, and then using their EDFs. A naive version of this, which parallels standard linear model theory, is to define

x̂_i = ȳ_i,   ẑ_ij = y_ij − ȳ_i.   (3.25)

The resulting way to obtain a resampled dataset is to

1 choose x*_1, ..., x*_a by randomly sampling with replacement from x̂_1, ..., x̂_a; then
2 choose z*_{11}, ..., z*_{ab} by randomly sampling ab times with replacement from ẑ_{11}, ..., ẑ_{ab}; and finally
3 set y*_ij = x*_i + z*_ij, i = 1, ..., a, j = 1, ..., b.

Straightforward calculations (Problem 3.17) show that this approach has the same second-moment properties as Strategy 2 earlier, shown in (3.23) and (3.24), which are not satisfactory. Somewhat predictably, Strategy 1 is mimicked by choosing z*_{i1}, ..., z*_{ib} randomly with replacement from one group of residuals ẑ_{k1}, ..., ẑ_{kb} — either a randomly selected group or the group corresponding to x*_i (Problem 3.17). What has gone wrong here is that the estimates x̂_i in (3.25) have excess variation, namely E{(a − 1)^{-1}SS_B} = σ²_x + b^{-1}σ²_z, relative to the target σ²_x. The estimates ẑ_ij defined in (3.25) will be satisfactory provided b is reasonably large, although in principle they should be standardized to

ẑ_ij / (1 − b^{-1})^{1/2}.   (3.26)

The excess variation in the x̂_i can be corrected by using the shrinkage estimate x̃_i = cȳ + (1 − c)ȳ_i, where c is given by

(1 − c)² = 1 − SS_W / {b(b − 1)SS_B},

with c = 1 if the right-hand side is negative. A straightforward calculation shows that this choice for c makes the variance of the x̃_i equal to the components of variance estimator of σ²_x; see Problem 3.18. Note that the wisdom of matching first and second moments may depend upon θ being a function of such moments.
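The comparison between the two strategies can be verified by simulation. The sketch below is our own construction with numpy (the group counts a, b and the variance components are arbitrary): it estimates the within-group resampling covariance cov*(Y*_i1, Y*_i2) by Monte Carlo and checks it against the two expressions in (3.24).

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, R = 10, 5, 4000

# Two-level data y_ij = x_i + z_ij as in (3.21).
x = rng.normal(size=a)
y = x[:, None] + rng.normal(scale=0.5, size=(a, b))

def resample(y, strategy, rng):
    a, b = y.shape
    groups = rng.integers(0, a, size=a)              # stage 1: groups, with replacement
    if strategy == 1:                                # stage 2: without replacement
        cols = np.array([rng.permutation(b) for _ in range(a)])
    else:                                            # stage 2: with replacement
        cols = rng.integers(0, b, size=(a, b))
    return y[groups[:, None], cols]

# Monte Carlo estimate of cov*(Y*_i1, Y*_i2) under each strategy.
covs = {}
for strategy in (1, 2):
    prods = np.array([resample(y, strategy, rng)[0, :2].prod() for _ in range(R)])
    covs[strategy] = prods.mean() - y.mean() ** 2

ybar_i = y.mean(axis=1)
ss_b = ((ybar_i - y.mean()) ** 2).sum()
ss_w = ((y - ybar_i[:, None]) ** 2).sum()
print(covs[1], ss_b / a - ss_w / (a * b * (b - 1)))  # Strategy 1 vs (3.24)
print(covs[2], ss_b / a)                             # Strategy 2 vs (3.24)
```

The printed pairs should agree up to Monte Carlo error, with Strategy 2 showing the larger covariance, as the expectations above predict.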
3.9 Bootstrapping the Bootstrap

3.9.1 Bias correction of bootstrap calculations

As with most statistical methods, the bootstrap does not provide exact answers. For example, the basic confidence interval methods outlined in Section 2.4 do not have coverage exactly equal to the target, or nominal, coverage. Similarly the bias and variance estimates B and V of Section 2.2.1 are typically biased. In many cases the discrepancies involved are not practically important, or there is some specific remedy — as with the improved confidence limit methods of Chapter 5. Nevertheless it is useful to have available a general technique for making a bias correction to a bootstrap calculation. That technique is the bootstrap itself. Here we describe how to apply the bootstrap to improve estimation of the bias of an estimator in the simple situation of a single random sample.

In the notation of Chapter 2, the estimator T = t(F̂) has bias

β = b(F) = E(T) − θ = E{t(F̂) | F} − t(F).

The bootstrap estimate of this bias is

B = b(F̂) = E*(T*) − T = E*{t(F̂*) | F̂} − t(F̂),   (3.27)

where F̂* denotes either the EDF of the bootstrap sample Y*_1, ..., Y*_n drawn from F̂, or the parametric model fitted to that sample. Thus the calculation applies to both parametric and nonparametric situations. There is both random variation and systematic bias in B in general: it is the bias with which we are concerned here.

As with T itself, so with B: the bias can be estimated using the bootstrap. If we write γ = c(F) = E(B | F) − b(F), then the simple bootstrap estimate according to the general principle laid out in Chapter 2 is C = c(F̂). From the definition of c(F) this implies

C = E*(B* | F̂) − B,

the bootstrap estimate of the bias of B. To see just what C involves, we use the definition of B in (3.27) to obtain

C = E*[E**{t(F̂**) | F̂*} − t(F̂*) | F̂] − [E*{t(F̂*) | F̂} − t(F̂)];   (3.28)

or more simply, after combining terms,

C = E*{E**(T**)} − 2E*(T* | F̂) + T.   (3.29)
Here F̂** denotes the EDF of a sample drawn from F̂*, or the parametric model fitted to that sample; T** is the estimate computed with that sample; and E** denotes expectation over the distribution of that sample conditional on F̂*.

There are two levels of bootstrapping in this procedure, which is therefore called the nested or double bootstrap. In principle a nested bootstrap might involve more than two levels, but in practice the computational burden would ordinarily be too great for more than two levels to be worthwhile, and we shall assume that a nested bootstrap has just two levels. The adjusted estimate of the bias of T is

B_adj = B − C.
Since typically bias is o f o rder n-1 , the adjustm ent C is typically o f order n~2. T he following exam ple gives a simple illustration o f the adjustm ent. Example 3.18 (Sample variance) Suppose th a t T = n~l Z ( Y j — Y )2 is used to estim ate v a r(Y ) = a 2. Since E { J](Y / — Y ) 2} = (n — l ) a 2, the bias o f T is easily seen to be /? = —n_1<x2, which the b o o tstrap estim ates by B = —n~l T. The bias o f this bias estim ate is E (B) — ft = n~2o 2, which the b o o tstrap estim ates by C = n~2T. T herefore the adjusted bias estim ate is B — C = —n-1 T — n~2 T. T h at this is an im provem ent can be checked by showing th a t it has expectation /?(1 + n~2), w hereas B has expectation /?(1 + n~]). ■ In m ost applications b o o tstrap calculations are approxim ated by sim ulation. So, as explained in C h ap ter 2, for m ost estim ators T we would approxim ate the bias B by ]T t* — t using the resam pled values and the d a ta value t o f the estim ator. Likewise the expectations involved in the bias adjustm ent C will usually be approxim ated by sim ulation. The calculation is as follows. Algorithm 3.3 (Double bootstrap for bias adjustment) F or r = 1 1 generate the rth original b o o tstrap sam ple y j,...,y * and then t’ by • •
sampling at random with replacement from y_1, ..., y_n (nonparametric case), or sampling parametrically from the fitted model (parametric case);
2. Obtain M second-level bootstrap samples y**_1, ..., y**_n, either by
sampling with replacement from y*_1, ..., y*_n (nonparametric case), or sampling from the model fitted to y*_1, ..., y*_n (parametric case);
3. Evaluate the estimator T for each of the M second-level samples, to give t**_{r1}, ..., t**_{rM}.

Then approximate the bias adjustment C in (3.29) by

C = (RM)^{-1} Σ_{r=1}^{R} Σ_{m=1}^{M} t**_{rm} − 2R^{-1} Σ_{r=1}^{R} t*_r + t.   (3.30)
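The nested computation above is easy to express in code. The sketch below is not from the book — the toy statistic, sample sizes, and random seed are all illustrative choices — but it applies Algorithm 3.3 and (3.30) to the biased sample variance of Example 3.18, whose true bias is −σ²/n:

```python
# Illustrative sketch of Algorithm 3.3 (nonparametric double bootstrap
# for bias adjustment); all names and settings are the author's choices.
import numpy as np

rng = np.random.default_rng(1)

def t_stat(y):
    # biased variance estimate T = n^{-1} sum (y_j - ybar)^2
    return np.mean((y - y.mean()) ** 2)

def double_bootstrap_bias(y, R=200, M=5):
    n, t = len(y), t_stat(y)
    t1 = np.empty(R)          # first-level values t*_r
    t2 = np.empty((R, M))     # second-level values t**_{rm}
    for r in range(R):
        ystar = rng.choice(y, size=n, replace=True)
        t1[r] = t_stat(ystar)
        for m in range(M):
            t2[r, m] = t_stat(rng.choice(ystar, size=n, replace=True))
    B = t1.mean() - t                    # bias estimate
    C = t2.mean() - 2 * t1.mean() + t    # adjustment (3.30)
    return B, C, B - C

y = rng.normal(size=30)
B, C, B_adj = double_bootstrap_bias(y)
```

For this statistic B should fall near −t/n and the adjustment C should be an order of magnitude smaller, as the theory above predicts.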
3.9 ■Bootstrapping the Bootstrap
At first sight it would seem that to apply (3.30) successfully would involve a vast amount of computation. If a general rule is to use at least 100 samples when bootstrapping, this would imply a total of RM + R = 10100 simulated samples and evaluations of t. But this is unnecessary, because of theoretical and computational techniques that can be used, as explained in Chapter 9. For the case of the bias B discussed here, the simulation variance of B − C would be no greater than it was for B if we used M = 1 and increased R by a factor of about 5, so that a total of about 500 samples would seem reasonable; see Problem 3.19. More complicated applications of the technique are discussed in Example 3.26 and in Chapters 4 and 5.

Theory

It may be intuitively clear that bootstrapping the bootstrap will reduce the order of bias in the original bootstrap calculation, at least in simple situations such as Example 3.18. However, in some situations the order of the reduction may not be clear. Here we outline a general calculation which provides the answer, so long as the quantity being estimated by the bootstrap can be expressed in terms of an estimating equation. For simplicity we focus on the single-sample case, but the calculations extend quite easily.

Suppose that the quantity β = b(F) being estimated by the bootstrap is defined by the estimating equation

E{h(F̂, F; β) | F} = 0,   (3.31)

where h(G, F; β) is chosen to be of order one. The bootstrap solution is β̂ = b(F̂), which therefore solves E*{h(F̂*, F̂; β̂) | F̂} = 0. In general β̂ has a bias of order n^{-a}, say, where typically a is 1 or 3/2. Therefore, for some e(F) that is of order one, we can write

E{h(F̂, F; β̂) | F} = e(F) n^{-a}.   (3.32)
To correct for this bias we introduce the ideal perturbation γ = c_n(F), which modifies b(F) to b(F, γ) in order to achieve

E[h{F̂, F; b(F̂, γ)} | F] = 0.   (3.33)
There is usually more than one way to define b(F, γ), but we shall assume that γ is defined to make b(F, 0) = b(F). The bootstrap estimate for γ is γ̂ = c_n(F̂), which is the solution to

E*[h{F̂*, F̂; b(F̂*, γ)} | F̂] = 0,

and the adjusted value of β̂ is then β̂_adj = b(F̂, γ̂); it is b(F̂*, γ) that requires the second level of resampling.
What we want to see is the effect of substituting β̂_adj for β̂ in (3.32). First we approximate the solution to (3.33). Taylor expansion about γ = 0, together with (3.32), gives

E[h{F̂, F; b(F̂, γ)} | F] = e(F) n^{-a} + d_n(F) γ,   (3.34)

where

d_n(F) = ∂/∂γ E[h{F̂, F; b(F̂, γ)} | F] |_{γ=0}.

Typically d_n(F) ≈ d(F) ≠ 0, so that if we write r(F) = e(F)/d(F), then (3.33) and (3.34) together imply that γ = c_n(F) ≈ −r(F) n^{-a}. This, together with the corresponding approximation for γ̂ = c_n(F̂), gives

γ̂ − γ = −n^{-a} {r(F̂) − r(F)} = −n^{-a-1/2} X_n,

say. The quantity X_n = n^{1/2} {r(F̂) − r(F)} is O_p(1) because F̂ and F differ by O_p(n^{-1/2}). It follows that, because γ = O(n^{-a}),

h{F̂, F; b(F̂, γ̂)} ≈ h{F̂, F; b(F̂, γ)} − n^{-a-1/2} X_n ∂/∂γ h{F̂, F; b(F̂, γ)} |_{γ=0}.   (3.35)
We can now assess the effect of the adjustment from β̂ to β̂_adj. Define the conditional quantity

k_n(X_n) = ∂/∂γ E[h{F̂, F; b(F̂, γ)} | X_n, F] |_{γ=0},

which is O_p(1). Then taking expectations in (3.35) we deduce that, because of (3.34),

E[h{F̂, F; b(F̂, γ̂)} | F] = −n^{-a-1/2} E{X_n k_n(X_n) | F}.   (3.36)
In most applications E{X_n k_n(X_n) | F} = O(n^{-b}) for b = 0 or 1/2, so comparing (3.36) with (3.32) we see that the adjustment does reduce the order of bias, by at least 1/2.

Example 3.19 (Adjusted bias estimate)  In the case of the bias β = E(T | F) − θ, we take h(F̂, F; β) = t(F̂) − t(F) − β and b(F̂, γ) = b(F̂) − γ. In regular problems the bias and its estimate are of order n^{-1}, and in (3.32) a = 2. It is easy to check that d_n(F) = 1, so that X_n = n^{1/2}{e(F̂) − e(F)} and

k_n(X_n) = ∂/∂γ E{t(F̂) − t(F) − (β̂ − γ) | e(F̂), F} |_{γ=0} = 1.
Note that if the next term in expansion (3.34) were O(n^{-a-c}), then the right-hand side of (3.35) would strictly be O(n^{-a-1/2}) + O(n^{-a-c-1/2}). In almost all cases this will lead to the same conclusion.
This implies that

E{X_n k_n(X_n) | F} = n^{1/2} E{e(F̂) − e(F) | F} = O(n^{-1/2}).

Equation (3.36) then becomes E{T − θ − (β̂ − γ̂)} = O(n^{-3}). This generalizes the conclusion of Example 3.18, that the adjusted bootstrap bias estimate β̂ − γ̂ is correct to second order. ■

Further applications of the double bootstrap to significance tests and confidence limits are described in Sections 4.5 and 5.6 respectively.
3.9.2 Variation of properties of T

A somewhat different application of bootstrapping the bootstrap concerns assessment of how the distribution of T depends on the parameters of F. Suppose, for example, that we want to know how the variance of T depends upon θ and other unknown model parameters, but that this variance cannot be calculated theoretically. One possible application is to the search for a variance-stabilizing transformation.

The parametric case does not require nested bootstrap calculations. However, it is useful to outline the approach in a form that can be mimicked in the nonparametric case. The basic idea is to approximate var(T | ψ) = v(ψ) from simulated samples for an appropriately broad range of parameter values. Thus we would select a set of parameter values ψ_1, ..., ψ_K, for each of which we would simulate R samples from the corresponding parametric model, and compute the corresponding R values of T. This would give t*_{k1}, ..., t*_{kR}, say, for the model with parameter value ψ_k. Then the variance v(ψ_k) = var(T | ψ_k) would be approximated by

v̂(ψ_k) = R^{-1} Σ_{r=1}^{R} (t*_{kr} − t̄*_k)²,   (3.37)

where t̄*_k = R^{-1} Σ_{r=1}^{R} t*_{kr}. Plots of v̂(ψ_k) against components of ψ_k can then be used to see how var(T) depends on ψ. Example 2.13 shows an application of this. The same simulation results can also be used to approximate other properties, such as the bias or quantiles of T, or the variance of transformed T. As described here the number of simulated datasets will be RK, but in fact this number can be reduced considerably, as we shall show in Section 9.4.4. The simulation can be bypassed completely if we estimate v(ψ_k) by a delta-method variance approximation v_L(ψ_k), based on the variance of the influence function under the parametric model. However, this will often be impossible.

In the nonparametric case there appears to be a major obstacle to performing calculations analogous to (3.37), namely the unavailability of models corresponding to a series of parameter values ψ_1, ..., ψ_K. But this obstacle can
be overcome, at least partially. Suppose for simplicity th at we have a single sam ple problem , so th a t the E D F F is the fitted m odel, and im agine th at we have draw n R independent b o o tstrap sam ples from this model. These b o o t strap sam ples can be represented by their E D F s F ’, which can be thought o f as the analogues o f param etric m odels defined by R different values o f param eter ip. Indeed the corresponding values o f 9 = t(F) are simply t(F*) = (*, and other com ponents o f ip can be defined sim ilarly using the representation ip = p(F). This gives us the same fram ew ork as in the p aram etric case above. F or ex am ple consider variance estim ation. To approxim ate v a r(T ) under param eter value tp* = p(F'), we sim ulate M sam ples from the corresponding m odel F *; calculate the corresponding values o f T , which we denote by , m = 1 ,..., M ; and then calculate the analogue o f (3.37), M K = v(Wr) = M ~ l
“ fr*)2,
(3.38)
m=1 with t ’’ = M ~ l E m =i Cm- T he scatter plot o f v’ against t* will then be a proxy for the ideal plot o f v a r(T | ip) against 6, an d sim ilarly for o ther plots. Example 3.20 (City population data) Figure 3.7 shows the results o f the double b o o tstrap procedure outlined above, for the ratio estim ator applied to the d a ta in Table 2.1, w ith n = 10. The left panel shows the bias b’ estim ated using M = 50 second-level b o o tstrap sam ples from each o f R = 999 first-level b o o tstrap samples. The right panel shows the corresponding stan d ard errors * 112 vr . The lines from applying a locally w eighted robust sm oother confirm the clear increase w ith the ratio in each panel. The lim plication o f Figure 3.7 is th a t the bias and variance o f the ratio are no t stable w ith n = 10. Confidence intervals for the true ratio 9 based on norm al approxim ations to the distrib u tio n o f T — 9 will therefore be poor, as will basic b o o tstra p confidence intervals, and those based on related quantities such as the studentized b o o tstrap are suspect. A reasonable in terpretation o f the right panel is th a t v a r(T ) oc 92, so th a t log T should be m ore stable. ■ The p articu lar application o f variance estim ation can be handled in a sim pler way, a t least approxim ately. I f the n o nparam etric delta m ethod variance approxim ation vL (Sections 2.7.2 an d 3.2.1) is fairly accurate, which is to say if the linear ap proxim ation (2.35) or (3.1) is accurate, then v'r = v(tp') can be estim ated by v l = vl ( f ;). Example 3.21 (Transformed correlation) A n exam ple where simple b o otstrap m ethods tend to perform badly w ithout the (explicit o r im plicit) use o f tran s form ation is the correlation coefficient. F or a sam ple o f size n = 20 from a bivariate norm al distribution, w ith sam ple correlation t = 0.74, the left panel
Figure 3.7 Bias and standard error estimates for the ratio applied to the city population data, n = 10. For each of R = 999 bootstrap samples from the data, M = 50 second-level samples were drawn, and the resulting bias and standard error estimates b*_r and v*_r^{1/2} plotted against the bootstrapped ratio t*. The lines are from a robust nonparametric curve fit to the simulations.
Figure 3.8 Scatter plot of v*_L versus t* for nonparametric simulation from a bivariate normal sample of size n = 20, with R = 999. The left panel is for t the sample correlation, with dotted line showing the theoretical relationship. The right panel is for the transformed sample correlation.
of Figure 3.8 contains a scatter plot of v*_L versus t* from R = 999 nonparametric simulations: the dotted line is the approximate normal-theory relationship var(T) = n^{-1}(1 − θ²)². The plot correctly shows strong instability of variance. The right panel shows the corresponding plot for bootstrapping the transformed estimate (1/2) log{(1 + t)/(1 − t)}, whose variance is approximately n^{-1}: here v*_L is computed as in Example 2.18. The plot correctly suggests quite stable variance. ■
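In code, the nested variance estimate (3.38) amounts to one inner loop per first-level resample. The sketch below — synthetic exponential data and the sample mean as statistic, chosen purely for illustration — produces the pairs (t*_r, v*_r) whose scatter plot proxies var(T | ψ) against θ:

```python
# Illustrative sketch of the nested-bootstrap variance estimate (3.38):
# each first-level resample plays the role of a fitted model, and M
# second-level samples from it give v*_r.  All settings are illustrative.
import numpy as np

rng = np.random.default_rng(7)
y = rng.exponential(size=40)
n, R, M = len(y), 100, 25

def t_stat(z):
    return z.mean()              # toy statistic

t1 = np.empty(R)                 # first-level t*_r
v2 = np.empty(R)                 # nested variance estimates v*_r
for r in range(R):
    ystar = rng.choice(y, size=n, replace=True)
    t1[r] = t_stat(ystar)
    tt = np.array([t_stat(rng.choice(ystar, size=n, replace=True))
                   for _ in range(M)])
    v2[r] = np.mean((tt - tt.mean()) ** 2)   # (3.38)

# a scatter plot of v2 against t1 approximates the variance function
```

For the mean of unit-exponential data the points should scatter around var(T) ≈ θ²/n, i.e. increase with t*.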
As presented here the selection of parameter values ψ*_r is completely random, and R would need to be moderately large (at least 50) to get a reasonable spread of values of ψ*_r. The total number of samples, RM + R, will then be very large. It is, however, possible to improve upon the algorithm; see Section 9.4.4. Another important problem is the roughness of variance estimates, apparent in both of the preceding examples. This is due not just to the size of M, but also to the noise in the EDFs F̂* being used as models.

Frequency smoothing

One major difference between the parametric and nonparametric cases is that the parametric models vary smoothly with parameter values. A simple way to inject such smoothness into the nonparametric "models" F̂* is to smooth them. For simplicity we consider the one-sample case. Let w(·) be a symmetric density with mean zero and unit variance, and consider the smoothed frequencies
f̃_j(θ; ε) ∝ Σ_{r=1}^{R} f*_{rj} w{(t*_r − θ)/ε},   j = 1, ..., n,   (3.39)

where f*_{rj} is the frequency of y_j in the rth bootstrap sample.
Here ε > 0 is a smoothing parameter that determines the effective range of values of t* over which the frequencies are smoothed. As is common with kernel smoothing, the value of ε is more important than the choice of w(·), which we take to be the standard normal density. Numerical experimentation suggests that close to θ = t, values of ε in the range 0.2v^{1/2} to 1.0v^{1/2} are suitable, where v is an estimated variance for t. We choose the constant of proportionality in (3.39) to ensure that Σ_j f̃_j(θ, ε) = n. For a given ε, the relative frequencies n^{-1} f̃_j(θ, ε) determine a distribution F̃_θ, for which the parameter value is θ* = t(F̃_θ); in general θ* is not equal to θ, although it is usually very close.

Example 3.22 (City population data)  In continuation of Example 3.20, the top panels of Figure 3.9 show the frequencies f*_j for four samples with values of t* very close to 1.6. The variation in the f*_j leads to the variability in both b* and v* that shows so clearly in Figure 3.7. The lower panels show the smoothed frequencies (3.39) for distributions F̃_θ with θ = 1.2, 1.52, 1.6, 1.9 and ε = 0.2v^{1/2}. The corresponding values of the ratio are θ* = 1.23, 1.51, 1.59, and 1.89. The observations with the smallest empirical influence values are more heavily weighted when θ is less than the original value of the statistic, t = 1.52, and conversely. The third panel, for θ = 1.6, results from averaging frequencies including those shown in the upper panels, and the distribution is much smoother than those. The results are not very sensitive to the value of ε, although the tilting of the frequencies is less marked for larger ε.

The smoothed frequencies can be used to assess how the bias and variance
Figure 3.9 Frequencies for city population data. The upper panels show frequencies f*_j for four samples with values of t* close to 1.6, plotted against empirical influence values l_j for the ratio. The lower panels show smoothed frequencies f̃_j(θ, ε) for distributions F̃_θ with θ = 1.2, 1.52, 1.6, 1.9 and ε = 0.2v^{1/2}.
of T depend on θ. For each of a range of values of θ, we generate samples from the multinomial distribution F̃_θ with expected frequencies (3.39), and calculate the corresponding values of t*, t*_r(θ) say. We then estimate the bias for sampling from F̃_θ by t̄*(θ) − θ*, where t̄*(θ) is the average of the t*_r(θ). The variance is estimated similarly. The top panel of Figure 3.10 shows values of t*(θ) plotted against jittered values of θ for 100 samples generated from F̃_θ at θ = 1.2, ..., 1.9; we took ε = 0.2. The lower panels show that the corresponding biases and standard deviations, which are connected by the rougher solid lines, compare well with the double bootstrap results. The amount of computation is much less, however. The smoothed estimates are based on 1000 samples to estimate the F̃_θ, and then 100 samples at each of the eight chosen values of θ, whereas the double bootstrap required about 25 000 samples. ■

Other applications of (3.39) are described in Chapters 9 and 10.

Variance stabilization

Experience suggests that bootstrap methods for confidence limits and significance tests based on estimators T are most effective when θ is essentially a location parameter, which is approximately induced by a variance-stabilizing transformation. Ideally such a transformation would be derived theoretically from (2.14) with variance function v(θ) = var(T | F). In a nonparametric setting a suitable transformation may sometimes be suggested by analogy with a parametric problem, as in Example 3.21. If not, a transformation can be obtained empirically using the double bootstrap estimates of variance discussed earlier in the section.

Suppose that we have bootstrap samples F̂*_r = (y*_{r1}, ..., y*_{rn}) and the corresponding statistics t*_r, for r = 1, ..., R.

Figure 3.10 Use of smoothed nonparametric distributions to estimate bias and standard deviation functions for the ratio of the city population data. The top panel shows 100 bootstrapped ratios calculated from samples generated from F̃_θ, for each of θ = 1.2, ..., 1.9; for clarity the θ values are jittered. The lower panels show 200 of the points from Figure 3.7 and the estimated bias and standard deviation functions from that figure (smooth curves), with the biases and standard deviations estimated from the top panel (rougher curves).

Without loss of generality, suppose that t*_1 < ··· < t*_R. One way to implement empirical variance-stabilization is to choose R_1 of the t*_r that are roughly evenly spaced and that include t*_1 and t*_R. For each of the corresponding F̂*_r we then generate M bootstrap values t**, from which we estimate the variance of t*_r to be v*_r as defined in (3.38). We now smooth a plot of the v*_r against the t*_r, giving an estimate v̂(θ) of the variance var(T | F) as a function of the parameter θ = t(F), and integrate numerically to obtain the estimated variance-stabilizing transformation
h(t) = ∫^t dθ / {v̂(θ)}^{1/2}.   (3.40)
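The smoothing-and-integration recipe for (3.40) can be sketched in a few lines. In the sketch below a quadratic fit to log v*_r stands in for whichever smoother is preferred, and the inputs are synthetic (t*_r, v*_r) pairs with var(T) roughly proportional to θ², so the estimated transformation should come out close to logarithmic; all names are illustrative:

```python
# Illustrative sketch of the empirical variance-stabilizing
# transformation (3.40): smooth log v*_r against t*_r, then integrate
# {v(theta)}^{-1/2} numerically by the trapezoidal rule.
import numpy as np

def variance_stabilizer(t1, v2, grid_size=200):
    # fit a smooth curve to log v*_r, which keeps v(theta) positive
    coef = np.polyfit(t1, np.log(v2), deg=2)
    grid = np.linspace(t1.min(), t1.max(), grid_size)
    vhat = np.exp(np.polyval(coef, grid))
    # h(t) = integral up to t of v(theta)^{-1/2} d theta
    integrand = 1.0 / np.sqrt(vhat)
    h = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))))
    return grid, h

rng = np.random.default_rng(3)
t1 = np.sort(rng.uniform(1.0, 2.0, size=60))
v2 = 0.04 * t1 ** 2 * rng.uniform(0.8, 1.25, size=60)  # var roughly prop. to theta^2
grid, h = variance_stabilizer(t1, v2)
```

Fitting on the log scale, as recommended below, avoids negative variance estimates; the returned h is monotone increasing by construction.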
In general, but especially for small R_1, it will be better to fit a smooth curve to values of log v*_r, in part to avoid negative estimates v̂(θ). Provided that a suitable smoothing method is used, inclusion of t*_1 and t*_R in the set for which the v*_r are estimated implies that all the transformed values h(t*_r) can be calculated. The transformed estimator h(T) should have approximately unit variance.

Any of the common smoothers can be used to obtain v̂(θ), and simple integration algorithms can be used for the integral (3.40). If the nested bootstrap is used only to obtain the variances of R_1 of the t*_r, the total number of bootstrap samples required is R + M R_1. Values of R_1 and M in the ranges 50-100 and 25-50 will usually be adequate, so if R = 1000 the overall number of bootstrap samples required will be 2250-6000. If variance estimates for all the t*_r are available, for example nonparametric delta method estimates, then the delta method shows that approximate standard errors for the h(t*_r) will be v*_r^{1/2}/v̂(t*_r)^{1/2}; a plot of these against t*_r will provide a check on the adequacy of the transformation. The same procedure can be applied with second-level resampling done from smoothed frequencies, as in Example 3.22.

Example 3.23 (City population data)  For the city population data of Example 2.8 the parameter of interest is the ratio θ, which is estimated by t = x̄/ū. Figure 3.7 shows that the variance of T depends strongly on θ. We used the procedure outlined above to estimate a transformation based on R = 999 bootstrap samples, with R_1 = 50 and M = 25. The transformation is shown in the left panel of Figure 3.11: the right panel shows the standard errors v*^{1/2}/v̂(t*)^{1/2} of the h(t*). The transformation has been largely successful in stabilizing the variance.
In this case the variances v*_{Lr} based on the linear approximation are readily calculated, and the transformation could have been estimated from them rather than from the nested bootstrap. ■
3.10 Bootstrap Diagnostics

3.10.1 Jackknife-after-bootstrap

Sensitivity analysis is important in understanding the implications of a statistical calculation. A conclusion that depended heavily on just a few observations would usually be regarded as more tentative than one supported by all the data. When a parametric model is fitted, difficulties can be detected by a wide range of diagnostics, careful scrutiny of which is part of a parametric bootstrap analysis, as of any parametric modelling. But if a nonparametric bootstrap is used, the EDF F̂ is in effect the model, and there is no baseline against which
to compare outliers, for example. In this situation we must focus on the effect of individual observations on bootstrap calculations, to answer questions such as "would the confidence interval differ greatly if this point were removed?", or "what happens to the significance level when this observation is deleted?"
Nonparametric case

Once a nonparametric resampling calculation has been performed, a basic question is how it would have been different if an observation, y_j say, had been absent from the original data. For example, it might be wise to check whether or not a suspicious case has affected the quantiles used in a confidence interval calculation. The obvious way to assess this is to do a further simulation from the remaining observations, but this can be avoided. This is because a resample in which y_j does not appear can be thought of as a random sample from the data with y_j excluded. Expressed formally, if J* is sampled uniformly from {1, ..., n}, then the conditional distribution of J* given that J* ≠ j is the same as the distribution of I*, where I* is sampled uniformly from {1, ..., j−1, j+1, ..., n}. The probability that y_j is not included in a bootstrap sample is (1 − n^{-1})^n ≈ e^{-1}, so the number of simulations R_{-j} that do not include y_j is roughly equal to Re^{-1} = 0.368R. So we can measure the effect of y_j on the calculations by comparing the full simulation with the subset of t*_1, ..., t*_R obtained from bootstrap samples where y_j does not occur. In terms of the frequencies f*_{rj}, which count the number of times y_j appears in the rth simulation, we simply restrict attention to replicates with f*_{rj} = 0. For example, the effect of y_j on the bias estimate B can be
Figure 3.11 Variance stabilization for the city population ratio. The left panel shows the empirical transformation ĥ(·), and the right panel shows the standard errors v*^{1/2}/{v̂(t*)}^{1/2} of the h(t*), with a smooth curve.
Table 3.10 Measurements on the head breadth and length of the first two adult sons in 25 families (Frets, 1921).
Family   First son       Second son
         Len    Brea     Len    Brea
  1      191    155      179    145
  2      195    149      201    152
  3      181    148      185    149
  4      183    153      188    149
  5      176    144      171    142
  6      208    157      192    152
  7      189    150      190    149
  8      197    159      189    152
  9      188    152      197    159
 10      192    150      187    151
 11      179    158      186    148
 12      183    147      174    147
 13      174    150      185    152
 14      190    159      195    157
 15      188    151      187    158
 16      163    137      161    130
 17      195    155      183    158
 18      186    153      173    148
 19      181    145      182    146
 20      175    140      165    137
 21      192    154      185    152
 22      174    143      178    147
 23      176    139      176    143
 24      197    167      200    158
 25      190    163      187    150
measured by the scaled difference

n(B_{-j} − B) = n ( R_{-j}^{-1} Σ_{r: f*_{rj}=0} (t*_r − t_{-j}) − R^{-1} Σ_{r=1}^{R} (t*_r − t) ),   (3.41)
where B_{-j} is the bias estimate from the resamples in which y_j does not appear, and t_{-j} is the value of t when y_j is excluded from the original data. Such calculations are applications of the jackknife method described in Section 2.7.3, so the technique applied to bootstrap results is called the jackknife-after-bootstrap. The scaling factor n in (3.41) is not essential.

A useful diagnostic is the plot of jackknife-after-bootstrap measures such as (3.41) against empirical influence values, possibly standardized. For this purpose any of the approximations to empirical influence values described in Section 2.7 can be used. The next example illustrates a related plot that shows how the distribution of t* − t changes when each observation is excluded.

Example 3.24 (Frets' heads)  Table 3.10 contains data on the head breadth and length of the first two adult sons in 25 families. The correlations among the log measurements are given below the diagonal in Table 3.11. The values above the diagonal are the partial correlations. For example, the value 0.13 in the second row is the correlation between the log head breadth of the first son, b_1, and the log head length of the second son, l_2, after allowing for the other variables. In effect, this is the correlation between the residuals from separate regressions of b_1 and l_2 on the other two variables. The correlations are all large, but four of the partial correlations are small, which suggests the simple interpretation that each of the four pairs of measurements for first and second sons is independent conditionally on the values of the other two measurements.
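In code, the jackknife-after-bootstrap needs only the matrix of resampling indices (equivalently the frequencies f*_{rj}). The sketch below — toy normal data and the sample mean as statistic, with all names illustrative — computes the scaled differences (3.41) for every case without any new simulation:

```python
# Illustrative sketch of the jackknife-after-bootstrap: replicates in
# which y_j never appears behave like a bootstrap from the data with
# y_j deleted, so the original simulation is simply subsetted.
import numpy as np

rng = np.random.default_rng(11)
y = rng.normal(size=25)
n, R = len(y), 999

def t_stat(z):
    return z.mean()

idx = rng.integers(0, n, size=(R, n))       # resampling indices
tstar = np.array([t_stat(y[ix]) for ix in idx])
t = t_stat(y)

def jab_measure(j):
    keep = ~(idx == j).any(axis=1)          # replicates with f*_{rj} = 0
    t_minus_j = t_stat(np.delete(y, j))
    B_minus_j = tstar[keep].mean() - t_minus_j
    B = tstar.mean() - t
    return n * (B_minus_j - B)              # scaled difference (3.41)

jab = np.array([jab_measure(j) for j in range(n)])
```

About 0.368R of the replicates should exclude each case, as noted above, so each subset is large enough to estimate quantiles as well as the bias.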
Table 3.11 Correlations (below diagonal) and partial correlations (above diagonal) for log measurements on the head breadth and length of the first two adult sons in 25 families.

                         First son           Second son
                         Length   Breadth    Length   Breadth
First son   Length         --      0.43       0.21     0.17
            Breadth       0.75      --        0.13     0.22
Second son  Length        0.72     0.70        --      0.64
            Breadth       0.72     0.72       0.85      --
We focus on the partial correlation t = 0.13 between log b_1 and log l_2. The top panel of Figure 3.12 shows a jackknife-after-bootstrap plot for t, based on 999 bootstrap samples. The points at the left-hand end show the empirical 0.05, 0.1, 0.16, 0.5, 0.84, 0.9, and 0.95 quantiles of the values of t* − t̄*_{-2} for the 368 bootstrap samples in which case 2 was not selected; t̄*_{-2} is the average of t* for those samples. The dotted lines are the corresponding quantiles for all 999 values of t* − t. The distribution is clearly much more peaked when case 2 is left out. The panel also contains the corresponding quantiles when other cases are excluded. The horizontal axis shows the empirical influence values for t: clearly putting more weight on case 2 sharply decreases the value of t.

The lower left panel of the figure shows that case 2 lies somewhat away from the rest, and the plot of residuals for the regressions of log b_1 and log l_2 on (log b_2, log l_1) in the lower right panel accounts for the jackknife-after-bootstrap results. Case 2 seems outlying relative to the others: deleting it will clearly increase t substantially. The overall average and standard deviation of the t* are 0.14 and 0.23, changing to 0.34 and 0.17 when case 2 is excluded. The evidence against zero partial correlation depends heavily on case 2. ■

Another version of the diagnostic plot uses case-deletion averages of the t*, i.e. t̄*_{-j} = R_{-j}^{-1} Σ_{r: f*_{rj}=0} t*_r, instead of the empirical influence values. This more clearly reveals how the quantity of interest varies with parameter values.

Parametric case

In the parametric case different calculations are needed, because random samples from a case-deletion model are not simply an unweighted subset of the original bootstrap samples.
Nevertheless, those original bootstrap samples can still be used if we make use of the following identity relating expectations under two different parameter values:

E{h(Y) | ψ'} = E[ h(Y) f(Y | ψ') / f(Y | ψ) | ψ ].   (3.42)
Suppose that the full-data estimate (e.g. maximum likelihood estimate) of the model parameter is ψ̂, and that when case j is deleted the corresponding estimate is ψ̂_{-j}. The idea is to use (3.42) with ψ̂ and ψ̂_{-j} in place of ψ and ψ',
Figure 3.12 Jackknife-after-bootstrap analysis for the partial correlation between log b_1 and log l_2 for Frets' heads data. The top panel shows 0.05, 0.1, 0.16, 0.5, 0.84, 0.9 and 0.95 empirical quantiles of t* − t̄*_{-j} when each of the cases is dropped from the bootstrap calculation in turn. The lower panels show scatter plots of the raw values of log b_1 and log l_2, and of their residuals when regressed on the other two variables.
respectively. The parametric analogue of (3.41) is then

n(B_{-j} − B) = n ( R^{-1} Σ_{r=1}^{R} (t*_r − t_{-j}) f(y*_r | ψ̂_{-j}) / f(y*_r | ψ̂) − R^{-1} Σ_{r=1}^{R} (t*_r − t) ),

where the samples y*_r are drawn from the full-data fitted model, that is with parameter value ψ̂. Similar weighted calculations apply to other features of the
distribution of T* − t; see Problem 3.20. Other applications of the importance reweighting identity (3.42) will be discussed in Chapter 9.
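A minimal sketch of this reweighting, for an illustrative normal-mean model with known unit variance (none of this specific setup appears in the text): samples are drawn once under the full-data fit ψ̂ = ȳ, and case-deletion expectations are obtained by likelihood-ratio weighting as in (3.42):

```python
# Illustrative sketch of the reweighting identity (3.42) for a
# parametric bootstrap: resamples drawn under the full-data fit are
# reweighted by f(y* | psi_hat_{-j}) / f(y* | psi_hat).  The normal
# mean model with unit variance is purely an illustration.
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(loc=1.0, size=20)
n, R = len(y), 2000
mu_hat = y.mean()                              # full-data estimate

ystar = rng.normal(loc=mu_hat, size=(R, n))    # parametric resamples under mu_hat
tstar = ystar.mean(axis=1)

def reweighted_bias(j):
    mu_j = np.delete(y, j).mean()              # case-deletion estimate
    # log likelihood ratio under mu_j versus mu_hat (normal, sigma = 1)
    logw = (0.5 * (ystar - mu_hat) ** 2
            - 0.5 * (ystar - mu_j) ** 2).sum(axis=1)
    w = np.exp(logw)
    # approximates E{T* - t_{-j}} under the case-deletion model, via (3.42)
    return np.mean((tstar - mu_j) * w)

b0 = reweighted_bias(0)
```

Since the sample mean is unbiased under the model, the reweighted bias estimate for each case should be close to zero, up to simulation error.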
3.10.2 Linearity

Statistical analysis is simplified when the statistic of interest T is close to linear. In this case the variance approximation v_L will be an accurate estimate of the bootstrap variance var(T | F̂), and saddlepoint methods (Section 9.5) can be applied to obtain accurate estimates of the distribution of t*, without recourse to simulation. A linear statistic is not necessarily close to normally distributed, as Example 2.3 illustrates. Nor does linearity guarantee that T is directly related to a pivot and therefore useful in finding confidence intervals. On the other hand, experience from other areas in statistics suggests that these three properties will often occur together.

This suggests that we aim to find a transformation h(·) such that h(T) is well described by the linear approximation that corresponds to (2.35) or (3.1). For simplicity we focus on the single-sample case here. The shape of h(·) would be revealed by a plot of h(t*) against t*, but of course this is not available because h(·) is unknown. However, using Taylor approximation and (2.44) we do have

h(t*) ≈ h(t*_L) = h(t) + ḣ(t) n^{-1} Σ_{j=1}^{n} f*_j l_j = h(t) + ḣ(t)(t*_L − t),

which shows that t*_L = c + d h(t*) with appropriate definitions of constants c and d. Therefore a plot of the values of t*_L = t + n^{-1} Σ_j f*_j l_j against the t* will look roughly like h(·), apart from a location and scale shift. We can now estimate h(·) from this plot, either by fitting a particular parametric form, or by nonparametric curve estimation.

Example 3.25 (City population data)  The top left panel of Figure 3.13 shows t*_L plotted against t* for 499 bootstrap replicates of the ratio t = x̄/ū for the data in Table 2.1. The plot is highly nonlinear, and the logarithmic transformation, or one even more extreme, seems appropriate.
Note that the plot has shape similar to that for the empirical variance-stabilizing transformation in Figure 3.11. For a parametric transformation, we try a Box-Cox transformation, h(t) = (t^λ − 1)/λ, with the value of λ estimated by maximizing the log likelihood for the regression of the h(t*_r) on the t*_{Lr}. This strongly suggests that we use λ = −2, for which the fitted curve is shown as the solid line on the plot. This is close to the result for a smoothing spline, shown as the dotted line. The top right panel shows the linear approximation for h(t*), i.e. h(t) + ḣ(t) n^{-1} Σ_{j=1}^{n} f*_j l_j, plotted against h(t*). This plot is close to the line with unit gradient, and confirms the results of the analysis of transformations.
ḣ(t) is dh(t)/dt.
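The diagnostic is straightforward to compute once the empirical influence values are available; for the ratio t = x̄/ū the standard formula is l_j = (x_j − t u_j)/ū. The sketch below uses synthetic pairs (u_j, x_j) — all data and names are illustrative — to produce the (t*, t*_L) pairs whose plot estimates the linearizing transformation:

```python
# Illustrative sketch of the linearity diagnostic: bootstrap the ratio,
# record the frequencies f*_j, and compare each t* with its linear
# approximation t*_L = t + n^{-1} sum_j f*_j l_j.
import numpy as np

rng = np.random.default_rng(9)
u = rng.uniform(1.0, 3.0, size=10)
x = 1.5 * u * rng.uniform(0.8, 1.2, size=10)   # synthetic pairs
n, R = len(u), 499

t = x.mean() / u.mean()
l = (x - t * u) / u.mean()        # empirical influence values for the ratio

tstar = np.empty(R)
tL = np.empty(R)
for r in range(R):
    ix = rng.integers(0, n, size=n)
    f = np.bincount(ix, minlength=n)          # frequencies f*_j
    tstar[r] = x[ix].mean() / u[ix].mean()
    tL[r] = t + (f * l).sum() / n

# a plot of tL against tstar reveals the shape of h, up to location/scale
```

For a statistic as nonlinear as the ratio the points bend away from the diagonal, which is exactly what the top left panel of Figure 3.13 displays.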
Figure 3.13 Linearity transformation for the ratio applied to the city population data. The top left panel shows linear approximations t*_L plotted against bootstrap replicates t*, with the estimated parametric transformation (solid) and a transformation estimated by a smoothing spline (dots). The top right panel shows the same plot on the transformed scale. The lower left panel shows the plot for the studentized bootstrap statistic. The lower right panel shows a normal Q-Q plot of the studentized bootstrap statistic for the transformed values h(t*).
The lower panels show related plots for the studentized bootstrap statistics on the original scale and on the new scale,

z* = (t* − t) / v_L*^{1/2},    z_h* = {h(t*) − h(t)} / {ḣ(t) v_L*^{1/2}},

where v_L* = n^{-2} Σ_j f_j* l_j². The left panel shows that, like t*, z* is far from linear. The lower right panel shows that the distribution of z_h* is fairly close to standard normal, though there are some outlying values. The distribution of z* is far from normal, as shown by the right panel of Figure 2.5. It seems that, here, the transformation that gives approximate linearity of t* also
3 ■Further Ideas
makes the corresponding studentized bootstrap statistic roughly normal. The transformation based on the smoothing spline would give similar results. ■
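The linearity diagnostic of this section is straightforward to compute. The sketch below, in Python, bootstraps a ratio estimator and forms the linear approximations t_L* = t + n^{-1} Σ_j f_j* l_j from the empirical influence values; the data here are invented stand-ins for the city population pairs, so the numbers are purely illustrative, not the book's calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy (u, x) pairs standing in for the city population data of Table 2.1;
# these values are illustrative assumptions, not the book's data
n = 10
u = rng.gamma(3.0, 30.0, n)
x = u * rng.lognormal(0.2, 0.3, n)

t = x.mean() / u.mean()
# empirical influence values for the ratio t = xbar/ubar: l_j = (x_j - t*u_j)/ubar
l = (x - t * u) / u.mean()

R = 499
t_star = np.empty(R)
tL_star = np.empty(R)
for r in range(R):
    idx = rng.integers(0, n, n)            # one bootstrap sample via resampled indices
    t_star[r] = x[idx].mean() / u[idx].mean()
    f = np.bincount(idx, minlength=n)      # bootstrap frequencies f_j*
    tL_star[r] = t + f @ l / n             # t_L* = t + n^{-1} sum_j f_j* l_j

# plotting tL_star against t_star reveals the shape of h(.) up to location and
# scale; marked curvature suggests transforming t, e.g. by a Box-Cox fit
```

A scatterplot of `tL_star` versus `t_star` then plays the role of the top left panel of Figure 3.13.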
3.11 Choice of Estimator from the Data

In some applications we may want to choose an estimator or other procedure after looking at the data, especially if there is considerable prior uncertainty about the nature of random variation or of the form of relationship among variables. The simplest example with homogeneous data involves the choice of estimator for a population mean μ, when empirical evidence suggests that the underlying distribution F has long, non-normal tails.

Suppose that T(1), ..., T(K) can all be considered potentially suitable estimators for μ, and for the moment assume that all are unbiased, which means that the underlying data distribution is symmetric. Then one natural criterion for choice among these estimators is variance or, since their exact variances will be unknown, estimated variance. So if the estimated variance of T(i) is V(i), a natural procedure is to select as estimate for a given dataset that t(i) whose estimated variance is smallest. This defines the adaptive estimator T by

T = T(i)   if   V(i) = min_{1≤k≤K} V(k).

There are two byproducts of this double bootstrap procedure. One is information on how well-determined the choice of estimator is, if this is of interest, simply by examining the relative frequency with which each estimator is chosen. Secondly, the bias of v(i) can be approximated: on the log scale the bias is estimated by R^{-1} Σ_r log v_r* − log v, where v_r* is the smallest value of the v*(i)s in the rth bootstrap sample.

Example 3.26 (Gravity data) Suppose that the data in Table 3.1 were only available as a combined sample of n = 81 measurements. The different dispersions of the ingredient series make the combined sample very non-normal, so that the simple average is a poor estimator of the underlying mean μ. One possible approach is to consider trimmed average estimates
t(k) = (n − 2k)^{-1} Σ_{j=k+1}^{n−k} y_(j),

which are averages after dropping the k smallest and k largest order statistics y_(j). The usual average and sample median correspond respectively to k = 0 and k = ½(n − 1). The left panel of Figure 3.14 plots the trimmed averages against k. The mild downward trend in the plot suggests slight asymmetry of the data distribution. Our aim is to use the bootstrap to choose among the trimmed averages. The trimmed averages will all be unbiased if the underlying data distribution is symmetric, and estimator variance will then be a sensible criterion on which to base choice. The bootstrap procedure must build in the assumed symmetry,
and this can be done (cf. Example 3.4) by simulating samples from a symmetrized version of F̂ such as

F̂_sym(y) = ½ { F̂(y) + 1 − F̂(2μ̂ − y − 0) },
which is simply the EDF of y_1, ..., y_n, μ̂ − (y_1 − μ̂), ..., μ̂ − (y_n − μ̂), with μ̂ an estimate of μ which for this purpose we take to be the sample median. The centre panel of Figure 3.14 shows bootstrap estimates of variance for eleven trimmed averages based on R = 1000 samples drawn from F̂_sym. We conclude from this that k = 36 is best, but that there is little to choose among trimmed averages with k = 24, ..., 40. A similar conclusion emerges if we sample from F̂, although the bootstrap variances are noticeably higher for k > 24.

If symmetry of the underlying distribution were in doubt, then we should take the biases of the estimators into account. One natural criterion then would be mean squared error. In this case our bootstrap samples would be drawn from F̂, and we would select among the trimmed averages on the basis of the bootstrap mean squared error
mse(i) = R^{-1} Σ_{r=1}^R {t_r*(i) − ȳ}².
Note that the mean squared error is measured relative to the mean ȳ of the bootstrap population. The right panel of Figure 3.14 shows the bootstrap mean squared errors for our trimmed averages, and we see that the estimated biases do have an effect: now a value of k nearer 20 would appear to be best. Under the symmetric bootstrap, where the mean of F̂_sym is the sample median because we symmetrized about this point, bootstrap mean squared error equals bootstrap variance.

To focus the rest of the discussion, we shall assume symmetry and therefore choose t to be the trimmed average with k = 36. The value of t is 78.33, and the minimum bootstrap variance based on 1000 simulations is 0.321. We now use the double bootstrap procedure to estimate the variance for t, and to determine appropriate quantiles for t. First we generate R = 1000
Figure 3.14 Trimmed averages and their estimated variances and mean squared errors for the pooled gravity data, based on R = 1000 bootstrap samples, using the ordinary bootstrap (•) and the symmetric bootstrap (o).
samples y_1*, ..., y_81* from F̂_sym. To each of these samples we then apply the original symmetric bootstrap procedure, generating M = 100 samples of size n = 81 from the symmetrized EDF of y_1*, ..., y_81*, choosing t* to be that one of the 11 trimmed averages with smallest value of v*(i). The variance v of t_1*, ..., t_R* equals 0.356, which is 10% larger than the original minimum variance. If we use this variance with a normal approximation to calculate a 95% confidence interval centred on t, the interval is [77.16, 79.50]. This is very similar to the intervals obtained in Example 3.2. The frequencies with which the different trimming proportions are chosen are:

k          12  16  20  24  28  32  36  40
Frequency   1  25  54  96 109 131 498  86

Thus when symmetry of the underlying distribution is assumed, a fairly heavy degree of trimming seems desirable for these data, and the value k = 36 actually chosen seems reasonably well-determined. ■

The general features of this discussion are as follows. We have a set of estimators T(a) = t(a, F̂) for a ∈ A, and for each estimator we have an estimated value C(a, F̂) for a criterion C(a, F) = E{c(T(a), θ) | F} such as variance or mean squared error. The adaptive estimator is T = t(â, F̂), where â = a(F̂) minimizes C(a, F̂) with respect to a. We want to know about the distribution of T, including for example its bias and variance. The distribution of T − θ = t(F̂) − t(F) under sampling from F will be approximated by evaluating it under sampling from F̂. That is, it will be approximated by the distribution of

T* − t = t(F̂*) − t(F̂) = t(â*, F̂*) − t(â, F̂)

under sampling from F̂. Here F̂* is the analogue of F̂ based on y_1*, ..., y_n*: if F̂ is the EDF of the data, then F̂* is the EDF of y_1*, ..., y_n* sampled from F̂. Whether or not the allowance for selection bias is numerically important will depend upon the density of a values and the variability of C(a, F̂).
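The adaptive selection just described can be sketched as follows. The sample below is a toy long-tailed dataset standing in for the pooled gravity measurements, and the grid of k values, R and the random seed are all assumptions made for illustration; the sketch selects a trimmed average by minimum bootstrap variance under the symmetrized EDF.

```python
import numpy as np

rng = np.random.default_rng(2)

def trimmed_mean(y, k):
    # average after dropping the k smallest and k largest order statistics
    ys = np.sort(y)
    return ys[k:len(ys) - k].mean() if k > 0 else ys.mean()

# toy long-tailed sample standing in for the pooled gravity data
y = rng.standard_t(3, 81)
n = len(y)

# symmetrized bootstrap population: EDF of the y_j and 2*mu_hat - y_j,
# with mu_hat the sample median
mu = np.median(y)
y_sym = np.concatenate([y, 2 * mu - y])

R = 500
ks = list(range(0, 41, 4))
var_boot = {}
for k in ks:
    t_star = [trimmed_mean(rng.choice(y_sym, size=n), k) for _ in range(R)]
    var_boot[k] = float(np.var(t_star))

# adaptive estimator: the trimmed average with smallest bootstrap variance
k_best = min(var_boot, key=var_boot.get)
t_adapt = trimmed_mean(y, k_best)
```

A second level of resampling, applying the same selection within each first-level sample, would give the double bootstrap variance for the adaptive estimator.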
3.12 Bibliographic Notes

The extension of bootstrap methods to several unrelated samples has been used by several authors, including Hayes, Perl and Efron (1989) for a special contrast-estimation problem in particle physics; the application is discussed also in Efron (1992) and in Practical 3.4. A general theoretical account of estimation in semiparametric models is given in the book by Bickel et al. (1993). The majority of applications of semiparametric models are in regression; see the references for Chapters 6 and 7.
Efron (1979, 1982) suggested and studied empirically the use of smooth versions of the EDF, but the first systematic investigation of smoothed bootstraps was by Silverman and Young (1987). They studied the circumstances in which smoothing is beneficial for statistics for which there is a linear approximation. Hall, DiCiccio and Romano (1989) show that when the quantity of interest depends on a local property of the underlying CDF, as do quantiles, smoothing can give worthwhile theoretical reductions in the size of the mean squared error. Similar ideas apply to more complex situations such as L_1 regression (De Angelis, Hall and Young, 1993); see however the discussion in Section 6.5. De Angelis and Young (1992) give a useful review of bootstrap smoothing, and discuss the empirical choice of how much smoothing to apply. See also Wang (1995). Romano (1988) describes a problem (estimation of the mode of a density) where the estimator is undefined unless the EDF is smoothed; see also Silverman (1981). In a spatial data problem, Kendall and Kendall (1980) used a form of bootstrap that jitters the observed data, in order to keep the rough configuration of points constant over the simulations; this amounts to sampling without replacement when applying the smoothed bootstrap. Young (1990) concludes that although this approach can outperform the unsmoothed bootstrap, it does not perform so well as the smoothed bootstrap described in Section 3.4.

General discussions of survival data can be found in the books by Cox and Oakes (1984) and Kalbfleisch and Prentice (1980), while Fleming and Harrington (1991) and Andersen et al. (1993) give more mathematical accounts. The product-limit estimator was derived by Kaplan and Meier (1958): it and variants are widely used in practice.
Efron (1981a) proposed the first bootstrap methods for survival data, and discussed the relation between traditional and bootstrap standard errors for the product-limit estimator. Akritas (1986) compared variance estimates for the median survival time from Efron's sampling scheme and a different approach of Reid (1981), and concluded that Efron's scheme is superior. The conditional method outlined in Section 3.5 was suggested by Hjort (1985), and subsequently studied by Kim (1990), who concluded that it estimates the conditional variance of the product-limit estimator somewhat better than does resampling cases. Doss and Gill (1992) and Burr and Doss (1993) give weak convergence results leading to confidence bands for quantiles of the survival time distribution. The asymptotic behaviour of parametric and nonparametric bootstrap schemes for censored data is described by Hjort (1992), while Andersen et al. (1993) discuss theoretical aspects of the weird bootstrap.

The general approach to missing-data problems via the EM algorithm is discussed by Dempster, Laird and Rubin (1977). Bayesian methods using multiple imputation and data augmentation are described by Tanner and Wong (1987)
and Tanner (1996). A detailed treatment of multiple imputation techniques for missing-data problems, with special emphasis on survey data, is given by Rubin (1987). The principal reference for resampling in missing-data problems is Efron (1994), together with the useful, cautionary discussion by D. B. Rubin. The account in Section 3.6 puts more emphasis on careful choice of estimators.

Cochran (1977) is a standard reference on finite population sampling. Variance estimation by balanced subsampling methods was discussed in this context as early as McCarthy (1969), but the first attempt to apply the bootstrap directly was by Gross (1980), who describes what we have termed the "population bootstrap", but restricted to cases where N/n is an integer. This approach was subsequently developed by Bickel and Freedman (1984), while Chao and Lo (1994) also make a case for this approach. Booth, Butler and Hall (1994) describe the construction of studentized bootstrap confidence limits in this context. Presnell and Booth (1994) give a critical discussion of earlier literature and describe the superpopulation bootstrap. The use of modified sample sizes was proposed by McCarthy and Snowden (1985) and the mirror-match method by Sitter (1992). A different approach based on rescaling was introduced by Rao and Wu (1988). A comprehensive theoretical discussion of the jackknife and bootstrap in sample surveys is given in Chapter 6 of Shao and Tu (1995), with later developments described by Presnell and Booth (1994) and Booth, Butler and Hall (1994), on which the account in Section 3.7 is largely based.

Little has been written about resampling hierarchical data, although two relevant references are given in the bibliographic notes for Chapter 7. Related methods for bootstrapping empirical Bayes estimates in hierarchical Bayes models are described by Laird and Louis (1987).
Nonparametric estimation of the CDF for a random effect is discussed by Laird (1978).

Bootstrapping the bootstrap is described by Chapman and Hinkley (1986), and was applied to estimation of variance-stabilizing transformations by Tibshirani (1988). Theoretical aspects of the adjustment of bootstrap calculations were developed by Hall and Martin (1988). See also the bibliographic notes for Chapters 4 and 5. Milan and Whittaker (1995) give a parametric bootstrap analysis of the data in Table 3.10, and discuss the difficulties that can arise when resampling in problems with a singular value decomposition.

Efron (1992) introduced the jackknife-after-bootstrap, and described a variety of ingenious uses for related calculations. Different graphical diagnostics for bootstrap reliability are developed in an asymptotic framework by Beran (1997). The linearity plot of Section 3.10.2 is due to Cook and Weisberg (1994).

Theoretical aspects of the empirical choice of estimator are discussed by Léger and Romano (1990a,b) and Léger, Politis and Romano (1992). Efron (1992) gives an example of choice of the level of trimming of a robust estimator, without double bootstrapping. Some of the general issues, with examples, are discussed by Faraway (1992).
3.13 Problems

1
In a two-sample problem, with data y_ij, j = 1, ..., n_i, i = 1, 2, giving sample averages ȳ_i and variances v_i, describe models for which it would be appropriate to resample the following quantities:
(a) e_ij = y_ij − ȳ_i;
(b) e_ij = (y_ij − ȳ_i)/(1 + n_i^{-1})^{1/2};
(c) e_ij = (y_ij − ȳ_i)/{v_i(1 + n_i^{-1})}^{1/2};
(d) e_ij = ±(y_ij − ȳ_i)/{v_i(1 + n_i^{-1})}^{1/2}, where the signs are allocated with equal probabilities;
(e) e_ij = y_ij/ȳ_i.
In each case say how a simulated dataset would be constructed. What difficulties, if any, would arise from replacing ȳ_i and v_i by more robust estimates of location and scale?
(Sections 3.2, 3.3)

2
A slightly simplified version of the weighted mean of k samples, as used in Example 3.2, is defined by

T = Σ_{i=1}^k ŵ_i Ȳ_i / Σ_{i=1}^k ŵ_i,

where ŵ_i = n_i/σ̂_i², with ȳ_i = n_i^{-1} Σ_j y_ij and σ̂_i² = n_i^{-1} Σ_j (y_ij − ȳ_i)² estimates of the mean μ_i and variance σ_i² of the ith distribution. Show that the influence functions for T are

L_{t,i}(y_i; F) = (w_i / Σ_j w_j) [ y_i − μ_i − (μ_i − θ){(y_i − μ_i)²/σ_i² − 1} ],

where w_i = n_i/σ_i². Deduce that the first-order approximation under the constraint μ_1 = ··· = μ_k for the variance of T is v_L = 1/Σ_i w_i, with empirical analogue v̂_L = 1/Σ_i ŵ_i. Compare this to the corresponding formula based on the unconstrained empirical influence values.
(Section 3.2.1)

3
Suppose that Y is bivariate with polar representation (X, ω), so that Y^T = (X cos ω, X sin ω). If it is known that ω has a uniform distribution on [0, 2π), independent of X, what would be an appropriate resampling algorithm based on the random sample y_1, ..., y_n?
(Section 3.3)
4
Spherical data y_1, ..., y_n are points on the sphere of unit radius. Suppose that it is assumed that these data come from a distribution that is symmetric about the unknown mean direction μ. In light of the symmetry assumption, what would be an appropriate resampling algorithm for simulating data y_1*, ..., y_n*?
(Section 3.3; Ducharme et al., 1985)
5
Two independent random samples y_11, ..., y_{1n_1} and y_21, ..., y_{2n_2} of positive data are obtained, and the ratio of sample means t = ȳ_2/ȳ_1 is used to estimate the corresponding population ratio θ = μ_2/μ_1.
(a) Show that the influence functions for t are

L_{t,1}(y_1; F) = −(y_1 − μ_1)θ/μ_1,    L_{t,2}(y_2; F) = θ(y_2 − μ_2)/μ_2.

The observed value t is then equal to u(p̄), where p̄ = (n^{-1}, ..., n^{-1}) with n = Σ_i n_i. Show that

l_j = (d/dε) u{(1 − ε)p̄ + ε 1_j} |_{ε=0},

where 1_j is the vector with 1 in the (n_{i−1} + j)th position, with n_0 = 0, and zeroes elsewhere. One consequence of this is that v_L = n^{-2} Σ_{j=1}^n l_j². Apply these calculations to the ratio t = ȳ_2/ȳ_1.
(Section 3.2.1)

8
If x_1, ..., x_n is a random sample from some distribution G with density g, suppose that this density is estimated by

ĝ_h(x) = (nh)^{-1} Σ_{j=1}^n w{(x − x_j)/h},

where w is a symmetric PDF with mean zero and variance τ².
(a) Show that this density estimate has mean x̄ and variance n^{-1} Σ(x_j − x̄)² + h²τ².
(b) Show that the random variable X = x_J + hε has PDF ĝ_h, where J is uniformly distributed on {1, ..., n} and ε has PDF w. Hence describe an algorithm for bootstrap simulation from a smoothed version of the EDF.
(c) Show that the rescaled density

g̃_h(x) = (nhb)^{-1} Σ_{j=1}^n w{(x − a − bx_j)/(hb)}

will have the same first two moments as the EDF if a = (1 − b)x̄ and b = {1 + nh²τ²/Σ(x_j − x̄)²}^{-1/2}. What algorithm simulates from this smoothed EDF?
128
(d) Discuss the special problems that arise from using ĝ_h(x) when the range of x is [0, ∞) rather than (−∞, ∞).
(e) Extend the algorithms in (b) and (c) to multivariate x.
(Section 3.4; Silverman and Young, 1987; Wand and Jones, 1995)
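The algorithms of parts (b) and (c) above can be sketched as follows. The data, bandwidth and seed are invented for illustration, and the kernel w is taken to be standard normal so that τ² = 1; these choices are assumptions, not part of the problem statement.

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.exponential(1.0, 200)        # observed sample (illustrative)
n = len(x)
h = 0.2                              # smoothing bandwidth (assumed)
tau2 = 1.0                           # variance of the kernel w; standard normal here

# (b) unshrunk smoothed bootstrap: X* = x_J + h*eps, with J uniform on {1,...,n}
def sample_smooth(size):
    j = rng.integers(0, n, size)
    return x[j] + h * rng.standard_normal(size)

# (c) shrunk version, matching the first two moments of the EDF
S2 = np.mean((x - x.mean()) ** 2)
b = (1 + n * h**2 * tau2 / np.sum((x - x.mean()) ** 2)) ** -0.5
a = (1 - b) * x.mean()
def sample_smooth_shrunk(size):
    j = rng.integers(0, n, size)
    return a + b * x[j] + h * b * rng.standard_normal(size)

big = sample_smooth_shrunk(200000)
# big.mean() should be close to x.mean(), and big.var() close to S2
```

The shrunk sampler has mean x̄ and variance S² exactly, which a large simulated sample confirms to within Monte Carlo error.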
9
Consider resampling cases from censored data (y_1, d_1), ..., (y_n, d_n), where y_1 < ··· < y_n. Let f_j* denote the number of times that (y_j, d_j) occurs in an ordinary bootstrap sample, and let S_j* = f_1* + ··· + f_j*.
(a) Show that when there is no censoring, the product-limit estimate puts mass n^{-1} on each observed failure y_j.

show that

E*(Ȳ*) = ȳ,    E_m{var*(Ȳ* | M)} = ⋯ (1 − f) ⋯ c.

(Section 3.7; Presnell and Booth, 1994)

14
Suppose we wish to perform mirror-match resampling with k independent without-replacement samples of size m, but that k = {n(1 − m/n)}/{m(1 − f)} is not an integer. Let K* be the random variable such that

Pr(K* = k′) = 1 − Pr(K* = k′ + 1) = k′(1 + k′ − k)/k,

where k′ = [k] is the integer part of k. Show that if the mirror-match algorithm is applied for an average Ȳ* with this distribution for K*, then var*(Ȳ*) = (1 − m/n)c/(mk). Show also that under mirror-match resampling with the simplifying assumption that randomization is not required because k is an integer,

E*(C*) = c{1 − ⋯ (k − 1) ⋯},

where C* is the sample variance of the Y_j*. What implications are there for variance estimation for more complex statistics?
(Section 3.7; Sitter, 1992)

15
Suppose that n is a large even integer and that N = 5n/2, and that instead of applying the population bootstrap we choose a population from which to resample according to

𝒴* = { y_1, ..., y_n, y_1, ..., y_n, y_1, ..., y_n,   with probability ½,
       y_1, ..., y_n, y_1, ..., y_n,                  with probability ½.

Having selected 𝒴*, ...

#{A} is the number of elements in the set A.

Since under H_0 the chain starts in equilibrium,

Pr(Y_r* = y | H_0) = Pr(Z_N = y) = f_0(y).
That is, if H_0 is true, then the R replicates and data y are all sampled from f_0, as we require. Moreover, the R replicates of y* are jointly exchangeable with the data under H_0. To see this, we have first that

f(y, y_1*, ..., y_R* | H_0) = f_0(y) Σ_x Pr(Z_0 = x | Z_N = y) Π_{r=1}^R Pr(Z_N = y_r* | Z_0 = x),
using the independence of the replicate simulations from x. But by the definition of the first part of the simulation, where (4.12) applies,

f_0(y) Pr(Z_0 = x | Z_N = y) = f_0(x) Pr(Z_N = y | Z_0 = x),
4.2 • Resampling for Parametric Tests
and so

f(y, y_1*, ..., y_R* | H_0) = Σ_x f_0(x) { Pr(Z_N = y | Z_0 = x) Π_{r=1}^R Pr(Z_N = y_r* | Z_0 = x) },

which is a symmetric function of y, y_1*, ..., y_R*, as required. Given that the data vector and simulated data vectors are exchangeable under H_0, the associated test statistic values (t, t_1*, ..., t_R*) are also exchangeable outcomes under H_0. Therefore (4.11) applies for the P-value calculation.

To complete the description of the method, it remains to define the transition probability matrix Q so that the chain is irreducible with equilibrium distribution f_0(y). There are several ways to do this, all of which use ratios f_0(v)/f_0(u). For example, the Metropolis algorithm starts with a carrier Markov chain on state space ℬ having any symmetric one-step forward transition probability matrix M, and defines a one-step forward transition from state u in the desired Markov chain as follows:

• given we are in state u, select state v with probability m_uv;
• accept the transition to v with probability min{1, f_0(v)/f_0(u)}, otherwise reject it and stay in state u.
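The two steps above can be sketched for a toy unnormalized null distribution. The state space and weights below are invented for illustration; note that only the ratios f_0(v)/f_0(u) ever enter the algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# unnormalized null probabilities f0 on the states {0,...,4};
# normalization is never needed, only ratios f0(v)/f0(u)
weights = np.array([1.0, 2.0, 4.0, 2.0, 1.0])
S = len(weights)

def metropolis_step(u):
    # carrier chain M: symmetric random walk on a circle of states
    v = (u + rng.choice([-1, 1])) % S
    # accept the move with probability min{1, f0(v)/f0(u)}
    if rng.random() < min(1.0, weights[v] / weights[u]):
        return v
    return u                                # otherwise stay at u

# a long run visits each state with frequency proportional to f0
u = 0
counts = np.zeros(S)
for _ in range(100000):
    u = metropolis_step(u)
    counts[u] += 1
freq = counts / counts.sum()                # approaches weights / weights.sum()
```

The proposal (a step of ±1 on a circle) is symmetric and irreducible, so the chain has equilibrium distribution proportional to `weights`, which the empirical visit frequencies confirm.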
It is easy to check that the induced Markov chain has transition probabilities

q_uv = min{1, f_0(v)/f_0(u)} m_uv,   u ≠ v,
q_uu = m_uu + Σ_{v≠u} max{0, 1 − f_0(v)/f_0(u)} m_uv,

and from this it follows that f_0 is indeed the equilibrium distribution of the Markov chain, as required. In applications it is not necessary to calculate the probabilities m_uv explicitly, although the symmetry and irreducibility of the carrier chain must be checked. If the matrix M is not symmetric, then the acceptance probability in the Metropolis algorithm must be modified to min[1, f_0(v)m_vu/{f_0(u)m_uv}].

The crucial feature of the Markov chain method is that f_0 itself is not needed, only ratios f_0(v)/f_0(u) being involved. This means that for conditional tests, where f_0 is the conditional density for Y given S = s, only ratios of the unconditional null density for Y are needed:

f_0(v)/f_0(u) = Pr(Y = v | S = s, H_0) / Pr(Y = u | S = s, H_0) = Pr(Y = v | H_0) / Pr(Y = u | H_0).

This greatly simplifies many applications. The realizations of the Markov chain are symmetrically tied to the artificial starting value x, and this induces a symmetric correlation among (t, t_1*, ..., t_R*).
4 ■ Tests
This correlation depends upon the particular construction of Q, and reduces to zero at a rate which depends upon Q as m increases. While the correlation does not affect the validity of the P-value calculation, it does affect the power of the test: the higher the correlation, the lower the power.

Example 4.3 (Logistic regression) We return to the problem of Example 4.1, which provides a very simple if artificial illustration. The data y are a binary sequence of length n with s ones, and calculations are to be conditional on Σ_j Y_j = s. Recall that direct Monte Carlo simulation is possible, since all (n choose s) possible data sequences are equally likely under the null hypothesis of constant probability of a unit response. One simple Markov chain has one-step transitions which select a pair of subscripts i, j at random, and switch y_i and y_j. Clearly the chain is irreducible, since one can progress from any one binary sequence with s ones to any other. All ratios of null probabilities f_0(v)/f_0(u) are equal to one, since all binary sequences with s ones are equally probable. Therefore if we run the Metropolis algorithm, all switches are accepted. But note that this Markov chain, while simple to implement, is inefficient and will require a large number of steps to induce approximate independence of the t*s. The most effective Markov chain would have one-step transitions which are random permutations, and for this only one step would be required. ■

Example 4.4 (AML data) For data such as those in Example 3.9, consider testing the null hypothesis of proportional hazard functions. Denote the failure times by z_1 < z_2 < ··· < z_n, assuming no ties for the moment, and define r_ij to be the number in group i who were at risk just prior to z_j. Further, let y_j be 0 or 1 according as the failure at z_j is in group 1 or 2, and denote the hazard function at time z for group i by h_i(z). Then
Pr(Y_j = 1) = r_2j h_2(z_j) / {r_1j h_1(z_j) + r_2j h_2(z_j)} = θ_j / (a_j + θ_j),
where a_j = r_1j/r_2j and θ_j = h_2(z_j)/h_1(z_j) for j = 1, ..., n. The null hypothesis of proportional hazards implies the hypothesis H_0 : θ_1 = ··· = θ_n. For the data of Example 3.9, where n = 18, the values of y and a are given in Table 4.2; one tie has been randomly split. Note that censored data contribute only to the rs: the times are not used.

Of course the Y_j s are not independent, because a_j depends upon the outcomes of Y_1, ..., Y_{j−1}. However, for the purposes of illustration here we shall pretend that the a_j s are fixed, as well as the survival times and censoring times. That is, we shall treat the Y_j s as independent Bernoulli variables with probabilities as given above. Under this pretence the conditional likelihood for the data
Table 4.2 Ingredients of the conditional test for proportional hazards. Failure times as in Table 3.4; at time z = 23 the failure in group 2 is taken to occur first.

z     5      5      8      8     9     12    13    18   23   23   27   30   31   33   34   43   45   48
r_1j  11     11     11     11    11    10    10    8    7    7    6    5    5    4    4    3    3    2
r_2j  12     11     10     9     8     8     7     6    6    5    5    4    3    3    2    2    1    0
a     11/12  11/11  11/10  11/9  11/8  10/8  10/7  8/6  7/6  7/5  6/5  5/4  5/3  4/3  4/2  3/2  3/1  ∞
y     1      1      1      1     0     1     0     0    1    0    1    1    0    1    0    1    1    0
is simply

Π_{j=1}^{18} θ_j^{y_j} a_j^{1−y_j} / (a_j + θ_j).
Note that because a_18 = ∞, y_18 must be 0 whatever the value of θ_18, and so this final response is uninformative. We therefore drop y_18 from the analysis. Having done this, we see that under H_0 the sufficient statistic for the common hazard ratio θ is S = Σ Y_j, whose observed value is s = 11.

Whatever the test statistic T, the exact conditional P-value (4.4) must be approximated. Direct simulation appears impossible, but a simple Markov chain simulation is possible. First, the state space of the chain is ℬ = {x = (x_1, ..., x_n) : Σ x_j = s}, that is all permutations of y_1, ..., y_n. For any two vectors x and x̃ in the state-space, the ratio of null conditional joint probabilities is

p(x̃ | s, θ) / p(x | s, θ) = Π_{j=1}^{17} a_j^{x_j − x̃_j}.

We take the carrier Markov chain to have one-step transitions which are random permutations: this guarantees fast movement over the state space. A step which moves from x to x̃ is then accepted with probability

min{ 1, Π_{j=1}^{17} a_j^{x_j − x̃_j} }.

By symmetry the reverse chain is defined in exactly the same way. The test statistic must be chosen to match the particular alternative hypothesis thought relevant. Here we suppose that the alternative is a monotone ratio of hazards, for which T = Σ_{j=1}^{17} Y_j log(z_j) seems to be a reasonable choice. The Markov chain simulation is applied with N = 100 steps back to give the initial state x, and 100 steps forward to state y*, the latter repeated R = 99 times. Of the resulting t* values, 48 are less than or equal to the observed value t = 17.75, so the P-value is (1 + 48)/(1 + 99) = 0.49. Thus there appears to be no evidence against the proportional hazards model. Average acceptance probability in the Metropolis algorithm is approximately 0.7, and results for N = 10 and N = 1000 appear indistinguishable from those for N = 100. This indicates unusually fast convergence for applications of the Markov chain method. ■
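A sketch of this conditional test is given below. The values of z, a and y are transcribed from Table 4.2 as reconstructed here, with the uninformative final case (a = ∞) dropped; the tie split and group coding follow the text, and the numerical value of the statistic depends on those conventions (and on the base of the logarithm), so the sketch does not claim to reproduce t = 17.75 exactly.

```python
import numpy as np

rng = np.random.default_rng(5)

# z_j, a_j and y_j for the 17 informative failures of Table 4.2
z = np.array([5, 5, 8, 8, 9, 12, 13, 18, 23, 23, 27, 30, 31, 33, 34, 43, 45],
             dtype=float)
a = np.array([11/12, 11/11, 11/10, 11/9, 11/8, 10/8, 10/7, 8/6, 7/6, 7/5,
              6/5, 5/4, 5/3, 4/3, 4/2, 3/2, 3/1])
y = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1], dtype=float)
log_a, log_z = np.log(a), np.log(z)

def chain(x, steps):
    # Metropolis chain on permutations of x; a proposed permutation xt is
    # accepted with probability min{1, prod_j a_j^(x_j - xt_j)}
    x = x.copy()
    for _ in range(steps):
        xt = rng.permutation(x)
        if np.log(rng.random()) < (x - xt) @ log_a:
            x = xt
    return x

N, R = 100, 99
t = y @ log_z                          # T = sum_j Y_j log(z_j)
x0 = chain(y, N)                       # N steps from the data to a start x
t_star = [chain(x0, N) @ log_z for _ in range(R)]
p = (1 + sum(ts <= t for ts in t_star)) / (R + 1)
```

Each bootstrap replicate restarts the chain from the common artificial state `x0`, mirroring the parallel method described in the text.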
The use of R conditionally independent realizations of the Markov chain is sometimes referred to as the parallel method. In contrast is the series method, where only one realization is used. Since the successive states of the chain are dependent, a randomization device is needed to induce exchangeability. For details see Problem 4.2.
4.2.3 Parametric bootstrap tests

In many problems, of course, the distribution of T under H_0 will depend upon nuisance parameters which cannot be conditioned away, so that the Monte Carlo test method does not apply exactly. Then the natural approach is to fit the null model F̂_0 and use (4.5) to compute the P-value, i.e. p = Pr(T ≥ t | F̂_0). For example, for the parametric model where we are testing H_0 : ψ = ψ_0 with λ a nuisance parameter, F̂_0 would be the CDF of f(y | ψ_0, λ̂_0), with λ̂_0 the maximum likelihood estimator (MLE) of the nuisance parameter when ψ is fixed equal to ψ_0. Calculation of the P-value by (4.5) is referred to as a bootstrap test.

If (4.5) cannot be computed exactly, or if there is no satisfactory approximation (normal or otherwise), then we proceed by simulation. That is, R independent replicate samples y_1*, ..., y_n* are drawn from F̂_0, and for the rth such sample the test statistic value t_r* is calculated. Then the significance probability (4.5) will be approximated by

p_boot = {1 + #{t_r* ≥ t}} / (R + 1).    (4.13)
Ordinarily one would use a simple proportion here, but we have chosen to make the definition match that for the Monte Carlo test in (4.11).

Example 4.5 (Separate families test) Suppose that we wish to choose between the alternative model forms f_0(y | η) and f_1(y | ζ) for the PDF of the random sample y_1, ..., y_n. In some circumstances it may make sense to take one model, say f_0, as a null hypothesis, and to test this against the other model as alternative hypothesis. In the notation of Section 4.1, the nuisance parameter is λ = (η, ζ) and ψ is the binary indicator of model, with null value ψ_0 = 0 and alternative value ψ_a = 1. The likelihood ratio statistic (4.7) is equivalent to the more convenient form

T = n^{-1} log{L_1(ζ̂)/L_0(η̂)} = n^{-1} Σ_j log{f_1(y_j | ζ̂)/f_0(y_j | η̂)},    (4.14)

where η̂ and ζ̂ are the MLEs and L_0 and L_1 the likelihoods under f_0 and f_1 respectively. If the two families are strictly separate, then the chi-squared approximation (4.8) does not apply. There is a normal approximation for the
(4.14)
where f\ and ( are the M L E s and Lo an d L\ the likelihoods under f o and / 1 respectively. If the tw o families are strictly separate, then the chi-squared approxim ation (4.8) does n o t apply. T here is a norm al approxim ation for the
null distribution o f T , b u t this is often quite unreliable except for very large n. The p aram etric b o o tstrap provides a m ore reliable and simple option. The p aram etric b o o tstrap w orks as follows. We generate R sam ples o f size n by ran d o m sam pling from the fitted null m odel /o (y | fj). For each sample we calculate estim ates fj* and ( ’ by m axim izing the sim ulated log likelihoods
m) = E lo&w i
4>fa) = E lo&w 11)*
and com pute the sim ulated log likelihood ratio statistic
Then we calculate p using (4.13).

As a particular illustration, consider the failure-time data in Table 1.2. Two plausible models for this type of data are gamma and lognormal, that is

$$f_0(y \mid \eta) = \frac{\kappa(\kappa y/\mu)^{\kappa-1}\exp(-\kappa y/\mu)}{\mu\,\Gamma(\kappa)}, \qquad f_1(y \mid \zeta) = \frac{1}{(2\pi\beta^2)^{1/2}\,y}\exp\left\{-\frac{(\log y - \alpha)^2}{2\beta^2}\right\}, \qquad y > 0,$$

with $\eta = (\mu,\kappa)$ and $\zeta = (\alpha,\beta^2)$.
For these data the MLEs of the gamma mean and index are $\hat\mu = \bar y = 108.083$ and $\hat\kappa = 0.707$, the latter being the solution to the equation $\log(\kappa) - h(\kappa) = \log(\bar y) - \overline{\log y}$, where $\overline{\log y}$ and $s^2_{\log y}$ are the average and sample variance of the $\log y_j$,
with $h(\kappa) = d\log\Gamma(\kappa)/d\kappa$, the digamma function. The MLEs of the mean and variance of the normal distribution for $\log Y$ are $\hat\alpha = \overline{\log y} = 3.829$ and $\hat\beta^2 = (n-1)s^2_{\log y}/n = 2.339$. The test statistic (4.14) is

$$t = -\hat\kappa\log(\hat\kappa/\bar y) - \hat\kappa\hat\alpha + \hat\kappa + \log\Gamma(\hat\kappa) - \tfrac12\log(2\pi\hat\beta^2) - \tfrac12,$$

whose value for the data is $t = -0.465$. The left panel of Figure 4.2 shows a histogram of R = 999 values of $t^*$ under sampling from the fitted gamma model: of these, 619 are greater than t and so $\hat p = 0.62$. Note that the histogram has a fairly non-normal shape in this case, suggesting that a normal approximation will not be very accurate. This is true also for the (rather complicated) studentized version Z of T: the right panel of Figure 4.2 shows the normal plot of bootstrap values $z^*$. The observed value of z is 0.4954, for which the bootstrap P-value is 0.34, somewhat smaller than that computed for t, but not changing the conclusion that there is no evidence to change from a gamma to a lognormal model for these data. There are good general reasons to studentize test statistics; see Section 4.4.1.

It should perhaps be mentioned that significance tests of this kind are not always helpful in distinguishing between models, in the sense that we could find evidence against either both or neither of them. This is especially true with small samples such as we have here. In this case the reverse test shows no evidence against the lognormal model. ■
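Once the two fits are in hand, the statistic (4.14) is a simple average of log density ratios. A minimal Python sketch, assuming the gamma and lognormal parametrizations displayed above (the function and its argument names are illustrative, not the authors' code):

```python
import math

def loglik_ratio_stat(y, mu, kappa, alpha, beta2):
    """t = n^{-1} sum_j log{ f1(y_j | alpha, beta2) / f0(y_j | mu, kappa) },
    with f0 a gamma density (mean mu, index kappa) and f1 a lognormal density."""
    def log_f0(x):  # gamma log density with mean mu and index kappa
        return (kappa * math.log(kappa / mu) + (kappa - 1) * math.log(x)
                - kappa * x / mu - math.lgamma(kappa))
    def log_f1(x):  # lognormal log density with parameters alpha, beta2
        return (-0.5 * math.log(2 * math.pi * beta2) - math.log(x)
                - (math.log(x) - alpha) ** 2 / (2 * beta2))
    return sum(log_f1(x) - log_f0(x) for x in y) / len(y)
```

A parametric bootstrap test then refits both models to each sample drawn from the fitted gamma null model and recomputes this statistic.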
4 · Tests
Figure 4.2  Null hypothesis resampling for failure data. Left panel shows histogram of $t^*$ under gamma sampling. Right panel shows normal plot of $z^*$ against quantiles of the standard normal; R = 999 and gamma parameters $\hat\mu = 108.0833$, $\hat\kappa = 0.7065$; dotted line is theoretical N(0,1) approximation.
4.2.4 Graphical tests

Graphical methods are popular in model checking: examples include normal and half-normal plots of residuals in regression, plots of Cook distance in regression, plots of nonparametric hazard function estimates, and plots of intensity functions in spatial analysis (Section 8.3). In many cases the nominal shape of the plot is a straight line, which aids the detection of deviation from a null model. Whatever the situation, informed interpretation of the plot requires some notion of its probable variation under the model being checked, unless the sample size is so large that deviation is obvious (cf. the plot of resampling results in Figure 4.2). The simplest and most common approach is to superimpose a "probable envelope", to which the original data plot is compared. This probable envelope is obtained by Monte Carlo or parametric resampling methods.

Graphical tests are not usually appropriate when a single specific alternative model is of interest. Rather they are used to suggest alternative models, depending upon the manner in which such a plot deviates from its null expected behaviour, or to find suspect data. (Indeed graphical tests are not tests in the usual sense, because there is usually no simple notion of "rejectable" behaviour: we comment more fully on this below.)

Suppose that the graph plots $T(a)$ versus a for $a \in \mathcal{A}$, a bounded set. The observed plot is $\{t(a) : a \in \mathcal{A}\}$. For example, in a normal plot $\mathcal{A}$ is a set of normal quantiles and the values of $t(a)$ are the ordered values of a sample, possibly studentized. The idea of the plot is to compare $t(a)$ with the probable behaviour of $T(a)$ for all $a \in \mathcal{A}$ when $H_0$ is true.

Example 4.6 (Normal plot)
Consider the data in Table 3.1, and suppose in
Figure 4.3  Normal plot of n = 13 studentized values for final sample in Table 3.1.
If $t^* \ge t$ occurred 50 times in the first 100 samples, then it is reasonably certain that $\hat p$ will exceed 0.25, say, for much larger R, so there is little point in simulating further. On the other hand, if we observed $t^* \ge t$ only five times, then it would be worth sampling further to more accurately determine the level of significance.

One effect of not computing p exactly is to weaken the power of the test, essentially because the critical region of a fixed-level test has been randomly displaced. The effect can be quantified approximately as follows. Consider testing at level α, which is to say reject $H_0$ if $p \le \alpha$. If the integer k is chosen equal to $(R+1)\alpha$, then the test rejects $H_0$ when $t^*_{(R+1-k)} < t$. For the alternative hypothesis $H_A$, the power of the test is

$$\pi_R(\alpha, H_A) = \Pr(\text{reject } H_0 \mid H_A) = \Pr(T^*_{(R+1-k)} < T \mid H_A).$$

To evaluate this probability, suppose for simplicity that T has a continuous distribution, with PDF $g_0(t)$ and CDF $G_0(t)$ under $H_0$, and density $g_A(t)$ under $H_A$. Then from the standard result for the PDF of an order statistic we have
$$\pi_R(\alpha, H_A) = \iint_{x < t} R\binom{R-1}{k-1} G_0(x)^{R-k}\{1 - G_0(x)\}^{k-1} g_0(x)\, g_A(t)\, dx\, dt.$$
After a change of variable and some rearrangement of the integral, this becomes

$$\pi_R(\alpha, H_A) = \int_0^1 \pi_\infty(u, H_A)\, h_R(u;\alpha)\, du, \qquad (4.18)$$
where $\pi_\infty(u, H_A)$ is the power of the test using the exact P-value, and $h_R(u;\alpha)$ is the beta density on [0,1] with indices $(R+1)\alpha$ and $(R+1)(1-\alpha)$.

The next part of the calculation relies on $\pi_\infty(\alpha, H_A)$ being a concave function of α, as is usually the case. Then a lower bound for $\pi_\infty(u, H_A)$ is $\pi_{\text{low}}(u, H_A)$, which equals $u\,\pi_\infty(\alpha, H_A)/\alpha$ for $u \le \alpha$ and $\pi_\infty(\alpha, H_A)$ for $u > \alpha$. It follows by applying (4.18) to $\pi_R(\alpha, H_A)$, and some manipulation, that

$$\pi_\infty(\alpha, H_A) - \pi_R(\alpha, H_A) \le \pi_\infty(\alpha, H_A)\,(2\alpha)^{-1}\int_0^1 |u - \alpha|\, h_R(u;\alpha)\, du = \frac{\pi_\infty(\alpha, H_A)\,\alpha^{(R+1)\alpha}(1-\alpha)^{(R+1)(1-\alpha)}\,\Gamma(R+1)}{(R+1)\alpha\,\Gamma\{(R+1)\alpha\}\,\Gamma\{(R+1)(1-\alpha)\}}.$$

We apply Stirling's approximation $\Gamma(x) \doteq (2\pi)^{1/2}x^{x-1/2}\exp(-x)$ for large x to the right-hand side and obtain the approximate bound

$$\pi_\infty(\alpha, H_A) - \pi_R(\alpha, H_A) \le \pi_\infty(\alpha, H_A)\left\{\frac{1-\alpha}{2\pi(R+1)\alpha}\right\}^{1/2}.$$
The following table gives some numerical values of this approximate bound.

  simulation size R             19     39     99     199    499    999    9999
  power ratio for α = 0.05      0.61   0.73   0.83   0.88   0.92   0.95   0.98
  power ratio for α = 0.01      —      —      0.60   0.72   0.82   0.87   0.96
These values suggest that the loss of power with R = 99 is not serious for α ≥ 0.05, and that R = 999 should generally be safe. In fact the values can be quite conservative. For example, for testing a normal mean the power ratios for α = 0.05 are usually above 0.85 and 0.97 for R = 19 and R = 99 respectively.
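A bound of the form $1 - \{(1-\alpha)/(2\pi(R+1)\alpha)\}^{1/2}$ reproduces the tabulated ratios to two decimals, and is easy to evaluate; an illustrative Python check:

```python
import math

def power_ratio_bound(R, alpha):
    """Approximate lower bound on pi_R(alpha)/pi_inf(alpha):
    1 - {(1 - alpha) / (2*pi*(R+1)*alpha)}^(1/2)."""
    return 1 - math.sqrt((1 - alpha) / (2 * math.pi * (R + 1) * alpha))

# Ratios at alpha = 0.05 for a few simulation sizes R.
ratios = {R: round(power_ratio_bound(R, 0.05), 2) for R in (19, 99, 999)}
```

Evaluating for R = 19, 99, 999 at α = 0.05 gives 0.61, 0.83, 0.95, matching the table.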
4.3 Nonparametric Permutation Tests

In many practical situations it is useful to have available statistical methods which do not depend upon specific parametric models, if only in order to provide backup to results of parametric methods. So, with significance testing, it is useful to have nonparametric tests such as the sign test and the signed-rank test for analysing paired data, either to confirm the results of applying the parametric paired t test, or to deal with evident non-normality of the paired differences. Nonparametric tests in general compute significance without assuming forms for the data distributions.

The choice of test statistic will usually be based firmly on the physical context of the problem, possibly reinforced by what we know would be a good choice if a plausible parametric model were applicable. So, in a comparison of two treatments where we believe that treatment effects are additive, it would be reasonable to choose as test statistic the difference of means, especially if we thought that the data distributions were not far from normal; for long-tailed data distributions the difference of medians would be more reasonable from a statistical point of view. If we are concerned about the nonrobustness of means, then we might first convert data values to relative ranks and then use an appropriate rank test.

There is a vast literature on various kinds of nonparametric tests, such as rank tests, U-statistic tests, and distance tests which compare EDFs in various ways. We shall not attempt to review these here. Rather our concern in this chapter is with resampling tests, and the simplest form of nonparametric resampling test is the permutation test. Essentially a permutation test is a comparative test, where the test statistic involves some sort of comparison between EDFs. The special feature of the permutation test is that the null hypothesis implies a reduction of the nonparametric MLE of the data distributions to EDFs which play the role of sufficient statistic S in equation (4.4). The conditional probability distribution
Figure 4.6  Scatter plot of n = 37 pairs of measurements (dnan versus hand) in a study of handedness (provided by Dr Gordon Claridge, University of Oxford).
used in (4.4) is then a uniform distribution over a set of permutations of the data structure. The following example illustrates this.

Example 4.9 (Correlation test)  Suppose that Y = (U, X) is a random pair and that n such pairs are observed. The objective is to see if U and X are independent, this being the null hypothesis $H_0$. An illustrative dataset is plotted in Figure 4.6, where u = dnan is a genetic measure and x = hand is an integer measure of left-handedness. The alternative hypothesis is that x tends to be larger when u is larger. These data are clearly non-normal.

One simple test statistic is the sample correlation, $T = \rho(\hat F)$ say. Note that here the EDF $\hat F$ puts probabilities $n^{-1}$ on each of the n data pairs $(u_i, x_i)$. The correlation is zero for any distribution that satisfies $H_0$. The correlation coefficient for the data in Figure 4.6 is 0.509.

When the form of F is unspecified, $\hat F$ is minimal sufficient for F. Under the null hypothesis, however, the minimal sufficient statistic is comprised of the ordered us and ordered xs, $s = (u_{(1)},\ldots,u_{(n)}, x_{(1)},\ldots,x_{(n)})$, equivalent to the two marginal EDFs. So here a conditional test can be applied, with (4.4) defining the P-value, which will therefore be independent of the underlying marginal distributions of U and X. Now when S is constrained to equal s, the random sample $(U_1,X_1),\ldots,(U_n,X_n)$ is equivalent to $(u_{(1)},X_1^*),\ldots,(u_{(n)},X_n^*)$ with $(X_1^*,\ldots,X_n^*)$ a random permutation of $x_{(1)},\ldots,x_{(n)}$. Further, when $H_0$ is true all such permutations are equally likely, and there are n! of them. Therefore the one-sided P-value is

$$p = \frac{\#\{\text{permutations such that } T^* \ge t\}}{n!}.$$
In evaluating p, we can use the fact that all marginal sample moments
Figure 4.7  Histogram of correlation $t^*$ values for R = 999 random permutations of data in Figure 4.6.
are constant across permutations. This implies that $T^* \ge t$ is equivalent to $\sum x_i^* u_i \ge \sum x_i u_i$.
■
As a practical matter, it is rarely possible or necessary to compute the permutation P-value exactly. Typically a very large number of permutations is involved, for example more than 3 million in Example 4.9 when n = 10. In special cases involving linear statistics there will be theoretical approximations, such as normal approximations or improved versions of these: see Section 9.5. But for general use the most reliable approach is to make use of the Monte Carlo method of Section 4.2.1. That is, we take a large number R of random permutations, calculate the corresponding values $t_1^*,\ldots,t_R^*$ of T, and approximate p by

$$\hat p = \frac{1 + \#\{t_r^* \ge t\}}{R+1}.$$
At least 99 and at most 999 random permutations should suffice.

Example 4.10 (Correlation test, ctd)  For the dataset shown in Figure 4.6, the test of Example 4.9 was implemented by simulation, that is by generating random permutations of the x-values, with R = 999. Figure 4.7 is a histogram of the correlation values. The unshaded part corresponds to the 4 $t^*$ values which are greater than the observed correlation t = 0.509: the P-value is p = (1+4)/(1+999) = 0.005. ■

One feature of permutation tests is that any test statistic is as easy to use as any other, at least in principle. So in the previous example it is just as easy to use the rank correlation (in which the us and xs are replaced by their relative
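The Monte Carlo permutation test of Examples 4.9 and 4.10 takes only a few lines of code. An illustrative Python sketch, using the simplification that comparing correlations is equivalent to comparing $\sum x_i^* u_i$ with $\sum x_i u_i$ (the data used in the check are hypothetical):

```python
import random

def perm_corr_pvalue(u, x, R=999, seed=1):
    """One-sided permutation P-value for positive association, using
    sum(u_i * x_i) as the statistic; since all marginal sample moments are
    permutation-invariant, this is equivalent to using the correlation."""
    rng = random.Random(seed)
    t = sum(a * b for a, b in zip(u, x))
    xs = list(x)
    count = 0
    for _ in range(R):
        rng.shuffle(xs)  # a random permutation of the x-values
        if sum(a * b for a, b in zip(u, xs)) >= t:
            count += 1
    return (1 + count) / (R + 1)
```

For strongly positively associated data the P-value is small; for perfectly negatively associated data every permutation does at least as well, so the P-value is 1.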
ranks), a robust measure of correlation, or a complicated measure of distance between the bivariate EDF $\hat F$ and its null hypothesis version $\hat F_0$ which is the product of the EDFs of u and x. All that is required is that we be able to compute the test statistic for all permutations of the xs.

In the previous example the null hypothesis of independence led unambiguously to a sufficient statistic s and a permutation distribution. More generally the explicit null hypothesis may not be strong enough to do this, unless it can be taken to imply a stronger hypothesis. This depends upon the practical context, as we see in the following example.

Example 4.11 (Comparison of two means)  Suppose that we want to compare the means of two populations, given random samples from each which are denoted by $(y_{11},\ldots,y_{1n_1})$ and $(y_{21},\ldots,y_{2n_2})$. The explicit null hypothesis is $H_0 : \mu_1 = \mu_2$, where $\mu_1$ and $\mu_2$ are the means for the respective populations. Now $H_0$ alone does not reduce the sufficient statistics from the two sets of ordered sample values. However, suppose we believe that the CDFs $F_1$ and $F_2$ have either of the special forms $F_1(y) = G(y - \mu_1)$,
$F_2(y) = G(y - \mu_2)$, or $F_1(y) = G(y/\mu_1)$, $F_2(y) = G(y/\mu_2)$, for some unknown G. Then the null hypothesis implies a common CDF F for the two populations. In this case, the null hypothesis sufficient statistic s is the set of order statistics for the pooled sample

$$u_1 = y_{11},\ \ldots,\ u_{n_1} = y_{1n_1},\ u_{n_1+1} = y_{21},\ \ldots,\ u_{n_1+n_2} = y_{2n_2},$$

that is $s = (u_{(1)},\ldots,u_{(n_1+n_2)})$. Situations where the special forms for $F_1$ and $F_2$ apply would include comparisons of two treatments which were both applied to a random selection of units from a common pool. The special forms would not necessarily apply to sets of physical measurements taken under different experimental conditions or using different apparatus, since then the samples could have unequal variability even though $H_0$ were true.

Suppose that we test $H_0$ by comparing the sample means using test statistic $t = \bar y_2 - \bar y_1$, and suppose that the one-sided alternative $H_A : \mu_2 > \mu_1$ is appropriate. If we assume that $H_0$ implies a common distribution for the $Y_{1j}$ and $Y_{2j}$, then the exact significance probability is given by (4.4), i.e. $p = \Pr(T \ge t \mid S = s, H_0)$. Now when S is constrained to equal s, the concatenation of the two random samples $(Y_{11},\ldots,Y_{1n_1},Y_{21},\ldots,Y_{2n_2})$ must form a permutation of s. The first
$n_1$ components of a permutation will give the first sample and the last $n_2$ components will give the second sample. Further, when $H_0$ is true all such permutations are equally likely, and there are $\binom{n_1+n_2}{n_1}$ distinct such splits. Therefore

$$p = \frac{\#\{\text{permutations such that } t^* \ge t\}}{\binom{n_1+n_2}{n_1}}. \qquad (4.21)$$

As in the previous example, this exact probability would usually be approximated by taking R random permutations of the type described, and applying (4.11).
■
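The random-permutation approximation to (4.21) can be sketched as follows (illustrative Python; the statistic $t = \bar y_2 - \bar y_1$ is as in Example 4.11, but the data in the check are hypothetical):

```python
import random

def perm_two_sample_pvalue(y1, y2, R=999, seed=1):
    """One-sided permutation P-value for H_A: mu2 > mu1 with statistic
    t = ybar2 - ybar1, approximated as in (4.11) by R random reallocations
    of the pooled sample."""
    rng = random.Random(seed)
    pooled = list(y1) + list(y2)
    n1 = len(y1)
    t = sum(y2) / len(y2) - sum(y1) / n1
    count = 0
    for _ in range(R):
        rng.shuffle(pooled)
        g1, g2 = pooled[:n1], pooled[n1:]
        if sum(g2) / len(g2) - sum(g1) / len(g1) >= t:
            count += 1
    return (1 + count) / (R + 1)
```

When the two samples are identical, t = 0 and roughly half (or more, because of ties) of the permuted statistics are at least as large, so the P-value is unremarkable.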
A somewhat more complicated two-sample test problem is provided by the following example.

Example 4.12 (AML data)  Figure 3.3 shows the product-limit estimates of the survivor function for times to remission of two groups of patients with acute myelogenous leukaemia (AML), with one of the groups receiving maintenance chemotherapy. Does this treatment make a difference to survival?

A common test for comparison of estimated survivor functions is based on the log-rank statistic, which compares the actual number of failures in group 1 with its expected value at each time a failure is observed, under the null hypothesis that the survival distributions of the two groups are equal. To be more explicit, suppose that we pool the two groups and obtain ordered failure times $y_1 < \cdots < y_m$, with m < n if there is censoring. Let $f_{1j}$ and $r_{1j}$ be the number of failures and the number at risk of failure in group 1 at time $y_j$, and similarly for group 2. Then the log-rank statistic is

$$T = \frac{\sum_{j=1}^m (f_{1j} - m_{1j})}{\left(\sum_{j=1}^m v_{1j}\right)^{1/2}},$$

where

$$m_{1j} = \frac{(f_{1j}+f_{2j})\,r_{1j}}{r_{1j}+r_{2j}}, \qquad v_{1j} = \frac{(f_{1j}+f_{2j})\,r_{1j}\,r_{2j}\,(r_{1j}+r_{2j}-f_{1j}-f_{2j})}{(r_{1j}+r_{2j})^2(r_{1j}+r_{2j}-1)}$$
are the conditional mean and variance of the number in group 1 to fail at time $y_j$, given the values of $f_{1j}+f_{2j}$, $r_{1j}$ and $r_{2j}$. For the AML data t = 1.84. Is this evidence that chemotherapy lengthens survival times?

For a suitable null distribution we simply treat the observations in the rows of Table 3.4 as a single group and permute them, effectively randomly allocating group labels to the observations. For each of R permutations, we recalculate t, obtaining $t_1^*,\ldots,t_R^*$. Figure 4.8 shows the $t_r^*$ plotted against order statistics from the N(0,1) distribution, which is the asymptotic null distribution of T. The asymptotic P-value is 0.033, in reasonable agreement with the P-value 26/(999+1) = 0.026 from the permutation test. ■
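The log-rank statistic above is straightforward to compute from the failure and risk counts. An illustrative Python sketch (the argument names and the data in the check are hypothetical; censored observations enter the risk sets but not the failure counts):

```python
import math

def logrank_stat(times, groups, events):
    """Log-rank statistic T = sum_j (f1j - m1j) / sqrt(sum_j v1j), using the
    conditional mean and variance at each distinct observed failure time.
    groups[i] is 1 or 2; events[i] is 1 for a failure, 0 for censoring."""
    distinct = sorted({t for t, e in zip(times, events) if e})
    num, var = 0.0, 0.0
    for tj in distinct:
        r1 = sum(1 for t, g in zip(times, groups) if g == 1 and t >= tj)
        r2 = sum(1 for t, g in zip(times, groups) if g == 2 and t >= tj)
        f1 = sum(1 for t, g, e in zip(times, groups, events)
                 if g == 1 and t == tj and e)
        f2 = sum(1 for t, g, e in zip(times, groups, events)
                 if g == 2 and t == tj and e)
        f, r = f1 + f2, r1 + r2
        num += f1 - f * r1 / r
        if r > 1:
            var += f * r1 * r2 * (r - f) / (r ** 2 * (r - 1))
    return num / math.sqrt(var)
```

For the permutation test one would then reshuffle the group labels and recompute this statistic R times, exactly as in the text.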
Figure 4.8  Results of a Monte Carlo permutation test for differences between the survivor functions for the two groups of AML data, R = 499; the $t_r^*$ are plotted against quantiles of the standard normal. The dashed horizontal line shows the observed value of the statistic, and values of $t^*$ that exceed it are hollow. The dotted line is the line x = y.
4.4 Nonparametric Bootstrap Tests

The permutation tests described in the previous section are special nonparametric resampling tests, in which resampling is done without replacement. In this section we discuss the direct application of nonparametric resampling methods, as introduced in Chapters 2 and 3. For tightly structured problems such as those in the previous section, this means resampling with replacement rather than without, which makes little difference. But bootstrap tests apply to a much wider class of testing problems.

The special nature of significance tests requires that probability calculations be done under a null hypothesis model. In this way the bootstrap calculations must differ from those in earlier chapters. For example, where in Chapter 2 we introduced the idea of resampling from the EDF $\hat F$, now we must resample from a distribution $\hat F_0$, say, which satisfies the relevant null hypothesis $H_0$. This has been illustrated already for parametric bootstrap tests in Section 4.2.

Once the null resampling distribution $\hat F_0$ is decided, the basic bootstrap test will be to compute the P-value as

$$p_{\text{boot}} = \Pr{}^*(T^* \ge t \mid \hat F_0),$$

or to approximate this by

$$\hat p = \frac{1 + \#\{t_r^* \ge t\}}{R+1}$$

using the results $t_1^*,\ldots,t_R^*$ from R bootstrap samples.
Figure 4.9  Histogram of test statistic values $t^* = \bar y_2^* - \bar y_1^*$ from R = 999 resamples of the two samples in Example 4.13. The data value of the test statistic is t = 2.84.
Example 4.13 (Comparison of two means, continued)  Consider the last two series of measurements in Example 3.1, which are reproduced here labelled samples 1 and 2:

  sample 1   82  79  81  79  77  79  79  78  79  82  76  73  64
  sample 2   84  86  85  82  77  76  77  80  83  81  78  78  78
Suppose that we want to compare the corresponding population means, $\mu_1$ and $\mu_2$, say with test statistic $t = \bar y_2 - \bar y_1$. If, as seems plausible, the shapes of the underlying distributions are identical, then under $H_0 : \mu_2 = \mu_1$ the two distributions are the same. It would then be sensible to choose for $\hat F_0$ the pooled EDF of the two samples. The resampling test will be the same as the permutation test of Example 4.11, except that random permutations will be replaced by random samples of size $n_1 + n_2 = 26$ drawn with replacement from the pooled data.

Figure 4.9 shows the results from applying this procedure to our two samples with R = 999. The unshaded area of the histogram corresponds to the 48 values of $t^*$ larger than the observed value t = 80.38 − 77.54 = 2.84. The one-sided P-value for alternative $H_A : \mu_2 > \mu_1$ is p = (48+1)/(999+1) = 0.049. Application of the permutation test gave the same result.

It is worth stressing again that because the resampling method is wholly computational, any sensible test statistic is as easy to use as any other. So here, if outliers were present, it would be just as easy, and perhaps more sensible, to choose t to be the difference of trimmed means.
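The pooled-EDF bootstrap test differs from the permutation test only in sampling with replacement. An illustrative Python sketch using the two samples above (a rerun with different random numbers need not reproduce the text's p = 0.049 exactly):

```python
import random

def boot_two_sample(y1, y2, R=999, seed=1):
    """Bootstrap test of H0: mu2 = mu1 with t = ybar2 - ybar1, resampling
    both samples with replacement from the pooled EDF (cf. the permutation
    version, which resamples without replacement)."""
    rng = random.Random(seed)
    pooled = list(y1) + list(y2)
    n1, n2 = len(y1), len(y2)
    t = sum(y2) / n2 - sum(y1) / n1
    count = 0
    for _ in range(R):
        s1 = [rng.choice(pooled) for _ in range(n1)]
        s2 = [rng.choice(pooled) for _ in range(n2)]
        if sum(s2) / n2 - sum(s1) / n1 >= t:
            count += 1
    return t, (1 + count) / (R + 1)

sample1 = [82, 79, 81, 79, 77, 79, 79, 78, 79, 82, 76, 73, 64]
sample2 = [84, 86, 85, 82, 77, 76, 77, 80, 83, 81, 78, 78, 78]
t, p = boot_two_sample(sample1, sample2)
```

Swapping in a trimmed mean for the plain mean requires changing only the statistic, as the text emphasizes.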
The question is: do we gain or lose anything by assuming that the two distributions have the same shape? ■

The particular null fitted model used in the previous example was suggested in part by the permutation test, and is clearly not the only possibility. Indeed, a more reasonable null model in the context would be one which allowed different variances for the two populations sampled: an analogous model is used in Example 4.14 below. So in general there can be many candidates for null model in the nonparametric case, each corresponding to different restrictions imposed in addition to $H_0$. One must judge which is most appropriate on the basis of what makes sense in the practical context.

Semiparametric null models

If data are described by a semiparametric model, so that some features of underlying distributions are described by parameters, then it may be relatively easy to specify a null model. The following example illustrates this.

Example 4.14 (Comparison of several means)  For the gravity data in Example 3.2, one point that we might check before proceeding with an aggregate estimation is that the underlying means for all eight series are in fact the same. One plausible model for the data, as mentioned in Section 3.2, is
$$y_{ij} = \mu_i + \sigma_i\varepsilon_{ij}, \qquad j = 1,\ldots,n_i, \quad i = 1,\ldots,8,$$

where the $\varepsilon_{ij}$ come from a single distribution G. The null hypothesis to be tested is $H_0 : \mu_1 = \cdots = \mu_8$, with general alternative. For this an appropriate test statistic is given by (here $\bar y_i$ and $s_i^2$ are the average and sample variance for the ith series)
$$t = \sum_{i=1}^8 w_i(\bar y_i - \hat\mu_0)^2, \qquad w_i = n_i/s_i^2,$$

with $\hat\mu_0 = \sum w_i\bar y_i/\sum w_i$ the null estimate of the common mean. The null distribution of T would be approximately $\chi^2_7$ were it not for the effect of small sample sizes. So a bootstrap approach is sensible. The null model fit includes $\hat\mu_0$ and the estimated variances
l ) s f / « i + ( Pi ~ M
2-
The null model studentized residuals

$$e_{ij} = \frac{y_{ij} - \hat\mu_0}{\{\hat\sigma^2_{0i} - (\sum w_l)^{-1}\}^{1/2}},$$

when plotted against normal quantiles, suggest mild non-normality. So, to be safe, we apply a nonparametric bootstrap. Datasets are simulated under the null model

$$y^*_{ij} = \hat\mu_0 + \hat\sigma_{0i}\varepsilon^*_{ij},$$
  i    $\bar y_i$    $s_i^2$    $\hat\sigma^2_{0i}$    $w_i$
  1    66.4      370.6     474.4      0.022
  2    89.9      233.9     339.9      0.047
  3    77.3      248.3     222.3      0.036
  4    81.4       68.8      67.8      0.116
  5    75.3       13.4      23.1      0.599
  6    78.9       34.1      31.1      0.323
  7    77.5       22.4      21.9      0.579
  8    80.4       11.3      13.5      1.155
with $\varepsilon^*_{ij}$ randomly sampled from the pooled residuals $\{e_{ij},\ i = 1,\ldots,8,\ j = 1,\ldots,n_i\}$. For each such simulated dataset we calculate sample averages and variances, then weights, the pooled mean, and finally $t^*$.

Table 4.3 contains a summary of the null model fit, from which we calculate $\hat\mu_0 = 78.6$ and t = 21.275. A set of R = 999 bootstrap samples gave the histogram of $t^*$ values in the left panel of Figure 4.10. Only 29 values exceed t = 21.275, so $\hat p = 0.030$. The right panel of the figure plots ordered $t^*$ values against quantiles of the $\chi^2_7$ approximation, which is off by a factor of about 1.24 and gives the distorted P-value 0.0034. A normal-error parametric bootstrap gives results very similar to the nonparametric bootstrap. ■
Table 4.3  Summary statistics for eight samples in gravity data, plus ingredients for significance test. The weighted mean is $\hat\mu_0 = 78.6$.
Figure 4.10  Resampling results for comparison of the means of the eight series of gravity data. Left panel: histogram of R = 999 values of $t^*$ under nonparametric resampling from the null model with pooled studentized residuals; the unshaded area to the right of the observed value t = 21.275 gives $\hat p = 0.029$. Right panel: ordered $t^*$ values versus $\chi^2_7$ quantiles; the dotted line is the theoretical approximation.
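The null-model resampling scheme of Example 4.14 can be sketched as follows (illustrative Python, not the authors' code; a tiny variance floor is added to guard against degenerate resamples in which a simulated series is constant):

```python
import random

def null_fit(series):
    """Common-mean null fit: w_i = n_i/s_i^2, mu0 = sum(w_i*ybar_i)/sum(w_i),
    and statistic t = sum w_i (ybar_i - mu0)^2."""
    stats = []
    for y in series:
        n = len(y)
        ybar = sum(y) / n
        s2 = max(sum((v - ybar) ** 2 for v in y) / (n - 1), 1e-12)  # floor
        stats.append((n, ybar, s2, n / s2))
    sw = sum(st[3] for st in stats)
    mu0 = sum(st[3] * st[1] for st in stats) / sw
    t = sum(st[3] * (st[1] - mu0) ** 2 for st in stats)
    return mu0, t, stats

def boot_pvalue(series, R=199, seed=1):
    """Resample pooled studentized null residuals and simulate
    y*_ij = mu0 + sigma_0i * eps*_ij, as in the gravity-data example."""
    rng = random.Random(seed)
    mu0, t, stats = null_fit(series)
    sw = sum(st[3] for st in stats)
    sig2 = [(st[0] - 1) * st[2] / st[0] + (st[1] - mu0) ** 2 for st in stats]
    resid = [(v - mu0) / (sig2[i] - 1 / sw) ** 0.5
             for i, y in enumerate(series) for v in y]
    count = 0
    for _ in range(R):
        sim = [[mu0 + sig2[i] ** 0.5 * rng.choice(resid) for _ in series[i]]
               for i in range(len(series))]
        if null_fit(sim)[1] >= t:
            count += 1
    return (1 + count) / (R + 1)
```

When all series share a common mean exactly, t = 0 and every simulated $t^*$ is at least as large, so the procedure returns a P-value of 1.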
Example 4.15 (Ratio test)  Suppose that, as in Example 1.2, each observation y is a pair (u, x), and that we are interested in the ratio of means $\theta = E(X)/E(U)$. In particular suppose that we wish to test the null hypothesis $H_0 : \theta = \theta_0$. This problem could arise in a variety of contexts, and the context would help to determine the relevant null model. For example, we might have a paired-comparison experiment where the multiplicative effect θ is to be tested. Here $\theta_0$ would be 1, and the marginal distributions of U and X should be the same under $H_0$. One natural null model $\hat F_0$ would then be the symmetrized EDF, i.e. the EDF of the expanded data $(u_1,x_1),\ldots,(u_n,x_n),(x_1,u_1),\ldots,(x_n,u_n)$. ■

Fully nonparametric null models

In those few situations where the context of the problem does not help identify a suitable semiparametric null model, it is in principle possible to form a wholly nonparametric null model $\hat F_0$. Here we look at one general way to do this.

Suppose the test involves k distributions $F_1,\ldots,F_k$ for which the null hypothesis imposes a constraint, $H_0 : \tau(F_1,\ldots,F_k) = 0$. Then we can obtain a null model by nonparametric maximum likelihood, or a similar method, by adding the constraint to the usual derivation of the EDFs as MLEs. To be specific, suppose that we force the estimates of $F_1,\ldots,F_k$ to be supported on the corresponding sample values, as the EDFs are. Then the estimate for $F_i$ will attach probabilities $p_i = (p_{i1},\ldots,p_{in_i})$ to sample values $y_{i1},\ldots,y_{in_i}$; the unconstrained EDF $\hat F_i$ corresponds to $\hat p_i = n_i^{-1}(1,\ldots,1)$. Now measure the discrepancy between a possible $F_i$ and the EDF $\hat F_i$ by $d(\hat p_i, p_i)$, say, such that the EDF probabilities $\hat p_i$ minimize this when no constraints other than $\sum_{j=1}^{n_i} p_{ij} = 1$ are imposed. Then a nonparametric null model is given by the probabilities which minimize the aggregate discrepancy subject to $\tau(F_1,\ldots,F_k) = 0$. That is, the null model minimizes the Lagrange expression

$$\sum_{i=1}^k d(\hat p_i, p_i) + \alpha\, t(p_1,\ldots,p_k), \qquad (4.22)$$

where $t(p_1,\ldots,p_k)$ is a re-expression of the original constraint function $\tau(F_1,\ldots,F_k)$ and α is a Lagrange multiplier. We denote the solutions of this constrained minimization problem by $\hat p^0_i$, $i = 1,\ldots,k$.

The choice of discrepancy function $d(\cdot,\cdot)$ that corresponds to maximum likelihood estimation is the aggregate information distance

$$\sum_{i=1}^k \sum_{j=1}^{n_i} \hat p_{ij}\log(\hat p_{ij}/p_{ij}), \qquad (4.23)$$
and a useful alternative is the reverse information distance

$$\sum_{i=1}^k \sum_{j=1}^{n_i} p_{ij}\log(p_{ij}/\hat p_{ij}).$$

This can be rewritten as

$$p = \Pr\left\{m^{-1}G_m - n^{-1}G_n \ge \frac{(m+n)(u-1)}{mu+n}\right\}, \qquad (4.35)$$
where $u = \bar x/\bar y$, and $G_m$ and $G_n$ are independent gamma random variables with indices m and n respectively and unit scale parameters. The bootstrap P-value (4.35) does not have a uniform distribution under the null hypothesis, so P = p does not correspond to error rate p. This is fully corrected using the adjustment (4.34). To see this, write (4.35) as p = h(u), so that $p_0(\hat F^*)$ equals $\Pr{}^{**}(T^{**} \ge t^* \mid \hat F^*_0) = h(U^*)$, where $U^* = \bar X^*/\bar Y^*$. Since $h(\cdot)$ is decreasing, it follows that

$$p_{\text{adj}} = \Pr{}^*\{h(U^*) \le h(u) \mid x, y\} = \Pr{}^*(U^* \ge u \mid x, y) = \Pr(F_{2m,2n} \ge u),$$
4.5 · Adjusted P-values
which is the P-value of the exact test. Therefore $p_{\text{adj}}$ is exactly uniform and the adjustment is perfectly successful. ■

In the previous example, the same result for $p_{\text{adj}}$ would be achieved if the bootstrap distribution of T were replaced by a normal approximation. This might suggest that bootstrap calculation of p could be replaced by a rough theoretical approximation, thus removing one level of bootstrap sampling from calculation of $p_{\text{adj}}$. Unfortunately this is not always true, as is clear from the fact that if an approximate null distribution of T is used which does not depend upon F at all, then $p_{\text{adj}}$ is just the ordinary bootstrap P-value.

In most applications it will be necessary to use simulation to approximate the adjusted P-value (4.34). Suppose that we have drawn R resamples from the null model $\hat F_0$, with corresponding test statistic values $t_1^*,\ldots,t_R^*$. The rth resample has EDF $\hat F_r^*$ (possibly a vector of EDFs), to which we fit the null model $\hat F^*_{r0}$. Resampling M times from $\hat F^*_{r0}$ gives samples from which we calculate $t^{**}_{rm}$, $m = 1,\ldots,M$. Then the Monte Carlo approximation for the adjusted P-value is

$$p_{\text{adj}} = \frac{1 + \#\{p_r^* \le \hat p\}}{R+1}, \qquad (4.36)$$

where for each r
$$p_r^* = \frac{1 + \#\{t^{**}_{rm} \ge t_r^*\}}{M+1}. \qquad (4.37)$$
If $\hat p$ is calculated from the same R resamples, then a total of RM samples is generated. We can summarize the algorithm as follows:

Algorithm 4.3 (Double bootstrap test)
For r = 1, ..., R:
1  Generate $y_1^*,\ldots,y_n^*$ independently from the fitted null distribution $\hat F_0$ and calculate the test statistic $t_r^*$ from them.
2  Fit the null distribution to $y_1^*,\ldots,y_n^*$, thereby obtaining $\hat F^*_{r0}$.
3  For m = 1, ..., M:
   (a) generate $y_1^{**},\ldots,y_n^{**}$ independently from the fitted null distribution $\hat F^*_{r0}$; and
   (b) calculate from them the test statistic $t^{**}_{rm}$.
4  Calculate $p_r^*$ as in (4.37).
Finally, calculate $p_{\text{adj}}$ as in (4.36).
•
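Algorithm 4.3 translates directly into code. An illustrative Python sketch, in which `fit_null`, `simulate`, and `statistic` are placeholders to be supplied by the user; the exponential setup at the end is a hypothetical smoke test, not an example from the text:

```python
import random

def double_bootstrap_pvalue(data, fit_null, simulate, statistic,
                            R=199, M=99, seed=1):
    """Algorithm 4.3: p_adj = (1 + #{p*_r <= p}) / (R + 1), where each inner
    p*_r = (1 + #{t**_rm >= t*_r}) / (M + 1) comes from refitting the null
    model to the rth outer resample."""
    rng = random.Random(seed)

    def mc_pvalue(sample, n_rep):
        model = fit_null(sample)
        t = statistic(sample)
        ts = [statistic(simulate(model, len(sample), rng)) for _ in range(n_rep)]
        return (1 + sum(tt >= t for tt in ts)) / (n_rep + 1)

    model0 = fit_null(data)
    t = statistic(data)
    inner, count_outer = [], 0
    for _ in range(R):
        ystar = simulate(model0, len(data), rng)
        if statistic(ystar) >= t:
            count_outer += 1
        inner.append(mc_pvalue(ystar, M))      # steps 2-4 of Algorithm 4.3
    p = (1 + count_outer) / (R + 1)
    p_adj = (1 + sum(ps <= p for ps in inner)) / (R + 1)
    return p, p_adj

# Hypothetical smoke test: exponential null model fitted by its mean.
data = [1.2, 0.3, 2.7, 0.8, 1.9, 0.4, 3.1, 0.9]
p, p_adj = double_bootstrap_pvalue(
    data,
    fit_null=lambda y: sum(y) / len(y),
    simulate=lambda mu, n, rng: [rng.expovariate(1 / mu) for _ in range(n)],
    statistic=lambda y: sum(y) / len(y),
    R=99, M=49)
```

With R = 99 and M = 49 this already performs about 5000 inner simulations, which illustrates why the choice of M discussed below matters.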
We discuss the choice of M after the following example.

Example 4.22 (Two-way table)  Table 4.6 contains a set of observed multinomial counts, for which we wish to test the null hypothesis of row-column independence, or additive loglinear model.
  1  2  2  1  1  0  1
  2  0  0  2  3  0  0
  0  1  1  1  2  7  3
  1  1  2  0  0  0  1
  0  1  1  1  1  0  0

If the count in row i and column j is $y_{ij}$, then the null fitted values are $\hat y_{ij0} = y_{i+}y_{+j}/y_{++}$, where $y_{i+} = \sum_j y_{ij}$ and so forth. The log likelihood ratio test statistic is

$$t = 2\sum_{i,j} y_{ij}\log(y_{ij}/\hat y_{ij0}).$$
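The statistic can be computed as follows (an illustrative Python sketch; cells with $y_{ij} > 0$ contribute and empty cells contribute zero, and the two small tables used in the checks are hypothetical, not Table 4.6):

```python
import math

def lr_independence_stat(table):
    """Log likelihood ratio statistic t = 2 sum_ij y_ij log(y_ij / yhat_ij0),
    with fitted values yhat_ij0 = y_i+ * y_+j / y_++."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    return 2 * sum(y * math.log(y * total / (rows[i] * cols[j]))
                   for i, r in enumerate(table)
                   for j, y in enumerate(r) if y > 0)
```

The parametric bootstrap of the example then simulates R tables of total count $y_{++}$ from the fitted cell probabilities $\hat y_{ij0}/y_{++}$ and applies the same function to each.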
According to standard theory, T is approximately distributed as $\chi^2_d$ under the null hypothesis with d = (7−1) × (5−1) = 24. Since t = 38.52, the approximate P-value is $\Pr(\chi^2_{24} \ge 38.52) = 0.031$. However, the chi-squared approximation is known to be quite poor for such a sparse table, so we apply the parametric bootstrap. The model $\hat F_0$ is the fitted multinomial model, with sample size $n = y_{++}$ and (i,j)th cell probability $\hat y_{ij0}/n$. We generate R tables from this model and calculate the corresponding log likelihood ratio statistics $t_1^*,\ldots,t_R^*$. With R = 999 we obtain 47 statistics larger than the observed value t = 38.52, so the bootstrap P-value is (1+47)/(1+999) = 0.048.

The inaccuracy of the chi-squared approximation is illustrated by Figure 4.13, which is a plot of ordered values of $\Pr(\chi^2_{24} \ge t^*)$ versus expected uniform order statistics: the straight line corresponds to the theoretical chi-squared approximation for T. The bootstrap P-value turns out to be quite non-uniform. A double bootstrap calculation with R = M = 999 gives $p_{\text{adj}} = 0.076$.

Note that the test applied here conditions only on the total $y_{++}$, whereas in principle one would prefer to condition on all row and column sums, which are sufficient statistics under the null hypothesis: this would require more complex simulation methods, such as those of Section 4.2.1; see Problem 4.3. ■

Choice of M

The general application of the double bootstrap algorithm involves simulation at two levels, with a total of RM samples. If we follow the suggestion to use as many as 1000 samples for calculation of probabilities, then here we would need as many as $10^6$ samples, which seems impractical for other than simple problems. As in Section 3.9, we can determine approximately what a sensible choice for M would be.
The calculation below of simulation mean squared error suggests that M = 99 would generally be satisfactory, and M = 249 would be safe. There are also ways of reducing considerably the total size of the simulation, as we shall show in Chapter 9.
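The single-level parametric bootstrap test described in the example above is short in code. The following Python sketch is ours, not the book's (whose computations use S-Plus); the 7 × 5 table is a randomly generated stand-in, not the Newton and Geyer data, and R is kept small for speed.

```python
import numpy as np

rng = np.random.default_rng(1)

def lrt_stat(table):
    """Log likelihood ratio statistic T = 2 * sum y_ij log(y_ij / mu_ij0),
    where mu_ij0 = (row total)(column total)/(grand total) is the fitted
    independence model; zero cells contribute nothing to the sum."""
    n = table.sum()
    mu0 = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    pos = table > 0
    return 2.0 * np.sum(table[pos] * np.log(table[pos] / mu0[pos]))

def bootstrap_pvalue(table, R=199, rng=rng):
    """Parametric bootstrap: simulate R tables from the fitted multinomial
    model (conditioning only on the grand total) and return
    (1 + #{t* >= t}) / (R + 1)."""
    n = int(table.sum())
    p0 = (np.outer(table.sum(axis=1), table.sum(axis=0)) / n ** 2).ravel()
    t = lrt_stat(table)
    exceed = sum(
        lrt_stat(rng.multinomial(n, p0).reshape(table.shape).astype(float)) >= t
        for _ in range(R))
    return t, (1 + exceed) / (R + 1)

# Hypothetical sparse 7x5 table (NOT the Newton-Geyer data)
tab = rng.multinomial(80, np.ones(35) / 35).reshape(7, 5).astype(float)
t, p = bootstrap_pvalue(tab)
print("t =", round(t, 2), " bootstrap P-value =", p)
```

With R = 999 and a real data table, this is exactly the computation that gave 0.048 in the example.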
Table 4.6 Two-way table of counts (Newton and Geyer, 1994).
Figure 4.13 Ordered values of Pr(χ²₂₄ ≥ t*_r) versus expected uniform order statistics from R = 999 bootstrap simulations under the null fitted model for the two-way table. Dotted line is the theoretical approximation.
To calculate the simulation mean squared error, we begin with equation (4.37), which we rewrite in the form
\[
p^*_r = \frac{1 + \sum_{m=1}^{M} I\{t^{**}_{rm} \ge t^*_r\}}{M + 1},
\]
where I{A} is the indicator function of the event A.
In order to simplify the calculations, we suppose that, as M → ∞, p*_r → u_r, such that the u_r are a random sample from the uniform distribution on [0, 1]. In this case there is no need to adjust the bootstrap P-value, so p_adj = p. Under this assumption (M + 1)p*_r is almost a Binom(M, u_r) random variable, so that equation (4.36) can be approximated by
\[
p_{adj} = \frac{1 + \sum_{r=1}^{R} X_r}{R + 1},
\]
where X_r = I{Binom(M, u_r) < (M + 1)p}. We can now calculate the simulation mean and variance of p_adj by using the fact that
\[
E(X_r^k \mid u_r) = \Pr\{\mathrm{Binom}(M, u_r) < (M + 1)p\}, \qquad k = 1, 2.
\]
First we have that for all r
\[
E(X_r^k) = \sum_{j=0}^{[(M+1)p]-1} \int_0^1 \binom{M}{j} u^j (1 - u)^{M-j}\, du = \frac{[(M+1)p]}{M + 1},
\]
where [z] is the integer part of z. Since p_adj is proportional to the average of independent X_r s, it follows that
\[
E(p_{adj}) = \frac{(M + 1) + R[(M+1)p]}{(R + 1)(M + 1)},
\]
which tends to the correct answer p as R, M → ∞, and
\[
\mathrm{var}(p_{adj}) = \frac{R[(M+1)p]\{M + 1 - [(M+1)p]\}}{(R + 1)^2 (M + 1)^2}.
\]
A simple aggregate measure of simulation error is the mean squared error relative to p,
\[
MSE(p_{adj}) \doteq \frac{[(M+1)p]\{M + 1 - [(M+1)p]\}}{R(M + 1)^2}.
\]
Numerical evaluations of this result suggest that M = 249 would be a safe choice. If 0.01 < p < 0.10 then M = 99 would be satisfactory, while M = 49 would be adequate for larger p. Note that two assumptions were made in the calculation, both of which are harmless. First, we assumed that p was independent of the t*_r, whereas in fact it would likely be calculated from the same values. Secondly, our main interest is in cases where P-values are not exactly uniformly distributed. Problem 4.12 suggests a more flexible calculation, from which very similar conclusions emerge.
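Numerical evaluations of this kind are easy to reproduce. The Python sketch below is ours: it uses the mean and variance formulas derived above and, as its aggregate summary, combines squared bias and variance into a root mean squared error relative to p (that particular summary is our choice, not necessarily the book's).

```python
from math import floor

def moments_padj(p, R, M):
    """Mean and variance of the simulated adjusted P-value under the
    uniform assumption: (M+1) p*_r is Binom(M, u_r) with u_r ~ U(0,1)."""
    k = floor((M + 1) * p)                # [(M+1)p], the integer part
    q = k / (M + 1)                       # E(X_r)
    mean = (1 + R * q) / (R + 1)
    var = R * k * (M + 1 - k) / ((R + 1) ** 2 * (M + 1) ** 2)
    return mean, var

# Root-MSE relative to p (bias^2 + variance, our aggregate), with R = 999
for p in (0.01, 0.05, 0.10):
    for M in (49, 99, 249):
        m, v = moments_padj(p, 999, M)
        print(p, M, round((((m - p) ** 2 + v) ** 0.5) / p, 3))
```

The tabulated values fall steadily as M grows, consistent with the recommendation that M = 99 is usually adequate and M = 249 safe.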
4.6 Estimating Properties of Tests

A statistical test involves two steps, collection of data and application of a particular test statistic to those data. Both steps involve choice, and resampling methods can have a role to play in such choices by providing estimates of test power.

Estimation of power

As regards collection of data, in simple problems of the kind under discussion in this chapter, the statistical contribution lies in recommendation of sample sizes via considerations of test power. If it is proposed to use test statistic T, and if the particular alternative H_A to the null hypothesis H₀ is of primary interest, then the power of the test is
\[
\pi(p, H_A) = \Pr(T > t_p \mid H_A),
\]
where t_p is defined by Pr(T > t_p | H₀) = p. In the simplified language of testing theory, if we fix p and decide to reject H₀ when t > t_p, then π(p, H_A) is the chance of rejection when H_A is true. An alternative specification is in terms of E(P | H_A), the expected P-value. In many problems hypotheses are expressed in terms of parameters, and then power can be evaluated for arbitrary parameter values to give a power function. What is of interest to us here is the use of resampling to assess the power of a test, either as an aid to determination of appropriate sample sizes for a particular test, or as a way to choose from a set of possible tests.
Suppose, then, that a pilot set of data y₁, . . . , y_n is in hand, and that the model description is semiparametric (Section 3.3). The pilot data can be used to estimate the nonparametric component of the model, and to this can be added arbitrary values of the parametric component. This provides a family of alternative hypothesis models from which to simulate data and test statistic values. From these simulations we obtain approximations of test power, provided we have critical values t_p for the test statistic. This last condition will not always be met, but in many problems there will at least be a simple approximation, for example N(0, 1) if we are using a studentized statistic. For many nonparametric tests, such as those based on ranks, critical values are distribution-free, and so are available. The following example illustrates this idea.

Example 4.23 (Maize height data) The EDFs plotted in the left panel of Figure 4.14 are for heights of maize plants growing in two adjacent rows, and differing only in a pollen sterility factor. The two samples can be modelled approximately by a semiparametric model with an unspecified baseline distribution F and one median-shift parameter θ. For analysis of such data it is proposed to test H₀ : θ = 0 using the Wilcoxon test. Whether or not there are enough data can be assessed by estimating the power of this test, which does depend upon F.

Denote the observations in sample i by y_ij, j = 1, . . . , n_i. The underlying distributions are assumed to have the forms F(y) and F(y − θ), where θ is estimated by the difference in sample medians, θ̂. To estimate F we subtract θ̂ from the second sample to give ỹ_2j = y_2j − θ̂. Then F̂ is the pooled EDF of the y_1j s and ỹ_2j s. For these data n₁ = n₂ = 12 and θ̂ = −4.5. The right panel of Figure 4.14 plots EDFs of the y_1j s and ỹ_2j s.

The next step is to simulate data for selected values of θ and selected sample sizes N₁ and N₂ as follows. For group 1, sample data y*₁₁, . . . , y*₁_{N₁} from F̂(y), i.e. randomly with replacement from
\[
y_{11}, \ldots, y_{1n_1}, \tilde y_{21}, \ldots, \tilde y_{2n_2},
\]
and for group 2, sample data y*₂₁, . . . , y*₂_{N₂} from F̂(y − θ), i.e. randomly with replacement from
\[
y_{11} + \theta, \ldots, y_{1n_1} + \theta, \tilde y_{21} + \theta, \ldots, \tilde y_{2n_2} + \theta.
\]
Then calculate test statistic t*. With R repetitions of this, the power of the test at level p is the proportion of times that t* > t_p, where t_p is the critical value of the Wilcoxon test for specified N₁ and N₂. In this particular case, the simulations show that the Wilcoxon test at level p = 0.01 has power 0.26 for θ = 8 and the observed sample sizes. Additional
Figure 4.14 Power comparison for maize height data (Hand et al., 1994, p. 130). Left panel: EDFs of plant height for two groups. Right panel: EDFs for group 1 (unadjusted) and group 2 (adjusted by estimated median-shift θ̂ = −4.5).
calculations show that both sample sizes need to be increased from 12 to at least 33 to have power 0.8 for θ = 8. ■

If the proposed test uses the pivot method of Section 4.4.1, then calculations of sample size can be done more simply. For example, for a scalar θ consider a two-sided test of H₀ : θ = θ₀ with level 2α based on the pivot Z. The power function can be written
\[
\pi(2\alpha, \theta) = 1 - \Pr\left\{ z_{\alpha,N} + \frac{\theta_0 - \theta}{v_N^{1/2}} \le Z_N \le z_{1-\alpha,N} + \frac{\theta_0 - \theta}{v_N^{1/2}} \right\},
\qquad (4.39)
\]
where the subscript N indicates sample size. A rough approximation to this power function can be obtained as follows. First simulate R samples of size N from F̂, and use these to approximate the quantiles z_{α,N} and z_{1−α,N}. Next set v_N^{1/2} = n^{1/2} v_n^{1/2} / N^{1/2}, where v_n is the variance estimate calculated from the pilot data. Finally, approximate the probability (4.39) using the same R bootstrap samples.

Sequential tests

Similar sorts of calculations can be done for sequential tests, where one important criterion is terminal sample size. In this context simulation can also be used to assess the likely eventual sample size, given data y₁, . . . , y_n at an interim stage of a test, with a specified protocol for termination. This can be done by simulating data continuations y*_{n+1}, y*_{n+2}, . . . up to termination, by sampling from fitted models or EDFs, as appropriate. From repetitions of this simulation one obtains an approximate distribution for the terminal sample size N.
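The simulation recipe of Example 4.23 can be sketched in Python (the book's own practicals use S-Plus, so this fragment is ours). The pilot samples below are synthetic normal data, not the maize heights; the critical value comes from the normal approximation to the rank-sum statistic; and ties introduced by resampling are broken arbitrarily rather than mid-ranked. Treat it as a rough sketch of the technique.

```python
import numpy as np

rng = np.random.default_rng(7)

def ranksum_z(x, y):
    """Standardized Wilcoxon rank-sum statistic for the second sample
    (normal approximation; ties broken arbitrarily, no tie correction)."""
    n1, n2 = len(x), len(y)
    ranks = np.concatenate([x, y]).argsort().argsort() + 1.0
    w = ranks[n1:].sum()
    mu = n2 * (n1 + n2 + 1) / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (w - mu) / sigma

def estimated_power(pilot1, pilot2, delta, N1, N2, z_crit, R=500, rng=rng):
    """Estimate F by the pooled EDF of pilot1 and the shift-adjusted pilot2,
    then simulate samples of sizes N1, N2 with median shift delta and count
    two-sided rejections of the rank-sum test."""
    shift = np.median(pilot2) - np.median(pilot1)
    fhat = np.concatenate([pilot1, pilot2 - shift])
    reject = 0
    for _ in range(R):
        x = rng.choice(fhat, size=N1, replace=True)
        y = rng.choice(fhat, size=N2, replace=True) + delta
        if abs(ranksum_z(x, y)) >= z_crit:
            reject += 1
    return reject / R

# Synthetic pilot samples of size 12 (NOT the maize heights)
g1 = rng.normal(70.0, 10.0, size=12)
g2 = rng.normal(62.0, 10.0, size=12)
print(estimated_power(g1, g2, delta=8.0, N1=12, N2=12, z_crit=2.576))
```

Raising N1 and N2 in the call and rerunning traces out the sample-size calculation described in the example.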
4.7 Bibliographic Notes

The standard theory of significance tests is described in Chapters 3–5 and 9 of Cox and Hinkley (1974). For detailed treatment of the mathematical theory see Lehmann (1986). In recent years much work has been done on obtaining improved distributional approximations for likelihood-based statistics, and most of this is covered by Barndorff-Nielsen and Cox (1994).

Randomization and permutation tests have long histories. R. A. Fisher (1935) introduced randomization tests as a device for explaining and justifying significance tests, both in simple cases and for complicated experimental designs: the randomization used in selecting a design can be used as the basis for inference, without appeal to specific error models. For a recent account see Manly (1991). A general discussion of how to apply randomization in complex problems is given by Welch (1990). Permutation tests, which are superficially similar to randomization tests, are specifically nonparametric tests designed to condition out the unknown sampling distribution. The theory was developed by Pitman (1937a,b,c), and is summarized by Lehmann (1986). More recently Romano (1989, 1990) has examined properties of permutation tests and their relation to bootstrap tests for a variety of problems.

Monte Carlo tests were first suggested by Barnard (1963) and are particularly popular in spatial statistics, as described by Ripley (1977, 1981, 1987) and Besag and Diggle (1977). Graphical tests for regression diagnostics are described by Atkinson (1985), and Ripley (1981) applies them to model-checking in spatial statistics. Markov chain Monte Carlo methods for conditional tests were introduced by Besag and Clifford (1989); applications to contingency table analysis are given by Forster, McDonald and Smith (1996) and Smith, Forster and McDonald (1996), who give additional references. Gilks et al. (1996) is a good general reference on Markov chain Monte Carlo methods, including design of simulation. The effect of simulation size R on power for Monte Carlo tests (with independent simulations) has been considered by Marriott (1979), Jockel (1986) and by Hall and Titterington (1989); the discussion in Section 4.2.5 follows Jockel. Sequential calculation of P-values is described by Besag and Clifford (1991) and Jennison (1992).

The use of tilted EDFs was introduced by Efron (1981b), and has subsequently had a strong impact on confidence interval methods; see Chapters 5 and 10. Double bootstrap adjustment of P-values is discussed by Beran (1988), Loh (1987), Hinkley and Shi (1989), and Hall and Martin (1988). Applications are described by Newton and Geyer (1994). Geyer (1995) discusses tests for inequality-constrained hypotheses, which sheds light on possible inconsistency
of bootstrap tests and suggests remedies. For references to discussions of improved simulation methods, see Chapter 9. A variety of methods and applications for resampling in multiple testing are covered in the books by Noreen (1989) and Westfall and Young (1993). Various aspects of resampling in the choice of test are covered in papers by Collings and Hamilton (1988), Hamilton and Collings (1991), and Samawi (1994). A general theoretical treatment of power estimation is given by Beran (1986). The brief discussion of adaptive tests in Section 4.4.2 is based on Donegani (1991), who refers to previous work on the topic.
4.8 Problems

1. For the dispersion test of Example 4.2, y₁, . . . , y_n are hypothetically sampled from a Poisson distribution. In the Monte Carlo test we simulate samples from the conditional distribution of Y₁, . . . , Y_n given ΣY_j = s, with s = Σy_j. If the exact multinomial simulation were not available, a Markov chain method could be used. Construct a Markov chain Monte Carlo algorithm based on one-step transitions from (u₁, . . . , u_n) to (v₁, . . . , v_n) which involve only adding and subtracting 1 from two randomly selected u s. (Note that zero counts must not be reduced.) Such an algorithm might be slow. Suggest a faster alternative. (Section 4.2)
2. Suppose that X₁, . . . , X_n are continuous and have the same marginal CDF F, although they are not independent. Let I be a random integer between 1 and n. Show that rank(X_I) has a uniform distribution on {1, 2, . . . , n}. Explain how to apply this result to obtain an exact Monte Carlo test using one realization of a suitable Markov chain. (Section 4.2.2; Besag and Clifford, 1989)
3. Suppose that we have an m × m contingency table with entries y_ij which are counts.
(a) Consider the null hypothesis of row–column independence. Show that the sufficient statistic S₀ under this hypothesis is the set of row and column marginal totals. To assess the significance of the likelihood ratio test statistic conditional on these totals, a Markov chain Monte Carlo simulation is used. Develop a Metropolis-type algorithm using one-step transitions which modify the contents of a randomly selected tetrad y_ik, y_il, y_jk, y_jl, where i ≠ j, k ≠ l.
(b) Now consider the null hypothesis of quasi-symmetry, which implies that in the loglinear model for mean cell counts, log E(Y_ij) = μ + α_i + β_j + γ_ij, the interaction parameters satisfy γ_ij = γ_ji for all i, j. Show that the sufficient statistic S₀ under this hypothesis is the set of totals y_ij + y_ji, i ≠ j, together with the row and column totals and the diagonal entries. Again a conditional test is to be applied. Develop a Metropolis-type algorithm for Markov chain Monte Carlo simulation using one-step transitions which involve pairs of symmetrically placed tetrads. (Section 4.2.2; Smith et al., 1996)
4. Suppose that a one-sided bootstrap test at level α is to be applied with R simulated samples. Then the null hypothesis will be rejected if and only if the number of t*'s exceeding t is at most k = (R + 1)α − 1. If k_r is the number of t*'s exceeding t in the first r simulations, for what values of k_r would it be unnecessary to continue simulation? (Section 4.2.5; Jennison, 1992)
5. (a) Consider the following rule for choosing the number of simulations in a Monte Carlo test. Choose k, and generate simulations t*₁, t*₂, . . . , t*_l until the first l for which k of the t* exceed the observed value t; then declare P-value p = (k + 1)/(l + 1). Let the random variables corresponding to l and p be L and P. Show that
\[
\Pr\{P \le (k+1)/(l+1)\} = \Pr(L \ge l) = k/l, \qquad l = k, k+1, \ldots,
\]
and deduce that L has infinite mean. Show that P has the distribution of a U(0, 1) random variable rounded to the nearest achievable significance level 1, k/(k + 1), k/(k + 2), . . . , and deduce that the test is exact.
(b) Consider instead stopping immediately if k of the t* exceed t at any l < R, and anyway stopping when l = R, at which point m values exceed t. Show that this rule gives achievable significance levels
\[
p = \begin{cases} (k+1)/(l+1), & \text{if stopping occurs at } l < R,\\ (m+1)/(R+1), & m = 0, 1, \ldots, k, \text{ otherwise,} \end{cases}
\]
show that the expected number of simulations is
\[
E\{\min(L, R)\} = k + k\sum_{l=1}^{R-k} \frac{1}{k + l},
\]
and evaluate this with k = 49 and 9 for R = 999. (Section 4.2.5; Besag and Clifford, 1991)
6. Suppose that n subjects are allocated randomly to each of two treatments, A and B. In fact each subject falls in one of two relevant groups, such as gender, and the treatment allocation frequencies differ between groups. The response y_ij for the jth subject in the ith group is modelled as y_ij = γ_i + τ_{k(i,j)} + ε_ij, where τ_A and τ_B are treatment effects and k(i, j) is A or B according to which treatment was allocated to the subject. Our interest is in testing H₀ : τ_A = τ_B with alternative that τ_A < τ_B, and the test statistic chosen is
\[
T = \sum_{i,j:\,k(i,j)=B} Y_{ij} \; - \sum_{i,j:\,k(i,j)=A} Y_{ij}.
\]
(a) Describe how to calculate a permutation P-value for the observed value t using the method described above Example 4.12.
(b) A different calculation of the P-value is possible which conditions on the observed covariates, i.e. on the treatment allocation frequencies in the two groups. The idea is to first eliminate the group effects by reducing the data to differences d_ij = y_ij − y_{i,j+1}, and then to note that the joint probability of these differences under H₀ is constant under permutations of data within groups. That is, the minimal sufficient statistic S₀ under H₀ is the set of differences Y_{i(j)} − Y_{i(j+1)}, where Y_{i(1)} ≤ Y_{i(2)} ≤ · · · are the ordered values within the ith group. Show carefully how to calculate the P-value for t conditional on s₀.
(c) Apply the unconditional and conditional permutation tests to the following data:

         Group 1      Group 2
    A    3  5  4      4
    B    0            1  2  1

(Sections 4.3, 6.3.2; Welch and Fahey, 1994)
7. A randomized matched-pair experiment to compare two treatments produces paired responses from which the paired differences d_j = y_1j − y_2j are calculated for j = 1, . . . , n. The null hypothesis H₀ of no treatment difference implies that the d_j s are sampled from a distribution that is symmetric with mean zero, whereas the alternative hypothesis implies a positive mean difference. For any test statistic t, such as d̄, the exact randomization P-value Pr(T* ≥ t | H₀) is calculated under the null resampling model
\[
d^*_j = S_j d_j, \qquad j = 1, \ldots, n,
\]
where the S_j are independent and equally likely to be +1 and −1. What would be the corresponding nonparametric bootstrap sampling model F̂₀? Would the resulting bootstrap P-value differ much from the randomization P-value? See Practical 4.4 to apply the randomization and bootstrap tests to the following data, which are differences of measurements in eighths of an inch on cross- and self-fertilized plants grown in the same pot (taken from R. A. Fisher's famous discussion of Darwin's experiment):

49  −67  8  16  6  23  28  41  14  29  56  24  75  60  −48

(Sections 4.3, 4.4; Fisher, 1935, Table 3)

8.
For the two-sample problem of Example 4.16, consider fitting the null model by maximum likelihood. Show that the solution probabilities are given by
\[
p_{1j,0} = \frac{1}{n_1(\alpha + \lambda y_{1j})}, \qquad p_{2j,0} = \frac{1}{n_2(\beta - \lambda y_{2j})},
\]
where α, β and λ are the solutions to the equations Σ_j p_{1j,0} = 1, Σ_j p_{2j,0} = 1, and Σ_j y_{1j} p_{1j,0} = Σ_j y_{2j} p_{2j,0}. Under what conditions does this solution not exist, or give negative probabilities? Compare this null model with the one used in Example 4.16.

9.
For the ratio-testing problem of Example 4.15, obtain the nonparametric MLE of the joint distribution of (U, X). That is, if p_j is the probability attached to the data pair (u_j, x_j), maximize Π p_j subject to Σ p_j(x_j − θ₀u_j) = 0. Verify that the resulting distribution is the EDF of (U, X) when θ₀ = x̄/ū. Hence develop a numerical algorithm for calculating the p_j s for general θ₀.
Now choose probabilities p₁, . . . , p_n to minimize the distance
\[
d(p, q) = \sum_j p_j \log p_j - \sum_j p_j \log q_j,
\]
with q = (1/n, . . . , 1/n), subject to Σ_j (x_j − θ₀u_j) p_j = 0. Show that the solution is the exponential tilted EDF
\[
p_j \propto \exp\{\eta(x_j - \theta_0 u_j)\}.
\]
Verify that for small values of θ₀ − x̄/ū these p_j s are approximately the same as those obtained by the MLE method. (Section 4.4; Efron, 1981b)
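A numerical algorithm of the kind this problem asks for can be sketched as follows. The tilted mean of z_j = x_j − θ₀u_j is increasing in η (its derivative is the tilted variance), so η can be found by bisection. The Python fragment below is an illustration of the technique with made-up (u, x) values, not a worked solution from the book.

```python
import numpy as np

def tilted_probs(z, eta):
    """Exponential tilted EDF: p_j proportional to exp(eta * z_j)."""
    a = eta * z
    w = np.exp(a - a.max())               # stabilized against overflow
    return w / w.sum()

def solve_tilt(z, tol=1e-10):
    """Find eta such that sum_j p_j z_j = 0. The tilted mean is increasing
    in eta, so expand a bracket and bisect."""
    if z.min() >= 0 or z.max() <= 0:
        raise ValueError("0 must lie strictly inside the range of z")
    mean = lambda eta: tilted_probs(z, eta) @ z
    lo, hi = -1.0, 1.0
    while mean(lo) > 0:
        lo *= 2
    while mean(hi) < 0:
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Made-up (u, x) pairs and a trial null ratio theta0 (illustrative only)
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.array([2.1, 3.9, 6.5, 7.8, 10.4])
theta0 = 2.05
z = x - theta0 * u
eta = solve_tilt(z)
p = tilted_probs(z, eta)
print("eta =", eta, " tilted mean =", p @ z)
```

At θ₀ = x̄/ū the solution is η = 0 and the p_j reduce to the EDF weights 1/n, in line with the first part of the problem.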
10. Suppose that we wish to test the reduced-rank model H₀ : g(θ) = 0, where g(·) is a p₁-dimensional reduction of the p-dimensional θ. For the studentized pivot method we take
\[
Q = \{g(T) - g(\theta)\}^T V_g^{-1} \{g(T) - g(\theta)\},
\]
with data test value q₀ = g(t)ᵀ v_g^{−1} g(t), where v_g estimates var{g(T)}. Use the nonparametric delta method to show that var{g(T)} = ġ(t) V_L ġ(t)ᵀ, where ġ(θ) = ∂g(θ)/∂θᵀ. Show how the method can be applied to test equality of p means given p independent samples, assuming equal population variances. (Section 4.4.1)
11. In a parametric situation, suppose that an exact test is available with test statistic U, that S is sufficient under the null hypothesis, but that a parametric bootstrap test is carried out using T rather than U. Will the adjusted P-value p_adj always produce the exact test? (Section 4.5)

12.
In calculating the mean squared error for the simulation approximation to the adjusted P-value, it might be more reasonable to assume that P-values u_r follow a Beta distribution with parameters a and b which are close to, but not equal to, one. Show that in this case
\[
E(X_r^k) = \sum_{j=0}^{[(M+1)p]-1} \frac{\Gamma(M+1)\,\Gamma(a+j)\,\Gamma(b+M-j)\,\Gamma(a+b)}{\Gamma(j+1)\,\Gamma(M-j+1)\,\Gamma(a+b+M)\,\Gamma(a)\,\Gamma(b)},
\]
where X_r = I{Binom(M, u_r) < (M + 1)p}. Use this result to investigate numerically the choice of M. (Section 4.5)

13.
For the matched-pair experiment of Problem 4.7, suppose that we choose between the two test statistics t₁ = d̄ and t₂ = (n − 2m)^{−1} Σ_{j=m+1}^{n−m} d_(j), for some m in the range 2, . . . , [n/4], on the basis of their estimated variances v₁ and v₂, where
\[
v_1 = n^{-2}\sum_{j=1}^{n}(d_j - t_1)^2,
\]
\[
v_2 = \frac{\sum_{j=m+1}^{n-m}(d_{(j)} - t_2)^2 + m(d_{(m+1)} - t_2)^2 + m(d_{(n-m)} - t_2)^2}{n(n - 2m)}.
\]
Give a detailed description of the adaptive test as outlined in Section 4.4.2. To apply it to the data of Problem 4.7 with m = 2, see Practical 4.4. (Section 4.4.2; Donegani, 1991)

14.
Suppose that we want critical values for a size α one-sided test of H₀ : θ = θ₀ versus H_A : θ > θ₀. The ideal value is the 1 − α quantile t_{0,1−α} of the distribution of T under H₀, and this is estimated by the solution t̂_{0,1−α} to Pr*(T* ≥ t̂_{0,1−α} | F̂₀) = α. Typically t̂_{0,1−α} is biased. Consider an adjusted critical value t̂_{0,1−α−γ}. Obtain the double bootstrap algorithm for choosing γ, and compare the resulting test to the use of the adjusted P-value (4.34). (Sections 4.5, 3.9.1; Beran, 1988)
4.9 Practicals

1. The data in dataframe dogs are from a pharmacological experiment. The two variables are cardiac oxygen consumption (MVO) and left ventricular pressure (LVP). The data for n = 7 dogs are

MVO   78   92   116   90   106   78   99
LVP   32   33    45   30    38   24   44
Apply a bootstrap test for the hypothesis of zero correlation between MVO and LVP. Use R = 499 simulations. (Sections 4.3, 4.4)

2.
For the permutation test outlined in Example 4.12,
as we shall see in Section 5.4. However, variance approximations such as v_L can be somewhat unstable for small n, as in the previous example with n = 12.

Experience suggests that the method is most effective when θ is essentially a location parameter, which is approximately induced by the variance-stabilizing transformation (2.14). However, this requires knowing the variance function v(θ) = var(T | F), which is never available in the nonparametric case. A suitable transformation may sometimes be suggested by analogy with a parametric problem, as in the previous example. Then equations (5.10) and (5.11) will apply without change. Otherwise, a transformation can be obtained empirically using the technique described in Section 3.9.2, using either nested bootstrap estimates v* or delta method estimates v*_L with which to estimate values of the variance function v(θ). Equation (5.10) will then apply with the estimated transformation ĥ(·) in place of h(·). For the studentized bootstrap interval (5.11), if the transformation is determined empirically by (3.40), then the studentized values of the transformed estimates h(t*_r) are
\[
z^*_r = v(t^*_r)^{1/2}\{h(t^*_r) - h(t)\}/v^{*\,1/2}_r.
\]
On the original scale the (1 − 2α) studentized interval has endpoints
\[
h^{-1}\!\left[h(t) - v^{1/2}v(t)^{-1/2} z^*_{((R+1)(1-\alpha))}\right], \qquad
h^{-1}\!\left[h(t) - v^{1/2}v(t)^{-1/2} z^*_{((R+1)\alpha)}\right].
\]
If the distribution of U = h(T) is symmetric, the corresponding interval on the transformed scale has endpoints u*_{((R+1)α)} and u*_{((R+1)(1−α))}, whose transformation back to the θ scale is
\[
t^*_{((R+1)\alpha)}, \qquad t^*_{((R+1)(1-\alpha))}.
\qquad (5.18)
\]
Remarkably this 1 − 2α interval for θ does not involve h at all, and so can be computed without knowing h. The interval (5.18) is known as the bootstrap percentile interval, and was initially recommended in place of (5.6). As with most bootstrap methods, the percentile method applies for both parametric and nonparametric bootstrap sampling. Perhaps surprisingly, the method turns out not to work very well with the nonparametric bootstrap even when a suitable transformation h does exist. However, adjustments to the percentile method described below are successful for many statistics.

Example 5.4 (Air-conditioning data, continued) For the air-conditioning data discussed in Examples 5.1 and 5.2, the percentile method gives 95% intervals [70.8, 148.4] under the exponential model and [43.9, 192.1] under the nonparametric model. Neither is satisfactory, compared to accurate intervals such as the basic bootstrap interval using logarithmic transformation. ■
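The percentile interval (5.18) is trivial to compute once the t* are sorted. The Python sketch below (ours) uses the n = 12 air-conditioning failure times on the assumption that they are the Example 1.1 data of the book; with a different simulation seed the limits will differ a little from the [43.9, 192.1] quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Failure times assumed to be the Example 1.1 air-conditioning data, n = 12
y = np.array([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487], dtype=float)
t = y.mean()                               # about 108.08

R = 999
tstar = np.sort([rng.choice(y, size=y.size, replace=True).mean()
                 for _ in range(R)])       # nonparametric bootstrap means

alpha = 0.025
lo = tstar[round((R + 1) * alpha) - 1]      # (R+1)*alpha-th ordered value
hi = tstar[round((R + 1) * (1 - alpha)) - 1]

print("percentile (5.18):", (lo, hi))
print("basic (5.6), for comparison:", (2 * t - hi, 2 * t - lo))
```

Comparing the two printed intervals makes the "quantile swap" between the basic and percentile methods concrete.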
5.3.2 Adjusted percentile method

For the percentile method to work well, it would be necessary that T be unbiased on the transformed scale, so that the swap of quantile estimates be correct. This does not usually happen. Also the method carries the defect of the basic bootstrap method, that the shape of the distribution of T changes as the sampling distribution changes from F to F̂, even after transformation. In particular, the implied symmetrizing transformation often will not be quite the same as the variance-stabilizing transformation; this is the cause of the poor performance of the percentile method in Example 5.4. These difficulties need to be overcome if the percentile method is to be made accurate.

Parametric case with no nuisance parameters

We assume to begin with that the data are described by a parametric model with just the single unknown parameter θ, which is estimated by the maximum likelihood estimate t = θ̂. In order to develop the adjusted percentile method we make the simplifying assumption that for some unknown transformation h(·), unknown bias correction factor w and unknown skewness correction factor a, the transformed estimator U = h(T) for φ = h(θ) is normally distributed,
\[
U \sim N(\phi - w\sigma(\phi),\, \sigma^2(\phi)), \qquad \sigma(\phi) = 1 + a\phi.
\qquad (5.19)
\]
Writing U = φ + σ(φ)(Z − w) with Z standard normal, we have 1 + aU = (1 + aφ){1 + a(Z − w)}, so that
\[
a^{-1}\log(1 + aU) = a^{-1}\log(1 + a\phi) + a^{-1}\log\{1 + a(Z - w)\},
\]
which is monotone increasing in φ.
Therefore substitution of z_α for Z and u for U in this equation identifies the α confidence limit for φ, which is
\[
\hat\phi_\alpha = u + \sigma(u)\,\frac{w + z_\alpha}{1 - a(w + z_\alpha)}.
\]
Now the α confidence limit for θ is θ̂_α = h^{−1}(φ̂_α), but h(·) is unknown. However, if we denote the distribution function of T* by Ĝ, then
\[
\hat G(\hat\theta_\alpha) = \Pr{}^*(T^* \le \hat\theta_\alpha \mid t) = \Pr{}^*(U^* \le \hat\phi_\alpha \mid u)
= \Phi\!\left(w + \frac{w + z_\alpha}{1 - a(w + z_\alpha)}\right),
\]
so that
\[
\hat\theta_\alpha = \hat G^{-1}\!\left[\Phi\!\left(w + \frac{w + z_\alpha}{1 - a(w + z_\alpha)}\right)\right].
\qquad (5.21)
\]
To use this we need values for w and a. Since U* ∼ N(u − wσ(u), σ²(u)) given u, we have Pr*(T* ≤ t | t) = Pr*(U* ≤ u | u) = Φ(w), so that
\[
w = \Phi^{-1}\{\hat G(t)\}.
\qquad (5.22)
\]
In terms of simulation values,
\[
w = \Phi^{-1}\!\left(\frac{\#\{t^*_r \le t\}}{R + 1}\right),
\]
where # denotes the number of times the event occurs. The value of a can be determined informally using (5.19). Thus if ℓ(θ) denotes the log likelihood defined by (5.19), with derivative ℓ̇(θ), then it is easy to show that
\[
\frac{E\{\dot\ell(\theta)^3\}}{\mathrm{var}\{\dot\ell(\theta)\}^{3/2}} = 6a.
\]
To calculate a we approximate the moments of ℓ̇(θ) by those of ℓ̇*(θ̂) under the fitted model with parameter value θ̂, so that the skewness correction factor is
\[
a = \frac{1}{6}\,\frac{E^*\{\dot\ell^*(\hat\theta)^3\}}{\mathrm{var}^*\{\dot\ell^*(\hat\theta)\}^{3/2}},
\qquad (5.23)
\]
where ℓ* is the log likelihood of a set of data simulated from the fitted model. More generally a is one-sixth the standardized skewness of the linear approximation to T.

One potential problem with the BC_a method is that if ᾶ in (5.21) is much closer to 0 or 1 than α, then (R + 1)ᾶ could be less than 1 or greater than R, so that even with interpolation the relevant quantile cannot be calculated. If this happens, and if R cannot be increased, then it would be appropriate to quote the extreme value of t* and the implied value of α. For example, if (R + 1)ᾶ > R, then the upper confidence limit t*_{(R)} would be given with implied right-tail error α₂ equal to one minus the solution to ᾶ = R/(R + 1).

Example 5.5 (Air-conditioning data, continued) Returning to the problem of Example 5.4 and the exponential bootstrap results for R = 999, we find that the number of ȳ* values below ȳ = 108.083 is 535, so by (5.22) w = Φ^{−1}(535/1000) = 0.0878. The log likelihood for the exponential mean μ is ℓ(μ) = −n log μ − nȳ/μ, whose derivative is
\[
\dot\ell(\mu) = -\frac{n}{\mu} + \frac{n\bar y}{\mu^2}.
\]
The second and third moments of ℓ̇(μ) are nμ^{−2} and 2nμ^{−3}, so by (5.23) a = ⅓ n^{−1/2} = 0.0962.
Table 5.1 Calculation of adjusted percentile bootstrap confidence limits for μ with the data of Example 1.1, under the parametric exponential model with R = 999; a = 0.0962, w = 0.0878.

  α        z̃_α = w + z_α    ᾶ = Φ(w + z̃_α/(1 − a z̃_α))    r = (R + 1)ᾶ    limit
  0.025        −1.872                0.067                     67.00        65.26
  0.975         2.048                0.996                    995.83       199.41
  0.050        −1.557                0.103                    102.71        71.19
  0.950         1.733                0.985                    984.89       182.42
The calculation of the adjusted percentile limits (5.21) is illustrated in Table 5.1. The values of r = (R + 1)ᾶ are not integers, so we have applied the interpolation formula (5.8). Had we tried to calculate a 99% interval, we should have had to calculate the 999.88th ordered value of t*, which does not exist. The implied right-tail error for t*_{(999)} is the value α₂ which solves
\[
\frac{999}{1000} = \Phi\!\left(0.0878 + \frac{0.0878 + z_{1-\alpha_2}}{1 - 0.0962(0.0878 + z_{1-\alpha_2})}\right),
\]
namely α₂ = 0.0125. ■
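The arithmetic of Example 5.5 can be mimicked in a short Python sketch (ours, under stated assumptions): Φ and Φ^{−1} come from the standard library's NormalDist, the bootstrap series is regenerated from an exponential fit with mean 108.083 rather than taken from the book's run (so w and the limits will differ slightly from the text), and plain rounding stands in for the interpolation formula (5.8).

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
nd = NormalDist()

def bca_limit(tstar, w, a, alpha):
    """Adjusted percentile limit (5.21): alpha_tilde = Phi(w + z/(1 - a z))
    with z = w + z_alpha, then the (R+1)*alpha_tilde-th ordered t*.
    Plain rounding replaces the interpolation formula (5.8)."""
    z = w + nd.inv_cdf(alpha)
    alpha_tilde = nd.cdf(w + z / (1 - a * z))
    R = len(tstar)
    idx = min(max(round((R + 1) * alpha_tilde), 1), R)
    return np.sort(tstar)[idx - 1]

# Parametric bootstrap under the fitted exponential model (seed-dependent)
n, mu_hat, R = 12, 108.083, 999
tstar = rng.exponential(mu_hat, size=(R, n)).mean(axis=1)

w = nd.inv_cdf(np.sum(tstar <= mu_hat) / (R + 1))   # (5.22)
a = 1 / (3 * np.sqrt(n))                            # = 0.0962 for n = 12
print("w =", round(w, 4), " a =", round(a, 4))
print("95% BCa:", bca_limit(tstar, w, a, 0.025), bca_limit(tstar, w, a, 0.975))
```

With the book's own simulated series, this reproduces the limits of Table 5.1 up to the rounding noted above.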
Parametric case with nuisance parameters

When θ is one of several unknown parameters, the previous development applies to a derived distribution called the least-favourable family. As usual we denote the nuisance parameters by λ and write ψ = (θ, λ). If the log likelihood function for ψ based on all the data is …

… Finally, for the studentized bootstrap upper α confidence limit (5.7), we first calculate the variance approximation v = 2n^{−1}t² from the expected Fisher information matrix, and then the confidence limit is nt/c_{d,1−α}. The coverage of this limit is exactly α. Table 5.3 shows numerical values of coverages for the four methods in the case k = 10 and m = 2, where d = ½n = 10. The results show quite dramatically first how bad the basic and percentile methods can be if used without careful thought, and secondly how well studentized and adjusted percentile methods can do in a moderately difficult situation. Of course use of a logarithmic transformation would improve the basic bootstrap method, which would then give correct answers. ■
(Here ȳ_i is the average of y_{i1}, . . . , y_{im}.)
Table 5.3 Exact coverages (%) of confidence limits for the normal variance based on the maximum likelihood estimator for 10 samples each of size two.
  Nominal    Basic    Studentized    Percentile    BCa
    1.0        0.8        1.0            0.0         1.0
    2.5        2.5        2.5            0.0         2.5
    5.0        4.8        5.0            0.0         5.0
   95.0       35.0       95.0            1.6        91.5
   97.5       36.7       97.5            4.4       100.0
   99.0       38.3       99.0            6.9       100.0
Nonparametric case: single sample

The adjusted percentile method for the nonparametric case is developed by applying the method for the parametric case with no nuisance parameters to a specially constructed nonparametric exponential family with support on the data values, the least-favourable family derived from the multinomial distribution for frequencies of the data values under nonparametric resampling. Specifically, if l_j denotes the empirical influence value for t at y_j, then the resampling model for an individual Y* is the exponential tilted distribution
\[
\Pr(Y^* = y_j) = p_j = \frac{\exp(\eta l_j)}{\sum_{k=1}^{n}\exp(\eta l_k)}.
\qquad (5.26)
\]
The parameter of interest θ is a monotone function of η with inverse η(θ), say. The MLE of η is η̂ = η(t) = 0, which corresponds to the EDF F̂ being the nonparametric MLE of the sampling distribution F. The bias correction factor w is calculated as before from (5.22), but using nonparametric bootstrap simulation to obtain values of t*. The skewness correction a is given by the empirical analogue of (5.23), where now η̇(θ) is the first derivative dη(θ)/dθ.

When the moments needed in (5.23) are evaluated at θ̂, or equivalently at η̂ = 0, two simplifications occur. First we have E*(L*) = 0, and secondly the multiplier η̇(t) cancels when (5.23) is applied. The result is that
\[
a = \frac{1}{6}\,\frac{\sum_{j=1}^{n} l_j^3}{\left(\sum_{j=1}^{n} l_j^2\right)^{3/2}},
\qquad (5.27)
\]
which is the direct analogue of (5.25).

Example 5.8 (Air-conditioning data, continued) The nonparametric version of the calculations in the preceding example involves the same formula (5.21), but now with a = 0.0938 and w = 0.0728. The former constant is calculated from (5.27) with l_j = y_j − ȳ. The confidence limit calculations are shown in Table 5.4 for 90% and 95% intervals. ■
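The adjusted level ã = Φ(w + (w + z_α)/{1 − a(w + z_α)}) underlying these calculations is easy to code. The sketch below (plain Python; the function names are ours, and Φ⁻¹ is obtained by bisection from math.erf) reproduces the adjusted levels implied by the constants quoted above, a = 0.0938 and w = 0.0728.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0, tol=1e-12):
    """Standard normal quantile z_p by bisection (Phi is increasing)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bca_level(alpha, a, w):
    """Adjusted percentile level atilde: the BCa limit is then the
    ((R + 1) * atilde)-th ordered bootstrap value t*."""
    z = w + Phi_inv(alpha)
    return Phi(w + z / (1.0 - a * z))
```

With a = w = 0 the adjustment vanishes and ã = α, so the method reduces to the ordinary percentile method.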
Table 5.4 Calculation of adjusted percentile bootstrap confidence limits for μ in Example 1.1 using the nonparametric bootstrap with R = 999; a = 0.0938, w = 0.0728.

      α     w + z_α    ã = Φ(w + (w + z_α)/{1 − a(w + z_α)})    (R + 1)ã    confidence limit t*_{(r)}
    0.025   −1.8872                  0.0629                       62.93          55.33
    0.975    2.0327                  0.9951                      995.12         243.50
    0.050   −1.5721                  0.0973                       97.26          61.50
    0.950    1.7176                  0.9830                      983.01         202.08
If t is a function of sample moments, say t = t(s̄) where s̄_i = n⁻¹ Σ_{j=1}^n s_i(y_j) for i = 1, . . . , k, then (5.26) is a one-dimensional reduction of a k-dimensional exponential family for s̄*₁(Y*), . . . , s̄*_k(Y*). By equation (2.38) the influence values l_j for t are given simply by l_j = ṫᵀ{s(y_j) − s̄} with ṫ = ∂t/∂s̄.

The method as described will apply as given to any single-sample problem, and to most regression problems (Chapters 6 and 7), but not exactly to problems where statistics are based on several independent samples, including stratified samples.

Nonparametric case: several samples

In the parametric case the BCa method as described applies quite generally through the unifying likelihood function. In the nonparametric case, however, there are predictable changes in the BCa method. The background approximation methods are described in Section 3.2.1, which defines an estimator in terms of the EDFs of k samples, t = t(F̂₁, . . . , F̂_k). The empirical influence values l_ij for j = 1, . . . , n_i and i = 1, . . . , k and the variance approximation v_L are defined in (3.2) and (3.3).

If we return to the origin and development of the BCa method, we see that the definition of the bias correction w in (5.22) will remain the same. The skewness correction a will again be one-sixth the estimated standardized skewness of the linear approximation to t, which here is

    a = (1/6) Σ_i n_i⁻³ Σ_j l_ij³ / (Σ_i n_i⁻² Σ_j l_ij²)^{3/2}.    (5.28)

This can be verified as an application of the parametric method by constructing the least-favourable joint family of k distributions from the k multinomial distributions on the data values in the k samples. Note that (5.28) can be expressed in the same form as (5.27) by defining l̃_ij = n l_ij/n_i, where n = Σ n_i, so that

    v_L = n⁻² Σ_{i,j} l̃_ij²,    a = (1/6) Σ_{i,j} l̃_ij³ / (Σ_{i,j} l̃_ij²)^{3/2};    (5.29)
see Problem 3.7. This can be helpful in writing an all-purpose algorithm for the BCa method; see also the discussion of the ABC method in the next section. An example is given at the end of the next section.
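An all-purpose routine of this kind needs little more than the rescaled values l̃_ij = n l_ij/n_i of (5.29); a minimal Python sketch (the function and its argument layout are ours, not the book's):

```python
def bca_a_strata(influence):
    """Skewness correction a of (5.29) from empirical influence values.
    `influence` is a list of lists, one list per sample: influence[i][j] = l_ij."""
    n = sum(len(li) for li in influence)
    # rescale: ltilde_ij = n * l_ij / n_i
    ltilde = [n * lij / len(li) for li in influence for lij in li]
    s2 = sum(l * l for l in ltilde)
    s3 = sum(l ** 3 for l in ltilde)
    return s3 / (6.0 * s2 ** 1.5)
```

With a single sample the rescaling is the identity and the formula reduces to (5.27).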
5.4 Theoretical Comparison of Methods

The studentized bootstrap and adjusted percentile methods for calculating confidence limits are inherently more accurate than the basic bootstrap and percentile methods. This is quite clear from empirical evidence. Here we look briefly at the theoretical side of the story for statistics which are approximately normal. Some aspects of the theory were discussed in Section 2.6.1. For simplicity we shall restrict most of the detailed discussion to the single-sample case, but the results generalize without much difficulty.
5.4.1 Second-order accuracy

To assess the accuracies of the various bootstrap confidence limits we calculate coverage probabilities up to the n^{−1/2} terms in series approximations, these based on corresponding approximations for the CDFs of U = (T − θ)/v^{1/2} and Z = (T − θ)/V^{1/2}. Here v is var(T) or any approximation which agrees to first order with v_L, the variance of the linear approximation to T. Similarly V is assumed to agree to first order with V_L. For example, in the scalar parametric case where T is the maximum likelihood estimator, v is the inverse of the expected Fisher information. In all of the equations in this section equality is correct to order n^{−1/2}, i.e. ignoring errors of order n⁻¹. The relevant approximations for the CDFs are the one-term Cornish–Fisher approximations.

There are several possible resampling schemes that could be used here, including those described in Section 3.5 but modified to fix the constant hazard ratio θ₀. Here we use the simpler conditional model of Example 4.4, which holds fixed the survival and censoring times. Then for any fixed θ₀ the simulated values y*₁, . . . , y*ₙ are generated by
Figure 5.3 Bootstrap P-values p(θ₀) for testing constant hazard ratio θ₀, with R = 199 at each point, plotted against log(theta). The solid curve is a spline fit on the logistic scale. The dotted lines interpolate the solutions to p(θ₀) = 0.05, 0.95, which are the endpoints of the 90% confidence interval.
where the numbers at risk just prior to z_j are given by

    r_{1j} = max{0, r_{11} − Σ_{k=1}^{j−1} (1 − y*_k) − c_{1j}},
    r_{2j} = max{0, r_{21} − Σ_{k=1}^{j−1} y*_k − c_{2j}},
with c_{ij} the number of censoring times in group i before z_j. For the AML data we simulated R = 199 samples in this way, and calculated the corresponding values t*(θ₀) for a grid of 21 values of θ₀ in the range 0.5 ≤ θ₀ ≤ 10. For each θ₀ we computed the one-sided P-value

    p(θ₀) = #{t*(θ₀) > t(θ₀)} / 200,

then on the logit scale we fitted a spline curve (in log θ), and interpolated the solutions to p(θ₀) = α, 1 − α to determine the endpoints of the (1 − 2α) confidence interval for θ. Figure 5.3 illustrates this procedure for α = 0.05, which gives the 90% confidence interval [1.07, 6.16]; the 95% interval is [0.86, 7.71] and the point estimate is 2.52. Thus there is mild evidence that θ > 1.

A more efficient approach would be to use R = 99 for the initial grid to determine rough values of the confidence limits, near which further simulation with R = 999 would provide accurate interpolation of the confidence limits. Yet more efficient algorithms are possible. ■

In a more systematic development of the method, we must allow for a nuisance parameter λ, say, which also governs the data distribution but is not constrained by H₀. Then both R_α(θ) and C_{1−α}(Y₁, . . . , Yₙ) must depend upon λ to make the inversion method work exactly. Under the bootstrap approach λ is replaced by an estimate.
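The grid-and-interpolate step generalizes easily. In the sketch below (plain Python; the names are ours), `pvalue` stands for the simulated bootstrap P-value function p(θ₀), assumed smooth and decreasing over a grid of positive trial values; the solution of p(θ₀) = α is found by linear interpolation of logit(p) against log θ₀.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def invert_pvalue(pvalue, grid, alpha):
    """Solve p(theta0) = alpha by linear interpolation of logit(p)
    against log(theta0) over a grid of positive trial values."""
    xs = [math.log(g) for g in grid]
    ys = [logit(pvalue(g)) for g in grid]
    target = logit(alpha)
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if (y0 - target) * (y1 - target) <= 0.0:   # target bracketed here
            x = x0 + (target - y0) * (x1 - x0) / (y1 - y0)
            return math.exp(x)
    raise ValueError("p(theta0) = alpha not bracketed by the grid")
```

Applying this twice, at α and at 1 − α, gives the endpoints of the equi-tailed (1 − 2α) interval.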
Suppose, for example, that we want a lower 1 − α confidence limit, which is obtained via the critical region for testing H₀ : θ = θ₀ versus the alternative hypothesis H_A : θ > θ₀. Define ψ = (θ, λ). If the test statistic is T(θ₀), then the size α critical region has the form

    R_α(θ₀) = {(y₁, . . . , yₙ) : Pr{T(θ₀) ≥ t(θ₀) | ψ = (θ₀, λ)} ≤ α},

and the exact lower confidence limit is the value u_α = u_α(y, λ) such that

    Pr{T(u_α) ≥ t(u_α) | ψ = (u_α, λ)} = α.

We replace λ by an estimate s, say, to obtain the lower 1 − α bootstrap confidence limit û_{1−α} = u_α(y, s). The solution is found by solving for u the equation

    Pr*{T*(u) ≥ t(u) | ψ = (u, s)} = α,

where T*(u) follows the distribution under ψ = (u, s). This requires application of an interpolation method such as the one illustrated in the previous example.

The simplest test statistic is the point estimate T of θ, and then T(θ₀) = T. The method will tend to be more accurate if the test statistic is the studentized estimate. That is, if var(T) = σ²(θ, λ), then we take Z = (T − θ₀)/σ(θ₀, s); for further details see Problem 5.11. The same remark would apply to score statistics, such as that in the previous example, where studentization would involve the observed or expected Fisher information.

Note that for the particular alternative hypothesis used to derive an upper limit, it would be standard practice to define the P-value as Pr{T(θ₀) ≤ t(θ₀) | F₀}, for example if T(θ₀) were an estimator for θ or its studentized form. Equivalently one can retain the general definition and solve p(θ₀) = 1 − α for an upper limit.

In principle these methods can be applied to both parametric and semiparametric problems, but not to completely nonparametric problems.
5.6 Double Bootstrap Methods

Whether the basic or percentile bootstrap method is used to calculate confidence intervals, there is a possibly non-negligible difference between the nominal 1 − α coverage and the actual probability coverage of the interval in repeated sampling, even if R is very large. The difference represents a bias in the method, and as indicated in Section 3.9 the bootstrap can be used to estimate and correct for such a bias. That is, by bootstrapping a bootstrap confidence interval method it can be made more accurate. This is analogous to the bootstrap adjustment for bootstrap P-values described in Section 4.5. One straightforward application of this idea is to the normal-approximation confidence interval (5.4), which produces the studentized bootstrap interval;
see Problem 5.12. A more ambitious application is bootstrap adjustment of the basic bootstrap confidence limit, which we develop here.

First we recall the full notation for the quantities involved in the basic bootstrap confidence interval method. The "ideal" upper 1 − α confidence limit is t(F̂) − a_α(F), where

    Pr{T − θ ≤ a_α(F) | F} = Pr{t(F̂) − t(F) ≤ a_α(F) | F} = α.

What is calculated, ignoring simulation error, is the confidence limit t(F̂) − a_α(F̂). The bias in the method arises from the fact that a_α(F̂) ≠ a_α(F) in general, so that

    Pr{t(F) ≤ t(F̂) − a_α(F̂) | F} ≠ 1 − α.    (5.52)

We could try to eliminate the bias by adding a correction to a_α(F̂), but a more successful approach is to adjust the subscript α. That is, we replace a_α(F̂) by a_{q(α)}(F̂) and estimate what the adjusted value q(α) should be. This is in the same spirit as the BCa method. Ideally we want q(α) to satisfy

    Pr{t(F) ≤ t(F̂) − a_{q(α)}(F̂) | F} = 1 − α.    (5.53)

The solution q(α) will depend upon F, i.e. q(α) = q(α, F). Because F is unknown, we estimate q(α) by q̂(α) = q(α, F̂). This means that we obtain q̂(α) by solving the bootstrap version of (5.53), namely

    Pr*{t(F̂) ≤ t(F̂*) − a_{q̂(α)}(F̂*) | F̂} = 1 − α.    (5.54)

This looks intimidating, but from the definition of a_α(F) we see that (5.54) can be rewritten as

    Pr*{Pr**(T** ≤ 2T* − t | F̂*) ≥ q̂(α) | F̂} = 1 − α.    (5.55)
The same method of adjustment can be applied to any bootstrap confidence limit method, including the percentile method (Problem 5.13) and the studentized bootstrap method (Problem 5.14).

To verify that the nested bootstrap reduces the order of coverage error made by the original bootstrap confidence limit, we can apply the general discussion of Section 3.9.1. In general we find that coverage 1 − α + O(n^{−a}) is corrected to 1 − α + O(n^{−a−1/2}) for one-sided confidence limits, whether a = ½ or 1. However, for equi-tailed confidence intervals coverage 1 − 2α + O(n⁻¹) is corrected to 1 − 2α + O(n⁻²); see Problem 5.15.

Before discussing how to solve equation (5.55) using simulated samples, we look at a simple illustrative example where the solution can be found theoretically.

Example 5.12 (Exponential mean) Consider the parametric problem of exponential data with unknown mean μ. The data estimate for μ is t = ȳ, F̂ is
the fitted exponential CDF with mean ȳ, and F̂* is the fitted exponential CDF with mean ȳ*, the mean of a parametric bootstrap sample y*₁, . . . , y*ₙ drawn from F̂. A result that we use repeatedly is that if X₁, . . . , Xₙ are independent exponentials with mean μ, then 2nX̄/μ has the χ²₂ₙ distribution.

The basic bootstrap upper 1 − α confidence limit for μ is 2ȳ − ȳc_{2n,α}/(2n), where Pr(χ²₂ₙ ≤ c_{2n,α}) = α. To evaluate the left-hand side of (5.55), for the inner probability we have

    Pr**(Ȳ** ≤ 2ȳ* − ȳ | F̂*) = Pr{χ²₂ₙ ≤ 2n(2 − ȳ/ȳ*)},

which exceeds q̂ if and only if 2n(2 − ȳ/ȳ*) ≥ c_{2n,q̂}. Therefore the outer probability on the left-hand side of (5.55) is

    Pr*{2n(2 − ȳ/Ȳ*) ≥ c_{2n,q̂} | F̂} = Pr{χ²₂ₙ ≥ 2n/(2 − c_{2n,q̂}/(2n))},    (5.56)

with q̂ = q̂(α). Setting the probability on the right-hand side of (5.56) equal to 1 − α, we deduce that

    2n/(2 − c_{2n,q̂(α)}/(2n)) = c_{2n,α}.
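The relation just deduced can be checked numerically. The sketch below (plain Python; all simulation sizes are our choices) estimates the χ²₂ₙ quantiles by Monte Carlo, solves the display above for c_{2n,q̂(α)}, and compares the coverage of the basic limit at level α with that of the same limit recomputed at level q̂(α).

```python
import random

def chi2_2n(rng, n):
    """One chi-squared variate on 2n degrees of freedom: twice a sum of n Exp(1)."""
    return 2.0 * sum(rng.expovariate(1.0) for _ in range(n))

def qhat_and_coverages(n=10, alpha=0.1, ndraw=20000, nsim=4000, seed=1):
    rng = random.Random(seed)
    draws = sorted(chi2_2n(rng, n) for _ in range(ndraw))
    c_alpha = draws[int(alpha * ndraw)]            # Monte Carlo c_{2n, alpha}
    c_q = 2.0 * n * (2.0 - 2.0 * n / c_alpha)      # c_{2n, qhat} from the display
    qhat = sum(d <= c_q for d in draws) / ndraw    # implied adjusted level qhat(alpha)
    basic = adjusted = 0
    for _ in range(nsim):
        ybar = sum(rng.expovariate(1.0) for _ in range(n)) / n   # true mean 1
        if 1.0 <= ybar * (2.0 - c_alpha / (2.0 * n)):
            basic += 1
        if 1.0 <= ybar * (2.0 - c_q / (2.0 * n)):
            adjusted += 1
    return qhat, basic / nsim, adjusted / nsim
```

With these settings the limit at level q̂(α) attains close to the nominal 90% coverage, while the basic limit at level α falls noticeably short.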
Using q̂(α) in place of α in the basic bootstrap confidence limit gives the adjusted upper 1 − α confidence limit 2nȳ/c_{2n,α}, which has exact coverage 1 − α. So in this case the double bootstrap adjustment is perfect. Figure 5.4 shows the actual coverages of nominal 1 − α bootstrap upper confidence limits when n = 10. There are quite large discrepancies for both basic and percentile methods, which are completely removed using the double bootstrap adjustment; see Problem 5.13. ■

In general, and especially for nonparametric problems, the calculations in (5.55) cannot be done exactly and simulation or approximation methods must be used. A basic simulation algorithm is as follows. Suppose that we draw R samples from F̂, and denote the model fitted to the rth sample by F̂*_r (the EDF for one-sample nonparametric problems). Define

    u_r = Pr(T** ≤ 2t*_r − t | F̂*_r).

This will be approximated by drawing M samples from F̂*_r, calculating the estimator values t**_m for m = 1, . . . , M, and computing the estimate (here I{A} is the zero-one indicator function of the event A)
    u_{M,r} = M⁻¹ Σ_{m=1}^M I{t**_m ≤ 2t*_r − t}.
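In plain Python, for the sample mean of a small hypothetical data set and deliberately small R and M, the nested calculation of the u_{M,r} and the resulting adjusted basic limit can be sketched as follows (the function names are ours; q̂(α) is taken to be the α quantile u_{M,((R+1)α)} of the ordered u-values).

```python
import random

def double_bootstrap_u(y, R=199, M=50, seed=42):
    """For each first-level resample, estimate u_r = Pr**(T** <= 2 t*_r - t)
    by M second-level resamples; the statistic is the sample mean."""
    rng = random.Random(seed)
    n = len(y)
    t = sum(y) / n
    tstar, u = [], []
    for _ in range(R):
        ystar = [rng.choice(y) for _ in range(n)]
        ts = sum(ystar) / n
        tstar.append(ts)
        hits = 0
        for _ in range(M):
            ystar2 = [rng.choice(ystar) for _ in range(n)]
            if sum(ystar2) / n <= 2.0 * ts - t:
                hits += 1
        u.append(hits / M)
    return t, sorted(tstar), sorted(u)

def adjusted_basic_upper(y, alpha=0.05, R=199, M=50, seed=42):
    """Adjusted basic upper 1 - alpha limit: replace alpha by qhat(alpha),
    then apply the usual basic-method formula 2t - t*_{((R+1)qhat)}."""
    t, tstar, u = double_bootstrap_u(y, R, M, seed)
    qhat = u[max(int(alpha * (R + 1)) - 1, 0)]
    k = min(max(int(qhat * (R + 1)) - 1, 0), R - 1)
    return t, qhat, 2.0 * t - tstar[k]
```

The double loop makes the RM cost of the method explicit; the savings discussed in Chapter 9 attack exactly this cost.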
Figure 5.4 Actual coverages of percentile (dotted line) and basic bootstrap (dashed line) upper confidence limits for the exponential mean when n = 10, plotted against nominal coverage. The solid line is attained by the nested bootstrap confidence limits.
Then the Monte Carlo version of (5.55) is

    R⁻¹ Σ_{r=1}^R I{u_{M,r} ≥ q̂(α)} = 1 − α,

which is to say that q̂(α) is the α quantile of the u_{M,r}. The simplest way to obtain q̂(α) is to order the values u_{M,r} into u_{M,(1)} ≤ · · · ≤ u_{M,(R)} and then set q̂(α) = u_{M,((R+1)α)}. What this amounts to is that the (R + 1)αth ordered value is read off from a Q-Q plot of the u_{M,r} against quantiles of the U(0, 1) distribution, and that ordered value is then used to give the required quantile of the t* − t. We illustrate this in the next example.

The total number of samples involved in this calculation is RM. Since we always think of simulating as many as 1000 samples to approximate probabilities, here this would suggest as many as 10⁶ samples overall. The calculations of Section 4.5 would suggest something a bit smaller, say M = 249 to be safe, but this is still rather impractical. However, there are ways of greatly reducing the overall number of simulations, two of which are described in Chapter 9.

Example 5.13 (Kernel density estimate) Bootstrap confidence intervals for the value of a density raise some awkward issues, which we now discuss, before outlining the use of the nested bootstrap in this context. The standard kernel estimate of the PDF f(y) given a random sample y₁, . . . , yₙ is

    f̂(y; h) = (nh)⁻¹ Σ_{j=1}^n w{h⁻¹(y − y_j)},
where w(·) is a symmetric density with mean zero and unit variance, and h is the bandwidth. One source of difficulty is that if we consider the estimator to be t(F̂), as we usually do, then t(F) = h⁻¹ ∫ w{h⁻¹(y − x)} f(x) dx is being estimated, not f(y). The mean and variance of f̂(y; h) are approximately

    f(y) + ½h²f″(y),    (nh)⁻¹ f(y) ∫ w²(u) du,    (5.57)

for small h and large n. In general one assumes that as n → ∞ so h → 0 in such a way that nh → ∞, and this makes both bias and variance tend to zero as n increases. The density estimate then has the form tₙ(F̂), with tₙ(F) → f(y) as n → ∞. Because the variance in (5.57) is approximately proportional to the mean, it makes sense to work with the square root of the estimate. That is, we take T = {f̂(y; h)}^{1/2} as estimator of θ = {f(y)}^{1/2}. By the delta method of Section 2.7.1 we have from (5.57) that the approximate mean and variance of T are

    {f(y)}^{1/2} + ¼{f(y)}^{−1/2}{h²f″(y) − ½(nh)⁻¹K},    ¼(nh)⁻¹K,    (5.58)

where K = ∫ w²(u) du.

There remains the problem of choosing h. For point estimation of f(y) it is usually suggested, on the grounds of minimizing mean squared error, that one take h ∝ n^{−1/5}. This makes both bias and standard error of order n^{−2/5}. But there is no reason to do the same for setting confidence intervals, and in fact h ∝ n^{−1/5} turns out to be a poor choice, particularly for standard bootstrap methods, as we now show.

Suppose that we resample y*₁, . . . , y*ₙ from the EDF F̂. Then the bootstrap version of the density estimate, that is

    f̂*(y; h) = (nh)⁻¹ Σ_{j=1}^n w{h⁻¹(y − y*_j)},

has mean exactly equal to f̂(y; h); the approximate variance is the same as in (5.57) except that f̂(y; h) replaces f(y). It follows that T* = {f̂*(y; h)}^{1/2} has approximate mean and variance

    {f̂(y; h)}^{1/2} − ⅛{f̂(y; h)}^{−1/2}(nh)⁻¹K,    ¼(nh)⁻¹K.    (5.59)

Now consider the studentized estimates

    Z = [{f̂(y; h)}^{1/2} − {f(y)}^{1/2}] / {½(nh)^{−1/2}K^{1/2}},
    Z* = [{f̂*(y; h)}^{1/2} − {f̂(y; h)}^{1/2}] / {½(nh)^{−1/2}K^{1/2}}.

From (5.58) and (5.59) we see that if h = n^{−1/5}, then as n increases

    Z ≐ ε + ½{f(y)}^{−1/2}K^{−1/2}f″(y),    Z* ≐ ε*,
Figure 5.5 Studentized quantities for density estimation. The left panels show values of Z when h = n^{−1/5} for 500 standard normal samples of sizes n between 20 and 1000, and 500 bootstrap values for one sample at each n. The right panels show the corresponding values when h = n^{−1/3}.
where both ε and ε* are N(0, 1). This means that quantiles of Z cannot be well approximated by quantiles of Z*, no matter how large n is. The same thing happens for the untransformed density estimate.

There are several ways in which we can try to overcome this problem. One of the simplest is to change h to be of order n^{−1/3}, when calculations similar to those above show that Z ≐ ε and Z* ≐ ε*. Figure 5.5 illustrates the effect. Here we estimate the density at y = 0 for samples from the N(0, 1) distribution, with w(·) the standard normal density. The first two panels show box plots of 500 values of z and z* when h = n^{−1/5}, which is near-optimal for estimation in this case, for several values of n; the values of z* are obtained by resampling from one dataset. The last two panels correspond to h = n^{−1/3}. The figure confirms the key points of the theory sketched above: that Z is biased away from zero when h = n^{−1/5}, but not when h = n^{−1/3}; and that the distributions of Z and Z* are quite stable and similar when h = n^{−1/3}.

Under resampling from F̂, the studentized bootstrap applied to {f̂(y; h)}^{1/2} should be consistent if h ∝ n^{−1/3}. From a practical point of view this means considerable undersmoothing in the density estimate, relative to standard practice for estimation. A bias in Z of order n^{−1/3} or worse will remain, and this suggests a possibly useful role for the double bootstrap.

For a numerical example of nested bootstrapping in this context we revisit Example 4.18, where we discussed the use of a kernel density estimate in estimating species abundance. The estimated PDF is

    f̂(y; h) = (nh)⁻¹ Σ_{j=1}^n φ{h⁻¹(y − y_j)},

where φ(·) is the standard normal density, and the value of interest is f̂(0; h), which is used to estimate f(0). In light of the previous discussion, we base
5.6 ■Double Bootstrap M ethods Figure 5.6 Adjusted bootstrap procedure for variance-stabilized density estimate f = {/(0;0.5)}1/2 for the tuna data. The left panel shows the EDF of 1000 values of I* —t. The right panel shows a plot of the ordered u'Mr against quantiles r/(R + 1) of the 1/(0,1) distribution. The dashed line shows how the quantiles of the u are used to obtain improved confidence limits, by using the right panel to read off the estimated coverage q{a) corresponding to the required nominal coverage a, and then using the left panel to read off the q(a) quantile of t* —t.
229
o o
O
■0) O
LU
fo E
LU
t*-t
Nominal coverage
confidence intervals on the variance-stabilized estimate t = {f̂(0; h)}^{1/2}. We also use a value of h considerably smaller than the value (roughly 1.5) used to estimate f in Example 4.18.

The right panel of Figure 5.6 shows the quantiles of the u*_{M,r} obtained when the double bootstrap bias adjustment is applied with R = 1000 and M = 250, for the estimate with bandwidth h = 0.5. If T* − t were an exact pivot, the distribution of the u* would lie along the dotted line, and nominal and estimated coverage would be equal. The distribution is close to uniform, confirming our decision to use a variance-stabilized statistic. The dashed line shows how the distribution of the u* is used to remove the bias in coverage levels. For an upper confidence limit with nominal level 1 − α = 0.9, so that α = 0.1, the estimated level is q̂(0.1) = 0.088. The 0.088 quantile of the values of t*_r − t is t*_(88) − t = −0.091, while the 0.10 quantile is t*_(100) − t = −0.085. The corresponding upper 90% confidence limits for {f(0)}^{1/2} are t − (t*_(88) − t) = 0.356 − (−0.091) = 0.447 and t − (t*_(100) − t) = 0.356 − (−0.085) = 0.441. For this value of α the adjustment has only a small effect.

Table 5.7 compares the 95% limits for f(0) for different methods, using bandwidth h = 0.5, for which f̂(0; 0.5) = 0.127. The longer upper tail for the double bootstrap interval is a result of adjusting the nominal α = 0.025 to q̂(0.025) = 0.004; at the upper tail we obtain q̂(0.975) = 0.980. The lower tail of the interval agrees well with the other second-order correct methods. For larger values of h the density estimates are higher and the confidence intervals narrower.
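The kernel calculations in this example are easy to reproduce for any data set; a minimal version of the estimator, with w the standard normal density (plain Python; the helper and the data in the usage check are ours):

```python
import math

def kernel_density(y, x, h):
    """Kernel estimate fhat(x; h) = (nh)^{-1} sum_j w{(x - y_j)/h},
    with w the standard normal density."""
    n = len(y)
    c = 1.0 / math.sqrt(2.0 * math.pi)
    return sum(c * math.exp(-0.5 * ((x - yj) / h) ** 2) for yj in y) / (n * h)
```

Since the kernel is a density, the estimate integrates to one over the whole line, which gives a quick numerical check of an implementation.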
Table 5.7 Upper and lower endpoints of 95% confidence limits for f(0) for the tuna data, with bandwidth h = 0.5; † indicates use of the square-root transformation.

             Basic   Basic†   Student   Student†   Percentile    BCa    Double
    Upper    0.204   0.240     0.273     0.266       0.218      0.240   0.301
    Lower    0.036   0.060     0.055     0.058       0.048      0.058   0.058

In Example 9.14 we describe how saddlepoint methods can greatly reduce the time taken to perform the double bootstrap in this problem. It might be possible to avoid the difficulties caused by the bias of the kernel estimate by using a clever resampling scheme, but it would be more complicated than the direct approach described above. ■

5.7 Empirical Comparison of Bootstrap Methods

The several bootstrap confidence limit methods can be compared theoretically on the basis of first- and second-order accuracy, as in Section 5.4, but this really gives only suggestions as to which methods we would expect to be good. The theory needs to be bolstered by numerical comparisons. One rather extreme comparison was described in Example 5.7. In this section we consider one moderately complicated application, estimation of a ratio of means, and assess through simulation the performances of the main bootstrap confidence limit methods. The conclusions appear to agree qualitatively with the results of other simulation studies involving applications of similar complexity: references to some of these are given in the bibliographic notes at the end of the chapter.

The application here is similar to that in Example 5.10, and concerns the ratio of means for data from two different gamma distributions. The first sample of size n₁ is drawn from a gamma distribution with mean μ₁ = 100 and index 0.7, while the second independent sample of size n₂ is drawn from the gamma distribution with mean μ₂ = 50 and index 1. The parameter θ = μ₁/μ₂, whose value is 2, is estimated by the ratio of sample means t = ȳ₁/ȳ₂. For particular choices of sample sizes we simulated 10000 datasets and to each applied several of the nonparametric bootstrap confidence limit methods discussed earlier, always with R = 999. We did not include the double bootstrap method. As a control we added the exact parametric method when the gamma indexes are known: this turns out not to be a strong control, but it does provide a check on simulation validity. The results quoted here are for two cases, n₁ = n₂ = 10 and n₁ = n₂ = 25. In each case we assess the left- and right-tail error rates of confidence intervals, and their lengths.

Table 5.8 shows the empirical error rates for both cases, as percentages, for nominal rates between 1% and 10%: simulation standard errors are the rates divided by 100.
Table 5.8 Empirical error rates (%) for nonparametric bootstrap confidence limits in ratio estimation: rates for sample sizes n₁ = n₂ = 10 are given above those for sample sizes n₁ = n₂ = 25. R = 999 for all bootstrap methods. 10000 datasets generated from gamma distributions.

                                     Nominal error rate
                               Lower limit              Upper limit
    Method                    1    2.5     5     10     10     5    2.5     1
    Exact                   1.0    2.8   5.5   10.5    9.8   4.8   2.6    1.0
                            1.0    2.3   4.8    9.9   10.2   4.9   2.5    1.1
    Normal approximation    0.1    0.5   1.7    6.3   20.6  15.7  12.5    9.6
                            0.1    0.5   2.1    6.4   16.3  11.5   8.2    5.5
    Basic                   0.0    0.0   0.2    1.8   24.4  21.0  18.6   16.4
                            0.0    0.1   0.4    3.0   19.2  15.0  12.5   10.3
    Basic, log scale        2.6    4.9   8.1   12.9   13.1   7.5   4.8    2.5
                            1.6    3.2   6.0   11.4   11.5   6.3   3.3    1.7
    Studentized             0.6    2.1   4.6    9.9   11.9   6.7   4.0    2.0
                            0.8    2.3   4.6    9.9   10.9   5.9   3.0    1.4
    Studentized, log scale  1.1    2.8   5.6   10.7   11.6   6.3   3.5    1.7
                            1.1    2.5   5.0   10.1   10.8   5.7   2.9    1.3
    Bootstrap percentile    1.8    3.6   6.5   11.6   14.6   8.9   5.9    3.3
                            1.2    2.6   5.1   10.1   12.6   7.1   4.2    2.1
    BCa                     1.9    4.0   6.9   12.3   14.0   8.3   5.3    3.0
                            1.4    3.0   5.6   10.9   11.8   6.8   3.8    1.9
    ABC                     1.9    4.2   7.4   12.7   14.6   8.7   5.5    3.1
                            1.3    3.0   5.7   11.0   12.1   6.8   3.7    1.9
The normal approximation method uses the delta method variance approximation. The results suggest that the studentized method gives the best results, provided the log scale is used. Otherwise, the studentized method and the percentile, BCa and ABC methods are comparable, but only really satisfactory at the larger sample sizes.

Figure 5.7 shows box plots of the lengths of 1000 confidence intervals for both sample sizes. The most pronounced feature for n₁ = n₂ = 10 is the long, sometimes very long, lengths for the two studentized methods, which helps to account for their good error rates. This feature is far less prominent at the larger sample sizes. It is noticeable that the normal, percentile, BCa and ABC intervals are short compared to the exact ones, and that taking logs improves the basic intervals. Similar comments apply when n₁ = n₂ = 25, but with less force.
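The skeleton of such a simulation study is easy to set up. The sketch below (plain Python, basic method only, with far fewer datasets and replicates than the 10000 and R = 999 used above, so its rates are only rough) draws the two gamma samples with random.gammavariate and tallies the tail errors for the true value θ = 2.

```python
import random

def basic_interval(y1, y2, rng, R=99, alpha=0.025):
    """Equi-tailed basic bootstrap interval for the ratio of sample means."""
    n1, n2 = len(y1), len(y2)
    t = (sum(y1) / n1) / (sum(y2) / n2)
    tstar = sorted(
        (sum(rng.choice(y1) for _ in range(n1)) / n1)
        / (sum(rng.choice(y2) for _ in range(n2)) / n2)
        for _ in range(R)
    )
    lo = 2.0 * t - tstar[int((1.0 - alpha) * (R + 1)) - 1]
    hi = 2.0 * t - tstar[int(alpha * (R + 1)) - 1]
    return lo, hi

def error_rates(nsim=200, n1=10, n2=10, seed=7):
    """Count how often theta = 2 falls below/above the basic interval."""
    rng = random.Random(seed)
    low = high = 0
    for _ in range(nsim):
        y1 = [rng.gammavariate(0.7, 100.0 / 0.7) for _ in range(n1)]  # mean 100
        y2 = [rng.gammavariate(1.0, 50.0) for _ in range(n2)]         # mean 50
        lo, hi = basic_interval(y1, y2, rng)
        if lo > 2.0:
            low += 1
        if hi < 2.0:
            high += 1
    return low / nsim, high / nsim
```

Even at this reduced scale the asymmetry seen in Table 5.8 shows up: the basic method almost never errs in the lower tail but errs far too often in the upper tail.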
5.8 Multiparameter Methods

When we want a confidence region for a vector parameter, the question of shape arises. Typically a rectangular region formed from intervals for each component parameter will not have high enough coverage probability, although a Bonferroni argument can be used to give a conservative confidence coefficient,
Figure 5.7 Box plots of confidence interval lengths for the first 1000 simulated samples in the numerical experiment with gamma data; the upper panel is for n₁ = n₂ = 10 and the lower panel for n₁ = n₂ = 25.
as follows. Suppose that θ has d components, and that the confidence region C_α is rectangular, with interval C_{α_i} = (θ̂_{L,i}, θ̂_{U,i}) for the ith component θ_i. Then

    Pr(θ ∉ C_α) = Pr(∪_i {θ_i ∉ C_{α_i}}) ≤ Σ_{i=1}^d Pr(θ_i ∉ C_{α_i}) = Σ_{i=1}^d α_i,

say. If we take α_i = α/d then the region C_α has coverage at least equal to 1 − α. For certain applications this could be useful, in part because of its simplicity. But there are two potential disadvantages. First, the region could be very conservative: the true coverage could be considerably more than the nominal 1 − α. Secondly, the rectangular shape could be quite at odds with plausible likelihood contours. This is especially true if the estimates for parameter components are quite highly correlated, when also the Bonferroni method is more conservative.

One simple possibility for a joint bootstrap confidence region when T is approximately normal is to base it on the quadratic form

    Q = (T − θ)ᵀ V⁻¹ (T − θ),    (5.60)

where V is the estimated variance matrix of T. Note that Q is the multivariate extension of the square of the studentized statistic of Section 5.2. If Q had exact p quantiles a_p, say, then a 1 − α confidence set for θ would be

    {θ : (T − θ)ᵀ V⁻¹ (T − θ) ≤ a_{1−α}}.    (5.61)
The elliptical shape of this set is correct if the distribution of T has elliptical contours, as the multivariate normal distribution does. So if T is approximately multivariate normal, then the shape will be approximately correct. Moreover, Q will be approximately distributed as a χ²_d variable. But as in the scalar case such distributional approximations will often be unreliable, so it makes sense to approximate the distribution of Q, and in particular the required quantile a_{1−α}, by resampling. The method then becomes completely analogous to the studentized bootstrap method for scalar parameters. The bootstrap analogue of Q will be

    Q* = (T* − t)ᵀ V*⁻¹ (T* − t),

which will be calculated for each of R simulated samples. If we denote the ordered bootstrap values by q*_{(1)} ≤ · · · ≤ q*_{(R)},
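For the simplest case, T the mean of bivariate data and V = S/n the usual variance estimate of the mean, the construction can be sketched in plain Python as follows (the data and simulation sizes are hypothetical; θ lies in the region iff the quadratic form at t − θ does not exceed the bootstrap quantile):

```python
import random

def mean_and_V(y):
    """Sample mean vector and estimated variance matrix V = S/n of the mean."""
    n = len(y)
    m = [sum(p[0] for p in y) / n, sum(p[1] for p in y) / n]
    s00 = sum((p[0] - m[0]) ** 2 for p in y) / (n - 1)
    s11 = sum((p[1] - m[1]) ** 2 for p in y) / (n - 1)
    s01 = sum((p[0] - m[0]) * (p[1] - m[1]) for p in y) / (n - 1)
    return m, [[s00 / n, s01 / n], [s01 / n, s11 / n]]

def quad_form(d, V):
    """Q = d^T V^{-1} d for a 2x2 matrix V, inverted explicitly."""
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    return (V[1][1] * d[0] ** 2 - 2.0 * V[0][1] * d[0] * d[1]
            + V[0][0] * d[1] ** 2) / det

def region_quantile(y, R=499, alpha=0.05, seed=3):
    """Bootstrap the quantile q*_{((R+1)(1-alpha))} of Q* = (T*-t)^T V*^{-1} (T*-t)."""
    rng = random.Random(seed)
    t, _ = mean_and_V(y)
    qstar = []
    for _ in range(R):
        ystar = [rng.choice(y) for _ in range(len(y))]
        ts, Vs = mean_and_V(ystar)
        qstar.append(quad_form([ts[0] - t[0], ts[1] - t[1]], Vs))
    qstar.sort()
    return qstar[int((1.0 - alpha) * (R + 1)) - 1]
```

Because each resample supplies its own V*, this is the multivariate analogue of studentizing, rather than of the basic method.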
Let b(θ, φ) and c(θ, φ) denote the unit vectors orthogonal to a(θ, φ). The sample values of these vectors are â, b̂ and ĉ, and the sample eigenvalues are λ̂₁ ≤ λ̂₂ ≤ λ̂₃. Let A denote the 2 × 3 matrix (b̂, ĉ)ᵀ and B the 2 × 2 matrix with (j, k)th element n⁻¹ Σ_l (d̂_jᵀ y_l)(d̂_kᵀ y_l)(âᵀ y_l)², where d̂₁ = b̂ and d̂₂ = ĉ.
Table 5.9 Latitude (°) and longitude (°) of pole positions determined from the paleomagnetic study of New Caledonian laterites (Fisher et al., 1987, p. 278).
Figure 5.10 Equal-area projection of the laterite data onto the plane tangential to the South Pole (+). The sample mean polar axis is the hollow circle, and the square region is for comparison with Figures 5.11 and 10.3.
Then the analogue of (5.60) is

    Q = n a(θ, φ)ᵀ Aᵀ B⁻¹ A a(θ, φ),    (5.65)
which is approximately distributed as a χ²₂ variable in large samples. In the bootstrap analogue of Q, a is replaced by â, and A and B are replaced by the corresponding quantities calculated from the bootstrap sample.

Figure 5.11 shows results from setting confidence regions for the mean polar axis based on Q. The panels show the 0.5, 0.95 and 0.99 contours, using χ²₂ quantiles and those based on R = 999 nonparametric bootstrap replicates q*. The contours are elliptical in this projection. For this sample size it would not be misleading to use the asymptotic 0.5 and 0.95 quantiles, though the 0.99 quantiles differ by more. However, simulations with a random subset of size n = 20 gave dramatically different quantiles, and it seems to be essential to use the bootstrap quantiles for smaller sample sizes.

A different approach is to set T = (θ̂, φ̂)ᵀ, and then to base a confidence region for (θ, φ) on (5.60), with V taken to be the nonparametric delta method estimate of the covariance matrix. This approach does not take into account the geometry of spherical data and works very poorly in this example, partly because the estimate t is close to the South Pole, which limits the range of the bootstrap estimates (θ̂*, φ̂*).
Figure 5.11 The 0.5, 0.95, and 0.99 confidence regions for the mean polar axis of the laterite data based on (5.65), using χ²₂ quantiles (left) and bootstrap quantiles (right). The boundary of each panel is the square region in Figure 5.10; also shown are the South Pole (+) and the sample mean polar axis (○).
5.9 Conditional Confidence Regions

In parametric inference the probability calculations for confidence regions should in principle be made conditional on the ancillary statistics for the model, when these exist, the basic reason being to ensure that the inference accounts for the actual information content in the observed data. In parametric models what is ancillary is often specific to the mathematical form of F, and there is no nonparametric analogue. However, there are situations where there is a model-free ancillary indicator of the experiment, as with the design of a regression experiment (Chapter 6). In fact there is such an indicator in one of our earlier examples, and we now use this to illustrate some of the points which arise with conditional bootstrap confidence intervals.

Example 5.16 (City population data) For the ratio estimation problem of Example 1.2, the statistic d = ū would often be regarded as ancillary. The reason rests in part on the notion of a model for linear regression of x on u with variation proportional to u. The left panel of Figure 5.12 shows the scatter plot of t* versus d* for the R = 999 nonparametric bootstrap samples used earlier. The observed value of d is 103.1. The middle and right panels of the figure show trends in the conditional mean and variance, E*(T* | d*) and var*(T* | d*), these being approximated by crude local averaging in the scatter plot on the left.

The calculation of confidence limits for the ratio θ = E(X)/E(U) is to be made conditional on d* = d, the observed mean of u. Suppose, for example, that we want to apply the basic bootstrap method. Then we need to approximate the conditional quantiles a_p(d) of T − θ given D = d for p = α and 1 − α, and
[Figure 5.12: scatter plot of t* against d* for the R = 999 bootstrap samples (left), with crude local-average estimates of the conditional mean E*(T* | d*) (middle) and conditional variance var*(T* | d*) (right).]
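The crude local averaging used in Figure 5.12 to approximate E*(T* | d*) and var*(T* | d*) can be sketched as follows. Everything here is illustrative: the generated data, the ratio statistic, and the quantile binning rule are stand-ins, not the city population data.

```python
import numpy as np

def conditional_moments(t, d, n_bins=10):
    # crude local averaging: bin the replicates on d* and average within bins
    edges = np.quantile(d, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, d, side="right") - 1, 0, n_bins - 1)
    centres, means, variances = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() > 1:
            centres.append(d[sel].mean())
            means.append(t[sel].mean())            # estimate of E*(T* | d* in bin b)
            variances.append(t[sel].var(ddof=1))   # estimate of var*(T* | d* in bin b)
    return np.array(centres), np.array(means), np.array(variances)

rng = np.random.default_rng(1)
n, R = 49, 999
u = rng.gamma(2.0, 50.0, size=n)            # illustrative covariate values
x = 1.2 * u + rng.normal(0, 5, size=n)      # responses roughly proportional to u
t_star = np.empty(R)
d_star = np.empty(R)
for r in range(R):
    j = rng.integers(0, n, size=n)          # nonparametric bootstrap sample of cases
    t_star[r] = x[j].mean() / u[j].mean()   # ratio replicate t*
    d_star[r] = u[j].mean()                 # candidate ancillary d* = mean of u
centres, cond_mean, cond_var = conditional_moments(t_star, d_star)
```

Plotting `cond_mean` and `cond_var` against `centres` reproduces the kind of trends shown in the middle and right panels of Figure 5.12.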
Standard normal-theory likelihood analysis suggests that differences in S(θ) for θ near θ̂ are ancillary statistics. We shall reduce these differences to two particular statistics which measure skewness and curvature of S(·) near θ̂,
 b*      c* = 1.64  2.44  4.62  4.87  5.12  5.49  6.06  6.94
−0.62         59    62    92    91    92    97    94    93
−0.37         52    88    84    91    96    96   100   100
−0.17         53    81    93    91   100    89   100   100
 0            71    83    93    95    95    98   100   100
 0.17         68    79    95    89    86    96    97   100
 0.37         62    82    97    92    97    95    96   100
 0.62         50    68    87    92   100    97    95   100
 0.87         53    81    93    95    97    96    95   100
 2.45          —    50    76    76    81    85    86   100
namely

B = S(θ̂ + δ) − S(θ̂ − δ),    C = S(θ̂ + δ) − 2S(θ̂) + S(θ̂ − δ);
for numerical convenience we rescale B and C by 0.0032. It is expected that B and C respectively influence the bias and variability of θ̂. We are interested in the conditional confidence that should be attached to the set θ̂ ± 1, that is Pr(|θ − θ̂| ≤ 1 | b, c). The data analysis gives θ̂ = 28 (year 1898), b = 0.75 and c = 5.5.

With no assumption on the shape of the distribution of Y, except that it is constant, the obvious bootstrap sampling scheme is as follows. First calculate the residuals e_j = x_j − μ̂₁, j = 1,...,28 and e_j = x_j − μ̂₂, j = 29,...,100. Then simulate data series by x_j* = μ̂₁ + ε_j*, j = 1,...,28 and x_j* = μ̂₂ + ε_j*, j = 29,...,100, where ε_j* is randomly sampled from e₁,...,e₁₀₀. Each such sample series then gives θ̂*, b* and c*.

From R = 10000 bootstrap samples we find that the proportion of samples with |θ̂* − θ̂| ≤ 1 is 0.862, which is the unconditional bootstrap confidence. But when these samples are partitioned according to b* and c*, strong effects show up. Table 5.11 shows part of the table of proportions for outcome |θ̂* − θ̂| ≤ 1 for a 16 × 15 partition, 201 of these partitions being non-empty and most of them having at least 50 bootstrap samples. The proportions are consistently higher than 0.95 for (b*, c*) near (b, c), which strongly suggests that the conditional confidence Pr(|θ − θ̂| ≤ 1 | b = 0.75, c = 5.5) exceeds 0.95.

The conditional probability Pr(|θ − θ̂| ≤ 1 | b, c) will be smooth in b and c, so it makes sense to assume that the estimate p(b*, c*) = Pr*(|θ̂* − θ̂| ≤ 1 | b*, c*)
Table 5.11 Nile data. Part of the table of proportions (%) of bootstrap samples for which |θ̂* − θ̂| ≤ 1, for interval values of b* and c*. R = 10000 samples.
is smooth in b*, c*. We fitted a logistic regression to the proportions in the 201 non-empty cells of the complete version of Table 5.11, the result being

logit p(b*, c*) = −0.51 − 0.20 b*² + 0.68 c*.

The residual deviance is 223 on 198 degrees of freedom, which indicates an adequate fit for this simple model. The conditional bootstrap confidence is the fitted value of p at b* = b, c* = c, which is 0.972 with standard error 0.009. So the conditional confidence attached to θ̂ = 28 ± 1 is much higher than the unconditional value.

The value of the standard error for the fitted value corresponds to a binomial standard error for a sample of size 3500, or 35% of the whole bootstrap simulation, which indicates high efficiency for this method of estimating conditional probability. ■
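The logistic smoothing of binned bootstrap proportions can be sketched as below. The cell counts are simulated, not the Nile data, and the IRLS fitting routine is a minimal stand-in for a full GLM fit.

```python
import numpy as np

def logistic_fit(X, successes, totals, n_iter=25):
    # IRLS for a binomial logistic regression on grouped counts
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = totals * p * (1.0 - p) + 1e-12       # working weights
        z = eta + (successes - totals * p) / w    # working response
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, X.T @ (w * z))
    return beta

# toy binned bootstrap output: cells indexed by (b*, c*), with counts of the
# event |theta* - theta_hat| <= 1 in each cell (values are illustrative only)
rng = np.random.default_rng(0)
n_cells = 60
b_star = rng.uniform(-0.6, 0.9, size=n_cells)
c_star = rng.uniform(1.5, 7.0, size=n_cells)
totals = rng.integers(40, 120, size=n_cells).astype(float)
true_p = 1.0 / (1.0 + np.exp(-(-0.5 - 0.2 * b_star**2 + 0.7 * c_star)))
successes = rng.binomial(totals.astype(int), true_p).astype(float)

# model of the same form as in the text: logit p = a + b·(b*)^2 + c·c*
X = np.column_stack([np.ones(n_cells), b_star**2, c_star])
beta = logistic_fit(X, successes, totals)
p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))   # smoothed conditional confidences
```

Evaluating `p_hat` at the observed (b, c) gives the conditional bootstrap confidence, just as the fitted value 0.972 was obtained in the example.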
5.10 Prediction

Closely related to confidence regions for parameters are confidence regions for future outcomes of the response Y, more usually called prediction regions. Applications are typically in more complicated contexts involving regression models (Chapters 6 and 7) and time series models (Chapter 8), so here we give only a brief discussion of the main ideas.

In the simplest situation we are concerned with prediction of one future response Y_{n+1} given observations y₁,...,yₙ from a distribution F. The ideal upper γ prediction limit is the γ quantile of F, which we denote by a_γ(F). The simplest approach to calculating a prediction limit is the plug-in approach, that is substituting the estimate F̂ for F to give â_γ = a_γ(F̂). But this is clearly biased in the optimistic direction, because it does not allow for the uncertainty in F̂. Resampling is used to correct for, or remove, this bias.

Parametric case

Suppose first that we have a fully parametric model, F = F_θ, say. Then the prediction limit a_γ(F̂) can be expressed more directly as a_γ(θ̂). The true coverage of this limit over repetitions of both data and predictand will not generally be γ, but rather

Pr{Y_{n+1} ≤ a_γ(θ̂) | θ} = h(γ),
(5.66)

say, where h(·) is unknown except that it must be increasing. (The coverage also depends on θ in general, but we suppress this from the notation for simplicity.) The idea is to estimate h(·) by resampling. So, for data Y₁*,...,Yₙ* and predictand Y*_{n+1} all sampled from F̂ = F_θ̂, we estimate (5.66) by

ĥ(γ) = Pr*{Y*_{n+1} ≤ a_γ(θ̂*)},    (5.67)
where as usual θ̂* is the estimator calculated for data Y₁*,...,Yₙ*. In practice it would usually be necessary to use R simulated repetitions of the sampling and approximate (5.67) by

ĥ(γ) = R⁻¹ Σ_{r=1}^R I{y*_{n+1,r} ≤ a_γ(θ̂_r*)}.    (5.68)

Once ĥ(γ) has been calculated, the adjusted γ prediction limit is taken to be a_{γ′}(θ̂), where γ′ satisfies ĥ(γ′) = γ.

… as n → ∞, and use simulation to check its performance when n = 100 and Y has the U(0, θ) distribution. (Sections 2.6.1, 5.2)
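The calibration in (5.67)–(5.68) can be sketched for a simple parametric model. The exponential model, sample sizes, and seed below are illustrative assumptions, chosen only because the plug-in quantile has a closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
n, R, gamma = 20, 2000, 0.95
y = rng.exponential(scale=3.0, size=n)   # illustrative data; theta = E(Y)
theta_hat = y.mean()

def a(g, theta):
    # g quantile of the fitted exponential distribution, a_g(theta)
    return -theta * np.log1p(-g)

# parametric resampling: data Y*_1,...,Y*_n and predictand Y*_{n+1} from F_theta_hat
means = np.empty(R)
y_new = np.empty(R)
for r in range(R):
    means[r] = rng.exponential(scale=theta_hat, size=n).mean()   # theta*_r
    y_new[r] = rng.exponential(scale=theta_hat)                  # Y*_{n+1,r}

def h_hat(g):
    # estimated coverage of the plug-in g prediction limit, as in (5.68)
    return np.mean(y_new <= a(g, means))

# the plug-in limit is typically optimistic (h_hat(gamma) < gamma), so raise
# the nominal level until the estimated coverage equals gamma
grid = np.linspace(0.90, 0.999, 200)
h_vals = np.array([h_hat(g) for g in grid])
gamma_adj = grid[np.argmin(np.abs(h_vals - gamma))]
limit = a(gamma_adj, theta_hat)          # adjusted gamma prediction limit
```

The adjusted limit `a(gamma_adj, theta_hat)` plays the role of a_{γ′}(θ̂) with ĥ(γ′) = γ.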
4  The gamma model (1.1) with mean μ and index κ can be applied to the data of Example 1.1. For this model, show that the profile log likelihood for μ is

ℓ_prof(μ) = n κ̂_μ log(κ̂_μ/μ) + (κ̂_μ − 1) Σ_j log y_j − (κ̂_μ/μ) Σ_j y_j − n log Γ(κ̂_μ),

where κ̂_μ is the solution to the estimating equation

n log(κ/μ) + n + Σ_j log y_j − Σ_j y_j/μ − n ψ(κ) = 0,
with ψ(κ) the derivative of log Γ(κ). Describe an algorithm for simulating the distribution of the log likelihood ratio statistic W(μ) = 2{ℓ_prof(μ̂) − ℓ_prof(μ)}.

… c_{2n,1−α}/(2n). Verify that the bootstrap adjustment of this limit gives the exact upper 1 − α limit 2nȳ/c_{2n,α}. (Section 5.6; Beran, 1987; Hinkley and Shi, 1989)

14  Show how to make a bootstrap adjustment of the studentized bootstrap confidence limit method for a scalar parameter. (Section 5.6)
Here c_{2n,α} is the α quantile of the χ²_{2n} distribution.
15  For an equi-tailed (1 − 2α) confidence interval, the ideal endpoints are t + β with values of β solving (3.31) with

h(F̂, F; β) = I{t(F̂) − t(F) ≤ β} − α,
h(F̂, F; β) = I{t(F̂) − t(F) ≤ β} − (1 − α).

Suppose that the bootstrap solutions are denoted by β̂_α and β̂_{1−α}, and that in the language of Section 3.9.1 the adjustments b(F̂, γ) are β̂_{α+γ₁} and β̂_{1−α+γ₂}. Show how to estimate γ₁ and γ₂, and verify that these adjustments modify coverage 1 − 2α + O(n⁻¹) to 1 − 2α + O(n⁻²). (Sections 3.9.1, 5.6; Hall and Martin, 1988)

16
Suppose that D is an approximate ancillary statistic and that we want to estimate the conditional probability G(u | d) = Pr(T − θ ≤ u | D = d) using R simulated values (t_r*, d_r*). One smooth estimate is the kernel estimate

Ĝ(u | d) = Σ_{r=1}^R w{h⁻¹(d_r* − d)} I{t_r* − t ≤ u} / Σ_{r=1}^R w{h⁻¹(d_r* − d)},

where w(·) is a density symmetric about zero and h is an adjustable bandwidth. Investigate the bias and variance of this estimate in the case where (T, D) is approximately bivariate normal and w(·) = φ(·).
The empirical influence values as defined in Section 2.7.2 are therefore

l_j = ( {1 − n x̄(x_j − x̄)/SS_x} e_j ,  n(x_j − x̄) e_j/SS_x )ᵀ.    (6.13)

The nonparametric delta method variance approximation (2.36) applied to β̂₁ gives

v_L = SS_x⁻² Σ_j (x_j − x̄)² e_j².    (6.14)
This makes no assumption of homoscedasticity. In practice we modify the variance approximation to account for leverage, replacing e_j by r_j as defined in (6.9).

Second formulation

The second possibility is that at any value of x, responses Y_x can be sampled from a distribution F_x(y) whose mean and variance are μ(x) and σ²(x).

… β̂₁* = β̂₁ + Σ_j (x_j − x̄) ε_j*/SS_x.
Because E*(ε*) = n⁻¹ Σ(r_j − r̄) = 0, it follows that E*(β̂₁*) = β̂₁. Also, because var*(ε_j*) = n⁻¹ Σ_{j=1}^n (r_j − r̄)² for all j,

var*(β̂₁*) = Σ(x_j − x̄)² var*(ε_j*)/SS_x² = n⁻¹ Σ(r_j − r̄)²/SS_x.

The latter will be approximately equal to the usual estimate s²/SS_x, because n⁻¹ Σ_j (r_j − r̄)² ≈ (n − 2)⁻¹ Σ_j e_j² = s². In fact if the individual h_j are replaced by their average h̄, then the means and variances of β̂₀* and β̂₁* are given exactly by (6.5) and (6.6) with the estimates β̂₀, β̂₁ and s² substituted for parameter values. The advantage of resampling is improved quantile estimation when normal-theory distributions of the estimators β̂₀, β̂₁, S² are not accurate.

Example 6.1 (Mammals) For the data plotted in the right panel of Figure 6.1, the simple linear regression model seems appropriate. Standard analysis suggests that errors are approximately normal, although there is a small suspicion of heteroscedasticity: see Figure 6.2. The parameter estimates are β̂₀ = 2.135 and β̂₁ = 0.752. From R = 499 bootstrap simulations according to the algorithm above, the
6.2 · Least Squares Linear Regression
Figure 6.2 Normal Q-Q plot of modified residuals r_j and their plot against leverage values h_j for linear regression fit to log-transformed mammal data.
estimated standard errors of intercept and slope are respectively 0.0958 and 0.0273, compared to the theoretical values 0.0960 and 0.0285. The empirical distributions of bootstrap estimates are almost perfectly normal, as they are for the studentized estimates. The estimated 0.05 and 0.95 quantiles for the studentized slope estimate (β̂₁* − β̂₁)/SE(β̂₁*), where SE(β̂₁*) is the standard error for β̂₁* obtained from (6.6), are z*₍₂₅₎ = −1.640 and z*₍₄₇₅₎ = 1.589, compared to the standard normal quantiles ±1.645. So, as expected for a moderately large “clean” dataset, the resampling results agree closely with those obtained from standard methods. ■

Zero intercept

In some applications the intercept β₀ will not be included in (6.1). This affects the estimation of β₁ and σ² in obvious ways, but the resampling algorithm will also differ. First, the leverage values are different, namely h_j = x_j²/Σ_k x_k², so the modified residual will be different. Secondly, because now Σ_j e_j ≠ 0, it is essential to mean-correct the residuals before using them to simulate random errors.

Repeated design points

If there are repeat observations at some or all values of x, this offers an enhanced opportunity to detect heteroscedasticity: see Section 6.2.6. With
many such repeats it is in principle possible to estimate the CDFs F_x separately (Section 6.2.2), but there is rarely enough data for this to be useful in practice. The main advantage of repeats is the opportunity it affords to test the adequacy of the linear regression formulation, by splitting the residual sum of squares into a “pure error” component and a “goodness-of-fit” component. To the extent that the comparison of these components through the usual F ratio is quite sensitive to non-normality and heteroscedasticity, resampling methods may be useful in interpreting that F ratio (Practical 6.3).
6.2.4 Resampling cases

A completely different approach would be to imagine the data as a sample from some bivariate distribution F of (X, Y). This will sometimes, but not often, mimic what actually happened. In this approach, as outlined in Section 6.2.2, the regression coefficients are viewed as statistical functions of F, and defined by (6.10). Model (6.1) still applies, but with no assumption on the random errors ε_j other than independence. When (6.10) is evaluated at F̂ we obtain the least squares estimates (6.2). With F now the bivariate distribution of (X, Y), it is appropriate to take F̂ to be the EDF of the data pairs, and resampling will be from this EDF, just as in Chapter 2.

The resampling simulation therefore involves sampling pairs with replacement from (x₁, y₁),...,(xₙ, yₙ). This is equivalent to taking (x_j*, y_j*) = (x_I, y_I), where I is uniformly distributed on {1, 2,...,n}. Simulated values β̂₀*, β̂₁* of the coefficient estimates are computed from (x₁*, y₁*),...,(xₙ*, yₙ*) using the least squares algorithm which was applied to obtain the original estimates β̂₀, β̂₁. So the resampling algorithm is as follows.

Algorithm 6.2 (Resampling cases in regression)

For r = 1,...,R,
1  sample i₁*,...,iₙ* randomly with replacement from {1, 2,...,n};
2  for j = 1,...,n, set x_j* = x_{i_j*}, y_j* = y_{i_j*}; then
3  fit least squares regression to (x₁*, y₁*),...,(xₙ*, yₙ*), giving estimates β̂₀,r*, β̂₁,r*, s_r*².

There are two important differences between this second bootstrap method and the previous one using a parametric model and simulated errors. First, with the second method we make no assumption about variance homogeneity — indeed we do not even assume that the conditional mean of Y given X = x is linear. This offers the advantage of potential robustness to heteroscedasticity, and the disadvantage of inefficiency if the constant-variance model is correct.
Secondly, the simulated samples have different designs, because the values x₁*,...,xₙ* are randomly sampled. The design fixes the information content of a sample, and in principle our inference should be specific to the information in our data. The variation in x₁*,...,xₙ* will cause some variation in information, but fortunately this is often unimportant in moderately large datasets; see, however, Examples 6.4 and 6.6. Note that in general the resampling distribution of a coefficient estimate will not have mean equal to the data estimate, contrary to the unbiasedness property that the estimate in fact possesses. However, the difference is usually negligible.

(The model E(Y | X = x) = α + β₁(x − x̄), which some writers use in place of (6.1), is not useful here because α = β₀ + β₁x̄ is a function not only of F but also of the data, through x̄.)

Example 6.2 (Mammals) For the data of Example 6.1, a bootstrap simulation was run by resampling cases with R = 999. Table 6.1 shows the bias and standard error results for both intercept and slope. The estimated biases are very small. The striking feature of the results is that the standard error for the slope is considerably smaller than in the previous bootstrap simulation, which agreed with standard theory.

Table 6.1 Mammals data. Comparison of bootstrap biases and standard errors of intercept and slope with theoretical results, standard and robust. Resampling cases with R = 999.

                           Theoretical   Resampling cases   Robust theoretical
β̂₀   bias                     0             0.0006               —
     standard error           0.096         0.091                0.088
β̂₁   bias                     0             0.0002               —
     standard error           0.0285        0.0223               0.0223

The last column of the table gives robust versions of the standard errors, which are calculated by estimating the variance of ε_j to be r_j². For example, the robust estimate of the variance of β̂₁ is

SS_x⁻² Σ_j (x_j − x̄)² r_j².    (6.17)

This corresponds to the delta method variance approximation (6.14), except that r_j is used in preference to e_j. As we might have expected from previous discussion, the bootstrap gives an approximation to the robust standard error.

Figure 6.3 shows normal Q-Q plots of the bootstrap estimates β̂₀* and β̂₁*. For the slope parameter the right panel shows lines corresponding to normal distributions with the usual and the robust standard errors. The distribution of β̂₁* is close to normal, with variance much closer to the robust form (6.17) than to the usual form (6.6). ■

One disadvantage of the robust standard error is its inefficiency relative to the usual standard error when the latter is correct. A fairly straightforward calculation (Problem 6.6) gives the efficiency, which is approximately 40% for the slope parameter in the previous example. Thus the effective degrees of freedom for the robust standard error is approximately 0.40 times 62, or 25.
The same loss of efficiency would apply approximately to bootstrap results for resampling cases.
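Case resampling, together with the usual and robust standard errors discussed above, can be sketched as follows. The heteroscedastic data are simulated for illustration, and the robust formula uses raw residuals e_j rather than the leverage-modified r_j for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)
n, R = 62, 999
x = rng.normal(0, 1, size=n)
y = 2.1 + 0.75 * x + (0.2 + 0.2 * np.abs(x)) * rng.normal(size=n)  # heteroscedastic

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta_hat
SSx = np.sum((x - x.mean()) ** 2)

# usual and robust (delta-method, in the spirit of (6.14)) slope standard errors
s2 = np.sum(e ** 2) / (n - 2)
se_usual = np.sqrt(s2 / SSx)
se_robust = np.sqrt(np.sum((x - x.mean()) ** 2 * e ** 2)) / SSx

# Algorithm 6.2: resample cases (x_j, y_j) with replacement
slopes = np.empty(R)
for r in range(R):
    j = rng.integers(0, n, size=n)
    Xr = np.column_stack([np.ones(n), x[j]])
    slopes[r] = np.linalg.lstsq(Xr, y[j], rcond=None)[0][1]
se_cases = slopes.std(ddof=1)   # tracks the robust SE under heteroscedasticity
```

Under heteroscedasticity `se_cases` approximates `se_robust` rather than `se_usual`, which is the pattern seen in Table 6.1.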
6.2.5 Significance tests for slope

Suppose that we want to test whether or not the covariate x has an effect on the response y, assuming linear regression is appropriate. In terms of model parameters, the null hypothesis is H₀: β₁ = 0. If we use the least squares estimate as the basis for such a test, then this is equivalent to testing the Pearson correlation coefficient. This connection immediately suggests one nonparametric test, the permutation test of Example 4.9. However, this is not always valid, so we need also to consider other possible bootstrap tests.

Permutation test

The permutation test of correlation applies to the null hypothesis of independence between X and Y when these are both random. Equivalently it applies when the null hypothesis implies that the conditional distribution of Y given X = x does not depend upon x. In the context of linear regression this means not only zero slope, but also constant error variance. The justification then rests simply on the exchangeability of the response values under the null hypothesis. If we use X₍·₎ to denote the ordered values of X₁,...,Xₙ, and so forth, then the exact level of significance for one-sided alternative H_A: β₁ > 0 and test statistic T is
p = Pr(T ≥ t | X₍·₎ = x₍·₎, Y₍·₎ = y₍·₎, H₀) = Pr[T ≥ t | X = x, Y = perm{y₍·₎}],
Figure 6.3 Normal plots for bootstrapped estimates of intercept (left) and slope (right) for linear regression fit to logarithms of mammal data, with R = 999 samples obtained by resampling cases. The dotted lines give approximate normal distributions based on the usual formulae (6.5) and (6.6), while the dashed line shows the normal distribution for the slope using the robust variance estimate (6.17).
where perm{·} denotes a permutation. Because all permutations are equally likely, we have

p = #{permutations such that T ≥ t} / n!,

as in (4.20). In the present context we can take T = β̂₁, for which p is the same as if we used the sample Pearson correlation coefficient, but the same method applies for any appropriate slope estimator. In practice the test is performed by generating samples (x₁*, y₁*),...,(xₙ*, yₙ*) such that x_j* = x_j and (y₁*,...,yₙ*) is a random permutation of (y₁,...,yₙ), and fitting the least squares slope estimate β̂₁*. If this is done R times, then the one-sided P-value for alternative H_A: β₁ > 0 is

p = (#{β̂₁,r* ≥ β̂₁} + 1) / (R + 1).
It is easy to show that studentizing the slope estimate would not affect this test; see Problem 6.4. The test is exact in the sense that the P-value has a uniform distribution under H₀, as explained in Section 4.1; note that this uniform distribution holds conditional on the x values, which is the relevant property here.

First bootstrap test

A bootstrap test whose result will usually differ negligibly from that of the permutation test is obtained by taking the null model as the pair of marginal EDFs of x and y, so that the x*s are randomly sampled with replacement from the x_js, and independently the y*s are randomly sampled from the y_js. Again β̂₁* is the slope fitted to the simulated data, and the formula for p is the same. As with the permutation test, the null hypothesis being tested is stronger than just zero slope. The permutation method and its bootstrap look-alike apply equally well to any slope estimate, not just the least squares estimate.

Second bootstrap test

The next bootstrap test is based explicitly on the linear model structure with homoscedastic errors, and applies the general approach of Section 4.4. The null model is the null mean fit and the EDF of residuals from that fit. We calculate the P-value for the slope estimate under sampling from this fitted model. That is, data are simulated by
x_j* = x_j,    y_j* = μ̂_{j0} + ε_{j0}*,
where μ̂_{j0} = ȳ and the ε_{j0}* are sampled with replacement from the null model residuals e_{j0} = y_j − ȳ, j = 1,...,n. The least squares slope β̂₁* is calculated from the simulated data. After R repetitions of the simulation, the P-value is calculated as before.
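Both the permutation test and the second (null-model residual) bootstrap test can be sketched as follows, with simulated data standing in for a real example:

```python
import numpy as np

def slope(x, y):
    # least squares slope for simple linear regression
    xc = x - x.mean()
    return np.sum(xc * y) / np.sum(xc ** 2)

rng = np.random.default_rng(5)
n, R = 30, 999
x = rng.normal(0, 1, size=n)
y = 1.0 + 0.4 * x + rng.normal(0, 1, size=n)   # illustrative data
b1 = slope(x, y)

e0 = y - y.mean()                  # null model residuals e_{j0}
perm_stats = np.empty(R)
boot_stats = np.empty(R)
for r in range(R):
    # permutation test: x fixed, y randomly permuted
    perm_stats[r] = slope(x, rng.permutation(y))
    # second bootstrap test: y* = ybar + resampled null residuals
    boot_stats[r] = slope(x, y.mean() + rng.choice(e0, size=n, replace=True))
p_perm = (np.sum(perm_stats >= b1) + 1) / (R + 1)
p_boot = (np.sum(boot_stats >= b1) + 1) / (R + 1)
```

As the text suggests, the two P-values are typically very close, since sampling the null residuals without replacement would reproduce the permutation test exactly.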
This second bootstrap test differs from the first bootstrap test only in that the values of explanatory variables x are fixed at the data values for every case. Note that if residuals were sampled without replacement, this test would duplicate the exact permutation test, which suggests that this bootstrap test will be nearly exact. The test could be modified by standardizing the residuals before sampling from them, which here would mean adjusting for the constant null model leverage n⁻¹. This would affect the P-value slightly for the test as described, but not if the test statistic were changed to the studentized slope estimate. It therefore seems wise to studentize regression test statistics in general, if model-based simulation is used; see the discussion of bootstrap pivot tests below.

Testing non-zero slope values

All of the preceding tests can be easily modified to test a non-zero value of β₁. If the null value is β₁,₀, say, then we apply the test to modified responses y_j − β₁,₀ x_j, as in Example 6.3 below.

Bootstrap pivot tests

Further bootstrap tests can be based on the studentized bootstrap approach outlined in Section 4.4.1. For simplicity suppose that we can assume homoscedastic errors. Then Z = (β̂₁ − β₁)/S₁ is a pivot, where S₁ is the usual standard error for β̂₁. As a pivot, Z has a distribution not depending upon parameter values, and this can be verified under the linear model (6.1). The null hypothesis is H₀: β₁ = 0, and as before we consider the one-sided alternative H_A: β₁ > 0. Then the P-value is

p = Pr(Z ≥ z₀ | β₁ = 0, β₀, σ) = Pr(Z ≥ z₀ | β₁, β₀, σ),

which is estimated from bootstrap replicates of the pivot by

p = (#{z_r* ≥ z₀} + 1) / (R + 1).

The permutation version of the test is not exact when nuisance covariates x_j are present, but empirical evidence suggests that it is close to exact.

Scalar γ

What should t be? For testing a single component, so that γ is a scalar, suppose that the alternative hypothesis is one-sided, say H_A: γ > 0. Then we could take t to be γ̂ itself, or possibly a studentized form such as z₀ = γ̂/v₀^{1/2}, where v₀ is an appropriate estimate of the variance of γ̂. If we compute the standard error using the null model residual sum of squares, then

v₀ = (n − q)⁻¹ e₀ᵀ e₀ (X₁.₀ᵀ X₁.₀)⁻¹,

where q is the rank of X₀. The same formula is applied to every simulated sample to get v₀* and hence z* = γ̂*/v₀*^{1/2}. When there are no nuisance covariates X₀, v₀* = v₀ in the permutation test, and studentizing has no effect: the same is true if the non-null standard error is used. Empirical evidence suggests that this is approximately true when X₀ is present; see the example below. Studentizing is necessary if modified residuals are used, with standardization based on the null model hat matrix.

An alternative bootstrap test can be developed in terms of a pivot, as described for single-variable regression in Section 6.2.5. Here the idea is to treat Z = (γ̂ − γ)/V^{1/2} as a pivot, with V^{1/2} an appropriate standard error. Bootstrap simulation under the full fitted model then produces the R replicates of z* which we use to calculate the P-value. To elaborate, we first fit the full model μ̂ = Xβ̂ by least squares and calculate the residuals e = y − μ̂. Still assuming homoscedasticity, the standard error for γ̂ is calculated using the residual mean square — a simple formula is

v = (n − p − 1)⁻¹ eᵀe (X₁.₀ᵀ X₁.₀)⁻¹.
6.3 · Multiple Linear Regression
Next, datasets are simulated using the model

y* = Xβ̂ + ε*,    X* = X,

where the n errors in ε* are sampled independently with replacement from the residuals e or modified versions of these. The full regression of y* on X is then fitted, from which we obtain γ̂* and its estimated variance v*, these being used to calculate z* = (γ̂* − γ̂)/v*^{1/2}. From R repeats of this simulation we then have the one-sided P-value

p = (#{z_r* ≥ z₀} + 1) / (R + 1),

where z₀ = γ̂/v^{1/2}. Although here we use p to denote a P-value as well as the number of covariates, no confusion should arise.

This test procedure is the same as calculating a (1 − α) lower confidence limit for γ by the studentized bootstrap method, and inferring p < α if the lower limit is above zero. The corresponding two-sided P-value is less than 2α if the equi-tailed (1 − 2α) studentized bootstrap confidence interval does not include zero.

One can guard against the effects of heteroscedastic errors by using case resampling to do the simulation, and by using a robust standard error for γ̂ as described in Section 6.2.5. Also the same basic procedure can be applied to estimates other than least squares.

Example 6.7 (Rock data) The data in Table 6.5 are measurements on four cross-sections of each of 12 oil-bearing rocks, taken from two sites. The aim is to predict permeability from the other three measurements, which result from a complex image-analysis procedure. In all regression models we use logarithm of permeability as response y. The question we focus on here is whether the coefficient of shape is significant in a multiple linear regression on all three variables.

The problem is nonstandard in that there are four replicates of the explanatory variables for each response value. If we fit a linear regression to all 48 cases treating them as independent, strong correlation among the four residuals for each core sample is evident: see Figure 6.8, in which the residuals have unit variance. Under a plausible model which accounts for this, which we discuss in Example 6.9, the appropriate linear regression for testing purposes uses core averages of the explanatory variables. Thus if we represent the data as responses y_j and replicate vectors of the explanatory variables x_jk, k = 1, 2, 3, 4, then the model for our analysis is y_j = x̄_j.ᵀβ + ε_j, where the ε_j are independent.
A summary of the least squares regression
Table 6.5 Rock data (Katz, 1995; Venables and Ripley, 1994, p. 251). These are measurements on four cross-sections of 12 core samples, with permeability (milli-Darcies), area (of pore space, in pixels out of 256 × 256), perimeter (pixels), and shape (perimeter/area^{1/2}).

case    area   perimeter   shape   permeability
  1     4990     2792      0.09        6.3
  2     7002     3893      0.15        6.3
  3     7558     3931      0.18        6.3
  4     7352     3869      0.12        6.3
  5     7943     3949      0.12       17.1
  6     7979     4010      0.17       17.1
  7     9333     4346      0.19       17.1
  8     8209     4345      0.16       17.1
  9     8393     3682      0.20      119.0
 10     6425     3099      0.16      119.0
 11     9364     4480      0.15      119.0
 12     8624     3986      0.15      119.0
 13    10651     4037      0.23       82.4
 14     8868     3518      0.23       82.4
 15     9417     3999      0.17       82.4
 16     8874     3629      0.15       82.4
 17    10962     4609      0.20       58.6
 18    10743     4788      0.26       58.6
 19    11878     4864      0.20       58.6
 20     9867     4479      0.14       58.6
 21     7838     3429      0.11      142.0
 22    11876     4353      0.29      142.0
 23    12212     4698      0.24      142.0
 24     8233     3518      0.16      142.0
 25     6360     1977      0.28      740.0
 26     4193     1379      0.18      740.0
 27     7416     1916      0.19      740.0
 28     5246     1585      0.13      740.0
 29     6509     1851      0.23      890.0
 30     4895     1240      0.34      890.0
 31     6775     1728      0.31      890.0
 32     7894     1461      0.28      890.0
 33     5980     1427      0.20      950.0
 34     5318      991      0.33      950.0
 35     7392     1351      0.15      950.0
 36     7894     1461      0.28      950.0
 37     3469     1377      0.18      100.0
 38     1468      476      0.44      100.0
 39     3524     1189      0.16      100.0
 40     5267     1645      0.25      100.0
 41     5048      942      0.33     1300.0
 42     1016      309      0.23     1300.0
 43     5605     1146      0.46     1300.0
 44     8793     2280      0.42     1300.0
 45     3475     1174      0.20      580.0
 46     1651      598      0.26      580.0
 47     5514     1456      0.18      580.0
 48     9718     1486      0.20      580.0
Figure 6.8 Rock data: standardized residuals from linear regression of all 48 cases, showing strong intra-core correlations.
Table 6.6 Least squares results for multiple linear regression of rock data, all covariates included and core means used as response variable.

Variable        Coefficient      SE     t-value
intercept          3.465       1.391      2.49
area (×10⁻³)       0.864       0.211      4.09
peri (×10⁻³)      −1.990       0.400     −4.98
shape              3.518       4.838      0.73
is shown in Table 6.6. There is evidence of mild non-normality, but not heteroscedasticity of errors. Figure 6.9 shows results from both the null model resampling method and the full model pivot resampling method, in both cases using resampling of errors. The observed value of z is z₀ = 0.73, for which the one-sided P-value is 0.234 under the first method, and 0.239 under the second method. Thus shape should not be included in the linear regression, assuming that its effect would be linear. Note that R = 99 simulations would have been sufficient here. ■

Vector γ

For testing several components simultaneously, we take the test statistic to be the quadratic form

T = γ̂ᵀ(X₁.₀ᵀ X₁.₀)γ̂,
Figure 6.9 Resampling distributions of standardized test statistic for variable shape. Left: resampling z* under null model, R = 999. Right: resampling pivot under full model, R = 999.
or equivalently the difference in residual sums of squares for the null and full model least squares fits. This can be standardized to

z₀ = (n − q) (RSS₀ − RSS) / RSS₀,

where RSS₀ and RSS denote residual sums of squares under the null model and full model respectively. We can apply the pivot method with full model simulation here also, using Z = (γ̂ − γ)ᵀ(X₁.₀ᵀ X₁.₀)(γ̂ − γ)/S² with S² the residual mean square. The test statistic value is z₀ = γ̂ᵀ(X₁.₀ᵀ X₁.₀)γ̂/s², for which the P-value is given by

p = (#{z_r* ≥ z₀} + 1) / (R + 1).

This would be equivalent to rejecting H₀ at level α if the 1 − α confidence set for γ does not include the point γ = 0. Again, case resampling would provide protection against heteroscedasticity: z would then require a robust standard error.
6.3.3 Prediction

A fitted linear regression is often used for prediction of a new individual response Y_+ when the explanatory variable vector is equal to x_+. Then we shall want to supplement our predicted value by a prediction interval. Confidence limits for the mean response can be found using the same resampling as is used to get confidence limits for individual coefficients, but limits for the response Y_+ itself — usually called prediction limits — require additional resampling to simulate the variation of Y_+ about x_+^T β.
285
6.3 · Multiple Linear Regression
The quantity to be predicted is Y_+ = x_+^T β + ε_+, say, and the point predictor is Ŷ_+ = x_+^T β̂. The random error ε_+ is assumed to be independent of the random errors ε_1, ..., ε_n in the observed responses, and for simplicity we assume that they all come from the same distribution: in particular the errors have equal variances. To assess the accuracy of the point predictor, we can estimate the distribution of the prediction error

    δ = Ŷ_+ − Y_+ = x_+^T β̂ − (x_+^T β + ε_+)

by the distribution of

Here ŷ_+ = μ(x_+, F̂) is an estimate of the mean response at x_+, a function of x_+^T β̂ with β̂ an estimate of β, and the form of this prediction rule is closely tied to the form of c(y_+, ŷ_+). We suppose that the data F̂ are sampled from distribution F, from which the cases to be predicted are also sampled. This implies that we are considering x_+ values similar to data values x_1, ..., x_n. Prediction accuracy is measured by the aggregate prediction error

    D = D(F, F̂) = E_+[c{Y_+, μ(X_+, F̂)} | F̂],    (6.39)
where E_+ emphasizes that we are averaging only over the distribution of (X_+, Y_+), with the data fixed. Because F is unknown, D cannot be calculated, and so we look for accurate methods of estimating it, or rather its expectation

    Δ = Δ(F) = E{D(F, F̂)},    (6.40)
the average prediction accuracy over all possible datasets of size n sampled from F. The most direct approach to estimation of Δ is to apply the bootstrap substitution principle, that is, substituting the EDF F̂ for F in (6.40). However, there are other widely used resampling methods which also merit consideration, in part because they are easy to use, and in fact the best approach involves a combination of methods.

Apparent error

The simplest way to estimate D or Δ is to take the average prediction error when the prediction rule is applied to the same data that were used to fit it. This gives the apparent error, sometimes called the resubstitution error,

    Δ_app = D(F̂, F̂) = n^{-1} Σ_{j=1}^n c{y_j, μ(x_j, F̂)}.    (6.41)

This is not the same as the bootstrap estimate Δ(F̂), which we discuss later. It is intuitively clear that Δ_app will tend to underestimate Δ, because the latter refers to prediction of new responses. The underestimation can be easily checked for least squares prediction with squared error, when Δ_app = n^{-1} RSS, the average squared residual. If the model is correct with homoscedastic random errors, then Δ_app has expectation σ²(1 − q n^{-1}), whereas from (6.37) we know that Δ = σ²(1 + q n^{-1}). The difference between the true error and the apparent error is the excess error, D(F, F̂) − D(F̂, F̂), whose mean is the expected excess error,

    e(F) = E{D(F, F̂) − D(F̂, F̂)} = Δ(F) − E{D(F̂, F̂)},    (6.42)
where the expectation is taken over possible datasets F̂. For squared error and least squares prediction the results in the previous paragraph show that e(F) = 2 q n^{-1} σ². The quantity e(F) is akin to a bias and can be estimated by resampling, so the apparent error can be modified to a reasonable estimate, as we see below.

Cross-validation

The apparent error is downwardly biased because it averages errors of predictions for cases at zero distance from the data used to fit the prediction rule. Cross-validation estimates of aggregate error avoid this bias by separating the data used to form the prediction rule and the data used to assess the rule. The general paradigm is to split the dataset into a training set {(x_j, y_j) : j ∈ S_t} and a separate assessment set {(x_j, y_j) : j ∈ S_a}, represented by F̂_t and F̂_a, say. The linear regression predictor is fitted to F̂_t, used to predict responses y_j for
293
6.4 · Aggregate Prediction Error and Variable Selection
j ∈ S_a, and then Δ is estimated by

    D(F̂_a, F̂_t) = n_a^{-1} Σ_{j ∈ S_a} c{y_j, μ(x_j, F̂_t)},    (6.43)
with n_a the size of S_a. There are several variations on this estimate, depending on the size of the training set, the manner of splitting the dataset, and the number of such splits.

The version of cross-validation that seems to come closest to actual use of our predictor is leave-one-out cross-validation. Here training sets of size n − 1 are taken, and all such sets are used, so we measure how well the prediction rule does when the value of each response is predicted from the rest of the data. If F̂_{-j} represents the n − 1 observations {(x_k, y_k), k ≠ j}, and if μ(x_j, F̂_{-j}) denotes the value predicted for y_j by the rule based on F̂_{-j}, then the cross-validation estimate of prediction error is

    Δ_CV = n^{-1} Σ_{j=1}^n c{y_j, μ(x_j, F̂_{-j})},    (6.44)
which is the average error when each observation is predicted from the rest of the sample. In general (6.44) requires n fits of the model, but for least squares linear regression only one fit is required if we use the case-deletion result (Problem 6.2)

    β̂ − β̂_{-j} = (X^T X)^{-1} x_j (y_j − x_j^T β̂) / (1 − h_j),

where as usual h_j is the leverage for the jth case. For squared error in particular we then have

    Δ_CV = n^{-1} Σ_{j=1}^n (y_j − x_j^T β̂)² / (1 − h_j)².    (6.45)
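For squared error, (6.45) turns leave-one-out cross-validation into a single least squares fit plus the leverages. A minimal numpy sketch (names are mine, not the book's code):

```python
import numpy as np

def loo_cv_squared_error(X, y):
    """Leave-one-out CV estimate (6.45) from a single least squares fit.

    X should include a column of ones if an intercept is wanted.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta                       # raw residuals y_j - x_j' beta
    H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix X (X'X)^{-1} X'
    h = np.diag(H)                             # leverages h_j
    return np.mean((resid / (1 - h)) ** 2)
```

This agrees exactly with refitting the model n times, each time predicting the deleted case, but at a small fraction of the cost.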
From the nature of Δ_CV one would guess that this estimate has only a small bias, and this is so: assuming an expansion of the form Δ(F) = a_0 + a_1 n^{-1} + a_2 n^{-2} + ···, one can verify from (6.44) that E(Δ_CV) = a_0 + a_1 (n − 1)^{-1} + ···, which differs from Δ by terms of order n^{-2} — unlike the expectation of the apparent error, which differs by terms of order n^{-1}.

K-fold cross-validation

In general there is no reason that training sets should be of size n − 1. For certain methods of estimation the number n of fits required for Δ_CV could itself be a difficulty — although not for least squares, as we have seen in (6.45). There is also the possibility that the small perturbations in the fitted model when single observations are left out make Δ_CV too variable, if fitted values μ(x, F̂) do not depend smoothly on F̂ or if c(y_+, ŷ_+) is not continuous. These
potential problems can be avoided to a large extent by leaving out groups of observations, rather than single observations. There is more than one way to do this.

One obvious implementation of group cross-validation is to repeat (6.43) for a series of R different splits into training and assessment sets, keeping the size of the assessment set fixed at n_a = m, say. Then in a fairly obvious notation the estimate of aggregate prediction error would be

    Δ_CV = R^{-1} Σ_{r=1}^R m^{-1} Σ_{j ∈ S_{a,r}} c{y_j, μ(x_j, F̂_{t,r})}.    (6.46)
In principle there are (n choose m) possible splits, possibly an extremely large number, but it should be adequate to take R in the range 100 to 1000. It would be in the spirit of resampling to make the splits at random. However, consideration should be given to balancing the splits in some way — for example, it would seem desirable that each case should occur with equal frequency over the R assessment sets; see Section 9.2. Depending on the value of n_t = n − m and the number p of explanatory variables, one might also need some form of balance to ensure that the model can always be fitted.

There is an efficient version of group cross-validation that does involve just one prediction of each response. We begin by splitting the data into K disjoint sets of nearly equal size, with the corresponding sets of case subscripts denoted by C_1, ..., C_K, say. These K sets define R = K different splits into training and assessment sets, with S_{a,k} = C_k the kth assessment set and the remainder of the data S_{t,k} = ∪_{i ≠ k} C_i the kth training set. For each such split we apply (6.43), and then average these estimates. The result is the K-fold cross-validation estimate of prediction error

    Δ_{CV,K} = n^{-1} Σ_{j=1}^n c{y_j, μ(x_j, F̂_{-k(j)})},    (6.47)
where F̂_{-k(j)} represents the data from which the group containing the jth case has been deleted. Note that Δ_{CV,K} is equal to the leave-one-out estimate (6.44) when K = n. Calculation of (6.47) requires just K model fits. Practical experience suggests that a good strategy is to take K = min{n^{1/2}, 10}, on the grounds that taking K > 10 may be too computationally intensive when the prediction rule is complicated, while taking groups of size at least n^{1/2} should perturb the data sufficiently to give small variance of the estimate.

The use of groups will have the desired effect of reducing variance, but at the cost of increasing bias. For example, it can be seen from the expansion used earlier for Δ that the bias of Δ_{CV,K} is a_1 {n(K − 1)}^{-1} + ···, which could be substantial if K is small, unless n is very large. Fortunately the bias of Δ_{CV,K} can be reduced by a simple adjustment. In a harmless abuse of notation, let F̂_{-k} denote the data with the kth group omitted, for k = 1, ..., K, and let p_k denote the proportion of the data falling in the kth group. (If n/K = m is an integer, then all groups are of size m and p_k = 1/K.) The adjusted cross-validation estimate of aggregate prediction error is

    Δ_{ACV,K} = Δ_{CV,K} + D(F̂, F̂) − Σ_{k=1}^K p_k D(F̂, F̂_{-k}).    (6.48)
This has smaller bias than Δ_{CV,K} and is almost as simple to calculate, because it requires no additional fits of the model. For a comparison between Δ_{CV,K} and Δ_{ACV,K} in a simple situation, see Problem 6.12. The following algorithm summarizes the calculation of Δ_{ACV,K} when the split into groups is made at random.

Algorithm 6.5 (K-fold adjusted cross-validation)

1  Fit the regression model to all cases, calculate predictions ŷ_j from that model, and average the values of c(y_j, ŷ_j) to get D(F̂, F̂).
2  Choose group sizes m_1, ..., m_K such that m_1 + ··· + m_K = n.
3  For k = 1, ..., K:
   (a) choose C_k by sampling m_k times without replacement from {1, 2, ..., n} minus elements chosen for previous C_i s;
   (b) fit the regression model to all data except cases j ∈ C_k;
   (c) calculate new predictions ŷ_j = μ(x_j, F̂_{-k}) for j ∈ C_k;
   (d) calculate predictions ỹ_j = μ(x_j, F̂_{-k}) for all j;
   (e) then average the n values c(y_j, ỹ_j) to give D(F̂, F̂_{-k}).
4  Average the n values of c(y_j, ŷ_j) using ŷ_j from step 3(c) to give Δ_{CV,K}.
5  Calculate Δ_{ACV,K} as in (6.48) with p_k = m_k / n.
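Algorithm 6.5 can be sketched in Python for squared error and a least squares prediction rule. This is an illustration under those assumptions, not the authors' implementation, and all names are mine:

```python
import numpy as np

def kfold_adjusted_cv(X, y, K, rng):
    """K-fold and adjusted K-fold CV estimates (6.47), (6.48), squared error."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)

    def fit(A, b):
        return np.linalg.lstsq(A, b, rcond=None)[0]

    beta = fit(X, y)
    D_hat = np.mean((y - X @ beta) ** 2)            # apparent error D(F^, F^)

    idx = rng.permutation(n)
    groups = np.array_split(idx, K)                 # random split into K groups
    p = np.array([len(g) / n for g in groups])      # group proportions p_k

    cv_terms = np.empty(n)                          # errors for left-out cases
    D_minus = np.empty(K)                           # D(F^, F^_{-k})
    for k, g in enumerate(groups):
        keep = np.setdiff1d(idx, g)
        beta_k = fit(X[keep], y[keep])              # fit without group k
        cv_terms[g] = (y[g] - X[g] @ beta_k) ** 2   # step 3(c)
        D_minus[k] = np.mean((y - X @ beta_k) ** 2) # steps 3(d), 3(e)

    delta_cv = cv_terms.mean()                      # eq. (6.47)
    delta_acv = delta_cv + D_hat - np.sum(p * D_minus)  # eq. (6.48)
    return delta_cv, delta_acv
```

With K = n this reduces to the leave-one-out estimate (6.44), which is a convenient check on the implementation.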
Bootstrap estimates

A direct application of the bootstrap principle to Δ(F) gives the estimate Δ̂ = Δ(F̂) = E*{D(F̂, F̂*)}, where F̂* denotes a simulated sample (x*_1, y*_1), ..., (x*_n, y*_n) taken from the data by case resampling. Usually simulation is required to approximate this estimate, as follows. For r = 1, ..., R we randomly resample cases from the data to obtain the sample (x*_{r1}, y*_{r1}), ..., (x*_{rn}, y*_{rn}), which we represent by F̂*_r, and to this sample we fit the prediction rule and calculate its predictions μ(x_j, F̂*_r) of the data responses y_j for j = 1, ..., n. The aggregate prediction error estimate is then calculated as

    R^{-1} Σ_{r=1}^R n^{-1} Σ_{j=1}^n c{y_j, μ(x_j, F̂*_r)}.    (6.49)
Intuitively this bootstrap estimate is less satisfactory than cross-validation, because the simulated dataset F̂*_r used to calculate the prediction rule is part of the data F̂ used for assessment of prediction error. In this sense the estimate is a hybrid of the apparent error estimate and a cross-validation estimate, a point to which we return shortly.

As we have noted in previous chapters, care is often needed in choosing what to bootstrap. Here, an approach which works better is to use the bootstrap to estimate the expected excess error e(F) defined in (6.42), which is the bias of the apparent error Δ_app, and to add this estimate to Δ_app. In theory the bootstrap estimate of e(F) is

    e(F̂) = E*{D(F̂, F̂*) − D(F̂*, F̂*)},

and its approximation from the simulations described in the previous paragraph defines the bootstrap estimate of expected excess error

    e_B = R^{-1} Σ_{r=1}^R [ n^{-1} Σ_{j=1}^n c{y_j, μ(x_j, F̂*_r)} − n^{-1} Σ_{j=1}^n c{y*_{rj}, μ(x*_{rj}, F̂*_r)} ].    (6.50)
That is, for the rth bootstrap sample we construct the prediction rule μ(x, F̂*_r), then calculate the average difference between the prediction errors when this rule is applied first to the original data and secondly to the bootstrap sample itself, and finally average across bootstrap samples. We refer to the resulting estimate of aggregate prediction error, Δ_B = e_B + Δ_app, as the bootstrap estimate of prediction error, given by

    Δ_B = n^{-1} Σ_{j=1}^n R^{-1} Σ_{r=1}^R c{y_j, μ(x_j, F̂*_r)} − R^{-1} Σ_{r=1}^R D(F̂*_r, F̂*_r) + D(F̂, F̂).    (6.51)
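For squared error and a least squares rule, the bootstrap estimate (6.51) can be sketched as follows (an illustrative Python sketch under those assumptions, not the authors' code; names are mine):

```python
import numpy as np

def bootstrap_prediction_error(X, y, R, rng):
    """Bootstrap estimate Delta_B of aggregate prediction error, eq. (6.51)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    D_app = np.mean((y - X @ beta) ** 2)        # apparent error D(F^, F^)

    naive = 0.0                                 # first term, the estimate (6.49)
    D_star = 0.0                                # running mean of D(F^*_r, F^*_r)
    for _ in range(R):
        idx = rng.integers(0, n, n)             # case resampling
        beta_r = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        naive += np.mean((y - X @ beta_r) ** 2)          # rule applied to data
        D_star += np.mean((y[idx] - X[idx] @ beta_r) ** 2)  # rule applied to F^*_r
    naive /= R
    D_star /= R

    excess = naive - D_star                     # excess error estimate e_B (6.50)
    return D_app + excess                       # Delta_B = e_B + Delta_app
```

The same loop yields both the naive estimate (6.49), as `naive`, and the bias-corrected estimate Δ_B.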
Note that the first term of (6.51), which is also the simple bootstrap estimate (6.49), is expressed as the average of the contributions R^{-1} Σ_{r=1}^R c{y_j, μ(x_j, F̂*_r)} that each original observation makes to the estimate of aggregate prediction error. These contributions are of interest in their own right, most importantly in assessing how the performance of the prediction rule changes with values of the explanatory variables. This is illustrated in Example 6.10 below.

Hybrid bootstrap estimates

It is useful to observe that the naive estimate (6.49), which is also the first term of (6.51), can be broken into two qualitatively different parts,

    n^{-1} Σ_{j=1}^n R^{-1} Σ_{r: j ∉ F̂*_r} c{y_j, μ(x_j, F̂*_r)}    (6.52)

and

    n^{-1} Σ_{j=1}^n R^{-1} Σ_{r: j ∈ F̂*_r} c{y_j, μ(x_j, F̂*_r)},    (6.53)
where R_{-j} is the number of the R bootstrap samples F̂*_r in which (x_j, y_j) does not appear. In (6.52) y_j is always predicted using data from which (x_j, y_j) is excluded, which is analogous to cross-validation, whereas (6.53) is similar to an apparent error calculation because y_j is always predicted using data that contain (x_j, y_j). Now R_{-j}/R is approximately equal to the constant e^{-1} = 0.368, so (6.52) is approximately proportional to

    Δ_BCV = n^{-1} Σ_{j=1}^n R_{-j}^{-1} Σ_{r: j ∉ F̂*_r} c{y_j, μ(x_j, F̂*_r)},    (6.54)

sometimes called the leave-one-out bootstrap estimate of prediction error. The notation refers to the fact that Δ_BCV can be viewed as a bootstrap smoothing of the cross-validation estimate Δ_CV. To see this, consider replacing the term c{y_j, μ(x_j, F̂_{-j})} in (6.44) by the expectation E*_{-j}[c{y_j, μ(x_j, F*)}], where E*_{-j} refers to the expectation over bootstrap samples F* of size n drawn from F̂_{-j}. The estimate (6.54) is a simulation approximation of this expectation, because of the result noted in Section 3.10.1 that the R_{-j} bootstrap samples in which case j does not appear are equivalent to random samples drawn from F̂_{-j}.

The smoothing in (6.54) may effect a considerable reduction in variance, compared to Δ_CV, especially if c(y_+, ŷ_+) is not continuous. But there will also be a tendency toward positive bias. This is because the typical bootstrap sample from which predictions are made in (6.54) includes only about (1 − e^{-1})n = 0.632n distinct data values, and the bias of cross-validation estimates increases as the size of the training set decreases.

What we have so far is that the bootstrap estimate of aggregate prediction error essentially involves a weighted combination of Δ_BCV and an apparent error estimate. Such a combination should have good variance properties, but may suffer from bias. However, if we change the weights in the combination it may be possible to reduce or remove this bias. This suggests that we consider the hybrid estimate

    Δ_w = w Δ_BCV + (1 − w) Δ_app,    (6.55)
and then select w to make the bias as small as possible, ideally E(Δ_w) = Δ + O(n^{-2}). Not unexpectedly it is difficult to calculate E(Δ_w) in general, but for squared error and least squares prediction it is relatively easy. We already know that the apparent error estimate has expectation σ²(1 − q n^{-1}), and that the true aggregate error is Δ = σ²(1 + q n^{-1}). It remains only to calculate E(Δ_BCV), where here

    Δ_BCV = n^{-1} Σ_{j=1}^n E*_{-j}(y_j − x_j^T β̂*_{-j})²,

with β̂*_{-j} the least squares estimate of β from a bootstrap sample with the jth case excluded. A rather lengthy calculation (Problem 6.13) shows that E(Δ_BCV) = σ²(1 + 2 q n^{-1}) + O(n^{-2}), from which it follows that

    E{w Δ_BCV + (1 − w) Δ_app} = σ²{1 + (3w − 1) q n^{-1}} + O(n^{-2}),

which agrees with Δ to terms of order n^{-1} if w = 2/3.

It seems impossible to find an optimal choice of w for general measures of prediction error and general prediction rules, but detailed calculations do suggest that w = 1 − e^{-1} = 0.632 is a good choice. Heuristically this value for w is equivalent to an adjustment for the below-average distance between cases and bootstrap samples without them, compared to what we expect in the real prediction problem. That the value 0.632 is close to the value 2/3 derived above is reassuring. The hybrid estimate (6.55) with w = 0.632 is known as the 0.632 estimator of prediction error and is denoted here by Δ_0.632. There is substantial empirical evidence favouring this estimate, so long as the number of covariates p is not close to n.

Example 6.10 (Nuclear power stations) Consider predicting the cost of a new power station based on the data of Example 6.8. We base our prediction on the linear regression model described there, so we have μ(x_j, F̂) = x_j^T β̂, where β̂ is the least squares estimate for a model with six covariates. The estimated error variance is s² = 0.6337/25 = 0.0253 with 25 degrees of freedom. The downwardly biased apparent error estimate is Δ_app = 0.6337/32 = 0.020, whereas the idealized estimate (6.38) is 0.025 × (1 + 7/32) = 0.031. In this situation the prediction error for a particular station seems most useful, but before we turn to individual stations, we discuss the overall estimates, which are given in Table 6.9.

Table 6.9 Estimates of aggregate prediction error (×10^-2) for data on nuclear power plants. Results for adjusted cross-validation are shown in parentheses.

  Apparent   Bootstrap   0.632   K-fold (adjusted) cross-validation
  error                          K = 32   16          10          6
  2.0        3.2         3.5     3.6      3.7 (3.7)   3.8 (3.7)   4.4 (4.2)

Those estimates show the pattern we would anticipate from the general
Figure 6.11 Components of prediction error for nuclear power data based on 200 bootstrap simulations. The top panel shows the values of y_j − μ(x_j, F̂*_r) against case number. The lower left panel shows the average error for each case, plotted against the raw residuals. The lower right panel shows the ratio of the model-based to the bootstrap prediction standard errors.
discussion. The apparent error is considerably smaller than the other estimates. The bootstrap estimate, with R = 200, is larger than the apparent error, but smaller than the cross-validation estimates, and the 0.632 estimate agrees well with the ordinary cross-validation estimate (6.44), for which K = n = 32. Adjustment slightly decreases the cross-validation estimates. Note that the idealized estimate appears to be quite accurate here, presumably because the model fits well and the errors are not far from homoscedastic — except for the last six cases.

Now consider the individual predictions. Prediction error arises from two components: the variability of the predictor and that of the associated error ε_+. Figure 6.11 gives some insight into these. Its top panel shows the values
of y_j − μ(x_j, F̂*_r) for r = 1, ..., R, plotted against case number j. The variability of the average error corresponds to the variation of individual observations about their predicted values, while the variance within each group reflects parameter estimation uncertainty. A striking feature is the small prediction error for the last six power plants, whose variances and means are both small. The lower left panel shows the average values of y_j − μ(x_j, F̂*_r) over the 200 simulations, plotted against the raw residuals. They agree closely, as we should expect with a well-fitting model. The lower right panel shows the ratio of the model-based prediction standard error to the bootstrap prediction standard error. It confirms that the model-based calculation described in Example 6.8 overestimates the predictive standard error for the last six plants, which have the partial turnkey guarantee. The estimated bootstrap prediction error for these plants is 0.003, while it is 0.032 for the rest. The last six cases fall into three groups determined by the values of the explanatory variables: in effect they are replicated.

It might be preferable to plot y_j − μ(x_j, F̂*_r) only for those bootstrap samples which exclude the jth case, and then the mean prediction error would better be compared to jackknifed residuals y_j − x_j^T β̂_{-j}. For these data the plots are very similar to those we have shown. ■

Example 6.11 (Times on delivery suite) For a more systematic comparison of prediction error estimates in linear regression, we use data provided by E. Burns on the times taken by 1187 women to give birth at the John Radcliffe Hospital in Oxford.
An appropriate linear model has as response the log time spent on delivery suite, and dummy explanatory variables indicating the type of labour, the use of electronic fetal monitoring, the use of an intravenous drip, the reported length of labour before arriving at the hospital, and whether or not the labour is the woman's first; seven parameters are estimated in all. We took 200 samples of size n = 50 at random from the full data. For each of these samples we fitted the model described above, and then calculated cross-validation estimates of prediction error Δ_{CV,K} with K = 50, 10, 5 and 2 groups, the corresponding adjusted cross-validation estimates Δ_{ACV,K}, the bootstrap estimate Δ_B, and the hybrid estimate Δ_0.632. We took R = 200 for the bootstrap calculations.

The results of this experiment are summarized in terms of estimates of the expected excess error in Table 6.10. The average apparent error and excess error were 15.7 × 10^-2 and 5.2 × 10^-2, the latter taken to be e(F) as defined in (6.42). The table shows averages and standard deviations of the differences between estimates Δ̂ and Δ_app. The cross-validation estimate with K = 50, the bootstrap and the 0.632 estimate have similar properties, while other choices of K give estimates that are more variable; the half-sample estimate Δ_{CV,2} is worst. Results for cross-validation with 10 and 5 groups are almost
Table 6.10 Summary results for estimates of prediction error for 200 samples of size n = 50 from a set of data on the times 1187 women spent on delivery suite at the John Radcliffe Hospital, Oxford. The table shows the average, standard deviation, and conditional mean squared error (×10^-2) for the 200 estimates of excess error. The "target" average excess error is 5.2 × 10^-2.

         Bootstrap   0.632   K-fold (adjusted) cross-validation
                             K = 50   10            5             2
  Mean   4.6         5.3     5.3      6.0 (5.7)     6.2 (5.5)     9.2 (5.7)
  SD     1.3         1.6     1.6      2.3 (2.2)     2.6 (2.3)     5.4 (3.3)
  MSE    0.23        0.24    0.24     0.28 (0.26)   0.30 (0.27)   0.71 (0.33)
the same. Adjustment significantly improves cross-validation when the group size is not small. The bootstrap estimate is least variable, but is downwardly biased. The final row of the table gives the conditional mean squared error, defined as 200^{-1} Σ_{i=1}^{200} {Δ̂_i − D_i(F, F̂)}² for each error estimate Δ̂. This measures the success of Δ̂ in estimating the true aggregate prediction error D(F, F̂) for each of the 200 samples. Again the ordinary cross-validation, bootstrap, and 0.632 estimates perform best.

In this example there is little to choose between K-fold cross-validation with 10 and 5 groups, which both perform worse than the ordinary cross-validation, bootstrap, and 0.632 estimators of prediction error. K-fold cross-validation should be used with adjustment if ordinary cross-validation or the simulation-based estimates are not feasible. ■
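The leave-one-out bootstrap estimate (6.54) and the 0.632 estimator (6.55) compared in these examples can be sketched as follows, again for squared error and a least squares rule (an illustration under those assumptions, not the authors' code):

```python
import numpy as np

def point632_estimator(X, y, R, rng):
    """Leave-one-out bootstrap (6.54) and the 0.632 estimator (6.55)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    d_app = np.mean((y - X @ beta) ** 2)              # apparent error

    err_sum = np.zeros(n)                             # errors when case j is out
    out_count = np.zeros(n)                           # R_{-j}
    for _ in range(R):
        idx = rng.integers(0, n, n)                   # case resampling
        out = np.setdiff1d(np.arange(n), idx)         # cases absent from F^*_r
        if out.size == 0:
            continue
        beta_r = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        err_sum[out] += (y[out] - X[out] @ beta_r) ** 2
        out_count[out] += 1

    keep = out_count > 0                              # guard against R_{-j} = 0
    d_bcv = np.mean(err_sum[keep] / out_count[keep])  # eq. (6.54)
    return 0.632 * d_bcv + 0.368 * d_app              # eq. (6.55), w = 0.632
```

Each case is assessed only on resamples that exclude it, so `d_bcv` plays the cross-validation role, while the 0.368 weight on the apparent error supplies the variance-reducing hybrid component.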
6.4.2 Variable selection

In many applications of multiple linear regression, one purpose of the analysis is to decide which covariate terms to include in the final model. The supposition is that the full model y = x^T β + ε with p covariates in (6.22) is correct, but that it may include some redundant terms. Our aim is to eliminate those redundant terms, and so obtain the true model, which will form the basis for further inference. This is somewhat simplistic from a practical viewpoint, because it assumes that one subset of the proposed linear model is "true": it may be more sensible to assume that a few subsets may be equally good approximations to a complicated true relationship between mean response and covariates.

Given that there are p covariate terms in the model (6.22), there are 2^p candidates for the true model, because we can include or exclude each covariate. In practice the number of candidates will be reduced if prior information necessitates inclusion of particular covariates or combinations of them. There are several approaches to variable selection, including various stepwise methods. But the approach we focus on here is the direct one of minimizing aggregate prediction error, when each candidate model is used to predict independent, future responses at the data covariate values. For simplicity we assume that models are fitted by least squares, and that aggregate prediction
error is average squared error. It would be a simple matter to use other prediction rules and other measures of prediction accuracy.

First we define some notation. We denote an arbitrary candidate model by M, which is one of the 2^p possible linear models. Whenever M is used as a subscript, it refers to elements of that model. Thus the n × p_M design matrix X_M contains those p_M columns of the full design matrix X that correspond to covariates included in M; the jth row of X_M is x_{M,j}^T; the least squares estimates for regression coefficients in M are β̂_M; and H_M is the hat matrix X_M (X_M^T X_M)^{-1} X_M^T that defines fitted values ŷ_M = H_M y under model M. The total number of regression coefficients in M is q_M = p_M + 1, assuming that an intercept term is always included.

Now consider prediction of single responses y_+ at each of the original design points x_1, ..., x_n. The average squared prediction error using model M is

    n^{-1} Σ_{j=1}^n (y_{+j} − x_{M,j}^T β̂_M)²,

and its expectation under model (6.22), conditional on the data, is the aggregate prediction error

    D(M) = σ² + n^{-1} Σ_{j=1}^n (μ_j − x_{M,j}^T β̂_M)²,

where μ^T = (μ_1, ..., μ_n) is the vector of mean responses for the true multiple regression model. Taking the expectation over the data distribution we obtain

    Δ(M) = E{D(M)} = (1 + n^{-1} q_M) σ² + n^{-1} μ^T (I − H_M) μ,    (6.56)

where μ^T (I − H_M) μ is zero only if model M is correct. The quantities D(M) and Δ(M) generalize D and Δ defined in (6.36) and (6.37).

In principle the best model would be the one that minimizes D(M), but since the model parameters are unknown we must settle for minimizing a good estimate of D(M) or Δ(M). Several resampling methods for estimating Δ were discussed in the previous subsection, so the natural approach would be to choose a good method and apply it to all possible models. However, accurate estimation of Δ(M) is not itself important: what is important is to estimate accurately the signs of differences among the Δ(M), so that we can identify which of the Δ(M)s is smallest. Of the methods considered earlier, the apparent error estimate Δ_app(M) = n^{-1} RSS_M was poor. Its use here is immediately ruled out when we observe that it always decreases when covariates are added to a model, so minimization always leads to the full model.
Cross-validation

One good estimate, when used with squared error, is the leave-one-out cross-validation estimate. In the present notation this is

    Δ_CV(M) = n^{-1} Σ_{j=1}^n {(y_j − ŷ_{M,j}) / (1 − h_{M,j})}²,    (6.57)

where ŷ_{M,j} is the fitted value for model M based on all the data and h_{M,j} is the leverage for case j in model M. The bias of Δ_CV(M) is small, but that is not enough to make it a good basis for selecting M. To see why, note first that an expansion gives

    n Δ_CV(M) ≈ ε^T (I − H_M) ε + 2 p_M σ² + μ^T (I − H_M) μ.    (6.58)
Then if model M is true, and M′ is a larger model, it follows that for large n

    Pr{Δ_CV(M) < Δ_CV(M′)} = Pr(χ²_d < 2d),

where d = p_{M′} − p_M. This probability is substantially below 1 unless d is large. It is therefore quite likely that selecting M to minimize Δ_CV(M) will lead to overfitting, even for large n. So although the term μ^T (I − H_M) μ in (6.58) guarantees that, for large n, incorrect models will not be selected, minimization of Δ_CV(M) does not provide consistent selection of the true model.

One explanation for this is that to estimate Δ(M) with sufficient accuracy we need both large amounts of data to fit model M and a large number of independent predictions. This can be accomplished using the more general cross-validation measure (6.43), under conditions given below. In principle we need to average (6.43) over all possible splits, but for practical purposes we follow (6.46). That is, using R different splits into training and assessment sets of sizes n_t = n − m and n_a = m, we generalize (6.57) to

    Δ_CV(M) = R^{-1} Σ_{r=1}^R m^{-1} Σ_{j ∈ S_{a,r}} {y_j − ŷ_{M,j}(S_{t,r})}²,

where ŷ_{M,j}(S_{t,r}) = x_{M,j}^T β̂_M(S_{t,r}) and β̂_M(S_{t,r}) are the least squares estimates for the coefficients in M fitted to the rth training set, whose subscripts are in S_{t,r}. Note that the same R splits into training and assessment sets are used for all models. It can be shown that, provided m is chosen so that n − m → ∞ and m/n → 1 as n → ∞, minimization of Δ_CV(M) will give consistent selection of the true model as n → ∞ and R → ∞.
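The split-sample selection procedure just described can be sketched as follows. This is an illustration only: candidate models are represented as tuples of column indices of the design matrix, the same R random splits are used for every model, and all names are mine:

```python
import numpy as np

def select_model_cv(X, y, models, n_t, R, rng):
    """Pick the model minimizing the split-sample CV criterion.

    models: list of tuples of column indices; n_t: training-set size,
    kept small relative to n so that m/n = 1 - n_t/n is close to 1.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    crit = np.zeros(len(models))
    for _ in range(R):
        perm = rng.permutation(n)
        tr, te = perm[:n_t], perm[n_t:]        # same split for all models
        for i, cols in enumerate(models):
            Xm = X[:, cols]
            beta = np.linalg.lstsq(Xm[tr], y[tr], rcond=None)[0]
            crit[i] += np.mean((y[te] - Xm[te] @ beta) ** 2)
    crit /= R
    return models[int(np.argmin(crit))], crit
```

In practice one must check that each training subset leaves every candidate model identifiable; as the nuclear power example below shows, singular training designs are a real obstacle for small n_t.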
Bootstrap methods

Corresponding results can be obtained for bootstrap resampling methods. The bootstrap estimate of aggregate prediction error (6.51) becomes

    Δ_B(M) = n^{-1} RSS_M + R^{-1} Σ_{r=1}^R { n^{-1} Σ_{j=1}^n (y_j − x_{M,j}^T β̂*_{M,r})² − n^{-1} RSS*_{M,r} },    (6.59)
where the second term on the right-hand side is an estimate of the expected excess error defined in (6.42). The resampling scheme can be either case resampling or error resampling, with x*_{M,j,r} = x_{M,j} for the latter. It turns out that minimization of Δ_B(M) behaves much like minimization of the leave-one-out cross-validation estimate, and does not lead to a consistent choice of the true model as n → ∞. However, there is a modification of Δ_B(M), analogous to that made for the cross-validation procedure, which does produce a consistent model selection procedure. The modification is to make simulated datasets of size n − m rather than n, such that m/n → 1 and n − m → ∞ as n → ∞. Also, we replace the estimate (6.59) by the simpler bootstrap estimate

    Δ_B(M) = R^{-1} Σ_{r=1}^R n^{-1} Σ_{j=1}^n (y_j − x_{M,j}^T β̂*_{M,r})²,    (6.60)
which is a generalization of (6.49). (The previous doubts about this simple estimate are less relevant for small n − m.) If case resampling is used, then n − m cases are randomly selected from the full set of n. If model-based resampling is used, the model being M with assumed homoscedasticity of errors, then X*_M is a random selection of n − m rows from X_M, and the n − m errors ε* are randomly sampled from the n mean-corrected modified residuals r_{M,j} − r̄_M for model M.

Bearing in mind the general advice that the number of simulated datasets should be at least R = 100 for estimating second moments, we should use at least that many here. The same R bootstrap resamples are used for each model M, as with the cross-validation procedure.

One major practical difficulty that is shared by the consistent cross-validation and bootstrap procedures is that fitting all candidate models to small subsets of data is not always possible. What empirical evidence there is concerning good choices for m/n suggests that this ratio should be about 3/4. If so, then in many applications some of the R subsets will have singular designs X*_M for big models, unless subsets are balanced by appropriate stratification on covariates in the resampling procedure.

Example 6.12 (Nuclear power stations) In Examples 6.8 and 6.10 our analyses focused on a linear regression model that includes six of the p = 10 covariates available. Three of these covariates — date, log(cap) and NE — are highly
6.4 ■Aggregate Prediction Error and Variable Selection
Figure 6.12 Aggregate prediction error estimates for sequence of models fitted to nuclear power stations data; see text. Leave-one-out cross-validation (solid line), bootstrap with R = 100 resamples of size 32 (dashed line) and 16 (dotted line).
significant, all others having P-values of 0.1 or more. Here we consider the selection of variables to include in the model. The total number of possible models, 2¹⁰ = 1024, is prohibitively large, and for the purposes of illustration we consider only the particular sequence of models in which variables enter in the order date, log(cap), NE, CT, log(N), PT, T1, T2, PR, BW: the first three are the highly significant variables.
Figure 6.12 plots the leave-one-out cross-validation estimates and the bootstrap estimates (6.60) with R = 100 of aggregate prediction error for the models with 0, 1, …, 10 covariates. The two estimates are very close, and both are minimized when six covariates are included (the six used in Examples 6.8 and 6.10). Selection of five or six covariates, rather than fewer, is quite clearcut. These results bear out the rough rule-of-thumb that variables are selected by cross-validation if they are significant at roughly the 0.1 level.

As the previous discussion would suggest, use of corresponding cross-validation and bootstrap estimates from training sets of size 20 or less is precluded because for training sets of such sizes the models with more than five covariates are frequently unidentifiable. That is, the unbalanced nature of the covariates, coupled with the binary nature of some of them, frequently leads to singular resample designs. Figure 6.12 includes bootstrap estimates for models with up to five covariates and training set of size 16: these results were obtained by omitting many singular resamples. These rather fragmentary results confirm that the model should include at least five covariates.

A useful lesson from this is that there is a practical obstacle to what in theory is a preferred variable selection procedure. One way to try to overcome
6 ■ Linear Regression

[Figure 6.13 panel labels: cv, resample 10; cv, resample 20; cv, resample 30; leave-one-out cv; boot, resample 10; boot, resample 20; boot, resample 30; boot, resample 50.]
this difficulty is to stratify on the binary covariates, but this is difficult to implement and does not work well here. ■

Example 6.13 (Simulation exercise)  In order to assess the variable selection procedures without the complication of singular resample designs, we consider a small simulation exercise in which procedures are applied to ten datasets simulated from a given model. There are p = 5 independent covariates, whose values are sampled from the uniform distribution on [0, 1], and responses y are generated by adding N(0, 1) variates to the means μ = x^T β. The cases we examine have sample size n = 50, and β₃ = β₄ = β₅ = 0, so the true model includes an intercept and two covariate terms. To simplify calculations only six models are fitted, by successively adding x₁, …, x₅ to an initial model with constant intercept. All resampling calculations are done with R = 100 samples. The number of datasets is admittedly small, but sufficient to make rough comparisons of performance.

The main results concern models with β₁ = β₂ = 2, which means that the two non-zero coefficients are about four standard errors away from zero. Each panel of Figure 6.13 shows, for the ten datasets, one variable selection criterion plotted against the number of covariates included in the model. Evidently the clearest indications of the true model occur when training set size is 10 or 20. Larger training sets give flat profiles for the criterion, and more frequent selection of overfitted models. These indications match the evidence from more extensive simulations, which suggest that if training set size n − m is about n/3 then the probability of correct model selection is 0.9 or higher, compared to 0.7 or less for leave-one-out cross-validation.

Further results were obtained with β₁ = 2 and β₂ = 0.5, the latter equal to one standard error away from zero.
Figure 6.13  Cross-validation and bootstrap estimates of aggregate prediction error for sequence of six models fitted to ten datasets of size n = 50 with p = 5 covariates. The true model includes only two covariates.

In this situation underfitting — failure to
include x₂ in the selected model — occurred quite frequently even when using training sets of size 20. This degradation of variable selection procedures when coefficients are smaller than two standard errors is reputed to be typical.
■
The theory used to justify the consistent cross-validation and bootstrap procedures may depend heavily on the assumptions that the dimension of the true model is small compared to the number of cases, and that the non-zero regression coefficients are all large relative to their standard errors. It is possible that leave-one-out cross-validation may work well in certain situations where model dimension is comparable to number of cases. This would be important, in light of the very clear difficulties of using small training sets with typical applications, such as Example 6.12. Evidently further work, both theoretical and empirical, is necessary to find broadly applicable variable selection methods.
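As a concrete illustration of the modified criterion, the sketch below, in Python (our own code, not the book's), computes the bootstrap estimate of aggregate prediction error (6.60) using case-resampled training sets of size n − m:

```python
import numpy as np

def boot_prediction_error(X, y, train_size, R=100, rng=None):
    """Bootstrap estimate of aggregate prediction error, cf. (6.60):
    fit the model by least squares to R case-resampled training sets
    of size train_size (= n - m), then average the squared prediction
    error over all n original cases."""
    rng = np.random.default_rng(rng)
    n = len(y)
    total = 0.0
    for _ in range(R):
        idx = rng.choice(n, size=train_size, replace=True)
        # lstsq returns a minimum-norm solution even when the
        # resampled design happens to be singular
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        total += np.mean((y - X @ beta) ** 2)
    return total / R
```

Comparing the criterion across a nested sequence of designs then mimics Figure 6.12: the model containing the truly active covariates should attain the smaller value.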
6.5 Robust Regression

The use of least squares regression estimates is preferred when errors are near-normal in distribution and homoscedastic. However, the estimates are very sensitive to outliers, that is cases which deviate strongly from the general relationship. Also, if errors have a long-tailed distribution (possibly due to heteroscedasticity), then least squares estimation is not an efficient method. Any regression analysis should therefore include appropriate inspection of diagnostics based on residuals to detect outliers, and to determine if a normal assumption for errors is reasonable. If the occurrence of outliers does not cause a change in the regression model, then they will likely be omitted from the fitting of that model. Depending on the general pattern of residuals for remaining cases, we may feel confident in fitting by least squares, or we may choose to use a more robust method to be safe. Essentially the resampling methods that we have discussed previously in this chapter can be adapted quite easily for use with many robust regression methods. In this section we briefly review some of the main points.

Perhaps the most important point is that gross outliers should be removed before final regression analysis, including resampling, is undertaken. There are two reasons for this. The first is that methods of fitting that are resistant to outliers are usually not very efficient, and may behave badly under resampling. The second reason is that outliers can be disruptive to resampling analysis of methods such as least squares that are not resistant to outliers. For model-based resampling, the error distribution will be contaminated and in the resampling the outliers can then occur at any x values. For case resampling, outlying cases will occur with variable frequency and make the bootstrap estimates of coefficients too variable; see Example 6.4.
The effects can be diagnosed from
Table 6.11  Survival data (Efron, 1988).

  Dose (rads)   Survival %
  117.5         44.000   55.000
  235.0         16.000   13.000
  470.0          4.000    1.960   6.120
  705.0          0.500    0.320
  940.0          0.110    0.015   0.019
  1410           0.700    0.006
the jackknife-after-bootstrap plots of Section 3.10.1 or similarly informative diagnostic plots, but such plots can fail to show the occurrence of multiple outliers. For datasets with possibly multiple outliers, diagnosis is aided by initial use of a fitting method that is highly resistant to the effects of outliers. One preferred resistant method is least trimmed squares, which minimizes

    Σ_{j=1}^m e²_{(j)}(β),        (6.61)
the sum of the m smallest squared deviations, where e_j(β) = y_j − x_j^T β. Usually m is taken to be [n/2] + 1. Residuals from the least trimmed squares fit should clearly identify outliers. The fit itself is not very efficient, and should best be thought of as an initial step in a more efficient analysis. (It should be noted that in some implementations of least trimmed squares, local minima of (6.61) may be found far away from the global minimum.)

Example 6.14 (Survival proportions)  The data in Table 6.11 and the left panel of Figure 6.14 are survival percentages for rats at a succession of doses of radiation, with two or three replicates at each dose. The theoretical relationship between survival rate and dose is exponential, so linear regression applies to

    x = dose,    y = log(survival percentage).
The right panel of Figure 6.14 plots these variables. There is a clear outlier, case 13, at x = 1410. The least squares estimate of slope is −59 × 10⁻⁴ using all the data, changing to −78 × 10⁻⁴ with standard error 5.4 × 10⁻⁴ when case 13 is omitted. The least trimmed squares estimate of slope is −69 × 10⁻⁴.

From the scatter plot it appears that heteroscedasticity may be present, so we resample cases. The effect of the outlier on the resample least squares estimates is illustrated in Figure 6.15, which plots R = 200 bootstrap least squares slopes β̂₁* against the corresponding values of Σ(x_j* − x̄*)², differentiated by the frequency with which case 13 appears in the resample. There are three distinct groups of bootstrapped slopes, with the lowest corresponding to resamples in which case 13 does not occur and the highest to samples where it occurs twice or more. A jackknife-after-bootstrap plot would clearly reveal the effect of case 13. The resampling standard error of β̂₁* is 15.3 × 10⁻⁴, but only 7.6 × 10⁻⁴ for
Here [•] denotes integer part.
Figure 6.14 Scatter plots of survival data.
Figure 6.15  Bootstrap estimates of slope and design sum-of-squares Σ(x_j* − x̄*)² (×10⁵), differentiated by frequency of case 13 (appears zero, one or more times), for case resampling with R = 200 from survival data.
samples without case 13. The corresponding resampling standard errors of the least trimmed squares slope are 20.5 × 10⁻⁴ and 18.0 × 10⁻⁴, showing both the resistance and inefficiency of the least trimmed squares method. ■
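The least trimmed squares criterion (6.61) can be minimized crudely by random search over exact fits to p-point subsets. The sketch below is our own illustrative Python code, not a production algorithm; serious implementations refine each candidate fit:

```python
import numpy as np

def lts_fit(X, y, n_starts=500, rng=None):
    """Least trimmed squares, cf. (6.61): minimize the sum of the
    m = [n/2] + 1 smallest squared deviations.  Here we fit exactly
    through many random p-point subsets and keep the fit with the
    smallest trimmed sum of squares."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    m = n // 2 + 1
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # exact elemental fit
        except np.linalg.LinAlgError:
            continue                                  # singular subset
        obj = np.sort((y - X @ beta) ** 2)[:m].sum()  # trimmed criterion
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta, best_obj
```

On a straight line with a single gross outlier, the trimmed fit recovers the slope that least squares misses, illustrating the resistance discussed above.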
Example 6.15 (Salinity data)  The data in Table 6.12 are n = 28 observations on the salinity of water in Pamlico Sound, North Carolina. The response in the second column is the bi-weekly average of salinity. The next three columns contain values of the covariates, respectively a lagged value of salinity, a trend
Table 6.12  Salinity data (Ruppert and Carroll, 1980).

  Case   Salinity   Lagged salinity   Trend indicator   River discharge
          sal        lag               trend             dis
   1       7.6        8.2              4                 23.01
   2       7.7        7.6              5                 22.87
   3       4.3        4.6              0                 26.42
   4       5.9        4.3              1                 24.87
   5       5.0        5.9              2                 29.90
   6       6.5        5.0              3                 24.20
   7       8.3        6.5              4                 23.22
   8       8.2        8.3              5                 22.86
   9      13.2       10.1              0                 22.27
  10      12.6       13.2              1                 23.83
  11      10.4       12.6              2                 25.14
  12      10.8       10.4              3                 22.43
  13      13.1       10.8              4                 21.79
  14      12.3       13.1              5                 22.38
  15      10.4       13.3              0                 23.93
  16      10.5       10.4              1                 33.44
  17       7.7       10.5              2                 24.86
  18       9.5        7.7              3                 22.69
  19      12.0       10.0              0                 21.79
  20      12.6       12.0              1                 22.04
  21      13.6       12.1              4                 21.03
  22      14.1       13.6              5                 21.01
  23      13.5       15.0              0                 25.87
  24      11.5       13.5              1                 26.29
  25      12.0       11.5              2                 22.93
  26      13.0       12.0              3                 21.31
  27      14.1       13.0              4                 20.77
  28      15.1       14.1              5                 21.39
indicator, and the river discharge. We consider a linear regression model with these three covariates.

The initial least squares analysis gives coefficients 0.78, −0.03 and −0.30, with intercept 9.70. The usual standard error for the trend coefficient is 0.16, so this coefficient would be judged not nearly significant. However, this fit is suspect, as can be seen not from the Q-Q plot of modified residuals but from the plot of cross-validation residuals versus leverages, where case 16 stands out as an outlier — due apparently to its unusual value of dis. The outlier is much more easily detected using the least trimmed squares fit, which has the quite different coefficient values 0.61, −0.15 and −0.86 with intercept 24.72: the residual of case 16 from this fit has standardized value 6.9. Figure 6.16 shows normal Q-Q plots of standardized residuals from least squares (left panel) and least trimmed squares fits (right panel); for the latter the scale factor is taken to be the median absolute residual divided by 0.6745, the value appropriate for estimating the standard deviation of normal errors.
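This robust scale factor is simple to compute; a small Python helper (ours) makes the construction explicit:

```python
import numpy as np

def robust_scale(residuals):
    """Median absolute residual divided by 0.6745, the value that makes
    the estimate consistent for the standard deviation of normal errors
    (0.6745 is the 0.75 quantile of the standard normal)."""
    return np.median(np.abs(residuals)) / 0.6745

def standardized_residuals(residuals):
    """Residuals standardized by the robust scale estimate."""
    return residuals / robust_scale(residuals)
```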
Application of standard algorithms for least trimmed squares with default settings can give very different, incorrect solutions.
Figure 6.16 Salinity data: standardized residuals from least squares (left) and least trimmed squares (right) fits using all cases.
There is some question as to whether the outlier is really aberrant, or simply reflects the need for a quadratic term in dis. ■

Robust methods

We suppose now that outliers have been isolated by diagnostic plots and set aside from further analysis. The problem now is whether or not that analysis should use least squares estimation: if there is evidence of a long-tailed error distribution, then we should downweight large deviations y_j − x_j^T β by using a robust method. Two main options for this are now described.

One approach is to minimize not sums of squared deviations but sums of absolute values of deviations, Σ |y_j − x_j^T β|, so giving less weight to those cases with the largest errors. This is the L₁ method, which generalizes — and has efficiency comparable to — the sample median estimate of a population mean. There is no simple expression for the approximate variance of L₁ estimators.

More efficient is M-estimation, which is analogous to maximum likelihood estimation. Here the coefficient estimates β̂ for a multiple linear regression solve the estimating equation

    Σ_{j=1}^n x_j ψ{(y_j − x_j^T β̂)/s} = 0,        (6.62)

where ψ(z) is a bounded replacement for z, and s is either the solution to a simultaneous estimating equation, or is fixed in advance. We choose the latter, taking s to be the median absolute deviation (divided by 0.6745) from the least trimmed squares regression fit. The solution to (6.62) is obtained by iterative weighted least squares, for which least trimmed squares estimates are good starting values.
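A minimal Python sketch of this iteration, with Huber's ψ, a scale s fixed in advance, and (for simplicity, an assumption of ours) an ordinary least squares starting value rather than least trimmed squares:

```python
import numpy as np

def m_estimate(X, y, s, c=1.345, n_iter=50):
    """Solve the M-estimation equation (6.62),
    sum_j x_j psi{(y_j - x_j^T beta)/s} = 0, psi(z) = z min(1, c/|z|),
    by iteratively reweighted least squares with fixed scale s."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # starting value
    for _ in range(n_iter):
        z = (y - X @ beta) / s
        w = np.ones_like(z)
        big = np.abs(z) > c
        w[big] = c / np.abs(z[big])                  # weight = psi(z)/z
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta
```

At convergence the weighted normal equations Σ_j w_j x_j (y_j − x_j^T β̂) = 0 coincide with (6.62), since w_j z_j = ψ(z_j).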
With a careful choice of ψ(·), M-estimates should have smaller standard errors than least squares estimates for long-tailed distributions of random errors ε, yet have comparable standard errors should those errors be homoscedastic normal. One standard choice is ψ(z) = z min(1, c/|z|), Huber's winsorizing function, for which the coefficient estimates have approximate efficiency 95% relative to least squares estimates for homoscedastic normal errors when c = 1.345. For large sample sizes M-estimates β̂ are approximately normal in distribution, with approximate variance

    var(β̂) ≐ σ² (X^T X)⁻¹ E{ψ²(ε/σ)} / [E{ψ′(ε/σ)}]².

… is equivalent to β̂₁*/v*^{1/2} > β̂₁/v^{1/2}. This confirms that the P-value of the permutation test is unaffected by studentizing.
(Section 6.2.5)
For least squares regression, model-based resampling gives a bootstrap estimator β̂* which satisfies

    β̂* = β̂ + (X^T X)⁻¹ Σ_{j=1}^n x_j ε_j*,

where the ε_j* are randomly sampled modified residuals. An alternative proposal is to bypass the resampling model for data and to define directly

    β̂** = β̂ + (X^T X)⁻¹ Σ_{j=1}^n u_j*,

where the u* are randomly sampled from the vectors

    u_j = x_j (y_j − x_j^T β̂),    j = 1, …, n.

Show that under this proposal β̂** has mean β̂ and variance equal to the robust variance estimate (6.26). Examine, theoretically or through numerical examples, to what extent the skewness of β̂** matches the skewness of β̂*.
(Section 6.3.1; Hu and Zidek, 1995)

For the linear regression model y = Xβ + ε, the improved version of the robust estimate of variance for the least squares estimates β̂ is

    v_rob = (X^T X)⁻¹ X^T diag(r₁², …, r_n²) X (X^T X)⁻¹,

where r_j is the jth modified residual. If the errors have equal variances, then the usual variance estimate v = s²(X^T X)⁻¹ would be appropriate and v_rob could be quite inefficient. To quantify this, examine the case where the random errors ε_j are independent N(0, σ²). Show first that

    E(r_j²) = σ².

Hence show that the efficiency of the ith diagonal element of v_rob relative to the ith diagonal element of v, as measured by the ratio of their variances, is

    b_ii² / {(n − p) g_i^T Q g_i},

where b_ii is the ith diagonal element of (X^T X)⁻¹, g_i^T = (d_i1², …, d_in²) with D = (X^T X)⁻¹ X^T, and Q has elements (δ_jk − h_jk)² / {(1 − h_j)(1 − h_k)}. Calculate this relative efficiency for a numerical example.
(Sections 6.2.4, 6.2.6, 6.3.1; Hinkley and Wang, 1991)
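For the numerical example requested above, the efficiency can be evaluated directly. The sketch below is our own Python code, taking Q to have elements (δ_jk − h_jk)²/{(1 − h_j)(1 − h_k)}; for the intercept-only model it reproduces the benchmark that the two variance estimates are exactly equally efficient:

```python
import numpy as np

def vrob_efficiency(X):
    """Efficiency of each diagonal element of v_rob relative to
    v = s^2 (X^T X)^{-1} under homoscedastic normal errors:
    b_ii^2 / {(n - p) g_i^T Q g_i}."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    D = XtX_inv @ X.T                 # p x n, rows d_i
    H = X @ D                         # hat matrix
    h = np.diag(H)
    Q = (np.eye(n) - H) ** 2 / np.outer(1 - h, 1 - h)
    G = D ** 2                        # rows are g_i
    return np.diag(XtX_inv) ** 2 / ((n - p) * np.einsum('ij,jk,ik->i', G, Q, G))
```

For a straight-line design the efficiencies fall below one, quantifying the price paid for using v_rob when the errors are in fact homoscedastic.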
6.7 ■ Problems

(h_jk denotes the (j, k)th element of the hat matrix H, and h_jj = h_j.)

The statistical function β(F) for M-estimation is defined by the estimating equation

    ∫ x ψ[{y − x^T β(F)} / σ(F)] dF(x, y) = 0,

where σ(F) is typically a robust scale parameter. Assume that the model contains an intercept, so that the covariate vector x includes the dummy variable 1. Use the technique of Problem 2.12 to show that the influence function for β(F) is

    L_β(x, y) = {∫ x x^T ψ′(ε) dF(x, y)}⁻¹ σ x ψ(ε),

where ε = (y − x^T β)/σ and ψ′(u) = dψ(u)/du; it is assumed that x ψ(ε) has mean zero. If the distribution of the covariate vector is taken to be the EDF of x₁, …, x_n, show that

    L_β(x, y) = n k⁻¹ (X^T X)⁻¹ σ x ψ(ε),

where X is the usual covariate matrix and k = E{ψ′(ε)}. Use the empirical version of this to verify the variance approximation

    v_L = n s² (X^T X)⁻¹ Σ_j ψ²(e_j/s) / {Σ_j ψ′(e_j/s)}²,

where e_j = y_j − x_j^T β̂ and s is the estimated scale parameter.
(Section 6.5)

Given raw residuals e₁, …, e_n, define independent random variables ε_j* by (6.21). Show that the first three moments of ε_j* are 0, e_j², and e_j³.
(a) Let e₁, …, e_n be raw residuals from the fit of a linear model y = Xβ + ε, and define bootstrap data by y* = Xβ̂ + ε*, where the elements of ε* are generated according to the wild bootstrap. Show that the bootstrap least squares estimates β̂* take at most 2ⁿ values, and that
    E*(β̂*) = β̂,    var*(β̂*) = v_wild = (X^T X)⁻¹ X^T W X (X^T X)⁻¹,

where W = diag(e₁², …, e_n²).
(b) Show that when all the errors have equal variances and the design is balanced, so that h_j = p/n, v_wild is negatively biased as an estimate of var(β̂).
(c) Show that for the simple linear regression model (6.1) the expected value of var*(β̂₁*) is

    σ² (n − 1 − m₄/m₂²) / (n² m₂),

where m_r = n⁻¹ Σ_j (x_j − x̄)^r. Hence show that if the x_j are uniformly spaced and the errors have equal variances, the wild bootstrap variance estimate is too small by a factor of about 1 − 14/(5n).
(d) Show that if the e_j are replaced by r_j, the difficulties in (b) and (c) do not arise.
(Sections 6.2.4, 6.2.6, 6.3.2)

Suppose that responses y₁, …, y_n with n = 2m correspond to m independent samples of size two, where the ith sample comes from a population with mean μ_i and these means are of primary interest; the m population variances may differ. Use appropriate dummy variables x_i to express the responses in the linear model y = Xβ + ε, where β_i = μ_i. With parameters estimated by least squares, consider estimating the standard error of β̂_i by case resampling.
(a) Show that the probability of getting a simulated sample in which all the parameters are estimable is

    Σ_{k=0}^m (−1)^k C(m, k) (1 − k/m)^{2m}.
(b) Consider constrained case resampling in which each of the m samples must be represented at least once. Show that the probability that there are r resample cases from the ith sample is …
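Returning to the wild bootstrap problem above: the construction is easy to simulate using the standard two-point multiplier t*_j taking values (1 ∓ √5)/2, which gives ε*_j = e_j t*_j the three moments 0, e_j², e_j³. The Python sketch (ours) checks E*(β̂*) = β̂ and var*(β̂*) = v_wild numerically:

```python
import numpy as np

def wild_bootstrap(X, y, R=1000, rng=None):
    """Wild bootstrap for least squares: y* = X beta-hat + e t*, where
    t*_j = (1 - sqrt 5)/2 with probability (5 + sqrt 5)/10 and
    (1 + sqrt 5)/2 otherwise, so E(t*) = 0 and E(t*^2) = E(t*^3) = 1."""
    rng = np.random.default_rng(rng)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    a, b = (1 - 5 ** 0.5) / 2, (1 + 5 ** 0.5) / 2
    p = (5 + 5 ** 0.5) / 10
    t = np.where(rng.random((len(y), R)) < p, a, b)
    ystar = (X @ beta)[:, None] + e[:, None] * t      # n x R resamples
    betas = np.linalg.lstsq(X, ystar, rcond=None)[0]  # p x R estimates
    return beta, betas.T
```

The simulated replicates have mean β̂ and variance close to v_wild = (X^T X)⁻¹ X^T diag(e_j²) X (X^T X)⁻¹, as part (a) asserts.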
… y_{+,1−α}, where y_{+,p} satisfies …

    λ = exp{β₀ − β₁ log(x − 5 + e^{β₄})},    κ = exp(β₂ − β₃ log x).        (7.16)
7.3 ■ Survival Data

Table 7.6  Failure times (hours) from an accelerated life test on PET film in SF₆ gas insulated transformers (Hirose, 1993). > indicates right-censoring.

  Voltage (kV)   Failure times (hours)
   5             7131    8482    8559    8762    9026    9034    9104    >9104.25  >9104.25  >9104.25
   7             50.25   87.75   87.76   87.77   92.90   92.91   95.96   108.30   108.30   117.90   123.90   124.30   129.70   135.60   135.60
  10             15.17   19.87   20.18   21.50   21.88   22.23   23.02   23.90    28.17    29.70
  15              2.40    2.42    3.17    3.75    4.65    4.95    6.23    6.68     7.30
This parametrization is chosen so that the range of each parameter is unbounded; note that x₀ = 5 − e^{β₄}. The upper panels of Figure 7.7 show the fit of this model when the parameters are estimated by maximizing the log likelihood ℓ. The left panel shows Q-Q plots for each of the voltages, and the right panel shows the fitted mean failure time and estimated threshold x₀. The fit seems broadly adequate.

We simulate replicate datasets by generating observations from the Weibull model obtained by substituting the MLEs into (7.16). In order to apply our assumed censoring mechanism, we sort the observations simulated with x = 5 to get y*₍₁₎ < ⋯ < y*₍₁₀₎, say, and then set y*₍₈₎, y*₍₉₎, and y*₍₁₀₎ equal to y*₍₇₎ + 0.25. We give these three observations censoring indicators d* = 0, so that they are treated as censored, treat all the other observations as uncensored, and fit the Weibull model to the resulting data.

For sake of illustration, suppose that interest focuses on the mean failure time θ when x = 4.9. To facilitate this we reparametrize the model to have

(Γ(v) denotes the Gamma function ∫₀^∞ u^{v−1} e^{−u} du.)
parameters θ and β = (β₁, …, β₄), where θ = 10⁻³ λ Γ(1 + 1/κ), with x = 4.9. The lower left panel of Figure 7.7 shows the profile log likelihood for θ, i.e.

    ℓ_prof(θ) = max_β ℓ(θ, β);

in the figure we renormalize the log likelihood to have maximum zero. Under the standard large-sample likelihood asymptotics outlined in Section 5.2.1, the approximate distribution of the likelihood ratio statistic W(θ) = 2{ℓ_prof(θ̂) − ℓ_prof(θ)} is χ²₁, so a 1 − α confidence set for the true θ is the set such that

    ℓ_prof(θ) ≥ ℓ_prof(θ̂) − ½ c_{1,1−α},

where c_{ν,p} denotes the p quantile of the χ²_ν distribution.
7 ■ Further Topics in Regression
where θ̂ is the overall MLE. For these data θ̂ = 24.85 and the 95% confidence interval is [19.75, 35.53]; the confidence set contains values of θ for which ℓ_prof(θ) exceeds the dotted line in the bottom left panel of Figure 7.7.

The use of the chi-squared quantile to set the confidence interval presupposes that the sample is large enough for the likelihood asymptotics to apply, and this can be checked by the parametric simulation outlined above. The lower right panel of the figure is a Q-Q plot of likelihood ratio statistics w*(θ̂) = 2{ℓ*_prof(θ̂*) − ℓ*_prof(θ̂)} based on 999 sets of data simulated from the fitted model. The distribution of the w*(θ̂) is close to chi-squared, but with
Figure 7.7  PET reliability data analysis. Top left panel: Q-Q plot of log failure times against quantiles of log Weibull distribution, with fitted model given by dotted lines, and censored data by o. Top right panel: fitted mean failure time as a function of voltage x; the dotted line shows the estimated voltage x₀ below which failure is impossible. Lower left panel: normalized profile log likelihood for mean failure time θ at x = 4.9; the dotted line shows the 95% confidence interval for θ using the asymptotic chi-squared distribution, and the dashed line shows the 95% confidence interval using bootstrap calibration of the likelihood ratio statistic. Lower right panel: chi-squared Q-Q plot for simulated likelihood ratio statistic, with dotted line showing its large-sample distribution.
Table 7.7  Comparison of estimated biases and standard errors of maximum likelihood estimates for the PET reliability data, using standard first-order likelihood theory, parametric bootstrap simulation, and model-based nonparametric resampling.

                         Likelihood          Parametric          Nonparametric
  Parameter   MLE        Bias     SE         Bias     SE         Bias     SE
  β₀          6.346      0        0.117      0.007    0.117      0.001    0.112
  β₁          1.958      0        0.082      0.007    0.082      0.006    0.080
  β₂          4.383      0        0.850      0.127    0.874      0.109    0.871
  β₃          1.235      0        0.388      0.022    0.393      0.022    0.393
  x₀          4.758      0        0.029     −0.004    0.030     −0.002    0.028
mean 1.12, and their 0.95 quantile is w*₍₉₅₀₎ = 4.09, to be compared with c_{1,0.95} = 3.84. This gives as bootstrap-calibrated 95% confidence interval the set of θ such that ℓ_prof(θ) ≥ ℓ_prof(θ̂) − ½ × 4.09, that is [19.62, 36.12], which is slightly wider than the standard interval.
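The calibration step itself amounts to replacing the χ²₁ quantile by an order statistic of the simulated likelihood ratio values; a short Python sketch (ours):

```python
import numpy as np

def calibrated_cutoff(w_star, alpha=0.05):
    """Bootstrap calibration of the likelihood ratio cutoff: the
    (R+1)(1-alpha)th order statistic of the R simulated values
    w*(theta-hat) replaces the chi-squared quantile c_{1,1-alpha}."""
    R = len(w_star)
    k = int((R + 1) * (1 - alpha))       # e.g. R = 999 gives w*_(950)
    return np.sort(w_star)[k - 1]
```

With R = 999 this returns w*₍₉₅₀₎; when the replicates come from an inflated chi-squared distribution, as here, the cutoff exceeds 3.84 and the interval widens.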
ℓ̈ is the matrix of second derivatives of ℓ with respect to θ and β.
Table 7.7 compares the bias estimates and standard errors for the model parameters using the parametric bootstrap described above and standard first-order likelihood theory, under which the estimated biases are zero, and the variance estimates are obtained as the diagonal elements of the inverse observed information matrix (−ℓ̈)⁻¹ evaluated at the MLEs. The estimated biases are small but significantly different from zero. The largest differences between the standard theory and the bootstrap results are for β₂ and β₃, for which the biases are of order 2-3%. The threshold parameter x₀ is well determined; the standard 95% confidence interval based on its asymptotic normal distribution is [4.701, 4.815], whereas the normal interval with estimated bias and variance is [4.703, 4.820].

A model-based nonparametric bootstrap may be performed by using residuals e_j = (y_j/λ̂_j)^κ̂_j, three of which are censored, then resampling errors ε* from their product-limit estimate, and then making uncensored bootstrap observations λ̂_j ε_j*^{1/κ̂_j}. The observations with x = 5 are then modified as outlined above, and the model refitted to the resulting data. The product-limit estimate for the residuals is very close to the survivor function of the standard exponential distribution, so we expect this to give results similar to the parametric simulation, and this is what we see in Table 7.7.

For censoring at a pre-determined time c, the simulation algorithms would work as described above, except that values of y* greater than c would be replaced by c and the corresponding censoring indicators d* set equal to zero. The number of censored observations in each simulated dataset would then be random; see Practical 7.3.
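For censoring at a fixed time c, one replicate of the parametric scheme can be sketched as follows (our Python code; numpy's Weibull generator has survivor function exp(−u^κ), so multiplying by λ gives the model's scale):

```python
import numpy as np

def resample_censored_weibull(lam, kappa, c, rng=None):
    """Generate one bootstrap dataset from a fitted Weibull model with
    survivor function exp{-(y/lambda)^kappa}: latent failure times
    greater than the fixed censoring time c are replaced by c, with
    censoring indicator d* = 0, so the number censored is random."""
    rng = np.random.default_rng(rng)
    lam = np.asarray(lam, dtype=float)
    y0 = lam * rng.weibull(kappa, size=lam.shape)   # latent failure times
    d = (y0 <= c).astype(int)                       # 1 = failure observed
    return np.minimum(y0, c), d
```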
Plots show that the simulated MLEs are close to normally distributed: in this case standard likelihood theory works well enough to give good confidence intervals for the parameters. The benefit of parametric simulation is that the bootstrap estimates give empirical evidence that the standard theory can
be trusted, while providing alternative methods for calculating measures of uncertainty if the standard theory is unreliable. It is typical of first-order likelihood methods that the variability of likelihood quantities is underestimated, although here the effect is small enough to be unimportant. ■

Proportional hazards model

If it can be assumed that the explanatory variables act multiplicatively on the hazard function, an elegant and powerful approach to survival data analysis is possible. Under the usual form of proportional hazards model the hazard function for an individual with covariates x is dΛ(y) = exp(x^T β) dΛ₀(y), where dΛ₀(y) is the 'baseline' hazard function that would apply to an individual with a fixed value of x, often x = 0. The corresponding cumulative hazard and survivor functions are

    Λ(y) = ∫₀^y exp(x^T β) dΛ₀(u),    1 − F(y; β, x) = {1 − F₀(y)}^{exp(x^T β)},

where 1 − F₀(y) is the baseline survivor function for the hazard dΛ₀(y).

The regression parameters β are usually estimated by maximizing the partial likelihood, which is the product over cases with d_j = 1 of terms

    exp(x_j^T β) / Σ_{k=1}^n H(y_k − y_j) exp(x_k^T β),        (7.17)

where H(u) equals zero if u < 0 and equals one otherwise. Since (7.17) is unaltered by recentring the x_j, we shall assume below that Σ_j x_j = 0; the baseline hazard then corresponds to the average covariate value x̄ = 0. In terms of the estimated regression parameters the baseline cumulative hazard function is estimated by the Breslow estimator

    Λ̂₀(y) = Σ_{j: y_j ≤ y} d_j / {Σ_{k=1}^n H(y_k − y_j) exp(x_k^T β̂)}.

…; then 2. set Y_j* = min(Y_j⁰*, C_j*), with D_j* = 1 if Y_j* = Y_j⁰* and zero otherwise.
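The Breslow estimator is a direct computation; a Python sketch (ours, ignoring ties for simplicity):

```python
import numpy as np

def breslow(y, d, X, beta):
    """Breslow estimate of the baseline cumulative hazard: at each
    failure time y_j (d_j = 1) the cumulative hazard jumps by
    1 / sum_{k: y_k >= y_j} exp(x_k^T beta)."""
    y, d = np.asarray(y, float), np.asarray(d)
    risk = np.exp(X @ beta)
    order = np.argsort(y)
    y, d, risk = y[order], d[order], risk[order]
    denom = np.cumsum(risk[::-1])[::-1]   # risk-set totals over {k: y_k >= y_j}
    times = y[d == 1]
    cumhaz = np.cumsum(1.0 / denom[d == 1])
    return times, cumhaz
```

With β = 0 this reduces to the Nelson-Aalen estimate of the cumulative hazard.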
The next example illustrates the use of these algorithms.
Example 7.6 (Melanoma data)  To illustrate these ideas, we consider data on the survival of patients with malignant melanoma, whose tumours were removed by operation at the Department of Plastic Surgery, University Hospital of Odense, Denmark. Operations took place from 1962 to 1977, and patients were followed to the end of 1977. Each tumour was completely removed, together with about 2.5 cm of the skin around it. The following variables were available for 205 patients: time in days since the operation, possibly censored; status at the end of the study (alive, dead from melanoma, dead from other causes); sex; age; year of operation; tumour thickness in mm; and an indicator of whether or not the tumour was ulcerated. Ulceration and tumour thickness are important prognostic variables: to have a thick or ulcerated tumour substantially increases the chance of death from melanoma, and we shall investigate how they affect survival. We assume that censoring occurs at random.

We fit a proportional hazards model under the assumption that the baseline hazards are different for the ulcerated group of 90 individuals, and the non-ulcerated group, but that there is a common effect of tumour thickness. For a flexible assessment of how thickness affects the hazard function, we fit a natural spline with four degrees of freedom; its knots are placed at the empirical 0.25, 0.5 and 0.75 quantiles of the tumour thicknesses. Thus our model is that the survivor functions for the ulcerated and non-ulcerated groups are

    1 − F₁(y; β, x) = {1 − F₁⁰(y)}^{exp(x^T β)},    1 − F₂(y; β, x) = {1 − F₂⁰(y)}^{exp(x^T β)},
where x has dimension four and corresponds to the spline, β is common to the groups, but the baseline survivor functions 1 − F₁⁰(y) and 1 − F₂⁰(y) may differ. For illustration we take the fitted censoring distribution to be the product-limit estimate obtained by setting censoring indicators d′ = 1 − d, and fitting a model with no covariates, so Ĝ is just the product-limit estimate of the censoring time distribution.

The left panel of Figure 7.8 shows the estimated survivor functions 1 − F̂₁⁰(y) and 1 − F̂₂⁰(y); there is a strong effect of ulceration. The right panel shows how the linear predictor x^T β̂ depends on tumour thickness: from 0-3 mm the effect on the baseline hazard changes from about exp(−1) = 0.37 to about exp(0.6) = 1.8, followed by a slight dip and a gradual upward increase to a risk of about exp(1.2) = 3.3 for a tumour 15 mm thick. Thus the hazard increases by a factor of about 10, but most of the increase takes place from 0-3 mm. However, there are too few individuals with tumours more than 10 mm thick for reliable inferences at the right of the panel.

The top left panel of Figure 7.9 shows the original fitted linear predictor, together with 19 replicates obtained by resampling cases, stratified by ulceration. The lighter solid lines in the panel below are pointwise 95% confidence limits, based on R = 999 replicates of this sampling scheme. In effect these are percentile method confidence limits for the linear predictor at each thickness.
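Stratified case resampling, as used here, simply resamples indices with replacement within each stratum; a minimal Python helper (ours):

```python
import numpy as np

def stratified_case_resample(strata, rng=None):
    """Return indices for one case resample, drawn with replacement
    separately within each stratum (here, ulcerated versus
    non-ulcerated), so the stratum sizes are kept fixed."""
    rng = np.random.default_rng(rng)
    strata = np.asarray(strata)
    idx = np.arange(len(strata))
    parts = [rng.choice(idx[strata == s], size=int((strata == s).sum()),
                        replace=True)
             for s in np.unique(strata)]
    return np.concatenate(parts)
```

Refitting the model to each such resample gives replicates like those in the top panels of Figure 7.9.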
Figure 7.8 Fit of a proportional hazards model for ulcer histology and survival of patients with malignant melanoma (Andersen et al., 1993, pp. 709–714). Left panel: estimated baseline survivor functions for cases with ulcerated (dots) and non-ulcerated (solid) tumours. Right panel: fitted linear predictor xᵀβ̂ for risk as a function of tumour thickness. The lower rug is for non-ulcerated patients, and the upper rug for ulcerated patients.
The sharp increase in risk for small thicknesses is clearly a genuine effect, while beyond 3 mm the confidence interval for the linear predictor is roughly [0, 1], with thickness having little or no effect. Results from model-based resampling using the fitted model and applying Algorithm 7.3, and from conditional resampling using Algorithm 7.2, are also shown; they are very similar to the results from resampling cases. In view of the discussion in Section 3.5, we did not apply the weird bootstrap.

The right panels of Figure 7.9 show how the estimated 0.2 quantile of the survival distribution, ŷ₀.₂ = min{y : F̂₁(y; β̂, x) ≥ 0.2}, depends on tumour thickness. There is an initial sharp decrease from 3000 days to about 750 days as tumour thickness increases from 0–3 mm, but the estimate is roughly constant from then on. The individual estimates are highly variable, but the degree of uncertainty mirrors roughly that in the left panels. Once again results for the three resampling schemes are very similar.

Unlike the previous example, where resampling and standard likelihood methods led to similar conclusions, this example shows the usefulness of resampling when standard approaches would be difficult or impossible to apply. ■
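Computing a quantile such as ŷ₀.₂ from an estimated survivor function is a one-line scan. The Python sketch below is purely illustrative (the function name and inputs are invented; the original analysis was of course not done this way): it returns the smallest time at which the estimated distribution function reaches p.

```python
def survival_quantile(times, surv, p):
    """Smallest y with F(y) = 1 - S(y) >= p, scanning an estimated
    survivor function S evaluated at increasing times."""
    for t, s in zip(times, surv):
        if 1.0 - s >= p:
            return t
    return None  # the estimated distribution never reaches probability p
```

Applied to each bootstrap replicate of the fitted survivor function, this gives the resampled quantiles of the kind plotted in the right panels of Figure 7.9.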
7.4 Other Nonlinear Models

A nonlinear regression model with independent additive errors is of form

  yⱼ = μ(xⱼ, β) + εⱼ,  j = 1, ..., n.  (7.20)
7 • Further Topics in Regression
Figure 7.9 Bootstrap results for melanoma data analysis. Top left: fitted linear predictor (heavy solid) and 19 replicates from case resampling (solid); the rug shows observed thicknesses. Top right: estimated 0.2 quantile of survivor distribution as a function of tumour thickness, for an individual with an ulcerated tumour (heavy solid), and 19 replicates from case resampling (solid); the rug shows observed thicknesses. Bottom left: pointwise 95% percentile confidence limits for the linear predictor, from case (solid), model-based (dots), and conditional (dashes) resampling. Bottom right: pointwise 95% percentile confidence limits for the 0.2 quantile of the survivor distribution, from case (solid), model-based (dots), and conditional (dashes) resampling, R = 999.
This defines an iteration that starts at β′ using a linear regression least squares fit, and at the final iteration β′ = β̂. At that stage the left-hand side of (7.21) is simply the residual eⱼ = yⱼ − μ(xⱼ, β̂). Approximate leverage values and other diagnostics are obtained from the linear approximation, that is, using the definitions in previous sections but with the uⱼs evaluated at β′ = β̂ as the values of explanatory variable vectors. This use of the linear approximation can give misleading results, depending upon the "intrinsic curvature" of the regression surface. In particular, the residuals will no longer have zero expectation in general, and standardized residuals rⱼ will no longer have constant variance under homoscedasticity of true errors.

The usual normal approximation for the distribution of β̂ is also based on the linear approximation. For the approximate variance, (6.24) applies with X replaced by U = (u₁, ..., uₙ)ᵀ evaluated at β̂. So with s² equal to the residual mean square, we have

  β̂ − β ∼ N(0, s²(UᵀU)⁻¹).  (7.22)
The accuracy of this approximation will depend upon two types of curvature effects, called parameter effects and intrinsic effects. The first of these is specific to the parametrization used in expressing μ(x, ·), and can be reduced by careful choice of parametrization. Of course resampling methods will be the more useful the larger are the curvature effects, and the worse the normal approximation.

Resampling methods apply here just as with linear regression, either simulating data from the fitted model with resampled modified residuals or by resampling cases. For the first of these it will generally be necessary to make a mean adjustment to whatever residuals are being used as the error population. It would also be generally advisable to correct the raw residuals for bias due to nonlinearity: we do not show how to do this here.

Example 7.7 (Calcium uptake data) The data plotted in Figure 7.10 show the calcium uptake of cells, y, as a function of time x after being suspended in a solution of radioactive calcium. Also shown is the fitted curve

  μ(x, β) = β₀{1 − exp(−β₁x)}.

The least squares estimates are β̂₀ = 4.31 and β̂₁ = 0.209, and the estimate of σ is 0.55 with 25 degrees of freedom. The standard errors for β̂₀ and β̂₁ based on (7.22) are 0.30 and 0.039.
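The fitting and case-resampling steps can be made concrete in a rough sketch. The Python below is illustrative only (the data, starting values, and function names are invented, and this is not the book's S-Plus code): Gauss–Newton least squares for μ(x, β) = β₀{1 − exp(−β₁x)}, then bootstrap standard errors by resampling cases.

```python
import math
import random

def fit_exp_model(xs, ys, beta=(4.0, 0.2), steps=30):
    """Gauss-Newton least squares for mu(x, beta) = b0 * (1 - exp(-b1 * x))."""
    b0, b1 = beta
    for _ in range(steps):
        u0 = [1.0 - math.exp(-b1 * x) for x in xs]     # d mu / d b0
        u1 = [b0 * x * math.exp(-b1 * x) for x in xs]  # d mu / d b1
        e = [y - b0 * u for y, u in zip(ys, u0)]       # current residuals
        # Solve the 2x2 normal equations (U^T U) delta = U^T e.
        a00 = sum(u * u for u in u0)
        a01 = sum(p * q for p, q in zip(u0, u1))
        a11 = sum(u * u for u in u1)
        g0 = sum(u * r for u, r in zip(u0, e))
        g1 = sum(u * r for u, r in zip(u1, e))
        det = a00 * a11 - a01 * a01
        if abs(det) < 1e-12:
            break                                      # degenerate resample: stop
        b0 += (a11 * g0 - a01 * g1) / det
        b1 += (a00 * g1 - a01 * g0) / det
    return b0, b1

def case_resample_ses(xs, ys, R=199, seed=1):
    """Bootstrap standard errors for (b0, b1) by resampling cases."""
    rng = random.Random(seed)
    n = len(xs)
    reps = []
    for _ in range(R):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(fit_exp_model([xs[i] for i in idx], [ys[i] for i in idx]))
    ses = []
    for j in (0, 1):
        m = sum(b[j] for b in reps) / R
        ses.append(math.sqrt(sum((b[j] - m) ** 2 for b in reps) / (R - 1)))
    return ses
```

Stratified case resampling, as used for the calcium data below, would simply restrict the index draws within strata.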
Figure 7.10 Calcium uptake data and fitted curve (left panel), with raw residuals (right panel) (Rawlings, 1988, p. 403).
Table 7.8

        Estimate   Bootstrap bias   Theoretical SE   Bootstrap SE
  β̂₀     4.31         0.028             0.30             0.38
  β̂₁     0.209        0.004             0.039            0.040
The right panel of Figure 7.10 shows that homogeneity of variance is slightly questionable here, so we resample cases by stratified sampling. Estimated biases and standard errors for β̂₀ and β̂₁ based on 999 bootstrap replicates are given in Table 7.8. The main point to notice is the appreciable difference between theoretical and bootstrap standard errors for β̂₀. Figure 7.11 illustrates the results. Note the non-elliptical pattern of variation and the non-normality: the z-statistics are also quite non-normal. In this case the bootstrap should give better results for confidence intervals than normal approximations, especially for β̂₀. The bottom right panel suggests that the parameter estimates are closer to normal on logarithmic scales. Results for model-based resampling assuming homoscedastic errors are fairly similar, although the standard error for β̂₀ is then 0.32. The effects of nonlinearity are negligible in this case: for example, the maximum absolute bias of residuals is about 0.012. ■

7.6 Nonparametric Regression

  μ̂(x) = Σⱼ yⱼ w{(x − xⱼ)/b} / Σⱼ w{(x − xⱼ)/b},  (7.24)
with w(·) a symmetric density function and b an adjustable "bandwidth" constant that determines how widely the averaging is done. This estimate is similar in many ways to the kernel density estimate discussed in Example 5.13, and as there the choice of b depends upon a trade-off between bias and variability of the estimate: small b gives small bias and large variance, whereas large b has the opposite effects. Ideally b would vary with x, to reflect large changes in the derivative of μ(x) and heteroscedasticity, both evident in Figure 7.14. Modifications to the estimate (7.24) are needed at the ends of the x range, to avoid the inherent bias when there is little or no data on one side of x. In many ways more satisfactory are the local regression methods, where a local linear or quadratic curve is fitted using weights w{(x − xⱼ)/b} as above, and then μ̂(x) is taken to be the fitted value at x. Implementations of this idea include the lowess method, which also incorporates trimming of outliers. Again the choice of b is critical.

A different approach is to define a curve in terms of basis functions, such as powers of x, which define polynomials. The fitted model is then a linear combination of basis functions, with coefficients determined by least squares regression. Which basis to use depends on the application, but polynomials are
generally bad because fitted values become increasingly variable as x moves toward the ends of its data range: polynomial extrapolation is notoriously poor. One popular choice for basis functions is cubic splines, with which μ(x) is modelled by a series of cubic polynomials joined at "knot" values of x, such that the curve has continuous second derivatives everywhere. The least squares cubic spline fit minimizes the penalized least squares criterion for fitting μ(x),

  Σⱼ {yⱼ − μ(xⱼ)}² + λ ∫ {μ″(x)}² dx;

weighted sums of squares can be used if necessary. In most software implementations the spline fit can be determined either by specifying the degrees of freedom of the fitted curve, or by applying cross-validation (Section 6.4.1). A spline fit will generally be biased, unless the underlying curve is in fact a cubic. That such bias is nearly always present for nonparametric curve fits can create difficulties. The other general feature that makes interpretation difficult is the occurrence of spurious bumps and bends in the curve estimates, as we shall see in Example 7.10.

Resampling methods

Two types of applications of nonparametric curves are use in checking a parametric curve, and use in setting confidence limits for μ(x) or prediction limits for Y = μ(x) + ε at some values of x. The first type is quite straightforward, because data would be simulated from the fitted parametric model: Example 7.11 illustrates this. Here we look briefly at confidence limits and prediction limits, where the nonparametric curve is the only "model". The basic difficulty for resampling here is similar to that with density estimation, illustrated in Example 5.13, namely bias. Suppose that we want to calculate a confidence interval for μ(x) at one or more values of x.
Case resampling cannot be used with standard recommendations for nonparametric regression, because the resampling bias of μ̂*(x) will be smaller than that of μ̂(x). This could probably be corrected, as with density estimation, by using a larger bandwidth or equivalent tuning constant. But simpler, at least in principle, is to apply the idea of model-based resampling discussed in Chapter 6. The naive extension of model-based resampling would generate responses yⱼ* = μ̂(xⱼ) + εⱼ*, where μ̂(xⱼ) is the fitted value from some nonparametric regression method, and εⱼ* is sampled from appropriately modified versions of the residuals yⱼ − μ̂(xⱼ). Unfortunately the inherent bias of most nonparametric regression methods distorts both the fitted values and the residuals, and thence biases the resampling scheme. One recommended strategy is to use as simulation model a curve that is oversmoothed relative to the usual estimate. For definiteness, suppose that we are using a kernel method or a local smoothing method with tuning constant b, and that we use cross-validation
to determine the best value of b. Then for the simulation model we use the corresponding curve with, say, 2b as the tuning constant. To try to eliminate bias from the simulation errors εⱼ*, we use residuals from an undersmoothed curve, say with tuning constant b/2. As with linear regression, it is appropriate to use modified residuals, where leverage is taken into account as in (6.9). This is possible for most nonparametric regression methods, since they are linear. Detailed asymptotic theory shows that something along these lines is necessary to make resampling work, but there is no clear guidance as to precise relative values for the tuning constants.

Example 7.10 (Motorcycle impact data) The response y here is acceleration measured x milliseconds after impact in an accident simulation experiment. The full data were shown in Figure 7.14, but for computational reasons we eliminate replicates for the present analysis, which leaves n = 94 cases with distinct x values. The solid line in the top left panel of Figure 7.15 shows a cubic spline fit for the data of Figure 7.14, chosen by cross-validation and having approximately 12 degrees of freedom. The top right panel of the figure gives the plot of modified residuals against x for this fit. Note the heteroscedasticity, which broadly corresponds to the three strata separated by the vertical dotted lines. The estimated variances for these strata are approximately 4, 600 and 140. Reciprocals of these were used as weights for the spline fit in the left panel. Bias in these residuals is evident at times 10–15 ms, where the residuals are first mostly negative and then positive because the curve does not follow the data closely enough.
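The oversmoothing/undersmoothing scheme just described can be sketched in a few lines. This is an illustrative Python version only, with a simple Gaussian-kernel smoother standing in for the spline fit and all names invented; it is not the analysis actually used for the motorcycle data.

```python
import math
import random

def smooth(x, xs, ys, b):
    """Gaussian-kernel smoother, a stand-in here for the spline fit."""
    w = [math.exp(-0.5 * ((x - xj) / b) ** 2) for xj in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

def resample_curve(xs, ys, b, rng):
    """One resampled response vector: oversmoothed fitted values (2b) plus
    mean-adjusted residuals from an undersmoothed fit (b/2)."""
    over = [smooth(x, xs, ys, 2 * b) for x in xs]
    under = [smooth(x, xs, ys, b / 2) for x in xs]
    res = [y - f for y, f in zip(ys, under)]
    rbar = sum(res) / len(res)
    res = [r - rbar for r in res]          # mean adjustment
    return [f + rng.choice(res) for f in over]

def basic_interval(x0, xs, ys, b, R=199, alpha=0.05, seed=1):
    """Basic bootstrap limits for mu(x0), equating the distributions of
    mu*(x0) - mu_tilde(x0) and mu_hat(x0) - mu(x0)."""
    rng = random.Random(seed)
    mu_hat = smooth(x0, xs, ys, b)
    mu_tilde = smooth(x0, xs, ys, 2 * b)
    stars = sorted(smooth(x0, xs, resample_curve(xs, ys, b, rng), b)
                   for _ in range(R))
    hi_q = stars[int((1 - alpha / 2) * (R + 1)) - 1]
    lo_q = stars[int((alpha / 2) * (R + 1)) - 1]
    return mu_hat - (hi_q - mu_tilde), mu_hat - (lo_q - mu_tilde)
```

In the example in the text the smoother is a weighted spline, the bandwidth is re-chosen by cross-validation for each resampled dataset, and the residual sampling is stratified; none of those refinements is shown in this sketch.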
There is a rough correspondence between kernel smoothing and spline smoothing, and this, together with the previous discussion, suggests that for model-based resampling we use yⱼ* = μ̃(xⱼ) + εⱼ*, where μ̃ is the spline fit obtained by doubling the cross-validation choice of λ. This fit is the dotted line in the top left panel of Figure 7.15. The random errors εⱼ* are sampled from the modified residuals for another spline fit in which λ is half the cross-validation value. The lower right panel of the figure displays these residuals, which show less bias than those for the original fit, though perhaps a smaller bandwidth would be better still. The sampling is stratified, to reflect the very strong heteroscedasticity.

We simulated R = 999 datasets in this way, and to each fitted the spline curve μ̂*(x), with the bandwidth chosen by cross-validation each time. We then calculated 90% confidence intervals at six values of x, using the basic bootstrap method modified to equate the distributions of μ̂*(x) − μ̃(x) and μ̂(x) − μ(x). For example, at x = 20 the estimates μ̂ and μ̃ are respectively −110.8 and −106.2, and the 950th ordered value of μ̂* is −87.2, so that the upper confidence limit is −110.8 − {−87.2 − (−106.2)} = −129.8. The resulting confidence intervals are shown in the bottom left panel of Figure 7.15, together with the original
Show that Y has unconditional mean and variance (7.15), and express π and φ in terms of a and b. Express a and b in terms of π and φ, and hence explain how to generate data with mean and variance (7.15) by generating π from a beta distribution, and then, conditional on the probabilities, generating binomial variables with probabilities π and denominators m. How should your algorithm be amended to generate beta-binomial data with variance function π(1 − π)? (Example 7.3)
6
For generalized linear models the analogue of the case-deletion result in Problem 6.2 is

  β̂₋ⱼ ≈ β̂ − (XᵀWX)⁻¹ xⱼ wⱼ^{1/2} r_{Pj}/(1 − hⱼ).

(a) Use this to show that when the jth case is deleted the predicted value for yⱼ is
(b) Use (a) to give an approximation for the leave-one-out cross-validation estimate of prediction error for a binary logistic regression with cost (7.23). (Sections 6.4.1, 7.2.2)
7.9 Practicals

1 Dataframe remission contains data from Freeman (1987) concerning a measure of cancer activity, the LI values, for 27 cancer patients, of whom 9 went into remission. Remission is indicated by the binary variable r = 1. Consider testing the hypothesis that the LI values do not affect the probability of remission. First, fit a binary logistic model to the data, plot them, and perform a permutation test:
attach(remission)
plot(LI + 0.03*rnorm(27), r, pch = 1, xlab = "LI, jittered", xlim = c(0, 2.5))
rem.glm

8.2 Time Series

a sharp cut-off in the partial autocorrelations is characteristic of autoregressive processes. The sample estimates of the ρⱼ and of the partial autocorrelations are basic summaries of the structure of a time series. Plots of them against j are called the correlogram and partial correlogram of the series. One widely used class of linear time series models is the autoregressive-moving average or ARMA process. The general ARMA(p, q) model is defined
by

  Yⱼ = Σ_{i=1}^{p} αᵢ Yⱼ₋ᵢ + εⱼ + Σ_{i=1}^{q} βᵢ εⱼ₋ᵢ.

I(ωₖ) is the periodogram,

  I(ωₖ) = n⁻¹ |ỹₖ|² = n⁻¹ [ {Σ_{j=0}^{n−1} yⱼ cos(ωₖ j)}² + {Σ_{j=0}^{n−1} yⱼ sin(ωₖ j)}² ].
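As a concrete check of this definition, the periodogram can be computed directly (an illustrative Python sketch with invented names, using the O(n²) transform rather than the fast Fourier transform):

```python
import cmath
import math

def periodogram(y):
    """Periodogram I(omega_k) = |ytilde_k|^2 / n at the Fourier
    frequencies omega_k = 2*pi*k/n, for k = 1, ..., (n - 1)//2."""
    n = len(y)
    vals = []
    for k in range(1, (n - 1) // 2 + 1):
        yt = sum(yj * cmath.exp(-2j * math.pi * k * j / n)
                 for j, yj in enumerate(y))
        vals.append(abs(yt) ** 2 / n)
    return vals
```

For odd n these ordinates satisfy the decomposition (8.5) below exactly: the centred sum of squares of the data equals twice the sum of the periodogram ordinates.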
8 ■Complex Dependence
The orthogonality properties of the vectors involved in the Fourier transform imply that the overall sum of squares of the data may be expressed as

  Σ_{j=0}^{n−1} (yⱼ − ȳ)² = 2 Σ_{k=1}^{n_F} I(ωₖ).  (8.5)

The empirical Fourier transform and its inverse can be rapidly calculated by an algorithm known as the fast Fourier transform.

If the data arise from a stationary process {Yⱼ} with spectrum g(ω), where Yⱼ = Σ_{l=−∞}^{∞} a_l ε_{j−l}, with {εⱼ} a normal white noise process, then as n increases, and provided the terms |a_l| decrease sufficiently fast as l → ±∞, the real and imaginary parts of the complex-valued random variables ỹ₁, ..., ỹ_{n_F} are asymptotically independent normal variables with means zero and variances ng(ω₁)/2, ..., ng(ω_{n_F})/2; furthermore the ỹₖ at different Fourier frequencies are asymptotically independent. This implies that as n → ∞ for such a process, the periodogram values I(ωₖ) at different Fourier frequencies will be independent, and that I(ωₖ) will have an exponential distribution with mean g(ωₖ). (If n is even, I(π) must be added to (8.5); I(π) is approximately independent of the I(ωₖ) and its asymptotic distribution is g(π)χ₁².) Thus (8.5) decomposes the total sum of squares into asymptotically independent components, each associated with the amount of variation due to a particular Fourier frequency. Weaker versions of these results hold when the process is not linear, or when the process {εⱼ} is not normal, the key difference being that the joint limiting distribution of the periodogram values holds only for a finite number of fixed frequencies.

If the series is white noise, under mild conditions its periodogram ordinates I(ω₁), ..., I(ω_{n_F}) are roughly a random sample from an exponential distribution with mean γ₀. Tests of independence may be based on the cumulative periodogram ordinates,

  Cₖ = Σ_{j=1}^{k} I(ωⱼ) / Σ_{j=1}^{n_F} I(ωⱼ),  k = 1, ..., n_F − 1.

When the data are white noise these ordinates have roughly the same joint distribution as the order statistics of n_F − 1 uniform random variables.

Example 8.1 (Rio Negro data) The data for our first time series example are monthly averages of the daily stages — heights — of the Rio Negro, 18 km upstream at Manaus, from 1903 to 1992, made available to us by Professors H. O'Reilly Sternberg and D. R. Brillinger of the University of California at Berkeley. Because of the tiny slope of the water surface and the lower courses of its flatland affluents, these data may be regarded as a reasonable approximation of the water level in the Amazon River at the confluence of the
Figure 8.1 Deseasonalized monthly average stage (metres) of the Rio Negro at Manaus, 1903–1992 (Sternberg, 1995).
two rivers. To remove the strong seasonal component, we subtract the average value for each month, giving the series of length n = 1080 shown in Figure 8.1. For an initial example, we take the first ten years of observations. The top panels of Figure 8.2 show the correlogram and partial correlogram for this shorter series, with horizontal lines showing approximate 95% confidence limits for correlations from a white noise series. The shape of the correlogram and the cut-off in the partial correlogram suggest that a low-order autoregressive model will fit the data, which are quite highly correlated. The lower left panel of the figure shows the periodogram of the series, which displays the usual high variability associated with single periodogram ordinates. The lower right panel shows the cumulative periodogram, which lies well outside its overall 95% confidence band and clearly does not correspond to a white noise series.

An AR(2) model fitted to the shorter series gives α̂₁ = 1.14 and α̂₂ = −0.31, both with standard error 0.062, and estimated innovation variance 0.598. The left panel of Figure 8.3 shows a normal probability plot of the standardized residuals from this model, and the right panel shows the cumulative periodogram of the residual series. The residuals seem close to Gaussian white noise. ■
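Estimates like the AR(2) coefficients just quoted can be obtained in outline by least squares regression of yⱼ on its two predecessors. The following is an illustrative Python sketch on synthetic data (the function name and data are invented; the text's fit was to the Rio Negro series, by a different routine):

```python
import random

def fit_ar2(y):
    """Least squares estimates (a1, a2) in y_j = a1*y_{j-1} + a2*y_{j-2} + e_j."""
    n = len(y)
    s11 = sum(y[j - 1] ** 2 for j in range(2, n))
    s22 = sum(y[j - 2] ** 2 for j in range(2, n))
    s12 = sum(y[j - 1] * y[j - 2] for j in range(2, n))
    b1 = sum(y[j] * y[j - 1] for j in range(2, n))
    b2 = sum(y[j] * y[j - 2] for j in range(2, n))
    det = s11 * s22 - s12 * s12
    # Solve the 2x2 normal equations for the two lag coefficients.
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det
```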
8.2.2 Model-based resampling

There are two approaches to resampling in the time domain. The first and simplest is analogous to model-based resampling in regression. The idea is to fit a suitable model to the data, to construct residuals from the fitted model, and then to generate new series by incorporating random samples from the
Figure 8.2 Summary plots for the Rio Negro data, 1903-1912. The top panels show the correlogram and partial correlogram for the series. The bottom panels show the periodogram and cumulative periodogram.
residuals into the fitted model. The residuals are typically recentred to have the same mean as the innovations of the model. About the simplest situation is when the AR(1) model (8.2) is fitted to an observed series y₁, ..., yₙ, giving estimated autoregressive coefficient α̂ and estimated innovations

  eⱼ = yⱼ − α̂ yⱼ₋₁,  j = 2, ..., n;

e₁ is unobtainable because y₀ is unknown. Model-based resampling might then proceed by equi-probable sampling with replacement from the centred residuals e₂ − ē, ..., eₙ − ē to obtain simulated innovations ε₂*, ..., εₙ*, and then setting y₁* = y₁ and yⱼ* = α̂ yⱼ₋₁* + εⱼ* for j = 2, ..., n.
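The whole scheme fits in a few lines. Here is a rough Python rendering (names invented; fixing y₁* = y₁ is the simple starting choice described above):

```python
import random

def ar1_model_based_resample(y, rng=random.Random(1)):
    """One bootstrap series from a fitted AR(1) model: estimate alpha by
    least squares, resample the centred residuals, rebuild the series."""
    n = len(y)
    alpha = (sum(y[j] * y[j - 1] for j in range(1, n))
             / sum(y[j - 1] ** 2 for j in range(1, n)))
    e = [y[j] - alpha * y[j - 1] for j in range(1, n)]
    ebar = sum(e) / len(e)
    e = [v - ebar for v in e]     # recentre the residuals
    ystar = [y[0]]                # start the recursion at y_1
    for _ in range(1, n):
        ystar.append(alpha * ystar[-1] + rng.choice(e))
    return ystar
```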
Figure 8.3 Plots for residuals from the AR(2) model fitted to the Rio Negro data, 1903–1912: normal Q–Q plot of the standardized residuals (left), and cumulative periodogram of the residual series (right).
if l → ∞ and l/n → 0 as n → ∞. To calculate approximations for the mean squared errors of β̂ and v̂ requires more careful calculations and involves the variance of Σⱼ(Sⱼ − S̄)². This is messy in general, but the essential points remain under the simplifying assumptions that {Yⱼ} is an m-dependent normal process. In this case γ_{m+1} = γ_{m+2} = ··· = 0, and the third and higher cumulants of the
Ȳ is the average of Y₁, ..., Yₙ.
process are zero. Suppose also that m < l. Then the variance of Σⱼ(Sⱼ − S̄)² is approximately

  var{Σⱼ(Sⱼ − S̄)²} = b var{(S₁ − μ)²} + 2(b − 1) cov{(S₁ − μ)², (S₂ − μ)²}.

For normal data,

  var{(S₁ − μ)²} = 2{var(S₁ − μ)}²,
  cov{(S₁ − μ)², (S₂ − μ)²} = 2{cov(S₁ − μ, S₂ − μ)}²,

so

  var{Σⱼ(Sⱼ − S̄)²} = 2b(l⁻¹τ²)² + 4b(l⁻²ζ⁽ˡ⁾)²,

where, under suitable conditions on the process,

  ζ⁽ˡ⁾ = γ₁ + 2γ₂ + ··· + lγ_l → Σ_{j=1}^{∞} jγⱼ = ζ,

say. After a delicate calculation we find that

  E(β̂) − β ∼ −h″(μ) n⁻¹l⁻¹ζ,   var(β̂) ∼ {½h″(μ)}² 2l n⁻³τ⁴,  (8.13)
  E(v̂) − v ∼ −2h′(μ)² n⁻¹l⁻¹ζ,   var(v̂) ∼ h′(μ)⁴ 2l n⁻³τ⁴,  (8.14)
thus establishing that the mean squared errors of β̂ and v̂ are of form (8.9). This development can clearly be extended to multivariate time series, and thence to more complicated parameters of a single series. For example, for the first-order correlation coefficient of the univariate series {Xⱼ}, we would apply the argument to the trivariate series {Yⱼ} = {(Xⱼ, Xⱼ², XⱼXⱼ₋₁)} with mean μ = (μ₁, μ₁₁, μ₁₂), and set θ = h(μ₁, μ₁₁, μ₁₂) = (μ₁₂ − μ₁²)/(μ₁₁ − μ₁²).

When overlapping blocks are resampled, the argument is similar but the details change. If the data are not wrapped around a circle, there are n − l + 1 blocks with averages Sⱼ = l⁻¹ Σ_{i=1}^{l} Yⱼ₊ᵢ₋₁, and

  E*(Ȳ* − Ȳ) = {l(n − l + 1)}⁻¹ { l(l − 1)Ȳ − Σ_{j=1}^{l−1} (l − j)(Yⱼ + Y_{n−j+1}) }.  (8.15)

In this case the leading term of the expansion for β̂ is the product of h′(Ȳ) and the right-hand side of (8.15), so the bootstrap bias estimate for Ȳ as an estimator of θ = μ is non-zero, which is clearly misleading since E(Ȳ) = μ. With overlapping blocks, the properties of the bootstrap bias estimator depend on E*(Ȳ*) − Ȳ, and it turns out that its variance is an order of magnitude larger than for non-overlapping blocks. This difficulty can be removed by wrapping Y₁, ..., Yₙ around a circle and using n blocks, in which case E*(Ȳ*) = Ȳ, or by re-centring the bootstrap bias estimate to β̂ = E*{h(Ȳ*)} − h{E*(Ȳ*)}. In either case (8.13) and (8.14) apply. One asymptotic benefit of using overlapping
blocks when the re-centred estimator is used is that var(β̂) and var(v̂) are reduced by a factor 2/3, though in practice the reduction may not be visible for small n.

The corresponding argument for tail probabilities involves Edgeworth expansions and is considerably more intricate than that sketched above. Apart from smoothness conditions on h(·), the key requirement for the above argument to work is that τ and ζ be finite, and that the autocovariances decrease sharply enough for the various terms neglected to be negligible. This is the case if γⱼ ∼ aʲ for sufficiently large j and some a with |a| < 1, as is the case for stationary finite ARMA processes. However, if for large j we find that γⱼ ∼ j^{−δ}, where 0 < δ < 1, ζ and τ are not finite and the argument will fail.

In this case |ỹₖ*|² = ĝ(ωₖ)Xₖ*, where ĝ(·) is the spectrum of the fitted model and Xₖ* has a standard exponential distribution; this gives

  E*(|ỹₖ*|²) = ĝ(ωₖ),   var*(|ỹₖ*|²) = ĝ(ωₖ)².

Clearly these resampling schemes will give different results unless the quantities of interest depend only on the means of the |ỹₖ*|², i.e. are essentially quadratic
Figure 8.11 Three time series generated by phase scrambling the shorter Rio Negro data.
in the data. Since the quantity of interest must also be location-invariant, this restricts the domain of phase scrambling to such tasks as estimating the variances of linear contrasts in the data.

Example 8.7 (Rio Negro data) We assess empirical properties of phase scrambling using the first 120 months of the Rio Negro data, which we saw previously were well-fitted by an AR(2) model with normal errors. Note that our statistic of interest, T = Σⱼ aⱼYⱼ, has the necessary structure for phase scrambling not automatically to fail. Figure 8.11 shows three phase scrambled datasets, which look similar to the AR(2) series in the second row of Figure 8.7. The top panels of Figure 8.12 show the empirical Fourier transform for the original data and for one resample. Phase scrambling seems to have shrunk the moduli of the series towards zero, giving a resampled series with lower overall variability. The lower left panel shows smoothed periodograms for the original data and for 9 phase scrambled resamples, while the right panel shows corresponding results for simulation from the fitted AR(2) model. The results are quite different, and show that data generated by phase scrambling are less variable than those generated from the fitted model.

Resampling with 999 series generated from the fitted AR(2) model and by phase scrambling, the distribution of T* is close to normal under both schemes but it is less variable under phase scrambling; the estimated variances are 27.4 and 20.2. These are similar to the estimates of about 27.5 and 22.5 obtained using the block and stationary bootstraps.

Before applying phase scrambling to the full series, we must check that it shows no sign of nonlinearity or of long-range dependence, and that it is plausibly close to a linear series with normal errors.
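One common variant of the phase scrambling algorithm can be sketched in Python as follows (direct O(n²) transforms for clarity; this is an illustrative sketch, not necessarily the exact implementation behind Figure 8.11): keep the moduli |ỹₖ|, and hence the periodogram, draw new phases in conjugate pairs, and invert the transform.

```python
import cmath
import math
import random

def phase_scramble(y, rng=random.Random(1)):
    """One phase-scrambled surrogate: keep the moduli |ytilde_k| (and hence
    the periodogram), draw new phases in conjugate pairs, invert the DFT."""
    n = len(y)
    yt = [sum(yj * cmath.exp(-2j * math.pi * k * j / n)
              for j, yj in enumerate(y)) for k in range(n)]
    z = [0j] * n
    z[0] = yt[0]                      # keeps the sample mean fixed
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        z[k] = abs(yt[k]) * cmath.exp(1j * phi)
        z[n - k] = z[k].conjugate()   # so the inverse transform is real
    if n % 2 == 0:
        z[n // 2] = yt[n // 2]        # Nyquist term left unchanged
    return [sum(z[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real / n for j in range(n)]
```

Because the moduli are preserved, each surrogate has exactly the mean and sum of squares of the original series, consistent with the restriction to location-invariant, essentially quadratic statistics noted above.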
With m = 20 the nonlinearity statistic described in Example 8.3 takes value 0.015, and no value for m < 30 is greater than 0.84: this gives no evidence that the series is nonlinear. Moreover the periodogram shows no signs of a pole as ω → 0+, so long-range dependence seems to be absent. An AR(8) model fits the series well, but the residuals have heavier tails than the normal distribution, with kurtosis 1.2. The variance of T* under phase scrambling is about 51, which
Figure 8.12 Phase scrambling for the shorter Rio Negro data. The upper left panel shows an Argand diagram containing the empirical Fourier transform ỹₖ of the data, with phase scrambled ỹₖ* in the upper right panel. The lower panels show smoothed periodograms for the original data (heavy solid), 9 phase scrambled datasets (left) and 9 datasets generated from an AR(2) model (right); the theoretical AR(2) spectrum is the lighter solid line.
again is similar to the estimates from the block resampling schemes. Although this estimate may be untrustworthy, on the face of things it casts no doubt on the earlier conclusion that the evidence for trend is weak. ■

The discussion above suggests that not only should phase scrambling be confined to statistics that are linear contrasts, but also that it should be used only after careful scrutiny of the data to detect nonlinearity and long-range dependence. With non-normal data there is the further difficulty that the Fourier transform and its inverse are averaging operations, which can produce resampled data quite unlike the original series; see Problem 8.4 and Practical 8.3. In particular, when phase scrambling is used in a test of the null
hypothesis of linearity, it imposes on the distribution of the scrambled data the additional constraints of stationarity and a high degree of symmetry.
8.2.5 Periodogram resampling

Like time domain resampling methods, phase scrambling generates an entire new dataset. This is unnecessary for such problems as setting a confidence interval for the spectrum at a particular frequency or for assessing the variability of an estimate that is based on periodogram values. There are well-established limiting results for the distributions of periodogram values, which under certain conditions are asymptotically independent exponential random variables, and this suggests that we somehow resample periodogram values.

The obvious approach is to note that if ĝ†(ωₖ) is a suitable consistent estimate of g(ωₖ) based on data y₀, ..., y_{n−1}, where n = 2n_F + 1, then for k = 1, ..., n_F the residuals eₖ = I(ωₖ)/ĝ†(ωₖ) are approximately standard exponential variables. This suggests that we generate bootstrap periodogram values by setting I*(ωₖ) = ğ(ωₖ)εₖ*, where ğ(ωₖ) is also a consistent estimate of g(ωₖ), and the εₖ* are sampled randomly from the set (e₁/ē, ..., e_{n_F}/ē); this ensures that E*(εₖ*) = 1. The choice of ĝ†(ω) and ğ(ω) is discussed below.

Such a resampling scheme will only work in special circumstances. To see why, we consider estimation of θ = ∫ a(ω)g(ω) dω by a statistic that can be written in the form
ē is the average of e₁, ..., e_{n_F}.
  T = (2π/n) Σₖ aₖ Iₖ,

where Iₖ = I(ωₖ), aₖ = a(ωₖ), and ωₖ is the kth Fourier frequency. For a linear process

  Yⱼ = Σ_{i=−∞}^{∞} bᵢ εⱼ₋ᵢ,

where {εᵢ} is a stream of independent and identically distributed random variables with standardized fourth cumulant κ₄, the means and covariances of the Iₖ are approximately

  E(Iₖ) = g(ωₖ),   cov(Iₖ, Iₗ) = g(ωₖ)g(ωₗ)(δₖₗ + n⁻¹κ₄).

From this it follows that under suitable conditions,

  E(T) = ∫ a(ω)g(ω) dω,
  var(T) = n⁻¹ [ 2π ∫ a²(ω)g²(ω) dω + κ₄ { ∫ a(ω)g(ω) dω }² ].  (8.18)
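The resampling step itself is tiny in code. A Python sketch (illustrative names; the two spectrum estimates are simply taken as given arrays evaluated at the Fourier frequencies):

```python
import random

def periodogram_resample(I, g_breve, g_dagger, rng=random.Random(1)):
    """Generate I*(omega_k) = g_breve(omega_k) * eps*_k, with eps*_k drawn
    from the rescaled residuals e_k / ebar, e_k = I(omega_k)/g_dagger(omega_k)."""
    e = [i / gd for i, gd in zip(I, g_dagger)]
    ebar = sum(e) / len(e)
    pool = [v / ebar for v in e]          # rescaled so that E*(eps*_k) = 1
    return [gb * rng.choice(pool) for gb in g_breve]
```

As a sanity check, when the residuals are all equal the resampled periodogram reproduces ğ exactly, since every draw from the rescaled pool equals one.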
0 as n → ∞ in order to remove the bias of T, the second term in the variance is asymptotically negligible relative to the first term, as is necessary for the resampling scheme outlined above to work with a time series for which κ₄ ≠ 0.

Comparison of the variance and bias terms implies that the asymptotic form of the relative mean squared error for estimation of g(η) is minimized by taking h ∝ n^{−1/5}. However, there are two difficulties in using resampling to make inference about g(η) from T. The first difficulty is analogous to that seen in Example 5.13, and appears on comparing T and its bootstrap analogue
T* = n_F^{−1} Σ_{k=1}^{n_F} a_k I*_k.

We suppose that I*_k is generated using a kernel estimate g̃(ω_k) with smoothing parameter h̃. The standardized versions of T and T* are

Z = (nhc)^{1/2} {T − g(η)}/g(η),    Z* = (nhc)^{1/2} {T* − g̃(η)}/g̃(η),

where c = {2π ∫ K²(u) du}^{−1}. These have means

E(Z) = (nhc)^{1/2} {E(T) − g(η)}/g(η),    E*(Z*) = (nhc)^{1/2} {E*(T*) − g̃(η)}/g̃(η).

Considerations similar to those in Example 5.13 show that E*(Z*) ≈ E(Z) if h̃ → 0 in such a way that h/h̃ → 0 as n→∞. The second difficulty concerns the variances of Z and Z*, which will both be approximately one if the rescaled residuals e_k have the same asymptotic distribution as the "errors" I_k/g(ω_k). For this to happen with g†(ω) a kernel estimate, it must have smoothing parameter h† ∝ n^{−1/4}. That is, asymptotically g†(ω) must be undersmoothed compared to the estimate that minimizes the asymptotic relative mean squared error of T.

Thus the application of the bootstrap outlined above involves three kernel density estimates: the original, ĝ(ω), with h ∝ n^{−1/5}; a surrogate g̃(ω) for g(ω) used when generating bootstrap spectra, with smoothing parameter h̃ asymptotically larger than h; and g†(ω), from which residuals are obtained, with smoothing parameter h† ∝ n^{−1/4} asymptotically smaller than h. This raises substantial difficulties for practical application, which could be avoided by explicit correction to reduce the bias of T or by taking h asymptotically narrower than n^{−1/5}, in which case the limiting means of Z and Z* equal zero.

For a numerical assessment of this procedure, we consider estimating the spectrum g(ω) = {1 − 2α cos(ω) + α²}^{−1} of an AR(1) process with α = 0.9 at η = π/2. The kernel K(·) is the standard normal PDF. Table 8.4 compares the means and variances of Z with the average means and variances of Z* for 1000 time series of various lengths, with normal and χ² innovations. The first set of results has bandwidths h = an^{−1/5}, h† = an^{−1/4}, and h̃ = an^{−1/6}, with a chosen to minimize the asymptotic relative mean squared error of ĝ(η). Even for time series of length 1025, the means and variances of Z and Z* can be quite different, with the variances more sensitive to the distribution of innovations.
For the second block of numbers we took a non-optimal bandwidth h = an^{−1/4}, and h† = h̃ = h. Although in this case the true and bootstrap moments agree better for normal innovations, the results for chi-squared innovations are almost as bad as previously, and it would be unwise to rely on the results even for fairly long series. Mean and variance only summarize limited aspects of the distributions, and for a more detailed comparison we compare 1000 values of Z and of Z* for a particular series of length 257. The left panel of Figure 8.13 shows that the Z* are far from normally distributed, while the right panel compares the simulated Z* and Z. Although Z* captures the shape of the distribution of Z quite well, there is a clear difference in their means and variances, and confidence intervals for g(η) based on Z* can be expected to be poor. ■
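The resampling scheme described at the start of this section, namely residuals e_k = I(ω_k)/g†(ω_k) rescaled to average one and resampled onto a surrogate spectrum g̃, can be sketched in code. The following is an illustrative Python sketch rather than anything used by the authors; the function names and the crude Gaussian smoother standing in for a proper spectral window are our own assumptions.

```python
import numpy as np

def periodogram(y):
    """Periodogram I(omega_k) at Fourier frequencies omega_k = 2*pi*k/n,
    k = 1, ..., nF, where n = 2*nF + 1."""
    n = len(y)
    nF = (n - 1) // 2
    d = np.fft.fft(y - y.mean())
    return (np.abs(d[1:nF + 1]) ** 2) / (2 * np.pi * n)

def kernel_smooth(I, h):
    """Crude Gaussian-kernel smoother across Fourier frequencies; h is the
    bandwidth as a fraction of the frequency range (no boundary correction)."""
    nF = len(I)
    k = np.arange(nF)
    g = np.empty(nF)
    for i in range(nF):
        w = np.exp(-0.5 * ((k - i) / (h * nF)) ** 2)
        g[i] = np.sum(w * I) / np.sum(w)
    return g

def resample_periodogram(I, g_dagger, g_tilde, rng):
    """One bootstrap periodogram: I*(omega_k) = g_tilde(omega_k) * e*_k,
    with the e*_k drawn from the rescaled residuals e_k / e_bar."""
    e = I / g_dagger                      # approximately standard exponential
    e = e / e.mean()                      # rescale so that E*(e*_k) = 1
    return g_tilde * rng.choice(e, size=len(e), replace=True)

rng = np.random.default_rng(1)
y = rng.standard_normal(257)              # n = 2*128 + 1
I = periodogram(y)
g_dagger = kernel_smooth(I, h=0.05)       # undersmoothed: residual estimate
g_tilde = kernel_smooth(I, h=0.2)         # oversmoothed surrogate for g
I_star = resample_periodogram(I, g_dagger, g_tilde, rng)
```

The two bandwidths mirror the requirement above that g† be undersmoothed and g̃ oversmoothed relative to the estimate used for T itself.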
Table 8.4 Comparison of actual and bootstrap means and variances for a standardized kernel spectral density estimate Z. For the means the upper figure is the average of Z from 1000 AR(1) time series with α = 0.9 and length n, and the lower figure is the average of E*(Z*) for those series; for the variances the upper and lower figures are estimates of var(Z) and E{var*(Z*)}. The upper 8 lines of results are for h ∝ n^{−1/5}, h† ∝ n^{−1/4}, and h̃ ∝ n^{−1/6}; for the lower 8 lines h = h† = h̃ ∝ n^{−1/4}.
Figure 8.13 Comparison of distributions of Z and Z* for time series of length 257. The left panel shows a normal plot of 1000 values of Z*. The right panel compares the distributions of Z and Z*.
8.3 Point Processes

8.3.1 Basic ideas

A point process is a collection of events in a continuum. Examples are times of arrivals at an intensive care unit, positions of trees in a forest, and epicentres
of earthquakes. Mathematical properties of such processes are determined by the joint distribution of the numbers of events in subsets of the continuum. Statistical analysis is based on some notion of repeatability, usually provided by assumptions of stationarity. Let N(A) denote the number of events in a set A. A point process is stationary if

Pr{N(A_1) = n_1, ..., N(A_k) = n_k}

is unaffected by applying the same translation to all the sets A_1, ..., A_k, for any finite k. Under second-order stationarity only the first and joint second moments of the N(A_i) remain unchanged by translation. For a stationary process E{N(A)} = λ|A|, where λ is the intensity of the process and |A| is the length, area, or volume of A. Second-order moment properties can be defined in various ways, with the most useful definition depending on the context.

The simplest stationary point process model is the homogeneous Poisson process, for which the random variables N(A_1), N(A_2) have independent Poisson distributions whenever A_1 and A_2 are disjoint. This completely random process is a natural standard with which to compare data, although it is rarely a plausible model. More realistic models of dependence can lead to estimation problems that seem analytically insuperable, and Monte Carlo methods are often used, particularly for spatial processes. In particular, simulation from fitted parametric models is often used as a baseline against which to judge data. This often involves graphical tests of the type outlined in Section 4.2.4. In practice the process is observed only in a finite region. This can give rise to edge effects, which are increasingly severe in higher dimensions.

Example 8.9 (Caveolae)  The upper left panel of Figure 8.14 shows the positions of n = 138 caveolae in a 500 unit square region, originally a 2.65 μm square of muscle fibre.
The upper right panel shows a realization of a binomial process, for which n points were placed at random in the same region; this is an homogeneous Poisson process conditioned to have 138 events. The data seem to have fewer almost-coincident points than the simulation, but it is hard to be sure.

Spatial dependence is often summarized by K-functions. Suppose that the process is orderly and isotropic, i.e. multiple coincident events are precluded and joint probabilities are invariant under rotation as well as translation. Then a useful summary of spatial dependence is Ripley's K-function,

K(t) = λ^{−1} E(#{events within distance t of an arbitrary event}),    t > 0.

The mean- and variance-stabilized function Z(t) = {K(t)/π}^{1/2} − t is sometimes used instead. For an homogeneous Poisson process, K(t) = πt². Empirical versions of K(t) must allow for edge effects, as made explicit in Example 8.12. The solid line in the lower left panel of Figure 8.14 is the empirical version
Figure 8.14 Muscle caveolae analysis. Top left: positions of 138 caveolae in a 500 unit square of muscle fibre (Appleyard et al., 1985). Top right: realization of an homogeneous binomial process with n = 138. Lower left: Ẑ(t) (solid), together with pointwise 95% confidence bands (dashes) and overall 92% confidence bands (dots) based on R = 999 simulated binomial processes. Lower right: corresponding results for R = 999 realizations of a fitted Strauss process.
Z (t) o f Z(t). The dashed lines are pointw ise 95% confidence bands from R = 999 realizations o f the binom ial process, and the dotted lines are overall b ands w ith level ab o u t 92% , obtained by using the m ethod outlined after (4.17) w ith k = 2. Relative to a Poisson process there is a significant deficiency o f pairs o f points lying close together, which confirm s our previous impression. The lower right panel o f the figure shows the corresponding results for sim ulations from the Strauss process, a param etric m odel o f interaction th at can inhibit p attern s in which pairs lie close together. This m odels the local behaviour o f the d a ta b etter th an the stationary Poisson process. ■
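The simulation behind the binomial-process panels of Figure 8.14 can be sketched as follows. This is an illustrative Python sketch with hypothetical names; it ignores edge corrections entirely (the book's Example 8.12 makes them explicit), so it is only a rough stand-in for the empirical Ẑ(t) used above.

```python
import numpy as np

def binomial_process(n, side, rng):
    """n points uniform on a square: a homogeneous Poisson process
    conditioned to have exactly n events."""
    return rng.uniform(0, side, size=(n, 2))

def Z_hat(pts, t_grid, area):
    """Naive empirical Z(t) = {K(t)/pi}^{1/2} - t, with no edge correction."""
    n, lam = len(pts), len(pts) / area
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2))
    np.fill_diagonal(d, np.inf)           # a point is not its own neighbour
    K = np.array([(d < t).sum() / n / lam for t in t_grid])
    return np.sqrt(K / np.pi) - t_grid

rng = np.random.default_rng(3)
t_grid = np.linspace(1.0, 100.0, 25)
# R = 99 simulated binomial processes, as a small-scale version of the R = 999 above
z_sims = np.array([Z_hat(binomial_process(138, 500.0, rng), t_grid, 500.0 ** 2)
                   for _ in range(99)])
bands = np.percentile(z_sims, [2.5, 97.5], axis=0)   # pointwise 95% bands
```

Comparing the Ẑ(t) of the data with such simulation bands is the graphical test described above; for a binomial process Ẑ(t) should wander near zero.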
Ỹ*_k = (1 + c*)^{1/2} Y_k,    Ỹ*_{n−k} = (1 + c*)^{1/2} Y_{n−k},    k = 1, ..., m,

from which the ith replacement time series is obtained by the inverse Fourier transform. Let T be the value of a statistic calculated from the original series. Explain how the corresponding resample values, T*_1, ..., T*_{m+1}, may be used to obtain an approximately unbiased estimate of the variance of T, and say for what types of statistics you think this is likely to work.
(Section 8.2.4; Hartigan, 1990)

6
In the context of periodogram resampling, consider a ratio statistic

T = Σ_{k=1}^{n_F} a(ω_k) I(ω_k) / Σ_{k=1}^{n_F} I(ω_k) = {∫ a(ω)g(ω) dω (1 + n^{−1/2} X_a)} / {∫ g(ω) dω (1 + n^{−1/2} X_1)},

say. Use (8.18) to show that X_a and X_1 have means zero and that

var(X_a) = 2π I_aagg I_ag^{−2} + κ_4,
cov(X_1, X_a) = 2π I_agg I_ag^{−1} I_g^{−1} + κ_4,
var(X_1) = 2π I_gg I_g^{−2} + κ_4,

where I_aagg = ∫ a²(ω)g²(ω) dω, and so forth. Hence show that to first order the mean and variance of T do not involve κ_4, and deduce that periodogram resampling may be applied to ratio statistics. Use simulation to see how well periodogram resampling performs in estimating the distribution of a suitable version of the sample estimate of the lag j autocorrelation,

ρ_j = ∫ e^{−iωj} g(ω) dω / ∫ g(ω) dω.
(Section 8.2.5; Janas, 1993; Dahlhaus and Janas, 1996)

7
Let y_1, ..., y_n denote the times of events in an inhomogeneous Poisson process of intensity λ(y), observed for 0 ≤ y ≤ 1, and let

λ̂(y; h) = h^{−1} Σ_{j=1}^{n} w{(y − y_j)/h}

denote a kernel estimate of λ(y), based on a kernel w(·) that is a PDF. Explain why the following two algorithms for generating bootstrap data from the estimated intensity are (almost) equivalent.

Algorithm 8.4 (Inhomogeneous Poisson process 1)
• Let N have a Poisson distribution with mean Λ̂ = ∫_0^1 λ̂(u; h) du.
• For j = 1, ..., N, independently take U*_j from the U(0,1) distribution, and then set Y*_j = F̂^{−1}(U*_j), where F̂(y) = Λ̂^{−1} ∫_0^y λ̂(u; h) du.

Algorithm 8.5 (Inhomogeneous Poisson process 2)
• Let N have a Poisson distribution with mean Λ̂ = ∫_0^1 λ̂(u; h) du.
• For j = 1, ..., N, independently generate I*_j at random from the integers {1, ..., n} and let ε*_j be a random variable with PDF w(·). Set Y*_j = y_{I*_j} + h ε*_j.
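Algorithm 8.5 amounts to a smoothed bootstrap of the observed event times. A minimal illustrative Python sketch, assuming the standard normal PDF for w(·) and taking Λ̂ ≈ n (which ignores edge effects in the integral of the kernel estimate):

```python
import numpy as np

def algorithm_8_5(y, h, rng):
    """One bootstrap point pattern from the kernel-estimated intensity:
    N ~ Poisson(Lambda_hat), then Y*_j = y_{I*_j} + h * eps*_j."""
    Lambda_hat = len(y)                   # integral of kernel estimate, edge effects ignored
    N = rng.poisson(Lambda_hat)
    idx = rng.integers(0, len(y), size=N)  # I*_j uniform on {1, ..., n}
    eps = rng.standard_normal(N)           # eps*_j has PDF w(.)
    return np.sort(y[idx] + h * eps)

rng = np.random.default_rng(11)
y = np.sort(rng.uniform(0, 1, size=60))    # observed event times in (0,1)
y_star = algorithm_8_5(y, h=0.05, rng=rng)
```

Replacing the two lines that draw idx and eps by inversion of F̂ gives Algorithm 8.4, which is the point of the comparison asked for in the problem.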
(Section 8.3.2)

8
Consider an inhomogeneous Poisson process of intensity λ(y) = Nμ(y), where μ(y) is fixed and smooth, observed for 0 ≤ y ≤ 1. A kernel intensity estimate based on events at y_1, ..., y_n is

λ̂(y; h) = h^{−1} Σ_{j=1}^{n} w{(y − y_j)/h},

where w(·) is the PDF of a symmetric random variable with mean zero and variance one; let K = ∫ w²(u) du.
(a) Show that as N→∞ and h→0 in such a way that Nh→∞,

E{λ̂(y; h)} = λ(y) + ½ h² λ″(y),    var{λ̂(y; h)} = K h^{−1} λ(y);

you may need the facts that the number of events n has a Poisson distribution with mean Λ = ∫_0^1 λ(u) du, and that conditional on there being n observed events, their times are independent random variables with PDF λ(y)/Λ. Hence show that the asymptotic mean squared error of λ̂(y; h) is minimized when h ∝ N^{−1/5}. Use the delta method to show that the approximate mean and variance of λ̂^{1/2}(y; h) are

λ^{1/2}(y) + ¼ λ^{−1/2}(y) {h² λ″(y) − ½ K h^{−1}},    ¼ K h^{−1}.

(b) Now suppose that resamples are formed by taking n observations at random from y_1, ..., y_n. Show that the bootstrapped intensity estimate

λ̂*(y; h) = h^{−1} Σ_{j=1}^{n} w{(y − y*_j)/h}

has mean E*{λ̂*(y; h)} = λ̂(y; h), and that the same is true when there are n′ resampled events, provided that E*(n′) = n. For a third resampling scheme, let n′ have a Poisson distribution with mean n, and generate n′ events independently from density λ̂(u; h)/∫_0^1 λ̂(u; h) du. Show that under this scheme

E*{λ̂*(y; h)} = ∫ w(u) λ̂(y − hu; h) du.

(c) By comparing the asymptotic distributions of

Z(y; h) = {λ̂^{1/2}(y; h) − λ^{1/2}(y)} / (¼ K h^{−1})^{1/2},    Z*(y; h) = {λ̂*^{1/2}(y; h) − λ̂^{1/2}(y; h)} / (¼ K h^{−1})^{1/2},
find conditions under which the quantiles of Z* can estimate those of Z.
(Section 8.3.2; Example 5.13; Cowling, Hall and Phillips, 1996)

9  Consider resampling tiles when the observation region R is a square, the data are generated by a stationary planar Poisson process of intensity λ, and the quantity of interest is θ = var(Y), where Y is the number of events in R. Suppose that R is split into n fixed tiles of equal size and shape, which are then resampled according to the usual bootstrap. Show that the bootstrap estimate of θ is t = Σ(y_j − ȳ)², where y_j is the number of events in the jth tile. Use the fact that var(T) = (n − 1)²{κ_4/n + 2κ_2²/(n − 1)}, where κ_r is the rth cumulant of Y_j, to show that the mean squared error of T is

(μ/n²){μ + (n − 1)(2μ + n − 1)},

where μ = λ|R|. Sketch this when μ > 1, μ = 1, and μ < 1, and explain in qualitative terms its behaviour when μ > 1. Extend the discussion to moving tiles.
(Section 8.3)
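The fixed-tile estimate t = Σ(y_j − ȳ)² in the problem above is easy to compute once the region is gridded. The following illustrative Python sketch uses an m × m grid of equal square tiles; the names and the simulated intensity are our own assumptions.

```python
import numpy as np

def tile_counts(points, side, m):
    """Counts of events in an m x m grid of fixed tiles over [0, side]^2."""
    ix = np.minimum((points[:, 0] / side * m).astype(int), m - 1)
    iy = np.minimum((points[:, 1] / side * m).astype(int), m - 1)
    counts = np.zeros((m, m), int)
    np.add.at(counts, (ix, iy), 1)
    return counts.ravel()

rng = np.random.default_rng(6)
side, lam = 1.0, 200.0                    # unit square, planar Poisson intensity 200
npts = rng.poisson(lam * side ** 2)       # total count Y ~ Poisson(lambda |R|)
pts = rng.uniform(0, side, size=(npts, 2))
y = tile_counts(pts, side, m=4)           # n = 16 tiles
t = np.sum((y - y.mean()) ** 2)           # bootstrap estimate of theta = var(Y)
```

Since resampling n tiles with replacement gives var*(Y*) = n times the tile-count variance, t is exactly the bootstrap estimate asked for.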
8.6 Practicals

1  Dataframe lynx contains the Canadian lynx data, to the logarithm of which we fit the autoregressive model that minimizes AIC:

ts.plot(log(lynx))
lynx.ar

0.8. The potential improvement from balancing is not guaranteed to be worthwhile when c < 0.7. The corresponding plot for the adjusted estimates suggests that c must be at least 0.85 for a useful efficiency gain. ■

This example suggests the following strategy when a good estimate of bias is required: perform a small standard unbalanced bootstrap, and use it to estimate the correlation between the statistic and its linear approximation. If that correlation exceeds about 0.7, it may be worthwhile to perform a balanced simulation, but otherwise it will not. If the correlation exceeds 0.85, post-simulation adjustment will usually be worthwhile, but otherwise it will not.
9.3 Control Methods

The basis of control methods is extra calculation during or after a series of simulations with the aim of reducing the overall variability of the estimator. This can be applied to nonparametric simulation in several ways. The post-simulation balancing described in the preceding section is a simple control method, in which we store the simulated random samples and make a single post-simulation calculation. Most control methods involve extra calculations at the time of the simulation, and are applicable when there is a simple statistic that is highly correlated with T*. Such a statistic is known as a control variate. The key idea is to write T* in terms of the control variate and the difference between T* and the control variate, and then to calculate the required properties for the control variate analytically, estimating only the differences by simulation.

Bias and variance

In many bootstrap contexts where T is an estimator, a natural choice for the control variate will be the linear approximation T*_L defined in (2.44). The moments of T*_L can be obtained theoretically using moments of the frequencies f*_j. In ordinary random sampling the f*_j are multinomial, so the mean and variance of T*_L are

E*(T*_L) = t,    var*(T*_L) = n^{−2} Σ_{j=1}^{n} l_j² = v_L.
In order to use T*_L as a control variate, we write T* = T*_L + D*, so that D* equals the difference T* − T*_L. The mean and variance of T* can then be written

E*(T*) = E*(T*_L) + E*(D*),
var*(T*) = var*(T*_L) + 2 cov*(T*_L, D*) + var*(D*),
the leading terms of which are known. Only terms involving D* need be approximated by simulation. Given simulations T*_1, ..., T*_R with corresponding linear approximations T*_{L,r} and differences D*_r = T*_r − T*_{L,r}, the mean and variance of T* are estimated by

t + D̄*,    v_{R,con} = v_L + 2(R − 1)^{−1} Σ_{r=1}^{R} (T*_{L,r} − T̄*_L)(D*_r − D̄*) + (R − 1)^{−1} Σ_{r=1}^{R} (D*_r − D̄*)²,    (9.12)

where T̄*_L = R^{−1} Σ_r T*_{L,r} and D̄* = R^{−1} Σ_r D*_r. Use of these and related approximations requires the calculation of the T*_{L,r} as well as the T*_r.

The estimated bias of T* based on (9.12) is B_{R,con} = D̄*. This is closely related to the estimate obtained under balanced simulation and to the recentred bias estimate. Like them, it ensures that the linear component of the bias estimate equals its population value, zero. Detailed calculation shows that all three approaches achieve the same variance reduction for the bias estimate in large samples. However, the variance estimate in (9.12) based on linear approximation is less variable than the estimated variances obtained under the other approaches, because its leading term is not random.

Example 9.5 (City population data)  To see how effective control methods are in reducing the variability of a variance estimate, we consider the ratio statistic for the city population data in Table 2.1, with n = 10. For 100 bootstrap simulations with R = 50, we calculated the usual variance estimate v_R = (R − 1)^{−1} Σ (t*_r − t̄*)² and the estimate v_{R,con} from (9.12). The estimated gain in efficiency calculated from the 100 simulations is 1.92, which though worthwhile is not large. The correlation between t* and t*_L is 0.94. For the larger set of data in Table 1.3, with n = 49, we repeated the experiment with R = 100. Here the gain in efficiency is 7.5, and the correlation is 0.99. Figure 9.2 shows scatter plots of the estimated variances in these experiments; for both sample sizes the values of v_{R,con} are plotted against the values of v_R. No strong pattern is discernible.

To get a more systematic idea of the effectiveness of control methods in this setting, we repeated the experiment outlined in Example 9.4 and compared the usual and control estimates of the variances of the five eigenvalues. The results for the five eigenvalues and n = 15 and 25 are shown in Figure 9.3.
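The estimates in (9.12) require only the statistics, their linear approximations, and v_L. The following Python sketch is illustrative (the book's own examples use S-Plus); the synthetic inputs and function name are our own, and the divisor R − 1 in the sample covariance and variance is our choice of consistent estimator.

```python
import numpy as np

def control_estimates(t, v_L, t_L_star, t_star):
    """Mean and variance estimates from (9.12), using the linear approximation
    T*_L as control variate, so that T* = T*_L + D*."""
    R = len(t_star)
    d = t_star - t_L_star                 # differences D*_r = T*_r - T*_{L,r}
    mean_est = t + d.mean()               # E*(T*_L) = t exactly, so add bar{D}*
    cov = np.sum((t_L_star - t_L_star.mean()) * (d - d.mean())) / (R - 1)
    var_est = v_L + 2 * cov + np.sum((d - d.mean()) ** 2) / (R - 1)
    return mean_est, var_est

# synthetic illustration: a statistic that is nearly linear, so the control
# variate is highly correlated with it
rng = np.random.default_rng(5)
t_L_star = rng.normal(0.0, 1.0, size=500)             # control variates T*_{L,r}
t_star = t_L_star + rng.normal(0.01, 0.1, size=500)   # statistics T*_r
mean_est, var_est = control_estimates(0.0, 1.0, t_L_star, t_star)
```

Because the leading term v_L is computed analytically, only the small difference terms are subject to simulation noise, which is the source of the efficiency gains reported in Example 9.5.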
Gains in efficiency are not guaranteed unless the correlation between the statistic and its linear approximation is 0.80 or more, and they are not large unless the correlation is close to one. The line y = (1 − x⁴)^{−1} summarizes the efficiency gain well, though we have not attempted to justify this. ■

Quantiles

Control methods may also be applied to quantiles. Suppose that we have the simulated values t*_1, ..., t*_R of a statistic, and that the corresponding control variates and differences are available. We now sort the differences by the values of the control variates. For example, if our control variate is a linear approximation, with R = 4 and t*_{L,2} < t*_{L,1} < t*_{L,4} < t*_{L,3}, we put the differences in the order d*_2, d*_1, d*_4, d*_3. The procedure now is to replace the p quantile of the linear approximation by a theoretical approximation, t̃_p, for p = 1/(R + 1), ..., R/(R + 1), thereby replacing t*_{(r)} with t*_{C,r} = t̃_p + d*_{(r)}, where p = r/(R + 1) and d*_{(r)} denotes the difference for the sample whose control variate has rank r. In our example we would obtain t*_{C,1} = t̃_{0.2} + d*_2, t*_{C,2} = t̃_{0.4} + d*_1, t*_{C,3} = t̃_{0.6} + d*_4, and t*_{C,4} = t̃_{0.8} + d*_3. We now estimate the pth quantile of the distribution of T* by the ((R + 1)p)th ordered value of t*_{C,1}, ..., t*_{C,R}. If the control variate is highly correlated with T*, the bulk of the variability in the estimated quantiles will have been removed by using the theoretical approximation.
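The control quantile construction can be sketched as follows. This illustrative Python sketch uses a control variate that is exactly N(0,1), so that its quantiles t̃_p are available in closed form; in practice they would come from the saddlepoint approximation of Section 9.5. All names are our own.

```python
import numpy as np
from statistics import NormalDist

def control_quantiles(t_star, t_L_star, control_ppf):
    """Quantile estimation with a control variate: the ordered control values
    are replaced by theoretical quantiles tilde{t}_p at p = r/(R+1), and the
    differences, sorted by the ranks of the control variate, are added back."""
    R = len(t_star)
    d = t_star - t_L_star
    order = np.argsort(t_L_star)               # ranks of the t*_{L,r}
    p = np.arange(1, R + 1) / (R + 1)
    t_C = np.array([control_ppf(pr) for pr in p]) + d[order]
    return np.sort(t_C)                        # ordered t*_{C,r}: quantile estimates

# toy illustration with a nearly linear statistic
rng = np.random.default_rng(9)
t_L_star = rng.standard_normal(999)
t_star = t_L_star + 0.05 * rng.standard_normal(999)
q = control_quantiles(t_star, t_L_star, NormalDist().inv_cdf)
```

Because the theoretical quantiles carry the bulk of the variability, only the small differences d* contribute simulation noise to the estimated quantiles.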
Figure 9.3 Efficiency comparisons for estimating variances of eigenvalues. The left panels compare the usual and control variance estimates for the data of Example 3.24, for which n = 25, when R = 39. The right panel shows the gains made by the control estimate in 50 samples of sizes 15 and 25 from the normal distribution, as a function of the correlation between the statistic and its linear approximation; the solid line shows the line y = (1 − x⁴)^{−1}. See text for details.
One desirable property of the control quantile estimates is that, unlike most other variance reduction methods, their accuracy improves with increasing n as well as R. There are various ways to calculate the quantiles of the control variate. The preferred approach is to calculate the entire distribution of the control variate by saddlepoint approximation (Section 9.5), and to read off the required quantiles t̃_p. This is better than other methods, such as Cornish–Fisher expansion, because it guarantees that the quantiles of the control variate will increase with p.

Example 9.7 (Returns data)  To assess the usefulness of the control method just described, we consider setting studentized bootstrap confidence intervals for the rate of return in Example 6.3. We use case resampling to estimate quantiles of T* = (β̂*_1 − β̂_1)/S*, where β̂_1 is the estimate of the regression slope, and S² is the robust estimated variance of β̂_1 based on the linear approximation to β̂_1. For a single bootstrap simulation we calculated three estimates of the quantiles of T*: the usual estimates, the order statistics t*_{(1)} < ··· < t*_{(R)}; the control estimates taking the control variate to be the linear approximation to T* based on exact empirical influence values; and the control estimates obtained using the linear approximation with empirical influence values estimated by regression on the frequency array for the same bootstrap. In each case the quantiles of the control variate were obtained by saddlepoint approximation, as outlined in Example 9.13 below. We used R = 999 and repeated the experiment 50 times in order to estimate the variance of the quantile estimates. We
Figure 9.4 Efficiency and bias comparisons for estimating quantiles of a studentized bootstrap statistic for the returns data, based on a bootstrap of size R = 999. The left panel shows the variance of the usual quantile estimate divided by the variance of the control estimate based on an exact linear approximation, plotted against the corresponding normal quantile. The dashed lines show efficiencies of 1, 2, 3, 4 and 5. The right panel shows the estimated biases for the exact control (solid) and estimated control (dots) quantiles. See text for details.
estimated their bias by comparing them with quantiles of T* obtained from 100 000 bootstrap resamples.

Figure 9.4 shows the efficiency gains of the exact control estimates relative to the usual estimates. The efficiency gain based on the linear approximation is not shown, but it is very similar. The right panel shows the biases of the two control estimates. The efficiency gains are largest for central quantiles, and are of order 1.5-3 for the quantiles of most interest, at about 0.025-0.05 and 0.95-0.975. There is some suggestion that the control estimates based on the linear approximation have the smaller bias, but both sets of biases are negligible at all but the most extreme quantiles. The efficiency gains in this example are broadly in line with simulations reported in the literature; see also Example 9.10 below.
■
9.4 Importance Resampling

9.4.1 Basic estimators

Importance sampling

Most of our simulation calculations can be thought of as approximate integrations, with the aim of approximating

μ = ∫ m(y*) dG(y*)

for some function m(·), where y* is abbreviated notation for a simulated dataset. In expression (9.1), for example, m(y*) = t(y*), and the distribution G for y* = (y*_1, ..., y*_n) puts mass n^{−n} on each element of the set {y_1, ..., y_n}^n.
When it is impossible to evaluate the integral directly, our usual approach is to generate R independent samples Y*_1, ..., Y*_R from G, and to estimate μ by

μ̂_G = R^{−1} Σ_{r=1}^{R} m(Y*_r).

This estimator has mean and variance

E_G(μ̂_G) = μ,    var_G(μ̂_G) = R^{−1} [∫ m²(y*) dG(y*) − μ²],

and so is unbiased for μ. In the situation mentioned above, this is a re-expression of ordinary bootstrap simulation. We use notation such as μ̂_G and E_G to indicate that estimates are calculated from random variables simulated from G, and that moment calculations are with respect to the distribution G.

One problem with μ̂_G is that some values of y* may contribute much more to μ than others. For example, suppose that the aim is to approximate the probability Pr*(T* ≤ t_0 | F̂), for which we would take m(y*) = I{t(y*) ≤ t_0}, where I{·} is the indicator function. If the event t(y*) ≤ t_0 is rare, then most of the simulations will contribute zero to the integral. The aim of importance sampling is to sample more frequently from those "important" values of y* whose contributions to the integral are greatest. This is achieved by sampling from a distribution that concentrates probability on these y*, and then weighting the values of m(y*) so as to mimic the approximation we would have used if we had sampled from G. Importance sampling in the case of the nonparametric bootstrap amounts to re-weighting samples from the empirical distribution function F̂, so in this context it is sometimes known as importance resampling.

The identity that motivates importance sampling is

μ = ∫ m(y*) dG(y*) = ∫ m(y*) {dG(y*)/dH(y*)} dH(y*),    (9.14)

where necessarily the support of H includes the support of G. Importance sampling approximates the right-hand side of (9.14) using independent samples y*_1, ..., y*_R from H. The new approximation for μ is the raw importance sampling estimate

μ̂_{H,raw} = R^{−1} Σ_{r=1}^{R} m(y*_r) w(y*_r),    (9.15)

where w(y*) = dG(y*)/dH(y*) is known as the importance sampling weight. The estimate μ̂_{H,raw} has mean μ by virtue of (9.14), so is unbiased, and has variance

var_H(μ̂_{H,raw}) = R^{−1} [∫ m²(y*) w(y*) dG(y*) − μ²].

Our aim is now to choose H so that
∫ m²(y*) w(y*) dG(y*) < ∫ m²(y*) dG(y*).

Clearly the best choice is the one for which m(y*)w(y*) = μ, because then μ̂_{H,raw} has zero variance, but this is not usable because μ is unknown. In general it is hard to choose H, but sometimes the choice is straightforward, as we now outline.

Tilted distributions

A potentially important application is calculation of tail probabilities such as π = Pr*(T* ≤ t_0 | F̂), and the corresponding quantiles of T*. For probabilities m(y*) is taken to be the indicator function I{t(y*) ≤ t_0}, and if y*_1, ..., y*_n is a single random sample from the EDF F̂ then dG(y*) = n^{−n}. Any admissible nonparametric choice for H is a multinomial distribution with probability p_j on y_j, for j = 1, ..., n. Then

dH(y*) = Π_j p_j^{f*_j},

where f*_j counts how many components of y* equal y_j. We would like to choose the probabilities p_j to minimize var_H(μ̂_{H,raw}), or at least to make this much smaller than R^{−1}π(1 − π). This appears to be impossible in general, but if T* is close to normal we can get a good approximate solution.

Suppose that T* has a linear approximation T*_L which is accurate, and that the N(t, v_L) approximation for T*_L under ordinary resampling is accurate. Then the probability π we are trying to approximate is roughly Φ{(t_0 − t)/v_L^{1/2}}. If we were using simulation to approximate such a normal probability directly, then provided that t_0 < t a good (near-optimal) importance sampling method would be to generate t*s from the N(t_0, v_L) distribution, where v_L is the nonparametric delta method variance. It turns out that we can arrange that this happen approximately for T* by setting

p_j ∝ exp(λ l_j),    j = 1, ..., n,    (9.18)

where the l_j are the usual empirical influence values for t. The result of Problem 9.10 shows that under this distribution T* is approximately N(t + λn v_L, v_L), so the appropriate choice for λ in (9.18) is approximately λ = (t_0 − t)/(n v_L), again provided t_0 < t; in some cases it is possible to choose λ to make T* have mean exactly t_0. The choice of probabilities given by (9.18) is called an exponential tilting of the original values n^{−1}. This idea is also used in Sections 4.4, 5.3, and 10.2.2.

Table 9.4 shows approximate values of the efficiency R^{−1}π(1 − π)/var_H(μ̂_{H,raw}) of near-optimal importance resampling for various values of the tail probability π. The values were calculated using normal approximations for the distributions
Table 9.4 Approximate efficiencies for estimating tail probability π under importance sampling with optimal tilted EDF when T is approximately normal.

π           0.01  0.025  0.05  0.2  0.5  0.8   0.95   0.975   0.99
Efficiency  37    17     9.5   3.0  1.0  0.12  0.003  0.0005  0.00004
of T* under G and H; see Problem 9.8. The entries in the table suggest that for π ≤ 0.05 we could attain the same accuracy as with ordinary resampling with R reduced by a factor larger than about 10. Also shown in the table is the result of applying the exponential tilted importance resampling distribution when t_0 > t, or π > 0.5: then importance resampling will be worse, possibly much worse, than ordinary resampling.

This last observation is a warning: straightforward importance sampling can be bad if misapplied. We can see how from (9.17). If dH(y*) becomes very small where m(y*) and dG(y*) are not small, then w(y*) = dG(y*)/dH(y*) will become very large and inflate the variance. For the tail probability calculation, if t_0 > t then all samples y* with t(y*) ≤ t_0 contribute R^{−1}w(y*_r) to μ̂_{H,raw}, and some of these contributions are enormous: although rare, they wreak havoc on μ̂_{H,raw}. A little thought shows that for t_0 > t one should apply importance sampling to estimate 1 − π = Pr*(T* > t_0) and subtract the result from 1, rather than estimate π directly.

Quantiles

To see how quantiles are estimated, suppose that we want to estimate the α quantile of the distribution of T*, and T* is approximately N(t, v_L) under G = F̂. Then we take a tilted distribution for H such that T* is approximately N(t + z_α v_L^{1/2}, v_L). For the situation we have been discussing, the exponential tilted distribution (9.18) will be near-optimal with λ = z_α/(n v_L^{1/2}), and in large samples this will be superior to G = F̂ for any α ≠ ½. So suppose that we have used importance resampling from this tilted distribution to obtain values t*_1 < ··· < t*_R with corresponding weights w*_1, ..., w*_R. Then for α < ½ the raw quantile estimate is t*_M, where
r= l
. M+l wr* < a < - — - V wr\ r R+l ^
(9.19) r
r= 1
while for a > j we define M by R - i - y w ; < l - a < - — r=M
y
R w*;
r= M + 1
see Problem 9.9. W hen there is no im portance sam pling we have w* = 1, and the estim ate equals the usual (”(R+1)a). T he variation in w (y') and its im plications are illustrated in the following
example. We discuss stabilizing m odifications to raw im portance resam pling in the next subsection. Exam ple 9.8 (Gravity d a ta ) F or an exam ple o f im portance resam pling, we follow Exam ple 4.19 an d consider testing for a difference in m eans for the last two series o f Table 3.1. H ere we use the studentized pivot test, w ith observed test statistic Z° = , , y 2 ~ 7yi ,1 /2 ' (s\/n2 + s\/ni)
(9'2°)
where y t an d sj are the average an d variance o f the sam ple y n , . . . , y i „ n for i = 1,2. T he test com pares zo to the general distribution o f the studentized pivot
z =
?2-?l-(/^2-W ). 1/2 ’ (S f /n 2 + S f / n i )
zo is the value taken by Z u n d er the null hypothesis m = n 2. T he observed value o f zo is 1.84, w ith norm al one-sided significance probability P r(Z > zo) = 0.033. We aim to estim ate P r(Z > zo) by P r*(Z ” > zo | F), where F stands for the E D F s o f the two samples. In this case y* = ( y u , - - - , y i ni, y 2 i>--->y2n2)< an(^ ® is the jo in t density u n d er the two E D F s, so the probability on each sim ulated d ataset is dG{y*) = n p 1 x n^""2. Because zo > 0 an d the P-value is clearly below is ap p rop riate an d the estim ated P-value is
pH,raw = R 1 y ^ J { z'r > ^0}wr*,
raw im portance sam pling
W‘ = ^ ) r dHW Y
The choice of H is made by analogy with the single-sample case discussed earlier. The two EDFs are tilted so as to make Z^* approximately N(z_0, v_L), which should be near-optimal. This is done by working with the linear approximation

  Z^*_L = n_1^{-1} \sum_{j=1}^{n_1} f^*_{1j} l_{1j} + n_2^{-1} \sum_{j=1}^{n_2} f^*_{2j} l_{2j},

where f^*_{1j} and f^*_{2j} are the bootstrap sample frequencies of y_{1j} and y_{2j}, and the empirical influence values are

  l_{1j} = -\frac{y_{1j} - \bar y_1}{(s_1^2/n_1 + s_2^2/n_2)^{1/2}},  \qquad  l_{2j} = \frac{y_{2j} - \bar y_2}{(s_1^2/n_1 + s_2^2/n_2)^{1/2}}.

We take H to be the pair of exponential tilted distributions

  p_{1j} = \Pr{}^*(Y_1^* = y_{1j}) \propto \exp(\lambda l_{1j}/n_1),  \qquad  p_{2j} = \Pr{}^*(Y_2^* = y_{2j}) \propto \exp(\lambda l_{2j}/n_2),    (9.21)
9.4 ■Importance Resampling
Figure 9.5  Importance resampling to test for a location difference between series 7 and 8 of the gravity data. The solid points in the left panel are the weights w^* and bootstrap statistics z^* for R = 99 importance resamples; the hollow points are the pairs (z^*, w^*) for 99 ordinary resamples. The right panel compares the survivor function Pr^*(Z^* \ge z^*) estimated from 50 000 ordinary bootstrap resamples (heavy solid) with estimates of it based on the 99 ordinary bootstrap samples (dashes) and the 99 importance resamples (solid). The vertical dotted lines show z_0.
where λ is chosen so that Z^*_L has mean z_0: this should make Z^* approximately N(z_0, v_L) under H. The explicit equation for λ is

  \frac{\sum_{j=1}^{n_1} l_{1j} \exp(\lambda l_{1j}/n_1)}{\sum_{j=1}^{n_1} \exp(\lambda l_{1j}/n_1)} + \frac{\sum_{j=1}^{n_2} l_{2j} \exp(\lambda l_{2j}/n_2)}{\sum_{j=1}^{n_2} \exp(\lambda l_{2j}/n_2)} = z_0,

with approximate solution λ = z_0 since v_L = 1. For our data the exact solution is λ = 1.42.

Figure 9.5 shows results for R = 99 simulations. The solid points in the left panel are the weights

  w^*_r = \frac{dG(y^*_r)}{dH(y^*_r)} = \exp\Big\{ -\sum_j f^*_{1j} \log(n_1 p_{1j}) - \sum_j f^*_{2j} \log(n_2 p_{2j}) \Big\},

plotted against the bootstrap values z^*_r for the importance resamples. These values of z^* are shifted to the right relative to the hollow points, which show the values of z^* and w^* (all equal to 1) for 99 ordinary resamples. The values of w^* for the importance re-weighting vary over several orders of magnitude, with the largest values when z^* falls far below z_0; only samples with z^* \ge z_0 contribute to \hat p_{H,\text{raw}}.

How well does this single importance resampling distribution work for estimating all values of the survivor function Pr^*(Z^* \ge z)? The heavy solid line in the right panel shows the "true" survivor function of Z^* estimated from 50 000 ordinary bootstrap simulations. The lighter solid line is the importance
resampling estimate

  R^{-1} \sum_{r=1}^{R} w^*_r \, I\{z^*_r \ge z\}

with R = 99, and the dotted line is the estimate based on 99 ordinary bootstrap samples from the null distribution. The importance resampling estimate follows the "true" survivor function accurately close to z_0 but does poorly for negative z^*. The usual estimate does best near z^* = 0 but poorly in the tail region of interest; the estimated significance probability there is \hat p = 0. While the usual estimate decreases by R^{-1} at each z^*_r, the weighted estimate decreases by much smaller jumps close to z_0; the raw importance sampling tail probability estimate is \hat p_{H,\text{raw}} = 0.015, which is very close to the true value. The weighted survivor function estimate has large jumps in its left tail, where the estimate is unreliable.

In 50 repetitions of this experiment the ordinary and raw importance resampling tail probability estimates had variances 2.09 \times 10^{-4} and 2.63 \times 10^{-5}. For a tail probability of 0.015 this efficiency gain of about 8 is smaller than would be predicted from Table 9.4, the reason being that the distribution of z^* is rather skewed and the normal approximation to it is poor. ■

In general there are several ways to obtain tilted distributions. We can use exponential tilting with exact empirical influence values, if these are readily available. Or we can estimate the influence values by regression using R_0 initial ordinary bootstrap resamples, as described in Section 2.7.4. Another way of using an initial set of bootstrap samples is to derive weighted smooth distributions as in (3.39): illustrations of this are given later in Examples 9.9 and 9.11.
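The tilting calculation used in Example 9.8 is easy to automate. The following is a minimal numerical sketch (Python; the function names, interface, and bisection settings are ours, not from the text): it solves the mean equation for λ, relying on the fact that the tilted mean of Z^*_L increases with λ.

```python
import numpy as np

def tilted_probs(l, lam, n):
    """Exponential tilted probabilities (9.21): p_j proportional to exp(lam * l_j / n)."""
    p = np.exp(lam * l / n)
    return p / p.sum()

def tilted_mean(l1, l2, lam):
    """Mean of the linear approximation Z*_L under the pair of tilted distributions."""
    n1, n2 = len(l1), len(l2)
    return float(tilted_probs(l1, lam, n1) @ l1 + tilted_probs(l2, lam, n2) @ l2)

def solve_lambda(l1, l2, z0, lo=-50.0, hi=50.0):
    """Bisection for lambda such that the tilted mean of Z*_L equals z0;
    the tilted mean is increasing in lambda, so bisection is safe."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tilted_mean(l1, l2, mid) < z0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Given the empirical influence values l_{1j}, l_{2j} of the studentized pivot, solve_lambda returns the tilt parameter; for the gravity data the text reports the exact solution λ = 1.42.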
9.4.2 Improved estimators

Ratio and regression estimators

One simple modification of the raw importance sampling estimate is based on the fact that the average weight R^{-1} \sum_r w(Y^*_r) from any particular simulation will not equal its theoretical value of E^*\{w(Y^*)\} = 1. This suggests that the weights w(Y^*_r) be normalized, so that (9.15) is replaced by the importance resampling ratio estimate

  \hat\mu_{H,\text{rat}} = \frac{\sum_{r=1}^{R} m(Y^*_r)\, w(Y^*_r)}{\sum_{r=1}^{R} w(Y^*_r)}.    (9.22)

To some extent this controls the effect of very large fluctuations in the weights.

In practice it is better to treat the weight as a control variate or covariate. Since our aim in choosing H is to concentrate sampling where m(\cdot) is largest, the values of m(Y^*_r) w(Y^*_r) and w(Y^*_r) should be correlated. If so, and if the average weight differs from its expected value of one under simulation from H, then the estimate \hat\mu_{H,\text{raw}} probably differs from its expected value \mu. This motivates the covariance adjustment made in the importance resampling regression estimate

  \hat\mu_{H,\text{reg}} = \hat\mu_{H,\text{raw}} - b(\bar w^* - 1),    (9.23)

where \bar w^* = R^{-1} \sum_r w(Y^*_r), and b is the slope of the linear regression of the m(Y^*_r) w(Y^*_r) on the w(Y^*_r). The estimator \hat\mu_{H,\text{reg}} is the predicted value for m(Y^*) w(Y^*) at the point w(Y^*) = 1. The adjustments made to \hat\mu_{H,\text{raw}} in both \hat\mu_{H,\text{rat}} and \hat\mu_{H,\text{reg}} may induce bias, but such biases will be of order R^{-1} and will usually be negligible relative to simulation standard errors. Calculations outlined in Problem 9.12 indicate that for large R the regression estimator should outperform the raw and ratio estimators, but the improvement depends on the problem, and in practice the raw estimator of a tail probability or quantile is usually the best.

Defensive mixtures

A second improvement aims to prevent the weight w(y^*) from varying wildly. Suppose that H is a mixture of distributions, \pi H_1 + (1-\pi) H_2, where 0 < \pi < 1. The distributions H_1 and H_2 are chosen so that the corresponding probabilities are not both small simultaneously. Then the weights

  \frac{dG(y^*)}{\pi\, dH_1(y^*) + (1-\pi)\, dH_2(y^*)}

will vary less, because even if dH_1(y^*) is very small, dH_2(y^*) will keep the denominator away from zero, and vice versa. This choice of H is known as a defensive mixture distribution, and it should do particularly well if many estimates, with different m(y^*), are to be calculated. The mixture is applied by stratified sampling, that is by generating exactly \pi R observations from H_1 and the rest from H_2, and using \hat\mu_{H,\text{reg}} as usual. The components of the mixture H should be chosen to ensure that the relevant range of values of t^* is well covered, but beyond this the detailed choice is not critical. For example, if we are interested in quantiles of T^* for probabilities between \alpha and 1-\alpha, then it would be sensible to target H_1 at the \alpha quantile and H_2 at the 1-\alpha quantile, most simply by the exponential tilting method described earlier.
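The raw, ratio and regression estimates, together with the weighted quantile rule (9.19), are all short computations. A minimal Python sketch follows (our own function names, not code from the text; the regression slope assumes the weights are not all equal):

```python
import numpy as np

def importance_estimates(m, w):
    """Raw (9.15), ratio (9.22) and regression (9.23) estimates of
    mu = E*{m(Y*)} from values m_r = m(y*_r) and weights w_r = dG/dH."""
    m, w = np.asarray(m, float), np.asarray(w, float)
    mw = m * w
    raw = mw.mean()
    ratio = mw.sum() / w.sum()
    # slope of the linear regression of m_r * w_r on w_r
    b = np.cov(mw, w, ddof=1)[0, 1] / np.var(w, ddof=1)
    reg = raw - b * (w.mean() - 1.0)
    return raw, ratio, reg

def weighted_quantile(t, w, alpha):
    """Raw importance resampling quantile estimate (9.19) for alpha < 1/2:
    order the t*_r, carry their weights along, and pick t*_(M)."""
    order = np.argsort(t)
    ts, ws = np.asarray(t, float)[order], np.asarray(w, float)[order]
    cum = np.cumsum(ws) / (len(ws) + 1.0)
    M = int(np.searchsorted(cum, alpha, side="right"))  # number of r with cum_r <= alpha
    return ts[min(max(M, 1), len(ts)) - 1]
```

With w_r \equiv 1, weighted_quantile reduces to the usual estimate t^*_{((R+1)\alpha)}.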
As a further precaution we might add a third component to the mixture, such as G, to ensure stable performance in the middle of the distribution. In general the mixture could have many components, but careful choice of two or three will usually be adequate. Always the application of the mixture should be by stratified sampling, to reduce variation.

Example 9.9 (Gravity data)  To illustrate the above ideas, we again consider the hypothesis testing problem of Example 9.8. The left panel of Figure 9.6
shows 20 replicate estimates of the null survivor function of z^*, using ordinary bootstrap resampling with R = 299. The right panel shows 20 estimates of the survivor function using the regression estimate \hat\mu_{H,\text{reg}} after simulations with a defensive mixture distribution. This mixture has three components, which are G (the two EDFs) and two pairs of exponential tilted distributions targeted at the 0.025 and 0.975 quantiles of Z^*. From our earlier discussion these distributions are given by (9.21) with \lambda = \pm 2/v_L^{1/2}; we shall denote the first pair of distributions by probabilities p_{1j} and p_{2j}, and the second by probabilities q_{1j} and q_{2j}. The first component G was used for R_1 = 99 samples, the second component (the ps) for R_2 = 100 and the third component (the qs) for R_3 = 100: the mixture proportions were therefore \pi_j = R_j/(R_1 + R_2 + R_3) for j = 1, 2, 3. The importance resampling weights were of the form

  w^*_r = \frac{dG(y^*_r)}{\pi_1\, dG(y^*_r) + \pi_2\, dH_p(y^*_r) + \pi_3\, dH_q(y^*_r)},

with component densities such as dH_p(y^*_r) = \prod_j p_{1j}^{f^*_{1j}} \prod_j p_{2j}^{f^*_{2j}}, where as before f^*_{1j} and f^*_{2j} respectively count how many times y_{1j} and y_{2j} appear in the resample.

For convenience we estimated the CDF of Z^* at the sample values z^*_r. The regression estimate at z^* is obtained by setting m(y^*) = I\{z(y^*) \le z^*\} and calculating (9.23); this appears to involve 299 regressions for each CDF estimate, but Problem 9.13 shows how in fact just one matrix calculation is needed. The importance resampling estimate of the CDF is about as variable as the ordinary estimate over most of the distribution, but much less variable well into the tails. For a more systematic comparison, we calculated the ratio of the mean
Figure 9.6  Importance resampling to test for a location difference between series 7 and 8 of the gravity data. In each panel the heavy solid line is the survivor function Pr^*(Z^* \ge z^*) estimated from 50 000 ordinary bootstrap resamples and the vertical dotted lines show z_0. The left panel shows the estimates for 20 ordinary bootstraps of size 299. The right panel shows 20 importance resampling estimates using 299 samples with a regression estimate following resampling from a defensive mixture distribution with three components. See text for details.
Table 9.5  Efficiency gains (ratios of mean squared errors) for estimating a tail probability, a bias, a variance and two quantiles for the gravity data, using importance resampling estimators together with defensive mixture distributions, compared to ordinary resampling. The mixtures have R_1 ordinary bootstrap samples mixed with R_2 samples exponentially tilted to the 0.025 quantile of z^*, and with R_3 samples exponentially tilted to the 0.975 quantile of z^*. See text for details.

  Mixture            Estimate      Estimand
  R1   R2   R3                     Pr*(Z* >= z0)   E*(Z*)   var*(Z*)   z0.05   z0.025
  --   --   299      Raw               11.2         0.04      0.03      0.07    0.05
                     Ratio              3.5         0.06      0.05      0.06    0.04
                     Regression        12.4         0.18      0.07      0.06     --
  99   100  100      Raw                3.8         0.73      1.5       1.3     2.5
                     Ratio              3.4         0.79      1.5       0.93    1.3
                     Regression         4.0         0.93      1.6       0.87    1.2
  19   140  140      Raw                3.9         0.34      1.2       0.96    2.6
                     Ratio              2.3         0.43      0.82      0.48    1.1
                     Regression         4.3         0.69      1.3       0.44    1.3
squared error from ordinary resampling to that when using defensive mixture distributions to estimate the tail probability Pr^*(Z^* \ge z_0) with z_0 = 1.77, two quantiles, and the bias E^*(Z^*) and the variance var^*(Z^*) for sampling from the two series. The mixture distributions have the same three components as before, but with different values for the numbers of samples R_1, R_2 and R_3 from each. Table 9.5 gives the results for three resampling mixtures with a total of R = 299 resamples in each case. The mean squared errors were estimated from 100 replicate bootstraps, with "true" values obtained from a single bootstrap of size 50 000. The main contribution to the mean squared error is from variance rather than bias.

The first resampling distribution is not a mixture, but simply the exponential tilt to the 0.975 quantile. This gives the best estimates of the tail probability, with efficiencies for raw and regression estimates in line with Example 9.8, but it gives very poor estimates of the other quantities. For the other two mixtures the regression estimates are best for estimating the mean and variance, while the raw estimates are best for the quantiles and not really worse for the tail probability. Both mixtures are about the same for tail quantiles, while the first mixture is better for the moments.

In this case the efficiency gains for tail probabilities and quantiles predicted by Table 9.4 are unrealistic, for two reasons. First, the table compares 299 ordinary simulations with just 100 tilted to each tail of the first mixture distribution, so we would expect the variance for a tail quantity based on the mixture to be larger by a factor of about three; this is just what we see when the first distribution is compared to the second. Secondly, the distribution of Z^* is quite skewed, which considerably reduces the efficiency out as far as the 0.95 quantile.
We conclude that the regression estimate is best for estimating central quantities, that the raw estimate is best for quantiles, that results for estimating quantiles are insensitive to the precise mixture used, and that theoretical gains may not be realized in practice unless a single tail quantity is to be estimated. This is in line with other studies. ■
9.4.3 Balanced importance resampling

Importance resampling works best for the extreme quantiles corresponding to small tail probabilities, but is less effective in the centre of a distribution. Balanced resampling, on the other hand, works best in the centre of a distribution. Balanced importance resampling aims to get the best of both worlds by combining the two, as follows.

Suppose that we wish to generate R balanced resamples in which y_j has overall probability p_j of occurring. To do this exactly in general is impossible for finite nR, but we can do so approximately by applying the following simple algorithm; a more efficient algorithm is described in Problem 9.14.
Algorithm 9.2 (Balanced importance resampling)

Choose R_1 \doteq nRp_1, \ldots, R_n \doteq nRp_n, such that R_1 + \cdots + R_n = nR.

Concatenate R_1 copies of y_1 with R_2 copies of y_2 with ... with R_n copies of y_n to form a single list.

Permute the nR elements of the list at random, and read off the R balanced importance resamples as sets of n successive elements. •

A simple way to choose the R_j is to set R'_j = 1 + [n(R-1)p_j], j = 1, \ldots, n, where [\cdot] denotes integer part, and to set R_j = R'_j + 1 for the d = nR - (R'_1 + \cdots + R'_n) values of j with the largest values of nRp_j - R'_j; we set R_j = R'_j for the rest. This ensures that all the observations are represented in the bootstrap simulation.

Provided that R is large relative to n, individual samples will be approximately independent, and hence the weight associated with a sample having frequencies (f^*_1, \ldots, f^*_n) is approximately

  \prod_{j=1}^{n} (np_j)^{-f^*_j};

this does not take account of the fact that sampling is without replacement.

Figure 9.7 shows the theoretical large-sample efficiencies of balanced resampling, importance resampling, and balanced importance resampling for estimating the quantiles of a normal statistic. Ordinary balance gives maximum efficiency of 2.76 at the centre of the distribution, while importance
Figure 9.7 Asymptotic efficiencies of balanced importance resampling (solid), importance resampling (large dashes), and balanced resampling (small dashes) for estimating the quantiles of a normal statistic. The dotted horizontal line is at relative efficiency one.
resampling works well in the lower tail but badly in the centre and upper tail of the distribution. Balanced importance resampling dominates both.

Example 9.10 (Returns data)  In order to assess how well these ideas might work in practice, we again consider setting studentized bootstrap confidence intervals for the slope in the returns example. We performed an experiment like that of Example 9.7, but with the R = 999 bootstrap samples generated by balanced resampling, importance resampling, and balanced importance resampling. Table 9.6 shows the mean squared error for the ordinary bootstrap divided by the mean squared errors of the quantile estimates for these methods, using 50 replicate simulations from each scheme. This slightly different "efficiency" takes into account any bias from using the improved methods of simulation, though in fact the contribution to mean squared error from bias is small. The "true" quantiles are estimated from an ordinary bootstrap of size 100 000.

The first two lines of the table show the efficiency gains due to using the control method when the linear approximation is used as a control variate, with empirical influence values calculated exactly and estimated by regression from the same bootstrap simulation. The results differ little. The next two rows show the gains due to balanced sampling, both without and with the control
Table 9.6

  Method                 Distribution      Quantile (%)
                                         1     2.5    5     10    50    90    95    97.5   99
  Control (exact)                       1.7    2.7   2.8   4.0   11.2   5.5   2.4   2.6    1.4
  Control (approx)                      1.4    2.8   3.2   4.1   11.8   5.1   2.2   2.6    1.3
  Balance                               1.0    1.2   1.5   1.4    3.1   2.9   1.7   1.4    0.6
  Balance with control                  1.4    1.8   3.0   2.8    4.4   4.7   2.5   2.2    1.5
  Importance               H1           7.8    3.7   3.6   1.8    0.4   3.5   2.3   3.1    5.5
                           H2           4.6    2.9   3.5   1.1    0.1   2.6   3.1   4.3    5.2
                           H3           3.6    3.7   2.0   1.7    0.5   2.4   2.2   2.6    3.6
                           H4           4.3    2.6   2.5   1.8    0.9   1.6   1.6   2.2    2.3
                           H5           2.6    2.1   0.7   0.3    0.4   0.5   0.6   1.6    2.1
  Balanced importance      H1           5.0    5.7   4.1   1.9    0.5   2.6   2.2   6.3    4.5
                           H2           4.2    3.4   2.4   1.8    0.2   2.0   3.6   4.2    3.9
                           H3           5.2    4.2   3.8   1.8    0.9   3.0   2.4   4.0    4.0
                           H4           4.3    3.3   3.4   2.2    2.1   2.7   3.7   3.3    4.3
                           H5           3.2    2.8   1.0   0.4    0.9   0.9   1.4   2.1    2.1
method, which gives a worthwhile improvement in performance, except in the tail.

The next five lines show the gains due to different versions of importance resampling, in each case using a defensive mixture distribution and the raw quantile estimate. In practice it is unusual to perform a bootstrap simulation with the aim of setting a single confidence interval, and the choice of importance sampling distribution H must balance various potentially conflicting requirements. Our choices were designed to reflect this. We first suppose that the empirical influence values l_j for t are known and can be used for exponential tilting of the linear approximation t^*_L to t^*. The first defensive mixture, H_1, uses 499 simulations from a distribution tilted to the \alpha quantile of t^*_L and 500 simulations from a distribution tilted to the 1 - \alpha quantile of t^*_L, for \alpha = 0.05. The second mixture is like this but with \alpha = 0.025. The third, fourth and fifth distributions are the sort that might be used in practice with a complicated statistic. We first performed an ordinary bootstrap of size R_0, which we used to estimate first the empirical influence values l_j by regression and then the tilt values for the 0.05 and 0.95 quantiles. We then performed a further bootstrap of size (R - R_0)/2 using each set of tilted probabilities, giving a total of R simulations from three different distributions, one centred and two tilted in opposite directions. We took R_0 = 199 and R_0 = 499, giving H_3 and H_4. For H_5 we took R_0 = 499, but estimated the tilted distributions by frequency smoothing (Section 3.9.2) with bandwidth
Table 9.6 Efficiencies for estimation of quantiles of studentized slope for returns data, relative to ordinary bootstrap resampling.
\varepsilon = 0.5 v^{1/2} at the 0.05 and 0.95 quantiles of t^*, where v^{1/2} is the standard error of t estimated from the ordinary bootstrap.

Balance generally improves importance resampling, which is not sensitive to the mixture distribution used. The effect of estimating the empirical influence values is not marked, while frequency smoothing does not perform so well as exponential tilting. Importance resampling estimates of the central quantiles are poor, even when the simulation is balanced. Overall, any of schemes H_1 to H_4 leads to appreciably more accurate estimates of the quantiles usually of interest. ■
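The balanced importance resampling scheme of Algorithm 9.2, together with the approximate weights \prod_j (np_j)^{-f^*_j}, can be sketched in code as follows (Python; the function names and interface are ours, not from the text):

```python
import numpy as np

def balanced_importance_resample(y, p, R, rng):
    """Algorithm 9.2: R resamples of size n, balanced so that y_j
    appears close to n*R*p_j times in total across the resamples."""
    n = len(y)
    p = np.asarray(p, float)
    Rj = 1 + np.floor(n * (R - 1) * p).astype(int)   # R'_j = 1 + [n(R-1)p_j]
    d = n * R - Rj.sum()                             # shortfall to distribute
    if d > 0:
        Rj[np.argsort(-(n * R * p - Rj))[:d]] += 1   # largest nRp_j - R'_j first
    pool = np.repeat(np.arange(n), Rj)               # R_j copies of each index j
    rng.shuffle(pool)                                # random permutation
    idx = pool.reshape(R, n)                         # blocks of n successive elements
    return np.asarray(y)[idx], idx

def importance_weights(idx, p):
    """Approximate weights prod_j (n p_j)^(-f*_j) for each resample."""
    n = len(p)
    f = np.apply_along_axis(np.bincount, 1, idx, minlength=n)
    return np.exp(-(f * np.log(n * np.asarray(p))).sum(axis=1))
```

With uniform p_j = 1/n this reduces to ordinary balanced resampling, and all weights equal one.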
9.4.4 Bootstrap recycling

In Section 3.9 we introduced the idea of bootstrapping the bootstrap, both for making bias adjustments to bootstrap calculations and for studying the variation of properties of statistics. Further applications of the idea were described in Chapters 4 and 5. In both parametric and nonparametric applications we need to simulate samples from a series of distributions, themselves obtained from simulations in the nonparametric case. Recycling methods replace many sets of simulated samples by one set of samples and many sets of weights, and have the potential to reduce the computational effort greatly. This is particularly valuable when the statistic of interest is expensive to calculate, for example when it involves a difficult optimization, or when each bootstrap sample is costly to generate, as when using Markov chain Monte Carlo methods (Section 4.2.2).

The basic idea is repeated use of the importance sampling identity (9.14), as follows. Suppose that we are trying to calculate \mu_k = E\{m(Y) \mid G_k\} for a series of distributions G_1, \ldots, G_K. The naive Monte Carlo approach is to calculate each value \mu_k independently, simulating R samples y_1, \ldots, y_R from G_k and calculating \hat\mu_k = R^{-1} \sum_r m(y_r). But for any distribution H whose support includes that of G_k we have

  E\{m(Y) \mid G_k\} = \int m(y)\, dG_k(y) = \int m(y) \frac{dG_k(y)}{dH(y)}\, dH(y) = E\left\{ m(Y) \frac{dG_k(Y)}{dH(Y)} \,\Big|\, H \right\}.

We can therefore estimate all K values using one set of samples y_1, \ldots, y_N simulated from H, with estimates

  \hat\mu_k = N^{-1} \sum_{r=1}^{N} m(y_r) \frac{dG_k(y_r)}{dH(y_r)},  \qquad  k = 1, \ldots, K.    (9.24)

In some contexts we may choose N to be much larger than the value R we might use for a single simulation, but less than KR. It is important to choose H carefully, and to take account of the fact that the estimates are correlated.
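As a concrete sketch of (9.24) in the multinomial setting of the example below: if the N samples are drawn from the EDF (so H puts mass 1/n on each y_j) and G_k puts mass p_{kj} on y_j, then dG_k(y_r)/dH(y_r) = \prod_j (n p_{kj})^{f_{rj}}, where f_{rj} are the resample frequencies. In Python (hypothetical helper names, written by us):

```python
import numpy as np

def log_recycle_weights(freqs, probs):
    """log w_r(k) = sum_j f_rj log(n p_kj), for resample frequencies
    freqs (N x n) and target distributions probs (K x n)."""
    n = freqs.shape[1]
    return freqs @ np.log(n * np.asarray(probs).T)        # (N, K) array

def recycled_estimates(m_vals, freqs, probs):
    """mu_hat_k = N^{-1} sum_r m(y_r) w_r(k), as in (9.24)."""
    w = np.exp(log_recycle_weights(freqs, probs))
    return (w * np.asarray(m_vals)[:, None]).mean(axis=0)
```

One pass over the N first-level samples then yields estimates for every G_k at once, instead of K separate simulations.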
Both N and the choice of H depend upon the use being made of the estimates and the form of m(\cdot).

Example 9.11 (City population data)  Consider again estimating the bias and variance functions for the ratio \theta = t(F) of the city population data with n = 10. In Example 3.22 we estimated b(F) = E(T \mid F) - t(F) and v(F) = var(T \mid F) for a range of values of \theta = t(F) using a first-level bootstrap to calculate values of t^* for 999 bootstrap samples \hat F^*, and then doing a second-level bootstrap to estimate b(\hat F^*) and v(\hat F^*) for each of those samples. Here the second level of resampling is avoided by using importance re-weighting. At the same time, we retain the smoothing introduced in Example 3.22. Rather than take each G_k to be one of the bootstrap EDFs \hat F^*, we obtain a smooth curve by using smooth distributions \hat F^*_\theta with probabilities p_j(\theta) as defined by (3.39). Recall that the parameter value of \hat F^*_\theta is t(\hat F^*_\theta) = \theta^*, say, which will differ slightly from \theta. For H we take \hat F, the EDF of the original data, on the grounds that it has the correct support and covers the range of values for y^* well: it is not necessarily a good choice. Then we have weights

  \frac{dG_k(y^*_r)}{dH(y^*_r)} = \frac{d\hat F^*_\theta(y^*_r)}{d\hat F(y^*_r)} = \prod_{j=1}^{n} \left\{ \frac{p_j(\theta)}{n^{-1}} \right\}^{f^*_{rj}} = w^*_r(\theta),

say, where as usual f^*_{rj} is the frequency with which y_j occurs in the rth bootstrap sample. We should emphasize that the samples y^* drawn from H here replace second-level bootstrap samples.

Consider the bias estimate. The weighted sum R^{-1} \sum_r (t^*_r - \theta^*) w^*_r(\theta) is an unbiased estimate of the bias E^{**}(T^{**} \mid \hat F^*_\theta) - \theta^*, and we can plot this estimate to see how the bias varies as a function of \theta^* or \theta. However, the weighted sum can behave badly if a few of the w^*_r(\theta) are very large, and it is better to use the ratio and regression estimates (9.22) and (9.23). The top left panel of Figure 9.8 shows raw, ratio, and regression estimates of the bias, based on a single set of R = 999 simulations, with the curve obtained from the double bootstrap calculation used in Figure 3.7. For example, the ratio estimate of bias for a particular value of \theta is \sum_r (t^*_r - \theta^*) w^*_r(\theta) / \sum_r w^*_r(\theta), and this is plotted as a function of \theta^*. The raw and ratio estimates are rather poor, but the regression estimate agrees fairly well with the double bootstrap curve. The panel also shows the estimated bias from a defensive mixture with 499 ordinary samples mixed with 250 samples tilted to each of the 0.025 and 0.975 quantiles; this is the best estimate of those we consider. The panels below show 20 replicates of these estimated biases. These confirm the impression from the panel above: with ordinary resampling the regression estimator is best, but it is better to use the mixture distribution. The top right panel shows the corresponding estimates for the standard
We can obtain a saddlepoint approximation to (9.31) by applying (9.28) and (9.30) with u = (2t^* - t)^2 and p_j = … . Including programming, it took about ten minutes to calculate 3000 values of (9.31) by saddlepoint approximation; direct simulation with 250 samples at the second level took about four hours on the same workstation. ■

Estimating functions

One simple extension of the basic approximations is to statistics determined by monotonic estimating functions. Suppose that the value of a scalar bootstrap statistic T^* based on sampling from y_1, \ldots, y_n is the solution to the estimating equation

  U^*(t) = \sum_{j=1}^{n} a(t; y_j) f^*_j = 0,    (9.32)

where for each y the function a(\theta; y) is decreasing in \theta. Then T^* \le t if and only if U^*(t) \le 0. Hence Pr^*(T^* \le t) may be estimated by \hat G_s(0) applied with cumulant-generating function (9.30) in which a_j = a(t; y_j). A saddlepoint approximation to the density of T^* is

  g_s(t) = \left| \frac{\dot K(\hat\xi)}{\hat\xi} \right| \{2\pi K''(\hat\xi)\}^{-1/2} \exp\{K(\hat\xi)\},    (9.33)

where \dot K(\xi) = \partial K/\partial t, and \hat\xi solves the equation K'(\hat\xi) = 0. The first term on the right in (9.33) corresponds to the Jacobian for the transformation from the density of U^* to that of T^*.
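For the special case a(t; y) = y - t, where T^* is simply the resample average, the Jacobian factor in (9.33) reduces to n, and the density can be coded directly. A minimal Python sketch (our own function name; the bisection assumes t lies strictly inside the range of the data so that the a_j take both signs):

```python
import numpy as np
from math import sqrt, log, exp, pi

def density_T(t, y):
    """Saddlepoint density of T* solving sum_j a(t; y_j) f*_j = 0 with
    a(t; y) = y - t, i.e. T* is the resample average.  Here
    K(xi) = n log{n^{-1} sum_j exp(xi a_j)} and |dK/dt / xi| = n."""
    a = np.asarray(y, float) - t
    n = len(a)
    lo, hi = -200.0, 200.0                   # bisection for K'(xi) = 0
    for _ in range(300):
        mid = 0.5 * (lo + hi)
        e = np.exp(mid * a - (mid * a).max())
        if float((e / e.sum()) @ a) < 0.0:   # tilted mean of a_j still negative
            lo = mid
        else:
            hi = mid
    xi = 0.5 * (lo + hi)
    z = xi * a
    m = z.max()
    e = np.exp(z - m)
    p = e / e.sum()                          # tilted probabilities
    K = n * (m + log(e.sum()) - log(n))
    K2 = n * float(p @ a**2 - (p @ a) ** 2)
    return n * exp(K) / sqrt(2.0 * pi * K2)  # Jacobian n times density of U* at 0
```

At t = \bar y the saddlepoint is \hat\xi = 0 and the formula reduces to the normal density \{n/(2\pi\hat\sigma^2)\}^{1/2} at its mode, a useful sanity check.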
9.5 ■Saddlepoint Approximation
Example 9.15 (Maize data)  Problem 4.7 contains data from a paired comparison experiment performed by Darwin on the growth of maize plants. The data are reduced to 15 differences y_1, \ldots, y_{15} between the heights (in eighths of an inch) of cross-fertilized and self-fertilized plants. When two large negative values are excluded, the differences have average \bar y = 33 and look close to normal, but when those values are included the average drops to 20.9.

When data may have been contaminated by outliers, robust M-estimates are useful. If we assume that Y = \theta + \sigma\varepsilon, where the distribution of \varepsilon is symmetric about zero but may have long tails, an estimate of location \theta can be found by solving the equation

  \sum_{j=1}^{n} \psi\left( \frac{y_j - \theta}{\sigma} \right) = 0,    (9.34)

where \psi(\varepsilon) is designed to downweight large values of \varepsilon. A common choice is the Huber estimate determined by

  \psi(\varepsilon) = \begin{cases} -c, & \varepsilon < -c, \\ \varepsilon, & -c \le \varepsilon \le c, \\ c, & \varepsilon > c. \end{cases}    (9.35)

With c = \infty this gives \psi(\varepsilon) = \varepsilon and leads to the normal-theory estimate \hat\theta = \bar y, but a smaller choice of c will give better behaviour when there are outliers. With c = 1.345 and \sigma fixed at the median absolute deviation s of the data, we obtain \hat\theta = 26.45. How variable is this? We can get some idea by looking at replicates of \hat\theta based on bootstrap samples y^*_1, \ldots, y^*_n. A bootstrap value \hat\theta^* solves

  \sum_{j=1}^{n} \psi\left( \frac{y^*_j - \hat\theta^*}{s} \right) = 0,

so the saddlepoint approximation to the PDF of bootstrap values is obtained starting from (9.32) with a(t; y_j) = \psi\{(y_j - t)/s\}. The left panel of Figure 9.10 compares the saddlepoint approximation with the empirical distribution of \hat\theta^*, and with the approximate PDF of the bootstrapped average. The saddlepoint approximation to \hat\theta^* seems quite accurate, while the PDF of the average is wider and shifted to the left.

The assumption of symmetry underlies the use of the estimator \hat\theta, because the parameter \theta must be the same for all possible choices of c. The discussion in Section 3.3 and Example 3.26 implies that our resampling scheme should take this into account by enlarging the resampling set to y_1, \ldots, y_n, \tilde\theta - (y_1 - \tilde\theta), \ldots, \tilde\theta - (y_n - \tilde\theta), for some very robust estimate \tilde\theta of \theta; we take \tilde\theta to be the median. The cumulant-generating function required when taking samples
of size n from this set is

  K(\xi) = n \log\left[ (2n)^{-1} \sum_{j=1}^{n} \left( \exp\{\xi a(t; y_j)\} + \exp\{\xi a(t; 2\tilde\theta - y_j)\} \right) \right].

The right panel of Figure 9.10 compares saddlepoint and Monte Carlo approximations to the PDF of \hat\theta^* under this symmetrized resampling scheme; the PDF of the average is shown also. All are symmetric about \tilde\theta.

One difficulty here is that we might prefer to approximate the PDF of \hat\theta^* when s is replaced by its bootstrap version s^*, and this cannot be done in the current framework. More fundamentally, the distribution of interest will often be for a quantity such as a studentized form of \hat\theta^* derived from \hat\theta^*, s^*, and perhaps other statistics, necessitating the more sophisticated approximations outlined in Section 9.5.3. ■
9.5.2 Conditional approximation

There are numerous ways to extend the discussion above. One of the most straightforward is to situations where U is a q \times 1 vector which is a linear function of independent variables W_1, \ldots, W_n with cumulant-generating functions K_j(\xi), j = 1, \ldots, n. That is, U = A^T W, where A is an n \times q matrix with rows a_j^T. The joint cumulant-generating function of U is

  K(\xi) = \log E \exp(\xi^T A^T W) = \sum_{j=1}^{n} K_j(a_j^T \xi),
Figure 9.10 Comparison of the saddlepoint approximation to the PDF of a robust M-estimate applied to the maize data (solid), with results from a bootstrap simulation with R = 50000. The heavy curve is the saddlepoint approximation to the PDF of the average. The left panel shows results from resampling the data, and the right shows results from a symmetrized bootstrap.
and the saddlepoint approximation to the density of U at u is

  g_s(u) = (2\pi)^{-q/2} |K''(\hat\xi)|^{-1/2} \exp\{K(\hat\xi) - \hat\xi^T u\},    (9.36)
where \hat\xi satisfies the q \times 1 system of equations \partial K(\xi)/\partial\xi = u, and K''(\xi) = \partial^2 K(\xi)/\partial\xi\,\partial\xi^T is the q \times q matrix of second derivatives of K; |\cdot| denotes determinant.

Now suppose that U is partitioned into U_1 and U_2, that is, U^T = (U_1^T, U_2^T), where U_1 and U_2 have dimension q_1 \times 1 and (q - q_1) \times 1 respectively. Note that U_2 = A_2^T W, where A_2 consists of the last q - q_1 columns of A. The cumulant-generating function of U_2 is simply K(0, \xi_2), where \xi^T = (\xi_1^T, \xi_2^T) has been partitioned conformably with U, so the saddlepoint approximation to the marginal density of U_2 is

  g_s(u_2) = (2\pi)^{-(q-q_1)/2} |K''_{22}(0, \hat\xi_{20})|^{-1/2} \exp\{K(0, \hat\xi_{20}) - \hat\xi_{20}^T u_2\},    (9.37)
where ξ̂₂₀ satisfies the (q − q₁) × 1 system of equations ∂K(0, ξ₂)/∂ξ₂ = u₂, and K''₂₂ is the (q − q₁) × (q − q₁) corner of K'' corresponding to U₂. Division of (9.36) by (9.37) gives a double saddlepoint approximation to the conditional density of U₁ at u₁ given that U₂ = u₂. When U₁ is scalar, i.e. q₁ = 1, the approximate conditional CDF is again (9.28), but with

\[ w = \mathrm{sign}(\hat\xi_1)\left(2\left[\{K(0,\hat\xi_{20}) - \hat\xi_{20}^T u_2\} - \{K(\hat\xi) - \hat\xi^T u\}\right]\right)^{1/2}, \qquad
v = \hat\xi_1\left\{\frac{|K''(\hat\xi)|}{|K''_{22}(0,\hat\xi_{20})|}\right\}^{1/2}. \]
Example 9.16 (City population data) A simple bootstrap application is to obtain the distribution of the ratio T* in bootstrap sampling from the city population data pairs with n = 10. In order to avoid conflicts of notation we set yⱼ = (zⱼ, xⱼ), so that T* is the solution to the equation Σ(xⱼ − tzⱼ)fⱼ* = 0. For this we take the Wⱼ to be independent Poisson random variables with equal means μ, so Kⱼ(ξ) = μ(e^ξ − 1). We set

\[ a_j = \begin{pmatrix} x_j - t z_j \\ 1 \end{pmatrix}, \qquad u = \begin{pmatrix} 0 \\ n \end{pmatrix}. \]

Now T* ≤ t if and only if Σⱼ(xⱼ − tzⱼ)Wⱼ ≤ 0, where Wⱼ is the number of times (zⱼ, xⱼ) is included in the sample. But the relation between the Poisson and multinomial distributions (Problem 9.19) implies that the joint conditional distribution of (W₁, ..., Wₙ) given that ΣWⱼ = n is the same as that of the multinomial frequency vector (f₁*, ..., fₙ*) in ordinary bootstrap sampling from a sample of size n. Thus the probability that Σⱼ(xⱼ − tzⱼ)Wⱼ ≤ 0 given that ΣWⱼ = n is just the probability that T* ≤ t in ordinary bootstrap sampling from the data pairs.
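The Poisson trick above translates directly into code. The sketch below is a minimal numpy implementation under stated assumptions: the Poisson means are set to 1 (they cancel after conditioning), the full saddlepoint equation K'(ξ) = u is solved by damped Newton iteration, and the marginal saddlepoint for ΣWⱼ is available in closed form, since K(0, ξ₂) = n(e^{ξ₂} − 1) gives ξ̂₂₀ = 0 exactly.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):  # standard normal PDF
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def ratio_cdf_saddlepoint(x, z, t):
    """Double saddlepoint approximation to Pr*(T* <= t), where T* is the
    bootstrap ratio sum(x_j f_j*)/sum(z_j f_j*), via independent Poisson
    variables W_j (mean 1) conditioned on sum W_j = n."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    n = len(x)
    a = np.column_stack([x - t * z, np.ones(n)])  # rows a_j = (x_j - t z_j, 1)
    u = np.array([0.0, float(n)])
    # Full saddlepoint: solve K'(xi) = u for K(xi) = sum_j {exp(xi'a_j) - 1}.
    xi = np.zeros(2)
    for _ in range(200):
        e = np.exp(a @ xi)
        Kp = a.T @ e                          # K'(xi)
        Kpp = (a * e[:, None]).T @ a          # K''(xi)
        step = np.linalg.solve(Kpp, Kp - u)
        big = np.max(np.abs(step))
        if big > 1.0:                         # damp large Newton steps
            step /= big
        xi -= step
        if np.max(np.abs(step)) < 1e-11:
            break
    e = np.exp(a @ xi)
    K = float(np.sum(e - 1.0))
    Kpp = (a * e[:, None]).T @ a
    # Marginal saddlepoint for U2 = sum W_j at n is exact: xi_20 = 0, so
    # K(0, xi_20) - xi_20 * n = 0 and K''_22(0, xi_20) = n.
    w = np.sign(xi[0]) * sqrt(2.0 * (0.0 - (K - xi @ u)))
    v = xi[0] * sqrt(np.linalg.det(Kpp) / n)
    return Phi(w) + phi(w) * (1.0 / w - 1.0 / v)
```

Because the conditioning variable is a Poisson total, the second saddlepoint is free; the whole approximation costs one small Newton solve per value of t.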
9 ■Improved Calculation a
           Unconditional         Conditional          Without replacement
   α      S'point   Sim'n      S'point   Sim'n       S'point   Sim'n
 0.001     1.150    1.149      1.216    1.215        1.070    1.070
 0.005     1.191    1.192      1.236    1.237        1.092    1.092
 0.01      1.214    1.215      1.248    1.247        1.104    1.103
 0.025     1.251    1.252      1.273    1.269        1.122    1.122
 0.05      1.286    1.286      1.301    1.291        1.139    1.138
 0.1       1.329    1.329      1.340    1.337        1.158    1.158
 0.9       1.834    1.833      1.679    1.679        1.348    1.348
 0.95      1.967    1.967      1.732    1.736        1.392    1.392
 0.975     2.107    2.104      1.777    1.777        1.436    1.435
 0.99      2.303    2.296      1.829    1.833        1.493    1.495
 0.995     2.461    2.445      1.865    1.863        1.537    1.540
 0.999     2.857    2.802      1.938    1.936        1.636    1.635
In this situation it is of course more direct to use the estimating function method with a(t; yⱼ) = xⱼ − tzⱼ and the simpler approximations (9.28) and (9.33). Then the Jacobian term in (9.33) is |Σ zⱼ exp{ξ̂(xⱼ − tzⱼ)} / Σ exp{ξ̂(xⱼ − tzⱼ)}|.

Another application is to conditional distributions for T*. Suppose that the population pairs are related by xⱼ = zⱼθ + zⱼ^{1/2}εⱼ, where the εⱼ are a random sample from a distribution with mean zero. Then conditional on the zⱼ, the ratio Σxⱼ/Σzⱼ has variance proportional to (Σzⱼ)⁻¹. In some circumstances we might want to obtain an approximation to the conditional distribution of T* given that ΣZⱼ* = Σzⱼ. In this case we can use the approach outlined in the previous paragraph, but with two conditioning variables: we take the Wⱼ to be independent Poisson variables with equal means, and set

\[ U^* = \begin{pmatrix} \sum (x_j - t z_j) W_j \\ \sum z_j W_j \\ \sum W_j \end{pmatrix}, \qquad
u = \begin{pmatrix} 0 \\ \sum z_j \\ n \end{pmatrix}, \qquad
a_j = \begin{pmatrix} x_j - t z_j \\ z_j \\ 1 \end{pmatrix}. \]
A third application is to approximating the distribution of the ratio when a sample of size m = 10 is taken without replacement from the n = 49 data pairs. Again T* ≤ t is equivalent to the event Σⱼ(xⱼ − tzⱼ)Wⱼ ≤ 0, but now Wⱼ indicates that (zⱼ, xⱼ) is included in the m cities chosen; we want to impose the condition ΣWⱼ = m. We take the Wⱼ to be binary variables with equal success probabilities 0 < π < 1, giving Kⱼ(ξ) = log(1 − π + πe^ξ), with π any value. We then apply the double saddlepoint approximation with

\[ a_j = \begin{pmatrix} x_j - t z_j \\ 1 \end{pmatrix}, \qquad u = \begin{pmatrix} 0 \\ m \end{pmatrix}. \]
Table 9.8 Comparison of saddlepoint and simulation quantile approximations for the ratio when sampling from the city population data. The statistics are the ratio Σxⱼ/Σzⱼ with n = 10, the ratio conditional on Σzⱼ* = 640 with n = 10, and the ratio in samples of size 10 taken without replacement from the full data. The simulation results are based on 100000 bootstrap samples, with logistic regression used to estimate the simulated conditional probabilities, from which the quantiles were obtained by a spline fit.

Table 9.8 compares the quantiles of these saddlepoint distributions with
Monte Carlo approximations based on 100000 samples. The general agreement is excellent in each case. ■

A further application is to permutation distributions.

Example 9.17 (Correlation coefficient) In Example 4.9 we applied a permutation test to the sample correlation t between variables x and z based on pairs (x₁, z₁), ..., (xₙ, zₙ). For this statistic and test, the event T ≥ t is equivalent to Σⱼ xⱼ z_{ℓ(j)} ≥ Σ xⱼzⱼ, where ℓ(·) is a permutation of the integers 1, ..., n. An alternative formulation is as follows. Let W_{ij}, i, j = 1, ..., n, denote independent binary variables with equal success probabilities 0 < π < 1, for any π. Then consider the distribution of U₁ = Σ_{i,j} xᵢzⱼW_{ij} conditional on U₂ = (ΣⱼW_{1j}, ..., ΣⱼW_{nj}, ΣᵢW_{i1}, ..., ΣᵢW_{i,n−1})ᵀ = u₂, where u₂ is a vector of ones of length 2n − 1. Notice that the condition ΣᵢW_{i,n} = 1 is entailed by the other conditions and so is redundant. Each value of xᵢ and each value of zⱼ appears precisely once in the sum U₁, with equal probabilities, and hence the conditional distribution of U₁ given U₂ = u₂ is equivalent to the permutation distribution of T. Here m = n², q = 2n, and q₁ = 1.

Our limited numerical experience suggests that in this example the saddlepoint approximation can be inaccurate if the large number of constraints results in a conditional distribution on only a few values. ■
9.5.3 Marginal approximation

The approximate distribution and density functions described so far are useful in contexts such as testing hypotheses, but they are harder to apply to such problems as setting studentized bootstrap confidence intervals. Although (9.26) and (9.28) can be extended to some types of complicated statistics, we merely outline the results.

Approximate cumulant-generating function

The simplest approach is direct approximation to the cumulant-generating function of the bootstrap statistic of interest, T*. The key idea is to replace the cumulant-generating function K(ξ) by the first four terms of its expansion in powers of ξ,

\[ K_C(\xi) = \xi\kappa_1 + \tfrac12\xi^2\kappa_2 + \tfrac16\xi^3\kappa_3 + \tfrac1{24}\xi^4\kappa_4, \qquad -\infty < \xi < \infty, \]
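The truncated expansion K_C can be used with the approximation (9.28) exactly as a true cumulant-generating function would be. A sketch, assuming the first four cumulants are available; here they are taken (exactly) from a Gamma(4) distribution purely for illustration, so that the result can be checked against a known CDF:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def phi(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def cdf_from_cumulants(x, k1, k2, k3, k4):
    """Saddlepoint CDF approximation at x from the quartic truncation
    K_C(xi) = xi k1 + xi^2 k2/2 + xi^3 k3/6 + xi^4 k4/24 of the CGF."""
    # Solve K_C'(xi) = x by Newton's method (K_C'' > 0 near the origin).
    xi = 0.0
    for _ in range(200):
        Kp = k1 + k2 * xi + k3 * xi**2 / 2 + k4 * xi**3 / 6
        Kpp = k2 + k3 * xi + k4 * xi**2 / 2
        step = (Kp - x) / Kpp
        xi -= step
        if abs(step) < 1e-12:
            break
    K = k1 * xi + k2 * xi**2 / 2 + k3 * xi**3 / 6 + k4 * xi**4 / 24
    Kpp = k2 + k3 * xi + k4 * xi**2 / 2
    w = np.sign(xi) * sqrt(2.0 * (xi * x - K))
    v = xi * sqrt(Kpp)
    if abs(w) < 1e-6:          # at the mean, the leading normal term is 1/2
        return 0.5
    return Phi(w) + phi(w) * (1.0 / w - 1.0 / v)
```

In bootstrap use the κ's would be estimated from influence quantities, as described below; the truncation means the approximation degrades in the far tails, where the quartic no longer resembles the true CGF.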
ε₁H(y − y₁) + ε₂H(y − y₂) + ε₃H(y − y₃), differentiate with respect to ε₁, ε₂, and ε₃, and set ε₁ = ε₂ = ε₃ = 0. The empirical influence values for W at F̂ are then obtained by replacing F with F̂. In terms of the influence values for t and v the result of this calculation is

\[
\begin{aligned}
L_w(y_1) &= v^{-1/2}L_t(y_1),\\
Q_w(y_1,y_2) &= v^{-1/2}Q_t(y_1,y_2) - \tfrac12 v^{-3/2}L_t(y_1)L_v(y_2)\,[2],\\
C_w(y_1,y_2,y_3) &= v^{-1/2}C_t(y_1,y_2,y_3) - \tfrac12 v^{-3/2}\{Q_t(y_1,y_2)L_v(y_3) + Q_v(y_1,y_2)L_t(y_3)\}\,[3]\\
&\quad + \tfrac34 v^{-5/2}L_t(y_1)L_v(y_2)L_v(y_3)\,[3],
\end{aligned}
\]
where [k] after a term indicates that it should be summed over the permutations of its yᵢ's that give the k distinct quantities in the sum. Thus for example

\[ L_t(y_1)L_v(y_2)L_v(y_3)\,[3] = L_t(y_1)L_v(y_2)L_v(y_3) + L_t(y_2)L_v(y_1)L_v(y_3) + L_t(y_3)L_v(y_1)L_v(y_2). \]
The influence values for z involve linear, quadratic, and cubic influence values for t, and linear and quadratic influence values for v, the latter given by

\[
\begin{aligned}
L_v(y_1) &= L_t(y_1)^2 - \int L_t(x)^2\,dF(x) + 2\int L_t(x)Q_t(x,y_1)\,dF(x),\\
\tfrac12 Q_v(y_1,y_2) &= L_t(y_1)Q_t(y_1,y_2)\,[2] - L_t(y_1)L_t(y_2) - \int\{Q_t(x,y_1) + Q_t(x,y_2)\}L_t(x)\,dF(x)\\
&\quad + \int\{Q_t(x,y_1)Q_t(x,y_2) + L_t(x)C_t(x,y_1,y_2)\}\,dF(x).
\end{aligned}
\]
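The bracketed permutation sums are easy to get wrong in code, so a small numerical check is worthwhile. The sketch below specializes to the case where t is the sample average (so L_t(yᵢ) = yᵢ − ȳ, Q_t = C_t = 0, and L_v(yᵢ) = (yᵢ − ȳ)² − v), writes the [2] and [3] sums out explicitly, and verifies the constraints that follow from ΣᵢL_t(yᵢ) = ΣᵢL_v(yᵢ) = 0 at F̂:

```python
import numpy as np

def influence_quantities(y):
    """Linear, quadratic, and cubic influence quantities l_i, q_ij, c_ijk
    for the studentized statistic when t is the sample average, with the
    [2] and [3] permutation sums written out term by term."""
    y = np.asarray(y, float)
    d = y - y.mean()              # L_t(y_i) = y_i - ybar
    v = np.mean(d**2)             # v = n^{-1} sum (y_i - ybar)^2
    D = d**2 - v                  # L_v(y_i) = (y_i - ybar)^2 - v
    l = d / np.sqrt(v)
    q = -0.5 * v**-1.5 * (d[:, None] * D[None, :] + d[None, :] * D[:, None])
    c = (3.0 * v**-1.5 * d[:, None, None] * d[None, :, None] * d[None, None, :]
         + 0.75 * v**-2.5 * (d[:, None, None] * D[None, :, None] * D[None, None, :]
                             + d[None, :, None] * D[:, None, None] * D[None, None, :]
                             + d[None, None, :] * D[:, None, None] * D[None, :, None]))
    return l, q, c
```

Each array should be symmetric in its indices and sum to zero over any single index; these identities hold regardless of the data and catch most transcription errors.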
The simplest example is the average t(F) = ∫x dF(x) = ȳ of a sample of values y₁, ..., yₙ from F. Then L_t(yᵢ) = yᵢ − ȳ, Q_t(yᵢ, yⱼ) = C_t(yᵢ, yⱼ, yₖ) = 0, the expressions above simplify greatly, and the required influence quantities are

\[
\begin{aligned}
l_i &= L_w(y_i;\hat F) = v^{-1/2}(y_i - \bar y),\\
q_{ij} &= Q_w(y_i,y_j;\hat F) = -\tfrac12 v^{-3/2}(y_i - \bar y)\{(y_j - \bar y)^2 - v\}\,[2],\\
c_{ijk} &= C_w(y_i,y_j,y_k;\hat F) = 3v^{-3/2}(y_i - \bar y)(y_j - \bar y)(y_k - \bar y)\\
&\quad + \tfrac34 v^{-5/2}(y_i - \bar y)\{(y_j - \bar y)^2 - v\}\{(y_k - \bar y)^2 - v\}\,[3],
\end{aligned}
\]
where v = n⁻¹Σ(yᵢ − ȳ)². The influence quantities for Z are obtained from those for W by multiplication by n^{1/2}. A numerical illustration of the use of the corresponding approximate cumulant-generating function K_C(ξ) is given in Example 9.19. ■

Integration approach

Another approach involves extending the estimating function approximation to the multivariate case, and then approximating the marginal distribution of the statistic of interest. To see how, suppose that the quantity T of interest is a scalar, and that T and S = (S₁, ..., S_{q−1})ᵀ are determined by a q × 1 estimating function

\[ U(t,s) = \sum_{j=1}^n a(t, s_1, \ldots, s_{q-1}; Y_j). \]

Then the bootstrap quantities T* and S* are the solutions of the equations

\[ U^*(t,s) = \sum_{j=1}^n a_j(t,s) f_j^* = 0, \tag{9.40} \]
where aⱼ(t, s) = a(t, s; yⱼ) and the frequencies (f₁*, ..., fₙ*) have a multinomial distribution with denominator n and mean vector n(p₁, ..., pₙ); typically pⱼ = n⁻¹. We assume that there is a unique solution (t*, s*) to (9.40) for each possible set of fⱼ*, and seek saddlepoint approximations to the marginal PDF and CDF of T*. For fixed t and s, the cumulant-generating function of U* is

\[ K(\xi; t, s) = n\log\sum_{j=1}^n p_j\exp\{\xi^T a_j(t,s)\}, \tag{9.41} \]
and the joint density of the U* at u is given by (9.36). The Jacobian needed to obtain the joint density of T* and S* from that of U* is hard to obtain exactly, but can be approximated by

\[ J(t,s;\xi) = \left|\,n\sum_{j=1}^n \tilde p_j(t,s;\xi)\left(\frac{\partial a_j(t,s)}{\partial t}, \frac{\partial a_j(t,s)}{\partial s^T}\right)\right|, \]

where

\[ \tilde p_j(t,s;\xi) = \frac{p_j\exp\{\xi^T a_j(t,s)\}}{\sum_{k=1}^n p_k\exp\{\xi^T a_k(t,s)\}}; \]

as usual for r × 1 and c × 1 vectors a and s with components aᵢ and sⱼ, we write ∂a/∂sᵀ for the r × c array whose (i, j) element is ∂aᵢ/∂sⱼ. The Jacobian J(t, s; ξ) reduces to the Jacobian term in (9.33) when s is not present. Thus the saddlepoint approximation to the density of (T*, S*) at (t, s) is

\[ J(t,s;\hat\xi)\,(2\pi)^{-q/2}\,|K''(\hat\xi;t,s)|^{-1/2}\exp K(\hat\xi;t,s), \tag{9.42} \]
where ξ̂ = ξ̂(t, s) is the solution to the q × 1 system of equations ∂K/∂ξ = 0. Let us write A(t, s) = −K{ξ̂(t, s); t, s}.

We require the marginal density and distribution functions of T* at t. In principle they can be obtained by integration of (9.42) numerically with respect to s, but this is time-consuming when s is a vector. An alternative approach is analytical approximation using Laplace's method, which replaces the most important part of the integrand (the rightmost term in (9.42)) by a normal integral, suitably centred and scaled. Provided that the matrix ∂²A(t, s)/∂s∂sᵀ is positive definite, the resulting approximate marginal density of T* at t is

\[ J(t,\hat s;\hat\xi)\,(2\pi)^{-1/2}\,|K''(\hat\xi;t,\hat s)|^{-1/2}\left|\frac{\partial^2 A(t,\hat s)}{\partial s\,\partial s^T}\right|^{-1/2}\exp K(\hat\xi;t,\hat s), \tag{9.43} \]

where ξ̂ = ξ̂(t) and ŝ = ŝ(t) are functions of t that solve simultaneously the
q × 1 and (q − 1) × 1 systems of equations

\[ \frac{\partial K(\xi;t,s)}{\partial\xi} = n\sum_{j=1}^n \tilde p_j(t,s;\xi)\,a_j(t,s) = 0, \qquad
\frac{\partial K(\xi;t,s)}{\partial s} = n\sum_{j=1}^n \tilde p_j(t,s;\xi)\,\frac{\partial a_j(t,s)^T}{\partial s}\,\xi = 0. \tag{9.44} \]

These can be solved using packaged routines, with starting values given by noting that when t equals its sample value t₀, say, s equals its sample value and ξ = 0. The second derivatives of A needed to calculate (9.43) may be expressed as

\[ \frac{\partial^2 A(t,s)}{\partial s\,\partial s^T} = \frac{\partial^2 K(\xi;t,s)}{\partial s\,\partial\xi^T}\left\{\frac{\partial^2 K(\xi;t,s)}{\partial\xi\,\partial\xi^T}\right\}^{-1}\frac{\partial^2 K(\xi;t,s)}{\partial\xi\,\partial s^T} - \frac{\partial^2 K(\xi;t,s)}{\partial s\,\partial s^T}, \tag{9.45} \]

where at the solutions to (9.44) the matrices in (9.45) are given by

\[ \frac{\partial^2 K(\xi;t,s)}{\partial\xi\,\partial\xi^T} = n\sum_{j=1}^n \tilde p_j(t,s;\xi)\,a_j(t,s)a_j(t,s)^T, \tag{9.46} \]

\[ \frac{\partial^2 K(\xi;t,s)}{\partial\xi\,\partial s_c} = n\sum_{j=1}^n \tilde p_j(t,s;\xi)\left\{\frac{\partial a_j(t,s)}{\partial s_c} + \left(\xi^T\frac{\partial a_j(t,s)}{\partial s_c}\right)a_j(t,s)\right\}, \tag{9.47} \]

\[ \frac{\partial^2 K(\xi;t,s)}{\partial s_c\,\partial s_d} = n\sum_{j=1}^n \tilde p_j(t,s;\xi)\left\{\xi^T\frac{\partial^2 a_j(t,s)}{\partial s_c\,\partial s_d} + \left(\xi^T\frac{\partial a_j(t,s)}{\partial s_c}\right)\left(\xi^T\frac{\partial a_j(t,s)}{\partial s_d}\right)\right\}, \tag{9.48} \]

with s_c and s_d the cth and dth components of s. The marginal CDF approximation for T* at t is (9.28), with

\[ w = \mathrm{sign}(t - t_0)\{2A(t,\hat s)\}^{1/2}, \tag{9.49} \]

\[ v = \frac{dA(t,\hat s)}{dt}\,\frac{|K''(\hat\xi;t,\hat s)|^{1/2}\,\bigl|\partial^2 A(t,\hat s)/\partial s\,\partial s^T\bigr|^{1/2}}{J(t,\hat s;\hat\xi)}, \tag{9.50} \]

evaluated at s = ŝ, ξ = ξ̂; the only additional quantity needed here is

\[ \frac{dA(t,s)}{dt} = -n\sum_{j=1}^n \tilde p_j(t,s;\xi)\,\xi^T\frac{\partial a_j(t,s)}{\partial t}. \tag{9.51} \]

Approximate quantiles of T* can be obtained in the way described just before Example 9.13.

The expressions above look forbidding, but their implementation is relatively straightforward. The key point to note is that they depend only on the quantities aⱼ(t, s), their first derivatives with respect to t, and their first two derivatives with respect to s. Once these have been programmed, they can be input to a generic routine to perform the saddlepoint approximations. Difficulties that sometimes arise with numerical overflow due to large exponents can usually be circumvented by rescaling data to zero mean and unit variance, which has no
effect on location- and scale-invariant quantities such as studentized statistics. Remember, however, our initial comments in Section 9.1: the investment of time and effort needed to program these approximations is unlikely to be worthwhile unless they are to be used repeatedly.

Example 9.19 (Maize data) To illustrate these ideas we consider the bootstrap variance and studentized average for the maize data. Both these statistics are location-invariant, so without loss of generality we replace yⱼ with yⱼ − ȳ and henceforth assume that ȳ = 0. With this simplification the statistics of interest are

\[ V^* = n^{-1}\sum (Y_j^* - \bar Y^*)^2, \qquad Z^* = (n-1)^{1/2}\,\bar Y^*/V^{*1/2}, \]

where Ȳ* = n⁻¹ΣYⱼ*. A little algebra shows that

\[ n^{-1}\sum Y_j^{*2} = V^*\{1 + Z^{*2}/(n-1)\}, \qquad n^{-1}\sum Y_j^* = Z^* V^{*1/2}(n-1)^{-1/2}, \]

so to apply the integration approach we take pⱼ = n⁻¹ and

\[ a_j(z,v) = \begin{pmatrix} y_j - z\{v/(n-1)\}^{1/2} \\ y_j^2 - v\{1 + z^2/(n-1)\} \end{pmatrix}, \]

from which the 2 × 1 matrices of derivatives

\[ \frac{\partial a_j(z,v)}{\partial z}, \quad \frac{\partial a_j(z,v)}{\partial v}, \quad \frac{\partial^2 a_j(z,v)}{\partial z^2}, \quad \frac{\partial^2 a_j(z,v)}{\partial v^2} \]
needed to calculate (9.43)-(9.51) are readily obtained.

To find the marginal distribution of Z*, we apply (9.43)-(9.51) with t = z and s = v. For a given value of z, the three equations in (9.44) are easily solved numerically. The upper panels of Figure 9.11 compare the saddlepoint distribution and density approximations for Z* with a large simulation. The analytical quantiles are very close to the simulated ones, and although the saddlepoint density seems to have integral greater than one it captures well the skewness of the distribution. For V* we take t = v and s = z, but the lower left panel of Figure 9.11 shows that the resulting PDF approximation fails to capture the bimodality of the density. This arises because V* is deflated for resamples in which neither of the two smallest observations (which are somewhat separated from the rest) appear.

The contours of −A(z, v) in the lower right panel reveal a potential problem with these methods. For z = −3.5, the Laplace approximation used to obtain (9.43) amounts to replacing the integral of exp{−A(z, v)} along the dashed vertical line by a normal approximation centred at A and with precision given by the second derivative of A(z, v) at A along the line. But A(−3.5, v) is bimodal for v > 0, and the Laplace approximation does not account for the second peak at B. As it turns out, this does not matter because the peak at B is so much
lower than at A that it adds little to the integral, but clearly (9.43) would be catastrophically bad if the peaks at A and B were comparable. This behaviour occurs because there is no guarantee that A(z, v) is a convex function of v and z. If the difficulty is thought to have arisen, numerical integration of (9.42) can be used to find the marginal density of Z*, but the problem is not easily diagnosed except by checking that (9.45) is positive definite at any solution to (9.44) and by checking that different initial values of ξ and s lead to the same solution for a given value of t. This may increase the computational burden to an extent that direct simulation is more efficient. Fortunately this difficulty is much rarer in larger samples.

Figure 9.11 Saddlepoint approximations for the bootstrap variance V* and studentized average Z* for the maize data. Top left: approximations to quantiles of Z* by integration saddlepoint (solid) and simulation using 50000 bootstrap samples (every 20th order statistic is shown). Top right: density approximations for Z* by integration saddlepoint (heavy solid), approximate cumulant-generating function (solid), and simulation using 50000 bootstrap samples. Bottom left: corresponding approximations for V*. Bottom right: contours of −A(z, v), with local maxima along the dashed line z = −3.5 at A and at B.

The quantities needed for the approximate cumulant-generating function
approach to obtaining the distribution of n^{1/2}(n−1)^{−1/2}Z* were given in Example 9.18. The approximate cumulants for Z* are κ_{C,1} = 0.13, κ_{C,2} = 1.08, κ_{C,3} = 0.51 and κ_{C,4} = 0.50, with κ_{C,2} = 0.89 and κ_{C,4} = −0.28 when the terms involving the c_{ijk} are dropped. With or without these terms, the cumulants are some way from the values 0.17, 1.34, 1.05, and 1.55 estimated from 50000 simulations. The upper right panel of Figure 9.11 shows the PDF approximation based on the modified cumulant-generating function; in this case K_{C,b}(ξ) is convex. The modified PDF matches the centre of the distribution more closely than the integration PDF, but is poor in the upper tail.

For V*, we have

\[ l_i = (y_i - \bar y)^2 - t, \qquad q_{ij} = -2(y_i - \bar y)(y_j - \bar y), \qquad c_{ijk} = 0, \]

so the approximate cumulants are κ_{C,1} = 1241, with κ_{C,2}/κ_{C,1} = 0.17, κ_{C,3}/κ_{C,1} = 0.013 and κ_{C,4}/κ_{C,1} = −0.0015; the corresponding simulated values are 1243, 0.18, 0.018, and 0.0010. Neither saddlepoint approximation captures the bimodality of the simulations, though the integration method is the better of the two. In this case b = ½ for the approximate cumulant-generating function method, and the resulting density is clearly too close to normal.
■
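The integration formulas are concrete enough to implement. The sketch below is a numerical illustration for the studentized average of Example 9.19, under stated assumptions: the aⱼ(z, v) follow from the identities above, the inner equation in (9.44) is solved by damped Newton iteration, the outer equation ∂A/∂v = 0 by a scan plus bisection, and ∂²A/∂v² by finite differences; the grid limits and tolerances are ad hoc choices, not part of the method.

```python
import numpy as np
from math import erf, sqrt, pi, exp

def _Phi(x): return 0.5 * (1.0 + erf(x / sqrt(2.0)))
def _phi(x): return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def _inner(y, z, v):
    # Inner saddlepoint: solve sum_j ptilde_j a_j(z, v) = 0 for the 2-vector xi.
    n = len(y)
    c = 1.0 / sqrt(n - 1.0)
    a = np.column_stack([y - c * z * sqrt(v),
                         y * y - v * (1.0 + z * z / (n - 1.0))])
    xi = np.zeros(2)
    for _ in range(300):
        e = np.exp(a @ xi)
        p = e / e.sum()
        g = p @ a                                    # K'(xi)/n
        H = (a * p[:, None]).T @ a - np.outer(g, g)  # K''(xi)/n
        step = np.linalg.solve(H, g)
        big = np.max(np.abs(step))
        if big > 1.0:
            step /= big
        xi -= step
        if np.max(np.abs(step)) < 1e-11:
            break
    e = np.exp(a @ xi)
    p = e / e.sum()
    K = n * np.log(e.mean())
    return xi, p, K, a

def _dAdv(y, z, v):
    n = len(y)
    c = 1.0 / sqrt(n - 1.0)
    xi, _, _, _ = _inner(y, z, v)
    dav = np.array([-c * z / (2.0 * sqrt(v)), -(1.0 + z * z / (n - 1.0))])
    return -n * (xi @ dav)            # dA/dv = -dK/dv at the inner solution

def studentized_cdf(y, z):
    """Marginal saddlepoint approximation to Pr*(Z* <= z), where
    Z* = (n-1)^{1/2} Ybar*/V*^{1/2} and y has been centred at zero."""
    y = np.asarray(y, float); y = y - y.mean()
    n = len(y)
    c = 1.0 / sqrt(n - 1.0)
    v0 = np.mean(y * y)
    grid = v0 * np.linspace(0.4, 1.6, 30)            # scan for sign change
    vals = np.array([_dAdv(y, z, v) for v in grid])
    i = int(np.argmax(vals[:-1] * vals[1:] <= 0.0))
    lo, hi, flo = grid[i], grid[i + 1], vals[i]
    for _ in range(50):                               # bisect dA/dv = 0
        mid = 0.5 * (lo + hi)
        fm = _dAdv(y, z, mid)
        if flo * fm <= 0.0: hi = mid
        else: lo, flo = mid, fm
    vhat = 0.5 * (lo + hi)
    xi, p, K, a = _inner(y, z, vhat)
    A = -K
    daz = np.array([-c * sqrt(vhat), -2.0 * vhat * z / (n - 1.0)])
    dav = np.array([-c * z / (2.0 * sqrt(vhat)), -(1.0 + z * z / (n - 1.0))])
    dAdz = -n * (xi @ daz)
    J = abs(np.linalg.det(n * np.column_stack([daz, dav])))
    detKpp = np.linalg.det(n * (a * p[:, None]).T @ a)
    h = 1e-4 * v0
    Avv = (_dAdv(y, z, vhat + h) - _dAdv(y, z, vhat - h)) / (2.0 * h)
    w = np.sign(z) * sqrt(2.0 * A)
    vLR = dAdz * sqrt(detKpp * Avv) / J
    return _Phi(w) + _phi(w) * (1.0 / w - 1.0 / vLR)
```

Here all ∂aⱼ/∂z and ∂aⱼ/∂v happen to be the same for every j, which simplifies the Jacobian and the outer equation considerably; in general they vary with j.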
Example 9.20 (Robust M-estimate) For a second example of marginal approximation, we suppose that θ̂ and σ̂ are M-estimates found from a random sample y₁, ..., yₙ by simultaneous solution of the equations

\[ \sum_{j=1}^n \psi\!\left(\frac{y_j - \hat\theta}{\hat\sigma}\right) = 0, \qquad \sum_{j=1}^n \chi\!\left(\frac{y_j - \hat\theta}{\hat\sigma}\right) = n\gamma. \]

The choice ψ(ε) = ε, χ(ε) = ε², γ = 1 gives the non-robust estimates θ̂ = ȳ and σ̂² = n⁻¹Σ(yⱼ − ȳ)².
Let T*_(1) ≤ ··· ≤ T*_(R) denote the order statistics of the T*_r, set

\[ S_m = \frac{1}{R+1}\sum_{r=1}^m \frac{g(T^*_{(r)})}{h(T^*_{(r)})}, \qquad m = 1, \ldots, R, \]

and let M be the random index determined by S_M ≤ p < S_{M+1}. Show that S_M → p as R → ∞, and hence justify the estimate t*_(M) given at (9.19).
(Section 9.4.1; Johns, 1988)

10
Suppose that T has a linear approximation T*_L, and let p̃ be the distribution on y₁, ..., yₙ with probabilities p̃ⱼ ∝ exp{λlⱼ/(nv_L^{1/2})}, where v_L = n⁻²Σlⱼ². Find the moment-generating function of T*_L under sampling from p̃, and hence show that in this case T*_L is approximately N(t + λv_L^{1/2}, v_L). You may assume that T*_L is approximately N(t, v_L) when λ = 0.
(Section 9.4.1; Johns, 1988; Hinkley and Shi, 1989)
11

The linear approximation t*_L for a single-sample resample statistic is typically accurate for t* near t, but may not work well near quantiles of interest. For an approximation that is accurate near the α quantile of T*, consider expanding t* = t(p̂*) about p_α = (p_{1α}, ..., p_{nα}) rather than about (1/n, ..., 1/n).
(a) Show that if p_{jα} ∝ exp(n⁻¹v_L^{−1/2}z_α lⱼ), then t(p_α) will be close to the α quantile of T* for large n.
(b) Define

\[ l_{j,\alpha} = \left.\frac{d}{d\varepsilon}\,t\{(1-\varepsilon)p_\alpha + \varepsilon 1_j\}\right|_{\varepsilon=0}. \]

Show that

\[ t^* \doteq t^*_{L,\alpha} = t(p_\alpha) + n^{-1}\sum_{j=1}^n f_j^* l_{j,\alpha}. \]

(c) For the ratio estimates in Example 2.22, compare numerically t*_L, t*_{L,α}, and the quadratic approximation

\[ t^*_Q = t + n^{-1}\sum_{j=1}^n f_j^* l_j + \tfrac12 n^{-2}\sum_{j=1}^n\sum_{k=1}^n f_j^* f_k^* q_{jk} \]

with t*.
(Sections 2.7.4, 3.10.2, 9.4.1; Hesterberg, 1995a)

12
(a) The importance sampling ratio estimator of μ can be written as

\[ \hat\mu_{H,\mathrm{rat}} = \frac{\sum m(y_r)w(y_r)}{\sum w(y_r)} = \frac{\mu + R^{-1/2}\varepsilon_1}{1 + R^{-1/2}\varepsilon_0}, \]

where ε₁ = R^{−1/2}Σ{m(y_r)w(y_r) − μ} and ε₀ = R^{−1/2}Σ{w(y_r) − 1}. Show that this implies that

\[ \mathrm{var}(\hat\mu_{H,\mathrm{rat}}) \doteq R^{-1}\,\mathrm{var}\{m(Y)w(Y) - \mu w(Y)\}. \]
1ⱼ is a vector with one in the jth position and zeroes elsewhere.
9.7 ■Problems
(b) The variance of the importance sampling regression estimator is approximately

\[ \mathrm{var}(\hat\mu_{H,\mathrm{reg}}) = R^{-1}\,\mathrm{var}\{m(Y)w(Y) - a\,w(Y)\}, \tag{9.54} \]

where a = cov{m(Y)w(Y), w(Y)}/var{w(Y)}. Show that this choice of a achieves minimum variance among estimators for which the variance has form (9.54), and deduce that when R is large the regression estimator will always have variance no larger than the raw and ratio estimators.
(c) As an artificial illustration of (b), suppose that for θ > 0 and some non-negative integer k we wish to estimate

\[ \mu = \int m(y)g(y)\,dy = \int_0^\infty y^k\,\theta e^{-\theta y}\,dy \]

by simulating from density h(y) = βe^{−βy}, y > 0, β > 0. Give w(y) and show that E{m(Y)w(Y)} = μ for any β and θ, but that var(μ̂_{H,rat}) is only finite when 0 < β < 2θ. Calculate var{m(Y)w(Y)}, cov{m(Y)w(Y), w(Y)}, and var{w(Y)}. Plot the asymptotic efficiencies var(μ̂_{H,raw})/var(μ̂_{H,rat}) and var(μ̂_{H,raw})/var(μ̂_{H,reg}) as functions of β for θ = 2 and k = 0, 1, 2, 3. Discuss your findings.
(Section 9.4.2; Hesterberg, 1995b)

13
Suppose that an application of importance resampling to a statistic T* has resulted in estimates t₁* ≤ ··· ≤ t_R* and associated weights w₁*, ..., w_R*, and that the importance reweighting regression estimate of the CDF of T* is required. Let A be the R × R matrix whose (r, s) element is w_r* I(t_r* ≤ t_s*) and B be the R × 2 matrix whose rth row is (1, w_r*). Show that the regression estimate of the CDF at t₁*, ..., t_R* equals (1, 1)(BᵀB)⁻¹BᵀA.
(Section 9.4.2)
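Returning to Problem 12(c): the raw, ratio, and regression estimators are easy to compare by simulation. A sketch with the illustrative choices m(y) = y (k = 1), target rate θ = 2, and proposal rate β = 1.5, so that μ = 1/θ = 0.5 and the finiteness condition β < 2θ holds; the constants are assumptions for the demonstration only.

```python
import numpy as np

def is_estimates(m, w):
    """Raw, ratio, and regression importance sampling estimates of
    mu = E_g{m(Y)} from draws y_r ~ h with weights w_r = g(y_r)/h(y_r)."""
    mw = m * w
    raw = mw.mean()
    rat = mw.sum() / w.sum()
    a = np.cov(mw, w, ddof=1)[0, 1] / w.var(ddof=1)
    reg = raw - a * (w.mean() - 1.0)     # regression-adjusted estimate
    return raw, rat, reg

rng = np.random.default_rng(3)
theta, beta, R = 2.0, 1.5, 50000
y = rng.exponential(1.0 / beta, R)                       # draws from h
w = (theta * np.exp(-theta * y)) / (beta * np.exp(-beta * y))
raw, rat, reg = is_estimates(y, w)
```

All three estimators are consistent for μ here; the point of part (b) is that for large R the regression estimator's variance is never larger than the other two.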
14

(a) Let I_k = (I_{k1}, ..., I_{kn}), k = 1, ..., nR, denote independent identically distributed multinomial random variables with denominator 1 and probability vector p = (p₁, ..., pₙ). Show that S_{nR} = Σ_{k=1}^{nR} I_k has a multinomial distribution with denominator nR and probability vector p, and that the conditional distribution of I_{nR} given that S_{nR} = q is multinomial with denominator 1 and mean vector (nR)⁻¹q, where q = (R₁, ..., Rₙ) is a fixed vector. Show also that

\[ \Pr(I_1 = i_1, \ldots, I_{nR} = i_{nR} \mid S_{nR} = q) = g(i_{nR} \mid S_{nR} = q)\prod_{j=1}^{nR-1} g(i_{nR-j} \mid S_{nR-j} = q - i_{nR-j+1} - \cdots - i_{nR}), \]

where g(·) is the probability mass function of its argument.
(b) Use (a) to justify the following algorithm:

Algorithm 9.4 (Balanced importance resampling)
Initialize by setting values of R₁, ..., Rₙ such that Rⱼ ≈ nRpⱼ and ΣRⱼ = nR.
For m = nR, ..., 1:
(a) Generate u from the U(0, 1) distribution.
(b) Find the j such that Σ_{i=1}^{j−1} Rᵢ < um ≤ Σ_{i=1}^{j} Rᵢ.
(c) Set I_m = j and decrease Rⱼ to Rⱼ − 1.
Return the sets {I₁, ..., Iₙ}, {I_{n+1}, ..., I_{2n}}, ..., {I_{n(R−1)+1}, ..., I_{nR}} as the indices of the R bootstrap samples of size n. •

(Section 9.4.3; Booth, Hall and Wood, 1993)
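Algorithm 9.4 is short in code. In the sketch below, the largest-remainder rounding of the Rⱼ and the use of u ∈ (0, 1] (so that an index with zero remaining count can never be selected) are implementation choices not fixed by the algorithm statement:

```python
import numpy as np

def balanced_importance_resample(p, n, R, rng):
    """Algorithm 9.4: generate R bootstrap samples of size n whose pooled
    index counts are fixed in advance at R_j ~ nR p_j."""
    pa = np.asarray(p, float)
    nR = n * R
    Rj = np.floor(nR * pa).astype(int)          # initial counts
    short = nR - Rj.sum()
    if short > 0:                               # largest-remainder rounding
        Rj[np.argsort(-(nR * pa - Rj))[:short]] += 1
    counts = Rj.copy()
    idx = np.empty(nR, dtype=int)
    for m in range(nR, 0, -1):
        u = 1.0 - rng.random()                  # u in (0, 1]
        # find j with sum_{i<j} R_i < u*m <= sum_{i<=j} R_i
        j = int(np.searchsorted(np.cumsum(counts), u * m, side="left"))
        idx[m - 1] = j
        counts[j] -= 1
    return idx.reshape(R, n), Rj
```

By construction every index j appears exactly Rⱼ times in the pooled output, which is the balance property the conditional-multinomial argument in part (a) justifies.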
15
For the bootstrap recycling estimate of bias described in Example 9.12, consider the case T = Ȳ with the parametric model Y ~ N(θ, 1). Show that if H is taken to be the N(ȳ, a) distribution, then the simulation variance of the recycling estimate of C is approximately

\[ \frac{1}{n}\left(\frac{a^2}{2a-1}\right)^{(n-1)/2}\left\{\frac{n(n-1)}{(2a-3)^{3/2}RN} + \frac{a^2}{8(a-1)^{5/2}N}\right\}, \]

provided a > 3/2. Compare this to the simulation variance when ordinary double bootstrap methods are used. What are the implications for nonparametric double bootstrap calculations? Investigate the use of defensive mixtures for H in this problem.
(Section 9.4.4; Ventura, 1997)

16
Consider exponential tilting for a statistic whose linear approximation is

\[ T_L^* = t + \sum_{s=1}^S n_s^{-1}\sum_{i=1}^{n_s} f_{si}^* l_{si}, \]

where the (f*_{s1}, ..., f*_{sn_s}), s = 1, ..., S, are independent sets of multinomial frequencies.
(a) Show that the cumulant-generating function of T*_L is

\[ K(\xi) = \xi t + \sum_{s=1}^S n_s\log\left\{n_s^{-1}\sum_{i=1}^{n_s}\exp(\xi l_{si}/n_s)\right\}. \]

Hence show that choosing ξ to give K'(ξ) = t₀ is equivalent to exponential tilting of T*_L to have mean t₀, and verify the tilting calculations in Example 9.8.
(b) Explain how to modify (9.26) and (9.28) to give the approximate PDF and CDF of T*_L.
(c) How can stratification be accommodated in the conditional approximations of Section 9.5.2?
(Section 9.5)

17
In a matched pair design, two treatments are allocated at random to each of n pairs of experimental units, with differences dⱼ and average difference d̄ = n⁻¹Σdⱼ. If there is no real effect, all 2ⁿ sequences ±d₁, ..., ±dₙ are equally likely, and so are the values D* = n⁻¹Σδⱼdⱼ, where the δⱼ take values ±1 with probability ½. The one-sided significance level for testing the null hypothesis of no effect is Pr*(D* ≥ d̄).
(a) Show that the cumulant-generating function of D* is

\[ K(\xi) = \sum_{j=1}^n \log\cosh(\xi d_j/n), \]

and find the saddlepoint equation and the quantities needed for saddlepoint approximation to the observed significance level. Explain how this may be fitted into the framework of a conditional saddlepoint approximation.
(b) See Practical 9.5.
(Section 9.5.1; Daniels, 1958; Davison and Hinkley, 1988)

18
For the testing problem of Problem 4.9, use saddlepoint methods to develop an approximation to the exact bootstrap P-value based on the exponential tilted EDF. Apply this to the city population data with n = 10.
(Section 9.5.1)
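The saddlepoint test of Problem 17 is easy to check against complete enumeration of the 2ⁿ sign vectors when n is small. A sketch, with bisection used for the saddlepoint equation since K' is monotone increasing; the data are illustrative and must contain both signs so that d̄ lies strictly inside the support of K':

```python
import numpy as np
from math import erf, sqrt, pi, exp

def pair_pvalue_saddlepoint(d):
    """Saddlepoint approximation to the one-sided significance level
    Pr*(D* >= dbar), where D* = n^{-1} sum_j delta_j d_j and the delta_j
    are independent +/-1 signs, so K(xi) = sum_j log cosh(xi d_j / n)."""
    d = np.asarray(d, float)
    n = len(d)
    dbar = d.mean()
    Kp = lambda xi: float(np.sum((d / n) * np.tanh(xi * d / n)))
    hi = 1.0
    while Kp(hi) < dbar and hi < 1e6:    # bracket the saddlepoint
        hi *= 2.0
    lo = 0.0
    for _ in range(100):                 # bisect K'(xi) = dbar
        mid = 0.5 * (lo + hi)
        if Kp(mid) < dbar:
            lo = mid
        else:
            hi = mid
    xi = 0.5 * (lo + hi)
    K = float(np.sum(np.log(np.cosh(xi * d / n))))
    Kpp = float(np.sum((d / n) ** 2 / np.cosh(xi * d / n) ** 2))
    w = np.sign(xi) * sqrt(2.0 * (xi * dbar - K))
    v = xi * sqrt(Kpp)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    phi = lambda x: exp(-0.5 * x * x) / sqrt(2.0 * pi)
    return 1.0 - Phi(w) - phi(w) * (1.0 / w - 1.0 / v)
```

Because D* is supported on at most 2ⁿ points, the continuous saddlepoint answer and the exact enumeration differ by roughly the atom sizes, which is small even for modest n.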
19
(a) If W₁, ..., Wₙ are independent Poisson variables with means μ₁, ..., μₙ, show that their joint distribution conditional on ΣWⱼ = m is multinomial with probability vector π = (μ₁, ..., μₙ)/Σμⱼ and denominator m. Hence justify the first saddlepoint approximation in Example 9.16.
(b) Suppose that T* is the solution to an estimating equation of form (9.32), but that fⱼ* = 0 or 1 and Σfⱼ* = m < n; T* is a delete-m jackknife value of the original statistic. Explain how to obtain a saddlepoint approximation to the PDF of T*. How can this PDF be used to estimate var*(T*)? Do you think the estimate will be good when m = n − 1?
(Section 9.5.2; Booth and Butler, 1990)
20
(a) Show that the bootstrap correlation coefficient t* based on data pairs (xⱼ, zⱼ), j = 1, ..., n, may be expressed as the solution to the estimating equation (9.40) with

\[ a_j(t,s) = \begin{pmatrix}
x_j - s_1 \\
z_j - s_2 \\
(x_j - s_1)^2 - s_3 \\
(z_j - s_2)^2 - s_4 \\
(x_j - s_1)(z_j - s_2) - t(s_3 s_4)^{1/2}
\end{pmatrix}, \]

where sᵀ = (s₁, s₂, s₃, s₄), and show that the Jacobian J(t, s; ξ) = n⁵(s₃s₄)^{1/2}. Obtain the quantities needed for the marginal saddlepoint approximation (9.43) to the density of T*.
(b) What further quantities would be needed for saddlepoint approximation to the marginal density of the studentized form of T*?
(Section 9.5.3; Davison, Hinkley and Worton, 1995; DiCiccio, Martin and Young, 1994)

21
Let T₁* be a statistic calculated from a bootstrap sample in which yⱼ appears with frequency fⱼ* (j = 1, ..., n), and suppose that the linear approximation to T₁* is T*_L = t + n⁻¹Σfⱼ*lⱼ, where l₁ ≤ l₂ ≤ ··· ≤ lₙ. The statistic T₂* antithetic to T₁* is calculated from the bootstrap sample in which yⱼ appears with frequency f*_{n+1−j}.
(a) Show that if T₁* and T₂* are antithetic,

\[ \mathrm{var}\{\tfrac12(T_1^* + T_2^*)\} = \frac{1}{2n}\left(n^{-1}\sum_{j=1}^n l_j^2 + n^{-1}\sum_{j=1}^n l_j l_{n+1-j}\right), \]

and that this is roughly x²/(2n) as n → ∞, where

\[ x^2 = \int_0^1 \eta_p(\eta_p + \eta_{1-p})\,dp \]

and η_p is the pth quantile of the distribution of L_t(Y; F).
(b) Show that if T₂* is independent of T₁*, the corresponding variance is roughly (2n)⁻¹∫₀¹η_p² dp, and deduce that when T is the sample average and F is the exponential distribution the large-sample performance gain of antithetic resampling is 6/(12 − π²) = 2.8.
(c) What happens if F is symmetric? Explain qualitatively why.
(Hall, 1989a)
22
Suppose that resampling from a sample of size n is used to estimate a quantity z(n) with expansion

\[ z(n) = z_0 + n^{-a}z_1 + n^{-2a}z_2 + \cdots, \tag{9.55} \]

where z₀, z₁, z₂ are unknown but a is known; often a = ½. Suppose that we resample from the EDF F̂, but with sample sizes n₀, n₁, where 1 < n₀ < n₁ < n, instead of the usual n, giving simulation estimates z*(n₀), z*(n₁) of z(n₀), z(n₁).
(a) Show that z*(n) can be estimated by

\[ \hat z^*(n) = z^*(n_0) + \frac{n^{-a} - n_0^{-a}}{n_0^{-a} - n_1^{-a}}\{z^*(n_0) - z^*(n_1)\}. \]

(b) Now suppose that an estimate of z*(nⱼ) based on Rⱼ simulations has variance approximately b/Rⱼ and that the computational effort required to obtain it is cnⱼRⱼ, for some constants b and c. Given n₀ and n₁, discuss the choice of R₀ and R₁ to minimize the variance of ẑ*(n) for a given total computational effort.
(c) Outline how knowledge of the limit z₀ in (9.55) can be used to improve ẑ*(n). How would you proceed if a were unknown? Do you think it wise to extrapolate from just two values n₀ and n₁?
(Bickel and Yahav, 1988)
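Part (a) amounts to linear extrapolation in the variable n^{−a}. A minimal sketch with a = ½; the sample sizes and the synthetic z(n) are illustrative:

```python
def extrapolate(z_n0, z_n1, n0, n1, n, a=0.5):
    """Two-point extrapolation of z(n) = z0 + z1 n^{-a} + ... from
    simulation estimates at smaller sample sizes n0 < n1 (part (a))."""
    frac = (n**-a - n0**-a) / (n0**-a - n1**-a)
    return z_n0 + frac * (z_n0 - z_n1)

# If z(n) is exactly z0 + z1 n^{-1/2}, the extrapolant reproduces z(n).
z = lambda n: 3.0 + 2.0 * n**-0.5
print(extrapolate(z(20), z(40), 20, 40, 200))   # close to z(200)
```

With only two design points the n^{−2a} and higher terms of (9.55) are absorbed into the error, which is the concern raised in part (c).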
9.8 Practicals

1

For ordinary bootstrap sampling, balanced resampling, and balanced resampling within strata:
v_L^{1/2}; as usual v_L is the nonparametric delta method variance estimate for t. The distribution p*(θ⁰) will have parameter value not θ⁰ but θ = t{p*(θ⁰)}. With the understanding that θ is defined in this way, we shall for simplicity write p*(θ) rather than p*(θ⁰). For a fixed collection of R first-level samples and bandwidth ε > 0, the probability vectors p*(θ) change gradually as θ varies over its range of interest.

Second-level bootstrap sampling now uses the vectors p*(θ) as sampling distributions on the data values, in place of the p̂*'s. The second-level sample values t** are then used in (10.11) to obtain L̂(θ). Repeating this calculation for, say, 100 values of θ in the range t ± 4v^{1/2}, followed by smooth interpolation, should give a good result. Experience suggests that the value ε = v^{1/2} is safe to use in (10.12) if the t* are roughly equally spaced, which can be arranged by weighted first-level sampling, as outlined in Problem 10.6.

A way to reduce further the amount of calculation is to use recycling, as described in Section 9.4.4. Rather than generate second-level samples from each p*(θ) of interest, one set of M samples can be generated using distribution p̃ on the data values, and the associated values t₁**, ..., t_M** calculated. Then, following the general re-weighting method (9.24), the likelihood values are calculated as

\[ \hat L(\theta) = \frac{1}{M\varepsilon}\sum_{m=1}^M w\!\left(\frac{t - t_m^{**}}{\varepsilon}\right)\prod_{j=1}^n\left\{\frac{p_j(\theta)}{\tilde p_j}\right\}^{f_{mj}^{**}}, \tag{10.13} \]
where f**_{mj} is the frequency of the jth case in the mth second-level bootstrap sample. One simple choice for p̃ is the EDF p̂. In special cases it will be possible to replace the second level of sampling by use of the saddlepoint approximation method of Section 9.5. This would give an accurate and smooth approximation to the density of T** for sampling from each p*(θ).
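The recycling scheme can be sketched as follows for the sample mean. The exponential tilt used to construct p*(θ), the Gaussian kernel, and the bandwidth are illustrative assumptions rather than the text's exact prescriptions:

```python
import numpy as np

def tilt_probs(y, theta):
    """Exponentially tilted probabilities p_j(theta) on the data values
    with mean sum p_j y_j = theta (a simple choice of p*(theta) when t
    is the sample mean); Newton iteration on the tilt parameter."""
    lam = 0.0
    for _ in range(100):
        w = np.exp(lam * y); w /= w.sum()
        g = w @ y - theta
        h = w @ y**2 - (w @ y) ** 2
        lam -= g / h
        if abs(g) < 1e-10:
            break
    p = np.exp(lam * y)
    return p / p.sum()

def recycled_likelihood(y, thetas, M=3000, rng=None):
    """Bootstrap likelihood for the mean by recycling: one set of M
    second-level samples from the EDF is re-weighted for every theta."""
    rng = rng if rng is not None else np.random.default_rng(1)
    y = np.asarray(y, float)
    n = len(y); t = y.mean()
    eps = y.std() / np.sqrt(n)                     # bandwidth, roughly v^{1/2}
    f = rng.multinomial(n, np.ones(n) / n, size=M) # second-level frequencies
    tstar = f @ y / n                              # resample means
    kern = np.exp(-0.5 * ((t - tstar) / eps) ** 2) / np.sqrt(2 * np.pi)
    out = []
    for th in thetas:
        lr = np.exp(f @ np.log(n * tilt_probs(y, th)))  # likelihood ratios
        out.append(float(np.mean(kern * lr)) / eps)
    return np.array(out)
```

The product of likelihood ratios is what lets a single simulation serve every θ; its variance grows as θ moves away from t, which is why heavy tilts eventually need fresh samples or defensive mixtures.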
We apply the ideas outlined above to
Figure 10.4 Bootstrap likelihood for mean of air-conditioning data. Left panel: bootstrap likelihood values obtained by saddlepoint approximation for 200 random samples, with smooth curve fitted to values obtained by smoothing frequencies from 1000 bootstrap samples. Right panel: gamma profile log likelihood (solid) and bootstrap log likelihood (dots).
the data from Example 10.1. The solid points in the left panel of Figure 10.4 are bootstrap likelihood values for the mean θ for 200 resamples, obtained by saddlepoint approximation. This replaces the kernel density estimate (10.11) and avoids the second level of resampling, but does not remove the variation in estimated likelihood values for different bootstrap samples with similar values of t*. A locally quadratic nonparametric smoother (on the log likelihood scale) could be used to produce a smooth likelihood curve from the values of L(t*), but another approach is better, as we now describe. The solid line in the left panel of Figure 10.4 interpolates values obtained by applying the saddlepoint approximation using probabilities (10.12) at a few values of θ⁰. Here the values of t* are generated at random, and we have taken ε = 0.5v_L^{1/2}; the results depend little on the value of ε. The log bootstrap likelihood is very close to the log empirical likelihood, with 95% confidence interval (43.8, 92.1). ■

Bootstrap likelihood is based purely on resampling and smoothing, which is a potential advantage over empirical likelihood. However, in its simplest form it is more computer-intensive. This precludes bootstrapping to estimate quantiles of bootstrap likelihood ratio statistics, which would involve three levels of nested resampling.
10.4 Likelihood Based on Confidence Sets

In certain circumstances it is possible to view confidence intervals as being approximately posterior probability sets, in the Bayesian sense. This encourages the idea of defining a confidence distribution for θ from the set of confidence
10 ■Semiparametric Likelihood Inference
limits, and then taking the PDF of this distribution as a likelihood function. That is, if we define the confidence distribution function C by C(θ_α) = α, then the associated likelihood would be the “density” dC(θ)/dθ. Leaving the philosophical arguments aside, we look briefly at where this idea leads in the context of nonparametric bootstrap methods.
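As a toy illustration of the confidence-density idea (our own example, not from the text): when the confidence limits come from a normal approximation, θ_α = t + se·z_α, the confidence distribution is C(θ) = Φ{(θ − t)/se}, and dC/dθ is simply the normal density centred at t.

```python
import numpy as np

def confidence_density(theta, t, se):
    """dC(theta)/dtheta for the normal-approximation confidence
    distribution C(theta) = Phi{(theta - t)/se}: the N(t, se^2) density."""
    z = (np.asarray(theta, dtype=float) - t) / se
    return np.exp(-0.5 * z ** 2) / (se * np.sqrt(2.0 * np.pi))
```

In this special case the confidence “likelihood” coincides with a normal likelihood for θ centred at the estimate t.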
10.4.1 Likelihood from pivots

Suppose that Z(θ) = z(θ, F̂) is a pivot, with CDF K(z) not depending on the true distribution F, and that z(θ) is a monotone function of θ. Then the confidence distribution based on confidence limits derived from z leads to the likelihood

L_z(θ) = |ż(θ)| k{z(θ)},
(10.14)
where k(z) = dK(z)/dz. Since k will be unknown in practice, it must be estimated.
In fact this definition of likelihood has a hidden defect. If the identification of the confidence distribution with a posterior distribution is accurate, as it is to a good approximation in many cases, then the effect of some prior distribution has been ignored in (10.14). But this effect can be removed by a simple device. Consider an imaginary experiment in which a random sample of size 2n is obtained, with outcome exactly two copies of the data y that we have. Then the likelihood would be the square of the likelihood L_z(θ | y) we are trying to calculate, and the ratio of the corresponding posterior densities would be simply L_z(θ | y). This argument suggests that we apply the confidence density (10.14) twice, first with data y to give L_n^z(θ), say, and second with data (y, y) to give L_{2n}^z(θ). The ratio L_{2n}^z(θ)/L_n^z(θ) will then be a likelihood with the unknown prior effect removed. In explicit notation, this definition can be written

L_z(θ) = L_{2n}^z(θ)/L_n^z(θ) = |ż_{2n}(θ)| k_{2n}{z_{2n}(θ)} / [ |ż_n(θ)| k_n{z_n(θ)} ],
(10.15)
where the subscripts indicate sample size. Note that F̂ and t are the same for both sample sizes, but quantities such as variance estimates will depend upon sample size. Note also that the implied prior is estimated by {L_n^z(θ)}² / L_{2n}^z(θ).

Example 10.4 (Exponential mean) If data y_1, ..., y_n are sampled from an exponential distribution with mean θ, then a suitable choice for z(θ, F̂) is ȳ/θ. The gamma distribution for Ȳ can be used to check that the original definition (10.14) gives L_n^z(θ) = θ^{-(n+1)} exp(−nȳ/θ), whereas the true likelihood is θ^{-n} exp(−nȳ/θ). The true result is obtained exactly using (10.15). The implied prior is π(θ) ∝ θ^{-1}, for θ > 0. ■

In practice the distribution of Z must be estimated, in general by bootstrap
[Marginal note: ż(θ) equals dz(θ)/dθ.]
sampling, so the densities k_n and k_{2n} in (10.15) must be estimated. To be specific, consider the particular case of the studentized quantity z(θ) = (t − θ)/v_L^{1/2}. Apart from a constant multiplier, the definition (10.15) gives
L_z(θ) = k_{2n}{ (t − θ)/(v_L/2)^{1/2} } / k_n{ (t − θ)/v_L^{1/2} },    (10.16)
where v_{n,L} = v_L and v_{2n,L} = ½v_L, and we have used the fact that t is the estimate for both sample sizes. The densities k_n and k_{2n} are approximated using bootstrap sample values as follows. First R nonparametric samples of size n are drawn from F̂ and the corresponding values of z_n* = (t* − t)/(v_{n,L}*)^{1/2} calculated. Then R samples of size 2n are drawn from F̂ and values of z_{2n}* = (t_{2n}* − t)/(v_{2n,L}*)^{1/2} calculated. Next kernel estimates for k_n and k_{2n}, with bandwidths h_n and h_{2n} respectively, are obtained and substituted in (10.16). For example,

k̂_n(z) = (R h_n)^{-1} Σ_{r=1}^R w{ (z − z_{n,r}*)/h_n },    (10.17)

where w is a kernel density function.
In practice these values can be computed via spline smoothing from a dense set of values of the kernel density estimates k̂_n(z).
There are difficulties with this method. First, just as with bootstrap likelihood, it is necessary to use a large number of simulations R. A second difficulty is that of ascertaining whether or not the chosen Z is a pivot, or else what prior transformation of T could be used to make Z pivotal; see Section 5.2.2. This is especially true if we extend (10.16) to vector θ, which is theoretically possible. Note that if the studentized bootstrap is applied to a transformation of t rather than t itself, then the factor |ż(θ)| in (10.14) can be ignored when applying (10.16).
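The recipe above can be sketched in code. This is our own illustration for the sample mean, with a Gaussian kernel playing the role of w in (10.17) and assumed function names; the spline-smoothing refinement is omitted.

```python
import numpy as np

def studentized_pivots(y, m, R, rng):
    """Draw R resamples of size m from the EDF of y and return the
    studentized pivot values z* = (t* - t)/(v*)^{1/2}, where t is the
    sample mean and v* the resample variance estimate of t*."""
    t = y.mean()
    samples = y[rng.integers(0, len(y), size=(R, m))]
    tstar = samples.mean(axis=1)
    vstar = samples.var(axis=1, ddof=1) / m
    return (tstar - t) / np.sqrt(vstar)

def kde(z, zstar, h):
    """Gaussian kernel density estimate, as in (10.17), at the points z."""
    u = (np.asarray(z, dtype=float)[:, None] - zstar[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(zstar) * h * np.sqrt(2 * np.pi))

def pivotal_likelihood(y, theta, R=999, h_n=1.0, h_2n=1.0, rng=None):
    """Ratio (10.16): estimated pivot density for samples of size 2n over
    that for samples of size n, evaluated along a grid of theta values."""
    if rng is None:
        rng = np.random.default_rng(1)
    y = np.asarray(y, dtype=float)
    n = len(y)
    t, vL = y.mean(), y.var(ddof=1) / n            # v_{n,L} = v_L
    z_n = studentized_pivots(y, n, R, rng)
    z_2n = studentized_pivots(y, 2 * n, R, rng)
    num = kde((t - theta) / np.sqrt(0.5 * vL), z_2n, h_2n)   # v_{2n,L} = v_L/2
    den = kde((t - theta) / np.sqrt(vL), z_n, h_n)
    return num / den
```

As the text warns, the resulting curve is noisy unless R is large, and its stability depends on the bandwidths h_n and h_{2n}.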
10.4.2 Implied likelihood

In principle any bootstrap confidence limit method can be turned into a likelihood method via the confidence distribution, but it makes sense to restrict attention to the more accurate methods such as the studentized bootstrap used above. Section 5.4 discusses the underlying theory and introduces one other method, the ABC method, which is particularly easy to use as the basis for a likelihood because no simulation is required. First, a confidence density is obtained via the quadratic approximation (5.42), with a, b and c as defined for the nonparametric ABC method in (5.49). Then, using the argument that led to (10.15), it is possible to show that the induced likelihood function is

L_ABC(θ) = exp{−½ u²(θ)},
(10.18)
Figure 10.5 Gamma profile likelihood (solid), implied likelihood L_ABC (dashes) and pivot-based likelihood (dots) for the air-conditioning datasets of size 12 (left panel) and size 24 (right panel). The pivot-based likelihood uses R = 9999 simulations and bandwidths 1.0.
where

u(θ) = 2r(θ) / [ 1 + 2a r(θ) + {1 + 4a r(θ)}^{1/2} ],    r(θ) = 2z(θ) / [ 1 + {1 − 4c z(θ)}^{1/2} ],
with z(θ) = (t − θ)/v_L^{1/2} as before. This is called the implied likelihood. Based on the discussion in Section 5.4, one would expect results similar to those from applying (10.16). A further modification is to multiply L_ABC(θ) by exp{(c v_L^{1/2} − b)θ/v_L}, with b the bias estimate defined in (5.49). The effect of this modification is to make the likelihood even more compatible with the Bayesian interpretation, somewhat akin to the adjusted profile likelihood (10.2).

Example 10.5 (Air-conditioning data) Figure 10.5 shows confidence likelihoods for the two sets of air-conditioning data in Table 5.6, samples of size 12 and 24 respectively. The implied likelihoods L_ABC(θ) are similar to the empirical likelihoods for these data. The pivotal likelihood L_z(θ), calculated from R = 9999 samples with bandwidths equal to 1.0 in (10.17), is clearly quite unstable for the smaller sample size. This also occurred with bootstrap likelihood for these data, and seems to be due to the discreteness of the simulations with so small a sample. ■
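For reference, here is a direct transcription of the implied likelihood into code (our sketch; the ABC constants a and c must be supplied from (5.49), and the formula applies only where 1 − 4c z(θ) ≥ 0).

```python
import numpy as np

def implied_likelihood(theta, t, vL, a, c):
    """Implied likelihood L_ABC(theta) = exp{-u(theta)^2/2}, with u and r
    as in the display above and z(theta) = (t - theta)/vL^{1/2}.
    Valid only where 1 - 4*c*z(theta) >= 0."""
    z = (t - np.asarray(theta, dtype=float)) / np.sqrt(vL)
    r = 2.0 * z / (1.0 + np.sqrt(1.0 - 4.0 * c * z))
    u = 2.0 * r / (1.0 + 2.0 * a * r + np.sqrt(1.0 + 4.0 * a * r))
    return np.exp(-0.5 * u ** 2)
```

A useful sanity check: when a = c = 0, u(θ) reduces to z(θ) and the implied likelihood is the normal likelihood exp{−z(θ)²/2}.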
10.5 Bayesian Bootstraps

All the inferences we have described thus far have been frequentist: we have summarized uncertainty in terms of confidence regions for the unknown parameter θ of interest, based on repeated sampling from a distribution F. A
quite different approach is possible if prior information is available regarding F. Suppose that the only possible values of Y are known to be u_1, ..., u_N, and that these arise with unknown probabilities p_1, ..., p_N, so that
Pr(Y = u_j | p_1, ..., p_N) = p_j,    Σ_{j=1}^N p_j = 1.
If our data consist of the random sample y_1, ..., y_n, and f_j counts how many y_i equal u_j, the probability of the observed data given the values of the p_j is proportional to ∏_{j=1}^N p_j^{f_j}. If the prior information regarding the p_j is summarized in the prior density π(p_1, ..., p_N), the joint posterior density of the p_j given the data is proportional to
π(p_1, ..., p_N) ∏_{j=1}^N p_j^{f_j},
and this induces a posterior density for θ. Its calculation is particularly straightforward when π is the Dirichlet density, in which case the prior and posterior densities are respectively proportional to
∏_{j=1}^N p_j^{a_j}    and    ∏_{j=1}^N p_j^{a_j + f_j};
the posterior density is Dirichlet also. Bayesian bootstrap samples and the corresponding values of θ are generated from the joint posterior density for the p_j, as follows.

Algorithm 10.1 (Bayesian bootstrap)
For r = 1, ..., R:
1  Let G_1, ..., G_N be independent gamma variables with shape parameters a_j + f_j + 1 and unit scale parameters, and for j = 1, ..., N set P_j† = G_j/(G_1 + ... + G_N).
2  Let θ_r† = t(F_r†), where F_r† = (P_1†, ..., P_N†).
Estimate the posterior density for θ by kernel smoothing of θ_1†, ..., θ_R†.
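Algorithm 10.1 can be sketched as follows for continuous data (each f_j = 1, and a common a_j = a); the function names are ours.

```python
import numpy as np

def bayesian_bootstrap(y, stat, R=999, a=-1.0, rng=None):
    """Sketch of Algorithm 10.1 for continuous data: each f_j = 1, so the
    gamma shape parameters a_j + f_j + 1 reduce to a + 2; the default
    a = -1 gives exponential G_j.  `stat(y, p)` evaluates t at the
    weighted EDF with probabilities p on the observed y."""
    if rng is None:
        rng = np.random.default_rng()
    G = rng.gamma(shape=a + 2.0, scale=1.0, size=(R, len(y)))
    P = G / G.sum(axis=1, keepdims=True)      # one Dirichlet draw per row
    return np.array([stat(y, p) for p in P])

# Posterior draws for the mean, say, are then
# post = bayesian_bootstrap(y, lambda y, p: np.sum(p * y))
# and a kernel density estimate of `post` approximates the posterior of theta.
```

Each row of P is one draw from the Dirichlet posterior, constructed from independent gamma variables exactly as in step 1 of the algorithm.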
In practice with continuous data we have f_j = 1. The simplest version of the simulation puts a_j = −1, corresponding to an improper prior distribution with support on y_1, ..., y_n; the G_j are then exponential. Some properties of this procedure are outlined in Problem 10.10.

Example 10.6 (City population data) In the city population data of Example 2.8, for which n = 10, the parameter θ = t(F) and the rth simulated posterior value θ_r† are the ratios

θ = ∫x dF(u, x) / ∫u dF(u, x),    θ_r† = Σ_j P_j† x_j / Σ_j P_j† u_j,

where the (u_j, x_j) are the data pairs.
Figure 10.6 Bayesian bootstrap applied to the city population data, with n = 10. The left panel shows posterior densities for the ratio θ estimated from 999 Bayesian bootstrap simulations, with a = −1, 2, 5, 10; the densities become more peaked as a increases. The right panel shows the corresponding prior densities for θ.
The left panel of Figure 10.6 shows kernel density estimates of the posterior density of θ based on R = 999 simulations with all the a_j equal to a = −1, 2, 5, and 10. The increasingly strong prior information results in posterior densities that are more and more sharply peaked. The right panel shows the implied priors on θ, obtained using the data-doubling device from Section 10.4. The priors seem highly informative, even when a = −1. ■
The primary use of the Bayesian bootstrap is likely to be for imputation when data are missing, rather than in inference for θ per se. There are theoretical advantages to such weighted bootstraps, in which the probabilities P_j† vary smoothly, but as yet they have been little used in applications.
10.6 Bibliographic Notes

Likelihood inference is the core of parametric statistics. Many elementary textbooks contain some discussion of large-sample likelihood asymptotics, while adjusted likelihoods and higher-order approximations are described by Barndorff-Nielsen and Cox (1994).
Empirical likelihood was defined for single samples by Owen (1988) and extended to wider classes of models in a series of papers (Owen, 1990, 1991). Qin and Lawless (1994) make theoretical connections to estimating equations, while Hall and La Scala (1990) discuss some practical issues in using empirical likelihoods. More general models to which empirical likelihood has been applied include density estimation (Hall and Owen, 1993; Chen, 1996), length-biased data (Qin, 1993), truncated data (Li, 1995), and time series (Monti,
1997). Applications to directional data are discussed by Fisher et al. (1996). Owen (1992a) reports simulations that compare the behaviour of the empirical likelihood ratio statistic with bootstrap methods for samples of size up to 20, with overall conclusions in line with those of Section 5.7: the studentized bootstrap performs best, in particular giving more accurate confidence intervals for the mean than the empirical likelihood ratio statistic, for a variety of underlying populations. Related theoretical developments are due to DiCiccio, Hall and Romano (1991), DiCiccio and Romano (1989), and Chen and Hall (1993). From a theoretical viewpoint it is noteworthy that the empirical likelihood ratio statistic can be Bartlett-adjusted, though Corcoran, Davison and Spady (1996) question the practical relevance of this. Hall (1990) makes theoretical comparisons between empirical likelihood and likelihood based on studentized pivots.
Empirical likelihood has roots in certain problems in survival analysis, notably using the product-limit estimator to set confidence intervals for a survival probability. Related methods are discussed by Murphy (1995). See also Mykland (1995), who introduces the idea of dual likelihood, which treats the Lagrange multiplier in (10.7) as a parameter. Except in large samples, it seems likely that our caveats about asymptotic results apply here also.
Empirical exponential families have been discussed in Section 10.10 of Efron (1982) and by DiCiccio and Romano (1990), among others; see also Corcoran, Davison and Spady (1996), who make comparisons with empirical likelihood statistics. Jing and Wood (1996) show that empirical exponential family likelihood is not Bartlett adjustable. A univariate version of the statistic Q_EEF in Section 10.2.2 is discussed by Lloyd (1994) in the context of M-estimation.
Bootstrap likelihood was introduced by Davison, Hinkley and Worton (1992), who discuss its relationship to empirical likelihood, while a later paper (Davison, Hinkley and Worton, 1995) describes computational improvements. Early work on the use of confidence distributions to define nonparametric likelihoods was done by Hall (1987), Boos and Monahan (1986), and Ogbonmwan and Wynn (1986). The use of confidence distributions in Section 10.4 rests in part on the similarity of confidence distributions to Bayesian posterior distributions. For related theory see Welch and Peers (1963), Stein (1985) and Berger and Bernardo (1992). Efron (1993) discusses the likelihood derived from ABC confidence limits, shows a strong connection with profile likelihood and related likelihoods, and gives several applications; see also Chapter 24 of Efron and Tibshirani (1993).
The Bayesian bootstrap was introduced by Rubin (1981), and subsequently used by Rubin and Schenker (1986) and Rubin (1987) for multiple imputation in missing-data problems. Banks (1988) has described some variants of the Bayesian bootstrap, while Newton and Raftery (1994) describe a variant which
they name the weighted likelihood bootstrap. A comprehensive theoretical discussion of weighted bootstraps is given in Barbe and Bertail (1995).
10.7 Problems

1
Consider empirical likelihood for a parameter θ = t(F) defined by an estimating equation ∫u(t; y) dF(y) = 0, based on a random sample y_1, ..., y_n.
(a) Use Lagrange multipliers to maximize Σ log p_j subject to the conditions Σ p_j = 1 and Σ p_j u(t; y_j) = 0, and hence show that the log empirical likelihood is given by (10.7) with d = 1. Verify that the empirical likelihood is maximized at the sample EDF, when θ = t(F̂).
(b) Suppose that u(t; y) = y − t and n = 2, with y_1 < y_2. Show that η_θ can be written as (θ − ȳ)/{(θ − y_1)(y_2 − θ)}, and sketch it as a function of θ.
(Section 10.2.1)
2
Suppose that x_1, ..., x_n and y_1, ..., y_m are independent random samples from distributions with means μ and μ + δ. Obtain the empirical likelihood ratio statistic for δ. (Section 10.2.1)
3
(a) In (10.5), suppose that θ = ȳ + n^{−1/2}σε, where σ² = var(Y_j) and ε has an asymptotic standard normal distribution. Show that η_θ = −n^{−1/2}σε/σ², and deduce that near ȳ, ℓ_EL(θ) ≈ −½n(ȳ − θ)²/σ².
(b) Now suppose that a single observation from F has log density ℓ(θ) = log f(y; θ) and corresponding Fisher information i(θ) = E{−ℓ̈(θ)}. Use the fact that the MLE θ̂ satisfies the equation ℓ̇(θ̂) = 0 to show that near θ̂ the parametric log likelihood is roughly