Essays in Econometrics

This book, and its companion volume in the Econometric Society Monographs series (ESM No. 32), present a collection of papers by Clive W. J. Granger. His contributions to economics and econometrics, many of them seminal, span more than four decades and touch on all aspects of time series analysis. The papers assembled in this volume explore topics in causality, integration and cointegration, and long memory. Those in the companion volume investigate themes in spectral analysis, seasonality, nonlinearity, methodology, and forecasting. The two volumes contain the original articles as well as an introduction written by the editors.

Eric Ghysels is Edward M. Bernstein Professor of Economics and Professor of Finance at the University of North Carolina, Chapel Hill. He previously taught at the University of Montreal and Pennsylvania State University. Professor Ghysels's main research interests are time series econometrics and finance. He has served on the editorial boards of several academic journals and has published more than sixty articles in leading economics, finance, and statistics journals.

Norman R. Swanson is Associate Professor of Economics at Texas A&M University and formerly taught at Pennsylvania State University. He received his doctorate at the University of California, San Diego, where he studied theoretical, financial, and macroeconomics under Clive Granger's tutelage. Professor Swanson is an associate editor for numerous academic journals and is the author of more than thirty refereed articles and research papers.

Mark W. Watson is Professor of Economics and Public Affairs at the Woodrow Wilson School, Princeton University. He previously served on the faculties of Harvard and Northwestern Universities and as associate editor of Econometrica, the Journal of Monetary Economics, and the Journal of Applied Econometrics and currently is a Research Associate of the National Bureau of Economic Research (NBER) and consultant at the Federal Reserve Bank of Richmond. Professor Watson is a Fellow of the Econometric Society and currently holds research grants from NBER and the National Science Foundation.
Econometric Society Monographs No. 33

Editors:
Peter Hammond, Stanford University
Alberto Holly, University of Lausanne

The Econometric Society is an international society for the advancement of economic theory in relation to statistics and mathematics. The Econometric Society Monograph Series is designed to promote the publication of original research contributions of high quality in mathematical economics and theoretical and applied econometrics.

Other titles in the series:
G. S. Maddala, Limited-dependent and qualitative variables in econometrics, 0 521 33825 5
Gerard Debreu, Mathematical economics: Twenty papers of Gerard Debreu, 0 521 33561 2
Jean-Michel Grandmont, Money and value: A reconsideration of classical and neoclassical monetary economics, 0 521 31364 3
Franklin M. Fisher, Disequilibrium foundations of equilibrium economics, 0 521 37856 7
Andreu Mas-Colell, The theory of general economic equilibrium: A differentiable approach, 0 521 26514 2, 0 521 38870 8
Cheng Hsiao, Analysis of panel data, 0 521 38933 X
Truman F. Bewley, Editor, Advances in econometrics – Fifth World Congress (Volume I), 0 521 46726 8
Truman F. Bewley, Editor, Advances in econometrics – Fifth World Congress (Volume II), 0 521 46725 X
Hervé Moulin, Axioms of cooperative decision making, 0 521 36055 2, 0 521 42458 5
L. G. Godfrey, Misspecification tests in econometrics: The Lagrange multiplier principle and other approaches, 0 521 42459 3
Tony Lancaster, The econometric analysis of transition data, 0 521 43789 X
Alvin E. Roth and Marilda A. Oliveira Sotomayor, Editors, Two-sided matching: A study in game-theoretic modeling and analysis, 0 521 43788 1
Wolfgang Härdle, Applied nonparametric regression, 0 521 42950 1
Jean-Jacques Laffont, Editor, Advances in economic theory – Sixth World Congress (Volume I), 0 521 48459 6
Jean-Jacques Laffont, Editor, Advances in economic theory – Sixth World Congress (Volume II), 0 521 48460 X
Halbert White, Estimation, inference and specification analysis, 0 521 25280 6, 0 521 57446 3
Christopher Sims, Editor, Advances in econometrics – Sixth World Congress (Volume I), 0 521 56610 X
Christopher Sims, Editor, Advances in econometrics – Sixth World Congress (Volume II), 0 521 56609 6
Roger Guesnerie, A contribution to the pure theory of taxation, 0 521 23689 4, 0 521 62956 X
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume I), 0 521 58011 0, 0 521 58983 5
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume II), 0 521 58012 9, 0 521 58982 7
David M. Kreps and Kenneth F. Wallis, Editors, Advances in economics and econometrics – Seventh World Congress (Volume III), 0 521 58013 7, 0 521 58981 9
Donald P. Jacobs, Ehud Kalai, and Morton I. Kamien, Editors, Frontiers of research in economic theory: The Nancy L. Schwartz Memorial Lectures, 1983–1997, 0 521 63222 6, 0 521 63538 1
A. Colin Cameron and Pravin K. Trivedi, Regression analysis of count data, 0 521 63201 3, 0 521 63567 5
Steinar Strøm, Editor, Econometrics and economic theory in the 20th century: The Ragnar Frisch Centennial Symposium, 0 521 63323 0, 0 521 63365 6
Eric Ghysels, Norman R. Swanson, and Mark Watson, Editors, Essays in econometrics: Collected papers of Clive W. J. Granger (Volume I), 0 521 77297 4, 0 521 80401 8, 0 521 77496 9, 0 521 79697 0
Eric Ghysels, Norman R. Swanson, and Mark Watson, Editors, Essays in econometrics: Collected papers of Clive W. J. Granger (Volume II), 0 521 79207 X, 0 521 80401 8, 0 521 79649 0, 0 521 79697 0
CLIVE WILLIAM JOHN GRANGER
Essays in Econometrics Collected Papers of Clive W. J. Granger Volume II: Causality, Integration and Cointegration, and Long Memory Edited by
Eric Ghysels University of North Carolina at Chapel Hill
Norman R. Swanson Texas A&M University
Mark W. Watson Princeton University
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo

Cambridge University Press
The Edinburgh Building, Cambridge, United Kingdom

Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521792073

© Cambridge University Press 2001

This book is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published in print format 2001

isbn-13 978-0-511-06938-3 eBook (EBL)
isbn-10 0-511-06938-3 eBook (EBL)
isbn-13 978-0-521-79207-3 hardback
isbn-10 0-521-79207-X hardback
isbn-13 978-0-521-79649-1 paperback
isbn-10 0-521-79649-0 paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this book, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
To Clive W. J. Granger: Mentor, Colleague, and Friend. We are honored to present this selection of his research papers. E. G. N. R. S. M. W. W.
Contents
Acknowledgments  page xiii
List of Contributors  xvii

Introduction, eric ghysels, norman r. swanson, and mark watson  1

PART ONE: CAUSALITY

1. Investigating Causal Relations by Econometric Models and Cross-Spectral Methods, c. w. j. granger, Econometrica, 37, 1969, pp. 424–38. Reprinted in Rational Expectations, edited by t. sargent and r. lucas, 1981, University of Minnesota Press.  31
2. Testing for Causality: A Personal Viewpoint, c. w. j. granger, Journal of Economic Dynamics and Control, 2, 1980, pp. 329–52.  48
3. Some Recent Developments in a Concept of Causality, c. w. j. granger, Journal of Econometrics, 39, 1988, pp. 199–211.  71
4. Advertising and Aggregate Consumption: An Analysis of Causality, r. ashley, c. w. j. granger and r. schmalensee, Econometrica, 48, 1980, pp. 1149–67.  84

PART TWO: INTEGRATION AND COINTEGRATION

5. Spurious Regressions in Econometrics, c. w. j. granger and p. newbold, Journal of Econometrics, 2, 1974, pp. 111–20.  109
6. Some Properties of Time Series Data and Their Use in Econometric Model Specification, c. w. j. granger, Journal of Econometrics, 16, 1981, pp. 121–30.  119
7. Time Series Analysis of Error Correction Models, c. w. j. granger and a. a. weiss, in Studies in Econometrics: Time Series and Multivariate Statistics, edited by s. karlin, t. amemiya, and l. a. goodman, Academic Press, New York, 1983, pp. 255–78.  129
8. Co-Integration and Error-Correction: Representation, Estimation, and Testing, r. engle and c. w. j. granger, Econometrica, 55, 1987, pp. 251–76.  145
9. Developments in the Study of Cointegrated Economic Variables, c. w. j. granger, Oxford Bulletin of Economics and Statistics, 48, 1986, pp. 213–28.  173
10. Seasonal Integration and Cointegration, s. hylleberg, r. f. engle, c. w. j. granger, and b. s. yoo, Journal of Econometrics, 44, 1990, pp. 215–38.  189
11. A Cointegration Analysis of Treasury Bill Yields, a. d. hall, h. m. anderson, and c. w. j. granger, Review of Economics and Statistics, 74, 1992, pp. 116–26.  212
12. Estimation of Common Long Memory Components in Cointegrated Systems, j. gonzalo and c. w. j. granger, Journal of Business and Economic Statistics, 13, 1995, pp. 27–35.  232
13. Separation in Cointegrated Systems and Persistent-Transitory Decompositions, c. w. j. granger and n. haldrup, Oxford Bulletin of Economics and Statistics, 59, 1997, pp. 449–64.  254
14. Nonlinear Transformations of Integrated Time Series, c. w. j. granger and j. hallman, Journal of Time Series Analysis, 12, 1991, pp. 207–24.  269
15. Long Memory Series with Attractors, c. w. j. granger and j. hallman, Oxford Bulletin of Economics and Statistics, 53, 1991, pp. 11–26.  286
16. Further Developments in the Study of Cointegrated Variables, c. w. j. granger and n. r. swanson, Oxford Bulletin of Economics and Statistics, 58, 1996, pp. 374–86.  302

PART THREE: LONG MEMORY

17. An Introduction to Long-Memory Time Series Models and Fractional Differencing, c. w. j. granger and r. joyeux, Journal of Time Series Analysis, 1, 1980, pp. 15–29.  321
18. Long Memory Relationships and the Aggregation of Dynamic Models, c. w. j. granger, Journal of Econometrics, 14, 1980, pp. 227–38.  338
19. A Long Memory Property of Stock Market Returns and a New Model, z. ding, c. w. j. granger and r. f. engle, Journal of Empirical Finance, 1, 1993, pp. 83–106.  349

Index  373
Acknowledgments
Grateful acknowledgment is made to the following publishers and sources for permission to reprint the articles cited here. ACADEMIC PRESS “Non-Linear Time Series Modelling,” with A. Andersen, Applied Time Series Analysis, edited by David F. Findley, 1978,Academic Press, 25–38. “Time Series Analysis of Error Correction Models,” with A. A. Weiss, in Studies in Econometrics: Time Series and Multivariate Statistics, edited by S. Karlin, T. Amemiya, and L. A. Goodman, Academic Press, New York, 1983, 255–78. AMERICAN STATISTICAL ASSOCIATION “Is Seasonal Adjustment a Linear or Nonlinear Data-Filtering Process?” with E. Ghysels and P. L. Siklos, Journal of Business and Economic Statistics, 14, 1996, 374–86. “Semiparametric Estimates of the Relation Between Weather and Electricity Sales,” with R. F. Engle, J. Rice, and A. Weiss, Journal of the American Statistical Association, 81, 1986, 310–20. “Estimation of Common Long-Memory Components in Cointegrated Systems,” with J. Gonzalo, Journal of Business and Economic Statistics, 13, 1995, 27–35. BLACKWELL PUBLISHERS “Time Series Modelling and Interpretation,” with M. J. Morris, Journal of the Royal Statistical Society, Series A, 139, 1976, 246–57. “Forecasting Transformed Series,” with P. Newbold, The Journal of the Royal Statistical Society, Series B, 38, 1976, 189–203. “Developments in the Study of Cointegrated Economic Variables,” Oxford Bulletin of Economics and Statistics, 48, 1986, 213–28.
“Separation in Cointegrated Systems and Persistent-Transitory Decompositions,” with N. Haldrup, Oxford Bulletin of Economics and Statistics, 59, 1997, 449–64. “Nonlinear Transformations of Integrated Time Series,” with J. Hallman, Journal of Time Series Analysis, 12, 1991, 207–24. “Long Memory Series with Attractors,” with J. Hallman, Oxford Bulletin of Economics and Statistics, 53, 1991, 11–26. “Further Developments in the Study of Cointegrated Variables,” with N. R. Swanson, Oxford Bulletin of Economics and Statistics, 58, 1996, 374–86. “An Introduction to Long-Memory Time Series Models and Fractional Differencing,” with R. Joyeux, Journal of Time Series Analysis, 1, 1980, 15–29. BUREAU OF THE CENSUS “Seasonality: Causation, Interpretation and Implications,” in Seasonal Analysis of Economic Time Series, Economic Research Report, ER-1, edited by A. Zellner, 1979, Bureau of the Census, 33–46. “Forecasting White Noise,” in Applied Time Series Analysis of Economic Data, Proceedings of the Conference on Applied Time Series Analysis of Economic Data, October 1981, edited by A. Zellner, U.S. Department of Commerce, Bureau of the Census, Government Printing Office, 1983, 308–14. CAMBRIDGE UNIVERSITY PRESS “The ET Interview: Professor Clive Granger,” Econometric Theory, 13, 1997, 253–303. “Implications of Aggregation with Common Factors,” Econometric Theory, 3, 1987, 208–22. CHARTERED INSTITUTION OF WATER AND ENVIRONMENTAL MANAGEMENT “Estimating the Probability of Flooding on a Tidal River,” Journal of the Institution of Water Engineers, 13, 1959, 165–74. THE ECONOMETRICS SOCIETY “The Typical Spectral Shape of an Economic Variable,” Econometrica, 34, 1966, 150–61. “Modelling Nonlinear Relationships Between Extended-Memory Variables,” Econometrica, 63, 1995, 265–79. “Near Normality and Some Econometric Models,” Econometrica, 47, 1979, 781–4.
“Investigating Causal Relations by Econometric Models and CrossSpectral Methods,” Econometrica, 37, 1969, 424–38. Reprinted in Rational Expectations, edited by T. Sargent and R. Lucas, 1981, University of Minnesota Press, Minneapolis. “Advertising and Aggregate Consumption: An Analysis of Causality,” with R. Ashley and R. Schmalensee, Econometrica, 48, 1980, 1149–67. “Co-Integration and Error-Correction: Representation, Estimation and Testing,” with R. Engle, Econometrica, 55, 1987, 251–76. ELSEVIER “Testing for Neglected Nonlinearity in Time Series Models: A Comparison of Neural Network Methods and Alternative Tests,” with T.-H. Lee and H. White, Journal of Econometrics, 56, 1993, 269–90. “On The Invertibility of Time Series Models,” with A. Andersen, Stochastic Processes and Their Applications, 8, 1978, 87–92. “Comments on the Evaluation of Policy Models,” with M. Deutsch, Journal of Policy Modelling, 14, 1992, 397–416. “Invited Review: Combining Forecasts – Twenty Years Later,” Journal of Forecasting, 8, 1989, 167–73. “The Combination of Forecasts Using Changing Weights,” with M. Deutsch and T. Teräsvirta, International Journal of Forecasting, 10, 1994, 47–57. “Short-Run Forecasts of Electricity Loads and Peaks,” with R. Ramanathan, R. F. Engle, F. Vahid-Araghi, and C. Brace, International Journal of Forecasting, 13, 1997, 161–74. “Some Recent Developments in a Concept of Causality,” Journal of Econometrics, 39, 1988, 199–211. “Spurious Regressions in Econometrics,” with P. Newbold, Journal of Econometrics, 2, 1974, 111–20. “Some Properties of Time Series Data and Their Use in Econometric Model Specification,” Journal of Econometrics, 16, 1981, 121–30. “Seasonal Integration and Cointegration,” with S. Hylleberg, R. F. Engle, and B. S. Yoo, Journal of Econometrics, 44, 1990, 215–38. “Long-Memory Relationships and the Aggregation of Dynamic Models,” Journal of Econometrics, 14, 1980, 227–38. “A Long Memory Property of Stock Market Returns and a New Model,” with Z. Ding and R. F. Engle, Journal of Empirical Finance, 1, 1993, 83–106. FEDERAL RESERVE BANK OF MINNEAPOLIS “The Time Series Approach to Econometric Model Building,” with P. Newbold, in New Methods in Business Cycle Research, edited by C. Sims, 1977, Federal Reserve Bank of Minneapolis.
HELBING AND LICHTENHAHN VERLAG “Spectral Analysis of New York Stock Market Prices,” with O. Morgenstern, Kyklos, 16, 1963, 1–27. Reprinted in Random Character of Stock Market Prices, edited by P. H. Cootner, 1964, MIT Press, Cambridge, MA. JOHN WILEY & SONS, LTD. “Using the Correlation Exponent to Decide Whether an Economic Series is Chaotic,” with T. Liu and W. P. Heller, Journal of Applied Econometrics, 7, 1992, S25–40. Reprinted in Nonlinear Dynamics, Chaos, and Econometrics, edited by M. H. Pesaran and S. M. Potter, Wiley, Chichester. “Can We Improve the Perceived Quality of Economic Forecasts?” Journal of Applied Econometrics, 11, 1996, 455–73. MACMILLAN PUBLISHERS, LTD. “Prediction with a Generalized Cost of Error Function,” Operational Research Quarterly, 20, 1969, 199–207. “The Combination of Forecasts, Using Changing Weights,” with M. Deutsch and T. Teräsvirta, International Journal of Forcasting, 10, 1994, 45–57. MIT PRESS “Testing for Causality: A Personal Viewpoint,” Journal of Economic Dynamics and Control, 2, 1980, 329–52. “A Cointegration Analysis of Treasury Bill Yields,” with A. D. Hall and H. M. Anderson, Review of Economics and Statistics, 74, 1992, 116–26. “Spectral Analysis of New York Stock Market Prices,” with O. Morgenstern, Kyklos, 16, 1963, 1–27. Reprinted in Random Character of Stock Market Prices, edited by P. H. Cootner, 1964, MIT Press, Cambridge, MA. TAYLOR & FRANCIS, LTD. “Some Comments on the Evaluation of Economic Forecasts,” with P. Newbold, Applied Economics, 5, 1973, 35–47.
Contributors
A. Andersen Department of Economic Statistics University of Sydney Sydney Australia
R. F. Engle Department of Economics University of California, San Diego La Jolla, CA U.S.A.
H. M. Anderson Department of Econometrics Monash University Australia
E. Ghysels Department of Economics University of North Carolina at Chapel Hill Chapel Hill, NC U.S.A.
R. Ashley University of California, San Diego La Jolla, CA U.S.A. J. M. Bates Bramcote Nottingham United Kingdom C. Brace Puget Sound Power and Light Company Bellevue, WA U.S.A. M. Deutsch Department of Economics University of California, San Diego La Jolla, CA U.S.A. Z. Ding Frank Russell Company Tacoma, WA U.S.A.
J. Gonzalo Department of Economics University Carlos III Madrid Spain C. W. J. Granger Department of Economics University of California, San Diego La Jolla, CA 92093 N. Haldrup Department of Economics University of Aarhus Aarhus Denmark A. D. Hall School of Finance and Economics University of Technology Sydney Australia J. Hallman Federal Reserve Board Washington, DC U.S.A.
W. P. Heller University of California, San Diego La Jolla, CA U.S.A. S. Hylleberg Department of Economics University of Aarhus Aarhus Denmark R. Joyeux School of Economics and Financial Studies Macquarie University Sydney Australia
J. Rice Department of Statistics University of California, Berkeley Berkeley, CA U.S.A. R. Schmalensee Sloan School of Management Massachusetts Institute of Technology Cambridge, MA U.S.A. P. L. Siklos Department of Economics Wilfrid Laurier University Waterloo, Ontario Canada
T.-H. Lee Department of Economics University of California, Riverside Riverside, CA U.S.A.
N. R. Swanson Department of Economics Texas A&M University College Station, TX U.S.A.
T. Liu Department of Economics Ball State University Muncie, IN U.S.A.
T. Teräsvirta School of Finance and Economics University of Technology Sydney Australia
O. Morgenstern (deceased) Princeton University Princeton, NJ U.S.A.
F. Vahid-Araghi Department of Econometrics Monash University Australia
M. J. Morris University of East Anglia United Kingdom
M. Watson Department of Economics Princeton University Princeton, NJ U.S.A.
P. Newbold Department of Economics Nottingham University Nottingham United Kingdom P. C. B. Phillips Cowles Foundation for Research in Economics Yale University New Haven, CT U.S.A. R. Ramanathan Department of Economics University of California, San Diego La Jolla, CA U.S.A.
A. A. Weiss Department of Economics University of Southern California Los Angeles, CA U.S.A. H. White Department of Economics University of California, San Diego La Jolla, CA U.S.A. B. S. Yoo Yonsei University Seoul South Korea
Introduction

Volume I
At the beginning of the twentieth century, there was very little fundamental theory of time series analysis and surely very few economic time series data. Autoregressive models and moving average models were introduced more or less simultaneously and independently by the British statistician Yule (1921, 1926, 1927) and the Russian statistician Slutsky (1927). The mathematical foundations of stationary stochastic processes were developed by Wold (1938), Kolmogorov (1933, 1941a, 1941b), Khintchine (1934), and Mann and Wald (1943). Thus, modern time series analysis is a mere eight decades old. Clive W. J. Granger has been working in the field for nearly half of its young life. His ideas and insights have had a fundamental impact on statistics, econometrics, and dynamic economic theory. Granger summarized his research activity in a recent ET Interview (Phillips 1997), which appears as the first reprint in this volume, by saying, "I plant a lot of seeds, a few of them come up, and most of them do not." Many of the seeds that he planted now stand tall and majestic like the Torrey Pines along the California coastline just north of the University of California, San Diego, campus in La Jolla, where he has been an economics faculty member since 1974. Phillips notes in the ET Interview that "It is now virtually impossible to do empirical work in time series econometrics without using some of his [Granger's] methods or being influenced by some of his ideas." Indeed, applied time series econometricians come across at least one of his path-breaking ideas almost on a daily basis. For example, many of his contributions in the areas of spectral analysis, long memory, causality, forecasting, spurious regression, and cointegration are seminal. His influence on the profession continues with no apparent signs of abatement. SPECTRAL METHODS In his ET Interview, Granger explains that early in his career he was confronted with many applied statistical issues from various disciplines
because he was the only statistician on the campus of the University of Nottingham, where he completed his PhD in statistics and served as lecturer for a number of years. This led to his first publications, which were not in the field of economics. Indeed, the first reprint in Volume II of this set contains one of his first published works, a paper in the field of hydrology. Granger’s first influential work in time series econometrics emerged from his research with Michio Hatanaka. Both were working under the supervision of Oskar Morgenstern at Princeton and were guided by John Tukey. Cramér (1942) had developed the spectral decomposition of weakly stationary processes, and the 1950s and early 1960s were marked by intense research efforts devoted to spectral analysis. Many prominent scholars of the time, including Milton Friedman, John von Neumann, and Oskar Morgenstern, saw much promise in the application of Fourier analysis to economic data. In 1964, Princeton University Press published a monograph by Granger and Hatanaka, which was the first systematic and rigorous treatment of spectral analysis in the field of economic time series. Spectral methods have the appealing feature that they do not require the specification of a model but instead follow directly from the assumption of stationarity. Interestingly, more than three decades after its initial publication, the book remains a basic reference in the field. The work of Granger and Hatanaka was influential in many dimensions. The notion of business cycle fluctuations had been elaborately discussed in the context of time series analysis for some time. Spectral analysis provided new tools and yielded fundamental new insights into this phenomenon. Today, macroeconomists often refer to business cycle frequencies, and a primary starting point for the analysis of business cycles is still the application of frequency domain methods. In fact, advanced textbooks in macroeconomics, such as Sargent (1987), devote an entire chapter to spectral analysis. The dominant feature of the spectrum of most economic time series is that most of the power is at the lower frequencies. There is no single pronounced business cycle peak; instead there are a wide number of moderately sized peaks over a large range of cycles between four and eight years in length. Granger (1966) dubbed this shape the “typical spectral shape” of an economic variable. A predecessor to Granger’s 1966 paper entitled “The Typical Spectral Shape of an Economic Variable” is his joint paper with Morgenstern published in 1963, which is entitled “Spectral Analysis of New York Stock Market Prices.” Both papers are representative of Granger’s work in the area of spectral analysis and are reproduced as the first set of papers following the ET Interview. The paper with Morgenstern took a fresh look at the random walk hypothesis for stock prices, which had been advanced by the French mathematician M. L. Bachelier (1900). Granger and Morgenstern esti-
mated spectra of return series of several major indices of stocks listed on the New York Stock Exchange. They showed that business cycle and seasonal variations were unimportant for return series, as in every case the spectrum was roughly flat at almost all frequencies. However, they also documented evidence that did not support the random walk model. In particular, they found that very long-run movements were not adequately explained by the model. This is interesting because the random walk hypothesis was associated with definitions of efficiency of financial markets for many years (e.g., see the classic work of Samuelson 1965 and Fama 1970). The Granger and Morgenstern paper is part of a very important set of empirical papers written during the early part of the 1960s, which followed the early work of Cowles (1933). Other related papers include Alexander (1961, 1964), Cootner (1964), Fama (1965), Mandelbrot (1963), and Working (1960). Today, the long-term predictability of asset returns is a well-established empirical stylized fact, and research in the area remains very active (e.g., see Campbell, Lo, and MacKinlay 1997 for recent references). SEASONALITY Seasonal fluctuations were also readily recognized from the spectrum, and the effect of seasonal adjustment on economic data was therefore straightforward to characterize. Nerlove (1964, 1965) used spectral techniques to analyze the effects of various seasonal adjustment procedures. His approach was to compute spectra of unadjusted and adjusted series and to examine the cross spectrum of the two series. Nerlove’s work took advantage of the techniques Granger and Hatanaka had so carefully laid out in their monograph. Since then, many papers that improve these techniques have been written. They apply the techniques to the study of seasonal cycles and the design of seasonal adjustment filters. For example, many significant insights have been gained by viewing seasonal adjustment procedures as optimal linear signal extraction filters (e.g., see Hannan 1967; Cleveland and Tiao 1976; Pierce 1979; and Bell 1984, among others). At the same time, there has been a perpetual debate about the merits of seasonal adjustment, and since the creation of the X11 program, many improvements have been made and alternative procedures have been suggested. The Census X-11 program was the product of several decades of research. Its development was begun in the early 1930s by researchers at the National Bureau of Economic Research (NBER) (see, for example, Macaulay 1931), and it emerged as a fully operational procedure in the mid 1960s, in large part due to the work by Julius Shiskin and his collaborators at the U.S. Bureau of the Census (see Shiskin et al. 1967). During the 1960s and 1970s, numerous papers were written on the topic of seasonality, including important papers by Sims
(1974) and Wallis (1974). Granger's (1979) paper, "Seasonality: Causation, Interpretation and Implications," is the first of two papers on the topic of seasonality included in this volume. It was written for a major conference on seasonality, which took place in the late 1970s, and appeared in a book edited by Zellner (1979). In this paper, he asks the pointed question, "Why adjust?" and gives a very balanced view of the merits and drawbacks of seasonal adjustment. The paper remains one of the best reflections on the issue of seasonality and seasonal adjustment. The second paper in this subsection, "Is Seasonal Adjustment a Linear or a Nonlinear Data-Filtering Process?," written with Ghysels and Siklos (1996), also deals with a pointed question that was initially posed by Young (1968). The question is: Are seasonal adjustment procedures (approximately) linear data transformations? The answer to this question touches on many fundamental issues, such as the treatment of seasonality in regression (cf. Sims 1974; Wallis 1974) and the theory of seasonal adjustment. The paper shows that the widely applied X-11 program is a highly nonlinear filter. NONLINEARITY The book by Box and Jenkins (1970) pushed time series analysis into a central role in economics. At the time of its publication, the theory of stationary linear time series processes was well understood, as evidenced by the flurry of textbooks written during the late 1960s and the 1970s, such as Anderson (1971), Fuller (1976), Granger and Newbold (1977), Hannan (1970), Nerlove et al. (1979), and Priestley (1981). However, many areas of time series analysis fell beyond the scope of linear stationary processes and were not well understood. These areas included nonstationarity and long memory (covered in Volume II) and nonlinear models. Four papers on nonlinearity in time series analysis are reproduced in Volume I and are representative of Granger's important work in this area. Because the class of nonlinear models is virtually without bound, one is left with the choice of either letting the data speak (and suffering the obvious dangers of overfitting) or relying on economic theory to yield the functional form of nonlinear economic relationships. Unfortunately, most economic theories provide only partial descriptions, with blanks that need to be filled in by exploratory statistical techniques. The papers in this section address the statistical foundations of nonlinear modeling and some of the classical debates in the literature on nonlinear modeling. The first paper, "Non-Linear Time Series Modeling," describes the statistical underpinnings of a particular class of nonlinear models. This paper by Granger and Andersen predates their joint monograph on bilinear models (Granger and Andersen 1978). This class of models is not as popular today as it once was, although bilinear models are con-
nected in interesting ways to models of more recent vintage, such as the class of ARCH models introduced by Engle (1982). One of the classical debates in the literature on nonlinear models pertains to the use of deterministic versus stochastic processes to describe economic phenomena. Granger has written quite extensively on the subject of chaos (a class of deterministic models) and has expressed some strong views on its use in economics, characterizing the theory of chaos as fascinating mathematics but not of practical relevance in econometrics (see Granger 1992, 1994). Liu, Granger, and Heller (1992), in the included paper entitled "Using the Correlation Exponent to Decide Whether an Economic Series Is Chaotic," study the statistical properties of two tests designed to distinguish deterministic time series from stochastic white noise. The tests are the Grassberger-Procaccia correlation exponent test and the Brock, Dechert, and Scheinkman test. Along the same lines, Lee, White, and Granger (1993), in the paper entitled "Testing for Neglected Nonlinearity in Time Series Models," examine a battery of tests for nonlinearity. Both papers are similar in that they consider basic questions of nonlinear modeling and provide useful and practical answers. The fourth paper in this section, "Modeling Nonlinear Relationships Between Extended-Memory Variables," is the Fisher-Schultz lecture delivered at the 1993 European Meetings of the Econometric Society in Uppsala. The lecture coincided with the publication of the book by Granger and Teräsvirta (1993) on modeling nonlinear economic relationships. This book is unique in the area because it combines a rich collection of topics ranging from testing for linearity, chaos, and long memory to aggregation effects and forecasting. In his Fisher-Schultz lecture, Granger addresses the difficult area of nonlinear modeling of nonstationary processes. The paper shows that the standard classification of I(0) and I(1) processes in linear models is not sufficient for nonlinear functions. This observation also applies to fractional integration. As is typical, Granger makes suggestions for new areas of research, advancing the notions of short memory in mean and extended memory, and relates these ideas to earlier concepts of mixing conditions, as discussed for instance in McLeish (1978), Gallant and White (1988), and Davidson (1994). At this point, it is too early to tell whether any of these will give us the guidance toward building a unified theory of nonlinear nonstationary processes. The final paper in this section is entitled "Semiparametric Estimates of the Relation Between Weather and Electricity Sales." This paper with Engle, Rice, and Weiss is a classic contribution to the nonparametric and semiparametric literature and stands out as the first application of semiparametric modeling techniques to economics (previous work had been done on testing). Other early work includes Robinson (1988) and Stock (1989). Recent advances in the area are discussed in Bierens (1990), Delgado and Robinson (1992), Granger and Teräsvirta (1993),
Härdle (1990), Li (1998), Linton and Neilson (1995), and Teräsvirta, Tjostheim, and Granger (1994), to name but a few. In this classic paper, Granger and his coauthors use semiparametric models, which include a linear part and a nonparametric cubic spline function to model electricity demand. The variable that they use in the nonparametric part of their model is temperature, which is known to have an important nonlinear effect on demand. METHODOLOGY The title of this subsection could cover most of Granger’s work; however, we use it to discuss a set of six important papers that do not fit elsewhere. The first paper is Granger and Morris’s 1976 paper “Time Series Modeling and Interpretation.” This is a classic in the literatures on aggregation and measurement error. The paper contains an important theorem on the time series properties of the sum of two independent series, say ARMA(p,m) + ARMA(q,n), and considers a number of special cases of practical interest, like the sum of an AR(p) and a white noise process. A key insight in the paper is that complicated time series models might arise from aggregation. The paper also contains the seeds of Granger’s later paper (Granger 1987) on aggregation with common factors, which is discussed later. The next paper, Granger and Anderson’s “On the Invertibility of Time Series Models,” also deals with a fundamental issue in time series. Invertibility is a familiar concept in linear models. When interpreted mechanically, invertibility refers to conditions that allow the inverse of a lag polynomial to be expressed in positive powers of the backshift operator. More fundamentally, it is a set of conditions that allows the set of shocks driving a stochastic process to be recovered from current and lagged realizations of the observed data. In linear models, the set of conditions is the same, but in nonlinear models they are not. Granger and Anderson make this point, propose the relevant definition of invertibility appropriate for both linear and nonlinear models, and discuss conditions that ensure invertibility for some specific examples. The third paper in this section is Granger’s “Near Normality and Some Econometric Models.” This paper contains exact small sample versions of the central limit theorem. Granger’s result is really quite amazing: Suppose x and y are two independent and identically distributed (i.i.d.) random variables and let z be a linear combination of x and y. Then the distribution of z is closer to the normal than the distribution of x and y (where the notion of “closer” is defined in terms of cumulants of the random variables). The univariate version of this result is contained in Granger (1977), and the multivariate generalization is given in the paper included here. The theorem in this paper shows that a bivari-
ate process formed by a weighted sum of bivariate vectors whose components are i.i.d. is generally nearer-normal than its constituents, and the components of the vector will be nearer-uncorrelated. The fourth paper, "The Time Series Approach to Econometric Model Building," is a paper joint with Paul Newbold. It was published in 1977, a time when the merits of Box-Jenkins-style time series analysis versus classical econometric methods were being debated among econometricians. Zellner and Palm (1974) is a classic paper in the area. Both papers tried to combine the insights of the Box-Jenkins approach with the structural approach to simultaneous equations modeling advocated by the Cowles Foundation. The combination of time series techniques with macroeconomic modeling received so much attention in the 1970s that it probably seems a natural approach to econometricians trained over the last two decades. Work by Sims (1980) on vector autoregression (VAR) models, the rational expectation approach in econometrics pursued by Hansen and Sargent (1980), and numerous other papers are clearly a result of and in various ways a synthesis of this debate. Of much more recent vintage is the next paper in this subsection, entitled "Comments on the Evaluation of Policy Models," joint with Deutsch (1992). In this paper, the authors advocate the use of rigorous econometric analysis when constructing and evaluating policy models and note that this approach has been largely neglected both by policy makers and by econometricians. The final paper in this section is Granger's 1987 paper, "Implications of Aggregation with Common Factors." This paper concerns the classic problem of aggregation of microeconomic relationships into aggregate relationships. The paper deals almost exclusively with linear microeconomic models so that answers to the standard aggregation questions are transparent. (For example, the aggregate relationship is linear, with coefficients representing averages of the coefficients across the micropopulation.) The important lessons from this paper don't deal with these questions but rather with the implications of approximate aggregation. Specifically, Granger postulates a microeconomic environment in which individuals' actions are explained by both idiosyncratic and common factors. Idiosyncratic factors are the most important variables explaining the microeconomic data, but these factors are averaged out when the microrelations are aggregated so that the aggregated data depend almost entirely on the common factors. Because the common factors are not very important for the microdata, an econometrician using microdata could quite easily decide that these factors are not important and not include them in the micromodel. In this case, the aggregate model constructed from the estimated micromodel would be very misspecified. Because macroeconomists are now beginning to rely on microdatasets in their empirical work, this is a timely lesson.
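The approximate-aggregation point is easy to illustrate numerically. The following is a minimal simulation sketch, not code from the paper: the one-factor structure, the loading values, and the variable names (f for the common factor, eps for the idiosyncratic factors) are illustrative assumptions chosen only to make the mechanism visible.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 1_000, 200        # number of micro units and time periods (illustrative)

f = rng.normal(size=T)                     # common factor
betas = 0.2 + 0.1 * rng.random(N)          # small loadings on the common factor
eps = rng.normal(scale=2.0, size=(N, T))   # dominant idiosyncratic variation

y = betas[:, None] * f + eps               # micro data, one row per unit
Y = y.mean(axis=0)                         # aggregate series

def r2_on_factor(series):
    """R-squared from regressing a series on a constant and the common factor."""
    X = np.column_stack([np.ones(T), f])
    resid = series - X @ np.linalg.lstsq(X, series, rcond=None)[0]
    return 1 - resid.var() / series.var()

print("median micro R^2 on the common factor:",
      np.median([r2_on_factor(y[i]) for i in range(N)]))
print("aggregate R^2 on the common factor:   ", r2_on_factor(Y))
```

In the simulation the common factor explains only a percent or two of the variance of any individual micro series, yet it explains nearly all of the variance of the aggregate, because the idiosyncratic components average out across the thousand units. This is exactly the situation in which a micro-level specification search could discard the factor that dominates the aggregate relationship.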
FORECASTING By the time this book is published, Granger will be in his sixth decade of active research in the area of forecasting.1 In essence, forecasting is based on the integration of three tasks: model specification and construction, model estimation and testing, and model evaluation and selection. Granger has contributed extensively in all three, including classics in the areas of forecast evaluation, forecast combination, data transformation, aggregation, seasonality and forecasting, and causality and forecasting. Some of these are reproduced in this section.2 One of Granger's earliest works on forecasting serves as a starting point for this section of Volume I. This is his 1959 paper, "Estimating the Probability of Flooding on a Tidal River," which could serve as the benchmark example in a modern cost-benefit analysis text because the focus is on predicting the number of floods per century that can be expected on a tidal stretch. This paper builds on earlier work by Gumbel (1958), where estimates for nontidal flood plains are provided. The paper illustrates the multidisciplinary flavor of much of Granger's work. The second paper in this section is entitled "Prediction with a Generalized Cost of Error Function" (1969). This fundamental contribution highlights the restrictive nature of quadratic cost functions and notes that practical economic and management problems may call instead for the use of nonquadratic and possibly nonsymmetric loss functions. Granger illuminates the potential need for such generalized cost functions and proposes an appropriate methodology for implementing such functions. For example, the paper discusses the importance of adding a bias term to predictors, a notion that is particularly important for model selection. This subject continues to receive considerable attention in economics (see, for example, Christoffersen and Diebold 1996, 1997; Hoffman and
1. His first published paper in the field was in the prestigious Astrophysical Journal in 1957 and was entitled "A Statistical Model for Sunspot Activity."
2. A small sample of important papers not included in this section are Granger (1957, 1967); Granger, Kamstra, and White (1989); Granger, King, and White (1995); Granger and Sin (1997); Granger and Nelson (1979); and Granger and Thompson (1987). In addition, Granger has written seven books on the subject, including Spectral Analysis of Economic Time Series (1964, joint with M. Hatanaka), Predictability of Stock Market Prices (1970, joint with O. Morgenstern), Speculation, Hedging and Forecasts of Commodity Prices (1970, joint with W. C. Labys), Trading in Commodities (1974), Forecasting Economic Time Series (1977, joint with P. Newbold), Forecasting in Business and Economics (1980), and Modelling Nonlinear Dynamic Relationships (1993, joint with T. Teräsvirta). All these books are rich with ideas. For example, Granger and Newbold (1977) discuss a test for choosing between two competing forecasting models based on an evaluation of prediction errors. Recent papers in the area that propose tests similar in design and purpose to that discussed by Granger and Newbold include Corradi, Swanson, and Olivetti (1999); Diebold and Mariano (1995); Fair and Shiller (1990); Kolb and Stekler (1993); Meese and Rogoff (1988); Mizrach (1991); West (1996); and White (1999).
Rasche 1996; Leitch and Tanner 1991; Lin and Tsay 1996; Pesaran and Timmerman 1992, 1994; Swanson and White 1995, 1997; Weiss 1996). A related and subsequent paper entitled "Some Comments on the Evaluation of Economic Forecasts" (1973, joint with Newbold) is the third paper in this section. In this paper, generalized cost functions are elucidated, forecast model selection tests are outlined, and forecast efficiency in the sense of Mincer and Zarnowitz (1969) is discussed. The main focus of the paper, however, is the assertion that satisfactory tests of model performance should require that a "best" model produce forecasts, which cannot be improved upon by combination with (multivariate) Box-Jenkins-type forecasts. This notion is a precursor to so-called forecast encompassing and is related to Granger's ideas about forecast combination, a subject to which we now turn our attention. Three papers in this section focus on forecast combination, a subject that was introduced in the 1969 Granger and Bates paper, "The Combination of Forecasts." This paper shows that the combination of two separate sets of airline passenger forecasts yields predictions that mean-square-error dominate each of the original sets of forecasts. That combined forecasts yield equal or smaller error variance is shown in an appendix to the paper. This insight has led to hundreds of subsequent papers, many of which concentrate on characterizing data-generating processes for which this feature holds, and many of which generalize the framework of Granger and Bates. A rather extensive review of the literature in this area is given in Clemen (1989) (although many papers have been subsequently published). The combination literature also touches on issues such as structural change, loss function design, model misspecification and selection, and forecast evaluation tests. These topics are all discussed in the two related papers that we include in this section – namely, "Invited Review: Combining Forecasts – Twenty Years Later" (1989) and "The Combination of Forecasts Using Changing Weights" (1994, joint with M. Deutsch and T. Teräsvirta). The former paper has a title that is self-explanatory, while the latter considers changing weights associated with the estimation of switching and smooth transition regression models – two types of nonlinear models that are currently receiving considerable attention.
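The combination result just described can be reproduced with a few lines of code. The sketch below is a minimal illustration in the spirit of the variance-minimizing combination of two unbiased forecasts, not code from any of the reprinted papers; the simulated target series, the two artificial "forecasts," and the weight formula written in terms of estimated error variances and covariance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Target variable and two unbiased but noisy forecasts of it.
y = rng.normal(size=n)
f1 = y + rng.normal(scale=1.0, size=n)   # forecast 1, larger error variance
f2 = y + rng.normal(scale=0.7, size=n)   # forecast 2, smaller error variance

e1, e2 = y - f1, y - f2
s11, s22, s12 = e1.var(), e2.var(), np.cov(e1, e2)[0, 1]

# Variance-minimizing weight on forecast 1.
w = (s22 - s12) / (s11 + s22 - 2.0 * s12)
fc = w * f1 + (1.0 - w) * f2

print("MSE forecast 1:", np.mean(e1**2))
print("MSE forecast 2:", np.mean(e2**2))
print("MSE combined:  ", np.mean((y - fc)**2))
```

With independent forecast errors, as simulated here, the combined mean squared error falls well below that of the better individual forecast; with correlated errors the same weight formula still yields, in population, a combination that is no worse than either component.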
The literature on data transformation in econometrics is extensive, and it is perhaps not surprising that one of the early forays in the area is Granger and Newbold's "Forecasting Transformed Series" (1976). In this paper, general autocovariance structures are derived for a broad class of stationary Gaussian processes, which are transformed via some function that can be expanded by using Hermite polynomials. In addition, Granger and Newbold point out that the Box and Cox (1964) transformation often yields variables that are "near-normal," for example, making subsequent analysis more straightforward. (A more recent paper in this area, which is included in Volume II, is Granger and Hallman 1991.) The sixth paper in this part of Volume I is entitled "Forecasting White Noise." Here, Granger illustrates the potential empirical pitfalls associated with loose interpretation of theoretical results. His main illustration focuses on the commonly believed fallacy that: "The objective of time series analysis is to find a filter which, when applied to the series being considered, results in white noise." Clearly such a statement is oversimplistic, and Granger illustrates this by considering three different types of white noise, and blending in causality, data transformation, Markov chains, deterministic chaos, nonlinear models, and time-varying parameter models. The penultimate paper in this section, "Can We Improve the Perceived Quality of Economic Forecasts?" (1996), focuses on some of the fundamental issues currently confronting forecasters. In particular, Granger expounds on what sorts of loss functions we should be using, what sorts of information and information sets may be useful, and how forecasts can be improved in quality and presentation (for example, by using 50% rather than 95% confidence intervals). The paper is dedicated to the path-breaking book by Box and Jenkins (1970) and is a lucid piece that is meant to encourage discussion among practitioners of the art. The final paper in Volume I is entitled "Short-Run Forecasts of Electricity Loads and Peaks" (1997) and is meant to provide the reader of this volume with an example of how to correctly use current forecasting methodology in economics. In this piece, Ramanathan, Engle, Granger, Vahid-Araghi, and Brace implement a short-run forecasting model of hourly electrical utility system loads, focusing on model design, estimation, and evaluation.
Volume II CAUSALITY Granger’s contributions to the study of causality and causal relationships in economics are without a doubt among some of his most well known. One reason for this may be the importance in so many fields of research of answering questions of the sort: What will happen to Y if X falls? Another reason is that Granger’s answers to these questions are elegant mathematically and simple to apply empirically. Causality had been considered in economics before Granger’s 1969 paper entitled “Investigating Causal Relations by Econometric Models and Cross-Spectral
Methods" (see, for example, Granger 1963; Granger and Hatanaka 1964; Hosoya 1977; Orcutt 1952; Simon 1953; Wiener 1956). In addition, papers on the concept of causality and on causality testing also appeared (and continue to appear) after Granger's classic work (see, for example, Dolado and Lütkepohl 1994; Geweke 1982; Geweke et al. 1983; Granger and Lin 1994; Hoover 1993; Sims 1972; Swanson and Granger 1997; Toda and Phillips 1993, 1994; Toda and Yamamoto 1995; Zellner 1979, to name but a very few). However, Granger's 1969 paper is a cornerstone of modern empirical causality analysis and testing. For this reason, Volume II begins with his 1969 contribution. In the paper, Granger uses cross-spectral methods as well as simple bivariate time series models to formalize and to illustrate a simple, appealing, and testable notion of causality. Much of his insight is gathered in formal definitions of causality, feedback, instantaneous causality, and causality lag. These four definitions have formed the basis for virtually all the research in the area in the last thirty years and will probably do so for the next thirty years. His first definition says that ". . . Yt causes Xt if we are able to better predict Xt using all available information than if the information apart from Yt had been used" (Granger 1969, p. 428). It is, thus, not surprising that many forecasting papers post Granger (1969) have used the "Granger causality test" as a basic tool for model specification. It is also not surprising that economic theories are often compared and evaluated using Granger causality tests. In the paper, Granger also introduces the important concept of instantaneous causality and stresses how crucial sampling frequency and aggregation are, for example. All this is done within the framework of recently introduced (into economics by Granger and Hatanaka 1964) techniques of spectral analysis.
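The prediction-based definition quoted above translates directly into a regression comparison: if lags of Y improve the prediction of X beyond what lags of X alone provide, Y is said to Granger-cause X. The sketch below is a minimal illustration of that idea, not code from any of the reprinted papers; the simulated bivariate system, the lag length of two, and the use of a standard F statistic for the exclusion restrictions are illustrative assumptions (statsmodels' grangercausalitytests offers a packaged version of the same comparison).

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 2                        # sample size and lag length (illustrative)

# Simulate a system in which y Granger-causes x but not vice versa.
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal()
    x[t] = 0.3 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

def lagmat(v, p, t0, t1):
    """Columns are v lagged 1..p, aligned with observations t0..t1-1."""
    return np.column_stack([v[t0 - k:t1 - k] for k in range(1, p + 1)])

t0, t1 = p, n
X_r = np.column_stack([np.ones(t1 - t0), lagmat(x, p, t0, t1)])  # own lags only
X_u = np.column_stack([X_r, lagmat(y, p, t0, t1)])               # add lags of y
target = x[t0:t1]

rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
rss_r, rss_u = rss(X_r), rss(X_u)

# F statistic for jointly excluding the p lags of y from the equation for x.
df = t1 - t0 - X_u.shape[1]
F = ((rss_r - rss_u) / p) / (rss_u / df)
print("F statistic for 'y does not Granger-cause x':", F)  # large value -> reject
```

Running the same comparison with the roles of x and y reversed gives a small F statistic, so the procedure recovers the one-directional causal structure built into the simulation.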
The next paper in this part of Volume II, "Testing for Causality: A Personal Viewpoint" (1980), contains a number of important additional contributions that build on Granger (1969) and outlines further directions for modern time series analysis (many of which have subsequently been adopted by the profession). The paper begins by axiomatizing a concept of causality. This leads to a formal probabilistic interpretation of Granger (1969), in terms of conditional distribution functions, which is easily operationalized to include universal versus not universal information sets (for example, "data inadequacies"), and thus leads to causality tests based on conditional expectation and/or variance, for example. In addition, Granger discusses the philosophical notion of causality and the roots of his initial interest and knowledge in the area. His discussion culminates with careful characterizations of so-called instantaneous and spurious causality. Finally, Granger emphasizes the use of post-sample data to confirm causal relationships found via in-sample Wald and Lagrange multiplier tests. Continuing with his methodological contributions, the third paper, "Some Recent Developments in a Concept of Causality" (1986), shows that if two I(1) series are cointegrated, then there must be Granger causation in at least one direction. He also discusses the use of causality tests for policy evaluation and revisits the issue of instantaneous causality, noting that three obvious explanations for apparent instantaneous causality are that: (i) variables react without any measurable time delay, (ii) the time interval over which data are collected is too large to capture causal relations properly, or that temporal aggregation leads to apparent instantaneous causation, and (iii) the information set is incomplete, thus leading to apparent instantaneous causality. It is argued that (ii) and (iii) are more plausible, and examples are provided. This section closes with a frequently cited empirical investigation entitled "Advertising and Aggregate Consumption: An Analysis of Causality" (1980). The paper is meant to provide the reader with an example of how to correctly use the concept of causality in economics. In this piece, Ashley, Granger, and Schmalensee stress the importance of out-of-sample forecasting performance in the evaluation of alternative causal systems and provide interesting evidence that advertising does not cause consumption but that consumption may cause advertising. INTEGRATION AND COINTEGRATION Granger's "typical spectral shape" implies that most economic time series are dominated by low-frequency variability. Because this variability can be modeled by a unit root in a series' autoregressive polynomial, the typical spectral shape provides the empirical motivation for work on integrated, long memory, and cointegrated processes. Granger's contributions in this area are usefully organized into four categories. The first contains research focused on the implications of this low-frequency variability for standard econometric methods, and the Granger and Newbold work on spurious regressions is the most notable contribution in this category. The second includes Granger's research on linear time series models that explain the joint behavior of low-frequency components for a system of economic time series. His development of the idea of cointegration stands out here. The third category contains both empirical contributions and detailed statistical issues arising in cointegrated systems (like "trend" estimation). Finally, the fourth category contains his research on extending cointegration in time-invariant linear systems to nonlinear and time-varying systems. Papers representing his work in each of these categories are included in this section of Volume II. The first paper in this section is the classic 1974 Granger and Newbold paper "Spurious Regressions in Econometrics," which contains what is arguably the most influential Monte Carlo study in econometrics. (The closest competitor that comes to our mind is the experiment reported in Slutsky 1927.) The Granger-Newbold paper shows that linear regressions involving statistically independent, but highly persistent random variables will often produce large "t-statistics" and sample R²s.
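The flavor of that Monte Carlo experiment is easy to reproduce. The sketch below is a minimal reconstruction in the spirit of the Granger-Newbold design rather than a replication of their exact setup: it regresses one independent, driftless random walk on another (with an intercept), and records how often the conventional t statistic on the slope would spuriously reject a zero coefficient at the nominal 5% level. The series length, number of replications, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_reps = 100, 2_000     # series length and number of replications (illustrative)

reject, r2 = 0, []
for _ in range(n_reps):
    # Two completely independent driftless random walks.
    y = np.cumsum(rng.normal(size=T))
    x = np.cumsum(rng.normal(size=T))

    # OLS of y on a constant and x, with the usual t statistic on the slope.
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (T - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])

    reject += abs(beta[1] / se_b) > 1.96
    r2.append(1 - resid.var() / y.var())

print("fraction of |t| > 1.96:", reject / n_reps)   # far above the nominal 0.05
print("average R-squared:     ", np.mean(r2))
```

Even though the two series are unrelated by construction, the conventional test rejects in the large majority of replications, which is exactly the pathology Granger and Newbold documented.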
The results reported in this paper showed that serial correlation in the regression error together with serial correlation in the regressor have disastrous effects on the usual procedures of statistical inference. The basic result was known (Yule 1926), but the particulars of Granger and Newbold's experiments were dramatic and unexpected. Indeed, in his ET Interview, Granger reminisces about giving a seminar on the topic at the London School of Economics (LSE), where some of the most sophisticated time-series econometricians of the time found the Granger-Newbold results incredible and suggested that he check his computer code. The paper had a profound impact on empirical work because, for example, researchers could no longer ignore low Durbin-Watson statistics. One of the most insightful observations in the paper is that, when considering the regression y = xb + e, the null hypothesis b = 0 implies that e has the same serial properties as y, so that it makes little sense constructing a t-statistic for this null hypothesis without worrying about serial correlation. The basic insight that both sides of an equation must have the same time series properties shows up repeatedly in Granger's work and forms the basis of what he calls "consistency" in his later work. The Granger-Newbold spurious regression paper touched off a fertile debate on how serial correlation should be handled in regression models. Motivated by the typical spectral shape together with the likelihood of spurious regressions in levels regressions, Granger and Newbold suggested that applied researchers specify regressions using the first-differences of economic time series. This advice met with skepticism. There was an uneasy feeling that even though first-differencing would guard against the spurious regression problem, it would also eliminate the dominant low-frequency components of economic time series, and it was the interaction of these components that researchers wanted to measure with regression analysis. In this sense, first-differencing threw the baby out with the bath water. Hendry and Mizon (1978) provided a constructive response to the Granger-Newbold spurious regression challenge with the suggestion that time series regression models be specified as autoregressive distributed lags in levels (that is, a(B)y_t = c(B)x_t + e_t). In this specification, the first-difference restriction could be viewed as a common factor of (1 - B) in the a(B) and c(B) lag polynomials, and this restriction could be investigated empirically. These autoregressive distributed lag models could also be rewritten in error-correction form, which highlighted their implied relationship between the levels of the series (useful references for this include Sargan 1964; Hendry, Pagan, and Sargan 1981; and Hendry 1995). This debate led to Granger's formalization of cointegration (see ET Interview, page 274). His ideas on the topic were first exposited in his 1981 paper "Some Properties of Time Series Data and Their Use in Econometric Model Specification," which is included as the second paper
Hendry and Mizon (1978) provided a constructive response to the Granger-Newbold spurious regression challenge with the suggestion that time series regression models be specified as autoregressive distributed lags in levels (that is, a(B)yt = c(B)xt + et). In this specification, the first-difference restriction could be viewed as a common factor of (1 - B) in the a(B) and c(B) lag polynomials, and this restriction could be investigated empirically. These autoregressive distributed lag models could also be rewritten in error-correction form, which highlighted their implied relationship between the levels of the series (useful references for this include Sargan 1964; Hendry, Pagan, and Sargan 1984; and Hendry 1995). This debate led to Granger's formalization of cointegration (see ET Interview, page 274). His ideas on the topic were first exposited in his 1981 paper "Some Properties of Time Series Data and Their Use in Econometric Model Specification," which is included as the second paper
in this section of Volume II. The paper begins with a discussion of consistency between the two sides of the previously mentioned equation. Thus, if y = xb + e and x contains important seasonal variation and e is white noise that is unrelated to x, then y must also contain important seasonal variation. The paper is most notable for its discussion of consistency with regard to the order of integration of the variables and the development of "co-integration," which appears in Section 4 of the paper. (As it turns out, the term was used so much in the next five years that by the mid-1980s the hyphen had largely disappeared and co-integration became cointegration.) The relationship between error-correction models and cointegration is mentioned, and it is noted that two cointegrated variables have a unit long-run correlation. The paper probably contains Granger's most prescient statements. For example, in discussing the "special case" of the autoregressive distributed lag that gives rise to a cointegrating relation, he states: "Although it may appear to be very special, it also seems to be potentially important." And after giving some examples of cointegrated variables, he writes: "It might be interesting to undertake a wide-spread study to find out which pairs of economic variables are co-integrated."
Granger expanded on his cointegration ideas in his 1983 paper "Time Series Analysis of Error Correction Models" with Weiss, which is included as the third paper in this section. This paper makes three important contributions. First, it further explores the link between error-correction models and cointegration (focusing primarily on bivariate models). Second, it introduces methods for testing for cointegration. These include the residual-based tests developed in more detail in Engle and Granger's later paper and the tests that were analyzed several years later by Horvath and Watson (1995). The paper does not tackle the unit-root distribution problems that arise in the tests (more on this later) and instead suggests practical "identification" procedures analogous to those used in Box-Jenkins model building. The final contribution of the paper is an application of cointegration to three classic economic relations, each of which was studied in more detail by later researchers using "modern" cointegration methods. The first application considered employee income and national income (in logarithms) and, thus, focused on labor's share of national income, one of the "Great Ratios" investigated earlier by Kosobud and Klein (1961) using other statistical methods. The second application considered money and nominal income, where Granger and Weiss found little evidence supporting cointegration. Later researchers added nominal interest rates to this system, producing a long-run money demand relation, and found stronger evidence of cointegration (Baba, Hendry, and Star 1992; Hoffman and Rasche 1991; Stock and Watson 1993). The third application considered the trivariate system of nominal wages, prices, and productivity, which was studied in more detail a decade later by Campbell and Rissman (1994).
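The residual-based testing idea just described (and formalized in the Engle-Granger paper discussed below) can be sketched in a few lines. The following is illustrative only: it simulates a cointegrated pair, estimates the static cointegrating regression by least squares, and applies a Dickey-Fuller-type test to the residuals. Note that the standard Dickey-Fuller critical values used by `adfuller` are appropriate for an observed series, not for estimated residuals, so in practice the Engle-Granger critical values should be used instead.

```python
# Sketch of a residual-based (Engle-Granger two-step) cointegration check on
# simulated data.  Step 1: OLS of y on x estimates the cointegrating vector.
# Step 2: a unit-root test on the residuals asks whether the combination is I(0).
# The standard Dickey-Fuller critical values are not strictly correct here,
# because the residuals are estimated; this is for illustration only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
T = 500
x = np.cumsum(rng.standard_normal(T))        # common I(1) component
y = 2.0 * x + rng.standard_normal(T)         # y and x are cointegrated by construction

X = np.column_stack([np.ones(T), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None) # step 1: static cointegrating regression
z = y - X @ beta                             # estimated equilibrium error

stat = adfuller(z, regression="c")[0]        # step 2: unit-root test on residuals
print("estimated cointegrating coefficient:", round(beta[1], 3))
print("ADF statistic on the residuals:", round(stat, 2))
```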
The now-classic reference on cointegration, Engle and Granger's "Co-Integration and Error-Correction: Representation, Estimation and Testing," is included as the fourth paper in this section. This paper is so well known that, literally, it needs no introduction. The paper includes "Granger's Representation Theorem," which carefully lays out the connection between moving average and vector error correction representations for cointegrated models involving I(1) variables. It highlights the nonstandard statistical inference issues that arise in cointegrated models including unit roots and unidentified parameters. Small sample critical values for residual-based cointegration tests are given, and asymptotically efficient estimators for I(0) parameters are developed (subsequently known as Engle-Granger two-step estimators). The paper also contains a short, but serious, empirical section investigating cointegration between consumption and income, long-term and short-term interest rates, and money and nominal income.
Granger's 1986 "Developments in the Study of Cointegrated Economic Variables" is the next entry in the section and summarizes the progress made during the first five years of research on the topic. Representation theory for I(1) processes was well understood by this time, and several implications had been noted, perhaps the most surprising of which was the relationship between cointegration and causality discussed in the last subsection. (If x and y are cointegrated, then either x must Granger-cause y or the converse, and thus cointegration of asset prices is at odds with the martingale property.) Work had begun on the representation theory for I(2) processes (Johansen 1988a; Yoo 1987). Inference techniques were still in their infancy, but great strides would be made in the subsequent five years. A set of stylized cointegration facts was developing (consumption and income are cointegrated, money and nominal interest rates are not, for example). The paper ends with some new ideas on cointegration in nonlinear models and in models with time-varying coefficients. This is an area that has not attracted a lot of attention (a notable exception being Balke and Fomby 1997), primarily because of the difficult problems in statistical inference.
Cointegration is one of those rare ideas in econometrics that had an immediate effect on empirical work. It crystallized a notion that earlier researchers had tried to convey as, for example, "true regressions" (Frisch 1934), low-frequency regressions (Engle 1974), or the most predictable canonical variables from a system (Box and Tiao 1977). There is now an enormous body of empirical work utilizing Granger's cointegration framework. Some of the early work was descriptive in nature (asking, like Granger and Weiss, whether a set of variables appeared to be cointegrated), but it soon became apparent that cointegration was an implication of important economic theories, and this insight allowed researchers to test separately both the long-run and short-run implications of the specific theories. For example, Campbell and Shiller (1987)
and Campbell (1987) showed that cointegration was an implication of rational expectations versions of present value relations, making the concept immediately germane to a large number of applications including the permanent income model of consumption, the term structure of interest rates, money demand, and asset price determination, for example. The connection with error correction models meant that cointegration was easily incorporated into vector autoregressions, and researchers exploited this restriction to help solve the identification problem in these models (see Blanchard and Quah 1989; King et al. 1991, for example).
Development of empirical work went hand in hand with development of inference procedures that extended the results for univariate autoregressions with unit roots to vector systems (for example, see Chan and Wei 1987; and Phillips and Durlauf 1986). Much of this work was focused directly on the issues raised by Granger in the papers reproduced here. For example, Phillips (1986) used these new techniques to help explain the Granger-Newbold spurious regression results. Stock (1987) derived the limiting distribution of least squares estimators of cointegrating vectors, showing that the estimated coefficients were T-consistent. Phillips and Ouliaris (1990) derived asymptotic distributions of residual-based tests for cointegration. Using the vector error-correction model, Johansen (1988b) and Ahn and Reinsel (1990) developed Gaussian maximum likelihood estimators and derived the asymptotic properties of the estimators. Johansen (1988b) derived likelihood-based tests for cointegration. Many refinements of these procedures followed during the late 1980s and early 1990s (Phillips 1991; Saikkonen 1991; Stock and Watson 1993, to list a few examples from a very long list of contributions), and by the mid-1990s a rather complete guide to specification, estimation, and testing in cointegrated models appeared in textbooks such as Hamilton (1994) and Hendry (1995).
During this period, Granger and others were extending his cointegration analysis in important directions. One particularly useful extension focused on seasonality, and we include Hylleberg, Engle, Granger, and Yoo's "Seasonal Integration and Cointegration" as the next paper in this section. A common approach to univariate modeling of seasonal series is to remove the seasonal and trend components by taking seasonal differences. For example, for quarterly data, this involves filtering the data using (1 - B^4). This operation explicitly incorporates (1 - B^4) into the series' autoregressive polynomial and implies that the autoregression will contain four unit roots: two real roots associated with frequencies 0 and π and a complex conjugate pair associated with frequency π/2. Standard cointegration and unit-root techniques focus only on the zero-frequency unit root; the Hylleberg et al. paper discusses the complications that arise from the remaining three unit roots.
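To see where these four roots come from, note that 1 - B^4 = (1 - B)(1 + B)(1 + B^2), with roots at 1, -1, and ±i, that is, at frequencies 0, π, and π/2. The short sketch below (the simulated data and variable names are ours) verifies the factorization numerically and constructs the filtered series on which the regression-based tests described next operate.

```python
# The quarterly seasonal-difference filter 1 - B^4 has four unit roots:
# 1 (frequency 0), -1 (frequency pi), and the complex pair +/- i (frequency pi/2).
import numpy as np

roots = np.roots([-1, 0, 0, 0, 1])   # coefficients of -z^4 + 1, highest power first
print(np.sort_complex(roots))        # [-1, -i, +i, 1]

# Filtered series used in the HEGY-type regressions (quarterly case):
# y1 isolates the root at frequency 0, y2 the root at pi, y3 the pair at pi/2.
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(200))       # placeholder quarterly series
y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]       # (1 + B + B^2 + B^3) y_t
y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])    # -(1 - B + B^2 - B^3) y_t
y3 = -(y[2:] - y[:-2])                        # -(1 - B^2) y_t
d4y = y[4:] - y[:-4]                          # (1 - B^4) y_t
# In the test itself, d4y_t is regressed on lagged values of y1, y2, and y3
# (plus deterministic terms), and the unit roots are then examined one by one.
```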
Specifically, the paper develops tests for unit roots and seasonal cointegration at
frequencies other than zero. This is done in a clever way by first expanding the autoregressive polynomial in a partial fraction expansion with terms associated with each of the unit roots. This simplifies the testing problem because it makes it possible to apply standard regression-based tests to filtered versions of the series. This paper has led to the so-called HEGY approach of testing for seasonal roots separately. It has been extended in several ways notably by Ghysels et al. (1994) who built joint tests, such as testing for the presence of all seasonal unit roots, based on the HEGY regressions. Many of Granger’s papers include empirical examples of the proposed techniques, but only occasionally is the empirical analysis the heart of the paper. One notable exception is “A Cointegration Analysis of Treasury Bill Yields,” with Hall and Anderson, which is included as the sixth paper in this section. The framework for the paper is the familiar expectations theory of the term structure. There are two novelties: first, the analysis is carried out using a large number of series (that is, twelve series), and second, the temporal stability of the cointegrating relation is investigated. The key conclusion is that interest-rate spreads on 1–12 month U.S. Treasury Bills appear to be I(0) except during the turbulent 1979–82 time period. A natural way to think about cointegrated systems is in terms of underlying, but unobserved, persistent, and transitory components. The persistent factors capture the long-memory or low-frequency variability in the observed series, and the transitory factors explain the shorter memory or high-frequency variation. In many situations, the persistent components correspond to interesting economic concepts (“trend” or “permanent” income, aggregate productivity, “core” inflation, and so on) Thus, an important question is how to estimate these components from the observed time series, and this is difficult because there is no unique way to carry out the decomposition. One popular decomposition associates the persistent component with the long-run forecasts in the observed series and the transitory component with the corresponding residual (Beveridge and Nelson 1981). This approach has limitations: notably the persistent component is, by construction, a martingale, and the innovations in the persistent and the transitory components are correlated. In the next two papers included in this section, Granger takes up this issue. The first paper, “Estimation of Common Long-Memory Components in Cointegrated Systems,” was written with Gonzalo. They propose a decomposition that has two important characteristics: first, both components are a function only of the current values of the series, and second, innovations in the persistent components are uncorrelated with the innovations in the transitory component. In the second paper, “Separation in Cointegrated Systems and Persistent-Transitory Decompositions” (with N. Haldrup), Granger takes up the issue of estimation of these components in large systems. The key question is
whether the components might be computed separately for groups of series so that the components could then be analyzed separately without having to model the entire system of variables. Granger and Haldrup present conditions under which this is possible. Unfortunately the conditions are quite stringent so that few simplifications surface for applied researchers.
The final three papers in this section focus on nonlinear generalizations of cointegration. The first two of these are joint works with Hallman. In "Nonlinear Transformations of Integrated Time Series," Granger and Hallman begin with integrated and cointegrated variables and ask whether nonlinear functions of the series will also appear to be integrated and cointegrated. The problem is complex, and few analytic results are possible. However, the paper includes several approximations and simulations that are quite informative. One of the most interesting results in the paper is a simulation that suggests that Dickey-Fuller tests applied to the ranks of Gaussian random walks have well-behaved limiting distributions. This is important, of course, because statistics based on ranks are invariant to all monotonic transformations applied to the data. In their second paper, "Long Memory Series with Attractors," Granger and Hallman discuss nonlinear attractors (alternatively I(0) nonlinear functions of stochastically trending variables) and experiment with semiparametric methods for estimating these nonlinear functions. The last paper, "Further Developments in the Study of Cointegrated Variables," with Swanson, is a fitting end to this section. It is one of Granger's "seed" papers – overflowing with ideas and, as stated in the first paragraph, raising "more questions than it solves." Specifically, the paper not only discusses time-varying parameter models for cointegration and their implications for time variation in vector error-correction models, how nonlinear cointegrated models can arise as solutions to nonlinear optimization problems, and models for nonlinear leading indicator analysis but also contains a nonlinear empirical generalization of the analysis in King et al. (1991). No doubt, over the next decade, a few of these seeds will germinate and create their own areas of active research.
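The rank experiment mentioned above, from Granger and Hallman's "Nonlinear Transformations of Integrated Time Series," is easy to replicate in rough form. A minimal sketch, assuming the `statsmodels` Dickey-Fuller implementation: compute the ADF statistic for a simulated Gaussian random walk, for its ranks, and for the ranks of a monotonic transformation of the same walk; the last two coincide because ranks are invariant to monotonic transformations.

```python
# Rough replication of the rank idea: apply an augmented Dickey-Fuller test to a
# Gaussian random walk, to the ranks of the walk, and to the ranks of a monotone
# transformation of the walk.  The last two statistics coincide exactly, because
# ranks are unchanged by monotonic transformations of the data.
import numpy as np
from scipy.stats import rankdata
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
w = np.cumsum(rng.standard_normal(500))              # Gaussian random walk
print("ADF, levels:              ", round(adfuller(w)[0], 3))
print("ADF, ranks:               ", round(adfuller(rankdata(w))[0], 3))
print("ADF, ranks of exp(w / 10):", round(adfuller(rankdata(np.exp(w / 10.0)))[0], 3))
```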
LONG MEMORY

Even though integrated variables have been widely used in empirical work, they represent a fairly narrow class of models capable of generating Granger's typical spectral shape. In particular, it has been noted that autocorrelation functions of many time series exhibit a slow hyperbolic decay rate. This phenomenon, called long memory or sometimes also called long-range dependence, is observed in geophysical data, such as river flow data (see Hurst 1951, 1956; Lawrence and Kottegoda 1977) and in climatological series (see Hipel and McLeod 1978a, 1978b; Mandelbrot and Wallis 1968) as well as in economic time series
(Adelman 1965; Mandelbrot 1963). In two important papers, Granger extends these processes to provide more flexible low-frequency or long-memory behavior by considering I(d) processes with noninteger values of d. The first of these papers, Granger and Joyeux's (1980) "An Introduction to Long-Memory Time Series Models and Fractional Differencing," is related to earlier work by Mandelbrot and Van Ness (1968) describing fractional Brownian motion. Granger and Joyeux begin by introducing the I(d) process (1 - B)^d yt = et for noninteger d. They show that the process is covariance stationary when d < 1/2 and derive the autocorrelations and spectrum of the process. Interestingly, the autocorrelations die out at a rate τ^(2d-1) for large τ, showing that the process has a much longer memory than stationary finite-order ARMA processes (whose autocorrelations die out at rate ρ^τ, where |ρ| < 1). In the second of these papers, "Long Memory Relationships and the Aggregation of Dynamic Models," Granger shows how this long-memory process can be generated by a large number of heterogeneous AR(1) processes. This aggregation work continues to intrigue researchers, as evidenced by recent extensions by Lippi and Zaffaroni (1999).
Empirical work investigating long-memory processes was initially hindered by a lack of statistical methods for estimation and testing, but methods now have been developed that are applicable in fairly general settings (for example, see Robinson 1994, 1995; Lobato and Robinson 1998). In addition, early empirical work in macroeconomics and finance found little convincing evidence of long memory (see Lo 1991, for example). However, a new flurry of empirical work has found strong evidence for long memory in the absolute value of asset returns. One of the most important empirical contributions is the paper by Ding, Granger, and Engle, "A Long Memory Property of the Stock Market Returns and a New Model," which is included as the last paper in this section. Using daily data on S&P 500 stock returns from 1928 to 1991, this paper reports autocorrelations of the absolute values of returns that die out very slowly and remain significantly greater than zero beyond lags of 100 periods. This finding seems to have become a stylized fact in empirical finance (see Andersen and Bollerslev 1998; Lobato and Savin 1998) and serves as the empirical motivation for a large number of recent papers.
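The fractional-difference filter is easy to work with numerically: the moving-average weights of (1 - B)^(-d) satisfy ψ_0 = 1 and ψ_k = ψ_{k-1}(k - 1 + d)/k, so an approximate ARFIMA(0, d, 0) series can be simulated by truncating this expansion. The sketch below (the truncation point, sample size, and the AR(1) used for comparison are our choices) illustrates the slow decay of the sample autocorrelations relative to a short-memory process.

```python
# Simulate an approximate ARFIMA(0, d, 0) series, y_t = (1 - B)^(-d) e_t, by
# truncating the moving-average weights psi_k = psi_{k-1} * (k - 1 + d) / k, and
# compare its sample autocorrelations with those of a short-memory AR(1).
import numpy as np

def ma_weights(d, n):
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return psi

def acf(x, lags):
    x = x - x.mean()
    denom = x @ x
    return [round(float(x[k:] @ x[:-k] / denom), 3) for k in lags]

rng = np.random.default_rng(4)
d, T, trunc = 0.3, 5000, 1000
e = rng.standard_normal(T + trunc)
y = np.convolve(e, ma_weights(d, trunc))[trunc - 1 : trunc - 1 + T]

ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + e[trunc + t - 1]

lags = [1, 10, 50, 100]
print("ARFIMA(0, 0.3, 0) ACF:", acf(y, lags))     # dies out hyperbolically
print("AR(1), rho = 0.5 ACF: ", acf(ar1, lags))   # dies out geometrically
```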
REFERENCES
Adelman, I., 1965, Long Cycles: Fact or Artifact? American Economic Review, 55, 444–63.
Ahn, S. K., and G. C. Reinsel, 1990, Estimation of Partially Nonstationary Autoregressive Models, Journal of the American Statistical Association, 85, 813–23.
Alexander, S., 1961, Price Movements in Speculative Markets: Trends or Random Walks, Industrial Management Review, 2, 7–26. 1964, “Price Movements in Speculative Markets: Trends or Random Walks, No. 2,” in P. Cootner (ed.), The Random Character of Stock Market Prices, Massachusetts Institute of Technology Press, Cambridge, MA. Andersen, T., and T. Bollerslev, 1998, Heterogeneous Information Arrivals and Return Volatility Dynamics: Uncovering the Long-run in High Frequency Returns, Journal of Finance, 52, 975–1005. Anderson, T. W., 1971, The Statistical Analysis of Time Series, New York: Wiley. Baba, Y., D. F. Hendry, and R. M. Star, 1992, The Demand for M1 in the U.S.A., 1960–1988, Review of Economic Studies, 59, 25–61. Bachelier, L., 1900, Theory of Speculation, in P. Cootner, ed., The Random Character of Stock Market Prices, Cambridge, MA: Massachusetts Institute of Technology Press, 1964; Reprint. Balke, N., and T. B. Fomby, 1997, Threshold Cointegration, International Economic Review, 38, No. 3, 627– 45. Bell, W. R., 1984, Signal Extraction for Nonstationary Time Series, The Annals of Statistics, 12, 646–64. Beveridge, S., and C. R. Nelson, 1981, A New Approach to Decomposition of Time Series into Permanent and Tansitory Components with Particular Attention to Measurement of the “Business Cycle,” Journal of Monetary Economics, 7, 151–74. Bierens, H., 1990, Model-free Asymptotically Best Forecasting of Stationary Economic Time Series, Econometric Theory, 6, 348–83. Blanchard, O. J., and D. Quah, 1989, The Dynamic Effects of Aggregate Demand and Supply Disturbances, American Economic Review, 79, 655–73. Box, G. E. P., and D. R. Cox, 1964, An Analysis of Transformations, Journal of the Royal Statistical Society Series B, 26, 211– 43. Box, G. E. P., and G. M. Jenkins, 1970, Time Series Analysis, Forecasting and Control, San Fransisco: Holden Day. Box, G. E. P., and G. Tiao, 1977, A Canonical Analysis of Multiple Time Series, Biometrika, 64, 355–65. Burns, A. F., and W. C. Mitchell, 1947, Measuring Business Cycles, New York: National Bureau of Economic Research. Campbell, J. Y., 1987, Does Saving Anticipate Declining Labor Income, Econometrica, 55, 1249–73. Campbell, J. Y., A. W. Lo, and A. C. McKinlay, 1997, The Econometrics of Financial Markets, Princeton, NJ: Princeton University Press. Campbell, J. Y., and R. J. Shiller, 1987, Cointegration and Tests of the Present Value Models, Journal of Political Economy, 95, 1062–88. Reprinted in R. F. Engle and C. W. J. Granger, eds., Long-Run Economic Relationships, Readings in Cointegration, Oxford, Oxford University Press. Chan, N. H., and C. Z. Wei, 1987, Limiting Distributions of Least Squares Estimators of Unstable Autoregressive Processes, The Annals of Statistics, 16, 367–401. Christoffersen, P., and F. X. Diebold, 1996, Further Results on Forecasting and Model Selection Under Asymmetric Loss, Journal of Applied Econometrics, 11, 651–72.
1997, Optimal Prediction Under Asymmetric Loss, Econometric Theory, 13, 808–17. Clemen, R. T., 1989, Combining Forecasts: A Review and Annotated Bibliography, International Journal of Forecasting, 5, 559–83. Cleveland, W. P., and G. C. Tiao, 1976, Decomposition of Seasonal Time Series: A Model for the X-11 Program, Journal of the American Statistical Association, 71, 581–7. Cootner, P. (ed.), 1964, The Random Character of Stock Market Prices, Massachusetts Institute of Technology Press, Cambridge, MA. Corradi, V., N. R. Swanson, and C. Olivetti, 1999, Predictive Ability With Cointegrated Variables, Working Paper, Texas A&M University. Cowles, A., 1933, Can Stock Market Forecasters Forecast?, Econometrica, 1, 309–324. 1960, A Revision of Previous Conclusions Regarding Stock Price Behavior, Econometrica, 28, 909–915. Cramér, H., 1942, On Harmonic Analysis of Certain Function Spaces, Arkiv. Mat. Astron. Fysik, 28B, No. 12, 1–7. Davidson, J., 1994, Stochastic Limit Theory, Oxford: Oxford University Press. Delgado, M. A., and P. M. Robinson, 1992, Nonparametric and Semiparametric Methods for Economic Research, Journal of Economic Surveys, 6, 201–49. Diebold, F. X., and R. S. Mariano, 1995, Comparing Predictive Accuracy, Journal of Business and Economic Statistics, 13, 253–63. Dolado, J. J., and H. Lütkepohl, 1994, Making Wald Tests Work for Cointegrated VAR Systems, Econometric Reviews. Engle, R. F., 1974, Band Spectrum Regression, International Economic Review, 15, 1–11. 1982, Autoregressive Conditional Heteroskedasticity with Estimates of UK Inflation, Econometrica, 50, 987–1007. Fair, R. C., and R. J. Shiller, 1990, Comparing Information in Forecasts from Econometric Models, American Economic Review, 80, 375–89. Fama, E., 1965, The Behavior of Stock Market Prices, Journal of Business, 38, 34–105. 1970, Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25, 383–417. Frisch, R., 1934, Statistical Confluence Analysis by Means of Complete Regressions Systems, Oslo: Universitets, Økonomiske Institut. Fuller, W. A., 1976, Introduction to Statistical Time Series, New York: John Wiley. Gallant, A. R., and H. White, 1988, A Unified Theory of Estimation and Inference for Nonlinear Dynamics Models, New York: Basil Blackwell. Geweke, J., 1982, Measures of Linear Dependence and Feedback Between Time Series, Journal of the American Statistical Association, 77, 304–24. Geweke, J., R. Meese, and W. Dent, 1983, Comparing Alternative Tests of Causality in Temporal Systems, Journal of Econometrics, 21, 161–94. Ghysels, E., C. W. J. Granger, and P. L. Siklos, 1996, Is Seasonal Adjustment a Linear or Nonlinear Data-Filtering Process? Journal of Business and Economic Statistics, 14, 374–86. Ghysels, E., H. S. Lee, and J. Noh, 1994, Testing for Unit Roots in Seasonal TimeSeries – Some Theoretical Extensions and a Monte Carlo Investigation, Journal of Econometrics, 62, 415–42.
Granger, C. W. J., 1957, A Statistical Model for Sunspot Activity, The Astrophysical Journal, 126, 152–8. 1963, Economic Processes Involving Feedback, Information and Control, 6, 28–48. 1966, The Typical Spectral Shape of an Economic Variable, Econometrica, 34, 150–61. 1967, Simple Trend-Fitting for Long-Range Forecasting, Management Decision, Spring, 29–34. 1974, Trading in Commodities, Cambridge, England: Woodhead-Faulkner. 1977, Tendency Towards Normality of Linear Combinations of Random Variables, Metrika, 23, 237–48. 1979, Seasonality: Causation, Interpretation and Implications, in A. Zellner, ed., Seasonal Analysis of Economic Time Series, Economic Research Report, ER-1, Bureau of the Census 1979. 1980, Forecasting in Business and Economics, San Diego: Academic Press. 1992, Comment on Two Papers Concerning Chaos and Statistics by S. Chatterjee and M. Ylmaz and by M. Berliner, Statistical Science, 7, 69–122. 1994, Is Chaotic Economic Theory Relevant for Economics? Journal of International and Comparative Economics, forthcoming. Granger, C. W. J., and A. P. Andersen, 1978, An Introduction to Bilinear Time Series Models, Göttingen: Vandenhoeck and Ruprecht. Granger, C. W. J., and M. Hatanaka, 1964, Spectral Analysis of Economic Time Series, Princeton, NJ: Princeton University Press. Granger, C. W. J., M. Kamstra, and H. White, 1989, Interval Forecasting: An Analysis Based Upon ARCH-Quantile Estimators, Journal of Econometrics, 40, 87–96. 1995, Comments of Testing Economic Theories and the Use of Model Selection Criteria, Journal of Econometrics, 67, 173–87. Granger, C. W. J., and W. C. Labys, 1970, Speculation, Hedging and Forecasts of Commodity Prices, Lexington, MA: Heath and Company. Granger, C. W. J., and J.-L. Lin, 1994, Causality in the Long-Run, Econometric Theory, 11, 530–6. Granger, C. W. J., and O. Morgenstern, 1963, Spectral Analysis of New York Stock Market Prices, Kyklos, 16, 1–27. Reprinted in P. H. Cootner, ed., Random Character of Stock Market Prices, Cambridge, MA: MIT Press, 1964. 1970, Predictability of Stock Market Prices, Lexington, MA: Heath and Company. Granger, C. W. J., and M. Morris, 1976, Time Series Modeling and Interpretation, Journal of the Royal Statistical Society Series A, 139, 246–57. Granger, C. W. J., and H. L. Nelson, 1979, Experience with Using the Box-Cox Transformation When Forecasting Economic Time Series, Journal of Econometrics, 9, 57–69. Granger, C. W. J., and P. Newbold, 1977, Forecasting Economic Time Series, New York: Academic Press. 1977, Forecasting Economic Time Series, 1st ed., San Diego: Academic Press. Granger, C. W. J., and C.-Y. Sin, 1997, Estimating and Forecasting Quantiles with Asymmetric Least Squares, Working Paper, University of California, San Diego.
Granger, C. W. J., and T. Teräsvirta, 1993, Modeling Nonlinear Dynamic Relationships, Oxford: Oxford University Press. Granger, C. W. J., and P. Thompson, 1987, Predictive Consequences of Using Conditioning on Causal Variables, Economic Theory, 3, 150–2. Gumbel, D., 1958, Statistical theory of Floods and Droughts, Journal I.W.E., 12, 157–67. Hamilton, J. D., 1994, Time Series Analysis, Princeton, NJ: Princeton University Press. Hannan, E. J., 1967, Measurement of a Wandering Signal Amid Noise, Journal of Applied Probability, 4, 90–102. 1970, Multiple Time Series, New York: Wiley. Hansen, L. P., and T. J. Sargent, 1980, Formulating and Estimating Dynamic Linear Rational Expectations Models, Journal of Economic Dynamics and Control, 2, No. 1, 7– 46. Härdle, W., 1990, Applied Nonparametric Regression, Cambridge: Cambridge University Press. Hendry, D. F., 1995, Dynamic Econometrics, Oxford, England: Oxford University Press. Hendry, D. F., and G. E. Mizon, 1978, Serial Correlation as a Convenient Simplification, Not a Nuisance: A Comment on a Study of the Demand For Money by the Bank of England, Economic Journal, 88, 549–63. Hendry, D. F., A. R. Pagan, and J. D. Sargan, 1984, Dynamic Specification, Chapter 18, in M. D. Intriligator and Z. Griliches, eds., Handbook of Econometrics, Vol. II, Amsterdam: North Holland. Hipel, K. W., and A. I. McLeod, 1978a, Preservation of the Rescaled Adjusted Range, 2: Simulation Studies Using Box-Jenkins Models, Water Resources Research, 14, 509–16. 1978b, Preservation of the Rescaled Adjusted Range, 3: Fractional Gaussian Noise Algorithms, Water Resources Research, 14, 517–18. Hoffman, D. L., and R. H. Rasche, 1991, Long-Run Income and Interest Elasticities of Money Demand in the United States, Review of Economics and Statistics, 73, 665–74. 1996, Assessing Forecast Performance in a Cointegrated System, Journal of Applied Econometrics, 11, 495–517. Hoover, K. D., 1993, Causality and Temporal Order in Macroeconomics or Why Even Economists Don’t Know How to Get Causes from Probabilities, British Journal for the Philosophy of Science, December. Horvath, M. T. K., and M. W. Watson, 1995, Testing for Cointegration When Some of the Cointegrating Vectors and Prespecified, Econometric Theory, 11, No. 5, 952–84. Hosoya, Y., 1977, On the Granger Condition for Non-Causality, Econometrica, 45, 1735–6. Hurst, H. E., 1951, Long-term Storage Capacity of Reservoirs, Transactions of the American Society of Civil Engineers, 116, 770–99. 1956, Methods of Using Long Term Storage in Reservoirs, Proceedings of the Institute of Civil Engineers, 1, 519–43. Ignacio, N., N. Labato, and P. M. Robinson, 1998, A Nonparametric Test for I(0), Review of Economic Studies, 65, 475–95.
Johansen, S. 1988a, The Mathematical Structure of Error Correction Models, in N. U. Prabhu, ed., Contemporary Mathematics, Vol. 80: Structural Inference for Stochastic Processes, Providence, RI: American Mathematical Society. 1988b, Statistical Analysis of Cointegrating Vectors, Journal of Economic Dynamics and Control, 12, 231–54. Khintchine, A. 1934, Korrelationstheorie der Stationare Stochastischen Processe, Math Annual, 109, 604–15. King, R., C. I. Plosser, J. H. Stock, and M. W. Watson, 1991, Stochastic Trends and Economic Fluctuations, American Economic Review, 81, No. 4, 819–40. Kolb, R. A., and H. O. Stekler, 1993, Are Economic Forecasts Significantly Better Than Naive Predictions? An Appropriate Test, International Journal of Forecasting, 9, 117–20. Kolmogorov, A. N., 1933, Grundbegriffe der Wahrscheinlichkeitrechnung, Ergebnisse der Mathematik. Published in English in 1950 as Foundations of the Theory of Probability, Bronx, NY: Chelsea. 1941a, Stationary Sequences in Hilbert Space, (Russian) Bull. Math. Univ. Moscow 2, No. 6, 40. 1941b, Interpolation und Extrapolation von Stationaren Zufalligen Folgen [Russian, German summary], Bull. Acad. Sci. U.R.S.S. Ser. Math., 5, 3–14. Kosobud, R., and L. Klein, 1961, Some Econometrics of Growth: Great Ratios of Economics, Quarterly Journal of Economics, 25, 173–98. Lawrence, A. J., and N. T. Kottegoda, 1977, Stochastic Modeling of River Flow Time Series, Journal of the Royal Statistical Society Series A, 140, 1–47. Lee, T.-H., H. White, and C. W. J. Granger, 1993, Testing for Neglected Nonlinearity in Time Series Models: A Comparison of Neural Network Methods and Alternative Tests, Journal of Econometrics, 56, 269–90. Leitch, G., and J. E. Tanner, 1991, Economic Forecast Evaluation: Profits Versus the Conventional Error Measures, American Economic Review, 81, 580–90. Li, Q., 1998, Efficient Estimation of Additive Partially Linear Models, International Economic Review, forthcoming. Lin, J.-L., and R. S. Tsay, 1996, Co-integration Constraint and Forecasting: An Empirical Examination, Journal of Applied Econometrics, 11, 519–38. Linton, O., and J. P. Neilson, ••, 1995, A Kernal Method of Estimating Structured Nonparametric Regression Based on Marginal Integration, Biometrika, 82, 91–100. Lippi, M., and P. Zaffaroni, 1999, Contemporaneous Aggregation of Linear Dynamic Models in Large Economies, Mimeo, Universita La Sapienza and Banca d’Italia. Liu, T., C. W. J. Granger, and W. Heller, 1992, Using the Correlation Exponent to Decide whether an Economic Series Is Chaotic, Journal of Applied Econometrics, 7S, 525–40. Reprinted in M. H. Pesaran and S. M. Potter, eds., Nonlinear Dynamics, Chaos, and Econometrics, Chichester: Wiley. Lo, A., 1991, Long-Term Memory in Stock Prices, Econometrica, 59, 1279– 313. Lobato, I., and P. M. Robinson, 1998, A Nonparametric Test for I(0), Review of Economic Studies, 65, 475–95. Lobato, I., and N. E. Savin, 1998, Real and Spurious Long-Memory Properties of Stock-Market Data, Journal of Business and Economic Statistics, 16, No. 3, 261–7.
Lütkepohl, H., 1991, Introduction to Multiple Time Series Analysis, New York: Springer-Verlag. Macauley, F. R., 1931, The Smoothing of Time Series, New York, NY: National Bureau of Economic Research. Mandelbrot, B., 1963, The Variation of Certain Speculative Prices, Journal of Business, 36, 394–419. Mandelbrot, B. B., and J. W. Van Ness, 1968, Fractional Brownian Motions, Fractional Brownian Noises and Applications, SIAM Review, 10, 422–37. Mandelbrot, B. B., and J. Wallis, 1968, Noah, Joseph and Operational Hydrology, Water Resources Research, 4, 909–18. Mann, H. B., and A. Wald, 1943, On the Statistical Treatment of Linear Stochastic Difference Equations, Econometrica, 11, 173–220. McLeish, D. L., 1978, A Maximal Inequality and Dependent Strong Laws, Annals of Probability, 3, 829–39. Meese, R. A., and K. Rogoff, 1983, Empirical Exchange Rate Models of the Seventies: Do They Fit Out of Sample, Journal of International Economics, 14, 3–24. Mincer, J., and V. Zarnowitz, 1969, The Evaluation of Economic Forecasts, in Economic Forecasts and Expectations, J. Mincer, ed., New York: National Bureau of Economic Research. Mizrach, B., 1991, Forecast Comparison in L2, Working Paper, Rutgers Univeristy. Nerlove, M., 1964, Spectral Analysis of Seasonal Adjustment Procedures, Econometrica, 32, 241–86. 1965, A Comparison of a Modified Hannan and the BLS Seasonal Adjustment Filters, Journal of the American Statistical Association, 60, 442–91. Nerlove, M., D. Grether, and J. Carvalho, 1979, Analysis of Economic Time Series – A Synthesis, New York: Academic Press. Orcutt, G. H., 1952, Actions, Consequences and Causal Relations, Review of Economics and Statistics, 34, 305–13. Pesaran, M. H., and A. G. Timmerman, 1992, A Simple Nonparametric Test of Predictive Performance, Journal of Business and Economic Statistics, 10, 461–5. 1994, A Generalization of the Nonparametric Henriksson-Merton Test of Market Timing, Economics Letters, 44, 1–7. Phillips, P. C. B., 1986, Understanding Spurious Regressions in Econometrics, Journal of Econometrics, 33, No. 3, 311–40. 1991, Optimal Inference in Cointegrated Systems, Econometrica, 59, 283–306. 1997, ET Interview: Clive Granger, Econometric Theory, 13, 253–304. Phillips, P. C. B., and S. N. Durlauf, 1986, Multiple Time Series Regression with Integrated Processes, Review of Economic Studies, 53, No. 4, 473–96. Phillips, P. C. B., and S. Ouliaris, 1990, Asymptotic Properties of Residual Based Tests for Cointegration, Econometrica, 58, No. 1, 165–93. Pierce, D. A., 1979, Signal Extraction Error in Nonstationary Time Series, The Annals of Statistics, 7, 1303–20. Priestley, M. B., 1981, Spectral Analysis and Time Series, New York: Academic Press. Rissman, E., and J. Campbell, 1994, Long-run Labor Market Dynamics and Short-run Inflation, Economic Perspectives.
Robinson, P. M., 1988, Root N-consistent Semiparametric Regression, Econometrica, 56, 931–54. 1994, Semiparametric Analysis of Long Memory Time Series, The Annals of Statistics, 22, 515–39. 1995, Gaussian Semiparametric Estimation of Long Range Dependence, The Annals of Statistics, 23, 1630–61. Saikkonen, P., 1991, Asymptotically Efficient Estimation of Cointegrating Regressions, Econometric Theory, 7, 1–21. Samuelson, P., 1965, Proof that Properly Anticipated Prices Fluctuate Randomly, Industrial Management Review, 6, 41–9. Sargan, J. D., 1964, Wages and Prices in the United Kingdom: A Study in Econometric Methodology, in P. E. Hart, G. Mills, and J. N. Whittaker, eds., Econometric Analysis of National Economic Planning, London: Butterworths. Sargent, T. J., 1987, Macroeconomic Theory, 2nd ed., New York: Academic Press. Shiskin, J., A. H. Young, and J. C. Musgrave, 1967, The X-11 Variant of the Census Method II Seasonal Adjustment Program, Technical Paper 15, U.S. Bureau of the Census, Washington, DC. Simon, H. A., 1953, Causal Ordering and Identifiability, in W. C. Hood and T. C. Koopmans, eds., Studies in Econometric Method, Cowles Commission Monograph 14, New York. Sims, C. A., 1972, Money, Income, and Causality, American Economic Review, 62, 540–52. 1974, Seasonality in Regression, Journal of the American Statistical Association, 69, 618–26. 1980, Macroeconomics and Reality, Econometrica, 48, No. 1, 1–48. Slutzky, E., 1927, The Summation of Random Causes as the Source of Cyclic Processes, Econometrica, 5, 105–46, 1937. Translated from the earlier paper of the same title in Problems of Economic Conditions, Moscow: Cojuncture Institute. Stock, J. H., 1987, Asymptotic Properties of Least Squares Estimators of Cointegrating Vectors, Econometrica, 55, 1035–56. 1989, Nonparametric Policy Analysis, Journal of the American Statistical Association, 84, 567–75. Stock, J. H., and M. W. Watson, 1993, A Simple Estimator of Cointegrating Vectors in Higher Order Integrated Systems, Econometrica, 61, No. 4, 783–820. Swanson, N. R., and C. W. J. Granger, 1997, Impulse Response Functions Based on a Causal Approach to Residual Orthogonalization in Vector Autoregresion, Journal of the American Statistical Association, 92, 357–67. Swanson, N. R., and H. White, 1995, A Model Selection Approach to Assessing the Information in the Term Structure Using Linear Models and Artificial Neural Networks, Journal of Business and Economic Statistics, 13, 265–75. 1997, A Model Selection Approach to Real-Time Macroeconomic Forecasting Using Linear Models and Artificial Neural Networks, Review of Economics and Statistics, 79, 540–50. Teräsvirta, T., D. Tjostheim, and C. W. J. Granger, 1994, Aspects of Modeling Nonlinear Time Series, in Handbook of Econometrics, Vol. IV, Amsterdam: Elsevier.
Toda, H. Y., and P. C. B. Phillips, 1993, Vector Autoregressions and Causality, Econometrica, 61, 1367–93. 1994, Vector Autoregression and Causality: A Theoretical Overview and Simulation Study, Econometric Reviews, 13, 259–85. Toda, H. Y., and T. Yamamoto, 1995, Statistical Inference in Vector Autoregressions with Possibly Integrated Processes, Journal of Econometrics, 66, 225–50. Wallis, K. F., 1974, Seasonal Adjustment and Relations between Variables, Journal of the American Statistical Association, 69, 18–32. Wiener, N., 1956, The Theory of Prediction, in E. F. Beckenback, ed., Modern Mathematics for Engineers, Series 1. Weiss, A. A., 1996, Estimating Time Series Models Using the Relevant Cost Function, Journal of Applied Econometrics, 11, 539–60. Wold, H., 1938, A Study in the Analysis of Stationary Time Series, Stockholm: Almqvist and Wiksell. Working, H., 1960, Note on the Correlation of First Differences of Averages in a Random Chain, Econometrica, 28, 916–18. Yoo, B. S., 1987, Co-integrated Time Series Structure, Ph.D. Dissertation, UCSD. Young, A. H., 1968, Linear Approximations to the Census and BLS Seasonal Adjustment Methods, Journal of the American Statistical Association, 63, 445–71. Yule, G. U., 1921, On the Time-Correlation Problem, with Especial Reference to the VariateDifference Correlation Method, Journal of the Royal Statistical Society, 84, 497–526. 1926, Why Do We Sometimes Get Nonsense Correlations Between Time Series? A Study in Sampling and the Nature of Time Series, Journal of the Royal Statistical Society, 89, 1–64. 1927, On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer’s Sunspot Numbers, Philosophical Transactions, 226A. Zellner, A., 1979, Causality and Econometrics, in K. Brunner and A. H. Meltzer, eds., Three Aspects of Policy and Policymaking, Carnegie-Rochester Conference Series, Vol. 10, Amsterdam: North Holland. Zellner, A., and F. Palm, 1974, time Series Analysis and Simultaneous Equation Econometric Models, Journal of Econometrics, 2, 17–54.
PART ONE
CAUSALITY
CHAPTER 1
Investigating Causal Relations by Econometric Models and Cross-Spectral Methods* C. W. J. Granger
There occurs on some occasions a difficulty in deciding the direction of causality between two related variables and also whether or not feedback is occurring. Testable definitions of causality and feedback are proposed and illustrated by use of simple two-variable models. The important problem of apparent instantaneous causality is discussed and it is suggested that the problem often arises due to slowness in recording information or because a sufficiently wide class of possible causal variables has not been used. It can be shown that the cross spectrum between two variables can be decomposed into two parts, each relating to a single causal arm of a feedback situation. Measures of causal lag and causal strength can then be constructed. A generalization of this result with the partial cross spectrum is suggested.
The object of this paper is to throw light on the relationships between certain classes of econometric models involving feedback and the functions arising in spectral analysis, particularly the cross spectrum and the partial cross spectrum. Causality and feedback are here defined in an explicit and testable fashion. It is shown that in the two-variable case the feedback mechanism can be broken down into two causal relations and that the cross spectrum can be considered as the sum of two cross spectra, each closely connected with one of the causations. The next three sections of the paper briefly introduce those aspects of spectral methods, model building, and causality which are required later. Section IV presents the results for the two-variable case and Section V generalizes these results for three variables.
* Econometrica, 37, 1969, 424–438. Reprinted in Rational Expectations, edited by T. Sargent and R. Lucas, 1981, University of Minnesota Press.
I. SPECTRAL METHODS
If Xt is a stationary time series with mean zero, there are two basic spectral representations associated with the series: (i) the Cramer representation,

$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\, dz_x(\omega), \tag{1}$$

where $z_x(\omega)$ is a complex random process with uncorrelated increments so that

$$E\bigl[dz_x(\omega)\,\overline{dz_x(\lambda)}\bigr] = 0, \quad \omega \neq \lambda, \qquad = dF_x(\omega), \quad \omega = \lambda; \tag{2}$$

(ii) the spectral representation of the covariance sequence

$$\mu_\tau^{xx} = E[X_t X_{t-\tau}] = \int_{-\pi}^{\pi} e^{i\tau\omega}\, dF_x(\omega). \tag{3}$$
If Xt has no strictly periodic components, $dF_x(\omega) = f_x(\omega)\, d\omega$, where $f_x(\omega)$ is the power spectrum of Xt. The estimation and interpretation of power spectra have been discussed in Granger and Hatanaka (1964) and Nerlove (1964). The basic idea underlying the two spectral representations is that the series can be decomposed as a sum (i.e., integral) of uncorrelated components, each associated with a particular frequency. It follows that the variance of the series is equal to the sum of the variances of the components. The power spectrum records the variances of the components as a function of their frequencies and indicates the relative importance of the components in terms of their contribution to the overall variance.
If Xt and Yt are a pair of stationary time series, so that Yt has the spectrum $f_y(\omega)$ and Cramer representation

$$Y_t = \int_{-\pi}^{\pi} e^{it\omega}\, dz_y(\omega),$$

then the cross spectrum (strictly power cross spectrum) $Cr(\omega)$ between Xt and Yt is a complex function of $\omega$ and arises both from

$$E\bigl[dz_x(\omega)\,\overline{dz_y(\lambda)}\bigr] = 0, \quad \omega \neq \lambda, \qquad = Cr(\omega)\, d\omega, \quad \omega = \lambda;$$

and

$$\mu_\tau^{xy} = E[X_t Y_{t-\tau}] = \int_{-\pi}^{\pi} e^{i\tau\omega} Cr(\omega)\, d\omega.$$

It follows that the relationship between two series can be expressed only in terms of the relationships between corresponding frequency components. Two further functions are defined from the cross spectrum as being more useful for interpreting relationships between variables: (i) the coherence,
$$C(\omega) = \frac{\lvert Cr(\omega)\rvert^{2}}{f_x(\omega)\, f_y(\omega)},$$
which is essentially the square of the correlation coefficient between corresponding frequency components of Xt and Yt, and (ii) the phase,

$$\phi(\omega) = \tan^{-1} \frac{\text{imaginary part of } Cr(\omega)}{\text{real part of } Cr(\omega)},$$
which measures the phase difference between corresponding frequency components. When one variable is leading the other, $\phi(\omega)/\omega$ measures the extent of the time lag.
Thus, the coherence is used to measure the degree to which two series are related and the phase may be interpreted in terms of time lags. Estimation and interpretation of the coherence and phase function are discussed in Granger and Hatanaka (1964, chaps. 5 and 6). It is worth noting that $\phi(\omega)$ has been found to be robust under changes in the stationarity assumption (Granger and Hatanaka 1964, chap. 9).
If Xt, Yt, and Zt are three time series, the problem of possibly misleading correlation and coherence values between two of them due to the influence on both of the third variable can be overcome by the use of partial cross-spectral methods. The spectral, cross-spectral matrix $[f_{ij}(\omega)] = S(\omega)$ between the three variables is given by

$$E\begin{bmatrix} dz_x(\omega) \\ dz_y(\omega) \\ dz_z(\omega) \end{bmatrix}
\begin{bmatrix} \overline{dz_x(\omega)} & \overline{dz_y(\omega)} & \overline{dz_z(\omega)} \end{bmatrix}
= [f_{ij}(\omega)]\, d\omega,$$

where

$$f_{ij}(\omega) = f_x(\omega) \quad \text{when } i = j = x, \qquad = Cr_{xy}(\omega) \quad \text{when } i = x,\ j = y, \text{ etc.}$$

The partial spectral, cross-spectral matrix between Xt and Yt given Zt is found by partitioning $S(\omega)$ into components:

$$S = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix}.$$

The partitioning lines are between the second and third rows, and second and third columns. The partial spectral matrix is then

$$S_{xy,z} = S_{11} - S_{12}\, S_{22}^{-1}\, S_{21}.$$
Interpretation of the components of this matrix is similar to that involving partial correlation coefficients. Thus, the partial cross spectrum
can be used to find the relationship between two series once the effect of a third series has been taken into account. The partial coherence and phase are defined directly from the partial cross spectrum as before. Interpretation of all of these functions and generalizations to the n-variable case can be found in Granger and Hatanaka (1964, chap. 5).
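The coherence and phase can be estimated from data with standard smoothed-periodogram routines. The following is a minimal sketch (the simulated relationship, window length, and routine choices are ours, not the estimators used in the text): x_t is constructed to follow y_t with a three-period delay, so the coherence is high and the phase grows roughly linearly with frequency, the implied delay being about three periods.

```python
# Estimate coherence and phase between two simulated series in which y_t leads
# x_t by three periods: x_t = y_{t-3} + noise.  Coherence is high, and the phase,
# divided by 2*pi*frequency, gives (up to sign convention) the three-period delay.
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
T, lag = 4096, 3
y = signal.lfilter([1.0], [1.0, -0.7], rng.standard_normal(T))   # AR(1) "cause"
x = np.roll(y, lag) + 0.5 * rng.standard_normal(T)               # x_t = y_{t-3} + noise
x[:lag] = 0.0                                                    # drop wrapped values

freqs, coh = signal.coherence(x, y, fs=1.0, nperseg=256)
_, pxy = signal.csd(x, y, fs=1.0, nperseg=256)
delay = np.abs(np.angle(pxy[1:6]) / (2 * np.pi * freqs[1:6])).mean()

print("coherence at the five lowest nonzero frequencies:", np.round(coh[1:6], 2))
print("implied delay (periods):", round(float(delay), 2))        # about 3
```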
II. FEEDBACK MODELS
Consider initially a stationary random vector Xt = {X1t, X2t, . . . , Xkt}, each component of which has zero mean. A linear model for such a vector consists of a set of linear equations by which all or a subset of the components of Xt are "explained" in terms of present and past values of components of Xt. The part not explained by the model may be taken to consist of a white-noise random vector $\varepsilon_t$, such that

$$E[\varepsilon_t' \varepsilon_s] = 0, \quad t \neq s, \qquad = I, \quad t = s, \tag{4}$$

where I is a unit matrix and 0 is a zero matrix. Thus, the model may be written as

$$A_0 X_t = \sum_{j=1}^{m} A_j X_{t-j} + \varepsilon_t \tag{5}$$
where m may be infinite and the A's are matrices. The completely general model as defined does not have unique matrices Aj, as an orthogonal transformation Yt = LXt can be performed which leaves the form of the model the same, where L is an orthogonal matrix, i.e., a square matrix having the property LL' = I. This is seen to be the case as $\eta_t = L\varepsilon_t$ is still a white-noise vector. For the model to be determined, sufficient a priori knowledge is required about the values of the coefficients of at least one of the A's, in order for constraints to be set up so that such transformations are not possible. This is the so-called identification problem of classical econometrics. In the absence of such a priori constraints, L can always be chosen so that A0 is a triangular matrix, although not uniquely, thus giving a spurious causal-chain appearance to the model. Models for which A0 has nonvanishing terms off the main diagonal will be called "models with instantaneous causality." Models for which A0 has no nonzero term off the main diagonal will be called "simple causal models." These names will be explained later. Simple causal models are uniquely determined if orthogonal transforms such as L are not possible without changing the basic form of the model. It is possible for a model apparently having instantaneous causality to be transformed using an orthogonal L to a simple causal model.
These definitions can be illustrated simply in the two-variable case. Suppose the variables are Xt, Yt. Then the model considered is of the form

$$X_t + b_0 Y_t = \sum_{j=1}^{m} a_j X_{t-j} + \sum_{j=1}^{m} b_j Y_{t-j} + \varepsilon_t',$$
$$Y_t + c_0 X_t = \sum_{j=1}^{m} c_j X_{t-j} + \sum_{j=1}^{m} d_j Y_{t-j} + \varepsilon_t''. \tag{6}$$
If b0 = c0 = 0, then this will be a simple causal model. Otherwise it will be a model with instantaneous causality. Whether or not a model involving some group of economic variables can be a simple causal model depends on what one considers to be the speed with which information flows through the economy and also on the sampling period of the data used. It might be true that when quarterly data are used, for example, a simple causal model is not sufficient to explain the relationships between the variables, while for monthly data a simple causal model would be all that is required. Thus, some nonsimple causal models may be constructed not because of the basic properties of the economy being studied but because of the data being used. It has been shown elsewhere (Granger 1963; Granger and Hatanaka 1964, chap. 7) that a simple causal mechanism can appear to be a feedback mechanism if the sampling period for the data is so long that details of causality cannot be picked out.
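The effect of the sampling interval is easy to illustrate by simulation. In the sketch below (our construction, not taken from the paper), Y follows an AR(1) and causes X with a one-period lag, with no instantaneous link; when both series are sampled every second period, the current sampled value of Y becomes genuinely informative about the current sampled value of X, so an apparent instantaneous causality is created by the data interval alone.

```python
# Y follows an AR(1) and causes X with a one-period lag; there is no instantaneous
# link.  With the full data, current Y adds nothing to a prediction of X once
# lagged Y is included.  Sampling every second period hides the intermediate
# observation, and current (sampled) Y becomes informative about current X.
import numpy as np

def coef_on_current_y(x, y):
    # OLS of x_t on a constant, y_{t-1}, and y_t; return the coefficient on y_t.
    X = np.column_stack([np.ones(len(x) - 1), y[:-1], y[1:]])
    beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return beta[2]

rng = np.random.default_rng(6)
T = 50000
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()
x = np.zeros(T)
x[1:] = 0.8 * y[:-1] + rng.standard_normal(T - 1)     # X depends only on lagged Y

print("coefficient on current Y, full data:    %.3f" % coef_on_current_y(x, y))            # ~ 0.0
print("coefficient on current Y, sampled data: %.3f" % coef_on_current_y(x[::2], y[::2]))  # clearly nonzero
```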
III. CAUSALITY
Cross-spectral methods provide a useful way of describing the relationship between two (or more) variables when one is causing the other(s). In many realistic economic situations, however, one suspects that feedback is occurring. In these situations the coherence and phase diagrams become difficult or impossible to interpret, particularly the phase diagram. The problem is how to devise definitions of causality and feedback which permit tests for their existence. Such a definition was proposed in earlier papers (Granger 1963; Granger and Hatanaka 1964, chap. 7). In this section, some of these definitions will be discussed and extended. Although later sections of this paper will use this definition of causality, they will not completely depend upon it.
Previous papers concerned with causality in economic systems (Basman 1963; Orcutt 1952; Simon 1953; Strotz and Wold 1960) have been particularly concerned with the problem of determining a causal interpretation of simultaneous equation systems, usually with instantaneous causality. Feedback is not explicitly discussed. This earlier work has concentrated on the form that the parameters of the equations should take in order to discern definite causal relationships. The stochastic elements and the natural
time ordering of the variables play relatively minor roles in the theory. In the alternative theory to be discussed here, the stochastic nature of the variables and the direction of the flow of time will be central features. The theory is, in fact, not relevant for nonstochastic variables and will rely entirely on the assumption that the future cannot cause the past. This theory will not, of course, be contradictory to previous work but there appears to be little common ground. Its origins may be found in a suggestion by Wiener (1956). The relationship between the definition discussed here and the work of Good (1962) has yet to be determined.
If At is a stationary stochastic process, let $\overline{A}_t$ represent the set of past values {At-j, j = 1, 2, . . . , ∞} and $\overline{\overline{A}}_t$ represent the set of past and present values {At-j, j = 0, 1, . . . , ∞}. Further let $\overline{A}(k)$ represent the set {At-j, j = k, k + 1, . . . , ∞}.
Denote the optimum, unbiased, least-squares predictor of At using the set of values Bt by $P_t(A \mid B)$. Thus, for instance, $P_t(X \mid \overline{X})$ will be the optimum predictor of Xt using only past Xt. The predictive error series will be denoted by $\varepsilon_t(A \mid B) = A_t - P_t(A \mid B)$. Let $\sigma^2(A \mid B)$ be the variance of $\varepsilon_t(A \mid B)$.
The initial definitions of causality, feedback, and so forth, will be very general in nature. Testable forms will be introduced later. Let Ut be all the information in the universe accumulated since time t - 1 and let Ut - Yt denote all this information apart from the specified series Yt. We then have the following definitions.
Definition 1: Causality. If $\sigma^2(X \mid \overline{U}) < \sigma^2(X \mid \overline{U - Y})$, we say that Y is causing X, denoted by $Y_t \Rightarrow X_t$. We say that Yt is causing Xt if we are better able to predict Xt using all available information than if the information apart from Yt had been used.
Definition 2: Feedback. If $\sigma^2(X \mid \overline{U}) < \sigma^2(X \mid \overline{U - Y})$ and $\sigma^2(Y \mid \overline{U}) < \sigma^2(Y \mid \overline{U - X})$, we say that feedback is occurring, which is denoted $Y_t \Leftrightarrow X_t$, i.e., feedback is said to occur when Xt is causing Yt and also Yt is causing Xt.
Definition 3: Instantaneous Causality. If $\sigma^2(X \mid \overline{U}, \overline{\overline{Y}}) < \sigma^2(X \mid \overline{U})$, we say that instantaneous causality $Y_t \Rightarrow X_t$ is occurring. In other words, the current value of Xt is better "predicted" if the present value of Yt is included in the "prediction" than if it is not.
Definition 4: Causality Lag. If $Y_t \Rightarrow X_t$, we define the (integer) causality lag m to be the least value of k such that $\sigma^2[X \mid \overline{U} - Y(k)] < \sigma^2[X \mid \overline{U} - Y(k+1)]$. Thus, knowing the values Yt-j, j = 0, 1, . . . , m - 1, will be of no help in improving the prediction of Xt.
The definitions have assumed that only stationary series are involved. In the nonstationary case, $\sigma(X \mid \overline{U})$ etc. will depend on time t and, in
general, the existence of causality may alter over time. The definitions can clearly be generalized to be operative for a specified time t. One could then talk of causality existing at this moment of time. Considering nonstationary series, however, takes us further away from testable definitions and this tack will not be discussed further.
The one completely unreal aspect of the above definitions is the use of the series Ut, representing all available information. The large majority of the information in the universe will be quite irrelevant, i.e., will have no causal consequence. Suppose that all relevant information is numerical in nature and belongs to the vector set of time series $Y_t^D = \{Y_t^i,\ i \in D\}$ for some integer set D. Denote the set $\{i \in D,\ i \neq j\}$ by D(j) and $\{Y_t^i,\ i \in D(j)\}$ by $Y_t^{D(j)}$, i.e., the full set of relevant information except one particular series. Similarly, we could leave out more than one series with the obvious notation. The previous definitions can now be used but with Ut replaced by $Y_t^D$ and Ut - Yt by $Y_t^{D(j)}$. Thus, for example, suppose that the vector set consists only of two series, Xt and Yt, and that all other information is irrelevant. Then $\sigma^2(X \mid \overline{X})$ represents the minimum predictive error variance of Xt using only past Xt and $\sigma^2(X \mid \overline{X}, \overline{Y})$ represents this minimum variance if both past Xt and past Yt are used to predict Xt. Then Yt is said to cause Xt if $\sigma^2(X \mid \overline{X}) > \sigma^2(X \mid \overline{X}, \overline{Y})$. The definition of causality is now relative to the set D. If relevant data has not been included in this set, then spurious causality could arise. For instance, if the set D was taken to consist only of the two series Xt and Yt, but in fact there was a third series Zt which was causing both within the enlarged set D' = (Xt, Yt, Zt), then for the original set D, spurious causality between Xt and Yt may be found. This is similar to spurious correlation and partial correlation between sets of data that arise when some other statistical variable of importance has not been included.
In practice it will not usually be possible to use completely optimum predictors, unless all sets of series are assumed to be normally distributed, since such optimum predictors may be nonlinear in complicated ways. It seems natural to use only linear predictors and the above definitions may again be used under this assumption of linearity. Thus, for instance, the best linear predictor of Xt using only past Xt and past Yt will be of the form

$$P_t(X \mid \overline{X}, \overline{Y}) = \sum_{j=1}^{\infty} a_j X_{t-j} + \sum_{j=1}^{\infty} b_j Y_{t-j},$$

where the aj's and bj's are chosen to minimize $\sigma^2(X \mid \overline{X}, \overline{Y})$. It can be argued that the variance is not the proper criterion to use to measure the closeness of a predictor Pt to the true value Xt. Certainly if some other criterion were used it may be possible to reach different conclusions about whether one series is causing another. The variance does seem to be a natural criterion to use in connection with linear predictors as it is mathematically easy to handle and simple to interpret. If one uses this criterion, a better name might be "causality in mean."
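In this linear, variance-based form the definition can be checked directly from data by comparing the one-step prediction error variance of a regression of Xt on its own past with that of a regression that also includes past Yt; an F statistic for the added lags of Y is the usual operational version. A minimal sketch on simulated data (the lag length, data-generating process, and test form are our choices, not part of the paper):

```python
# Compare sigma^2(X | past X) with sigma^2(X | past X, past Y) on simulated data
# in which Y genuinely causes X.  The restricted model uses p lags of X; the
# unrestricted model adds p lags of Y, and an F statistic checks whether the
# added lags reduce the one-step prediction error variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
T, p = 2000, 4
y = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

def lags(z, p, t0, t1):
    # matrix whose columns are z_{t-1}, ..., z_{t-p} for t = t0, ..., t1 - 1
    return np.column_stack([z[t0 - j : t1 - j] for j in range(1, p + 1)])

t0, t1 = p, T
X_restricted = np.column_stack([np.ones(t1 - t0), lags(x, p, t0, t1)])
X_full = np.column_stack([X_restricted, lags(y, p, t0, t1)])
target = x[t0:t1]

rss = []
for X in (X_restricted, X_full):
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss.append(float(np.sum((target - X @ beta) ** 2)))

n, k_full = len(target), X_full.shape[1]
F = ((rss[0] - rss[1]) / p) / (rss[1] / (n - k_full))
print("F statistic for the added lags of Y:", round(F, 2))
print("5% critical value:", round(stats.f.ppf(0.95, p, n - k_full), 2))
```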
The original definition of causality has now been restricted in order to reach a form which can be tested. Whenever the word causality is used in later sections it will be taken to mean "linear causality in mean with respect to a specified set D."
It is possible to extend the definitions to the case where a subset of series D* of D is considered to cause X_t. This would be the case if σ²(X|Y_D) < σ²(X|Y_{D−D*}) and then Y_{D*} ⇒ X_t. Thus, for instance, one could ask if past X_t is causing present X_t. Because new concepts are necessary in the consideration of such problems, they will not be discussed here in any detail.
It has been pointed out already (Granger 1963) that instantaneous causality, in which knowledge of the current value of a series helps in predicting the current value of a second series, can occasionally arise spuriously in certain cases. Suppose Y_t ⇒ X_t with lag one unit but that the series are sampled every two time units. Then although there is no real instantaneous causality, the definitions will appear to suggest that such causality is occurring. This is because certain relevant information, the missing readings in the data, has not been used. Due to this effect, one might suggest that in many economic situations an apparent instantaneous causality would disappear if the economic variables were recorded at more frequent time intervals.
The definition of causality used above is based entirely on the predictability of some series, say X_t. If some other series Y_t contains information in past terms that helps in the prediction of X_t and if this information is contained in no other series used in the predictor, then Y_t is said to cause X_t. The flow of time clearly plays a central role in these definitions. In the author's opinion there is little use in the practice of attempting to discuss causality without introducing time, although philosophers have tried to do so. It also follows from the definitions that a purely deterministic series, that is, a series which can be predicted exactly from its past terms such as a nonstochastic series, cannot be said to have any causal influences other than its own past. This may seem to be contrary to common sense in certain special cases but it is difficult to find a testable alternative definition which could include the deterministic situation. Thus, for instance, if X_t = bt and Y_t = c(t + 1), then X_t can be predicted exactly by b + X_{t−1} or by (b/c)Y_{t−1}. There seems to be no way of deciding if Y_t is a causal factor of X_t or not. In some cases the notion of the "simplest rule" might be applied. For example, if X_t is some complicated polynomial in t and Y_t = X_{t+1}, then it will be easier to predict X_t from Y_{t−1} than from past X_t. In some cases this rule cannot be used, as the previous example showed. In any case, experience does not indicate that one should expect economic laws to be simple in nature.
Even for stochastic series, the definitions introduced above may give apparently silly answers. Suppose X_t = A_{t−1} + ε_t, Y_t = A_t + η_t, and Z_t = A_t + γ_t, where ε_t, η_t, and γ_t are all uncorrelated white-noise series with equal
variances and A_t is some stationary series. Within the set D = (X_t, Y_t) the definition gives Y_t ⇒ X_t. Within the set D′ = (X_t, Z_t) it gives Z_t ⇒ X_t. But within the set D″ = (X_t, Y_t, Z_t), neither Y_t nor Z_t causes X_t, although the sum of Y_t and Z_t would do so. How is one to decide if either Y_t or Z_t is a causal series for X_t? The answer, of course, is that neither is. The causal series is A_t and both Y_t and Z_t contain equal amounts of information about A_t. If the set of series within which causality was discussed was expanded to include A_t, then the above apparent paradox vanishes. It will often be found that constructed examples which seem to produce results contrary to common sense can be resolved by widening the set of data within which causality is defined.

IV. TWO-VARIABLE MODELS
In this section, the definitions introduced above will be illustrated using two-variable models and results will be proved concerning the form of the cross spectrum for such models. Let X_t, Y_t be two stationary time series with zero means. The simple causal model is

X_t = Σ_{j=1}^{m} a_j X_{t−j} + Σ_{j=1}^{m} b_j Y_{t−j} + ε_t,
Y_t = Σ_{j=1}^{m} c_j X_{t−j} + Σ_{j=1}^{m} d_j Y_{t−j} + η_t,          (7)
where ε_t, η_t are taken to be two uncorrelated white-noise series, i.e., E[ε_t ε_s] = 0 = E[η_t η_s], s ≠ t, and E[ε_t η_s] = 0 for all t, s. In (7) m can equal infinity but in practice, of course, due to the finite length of the available data, m will be assumed finite and shorter than the given time series. The definition of causality given above implies that Y_t is causing X_t provided some b_j is not zero. Similarly X_t is causing Y_t if some c_j is not zero. If both of these events occur, there is said to be a feedback relationship between X_t and Y_t. It will be shown later that this new definition of causality is in fact identical to that introduced previously. The more general model with instantaneous causality is

X_t + b_0 Y_t = Σ_{j=1}^{m} a_j X_{t−j} + Σ_{j=1}^{m} b_j Y_{t−j} + ε_t,
Y_t + c_0 X_t = Σ_{j=1}^{m} c_j X_{t−j} + Σ_{j=1}^{m} d_j Y_{t−j} + η_t.          (8)
If the variables are such that this kind of representation is needed, then instantaneous causality is occurring and a knowledge of Y_t will improve the "prediction" or goodness of fit of the first equation for X_t. Consider initially the simple causal model (7). In terms of the time shift operator U, that is, UX_t = X_{t−1}, these equations may be written
X_t = a(U)X_t + b(U)Y_t + ε_t,
Y_t = c(U)X_t + d(U)Y_t + η_t,          (9)
where a(U), b(U), c(U), and d(U) are power series in U with the coefficient of U⁰ zero, i.e., a(U) = Σ_{j=1}^{m} a_j U^j, etc. Using the Cramer representations of the series, i.e.,

X_t = ∫_{−π}^{π} e^{itω} dZ_x(ω),     Y_t = ∫_{−π}^{π} e^{itω} dZ_y(ω),

and similarly for ε_t and η_t, expressions such as a(U)X_t can be written as

a(U)X_t = ∫_{−π}^{π} e^{itω} a(e^{−iω}) dZ_x(ω).
Thus, equations (9) may be written
∫_{−π}^{π} e^{itω} {[1 − a(e^{−iω})] dZ_x(ω) − b(e^{−iω}) dZ_y(ω) − dZ_ε(ω)} = 0,
∫_{−π}^{π} e^{itω} {−c(e^{−iω}) dZ_x(ω) + [1 − d(e^{−iω})] dZ_y(ω) − dZ_η(ω)} = 0,
from which it follows that

A (dZ_x, dZ_y)′ = (dZ_ε, dZ_η)′,          (10)

where

A = [ 1 − a,  −b ;  −c,  1 − d ]

and where a is written for a(e^{−iω}), etc., and dZ_x for dZ_x(ω), etc. Thus, provided the inverse of A exists,

(dZ_x, dZ_y)′ = A⁻¹ (dZ_ε, dZ_η)′.          (11)
As the spectral, cross-spectral matrix for X_t, Y_t is directly obtainable from

E[(dZ_x, dZ_y)′ (dZ̄_x, dZ̄_y)],

these functions can quickly be found from (11) using the known properties of dZ_ε and dZ_η. One finds that the power spectra are given by

f_x(ω) = (1/2πΔ)(|1 − d|² σ_ε² + |b|² σ_η²),
f_y(ω) = (1/2πΔ)(|c|² σ_ε² + |1 − a|² σ_η²),          (12)
where Δ = |(1 − a)(1 − d) − bc|². Of more interest is the cross spectrum, which has the form

Cr(ω) = (1/2πΔ)[(1 − d)c̄ σ_ε² + (1 − ā)b σ_η²].
Thus, the cross spectrum may be written as the sum of two components,

Cr(ω) = C_1(ω) + C_2(ω),          (13)

where

C_1(ω) = (σ_ε²/2πΔ)(1 − d)c̄     and     C_2(ω) = (σ_η²/2πΔ)(1 − ā)b.
If Y_t is not causing X_t, then b ≡ 0 and so C_2(ω) vanishes. Similarly if X_t is not causing Y_t then c ≡ 0 and so C_1(ω) vanishes. It is thus clear that the cross spectrum can be decomposed into the sum of two components – one which depends upon the causality of X by Y and the other on the causality of Y by X. If, for example, Y is not causing X so that C_2(ω) vanishes, then Cr(ω) = C_1(ω) and the resulting coherence and phase diagrams will be interpreted in the usual manner. This suggests that in general C_1(ω) and C_2(ω) can each be treated separately as cross spectra connected with the two arms of the feedback mechanism. Thus, coherence and phase diagrams can be defined for X ⇒ Y and Y ⇒ X. For example,

C_xy(ω) = |C_1(ω)|² / [f_x(ω) f_y(ω)]

may be considered to be a measure of the strength of the causality X ⇒ Y plotted against frequency and is a direct generalization of coherence. We call C_xy(ω) the causality coherence. Further,

φ_xy(ω) = tan⁻¹ [imaginary part of C_1(ω) / real part of C_1(ω)]
will measure the phase lag against frequency of X ⇒ Y and will be called the causality phase diagram. Similarly such functions can be defined for Y ⇒ X using C_2(ω). These functions are usually complicated expressions in a, b, c, and d; for example,

C_xy(ω) = σ_ε⁴ |1 − d|² |c|² / {(σ_ε²|1 − d|² + σ_η²|b|²)(σ_ε²|c|² + |1 − a|² σ_η²)}.
Such formulae merely illustrate how difficult it is to interpret econometric models in terms of frequency decompositions. It should be noted that 0 ≤ C_xy(ω) ≤ 1 and similarly for C_yx(ω). As an illustration of these definitions, we consider the simple feedback system

X_t = bY_{t−1} + ε_t,
Y_t = cX_{t−2} + η_t,          (14)

where σ_ε² = σ_η² = 1. In this case a(ω) = 0, b(ω) = be^{−iω}, c(ω) = ce^{−2iω}, and d(ω) = 0. The spectra of the series {X_t}, {Y_t} are

f_x(ω) = (1 + b²) / (2π|1 − bce^{−3iω}|²)     and     f_y(ω) = (1 + c²) / (2π|1 − bce^{−3iω}|²),
and thus are of similar shape. The usual coherence and phase diagrams derived from the cross spectrum between these two series are

C(ω) = (c² + b² + 2bc cos 3ω) / [(1 + b²)(1 + c²)]

and

φ(ω) = tan⁻¹ [(c sin 2ω − b sin ω) / (c cos 2ω + b cos ω)].
These diagrams are clearly of little use in characterizing the feedback relationship between the two series. When the causality-coherence and phase diagrams are considered, however, we get

C_xy(ω) = c² / [(1 + b²)(1 + c²)],     C_yx(ω) = b² / [(1 + b²)(1 + c²)].

Both are constant for all ω, and, if b ≠ 0, c ≠ 0, φ_xy(ω) = 2ω (time lag of two units),¹ φ_yx(ω) = ω (time lag of one unit). The causality lags are thus seen to be correct and the causality coherences to be reasonable. In particular, if b = 0 then C_yx(ω) = 0, i.e., no causality is found when none is present. (Further, in this new case, φ_yx(ω) = 0.)
¹ A discussion of the interpretation of phase diagrams in terms of time lags may be found in Granger and Hatanaka (1964, chap. 5).
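The constancy of the causality coherences and the phase slopes quoted above for model (14) can be checked directly from the formulas as reconstructed here. The short Python sketch below evaluates C_1(ω), C_2(ω), f_x(ω) and f_y(ω) on a frequency grid for illustrative values of b and c (nothing in the original fixes these numbers) and confirms the closed-form constants.

```python
# Numerical check of the causal decomposition for the feedback system (14):
# X_t = b*Y_{t-1} + eps_t, Y_t = c*X_{t-2} + eta_t with unit innovation variances.
# The values of b and c are illustrative assumptions.
import numpy as np

b, c = 0.6, 0.8
w = np.linspace(0.05, 1.5, 100)        # frequencies below pi/2 so phases stay in (-pi, pi]
bw = b * np.exp(-1j * w)               # b(e^{-iw})
cw = c * np.exp(-2j * w)               # c(e^{-iw})
Delta = np.abs(1 - bw * cw) ** 2       # |(1-a)(1-d) - bc|^2 with a = d = 0

fx = (1 + b**2) / (2 * np.pi * Delta)
fy = (1 + c**2) / (2 * np.pi * Delta)
C1 = np.conj(cw) / (2 * np.pi * Delta) # arm X => Y
C2 = bw / (2 * np.pi * Delta)          # arm Y => X

Cxy = np.abs(C1) ** 2 / (fx * fy)      # causality coherence X => Y
Cyx = np.abs(C2) ** 2 / (fx * fy)      # causality coherence Y => X

print(np.allclose(Cxy, c**2 / ((1 + b**2) * (1 + c**2))))   # True: constant in w
print(np.allclose(Cyx, b**2 / ((1 + b**2) * (1 + c**2))))   # True: constant in w
print(np.allclose(np.angle(C1), 2 * w))                      # phase of X => Y arm is 2w
```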
Other particular cases are also found to give correct results. If, for example, we again consider the same simple model (14) but with σ_ε² = 1, σ_η² = 0, i.e., η_t ≡ 0 for all t, then one finds C_xy(ω) = 1, C_yx(ω) = 0, i.e., X is "perfectly" causing Y and Y is not causing X, as is in fact the case.
If one now considers the model (8) in which instantaneous causality is allowed, it is found that the cross spectrum is given by

Cr(ω) = (1/2πΔ′)[(1 − d)(c̄ − c_0)σ_ε² + (1 − ā)(b − b_0)σ_η²]          (15)
where Δ′ = |(1 − a)(1 − d) − (b − b_0)(c − c_0)|². Thus, once more, the cross spectrum can be considered as the sum of two components, each of which can be associated with a "causality," provided that this includes instantaneous causality. It is, however, probably more sensible to decompose Cr(ω) into three parts, Cr(ω) = C_1(ω) + C_2(ω) + C_3(ω), where C_1(ω) and C_2(ω) are as in (13) but with Δ replaced by Δ′ and

C_3(ω) = −(1/2πΔ′)[c_0(1 − d)σ_ε² + b_0(1 − ā)σ_η²]          (16)
representing the influence of the instantaneous causality. Such a decomposition may be useful but it is clear that when instantaneous causality occurs, the measures of causal strength and phase lag will lose their meaning.
It was noted in Section II that instantaneous causality models such as (8) in general lack uniqueness of their parameters, as an orthogonal transformation L applied to the variables leaves the general form of the model unaltered. It is interesting to note that such transformations do not have any effect on the cross spectrum given by (15) or the decomposition. This can be seen by noting that equations (8) lead to

A (dZ_x, dZ_y)′ = (dZ_ε, dZ_η)′

with appropriate A. Applying the transformation L gives

LA (dZ_x, dZ_y)′ = L (dZ_ε, dZ_η)′,

so that

(dZ_x, dZ_y)′ = (LA)⁻¹L (dZ_ε, dZ_η)′ = A⁻¹ (dZ_ε, dZ_η)′,

which is the same as if no such transformation had been applied. From its definition, L will possess an inverse. This result suggests that spectral
methods are more robust in their interpretation than are simultaneous equation models.
Returning to the simple causal model (9),

X_t = a(U)X_t + b(U)Y_t + ε_t,
Y_t = c(U)X_t + d(U)Y_t + η_t,

throughout this section it has been stated that Y_t ⇏ X_t if b ≡ 0. On intuitive grounds this seems to fit the definition of no causality introduced in Section III, within the set D of series consisting only of X_t and Y_t. If b ≡ 0 then X_t is determined from the first equation and the minimum variance of the predictive error of X_t using past X_t will be σ_ε². This variance cannot be reduced using past Y_t. It is perhaps worthwhile proving this result formally. In the general case, it is clear that σ²(X|X̄, Ȳ) = σ_ε², i.e., the variance of the predictive error of X_t, if both past X_t and past Y_t are used, will be σ_ε² from the top equation. If only past X_t is used to predict X_t, it is a well known result that the minimum variance of the predictive error is given by

σ²(X|X̄) = exp[(1/2π) ∫_{−π}^{π} log 2πf_x(ω) dω].          (17)
It was shown above in equation (12) that

f_x(ω) = (1/2πΔ)(|1 − d|² σ_ε² + |b|² σ_η²),

where Δ = |(1 − a)(1 − d) − bc|². To simplify this equation, we note that

∫_{−π}^{π} log |1 − ae^{iω}|² dω = 0

by symmetry. Thus, if

2πf_x(ω) = a_0 Π_j |1 − a_j e^{iω}|² / Π_j |1 − b_j e^{iω}|²,
then σ²(X|X̄) = a_0. For there to be no causality, we must have a_0 = σ_ε². It is clear from the form of f_x(ω) that in general this could only occur if |b| ≡ 0, in which case 2πf_x(ω) = σ_ε²/|1 − a|² and the required result follows.
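Formula (17) is easy to verify numerically for a concrete case. The sketch below, again with purely illustrative parameter values, evaluates the right-hand side of (17) for an AR(1) spectrum and recovers the innovation variance, as the argument above requires.

```python
# Numerical check of the prediction-error-variance formula (17) for an AR(1)
# process X_t = a*X_{t-1} + eps_t with var(eps_t) = sig2.  Values are illustrative.
import numpy as np

a, sig2 = 0.7, 1.3
w = np.linspace(-np.pi, np.pi, 20001)
fx = sig2 / (2 * np.pi * np.abs(1 - a * np.exp(-1j * w)) ** 2)   # AR(1) spectrum

pred_var = np.exp(np.trapz(np.log(2 * np.pi * fx), w) / (2 * np.pi))
print(pred_var, sig2)   # both ~1.3: past X alone already attains the innovation variance
```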
V. THREE-VARIABLE MODELS
The above results can be generalized to the many-variables situation, but the only case which will be considered is that involving three variables. Consider a simple causal model generalizing (7):

X_t = a_1(U)X_t + b_1(U)Y_t + c_1(U)Z_t + ε_{1,t},
Y_t = a_2(U)X_t + b_2(U)Y_t + c_2(U)Z_t + ε_{2,t},
Z_t = a_3(U)X_t + b_3(U)Y_t + c_3(U)Z_t + ε_{3,t},
where a_1(U), etc., are polynomials in U, the shift operator, with the coefficient of U⁰ zero. As before, ε_{i,t}, i = 1, 2, 3, are uncorrelated, white-noise series, and the variance of ε_{i,t} is denoted σ_i². Let α = a_1 − 1, β = b_2 − 1, γ = c_3 − 1, and

A = [ α,  b_1,  c_1 ;  a_2,  β,  c_2 ;  a_3,  b_3,  γ ],

where b_1 = b_1(e^{−iω}), etc., as before. Using the same method as before, the spectral, cross-spectral matrix S(ω) is found to be given by S(ω) = A⁻¹K(Ā′)⁻¹, where

K = diag(σ_1², σ_2², σ_3²).

One finds, for instance, that the power spectrum of X_t is

f_x(ω) = Δ⁻²[σ_1²|βγ − c_2b_3|² + σ_2²|c_1b_3 − γb_1|² + σ_3²|b_1c_2 − c_1β|²],
where Δ is the determinant of A. The cross spectrum between X_t and Y_t is

Cr_xy(ω) = Δ⁻²[σ_1²(βγ − c_2b_3)(c̄_2ā_3 − γ̄ā_2) + σ_2²(c_1b_3 − b_1γ)(ᾱγ̄ − c̄_1ā_3) + σ_3²(b_1c_2 − c_1β)(c̄_1ā_2 − c̄_2ᾱ)].
Thus, this cross spectrum is the sum of three components, but it is not clear that these can be directly linked with causalities. More useful results arise, however, when partial cross spectra are considered. After some algebraic manipulation it is found that, for instance, the partial cross spectrum between X_t and Y_t given Z_t is

Cr_{xy,z}(ω) = −[σ_1²σ_2² b_3ā_3 + σ_1²σ_3² βā_2 + σ_2²σ_3² b_1ᾱ] / f_z′(ω),

where

f_z′(ω) = σ_1²|βγ − c_2b_3|² + σ_2²|c_1b_3 − b_1γ|² + σ_3²|b_1c_2 − c_1β|².
Thus, the partial cross spectrum is the sum of three components,

Cr_{xy,z}(ω) = C_1^{xy,z} + C_2^{xy,z} + C_3^{xy,z},

where

C_1^{xy,z} = −σ_1²σ_2² b_3ā_3 / f_z′(ω), etc.
These can be linked with causalities. The component C_1^{xy,z}(ω) represents the interrelationships of X_t and Y_t through Z_t, and the other two components are direct generalizations of the two causal cross spectra which arose in the two-variable case and can be interpreted accordingly. In a similar manner one finds that the power spectrum of X_t given Z_t is

f_{x,z}(ω) = [σ_1²σ_2²|b_3|² + σ_1²σ_3²|β|² + σ_2²σ_3²|b_1|²] / f_z′(ω).

The causal and feedback relationship between X_t and Y_t can be investigated in terms of the coherence and phase diagrams derived from the second and third components of the partial cross spectrum, i.e.,

(coherence)_{xy,z} = |C_2^{xy,z}|² / [f_{x,z} f_{y,z}], etc.
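As a computational illustration, the spectral matrix S(ω) = A⁻¹K(Ā′)⁻¹ and a partial cross spectrum can be evaluated directly at any frequency. The sketch below uses illustrative lag-one coefficients and applies the standard partialing formula Cr_{xy,z} = S_xy − S_xz S_zy / S_zz rather than the closed-form expressions quoted above, so it should be read as a numerical cross-check, not as the paper's own derivation.

```python
# Evaluate the three-variable spectral matrix S(w) = A^{-1} K (conj(A)')^{-1} at one
# frequency and form the partial cross spectrum of X and Y given Z.  The lag-one
# coefficients and innovation variances are illustrative assumptions.
import numpy as np

w = 0.8
z = np.exp(-1j * w)                     # e^{-iw}
# Phi[i, j] = lag-one coefficient of variable j in equation i (rows: X, Y, Z equations)
Phi = np.array([[0.5, 0.3, 0.0],
                [0.2, 0.4, 0.1],
                [0.0, 0.3, 0.5]])
K = np.diag([1.0, 0.8, 1.2])            # sigma_1^2, sigma_2^2, sigma_3^2

A = Phi * z - np.eye(3)                 # [alpha b1 c1; a2 beta c2; a3 b3 gamma]
Ainv = np.linalg.inv(A)
S = Ainv @ K @ np.linalg.inv(A.conj().T)   # spectral / cross-spectral matrix (up to 2*pi)

s_zz = S[2, 2].real
Cxy_z = S[0, 1] - S[0, 2] * S[2, 1] / s_zz              # partial cross spectrum X,Y | Z
fx_z = S[0, 0].real - abs(S[0, 2]) ** 2 / s_zz          # partial spectrum of X given Z
fy_z = S[1, 1].real - abs(S[1, 2]) ** 2 / s_zz          # partial spectrum of Y given Z
print(abs(Cxy_z) ** 2 / (fx_z * fy_z))                  # partial coherence at frequency w
```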
CONCLUSION
The fact that a feedback mechanism may be considered as the sum of two causal mechanisms, and that these causalities can be studied by decomposing cross or partial cross spectra suggests methods whereby such mechanisms can be investigated. I hope to discuss the problem of estimating the causal cross spectra in a later publication. There are a number of possible approaches, and accumulated experience is needed to indicate which is best. Most of these approaches are via the modelbuilding method by which the above results were obtained. It is worth investigating, however, whether a direct method of estimating the components of the cross spectrum can be found.
REFERENCES

Basmann, R. L. "The Causal Interpretation of Non-Triangular Systems of Economic Relations." Econometrica 31 (1963): 439–48.
Good, I. J. "A Causal Calculus, I, II." British J. Philos. Sci. 11 (1961): 305–18, and 12 (1962): 43–51.
Granger, C. W. J. "Economic Processes Involving Feedback." Information and Control 6 (1963): 28–48.
Granger, C. W. J., and Hatanaka, M. Spectral Analysis of Economic Time Series. Princeton, N.J.: Princeton Univ. Press, 1964.
Nerlove, M. "Spectral Analysis of Seasonal Adjustment Procedures." Econometrica 32 (1964): 241–86.
Orcutt, G. H. "Actions, Consequences and Causal Relations." Rev. Econ. and Statis. 34 (1952): 305–13.
Simon, H. A. "Causal Ordering and Identifiability." In Studies in Econometric Method, edited by W. C. Hood and T. C. Koopmans. Cowles Commission Monograph 14. New York: Wiley, 1953.
Strotz, R. H., and Wold, H. "Recursive versus Non-Recursive Systems: An Attempt at Synthesis." Econometrica 28 (1960): 417–27.
Wiener, N. "The Theory of Prediction." In Modern Mathematics for Engineers, Series 1, edited by E. F. Beckenbach. New York: McGraw-Hill, 1956.
CHAPTER 2
Testing for Causality*
A Personal Viewpoint

C. W. J. Granger

* Journal of Economic Dynamics and Control, 2, 1980, 329–352.
A general definition of causality is introduced and then specialized to become operational. By considering simple examples a number of advantages, and also difficulties, with the definition are discussed. Tests based on the definitions are then considered and the use of postsample data emphasized, rather than relying on the same data to fit a model and use it to test causality. It is suggested that a Bayesian viewpoint should be taken in interpreting the results of these tests. Finally, the results of a study relating advertising and consumption are briefly presented.

1. THE PROBLEM AND A DEFINITION
Most statisticians meet the concept of causality early in their careers as, when discussing the interpretation of a correlation coefficient or a regression, most textbooks warn that an observed relationship does not allow one to say anything about causation between the variables. Of course this warning has much to recommend it, but consider the following special situation: Suppose that X and Y are the only two random variables in the universe and that a strong correlation is observed between them. Further suppose that God, or an acceptable substitute, tells one that X does not cause Y, leaving open the possibility of Y causing X. In the circumstances, the strong observed correlation might lead to acceptance of the proposition that Y does cause X. This possibility occurs because of the extra structure imposed on the situation by the knowledge that X does not cause Y. As will be seen, the way structure is imposed will be important in definitions of causality. The textbooks, having given a cautionary warning about causality, virtually never then go on with a positive statement of the form "the procedure to test for causality is . . .", although a few do say that causality can be detected from a properly conducted experiment. The obvious
reason for the lack of such positive statements is that there is no generally accepted procedure for testing for causality, partially because of a lack of a definition of this concept that is universally liked. Attitudes towards causality differ widely, from the defeatist one that it is impossible to define causality, let alone test for it, to the populist viewpoint that everyone has their own personal definition and so it is unlikely that a generally acceptable definition exists. It is clearly a topic in which individual tastes predominate, and it would be improper to try to force research workers to accept a definition with which they feel uneasy. My own experience is that, unlike art, causality is a concept for which people know what they do not like but few know what they do like. It might therefore be helpful to present a definition that some of us appear to think has some acceptable features so that it can be publicly debated and compared with alternative definitions.
For ease of exposition, a universe is considered in which all variables are measured just at prespecified time points at constant intervals t = 1, 2, . . . When at time n, let all the knowledge in the universe available at that time be denoted Ω_n and denote by Ω_n − Y_n this information except the values taken by a variable Y_t up to time n, where Y_n ∈ Ω_n. Ω_n includes no variates measured at time points t > n, although it may well contain expectations or forecasts of such values. However, these expectations will simply be functions of Ω_n. Ω_n will certainly be multivariate and Y_n could be, and both will be stochastic variables. To provide structure to the situation, the following axioms will be assumed to hold:
Axiom A: The past and present may cause the future, but the future cannot cause the past.
Axiom B: Ω_n contains no redundant information, so that if some variable Z_n is functionally related to one or more other variables, in a deterministic fashion, then Z_n should be excluded from Ω_n.
Thus, for example, if temperature is measured hourly at some location both in degrees Fahrenheit and degrees Centigrade, there is no point in including both of these variables in the universal information set. Suppose that we are interested in the proposition that the variable Y causes the variable X. At time n, the value X_{n+1} will be, in general, a random variable and so can be characterized by probability statements of the form Prob(X_{n+1} ∈ A) for a set A. This suggests the following:
General Definition: Y_n is said to cause X_{n+1} if

Prob(X_{n+1} ∈ A | Ω_n) ≠ Prob(X_{n+1} ∈ A | Ω_n − Y_n)     for some A.

For causation to occur, the variable Y_n needs to have some unique information about what value X_{n+1} will take in the immediate future.
The ultimate objective is to produce an operational definition, which this is certainly not, by adding sufficient limitations. This process will be discussed in section 3, and the definition will also be defended there. In the following section some more general background material will be introduced which will, hopefully, make the defence a little easier.

2. A VARIETY OF VIEWPOINTS ON CAUSALITY

The obvious place to look for definitions of causality and discussions of the concept is the writings of philosophers on the topic, of which there have been plenty from Aristotle onwards. A useful discussion of parts of this literature can be found in Bunch (1963). I think that it is fair to say that the philosophers have not reached a consensus of opinion on the topic, have not found a definition that a majority can accept and, in particular, have not produced much that is useful to practicing scientists. Most of the examples traditionally used by philosophers come from classical physics or chemistry, such as asking what causes the flame when a match is struck, or noting that applying heat to a metal rod causes it to become longer. Much of the literature attempts to discuss unique causes in deterministic situations, so that if A occurs then B must occur. Although most writers have seemed to agree with Axiom A, that causes must precede effects, even this is not universally accepted. Quite a few philosophers, at least in the past, seem to believe that causes and effects should be contiguous both in time and space, which undoubtedly reflects the preoccupation with classical physics. Social scientists would surely want to consider the possibility that an event occurring in one part of the world could cause an event elsewhere at a later time. The philosophers are not constrained to look for operational definitions and can end up asking questions of the ilk: "If two people at separate pianos each strike the same key at the same time and I hear a note, which person caused the note that I hear?" The answer to such questions is, of course: "Who cares?" For an interesting discussion of the lack of usefulness of the philosophers' contribution by a pair of lawyers, another group which clearly requires an operational definition of causation, see Hart and Honore (1959). They take the viewpoint that "the cause is a difference to the normal course which accounts for the difference in the outcome." They also point out that legally this difference can be not doing something, "as the driver did not put on the brakes, the train crashed." One interesting aspect of the philosophers' contribution is that they often try to discuss what the term causality means in "common usage", although they make no attempt to use common usage terms in their discussion. Rather than trying to decide what the public thinks it means by such a difficult concept as causality, it may be preferable to try to influence common usage towards a sounder definition.
The philosophers and others have provided a variety of definitions, but no attempt to review them will be made here, as most are of little relevance to statisticians. Once a definition has been presented, it is very easy for someone to say “but that is not what I mean by causation.” Such a remark has to be taken as a vote against the particular definition, but it is entirely destructive rather than constructive. To be constructive, the critic needs to continue and provide an alternative definition. What is surely required is a menu of definitions that can be discussed and criticized but at least defended by someone. Only by providing such a menu can a debate be undertaken which, hopefully, will result in one, or a few, definitions that can receive widespread support. I believe that definitions should be allowed to evolve due to debate rather than be judged solely on a truth or not scale. It is possible, as has been suggested, that everyone has their own definition so that no convergence will occur, but this outcome does seem to be unlikely. Before proceeding further, it is worthwhile asking if there is any need, or demand, for a testable definition of causality. It is worth noting that the Social Science Citation Index lists over 1000 papers with words such as causal, causation or causality in their titles, and in a recent five-year period the Science Citation Index lists over 3000 such articles. Papers mentioning such words in the body of the paper, not in the title, are vastly more numerous. There does therefore seem to be a need for a widely accepted definition. Statisticians already have methods for measuring relationships between variables, but causal relations may be thought of as being in some sense deeper than the ordinarily observed kind. Consider the following three time series: Xt = number of patients entering a maternity hospital in day t, Yt = number of patients leaving the same hospital in day t, Zt = ice cream sales in the same city in day t. It seems very likely that the series Xt is useful in forecasting Yt, and it is also possible that Zt may appear to be useful in forecasting Yt, as both variables contain seasonal components. However, most people would surely expect that if a more careful analysis was conducted, using perhaps longer data series, larger information sets including more explanatory variables or more sophisticated techniques, then the observed (forecasting) relationship between Xt and Yt is likely to continue to be found, whereas that between Zt and Yt may well disappear. The deeper relationship is a candidate for the title causal. There thus appears to be both a need, and a demand, for techniques to investigate causality. The possible uses of a causal relationship, if found, will be discussed below. It has been suggested that although such deeper relations need to be named, that name should not involve words like “cause” or “causality”, as these words are too emotion-laden, involve too much preconception
and have too long a history. Alternative phrases such as “due to”, “temporally interrelated”, “temporally prior” and “feedback free” have been proposed, for example. To my mind, this suggestion reflects a basic misunderstanding about language and its use. Most of the components of a language are just a notation, with generally agreed meanings. If I use words such as “apple” or “fear”, I will not need to define them first, as it is understood that most people mean approximately the same thing by them. Occasionally, with unusual or technical words, such as “therm” or “temperature”, I might need to add a definition. If I start a piece of written work, or a lecture, by carefully defining something, then I can use this as a notation throughout, such as distribution, mean, or variance. If my definition is quite different from general usage, then I may be unpopular but will not be logically incorrect, as, for example, if I write cosx for what is usually denoted by x3. As causation has no generally accepted definition, this criticism cannot apply. Provided I define what I personally mean by causation, I can use the term. I could, if I so wish, replace the word cause throughout my lecture by some other words, such as “oshkosh” or “snerd”, but what would be gained? It is like saying that whenever I use x, you would prefer me to use z. If others wanted to refer to my definition, they can just call it “Granger causality” to distinguish it from alternative definitions. There already exist many papers in economics which do just that, some of which are referenced later, and no misunderstanding occurs. If it is later observed that which is called “Granger causality” is identical to the definition introduced by some earlier writer, then the name should be altered. In fact, I would be very surprised if the definition to be discussed in the next section has not been suggested many times in the past. Part of the definition was certainly proposed by Norbert Wiener (1958). It would not be a telling argument to appeal to “common usage” in connection with the words cause or causality, as statisticians continually use words in ways different from common usage, examples being mean, variance, moments, probable, significant, normal, regression and distribution. These remarks made so far in this section are designed to defuse certain criticisms that can be made of what is to follow. My experience suggests that I will be unsuccessful in this aim. When discussing deterministic causation, philosophers distinguish two cases: (a) Necessity – if A occurs, then B must occur. (b) Sufficiency – if I observe B did occur, this means that A must have occurred. For example, if one has a metal rod, then event A might be that one heats the rod and event B is that the rod expands. Although causality is defined for pairs of sequences, or functions, obeying axiom A in parts of mathematical science, any statistician or any worker dealing with data gener-
ated by an animal body, a person's behavior, part of an economy or an atmosphere, for example, will not be happy with these deterministic definitions. Rather than saying "If A occurs, then B must occur", they would probably be happier with statements such as "If A occurs, then the probability of B occurring increases (or changes)." For example, if a person smokes, he does not necessarily get cancer, but he does increase the probability of cancer. If a person goes sailing, he does not necessarily get wet, but he does increase the probability of getting wet. It is therefore important for a useful definition to deal with stochastic events or processes. It is interesting to note that the advent of quantum physics had a big impact on the philosophical writings about causality, which had relied heavily on classical physics for examples. Bertrand Russell, in particular, dramatically changed his views of causality at that time. There have, of course, been several attempts to introduce probabilistic theories of causality. A particularly convincing attempt, well worth reading, is that by Suppes (1970). One of his definitions is: An event B_{t′} (occurring at time t′) is a prima facie cause of the event A_t if and only if (i) t′ < t, (ii) Prob(B_{t′}) > 0, and (iii) Prob(A_t | B_{t′}) > Prob(A_t). One might observe a large African population, for example, and find that the probability of not getting cholera is 0.91 but that of those inoculated against the disease, the probability of not getting cholera is 0.98. If A_t is not getting cholera and B_{t′} is inoculation, then the evidence suggests that "inoculation is a Suppes prima facie cause of not getting cholera." Note that, by replacing A and B with their complements, the same evidence is also likely to lead to the conclusion "not having inoculation is a Suppes prima facie cause of getting cholera." There is obvious arbitrariness in practice in defining an event. If the inequality in (iii) is reversed, Suppes talks of negative causation. Nevertheless, for probabilistic events, rather than variables or processes, the discussion by Suppes is very useful and is certainly potentially applicable to a series of properly conducted random experiments. Good (1961, 1962) has a somewhat similar definition, although he effectively hides it amongst 24 assumptions and 17 theorems combined with very little interpretation. If E and F are two events with F occurring before E, then he says that there is a tendency for F to cause E, given some state of the universe, if Prob(E | F) > Prob(E | not F). It would be a lengthy task to critically discuss and compare such definitions, and so I will not attempt it at this time. At the very start of this paper, the case where random variables X and Y are correlated, but God tells you that causation in one direction is impossible, was briefly discussed. Virtually all definitions of causality require some imposed structure, such as that provided here by God. In many definitions, Axiom A provides this structure, but not all definitions
follow this route. The causality concepts discussed by Simon, Wold and Blalock [see Blalock (1964)] and others do not require Axiom A but do presume special knowledge about the structure of relations between two or more variables. Given this structure, the possibility of causal relationships can then be discussed, usually in terms of the vanishing or not of correlation or partial correlation coefficients. Because these definitions require a number of assumptions about structure to be true, they will be called conditional causation definitions. If the assumptions are correct, or can be accepted as being correct, these definitions may have some value. However, if the assumptions are somewhat doubtful, these definitions do not prove to be useful. Sims (1977) has discussed the Simon and Wold approach and found it not operational in practice. Certainly there has been little use made of these definitions in recent years, at least in economics. The “path analysis” of Sewell Wright (1964) has similarities with the Wold and Simon approach, but he does state that he would prefer to use his analysis together with Axiom A, which would bring it nearer to the definition discussed in the next section. The full question of priority in these matters is a complex one and, I think, need not detain us here. The question of whether any real statement can be made about causality based just on statistical data is clearly an important one. Naturally, as a statistician. I think that proper statements can be made, if they are carefully phrased. The link between smoking and cancer provides an example. So far the only convincing link has been a statistical one, but it is now generally accepted.The real question for most people is not “Does smoking cause cancer?” but rather “How does smoking cause cancer?” Before the accumulation of statistical evidence, people could be thought of as having subjective, personal probabilities that the statement “Smoking causes cancer, in a statistical sense” is true. Since the evidence has been presented, for most people these subjective probabilities have greatly increased and may well be near one. The weight-of-evidence is certainly in favor of this causality. Smoking is certainly a prima facie cause of cancer and is probably more than that, in the opinion of the majority. A decision, such as for an individual to stop smoking or for a government to ban it, could be a wrong one, but statisticians are used to making decisions under uncertainty and realize that when properly based on the statistical evidence can be wrong but are usually correct. There is one problem with the statistical approach which was pointed out by the philosopher Hume as applying to any testing procedure. It is always possible that the evidence from the past may be irrelevant, as causation can change from the past to the future. It is therefore necessary to introduce: Axiom C: All causal relationships remain constant in direction throughout time.
The strength, and lags, of these relationships may change, but causal laws are not allowed to change from positive strength to zero, or go from zero to positive strength, through time. This axiom is, of course, central to the applicability of all scientific laws and so is generally accepted, even though it is not necessarily true.
3. AN OPERATIONAL DEFINITION
The general definition introduced above is not operational, in that it cannot be used with actual data. To become operational, a number of constraints need to be introduced. To do this, it is convenient to first re-state the general definition. Suppose that one is interested in the possibility that a vector series Y_t causes another vector X_t. Let J_n be an information set available at time n, consisting of terms of the vector series Z_t, i.e.,

J_n : Z_{n−j},  j ≥ 0.

J_n is said to be a proper information set with respect to X_t if X_t is included within Z_t. Further, suppose that Z_t does not include any components of Y_t, so that the intersection of Z_t and Y_t is zero. Further, define

J′_n : Z_{n−j}, Y_{n−j},  j ≥ 0,

so that J′_n is the information set J_n plus the values in past and present Y_t. Denote by F(X_{n+1}|J_n) the conditional distribution function of X_{n+1} given J_n, so that this distribution has mean E[X_{n+1}|J_n]. The notation using other information sets is obvious. These expressions are used in the following definitions:
Definition 1: Y_n does not cause X_{n+1} with respect to J′_n if

F(X_{n+1}|J_n) = F(X_{n+1}|J′_n),

so that the extra information in J′_n has not affected the conditional distribution. A necessary condition is that E[X_{n+1}|J_n] = E[X_{n+1}|J′_n].
Definition 2: If J′_n ≡ Ω_n, the universal information set, and if

F(X_{n+1}|Ω_n) ≠ F(X_{n+1}|Ω_n − Y_n),

then Y_n is said to cause X_{n+1}.
Definition 3: If

F(X_{n+1}|J′_n) ≠ F(X_{n+1}|J_n),
then Y_n is said to be a prima facie cause of X_{n+1} with respect to the information set J′_n.
Definition 4: Y_n is said not to cause X_{n+1} in mean with respect to J′_n if

δ_{n+1}(J′_n) = E[X_{n+1}|J′_n] − E[X_{n+1}|J_n]

is identically zero.
Definition 5: If δ_{n+1}(Ω_n) is not zero, then Y_n is said to cause X_{n+1} in mean.
Definition 6: If δ_{n+1}(J′_n) is not identically zero, then Y_n is said to be a prima facie cause in mean of X_{n+1} with respect to J′_n.
Definition 2 is equivalent to the general definition introduced in the first section, which was discussed in Granger and Newbold (1977). If a less general information set than the universal set is available, J′_n, then a prima facie cause can occur, as in Definitions 1 and 3. These definitions can be strengthened by adding phrases such as "almost surely", or "except on sets of measure zero" at appropriate points, but as these will not help towards the eventual aim of an operational definition capable of being tested, such niceties are ignored. If, rather than discussing the whole distribution of X_{n+1}, one is content with just point forecasts using a least squares criterion, then the final three definitions become relevant. To ask for causality in mean is much less stringent than asking for full causality, but it does provide a definition much nearer to being operational. If one wishes to use some criterion other than least squares, this can be done, but point forecasts will be much more difficult to obtain. Definition 6 can be rephrased: Let σ²(X|J_n) be the variance of the one-step forecast error of X_{n+1} given J_n, and similarly for σ²(X|J_n, Y) ≡ σ²(X|J′_n); then Y is a prima facie cause of X, with respect to J′, if

σ²(X|J_n, Y) < σ²(X|J_n).

Thus knowledge of Y_n increases one's ability to forecast X_{n+1}, in a least squares sense. This corresponds to a definition hinted at by Wiener (1958), introduced specifically in Granger (1964) and Granger and Hatanaka (1964), re-introduced in Granger (1969), amplified and applied by Sims (1972, 1977) and then used by numerous authors since, including Black (1978), Williams, Goodhart and Gowland (1976), Skoog (1976), Sargent (1976), Mehra (1977), Gordon (1977), Feige and Pearce (1976a,b), Ciccolo (1978), and Caines, Sethi and Brotherton (1977). However, it should be said that some of the recent writers on this topic, because they have not looked at the original papers, have evolved somewhat unclear and incorrect forms of this definition. It is rather like the party game where a phrase or rumor is whispered around the room, ending up quite differently from how it started.
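Definition 6, in its rephrased form, suggests an immediate computational check: fit the one-step predictor with and without past Y and compare post-sample forecast error variances, in line with the post-sample emphasis of this paper. The sketch below is a minimal illustration under assumed data and lag choices; none of the numbers or names come from the text.

```python
# Sketch of checking a "prima facie cause in mean" with respect to
# J_n = (past X, past Y): fit restricted and unrestricted one-step predictors
# on a fitting period and compare post-sample forecast MSEs.
# The simulated system and tuning choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, m, split = 1500, 3, 1000
eta, eps = rng.standard_normal(T), rng.standard_normal(T)
X, Y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    Y[t] = 0.6 * Y[t - 1] + eta[t]
    X[t] = 0.4 * X[t - 1] + 0.35 * Y[t - 1] + eps[t]

def design(target, blocks, m):
    n = len(target)
    cols = [np.ones(n - m)] + [blk[m - j:n - j] for blk in blocks for j in range(1, m + 1)]
    return np.column_stack(cols), target[m:]

post_mse = {}
for label, blocks in {"past X only": [X], "past X and past Y": [X, Y]}.items():
    Z, y = design(X, blocks, m)
    beta, *_ = np.linalg.lstsq(Z[: split - m], y[: split - m], rcond=None)
    fc_err = y[split - m:] - Z[split - m:] @ beta       # post-sample one-step errors
    post_mse[label] = np.mean(fc_err ** 2)
print(post_mse)   # the set including past Y should give the smaller post-sample MSE
```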
In this newer formulation, Axiom B becomes:
Axiom B′: F[Y_n|J_n] is not a singular distribution, that is, not that of a variable taking only a constant value.
This implies that Y_n is not deterministically related to the contents of J_n. If purely time-series techniques are used to generate one-step forecasts, these forecasts will usually be linear functions of the information set because of the present state of the art, although some progress in the use of certain non-linear models is occurring; see, for instance, Granger and Andersen (1978) and Swamy and Tinsley (1980). However, if forecasts are made from reduced form equations derived from a possibly non-linear, structural econometric model, then the contents of the information set may be utilized non-linearly. The definitions discussed here do not require that only linear models are used, although most of the actual applications so far and much of the theoretical discussions have concentrated on the linear case. If the available information in J_n is used only linearly, then it may be possible to observe that Y_n is a linear prima facie cause in mean of X_{n+1} with respect to J′_n, and with the available modeling and forecasting techniques this provides the operational definition that is being sought. For the remainder of this paper the phrase "linear prima facie cause" will be replaced simply by "cause" for convenience, unless a more general case is being considered. The definition as given relates a pair of vectors, Y_n and X_{n+1}, but the usual case will be concerned with just a pair of individual series, Y_n and X_{n+1}. Further, to actually model data it will usually be necessary to assume either that the series are stationary or that they belong to some simple class of models with time-varying parameters. Again, this is not strictly necessary for the definition but is required for practical implementation.
There are a number of important implications of the definition of cause here developed. If, for example, it is found that Y_n causes X_{n+1} with respect to some information set, then this implies no restrictions on whether or not X_n causes Y_{n+1}; this second causation may occur but need not. If both causations occur, one may say that there is feedback between the two series X_t and Y_t. A simple example is
X_t = ε_t + η_{t−1},
Y_t = η_t + ε_{t−1},
where ε_t, η_t are a pair of independent white noise series. Further, if there are three series X_t, Y_t and Z_t and it is observed that X causes Y and Y causes Z, then it is not necessarily true that X causes Z, although it can occur.
Example 1:
X_t = ε_t,
Y_t = ε_{t−1} + η_t,
Z_t = η_{t−1},
where again ε_t, η_t are independent white noises. There are four information sets that need to be considered: J_n(X, Y) – consisting of past and
present X_{n−j}, Y_{n−j} (j ≥ 0), and similarly, J_n(X, Z), J_n(Y, Z), and J_n(X, Y, Z) – consisting of past and present X_{n−j}, Y_{n−j}, Z_{n−j} (j ≥ 0). Then clearly X causes Y with respect to either J_n(X, Y) or J_n(X, Y, Z), Y causes Z with respect to J_n(Y, Z) and to J_n(X, Y, Z), but X does not cause Z with respect to J_n(X, Z) although it does cause Z with respect to J_n(X, Y, Z). This last result occurs because Z_{n+1} is completely predetermined from Y_{n−j}, X_{n−j} (j ≥ 0) but not from just Y_{n−j} (j ≥ 0). The importance of stating the information set being utilized is well illustrated by this example. A further example shows a different situation:
Example 2:
X_t = ε_t + ω_t,
Y_t = ε_{t−1},
Z_t = ε_{t−2} + η_t,
where et, ht and wt are three independent white noises. Here X causes Z in Jn(X, Z) but not in Jn(X, Y, Z). One thing that is immediately clear from the definition is that if Yn causes Xn+1, then Y¢n = a(B)Yn causes X¢n+1 = b(B) Xn+1 if a(B) and b(B) • are each one-sided filters of the form a(B) = Â j =0 a j B j . However, if two-sided filters are used, as occurs for example in some seasonal adjustment procedures, then causality can obviously be lost because Axiom A is disrupted. The use of proper information sets, that is, sets including the past and present values of the series to be forecast Xt, does have the following important implication: It is impossible to find a cause for a series that is self-deterministic, that is, a series that can be forecast without error from its own past. The basic idea of the causal definition being discussed is that knowledge of the causal variable helps forecast the variable being caused. If a variable is perfectly forecastable from its own past, clearly no other variable can improve matters. Example 3: Xt = a + bt + ct2 and Yt = dXt+1. Then the following three equations generate Xt exactly, without error, i.e.: Xt = a + bt + ct2,
Xt = d-1Yt-1,
Xt = 2Xt-1 - Xt-2 + 2c,
so that at first sight Xt is “caused” by time, or by Yt-1, or by its own past. If all three equations fit equally well, that is perfectly, it is clear that no kind of data analysis can distinguish between them. It is therefore obvious that in this circumstance a statistical test for causality is impossible, unless some extra structure is imposed on the situation. It may be noted that causality tests can be made with variables that contain deterministic components, as proved formally by Hosoya (1977), but with this definition one cannot say that the deterministic component of one variable causes the deterministic component of another variable.
Testing for Causality: A Personal Viewpoint
4.
59
SOME DIFFICULTIES
Virtually any sophisticated statistical procedure has some problems associated with it, and there is every reason that this will be true also with any operational definition of causality. These difficulties can either be intrinsic to the definition itself or be associated with its practical implementation. Some of the difficulties will arise because of data inadequacies. One obvious problem arises when the data is gathered insufficiently frequently. Suppose that a change in wood prices causes a change in furniture prices one week later, but prices are only recorded monthly; then the true causal relationship will appear to be instantaneous. It is perhaps worth defining “prima facie apparent instantaneous causality in mean”, henceforth instantaneous causality, between Xn+1 and Yn+1 with respect to J¢n if E[ X n +1 J n , Yn +1 ] π E[ X n +1 J n ]. Although the phrase ‘instantaneous causality’ is somewhat useful on occasions, the concept is a weak one, partly because Axiom A is not being applied and because, at least in the linear case, it is not possible to differentiate between instantaneous causation of X by Y, of Y by X or of feedback between X and Y, as simple examples show. If extra structure is imposed, it may be possible to distinguish between these possibilities, as will be discussed below. If one totally accepts Axiom A, then instantaneous causality will either occur because of the data collection problem just mentioned or because both series have a common cause which is not included in the information set J¢n being used. The problem of missing variables, and consequential misinterpretation of one’s results, is a familiar one in those parts of statistics which consider relationships between variables. A simple example of apparent causation due to a common cause is: Example 4: Zt = ht,
Xt = ht-1 + dt,
Yt = ht-2 + et,
where et, ht and dt are independent white noises. Here Zt is causing both Xt and Yt with respect to information sets Jn(X, Z), Jn(X, Z) and Jn(X, Y, Z), but Xt is causing Yt in Jn(X, Y) but not Jn(X, Y, Z). This apparent causation of Y by X in Jn(X, Y ) may be thought of as spurious because it vanishes when the information set is expanded, something one would not expect with a true cause. Sims (1977) has studied the system Yt = c(B)Zt + e t , X t = d(B)Zt + ht , and found that it is unlikely to give rise to a spurious one-way causation between X and Y based on Jn(X, Y), although presumably a feedback relationship between X and Y is more likely to be found.
60
C. W. J. Granger
An important case where missing variables can lead to misleading interpretations is when one variable is measured with an error having time structure. The following example illustrates the difficulty: Example 5: Xt = ht,
Yt = dt,
Zt = Xt + et + bet-1,
where ht and dt are white noises with correlation (ht, ds) = 0, t π s, but this correlation equals l when t = s and et is a white noise independent of ht and dt. Zt may be thought of as Xt with an MA(1) measurement error. There is no causation between Xt and Yt apart from instantaneous causation. As Zt is the sum of a white noise and an MA(1) term, it will be MA(1), so that there exists a constant q with ΩqΩ < 1 and a white noise series et so that Zt = (1 + qB)et . It follows that -1
-1
et = (1 + qB) ht + (1 + qB) (e t + be t -1 ). The one-step forecast of Zn+1 using Zn-j ( j 0) is just qen with error en-1, but this error is a function of hn-j (j 0) which is correlated with dn-j (j 0) which is equal to Yn-j. It thus follows that the Yn-j will help forecast Zn+1, so that apparently Yn causes Zn+1 with respect to Jn(Y, Z), but this would not be the case if Xn-j were observable, so that Jn(X, Y, Z) could be considered. This result is at first sight quite worrying, as in many disciplines, such as economics, variables are almost inevitably observed with error, so that xt – the missing variable – will always be missing. However, as the results of Sims (1977) and Newbold (1978) show, by no means does the addition of measurement error to variables necessarily produce spurious causations, as the error has to have particular time-series structure compared to the original series. Nevertheless, the possibility of misleading results occurring from a common type of situation has to be kept in mind when interpreting results. Another situation which needs care in interpretation is when the time that a variable is recorded is different from the time at which the event occurred that led to the variable’s value. For example, March unemployment figures in New York City and New York State may not become known to the public until April 1 and April 15, respectively. The values must be associated with March, not the time of their release, otherwise spurious causation may well occur. A further example of this problem is the relationship between lightning and thunder. As the lightning is usually observed before the thunder, because light travels faster than sound, it might seem that lightning causes thunder. However, both are manifestations of what is essentially the same event, and if the observations are placed at the time of the original electrical discharge, the spu-
Testing for Causality: A Personal Viewpoint
61
rious causation disappears. If one is being pedantic, the light-producing part of the discharge does occur before the sound-producing part, but both lightning and thunder do have a common cause. A further interpretation problem can arise because of Axiom B. Suppose one has three variables which are related through some linear identity, such as Work force = Unemployed + Employed. It is clear that all three variables cannot be in the information set to be used, but it is not necessarily obvious which one should be excluded. If, for example, total consumption is caused by size of work force, but this latter variable is excluded, one may expect to find that numbers of both unemployed and employed appear to cause consumption. Once one is aware of such interpretational difficulties, it is not difficult to invent strategies for analyzing them, such as excluding different variables and repeating the analysis, or by testing equality of certain coefficients in the model, for example. One apparently serious problem of interpretation, which is suggested by the thunder and lightning example, arises from the idea of a leading indicator. Suppose that X causes both Y and Z, but that the causal lag is shown from X to Y, then from X to Z. If now X is not observed, Y will appear to cause Z. Example 4 shows such a situation. The search for such leading indicators occurs in various fields. In economics, for example, the Bureau of the Census publishes a list of such indicators, plus an index, which are supposed to help indicate when the economy is about to experience a down-turn or an up-turn. A number of possible leading indicators for earthquakes are also being considered, an example being unusual animal behavior. If leading indicators are included in an information set, tests may well indicate prima facie causality. In most cases this will be just another example of the missing variable problem. Sometimes the missing variable will be available and, when added to the information set, the leading indicator will no longer appear to cause. In other cases the missing variable is not observable and, when this occurs, it will not always be obvious whether a variable is a cause or merely a leading indicator. This relates to the question of how to interpret the outcomes of the causality tests, which are discussed in correct it is helpful to use it, if it is incorrect one may well be worse off by its use. This is certainly true of any causality test that is conditional on the truth of some very specific theory. Whereas in many fields there may be theories, specific or not, that are generally accepted as being true, such theories are much more difficult to find in economics. It is interesting to note that Zellner in his paper never gives a single example of what he would consider to be a “well thought out economic theory” nor even of a specific theory or law that is generally accepted by the majority of economists. Again, one
62
C. W. J. Granger
returns to the personal belief aspect of causality testing; an individual may strongly believe some theory and is happy to test causality conditional on this theory, whereas someone else would not want to do that. One obvious place where a good theory would be particularly useful would be where extra structure is required to resolve causal directions in what appears to be instantaneous causality/feedback. For example, if sufficient structure can be put on a model to ensure identifiability – in the econometrician’s sense of having a unique model – then a conditional causal test can be constructed.This is very much in the spirit of the Simon and Wold approach to causality, which is very well summarized in Zellner’s article. However, it must be emphasized that only conditional causality can result, and this is potentially very much weaker than the unconditional causality definition discussed earlier. The definitions of causation introduced in the previous section admittedly have a number of arbitrary aspects, some of which are potentially removable, others perhaps not. The data is assumed to be measurable on a cardinal scale, whereas actual data often occurs on different scales. If the data is intrinsically ordinal, I consider that it may be difficult to use these definitions, because of the lack of suitable distribution functions. However, it may be possible to build and evaluate forecasting models for such data, and so one aspect of the definitions will go through. With attribute data, without any natural order to the categories, the general definition remains unstable, but clearly the ‘causality in mean’ definitions are not relevant. This type of data is much nearer to the situation of one event causing another that was discussed by Suppes (1970) and Good (1961/62) and may often occur as the outcome of designed experiments. To be relevant to statisticians, a sequence of experiments will be required, as there seems to be no possibility of investigating causal relationships between unique events using statistical procedures. A further arbitrary feature of the definitions is the use of one-step forecasts rather than h-step for any h. It is usually by no means clear what is the natural length of the step, and the pragmatic procedure is to use just the data period of the publicly available data, which can lead to the apparent instantaneous causation problem mentioned above. In the bivariate information set case, where one asks if Y causes X with respect to Jn(X, Y ), Pierce (1975) has shown that if Y causes X using an h-step forecasting criterion, with h > 1, then it will necessarily be found that Y causes X with a one-step criterion. However, this does not seem to be true in the multivariate case: Example 6: Xt = et,
Yt = et-2 + ht,
Zt = et-1 + qt,
where et, ht, qt are independent zero-mean, white noise series. Here Zn causes Xn+1 with respect to Jn(X, Z) and Jn(X, Y, Z), Yn causes Xn+2 with respect to Jn(X, Y) and Jn(X, Y, Z), Yn(or Yn-1) causes Xn+1 with
respect to Jn(X, Y) but not with respect to Jn(X, Y, Z). Although some justification can be made that one-step forecasts are the most natural to consider, it will remain an arbitrary aspect of the definitions. It is, on occasion, possible to distinguish between different types of causes by considering alternative information sets. For example, one might call Y a primary cause of X, if tests show this to be so for Jn(X, Y), Jn(X, Y, Z) and for all other information sets containing X and Y and any other series. A secondary cause might be one such that X causes Z in Jn(X, Y, Z) but not in Jn(X, Z), as illustrated in Example 1 above. This example shows that X can cause Z, according to the definition, even though X and Z are statistically independent, provided that X can add further information to the primary cause, which in Example 1 is Y. The existence of such secondary causes may be upsetting to some readers, and so it might be relevant to alter the basic definition to deal only with primary causes. However, I personally would not, at this time, wish to emphasize such a change.

Most of the problems and difficulties discussed in this section relate not to the basic definition but to making it operational, in my opinion. Some are inherent to any statistical study using an incomplete or finite data set. Many of the difficulties become considerably reduced in importance once care is taken with interpretation of test results. In the following section a brief discussion of actual test procedures is presented, and in the final section some further important interpretational questions are considered, such as the relevance of control variables and the meaning of exogeneity.
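The leading-indicator difficulty described earlier in this section is easy to reproduce by simulation. The sketch below is not part of the original discussion; it uses hypothetical series and arbitrary parameter values, with an unobserved X driving Y after one period and Z after two, so that Y leads Z without causing it. A regression of Z on its own past and lagged Y, with X excluded from the information set, then produces an apparently significant “causal” coefficient, and adding the missing variable removes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Unobserved common cause x; y reacts after one period, z after two,
# so y "leads" z without actually causing it.
x = rng.standard_normal(n)
y = np.zeros(n)
z = np.zeros(n)
for t in range(2, n):
    y[t] = 0.8 * x[t - 1] + 0.5 * rng.standard_normal()
    z[t] = 0.8 * x[t - 2] + 0.5 * rng.standard_normal()

def ols(dep, regressors):
    """OLS with a constant; returns coefficients and t-ratios."""
    X = np.column_stack([np.ones(len(dep))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    resid = dep - X @ beta
    sigma2 = resid @ resid / (len(dep) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

yy, zz = y[2:], z[2:]
# Regress z_t on z_{t-1} and y_{t-1}, with x excluded from the information set.
beta, tstat = ols(zz[1:], [zz[:-1], yy[:-1]])
print("coefficient on lagged y:", round(beta[2], 3), " t-ratio:", round(tstat[2], 1))

# Adding the true cause x_{t-2} removes the apparent effect of lagged y.
beta2, tstat2 = ols(zz[1:], [zz[:-1], yy[:-1], x[1:-2]])
print("same coefficient given lagged x:", round(beta2[2], 3), " t-ratio:", round(tstat2[2], 1))
```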
5. TEST PROCEDURES
There has been a lot of thought given in recent years to the question of how the above definitions can be actually tested, although the major attention has been given to the case of whether X causes Y with respect to Jn(X, Y), that is, just the two-variable case. Although most empirical studies have considered this case, it is probably not a particularly important one in economics, as it is easy to suggest relevant missing variables. It is clear that more attention is needed on how to utilize bigger information sets. As the two-variable case has been well summarized recently by Pierce and Haugh (1977), only a few of the more important aspects will be discussed here. To give some structure to the discussion, consider the pair of zero-mean, jointly stationary series xt, yt, which are purely non-deterministic. The moving-average, or Wold, representation can be denoted, following Pierce and Haugh, by

$$\begin{pmatrix} x_t \\ y_t \end{pmatrix} = \begin{bmatrix} \psi_{11}(B) & \psi_{12}(B) \\ \psi_{21}(B) & \psi_{22}(B) \end{bmatrix} \begin{pmatrix} a_t \\ b_t \end{pmatrix}, \tag{1}$$
where each ψij(B) is a power-series, possibly infinite in length, in the backward operator B and (at, bt)′ is a two-element white noise vector, with zero correlation between at and bs, except possibly when t = s. Assuming that the moving average matrix operator is invertible, the corresponding autoregressive model can be denoted by

$$\begin{bmatrix} A(B) & H(B) \\ C(B) & D(B) \end{bmatrix} \begin{pmatrix} x_t \\ y_t \end{pmatrix} = \begin{pmatrix} a_t \\ b_t \end{pmatrix}. \tag{2}$$
Rather than considering models for the actual series, one can equally well consider relationships between prewhitened series. If the filters F(B)xt = ut and G(B)yt = vt produce a pair of series ut and vt that are individually white noises, then moving average and autoregressive models will exist of the form

$$\begin{pmatrix} u_t \\ v_t \end{pmatrix} = \begin{bmatrix} \theta_{11}(B) & \theta_{12}(B) \\ \theta_{21}(B) & \theta_{22}(B) \end{bmatrix} \begin{pmatrix} a_t \\ b_t \end{pmatrix} \tag{3}$$

and

$$\begin{bmatrix} \alpha(B) & \beta(B) \\ \gamma(B) & \delta(B) \end{bmatrix} \begin{pmatrix} u_t \\ v_t \end{pmatrix} = \begin{pmatrix} a_t \\ b_t \end{pmatrix}. \tag{4}$$
There are obviously relationships between the various operators, as described by Pierce and Haugh. Denote the correlation between ut−k and vt by ρuv(k) and consider the regression

$$v_t = \sum_{j=-\infty}^{\infty} w_j u_{t-j} + f_t, \tag{5}$$

where ρuv(k) = (σu/σv)wk. Similarly, one can consider the regression

$$y_t = V(B)x_t + h_t. \tag{6}$$
Here V(B) = (F(B)/G(B))w(B) and ft, ht are residuals which are uncorrelated with ut−j, xt−j, respectively, but are not necessarily white noises. Using this notation, Pierce and Haugh (1977) prove the following two theorems, amongst others:

Theorem 1: Instantaneous (prima facie) causality (in mean) exists if and only if the following equivalent conditions hold: (i) at least one of cov(at, bt), γ(0), β(0) in (4) is non-zero, or (ii) at least one of cov(at, bt), H(0), C(0) in (2) is non-zero.

In their 1977 paper, Pierce and Haugh had further conditions, such as ρuv(0) ≠ 0 or w0 ≠ 0, but Price (1979) and Pierce and Haugh (1979) show that these conditions are not necessarily correct when there is feedback between x and y.
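As a concrete illustration of the objects entering (3)–(6), the following sketch — not part of the original text, written in Python with simulated series standing in for any real data and simple least-squares AR(4) filters used for prewhitening — computes the residual cross-correlations ρuv(k) = corr(ut−k, vt). Under condition (7) of Theorem 2 below, clearly non-zero correlations at negative k are the pattern associated with y being a prima facie cause of x in mean.

```python
import numpy as np

def ar_prewhiten(series, p=4):
    """Fit an AR(p) filter by least squares and return the residual series."""
    n = len(series)
    X = np.column_stack([np.ones(n - p)] +
                        [series[p - j - 1:n - j - 1] for j in range(p)])
    dep = series[p:]
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return dep - X @ beta

def cross_corr(u, v, k):
    """rho_uv(k) = corr(u_{t-k}, v_t); k may be negative."""
    if k >= 0:
        a, b = u[:len(u) - k], v[k:]
    else:
        a, b = u[-k:], v[:len(v) + k]
    return np.corrcoef(a, b)[0, 1]

# Simulated pair in which lagged y helps to forecast x (y causes x in mean).
rng = np.random.default_rng(1)
n = 1000
y = np.zeros(n); x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
    x[t] = 0.6 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

u = ar_prewhiten(x)          # the u_t of (5), from the filter on x
v = ar_prewhiten(y)          # the v_t of (5), from the filter on y
for k in range(-3, 4):
    print(k, round(cross_corr(u, v, k), 2))
# Approximate two-standard-error band: 2 / sqrt(len(u)).  Correlations clearly
# outside the band at k < 0 (future u with current v) correspond, via condition
# (7) of Theorem 2, to y being a prima facie cause of x in mean.
```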
Theorem 2: y is not a (prima facie) cause (in mean) of x if and only if the following equivalent conditions hold:

(1) ψ12(B) [equivalently θ12(B)] can be chosen zero.
(2) θ12(B) is either 0 or a constant.
(3) ψ12(B) is either 0 or proportional to ψ11(B).
(4) Vj = 0 (j < 0) in (6).
(5) β(B) is either 0 or a constant.
(6) H(B) is either 0 or proportional to A(B).
(7) ρuv(k) = 0, or equivalently wk = 0 (k < 0).
If any of these conditions do not hold, then y will be a prima facie cause of x in mean with respect to Jn(x, y). (1) and (4) were pointed out by Sims (1972), the first part of (6) was mentioned in Granger (1969), and that of (7) was emphasized in Granger and Newbold (1977). Multivariate generalizations of these conditions, concerning the possibility that the vector y may cause the vector x, have been discussed by Caines and Chan (1975) and elsewhere. Because of this variety of equivalent conditions, there are clearly numerous statistical tests that can be devised based on these conditions. The performance of these tests needs further investigation, either using statistical theory or Monte Carlo study, especially as some are suspected to be occasionally biased or to be lacking in power.

My own experience has largely been with the autoregressive form (2), first fitting the bivariate model with H(B) constrained to be zero and then refitting without this constraint, to see if a significant decrease in the variance of the residual for the xt equation can be achieved. This experience, using both simulated and actual data as, for example, in Chiang (1978), suggests that misleading results do not occur but that the power is not particularly satisfactory. However, these tests are not of considerable importance for two basic reasons: (i) they deal only with the bivariate case, whereas the more important applications are likely to involve more variables; and (ii) they are not properly based on the definitions presented above. This latter point arises because these definitions are explicitly based on the extra forecasting ability achieved from one information set over another, whereas the equivalent conditions given in Theorem 2, for example, make no mention of forecasts. This makes no difference for populations, as the definition of non-causation in mean and the conditions in Theorem 2 are then equivalent. However, if only a finite sample is available, as will always occur in practice, the equivalence disappears. Suppose that a sample is used to model the relationship between xt and yt in the autoregressive form (2) and the estimate of H(B) is found to be significantly different from zero. Then the result is essentially saying that if this fact were known at the start of the sample, it could have been used to improve forecasts of xt. This is quite different from actually producing improved forecasts. It is generally
accepted that to find a model that apparently fits better than another is much easier than to find one that forecasts better. Thus tests based on the “equivalent conditions” in Theorem 2 are just tests of goodness of fit, whereas the original definition requires evidence of improved forecasts. To satisfy this requirement, alternative models, based on different information, can be identified and estimated using the first part of the sample and then their respective forecasting abilities compared on the later part of the sample. The best way to actually test for differences in “post-sample” forecasting ability and the optimum way to divide the sample into a modeling part and a forecast evaluation part need further investigation, but at least a test that is in sympathy with the basis of the definition would result.

An application of these ideas, in a two-variable case, is provided by Ashley, Granger and Schmalensee (1979), who consider possible causal relationships between aggregate advertising expenditures and consumption spending. They use a five-step procedure:

(i) Using a block of data, which is called the sample, each series is prewhitened by building ARIMA models, to get ut, vt as above.
(ii) The cross-correlations ρuv(k) are examined to see if there is evidence of possible causal relationships.
(iii) For each indicated possible causal relationship, a model is built on these residuals ut, vt. If a one-way cause is suggested, the transfer function methods of Box and Jenkins (1970) may be utilized, but if a two-way causality appears to be present, the method for modeling this situation suggested in Granger and Newbold (1977) can be used.
(iv) The models in stages (i) and (iii) are then put together to suggest a model for the original data, in differenced form where necessary. This model is estimated, insignificant terms dropped and a final model achieved.
(v) The forecasting abilities, in terms of mean-squared one-step forecast error, of the bivariate model and the single series ARIMA model are then compared using post-sample data. If the bivariate model forecasts significantly better, then evidence of causation is found.

These stages are somewhat biased against finding causation, as, if in stage (ii) no evidence of causes is found, then no bivariate models will be constructed. The separation of the modeling period and the evaluation period does prevent evidence for spurious causation occurring because of data mining. However, a weakness is that if an important structural change occurs between the sample and the post-sample, the test will lose power. The relevance of Axiom C is evident. Ashley, Granger and Schmalensee, using quarterly data, find evidence that consumption causes advertising, but that advertising does not cause consumption
except instantaneously. These results agree with parts of the advertising literature that find advertising expenditure is determined by management from previous sales figures and that advertising has little or no long-memory ability. On the other hand, these results might well be the opposite of the pre-conceptions of many economists, which illustrates both the relevance of performing a test and also of not relying on some partly formed theory.
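The post-sample comparison at the heart of step (v) can be sketched as follows. This is a deliberately simplified stand-in for the Ashley–Granger–Schmalensee procedure — simulated series and plain autoregressions replace the identified ARIMA and transfer-function models — but it shows the essential step: estimate the univariate and the bivariate model on the first part of the sample and compare mean-squared one-step forecast errors on the held-out remainder.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
y = np.zeros(n); x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()              # candidate cause
    x[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + rng.standard_normal()

def post_sample_errors(dep, regressors, split):
    """Estimate by OLS on the modeling sample; return one-step errors after it."""
    X = np.column_stack([np.ones(len(dep))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X[:split], dep[:split], rcond=None)
    return dep[split:] - X[split:] @ beta

split = 400                       # first 400 observations: modeling period
xt, xlag, ylag = x[1:], x[:-1], y[:-1]

e_uni = post_sample_errors(xt, [xlag], split)          # x on its own past only
e_biv = post_sample_errors(xt, [xlag, ylag], split)    # x on its own past and lagged y

print("post-sample MSE, univariate model:", round(np.mean(e_uni ** 2), 3))
print("post-sample MSE, bivariate model :", round(np.mean(e_biv ** 2), 3))
# Evidence that y causes x in mean is an improvement in genuine post-sample
# forecasts, not merely a better fit over the period used for estimation.
```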
6. DISCUSSION AND CONCLUSIONS
The definition of causation proposed and defended above essentially says that Xn+1 will consist of a part that can be explained by some proper information set, excluding Yn−j (j ≥ 0), plus an unexplained part. If the Yn−j can be used to partly forecast the unexplained part of Xn+1, then Y is said to be a prima facie cause of X. It is clear that in practice the quality of the answer one gets from a test is related to the sophistication of the analysis used in deciding what is explained and by what. The definition also relies very heavily on Axiom A, that the future cannot cause the past, as using the “arrow of time” imposes the structure necessary for the definition to hold. It also means that the definition does emphasize forecasting. If one does not accept Axiom A, the rest of the work connected with the definition becomes irrelevant. It is important to realize that the truth of Axiom A cannot be tested using the methods discussed in this paper. I should point out that the work by physicists on “time-reversibility” does not seem to contradict Axiom A, as a careful reading of the review article by Overseth (1967) will show. Because of the way the definition is framed, and the tests based on it are organized, it is only appropriate for use with sequences of data. It cannot say anything about unique events or contribute to topics such as whether there exists an ultimate or first cause. Such topics have to remain the province of philosophers and theologians.

In interpreting the test results it has been suggested above that one thinks in terms of changing personal beliefs about whether Y causes X. There is nothing essentially new in this suggestion, as it is certainly what occurs in practice. The definition and tests based on it provide a way to organize the available data in a way that some workers will feel is appropriate for possibly changing their prior probabilities. I leave to others the discussion of the effect of this procedure, and of the whole causation testing methodology, on scientific methodology.

Some of the economists writing about what is called Granger causality have related this concept to the more familiar one of exogeneity; see, for example, Sims (1977) and Geweke (1978). When econometric models are constructed it is usual to divide variables into exogenous (Z) and endogenous (Y), and it is assumed that components of Z may cause components of Y but not vice versa. There is thus assumed to be a one-way causal relationship from Z to Y. For estimation and econometric
identification purposes, it is important that this classification be correct as far as questions of efficiency, model uniqueness and model specification are concerned. Tests for exogeneity are with respect not only to the information set used but also to the division of variables picked. One may find, for instance, that Z minus W is exogenous to Y plus W, for some variable W. It is also possible that missing variables can disrupt the exogenous interpretation, as when Z is exogenous to Y but not to some extended Y. The possibility of “instantaneous causality” obviously greatly complicates the problem of how to test for exogeneity. Some of these problems have been discussed elsewhere [Granger (1980)] and so will not be followed up here.

Some variables are such that prior beliefs will be strong that they are exogenous: An example is that weather is probably exogenous to the economy. However, other variables have often been considered to be exogenous yet need to be tested, the best examples being the control variables. One can argue that a government controlled interest rate is in fact partly determined by previous movements elsewhere in the economy, and so is not strictly exogenous. The true exogenous part of such a variable is that which cannot be forecast from other variables and its own past, and it follows that it is only this part that has any policy impact. The theory of rational expectations, currently attracting a lot of attention in economics, is relevant here but its discussion is not really appropriate.

The effect of the presence of control variables on causal relationships was considered by Sims (1977). It is certainly possible that the actions of a controller can lead to what appears to be a causal relationship between two variables. Equally, it is possible that two variables that would be causally related if no controls were used, would seem to be unrelated in the presence of a control. It is also worth pointing out that controllability is a much deeper property than causality, in my opinion, although some writers have confused the two concepts. If Y causes X, it does not necessarily mean that Y can be used to control X. An example is if one observes that the editorial recommendations of the New York Times about which candidates to support cause some voters to change their votes. However, if one started controlling these editorials, and this became known, the previously observed causality may well disappear. The reason is clearly that the structure has been altered by changing a previously uncontrolled variable to one that is controlled. If causation is found between a controlled variable and something else, this could be useful in deciding how to control, provided movements are kept near those observed in the past. It seems quite possible that some variables used in the past by governments to control may be so ineffectual that causation will not be found, so testing is worthwhile. The relationship between control, causation and the recent rational expectations literature is potentially an interesting one, but is too large a topic to be considered here.
There is clearly much more discussion required of this and other definitions and more experience required with the various methods of testing that have been suggested. It is my personal belief that the topic is of sufficient importance, and of interest, to justify further work in this field.
REFERENCES Ashley, R., C.W.J. Granger and R. Schmalensee, 1979, Advertising and aggregate consumption: An analysis of causality, Research report (Department of Economics. University of California, San Diego, CA). Black, H., 1978, Inflation and the issue of unidirectional causality, Journal of Money, Credit and Banking X, Feb., 99–101. Blalock, H.M. Jr., 1964, Causal inferences in non-experimental research (University of North Carolina Press, Chapel Hill, NC). Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden Day, San Francisco, CA). Bunch, M., 1963, Causality (Meridian Books, Cleveland, OH). Caines, P.E. and C.W. Chan, 1975, Feedback between stationary stochastic processes, IEEE Transactions on Automatic Control AC 20, 498–508. Caines, P.E., S.P. Sethi and T. Brotherton, 1977, Impulse response identification and causality detection for the Lydia–Pinkham data, Annals of Economic and Social Measurement 6, Spring, 147–164. Chiang, C., 1978, An investigation of the relationship between price series. Ph.D. thesis (University of California, San Diego, CA). Ciccolo, J.H. Jr., 1978, Money equity values and income: Tests for exogeneity, Journal of Money, Credit and Banking X, Feb., 45–64. Feige, F.L. and D.K. Pearce, 1976a, Inflation and incomes policy: An application of time series models, Journal of Monetary Economics. Supplementary Series 2, 273–302. Feige, E.L. and D.K. Pearce, 1976b, Economically rational expectations: Are innovations in the rate of inflation independent of innovations in measures of monetary and fiscal policy? Journal of Political Economy 84, 499–522. Geweke, J., 1978, Testing the exogeneity specification in the complete dynamic simultaneous equation model. Journal of Econometrics 7, no. 2, 163–186. Good, I.J., 1961/62, A causal calculus. I/II, British Journal of Philosophical Society 11, 305/12, 43–51. Gordon, R.J., 1977, World inflation and monetary accommodation in eight countries, Brookings Papers on Economic Activity, Part 3, 409–478. Granger, C.W.J., 1963, Economic processes involving feedback. Information and Control 6, 28–48. Granger, C.W.J., 1969, Investigating causal relations by econometric models and cross-spectral methods. Econometrica 37, 424–438. Granger, C.W.J., 1980, Generating mechanisms, models and causality, Paper presented to World Econometrics Congress, Aix-en-Provence, Sept. 1980.
Granger, C.W.J. and A. Andersen, 1978, An introduction to bilinear time series models (Vandenhoeck and Ruprecht, Göttingen). Granger, C.W.J. and M. Hatanaka, 1964, Spectral analysis of economic time series (Princeton University Press, Princeton, NJ). Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series (Academic Press, New York). Hart, H.L.A. and A.M. Honore, 1959, Causation in the law (Oxford University Press, Oxford). Hosoya, Y., 1977, On the Granger condition for non-causality, Econometrica 45, no. 7, 1735–1736. Mehra, Y.P., 1977, Money wages, prices and causality, Journal of Political Economy 85, Dec., 1227–1244. Newbold, P., 1978, Feedback induced by measurement errors, International Economic Review 19, 787–791. Overseth, O.E., 1967, Experiments in time reversal, Scientific American 221, Oct., 88–101. Pierce, D.A., 1975, Forecasting in dynamic models with stochastic regression. Journal of Econometrics 3, 349–374. Pierce, D.A. and L.D. Haugh, 1977, The assessment and detection of causality in temporal systems. Journal of Econometrics 5, 265–293. Pierce, D.A. and L.D. Haugh, 1979, Comment on price, Journal of Econometrics 10, 257–260. Price, J.M., 1979, A characterization of instantaneous causality: A correction, Journal of Econometrics 10, 253–256. Sargent, T.F., 1976, A classical macroeconometric model for the U.S., Journal of Political Economy 84, 207–238. Sims, C.A., 1972, Money, income and causality, American Economic Review 62, 540–552. Sims, C.A., 1977, Exogeneity and causal ordering in macroeconomic models, in: New methods in business cycle research, Proceedings of a conference (Federal Reserve Bank, Minneapolis, MN). Skoog, G.R., 1976, Causality characterizations: Bivariate, trivariate and multivariate propositions, Staff Report no. 14 (Federal Reserve Bank, Minneapolis, MN). Suppes, P., 1979, A probabilistic theory of causality (North-Holland, Amsterdam). Wiener, N., 1958, The theory of prediction, in: E.F. Bec, ed., Modern mathematics for engineers, Series 1, Ch. 8. Williams, D., C.A.E. Goodhart and D.H. Gowland, 1976, Money, income and causality: The U.K. experience, American Economic Review 66, 417–423. Wright, S., 1964, The interpretation of multivariate systems, in: O. Kempthorne et al., Statistics and mathematics in biology, Ch. 2 (Hafner, New York). Zellner, A., 1978, Causality and econometrics, in: Proceedings of conference held at University of Rochester, NY, forthcoming.
CHAPTER 3
Some Recent Developments in A Concept of Causality* C. W. J. Granger
The paper considers three separate but related topics. (i) What is the relationship between causation and co-integration? If a pair of I(1) series are co-integrated, there must be causation in at least one direction. An implication is that some tests of causation based on differenced series may have missed one source of causation. (ii) Is there a need for a definition of ‘instantaneous causation’ in a decision science? It is argued that no such definition is required. (iii) Can causality tests be used for policy evaluation? It is suggested that these tests are useful, but that they should be evaluated with care.
1. INTRODUCTION
Suppose that one is interested in the question of whether or not a vector of economic time series yt ‘causes’ another vector xt. There will also exist a further vector of variables wt which provide a context within which the causality question is being asked. Two information sets are of interest: Jt: xt−j, yt−j, wt−j, j ≥ 0, and J′t: xt−j, wt−j, j ≥ 0, so that Jt uses all of the available information but J′t excludes the information in past and present yt. It is important to assume that components of yt are not perfect functions of the other components of Jt, so that there does not exist a function g( ) such that yt = g(wt−j, j ≥ 0), for example. Let f(x|J) be the conditional distribution of x given J and E[x|J] be the corresponding conditional mean, then the following definitions of causality and non-causality will be used in the following discussion:

* Journal of Econometrics, 39, 1988, 199–211.
(i) yt does not cause xt+1 with respect to Jt if
f(xt+1 | Jt) = f(xt+1 | J′t).
(ii) If f(xt+1 | Jt) ≠ f(xt+1 | J′t), then yt is a ‘prima facie’ cause of xt+1 with respect to Jt.
(iii) If E[xt+1 | Jt] = E[xt+1 | J′t], then yt does not cause xt+1 in mean, with respect to Jt.
(iv) If E[xt+1 | Jt] ≠ E[xt+1 | J′t], then yt is a prima facie cause in mean of xt+1 with respect to Jt.

The ‘in mean’ definitions were introduced in Granger (1963), based on a suggestion by Wiener (1956), and the general definition was discussed in Granger (1980) and elsewhere. The definitions are based on two fundamental principles: (a) The cause occurs before the effect. (b) The causal series contains special information about the series being caused that is not available in the other available series, here wt. It follows immediately that there are forecasting implications of the definitions. The ‘in mean’ definition implies that, if yt causes xt, then xt+1 is better forecast if the information in yt−j is used than if it is not used, where ‘better’ means a smaller variance of forecast error, or the matrix equivalent of variance. The general definition (ii) implies that if one is trying to forecast any function g(xt+1) of xt+1, using any cost function, then one will be frequently better off using the information in yt−j, j ≥ 0, and never worse off. This has recently been proved formally by Granger and Thomson (1987) and indicates the considerably greater depth of the more general definition (ii) compared to (iv). If Jt contained all the information available in the universe at time t, then yt could be said to cause xt+1. In practice Jt will contain considerably less information and so the phrase ‘prima facie’ has to be used. (ii) is a weaker definition than (i), but it is a definition of a type of causality which is given a specific name. The name is chosen to include the unstated assumption that possible causation is not considered for any arbitrarily selected group of variables, but only for variables for which the researcher has some prior belief that causation is, in some sense, likely. Because the question of possible causality is being asked, yt would
have been considered a candidate for a cause before the definition was applied. Thus, one may start with a ‘degree of belief’ that yt causes xt+1, measured as a probability, and after using a causality test based on these definitions, one’s ‘degree of belief’ may change. For example, before the test the degree of belief could be 0.3 and after the test this could increase to 0.6. The extent to which the belief probability changes will depend on the perceived quality and quantity of the data, the size and relevance of wt and the perceived relevance, quality or power of the test. Naturally, using just statistical techniques, it is unlikely that the probability will go to one, or to zero, and if one does not like the definitions being used, then the tests are irrelevant and the degree of belief cannot change. For a given yt and Jt, the definition in (ii) is a general one and not specific to a particular investigator. However, interpretations of tests based on the definition do depend on the degrees of belief of the investigator and so are specific. Further, going back a step, as the choice of variables to be considered in a causality analysis is in the hands of the investigator, the definition can also be thought of as being specific in this respect. It would be interesting to try to give a more formal Bayesian viewpoint to these ideas, incorporating the dynamics of prior beliefs as new information becomes available, but I do not feel competent to undertake such an analysis.

There are many tests for causation that have been suggested and some are discussed in Geweke (1984) and will not be considered here. Many empirical papers in economics and in other fields have used definitions (iii) and (iv) although usually just with a pair of univariate series xt, yt, so that wt is empty. A few papers have considered wider information sets. The definitions have proved useful in various theoretical contexts, including rational expectations [Lucas and Sargent (1981)], exogeneity [Engle, Hendry and Richard (1983)] and econometric modeling strategy [Hendry and Richard (1983)]. There has also been some interest by philosophers in the definitions [Spohn (1984)]. Criticisms of the definitions have ranged from the inconsequential (the word causation cannot be used) to the more substantial. As examples of the latter, Zellner (1979) believes that causation cannot be securely established except in the context of a confirmed subject matter theory, and Holland (1986) believes that tests of causation can only be carried out within the context of controlled experiments. As I have attempted to answer various criticisms elsewhere [Granger (1980, 1986a)], I will not discuss them further here.

The present paper considers three separate but related questions:

(i) What is the effect of the relationship between the concepts of co-integration and causality on tests of causality?
(ii) Is there need for a definition of instantaneous causality in a decision science such as economics?
(iii) Can causality tests potentially be used for policy evaluation?

It should be noted that in the definitions only discrete time series are considered and that a time lag of one is involved. The size of this unit is not defined. It is merely assumed that a relevant, positive unit does exist for the definitions to hold. There is, of course, no reason for the available data to be measured on the same unit of time that the definitions would require for a proper test of causation. The data may be available monthly and the actual causal lag be only a couple of days, for example. The relevance of this difference in units is also discussed briefly in the consideration of the second of the above questions. The paper is mostly concerned with bringing various results together for econometricians to see. Section 3 presents some new ideas on instantaneous causality and its interpretation, and section 4 is largely a summary of an unpublished paper.
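Definitions (iii) and (iv) above compare two conditional expectations. In applied work the comparison is usually made with linear projections; the sketch below — not from the original text, written in Python with simulated data, linear conditional means and an arbitrary lag length p assumed — estimates E[xt+1 | J′t] and E[xt+1 | Jt] by least squares and forms the usual F-type statistic for the restriction that the lags of yt contribute nothing.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 800, 2                     # p lags of each variable in the information set
w = rng.standard_normal(n)        # an observed "context" series w_t
y = np.zeros(n); x = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + rng.standard_normal()
    x[t] = 0.5 * x[t - 1] + 0.3 * y[t - 1] + 0.2 * w[t - 1] + rng.standard_normal()

def lag_matrix(s, p):
    return np.column_stack([s[p - j - 1:len(s) - j - 1] for j in range(p)])

def rss(X, dep):
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return np.sum((dep - X @ beta) ** 2)

dep = x[p:]
J_prime = np.column_stack([np.ones(len(dep)), lag_matrix(x, p), lag_matrix(w, p)])
J_full = np.column_stack([J_prime, lag_matrix(y, p)])          # add past y

rss0, rss1 = rss(J_prime, dep), rss(J_full, dep)
F = ((rss0 - rss1) / p) / (rss1 / (len(dep) - J_full.shape[1]))
print("F statistic for 'lags of y add nothing':", round(F, 1))
# A large value is evidence that y is a prima facie cause of x in mean with
# respect to this (necessarily incomplete) information set of past x, y and w.
```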
2. CO-INTEGRATION AND CAUSATION
Define an I(0) series as one that has a spectrum that is bounded above and also is positive at all frequencies. If the first and second moments of the series are time-invariant, then xt will be second-order stationary and it can be assumed that the autocorrelations ρk decline exponentially (in magnitude) for k large. In practical terms, xt may be thought to have a generating mechanism that can be well approximated by a stationary, invertible ARMA(p, q) model, with finite p and q. A series will be said to be integrated of order one, denoted I(1), if its changes are I(0). I(1) series are sometimes called ‘non-stationary’ because their variances increase linearly with time, provided they started a finite number of time units earlier. There is plenty of empirical evidence that macroeconomic series often appear to be I(1). The causality definitions make no assumptions about whether the series being considered are I(0) or I(1), but if they are I(1), some care has to be taken with their empirical analysis. Suppose that xt, yt are both I(1), without trends in mean, so that their changes are both I(0) and with zero means. Then it will be typically true that any linear combination of xt, yt will also be I(1). However, it is possible that there will exist a constant A such that zt = xt − Ayt is I(0). This would happen, for instance, if

xt = Aqt + x1t,    (1a)

and

yt = qt + y1t,    (1b)
where qt ~ I(1) and x1t, y1t are both I(0). When this occurs xt, yt are said to be ‘co-integrated’. Clearly, not all pairs of I(1) series have this property. It was shown in Granger (1983) that, if xt, yt are both I(1) but are co-integrated, then they will be generated by an ‘error-correction’ model taking the form

Δxt = γ1zt−1 + lagged Δxt, Δyt + ε1t,

and

Δyt = γ2zt−1 + lagged Δxt, Δyt + ε2t,

where at least one of γ1, γ2 is non-zero and ε1t, ε2t are finite-order moving averages. Thus, changes in the variables xt, yt are partly driven by the previous value of zt. It can be shown that the line x − Ay = 0 can be considered to be an ‘equilibrium’ or ‘attractor’ for the system in the phase-space, where xt is plotted against yt, so that zt can be interpreted as the extent to which the system is out of equilibrium. Further interpretations, methods of testing for and examples of co-integration can be found in the special issue of the Oxford Bulletin of Economics and Statistics, August 1986, which includes a survey article by Granger (1986b). A consequence of the error-correction model is that either Δxt or Δyt (or both) must be caused by zt−1, which is itself a function of xt−1, yt−1. Thus, either xt+1 is caused in mean by yt or yt+1 by xt if the two series are co-integrated. This is a somewhat surprising result, when taken at face value, as co-integration is concerned with the long run and equilibrium, whereas the causality in mean is concerned with short-run forecastability. However, what it essentially says is that for a pair of series to have an attainable equilibrium, there must be some causation between them to provide the necessary dynamics. The various concepts can be easily generalized to vectors of economic series, as in Granger (1986b). It is also possible to generalize the results to nonlinear equilibria, as discussed in Granger (1986c).

It should be noted that in the error-correction model, there are two possible sources of causation of xt by yt−j, either through the zt−1 term, if γ1 ≠ 0, or through the lagged Δyt terms, if they are present in the equation. To see what form the causation through zt−1 takes, consider again the factor model (1) where, for simplicity, qt is taken to be a random walk, so that Δqt = at, a zero-mean white noise, and x1t, y1t are white noises. Then zt = xt − Ayt = x1t − Ay1t. Now consider Δxt, which is given by

Δxt = Aat + Δx1t = −x1,t−1 + Aat + x1t.

The first term is the forecastable part of Δxt and the final two terms will constitute the one-step forecast error of a forecast made at time t − 1 based on xt−j, yt−j, j ≥ 1. However, the forecast, −x1,t−1, is not directly observable but is, generally, correlated to zt−1, which results in the
causation in mean. This causation fails to occur only if zt has zero variance. If zt is not used in the modeling, then x1,t−1 will be related only to the sum of many lagged Δxt, but this sum will also include the sum of at−j, which will give a ‘noise’ term of possibly large variance. Thus, x1,t−1 will be little correlated to the sum of lagged Δxt. If classical, multivariate time-series modeling techniques are used, as discussed in Box and Jenkins (1970) and in the first edition of Granger and Newbold (1977), then once it is realized that xt, yt are I(1), their changes will be modelled using a bivariate ARMA(p, q) model, with finite p, q. Without zt being explicitly used, the model will be mis-specified and the possible value of lagged yt in forecasting xt will be missed. Thus, many of the papers discussing causality tests based on the traditional time-series modeling techniques could have missed some of the forecastability and hence reached incorrect conclusions about non-causality in mean. On some occasions, causation could be present but would not be detected by the testing procedures used. This problem only arises when the series are I(1) and co-integrated, but this could be a common situation when causality questions are asked. It does seem that many of the causality tests that have been conducted should be re-considered. It would also be interesting to try to relate the causal impact of the zt−1 terms to the frequency-domain causation decompositions considered by Granger (1969) and by Geweke (1982). It is tempting to think that the main impact will be at very low frequencies, but this is not clear.
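The point that a model in differences alone can miss the zt−1 channel is easily seen in a small simulation. The sketch below assumes the factor model (1a)–(1b) with A = 1, qt a random walk and white-noise components; none of these parameter choices comes from the original text. It compares the residual variance of a regression of Δxt on lagged differences with and without the lagged equilibrium error zt−1 = xt−1 − yt−1.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
q = np.cumsum(rng.standard_normal(n))        # common I(1) factor, a random walk
x = q + rng.standard_normal(n)               # x_t = q_t + x_1t
y = q + rng.standard_normal(n)               # y_t = q_t + y_1t   (A = 1)
z = x - y                                    # co-integrating residual, I(0)

dx, dy = np.diff(x), np.diff(y)

def resid_var(X, dep):
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return np.mean((dep - X @ beta) ** 2)

# Explain dx_t with one lag of dx and dy, with and without z_{t-1}.
dep = dx[1:]
diffs_only = np.column_stack([np.ones(len(dep)), dx[:-1], dy[:-1]])
with_ecm = np.column_stack([diffs_only, z[1:-1]])        # z_{t-1} aligned with dx_t

print("residual variance, differences only:", round(resid_var(diffs_only, dep), 3))
print("residual variance, with z_{t-1}    :", round(resid_var(with_ecm, dep), 3))
# The error-correction term carries forecasting value for dx_t that a bivariate
# model in differences alone does not pick up: the missed source of causation in mean.
```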
3. INSTANTANEOUS CAUSALITY
One of the earliest, and most telling, criticisms of causality tests based on statistical techniques is that correlation cannot be equated to causality. A major difficulty with looking at a correlation is that it gives no indication about the direction of relationship. If X, Y are correlated random variables, then Y can be used to explain X but X can also be used to explain Y. For a definition of causation to be useful for statistical testing, it must contain an assumption on structure that allows such dual relationships to be disentangled. In the definitions given in the first section this assumption is that the cause occurs before the effect and so the ‘arrow of time’ can be used to help distinguish between cause and effect. Other definitions use alternative methods of performing this distinction. Holland (1986), for instance, considers only situations in which the cause is the input to an experiment and the effect is found from the results of the experiment. Strict application of a time gap requirement means that the definitions given above can make no statements about instantaneous causality. Suppose that xt, yt are a pair of series and let

ext = xt − E[xt | Jt−1],   eyt = yt − E[yt | Jt−1],
where Jt−1: xt−j, yt−j, j ≥ 1, and suppose that ρ = corr(ext, eyt) ≠ 0, then one may suppose that there is an apparent instantaneous causality in these series. At the very least, the question of whether or not a causal explanation can be given to this finding deserves consideration. Pierce and Haugh (1977) discuss whether yt causes xt instantaneously by using the information sets J′t: Jt, yt and J″t: Jt, xt. If xt is better ‘forecast’ using J′t rather than Jt, one could say that yt instantaneously causes xt (in mean), a necessary condition being just ρ ≠ 0. However, this same condition is necessary for the statement that xt instantaneously causes yt. One is back to the symmetry problem and this definition of instantaneous causality is therefore unsatisfactory, as no direction of relationship can be deduced just from the data. It is possible, on occasions, to add further information and to reach a conclusion. If, for example, one ‘knows’ that xt cannot cause yt (instantaneously or otherwise), then the symmetry is broken. This extra ‘knowledge’ can come from some economic theory or a belief in exogeneity (the economy cannot cause weather) but the conclusion, or the change in the degree of belief about causation, will depend on the correctness of the extra knowledge.

Three possible explanations for the apparent instantaneous causality will be discussed.

(i) There is true instantaneous causality in an economic system so that some elements in the system react without any measurable time delay to changes in some other elements.
(ii) There is no true instantaneous causality, but the finite time delay between cause and effect is small compared to the time interval over which data is collected. Thus, the apparent causation is due to temporal aggregation.
(iii) There is a jointly causal variable wt−1, that causes both xt and yt but is not included in the information set, possibly because it is not observed.

It can be argued that true instantaneous causality will never occur in economics, or any other decision science, and that the missing variable explanation is always a possible one, so that a definition of instantaneous causality is never actually needed. In this discussion, it is assumed that the cause can never occur after the effect, so that the causal lag is either zero, giving instantaneous causality, or positive, giving the causal definition used throughout this paper. It follows that one cannot have instantaneous causality between a pair of flow variables, such as imports and exports, or a pair of production series, as these variables are available only for discrete time, and part of one variable must almost inevitably occur after part of the other. Similarly, a stock variable, such as a price measured at time t cannot instantaneously cause a flow variable, most of which occurs before t. This will be true however short the time interval used, provided it is finite and positive. Thus, instantaneous causality can
only strictly be discussed for pairs or groups of stock variables. If one also believes that economic variables are the outcomes of large numbers of decisions made by economic agents or institutions, that each agent can only concentrate on a single decision at a time, so that their brains are single-track decision makers, and that there is always a delay in making a decision, as new information is assimilated, analyzed and a decision rule applied, and that there is then a further delay until the decision is implemented and becomes observable, then the presence of true instantaneous causality in economics becomes very unlikely. The true causal lag may be very small but never actually zero. The observed or apparent instantaneous causality can then be explained by either temporal aggregation or missing causal variables. Temporal aggregation is a realistic, plausible and well-known reason for observing apparent instantaneous causation and so needs no further discussion.

It is common practice in statistics in general, and in econometrics in particular, to discuss a pair of random variables, say X and Y, that have a joint distribution function. The residuals ext, eyt introduced above provide an example. However, there is virtually no discussion about the mechanism that produces this joint distribution. How are the values of the variables X and Y, observed at time t, say, actually generated such that they are also characterised by having a joint distribution? Clearly, these values have to be generated simultaneously. For example, if Xt, Yt are respectively stock market closing price indices from the Pacific Stock Exchange and the Sydney Stock Exchange, and suppose both exchanges close at the identical time, then, if Xt, Yt have a joint distribution, a mechanism has to be described that can lead to this simultaneous generation of a pair of price indices at sites separated by several thousand miles. Of course, the physical locational difference is irrelevant as the ‘electronic distance’ is negligible, provided the members of one exchange pay very close or constant attention to what is happening at the other exchange. If the two variables are statistically independent, so that their joint distribution is the product of the two marginal distributions, the joint generation is easily understood as all that is needed is two generation mechanisms operating independently of each other. However, the concept of independence is not one that is always well understood, as it depends on the set of variables within which independence is being discussed. For example, if X, Y, Z are three variables, then X and Y can be independent if only this pair is considered but X|Z and Y|Z need not be independent, where X|Z is the conditional variable X given Z. This is easily seen by taking X, Y, Z to be jointly Gaussian with a covariance matrix having cov(X, Y) = 0 but other covariances non-zero. A theorem can be proved that is the reverse of this example, so that if X, Y are not independent, there always could exist another variable Z such that X|Z, Y|Z are independent. Thus, the apparent joint distribution between X and Y occurs because there are really three variables, Z is
affecting each of X and Y which are independent within the group of three variables, but as Z is unobserved, and thus is marginalized out, the observed joint distribution occurs. Formally, the theorem takes the form: For any bivariate probability density function (p.d.f.) f(x, y), there exists a trivariate p.d.f. f(x, y, z) such that

(i) f(x, y) = ∫ f(x, y, z) dz, and
(ii) f(x, y, z) = f1(x | z) f2(y | z) f3(z).

A necessary and sufficient condition for (ii) is that f(x | z) = f(x | y, z). Here f(x | z) is the conditional distribution of X given Z, after Y has been marginalized out. The theorem states that, if X and Y are a pair of continuous random variables, there potentially could exist a third variable Z such that the joint distribution of X, Y, Z, f(x, y, z), has the property

f(x, y, z) = f(x | z) f(y | z) f(z),

so that X|Z and Y|Z are independent. The result is given as Theorem 1 of Holland and Rosenbaum (1986) and originates with Suppes and Zanotti (1981). In private correspondence, Peter Thomson (Victoria University, New Zealand) proved the result for the case when X, Y, Z are jointly Gaussian. In this case, Thomson shows that, if X, Y, Z are Gaussian with zero means, unit variances and correlations ρ = corr(X, Y), ρ1 = corr(X, Z), ρ2 = corr(Y, Z), then the joint distribution has the required property provided only that ρ1 · ρ2 = ρ. The theorem can be expanded to any group of random variables X1, X2, . . . , XN such that if they are conditional on Z1, Z2, . . . , Zm, m ≤ N − 1, they will be independent. In the causality context, if the theorem is correct, then any apparent instantaneous causal relationship can be explained by the possible existence of an unobserved variable that causes both (or all) the variables of interest. As the missing variable is unobserved it could occur at an earlier time. It follows that the concept of (real) instantaneous causality is not required as the present definition of causation (with a lag between cause and effect) can be used to explain all joint distributions and thus any apparent instantaneous causality or joint distribution. The question remains of how one can disentangle the actual causal structure between variables that have here been called apparently instantaneously causal. I suspect that this cannot be achieved by purely statistical means, although this important question deserves further consideration. One natural approach is to add extra structure, as mentioned
above, such as suggested by theory, ‘common sense’ or by beliefs that ‘small’ cannot cause ‘big’, for example. The relevance of conclusions based on such ideas will depend on how correct the assumptions made are; the tests will be of ‘conditional causality’ and the interpretation of the test results will depend on the degree of belief that one has in the assumptions being made. As it stands, the discussion in this section probably has no implications for practical econometrics but should have relevance for the interpretation of the results obtained from empirical work.
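The Gaussian special case mentioned above can be checked numerically. The sketch below — not part of the original text, written in Python with arbitrary correlation values — draws (X, Y, Z) from a zero-mean Gaussian distribution with corr(X, Z) = ρ1, corr(Y, Z) = ρ2 and corr(X, Y) = ρ1ρ2, and verifies that the correlation between X and Y essentially vanishes once Z is partialled out, as the Thomson condition implies.

```python
import numpy as np

rho1, rho2 = 0.7, 0.6            # corr(X, Z) and corr(Y, Z); corr(X, Y) is their product
cov = np.array([[1.0, rho1 * rho2, rho1],
                [rho1 * rho2, 1.0, rho2],
                [rho1, rho2, 1.0]])

rng = np.random.default_rng(5)
X, Y, Z = rng.multivariate_normal(np.zeros(3), cov, size=200_000).T

def residual(a, b):
    """Residual of a after projecting on b (both series are zero mean here)."""
    return a - (np.mean(a * b) / np.mean(b * b)) * b

print("corr(X, Y)     :", round(np.corrcoef(X, Y)[0, 1], 3))
print("corr(X, Y | Z) :", round(np.corrcoef(residual(X, Z), residual(Y, Z))[0, 1], 3))
# With corr(X, Y) = rho1 * rho2 the partial correlation
# (rho - rho1*rho2) / sqrt((1 - rho1**2) * (1 - rho2**2)) is zero: X and Y are
# dependent, yet conditionally independent given the (possibly unobserved,
# possibly earlier) variable Z.
```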
4. CAUSALITY AND CONTROL VARIABLES
Although the definitions of causality discussed in the first section are very simple, there can be problems with their use and interpretation. Tests based on the definition can also give some apparently surprising results. Some of these questions can be illustrated with a discussion of the potential usefulness of causality tests on control variables. The illustration can be based on a very simple case. Suppose that yt is some economic variable which the government is trying to control. yt will be called the target series, at will be the desired value for yt and the cost function is the expected square of the difference between yt and at. Let xt be the variable controlled by the government and suppose that yt is generated by what is called in the engineering literature the ‘plant equation’,

yt = ayt−1 + cxt + ut,    (2)

where ut is zero-mean, white noise. It is easily seen that optimum control is achieved by taking

xt = −c⁻¹[ayt−1 − at],    (3)

so that

yt = at + ut,    (4)

under the assumption that the specification and parameters of the plant equation are unchanged, however xt is generated. There is a difficulty with this specification as there is an apparent instantaneous causation in (2). However, this is largely illusory, as one can take the time interval in this generating process to be the decision lag (rather than the period between data observations), and then note that at will be determined by the government at time t − 1 as a desired value for yt to achieve during the period from t − 1 to t. Thus, one can put at = āt−1, placing it at the time at which its value is determined. It is seen from (3) that xt is also determined at time t − 1. Thus, the control variable may be denoted wt−1 = xt and is also associated with the time at which it is determined. The government will observe wt−1 at time t − 1. The public will observe xt at time t but should equate it with wt−1. It is important in causality discussions to
associate a variable with the time at which it occurs, rather than when it is observed. This problem is discussed in Granger (1980), particularly concerning the temporal relationship between thunder and lightning. The equations now are

yt = ayt−1 + cwt−1 + ut,    (5)

wt = −c⁻¹(ayt − āt),    (6)

yt = āt−1 + ut.    (7)
If an economic theory gives the plant equation (5), it may appear that wt−1 should cause yt, but this would be an incorrect interpretation as the whole system has to be considered jointly rather than one equation at a time. From the government’s perspective, the question asked is: Is yt better forecast using the information set Jt−1: yt−j, wt−j, āt−j, j ≥ 0, rather than the information set J′t−1: yt−j, āt−j, j ≥ 0? However, from (6), clearly these information sets contain the same information, as wt is exactly explained by yt and āt. Thus, the government would not find wt−1 causing yt in this case. This result was proved by Sargent (1976). The same conclusion would hold if wt were selected sub-optimally, but still exactly a function of other variables, such as

wt = γ1yt + γ2āt,    (8)

as pointed out by Buiter (1984). The situation for the public is somewhat different if āt is not publicly announced and is also stochastic, such as if āt is generated by

āt = byt + gāt−1 + et.    (9)
Now, wt−1 would seem to cause yt in that yt is better forecast by yt−j, wt−j, j ≥ 0, than by yt−j, j ≥ 0, alone. The Sargent and Buiter results are not very robust if the very stringent conditions of the model are relaxed. For example, if a white-noise error term, vt, is added to (6) the government no longer is able to perfectly control its variable, as is surely the case in most instances. The vector autoregressive or reduced form representation for the structural system (5), (6) and (9) is

yt = ayt−1 + cwt−1 + ut,

wt = c⁻¹[a(b − a)yt−1 − acwt−1 + gāt−1] + vt + c⁻¹[et − aut],

āt = abyt−1 + gāt−1 + bcwt−1 + et + but.

In general, it is easy to use such a VAR model to ask causality questions. Any left-hand-side variable is caused by any right-hand-side variable having a non-zero coefficient. Thus, for example, wt−1 will cause āt if bc ≠ 0. However, the results above indicate that this type of result only holds true if there is no linear combination of the residuals to the various
equations that have zero variance. No such linear combination exists if ut, vt and et all have positive variances, but this is not true if vt = 0 for all t, as assumed by Sargent and Buiter. There is one other case where the public finds wt−1 causing yt, but the government will find no causation. If vt = 0 for all t, but the plant equation includes a stochastic variable zt,

yt = ayt−1 + cwt−1 + dzt−1 + ut,

but zt is observed by the government and not by the public. The optimal value of the control variable will then be

wt = −c⁻¹[ayt + dzt − āt].

However, as the public does not observe zt, but does observe wt which is related to it, again the public will find the control variable causing the target variable. It is thus seen that the public and the government, if performing causality tests of yt by wt−1, can reach different conclusions, depending on who is doing the test and what information set is available. The timing of variables is also clearly important. Some care has to be taken in interpreting causality tests as this exercise clearly shows. These questions are discussed in more detail in Granger (1987) where the I(1) case and cointegration aspects are also considered.
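A small simulation of the system (5), (6) and (9) illustrates how the answer depends on the information set. The sketch below is only illustrative: the parameter values are arbitrary, and a small control error vt is included so that the regressions are well conditioned (with vt ≡ 0 the government's two information sets coincide exactly, which is the Sargent result). The "public", which does not observe āt, finds lagged w valuable for forecasting yt; the "government", which does, finds it essentially redundant.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3000
a, c, b, g = 0.7, 1.0, 0.3, 0.5          # plant and target-rule parameters (arbitrary)
y = np.zeros(n); w = np.zeros(n); abar = np.zeros(n)
for t in range(1, n):
    y[t] = a * y[t - 1] + c * w[t - 1] + rng.standard_normal()       # plant, eq. (5)
    abar[t] = b * y[t] + g * abar[t - 1] + rng.standard_normal()     # target rule, eq. (9)
    w[t] = -(a * y[t] - abar[t]) / c + 0.1 * rng.standard_normal()   # control, eq. (6) plus small v_t

def post_sample_mse(dep, regressors, split):
    X = np.column_stack([np.ones(len(dep))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X[:split], dep[:split], rcond=None)
    return np.mean((dep[split:] - X[split:] @ beta) ** 2)

split = 2000
yt, ylag, wlag, alag = y[1:], y[:-1], w[:-1], abar[:-1]

# "Public" information set: past y and w only; abar is not announced.
print("public, y lag only      :", round(post_sample_mse(yt, [ylag], split), 3))
print("public, adding lagged w :", round(post_sample_mse(yt, [ylag, wlag], split), 3))
# "Government" information set: abar is observed, so lagged w is nearly redundant.
print("gov't, y and abar lags  :", round(post_sample_mse(yt, [ylag, alag], split), 3))
print("gov't, adding lagged w  :", round(post_sample_mse(yt, [ylag, alag, wlag], split), 3))
```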
REFERENCES Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA). Buiter, W.H., 1984, Granger-causality and policy effectiveness, Economica 51, 151–162. Geweke, J., 1982, Measurement of linear dependence and feedback between time series, Journal of the American Statistical Association 77, 304–324. Geweke, J., 1984, Inference and causality in economic time series models, Ch. 19 in: Z. Griliches and M.D. Intriligator, eds., Handbook of econometrics II (North-Holland, Amsterdam). Engle, R.F., D.F. Hendry and J.F. Richard, 1983, Exogeneity, Econometrica 51, 277–304. Granger, C.W.J., 1963, Economic processes involving feedback, Information and Control 6, 28–48. Granger, C.W.J., 1969, Investigating causal relations by econometric models and cross-spectral methods, Econometrica 36, 424–438. Granger, C.W.J., 1980, Testing for causality: A personal viewpoint, Journal of Economic Dynamics and Control 2, 329–352. Granger, C.W.J., 1983, Co-integrated variables and error-correcting models, Economics Department discussion paper no. 83-13 (University of California, San Diego, CA).
Granger, C.W.J., 1986a, Comment on Holland (1986). Granger, C.W.J., 1986b, Developments in the study of co-integrated economic variables, Oxford Bulletin of Economics and Statistics 48, 213–228. Granger, C.W.J., 1986c, Economic stochastic processes with simple attractors, Economics Department discussion paper no. 86-20 (University of California, San Diego, CA). Granger, C.W.J., 1987, Causality testing of control variables, Economics Department discussion paper (University of California, San Diego, CA). Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series, 1st ed. (Academic Press, New York). Granger, C.W.J. and P.J. Thomson, 1987, Predictive consequences of using conditioning or causal variables, Econometric Theory, forthcoming. Hendry, D.F. and J.F. Richard, 1983, The econometric analysis of economic time series, International Statistical Review 51, 111–163. Holland, D.W., 1986, Statistics and causal inference, Journal of the American Statistical Association 81, 945–960. Holland, D.W. and P.R. Rosenbaum, 1986, Conditional association and unidimensionality in monotone latent variable models, Annals of Statistics 14, 1523–1543. Lucas, R.E. and T.J. Sargent, 1981, Rational expectations and econometric practice (University of Minnesota Press, Minneapolis, MN). Machina, M.J. and W.S. Neilson, 1980, The Ross characterization of risk aversion: Strengthing and extension, Economics Department discussion paper, Aug. (University of California, San Diego, CA). Pierce, D.A. and L.D. Haugh, 1977, Causality in temporal systems: Characterizations and a survey, Journal of Econometrics 5, 265–293. Sargent, T.J., 1976, The observational equivalence of natural and unnatural rate theories of macroeconomics, Journal of Political Economy 84, 631–670. Spohn, W., 1984, Probabilistic causality: From Hume via Suppes to Granger, in: M.C. Galavotti and G. Gambetta, eds., Causalita e modelli probabilistici (Clueb Editrica, Bologna) 64–87. Suppes, P. and M. Zanotti, 1981, When are probabilistic explanations possible?, Syntheses 48, 191–199. Wiener, N., 1956, The theory of prediction, in: E.F. Beckenback, ed., Modern mathematics for engineers (Publisher, Place). Zellner, A., 1979, Causality and econometrics, in: K. Brunner and A. Meltzer, eds., Carnegie–Rochester conference series on public policy, Vol.? (North-Holland, Amsterdam).
CHAPTER 4
Advertising and Aggregate Consumption: An Analysis of Causality*1 R. Ashley, C. W. J. Granger, and R. Schmalensee
This paper is concerned with testing for causation, using the Granger definition, in a bivariate time-series context. It is argued that a sound and natural approach to such tests must rely primarily on the out-of-sample forecasting performance of models relating the original (nonprewhitened) series of interest. A specific technique of this sort is presented and employed to investigate the relation between aggregate advertising and aggregate consumption spending. The null hypothesis that advertising does not cause consumption cannot be rejected, but some evidence suggesting that consumption may cause advertising is presented.
1. INTRODUCTION
This paper is concerned with two related questions. The first is empirical: do short-run variations in aggregate advertising affect the level of consumption spending?2 Many studies find that advertising spending varies pro-cyclically.3 But firms often use sales- or profit-based decision rules in fixing advertising budgets,4 so that observed correlation might

* Econometrica, 48, 1980, 1149–1167.
1 An earlier version of this paper was written while all three authors were at the University of California, San Diego. Financial support was provided by the Academic Senate of that institution and by National Science Foundation Grant SOC76–14326. The authors are indebted to Robert J. Coen of McCann-Erickson, Dee Ellison of the Federal Trade Commission, Joseph Boorstein and Jonathan Goldberg of the Columbia Broadcasting System, and Robert Parker of the U.S. Department of Commerce for assistance in data preparation, and to Christopher A. Sims and two referees for useful comments. Final responsibility for errors and omissions of course remains with the authors.
2 The techniques we employ in this study are not well-suited to the detection of very long-run effects that advertising might have on spending patterns, via induced cultural change, for instance.
3 See, for instance, Simon [16, pp. 67–74] and the references he cites.
4 See, for instance, Kotler [11, pp. 350–351], Schmalensee [15, pp. 17–18], and the references they cite.
reflect the effect of advertising on consumers' spending decisions, the effect of aggregate demand on firms' advertising decisions, or some combination of both effects. Previous studies of this empirical question, surveyed in Section 2, do not adequately deal with the problem of determining the direction of causation between consumption and advertising. The second question with which we are concerned is methodological: how should one test hypotheses about causation in a bivariate time series context? Section 3 proposes a natural approach to such tests that is a direct application of the definition of causality introduced by Granger [8]. We argue that it is appropriate to use Box-Jenkins [2] techniques to pre-whiten the original series of interest and to use cross-correlograms and bivariate modeling of the pre-whitened series to identify models relating the original series. In our view the out-of-sample forecasting performance of the latter models provides the best information bearing on hypotheses about causation. The data employed in our study of the advertising/consumption question are described in Section 4, and the results of applying our testing procedure are presented in Section 5. Our main findings are briefly summarized in Section 6.
2. PREVIOUS STUDIES
Some evidence against the view that variations in aggregate advertising affect aggregate demand is provided by numerous studies of advertising behavior at cyclical turning points; aggregate advertising generally lags the rest of the economy at such points.5 Turning point studies do not use much of the information in the time series examined, however, and they do not provide formal tests of hypotheses. Four relatively recent studies have applied statistical techniques to study the relation between advertising and aggregate demand. In the first of these,Verdon, McConnell, and Roesler [23] employed the Printer’s Ink monthly index of advertising spending (hereinafter referred to as PII). They de-trended PII, GNP, and the Federal Reserve index of industrial production, smoothed all three series with a weighted moving average, and examined correlations between the transformed PII series and the other two transformed series at various leads and lags and for various periods. The correlations obtained showed no clear patterns. In a critique of this study, Ekelund and Gramm [7] argued that consumption spending, rather than GNP or the index of industrial production, should be used in tests of this sort. They regressed de-trended quarterly advertising data from Blank [1] on de-trended consumption spending, and all regressions were insignificant. 5
See Simon [16, pp. 67–74] and Schmalensee [15, pp. 17–18] for surveys of these studies.
Taylor and Weiserbs [21] considered four elaborations of the Houthakker–Taylor [10] consumption function that included contemporaneous advertising. Annual data were employed, consumption and income were expressed in 1958 dollars, and advertising spending was used both in current dollars and deflated by the GNP deflator. One of their models performed well, and it had a significant advertising coefficient even when re-estimated by a two-stage least squares procedure that treated advertising as endogenous. Taylor and Weiserbs concluded that aggregate advertising had a significant effect on aggregate consumption. There are at least four serious problems with this study, however. First, as the authors acknowledge, their conclusion rests on the somewhat restrictive maintained hypothesis that the Houthakker–Taylor framework is correct. Second, the GNP deflator is not a particularly good proxy for the price of advertising messages.6 Third, their two-stage least squares procedure may not deal adequately with advertising’s probable endogeneity. It rests on a rather ad hoc structural equation for advertising spending. Further, all structural equations have lagged endogenous variables, so that the consistency of the estimators depends critically on the disturbances being serially uncorrelated.7 Fourth, annual data are likely to be inappropriate here. In a survey of econometric studies of the effects of advertising on the demand for individual products, Clarke [4] finds that between 95 per cent and 100 per cent of the sales response to a maintained increase in advertising occurs within one year. Similarly, Schmalensee’s [15, Ch. 3] estimates of aggregate advertising spending functions indicate that between 75 per cent and 85 per cent of the advertising response to a maintained increase in sales occurs within one year. These findings suggest that in this context so much information is lost by aggregation over time that annual data simply cannot contain much information about the direction of causation. Finally, Schmalensee [15, pp. 49–58] employed an extension of Blank’s [1] quarterly advertising series, deflated to allow for changes in media cost and effectiveness, in connection with several standard aggregate consumption equations specified in constant dollars per capita. Using instrumental variables estimators, the previous quarter’s advertising, the current quarter’s advertising, and the following quarter’s advertising were added one at a time to the consumption equations. It was found 6
7
Using the sources described in the Appendix, an implicit deflator for the six media considered there was constructed for the period 1950–1975. Over that period, it grew at 2.2 per cent per year, while the GNP deflator increased an average of 3.5 per cent per year. The simple correlation between the first differences of the two series was only .60. We are told that Durbin’s [6] test did not reject the null hypothesis of no serial correlation, but that test explicitly considers only the alternative of first-order autoregression. Moreover, the small sample properties of Durbin’s test are not well understood [12], and Taylor and Weiserbs have only 35 residuals.
that current advertising generally outperformed lagged advertising, and future advertising generally outperformed current advertising in fitting the data. Schmalensee took this pattern to imply that causation ran from consumption to advertising, reasoning that if advertising were causing consumption, past advertising would have outperformed future advertising. Schmalensee's study has at least two major weaknesses. First, no tests of significance are applied to the observed performance differences. Second, nothing rules out the possibility that advertising is causing consumption as well as being caused by it. If both effects are present, both affect observed performance differentials, and these can in principle go in either direction. It seems clear that in order to go beyond these studies, one must employ a statistical procedure explicitly designed to test hypotheses about causality in a time-series context. Accordingly, we now present such a procedure.
3. TESTING FOR CAUSALITY
The phrase 'X causes Y' must be handled with considerable delicacy, as the concept of causation is a very subtle and difficult one. A universally acceptable definition of causation may well not be possible, but a definition that seems reasonable to many is the following: Let Wn represent all the information available in the universe at time n. Suppose that at time n optimum forecasts are made of Xn+1 using all of the information in Wn and also using all of this information apart from the past and present values Yn-j, j ≥ 0, of the series Yt. If the first forecast, using all the information, is superior to the second, then the series Yt has some special information about Xt, not available elsewhere, and Yt is said to cause Xt. Before applying this definition, an agreement has to be reached on a criterion to decide if one forecast is superior to another. The usual procedure is to compare the relative sizes of the variances of forecast errors. It is more in keeping with the spirit of the definition, however, to compare the mean-square errors of post-sample forecasts. To make the suggested definition suitable for practical use a number of simplifications have to be made. Linear forecasts only will be considered, together with the usual least-squares loss function, and the information set Wn has to be replaced by the past and present values of some set of time series, Rn: {Xn-j, Yn-j, Zn-j, . . . , j ≥ 0}. Any causation now found will only be relative to Rn and spurious results can occur if some vital series is not in this set. The simplest case is when Rn consists of just values from the series Xt and Yt, where now the definition reduces to the following.
Let MSE(X) be the population mean-square of the one-step forecast error of Xn+1 using the optimum linear forecast based on Xn-j, j ≥ 0, and let MSE(X, Y) be the population mean-square of the one-step forecast error of Xn+1 using the optimum linear forecast based on Xn-j, Yn-j, j ≥ 0. Then Y causes X if MSE(X, Y) < MSE(X). With a finite data set, some test of significance could be used to test if the two mean-square errors are significantly different; one such test is presented below and employed in Section 5. As the scope of this definition has been greatly circumscribed by the simplifications used, the possibility of incorrect conclusions being reached is expanded,8 but at least a useable form of the definition has been obtained. This definition of causation (stated in terms of variances rather than mean-square errors) was introduced into the economic literature by Granger [8]; it has been applied by Sims [17] and numerous subsequent authors employing a variety of techniques. (See [14] for a survey.) The next several paragraphs present the five-step approach to the analysis of causality (as defined above) between a pair of time series Xt and Yt that is employed in Section 5, below. The remainder of this Section then discusses the rationale for our approach. (i) Each series is pre-whitened by building single-series ARIMA models using the Box-Jenkins [2] procedure. Denote the resulting residuals by ext and eyt. (ii) Form the cross-correlogram between these two residual series, i.e., compute rk = corr(ext, eyt-k) for positive and negative values of k. If any rk for k > 0 are significantly different from zero, there is an indication that Yt may be causing Xt, since the correlogram indicates that past Yt may be useful in forecasting Xt. Similarly, if any rk is significantly non-zero for k < 0, Xt appears to be causing Yt. If both occur, two-way causality, or feedback, between the series is indicated. Unfortunately, the sampling distribution of the rk depends on the exact relationship between the series. On the null hypothesis of no relationship, it is well known that the rk are asymptotically distributed as independent normal with means zero and variances 1/n, where n is the number of observations employed [9, p. 238], but experience shows that the test suggested by this result must be used with extreme caution in finite
Sims [20] provides a discussion of possible spurious sources of apparent causation in applications of this definition. In Section 6, below, we consider the likely importance of these in our empirical analysis.
samples.9 In practice, we also use a priori judgement about the forms of plausible relations between economic time series. Thus, for example, a value of r1 well inside the interval [-2/√n, +2/√n] might be tentatively treated as significant, while a substantially larger value of r7 might be ignored if r5, r6, r8, and r9 are all negligible. This step is perfectly analogous to the univariate Box-Jenkins identification step, where a tentative specification is obtained by judgmental analysis of a correlogram. The key word is "tentative"; the indicated direction of causation is only tentative at this stage and may be modified or rejected on the basis of subsequent modeling and forecasting results.10 (iii) For every indicated causation, a bivariate model relating the residuals is identified, estimated, and diagnostically checked. If only one-way causation is present, the appropriate model is unidirectional and can be identified directly from the shape of the cross-correlogram, at least in theory. However, if the series are related in a feedback fashion, the cross-correlogram has to be unraveled into a pair of transfer functions to help with model identification, by a procedure developed by Granger and Newbold [9, Ch. 7]. (iv) From the fitted model for residuals, after dropping insignificant terms, the corresponding model for the original series is derived, by combining the univariate models with the bivariate model for the residuals. It is then checked for common factors, estimated, and diagnostic checks applied.11 (v) Finally, the bivariate model for the original series is used to generate a set of one-step forecasts for a post-sample period. The corresponding errors are then compared to the post-sample one-step forecast errors produced by the univariate model developed in step (i) to see if the bivariate model actually does
10
11
One must apparently be even more careful with the Box-Pierce [3] test on sums of squared rk; see [5]. See Granger and Newbold [9, pp. 230–266] for a fuller discussion of this approach. Unpublished simulations performed at UCSD (e.g., C. Chiang, “An Investigation of Relationships Between Price Series,” unpublished dissertation, Department of Economics, 1978) find that it rarely signals non-existent causations but lacks power in that subtle causations are not always detected. OLS estimation suffices to produce unbiased estimates, since all the bivariate models considered are reduced forms. It also allows one to consider variants of one equation without disturbing the forecasting results from the other, and it is computationally simpler. On the other hand, where substantial contemporaneous correlation occurs between the residuals, seemingly-unrelated regressions GLS estimation can be expected to yield noticeably better parameter estimates and post-sample forecasts. All estimation in this study is OLS; a re-estimation of our final bivariate model using GLS might strengthen our conclusions somewhat.
forecast better.12 The use of sequential one-step forecasts follows directly from the definition above and avoids the problem of error build-up that would otherwise occur as the forecast horizon is lengthened. Because of specification and sampling error (and perhaps some structural change) the two forecast error series thus produced are likely to be cross-correlated and autocorrelated and to have non-zero means. In light of these problems, no direct test for the significance of improvements in mean-squared forecasting error appears to be available. Consequently, we have developed the following indirect procedure. For some out-of-sample observation, t, let e1t and e2t be the forecast errors made by the univariate and bivariate models, respectively, of some time series. Elementary algebra then yields the following relation among sample statistics for the entire out-of-sample period:
MSE(e1) - MSE(e2) = [s²(e1) - s²(e2)] + [m(e1)² - m(e2)²],   (1)

where MSE denotes sample mean-squared error, s² denotes sample variance, and m denotes sample mean. Letting

Δt = e1t - e2t   and   Σt = e1t + e2t,   (2)

and noting that the sample variances satisfy s²(e1) - s²(e2) = cov(Δ, Σ), equation (1) can be re-written as follows, even if e1t and e2t are correlated [9, p. 281]:

MSE(e1) - MSE(e2) = cov(Δ, Σ) + [m(e1)² - m(e2)²],   (3)

where cov denotes the sample covariance over the out-of-sample period. Let us assume that both error means are positive; the modifications necessary in the other cases should become clear. Consider the analogue of (3) relating population parameters instead of sample statistics, and let cov denote the population covariance and m denote the population mean. From (3), it is then clear that we can conclude that the bivariate model outperforms the univariate model if we can reject the joint null hypothesis cov(Δ, Σ) = 0 and m(Δ) = 0 in favor of the alternative hypothesis that both quantities are nonnegative and at least one is positive. Now consider the regression equation

Δt = β1 + β2[Σt - m(Σt)] + ut,   (4)
Alternatively, one might fit both models to the sample period, produce forecasts of the first post-sample observation, re-estimate both models with that observation added to the sample, forecast the second post-sample observation, and so on until the end of the post-sample period. This would, of course, be more expensive than the approach in the text.
where ut is an error term with mean zero that can be treated as independent of Σt.13 From the algebra of regression, the test outlined in the preceding paragraph is equivalent to testing the null hypothesis β1 = β2 = 0 against the alternative that both are nonnegative and at least one is positive. If either of the two least squares estimates, β̂1 and β̂2, is significantly negative, the bivariate model clearly cannot be judged a significant improvement. If one estimate is negative but not significant, a one-tailed t test on the other estimated coefficient can be used. If both estimates are positive, an F test of the null hypothesis that both population values are zero can be employed. But this test is, in essence, four-tailed; it does not take into account the signs of the estimated coefficients. If the estimates were independent, it is clear that the probability of obtaining an F statistic greater than or equal to F0, say, and having both estimates positive is equal to one-fourth the significance level associated with F0. Consideration of the possible shapes of isoprobability curves for (β̂1, β̂2) under the null hypothesis that both population values are zero establishes that the true significance level is never more than half the probability obtained from tables of the F distribution. If both estimates are positive, then one can perform an F test and report a significance level equal to half that obtained from the tables. The approach just described differs from others that have been employed to analyze causality in its stress on models relating the original variables and on post-sample forecasting performance. We now discuss these two differences. Many applications of the causality definition considered here (e.g., [13]) essentially stop at our stage (ii) and thus consider only the sample cross-correlogram of the prewhitened series. For a variety of reasons, it seems to us unwise to rest causality conclusions entirely on correlations between estimated residuals. Sims [19], for instance, has argued that there may be a tendency for such correlations to be biased toward zero because of specification error. To see the nature of the argument, suppose Y causes X, so that the appropriate model for X is bivariate. Estimation of such a model on the original series would allow the data to indicate the relative importance of "past X" and "past Y" in forecasting X. Prewhitening X, on the other hand, involves use of a misspecified model in this case, since "past Y" should be included. As in standard discussions
In fact, this independence assumption must be violated; a bit of algebra shows that in the population, cov(Σt, ut) = cov(Σt, Δt) - β2 var(Σt), where var denotes the population variance. On the other hand, it is clear that β1 is estimated without bias, and it can be shown that the bias in β̂2 is equal to the difference between the sample and population values of cov(Σt, ut)/var(Σt). This bias should thus be of negligible importance in moderate samples.
of omitted variable bias, correlation between "past X" and "past Y" will tend to lead the misspecified univariate model to over-state the importance of "past X" in forecasting current X. The correlation between the residual series from this model and (original or prewhitened) "past Y" will accordingly be biased toward zero. Thus, models directly relating the original variables provide a sounder, as well as a more natural, basis for conclusions about causality. As has been argued in detail by Granger and Newbold [9, Sect. 7.6], however, prewhitening and analysis of the cross-correlogram of the prewhitened series are useful steps in the identification of models relating the original series, since the cross-correlogram of the latter is likely to be impossible to interpret sensibly. Because the correlations between the prewhitened series (the rk) have unknown sampling distributions, this analysis involves subjective judgements, as does the identification step in univariate Box-Jenkins analysis. In neither case is an obviously better approach available, and in both cases the tentative conclusions reached are subjected to further tests. It is somewhat less clear how out-of-sample data are optimally employed in an analysis of causality. This question is closely related to fundamental problems of model evaluation and validation and is complicated by sampling error and possible specification error and time-varying coefficients. An attempt to sort all this out would clearly carry us well beyond the bounds of the present essay. However, we think the riskiness of basing conclusions about causality entirely on within-sample performance is reasonably clear. Since the basic definition of causality is a statement about forecasting ability, it follows that tests focusing directly on forecasting are most clearly appropriate. Indeed, it can be argued that goodness-of-fit tests (as opposed to tests of forecasting ability) are contrary in spirit to the basic definition.14 Moreover, within-sample forecast errors have doubtful statistical properties in the present context when the Box-Jenkins methodology is employed. While the power of that methodology has been demonstrated in numerous applications and rationalizes our use of it here, it must be noted that the identification (model specification) procedures in steps (i)–(iv) above involve consideration and evaluation of a wide variety of model formulations. A good deal of sample information is thus employed in specification choice, and there is a sense in which most of the sample's real degrees of freedom are used up in this process. It thus seems both safer and more natural to place considerable weight on out-of-sample forecasting performance.
If one finds that one model (using a wider information set, say) fits better than another, one is really saying “If I had known that at the beginning of the sample period, I could have used that information to construct better forecasts during the sample period.” But this is not strictly operational and thus seems somewhat contrary in spirit to the basic definition of causality that we employ.
The approach outlined above uses the post-sample data only in the final step, as a test track over which the univariate and bivariate models are run in order to compare their forecasting abilities. This approach is of course vulnerable to undetected specification error or structural change. Partly as a consequence of this, the likely characteristics of post-sample forecast errors render testing for performance improvement somewhat delicate, as we noted above. Finally, the appropriate division of the total data set into sample and post-sample periods in this approach is unclear. (We say a bit about this in light of our advertising/consumption results in Section 6.) These are nontrivial problems. But at present, we see no way to make more use of the post-sample data that does not encounter apparently equally severe problems.15 We do not want to seem overly dogmatic on this issue. Our basic point is simply that model specification (perhaps especially within the Box-Jenkins framework) may well be infected by sampling error and polluted by data mining, so that it is unwise to perform tests for causality on the same data set used to select the models to be tested. The procedure outlined above seems to handle this problem sensibly.
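To make the mechanics of the comparison concrete, the following is a minimal sketch, in Python and not part of the original paper, of the test built on equations (1)–(4): the difference of the two post-sample error series is regressed on a constant and the demeaned sum, the signs of the estimates are inspected, and, when both are positive, the F significance level is halved as described above. The function name, the 5 per cent one-tailed cut-off, and the use of numpy and scipy are assumptions of this illustration.

```python
import numpy as np
from scipy import stats

def post_sample_comparison(e1, e2):
    """Compare post-sample one-step errors (e1: univariate model, e2: bivariate model)
    via the regression D_t = b1 + b2 (S_t - mean(S)) + u_t, with D = e1 - e2, S = e1 + e2."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d, s = e1 - e2, e1 + e2
    X = np.column_stack([np.ones_like(s), s - s.mean()])
    n, k = X.shape
    beta = np.linalg.lstsq(X, d, rcond=None)[0]
    resid = d - X @ beta
    rss_u = resid @ resid                      # unrestricted residual sum of squares
    rss_r = d @ d                              # restricted model: b1 = b2 = 0
    se = np.sqrt(rss_u / (n - k) * np.diag(np.linalg.inv(X.T @ X)))
    if (beta < 0).all() or (beta / se).min() < -stats.t.ppf(0.95, n - k):
        # both estimates negative, or one significantly negative: no improvement claimed
        return beta, None
    if (beta > 0).all():
        f_stat = ((rss_r - rss_u) / k) / (rss_u / (n - k))
        # report half the tabulated F significance level, since both signs are positive
        return beta, (1.0 - stats.f.cdf(f_stat, k, n - k)) / 2.0
    # one estimate negative but insignificant: one-tailed t test on the positive coefficient
    j = int(np.argmax(beta))
    return beta, 1.0 - stats.t.cdf(beta[j] / se[j], n - k)
```

Applied to the twenty post-sample error pairs from (A.1) and (AC.2), a calculation of this kind is what underlies the 9.2 per cent figure reported in Section 5.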
4. THE DATA
In light of the evidence on the lengths of the relevant lags noted in Section 2, above, the use of quarterly data seems necessary if defensible judgements are to be reached about the causal relation, if any, between aggregate advertising and aggregate consumption. This section discusses the time series variables used to study that relation. All variables are 15
Two possibilities have been suggested. Both involve goodness-of-fit tests, about which we have some misgivings as footnote 14 indicates. (i) One could use asymptotic variants of covariance analysis (“Chow tests”) to investigate the appropriateness of the sample specification for the post-sample period. Assuming this test is passed by both univariate and bivariate models, goodness-of-fit in the pooled sample could be used to compare model performance. However, depending on the sample/post-sample split, final conclusions may be inordinately influenced by the same sample data that guided specification choice. Moreover, it is not clear what should be done if either model fails the stability test. Simply concluding that no inferences about causality can be made seems unsatisfactory, but any other alternative must run the risk of “mining” the post-sample data. Similar problems arise if the post-sample data are used for any critical diagnostic tests on the models selected. (In addition, appropriate testing procedures are unclear, since sampling error implies likely non-whiteness of post-sample errors.) (ii) One could simply re-estimate the univariate and bivariate models derived from the sample using only the post-sample data and compare fits for this period. Depending on the sample/post-sample split, again, these estimates may be unreliable. However, this approach avoids mining the post-sample data, and it yields error series with zero means. But these series will not necessarily be white. Moreover, it seems odd to carry over the specification from the sample period but otherwise to ignore the data on which it is based. Still, if very long time series are available, this second approach may be a viable alternative to the one discussed in the text.
computed for the period 1956–1975, yielding a total of 80 quarterly observations. A logarithmic transformation of all series is employed to reduce observed heteroscedasticity. We know of two series of U.S. quarterly advertising spending estimates: the PII and its successors,16 and extensions by the Columbia Broadcasting System (CBS) of Blank’s [1] series. The Appendix indicates why we elect to use the CBS figures here and describes their employment in the computation of ADN: national advertising in major media, current dollars per capita, seasonally adjusted. In [15, Ch. 3] it is argued that percentage-of-sales decision rules for advertising spending have the strongest theoretical rationale when both advertising and sales are in nominal (current dollar) terms. On the other hand, one might expect the impact of advertising on consumer spending to be most apparent when both quantities are in real terms. Real advertising data are obtained by adjusting expenditure figures to take into account changes in both rates and audience sizes; real advertising per capita must measure the number of messages to which an average person is exposed. There apparently exist no quarterly advertising cost or price indices that could be used directly to obtain real advertising, however. One must either deflate nominal spending totals by some arbitrarily chosen alternative quarterly price indices or use interpolated values of annual advertising price indices. Since the cost of advertising messages has changed relative to prices of other goods and services (see footnote 6, above), it seems safest to interpolate. The Appendix describes the use of interpolated annual indices to calculate ADR: national advertising in major media, 1972 dollars per capita, seasonally adjusted. The following consumption series were based on data from the January and March, 1976 issues fo the Survey of Current Business: CTN: total personal consumption expenditure, thousands of current dollars per capita, seasonally adjusted; CGN: personal consumption expenditure on goods, thousands of current dollars per capita, seasonally adjusted; CTR: total personal consumption expenditure, thousands of 1972 dollars per capita, seasonally adjusted; CGR: personal consumption expenditure on goods, thousands of 1972 dollars per capita, seasonally adjusted. The main reason for considering consumption spending on goods only is that the bulk of services consumption is devoted to items that are not heavily nationally advertised, though they may be locally advertised [15, pp. 62–64]. Moreover, services consumption is notoriously stable about its trend. It is relatively well known [18, 24] that the standard methods of seasonal adjustment, which have been applied to the series discussed thus 16
These are the Marketing/Communications Index and, beginning in 1971, the McCannErickson Index. In recent years, all these estimates have been prepared by McCannErickson and reported monthly in the Survey of Current Business.
far, can lead to sizeable biases in contexts such as ours.17 We would have preferred to begin with a set of time series that had not been seasonally adjusted, and some of the results reported below would seem to support this prejudice. Of the series discussed so far, however, it was only possible to obtain unadjusted numbers corresponding to CTN and CGN. Based on unpublished data supplied by the U.S. Department of Commerce, we assembled UCTN: total personal consumption expenditure, thousands of current dollars per capita, not seasonally adjusted; and UCGN: personal consumption expenditure on goods, thousands of current dollars per capita, not seasonally adjusted. All series employed are natural logarithms (as noted above) of quarterly totals at annual rates. All are available from the authors on request.
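As an aside before the results, steps (i) and (ii) of Section 3 can be sketched in a few lines of Python. The ARIMA orders below are placeholders rather than the specifications actually fitted to these series, and the reliance on statsmodels is an assumption of this illustration, not part of the original study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def prewhiten(series, order=(0, 1, 1), seasonal_order=(0, 0, 0, 0)):
    """Step (i): fit a single-series ARIMA model and keep its residuals."""
    return ARIMA(series, order=order, seasonal_order=seasonal_order).fit().resid

def cross_correlogram(ex, ey, max_lag=10):
    """Step (ii): r_k = corr(ex_t, ey_(t-k)) for k = -max_lag, ..., +max_lag."""
    ex, ey = np.asarray(ex, float), np.asarray(ey, float)
    n = min(len(ex), len(ey))
    ex, ey = ex[-n:], ey[-n:]
    r = {}
    for k in range(-max_lag, max_lag + 1):
        a, b = (ex[k:], ey[:n - k]) if k >= 0 else (ex[:n + k], ey[-k:])
        r[k] = float(np.corrcoef(a, b)[0, 1])
    return r   # values well beyond about 2/sqrt(n) in absolute size deserve attention
```

In the notation of Section 3, sizeable values at positive k would tentatively suggest that the series behind ey causes the series behind ex, subject to the modeling and post-sample checks in steps (iii)–(v).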
5. EMPIRICAL RESULTS
We initially considered only the first six (seasonally adjusted) series described in Section 4. It was decided to retain the last 20 observations to evaluate out-of-sample forecasting performance, since we reached the judgement that fewer than 60 data points would not permit adequate identification and estimation in this case. As per step (i) of the approach outlined in Section 3, univariate time series models were identified and estimated for the six series considered using the sixty quarterly observations from 1956 through 1970.18 None of the six residual (prewhitened) series showed significant serial correlation. Proceeding to step (ii), cross-correlograms of the appropriate pairs of residual series were computed. Letting ext denote the residual from a univariate model for the variable xt, this involved computation of corr (eadnt, ectnt-k), corr (eadnt, ecgnt-k), corr (eadrt, ectrt-k), and corr (eadrt, ecgrt-k) for k between -10 and +10. All four cross-correlograms were strikingly similar, indicating that it made little difference whether we worked in nominal or real terms, or whether we used total or goods consumption. All four showed a strong contemporaneous correlation (k = 0), which, however, provides no information on the direction of causation. Sizeable positive correlations for k = -1 suggested that advertising might be causing consumption, while similar correlations for k = +1, +2, and +3 suggested consumption causing advertising. All four of these cross-correlograms showed substantial negative values at k = +7 and k = -5. Since the neighboring correlations were 17
18
See the Appendix, especially footnote 29. Since the Census X-11 procedure used on these data involves a two-sided filter for most of the sample period, its employment in an investigation of causation is particularly worrisome. Descriptions of these models and other statistical results not reported here are contained in an earlier version of this essay, available as Discussion Paper 77–9 from the Department of Economics, University of California, San Diego (La Jolla, CA 92093).
clearly negligible, we found it difficult to interpret these in causal terms. Suspecting that the correlations at k = -5 and, possibly, k = +7 were artifacts of the seasonal adjustment procedures applied to the data, we obtained the unadjusted consumption expenditure series UCTNt and UCGNt defined above. In light of the discussion of services consumption in Section 4 and the similarity of the cross-correlograms discussed above, it was decided to confine our attention initially to UCGNt, current dollar consumption spending on goods. Proceeding as before, the following univariate model was identified, estimated, and checked:
(1 - B)(1 - B⁴)UCGNt = .00086 + (1 - .204B² - .747B⁴)eucgnt,   (C.1)
                      (.00043)       (.082)    (.075)

where B is the lag or backward shift operator, numbers in parentheses are standard errors, and eucgnt is a residual series, as above. (The presence of (1 - B⁴) reflects the use of seasonal differencing.) The corresponding univariate model for advertising was the following:
(1 - B)ADNt = .00911 + (1 - .256B⁵)eadnt.   (A.1)
             (.0022)        (.13)
The cross-correlogram between the residual series from these models is given as row 1 in Table 4.1. Use of unadjusted consumption substantially reduced the anomalous correlations at k = -5 and k = +7. (An approximate 95 per cent confidence interval for any single correlation here is [-.27, +.27].) This suggests that these correlations were in fact artifacts of the use of standard seasonal adjustment procedures. In light of these results, it was decided to restrict further attention to the relation between ADNt and UCGNt.19 The sample and post-sample performance of the univariate models (A.1) and (C.1) are shown in Table 4.2. As per Section 3, we now proceed to step (iii), modeling the relation between the univariate residual (i.e., prewhitened) series eadnt and eucgnt. Examination of row 1 of Table 4.1 shows that the contemporaneous (k = 0) correlation is large compared to 1/√n, which is .14 here. The correlation at k = +1 is not significant on the usual test, but it and the k = 0 term together suggest a sensible lag structure that deserves further examination. In contrast, the k = -1 and k = -2 terms are clearly negligible. The correlations at k = -3, -4, and -5 are nonnegligible, but it is hard to put them together with the k = 0 term (and the negligible terms in
Note that this means that, as mentioned in footnotes 17 and 29, the advertising series has been put through a two-sided filter, while the consumption series has not been. In general, one would expect this to bias our results toward a finding that advertising causes consumption, if the series are actually causally related.
Table 4.1 Auto and cross-correlograms for residual series.

                             Correlation for k =
Row  Residual Series        -7    -6    -5    -4    -3    -2    -1     0    +1    +2    +3    +4    +5    +6    +7
1    eadnt, eucgnt-k       .05   .06  -.14  -.13  -.19   .09   .04   .50   .18  -.02   .16  -.13   .16  -.13  -.13
2    hadnt, hadnt-k       -.13   .09  -.20   .00   .19  -.03   .01   1.0   .01  -.03   .19   .00  -.20   .09  -.13
3    eucgn′t, eucgn′t-k   -.15  -.12   .08  -.05  -.09   .01  -.03   1.0  -.03   .01  -.09  -.05   .08  -.12  -.15
4    eucgn′t, hadnt-k     -.14  -.13   .10   .05   .16   .18  -.10   .50   .13   .05  -.08  -.14  -.09  -.01  -.01

Table 4.2 Performance of univariate and bivariate models

Row  Model   Model Type                     Error Term   Sample Variance(a)   Post-Sample MSE(b)
1    (A.1)   Univariate                     eadn             454                  722
2    (AC.1)  Bivariate on Residuals         gadn             435                  600
3    (AC.2)  Bivariate on Original Series   hadn             416                  533
4    (C.1)   Univariate                     eucgn            245                  261
5    (CA.1)  Bivariate on Residuals         gucgn            213                  290
6    (C.2)   Univariate                     eucgn′           268                  234
7    (CA.2)  Bivariate on Original Series   hucgn            263                  222

(a) Sample period (1956–70) variance ×10⁶; not corrected for degrees of freedom.
(b) Post-sample period (1971–75) mean squared error of one-step-ahead forecasts ×10⁶.
between) to form a plausible lag structure. Hence the cross-correlogram tentatively suggests that a unidirectional model, in which eucgnt causes, but is not caused by, eadnt is appropriate. Before proceeding on this assumption, however, it seems appropriate to test it by constructing a forecasting model for eucgnt employing lagged values of eadnt. The best model obtained, called (CA.1) in Table 4.2, includes eadnt-k for k = 3, 4, and 5 only. A comparison of rows 4 and 5 of Table 4.2 shows that this model performs quite badly in the post-sample period. These findings support the tentative identification of unidirectional causation. Accordingly, we now consider the impact of prewhitened consumption on prewhitened advertising. The form of the cross-correlogram suggests that an appropriate identification for a model of this relationship is
(1 - aB)eadnt = (b1 + b2B)eucgnt + gadnt.

The aB term is included because it is necessary to have polynomials in the lag operator, B, of the same order on both sides of the equation, since the model represents a unidirectional relationship between two white noise series [9, Ch. 7]. If a purely forecasting model is constructed using this identification (by omitting the contemporaneous term), one obtains
(1 + .200B)eadnt = (.382B)eucgnt + gadnt,   (AC.1)
    (.15)             (.21)
where gadnt appears to be white noise. The within-sample variance of gadnt is only 4 per cent less than that of eadnt, as a comparison of rows 1 and 2 of Table 4.2 indicates. On the other hand, the form of model (AC.1) is economically plausible. Moreover, (AC.1) forecasts well in the post-sample period, yielding a 17 per cent improvement over the performance of (A.1). We are now in a position to perform step (iv) of the procedure outlined in Section 3, the construction of models relating the original series. The evidence so far suggests that a unidirectional bivariate model is appropriate, with UCGNt causing ADNt, but not the reverse. Substituting for eadnt and eucgnt in (AC.1) from (A.1) and (C.1), appropriate forms for the final forecasting model can be identified. Estimation and deletion of insignificant higher-order terms yields the following bivariate model:
(1 + .327B - .625B²)(1 - B)ADNt
     (.13)    (.16)
  = .00665 + (.636B + .317B⁵)UCGNt + (1 - .686B²)hadnt,   (AC.2)
    (.0025)   (.21)    (.19)              (.19)
(1 - B)(1 - B⁴)UCGNt = .00126 + (1 - .223B² - .659B⁴)eucgn′t.   (C.2)
                      (.00055)       (.12)     (.13)

Note that (C.2) is not identical to the univariate model (C.1) presented earlier. This is because (C.1) was estimated using a standard univariate Box-Jenkins program that used backforecasting to produce unconditional estimates, whereas all bivariate models had to be estimated with a more general (but less convenient) nonlinear least squares program that produces conditional, single-equation estimates [2, Sect. 7.1].20 For most models, these procedures yield virtually identical estimates. Rows 4 and 6 in Table 4.2 indicate that (C.1) is slightly better than (C.2) within the sample, but it produces slightly worse forecasts in the post-sample period. Model (C.2) thus appears to be the appropriate one to use for post-sample comparisons. The auto-correlograms of the residual series hadnt and eucgn′t are given in rows 2 and 3 of Table 4.1. Both pass the standard single-series tests for whiteness. The cross-correlogram between these two series is given as row 4 in Table 4.1. Several of the correlations for negative k suggest that further lagged values of UCGNt should be added to the right-hand side of (AC.2). A variety of experiments of this sort were performed in the course of identifying the model, however, and no significant or suggestive results were obtained. An examination of the correlations for positive k in row 4 of Table 4.1 shows that none exceeds one asymptotic standard error, 1/√n = .14. The correlation at k = +1 is nonnegligible, however, and its size and location are suggestive. If the large contemporaneous correlation between the residual series is partly due to advertising causing consumption, one would expect the previous quarter's advertising to have some effect on current consumption. This effect should show up as a nonzero correlation between eucgn′t and hadnt-1. On the other hand, it is hard to rationalize taking the isolated nonnegligible correlation at k = +4 seriously. Thus the marginal term at k = +1 led us to identify and estimate the following model as a check on the (AC.2)/(C.2) structure:
(1 - B)(1 - B⁴)UCGNt
  = .001885 - .121(1 - B)ADNt-1 + (1 - .162B² - .684B⁴)hucgnt.   (CA.2)
    (.00090)  (.076)                   (.15)     (.11)
The series hucgnt passes the standard tests. A comparison of rows 6 and 7 in Table 4.2 indicates that (CA.2) performs slightly better than (C.2) in both sample and post-sample periods. We now turn to step (v) of our procedure, the evaluation of the postsample forecasting performance of models fitted to the original series. Let 20
See footnote 11.
us first consider models (C.2) and (CA.2). Use of the formal comparison test presented in Section 3 is ruled out here because, while the bivariate model (CA.2) had a smaller forecast error variance at the 18 per cent level of significance, its mean forecast error was larger at the .1 per cent level. (These significance levels are based on one-tailed t tests on regression equation (4) in Section 3.) The overall post-sample mean-squared error for the bivariate model is only 5.1 per cent lower than for the univariate model, and neither of these tests suggests that this difference is significant at any reasonable level. We conclude, therefore, that the bivariate model (CA.2) is not an improvement on the univariate model for aggregate consumption (C.2); past advertising does not seem to be helpful in forecasting consumption.21 We must accordingly retain the null hypothesis that aggregate advertising does not cause aggregate consumption. In contrast, Table 4.2 indicates that our bivariate model for aggregate advertising (AC.2) forecast noticeably better than the univariate model (A.1), reducing the post-sample MSE by some 26 per cent.22 The post-sample forecast error series from both models had positive sample means. The Durbin-Watson statistic for equation (4), in Section 3, was 2.35 (20 observations), so no autocorrelation correction was indicated. Both coefficient estimates were positive, and the F statistic (with 2 and 18 degrees of freedom) corresponding to the null hypothesis that both population values are zero was 1.86, significant at the 18.4 per cent level.23 In light of the discussion in Section 3, this means that we can reject the null hypothesis that the two models have equal mean-squared errors in favor of the superiority of the bivariate model at something less than the 9.2 per cent level of significance. This is hardly overwhelming evidence, but it does suggest that aggregate consumption is useful in forecasting aggregate advertising, and this indicates that consumption does cause advertising.
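The quoted significance levels can be checked directly from the exact formula given in footnote 23. The snippet below, in Python, is a verification of that arithmetic only, not part of the original analysis.

```python
# Footnote 23: the significance level of an F(2, n) statistic is [n / (n + 2F)]**(n/2)
def f2_significance(f_stat, n):
    return (n / (n + 2.0 * f_stat)) ** (n / 2.0)

p = f2_significance(1.86, 18)          # F = 1.86 with 2 and 18 degrees of freedom
print(round(p, 3), round(p / 2, 3))    # 0.184 and 0.092, matching the 18.4 and 9.2 per cent levels
```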
6. CONCLUSIONS
Applying the definition of causality discussed in Section 3, the analysis of Section 5 provides evidence that fluctuations in aggregate consumption cause fluctuations in aggregate advertising. No significant statistics suggesting that advertising changes affect consumption were encoun21
22
23
In earlier versions of this paper, we argued that this conclusion was strengthened because the negative coefficient of (1 - B)ADNt-1 in (CA.2) made no economic sense. Chris Sims has pointed out to us, however, that a negative coeffiecient is not all that implausible. Suppose that the main effect of aggregate advertising is to increase current spending on durables at the expense of future spending. Then, all else equal, a “high” value of past advertising would lead one to expect a “low” value of current consumption spending. It is worth noting that the model built on the original variables, AC.2, out-performs the model built on the prewhitened series (AC.1). This is consistent with specification error in the latter, as discussed toward the end of Section 3. From M. Ambramowitz and I. Stegun, Handbook of Mathematical Functions (Dover, 1972), equation (26.6.4), the significance level of an F-statistic with 2 and n degrees of freedom is given exactly by [n/(n + 2F)]n/2.
tered. Our empirical results are thus consistent with a model in which causation runs only from consumption to advertising. Of course, any set of empirical results is in principle consistent with an infinite number of alternative models. In order to establish the value of the evidence we have presented, it is necessary to consider whether our results could have arisen from plausible alternative models with different causal structures. As we noted in Section 5, our results are consistent with “instantaneous” causation from advertising to consumption.24 All crosscorrelograms between pairs of prewhitened series show high contemporaneous correlations. This suggests the possibility of an instantaneous or very short-term (within one quarter) relationship between advertising and consumption. But there is no way to tell if this relationship involves consumption causing advertising, advertising causing consumption, or a feedback structure involving both directions of causation. Thus, sudden unexpected changes in aggregate advertising may affect consumption within a quarter, but the finding that past advertising does not help in forecasting consumption indicates that such effects, if they exist, do not persist over time intervals that are substantial relative to a calendar quarter. It seems implausible to us that advertising affects consumption in this fashion. As Sims [20] has pointed out, if one variable, Xt, is used to stabilize another, Yt, optimally over time, the resultant time series can show spurious causation from Yt to Xt. But this does not seem likely to be a problem here. It is somewhat implausible to think that uncoordinated advertising decisions lead the business sector to act “as if” accurately stabilizing aggregate consumption. But more importantly, if the structural effect of advertising on consumption were positive, and if the exogenous disturbances to consumption were positively serially correlated, the optimal control hypothesis would imply negative, not positive coefficients on lagged consumption in model (AC.2). Though our data set was superior to those previously employed to study the aggregate advertising/consumption relation, it was not entirely satisfactory. First, it would have been preferable to have worked with advertising data that had not been seasonally adjusted. On the other hand, as pointed out in footnote 19, seasonality problems here should have biased our estimates toward finding causation from advertising to consumption. Second, it is at least plausible that ADN is more infected with measurement error than UCGN. As Sims [20] has shown, this can lead to a spurious causal ordering in the direction we find. However, it seems unlikely to us that measurement error in ADNt is sufficiently large relative to its quarter-to-quarter variation to have significantly affected the results reported here. 24
It should be clear that the difficulty of interpreting contemporaneous correlations in causal terms is not particular to our approach to testing for causality or to our data set.
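The point about contemporaneous correlation can be illustrated with a small simulation of our own; it is no part of the authors' evidence. When two innovation series share only a within-period shock, the cross-correlogram of the prewhitened series is large at k = 0 and near zero elsewhere, so the lagged correlations on which the causality tests rest carry no information about direction. The series names and parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
common = rng.standard_normal(n)              # a shock hitting both series within the same quarter
ex = common + 0.5 * rng.standard_normal(n)   # stand-in for prewhitened "advertising"
ey = common + 0.5 * rng.standard_normal(n)   # stand-in for prewhitened "consumption"

def r(k):                                    # corr(ex_t, ey_(t-k))
    return np.corrcoef(ex[k:], ey[:n - k])[0, 1] if k >= 0 else np.corrcoef(ex[:n + k], ey[-k:])[0, 1]

print({k: round(float(r(k)), 2) for k in range(-3, 4)})
# roughly 0.8 at k = 0 and values near zero at all other lags
```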
Finally, the total sample of 80 observations was not as large as would have been desirable. Given the importance of post-sample testing in our approach, a post-sample period of more than 20 observations might have permitted more precise inferences. Were we to do this study again, we would probably divide the data more evenly between sample and postsample periods for this reason. Of course, this problem relates to the strength of our conclusions, not directly to the pattern of causation we detect.25 In short, causality testing with typical economic data remains at the frontier of econometric work and is hence a rather non-routine affair. Nevertheless, we believe that the results discussed above showing that fluctuations in past aggregate consumption appear to influence aggregate advertising, but not vice-versa, are valid at the significance level quoted. Moreover, our experience with the test for causality proposed in Section 3 has left us confident of its utility. Its first desirable feature is the focus on the original variables rather than the pre-whitened (residual) series. In the application in Section 5, steps (iv) and (v) yielded much stronger evidence than did the analysis of pre-whitened series in steps (ii) and (iii). The second desirable feature of our approach is its stress on out-of-sample forecasting performance. We discussed the complexities involved in optimal use of out-of-sample data in Section 3. Sample data mining (leading to specification error) and structural instability can lead to difficulty in obtaining useful causal inferences with the methodology proposed here. However, we find this possibility distinctly preferable to the spurious inferences that these problems can easily produce when out-of-sample verification is not employed. Similarly, restricting causal hypothesis testing to a separate out-of-sample period clearly decreases the number of degrees of freedom available for such testing: on the other hand, only then can one be really sure that none of those degrees of freedom have been “used up” in the model identification and estimation process. APPENDIX The CBS advertising spending estimates are used here instead of the PII for two reasons. First, changes in media coverage in the PII cause a break in 1971.26 Second, within the 1953–1970 period, the media covered by the PII become increasingly unrepresentative over time.27 25
26
27
In addition to these problems, we cannot rule out the possibility that our results were generated by a structure in which advertising and consumption both depend on some omitted third variable. But Sims [20] has shown that conditions under which spurious causal orderings can arise in this fashion are rather implausible. See the May and June, 1971 issues of the Survey of Current Business. A similar break occurred between 1952 and 1953 [23, p. 8]. PII covered network radio and television but did not cover the spot markets in these media. (Spot television was added in 1971.) By 1966, national advertising spending for
In [15, App. B], CBS estimates of quarterly movements of national advertising spending in newspapers, magazines, business papers, outdoor media, network television, spot television, network radio, and spot radio were employed to extend Blank’s [1] series through 1967.28 For this study, we obtained more recent CBS estimates of quarterly spending in all these media except business papers and outdoor media for the 1966–1975 subperiod,29 along with current McCann–Erickson estimates of annual spending totals in these media for the entire 1956–1975 period.30 The quarterly totals reported in [15, App. B] were used for the 1956–1965 subperiod. The quarterly flows for each medium were rescaled, where necessary, so that annual averages equaled the McCann–Erickson annual totals.The six resultant series were used, along with quarterly population from various issues of the Survey of Current Business, to obtain ADN. A set of annual cost-per-million (CPM) indices, which reflect changes in both media costs and audience sizes, were obtained from McCann–Erickson for the media covered by ADN for the 1960–1975 subperiod. These were linked to the Printer’s Ink indices reported in [15, App. A] at 1960. This six CPM indices were then interpolated, using a linear method that ensured that the averages of the quarterly indices equaled the annual value.31 The six current dollar spending series were
28
29
30
31
spot television was two-thirds that for network television, while spending in spot radio was more than four times that for network radio [15, p. 8]. National advertising is prepared centrally and disseminated to several localities, while local advertising is prepared and disseminated in the same locality. Local advertising is largely done by retailers, while national manufacturers are the dominant national advertisers. Spending in business papers was excluded because we did not expect it to be causally related to household consumption spending. Outdoor media had to be dropped because CBS had stopped preparing quarterly estimates. The CBS series were seasonally adjusted at the source using (basically) the Census X-11 program. The sources used by CBS in preparing the earlier data are discussed in [1; 15, App. B]. The more recent estimates of quarterly movements are based on information from the Television Bureau of Advertising, Broadcast Advertisers Reports, Television/Radio Age, the Radio Advertising Bureau, the Newspaper Advertising Bureau, Publishers’ Information Bureau, and a cooperative service commissioned by the major radio networks. The McCann-Erickson totals include both media charges and production costs. These estimates appear at intervals in Advertising Age. See also [22, Ch. I] and recent numbers of the Statistical Abstract of the United States. Let X(t) be the value of some series in year t, and let x(i, t) be the interpolated value for quarter i of that year. In the interpolation method employed, x(1, t) and x(2, t) were found by linear interpolation between X(t - 1) and an adjusted number X¢(t), and x(3, t) and x(4, t) were similarly based on X¢(t) and X(t + 1). X¢(t) was selected for each year for each series so that the average of the x(i, t) equalled X(t). This makes all the x(i, t) linear functions of x(t - 1), X(t), and X(t + 1). (Ordinary linear interpolation was used for 1975.) Using standard tests for homogeneity of means and variances of percentage changes ending in each of the four quarters, this method performed well relative to a variety of alternative average-preserving interpolation techniques.
deflated by the resultant quarterly CPM indices, and the deflated totals were used, along with the population series, to obtain ADR.
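For readers who want to replicate the deflation step, the average-preserving interpolation of footnote 31 can be sketched as follows. Placing annual index values at year centres and quarterly values at quarter centres is our reading of that footnote, not something it states explicitly, and the ordinary linear interpolation used for the final year is omitted here.

```python
import numpy as np

def interpolate_year(x_prev, x_year, x_next):
    """Quarters 1-2 lie on the line from last year's value to an adjusted value x_adj,
    quarters 3-4 on the line from x_adj to next year's value, with x_adj chosen so
    that the four quarterly values average exactly to the annual value x_year."""
    w12 = np.array([0.625, 0.875])   # positions of quarters 1-2 between the two annual anchors
    w34 = np.array([0.125, 0.375])   # positions of quarters 3-4 between the two annual anchors
    x_adj = (x_year - 0.125 * x_prev - 0.125 * x_next) / 0.75
    return np.concatenate([x_prev + w12 * (x_adj - x_prev),
                           x_adj + w34 * (x_next - x_adj)])

q = interpolate_year(100.0, 104.0, 112.0)
print(q.round(2), q.mean())   # the four quarterly values average exactly to 104.0
```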
REFERENCES [1] [2] [3]
[4] [5]
[6]
[7]
[8] [9] [10] [11] [12]
[13]
[14] [15] [16] [17]
Blank, D. M.: “Cyclical Behavior of National Advertising,” Journal of Business, 35 (1962), 14–27. Box, G. E. P., and G. M. Jenkins: Time Series Analysis. San Francisco: Holden-Day, 1970. Box, G. E. P., and D. A. Pierce: “Distribution of Residual Autocorrelations in Autoregressive-Integrated Moving Average Time Series Models,” Journal of the American Statistical Association, 65 (1970), 1509–1526. Clarke, D. G.: “Econometric Measurement of the Duration of the Advertising Effect on Sales,” Journal of Marketing Research, 13 (1976), 345–357. Davies, N., C. M. Triggs, and P. Newbold: “Significance Levels of the BoxPierce Portmanteau Statistic in Finite Samples,” Biometrica, 64 (1977), 517–522. Durbin, J.: “Testing for Serial Correlation in Least-Squares Regression When Some of the Regressors are Lagged Dependent Variables,” Econometrica, 38 (1970), 410–421. Ekelund, R. G., and W. P. Gramm: “A Reconsideration of Advertising Expenditures, Aggregate Demand, and Economic Stabilization,” Quarterly Review of Economics and Business, 9 (1969), 71–77. Granger, C. W. J.: “Investigating Causal Relations by Econometric Models and Cross-Spectral Methods,” Econometrica, 37 (1969), 424–438. Granger, C. W. J., and P. Newbold: Forecasting Economic Time Series. New York: Academic Press, 1977. Houthakker, H. S. and L. D. Taylor: Consumer Demand in the United States, 2nd Ed. Cambridge: Harvard University Press, 1970. Kotler, P.: Marketing Management, 3rd Ed. Englewood Cliffs: PrenticeHall, 1976. Maddala, G. S., and A. S. Rao: “Tests for Serial Correlation in Regression Models with Lagged Dependent Variables and Serially Correlated Errors,” Econometrica, 41 (1973), 761–764. Pierce, D. A.: “Relationships – and the Lack Thereof – Between Economic Time Series, with Special Reference to Money and Interest Rates,” Journal of the American Statistical Association, 72 (1977), 11–22. Pierce, D. A., and L. D. Haugh: “Causality in Temporal Systems: Characterizations and a Survey,” Journal of Econometrics, 5 (1977), 265–293. Schmalensee, R.: The Economics of Advertising. Amsterdam: NorthHolland, 1972. Simon, J. L.: Issues in the Economics of Advertising. Urbana: University of Illinois Press, 1970. Sims, C. A.: “Money, Income, and Causality,” American Economic Review, 62 (1972), 540–552.
[18] ———: "Seasonality in Regression," Journal of the American Statistical Association, 69 (1974), 618–626.
[19] ———: "Comment," Journal of the American Statistical Association, 72 (1977), 23–24.
[20] ———: "Exogeneity and Causal Ordering in Macroeconomic Models," in New Methods in Business Cycle Research, ed. by C. A. Sims. Minneapolis: Federal Reserve Bank of Minneapolis, 1977.
[21] Taylor, L. D., and D. Weiserbs: "Advertising and the Aggregate Consumption Function," American Economic Review, 62 (1972), 642–655.
[22] U.S. Bureau of the Census: Historical Statistics of the United States: Colonial Times to 1970. Washington: U.S. Government Printing Office, 1975.
[23] Verdon, W. A., C. R. McConnell, and T. W. Roesler: "Advertising Expenditures as an Economic Stabilizer, 1954–64," Quarterly Review of Economics and Business, 8 (1968), 7–18.
[24] Wallis, K. F.: "Seasonal Adjustment and Relations Between Variables," Journal of the American Statistical Association, 69 (1974), 18–31.
PART TWO
INTEGRATION AND COINTEGRATION
CHAPTER 5
Spurious Regressions in Econometrics* C. W. J. Granger and P. Newbold
1.
INTRODUCTION
It is very common to see reported in applied econometric literature time series regression equations with an apparently high degree of fit, as measured by the coefficient of multiple correlation R2 or the corrected coefficient R 2, but with an extremely low value for the Durbin–Watson statistic. We find it very curious that whereas virtually every textbook on econometric methodology contains explicit warnings of the dangers of autocorrelated errors, this phenomenon crops up so frequently in wellrespected applied work. Numerous examples could be cited, but doubtless the reader has met sufficient cases to accept our point. It would, for example, be easy to quote published equations for which R2 = 0.997 and the Durbin–Watson statistic (d) is 0.53. The most extreme example we have met is an equation for which R2 = 0.99 and d = 0.093. However, we shall suggest that cases with much less extreme values may well be entirely spurious. The recent experience of one of us [see Box and Newbold (1971)] has indicated just how easily one can be led to produce a spurious model if sufficient care is not taken over an appropriate formulation for the autocorrelation structure of the errors from the regression equation. We felt, then, that we should undertake a more detailed enquiry seeking to determine what, if anything, could be inferred from those regression equations having the properties just described. There are, in fact, as is well-known, three major consequences of autocorrelated errors in regression analysis: (i) Estimates of the regression coefficients are inefficient. (ii) Forecasts based on the regression equations are sub-optimal. (iii) The usual significance tests on the coefficients are invalid. The first two points are well documented. For the remainder of this paper, we shall concentrate on the third point, and, in particular, examine * Journal of Econometrics, 2, 1974, 111–120.
the potentialities for “discovering” spurious relationships which appear to us to be inherent in a good deal of current econometric methodology. The point of view we intend to take is that of the statistical time series analyst, rather than the more classic econometric approach. In this way it is hoped that we might be able to illuminate the problem from a new angle, and hence perhaps present new insights. Accordingly, in the following section we summarize some relevant results in time series analysis. In sect. 3 we indicate how nonsense regressions relating economic time series can arise, and illustrate these points in sect. 4 with the results of a simulation study. Finally, in sect. 5, we re-emphasize the importance of error specification and draw a distinction between the philosophy of time series analysis and econometric methodology, which we feel to be of great importance to practitioners of the latter. 2. SOME RESULTS IN TIME SERIES ANALYSIS Let Wt denote a time series which is stationary (it could represent deviation from some deterministic trend). Then, the so-called mixed autoregressive moving average process, Wt - f1Wt-1 - . . . - fpWt-p = at - q1at-1 - . . . - qqat-q,
(1)
where at represents a sequence of uncorrelated deviates, each with the same variance, is commonly employed to model such series. The sequence at is referred to as “white noise”. For brevity, eq. (1) can be written as f (B)Wt = q (B)at ,
(2)
where f(B) and q(B) are polynomial lag operators with appropriate roots to ensure stationarity of Wt and uniqueness of representation. Suppose, now, that one has a given time series Xt. Box and Jenkins (1970) urge that, while this series itself may not be stationary, it can often be reduced to stationarity by differencing a sufficient number of times; that is, there exists an integer d such that ∇dXt = Wt
(3)
is a stationary time series. Combining eqs. (2) and (3), the series Xt can be represented by the model, f (B)— d X t = q (B)at .
(4)
Eq. (4) is said to represent an autoregressive integrated moving average process of order (p, d, q), denoted as A.R.I.M.A. (p, d, q). As regards economic time series, one typically finds a very high serial correlation between adjacent values, particularly if the sampling interval is small, such as a week or a month. This is because many economic series are rather “smooth”, with changes being small in magnitude compared
to the current level. There is thus a good deal of evidence to suggest that the appropriate value for d in eq. (4) is very often one. [See, for example, Granger (1966), Reid (1969) and Newbold and Granger (1974).] Alternatively, if d = 0 in eq. (4) we would expect f(B) to have a root (1 - fB) with f very close to unity. The implications of this statement are extremely important, as will be seen in the following section. The simplest example of the kind of series we have in mind is the random walk, ∇Xt = at. This model has been found to represent well certain price series, particularly in speculative markets. For many other series, the integrated moving average process, ∇Xt = at - qat-1, has been found to provide good representation. A consequence of this behaviour of economic time series is that a naive “no change” model will often provide adequate, though by no means optimal, forecasts. Such models are often employed as benchmarks against which the forecast performance of econometric models can be judged. [For a criticism of this approach to evaluation, see Granger and Newbold (1973).] 3. HOW NONSENSE REGRESSIONS CAN ARISE Let us consider the usual linear regression model with stochastic regressors: Y = Xb + e,
(5)
where Y is a T ¥ 1 vector of observations on a “dependent” variable, b is a K ¥ 1 vector of coefficients whose first member b0 represents a constant term and X is a T ¥ K matrix containing a column of ones and T observations on each of (K - 1) “independent” variables which are stochastic, but distributed independently of the T ¥ 1 vector of errors e. It is generally assumed that E(e ) = 0,
(6)
and
E(ee′) = σ²I.    (7)
A test of the null hypothesis that the “independent” variables contribute nothing towards explaining variation in the dependent variable can be framed in terms of the coefficient of multiple correlation R2. The null hypothesis is H0 : b1 = b2 = . . . = bK-1 = 0,
(8)
and the test statistic
F = [(T - K)/(K - 1)] · R²/(1 - R²)    (9)
is compared with tabulated values of Fisher’s F distribution with (K - 1) and (T - K) degrees of freedom, normality being assumed. Of course, it is entirely possible that, whatever the properties of the individual time series, there does exist some b so that e = Y - Xb satisfies the conditions (6) and (7). However, to the extent that the Yt’s do not constitute a white noise process, the null hypothesis (8) cannot be true, and tests of it are inappropriate. Next, let us suppose that the null hypothesis is correct and one attempts to fit a regression of the form (5) to the levels of economic time series. Suppose, further, that, as we have argued in the previous section is often the case, these series are non-stationary or, at best, highly autocorrelated. In such a situation the test procedure just described breaks down, since the quantity F in eq. (9) will not follow Fisher’s F distribution under the null hypothesis (8). This follows since under that hypothesis the residuals from eq. (5), et = Yt - b0;
t = 1, 2, . . . , T,
will have the same autocorrelation properties as the Yt series. Some idea of the distributional problems involved can be obtained from consideration of the case: Yt = b0 + b1Xt + et, where it is assumed that Yt and Xt follow the independent first order autoregressive processes, Yt = fYt-1 + at,
Xt = f*Xt-1 + at.
(10)
In this case, R² is simply the square of the ordinary sample correlation between Yt and Xt. Kendall (1954) gives var(R) = T⁻¹(1 + φφ*)/(1 - φφ*). Since R is constrained to lie in the region (-1, 1), if its variance is greater than 1/3 then its distribution cannot have a single mode at zero. The necessary condition is φφ* > (T - 3)/(T + 3). Thus, for example, if T = 20 and φ = φ*, a distribution which is not unimodal at the origin will arise if φ > 0.86, and if φ = 0.9, E(R²) = 0.47. Thus a high value of R² should not, on the grounds of traditional tests, be regarded as evidence of a significant relationship between autocorrelated series. Also a low value of d strongly suggests that there does not
Table 5.1 Regressing two independent random walks.

S:            0–1   1–2   2–3   3–4   4–5   5–6   6–7   7–8
Frequency:     13    10    11    13    18     8     8     5

S:            8–9  9–10  10–11 11–12 12–13 13–14 14–15 15–16
Frequency:      3     3      1     5     0     1     0     1
exist a b such that e in eq. (5) satisfies eq. (7). Thus, the phenomenon we have described might well arise from an attempt to fit regression equations relating the levels of independent time series. To examine this possibility, we conducted a number of simulation experiments which are reported in the following section. 4.
SOME SIMULATION RESULTS
As a preliminary, we looked at the regression Yt = b0 + b1Xt, where Yt and Xt were, in fact, generated as independent random walks each of length 50. Table 5.1 shows values of S = β̂1/S.E.(β̂1), the customary statistic for testing the significance of b1, for 100 simulations. Using the traditional t test at the 5% level, the null hypothesis of no relationship between the two series would be rejected (wrongly) on approximately three-quarters of all occasions. If β̂1/S.E.(β̂1) were distributed as N(0, 1), then the expected value of S would be √(2/π) ≅ 0.8. In fact, the observed average value of S was 4.5, suggesting that the standard deviation of β̂1 is being underestimated by the multiple factor 5.6. Thus, instead of using a t-value of approximately 2.0, one should use a value of 11.2, when attributing a coefficient value to be "significant" at the 5% level. To put these results in context, they may be compared with results reported by Malinvaud (1966). Suppose that Xt follows the process (10) and the error series obeys the model et = φet-1 + at, so that, under the null hypothesis, Yt will also follow this process, the two white noise series involved being independent of one another. In the case φ = φ* = 0.8, it is shown that the estimated variance of β̂1 should be multiplied by a
factor 5.8, when the length of the series is T = 50. The approximations on which this result is based break down as both φ and φ* tend to unity, but our simulation indicates that the estimated variance of β̂1 should be multiplied by (5.6)² ≅ 31.4 when T = 50 and random walks are involved. Our second simulation was more comprehensive. A series Yt was regressed on m independent series Xj,t; j = 1, 2, . . . , m, with m taking values from one to five. Each of the series involved obeyed the same model, the models being (a simulation sketch of these experiments is given after the list):
(i) random walks,
(ii) white noises,
(iii) A.R.I.M.A. (0, 1, 1),
(iv) changes in A.R.I.M.A. (0, 1, 1), i.e., first order moving averages.
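Experiments of this kind are easy to reproduce. The following sketch is our own illustration, not the authors' original code: it regresses an independent series on m independent "explanatory" series of the same type and records the rejection rate of the overall F test at the 5% level, the average Durbin–Watson d, and the average R². The function and variable names are arbitrary, and numpy and scipy are assumed to be available.

```python
# Illustrative Monte Carlo in the spirit of the experiments above; not the
# authors' original code.  Names are arbitrary; numpy and scipy assumed.
import numpy as np
from scipy.stats import f as f_dist

def simulate(m, kind="levels", T=50, n_reps=100, seed=0):
    """Regress one independent series on m independent series of the same kind."""
    rng = np.random.default_rng(seed)
    crit = f_dist.ppf(0.95, m, T - m - 1)       # 5% critical value, overall F test
    reject, dw_all, r2_all = [], [], []
    for _ in range(n_reps):
        shocks = rng.standard_normal((T, m + 1))
        data = np.cumsum(shocks, axis=0) if kind == "levels" else shocks
        y = data[:, 0]
        X = np.column_stack([np.ones(T), data[:, 1:]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        rss = resid @ resid
        r2 = 1.0 - rss / ((y - y.mean()) ** 2).sum()
        f_stat = (r2 / m) / ((1.0 - r2) / (T - m - 1))
        reject.append(f_stat > crit)
        dw_all.append(np.sum(np.diff(resid) ** 2) / rss)   # Durbin-Watson d
        r2_all.append(r2)
    return np.mean(reject), np.mean(dw_all), np.mean(r2_all)

for m in range(1, 6):
    print(m, "levels:", simulate(m, "levels"), "changes:", simulate(m, "changes"))
```

Here "levels" generates random walks and "changes" generates white noise, so the two calls correspond to the first and second rows of each block of Table 5.2 below.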
All error terms were distributed as N(0, 1) and the A.R.I.M.A. (0, 1, 1) series was derived as the sum of a random walk and independent white noise. The results of the simulations, with 100 replications and series of length 50 are shown in table 5.2. It is seen that the probability of accepting H0, the hypothesis of no relationship, becomes very small indeed for m 3 when regressions involve independent random walks. The average R 2 steadily rises with m, as does the average d, in this case. Similar conclusions hold for the A.R.I.M.A. (0, 1, 1) process. When white noise series, i.e., changes in random walks, are related, classical regression yields satisfactory results, since the error series will be white noise and least squares fully efficient. However, in the case where changes in the A.R.I.M.A. (0, 1, 1) series are considered – that is, first order moving average processes – the null hypothesis is rejected, on average twice as often as it should be. It is quite clear from these simulations that if one’s variables are random walks, or near random walks, and one includes in regression equations variables which should in fact not be included, then it will be the rule rather than the exception to find spurious relationships. It is also clear that a high value for R2 or R 2, combined with a low value of d, is no indication of a true relationship. 5.
DISCUSSION AND CONCLUSION
It has been well known for some time now that if one performs a regression and finds the residual series is strongly autocorrelated, then there are serious problems in interpreting the coefficients of the equation. Despite this, many papers still appear with equations having such symptoms and these equations are presented as though they have some worth. It is possible that earlier warnings have been stated insufficiently strongly. From our own studies we would conclude that if a regression equation relating economic variables is found to have strongly autocor-
Table 5.2 Regressions of a series on m independent "explanatory" series. Series are either all random walks or all A.R.I.M.A. (0, 1, 1) series, or changes in these. Y0 = 100, Yt = Yt-1 + at, Y′t = Yt + kbt; Xj,0 = 100, Xj,t = Xj,t-1 + aj,t, X′j,t = Xj,t + kbj,t; aj,t, at, bt, bj,t are sets of independent N(0, 1) white noises. k = 0 gives random walks, k = 1 gives A.R.I.M.A. (0, 1, 1) series. H0, no relationship, is true. Series length = 50, number of simulations = 100, R̄² = corrected R².

                            Per cent times    Average            Average    Per cent
                            H0 rejected(a)    Durbin–Watson d    R̄²         R̄² > 0.7

Random walks
  Levels     m = 1                76               0.32            0.26          5
             m = 2                78               0.46            0.34          8
             m = 3                93               0.55            0.46         25
             m = 4                95               0.74            0.55         34
             m = 5                96               0.88            0.59         37
  Changes    m = 1                 8               2.00            0.004         0
             m = 2                 4               1.99            0.001         0
             m = 3                 2               1.91           -0.007         0
             m = 4                10               2.01            0.006         0
             m = 5                 6               1.99            0.012         0

A.R.I.M.A. (0, 1, 1)
  Levels     m = 1                64               0.73            0.20          3
             m = 2                81               0.96            0.30          7
             m = 3                82               1.09            0.37         11
             m = 4                90               1.14            0.44          9
             m = 5                90               1.26            0.45         19
  Changes    m = 1                 8               2.58            0.003         0
             m = 2                12               2.57            0.01          0
             m = 3                 7               2.53            0.005         0
             m = 4                 9               2.53            0.025         0
             m = 5                13               2.54            0.027         0

(a) Test at 5% level, using an overall test on R².
related residuals, equivalent to a low Durbin–Watson value, the only conclusion that can be reached is that the equation is mis-specified, whatever the value of R2 observed. If such a conclusion is accepted, the question then arises of what to do about the mis-specification. The form of the mis-specification can be considered to be either (i) the omission of relevant variables or (ii) the inclusion of irrelevant variables or (iii) autocorrelated residuals. In general, the mis-specification is best considered to be a combination of these possibilities. The usual recommendations are to either include a lagged dependent variable or take first differences of the variables
involved in the equation or to assume a simple first-order autoregressive form for the residual of the equation. Although any of these methods will undoubtedly alleviate the problem in general, it is doubtful if they will completely remove it. It is not our intention in this paper to go deeply into the problem of how one should estimate equations in econometrics, but rather to point out the difficulties involved. In our opinion the econometrician can no longer ignore the time series properties of the variables with which he is concerned – except at his peril. The fact that many economic “levels” are near random walks or integrated processes means that considerable care has to be taken in specifying one’s equations. One method we are currently considering is to build single series models for each variable, using the methods of Box and Jenkins (1970) for example, and then searching for relationships between series by relating the residuals from these single models. The rationale for such an approach is as follows. In building a forecasting model, the time series analyst regards the series to be forecast as containing two components. The first is that part of the series which can be explained in terms of its own past behaviour and the second is the residual part [at in eq. (4)] which cannot. Thus, in order to explain this residual element one must look for other sources of information–related time series, or perhaps considerations of a nonquantitative nature. Hence, in building regression equations, the quantity to be explained is variation in at – not variation in the original series. This study is, however, still in its formative stages. Until a really satisfactory procedure is available, we recommend taking first differences of all variables that appear to be highly autocorrelated. Once more, this may not completely remove the problem but should considerably improve the interpretability of the coefficients. Perhaps at this point we should make it clear that we are not advocating first differencing as a universal sure-fire solution to any problem encountered in applied econometric work. One cannot propose universal rules about how to analyse a group of time series as it is virtually always possible to find examples that could occur for which the rule would not apply. However, one can suggest a rule that is useful for a class of series that very frequently occur in practice. As has been noted, very many economic series are rather smooth, in that the first serial correlation coefficient is very near unity and the other low-order serial correlations are also positive and large. Thus, if one has a small sample, of say twenty terms, the addition of a further term adds very little to the information available, as this term is so highly correlated with its predecessor. It follows that the total information available is very limited and the estimates of parameters associated with this data will have high variance values. However, a simple calculation shows that the first differences of such a series will necessarily have serial correlations that are small in magnitude, so that a new term of the differenced series adds informa-
tion that is almost uncorrelated to that already available and this means that estimates are more efficient. One is much less likely to be misled by efficient estimates. The suggested rule perhaps should be to build one’s models both with levels and also with changes, and then interpret the combined results so obtained. As an example (admittedly extreme) of the changes that can occur in one’s results from differencing, Sheppard (1971) regressed U.K. consumption on autonomous expenditure and mid-year money stock, both for levels and changes for the time period 1947–1962. The regression on levels yielded a corrected R2 of 0.99 and a d of 0.59, whilst for changes these quantities were -0.03 and 2.21 respectively. This provides an indication of just how one can be misled by regressions involving levels if the message of the d statistic is unheeded. It has been suggested by a referee that our results have relevance to the structural model – unrestricted reduced form controversy, the feeling being that the structural model is less vulnerable to the problems we have described since its equations are in the main based on well developed economic theory and contain relatively few variables on the righthand side. There is some force to this argument, in theory at least, although we believe that in practice things are much less clear-cut. When considering this problem the question immediately arises of what is meant by a good theory. To the time series analyst a good theory is one that provides a structure to a model such that the errors or residuals of the fitted equations are white noises that cannot be explained or forecast from other economic variables. On the other hand, some econometricians seem to view a good theory as one that appears inherently correct and thus does not need testing. We would suggest that in fact most economic theories are insufficient in these respects as even if the variables to be included in a model are well specified, the theory generally is imprecise about the lag structure to be used and typically says nothing about the time-series properties of the residuals. There are also data problems in that the true lags need not necessarily be integer multiples of the sampling interval of the available data and there will almost certainly be added measurement errors to the true values of the variables being considered. All of these considerations suggest that a simpleminded application of regression techniques to levels could produce unacceptable results. If one does obtain a very high R2 value from a fitted equation, one is forced to rely on the correctness of the underlying theory, as testing the significance of adding further variables becomes impossible. It is one of the strengths of using changes, or some similar transformations, that typically lower R2 values result and so more experimentation and testing can be contemplated. In any case, if a “good” theory holds for levels, but is unspecific about the time-series properties of the residuals, then an equivalent theory holds for changes so that nothing is lost by model
building with both levels and changes. However, much could be gained from this strategy as it may prevent the presentation in econometric literature of possible spurious regressions, which we feel is still prevalent despite the warnings given in the text books about this possibility.
REFERENCES
Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco).
Box, G.E.P. and P. Newbold, 1971, Some comments on a paper of Coen, Gomme and Kendall, J. R. Statist. Soc. A 134, 229–240.
Granger, C.W.J., 1966, The typical spectral shape of an economic variable, Econometrica 34, 150–161.
Granger, C.W.J. and P. Newbold, 1973, Some comments on the evaluation of economic forecasts, Applied Economics 5, 35–47.
Kendall, M.G., 1954, Exercises in theoretical statistics (Griffin, London).
Malinvaud, E., 1966, Statistical methods of econometrics (North-Holland, Amsterdam).
Newbold, P. and C.W.J. Granger, 1974, Experience with forecasting univariate time series and the combination of forecasts, J. R. Statist. Soc. A 137, forthcoming.
Reid, D.J., 1969, A comparative study of time series prediction techniques on economic data, Ph.D. Thesis (University of Nottingham, U.K.).
Sheppard, D.K., 1971, The growth and role of U.K. financial institutions 1880–1962 (Methuen, London).
CHAPTER 6
Some Properties of Time Series Data and Their Use in Econometric Model Specification* C. W. J. Granger
1.
INTRODUCTION
It is well known that time-series analysts have a rather different approach to the analysis of economic data than does the remainder of the econometric profession. One aspect of this difference is that we admit more readily to looking at the data before finally specifying a model; in fact, we greatly encourage looking at the data.Although econometricians trained in a more traditional manner are still very much inhibited in the use of summary statistics derived from the data to help model selection, or identification, it could be to their advantage to change some of these attitudes. In fact, I have heard rumors that econometricians do data-mine in the privacy of their own offices and I am merely suggesting that some aspects, at least, of this practice should be brought out into the open. The type of equations to be considered are generating equations, so that a simulation of the explanatory side should produce the major properties of the variable being explained. If an equation has this property, it will be said to be consistent, reverting to the original meaning of this term. As a simple example of a generally non-consistent model, suppose that one has yt = a + bxt + et, where yt is positive, but xt is unbounded in both directions. A more specific example is when yt is exponentially distributed and xt normally distributed. The only case when such a model is consistent is when b is zero. Although it would be ridiculous to suggest that econometricians would actually propose such models, it might be noted that two models that appear in the finance literature have Dt = a + bDt-1 + cEt + et, * Journal of Econometrics, 16, 1981, 121–130.
and Pt = d + eDt + fEt + et, where Dt represents dividends, Et is earnings and Pt is share price. Note that Pt and Dt are necessarily positive, but that Et can be both positive and negative, as Chrysler and other companies can testify. A further example arises from consideration of the question of whether or not a series is seasonal. For the purposes of this discussion, a time series will be said to be seasonal if its spectrum contains prominent peaks round the seasonal frequencies, which are 2pj/12, j = 1, 2, . . . , 6, if the data are recorded monthly. In practice, this will just mean that a plot of the series through time will display the presence of a fairly regular twelve-month repeating shape. Without looking at the data, one may not know if a given series is seasonal or not and economic theory by itself may well not be up to the task of deciding. If now we look at a group of variables which are to be modelled, how does the presence, or lack, of seasonality help with model specification? Considering just single-equation models, which are suitable for the simple point to be made, of the form yt = a + bxt + czt + et,
(1.1)
then it would clearly be inconsistent to require (i) if yt were seasonal, xt, zt not seasonal that et be white noise (or non-seasonal), or (ii) generally, if yt were not seasonal, but just one of xt or zt was seasonal and that et be white noise or AR(1) or any non-seasonal model. Clearly, we may have information about the time-series properties of the data, in terms of spectral shapes, that will put constraints on the form of models that can be built or proposed. As the point is a very simple one, and ways of dealing with the seasonality are well understood, or are at least currently thought to be so, this case will not be pursued further. There is, however, a special case that is worth mentioning at this point. Suppose that yt is not seasonal, but that both xt and zt are seasonal, then is model (1.1) a possible one, with et not seasonal? In general, the answer is no, but if it should happen that the term bxt + czt is non-seasonal, then the model (1.1) is not ruled out. This could only happen if there is a constraint f = c/b such that the seasonal component in xt is exactly the reverse of f times the seasonal component in zt. A simple case where this would occur is if the seasonal components is xt and zt were identical and f takes the value minus one. This does at first sight appear to be a highly unlikely occurrence, but an example will be given later, in a very different context, where such cancellations could occur.
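As a rough, purely illustrative way of "looking at the data" in this sense (it is not a procedure from the paper), one can inspect the periodogram of a monthly series for mass concentrated near the seasonal frequencies 2πj/12, j = 1, . . . , 6, before deciding whether a specification such as (1.1) can be consistent. In the sketch below, numpy is assumed and the peak-ratio summary, function name, and simulated series are our own arbitrary choices.

```python
# A rough seasonality check via the periodogram; our own sketch, not the paper's.
import numpy as np

def seasonal_peak_ratio(x, period=12):
    """Periodogram mass at seasonal frequencies relative to the remainder."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x)) * 2 * np.pi          # radians per observation
    pgram = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    seasonal = np.zeros(len(freqs), dtype=bool)
    for j in range(1, period // 2 + 1):
        seasonal |= np.abs(freqs - 2 * np.pi * j / period) < np.pi / len(x)
    return pgram[seasonal].mean() / pgram[~seasonal].mean()

rng = np.random.default_rng(0)
t = np.arange(240)                                       # twenty years of monthly data
seasonal_series = 2 * np.cos(2 * np.pi * t / 12) + rng.standard_normal(240)
flat_series = rng.standard_normal(240)
print(seasonal_peak_ratio(seasonal_series), seasonal_peak_ratio(flat_series))
```

A ratio far above one points to prominent seasonal peaks in the spectrum, and hence to the kind of constraint on (1.1) discussed above.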
It is obvious that the spectrum of one side of a generating equation, such as (1.1), must be identical to the spectrum of the other side. If the spectrum of one side has a distinctive feature, it must be reproduced in the spectrum of the other side, obvious examples being periodic components, such as the seasonal and trend terms. The majority of this paper will be concerned with discussions of this point in connection with a generalized version of the trend term. 2.
INTEGRATED SERIES AND FILTERS
To proceed further, it is necessary to introduce a class of time series models that have been popular in parts of electrical and hydraulic engineering for some years, but which have so far had virtually no impact in econometrics. Suppose that xt is a zero-mean time series generated from a white noise series et by use of the linear filter a(B), where B is the backward operator, so that xt = a(B)e t ,
(2.1)
with B^k et = et-k. Further suppose that
a(B) = (1 - B)^(-d) a′(B),    (2.2)
where a¢(z) has no poles or roots at z = 0. Then xt will be said to be “integrated of order d” and denoted xt ~ I (d). Further, defining d
xt¢ = (1 - B) xt = a¢(B)e t , then xt¢ ~ I (0) from the assumed properties of a¢(B). a(B) will be called an “integrating filter of order d.” If a¢(B) is the ratio of a polynomial in B of order m divided by a polynomial of order l, then xt will be ARIMA (l, d, m) in the usual Box and Jenkins (1970) notation. However, unlike the vast majority of the literature on ARIMA models, the class of models here considered allow the order of integration, d, to be possibly non-integer. Clearly, not constraining d to be an integer generalizes the class of models before considered, but to be relevant, the generalization has to be shown to be potentially important. Some earlier accounts of similar models may be found in Hipel and McLeod (1978), Lawrence and
Kottegoda (1977), Mandelbrot and Van Ness (1968) and Mandelbrot and Taqqu (1979), although some details in the form of the models are different than those here considered, which were first introduced in Granger and Joyeux (1981). Some of the main properties of these models may be summarized as follows: The spectrum of xt, generated by (2.1) and (2.2), may be thought of as
fx(ω) = |1 - z|^(-2d) |a′(z)|²,   z = e^(iω),
if var(et) = 1, from analogy with the usual results from filtering theory. It is particularly important to note that for small ω,
fx(ω) = cω^(-2d).    (2.3)
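As a small numerical aside (our own illustration, not part of the paper), the weights bj of the integrating filter (1 - B)^(-d) can be computed by the recursion bj = bj-1(d + j - 1)/j and checked against the hyperbolic decay rate j^(d-1) quoted in the next paragraph. Names in the sketch are arbitrary and numpy is assumed.

```python
# Weights of (1 - B)^(-d) decay hyperbolically, not exponentially; our sketch.
import numpy as np

def frac_weights(d, n):
    """First n coefficients of (1 - B)^(-d) = sum_j b_j B^j."""
    b = np.empty(n)
    b[0] = 1.0
    for j in range(1, n):
        b[j] = b[j - 1] * (d + j - 1) / j
    return b

d = 0.3
b = frac_weights(d, 5000)
j = np.arange(1, 5000)
ratio = b[1:] / j ** (d - 1)   # should settle near a constant (1/Gamma(d), about 0.33)
print(ratio[[10, 100, 1000, 4000]])
```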
It was shown in Granger and Joyeux (1981) that the variance of xt increases as d increases, and that this variance is infinite for d ≥ 1/2, but is finite for d < 1/2. Further, writing
xt = Σ_{j=0}^∞ bj et-j,
and denoting ρj = correlation(xt, xt-j), then, for j large,
ρj = A1 j^(2d-1),   d < 1/2, d ≠ 0,
and
bj = A2 j^(d-1),   d < 1, d ≠ 0,
where A1 and A2 are appropriate constraints. When d = 0, both rj and bj decrease exponentially in magnitude as j increases, but with d π 0, it is seen that these quantities decline much slower. Because of this property the integrated series, when d π 0, have been called “long-memory”. For long-term forecasting, the low frequency component is of paramount importance and (2.3) shows that if d is not an integer, this component cannot be well approximated by an ARIMA (l, d, m) model with integer d and low order for l and m. It is not clear at this time if integrated models with non-integer d occur in practice and only extensive empirical research can resolve this issue. However, some aggregation results presented in Granger (1980) do suggest that these models may be expected to be relevant for actual economic variables. It is proved there, for example, that if xjt, j = 1, . . . , n, are set of independent series, each generated by an AR(1) model, so that xjt = ajxj,t-1 + ejt,
j = 1, . . . , N,
where the ejt are independent, zero-mean white noise and if the aj's are values independently drawn from a beta distribution on (0, 1), where
dF(a) = (2/B(p, q)) a^(2p-1) (1 - a²)^(q-1) da,   0 ≤ a ≤ 1, p > 0, q > 0,    (2.4)
then, if x̄t = Σ_{j=1}^N xj,t, for N large,
x̄t ~ I(1 - q/2).    (2.5)
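A crude simulation of this aggregation result, under assumed parameter values (p = 1, q = 1.5, so the aggregate should behave roughly like an I(0.25) series), is given below. It is our own illustration rather than the paper's computation; the slow decay of the aggregate's autocorrelations is the point of interest.

```python
# Aggregating many AR(1) series with beta-distributed coefficients; our sketch.
import numpy as np

def aggregate_ar1(n_series=500, T=3000, p=1.0, q=1.5, seed=0):
    rng = np.random.default_rng(seed)
    a = np.sqrt(rng.beta(p, q, size=n_series))    # a^2 ~ Beta(p, q), as in (2.4)
    x = np.zeros((T, n_series))
    eps = rng.standard_normal((T, n_series))
    for t in range(1, T):
        x[t] = a * x[t - 1] + eps[t]
    return x.sum(axis=1)

def autocorr(x, lags):
    x = x - x.mean()
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in lags])

xbar = aggregate_ar1()
print(autocorr(xbar, [1, 10, 50, 100]))           # note how slowly these decay
```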
The shape of the distribution from which the a’s are drawn is only critical near 1 for this result to hold. A more general result arises from considering xjt is generated by xjt = ajxj,t-1 + yj,t + bjWt + ejt,
(2.6)
where the series yj,t, Wt and ejt are all independent of each other for all j, gjt are white noise with variances s 2j, yj,t has spectrum fy(w,qj) and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system and the various parameters a, qj, b and s 2 are all assumed to be drawn from independent populations and the distribution function for the a’s is (2.4). Thus, the xj,t are generated by an AR(1) model, plus a independent causal series yj,t and a common factor causal series Wt. With these assumptions, it is shown in Granger (1980) that (i) xt ~ I (dx ) where dx is the largest of the three terms (1 - q/2 + dy), l - q + dw and (1 - q)/2, where y¯t ~ I(dy), Wt ~ I(dw), and (ii) if a transfer function model of the form xt = a1 (B) yt + a 2 (B)Wt + et is fitted, then both a1(B) and a2(B) are integrating filters of order 1 - q. It should be noted from (2.6) that, if aj < 1 then the spectrum of xj,t is
fx,j(ω) = |1 - aj z|^(-2) [ fy,j(ω) + bj² fW(ω) + fe,j(ω) ],
so that if one assumes that xj,t ~ I(0) it necessarily follows that yj,t and Wt are both I(0). In Granger (1980) it was shown that integrated models may arise from micro-feedback models and also from large-scale dynamic econometric models that are not too sparse. Thus, at the very least, it seems that integrated series can occur from realistic aggregation situations, and so do deserve further consideration.
3. THE ALGEBRA OF INTEGRATED SERIES AND ITS IMPLICATIONS
The algebra of integrated series is quite simple. If xt ~ I(dx) and a(B) is an integrating filter of order d′, then a(B)xt will be I(dx + d′). Thus, dx is unchanged if xt is operated on by a filter of order zero. Further, if xt ~ I(dx), yt ~ I(dy) then zt = bxt + cyt ~ I(max(dx, dy)) in general. This result is proved by noting that the spectrum of zt is
fz(ω) = b² fx(ω) + c² fy(ω) + 2bc[cr(ω) + c̄r(ω)],    (3.1)
where cr(ω) is the cross-spectrum between xt and yt and has the property that |cr(ω)|² ≤ fx(ω)fy(ω). For small ω,
fx(ω) = A3ω^(-2dx)   and   fy(ω) = A4ω^(-2dy),
and clearly the term with the largest d value will dominate at low frequencies. There is, however, one special case where this result does not hold, and this will be discussed in the following section. Suppose now that one is considering the relationship between a pair of series xt and yt, and where dx and dy are known, or at least have been estimated from the data. For the moment, it will be assumed that dx and dy are both non-integer. If a model of the form b(B) yt = c(B) xt + h(B)e,
(3.2)
is considered, where all of the polynomials are of finite order, and will usually be of low order, and et is white noise, independent of xt, then this model is consistent, from consideration of spectral shapes at low frequencies, only if dx = dy. If one knows that dx < dy then to make the model consistent, either c(B) must be an integrating filter of order dx - dx or h(B) is an integrating filter of order dx, or both. In either case, the polynomials cannot be of finite order. Similarly, if dx > dy, the necessarily c(B) must be an integrating filter of order dy - dx, and so cannot be of finite order. As an extreme case of model (3.2) inconsistency, suppose that dx < –12 , so variance of xt is finite, but 1 > dy > –12 , so variance of yt is infinite. Using just finite polynomials in the filters, clearly yt cannot be explained by the model, if variance et is finite, which is generally taken to be true. Similarly if dy < –12 but 1 > dx > –12 , then one is attempting to explain a finite variance series by a infinite variance one. This same problem occurs when the d’s can take integer values, of course. Suppose that one knows that change in employment has d = 0, and that level of production has d = 1, then one would not expect to build a model of the form Change in employment = a + b (level of production) + f (B)e t . However, replacing b by b(1 - B) would produce a consistent model, in the sense of this term being used here. Only with integer d values can a
filter, which is a polynomial in B of finite length, be applied to a series to reduce the order of integration to zero. Naturally, similar constraints can be derived for models involving more than one explanatory variable, although these constraints can become rather complicated if many variables are involved. As an illustration, suppose one has a single-equation model of the form b(B) yt = c(B) xt + g(B)Zt + h(B)e t ,
(3.3)
where et is white noise independent of xt and zt and dx, dy and dz are assumed known and non-integer. If all of the polynomials are of finite order, then necessarily dy = max(dx,dz). If this condition does not hold then, generally, at least one of the polynomials has to correspond to an integrating filter and hence to be of infinite order. When all of the d’s are integer, rather simpler rules apply. However, care has to be taken in the model specification so that infinite variance variables are not used to explain finite variance variables, or vice versa. In practice, it is still not uncommon to see this type of misspecification in published research. 4.
CO-INTEGRATED SERIES
This section considers a very special case where some of the previously stated rules do not hold. Although it may appear to be very special, it also seems to be potentially very important. Start with model (3.3) and ask again, is it possible for dy < max(dx, dz)? For convenience, initially the no-lag case c(B) = c, g(B) = g is considered, so that
b(B)yt = cxt + gZt + h(B)et,
(4.1)
where dy > 0, h(B)et is I(dy) and var(e) = 1. The spectrum of the righthand side will be
{c² fx(ω) + g² fz(ω) + gc[cr(ω) + c̄r(ω)]} + |h(z)|²/2π,    (4.2)
where now cr(w) is the cross-spectrum between xt and yt. The special case of interest has: (i) fx(w) = a 2fz(w), w small, so dx = dz, (ii) cr(w) = afz(w), w small, so that the coherence C(w) = 1 and the phase f(w) = 0 for w small. A pair of series obeying (i) and (ii) will be called co-integrated. If further, g = -ca, the part of the spectrum (4.2) inside the main brackets will vanish at low frequencies and so a model of the form (4.1) will be appropriate even when dy < max(dx,dz) in this special case. It is seen that in this case the difference between two co-integrated series can result in an I(0) series. A slightly more general result arises from considering xt = zt + qt, where dx = dx, dq < dx, and zt, qt are independent, then
xt and zt will be co-integrated, but the difference xt - zt will be I(dq). It should be noted that if a pair of series xt, zt are co-integrated, then so will be a(B)xt, b(B)zt where a(B), b(B) are any pair of finite lag filters; thus, in particular if xt and zt are co-integrated then so will be xt and zt-k for all k, although the approximation that the phase is zero at low frequencies may become unacceptable for large values of k. Co-integrated pairs of series may arise in a number of ways, for example: (i) If xt is the input and zt the output of a black box of limited capacity, or of finite memory, the xt, zt will be co-integrated. For instance the series might be births and deaths in an area with no immigration or emigration, cars entering and leaving the Lincoln Tunnel, patients entering and leaving a maternity hospital, or houses started and houses completed in some region. For these examples to hold, it is necessary to have dx > 0. (ii) Series for which a market ensures that they cannot drift too far apart, for example interest rates in different parts of a country or gold prices in London and New York. (iii) If fn,h(Jn) is an optimal forecast of xn+h based on a proper information set Jn, so that Jn includes xn-j, j 0, then xn+h and fn,h are co-integrated if dx > 0. Thus, if “unanticipated money supply”, xn+1 - fn,1, is used in a model, this can be appropriate if the variable being explained has dx > 0. It should be emphasized that for this result to hold fn,h must be on optimal forecast and, if dx is not an integer, then this means that in theory the forecast has to be based on an infinity of lagged x’s. If 1 > dx > 0, but an ARIMA (l,d,m) model is used to form forecasts, with integer d, the forecasts will not be optimal and the series and its forecast will not be co-integrated. There obviously are pairs of economic series, such as prices and wages, which may or may not be co-integrated and a decision on this has to be determined by an appropriate theory or an empirical investigation. It might be interesting to undertake a wide-spread study to find out which pairs of economic variables are co-integrated. In the frequency domain, the conditions for co-integration of two series state that the two series move in a similar way, ignoring lags, over the long swings of the economy and in “trend”, although the idea of trend is rarely carefully defined and will here mean just the very low frequency component. Although the two series may be unequal in the short term, they are tied together in the long run. The use of the difference between two series to explain the change in a series has been suggested by Sargan (1964) and Hendry (1978) and implemented in a number of models, particularly in Britain. An example is a model of the form
a(B)Dyt = b(B)D xt + b ( yt -1 - xt -1 ) + et , and the use of the term b(yt-1 - xt-1) has been found in some cases to produce a better model, in terms of goodness of fit. The form of the model has an important property. The difference equation without the innovations et, a(B)Dyt = b(B)D xt + b ( yt -1 - xt -1 ), is such that if xt and yt each tend to equilibrium, so that Dxt Æ 0 and Dyt Æ 0 then xt and yt tend to the same equilibrium level. When the stochastic elements et are present, equilibrium becomes much less meaningful, and is replaced by xt and yt tending to having identical means, assuming the means exist. However, if the d values of xt is greater than de, this generating model ensures that xt and yt will be co-integrated. They, therefore, will move closely together in the long run, which is possible the property that most naturally replaces the concept of equilibrium for stochastic processes. It is important to note that this property does not hold if dx = dy = de, as then the coherence between xt and yt need not be high at low frequencies, depending on the relative variances of et and yt. 5.
CONCLUSION
Having, I hope, made a case for the prior analysis of time series data before model specification inter-relating the variables, it has now to be admitted that the practical implementation of the rules suggested above is not simple. Obviously, one can obtain satisfactory estimates of the spectrum of a series, but it is not clear at this time how d values should be estimated. In the references given earlier, a variety of ways of estimating d are suggested, and a number of sensible modifications to these can easily be proposed, but the statistical properties of these d estimates need to be established. It is possible that too much data is required for practical use of the specification rules or that d values for real economic variables are all integers. Only further analysis, both theoretical and empirical can answer these questions.
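One conceivable spectral estimator of d, offered purely as an illustration of what such a procedure might look like rather than as a method proposed here, regresses the log periodogram on log frequency over the lowest frequencies and reads d off the slope, since (2.3) implies a slope of roughly -2d at those frequencies. The cutoff and all names in the sketch are arbitrary assumptions, and numpy is assumed to be available.

```python
# A crude spectral estimate of the order of integration d; illustrative only.
import numpy as np

def estimate_d(x, cutoff=0.1):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x)) * 2 * np.pi
    pgram = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    keep = (freqs > 0) & (freqs <= cutoff * np.pi)
    slope = np.polyfit(np.log(freqs[keep]), np.log(pgram[keep]), 1)[0]
    return -slope / 2.0

rng = np.random.default_rng(2)
print(estimate_d(np.cumsum(rng.standard_normal(2000))),   # random walk: d near 1
      estimate_d(rng.standard_normal(2000)))              # white noise: d near 0
```

The statistical properties of estimators of this general kind are exactly the open question raised above.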
REFERENCES Box, G. E. P. and G. M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA). Davidson, J. E. H., D. F. Hendry, F. Srba and S. Yeo, 1978, Econometric modeling of the aggregate time-series relationship between consumer’s expenditure and income in the United Kingdom, Economic Journal 88, 661–692.
Granger, C. W. J., 1980, Long-memory relationships and the aggregation of dynamic models, Journal of Econometrics 14, 227–238. Granger, C. W. J. and R. Joyeux, 1981, An introduction to long-memory time series and fractional differencing, Journal of Time Series Analysis V.1. Hipel, W. H. and A. I. McLeod, 1978, Preservation of the rescaled adjusted range, Part 1, Water Resources Research 14, 491–518. Lawrence, A. J. and N. T. Kottegoda, 1977, Stochastic modeling of river-flow time series, Journal of the Royal Statistical Society A140, 1–47. Mandelbrot, B. B. and J. W. Van Ness, 1968, Fractional Brownian motions, fractional noises and applications, Siam Review 10, 422–437. Mandelbrot, B. B. and M. S. Taqqu, 1979, Robust R/S analysis of long-run serial correlation, Research report RC 7936 (IBM, Yorktown Heights, NY). Sargan, J. D., 1964, Wages and prices in the United Kingdom: A study in econometric methodology, in: P. E. Hart, G. Mills and J. K. Whitacker, eds., Econometric analysis for national economic planning (Butterworths, London).
CHAPTER 7
Time Series Analysis of Error-Correction Models* C. W. J. Granger and A. A. Weiss
1.
INTRODUCTION
The error-correction model considered in the first sections of this paper takes the form
(1 - B)^d a1(B) yt = m1 + β(yt-1 - Axt-1) + (1 - B)^d b1(B) xt + c1(B)e1t,    (1)
(1 - B)^d a2(B) xt = m2 + c2(B)e2t,    (2)
where e1t, e2t are a pair of independent, zero-mean white noise series with finite variances, so that E[ejtejs] = 0, t π s, j = 1, 2, m1, m2 are constants, B is the lag operator so that Bkzt = zt-k, a1(B), b1(B), etc. are finite polynomials in B with the property that a1(1) π 0, b1(1) π 0, etc. and a1(0) = a2(0) = c1(0) = c2(0) = 1. In the main body of the paper, d will take either the values 0 or 1, so that is d = 0 the model is on levels of xt, yt, if d = 1 the model uses differenced data except in the error-correcting term b(yt-1 Axt-1). In an appendix to the paper, other values of d are briefly considered, including fractional values. The model in (1), (2) has a one-way causal structure, xt causing yt+1 but yt not causing xt+1. By allowing b1(0) to be non-zero, simultaneity between xt and yt is a possibility. It might be noted that there is little point in including terms such as b 2(yt-2 - Axt-2) in (1) as the resulting model can always be rewritten in the present form. It is assumed that (1 - B)dxt, (1 - B)dyt are stationary. The main purpose of error-correction models is to capture the timeseries properties of variables, through the complex lag-structures allowed, whilst at the same time incorporating an economic theory of an equilibrium type. To see this, consider the case when d = 1 and suppose that for all t > T, e1t = e2t = 0, and with m1 = m2 = 0. Then eventually, after short-term dynamics have worked themselves out, (1 - B)xt = (1 - B)yt * Studies in Econometrics: Time Series and Multivariate Statistics, edited by S. Karlin, T. Amemiya, and L. A. Goodman, Academic Press, New York, 1983, 255–278.
= 0, and yt = Axt, so the variables have an equilibrium relationship. If the constants m1, m2 are non-zero, then eventually xt, yt will be linear trends but still related by yt = Axt. If d = 0, the equilibria are of a rather trivial kind: xt = constant, yt = constant. By using error-correction models, a link is formed between classical econometric models that rely heavily on such theory but do not utilize a rich lag-structure. They may be thought of as capturing the true dynamics of the system whilst incorporating the equilibrium suggested by economic theory. This paper will consider the time-series properties of series generated by models such as (1), (2) and by various generalizations of this model. It will be assumed that m1 = m2 = 0. A time-series identification test will be proposed for series obeying such models and empirical examples presented. In what follows, a series xt will be called integrated of order d, denoted xt ~ I(d), if it has a univariate ARIMA (p, d, q) model of the form d
(1 - B) g p ( B) x t = hq ( B) a t where gp(B), hq(B) are finite polynomials in B of orders p, q respectively, and at is white noise. In particular, it follows that if xt ~ I(d), then (1 B)dxt ~ I(0). If xt ~ I(d), then at low frequencies, the spectrum of xt will take the form -d
A(1 - cosw ) Aw -2 d and then gives a distinctive characteristic of the series that has to be reproduced by any model for xt. A number of empirical papers have used error-correction models, including Sargan (1964), Davidson, Hendry, Srba and Yeo (1978), Hendry and von Ungern Sternberg (1980), Currie (1981), and Dawson (1981). 2.
THE ONE-WAY CAUSAL MODEL
Consider the model (1), (2), the first equation of which may be written as a1 ( B) yt = a 2 ( B) x t + c1 ( B)e1t d
(3)
with the notation α1(B) = (1 - B)^d a1(B) - βB and α2(B) = (1 - B)^d b1(B) - βAB. Eliminating xt from (3) using (2) gives
(1 - B) a 2 ( B)a1 ( B) yt = a 2 ( B) c 2 ( B)e 2 t + c1 ( B) a 2 ( B)(1 - B) e1t (4) As, if d = 0 or 1, the right-hand side can always be written as a finite moving average, it follows that yt ~ I(d) regardless of the value of b in (1). If b π 0, this follows from (4), if b = 0 from (1) since (1 - B)dxt ~ I(0). However, if d = 1, the value of b does have a dramatic impact on the low frequency component of yt. If b π 0, it is seen from (4), essentially replac-
ing B by eiw and letting w be small so that the term (1 - eiw) is negligible, that when considered in the frequency domain, the second term on the right hand side of (4) is negligible. Thus, the low frequency component of yt is determined largely by the low frequency component of e2t, which, through (2) also determines the low frequency component of xt. However, if b = 0, substitution for xt from (2) into (1), indicates that the low frequency component of both e1t and e2t will jointly determine the low frequency component of yt. Now consider the series zt = yt - Axt which has the univariate model a 2 ( B)a1 ( B) zt = c 2 ( B)[ b1 ( B) - Aa1 ( B)]e 2 t + c1 ( B) a 2 ( B)e1t.
(5)
It follows immediately that zt ~ I(0) even if xt, yt are both I(1). As this is rather a special property it was given a specific name in Granger (1981) as: Definition: If xt ~ I(d), yr ~ I(d) and there exists a constant A such that zt = yt - Axt ~ I(0), then xt , yt will be said to be co-integrated. A will be unique. One reason why this property is special is that if d = 1, then both xt and yt will have infinite variance but there exists a constant A so that zt has finite variance. In general, for any pair of infinite variance series xt Cyt will have infinite variance for all C. It has been shown that if xt, yt are generated by (1), (2) with d = 1, then these series are necessarily cointegrated. Equally, if xt, yt are not co-integrated, then an error-correction model with d = 1 would be inappropriate. This is clear because if xt, yt were not co-integrated, the left-hand side of (1) would have finite variance but the error-correction term on the right-hand side of this equation would have infinite variance, and thus the model would be obviously mis-specified. If d = 1, it easily follows from the definition that the differenced series (1 - B)xt, (1 - B)yt will have a coherence of one at very low frequencies and zero phase at these frequencies. Thus, Axt and yt will have identical low frequency components but may differ over higher frequencies. As this is a long-run property, it can be thought of as a natural generalization for integrated stochastic processes of one form of the equilibrium property considered by economists. It also follows immediately from the definition that if xt, yt are co-integrated, then so will series produced from them by application of linear transformations and finitelength filters, so that for example x¢t = a + bxt-s, y¢t = c + fyt-k will be cointegrated for any finite, not too large, values of s and k and for any constants a, b, c, f. When d = 0, the model is much less interesting. Obviously, if xt and yt are both I(0) then clearly yt - Axt is I(0) for any A. Suppose that xt and yt are related as in (3), then this model can always be written in the form
(1) with d = 1 but xt will be given by (2) with d = 0. Thus, for I(0) series the error-correction model has no special implications. Returning to the case where xt and yt are both I(1), it is interesting to ask what model a time-series analyst is likely to build, given enough data. Looking at the series individually, differencing will be suggested and then bivariate models of the differenced series considered. Presumably, the model identified will then be a1 ( B)(1 - B) yt = a 2 ( B)(1 - B) x t + c1 ( B)(1 - B)e1t derived from (3), plus (1), assuming one-way causality is determined and polynomials of the correct order are identified. The model is overdifferenced but it is quite likely the fact that the moving-average term c1(B)(1 - B)e1t has a unit root may not be realized, given estimation difficulties in this case, especially if one is not looking for it. When an error correction model is a possibility it would be convenient to have an easy way of identifying this. Looking at the coherence function for low frequencies is neither cheap nor easy due to estimation problems. The obvious method is to perform the regression yt = m + Axt + ut giving Â, and then asking if zt = yt - Âxt is I(0). This test, and generalizations of it, are discussed in Section VI. 3. MULTI-COMPONENT CO-INTEGRATED SERIES An obvious and potentially important generalization is when yt and xt are co-integrated, as in equation (1), but where xt has several distinguishable and observable components, so that xt = x1t + gx2t, for example. The error correction term in (1) now becomes, in the two component case, b(yt-1 A1x1,t-1 - A2x2,t-1). If y1 ~ I(1), then a necessary condition for both components to belong in the error-correction term is x1t and x2t ~ I(1). If, say x1t ~ I(d), with d > 1 then the error-correction term cannot be I(0), if d < 1 then x1t cannot contribute to the coherence, at low frequencies, between (1 - B)xt and (1 - B)yt.Thus, it is supposed that yt, x1t, x2t are all I(1). Denoting the w-frequency component of yt by yt(w), and similarly for other series, for xt and yt to be co-integrated a sufficient condition is yt (w ) = A1 x1t (w ) + A2 x 2 t (w ) for small w and some constants A1 and A2. Multiplying this equation by yt (w ) and taking expectations, and similarly using x1t (w ) and x 2 t (w ) and expectations gives three equations. Solving out for A1 and A2 gives a relationship between the spectra and cross-spectra of the series at low frequencies. A little algebra then produces the following relationship between coherences at low frequencies:
1 - C12² - C1y² - C2y² + 2 C12 C1y C2y = 0    (6)
where C = coherence between x1t, x2t, at low frequencies, and C = coherence between xjt, yt, j = 1, 2, at low frequencies. Some consequences of (6) are: (i) If any one pair of the series yt, x1t, x2t are co-integrated, then the remaining pairs must be equally related at low frequencies, e.g., if C12 = 1, then C1y = C2y. (ii) If any two pairs are co-integrated, then the remaining pair must also be co-integrated, as if C1y = C2y = 1, then C12 = 1. (iii) Neither pair yt, x1t or yt, x2t need be co-integrated. For example, 2 if C12 = 0, then (6) gives merely 1 = C 1y + C 22y. Thus, if yt and xt are co-integrated it does not necessarily mean that yt is cointegrated with any component of xt. This last property does make a search for co-integrated series more difficult, particularly if one of the necessary components is not observed and no satisfactory proxy is available. For example, if yt is the output price series for some industry, a co-integrated series could have as components, input prices, wages, and possibly a productivity measure, provided all series are I(1). One cannot test for co-integratedness in pairs, but one has to look at zt = yt - Atx1t - A2x2t and see if zt ~ I(0). Clearly, if one vital component is missing, then co-integration may not be determined. The existence of a relevant theory to indicate a full-list relevant components is obviously particularly useful. The model can be further generalized to have a vector x t, with several components xjt, causing a vector y t with components yjt. One or more of the equations for the y components could contain a lagged zt term, where zt = Sfjyjt - SAjxjt. Discovering the correct specification of zt, such that all yjt, xjt are I(1) but zt is I(0) is likely to be rather difficult without the use of a specific, and correct, equilibrium theory.
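The check described above (regress yt on the candidate components and then ask whether the residual zt = yt - A1x1t - A2x2t behaves like an I(0) series) can be sketched as follows. This is our own illustration with made-up data-generating coefficients (2 and -1) and a crude first-autocorrelation summary, not the authors' procedure; numpy is assumed.

```python
# Regression-residual check for co-integration with two components; our sketch.
import numpy as np

def first_autocorr(u):
    u = u - u.mean()
    return (u[:-1] @ u[1:]) / (u @ u)

rng = np.random.default_rng(3)
T = 500
x1 = np.cumsum(rng.standard_normal(T))              # I(1) component
x2 = np.cumsum(rng.standard_normal(T))              # I(1) component
y = 2.0 * x1 - 1.0 * x2 + rng.standard_normal(T)    # co-integrated by construction

X = np.column_stack([np.ones(T), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
z = y - X @ coef

print("first autocorrelations (y, x1, z):",
      first_autocorr(y), first_autocorr(x1), first_autocorr(z))
print("estimated A1, A2:", coef[1], coef[2])
```

The levels have first autocorrelations very near one, while the residual does not, which is the behaviour a co-integrated specification requires of zt.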
4. THE BIVARIATE FEEDBACK CASE
Now consider the bivariate feedback model

(1 - B)^d a1(B) yt = b1(yt-1 - A1xt-1) + (1 - B)^d b1(B) xt + c1(B) e1t,    (7a)
(1 - B)^d a2(B) xt = b2(yt-1 - A2xt-1) + (1 - B)^d b2(B) yt + c2(B) e2t,    (7b)

which may be conveniently rewritten as

α1(B) yt = α2(B) xt + c1(B) e1t,    (8a)
α3(B) xt = α4(B) yt + c2(B) e2t,    (8b)

where

α1(B) = (1 - B)^d a1(B) - b1 B,
α2(B) = (1 - B)^d b1(B) - A1 b1 B,
α3(B) = (1 - B)^d a2(B) + A2 b2 B,
α4(B) = (1 - B)^d b2(B) + b2 B.

To make the model identified, a recursive scheme will be assumed, so that corr(e1t, e2s) = 0 for all s, t including s = t, b2(0) = 0, but b1(0) need not be zero. It is also assumed that d = 1. The univariate model for yt takes the form

D(B) yt = c1(B) α3(B) e1t + c2(B) α2(B) e2t

where D(B) = α1(B)α3(B) - α2(B)α4(B). The univariate model for xt has D(B) on its left-hand side. For xt, yt to be both I(1), so that D(B) has a factor (1 - B), requires either b1b2 = 0 or A1 = A2. Some further algebra finds that the model for zt = yt - Axt takes the form

D(B) zt = f1(B) e1t + f2(B) e2t

and if A1 = A2 = A or if b1b2 = 0, then f1(B), f2(B) have a factor (1 - B), which therefore cancels through the equation for zt, giving zt ~ I(0). Thus, for xt, yt to be co-integrated and for an error-correction term to be present in each equation of (7), necessarily A1 = A2 = A. If only one error-correction term occurs in the model, for instance, if b2 = 0, b1 ≠ 0, then xt, yt will be co-integrated and I(1), with the low-frequency component of e2t driving the low-frequency components of both xt and yt. If both b1 and b2 are non-zero, the low-frequency components of xt and yt are driven by a mixture of the low-frequency components of e1t and e2t. The model is thus different when two error-correction components are present. The only unusual special case seems to be when b1 = 0, b2 ≠ 0, and b1(B) = Aa1(B), as then xt, yt are both I(2) but zt = yt - Axt is I(0). The series are thus still co-integrated.
5. AGGREGATION
If xt, yt are I(1) and co-integrated, so that zt = yt - Axt is I(0), then changing the sampling interval of the series will not change the situation. If xt is measured weekly, say, and is I(1), then if recorded every k weeks, the new data set will still be I(1). The model for the change in xt will be different but the change will remain I(0). Similarly, zt will stay I(0) and so co-integration is unchanged. Here, xt, yt have been considered as stock variables (the same remarks hold if they are both flow variables, but
accumulated over k weeks rather than one week, say). If xt is a flow variable and yt a stock variable, temporal aggregation relationships are less clear. It seems doubtful if it is logical to suppose that a stock and a flow variable are co-integrated, given the arbitrariness of sampling intervals. Suppose now that a pair x1t, y1t are co-integrated, both I(1), and z1t = y1t - A1x1t ~ I(0). Similarly for a second pair x2t, y2t, with z2t = y2t - A2x2t. The variables could be income and consumption in two different regions. Now suppose that data for individual regions are not available, the observable data being ȳt = y1t + y2t and x̄t = x1t + x2t. ȳt, x̄t are both I(1), but z̄t = ȳt - Ax̄t will not be I(0) unless A1 = A2 (= A) or unless x1t, x2t are co-integrated, with (A1 - A)x1t + (A2 - A)x2t ~ I(0), so that y1t and y2t will necessarily be co-integrated, with (A1 - A)A2y1t + (A2 - A)A1y2t ~ I(0). This may seem an unlikely condition for variables from different regions. If many regions are involved in the aggregation, it seems highly unlikely that the aggregates are co-integrated even if regional components are. It thus seems that rather stringent conditions are required to find error-correction models relevant for some of the most important aggregates of the economy. On the other hand, it is possible for aggregate series, with many components, to be co-integrated but for regional components not to be, generalizing some of the results of Section 3. For some equilibrium theories in economics, the value of A is determined; for instance, if the ratio of yt to xt is thought to tend to a constant in equilibrium, then building models on the log variables suggests that A = 1. This could apply to various "regions" and aggregation will then lead to the same error-correction models.
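The aggregation argument can be checked numerically. The sketch below is an added illustration, not taken from the paper: it builds two regional pairs that are individually co-integrated with different coefficients A1 and A2, aggregates them, and compares the first few residual autocorrelations from the aggregate regression with those from a regional regression. All parameter choices and helper names are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
T = 2000

def acf(x, nlags=8):
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, nlags + 1)])

def ols_resid(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# Region 1: y1 = A1*x1 + I(0) noise; Region 2: y2 = A2*x2 + I(0) noise, with A1 != A2.
x1 = np.cumsum(rng.standard_normal(T))
x2 = np.cumsum(rng.standard_normal(T))
y1 = 1.0 * x1 + rng.standard_normal(T)
y2 = 3.0 * x2 + rng.standard_normal(T)

print("regional residual acf :", np.round(acf(ols_resid(y1, x1)), 2))
print("aggregate residual acf:", np.round(acf(ols_resid(y1 + y2, x1 + x2)), 2))

Under these assumptions the regional residual behaves like an I(0) series while the aggregate residual retains autocorrelations close to one, as the argument above suggests.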
6. TESTING FOR CO-INTEGRATION
There are a number of ways that the error-correction specification, or equivalently, co-integration, could be tested. Assuming that xt, yt are both I(1), one way would be to look at estimates of the cross-spectrum between these series at low frequencies. Other ways would be to build the relevant model, such as (1), (2), and test if b is non-zero, or to build (1), (2) taking b = 0 and then testing if the moving-average term c1(B)(1 - B)^d e1t has a root on the unit circle. These methods are not simple to use and the latter two require complete specification of the lags in the model. Under the null hypothesis of no error-correction mechanism, the first test is likely to have unsatisfactory properties with medium-sized samples, and the same is likely to be so for the third test if the alternative hypothesis is true. It would be useful to have a simple test to identify error correction, using the time-series sense of the word, meaning a simple test of specification to be used prior to the full data analysis. One possible way to do this is to form the regression

yt = m + Axt + et    (9)
using least squares and then to ask if êt = yt - m̂ - Âxt is I(0) or I(1). The standard time-series method for doing this is to look at the correlogram of êt and decide, by eye, if it is declining fast enough for I(0) to be appropriate. This same procedure presumably will have been used to decide that xt, yt are both I(1), assuming that only integer values of d are being considered. There are two obvious difficulties with this identification procedure: the estimate of A will be inefficient in general, as there is no reason to suppose that et is white noise, and no strict test of et ~ I(0) is being used. Take

H0: no error-correction mechanism,
HA: xt, yt are co-integrated.

If HA is true, there will be a single value of A which, in theory, makes the variance of et finite, so that if this value is discovered by the search procedure it should be very distinctive, regardless of the temporal structure of êt. This argument only holds strictly for large samples and it is less clear what happens for ordinary sized samples. What is clear is that the frequently used assumption that a better estimate of A is obtained by assuming et to be AR(1) is not appropriate in this case. As yt is I(1), the model one is inclined to get is just yt = yt-1 + error, with A = 0, as a simple simulation study showed. A more complete procedure is to build models of the form

yt = m + Axt + Σ_{k=1}^{p} ak(yt-k - yt-k-1) + Σ_{k=0}^{q} bk(xt-k - xt-k-1) + et    (10)
where et should be white noise if p and q are chosen in an ad hoc fashion but are "large enough" to pick up any temporal structure in the I(0) variable et in (9), assuming HA is correct. This form does not require an identification of the complete model, will give efficient estimates of the parameters if HA is true, and is still easily performed. A test based on (9) will be called the "inefficient test"; that based on (10) will be called the efficient test. If HA is true, êt from (9) should be I(0), which may be judged from the correlogram of êt, and the residual from (10), denoted εt, should be near white noise if p, q are chosen large enough. In the applications presented in the following section, equations (9) and (10) were estimated using least squares. It should be noted that error correction cannot be tested by estimating models such as (9) or (10) and asking if the estimate of A is significant, because of the spurious regression possibilities, as discussed in Granger and Newbold (1977). If H0 is true, spurious regression can obviously occur, but this is not a problem when HA is true. Equation (10) does not correspond to equation (1) with d = 1 and so tests based on it are not equivalent to building the system (1), (2).
Consider the simple error-correcting model yt - yt-1 = b(yt-1 - Axt-1) + et. Then this can be rewritten as yt - Axt = (b + 1)(yt-1 - Axt-1) - A(xt - xt-1) + et. This suggests that models of the form

yt - Axt = m + g(yt-1 - Axt-1) + Σ_{k=1}^{p} ak(yt-k - yt-k-1) + Σ_{k=1}^{q} bk(xt-k - xt-k-1) + et    (11)
should be estimated, where |g| < 1, and et should be white noise. Equations (9), (10), and (11) were fitted to various data sets and the results are presented below. As an experiment, the model in (10) was also fitted with k going from 1 to q in the last summation, but little difference in conclusions occurred, and so these results will not always be presented.

7. APPLICATION 1: EMPLOYEES' INCOME AND NATIONAL INCOME

The series considered: yt = compensation of employees (logs), and xt = national income (logs), both measured in current dollars. The data is quarterly starting 1947-I and the series has 138 terms. In this and the other applications, the data was taken from the Citibank Economic Database. The fitted version of equation (9) was

yt = -0.680 + 1.041xt + et
(18.1) (177.6)

(t-values are shown in brackets, assuming et to be white noise) and, similarly, the fitted version of equation (10), with p = q = 3, was

yt = -0.754 + 1.068xt - 1.26Δyt-1 - 0.028Δyt-2 - 0.23Δyt-3 - 1.03Δxt - 1.09Δxt-1 - 1.62Δxt-2 + εt    (12)
(-43.7) (353.7) (-6.3) (-0.11) (-1.10) (7.27) (-7.02) (-11.64)
where Δxt-k = xt-k - xt-k-1. Table 7.1 shows the autocorrelations for Δyt, Δxt, et, Δet, e*t, εt, and ε*t for lags 1 to 12, where e*t is the residual from (11) with all of the coefficients increased by 10%, and similarly for ε*t from (12). The correlograms for xt, yt (not shown) stay high and suggest differencing. Δxt and Δyt still have positive serial correlation at low lags, but d = 1 appears to be an appropriate identification (columns 1, 2). The residual series et from (11) has a correlogram appropriate for an I(0) series, column (3), but if the parameters in (11) are changed upwards by 10%,
Table 7.1 Autocorrelations.

Lag   (1) Δyt   (2) Δxt   (3) et   (4) Δet   (5) e*t   (6) εt   (7) ε*t
 1      .65       .51       .89      .61       .95       .45      .92
 2      .34       .22       .65      .22       .85       .49      .90
 3      .13      -.01       .38     -.13       .74       .41      .85
 4     -.08      -.19       .13     -.48       .65       .37      .83
 5     -.22      -.25      -.02     -.50       .59       .22      .78
 6     -.12      -.17      -.08     -.33       .56       .29      .77
 7     -.06      -.02      -.06     -.13       .55       .30      .75
 8     -.02      -.01      -.02      .01       .55       .13      .71
 9      .02       .12       .03      .16       .55       .17      .69
10      .06       .18       .06      .16       .53       .13      .66
11      .02       .13       .05      .05       .49       .06      .65
12     -.05      -.00       .04     -.10       .47      -.01      .61

(Approx. twice standard error is 0.17.) The estimated variances of the residuals are V(et) = .00226, V(e*t) = 0.025, V(εt) = .42E-03, V(ε*t) = .05E-02.
the resulting residuals e*t have a correlogram, column (5), suggesting that e*t is I(1). Thus the results of the inefficient test suggest that an error-correction model is appropriate. However, the more complete model (12) does not produce residuals that are white noise; in fact εt has considerable temporal structure, suggesting either that the model fails this test or that further lagged values of the differenced series are required. However, it was found that adding further lags made little difference. Changing parameters upwards by 10% again produced errors that appear to be I(1), column (7). The estimates of A in both models are near one, but seem to be statistically greater than one. The tests thus seem somewhat inconclusive; the error-correction model is not rejected but neither is it strongly supported. Using GNP instead of national income gave similar results. The model in (12) was re-estimated using Δxt-j, j = 1, 2, 3 instead of Δxt-j, j = 0, 1, 2, but the results in Table 7.1 were changed very little. The estimated model became

yt = -0.743 + 1.064xt - 0.173Δyt-1 - 3.529Δyt-2 + 0.001Δyt-3 - 1.60Δxt-1 - 1.43Δxt-2 - 1.13Δxt-3 + εt

(t-values: 0.327, -0.8, -2.0, 0.004, -10.5, -6.8, -7.6). A form of equation (11) was also fitted, giving
(yt - 0.901xt) = 0.002 + 1.002(yt-1 - 0.901xt-1) - 1.054(xt - xt-1) + et
(391.0) (8.8) (103.0) (-15.7)

The t-statistics are seen to be very large, and the estimated model can effectively be rewritten

yt - yt-1 = -0.1(xt - xt-1) + et,

which does not support the error-correction formulation. The residual et has variance 0.14E-03 and estimated serial correlations r1 = 0.63, r2 = 0.36, r3 = 0.21, |rk| ≤ .1, k > 3. This is the best-fitting model of those estimated; it gives a stationary model in differences, but does not find error correction relevant.
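Before turning to the next application, the two regressions on which the inefficient and efficient tests rest can be made concrete on artificial data for which HA holds by construction. The sketch below is an added illustration (parameter values, the AR(1) error, and p = q = 2 are arbitrary assumptions): it fits the levels regression (9) and an augmented regression in the spirit of (10), and prints the residual correlograms that the two tests would inspect.

import numpy as np

rng = np.random.default_rng(2)
T = 1000

def acf(x, nlags=8):
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[k:] * x[:-k]) / denom for k in range(1, nlags + 1)])

# Co-integrated pair by construction: xt is a random walk, yt = 1 + 0.5*xt + AR(1) error.
x = np.cumsum(rng.standard_normal(T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + rng.standard_normal()
y = 1.0 + 0.5 * x + u

# "Inefficient" test: levels regression (9); inspect the correlogram of its residual.
X9 = np.column_stack([np.ones(T), x])
e9 = y - X9 @ np.linalg.lstsq(X9, y, rcond=None)[0]

# "Efficient" test: a regression in the spirit of (10), with p = q = 2 lagged differences.
p, q = 2, 2
start = max(p, q) + 1
cols = [np.ones(T - start), x[start:]]
for k in range(1, p + 1):                      # a_k * (y_{t-k} - y_{t-k-1})
    cols.append(y[start - k:T - k] - y[start - k - 1:T - k - 1])
for k in range(0, q + 1):                      # b_k * (x_{t-k} - x_{t-k-1})
    cols.append(x[start - k:T - k] - x[start - k - 1:T - k - 1])
X10 = np.column_stack(cols)
e10 = y[start:] - X10 @ np.linalg.lstsq(X10, y[start:], rcond=None)[0]

print("residual correlogram from (9) :", np.round(acf(e9), 2))
print("residual correlogram from (10):", np.round(acf(e10), 2))

Under these assumptions the (9)-residual is I(0) but serially correlated, while the (10)-residual is close to white noise, which is the pattern the efficient test looks for.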
8. APPLICATION 2: M3 AND GNP
Here yt = M3, money supply (logs), and xt = GNP (logs). The data is quarterly, starting in 1959-I, and there are 90 observations. The simple model estimated was

yt = 0.028 + 1.097xt + et    (13)
(.73) (198.6)
and the more complicated model, using p = 3, q = 2, was

yt = 0.09 + 1.081xt + 0.688Δyt-1 + 0.536Δyt-2 + 1.41Δyt-3 - 0.322Δxt - 0.226Δxt-1 + εt    (14)
(2.6) (189.4) (1.46) (0.91) (2.98) (-1.18) (-0.796)

Once more, the estimates of A are near, but significantly greater than, one. Table 7.2 shows the estimated autocorrelations for Δyt, Δxt, et, e*t, εt, and ε*t, where starred values are residuals from equations like (13), (14) but with parameter values increased by ten percent. The evidence of the correlograms suggests that xt and yt are both I(1). Residuals from perturbing coefficients by ten percent, columns 4 and 6, again appear to be I(1). However, the residuals from equations (13) and (14) are not clearly I(0); their correlograms decline, but somewhat slowly, as seen in columns 3 and 5. Adding further lagged differenced variables helps very little. If p = q = 5 is used, the coefficient on Δyt-5 is the only one on a differenced variable to have a t-value over two, and the residuals ηt from this equation have the correlogram shown in column 7 of Table 7.2. Thus, the residuals are far from white noise and there is little evidence that an error-correction mechanism is appropriate in this case.
Table 7.2 Autocorrelations.

Lag   (1) Δyt   (2) Δxt   (3) et   (4) e*t   (5) εt   (6) ε*t   (7) ηt
 1      .77       .25       .90      .95       .88      .95       .89
 2      .51       .15       .77      .89       .74      .89       .76
 3      .26       .11       .66      .84       .63      .85       .67
 4      .13       .16       .55      .80       .54      .81       .56
 5      .06       .07       .43      .78       .47      .77       .46
 6      .05       .08       .31      .70       .37      .72       .39
 7      .12       .12       .22      .65       .29      .67       .30
 8     -.05       .03       .12      .60       .20      .63       .17
 9     -.05       .07       .05      .55       .11      .58       .07
10     -.05       .17      -.01      .50       .01      .54      -.05
11     -.03       .22      -.07      .46      -.11      .49      -.17
12     -.02       .01      -.15      .41      -.23      .45      -.29

(Approx. twice standard error is .21.) Variances of residuals: V(et) = .79E-03, V(e*t) = .16E-01, V(εt) = .49E-03, V(ε*t) = .15E-01, V(ηt) = .11E-03.
A model of form (11) was also estimated, giving
(yt - 1.095xt) = 0.0015 + 1.035(yt-1 - 1.095xt-1) - 0.79(xt - xt-1) + et
(903.0) (7.8) (33) (-9.3)

As 1.035 is not significantly different from 1, this model does not support error correction. et has variance 0.57E-04, which is the smallest residual variance of the equations fitted, and has serial correlations r1 = 0.65, r2 = 0.31, |rk| < 0.12, k > 2.

9. APPLICATION 3: PRICES, WAGES AND PRODUCTIVITY IN THE TRANSPORTATION INDUSTRY

In Table 7.3,

yt = price index, U.S. transportation industry,
x1t = hourly earnings, workers in transport industry,
x2t = productivity measure, transportation industry.

Data is monthly, starting in 1969, and there are 151 observations. Analysis of the individual series strongly suggested that they are I(1), but the first differences had no temporal structure other than seasonal effects in yt and x1t.
The simple models fitted were

yt = 18.58 + 20.04x1t + e1t    (15)
(15.4) (109.73)

and

yt = 54.3 + 21.81x1t - 787.69x2t + e2t.    (16)
(17.9) (120.90) (-12.30)

More complicated models are

yt = 18.8 + 20.0x1t + 0.70Δyt-1 + 0.42Δyt-2 - 13.8Δx1,t-1 - 8.69Δx1,t-2 + ε1t    (17)
(17.9) (100.6) (3.4) (2.21) (-1.9) (-3.3)

and

yt = 55.08 + 21.95x1t - 810.6x2t + 0.53Δyt-1 + 0.25Δyt-2 - 17.4Δx1,t - 9.6Δx1,t-1 + 673Δx2,t + 599Δx2,t-1 + ε2t    (18)
(20) (115) (-13.9) (3.87) (2.01) (-5.38)
It seems that the models relating just prices to wages produce residuals with slowly declining correlograms (columns 1, 4), and so this pair of variables appears not to be co-integrated. Using the three variables produces models that appear to be error-correcting using the inefficient test (column 2), especially compared to residuals from the perturbed model (column 3). However, adding lagged differences does little to drive the residuals towards white noise, as seen by comparing columns 2 and 5. Adding further differences altered this very little. Unfortunately, the results are again inconclusive. The inefficient procedure suggests an error-correction model could be appropriate if industrial prices are explained by wages and productivity, but more complicated procedures do not fully support such a conclusion. However, when an equation of form (11) was fitted, a clearer picture emerges. The equation is
(yt - 24.8x1,t - 94.6x2,t) = -0.199 + 0.941(yt-1 - 24.8x1,t-1 - 94.6x2,t-1) - 22.4Δx1,t - 104.6Δx2,t + et
(9.0) (0.44) (0.6) (38.2) (-6.2) (-0.44)

et has variance 2.998, which is the smallest achieved by the various models, and has r1 = -0.2 and all other rk small, except for r6 = 0.29 and r12 = 0.49, suggesting a seasonal component. Here, the terms involving x2,t are no longer significant and 0.941 is significantly less than 1, suggesting that
Table 7.3 Autocorrelations.

Lag   (1) e1t   (2) e2t   (3) e*2t   (4) ε1t   (5) ε2t
 1      .84       .71       .87        .95       .90
 2      .78       .61       .83        .88       .78
 3      .76       .61       .84        .80       .63
 4      .64       .45       .78        .72       .50
 5      .58       .33       .72        .64       .38
 6      .57       .33       .72        .59       .29
 7      .46       .19       .68        .55       .23
 8      .44       .16       .66        .51       .18
 9      .47       .20       .67        .49       .16
10      .38       .08       .61        .45       .15
11      .36       .10       .58        .42       .16
12      .38       .18       .60        .37       .15

(Approx. twice standard error is .16.) Variances of residuals: 18.1, 9.04, 23.6, 15.7, 6.74, for columns (1)-(5) respectively.
an error-correction model may be appropriate. If the model is re-estimated using just x1,t, the same conclusion is reached. On the other hand, if the same model, involving just yt (price) and x1t (wages), is estimated using logs of the series, the equation achieved is
(log yt - 3.42 log x1t) = 0.997(log yt-1 - 3.42 log x1,t-1) - 3.36Δ log x1t + et
(895.0) (-0.49)

where et has r1 = -0.14, r6 = 0.15, r12 = 0.51, and all other rk small. Thus, error correction is not supported in logs of the variables.

10. CONCLUSIONS
The error-correction mechanism is an interesting way of possibly bringing economic theory into time-series modeling, but in the applications presented here, and also in some others that have not been presented, the "theory" being applied does seem to be too simplistic. The temporal structure and relationships between series do not fit simply into the class of models being considered, which are restricted to linear forms (possibly in logs) and with time-invariant parameters. The tests suggested to help identify error-correction models do appear to have some difficulties and require further study. If the economic theory is believed strongly enough, it may be worth building a model inserting the error-correction term
and comparing its results to a model built just on first differences. One further reason for the unsatisfactory results of the applications is that only integer d values were considered. Other d values are briefly discussed in Appendix 1, but a full discussion and the necessary associated empirical work are too lengthy to report here.
APPENDIX 1. FRACTIONAL INTEGRATED SERIES
The results of the first five sections go through without change if d is allowed to take any value, rather than just the values zero and one considered there. The case where d is a fraction has been considered by Granger and Joyeux (1980) and Hosking (1981), so that xt ~ I(d) if (1 - B)^d xt can be modeled as an ARMA(p, q) model, with finite, integer p, q. If d is a fraction, (1 - B)^d can only be realized as a specific power series in B. Such models can arise from aggregation of dynamic components with different parameters; see Granger (1981). It can be shown that xt has finite variance if d < 1/2, but has infinite variance if d ≥ 1/2. If xt, yt are both I(d) and generated by (1), (2) for any d, then zt = xt - Ayt will be I(0), and so the series will be co-integrated. The identification test, based on the cross-spectrum, discussed in Section 6 is still relevant in this more general case.
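For readers who want to experiment with fractional d, the expansion of (1 - B)^d as a power series in B can be generated recursively. The sketch below is an added illustration (the value d = 0.4, the truncation length, and the function name are assumptions): it computes the first few filter weights and applies the truncated filter to a white noise series.

import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1 - B)^d = sum_j w_j B^j, via w_j = w_{j-1}*(j - 1 - d)/j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

d = 0.4
print("weights of (1 - B)^0.4:", np.round(frac_diff_weights(d, 8), 4))

# Apply the truncated filter to a series (white noise here, purely for illustration).
rng = np.random.default_rng(3)
x = rng.standard_normal(500)
n = 100
w = frac_diff_weights(d, n)
xd = np.array([w @ x[t - n + 1:t + 1][::-1] for t in range(n - 1, len(x))])
print("filtered series length:", xd.shape[0])

The recursion reproduces the binomial coefficients of (1 - B)^d; the truncation length n is the usual practical compromise, since for fractional d the series of weights never terminates.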
APPENDIX 2. ERROR CORRECTION AND SEASONALITY

A popular class of univariate models for series with seasonal components is that introduced by Box and Jenkins (1970), of the form

(1 - B)^d (1 - B^s)^{ds} a1(B) a2(B^s) xt = b1(B) b2(B^s) et    (A2.1)
where et is white noise, a1(B), b1(B) are polynomials in B, and a2(B^s), b2(B^s) are polynomials in B^s, where s is the length of the seasonal, so that s = 12 if monthly data is used. The model is completed by adding appropriate starting-up values, containing the typical seasonal shape. One problem with this model is that if it is used to generate a series, although the series will have the correct seasonal shape in early years, it will eventually drift away from this shape. As many economic series have a varying seasonal, but one that varies about a fairly consistent shape, the model is clearly not completely satisfactory, except in the short run. A method of improving the model is to add an error-correcting term such as A(xt - St), where St is a strongly seasonal series having the correct constant underlying shape.
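A small numerical illustration of this last point is sketched below; it is an added example, and the fixed shape St, the adjustment coefficient A, the noise scale, and s = 12 are all assumptions. A purely stochastic seasonal generator xt = xt-12 + et lets the seasonal shape wander, whereas the correcting term -A(xt-12 - St-12) keeps the realized shape close to St.

import numpy as np

rng = np.random.default_rng(4)
s, years = 12, 60
T = s * years
S = np.tile(np.sin(2 * np.pi * np.arange(s) / s), years)   # fixed underlying seasonal shape

def generate(A):
    """x_t = x_{t-12} - A*(x_{t-12} - S_{t-12}) + e_t; A = 0 gives the uncorrected model."""
    x = S[:s].copy().tolist()
    for t in range(s, T):
        x.append(x[t - s] - A * (x[t - s] - S[t - s]) + 0.3 * rng.standard_normal())
    return np.array(x)

for A in (0.0, 0.5):
    x = generate(A)
    drift = np.std(x[-s * 5:] - S[-s * 5:])   # how far the last five years sit from the shape
    print(f"A = {A}: rms deviation from the fixed seasonal shape = {drift:.2f}")

With A = 0 the deviation grows with the length of the sample, while with A = 0.5 it stays of the order of the innovation standard deviation, which is the behavior the error-correcting term is meant to deliver.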
REFERENCES

Box, G. E. P., and Jenkins, G. M. (1970). Time Series Analysis, Forecasting and Control. Holden Day, San Francisco.
Currie, D. (1981). The Economic Journal 363.
Davidson, J., Hendry, D., Srba, F., and Yeo, S. (1978). Economic Journal 88, 661.
Dawson, A. (1981). Applied Economics 3, 351.
Granger, C. W. J. (1981). Journal of Econometrics 16, 121.
Granger, C. W. J., and Joyeux, R. (1980). Journal of Time Series Analysis 1, 15.
Granger, C. W. J., and Newbold, P. (1977). Forecasting Economic Time Series. Academic Press, New York.
Hendry, D., and von Ungern-Sternberg, T. (1980). In A. Deaton (ed.), Essays in the Theory and Measurement of Consumers' Behaviour. Cambridge University Press.
Hosking, J. R. M. (1981). Biometrika 68, 165.
Sargan, J. D. (1974). In P. E. Hart, G. Mills, and J. K. Whittaker (eds.), Econometric Analysis for National Economic Planning. Butterworth, London.
CHAPTER 8
Co-Integration and Error Correction: Representation, Estimation, and Testing*

Robert F. Engle and C. W. J. Granger**
The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples. If each element of a vector of time series xt first achieves stationarity after differencing, but a linear combination α′xt is already stationary, the time series xt are said to be co-integrated with co-integrating vector α. There may be several such co-integrating vectors, so that α becomes a matrix. Interpreting α′xt = 0 as a long run equilibrium, co-integration implies that deviations from equilibrium are stationary, with finite variance, even though the series themselves are nonstationary and have infinite variance. The paper presents a representation theorem based on Granger (1983), which connects the moving average, autoregressive, and error correction representations for co-integrated systems. A vector autoregression in differenced variables is incompatible with these representations. Estimation of these models is discussed and a simple but asymptotically efficient two-step estimator is proposed. Testing for co-integration combines the problems of unit root tests and tests with parameters unidentified under the null. Seven statistics are formulated and analyzed. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is co-integrated with M2, but not M1, M3, or aggregate liquid assets.

* Econometrica, 55, 1987, 251–276.
** The authors are indebted to David Hendry and Sam Yoo for many useful conversations and suggestions as well as to Gene Savin, David Dickey, Alok Bhargava, and Marco Lippi. Two referees provided detailed constructive criticism, and thanks go to Yoshi Baba, Sam Yoo, and Alvaro Escribano who creatively carried out the simulations and examples. Financial support was provided by NSF SES-80-08580 and SES-82-08626. A previous version of this paper was entitled "Dynamic Model Specification with Equilibrium Constraints: Co-integration and Error Correction."
Keywords: Co-integration, vector autoregression, unit roots, error correction, multivariate time series, Dickey-Fuller tests.

1. INTRODUCTION
An individual economic variable, viewed as a time series, can wander extensively and yet some pairs of series may be expected to move so that they do not drift too far apart. Typically economic theory will propose forces which tend to keep such series together. Examples might be short and long term interest rates, capital appropriations and expenditures, household income and expenditures, and prices of the same commodity in different markets or close substitutes in the same market. A similar idea arises from considering equilibrium relationships, where equilibrium is a stationary point characterized by forces which tend to push the economy back toward equilibrium whenever it moves away. If xt is a vector of economic variables, then they may be said to be in equilibrium when the specific linear constraint α′xt = 0 occurs. In most time periods, xt will not be in equilibrium and the univariate quantity zt = α′xt may be called the equilibrium error. If the equilibrium concept is to have any relevance for the specification of econometric models, the economy should appear to prefer a small value of zt rather than a large value. In this paper, these ideas are put onto a firm basis and it is shown that a class of models, known as error-correcting, allows long-run components of variables to obey equilibrium constraints while short-run components have a flexible dynamic specification. A condition for this to be true, called co-integration, was introduced by Granger (1981) and Granger and Weiss (1983) and is precisely defined in the next section. Section 3 discusses several representations of co-integrated systems, Section 4 develops estimation procedures, and Section 5 develops tests. Several applications are presented in Section 6 and conclusions are offered in Section 7. A particularly simple example of this class of models is shown in Section 4, and it might be useful to examine it for motivating the analysis of such systems.

2. INTEGRATION, CO-INTEGRATION, AND ERROR CORRECTION

It is well known from Wold's theorem that a single stationary time series with no deterministic components has an infinite moving average representation which is generally approximated by a finite autoregressive moving average process. See, for example, Box and Jenkins (1970) or
Granger and Newbold (1977). Commonly however, economic series must be differenced before the assumption of stationarity can be presumed to hold. This motivates the following familiar definition of integration:

Definition: A series with no deterministic component which has a stationary, invertible, ARMA representation after differencing d times, is said to be integrated of order d, denoted xt ~ I(d).

For ease of exposition, only the values d = 0 and d = 1 will be considered in much of the paper, but many of the results can be generalized to other cases including the fractional difference model. Thus, for d = 0, xt will be stationary and for d = 1 the change is stationary. There are substantial differences in appearance between a series that is I(0) and another that is I(1). For more discussion see, for example, Feller (1968) or Granger and Newbold (1977).

(a) If xt ~ I(0) with zero mean then (i) the variance of xt is finite; (ii) an innovation has only a temporary effect on the value of xt; (iii) the spectrum of xt, f(ω), has the property 0 < f(0) < ∞; (iv) the expected length of time between crossings of x = 0 is finite; (v) the autocorrelations, rk, decrease steadily in magnitude for large enough k, so that their sum is finite.

(b) If xt ~ I(1) with x0 = 0, then (i) the variance of xt goes to infinity as t goes to infinity; (ii) an innovation has a permanent effect on the value of xt, as xt is the sum of all previous changes; (iii) the spectrum of xt has the approximate shape f(ω) ~ Aω^{-2d} for small ω, so that in particular f(0) = ∞; (iv) the expected time between crossings of x = 0 is infinite; (v) the theoretical autocorrelations rk → 1 for all k as t → ∞.

The theoretical infinite variance for an I(1) series comes completely from the contribution of the low frequencies, or long run part of the series. Thus an I(1) series is rather smooth, having dominant long swings, compared to an I(0) series. Because of the relative sizes of the variances, it is always true that the sum of an I(0) and an I(1) will be I(1). Further, if a and b are constants, b ≠ 0, and if xt ~ I(d), then a + bxt is also I(d). If xt and yt are both I(d), then it is generally true that the linear combination zt = xt - ayt will also be I(d). However, it is possible that zt ~ I(d - b), b > 0. When this occurs, a very special constraint operates on the long-run components of the series. Consider the case d = b = 1, so that xt, yt are both I(1) with dominant long run components, but zt is I(0) without especially strong low frequencies. The constant a is therefore such that the bulk of the long run components of xt and yt cancel out. For a = 1, the vague idea that xt and yt cannot drift too far apart has been translated into the more precise statement that "their difference will be I(0)." The use of the constant a merely suggests that some scaling needs to be used before the
I(0) difference can be achieved. It should be noted that it will not generally be true that there is an a which makes zt ~ I(0). An analogous case, considering a different important frequency, is when xt and yt are a pair of series, each having an important seasonal component, yet there is an a so that the derived series zt has no seasonal. Clearly this could occur, but might be considered to be unlikely. To formalize these ideas, the following definition adapted from Granger (1981) and Granger and Weiss (1983) is introduced:

Definition: The components of the vector xt are said to be co-integrated of order d, b, denoted xt ~ CI(d, b), if (i) all components of xt are I(d); (ii) there exists a vector α (≠ 0) so that zt = α′xt ~ I(d - b), b > 0. The vector α is called the co-integrating vector.

Continuing to concentrate on the d = 1, b = 1 case, co-integration would mean that if the components of xt were all I(1), then the equilibrium error would be I(0); zt will rarely drift far from zero if it has zero mean and zt will often cross the zero line. Putting this another way, it means that equilibrium will occasionally occur, at least to a close approximation, whereas if xt was not co-integrated, then zt can wander widely and zero-crossings would be very rare, suggesting that in this case the equilibrium concept has no practical implications. The reduction in the order of integration implies a special kind of relationship with interpretable and testable consequences. If however all the elements of xt are already stationary so that they are I(0), then the equilibrium error zt has no distinctive property if it is I(0). It could be that zt ~ I(-1), so that its spectrum is zero at zero frequency, but if any of the variables have measurement error, this property in general cannot be observed and so this case is of little realistic interest. When interpreting the co-integration concept it might be noted that in the N = 2, d = b = 1 case, Granger and Weiss (1983) show that a necessary and sufficient condition for co-integration is that the coherence between the two series is one at zero frequency. If xt has N components, then there may be more than one co-integrating vector α. It is clearly possible for several equilibrium relations to govern the joint behavior of the variables. In what follows, it will be assumed that there are exactly r linearly independent co-integrating vectors, with r ≤ N - 1, which are gathered together into the N × r array α. By construction the rank of α will be r, which will be called the "co-integrating rank" of xt. The close relationship between co-integration and error correcting models will be developed in the balance of the paper. Error correction mechanisms have been used widely in economics. Early versions are Sargan (1964) and Phillips (1957). The idea is simply that a proportion of the disequilibrium from one period is corrected in the next period.
For example, the change in price in one period may depend upon the degree of excess demand in the previous period. Such schemes can be derived as optimal behavior with some types of adjustment costs or incomplete information. Recently, these models have seen great interest following the work of Davidson, Hendry, Srba, and Yeo (1978) (DHSY), Hendry and von Ungern-Sternberg (1980), Currie (1981), Dawson (1981), and Salmon (1982) among others. For a two variable system a typical error correction model would relate the change in one variable to past equilibrium errors, as well as to past changes in both variables. For a multivariate system we can define a general error correction representation in terms of B, the backshift operator, as follows.

Definition: A vector time series xt has an error correction representation if it can be expressed as:

A(B)(1 - B)xt = -γzt-1 + ut

where ut is a stationary multivariate disturbance, with A(0) = I, A(1) has all elements finite, zt = α′xt, and γ ≠ 0.

In this representation, only the disequilibrium in the previous period is an explanatory variable. However, by rearranging terms, any set of lags of the z can be written in this form; therefore it permits any type of gradual adjustment toward a new equilibrium. A notable difference between this definition and most of the applications which have occurred is that this is a multivariate definition which does not rest on exogeneity of a subset of the variables. The notion that one variable may be weakly exogenous in the sense of Engle, Hendry, and Richard (1983) may be investigated in such a system as briefly discussed below. A second notable difference is that α is taken to be an unknown parameter vector rather than a set of constants given by economic theory.
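To make the definition concrete, the following sketch (added here for illustration; it is not part of the original article) generates a bivariate series from the simplest error correction form with A(B) = I, so that Δxt = -γzt-1 + ut with zt = α′xt, and reports how the sample variances of a level and of the equilibrium error grow with the sample size. The choices α = (1, -1)′ and γ = (0.25, -0.25)′ are arbitrary, made only so that α′γ lies between 0 and 2 and the error process for z is stable.

import numpy as np

rng = np.random.default_rng(5)
alpha = np.array([1.0, -1.0])       # co-integrating vector (assumed)
gamma = np.array([0.25, -0.25])     # adjustment coefficients (assumed); alpha'gamma = 0.5

def simulate(T):
    x = np.zeros((T, 2))
    for t in range(1, T):
        z_lag = alpha @ x[t - 1]                     # equilibrium error z_{t-1}
        x[t] = x[t - 1] - gamma * z_lag + rng.standard_normal(2)
    return x

for T in (400, 1600, 6400):
    x = simulate(T)
    z = x @ alpha
    print(f"T = {T:5d}: var(x1) = {x[:, 0].var():9.1f}, var(z) = {z.var():6.2f}")

The variance of each level grows roughly in proportion to T, while the variance of z settles down, which is exactly the contrast between the I(1) components and the I(0) equilibrium error described above.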
3. PROPERTIES OF CO-INTEGRATED VARIABLES AND THEIR REPRESENTATIONS

Suppose that each component of xt is I(1) so that the change in each component is a zero mean purely nondeterministic stationary stochastic process. Any known deterministic components can be subtracted before the analysis is begun. It follows that there will always exist a multivariate Wold representation:
(1 - B)xt = C(B)εt,    (3.1)
taken to mean that both sides will have the same spectral matrix. Further, C(B) will be uniquely defined by the conditions that the function det[C(z)], z = e^{iω}, have all zeroes on or outside the unit circle, and that
C(0) = IN, the N × N identity matrix (see Hannan (1970, p. 66)). In this representation the εt are zero mean white noise vectors with

E[εt ε′s] = 0, t ≠ s,
          = G, t = s,

so that only contemporaneous correlations can occur. The moving average polynomial C(B) can always be expressed as

C(B) = C(1) + (1 - B)C*(B)    (3.2)
by simply rearranging the terms. If C(B) is of finite order, then C*(B) will be of finite order. If C*(1) is identically zero, then a similar expression involving (1 - B)² can be defined. The relationship between error correction models and co-integration was first pointed out in Granger (1981). A theorem showing precisely that co-integrated series can be represented by error correction models was originally stated and proved in Granger (1983). The following version is therefore called the Granger Representation Theorem. Analysis of related but more complex cases is covered by Johansen (1985) and Yoo (1985).

Granger Representation Theorem: If the N × 1 vector xt given in (3.1) is co-integrated with d = 1, b = 1 and with co-integrating rank r, then:

(1) C(1) is of rank N - r.
(2) There exists a vector ARMA representation

A(B)xt = d(B)εt    (3.3)
with the properties that A(1) has rank r and d(B) is a scalar lag polynomial with d(1) finite, and A(0) = IN. When d(B) = 1, this is a vector autoregression.
(3) There exist N × r matrices, α, γ, of rank r such that

α′C(1) = 0,   C(1)γ = 0,   A(1) = γα′.

(4) There exists an error correction representation with zt = α′xt, an r × 1 vector of stationary random variables:

A*(B)(1 - B)xt = -γzt-1 + d(B)εt    (3.4)
with A*(0) = IN.
(5) The vector zt is given by

zt = K(B)εt,    (3.5)
(1 - B)zt = -α′γzt-1 + J(B)εt,    (3.6)
where K(B) is an r × N matrix of lag polynomials given by α′C*(B) with all elements of K(1) finite with rank r, and det(α′γ) > 0.
(6) If a finite vector autoregressive representation is possible, it will have the form given by (3.3) and (3.4) above with d(B) = 1 and both A(B) and A*(B) as matrices of finite polynomials.

In order to prove the Theorem the following lemma on determinants and adjoints of singular matrix polynomials is needed.

Lemma 1: If G(λ) is a finite valued N × N matrix polynomial on λ ∈ [0, 1], with rank G(0) = N - r for 0 ≤ r ≤ N, and if G*(0) ≠ 0 in G(λ) = G(0) + λG*(λ), then

(i) det(G(λ)) = λ^r g(λ) with g(0) finite,
(ii) Adj(G(λ)) = λ^{r-1} H(λ), where IN is the N × N identity matrix, 1 ≤ rank(H(0)) ≤ r, and H(0) is finite.

Proof: The determinant of G can be expressed in a power series in λ as

det(G(λ)) = Σ_{i=0}^{∞} di λ^i.
Each di is a sum of a finite number of products of elements of G(λ) and therefore is itself finite valued. Each has some terms from G(0) and some from λG*(λ). Any product with more than N - r terms from G(0) will be zero because this will be the determinant of a submatrix of larger order than the rank of G(0). The only possible non-zero terms will have r or more terms from λG*(λ) and therefore will be associated with powers of λ of r or more. The first possible nonzero di is dr. Defining

g(λ) = Σ_{i=r}^{∞} di λ^{i-r}
establishes the first part of the lemma since dr must be finite. To establish the second statement, express the adjoint matrix of G in a power series in λ:

Adj G(λ) = Σ_{i=0}^{∞} λ^i Hi.
Since the adjoint is a matrix composed of elements which are determinants of order N - 1, the above argument establishes that the first r - 1 terms must be identically zero. Thus
Adj G(λ) = λ^{r-1} Σ_{i=r-1}^{∞} λ^{i-r+1} Hi = λ^{r-1} H(λ).

Because the elements of Hr-1 are products of finitely many finite numbers, H(0) must be finite. The product of a matrix and its adjoint will always give the determinant, so:

λ^r g(λ)IN = (G(0) + λG*(λ))H(λ) = G(0)H(λ)λ^{r-1} + λ^r G*(λ)H(λ).

Equating powers of λ we get G(0)H(0) = 0. Thus the rank of H(0) must be less than or equal to r as it lies entirely in the column null space of the rank N - r matrix G(0). If r = 1, the first term in the expression for the adjoint will simply be the adjoint of G(0) which will have rank 1 since G(0) has rank N - 1. Q.E.D.

Proof of Granger Representation Theorem: The conditions of the Theorem suppose the existence of a Wold representation as in (3.1) for an N vector of random variables xt which are co-integrated. Suppose the co-integrating vector is α so that zt = α′xt is an r-dimensional stationary purely nondeterministic time series with invertible moving average representation. Multiplying α′ times the moving average representation in (3.1) gives
(1 - B)zt = (α′C(1) + (1 - B)α′C*(B))εt.

For zt to be I(0), α′C(1) must equal 0. Any vector with this property will be a co-integrating vector; therefore C(1) must have rank N - r with a null space containing all co-integrating vectors. It also follows that α′C*(B) must be an invertible moving average representation and in particular α′C*(1) ≠ 0. Otherwise the co-integration would be with b = 2 or higher. Statement (2) is established using Lemma 1, letting λ = (1 - B), G(λ) = C(B), H(λ) = A(B), and g(λ) = d(B). Since C(B) has full rank and equals IN at B = 0, its inverse is A(0) which is also IN. Statement (3) follows from recognition that A(1) has rank between 1 and r and lies in the null space of C(1). Since α spans this null space, A(1) can be written as linear combinations of the co-integrating vectors: A(1) = γα′.
Statement (4) follows by manipulation of the autoregressive structure. Rearranging terms in (3.3) gives:

[Ã(B) + A(1)](1 - B)xt = -A(1)xt-1 + d(B)εt,
A*(B)(1 - B)xt = -γzt-1 + d(B)εt,

with A*(0) = A(0) = IN. The fifth condition follows from direct substitution in the Wold representation. The definition of co-integration implies that this moving average be stationary and invertible. Rewriting the error correction representation with A*(B) = I + A**(B) where A**(0) = 0, and premultiplying by α′ gives:

(1 - B)zt = -α′γzt-1 + [α′d(B) + α′A**(B)C(B)]εt = -α′γzt-1 + J(B)εt.

For this to be equivalent to the stationary moving average representation the autoregression must be invertible. This requires that det(α′γ) > 0. If the determinant were zero then there would be at least one unit root, and if the determinant were negative, then for some value of w between zero and one, det(Ir - (Ir - α′γ)w) = 0, implying a root inside the unit circle. Condition six follows by repeating the previous steps, setting d(B) = 1. Q.E.D.

Stronger results can be obtained by further restrictions on the multiplicity of roots in the moving average representations. For example, Yoo (1985), using Smith-Macmillan forms, finds conditions which establish that d(1) ≠ 0, that A*(1) is of full rank, and that facilitate the transformation from error correction models to co-integrated models. However, the results given above are sufficient for the estimation and testing problems addressed in this paper. The autoregressive and error correction representations given by (3.3) and (3.4) are closely related to the vector autoregressive models so commonly used in econometrics, particularly in the case when d(B) can reasonably be taken to be 1. However, each differs in an important fashion from typical VAR applications. In the autoregressive representation

A(B)xt = εt,

the co-integration of the variables xt generates a restriction which makes A(1) singular. For r = 1, this matrix will only have rank 1. The analysis of such systems from an innovation accounting point of view is treacherous as some numerical approaches to calculating the moving average representation are highly unstable. The error correction representation
A*(B)(1 - B)xt = -γα′xt-1 + εt

looks more like a standard vector autoregression in the differences of the data. Here the co-integration is implied by the presence of the levels of the variables, so a pure VAR in differences will be misspecified if the variables are co-integrated. Thus vector autoregressions estimated with co-integrated data will be misspecified if the data are differenced, and will have omitted important constraints if the data are used in levels. Of course, these constraints will be satisfied asymptotically but efficiency gains and improved multistep forecasts may be achieved by imposing them. As xt ~ I(1), zt ~ I(0), it should be noted that all terms in the error correction models are I(0). The converse also holds; if xt ~ I(1) are generated by an error correction model, then xt is necessarily co-integrated. It may also be noted that if xt ~ I(0), the generation process can always be written in the error correction form and so, in this case, the equilibrium concept has no impact. As mentioned above, typical empirical examples of error correcting behavior are formulated as the response of one variable, the dependent variable, to shocks of another, the independent variable. In this paper all the variables are treated as jointly endogenous; nevertheless the structure of the model may imply various Granger causal orderings and weak and strong exogeneity conditions as in Engle, Hendry, and Richard (1983). For example, a bivariate co-integrated system must have a causal ordering in at least one direction. Because the z's must include both variables and γ cannot be identically zero, they must enter into one or both of the equations. If the error correction term enters into both equations, neither variable can be weakly exogenous for the parameters of the other equation because of the cross equation restriction. The notion of co-integration can in principle be extended to series with trends or explosive autoregressive roots. In these cases the co-integrating vector would still be required to reduce the series to stationarity. Hence the trends would have to be proportional and any explosive roots would have to be identical for all the series. We do not consider these cases in this paper and recognize that they may complicate the estimation and testing problems.
4. ESTIMATING CO-INTEGRATED SYSTEMS
In defining different forms for co-integrated systems, several estimation procedures have been implicitly discussed. Most convenient is the error correction form (particularly if it can be assumed that there is no moving average term). There remain cross-equation restrictions involving the parameters of the co-integrating vectors; and therefore the maximum
likelihood estimator, under Gaussian assumptions, requires an iterative procedure. In this section, we will propose another estimator which is a two step estimator. In the first step the parameters of the co-integrating vector are estimated and in the second these are used in the error correction form. Both steps require only single equation least squares and it will be shown that the result is consistent for all the parameters. The procedure is far more convenient because the dynamics do not need to be specified until the error correction structure has been estimated. As a byproduct we obtain some test statistics useful for testing for co-integration. From (3.5) the sample moment matrix of the data can be directly expressed. Let the moment matrix divided by T² be denoted by:

MT = (1/T²) Σt xt x′t.

Recalling that zt = α′xt, (3.5) implies that

α′MT = Σt [K(B)εt] x′t / T².
Following the argument of Dickey and Fuller (1979) or Stock (1984), it can be shown that for processes satisfying (3.1),

lim(T→∞) E(MT) = M, a finite nonzero matrix,    (4.1)

and

α′M = 0, or (vec α)′(I ⊗ M) = 0.    (4.2)
Although the moment matrix of data from a co-integrated process will be nonsingular for any sample, in the limit it will have rank N - r. This accords well with the common observation that economic time series data are highly collinear so that moment matrices may be nearly singular even when samples are large. Co-integration appears to be a plausible hypothesis from a data analytic point of view. Equations (4.2) do not uniquely define the co-integrating vectors unless arbitrary normalizations are imposed. Let q and Q be arrays which incorporate these normalizations by reparametrizing α into θ, a j × 1 matrix of unknown parameters which lie in a compact subset of R^j:

vec α = q + Qθ.    (4.3)
Typically q and Q will be all zeros and ones, thereby defining one coefficient in each column of α to be unity and defining rotations if r > 1. The parameters θ are said to be "identified" if there is a unique solution to (4.2), (4.3). This solution is given by

(I ⊗ M)Qθ = -(I ⊗ M)q,    (4.4)
where by the assumption of identification, (I ⊗ M)Q has a left inverse even though M does not. As the moment matrix MT will have full rank for finite samples, a reasonable approach to estimation is to minimize the sum of squared deviations from equilibrium. In the case of a single co-integrating vector, α̂ will minimize α′MTα subject to any restrictions such as (4.3) and the result will be simply ordinary least squares. For multiple co-integrating vectors, define α̂ as the minimizer of the trace of α′MTα. The estimation problem becomes:

Min(α s.t. (4.3)) tr(α′MTα) = Min(α s.t. (4.3)) (vec α)′(I ⊗ MT)(vec α)
                            = Min(θ) (q + Qθ)′(I ⊗ MT)(q + Qθ),

which implies the solution

θ̂ = -(Q′(I ⊗ MT)Q)⁻¹ (Q′(I ⊗ MT)q),   vec α̂ = q + Qθ̂.    (4.5)
This approach to estimation should provide a very good approximation to the true co-integrating vector because it is seeking vectors with minimal residual variance and asymptotically all linear combinations of x will have infinite variance except those which are co-integrating vectors. When r = 1 this estimate is obtained simply by regressing the variable normalized to have a unit coefficient upon the other variables. This regression will be called the “co-integrating regression” as it attempts to fit the long run or equilibrium relationship without worrying about the dynamics. It will be shown to provide an estimate of the elements of the co-integrating vector. Such a regression has been pejoratively called a “spurious” regression by Granger and Newbold (1974) primarily because the standard errors are highly misleading. They were particularly concerned about the non-co-integrated case where there was no relationship but the unit root in the error process led to a low Durbin Watson, a high R2, and apparently high significance of the coefficients. Here we only seek coefficient estimates to use in the second stage and for tests of the equilibrium relationship. The distribution of the estimated coefficients is investigated in Stock (1984). When N = 2, there are two possible regressions depending on the normalization chosen. The nonuniqueness of the estimate derives from the well known fact that the least squares fit of a reverse regression will not give the reciprocal of the coefficient in the forward regression. In this case, however, the normalization matters very little. As the moment matrix approaches singularity, the R2 approaches 1 which is the product of the forward and reverse regression coefficients. This would be exactly true if there were only two data points which, of course, defines a singular matrix. For variables which are trending together, the correlation
approaches one as each variance approaches infinity. The regression line passes nearly through the extreme points almost as if there were just two observations. Stock (1984) in Theorem 3 proves the following proposition:

Proposition 1: Suppose that xt satisfies (3.1) with C*(B) absolutely summable, that the disturbances have finite fourth absolute moments, and that xt is co-integrated (1, 1) with r co-integrating vectors satisfying (4.3) which identify θ. Then, defining θ̂ by (4.5),

T^{1-δ}(θ̂ - θ) →p 0 for δ > 0.    (4.6)
The proposition establishes that the estimated parameters converge very rapidly to their probability limits. It also establishes that the estimates are consistent with a finite sample bias of order 1/T. Stock presents some Monte Carlo examples to show that these biases may be important for small samples and gives expressions for calculating the limiting distribution of such estimates. The two step estimator proposed for this co-integrated system uses the estimate of α from (4.5) as a known parameter in estimating the error correction form of the system of equations. This substantially simplifies the estimation procedure by imposing the cross-equation restrictions and allows specification of the individual equation dynamic patterns separately. Notice that the dynamics did not have to be specified in order to estimate α. Surprisingly, this two-step estimator has excellent properties; as shown in the Theorem below, it is just as efficient as the maximum likelihood estimator based on the known value of α.

Theorem 2: The two-step estimator of a single equation of an error correction system, obtained by taking α̂ from (4.5) as the true value, will have the same limiting distribution as the maximum likelihood estimator using the true value of α. Least squares standard errors will be consistent estimates of the true standard errors.

Proof: Rewrite the first equation of the error correction system (3.4) as

yt = γẑt-1 + Wtβ + εt + γ(zt-1 - ẑt-1),   zt = Xtα,   ẑt = Xtα̂,

where Xt = x′t, W is an array with selected elements of Δxt-i, and y is an element of Δxt so that all regressors are I(0). Then, letting the same variables without subscripts denote data arrays,

√T (γ̂ - γ, β̂ - β)′ = [(ẑ, W)′(ẑ, W)/T]⁻¹ [(ẑ, W)′(ε + γ(z - ẑ))/√T].
This expression simplifies because ẑ′(z - ẑ) = 0. From Fuller (1976) or Stock (1984), X′X/T² and X′W/T are both of order 1. Rewriting,

W′(z - ẑ)/√T = [W′X/T][T(α - α̂)][1/√T],

and therefore the first and second factors to the right of the equal sign are of order 1 and the third goes to zero, so that the entire expression vanishes asymptotically. Because the terms in (z - ẑ)/√T vanish asymptotically, least squares standard errors will be consistent. Letting S = plim [(ẑ, W)′(ẑ, W)/T],

√T (γ̂ - γ, β̂ - β)′ →A D(0, σ²S⁻¹),

where D represents the limiting distribution. Under additional but standard assumptions, this could be guaranteed to be normal. To establish that the estimator using the true value of α has the same limiting distribution it is sufficient to show that the probability limit of [(z, W)′(z, W)/T] is also S and that z′ε/√T has the same limiting distribution as ẑ′ε/√T. Examining the off diagonal terms of S first,

ẑ′W/T - z′W/T = T(α̂ - α)′[W′X/T](1/T).

The first and second factors are of order 1 and the third is 1/T so the entire expression vanishes asymptotically:

(ẑ - z)′(ẑ - z)/T = z′z/T - ẑ′ẑ/T = T(α̂ - α)′[X′X/T²] T(α̂ - α)(1/T).

Again, the first three factors are of order 1 and the last is 1/T so even though the difference between these covariance matrices is positive definite, it will vanish asymptotically. Finally,

(ẑ - z)′ε/√T = T(α̂ - α)′[X′ε/T](1/√T),
which again vanishes asymptotically. Under standard conditions the estimator using knowledge of α will be asymptotically normal and therefore the two-step estimator will also be asymptotically normal under these conditions. This completes the proof. Q.E.D.

A simple example will illustrate many of these points and motivate the approach to testing described in the next section. Suppose there are two series, x1t and x2t, which are jointly generated as a function of possibly correlated white noise disturbances ε1t and ε2t according to the following model:

x1t + βx2t = u1t,   u1t = u1t-1 + ε1t,    (4.7)
x1t + αx2t = u2t,   u2t = ρu2t-1 + ε2t,   |ρ| < 1.    (4.8)
Clearly the parameters α and β are unidentified in the usual sense as there are no exogenous variables and the errors are contemporaneously
correlated. The reduced form for this system will make x1t and x2t linear combinations of u1t and u2t and therefore both will be I(1). The second equation describes a particular linear combination of the random variables which is stationary. Hence x1t and x2t are CI(1, 1) and the question is whether it would be possible to detect this and estimate the parameters from a data set. Surprisingly, this is easy to do. A linear least squares regression of x1t on x2t produces an excellent estimate of α. This is the "co-integrating regression." All linear combinations of x1t and x2t except that defined in equation (4.8) will have infinite variance and, therefore, least squares is easily able to estimate α. The correlation between x2t and u2t which causes the simultaneous equations bias is of a lower order in T than the variance of x2t. In fact the reverse regression of x2t on x1t has exactly the same property and thus gives a consistent estimate of 1/α. These estimators converge even faster to the true value than standard econometric estimates. While there are other consistent estimates of α, several apparently obvious choices are not. For example, regression of the first differences of x1 on the differences of x2 will not be consistent, and the use of Cochrane-Orcutt or other serial correlation corrections in the co-integrating regression will produce inconsistent estimates. Once the parameter α has been estimated, the others can be estimated in a variety of ways conditional on the estimate of α. The model in (4.7) and (4.8) can be expressed in the autoregressive representation (after subtracting the lagged values from both sides and letting δ = (1 - ρ)/(α - β)) as:

Δx1t = βδx1t-1 + αβδx2t-1 + η1t,    (4.9)
Δx2t = -δx1t-1 - αδx2t-1 + η2t,    (4.10)
where the h’s are linear combinations of the e’s. The error correction representation becomes: Dx1t = bdzt-1 + h1t,
(4.11)
Dx2t = -dzt-1 + h2t,
(4.12)
where zt = x1t + αx2t. There are three unknown parameters, but the autoregressive form apparently has four unknown coefficients while the error correction form has two. Once α is known, there are no longer constraints in the error correction form, which motivates the two-step estimator. Notice that if ρ → 1, the series are correlated random walks but are no longer co-integrated.
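As a purely illustrative aside (not part of the original paper), the following short Python sketch simulates the system (4.7)-(4.8) with arbitrary values α = 2, β = 1, ρ = .8 and runs the co-integrating regression of x1t on x2t. The slope picks out the stationary combination (here -α) and settles down far faster than the usual 1/√T rate, as the superconsistency argument above suggests.

import numpy as np

rng = np.random.default_rng(0)

def simulate(T, alpha=2.0, beta=1.0, rho=0.8):
    # x1t + beta*x2t = u1t, with u1t a random walk               (4.7)
    # x1t + alpha*x2t = u2t, with u2t a stationary AR(1), |rho|<1 (4.8)
    e1, e2 = rng.standard_normal(T), rng.standard_normal(T)
    u1 = np.cumsum(e1)
    u2 = np.zeros(T)
    for t in range(1, T):
        u2[t] = rho * u2[t - 1] + e2[t]
    x2 = (u1 - u2) / (beta - alpha)     # solve the two equations for x1t, x2t
    x1 = u1 - beta * x2
    return x1, x2

for T in (100, 10_000):
    x1, x2 = simulate(T)
    X = np.column_stack([np.ones(T), x2])            # co-integrating regression of x1 on x2
    slope = np.linalg.lstsq(X, x1, rcond=None)[0][1]
    print(T, slope)                                  # approaches -alpha = -2 very quickly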
5. TESTING FOR CO-INTEGRATION
It is frequently of interest to test whether a set of variables are cointegrated. This may be desired because of the economic implications such as whether some system is in equilibrium in the long run, or it may
be sensible to test such hypotheses before estimating a multivariate dynamic model. Unfortunately the set-up is nonstandard and cannot simply be viewed as an application of Wald, likelihood ratio, or Lagrange multiplier tests. The testing problem is closely related to tests for unit roots in observed series as initially formulated by Fuller (1976) and Dickey and Fuller (1979, 1981) and more recently by Evans and Savin (1981), Sargan and Bhargava (1983), and Bhargava (1984), and applied by Nelson and Plosser (1982). It also is related to the problem of testing when some parameters are unidentified under the null, as discussed by Davies (1977) and Watson and Engle (1985).
To illustrate the problems in testing such an hypothesis, consider the simple model in (4.7) and (4.8). The null hypothesis is taken to be no co-integration, or ρ = 1. If α were known, then a test for the null hypothesis could be constructed along the lines of Dickey and Fuller, taking zt as the series which has a unit root under the null. The distribution in this case is already nonstandard and was computed through a simulation by Dickey (1976). However, when α is not known, it must be estimated from the data. But if the null hypothesis that ρ = 1 is true, α is not identified. Thus only if the series are co-integrated can α be simply estimated by the "co-integrating regression," but a test must be based upon the distribution of a statistic when the null is true. OLS seeks the α which minimizes the residual variance, and the resulting residual is therefore most likely to appear stationary, so the distribution of the Dickey-Fuller test will reject the null too often if α must be estimated.
In this paper a set of seven test statistics is proposed for testing the null of non-co-integration against the alternative of co-integration. It is maintained that the true system is a bivariate linear vector autoregression with Gaussian errors where each of the series is individually I(1). As the null hypothesis is composite, similar tests will be sought so that the probability of rejection will be constant over the parameter set included in the null. See, for example, Cox and Hinkley (1974, pp. 134-136).
Two cases may be distinguished. In the first, the system is known to be of first order and therefore the null is defined by

Δyt = ε1t,   Δxt = ε2t,   (ε1t, ε2t)′ ~ N(0, Ω).   (5.1)
This is clearly the model implied by (4.11) and (4.12) when ρ = 1, which implies that δ = 0. The composite null thus includes all positive definite covariance matrices Ω. It will be shown below that all the test statistics are similar with respect to the matrix Ω so, without loss of generality, we take Ω = I. In the second case, the system is assumed merely to be a stationary linear system in the changes. Consequently, the null is defined over a full
set of stationary autoregressive and moving average coefficients as well as Ω. The "augmented" tests described below are designed to be asymptotically similar for this case, just as established by Dickey and Fuller for their univariate tests.
The seven test statistics proposed are all calculable by least squares. The critical values are estimated for each of these statistics by simulation using 10,000 replications. Using these critical values, the powers of the test statistics are computed by simulations under various alternatives. A brief motivation of each test is useful.
1. CRDW. After running the co-integrating regression, the Durbin Watson statistic is tested to see if the residuals appear stationary. If they are nonstationary, the Durbin Watson will approach zero and thus the test rejects non-co-integration (finds co-integration) if DW is too big. This was proposed recently by Bhargava (1984) for the case where the series is observed and the null and alternative are first order models.
2. DF. This tests the residuals from the co-integrating regression by running an auxiliary regression as described by Dickey and Fuller and outlined in Table 8.1. It also assumes that the first order model is correct.
3. ADF. The augmented Dickey-Fuller test allows for more dynamics in the DF regression and consequently is over-parametrized in the first order case but correctly specified in the higher order cases.
4. RVAR. The restricted vector autoregression test is similar to the two step estimator. Conditional on the estimate of the co-integrating vector from the co-integrating regression, the error correction representation is estimated. The test is whether the error correction term is significant. This test requires specification of the full system dynamics. In this case a first order system is assumed. By making the system triangular, the disturbances are uncorrelated, and under normality the t statistics are independent. The test is based on the sum of the squared t statistics.
5. ARVAR. The augmented RVAR test is the same as RVAR except that a higher order system is postulated.
6. UVAR. The unrestricted VAR test is based on a vector autoregression in the levels which is not restricted to satisfy the co-integration constraints. Under the null, these are not present anyway, so the test is simply whether the levels would appear at all, or whether the model can be adequately expressed entirely in changes. Again by triangularizing the coefficient matrix, the F tests from the two regressions can be made independent and the overall test is the sum of the two F's times their degrees of freedom, 2. This assumes a first order system again.
7. AUVAR. This is an augmented or higher order version of the above test.
To establish the similarity of these tests for the first order case for all positive definite symmetric matrices Ω, it is sufficient to show that the residuals from the regression of y on x for general Ω will be a scalar
multiple of the residuals for Ω = I. To show this, let ε1t and ε2t be drawn as independent standard normals. Then

yt = Σi=1,t ε1i,   xt = Σi=1,t ε2i,   (5.2)

and

ut = yt - xt (Σ xt yt)/(Σ xt²).   (5.3)

To generate y* and x* from Ω, let

ε*2t = cε2t,   ε*1t = aε2t + bε1t,   (5.4)

where c² = ωxx, a = ωyx/c, b² = ωyy - ω²yx/ωxx. Then substituting (5.4) in (5.2),

x* = cx,   y* = ay + bx,

u* = y* - x* (Σ y*t x*t)/(Σ x*t²) = ay + bx - cx (Σ (ayt + bxt)cxt)/(Σ c²xt²) = au,
thus showing the exact similarity of the tests. If the same random numbers are used, the same test statistics will be obtained regardless of W. In the more complicated but realistic case that the system is of infinite order but can be approximated by a p order autoregression, the statistics will only be asymptotically similar. Although exact similarity is achieved in the Gaussian fixed regressor model, this is not possible in time series models where one cannot condition on the regressors; similarity results are only asymptotic. Tests 5 and 7 are therefore asymptotically similar if the p order model is true but tests 1, 2, 4 and 6 definitely are not even asymptotically similar as these tests omit the lagged regressors. (This is analogous to the biased standard errors resulting from serially correlated errors.) It is on this basis that we prefer not to suggest the latter tests except in the first order case. Test 3 will also be asymptotically similar under the assumption that u, the residual from the cointegration regression, follows a p order process. This result is proven in Dickey and Fuller (1981, pp. 1065–1066). While the assumption that the system is p order allows the residuals to be of infinite order, there is presumably a finite autoregressive model, possibly of order less than p,
Table 8.1 The test statistics: reject for large values.

1. The Co-integrating Regression Durbin Watson: yt = αxt + c + ut. ξ1 = DW. The null is DW = 0.
2. Dickey Fuller Regression: Δut = -φut-1 + εt. ξ2 = τφ: the t statistic for φ.
3. Augmented DF Regression: Δut = -φut-1 + b1Δut-1 + ... + bpΔut-p + εt. ξ3 = τφ.
4. Restricted VAR: Δyt = β1ut-1 + ε1t; Δxt = β2ut-1 + γΔyt + ε2t. ξ4 = τ²β1 + τ²β2.
5. Augmented Restricted VAR: Same as (4) but with p lags of Δyt and Δxt in each equation. ξ5 = τ²β1 + τ²β2.
6. Unrestricted VAR: Δyt = β1yt-1 + β2xt-1 + c1 + ε1t; Δxt = β3yt-1 + β4xt-1 + γΔyt + c2 + ε2t. ξ6 = 2[F1 + F2], where F1 is the F statistic for testing β1 and β2 both equal to zero in the first equation, and F2 is the comparable statistic in the second.
7. Augmented Unrestricted VAR: The same as (6) except for p lags of Δxt and Δyt in each equation. ξ7 = 2[F1 + F2].

Notes: yt and xt are the original data sets and ut are the residuals from the co-integrating regression.
which will be a good approximation. One might therefore suggest some experimentation to find the appropriate value of p in either case. An alternative strategy would be to let p be a slowly increasing nonstochastic function of T, which is closely related to the test proposed by Phillips (1985) and Phillips and Durlauf (1985). Only substantial simulation experimentation will determine whether it is preferable to use a data based selection of p for this testing procedure although the evidence presented below shows that estimation of extraneous parameters will decrease the power of the tests. In Table 8.1, the seven test statistics are formally stated. In Table 8.2, the critical values and powers of the tests are considered when the system is first order. Here the augmented tests would be expected to be less powerful because they estimate parameters which are truly zero under both the null and alternative. The other four tests estimate no extraneous parameters and are correctly specified for this experiment. From Table 8.2 one can perform a 5 per cent test of the hypothesis of non-co-integration with the co-integrating regression Durbin Watson test, by simply checking DW from this regression and, if it exceeds 0.386, rejecting the null and finding co-integration. If the true model is Model II with r = .9 rather than 1, this will only be detected 20 per cent of the time; however if the true r = .8 this rises to 66 per cent. Clearly, test 1 is the best in each of the power calculations and should be preferred for this set-up, while test 2 is second in almost every case. Notice also that the augmented tests have practically the same critical values as the basic
Table 8.2 Critical values and power.

I. Model: Δy, Δx independent standard normal; 100 observations, 10,000 replications, p = 4.

                        Critical Values
Statistic   Name        1%      5%      10%
1           CRDW        .511    .386    .322
2           DF          4.07    3.37    3.03
3           ADF         3.77    3.17    2.84
4           RVAR        18.3    13.6    11.0
5           ARVAR       15.8    11.8     9.7
6           UVAR        23.4    18.6    16.0
7           AUVAR       22.6    17.9    15.5

II. Model: yt + 2xt = ut, Δut = (ρ - 1)ut-1 + εt; xt + yt = vt, Δvt = ηt; ρ = .8, .9; 100 observations, 1000 replications, p = 4.

                        Rejections per 100: ρ = .9
Statistic   Name        1%      5%      10%
1           CRDW        4.8     19.9    33.6
2           DF          2.2     15.4    29.0
3           ADF         1.5     11.0    22.7
4           RVAR        2.3     11.4    25.3
5           ARVAR       1.0      9.2    17.9
6           UVAR        4.3     13.3    26.1
7           AUVAR       1.6      8.3    16.3

                        Rejections per 100: ρ = .8
Statistic   Name        1%      5%      10%
1           CRDW        34.0    66.4    82.1
2           DF          20.5    59.2    76.1
3           ADF         7.8     30.9    51.6
4           RVAR        15.8    46.2    67.4
5           ARVAR       4.6     22.4    39.0
6           UVAR        19.0    45.9    63.7
7           AUVAR       4.8     18.3    33.4
tests; however, as expected, they have slightly lower power. Therefore, if it is known that the system is first order, the extra lags should not be introduced. Whether a pre-test of the order would be useful remains to be established. In Table 8.3 both the null and alternative hypotheses have fourth order autoregressions. Therefore the basic unaugmented tests now are misspecified while the augmented ones are correctly specified (although some of the intervening lags could be set to zero if this were known).
Table 8.3 Critical values and power with lags.

Model I: Δyt = .8Δyt-4 + εt, Δxt = .8Δxt-4 + ηt; 100 observations, 10,000 replications, p = 4; εt, ηt independent standard normal.

                        Critical Values
Statistic   Name        1%      5%      10%
1           CRDW        .455    .282    .209
2           DF          3.90    3.05    2.71
3           ADF         3.73    3.17    2.91
4           RVAR        37.2    22.4    17.2
5           ARVAR       16.2    12.3    10.5
6           UVAR        59.0    40.3    31.4
7           AUVAR       28.0    22.0    19.2

Model II: yt + 2xt = ut, Δut = (ρ - 1)ut-1 + .8Δut-4 + εt; yt + xt = vt, Δvt = .8Δvt-4 + ηt; ρ = .9, .8; 100 observations, 1000 replications, p = 4.

                        Rejections per 100: ρ = .9
Statistic   Name        1%      5%      10%
1           CRDW        15.6    39.9    65.6
2           DF          9.4     25.5    37.8
3           ADF         36.0    61.2    72.2
4           RVAR        .3      4.4     10.9
5           ARVAR       26.4    48.5    62.8
6           UVAR        .0      .5       3.5
7           AUVAR       9.4     26.8    40.3

                        Rejections per 100: ρ = .8
Statistic   Name        1%      5%      10%
1           CRDW        77.5    96.4    98.6
2           DF          66.8    89.7    96.0
3           ADF         68.9    90.3    94.4
4           RVAR        7.0     42.4    62.5
5           ARVAR       57.2    80.5    89.3
6           UVAR        2.5     10.8    25.9
7           AUVAR       32.2    53.0    67.7
Notice now the drop in the critical values of tests 1, 4, and 6 caused by their nonsimilarity. Using these new critical values, test 3 is the most powerful for the local alternative while at r = .8, test 1 is the best, closely followed by 2 and 3. The misspecified or unaugmented tests 4 and 6 perform very badly in this situation. Even though they were moderately powerful in Table 8.2, the performance here dismisses them from consideration. Although test 1 has the best performance overall, it is not the recommended choice from this experiment because the critical value is
so sensitive to the particular parameters within the null. For most types of economic data the differences are not white noise and, therefore, one could not in practice know what critical value to use. Test 3, the augmented Dickey-Fuller test, has essentially the same critical value for both finite sample experiments, has theoretically the same large sample critical value for both cases, and has nearly as good observed power properties in most comparisons; it is therefore the recommended approach. Because of its simplicity, the CRDW might be used for a quick approximate result. Fortunately, none of the best procedures require the estimation of the full system, merely the co-integrating regression and then perhaps an auxiliary time series regression.
This analysis leaves many questions unanswered. The critical values have only been constructed for one sample size and only for the bivariate case, although recently Engle and Yoo (1986) have calculated critical values for more variables and sample sizes using the same general approach. There is still no optimality theory for such tests and alternative approaches may prove superior. Research on the limiting distribution theory by Phillips (1985) and Phillips and Durlauf (1985) may lead to improvements in test performance. Nevertheless, it appears that the critical values for ADF given in Table 8.2 can be used as a rough guide in applied studies at this point. The next section will provide a variety of illustrations.
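To make the recommended procedure concrete, here is a minimal Python sketch (my own illustration, not part of the paper) of test 3: run the co-integrating regression, compute the augmented Dickey-Fuller statistic of Table 8.1 on its residuals, and compare it with the 5 per cent value 3.17 from Table 8.2. The helper names are invented for the example.

import numpy as np

def ols(X, y):
    # OLS coefficients, their t ratios and the residuals
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = e @ e / (len(y) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b, b / se, e

def adf_cointegration_stat(y, x, p=4):
    # step 1: co-integrating regression y_t = a*x_t + c + u_t
    T = len(y)
    _, _, u = ols(np.column_stack([x, np.ones(T)]), y)
    # step 2 (test 3, Table 8.1): Du_t = -phi*u_{t-1} + b_1*Du_{t-1} + ... + b_p*Du_{t-p} + e_t
    du = np.diff(u)
    X = np.column_stack([u[p:-1]] + [du[p - j:-j] for j in range(1, p + 1)])
    _, t, _ = ols(X, du[p:])
    return -t[0]          # xi_3 = tau_phi; reject non-co-integration when this is large

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(100))
y = 2.0 * x + rng.standard_normal(100)            # co-integrated by construction
print(adf_cointegration_stat(y, x, p=4))          # typically well above the 5 per cent value 3.17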
6. EXAMPLES
Several empirical examples will be presented to show performance of the tests in practice. The relationship between consumption and income will be studied in some detail as it was analyzed from an error correction point of view in DHSY and a time series viewpoint in Hall (1978) and others. Briefer analyses of wages and prices, short and long term interest rates, and the velocity of money will conclude this section. DHSY have presented evidence for the error correction model of consumption behavior from both empirical and theoretical points of view. Consumers make plans which may be frustrated; they adjust next period’s plans to recoup a portion of the error between income and consumption. Hall finds that U.S. consumption is a random walk and that past values of income have no explanatory power which implies that income and consumption are not co-integrated, at least if income does not depend on the error correction term. Neither of these studies models income itself and it is taken as exogenous in DHSY. Using U.S. quarterly real per capita consumption on nondurables and real per capita disposable income from 1947-I to 1981-II, it was first checked that the series were I(1). Regressing the change in consumption on its past level and two past changes gave a t statistic of +.77 which is even the wrong sign for consumption to be stationary in the levels. Running the same model with second differences on lagged first differ-
ences and two lags of second differences, the t statistic was -5.36 indicating that the first difference is stationary. For income, four past lags were used and the two t statistics were -.01 and -6.27 respectively, again establishing that income is I(1). The co-integrating regression of consumption (C) on income (Y) and a constant was run. The coefficient of Y was .23 (with a t statistic of 123 and an R2 of .99). The DW was however .465, indicating that by either table of critical values one rejects the null of “non-co-integration” or accepts co-integration at least at the 5 per cent level. Regressing the change in the residuals on past levels and four lagged changes, the t statistic on the level is 3.1 which is essentially the critical value for the 5 per cent ADF test. Because the lags are not significant, the DF regression was run giving a test statistic of 4.3 which is significant at the 1 per cent level, illustrating that when it is appropriate, it is a more powerful test. In the reverse regression of Y on C, the coefficient is 4.3 which has reciprocal .23, the same as the coefficient in the forward regression. The DW is now .463 and the t statistic from the ADF test is 3.2. Again the first order DF appears appropriate and gives a test statistic of 4.4. Whichever way the regression is run, the data rejects the null of non-cointegration at any level above 5 per cent. To establish that the joint distribution of C and Y is an error correction system, a series of models was estimated. An unrestricted vector autoregression of the change in consumption on four lags of consumption and income changes plus the lagged levels of consumption and income is given next in Table 8.4. The lagged levels are of the appropriate signs and sizes for an error correction term and are individually significant or nearly so. Of all the lagged changes, only the first lag of income change is significant. Thus the final model has the error correction term estimated from the co-integrating regression and one lagged change in income. The standard error of this model is even lower than the VAR suggesting the efficiency of the parameter restrictions. The final model passes a series of diagnostic tests for serial correlation, lagged dependent variables, non-linearities, ARCH, and omitted variables such as a time trend and other lags. One might notice that an easy model building strategy in this case would be to estimate the simplest error correction model first and then test for added lags of C and Y, proceeding in a “simple to general” specification search. The model building process for Y produced a similar model. The same unrestricted VAR was estimated and distilled to a simple model with the error correction term, first and fourth lagged changes in C and a fourth lagged change in Y. The error correction is not really significant with a t statistic of -1.1 suggesting that income may indeed be weakly exogenous even though the variables are co-integrated. In this case the standard error of the regression is slightly higher in the restricted model but the difference is not significant.The diagnostic tests are again generally good.
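Purely as an illustration of the two-step logic just described (the series below are simulated stand-ins with a made-up long-run coefficient of .23, not the U.S. consumption and income data), a short Python sketch: the first step recovers the long-run coefficient from the co-integrating regression, and the second regresses the change in consumption on the lagged equilibrium error and one lagged income change.

import numpy as np

rng = np.random.default_rng(2)
T = 140
y = np.cumsum(0.5 + rng.standard_normal(T))        # income-like I(1) series with drift
c = 0.23 * y + rng.standard_normal(T)              # consumption-like series, long-run coefficient .23

# step 1: co-integrating regression of C on Y and a constant
X1 = np.column_stack([y, np.ones(T)])
b1, *_ = np.linalg.lstsq(X1, c, rcond=None)
z = c - X1 @ b1                                     # estimated equilibrium error

# step 2: error-correction equation  dC_t = g1*z_{t-1} + g2*dY_{t-1} + const + e_t
dc, dy = np.diff(c), np.diff(y)
X2 = np.column_stack([z[1:-1], dy[:-1], np.ones(T - 2)])
b2, *_ = np.linalg.lstsq(X2, dc[1:], rcond=None)
print(b1[0], b2)    # long-run coefficient near .23; a negative loading on z_{t-1} is expected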
Campbell (1985) uses a similar structure to develop a test of the permanent income hypothesis which incorporates “saving for a rainy day” behavior. In this case the error correction term is approximately saving which should be high when income is expected to fall (such as when current income is above permanent income). Using a broader measure of consumption and narrower measure of income he finds the error correction term significant in the income equation. The second example examines monthly wages and prices in the U.S. The data are logs of the consumer price index and production worker wage in manufacturing over the three decades of 50’s, 60’s and 70’s. Again, the test is run both directions to show that there is little difference in the result. For each of the decades there are 120 observations so the critical values as tabulated should be appropriate. For the full sample period the Durbin Watson from the cointegrating regression in either direction is a notable .0054. One suspects that this will be insignificantly different from zero even for samples much larger than this. Looking at the augmented Dickey Fuller test statistic, for p on w we find -.6 and for w on p we find +.2. Adding a twelfth lag in the ADF tests improves the fit substantially and raises the test statistics to .88 and 1.50 respectively. In neither case do these approach the critical values of 3.2. The evidence accepts the null of non-co-integration for wages and prices over the thirty year period. For individual decades none of the ADF tests are significant at even the 10 per cent level. The largest of these six test statistics is for the 50’s regressing p on w which reaches 2.2, which is still below the 10 per cent level of 2.8. Thus we find evidence that wages and prices in the U.S. are not co-integrated. Of course, if a third variable such as productivity were available (and were I(1)), the three might be co-integrated. The next example tests for co-integration between short and long term interest rates. Using monthly yields to maturity of 20 year treasury bonds as the long term rate (Rt) and the one month treasury bill rate rt as the short rate, co-integration was tested with data from February, 1952 to December, 1982. With the long rate as the dependent variable, the co-integrating regression gave: Rt = 1.93 + .785rt + ERt,
DW = .126,   R² = .866,

with a t ratio of 46 on the short rate. The DW is not significantly different from zero, at least by Tables 8.2 and 8.3; however, the correct critical value depends upon the dynamics of the errors (and of course the sample size is 340 – much greater than for the tabulated values). The ADF test with four lags gives:

ΔERt = -.06ERt-1 + .25ΔERt-1 - .24ΔERt-2 + .24ΔERt-3 - .09ΔERt-4,

with t ratios of (-3.27), (4.55), (-4.15), (-4.15), and (-1.48) respectively.
When the twelfth lag is added instead of the fourth, the test statistic rises to 3.49. Similar results were found with the reverse regression where the statistics were 3.61 and 3.89 respectively. Each of these test statistics exceeds the 5 per cent critical values from Table 8.3. Thus these interest rates are apparently co-integrated.
This finding is entirely consistent with the efficient market hypothesis. The one-period excess holding yield on long bonds as linearized by Shiller and Campbell (1984) is:

EHY = DRt-1 - (D - 1)Rt - rt,

where D is the duration of the bond, which is given by

D = [(1 + c)^i - 1] / [c(1 + c)^(i-1)],

with c as the coupon rate and i the number of periods to maturity. The efficient market hypothesis implies that the expectation of the EHY is a constant representing a risk premium if agents are risk averse. Setting EHY = k + e and rearranging terms gives the error correction form:

ΔRt = (D - 1)⁻¹(Rt-1 - rt-1) + k′ + εt,

implying that R and r are co-integrated with a unit coefficient and that, for long maturities, the coefficient of the error correction term is c, the coupon rate. If the risk premium is varying over time but is I(0) already, then it need not be included in the test of co-integration.
The final example is based upon the quantity theory equation: MV = PY. Empirical implications stem from the assumption that velocity is constant or at least stationary. Under this condition, log M, log P, and log Y should be co-integrated with known unit parameters. Similarly, nominal money and nominal GNP should be co-integrated. A test of this hypothesis was constructed for four measures of money: M1, M2, M3, and L, total liquid assets. In each case the sample period was 1959-I through 1981-II, quarterly. The ADF test statistics were:

M1   1.81   1.90
M2   3.23   3.13
M3   2.65   2.55
L    2.15   2.13
where in the first column the log of the monetary aggregate was the dependent variable while in the second, it was log GNP. For only one of the M2 tests is the test statistic significant at the 5 per cent level, and none of the other aggregates are significant even at the 10 per cent level. (In several cases it appears that the DF test could be used and would therefore be more powerful.) Thus the most stable relationship is
between M2 and nominal GNP but for the other aggregates, we reject co-integration and the stationarity of velocity.
7. CONCLUSION
If each element of a vector of time series xt is stationary only after differencing, but a linear combination a¢xt need not be differenced, the time series xt have been defined to be co-integrated of order (1, 1) with co-integrating vector a. Interpreting a¢xt = 0 as a long run equilibrium, co-integration implies that equilibrium holds except for a stationary, finite variance disturbance even though the series themselves are nonstationary and have infinite variance. The paper presents several representations for co-integrated systems including an autoregressive representation and an error-correction representation. A vector autoregression in differenced variables is incompatible with these representations because it omits the error correction term. The vector autoregression in the levels of the series ignores cross equation constraints and will give a singular autoregressive operator. Consistent and efficient estimation of error correction models is discussed and a two step estimator proposed. To test for co-integration, seven statistics are formulated which are similar under various maintained hypotheses about the generating model. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined, and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is not co-integrated with M1, M3, or total liquid assets, although it is possibly with M2.
REFERENCES

Bhargava, Alok (1984): "On the Theory of Testing for Unit Roots in Observed Time Series," manuscript, ICERD, London School of Economics.
Box, G. E. P., and G. M. Jenkins (1970): Time Series Analysis, Forecasting and Control. San Francisco: Holden Day.
Campbell, John Y. (1985): "Does Saving Anticipate Declining Labor Income? An Alternative Test of the Permanent Income Hypothesis," manuscript, Princeton University.
Cox, D. R., and D. V. Hinkley (1974): Theoretical Statistics. London: Chapman and Hall.
Currie, D. (1981): "Some Long-Run Features of Dynamic Time-Series Models," The Economic Journal, 91, 704-715.
Davidson, J. E. H., David F. Hendry, Frank Srba, and Steven Yeo (1978): "Econometric Modeling of the Aggregate Time-series Relationship Between Consumer's Expenditure and Income in the United Kingdom," Economic Journal, 88, 661-692.
Davies, R. R. (1977): "Hypothesis Testing When a Nuisance Parameter is Present Only Under the Alternative," Biometrika, 64, 247-254.
Dawson, A. (1981): "Sargan's Wage Equation: A Theoretical and Empirical Reconstruction," Applied Economics, 13, 351-363.
Dickey, David A. (1976): "Estimation and Hypothesis Testing for Nonstationary Time Series," Ph.D. Thesis, Iowa State University, Ames.
Dickey, David A., and Wayne A. Fuller (1979): "Distribution of the Estimators for Autoregressive Time Series With a Unit Root," Journal of the American Statistical Assoc., 74, 427-431.
——— (1981): "The Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root," Econometrica, 49, 1057-1072.
Engle, Robert F., David F. Hendry, and J. F. Richard (1983): "Exogeneity," Econometrica, 51, 277-304.
Engle, Robert F., and Byung Sam Yoo (1986): "Forecasting and Testing in Co-integrated Systems," U.C.S.D. Discussion Paper.
Evans, G. B. A., and N. E. Savin (1981): "Testing for Unit Roots: 1," Econometrica, 49, 753-779.
Feller, William (1968): An Introduction to Probability Theory and Its Applications, Volume I. New York: John Wiley.
Fuller, Wayne A. (1976): Introduction to Statistical Time Series. New York: John Wiley.
Granger, C. W. J. (1981): "Some Properties of Time Series Data and Their Use in Econometric Model Specification," Journal of Econometrics, 16, 121-130.
——— (1983): "Co-Integrated Variables and Error-Correcting Models," unpublished UCSD Discussion Paper 83-13.
Granger, C. W. J., and P. Newbold (1977): Forecasting Economic Time Series. New York: Academic Press.
——— (1974): "Spurious Regressions in Econometrics," Journal of Econometrics, 2, 111-120.
Granger, C. W. J., and A. A. Weiss (1983): "Time Series Analysis of Error-Correcting Models," in Studies in Econometrics, Time Series, and Multivariate Statistics. New York: Academic Press, 255-278.
Hall, Robert E. (1978): "A Stochastic Life Cycle Model of Aggregate Consumption," Journal of Political Economy, 86, 971-987.
Hannan, E. J. (1970): Multiple Time Series. New York: Wiley.
Hendry, David F., and T. von Ungern-Sternberg (1981): "Liquidity and Inflation Effects on Consumer's Expenditure," in Essays in the Theory and Measurement of Consumer's Behavior, ed. by A. S. Deaton. Cambridge: Cambridge University Press.
Johansen, Soren (1985): "The Mathematical Structure of Error Correction Models," manuscript, University of Copenhagen.
Nelson, C. R., and Charles Plosser (1982): "Trends and Random Walks in Macroeconomic Time Series," Journal of Monetary Economics, 10, 139-162.
Pagan, A. R. (1984): "Econometric Issues in the Analysis of Regressions with Generated Regressors," International Economic Review, 25, 221-248.
Phillips, A. W. (1957): "Stabilization Policy and the Time Forms of Lagged Responses," Economic Journal, 67, 265-277.
Phillips, P. C. B. (1985): "Time Series Regression with Unit Roots," Cowles Foundation Discussion Paper No. 740, Yale University.
Phillips, P. C. B., and S. N. Durlauf (1985): "Multiple Time Series Regression with Integrated Processes," Cowles Foundation Discussion Paper 768.
Salmon, M. (1982): "Error Correction Mechanisms," The Economic Journal, 92, 615-629.
Sargan, J. D. (1964): "Wages and Prices in the United Kingdom: A Study in Econometric Methodology," in Econometric Analysis for National Economic Planning, ed. by P. E. Hart, G. Mills, and J. N. Whittaker. London: Butterworths.
Sargan, J. D., and A. Bhargava (1983): "Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk," Econometrica, 51, 153-174.
Shiller, R. J., and J. Y. Campbell (1984): "A Simple Account of the Behaviour of Long-Term Interest Rates," American Economic Review, 74, 44-48.
Stock, James H. (1984): "Asymptotic Properties of Least Squares Estimators of Co-Integrating Vectors," manuscript, Harvard University.
Watson, Mark W., and Robert Engle (1985): "A Test for Regression Coefficient Stability with a Stationary AR(1) Alternative," forthcoming in Review of Economics and Statistics.
Yoo, Sam (1985): "Multi-co-integrated Time Series and Generalized Error Correction Models," manuscript in preparation, U.C.S.D.
CHAPTER 9
Developments in the Study of Cointegrated Economic Variables* C. W. J. Granger**
1.
INTRODUCTION
At the least sophisticated level of economic theory lies the belief that certain pairs of economic variables should not diverge from each other by too great an extent, at least in the long run. Thus, such variables may drift apart in the short run or according to seasonal factors, but if they continue to be too far apart in the long-run, then economic forces, such as a market mechanism or government intervention, will begin to bring them together again. Examples of such variables are interest rates on assets of different maturities, prices of a commodity in different parts of the country, income and expenditure by local government and the value of sales and production costs of an industry. Other possible examples would be prices and wages, imports and exports, market prices of substitute commodities, money supply and prices and spot and future prices of a commodity. In some cases an economic theory involving equilibrium concepts might suggest close relations in the long-run, possibly with the addition of yet further variables. However, in each case the correctness of the beliefs about long-term relatedness is an empirical question. The idea underlying cointegration allows specification of models that capture part of such beliefs, at least for a particular type of variable that is frequently found to occur in macroeconomics. Since a concept such as the long-run is a dynamic one, the natural area for these ideas is that of timeseries theory and analysis. It is thus necessary to start by introducing some relevant time series models. Consider a single series xt, measured at equal intervals of time. Time series theory starts by considering the generating mechanism for the series. This mechanism should be able to generate all of the statistical properties of the series, or at very least the conditional mean, variance and temporal autocorrelations, that is the “linear properties” of the * Oxford Bulletin of Economics and Statistics, 48, 1986, 213–228. ** I would like to acknowledge the excellent hospitality that I enjoyed at Nuffield College and the Institute of Economics and Statistics, Oxford whilst this paper was prepared.
series, conditional on past data. Some series appear to be “stationary”, which essentially implies that the linear properties exist and are timeinvariant. Here we are concerned with the weaker but more technical requirement that the series has a spectrum which is finite but non-zero at all frequencies. Such a series will be called I(0), denoting “integrated of order zero.” Some series need to be differenced to achieve these properties and these will be called integrated of order one, denoted xt ~ I(1). More generally, if a series needs differencing d time to become I(0), it is called integrated of order d, denoted xt ~ I(d). Let Db denote application of the difference operator b times, if xt ~ I(d) then the bth difference series Dbxt is I(d - b). Sometimes a series needs to be integrated (summed) to become I(0), for example the difference of an I(0) series is I(-1) and its integral is again I(0). Most of this paper will concentrate on the practically important cases when d = 0 or 1. The simplest example of an I(0) series is a white noise et, so that rk = corr(et, et-k) = 0 for all k π 0. Another example is a stationary AR(1) series, xt generated by xt = axt-1 + et (1.1) where |a| < 1 and et is white noise with zero mean. The simplest example of an I(1) series is a random walk, where xt is generated by xt = xt-1 + et (1.2) as would theoretically occur for a speculative price generated by an informationally efficient market. Here, the first differenced series is white noise. The most general I(1) series replaces et in equation (1.2) by any I(0) series not necessarily having zero mean. Many macro economic series appear to be I(1), as suggested by the “typical spectral shape” (see Granger (1966)), by analysis of Box-Jenkins (1970) modeling techniques or by direct testing, as in Nelson and Plosser (1982). Throughout the paper all error processes, such as those in (1.1), (1.2) are assumed to have finite first and second moments. There are many substantial differences between I(0) and I(1) series. An I(0) series has a mean and there is a tendency for the series to return to the mean, so that it tends to fluctuate around the mean, crossing that value frequently and with rare extensive excursions. Autocorrelations decline rapidly as lag increases and the process gives low weights to events in the medium to distant past, and thus effectively has a finite memory. An I(1) process without drift will be relatively smooth, will wander widely and will only rarely return to an earlier value. In fact, for a random walk, for a fixed arbitrary value the expected time until the process again passes through this value is infinite. This does not mean that returns do not occur, but that the distribution of the time to return is very long-tailed. Autocorrelations {rk} are all near one in magnitude even for large k; an innovation to the process affects all later values and so the process has indefinitely long memory. To see this, note that the pure random walk I(1) solves to give
xt = εt + εt-1 + εt-2 + ... + ε1   (1.3)
assuming the process starts at time t = 0, with x0 = 0. Note that the variance of xt is tσ²ε and becomes indefinitely large as t increases, and ρk = 1 - |k|/t. If xt is a random walk with "drift", (1.2) becomes xt = xt-1 + m + εt, where εt is zero-mean white noise. The solution is now

xt = mt + Σj=0,t-1 εt-j   (1.4)
so that xt consists of a linear trend plus a drift-free I(1) process (random walk), being the process in (1.3). The only more general univariate process considered in this section is xt = m(t) + x′t, where x′t is a drift-free random walk, such as generated by (1.3), and m(t) is some deterministic function of time, being the "trend in mean" of xt.
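The contrast between I(0) and I(1) behaviour described above is easy to visualise by simulation. The following Python fragment (an illustration added here, not part of the original article) generates a stationary AR(1) as in (1.1) and a drift-free random walk as in (1.2) from the same innovations and reports two of the features mentioned: mean crossings and the growing variance of the I(1) series.

import numpy as np

rng = np.random.default_rng(0)
T = 1000
eps = rng.standard_normal(T)

ar1 = np.empty(T)                    # x_t = a*x_{t-1} + e_t with |a| < 1  -> I(0)
ar1[0] = eps[0]
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]

rw = np.cumsum(eps)                  # x_t = x_{t-1} + e_t  -> I(1)

# the I(0) series keeps crossing its mean; the random walk wanders widely and
# its sample variance grows roughly like t * sigma_e^2
print("zero crossings:", np.sum(np.diff(np.sign(ar1)) != 0),
      np.sum(np.diff(np.sign(rw)) != 0))
print("variance of second half vs first half of the random walk:",
      rw[T // 2:].var(), rw[:T // 2].var())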
2. COINTEGRATION
Consider initially a pair of series xt, yt, each of which is I(1) and having no drift or trend in mean. It is generally true that any linear combination of these series is also I(1). However, if there exists a constant A, such that zt = xt - Ayt
(2.1)
is I(0), then xt, yt will be said to be cointegrated, with A called the cointegrating parameter. If it exists, A will be unique in the situation now being considered.As zt has such different temporal properties from those of either of its components it follows that the xt and yt must have a very special relationship. Both xt and yt have dominating low-frequency or “long wave” components, and yet zt does not. Thus, xt and Ayt must have low-frequency components which virtually cancel out to produce zt. A good analogy is two series each of which contain a prominent seasonal component. Generally, any linear combination of these series will also contain a seasonal, but if the seasonals are identical in shape there could exist a linear combination which has no seasonal. The relationship xt = Ayt
(2.2)
might be considered a long-run or “equilibrium” relationship, perhaps as suggested by some economic theory, and zt given by (2.1) thus measures
the extent to which the system xt, yt is out of equilibrium, and can thus be called the “equilibrium error”.The term “equilibrium” is used in many ways by economists. Here the term is not used to imply anything about the behaviour of economic agents but rather describes the tendency of an economic system to move towards a particular region of the possible outcome space. If xt and yt are I(1) but “move together in the long-run”, it is necessary that zt be I(0) as otherwise the two series will drift apart without bound. Thus, for a pair of I(1) series, cointegration is a necessary condition for the ideas discussed in the first section of this paper to hold. In some circumstances, an even stronger condition may be required, such as putting complete bounds on zt, which will guarantee that it is I(0), but such cases are not considered here. The extension to series having trends in their means is straightforward. Consider xt = mx (t ) + xt¢ yt = my (t ) + yt¢
(2.3)
where x¢t, y¢t are both I(1) but without trends in mean, and let zt = xt - Ayt = mx (t ) - Amy (t ) + xt¢ - Ayt¢. For zt to be I(0), and xt, yt not to drift too far apart, it is necessary both that zt have no trend in mean, so that mx (t ) = Amy (t )
(2.4)
for all t, and that x′t, y′t be cointegrated with the same value of A as the cointegrating parameter. It is seen that if the two trends in mean are different functions of time, such as an exponential and a cubic, then (2.4) cannot hold. One thing that should be noted is that a model of the form xt = byt + et, where xt is I(0) and yt is I(1), makes no sense as the independent and dependent variables have such vastly different temporal properties. Theoretically the only plausible value for b in this regression is b = 0. If xt, yt are both I(1) without trends in mean and are cointegrated, it has been proved in Granger (1983) and Granger and Engle (1985) that there always exists a generating mechanism having what is called the "error-correcting" form:

Δxt = -ρ1zt-1 + lagged(Δxt, Δyt) + d(B)ε1t
Δyt = -ρ2zt-1 + lagged(Δxt, Δyt) + d(B)ε2t      (2.5)

where zt = xt - Ayt,
d(B) is a finite polynomial in the lag operator B (so that Bkxt = xt-k) and is the same in each equation, and e1t, e2t are joint white noise, possibly contemporaneously correlated and with |r1| + |r2| π 0. Not only must cointegrated variables obey such a model but the reverse is also true; data generated by an error-correction model such as (2.5) must be cointegrated. The reason for this is easily seen, as if xt, yt are I(1) their changes will be I(0) and so every term in the equations (2.5) is I(0) provided zt is also I(0) meaning that xt, yt are cointegrated. If zt is not I(0), i.e., if xt, yt are not cointegrated, then the zt term does not belong in these equations given that the dependent variables are I(0) and hence at least one of r1, r2 does not vanish. These models were introduced into economics by Sargan (1964) and Phillips (1957) and have generated a lot of interest following the work of Davidson, Hendry, Srba and Yeo (1978), Hendry and von UngernSternberg (1980), Curry (1981), Dawson (1981) and Salmon (1982) amongst others. The models are seen to incorporate equilibrium relationships, perhaps suggested by an economic theory of the long-run, with the type of dynamic model favoured by time-series econometricians. The equilibrium relationships are allowed to enter the model but are not forced to do so. The title “error-correcting” for equations such as (2.5) is a little optimistic. The absolute value of zt is the distance that the system is away from equilibrium. Equation (2.5) indicates that the amount and direction of change in xt and yt take into account the size and sign of the previous equilibrium error, zt-1. The series zt does not, of course, certainly reduce in size from one time period to another but is a stationary series and thus is inclined to move towards its mean. A constant should be included in the equilibrium equation (2.2) and in (2.1) if needed, to make the mean of zt zero. There are a number of theoretical implications of cointegratedness that are easily derived from the results so far presented: (i) If xt, yt are cointegrated, so will be xt and byt-k + wt, for any k where wt ~ I(0), with a possible change in cointegrating parameter. Formally, if xt is I(1) then xt and xt-k will be cointegrated for any k, but this is not an interesting property as it is true for any I(1) process and so does not suggest a special relationship, unlike cointegration of a pair of I(1) series. It follows that if xt, yt are cointegrated but are only observed with measurement error, then the two observed series will also be cointegrated if all measurement errors are I(0). (ii) If xt is I(1) and fn,h(Jn) is the optimal forecast of xn+h, based on the information set Jn available at time n, then xt+h, ft,h(Jt) are cointegrated if Jn is a proper information set, that is if it includes xn-j, j 0. If Jn is not a proper information set, xt+n and its optimum forecast are only cointegrated if xt, is cointegrated with variables in Jt.
(iii) If xn+h, yn+h are cointegrated series with parameter A and are optimally forecast using the information set Jn: xn-j, yn-j, j 0, then the h-step forecasts f n,hx, f n,hy will obey f n,hx = Af n,hy as h Æ • (proved by S. Yoo (1986)). Thus, long-term optimum forecasts of xt, yt will be tied together by the equilibrium relationships. Forecasts formed without cointegration terms such as univariate forecasts will not necessarily have this property. (iv) If Tt is an I(1) target variable and xt is an I(1) controllable variable, then Tt, xt will be cointegrated if optimum control is applied. (See Nickell (1985).) (v) If xt, yt are I(1) and cointegrated, there must be Granger causality in at least one direction, as one variable can help forecast the other. This follows directly from the error-correlation model and the condition that |r1| + |r2| π 0, as zt-1 must occur in at least one equation and thus knowledge of zt must improve forecastability of at least one of xt, yt. Here causality is with respect to the information set Jt defined in (iii). (vi) If xt, yt are a pair of prices from a jointly efficient, speculative market, they cannot be cointegrated. This follows directly from (v) as if the two prices were cointegrated, one can be used to help forecast the other and this would contradict the efficient market assumption. Thus, for example, gold and silver prices, if generated by an efficient market, cannot move closely together in the long-run. Tests of this idea have been conducted by Granger and Escribano (1986). 3.
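Before turning to testing, a minimal Python simulation sketch (with arbitrary illustrative values A = 1, ρ1 = 0.3, ρ2 = -0.2; not from the paper) shows the error-correcting behaviour of (2.5): the levels wander like I(1) series, while the equilibrium error zt = xt - Ayt keeps being pulled back towards zero.

import numpy as np

rng = np.random.default_rng(3)
T, A, r1, r2 = 500, 1.0, 0.3, -0.2
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    z_prev = x[t - 1] - A * y[t - 1]                   # last period's equilibrium error
    x[t] = x[t - 1] - r1 * z_prev + rng.standard_normal()
    y[t] = y[t - 1] - r2 * z_prev + rng.standard_normal()

z = x - A * y
print("sample std of x, y, z:", x.std(), y.std(), z.std())              # z much smaller than x or y
print("first-order autocorrelation of z:", np.corrcoef(z[:-1], z[1:])[0, 1])  # well below one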
3. TESTING FOR COINTEGRATION
This topic has been discussed at some length by Granger and Engle (1985) and so only an outline of their conclusions is presented here. It is necessary to start with a test for whether a series xt is I(0), and a useful test has been provided by Dickey and Fuller (1981). The following regression is formed:

Δxt = βxt-1 + Σj=1,p γjΔxt-j + εt

where p is selected to be large enough to ensure that the residual εt is empirical white noise. The test statistic is the ratio of β̂ to its calculated standard error obtained from an ordinary least squares (OLS) regression. The null hypothesis is H0: xt ~ I(1). This is rejected if β̂ is negative and significantly different from zero. However, the test-statistic does not have a t-distribution, but tables of significance levels have been provided by Dickey and Fuller (1979).
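As a concrete illustration of the regression just described, the following Python sketch (the function name and the choice p = 4 are mine) computes the pseudo t-statistic for β̂; its significance points must be taken from the Dickey-Fuller tables rather than the t distribution.

import numpy as np

def df_unit_root_stat(x, p=4):
    # t ratio of beta-hat in  Dx_t = beta*x_{t-1} + sum_j gamma_j*Dx_{t-j} + e_t
    dx = np.diff(x)
    dep = dx[p:]
    X = np.column_stack([x[p:-1]] + [dx[p - j:-j] for j in range(1, p + 1)])
    b, *_ = np.linalg.lstsq(X, dep, rcond=None)
    e = dep - X @ b
    s2 = e @ e / (len(dep) - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return b[0] / se[0]

rng = np.random.default_rng(4)
print(df_unit_root_stat(np.cumsum(rng.standard_normal(200))))  # random walk: small in size, do not reject I(1)
print(df_unit_root_stat(rng.standard_normal(200)))             # stationary series: large negative, reject I(1)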
To test for cointegration between a pair of series, that are expected to be I(1), one method is to first form the “cointegration regression” xt = c + ayt + at
(3.1)
and then to test if the residual at appears to be I(0) or not. It might be noted that when xt and yt are cointegrated, this regression, when estimated using, say, OLS, should give an excellent estimate of the true cointegrating coefficient A in large samples. Note that at will have a finite (or small) variance only if a = A; otherwise at will be I(1) and thus have theoretically a very large variance in a large sample. Stock (1984) has shown that when series are cointegrated, OLS estimates of A are highly efficient, with variances O(T⁻²) compared to more usual situations where the variances are O(T⁻¹), T being the sample size. Stock also shows that the estimates are consistent with an O(T⁻¹) bias. However, some recent Monte Carlo simulations by Banerjee et al. (1986) suggest that these bias terms can be very substantial in some cases. Two simple tests of the null hypothesis H0: xt, yt not cointegrated are based either on a Durbin-Watson statistic (D/W) for (3.1), testing if D/W is significantly greater than zero (see Sargan and Bhargava (1983) who provide critical values), or on the previously mentioned Dickey-Fuller test applied to ât. The latter test was found by Granger and Engle (1985) to have more stable critical values in a small simulation study, and with T = 100 observations approximate significance levels for the pseudo t-statistic testing β = 0 are: 10 per cent ~ 2.88, 5 per cent ~ 3.17, 1 per cent ~ 3.75. A great deal more experience with these tests, and more extensive simulation studies, are required before confidence in the quality of these, or alternative, testing procedures is assured. Some estimates of power for this test were found to be quite satisfactory for a sample size of 100. Applying this test, some examples of the outcomes of empirical analysis are (mostly from Granger and Engle, 1985):

apparently cointegrated
  US national income and consumption
  US non-durables, production and sales
  US short and long-term interest rates
  UK W, P, H, U, T (Hall, this issue)
  UK velocity and short-term interest rates (Hendry and Ericsson, 1983)

apparently not cointegrated
  US wages and prices
  US durables, production and sales
  US money and prices.
Of course, some of the examples where cointegration was not found strongly suggest that further variables should be included in the investigation, such as the addition of productivity to wages and prices. This extension is considered next.

4. GENERALISATION: MANY VARIABLES AND GENERAL COINTEGRATION

Let xt be a vector of N component time series, each without trend in mean and each I(d), d > 0. For the moment, it is assumed that the d-differenced vector series is a zero mean, purely non-deterministic stationary process, so that there is a Wold representation

(1 - B)^d xt = C(B)εt   (4.1)

where this is taken to mean that both sides have the same spectral matrix and εt is an N × 1 zero-mean white noise vector with

E[εtε′s] = 0, t ≠ s;   = Γ, t = s,

so that only contemporaneous correlations can occur. Equation (4.1) is normalized by taking C(0) = IN, the unit matrix. Then xt will be said to be cointegrated CI(d, b) if there exists a vector α such that zt = α′xt is I(d - b), b > 0. The case considered in earlier sections has N = 2, d = b = 1. Moving to general values for N, d, b adds a large number of possible interrelationships and models. In particular it is clear that α need no longer be unique, as there can be several "equilibrium" relationships linking N > 2 variables. If there are r vectors α, each of which produces z's integrated of order less than d, then r is called the "order of cointegration" and it is easily seen that r ≤ N - 1. For the practically important case d = b = 1, it is shown in Granger (1983) and in Granger and Engle (1985) that:
(i) C(1) is of rank N - r;
(ii) there exists a vector autoregressive (VAR) representation A(B)xt = d(B)εt, where A(1) is of rank r with A(0) = IN and d(B) is a scalar stable lag polynomial. If a finite order VAR model exists, it takes this form but with d(B) = 1;
(iii) there exist N × r matrices α, γ of rank r such that
α′C(1) = 0,   C(1)γ = 0,   A(1) = γα′;

(iv) there exists an error-correction representation, with zt = α′xt an r × 1 stationary vector, of the form

A*(B)(1 - B)xt = -γzt-1 + d(B)εt   (4.2)

where A*(0) = IN, A*(1) is of full rank, and |A*(w)| = 0 has all its roots outside the unit circle. It should be noted that the first term on the right hand side can be written as (given (iii) and (iv))

γzt-1 = A(1)xt-1

and so, for all terms in (4.2) to be I(0), it is necessary that A(1) does not have a row consisting of just one non-zero term. A resulting condition on α is mentioned below.
Commenting on these results, (i) concerning the rank of C(1) is a necessary and sufficient condition for cointegration and all other results are derived from it. In (ii) concerning the VAR, A(B) is the adjoint matrix of C(B) and d(B) is proportional to the determinant of C(B) after dividing out unit roots. It follows from (ii) that if a VAR model is estimated for cointegrated variables, efficiency will be lost unless A(1) is restricted to being of rank r. In (iii) it should be noted that the matrices γ, α are not uniquely defined by the set of equations shown. If θ is an r × r matrix of full rank, then γ can be replaced by γθ and α′ by θ⁻¹α′ and the equations will still hold. This lack of uniqueness leads to some interpretational problems in the error-correction model (4.2), which are similar to the identification problems of classical simultaneous equations models. To illustrate the problem, suppose that N = 3 and r = 2 and that α1, α2 are a pair of cointegrating vectors, giving

zt(α1) = α11x1t + α12x2t + α13x3t,
zt(α2) = α21x1t + α22x2t + α23x3t,

as a pair of I(0) variables corresponding to equilibrium relationships α′1xt = 0, α′2xt = 0. However, generally any combination of a pair of I(0) variables will also be I(0) and so

zt(λ) = (1 - λ)zt(α1) + λzt(α2)

will also be I(0) [it is assumed that for no λ will zt(λ) consist of just one component of xt: this is a constraint on the matrix α preventing
zt(λ) = xt, for example, which would make zt ~ I(1)]. Thus, the equilibrium relations are not uniquely identified, and the error-correction models cannot be strictly interpreted as "correcting" for deviations from a particular pair of equilibrium relationships. The only invariant relationship is the line in the (x1, x2, x3) space defined by

zt(α1) = 0,   zt(α2) = 0.

This same line is given by zt(λ1) = 0, zt(λ2) = 0 for any λ1 ≠ λ2 and will be called the "equilibrium sub-space". The error-correction model might thus be interpreted as Δxt being influenced by the distance the system is from the equilibrium sub-space. For general N, r, the equilibrium sub-space will be a hyper-plane of dimension N - r. It is unclear if the identification question can be solved in ways similar to those used with simultaneous equations, that is by adding sufficient zeros to A(1) or by appeals to "exogeneity." For the N = 3, r = 2 case, λ's can be chosen to give zt = a1x1t + a2x2t and zt = a3x1t + a4x3t, and these seem to provide a natural way for testing for cointegration. For more general N and r, the number of possible combinations becomes extensive and testing will be more difficult, particularly when r is unknown, as will be usual in practice.
Turning briefly to the most general case, with any N, d, b and r, the error-correction model becomes

A*(B)(1 - B)^d xt = -γ[1 - (1 - B)^b](1 - B)^(d-b) zt-1 + d(B)εt   (4.3)

where d(B) is a scalar polynomial in B. It should be noted that [1 - (1 - B)^b], if expanded in powers of B, has no term in B⁰ and so only lagged zt occur on the right hand side. Again, every term in (4.3) is I(0) when cointegration is present. It is possible to define fractional differencing, as in Granger and Joyeux (1980), and equation (4.3) still holds in this case, although its practical importance has yet to be established. In the general case (with integer N, b, d, r) Yoo (1986) has considered alternative ways of defining the zt's, possibly using lagged xt components, for a given C(B) matrix but with some added assumptions about its form. Johansen (1985) has also found some mathematically exact and attractive results for the general case, which do not rely on the assumption that all components of xt are integrated of the same order. He points out, for
example, that if x1t is I(1) and x2t is I(0), then x1t and x̃2t = Σj=0,t x2,t-j could be cointegrated, thus expanding the class of variables that might be tested.
The work of Yoo and Johansen suggests a more general definition of cointegration. Let α(B) be an N × 1 vector of functions of the lag operator B, such that each component αj(B) has the property that αj(1) ≠ 0. Then if xt is a vector of I(d) series such that zt = α′(B)xt is I(d - b), xt may be called cointegrated. If a cointegrating vector α occurs, as defined in earlier sections, there will be many α(B) that also cointegrate, and so uniqueness is lost but extra flexibility is gained. Consideration of these possibilities does allow for a generalisation that is potentially very important in economics. Suppose that N = 2, so that xt has just two components, and let α be a cointegrating vector, with α′ = (1, A). In this case α will be unique, if it does not depend on B, so that r = 1. [Generally, one would expect r < N.] However, there may exist another cointegrating vector of quite a different form,

α′(B) = (1 - A′/D, AA′/D),   α′ = (1, -A′),

and D = 1 - B. An example of this possibility is where xt = (xt, yt)′; xt, yt are cointegrated with vector α, giving equilibrium error zt = xt - Ayt, and xt, Szt = Σj=0,t zt-j are cointegrated, so that xt - A′Szt is I(0). This would correspond to a cointegrating vector of the form α(B) = (1 - SA′, SAA′), where S = 1/D and D = 1 - B. For example, xt, yt could be sales and production of some industry, zt = change in inventory, Szt inventory; and xt, yt could be cointegrated as well as xt, Szt. Another example might be xt = income, yt = expenditure, zt = savings, Szt = wealth. Such series might be called "multicointegrated." Throughout this section, if the series involved have deterministic trends in mean, these need to be estimated and removed before the concepts discussed can be applied. One method of removing trends of general shape is discussed in Granger (1985).
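A small simulated illustration of the sales-production-inventory example may help; the construction below (in Python, with made-up adjustment parameters) is mine and not from the paper. Production xt tracks sales yt plus a partial correction of inventory towards a target, so xt - yt is I(0) and the cumulated series Szt (inventory) is itself cointegrated with the levels, i.e. the series behave as "multicointegrated".

import numpy as np

rng = np.random.default_rng(5)
T = 2000
y = np.cumsum(rng.standard_normal(T))             # sales: a random walk, I(1)
inv_target = 0.5 * y                               # desired inventory proportional to sales
inventory = np.zeros(T)
x = np.zeros(T)                                    # production
for t in range(1, T):
    # produce current sales plus a partial adjustment of inventory toward its target
    x[t] = y[t] + 0.3 * (inv_target[t - 1] - inventory[t - 1]) + 0.2 * rng.standard_normal()
    inventory[t] = inventory[t - 1] + (x[t] - y[t])   # Sz_t: cumulated production-sales gap

z = x - y
print("std of x - y (change in inventory):", z.std())              # small: x and y cointegrated
print("std of inventory - 0.5*y:", (inventory - 0.5 * y).std())    # small: levels and Sz cointegrated
print("std of y itself:", y.std())                                 # large: the levels wander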
5. FURTHER GENERALIZATIONS
The processes considered so far have been linear and with time-invariant parameters. Clearly more general models, and possibly more realistic ones, are achieved by removing these restrictions.
As institutions, technology and society change, so may any equilibrium relationships. In the bivariate case, the cointegrating parameter may be slowly changing with time, for instance. To proceed with analysis of this idea, it is necessary to define time-varying parameter (TVP) I(0) and I(1) processes. Using concepts introduced by Priestley (1981), it is possible to define a time-varying spectrum f_t(ω) for a process such as one generated by an ARMA model with TVP. For example, consider

x_t = β(t)x_{t-1} + ε_t,

where β(t) is a deterministic function of time, obeying the restriction that |β(t)| < 1 for all t. If f_t(ω) is bounded above and also is positive for all t, ω, the process may be called TVP I(0). If the change of x_t is TVP I(0), then x_t can be called TVP I(1). For a vector process x_t that is TVP I(d) and has no deterministic components, Cramer (1961) has shown that there exists a generalised Wold representation

(1 - B)^d x_t = C_t(B)ε_t    (5.1)
where

E[ε_t] = 0,   E[ε_t ε′_s] = 0 for s ≠ t,   E[ε_t ε′_t] = Ω_t,   C_t(0) = I_N,

and if C_t(B) = Σ_j C_{jt} B^j it will be assumed that

Σ_j C_{jt} Ω_t C′_{jt} < ∞
so that the variance of (1 - B)^d x_t is finite. Assume now that C_t(1) has rank N - 1 for all t, so that the cointegration rank is 1; then there will exist N × 1 vectors α(t), γ(t) such that

α′(t)C_t(1) = 0,   C_t(1)γ(t) = 0.

The TVP equilibrium error process will then be

z_t = α′(t)x_t.    (5.2)
The corresponding error-correction models will be as (4.2) but with A*(B), γ, d(B) all functions of time. A testing procedure would involve estimating the equilibrium regression (5.2) using some TVP techniques,
such as a Kalman filter procedure, probably assuming that the components of α(t) are stochastic but slowly changing. It might be thought that allowing α(t) to change with time can always produce an I(0) z_t. For example, suppose that N = 2 and consider

z_t = x_t - A(t)y_t.

Taking A(t) = x_t/y_t clearly gives z_t = 0, which is an uninteresting I(0) situation. However, it is also clear that taking A(t) = x_t/y_t + d will produce a z_t that is I(1) in general. Interpretation of any TVP cointegration test will have to consider this possible difficulty.

Turning to the possibility of non-linear cointegration, it might be noted that in the basic error-correction model (2.5) or (4.2) the z_{t-1} terms appear linearly, so that changes in the dependent variables are related to z_{t-1}, whatever its size. In the actual economy, a more realistic behaviour is to ignore small equilibrium errors but to react substantially to large ones, suggesting a non-linear relationship. An error-correction model that captures this idea is, in the bivariate case,

Δx_t = f_1(z_{t-1}) + lagged(Δx_t, Δy_t) + ε_{1t},
Δy_t = f_2(z_{t-1}) + lagged(Δx_t, Δy_t) + ε_{2t},    (5.3)
where z_t = x_t - Ay_t. It is generally true that if z_t is I(0) with constant variance, then f(z_t) will also be I(0). Similarly, if z_t is I(1) then generally f(z_t) is also I(1), provided f(z) has a linear component for large z, i.e. f(z)/z → a_0 ≠ 0. A rigorous treatment of these results is provided by Escribano (1986). As generally z_t and f(z_t) will be integrated of the same order, if a test suggests that a pair of series are cointegrated, then a non-linear error-correction model of form (5.3) is a possibility. Of course, most of the other results of previous sections do not hold as they are based on the linear Wold representation. Equation (5.3) can be estimated by one of the many currently available non-linear, non-parametric estimation techniques such as that employed in Engle, Granger, Rice and Weiss (1986).

Error-correction models essentially consider processes whose components drift widely but where the joint process has a generalised preference towards a certain part of the process space. In the cases so far considered this preferred sub-space is a hyper-plane, but more general preferred sub-spaces could be considered although with considerably increased mathematical difficulty.
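A simple way to see the non-linear adjustment idea is to let the correction function be zero inside a band and proportional outside it. The sketch below (numpy only; the threshold c and adjustment strength are invented for illustration and are not estimates of anything in the text) simulates such a system.

```python
# Sketch of the non-linear error-correction idea in (5.3): adjustment ignores
# small equilibrium errors and reacts only to large ones.  Illustrative values.
import numpy as np

rng = np.random.default_rng(1)
T, A, c = 500, 1.0, 1.5              # c is a hypothetical threshold

def f(z, strength=0.5, threshold=c):
    # no correction inside the band, proportional correction outside it
    return np.where(np.abs(z) < threshold, 0.0, -strength * z)

x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    z_lag = x[t - 1] - A * y[t - 1]                  # equilibrium error z_{t-1}
    x[t] = x[t - 1] + f(z_lag) + rng.normal()
    y[t] = y[t - 1] + rng.normal()

z = x - A * y
print("share of periods with |z| inside the no-adjustment band:",
      round(float(np.mean(np.abs(z) < c)), 2))
```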
6. CONCLUSION
This paper has attempted to expand the discussion about differencing macro-economic series when model building by emphasizing the use of
a further factor, the "equilibrium error", that arises from the concept of cointegration. This factor allows the introduction of the impact of long-run or "equilibrium" economic theories into the models used by time-series analysts to explain the short-run dynamics of economic data. The resulting error-correction models should produce better short-run forecasts and will certainly produce long-run forecasts that hold together in economically meaningful ways. If long-run economic theories are to have useful impact on econometric models they must be helpful in model specification and yet not distract from the short-run aspects of the model. Historically, many econometric models were based on equilibrium relationships suggested by a theory, such as

x_t = Ay_t + e_t    (6.1)
without any consideration of the levels of integratedness of the observed variables xt, yt or of the residual series et. If xt is I(0) but yt is I(1), for example, the value of A in the resulting regression is forced to be near zero. If et is I(1), standard estimation techniques are not appropriate. A test for cointegration can thus be thought of as a pre-test to avoid “spurious regression” situations. Even if xt and yt are cointegrated an equation such as (6.1) can only provide a start for the modeling process, as et may be explainable by lagged changes in xt and yt, eventually resulting in an error-correction model of the form (2.5). However, there must be two such equations, which again makes the equation (2.5) a natural form. Ignoring the process of properly modeling the et can lead to forecasts from (6.1) that can be beaten by simple time-series models, at least in the short-term. Whilst the paper has not attempted to link error-correction models with optimizing economic theory, through control variables for example, there is doubtless much useful work to be done in this area. Testing for cointegration in general situations is still in an early stage of development. Whether or not cointegration occurs is an empirical question but the beliefs of economists do appear to support its existence and the usefulness of the concept appears to be rapidly gaining acceptance.
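The mechanics of this pre-test can be sketched in a few lines. The code below (numpy, simulated data, arbitrary coefficients) runs the levels regression and then a Dickey–Fuller-type regression on its residuals; note that the appropriate critical values for the residual-based statistic are non-standard, so the printed number is purely illustrative of the procedure, not a finished test.

```python
# Illustrative sketch of the cointegration pre-test logic: regress x_t on y_t,
# then examine the residual e_t for a unit root before trusting the levels fit.
import numpy as np

rng = np.random.default_rng(2)
T = 300
w = np.cumsum(rng.normal(size=T))        # common I(1) factor
y = w + rng.normal(size=T)
x = 2.0 * w + rng.normal(size=T)         # x and y are cointegrated by construction

# step 1: levels ("equilibrium") regression x_t = A y_t + e_t
A_hat = np.sum(x * y) / np.sum(y * y)
e = x - A_hat * y

# step 2: Dickey-Fuller style regression  De_t = rho * e_{t-1} + u_t
de, lag = np.diff(e), e[:-1]
rho = np.sum(de * lag) / np.sum(lag * lag)
se = np.sqrt(np.sum((de - rho * lag) ** 2) / (len(de) - 1) / np.sum(lag * lag))
print(f"A_hat = {A_hat:.2f}, DF-type statistic on residuals = {rho / se:.2f}")
```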
REFERENCES

Box, G. E. P. and Jenkins, G. M. (1970). Time Series Analysis, Forecasting and Control, San Francisco, Holden Day.
Cramer, H. (1961). "On Some Classes of Non-Stationary Processes", Proceedings 4th Berkeley Symposium on Math, Stats and Probability, pp. 157–78, University of California Press.
Currie, D. (1981). "Some Long-Run Features of Dynamic Time-Series Models", The Economic Journal, Vol. 91, pp. 704–15.
Davidson, J. E. H., Hendry, D. F., Srba, F. and Yeo, S. (1978). "Econometric Modeling of the Aggregate Time-Series Relationship Between Consumer's Expenditure and Income in the United Kingdom", The Economic Journal, Vol. 88, pp. 661–92.
Dawson, A. (1981). "Sargan's Wage Equation: A Theoretical and Empirical Reconstruction", Applied Economics, Vol. 13, pp. 351–63.
Dickey, D. A. and Fuller, W. A. (1979). "Distributions of the Estimators for Autoregressive Time Series with a Unit Root", Journal of the American Statistical Association, Vol. 74, pp. 427–31.
Dickey, D. A. and Fuller, W. A. (1981). "The Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root", Econometrica, Vol. 49, pp. 1057–72.
Engle, R. F., Granger, C. W. J., Rice, J. and Weiss, A. (1986). "Non-Parametric Estimation of the Relationship Between Weather and Electricity Demand", Journal of the American Statistical Association (forthcoming).
Escribano, A. (1986). Ph.D. thesis, Economics Department, University of California, San Diego.
Granger, C. W. J. (1966). "The Typical Spectral Shape of an Economic Variable", Econometrica, Vol. 34, pp. 150–61.
Granger, C. W. J. (1983). "Co-Integrated Variables and Error-Correcting Models", UCSD Discussion Paper No. 83-13a.
Granger, C. W. J. and Engle, R. F. (1985). "Dynamic Specification with Equilibrium Constraints: Cointegration and Error-Correction" (forthcoming, Econometrica).
Granger, C. W. J. and Escribano, A. (1986). "Limitation on the Long-Run Relationship Between Prices from an Efficient Market", UCSD Discussion Paper.
Granger, C. W. J. and Joyeux, R. (1980). "An Introduction to Long-Memory Time Series and Fractional Differencing", Journal of Time Series Analysis, Vol. 1, pp. 15–29.
Hendry, D. F. and Ericsson, N. R. (1983). "Assertion without Empirical Basis: An Econometric Appraisal of 'Monetary Trends in ... the United Kingdom' by Milton Friedman and Anna Schwartz", Bank of England Academic Panel Paper No. 22.
Hendry, D. F. and von Ungern-Sternberg, T. (1981). "Liquidity and Inflation Effects on Consumer's Expenditure", in Deaton, A. S. (ed.), Essays in the Theory and Measurement of Consumers' Behaviour, Cambridge University Press.
Johanssen, S. (1985). "The Mathematical Structure of Error-Correction Models", Discussion Paper, Maths Department, University of Copenhagen.
Nelson, C. R. and Plosser, C. I. (1982). "Trends and Random Walks in Macroeconomic Time Series", Journal of Monetary Economics, Vol. 10, pp. 139–62.
Nickell, S. (1985). "Error-Correction, Partial Adjustment and All That: An Expository Note", Oxford Bulletin of Economics and Statistics, Vol. 47, pp. 119–29.
Phillips, A. W. (1957). "Stabilization Policy and the Time Forms of Lagged Responses", Economic Journal, Vol. 67, pp. 265–77.
Priestley, M. B. (1981). Spectral Analysis of Time Series, Academic Press, New York.
Salmon, M. (1982). "Error Correction Mechanisms", The Economic Journal, Vol. 92, pp. 615–29.
Sargan, J. D. (1964). "Wages and Prices in the United Kingdom: A Study in Economic Methodology", in Hart, P., Mills, G. and Whittaker, J. N. (eds.), Econometric Analysis for National Economic Planning, Butterworths, London.
Sargan, J. D. and Bhargava, A. (1983). "Testing Residuals from Least Squares Regression for Being Generated by the Gaussian Random Walk", Econometrica, Vol. 51, pp. 153–74.
Stock, J. H. (1984). "Asymptotic Properties of a Least Squares Estimator of Co-Integrating Vectors", Manuscript, Harvard University.
Yoo, S. (1986). Ph.D. thesis, Economics Department, University of California, San Diego.
CHAPTER 10
Seasonal Integration and Cointegration*

S. Hylleberg, R. F. Engle, C. W. J. Granger, and B. S. Yoo**

* Journal of Econometrics, 44, 1990, 215–238.
** The research was carried out while the first author was on sabbatical at UCSD and the last author was completing his dissertation. The authors are indebted to the University of Aarhus, NSF SES87-05884, and SES87-04669 for financial support. The data will be made available through the Inter-university Consortium for Political and Social Research at the University of Michigan.
This paper develops tests for roots in linear time series which have a modulus of one but which correspond to seasonal frequencies. Critical values for the tests are generated by Monte Carlo methods or are shown to be available from Dickey–Fuller or Dickey–Hasza–Fuller critical values. Representations for multivariate processes with combinations of seasonal and zero-frequency unit roots are developed leading to a variety of autoregressive and error-correction representations. The techniques are used to examine cointegration at different frequencies between consumption and income in the U.K.
1. INTRODUCTION
The rapidly developing time-series analysis of models with unit roots has had a major impact on econometric practice and on our understanding of the response of economic systems to shocks. Univariate tests for unit roots were first proposed by Fuller (1976) and Dickey and Fuller (1979) and were applied to a range of macroeconomic data by Nelson and Plosser (1982). Granger (1981) proposed the concept of cointegration which recognized that even though several series all had unit roots, a linear combination could exist which would not. Engle and Granger (1987) present a theorem giving several representations of cointegrated series and tests and estimation procedures. The testing is a direct generalization of Dickey and Fuller to the hypothesized linear combination. All of this work assumes that the root of interest not only has a modulus of one, but is precisely one. Such a root corresponds to a zero-
frequency peak in the spectrum. Furthermore, it assumes that there are no other unit roots in the system. Because many economic time series exhibit substantial seasonality, there is a definite possibility that there may be unit roots at other frequencies such as the seasonals. In fact, Box and Jenkins (1970) and the many time-series analysts influenced by their work implicitly assume that there are seasonal unit roots by using the seasonal differencing filter.

This paper describes in section 2 various classes of seasonal processes and in section 3 sets out to test for seasonal unit roots in time-series data both in the presence of other unit roots and other seasonal processes. Section 4 defines seasonal cointegration and derives several representations. Section 5 gives an empirical example and section 6 concludes.
2. SEASONAL TIME-SERIES PROCESSES
Many economic time series contain important seasonal components and there are a variety of possible models for seasonality which may differ across series. A seasonal series can be described as one with a spectrum having distinct peaks at the seasonal frequencies ω_s ≡ 2πj/s, j = 1, . . . , s/2, where s is the number of time periods in a year, assuming s to be an even number and that a spectrum exists. In this paper, quarterly data will be emphasised so that s = 4, but the results can be naturally extended in a straightforward fashion to monthly data, for example.

Three classes of time-series models are commonly used to model seasonality. These can be called:

(a) purely deterministic seasonal processes,
(b) stationary seasonal processes,
(c) integrated seasonal processes.

Each is frequently used in empirical work often with an implicit assumption that they are all equivalent. The first goal of this paper is to develop a testing procedure which will determine what class of seasonal processes is responsible for the seasonality in a univariate process. Subsequently this approach will deliver multivariate results on cointegration at seasonal frequencies.

A purely deterministic seasonal process is a process generated by seasonal dummy variables such as the following quarterly series:

x_t = μ_t,
where
μ_t = μ_0 + μ_1 S_{1t} + μ_2 S_{2t} + μ_3 S_{3t}.    (2.1)
Notice that this process can be perfectly forecast and will never change its shape. A stationary seasonal process can be generated by a potentially infinite autoregression

φ(B)x_t = ε_t,   ε_t i.i.d.,
with all of the roots of φ(B) = 0 lying outside the unit circle but where some are complex pairs with seasonal periodicities. More precisely, the spectrum of such a process is given by

f(ω) = σ^2 |φ(e^{iω})|^{-2},

which is assumed to have peaks at some of the seasonal frequencies ω_s. An example for quarterly data is x_t = ρx_{t-4} + ε_t, which has a peak at both the seasonal periodicities π/2 (one cycle per year) and π (two cycles per year) as well as at zero frequency (zero cycles per year).

A series x_t is an integrated seasonal process if it has a seasonal unit root in its autoregressive representation. More generally it is integrated of order d at frequency θ if the spectrum of x_t takes the form

f(ω) = c(ω - θ)^{-2d},

for ω near θ. This is conveniently denoted by x_t ~ I_θ(d). The paper will concentrate on the case d = 1. An example of an integrated quarterly process at two cycles per year is x_t = -x_{t-1} + ε_t,
(2.2)
and at one cycle per year it is x_t = -x_{t-2} + ε_t.
(2.3)
The very familiar seasonal differencing operator, advocated by Box and Jenkins (1970) and used as a seasonal process by Grether and Nerlove (1970) and Bell and Hillmer (1985) for example, can be written as
(1 - B^4)x_t = (1 - B)(1 + B + B^2 + B^3)x_t = (1 - B)(1 + B)(1 + B^2)x_t = (1 - B)S(B)x_t = ε_t,    (2.4)
which therefore has four roots with modulus one: one at zero frequency, one at two cycles per year, and a complex pair at one cycle per year. The properties of seasonally integrated series are not immediately obvious but are quite similar to the properties of ordinary integrated processes as established for example by Fuller (1976). In particular they have 'long memory', so that shocks last forever and may in fact change permanently the seasonal patterns. They have variances which increase linearly since the start of the series and are asymptotically uncorrelated with processes with other frequency unit roots.
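The contrast between the three classes is easy to see by simulation. The sketch below (numpy, arbitrary illustrative parameters) generates a deterministic-dummy series as in (2.1), a stationary seasonal autoregression, and a seasonally integrated series driven by (1 - B^4), and measures how much each one's quarterly pattern shifts between the first and second halves of the sample.

```python
# Sketch comparing deterministic, stationary, and integrated quarterly seasonality.
import numpy as np

rng = np.random.default_rng(3)
T = 200
q = np.arange(T) % 4                               # quarter index 0..3

det = np.array([1.0, -0.5, 2.0, -2.5])[q] + rng.normal(scale=0.3, size=T)  # dummies + noise

stat = np.zeros(T)                                 # stationary seasonal: x_t = 0.8 x_{t-4} + e_t
for t in range(4, T):
    stat[t] = 0.8 * stat[t - 4] + rng.normal()

integ = np.zeros(T)                                # seasonally integrated: (1 - B^4) x_t = e_t
for t in range(4, T):
    integ[t] = integ[t - 4] + rng.normal()

# the seasonal pattern of the integrated process wanders, the others stay put:
for name, x in [("deterministic", det), ("stationary", stat), ("integrated", integ)]:
    early = np.array([x[(q == j) & (np.arange(T) < 100)].mean() for j in range(4)])
    late = np.array([x[(q == j) & (np.arange(T) >= 100)].mean() for j in range(4)])
    print(f"{name:13s} max shift in quarterly means: {np.max(np.abs(late - early)):5.2f}")
```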
The generating mechanisms being considered, such as (2.2) or (2.4), are stochastic difference equations. They generalize the ordinary I(1), or I0(1) in the present notation, process. It is well known that an equation of the form
(1 - B) xt = e t
(2.5)
has two components to its solution: the homogeneous solution x1t where
(1 - B)x_{1t} = 0, and the particular solution x_{2t} given by

x_{2t} = (1/(1 - B))ε_t.

Thus x_t = x_{1t} + x_{2t}, where x_{1t} = x_0 (the starting value) and x_{2t} = Σ_{j=0}^{t-1} ε_{t-j}. Clearly, if E[ε_t] = m ≠ 0, then x_{2t} will contain a linear trend mt. The equation with S(B) = (1 + B)(1 + B^2),

S(B)x_t = ε_t,    (2.6)
also has a solution with two components. The homogeneous solution is

x_{1t} = c_1(-1)^t + c_2(i)^t + c_3(-i)^t,

where c_1, c_2, c_3 are determined from the starting conditions, plus the requirement that x_{1t} is a real series, i.e., c_2 and c_3 are complex conjugates. If x_{-2} = x_{-1} = x_0 = 0, so that the starting values contain no seasonal, then x_{1t} ≡ 0. The particular solution is

x_{2t} = [S(B)]^{-1} ε_t,

and noting that

[S(B)]^{-1} = (1/2)[1/(1 + B) + (1 - B)/(1 + B^2)],

some algebra gives

x_{2t} = (1/2)[ Σ_{j=0}^{t-1} (-1)^j ε_{t-j} + Σ_{j=0}^{int[(t-1)/2]} (-1)^j Δε_{t-2j} ],

where Δ = 1 - B and int[z] is the largest integer in z. The two parts of this solution correspond to the two seasonal roots and to eqs. (2.2) and (2.3). The homogeneous solutions to eqs. (2.5), (2.2), and (2.3) are given, respectively, by
s_{1t} = Σ_{j=0}^{t-1} ε_{t-j}                     for the zero-frequency root,

s_{2t} = Σ_{j=0}^{t-1} (-1)^j ε_{t-j}              for the two-cycle-per-year root,

s_{3t} = Σ_{j=0}^{int[(t-1)/2]} (-1)^j Δε_{t-2j}   for the one-cycle-per-year root.
The variances of these series are given by V(s_{1t}) = V(s_{2t}) = V(s_{3t}) = tσ^2, so that all of the unit roots have the property that the variance tends to infinity as the process evolves. When the series are excited by the same {ε_t} and t is divisible by four, the covariances are all zero. At other values of t the covariances are at most σ^2, so the series are asymptotically uncorrelated as well as being uncorrelated in finite samples for complete years of data. It should be noted that, if E[ε_t] = m ≠ 0 for all t, then the first term in x_{2t} will involve an oscillation of period 2. The complete solution to (2.6) contains both cyclical deterministic terms, corresponding to "seasonal dummies", plus long nondeclining sums of past innovations or their changes. Thus, a series generated by (2.6) will have a component that is seasonally integrated and may also have a deterministic seasonal component, largely depending on the starting values. A series generated by (2.6) will be inclined to have a seasonal pattern that varies slowly through time, but if the initial deterministic component is large, it may not appear to drift very fast. If x_t is generated by
(1 - B^4)x_t = ε_t,    (2.7)

the equation will have solutions that are linear combinations of those for (2.5) and (2.6). A series with a clear seasonal may be seasonally integrated, have a deterministic seasonal, a stationary seasonal, or some combination. A general class of linear time-series models which exhibit potentially complex forms of seasonality can be written as

d(B)a(B)(x_t - μ_t) = ε_t,    (2.8)
where all the roots of a(z) = 0 lie outside the unit circle, all the roots of d(z) = 0 lie on the unit circle, and μ_t is given as above. Stationary seasonality and other stationary components of x are absorbed into a(B), while deterministic seasonality is in μ_t when there are no seasonal unit roots in d(B). Section 3 of this paper considers how to test for seasonal
unit roots and zero-frequency unit roots when other unit roots are possibly present and when deterministic or stochastic seasonals may be present.

A pair of series each of which is integrated at frequency ω are said to be cointegrated at that frequency if a linear combination of the series is not integrated at ω. If the linear combination is labeled a, then we use the notation

x_t ~ CI_ω   with cointegrating vector a.

This will occur if, for example, each of the series contains the same factor which is I_ω(1). In particular, if

x_t = a v_t + x̄_t,   y_t = v_t + ȳ_t,

where v_t is I_ω(1) and x̄_t and ȳ_t are not, then z_t ≡ x_t - a y_t is not I_ω(1), although it could still be integrated at other frequencies. If a group of series are cointegrated, there are implications about their joint generating mechanism. These are considered in section 4 of this paper.
3. TESTING FOR SEASONAL UNIT ROOTS
It is the goal of the testing procedure proposed in this paper to determine whether or not there are any seasonal unit roots in a univariate series. The test must take seriously the possibility that seasonality of other forms may be present. At the same time, the tests for conventional unit roots will be examined in seasonal settings. In the literature there exist a few attempts to develop such tests. Dickey, Hasza, and Fuller (1984), following the lead suggested by Dickey and Fuller for the zero-frequency unit-root case, propose a test of the hypothesis a = 1 against the alternative a < 1 in the model xt = axt-s + et. The asymptotic distribution of the least-squares estimator is found and the small-sample distribution obtained for several values of s by Monte Carlo methods. In addition the test is extended to the case of higherorder stationary dynamics. A major drawback of this test is that it doesn’t allow for unit roots at some but not all of the seasonal frequencies and that the alternative has a very particular form, namely that all the roots have the same modulus. Exactly the same problems are encountered by the tests proposed by Bhargava (1987). In Ahtola and Tiao (1987) tests are proposed for the case of complex roots in the quarterly case but also their suggestion may at best be a part of a more comprehensive test strategy. In this paper we propose a test and a general framework for a test strategy that looks at unit roots at all the seasonal frequencies as well as the zero frequency. The test follows the Dickey–Fuller framework and in fact has a well-known distribution possibly on transformed variables in some special cases.
For quarterly data, the polynomial (1 - B^4) can be expressed as

(1 - B^4) = (1 - B)(1 + B)(1 - iB)(1 + iB) = (1 - B)(1 + B)(1 + B^2),
(3.1)
so that the unit roots are 1, -1, i, and -i, which correspond to zero frequency, 1/2 cycle per quarter or 2 cycles per year, and 1/4 cycle per quarter or one cycle per year. The last root, -i, is indistinguishable from the one at i with quarterly data (the aliasing phenomenon) and is therefore also interpreted as the annual cycle. To test the hypothesis that the roots of φ(B) lie on the unit circle against the alternative that they lie outside the unit circle, it is convenient to rewrite the autoregressive polynomial according to the following proposition, which is originally due to Lagrange and is used in approximation theory.

Proposition: Any (possibly infinite or rational) polynomial φ(B), which is finite-valued at the distinct, nonzero, possibly complex points θ_1, . . . , θ_p, can be expressed in terms of elementary polynomials and a remainder as follows:

φ(B) = Σ_{k=1}^{p} λ_k Δ(B)/δ_k(B) + Δ(B)φ**(B),    (3.2)

where the λ_k are a set of constants, φ**(B) is a (possibly infinite or rational) polynomial, and

δ_k(B) = 1 - (1/θ_k)B,   Δ(B) = Π_{k=1}^{p} δ_k(B).
Proof: Let λ_k be defined to be

λ_k = φ(θ_k) / Π_{j≠k} δ_j(θ_k),

which always exists since all the roots of the δ's are distinct and the polynomial is bounded at each value by assumption. The polynomial

φ(B) - Σ_{k=1}^{p} λ_k Δ(B)/δ_k(B) = φ(B) - Σ_{k=1}^{p} φ(θ_k) Π_{j≠k} [δ_j(B)/δ_j(θ_k)]

will have zeroes at each point B = θ_k. Thus it can be written as the product of a polynomial, say φ**(B), and Δ(B). QED

An alternative and very useful form of this expression is obtained by adding and subtracting Δ(B)Σλ_k to (3.2) to get

φ(B) = Σ_{k=1}^{p} λ_k Δ(B)[1 - δ_k(B)]/δ_k(B) + Δ(B)φ*(B),    (3.3)
S. Hylleberg, R. F. Engle, C. W. J. Granger, and B.S. Yoo
where j*(B) = j**(B) + Slk. In this representation j(0) = j*(0) which is normalized to unity. It is clear that the polynomial j(B) will have a root at qk if and only if lk = 0. Thus testing for unit roots can be carried out equivalently by testing for parameters l = 0 is an appropriate expansion. To apply this proposition to testing for seasonal unit roots in quarterly data, expand a polynomial j(B) about the roots +1, -1, i, and -i as qk, k = 1, . . . , 4. Then, from (3.3), j(B) = l1 B(1 + B)(1 + B 2 ) + l 2 (- B)(1 - B)(1 + B 2 ) + l 3 (-iB)(1 - B)(1 + B)(1 - iB) + l 4 (iB)(1 - B)(1 + B)(1 + iB) + j*(B)(1 - B4 ). Clearly, l3 and l4 must be complex conjugates since j(B) is real. Simplifying and substituting p1 = -l1, p2 = -l2, 2l3 = -p3 + ip4, and 2l4 = -p3 ip4, gives j(B) = -p 1 B(1 + B + B 2 + B3 ) - p 2 (- B)(1 - B + B 2 - B3 ) - (p 4 + p 3 B)(- B)(1 - B 2 ) + j*(B)(1 - B4 ).
(3.4)
The testing strategy is now apparent. The data are assumed to be generated by a general autoregression j(B) xt = e t ,
(3.5)
and (3.4) is used to replace j(B), giving j* (B) y4t = p 1 y1t -1 + p 2 y2t -1 + p 3 y3t - 2 + p 4 y3t -1 + e t ,
(3.6)
where y1t = (1 + B + B 2 + B3 ) xt = S(B) xt , y2t = -(1 - B + B 2 - B3 ) xt , y3t = -(1 - B 2 ) xt , y4t = (1 - B4 ) xt = D 4 xt .
(3.7)
Eq. (3.6) can be estimated by ordinary least squares, possibly with additional lags of y4 to whiten the errors. To test the hypothesis that j(qk) = 0, where qk is either 1, -1, or ±i, one needs simply to test that lk is zero. For the root 1 this is simply a test for p1 = 0, and for -1 it is p2 = 0. For the complex roots l3 will have absolute value of zero only if both p3 and p4 equal zero which suggests a joint test. There will be no seasonal unit roots if p2 and either p3 or p4 are different from zero, which therefore requires the rejection of both a test for p2 and a joint test for p3 and p4. To find that a series has no unit roots at all and is therefore stationary, we must establish that each of the p’s is different from zero (save
Seasonal Integration and Cointegration
197
possibly either p3 or p4). A joint test will not deliver the required evidence. The natural alternative for these tests is stationarity. For example, the alternative to j(1) = 0 should be j(1) > 0 which means p1 < 0. Similarly, the stationary alternative to j(-1) = 0 is j(-1) > 0 which corresponds to p2 < 0. Finally, the alternative to |j(i)| = 0 is |j(i)| > 0. Since the null is two-dimensional, it is simplest to compute an F-type of statistic for the joint null, p3 = p4 = 0, against the alternative that they are not both equal to zero. An alternative strategy is to compute a two-sided test of p4 = 0, and if this is accepted, continue with a one-sided test of p3 = 0 against the alternative p3 < 0. If we restrict our attention to alternatives where it is assumed that p4 = 0, a one-sided test for p3 would be appropriate with rejection for p3 < 0. Potentially this could lack power if the first-step assumption is not warranted. In the more complex setting where the alternative includes the possibility of deterministic components it is necessary to allow mt π 0. The testable model becomes j* (B) y4t = p 1 y1t -1 + p 2 y2t -1 + p 3 y3t - 2 + p 4 y3t -1 + m t + e t ,
(3.8)
which can again be estimated by OLS and the statistics on the p’s used for inference. The asymptotic distribution of the t-statistics from this regression were analyzed by Chan and Wei (1988). The basic finding is that the asymptotic distribution theory for these tests can be extracted from that of Dickey and Fuller (1979) and Fuller (1976) for p1 and p2, and from Dickey, Hasza, and Fuller (1984) for p3 if p4 is assumed to be zero. The tests are asymptotically similar or invariant with respect to nuisance parameters. Furthermore, the finite-sample results are well approximated by the asymptotic theory and the tests have reasonable power against each of the specific alternatives. It is clear that several null hypotheses will be tested for each case of interest. These can all be computed from the same least-squares regression (3.6) or (3.8) unless the sequential testing of p3 and p4 is desired. To show intuitively how these limiting distributions relate to the standard unit-root tests consider (3.6) with j(B) = 1. The test for p1 = 0 will have the familiar Dickey–Fuller distribution if p2 = p3 = p4 = 0 since the model can be written in the form y1t = (1 + p 1 ) y1t -1 + e t . Similarly, y2t = -(1 + p 2 ) y2t -1 + e t , if the other p’s are zero. This is a test for a root of -1 which was shown by Dickey and Fuller to be the mirror of the Dickey–Fuller
198
S. Hylleberg, R. F. Engle, C. W. J. Granger, and B.S. Yoo
distribution. If y2t is regressed on -y2t-1, the ordinary DF distribution will be appropriate. The third test can be written as y3t = -(1 + p 3 ) y3t - 2 + e t , assuming p4 = 0 which is therefore the mirror of the Dickey–Hasza–Fuller distribution for biannual seasonality. The inclusion of y3t-1 in the regression recognizes potential phase shifts in the annual component. Since the null is that p3 = p4 = 0, the assumption that p4 = 0 may merely reduce the power of the test against some alternatives. To show that the same distributions are obtained when it is not known a priori that some of the p’s are zero, two cases must be considered. First, if the p’s other than the one being tested are truly nonzero, then the process does not have unit roots at these frequencies and the corresponding y’s are stationary. The regression is therefore equivalent to a standard augmented unit-root test. If however some of the other p’s are zero, there are other unit roots in the regression. However, it is exactly under this condition that it is shown in section 2 that the corresponding y’s are asymptotically uncorrelated. The distribution of the test statistic will not be affected by the inclusion of a variable with a zero coefficient which is orthogonal to the included variables. For example, when testing p1 = 0, suppose p2 = 0 but y2 is still included in the regression. Then y1 and y2 will be asymptotically uncorrelated since they have unit roots at different frequencies and both will be asymptotically uncorrelated with lags of y4 which is stationary. The test for p1 = 0 will have the same limiting distribution regardless of whether y2 is included in the regression. Similar arguments follow for the other cases. When deterministic components are present in the regression even if not in the data, the distributions change. Again, the changes can be anticipated from this general approach. The intercept and trend portions of the deterministic mean influence only the distribution of p1 because they have all their spectral mass at zero frequency. Once the intercept is included, the remaining three seasonal dummies do not affect the limiting distribution of p1. The seasonal dummies, however, do affect the distribution of p2, p3, and p4. Table 10.1a gives the Monte Carlo critical values for the one-sided ‘t’ tests on p1, p2, and p3 in the most important cases. These are very close to the Monte Carlo values from Dickey–Fuller and Dickey–Hasza– Fuller for the situations in which they tabulated the statistics. In Table 10.1b we present the critical values of the two-sided ‘t’ test on p4 = 0 and the critical values for the ‘F ’ test on p3 « p4 = 0. Notice that the distribution of the ‘t’ statistic is very similar to a standard normal except when the auxiliary regression contains seasonal dummies, in which case it becomes fatter-tailed. The distribution for the ‘F’ statistic also looks like an F distribution with degrees of freedom equal to two
48 100 136 200
Table 10.1a. Critical values from the small-sample distributions of test statistics for seasonal unit roots, based on 24,000 Monte Carlo replications; data-generating process Δ_4 x_t = ε_t ~ nid(0, 1). [Rows: auxiliary regressions with (i) no intercept, no seasonal dummies, no trend; (ii) intercept only; (iii) intercept and seasonal dummies; (iv) intercept and trend; (v) intercept, seasonal dummies, and trend, each for T = 48, 100, 136, 200. Columns: the 0.01, 0.025, 0.05, and 0.10 fractiles of the one-sided 't' statistics for π_1, π_2, and π_3.]

Table 10.1b. Critical values from the small-sample distributions of test statistics for seasonal unit roots, based on 24,000 Monte Carlo replications; data-generating process Δ_4 x_t = ε_t ~ nid(0, 1). [Rows: the same auxiliary regressions and sample sizes as in Table 10.1a. Columns: the 0.01, 0.025, 0.05, 0.10, 0.90, 0.95, 0.975, and 0.99 fractiles of the two-sided 't' statistic for π_4, and the 0.90, 0.95, 0.975, and 0.99 fractiles of the 'F' statistic for π_3 ∩ π_4.]
and T minus the number of regressors in (3.6). However, when seasonal dummies are present, the tail becomes fatter here as well.
4. ERROR-CORRECTION REPRESENTATION
In this section, an error-correction representation is derived which explicitly takes the cointegrating restrictions at the zero and at the seasonal frequencies into account. As the time series being considered has poles at different locations on the unit circle, various cointegrating situations are possible. This naturally makes the general treatment mathematically complex and notationally involved. Although we treat the general case we will present the special cases considered to be of most interest. Let xt be an N ¥ 1 vector of quarterly time series, each of which potentially has unit roots at zero and all seasonal frequencies, so that each component of (1 - B4)xt is a stationary process but may have a zero on the unit circle. The Wold representation will thus be
(1 - B4 ) xt = C (B)e t ,
(4.1)
where ε_t is a vector white noise process with zero mean and covariance matrix Ω, a positive definite matrix. There are a variety of possible types of cointegration for such a set of series. To initially examine these, apply the decomposition of (3.2) to each element of C(B). This gives

C(B) = Σ_{k=1}^{p} Λ_k Δ(B)/δ_k(B) + C**(B)Δ(B),

where δ_k(B) = 1 - (1/θ_k)B and Δ(B) is the product of all the δ_k(B). For quarterly data the four relevant roots, θ_k, are 1, -1, i, and -i, which after solving for the Λ's becomes

C(B) = Ψ_1[1 + B + B^2 + B^3] + Ψ_2[1 - B + B^2 - B^3] + (Ψ_3 + Ψ_4 B)[1 - B^2] + C**(B)(1 - B^4),    (4.2)
where Ψ_1 = C(1)/4, Ψ_2 = C(-1)/4, Ψ_3 = Re[C(i)]/2, and Ψ_4 = Im[C(i)]/2. Multiplying (4.1) by a vector α′ gives

(1 - B^4)α′x_t = α′C(B)ε_t.

Suppose for some α = α_1, α′_1 C(1) = 0 = α′_1 Ψ_1; then there is a factor of (1 - B) in all terms, which will cancel out giving

(1 + B + B^2 + B^3)α′_1 x_t = α′_1 {Ψ_2[(1 + B^2)] + (Ψ_3 + Ψ_4 B)[1 + B] + C**(B)[1 + B + B^2 + B^3]}ε_t,

so that α′_1 x_t will have unit roots at the seasonal frequencies but not at zero frequency. Thus x is cointegrated at zero frequency with cointegrating vector α_1, if α′_1 C(1) = 0. Denote these as
x_t ~ CI_0   with cointegrating vector α_1.
Notice that the vector y_{1t} = S(B)x_t is I_0(1) since (1 - B)y_{1t} = C(B)ε_t, while α′_1 y_{1t} is stationary whenever α′_1 C(1) = 0, so that y_{1t} is cointegrated in exactly the sense described in Engle and Granger (1987). Since y_{1t} is essentially seasonally adjusted x_t, it follows that one strategy for estimation and testing for cointegration at zero frequency in seasonal series is to first seasonally adjust the series.

Similarly, letting y_{2t} = -(1 - B)(1 + B^2)x_t, (1 + B)y_{2t} = -C(B)ε_t, so that y_{2t} has a unit root at -1. If α′_2 C(-1) = 0, then α′_2 Ψ_2 = 0 and α′_2 y_{2t} will not have a unit root at -1. We say then that x_t is cointegrated at frequency ω = 1/2, which is denoted x_t ~ CI_{1/2} with cointegrating vector α_2.

Finally denote y_{3t} = -(1 - B^2)x_t, which satisfies (1 + B^2)y_{3t} = -C(B)ε_t and therefore includes unit roots at frequency 1/4. If α′_3 C(i) = 0, which implies that α′_3 Ψ_3 = α′_3 Ψ_4 = 0, then α′_3 y_{3t} will not have a unit root at 1/4, implying that x_t ~ CI_{1/4} with cointegrating vector α_3.

Cointegration at frequency 1/4 can also occur under weaker conditions. Consider the bivariate system:

(1 + B^2)x_t = [ 1      0
                 B   1 + B^2 ] ε_t,

in which both series are I_{1/4}(1) and there is no fixed cointegrating vector. However, the polynomial cointegrating vector (PCIV), as introduced by Yoo (1987), of (-B, 1) will generate a stationary series. It is not surprising, with seasonal unit roots, that the timing could make a difference. We now show that the need for PCIV is a result purely of the fact that one vector is sought to eliminate two roots (±i), and that one lag in the cointegrating polynomial is sufficient. Expanding the PCIV α(B) about the two roots (±i) using (3.2) gives

α(B) = Re[α(i)] + B Im[α(i)] + α**(B)(1 + B^2) ≡ (α_3 + α_4 B) + α**(B)(1 + B^2),

so that the condition that α′(B)C(B) have a common factor of (1 + B^2) depends only on α_3 and α_4. The general statement of cointegration at frequency 1/4 then becomes

x_t ~ CI_{1/4}   with polynomial cointegrating vector α_3 + α_4 B,

if and only if (α′_3 + α′_4 i)(Ψ_3 - Ψ_4 i) = 0, which is equivalent to α(i)′C(i) = 0.
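The bivariate example above is easy to simulate. The sketch below (numpy, zero starting values) generates the two series and contrasts an arbitrary fixed linear combination with the polynomial combination x_{2t} - x_{1,t-1}, which is stationary by construction.

```python
# Sketch of the PCIV example: both series are I_{1/4}(1), no fixed cointegrating
# vector exists, but the polynomial vector (-B, 1) yields a stationary series.
import numpy as np

rng = np.random.default_rng(6)
T = 400
e1, e2 = rng.normal(size=T), rng.normal(size=T)

x1 = np.zeros(T); x2 = np.zeros(T)
for t in range(2, T):
    x1[t] = -x1[t - 2] + e1[t]                              # (1 + B^2) x1_t = e1_t
    x2[t] = -x2[t - 2] + e1[t - 1] + e2[t] + e2[t - 2]      # (1 + B^2) x2_t = B e1_t + (1 + B^2) e2_t

pciv = x2[1:] - x1[:-1]                  # (-B, 1): x2_t - x1_{t-1}
fixed = x2 - x1                          # an arbitrary fixed vector, for contrast

print("std of x1, x2:       %.1f  %.1f" % (x1.std(), x2.std()))
print("std of fixed combo:  %.1f" % fixed.std())
print("std of PCIV combo:   %.2f   (close to the std of e2)" % pciv.std())
```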
There is no guarantee that x_t will have any type of cointegration or that these cointegrating vectors will be the same. It is however possible that α_1 = α_2 = α_3, α_4 = 0, and therefore one cointegrating vector could reduce the integration of the x series at all frequencies. Similarly, if α_2 = α_3, α_4 = 0, one cointegrating vector will eliminate the seasonal unit roots. This might be expected if the seasonality in two series is due to the same source.

A characterization of the cointegrating possibilities has now been given in terms of the moving-average representation. More useful are the autoregressive representations and, in particular, the error-correction representation. Therefore, if C(B) is a rational matrix in B, it can be written [using the Smith–McMillan decomposition [Kailath (1980)], as adapted by Yoo (1987), and named the Smith–McMillan–Yoo decomposition by Engle (1987)] as follows:

C(B) = U(B)^{-1} M(B) V(B)^{-1},    (4.3)

where M(B) is a diagonal matrix whose determinant has roots only on the unit circle, and the roots of the determinants of U(B)^{-1} and V(B)^{-1} lie outside the unit circle. This diagonal could contain various combinations of the unit roots. However, assuming that the cointegrating rank at each frequency is r, the matrix can be written without loss of generality as

M(B) = [ I_{N-r}     0
         0       Δ_4 I_r ],    (4.4)
where I_k is a k × k unit matrix. The following derivation of the error-correction representation is easily adapted for other forms of M(B). Substituting (4.3) into (4.1) and multiplying by U(B) gives

Δ_4 U(B)x_t = M(B)V(B)^{-1}ε_t.    (4.5)

The first N - r equations have a Δ_4 on the left side only, while the final r equations have Δ_4 on both sides, which therefore cancel. Thus (4.5) can be written as

M̃(B)U(B)x_t = V(B)^{-1}ε_t,    (4.6)

with

M̃(B) = [ Δ_4 I_{N-r}   0
          0            I_r ].    (4.7)
Finally, the autoregressive representation is obtained by multiplying by V(B) to obtain

A(B)x_t = ε_t,    (4.8)

where
A(B) = V(B)M̃(B)U(B).    (4.9)

Notice that at the seasonal and zero-frequency roots, det[A(θ)] = 0, since A(B) has rank r at those frequencies. Now, partition U(B) and V(B) as

U(B) = [ U_1(B)′
         α(B)′ ],   V(B) = [V_1(B), γ(B)],

where α(B) and γ(B) are N × r matrices and U_1(B) and V_1(B) are N × (N - r) matrices. Expanding the autoregressive matrix using (3.3) gives

A(B) = Π_1 B[1 + B + B^2 + B^3] - Π_2 B[(1 - B)(1 + B^2)] + (Π_4 - BΠ_3)B[1 + B^2] + A*(B)[1 - B^4],

with Π_1 = -γ(1)α′(1)/4 ≡ -γ_1α′_1, Π_2 = -γ(-1)α(-1)′/4 ≡ -γ_2α′_2, Π_3 = Re[γ(i)α(i)′]/2, and Π_4 = Im[γ(i)α(i)′]/2. Letting α_1 = α(1)/4, α_2 = α(-1)/4, α_3 = Re[α(i)]/2, and α_4 = Im[α(i)], while γ_1 = γ(1), γ_2 = γ(-1), γ_3 = Re[γ(i)], and γ_4 = Im[γ(i)], the general error-correction model can be written

A*(B)Δ_4 x_t = γ_1α′_1 y_{1,t-1} + γ_2α′_2 y_{2,t-1} - (γ_3α′_3 - γ_4α′_4)y_{3,t-2} + (γ_4α′_3 + γ_3α′_4)y_{3,t-1} + ε_t,    (4.10)
where A*(0) = C(0) = I_N in the standard case. This expression is an error-correction representation where both α, the cointegrating vector, and γ, the coefficients of the error-correction term, may be different at different frequencies and, in one case, even at different lags. This can be written in a more transparent form by allowing more than two lags in the error-correction term. Add Δ_4(γ_3α′_4 + γ_4α′_3 + γ_4α′_4 B)x_{t-1} to both sides and rearrange terms to get

Ã*(B)Δ_4 x_t = γ_1α′_1 y_{1,t-1} + γ_2α′_2 y_{2,t-1} - (γ_3 + γ_4 B)(α′_3 + α′_4 B)y_{3,t-2} + ε_t,    (4.11)
where Ã*(B) is a slightly different autoregressive matrix from A*(B). The error-correction term at the annual seasonal enters potentially with two lags and is potentially a polynomial cointegrating vector. When α_4 = 0 or γ_4 = 0 or both, the model simplifies so that, respectively, cointegration is contemporaneous, the error correction needs only one lag, or both. Notice that all the terms in (4.11) are stationary.

Estimation of the system is easily accomplished if the α's are known a priori. If they must be estimated, it appears that a generalization of the two-step estimation procedure proposed by Engle and Granger (1987) is available. Namely, estimate the α's using prefiltered variables y_1, y_2, and y_3, respectively, and then estimate the full model using the estimates of the α's. In the PCIV case this regression would include a single lag. It is conjectured that the least-squares estimates of the remaining para-
meters would have the same limiting distribution as the estimator knowing the true α's, just as in the Engle–Granger two-step estimator. The analysis by Stock (1987) suggests that although the inference on the α's can be tricky due to their nonstandard limiting distributions, inference on the estimates of A*(B) and the γ's can be conducted in the standard way.

The following generalizations of the above analysis are discussed formally in Yoo (1987). First, if r > 1 but all other assumptions remain as before, the error-correction representation (4.11) remains the same but the α's and γ's now become N × r matrices. Second, if the cointegrating rank at the long-run frequency is r_0, which is different from the cointegrating rank at the seasonal frequency, r_s, (4.11) is again legitimate with the sizes of the matrices on the right-hand side appropriately redefined. Thirdly, if the cointegrating vectors α_1, α_2, and α_3 coincide, equalling say α, and α_4 = 0, a simpler error-correction model occurs:

A*(B)Δ_4 x_t = γ(B)α′x_{t-1} + ε_t,    (4.12)

where the degree of γ(B) is at most 3, as can be seen either from (4.10) or from an expansion of γ(B) using (3.2). For four roots there are potentially four coefficients and three lags. Finally, some of the cointegrating vectors may coincide but some do not. A particularly interesting case is where a single linear combination eliminates all seasonal unit roots. Thus suppose α_2 = α_3 ≡ α_s and α_4 = 0. Then (4.10) becomes

A*(B)Δ_4 x_t = γ_1α′_1 S(B)x_{t-1} + γ_s(B)α′_s Δx_{t-1} + ε_t,    (4.13)
where γ_s(B) has potentially two lags. Thus zero-frequency cointegration occurs between the elements of seasonally adjusted x, while seasonal cointegration occurs between the elements of differenced x. This is the case examined by Engle, Granger, and Hallman (1989) for electricity demand. There monthly electricity sales were modeled as cointegrated with economic variables such as customers and income at zero frequency, and possibly at seasonal frequencies with the weather. The first relation is used in long-run forecasting, while the second is mixed with the short-run dynamics for short-run forecasting.

Although an efficiency gain in the estimates of the cointegrating vectors is naturally expected by checking and imposing the restrictions between the cointegrating vectors, there should be no efficiency gain in the estimates of the "short-run parameters", namely A*(B) and the γ's, given the superconsistency of the estimates of the cointegrating vectors. Hence, the representation (4.11) is considered relatively general, and the important step of the model-building procedure is then to identify the cointegratedness at the different frequencies. This question is considered in the next section.
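The first step of the two-step idea can be sketched directly. The code below (numpy, simulated data with a single shared (1 - B^4) factor, illustrative filters) estimates a cointegrating coefficient at each frequency from static regressions on the prefiltered series y_1, y_2, y_3; the fitted equilibrium errors would then enter a regression like (4.11) as lagged regressors.

```python
# Sketch of the two-step approach for the seasonal error-correction model.
import numpy as np

rng = np.random.default_rng(7)
T = 400
common = np.zeros(T)
for t in range(4, T):
    common[t] = common[t - 4] + rng.normal()            # shared (1 - B^4) factor
x = np.column_stack([common + rng.normal(size=T),
                     0.5 * common + rng.normal(size=T)])  # two cointegrated series

def filt(w, coefs):                                     # apply a polynomial in B to w
    out = np.zeros_like(w)
    for k, c in enumerate(coefs):
        out[k:] += c * w[:len(w) - k]
    return out

alphas = {}
for name, coefs in [("zero", [1, 1, 1, 1]),             # y1 = (1 + B + B^2 + B^3) x
                    ("biannual", [-1, 1, -1, 1]),       # y2 = -(1 - B + B^2 - B^3) x
                    ("annual", [-1, 0, 1])]:            # y3 = -(1 - B^2) x
    y = np.column_stack([filt(x[:, j], coefs) for j in range(2)])[8:]
    a, *_ = np.linalg.lstsq(y[:, [1]], y[:, 0], rcond=None)   # static first-step regression
    alphas[name] = float(a[0])
print(alphas)   # the second step regresses D4 x_t on the lagged equilibrium errors
```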
5. TESTING FOR COINTEGRATION: AN APPLICATION

In this section it is assumed that there are two series of interest, x_{1t} and x_{2t}, both integrated at some of the zero and seasonal frequencies, and the question to be studied is whether or not the series are cointegrated at some frequency. Of course, if the two series do not have unit roots at corresponding frequencies, the possibility of cointegration does not exist. The tests discussed in section 3 can be used to detect which unit roots are present.

Suppose for the moment that both series contain unit roots at the zero frequency and at least some of the seasonal frequencies. If one is interested in the possibility of cointegration at the zero frequency, a strategy could be to form the static O.L.S. regression

x_{1t} = A x_{2t} + residual,

and then test if the residual has a unit root at zero frequency, which is the procedure in Engle and Granger (1987). However, the presence of seasonal unit roots means that A may not be consistently estimated, in sharp contrast to the case when there are no seasonal roots, when Â is estimated superefficiently. This lack of consistency is proved in Engle, Granger, and Hallman (1989). If, in fact, x_{1t} and x_{2t} are cointegrated at both the zero and the seasonal frequencies, with cointegrating vectors α_1 and α_s and with α_1 ≠ α_s, it is unclear what value of A would be chosen by the static regression. Presumably, if α_1 = α_s, then Â will be an estimate of this common value. These results suggest that the standard procedure for testing for cointegration is inappropriate.

An alternative strategy would be to filter out unit-root components other than the one of interest and to test for cointegration with the filtered series. For example, to remove seasonal roots, one could form

x̃_{1t} = S(B)x_{1t},   x̃_{2t} = S(B)x_{2t},

where S(B) = (1 - B^s)/(1 - B), and then perform a standard cointegration test, such as those discussed in Engle and Granger (1987), on x̃_{1t} and x̃_{2t}. If some seasonal unit roots were thought to be present in x_{1t} and x_{2t}, this procedure could be done without testing for which roots were present, but the filtered series could have spectra with zeros at some seasonal frequencies, and this may introduce problems with the tests. Alternatively, the tests of section 3 could be used, appropriate filters applied just to remove the seasonal roots indicated by these tests, and then the standard cointegration tests applied. For zero-frequency cointegration, this procedure is probably appropriate, although the implications of the pretesting for seasonal roots has not yet been investigated.
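The point about contamination of the static regression can be illustrated with simulated data. In the sketch below (numpy, arbitrary parameters, true zero-frequency coefficient equal to 2), the levels regression is pulled away from the zero-frequency coefficient by independent seasonal unit-root components, whereas the regression on the S(B)-filtered series is not.

```python
# Illustration: levels regression vs regression on S(B)-filtered series.
import numpy as np

rng = np.random.default_rng(8)
T = 2000
walk = np.cumsum(rng.normal(size=T))              # shared zero-frequency unit root
seas1 = np.zeros(T); seas2 = np.zeros(T)
for t in range(1, T):
    seas1[t] = -seas1[t - 1] + rng.normal()       # independent roots at -1
    seas2[t] = -seas2[t - 1] + rng.normal()

x2 = walk + seas2 + rng.normal(size=T)
x1 = 2.0 * walk + seas1 + rng.normal(size=T)      # true zero-frequency coefficient: 2

def S_filter(w):                                  # S(B) = 1 + B + B^2 + B^3
    return w[3:] + w[2:-1] + w[1:-2] + w[:-3]

for label, a, b in [("levels", x1, x2), ("S(B)-filtered", S_filter(x1), S_filter(x2))]:
    print(label, "estimate of A:", round(float(np.sum(a * b) / np.sum(b * b)), 2))
```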
To test for seasonal cointegration the corresponding procedure would be to difference the series to remove a zero-frequency unit root, if required, then run a regression of the form

Δx_{1t} = Σ_{j=0}^{s-2} a_j Δx_{2,t-j} + residual,
and test if the residual has any seasonal unit roots. The tests developed in section 3 could be applied, but will not have the same distribution as they involve estimates of the a_j. The correct test has yet to be developed.

A situation where the tests of section 3 can be applied directly is where α_1 = α_s and some theory suggests a value for this α, so that no estimation is required. One merely forms x_{1t} - a x_{2t} and tests for unit roots at the zero and seasonal frequencies. An example comes from the permanent income hypothesis, where the log of income and the log of consumption may be thought to be cointegrated with a = 1. Thus c - y should have no unit roots using a simplistic form of this theory, as discussed by Davidson, Hendry, Srba, and Yeo (1978), for instance.

To illustrate the tests, quarterly United Kingdom data for the period 1955.1 to 1984.4 were used with y = log of personal disposable income and c = log of consumption expenditures on nondurables. The data are shown in Fig. 10.1. From the figure, it is seen that both series may have a random-walk character, implying that we would expect to find a unit root at the zero frequency. However, the two series seem to drift apart, whereby cointegration at the zero frequency with cointegrating vector (1, -1) is less likely. For the seasonal pattern, it is clear that c contains a much stronger and less changing seasonal pattern than y, although even the seasonal consumption pattern changes over the sample period. Based on these preliminary findings, one may or may not find seasonal unit roots in c and y or both, but cointegration at the seasonal frequencies cannot be expected.

The tests are based on the auxiliary regression (3.6), where φ*(B) is a polynomial in B. The deterministic term is a zero, an intercept (I), an intercept and seasonal dummies (I, SD), an intercept and a trend (I, Tr), or an intercept, seasonal dummies, and a trend (I, SD, Tr). In the augmented regressions nonsignificant lags were removed, and for c and y this implied a lag polynomial of the form 1 - φ_1B - φ_4B^4 - φ_5B^5, where φ_1 was around 0.85, φ_4 around -0.32, and φ_5 around 0.25. For c - y the lag polynomial was approximately 1 - 0.29B - 0.22B^2 + 0.21B^4. The "t" statistics from these augmented regressions are shown in Table 10.2.
Figure 10.1. Income and consumption in the UK.
Table 10.2. Tests for seasonal unit roots in the log of UK consumption expenditure on nondurables c, in the log of personal disposable income y, and in the difference c - y; 1955.1–1984.4.

Variable  Auxiliary     't': π_1           't': π_2    't': π_3    't': π_4    'F': π_3 ∩ π_4
          regression^a  (zero frequency)   (biannual)  (annual)

c         —              2.45              -0.31        0.22       -0.84        0.38
          I             -1.62              -0.32        0.22       -0.87        0.40
          I, SD         -1.64               2.22       -1.47       -1.77        2.52
          I, Tr         -2.43              -0.35        0.18       -0.85        0.38
          I, SD, Tr     -2.33              -2.16       -1.53       -1.65        2.43

y         —              2.61              -1.44       -2.35^b     -2.51^b      5.68^b
          I             -1.50              -1.46       -2.38^b     -2.51^b      5.75^b
          I, SD         -1.56              -2.38       -4.19^b     -3.89^b     14.74^b
          I, Tr         -2.73              -1.46       -2.52^b     -2.23^b      5.46^b
          I, SD, Tr     -2.48              -2.30       -4.28^b     -3.46^b     13.74^b

c - y     —              1.54              -1.10       -0.98       -1.17        1.19
          I             -1.24              -1.10       -1.03       -1.16        1.22
          I, SD         -1.19              -2.64       -2.55       -2.79^b      8.25^b
          I, Tr         -2.40              -1.21       -1.05       -1.04        1.12
          I, SD, Tr     -2.48              -2.84       -2.72       -2.48^b      7.87^b

^a The auxiliary regressions were augmented by significant lagged values of the fourth difference of the regressand.
^b Significant at the 5% level.
The results indicate strongly a unit root at the zero frequency in c, y, and c - y, implying that there is no cointegration between c and y at the long-run frequency, at least not for the cointegrating vector [1, -1]. Similarly, the hypothesis that c, y, and c - y are I_{1/2}(1) cannot be rejected, implying that c and y are not cointegrated at the biannual cycle either. The results also indicate that the log of consumption expenditures on nondurables is I_{1/4}(1), as neither the "F" test nor the two "t" tests can reject the hypothesis that both π_4 and π_3 are zero. Such hypotheses are, however, firmly rejected for the log of personal disposable income, and conditional on these results, c and y cannot possibly be cointegrated at this frequency or at the frequency corresponding to the complex conjugate root, irrespective of the forms of the cointegrating vectors. In fact, conditional on π_4 being zero, the "t" test on π_3 cannot reject a unit root in c - y at the annual frequency in any of the auxiliary regressions. The assumption that π_4 = 0 is not rejected when seasonal dummies are absent, and the joint "F" test cannot reject in these cases either. When the auxiliary regression contains deterministic seasonals, both π_4 = 0 and π_3 ∩ π_4 = 0 are rejected, leading to a theoretical conflict which can of course happen with finite samples.
6. CONCLUSION
The theory of integration and cointegration of time series is extended to cover series with unit roots at frequencies different from the long-run frequency. In particular, seasonal series are studied with a focus upon the quarterly periodicity. It is argued that the existence of unit roots at the seasonal frequencies has similar implications for the persistence of shocks as a unit root at the long-run frequency. However, a seasonal pattern generated by a model characterized solely by unit roots seems unlikely as the seasonal pattern becomes too volatile, allowing “summer to become winter.” A proposition on the representation of rational polynomials allows reformulation of an autoregression isolating the key unit-root parameters. Based on least-squares fits of univariate autoregressions on transformed variables, similar to the well-known augmented Dickey–Fuller regression, tests for the existence of seasonal as well as zero-frequency unit roots in quarterly data are presented and tables of the critical values provided. By extending the definition of cointegration to occur at separate frequencies, the error-correction representation is developed by use of the Smith–McMillan lemma and the proposition on rational lag polynomials. The error-correction representation is shown to be a direct generalization of the well-known form, but on properly transformed variables. The theory is applied to the UK consumption function and it is shown that the unit-elasticity error-correction model is not valid at any frequency as long as we confine ourselves to only the consumption and income data.
REFERENCES Ahtola, J. and G.C. Tiao, 1987, Distributions of least squares estimators of autoregressive parameters for a process with complex roots on the unit circle, Journal of Time Series Analysis 8, 1–14. Barsky, R.B. and J.A. Miron, 1989, The seasonal cycle and the business cycle, Journal of Political Economy 97, 503–534. Bell, W.R. and S.C. Hillmer, 1984, Issues involved with the seasonal adjustment of economic time series, Journal of Business and Economic Statistics 2, 291–320. Bhargava, A., 1987, On the specification of regression models in seasonal differences, Mimeo. (Department of Economics, University of Pennsylvania, Philadelphia, PA). Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden-Day, San Francisco, CA).
Chan, N.H. and C.Z. Wei, 1988, limiting distributions of least squares estimates of unstable autoregressive processes, Annals of Statistics 16, 367–401. Davidson, J.E., D.F. Hendry, F. Srba, and S. Yeo, 1978, Econometric modeling of aggregate time series relationships between consumer’s expenditure and income in the U.K., Economic Journal 91, 704–715. Dickey, D.A. and W.A. Fuller, 1979, Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association 84, 427–431. Dickey, D.A., H.P. Hasza, and W.A. Fuller 1984, Testing for unit roots in seasonal times series, Journal of the American Statistical Association 79, 355–367. Engle, R.F., 1987, On the theory of cointegrated economic time series, U.C.S.D. discussion paper no. 87-26, presented to the European meeting of the econometric Society, Copenhagen, 1987. Engle, R.F. and C.W.J. Granger, 1987, Co-integration and error correction: Representation, estimation and testing, Econometrica 55, 251–276. Engle, R.F., C.W.J. Granger, and J. Hallman, 1989, Merging short- and long-run forecasts: An application of seasonal co-integration to monthly electricity sales forecasting, Journal of Econometrics 40, 45–62. Fuller, W.A., 1976, Introduction of statistical time series (Wiley, New York, NY). Grether, D.M. and M. Nerlove, 1970, Some properties of optimal seasonal adjustment, Econometrica 38, 682–703. Hylleberg, S., 1986, Seasonality in regression (Academic Press, New York, NY). Kailath, T., 1980, Linear systems (Prentice-Hall, Englewood Cliffs, NJ). Nelson, C.R. and C.I. Plosser, 1982, Trends and random walks in macroeconomic time series, Journal of Monetary Economics 10, 129–162. Nerlove, M., D.M. Grether, and J.L. Carvalho, 1979, Analysis of economic time series: A synthesis (Academic Press, New York, NY). Stock, J.H., 1987, Asymptotic properties of least squares estimates of cointegrating vectors, Econometrica 55, 1035–1056. Yoo, S., 1987, Co-integrated time series: Structure, forecasting and testing, Ph.D. dissertation, University of California, San Diego, CA.
CHAPTER 11
A Cointegration Analysis of Treasury Bill Yields* Anthony D. Hall, Heather M. Anderson, and Clive W. J. Granger**
Abstract This paper shows that yields to maturity of U.S. Treasury bills are cointegrated, and that during periods when the Federal Reserve specifically targeted short-term interest rates, the spreads between yields of different maturity define the cointegrating vectors. This cointegrating relationship implies that a single non-stationary common factor underlies the time series behavior of each yield to maturity and that risk premia are stationary. An error correction model which uses spreads as the error correction terms is unstable over the Federal Reserve's policy regime changes, but a model using post 1982 data is stable and is shown to be useful for forecasting changes in yields.

1. INTRODUCTION
A topic which is discussed frequently in the term structure literature is that of the relationships between yields associated with bonds of different maturities. Arbitrage arguments, often augmented by considerations about risk, are generally used to justify such relationships; the underlying problem is to explain the empirical observation that yields of different maturity appear to move together over time. Formal empirical analysis of the relationships between yields of different maturities is not straightforward because nominal yields are not generally considered to be stochastically stationary. It has long been
* Review of Economics and Statistics, 74, 1992, 116–126.
** Australian National University, University of California, San Diego, and University of California, San Diego, respectively. The authors wish to thank David Hendry and the referees for helpful comments on an earlier draft of this paper. Financial support for Hall from the Australian Research Council, Anderson from the P.E.O. International Peace Scholarship Fund, and for Granger from the National Science Foundation Grant SES 8902950 is gratefully acknowledged.
recognised that it is possible for sets of nonstationary variables to move together over time. Granger (1981) formalised this concept, defining such sets of variables as cointegrated variables, and since then various tests for cointegration and techniques for working with cointegrated variables have been developed. The literature which relates cointegration to the theory of the term structure is currently small. A few authors have tested for (and found) cointegration between the yield on a long-term bond and that on a short-term bond,1 but the question of how one might further apply the theory of cointegration to study the term structure is largely unanswered. This study suggests that the term structure for U.S. Treasury bills is well modelled as a cointegrated system. The organization of the paper is as follows. Section II relates the theory of cointegration, error correction models and common factors to well-known models of the term structure. Here, it is shown that if yields to maturity are integrated processes, then the term structure data are theoretically cointegrated. The cointegration expected here is of a special type, and the theoretical restrictions which should characterize this cointegration are derived and explored. Section III describes the data which have been used in this study. The empirical evidence that yields are cointegrated according to the predictions made in section II is presented in section IV. An estimated error correction model is presented to illustrate how this information can be utilised. The estimated model is statistically significant and is shown to be potentially useful for forecasting yields of Treasury bills. Section V concludes.
2. THEORETICAL FRAMEWORK
A. Theory of the Term Structure
Let R(k, t) be the continuously compounded yield to maturity of a k period pure discount bond, (k = 1, 2, 3, . . .) and let the forward rate F(k, t) be the rate of return from contracting at time t to buy a one period pure discount bond which matures at time t + k. Then F(1, t) = R(1, t), and forward rates can be recursively calculated from the Fisher-Hicks formulae,

R(k, t) = \frac{1}{k} \left[ \sum_{j=1}^{k} F(j, t) \right]    (1)

for k = 1, 2, 3, . . . .

1 See, for instance, Campbell and Shiller (1987) or Engle and Granger (1987).
Forward rates F(j, t) typically differ from the yield R(1, t + j - 1) actually realised, so that investors may be assumed to rely on their expectations of R(1, t + j - 1) when they choose between investing now or later. The relationship between forward and expected rates is assumed to be

F(j, t) = E_t[R(1, t + j - 1)] + L(j, t)    (2)

where E_t denotes expectations based on information available at time t and the L(j, t) are premia, which may account for risk considerations or for investors' preferences about liquidity. Substitution of equation (2) into equation (1) leads to a very general relationship between yields of different maturities, i.e.,

R(k, t) = \frac{1}{k} \left[ \sum_{j=1}^{k} E_t[R(1, t + j - 1)] \right] + L(k, t), \quad \text{where} \quad L(k, t) = \frac{1}{k} \left[ \sum_{j=1}^{k} L(j, t) \right]    (3)
This equation indicates that the yields of bonds with similar maturities will move together. Many of the traditional theories of the term structure focus on the properties of the premia L(k, t). The pure expectations hypothesis asserts that the L(k, t) are zero, while other versions of the expectations hypothesis assert that the premia are constant over time. Other assumptions about the premia would lead to different theories about the term structure, many of which are consistent with the framework described here. Despite its simplicity, equation (3) does not provide an immediately useful basis for empirical studies of the term structure. None of the variables on the right hand side of this equation are directly measurable, and there is considerable empirical evidence that yields to maturity are integrated rather than stationary processes, so that conventional statistical analysis is not necessarily appropriate in this context.
B. Integration and Cointegration within the Term Structure
A series X(t), which needs to be differenced d times before it has a stationary invertible ARMA representation, is said to be integrated of order d, and this property is represented by the notation X(t) ~ I(d). It is generally accepted that interest rates, and Treasury bill yields in particular, are well described as I(1) processes.2

2 See, for instance, Campbell and Shiller (1988), Stock and Watson (1988) or Engle and Granger (1987). For a formal analysis of Treasury Bill yields, see Anderson, Granger and Hall (1990).
Given that the vector series X(t) has only I(1) components, it is sometimes possible to find vectors of constants a_1, a_2, . . . , a_r such that the linear combinations a_i'X(t) are all I(0). In this case we say that X(t) is cointegrated, and we define the vectors a_1, a_2, . . . , a_r to be cointegrating vectors. The space spanned by the cointegrating vectors is called the cointegration space. Assuming that yields to maturity are integrated I(1) processes, the possibility that they might be cointegrated is seen by rearranging equation (3) to obtain

R(k, t) - R(1, t) = \frac{1}{k} \sum_{i=1}^{k-1} \sum_{j=1}^{i} E_t \Delta R(1, t + j) + L(k, t)    (4)

where ΔR(k, s) = R(k, s) - R(k, s - 1). The right hand side of equation (4) is stationary provided that ΔR(1, t) and the premia L(k, t) are stationary. Given these conditions, it follows that the left hand side of equation (4) is stationary and that (1, -1)' is a cointegrating vector for X(t) = [R(k, t), R(1, t)]'. This implies that each yield R(k, t) is cointegrated with R(1, t), and that the spreads between R(k, t) and R(1, t) are the stationary linear combinations of X(t) which result from the cointegration of X(t). We define the spread between the yields R(i, t) and R(j, t) as S(i, j, t) = R(i, t) - R(j, t). The cointegration implied by the above considerations is of a very special type. Specifically, the model predicts that any yield series is cointegrated with the one period yield, so that if we were to consider a set of n yield series (which included the one period yield), then each of the (n - 1), n dimensional spread vectors contained in the set

[(-1, 1, 0, . . . , 0)', (-1, 0, 1, 0, . . . , 0)', . . . , (-1, 0, . . . , 0, 1)']

is cointegrating for the (now augmented) vector X(t) = [R(1, t), R(k_2, t), R(k_3, t), . . . , R(k_n, t)]' (in which k_2, k_3, . . . , k_n are the maturities of the other (n - 1) bills). As these spread vectors are linearly independent, the cointegration space has rank (n - 1). Given the above arguments, it is straightforward to show that the spread between any two yields will be cointegrating. The spread vector associated with any two yields is just a linear combination of two of the spread vectors defined using the one period yield, and since linear combinations of stationary variables are also stationary it follows that this more general spread vector is cointegrating. An implication of the finding that any spread is cointegrating is that any set of (n - 1) linearly independent spread vectors defined in an n dimensional space will comprise a basis for the cointegrating space associated with X(t) = [R(1, t), R(k_2, t), R(k_3, t), . . . , R(k_n, t)]'. Thus any set of n yields will have a cointegrating rank of (n - 1).
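A small numerical check of this linear-algebra point is straightforward. The sketch below, for an assumed four-yield system, verifies that the spread vectors involving the one-period yield are linearly independent and that another spread vector, S(3, 2), lies in their span; the variable names are illustrative only.

    # Spread vectors for n = 4 yields X(t) = [R(1), R(2), R(3), R(4)]'
    import numpy as np

    n = 4
    # Rows are (-1, 1, 0, 0), (-1, 0, 1, 0), (-1, 0, 0, 1)
    basis = np.hstack([-np.ones((n - 1, 1)), np.eye(n - 1)])
    print(np.linalg.matrix_rank(basis))          # 3 = n - 1: linearly independent

    s32 = np.array([0.0, -1.0, 1.0, 0.0])        # spread vector for S(3, 2, t) = R(3) - R(2)
    coeffs, *_ = np.linalg.lstsq(basis.T, s32, rcond=None)
    print(np.allclose(basis.T @ coeffs, s32))    # True: S(3, 2) lies in the cointegration space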
This cointegration between yields of different maturity implies analogous cointegration between the one month holding returns associated with Treasury bills of different maturities. If H(k, t + 1) is the continuously compounded rate of return from t to t + 1 (one month) on a Treasury bill with k months to maturity at t, then it is straightforward to demonstrate that

H(1, t + 1) = R(1, t)

H(k, t + 1) = k[R(k, t) - R(k - 1, t)] - k[ΔR(k - 1, t + 1)] + R(k - 1, t + 1)

for k ≥ 2, and that the return in excess of the one-month rate will be

H(k, t + 1) - H(1, t + 1) = k[R(k, t) - R(k - 1, t)] - (k - 1)[ΔR(k - 1, t + 1)] + [R(k - 1, t) - R(1, t)]

for k ≥ 2. It follows from the properties of the yields that the holding returns are also I(1) processes, and that any set of n holding returns will have a cointegrating rank of (n - 1). If this set includes the one-month holding return, the (n - 1) “excess returns” will form a basis for the cointegrating space.
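The identities above can be applied mechanically to a panel of yields. The sketch below is a hedged illustration, assuming a (T x n) array with R[t, k-1] the yield of a k-month bill at month t; it uses the algebraically equivalent compact form H(k, t+1) = k R(k, t) - (k - 1) R(k-1, t+1).

    import numpy as np

    def holding_returns(R):
        """One-month holding returns and excess returns from a (T x n) yield array."""
        T, n = R.shape
        H = np.full((T, n), np.nan)
        H[1:, 0] = R[:-1, 0]                                  # H(1, t+1) = R(1, t)
        for k in range(2, n + 1):
            # Equivalent to k[R(k,t)-R(k-1,t)] - k[dR(k-1,t+1)] + R(k-1,t+1)
            H[1:, k - 1] = k * R[:-1, k - 1] - (k - 1) * R[1:, k - 2]
        excess = H[:, 1:] - H[:, [0]]                         # H(k, t+1) - H(1, t+1), k >= 2
        return H, excess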
C. Modeling Cointegrated Data
It was shown in Engle and Granger (1987) that cointegration implies and is implied by an error correction representation, which in the case of the series X(t)' = [R(1, t), R(2, t), . . . , R(k, t)], can be expressed by the equation

\Delta X(t) = -\delta [S(t-1) - \mu] + c(B)\Delta X(t-1) + d(B)\varepsilon(t)    (5)

where δ is a non-zero n × (n - 1) matrix, S(t) is an (n - 1) × 1 vector of spreads, c(B) and d(B) are polynomials in the lag operator B, and ε(t) is a vector of white noise, which may be contemporaneously correlated. The vector [S(t - 1) - μ] is called the error correction term, while δ is a matrix of adjustment coefficients. Statistical significance of the δ will show that the error correction model is a valid representation of the data, and support the hypothesis that the spreads contained in S(t) are cointegrating. The error correction model has a very sensible economic interpretation in this context. Equation (5) shows that although yields on bonds of
different maturity may diverge in the short run, the yields will adjust when the spreads between them deviate from the equilibrium value μ, so that in the long run yields of different maturity will move together. The error correction model does not necessarily imply that yields adjust because the spreads between them are out of equilibrium. As Campbell and Shiller (1987, 1988) point out in the context of their present value models of the term structure, the spreads might measure anticipated changes in yields. Using the short yields as an example, this merely implies that agents have more information in the spread for forecasting changes in short yields, than is available in the history of short yields alone. Thus the spreads are useful for forecasting changes in short yields, and the error correction model arises because of agents' forward looking behavior. An alternative interpretation of the cointegration between yields of different maturities arises from the relationship between cointegration and common factors. Stock and Watson (1988) show that when there are (n - p) linearly independent cointegrating vectors for a set of n I(1) variables, then each of these n variables can be expressed as a linear combination of p I(1) common factors and an I(0) component. Applying this result to the current context, we expect that there will be a single nonstationary common factor in yields of different maturity. Denoting the I(1) common factor by W(t), a simple representation of how it links the yield curve is given by

R(1, t) = A(1, t) + b1 W(t)
R(2, t) = A(2, t) + b2 W(t)
. . .
R(n, t) = A(n, t) + bn W(t)

in which the A(i, t) are I(0) variables. Since W(t) is I(1) while the A(i, t) are I(0), the observed long-run movement in each yield series is primarily due to movement in the common factor. Thus W(t) “drives” the time series behavior of each yield and determines how the entire yield curve changes over time. There may be a number of additional factors that explain the variation in the I(0) variables A(i, t), but these factors will be stationary and dominated by the nonstationary factor W(t). The assertion that the same common variable underlies the time series behavior of each yield to maturity is not new to the literature on the term structure. Cox, Ingersoll and Ross (1985) build a continuous time general equilibrium model of real yields to maturity in which the instantaneous interest rate is common to all yields. In the discrete time model developed in this paper it is emphasized that there is only one nonstationary I(1) common variable. Here, one could interpret this nonstationary common factor as the one period yield, or for that matter, any of the other period yields. It is also appropriate to think of this common
factor as something exogenous to the system of yields such as inflation, measures of monetary growth, or measures of investment.
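A small simulation may help fix ideas about this single-factor structure. The sketch below uses made-up parameter values and equal loadings (none of which come from the paper) to generate yields from one random-walk factor W(t) plus stationary components, and checks that the individual yields wander while the spreads stay tight, as the spread cointegrating vectors require.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n = 500, 4
    W = np.cumsum(rng.normal(scale=0.25, size=T))    # nonstationary common factor (random walk)
    A = np.zeros((T, n))
    for t in range(1, T):
        A[t] = 0.7 * A[t - 1] + rng.normal(scale=0.1, size=n)   # stationary idiosyncratic parts
    b = np.ones(n)                                   # equal loadings, so spreads are stationary
    R = A + np.outer(W, b)                           # yields R(i, t) = A(i, t) + b_i W(t)

    spreads = R[:, 1:] - R[:, [0]]                   # S(k, 1, t) = R(k, t) - R(1, t)
    print(R[:, 0].std())                             # large: the yield level wanders with W(t)
    print(spreads.std(axis=0))                       # small: spreads fluctuate around zero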
3. THE DATA
The analysis has been conducted on the nominal yield to maturity data from the FAMA Twelve Month Treasury Bill Term Structure File of the U.S. Government Securities File of the Center for Research in Securities Prices (CRSP) at the University of Chicago.The file contains twelve yield series on Treasury bills; one series for bills with one month to maturity, another for bills with two months to maturity, and so on to a series with twelve months to maturity. Full details of how the file has been constructed are given in the CRSP documentation. These data are particularly appropriate for an investigation of the term structure. The observed yield on each bill has been derived from the price of that bill on a given day (the last trading day of the month), so that the data relate to bills which are identical in all respects other than term, and unlike many yield data sets, the raw data have been neither interpolated over time nor interpolated over maturities. The nominal yield series studied here have been derived by taking the average of bid and asked quotes. The yields are standardized to a 30.4 day basis, and are expressed in percentages. The sample used consists of 228 observations for each series, dating from January 1970 until December 1988, but the series on yields to maturity for twelve month bonds has not been used because many of the observations were missing.3 The sample covers three monetary regimes which are distinguished by the degree of interest rate targeting undertaken by the Federal Reserve. The first regime, covering the period up to and including September 1979, corresponds to a period during which the Federal Reserve was targeting interest rates. The period from October 1979 to September 1982 covers the Federal Reserve’s “new operating procedures,” when it ceased targeting interest rates. The final regime, from October 1982 onwards, corresponds to the abandonment of the “new operating procedures” and the resumption of partial interest rate targeting. Plots of the yield data and differenced yield data for the four yields of shortest term are provided in Figs. 11.1 and 11.2. These are representative of all the yields to maturity, and they illustrate the similar behavior of the yields over the sample period. In particular, they illustrate that the yields were considerably more volatile during the “new operating procedures” regime than they have been at other times. Most of the analysis is based on full sample, but in view of the regime changes described above, and 3
3 Two observations for the eleven-month bill and one for the ten-month bill were also missing. These missing values were interpolated from the observed movements in the yield of the nine-month bill.
Figure 11.1. Yields to maturity (% per month).
Figure 11.2. Differenced yields to maturity (% per month).
of empirical evidence that these caused structural changes in the term structure,4 three subsets corresponding to the monetary regimes have also been analyzed. The SHAZAM (White (1978)) and PC-GIVE (Hendry (1989)) computer packages were used for the computations.
4. THE EMPIRICAL EVIDENCE
A. Time Series Properties of Individual Yields
Augmented Dickey-Fuller unit root test statistics were computed for each of the eleven yield series, and the details of this analysis can be 4
4 See, for instance, Huizinga and Mishkin (1986) or Hardouvelis (1988).
found in Anderson, Granger and Hall (1990). The full sample test statistics show no evidence against the null hypothesis that there is a unit root in yield levels, but the data clearly reject the null hypothesis that there is a unit root in the differences. When the three subsamples are examined, the same pattern emerges for each of the eleven yield series. A reasonable conclusion is that each yield to maturity is an I(1) process, over each of the Federal Reserve's monetary regimes.5
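Unit-root checks of this kind can be reproduced with standard tools. The sketch below is illustrative only: it assumes the eleven yields are held in a pandas DataFrame with a monthly DatetimeIndex, and the regime dates are taken from the text; the lag choices and function names are assumptions, not those used in Anderson, Granger and Hall (1990).

    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    regimes = {
        "full":   slice("1970-01", "1988-12"),
        "first":  slice("1970-01", "1979-09"),   # interest rate targeting
        "second": slice("1979-10", "1982-09"),   # "new operating procedures"
        "third":  slice("1982-10", "1988-12"),   # partial targeting resumed
    }

    def adf_table(yields, maxlag=12):
        rows = {}
        for name, period in regimes.items():
            sub = yields.loc[period]
            for col in sub.columns:
                stat_lvl, p_lvl, *_ = adfuller(sub[col].dropna(), maxlag=maxlag, autolag="AIC")
                stat_dif, p_dif, *_ = adfuller(sub[col].diff().dropna(), maxlag=maxlag, autolag="AIC")
                rows[(name, col)] = {"ADF level": stat_lvl, "p level": p_lvl,
                                     "ADF diff": stat_dif, "p diff": p_dif}
        return pd.DataFrame(rows).T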
B. Cointegration Analysis
We now consider the hypotheses of interest, namely, that the yields are cointegrated with (n - 1) cointegrating vectors corresponding to any set of n yields, and that the cointegrating vectors are the spread vectors.6 Johansen (1988) and Johansen and Juselius (1990) have developed likelihood based procedures which test for cointegration, estimate the cointegrating vectors and permit the testing of restrictions on the cointegrating vectors. These techniques have been applied to test the hypotheses of interest. The results for the analysis which uses the eleven yields to maturity are presented in Table 11.1. Johansen’s l-max and trace statistics accept the restriction that the rank of the cointegrating space is not more than ten, but strongly reject the hypotheses that the rank is not more than nine. This supports the proposition suggested by the theory that there are ten cointegrating vectors for the set of eleven yields. Conditional on there being ten cointegrating vectors, the null hypothesis that ten linearly independent spreads formed from the eleven yields comprise a basis for the cointegration space is rejected. This likelihood ratio test statistic is distributed as a chi-squared random variable with ten degrees of freedom under the null hypothesis. The value of the test statistic is 30.28,
5 The conclusion that yields to maturity are integrated processes cannot be true in a very strict sense because integrated series are unbounded, while nominal yields are bounded below by zero. Nevertheless, it is evident from the data that the statistical characteristics of yields are closer to those of I(1) series than they are to I(0) series, so that for the purposes of building models of the term structure it is appropriate to treat these yield series as if they were I(1).
6 A unit root analysis of spreads provides indirect evidence on these hypotheses and such an analysis of each spread between all of the yields can be found in Anderson, Granger and Hall (1990). The spreads are found to be stationary over the full sample and in the first and third subsamples, consistent with the proposal that each spread is cointegrating. In the second subsample, many of the spreads are found to be nonstationary. The yields may still be cointegrating in this subsample, but the spreads do not define the cointegrating vectors. While relevant in this context, a unit root test on a spread tests the null hypothesis that the vector [-1, 1]' is not cointegrating, rather than the required null that the vector [-1, 1]' is cointegrating. As well, for sets of more than two yields, the unit root tests do not test the joint hypotheses that the spread vectors are cointegrating.
Table 11.1 Hypothesis tests to determine the cointegrating rank for the set of yields R(1, t), . . . , R(11, t), full sample (1970:3–1988:12).

Null Hypothesis     λ-max Test    5% Critical    trace Test    5% Critical
about Rank r        Statistic     Value          Statistic     Value
r ≤ 10                 6.33          8.08            6.33          8.08
r ≤ 9                 29.38         14.10           35.71         17.84

Note: The critical values are from Johansen and Juselius (1990).
compared to its 5% critical value of 18.31. There are two plausible explanations for this rejection; either the spreads are not cointegrating, contradicting the theory, or the rejection has been caused by problems associated with the changes in the monetary regimes. To investigate the first possibility, subsets of the spreads were tested to see whether singly or jointly they are contained in the cointegrating space. A selection of all the possible tests of hypothesis involving subsets of the various spreads between the eleven yields are summarized in Table 11.2. In this table, the first column lists m yields, and the null hypothesis in each case is that the (m - 1) linearly independent spreads formed from these yields are contained in the cointegration space. These likelihood ratio test statistics are all conditional on the rank of cointegration space being ten. The first row reports the test statistic that the ten linearly independent spreads span the cointegration space. The next block of test statistics considers the null hypotheses that an individual spread belongs to the cointegrating space. We report the tests for all spreads involving the one-month yield and all spreads involving adjacent maturities. For the tests involving the one-month yield, the null is accepted for seven out of the ten spreads. We find that the spreads S(2, 1, t), S(3, 1, t) and S(4, 1, t) are not cointegrating. For the tests involving adjacent maturities, the null is accepted for six of the ten spreads, and in this instance the spreads S(2, 1, t), S(3, 2, t) S(7, 6, t) and S(11, 10, t) are not cointegrating. The next block of table 2 reports the test statistics obtained when we progressively increase the number of yields (k) in the subset, and test the null hypotheses that a set of (k - 1) linearly independent spreads formed from these yields belongs to the cointegrating space. All of these joint hypotheses are rejected. The final block of statistics reports test statistics on all possible subsets of spreads involving the four shortest yields to maturity. Again we find a mixture of acceptances and rejection of the null hypotheses. In general, rejections seem to occur when the spread involves either the one-month, two-month, three-month or eleven-month yields. A subsample analysis has not been performed in the eleven variable case due to degrees of freedom considerations. In order to analyze the
Table 11.2 Tests that spread vectors are cointegrating, full sample (1970:3–1988:12).

Spreads between          Test Statistic    DF    5% Critical Value
R(1) through R(11)           30.28         10         18.31
R(1), R(2)                    4.39          1          3.84
R(1), R(3)                    6.56          1          3.84
R(1), R(4)                    4.36          1          3.84
R(1), R(5)                    2.48          1          3.84
R(1), R(6)                    1.54          1          3.84
R(1), R(7)                    0.65          1          3.84
R(1), R(8)                    0.34          1          3.84
R(1), R(9)                    0.21          1          3.84
R(1), R(10)                   0.11          1          3.84
R(1), R(11)                   0.01          1          3.84
R(2), R(3)                    6.95          1          3.84
R(3), R(4)                    0.00          1          3.84
R(4), R(5)                    0.26          1          3.84
R(5), R(6)                    0.73          1          3.84
R(6), R(7)                    5.49          1          3.84
R(7), R(8)                    2.51          1          3.84
R(8), R(9)                    0.72          1          3.84
R(9), R(10)                   1.15          1          3.84
R(10), R(11)                  4.57          1          3.84
R(1) through R(3)             7.27          2          5.99
R(1) through R(4)            13.31          3          7.81
R(1) through R(5)            14.34          4          9.49
R(1) through R(6)            14.34          5         11.07
R(1) through R(7)            21.96          6         12.59
R(1) through R(8)            21.96          7         14.07
R(1) through R(9)            21.97          8         15.51
R(1) through R(10)           22.58          9         18.31
R(2), R(4)                    2.30          1          3.84
R(1), R(2), R(4)              4.81          2          5.99
R(1), R(3), R(4)              7.84          2          5.99
R(2), R(3), R(4)             12.81          2          5.99

Note: R(k) is the yield to maturity of a k period bill. Column one lists m yields. The null hypothesis in each case is that (m - 1) linearly independent spreads formed from these yields belong in the cointegration space. The tests are conditional on the rank of the cointegration space being 10, and the test statistics have a chi-squared distribution with DF degrees of freedom.
possible effects of the changes in the Federal Reserve's operating procedures, a detailed analysis of the four shortest yields has been performed. Table 11.3 reports the results of the tests to determine the cointegrating rank of these four yields. Over the full sample, the tests accept the null hypothesis that the rank of the cointegrating space is not
Table 11.3 Hypothesis tests to determine the cointegrating rank for the set of yields R(1, t), R(2, t), R(3, t), and R(4, t).

Sample                        Null Hypothesis    λ-max Test    5% Critical    trace Test    5% Critical
                              about Rank r       Statistic     Value          Statistic     Value
Full Sample 70:3–88:12        r ≤ 3                  6.42          8.08           6.42          8.08
                              r ≤ 2                 50.79         14.60          57.20         17.84
First Sample 70:3–79:9        r ≤ 3                  0.31          8.08           0.31          8.08
                              r ≤ 2                 39.27         14.60          39.58         17.84
Second Sample 79:10–82:9      r ≤ 3                  3.22          8.08           3.22          8.08
                              r ≤ 2                 10.50         14.60          13.72         17.84
                              r ≤ 1                 20.14         21.28          33.87         31.26
                              r = 0                 28.56         27.34          62.43         48.42
Third Sample 82:10–88:12      r ≤ 3                  1.64          8.08           1.64          8.08
                              r ≤ 2                 16.97         14.60          18.61         17.84

Note: The critical values are from Johansen and Juselius (1990).
more than three, but reject the null that the rank is not more than two. This confirms, as the theory predicts, that the four shortest yields are cointegrated and that the cointegrating rank is three. This result is repeated in the first and third subsample, but in the sample during which the new procedures were operating the tests suggest that the cointegrating rank is two. Hypothesis tests that the spreads are contained in the cointegration space are reported in Table 11.4. For the full sample, conditional on there being three cointegrating vectors, the hypothesis that three linearly independent spreads span the cointegrating space is rejected, and an analysis of subsets of these spreads also leads to some rejections.These results are consistent with the results of testing the same hypotheses in the eleven yield model. However, in the first and third subsamples, we can accept the hypothesis that the spreads form a basis for the cointegration space. With only one exception, each of the hypotheses that subsets of these spreads are contained in the cointegration space is not rejected. These results are consistent with the predictions of the theory. On the other hand, the results from an analysis of the second subsample are not consistent with the theory. Conditional on there being two cointegrating vectors, the tests indicate that none of the possible spread vectors are cointegrating. On the basis of this evidence, we conclude that during periods in which the Federal Reserve has targeted interest rates as an instrument of monetary policy, the tests broadly support the predictions of the theory. We find (n - 1) cointegrating vectors among each n yields to maturity, and it is reasonable to conclude that the spreads form a basis for the cointegrating space. This cointegrating relationship has the
Table 11.4 Tests that spread vectors are cointegrating.

                                        Sample Period
Spreads Between       70:3–88:12   70:3–79:9   79:10–82:9   82:10–88:12    DF   5% Critical Value
R(1) through R(4)        14.66        5.56         ...          1.87        3         7.81
R(1), R(2)                3.77        5.39         8.08         0.24        1         3.84
R(1), R(3)                5.49        2.83         8.04         0.02        1         3.84
R(1), R(4)                3.44        2.13         8.57         0.00        1         3.84
R(2), R(3)                6.20        0.00         7.29         0.15        1         3.84
R(2), R(4)                1.69        0.02        10.02         0.23        1         3.84
R(3), R(4)                0.45        0.05        12.92         0.29        1         3.84
R(1), R(2), R(3)          6.32        5.56        13.92         1.11        2         5.99
R(1), R(2), R(4)          3.86        5.48        21.50         1.66        2         5.99
R(1), R(3), R(4)          5.98        4.20        24.04         1.86        2         5.99
R(2), R(3), R(4)         14.09        0.05        23.21         0.30        2         5.99

Note: R(k) is the yield to maturity of a k period bill. Column one lists m yields. The null hypothesis in each case is that (m - 1) linearly independent spreads formed from these yields belong in the cointegration space. For the full sample and subsamples (70:3–79:9) and (82:10–88:12), the test statistics are conditional on there being 3 cointegrating vectors. For the subsample (79:10–82:9), the test statistics are conditional on 2 cointegrating vectors. All test statistics have a chi-squared distribution with DF degrees of freedom.
important implication that the risk or liquidity premia of Treasury bills are stationary I(0) variables. This conclusion follows directly from consideration of equation (4) and the empirical evidence that yields are I(1) and cointegrated processes, and the findings that the spreads between the yields define the cointegrating relationships. These relationships appear to have broken down during the period of the new operating procedures. During this time, the Federal Reserve placed primary emphasis on controlling the growth of reserves available to depository institutions while greatly expanding the allowable range of fluctuations in the federal funds rate. This period experienced wide gyrations in quarterly monetary growth rates despite the announced policy of controlling the growth in monetary aggregates, unusually high real interest rates, changing inflation and deteriorating economic conditions. Short-term interest rates were influenced almost exclusively by the private sector. There was a marked increase in the short-run volatility of interest rates, a conventional measure of the risk of holding long-term debt, presumably with a substantial impact on risk or liquidity premia. Over this period we observe a change in the cointegrating relationships between yields on Treasury bills. Yields are still cointegrated, but the spreads no longer define the cointegrating relationships, and there appears to be at least one extra nonstationary common factor over this
period. A reasonable explanation is that because of the uncertainty caused by the enhanced volatility in monetary growth, interest rates and economic activity resulting from the introduction of the new procedures, the risk or liquidity premia became nonstationary over this period, causing a breakdown of the cointegrating relationships.
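Rank tests of the kind reported in Tables 11.1 and 11.3 can be computed with standard likelihood-based routines. The sketch below is illustrative only: it assumes the four shortest yields sit in a DataFrame with columns "R1" to "R4" and a DatetimeIndex, and the deterministic specification and lag order are assumptions, not the paper's settings; the additional likelihood ratio tests that the spreads span the cointegration space are not provided by this routine and are not shown.

    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    samples = {
        "full":   ("1970-03", "1988-12"),
        "first":  ("1970-03", "1979-09"),
        "second": ("1979-10", "1982-09"),
        "third":  ("1982-10", "1988-12"),
    }

    def rank_tests(yields, k_ar_diff=1):
        out = {}
        for name, (start, end) in samples.items():
            data = yields.loc[start:end, ["R1", "R2", "R3", "R4"]].dropna()
            res = coint_johansen(data, det_order=0, k_ar_diff=k_ar_diff)
            out[name] = pd.DataFrame(
                {"lambda_max": res.lr2, "trace": res.lr1,
                 "5% cv (max)": res.cvm[:, 1], "5% cv (trace)": res.cvt[:, 1]},
                index=[f"r <= {r}" for r in range(data.shape[1])],
            )
        return out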
C. Error Correction Models
In this section we present an estimated error correction model using the four shortest yields to illustrate how these cointegration results might be utilised. The spreads are used to define the cointegrating vectors, but because this is not consistent with the data over the whole sample, estimation of the model is restricted to the period after the new operating procedures were abandoned.7 The error correction model presented here was derived by sequentially reducing a general unrestricted model that contained four lags of each of the differenced series. Details of the advantages of modeling by reduction may be found in Hendry (1989). Sets of (n - 1) linearly independent spread vectors are not unique for n ≥ 2, so that it was necessary to choose which spreads were to be used in the estimation of the error correction model. In theory it should not matter which spreads are used; in practice, we used the spreads S(2, 1, t), S(3, 2, t) and S(4, 3, t), since these were the least correlated. In estimating the model, it was necessary to include two dummy variables, D84 and D87, to account for outliers which occurred in October 1984 and October 1987 (the second outlier is presumably due to the effects of the stock market crash). Ordinary least squares (OLS) and full information maximum likelihood (FIML) model reductions lead to the same model specification and these estimates are presented in Tables 11.5 and 11.6. Diagnostic statistics reveal little insample evidence of misspecification.The diagnostic test statistics are those produced by PC-GIVE and details of each test statistic can be found in Hendry (1989). Forecast Chow tests of the null hypothesis that there is no change in any parameter between the sample period (January 1983 until December 1987) and the forecast period (January 1988 until December 1988) show weak evidence of change in the equation for R(1, t). This is apparently due to an outlying observation for R(1, t) in December 1988, which was more than five standard deviations away from the sample mean. Disregarding the effects of
7 As expected, error correction models estimated over the full sample showed evidence of instability. It may have been possible to obtain stable models of yields over longer samples by including (exogenous) volatility variables in the error correction model, or by introducing ARCH errors into the models, but these approaches were not tried here.
Table 11.5 OLS error correction model for the four variable system (1983:1–1987:12).

Explanatory          Model for ΔR(1, t)     Model for ΔR(2, t)     Model for ΔR(3, t)     Model for ΔR(4, t)
Variable             Coefficient   S.E.     Coefficient   S.E.     Coefficient   S.E.     Coefficient   S.E.
S(2, 1, t - 1)           .644      .205         —           —          —            —         —            —
S(3, 2, t - 1)           .665      .296        .994        .272        —            —         —            —
S(4, 3, t - 1)          -.682      .438       -.268        .406       .664         .382       —            —
ΔR(2, t - 1)             .293      .108        .224        .100        —            —         —            —
Constant                -.014      .006       -.012        .005      -.004         .004      .0002        .004
D84                     -.127      .030       -.113        .028      -.105         .029     -.098         .030
D87                     -.176      .030       -.125        .028      -.126         .029     -.100         .030

Diagnostic Statistics

Type                     ΔR(1, t)              ΔR(2, t)              ΔR(3, t)              ΔR(4, t)
Dep. Var. S.D.           0.0444                0.0375                0.0350                0.0345
R2                       0.6050                0.5185                0.3812                0.2723
Standard Error           0.0295                0.0272                0.0283                0.0299
Serial Correlation       1.35  F(12, 41)       0.75  F(12, 42)       0.31  F(12, 44)       0.49  F(12, 45)
ARCH                     2.40  F(4, 45)        1.45  F(4, 46)        0.41  F(4, 48)        0.19  F(4, 49)
Normality                1.39  χ2(2)           0.41  χ2(2)           1.00  χ2(2)           1.51  χ2(2)
Heteroskedasticity       1.00  F(11, 41)       0.51  F(9, 44)        0.52  F(5, 50)        0.45  F(3, 53)
Reset                    0.60  F(1, 52)        0.21  F(1, 53)        0.85  F(1, 55)        0.11  F(1, 56)
Functional Form          n.a.                  0.45  F(11, 42)       0.52  F(4, 51)        0.48  F(2, 54)
Chow                     2.75a F(12, 53)       0.89  F(12, 54)       0.64  F(12, 56)       0.61  F(12, 57)

Note: — implies that in the reduction process the estimated coefficient was found to be insignificant; n.a. means that the statistic was not computed. a Significant at the 5% critical level.
Table 11.6 FIML error correction model for the four variable system.

Explanatory          Model for ΔR(1, t)     Model for ΔR(2, t)     Model for ΔR(3, t)     Model for ΔR(4, t)
Variable             Coefficient   S.E.     Coefficient   S.E.     Coefficient   S.E.     Coefficient   S.E.
S(2, 1, t - 1)           .864      .114         —           —          —            —         —            —
S(3, 2, t - 1)           .602      .183        .983        .094        —            —         —            —
S(4, 3, t - 1)          -.873      .283       -.462        .188       .558         .106       —            —
ΔR(2, t - 1)             .186      .067        .092        .035        —            —         —            —
Constant                -.016      .005       -.011        .004       .003         .004      .0002        .004
D84                     -.128      .029       -.116        .027      -.105         .028     -.098         .029
D87                     -.171      .029       -.119        .027      -.126         .028     -.100         .029

Diagnostic Statistics

Type                        ΔR(1, t)    ΔR(2, t)    ΔR(3, t)    ΔR(4, t)
r actual/predicted            .7675       .7088       .6168       .5218
Standard Error                .0283       .0262       .0273       .0292

Note: — implies that in the reduction process the estimated coefficient was found to be insignificant.
this outlying observation, further Chow tests provide no evidence that the estimated models are unstable. Error correction terms have statistically significant coefficients, thereby confirming the cointegration found earlier and the validity of the error correction representation. It is interesting to note the manner in which the cointegrating vectors enter into each equation; the spreads are not relevant in the model for changes in the yield of longest maturity, but successively more spreads are needed to “explain” changes in yields as the term to maturity becomes shorter. This pattern is also found in other error correction models (not reported in this paper), estimated with different sets of yields. This type of model suggests that yields of longer maturities “drive” the term structure, with short-term yields adjusting to movements in the longer term yields. One interpretation of this observation is based on an expectations argument. The spreads between yields at the longer end of the term structure contain information about future shorter-term rates, and current short-term rates adjust according to this information.
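Because the cointegrating vectors are taken to be known spreads, each error correction equation can be estimated directly by least squares. The sketch below illustrates one such equation; the DataFrame, column names, dummy construction, and sample dates are assumptions for illustration, and the sequential model reduction and FIML estimation reported in Tables 11.5 and 11.6 are not reproduced.

    import pandas as pd
    import statsmodels.api as sm

    def ecm_equation(yields, dep="R1"):
        """OLS error correction equation for the change in one yield, with the
        spreads S(2,1), S(3,2), S(4,3) as error correction terms and dummies for
        1984:10 and 1987:10."""
        df = pd.DataFrame({
            "dR": yields[dep].diff(),
            "S21_1": (yields["R2"] - yields["R1"]).shift(1),
            "S32_1": (yields["R3"] - yields["R2"]).shift(1),
            "S43_1": (yields["R4"] - yields["R3"]).shift(1),
            "dR2_1": yields["R2"].diff().shift(1),
        })
        df["D84"] = (df.index.strftime("%Y-%m") == "1984-10").astype(float)
        df["D87"] = (df.index.strftime("%Y-%m") == "1987-10").astype(float)
        df = sm.add_constant(df).loc["1983-01":"1987-12"].dropna()
        return sm.OLS(df["dR"], df.drop(columns="dR")).fit()

    # Usage: print(ecm_equation(yields, dep="R1").summary())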
D. Forecasts
The existence of an error correction model implies some Granger-causality in the system, which in turn suggests that the error correction model may be a useful forecasting tool. The error correction model estimated by FIML has been used to obtain 12 one-step-ahead forecasts over the period 1988:1 to 1988:12, to illustrate its use for this purpose. These forecasts are compared with a set of naive no-change forecasts and the forecasts from an unrestricted second order vector autoregression (VAR). The dummy variables discussed above are included in the VAR model. Table 11.7 provides the summary statistics for these forecasts. The biases are all of the same order of magnitude and all forecast standard deviations are high, but the error correction model has smaller forecast standard deviations, leading to consistently smaller root mean square errors. The error correction model gives between a 4% and 16% reduction in root mean square error over the naive model, and smaller gains over the VAR. The improvements in forecasts using the error correction model are small (and not statistically significant), but they illustrate the potential of this type of model.
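The summary statistics in Table 11.7 are simple functions of the one-step-ahead forecast errors. The sketch below shows one way such a comparison might be computed; the arrays of realised changes and model forecasts are placeholders to be supplied by the user.

    import numpy as np

    def error_summary(actual_changes, forecast_changes):
        # Mean, standard deviation, and RMSE of one-step-ahead forecast errors
        e = np.asarray(actual_changes, dtype=float) - np.asarray(forecast_changes, dtype=float)
        return {"mean": e.mean(), "st_dev": e.std(ddof=1), "rmse": np.sqrt((e ** 2).mean())}

    # Usage: with `actual` the realised changes over 1988:1-1988:12, a naive no-change
    # forecast of the level implies a forecast of zero change, so
    #     naive = np.zeros_like(actual)
    #     error_summary(actual, ecm_forecasts)["rmse"] / error_summary(actual, naive)["rmse"]
    # gives an ECM/Naive RMSE ratio of the kind reported in the bottom panel of Table 11.7.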
5. CONCLUSION
This paper shows that it is appropriate to model the term structure of U.S. Treasury bills as a cointegrated system. During monetary regimes
Table 11.7 Summary statistics for one step-ahead forecast errors (1988:1–1988:12).

Variable    Method    Mean     St. Dev.    RMSE
ΔR(1)       Naive     .0144     .0568      .0586
            VAR       .0070     .0543      .0548
            ECM       .0033     .0491      .0492
ΔR(2)       Naive     .0188     .0239      .0304
            VAR       .0111     .0268      .0290
            ECM       .0151     .0228      .0274
ΔR(3)       Naive     .0184     .0164      .0247
            VAR       .0130     .0195      .0235
            ECM       .0152     .0169      .0228
ΔR(4)       Naive     .0164     .0182      .0245
            VAR       .0091     .0227      .0245
            ECM       .0151     .0181      .0236

RMSE Ratios

Ratio         ΔR(1)    ΔR(2)    ΔR(3)    ΔR(4)
ECM/Naive     .8406    .9006    .9220    .9626
ECM/VAR       .8987    .9421    .9696    .9631
characterized by stabilizing the short-run fluctuations in the federal funds rate, the spreads between yields of different maturity define the cointegrating vectors in this system. An error correction model implied by this cointegration is estimated, found to be statistically significant, and seems to provide more accurate forecasts of yields than naive no-change forecasts, or forecasts based on a VAR. During the period of the new operating procedures, yields are still cointegrated, but the spreads no longer define the cointegrating vectors. The type of cointegration found for monetary regimes that emphasize controlling short-term interest rates has the important implications that the term or liquidity premia of Treasury bills are stationary processes and that a single nonstationary common factor underlies the time series behavior of each yield to maturity. The common factor cannot be uniquely identified, and it could be a linear combination of several I(1) variables. It is worth emphasizing that this is a nonstationary factor and it may be possible to find a number of common stationary factors
that are useful in explaining the behavior of Treasury bill yields.8 Further research may suggest a useful way of identifying the common nonstationary factor so that it can then be estimated and studied. Much might be learned about the term structure if this common factor can be related to economic variables such as monetary growth and/or inflation, and further research on the common factor interpretation of cointegration in the term structure will undoubtedly improve our understanding of how the term structure changes over time.
REFERENCES Anderson, Heather M., Clive W. J. Granger, and Anthony D. Hall (1990), “Treasury Bill Yield Curves and Cointegration,” University of California, San Diego Discussion Paper Number 90–24. Campbell, John Y., and Robert J. Shiller (1987), “Cointegration and Tests of Present Value Models,” Journal of Political Economy 95, 1062–1088. —— (1988), “Interpreting Cointegrated Models,” Journal of Economic Dynamics and Control 12, 505–522. Cox, John C., Jonathan E. Ingersoll, and Stephen A. Ross (1985), “A Theory of the Term Structure of Interest Rates,” Econometrica 53, 385–407. Engle, Robert F., and Clive W. J., Granger (1987), “Cointegration and Error Correction Representation, Estimation, and Testing,” Econometrica 55, 251–276. Granger, Clive W. J. (1981), “Some Properties of Time Series Data and Their Use in Econometric Model Specification,” Journal of Econometrics 16, 121–130. Hardouvelis, Gikas A. (1988), “The Predictive Power of the Term Structure during Recent Monetary Regimes,” The Journal of Finance 43, 339–356. Hendry, David F. (1989), “PC-GIVE: An Interactive Econometric Modeling System,” Institute of Economics and Statistics and Nuffield College, University of Oxford. Huizinga, John, and Frederic S. Mishkin (1986), “Monetary Policy Regime Shifts and the Unusual Behavior of Real Interest Rates,” Carnegie-Rochester Conference on Public Policy 24, 231–274. Johansen, Soren (1988), “Statistical Analysis of Cointegration Vectors,” Journal of Economic Dynamics and Control 12, 231–254. Johansen, Soren, and Katarina Juselius (1990), “Maximum Likelihood Estimation and Inference on Cointegration–with Applications to the Demand for Money,” Oxford Bulletin of Economics and Statistics 52, 169–210. Knez, Peter, Robert Litterman, and José Scheinkman (1989), “Explorations into Factors Explaining Money Market Returns,” Goldman Sachs & Co., Discussion Paper No. 6. 8
For this reason, our analysis is consistent with estimated factor models that use the stationary excess holding returns of Treasury bills. With these data, Stambaugh (1989) finds two common factors while Knez, Litterman and Scheinkman (1988) report estimated models with three and four factors.
Stambaugh, Robert F. (1988), “The Information in Forward Rates: Implications for Models of the Term Structure,” Journal of Financial Economics 21, 41–70. Stock, James H., and Mark W. Watson (1988), “Testing for Common Trends,” Journal of the American Statistical Association 83, 1097–1107. White, Kenneth J. (1978), “A General Computer Program for Econometric Methods-SHAZAM,” Econometrica 46, 239–240.
CHAPTER 12
Estimation of Common Long-Memory Components in Cointegrated Systems* Jesus Gonzalo and Clive Granger
The study of cointegration in large systems requires a reduction of their dimensionality. To achieve this, we propose to obtain the I(1) common factors in every subsystem and then analyze cointegration among them. In this article, a new way of estimating common long-memory components of a cointegrated system is proposed. The identification of these I(1) common factors is achieved by imposing that they be linear combinations of the original variables Xt, and that the error-correction terms do not cause the common factors at low frequencies. Estimation is done from a fully specified error-correction model, which makes it possible to test hypotheses on the common factors using standard chi-squared tests. Several empirical examples illustrate the procedure. Keywords: common factors; cointegration; error-correction model; permanent–transitory decomposition.

If xt and yt are both integrated of order 1, denoted I(1), so that their changes are stationary, denoted I(0), they are said to be cointegrated if there exists a linear combination zt = yt - Axt, which is I(0). Several useful generalizations can be made of this definition, but this simple form is sufficient for the points proposed in this article. The basic ideas of cointegration were discussed by Granger (1986) and in the book of readings edited by Engle and Granger (1991). A simple constraint that results in cointegration involves an I(1) common factor ft:

\begin{bmatrix} y_t \\ x_t \end{bmatrix} = \begin{bmatrix} A \\ 1 \end{bmatrix} f_t + \begin{bmatrix} \tilde{y}_t \\ \tilde{x}_t \end{bmatrix}    (1)

where ỹt and x̃t are both I(0). Clearly zt = ỹt - Ax̃t, being a linear combination of I(0) series, will never be I(1) and usually will be I(0). The reverse is also true – if (xt, yt) are cointegrated, there must exist a

* Journal of Business and Economic Statistics, 13, 1995, 27–35.
common factor representation of the form (1), as proved by Stock and Watson (1988). A natural question that arises is how to estimate the common factor ft, which might be an unobserved factor and is the driving force that results in cointegration. It has been suggested in the literature quoted previously that cointegration can be equated with certain types of equilibrium in that, in the long-run future, the pair of series is expected to lie on the attractor line xt = Ayt. Although much attention has been given to estimation of the cointegrating vector (1, -A), relatively little attention has been given to estimation of ft. Notice that when the longrun equilibrium is estimated, the common factor ft is eliminated. There are several reasons why it is interesting to recover ft – for example, situations in which the model of the complete set of variables appears very complex, although in fact, if we are interested in the long-run behavior, a simpler representation, using a small set of common long-memory factors could be adequate. This is the case for cointegration in large systems. Economists often conduct research on what might be considered to be natural subdivisions of the macroeconomy. The analysis of the long-run behavior of the whole macrosystem can be conducted by first finding the common factors in every subdivision of the economy and then studying cointegration among them. Another reason for singling out the ft is that the estimation of this common factor allows one to decompose ( yt, xt) into two components ( ft, ( y˜ t , x˜ t )) that convey different kinds of information. For example, policymakers may be primarily interested in the trend (permanent component ft) behavior, but those concerned with business cycles are more interested in the cyclical component (transitory component). Moreover, singling out the common factors allows us to investigate how they are related to other variables. The final goal of any factor model is to be able to identify the common factors with some observable variable. This article proposes a way of achieving this. The situation studied here has analogies with the decomposition of an I(1) series into permanent and transitory components, where these components are considered to be I(1) and I(0), respectively. This question was considered by Quah (1992). Because the sum of an I(1) and I(0) series is I(1), it is easily seen that the question, as posed, does not completely identify the I(1) permanent components. To achieve identification, a further condition has to be imposed, such as maintaining that the permanent component is a random walk, or requiring the two components to be orthogonal at all leads and lags. In this article a different condition is used. This is possible because the situation being studied here involves more than one series, and this extra dimension allows a different type of condition to be considered. Basically the conditions imposed are that ft be a linear combination of ( yt, xt) and that the part that is left ( y˜ t , x˜ t ), not have any permanent effect on ( yt, xt). The first condition
makes ft observable; the second one makes ft a good candidate to summarize the long-run behavior of the original variables. By these two conditions, we identify ft up to a nonsingular matrix multiplication to the left. The linear combination is easily estimated from a fully specified error-correction model (ECM). This makes the suggested decomposition very convenient, mainly because the ECM takes care of the unit-root problem (see Johansen 1988; Phillips 1991), and therefore hypothesis testing on the linear combination ft can be conducted using standard chi-squared tests. Another advantage is that any extension (nonlinearities, time-varying parameters, etc.) that could be incorporated in the ECM can be easily taken into account in this decomposition. This article is organized as follows. Section 1 describes the factor model (1) for p variables and proposes a way to identify the common long-memory factors ft. Section 2 shows how to estimate the linear combinations that form the common factors and how to test hypotheses on these linear combinations. Section 3 is an application of the method. Section 4 concludes. Proofs of the main results are in the Appendix.
1. FACTOR MODEL
Let Xt be a (p × 1) vector of I(1) time series with mean 0, for simplicity, and assume that the rank of cointegration is r [there exists a matrix α of dimension p × r and rank r, such that α'Xt is I(0)]. It follows that

1. The vector Xt has an ECM representation

\Delta X_t = \gamma \alpha' X_{t-1} + \sum_{i=1}^{\infty} \Gamma_i \Delta X_{t-i} + \epsilon_t    (2)

where γ is p × r, α' is r × p, and Δ = I - L, with L the lag operator.

2. The elements of Xt can be explained in terms of a smaller number (p - r) of I(1) variables, ft, called (common) factors plus some I(0) components

X_t = A_1 f_t + \tilde{X}_t    (3)

where Xt is p × 1, A1 is p × k, and ft is k × 1.
where k = p - r. In the standard factor analysis, mostly oriented to cross-section data [for time series, see Peña and Box (1987)], the main objective is to estimate the loading matrix A1 and the number k of common factors from (3). In our case, these two things are already known once the cointegrating vectors, α, have been estimated: k = p - r and A1 is any basis of the null space of α' (α'A1 = 0). The goal of this article is to estimate ft. In factor analysis this is done from (3), after imposing constraints on ft and X̃t that are not adequate in time series. Even dynamic factor analysis
(see Geweke 1977) needs the assumption of stationarity that does not hold here. As will be shown in Section 2, the common factors can be estimated from the ECM (2) instead of from (3). One of the conditions that will identify the common factors, ft, is to impose that ft be linear combinations of the variables Xt:

f_t = B_1 X_t    (4)

where ft is k × 1, B1 is k × p, and Xt is p × 1.
This condition not only helps to identify ft but also to associate the common factors with some observable variables, which is always advisable in factor analysis. The other condition that will identify ft (up to a nonsingular matrix multiplication to the left) is to impose that A1 ft and X̃t form the permanent and transitory components of Xt, respectively, according to the following definition of a permanent–transitory (P–T) decomposition [part of this definition follows Quah (1992)].

Definition 1: Let Xt be a difference-stationary sequence. A P–T decomposition for Xt is a pair of stochastic processes Pt, Tt such that

1. Pt is difference-stationary and Tt is covariance stationary,
2. var(ΔPt) > 0 and var(Tt) > 0,
3. Xt = Pt + Tt,
4. if we let

H^*(L) \begin{bmatrix} \Delta P_t \\ T_t \end{bmatrix} = \begin{bmatrix} u_{Pt} \\ u_{Tt} \end{bmatrix}    (5)

be the autoregressive (AR) representation of (ΔPt, Tt), with uPt and uTt uncorrelated, then

(a) \lim_{h \to \infty} \frac{\partial E_t(X_{t+h})}{\partial u_{Pt}} \neq 0    and    (b) \lim_{h \to \infty} \frac{\partial E_t(X_{t+h})}{\partial u_{Tt}} = 0,
where Et is the conditional expectation with respect to the past history. According to Condition 4, the only shocks that can affect the longrun forecast of Xt are those coming from the innovation term, uPt, of the permanent component, Pt. Condition (4) is not included in Quah’s definition, and it is this that makes Pt and Tt permanent and transitory components, respectively. The next proposition clarifies this condition.
Proposition 1: Let

\begin{bmatrix} H_{11}(L) & H_{12}(L) \\ H_{21}(L) & H_{22}(L) \end{bmatrix} \begin{bmatrix} \Delta P_t \\ T_t \end{bmatrix} = \begin{bmatrix} u_{1t} \\ u_{2t} \end{bmatrix}    (6)

be the AR representation of (ΔPt, Tt). Condition (4) in Definition 1 is satisfied iff the total multiplier of ΔPt with respect to Tt is 0; equivalently,

H_{12}(1) = 0.    (7)
Apart from the instantaneous causality between the innovations (u1t, u2t) of both components that is likely to occur in economics because of temporal aggregation (see Granger 1980), Condition (4) says that Tt does not Granger-cause Pt in the long run or at frequency 0 [see Geweke (1982) and Granger and Lin (1992) for a formal definition of causality at different frequencies]. Let us consider the following example:

X_t = P_t + T_t,    (8)

where

\Delta P_t = a_1 T_{t-1} + a_2 \Delta T_{t-1} + u_{1t}    (9)

and

T_t = b_1 \Delta P_{t-1} + u_{2t}.    (10)
This is a P–T decomposition according to Definition 1 iff a1 = 0. When a1 π 0, even though Tt is I(0), this term cannot be called transitory because it will have a permanent effect on Xt (i.e., an effect on the long-run forecast of Xt). Notice that changes in the permanent component can affect the transitory component and also that changes in the transitory component could have an impact on the changes of the permanent component (a transitory impact on the levels of Pt and therefore on Xt). There are decompositions that do not satisfy Condition (4). For instance, in the decomposition proposed by Aoki (1989), based on a dynamic factor (state-space) model, the I(0) component may have a permanent effect on the levels of the I(1) component and therefore on Xt. Another example is the decomposition of Kasa (1992): -1
-1
X t = a ^ (a ^¢ a ^ ) ft + a (a ¢a ) zt ,
(11)
where ft = a¢^Xt and zt = a ¢Xt. In general (see the proof of the next proposition) X˜ t = a(a ¢a)-1zt will not be “transitory” according to Condition (4) in Definition 1. The next proposition shows that the two conditions required for the common factors are enough to identify them up to a nonsingular transformation.
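To make Condition 4 concrete, the following minimal simulation sketch of the example in (8)–(10) can be used; the parameter values a₂ and b₁ are illustrative and not taken from the text. With a₁ = 0 a one-off shock to u₂ leaves the long-run level of X_t unchanged, whereas with a₁ ≠ 0 it shifts it permanently.

```python
import numpy as np

def simulate(a1, a2, b1, u1, u2):
    """Simulate dP_t = a1*T_{t-1} + a2*dT_{t-1} + u1_t and T_t = b1*dP_{t-1} + u2_t; return X_t = P_t + T_t."""
    n = len(u1)
    P, T = np.zeros(n), np.zeros(n)
    dP_prev, T_prev, T_prev2 = 0.0, 0.0, 0.0
    for t in range(n):
        dP = a1 * T_prev + a2 * (T_prev - T_prev2) + u1[t]
        P[t] = (P[t - 1] if t > 0 else 0.0) + dP
        T_t = b1 * dP_prev + u2[t]
        dP_prev, T_prev2, T_prev = dP, T_prev, T_t
        T[t] = T_t
    return P + T

rng = np.random.default_rng(0)
n = 5000
u1 = rng.standard_normal(n)
u2 = rng.standard_normal(n)
u2_shocked = u2.copy()
u2_shocked[100] += 1.0          # one-off shock to the "transitory" innovation

for a1 in (0.0, 0.5):
    base = simulate(a1, 0.3, 0.4, u1, u2)
    shocked = simulate(a1, 0.3, 0.4, u1, u2_shocked)
    print(f"a1={a1}: long-run effect of the u2 shock on X = {shocked[-1] - base[-1]:.3f}")
```

Because the two runs share the same random numbers, the printed difference is the long-run multiplier itself: essentially zero when a₁ = 0 and nonzero otherwise.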
Proposition 2: In the factor model (2) the following conditions are sufficient to identify the common factors f_t:
1. f_t are linear combinations of X_t.
2. A₁f_t and X̃_t form a P–T decomposition.

Substituting (4) into (3), we obtain X̃_t = (I − A₁B₁)X_t = A₂α′X_t = A₂z_t, where z_t = α′X_t. Then, from the ECM (2), it is clear that the only linear combinations of X_t such that X̃_t has no long-run impact on X_t are

$$ f_t = \gamma_{\perp}' X_t, \qquad (12) $$

where γ⊥′γ = 0, f_t is k × 1, and k = p − r. These are the linear combinations of ΔX_t that have the "common feature" (see Engle and Kozicki 1990) of not containing the levels of the error-correction term z_{t−1}. Once the common factors f_t are identified, inverting the matrix (γ⊥, α)′ gives the P–T decomposition of X_t proposed in this article:

$$ X_t = A_1 \gamma_{\perp}' X_t + A_2 \alpha' X_t, \qquad (13) $$

where A₁ = α⊥(γ⊥′α⊥)⁻¹ is p × k and A₂ = γ(α′γ)⁻¹ is p × r. The next proposition shows when this common-factor decomposition (13) exists.

Proposition 3: If the p × p matrix Π = γα′ has no more than k = p − r eigenvalues equal to 0 – that is, if det(α′γ) ≠ 0 – then (γ⊥, α)′ is nonsingular and the factor model (13) exists.

Even though f_t is not estimated from the factor model (3), the assumptions made to identify the common factors imply certain constraints on the P–T components that are the counterpart of assumptions imposed in standard factor analysis.

Proposition 4: The factor model

$$ X_t = A_1 f_t + A_2 z_t, \qquad (14) $$

where f_t = γ⊥′X_t and z_t = α′X_t, satisfies the following properties:
1. The common factors f_t are not cointegrated.
2. Cov(Δf*_{it}, z*_{j,t−s}) = 0 (i = 1, . . . , k; j = 1, . . . , p − k; s ≥ 0), where Δf*_{it} = Δf_{it} − E(Δf_{it} | lags(ΔX_{t−1})) and z*_{jt} = z_{jt} − E(z_{jt} | lags(ΔX_{t−1})).

The first property follows from Proposition 3 and the second from the ECM (2). The second property is another way of expressing that z_t does not cause f_t in the long run.
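Given α and γ (known or already estimated), the decomposition in (12)–(14) is a few lines of linear algebra. The following sketch, with an arbitrary simulated bivariate system purely for illustration, computes γ⊥, the loadings A₁ = α⊥(γ⊥′α⊥)⁻¹ and A₂ = γ(α′γ)⁻¹, and the resulting permanent and transitory components.

```python
import numpy as np

def orth_complement(M):
    """Columns spanning the orthogonal complement of span(M): M_perp with M_perp' M = 0."""
    u, s, _ = np.linalg.svd(M, full_matrices=True)
    return u[:, int(np.sum(s > 1e-10)):]

def pt_decomposition(X, alpha, gamma):
    """Gonzalo-Granger P-T decomposition X_t = A1 f_t + A2 z_t, with f_t = gamma_perp' X_t and z_t = alpha' X_t.
    X is (T x p); alpha and gamma are (p x r)."""
    gamma_perp = orth_complement(gamma)                         # p x (p - r)
    alpha_perp = orth_complement(alpha)                         # p x (p - r)
    A1 = alpha_perp @ np.linalg.inv(gamma_perp.T @ alpha_perp)  # p x k
    A2 = gamma @ np.linalg.inv(alpha.T @ gamma)                 # p x r
    f = X @ gamma_perp                                          # common I(1) factors
    z = X @ alpha                                               # error-correction terms
    return f, z, f @ A1.T, z @ A2.T                             # f, z, permanent, transitory

# Illustrative system: two series sharing one random walk, alpha' = (1, -1), gamma' = (0, 1).
rng = np.random.default_rng(1)
common = np.cumsum(rng.standard_normal(200))
X = np.column_stack([common + 0.2 * rng.standard_normal(200),
                     common + 0.2 * rng.standard_normal(200)])
alpha = np.array([[1.0], [-1.0]])
gamma = np.array([[0.0], [1.0]])
f, z, P, Tcomp = pt_decomposition(X, alpha, gamma)
print(np.allclose(P + Tcomp, X))   # the two components add back up to X_t
```

The final check holds because A₁γ⊥′ + A₂α′ = I whenever (γ⊥, α)′ is nonsingular, which is exactly the condition of Proposition 3.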
Properties 1 and 2 are equivalent to the assumptions made in standard factor analysis on the uncorrelatedness of the factors and the orthogonality between the factors and the error term (A₂z_t). As mentioned before, most P–T decompositions have been designed and used in a univariate framework. Stock and Watson (1988) proposed a common-trends decomposition that basically extends the univariate decomposition of Beveridge and Nelson (1981) to cointegrated systems. The next proposition shows the connection between the common-trends decomposition of Stock and Watson and decomposition (14).

Proposition 5: The random-walk component (in the Beveridge–Nelson sense) of the I(1) common factor f_t in decomposition (14) corresponds to the common trend of the Stock–Watson decomposition.

The advantage of our decomposition over the common-trends model of Stock and Watson is that in our case it is easier to estimate the common long-memory components and to test hypotheses on them, as is shown in Section 2. Notice that alternative definitions of f_t will vary only by I(0) components and therefore will be cointegrated. In the univariate case, part of the literature has been oriented toward obtaining orthogonal P–T decompositions (see Bell 1984; Quah 1992; Watson 1986). To the best of our knowledge, nothing has been written about the multivariate case. From the factor model (14) an orthogonal decomposition can be obtained such that the corresponding Δf_t and z_t are uncorrelated at all leads and lags. First, project z_t on Δf_{t−s} for all s and take the residuals

$$ \tilde{z}_t = z_t - P[\,z_t \mid \Delta f_{t-s}\ \forall s\,]. \qquad (15) $$

Then define the new I(1) common factors f̃_t as

$$ \tilde{f}_t = (A_1'A_1)^{-1} A_1' (X_t - A_2 \tilde{z}_t). \qquad (16) $$

It is clear that Δf̃_t and z̃_t are uncorrelated at all leads and lags, but notice that, unless the z̃_t are linear combinations of current X_t, f̃_t will not be a linear combination of contemporaneous X_t. This is what is lost if orthogonality is required. To obtain an orthogonal P–T decomposition (according to Definition 1), one has to allow the common factors to be linear combinations of future, present, and past values of X_t.
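A sketch of the orthogonalization in (15)–(16) is shown below. The projection on all leads and lags of Δf_t is replaced here by a finite truncation (max_lag is an assumption of the sketch), and the inputs X, f, z, A1, A2 are those produced by the previous sketch.

```python
import numpy as np

def orthogonalized_factors(X, f, z, A1, A2, max_lag=4):
    """Eqs. (15)-(16): regress z_t on leads and lags of delta f_t (truncated at +/- max_lag),
    keep the residuals z_tilde, and define f_tilde = (A1'A1)^{-1} A1' (X_t - A2 z_tilde_t)."""
    n, _ = f.shape
    df = np.diff(f, axis=0)                               # df[i] = delta f_{i+1}
    ts = np.arange(max_lag + 1, n - max_lag)              # observations with all leads/lags available
    W = np.column_stack([df[ts - s - 1] for s in range(-max_lag, max_lag + 1)])
    W = np.column_stack([np.ones(len(ts)), W])            # include a constant in the projection
    beta, *_ = np.linalg.lstsq(W, z[ts], rcond=None)
    z_tilde = z[ts] - W @ beta                            # eq. (15), truncated projection
    f_tilde = (X[ts] - z_tilde @ A2.T) @ A1 @ np.linalg.inv(A1.T @ A1)   # eq. (16)
    return f_tilde, z_tilde

# Usage, continuing from the previous sketch:
# f_tilde, z_tilde = orthogonalized_factors(X, f, z, A1, A2)
```

Because z̃_t depends on leads of Δf_t, f̃_t is no longer a combination of contemporaneous X_t only, which is exactly the trade-off described in the text.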
2. ESTIMATION AND TESTING

In this section it is shown how to estimate and test hypotheses on γ⊥. Most of the proofs in this section are based on Johansen and Juselius (1990).
Consider a finite ECM with Gaussian errors,

$$ H_1: \ \Delta X_t = \Pi X_{t-1} + \Gamma_1 \Delta X_{t-1} + \cdots + \Gamma_{q-1} \Delta X_{t-q+1} + \epsilon_t, \qquad t = 1, \ldots, T, \qquad (17) $$

where ε₁, . . . , ε_T are IIN_p(0, Λ), X_{−q+1}, . . . , X₀ are fixed, and

$$ \Pi_{p \times p} = \gamma_{p \times r}\, \alpha'_{r \times p}. \qquad (18) $$

Following Johansen (1988), we can concentrate the model with respect to Π, eliminating the other parameters. This is done by regressing ΔX_t and X_{t−1} on (ΔX_{t−1}, . . . , ΔX_{t−q+1}), which gives residuals R₀ₜ and R₁ₜ and residual product matrices

$$ S_{ij} = T^{-1} \sum_{t=1}^{T} R_{it} R_{jt}', \qquad i, j = 0, 1. \qquad (19) $$

The remaining analysis will be performed using the concentrated model

$$ R_{0t} = \gamma \alpha' R_{1t} + \epsilon_t. \qquad (20) $$

The estimate of α is determined by reduced-rank regression in (20) (see Ahn and Reinsel 1990; Anderson 1951; Johansen 1988) and is found by solving the eigenvalue problem

$$ |\lambda S_{11} - S_{10} S_{00}^{-1} S_{01}| = 0 \qquad (21) $$

for eigenvalues λ̂₁ > · · · > λ̂_p and eigenvectors V̂ = (v̂₁, . . . , v̂_p). The maximum likelihood estimators are given by α̂ = (v̂₁, . . . , v̂_r), γ̂ = S₀₁α̂, and Λ̂ = S₀₀ − γ̂γ̂′. Finally, the maximized likelihood function becomes

$$ L_{\max}^{-2/T} = |\hat{\Lambda}| = |S_{00}| \prod_{i=1}^{r} (1 - \hat{\lambda}_i) = |S_{00.1}| \left[ \prod_{i=r+1}^{p} (1 - \hat{\lambda}_i) \right]^{-1}, \qquad (22) $$

where S₀₀.₁ = S₀₀ − S₀₁S₁₁⁻¹S₁₀. The next theorem shows how to estimate γ⊥.
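The concentration step and the eigenvalue problem (21) translate directly into matrix code. The sketch below is a minimal numpy illustration, not a full Johansen implementation: the data matrix X, the lag order q, and the rank r are placeholders, no small-sample corrections are applied, and a constant is included in the short-run regression even though model (17) has none.

```python
import numpy as np

def concentrate(X, q):
    """Regress dX_t and X_{t-1} on (dX_{t-1}, ..., dX_{t-q+1}); return residuals and moment matrices S_ij."""
    dX = np.diff(X, axis=0)
    T_eff = dX.shape[0] - (q - 1)
    Z0 = dX[q - 1:]                                     # dX_t
    Z1 = X[q - 1:-1]                                    # X_{t-1}
    lags = (np.column_stack([dX[q - 1 - j:-j] for j in range(1, q)])
            if q > 1 else np.empty((T_eff, 0)))
    W = np.column_stack([np.ones(T_eff), lags])         # constant added for convenience
    R0 = Z0 - W @ np.linalg.lstsq(W, Z0, rcond=None)[0]
    R1 = Z1 - W @ np.linalg.lstsq(W, Z1, rcond=None)[0]
    S = {(i, j): Ri.T @ Rj / T_eff
         for i, Ri in ((0, R0), (1, R1)) for j, Rj in ((0, R0), (1, R1))}
    return R0, R1, S

def rrr_alpha(S, r):
    """Solve |lambda*S11 - S10 S00^{-1} S01| = 0 (eq. 21); return eigenvalues, alpha_hat, gamma_hat."""
    L = np.linalg.cholesky(S[(1, 1)])
    Linv = np.linalg.inv(L)
    M = Linv @ S[(1, 0)] @ np.linalg.solve(S[(0, 0)], S[(0, 1)]) @ Linv.T
    lam, W = np.linalg.eigh((M + M.T) / 2)              # symmetric reduction -> real eigenvalues
    lam, W = lam[::-1], W[:, ::-1]                      # sort descending
    V = Linv.T @ W                                      # normalized so that V' S11 V = I
    alpha = V[:, :r]
    return lam, alpha, S[(0, 1)] @ alpha                # gamma_hat = S01 alpha_hat
```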
Theorem 1: Under the hypothesis of cointegration H₂: Π = γα′, the maximum likelihood estimator of γ⊥ is found by the following procedure. First solve the equation

$$ |\lambda S_{00} - S_{01} S_{11}^{-1} S_{10}| = 0, \qquad (23) $$

giving eigenvalues λ̂₁ > · · · > λ̂_p and eigenvectors M̂ = (m̂₁, . . . , m̂_p), normalized such that M̂′S₀₀M̂ = I. The choice of γ̂⊥ is now

$$ \hat{\gamma}_{\perp} = (\hat{m}_{r+1}, \ldots, \hat{m}_p), \qquad (24) $$

which gives the maximized likelihood function (22).
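Theorem 1 is the dual of the problem solved above, using the same moment matrices. A minimal sketch, reusing concentrate() from the previous block, is:

```python
import numpy as np

def gamma_perp_hat(S, r):
    """Theorem 1: solve |lambda*S00 - S01 S11^{-1} S10| = 0 (eq. 23); keep the eigenvectors for the
    p - r smallest eigenvalues, normalized so that M' S00 M = I."""
    L = np.linalg.cholesky(S[(0, 0)])
    Linv = np.linalg.inv(L)
    M = Linv @ S[(0, 1)] @ np.linalg.solve(S[(1, 1)], S[(1, 0)]) @ Linv.T
    lam, W = np.linalg.eigh((M + M.T) / 2)
    lam, W = lam[::-1], W[:, ::-1]                      # descending eigenvalues
    Mvec = Linv.T @ W                                   # Mvec' S00 Mvec = I
    return lam, Mvec[:, r:]                             # gamma_perp_hat: last p - r eigenvectors

# Usage, with S from concentrate(X, q):
# lam, gperp = gamma_perp_hat(S, r=1)
```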
Notice, as Johansen (1989) pointed out, the duality between γ⊥ and α; this is the idea behind the proof of the preceding theorem. Both estimates come from the canonical correlation analysis between R₀ₜ and R₁ₜ. They are the canonical vectors and can be found by solving the equations

$$ \begin{bmatrix} -\lambda_i S_{00} & S_{01} \\ S_{10} & -\lambda_i S_{11} \end{bmatrix} \begin{bmatrix} \hat{m}_i \\ \hat{v}_i \end{bmatrix} = 0, \qquad i = 1, \ldots, p, \qquad (25) $$

with the normalizations M̂′S₀₀M̂ = I_p and V̂′S₁₁V̂ = I_p. From (25) and these normalizations, it is clear that

$$ \hat{m}_j' S_{01} \hat{v}_i = 0, \qquad i \neq j. \qquad (26) $$

Because α̂ = (v̂₁, . . . , v̂_r) and γ̂ = S₀₁α̂, it follows that γ̂⊥ = (m̂_{r+1}, . . . , m̂_p). If for any reason α is not estimated by maximum likelihood or simultaneous reduced-rank least squares [see Gonzalo (1994) for different methods of estimation], γ⊥ can be estimated as follows: insert the estimate of α, α̃, into the ECM (17), use this to estimate γ̃, and then solve

$$ |\lambda S_{00} - \tilde{\gamma}\tilde{\gamma}'| = 0, \qquad (27) $$

giving eigenvalues λ̃₁ > · · · > λ̃_p (with λ̃_{r+j} = 0, j = 1, . . . , p − r) and eigenvectors M̃ = (m̃₁, . . . , m̃_p) normalized such that M̃′S₀₀M̃ = I. The choice of γ̃⊥ is now γ̃⊥ = (m̃_{r+1}, . . . , m̃_p), the eigenvectors corresponding to the eigenvalues equal to 0.

To find the asymptotic distribution of γ̂⊥, it is convenient to decompose it as γ̂⊥ = γ⊥δ̂ + γâ, where δ̂ = (γ⊥′γ⊥)⁻¹γ⊥′γ̂⊥ and â = (γ′γ)⁻¹γ′γ̂⊥.

Theorem 2: When T → ∞,

$$ T^{1/2}(\hat{\gamma}_{\perp}\hat{\delta}^{-1} - \gamma_{\perp}) \Rightarrow N(0, V), \qquad (28) $$

where ⇒ means convergence in distribution, V = γ(γ′(S₀₀ − Λ)γ)⁻¹γ′ ⊗ γ⊥′Λγ⊥, and S₀₀ = var(ΔX_t | ΔX_{t−1}, . . . , ΔX_{t−q+1}).

As mentioned earlier, one of the advantages of our decomposition is that one can test whether or not certain linear combinations of X_t can be common factors. Johansen (1991) showed how to test hypotheses on α and γ:

$$ H_3: \ \alpha_{p \times r} = J_{p \times s}\, \rho_{s \times r}, \qquad r \le s \le p, $$

and

$$ H_{4a}: \ \gamma_{p \times r} = Q_{p \times n}\, \psi_{n \times r}, \qquad r \le n \le p. $$
In the next theorem it is shown how to test the hypothesis on γ⊥:

$$ H_{4b}: \ \gamma_{\perp\,p \times k} = G_{p \times m}\, \theta_{m \times k}, \qquad \text{with } k = p - r \text{ and } k \le m \le p. $$

Theorem 3: Under the hypothesis H₄ᵦ: γ⊥ = Gθ, one can find the maximum likelihood estimator of γ⊥ as follows. First solve

$$ |\lambda G'S_{00}G - G'S_{01}S_{11}^{-1}S_{10}G| = 0 \qquad (29) $$

for λ̂₄ᵦ.₁ > · · · > λ̂₄ᵦ.ₘ and M̂₄ᵦ = (m̂₄ᵦ.₁, . . . , m̂₄ᵦ.ₘ), normalized by M̂₄ᵦ′(G′S₀₀G)M̂₄ᵦ = I. Choose

$$ \hat{\theta}_{m \times (p-r)} = (\hat{m}_{4b.(m+1)-(p-r)}, \ldots, \hat{m}_{4b.m}) \quad \text{and} \quad \hat{\gamma}_{\perp} = G\hat{\theta}. \qquad (30) $$

The maximized likelihood function becomes

$$ L_{\max}^{-2/T}(H_{4b}) = |S_{00.1}| \left( \prod_{i=r+1}^{p} (1 - \hat{\lambda}_{4b.i+(m-p)}) \right)^{-1}, \qquad (31) $$

which gives the likelihood ratio test of the hypothesis H₄ᵦ in H₂ as

$$ -2\ln(Q;\ H_{4b} \text{ in } H_2) = -T \sum_{i=r+1}^{p} \ln\{(1 - \hat{\lambda}_{4b.i+(m-p)})/(1 - \hat{\lambda}_i)\} \sim \chi^2_{(p-r) \times (p-m)}. \qquad (32) $$
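The computations in (29)–(32) can be sketched as follows; this is an illustrative numpy/scipy sketch rather than the authors' code, with G, r, the effective sample size, and the moment matrices S assumed to come from the earlier steps.

```python
import numpy as np
from scipy.stats import chi2

def test_H4b(S, G, r, T_eff):
    """Sketch of Theorem 3: restricted eigenvalues from eq. (29), unrestricted from eq. (23),
    and the LR statistic of eq. (32) with its chi-square p value."""
    p, m = G.shape
    # Restricted problem (29): |lambda*G'S00G - G'S01 S11^{-1} S10 G| = 0.
    GS00G, GS01 = G.T @ S[(0, 0)] @ G, G.T @ S[(0, 1)]
    L = np.linalg.inv(np.linalg.cholesky(GS00G))
    Mr = L @ GS01 @ np.linalg.solve(S[(1, 1)], GS01.T) @ L.T
    lam_r = np.sort(np.linalg.eigvalsh((Mr + Mr.T) / 2))[::-1]
    # Unrestricted problem (23): |lambda*S00 - S01 S11^{-1} S10| = 0.
    L0 = np.linalg.inv(np.linalg.cholesky(S[(0, 0)]))
    Mu = L0 @ S[(0, 1)] @ np.linalg.solve(S[(1, 1)], S[(1, 0)]) @ L0.T
    lam_u = np.sort(np.linalg.eigvalsh((Mu + Mu.T) / 2))[::-1]
    # LR statistic (32): the p - r smallest eigenvalues under each hypothesis.
    stat = -T_eff * np.sum(np.log((1 - lam_r[r + m - p:]) / (1 - lam_u[r:])))
    df = (p - r) * (p - m)
    return stat, chi2.sf(stat, df)
```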
Finally, one may be interested in estimating α and γ⊥ under H₃ and H₄ᵦ jointly. The way to proceed is to convert H₄ᵦ into H₄ₐ. Notice that Q (the matrix in H₄ₐ) is formed by the p − m eigenvectors of GG′ corresponding to the eigenvalues equal to 0. Following theorem 3.1 of Johansen (1991), α and γ can be estimated under H₃ and H₄ₐ. Once γ is estimated, we are in the situation described in (27).

3. APPLICATIONS
In the first two examples (consumption and gross national product (GNP); dividends and stock prices), it is shown how to obtain the common factors directly from an ECM. The third application (interest rates in Canada and the United States) shows, step by step, how to estimate the common factors and how to decompose these variables into permanent and transitory components.

3.1 Consumption and GNP, Dividends and Stock Prices
The vector autoregression (VAR/ECM) models of Tables 12.1 and 12.2 are reproduced from Cochrane (1991). Focusing our attention on the
Table 12.1 Consumption and GNP Regressions (Cochrane 1991).

1. Vector autoregression: Δc_t and Δy_t regressed on a constant, c_{t−1} − y_{t−1}, Δc_{t−1}, Δc_{t−2}, Δy_{t−1}, and Δy_{t−2}; the table reports each coefficient with its t statistic and the R² of each equation.

2. P–T decomposition: γ⊥′ = (1, 0); α′ = (1, −1);

$$ \begin{bmatrix} c_t \\ y_t \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} f_t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} z_t, $$

where f_t = γ⊥′(c_t, y_t)′ = c_t and z_t = α′(c_t, y_t)′ = c_t − y_t.

Note: y_t denotes real GNP and c_t denotes log(nondurable + services consumption). Δ denotes first differences, Δy_t = y_t − y_{t−1}. Data sample: 1947:1–1989:3.
Table 12.2 Dividend and Price Regressions (Cochrane 1991).

1. Vector autoregression: Δd_t and Δp_t regressed on a constant, d_{t−1} − p_{t−1}, Δd_{t−1}, Δd_{t−2}, Δp_{t−1}, and Δp_{t−2}; the table reports each coefficient with its t statistic and the R² of each equation.

2. P–T decomposition: γ⊥′ = (1, 0); α′ = (1, −1);

$$ \begin{bmatrix} d_t \\ p_t \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} f_t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} z_t, $$

where f_t = d_t and z_t = d_t − p_t.

Note: d_t denotes log dividends and p_t denotes log price (cumulated returns) on the value-weighted New York Stock Exchange portfolio. Δ denotes first differences; Δp_t is the log return. Data sample: 1927–1988.
consumption–GNP example, it can be seen from the VAR of Table 12.1 that the error-correction term (c_{t−1} − y_{t−1}) does not appear to be significant in the consumption equation; therefore, γ′ = (0, 1) and γ⊥′ = (1, 0). In other words, the I(1) common factor (permanent component) in our decomposition is
$$ f_t = (1, 0)\begin{bmatrix} c_t \\ y_t \end{bmatrix}, $$

a multiple of the consumption variable. This means that, if consumption is kept fixed, any change in income is going to affect (c_t, y_t) only through z_t (the transitory component) and therefore will have only transitory effects (see the factor model in Table 12.1). This is exactly the conclusion reached by Cochrane (1991) through the impulse-response functions:

GNP's response to a consumption shock is partly permanent but also partly temporary. More importantly, GNP's response to a GNP shock holding consumption constant is almost entirely transitory. This finding has a natural interpretation: If consumption does not change, permanent income must not have changed, so any change in GNP must be entirely transitory. (p. 2)
The same kind of conclusion is obtained in the second example, with dividends and stock prices, in Table 12.2. From the factor model it can be seen that a shock to dividends has a permanent (long-run) effect on prices and dividends, but a shock to prices, with no movement in dividends, is completely transitory.

3.2 Interest Rates in Canada and the United States
The main purpose of this application is to find the permanent component that is driving the interest rates of Canada and the United States in the long run. To do that, three interest rates with different maturities have been considered in each country – short-term, mediumterm, and long-term interest rates. In Canada (see Fig. 12.1), the shortterm rate is the weighted average of the yields on successful bids for three-month treasury bills (x1c), the medium-term rate refers to government bonds with original maturity of 3 to 5 years (x2c), and the longterm rate refers to bonds with original maturity of 10 years and over (x3c). In the United States (see Fig. 12.2), the short-term rate is an annual average of the discount rate on new issues of three-month treasury bills (x1u), the medium-term rate refers to 3-year constant maturity government bonds (x2u), and the long-term rate refers to 10-year constant-maturity bonds (x3u). The data consist of 240 monthly observations from 1969:1 to 1988:12 and were obtained from the IMF data base. To show the potential of our decomposition as a dimensionreduction method, two different approaches have been followed to obtain the common permanent component of the whole set of interest rates. In the first approach, the interest rates are considered within countries, and in each country the I(1) common factor is estimated. The
Figure 12.1. Canada Interest Rates (1969:1–1988:12): short-term (ST), medium-term (MT), and long-term (LT).
Figure 12.2. U.S. Interest Rates (1969:1–1988:12): short-term (ST), medium-term (MT), and long-term (LT).
common permanent component between these two I(1) country factors will be the factor that is driving the whole system of interest rates in the long run. In this process the number of variables involved at every step is at most 3. This is what makes this first approach very convenient for analyzing cointegration in big systems. The second approach consists of analyzing the cointegration of the whole system (6 variables) without any a priori partition. This second way becomes unfeasible when the number of variables is large (greater than 10). The conclusion obtained by these two different approaches
Table 12.3 Augmented Dickey–Fuller Statistics for Tests of a Unit Root.

        ADF(0)   ADF(1)   ADF(2)   ADF(3)   ADF(4)
x1ct    -1.46    -2.18    -2.10    -1.97    -1.93
x2ct    -1.67    -1.93    -1.80    -1.86    -1.63
x3ct    -1.64    -1.75    -1.67    -1.73    -1.60
x1ut    -2.05    -2.70    -2.17    -2.15    -2.07
x2ut    -1.72    -2.35    -1.78    -1.83    -1.79
x3ut    -1.55    -1.88    -1.60    -1.66    -1.62

Note: ADF(q) is the t statistic of δ̂ in the regression Δx_t = c + δx_{t−1} + Σ_{j=1}^{q} φ_j Δx_{t−j} + e_t. The critical values (from MacKinnon 1991) for n = 240 are −3.46 (1%), −2.87 (5%), and −2.57 (10%). x_{ijt} denotes the i-term interest rate in country j at time t, for i = 1 (short), i = 2 (medium), i = 3 (long), and j = c (Canada), j = u (U.S.). Data are from the IMF. Sample period: 1969:1–1988:12.
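A minimal sketch of the ADF(q) regression described in the note to Table 12.3 is given below; the input series x is a placeholder, and the returned t statistic should be compared with the MacKinnon (1991) critical values quoted there.

```python
import numpy as np

def adf_t_stat(x, q):
    """ADF(q) as in the note to Table 12.3: the t statistic on delta in
    dx_t = c + delta*x_{t-1} + sum_{j=1..q} phi_j*dx_{t-j} + e_t."""
    dx = np.diff(x)
    y = dx[q:]                                            # dx_t
    cols = [np.ones(len(y)), x[q:-1]]                     # constant and x_{t-1}
    cols += [dx[q - j:-j] for j in range(1, q + 1)]       # dx_{t-1}, ..., dx_{t-q}
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    sigma2 = resid @ resid / (len(y) - Z.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(Z.T @ Z)[1, 1])
    return beta[1] / se
```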
matches perfectly. There is only one common long-memory factor in the whole system formed by the six interest rates, and that factor is the U.S. common permanent component. To reach the preceding conclusion the following steps have been taken.

(1) Unit-root tests (Table 12.3): Using the augmented Dickey–Fuller test, the null of a unit root is not rejected for any of the six interest rates.

(2) Cointegration tests (Table 12.4): Using the Johansen likelihood ratio (LR) test, for a VAR of order 3 (the order suggested by the Akaike information criterion), it is found that Canada, as well as the United States, has two cointegrating vectors, and the whole system has five cointegrating vectors. Therefore there is one common I(1) factor in each country, and these factors are cointegrated, implying that there is only one common permanent component in the whole system.

(3) Estimation of the cointegration structure: In Table 12.5 we provide the estimates of the cointegrating vectors and of the linear combinations that define our common permanent components. From these estimates, following Section 1, all interest rates can be decomposed into permanent and transitory components. Some examples are shown in Figures 12.3 and 12.4.

(4) Testing hypotheses on the long-memory common factors: From Table 12.5, the I(1) common factor of the whole system is f1 = −.006x1c + .034x2c − .003x3c + .112x1u − .22x2u + .26x3u. Following Theorem 3, we tested whether the U.S. interest rates are the only variables driving the whole system in the long run; that is,
Table 12.4 Testing for Cointegration.

Canada
H2        Trace    Trace(.90)   λmax     λmax(.90)
r ≤ 2      3.52       6.50       3.52       6.50
r ≤ 1     25.22      15.66      21.70      12.91
r = 0     56.63      28.71      31.40      18.90

United States
H2        Trace    Trace(.90)   λmax     λmax(.90)
r ≤ 2      3.98       6.50       3.95       6.50
r ≤ 1     29.18      15.66      25.23      12.91
r = 0     61.98      28.71      32.79      18.90

Canada and United States
H2        Trace    Trace(.90)   λmax     λmax(.90)
r ≤ 5      3.79       6.50       3.79       6.50
r ≤ 4     16.49      15.66      12.70      12.91
r ≤ 3     36.59      28.71      20.10      18.90
r ≤ 2     68.89      45.23      32.29      24.78
r ≤ 1    104.11      66.49      35.23      30.84
r = 0    153.87      90.39      49.75      36.35

Note: The critical values have been obtained from Osterwald-Lenum (1992). Test statistics for the hypothesis H2 are for several values of r versus r + 1 (λmax) and versus the general alternative H1 (trace), for Canadian and U.S. interest rate data (1969:1–1988:12).
Figure 12.3. Canada: P–T Decomposition of the Short-Term Interest Rate (x1c). f1 = −.006x1c + .034x2c − .003x3c + .112x1u − .22x2u + .26x3u; z1 = .008x1c + .037x2c − .081x3c − .083x1u + .075x2u + .032x3u; z2 = −.007x1c + .046x2c − .074x3c + .073x1u − .275x2u + .242x3u; z3 = .068x1c − .100x2c − .075x3c − .053x1u + .101x2u + .039x3u; z4 = −.019x1c + .181x2c − .239x3c − .011x1u − .014x2u + .089x3u; z5 = −.041x1c − .022x2c + .007x3c + .023x1u + .019x2u + .030x3u; Px1c = 7.86f1; Tx1c = −3.58z1 − 6.92z2 + 4.52z3 + 1.31z4 − 18.16z5. Lines shown: x1c, Px1c, and Tx1c.
Table 12.5 Estimation of the Cointegration Structure.

Canada: eigenvalues λ̂ = (.123, .086, .014). United States: eigenvalues λ̂ = (.128, .10, .016). Canada and United States: eigenvalues λ̂ = (.187, .136, .126, .080, .051, .016). For each system the table reports the eigenvectors V̂ (whose first r columns form α̂) and M̂ (whose last p − r columns form γ̂⊥); for the six-variable system these columns are the combinations z1–z5 and f1 reproduced in the caption of Figure 12.3.

Note: The eigenvalues λ̂ and eigenvectors V̂, M̂ are based on the normalizations V̂′S₁₁V̂ = I and M̂′S₀₀M̂ = I, for Canadian and U.S. interest rate data (1969:1–1988:12). The first r columns of V̂ form α̂; the last p − r columns of M̂ form γ̂⊥.
Figure 12.4. United States: P–T Decomposition of the Long-Term Interest Rate (x3u); see the definitions of the variables in Figure 12.3. Px3u = 5.91f1; Tx3u = 6.01z1 − 2.81z2 + .252z3 − 1.27z4 + 1.79z5. Lines shown: x3u, Px3u, and Tx3u.
$$ H_{4b}: \ \gamma_{\perp} = G\theta \quad \text{with } G = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$

Under H₄ᵦ, θ̂ = (.2, −.25, .27). This hypothesis is not rejected, with a p value of .86. The same conclusion was obtained when the analysis was done by countries. The common long-memory factor in Canada is f1c = .08x1c + .004x2c + .06x3c, and in the United States f1u = .11x1u − .19x2u + .24x3u. These two common factors are cointegrated, and the hypothesis that in the long run the driving force of these two common factors is f1u has a p value of .45. Results in more detail can be found in Gonzalo and Granger (1992).

4. CONCLUSION
The results of this article have implications on three fronts. In the first place, they provide a new way of estimating the I(1) common factors that make a set of variables cointegrated, thus allowing us to gain more understanding of the nature of economic time series. Second, they show a new method for estimating the permanent
component ("trend") of a time series using multivariate information; and third, they provide a new way of studying cointegration in large systems by using the common long-memory factors of every "natural" subsystem. Further research needs to be done on the small-sample properties of γ̂⊥ and on how to incorporate different characteristics of the ECM (nonlinearities, time-varying parameters, etc.) into the estimation of the common factors and therefore into the estimation of P–T decompositions.

ACKNOWLEDGMENTS

This research was partially supported by the Sloan Foundation and U.S. National Science Foundation Grant SES-9023037. We thank Chor-Yiu Sin and two anonymous referees for helpful comments.

APPENDIX: PROOFS OF THE MAIN RESULTS

Proof of Proposition 1: Inverting (6),

$$ \begin{bmatrix} \Delta P_t \\ T_t \end{bmatrix} = \begin{bmatrix} H^{11}(L) & H^{12}(L) \\ H^{21}(L) & H^{22}(L) \end{bmatrix} \begin{bmatrix} u_{1t} \\ u_{2t} \end{bmatrix}, \qquad (A.1) $$
we obtain the moving average representation of ΔP_t,

$$ \Delta P_t = H^{11}(1)u_{1t} + H^{12}(1)u_{2t} + (1 - L)\{\tilde{H}^{11}(L)u_{1t} + \tilde{H}^{12}(L)u_{2t}\}, \qquad (A.2) $$

where

$$ H^{1j}(L) = H^{1j}(1) + (1 - L)\tilde{H}^{1j}(L), \qquad j = 1, 2, \qquad (A.3) $$

and u_t = (u_{1t}, u_{2t}) is a vector white noise with covariance matrix

$$ \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}. $$

Assuming that u_{1t} and u_{2t} are not perfectly correlated, they can be decomposed as

$$ u_{1t} = u_{Pt} \quad \text{and} \quad u_{2t} = \Sigma_{21}\Sigma_{11}^{-1}u_{1t} + u_{Tt}. \qquad (A.4) $$
From (A.2) and (A.4),

$$ \lim_{h \to \infty} \frac{\partial E_t(P_{t+h})}{\partial u_{Pt}} = H^{11}(1) + H^{12}(1)\Sigma_{21}\Sigma_{11}^{-1} \qquad (A.5) $$

and

$$ \lim_{h \to \infty} \frac{\partial E_t(P_{t+h})}{\partial u_{Tt}} = H^{12}(1). \qquad (A.6) $$

Noticing that

$$ \lim_{h \to \infty} E_t(X_{t+h}) = \lim_{h \to \infty} E_t(P_{t+h}), \qquad (A.7) $$

(P_t, T_t) will be a P–T decomposition according to Definition 1 iff H^{12}(1) = H_{11}^{-1}(1)H_{12}(1)[H_{21}(1)H_{11}^{-1}(1)H_{12}(1) - H_{22}(1)]^{-1} = 0; in other words, iff the total multiplier of T_t on ΔP_t is 0,

$$ H_{11}^{-1}(1)H_{12}(1) = 0. \qquad (A.8) $$
Proof of Proposition 3: If γα′ has only p − r eigenvalues equal to 0, then rank(α′γ) = r. Taking determinants on the right side of the matrix multiplication

$$ \begin{bmatrix} \alpha' \\ \gamma_{\perp}' \end{bmatrix} [\gamma \ \ \gamma_{\perp}] = \begin{bmatrix} \alpha'\gamma & \alpha'\gamma_{\perp} \\ 0 & \gamma_{\perp}'\gamma_{\perp} \end{bmatrix}, \qquad (A.9) $$

it follows that this matrix has full rank and therefore

$$ \operatorname{rank}\begin{bmatrix} \alpha' \\ \gamma_{\perp}' \end{bmatrix} = p. \qquad (A.10) $$
Proof of Proposition 5: In this proof, for simplicity, it is assumed that X_t follows an AR(q) as in (17). Multiplying the ECM (17) by γ⊥′ and substituting X_t = A₁f_t + A₂z_t into (17), we get the AR representation of the common factors f_t,

$$ \Delta f_t = \sum_{i=1}^{q-1} \gamma_{\perp}'\Gamma_i A_1 \Delta f_{t-i} + \sum_{i=1}^{q-1} \gamma_{\perp}'\Gamma_i A_2 \Delta z_{t-i} + \gamma_{\perp}'\epsilon_t. \qquad (A.11) $$

From (A.11), the random-walk part [in the Beveridge–Nelson (1981) sense] of f_t is

$$ \left( I - \sum_{i=1}^{q-1} \gamma_{\perp}'\Gamma_i A_1 \right)^{-1} \gamma_{\perp}'(1 - L)^{-1}\epsilon_t. \qquad (A.12) $$
The common-trends decomposition of Stock and Watson (1988) is obtained from the Wold representation of ΔX_t,

$$ \Delta X_t = C(L)\epsilon_t = C(1)\epsilon_t + \Delta \tilde{C}(L)\epsilon_t, \qquad (A.13) $$

where

$$ C(1) = \alpha_{\perp}(\gamma_{\perp}'\Psi\alpha_{\perp})^{-1}\gamma_{\perp}', \qquad (A.14) $$

with

$$ \Psi = \text{mean lag matrix in } H_1 = I - \Gamma_1 - \cdots - \Gamma_{q-1} + \Pi. \qquad (A.15) $$

Therefore,

$$ C(1) = \alpha_{\perp}(\gamma_{\perp}'\alpha_{\perp})^{-1} \times \left[ I - \gamma_{\perp}'\left( \sum_{i=1}^{q-1} \Gamma_i \right)\alpha_{\perp}(\gamma_{\perp}'\alpha_{\perp})^{-1} \right]^{-1} \gamma_{\perp}'. \qquad (A.16) $$
(A.16)
The result follows from noticing that A1 = a^(g ¢^a^)-1. Proof of Theorem 1: Johansen (1989) showed that the likelihood function of Model (20) can be expressed as 2 T L-max = S00.1 g ^¢ S00 g ^ g ^¢ (S00 - S01 S11-1 S10 )g ^ .
(A.17)
Therefore L is maximized by maximizing g ^¢ (S00 - S01S11-1S10 )g ^ g ^¢ S00g ^ .
(A.18)
This is accomplished by choosing g^ to be the Eigenvectors correspond-1 ing to the p - r smallest Eigenvalues of S01S11 S10 with respect to S00 and the maximal value is p
’ (1 - lˆ ). i
(A.19)
i = r +1
The result follows from substituting (A.19) in (A.17). Proof of Theorem 2: The proof follows from proposition 3.11 of Johansen and Juselius (1990). Proof of Theorem 3: Substituting g^ by Gq in (A.17), it is clear that q can be estimated as the Eigenvectors corresponding to the ( p - r) -1 smallest Eigenvalues of G¢S01S11 S10G with respect to G¢S00G. The distribution of the LR test follows from proposition (3.13) of Johansen and Juselius (1990).
REFERENCES

Ahn, S. K., and Reinsel, G. C. (1990), "Estimation for Partially Nonstationary Multivariate Autoregressive Models," Journal of the American Statistical Association, 85, 813–823.
Anderson, T. W. (1951), “Estimating Linear Restrictions on Regression Coefficients for Multivariate Normal Distributions,” The Annals of Mathematical Statistics, 22, 327–351. Aoki, M. (1989), “A Two-Step Space Time Series Modeling Method,” Computer Mathematical Applications, 17, 1165–1176. Bell, W. R. (1984), “Signal Extraction for Nonstationary Time Series,” The Annals of Statistics, 12, 646–664. Beveridge, S., and Nelson, C. R. (1981), “A New Approach to Decomposition of Economic Time Series Into Permanent and Transitory Components With Particular Attention to Measurement of the ‘Business Cycle’,” Journal of Monetary Economics, 7, 151–174. Cochrane, J. (1991), “Univariate vs. Multivariate Forecasts of GNP Growth and Stock Returns: Evidence and Implications for the Persistence of Shocks, Detrending Methods, and Tests of the Permanent Income Hypothesis,” Working Paper 3427, National Bureau of Economic Research, Cambridge, MA. Engle, R. F., and Granger, C. W. J. (eds.) (1991), Long-Run Economic Relationships: Readings in Cointegration (Advance Texts in Econometrics), Oxford, U.K.: Oxford University Press. Engle, R. F., and Kozicki, S. (1990), “Testing for Common Feature,” Discussion Paper 9023, University of California, San Diego, Dept. of Economics. Geweke, J. (1977), “The Dynamic Factor Analysis of Economic Time Series Models,” in Latent Variables in Socioeconomic Models, eds. D. Aigner and A. Goldberger, Amsterdam: North-Holland, pp. 365–383. (1982), “Measurement of Linear Dependence and Feedback Between Multiple Time Series,” Journal of the American Statistical Association, 378, 304–324. Gonzalo, J. (1994), “Five Alternative Methods of Estimating Long-Run Equilibrium Relationships,” Journal of Econometrics, 60, 203–233. Gonzalo, J., and Granger, C. W. J. (1992), “Estimation of Common LongMemory Components in Cointegrated Systems,” Discussion Paper 4, Boston University, Dept. of Economics. Granger, C. W. J. (1980), “Testing for Causality: A Personal Viewpoint,” Journal of Economic Dynamics and Control, 2, 329–352. (1986), “Developments in the Study of Cointegrated Economic Variables,” Oxford Bulletin of Economics and Statistics, 48, 213–228. Granger, C. W. J., and Lin, J. (1992), “Causality in the Long-Run,” Discussion Paper 9215, Academica Sinica. Johansen, S. (1988), “Statistical Analysis of Cointegrating Vectors,” Journal of Economic Dynamics & Control, 12, 231–254. (1989), “Likelihood Based Inference on Cointegration. Theory and Applications,” unpublished lecture notes, University of Copenhagen, Institute of Mathematical Statistics. (1991), “Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models,” Econometrica, 59, 1551– 1580. Johansen, S., and Juselius, K. (1990), “Maximum Likelihood Estimation and Inference on Cointegration – With Applications to the Demand for Money,” Oxford Bulletin of Economics and Statistics, 52, 169–210.
Kasa, K. (1992), “Common Stochastic Trends in International Stock Markets,” Journal of Monetary Economics, 29, 95–124. Mackinnon, J. G. (1991), “Critical Values for Cointegration Tests,” in Long-Run Economic Relationships: Readings in Cointegration, eds. R. Engle and C. Granger, Oxford, U.K.: Oxford University Press, pp. 267–276. Osterwald-Lenum, M. (1992), “A Note With Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics,” Oxford Bulletin of Economics and Statistics, 54, 461–472. Peña, D., and Box, G. E. P. (1987), “Identifying a Simplifying Structure in Time Series,” Journal of the American Statistical Association, 82, 836–843. Phillips, P. C. B. (1991), “Optimal Inference in Cointegrated Systems,” Econometrica, 59, 283–306. Quah, D. (1992), “The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds,” Econometrica, 60, 107–118. Stock, J. H., and Watson, M. W. (1988), “Testing for Common Trends,” Journal of the American Statistical Association, 83, 1097–1107. Watson, M. W. (1986), “Univariate Detrending Methods With Stochastic Trends,” Journal of Monetary Economics, 18, 1–27.
CHAPTER 13
Separation in Cointegrated Systems and Persistent-Transitory Decompositions* Clive W. J. Granger and Niels Haldrup*
1. INTRODUCTION
It is a frequent empirical finding in macroeconomics that several cointegration relations may exist amongst economic variables but in the particular way that the single relations appear to have no variables in common. It is also sometimes found in such systems that the error correction terms or other stationary variables from one set of variables may have important explanatory power for variables in another set. For example Konishi et al. (1993) considered three types of variables of US data: real, financial and interest rate variables. They found that cointegration existed between variables in each subset but not across the variables such that the different sectors did not share a common stochastic trend. On the other hand, it was also found that the error correction terms of the interest rate relation and the sector of financial aggregates had predictive power with respect to the real variables of the system. As argued by Konishi et al. (1993) the situation sketched above may extend the usual “partial equilibrium” cointegration set-up to a more “general equilibrium” setting although in a limited sense. The notion of separation initially developed by Konishi and Granger (1992) and Konishi (1993) provides a useful way of describing formally the above possibility: Consider two groups of I(1)-variables, X1t and X2t of dimension p1 and p2, respectively, X1 and X2 are assumed to have no variables in common and in each sub-system there is cointegration with the cointegration ranks being r1 < p1 and r2 < p2. Hence it follows that the dimensions of the associated common stochastic trends of each system are p1 - r1 and p2 - r2. Denote the two sets of I(1) stochastic trends W1t and W2t. It follows from Stock and Watson (1988) that each subsystem can be given the representation: * Oxford Bulletin of Economics and Statistics, 59, 1997, 449–464. ** The first author acknowledges support from NSF grant SBR 93-08295. The research was undertaken while the second author was visiting the UCSD during fall, 1995. We would like to thank Namwon Hyung, the Editor and an anonymous referee for helpful comments.
$$ X_{1t} = G_1 W_{1t} + \tilde{X}_{1t}, \qquad X_{2t} = G_2 W_{2t} + \tilde{X}_{2t}, \qquad (1) $$
~ where Gi, i = 1, 2, are pi ¥ (pi - ri) matrices and the X it components are stationary I(0) relations. Separate cointegration across sub-systems means that the components W1t and W2t are not cointegrated so that there is no long-run relationship between the X1t and X2t variables. As a consequence the stacked time series Xt = (X¢1t , X¢2t)¢ will be of dimension p = p1 + p2 and have cointegration rank r = r1 + r2. The full system stochastic trend component will have the dimension p - r. Despite this separation of variables it can easily occur that a relationship exists between X1t and X2t in the short-run. Essentially there are two ways this can happen: DX2t(DX1t) may appear in the transitory I(0) ~ ~ component X 1t(X 2t) and/or error correction terms from one system may enter the second. Absence of these two sorts of interactions will be referred to as separation of types A and B, respectively. Although it will be possible to distinguish between short-run and long-run separation of Type A, long-run separation appears to be the most interesting for the present purpose as we shall demonstrate. Presence of both types of separation is denoted complete separation, and if only one of these is present we refer to partial separation. What will be of concern in this paper is also to consider a decomposition of the vector time series Xt in persistent-transitory (P-T) components for separated cointegration models in order to see how the single components interact across systems. Identification of the P-T components are generally non-unique since any I(1) process can be contaminated with an I(0) process and still have the I(1) property. As a result various additional requirements have been suggested in the literature to identify the components and more recently Gonzalo and Granger (1995), using a factor model approach, suggest that the temporary component be defined in terms of the error correction relations such that it will have no explanatory power on the series in the long run. Moreover, the single factors can be measured in terms of the observed variables Xt. One of the findings of the present paper (which in some respects actually goes beyond the particular decomposition suggested by Gonzalo and Granger), is that if the decomposition Xt = P(Xt) + T(Xt) is considered where P(Xt) and T(Xt) are the persistent (long-memory) and transitory (short-memory) components, respectively, the persistent component P1t associated with the X1-system, for instance, can be expressed as P1t = P1(X1t,X2t) unless some sort of separation is present. Hence, in order to extract observable persistent components in a separated system, it is not generally sufficient to consider each sub-system in isolation, since all system variables may be needed to define the components. We also demonstrate that if one (wrongly) treats a partially separated system as
completely separated, both the persistent and the transitory components of the other system may turn out to affect the persistent component of the sub-system considered. Only when the entire system is completely separated will it be sufficient to look at the sub-models to find the longmemory components and the associated common stochastic trends. This result is interesting because it suggests that cointegration analysis, and especially common stochastic trends analysis and P-T decompositions, may suffer from only looking at small partial models.1 In the interpretation of common stochastic trends the idea of “general equilibrium” cointegration is therefore relevant since persistent and transitory components may interact across systems. On the other hand, estimation is another (if not the prime) important issue in cointegration analysis, and it is certainly not costless to consider larger systems in a “general equilibrium” setting. For systems of increasing size practical problems are common with respect to estimation and difficulties easily arise in interpreting and identifying cointegration relations. Moreover, as emphasized by e.g. Abadir et al. (1996) significant finite sample inaccuracies may appear in cointegrated VAR models as the number of variables increases. Therefore, once estimation is brought into the picture, there is a size-precision trade-off that must be addressed as well. Hence there seem to be conflicting suggestions for empirical practice and obviously the size of the system at hand should reflect the purpose of the analysis. Testing for complete separation may be useful in bringing together these diverging opinions. The plan of the paper is the following. In Section II we provide a formal definition of the various separation concepts and we briefly review some of the literature concerned with decomposition of a series into persistent and transitory components. The following section focuses on the decomposition in the context of separated cointegrated models.We demonstrate that if a partially separated system is treated as if it is complete, both the long- and short-memory components of the neglected system may potentially affect the persistent component of the system being analyzed. However, the problem can be avoided by considering the full system but with the implication that the (true) long- and short-memory factors may depend upon all the model variables of each sub-system. In Section IV possible extensions to non-linear error correction models are considered and we demonstrate that fairly strong restrictions need to be imposed on the functional forms across systems in order to ensure stability. In the final section we conclude. We should emphasize that although this paper is strictly on representations, we will briefly provide discussions of the implications for estimation where appropriate. 1
Similar problems arise for instance in impulse response analysis where it is well known that the impulse responses are quite sensitive to the information set of the econometrician.
2. DEFINITION OF THE CONCEPTS

We shall here define formally the different concepts that will be used in the sequel.

2.1 Notions of Separation in Cointegrated Systems
The definition of separation provided below extends Konishi and Granger (1992) and Konishi (1993).

Definition 1: Consider the p-dimensional cointegrated vector time series X_t = (X′_1t, X′_2t)′ where X_1t and X_2t are of dimension p_1 and p_2 (p = p_1 + p_2) and have no variables in common. Then the associated error correction model reads

$$ \Delta X_t = \gamma_{p \times r}\,\alpha'_{r \times p}\,X_{t-1} + \Gamma(L)_{p \times p}\,\Delta X_{t-1} + \epsilon_t, \qquad (2) $$

where r is the cointegration rank and ε_t is i.i.d. with covariance matrix Ω. If the matrix of cointegration parameters can be factored as

$$ \alpha' = \begin{pmatrix} \alpha_{11}' & 0 \\ 0 & \alpha_{22}' \end{pmatrix}, \qquad (3) $$

where α_ii is p_i × r_i, i = 1, 2, the system is said to have separate cointegration with cointegration ranks for each sub-system given by r_1 and r_2, respectively. Conformably with this partitioning, consider also the matrices

$$ \gamma = \begin{pmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{pmatrix} \quad \text{and} \quad \Gamma(L) = \begin{pmatrix} \Gamma_{11}(L) & \Gamma_{12}(L) \\ \Gamma_{21}(L) & \Gamma_{22}(L) \end{pmatrix}. \qquad (4) $$
Given separate cointegration we define type A-separation (separation in dynamic adjustment in the long-run) when G12(1), G21(1) = 0. Type Bseparation (separation in error correction) occurs when g12, g21 = 0. Partial separation is present when either type A or type B separation is present and finally there is complete separation when both type A and type B separation is present. Observe that the maintained assumption is that we have separate cointegration and that in the definition of type B separation there is no feedback from the error correction terms across each sub-system. If the model considered is a Gaussian VAR model type B separation means that X2t is weakly exogenous with respect to the long-run parameters of system 1 and X1t is weakly exogenous with respect to the long-run parameters of system 2, compare e.g. Ericsson (1992) and Johansen (1992). With respect to type A separation there is no feedback, in the long-run, from the first differenced variables across the systems. However, we do not preclude the possibility that the first differences of the variables in
one system may have explanatory power in the other system in the shortrun. We could consider short-run separation in the sense that G12(L), G21(L) = 0 but, as we shall see in what follows, only the notion of longrun separation will be of interest. Notice that if, for instance, g21, G21(L) = 0, X2t is strongly exogenous, i.e. the conjunction of weak exogeneity and Granger non-causality. Johansen (1992) proves that from an estimation viewpoint in a partial model, weak exogeneity is sufficient for obtaining fully efficient estimates of the economically interesting long-run parameters and the adjustment coefficients. Moreover, partial separation of type B and weak exogeneity will make inference easy in the single systems since limit distributions become a mixture of Gaussian distributions. Within our set-up full efficiency will be lost, however, if there is not complete separation but the system is treated as such. In this situation type B separation is not sufficient in order to obtain nice properties from an estimation point of view; type A separation is needed as well. 2.2
P-T Decomposition of a Vector Time Series
It is frequently of interest to decompose a time series into components that may have different characteristics; for instance a persistent-transitory (P-T) decomposition may be relevant, see e.g. Beveridge and Nelson (1981) and Quah (1992). For a vector time series similar decompositions may be considered, see e.g. Stock and Watson (1988), Kasa (1992), Mellander et al. (1992), Gonzalo and Granger (1995), and Proietti (1995). However, since identification of such factors is generally non-unique,² additional identifying requirements are needed. Gonzalo and Granger have suggested that the persistent I(1) factors should (1) be observable, i.e. such that the persistent components can be expressed in terms of the original variables X_t, and (2) be such that the shocks to the transitory part have no impact on the persistent components in the long run.³ Essentially, this is why the two types of factors for this particular decomposition may be given the economic interpretation of long-memory and short-memory components. The second condition stated above says that if we let X_t = P_t + T_t be a factorization of X_t into a persistent and a transitory component, then the components can be given the VAR representation

$$ \begin{pmatrix} H_{11}(L) & H_{12}(L) \\ H_{21}(L) & H_{22}(L) \end{pmatrix} \begin{pmatrix} \Delta P_t \\ T_t \end{pmatrix} = \begin{pmatrix} \epsilon_{Pt} \\ \epsilon_{Tt} \end{pmatrix} \qquad (5) $$

such that T_t does not cause ΔP_t in the long run if H_{12}(1) = 0.
² Recently Abadir et al. (1996) have suggested using the Jordan decomposition of the first-order companion matrix of the VAR as a vehicle to extract the common stochastic trends.
³ See e.g. Hosoya (1991) and Granger and Lin (1995) for a definition of causality at different frequencies.
Observability of the factors can be achieved by considering the expression

$$ X_t = P(X_t) + T(X_t), \qquad (6) $$

where P(X_t) = A₁f_t and T(X_t) = A₂z_t, with f_t = γ⊥′X_t and z_t = α′X_t, and where A₁ = α⊥(γ⊥′α⊥)⁻¹ and A₂ = γ(α′γ)⁻¹. The matrices α⊥ and γ⊥ are orthogonal complements of α and γ, i.e. such that γ⊥′γ = 0 and α⊥′α = 0. Throughout, the symbol "⊥" will indicate the orthogonal complement of the associated matrix. Notice that the orthogonal-complement matrices in the present case are both p × (p − r) and that the factorization of the vector process exists since α′γ is invertible by definition of the cointegration rank r. The persistent (or long-memory) component is given by P(X_t), which can be seen to be expressed in terms of the (p − r) common stochastic trends f_t, and similarly the temporary (or short-memory) component can be expressed by the r error correction terms in a particular way. Observe that P(X_t) and T(X_t) do not necessarily constitute an orthogonal factorization; this will only happen in special situations. The Gonzalo–Granger decomposition has similarities with other decompositions in the literature. For instance, the f_t term is identical to the common stochastic trends of Stock and Watson (1988), which is a multivariate generalization of the Beveridge–Nelson decomposition of a univariate time series. In a recent paper, Proietti (1995) compares the various representations in a common set-up and demonstrates that the Gonzalo–Granger decomposition can be obtained from the Beveridge–Nelson decomposition by adding a particular distributed lag polynomial of the first differences of the series to the long-memory component. The reason why this can be done is, of course, that any stationary component can be added to the stochastic trend (or I(1)) component without altering the dominant I(1) characteristics.

3. PERSISTENT-TRANSITORY DECOMPOSITION IN SEPARATED COINTEGRATING SYSTEMS

In this section we focus our attention on different types of separated models to see how the long- and short-memory components in their P-T factorizations will depend upon the particular type of separation.

3.1 Erroneously Treating Non- and Partially-Separated Systems as Completely Separated
In order to interpret the outcome of cointegration analysis it is frequently considered advantageous to consider systems of low dimension. Assume that the econometrician correctly considers a separated cointegrated system, but wrongly assumes that separation is complete rather
than partial. The difference is, naturally, that the feedback from other cointegrating relations through the error correction terms and/or the first differenced variables from the other system is ignored in the analysis. With no loss of generality we assume for simplicity that the model is recursive, to make the subsequent arguments more intelligible. P-T decompositions are considered for both systems, i.e. X_1t = P_1t + T_1t and X_2t = P_2t + T_2t.

Proposition 2: Let X_t = (X′_1t, X′_2t)′ be generated according to (2)–(4) with the additional requirement that γ_21 = 0 and Γ_21(L) = 0, such that X_2t is recursively determined compared to X_1t. Then, if the econometrician considers the X_1t system in isolation, that is,

$$ \Delta X_{1t} = \gamma_{11}\alpha_{11}'X_{1,t-1} + \Gamma_{11}(L)\Delta X_{1,t-1} + u_{1t}, \qquad (7) $$

it follows that:
(1) T_1t ↛ ΔP_1t.
(2a) Partial separation of type A: ΔP_2t ↛ ΔP_1t.
(2b) No partial separation of type A: ΔP_2t → ΔP_1t, unless Γ_12(1) ∈ space(γ_11).
(3a) Partial separation of type B: T_2t ↛ ΔP_1t.
(3b) No partial separation of type B: T_2t → ΔP_1t, unless γ_12 ∈ space(γ_11).

The notation "→" and "↛" signifies the influence or non-influence, respectively, of one component on the other in the long run.

Proof: The error term given in (7) captures what has been left out of the analysis, so

$$ u_{1t} = \gamma_{12}\alpha_{22}'X_{2,t-1} + \Gamma_{12}(L)\Delta X_{2,t-1} + \epsilon_{1t}. \qquad (8) $$

The X_2 system reads

$$ \Delta X_{2t} = \gamma_{22}\alpha_{22}'X_{2,t-1} + \Gamma_{22}(L)\Delta X_{2,t-1} + \epsilon_{2t}. \qquad (9) $$

By treating (7) as an isolated system, the common stochastic trends are given by premultiplying the error correction model (7) by the p_1 × (p_1 − r_1) orthogonal complement of γ_11, i.e. γ_11⊥′, where γ_11⊥′γ_11 = 0. This yields

$$ f_{1t} = \gamma_{11\perp}'X_{1t} = \gamma_{11\perp}'\Gamma_{11}(L)X_{1,t-1} + \gamma_{11\perp}'\sum_{j=0}^{\infty} u_{1,t-j}. \qquad (10) $$

In accordance with the Gonzalo–Granger decomposition we can define (with an obvious notation)

$$ X_{1t} = P_{1t} + T_{1t} = A_{11}\gamma_{11\perp}'X_{1t} + A_{21}\alpha_{11}'X_{1t} = A_{11}f_{1t} + A_{21}Z_{1t}, \qquad (11) $$

where A_11 = α_11⊥(γ_11⊥′α_11⊥)⁻¹ and A_21 = γ_11(α_11′γ_11)⁻¹. Similarly we can define X_2t = P_2t + T_2t. The difference of the system-1 permanent component is
now given by ΔP_1t = A_11γ_11⊥′ΔX_1t, and by using (8)–(11) and the fact that γ_12Z_2,t−1 = γ_12α_22′T_2,t−1 it follows that

$$ \Delta P_{1t} = A_{11}\gamma_{11\perp}'\Gamma_{11}(L)(\Delta P_{1,t-1} + \Delta T_{1,t-1}) + A_{11}\gamma_{11\perp}'\{\gamma_{12}\alpha_{22}'T_{2,t-1} + \Gamma_{12}(L)(\Delta P_{2,t-1} + \Delta T_{2,t-1}) + \epsilon_{1t}\}. \qquad (12) $$
Now result (1) follows directly. With respect to the results (2a) and (2b) it is seen that partial separation of type A, G12(1) = 0, implies that DP2,t has no influence on DP1t in the long-run. On the other hand this result does not hold it separation of type A is absent. Observe, though, the ^¢ exception when g 11 G12(1) = 0, that is, when G12(1) is in the column space of g11. The results (3a) and (3b) follow accordingly. Separation of type B means that g12 = 0 so in this case T2t has no influence on DP1t in the longrun while the reverse result applies if type B separation is absent. The ^¢ latter result is modified, however, if g 11 g12 = 0, i.e. when g12 is in the column space of g11. First, notice that the simplifying assumption g21, G21(L) = 0 means that the variables X2t are strongly exogenous w.r.t. the long-run parameters of system 1. This has no implications for the qualitative results presented but it simplifies the algebra considerably and makes it more clear how the interaction across systems works. The result (1) is seen to be fully in accordance with the Gonzalo–Granger decomposition such that in the long-run the system 1 temporary component will have no impact on the persistent component of the same system. More interestingly, (2b) and (3b) demonstrate how (apart from the cases where g12 and G12(1) are in the space spanned by g11) the components of the second system affect the first. In fact, both components from system 2 will have an influence on the persistent component of system 1 by the absence of type A or type B separation. It demonstrates that by looking at small partial models and ignoring information from other systems, P-T decompositions will be produced which differ from the “true” factorizations that rely on a correct specification of the VAR model. Similarly, common stochastic trends analysis will be affected more generally (see also equation 10), which mirrors the influence of the information set in e.g. impulse response analysis. We have here put the main emphasis on the DP1t component. The way that the temporary component T1t is affected by T2t and DP2t is straightforward due to its residual nature and hence the properties mirror the above discussion. Previously we have noted that partial separation of type B is closely related to the notion of weak exogeneity. Observe, however, that following the discussion given in Section 2.1, treating a sub-system as completely separated when, in fact this is not the case, full efficiency will be lost by analysing the sub-system in isolation. Both type A and type B separation is needed to obtain efficiency in a partial system, unless the
excluded variables from the other system are taken into account in the sub-system analysis.

3.2 Partial Separation and P-T Decomposition of the Full System
The proper way to proceed, in order to avoid the caveat emphasized in the previous section, is to consider the two sub-systems jointly. Again we assume for simplicity that γ_21 = 0 and Γ_21(L) = 0.

Proposition 3: Let X_t = (X′_1t, X′_2t)′ be generated according to (2)–(4) with the additional requirement that γ_21 = 0 and Γ_21 = 0, such that X_2t is recursively determined compared to X_1t. Then, if the econometrician considers the X_1- and X_2-systems jointly, persistent-temporary factorizations of the system can be characterized as follows:

$$ X_{1t} = P_1(X_{1t}, X_{2t}) + T_1(X_{1t}, X_{2t}), \qquad X_{2t} = P_2(X_{2t}) + T_2(X_{2t}). \qquad (13) $$

It also follows that
(1) ΔP_2t → ΔP_1t, apart from the conditions given in (19) below;
(2) T_1t, T_2t ↛ ΔP_1t, as required.

Proof: Define the matrix

$$ \gamma_{\perp}' = \begin{pmatrix} \gamma_{11\perp}' & \gamma_{12}^{*\prime} \\ 0 & \gamma_{22\perp}' \end{pmatrix} \qquad (14) $$

such that γ⊥′γ = 0, whereby γ*₁₂ will satisfy γ_11⊥′γ_12 + γ*₁₂′γ_22 = 0. Notice that if γ_12 ∈ space(γ_11), this could imply that γ*₁₂ = 0 or, more generally, that γ*₁₂ ∈ nullspace(γ_22). The common stochastic trends of the full system read

$$ f_{1t} = \gamma_{11\perp}'X_{1t} + \gamma_{12}^{*\prime}X_{2t}, \qquad f_{2t} = \gamma_{22\perp}'X_{2t}. \qquad (15) $$
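A small sketch of how the stacked γ⊥ in (14) and the common trends (15) can be constructed once the sub-system blocks are available is given below; the particular numerical values are illustrative only, and γ*₁₂ is obtained from one explicit solution of its defining equation.

```python
import numpy as np

def orth_complement(M):
    """Columns spanning the orthogonal complement of span(M)."""
    u, s, _ = np.linalg.svd(M, full_matrices=True)
    return u[:, int(np.sum(s > 1e-10)):]

def stacked_gamma_perp(g11, g12, g22):
    """Build gamma_perp' = [[g11_perp', g12_star'], [0, g22_perp']] as in eq. (14), with
    g12_star' solving g11_perp' g12 + g12_star' g22 = 0."""
    g11p, g22p = orth_complement(g11), orth_complement(g22)
    g12_star_T = -g11p.T @ g12 @ np.linalg.pinv(g22)      # one solution of the defining equation
    top = np.hstack([g11p.T, g12_star_T])
    bottom = np.hstack([np.zeros((g22p.shape[1], g11p.shape[0])), g22p.T])
    return np.vstack([top, bottom])                       # (p - r) x p matrix gamma_perp'

# Illustrative partially separated system: p1 = p2 = 2, r1 = r2 = 1, gamma21 = 0, gamma12 != 0.
g11 = np.array([[-0.5], [0.2]])
g12 = np.array([[0.3], [0.0]])
g22 = np.array([[-0.4], [0.1]])
gperpT = stacked_gamma_perp(g11, g12, g22)
gamma = np.vstack([np.hstack([g11, g12]), np.hstack([np.zeros((2, 1)), g22])])
print(np.allclose(gperpT @ gamma, 0))   # gamma_perp' gamma = 0, so f_t = gamma_perp' X_t as in (15)
```

Note that the top block of γ⊥′ involves X_2t through γ*₁₂ whenever γ_12 ≠ 0, which is exactly why f_1t depends on both sub-systems under partial separation.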
From the definition of X_1t given in (11), the decomposition (13) follows. The result for X_2t is trivially given. Consider now the interaction of persistent and temporary components across sub-systems. The long-memory components read

$$ \Delta P_t = A_1\gamma_{\perp}'\Delta X_t, \qquad (16) $$

where A₁ = α⊥(γ⊥′α⊥)⁻¹. Define now the matrices A_11 = α_11⊥(γ_11⊥′α_11⊥)⁻¹, A_12 = α_22⊥(γ_22⊥′α_22⊥)⁻¹, and A_22 = γ_22(α_22′γ_22)⁻¹, where it is noted that I − A_12γ_22⊥′ = A_22α_22′. By straightforward matrix operations, using rules of the partitioned inverse, it can be shown that

$$ \Delta P_{1t} = A_{11}\gamma_{11\perp}'\Delta X_{1t} + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Delta X_{2t}, \qquad \Delta P_{2t} = A_{12}\gamma_{22\perp}'\Delta X_{2t}. \qquad (17) $$

By using the error-correction model (2)–(4) for ΔX_1t and ΔX_2t in the present set-up, and using the fact that γ_11⊥′γ_12 + γ*₁₂′γ_22 = 0, it follows that the single components are related in the following way:

$$ \begin{aligned} \Delta P_{1t} ={}& A_{11}\gamma_{11\perp}'\Gamma_{11}(L)(\Delta P_{1,t-1} + \Delta T_{1,t-1}) \\ &+ \{A_{11}\gamma_{11\perp}'\Gamma_{12}(L) + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Gamma_{22}(L)\}(\Delta P_{2,t-1} + \Delta T_{2,t-1}) \\ &+ A_{11}\gamma_{11\perp}'\epsilon_{1t} + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\epsilon_{2t}, \\ \Delta P_{2t} ={}& A_{12}\gamma_{22\perp}'\Gamma_{22}(L)(\Delta P_{2,t-1} + \Delta T_{2,t-1}) + A_{12}\gamma_{22\perp}'\epsilon_{2t}. \end{aligned} \qquad (18) $$
This proves the second part of the proposition. Note the particular cases where ΔP_2t does not influence ΔP_1t, i.e. when

$$ A_{11}\gamma_{11\perp}'\Gamma_{12}(1) + A_{11}\gamma_{12}^{*\prime}A_{22}\alpha_{22}'\Gamma_{22}(1) = 0. \qquad (19) $$
A special case where this occurs is when both g12 and G12(1) lie in the space spanned by g11 because then also g 12 * will lie in the nullspace spanned by g22. this case includes complete separation of the X1- and X2systems. However, generally the condition in (19) is not satisfied. From the above Proposition it follows that by considering the two subsystems jointly the common stochastic trends and the Gonzalo-Granger decomposition effectively separates the adjustment of error correction errors from the long-memory component as intended. However, it is interesting to observe that generally the variables of the full system will be needed in both the long- and the short-memory components of the X1-system. Note that P1t and P2t are not cointegrated. Since P1t is I(1) plus I(0) in a particular way, it can also be seen that P1t, which essentially is determined by f1t given in (15), will have X1t as the only factor if g 12 * = 0. In particular this is the case if separation is of type B. In general, however, X2t will contribute to both the I(1) and the I(0) components. Concerning the second part of the Proposition the autoregressive representation of the components as they are given in (18) demonstrates that in the long-run T1t and T2t will not have any explanatory power with respect to the long-memory components in either system. This is fully consistent with their definition of course. However, the long-memory component of the X2-system will cause the corresponding component of the X1-system, but without being cointegrated. An exception occurs for instance when g12 = 0 and G12(1) = 0. Then the condition (19) is satisfied so not surprisingly the P1t and P2t components do not interact in the longrun due to complete separation of the sub-systems. The analysis of the past two sections demonstrates the importance of considering whether error correction terms and other short-run dynam-
ics from other systems may have an impact on the system of interest when cointegration is separate. Although it is not going to affect the cointegration properties of the data, it clearly becomes important in extracting and interpreting the common stochastic trends and the long- and short-memory components of the multivariate system. In this sense it is of interest to consider the notion of cointegration in a general (rather than a partial) equilibrium framework. After all, it can be seen that examinations including common stochastic trends analysis should be done with care, owing to the dependence of such trends on the information set.

4. EXTENSIONS TO NON-LINEAR ERROR CORRECTION MODELS

Cointegrated models with non-linear error correction mechanisms have recently attracted much attention in the literature; compare e.g. Granger and Swanson (1996), and Granger and Teräsvirta (1993) and the references therein. The types of non-linearity entering such systems need to be restricted, however, in order to ensure stability of the model. In this section we demonstrate how the restrictions required in one system may or may not restrict the other system when cointegration is separate. Non-linear error correction models may take many different forms. Consider, for example, a simple system with the non-linear error correction mechanism entering as follows:

$$ \Delta X_t = \gamma\,\theta(\beta'Z_{t-1}) + \Gamma(L)\Delta X_{t-1} + \epsilon_t, \qquad (20) $$
where Zt = a¢Xt. As usual Xt is a p-vector time series and we let q(b¢Zt-1) be a (p - r) vector of non-linear functions of the lagged error correction terms; notice that since b is r ¥ 1, b¢Zt is assumed to be a scalar variable. Here we want to emphasize the non-linear property and assume for simplicity that G(L) = 0. Multiplying (20) by a¢ we obtain DZ t = a ¢gq ( b ¢Z t -1 ) + a ¢e t
(21)
which is a non-linear VAR(1) process. In defining Z t = h( b ¢Z t -1 ) + h t
(22)
h(Z ) = Z + a ¢g q ( b ¢Z )
(23)
where
the admissible class of functions ensuring stability should satisfy the necessary and sufficient stability conditions, see Tweedie (1975), Lasota and Mackey (1989), and Granger and Teräsvirta (1993), h(Z ) £ a Z for Z c and a < 1
(24)
Cointegrated Systems and Persistent-Transitory Decompositions 265
and h(Z ) is finite for all finite Z .
(25)
Ω Ω.Ω Ω can be any norm, not necessarily the Euclidean norm. It follows that
for the case of one dimension, the functions satisfying stability must be dominated by a linear function with slope less than one. For instance, if q(Z) is one dimensional the function could be logistic in Z or log(Z). The stability condition above applies to the vector Z. If this is stable, so are the single components, but it is not generally possible to provide conditions on the stability of each element in q(Z). The restrictions above can be weakened in some cases meaning that only a subset of the functions in q(Z) need to be restricted, i.e. the functions for which the adjustments lie in the space spanned by a^ we need no restrictions to be imposed to ensure stability. Assume for simplicity that this space is empty such that each element of q(Z) should be considered in derivation of the stability conditions. Despite non-linearity in the adjustment and error correction terms, the common stochastic trends ft, in the Stock–Watson and Gonzalo– Granger sense, turn out to behave linearly since ft = g ^¢ Xt = g ^¢ D-1e t in the present situation. In other words, the common stochastic trends will have no non-linear feature. Assume now that cointegration is separate, using the terminology of Section II, and that error correction is non-linear in the following way, Ê DX it ˆ Ê g 11 g 12 ˆ Ê q1 ( b1¢a11¢ X1,t -1 ) ˆ Ê e1t ˆ = + Ë DX 2 t ¯ Ë 0 g 22 ¯ Ë q 2 ( b 2¢a 22 ¢ X 2 ,t -1 )¯ Ë e 2 t ¯
(26)
using an obvious notation. In case of complete separation, which in the present set-up means that g12 = 0, the common stochastic trends (with no non-linear feature) are easily calculated for each sub-system. This case is rather trivial. So is the situation where Xt = (X¢1t , X 2¢ t)¢ is treated jointly and separation is partial (g12 ≠ 0). In this case g ^¢ effectively kills both the non-linear error correction terms. Consider instead the case where system 1 is treated as completely separated although it is only partially separated. In this case the common stochastic trends of the X1-system read Df1t = g 11^¢ DX1t = g 11^¢ g 12q 2 ( b 2¢a 22 ¢ X 2 ,t -1 ) + g 11^¢ e1t .
(27)
Hence, although the common stochastic trends of the X2-system are linear, the corresponding trends of the X1-system will generally have a non-linear feature. What restrictions are needed on q1(.) and q2(.) in the partially separated system to ensure stability? We have that DZ1t = a11¢ g 11q1 ( b1¢Z1,t -1 ) + a11¢ g 12q 2 ( b 2¢Z 2 ,t -1 ) + a11¢ e1t DZ 2 t = a 22 ¢ g 22q 2 ( b 2¢Z 2 ,t -1 ).
(28)
266
C. W. J. Granger and N. Haldrup
So, the stability requirements in this case are not affected: As long as the stability conditions of system 2 are satisfied, the stability conditions that are necessary for system 1 will be unaffected by system 2. Observe, however, that if we introduce g21 ≠ 0 such that a 2¢ 2g21q1(Z1,t-1) will appear in the expression for DZ2t in (28), the stability condistions for the single systems cannot be calculated in isolation. The systems have to be treated jointly in this case, i.e. by letting Zt = (Z¢1t ,Z¢2t)¢ and considering the system (21). The joint stability requirements of q1(.) and q2(.) are given by (24) and (25). It is clearly a restriction implied by the particular non-linear model considered above, that the functional forms of the error correction terms associated with the X2-system, and entering in the X1-system, must be the same as those arising in the X2-system with respect to the same error correction terms. Many other model constructions could be considered. For instance, the model Ê DX1t ˆ Ê g 11q11 ( b11¢ Z1,t -1 ) + g 12q12 ( b12¢ Z 2 ,t -1 ) ˆ Ê e1t ˆ = + Ë DX 2 t ¯ Ë g 21q 21 ( b 21 ¢ Z1,t -1 ) + g 22q 22 ( b 22 ¢ Z 2 ,t -1 )¯ Ë e 2 t ¯
(29)
could be analyzed. This class of model is probably more relevant in practice, but its increased flexibility adds to the complexity of deriving common stochastic trends and P-T decompositions. No results are presently available for this type of non-linear error correction models, but it is certainly a class of dynamical models that will be of interest for future research. 5.
CONCLUSION
Separation in cointegrated systems is a useful notion which helps to reduce the complexity of large systems and eases their interpretation. Within a cointegrated VAR set-up, cf. Johansen (1988, 1991), both partially and completely separated models can be easily tested by considering particular hypotheses on the cointegration vectors and the adjustment coefficients, and hence this should become an integral part of cointegration analysis (see Konishi and Granger, 1992). Moreover, looking at small models clearly has advantages with respect to the parameter accuracy that can be obtained in finite samples as has been demonstrated by e.g. Abadir et al. (1996). However, although it increases the dimension of the model, the absence of error correction or short-run separation, i.e. where error correction terms and stationary variables from other systems may enter the model, is an important possibility to consider as well, not only because it may improve the model for forecasting purposes, but also, as we have demonstrated, because the implied short-tun dynamics actually may add to our understanding of the stochastic trends driving the system as well
Cointegrated Systems and Persistent-Transitory Decompositions 267
as the complex dynamical interaction that may exist across systems. It is therefore our suggestion for empirical practice that the applied econometrician is aware of such important links rather than just focusing on the long-run properties of the data in terms of cointegration. More generally it is our suggestion to consider VAR models with a size that reflects the purpose of the analysis. Too large models give rise to degrees of freedom problems with respect to estimation and inference and it complicates the interpretation of empirical results. On the other hand, common stochastic trends analysis, persistent-temporary decompositions and impulse response analysis is rather sensitive to the information set of the econometrician, and looking at small models may thus have very misleading implications. Generalizations to non-linear models, and in particular, non-linear error correction models, are still in their infancy, but potentially a rich class of dynamical systems can be analyzed within this set-up. However, much more research needs to be done in order to obtain results that are useful for the practitioner.
REFERENCES Abadir, M., Hadri, K. and Tzavalis, E. (1996). “The influence of VAR dimensions on estimator biases”, Discussion paper, University of York. Beveridge, S. and Nelson, C. R. (1981). “A New Approach to the Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to the Measurement of the Business Cycle”, Journal of Monetary Economics, Vol. 7, pp. 151–74. Ericsson, N. R. (1992). “Cointegration, Exogeneity, and Policy Analysis: An Overview”, Journal of Policy Modeling, Vol. 14, pp. 251–80. Gonzalo, J. and Granger, C. W. J. (1995). “Estimation of Common Long-Memory Components in Cointegrated Systems”, Journal of Business and Economic Statistics, Vol. 13, pp. 27–35. Granger, C. W. J. and Lin, J. (1995). “Causality in the Long-Run”, Econometric Theory, Vol. 11, pp. 530–36. Granger, C. W. J. and Swanson, N. (1996). “Further Developments in the Study of Cointegrated Variables”, BULLETIN, Vol. 58, pp. 537–53. Granger, C. W. J. and Teräsvirta, T. (1993). Modeling Nonlinear Economic Relationships, Oxford University Press. Hosoya, Y. (1991). “The Decomposition and Measurement of the Interdependence between Second-order Stationary Processes”, Probability Theory and Related Fields, Vol. 88, pp. 429–44. Johansen, S. (1988). “Statistical Analysis of Cointegration Vectors”, Journal of Economic Dynamics and Control, Vol. 12, pp. 231–54. Johansen, S. (1992).“Cointegration in Partial Systems and the Efficiency of Single Equation Analysis”, Journal of Econometrics, Vol. 52, pp. 389–402.
268
C. W. J. Granger and N. Haldrup
Johansen, S. (1991).“Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models’, Econometrica, Vol. 59, pp. 1551–80. Kasa, K. (1992). “Common Stochastic Trends in International Stock Markets”, Journal of Monetary Economics, Vol. 29, pp. 95–124. Konishi, T. (1993). Separation and Long-Run Non-causality in a Cointegrated System, PhD Dissertation, UCSD. Konishi, T. and Granger, C. W. J. (1992). “Separation in Cointegrated Systems”, Manuscript, Department of Economics, UCSD. Konishi, T., Ramey, V. A. and Granger, C. W. J. (1994), “Stochastic Trends and Short-Run Relationships between Financial Variables and Real Activity”. Manuscript, Department of Economics, UCSD. Lasota, A. and MacKey, M. C. (1989). “Stochastic Perturbation of Dynamical Systems: the Weak Convergence of Measures”, Journal of Mathematical Analysis and Applications, Vol. 138, pp. 232–48. Mellander, E., Vredin, A. and Warne, A. (1992). “Stochastic Trends and Economic Fluctuations in a Small Open Economy”, Journal of Applied Econometrics, Vol. 7, pp. 369–94. Proietti, T. (1997), “Short Run Dynamics in Cointegrated Systems”, BULLETIN, Vol. 59, pp. 405–22. Quah, D. (1992). “The Relative Importance of Permanent and Transitory Components: Identification and Some Theoretical Bounds’, Econometrica, Vol. 60, pp. 107–18. Stock, J. H. and Watson, M. W. (1988). “Testing for Common Trends”, Journal of the American Statistical Association, Vol. 83, pp. 1097–107. Tweedie, R. L. (1975). “Sufficient Conditions for Ergodicity of Spectra”, in Grenander, U. (ed.), Probability and Statistics, New York, Wiley.
CHAPTER 14
Nonlinear Transformations of Integrated Time Series* C. W. J. Granger and Jeff Hallman
Abstract In this paper we consider the effects of nonlinear transformations on integrated processes and unit root tests performed on such series. A test that is invariant to monotone data transformations is proposed. It is shown that series are generally not cointegrated with nonlinear transformations of themselves, but the same transformation applied to a pair of cointegrated series can result in cointegration between the transformed series. Keywords: Nonlinear transformations; integrated processes; unit root tests; cointegrated series; monotone data transformations; autocorrelations; Dickey–Fuller statistics. 1.
INTRODUCTION
In this paper we are concerned with the effects of nonlinear transformations on integrated, particularly I(1), processes. Three questions are considered. (i) If xt is integrated and zt = f(xt), will zt appear to be integrated as well? (ii) Are xt and zt cointegrated? (iii) If xt, yt are I(1) and cointegrated, will g(xt), g(yt) also be cointegrated? These questions arise naturally when considering regressions of the form wt = a + bxt + czt (or yt ) + residuals where wt is stationary. The terms xt, zt or yt can only occur on the righthand side if they are either I(0) or cointegrated. For example, a * Journal of Time Series Analysis, 12, 1991, 207–224.
270
C. W. J. Granger and J. Hallman
researcher may try to explain the unemployment rate in terms of rt and log rt, where rt is an interest rate. The outline of the paper is as follows. Following this introduction, in Section 2 we address question (i) by contrasting the effects of several nonlinear transformations on the empirical autocorrelations and Dickey–Fuller (DF) statistics of a random walk. The DF test appears to be much more sensitive to nonlinear transformation than is the empirical autocorrelation function.A simple modification of the DF is proposed which works correctly for a large class of transformations. In Section 3 we consider questions (ii) and (iii); the answers are generally no and yes, respectively, although the DF test is again somewhat misleading. The topics considered are relevant because of the current interest in integrated, or ‘unit root’ series in econometrics and macroeconomics, and in nonlinear time series models. Properties of nonlinearly transformed series are examined in greater detail by Granger and Hallman (1988), while nonlinear theoretical relationships between integrated series are considered by Granger (1988) and Hallman (1989). 2. UNIT ROOT TESTS ON TRANSFORMED SERIES There is now a substantial literature on the topic of testing for unit roots in linear time series models. The result obtained by Phillips (1987) forms the basis for the distributional theory of the various tests. Phillips assumes that a series yt is generated by yt = yt-1 + ut where y0 = 0 and ut is assumed to satisfy the following assumptions. Assumption 2.1: (Phillips) (a) Eut = 0 (b) supt E|ut|b < • for some b > 2. 2¸ Ï1 (c) s 2 = lim E Ì (Â ut ) ˝ TÆ• ÓT ˛ exists and is greater than zero. (d) {ut}•1 is strong mixing with coefficients am that satisfy S•1 am1-2/b < •. Given these assumptions, Phillips shows that T (aˆ - 1) ∫ T
 Dy y Ây t
t -1
2 t -1
2
Æ
W (1) - s u2 s 2 2
2 Ú 10 W (r )dr
Nonlinear Transformations of Integrated Time Series
271
and ta ∫
2
aˆ - 1 1 2
s˜ (Â yt2-1 )
Æ
(s s u )W (1) - 1 1 2
2(Ú 10 W 2 (r )dr )
where W(r) is a standard Brownian motion and s˜ 2 is the usual estimate of the variance of the residuals from the regression. The statistic -ta is called the Dickey–Fuller (DF) test statistic and its distribution is known by the same name. If y0 π 0 it is subtracted from the other terms in the series. Models with more complicated serial correlation but still only a single unit root can be handled by including lags of Dyt in the regression; this is the augmented Dickey–Fuller (ADF) test. For example, the ADF test using four lags is minus the t statistic of the coefficient a in the regression 4
Dyt = aˆ yt -1 + Â bˆ i Dyt -i .
(2.1)
i =1
Both the simple and augmented versions of the test have the same limiting distribution. The test given by (2.1) is designed to have power against the alternative hypothesis that yt is generated by a stationary AR model with zero mean. A test for the more general alternative where the mean of the series may be nonzero is constructed by performing the same regression with the addition of a constant term, i.e. 4
Dyt = c + aˆ yt -1 + Â bˆ i Dyt -i .
(2.2)
i =1
It is interesting to ask how the DF and ADF tests work with nonlinearly transformed series. Suppose that xt = xt-1 + et, where et meets the requirements of Assumption 2.1, and let yt = f ( xt ). A mean value expansion has yt = yt-1 + ht with ht = f ¢( xt -1 + rt )e t where rt lies in the interval [yt-1, yt]. There is no reason to expect ht to meet the requirements of Phillips’ assumption unless f(·) is affine. As examples, consider the following transformations, noting that the term (Sut)2 in (c) is just y21. yt = xt2: here ht = et2 + 2xt-1et and this violates all four parts of Assumption 2.1.
272
C. W. J. Granger and J. Hallman
yt = xt3: here ht = 3xt-1et + 3xt-1e t2 + e t3 also violates all four parts of Assumption 2.1. yt = sgn(xt): this violates (c) since (ST1 ht)2 = yT2 = 1, so that s2 = 0. yt = sin xt: Granger and Hallman (1988) show this to be a stationary AR(1) process with variance –12 + ca 4t, implying that the limit in (c) is 1 + 2ca 4T 1 = lim =0 TÆ• TÆ• 2T 2T lim
yt = exp(xt): in this case ht = {exp(et) - Eexp(et)}yt-1 has a variance exploding faster than t, thus violating (b) and (c). It is also clear from the expression for ht that it is not mixing. yt = 1/xt: to avoid problems associated with xt taking nonpositive values, assume that x0 is large. Then yt will be bounded and limTÆ• (y2t /T) will be zero, violating (c). (d) also fails as ht = -e t yt2-1 + (e t - s 2 ) yt3-1 . As a simple example of what can happen to the DF test when the series tested is a transformation of a random walk, let xt be the simplest type of random walk given by xt = xt-1 + et
(2.3)
with x0 = 0 and where et (t = 1, 2, . . . , T) is an independent identically distributed (i.i.d.) series with prob(et = 1) = prob(et = -1) = –12 . xt meets the conditions of Assumption 2.1, and so the DF test when performed on it will have the DF distribution. Considering the transformed series yt = sgn(xt), it is seen that the change series Dyt is just Ï 2 if xt > 0 and xt -1 < 0 Ô Dyt = Ì-2 if xt < 0 and xt -1 > 0 Ô 0 otherwise Ó so that Dytyt-1 is -2 if xt crosses zero between time t - 1 and t, and is zero otherwise. y2t-1 = 1 for all t, of course, and so T
T
Ê Ë t =1
ˆ ¯
1 2
DF( yt ) ∫ -Â Dyt yt -1 sˆ Á Â yt2-1 ˜ t =1
=-
-2 ¥ no. of zero crossings of xt 1 2
(Tsˆ 2 )
.
Since sˆ 2 is just the mean square error (MSE) of the regression of Dyt on yt-1, sˆ 2 £ so that
1 T
T
 Dy
2 t
t =1
=
4 ¥ no. of zero crossings of xt T
Nonlinear Transformations of Integrated Time Series
273
Table 14.1 Dickey–Fuller Empirical Distribution. Transformation
1% -0.68 -2.61 -4.06 -0.48 1.45 5.75 -11.6 -0.74 -0.96
x x2 x3 ΩxΩ sgn(x) sinx exp(x) ln(x + 75) 1/(x + 75)
5%
10%
25%
50%
75%
90%
95%
99%
0.06 -0.87 -1.58 0.34 2.16 6.17 2.99 0.03 0.01
0.48 0.02 -0.11 0.80 2.67 6.34 4.04 0.50 0.49
1.05 1.15 1.23 1.40 3.58 6.70 5.05 1.06 1.07
1.59 1.84 1.99 2.01 4.58 7.07 6.03 1.59 1.59
2.15 2.46 2.65 2.60 6.05 7.46 7.22 2.14 2.14
2.62 3.21 3.33 3.24 8.37 7.82 8.68 2.63 2.66
2.90 3.74 3.78 3.70 11.31 8.00 10.13 2.94 2.97
3.54 4.86 4.78 4.76 14.25 8.50 36.06 3.50 3.56
Table 14.2 Augmented Dickey–Fuller Empirical Distribution. Transformation x x2 x3 ΩxΩ sgn(x) sinx exp(x) ln(x + 75) 1/(x + 75)
1%
5%
10%
25%
50%
75%
90%
95%
99%
-0.83 -2.63 -4.0 -0.78 0.53 3.71 -10.5 -0.85 -1.02
0.03 -1.20 -2.03 0.12 1.24 4.08 -2.29 -0.03 -0.07
0.40 -0.25 -0.61 0.59 1.56 4.27 1.88 0.42 0.43
1.03 1.07 1.04 1.29 2.08 4.63 3.25 1.03 1.04
1.57 1.82 1.87 1.89 2.82 5.06 4.07 1.58 1.59
2.14 2.45 2.48 2.47 4.01 5.49 4.72 2.14 2.15
2.64 3.06 3.04 3.05 6.08 5.89 5.39 2.64 2.65
2.95 3.44 3.39 3.35 6.90 6.15 7.76 2.94 2.97
3.58 4.23 3.95 4.23 10.92 6.70 39.1 3.60 3.64
DF( yt ) ≥
2 ¥ no. of crossings 1 2
(4 ¥ no. of crossings)
1 2
= (no. of zero crossings of xt ) . Feller (1968) shows that for the simple random walk given by (3.3) the number of returns to the origin divided by T1/2 is asymptotically distributed as a truncated normal random variable. As the probability of a zero crossing is just half the probability of a return to the origin, it follows that twice the number of crossings divided by T1/2 has the same distribution. The DF test statistic for yt is at least O(T1/4) and will become infinitely large as the sample size grows. Tables 14.1 and 14.2 show the empirical distributions of the DF and ADF tests on several transformations of a Gaussian random walk. These were found by creating 2000 random walks of length 200, making the indicated transformations and recording the values of the test statistics. Four lags of the dependent variable were used for the ADF statistic, and constants were included in both the ADF and DF regressions.
274
C. W. J. Granger and J. Hallman
Table 14.3 Autocorrelations. Transformation
Lag 1
2
3
4
5
6
7
8
9
10
x x2 x3 ΩxΩ sgn(x) sinx exp(x) ln(x + 50) 1/(x + 50)
0.96 0.93 0.92 0.93 0.94 0.59 0.60 0.96 0.96
0.92 0.87 0.85 0.87 0.81 0.33 0.42 0.91 0.91
0.87 0.80 0.79 0.81 0.76 0.20 0.30 0.87 0.87
0.83 0.74 0.71 0.75 0.73 0.12 0.23 0.83 0.83
0.79 0.68 0.64 0.70 0.68 0.09 0.19 0.79 0.78
0.75 0.63 0.60 0.66 0.65 0.06 0.18 0.75 0.75
0.72 0.58 0.55 0.62 0.62 0.01 0.17 0.72 0.71
0.69 0.54 0.50 0.58 0.60 -0.03 0.15 0.68 0.68
0.65 0.49 0.46 0.54 0.58 -0.06 0.12 0.65 0.65
0.62 0.45 0.42 0.50 0.54 -0.05 0.12 0.61 0.61
In these tests the null hypothesis is that the series is I(1) and the alternative is that it is I(0). The first row of Table 14.1 shows that the test statistic is less than 2.90 95% of the time when H0 is true. The results show that not only is H0 always (correctly) rejected for sin xt, but it is also usually rejected for the long-memory processes sgn(xt) and exp(xt). It would certainly be incorrect to accept the latter two series as being I(0). The other transformations are also rejected too often, except for the last two. This is misleading, however, because the effect of adding 75 to the random walk before transforming it is to reduce the curvature of the transformation greatly, making both ln(xt + 75) and 1/(xt + 75) nearly linear transformations of xt over this range. Adding a smaller constant than 75 would undoubtedly move the DF and ADF distributions to the right. Only those realizations in which xt crossed the zero axis between observations 5 and 195 were used in obtaining the statistics for the transformation sgn(xt). About 85% of the realizations in the simulation had at least one such crossing. In the Box–Jenkins modeling strategy, the shape of the correlogram is used to decide whether a series seems to be I(0) or I(1). If the autocorrelations decline slowly with lag length, an I(1) model is chosen. Table 14.3 presents the means across replications of the first ten autocorrelations obtained in an experiment similar to the one generating Tables 14.1 and 14.2. For most of the transformed series, the correlogram closely resembles that of a random walk, even for the bounded series sgn(x). The exceptions are the stationary series sin xt and the explosive series exp(xt). The DF and ADF unit root tests appear to be more sensitive than the autocorrelations to series transformations. Economists often transform their variables by taking logarithms, using Box–Cox transformations etc., before building models and making inferences. It can easily happen that a unit root exists in the original series, but the usual tests reject a unit root in the transformed series despite a high degree of autocorrelation
Nonlinear Transformations of Integrated Time Series
275
in the latter. A test for unit roots that is invariant to a broad class of transformations would avoid this outcome. It is not possible to construct a test which is invariant to every possible transformation of the series being tested. As a trivial example, consider the transformation T(xt) = 568.3. No test on the transformed series can possibly yield any information about the original series. More interestingly, Granger and Hallman (1988) show that the sine (or cosine) of a random walk yields a stationary AR(1). This suggests that any periodic transformation will result in a stationary series, since periodic transformations can be arbitrarily well approximated by Fourier transforms. Given such a series with no long memory properties, it will not be possible to detect that it resulted from transforming a random walk. Many nonparametric tests are based on notions of rank and ordering. Since the ordering of a series is unaffected by strictly monotone transformations, tests based on these notions have distributions that are unaffected by monotone transformations of the data. The ranks Rt of a time series xt are defined by Rt = the rank of xt among x1, x2, . . . , xT. A simple test for unit roots in a (possibly) transformed series is to calculate the DF or ADF statistic of the ranks of the series rather than of the original series itself. Here the null hypothesis is that there exists a strictly monotone transformation of the time series being tested which has a unit root. The question immediately arises: what is the distribution of what will be called the rank Dickey–Fuller (RDF) statistic and its augmented cousin (RADF)? Unfortunately we have not obtained an analytical answer to this question. Phillips’ distributional results can be extended via the continuous mapping theorem to find the distribution of a test for a given transformation, but this will not solve the problem since the rank transformation is different for every sample. Rank statistics are usually applied in situations where it is known that the normalized sample ranks R(xi)/N converge to the population distribution function F(xi). For the null hypothesis here, there is no well-defined distribution function for the ranks to converge to, as xt is a nonstationary random walk. Despite the fact that ‘nice’ analytic representations of their distribution functions are not available, the RDF and RADF statistics are random variables which are easily computed for any given time series. The approach taken here is to investigate their usefulness as tests in some specific cases by means of computer simulation. Figure 14.1 shows estimated densities for the RDF statistic (with constant) for sample sizes 25, 50, 100, 200, 400 and 800. The plots were constructed by generating 5000 independent random walks of the indicated sample sizes, calculating and recording the RDF test statistics and finally estimating the density with
276
C. W. J. Granger and J. Hallman
RDF Densities
0.6 n = 25 n = 50 n = 100 n = 200 n = 400 n = 800
0.4
0.2
0.0 0
1
2
3
4
Figure 14.1. Rank Dickey–Fuller densities.
a kernel estimator. As the Figure indicates, the density does not change much as the sample size changes, except for the smallest sample size of 25. Figure 14.2 shows the corresponding densities for the augmented version of the test, RADF. Here there is marked variation in the density as the sample size varies, but this is also true of the ADF test as seen in Figure 14.3. The fact that an elegant asymptotic theory is available for the ADF but not for the RADF does not seem to make much difference to their small-sample behavior. Tables 14.4 and 14.5 give percentiles of the RDF and RADF tests under the null hypothesis that xt is a monotone transformation of a pure random walk. Figures 14.4 and 14.5 compare the power of the RDF test with that of the DF. The upper left panel of Figure 14.4, for example, shows the fraction of rejections of the hypothesis H0 : r = 0 in the model Dxt = -rxt-1 + et for several values of r when the sample size is 50. The four lines on the plot show the rejection percentages for the DF and RDF at the 5% and 10% significance levels. Figure 14.4 compares the tests when there is no constant allowed in the regression, while Figure 14.5 compares the tests with a constant included. Similar power comparisons between the ADF and RADF statistics are a topic for further research. As the DF test without a constant is equivalent to a likelihood ratio test, it is not surprising that it is more powerful than its rank counter-
Nonlinear Transformations of Integrated Time Series
277
RADF Densities
0.6 n = 25 n = 50 n = 100 n = 200 n = 400 n = 800
0.4
0.2
0.0 0
1
2
3
4
Figure 14.2. Rank augmented Dickey–Fuller densities.
ADF Densities 0.5
0.4
n = 25 n = 50 n = 100 n = 200 n = 400 n = 800
0.3
0.2
0.1
0.0 –1
0
1
2
Figure 14.3. Augmented Dickey–Fuller densities.
3
4
278
C. W. J. Granger and J. Hallman
Table 14.4 Rank Dickey–Fuller Percentiles. Without constant
Constant included
Sample size
10%
5%
1%
10%
5%
1%
25 50 100 200 400 800
1.70 1.77 1.82 1.87 1.88 1.97
2.03 2.13 2.14 2.18 2.18 2.28
2.71 2.79 2.76 2.80 2.82 2.83
2.63 2.63 2.68 2.71 2.75 2.78
2.98 2.93 2.95 3.00 3.01 3.06
3.70 3.49 3.60 3.53 3.57 3.59
Table 14.5 Rank Augmented Dickey–Fuller Percentiles. Without constant
Constant included
Sample size
10%
5%
1%
10%
5%
1%
25 50 100 200 400 800
1.67 1.57 1.61 1.66 1.70 1.79
2.05 1.91 1.92 1.95 2.04 2.08
2.87 2.56 2.52 2.57 2.61 2.73
2.39 2.37 2.41 2.48 2.55 2.65
2.72 2.66 2.68 2.75 2.82 2.92
3.48 3.25 3.24 3.27 3.42 3.51
part.What is surprising is that the RDF is apparently more powerful than the DF when constants are allowed into the regression. A possible explanation is as follows. When there are no lags of Dyt involved, regression (2.2) is equivalent to (2.1) using the mean-corrected y˜t = yt - y . When this is done with the original data, the mean is a parameter that has to be estimated, using up a degree of freedom. For ranks, however, the mean is almost completely determined by the number of observations – it is just N (N + 1) rank( yN ) . 2 N Since it is nearly deterministic, having to estimate it has little effect on the power of the test. Finally, Tables 14.6 and 14.7 show the empirical distributions of the RDF and RADF tests from a simulation in which the statistics were computed for 200 observations of the indicated transformations of a pure random walk. 500 trials were performed to obtain the percentiles shown. Rank statistics are invariant to monotone transformations, and so the
Nonlinear Transformations of Integrated Time Series
279
% rejected
% rejected
Power of DF and RDF Tests without Intercept 100
80
60
60
40
40
20
20
0.85
0.90
0.95
1.00 n = 50
0.85
% rejected
0.85
% rejected
80
100
0.85
0.90
0.95 1.00 n = 100
100
80
80
60
60
40
40
20
20
0.85
0.85
0.90
0.95 1.00 n = 200
RDF 90% RDF 95% DF 90% DF 95% 0.90 0.92 0.94 0.96 0.98 1.00 n = 400
Figure 14.4. Power of Dickey–Fuller and rank Dickey-Fuller tests without intercept.
computed statistics for x, x3, exp(x), ln(x + 75), and 1/(x + 75) are all identical. Since |x| = (x2)1/2, their statistics are also identical. For the strictly monotone transformations in the tables, RDF and RADF have the correct size by construction. For the other transformations, a comparison of Tables 14.6 and 14.7 with Tables 14.1 and 14.2 indicates that the RDF and RADF distributions appear considerably more robust than the DF and ADF distributions. Only for the sin x transfor-
280
C. W. J. Granger and J. Hallman
Figure 14.5. Power of Dickey–Fuller and rank Dickey–Fuller tests with intercept.
mation do the RDF and RADF tests consistently reject the null hypothesis, but this is the correct thing to do as sin xt is a stationary AR(1). A reasonable strategy for unit root testing is to compute both the conventional and the rank versions of the DF and ADF tests, since it is rarely known with certainty that the underlying data-generating process (DGP) is linear. If it is, both kinds of tests have the correct size and similar power. Otherwise the rank versions of the tests are more applicable. If
Nonlinear Transformations of Integrated Time Series
281
Table 14.6 Rank Dickey–Fuller (With Constant) Empirical Distribution. Transformation x x2 x3 ΩxΩ sgn(x) sinx exp(x) ln(x + 75) 1/(x + 75)
1%
5%
10%
25%
50%
75%
90%
95%
99%
0.49 0.97 0.49 0.97 -0.99 6.14 0.49 0.49 0.49
0.91 1.36 0.91 1.36 -0.09 6.45 0.91 0.91 0.91
1.16 1.57 1.16 1.57 0.21 6.58 1.16 1.16 1.16
1.48 1.95 1.48 1.95 1.57 6.92 1.48 1.48 1.48
1.90 2.43 1.90 2.43 2.84 7.25 1.90 1.90 1.90
2.34 3.11 2.34 3.11 4.11 7.63 2.34 2.34 2.34
2.81 3.77 2.81 3.77 5.15 7.96 2.81 2.81 2.81
3.06 4.22 3.06 4.22 5.53 8.14 3.06 3.06 3.06
3.70 5.09 3.70 5.09 6.53 8.51 3.70 3.70 3.70
Table 14.7 Rank Augmented Dickey–Fuller (With Constant) Empirical Distribution. Transformation x x2 x3 ΩxΩ sgn(x) sinx exp(x) ln(x + 75) 1/(x + 75)
1%
5%
10%
25%
50%
75%
90%
95%
99%
0.22 0.67 0.22 0.67 -1.24 3.71 0.22 0.22 0.22
0.71 0.98 0.71 0.98 -0.42 4.09 0.71 0.71 0.71
0.91 1.21 0.91 1.21 0.01 4.27 0.91 0.91 0.91
1.28 1.59 1.28 1.59 0.53 4.67 1.28 1.28 1.28
1.74 2.09 1.74 2.09 1.59 5.06 1.74 1.74 1.74
2.14 2.61 2.14 2.61 2.33 5.54 2.14 2.14 2.14
2.59 3.17 2.59 3.17 2.92 5.93 2.59 2.59 2.59
2.89 3.49 2.89 3.49 3.53 6.22 2.89 2.89 2.89
3.30 4.49 3.30 4.49 4.16 6.63 3.30 3.30 3.30
the ADF test rejects its null while the RADF does not, for example, we might look at a plot of the rank transformation to see if it is suggestive of a parametric transformation yielding a series that could reasonably be modeled as a linear I(1) process. The fact that all the transformed series in Tables 14.1 and 14.2 have DF and ADF distributions shifted to the right indicates that the case where RADF rejects and ADF does not is unlikely unless the process really is linear. In this case we might find the ADF test more believable on the grounds that its asymptotic distribution has been worked out. 3.
COINTEGRATED VARIABLES
Two questions will be considered. (i) If xt is I(1), can xt and g(xt) be cointegrated for some function g(·)?
282
C. W. J. Granger and J. Hallman
(ii) If xt, yt are I(1) and cointegrated, will g(xt), g(yt) also be cointegrated? It will be assumed that xt is a pure Gaussian random walk, possibly with drift, generated by xt = m + xt-1 + et et ~ i.i.d. N(0, s2), so that xt ~ N(mt, s2t). For the second question, yt will be assumed to be given by yt = axt + et where e is i.i.d. Gaussian, mean zero and independent of et. Denote E{g(xt)} ∫ mt. In general, this will be a function of time. For example, if g(x) = x2, then m t = m2t2 + s 2t which is a function of time even if xt has no drift. If g(xt), xt are cointegrated with a constant cointegrating parameter a, then g( xt ) - m t = a xt + at where at is I(0). aˆ t will be uncorrelated with xt if a is estimated by ordinary least squares (OLS). Is there a constant a such that g(xt) - mt - axt is I(0)? A simple form of Stein’s lemma says that if x is Gaussian then cov{g( x) x} = E {g ¢( x)} var( x) and so the OLS estimate of a tends asymptotically to E{g¢(xt)}. There are essentially three cases. (i) limtÆ• E{g¢(xt)} = c, a constant, in which case cointegration will occur. (ii) limtÆ• E{g¢(xt)} = 0 and there is no cointegration. (iii) limtÆ• E{g¢(xt)} = Gt, a function of time. In this case there is no constant-parameter cointegration. There may or may not be time-varying parameter cointegration, but this will not be considered in this paper. It is easily seen that if g(x) = axk for some integer k, then xt and g(xt) can only be (constant-parameter) cointegrated if k = 1. Similarly, if g(x) = exp(lx), there cannot be cointegration. An example where apparent cointegration might seem possible is when g(x) = ln(a + x), where a is large and positive throughout the sample period and it is assumed that xt has no drift, so that m = 0. In this case, 1 a+ x x x2 ˆ Ê ª a -1 Á 1 - + 2 ˜ + O(a -4 ) Ë a a ¯
g ¢ ( x) =
Nonlinear Transformations of Integrated Time Series
283
Table 14.8 Percentiles of Dickey–Fuller Cointegration Test. Transformation
55%
60%
65%
70%
75%
80%
85%
90%
95%
x2 x3 sinx exp(x) ln(x + 75) 1/(x + 75)
2.81 3.16 1.87 2.96 3.13 3.15
2.95 3.30 1.96 3.10 3.26 3.26
3.13 3.45 2.14 3.21 3.42 3.43
3.33 3.62 2.23 3.30 3.58 3.59
3.52 3.82 2.38 3.44 3.74 3.73
3.77 4.07 2.56 3.57 4.01 3.99
4.04 4.27 2.67 3.76 4.27 4.29
4.34 4.67 2.88 4.06 4.53 4.54
4.75 5.24 3.12 4.49 5.06 5.06
Table 14.9 Percentiles of Augmented Dickey–Fuller Cointegration Test. Transformation
55%
60%
65%
70%
75%
80%
85%
90%
95%
x2 x3 sinx exp(x) ln(x + 75) 1/(x + 75)
2.69 2.83 1.69 2.00 2.98 2.96
2.84 2.96 1.80 2.10 3.09 3.11
2.97 3.08 1.94 2.18 3.26 3.23
3.10 3.23 2.05 2.30 3.39 3.40
3.36 3.42 2.16 2.39 3.54 3.54
3.58 3.62 2.28 2.52 3.74 3.75
3.75 3.82 2.47 2.74 3.95 3.92
4.19 4.07 2.65 2.88 4.25 4.25
4.87 4.33 3.06 3.30 4.59 4.58
so that E {g ¢( x)} =
1 s 2t + + O(a -4 ) a a3
Provided that s2 times the number of observations included in a sample is small compared with a3, E{g¢(x)} will approximate the (small) constant 1/a and apparent constant-parameter cointegration may occur. In Tables 14.8 and 14.9 the results of tests for cointegration (DF and ADF) between x and g(x) are given for several functions g(·). Selected percentiles of the empirical distribution of the DF and ADF tests performed on the residuals of a regression of xt on the indicated function g(xt), where xt is a pure random walk of 200 observations, are shown. The tables are based on a simulation experiment with 500 trials for each function. The values in the table can be compared with the 5% and 10% critical values of the DF(3.37, 3.02) and ADF(3.25, 2.98) tests for cointegration from Engle and Yoo (1987). It is seen that, except for the sine function, the cointegration tests can be somewhat misleading. For all the other transformations, the tests find cointegration a third or more of the time when it should not theoretically be there. It should be noted that the critical values for these tests were found using independent series xt, yt. Certainly xt and g(xt) are not independent of each other.
284
C. W. J. Granger and J. Hallman
Turning to the second question, a mean value expansion shows that g( yt ) = g(axt + e t ) ª g(axt ) + e t g ¢(axt + rt ) where rt is some remainder term. As seen in Section 4, the second term will generally appear to be I(0) in mean with some heteroskedasticity, particularly if xt, et are independent. If it is assumed that this is correct, g(yt) - g(axt) is I(0). It follows that g(xt), g(yt) are cointegrated if either (i) a = 1 or (ii) g(x) is homogeneous, so that g(ax) = alg(x), in which case the cointegrating parameter is al. It should be pointed out that these results are only approximate. The answer to the two questions posed at the beginning of the section are generally no and yes respectively. The second case requires g(·) to be homogeneous or the series to be scaled so that the cointegrating coefficient is 1. Granger and Hallman (1988) give an example where xt, yt are not cointegrated but xt2, yt2 are. 4.
CONCLUSIONS
Nonlinear transformations of integrated series generally retain the long memory properties of traditional I(1) series, such as slowly declining autocorrelations. However, the DF and ADF unit root tests performed on such transformed series will often reject the null hypothesis that the series was generated by a linear process with a unit root. Since an investigator is rarely certain that the generating process for his data is in fact linear, a unit root test that is invariant to monotone data transformations is desirable. The test proposed here is to perform the DF or ADF test on the ranks of the series, rather than on the series itself. The power functions of the rank tests are very close to the power functions of the conventional tests, but the rank versions have the desired invariance property by construction. In theory, a nonlinearly transformed series generally cannot be cointegrated with the original series. This emphasizes the importance of having the correct functional form when investigating a hypothesized long-run relationship yt = f(xt). If the actual cointegrating relationship is yt = g(xt), then yt and f(xt) will be cointegrated only if g is an affine transformation of f. Hallman (1989) addresses this issue. Testing for cointegration by performing unit root tests on the residuals from a regression of xt on f(xt) can be misleading, often finding cointegration when it theoretically cannot be there. Finally, if xt, yt are cointegrated series, then g(xt), g(yt) can also be cointegrated if either (i) g(·) is homogeneous or (ii) the data are scaled so that the cointegrating coefficient for xt, yt is 1.
Nonlinear Transformations of Integrated Time Series
285
ACKNOWLEDGEMENTS This paper was prepared under National Science Foundation grant SES 8902950.
REFERENCES Engle, R. F. and Yoo, B. S. (1987) Forecasting and testing in cointegrated systems. J. Economet. 35, 143–59. Feeler, W. (1968) An Introduction to Probability Theory and Its Application, Vol. 1. New York: Wiley. Granger, C. W. J. (1988) Introduction to processes having equilibria as simple attractors: the Markov case. Discussion Paper, University of California, San Diego. —and Hallman, J. J. (1988) The algebra of I(1). Finance and Economics Discussion Series 45, Board of Governors of the Federal Reserve System. Hallman, J. J. (1989) Cointegration with transformed variables. Finance and Economics Discussion Series, Board of Governors of the Federal Reserve System. In preparation. Phillips, P. C. B. (1987) Time series regression with a unit root. Econometrica 55, 277–301.
CHAPTER 15
Long Memory Series with Attractors Clive W. J. Granger and Jeff Hallman
1.
INTRODUCTION
The results presented in this paper can be motivated by considering the prices of some agricultural product, say tomatoes, in two parts of a country, denoted PNt, PSt for the prices in the north and south. At a time t, values of these prices will be a point in the plane with axes PN, PS. In this plane the line PN = PS may be considered to be an attractor because, if two prices are quite different, and thus off this line, there will be market pressure to bring the prices together. If PN is much larger than PS it will be a profitable enterprise to buy tomatoes in the south, transport them to the north and sell them there. This activity will raise demand and thus prices in the south, and raise supply, and thus lower prices, in the north. As the prices becomes near each other, the profitability of this activity will decline and so the strength of the attraction becomes small. This example illustrates a type of behavior that might be expected to occur frequently in economics. One may have a pair of economic series xt, yt each of which varies over a wide range but plots of xt against yt suggest that the economy has a preference for these points to lie in or near some region which could be called the attractor. This preference may occur through a market mechanism or by the action of government policy, say, when the market is fairly efficient, so that there are no trade barriers for instance, and when the government policy is effective. It might also be assumed that because of sticky prices, long-run contracts or delays in policy implementation a point off the attractor is not brought directly back on to it. The economy is taken to be stochastic, being influenced by frequent unforecastable shocks, and the attractor is not capturing so that if (xt, yt) is on the attractor the economy is liable to be taken off it by a shock or innovation. The object of the paper is to characterize attractors, to study the properties of series having attractors and then to consider the empirical aspects of these concepts. * Oxford Bulletin of Economics and Statistics, 53, 1991, 11–26.
Long Memory Series with Attractors
287
The proposal can be considered to be a nonlinear generalization of the concept known as cointegration which is discussed in Granger (1986) and in the book of readings, Engle and Granger (1990), and which has been widely used in macroeconomics and in finance. Although a variety of generalizations are available, the concept of cointegration is easily explained using characterizations of time series as being either I(0) or I(1). An I(0) series can be taken as being just a stationary, trend free series whereas an I(1) series is such that its difference is I(0). These two types of series have quite different appearances and properties, some of which are discussed in the next section. In particular, under reasonable assumptions, the variance of an I(0) series is bounded whereas the unconditional variance of an I(1) series increases without bound as t increases. A pair of series xt, yt are said to cointegrate if they are each I(1) but there exists a linear combination zt = xt - Ayt which is I(0). In this case it is shown later that the line x = Ay may be thought of as an attractor. A generating mechanism that produces cointegrated series is xt = AWt + x˜t yt = Wt + y˜t
(1.1)
where x˜t, y˜t is a bivariate system of I(0) series and Wt is I(1). As this system has three components it will be called a “three-factor” system. A simpler, two factor generating mechanism is xt = AWt + azt yt = Wt + bzt where a - Ab = 1. If xt, yt are generated by a two factor mechanism, then one can solve exactly for the factors whereas this is not true in the three factor case. Any pair of cointegrated series must have a representation such as (1.1), so that the cointegration property is produced by the single I(1) factor Wt. Another generating mechanism that must occur is known as the error-correcting (EC) model, of the form Dxt = r1zt-1 + lags of Dxt, Dyt + residual Dyt = r2zt-1 + lags of Dxt, Dyt + residual
(1.2)
where at least one of r1, r2 is non-zero, zt = xt - Ayt and the residuals are white noises and hence I(0). If x = Ay is considered to be an equilibrium, the equation (1.2) may be thought of as the disequilibrium mechanism that produces this equilibrium. If xt, yt are generated by (1.2) they will be cointegrated and if they are cointegrated then they must have an EC representation. If economic theory suggests linear equilibrium relationships between series, the cointegration idea is sufficient for exploration of this theory.
288
C. W. J. Granger and J. Hallman
However, if the theory suggests a nonlinear equilibrium the cointegration ideas have to be generalized. In particular the characterization of series being I(0) or I(1) is too linear and has to be replaced by a more general method of characterization, and this is attempted in the next section. Section 3 introduces the nonlinear generalization of cointegration and discusses some properties of processes having attractors. The exposition is mostly descriptive, is not necessarily completely rigorous and considers only the bivariate case. Generalizations to more variables is straightforward in concept but clearly more complex mathematically. 2.
SHORT AND LONG MEMORY
Consider the conditional probability density function of xt+h given the information set It:xt-j, Qt-j, j ≥ 0 where Qt is a vector of other explanatory variables. The series xt will be said to be short memory in distribution (SMD) with respect to It if Prob( xt + h in A I t in B) - Prob( xt + h in A) Æ0
(2.1)
as h ≠ • for all appropriate sets A, B such that Prob(I, in B) > 0. The definition is clearly closely related to uniform mixing. If (2.1) does not hold xt can be called long memory in distribution (LMD). More specific are definitions of memory in mean. Defining the conditional mean E ( xt + h I t ) = ft ,h so that ft,h is the optimum least squares forecast of xt+h using It, then xt is said to be short memory in mean (SMM) if lim ft ,h = F hÆ•
where F is a random variable with distribution D and if D does not depend on It. The case of particular interest here is where D is singular, so that F just takes a single value, m, which is the unconditional mean of xt, assumed to be finite. Other cases include limit cycles and process with strange (possibly fractionally dimensional) attractors. Although interesting these cases are less easily associated with the simple concepts of equilibrium considered in this paper. If ft,h depends on It for all h, xt is said to be long memory in mean (LMM). It is clear that if xt is SMD then it is SMM and also any function of xt is also SMM, provided the unconditional mean of the function exists. If xt is LMM then it must be LMD but not necessarily vice versa. However, in general if xt is LMD then many functions of g(xt) will be LMM, provided the mean exists, as shown in Granger and Thompson (1987). An example of a series that is SMM but LMD is if xt = etyt, where yt is LMM and independent of et which is I(0), such as a white noise, as shown in
Long Memory Series with Attractors
289
Granger and Hallman (1989, 1990). In the same papers it is proposed that if xt is LMM then any monotonic nondecreasing function of xt is also LMM and this hypothesis is found to be correct when xt is a Gaussian random walk and for a variety of actual functions. However it is also found there that if xt is a Gaussian random walk then sin xt is SMM, in particular it has the linear properties of a stationary AR(1) process. It is thus suggested that if xt is LMM then sin xt and cos xt will often be SMM. It will be assumed below that this proposition is correct. A single series xt will be said to have the point attractor m if xt is short memory in mean, so that lim ft x,h = m h
as h ≠ and for all t and also provided that var( xt + h - ft x,h ) £ finite constant as h ≠, so that the asymptotic forecast error is bounded. This definition may be considered to be a special case of processes with strange attractors, where xt is generated by a deterministic mechanism but with a very small stochastic added noise, perhaps computer and round-off error, in which case ft,h Æ F, where F lies on an attractor of reduced, and sometimes fractional dimension, which does not depend on the initial values xt-j, j ≥ 0. 3.
BIVARIATE ATTRACTOR
The definition that is proposed for an attractor for a pair of series xt, yt is based on Figure 15.1. In the (x, y) plane suppose there is region A, illustrated as a curve in the figure, (xt, yt) is the point taken by the bivariate process at time t and (xtA, ytA) is the point on A nearest to (xt, yt), using a Euclidean measure of distance. Denote y = at + btx the tangent to A at the point (xtA, ytA), where this tangent is assumed to be defined and unique, for convenience. Clearly at, bt will be functions of (xt, yt) and of A, by construction, except when A is a straight line. Define zt = yt - at - btxt so that zt is the signed distance from (xt, yt) to (xtA, ytA). The bivariate process (xt, yt) may be said to have A as an attractor if zt is short memory in mean with m = 0 and has bounded variance. A stronger condition is that zt is SMD with mean zero and finite variance but this is a difficult hypothesis to test. It may be noted from the definition of zt that it may be difficult to distinguish between a nonlinear attraction and a timevarying (linear) cointegration. If xt, yt are each individually SMD then any function of these series including zt, will also be SMD. The only interesting case is where xt, yt are
290
C. W. J. Granger and J. Hallman
Figure 15.1.
long memory in mean but a particular function of them, zt = f(xt, yt), is short memory in mean and the attractor is then A: z = 0, i.e. (x, y) such that f(x, y) = 0. Clearly, not all pairs of LMD series will possess such an attractor, as defined here. A sufficient condition for zt to be SMM is that some other distance from (xt, yt) to A is SMM. The form of function studied below is qt = g( xt ) - h( yt ). From this definition qt must be at least as big in magnitude as zt, so that if qt is SMM, so will be zt. A method of generating LMM processes having an attractor is as follows. Suppose that the curve f(x, y) = 0 can be written g( x) = h( y)
(3.1) -1
-1
and define G(x) = g (x), H(y) = h (y) assuming these inverse functions exist. Let wt be a Gaussian random walk so that wt = wt-1 + et
(3.2)
where et is zero mean, Gaussian, constant variance white noise. Let xtA = G(wt), ytA = H(wt). If G, H are monotonic nondecreasing, then from the
Long Memory Series with Attractors
291
results stated earlier, xtA, ytA will be LMM and will lie on the attractor. The tangent to the attractor at xtA, ytA has slope H¢[g(xtA)]g¢(xtA) using the notation introduced above and where H¢(x) ∫ dH/dx corresponding to qt = tan-1 [slope]. As xtA is a function of wt, one can just write qt ∫ q(wt). xt, yt now can be generated by xt = xtA - zt sin qt
(3.3)
yt = ytA + zt cos qt
where zt is a zero mean SMM, finite variance series generated independently from wt. This is a generalization of the “two factor” mechanism that generates I(1) cointegrated series, wt corresponding to the common factor that is LMD. Clearly other generalizations are possible, with wt being LMD other than a simple random walk or using “three factor” form, but these will not be considered here. Note that if zt is SMM then so will be zt sin qt from the result stated in Section II about products of processes. With the construction (3.3) it is clear that as the long run forecast of zt is zero, because it is SMM, then the optimum long run forecasts of the pair of series xt, yt will lie on the attractor. A single series that is SMM and has an attractor must have a point attractor. It follows that a pair of LMM series cannot have an attractor that is bounded in all directions. Consider a possible attractor that is a circle of radius r and with center (0, 0). The distance from xt to the origin is then zt + r which is necessarily SMM. A similar argument can be applied to other bounded shapes. It follows that if a pair of LMM series have an attractor, that attractor must be unbounded in some direction. A form of the error correction model can be found from (3.3). Write the first equation as xt = G(wt ) - zt st
(3.4)
where st = sin qt = sin q(wt). Note that G(wt +1 ) = G(wt + e t +1 ) = G(wt ) + e t +1G ¢(wt ) +
1 2 e t +1G ¢¢(wt ) etc 2
using a Taylor series expansion. It follows that the best forecast of G(wt+1) made at time t is 1 ftG,1 = G(wt ) + s e2G ¢¢(wt ) etc 2 ∫ G(wt ) + f (wt )
(3.5)
assuming et is zero mean white noise. From (3.4) the optimum forecast of xt+1 is
292
C. W. J. Granger and J. Hallman x z s f t,1 = f Gt,1 - f t,1 f t,1
(3.6)
given that z, s are independent, given the assumption that zt, wt are independent. Writing f tx,1 = xt+1 - etx,1
(3.7)
where etx,1 is the one-step forecast error, it follows by substitution of (3.6) into (3.7) and subtracting from (3.4) that xt +1 - xt = f (wt ) + zt st - ft 2,1 ft s,1 + etx,1 Suppose that ft z,1 = rzt + Â b j Dzt - j is the best linear predictor of zt from j
its own past and using a Taylor series expansion on s(wt) gives Dxt +1 - f (wt ) = zt [(1 - r) st - rs e2 s ¢¢(wt ) + etc] + terms in Dzt , s ¢¢(wt )etc + etx,1 which is a form of error correction model. The leading term on the right-hand side is the error correction term with a time varying parameter. The other right-hand side terms are SMM. The left-hand side is not just Dxt+1 but has to be modified by subtracting f(wt). It should be noted that Dxt+1 is not necessarily SMM given the method of generating xt. 4.
ESTIMATION OF THE ATTRACTOR
If one has no prior information about the shape of a possible attractor, a nonparametric estimator is worth consideration. A technique that is clearly appropriate is the Alternating Conditional Expectations (ACE) algorithm proposed by Breiman and Friedman (1985). Although originally suggested for use with cross sectional data it can easily be used with time series. Starting with a sample from a pair of random variables x, y the objective is to find a pair of instantaneous transformations q(x), f(y) such that the correlation between these transformed variables is maximized. This criterion is equivalent to maximizing R2 for the regression of f(y) on q(x). Essentially the steps in the algorithm are (i) fix q0(x) = x/x (ii) consider a smooth spline functions f1(y) that maximizes corr(q0(x), f1(y)) (iii) fix f1(y) and consider smooth spline functions q1(x) so that corr(q1(x), f1(y)) is maximized (iv) fix q1(x) and find f2 so that corr(q1(x), f2(y)) is maximized, and so forth until an appropriate stopping rule becomes operative. Details can be found in the paper by Breiman and Friedman (1985). Clearly different nonparametric estimators could be used instead of the smooth splines.
Long Memory Series with Attractors
293
In the ACE implementation used for this paper a fixed-window regression smooth is employed, computing E(Y|X) as follows: (a) sort the observations by x value. (b) Define the window Wn as the set of all observations · xj, yj} such that |j - n| £ k, where k is that predetermined minimum window size (minus one). (c) E(yn|X) is the fitted value of yn from a linear regression of y on a constant and X, using only the observations in the window Wn. (d) For technical reasons detailed in Breiman and Friedman, it is necessary for the data smooths to always have a zero mean, so that sample mean of the computed E(Y|X) is subtracted before the observations are sorted back into their original order. If k = T, the sample size, the smooth is just the linear regression yt = a + bxt and the returned values are {bxt}. At the other extreme, k = 0 will return y minus its mean. In between, larger values of k trade more smoothness for less ability to track discontinuities and sharp changes in the slope of Y|X. The effect of reducing the window size is similar to what happens in a linear regression as more variables are allowed to enter. Just how many “equivalent parameters” are used by ACE is a question explored in the next section. The smoother used in Breiman and Friedman’s ACE implementation is the “supersmoother” of Friedman and Stuetzle (1982). It differs from the fixed window smoother by making several passes with different window sizes and then choosing a window size for each observation based on a local cross validation measure. Unfortunately, it tends to choose window sizes that are too small in moderate sample sizes or when there is sorted data. Both are to be expected in our applications, so a fixed window is used instead. One other point should be mentioned. Breiman and Friedman prove that for a stationary, ergodic process, ACE converges to the optimal transformations if the smooths used are (i) uniformly bounded as T Æ •, (ii) linear, and (iii) mean squared consistent. Marhoul and Owen (1984) have shown regression smooths to be mean squared consistent under conditions not satisfied in our setup. More work will be needed to find conditions under which ACE will always find an existing attractor. This does not prevent us from using ACE to find candidate attractors which we can then test using the procedure of the next section. As an example of the use of the technique, data was generated with xt = wt yt = w3t wt a pure Gaussian random walk and using sample size 200. Figure 15.2 shows the generated data, the actual underlying cubic relationship and
Figure 15.2. Generated data with the true curve y = x³ and the estimated curve ŷ = f̂(x) from the ACE algorithm (ADF(4) = 5.71).
the estimated curve from the ACE algorithm. As the two curves are so close to each other, there is no point in labeling them.
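Continuing the sketch given earlier, this example can be reproduced roughly as follows; the seed, sample size and window are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.cumsum(rng.standard_normal(200))   # pure Gaussian random walk
x, y = w, w ** 3

q, f = ace(x, y, k=20)                    # ace() from the sketch above
print("corr(q(x), f(y)) =", np.corrcoef(q, f)[0, 1])
# The fitted transformation of x should track a linear function of x**3,
# mirroring the near-perfect fit in Figure 15.2.
```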
5. TESTING FOR AN ATTRACTOR
Assume that xt, yt are long memory. If the set A defined by A = {(x, y) : g(x) = h(y)} is an attractor set, then a sufficient condition that zt, as defined in Section 3, is SMM is that wt = g(xt) − h(yt) is SMM, and thus will appear to be I(0) in tests. On the other hand, if A is not an attractor set, wt will have a unit root. Rejecting the hypothesis of a unit root in the wt estimated by ACE is evidence that A is an attractor. The augmented Dickey-Fuller (ADF) statistic for testing the unit root hypothesis is the negative of the t statistic for d in the regression

Δw_t = d w_{t−1} + Σ_{j=1}^{k} b_j Δw_{t−j} + u_t.
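For concreteness, the statistic can be computed directly from this regression. The following is a bare-bones sketch (numpy only, no constant term since the wt used below have mean zero by construction); it is not a substitute for the tabulated critical values discussed next.

```python
import numpy as np

def adf_stat(w, k=4):
    """ADF statistic: minus the t ratio on w_{t-1} in a regression of
    delta w_t on w_{t-1} and k lagged differences (no constant)."""
    w = np.asarray(w, dtype=float)
    dw = np.diff(w)
    y = dw[k:]
    cols = [w[k:-1]] + [dw[k - j:-j] for j in range(1, k + 1)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return -beta[0] / se
```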
The hypothesis of a unit root in {wt} is rejected if the ADF is large enough. If wt has a nonzero mean it is subtracted off before performing the test. When wt is a residual from ACE or from a regression including a constant term, it has mean zero by construction. The use of the ADF as a test for linear cointegration was first suggested by Engle and Granger (1987), and its distribution has been studied by Engle and Yoo (1987), and others. Engle and Yoo provide tables of critical values for the test. These depend on both the number
of observations in the sample and on the number of parameters estimated in the cointegrating regression. This presents a problem, in that ACE does not estimate parameters. However, shrinking window sizes in ACE is much like allowing for more parameters in a regression. What is needed is an indication of how many equivalent parameters are being used by ACE for different window sizes. A simple Monte Carlo experiment was conducted using just 100 repetitions of the following: (i) generate x, e as vectors of 100 i.i.d. N(0, 1) random variables; (ii) form the summations

Sx_t = Σ_{j=1}^{t} x_j,    Se_t = Σ_{j=1}^{t} e_j;

(iii) form Sy_t by (a) Sy_t = 0.33 Sx_t + Se_t or (b) Sy_t = 3 Sx_t + Se_t. If the series were stationary, these would correspond to R² values of 0.1 and 0.9 respectively. In fact, Sy_t, Sx_t are I(1) series that are not cointegrated. The ACE algorithm was applied to the series for various window sizes. The series wt were formed and the ADF statistic computed, using 4 lags. Table 15.1 shows the estimated percentiles in the two cases (a) and (b). For purposes of comparison, Table 15.2 shows the percentiles for ADF statistics on residuals from linear regressions of a random walk on k − 1 other independent random walks. The last rows in Tables 15.1(a) and (b) are for linear regressions. For “middle” window sizes, the distributions in Table 15.1 are fairly stable and roughly correspond to the k = 3 case in Table 15.2. It is seen that the “degree of explanation”, very roughly corresponding to R², does matter, but generally window size does not make a big difference. Even though the results in Table 15.1 are for two series, the use of the ACE algorithm approximately adds a further independent series to the process. Clearly a great deal of further research is required on this topic, but “spurious regression” problems do not seem to be excessive and it is recommended that ADF statistical tables can be used as though an extra series was involved, as a practically useful approximation.
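A rough version of this experiment can be run with the two sketches given earlier (the ace and adf_stat functions). The replication count, sample size and window size follow the text, but everything here is illustrative rather than a reproduction of the original study.

```python
import numpy as np

rng = np.random.default_rng(1)
reps, T, k_window = 100, 100, 24
stats = []
for _ in range(reps):
    Sx = np.cumsum(rng.standard_normal(T))
    Se = np.cumsum(rng.standard_normal(T))
    Sy = 0.33 * Sx + Se                     # case (a); use 3.0 for case (b)
    q, f = ace(Sx, Sy, k=k_window, n_iter=20)   # ace() from the earlier sketch
    stats.append(adf_stat(q - f, k=4))          # w_t = q(x_t) - f(y_t)
stats = np.sort(stats)
print("5%/50%/95% percentiles:", np.percentile(stats, [5, 50, 95]))
```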
6. AN APPLICATION
Two monthly series for the period January 1947 to December 1985 were taken from the Citibase data tape:
Table 15.1(a) ADF percentiles for case (a).

W      5%     10%    20%    50%    80%    90%    95%    Mean
9      1.57   1.98   2.32   2.90   3.72   4.25   4.75   2.98
14     0.96   1.82   2.07   2.63   3.31   4.01   4.35   2.74
19     0.84   1.67   1.94   2.53   3.26   3.59   4.08   2.57
24     0.57   1.49   1.88   2.33   3.23   3.42   3.81   2.44
29     0.67   1.40   1.83   2.30   3.05   3.51   3.64   2.35
34     0.64   1.37   0.75   2.28   3.03   3.43   3.55   2.30
39     0.39   1.27   1.66   2.19   3.03   3.33   3.59   2.26
44     0.71   1.18   1.62   2.18   3.00   3.28   3.59   2.23
49     0.70   1.02   1.55   2.16   3.07   3.24   3.48   2.20
100    0.43   0.92   1.36   1.92   2.66   2.90   3.18   1.97
Table 15.1(b) ADF percentiles for case (b).

W      5%     10%    20%    50%    80%    90%    95%    Mean
9      1.26   1.56   1.87   2.65   3.35   3.78   4.10   2.66
14     1.05   1.56   1.85   2.49   3.18   3.61   3.93   2.55
19     1.06   1.33   1.74   2.36   3.16   3.45   3.89   2.45
24     0.88   1.37   1.71   2.34   3.08   3.50   3.79   2.40
29     0.97   1.31   1.67   2.33   3.01   3.52   3.65   2.37
34     0.84   1.29   1.65   2.26   3.04   3.43   3.78   2.34
39     0.74   1.28   1.66   2.29   2.98   3.48   3.74   2.31
44     0.84   1.28   1.63   2.28   3.01   3.47   3.81   2.31
49     0.81   1.26   1.60   2.30   2.99   3.55   3.86   2.32
100    0.80   1.20   1.54   2.17   2.94   3.29   3.66   2.20
Table 15.2 ADF percentiles for OLS (based on 1000 trials).

K      5%     10%    20%    50%    80%    90%    95%    Mean
2      0.37   0.82   1.31   1.99   2.71   3.07   3.31   1.96
3      0.96   1.39   1.74   2.41   3.06   3.36   3.61   2.37
4      1.46   1.80   2.10   2.76   3.36   3.68   4.01   2.74
5      1.79   2.06   2.34   2.98   3.61   3.97   4.31   2.99
RMB: US base money (M0) divided by the consumer price index, and
FY3: the interest rate on three-month US Treasury bills.
The sample size is 468. The standard linear analysis for both series did not reject the null hypothesis that they are I(1), but the two-stage procedure
Figure 15.3.
suggested in Engle and Granger (1987) did not find the series to be (linearly) cointegrated. Some details of these results are: RMB: augmented Dickey-Fuller (ADF) with 4 lags = −0.05; FY3: ADF with 4 lags = 1.94; zt: ADF with 4 lags = 2.92. A value of about 3.50 would be needed for these ADF statistics to reject the null hypothesis that the series is I(1). Figures 15.3 and 15.4 show the transformations of the series, q(RMBt), f(FY3t), achieved by use of the ACE algorithm with a window of size 93. The transformation of money base is seen to be almost linear but that for the 3-month interest rate is very nonlinear. The 4-lag ADF statistics for these transformed series are −0.05 and 1.47 respectively, so for both an I(1) characterization is not rejected. Figure 15.5 shows the transformation of the line q(RMB) = f(FY3) back to the original variable plot. This curve is the potential nonlinear attractor. The ADF (4 lags) statistics for the original variables and their transformations are
        Original    Transformed
RMB     −0.05       −0.05
FY3      1.94        1.47
Figure 15.4.
Figure 15.5.
There is thus no evidence that a null hypothesis of long memory (i.e. I(1)) should be rejected. Table 15.3 shows the ADF statistics for qt = qˆ (RMB) - fˆ (FY3) for various window sizes. R2 and ADF statistics increase as window size becomes smaller and using the rule suggested in the previous section the
Table 15.3

Window    R²      ADF
46        0.90    6.30
70        0.88    5.60
93        0.86    4.99
116       0.82    4.36
140       0.79    4.14
163       0.75    3.85
186       0.72    3.71
210       0.70    3.64
233       0.67    3.63
468       0.49    2.92
Figure 15.6.
ADF statistics appear to be significant for window sizes of 140 and less. The analysis reported here used window size 93, as being representative but not so narrow that spurious nonlinearity is likely to occur. Figure 15.6 shows the estimated qt using this window, and this series seems to be short rather than long memory. The evidence certainly suggests that these two series are not cointegrated linearly, but they do have a nonlinear attractor, which can be viewed as a nonlinear cointegration. An alternative way to consider this evidence is by constructing error correction models using the original and the transformed data. For each series Dxt was regressed on lagged
Δxt, Δyt and a single lagged wt (zt in the original data). Insignificant lagged Δxt, Δyt were dropped and the remaining model re-estimated. Using ordinary t-statistics, zt−1 was not significant for either original series, as expected because zt was I(1). For the transformed series, wt−1 was not significant in the equation for ΔTRMB, but for TFY3 the error-correction model estimated was

ΔTFY3_t = 0.0001 − 0.076 w_{t−1} + 0.32 ΔTFY3_{t−1} + 0.12 ΔTFY3_{t−3} + residual,
          (0.50)   (−5.3)         (7.3)               (2.7)

with R² = 0.133 and Durbin-Watson = 1.99. A clearly significant coefficient for wt−1 is observed. This example appears to find a nonlinear attractor but its interpretation needs some care. In Figure 15.3, the values on the low part of the attractor often also correspond to observations for the early part of the period used. Thus what is being here interpreted as a nonlinear attractor could possibly be viewed as time-varying cointegration between the variables. Methods of distinguishing between various types of nonlinear models need to be investigated. In Hallman (1990) two other examples are presented with somewhat similar results: (a) the monthly Standard and Poor common stock composite index and earnings per share, January 1954 to January 1986, and (b) the ratio of M1 to M2 and the 6-month Treasury bill rate, quarterly averages from 1959 to 1985. In both examples the original series were not cointegrated but evidence of a nonlinear attractor was found after using the ACE algorithm, particularly in example (a). In various other examples examined no linear or nonlinear attractor was found. Our experience has been that it is not easy to find examples where there seems to be nonlinear cointegration but not linear cointegration. At least the suggested methods seem not to produce spurious results.
7. CONCLUSION
A definition of a nonlinear attractor has been proposed that we believe falls into some views of macroeconomic equilibrium and which is capable of being estimated and tested with actual data. We have shown that the ACE algorithm provides a practical estimation technique and tests can be derived using it, although a great deal of further work is required on the test procedures. It is hoped that this work will provide a useful starting point for generalizations of linear cointegration.
REFERENCES

Breiman, L. and Friedman, J. H. (1985). “Estimating Optimal Transformations for Multiple Regression and Correlation”, Journal of the American Statistical Association, Vol. 80, pp. 580-97.
Engle, R. F. and Granger, C. W. J. (1987). “Cointegration and Error Correction: Representation, Estimation and Testing”, Econometrica, Vol. 55, pp. 251-76.
Engle, R. F. and Granger, C. W. J. (1990). Long Run Economic Relationships: Readings in Cointegration, Oxford University Press.
Engle, R. F. and Yoo, S. (1987). “Forecasting and Testing in Cointegrated Systems”, Journal of Econometrics, Vol. 35, pp. 143-59.
Friedman, J. H. and Stuetzle, W. (1982). “Smoothing of Scatter Plots”, Technical Report ORION 006, Department of Statistics, Stanford University.
Granger, C. W. J. (1986). “Developments in the Study of Cointegrated Economic Variables”, Oxford Bulletin of Economics and Statistics, Vol. 46, pp. 213-28.
Granger, C. W. J. and Hallman, J. (1989). “The Algebra of I(1)”, Finance and Economics Discussion Series, Paper 45, Division of Research and Statistics, Federal Reserve Board, Washington DC.
Granger, C. W. J. and Hallman, J. (1990). “Nonlinear Transformations of Integrated Time Series”, forthcoming in Journal of Time Series Analysis.
Granger, C. W. J. and Thompson, P. J. (1987). “Predictive Consequences of Using Conditioning or Causal Variables”, Econometric Theory, Vol. 3, pp. 150-52.
Hallman, J. J. (1990). Ph.D. Thesis, Economics Department, University of California, San Diego.
Marhoul, J. C. and Owen, A. B. (1980). “Consistency of Smoothing with Running Linear Fits”, L.C.S. Technical Report #8, November 1980, Department of Statistics, Stanford University.
CHAPTER 16
Further Developments in the Study of Cointegrated Variables* C. W. J. Granger and Norman Swanson**
1. INTRODUCTION
Since the publication of a paper with a similar title, Granger (1986), there has been considerable interest and activity concerning cointegration. This is illustrated by the books by Engle and Granger (1991), Banerjee, Dolado, Galbraith and Hendry (1993), Johansen (1995) and Hatanaka (1995) plus many papers, both theoretical and applied. Much of the work has been highly technical, and impressive and very useful, but has not necessarily helped economists interpret their data. This work has often accepted the constraints imposed by the early papers and has not questioned these constraints. It is the objective of this paper to suggest and examine generalizations whilst maintaining the main idea of cointegration and consequently to, hopefully, provide ways of making interpretations of the results of cointegration analysis both more realistic and more useful. The paper raises more questions than it solves and so can be thought of as a research agenda rather than a completed project. The same also was clearly true for the 1986 paper. The standard theory begins with a vector of n components x_t, all of which are I(1), so that each component of Δx_t is stationary in the simplest form. Assume that there exists an n × r, r < n, matrix α such that

z_t = α′x_t                                                      (1)

has components that are all I(0), or zero-mean stationary processes, where z_t has r < n components. A vector x_t having these properties is said to be cointegrated. It was clear from the beginning that cointegration could only arise if one had a “common factor”, later called a “common trend”, representation

x_t = D w_t + x*_t

* Oxford Bulletin of Economics and Statistics, 58, 1996, 374-386.
** This study supported by NSF award SBR-93-08295 and by a Penn State University Research and Graduate Studies Office Faculty Award.
where D is an n × m matrix, with m = n − r, w_t is an m × 1 I(1) vector, and x*_t is an n × 1 vector of I(0) components. The z’s have the I(0) property because there are fewer common factors, the w’s, than x’s, so that there must exist linear combinations of the x’s that eliminate the w’s. The other crucial property of the common trend representation is that the I(1) property dominates the I(0) property, so that an I(1) variable plus an I(0) one is always I(1). An important consequence of cointegration is that the x’s must at least appear to have been generated by an error-correction system of equations

A(B) Δx_t = γ z_{t−1} + ε_t                                      (2)
where γ is an n × r matrix, ε_t is a stationary multivariate disturbance, and A(B) is the usual lag polynomial with A(0) = I and A(1) having all finite elements. If one assumes that w_t can be written as a linear combination of x’s, a somewhat more constrained but more interesting representation can be obtained. A popular way of doing this is to take

w_t = γ⊥′ x_t                                                    (3)
where γ⊥′γ = 0, γ⊥′ is an m × n matrix and 0 here is m × r, as discussed by Warne (1991) and Gonzalo and Granger (1995). This definition has the advantage that there is no causality from z_t to w_t at zero frequency, as discussed in Granger and Lin (1995), so that it is natural to view the w’s as contributors to the permanent components, and the z’s as contributors to the transitory components of the system. As there are now n equations relating w’s and z’s to x’s, these can be inverted to give (from Warne (1991))

x_t = α⊥(γ⊥′α⊥)^{-1} w_t + γ(α′γ)^{-1} z_t
    = permanent component + transitory component.                (4)
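The decomposition in (4) is easy to check numerically. Below is a minimal sketch for a bivariate system with one cointegrating vector; the particular α and γ are invented purely for illustration, and numpy's QR factorization is used to construct the orthogonal complements.

```python
import numpy as np

alpha = np.array([[1.0], [-1.0]])          # illustrative cointegrating vector: z_t = x1 - x2
gamma = np.array([[-0.5], [0.1]])          # illustrative adjustment vector

def orth(v):
    """A basis for the orthogonal complement of the columns of v."""
    q, _ = np.linalg.qr(v, mode="complete")
    return q[:, v.shape[1]:]

alpha_perp, gamma_perp = orth(alpha), orth(gamma)

# x_t = alpha_perp (gamma_perp' alpha_perp)^{-1} w_t + gamma (alpha' gamma)^{-1} z_t
P = alpha_perp @ np.linalg.inv(gamma_perp.T @ alpha_perp)   # loading on the common trend
Q = gamma @ np.linalg.inv(alpha.T @ gamma)                  # loading on the cointegration error

# The two parts add back to the identity: P gamma_perp' + Q alpha' = I
print(np.allclose(P @ gamma_perp.T + Q @ alpha.T, np.eye(2)))
```

The check exploits the identity α⊥(γ⊥′α⊥)^{-1}γ⊥′ + γ(α′γ)^{-1}α′ = I, which is what makes the permanent and transitory parts in (4) add back to x_t.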
It is well known that the actual z terms are not identified, since any linear combination of z’s will still be I(0), but replacing z_t by ρ z_t, where ρ is a square r × r matrix such that ρ′ρ = I, does not affect the decomposition (4). Multiplying the error-correction equation (2) first by α′ and then, separately, by γ⊥′, and using (4) to replace lagged Δx_t by lagged Δz_t and Δw_t, one gets the transformed VAR model

z_t = (I − α′γ) z_{t−1} + lags of Δw_t + lags of Δz_t + innovations
Δw_t =                    lags of Δw_t + lags of Δz_t + innovations.        (5)

It should be noted that constants may have to be added to these equations to ensure that each component of z_t has zero mean. Some papers assume that the common trends should be random walks, possibly following from the Beveridge and Nelson (1981)
decomposition of an I(1) variable into permanent and transitory components. The above decomposition, which is used throughout this paper, may not have this property, which is viewed as coming from an alternative arbitrary assumption to the one we use. However, the alternative approach does have difficulties with some situations. Suppose that two series, X_t, Y_t, are analyzed individually and each is found to be a random walk, so that the first difference of each produces series e_{x,t}, e_{y,t} that are uncorrelated with their own past, i.e. corr(e_{x,t}, e_{x,t−k}) = 0, and similarly for e_{y,t}. Can X_t, Y_t be cointegrated? It might seem unlikely if the common trend also has to be a random walk. But if the common trend is allowed to be IMA(1, 1) for instance, some algebra shows that cointegration can occur, with

X_t = (1 + bB)/(1 − B) ε_t + η_t,    σ_η² = b σ_ε²,
and similarly for Y_t. There are several obvious generalizations and several problems with the decomposition approach. The distinction between I(1) and I(0) can be broadened to I(d) and I(b), d > b, using d = 2 or fractionally integrated processes, which is simple in theory but not in practice. The effects of increasing the contents of x_t or changing the series in x_t can lead to difficulties with interpretation in practice. The greatest difficulty, of how to define I(0) precisely, and hence I(1) as the accumulation of I(0), was immediately clear and still remains. A (second-order) stationary process is clearly an example of an I(0) process but does not necessarily make up the complete set of all possibilities, particularly once non-Gaussian, non-linear or time-varying coefficient cases are considered. (An I(0) process can have time-varying coefficients but it has become common in the literature to call I(1) “non-stationary”, showing how confusion can easily enter a field.) Pragmatically, an I(0) process is one that does not fail a powerful test having some general form of I(0) as a null.
2. SIMPLE GENERALIZATIONS
The traditional approach is to start with cointegration as in (1), the equation for zt. Then (2), the error-correction model is specified. This leads to (3), the common trend representation. Finally, (5), the generating mechanism for the z t and wt components is derived. To consider generalizations it is easier to reverse this sequence, first generating processes z t and wt, which may not be observed and which have distinctly different properties, and then generating x t from z t and wt, and finally deriving the corresponding generalized error-correction model. As an example of this approach, consider the bivariate case, where there is a pair of variables Xt and Yt. First generate a pair of univariate series zt and wt by
z_t = λ_t z_{t−1} + ε_{z,t},    λ_t < 1,
w_t = φ_t w_{t−1} + ε_{w,t},    φ_t > 1,                          (6)
where ε_{z,t}, ε_{w,t} will be taken to be martingale difference series (or perhaps white noise). Suppose, further, that z_t = x_t − a_t y_t and w_t = c_1 x_t + c_{2t} y_t. Then, assuming that c_{2t} = 1 + c_1 a_t, this yields

x_t = c_{2t} z_t + a_t w_t,    y_t = −c_1 z_t + w_t,

which links the observed series x_t, y_t with the generated components z_t, w_t. The error-correction model is
(1 − φ_t B) x_t = γ_{1t} z_{t−1} + innovation,
(1 − φ_t B) y_t = γ_{2t} z_{t−1} + innovation,

where γ_{1t} = c_{2t}(λ_t − φ_t) and γ_{2t} = −c_1(λ_t − φ_t). In the above example, several parameters are allowed to vary through time, particularly λ, φ and a. These could be deterministic functions of t, stochastic functions based on observed variables such as measures of the state of the business cycle, or unobserved series such as used in the stochastic unit root literature, e.g. Granger and Swanson (1995), where φ_t = exp(a_t) and a_t is an I(0) process with non-zero mean such that E[φ_t] = 1. The cointegrating parameter, a, could change seasonally, as explored by Franses (1992), or it could switch between a pair of regimes, as discussed in Granger and Teräsvirta (1993), for example. It is clear that if one just looks at the error-correction equation, and if φ_t does not vary far from unity, then if the γ’s are found to appear to vary over time it will be difficult to determine exactly where in the generating process this time variation originates. It should be emphasized that the bivariate set-up here is not the most general possible, as (6) can contain further lags, and as x_t, y_t are linear functions of both z_t and w_t, for example. However, the basic ideas of cointegration are preserved even though w_t is not necessarily a standard I(1) variable and z_t is not necessarily a standard I(0) variable.
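As an illustration of this generating mechanism, the following sketch simulates (6) with a stochastic-unit-root w_t of the kind just mentioned and rebuilds x_t, y_t from the components; all parameter values are invented for the example, and are held fixed over time for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400
lam, a_coef, c1 = 0.8, 1.0, 0.5          # illustrative constants
c2 = 1.0 + c1 * a_coef
sigma_a = 0.02                           # spread of a_t in phi_t = exp(a_t)

z = np.zeros(T)                          # mean-reverting component
w = np.zeros(T)                          # stochastic unit root component, E[phi_t] = 1
for t in range(1, T):
    phi_t = np.exp(rng.normal(-0.5 * sigma_a ** 2, sigma_a))
    z[t] = lam * z[t - 1] + rng.standard_normal()
    w[t] = phi_t * w[t - 1] + rng.standard_normal()

x = c2 * z + a_coef * w                  # observables built from the components
y = -c1 * z + w

# The combination x - a*y removes w and is proportional to the short memory z
print(np.allclose(x - a_coef * y, (c2 + a_coef * c1) * z))
```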
3. NONLINEAR GENERALIZATIONS
Suppose that x t is a vector of n components, each of which is I(1) and that
z_t = α′x_t is a vector of r I(0) components with zero means. An interpretation that has become almost standard has α′x_t = 0 determining an attractor or “equilibrium” for the system, so that ‖z_t‖ is a measure of the extent to which the system is out of equilibrium, for some norm. If, in some sense, the economy “prefers” z_t to be small, there must be associated costs with non-zero values of z_t. Consider the myopic cost function

C_{t+1} = Σ_{j=1}^{r} 2 G_j(θ_j′ z_{t+1}) + Σ_{k=1}^{n} λ_k (Δx_{k,t+1})²            (7)
where x_{t+1} is chosen to minimize J = E_t[C_{t+1} | I_t], where I_t is some information set available at time t, including x_{t−j}, j ≥ 0. The first terms in (7) are disequilibrium costs and the second group of terms are the costs associated with changing the x values. Substituting for z_t, θ_j′ z_{t+1} becomes φ_j′ x_{t+1}, say, so that ∂J/∂x_{k,t+1} = 0 gives

Δx_{k,t+1} = Σ_j γ_{k,j} g_j(I_t) + ε_{k,t+1},

where γ_{k,j} = −φ_{j,k}/λ_k and E[G_j′(φ_j′ x_{t+1}) | I_t] = g_j(I_t), with G′(w) = dG(w)/dw. Assuming that g_j(I_t) = g_j(δ_j′ z_t), one gets the nonlinear error-correction model

Δx_{k,t+1} = Σ_{j=1}^{r} γ_{kj} g_j(δ_j′ z_t) + ε_{k,t+1}                             (8)
which can be written

Δx_{t+1} = γ g(δ′ z_t) + ε_{t+1}                                                      (9)

with obvious notation. Note that there are n equations and, by construction, there are r factors on the right hand side. Thus there will be n − r independent linear combinations, w_t = γ⊥′ x_t, such that

Δw_{t+1} = γ⊥′ ε_{t+1},                                                               (10)

so that each element of w_t is a random walk. Multiplying (9) by α′ gives
Δz_{t+1} = α′γ g(δ′ z_t) + α′ε_{t+1}.                                                 (11)
It is seen that this particular nonlinear cost function leads to a nonlinear error-correction model (8); w_t, z_t are linear functions of x_t and w_t is a standard I(1) process, but z_t is generated by a non-linear vector AR(1) model. Clearly, constraints will be required on g(z) and on α′γ to ensure that z_t is I(0), or at least not dominated by I(1) components. Simple sufficient conditions for a form of stability are given by Mokkadem (1987) and Lasota and Mackey (1987). To get more dynamics into the error-correction equations, further costs associated with changes of the form (x_{k,t+1} − x_{k,t−j})² need to be added to the cost function. If the cost of change term in (7) is some positive function other than quadratic, then a much more complicated form of error-correction model results. This type of generalization is not considered further here. One obvious generalization is where x_{k,t} = f_k(y_{k,t}), k = 1, . . . , n, where the y_{k,t} are the observed series and the x_{k,t} are suitably transformed series that have the regular I(1) and cointegration properties, as discussed by Granger and Hallman (1991). Other generalizations are discussed in Swanson (1995). A related, but different situation occurs when the persistent component is a growth process. A positive series W_t will be called a growth process if

Prob[W_{t+k}/W_t > 1] ≥ ½

with k > 0, and, as t → ∞, Prob(W_t → ∞) = 1. Examples are: (i) ΔW_t = m(t) + e_t with m(t) a deterministic increasing trend, so that m(t + k)/m(t) ≥ 1, k > 0; (ii) Δ log W_t = a + e_t, a > 0; and (iii) ΔW_t = g(W_{t−1}) + e_t with g(W) > 0, where in each case e_t = h_t ε_t, ε_t is i.i.d. with mean zero and h_t can be a stochastic or deterministic heteroskedasticity term. In these examples W_t will be a growth process provided h_t does not grow too rapidly. For the third example, Granger, Inoue and Morin (1995) provide a full discussion. A pair of growth processes W_{1t}, W_{2t} will have W_{1t} “dominating” W_{2t} if W_{2t}/W_{1t} → 0 in probability as t → ∞. In example (ii), if W_{1t} has parameter a_1 in its generation and W_{2t} has a_2, with a_1 > a_2 > 0, then W_{1t} will dominate W_{2t}. Clearly a growth process will always dominate a non-growth one. To illustrate some characteristics of growth processes, possible cointegrating relations between pairs of growth series X_t, Y_t and between log X_t and log Y_t, plus some other situations, are discussed in the remainder of this section. Upper case letters are used to denote growth processes, which are not usually I(1). The case considered will be where X_t, Y_t are given by
X_t = A_1 W_{1t} + A_2 W_{2t} + x̃_t,
Y_t = W_{1t} + ỹ_t,                                                                   (12)

where x̃_t, ỹ_t are both I(0), that is stationary or short-memory series, and W_{1t}, W_{2t} are each positive growth processes, being persistent series possibly containing deterministic elements. It is assumed that W_{1t} dominates W_{2t} and that A_1 and A_2 are both positive. Taking logs of (12) and assuming that t is large enough so that W_{2t}/W_{1t} is negligible gives

log X_t = log A_1 + log W_{1t} + log(1 + x̃_t/(A_1 W_{1t})),
log Y_t = log W_{1t} + log(1 + ỹ_t/W_{1t}).                                          (13)
A pair of growth series X_t, Y_t will be said to be cointegrated if there is a linear combination, X_t − A Y_t, which is not a growth process. There are two cases: (i) if A_2 = 0, so W_{2t} does not enter X_t, then both the levels X_t, Y_t and log X_t, log Y_t are cointegrated, with cointegrating vectors (1, −A_1) and (1, −1) respectively; (ii) if A_2 ≠ 0, then log X_t, log Y_t are cointegrated but not X_t, Y_t. If a rather more general form of (13) had been used, with W_{2t} also entering the Y_t equation, more possibilities occur but no extra insights result. The residual terms in (13) need not disappear asymptotically if one can write x̃_t = W_{1t} x*_t, where x*_t is I(0), and similarly for ỹ_t. Some consequences of this example are: (i) if log X_t, log Y_t are cointegrated then X_t, Y_t may or may not be; it is not possible for X_t, Y_t to be cointegrated but log X_t, log Y_t not to be. More generally, it may be true that X_t^q, Y_t^q are cointegrated for q < q_0 but not for q > q_0. (ii) If one hopes to “find” cointegration it is more likely to be found using log variables. The reverse is that if cointegration is found between levels, then a more stringent condition has been passed.
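A small simulation makes the distinction concrete. The sketch below generates case (ii): with A₂ ≠ 0 the levels drift apart while the logs stay tied together; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
# Two growth processes, W1 dominating W2 (example (ii): log growth rates a1 > a2 > 0)
W1 = np.exp(np.cumsum(0.02 + 0.01 * rng.standard_normal(T)))
W2 = np.exp(np.cumsum(0.005 + 0.01 * rng.standard_normal(T)))

A1, A2 = 2.0, 1.0                 # A2 != 0: case (ii) of the text
X = A1 * W1 + A2 * W2 + rng.standard_normal(T)   # x-tilde, y-tilde stationary
Y = W1 + rng.standard_normal(T)

# log X - log Y settles down near log A1, while X - A1*Y keeps growing with W2
print("last values of log X - log Y:", np.round(np.log(X[-3:]) - np.log(Y[-3:]), 3))
print("last values of X - A1*Y     :", np.round(X[-3:] - A1 * Y[-3:], 1))
```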
4. CURRENT INTERPRETATIONS
Initially, applied papers just asked if a pair of series were cointegrated. Then papers considered small systems and asked how many cointegrations were found, and if the cointegrations could be interpreted in terms of known economic laws. Only later did it become standard practice to specify and examine the error-correction model, even though this is more fundamental. Cointegration is just a property, whereas the errorcorrection (EC) system is a possible data generating mechanism. Once this mechanism is known, the common trends, wt, can be determined,
with some possible implied long-run non-causalities, and also the EC generating mechanism for Δx_t can be transformed into a VAR in z_t and Δw_t. As these latter variables are not identified, it may be possible to apply linear transformations to them in order to generate simplified forms of the VAR which have interesting and useful interpretations. Once the z_t and w_t variables are known, the vector x_t, and thus each of its components, can be decomposed into its permanent and transitory components, as in (4). Although exchanging n variables for 2n not linearly independent ones may not seem much of an accomplishment, these components can be directly used to test equilibrium theories, to test the effects of structural breaks, and possibly to suggest conditional long-run forecasts of the “if investment grows at 1% then . . .” variety (using permanent components) and to examine seasonal adjustments, short-term forecasts, and leading indicators (using transitory components). One use of this approach is for the amalgamation of cointegration-type studies across different sectors of an economy. Suppose that x_{1,t} and x_{2,t} are two vectors of economic variables from different sectors that have been analyzed and modeled separately. Assume that w_{1,t} and w_{2,t} are the common trends found in the two sectors, and suppose that tests indicate that x_{1,t} and x_{2,t} are not cointegrated with each other. In this case the two sectors may be said to “separate in the long-run” and the cointegrations z_{1,t} and z_{2,t} found in the analyses of x_{1,t} and x_{2,t} individually are, theoretically, all that would be found if these variables were analyzed together as a complete system. However, even if there is long-run separation there may be short-run relationships. For example the error-correction model may take the form

Δx_{1,t} = γ_1 z_{1,t−1} + γ_2 z_{2,t−1} + lags of Δx_{1,t} + innovations,            (14)
so that disequilibrium errors from one sector may enter the errorcorrection equations of another sector. These questions have been studied in Konishi and Granger (1993) and Konishi, Ramey and Granger (1995). It has been found that the z’s from one sector offer an efficient mechanism for transferring short-run information into another sector’s error-correction model. (See for example, Kozicki (1994) who used z’s from real macroeconomic variables to help explain interest rate spreads.) One problem that appears to occur in practice and that has important implications whenever cointegrations are interpreted, is that although a zt may seem to be as I(0), in that it has short memory, it often does not have a simple, tight, attractor of the kind assumed to occur in Granger (1986). If a zt starts from a high positive value, say, and starts to fall, it often does not appear to slow down around the attractor at zero but rapidly continues through it. An example, kindly supplied by Dr. Gawon Yoon, is shown in Figure 16.1. Using Johansen’s technique (as described in his book) on three major quarterly U.S. series, Y = income, C = consumption, and INV = investment for the period 1959:1–1994:1 (see
Figure 16.1. Plot of Error Correction Term: Z = Y - INV (Z is the standardized error correction term from a VEC(2) model estimated using Johansen’s method.)
below), two cointegrations were found. The one shown is the more volatile, which is essentially Z = Y − INV. During the two “oil-shock” recessions this variable (which has been normalized) is seen to start from a high value and to proceed to a large negative value with no hesitation around zero. (The transitory components of the three variables behave similarly, as this z is an important part of the term.) Such behavior would be consistent with a broad attractor, with the economy preferring to be in the attractor. However, the attractor is now a band rather than just a line, as mentioned in Granger (1993). It can be represented by a non-linear error-correction term in which γ z_{t−1} is replaced by γ g(z_{t−1}), where

g(z) = 0,            −z_0 < z < z_0,
     = a(z − z_0),   z ≥ z_0,  a > 0,                                                 (15)
with g(-z) = g (z) for example, although there is no particular reason why g(z) should be symmetric. Provided that there is mean reverting behav-
ior, the basic interpretation is unaltered. However, the need for nonlinear error-correction equations, and hence cost-functions, becomes clearer.

5. EXAMPLE OF NONLINEAR ERROR-CORRECTION

In this section a summary is given of an extension of the study by King, Plosser, Stock and Watson (1991) (henceforth KPSW) using an updated data set and considering some simple nonlinear possibilities. For the period 1959:1 to 1994:1, quarterly data for six U.S. macro variables were considered:
C: real per capita consumption expenditures (log);
Y: real per capita “private” gross national product (log);
INV: real per capita gross private domestic fixed investment (log);
M: real balances, the log of M2 per capita minus the log of the implicit price deflator;
R: nominal interest rate, 3-month U.S. Treasury bill rate;
INF: price inflation (measured as an annual percentage).
A detailed description of the data is given in KPSW (following KPSW, we assumed that C, Y, and INV can be characterized as I(1) processes with drift, while M, R, and INF are I(1) processes without drift). Using 1959:1-1985:4 as the in-sample period, and retaining the data from 1986:1-1994:1 for ex-post forecast evaluations, three cointegrations were found which were essentially

Z1 = C − Y + 0.01 INF
Z2 = INV − Y + 0.02 INF
Z3 = M − Y + 0.01 R,

which are similar to those found by KPSW. Error-correction equations were estimated for each of the six variables using a constant, two lags of every differenced variable and the three Z’s lagged once, giving 16 coefficients per equation. In all, nearly one hundred coefficients were estimated, which are too many to reproduce here. In order to examine the system for possible evidence of nonlinearity, each Z term was replaced by Z⁺ and Z⁻, where

Z⁺ = Z if Z ≥ 0, and = 0 otherwise,
Z⁻ = Z − Z⁺.

Figure 16.2 shows the significant (i.e. p-value ≤ 0.05) cases, where a Z enters the error-correction equation. Thus Z_{1,t−1} (i.e. C − Y) only affects ΔINF and ΔY. ΔR is only affected by Z_{2,t−1} (i.e. INV − Y) and ΔM is only
Figure 16.2. Linear and Nonlinear Causation from Z’s to Dependent Variables (Causation from Z’s to changes of dependent variables is depicted with arrows. Solid lines are associated with p-values of 0.05 or less, while dotted lines denote significance at the 10% level. A* indicates possible evidence of a nonlinear relationship at the 5% level).
affected by any Z3,t-1. The dotted line is for a p-value less than 0.10. A star on a line indicates that a non-linear term is involved when Z+ and Z- are used in the error-correction equations. All of this particular type of nonlinearity occurs in connection with DINV, DY and DM. In particular, Z1 becomes marginally significant when Z1+ is introduced. Z3 is clearly significant by itself (t = 2.55), but when Z3+ is introduced it becomes significant (t = 2.20), while Z3 is no longer significant (t = -0.51). The nonlinear error-correction equations for DC, DY, DINV, and DM all fit better in terms of adjusted R2 and log likelihood. Also fewer lagged endogenous variables enter significantly into the nonlinear equations than into the linear equations. In this sense, the nonlinear equations are more parsimonious than their linear counterparts. (In the linear equations, the Z’s are only allowed to enter linearly.) Interestingly, while the nonlinear equations for DC, DY, DINV, and DM produce superior fitting equations in-sample, the DY, DR and DINF nonlinear alternatives are superior based on ex-post evidence, using the 1986:1–1994:1 period, as shown in Table 1, Panel (A) (for the linear models) and Panel (B) (for the nonlinear models, denoted nonlinear 1).
A different type of non-linearity was considered by defining

D_t = 1 if the business cycle is in a down swing (peak → trough),
    = 0 otherwise,

and then generating the variables DZ and (1 − D)Z. The peaks and troughs correspond to the NBER turning-point dates. An excellent overview of NBER business-cycle dating procedures is given by Zarnowitz and Moore (1986). For the period 1959:1 to 1985:4 it was found that

ΔĈ_t = 0.004 − 0.002 ΔR_{t−1} − 0.003 DZ_{1,t−1} − 0.006 DZ_{2,t−1} + 0.006 DZ_{3,t−1}      (16)
      (t = 3.1)  (−2.0)        (−1.13)            (−2.5)             (2.7)

with other terms being insignificant. One DZ term was also significant in each of the ΔINV and ΔR equations. For ΔC, this particular nonlinearity appears to produce a superior fitting equation in-sample and, using an out-of-sample forecast period of 1986:1-1994:1, (16) forecasts better for 4 of 6 series when compared with linear models. Results for all six series are given in Table 16.1, Panel (A) (for linear models) and Panel (B) (for nonlinear models, denoted nonlinear 2). So far, the nonlinearities examined have been of a type other than in the proposed model, (8). We now consider nonlinear equations as in (8) with
g_j(δ_j′ z_t) = (1 + exp{−δ_j′ z_t})^{-1} − ½,    j = 1, 2, 3.                        (17)

Thus, g(·) is the logistic cumulative distribution function shifted so that g(0) = 0, where z_t′ = (Z_{1,t}, Z_{2,t}, Z_{3,t}). The equations were estimated using nonlinear least squares. Interestingly, all 6 equations fit better in sample than their linear counterparts when one nonlinear error-correction term was used in place of all three linear error-correction terms, as evidenced in Panel (A) of Table 16.1. The greatest in-sample improvement was seen in the ΔC equation, where it was found that

ΔĈ_t = 0.003 − 0.002 ΔR_{t−1} + 0.005 g(δ̂′ z_t),
      (t = 3.3)  (−2.4)         (2.8)

where δ̂′ = (9.8, −3.2, 24.3). The money equation also showed marked improvement, and was found to be

ΔM̂_t = 0.443 ΔM_{t−1} − 0.004 ΔR_{t−1} + 0.002 ΔINF_{t−1} + 0.009 g(δ̂′ z_t),
       (3.3)            (−3.9)           (2.6)              (2.7)

where δ̂′ = (−1.4, −3.5, −2.0). Table 16.1, Panel (A) lists some summary measures of the out-of-sample forecasting ability of these nonlinear equations, and the usual linear error-correction equations. The nonlinear equations perform better than
Table 16.1 Comparison of linear versus nonlinear models¹

Panel (A)

           R²                  Log Likelihood       AIC                  SIC                  RMSE
Variable   Linear  Nonlinear   Linear   Nonlinear   Linear   Nonlinear   Linear   Nonlinear   Linear    Nonlinear
ΔC         0.203   0.249       387.2    394.1       -9.909   -9.982      -9.504   -9.502      0.00577   0.00554
ΔINV       0.389   0.403       263.9    266.6       -7.559   -7.553      -7.155   -7.073      0.0220    0.0215
ΔM         0.561   0.572       383.1    388.0       -9.831   -9.866      -9.426   -9.386      0.0178    0.0140
ΔY         0.340   0.365       345.0    349.0       -9.104   -9.124      -8.699   -8.644      0.00814   0.00714
ΔR         0.270   0.290       -120.7   -117.9      -0.234   -0.229       0.170    0.251      0.529     0.534
ΔINF       0.378   0.386       -195.7   -194.5       1.194    1.229       1.599    1.710      1.414     1.542

Panel (B)

           R²                          Log Likelihood              AIC                         SIC                         RMSE
Variable   Nonlinear 1  Nonlinear 2    Nonlinear 1  Nonlinear 2    Nonlinear 1  Nonlinear 2    Nonlinear 1  Nonlinear 2    Nonlinear 1  Nonlinear 2
ΔC         0.215        0.276          389.2        389.8          -9.901       -9.984         -9.9421      -9.630         0.00595      0.00563
ΔINV       0.426        0.399          264.0        269.0          -7.599       -7.559         -7.119       -7.245         0.0265       0.0218
ΔM         0.578        0.586          383.3        387.0          -9.847       -9.871         -9.367       -9.517         0.0201       0.0159
ΔY         0.358        0.377          345.0        347.4          -9.093       -9.143         -8.613       -8.789         0.00770      0.00943
ΔR         0.255        0.283          120.5        -120.0         -0.191       -0.277          0.290        0.077         0.481        0.534
ΔINF       0.362        0.370          -196.1       -195.2          1.243        1.164          1.723        1.518         1.396        1.406

¹ R², log likelihood, Akaike Information Criterion (AIC) and Schwarz Information Criterion (SIC) are calculated in-sample using the period 1959:1-1985:4. Out-of-sample root mean squared forecast errors (RMSE) are calculated using one-step-ahead forecasts for the period 1986:1-1994:1. Variable corresponds to the dependent variable in each of six equations calculated using two lags of each variable and various linear and nonlinear error-correction terms. In Panel (A), the linear models use the error-correction terms in the usual linear way, while the nonlinear models replace the linear error-correction terms with one nonlinear error-correction term as in (17) above, which is calculated using nonlinear least squares. In Panel (B), the results for the other two nonlinear models are given. The model Nonlinear 1 corresponds to the case where Z⁺ and Z⁻ are used in place of Z from the linear model, while Nonlinear 2 corresponds to the nonlinear case discussed above where regression slope dummy variables are used in place of the usual linear error-correcting terms.
the linear equations for 4 of 6 variables based on root-mean squared error (RMSE). The largest gains made by the nonlinear alternative seem to be for the ΔC and ΔY equations, which are superior to all other linear and nonlinear models discussed above, based on the RMSE. In all, our limited analysis suggests that there is some evidence of nonlinear error-correction. However, detailed empirical analyses need to be carried out on systems of equations as well as on sectors of the economy before the role of nonlinearities of the type examined here can be unambiguously determined.

6. EARLY WARNINGS, FRAGILITY AND THE FUTURE

Modeling a sector of an economy may be broken into two parts. The first part considers the inter-relationships between the variables, or the “structure” of the economy. The second part examines the effect on this structure of “exogenous” shocks, where exogenous is taken to have its old-fashioned meaning of “coming from outside” the sector. Structural change will include both changes in policy variables and also parameter movements which are attributable to changes in taste or technology, say. The possibility that structural changes affect the frequency and distribution of exogenous shocks is ruled out in our example, although it clearly could occur. In a linear world, an exogenous shock has the same effect at any state of the economy, but this may not be the case with a non-linear error-correction equation. Consider an equation of the form

ΔX_t = g(Z_{t−1}) + (1 + a[h(Z_{t−1})]) e_t                                           (18)
where Z_t is an error-correction term and e_t is an exogenous shock, for simplicity. First consider the case with a = 0, so that no heteroskedasticity is present in (18). Assume that g(z) is given by (15) with a large, so that g(z) is small over some central band but takes large values for |z| outside this band. If the exogenous shock is small but |z| is large, X_t will change substantially. If the shock is large, but opposite in sign to g(z), then X_t may change very little. Finally, if the shock and g(z) are of a similar sign and both are large, then X_t will change greatly and be fragile. This fragility occurs particularly when |z| is large in this example. Now consider the case where g(z) is always small, a is non-zero, and a h(z) > 0 is small for |z| in (0, Z_0), but is very large for |z| outside the band (0, Z_0). Now a small shock can be amplified if Z_{t−1} happens to be large in magnitude. An analogy can be given by considering an avalanche. A snow pack on a hill side can accumulate, and remain stable, until a certain depth/temperature combination occurs. At this point, a shock in the form of a gunshot will produce instability and thus an avalanche. In all other circumstances, the gunshot will have absolutely no impact on the hillside. Thus the effect of the shock interacts with the
measure of the disequilibrium, in our example. Also, different kinds of shocks could have different effects. A sudden reduction in temperature could increase snow stability while an increase in temperature could make the snow pack less stable – actually we are unsure of the true mechanics of avalanches as this account probably shows. The above example suggests that consideration of a collection of disequilibrium errors, or Z’s, could indicate whether an economy, or a major sector of an economy, is fragile or not. If the economy is far from equilibrium in several important directions it could be considered to be fragile, as an exogenous shock (or even a large endogenous one) could produce very large changes in the Z’s or in-transitory components, and hence in the short-run volatility of major variables. It may be worthwhile to examine fragility indices for major macro variables, policy variables, the labor sector, the financial sector, the international sector, and various other sectors of the economy which are of interest. However, discussion and empirical investigation is required before deciding whether such indices are likely to be useful, and for determining how they should be constructed. Error-correction terms and transitory components will be amongst those variables measuring the economy which will be most likely to react to exogenous shocks. Presumably, there will be a subset of variables that are publicly available, and that react quickest to these shocks. However, as there are several different types of exogenous shocks, the same set of variables will not always react first to a new shock. For example, assume that there is a “core” economy that will eventually be affected by any shock. The idea is to specify a group of “early warning variables”, that between them provide warnings of new shocks plus estimates of the delays between the effects of the shocks going from this outer set of variables to the core. Again, an obvious analogy exists in weather forecasting, before satellite images were available. Around a country, monitoring stations recorded weather changes and depending on which direction the new weather came from, some of these stations provided leading indicators for the bulk of the country. For the economy, some of the desirable features for early warning variables are that they are recorded frequently, weekly or at most monthly, that they have major transitory components and so can change in value rapidly, and that economic reasoning suggests that they should lead the macro economy. Some retail sales and financial variables, such as interest rate spreads, are obvious candidates. Once candidates for inclusion have been selected it should be possible to form nonlinear weighted averages of normalized variables, with weights that are zero unless large enough changes occur in the indicator. Clearly empirical investigation is required so that the forecasting ability of the early warning variables can be determined. Standard impulse response analysis is unlikely to be helpful as it attempts to identify shocks with indi-
vidual variables, and inevitably has difficulties doing so. Also, exogenous shocks are likely to affect many variables, but with differing lags. One model which may have some potential is a dynamic factor analysis model. However, such models are difficult to implement in practice. As the definition of cointegration becomes relaxed, allowing for generalizations of I(0), I(1), and I(d) variables to be utilized by using timevarying parameter and nonlinear in mean generating mechanisms, for example more helpful and presumably more innovative interpretations of cointegration based analysis should arise. Economists may get used to thinking of a sector of an economy not in terms of the basic variables x t but rather in terms of the derived disequilibrium measures, z t and the common trends wt. These variables can perhaps be made more useful by some suitable normalizations or rotations suggested either by empirical properties of the data or by statistical or economic theory. The move from linear to nonlinear specifications has to be justified not only from apparent beliefs of the actuality in nonlinearity in the economy or from empirical evidence of its existence, but also from important uses, other than forecasting, which rely on non-linearity in the system.
REFERENCES

Banerjee, A., J. Dolado, J.W. Galbraith and D.F. Hendry (1993), Cointegration, Error-Correction and the Econometric Analysis of Non-Stationary Data, Oxford University Press.
Beveridge, Stephen and Charles R. Nelson (1981), “A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the ‘Business Cycle’,” Journal of Monetary Economics, 7, 151-174.
Engle, R.F. and C.W.J. Granger (1991), Long-run Economic Relationships: Readings in Cointegration, Oxford University Press.
Franses, Philip Hans (1992), “A Multivariate Approach to Modeling Univariate Seasonal Time Series,” Discussion Paper, Econometric Institute, Erasmus University, Rotterdam.
Gonzalo, J. and C.W.J. Granger (1995), “Estimation of Common Long Memory Components in Cointegrated Systems”, to appear.
Granger, C.W.J. (1986), “Developments in the Study of Cointegrated Economic Variables”, Oxford Bulletin of Economics and Statistics, 68, 213-228.
Granger, C.W.J. (1993), “What Are We Learning About the Long Run?”, Economic Journal, 103, 307-317.
Granger, C.W.J., T. Inoue and N. Morin (1995), “Non-linear Stochastic Trends,” to appear.
Granger, C.W.J. and J. Hallman (1991), “Long Memory Processes with Attractors”, Oxford Bulletin of Economics and Statistics, 53, 11-26.
Granger, C.W.J. and Jin-Lung Lin (1995), “Causality in the Long Run”, to appear in Econometric Theory.
Granger, C.W.J. and N.R. Swanson (1995), “An Introduction to Stochastic Unit Root Processes”, UCSD Working Paper.
Granger, C.W.J. and T. Teräsvirta (1993), Modeling Nonlinear Economic Relationships, Oxford University Press.
Hatanaka, M. (1995), Time Series Based Econometrics: Unit Roots and Cointegrations, Oxford University Press.
Johansen, S. (1995), Likelihood Based Inference on Cointegration in the Vector Autoregressive Model, Oxford University Press.
King, Robert G., Charles I. Plosser, James H. Stock and Mark W. Watson (1991), “Stochastic Trends and Economic Fluctuations,” American Economic Review, 81, 819-840.
Konishi, T. and C.W.J. Granger (1993), “Separation in Cointegrated Systems”, UCSD Working Paper.
Konishi, T., V. Ramey and C.W.J. Granger (1995), “Stochastic Trends and Short-Run Relationships Between Financial Variables and Real Activity”, to appear.
Lasota, A. and M.C. Mackey (1987), “Noise and Statistical Periodicity,” Physica, 28D, 143-154.
Mokkadem, A. (1987), “Sur un modèle autorégressif non linéaire, ergodicité et ergodicité géométrique,” Journal of Time Series Analysis, 18, 195-204.
Swanson, N.R. (1995), “LM Tests and Nonlinear Error-Correction in Economic Time Series,” Working Paper, Economics Department, Pennsylvania State University.
Warne, A. (1991), “A Common Trends Model: Identification, Estimation and Asymptotics”, Working Paper, Economics Department, University of Stockholm.
Zarnowitz, V. and G.H. Moore (1977), “The Recession and Recovery of 1973-1976,” Explorations in Economic Research, 4, 471-577.
PART THREE
LONG MEMORY
CHAPTER 17
An Introduction to Long-Memory Time Series Models and Fractional Differencing*
C. W. J. Granger and Roselyne Joyeux

* Journal of Time Series Analysis, 1, 1980, 15-29.
Abstract

The idea of fractional differencing is introduced in terms of the infinite filter that corresponds to the expansion of (1 − B)^d. When the filter is applied to white noise, a class of time series is generated with distinctive properties, particularly in the very low frequencies, and provides potentially useful long-memory forecasting properties. Such models are shown possibly to arise from aggregation of independent components. Generation and estimation of these models are considered and applications on generated and real data presented.

Keywords: Fractional differencing, long-memory, integrated models.

1. ON DIFFERENCING TIME SERIES
It has become standard practice for time series analysts to consider differencing their series “to achieve stationarity.” By this they mean that one differences to achieve a form of the series that can be identified as an ARMA model. If a series does need differencing to achieve this, it means that strictly the original, undifferenced series has infinite variance. There clearly can be problems when a variable with infinite variance is regressed on another such variable, using least squares techniques, as illustrated by Granger and Newbold (1974). A good recent survey of this topic is by Plosser and Schwert (1978). This has led time series analysts to suggest that econometricians should at least consider differencing their variables when building models. However, econometricians have been somewhat reluctant to accept this advice, believing that they may lose something of importance. Phrases such as differencing “zapping out the low frequency components” are used. At first sight the two viewpoints appear irreconcilable, but it will be seen that by considering a
general enough class of models, both sides of the controversy can be correct. Suppose that x_t is a series that, when differenced d times, gives the series y_t, which has an ARMA representation. x_t will then be called an integrated series, with parameter d, and denoted x_t ~ I(d). If y_t has spectrum f(ω), then x_t does not strictly possess a spectrum, but from filtering considerations the spectrum of x_t can be thought of as

f_x(ω) = |1 − z|^{−2d} f(ω),    ω ≠ 0,                                                (1)
where z = e^{−iω}. This follows by noting that differencing a series once multiplies its spectrum by |1 − z|² = 2(1 − cos ω). If y_t is strictly ARMA, then lim_{ω→0} f(ω) = c, where c is a constant. c is taken to be positive, as, if c = 0, this may be thought to be an indication that the series has been overdifferenced. It follows that

f_x(ω) ≈ cω^{−2d}    for ω small.
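This low-frequency behaviour is easy to see in a simulation. The sketch below builds a fractionally integrated series from the (truncated) MA weights of (1 − B)^{−d} and checks that the log-periodogram slope near the origin is roughly −2d; everything about the setup (d = 0.3, sample size, number of ordinates) is an illustrative choice.

```python
import numpy as np

def frac_int_weights(d, n):
    """MA weights of (1 - B)^(-d): psi_0 = 1, psi_j = psi_{j-1} (j - 1 + d) / j."""
    psi = np.ones(n)
    for j in range(1, n):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    return psi

rng = np.random.default_rng(4)
d, T = 0.3, 4096
eps = rng.standard_normal(2 * T)
x = np.convolve(eps, frac_int_weights(d, T), mode="valid")[:T]   # truncated filter

# Periodogram ordinates near zero should grow roughly like omega^(-2d)
freqs = np.fft.rfftfreq(T)[1:50] * 2 * np.pi
pgram = np.abs(np.fft.rfft(x - x.mean())[1:50]) ** 2 / (2 * np.pi * T)
slope = np.polyfit(np.log(freqs), np.log(pgram), 1)[0]
print("log-periodogram slope, roughly -2d:", slope)   # expect a value near -0.6
```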
Now consider the case where f_x(ω) is given by (1), but d is a fraction, 0 < d < 1. This corresponds to a filter a(B) = (1 − B)^d which, when applied to x_t, results in an ARMA series. It will be shown that if ½ ≤ d < 1, then x_t has infinite variance, and so the ordinary Box-Jenkins identification procedure will suggest that differencing is in order, but if x_t is differenced, the spectrum becomes

f_{Δx}(ω) = [2(1 − cos ω)]^{1−d} f(ω),
so that fDx(0) = 0, and an ARMA model with invertible moving average component is no longer completely appropriate. Thus, in this case, the time series analysts will suggest differencing to get finite variance, but if the series is differenced, its zero frequency component will be removed and the econometrician’s fears are realized. It seems that neither differencing nor not differencing is appropriate with data having the spectrum (1) with fractional d. In later sections, properties of these series are discussed and some open questions mentioned. It should be pointed out that if a series has spectrum of the form (1), with fractional d, then it is possible to select a model of the usual ARMA(p, d, q) type, with integer d, which will closely approximate this spectrum at all frequencies except those near zero. Thus, models using fractional d will not necessarily provide clearly superior shortrun forecasts, but they may give better longer-run forecasts where modeling the low-frequencies properly is vital. It will be seen that fractional d models have special long-memory properties which can give them extra potential in long-run forecasting situations. It is this possibility that makes consideration of single series of this class of interest. The generalized differencing and the solution of the difference-or-not controversy, together with the chance of obtaining superior relationships between series, is a further reason for believing that these models may be of importance. A
discussion of how fractional integrated models may arise is given in Granger (1980), and is summarized below. Long-memory models have been much considered by workers in the field of water resources. Good recent surveys are those by Lawrance and Kottegoda (1977) and Hipel and McLeod (1978). Many aspects of the models were first investigated by Mandelbrot (e.g., (1968), (1971)) in a series of papers. However, the fundamental reasoning underlying the long-memory models is quite different in these previous papers than that utilized here. The models that arise are not identical in details, and the statistical techniques used both differ and sometimes have different aims. However, it should be emphasized that many of the results to be reported have close parallels in this previous literature. The results presented below are closer in form to the classical time series approach and are, hopefully, easier to interpret. It should also be realized that this paper represents just those results achieved at the start of a much more detailed, and wider-ranging, investigation. 2.
TIME SERIES PROPERTIES
Consider a series x_t with spectrum

f(ω) = a(1 − cos ω)^{−d},                                                             (2)
where a is a positive constant. This is a series which, if differenced d times, will produce white noise. However, we now consider the case −1 < d < 1, but d ≠ 0, so that “fractional differencing” may be required. It will be assumed that x_t is derived from a linear filter applied to zero-mean white noise and that x_t has zero mean. The autocovariances, if they exist, will be given by

μ_τ = ∫₀^{2π} cos τω f(ω) dω = ∫₀^{π} a 2^{1−d} cos τω (sin ω/2)^{−2d} dω,

by noting that

(1 − cos ω) = 2(sin ω/2)².
Ú
p
0
sinn -1 x cos ax dx =
p cos ap 2 Ê n + a + 1 n - a + 1ˆ 2 n -1 ◊n◊B , Ë ¯ 2 2
and some algebra gives m t = a◊21+d sin(pd)
G(t + d) ◊G(1 - 2d) G(t + 1 + d)
provided - 1 < d
0. If a series is generated by the more general model, compared to (2),
326
C. W. J. Granger and R. Joyeux
x_t = (1 − B)^{−d} a′(B)ε_t, where 0 < a′(0) < ∞ and ε_t is white noise, results (4), (6) and (8) continue to hold for j large.
A filter of the form a(B) = (1 − B)^d,
which, using the previously introduced phrase, is an integrating filter of order −d, can also be called a fractional differencing operator. This is easily seen by taking d = ½, as then applying a(B) twice corresponds to an ordinary, full difference and thus applying it once gives a half, or fractional, difference. The idea of half differencing should not be confused with that of differencing over a half sampling interval, as the two concepts are quite unrelated. It is not clear at this time if integrated models with non-integer d occur in practice, and only extensive empirical research can resolve this issue. However, some aggregation results presented in Granger (1980) do suggest that these models may be expected to be relevant for actual economic variables. It is proved there, for example, that if x_{jt}, j = 1, . . . , N are a set of independent series, each generated by an AR(1) model, so that x_{jt} = α_j x_{j,t−1} + ε_{jt},
j = 1, . . . , N
where the ε_{jt} are independent, zero-mean white noise and if the α_j's are values independently drawn from a beta distribution on (0, 1), where

dF(α) = [2/B(p, q)] α^{2p−1}(1 − α²)^{q−1} dα,   0 ≤ α ≤ 1, with p > 0, q > 0,
then, if x̄_t = Σ_{j=1}^{N} x_{j,t}, for N large, x̄_t ~ I(1 − q/2).
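As a rough illustration of this aggregation result (our own sketch, not part of the original paper; the sample sizes, the choice p = 1, q = 1.5, and all names are assumptions), one can simulate many independent AR(1) series whose coefficients have beta-distributed squares and watch how slowly the autocorrelations of the aggregate decay; with q = 1.5 the result above suggests an order of integration of about 1 − q/2 = 0.25.

```python
# Sketch: aggregate of AR(1) micro-series with beta-type coefficients.
import numpy as np

rng = np.random.default_rng(0)
N, T, p, q = 2000, 3000, 1.0, 1.5
alpha = np.sqrt(rng.beta(p, q, size=N))     # alpha^2 ~ Beta(p, q), so dF(alpha) is as in the text
x = np.zeros(N)
agg = np.empty(T)
for t in range(T):
    x = alpha * x + rng.standard_normal(N)  # advance every micro AR(1) one step
    agg[t] = x.sum()

agg = agg[1000:] - agg[1000:].mean()        # crude burn-in; components with alpha near 1 settle slowly
acf = [np.corrcoef(agg[:-k], agg[k:])[0, 1] for k in (1, 5, 10, 25, 50)]
print(np.round(acf, 3))                     # decays far more slowly than any single AR(1)
```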
The shape of the distribution from which the α's are drawn is only critical near 1 for this result to hold. A more general result arises from considering x_{jt} generated by

x_{jt} = α_j x_{j,t−1} + y_{j,t} + β_j W_t + ε_{jt},

where the series y_{j,t}, W_t and ε_{jt} are all independent of each other for all j, the ε_{jt} are white noise with variances σ_j², and y_{j,t} has spectrum f_y(ω, θ_j) and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system, and the various parameters α, θ_j, β and σ² are all assumed to be drawn from independent populations, with the distribution function for the α's of the beta form given above; thus each x_{jt} is generated by an AR(1) model, plus an independent causal series y_{j,t} and a common factor causal series W_t. With these assumptions, it is shown that
(i) x̄_t ~ I(d_x), where d_x is the largest of the three terms 1 − q/2 + d_y, 1 − q + d_w and 1 − q/2, where ȳ_t ~ I(d_y), W_t ~ I(d_w), and (ii) if a transfer function model of the form x̄_t = a_1(B)ȳ_t + a_2(B)W_t + ε_t is fitted, then both a_1(B) and a_2(B) are integrating filters of order 1 − q. In Granger (1980) it was shown that integrated models may arise from micro-feedback models and also from large-scale dynamic econometric models that are not too sparse. Thus, at the very least, it seems that integrated series can occur from realistic aggregation situations, and so do deserve further consideration.
There are a number of ways in which data can be generated to have the long-memory properties, at least to a good order of approximation. Mandelbrot (1971) has come down heavily in favour of utilizing aggregates of the form just discussed and has conducted simulation studies to show that series with the appropriate properties are achieved. An alternative technique, which appears to be less efficient but to have easier interpretation, has been proposed by Hipel and McLeod (1978). Suppose that you are interested in generating a series with autocorrelations ρ_τ, τ = 0, 1, . . . , with ρ_0 = 1. Define the N × N correlation matrix C_N = [ρ_{i−j}], and let this have Cholesky decomposition C_N = M·M^T, where T denotes transpose and M = [m_{ij}] is an N × N lower triangular matrix. Then it is easily shown that if ε_t, t = 1, . . . , N are terms in a Gaussian white noise series with zero mean and unit variance, then the series

y_t = Σ_{i=1}^{t} m_{ti} ε_i   (11)
will have the autocorrelations ρ_τ. The generating process is seen to be non-stationary and is expensive for large N values. However, by using ρ_τ given by (3), a series with long-memory properties is generated. An obvious alternative is to use a long autoregressive representation, of the form

x_t + Σ_{j=1}^{m} a_{j,m} x_{t−j} = ε_t,   (12)
where et is white noise and the aj,m are generated by solving the first m Walker–Yule equations with the theoretical values of rt given by (3), i.e.,
(a_{1,m}, a_{2,m}, . . . , a_{m,m})′ = −C_m^{−1} (ρ_1, ρ_2, . . . , ρ_m)′,   (13)

and a_{j,m} = 0 for j > m.
Clearly, m will have to be fairly large for a reasonable approximation to be achieved. The obvious problem with this technique is what starting-up values to use for the y's. If the starting-up values do not belong to a long-memory process, such as using a set of zeros, the long-memory property of the model means that it will take a long time to forget this incorrect starting-up procedure. To generate data, we decided to combine the Hipel–McLeod and the autoregressive methods. The Hipel–McLeod method was used to generate N observations which are then used as start-up values for the autoregressive equation, taking m = N, and then n values of y_t are generated. The methods just described are appropriate only for −1 < d < ½, d ≠ 0. To generate data y_t with ½ ≤ d < 1, x_t is first formed for −½ < d < 0 and then y_t generated by y_t = x_t + y_{t−1}. We have had a little experience with this generating procedure, with d = .25 and d = .45. Using N = 50 and n = 400 or 1000, the estimated autocorrelations did not compare well with the theoretical ones, but using N = 100 and n = 100 or 400 the estimated and theoretical autocorrelations matched closely for d = .25 but were less good for d = .45. Clearly, more study is required to determine the comparative advantages of alternative generation methods and the properties of the series produced.
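The combined start-up scheme just described can be sketched in a few lines of code. This is our own illustration under stated assumptions, not the authors' program: it uses the theoretical autocorrelations ρ_τ = Γ(1 − d)Γ(τ + d)/[Γ(d)Γ(τ + 1 − d)] referred to as (3) in the text, a Cholesky factor for the N start-up values, and the Walker–Yule solution for the AR(N) continuation; all function names are hypothetical.

```python
# Sketch: generate a long-memory series (0 < d < 1/2) by the combined method.
import numpy as np
from scipy.special import gammaln
from scipy.linalg import cholesky, solve_toeplitz

def frac_acf(d, nlags):
    # rho_tau = Gamma(1-d)Gamma(tau+d) / (Gamma(d)Gamma(tau+1-d)), tau = 0, 1, ..., nlags
    tau = np.arange(nlags + 1)
    return np.exp(gammaln(1 - d) + gammaln(tau + d) - gammaln(d) - gammaln(tau + 1 - d))

def generate_long_memory(d, n, N=100, seed=0):
    rng = np.random.default_rng(seed)
    rho = frac_acf(d, N + n)
    # Hipel-McLeod start-up: y = M e with C_N = M M', C_N = [rho_{|i-j|}]
    C = rho[np.abs(np.subtract.outer(np.arange(N), np.arange(N)))]
    y = list(cholesky(C, lower=True) @ rng.standard_normal(N))
    # AR(N) continuation: coefficients solved from the first N Walker-Yule equations
    phi = solve_toeplitz(rho[:N], rho[1:N + 1])
    for _ in range(n):
        y.append(float(phi @ np.array(y[-1:-N - 1:-1])) + rng.standard_normal())
    return np.array(y[N:])

x = generate_long_memory(d=0.25, n=400)   # as in the experiment described above
```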
4. FORECASTING AND ESTIMATION OF d
The obvious approach to forecasting models with spectra given by (2) is to find an ARMA or ARIMA model which has a spectrum approximating this form. Unfortunately, this is a rather difficult problem, as functions of the form (2), when expressed in z = e^{−iω}, are not analytic in z and so standard approximation theory using rational functions does not apply. Whilst realizing that much deeper study is required, in the initial stages of our investigations we have taken a very simple viewpoint, and used AR(m) models of the type discussed above, in equations (12) and (13). In practice, one would expect series possibly to have spectral shapes of form (2) at low frequencies but to have different shapes at other frequencies. These other shapes, which can perhaps be thought of as being generated by short-memory ARMA models, will be important for short-term forecasts but will be of less relevance for long-term forecasts. Thus, the AR(m) model may be useful for forecasting 10 or 20 steps ahead,
say, in a series with no seasonal component. If m is taken to be 50, for example, a rather high order model is being used, but it should be noted that the parameters of the model depend only on d, and so the model is seen to be highly parsimonious. The method will, nevertheless, require quite large amounts of data. To use this autoregressive forecasting model, the value of d is required. There are a variety of ways of estimating the essential parameter d. The water resource engineers use a particular re-scaled range variable which has little intuitive appeal (see, for instance, Lawrance and Kottegoda (1977)). Other techniques could be based on estimates of the logarithm of the spectrum at low frequencies. At this time we are taking a very pragmatic approach, by using ARd(m) models, with a grid of d values from −.9 to +.4, excluding d = 0, forming ten-step forecasts and then estimating the mean squared errors for each of the models with different d-values, together with white noise and random walk models. Some initial results that have been obtained are discussed in the following section.
The method we have used is clearly arbitrary and sub-optimal, as the following theory shows. Suppose that the observed series x_t has two uncorrelated components, x_t = y_t + z_t, where y_t is a "pure" long-memory series, having spectrum given exactly by (2), and z_t is a stationary standard short-memory model that can be represented by an ARMA(p, q) model with small p and q and with all roots not near unity. For large τ, the autocovariances of z_t will be negligible, and so

ρ_τ^x ≅ A ρ_τ^y,

where ρ_τ^y is given by (3) and

A = var y / var x.   (14)
To derive an autoregressive model of order m appropriate for long-run forecasting, the coefficients a_{jm} in (12) should be solved from the new Walker–Yule equations, in which the usual correlation matrix is replaced by the m × m matrix having units on the diagonal and elements Aρ_{|i−j|} off the diagonal. With a_m′ = (a_{1m}, a_{2m}, . . . , a_{mm}) and ρ_m′ = (ρ_1, ρ_2, . . . , ρ_m), the solution may be written

a_m = [(1 − A)I_m + AC_m]^{−1} ρ_m A.   (15)
where I_m is the m × m unit matrix and C_m is the autocorrelation matrix introduced in the previous section. Obviously, if A = 1, (15) becomes identical to (13). (15) may be rewritten

a_m = C_m^{−1}[I_m + D C_m^{−1}]^{−1} ρ_m,   (16)

where D = (1 − A)/A, and this can be expanded as

a_m = C_m^{−1}[I_m − D C_m^{−1} + (D²/2) C_m^{−2} + . . .] ρ_m.

Thus, the zero-order approximation, assuming D is small, is a_m^{(0)} = C_m^{−1} ρ_m, which is identical to (13). The first-order approximation is a_m^{(1)} = [C_m^{−1} − D C_m^{−2}] ρ_m, etc. There are now effectively two parameters to estimate, d and D. As stated before, in our preliminary investigation just zero-order approximations were used, and the relevance of this with real data needs investigation. The techniques discussed in this section only apply for the range −1 < d < .5, d ≠ 0. If d lies in the region .5 ≤ d < 1, a number of approaches could be taken; for instance, one could first difference and then get a series with −1 < d < 0, or one could apply the fractional differencing operator (1 − B)^{.5} and then get a series with 0 < d < .5. The first of these two suggestions is much the easier, but the second may provide a better estimate of the original d. Clearly, much more investigation is required. As an indication of the forecasting potential of the long-memory models, Table 17.1 shows the following quantities:
V(d) = variance of y_t = (1 − B)^{−d} ε_t, where ε_t is white noise with unit variance; y_t is thus a pure long-memory series, with spectrum given by (2) for all frequencies and with a = ½π;

S_N(d) = Σ_{j=0}^{N−1} b_j²(d),

which is the variance of the N-step forecast error using the optimal forecast for y_{n+N}, based on y_{n−j}, j ≥ 0, and where the b_j are the theoretical moving average coefficients given by (5); and

R_N²(d) = [V(d) − S_N(d)]/V(d),

which is a measure of the N-step forecastability of y_t.
Table 17.1 Forecasting properties of long-memory models.

1. Ten-step forecasts (N = 10)

 d      V(d)     S10(d)    R²10(d)
-.9     1.81     1.81      .369 E-05
-.8     1.648    1.648     .227 E-04
-.7     1.504    1.504     .745 E-04
-.6     1.380    1.380     .182 E-03
-.5     1.273    1.273     .365 E-03
-.4     1.183    1.182     .611 E-03
-.3     1.109    1.108     .840 E-03
-.2     1.052    1.051     .868 E-03
-.1     1.014    1.014     .485 E-03
 .1     1.019    1.017     .223 E-02
 .2     1.099    1.078     .185 E-01
 .3     1.316    1.204     .857 E-01
 .4     2.070    1.425     .312 E-00
 .5              1.791
 .6              2.376
 .7              3.289
 .8              4.691
 .9              6.816

2. Twenty-step forecasts (N = 20)

 d      V(d)     S20(d)    R²20(d)
-.9     1.81     1.81      .460 E-06
-.8     1.648    1.648     .331 E-05
-.7     1.504    1.504     .127 E-04
-.6     1.380    1.380     .362 E-04
-.5     1.273    1.273     .823 E-04
-.4     1.183    1.183     .164 E-03
-.3     1.109    1.109     .263 E-03
-.2     1.052    1.052     .315 E-03
-.1     1.014    1.014     .204 E-03
 .1     1.019    1.018     .126 E-02
 .2     1.099    1.085     .121 E-01
 .3     1.316    1.232     .645 E-01
 .4     2.070    1.510     .270 E-00
 .5              2.016
 .6              2.913
 .7              4.487
 .8              7.222
 .9             11.938
Clearly, this measure only applies for series with finite variance, and so V and R² are only defined for d < 0.5. The table shows these quantities for both ten- and twenty-step forecasts. It is seen that the variance decreases as d goes
from −.9 to −.1 and then increases again as d goes from .1 to .4. It might be noted that d = −1, which corresponds to a differenced white noise, has V(−1) = 2 and, of course, R_N(−1) = 0, N > 1. For d negative the amount of forecastability is low, but as d approaches .5 the results are much more impressive. For example, with d = .4, which corresponds to a finite variance model and would presumably be identified as low-order ARMA by the standard Box–Jenkins techniques, the table shows that the ten-step forecast error variance is 30% less than for forecasts using just the mean, and the twenty-step forecast error variance is 27% less than simple short-memory forecasting models would produce. It is clear that the long-memory models would be of greatest practical importance if the real world corresponded to d values around 0.5. A slightly curious feature of the table is that R_N(d) does not quite increase monotonically as d goes from −.9 to +.4, as there is a slight dip at d = −.1.
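The entries of Table 17.1 are easy to reproduce numerically. The sketch below is our own check, not part of the paper; it uses the theoretical moving average weights b_j = Γ(j + d)/[Γ(d)Γ(j + 1)] of the fractional filter and the standard closed form Γ(1 − 2d)/Γ(1 − d)² for the variance of the pure fractional process, which we assume here.

```python
# Sketch: V(d), S_N(d) and R^2_N(d) for the pure long-memory series.
import numpy as np
from scipy.special import gammaln, gamma

def b(j, d):
    # |b_j| = Gamma(j+d) / (Gamma(d) Gamma(j+1)); only squares are used below
    return np.exp(gammaln(j + d) - gammaln(d) - gammaln(j + 1))

def forecastability(d, N):
    V = gamma(1 - 2 * d) / gamma(1 - d) ** 2        # series variance, finite for d < 1/2
    S = float(np.sum(b(np.arange(N), d) ** 2))      # N-step forecast error variance
    return V, S, (V - S) / V

for d in (-0.4, -0.1, 0.1, 0.4):
    print(d, [round(v, 3) for v in forecastability(d, 10)])
# d = 0.4, N = 10 gives V = 2.070, S = 1.425, R^2 = 0.312, in line with Table 17.1.
```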
5. PRACTICAL EXPERIENCE
To this point in time we have had only limited experience with the techniques discussed above, but it has been encouraging. Using the method described in section three, series of length 400 were generated with d = .25 and d = .45, using an AR(100) approximation with an initial 100 terms being generated by the Hipel–McLeod moving average model for use as start-up values. The following two tables show the theoretical and estimated autocorrelations for levels and differences together with some estimated partial autocorrelations. These allow ARIMA models to be identified and estimated by standard techniques.
d = .25

                 Level                                         Differences
Lag   Est. Autocorr.   Theor. Autocorr.   Est. Partial        Est. Autocorr.
 1        .41               .33               .41                 -.42
 2        .31               .24               .17                 -.02
 3        .24               .19               .08                 -.09
 4        .27               .17               .15                  .03
 5        .27               .15               .11                  .01
 6        .26               .14               .07                 -.04
 7        .29               .13               .13                  .11
 8        .19               .12              -.03                 -.08
 9        .18               .11               .01                  .02
10        .14               .11              -.01                 -.08
11        .20               .10               .07                  .10
12        .15               .10              -.03                 -.08
13        .18               .09               .06                  .05
14        .15               .09               .00                 -.02
15        .15               .09               .02                 -.01
The approximate standard error for small lags is 0.05. For levels, the first 96 estimated autocorrelations are all non-negative and the first 22 are more than twice the standard error. If an AR(2) model is identified, it is estimated to be

x_t = .34 x_{t−1} + .17 x_{t−2} − .03 + ε_t   (t-values 6.85, 3.45, −.05)

and the estimated residuals pass the usual simple white noise tests. An alternative model might be identified as ARIMA(0, 1, 1), but this will have similar long-run forecasting properties to a random walk.
d = .45

                 Levels                                              Differences
Lag   Est. Autocorr.   Theor. Autocorr.   Est. Partial Corr.   Est. Autocorr.   Est. Partial
 1        .67               .82               .67                  -.38             -.38
 2        .59               .76               .25                  -.03             -.20
 3        .53               .74               .12                  -.07             -.19
 4        .51               .71               .13                   .06             -.06
 5        .45               .70               .01                  -.07             -.11
 6        .44               .69               .07                   .02             -.07
 7        .41               .68               .04                  -.05             -.11
 8        .41               .67               .08                   .02             -.09
 9        .40               .66               .06                   .02             -.04
10        .39               .65               .03                   .01             -.02
11        .37               .65               .01                   .05             -.06
12        .31               .64              -.07                  -.08             -.04
13        .31               .63               .02                   .02             -.01
14        .28               .63              -.01                  -.02             -.04
15        .27               .63               .01                   .02             -.03
For levels, the first 34 autocorrelations are non-negative and the next 65 are negative but very small. The first 16 are greater than twice the standard error. The results suggest that the generating mechanism has not done a good job of reproducing the larger lag autocorrelations. A relevant ARIMA model for this data might be IMA(1, 1) and the following model was estimated:
(1 − B)x_t = ε_t − 0.61 ε_{t−1} + .008   (t-values 15.5 and 0.4).

The grid-search method was applied to each series; that is, ARd(50) models, with parameters depending just on the different d-values, were used to forecast ten steps ahead. The various mean-squared (ten-step) forecast errors (MSE) were found to be:
(i) d = 0.25

Selected d               Resulting MSE
d = .1                   1.29
d = .2                   1.36
d = .25                  1.30
d = .3                   1.263
d = .4                   1.264
MSE (random walk)        = 2.45
MSE (mean)               = 1.45
MSE (estimated AR(2))    = 1.36
MSE (random walk) is the ten-step forecast mean-squared error if one had assumed the series were a random walk. MSE (mean) is that resulting from forecasts made by a model of mean plus white noise, where the estimate of the mean is continually updated. The grid method "estimates" d to be about 0.3, which is near the true value of 0.25, and produces forecasts that have a ten-step error variance which is somewhat better than the fitted AR(2) model and is considerably better than using a random walk or IMA(1, 1) model. The theoretical MSE with d = 0.25 is 1.13, which suggests that the forecasting method used is not optimal. The results are biased in favour of the AR(2) model, as its parameters are estimated over the same data set from which the mean-squared forecast errors were estimated.
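The grid search itself is straightforward to mimic. The following is a hypothetical sketch of the procedure as we read it, not the authors' code: for each trial d an ARd(m) predictor is built from the theoretical autocorrelations and iterated ten steps ahead, and the d giving the smallest mean-squared error is selected; `series` stands for the data being forecast and all names are assumptions.

```python
# Hypothetical sketch of the AR_d(m) grid search using ten-step forecast MSE.
import numpy as np
from scipy.special import gammaln
from scipy.linalg import solve_toeplitz

def frac_acf(d, nlags):
    tau = np.arange(nlags + 1)
    return np.exp(gammaln(1 - d) + gammaln(tau + d) - gammaln(d) - gammaln(tau + 1 - d))

def ten_step_mse(x, d, m=50, horizon=10):
    rho = frac_acf(d, m)
    phi = solve_toeplitz(rho[:m], rho[1:m + 1])   # AR_d(m) weights from the Walker-Yule equations
    errs = []
    for t in range(m, len(x) - horizon + 1):      # forecast origin: last observation is x[t-1]
        hist = list(x[t - m:t])
        for _ in range(horizon):                  # iterate the recursion with future noise set to zero
            hist.append(float(phi @ np.array(hist[-1:-m - 1:-1])))
        errs.append(x[t + horizon - 1] - hist[-1])
    return float(np.mean(np.square(errs)))

for d in (0.1, 0.2, 0.25, 0.3, 0.4):              # the grid reported above
    print(d, round(ten_step_mse(series, d), 3))
```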
(ii) d = 0.45

Selected d               Resulting MSE
d = .1                   2.27
d = .2                   2.15
d = .3                   2.06
d = .4                   2.02
d = .45                  2.01
MSE (random walk)        = 2.84
MSE (mean)               = 2.42
The theoretical achievable MSE using d = 0.45 is 1.58. Once more, the grid search has apparently selected the correct d (although values of d greater than 0.45 were not considered); the forecast method used was not optimal, but the ten-step MSE achieved was about thirty percent better than if a random walk or IMA(1, 1) model had been used. The grid search procedure has been used by us on just one economic series so far, the U.S. monthly index of consumer food prices, not seasonally adjusted, for the period January 1947 to June 1978. The ordinary identification of this series is fairly interesting. The correlogram of the raw series clearly indicates that first differencing is required. For the first
differenced series, the autocorrelations up to lag 72 are all positive and contain fifteen values greater than twice the standard error. The first twenty-four autocorrelations are:

lag   1    2    3    4    5    6    7    8    9    10   11   12
r_k   .36  .22  .17  .11  .14  .22  .17  .08  .17  .21  .19  .24

lag   13   14   15   16   17   18   19   20   21   22   23   24
r_k   .18  .07  .05  .10  .07  .11  .05  .07  .05  .08  .15  .11
The standard errors are .05 and .07 for the two rows. The partial autocorrelations are generally small except for the first, but the sixth, ninth and tenth are greater than twice the standard error. The parsimonious model probably identified by standard procedures is thus ARIMA(1, 1, 0). Using the first differenced series, plus extra d differencing, the grid gives the following ten-step mean square forecast errors for the series in level form:

d      -.9    -.8    -.7    -.6    -.5     -.4     -.3     -.2     -.1
MSE    139    128    115    101    86.1    72.1    58.4    46.1    35.7

d      .1      .2      .3      .35      .4      .45
MSE    21.6    17.8    16.1    16.03    16.4    17.32
The variance of the whole series is 1204.6 and the ten-step forecast MSE using just a random walk model is 27.6. The evidence thus suggests that the original series should be differenced approximately 1.35 times and that substantially superior ten-step forecasts then result. The ten-step forecast MSE using an ARIMA(1, 1, 0) model is 19.85, which should be compared to the minimum grid value of 16.03. The actual model fitted was
(1 − B)x_t = .37(1 − B)x_{t−1} + .246 + ε_t   (t-values 7.63 and 4.77).

There is clearly plenty of further work required on such questions as how best to estimate d, how best to form forecasts for integrated models, and the properties of these estimates and forecasts. It is clear that the techniques we have used in this paper are by no means optimal, but hopefully they do illustrate the potential of using long-memory models and will provoke further interest in these models. It is planned to investigate the above questions and also to find whether these models appear to occur, and can be used to improve long-term forecasts, in actual economic data.

APPENDIX: THE d = 0 CASE
The d = 0 case can be considered from a number of different viewpoints, which lead to different models. Some of these viewpoints are:
(i) If f(ω) = a(1 − cos ω)^{−d}, then simply taking d = 0 gives the usual stationary case with f(0) = c, where c is a positive but finite constant. This corresponds to taking d = 0 in an ARIMA(p, d, q) model.

(ii) If one considers aggregates of the form x_{jt} = α_j x_{j,t−1} + β_j ε_t, z_t = Σ_{j=1}^{N} x_{jt}, then approximately

z_t ≅ [∫ (β/(1 − αB)) dF(α, β)] ε_t,

i.e., z_t = β̄ log(1 − B) ε_t if α, β are independent and α is rectangular on (0, 1). This corresponds to a series with spectrum proportional to log(1 − z) log(1 − z̄), which takes the form (log ω)² for small ω and so is infinite at ω = 0. The moving average form corresponding to this model has b_j ≅ A/j for j large, which is the same as equation (6) with d = 0. The autocovariances take the form μ_τ ≅ σ_ε² (log τ)/τ for large τ. This type of model can be thought of as arising from applying filters of the form [(1 − B)^d − 1]/d to white noise, and then letting d → 0.

(iii) By looking at equations (4) and (8), one could ask what models correspond to autoregressive equations with a_j ≅ A/j for large j, or have autocovariances of the form μ_τ ≅ A/τ for large τ, which arose in section 3 from a particular aggregation.

The relationships between these various viewpoints and the relevance for forecasting need further investigation.
REFERENCES

Gradshteyn, I. S. and I. M. Ryzhik (1965) Tables of Integrals, Series and Products (4th Edition), Academic Press.
Granger, C. W. J. (1980) Long Memory Relationships and the Aggregation of Dynamic Models. To appear, Journal of Econometrics.
Granger, C. W. J. and P. Newbold (1974) Spurious Regressions in Econometrics, Journal of Econometrics, 2, 111–120.
Hipel, K. W. and A. I. McLeod (1978) Preservation of the Rescaled Adjusted Range, Parts 1, 2 and 3, Water Resources Research, 14, 491–518.
Lawrance, A. J. and N. T. Kottegoda (1977) Stochastic Modeling of Riverflow Time Series, Journal of the Royal Statistical Society, A 140, 1–47.
Mandelbrot, B. B. and J. W. Van Ness (1968) Fractional Brownian Motions, Fractional Noises and Applications, SIAM Review, 10, 422–437.
Mandelbrot, B. B. (1971) A Fast Fractional Gaussian Noise Generator, Water Resources Research, 7, 543–553.
Plosser, C. I. and G. W. Schwert (1978) Money, Income and Sunspots: Measuring Economic Relationships and the Effects of Differencing, Journal of Monetary Economics, 4, 637–660.
Rosenblatt, M. (1976) Fractional Integrals of Stochastic Processes and the Central Limit Theorem, Journal of Applied Probability, 13, 723–732.
CHAPTER 18
Long Memory Relationships and the Aggregation of Dynamic Models*
C. W. J. Granger
By aggregating simple, possibly dependent, dynamic micro-relationships, it is shown that the aggregate series may have univariate long-memory models and obey integrated, or infinite length, transfer function relationships. A long-memory time series model is one having spectrum of order ω^{−2d} for small frequencies ω, d > 0. These models have infinite variance for d ≥ ½ but finite variance for d < ½. For d = 1 the series that need to be differenced to achieve stationarity occur, but this case is not found to occur from aggregation. It is suggested that if series obeying such models occur in practice, from aggregation, then present techniques being used for analysis are not appropriate.

1. INTRODUCTION

In this paper it is shown that aggregation of dynamic equations, that is equations involving lagged dependent variables, can lead to a class of model that has fundamentally different properties from those in current use in econometrics. If these models are found to arise in practice, then they should prove useful in improving long-run forecasts in economics and also in finding stronger distributed lag relationships between economic variables. The following definitions are required for later sections: Suppose that x_t is a zero-mean time series generated from a zero-mean, variance σ² white noise series ε_t by use of the linear filter a(B), where B is the backward operator, so that

x_t = a(B)ε_t,   B^k ε_t = ε_{t−k},
(1)
and that a(B) may be written

a(B) = (1 − B)^{−d} a′(B),   (2)

* Journal of Econometrics, 14, 1980, 227–238.
where a′(z) has no poles or roots at z = 0. Then x_t will be said to be "integrated of order d" and denoted x_t ~ I(d). Note that d need not be an integer. Further, defining

x′_t = (1 − B)^d x_t = a′(B)ε_t,

then x′_t ~ I(0), because of the stated properties of a′(B). a(B) will be called an "integrating filter of order d". If a′(B) is the ratio of two finite polynomials in B of orders l and m, and if d is an integer, then x_t will be ARIMA(l,d,m) in the usual Box and Jenkins (1970) notation. In the more general models considered here, d need no longer be an integer. To help with interpretation, one can consider the idea of "fractional differencing." The usual differencing procedure consists of using the operator (1 − B). Suppose there is a filter a(B) such that, when used twice, one gets the usual difference, i.e., a(B)² = (1 − B). Clearly, such a filter can exist, and if this filter is used just once, it can be thought of as "half differencing", which is an example of fractional differencing with d = ½. An integrated series is one that requires fractional differencing to achieve a stationary ARMA series. An introduction to this class of models may be found in Granger and Joyeux (1980); other accounts and references may be found in Hipel and McLeod (1978), Lawrance and Kottegoda (1977), and Mandelbrot and Van Ness (1968). Some of the main properties of these models may be summarized as follows: Using the well-known results of filtering theory, the spectrum of x_t, given by (1) and (2), is seen to be

f_x(ω) = |1 − z|^{−2d} |a′(z)|² σ²/2π,   z = e^{iω}.

It follows that for small ω,

f_x(ω) ≅ cω^{−2d},   (3)

where

c = (σ²/2π)(a′(1))².
s2 2 (a¢(1)) . 2p
Note that for a stationary ARMA series, fx(w) c for w small, but for an ARIMA (p,1,q) series, fx(w) is as (3) but with d = 1. Thus, neither model provides an adequate approximation to the non-integer integrated model. It should be also noted that for longer-run forecasting purposes, it is this low-frequency part of the spectrum that is the most important.
340
C. W. J. Granger
Consider now the case where a(B) is a pure integrating filter of order d, so that -d
xt = (1 - B) e t ,
(4)
then it is shown in Granger and Joyeux (1979) that cov( xt , xt -k ) =
s e2 G(k + d) sin(pd) G(1 - 2d), 2p G(k + 1 - d)
provided d < –12 . The variance of xt increases as d increases and is infinite for d –12 . It follows that r k = corr( xt , xt -k ) =
G(1 - d) G(k + d) , G(d) G(k + 1 - d)
(5)
for d < –12 and d π 0. Of course, rk = 0,
k > 0 if d = 0,
which is the white noise case. Writing •
•
xt = Â bj e t - j
and
j =0
Âa x j
t- j
= et ,
j =0
as the MA(•) and AR(•) representations of xt, one finds that with xt generated by (4), bj =
G( j + d) , G(d)G( j + 1)
aj =
G( j - d) , G(-d)G( j + 1)
j 1,
j π 0,
(6)
and j 1,
(7)
Using the fact, easily derived from Sterling’s theorem, that G(j + a)/G(j + b) is well approximated by ja-b for large j, it follows that (5), (6) and (7) may be approximated for large j by rj A1j2d-1,
(5¢)
bj A2j ,
(6¢)
|aj| A3j-(1+d),
(7¢)
d-1
where A1, A2 and A3 are appropriate constants. If xt is generated by the more general model (1), (2), then eqs. (5¢), (6¢) and (7¢) will still hold for the autocorrelations, moving average and autoregressive parameters for large j but the constants A1, A2 and A3 will alter. The fact that the rj and
Long Memory Relationships and Aggregation
341
bj decline at a slower rate than for any ARMA (l,m) model with finite l, m suggests that the series will possess interesting and potentially useful long-memory properties, particularly if d > 0. This is also seen from (3), which gives a low-frequency form for the spectrum that is different from any ARIMA (l,d,m) model with finite l, m and integer d. As the lowfrequencies are of central importance in long-run forecasting, getting the correct model corresponding to (3) is seen to be important. It should be noted that if d Æ -•, then bj Æ 0 for j > 0, so that white noise is obtained. The algebra of integrated series is quite simple, as if xt ~ I(d) and an integrating filter is applied to it, to form yt = (1 - B)
-d¢
xt ,
then yt ~ I (d + d ¢). If x1t ~ I(d1) and x2t ~ I(d2), with x1t and x2t independent, then x = x1t + x2t ~ I (max(d1 , d2 )). It follows that if a relationship of the form xt = a(B) yt + et is constructed, with xy ~ I(dx), yt ~ I(dy) and a(B) is an integrating filter of order d, then et ~ I(de) with dx = max(de,d + dy) and if dy > dx, then d < 0. 2.
AGGREGATION OF INDEPENDENT SERIES
Suppose that x1t, x2t are a pair of series generated by xjt = ajxj,t-1 + ejt,
j = 1,2,
(8)
where e1t, e2t are a pair of independent, zero-mean white noise series, then their sum, x¯t = x1t + x2t, is easily shown to obey an ARMA (2,1) model [see, for example, Granger and Morris (1976) and Granger and Newbold (1977)]. The autoregressive part of this model is (1 - a1B)(1 - a2B). If now N independent series are added, each obeying an AR(1) model with different a values, of the form (8), then their sum will be ARMA(N,N - 1) unless cancellation of roots occurs between the autoregressive and moving average sides of the model. Many of the important microeconomic variables are aggregates of a very large number of micro-variables: Total personal income, unemployment, consumption of non-durable goods, inventories, and profits are just a few examples. Although the components of these
342
C. W. J. Granger
macroseries are not independent, the above results can be generalised to suggest that the expected models for aggregates will have huge numbers of parameters, which is not what is found in practice. An alternative point of view is obtained by considering f¯(w), the power spectrum of the aggregate series N
xt = Â x jt , j =1
where each xjt is generated by an AR(1) model, such as (8). The power spectrum of xjt is 1
f j (w ) =
1 -a jz
2
◊
var(e jt ) , z = e - iw , 2p
(9)
and the spectrum of x¯ is then N
f (w ) = Â f j (w ), j =1
as the components are independent. If the aj are assumed to be random variables drawn from a population with distribution function F(a), and similarly var(ejt) are drawn from some population and are independent of the a’s, one gets the approximation f (w )
N 1 E[var(e jt )] ◊ Ú dF (a ). 2 2p 1 - az
(10)
This is, of course, a standard technique for considering the effects of aggregation; see Theil (1954). If F(a) is the distribution function of a discrete random variable on the region -1 to 1, so that a can take just m specific values in this range, then f¯(w) will be the spectrum of an ARMA(m,m - 1) process. However if a can take any value in some region, so that it is a continuous variable, then f¯(w) will correspond to no ARMA process having of finite number of parameters. To proceed further it is necessary to assume a particular distribution for a, and for mathematical convenience it will be assumed that a has a form of beta distribution on the range (0,1). It will be argued later that the exact form selected for this distribution function is not of critical importance except near a = 1. The range 0 to 1 can be easily changed and alternatives are considered below. The particular form of the beta distribution used here is dF (a ) =
2 B( p, q)
= 0,
a 2 p -1 (1 - a 2 )
q -1
da , 0 a 1, elsewhere,
(11)
Long Memory Relationships and Aggregation
343
where p > 0, q > 0. A wide variety of shapes can be taken by this function with different choices of p and q. Noting that one can write 1 1 - az
=
2
È 1 + az 1 + az ˘ , + (1 - a ) ÍÎ 1 - az 1 - az ˙˚ 1
2
and that • 1 + az j = 1 + 2Â (az) , 1 - az j =1
it follows from (10) and (11) that the coefficient of zk in f¯(w) is 1
q-2 2 a 2 p +k -1 (1 - a 2 ) da , B( p, q) Ú0
and this coefficient is m–k, the kth autocovariance of x–t, from the standard Fourier expansion of a spectrum. Thus, provided q < 1, B( p + k 2 , q - 1) B( p, q) G(q - 1) G ( p + k 2) = ◊ , B( p, q) G( p + k 2 + q - 1)
mk =
which, for large k, gives the approximation m– = A k1-q. k
4
Comparing this with (5¢), it follows that x–t ~ I(1 - q/2). It should be noted that if q > 1, then 1 - q/2 < –12 and x–t will have finite variance, but if 0 < q £ 1, x–t will not have finite variance. It is seen that the order of the integration, 1 - q/2, does not depend on p and so the shape of dF(a) appears to be of little relevance in this respect except near a = 1, where q determines the slope of dF(a) in the form chosen. Because of this, it is easily seen that the range of a can be changed to a to 1, a > 0, without any effect on the main result. If the upper end of the range is changed from 1 to b, important changes do occur. If b > 1, x– can become explosive, which is generally considered to be inappropriate for economic variables. If b < 1, the beta distribution on the range (0,b) is 2 a 2 p -1 (b 2 - a 2 ) B( p, q) b( p +q -1)
q -1
,
and from this it is simple to show that m– A bkk1-q, for large k. k
5
344
C. W. J. Granger
Although this does not strictly correspond to the autocovariance of any ARMA model with a finite number of parameters, such a model is likely to provide a good approximation in most cases, provided b is not very near to 1. Thus, to get x–t ~ I(0), one way is to require the aj’s in the individual AR(1) component models to be constrained to be less than some quantity which is strictly less than one. A second way is to take b = 1 but let q Æ •. The assumption that the ejt are all white noises can be removed. Suppose that the xjt are generated by xjt = ajxj,t-1 + yjt,
(12)
where yjt has spectrum fy(w,qj) depending on the vector of parameters qj. Further suppose that the yjt are all independent and that the a’s and q’s are drawn from independent populations. Then eq. (10) becomes f (w ) NEq [ fy (w ,q )]Ú
1 1 - az
2
dF (a ).
It follows immediately with the previous assumptions that if N
yt = Â y jt ~ I (dy ), j =1
then xt ~ I (dy + 1 - q) from the previous results. The power cross-spectrum between xjt and yjt, cr(j)(w), is seen from (12) to be cr ( j ) (w ) =
fy (w ,q j ) , z = e - iw , 1 -a jz
and, because all components are assumed independent, the crossspectrum between x–t and y–t, denoted cr (w), is given by N
cr (w ) = Â cr ( j ) (w ), j =1
and so approximately cr (w ) NEq [ fy (w ,q )]Ú
1 dF (a ). 1 - az
Using the same distribution as before for a gives
Long Memory Relationships and Aggregation
•
cr (w ) NEq [ fy (w ,q )]Â zk k =0
345
B( p + k 2, q) , B( p, q)
and so the coefficient of zk in the sum is of the order of k-q. If a one-way causal transfer function or distributed lag equation is fitted relating x¯t and y¯t, of the form xt = a(B) yt + et , then as a(z) = cr (w ) fy (w ), it follows that a(B) will be an integrating filter of order d = 1 - q from (6¢), providing q π 1. 3.
AGGREGATION OF DEPENDENT SERIES
Initially, consideration is given just to the perfectly dependent set of series xjt generated by xjt = ajxj,t-1 + bjWt.
(13)
Later, these models will be embedded in a more general, and acceptable, class. It is seen immediately that bj ˆ ÊN xt = Á Â ˜W , Ë j =1 1 - a j B ¯ t
(14)
which can be approximated by 1 Ê ˆ xt NE[b ] Ú dF (a ) Wt , Ë 1 - aB ¯ assuming that the a’s and b’s are drawn from independent populations. With the usual assumption about F(a), from the result at the end of the previous section, it is seen that • B( p + k 2, q) ˆ Ê xt NE[b ]Á Â Bk ˜ Wt , Ë k =0 B( p, q) ¯
and so the coefficient of Bk for large k is of order k-q. It follows from (6¢) that if Wt I(dW), then xt ~ I (1 - q + dW ). If a one-way causal transfer function equation of the form xt = a(B)Wt ,
346
C. W. J. Granger
is fitted, then – as before – a(B) will be an integrating filter of order 1 - q. Consider now the more general model xjt = ajxj,t-1 + yj,t + bjWt + ejt,
(15)
where the series yjt, Wt and ejt are all independent of each other for all j, ejt are white noises with variances s 2j, yj,t has spectrum fy(w,qj) and is at least potentially observable for each micro-component. It is assumed that there is no feedback in the system, so that xjt does not cause yjt or Wt. The various parameters a, q, b and s2 are all assumed to be drawn from independent populations and the distribution function for the a’s is still given by (11). With these assumptions the results obtained above can be combined to give: (i) xt ~ I (dx ), where dx is the largest of the three terms: 1 - q/2 + dy (coming from the yjt components), 1 - q + dw (coming from the wt components), and 1 q/2 (coming from the ejt components), where y–t ~ I(dy) and Wt ~ I(dW).] It is thus seen that, with the type of aggregation considered, integrated models are inclined to occur, unless the a’s are constrained to be strictly less than one. Note that with the beta distribution used, given in (11), prob(a = 1) = 0 if q > 0. Further, if dy = dw = 0, then dx < 1 with q > 0. Thus, in this case, ordinary single differencing will not be required because of aggregation. With dy = dw = 0, to get dx = 1 one needs a distribution for the a’s with prob(a = 1) π 0. However, if dy > 0, for instance, dx = 1 can occur in the case generally considered in this paper. (ii) If a transfer-function model of the form xt = a1 (B) yt + a2 (B)Wt + et
(16)
is fitted, then both a1(B) and a2(B) will be integrating filters of order 1 q. The term in (16) involving y–t would contribute 1 - q + dy to dx, which can be compared to the 1 - q/2 + dy contributed by all the yjt’s. As q > 0, it is seen that the y– term makes a smaller contribution to dx, which may be thought of as a measure of the information loss in using the aggregate y– to “explain” x–t rather than all the individual yj,t’s. If et ~ I(dx), then if dx > 1 - q + dw, it follows that de = dx, but if dx = 1 - q + dw, then de can be less than dx, assuming y–t and et independent. 4.
SOME OTHER MODELS
The possibility of feedback between the micro-variables was excluded in the previous sections. To show that similar results are likely to be found from simple feedback situations, consider the micro-model
Long Memory Relationships and Aggregation
1 1 ¸ e jt + h jt Ô 1 -a jB 1 - bjB Ô ˝, 1 1 y jt = e jt + h jt Ô Ô˛ 1 -g jB 1 -d jB
347
x jt =
(17)
where ejt, hjt are a pair of independent zero-mean white noise series. This is a two-way causal, or feedback system for each micro-component. Clearly, from the results of section 2, the aggregates x–t and y–t will both be integrated series if the a’s, b’s, g ’s and d’s are all drawn from independent beta populations. The relationships between x–t and y–t are of a feedback nature and the transfer functions involved will correspond to integrating filters. The exact analytical details are not important, as eqs. (17) are not of the usual form for feedback models and assumptions of independence between the parameters a, b, g and d may not correspond to actual microeconomic theory. It is possible to present a heuristic argument that long-memory or integrating processes can arise from very large-scale dynamic, econometric models. Suppose that the model takes the simple form A(B) xt = e t ,
(18)
where the (j,k)th element of the N ¥ N matrix A(B) is ajk + bjkB, xt is a N ¥ 1 vector of economic variables, and et is an N ¥ 1 white noise vector. Thus, the equations of the model allow any variable to be lagged once. Writing xt = A -1 (B)e t , and nothing that each element of A-1(B) will be the ratio of a polynomial in B of order N - 1 divided by a polynomial in B of N, and further that all such ratios can be written as N
 (c
j
(1 - a j B)),
(19)
j =1
assuming all the roots of |(A(z)| = 0 are real, for convenience, it follows that each element of xt will be the sum of N components, one for each ejt, and each of these components will be the sum of N AR(1) type filters applied to a white noise series. Comparing this construction with those met in sections 2 and 3 above strongly suggests that each component of xt will be integrated and relationships between components will involve integrating transfer functions, provided the roots of |A(z)| = 0 are drawn from a beta distribution on the range (a,1). This argument is not rigorous, because the cj in (19) will not be independent of each other or of the a’s. The details seem to be very complex but the likely conclusions from a more careful analysis are probably those already indicated by the heuristic argument.
Although the results presented in this paper suggest that integrated processes and long-memory relationships are likely to occur from aggregation of dynamic models, it should be pointed out that they by no means necessarily arise. For example, if x_{jt} = ε_{jt} + b_j ε_{j,t−1}, so that each micro-variable is MA(1), then x̄_t will also be MA(1). Similarly, if each x_{jt} is IMA(d,q), then so will be the aggregate x̄_t. This difference between aggregating AR(1) and MA(1) models is quite dramatic.

5. CONCLUSION
It has been shown that aggregation of dynamic equations can lead to models, both for single variables and relating pairs of aggregates, that are quite different from those currently in use. This means that present models may well be mis-specified and are being inefficiently estimated. The practical problems of estimating these new, alternative models require further research.
REFERENCES

Box, G.E.P. and G.M. Jenkins, 1970, Time series analysis, forecasting and control (Holden Day, San Francisco, CA).
Granger, C.W.J. and R. Joyeux, 1980, An introduction to long-memory time series and fractional differencing, Journal of Time Series Analysis 1, forthcoming.
Granger, C.W.J. and M. Morris, 1976, Time series modelling and interpretation, Journal of the Royal Statistical Society A 139, 246–257.
Granger, C.W.J. and P. Newbold, 1977, Forecasting economic time series (Academic Press, New York).
Hipel, K.W. and A.I. McLeod, 1978, Preservation of the rescaled adjusted range, Parts 1–3, Water Resources Research 14, 491–518.
Lawrance, A.J. and N.T. Kottegoda, 1977, Stochastic modelling of river-flow time series, Journal of the Royal Statistical Society A 140, 1–47.
Mandelbrot, B.B. and J.W. Van Ness, 1968, Fractional Brownian motions, fractional noises and applications, SIAM Review 10, 422–437.
Theil, H., 1954, Linear aggregation of economic relations (North-Holland, Amsterdam).
CHAPTER 19
A Long Memory Property of Stock Market Returns and a New Model* Zhuanxin Ding, Clive W. J. Granger, and Robert F. Engle**
Abstract

A "long memory" property of stock market returns is investigated in this paper. It is found that not only is there substantially more correlation between absolute returns than between returns themselves, but the power transformation of the absolute return |r_t|^d also has quite high autocorrelation for long lags. It is possible to characterize |r_t|^d as "long memory", and this property is strongest when d is around 1. This result appears to argue against ARCH type specifications based upon squared returns. But our Monte-Carlo study shows that both ARCH type models based on squared returns and those based on absolute returns can produce this property. A new general class of models is proposed which allows the power d of the heteroskedasticity equation to be estimated from the data.

1. INTRODUCTION
If rt is the return from a speculative asset such as a bond or stock, this paper considers the temporal properties of the functions |rt|d for positive values of d. It is well known that the returns themselves contain little serial correlation, in agreement with the efficient market theory. However, Taylor (1986) found that |rt| has significant positive serial correlation over long lags. This property is examined on long daily stock market price series. It is possible to characterize |rt|d to be “longmemory”, with quite high autocorrelations for long lags. It is also found, as an empirical fact, that this property is strongest for d = 1 or near 1 compared to both smaller and larger positive values of d. This result * Journal of Empirical Finance, 1, 1993, 83–116. ** We thank Jurg Barlocher, Xiaohong Chen, Takeo Hoshi, Bruce Lehman, Victor Ng, and Ross Starr for helpful comments and discussions. We are also grateful to the editor (Richard T. Baillie) and two anonymous referees for their constructive comments. The second and third authors would like to thank NSF for financial support.
Table 19.1 Summary statistics of r_t.

data   sample size   mean      std      skewness   kurtosis   min      max     studentized range   normality test
r_t    17054         0.00018   0.0115   -0.487     25.42      -0.228   0.154   33                  357788
appears to argue against ARCH type specifications based upon squared returns. The paper examines whether various classes of models are consistent with this observation. A new general class of models is then proposed which allows the power d of the heteroskedasticity equation to be estimated from the data. The remainder of this paper is organized as follows: In section 2, we give a brief description of the data we use. In section 3 we carry out the autocorrelation and cross-correlation analysis. The special pattern of the autocorrelogram and crosscorrelogram of the stock returns is exploited and presented. Section 4 investigates the effect of temporal aggregation on the autocorrelation structure and examines the short sample autocorrelation property of stock returns. Section 5 presents a Monte Carlo study of various financial models. Based on this, we propose a new general class of models in section 6. Section 7 concludes the analysis. 2.
THE DATA
The data set we will analyze in this paper is the Standard & Poor 500 (hereafter S&P 500) stock market daily closing price index.1 There are altogether 17055 observations from Jan 3, 1928 to Aug 30, 1991. Denote pt as the price index for S&P 500 at time t (t = 0, . . . , 17055). Define rt = lnpt - lnpt-1
(1)
as the compounded return for S&P 500 price index at time t (t = 1, . . . , 17054). Table 19.1 gives the summary statistics for rt. We can see from Table 19.1 that the kurtosis for rt of 25.42 is higher than that of a normal distribution which is 3. The kurtosis and studentized range statistics (which is the range divided by standard deviation) show the characteristic “fattailed” behavior compared with a normal distribution. The Jarque–Bera normality test statistic is far beyond the critical value which suggests that rt is far from a normal distribution. Figs. 19.1, 19.2 and 19.3 give the plots of pt, rt and |rt|. We can see from the figures the long run movement of daily pt, rt, |rt| over the past 62 years. There is an upward trend for pt but rt is rather stable around mean 1
We are indebted to William Schwert for providing us the data.
Figure 19.1. Standard & Poor 500 daily price index 01/03/28–08/30/91.
Figure 19.2. Standard & Poor 500 daily returns 01/04/28–08/30/91.
Figure 19.3. Standard & Poor 500 daily absolute returns 01/04/28– 08/30/91.
m = 0.00018. From the series |rt|, we can clearly see the observation of Mandelbrot (1963) and Fama (1965) that large absolute returns are more likely than small absolute returns to be followed by a large absolute return. The market volatility is changing over time which suggests a suitable model for the data should have a time varying volatility structure as suggested by the ARCH model. During the Great Depression of 1929 and early 1930s, volatilities are much higher than any other period. There is a sudden drop in prices on Black Monday’s stock market crash of 1987, but unlike the Great Depression, the high market volatility did not last very long. Otherwise, the market is relatively stable. 3. AUTOCORRELATION ANALYSIS OF THE RETURN SERIES It is now well established that the stock market returns themselves contain little serial correlation [Fama (1970), Taylor (1986)] which is in
Table 19.2 Autocorrelations of rt. data
lag 1
2
3
4
5
10
20
40
70
100
rt |rt| r at
0.063 0.318 0.218
-0.039 0.323 0.234
-0.004 0.322 0.173
0.031 0.296 0.140
0.022 0.303 0.193
0.018 0.247 0.107
0.017 0.237 0.083
0.000 0.200 0.059
0.000 0.174 0.058
0.004 0.162 0.045
Figure 19.4. Autocorrelations of |r|, r² and r (from high to low).
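The calculations behind this figure and Table 19.2 are simple to reproduce. The sketch below is our own illustration, not the authors' code; `prices` is an assumed array of daily closing prices, and the band is the ±1.96/√T interval discussed in the text.

```python
# Sketch: sample autocorrelations of r, |r| and r^2 with the i.i.d. 95% band.
import numpy as np

def sample_acf(x, nlags):
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = float(np.sum(x * x))
    return np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, nlags + 1)])

r = np.diff(np.log(prices))            # compounded returns, as in eq. (1)
band = 1.96 / np.sqrt(len(r))          # 95% band if returns were i.i.d.
for name, s in (("r", r), ("|r|", np.abs(r)), ("r^2", r ** 2)):
    acf = sample_acf(s, 100)
    print(name, np.round(acf[:5], 3), "lags outside band:", int(np.sum(np.abs(acf) > band)))
```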
agreement with the efficient market theory. But this empirical fact does not necessarily imply that returns are independently identically distributed as many theoretical financial models assume. It is possible that the series is serially uncorrelated but is dependent. The stock market data is especially so since if the market is efficient, a stock’s price should change with the arrival of information. If information comes in bunches, the distribution of the next return will depend on previous returns although they may not be correlated. Taylor (1986) studied the correlations of the transformed returns for 40 series and concluded that the returns process is characterized by substantially more correlation between absolute or squared returns than there is between the returns themselves. Kariya et al. (1990) obtained a similar result when studying Japanese stock prices. Extending this line we will examine the autocorrelation of rt and |rt|d for positive d in this section, where rt, is the S&P 500 stock return. Table 19.2 gives the sample autocorrelations of rt, |rt| and rt2 for lags 1 to 5 and 10, 20, 40, 70, 100. We plot the autocorrelogram of rt, |rt| and rt2 from lag 1 to lag 100 in fig. 19.4. The dotted lines show ±1.96/ T which is the 95% confidence interval for the estimated sample autocorrelations if the process rt is independently and identically distributed (hereafter i.i.d.). In our case T = 17054 so ±1.96/ T = 0.015. It is proved [Bartlett (1946)] that if rt is a i.i.d process then the sample autocorrelation rt is approximately N(0, 1/T). In fig. 19.4, about one quarter of the sample autocorrelations within lag 100 are outside the 95% confidence interval for a i.i.d process. The first lag autocorrelation is 0.063 which is significantly positive. Many other researchers [see Fama (1976), Taylor
Table 19.3 Autocorrelations of |rt|d. d
lag 1
2
3
4
5
10
20
40
70
100
0.125 0.25 0.5 0.75 1 1.25 1.5 1.75 2 3
0.110 0.186 0.257 0.297 0.318 0.319 0.300 0.264 0.218 0.066
0.108 0.181 0.255 0.299 0.323 0.326 0.309 0.276 0.234 0.088
0.102 0.182 0.263 0.305 0.322 0.312 0.278 0.228 0.173 0.036
0.098 0.176 0.251 0.286 0.296 0.280 0.242 0.192 0.140 0.025
0.121 0.193 0.259 0.291 0.303 0.295 0.270 0.234 0.193 0.072
0.100 0.164 0.222 0.246 0.247 0.227 0.192 0.149 0.107 0.019
0.100 0.164 0.221 0.241 0.237 0.211 0.170 0.125 0.083 0.009
0.095 0.148 0.192 0.207 0.200 0.174 0.136 0.095 0.059 0.004
0.065 0.120 0.166 0.180 0.174 0.153 0.122 0.088 0.058 0.006
0.089 0.131 0.165 0.173 0.162 0.138 0.106 0.073 0.045 0.003
(1986), Hamao et al. (1990)] also found that most stock market return series have a very small positive first order autocorrelation. The small positive first order autocorrelation suggests that the rt do have some memory although it is very short and there is a portion of stock market returns that is predictable although it might be a very small one. So the efficient market or random walk hypothesis does not hold strictly. Alternatively, this could be from non-synchronous measurement of prices. The second lag autocorrelation (= -0.039) is significantly negative which supports the so called “mean-reversion” behaviour of stock market returns. This suggests that the S&P 500 stock market return series is not a realization of an i.i.d process. Furthermore, if rt is an i.i.d process, then any transformation of rt is also an i.i.d process, so will be |rt| and rt2. The standard error of the sample autocorrelation of |rt| will be 1/ T = 0.015 if rt has finite variance, the same standard error is applicable for the sample autocorrelation of rt2 providing the rt also have finite kurtosis. But from Fig. 19.4, it is seen that not only the sample autocorrelations of |rt| and rt2 are all outside the 95% confidence interval but also they are all positive over long lags. Further, the sample autocorrelations for absolute returns are greater than the sample autocorrelations for squared returns at every lag up to at least 100 lags. It is clear that the S&P 500 stock market return process is not an i.i.d process. Based on the finding above, we further examined the sample autocorrelations of the transformed absolute S&P 500 returns |rt|d for various positive d, Table 19.3 gives corr(|rt|d, |rt+t|d) for d = 0.125, 0.25, 0.50, 0.75, 1, 1.25, 1.5, 1.75, 2, 3 at lags 1 to 5 and 10, 20, 40, 70, 100. Figs. 19.5, 19.6 show the autocorrelogram of |rt|d from lag 1 to 100 for d = 1, 0.50, 0.25, 0.125 in Fig. 19.5 and d = 1, 1.25, 1.5, 1.75, 2 in Fig. 19.6. From Table 19.3 and Figs. 19.5, 19.6 it is seen that the conclusion obtained above remains
0.4 0.3 0.2 0.1 0.0 0
20
40
60
80
100
Figure 19.5. d - 1, 0.5, 0.25, 0.125 from high to low.
0.5 0.4 0.3 0.2 0.1 0.0 0
20
40
60
80
100
Figure 19.6. d - 1, 1.25, 1.50, 1.75, 2 from high to low.
0.30
0.20
0.20 rho
rho
0.30
0.10
0.10 0.0
0.0 0
1
2
d
3
4
5
Figure 19.7. Autocorrelation of |r|**d at lag 1.
0
1
2
d
3
4
5
Figure 19.8. Autocorrelation of |r|**d at lag 2.
valid. All the power transformations of the absolute return have significant positive autocorrelations at least up to lag 100 which supports the claim that stock market returns have long-term memory. The autocorrelations decrease fast in the first month and then decrease very slowly. The most interesting finding from the autocorrelogram is that |rt|d has the largest autocorrelation at least up to lag 100 when d = 1 or is near 1. The autocorrelation gets smaller almost monotonically when d goes away from 1. To illustrate this more clearly, we calculate the sample autocorrelations rt(d) as a function of d, d > 0, for t = 1, 2, 5, 10 and taking d = 0.125, 0.130, . . . , 1.745, 1.750, 2, 2.25, . . . , 4.75, 5. Figs. 19.7, 19.8, 19.9 and 19.10 give the plots of calculated rt(d) at t = 1, 2, 5, 10. It is seen clearly from these figures that the autocorrelation rt(d) is a smooth function of d.
A Long Memory Property of Stock Market Returns
0.25
0.20
0.15 rho
rho
0.30
355
0.10
0.05 0.0
0.0 0
1
2
d
3
4
0
5
1
2
3
4
5
d
Figure 19.9. Autocorrelation of |r|**d at lag 5.
Figure 19.10. Autocorrelation of |r|**d at lag 10.
Figure 19.11. Autocorrelation of |r| up to lag 2500.
Table 19.4 Lags at which first negative autocorrelations of |rt|d occurs. d
0.125
0.25
0.5
0.75
1
1.25
1.5
1.75
2
3
t*
2028
2534
2704
2705
2705
2705
2705
2685
2598
520
There is a saddle point d˜ between 2 and 3 such that when d < d˜ , r.t(d) is a concave function and when d > d˜ , rt(d) is a convex function of d. There is a unique point d* around 1 such that rt(d) reaches its maximum at this point, rt(d*) > rt(d) for d π d*. In fact, |rt|d has positive autocorrelations over a much longer lags than 100. Table 19.4 shows the lags (t*) at which the first negative autocorrelation of |rt|d occurs for various d. It can be seen from the table that in most cases, |rt|d has positive autocorrelations over more than 2500 lags. Since there are about 250 working days every year, the empirical finding suggests that |rt|d has positive autocorrelations for over 10 years! We pick |rt| as a typical transform of the return series here and plot its sample autocorrelations up to lag 2500 in Fig. 19.11. The dotted
lines are 95% confidence interval for the estimated sample autocorrelation of an i.i.d process as before. It is striking that all the sample autocorrelations are not only positive but also stay outside the confidence interval. Different models have been tried to approximate this sample autocorrelation curve, including: (1) rt an exponentially decreasing function of t(rt = abt) (which is similar to the autocorrelation function of a ARMA model); (2) rt the same as the autocorrelation function of a fractionally integrated process [see Granger and Joyeux (1980)] rt = =
G(1 - b ) G(t + b ) G(b ) G(t + 1 - b ) G(1 - b ) (t + b - 1) ◊ ◊ ◊ bG(b ) G(b ) (t - b ) ◊ ◊ ◊ (1 - b )G(1 - b )
(t + b - 1) ◊ ◊ ◊ b (t - b ) ◊ ◊ ◊ (1 - b ) (t + b - 1) = r t -1 ; (t - b ) =
(2)
and (3) ρ_τ a polynomially decreasing function of τ (ρ_τ = a/τ^b), which is approximately the same as (2) when τ is large. Compared with the real data, the fitted autocorrelation from method (1) decreases too slowly at the beginning and then too quickly at the end, while methods (2) and (3) give the opposite result. The final preferred model is a combination of these methods. A theoretical autocorrelation function is specified as follows:

$$\rho_\tau = a\,\rho_{\tau-1}^{\,b_1}\,\frac{b_2^{\,\tau}}{\tau^{\,b_3}},\qquad (3)$$

which can easily be transformed into a linear model,

$$\log\rho_\tau = \log a + b_1\log\rho_{\tau-1} + \tau\log b_2 - b_3\log\tau.\qquad (4)$$

Let a* = log a, b_1* = b_1, b_2* = log b_2, and b_3* = -b_3; then

$$\log\rho_\tau = a^{*} + b_1^{*}\log\rho_{\tau-1} + b_2^{*}\tau + b_3^{*}\log\tau.\qquad (5)$$
Ordinary least squares gives the estimates

$$\log\hat\rho_\tau = \underset{(-3.9)}{-0.049} + \underset{(62.9)}{0.784}\,\log\rho_{\tau-1} - \underset{(-5.9)}{0.195\times10^{-4}}\,\tau - \underset{(-9.1)}{0.057}\,\log\tau,\qquad R^{2} = 0.92,\ \ D\text{–}W = 2.65.\qquad (6)$$
The t-statistics inside parentheses show that all the parameters are significant. After transforming the above equation back to autocorrelations, one gets:
Figure 19.12. Autocorrelation of |r| (solid line) and its fitted value (dotted line).
$$\hat\rho_\tau = 0.893\,\rho_{\tau-1}^{\,0.784}\,\frac{(0.999955)^{\tau}}{\tau^{\,0.057}}.\qquad (7)$$
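As an illustration of how the regression (5) and the back-transformation (7) could be reproduced, here is a minimal sketch (our own; `rho` is a hypothetical placeholder for the sample autocorrelations of |r_t| at lags 1, 2, . . .). Note that the reported coefficients appear to correspond to base-10 logarithms (10^{-0.049} ≈ 0.893).

```python
import numpy as np

# rho[k] is assumed to hold the sample autocorrelation of |r| at lag k+1 (placeholder values here)
rho = np.abs(np.random.default_rng(1).normal(0.2, 0.05, 2500))

tau = np.arange(2, len(rho) + 1)          # lags 2, 3, ..., so a lagged regressor is available
y = np.log10(rho[1:])                     # log10 rho_tau
X = np.column_stack([
    np.ones_like(tau, dtype=float),       # intercept            a*
    np.log10(rho[:-1]),                   # log10 rho_{tau-1}    (coefficient b1*)
    tau.astype(float),                    # tau                  (coefficient b2*)
    np.log10(tau.astype(float)),          # log10 tau            (coefficient b3*)
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a_star, b1, b2_star, b3_star = coef
a, b2, b3 = 10 ** a_star, 10 ** b2_star, -b3_star
print(a, b1, b2, b3)   # parameters of rho_tau = a * rho_{tau-1}**b1 * b2**tau / tau**b3
```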
Fig. 19.12 plots the fitted autocorrelations (dotted line) together with the sample autocorrelations themselves. It is seen that the theoretical model fits the actual sample autocorrelations quite well. Similar studies were also carried out for the New York Stock Exchange daily price index and the German daily stock market price index (DAX) over shorter sample periods (1962–1989 for the NYSE, 1980–1991 for the DAX); we obtain similar autocorrelation structures for the transformed returns. Furthermore, we carried out a cross-correlation analysis of the transformed S&P 500 and New York Stock Exchange daily return series and found that the cross-correlation is also largest when d = 1 and that it too has long memory. This suggests there may be volatility copersistence for these two stock market index prices (see Bollerslev and Engle 1992). Our conjecture is that this property will exist in most financial series.

4. SENSITIVITY OF AUTOCORRELATION STRUCTURE

We now further investigate the effect of temporal aggregation on the autocorrelation structure. Table 19.5 gives the autocorrelations of |r̄_{t,5}|^d, where r̄_{t,5} is the 5-day temporal average of r_t, i.e.

$$\bar r_{t,5} = \tfrac{1}{5}\bigl(r_{t'+1} + r_{t'+2} + \cdots + r_{t'+5}\bigr),\qquad (8)$$
where t = 1, 2, . . . , 3410 and t′ = 5(t − 1). It can be seen that temporal aggregation does not change the long memory property of the absolute return series. ρ_τ(|r̄_{t,5}|^d) still reaches a unique maximum when d is around 1 or 1.25, depending on the lag τ. Compared with the original daily series, the first-order autocorrelation of |r̄_{t,5}|^d is much bigger than the second one. Although the temporally aggregated return series here is not
Table 19.5. Autocorrelations of |r̄_{t,5}|^d.

d \ lag     1      2      3      4      5      10     20     40     70     100
0.125     0.145  0.109  0.148  0.149  0.136  0.105  0.129  0.077  0.072  0.041
0.25      0.187  0.155  0.184  0.184  0.169  0.137  0.158  0.102  0.095  0.052
0.5       0.247  0.213  0.229  0.227  0.204  0.180  0.188  0.136  0.126  0.065
0.75      0.296  0.255  0.261  0.255  0.223  0.212  0.203  0.161  0.149  0.074
1         0.332  0.279  0.279  0.267  0.227  0.233  0.205  0.175  0.163  0.079
1.25      0.352  0.286  0.282  0.263  0.217  0.243  0.197  0.178  0.168  0.080
1.5       0.356  0.277  0.271  0.245  0.196  0.242  0.180  0.173  0.164  0.076
1.75      0.349  0.255  0.250  0.217  0.169  0.231  0.160  0.160  0.153  0.069
2         0.332  0.227  0.223  0.186  0.140  0.214  0.138  0.144  0.138  0.061
3         0.237  0.109  0.115  0.075  0.048  0.124  0.068  0.079  0.073  0.026
exactly the same as a weekly return series, we expect a similar result to hold for weekly data. It should also be noted from Fig. 19.2 that the volatility structure differs considerably between the pre-war and the post-war period. The pre-war period (1928–1945) is much more volatile than the post-war period (1946–1986). It is therefore interesting to look at the memory structure of these two periods. Table 19.6 shows the autocorrelations of |r_t|^d for the pre-war period (1928–1945). It is seen that the magnitude of the autocorrelations of |r_t|^d is about the same as in Table 19.2: |r_t| has the largest autocorrelation for the first two lags, and then this property becomes strongest for |r_t|^0.75 or |r_t|^0.5. Table 19.7 gives the autocorrelations of |r_t|^d for the post-war period (1946–1986). It is clear from the table that during this less volatile period the market has both a smaller and a shorter memory, in the sense that the autocorrelations are smaller and decrease faster. The autocorrelations are only about two thirds as big as those of the pre-war period. Comparing Tables 19.2, 19.6 and 19.7, we can probably say that the long memory property found over the whole sample period can be attributed mainly to the pre-war period. The market has a strong and long memory of big events like the Great Depression of 1929 and the early 1930s, when volatility was very high.

5. MONTE-CARLO STUDY OF VARIOUS FINANCIAL TIME SERIES MODELS

The empirical findings of sections 3 and 4 have strong implications for the modeling of financial time series. Taylor (1986) showed that neither day-of-the-week effects nor a linear, correlated process can provide a satisfactory
Table 19.6. Autocorrelations of |r_t|^d, 1928–1945.

d \ lag     1      2      3      4      5      10     20     40     70     100
0.125     0.114  0.135  0.126  0.117  0.138  0.131  0.122  0.118  0.067  0.115
0.25      0.201  0.227  0.231  0.204  0.215  0.200  0.197  0.183  0.128  0.158
0.5       0.273  0.298  0.311  0.275  0.276  0.245  0.245  0.216  0.169  0.172
0.75      0.300  0.323  0.332  0.296  0.294  0.251  0.248  0.212  0.172  0.162
1         0.310  0.329  0.329  0.296  0.293  0.241  0.232  0.192  0.159  0.141
1.25      0.310  0.323  0.310  0.280  0.281  0.223  0.205  0.163  0.138  0.116
1.5       0.302  0.310  0.283  0.256  0.260  0.199  0.173  0.130  0.114  0.090
1.75      0.289  0.292  0.251  0.226  0.236  0.175  0.141  0.099  0.091  0.067
2         0.273  0.272  0.218  0.196  0.211  0.151  0.111  0.072  0.070  0.047
3         0.201  0.194  0.114  0.098  0.128  0.076  0.034  0.012  0.020  0.007
Table 19.7. Autocorrelations of |r_t|^d, 1946–1986.

d \ lag     1      2      3      4      5      10     20     40     70     100
0.125     0.089  0.062  0.054  0.057  0.086  0.047  0.054  0.051  0.041  0.038
0.25      0.129  0.095  0.086  0.102  0.126  0.082  0.082  0.068  0.058  0.053
0.5       0.162  0.128  0.121  0.141  0.158  0.111  0.106  0.082  0.068  0.066
0.75      0.181  0.151  0.143  0.164  0.175  0.126  0.119  0.088  0.067  0.068
1         0.191  0.167  0.157  0.180  0.182  0.133  0.123  0.089  0.062  0.064
1.25      0.194  0.178  0.163  0.191  0.180  0.134  0.120  0.084  0.053  0.056
1.5       0.189  0.182  0.160  0.198  0.170  0.129  0.110  0.074  0.042  0.046
1.75      0.178  0.179  0.150  0.200  0.154  0.119  0.095  0.061  0.031  0.036
2         0.163  0.170  0.135  0.199  0.133  0.105  0.078  0.047  0.021  0.027
3         0.099  0.104  0.066  0.173  0.056  0.047  0.023  0.010  0.002  0.005
explanation of the significant correlations among absolute return series, where a linear correlated process can be represented as

$$r_t = r + \sum_{i=0}^{\infty} a_i\,\varepsilon_{t-i},\qquad (9)$$
where r and a_i are constants with a_0 = 1, and ε_t is a zero-mean i.i.d. process. Taylor concludes that any reasonable model must be a nonlinear one. Furthermore, the special autocorrelation pattern of |r_t|^d found in section 3 implies that any theoretical model should also be able to capture this pattern before it can be considered "adequate". It should be noted that a process can have zero autocorrelations in levels but have autocorrelations of squares greater than those of moduli. For example, consider the following nonlinear model:
$$r_t = |s_t|\,e_t,\qquad s_t = a\,s_{t-\tau} + \eta_t,\qquad (10)$$

where e_t ~ N(0, 1), E(s_t) = E(η_t) = E(r_t) = 0, |a| < 1, e_t and η_t are stochastically independent, s_t is independent of η_{t+τ} for τ > 0, and s_t, s_{t−τ} are jointly normally distributed with variance 1; hence var(η_t) = 1 − a² and η_t ~ N(0, 1 − a²). The conditional variance of r_t when s_t is known is s_t², i.e. var(r_t|s_t) = s_t². For this model corr(r_t, r_{t−τ}) = 0, but using numerical integration it is found that, with |a| < 1,

$$\operatorname{corr}\bigl(|r_t|,|r_{t-\tau}|\bigr)
  = \frac{\tfrac{2}{\pi}\bigl(E|s_t s_{t-\tau}| - \tfrac{2}{\pi}\bigr)}{1-\tfrac{4}{\pi^{2}}}
  \;<\; \operatorname{corr}\bigl(r_t^{2},r_{t-\tau}^{2}\bigr) = \frac{a^{2}}{4}.\qquad (11)$$
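A short simulation (our own sketch, not part of the paper) can be used to estimate both autocorrelations for a chosen value of a and to compare them with the exact value a²/4 for the squares.

```python
import numpy as np

rng = np.random.default_rng(0)
a, tau, n = 0.8, 1, 500_000        # illustrative parameter values

# s_t = a*s_{t-tau} + eta_t with var(s_t) = 1;  r_t = |s_t| * e_t
eta = rng.normal(0.0, np.sqrt(1.0 - a**2), n)
s = np.empty(n)
s[:tau] = rng.standard_normal(tau)
for t in range(tau, n):
    s[t] = a * s[t - tau] + eta[t]
r = np.abs(s) * rng.standard_normal(n)

def corr_at_lag(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print("corr(r_t, r_{t-tau})     ", corr_at_lag(r, tau))          # close to zero
print("corr(|r_t|, |r_{t-tau}|) ", corr_at_lag(np.abs(r), tau))
print("corr(r_t^2, r_{t-tau}^2) ", corr_at_lag(r**2, tau), " a^2/4 =", a**2 / 4)
```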
It is thus seen that the results of Table 19.5 do not necessarily occur. One possible explanation for the large positive autocorrelation between |r_t| and |r_{t+τ}|, or between |r_t|^d and |r_{t+τ}|^d, is heteroskedasticity of the data, i.e. the variance or conditional variance is changing over time. One family of nonlinear time series models that is able to capture some aspects of the time-varying volatility structure is Engle's ARCH (AutoRegressive Conditional Heteroskedasticity) model [Engle (1982)]. In its original setting, the ARCH model is defined as a data generating process for a random variable that has a conditional normal distribution, with conditional variance a linear function of lagged squared residuals. More formally, the ARCH(p) model is defined as follows:

$$r_t = \mu + \varepsilon_t,\qquad
  \varepsilon_t = \sigma_t e_t,\quad e_t \sim N(0,1),\qquad
  \sigma_t^{2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i\varepsilon_{t-i}^{2}.\qquad (12)$$
It is easily shown that r_t is not autocorrelated, but |r_t|^d is; hence the distribution of r_t depends on r_{t−i}, i > 0. Since its introduction by Engle (1982), the ARCH model has been widely used to model time-varying volatility and the persistence of shocks to volatility. Much work has been done both theoretically and empirically, and many modifications and extensions of the original ARCH model have appeared in the literature. For example, in order to capture the long memory property of the conditional variance process, Bollerslev (1986) introduced the GARCH(p, q) model, which defines the conditional variance equation as follows:

$$\sigma_t^{2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2}.\qquad (13)$$
Taylor (1986) modeled the conditional standard deviation function instead of conditional variance. Schwert (1989), following the argument
of Davidian and Carroll (1987), modeled the conditional standard deviation as a linear function of lagged absolute residuals. The Taylor/Schwert GARCH(p, q) model defines the conditional standard deviation equation as follows:

$$\sigma_t = \alpha_0 + \sum_{i=1}^{p}\alpha_i|\varepsilon_{t-i}| + \sum_{j=1}^{q}\beta_j\sigma_{t-j}.\qquad (14)$$
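To make the Monte Carlo comparison below easy to reproduce, here is a minimal sketch (our own; the parameter values are illustrative placeholders, not the S&P 500 estimates reported later) that simulates a GARCH(1,1) of the form (13) and a Taylor/Schwert model of the form (14) and computes the autocorrelations of |r|^d.

```python
import numpy as np

def simulate(kind, n=20_000, a0=1e-6, a1=0.09, b1=0.90, seed=0):
    """Simulate returns from a GARCH(1,1) as in (13) or a Taylor/Schwert model as in (14)."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    r = np.empty(n)
    sig, eps_prev = 0.01, 0.0          # crude starting values; a burn-in is discarded below
    for t in range(n):
        if kind == "garch":            # sigma_t^2 linear in eps_{t-1}^2 and sigma_{t-1}^2
            sig = np.sqrt(a0 + a1 * eps_prev**2 + b1 * sig**2)
        else:                          # Taylor/Schwert: sigma_t linear in |eps_{t-1}| and sigma_{t-1}
            sig = a0 + a1 * abs(eps_prev) + b1 * sig
        eps_prev = sig * e[t]
        r[t] = eps_prev
    return r[1000:]

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for kind in ("garch", "taylor_schwert"):
    r = simulate(kind)
    print(kind, [round(acf(np.abs(r) ** d, 1), 3) for d in (0.5, 1.0, 1.5, 2.0)])
```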
One may, at first glance, think that it would be better to use the Taylor/Schwert model than Bollerslev's GARCH, since the former is expressed in terms of absolute returns rather than squared returns. But this conclusion is not necessarily true when the model is a nonlinear one. In fact, our Monte Carlo study shows that both Bollerslev's GARCH and the Taylor/Schwert model, with appropriate parameters, can produce the special correlation patterns found in section 3. Both models were estimated for S&P 500 returns and the following results were obtained:

(1) GARCH:

$$r_t = \underset{(7.2)}{0.000438} + \underset{(18.4)}{0.144}\,\varepsilon_{t-1} + \varepsilon_t,\qquad
\sigma_t^{2} = \underset{(12.5)}{0.0000008} + \underset{(50.7)}{0.091}\,\varepsilon_{t-1}^{2} + \underset{(43.4)}{0.906}\,\sigma_{t-1}^{2},\qquad (15)$$

log likelihood: 56822.

(2) Taylor/Schwert:

$$r_t = \underset{(7.0)}{0.0004} + \underset{(19.6)}{0.139}\,\varepsilon_{t-1} + \varepsilon_t,\qquad
\sigma_t = \underset{(12.6)}{0.000096} + \underset{(67)}{0.104}\,|\varepsilon_{t-1}| + \underset{(517)}{0.913}\,\sigma_{t-1},\qquad (16)$$
log likelihood: 56776. The first-order moving average term is included in the mean equation of both models to account for the positive first-order autocorrelation of the return series. We can see that all the parameters are highly significant in the above models. The normality test statistics of the standardized residuals for both models are far beyond the critical value of the normal distribution assumed by both models. This is not surprising, since there are certainly other factors affecting the volatility. Nevertheless, the log-likelihood value for Bollerslev's GARCH is significantly larger than that of the Taylor/Schwert model. Based on the estimation results, some simulations were performed using the parameters estimated above, assuming e_t ~ i.i.d. N(0, 1). Our
Figure 19.13. Bollerslev's GARCH model. Autocorrelation of |r|, r**2, r, from high to low.
Figure 19.14. Bollerslev's GARCH model. d = 1, 0.5, 0.25, 0.125, from high to low.
Figure 19.15. Bollerslev's GARCH model. d = 1, 1.25, 1.50, 1.75, 2, from high to low.
purpose is to check whether theoretical ARCH models can generate the same type of autocorrelations as stock market return data. Obviously, if a theoretical model does not exhibit the same pattern of autocorrelations as stock market return data, then it follows that the model is misspecified for these data. A total of 18054 observations was generated and the first 1000 were discarded, in order to be less affected by the initial value of σ_0, which was set to the unconditional standard deviation of the S&P 500 returns. Figs. 19.13, 19.14, 19.15 and 19.16, 19.17, 19.18 plot the simulated autocorrelograms of the data generated by the two models. It can be seen that the special autocorrelation pattern does exist here. For both models, |r|^d has the largest autocorrelations when d = 1, and the autocorrelation gets smaller as d moves away from 1. It is interesting that Bollerslev's GARCH model can produce this result even though the conditional variance is a linear function of squared
Figure 19.16. Taylor/Schwert model. Autocorrelation of |r|, r**2, r, from high to low.
Figure 19.17. Taylor/Schwert model. d = 1, 0.5, 0.25, 0.125, from high to low.
Figure 19.18. Taylor/Schwert model. d = 1, 1.25, 1.50, 1.75, 2, from high to low.
returns. For Bollerslev's GARCH model, the autocorrelation between |r_t| and |r_{t+τ}| is very close to that between |r_t|^1.25 and |r_{t+τ}|^1.25. But for the Taylor/Schwert model, the autocorrelation between |r_t| and |r_{t+τ}| after lag 40 is close to that between |r_t|^0.5 and |r_{t+τ}|^0.5. One major difference between the autocorrelograms of the two simulated series and the real data is that the autocorrelations of the real data decrease rapidly in the first month and then decrease very slowly over a long period, whereas the autocorrelations of the two simulated series decrease at an almost constant rate over time.

6. A NEW MODEL – ASYMMETRIC POWER ARCH

The Monte Carlo study shows that the ARCH model generally captures the special pattern of autocorrelation existing in many stock market
returns data. Both Bollerslev's GARCH and Taylor/Schwert's GARCH-in-absolute-value model can produce this property. There seems to be no obvious reason why one should assume the conditional variance is a linear function of lagged squared returns (residuals), as in Bollerslev's GARCH, or the conditional standard deviation a linear function of lagged absolute returns (residuals), as in the Taylor/Schwert model. Fortunately, a more general class of models is available which includes Bollerslev's GARCH, Taylor/Schwert and five other models in the literature as special cases. The general structure is as follows:

$$\varepsilon_t = \sigma_t e_t,\quad e_t \sim N(0,1),\qquad
\sigma_t^{\delta} = \alpha_0 + \sum_{i=1}^{p}\alpha_i\bigl(|\varepsilon_{t-i}| - \gamma_i\varepsilon_{t-i}\bigr)^{\delta} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{\delta},\qquad (17)$$

where
α_0 > 0, δ ≥ 0, α_i ≥ 0, i = 1, . . . , p, −1 < γ_i < 1, i = 1, . . . , p, β_j ≥ 0, j = 1, . . . , q. The model imposes a Box–Cox power transformation on the conditional standard deviation process and on the asymmetric absolute residuals. By using this transformation we can linearize otherwise nonlinear models. The functional form for the conditional standard deviation is familiar to economists as the constant elasticity of substitution (CES) production function of Arrow et al. (1961). The asymmetric response of volatility to positive and negative "shocks" is well known in the finance literature as the leverage effect of stock market returns [Black (1976)], which says that stock returns are negatively correlated with changes in return volatility – i.e. volatility tends to rise in response to "bad news" (excess returns lower than expected) and to fall in response to "good news" (excess returns higher than expected) [Nelson (1991)]. Empirical studies by Nelson (1991), Glosten, Jaganathan and Runkle (1989) and Engle and Ng (1992) show that it is crucial to include the asymmetric term in financial time series models [for a detailed discussion, see Engle and Ng (1992)]. This generalized version of the ARCH model includes seven other models (see appendix A) as special cases. We will call this model the Asymmetric Power ARCH model and denote it A-PARCH. If we assume that the distribution of r_t is conditionally normal, then the condition for the existence of Eσ_t^δ and E|ε_t|^δ is (see appendix B):

$$\frac{1}{\sqrt{2\pi}}\sum_{i=1}^{p}\alpha_i\bigl\{(1+\gamma_i)^{\delta}+(1-\gamma_i)^{\delta}\bigr\}\,2^{(\delta-1)/2}\,\Gamma\!\Bigl(\frac{\delta+1}{2}\Bigr)+\sum_{j=1}^{q}\beta_j<1.\qquad (18)$$
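For concreteness, the following small sketch (our own; the parameter values are hypothetical, not the S&P 500 estimates reported below) evaluates the left-hand side of condition (18) for an A-PARCH(1,1) specification and simulates the recursion in (17).

```python
import numpy as np
from math import gamma, sqrt, pi

# Illustrative A-PARCH(1,1) parameters (hypothetical, chosen to satisfy (18))
a0, a1, g1, b1, delta = 1e-5, 0.08, 0.4, 0.90, 1.4

# Left-hand side of the moment condition (18)
lhs = (a1 * ((1 + g1) ** delta + (1 - g1) ** delta)
       * 2 ** ((delta - 1) / 2) * gamma((delta + 1) / 2) / sqrt(2 * pi) + b1)
print("condition (18):", lhs, "< 1 ?", lhs < 1)

# Simulate the recursion (17): sigma_t^delta = a0 + a1*(|eps|-g1*eps)^delta + b1*sigma_{t-1}^delta
rng = np.random.default_rng(0)
n = 20_000
eps = np.zeros(n)
sig_d = a0 / (1 - lhs)          # start sigma^delta near its unconditional mean
for t in range(1, n):
    sig_d = a0 + a1 * (abs(eps[t - 1]) - g1 * eps[t - 1]) ** delta + b1 * sig_d
    eps[t] = sig_d ** (1 / delta) * rng.standard_normal()
```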
If this condition is satisfied, then when δ ≥ 2 we have that ε_t is covariance stationary; but δ ≥ 2 is only a sufficient condition for ε_t to be covariance stationary. The new model is estimated for the S&P 500 return series by the maximum likelihood method using the Berndt–Hall–Hall–Hausman algorithm. The estimated model is as follows:

$$r_t = \underset{(3.2)}{0.00021} + \underset{(19.0)}{0.145}\,\varepsilon_{t-1} + \varepsilon_t,$$
$$\sigma_t^{1.43} = \underset{(4.5)}{0.000014} + \underset{(32.4)}{0.083}\,\bigl(|\varepsilon_{t-1}| - \underset{(-20.7)}{0.373}\,\varepsilon_{t-1}\bigr)^{1.43} + \underset{(474)\,(33.7)}{0.920\,\sigma_{t-1}^{1.43}},\qquad (19)$$
log likelihood: 56974. The estimated δ is 1.43, which is significantly different from 1 (the Taylor/Schwert model) and from 2 (Bollerslev's GARCH). The t-statistic for the asymmetric term is 32.4, which is very significant, implying that the leverage effect does exist in S&P 500 returns. Using the estimated log-likelihood values, a nested test can easily be constructed against either Bollerslev's GARCH or the Taylor/Schwert model. Let l_0 be the log-likelihood value under the null hypothesis that the true model is Bollerslev's GARCH and l the log-likelihood value under the alternative that the true model is A-PARCH; then 2(l − l_0) should have a χ² distribution with 2 degrees of freedom when the null hypothesis is true. But in our example 2(l − l_0) = 2(56974 − 56822) = 304, which is far beyond the critical value at any reasonable level. Hence we can reject the hypothesis that the data are generated by Bollerslev's GARCH model. The same procedure is applicable to the Taylor/Schwert model, and we can also reject it.

7. CONCLUSION
In this paper, a "long-memory" property of stock market return series is investigated. We found not only that there is substantially more correlation between absolute returns than between the returns themselves, but also that the power transformation of the absolute return, |r_t|^d, has quite high autocorrelation at long lags. Furthermore, for fixed lag τ, the function ρ_τ(d) = corr(|r_t|^d, |r_{t+τ}|^d) has a unique maximum when d is around 1. This result appears to argue against ARCH-type specifications based upon squared returns. But our Monte Carlo study shows that both ARCH-type models based upon squared returns and those based upon absolute returns can produce this property. The ARCH specification based upon a linear relationship among absolute returns is neither necessary nor sufficient for such a property. Finally, we propose a new general class of ARCH models, which we call the Asymmetric Power ARCH model and denote A-PARCH. The new model encompasses seven other models in
the literature. We estimated the new model on S&P 500 returns, and the estimated power δ of the conditional heteroskedasticity function is 1.43, which is significantly different from 1 (the Taylor/Schwert model) and from 2 (Bollerslev's GARCH).

APPENDIX A

We now show that the new model includes the following seven ARCH models as special cases.
(1) Engle's ARCH(p) model [see Engle (1982)]: let δ = 2 and γ_i = 0, i = 1, . . . , p, β_j = 0, j = 1, . . . , q in the new model.
(2) Bollerslev's GARCH(p, q) model [see Bollerslev (1986)]: let δ = 2 and γ_i = 0, i = 1, . . . , p.
(3) Taylor/Schwert's GARCH in standard deviation model: let δ = 1 and γ_i = 0, i = 1, . . . , p.
(4) The GJR model [see Glosten et al. (1989)]: let δ = 2. When δ = 2 and 0 ≤ γ_i < 1, we have
$$\begin{aligned}
\sigma_t^{2} &= \alpha_0 + \sum_{i=1}^{p}\alpha_i\bigl(|\varepsilon_{t-i}| - \gamma_i\varepsilon_{t-i}\bigr)^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2}\\
 &= \alpha_0 + \sum_{i=1}^{p}\alpha_i(1-\gamma_i)^{2}\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2}
   + \sum_{i=1}^{p}\alpha_i\bigl\{(1+\gamma_i)^{2}-(1-\gamma_i)^{2}\bigr\}S_i^{-}\varepsilon_{t-i}^{2}\\
 &= \alpha_0 + \sum_{i=1}^{p}\alpha_i(1-\gamma_i)^{2}\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2}
   + \sum_{i=1}^{p}4\alpha_i\gamma_i S_i^{-}\varepsilon_{t-i}^{2},
\end{aligned}$$

where

$$S_i^{-} = \begin{cases}1 & \text{if } \varepsilon_{t-i} < 0,\\ 0 & \text{otherwise.}\end{cases}$$

If we further define

$$\alpha_i^{*} = \alpha_i(1-\gamma_i)^{2},\qquad \gamma_i^{*} = 4\alpha_i\gamma_i,$$

then we have

$$\sigma_t^{2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i^{*}\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2} + \sum_{i=1}^{p}\gamma_i^{*}S_i^{-}\varepsilon_{t-i}^{2},$$

which is exactly the GJR model.
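As a quick numerical check of this algebra (our own, not from the paper), one can verify that for δ = 2 and 0 ≤ γ < 1 the A-PARCH news term coincides with the GJR form term by term.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 0.05, 0.3          # illustrative values with 0 <= gamma < 1
eps = rng.standard_normal(10_000)

aparch_term = alpha * (np.abs(eps) - gamma * eps) ** 2
gjr_term = (alpha * (1 - gamma) ** 2 * eps**2          # alpha_i* * eps^2
            + 4 * alpha * gamma * (eps < 0) * eps**2)  # gamma_i* * S^- * eps^2

print(np.max(np.abs(aparch_term - gjr_term)))   # ~0 up to floating-point error
```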
When −1 < γ_i < 0 we have

$$\sigma_t^{2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i(1+\gamma_i)^{2}\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2} - \sum_{i=1}^{p}4\alpha_i\gamma_i S_i^{+}\varepsilon_{t-i}^{2},$$

where

$$S_i^{+} = \begin{cases}1 & \text{if } \varepsilon_{t-i} > 0,\\ 0 & \text{otherwise;}\end{cases}$$

defining

$$\alpha_i^{*} = \alpha_i(1+\gamma_i)^{2},\qquad \gamma_i^{*} = -4\alpha_i\gamma_i,$$

we have

$$\sigma_t^{2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i^{*}\varepsilon_{t-i}^{2} + \sum_{j=1}^{q}\beta_j\sigma_{t-j}^{2} + \sum_{i=1}^{p}\gamma_i^{*}S_i^{+}\varepsilon_{t-i}^{2},$$
which allows positive shocks to have a stronger effect on volatility.
(5) Zakoian's TARCH model [see Zakoian (1991)]: let δ = 1 and β_j = 0, j = 1, . . . , q. We have

$$\sigma_t = \alpha_0 + \sum_{i=1}^{p}\alpha_i\bigl(|\varepsilon_{t-i}| - \gamma_i\varepsilon_{t-i}\bigr)
         = \alpha_0 + \sum_{i=1}^{p}\alpha_i(1-\gamma_i)\varepsilon_{t-i}^{+} - \sum_{i=1}^{p}\alpha_i(1+\gamma_i)\varepsilon_{t-i}^{-},$$

where

$$\varepsilon_{t-i}^{+} = \begin{cases}\varepsilon_{t-i} & \text{if } \varepsilon_{t-i} > 0,\\ 0 & \text{otherwise,}\end{cases}
\qquad\text{and}\qquad
\varepsilon_{t-i}^{-} = \varepsilon_{t-i} - \varepsilon_{t-i}^{+}.$$

So by defining α_i^+ = α_i(1 − γ_i) and α_i^- = α_i(1 + γ_i), we have

$$\sigma_t = \alpha_0 + \sum_{i=1}^{p}\alpha_i^{+}\varepsilon_{t-i}^{+} - \sum_{i=1}^{p}\alpha_i^{-}\varepsilon_{t-i}^{-},$$
which is the exact TARCH form. If we further let β_j ≠ 0, j = 1, . . . , q, then we get a more general class of TARCH models.
(6) Higgins and Bera's NARCH model [see Higgins and Bera (1990)]: let γ_i = 0, i = 1, . . . , p, and β_j = 0, j = 1, . . . , q. Our model becomes
$$\sigma_t^{\delta} = \alpha_0 + \sum_{i=1}^{p}\alpha_i|\varepsilon_{t-i}|^{\delta},\qquad\text{i.e.}\qquad
(\sigma_t^{2})^{\delta/2} = \alpha_0 + \sum_{i=1}^{p}\alpha_i(\varepsilon_{t-i}^{2})^{\delta/2}.$$

Define δ* = δ/2 and

$$\alpha_0 = \alpha_0^{*}\,\omega^{\delta/2} = \Bigl(1 - \sum_{i=1}^{p}\alpha_i\Bigr)\omega^{\delta^{*}}.$$

We then have exactly Higgins and Bera's NARCH.
(7) Geweke (1986) and Pantula (1986)'s log-ARCH model. The log-ARCH model is the limiting case of our model when δ → 0. Since
d
std = a 0 + Â a i ( e t -i - g i e t -i ) + Â b j std- j , i =1
j =1
decompose a0 as: p q Ï ¸ d a o = Ì1 - Â a i E ( et -i - g i et -i ) - Â b j ˝w d Ó i =1 ˛ j =1
= a 0*w d , hence Esdt = wd. Then we have d p q std - 1 Ï ¸ (w - 1) d = Ì1 - Â a i E ( et -i - g i et -i ) - Â b j ˝ d Ó i =1 ˛ d j =1 d
p
+ Âai i =1
( e t -i - g i e t -i ) - 1 d
q
+ Âbj j =1
std- j - 1 d
d
p
- Âai i =1
E ( et - i - g i et - i ) - 1 , d
when d Æ 0 the above equation becomes p
q
Ï ¸ d log s t = Ì1 - Â a i lim E ( e t - i - g i e t - i ) - Â b j ˝ log w dÆ 0 Ó ˛ i =1 j =1 p
q
+ Â a i log ( e t - i - g i e t - i ) + Â b j log s t - j i =1 p
- Â a i log E ( e t - i - g i e t - i ) i =1
j =1
A Long Memory Property of Stock Market Returns
p
369
p
= a 0* log w - Â a i log 2 p + Â a i log ( e t - i - g i e t - i ) i =1
j =1
q
+ Â b j log s t - j , j =1
where p q Ï ¸ d a 0* = Ì1 - Â a i lim E ( et -i - g i et -i ) - Â b j ˝ d Æ 0 Ó i =1 ˛ j =1 p q Ï ¸ = Ì1 - Â a i - Â b j ˝, Ó i =1 ˛ j =1
since lim_{δ→0} E(|e_{t−i}| − γ_i e_{t−i})^δ = 1. This is a generalized version of the Geweke/Pantula model. If we further let γ_i = 0, i = 1, . . . , p, and β_j = 0, j = 1, . . . , q, then we get the exact Geweke/Pantula model.

APPENDIX B. CONDITIONS FOR THE EXISTENCE OF Eσ_t^δ AND E|ε_t|^δ

If we assume the distribution is conditionally normal, then the condition for the existence of Eσ_t^δ for the new model is
d
q
 a i E( et -i - g i et -i ) +  b j < 1, where i =1
j =1 2
x 1 +• d ( x - g i x ) e 2 dx Ú 2p -• d -1 1 d + 1ˆ d d (1 + g i ) + (1 - g i ) 2 2 GÊ . Ë 2 ¯ 2p
d
E ( et - i - g i et - i ) =
[
=
]
So the condition becomes p
1 2p
[
d
d
]
 a i (1 + g i ) + (1 - g i ) 2 i =1
d -1 2
q
G
Ê d + 1ˆ + b j < 1. Ë 2 ¯ Â j =1
(B1)
Since E et
d
d
= E et Estd =
1 d2 Ê d + 1 ˆ d 2 G Es . Ë 2 ¯ t p
So the condition for the existence of E|et|d is the same as that of Esdt . The proof of the above results is almost identical to the proof of theorem 1 in Bollerslev (1986). When condition (B1) is satisfied, we have the unconditional expectation of sdt as follows
370
Z. Ding, C. W. J. Granger and R. F. Engle
p q ˆ Ê d Estd = a 0 Á 1 - Â a i E ( et -i - g i et -i ) - Â b j ˜ ¯ Ë i =1 j =1
= wd E et
d
and
=
1 d2 Ê d + 1 ˆ d 2 G Es Ë 2 ¯ t p
=
1 d2 Ê d + 1 ˆ d 2 G w . Ë 2 ¯ p
In its special case, when d = 2, and gi = 0, we have the covariance stationarity condition for et as p
1 2p
Âa
=
1 p
i
i =1
Ê 2Á 2 Ë
2 -1 2
p
Âai 2 i =1
p
ˆ Ê 2 + 1ˆ q + Âbj ˜ GË ¯ 2 ¯ j =1
q Ê 1ˆ Ê 1ˆ G + Âbj Ë 2 ¯ Ë 2 ¯ j =1
q
= Âai + Â b j < 1 i =1
j =1
which is the same as that derived by Bollerslev (1986). When d = 2 and gi π 0, we have the covariance stationarity condition for GJR model as 1 2p
p
 a [(1 + g i
2
i
2
]
) + (1 - g i ) 2
i =1
p
2 -1 2
q
G
Ê 2 + 1ˆ + bj Ë 2 ¯ Â j =1
q
= Â a i [1 + g i ] + Â b j < 1. i =1
j =1
When d = 1 and gi = 0, we have the condition for existence of Est and E|et| of Taylor/Schwert model 1 2p
p
Âa
q i
2G(1) + Â b j
i =1
j =1 p
q
= 2 p  a i +  b j < 1, i =1
j =1
Since √(2/π) < 1, even if Σ_{i=1}^p α_i + Σ_{j=1}^q β_j > 1 it can still be true that Eσ_t or E|ε_t| exists and is finite; this condition is weaker than the covariance stationarity condition of the model. It is possible that E|ε_t|² does not exist and ε_t is not covariance stationary even if this condition is satisfied. When δ = 1 and γ_i ≠ 0, we have the existence condition of Eσ_t^δ and E|ε_t| for the Asymmetric Taylor/Schwert model or the generalized Zakoian
model, which is the same as that for the Taylor/Schwert model: √(2/π) Σ_{i=1}^p α_i + Σ_{j=1}^q β_j < 1. Under the assumption that

$$\frac{1}{\sqrt{2\pi}}\sum_{i=1}^{p}\alpha_i\bigl[(1+\gamma_i)^{\delta}+(1-\gamma_i)^{\delta}\bigr]2^{(\delta-1)/2}\,\Gamma\!\Bigl(\frac{\delta+1}{2}\Bigr)+\sum_{j=1}^{q}\beta_j<1,$$
i.e. the δth moments of σ_t and |ε_t| exist, then if δ ≥ 2 we have that ε_t is covariance stationary. If δ ≥ 1 then Eσ_t and E|ε_t| exist and are finite. But δ ≥ 2 is only a sufficient condition for the process ε_t to be covariance stationary.
REFERENCES

Black, Fisher, 1976, Studies in stock price volatility changes, Proceedings of the 1976 business meeting of the business and economics statistics section, American Statistical Association, 177–181.
Bollerslev, T., 1986, Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics 31, 307–327.
Bollerslev, T. and R. F. Engle, 1992, Common persistence in conditional variance, forthcoming in Econometrica.
Davidian, M. and R. J. Carroll, 1987, Variance function estimation, Journal of the American Statistical Association 82, No. 400, 1079–1091.
Eatwell, J., M. Milgate and P. Newman (eds.), The new Palgrave: Finance (New York, Norton).
Engle, R. F., 1982, Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation, Econometrica 50, 987–1008.
Engle, R. F., 1990, Discussion: stock volatility and the crash of '87, Review of Financial Studies, Vol. 3, No. 1, 103–106.
Engle, R. F. and T. Bollerslev, 1986, Modeling the persistence of conditional variances, Econometric Reviews 5, 1–50, 81–87.
Engle, R. F., David Lilien and Russ Robins, 1987, Estimating time varying risk premia in the term structure: The ARCH-M model, Econometrica 55, 391–407.
Engle, R. F. and G. Gonzalez-Rivera, 1991, Semiparametric ARCH models, Journal of Business and Economic Statistics 9, 345–360.
Engle, R. F. and V. Ng, 1992, Measuring and testing the impact of news on volatility, forthcoming in Journal of Finance.
Fama, E. F., 1970, Efficient capital markets: A review of theory and empirical work, Journal of Finance 25, 383–417.
Fama, E. F., 1976, Foundations of finance: Portfolio decision and security prices, New York: Basic Books Inc.
French, Ken, William Schwert and Robert Stambaugh, 1986, Expected stock returns and volatility, Journal of Financial Economics 19, 3–29.
Glosten, L., R. Jaganathan and D. Runkle, 1989, Relationship between the expected value and the volatility of the nominal excess return on stocks,
unpublished manuscript, J. L. Kellogg Graduate School, Northwestern University.
Granger, C. W. J., 1991, Forecasting stock market prices: Lessons for forecasters, UCSD Working Paper.
Granger, C. W. J., 1980, Long memory relationships and the aggregation of dynamic models, Journal of Econometrics 14, 227–238.
Granger, C. W. J. and A. P. Andersen, 1978, An introduction to bilinear time series models, Vandenhoeck and Ruprecht, Gottingen.
Granger, C. W. J. and R. Joyeux, 1980, An introduction to long-memory time series models and fractional differencing, Journal of Time Series Analysis 1, 15–29.
Granger, C. W. J. and O. Morgenstern, 1970, Predictability of stock market prices, Heath-Lexington Press.
Granger, C. W. J. and Paul Newbold, 1986, Forecasting economic time series, New York, Academic Press.
Hamao, Y., R. W. Masulis and V. Ng, 1990, Correlation in price changes and volatility across international stock markets, Review of Financial Studies, Vol. 3, No. 2, 281–307.
Higgins, M. and A. Bera, 1990, A class of nonlinear ARCH models, Working Paper, Department of Economics, University of Wisconsin at Milwaukee.
Kariya, T., Y. Tsukuda and J. Maru, 1990, Testing the random walk hypothesis for Japanese stock prices in S. Taylor's model, Working Paper, University of Chicago.
Nelson, D. B., 1990, Stationarity and persistence in the GARCH(1, 1) model, Econometric Theory 6, 318–334.
Nelson, D. B., 1991, Conditional heteroskedasticity in asset returns: A new approach, Econometrica, Vol. 59, No. 2, 347–370.
Schwert, W., 1990, Stock volatility and the crash of '87, Review of Financial Studies, Vol. 3, No. 1, 77–102.
Taylor, S., 1986, Modeling financial time series, New York, John Wiley & Sons.
Zakoian, J., 1991, Threshold heteroskedasticity model, unpublished manuscript, INSEE.
Index
Abadir, M., 256, 258n2, 266 Adelman, I., 19 advertising, and aggregate consumption, 84–104 aggregation, and error correction models, 134–5 Ahn, S. K., 16, 239 Ahtola, J., 194 Alexander, S., 3 Alternating Conditional Expectations (ACE), 292–4 Andersen, A. P., 4, 6, 17, 57 Anderson, H. M., 220 Anderson, T. W., 4, 19, 214n2, 239 Aoki, M., 236 Ashley, R., 12, 66–7 AutoRegressive Conditional Heteroskedasticity (ARCH) model, 360–6, 367–9 Baba, Y., 14 Bachelier, M. L., 2 Balke, N., 15 Banerjee, A., 302 Basman, R. L., 35 Bates, J. M., 9 Bell, W. R., 3, 191, 238 Bera, A., 367–8 Beveridge, S., 17, 238, 258, 303 Bhargava, A., 160, 179, 194 Bierens, H., 6 bivariate attractor, and long memory, 289–92 bivariate feedback model, and error correction, 133–4
Black, F., 364 Black, H., 56 Blalock, H. M., Jr., 54 Blanchard, O. J., 16 Blank, D. M., 85–6, 94, 103 Bollerslev, T., 19, 357, 360, 366, 369, 370 Box, G. E. P., 4, 9–10, 15, 66, 76, 109, 116, 121, 143, 146, 190–1, 234 Box-Jenkins models, 85, 92, 174, 274. See also Box, G. E. P.; Jenkins, G. M. Breiman, L., 292–4 Brotherton, T., 56 Buiter, W. H., 81–2 Bunch, M., 50 Caines, P. E., 56, 65 Campbell, J. Y., 3, 15–16, 168, 213n1, 214n2, 217 Carroll, R. J., 361 causality. See Granger causality Center for Research in Securities Prices (CRSP), 218 central limit theorem, 6–7 Chan, C. W., 65 Chan, N. H., 16, 197 chaos and chaos theory, 5 Chiang, C., 65, 89n10 Christoffersen, P., 8 Ciccolo, J. H., Jr., 56 Citibank Economic Database, 137 Clarke, D. G., 86 Clemen, R. T., 9 Cleveland, W. P., 3
Cochrane, J., 241 cointegration, 13–18, 74–6, 129–43, 145–70, 173–86, 189–201, 203, 212–30, 232–51, 254–67, 281–4, 302–17 conditional causality, 80 constant elasticity of substitution (CES), 364 consumption GNP and stock dividends and prices, 241–3 income and co-integrating regression of, 166–7 and advertising, 84–104 Cootner, P., 3 Corradi, V., 8n2 cost, of advertising, 94. See also price cost-benefit analysis, 8 Cowles, A., 3 Cowles Foundation, 7 Cox, D. R., 9, 160 Cox, J. C., 217 Cramer, H., 2, 184 Cramer representation, 32 Currie, D., 130, 149, 177
189, 202–6, 213n1, 214n2, 216, 232, 237, 283, 287, 294, 297, 302, 357, 360, 364, 366 Ericsson, N. R., 257 error correction and causality, 75 and cointegration, 145–70, 182, 185–6, 216–18, 225–9, 234, 264–6, 308–9, 311–15 and seasonality, 201–5 and time series analysis of models, 129–43 Escribano, A., 178, 185 Evans, G. B. A., 160 exogeneity, and testing for causality, 68 expected rates, of treasury bills, 214
Davidian, M., 361 Davidson, J., 5, 130, 149, 177, 207 Davies, R. R., 160 Dawson, A., 130, 149, 177 Delgado, M. A., 6 Deutsch, M., 7, 9 Dickey, D. A., 155, 160, 162, 178, 189, 194, 197. See also DickeyFuller test Dickey-Fuller (DF) test, 161–8, 270–1, 273, 275–6, 277f, 278–81, 283–4, 294, 296t, 297–9 Diebold, F. X., 8 Ding, Z., 19 Dolado, J. J., 11, 302 Durbin, J., 86n7 Durbin-Watson statistic, 109, 156, 161, 168 Durlauf, S. N., 16, 163, 166
factor model, and long memory components, 234–8 Fair, R. C., 8n2 Fama, E., 3, 351–2 Federal Reserve, 212–30 feedback, and causal relations, 34–6 Feige, E. L., 56 Feller, W., 147 Fomby, T. B., 15 forecasting and cointegration analysis of treasury bill yields, 228, 229t and long-memory models, 328–32 fractional differencing, and longmemory models, 321–36 fractional integrated series, and error correction models, 143 Franses, P. H., 305 Friedman, J. H., 292–4 Friedman, M., 2 Frisch, R., 15 Fuller, W. A., 4, 155, 158, 160, 162, 178, 189, 191, 194, 197. See also Dickey-Fuller test full information maximum likelihood (FIML) model, 225, 227t, 228
Ekelund, R. G., 85 Engle, R. F., 5, 10, 16–17, 19, 73, 149, 154, 160, 166, 176, 178–80, 185,
Galbraith, J. W., 302 Gallant, A. R., 5 GARCH model, 360–6
Index Geweke, J., 11, 67, 235–6, 368–9 Ghysels, E., 4, 17 GJR model, 366, 370 Glosten, L., 364, 366 GNP and deflator, 86 money supply and error correction models, 139–40 and stock dividends and prices, 241–3 Gonzalo, J., 17, 240, 248, 255, 258, 303 Good, I. J., 36, 53, 62 Goodhart, C. A. E., 56 Gordon, R. J., 56 Gowland, D. H., 56 Gradshteyn, I. S., 323 Gramm, W. P., 85 Granger, C. W. J. See specific subjects Granger causality, 10–12, 31–46, 48–69, 71–82, 84–104 Granger Representation Theorem, 150–2 Great Depression, 351 Grether, D. M., 191 Gross National Product. See GNP Gumbel, D., 8 Haldrup, N., 18 Hall, A. D., 17, 214n2, 220 Hall, R. E., 166 Hallman, J., 9, 18, 205–6, 270, 272, 275, 284, 289, 300, 307 Hamao, Y., 353 Hamilton, J. D., 16 Hannan, E. J., 3–4, 150 Hansen, L. P., 7 Härdle, W., 6 Hardouvelis, G. A., 219n4 Hart, H. L. A., 50 Hasza, H. P., 194, 197 Hatanaka, M., 2–3, 11, 32–5, 42n1, 56, 302 Haugh, L. D., 63–4, 77 Heller, W., 5 Hendry, D. F., 13–14, 16, 73, 126, 130, 149, 154, 177, 207, 219, 225, 302 Higgins, M., 367–8
375 Hillmer, S. C., 191 Hinkley, C. V., 160 Hipel, K. W., 18, 121, 323, 327–8, 332 Hoffman, D. L., 8, 14 Holland, D. W., 73, 76, 79 Honore, A. M., 50 Hoover, K. D., 11 Horvath, M. T. K., 14 Hosking, J. R. M., 143 Hosoya, Y., 11, 58, 258n3 Huizinga, J., 219n4 Hume, D., 54 Hurst, H. E., 18 Hylleberg, S., 16–17 impulse response analysis, 256n1 income consumption and co-integrating regression of, 166–8 error-correction models and analysis of employees’ and national, 137–9 information sets, and causality, 71 Ingersoll, J. E., 217 Inoue, T., 307 instantaneous causality, 34–6, 38–9, 43, 59, 68, 76–80, 101 integrated seasonal processes, 190–1 integration, 12–13, 121–5, 146–9, 189–210, 269–84, 286–300 interest rates, and long memory, 243–8, 296–300 Jaganathan, R., 364 Jenkins, G. M., 4, 10, 66, 76, 116, 121, 143, 146, 190–1 Johansen, S., 15–16, 150, 182–3, 220, 234, 238–40, 251, 257–8, 266, 302, 309 Joyeux, R., 19, 122, 143, 182, 356 Juselius, K., 220, 238, 251 Kailath, T., 203 Kariya, T., 352 Kasa, K., 236, 258 Khintchine, A., 1 King, R., 16, 18, 311 Klein, L., 14 Knez, P., 229n8
Kolb, R. A., 8n2 Kolmogorov, A. N., 1 Konishi, T., 254, 257, 266, 309 Kosobud, R., 14 Kottegoda, N. T., 18, 122, 323, 329 Kozicki, S., 237 Lasota, A., 264, 307 Lawrence, A. J., 18, 121, 323, 329 leading indicators, and testing for causality, 61–2 Lee, T.-H., 5 Leitch, G., 8 Li, Q., 6 likelihood ratio (LR) test, 245 Lin, J.-L., 8, 11, 236, 258n3, 303 linear models, and cointegrated variables, 314t Linton, O., 6 Lippi, M., 19 Litterman, R., 229n8 Liu, T., 5 Lo, A. W., 3, 19 Lobato, I., 19 long memory, 191, 232–51, 286–300, 321–36, 349–71 Lucas, R. E., 73 Lütkepohl, H., 11 Mackey, M. C., 264, 307 MacKinley, A. C., 3 Malinvaud, E., 113 Mandelbrot, B. B., 3, 19, 122–3, 327, 351 Mann, H. B., 1 Mariano, R. S., 8n2 Marketing/Communications Index, 94n16 maturity, and yields of treasury bills, 216 McCann-Erickson Index, 94n16, 103 McConnell, C. R., 85 McLeish, D. L., 5 McLeod, A. I., 18, 121, 323, 327–8, 332 mean-squared forecast errors (MSE), 333–4 Meese, R. A., 8n2 Mehra, Y. P., 56
Mellander, E., 258 Mincer, J., 9 Mishkin, F. S., 219n4 Mizon, G. E., 13 Mizrach, B., 8n2 Mokkadem, A., 307 Moore, G. H., 313 Morgenstern, O., 2 Morin, N., 307 Morris, M., 6 Neilson, J. P., 6 Nelson, C. R., 17, 160, 174, 189, 238, 258, 303 Nelson, D. B., 364 Nerlove, M., 3–4, 32, 191 Newbold, P., 4, 7, 8n2, 9, 12–13, 56, 60, 65–6, 76, 89, 92, 109, 111, 136, 147, 156, 321 New York Stock Exchange, 357 Ng, V., 364 non-causality, definitions of, 71–3 Olivetti, C., 8n2 one-step forecasts, 62–3 one-way causal model, 130–2 Orcutt, G. H., 11, 35 Ouliaris, S., 16 Overseth, O. E., 67 Pagan, A. R., 13 Palm, F., 7 Pearce, D. K., 56 Peña, D., 234 Pesaran, M. H., 8 Phillips, A. W., 177 Phillips, P. C. B., 1, 11, 16, 148, 163, 166, 234, 270 Pierce, D. A., 3, 63–4, 77 Plosser, C. I., 160, 174, 189, 311, 321 price consumption and stock dividends, 241–3 wages and productivity in transportation industry and error correction models, 140–2 U. S. monthly index of consumer, 334–5. See also cost Price, J. M., 64
Index Priestley, M. B., 4, 184 productivity, and prices and wages in transportation industry, 140–2 Proietti, T., 258–9 purely deterministic seasonal process, 190 Quah, D., 16, 233, 235, 238, 258 Ramanathan, R., 10 Ramey, V., 309 random walk hypothesis long-memory models and forecast errors, 334 and nonlinear transformations, 272 and regression analysis, 111, 113 and spectral analysis, 2–3 Rasche, R. H., 8, 14 Reid, D. J., 111 Reinsel, G. C., 16, 239 Rice, J., 5, 185 Richard, J. F., 73, 149, 154 Rissman, E., 15 Robinson, P. M., 5–6, 19 Roesler, T. W., 85 Rogoff, K., 8n2 Rosenbaum, P. R., 79 Ross, S. A., 217 Runkle, D., 364 Russell, B., 53 Ryztik, I. M., 323 Saikkonen, P., 16 Salmon, M., 149, 177 Samuelson, P., 3 Sargan, J. D., 13, 126, 130, 148, 160, 177, 179 Sargent, T. F., 56 Sargent, T. J., 2, 7, 73, 81–2 Savin, N. E., 19, 160 Scheinkman, J., 229n8 Schmalensee, R., 12, 66–7, 85n5, 86–7 Schwert, G. W., 321, 360–1 seasonality, 3–4, 143, 189–210 Sethi, S. P., 56 Sheppard, D. K., 117
377 Shiller, R. J., 8n2, 16, 213n1, 214n2, 217 Siklos, P. L., 4 Simon, H. A., 11, 35, 54, 62 Simon, J. L., 85n5 simple causal models, 34–5, 39, 44 Sims, C. A., 3–4, 7, 11, 54, 56, 59, 60, 65, 67–8, 88, 91, 100n21, 101, 102n25 Skoog, G. R., 56 Slutsky, E., 1, 12 social sciences, and causality, 50 spectral analysis, 1–3, 32–5 Spohn, W., 73 Srba, F., 130, 149, 177, 207 Stambaugh, R. F., 229n8 Standard & Poor price index, 350, 357 Star, R. M., 14 stationary seasonal process, 190–1 Stegun, I., 100n23 Stekler, H. O., 8n2 Stock, J. H., 6, 14, 16, 155–8, 205, 214n2, 217, 233, 238, 250, 254, 258–9, 311 Strotz, R. H., 35 Stuetzle, W., 293 Suppes, P., 53, 62, 79 Swanson, N. R., 8n2, 9, 11, 18, 264, 305, 307 Tanner, J. E., 8 Taylor, L. D., 86 Taylor, S., 349, 351–2, 358–9, 360 Teräsvirta, T., 5, 9, 264, 305 Thomson, P. J., 72, 79 Tiao, G. C., 3, 15, 194 Timmerman, A. G., 9 Toda, H. Y., 11 transitory components, of long memory, 233 treasury bills and cointegration analysis of yields, 212–30, 243 long memory and interest rate, 296–300 Tsay, R. S., 8 Tukey, J., 2 Tweedie, R. L., 264
unit roots and long memory components of interest rates, 245 and nonlinear transformations, 270–81 and testing for seasonality, 194–201 Vahid-Araghi, F., 10 Van Ness, J. W., 19, 121 Vector Autoregression (VAR), 153–4, 160–7, 170, 228, 258–9 Verdon, W. A., 85 Von Neumann, J., 2 Von Ungern Sternberg, T., 130, 149, 177 Wald, A., 1 Wallis, J., 19 Wallis, K. F., 3–4 Warne, A., 303 Watson, M. W., 14, 16, 160, 214n2, 217, 233, 238, 250, 254, 258–9, 311 Wei, C. Z., 16, 197
Weiserbs, D., 86 Weiss, A. A., 5, 9, 14–15, 146, 148, 185 White, H., 5, 8n2, 9, 219 Wiener, N., 11, 36, 52, 56, 72 Williams, D., 56 Wold, H., 1, 35, 54, 62 Working, H., 3 Wright, S., 54 X-11 program, 3–4 Yamamoto, T., 11 Yeo, S., 130, 149, 177–8, 207 Yoo, S., 15–17, 150, 153, 166, 182–3, 202–3, 205, 283, 294 Yoon, G., 309 Young, A. H., 4 Yule, G. U., 1, 13 Zaffaroni, P., 19 Zakoian, J., 367, 370–1 Zanotti, M., 79 Zarnowitz, V., 9, 313 Zellner, A., 4, 7, 11, 61–2, 73