
Algorithms and Computation in Mathematics • Volume 13 Editors Manuel Bronstein Arjeh M. Cohen Henri Cohen David Eisenbud Bernd Sturmfels

Geon Ho Choe

Computational Ergodic Theory

With 250 Figures and 10 Tables


Geon Ho Choe
Korea Advanced Institute of Science and Technology
Department of Mathematics
Guseong-dong 373-1, Yuseong-gu
Daejeon 305-701, Korea
e-mail: [email protected]

Mathematics Subject Classification (2000): 11Kxx, 28Dxx, 37-01, 37Axx

Library of Congress Control Number: 2004117450

ISSN 1431-1550
ISBN 3-540-23121-8 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2005
Printed in Germany

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: by the author using a Springer LaTeX macro package
Production: LE-TEX Jelonek, Schmidt & Vöckler GbR, Leipzig
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper


In Memory of My Father

Preface

This book is about probabilistic aspects of discrete dynamical systems and their computer simulations. Basic measure theory is used as a theoretical tool to describe probabilistic phenomena. All of the simulations are done with the mathematical software Maple, which lets us translate theoretical ideas into computer programs almost word for word, without worrying about low-level technical details. The required familiarity with computer programming is kept to a minimum, so even theoretical mathematicians can appreciate the experimental nature of one of the most fascinating areas of mathematics.

A new philosophy is introduced, even for those who are already familiar with numerical simulations of dynamical systems: most computer experiments in this book employ a number of significant digits chosen according to the entropy of the given transformation, so that they still produce meaningful results after many iterations. This is why we can carry out experiments that have not been tried elsewhere, and it is the main point of the book. Readers should regard the Maple programs as being as important as the theoretical explanations of mathematical facts; the book is designed so that the theoretical and experimental parts complement each other.

A dynamical system is a set of points together with a transformation rule that describes the movement of the points. Sometimes the points lose their geometrical meaning and are simply elements of an abstract set endowed with whatever structure the problem under consideration requires. The theory of dynamical systems studies the long-term statistical behavior of the points under the transformation rules. When the given set of points has a probabilistic structure, the theory is called ergodic theory. The aim of the book is to study computational aspects of ergodic theory, including data compression. Many mathematical concepts introduced in the book are illustrated by computer simulations.
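The role of the number of significant digits can be seen already with the doubling map T(x) = 2x mod 1. The book's experiments are written in Maple; the following Python sketch (an illustrative stand-in, not taken from the book, with function names of our own choosing) contrasts ordinary double precision, which destroys the orbit after about fifty iterations, with high-precision arithmetic, which tracks the true orbit of the seed 1/10.

```python
from decimal import Decimal, getcontext

def orbit_float(x, n):
    # Iterate the doubling map T(x) = 2x mod 1 in ordinary 53-bit doubles.
    # Each doubling shifts the binary mantissa, so all information is gone
    # after roughly 53 iterations and the orbit collapses to 0.
    for _ in range(n):
        x = (2 * x) % 1.0
    return x

def orbit_decimal(x, n):
    # The same iteration with 50 decimal digits; for x = 1/10 the
    # arithmetic is exact and the true eventually periodic orbit
    # 0.2, 0.4, 0.8, 0.6, 0.2, ... is reproduced indefinitely.
    getcontext().prec = 50
    for _ in range(n):
        x = (2 * x) % 1
    return x

print(orbit_float(0.1, 60))               # 0.0: every digit of the seed is lost
print(orbit_decimal(Decimal("0.1"), 60))  # 0.6: the correct orbit point
```

The doubling map has entropy log 2, so each iteration consumes one binary digit of the seed; with enough digits the experiment survives as many iterations as the precision allows.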
That gives a different flavor to ergodic theory, and may attract a new generation of students who like to play with computers. Some theorems look different, at least to beginners, when the abstract facts are explained in terms of computer simulations. Many computer experiments in the book are numerical simulations, but some are based on symbolic mathematics. Readers are strongly encouraged to try all the experiments. Students with less mathematical background may skip some of the technical proofs and concentrate on the computer experiments instead.

The author wishes the book to be interdisciplinary and easily accessible to people with diverse backgrounds: for example, students and researchers in science and engineering who want to study discrete dynamical systems, or anyone interested in the theoretical background of data compression, even though the book is intended mainly for graduate students and advanced undergraduates in pure and applied mathematics. Mathematical rigor is maintained throughout, but the statements of facts and theorems can be understood easily by looking at the included figures and tables or by doing the computer simulations given at the end of each chapter. This enables even students outside mathematics to understand the concepts introduced in the book. The Maple programs for the computer experiments presented in a chapter are collected in the last section of that chapter. Each program is divided into as many small components as possible, so that even readers with no programming experience can see how a mathematical idea is translated into a series of Maple commands.

Four fundamental ideas of ergodic theory are explained in the book: the Birkhoff Ergodic Theorem on uniform distribution in Chapter 3, the Shannon–McMillan–Breiman Theorem on entropy in Chapter 8, the Lyapunov exponent of a differentiable mapping in Chapters 9 and 10, and the first return time formula for entropy, due to Wyner, Ziv, Ornstein and Weiss, in Chapter 12. Chapter 1 is a collection of mathematical facts and Maple commands. Chapter 2 introduces measure preserving transformations. Chapter 4 explains the Central Limit Theorem for dynamical systems.
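The typical experiment around the Shannon–McMillan–Breiman Theorem checks that (−log Pₙ)/n, where Pₙ is the probability of the first n symbols of an orbit, converges to the entropy. The book's version of this experiment is in Maple (Chapter 8); here is a minimal Python sketch, with variable names of our own choosing, for an i.i.d. Bernoulli(0.7) symbol sequence, for which the entropy is about 0.881 bits.

```python
import math
import random

# Bernoulli(p) source: independent symbols, 1 with probability p.
p = 0.7
n = 10_000
random.seed(1)  # fixed seed so the experiment is reproducible
block = [1 if random.random() < p else 0 for _ in range(n)]

# P_n = probability of the observed n-block under the Bernoulli(p) measure.
k = sum(block)  # number of 1s in the block
log2_Pn = k * math.log2(p) + (n - k) * math.log2(1 - p)
estimate = -log2_Pn / n  # (-log2 P_n)/n, the quantity in the theorem

entropy = -p * math.log2(p) - (1 - p) * math.log2(1 - p)  # about 0.881 bits
print(round(estimate, 3), round(entropy, 3))
```

For a typical sequence the two printed values agree to about two decimal places, which is the content of the theorem in this simplest i.i.d. case.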
Chapter 5 presents miscellaneous facts related to ergodicity, including Kac's lemma on the first return time and the mixing property of a transformation. In Chapter 6 homeomorphisms of the unit circle are studied. Chapter 7, on mod 2 normal numbers, is almost independent of the other chapters. Chapter 11 illustrates how to sketch stable and unstable manifolds, whose tangential directions can be obtained using the method of Chapter 10. Chapter 13 discusses the relation between recurrence time and Hausdorff dimension. Chapter 14 introduces data compression at an elementary level; this topic does not belong to conventional ergodic theory, but readers will find that compression algorithms are closely related to many ideas in the earlier chapters. There are many important topics not covered in this book, and readers are encouraged to move on to other books later.

This book is an outgrowth of the author's efforts, over more than ten years, to test the possibility of using computer experiments in studying rigorous theoretical mathematics. During that period the clock speed of the central processing units of personal computers increased roughly a hundredfold. Now it takes at most a few minutes to run most of the Maple programs in the book. Maple not only does symbolic computations but also allows us to use a practically unlimited number of significant digits in floating point calculations. Because of the sensitive dependence on initial conditions of nonlinear dynamical systems, a computer experiment may produce meaningless output after many iterations of a transformation. Once we begin with sufficiently many digits, however, the iterations can be carried out without paying much attention to the sensitive dependence on initial data. The optimal number of significant digits can be given in terms of the Lyapunov exponent.

Mathematical software such as Maple is an indispensable tool for anyone who studies nonlinear phenomena by computer simulation with many significant digits. It can be compared to a microscope for a biologist or a telescope for an astronomer. On the one hand, because of the sensitive dependence on initial data, even a slight change in the starting point of an iteration can have an enormous cumulative effect on its orbit; this is why a seed for iteration must be specified with high precision, which may be regarded as using a microscope. On the other hand, once a computer experiment begins with sufficiently many significant digits, we can observe reasonably distant future orbit points within an acceptable margin of error for practical purposes; in this respect a mathematician may be compared to an astronomer using a telescope.

Comments from readers are welcome. For supplementary materials on the topics covered in the book, including Maple programs, please check the author's homepage http://shannon.kaist.ac.kr/choe/ or send email to [email protected]
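The rule of thumb behind choosing the precision is this: if λ is the Lyapunov exponent, each iteration loses about λ/ln 10 decimal digits of accuracy, so observing n iterations reliably costs roughly nλ/ln 10 extra significant digits. As an illustration (in Python rather than the book's Maple, with names of our own choosing), the exponent of the logistic map T(x) = 4x(1−x) is ln 2, so about 0.3 decimal digits are lost per iteration:

```python
import math

def lyapunov_logistic(x, n):
    # lambda = lim (1/n) * sum of log|T'(x_k)| along the orbit,
    # with T(x) = 4x(1-x) and T'(x) = 4 - 8x; the exact value is ln 2.
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4 - 8 * x))
        x = 4 * x * (1 - x)
    return total / n

lam = lyapunov_logistic(0.3, 100_000)
print(round(lam, 3))  # close to ln 2 = 0.693...

# Decimal digits of accuracy lost per iteration: lambda / ln 10, about 0.3,
# so a run of n iterations needs roughly 0.3 * n extra significant digits.
print(round(lam / math.log(10), 2))
```

This is the kind of back-of-the-envelope estimate that determines the Digits setting in the Maple experiments of Chapters 9 and 10.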

Geon Ho Choe

Acknowledgements

The first draft of this book was written while the author was on sabbatical leave at the University of California, Berkeley during the 1998–1999 academic year. He wishes to thank his former thesis advisor Henry Helson for his warm encouragement over many years, for sharing his office, and for correcting grammatical errors in the manuscript. The author also thanks William Bade and Marc Rieffel, who helped him in times of need many years ago. He wishes to thank Kyewon Koh Park for introducing him to Ornstein and Weiss's paper on the first return time, and for many other things.

Although today's personal computers are fast enough for most of the experiments presented in this book, that was not the case in the past. The author had to upgrade his hardware and software many times as computers became faster over the years. The following organizations provided the necessary funds: Korea Advanced Institute of Science and Technology, Global Analysis Research Center at Seoul National University, Korea Science and Engineering Foundation, Ministry of Information and Communication, and Korea Research Foundation. The final stage of revision was done while the author was staying at Imperial College London.

The author thanks the following people who have helped him in writing the book: Dong Pyo Chi, Jaigyoung Choe, Yun Sung Choi, Karma Dajani, Michael Eastham, John Elgin, Michael Field, Stefano Galatolo, William Geller, Young Hwa Ha, Toshihiro Hamachi, Boris Hasselblatt, Peter Hellekalek, Huyi Hu, Shunsuke Ihara, Yuji Ito, the late Anzelm Iwanik, Brunon Kaminski, Dohan Kim, Dong-Gyeom Kim, Hong Jong Kim, Soon-Ki Kim, Young-One Kim, Cor Kraaikamp, Jeroen Lamb, Jungseob Lee, Mariusz Lemańczyk, Stefano Luzzatto, Stefano Marmi, Makoto Mori, Hitoshi Nakada, Masakazu Nasu, Shigeyoshi Ogawa, Khashayar Pakdaman, Karl Petersen, Norbert Riedel, Klaus Schmidt, Jörg Schmeling, Fritz Schweiger, William A. Veech, Dalibor Volný, Peter Walters, the colleagues at KAIST including Sujin Shin, and many other teachers and friends. The author also thanks the many students who have attended his classes, and his former and current students Young-Ho Ahn, Hyun Jin Jang, Myeong Geun Jeong, Dae Hyeon Kang, Mihyun Kang, Bong Jo Kim, Chihurn Kim, Dong Han Kim, Miseon Lee and Byoung Ki Seo for their help in various ways.

The staff at Springer have been very supportive and patient for almost six years. The author thanks the editor Martin Peters, the publishing director Joachim Heinze, Ruth Allewelt and Claudia Rau. He is also grateful to several anonymous reviewers who gave many helpful suggestions at various stages. The author thanks his mother and mother-in-law for their love and care. Finally he wishes to thank his wife for her love and patience.

Contents

1 Prerequisites ..... 1
   1.1 Point Set Topology ..... 1
      1.1.1 Sets and functions ..... 1
      1.1.2 Metric spaces ..... 2
   1.2 Measures and Lebesgue Integration ..... 4
      1.2.1 Measure ..... 4
      1.2.2 Lebesgue integration ..... 6
   1.3 Nonnegative Matrices ..... 11
      1.3.1 Perron–Frobenius Theorem ..... 11
      1.3.2 Stochastic matrices ..... 13
   1.4 Compact Abelian Groups and Characters ..... 14
      1.4.1 Compact abelian groups ..... 14
      1.4.2 Characters ..... 16
      1.4.3 Fourier series ..... 17
      1.4.4 Endomorphisms of a torus ..... 18
   1.5 Continued Fractions ..... 20
   1.6 Statistics and Probability ..... 21
      1.6.1 Independent random variables ..... 21
      1.6.2 Change of variables formula ..... 23
      1.6.3 Statistical laws ..... 23
   1.7 Random Number Generators ..... 25
      1.7.1 What are random numbers? ..... 25
      1.7.2 How to generate random numbers ..... 27
      1.7.3 Random number generator in Maple ..... 28
   1.8 Basic Maple Commands ..... 28
      1.8.1 Restart, colon, semicolon, Digits ..... 29
      1.8.2 Set theory and logic ..... 29
      1.8.3 Arithmetic ..... 31
      1.8.4 How to plot a graph ..... 32
      1.8.5 Calculus: differentiation and integration ..... 34
      1.8.6 Fractional and integral parts of real numbers ..... 35
      1.8.7 Sequences and printf ..... 36
      1.8.8 Linear algebra ..... 37
      1.8.9 Fourier series ..... 39
      1.8.10 Continued fractions I ..... 40
      1.8.11 Continued fractions II ..... 41
      1.8.12 Statistics and probability ..... 42
      1.8.13 Lattice structures of linear congruential generators ..... 44
      1.8.14 Random number generators in Maple ..... 44
      1.8.15 Monte Carlo method ..... 45

2 Invariant Measures ..... 47
   2.1 Invariant Measures ..... 47
   2.2 Other Types of Continued Fractions ..... 56
   2.3 Shift Transformations ..... 59
   2.4 Isomorphic Transformations ..... 60
   2.5 Coding Map ..... 66
   2.6 Maple Programs ..... 70
      2.6.1 The logistic transformation ..... 71
      2.6.2 Chebyshev polynomials ..... 73
      2.6.3 The beta transformation ..... 76
      2.6.4 The baker's transformation ..... 78
      2.6.5 A toral automorphism ..... 79
      2.6.6 Modified Hurwitz transformation ..... 80
      2.6.7 A typical point of the Bernoulli measure ..... 81
      2.6.8 A typical point of the Markov measure ..... 82
      2.6.9 Coding map for the logistic transformation ..... 84

3 The Birkhoff Ergodic Theorem ..... 85
   3.1 Ergodicity ..... 85
   3.2 The Birkhoff Ergodic Theorem ..... 89
   3.3 The Kronecker–Weyl Theorem ..... 97
   3.4 Gelfand's Problem ..... 99
   3.5 Borel's Normal Number Theorem ..... 99
   3.6 Continued Fractions ..... 100
   3.7 Approximation of Invariant Density Functions ..... 103
   3.8 Physically Meaningful Singular Invariant Measures ..... 105
   3.9 Maple Programs ..... 112
      3.9.1 The Gauss transformation ..... 112
      3.9.2 Discrete invariant measures ..... 113
      3.9.3 A cobweb plot ..... 114
      3.9.4 The Kronecker–Weyl Theorem ..... 115
      3.9.5 The logistic transformation ..... 116
      3.9.6 An unbounded invariant density and the speed of growth at a singularity ..... 118
      3.9.7 Toral automorphisms ..... 120
      3.9.8 The Khinchin constant ..... 121
      3.9.9 An infinite invariant measure ..... 123
      3.9.10 Cross sections of the solenoid ..... 125
      3.9.11 The solenoid ..... 126
      3.9.12 Feigenbaum's logistic transformations ..... 130
      3.9.13 The Hénon attractor ..... 130
      3.9.14 Basin of attraction for the Hénon attractor ..... 131

4 The Central Limit Theorem ..... 133
   4.1 Mixing Transformations ..... 133
   4.2 The Central Limit Theorem ..... 138
   4.3 Speed of Correlation Decay ..... 143
   4.4 Maple Programs ..... 147
      4.4.1 The Central Limit Theorem: σ > 0 for the logistic transformation ..... 147
      4.4.2 The Central Limit Theorem: the normal distribution for the Gauss transformation ..... 148
      4.4.3 Failure of the Central Limit Theorem: σ = 0 for an irrational translation mod 1 ..... 151
      4.4.4 Correlation coefficients of Tx = 2x mod 1 ..... 151
      4.4.5 Correlation coefficients of the beta transformation ..... 152
      4.4.6 Correlation coefficients of an irrational translation mod 1 ..... 153

5 More on Ergodicity ..... 155
   5.1 Absolutely Continuous Invariant Measures ..... 155
   5.2 Boundary Conditions for Invariant Measures ..... 156
   5.3 Kac's Lemma on the First Return Time ..... 159
   5.4 Ergodicity of Markov Shift Transformations ..... 165
   5.5 Singular Continuous Invariant Measures ..... 170
   5.6 An Invertible Extension of a Transformation on an Interval ..... 172
   5.7 Maple Programs ..... 173
      5.7.1 Piecewise defined transformations ..... 173
      5.7.2 How to sketch the graph of the first return time transformation ..... 174
      5.7.3 Kac's lemma for the logistic transformation ..... 175
      5.7.4 Kac's lemma for an irrational translation mod 1 ..... 176
      5.7.5 Bernoulli measures on the unit interval ..... 178
      5.7.6 An invertible extension of the beta transformation ..... 179

6 Homeomorphisms of the Circle ..... 183
   6.1 Rotation Number ..... 183
   6.2 Topological Conjugacy and Invariant Measures ..... 188
   6.3 The Cantor Function and Rotation Number ..... 193
   6.4 Arnold Tongues ..... 194
   6.5 How to Sketch a Conjugacy Using Rotation Number ..... 195
   6.6 Unique Ergodicity ..... 196
   6.7 Poincaré Section of a Flow ..... 198
   6.8 Maple Programs ..... 199
      6.8.1 The rotation number of a homeomorphism of the unit circle ..... 199
      6.8.2 Symmetry of invariant measures ..... 200
      6.8.3 Conjugacy and the invariant measure ..... 201
      6.8.4 The Cantor function ..... 202
      6.8.5 The Cantor function as a cumulative density function ..... 202
      6.8.6 Rotation numbers and a staircase function ..... 204
      6.8.7 Arnold tongues ..... 205
      6.8.8 How to sketch a topological conjugacy of a circle homeomorphism I ..... 206
      6.8.9 How to sketch a topological conjugacy of a circle homeomorphism II ..... 207

7 Mod 2 Uniform Distribution ..... 211
   7.1 Mod 2 Normal Numbers ..... 211
   7.2 Skew Product Transformations ..... 213
   7.3 Mod 2 Normality Conditions ..... 218
   7.4 Mod 2 Uniform Distribution for General Transformations ..... 221
   7.5 Random Walks on the Unit Circle ..... 222
   7.6 How to Sketch a Cobounding Function ..... 226
   7.7 Maple Programs ..... 229
      7.7.1 Failure of mod 2 normality for irrational translations mod 1 ..... 229
      7.7.2 Ergodic components of a nonergodic skew product transformation arising from the beta transformation ..... 231
      7.7.3 Random walks by an irrational angle on the circle ..... 232
      7.7.4 Skew product transformation and random walks on a cyclic group ..... 233
      7.7.5 How to plot points on the graph of a cobounding function ..... 234
      7.7.6 Fourier series of a cobounding function ..... 235
      7.7.7 Numerical computation of a lower bound for n||nθ|| ..... 236

8 Entropy ..... 237
   8.1 Definition of Entropy ..... 237
   8.2 Entropy of Shift Transformations ..... 242
   8.3 Partitions and Coding Maps ..... 245
   8.4 The Shannon–McMillan–Breiman Theorem ..... 248
   8.5 Data Compression ..... 252
   8.6 Asymptotically Normal Distribution of (−log Pn)/n ..... 253
   8.7 Maple Programs ..... 254
      8.7.1 Definition of entropy: the logistic transformation ..... 254
      8.7.2 Definition of entropy: the Markov shift ..... 255
      8.7.3 Cylinder sets of a nongenerating partition: Tx = 2x mod 1 ..... 256
      8.7.4 The Shannon–McMillan–Breiman Theorem: the Markov shift ..... 258
      8.7.5 The Shannon–McMillan–Breiman Theorem: the beta transformation ..... 260
      8.7.6 The Asymptotic Equipartition Property: the Bernoulli shift ..... 261
      8.7.7 Asymptotically normal distribution of (−log Pn)/n: the Bernoulli shift ..... 265

9 The Lyapunov Exponent: One-Dimensional Case ..... 269
   9.1 The Lyapunov Exponent of Differentiable Maps ..... 269
   9.2 Number of Significant Digits and the Divergence Speed ..... 278
   9.3 Fixed Points of the Gauss Transformation ..... 280
   9.4 Generalized Continued Fractions ..... 282
   9.5 Speed of Approximation by Convergents ..... 286
   9.6 Random Shuffling of Cards ..... 290
   9.7 Maple Programs ..... 292
      9.7.1 The Lyapunov exponent: the Gauss transformation ..... 292
      9.7.2 Hn,k: a local version for the Gauss transformation ..... 293
      9.7.3 Hn,k: a global version for the Gauss transformation ..... 294
      9.7.4 The divergence speed: a local version for the Gauss transformation ..... 294
      9.7.5 The divergence speed: a global version for the Gauss transformation ..... 295
      9.7.6 Number of correct partial quotients: validity of Maple algorithm of continued fractions ..... 295
      9.7.7 Speed of approximation by convergents: the Bernoulli transformation ..... 297

10 The Lyapunov Exponent: Multidimensional Case ..... 299
   10.1 Singular Values of a Matrix ..... 299
   10.2 Oseledec's Multiplicative Ergodic Theorem ..... 302
   10.3 The Lyapunov Exponent of a Differentiable Mapping ..... 310
   10.4 Invariant Subspaces ..... 315
   10.5 The Lyapunov Exponent and Entropy ..... 318
   10.6 The Largest Lyapunov Exponent and the Divergence Speed ..... 321
   10.7 Maple Programs ..... 323
      10.7.1 Singular values of a matrix ..... 323
      10.7.2 A matrix sends a circle onto an ellipse ..... 324
      10.7.3 The Lyapunov exponents: a local version for the solenoid mapping ..... 325

      10.7.4 The Lyapunov exponent: a global version for the Hénon mapping ..... 328
      10.7.5 The invariant subspace E1 of the Hénon mapping ..... 329
      10.7.6 The invariant subspace E2 of the Hénon mapping ..... 330
      10.7.7 The divergence speed: a local version for a toral automorphism ..... 331
      10.7.8 The divergence speed: a global version for the Hénon mapping ..... 332

11 Stable and Unstable Manifolds ..... 333
   11.1 Stable and Unstable Manifolds of Fixed Points ..... 333
   11.2 The Hénon Mapping ..... 336
   11.3 The Standard Mapping ..... 339
   11.4 Stable and Unstable Manifolds of Periodic Points ..... 340
   11.5 Maple Programs ..... 345
      11.5.1 A stable manifold of the Hénon mapping ..... 345
      11.5.2 An unstable manifold of the Hénon mapping ..... 348
      11.5.3 The boundary of the basin of attraction of the Hénon mapping ..... 349
      11.5.4 Behavior of the Hénon mapping near a saddle point ..... 353
      11.5.5 Hyperbolic points of the standard mapping ..... 355
      11.5.6 Images of a circle centered at a hyperbolic point under the standard mapping ..... 357
      11.5.7 Stable manifolds of the standard mapping ..... 358
      11.5.8 Intersection of the stable and unstable manifolds of the standard mapping ..... 361

12 Recurrence and Entropy ..... 363
   12.1 The First Return Time Formula ..... 363
   12.2 Lp-Convergence ..... 367
   12.3 The Nonoverlapping First Return Time ..... 369
   12.4 Product of the Return Time and the Probability ..... 376
   12.5 Symbolic Dynamics and Topological Entropy ..... 377
   12.6 Maple Programs ..... 381
      12.6.1 The first return time Rn for the Bernoulli shift ..... 381
      12.6.2 Averages of (log Rn)/n and Rn for the Bernoulli shift ..... 382
      12.6.3 Probability density function of (log Rn)/n for the Bernoulli shift ..... 383
      12.6.4 Convergence speed of the average of log Rn for the Bernoulli shift ..... 385
      12.6.5 The nonoverlapping first return time ..... 386
      12.6.6 Probability density function of RnPn for the Markov shift ..... 388
      12.6.7 Topological entropy of a topological shift space ..... 389


13 Recurrence and Dimension  391
13.1 Hausdorff Dimension  391
13.2 Recurrence Error  393
13.3 Absolutely Continuous Invariant Measures  394
13.4 Singular Continuous Invariant Measures  398
13.5 The First Return Time and the Dimension  400
13.6 Irrational Translations Mod 1  403
13.7 Maple Programs  408
13.7.1 The recurrence error  409
13.7.2 The logistic transformation  409
13.7.3 The Hénon mapping  411
13.7.4 Type of an irrational number  412
13.7.5 An irrational translation mod 1  414

14 Data Compression  417
14.1 Coding  417
14.2 Entropy and Data Compression Ratio  419
14.3 Huffman Coding  423
14.4 Lempel–Ziv Coding  425
14.5 Arithmetic Coding  428
14.6 Maple Programs  429
14.6.1 Huffman coding  429
14.6.2 Lempel–Ziv coding  433
14.6.3 Typical intervals arising from arithmetic coding  436

References  439
Index  449

1 Prerequisites

The study of discrete dynamical systems requires ideas and techniques from diverse fields including mathematics, probability theory, statistics, physics, chemistry, biology, the social sciences and engineering. Even within mathematics alone we need various facts from classical analysis, measure theory, linear algebra, topology and number theory. This chapter will help the reader build sufficient background to understand the material in later chapters without much difficulty. We introduce the notation, definitions, concepts and fundamental facts that are needed in later chapters. In the last section some basic Maple commands are presented.

1.1 Point Set Topology

1.1.1 Sets and functions

Let N, Z, Q, R and C denote the sets of natural numbers, integers, rational numbers, real numbers and complex numbers, respectively. The difference of two sets A, B is given by A \ B = {x : x ∈ A and x ∉ B}, and their symmetric difference by A △ B = (A ∪ B) \ (A ∩ B). If A ⊂ X, then A^c denotes the complement of A, i.e., A^c = X \ A. Let X and Y be two sets. A function (or a mapping or a map) f : X → Y is a rule that assigns a unique element f(x) ∈ Y to every x ∈ X. In this case, X and Y are called the domain and the range (or codomain) of f, respectively. For x ∈ X, the element f(x) is called the image of x under f. Sometimes the range of f means the set {f(x) : x ∈ X}. If {f(x) : x ∈ X} = Y, then f is said to be onto. If x_1 ≠ x_2 implies f(x_1) ≠ f(x_2), then f is said to be one-to-one. If f : X → Y is onto and one-to-one, then f is said to be bijective, and its inverse f⁻¹ exists. Suppose that f : X → Y is not necessarily invertible. The inverse image of E ⊂ Y is defined by f⁻¹(E) = {x : f(x) ∈ E}. Then f⁻¹(E ∪ F) = f⁻¹(E) ∪ f⁻¹(F), f⁻¹(E ∩ F) = f⁻¹(E) ∩ f⁻¹(F) and f⁻¹(Y \ E) = X \ f⁻¹(E).
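These preimage identities are easy to verify on finite sets. A minimal sketch (the sets and the map below are made-up examples of ours, not from the book):

```python
# Check the preimage identities on a small hypothetical example:
# X = {0,...,5}, Y = {0,1,2}, f(x) = x mod 3.
X = set(range(6))
Y = {0, 1, 2}

def f(x):
    return x % 3

def preimage(E):
    """f^{-1}(E) = {x in X : f(x) in E}."""
    return {x for x in X if f(x) in E}

E, F = {0}, {1}
assert preimage(E | F) == preimage(E) | preimage(F)   # union
assert preimage(E & F) == preimage(E) & preimage(F)   # intersection
assert preimage(Y - E) == X - preimage(E)             # complement
```

Note that the analogous identities for forward images fail in general; only preimages commute with all set operations.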


The number of elements of a set A is called the cardinality of A, denoted by card A. If f : X → Y is bijective, then we say that X and Y have the same cardinality. If X has finitely many elements or if X and N have the same cardinality, then X is said to be countable. The sets Z and Q are countable. A countable union of countable sets is also countable. If X is not countable, then X is said to be uncountable. The sets R and C are uncountable. Take a subset A ⊂ X. Then 1_A denotes the characteristic function or the indicator function of A, i.e., 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 if x ∉ A. A partition of a set X is a collection of pairwise disjoint subsets E_λ, λ ∈ Λ, such that ⋃_{λ∈Λ} E_λ = X. If there are finitely (or countably) many subsets under consideration we call the partition finite (or countable). Suppose that A is a set of real numbers. If there exists α ∈ R ∪ {+∞} such that x ≤ α for every x ∈ A, then α is called an upper bound of A. There exists a unique upper bound α such that if t < α then t is not an upper bound. We call α the least upper bound, or the supremum of A, and write α = sup A. Similarly, we define the greatest lower bound β, or the infimum of A, using lower bounds of A, and write β = inf A. Let {a_n}_{n=1}^∞ be a sequence of real numbers. Define

  lim sup_{n→∞} a_n = inf_{n≥1} sup_{k≥n} a_k ,   lim inf_{n→∞} a_n = sup_{n≥1} inf_{k≥n} a_k .

Both limits exist even when {a_n}_{n=1}^∞ does not converge. If {a_n}_{n=1}^∞ converges to a limit L, then the lim sup and the lim inf are equal to L. A bounded and monotone sequence in R converges to a limit. Let {A_n}_{n=1}^∞ be a sequence of subsets of X. Define

  lim sup_{n→∞} A_n = ⋂_{m≥1} ⋃_{n≥m} A_n .

This set comprises the points x satisfying x ∈ A_n for infinitely many n. The Cesàro sum of a sequence a_1, a_2, a_3, … of numbers is a new sequence c_1, c_2, c_3, … defined by c_n = (a_1 + · · · + a_n)/n. If {a_n}_{n=1}^∞ converges, then its Cesàro sum also converges to the same limit.

1.1.2 Metric spaces

A metric on a set S is a function d : S × S → R such that (i) d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y, (ii) d(x, y) = d(y, x), (iii) d(x, z) ≤ d(x, y) + d(y, z). A set S together with a metric d, denoted by (S, d), is called a metric space. A norm on a vector space V is a function ||·|| : V → R such that (i) ||v|| ≥ 0 for v ∈ V, and ||v|| = 0 if and only if v = 0, (ii) ||cv|| = |c| ||v|| for every scalar c and v ∈ V, (iii) ||v_1 + v_2|| ≤ ||v_1|| + ||v_2|| for v_1, v_2 ∈ V. A normed space is a metric space with the metric d(x, y) = ||x − y||. Two norms


||·|| and ||·||′ on a vector space V are said to be equivalent if there exist constants A, B > 0 such that A||x|| ≤ ||x||′ ≤ B||x|| for x ∈ V. All norms on a finite-dimensional vector space are equivalent. The n-dimensional Euclidean space Rⁿ has many norms. For 1 ≤ p < ∞ let ||x||_p = ( ∑_{i=1}^n |x_i|^p )^{1/p} and ||x||_∞ = max_{1≤i≤n} |x_i|. Then (Rⁿ, ||·||_p), 1 ≤ p ≤ ∞, is a normed space. A sequence x_1, x_2, x_3, … in a metric space (X, d) converges to a limit x̃ if the numbers d(x_n, x̃) converge to 0. In this case, we write x̃ = lim_{n→∞} x_n. A sequence x_1, x_2, x_3, … in a metric space (X, d) is called a Cauchy sequence if for any ε > 0 there exists N depending on ε such that n, m ≥ N implies d(x_n, x_m) < ε. A Cauchy sequence in X need not converge to a limit in X. For example, the sequence 1, 1.4, 1.41, 1.414, 1.4142, … in Q has the limit √2 ∉ Q. A metric space X is said to be complete if every Cauchy sequence has a convergent subsequence with the limit in X. The Euclidean space Rⁿ with the metric d_p(x, y) = ||x − y||_p, 1 ≤ p ≤ ∞, is complete. Note that d_2 gives the usual Euclidean metric. Two metrics d and d′ on the same set X are equivalent if there exist constants A, B > 0 such that A d(x, y) ≤ d′(x, y) ≤ B d(x, y) for every x, y ∈ X. If d and d′ are equivalent, then a sequence converges with respect to d if and only if it converges with respect to d′. All the metrics d_p on Rⁿ are equivalent. Let (X, d) be a metric space. An open ball centered at x_0 of radius r > 0 is a set of the form B_r(x_0) = {x ∈ X : d(x, x_0) < r}. An open set V ⊂ X is a set such that, for every x_0 ∈ V, there exists r > 0 for which B_r(x_0) ⊂ V. A set is closed if its complement is open. For example, closed intervals and closed disks are closed sets. The closure of a set A, denoted by Ā, is the smallest closed set containing A. The closure of an open ball B_r(x_0) is the closed ball {x : d(x, x_0) ≤ r}.
A function f from X into another metric space Y is continuous if lim_{n→∞} f(x_n) = f(lim_{n→∞} x_n). An equivalent condition is the following: f⁻¹(B) is open in X for every open ball B ⊂ Y. If f : X → Y is bijective and continuous and if f⁻¹ is also continuous, then f is called a homeomorphism, and X and Y are said to be homeomorphic. Let X be a metric space. An open cover U of A ⊂ X is a collection of open subsets V_α such that A ⊂ ⋃_α V_α. A subcover of U is a subcollection of U that is still a cover of A. A subset K is said to be compact if for every open cover of K there exists a subcover consisting of finitely many subsets.

Fact 1.1 (Bolzano–Weierstrass Theorem) Let X be a metric space. A set K ⊂ X is compact if and only if every sequence in K has a subsequence that converges to a point in K.

Fact 1.2 (Heine–Borel Theorem) A subset A ⊂ Rⁿ is compact if and only if A is closed and bounded.

For example, a bounded closed interval is compact. For another example, consider the Cantor set C. Let F_1 = [0, 1/3] ∪ [2/3, 1] be the subset obtained by removing the middle third set (1/3, 2/3) from [0, 1]. Let F_2 be the subset obtained

4

1 Prerequisites

by removing the middle third sets (1/9, 2/9) and (7/9, 8/9) from the closed intervals [0, 1/3] and [2/3, 1], respectively. Inductively we define a closed set F_n for every n ≥ 1. Put C = ⋂_{n=1}^∞ F_n. Then C is compact and its interior is empty, i.e., C does not contain any nonempty open interval. The set C consists of the points x = ∑_{i=1}^∞ b_i 3^{−i}, b_i = 0 or 2 for every i ≥ 1. For example, C contains the point 1/4 = (0.020202…)₃. Note that C is uncountable and has Lebesgue measure zero.

Fact 1.3 (i) Let X, Y be compact metric spaces. If f : X → Y is bijective and continuous, then f⁻¹ is also continuous, and f is a homeomorphism. (ii) A continuous function f : K → R defined on a compact set K has a maximum and a minimum. In other words, there exist points x_1, x_2 ∈ K such that f(x_1) = min_{x∈K} f(x) and f(x_2) = max_{x∈K} f(x). (iii) A sequence of functions {f_n}_{n=1}^∞ is said to converge uniformly to f if for ε > 0 there exists N such that n ≥ N implies |f(x) − f_n(x)| < ε for all x. In this case, if f_n is continuous for every n ≥ 1, then f is also continuous.

Fact 1.4 (Brouwer Fixed Point Theorem) Let B be the closed unit ball in Rⁿ. If f : B → B is a continuous mapping, then f has a fixed point, i.e., there exists x ∈ B such that f(x) = x.

Consult [Mu] for a proof. The conclusion is valid for any set B that is homeomorphic to the closed unit ball in Rⁿ. For more details on the basic facts presented in this section consult [Rud1].
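The base-3 characterization of C can be checked numerically. In the sketch below (our own illustration; the function names are ours) we compute the greedy ternary expansion exactly with rational arithmetic and confirm that 1/4 = (0.0202…)₃ uses only the digits 0 and 2:

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n digits of the greedy base-3 expansion of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)       # next ternary digit, 0, 1 or 2
        digits.append(d)
        x -= d
    return digits

# 1/4 = (0.020202...)_3, so every digit is 0 or 2, and 1/4 lies in C.
digits = ternary_digits(Fraction(1, 4), 20)
assert digits == [0, 2] * 10
assert all(d in (0, 2) for d in digits)
```

(Points such as 1/3 = (0.1)₃ = (0.0222…)₃ have two expansions; the greedy expansion may show a digit 1 even though the point belongs to C, so a full membership test needs a little more care.)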

1.2 Measures and Lebesgue Integration

1.2.1 Measure

Given a set X we want to measure the size of a subset A of X. When X is not a set of countably many elements, it is not always possible to measure the size of every subset of X in a consistent and mathematically meaningful way. So we choose only some but sufficiently many subsets of X and define their sizes. Such subsets under consideration are called measurable subsets and a collection of them is called a σ-algebra. Axiomatically, a σ-algebra A is a collection of subsets satisfying the following conditions: (i) ∅, X ∈ A, (ii) if A ∈ A, then X \ A ∈ A, and (iii) if A_1, A_2, A_3, … ∈ A, then ⋃_{n=1}^∞ A_n ∈ A. A measure µ is a rule that assigns a nonnegative real number or +∞ to each measurable subset. More precisely, a measure is a function µ : A → [0, +∞) ∪ {+∞} that satisfies the following conditions: (i) µ(A) ∈ [0, +∞) ∪ {+∞} for every A ∈ A, (ii) µ(∅) = 0, and (iii) if A_1, A_2, A_3, … are pairwise disjoint measurable subsets, then

  µ( ⋃_{n=1}^∞ A_n ) = ∑_{n=1}^∞ µ(A_n) .

If µ(X) = 1, then µ is called a probability measure.

Example 1.5. (i) The counting measure µ counts the number of elements, i.e., µ(A) is the cardinality of A. (ii) Take x ∈ X. The Dirac measure (or a point mass) δ_x at x is defined by δ_x(A) = 1 if x ∈ A, and δ_x(A) = 0 if x ∉ A.

A set X with a σ-algebra A is called a measurable space and denoted by (X, A). If a measure µ is chosen for (X, A), then it is called a measure space and denoted by (X, A, µ) or (X, µ). A measure space (X, A, µ) is said to be complete if, whenever a subset N ∈ A satisfies µ(N) = 0 and A ⊂ N, then A ∈ A and µ(A) = 0. A measure can be extended to a complete measure, i.e., the σ-algebra can be extended to include all the subsets of measure zero sets. Let Rⁿ be the n-dimensional Euclidean space. An n-dimensional rectangle in Rⁿ is a set of the form [a_1, b_1] × · · · × [a_n, b_n], where the closed intervals may be replaced by open or half-open intervals. Consider the collection R of n-dimensional rectangles and define a set function µ : R → [0, +∞) ∪ {+∞} by the usual concept of length for n = 1, area for n = 2, or volume for n ≥ 3. Then µ is extended to the σ-algebra generated by R. Finally, it is extended again to a complete measure, which is called the n-dimensional Lebesgue measure. A function f : (X, A) → R is said to be measurable if the inverse image of any open interval belongs to A. A simple function s(x) is a measurable function of the form s(x) = ∑_{i=1}^n α_i 1_{E_i}(x) for some measurable subsets E_i and constants α_i, 1 ≤ i ≤ n. For a measurable function f ≥ 0, there exists an increasing sequence of simple functions s_n ≥ 0 such that s_n(x) converges to f(x) for every x. On a metric space X, the smallest σ-algebra B containing all the open subsets is called the Borel σ-algebra. If the σ-algebra under consideration is the Borel σ-algebra, then measurable subsets and measurable functions are called Borel measurable subsets and Borel measurable functions, respectively.
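One standard choice of the increasing simple-function approximation is s_n(x) = min(⌊2ⁿ f(x)⌋/2ⁿ, n). A minimal sketch (our own illustration, not the book's Maple code):

```python
# One standard increasing approximation of a nonnegative function f by simple
# functions: s_n(x) = min(floor(2^n f(x)) / 2^n, n), which increases to f(x).
def simple_approx(f, n):
    def s(x):
        return min(int(2 ** n * f(x)) / 2 ** n, n)
    return s

f = lambda x: x * x
for x in [0.0, 0.3, 0.7, 1.5]:
    vals = [simple_approx(f, n)(x) for n in range(1, 12)]
    assert all(a <= b for a, b in zip(vals, vals[1:]))   # monotone in n
    assert abs(vals[-1] - f(x)) <= 2 ** -11              # converges up to f(x)
```

Each s_n takes only finitely many values (multiples of 2⁻ⁿ up to the cutoff n), so it is a simple function; dyadic refinement makes the sequence increase pointwise to f.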
A measure on B is called a Borel measure. A continuous function f : X → R is Borel measurable. Let S = {0, 1}. Define an infinite product space X = ∏_{1}^∞ S = S × S × · · ·. An element of X is an infinite binary sequence x = (x_1, x_2, x_3, …) where x_i = 0 or 1. Define a cylinder set of length n by

  [a_1, …, a_n] = {x ∈ X : x_1 = a_1, …, x_n = a_n} .

Let R be the collection of cylinder sets. For 0 ≤ p ≤ 1, define a set function µ_p : R → [0, ∞) by

  µ_p([a_1, …, a_n]) = p^k (1 − p)^{n−k}


where k is the number of times that the symbol ‘0’ appears in a_1, …, a_n. Then µ_p is extended to a measure, also denoted by µ_p, defined on the σ-algebra generated by R. The new measure is called the (p, 1 − p)-Bernoulli measure. A number x in the unit interval has a binary expansion x = ∑_{i=1}^∞ a_i 2^{−i}, a_i ∈ {0, 1}, and it can be identified with the sequence (a_1, a_2, a_3, …) ∈ X. For p = 1/2 we can identify µ_p with Lebesgue measure on [0, 1]. Here we ignore the binary representations whose tails are all equal to 1. A measure is continuous if a set containing only one point has measure zero. In this case, by the countable additivity of a measure, a countable set has measure zero. Bernoulli measures for 0 < p < 1 are continuous. A measure is discrete if there exists a subset A of countably many points such that the complement of A has measure zero. If K ⊂ X satisfies µ(X \ K) = 0, then K is called a support of µ. For X ⊂ Rⁿ a measure µ on X is singular if there exists a measurable subset K ⊂ X such that K has Lebesgue measure zero and µ(X \ K) = 0. If p ≠ 1/2, then the Bernoulli measure µ_p, which can be regarded as a measure on [0, 1], is singular. Two measures µ_1, µ_2 on X are mutually singular if there exist disjoint measurable subsets A, B such that µ_1(B) = 0 and µ_2(A) = 0. Let µ be a measure on X ⊂ Rⁿ. If a measurable function ρ(x) ≥ 0 satisfies µ(E) = ∫_E ρ(x) dx for any measurable subset E, then µ is said to be absolutely continuous and ρ is called a density function. In this case

  ∫_X f(x) dµ = ∫_X f(x) ρ(x) dx

for any measurable function f and we write dµ = ρ dx. If µ is a probability measure, then ρ is called a probability density function, or a pdf for short. Let (X, µ) be a measure space. Let P(x) be a property whose validity depends on x ∈ X. If P(x) holds true for every x ∈ A with µ(X \ A) = 0, then we say that P(x) is true almost everywhere (or, for almost every x in X) with respect to µ. Let A be a measurable subset of a probability measure space (X, µ). If µ(A) > 0, then we define a new measure µ_A on X by µ_A(E) = µ(E ∩ A)/µ(A). Then µ_A is called a conditional measure. A partition P = {E_j : j ≥ 1} of a measurable space (X, A) is said to be measurable if E_j ∈ A for every j.

1.2.2 Lebesgue integration

Now we want to compute the average of a function defined on X with respect to a measure µ. It is not possible in general to consider all the functions. Let f ≥ 0 be a bounded measurable function on X, i.e., there is a constant M > 0 such that 0 ≤ f(x) < M. Put

  E_{n,k} = f⁻¹( [ (k−1)M/2ⁿ , kM/2ⁿ ) ) .


The measurability of f guarantees the measurability of E_{n,k}, and the size of E_{n,k} can be determined by µ. As n → ∞, we take the limit of

  ∑_{k=1}^{2ⁿ} (k/2ⁿ) M × µ(E_{n,k}) ,

which is called the Lebesgue integral of f on (X, µ) and denoted by

  ∫_X f dµ .

The key idea is that we partition the vertical axis in the Lebesgue integral, not the horizontal axis as in the Riemann integral. If f is real-valued, then f = f₊ − f₋ for some nonnegative functions f₊ and f₋, and we define

  ∫_X f dµ = ∫_X f₊ dµ − ∫_X f₋ dµ .

For complex-valued measurable functions, we integrate the real and imaginary parts separately. Let 1_A be the characteristic function of a measurable subset A. Then 1_A is a measurable function and ∫_X 1_A(x) dµ = µ(A) for any measure µ on X. In measure theory two measurable subsets A_1, A_2 are regarded as being identical if µ(A_1 △ A_2) = 0. In this case we say that A_1 = A_2 modulo measure zero sets and write A_1 ≐ A_2. (In most cases we simply write A_1 = A_2 to denote A_1 ≐ A_2 if there is no danger of confusion.) A measurable function f is said to be integrable if its integral is finite. If two functions differ on a subset of measure zero, then their integrals are equal. We then regard them as identical functions. The set of all integrable functions is a vector space.

Fact 1.6 (Monotone Convergence Theorem) Let 0 ≤ f_1 ≤ f_2 ≤ · · · be a monotone sequence of measurable functions on a measure space (X, µ). Then

  lim_{n→∞} ∫_X f_n dµ = ∫_X lim_{n→∞} f_n dµ .

Fact 1.7 (Fatou's lemma) Let f_n ≥ 0, n ≥ 1, be a sequence of measurable functions on a measure space (X, µ). Then

  ∫_X lim inf_{n→∞} f_n dµ ≤ lim inf_{n→∞} ∫_X f_n dµ .

Fact 1.8 (Lebesgue Dominated Convergence Theorem) Let f_n, n ≥ 1, be a sequence of measurable functions on a measure space (X, µ). Suppose that lim_{n→∞} f_n(x) exists for every x ∈ X and that there exists an integrable function g ≥ 0 such that |f_n(x)| ≤ g(x) for every n. Then

  lim_{n→∞} ∫_X f_n dµ = ∫_X lim_{n→∞} f_n dµ .
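The vertical-axis partition behind the definition of ∫_X f dµ is easy to try numerically. Below is a hedged sketch (our own toy code; Lebesgue measure on [0, 1] is approximated by an equally spaced grid): the level-set sums ∑_k (k/2ⁿ) M µ(E_{n,k}) converge to the integral as n grows.

```python
# Approximate the Lebesgue integral of f on [0,1] (with Lebesgue measure) by
# partitioning the *vertical* axis into 2^n bands of height M/2^n, as in the
# definition above: each x contributes the level of the band containing f(x).
def lebesgue_integral(f, n, M, grid=100_000):
    counts = [0] * (2 ** n)
    for i in range(grid):
        x = (i + 0.5) / grid
        k = int(f(x) * 2 ** n / M)       # f(x) lies in [kM/2^n, (k+1)M/2^n)
        counts[k] += 1
    # sum over bands of (band level) * mu(E_{n,k}), mu estimated by point counts
    return sum((k + 1) / 2 ** n * M * (c / grid) for k, c in enumerate(counts))

f = lambda x: x * x                      # integral over [0,1] equals 1/3
assert abs(lebesgue_integral(f, n=10, M=1.0) - 1 / 3) < 1e-2
```

The error of the band sum is at most M/2ⁿ, independent of how wildly f oscillates horizontally; this is exactly the point of partitioning the range rather than the domain.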


For 1 ≤ p < ∞ define

  ||f||_p = ( ∫_X |f|^p dµ )^{1/p} ,

and for p = ∞ define

  ||f||_∞ = inf{ C : µ{x : |f(x)| > C} = 0 } .

For 1 ≤ p ≤ ∞ let

  L^p(X, µ) = {f : ||f||_p < ∞} .

Then L^p(X, µ) is a vector space with a norm ||·||_p, and it is complete as a metric space. Suppose that µ(X) < ∞. It is known that L^p(X) ⊂ L^r(X) for 1 ≤ r < p ≤ ∞. Hence L^p(X) ⊂ L^1(X) for any 1 ≤ p ≤ ∞. A sequence of functions {f_n}_n, f_n ∈ L^p, is said to converge in L^p to f ∈ L^p if lim_{n→∞} ||f_n − f||_p = 0. A sequence of measurable functions {g_n}_n is said to converge in measure to g if for every ε > 0,

  lim_{n→∞} µ({x : |g_n(x) − g(x)| > ε}) = 0 .

Various modes of convergence are compared in the following:

Fact 1.9 Suppose that 1 ≤ p < ∞. (i) If f_n → f in L^p, then f_n → f in measure. (ii) If f_n → f in measure and |f_n| ≤ g ∈ L^p for all n, then f_n → f in L^p. (iii) If f_n, f ∈ L^p and f_n → f almost everywhere, then f_n → f in L^p if and only if ||f_n||_p → ||f||_p.

Proof. Here only (i) is proved. Let A_{n,ε} = {x : |f_n(x) − f(x)| ≥ ε}. Then

  ∫ |f_n − f|^p dµ ≥ ∫_{A_{n,ε}} |f_n − f|^p dµ ≥ ε^p µ(A_{n,ε}) ,

and hence µ(A_{n,ε}) ≤ ε^{−p} ∫ |f_n − f|^p dµ → 0.
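The hedges in Fact 1.9(ii)-(iii) are necessary. A standard counterexample (our illustration, not from the book): on [0, 1] with Lebesgue measure, f_n = n·1_{(0,1/n)} converges to 0 almost everywhere and in measure, yet ||f_n||_1 = 1 for all n, so f_n does not converge to 0 in L¹.

```python
# f_n = n * 1_{(0, 1/n)} on [0,1]: the pointwise/in-measure limit is 0, but the
# L^1 norm stays equal to 1, so there is no L^1 convergence to 0.
def fn(n, x):
    return n if 0 < x < 1 / n else 0

grid = 100_000
xs = [(i + 0.5) / grid for i in range(grid)]

for n in [10, 100, 1000]:
    l1 = sum(abs(fn(n, x)) for x in xs) / grid              # ~ ||f_n||_1 = 1
    bad = sum(1 for x in xs if abs(fn(n, x)) > 0.5) / grid  # ~ mu{|f_n| > 1/2} = 1/n
    assert abs(l1 - 1.0) < 0.05
    assert bad <= 1 / n + 1 / grid
```

There is no dominating g ∈ L¹ here (sup_n f_n ≈ 1/x near 0 is not integrable), which is why Fact 1.9(ii) does not apply.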

Fact 1.10 Assume that µ(X) < ∞. (i) For 1 ≤ p ≤ ∞, if f_n → f in L^p, then f_n → f in measure. (ii) If f_n(x) → f(x) almost everywhere, then f_n → f in measure.

A function φ : R → R is said to be convex if for any λ_1, …, λ_n ≥ 0 such that ∑_{i=1}^n λ_i = 1, we have

  φ( ∑_{i=1}^n λ_i x_i ) ≤ ∑_{i=1}^n λ_i φ(x_i)

for x_i ∈ R. The sum ∑_{i=1}^n λ_i x_i is called a convex combination of the x_i.


Fact 1.11 (Jensen's inequality) Let (X, µ) be a probability measure space and let f : X → R be a measurable function. If φ : R → R is convex and φ ∘ f is integrable, then

  φ( ∫_X f dµ ) ≤ ∫_X φ(f(x)) dµ .

Fact 1.12 (Hölder's inequality) Let 1 < p < ∞, 1 < q < ∞, 1/p + 1/q = 1. Then

  ||fg||_1 ≤ ||f||_p ||g||_q .

Equality holds if and only if there exist constants C_1 ≥ 0 and C_2 ≥ 0, not both equal to 0, such that C_1 |f(x)|^p = C_2 |g(x)|^q for almost every x. If p = 2 we have the Cauchy–Schwarz inequality.

Fact 1.13 (Minkowski's inequality) Let 1 ≤ p ≤ ∞. Then

  ||f + g||_p ≤ ||f||_p + ||g||_p .

If 1 < p < ∞, then the equality holds if and only if there exist constants C_1 ≥ 0 and C_2 ≥ 0, not both zero, such that C_1 f = C_2 g almost everywhere. For p = 1 the equality holds if and only if there exists a measurable function h ≥ 0 such that f(x)h(x) = g(x) for almost every x satisfying f(x)g(x) ≠ 0.

Let V, V′ be vector spaces over a set of scalars F = R or C. A mapping L : V → V′ is said to be linear if L(c_1 v_1 + c_2 v_2) = c_1 L(v_1) + c_2 L(v_2) for v_1, v_2 ∈ V and c_1, c_2 ∈ F. Let (X_1, ||·||_1) and (X_2, ||·||_2) be two normed spaces. A linear mapping T : X_1 → X_2 is bounded if

  ||T|| = sup{ ||Tx||_2 : ||x||_1 ≤ 1 } < ∞ .

In this case, we have ||Tx||_2 ≤ ||T|| ||x||_1 for x ∈ X_1, and ||T|| is called the norm of T. A bounded linear map is continuous. A complete normed space is called a Banach space. For 1 ≤ p ≤ ∞, L^p is a Banach space. Another example of a Banach space is the set of continuous functions on [a, b] with the norm ||f|| = max_{a≤x≤b} |f(x)|.

Fact 1.14 (Chebyshev's inequality) If f ∈ L^p(X, µ), 1 ≤ p < ∞, then for any ε > 0 we have

  µ({x : |f(x)| > ε}) ≤ (1/ε^p) ∫_X |f|^p dµ .

Proof. Let A_ε = {x : |f(x)| > ε}. Then

  ∫_X |f|^p dµ ≥ ∫_{A_ε} |f|^p dµ ≥ ε^p µ(A_ε) .
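Chebyshev's inequality is easy to sanity-check by sampling. A small sketch (our own example with p = 2, standard normal samples, and an arbitrarily chosen seed):

```python
import random

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

p, eps = 2, 2.0
# Empirical left-hand side: mu({x : |f(x)| > eps})
lhs = sum(1 for s in samples if abs(s) > eps) / len(samples)
# Empirical right-hand side: (1/eps^p) * integral of |f|^p
rhs = sum(abs(s) ** p for s in samples) / len(samples) / eps ** p

assert lhs <= rhs   # roughly 0.045 versus 0.25 for a standard normal
```

The bound is far from tight here, which is typical: Chebyshev trades sharpness for complete generality.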


Definition 1.15. Let (X, µ) be a probability space. A sequence of measurable subsets {A_n}_{n=1}^∞ is said to be independent if, for any 1 ≤ i_1 < · · · < i_k, we have

  µ(A_{i_1} ∩ · · · ∩ A_{i_k}) = µ(A_{i_1}) · · · µ(A_{i_k}) .

Fact 1.16 (Borel–Cantelli Lemma) Let {A_n}_{n=1}^∞ be a sequence of measurable subsets of a probability space (X, µ).
(i) If ∑_{n=1}^∞ µ(A_n) < ∞, then µ(lim sup_{n→∞} A_n) = 0.
(ii) Suppose that A_1, A_2, … are independent. If ∑_{n=1}^∞ µ(A_n) = ∞, then µ(lim sup_{n→∞} A_n) = 1.
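Both halves of the Borel–Cantelli lemma can be observed in simulation. In the hypothetical experiment below (entirely our own illustration), the events A_n are independent with µ(A_n) = 1/n² (summable, so only finitely many occur almost surely) or µ(A_n) = 1/n (divergent, so infinitely many occur with probability 1); we count how many of the first N events occur along sample runs.

```python
import random

random.seed(1)

def occurrences(prob, N):
    """How many of the independent events A_1, ..., A_N occur in one run,
    where A_n occurs with probability prob(n)."""
    return sum(1 for n in range(1, N + 1) if random.random() < prob(n))

N, runs = 5000, 50
summable = [occurrences(lambda n: 1 / n ** 2, N) for _ in range(runs)]   # sum < oo
divergent = [occurrences(lambda n: 1 / n, N) for _ in range(runs)]       # sum = oo

# Summable case: only a handful of events occur per run (mean ~ pi^2/6).
# Divergent case: the count keeps growing, roughly like sum 1/n ~ log N.
assert sum(summable) / runs < 4
assert sum(divergent) / runs > 4
```

Increasing N leaves the summable counts essentially unchanged while the divergent counts keep growing, matching parts (i) and (ii) respectively.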

Given measure spaces (X, A, µ) and (Y, B, ν), define the product measure µ × ν on X × Y as follows: Consider the smallest σ-algebra on X × Y, denoted by A × B, containing rectangles A × B where A ∈ A, B ∈ B. Define µ × ν on rectangles by (µ × ν)(A × B) = µ(A)ν(B). It is known that µ × ν can be extended to a measure on (X × Y, A × B). For example, if µ is the Lebesgue measure on R, then µ × µ is the Lebesgue measure on R². Consider the product measure space (X × Y, A × B, µ × ν). Let f : X × Y → C be measurable with respect to A × B. Define f_x : Y → C by f_x(y) = f(x, y) and f^y : X → C by f^y(x) = f(x, y). Then f_x is measurable with respect to B for each x, and f^y is measurable with respect to A for each y.

Theorem 1.17 (Fubini's theorem). Let (X, A, µ) and (Y, B, ν) be measure spaces. If f(x, y) : X × Y → C is measurable with respect to A × B and if

  ∫_X ∫_Y |f(x, y)| dν(y) dµ(x) < ∞ ,

then

  ∫_{X×Y} f(x, y) d(µ × ν) = ∫_X ∫_Y f(x, y) dν dµ = ∫_Y ∫_X f(x, y) dµ dν .

In short, multiple integrals can be calculated as iterated integrals. For more details on product measures see [AG],[Rud2].

Definition 1.18. Let f : [a, b] → R and let P be a partition of [a, b] given by x_0 = a < x_1 < · · · < x_n = b. Put

  V_{[a,b]}(f, P) = ∑_{k=1}^n |f(x_k) − f(x_{k−1})|


and

  V_{[a,b]}(f) = sup_P V_{[a,b]}(f, P) ,

where the supremum is taken over all finite partitions P. If V_{[a,b]}(f) < ∞, then f is said to be of bounded variation and V_{[a,b]}(f) is called the total variation of f on [a, b].

An inner product on a complex vector space V is a function (·, ·) : V × V → C such that (i) (v_1 + v_2, w) = (v_1, w) + (v_2, w) and (v, w_1 + w_2) = (v, w_1) + (v, w_2), (ii) (cv, w) = c(v, w) and (v, cw) = c̄(v, w), c ∈ C, (iii) (v, w) is the complex conjugate of (w, v), (iv) (v, v) ≥ 0. The Cauchy–Schwarz inequality holds: |(v, w)| ≤ ||v|| ||w||. If (v, v) = 0 only for v = 0, then the inner product is said to be positive definite. In this case we define a norm by ||v|| = √(v, v). Thus a positive definite inner product space is a normed space. If the resulting metric is complete, then V is called a Hilbert space. Of all L^p spaces, the case p = 2 is special: L² is a Hilbert space. We can define an inner product by (f, g) = ∫_X f(x) g̅(x) dµ for f, g ∈ L²(X, µ). Let V be a vector space over a set of scalars F = R or C. A subspace W ⊂ V is said to be invariant under a linear operator L if Lw ∈ W for every w ∈ W. If Lv = λv for some v ≠ 0 in V and λ ∈ F, then v and λ are called an eigenvector and an eigenvalue of L, respectively. Given a Hilbert space H, an invertible linear operator U : H → H is said to be unitary if (Uf, Ug) = (f, g), f, g ∈ H. This is equivalent to the condition that (Uf, Uf) = (f, f) for f ∈ H. If U has an eigenvalue λ, then |λ| = 1. If W is invariant under U, then the orthogonal complement W⊥ = {f : (f, g) = 0 for every g ∈ W} is also invariant under U. Let L_1 and L_2 be bounded linear operators in Hilbert spaces H_1 and H_2, respectively. If there is a linear isomorphism, i.e., a linear one-to-one and onto mapping S : H_1 → H_2, such that (Sv, Sw)_{H_2} = (v, w)_{H_1} for v, w ∈ H_1 and SL_1 = L_2 S, then L_1 and L_2 are said to be unitarily equivalent. For the proofs of the facts in this section consult [Rud2],[To].
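Computing V_{[a,b]}(f, P) over a fixed partition is a one-liner; a small sketch (our own example function and partition):

```python
# V_{[a,b]}(f, P) = sum over the partition of |f(x_k) - f(x_{k-1})|.
def variation(f, partition):
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

# f(x) = x^2 on [-1, 1] is monotone on [-1, 0] and on [0, 1], so its total
# variation is |f(0)-f(-1)| + |f(1)-f(0)| = 1 + 1 = 2; any partition that
# contains the turning point 0 already attains this supremum.
f = lambda x: x * x
P = [-1 + 2 * k / 100 for k in range(101)]   # uniform partition containing 0
assert abs(variation(f, P) - 2.0) < 1e-9
```

For monotone pieces the sum telescopes, which is why refining the partition beyond the turning points does not increase the value.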

1.3 Nonnegative Matrices

1.3.1 Perron–Frobenius Theorem

We say that a matrix A = (a_ij) is nonnegative (positive) if a_ij ≥ 0 (a_ij > 0) for every i, j. In this case we write A ≥ 0 (A > 0). Similarly we define nonnegative and positive vectors. If a nonzero square matrix A ≥ 0 has an eigenvector v = (v_i) > 0, then the corresponding eigenvalue λ is positive. To see why, note that there exists i such that (Av)_i = ∑_j a_ij v_j > 0. Since λv_i = (Av)_i > 0 and v_i > 0, we have that λ > 0. We present the Perron–Frobenius Theorem, and prove the main part using the Brouwer Fixed Point Theorem. Eigenvectors of A will be fixed points of a continuous mapping defined in terms of A.


Throughout the section ||·||_1 denotes the 1-norm on Rⁿ, i.e., ||x||_1 = ∑_i |x_i|, x = (x_1, …, x_n). Put S = {v ∈ Rⁿ : ||v||_1 = 1, v_i ≥ 0 for every i}.

Lemma 1.19. If none of the columns of A ≥ 0 is a zero vector, then there exists an eigenvalue λ > 0 with an eigenvector w ≥ 0.

Proof. Let A be an n × n matrix. Take v ∈ S. Then Av is a convex linear combination of the columns of A and ||Av||_1 > 0. Define f : S → S by

  f(v) = (1/||Av||_1) Av .

Then f is continuous and has a fixed point w ∈ S by the Brouwer Fixed Point Theorem since S is homeomorphic to the closed unit ball in R^{n−1}. Hence Aw/||Aw||_1 = w and by putting λ = ||Aw||_1 we have Aw = λw.

Lemma 1.20. If A ≥ 0 has an eigenvalue λ > 0 with an eigenvector w > 0, then

  λ = lim_{m→∞} ( ∑_{ij} (A^m)_{ij} )^{1/m} .

(This implies that λ is unique.) If µ ∈ C is an eigenvalue of A, then |µ| ≤ λ.

Proof. We may assume ||w||_1 = 1. Put C = min_j w_j > 0. Since ||A^m w||_1 = ∑_{ij} (A^m)_{ij} w_j, we have

  C ∑_{ij} (A^m)_{ij} ≤ ||A^m w||_1 ≤ ∑_{ij} (A^m)_{ij} .

From ||A^m w||_1 = ||λ^m w||_1 = λ^m,

  C^{1/m} ( ∑_{ij} (A^m)_{ij} )^{1/m} ≤ λ ≤ ( ∑_{ij} (A^m)_{ij} )^{1/m} .

Now let m → ∞. For the second statement, choose v ∈ Cⁿ such that Av = µv and ||v||_1 = ∑_j |v_j| = 1. Put u = (|v_1|, …, |v_n|) ∈ Rⁿ. Then

  |µ|^m = ||µ^m v||_1 = ||A^m v||_1 ≤ ||A^m u||_1 ≤ ∑_{ij} (A^m)_{ij} ,

and hence |µ| ≤ ( ∑_{ij} (A^m)_{ij} )^{1/m} for every m.
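Lemma 1.20 gives a practical way to estimate the Perron–Frobenius eigenvalue: raise A to a high power and take the m-th root of the entry sum. A short sketch (the matrix below is a made-up positive example whose largest eigenvalue is 2):

```python
# Estimate the Perron-Frobenius eigenvalue via Lemma 1.20:
# lambda = lim_m ( sum_ij (A^m)_ij )^(1/m).
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def perron_estimate(A, m):
    P = A
    for _ in range(m - 1):
        P = mat_mul(P, A)
    s = sum(sum(row) for row in P)
    return s ** (1.0 / m)

A = [[1, 1], [1, 1]]        # positive matrix with eigenvalues 2 and 0
est = perron_estimate(A, 50)
assert abs(est - 2.0) < 0.05
```

Convergence is only of order λ·const^{1/m}, so in practice one would normalize at each step (as in the proof of Lemma 1.19) to avoid overflow for large m.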

Deﬁnition 1.21. A matrix A ≥ 0 is said to be irreducible if for any i, j there exists a positive integer m depending on i, j such that (Am )ij > 0. The period of a state i, denoted by Per(i), is the greatest common divisor of those integers n ≥ 1 for which (An )ii > 0. If no such integers exist, we deﬁne Per(i) = ∞. The period of A is the greatest common divisor of the numbers Per(i) that are ﬁnite, or is ∞ if Per(i) = ∞ for all i. A matrix is said to be aperiodic if it has period 1.


If A is irreducible, then all the states have the same period, and so the period of A is the period of any of its states. If A is irreducible and aperiodic, then there exists n ≥ 1 satisfying (Aⁿ)_{ij} > 0 for every i, j. For example, consider the matrix A with rows (1 1) and (0 1). Then Aⁿ has rows (1 n) and (0 1) for every n ≥ 1. Thus A is not irreducible, but it is aperiodic.

Lemma 1.22. Let A be an irreducible matrix. Then any eigenvector w ≥ 0 corresponding to a positive eigenvalue is positive. (Hence the eigenvector w obtained in Lemma 1.19 is positive, and the conclusion of Lemma 1.20 holds.)

Proof. Since w ≠ 0, there exists w_k > 0. For any i there exists m depending on i such that (A^m)_{ik} > 0. Since A^m w = λ^m w, we have ∑_j (A^m)_{ij} w_j = λ^m w_i and 0 < (A^m)_{ik} w_k ≤ λ^m w_i.

Theorem 1.23 (Perron–Frobenius Theorem). Let A be irreducible. Then there is a unique eigenvalue λ_A > 0 satisfying the following: (i) |µ| ≤ λ_A for every eigenvalue µ. (ii) The positive eigenvector w corresponding to λ_A is unique up to scalar multiplication. (iii) λ_A is a simple root of the characteristic polynomial of A. (iv) If A is also aperiodic, then |µ| < λ_A for any eigenvalue µ ≠ λ_A.

Proof. (i) Use Lemmas 1.19, 1.20, 1.22. (ii) Assume that there exists v ≠ 0 such that Av = λ_A v. Consider w + tv, −∞ < t < ∞, which is also an eigenvector corresponding to λ_A. Note that, if t is close to 0, then it is a positive vector. We can choose t = t_0 so that w + t_0 v ≥ 0 and at least one of the components of w + t_0 v is zero. By Lemma 1.22 this is impossible unless w + t_0 v = 0. (iii),(iv) Consult p.112 and p.129 in [LM], respectively.

The eigenvalue λ_A is called the Perron–Frobenius eigenvalue and a corresponding positive eigenvector is called a Perron–Frobenius eigenvector. For more information see [Ga],[Kit].

1.3.2 Stochastic matrices

Definition 1.24. A square matrix P = (p_ij), with p_ij ≥ 0 for every i, j, is called a stochastic matrix if ∑_j p_ij = 1 for every i.

If P is a stochastic matrix then Pⁿ, n ≥ 1, is also a stochastic matrix. The value p_ij represents the probability of the transition from the ith state to the jth state and (Pⁿ)_ij represents the probability of the transition from the ith state to the jth state in n steps. When stochastic matrices are discussed, we multiply by row vectors from the left as a convention. So when we say that v is an eigenvector of P we mean vP = λv for some constant λ. Obviously the results for column operations obtained in Subsect. 1.3.1 are also valid for row operations.


Theorem 1.25. Let P be a stochastic matrix. Then the following holds:
(i) P has an eigenvalue 1.
(ii) There exists v ≥ 0, v ≠ 0, such that vP = v.
(iii) If P is irreducible, then there exists a unique row eigenvector π = (π_1, …, π_n) such that πP = π with π_i > 0 and ∑_i π_i = 1.

Proof. (i) The column vector u = (1, …, 1)ᵀ ∈ Rⁿ satisfies Pu = u. Hence 1 is an eigenvalue of P. (ii) Since none of the rows of P is the zero vector, we can apply the idea from the proof of Lemma 1.19 to prove the existence of a positive eigenvalue λ with a nonnegative eigenvector. Take f(v) = vP. In this case, for v ∈ S,

  ||vP||_1 = ∑_j (vP)_j = ∑_j ∑_i v_i p_ij = ∑_i v_i ( ∑_j p_ij ) = ∑_i v_i = 1 .

Hence we may choose λ = 1. (iii) Apply Theorem 1.23(ii).

If P is irreducible, then the transition from an arbitrary state to another arbitrary state through intermediate states is possible. If P is irreducible and aperiodic, then π is a steady state solution of the probabilistic transition problem, i.e., after a sufficiently long time the probability that we are at the state i is close to πi, whatever the initial probability distribution is. A vector (p1, . . . , pn) ≥ 0 is called a probability vector if Σi pi = 1. Consult Theorem 5.21 and Maple Program 1.8.8.
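The convergence to the steady state can be sketched numerically; assuming an irreducible aperiodic example matrix (the same illustrative 3 × 3 matrix as above, not from the text), iterating v ↦ vP from any probability vector approaches π.

```python
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def step(v, P):
    """One left multiplication v -> vP (row vector convention)."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

# Start from an arbitrary initial probability distribution and iterate.
v = [1.0, 0.0, 0.0]
for _ in range(200):
    v = step(v, P)

# v is (numerically) stationary: vP = v, entries positive, sum 1.
w = step(v, P)
assert all(abs(v[j] - w[j]) < 1e-10 for j in range(3))
assert abs(sum(v) - 1.0) < 1e-10 and all(x > 0 for x in v)
```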

1.4 Compact Abelian Groups and Characters

1.4.1 Compact abelian groups

A group is a set G that is closed under the operation · : G × G → G satisfying (i) (g1 · g2) · g3 = g1 · (g2 · g3) for g1, g2, g3 ∈ G, (ii) there is an identity element e ∈ G such that g · e = e · g = g for g ∈ G, (iii) for g ∈ G there exists an inverse g⁻¹ ∈ G such that g · g⁻¹ = g⁻¹ · g = e. A group is said to be abelian if the operation is commutative: g1 · g2 = g2 · g1 for every g1, g2. In most cases the group multiplication symbol ‘·’ is omitted. When the elements of the group are numbers or vectors and the operation is the usual addition, the additive notation is often used. For A ⊂ G, we write gA = {ga : a ∈ A} and Ag = {ag : a ∈ A}. Let G and H be groups. A mapping φ : G → H is called a homomorphism if φ(g1 g2) = φ(g1)φ(g2) for g1, g2 ∈ G. In this case, φ(g⁻¹) = φ(g)⁻¹ and the identity element in G is sent to the identity element in H. If G = H, then φ is called an endomorphism. If a homomorphism φ : G → H is one-to-one and onto, then it is an isomorphism. If an endomorphism is also an isomorphism, then


it is an automorphism. If φ : G → H is a homomorphism, then the kernel {g ∈ G : φ(g) = e}, denoted by ker φ, and the range {φ(g) : g ∈ G} are subgroups of G and H, respectively. A metric space (X, d) is locally compact if for every x0 ∈ X there exists r > 0 such that {x ∈ X : d(x, x0) ≤ r} is compact. A topological group is a group G with a topological structure, e.g., a metric, such that the following operations are continuous: (i) (g1, g2) → g1 g2, (ii) g → g⁻¹. If G is compact, then it is called a compact group. When G and H are topological groups, we implicitly assume that a given homomorphism φ is continuous. A measure µ on a topological group G is left-invariant if µ(gA) = µ(A) for every g ∈ G and every measurable A ⊂ G. A right-invariant measure is similarly defined. On an abelian group a left-invariant measure is also right-invariant. On a locally compact group there exist left-invariant measures and right-invariant measures, called left and right Haar measures, respectively, under some regularity conditions. On compact groups, left and right Haar measures coincide and they are finite. For example, the circle T = {e^{2πix} : 0 ≤ x < 1} is regarded as the unit interval with its endpoints identified, and its Haar measure is Lebesgue measure. For more information consult [Fo1],[Roy].

Theorem 1.26. (i) A closed subgroup of the circle group T with infinitely many elements is T itself. (ii) A finite subgroup of T with n elements is of the form {1, w, . . . , w^{n−1}} where w = e^{2πi/n}.

Proof. (i) Let H be an infinite closed subgroup of T. Since H is compact, it has a limit point a. In any small neighborhood of a there exists an element of H, say b. Since a/b and b/a are in H and since they are close to 1, 1 is also a limit point. In short, for every n ≥ 1 there exists u = e^{2πiθ} ∈ H such that θ is real and |θ| < 1/n. Since {u^k : k ∈ Z} ⊂ H, any interval of length greater than 1/n contains an element of H. Therefore H is dense in T, and H = T.
(ii) If a finite subgroup H has n elements, then z^n = 1 for every z ∈ H. Since the polynomial equation z^n = 1 has only n solutions, and since H has

n elements, all the points satisfying z^n = 1 belong to H.

Theorem 1.27. (i) The automorphisms of T are given by z → z and z → z̄. (ii) The homomorphisms of T are given by z → z^n for some n ∈ Z. (iii) The homomorphisms of T^n to T are of the form (z1, . . . , zn) → z1^{k1} · · · zn^{kn} for some (k1, . . . , kn) ∈ Z^n.

Proof. (i) Let h : T → T be an isomorphism. Since (−1)² = 1, we have h(−1)² = h(1) = 1, and hence h(−1) = ±1. Since h is one-to-one, h(−1) = −1. Since i² = −1, we have h(i)² = h(−1) = −1, and hence h(i) = ±i. First, consider the case h(i) = i. The image of the arc with the endpoints 1 and i is the same arc since a continuous image of a connected set is connected. By induction we see that h(e^{2πi/2^k}) = e^{2πi/2^k}, and hence h(e^{2πij/2^k}) = h(e^{2πi/2^k})^j = e^{2πij/2^k}. Thus h(z) = z for points z in a dense subset. Since h is continuous, h(z) = z on T. For the case h(i) = −i, we consider the automorphism g(z) = h(z̄) and use the previous result.
(ii) Let h : T → T be a homomorphism. Assume that h ≠ 1. Since h is continuous, the range h(T) is a connected subgroup, which implies that h(T) = T. Since ker h is a closed subgroup, but not the whole group, we see that ker h is finite. By Theorem 1.26 we have ker h = {1, w, . . . , w^{n−1}} where w = e^{2πi/n} for some n ≥ 1. Let Jn be the closed arc from the endpoint 1 to w in the counterclockwise direction. Then h(Jn) = T and h is one-to-one in the interior of Jn. Note that h(e^{2πi/(2n)}) = −1 since h(w) = 1. Hence h(e^{2πi/(4n)}) = ±i. First, consider the case h(e^{2πi/(4n)}) = i. Proceeding as in Part (i), we see that h(z) = z^n on Jn. Take q ∈ T. Since q = z^k where k ≥ 1 and z ∈ Jn, we observe that h(q) = h(z^k) = h(z)^k = (z^n)^k = (z^k)^n = q^n. In the case h(e^{2πi/(4n)}) = −i, we have h(z) = z^{−n}, n ≥ 1.
(iii) Let h : T^n → T be a homomorphism. An element (z1, . . . , zn) ∈ T^n is a product of n elements uj = (1, . . . , 1, zj, 1, . . . , 1). Hence h((z1, . . . , zn)) = h(u1) · · · h(un). Now use Part (ii) for each h(uj).

1.4.2 Characters

Let G be a compact abelian group. A character of G is a continuous mapping χ : G → C such that (i) |χ(g)| = 1, (ii) χ(g1 g2) = χ(g1)χ(g2). In other words, a character of G is a homomorphism from G into the unit circle group. Note that χ(e) = 1 and χ(g⁻¹) = 1/χ(g), the complex conjugate of χ(g). A trivial character χ is the one that sends every element to 1. In that case, we write χ = 1. The set of all characters is also a group. It is called the dual group and is denoted by Ĝ. The following facts are known: (i) (G1 × G2)^ = Ĝ1 × Ĝ2, (ii) (Ĝ)^ = G.

Example 1.28. (i) The characters of Zn = {0, 1, . . . , n − 1} are given by χk(j) = e^{2πijk/n}, j ∈ Zn, 0 ≤ k ≤ n − 1. (ii) The characters of T are given by χk(z) = z^k, k ∈ Z. See Theorem 1.27(ii).

Let µ be the Haar measure on a compact abelian group G. For f ∈ L¹(G, µ) define the Fourier transform f̂ : Ĝ → C by

f̂(χ) = ∫_G f(g) χ(g⁻¹) dµ .

For each χ, f̂(χ) is called a Fourier coefficient of f. Note that |f̂(χ)| ≤ ||f||₁ for every χ ∈ Ĝ. The characters form an orthonormal basis for L²(G, µ); in other words, for f ∈ L²(G, µ) we have the Fourier series expansion

f(x) = Σ_{χ∈Ĝ} f̂(χ) χ(x)

and

∫_G |f(x)|² dµ = Σ_{χ∈Ĝ} |f̂(χ)|² .

Furthermore, for f1, f2 ∈ L²(G, µ), we have

∫_G f1(x) f2(x)* dµ = Σ_{χ∈Ĝ} f̂1(χ) f̂2(χ)* ,

where * denotes complex conjugation, which is called Parseval's identity. For g ∈ G, consider the unitary operator Ug on L²(G, µ) defined by (Ug f)(x) = f(gx). Since for a character χ,

(Ug χ)(x) = χ(gx) = χ(g)χ(x) ,

we have Ug(Σ cχ χ) = Σ cχ χ(g) χ. Thus Ug = Σ_{χ} χ(g) Pχ, where Pχ is the orthogonal projection onto the 1-dimensional subspace spanned by χ.

1.4.3 Fourier series

A classical example of the Fourier transform on a compact abelian group is the Fourier series on the unit circle identified with [0, 1). If f(x) is a periodic function of period 1, then f may be regarded as a function on [0, 1] with f(0) = f(1), and so it may be regarded as a function on the unit circle. As for the pointwise convergence of the Fourier series the following fact is known: If f(x) is continuous with f(0) = f(1) and if f′(x) is piecewise continuous with jump discontinuities, then

f(x) = Σ_{n∈Z} f̂(n) e^{2πinx}

where

f̂(n) = ∫₀¹ f(x) e^{−2πinx} dx

and the convergence is absolute and uniform.

Fact 1.29 (Mercer's theorem) Let f be an integrable function on [0, 1]. Then

lim_{|n|→∞} f̂(n) = 0 .

For the proof see [Hels]. The same type of theorem for the Fourier transform on the real line is called the Riemann–Lebesgue lemma. Define the Fourier coefficients of a measure µ on the unit circle [0, 1) by

µ̂(n) = ∫₀¹ e^{−2πinx} dµ(x) .

Observe that µ̂(0) = 1 and |µ̂(n)| ≤ 1 for every n. If dµ = f(x) dx for an integrable function f, then µ̂(n) = f̂(n) for every n.
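As a quick numerical illustration (in Python rather than the book's Maple), Fourier coefficients on the circle can be approximated by Riemann sums; for the trigonometric polynomial f(x) = cos 2πx the only nonzero coefficients, f̂(1) = f̂(−1) = 1/2, are recovered up to roundoff.

```python
from math import cos, pi
import cmath

def fourier_coeff(f, n, N=4096):
    """Riemann-sum approximation of the nth Fourier coefficient
    of f on [0, 1): the integral of f(x) e^{-2πinx} dx."""
    return sum(f(k / N) * cmath.exp(-2j * pi * n * k / N)
               for k in range(N)) / N

# f(x) = cos(2πx) = (e^{2πix} + e^{-2πix})/2, so the only nonzero
# coefficients are at n = ±1, each equal to 1/2.
f = lambda x: cos(2 * pi * x)

assert abs(fourier_coeff(f, 1) - 0.5) < 1e-9
assert abs(fourier_coeff(f, -1) - 0.5) < 1e-9
assert abs(fourier_coeff(f, 3)) < 1e-9   # every other coefficient vanishes
```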


Two integrable functions on [0, 1] with the same Fourier coefficients are equal almost everywhere. Furthermore, two measures with the same Fourier coefficients are also identical. Here is a proof: Let V be the vector space of all continuous functions f on [0, 1] such that f(0) = f(1), with the norm ||f|| = max |f(x)|. A Borel probability measure µ may be regarded as a linear mapping from V into C defined by µ(f) = ∫₀¹ f dµ. Then |µ(f)| ≤ ||f|| and µ : V → C is continuous. If two measures have the same Fourier coefficients, then they coincide on the set of the trigonometric polynomials, which is dense in V. Hence the two measures are identical. For a different proof see [Hels].

Fact 1.30 Let f be a continuous function on the unit circle, so that it can be regarded as a continuous function of period 1 on the real line. If f is k-times differentiable on the unit circle, and if f^{(k)} is integrable, then

n^k f̂(n) → 0  as |n| → ∞ .

Proof. First, consider k = 1. Integration by parts gives

∫₀¹ f′(x) e^{−2πinx} dx = [f(x) e^{−2πinx}]₀¹ − (−2πin) ∫₀¹ f(x) e^{−2πinx} dx = 2πin f̂(n) ,

where the boundary term vanishes since f(0) = f(1). In general, for k ≥ 1, the nth Fourier coefficient of f^{(k)} equals (2πin)^k f̂(n). Hence

|n^k f̂(n)| = (1/(2π)^k) |the nth Fourier coefficient of f^{(k)}| .

Now apply Fact 1.29 to f^{(k)}.

Example 1.31. The nth Fourier coefficient of f(x) = 1/4 − |x − 1/2| is given by

f̂(n) = 0 for n even,  f̂(n) = −1/(π²n²) for n odd .
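The stated values can be checked numerically (a Python sketch, independent of the book's Maple programs): since f is symmetric about x = 1/2, the sine part of each coefficient integral vanishes and a cosine Riemann sum suffices.

```python
from math import cos, pi

def coeff(n, N=20000):
    """Riemann-sum approximation of the nth Fourier coefficient of
    f(x) = 1/4 - |x - 1/2|; f is symmetric about x = 1/2, so only
    the cosine part of the integral contributes."""
    return sum((0.25 - abs(k / N - 0.5)) * cos(2 * pi * n * k / N)
               for k in range(N)) / N

assert abs(coeff(2)) < 1e-6                     # even n: coefficient 0
assert abs(coeff(1) + 1 / pi**2) < 1e-6         # odd n: -1/(pi^2 n^2)
assert abs(coeff(3) + 1 / (9 * pi**2)) < 1e-6
```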

See Maple Program 1.8.9.

1.4.4 Endomorphisms of a torus

Let T^n denote the n-dimensional torus {(x1, . . . , xn) : 0 ≤ xi < 1, 1 ≤ i ≤ n}, in the additive notation, which may be identified with the additive quotient group R^n/Z^n. We state some basic facts: (i) A finite subgroup of T is of the form {0, 1/n, . . . , (n−1)/n} for some n. (ii) An automorphism of T satisfies either


x → x or x → 1 − x for every x ∈ T. (iii) A homomorphism of T is of the form x → kx (mod 1) for some k ∈ Z. (iv) A homomorphism of T^n to T is of the form (x1, . . . , xn) → k1 x1 + · · · + kn xn (mod 1) for some (k1, . . . , kn) ∈ Z^n. We will use additive and multiplicative notations interchangeably in this book. A matrix is called an integral matrix if all the entries are integers.

Theorem 1.32. Let A and B denote n × n integral matrices. Then we have the following: (i) Define a mapping φ : T^n → T^n by φ(x) = Ax (mod 1). Then φ is an endomorphism of T^n. (ii) If φ and ψ are endomorphisms of T^n represented by A and B, then the composite map ψ ◦ φ is represented by BA. (iii) Every endomorphism of T^n is of the form given in (i). (iv) An endomorphism φ : T^n → T^n, φ(x) = Ax (mod 1), is onto if and only if det A ≠ 0. (v) An endomorphism φ : T^n → T^n, φ(x) = Ax (mod 1), is one-to-one and onto if and only if det A = ±1.

Proof. (i),(ii) Obvious. (iii) Let πi : T^n → T be the projection onto the ith coordinate. Then πi ◦ φ : T^n → T is a homomorphism. Now use Theorem 1.27(iii). (iv) If det A ≠ 0, then the linear mapping L_A : R^n → R^n, L_A(x) = Ax, is onto, and hence φ is also onto. Suppose det A = 0. Then det A^T = 0 and there exists v ≠ 0, v ∈ Q^n, such that A^T v = 0. Note that we can obtain v using Gaussian elimination. Since the entries of A^T are rational numbers and since Gaussian elimination uses only addition, subtraction, multiplication and division by nonzero numbers, we can choose v so that the entries of v are also rational numbers. Furthermore, by multiplying by a suitable constant if necessary, we can choose v so that its entries are integers. Let v = (k1, . . . , kn) and define a homomorphism χ : T^n → T by χ(x1, . . . , xn) = k1 x1 + · · · + kn xn (mod 1). Since A^T v = 0, we have v · Ay = A^T v · y = 0 for y ∈ R^n. Hence the range of φ is included in ker χ ≠ T^n, and φ is not onto. (v) Note that the inverse of an automorphism φ is also an automorphism.
Let A and B be two integral matrices representing φ and φ⁻¹. By (ii) we see that AB = BA = I, and hence (det A)(det B) = 1. Since the determinant of an integral matrix is also an integer, we conclude that det A = ±1. Conversely, suppose det A = ±1. Then A⁻¹ is also an integral matrix, since the (j, i)th entry of A⁻¹ is given by (−1)^{i+j} det A_{ij} / det A, where A_{ij} is the (n−1) × (n−1) matrix obtained by deleting the ith row and the jth column of A. Let ψ be the endomorphism of T^n defined by A⁻¹. By (ii) we see that ψ ◦ φ and φ ◦ ψ are endomorphisms of T^n and that both are represented by the identity matrix. Hence ψ ◦ φ = φ ◦ ψ = id, and so φ is invertible.

Example 1.33. Consider the toral endomorphism φ given by

A = | 3 1 |
    | 1 1 | .

Then φ is not one-to-one since φ(x + 1/2, y + 1/2) = φ(x, y).
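Example 1.33 can be verified with exact rational arithmetic; the Python sketch below (not from the text) also records det A = 2, consistent with Theorem 1.32(v): φ is onto but not one-to-one.

```python
from fractions import Fraction as F

A = [[3, 1], [1, 1]]

def phi(p):
    """Toral endomorphism x -> Ax (mod 1) on the 2-torus,
    computed with exact rational arithmetic."""
    x, y = p
    return ((A[0][0] * x + A[0][1] * y) % 1,
            (A[1][0] * x + A[1][1] * y) % 1)

# phi identifies (x, y) with (x + 1/2, y + 1/2), so it is not one-to-one.
p = (F(1, 10), F(1, 5))
q = (p[0] + F(1, 2), p[1] + F(1, 2))
assert phi(p) == phi(q)

# det A = 2 is nonzero but not ±1: phi is onto, yet not injective
# (Theorem 1.32(iv),(v)).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == 2
```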


1.5 Continued Fractions

Let 0 < θ < 1 be an irrational number, and consider its continued fraction expansion

θ = 1/(a1 + 1/(a2 + 1/(a3 + · · · ))) .

The natural numbers an are called the partial quotients and we write θ = [a1, a2, a3, . . .]. Take p−1 = 1, p0 = 0, q−1 = 0, q0 = 1. For n ≥ 1 choose pn, qn such that (pn, qn) = 1 and

pn/qn = [a1, a2, . . . , an] = 1/(a1 + 1/(a2 + · · · + 1/an)) .

The rational numbers pn/qn are called the convergents.

Example 1.34. (i) Let θ0 = [k, k, k, . . .] for k ≥ 1. Then θ0 = 1/(k + θ0), and

θ0 = (−k + √(k² + 4)) / 2 .

(ii) It is known that e = 2.718281828459045 . . . satisfies

e = 2 + [1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, . . . , 1, 2n, 1, . . .] .

See Maple Programs 1.8.10, 1.8.11.

Definition 1.35. For x ∈ R put

||x|| = min_{n∈Z} |x − n| .

Thus ||x|| is the distance from x to Z.

Fact 1.36 (i) qn+1 = an+1 qn + qn−1, pn+1 = an+1 pn + pn−1.
(ii) pn+1 qn − pn qn+1 = (−1)^n.
(iii) The signs of the sequence θ − pn/qn, n ≥ 1, alternate.
(iv) 1/(qn(qn+1 + qn)) < |θ − pn/qn| < 1/(qn qn+1), n ≥ 0.
(v) 1/(qn+1 + qn) < ||qn θ|| < 1/qn+1, n ≥ 1.
(vi) If 1 ≤ j < qn+1, then ||jθ|| ≥ |qn θ − pn| = ||qn θ||. Hence ||qn θ|| decreases monotonically to 0 as n → ∞.


Proof. For (i)–(v) consult [AB],[Kh],[KuN],[La]. For (vi) we follow the proof in Chap. 2, Sect. 3 of [RS]. Suppose that ||jθ|| = |jθ − h| for some h ∈ Z. Consider the equation jθ − h = A(qn+1 θ − pn+1) + B(qn θ − pn). Since θ is irrational, we obtain

( h )   ( pn+1  pn ) ( A )
( j ) = ( qn+1  qn ) ( B ) .

Since the determinant of the matrix is equal to (−1)^n, we see that A and B are integers. Since (pk, qk) = 1 and j < qn+1, we see that h/j is different from pn+1/qn+1 and that B ≠ 0. If A = 0, then |jθ − h| = |B(qn θ − pn)| ≥ |qn θ − pn|. Now suppose that AB ≠ 0. Then A and B have opposite signs since j < qn+1. Since qn+1 θ − pn+1 and qn θ − pn also have opposite signs, we have |jθ − h| = |A(qn+1 θ − pn+1)| + |B(qn θ − pn)| ≥ |qn θ − pn|.
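The recurrences and error bounds of Fact 1.36 are easy to test numerically; a Python sketch (independent of the Maple programs) for θ0 = [1, 1, 1, . . .] = (√5 − 1)/2 from Example 1.34(i):

```python
from math import sqrt

def partial_quotients(theta, count):
    """Partial quotients of 0 < theta < 1 via x -> 1/x - floor(1/x)."""
    a, x = [], theta
    for _ in range(count):
        q = int(1 / x)
        a.append(q)
        x = 1 / x - q
    return a

theta = (sqrt(5) - 1) / 2   # theta_0 = [1, 1, 1, ...] (Example 1.34, k = 1)
assert partial_quotients(theta, 10) == [1] * 10

# Convergents from Fact 1.36(i), starting from p_{-1} = 1, p_0 = 0,
# q_{-1} = 0, q_0 = 1; list index i holds the term with subscript i - 1.
p, q = [1, 0], [0, 1]
for an in [1] * 12:
    p.append(an * p[-1] + p[-2])
    q.append(an * q[-1] + q[-2])

assert p[3] * q[2] - p[2] * q[3] == -1               # Fact 1.36(ii), n = 1
assert abs(theta - p[8] / q[8]) < 1 / (q[8] * q[9])  # Fact 1.36(iv), n = 7
```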

1.6 Statistics and Probability

We introduce basic facts. For more information, consult [AG],[Du],[Fo2].

1.6.1 Independent random variables

The terminology in probability theory is different from that in measure theory but there is a correspondence between them. See Table 1.1. A sample space Ω with a probability P is a probability measure space with a probability measure µ. A random variable X on Ω is a measurable function X : Ω → R. (The values of X are usually denoted by x.) Define a probability measure µX on R by

µX(A) = P(X⁻¹(A)) = Pr(X ∈ A) ,

where Pr(X ∈ A) denotes the probability that a value of X belongs to A ⊂ R. The probability measure µX has all the information on the distribution of values of X. For a measurable function h : R → R, we have

∫_Ω h(X(ω)) dP(ω) = ∫_{−∞}^{∞} h(x) dµX(x) .

Taking h(x) = |x|^p, we have

E[|X|^p] = ∫_Ω |X(ω)|^p dP(ω) = ∫_{−∞}^{∞} |x|^p dµX(x) .

If there exists fX ≥ 0 on R such that

∫_A fX(x) dx = µX(A)

for every A, then fX is called a probability density function (pdf) of X.


The expectation (or mean) and the variance of X are defined by

E[X] = ∫_{−∞}^{∞} x dµX(x)

and

σ²[X] = ∫_{−∞}^{∞} (x − E[X])² dµX(x) .

The standard deviation, denoted by σ, is the square root of the variance. If two random variables X and Y are identically distributed, i.e., they have the same associated probability measures dµX and dµY on R, then they have the same expectation and the same variance.

Table 1.1. Comparison of terminology

  Measure Theory                       Probability Theory
  a probability measure space X        a sample space Ω
  x ∈ X                                ω ∈ Ω
  a σ-algebra A                        a σ-field F
  a measurable subset A                an event E
  a probability measure µ              a probability P
  µ(A)                                 P(E)
  a measurable function f              a random variable X
  f(x)                                 x, a value of X
  a characteristic function χE         an indicator function 1E
  Lebesgue integral ∫_X f dµ           expectation E[X]
  almost everywhere                    almost surely, or with probability 1
  convergence in L¹                    convergence in mean
  convergence in measure               convergence in probability
  conditional measure µA(B)            conditional probability Pr(B|A)

Definition 1.37. A sequence of random variables Xi : Ω → R, i ≥ 1, is said to be independent if the sequence of subsets {Xi⁻¹(Bi)}, i ≥ 1, is independent for all Borel measurable subsets Bi ⊂ R. (For independent subsets consult Definition 1.15.)

Fact 1.38 Let X1, . . . , Xn be independent random variables.
(i) For any Borel measurable subsets Bi ⊂ R we have
Pr(X1 ∈ B1, . . . , Xn ∈ Bn) = Pr(X1 ∈ B1) · · · Pr(Xn ∈ Bn) .
(ii) If −∞ < E[Xi] < ∞ for every i, then
E[X1 · · · Xn] = E[X1] · · · E[Xn] .
(iii) If E[Xi²] < ∞ for every i, then
σ²[X1 + · · · + Xn] = σ²[X1] + · · · + σ²[Xn] .
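Fact 1.38(ii),(iii) can be checked exactly for discrete random variables realized on a product sample space with the product measure; the two distributions below are illustrative choices, not taken from the text.

```python
from fractions import Fraction as F
from itertools import product

# Two independent discrete random variables, realized on a product
# sample space with the product probability measure.
pX = {0: F(1, 3), 1: F(2, 3)}              # distribution of X
pY = {0: F(1, 2), 2: F(1, 4), 5: F(1, 4)}  # distribution of Y

E = lambda d: sum(v * p for v, p in d.items())
var = lambda d: sum((v - E(d)) ** 2 * p for v, p in d.items())

# Fact 1.38(ii): E[XY] over the product space equals E[X] E[Y].
EXY = sum(x * y * pX[x] * pY[y] for x, y in product(pX, pY))
assert EXY == E(pX) * E(pY)

# Fact 1.38(iii): variances add for the sum of independent variables.
pSum = {}
for x, y in product(pX, pY):
    pSum[x + y] = pSum.get(x + y, F(0)) + pX[x] * pY[y]
assert var(pSum) == var(pX) + var(pY)
```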


1.6.2 Change of variables formula

Let X be a random variable with a continuous pdf fX, i.e., the probability of finding a value of X between x and x + ∆x is fX(x)|∆x|. If y = g(x) is a monotone function, then we define Y = g(X). Let fY be the pdf for Y, that is, the probability of finding a value of Y between y and y + ∆y is fY(y)|∆y|. Note that fX(x)|∆x| = fY(y)|∆y| implies

fY(y) = fX(x) |∆x/∆y| .

Hence

fY(y) = fX(x) · 1/|g′(x)| = fX(g⁻¹(y)) · 1/|g′(g⁻¹(y))| .

Example 1.39. Let X be a random variable uniformly distributed in [0, 1], i.e., 0 ≤ X ≤ 1 and fX(x) = 1 for 0 ≤ x ≤ 1, and fX(x) = 0 elsewhere. Define a new random variable Y = − ln(X), i.e., g(x) = − ln x. Then 0 ≤ Y < ∞. Since g′(x) = −1/x and x = e^{−y}, we have

fY(y) = e^{−y} ,  y ≥ 0 .

Example 1.40. If a random variable X has the pdf fX(x) = e^{−x}, x ≥ 0, then the pdf fY of Y = ln X satisfies

fY(y) = fX(x) · x = e^{−e^y} e^y = e^{y − e^y} .

See Fig. 1.1.
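Example 1.39 can be illustrated by simulation (Python's random module rather than Maple, seeded for reproducibility): the empirical cdf of Y = −ln X matches 1 − e^{−y}.

```python
import random
from math import log, exp

random.seed(0)
n = 200_000

# If X is uniform on (0, 1], then Y = -ln X has pdf e^{-y}, hence
# cdf Pr(Y <= y) = 1 - e^{-y}.  (Using 1 - random.random() avoids
# the rare value 0, where the logarithm is undefined.)
ys = [-log(1.0 - random.random()) for _ in range(n)]

for y in (0.5, 1.0, 2.0):
    empirical = sum(1 for v in ys if v <= y) / n
    assert abs(empirical - (1 - exp(-y))) < 0.01
```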


Fig. 1.1. Pdf’s of X (left) and Y = log X (right) when X has an exponential distribution

1.6.3 Statistical laws There are various versions of the Law of Large Numbers. In this subsection the symbol µ denotes a mean, not a measure.


Theorem 1.41 (Weak Law of Large Numbers). Let X1, X2, X3, . . . be a sequence of independent identically distributed L²-random variables with mean µ and variance σ². If Sn = X1 + · · · + Xn, then (1/n) Sn converges to µ in measure as n → ∞. In other words, for any ε > 0,

lim_{n→∞} Pr( |(X1 + · · · + Xn)/n − µ| > ε ) = 0 .

Proof. Use Chebyshev's inequality (Fact 1.14) with p = 2. Then

Pr( |(1/n) Σ_{j=1}^{n} (Xj − µ)| > ε ) ≤ (1/(n²ε²)) Σ_{j=1}^{n} E[(Xj − µ)²] = σ²/(n ε²) .

Now let n → ∞.

The following fact is due to A. Khinchin.

Theorem 1.42 (Strong Law of Large Numbers). Let X1, X2, X3, . . . be a sequence of independent identically distributed L¹-random variables with mean µ. If Sn = X1 + · · · + Xn, then (1/n) Sn converges to µ almost surely as n → ∞.

Definition 1.43. The probability density function of the standard normal distribution on R is defined by

Φ(x) = (1/√(2π)) e^{−x²/2} ,

i.e., the standard normal variable X satisfies

Pr(X ≤ x) = ∫_{−∞}^{x} Φ(t) dt .

For the graph y = Φ(x) see Fig. 1.2. Most of the probability (≈ 99.7 %) is concentrated in the interval −3 ≤ x ≤ 3. Consult Maple Program 1.8.12.

Fig. 1.2. The pdf of the standard normal distribution

The cumulative density function (cdf) of a given probability distribution on R is defined by x → Pr(X ≤ x), −∞ < x < ∞. Note that a cdf is a nondecreasing function. See Fig. 1.3 for the graph of the cumulative density function for the standard normal distribution.

1.7 Random Number Generators

25


Fig. 1.3. The cdf of the standard normal distribution

Theorem 1.44 (The Central Limit Theorem). Let X1, X2, X3, . . . be a sequence of independent identically distributed L²-random variables with mean µ and variance σ². If Sn = X1 + · · · + Xn, then

lim_{n→∞} Pr( (Sn − nµ)/(σ√n) ≤ x ) = ∫_{−∞}^{x} Φ(t) dt ,  −∞ < x < ∞ .

In other words, if n observations are randomly drawn from a not necessarily normal population with mean µ and standard deviation σ, then for sufficiently large n the distribution of the sample mean Sn/n is approximately normally distributed with mean µ and standard deviation σ/√n. For more information consult [Fo2].
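A small simulation illustrates Theorem 1.44; here the Xi are uniform on [0, 1] (so µ = 1/2, σ² = 1/12), an illustrative choice, and the standard normal cdf is computed from the error function.

```python
import random
from math import sqrt, erf

random.seed(1)

def sample_mean(n):
    """Mean of n independent uniform [0, 1] draws."""
    return sum(random.random() for _ in range(n)) / n

n, trials = 48, 20_000
mu, sigma = 0.5, sqrt(1 / 12)
zs = [(sample_mean(n) - mu) * sqrt(n) / sigma for _ in range(trials)]

# Compare the empirical distribution of the normalized sample mean
# with the standard normal cdf Phi(x) = (1 + erf(x/sqrt(2)))/2.
Phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
for x in (-1.0, 0.0, 1.0):
    empirical = sum(1 for z in zs if z <= x) / trials
    assert abs(empirical - Phi(x)) < 0.02
```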

1.7 Random Number Generators

1.7.1 What are random numbers?

There is no practical way of obtaining a perfectly random sequence of numbers. One might consider measuring the intensity of cosmic rays or the movements of small particles suspended in fluid and using the measured data as a source of random numbers. Such efforts have failed: the data usually have patterns or deviate from theoretical predictions based on properties of ideal random numbers. Furthermore, when numerical experiments are done, they need to be repeated by other researchers to be confirmed independently. That is why we need simple and fast algorithms for generating random numbers. Computer languages such as Fortran and C have built-in programs, and they employ generators of the form xn+1 = a xn + b (mod M) for some suitably chosen values of a, b and M. We start with a seed x0 and proceed recursively. The constants x0, a, b, M should be chosen with great care. Since computer generated random numbers are produced by deterministic rules, they are not truly random. They are called pseudorandom numbers. In practical applications they behave as if they were perfectly random, and they can be used to simulate random phenomena. From time to time we use the term random number generators (RNGs) instead of pseudorandom number generators (PRNGs) for the sake of simplicity if there is no danger of confusion. One more remark on terminology: quasirandom points are almost uniformly distributed points in a multidimensional cube in a suitable sense, and they are different from what we call random points in this book. For more information consult [Ni].

Consider a function f(x1, . . . , xs) defined on the s-dimensional cube Q = ∏_{k=1}^{s} Ik ⊂ R^s, Ik = [0, 1]. Suppose that we want to integrate f(x1, . . . , xs) over Q numerically. We might try the following method based on Riemann integration. First, we partition each interval Ik, 1 ≤ k ≤ s, into n subintervals. Note that the number of small subcubes in Q is equal to n^s. Next, choose a point qi, 1 ≤ i ≤ n^s, from each subcube, evaluate f(qi) and take the average over such points qi. The difficulty in such an algorithm is that even when the dimension s is modest, say s = 10, the sum of n^s numbers becomes practically impossible to compute. To avoid such a difficulty we employ the so-called Monte Carlo method based on randomness. Choose m points randomly from {qi : 1 ≤ i ≤ n^s}, where m is sufficiently large but very small compared with n^s, and take the average of f over these m points. The method works well and it is the only practical way of obtaining a good estimate of the integral of f over Q in a reasonable time limit. The error of estimation depends on the number m of points chosen. Once the sample size m is given, the effectiveness of estimation depends on how to choose m points randomly from n^s points. To do that we use a PRNG. See Maple Program 1.8.15. For the history of the Monte Carlo method, see [Ec],[Met].

For serious simulations really good generators are needed, a demand that is never fully satisfied, since theoretical models in applications become more sophisticated and computer hardware is constantly upgraded to do faster computation. As the demand for good random number generators grows, there also arises the need for efficient methods to test newly invented algorithms. There are currently many tests for PRNGs, and some of them are listed in [Kn].
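The Monte Carlo method just described can be sketched for a toy integrand whose integral is known exactly; the function, dimension and sample size below are illustrative choices, not from the text.

```python
import random
from math import prod

random.seed(2)

# Integrate f(x_1, ..., x_10) = x_1 x_2 ... x_10 over the 10-dimensional
# unit cube by the Monte Carlo method; the exact value is (1/2)^10.
s, m = 10, 100_000
estimate = sum(prod(random.random() for _ in range(s))
               for _ in range(m)) / m

assert abs(estimate - 0.5 ** s) < 5e-4
```

A grid-based rule with even n = 10 points per axis would need 10¹⁰ evaluations here, while the Monte Carlo estimate uses only m = 10⁵ random points.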
One of the standard tests is to check whether there is a lattice structure in the set of the points (x3i+1 , x3i+2 , x3i+3 ), i ≥ 0, where the integers xn are generated by a generator. See Fig. 1.4 and Maple Program 1.8.13 for some bad linear congruential generators.

Fig. 1.4. Lattice structures of the points (x3i+1, x3i+2, x3i+3), 0 ≤ i ≤ 2000, given by LCG(M, a, b, x0) for a = 137, b = 187, M = 2⁸ (left) and a = 6559, b = 0, M = 2³⁰ (right)
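The lattice defect is particularly transparent for Randu, the generator xn+1 = 65539 xn (mod 2³¹) listed in Table 1.2 below: since 65539 = 2¹⁶ + 3 and (2¹⁶ + 3)² ≡ 6(2¹⁶ + 3) − 9 (mod 2³¹), every three successive outputs satisfy a fixed linear relation, so the triples lie on a small number of parallel planes. A Python sketch:

```python
M, a = 2**31, 65539      # Randu: x_{n+1} = 65539 x_n (mod 2^31)

x = [1]
for _ in range(1000):
    x.append(a * x[-1] % M)

# (2^16 + 3)^2 = 2^32 + 6*2^16 + 9 ≡ 6(2^16 + 3) - 9 (mod 2^31), so
# x_{n+2} - 6 x_{n+1} + 9 x_n ≡ 0 (mod 2^31) for every n: exactly the
# planar structure that the triple test of Fig. 1.4 detects.
for n in range(998):
    assert (x[n + 2] - 6 * x[n + 1] + 9 * x[n]) % M == 0
```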


For an elementary introduction to random numbers and some historical remarks see [Benn],[Pson], and for a survey see [Hel]. For a comprehensive introduction, consult [Kn].

1.7.2 How to generate random numbers

A linear congruential generator, denoted by LCG(M, a, b, x0), is an algorithm given by xn+1 := a xn + b (mod M) with initial seed x0.

Fact 1.45 The period length of LCG(M, a, b, x0) is equal to M if and only if (i) b is relatively prime to M, (ii) a − 1 is a multiple of p for every prime p dividing M, and (iii) a − 1 is a multiple of 4 if M is a multiple of 4.

For the proof see [Kn]. In the 1960s IBM developed an LCG called Randu, which turned out to be less effective than expected. These days once a new generator is invented, it is often compared with Randu to test its performance. ANSI (American National Standards Institute) and Microsoft employ LCGs in their C libraries. For a prime number p, an inversive congruential generator ICG(p, a, b, x0) is defined by xn+1 := a xn⁻¹ + b (mod p) with initial seed x0, where x⁻¹ is the multiplicative inverse of x modulo p. Table 1.2 is a list of random number generators that are currently used in practice, except Randu.

Table 1.2. Pseudorandom number generators

  Generator   Algorithm                                   Period
  Randu       LCG(2³¹, 65539, 0, 1)                       2²⁹
  ANSI        LCG(2³¹, 1103515245, 12345, 141421356)      2³¹
  Microsoft   LCG(2³¹, 214013, 2531011, 141421356)        2³¹
  Fishman     LCG(2³¹ − 1, 950706376, 0, 3141)            2³¹ − 2
  ICG         ICG(2³¹ − 1, 1, 1, 0)                       2³¹ − 1
  Ran0        LCG(2³¹ − 1, 16807, 0, 3141)                2³¹ − 2
  Ran1        Ran0 with shuffle                           > 2³¹ − 2
  Ran2        L'Ecuyer's algorithm with shuffle           > 2.3 × 10¹⁸
  Ran3        xn = xn−55 − xn−24 (mod 2³¹)                ≤ 2⁵⁵ − 1
  MT          bitwise XOR operation                       2¹⁹⁹³⁷ − 1
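Fact 1.45 can be tested by brute force for a small modulus; the parameters below are illustrative choices, picked to satisfy (or violate) its conditions.

```python
def lcg_period(M, a, b, x0):
    """Orbit length of x0 under x -> a x + b (mod M), by brute force
    (suitable only for small M)."""
    x, n = x0, 0
    while True:
        x = (a * x + b) % M
        n += 1
        if x == x0 or n > M:
            return n

# LCG(16, 5, 3, 1): b = 3 is prime to M = 16, a - 1 = 4 is a multiple
# of 2 (the only prime dividing M) and of 4, so Fact 1.45 gives full
# period M = 16.
assert lcg_period(16, 5, 3, 1) == 16

# Violating condition (i): gcd(b, M) = 2 > 1, and the period drops.
assert lcg_period(16, 5, 2, 1) == 8
```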

The generators Ran0, Ran1, Ran2 and Ran3 are from [PTV]. Ran0 is an LCG found by Park and Miller [PM]. Ran1 is Ran0 with Bays-Durham shuﬄe.


Ran2 is L'Ecuyer's algorithm [Le] with shuffle, where L'Ecuyer's algorithm is a combination of two LCGs which have different moduli. Ran3 is Knuth's algorithm [Kn]. MT is Mersenne Twister [MN]. The name comes from the fact that the period is a Mersenne prime number, i.e., both 19937 and 2¹⁹⁹³⁷ − 1 are prime. One might consider the possibility of using a chaotic dynamical system as a random number generator. For instance, if T : [0, 1] → [0, 1] is a suitably chosen chaotic mapping whose orbit points x, Tx, T²x, . . . for some x are more or less random in [0, 1], then define a binary sequence bn, n ≥ 1, by the rule: bn = 0 (or 1, respectively) if and only if T^{n−1}x ∈ [0, 1/2) (or T^{n−1}x ∈ [1/2, 1), respectively). Then bn may be regarded as being sufficiently random. See Chap. 13 in [BoG] for a discussion of using Tx = (π + x)⁵ (mod 1) as a random number generator, which is suitable only for small scale simulations. For extensive simulations it is recommended to use well-known generators that have passed various tests.

1.7.3 Random number generator in Maple

Maple uses an LCG of the form xn+1 = a xn + b (mod M) where

a = 427419669081 ,  b = 0 ,  M = 999999999989

and x0 = 1. Note that M is prime. Hence the generator has period M − 1. This follows from Euler's theorem: If M > 1 is a positive integer and if a is relatively prime to M, then a^{φ(M)} ≡ 1 (mod M), where φ(1) = 1 and φ(M) is the number of positive integers less than M and relatively prime to M. See Maple Program 1.8.14. If we want a prime number for M that is close to 10¹², then M = 10¹² − 11 is such a choice. Between 10¹² − 20 and 10¹² + 20, it is the only prime number. The command rand() generates a random number between 1 and M − 1. We never obtain 0 since a has a multiplicative inverse modulo M. Maple produces the first several thousand digits of π = 3.1415 . . . very quickly. We often choose π − 3 or π/10 as a starting point (or a seed) of an orbit of a transformation with an absolutely continuous invariant measure on the unit interval. For empirical evidence of randomness of digits of π, see [AH].
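The arithmetic behind this claim is easy to check in Python (independent of Maple): M is prime, and a^{M−1} ≡ 1 (mod M) by Fermat's theorem, so the multiplicative order of a divides M − 1; the full period M − 1 asserted in the text corresponds to a being a primitive root modulo M.

```python
a, M = 427419669081, 999999999989   # Maple's LCG constants, as above

def is_prime(n):
    """Trial division; adequate for n ~ 10^12 (divisors up to 10^6)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert is_prime(M)
assert M == 10**12 - 11           # the prime near 10^12 mentioned above
assert pow(a, M - 1, M) == 1      # Fermat: the order of a divides M - 1
```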

1.8 Basic Maple Commands Computer simulations given in this book are indispensable for understanding theoretical results. The software Maple is used for symbolic and numerical experiments. It can handle a practically unlimited number of decimal digits. Some of the Maple algorithms given in this book are not fully optimized for eﬃciency in memory requirement and computing speed because we hope to translate theoretical ideas into computer programs without introducing


any complicated programming skills. Furthermore, sometimes a less polished program is easier to understand. Maple procedures are not used in the Maple programs presented in this book. Readers with a little experience in computing may improve some of the programs. We present some of the essential Maple commands. They are organized into subsections. Please try to read from the beginning. For more information on a special topic, type a keyword right after the question mark ‘?’.

1.8.1 Restart, colon, semicolon, Digits

Sometimes, during the course of a Maple computation, we need to restart from the beginning and reset the values assigned to all the variables. If that is the case, we use the command restart. (Maple instructions in this book, which are given by a user, are printed in a teletype font.)
> restart;
A command should end with a semicolon ‘;’ and the result of the computation is displayed. If a command ends with a colon ‘:’ then the result is not displayed. Sometimes the output of a Maple command is not presented in this book to save space.
>

# Anything written after the symbol # is ignored by Maple.

>

Digits:=10;

Digits := 10 This proclaims that the number of signiﬁcant decimal digits is set to ten. Maple can use a practically unlimited number of decimal digits in ﬂoating point calculation. The default value is equal to ten; in other words, if the number of signiﬁcant digits is not declared formally, then Maple assumes that it is set to 10. Do not start a Maple experiment with too many digits unnecessarily. It slows down the computation. Start with a small number of digits in the beginning. It is recommended to try a simple toy model ﬁrst. > Digits; 10 This conﬁrms the number of signiﬁcant decimal digits employed by Maple at the moment. 1.8.2 Set theory and logic Deﬁne two sets A and B. > A:={1,2,a}; A := {1, 2, a} >

B:={0,1}; B := {0, 1}


Find the elements of A. > op(A); 1, 2, a Find the number of elements (or cardinality) of A. > nops(A); 3 Find A ∪ B. > A union B; {0, 1, 2, a} Find A \ B. > A minus B; {2, a} Find A ∩ B. > A intersect B; {1} Here is a mathematical statement that 1 ∈ A. > 1 in A; 1 in {1, 2, a} Check the validity of the previous statement (denoted by ‘%’). The logical command evalb (b for ‘Boolean’) evaluates a statement and returns true or false. > evalb(%); true Check the validity of the statement inside the parentheses. > evalb(0 in A); false The following is an if . . . then . . . conditional statement. > K:=1; K := 1 Choose a number K. > if K > 2 then X:=1: else X:=0: fi: Note that fi is a short form of end if. Now check the value of X. > X; 0 In the following a <> b means the compound statement (a < b or a > b), and it is equivalent to a ≠ b. > evalb(-1 <> 1); true

1.8 Basic Maple Commands


1.8.3 Arithmetic

Here are several examples of basic operations in Maple. Some of the results are stored in computer memory as abstract objects.

> 1+0.7;

1.7

> Pi;

π

> sqrt(2);

√2

> a-b;

a − b

> x*y^2;

x y²

> %;

x y²

The symbol ‘%’ denotes the result obtained from the preceding operation.

> gamma;

γ

> evalf(%,10);

.5772156649

This is the Euler constant γ = lim_{n→∞} ( Σ_{k=1}^{n} 1/k − ln n ). The command evalf (f for ‘floating’) evaluates a given expression and returns a number in floating point format.

> sqrt(2.);

1.414213562

> evalf(Pi,30);

3.14159265358979323846264338328

> evalf(exp(1),20);

2.7182818284590452354

Maple understands standard mathematical identities.

> sin(x)^2+cos(x)^2;

sin(x)² + cos(x)²

> simplify(%);

1

> sum(1/n^2,n=1..infinity);

π²/6


In general, two dots represent a range for addition, integration, plotting, etc. In the preceding summation, we take the sum over all n ≥ 1. The commands sum and add are different. In the following, sum is used for symbolic addition.

> sum(sin(n),n=1..N);

(1/2) sin(1) cos(N + 1)/(cos(1) − 1) − (1/2) sin(N + 1) + (1/2) sin(1) − (1/2) sin(1) cos(1)/(cos(1) − 1)

The command add is used for numerical addition of numbers.

> add(exp(n),n=1..N);

Error, unable to execute add

This does not work since N is a symbol, not a specified number. The command sum may be used for a finite sum of numbers, too. It takes longer and consumes more computer memory if we try to add very many numbers; when we add reasonably many numbers there is no big difference in general.

1.8.4 How to plot a graph

In its early days Maple was developed for students and researchers who often had modest computer hardware resources. The developers of Maple thought that users would need only a small subset of commands in a single task. When Maple starts, it loads a part of the whole set of Maple commands and gradually adds more as the user instructs. To plot a graph we need to include the package ‘plots’.

> with(plots):

Plot a graph. Double dots denote the range.

> plot(sin(x)/x, x=0..10, y=-1.1..1.1);

See Fig. 1.5. The graph suggests that the limit as x → 0 is equal to 1. The same technique can be used instead of L’Hospital’s rule.


Fig. 1.5. y = (sin x)/x, 0 < x < 10

Find the Taylor series. The following result shows that x = 0 is a removable singularity of (sin x)/x. The command series may be used in some cases.

> taylor(sin(x)/x,x=0,10);

1 − (1/6) x² + (1/120) x⁴ − (1/5040) x⁶ + (1/362880) x⁸ + O(x⁹)

Now let us draw a graph of a function f(x, y) of two variables.

> f:=(x,y)->x*y/sqrt(x^2+y^2);

f := (x, y) → x y / √(x² + y²)

Use the command plot3d to plot the graph z = f(x, y) in R³. Plot the graph around the origin, where the function is not yet defined. The function can be extended continuously to the origin by defining f(0, 0) = 0.

> plot3d(f(x,y),x=-0.00001..0.00001,y=-0.00001..0.00001, scaling=constrained);

See Fig. 1.6. By pointing the mouse cursor at the graph and moving it while pressing a button, we can rotate the graph.


Fig. 1.6. z = f(x, y) where |x|, |y| < 10⁻⁵

In an introductory analysis course students have difficulty checking the continuity or differentiability of a function f(x, y) with a possible singularity at (0, 0). By plotting the graph on gradually shrinking small squares around the origin, we can check whether f is continuous or differentiable at (0, 0). The continuity of f at (0, 0) is equivalent to the connectedness of the graph z = f(x, y) at (0, 0), and the differentiability of f is equivalent to the existence of the tangent plane. For example, plot the graph on |x| ≤ a, |y| ≤ a for a = 0.01, a = 0.001, and so on. In the above example, it is easy to see that there is no tangent plane at (0, 0).

Here are several options for plotting: use labels for the names of the coordinate axes, tickmarks for the number of tick marks on the axes, color for the color of the curve, and thickness for the thickness of the curve. We can also choose a symbol for plotted points, for example, symbol=circle. For the graph obtained from the following command see Fig. 8.3.

> plot(-x*ln(x)-(1-x)*ln(1-x),x=0..1,labels=["p","H(p)"], tickmarks=[2,3],color=blue,thickness=3);


When we want a line graph connecting points, we use the command listplot. To plot only the points, we use the command pointplot. These commands can be used interchangeably if we specify the style by style=point or style=line. To draw two graphs together in a single frame, we use display:

> g1:=pointplot([[1,1/2],[1/2,1],[1/2,1/2]],symbol=circle):
> g2:=plot(x^2, x=-1..1, tickmarks=[2,2]):
> display(g1,g2);

See Fig. 1.7.


Fig. 1.7. Display of two graphs

The Maple programs presented in this book usually do not contain detailed options for drawing, because we want to emphasize the essential ideas and to save space. The Maple programs from the author’s homepage contain all the plot options needed to draw the graphs in the text. All the graphs, figures, and diagrams in this book were drawn by the author using only Maple.

1.8.5 Calculus: differentiation and integration

Maple can differentiate and integrate a function symbolically. First let us integrate a function. The first slot in int( , ) is for a function and the second slot for the variable x. Integrate the function 1/x with respect to x.

> int(1/x,x);

ln(x)

Here log may be used in place of ln. For the logarithm to the base b, use log[b].

> log[2](2^k);

ln(2^k)/ln(2)

Simplify the preceding result, represented by the symbol ‘%’.

> simplify(%);

k

Integrate 1/√(x(1 − x)) over the interval 0 ≤ x ≤ 1.

> int(1/(sqrt(x*(1-x))),x=0..1);

π

Now differentiate a function with respect to x.

> diff(exp(x^2),x);

2 x e^(x²)

Substitute a condition into the previous result (%).

> subs(x=-1,%);

−2 e

> ?subs

The question mark ‘?’ followed by a command explains the meaning of the command. In this case, the command subs is explained.

1.8.6 Fractional and integral parts of real numbers

Many transformations on the unit interval are defined in terms of fractional parts of some real-valued functions. Here are five basic commands: (i) frac(x) is the fractional part of x, (ii) ceil(x) is the smallest integer ≥ x, (iii) floor(x) is the greatest integer ≤ x, (iv) trunc(x) truncates x to the nearest integer in the direction of 0, and (v) round(x) rounds x to the nearest integer. The command ceil comes from the word ‘ceiling’, and the two commands floor and trunc coincide for x ≥ 0 and differ for x < 0.

> frac(1.3);

.3

> frac(-1.3);

−.3

> ceil(0.4);

1

> floor(1.8);

1

> trunc(-10.7);

−10

> round(-10.7);

−11

Draw the graphs of the preceding piecewise linear functions. Observe the values of frac(x) for x < 0. Without the plot option discont=true, the graphs would be connected at the discontinuities.
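For comparison outside Maple, the same five operations exist in Python's standard library; the only subtlety is that Maple's frac truncates toward 0, while the command fractional defined below as x − floor(x) always lands in [0, 1). A short Python sketch (illustrative, not part of the book):

```python
import math

# Maple's frac(x) truncates toward 0, so frac(-1.3) = -0.3:
def frac_maple(x):
    return x - math.trunc(x)

# The book's "fractional" command x -> x - floor(x) is always in [0, 1):
def frac_floor(x):
    return x - math.floor(x)

print(math.ceil(0.4))      # 1
print(math.floor(1.8))     # 1
print(math.trunc(-10.7))   # -10
print(round(-10.7))        # -11
print(frac_maple(-1.3))    # about -0.3
print(frac_floor(-1.3))    # about 0.7
```

The two conventions agree for positive arguments and differ exactly where the book's plots show the difference, at x < 0.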


> plot(frac(x),x=-3..3,discont=true);
> plot(ceil(x),x=-3..3,discont=true);
> plot(floor(x),x=-3..3,discont=true);

See Fig. 1.8.


Fig. 1.8. y = frac(x), y = ceil(x) and y = floor(x) (from left to right)

> plot(trunc(x),x=-3..3,discont=true);
> plot(round(x),x=-3..3,discont=true);

Define a new piecewise function.

> fractional:=x->x-floor(x):
> plot(fractional(x),x=-3..3,discont=true);

See Fig. 1.9.


Fig. 1.9. y = trunc(x), y = round(x) and y = fractional(x) (from left to right)

1.8.7 Sequences and printf

A pair of brackets encloses a list, a finite sequence, or a vector. Define a list aa.

> aa:=[1,3,5,7,-2,-4,-6];

aa := [1, 3, 5, 7, −2, −4, −6]

> pointplot([seq([i,aa[i]],i=1..7)]);

See the left plot in Fig. 1.10. To obtain a line graph, use listplot.


Fig. 1.10. A pointplot (left) and a listplot (right) of a sequence

Here is how to print a list of characters. We use a do loop. For each i, 1 ≤ i ≤ 7, the task given between do and end do is executed. Note that od is a short form of end do.

> for i from 1 to 7 do
> printf("%c","(");
> printf("%d",aa[i]);
> printf("%c",")" );
> printf("%c"," " );
> od;

(1) (3) (5) (7) (-2) (-4) (-6)

The Maple command printf is used to display expressions. It corresponds to the function of the same name in the C programming language. It displays an expression using the formatting specifications placed inside " ", such as %c for a single character or %d for an integer. A blank space produces a blank. To display a single symbol such as a parenthesis, place it inside " ".

1.8.8 Linear algebra

First, we need to load the package for linear algebra.

> with(linalg):
> A:=matrix([[2/4,1/4,1/4],[2/6,3/6,1/6],[1/5,2/5,2/5]]);

A := [[1/2, 1/4, 1/4], [1/3, 1/2, 1/6], [1/5, 2/5, 2/5]]

The entries of each row sum to 1. This is an example of a stochastic matrix. Find the determinant, the rank and the trace of A.

> det(A);

1/20


> rank(A);

3

> trace(A);

7/5

Here is the characteristic polynomial of A.

> charpoly(A,x);

x³ − (7/5) x² + (9/20) x − 1/20

Find the eigenvalues of A. The symbol I denotes the complex number √(−1).

> eigenvals(A);

1/5 − (1/10) I, 1, 1/5 + (1/10) I

We consider row operations and find row eigenvectors. Note that the column eigenvectors of the transpose of A are the row eigenvectors of A.

> B:=transpose(A):

The following gives the eigenvalues together with their multiplicities and eigenvectors.

> Eigen:=eigenvectors(B);

Eigen := [1, 1, {[7/5, 3/2, 1]}], [1/5 + (1/10) I, 1, {[−2/3 − (2/3) I, 1, −1/3 + (2/3) I]}], [1/5 − (1/10) I, 1, {[−2/3 + (2/3) I, 1, −1/3 − (2/3) I]}]

Find the Perron–Frobenius eigenvector by choosing the positive eigenvector.

> Positive_Vector:=Eigen[1];

Positive_Vector := [1, 1, {[7/5, 3/2, 1]}]

> Positive_Vector[3];

{[7/5, 3/2, 1]}

> u:=Positive_Vector[3][1];

u := [7/5, 3/2, 1]

> a:=add(u[i],i=1..3);

a := 39/10

Find the normalized Perron–Frobenius eigenvector v.

> v:=evalm((1/a)*u);

v := [14/39, 5/13, 10/39]
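The eigenvector computation can be cross-checked without any linear algebra package: v is invariant under right multiplication by A, and the rows of Aⁿ converge to v as n grows. A Python sketch with exact rational arithmetic (illustrative only; the matrix is the one defined above):

```python
from fractions import Fraction as F

A = [[F(2, 4), F(1, 4), F(1, 4)],
     [F(2, 6), F(3, 6), F(1, 6)],
     [F(1, 5), F(2, 5), F(2, 5)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# v = [14/39, 5/13, 10/39] is the normalized Perron-Frobenius row eigenvector.
v = [F(14, 39), F(5, 13), F(10, 39)]

# v A = v holds exactly ...
vA = [sum(v[k] * A[k][j] for k in range(3)) for j in range(3)]
print(vA == v)  # True

# ... and every entry of A^16 is within 10^-6 of the corresponding entry of v.
P = A
for _ in range(4):  # A -> A^2 -> A^4 -> A^8 -> A^16 by repeated squaring
    P = matmul(P, P)
print(max(abs(P[i][j] - v[j]) for i in range(3) for j in range(3)) < F(1, 10**6))  # True
```

The second eigenvalue has modulus √5/10 ≈ 0.22, so the rows of Aⁿ approach v geometrically fast.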


This may be a row vector or a column vector. In the following, v is regarded as a row vector.

> evalm(v&*A);

[14/39, 5/13, 10/39]

So v is an eigenvector corresponding to the eigenvalue 1.

> evalf(%);

[.3589743590, .3846153846, .2564102564]

> evalm(A^10):
> evalf(%);

[.3589743038  .3846155735  .2564101227]
[.3589742237  .3846153940  .2564103823]
[.3589746391  .3846151061  .2564102548]

Note that all the rows are almost equal to v. Here is how to obtain the inner product of two vectors.

> w1:=vector([4,2,1]);

w1 := [4, 2, 1]

> w2:=vector([1,-2,-3]);

w2 := [1, −2, −3]

Calculate the inner product of w1 and w2.

> innerprod(w1,w2);

−3

1.8.9 Fourier series

Compute the Fourier coefficients b_n of f(x) = 1/4 − |x − 1/2|, −N ≤ n ≤ N.

> with(plots):
> f:=x->1/4-abs(x-1/2);

f := x → 1/4 − |x − 1/2|

> plot(f(x),x=0..1);

See Fig. 1.11.

> int(f(x)*exp(-2*Pi*I*n*x),x=0..1);

(−π n I − 2 + (e^(π n I))² π n I − 2 (e^(π n I))² + 4 e^(π n I)) / (8 (e^(π n I))² π² n²)

> subs(exp(Pi*I*n)=(-1)^n,%);

(−π n I − 2 + ((−1)^n)² π n I − 2 ((−1)^n)² + 4 (−1)^n) / (8 ((−1)^n)² π² n²)

> simplify(%);

−((−1)^(−2n) π n I + 2 (−1)^(−2n) − π n I + 2 + 4 (−1)^(1−n)) / (8 π² n²)

> subs({(-1)^(-2*n)=1,(-1)^(1-n)=-(-1)^n},%);

−(4 − 4 (−1)^n) / (8 π² n²)

> b[n]:=simplify(%);

b_n := (−1 + (−1)^n) / (2 π² n²)

> N:=5:

Increase N if a more detailed graph is needed.

> for n from 1 to N do
> b[n]:=(-1+(-1)^n)/(2*Pi^2*n^2):
> od:
> seq(b[n],n=1..N);

−1/π², 0, −1/(9 π²), 0, −1/(25 π²)

Draw the graph of the N-th partial Fourier series and compare it with the graph of f(x). For large values of N the two graphs become almost identical.

> Fourier:=x->2*Re(add(b[n]*exp(2*Pi*n*I*x),n=1..N));

Fourier := x → 2 ℜ(add(b_n e^(2 I π n x), n = 1..N))

> plot(Fourier(x),x=0..1);

See Fig. 1.11.
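The decay b_n = O(1/n²) means the partial sums converge quickly. Since every b_n is real, the partial Fourier series equals 2 Σ b_n cos(2πnx); a Python sketch of the same computation (illustrative, not from the book):

```python
import math

def f(x):
    return 0.25 - abs(x - 0.5)

def b(n):
    # b_n = (-1 + (-1)^n) / (2 pi^2 n^2): zero for even n, -1/(pi^2 n^2) for odd n
    return (-1 + (-1) ** n) / (2 * math.pi ** 2 * n ** 2)

def fourier(x, N):
    # 2 Re(sum b_n e^{2 pi i n x}) = 2 sum b_n cos(2 pi n x), since each b_n is real
    return 2 * sum(b(n) * math.cos(2 * math.pi * n * x) for n in range(1, N + 1))

# With N = 100 the partial sum is already within about 10^-3 of f:
print(abs(fourier(0.5, 100) - f(0.5)))
```

The constant term is absent because the integral of f over [0, 1] is 0.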


Fig. 1.11. y = f (x) (left) and its partial Fourier series (right)

1.8.10 Continued fractions I

Find the continued fraction of an irrational number θ.

> k:=5.:
> theta:=-k/2+sqrt(k^2+4)/2;

θ := 0.192582404

> convert(theta,confrac,cvgts);


[0, 5, 5, 5, 5, 5, 5, 2, 1, 2]

In the preceding result all the partial quotients except the first one should theoretically be equal to k = 5. Because we did not have enough significant digits, the last three values are not correct. By increasing Digits, we obtain more correct partial quotients.

> cvgts;

[0, 1/5, 5/26, 26/135, 135/701, 701/3640, 3640/18901, 7981/41442, 11621/60343, 31223/162128]

> for i from 1 to 5 do
> a[i]:=evalf(cvgts[i]-theta):
> od;

a1 := −0.192582404
a2 := 0.0074175960
a3 := −0.0002747117
a4 := 0.0000101886
a5 := −0.3783 × 10⁻⁶

1.8.11 Continued fractions II

Here is an iterative algorithm to construct a number θ with specified partial quotients.

> restart;
> N:=20:

In the following do not use the command fsolve (= float + solve), which would yield a numerical solution.

> solve(x=1/(k+x),x);

−k/2 + √(k² + 4)/2, −k/2 − √(k² + 4)/2

Find the theoretical value.

> x0:=-1/2*k+1/2*(k^2+4)^(1/2);

x0 := −k/2 + √(k² + 4)/2

Generate a sequence of partial quotients a_n.

> k:=1:
> for n from 1 to N do a[n]:=k: od:

List the sequence a_n, 1 ≤ n ≤ N.

> seq(a[n],n=1..N);

1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1

> continued[1]:=1/a[N]:
> for i from 2 to N do
> continued[i]:=1/(a[N-i+1]+continued[i-1]):
> od:
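The backward recurrence just used can be mirrored in Python with exact rational arithmetic. With twenty partial quotients equal to 1 it yields a ratio of consecutive Fibonacci numbers, and with all partial quotients equal to 5 it approximates θ = (√29 − 5)/2 from Sect. 1.8.10 (illustrative sketch, not from the book):

```python
from fractions import Fraction as F

def from_quotients(a):
    """Evaluate the continued fraction [0; a1, a2, ...] from the inside out."""
    value = F(0)
    for q in reversed(a):
        value = 1 / (q + value)
    return value

theta = from_quotients([1] * 20)
print(theta)  # 6765/10946, a ratio of consecutive Fibonacci numbers

# With all partial quotients 5, the value approximates (sqrt(29) - 5)/2 = 0.19258...
print(float(from_quotients([5] * 20)))
```

Because the convergents' denominators grow geometrically, twenty quotients already give far more accuracy than 10 significant digits.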


> theta:=continued[N];

θ := 6765/10946

Calculate |θ − x0|.

> evalf(theta-x0,10);

−0.3 × 10⁻⁸

Now for the second example, we take e = 2.718281828459045...

> restart;

Generate a sequence of partial quotients a_n.

> N:=8:
> for n from 1 to 3*N do a[n]:=1: od:
> for n from 1 to N do a[3*n-1]:=2*n: od:

List the sequence a_n, 1 ≤ n ≤ 3N.

> seq(a[n],n=1..3*N);

1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, 14, 1, 1, 16, 1

> continued[1]:=1/a[N]:
> for i from 2 to N do
> continued[i]:=1/(a[N-i+1]+continued[i-1]):
> od:
> num:=2+continued[N];

num := 1264/465

> evalf(num-exp(1.0),10);

−0.2258 × 10⁻⁵

1.8.12 Statistics and probability

The probability density function of the standard normal variable X is given as follows:

> f:=x->1/sqrt(2*Pi)*exp(-x^2/2);

f := x → (1/√(2π)) e^(−x²/2)

Its integral over the whole real line is equal to 1 since it represents a probability distribution.

> int(f(x),x=-infinity..infinity);

1

> plot(f(x),x=-4..4);

See Fig. 1.2. Find the maximum of f(x).

> f(0);


√2/(2√π)

> evalf(%);

0.3989422802

Find Pr(−1 ≤ X ≤ 1).

> int(f(x),x=-1.0..1.0);

0.6826894921

Find Pr(−2 ≤ X ≤ 2).

> int(f(x),x=-2.0..2.0);

0.9544997361

Find Pr(−3 ≤ X ≤ 3).

> int(f(x),x=-3.0..3.0);

0.9973002039

Define the error function erf(x) by

erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt .

See Fig. 1.12.


Fig. 1.12. The error function
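The three probabilities computed above can be cross-checked against the error function, since Pr(−c ≤ X ≤ c) = erf(c/√2) for a standard normal X. A Python sketch using the standard library (illustrative):

```python
import math

# Pr(-c <= X <= c) for a standard normal X equals erf(c / sqrt(2)).
for c in (1, 2, 3):
    p = math.erf(c / math.sqrt(2))
    print(c, p)  # matches the Maple values to 10 digits
```

This identity is exactly the change of variables t = x/√2 in the definition of erf.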

Define the cumulative distribution function (cdf) of X by Pr(−∞ < X ≤ x); it can be expressed in terms of the error function.

> int(f(t), t=-infinity..x);

(1/2) erf(√2 x/2) + 1/2

> plot(1/2*erf(1/2*2^(1/2)*x)+1/2, x=-4..4);

See Fig. 1.3.

In this book we often sketch a pdf f(x) of a random variable based on finitely many sample values, say x1, ..., xn. Let a = min xi, b = max xi. Then we divide the range [a, b] into subintervals of equal length, each of which is called a bin or an urn, and count the number of times the values xi fall into each bin. Using these frequencies we plot a histogram or a line graph. As we take more and more sample values and bins, the corresponding plot approximates the theoretical prediction better and better.
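The binning procedure just described is straightforward to code. A Python sketch (illustrative; with a uniform sample on [0, 1) every bin frequency should be near 1/bins):

```python
import random

def histogram(samples, bins):
    """Count how many samples fall into each of `bins` equal subintervals of [min, max]."""
    a, b = min(samples), max(samples)
    width = (b - a) / bins
    counts = [0] * bins
    for x in samples:
        i = min(int((x - a) / width), bins - 1)  # clamp x == max into the last bin
        counts[i] += 1
    return counts

random.seed(1)
xs = [random.random() for _ in range(100000)]
counts = histogram(xs, 10)
print([c / len(xs) for c in counts])  # each entry is near 0.1
```

Dividing each count by the number of samples and by the bin width turns the histogram into an approximate density.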


1.8.13 Lattice structures of linear congruential generators

The set of ordered triples of pseudorandom numbers generated by a bad linear congruential generator displays a lattice structure. See Fig. 1.4.

> with(plots):

Define a linear congruential generator LCG(M, a, b, x0).

> M:=2^8:
> a:=137:
> b:=187:
> x[0]:=10:

Choose the number of points in a cube.

> N:=10000:
> for n from 1 to 3*N do
> x[n]:=modp(a*x[n-1]+b,M):
> od:

Rotate the following three-dimensional plot.

> pointplot3d([seq([x[3*i+1],x[3*i+2],x[3*i+3]],i=0..N-1)], axes=boxed,tickmarks=[0,0,0],orientation=[45,55]);

Do the same experiment with M = 2³⁰, a = 65539, b = 0 and x0 = 1.

1.8.14 Random number generators in Maple

Maple uses a linear congruential generator of the form x_{n+1} = a x_n + b (mod M) where a = 427419669081, b = 0, M = 999999999989 and x0 = 1. Note that M is prime.

> M:=999999999989;

Factorize M.

> ifactor(M);

(999999999989)

Is M prime?

> isprime(M);

true

If we want a prime number M close to 10¹², then M = 10¹² − 11 is such a choice. Between 10¹² − 20 and 10¹² + 20 it is the only prime number, as seen in the following:

> for i from -20 to 20 do
> if isprime(10^12+i)=true then print(i): fi:
> od;

−11

The following commands all give the same answer:

> 100 mod 17; modp(100,17); irem(100,17);

15
15
15
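The toy generator LCG(2⁸, 137, 187, 10) defined above fits in a few lines of any language. A Python sketch producing the same orbit and the consecutive triples that are plotted (illustrative, not from the book):

```python
def lcg(M, a, b, x0):
    """Yield the orbit x_{n+1} = (a*x_n + b) mod M of a linear congruential generator."""
    x = x0
    while True:
        x = (a * x + b) % M
        yield x

g = lcg(2**8, 137, 187, 10)
xs = [next(g) for _ in range(9)]
print(xs[:3])  # [21, 248, 115]

# Consecutive non-overlapping triples, as in the 3-dimensional pointplot:
triples = [xs[i:i + 3] for i in range(0, 9, 3)]
print(triples)
```

Plotting many such triples for a bad choice of (M, a, b) reveals that they lie on a small number of parallel planes.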


Find the multiplicative inverse of 10 modulo 17. Note that 17 is prime.

> 1/10 mod 17; modp(1/10,17); irem(1/10,17);

12
12
Error, wrong number (or type) of parameters in function irem
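In Python the built-in pow computes the same modular inverse, and the multiplicative generator x_{n+1} = a x_n (mod M) described at the beginning of this subsection can be reproduced directly. A sketch (illustrative; the three generator outputs are the values quoted in the text):

```python
# Multiplicative inverse of 10 modulo 17 (17 is prime):
inv = pow(10, -1, 17)
print(inv)  # 12, since 10 * 12 = 120 = 7 * 17 + 1

# Maple's generator x_{n+1} = a * x_n mod M with x_0 = 1:
a, M = 427419669081, 999999999989
x = 1
outputs = []
for _ in range(3):
    x = (a * x) % M
    outputs.append(x)
print(outputs)
```

Python's arbitrary-precision integers make the 12-digit modular multiplications exact without any extra effort.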

Of the preceding three commands, the third one does not work. The command rand() generates random numbers between 1 and M − 1.

> rand(); rand(); rand();

427419669081
321110693270
343633073697

Let us check the formula used by Maple.

> a:=427419669081:
> M:=999999999989:
> x[0]:=1:
> for n from 0 to 2 do
> x[n+1]:=modp(a*x[n],M):
> od;

x1 := 427419669081
x2 := 321110693270
x3 := 343633073697

Here is how to generate random numbers between L and N:

> restart;
> L:=3:
> N:=7:
> ran:=rand(L..N):
> ran(); ran(); ran();

4
3
5

1.8.15 Monte Carlo method

As the first example of the Monte Carlo method we estimate the average of a function f(x) over an interval by choosing sample points randomly.

> f:= x-> sin(x):

Choose an interval [a, b].

> a:=0: b:=Pi:

Integrate the given function.

> int(f(x),x=a..b);

2


Choose the number of points where the given function is evaluated.

> SampleSize:=1000:
> Integ:=0:
> for i from 1 to SampleSize do
> ran:=evalf(rand()/999999999989*(b-a)+a):
> f_value:=f(ran):
> Integ:=Integ + f_value:
> od:
> Approx_Integral:=evalf(Integ/SampleSize*(b-a));

Approx_Integral := 2.019509977

For the second application of the Monte Carlo method, we count the number of points inside the unit circle in the first quadrant and estimate π/4.

> restart:

We have to restart because we need to redefine the function f, which has already been used above. If we used a new name g instead of f, there would be no need to restart.

> f:=x-> 1 - floor(x):

Choose the number of sample points.

> SampleSize:=1000:
> Count:=0:
> for i from 1 to SampleSize do
> ran1:=evalf(rand()/999999999989):
> ran2:=evalf(rand()/999999999989):
> f_value:=f(ran1^2+ran2^2):
> Count:=Count+f_value:
> od:

Compare the following values:

> Approx_Area:=evalf(Count/SampleSize);

Approx_Area := .7800000000

> True_Area:=evalf(Pi/4);

True_Area := .7853981635
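The same two experiments, written as a Python sketch (illustrative; random.random() plays the role of rand()/999999999989):

```python
import math
import random

random.seed(0)
n = 100000

# Average of sin over [0, pi], times the length of the interval; the exact integral is 2.
total = sum(math.sin(random.uniform(0, math.pi)) for _ in range(n))
integral = total / n * math.pi
print(integral)  # close to 2

# Fraction of random points of the unit square inside the unit circle; the exact value is pi/4.
count = sum(1 for _ in range(n) if random.random() ** 2 + random.random() ** 2 < 1)
print(count / n)  # close to pi/4 = 0.785...
```

With 100 times as many samples as in the Maple run, the Monte Carlo error shrinks by about a factor of 10, in line with the 1/√n rate.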

2 Invariant Measures

We introduce a discrete dynamical system in terms of a measure preserving transformation T defined on a probability space. In other words, we consider the family of iterates {T^n : n ∈ N} or {T^n : n ∈ Z}, depending on whether T is noninvertible or invertible. The approach can handle not only purely mathematical concepts, but also physical phenomena in nature and data compression in information theory. Examples of such measure preserving transformations include an irrational translation modulo 1, multiplication by 2 modulo 1, the beta transformation, the Gauss transformation, a toral automorphism, the baker's transformation and the shift transformation on the set of infinite binary sequences. A method of classifying them is also introduced.

2.1 Invariant Measures

Definition 2.1. (i) Let (X1, µ1) and (X2, µ2) be measure spaces. We say that a mapping T : X1 → X2 is measurable if T⁻¹(E) is measurable for every measurable subset E ⊂ X2. The mapping is measure preserving if µ1(T⁻¹E) = µ2(E) for every measurable subset E ⊂ X2. When X1 = X2 and µ1 = µ2, we call T a transformation.
(ii) If a measurable transformation T : X → X preserves a measure µ, then we say that µ is T-invariant (or invariant under T). If T is invertible and if both T and T⁻¹ are measurable and measure preserving, then we call T an invertible measure preserving transformation.

Theorem 2.2. Let (X, A, µ) be a measure space. The following statements are equivalent:
(i) A transformation T : X → X preserves µ.
(ii) For any f ∈ L¹(X, µ) we have

∫_X f(x) dµ = ∫_X f(T(x)) dµ .


(iii) Define a linear operator U_T on L^p(X, µ) by (U_T f)(x) = f(Tx). Then U_T is norm-preserving, i.e., ||U_T f||_p = ||f||_p.

Proof. First we show the equivalence of (i) and (ii). To show (ii) ⇒ (i), take f = 1_E. Then

µ(E) = ∫_X f(x) dµ = ∫_X f(T(x)) dµ = ∫_X 1_{T⁻¹E}(x) dµ = µ(T⁻¹E) .

To show (i) ⇒ (ii), observe first that a complex-valued measurable function f can be written as a sum

f = f1 − f2 + i (f3 − f4)

where i = √(−1) and each f_j is real, nonnegative and measurable. Thus we may assume that f is real-valued and f ≥ 0. If f = 1_E for some measurable subset E, then

∫_X f(x) dµ = µ(E) = µ(T⁻¹E) = ∫_X 1_{T⁻¹E}(x) dµ = ∫_X f(T(x)) dµ .

By linearity the same relation holds for a simple function f. Now for a general nonnegative function f ∈ L¹ choose an increasing sequence of simple functions s_n ≥ 0 converging to f pointwise. Then {s_n ∘ T}_{n=1}^∞ is an increasing sequence and it converges to f ∘ T pointwise. Hence

∫_X f(Tx) dµ = lim_{n→∞} ∫_X s_n(Tx) dµ = lim_{n→∞} ∫_X s_n(x) dµ = ∫_X f(x) dµ .

The proof of the equivalence of (i) and (iii) is almost identical.

Example 2.3. (Irrational translations modulo 1) Let X = [0, 1). Consider

T x = x + θ (mod 1)

for an irrational number 0 < θ < 1. Since T preserves one-dimensional length, Lebesgue measure is invariant under T. See the left graph in Fig. 2.1.

Example 2.4. (Multiplication by 2 modulo 1) Let X = [0, 1). Consider

T x = 2x (mod 1) .

Then T preserves Lebesgue measure. Even though the map doubles the length of an interval I, its inverse image has two pieces in general, each of which has half the length of I. When we add them, the sum equals the original length of I. For a generalization, see Ex. 2.11.
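The bookkeeping in Example 2.4 can be verified mechanically: the inverse image of [c, d] ⊂ [0, 1) under T x = 2x (mod 1) is [c/2, d/2] ∪ [(c+1)/2, (d+1)/2], and the two lengths add up to d − c. A Python sketch (illustrative, not from the book):

```python
import random

def preimage_length(c, d):
    # T^{-1}([c, d]) = [c/2, d/2] union [(c+1)/2, (d+1)/2] for the doubling map
    return (d / 2 - c / 2) + ((d + 1) / 2 - (c + 1) / 2)

random.seed(0)
for _ in range(1000):
    c = random.random()
    d = random.uniform(c, 1)
    assert abs(preimage_length(c, d) - (d - c)) < 1e-12
print("length of the preimage equals length of the interval")
```

Each branch of T⁻¹ contracts by a factor of 2, so the two preimage pieces each have half the original length.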


Fig. 2.1. The translation by θ = √2 − 1 modulo 1 (left) and a piecewise linear transformation (right)

Example 2.5. (A piecewise linear mapping) Let X = [0, 1). Define

T x = 2x (mod 1) for 0 ≤ x < 1/2 ,
T x = 4x (mod 1) for 1/2 ≤ x < 1 .

Then T preserves Lebesgue measure since the inverse image of [0, a] consists of three intervals of lengths a/2, a/4 and a/4. See the right graph in Fig. 2.1.

Example 2.6. (The logistic transformation) Let X = [0, 1]. Consider

T x = 4x(1 − x) .

The invariant probability density function of T is given by

ρ(x) = 1/(π √(x(1 − x))) ,

which is unbounded but has its integral equal to 1. See Fig. 2.2. The same density function ρ(x) is invariant under transformations obtained from the Chebyshev polynomials. See Maple Programs 2.6.1, 2.6.2.
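By the ergodic theorems of Chap. 3, a single orbit of the logistic transformation spends a fraction of time in [0, b] close to ∫₀ᵇ ρ(x) dx = (2/π) arcsin √b. A Python sketch (illustrative; floating point orbits of this map are chaotic, so the agreement is only statistical):

```python
import math

x = 0.1234                       # a generic starting point
n = 100000
hits = 0
for _ in range(n):
    x = 4 * x * (1 - x)          # the logistic transformation
    if x <= 0.25:
        hits += 1

observed = hits / n
expected = 2 / math.pi * math.asin(math.sqrt(0.25))  # = 1/3
print(observed, expected)
```

The arcsine shape means the orbit visits the two ends of [0, 1] far more often than the middle.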


Fig. 2.2. y = 4x(1 − x) (left) and y = ρ(x) (right)



Example 2.7. (The beta transformation) Take β = (√5 + 1)/2 = 1.618···, which satisfies β² − β − 1 = 0. Hence β − 1 = 1/β = (√5 − 1)/2, which is called the golden ratio. See Fig. 2.3, where two similar rectangles can be found.


Fig. 2.3. The golden ratio: two rectangles are similar

Let X = [0, 1). Consider the so-called β-transformation

T x = βx (mod 1) .

Its invariant probability density is given by

ρ(x) = β³/(1 + β²) for 0 ≤ x < 1/β ,  ρ(x) = β²/(1 + β²) for 1/β ≤ x < 1 .

2.6 Maple Programs

2.6.1 The logistic transformation

> g1:=plot(T(x),x=0..1,y=0..1,axes=boxed):
> g2:=listplot([[1/2,0],[1/2,1]],color=green):
> display(g1,g2);

Define a probability density function ρ(x).

> rho:= x->1/(Pi*sqrt(x*(1-x)));

ρ := x → 1/(π √(x(1 − x)))

Since ρ(x) → +∞ as x → 0+ or x → 1−, we draw the graph y = ρ(x) only on the interval [ε, 1 − ε] for some small ε > 0. See Fig. 2.2.


> plot(rho(x),x=0.01..0.99);

Check whether ∫₀¹ ρ(x) dx = 1.

> int(rho(x),x=0..1);

1

Find the inverse images of b, where b is a point on the positive y-axis.

> a:=solve(T(x)=b,x);

a := 1/2 + √(1 − b)/2, 1/2 − √(1 − b)/2

In the following definition note that a1 ≤ a2.

> a1:=1/2-1/2*sqrt(1-b);

a1 := 1/2 − √(1 − b)/2

> a2:=1/2+1/2*sqrt(1-b);

a2 := 1/2 + √(1 − b)/2

Find the inverse images of b = 2/3 under T.

> b:=2/3:
> a1;

1/2 − √3/6

> a2;

1/2 + √3/6

> g5:=listplot([[a1,0],[a1,b]],color=green):
> g6:=listplot([[a2,0],[a2,b]],color=green):
> g7:=plot(b,x=0..1,color=green,tickmarks=[2,2]):
> display(g1,g5,g6,g7);

See Fig. 2.23.


Fig. 2.23. Two inverse images of b = 2/3 under the logistic transformation

From now on we treat b again as a symbol.

> b:=’b’;

b := b

> a1;

1/2 − √(1 − b)/2

> a2;

1/2 + √(1 − b)/2

√ 2 arcsin( 1 − b) measure1 := π Find the measure of [b, 1]. > measure2:=int(rho(x),x=b..1); 1 −π + 2 arcsin(2 b − 1) measure2 := − π 2 Compare the two values. > d:=measure1-measure2; √ 2 arcsin( 1 − b) 1 −π + 2 arcsin(2 b − 1) + d := π 2 π Assume 0 < b < 1. When we make assumptions about a variable, it is printed with an appended tilde. > assume(b < 1, b > 0): To show d = 0 we show sin(πd) = 0. > dd:=simplify(Pi*d); √ π dd := 2 arcsin( 1 − b˜) − + arcsin(2 b˜ − 1) 2 > expand(sin(dd)): > simplify(%); 0 This shows that d is an integer. Since |d| < 1, we conclude d = 0. 2.6.2 Chebyshev polynomials Consider the transformations f : [0, 1] → [0, 1] obtained from Chebyshev polynomials T : [−1, 1] → [−1, 1] with deg T ≥ 2 after the normalization of the domains and ranges. When deg T = 2, we obtain the logistic transformation. The transformations share the same invariant density function. > with(plots): We need a package for orthogonal polynomials. > with(orthopoly);


[G, H, L, P, T, U]

The letter ‘T’ comes from the name of the Russian mathematician P.L. Chebyshev; many years ago his name was transliterated as Tchebyshev, Tchebycheff or Tschebycheff. That is why ‘T’ stands for Chebyshev polynomials. Let w(x) = (1 − x²)^(−1/2) and let H = L²([−1, 1], w dx) be the Hilbert space with the inner product given by

(u, v) = ∫_{−1}^{1} u(x) v(x) w(x) dx

for u, v ∈ H. Chebyshev polynomials are orthogonal in H. Let us check it! Choose integers m and n; take small integers for speedy calculation.

> m:=5:
> n:=12:
> int(T(m,x)*T(n,x)/sqrt(1-x^2),x=-1..1);

0

Chebyshev polynomials are defined on [−1, 1], and therefore we normalize the domain so that the induced transformations are defined on [0, 1]. Choose the degree of a Chebyshev polynomial.

> Deg:=3;

Define a Chebyshev polynomial.

> Chev:=x->T(Deg,x):
> plot(Chev(x),x=-1..1);

The graph of T is omitted to save space. Normalize the domain and the range of T. In this subsection f denotes the transformation, since ‘T’ is reserved for the Chebyshev polynomial.

> f:=x->expand((Chev(2*x-1)+1)/2):

Now f is a transformation defined on [0, 1].

> f(x);

16 x³ − 24 x² + 9 x

Draw the graph y = f(x).

> plot(f(x),x=0..1);

See Fig. 2.24.


Fig. 2.24. A transformation f deﬁned by a Chebyshev polynomial


Now we prove that the transformations defined by Chebyshev polynomials all share the same invariant density. It is known that T(n, cos θ) = cos(nθ).

> T(Deg,cos(theta))-cos(Deg*theta);

4 cos(θ)³ − 3 cos(θ) − cos(3 θ)

> simplify(%);

0

Define a topological conjugacy φ based on the formula in [AdMc]. It will be used as an isomorphism for the two measure preserving transformations f and g. The formula in Ex. 2.24 may be used, too.

> phi:=x->(1+cos(Pi*x))/2;

φ := x → 1/2 + cos(π x)/2

Draw the graph y = φ(x).

> plot(phi(x),x=0..1,y=0..1,axes=boxed);

See the left graph in Fig. 2.25.


Fig. 2.25. The topological conjugacy φ(x) (left) and its inverse (right)

Find the inverse of φ.

> psi:=x->arccos(2*x-1)/Pi;

ψ := x → arccos(2 x − 1)/π

Draw the graph y = ψ(x) = φ⁻¹(x).

> plot(psi(x),x=0..1,y=0..1,axes=boxed);

See the right graph in Fig. 2.25. Check φ(ψ(x)) = x.

> phi(psi(x));

x

Define a transformation g(x) = ψ(f(φ(x))).

> g:=x->psi(f(phi(x)));

g := x → ψ(f(φ(x)))


Draw the graph y = g(x).

> plot(g(x),x=0..1,y=0..1,axes=boxed):

See Fig. 2.26.


Fig. 2.26. A transformation g conjugate to f through φ

It is obvious that g preserves Lebesgue measure dx on X = [0, 1]. Find the invariant probability measure µ for f, using the commutative diagram

(X, dx) --g--> (X, dx)
   |φ              |φ
   v               v
(X, µ)  --f--> (X, µ)

Note that the inverse image of [φ(x), 1] under φ is [0, x], which has Lebesgue measure equal to x. For µ to be an f-invariant measure, it must satisfy µ([φ(x), 1]) = x for 0 ≤ x ≤ 1. Thus µ([y, 1]) = ψ(y) and µ([0, y]) = 1 − ψ(y) for 0 ≤ y ≤ 1.

> -diff(psi(y),y);

1/(π √(−y² + y))

Hence

ρ(y) = d/dy (1 − ψ(y)) = 1/(π √(y(1 − y))) .
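The conjugacy relation φ ∘ g = f ∘ φ and the identity ρ(y) = −ψ′(y) can be sanity-checked numerically. A Python sketch for the case Deg = 3 (illustrative; a finite difference stands in for symbolic differentiation):

```python
import math

def phi(x): return (1 + math.cos(math.pi * x)) / 2
def psi(x): return math.acos(2 * x - 1) / math.pi
def f(x):   return 16 * x**3 - 24 * x**2 + 9 * x   # normalized Chebyshev, Deg = 3
def g(x):   return psi(f(phi(x)))

# phi and psi are inverse to each other, and phi conjugates g to f:
print(abs(phi(psi(0.3)) - 0.3))        # ~ 0
print(abs(phi(g(0.3)) - f(phi(0.3))))  # ~ 0

# -d(psi)/dy agrees with the arcsine density 1/(pi sqrt(y(1-y))):
y, h = 0.3, 1e-6
deriv = -(psi(y + h) - psi(y - h)) / (2 * h)
density = 1 / (math.pi * math.sqrt(y * (1 - y)))
print(deriv, density)
```

The check works for any degree: only the definition of f changes, while φ, ψ and the density stay the same.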

2.6.3 The beta transformation

Find the invariant measure of the β-transformation.

> with(plots):
> beta:=(1+sqrt(5))/2:

Define the transformation T.


> T:=x-> frac(beta*x):

Define the invariant probability density function ρ(x).

> rho:=x->piecewise( 0<=x and x<1/beta, beta^3/(1+beta^2), 1/beta<=x and x<1, beta^2/(1+beta^2) ):
> plot(T(x),x=0..1,y=0..1,axes=boxed);

See the left graph in Fig. 2.4.

> b0:=T(1);

b0 := −1/2 + √5/2

> plot(rho(x),x=0..1);

See the right graph in Fig. 2.4.

> int(rho(x),x=0..1);

(15 + 7√5)/((5 + √5)(2 + √5))

> simplify(%);

1

When we make assumptions about a variable, a tilde is appended.

> assume(b >= 0, b <= 1):
> b;

b~

> m1:=beta^3/(1+beta^2):
> m2:=beta^2/(1+beta^2):

Find the cumulative distribution function (cdf) for the invariant measure. In the following, cdf(b) is the measure of the interval [0, b].

> cdf:=piecewise( b<1/beta, m1*b, b>=1/beta, m1*(1/beta)+ m2*(b-1/beta) ):
> plot(cdf(b),b=0..1);

See Fig. 2.27. Find the inverse image (or inverse images) of a point b on the y-axis.

> a1:=solve(beta*x=b,x);

a1 := 2 b~/(1 + √5)

> a2:=solve(beta*x-1=b,x);

a2 := 2 (1 + b~)/(1 + √5)

In the following, cdf_inverse(b) is the measure of the inverse image of [0, b].

> cdf_inverse:=piecewise( b<b0, m1*a1 + m2*(a2-1/beta), b>=b0, m1*a1 + m2*(1-1/beta) ):

> plot(cdf_inverse(b),b=0..1);
> simplify(cdf-cdf_inverse);

0

This shows that ρ(x) is invariant under T.

2.6.4 The baker's transformation

Using mathematical pointillism we draw images of a rectangle under the iterates of the baker's transformation. See Fig. 2.8.

> with(plots):

Define the baker's transformation.

> T:=(x,y)->(frac(2*x), 0.5*y+0.5*trunc(2.0*x));

T := (x, y) → (frac(2 x), 0.5 y + 0.5 trunc(2.0 x))

> S:=4000:

Choose a starting point of an orbit of length S.

> seed[0]:=(sqrt(3.0)-1,evalf(Pi-3)):

Generate S points evenly scattered in the unit square.

> for i from 1 to S do seed[i]:=T(seed[i-1]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);

See the left plot in Fig. 2.28. Generate S points evenly scattered in C = {(x, y) : 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1}.

> for i from 1 to S do
> image0[i]:=(seed[i][1]/2,seed[i][2]): od:
> pointplot({seq([image0[i]],i=1..S)},symbolsize=1);

See the right plot in Fig. 2.28. Find T(C).

> for i from 1 to S do image1[i]:=T(image0[i]): od:
> pointplot({seq([image1[i]],i=1..S)},symbolsize=1);

See the first graph in Fig. 2.8.


Fig. 2.28. Evenly scattered points in the unit square (left) and a rectangle C (right)

Find T²(C).

> for i from 1 to S do image2[i]:=T(T(image0[i])): od:
> pointplot({seq([image2[i]],i=1..S)},symbolsize=1);

See the second graph in Fig. 2.8. Find T³(C).

> for i from 1 to S do image3[i]:=T(T(T(image0[i]))): od:
> pointplot({seq([image3[i]],i=1..S)},symbolsize=1);

See the third graph in Fig. 2.8. In the preceding three do loops, if we want to save computing time and memory, we may repeatedly use seed[i]:=T(seed[i]): and plot seed[i], 1 ≤ i ≤ S. See Maple Program 2.6.5.

2.6.5 A toral automorphism

Find the successive images of D = {(x, y) : 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1/2} under the Arnold cat mapping.

> with(plots):

Define a toral automorphism.

> T:=(x,y)->(frac(2*x+y),frac(x+y));
> seed[0]:=(evalf(Pi-3),sqrt(2.0)-1):
> S:=4000:
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:

Find S points in D.

> for i from 1 to S do
> seed[i]:=(seed[i][1]/2,seed[i][2]/2): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);

See Fig. 2.29. Find T(D).

> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);

See the first plot in Fig. 2.9.

Fig. 2.29. Evenly scattered points in the square D

Find T²(D).
> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the second plot in Fig. 2.9.
Find T³(D).
> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the third plot in Fig. 2.9.

2.6.6 Modified Hurwitz transformation

> with(plots):
Define two constants α and β.
> alpha:=(sqrt(5)-1)/2: beta:=(3-sqrt(5))/2:
Define a new command.
> bracket:=x->piecewise( 0 x, -floor(-x + beta) );
Define the transformation T.
> T:=x-> 1/x - bracket(1/x);
Define a probability density where C is an undetermined normalizing constant.
> rho:=x->piecewise(-alpha

2.6.7 A typical point of the Bernoulli measure

> N_decimal:=10000:
> N_decimal*log[2.](10);
33219.28095
> N_binary:=33300:
This is the number of binary significant digits needed to produce a decimal number with N_decimal significant decimal digits.
> evalf(2^(-N_binary),10);

0.5025096306 × 10^(−10024)
> for j from 1 to N_binary do d[j]:=ceil(ran()/3): od:

Find the number of the bit '1' in the binary sequence {dj}.
> num_1:=add(d[j],j=1..N_binary);
num_1 := 24915
The following number should be close to 3/4 by the Birkhoff Ergodic Theorem. See Chap. 3 for more information.
> evalf(num_1 / N_binary);
0.7481981982
Convert the binary number 0.d1 d2 d3 . . . into a decimal number.
> Digits:=N_decimal:
> M:=N_binary/100;
M := 333
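The generation of the biased bits can be mimicked in Python. (The definition of ran() is not visible in this excerpt; we assume, as ceil(ran()/3) suggests, that it returns a uniform integer in {0, 1, 2, 3}, so each bit equals 1 with probability 3/4. The fixed seed is ours.)

```python
import random

random.seed(0)                      # fixed seed, for reproducibility only
N_binary = 33300

# ceil(r/3) is 1 for r in {1, 2, 3} and 0 for r = 0, so P(d_j = 1) = 3/4.
d = [1 if random.randrange(4) else 0 for _ in range(N_binary)]

freq = sum(d) / N_binary            # close to 3/4, as in the Maple run
```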


To compute Σ_{s=1}^{33300} d_s 2^(−s), calculate 100 partial sums first, then add them all. In the following partial_sum[k] is stored as a quotient of two integers.
> for k from 1 to 100 do
> partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M): od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..100));
x0 := 0.964775085321818215215 . . .
This is a typical point for the Bernoulli measure represented on [0, 1].

2.6.8 A typical point of the Markov measure

Find a typical binary Markov sequence x0 defined by a stochastic matrix P. Consult Sect. 2.3 and see Fig. 2.31.
> with(linalg):
> P:=matrix([[1/3,2/3],[1/2,1/2]]);
P := [ 1/3  2/3 ]
     [ 1/2  1/2 ]
Fig. 2.31. The graph corresponding to P

Choose the positive vector in the following:
> eigenvectors(transpose(P));
[1, 1, {[1, 4/3]}], [−1/6, 1, {[−1, 1]}]
The Perron–Frobenius eigenvector is given by the following:
> v:=[3/7,4/7]:
> evalm(v&*P);
[3/7, 4/7]
Observe that the rows of P^n converge to v. See Theorem 5.21.
> evalf(evalm(P&^20),10);
[ 0.4285714286  0.5714285714 ]
[ 0.4285714286  0.5714285714 ]

> 3/7.;
0.4285714286
> 4/7.;
0.5714285714
Construct a typical binary Markov sequence of length N_binary.
> ran0:=rand(0..2): ran1:=rand(0..1):
> N_decimal:=10000:
> N_decimal*log[2.](10);
33219.28095
> N_binary:=33300;
N_binary := 33300
> evalf(2.0^(-N_binary),5);
0.50251 × 10^(−10024)
As in the previous case this shows that the necessary number of binary digits is 33300 when we use 10000 significant decimal digits.
> d[0]:=1:
> for j from 1 to N_binary do
> if d[j-1]=0 then d[j]:=ceil(ran0()/2):
> else d[j]:=ran1(): fi
> od:
Count the number of the binary symbol 1.
> num_1:=add(d[j],j=1..N_binary);
num_1 := 19111
The following is an approximate measure of the cylinder set [1]_1, which should be close to 4/7 = 0.5714 . . . by the Birkhoff Ergodic Theorem in Chap. 3.
> evalf(num_1 / N_binary,10);
0.5739039039
Convert the binary number 0.d1 d2 d3 . . . into a decimal number.
> Digits:=N_decimal;
Digits := 10000
In computing add(d[s]/2^s,s=1..33300), we first divide it into 10 partial sums, calculate each partial sum separately, then finally add them all.
> M:=N_binary/10;
M := 3330
In the following partial_sum[k] is stored as a quotient of two integers.
> for k from 1 to 10 do
> partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M): od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..10));
x0 := 0.58976852039782534571049721 . . .
This is a typical point for the Markov measure represented on [0, 1].
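The computations of this program can be cross-checked in Python (an illustrative sketch; exact rational arithmetic replaces linalg, and the random seed is ours):

```python
import random
from fractions import Fraction as F

P = [[F(1, 3), F(2, 3)], [F(1, 2), F(1, 2)]]

def vec_mat(v, M):
    # Row vector times matrix, in exact rational arithmetic.
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

# v = (3/7, 4/7) is the Perron-Frobenius (stationary) row vector: vP = v.
v = [F(3, 7), F(4, 7)]
assert vec_mat(v, P) == v

# Rows of P^n converge to v: propagate the first unit row vector 20 times.
row = [F(1), F(0)]
for _ in range(20):
    row = vec_mat(row, P)   # row is now the first row of P^20

# Simulate a Markov binary sequence with transition matrix P and count 1's.
random.seed(1)
d, ones, N = 1, 0, 33300
for _ in range(N):
    p_one = 2 / 3 if d == 0 else 1 / 2   # P(0->1) = 2/3, P(1->1) = 1/2
    d = 1 if random.random() < p_one else 0
    ones += d
freq = ones / N   # close to mu([1]_1) = 4/7 = 0.5714...
```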


2.6.9 Coding map for the logistic transformation

Let E0 = [0, 1/2), E1 = [1/2, 1]. For x ∈ [0, 1] define a binary sequence b_n by T^(n−1)x ∈ E_{b_n}. We identify (b1, b2, . . .) with ψ(x) = Σ_n b_n 2^(−n) and sketch the graph of ψ. Consult Sect. 2.5 for the details.
> with(plots):
> Digits:=100:
Choose the logistic transformation.
> T:=x-> 4*x*(1-x):
Choose the number of points on the graph.
> N:=2000:
> M:=12:
Choose a starting point of an orbit.
> seed[0]:=evalf(Pi-3):
Find φ(x).
> for n from 1 to N+M-1 do
> seed[n]:=T(seed[n-1]):
> if seed[n] < 1/2 then b[n]:=0: else b[n]:=1: fi:
> od:
Find ψ(x).
> for n from 1 to N do
> psi[n]:=add(b[n+i-1]/2^i,i=1..M): od:
We don't need many Digits now.
> Digits:=10:
> pointplot([seq([seed[n],psi[n]],n=1..N)],symbolsize=1);
See Fig. 2.32.

Fig. 2.32. y = ψ(x) for the logistic transformation
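A Python version of the truncated coding map (a hypothetical translation of the program above):

```python
def psi(x, M=12):
    # psi(x) ~ sum_{i=1}^{M} b_i 2^{-i}, where b_i = 1 iff T^{i-1}x >= 1/2
    # for the logistic transformation T x = 4x(1 - x).
    s = 0.0
    for i in range(1, M + 1):
        if x >= 0.5:
            s += 2.0 ** (-i)
        x = 4 * x * (1 - x)
    return s
```

Since 3/4 is a fixed point of T with 3/4 ≥ 1/2, every bit of its coding is 1, so psi(3/4) = 1 − 2^(−12).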

4 The Central Limit Theorem

The Birkhoff Ergodic Theorem may be viewed as a dynamical systems theory version of the Strong Law of Large Numbers presented in Subsect. 1.6.3 on statistical laws. The Central Limit Theorem also has its dynamical systems theory version. More precisely, under suitable conditions on a function f and a transformation T the sequence {f ∘ T^i : i ≥ 0} becomes asymptotically statistically independent, and (1/n) Σ_{i=0}^{n−1} f(T^i x) is asymptotically normally distributed after normalization. One of the conditions on T is a property called mixing, which is stronger than ergodicity. The speed of correlation decay is also presented later in the chapter.

4.1 Mixing Transformations

When we have two points on the unit interval and consider their orbits under an irrational translation mod 1, their relative positions are not changed at all. In other words, they are not mixed.

Definition 4.1. Let T be a measure preserving transformation of a probability space (X, A, µ). Then T is said to be weak mixing if for every A, B ∈ A

  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |µ(T^(−k)A ∩ B) − µ(A)µ(B)| = 0 .

It is said to be mixing, or strong mixing, if

  lim_{n→∞} µ(T^(−n)A ∩ B) = µ(A)µ(B) .

It is easy to see that mixing implies weak mixing and weak mixing implies ergodicity. See Theorem 3.12. For a measure preserving transformation T define an isometry U_T on L²(X, µ) by

  U_T f(x) = f(Tx) ,  f ∈ L²(X, µ) .


Let (·, ·) denote the inner product on L²(X, µ). Then (f, 1) = ∫_X f dµ.
In the following three theorems, only Theorem 4.5 will be proved. The other cases can be proved similarly.

Theorem 4.2. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is ergodic.
(ii) For f, g ∈ L²(X, µ),
  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} (U_T^k f, g) = (f, 1)(1, g) .
(iii) For f ∈ L²(X, µ),
  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} (U_T^k f, f) = (f, 1)(1, f) .

Theorem 4.3. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is weak mixing.
(ii) For f, g ∈ L²(X, µ),
  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, g) − (f, 1)(1, g)| = 0 .
(iii) For f ∈ L²(X, µ),
  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, f) − (f, 1)(1, f)| = 0 .
(iv) For f ∈ L²(X, µ),
  lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, f) − (f, 1)(1, f)|² = 0 .

For the equivalence of (iii) and (iv) we use Lemma 4.4. For its proof consult [Pet],[Wa1]. In the following lemma we give a necessary and sufficient condition for the convergence of the Cesàro sum.


Lemma 4.4. Let {a_n}_{n=1}^∞ be a bounded sequence of nonnegative numbers. The following statements are equivalent:
(i)
  lim_{n→∞} (1/n) Σ_{k=1}^{n} a_k = 0 .
(ii) There exists J ⊂ N of density zero, i.e.,
  lim_{n→∞} card{k ∈ J : 1 ≤ k ≤ n} / n = 0 ,
such that the subsequence of a_n with n ∉ J converges to 0.
(iii)
  lim_{n→∞} (1/n) Σ_{k=1}^{n} a_k² = 0 .

Theorem 4.5. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is mixing.
(ii) For f ∈ L²(X, µ),
  lim_{n→∞} (U_T^n f, f) = (f, 1)(1, f) .
(iii) For f, g ∈ L²(X, µ),
  lim_{n→∞} (U_T^n f, g) = (f, 1)(1, g) .

Proof. Throughout the proof we will write U in place of U_T for notational simplicity.
(i) ⇒ (ii). Take a simple function f = Σ_{i=1}^{k} c_i 1_{E_i}, E_i ⊂ X. Then

  U^n f = Σ_{i=1}^{k} c_i 1_{T^(−n)E_i}

and

  (U^n f, f) = Σ_{i,j=1}^{k} c_i c_j µ(T^(−n)E_i ∩ E_j) → Σ_{i,j=1}^{k} c_i c_j µ(E_i)µ(E_j) = (f, 1)(1, f) .

For h ∈ L² the Cauchy–Schwarz inequality implies |(h, 1)| ≤ ||h||₂ ||1||₂ = ||h||₂ since µ(X) = 1. Take g ∈ L². Since simple functions are dense in L²,


for every ε > 0 there exists a simple function f such that ||f − g||₂ < ε. Let n be an integer such that |(U^n f, f) − (f, 1)(1, f)| < ε. Since U is an isometry, ||U^n f − U^n g||₂ = ||f − g||₂ < ε. The Cauchy–Schwarz inequality implies that

  |(U^n f, f) − (U^n g, g)| = |(U^n f, f) − (U^n f, g) + (U^n f, g) − (U^n g, g)|
    ≤ |(U^n f, f − g)| + |(U^n f − U^n g, g)|
    ≤ ||U^n f||₂ ||f − g||₂ + ||U^n f − U^n g||₂ ||g||₂
    < ||f||₂ ε + ε ||g||₂ < ε (2 ||g||₂ + ε)

and

  |(f, 1)(1, f) − (g, 1)(1, g)| = | |(f, 1)|² − |(g, 1)|² |
    = ( |(f, 1)| + |(g, 1)| ) | |(f, 1)| − |(g, 1)| |
    ≤ ( ||f||₂ + ||g||₂ ) | (f, 1) − (g, 1) |
    = ( ||f||₂ + ||g||₂ ) | (f − g, 1) |
    ≤ ( ||f||₂ + ||g||₂ ) ||f − g||₂ ≤ (2 ||g||₂ + ε) ε .

Hence

  |(U^n g, g) − (g, 1)(1, g)|
    = |(U^n g, g) − (U^n f, f) + (U^n f, f) − (f, 1)(1, f) + (f, 1)(1, f) − (g, 1)(1, g)|
    ≤ ε(2 ||g||₂ + ε) + ε + ε(2 ||g||₂ + ε) .

Letting ε → 0, we conclude that lim_{n→∞} (U^n g, g) = (g, 1)(1, g).
(ii) ⇒ (iii). From (U^n(f + g), f + g) → (f + g, 1)(1, f + g) as n → ∞, we have

  (∗)  (U^n f, g) + (U^n g, f) → (f, 1)(1, g) + (g, 1)(1, f) .

Substituting if for f, we obtain

  i(U^n f, g) − i(U^n g, f) → i(f, 1)(1, g) − i(g, 1)(1, f) ,

and so

  (∗∗)  (U^n f, g) − (U^n g, f) → (f, 1)(1, g) − (g, 1)(1, f) .

Adding (∗) and (∗∗), we have (U^n f, g) → (f, 1)(1, g).
(iii) ⇒ (i). Choose f = 1_A and g = 1_B for measurable subsets A and B. Then U^n f = 1_{T^(−n)A} and

  (U^n f, g) = ∫ 1_{T^(−n)A} 1_B dµ = µ(T^(−n)A ∩ B) .

Hence T is mixing.


Theorem 4.6. Let G be a compact abelian group and let T : G → G be an onto endomorphism. Then T is ergodic if and only if T is mixing.

Note that if G is a finite group consisting of more than one element, then T is not ergodic since T leaves invariant the one point set consisting of the identity element of G, which has positive measure. Recall that if an endomorphism T is onto, then it preserves Haar measure. See Ex. 2.11.

Proof. It suffices to show that ergodicity implies mixing. Define U by Uf = f ∘ T for f ∈ L². Take a character χ of G. Then U^n χ, n ≥ 1, is also a character of G since

  (U^n χ)(g₁g₂) = χ(T^n(g₁g₂)) = χ(T^n(g₁)T^n(g₂)) = χ(T^n(g₁))χ(T^n(g₂)) .

Note that either (U^n χ, χ) = 0 or (U^n χ, χ) = 1 for n ≥ 1 since characters are orthonormal. Assume χ ≠ 1. Then U^n χ ≠ χ for any n ≥ 1. If it were not so, put

  n₀ = min{n ≥ 1 : U^n χ = χ} .

Then U^(kn₀) χ = χ, and hence

  lim_{n→∞} (1/n) Σ_{j=1}^{n} (U^j χ, χ) ≥ 1/n₀ > 0 ,

which contradicts the ergodicity of T. (Or, we may use the fact that f = χ + Uχ + · · · + U^(n₀−1)χ is T-invariant. Since (f, χ) = 1, f is not constant.) Thus (U^n χ, χ) = 0 for every n ≥ 1.
Similarly, let χ₁, χ₂ be two distinct characters such that χ₁ ≠ 1 and χ₂ ≠ 1. If U^k χ₁ = χ₂ for some k ≥ 1, then

  (U^(n+k) χ₁, χ₂) = (U^n χ₂, χ₂) = 0 ,  n ≥ 1 ,

by the previous argument. Thus (U^n χ₁, χ₂) = 0 for sufficiently large n.
Consider a finite linear combination of characters f = Σ_j c_j χ_j where χ₀ = 1. Then for large n

  (U^n f, f) = ( Σ_j c_j U^n χ_j , Σ_j c_j χ_j ) = Σ_{j,k} c_j c̄_k (U^n χ_j, χ_k) = c₀ c̄₀ .

Since (f, 1) = c₀, we have

  lim_{n→∞} (U^n f, f) = (f, 1)(1, f) .

Proceeding as in the proof of Theorem 4.5, and using the fact that the set of all finite linear combinations of characters is dense in L², we can show that (U^n f, f) converges to (f, 1)(1, f) for arbitrary f ∈ L².


Theorem 4.7. A weak mixing transformation has no eigenvalue except 1.

Proof. Let (X, µ) be a probability space and let T : X → X be a weak mixing transformation. Suppose that λ ≠ 1 is an eigenvalue of T, i.e., there exists a nonconstant L²-function f such that f(Tx) = λf(x), with ||f||₂ = 1. Define Uf(x) = f(Tx). Then (U^j f, f) = λ^j (f, f). The Cauchy–Schwarz inequality implies that |(f, 1)|² < (f, f)(1, 1) = 1, where we have the strict inequality because f is not constant. (See Fact 1.12.) Hence

  lim_{n→∞} (1/n) Σ_{j=0}^{n−1} |(U^j f, f) − (f, 1)(1, f)| = lim_{n→∞} (1/n) Σ_{j=0}^{n−1} |λ^j − |(f, 1)|²| > 0 ,

which contradicts Theorem 4.3.

Example 4.8. An irrational translation Tx = x + θ (mod 1) is not weak mixing since U_T(e^(2πinx)) = e^(2πinθ) e^(2πinx) for n ∈ Z. Or, we may take f(x) = e^(2πix) and use Theorem 4.3. See Fig. 4.1 and Maple Program 4.4.6 for a simulation with f(x) = x and θ = √3 − 1.

Fig. 4.1. An irrational translation T x = {x + θ} is not mixing. The inner product (f ∘ T^n, f) does not converge as n → ∞ (left) but the Cesàro sum converges (right) for f(x) = x and θ = √3 − 1
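The eigenfunction argument of Example 4.8 can be checked numerically in Python (an illustrative sketch, not one of the book's programs): for f(x) = e^(2πix) we have U_T f = e^(2πiθ) f, so |(U_T^n f, f)| = 1 for every n while (f, 1)(1, f) = 0, and the averaged quantities in Theorem 4.3 cannot tend to 0.

```python
import cmath
import math

theta = math.sqrt(3) - 1
S = 2000
xs = [(i + 0.5) / S for i in range(S)]
fvals = [cmath.exp(2j * math.pi * x) for x in xs]

def inner(n):
    # Grid estimate of (U_T^n f, f) = int_0^1 f(x + n*theta) conj(f(x)) dx.
    total = 0j
    for x, fx in zip(xs, fvals):
        total += cmath.exp(2j * math.pi * ((x + n * theta) % 1.0)) * fx.conjugate()
    return total / S

mags = [abs(inner(n)) for n in range(1, 21)]   # all equal to 1
mean_f = sum(fvals) / S                        # (f, 1) = 0
```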

4.2 The Central Limit Theorem

For an ergodic measure preserving transformation T on a probability space (X, µ), take a measurable function (i.e., a random variable) f on X. Let

  (S_n f)(x) = Σ_{i=0}^{n−1} f(T^i x) .

We consider asymptotic statistical distributions of (1/n) S_n f. The mean and the variance of f are given by

  µ(f) = ∫_X f(x) dµ  and  ∫_X (f(x) − µ(f))² dµ ,

respectively. The symbol µ stands for two things in this chapter: the mean and the measure. Note that f ∘ T^i has the same probability distribution for every i ≥ 0, i.e., Pr(f ∘ T^i ∈ E) = Pr(f ∈ E), since

  µ((f ∘ T^i)^(−1)(E)) = µ(T^(−i)(f^(−1)(E))) = µ(f^(−1)(E)) .

Note that the mean of (1/n) S_n f is equal to µ(f) and its variance is given by

  ∫_X ( (S_n f − nµ(f)) / n )² dµ .

The Central Limit Theorem does not necessarily hold true for general transformations. See [LV],[Vo]. In the following a sufficient condition for a special class of transformations is given.

Theorem 4.9 (The Central Limit Theorem). Let T be a piecewise C¹ and expanding transformation on [0, 1], i.e., there is a partition 0 = a₀ < a₁ < . . . < a_{k−1} < a_k = 1 such that T is C¹ on each [a_{i−1}, a_i] and |T′(x)| ≥ B for some constant B > 1. (At the endpoints of an interval we consider directional derivatives.) Assume that 1/|T′(x)| is a function of bounded variation. Suppose that T is weak mixing with respect to an invariant probability measure µ. Let f be a function of bounded variation such that the equation f(x) = g(Tx) − g(x) has no solution g of bounded variation. Then

  σ² = lim_{n→∞} ∫₀¹ ( (S_n f − nµ(f)) / √n )² dµ > 0

and, for every α,

  lim_{n→∞} µ( { x : (S_n f(x) − nµ(f)) / (σ√n) ≤ α } ) = ∫_{−∞}^{α} Φ(t) dt ,

where

  Φ(t) = (1/√(2π)) e^(−t²/2) .

Consult [HoK],[R-E] for a proof. Also see an expository article by Young [Y2]. For more details consult [Bal],[BoG],[Den].

Remark 4.10. The nth standard deviation is defined by

  σ_n = ( ∫₀¹ ( (S_n f − nµ(f)) / √n )² dµ )^(1/2) .

Then σ_n converges to a positive constant for transformations satisfying the conditions in Theorem 4.9. See Fig. 4.2 for n = 10, 20, . . . , 100 and Maple Program 4.4.1. If n is large, then the values of (1/n) S_n f(x), for random choices of x, are distributed around the average µ(f) in a bell-shaped form with standard deviation σ_n/√n. See Fig. 4.3 and Maple Program 4.4.2.

Fig. 4.2. σn converges to a positive constant as n → ∞. Simulations for T x = {2x}, T x = {βx} and T x = 4x(1 − x) with f(x) = 1_{[1/2,1)}(x) (from left to right)
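For Tx = {2x} with f = 1_{[1/2,1)}, the values f(T^i x) are i.i.d. fair coin flips under Lebesgue measure, so σ_n can be estimated in Python without iterating {2x} in floating point (which loses one binary digit per step). An illustrative sketch, with our own seed and sample sizes:

```python
import random

random.seed(2)
n, num = 100, 2000

# For T x = {2x} and f = indicator of [1/2, 1), S_n f is a sum of n
# independent fair bits, so sigma should be close to 1/2.
samples = [sum(random.getrandbits(1) for _ in range(n)) for _ in range(num)]

mu_f = 0.5
var_n = sum((s - n * mu_f) ** 2 for s in samples) / (num * n)
sigma_n = var_n ** 0.5
```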

Definition 4.11. An integrable function f is called an additive coboundary if there exists an integrable function g satisfying f(x) = g(Tx) − g(x). A necessary condition for the existence of such a g is ∫₀¹ f(x) dµ = 0.

Remark 4.12. If f is an additive coboundary of the form f = g ∘ T − g where g ∈ L²(X, µ), then µ(f) = 0 and

  S_n f(x) = Σ_{i=0}^{n−1} f(T^i x) = Σ_{i=0}^{n−1} ( g(T^(i+1)x) − g(T^i x) ) = g(T^n x) − g(x) .
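The telescoping identity can be verified numerically. The following Python sketch uses a rotation and g(x) = sin 2πx of our own choosing, with f = g ∘ T − g a coboundary by construction:

```python
import math

theta = math.sqrt(2) - 1
T = lambda x: (x + theta) % 1.0
g = lambda x: math.sin(2 * math.pi * x)
f = lambda x: g(T(x)) - g(x)      # an additive coboundary by construction

x0, n = 0.123, 500
s, y = 0.0, x0
for _ in range(n):
    s += f(y)
    y = T(y)
# Telescoping: S_n f(x0) = g(T^n x0) - g(x0), so S_n f stays bounded in n.
```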


In this case σ = 0 and the Central Limit Theorem does not hold since 1 1 2 2 (g(T n x) − g(x)) dµ σ = lim n→∞ n 0 1 1 1 1 n 2 n 2 g(T x) dµ − 2 g(T x)g(x) dµ + g dµ = lim n→∞ n 0 0 0 1 1 2 g 2 dµ − g(T n x)g(x) dµ = lim n→∞ n 0 0

1 1/2 1 1/2 1 2 2 n 2 2 g dµ + g(T x) dµ g dµ ≤ lim n→∞ n 0 0 0 4 1 2 g dµ = 0 . = lim n→∞ n 0 It is known that f is an additive coboundary if and only if σ = 0. For a proof consult [BoG]. The proof for T x = {2x} was ﬁrst obtained by M. Kac in 1938, and subsequent generalizations are found in [For],[Ka1],[Ci]. Remark 4.13. Let T be weak mixing. If E is a measurable subset such that 0 < µ(E) < 1, then the function 1E (x) − µ(E) is not an additive coboundary. For, if we let 1E (x) − µ(E) = g(T x) − g(x) then exp(2πig(T x)) = exp(−2πiµ(E)) exp(2πig(x)) . Hence exp(−2πiµ(E)) is the eigenvalue of the linear operator UT f (x) = f (T x), which is a contradiction since exp(−2πiµ(E)) = 1. Example 4.14. (Multiplication by 2 modulo 1) For x ∈ X = [0, 1) we have the binary expansion x=

∞

di (x)2−i ,

di (x) ∈ {0, 1} .

i=1

Taking Tx = {2x} and f(x) = 1_E(x), E = [1/2, 1), in the Birkhoff Ergodic Theorem, we have the Normal Number Theorem, i.e.,

  lim_{n→∞} (1/n) Σ_{i=1}^{n} d₁(T^(i−1)x) = 1/2

for almost every x. Note that the classical Central Limit Theorem in Subsect. 1.6.3 gives more information. Note that d_i = d₁ ∘ T^(i−1), i ≥ 1. They have the same mean

  ∫ d_i(x) dx = ∫ d₁(T^(i−1)x) dx = ∫ d₁(x) dx = 1/2

and the same variance

  ∫ ( d_i(x) − 1/2 )² dx = ∫ ( d₁(x) − 1/2 )² dx = 1/4 .

They are identically distributed, i.e.,

  Pr(d_i = 0) = Pr(d_i = 1) = 1/2

for i ≥ 1. (This fact may be used to show that the d_i have the same mean and variance.) Furthermore, they are independent. Hence we have

  lim_{n→∞} Pr( ( Σ_{i=1}^{n} d₁(T^(i−1)x) − n/2 ) / ( (1/2)√n ) ≤ a ) = ∫_{−∞}^{a} Φ(t) dt .

The probability on the left hand side is given by Lebesgue measure on the unit interval. This can be derived from Theorem 4.9. See the left graph in Fig. 4.3, where the smooth curve is the density function for the standard normal distribution. The number of random samples is 10⁴; in other words, we select 10⁴ points from [0, 1] uniformly according to the invariant probability measure, which is Lebesgue measure in this case.

Fig. 4.3. The Central Limit Theorem: Simulations for T x = {2x}, T x = {βx} and T x = 4x(1 − x) with n = 100 and f(x) = 1_{[1/2,1)}(x) (from left to right)
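A Python sketch of the left panel's experiment using i.i.d. binary digits directly (seed and sample sizes are ours): the fraction of normalized sums below 1 should approximate ∫_{−∞}^{1} Φ(t) dt ≈ 0.841.

```python
import math
import random

random.seed(3)
n, num = 100, 5000

def Phi_cdf(a):
    # Standard normal distribution function, the integral of Phi up to a.
    return 0.5 * (1 + math.erf(a / math.sqrt(2)))

# Count samples whose normalized digit sum (S_n - n/2)/(sqrt(n)/2) is <= 1.
hits = 0
for _ in range(num):
    s = sum(random.getrandbits(1) for _ in range(n))
    if (s - n / 2) / (0.5 * math.sqrt(n)) <= 1.0:
        hits += 1
ratio = hits / num    # compare with Phi_cdf(1.0) ~ 0.8413
```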

Example 4.15. (Irrational translation) Let 0 < θ < 1 be irrational, and let Tx = {x + θ}. In this case T is not weak mixing and the Central Limit Theorem does not hold. For a simulation, we first take a function f which is not an additive coboundary, to avoid the trivial case. For example, choose an interval E = (a, b) and put f(x) = 1_E(x) − (b − a). Then f is not an additive coboundary if

  b − a ∉ Zθ + Z = {mθ + n : m, n ∈ Z} .

For, if we had f(x) = g(x + θ) − g(x), then


  exp(2πi(1_E(x) − (b − a))) = exp(2πig(x + θ)) exp(−2πig(x)) .

Put q(x) = exp(2πig(x)). Then

  q(x + θ) = e^(−2πi(b−a)) q(x) .

Hence e^(−2πi(b−a)) = e^(2πinθ) for some n, which is a contradiction. For another choice of f(x) see Fig. 4.4 and Maple Program 4.4.3.

Fig. 4.4. Failures of the Central Limit Theorem: σn → 0 as n → ∞ for T x = {x + θ}, θ = √2 − 1, with f(x) = 1_{[1/2,1)}(x) (left) and f(x) = x (right)
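Why σ_n → 0 here can be seen from the Denjoy–Koksma inequality (not proved in the text): if q is a denominator of a convergent of θ, then |S_q f(x) − q∫f| ≤ Var(f) for every x and every f of bounded variation. For θ = √2 − 1 = [0; 2, 2, 2, . . .] one such denominator is q = 169, and the centered indicator below has Var(f) = 2 on the circle. An illustrative Python check:

```python
import math

theta = math.sqrt(2) - 1
f = lambda x: (1.0 if x >= 0.5 else 0.0) - 0.5   # 1_[1/2,1) minus its mean

def S(n, x):
    # Birkhoff sum S_n f(x) along the rotation orbit of x.
    s = 0.0
    for _ in range(n):
        s += f(x)
        x = (x + theta) % 1.0
    return s

# 169 is a convergent denominator of sqrt(2)-1, so |S_169 f(x)| <= 2
# for every x by Denjoy-Koksma; hence sigma_169 <= sqrt(4/169) ~ 0.154.
pts = [i / 500 + 0.0007 for i in range(500)]
worst = max(abs(S(169, x)) for x in pts)
sigma_169 = math.sqrt(sum(S(169, x) ** 2 for x in pts) / len(pts) / 169)
```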

4.3 Speed of Correlation Decay

Throughout the section we use the natural logarithm. Let T be a measure preserving transformation on (X, µ). For a real-valued function f ∈ L²(X, µ) the nth correlation coefficient r_n(f) is defined by

  r_n(f) = | ∫_X f(T^n x)f(x) dµ − ( ∫_X f dµ )² | .

Theorem 4.5 implies that T is mixing if and only if r_n(f) → 0 as n → ∞ for every f. If T satisfies the conditions given in Theorem 4.9, then r_n(f) → 0 exponentially. More precisely, there exist constants C > 0 and α > 0 such that r_n(f) ≤ C e^(−αn), where C depends on f and α does not depend on f. For a proof consult [BoG]. Note that

  lim inf_{n→∞} ( − log r_n(f) / n ) ≥ α .

This property can be observed in the following simulations. If n is large, it is hard to estimate r_n(f) accurately due to the margin of error in numerical computation. See Maple Program 4.4.5.


Example 4.16. (Exponential decay) Correlation coefficients of Tx = {2x} are exponentially decreasing as n → ∞. See Maple Program 4.4.4 and Fig. 4.5 for the theoretical calculation with f(x) = x and 1 ≤ n ≤ 15. In the right graph −(log r_n(f))/n converges to log 2 ≈ 0.693, which is marked by a horizontal line. If n is large, say n = 10, then r_n(f) is very small theoretically. In this case, due to the margin of error in estimating the integral ∫ f(T^n x)f(x) dµ by the Birkhoff Ergodic Theorem, it is difficult to numerically estimate the exponential decay of correlations.

Fig. 4.5. Correlation of T x = {2x} is exponentially decreasing: y = rn(f), y = 1/rn(f) and y = −(log rn(f))/n, where f(x) = x (from left to right)

For g(x) = sin(2πx) we have g(T^n x) = sin(2^(n+1)πx) and r_n(g) = 0, n ≥ 1, since ∫₀¹ sin(2πmx) sin(2πnx) dx = 0 for m ≠ n. Similarly, for h(x) = 1_E(x), E = [0, 1/2), we have r_n(h) = 0, n ≥ 1, since ∫₀¹ h(T^n x)h(x) dx = 1/4.

Example 4.17. (Exponential decay) Let β = (√5 + 1)/2. Correlation coefficients of Tx = {βx} are exponentially decreasing as n → ∞. The transformation T is isomorphic to the Markov shift in Ex. 2.32, and hence it is mixing. See Fig. 4.6 for a simulation with f(x) = x and 1 ≤ n ≤ 10, where the sample size is 10⁶.

Fig. 4.6. Correlation of T x = {βx} is exponentially decreasing: y = rn(f), y = 1/rn(f) and y = −(log rn(f))/n, where f(x) = x (from left to right)


Example 4.18. (Polynomial decay) For 0 < α < 1, define

  T_α x = x(1 + 2^α x^α)  for 0 ≤ x < 1/2 ,  T_α x = 2x − 1  for 1/2 ≤ x ≤ 1 .

It is shown in [Pia] that there exists an absolutely continuous ergodic invariant probability measure dµ = ρ(x) dx for every 0 < α < 1. In Fig. 4.7 the graph y = ρ(x) is sketched by applying the Birkhoff Ergodic Theorem. See also Maple Program 3.9.6. For α = 1 there exists an absolutely continuous ergodic invariant measure µ such that µ([0, 1]) = ∞. Consult [Aa].

Fig. 4.7. The graphs of y = ρ(x) (left) and y = 1/ρ(x) (right) for Ex. 4.18 with α = 0.5

If f is Lipschitz continuous, then there exists a constant C > 0 such that r_n(f) ≤ C n^(−(1/α)+1). In general, consider a piecewise smooth expanding transformation T on the unit interval that has the form Tx = x + x^(1+α) + o(x^(1+α)) near 0, where 0 < α < 1. The notation o(x^p) means a function φ(x) such that

  lim_{x→0+} φ(x) x^(−p) = 0 .

The rate of decay of correlations is polynomial for a Lipschitz continuous function f, i.e., there exist positive constants C₁ and C₂ such that

  C₁ n^(−(1/α)+1) ≤ r_n(f) ≤ C₂ n^(−(1/α)+1) .

The density function ρ has the order x^(−α) as x → 0. For the proof consult [Hu]. See also [BLvS],[Bal],[LSV],[Y3]. In Fig. 4.8 we choose 10⁶ samples with α = 0.5 and f(x) = x, and plot y = r_n(f), y = 1/r_n(f) and y = n^((1/α)−1) r_n(f), 1 ≤ n ≤ 30. In this example the rate of decay is not fast, and so we can numerically estimate r_n within an acceptable margin of error. Thus we can take a relatively large value for n.


Fig. 4.8. Polynomial decay of correlations: y = rn(f), y = 1/rn(f) and y = n rn(f) for α = 0.5 and f(x) = x (from left to right)
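The map T_α of Example 4.18 in Python (an illustrative sketch; note that x(1 + 2^α x^α) = x + (2x)^α x, so orbits leave a neighborhood of the neutral fixed point 0 only slowly, which is the source of the polynomial decay):

```python
def T_alpha(x, alpha=0.5):
    # Piecewise expanding map with a neutral fixed point at 0:
    # T x = x(1 + 2^alpha x^alpha) on [0, 1/2), and T x = 2x - 1 on [1/2, 1].
    if x < 0.5:
        return x * (1 + (2 * x) ** alpha)
    return 2 * x - 1

# Iterate an orbit; it stays in [0, 1] and lingers near 0 after each
# passage through the left branch close to the fixed point.
x = 0.3
orbit = []
for _ in range(1000):
    x = T_alpha(x)
    orbit.append(x)
```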

Example 4.19. (Irrational translations) Let 0 < θ < 1 be irrational. Correlation coefficients of Tx = {x + θ} do not decrease to 0 as n → ∞. To see why, choose a nonconstant real-valued L²-function f(x). Then

  t ↦ ∫₀¹ f(x + t)f(x) dx

is a continuous function of t but not constant as t varies. Hence there exist a < b and C > 0 such that

  | ∫₀¹ f(x + t)f(x) dx − ( ∫₀¹ f dx )² | > C

for every a < t < b. Observe that there exists a sequence {n_k}_{k=1}^∞ such that a < {n_k θ} < b for every k. Hence for every k

  r_{n_k}(f) = | ∫₀¹ f(x + n_k θ)f(x) dx − ( ∫₀¹ f dx )² | > C .

To see why t ↦ ∫₀¹ f(x + t)f(x) dx is a nonconstant continuous function of t, let

  f(x) = Σ_{n=−∞}^{∞} a_n e^(2πinx)

be the Fourier expansion of f. Note that there exists n ≠ 0 such that a_n ≠ 0 since f is not constant. Then

  f(x + t) = Σ_{n=−∞}^{∞} a_n e^(2πint) e^(2πinx) ,

and Parseval's identity implies that

  ∫₀¹ f(x + t)f(x) dx = Σ_{n=−∞}^{∞} |a_n|² e^(2πint) ,


which is not a constant function of t since there exists n ≠ 0 such that |a_n|² ≠ 0. To see that the function

  t ↦ Σ_{n=−∞}^{∞} |a_n|² e^(2πint)

is continuous, observe that

  | Σ_{n=−∞}^{∞} |a_n|² e^(2πint) − Σ_{n=−N}^{N} |a_n|² e^(2πint) | ≤ Σ_{|n|>N} |a_n|² → 0

as N → ∞. Since the convergence is uniform, the limit is continuous.
Simulation results for Tx = {x + θ} with f(x) = x and θ = √3 − 1 are presented in Fig. 4.9. The graph of y = r_n(f), 1 ≤ n ≤ 30, is not decreasing as n increases. See Maple Program 4.4.6.

Fig. 4.9. Correlations of an irrational translation modulo 1 do not converge to zero: y = rn(f) where f(x) = x
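The lower bound along a subsequence can also be seen numerically in Python (an illustrative sketch): q = 153 is a denominator of a convergent of θ = √3 − 1, so {153θ} is close to 0 and r_153(f) is of the order 1/12.

```python
import math

theta = math.sqrt(3) - 1
S = 500
xs = [(i + 0.5) / S for i in range(S)]

def r(n):
    # Grid estimate of r_n(f) = | int_0^1 {x + n*theta} x dx - 1/4 |
    # for f(x) = x on [0, 1).
    ip = sum(((x + n * theta) % 1.0) * x for x in xs) / S
    return abs(ip - 0.25)

vals = [r(n) for n in range(1, 201)]
# vals[152] = r(153) stays large: the correlations do not tend to 0.
```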

4.4 Maple Programs

4.4.1 The Central Limit Theorem: σ > 0 for the logistic transformation

Check the Central Limit Theorem for the logistic transformation.
> with(plots):
> T:=x->4.0*x*(1-x):
> density:=x->1/(Pi*sqrt(x*(1-x))):
> entropy:=log10(2.):
For the definition of entropy consult Chap. 8. For the reason why we have to consider entropy in deciding the number of significant digits see Sect. 9.2.


> Num:=10000:
This is the number of the samples. A smaller value, say 1000, wouldn't do.
> N:=100:
Choose a multiple of 10.
> L:=Num+N-1;
L := 10099
Choose a slightly larger integer than entropy × N.
> Digits:=ceil(N*entropy)+20;
Digits := 51
Take a test function. We may choose any other test function such as f(x) = x.
> f:=x->trunc(2*x):
Find the average of f.
> mu_f:=int(f(x)*density(x),x=0..1);
mu_f := 1/2
Choose a starting point of an orbit of T.
> seed[0]:=evalf(Pi-3):
Find an orbit of T.
> for k from 1 to L do seed[k]:=T(seed[k-1]): od:
> Digits:=10:
Find σn for n = 10, 20, 30, . . . and draw a line graph.
> for n from 10 to N by 10 do
> for i from 1 to Num do
> Sn[i]:=add(f(seed[k]),k=i..i+n-1):
> od:
> var[n]:=add((Sn[i]-n*mu_f)^2/n,i=1..Num)/Num:
> sigma[n]:=sqrt(var[n]):
> od:
> listplot([seq([n*10,sigma[n*10]],n=1..N/10)],labels=["n"," "]);
See Fig. 4.2.

4.4.2 The Central Limit Theorem: the normal distribution for the Gauss transformation

Check the normal distribution in the Central Limit Theorem for the Gauss transformation.
> with(plots):
Here is the standard normal distribution function.
> Phi:=x->1/(sqrt(2*Pi))*exp(-x^2/2):
> T:=x->frac(1.0/x):
> entropy:=evalf(Pi^2/6/log(2)/log(10));


entropy := 1.030640835
> f:=x->trunc(2.0*x):
Choose the number of the samples.
> Num_Sample:=10000:
Choose the size of each random sample.
> n:=100:
Choose a slightly larger integer than entropy × n.
> Digits:=ceil(n*entropy)+10;
Digits := 114
> seed[0]:=evalf(Pi-3):
Find many orbits of T of length n. On each of them calculate Sn.
> for i from 1 to Num_Sample do
> for k from 1 to n do
> seed[k]:=T(seed[k-1]):
> od:
> Sn[i]:=evalf(add(f(seed[k]),k=1..n),10):
> seed[0]:=seed[n]:
> od:
We no longer need many significant digits.
> Digits:=10:
> mu:=add(Sn[i]/n,i=1..Num_Sample)/Num_Sample;
mu := 0.4145300000
Find the variance.
> var:=add((Sn[i]-n*mu)^2/n,i=1..Num_Sample)/Num_Sample;
var := 0.2166699100
Find the standard deviation.
> sigma:=sqrt(var);
sigma := 0.4654781520
> for i from 1 to Num_Sample do
> Zn[i]:=(Sn[i]-n*mu)/(sigma*sqrt(n)):
> od:
Find the gap between the values of the normalized average Zn[i].
> gap:=1/(sigma*sqrt(n));
gap := 0.2148328543
> 10/gap;
46.54781520
> Bin:=46:
We partition the interval [−5, 5] into exactly 46 subintervals. To see why, try 45 or 47. This phenomenon occurs since f(x) is integer-valued.
> width:=10.0/Bin;
This is the length of the subintervals.

4 The Central Limit Theorem >

for k from 1 to Bin do freq[k]:=0: od:

for i from 1 to Num_Sample do slot:=ceil((Zn[i]+5.0)/width): freq[slot]:=freq[slot]+1: od: We draw the probability density function. > g1:=listplot([seq([(i-0.5)*width-5,freq[i]/Num_Sample/ width],i=1..Bin)],labels=[" "," "]): > g2:=plot(Phi(x),x=-5..5): > display(g1,g2); See Fig. 4.10. We may use a cumulative probability density function. Its graph does not depend on the number of bins too much. > > > >

1

0

0.4

n

100

–4

0

4

Fig. 4.10. The Central Limit Theorem for the Gauss transformation: σn converges (left) and the numerical data ﬁt the standard normal distribution (right)

For more experiments try the toral automorphism T (x, y) = (2x + y, x + y)

(mod 1) .

See Fig. 4.11 for a simulation with f (x, y) = 1[1/2,1) (x). 1 0.4

0

n

100

–4

0

4

Fig. 4.11. The Central Limit Theorem for a toral automorphism: σn converges (left) and the numerical data ﬁt the standard normal distribution (right)

4.4 Maple Programs

151

4.4.3 Failure of the Central Limit Theorem: σ = 0 for an irrational translation mod 1 We show that σ = 0 for an irrational translation modulo 1. > with(plots): Choose an irrational number θ. > theta:=sqrt(2.0)-1: > T:=x->frac(x+theta): > Num:=1000: > N:=100: > L:=Num+N-1; L := 1099 Take a test function f (x). > f:=x->trunc(2.0*x): Compute µ(f ). > mu_f:=int(f(x),x=0..1); Choose a starting point of an orbit of T . > seed[0]:=evalf(Pi-3): > for k from 1 to L do seed[k]:=T(seed[k-1]): od: Find σn . > for n from 1 to N do > for i from 1 to Num do > Sn[i]:=add(f(seed[k]),k=i..i+n-1): > od: > var[n]:=add((Sn[i]-n*mu_f)^2,i=1..Num)/Num/n: > sigma[n]:=sqrt(var[n]): > od: > listplot([seq([n,sigma[n]],n=1..N)],labels=["n"," "]); See Fig. 4.4. 4.4.4 Correlation coeﬃcients of T x = 2x mod 1 Derive theoretically rn (f ) for T x = 2x (mod 1) with f (x) = x. > T:=x->frac(2.0*x): > f:=x->x: Compute µ(f ). > mu_f:=int(f(x),x=0..1): Put Ij = [(j − 1) × 2−n , j × 2−n ]. Then

n

f (T x)f (x) dx = 0

2 n

1

j=1

2 n

{2 x} x dx = n

Ij

j=1

(2n x − (j − 1)) x dx . Ij

152

4 The Central Limit Theorem

Find rn (f ). For the symbolic summation in the following use the command sum instead of the command add. > r[n]:=abs(sum(int(x*(2^n*x-j+1),x=(j-1)*2^(-n)..j*2^(-n)), j=1..2^n) - 1/4); 5 (2n + 1) (2n + 1)2 1 1 − + + rn := − 4 6 (2n )2 4 (2n )2 12 (2n )2 Simplify the preceding result. > simplify(%);

See Fig. 4.5.

1 (−n) 2 12

4.4.5 Correlation coeﬃcients of the beta transformation We introduce a general method to estimate correlation coeﬃcients. > with(plots): Deﬁne the β-transformation and the invariant density function. > Digits:=100; > beta:=(sqrt(5.)+1)/2: > T:=x->frac(beta*x): Deﬁne the invariant probability density. > density:=x->piecewise(0 M:=100000: Here M is the orbit length in the Birkhoﬀ Ergodic Theorem. Take f (x) = x. > f:=x->x: Compute µ(f ). > mu_f:=int(f(x)*density(x),x=0..1): > seed[0]:=evalf(Pi-3): > for k from 1 to N+M-1 do > seed[k]:=T(seed[k-1]): > seed[k-1]:=evalf(T(seed[k-1]),10): #This saves memory. > od: > Digits:=10: In the following we use the idea that

4.4 Maple Programs

1

f (T n x)f (x) dµ ≈ 0

153

M −1 1 f (T n+k x0 )f (T k x0 ) . M k=0

Find the nth correlation coeﬃcient. > for n from 1 to N do > r[n]:=abs(add(f(seed[n+k])*f(seed[k]),k=0..M-1)/M-mu_f^2): > od: Plot y = rn . > listplot([seq([n,r[n]],n=1..N)],labels=["n","r"]); Plot y = 1/rn . > listplot([seq([n,1/r[n]],n=1..N)],labels=["n","1/r"]); Plot y = − log rn . > listplot([seq([n,-log(r[n])/n],n=1..N)],labels= ["n","-(log r)/n"]); See Fig. 4.6. 4.4.6 Correlation coeﬃcients of an irrational translation mod 1 Show that the correlation coeﬃcients of an irrational translation modulo 1 do not converge to zero. > with(plots): > theta:=sqrt(3.0)-1: > N:=30: > T:=x->frac(x+theta): > f:=x->x: > mu_f:=int(f(x),x=0..1): Find the inner product of f ◦ T j and f . Here we use a diﬀerent idea from the one in Subsect. 4.4.5. Since T preserves Lebesgue measure, we choose the sample points i/S, 1 ≤ i ≤ S. > S:=1000: > for j from 0 to N do > inner_prod[j]:=add(f(frac(i/S+j*theta))*f(i/S),i=1..S)/S: > od: The following shows that T is not mixing. > f0:=plot(mu_f^2,x=0..N,labels=["n","inner product"]): > f1:=listplot([seq([n,inner_prod[n]],n=0..N)]): > display(f0,f1); See the left graph in Fig. 4.1. The following shows that T is not weak mixing. > for n from 1 to N do > Cesaro1[n]:=add(abs(inner_prod[j]-mu_f^2),j=0..n-1)/n: > od: > listplot([seq([n,Cesaro1[n]],n=1..N)]);

4 The Central Limit Theorem

Fig. 4.12. An irrational translation mod 1 is not weak mixing

See Fig. 4.12. The limit is positive. The following shows that T is ergodic.
> for n from 1 to N do
>    Cesaro2[n]:=add(inner_prod[j],j=0..n-1)/n:
> od:
> h0:=plot(mu_f^2,x=0..N,labels=["n","Cesaro sum"]):
> h1:=listplot([seq([n,Cesaro2[n]],n=1..N)]):
> display(h0,h1);
See the right graph in Fig. 4.1. Find the nth correlation coefficient.
> for n from 1 to N do
>    r[n]:=abs(inner_prod[n]-mu_f^2):
> od:
> listplot([seq([n,r[n]],n=1..N)],labels=["n","r"]);
See Fig. 4.9.
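The two Cesàro behaviours above are easy to reproduce outside Maple. The following Python sketch (illustrative, not the book's code) computes the sampled inner products of f ∘ Tʲ and f exactly as in the program, and checks that their Cesàro averages converge to µ(f)² = 1/4 (ergodicity) while the Cesàro averages of the absolute deviations stay away from 0 (failure of weak mixing).

```python
from math import sqrt

theta = sqrt(3.0) - 1
mu_f = 0.5                          # integral of f(x) = x over [0, 1]
S, N = 1000, 300                    # sample points per integral, number of terms

# inner product of f∘T^j and f, sampled at the points i/S as in the Maple program
def inner_prod(j):
    return sum(((i / S + j * theta) % 1.0) * (i / S) for i in range(1, S + 1)) / S

ip = [inner_prod(j) for j in range(N)]

# not weak mixing: Cesaro averages of |<f∘T^j, f> - mu_f^2| have a positive limit
cesaro1 = sum(abs(v - mu_f ** 2) for v in ip) / N
# ergodic: Cesaro averages of <f∘T^j, f> converge to mu_f^2
cesaro2 = sum(ip) / N
```

The individual deviations |inner_prod(j) − 1/4| keep returning to about 1/12 whenever {jθ} is near 0, which is why T is not mixing.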

5 More on Ergodicity

For a transformation defined on an interval, conditions for the existence of an absolutely continuous invariant probability measure are given in the first section, and a related property in the second. An ergodic transformation visits a subset E of positive measure over and over with probability 1, and the average of the recurrence times between consecutive visits is equal to the reciprocal of the measure of E. Simply put, it takes longer to return to a smaller set. This fact was proved by M. Kac; for his autobiography, see [Ka4]. Next, conditions for the ergodicity of a Markov shift are given in terms of the associated stochastic matrix. The last section presents an example of a noninvertible transformation with an invertible extension. For a collection of interesting examples of ergodic theory consult [ArA], and for applications to statistical mechanics see [Kel]. For an exposition on dynamical systems see [Ru3]. For continuous dynamical systems consult [HK],[Ly], and for complex dynamical systems see [El],[Ly].

5.1 Absolutely Continuous Invariant Measures

For a piecewise differentiable mapping on an interval, the existence of an absolutely continuous invariant measure is known under various conditions. First we consider a mapping with finitely many discontinuities.

Definition 5.1. Take X = [0, 1]. A piecewise differentiable transformation T : X → X is said to be eventually expansive if some iterate of T has its derivative bounded away from 1 in modulus, i.e., there exist n ∈ N and a constant C > 1 such that |(Tⁿ)′| ≥ C wherever the derivative is defined.

The following fact has many variants; it is called a folklore theorem.

Fact 5.2 (Finite partition) Let 0 = a₀ < a₁ < ··· < aₙ = 1 be a partition of X = [0, 1]. Put I_i = (a_{i−1}, a_i) for 1 ≤ i ≤ n. Suppose that T : X → X satisfies the following:


(i) T|_{I_i} has a C²-extension to the closure of I_i,
(ii) T|_{I_i} is strictly monotone,
(iii) T(I_i) is a union of intervals I_j,
(iv) there exists an integer s such that T^s(I_i) = X, and
(v) T is eventually expansive.
Then T has an ergodic invariant probability measure µ such that dµ = ρ dx where ρ is piecewise continuous and satisfies A ≤ ρ(x) ≤ B for some constants 0 < A ≤ B.

For the proof see [AdF]. For the case of transformations with infinitely many discontinuities see [Bow].

Fact 5.3 (Continuous density) Let {Δ_i} be a countable partition of [0, 1] by subintervals Δ_i = (a_i, a_{i−1}) satisfying a₀ = 1, a_i < a_{i−1} and lim_i a_i = 0. Suppose that a map T on [0, 1] satisfies the following:
(i) T|_{Δ_i} has a C²-extension to the closure of Δ_i,
(ii) T|_{Δ_i} is strictly monotone,
(iii) T(Δ_i) = [0, 1],
(iv) sup_i [ sup_{x₁∈Δ_i} |T″(x₁)| / inf_{x₂∈Δ_i} |T′(x₂)|² ] < ∞, and
(v) T is eventually expansive.
Then T has an ergodic invariant probability measure µ such that dµ = ρ dx where ρ is continuous and satisfies A ≤ ρ(x) ≤ B for some constants 0 < A ≤ B.

For the proof see [Half]. For similar results see [Ad],[CFS],[LY],[Re],[Si2]. In [Half] a condition for the differentiability of ρ(x) is given. As a corollary we obtain the ergodicity of the Gauss transformation.

5.2 Boundary Conditions for Invariant Measures

Theorem 5.4. Suppose that φ : (0, 1] → R satisfies the following:
(i) φ(0+) = +∞, φ(1) = 1,
(ii) φ′(x) < 0 for 0 < x < 1,
(iii) φ is twice continuously differentiable and φ″(x) > 0,
(iv) sup_i [ sup_{x∈Δ_i} |φ″(x)| / inf_{y∈Δ_i} |φ′(y)|² ] < ∞, where Δ_i = (φ⁻¹(i + 1), φ⁻¹(i)], and
(v) Σ_{n=1}^∞ 1/|φ′(φ⁻¹(n))| < ∞, where φ′(1) is understood as a one-sided derivative.
Define T on [0, 1] by T0 = 0 and Tx = {φ(x)} for 0 < x ≤ 1. Assume that T is eventually expansive. Then there exists a continuous function ρ such that dµ = ρ dx is an ergodic invariant probability measure for T and

ρ(0) = (1 − 1/φ′(1)) ρ(1) .
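The boundary relation in Theorem 5.4 can be verified on a concrete example. For the Gauss map φ(x) = 1/x one has φ′(1) = −1, so the theorem predicts ρ(0) = 2ρ(1), which the Gauss density ρ(x) = 1/(ln 2 · (1 + x)) indeed satisfies. The following quick check is a Python sketch (the book itself works in Maple):

```python
from math import log

# Gauss density rho(x) = 1/(ln 2 * (1 + x)); phi(x) = 1/x has phi'(1) = -1
rho = lambda x: 1 / (log(2) * (1 + x))
phi_prime_1 = -1.0

# boundary relation of Theorem 5.4: rho(0) = (1 - 1/phi'(1)) rho(1)
lhs = rho(0.0)
rhs = (1 - 1 / phi_prime_1) * rho(1.0)
```

Here lhs = 1/ln 2 and rhs = 2 · 1/(2 ln 2) agree exactly.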


Proof. From Fact 5.3 there exists such a continuous function ρ; assume that 1/C ≤ ρ ≤ C. Since Tx ∈ (0, α) if and only if φ(x) ∈ ∪_{n=1}^∞ (n, n + α), we have

µ((0, α)) = µ(T⁻¹(0, α)) = µ( ∪_{n=1}^∞ (φ⁻¹(α + n), φ⁻¹(n)) ) = Σ_{n=1}^∞ µ( (φ⁻¹(α + n), φ⁻¹(n)) ) ,

and

∫₀^α ρ dx = Σ_{n=1}^∞ ∫_{φ⁻¹(α+n)}^{φ⁻¹(n)} ρ dx .

Put

f_k(α) = Σ_{n=1}^k ∫_{φ⁻¹(α+n)}^{φ⁻¹(n)} ρ dx

for 0 < α < 1. Then

f_k′(x) = Σ_{n=1}^k −ρ(φ⁻¹(x + n)) / φ′(φ⁻¹(x + n))

and

Σ_{n=k₁}^{k₂} | −ρ(φ⁻¹(x + n)) / φ′(φ⁻¹(x + n)) | ≤ C Σ_{n=k₁}^{k₂} 1/|φ′(φ⁻¹(x + n))| ≤ C Σ_{n=k₁}^{k₂} 1/|φ′(φ⁻¹(n))|

since |φ′| is monotonically decreasing. Since the right hand side converges to 0 as k₁ and k₂ increase to infinity, by condition (v), f_k′ converges uniformly on [0, 1]. Therefore we can differentiate the series

Σ_{n=1}^∞ ∫_{φ⁻¹(x+n)}^{φ⁻¹(n)} ρ(t) dt

term by term, and obtain

ρ(x) = Σ_{n=1}^∞ −ρ(φ⁻¹(x + n)) / φ′(φ⁻¹(x + n)) .

Since

ρ(0) = − Σ_{n=1}^∞ ρ(φ⁻¹(n)) / φ′(φ⁻¹(n))

and

ρ(1) = − Σ_{n=1}^∞ ρ(φ⁻¹(n + 1)) / φ′(φ⁻¹(n + 1)) = − Σ_{n=2}^∞ ρ(φ⁻¹(n)) / φ′(φ⁻¹(n)) ,

we see that

ρ(0) − ρ(1) = − ρ(φ⁻¹(1)) / φ′(φ⁻¹(1)) .

Now use φ(1) = 1. □

Corollary 5.5. ρ(0) > ρ(1) since φ′ < 0.

Example 5.6. The differentiability condition on φ is indispensable. Consider the linearized Gauss transformation

T x = −n(n + 1) (x − 1/n) ,   1/(n+1) < x < 1/n ,   n ≥ 1 ,

introduced in Chap. 2. Note that φ(x) is not differentiable at x = φ⁻¹(n). Since T preserves Lebesgue measure, we have ρ = 1, which does not satisfy the above relation. See Fig. 5.1 and Maple Program 5.7.1. The sample size is 10⁶ and the number of bins is 100 for the simulations presented in Figs. 5.1, 5.2.
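That the linearized Gauss transformation preserves Lebesgue measure follows from a telescoping sum: each branch interval (1/(n+1), 1/n) has length 1/(n(n+1)) and is mapped affinely onto (0, 1), so the preimage of an interval of length α has total length α · Σₙ 1/(n(n+1)) = α. The Python sketch below (illustrative, not the book's code) checks the telescoping identity and the endpoint behaviour of one branch:

```python
# T x = -n(n+1)(x - 1/n) on (1/(n+1), 1/n); each branch is affine, slope -n(n+1)
def T(x):
    n = int(1 / x)                      # branch index: 1/(n+1) < x < 1/n
    return -n * (n + 1) * (x - 1 / n)

# the branch lengths 1/(n(n+1)) telescope to 1 - 1/(N+1), hence sum to 1
total = sum(1 / (n * (n + 1)) for n in range(1, 10 ** 6 + 1))

# endpoint behaviour on branch n = 2: T maps (1/3, 1/2) onto (0, 1)
eps = 1e-9
lo, hi = T(1 / 2 - eps), T(1 / 3 + eps)
```

Since each branch carries slope −n(n+1) and length 1/(n(n+1)), preimage lengths add up to the length of the target interval, which is exactly Lebesgue invariance.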


Fig. 5.1. The linearized Gauss transformation (left) and a simulation of its invariant density function (right)

Remark 5.7. We may go on further to obtain relations among higher order derivatives at x = 0, 1. The differentiability of ρ(x) is investigated by Halfant [Half]. Assuming the differentiability of ρ and suitable growth conditions to allow term by term differentiation where needed, we can derive the following:

(1 − 1/[φ′(1)]²) ρ′(1) = ρ′(0) − (φ″(1)/[φ′(1)]³) ρ(1) .

For the proof see [C1].


Example 5.8. Consider T x = 1/x (mod 1). Choose Δ_i = (1/(i+1), 1/i]. Then ρ(x) = (1/ln 2) · 1/(1 + x) satisfies the conclusions in Theorem 5.4 and Remark 5.7.

Example 5.9. Define the perturbed Gauss transformation

T x = (1/2)(1/x − x)  (mod 1)

and the logarithmic transformation

T x = −ln x  (mod 1) ,

whose invariant densities are given in Fig. 5.2. In both cases ρ(0) = 2 ρ(1).


Fig. 5.2. Invariant density functions for T x = {(1/x − x)/2} (left) and T x = {−ln x} (right)

5.3 Kac's Lemma on the First Return Time

Theorem 5.10 (The Poincaré Recurrence Theorem). Let T be a measure preserving transformation on a probability space (X, µ). If E ⊂ X is measurable and µ(E) > 0, then almost every point x ∈ E is recurrent, i.e., there exists a sequence 0 < n₁ < n₂ < ··· such that T^{n_j} x ∈ E for every j.

Proof. Let E_n = ∪_{k=n}^∞ T⁻ᵏE. Then ∩_{n=0}^∞ E_n is the set of all points that visit E infinitely often under positive iterations by T. Put F = E ∩ (∩_{n=0}^∞ E_n). If x ∈ F then there is a sequence 0 < n₁ < n₂ < ··· such that T^{n_i} x ∈ E for every i. Since T^{n_j − n_i}(T^{n_i} x) = T^{n_j} x ∈ E for every j > i where i is fixed, T^{n_i} x returns to E infinitely often. Hence T^{n_i} x ∈ F. This is true for every i, and so x ∈ F returns to F infinitely often. To show that µ(F) = µ(E), we note that T⁻¹E_n = E_{n+1} and µ(E_n) = µ(E_{n+1}). Since E₀ ⊃ E₁ ⊃ E₂ ⊃ ···, we have

160

5 More on Ergodicity

µ( ∩_{n=0}^∞ E_n ) = lim_{n→∞} µ(E_n) = µ(E₀) .

Similarly, since E ∩ E₀ ⊃ E ∩ E₁ ⊃ E ∩ E₂ ⊃ ···, we have

µ(F) = µ( ∩_{n=0}^∞ (E ∩ E_n) ) = lim_{n→∞} µ(E ∩ E_n) = µ(E ∩ E₀) = µ(E)

where the last equality comes from the fact that E ⊂ E₀. □

Definition 5.11. Let T : X → X be a measure preserving transformation on a probability space (X, µ). Assume that E ⊂ X satisfies µ(E) > 0. For x ∈ E we define the first return time to E by

R_E(x) = min{n ≥ 1 : Tⁿx ∈ E} .

The Poincaré Recurrence Theorem guarantees that R_E(x) < ∞ for almost every x ∈ E. Define the first return transformation T_E : E → E almost everywhere on E by T_E(x) = T^{R_E(x)} x. For simulation results for y = T_E(x) see Fig. 5.3.


Fig. 5.3. Plots of (x, T_E(x)) with E = [0, 1/2] for T x = {x + θ}, θ = √2 − 1, T x = {2x} and T x = 4x(1 − x) (from left to right)

Remark 5.12. (i) If α < 1, then {x ∈ E : R_E(x) ≤ α} = ∅. For α ≥ 1, let k = [α]. Then

{x ∈ E : R_E(x) ≤ α} = E ∩ (T⁻¹E ∪ ··· ∪ T⁻ᵏE) .

Hence R_E : E → R is a measurable function.
(ii) Let E_k = {x ∈ E : R_E(x) = k}. For a measurable subset C ⊂ E we have

T_E⁻¹(C) = ∪_{k=1}^∞ (E_k ∩ T⁻ᵏC) .

Thus T_E⁻¹(C) is a measurable subset, and hence T_E is a measurable mapping.


Example 5.13. Let T be an irrational translation modulo 1. Take an interval E ⊂ [0, 1). Then the first return time R_E assumes at most three values. Furthermore, if k₁ < k₂ < k₃ are three values of R_E, then k₁ + k₂ = k₃. See Fig. 5.7 and Maple Program 5.7.4. For the proof and related results see [Fl],[Halt],[Lo],[LoM],[Sl1],[Sl2].

Take E ⊂ X with µ(E) > 0. Define the conditional measure µ_E on E by µ_E(A) = µ(A)/µ(E) for measurable subsets A ⊂ E.

Theorem 5.14. (i) If T is a measure preserving transformation on a probability space (X, µ) and if µ(E) > 0, then T_E preserves µ_E.
(ii) Furthermore, if T is ergodic, then T_E is also ergodic.

Proof. (i) First, we consider the case when T is invertible. Then T_E : E → E is one-to-one. To show that T_E preserves µ_E, for A ⊂ E let

A_n = {x ∈ A : R_E(x) = n} .

Then A_n is measurable and we have a pairwise disjoint union A = ∪_{n=1}^∞ A_n. Hence T_E(A) = ∪_{n=1}^∞ T_E(A_n). Note that T_E(A_n) = Tⁿ(A_n). Hence

µ_E(T_E A) = Σ_{n=1}^∞ µ_E(Tⁿ A_n) = Σ_{n=1}^∞ µ_E(A_n) = µ_E(A) .

When T is not invertible, consult [DK],[Kel],[Pia].
(ii) Let B ⊂ E be an invariant set for T_E such that µ_E(B) > 0. We will show that µ_E(B) = 1. Note that B = T_E⁻¹(B) = T_E⁻²(B) = T_E⁻³(B) = ···, and hence

B = ( ∪_{n=0}^∞ T⁻ⁿB ) ∩ E .

Since T is ergodic, we have

µ( ∪_{n=0}^∞ T⁻ⁿB ) = 1 ,

and hence ∪_{n=0}^∞ T⁻ⁿB = X. It follows that B = E. □


Remark 5.15. Take X = [0, 1]. Let us sketch the graph of T_E : E → E based on the mathematical pointillism. Choose a dense subset {x₁, x₂, x₃, . . .} ⊂ E. Plot the ordered pairs (x_j, T_E(x_j)). If T_E is reasonably nice, e.g., piecewise continuous, then the ordered pairs are dense in G = {(x, T_E(x)) : x ∈ E}. See Fig. 5.3, where 5000 points are plotted. Consult Maple Program 5.7.2.

Remark 5.16. Assume that T is ergodic and 0 < µ(E) < 1. By Theorem 5.14 T_E is ergodic, but (T_E)² need not be so. If E is of the form E = F ∪ T⁻¹F for some F with µ(F) > 0, then (T_E)² : E → E is not ergodic. To see why, first consider the case when F ∩ T⁻¹F = ∅. Then T_E(T⁻¹F) = F and T_E(F) = T⁻¹F. Note that F and T⁻¹F are invariant under (T_E)². Next, if F ∩ T⁻¹F = A with µ(A) > 0, then put B₁ = T⁻¹F \ A, B₂ = F \ A. Then T_E(B₁) = B₂, T_E(B₂) = B₁. Hence B₁, B₂ are invariant under (T_E)². Therefore, in either case, (T_E)² is not ergodic. See Figs. 5.4, 5.5, 5.6, where (T_E)² has two ergodic components in each example. For a related idea consult Sect. 7.3.


Fig. 5.4. Plots of (x, T_E(x)) (left) and (x, T_E²(x)) (right) with E = [0, 2θ] and F = [θ, 2θ] for T x = {x + θ}, θ = √2 − 1


Fig. 5.5. Plots of (x, T_E(x)) (left) and (x, T_E²(x)) (right) with E = [1/4, 3/4] and F = [0, 1/2] for T x = {2x}


Fig. 5.6. Plots of (x, T_E(x)) (left) and (x, T_E²(x)) (right) with E = [1/4, 1] and F = [3/4, 1] for T x = 4x(1 − x)

Theorem 5.17 (Kac's lemma). If T is an ergodic measure preserving transformation on a probability space (X, µ) and if µ(E) > 0, then

∫_E R_E(x) dµ = 1 ,  in other words,  ∫_E R_E dµ_E = 1/µ(E) .

We give two proofs. The first one is only for invertible transformations.

Proof. (T is invertible) For n ≥ 1, let

E_n = {x ∈ E : R_E(x) = n} .

We have pairwise disjoint unions E = ∪_{n=1}^∞ E_n and

X = ∪_{n=1}^∞ ∪_{k=0}^{n−1} Tᵏ E_n .

Hence

∫_E R_E dµ = Σ_{n=1}^∞ ∫_{E_n} R_E dµ = Σ_{n=1}^∞ n µ(E_n) = µ(X) = 1 . □

Proof. (T is not necessarily invertible) Take x ∈ E and consider the orbit of x under the map T_E : E → E:

x, T_E(x), T_E²(x), . . . , T_E^L(x) .

Put N = Σ_{ℓ=0}^{L−1} R_E(T_E^ℓ x). Then N is the time duration for the orbit of x under T to revisit E exactly L times, i.e.,

Σ_{n=1}^N 1_E(Tⁿx) = L .

Applying the Birkhoff Ergodic Theorem for T_E and T, we have

∫_E R_E dµ_E = lim_{L→∞} (1/L) Σ_{ℓ=0}^{L−1} R_E(T_E^ℓ x) = lim_{N→∞} N / Σ_{n=1}^N 1_E(Tⁿx) = 1/µ(E) . □

For the original proof by Kac, see [Ka2]. For a version of Kac's lemma on infinite measure spaces consult [Aa]. See Maple Programs 5.7.3, 5.7.4.

Remark 5.18. Here is how to sketch the graph of the first return time R_E based on the mathematical pointillism when X = [0, 1] and E ⊂ X is an interval. First, choose sufficiently many points x in E that are more or less uniformly distributed. Several thousand points will do. Next, calculate R_E(x), then plot the points (x, R_E(x)). If we take sufficiently many points, then the plot looks like the graph of y = R_E(x), x ∈ E. See Figs. 5.7, 5.8, 5.9.
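Kac's lemma is easy to test numerically. The sketch below (Python, mirroring the spirit of Maple Program 5.7.4 but not taken from the book) collects return times of T x = {x + θ}, θ = √2 − 1, to E = [0, 0.05) along one orbit, checks that their mean is close to 1/µ(E) = 20, and that at most three distinct values occur, the largest being the sum of the other two (Example 5.13).

```python
from math import sqrt, pi

theta = sqrt(2.0) - 1
T = lambda x: (x + theta) % 1.0
a, b = 0.0, 0.05                    # E = [a, b), mu(E) = 0.05

x = pi - 3
returns, last_visit = [], None
for n in range(1, 20000):
    x = T(x)
    if a <= x < b:
        if last_visit is not None:
            returns.append(n - last_visit)   # first return time of previous visit
        last_visit = n

mean_return = sum(returns) / len(returns)    # Kac: about 1/mu(E) = 20
distinct = sorted(set(returns))              # at most three values (Example 5.13)
```

With this θ and E the three values are 12, 17 and 29, matching the output of Maple Program 5.7.4.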

Fig. 5.7. Plots of (x, R_E(x)) for T x = {x + θ}, θ = √2 − 1, on E = [0, 1/2] (left) and E = [1/2, 1] (right)

Fig. 5.8. Plots of (x, RE (x)) for T x = {2x} on E = [0, 1/2] (left) and E = [1/2, 1] (right)


Fig. 5.9. Plots of (x, RE (x)) for T x = 4x(1 − x) on E = [0, 1/2] (left) and E = [1/2, 1] (right)

5.4 Ergodicity of Markov Shift Transformations

A product of two stochastic matrices A and B is also a stochastic matrix, since the sum of the ith row of AB is equal to

Σ_j (AB)_{ij} = Σ_j Σ_k A_{ik} B_{kj} = Σ_k A_{ik} Σ_j B_{kj} = Σ_k A_{ik} = 1 .

Hence if P is a stochastic matrix then Pⁿ is a stochastic matrix for any n ≥ 1, and so is (1/N) Σ_{n=0}^{N−1} Pⁿ.

Lemma 5.19. Let P be a stochastic matrix. If a vector π = (π_i)_i > 0 satisfies πP = π and Σ_i π_i = 1, then we have the following:
(i) Q = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} Pⁿ exists.
(ii) Q is a stochastic matrix.
(iii) QP = PQ = Q.
(iv) If vP = v, then vQ = v.
(v) Q² = Q.

Proof. Let P be a k × k stochastic matrix. For A = {0, 1, . . . , k − 1} let T be the shift transformation on the infinite product space X = ∏₁^∞ A and µ denote the Markov measure on X defined by P and π. Note that T is not necessarily ergodic.
(i) Let A = [i]₁, B = [j]₁ be two cylinder sets of length 1 where i, j ∈ A. Then

T⁻ⁿA ∩ B = ∪_{a₁,...,a_{n−1}} [j, a₁, . . . , a_{n−1}, i]_{1,n+1}

where the union is taken over arbitrary choices for a₁, . . . , a_{n−1} ∈ A. Then

µ(T⁻ⁿA ∩ B) = Σ_{a₁,...,a_{n−1}} µ([j, a₁, . . . , a_{n−1}, i]) = Σ_{a₁,...,a_{n−1}} π_j P_{j,a₁} ··· P_{a_{n−1},i} = π_j (Pⁿ)_{ji} .


The Birkhoff Ergodic Theorem implies that for almost every x ∈ X

lim_{N→∞} (1/N) Σ_{n=1}^N 1_A(Tⁿx) = f_i*(x)

where f_i* is T-invariant and

∫_X f_i* dµ = ∫_X 1_A dµ = µ(A) = π_i .

Since

1_A(Tⁿx) 1_B(x) = 1_{T⁻ⁿA ∩ B}(x) ,

by the Lebesgue Dominated Convergence Theorem we have

∫_X f_i*(x) 1_B(x) dµ = lim_{N→∞} ∫_X (1/N) Σ_{n=1}^N 1_A(Tⁿx) 1_B(x) dµ
  = lim_{N→∞} (1/N) Σ_{n=1}^N µ(T⁻ⁿA ∩ B)
  = lim_{N→∞} (1/N) Σ_{n=1}^N π_j (Pⁿ)_{ji}

and

lim_{N→∞} (1/N) Σ_{n=1}^N (Pⁿ)_{ji} = (1/π_j) ∫_X f_i*(x) 1_B(x) dµ .

Hence the limit of (1/N) Σ_{n=1}^N Pⁿ exists.
(ii) Since the sum of any row of (1/N) Σ_{n=1}^N Pⁿ is equal to 1, its limit Q has the same property.
(iii) Since

(1/N) Σ_{n=2}^{N+1} Pⁿ = P ( (1/N) Σ_{n=1}^N Pⁿ ) = ( (1/N) Σ_{n=1}^N Pⁿ ) P ,

by taking limits we have PQ = QP = Q.
(iv) Since v ( (1/N) Σ_{n=1}^N Pⁿ ) = v for every N, we have vQ = v.
(v) Since

Q ( (1/N) Σ_{n=1}^N Pⁿ ) = (1/N) Σ_{n=1}^N Q Pⁿ = Q

for every N by (iii), we have Q² = Q. □

Now we find conditions for ergodicity of Markov shift transformations.


Theorem 5.20. Let P be a k × k stochastic matrix. Suppose that a vector π = (π_i)_i > 0 satisfies πP = π and Σ_i π_i = 1. For A = {0, 1, . . . , k − 1} let T be a Markov shift transformation on X = ∏₁^∞ A associated with P and π. (By Lemma 5.19, Q = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} Pⁿ exists.) Then the following are equivalent:
(i) T is ergodic.
(ii) All rows of Q are identical.
(iii) Every entry of Q is positive.
(iv) P is irreducible.
(v) The subspace {v : vP = v} has dimension 1.

Proof. (i) ⇒ (ii). Let A = [i]₁, B = [j]₁. The ergodicity of T implies

µ(A)µ(B) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} π_j (Pⁿ)_{ji} = π_j Q_{ji} ,

and hence π_i π_j = π_j Q_{ji}. Since π_j > 0, we have Q_{ji} = π_i, which implies that every row of Q is equal to π.
(ii) ⇒ (iii). Recall that πQ = π. Let Q_j = Q_{ij}, which does not depend on i. From πQ = π, we have Σ_i π_i Q_j = π_j. Since Σ_i π_i = 1, we have Q_j = π_j > 0.
(iii) ⇒ (iv). Note that for every (i, j),

0 < Q_{ij} = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} (Pⁿ)_{ij} ,

and hence (Pⁿ)_{ij} > 0 for some n ≥ 1.
(iv) ⇒ (v). Suppose that v ≠ 0 is not a constant multiple of π and satisfies vP = v. For some t₀ ≠ 0, there exist i, j such that (π + t₀v)_i = 0, (π + t₀v)_j > 0 and (π + t₀v)_ℓ ≥ 0 for every ℓ. Put w = π + t₀v. Take n such that (Pⁿ)_{ji} > 0. Then wPⁿ = w, and hence

0 = w_i = (wPⁿ)_i = Σ_ℓ w_ℓ (Pⁿ)_{ℓi} ≥ w_j (Pⁿ)_{ji} > 0 ,

which is a contradiction.
(v) ⇒ (i). Suppose that T is not ergodic. Then there exist two cylinder sets A, B with µ(A) > 0, µ(B) > 0 such that

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} µ(T⁻ⁿA ∩ B) ≠ µ(A)µ(B) .

(See the remark following Theorem 3.12.) Let

A = [i₁, . . . , i_r]_a^{a+r−1} ,  B = [j₁, . . . , j_s]_b^{b+s−1} .

If a + n ≥ b + s, we have


µ(T⁻ⁿA ∩ B) = π_{j₁} P_{j₁j₂} ··· P_{j_{s−1}j_s} (P^{a−b+n−s+1})_{j_s i₁} P_{i₁i₂} ··· P_{i_{r−1}i_r} .

Hence

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} µ(T⁻ⁿA ∩ B) = π_{j₁} P_{j₁j₂} ··· P_{j_{s−1}j_s} Q_{j_s i₁} P_{i₁i₂} ··· P_{i_{r−1}i_r} = µ(B) (Q_{j_s i₁}/π_{i₁}) µ(A) .

Hence Q_{j_s i₁} ≠ π_{i₁}. Since QP = Q, every row of Q is an eigenvector of P with eigenvalue 1. Hence π and the j_s th row of Q are two independent eigenvectors with the same eigenvalue 1, which is a contradiction. □
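These criteria are easy to probe numerically. The following Python sketch (using NumPy; an illustration, not part of the book) averages powers of a 2 × 2 stochastic matrix and checks that for an irreducible P all rows of Q agree with the stationary vector π, while for a reducible P they do not.

```python
import numpy as np

def cesaro_limit(P, N=4000):
    """Q = lim (1/N) sum_{n=0}^{N-1} P^n, approximated by a finite average."""
    acc, power = np.zeros_like(P), np.eye(P.shape[0])
    for _ in range(N):
        acc += power
        power = power @ P
    return acc / N

P_irr = np.array([[0.9, 0.1],
                  [0.2, 0.8]])          # irreducible: stationary pi = (2/3, 1/3)
P_red = np.eye(2)                       # reducible: every vector is fixed

Q_irr = cesaro_limit(P_irr)
Q_red = cesaro_limit(P_red)
```

For P_red the Cesàro limit is the identity, whose rows differ, so the associated Markov shift is not ergodic, in line with criterion (ii).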

Note that ergodicity of a Markov shift implies that every row of Q is equal to π. Now we find mixing conditions for Markov shift transformations.

Theorem 5.21. Let P be a k × k stochastic matrix. Suppose that a vector π = (π_i)_i > 0 satisfies πP = π and Σ_i π_i = 1. For A = {0, 1, . . . , k − 1} let T be a Markov shift transformation on X = ∏₁^∞ A associated with P and π. Then the following are equivalent:
(i) T is mixing.
(ii) (Pⁿ)_{ij} converges to π_j as n → ∞ for every i, j.
(iii) P is irreducible and aperiodic.

Proof. Let µ denote the Markov measure on X defined by P and π. Put Q = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} Pⁿ.
(i) ⇒ (ii). Let A = [j]₁, B = [i]₁ be two cylinder sets of length 1 where i, j ∈ A. Since µ(T⁻ⁿA ∩ B) converges to µ(A)µ(B) as n → ∞, π_i (Pⁿ)_{ij} converges to π_i π_j.
(ii) ⇒ (iii). Since Pⁿ is close to Q for sufficiently large n, and since every entry of Q is positive, we observe that (Pⁿ)_{ij} > 0 for sufficiently large n.
(iii) ⇒ (i). It suffices to show that µ(T⁻ⁿA ∩ B) → µ(A)µ(B) as n → ∞ for cylinder sets A and B. Let

A = [i₁, . . . , i_r]_a^{a+r−1} ,  B = [j₁, . . . , j_s]_b^{b+s−1} .

Let J denote the Jordan canonical form of P. Since an eigenvalue λ ≠ 1 of P satisfies |λ| < 1, the Jordan block M_λ corresponding to λ ≠ 1 satisfies M_λⁿ → 0 as n → ∞. If the first Jordan block is M₁ = (1), corresponding to the simple eigenvalue 1, then Jⁿ converges to the matrix diag(1, 0, . . . , 0). Hence Pⁿ converges to a limit. Since (1/N) Σ_{n=1}^N Pⁿ converges to Q, Pⁿ converges to Q. Hence

lim_{n→∞} µ(T⁻ⁿA ∩ B) = lim_{n→∞} π_{j₁} P_{j₁j₂} ··· P_{j_{s−1}j_s} (Pⁿ)_{j_s i₁} P_{i₁i₂} ··· P_{i_{r−1}i_r} = µ(B) Q_{j_s i₁} (1/π_{i₁}) µ(A) = µ(B) π_{i₁} (1/π_{i₁}) µ(A) = µ(B)µ(A) . □
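Both the convergence Pⁿ → Q of Theorem 5.21 (ii) and the exponential speed asserted in Theorem 5.22 can be seen on a small example. For the 2 × 2 matrix below the eigenvalues are 1 and 0.7, and since P is diagonalizable one has Pⁿ − Q = 0.7ⁿ⁻¹(P − Q) exactly, so successive sup-norm distances shrink by the factor |λ₂| = 0.7. This Python check is illustrative, not the book's code.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])             # eigenvalues 1 and 0.7
Q = np.array([[2/3, 1/3],
              [2/3, 1/3]])             # limit matrix: every row equals pi

# sup-norm distances ||P^n - Q||_inf for n = 1, ..., 20
norms = []
power = np.eye(2)
for n in range(1, 21):
    power = power @ P
    norms.append(np.abs(power - Q).max())

# geometric decay: consecutive ratios equal the second eigenvalue 0.7
ratios = [norms[n + 1] / norms[n] for n in range(len(norms) - 1)]
```

The constant ratio 0.7 is exactly the |λ| < 1 appearing in the bound ||(M_λ)ⁿ||∞ ≤ η k n^{k−1} |λ|^{n−k+1} of the proof of Theorem 5.22.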


Theorem 5.22. In Theorem 5.21 (ii), the speed of convergence is exponential, i.e., ||Pⁿ − Q|| → 0 exponentially as n → ∞ for any norm || · || on the vector space V of k × k complex matrices.

Proof. Fix a norm || · || on the space V. Let S be an invertible matrix such that S⁻¹PS = J where J is the Jordan canonical form of P as in the proof of the preceding theorem. Then

||A||′ = ||S⁻¹AS|| ,  A ∈ V ,

defines a new norm || · ||′ on V. Since all norms on V are equivalent, there exist constants 0 < α ≤ β such that α||A|| ≤ ||S⁻¹AS|| ≤ β||A|| for every A ∈ V. Hence

α||Pⁿ − Q|| ≤ ||S⁻¹(Pⁿ − Q)S|| ≤ β||Pⁿ − Q|| .

Since S⁻¹(Pⁿ − Q)S = Jⁿ − diag(1, 0, . . . , 0), the speed of convergence of Pⁿ to Q is equivalent to the speed of convergence of Jⁿ to diag(1, 0, . . . , 0) for any norm on V. Now we use the norm

||A||∞ = max_{1≤i,j≤k} |A_{ij}| .

Its restriction to any subspace of V will also be denoted by || · ||∞. For a Jordan block M_λ = λI + N corresponding to λ, |λ| < 1, we have Nᵏ = 0. Hence for n ≥ k

(M_λ)ⁿ = Σ_{j=0}^{k−1} C(n, j) λ^{n−j} Nʲ

where C(n, j) is the binomial coefficient. Let

η = max{||I||∞, ||N||∞, . . . , ||N^{k−1}||∞} .

Then for n ≥ k

||(M_λ)ⁿ||∞ ≤ η Σ_{j=0}^{k−1} C(n, j) |λ|^{n−j} ≤ η k n^{k−1} |λ|^{n−k+1} .

Thus (M_λ)ⁿ → 0 exponentially. Now use

||Jⁿ − diag(1, 0, . . . , 0)||∞ ≤ max_{|λ|<1} ||(M_λ)ⁿ||∞ . □

> T:=x->-trunc(1/x)*(trunc(1/x)+1)*(x-1/(1+trunc(1/x)))+1;

T := x → −trunc(1/x) (trunc(1/x) + 1) (x − 1/(trunc(1/x) + 1)) + 1

> plot(T(x),x=0..1);
See Fig. 5.1. Here is another example, a closed-form formula for a linearized logarithmic transformation.
> S:=x->-2^(trunc(-log[2.0](x))+1)*x+2;

S := x → −2^(trunc(−log₂(x))+1) x + 2

> plot({frac(-log[2](x)),S(x)},x=0..1);
See Fig. 5.15.


Fig. 5.15. y = {log2 x} and its linearized version

5.7.2 How to sketch the graph of the first return time transformation

We introduce a method to sketch the graph of the first return time transformation without its explicit formula.
> with(plots):
Take T x = 2x (mod 1).
> T:=x->frac(2*x):
> Digits:=100:
Choose an interval [a, b] ⊂ [0, 1] where the first return time is defined.
> a:=1/4:
> b:=3/4:
> SampleSize:=5000:
Take points in the interval [a, b].
> for i from 1 to SampleSize do
>    seed[i]:=a+frac(sqrt(3.0)*i)*(b-a):
> od:
For x ∈ [a, b] find T_E(x).
> for i from 1 to SampleSize do
>    X:=T(seed[i]):
>    for j from 1 while (a > X or b < X) do
>       X:=T(X):
>    od:
>    Y[i]:=X:
> od:
> Digits:=10:
Plot sufficiently many points on the graph of T_E. See the left graph in Fig. 5.5.
> g1:=pointplot([seq([seed[i],Y[i]],i=1..SampleSize)],symbolsize=1):
> g2:=plot((a+b)/2,x=a..b,y=a..b,labels=["x"," "]):
> display(g1,g2);
Now we sketch the graph y = (T_E)²(x).
> Digits:=200:


We need twice as many significant digits for (T_E)² since we need to compute the second return time.
> for i from 1 to SampleSize do
>    X:=T(seed[i]):
>    for j from 1 while (a > X or b < X) do X:=T(X): od:
>    X2:=T(X):
>    for j from 1 while (a > X2 or b < X2) do X2:=T(X2): od:
>    Z[i]:=X2:
> od:
> Digits:=10:
Plot sufficiently many points on the graph of (T_E)².
> g3:=pointplot([seq([seed[i],Z[i]],i=1..SampleSize)],symbolsize=1):
> display(g3,g2);
See the right graph in Fig. 5.5.

5.7.3 Kac's lemma for the logistic transformation

We simulate Kac's lemma for the logistic transformation.
> with(plots):
> Digits:=100:
> x[0]:=evalf(Pi-3):
Choose an interval [a, b] ⊂ [0, 1] where the first return time is defined.
> a:=0.5:
> b:=1:
Choose a transformation.
> T:=x->4.0*x*(1-x):
> rho:=x->1/(Pi*sqrt(x*(1-x))):
> SampleSize:=10000:
Choose points in [0, 1].
> for i from 1 to SampleSize do x[i]:=T(x[i-1]): od:
Choose points in [a, b].
> j:=1:
> for i from 1 to SampleSize do
>    if (x[i] > a and x[i] < b) then seed[j]:=x[i]: j:=j+1: fi:
> od:
Find the number of points in the interval [a, b].
> N:=j-1;

N := 5002

The Birkhoff Ergodic Theorem implies that the following two numbers should be approximately equal.
> evalf(int(rho(x),x=a..b),10);

0.5000000000


> evalf(N/SampleSize,10);

0.5002000000

Taking sample points from [a, b] is equivalent to using the conditional measure on [a, b].
> for j from 1 to N do
>    orbit:=T(seed[j]):
>    for k from 1 while (orbit < a or orbit > b) do
>       orbit:=T(orbit):
>    od:
>    ReturnTime[j]:=k:
> od:
We no longer need many digits.
> Digits:=10:
Kac's lemma states that the average of the first return time is equal to the reciprocal of the measure of the given set.
> evalf(add(ReturnTime[j],j=1..N)/N);

1.999000400

> 1/int(rho(x),x=a..b);

2.000000000

Plot the points (x, R_E(x)). See Fig. 5.9.
> pointplot([seq([seed[j],ReturnTime[j]],j=1..N)]);

5.7.4 Kac's lemma for an irrational translation mod 1

An irrational translation modulo 1 has a special property: in general there are three values of the first return time, and the sum of the two smaller values is equal to the maximum.
> with(plots):
An irrational translation modulo 1 has entropy zero, and so there is no need to take many significant digits. For the definition of entropy consult Chap. 8, and for the choice of the number of significant digits see Remark 9.18.
> Digits:=50:
Choose the interval [a, b] where the first return time is defined.
> a:=0:
> b:=0.05:
Choose θ.
> theta:=sqrt(2.0)-1:
> T:=x->frac(x+theta):
> SampleSize:=1000:
> x[0]:=evalf(Pi-3):
> for i from 1 to SampleSize do x[i]:=T(x[i-1]): od:
> j:=1:


> for i from 1 to SampleSize do
>    if (x[i] > a and x[i] < b) then seed[j]:=x[i]: j:=j+1: fi:
> od:
Find the number of points inside the interval [a, b]. We will compute the first return time at the points thus obtained.
> N:=j-1;

N := 50

Check the Birkhoff Ergodic Theorem.
> evalf(b-a,10);

0.05

> evalf(N/SampleSize,10);

0.05000000000

> for j from 1 to N do
>    orbit:=T(seed[j]):
>    for k from 1 while (orbit < a or orbit > b) do
>       orbit:=T(orbit):
>    od:
>    ReturnTime[j]:=k:
> od:
> Digits:=10:
In the following there are only three values of the first return time, and the maximum is the sum of the two smaller values.
> seq(ReturnTime[j],j=1..N);

12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 29, 12

> Max:=max(seq(ReturnTime[j],j=1..N));

Max := 29

> Min:=min(seq(ReturnTime[j],j=1..N));

Min := 12

> k:=1:
> for j from 1 to N do
>    if (ReturnTime[j] > Min and ReturnTime[j] < Max) then
>       Middle[k]:=ReturnTime[j]: k:=k+1: fi:
> od:
Find the number of times when Min < ReturnTime < Max.
> K:=k-1;

K := 8

> seq(Middle[k],k=1..K);

17, 17, 17, 17, 17, 17, 17, 17

The following two numbers are approximately equal by Kac's lemma.
> evalf(add(ReturnTime[j],j=1..N)/N);


19.94000000

> evalf(1/(b-a),10);

20.00000000

> pointplot([seq([seed[j],ReturnTime[j]],j=1..N)]);
See Fig. 5.16.


Fig. 5.16. The ﬁrst return time of an irrational translation mod 1

5.7.5 Bernoulli measures on the unit interval

Regard the (1/4, 3/4)-Bernoulli measure as being defined on the unit interval.
> with(plots):
Construct a typical point x₀, or use Maple Program 2.6.7.
> z:=4:
> ran:=rand(0..z-1):
> N:=10000:
> evalf(2^(-N),10);

0.5012372749 × 10⁻³⁰¹⁰

> Digits:=3020:
Generate the (1/4, 3/4)-Bernoulli sequence {d_j}_j.
> for j from 1 to N do
>    d[j]:=ceil(ran()/(z-1)):
> od:
Convert the binary number (0.d₁d₂d₃ . . .)₂ into a decimal number.
> M:=N/100;

M := 100

The following method is faster than the direct summation of all terms.
> for k from 1 to 100 do
>    partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M):
> od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..100)):


> T:=x->frac(2*x):
> S:=10000:
Divide the unit interval into 64 subintervals, find the relative frequency p_i of visiting the ith subinterval, and plot the points ((i − 0.5)/Bin, p_i · Bin).
> Bin:=64:
> for k from 1 to Bin do freq[k]:=0: od:
> seed:=x0:
> for j from 1 to S do
>    seed:=T(seed):
>    slot:=ceil(Bin*seed):
>    freq[slot]:=freq[slot]+1:
> od:
To draw lines we need an additional package.
> with(plottools):
> Digits:=10:
> for i from 1 to Bin do
>    g[i]:=line([(i-0.5)/Bin,freq[i]*Bin/S],[(i-0.5)/Bin,0]):
> od:
Draw a histogram as an approximation of the (1/4, 3/4)-Bernoulli measure.
> display(seq(g[i],i=1..Bin),labels=["x"," "]);
> Digits:=ceil(1000*log10(2))+10;

Digits := 312

Plot an orbit of length 1000.
> orbit[0]:=x0:
> for j from 1 to 1000 do
>    orbit[j]:=T(orbit[j-1]):
>    orbit[j-1]:=evalf(orbit[j-1],10):
> od:
> Digits:=10:
> pointplot([seq([i,orbit[i]],i=1..1000)],labels=["n","x"]):
See Fig. 5.10.

5.7.6 An invertible extension of the beta transformation

Find an invertible extension of the β-transformation T_β(x) = {βx} using the idea in Sect. 5.6.
> with(plots):
> beta:=(sqrt(5.0)+1)/2;

β := 1.618033988

> a:=1/(3-beta);

a := 0.7236067974

> a*beta;

1.170820392

The following calculations show that a = β²/(1 + β²) and 1/β = β − 1.
> beta^2/(1+beta^2);

0.7236067975

> 1/beta;

0.6180339890

Define an invertible transformation T on the domain D given in Fig. 5.17. It is a union of three rectangles. Let us call the left top rectangle R₁, the left bottom one R₂ and the right one R₃. Note that the roof of D is given by the invariant density function of T_β.


Fig. 5.17. The domain D = R1 ∪ R2 ∪ R3

> g0:=textplot([[1/beta,0,`1/b`]],font=[SYMBOL,9]):
> g00:=textplot([[0,a,"a"]],font=[HELVETICA,9]):
> g1:=listplot([[0,a],[1/beta,a],[1/beta,0]],color=green):
> g2:=listplot([[0,a*beta],[1/beta,a*beta],[1/beta,a],[1,a],[1,0]]):
> display(g0,g00,g1,g2);
See Fig. 5.17. We want to find an invertible mapping T : D → D satisfying

T(R₁ ∪ R₂) = R₂ ∪ R₃  and  T(R₃) = R₁ .

Define T.
> T:=(x,y)->(frac(beta*x),(1/beta)*y+trunc(beta*x)*a);

T := (x, y) → (frac(βx), y/β + trunc(βx) a)

Note that T horizontally stretches D by the factor β and squeezes vertically by the factor 1/β. So it preserves the two-dimensional Lebesgue measure λ. Define φ : D → [0, 1] by φ(x, y) = x. Then T_β ∘ φ = φ ∘ T. Now we study the images of the region R₁ ∪ R₂ under T and T². First, represent R₁ ∪ R₂ using the mathematical pointillism.
> seed[0]:=(evalf(Pi-3),0.1):


> Arnold:=(x,y)->(frac(2*x+y),frac(x+y)):
> S:=3000:
> for i from 1 to S do seed[i]:=Arnold(seed[i-1]): od:
The points seed[i] are more or less uniformly scattered in the unit square. They are modified to generate points in R₁ ∪ R₂.
> for i from 1 to S do
>    image0[i]:=(seed[i][1]/beta,seed[i][2]*(a*beta)):
> od:
> pointplot({seq([image0[i]],i=1..S)},symbolsize=1);
See the first diagram in Fig. 5.18 for R₁ ∪ R₂.
> for i from 1 to S do image1[i]:=T(image0[i]): od:
> pointplot([seq([image1[i]],i=1..S)],symbolsize=1);
See the second diagram in Fig. 5.18 for T(R₁ ∪ R₂) = R₂ ∪ R₃.
> for i from 1 to S do image2[i]:=T(image1[i]): od:
> pointplot({seq([image2[i]],i=1..S)},symbolsize=1);
See the third diagram in Fig. 5.18 for T²(R₁ ∪ R₂).


Fig. 5.18. Dotted regions represent R1 ∪ R2 , T (R1 ∪ R2 ) and T 2 (R1 ∪ R2 ) (from left to right)
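The fact that T maps D into itself can be checked pointwise. The sketch below (Python, illustrative rather than the book's Maple code) samples points of D, applies T(x, y) = ({βx}, y/β + ⌊βx⌋·a), and verifies that images stay in D and that the first coordinate is the β-transformation of x, i.e., the semiconjugacy φ ∘ T = T_β ∘ φ.

```python
from math import sqrt, floor

beta = (sqrt(5.0) + 1) / 2
a = 1 / (3 - beta)                      # height of R3; R1 ∪ R2 has height a*beta

def in_D(x, y):
    # D = [0, 1/beta) x [0, a*beta)  union  [1/beta, 1) x [0, a)
    if 0 <= x < 1 / beta:
        return 0 <= y < a * beta
    return (x < 1) and (0 <= y < a)

def T(x, y):
    return (beta * x) % 1.0, y / beta + floor(beta * x) * a

# sample a grid of points of D; check invariance and the semiconjugacy
ok = True
for i in range(1, 200):
    for j in range(1, 200):
        x = i / 200.0
        y = (j / 200.0) * (a * beta if x < 1 / beta else a)
        if in_D(x, y):
            u, v = T(x, y)
            ok = ok and in_D(u, v) and abs(u - (beta * x) % 1.0) < 1e-12

area = a * beta * (1 / beta) + a * (1 - 1 / beta)   # total area of D, equals 1
```

The check uses the identities 1 + 1/β = β and a(3 − β) = 1, which make the two strips fit together under the stretch-and-squeeze action of T.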

Now we show the ergodicity of T. If E ⊂ D is an invariant subset of positive measure, then for ε > 0 there exist rectangles F₁, . . . , F_k ⊂ D such that ∪_{i=1}^k F_i approximates E within an error bound of ε, i.e., λ(E △ ∪_{i=1}^k F_i) < ε. Since TⁿE = E for every n ∈ Z, we have λ(E △ ∪_{i=1}^k Tⁿ F_i) < ε. If n > 0 is sufficiently large, then Tⁿ F_i consists mostly of horizontal strips, and T⁻ⁿ F_i consists mostly of vertical strips. See Fig. 5.19. Let || · || denote the L¹-norm. Then

||1_U − 1_V|| = ∫ |1_U − 1_V| dλ = λ(U △ V)

for U, V ⊂ D. Since both A = ∪_{i=1}^k Tⁿ F_i and B = ∪_{i=1}^k T⁻ⁿ F_i approximate E within the error bound of ε, we observe that


Fig. 5.19. Dotted regions represent a rectangle F and its images T k F , 1 ≤ k ≤ 8 (from top left to bottom right)

λ((A ∩ B) △ A) = ||1A∩B − 1A||
  ≤ ||1A 1B − (1E)²|| + ||1E − 1A||
  = ||(1A − 1E)(1B − 1E) + (1E∩A − 1E) + (1E∩B − 1E)|| + ||1E − 1A||
  ≤ ||(1A − 1E)(1B − 1E)|| + ||1E∩A − 1E|| + ||1E∩B − 1E|| + ||1E − 1A||
  ≤ ||1A − 1E|| + ||1E∩A − 1E|| + ||1E∩B − 1E|| + ||1E − 1A||
  < 4ε .

We used the fact |1B(x) − 1E(x)| ≤ 1 in the third inequality. Since A and B consist mostly of horizontal and vertical strips, respectively, A ∩ B approximates A and B only if there is not much area outside the strips. Thus λ(A) and λ(B) are close to 1, and we conclude that λ(E) = 1.

6 Homeomorphisms of the Circle

A homeomorphism of the unit circle is regarded as a transformation on the circle. If its rotation number is irrational, then the homeomorphism is equivalent to an irrational rotation. We numerically find the conjugacy mapping that realizes the equivalence between the homeomorphism and the corresponding rotation.

6.1 Rotation Number

Consider a homeomorphism T of the unit circle S¹. For the sake of notational simplicity, the unit circle is identified with the interval [0, 1). Assume that T preserves orientation, i.e., T winds around the circle counterclockwise. Then there exists a continuous strictly increasing function f : R → R such that
(i) f(x + 1) = f(x) + 1, x ∈ R,
(ii) T x = {f(x)}, 0 ≤ x < 1.
We say that f represents T. Note that f is not unique: we may use g defined by g(x) = f(x) + k for any integer k. See Figs. 6.1, 6.2. The most elementary example is given by a translation T x = x + θ (mod 1) (or, a rotation by e^{2πiθ} on the unit circle), which is represented by f(x) = x + θ.


Fig. 6.1. Graphs of f(x) = x + a sin(2πx) + b, b = √2 − 1, for a = 0.1 (left) and a = 1/(2π) (right)


Fig. 6.2. Homeomorphisms of the circle represented by f(x) = x + a sin(2πx) + b, b = √2 − 1, for a = 0.1 (left) and a = 1/(2π) (right)

Let f^(0)(x) = x, f^(1)(x) = f(x), and f^(n)(x) = f(f^(n−1)(x)), n ≥ 1. Observe that f^(n) is monotone and f^(n)(x + k) = f^(n)(x) + k for every k ∈ Z.

Theorem 6.1. Let T be an orientation preserving homeomorphism of the unit circle represented by f : R → R. Define

ρ(T, f) = lim_{n→∞} (f^(n)(x) − x)/n .

Then the limit exists and does not depend on x.

Proof. We follow the argument given in [CFS]. There are two possibilities depending on the existence of a fixed point of T.
Case I. Suppose that there exists j ∈ Z, j ≠ 0, such that T^j x0 = x0 for some x0. If j < 0, then x0 = T^{−j} x0, and we may assume j > 0. Since f^(j)(x0) = x0 (mod 1), there exists k ∈ Z such that f^(j)(x0) = x0 + k. Hence

f^(2j)(x0) = f^(j)(f^(j)(x0)) = f^(j)(x0 + k) = f^(j)(x0) + k = x0 + 2k ,

and in general we have f^(qj)(x0) = x0 + qk for q ∈ N. Take an integer n. Then n = qj + r, 0 ≤ r < j. Hence

f^(n)(x0) = f^(qj+r)(x0) = f^(r)(f^(qj)(x0)) = f^(r)(x0 + qk) = f^(r)(x0) + qk ,

and so we have

lim_{n→∞} f^(n)(x0)/n = lim_{n→∞} ( f^(r)(x0)/n + qk/(qj + r) ) = k/j

since q → ∞ as n → ∞. In this case, the limit exists at x0 and is rational.
Case II. Suppose that there exists no fixed point of T^j for any j. Since the mapping f^(j)(x) − x is continuous, its range {f^(j)(x) − x : x ∈ R} is a


connected subset in R. Since f^(j)(x) − x ∉ Z for every x, the range is included in an open interval of the form (p, p + 1) for some integer p depending on j, and hence p < f^(j)(x) − x < p + 1 for every x. Thus x + p < f^(j)(x) < x + p + 1 for every x. Then

f^(ℓj)(0) = f^(j)(f^((ℓ−1)j)(0)) < f^((ℓ−1)j)(0) + p + 1 < · · · < ℓ(p + 1) .

Similarly,

ℓp < · · · < f^((ℓ−1)j)(0) + p < f^(j)(f^((ℓ−1)j)(0)) = f^(ℓj)(0) .

Hence for every ℓ ≥ 1

p/j < f^(ℓj)(0)/(ℓj) < (p + 1)/j .

It follows that

| f^(ℓj)(0)/(ℓj) − f^(j)(0)/j | < 1/j .

Switching the roles of j and ℓ, we have

| f^(ℓj)(0)/(ℓj) − f^(ℓ)(0)/ℓ | < 1/ℓ .

Thus

| f^(j)(0)/j − f^(ℓ)(0)/ℓ | ≤ | f^(j)(0)/j − f^(ℓj)(0)/(ℓj) | + | f^(ℓj)(0)/(ℓj) − f^(ℓ)(0)/ℓ | < 1/j + 1/ℓ .

Therefore {f^(j)(0)/j}_{j≥1} is a Cauchy sequence, and it converges.
By Cases I and II we now know that there is at least one point x0 ∈ R where the limit exists. Take an arbitrary point x ∈ R. Then we can find J ∈ Z depending on x such that x0 + J ≤ x < x0 + J + 1. Since f^(n) is monotone, we have

f^(n)(x0 + J) ≤ f^(n)(x) < f^(n)(x0 + J + 1)

and hence f^(n)(x0) + J ≤ f^(n)(x) < f^(n)(x0) + J + 1. Since

| f^(n)(x0)/n − f^(n)(x)/n | < (|J| + 1)/n → 0

as n → ∞, the limit exists at every point and is independent of x.


Remark 6.2. In the proof of Theorem 6.1 we have shown that if T^j x0 = x0 for some j ≥ 1 then ρ(T, f) = k/j for some k ≥ 1. For example, consider the circle homeomorphism T represented by

f(x) = x + 0.1 sin(2πx) + 1/2 .

Then f(0) = 1/2 and f(1/2) = 1. Hence T²(0) = 0, and so ρ(T, f) = 1/2. See Fig. 6.3 where the simulation is done with a point x0 ∉ {0, 1/2}. See also Fig. 6.12.


Fig. 6.3. Plots of f (n) (x0 )/n, 1 ≤ n ≤ 30, (left) and (f (1000) (x)−x)/1000, 0 ≤ x ≤ 1, (right) where f (x) = x + 0.1 sin(2πx) + 0.5

Theorem 6.3. Let T be an orientation preserving homeomorphism of the unit circle represented by f : R → R. Then ρ(T, f) is rational if and only if T^n has a fixed point for some n ≥ 1.
Proof. It suffices to show that if ρ(T, f) is rational then T^n has a fixed point for some n ≥ 1. The other direction has already been proved in the course of proving Theorem 6.1, as explained in Remark 6.2.
First consider the special case ρ(T, f) = 0. We will show that T x0 = x0 for some x0. Suppose not. Then f(x) − x ∉ Z for any x. Since f(x) − x is continuous, either f(x) − x > 0 for every x or f(x) − x < 0 for every x. If f(x) > x for every x, then f(0) > 0. Since f is increasing, so is f^(n) for every n ≥ 1. Hence

f^(n+1)(0) = f^(n)(f(0)) > f^(n)(0) .

Therefore f^(n)(0) > f^(n−1)(0) > · · · > f^(2)(0) > f(0) > 0. If f^(n)(0) → ∞ as n → ∞, then f^(N)(0) > 1 for some N ≥ 1. Hence

f^(2N)(0) = f^(N)(f^(N)(0)) > f^(N)(1) = f^(N)(0) + 1 > 2 .

In general, f^(kN)(0) > k. Hence


ρ(T, f) = lim_{k→∞} f^(kN)(0)/(kN) ≥ lim_{k→∞} k/(kN) = 1/N > 0 ,

which is a contradiction. It follows that {f^(n)(0)}_{n=1}^∞ is bounded; being increasing, it converges to some x0. Then

f(x0) = f(lim_{n→∞} f^(n)(0)) = lim_{n→∞} f(f^(n)(0)) = lim_{n→∞} f^(n+1)(0) = x0 .

If f(x) < x for every x, then f(0) < 0 and we proceed similarly.
Now let ρ(T, f) = p/q for p, q ∈ N, (p, q) = 1, q ≥ 2. Note that g(x) = f^(q)(x) − p represents T^q and

g^(2)(x) = f^(q)(g(x)) − p = f^(q)(f^(q)(x) − p) − p = f^(2q)(x) − 2p .

In general, g^(n)(x) = f^(nq)(x) − np, and hence

lim_{n→∞} g^(n)(x)/n = lim_{n→∞} f^(nq)(x)/n − p = q lim_{n→∞} f^(nq)(x)/(nq) − p = q · (p/q) − p = 0 .

It follows that ρ(T q , g) = 0. By the preceding argument for the ﬁrst case, we conclude that T q has a ﬁxed point.

Definition 6.4. Observe that if g(x) = f(x) + k for some k ∈ Z, then ρ(T, g) = ρ(T, f) + k. Define the rotation number ρ(T) of T by

ρ(T) = ρ(T, f)  (mod 1)

where f is any function representing T. Among all functions representing T there is a unique function f such that 0 ≤ f(0) < 1, which is assumed throughout the rest of the chapter. In this case, ρ(T) = ρ(T, f). For a simulation see Fig. 6.4 and Maple Program 6.8.1.

Remark 6.5. Theoretically, there is no difference in the limits of

(f^(n)(x) − x)/n  and  f^(n)(x)/n

as n → ∞. For finite n, however, there is a very small difference that cannot be ignored in computer experiments. Even for n = 1000 this small difference may cause a serious problem. See Fig. 6.4 and Maple Program 6.8.8.
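The size of the discrepancy in Remark 6.5 is easy to quantify: in exact arithmetic the two quantities differ by exactly x/n, which for n = 1000 is of the same order as the vertical spread visible in Fig. 6.4. A Python sketch (the starting point 0.7 is an arbitrary choice):

```python
import math

a, b = 0.1, math.sqrt(2) - 1

def lift(x):
    # lift of the circle homeomorphism of Fig. 6.4
    return x + a * math.sin(2 * math.pi * x) + b

def iterate(x, n):
    for _ in range(n):
        x = lift(x)
    return x

n, x0 = 1000, 0.7
fn = iterate(x0, n)
est1 = (fn - x0) / n   # the version adopted in Definition 6.4 simulations
est2 = fn / n          # naive version, biased upward by x0/n
# the two estimates differ by x0/n = 7e-4, larger than the spread of Fig. 6.4
assert abs((est2 - est1) - x0 / n) < 1e-9
```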


Fig. 6.4. Plots of f^(n)(0)/n, 1 ≤ n ≤ 30, (left) and (f^(1000)(x) − x)/1000 at x = T^j(0), 1 ≤ j ≤ 100, (right) where f(x) = x + 0.1 sin(2πx) + √2 − 1. The horizontal line indicates the sample average

6.2 Topological Conjugacy and Invariant Measures

Let X be a compact metric space. If T : X → X is continuous, then there exists a Borel probability measure invariant under T. For, if we take an arbitrary point x0 ∈ X, we then define µn by

µn = (1/n) (δ_{x0} + δ_{T x0} + · · · + δ_{T^{n−1} x0}) ,

where δa is the Dirac measure concentrated at a. Measures can be regarded as bounded linear functionals on the space of continuous functions on X, and the set of all measures µn has a limit point, which can be shown to be invariant under T. Therefore, any homeomorphism of the circle has an invariant measure. For the details, consult Chap. 1, Sect. 8 in [CFS].

Definition 6.6. Let T : X → X be a homeomorphism of a topological space X. Then T is said to be minimal if {T^n x : n ∈ Z} is dense in X for every x ∈ X.

Theorem 6.7. Let T : S¹ → S¹ be a minimal homeomorphism. If µ is a T-invariant probability measure on S¹, then µ(A) > 0 for any nonempty open subset A.

Proof. First, observe that there exists x0 ∈ S¹ such that µ(U) > 0 for any neighborhood U of x0. If not, for any x ∈ S¹ there would exist an open neighborhood Vx of x such that µ(Vx) = 0. Since

S¹ = ∪_{x∈S¹} Vx ,

the compactness of S 1 implies that there are ﬁnitely many points x1 , . . . , xn satisfying S 1 = Vx1 ∪ · · · ∪ Vxn . Hence µ(S 1 ) ≤ µ(Vx1 ) + · · · + µ(Vxn ) = 0, which is a contradiction.


Next, take a nonempty open subset A. Then there exists n such that T^n x0 ∈ A, i.e., x0 ∈ T^{−n} A. Since T is a homeomorphism, T^{−n} A is an open neighborhood of x0. Hence µ(T^{−n} A) > 0. Since µ is T-invariant, µ(A) = µ(T^{−n} A) > 0.

Remark 6.8. Take a concrete example given by a family of homeomorphisms Ta,b of the circle represented by

f(x) = x + a sin(2πx) + b,  |a| < 1/(2π) .

Note that f′(x) > 0 and f(x + 1) = f(x) + 1. If b = 1/2, then there exist two invariant measures with their supports on two sets of periodic points of period 2 given by {0, 1/2} and {x1 ≈ 0.2, T(x1) ≈ 0.8}. (See Fig. 6.5 where we see that T^10(x) is close to one of these four points for most x.) Hence the rotation number is equal to 1/2 by Remark 6.2.


Fig. 6.5. y = T x, y = T 2 x and y = T 10 x where T is represented by f (x) = x + 0.1 sin(2πx) + 0.5 (from left to right)

For other values of b, invariant measures obtained with a = 0.1 are shown in Fig. 6.6.


Fig. 6.6. Invariant measures of homeomorphisms of the circle represented by f(x) = x + 0.1 sin(2πx) + b for b = 1/3, b = 1/4 and b = 1/5 (from left to right)


The invariant measures of Ta,b and Ta,1−b are symmetric with respect to x = 1/2 since 1 − Ta,b(1 − x) = Ta,1−b(x). See Fig. 6.7 and Maple Program 6.8.2.


Fig. 6.7. Invariant measures of homeomorphisms represented by f (x) = x + 0.1 sin(2πx) + b (left) and f (x) = x + 0.1 sin(2πx) + 1 − b (right) for b = 0.484
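The symmetry behind Fig. 6.7 is the pointwise identity 1 − Ta,b(1 − x) = Ta,1−b(x) (mod 1), which can be verified directly on a grid. A Python sketch for the parameters b = 0.484 used above:

```python
import math

def T(x, a, b):
    # circle homeomorphism T_{a,b}(x) = x + a sin(2 pi x) + b (mod 1)
    y = x + a * math.sin(2 * math.pi * x) + b
    return y - math.floor(y)

a, b = 0.1, 0.484
for i in range(1, 100):
    x = i / 100
    lhs = (1 - T(1 - x, a, b)) % 1.0
    rhs = T(x, a, 1 - b)
    # the reflection x -> 1 - x intertwines T_{a,b} and T_{a,1-b}, so their
    # invariant measures are mirror images about x = 1/2 (allow mod-1 wrap)
    assert abs(lhs - rhs) < 1e-9 or abs(abs(lhs - rhs) - 1) < 1e-9
```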

Definition 6.9. Two homeomorphisms T1 and T2 of a topological space X are said to be topologically conjugate if there exists a homeomorphism φ of X such that φ ◦ T1 = T2 ◦ φ. Such a mapping φ is called a topological conjugacy. In other words, the following diagram is commutative:

            T1
        X ------> X
        |         |
      φ |         | φ
        v         v
        X ------> X
            T2

In the following theorem an invariant measure of a homeomorphism of the unit circle plays a crucial role in constructing a topological conjugacy.

Theorem 6.10. Let T be a homeomorphism of the circle with an irrational rotation number θ. Take a Borel probability measure µ on S¹ invariant under T. Define the cumulative density function φ : S¹ → S¹ by

φ(x) = µ([0, x]) .

Then we have the following:
(i) φ is a continuous, monotonically increasing, onto function.
(ii) φ satisfies φ ◦ T = Rθ ◦ φ where Rθ is the rotation by θ, i.e., the following diagram is commutative:

             T
        S¹ ------> S¹
        |          |
      φ |          | φ
        v          v
        S¹ ------> S¹
             Rθ

(iii) For each x ∈ S¹, φ^{−1}(x) is either a point or a closed interval in S¹.
(iv) φ is a homeomorphism if and only if T is minimal.


Proof. (i) If there were a point x0 ∈ S¹ such that µ({x0}) > 0, then the invariance under T would imply that µ({T^n x0}) > 0 for every n, which would be impossible unless T^n x0 = x0 for some n. This would imply that the rotation number is rational, which is a contradiction by Theorem 6.3. Hence µ is a continuous measure, and so φ is a continuous function. Observe that φ(0) = 0 and φ is onto.
(ii) Note that µ([a, c]) = µ([a, b]) + µ([b, c]) (mod 1) for any choice of a, b, c ∈ S¹. Here [x, y] are intervals in the circle and we use the convention that [y, x] = [y, 1) ∪ [0, x] for 0 ≤ x < y < 1. Hence

φ(T x) = µ([0, T x]) = µ([0, T 0]) + µ([T 0, T x]) (mod 1)
       = µ([0, T 0]) + µ([0, x]) (mod 1)
       = µ([0, T 0]) + φ(x) (mod 1) .

Let f : R → R be the function representing T with 0 ≤ f(0) < 1, and consider 0, f(0), f^(2)(0), . . . , f^(n)(0). Then

[0, f^(n)(0)] = [0, f(0)] ∪ [f(0), f^(2)(0)] ∪ · · · ∪ [f^(n−1)(0), f^(n)(0)] .

Define an infinite measure ν on R by

ν(A) = Σ_{j=−∞}^{∞} µ((A ∩ [j, j + 1)) − j)

for A ⊂ R, where B − j = {b − j : b ∈ B}, B ⊂ R. Note that ν([0, j]) = j and ν([f^(j−1)(0), f^(j)(0)]) = µ([0, T 0]) for every j ≥ 1. Hence

ν([0, f^(n)(0)]) = n µ([0, T 0]) .

Let ⌊t⌋ denote the greatest integer less than or equal to t ∈ R. Then

θ = lim_{n→∞} f^(n)(0)/n
  = lim_{n→∞} ⌊f^(n)(0)⌋/n
  = lim_{n→∞} ν([0, ⌊f^(n)(0)⌋])/n
  = lim_{n→∞} ν([0, f^(n)(0)])/n
  = µ([0, T 0]) .

Therefore φ(T x) = φ(x) + θ (mod 1).


(iii) Take x ∈ S¹ and let t0 ∈ φ^{−1}(x). Choose t ∈ S¹ with t0 < t. Then t ∈ φ^{−1}(x) if and only if µ([t0, t]) = 0, and in this case [t0, t] ⊂ φ^{−1}(x). Define

b = sup{t ≥ t0 : µ([t0, t]) = 0} .

Similarly, define

a = inf{t ≤ t0 : µ([t, t0]) = 0} .

Then [a, b] = φ^{−1}(x).
(iv) If φ is a homeomorphism, then all the topological properties of Rθ are carried over to T. Since θ is irrational, any orbit of Rθ is dense. Hence any orbit of T is dense. Now assume that T is minimal. To show that φ is a homeomorphism it is enough to show that φ is one-to-one. (See Fact 1.3.) Take x1 < x2. Theorem 6.7 implies that φ(x2) − φ(x1) = µ([x1, x2]) > 0.
For a simulation see Fig. 6.8 and Maple Program 6.8.3. The cdf is the topological conjugacy φ. Compare the cumulative density function (cdf) with the one given in Fig. 6.13.


Fig. 6.8. An invariant pdf (left) and the corresponding cdf (right) for T x = x + 0.15 sin(2πx) + 0.33 (mod 1). Compare the cdf with the one given in Fig. 6.13
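Theorem 6.10 can be tested numerically along the lines of Maple Program 6.8.3: estimate µ by a long orbit, take φ as the empirical cdf, and check φ(T x) ≈ φ(x) + θ (mod 1). A Python sketch; the sample sizes and tolerances are ad hoc choices for this illustration:

```python
import math, bisect

a, b = 0.15, 0.33

def T(x):
    # circle homeomorphism of Fig. 6.8
    y = x + a * math.sin(2 * math.pi * x) + b
    return y % 1.0

# empirical invariant measure: a long orbit of 0
N = 20000
orbit, x = [], 0.0
for _ in range(N):
    x = T(x)
    orbit.append(x)
orbit.sort()

def phi(t):
    # empirical cdf of the invariant measure -- the candidate conjugacy
    return bisect.bisect_right(orbit, t) / N

# rotation number theta estimated from the lift at x = 0
y, n = 0.0, 5000
for _ in range(n):
    y = y + a * math.sin(2 * math.pi * y) + b
theta = y / n

# conjugacy relation phi(T x) = phi(x) + theta (mod 1), up to sampling error
for x0 in (0.1, 0.37, 0.62, 0.9):
    d = abs(phi(T(x0)) - (phi(x0) + theta) % 1.0)
    assert d < 0.02 or abs(d - 1.0) < 0.02
```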

In 1932 A. Denjoy proved the following theorem.

Fact 6.11 (Denjoy Theorem) Let T be a homeomorphism of the unit circle represented by f with an irrational rotation number θ. If f is continuously differentiable with f′ > 0 and if f′ is of bounded variation on the circle, then there is a topological conjugacy φ such that φ ◦ T ◦ φ^{−1} = Rθ where Rθ is a rotation by θ.

For the proof and related results, consult [Ar2],[CFS],[Dev],[KH],[MS]. J.-C. Yoccoz [Yoc] proved that if an analytic homeomorphism of the circle preserves orientation and has no periodic point, then it is conjugate to a rotation.


6.3 The Cantor Function and Rotation Number

Let Rt denote the rotation on the circle by t. For example, R1 is the identity mapping. If T is a circle homeomorphism without periodic points represented by f and if 0 < t, then

ρ(T, f) < ρ(Rt ◦ T, f + t) .

See [MS] for the proof. Hence the mapping t → ρ(Rt ◦ T, f + t) is monotonically increasing. Furthermore, Theorem 6.3 implies that it is strictly increasing at t if ρ(Rt ◦ T, f + t) is irrational.
Let C be the Cantor set. A point x ∈ C is of the form x = Σ_{j=1}^∞ aj 3^{−j}, aj ∈ {0, 2}. Define h : C → [0, 1] by

h(x) = Σ_{j=1}^∞ bj 2^{−j}  where bj = aj/2 .

Note that h is onto [0, 1]. Take x, y ∈ C such that x < y. If x and y are two endpoints of one of the middle third open intervals removed from [0, 1] to construct C, then h(x) = h(y) = k/2^ℓ for some integers k, ℓ. If x and y are not two endpoints of one of the middle third open intervals, then h(x) < h(y). We extend h to a continuous mapping from [0, 1] onto itself by defining h to be constant on each middle third set. For example, h(1/3) = h(2/3) = 1/2 and h(1/2) = 1/2. The function thus obtained is called the Cantor function or the Cantor–Lebesgue function. Observe that the Cantor function has derivative equal to zero at almost every x. See Fig. 6.9 and Maple Programs 6.8.4, 6.8.5.


Fig. 6.9. The Cantor function
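The construction of h translates directly into a program: read off ternary digits of x until a digit 1 appears, in which case x lies in a removed middle third where h is constant. A Python sketch paralleling Maple Program 6.8.4 (the truncation depth 40 is an arbitrary choice):

```python
def cantor(x, depth=40):
    # Cantor-Lebesgue function: scan ternary digits of x until a 1 occurs
    total, p = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        d = int(x)          # next ternary digit, 0, 1 or 2
        x -= d
        if d == 1:          # x lies in a removed middle third: h is constant there
            return total + p
        total += (d // 2) * p   # binary digit b_j = a_j / 2
        p /= 2
    return total

assert cantor(0.0) == 0.0
assert abs(cantor(1/3) - 0.5) < 1e-9   # endpoints of the first removed interval
assert abs(cantor(2/3) - 0.5) < 1e-9
assert abs(cantor(0.5) - 0.5) < 1e-9   # constant on the removed middle third
assert abs(cantor(0.25) - 1/3) < 1e-9  # 1/4 = 0.0202..._3 maps to 0.0101..._2
```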

Fix a, |a| < 1/(2π). Consider a function ρ : [0, 1] → [0, 1] where ρ(b) is the rotation number of f (x) = x + a sin(2πx) + b. It is known that y = ρ(b) is a staircase function, i.e., ρ is a variation of the Cantor function. More precisely, for every rational number r, ρ−1 (r) has nonempty interior. See Fig. 6.10 and Maple Program 6.8.6.


Fig. 6.10. The Cantor type function deﬁned by rotation numbers ρ(b) of f (x) = x + 0.15 sin(2πx) + b, 0 ≤ b < 1

6.4 Arnold Tongues

Consider the rotation number ρ(a, b) of f(x) = x + a sin(2πx) + b as a function of a and b. In Fig. 6.11 we plot z = ρ(a, b) on the rectangle 0 ≤ a < 1/(2π), 0 ≤ b ≤ 1.


Fig. 6.11. Rotation numbers of f (x) = x + a sin(2πx) + b

To ﬁnd the region {(a, b) : ρ(a, b) = 0} we check the existence of the ﬁxed points of f . (See Remark 6.2.) From x + a sin(2πx) + b = x we have a sin(2πx) = −b, and so a ≥ b. To ﬁnd the region {(a, b) : ρ(a, b) = 1} we solve x + a sin(2πx) + b = x + 1 , that is, a sin(2πx) = 1 − b. Hence a ≥ 1 − b.
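The boundary computation for the tongue ρ = 0 amounts to asking whether a sin(2πx) + b has a zero. The following Python check of the condition a ≥ b (stated here for a, b ≥ 0; the grid size is an arbitrary choice) tests for a sign change:

```python
import math

def has_fixed_point(a, b, grid=10000):
    # f(x) = x has a solution iff a*sin(2*pi*x) + b vanishes somewhere,
    # i.e. iff the grid values change sign (or hit 0)
    g = [a * math.sin(2 * math.pi * i / grid) + b for i in range(grid)]
    return min(g) <= 0.0 <= max(g)

# the tongue rho = 0 is the region b <= a (for a, b >= 0)
assert has_fixed_point(0.10, 0.05)
assert not has_fixed_point(0.05, 0.10)
assert has_fixed_point(0.10, 0.10)   # boundary case b = a
```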


Consider the set {(a, b) : ρ(a, b) ∈ Q}. The level sets for rational rotation numbers are called the Arnold tongues after V.I. Arnold. In Fig. 6.12 the boundaries of the Arnold tongues are plotted for several rational numbers with small denominators (≤ 5). The left and right triangular regions correspond to ρ = 0 and ρ = 1, respectively. The tongues corresponding to the rotation numbers

1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5

are given from left to right. It is known that the Lebesgue measure of the set {b ∈ [0, 1] : ρ(a, b) ∈ Q} converges to 0 as a → 0. Consult [Ar1],[Ot] for more information and see Maple Program 6.8.7.


Fig. 6.12. Arnold tongues

6.5 How to Sketch a Conjugacy Using Rotation Number

We present a method of plotting the graph of a topological conjugacy based on the mathematical pointillism. Let T be a homeomorphism of the circle with irrational rotation number θ. Let Rt be the rotation by t on the unit circle. Suppose that we have the relation φ ◦ T = Rθ ◦ φ for some homeomorphism φ : S¹ → S¹. We may assume that φ(0) = 0. Otherwise, put t = φ(0) and ψ = R−t ◦ φ. Then φ = Rt ◦ ψ. Hence

(Rt ◦ ψ) ◦ T = Rθ ◦ (Rt ◦ ψ) ,
ψ ◦ T = R−t ◦ Rθ ◦ Rt ◦ ψ = R−t+θ+t ◦ ψ = Rθ ◦ ψ .

Observe that ψ(0) = 0.


Now we describe how to find a mapping φ satisfying the above relation without using Theorem 6.10. First, we look for φ such that φ(0) = 0. Let G be the graph of y = φ(x), i.e., G = {(x, φ(x)) : x ∈ S¹}. Since φ ◦ T = Rθ ◦ φ, we have φ ◦ T^n = R_{nθ} ◦ φ for n ∈ Z. Hence φ(T^n x) = {φ(x) + nθ}. Take x = 0. Then φ(T^n 0) = {nθ} and (T^n 0, {nθ}) ∈ G. Define γ : S¹ → G by γ(y) = (φ^{−1}(y), y). If φ is a homeomorphism, then γ is also a homeomorphism and γ({nθ}) = (T^n 0, {nθ}). Since the points {nθ}, n ≥ 0, are dense in S¹, the points γ({nθ}), n ≥ 0, are dense in G, and they uniquely define φ(x) on S¹.
For a simulation consider homeomorphisms of the circle represented by f(x) = x + a sin(2πx) + b, |a| < 1/(2π). See Fig. 6.13 where 1000 points are plotted on the graph of y = φ(x). Compare it with the cumulative density function in Fig. 6.8. See Maple Program 6.8.8. For another experiment with a circle homeomorphism see Maple Program 6.8.9.


Fig. 6.13. The points (T n 0, {nθ}), n ≥ 0, approximate the graph of the topological conjugacy φ of a homeomorphism represented by f (x) = x + 0.15 sin(2πx) + 0.33
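The plotting idea can be scripted directly. The Python sketch below (sample sizes and tolerances are ad hoc; it assumes, as Fig. 6.13 suggests, that the rotation number of this map is irrational) estimates θ from a long orbit of the lift, generates the points (T^n 0, {nθ}), and checks that, sorted by the first coordinate, the second coordinates are essentially increasing — i.e., that the points do lie on the graph of an increasing function φ. The value 0.3143 compared against is the estimate reported in Maple Program 6.8.8.

```python
import math

a, b = 0.15, 0.33

def lift(x):
    return x + a * math.sin(2 * math.pi * x) + b

# rotation number from a long orbit of the lift; |f^(L)(0) - L*theta| < 1,
# so the estimate is within 1/L of the true rotation number
y, L = 0.0, 400000
for _ in range(L):
    y = lift(y)
theta = y / L
assert abs(theta - 0.3143) < 1e-3   # value reported in Program 6.8.8

# points (T^n 0, {n*theta}) on the graph of the conjugacy phi, phi(0) = 0
M = 1000
pts, x = [], 0.0
for n in range(1, M + 1):
    x = lift(x) % 1.0
    pts.append((x, (n * theta) % 1.0))
pts.sort()
ys = [p[1] for p in pts]
# phi is increasing: apart from mod-1 wraparounds (descents close to 1)
# and the accumulated error M/L in n*theta, descents should not occur
violations = sum(1 for i in range(len(ys) - 1)
                 if 0.01 < ys[i] - ys[i + 1] < 0.99)
assert violations <= 3
```

As the book warns in Remark 6.5, perturbing theta by as little as 1e-4 destroys this monotone structure.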

6.6 Unique Ergodicity

Theorem 6.12. Let X be a measurable space with a σ-algebra A and let T : X → X be a measurable transformation. If there exists only one T-invariant probability measure µ on (X, A), then T is ergodic with respect to µ.
Proof. Suppose that there exists A ∈ A such that µ(A) > 0 and T^{−1} A = A. Define a conditional measure ν by ν(B) = µ(B ∩ A)/µ(A) for B ∈ A. Then ν(X) = 1 and ν(T^{−1} B) = ν(B). Hence ν = µ, and so µ(B) = µ(B ∩ A)/µ(A) for every B ∈ A. If we take B = X \ A, then µ(X \ A) = 0. Thus µ(A) = 1, and the ergodicity of T is proved.


Definition 6.13. A continuous transformation T on a compact metric space X is said to be uniquely ergodic if there exists only one T-invariant Borel probability measure on X. Note that T is ergodic with respect to the only T-invariant measure.

Example 6.14. A translation T x = {x + θ} is uniquely ergodic if and only if θ is irrational.
Proof. First, suppose that θ is rational and of the form θ = p/q for some integers p, q ≥ 1 such that (p, q) = 1. Define a T-invariant probability measure on {0, 1/q, . . . , (q − 1)/q} ⊂ S¹ by assigning probability 1/q to each point. Then we have a T-invariant probability measure other than Lebesgue measure, and hence T is not uniquely ergodic.
Next, assume that θ is irrational. If µ is a T-invariant probability measure, then

∫₀¹ e^{−2πinx} dµ = ∫₀¹ e^{−2πin(x+θ)} dµ ,

and hence µ̂(n) = e^{−2πinθ} µ̂(n) for every n. If n ≠ 0, then µ̂(n) = 0. Hence µ is Lebesgue measure by the uniqueness theorem in Subsect. 1.4.3.
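The vanishing of the Fourier coefficients can be watched numerically: for irrational θ the Birkhoff averages of the characters e^{−2πinx} along any orbit tend to 0, the corresponding coefficient of Lebesgue measure. A Python sketch (the starting point 0.1 and the sample size are arbitrary choices):

```python
import cmath, math

theta = math.sqrt(2) - 1   # an irrational rotation angle
N = 10000
for n in (1, 2, 3):
    x, s = 0.1, 0.0
    for _ in range(N):
        s += cmath.exp(-2j * math.pi * n * x)
        x = (x + theta) % 1.0
    # Birkhoff averages of characters vanish for n != 0, so every invariant
    # measure has the Fourier coefficients of Lebesgue measure
    assert abs(s / N) < 1e-3
```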

For more general transformations we have the following:

Theorem 6.15. If T : S¹ → S¹ is a homeomorphism with irrational rotation number, then it is uniquely ergodic.
Proof. First, we consider the case when T is minimal. Let µ be a T-invariant Borel probability measure. Define a mapping φ as in Theorem 6.10. Since T is minimal, φ is a homeomorphism. Let G be the graph of y = φ(x). Then the mapping γ : S¹ → G defined by γ(y) = (φ^{−1}(y), y) is a homeomorphism. See Sect. 6.5. The points γ({nθ}), n ≥ 0, uniquely define φ(x) on S¹ with the condition φ(0) = 0. Hence by Theorem 6.10 an invariant probability measure µ is uniquely determined.
Next, suppose that T is not minimal. Let K denote the closure of the points T^n 0, n ≥ 0, in S¹. Then φ is determined on K as in the previous case. Recall that φ is continuous and monotonically increasing. Observe that

S¹ \ K = ∪_j (aj, bj)

where (aj, bj) are pairwise disjoint open intervals. Suppose that µ((aj, bj)) > 0 for some j. Then φ(aj) < φ(bj) and there would exist n such that φ(aj) < {nθ} < φ(bj). This implies that {nθ} = φ(T^k 0) for some k and aj < T^k 0 < bj, which is a contradiction. Hence µ((aj, bj)) = 0 for every j, and φ is constant on each interval [aj, bj]. Thus φ is determined uniquely on S¹, and so is µ.

For a diﬀerent proof see [CFS],[Wa1].


6.7 Poincaré Section of a Flow

A continuous flow {Ft : t ∈ R} on a metric space X is a one-parameter group of homeomorphisms Ft of X such that (i) Ft ◦ Fs = Ft+s for t, s ∈ R, (ii) F0 is the identity mapping on X, and (iii) (Ft)^{−1} = F−t. We assume that (t, x) → Ft(x) is continuous. A flow line is a curve t → Ft(x), t ∈ R. It passes through x at t = 0. When an autonomous ordinary differential equation (du/dt, dv/dt) = W(u, v) is given for a vector field W, the solution curves of the differential equation are flow lines.
A circle homeomorphism can be obtained from a continuous flow on the torus X = S¹ × S¹. Let E = {v = 0} = S¹ × {0}, which is a circle obtained by taking the intersection of X with a plane perpendicular to X. See Fig. 6.14. Given a continuous flow on X, define a mapping T : E → E as follows: Take x ∈ E and consider the flow line starting from x. Let T x be the first point where the flow line hits E. Then T is a homeomorphism of the unit circle E. The set E is called the Poincaré section and T the Poincaré mapping.


Fig. 6.14. A circle homeomorphism T is defined by a Poincaré section

The Poincaré section method reduces the dimension of the phase space under consideration, so that the problem becomes simpler and some of the properties of the continuous flow can be obtained by studying the discrete action of a mapping T. For example, the following statements are equivalent:
(i) T : E → E has a periodic point of period k ≥ 1.
(ii) There exists a closed flow line that winds around the torus k times in the v-direction.
See [DH] for a historical introduction to the Poincaré mapping in celestial mechanics, and consult [Tu] for accurate numerical computations.

Example 6.16. Fix w = (a, b) ∈ R², and define Ft(x) = x + tw (mod 1), where t ∈ R, x = (u, v) ∈ X. Then we have T(u, 0) = (u + a/b, 0) (mod 1), or, ignoring the second coordinate, T u = u + a/b (mod 1). See Fig. 6.15. Consult [Si1] for converting a smooth flow on the torus into a linear flow with an irrational slope by making a suitable change of coordinates.
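Example 6.16 can be reproduced by stepping the flow numerically until it first returns to the section E. A Python sketch (the step size dt is an arbitrary choice; for the linear flow the crossing could of course be solved exactly):

```python
import math

def poincare_map(u, a, b, dt=1e-4):
    # integrate the linear flow (du/dt, dv/dt) = (a, b) on the torus from (u, 0)
    # until the v-coordinate first returns to 0 (i.e. v crosses an integer)
    t = 0.0
    while True:
        t += dt
        v = t * b
        if v >= 1.0:                 # crossed the section v = 0 on the torus
            t -= (v - 1.0) / b       # linear interpolation back to the crossing
            return (u + t * a) % 1.0

a, b = math.sqrt(2), 3.0
u0 = 0.25
u1 = poincare_map(u0, a, b)
# Example 6.16: the return map is the translation u -> u + a/b (mod 1)
assert abs(u1 - (u0 + a / b) % 1.0) < 1e-6
```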



Fig. 6.15. A translation mod 1 is defined by a Poincaré section of a linear flow. The bottom edge represents E = {(u, 0) ∈ X}

6.8 Maple Programs

Simulations with rotation numbers and conjugacy mappings are presented.

6.8.1 The rotation number of a homeomorphism of the unit circle

Calculate the rotation number of a circle homeomorphism.
> with(plots):
> evalf(1/(2*Pi));
                    0.15915494309189533576
Choose a constant a, |a| < 1/(2π).
> a:=0.1:
> b:=sqrt(2.)-1:
Take f(x) = x + a sin(2πx) + b. Use the command evalhf in evaluating transcendental functions.
> f:=x->evalhf(x+a*sin(2*Pi*x)+b):
Calculate the rotation number of T at a point x = 0.
> seed[0]:=0:
Choose the number of iterations in estimating the rotation number.
> L:=30:
> for n from 1 to L do seed[n]:=f(seed[n-1]): od:
Calculate the rotation number at seed[0].
> for j from 1 to L do
>   rho[j]:=(seed[j]-seed[0])/j:
> od:
Plot the approximation of the rotation number ρ(x) at x = 0.
> listplot([seq([n,rho[n]],n=1..L)],labels=["n"," "]);
See Fig. 6.4.


6.8.2 Symmetry of invariant measures

The following is a simulation of Remark 6.8.
> with(plots):
> Digits:=50:
Consider the β-transformation. See the left graph in Fig. 6.16.
> beta:=(1+sqrt(5.))/2:
> T:=x->frac(beta*x);
> plot(T(x),x=0..1);


Fig. 6.16. y = T x (left) and an approximate invariant pdf with y = ρ(x) (right)

Define the invariant density ρ(x).
> rho:=x->piecewise(0
> for j from 1 to Bin do freq[j]:=0: od:
> for i from 1 to S do seed:=T(seed):
>   slot:=ceil(Bin*seed):
>   freq[slot]:=freq[slot]+1:
> od:
Compare ρ(x) with the approximate invariant density for T obtained by the Birkhoff Ergodic Theorem. See the right plot in Fig. 6.16.
> g3:=listplot([seq([(i-0.5)/Bin,freq[i]*Bin/S],i=1..Bin)]):
> g4:=plot(rho(x),x=0..1):
> display(g3,g4);
Define S using the symmetry.
> S:=x->1-T(1-x):
Draw the graph y = Sx. See the left graph in Fig. 6.17.



> plot(S(x),x=0..1);


Fig. 6.17. y = Sx (left) and its numerical approximation of the invariant pdf with y = ρ(1 − x) (right)

Find an approximate invariant pdf for S.
> for j from 1 to Bin do freq2[j]:=0: od:
> for i from 1 to S do
>   seed:=S(seed):
>   slot:=ceil(Bin*seed):
>   freq2[slot]:=freq2[slot]+1:
> od:
Compare ρ(1 − x) with the approximate invariant density for S.
> g7:=listplot([seq([frac((i-0.5)/Bin),freq2[i]*Bin/S], i=1..Bin)]):
> g8:=plot(rho(1-x),x=0..1):
> display(g7,g8);
See the right plot in Fig. 6.17.

6.8.3 Conjugacy and the invariant measure

The following is a simulation of Theorem 6.10.
> with(plots):
> a:=0.15: b:=0.33;
Define a circle homeomorphism.
> T:=x->frac(x+a*sin(2*Pi*x)+b):
> S:=10^6:
> Bin:=100:
Find the probability density function (pdf).
> for j from 1 to Bin do freq[j]:=0: od:
> seed:=0:
> for s from 1 to S do seed:=T(seed):
>   slot:=ceil(evalhf(Bin*seed)):
>   freq[slot]:=freq[slot]+1: od:


Plot the pdf. See the left graph in Fig. 6.8.
> listplot([seq([frac((i-0.5)/Bin),freq[i]*Bin/S], i=1..Bin)]);
Find the cumulative density function (cdf).
> cum[0]:=0:
> for i from 1 to Bin do cum[i]:=cum[i-1]+freq[i]: od:
Plot the cdf, which is the topological conjugacy.
> listplot([seq([frac((i-0.5)/Bin),cum[i]/S],i=1..Bin)]);
See the right graph in Fig. 6.8.

6.8.4 The Cantor function

Sketch the standard Cantor function arising from the Cantor ternary set.
> with(plots):
Choose the number of points to be plotted on the graph of the Cantor function.
> N:=2000:
For each xn, 1 ≤ n ≤ N, calculate the value yn of the Cantor function. An extra condition that j < 100 is needed to avoid an infinite loop. Without such a condition the do loop would not end if xn belongs to the Cantor set.
> for n from 1 to N do
>   x[n]:=evalhf(frac(n/N+Pi)):
>   seed:=x[n]:
>   j:=0:
>   a:='a':
>   while (a <> 1 and j < 100) do
>     j:=j+1:
>     a:=trunc(3.0*seed);
>     seed:=frac(3.0*seed):
>     b[j]:=a/2;
>   od:
>   y[n]:=add(b[i]*2^(-i*b[i]),i=1..j-1) + 2^(-j):
> od:
The points (xn, yn) approximate the graph of the Cantor function.
> pointplot([seq([x[n],y[n]],n=1..N)],symbolsize=1);
See Fig. 6.9.

6.8.5 The Cantor function as a cumulative density function

The Cantor function is obtained as a cdf of the probability measure, called the Cantor measure, supported on the Cantor set.
> with(plots): with(plottools):
> T:=x->frac(3.0*x):
> entropy:=log10(3.0);
                    entropy := 0.4771212547


Find a typical point seed[0] belonging to the Cantor set. Use N digits in the ternary expansion (d1, d2, d3, . . .), dj ∈ {0, 2}, of seed[0].
> N:=21000:
> 3.0^(-N);
                    0.2842175472 × 10^(−10019)
> Digits:=10020:
Generate a sequence of dj ∈ {0, 2}, 1 ≤ j ≤ N, with probability 1/2 for each.
> ran:=rand(0..1):
> for j from 1 to N do
>   d[j]:=ran()*2:
> od:
> seed[0]:=evalf(add(d[s]/3^s,s=1..N)):
Choose the size of samples. It is bounded by the number of iterations after which orbit points lose any meaningful significant digits. Consult Sect. 9.2.
> S:=trunc(Digits/entropy)-20;

                    S := 20980
> for i from 1 to S do
>   seed[i]:=T(seed[i-1]):
>   seed[i-1]:=evalf(seed[i-1],10):
> od:
> Digits:=10:
> Bin:=3^5:
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to S do
>   slot:=ceil(Bin*seed[j]):
>   freq[slot]:=freq[slot]+1:
> od:
> for i from 1 to Bin do
>   g[i]:=line([(i-0.5)/Bin,freq[i]*Bin/S],[(i-0.5)/Bin,0]):
> od:
> display(seq(g[i],i=1..Bin),labels=["x"," "]);
See Fig. 6.18.


Fig. 6.18. Approximation of the Cantor measure


Find the cumulative density function.
> cum[0]:=0:
> for i from 1 to Bin do
>   cum[i]:=cum[i-1]+freq[i]:
> od:
> listplot([seq([frac((i-0.5)/Bin),cum[i]/S],i=1..Bin)]);

6.8.6 Rotation numbers and a staircase function

Here is how to sketch a staircase function arising from rotation numbers of a circle homeomorphism.
> with(plots):
> a:=0.15;
Choose the number of iterations to find the rotation number.
> N:=500:
Decide the increment of b. It is given by 1/Step.
> Step:=1000:
For each value of b find the rotation number.
> for i from 1 to Step do
>   b:=evalhf((i-0.5)/Step):
>   f:=x->evalhf(x+a*sin(2*Pi*x)+b):
>   seed:=0:
>   for n from 1 to N do seed:=f(seed): od:
>   rho[i]:=seed/N:
> od:
Plot the staircase function in Fig. 6.10.
> pointplot([seq([(i-0.5)/Step,rho[i]],i=1..Step)], symbolsize=1);
For further experiments take the periodic function g(x) of period 1 given by the periodic extension of the left graph in Fig. 6.19 and consider f(x) = x + a g(x) + b, a = 0.5. It yields a staircase function on the right.


Fig. 6.19. The graph of g(x) on 0 ≤ x ≤ 1, (left) and a staircase function arising from f (x) = x + ag(x) + b, a = 0.5 (right)


6.8.7 Arnold tongues

The following is a modification of a C program written by B.J. Kim. Use evalhf for the hardware floating point arithmetic with double precision.
> with(plots):
Take f(x) = x + a sin(2πx) + b. Choose the number of iterations in calculating the rotation number.
> N:=1000:
Choose the number of intermediate points for the a-axis and the b-axis. To obtain Fig. 6.12 increase Num1.
> Num1:=35: Num2:=1000:
Calculate the increments of a and b.
> Inc_a:=evalhf(1/2/Pi/Num1):
> Inc_b:=evalhf(1/Num2):
Choose the rotation numbers corresponding to the Arnold tongues. Take rational numbers with denominators not greater than 5.
> [0,1,1/2,1/3,2/3,1/4,3/4,1/5,2/5,3/5,4/5];
Sort the rotation numbers in increasing order.
> Ratio:=sort(%);
          Ratio := [0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1]
Count the number of elements in the sequence Ratio.
> M:=nops(Ratio);
                    M := 11
> Error_bound:=0.001:
Choose an error bound for the estimation of the rotation number. If it is too large, then the plot would be meaningless. If it is too small, then there may be no plot. For small values we have to increase N.
> for i from 1 to Num1 do
>   for j from 0 to Num2 do
>     seed:=0.0:
>     for k from 1 to N do
>       seed:=evalhf(seed+i*Inc_a*sin(2*Pi*seed)+j*Inc_b): od:
>     rho[j]:=evalhf(seed/N):
>   od:
>   for m from 1 to M do S:=1:
>     for g from 0 to Num2 do
>       if abs(rho[g]-Ratio[m]) < Error_bound then
>         b_rational[S]:= g/Num2:
>         S:=S+1: fi:
>     od:
>     min_b[i,m]:= b_rational[1]:
>     max_b[i,m]:= b_rational[S-1]:
>   od:
> od:


6 Homeomorphisms of the Circle

Find the left and right boundaries of the Arnold tongues. For small values of Num1, it is better to use listplot to connect points.
> for m from 1 to M do
>    L[m]:=pointplot({seq([min_b[i,m], i*Inc_a],i=1..Num1)}):
>    R[m]:=pointplot({seq([max_b[i,m], i*Inc_a],i=1..Num1)}):
> od:
> g1:=display(seq([L[m],R[m]],m=1..M)):
> g2:=plot(0,x=0..1,y=0..1/2/Pi):
> display(g1,g2,labels=["b","a"]);
See Fig. 6.12.

6.8.8 How to sketch a topological conjugacy of a circle homeomorphism I

Now we explain how to simulate the idea in Sect. 6.5.
> with(plots):
> a:=0.15: b:=0.33:
Define a function f representing a circle homeomorphism T.
> f:=x->evalhf(x+a*sin(2*Pi*x)+b):
Choose an orbit of length 2000.
> SampleSize:=2000:
> orbit[0]:=0:
> for i from 1 to SampleSize do
>    orbit[i]:=frac(f(orbit[i-1])): od:
Calculate the rotation number of T at orbit points.
> Length:=2000:
> for i from 1 to SampleSize do
>    seed[0]:=orbit[i]:
>    for n from 1 to Length do seed[n]:=f(seed[n-1]): od:
>    rotation[i]:=(seed[Length]-seed[0])/Length:
> od:
> average:=add(rotation[i],i=1..SampleSize)/SampleSize;
            average := 0.3143096817
> rotation_num:=average:
A sharp estimate for the rotation number ρ(T) is essential in plotting points on the graph y = φ(x). A slight change in the estimate would destroy the plot completely. This is why we adopted the version in Definition 6.4. If we had used f^(n)(x)/n as an approximation of ρ(T), then we would have slightly overestimated values, which would not be good enough for our purpose.
> seed[0]:=0:
> N:=2000:
This is the number of points on the graph of the topological conjugacy.
> for n from 1 to N do seed[n]:=frac(f(seed[n-1])): od:


Plot points on the graph of the topological conjugacy using the idea of the mathematical pointillism. See Fig. 6.13.
> pointplot([seq([seed[n],frac(n*rotation_num)],n=1..N)]);
Change the value of rotation_num slightly and observe what happens. The method would not work! Consult Remark 6.5.
> rotation_number:=average + 0.0001;
            rotation_number := 0.3144096817
> pointplot([seq([seed[n],frac(n*rotation_num)],n=1..N)]);
See Fig. 6.20.


Fig. 6.20. The points (T n 0, {nθ}) with a less precise value for θ

6.8.9 How to sketch a topological conjugacy of a circle homeomorphism II

Here is another example of how to find a topological conjugacy φ numerically for a circle homeomorphism T based on the mathematical pointillism. We consider a case when φ is already known and compare it with a numerical solution.
> with(plots):
> Digits:=20:
Choose an irrational rotation number 0 < θ < 1.
> theta:=frac(sqrt(2.)):
Define an irrational rotation R to which T is conjugate through φ.
> R:=x-> frac(x+theta);
            R := x → frac(x + θ)
Draw the graph y = R(x). See Fig. 6.21.
> plot(R(x),x=0..1);
Define the topological conjugacy φ.
> phi:=x->piecewise(x >= 0 and x < 3/4, (1/3)*x, x >= 3/4 and x < 1, 3*x-2):
> plot(phi(x),x=0..1);
Find ψ = φ^(-1).
> psi:=x->piecewise(x >= 0 and x < 1/4, 3*x, x >= 1/4 and x < 1, (x+2)/3):
> plot(psi(x),x=0..1);

> T:=x->psi(R(phi(x)));
            T := x → ψ(R(φ(x)))
Draw the graph y = T x. See Fig. 6.23.
> plot(T(x),x=0..1);
Recall that θ is the rotation number of T.
> seed[0]:=0:


Fig. 6.23. y = T x

Choose the number of points on the graph of the conjugacy. To emphasize the key idea we choose a small number here.
> N:=30:
> for n from 1 to N do seed[n]:=T(seed[n-1]): od:
Plot the points of the form (T^n 0, {nθ}), which are on the graph y = φ(x).
> pointplot([seq([seed[n],frac(n*theta)],n=1..N)]);
See Fig. 6.24.


Fig. 6.24. Plot of points (T n 0, {nθ})

Now we apply the Birkhoff Ergodic Theorem.
> SampleSize:=10^4:
Since the pdf is piecewise continuous, it is better to take sufficiently many bins to have a nice graph.
> Bin:=200:
> for s from 1 to Bin do freq[s]:=0: od:
> start:=evalf(Pi-3):
> for i from 1 to SampleSize do
>    start:=T(start):
>    slot:=ceil(Bin*start):
>    freq[slot]:=freq[slot]+1:
> od:


Sketch the invariant pdf ρ(x) based on the preceding numerical experiment. Theoretically it can be obtained by differentiating φ(x). Thus we have

ρ(x) = 1/3 for 0 < x < 3/4, and ρ(x) = 3 for 3/4 < x < 1.

> listplot([seq([frac((i-0.5)/Bin),freq[i]*Bin/SampleSize], i=1..Bin)]);


Fig. 6.25. An approximate invariant pdf for T

Find the cumulative density function (cdf).
> cum[0]:=0:
> for i from 1 to Bin do
>    cum[i]:=cum[i-1]+freq[i]:
> od:
Plot the cdf for T together with the plot obtained in Fig. 6.24. They match well as expected. See Fig. 6.26.
> listplot([seq([frac((i-0.5)/Bin),cum[i]/SampleSize], i=1..Bin)]);


Fig. 6.26. An approximate cdf for T together with the points (T n 0, {nθ})

7 Mod 2 Uniform Distribution

We construct a sequence of 0's and 1's from a dynamical system on a probability space X and a measurable set E, and define mod 2 uniform distribution. Let 1_E be the characteristic function of E. For an ergodic transformation T : X → X, consider the sequence d_n(x) ∈ {0, 1} defined by

d_n(x) ≡ Σ_{k=1}^{n} 1_E(T^{k−1} x)  (mod 2) .

The mod 2 uniform distribution problem is to investigate the convergence of

(1/N) Σ_{n=1}^{N} d_n(x)

to a limit as N → ∞. Contrary to our intuition, the limit may not be equal to 1/2. This is related to the ergodicity of the skew product transformation.

7.1 Mod 2 Normal Numbers

Let X = [0, 1) with Lebesgue measure µ. Define T x = 2x (mod 1). Let

x = Σ_{k=1}^{∞} a_k 2^{−k} ,  a_k ∈ {0, 1} ,

be the binary representation of x. Then a_k = 1_E(T^{k−1} x) where E = [1/2, 1). The classical normal number theorem states that almost every x is normal, i.e., (1/n) Σ_{k=1}^{n} a_k converges to 1/2.

Definition 7.1. Define d_n(x) ∈ {0, 1} by

d_n(x) ≡ Σ_{k=1}^{n} a_k  (mod 2) .


Then x is called a mod 2 normal number if

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x) = 1/2 .

Theorem 7.2. Almost every x is a mod 2 normal number.

Proof. The proof may be obtained following the general arguments in later sections but we present a proof here to emphasize the key idea. Note that X is a group under addition modulo 1. Take E = [1/2, 1). Define T_1 on X × {0, 1} by

T_1(x, y) = (2x, 1_E(x) + y) .

Here {0, 1} is regarded as an additive group Z_2, and the addition is done modulo 2. Then T_1 is measure preserving and

T_1^n(x, y) = (2^n x, d_n(x) + y) .

Note that L^2(X × {0, 1}) = L^2(X) ⊕ L^2(X)χ where the character χ on {0, 1} is defined by χ(y) = e^{πiy} and L^2(X)χ = {f(x)χ(y) : f ∈ L^2(X)}. Suppose that T_1 is not ergodic. Then there is a nonconstant invariant function of the form h_1(x) + h_2(x)e^{πiy}. Hence

h_1(x) + h_2(x)e^{πiy} = h_1(2x) + h_2(2x) exp(πi 1_E(x)) e^{πiy}

and

h_1(x) − h_1(2x) = (−h_2(x) + h_2(2x) exp(πi 1_E(x))) e^{πiy} .

Since the left hand side does not depend on y, we have h_1(x) = h_1(2x) and h_2(x) = h_2(2x) exp(πi 1_E(x)). Since x → {2x} is ergodic, h_1 is constant. Let us show that h_2 is also constant. First, note that 1_E(1 − x) = 1 − 1_E(x) and exp(πi 1_E(1 − x)) = − exp(πi 1_E(x)). Hence

h_2(1 − x) = −h_2(2(1 − x)) exp(πi 1_E(x))

and

h_2(x) h_2(1 − x) = −h_2(2x) h_2(2(1 − x)) .

Put q(x) = h_2(x) h_2(1 − x). Then

q(2x) = −q(x) ,

which is a contradiction since x → {2x} is mixing and has no eigenvalue other than 1. Thus T_1 is ergodic. Let ν be the Haar measure on Z_2, i.e., ν({0}) = ν({1}) = 1/2. For an integrable function f defined on X × {0, 1}, the Birkhoff Ergodic Theorem implies that


lim_{N→∞} (1/N) Σ_{n=0}^{N−1} f(T_1^n(x, y)) = ∫_{X×{0,1}} f(x, y) d(µ × ν)

for almost every (x, y) ∈ X × {0, 1}. Since X × {0, 1} can be regarded as a union of two copies of X, we have

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} f(T_1^n(x, 0)) = ∫_{X×{0,1}} f(x, y) d(µ × ν)

for almost every x ∈ X. Choose f : X × {0, 1} → C defined by f(x, y) = y. Then Fubini's theorem implies that

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} d_n(x) = ∫_{X×{0,1}} y d(µ × ν) = ∫_{{0,1}} y dν = 1/2 .
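Theorem 7.2 is easy to test numerically: with i.i.d. fair binary digits a_k standing in for the digits of a typical x, the running parities d_n average out to 1/2. A Python sketch (the sample size and the RNG seed are arbitrary choices of mine):

```python
import random

def mod2_average(n_digits, rng):
    """Average of d_n, n = 1..n_digits, where d_n = (a_1 + ... + a_n) mod 2
    and the binary digits a_k are drawn i.i.d. with probability 1/2."""
    parity = 0
    total = 0
    for _ in range(n_digits):
        parity ^= rng.getrandbits(1)   # d_n = d_{n-1} + a_n (mod 2)
        total += parity
    return total / n_digits

avg = mod2_average(200_000, random.Random(0))   # close to 1/2 for a typical x
```

Since each new fair digit flips the parity with probability 1/2, the parities d_n are themselves independent fair bits, so the empirical average concentrates tightly around 1/2.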

7.2 Skew Product Transformations

Consider a probability space (X, µ) and a compact abelian group G with the normalized Haar measure ν. Let µ × ν denote the product measure on X × G. Let T be a measure preserving transformation on (X, µ) and let φ : X → G be a measurable function.

Definition 7.3. A skew product transformation T_φ on X × G is defined by

T_φ(x, g) = (T x, φ(x)g) .

Theorem 7.4. A skew product transformation T_φ is measure preserving.

Proof. It is enough to show that (µ × ν)(T_φ^{−1}(A × B)) = (µ × ν)(A × B) for measurable subsets A ⊂ X, B ⊂ G, since measurable rectangles generate the σ-algebra in the product measurable space X × G. Note that

(µ × ν)(T_φ^{−1}(A × B)) = ∫_{X×G} 1_{T_φ^{−1}(A×B)}(x, g) d(µ × ν)
  = ∫_{X×G} 1_{A×B}(T_φ(x, g)) d(µ × ν)
  = ∫_{X×G} 1_A(T x) 1_B(φ(x)g) d(µ × ν)
  = ∫_X 1_A(T x) ( ∫_G 1_{φ(x)^{−1}B}(g) dν(g) ) dµ(x) .


In the last equality we used Fubini's theorem. Since

∫_G 1_{φ(x)^{−1}B}(g) dν(g) = ν(φ(x)^{−1}B) = ν(B) ,

we have

(µ × ν)(T_φ^{−1}(A × B)) = ( ∫_X 1_A(T x) dµ(x) ) ν(B) = µ(A) ν(B) = (µ × ν)(A × B) .

Hence T_φ preserves µ × ν.

Define an isometry on L^2(X × G) by U_{T_φ}(h(x, g)) = h(T_φ(x, g)). Let Ĝ denote the dual group of G and put

H_χ = L^2(X)χ(g) = {f(x)χ(g) : f ∈ L^2(X)} .

Then we have an orthogonal direct sum decomposition

L^2(X × G) = ⊕_{χ∈Ĝ} H_χ .

Since U_{T_φ}(f(x)χ(g)) = f(T x)χ(φ(x))χ(g), each direct summand H_χ is an invariant subspace of U_{T_φ} and U_{T_φ}|_{H_χ} : H_χ → H_χ satisfies

f(x)χ(g) → f(T x)χ(φ(x))χ(g) .

Lemma 7.5. Let T : X → X be an ergodic transformation, G a compact abelian group, and φ : X → G a measurable function. Let T_φ be the skew product transformation. Then the following are equivalent:
(i) T_φ is not ergodic on X × G.
(ii) U_{T_φ} has an eigenvalue 1 with a nonconstant eigenfunction.
(iii) There exists a measurable function f on X of modulus 1 such that χ(φ(x)) = f(x) f(T x)^{−1} for some χ ∈ Ĝ, χ ≠ 1.

Proof. It is clear that (i) and (ii) are equivalent. We will show that (ii) and (iii) are equivalent. Assume that (ii) holds true. Let Σ_{χ∈Ĝ} f_χ(x)χ(g) be a nonconstant eigenfunction of U_{T_φ} corresponding to the eigenvalue 1. Then

Σ_{χ∈Ĝ} f_χ(x)χ(g) = Σ_{χ∈Ĝ} f_χ(T x)χ(φ(x))χ(g) .

By the uniqueness of the Fourier expansion, we have f_χ(x) = f_χ(T x)χ(φ(x)) for every χ. Since T is ergodic, f_1 is constant where f_1 corresponds to χ = 1. Since Σ_χ f_χ(x)χ(g) is not constant, there exists χ ≠ 1 such that f_χ(x) = f_χ(T x)χ(φ(x)) ≠ 0. By taking absolute values we see that |f_χ(x)| = |f_χ(T x)|. Hence f_χ has constant modulus. Conversely, assume that a measurable function f satisfies

χ(φ(x)) = f(x) f(T x)^{−1}

for some χ ≠ 1. Then we have f(T x)χ(φ(x)) = f(x). Hence f(x)χ(g) is a nonconstant eigenfunction of U_{T_φ} for the eigenvalue 1.

Definition 7.6. A function φ : X → G is called a G-coboundary if there exists a function q : X → G satisfying

φ(x) = q(x) q(T x)^{−1} .

Such a function q is not unique in general, and one such function is called a cobounding function for φ.

Remark 7.7. Suppose that T is ergodic and let S^1 = {z ∈ C : |z| = 1}.
(i) The characters of G = S^1 are given by χ_n, χ_n(z) = z^n for z ∈ G. Then the condition in Lemma 7.5(iii) becomes φ(x)^n = f(x)f(T x)^{−1} for some n ≠ 0, i.e., φ(x)^n is a coboundary for some n ≠ 0.
(ii) The characters of G = {−1, 1} are given by 1 and χ where χ(z) = z for z ∈ G. Hence the coboundary condition becomes φ(x) = f(x)f(T x)^{−1}.
(iii) Let G be a compact subgroup of S^1, i.e., either G = S^1 or G = {1, a, . . . , a^{k−1}} for some k ≥ 1. If f(T x)φ(x) = f(x) for some f : X → G, f ≢ 0, then |f(T x)| = |f(x)|. The ergodicity of T implies that f has constant modulus. Thus we may assume that φ(x) = f(x)f(T x)^{−1} with |f(x)| = 1. Such a function f is not unique. For example, cf for any c ∈ C, |c| = 1, satisfies the same functional equation.

Let 1_E be the characteristic function of E ⊂ X. Define d_n(x) ∈ {0, 1} by

d_n(x) ≡ Σ_{k=1}^{n} 1_E(T^{k−1} x)  (mod 2) .

Put

e_n(x) = exp(πi d_n(x)) = 1 − 2 d_n(x) ∈ {±1} .

Then the mod 2 uniform distribution is equivalent to

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = 0 .


Theorem 7.8. Consider the multiplicative group G = {1, −1}. Let T be an ergodic transformation on a probability space (X, µ). Take a measurable subset E ⊂ X. Put φ = exp(πi 1_E) and let T_φ be the skew product transformation on X × G induced by φ.
(i) Suppose that T_φ : X × G → X × G is ergodic. Then for almost every x

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = 0 .

(ii) Suppose that T_φ is not ergodic. Then for almost every x

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = q(x) ∫_X q dµ

where φ(x) = q(x)q(T x)^{−1} and q(x) ∈ {±1}.

Proof. Let ν be the Haar measure on G. Recall that the dual group Ĝ consists of the trivial homomorphism 1 = 1_G and χ defined by χ(y) = y.
(i) For an integrable function f(x, y) on X × G, the Birkhoff Ergodic Theorem implies that

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} f(T_φ^n(x, y)) = ∫_{X×G} f(x, y) d(µ × ν)

for almost every (x, y) ∈ X × G. Since X × G can be regarded as a union of two copies of X, we have

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} f(T_φ^n(x, 1)) = ∫_{X×G} f(x, y) d(µ × ν)

for almost every x ∈ X. Choose f(x, y) = y. Then

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = ∫_{X×{−1,1}} y d(µ × ν) = ∫_{{−1,1}} y dν = 0 .

(ii) By Lemma 7.5, we have φ(x) = q(x)q(T x) and q(x) ∈ {±1} almost everywhere. Hence e_n(x) = q(x)q(T^n x) for every n ≥ 1. Now use

(1/N) Σ_{n=1}^{N} e_n(x) = q(x) (1/N) Σ_{n=1}^{N} q(T^n x)

together with the Birkhoff Ergodic Theorem.

When the sequence (1/N) Σ_{n=1}^{N} e_n(x) fails to converge to zero, Part (ii) provides us with information on the extent of irregularity. That is, the sequence converges to a real-valued function of modulus |∫_X q dµ|.


Theorem 7.9. Assume that the case (ii) holds in Theorem 7.8. If φ(x) = exp(πi 1_E(x)) = q(x)q(T x) for some {±1}-valued function q, then

| ∫_X q dµ | ≤ 1 − µ(E) .

Proof. Since q(x) is real, it is of the form q(x) = exp(πi 1_B(x)) for some measurable subset B, and

exp(πi 1_E(x)) = exp(πi 1_B(x)) exp(πi 1_{T^{−1}B}(x))

and E = B △ T^{−1}B modulo measure zero sets, where △ denotes the symmetric difference of two sets. Hence

µ(E) ≤ µ(B) + µ(T^{−1}B) = 2 µ(B) .

On the other hand,

exp(πi 1_E(x)) = (−q)(x)(−q)(T x) = exp(πi 1_C(x)) exp(πi 1_{T^{−1}C}(x))

where C = X \ B. Hence E = C △ T^{−1}C modulo measure zero sets and

µ(E) ≤ 2 µ(C) = 2(1 − µ(B)) .

Since

∫_X q dµ = 1 × (1 − µ(B)) + (−1) × µ(B) = 1 − 2 µ(B) ,

we have

µ(E) − 1 ≤ ∫_X q dµ ≤ 1 − µ(E) .

Remark 7.10. (i) Roughly speaking, as the size of E increases, the extent of the irregularity becomes smaller.
(ii) It is possible that µ(E) is small and the integral of q is close to 1. For example, choose a small set B of positive measure such that B ∩ T^{−1}B = ∅. Put E = B ∪ T^{−1}B. Take x ∈ X \ E. The orbit of x visits T^{−1}B if and only if it visits B right after it visits T^{−1}B. Hence the orbit of x visits E twice in a row. The Birkhoff Ergodic Theorem implies that the orbit of x visits E after its orbit stays outside E for a long time since the measure of B is small. Hence the sequence e_n(x), n ≥ 1, is given by

1, . . . , 1, −1, 1, . . . , 1, −1, 1, . . . , 1, . . .

where there are many 1's between two −1's.
(iii) There exists a measurable set B ⊂ X such that E = B △ T^{−1}B if and only if exp(πi 1_E) is a coboundary. For, if there exists a real-valued function q such that exp(πi 1_E(x)) = q(x)q(T x), then we let q = exp(πi 1_B) and this implies the required condition.


Theorem 7.11. Take G = Z_2 = {±1}. Let T be an ergodic transformation on X and let φ be a Z_2-valued measurable function on X. If T_φ is not ergodic, then it has exactly two ergodic components of equal measure and they are

{(x, q(x)) : x ∈ X}  and  {(x, −q(x)) : x ∈ X}

where q is a Z_2-cobounding function for φ.

Proof. Since T_φ is not ergodic, Lemma 7.5 implies that φ is a coboundary and there exists q such that φ(x) = q(x)q(T x). Define a bijective mapping Q : X × Z_2 → X × Z_2 by Q(x, z) = (x, q(x)z). Define S : X × Z_2 → X × Z_2 by S(x, z) = (T x, z). Then Q and S are measure preserving, and we have

(Q ∘ T_φ)(x, z) = Q(T x, φ(x)z) = (T x, q(T x)φ(x)z) = (T x, q(x)z) = (S ∘ Q)(x, z) ,

i.e., T_φ is isomorphic to S, which has two ergodic components X × {+1} and X × {−1}. Note that Q^{−1} = Q and that the following diagram commutes:

  X × Z_2 --T_φ--> X × Z_2
     |Q               |Q
  X × Z_2 ---S--->  X × Z_2

Hence T_φ has two ergodic components Q(X × {+1}) = {(x, q(x)) : x ∈ X} and Q(X × {−1}) = {(x, −q(x)) : x ∈ X}.

Remark 7.12. In Theorem 7.11, T_φ|_{X×{+1}} is isomorphic to T since

T_φ(x, q(x)) = (T x, φ(x)q(x)) = (T x, q(T x)q(x)q(x)) = (T x, q(T x)) .

Similarly, T_φ|_{X×{−1}} is isomorphic to T. This can be seen in Figs. 7.2, 7.3 for T x = 2x (mod 1).

7.3 Mod 2 Normality Conditions

In this section we study the skew product of the transformation T x = 2x (mod 1) with the group Z_2 = {−1, 1} from the viewpoint of interval mappings. The subsets

X_+ = X × {+1} ,  X_− = X × {−1}

are identified with the intervals [0, 1), [1, 2), respectively, and T_φ given in Theorem 7.8 is regarded as being defined on the interval X̃ = [0, 2). The new mapping on X̃ is denoted by T̃_φ. If T̃_φ is not ergodic, then it has two ergodic components in X̃, each having Lebesgue measure 1. As in the other chapters, set relations are understood as being so modulo measure zero sets.


Example 7.13. (An ergodic skew product) Choose E = (1/2, 1) and φ = exp(πi 1_E). Then T̃_φ can be regarded as the piecewise linear mapping given in Fig. 7.1. Note that it is ergodic.


Fig. 7.1. An ergodic skew product T_φ defined by E = (1/2, 1) ⊂ X = [0, 1) is regarded as a mapping T̃_φ on X̃ = [0, 2)

Example 7.14. (A nonergodic skew product) Choose E = (1/4, 3/4) and φ = exp(πi 1_E). Put F = (1/2, 1) and q = exp(πi 1_F). Then E = F △ T^{−1}F and φ(x) = q(x)q(T x). Hence T̃_φ can be regarded as the piecewise linear mapping given in Fig. 7.2. Note that it has two ergodic components K_1 = (1/2, 3/2) and K_2 = X̃ − K_1 = (0, 1/2) ∪ (3/2, 2).


Fig. 7.2. A nonergodic skew product T_φ defined by E = (1/4, 3/4) ⊂ X = [0, 1) is regarded as a mapping T̃_φ on X̃ = [0, 2).
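The coboundary identity behind Example 7.14 can be verified pointwise with exact rational arithmetic. The following Python check (written for this note, not from the text) confirms that E = F △ T^{−1}F, i.e. that exp(πi 1_E(x)) = q(x)q(T x) away from the finitely many boundary points:

```python
from fractions import Fraction

F_left, E_left, E_right = Fraction(1, 2), Fraction(1, 4), Fraction(3, 4)

def q(x):    # q = exp(pi*i*1_F) with F = (1/2, 1)
    return -1 if F_left < x < 1 else 1

def phi(x):  # phi = exp(pi*i*1_E) with E = (1/4, 3/4)
    return -1 if E_left < x < E_right else 1

def T(x):    # doubling map T x = 2x (mod 1), exact on rationals
    return (2 * x) % 1

boundary = {E_left, F_left, E_right}
ok = all(phi(x) == q(x) * q(T(x))
         for k in range(1, 1000)
         for x in [Fraction(k, 1000)]
         if x not in boundary)
```

The product q(x)q(T x) is −1 exactly when x lies in precisely one of F and T^{−1}F, which is the symmetric difference stated in the example.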

Example 7.15. (A nonergodic skew product) Choose E = (1/6, 5/6) and φ = exp(πi 1_E). Put F = (1/3, 2/3) and q = exp(πi 1_F). Then E = F △ T^{−1}F and φ(x) = q(x)q(T x). Hence T̃_φ can be regarded as the piecewise linear mapping given in Fig. 7.3. Note that it has two ergodic components K_1 = (0, 1/3) ∪ (2/3, 1) ∪ (4/3, 5/3) and K_2 = X̃ − K_1.


Fig. 7.3. A nonergodic skew product T_φ defined by E = (1/6, 5/6) ⊂ X = [0, 1) is regarded as a mapping T̃_φ on X̃ = [0, 2).

Using T̃_φ as in the previous examples we obtain the following theorem.

Theorem 7.16. Let E = (a, b) ⊂ (0, 1). Then T_φ is not ergodic if E = (1/4, 3/4) or E = (1/6, 5/6).

Proof. In each of Figs. 7.2, 7.3 we can find two ergodic components. Hence T_φ is not ergodic if E = (1/4, 3/4) or E = (1/6, 5/6). These are the only exceptions. For other intervals, T_φ is ergodic. For the proof see [Ah1],[CHN].

Theorem 7.17. (i) For any interval E ≠ (1/6, 5/6) almost every x ∈ [0, 1) is a mod 2 normal number with respect to E.
(ii) For E = (1/6, 5/6) almost every x ∈ [0, 1) is not a mod 2 normal number with respect to E. More precisely, for E = (1/6, 5/6)

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 2/3  for almost every x ∈ (1/3, 2/3) ,
lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 1/3  for almost every x ∈ (0, 1/3) ∪ (2/3, 1) .

Proof. First, Theorem 7.16 implies that, for any interval E such that E ≠ (1/6, 5/6) and E ≠ (1/4, 3/4), almost every x ∈ [0, 1) is mod 2 normal with respect to E. Next, consider E = (1/4, 3/4). Put F = (1/2, 1). Then E = F △ T^{−1}F, and φ(x) = exp(πi 1_E(x)) = q(x)q(T x) where q = exp(πi 1_F). Then ∫_0^1 q dx = 0, and Theorem 7.8(ii) implies that lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 1/2 for almost every x. In this case we have mod 2 normality with respect to E even though exp(πi 1_E) is a coboundary. There now remains the only exceptional case E = (1/6, 5/6), for which the mod 2 normality with respect to E does not hold. Put F = (1/3, 2/3). Then E = F △ T^{−1}F. Now use ∫_0^1 q dx = 1/3.
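Part (ii) can be checked by simulation. The sketch below (parameter sizes are my own choices) keeps T x = 2x (mod 1) exact by doubling an integer numerator, starts from a random point of (1/3, 2/3), and finds the average of d_n near 2/3 rather than 1/2:

```python
import random

def d_average(m, bits, n_iter):
    """Average of d_n for x = m / 2^bits under T x = 2x (mod 1) with
    E = (1/6, 5/6); numerators are doubled exactly as integers."""
    mod = 1 << bits
    parity = 0
    total = 0
    x = m
    for _ in range(n_iter):
        parity ^= (mod < 6 * x < 5 * mod)   # 1_E(x): 1/6 < x/2^bits < 5/6
        total += parity
        x = (2 * x) % mod                   # doubling map on numerators
    return total / n_iter

rng = random.Random(1)
bits, n_iter = 4000, 2000
# a random point of (1/3, 2/3) with enough "fresh" binary digits to spare
m = ((1 << bits) // 3 + rng.getrandbits(bits - 2)) | 1
avg = d_average(m, bits, n_iter)   # expected near 2/3
```

Using many more guard bits than iterations avoids the collapse to 0 that floating-point doubling would produce, so the orbit behaves like that of a typical irrational point.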


7.4 Mod 2 Uniform Distribution for General Transformations

For general transformations, we consider the cases when the corresponding skew products are not ergodic. Since d_n = (1/2)(1 − e_n) we have

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x) = 1/2 − (1/2) q(x) ∫_X q dµ .

In Fig. 7.4 simulations for mod 2 normality are given for T x = {x + θ}, θ = √2 − 1, and T x = {2x}. In Figs. 7.5, 7.6 simulations for failure of mod 2 normality are given for T x = {x + θ}, T x = {2x}, T x = 4x(1 − x) and T x = {βx}, β = (√5 + 1)/2.


Fig. 7.4. Mod 2 uniform distribution: T x = {x + θ}, θ = √2 − 1, with respect to E = (0, θ) (left) and T x = {2x} with respect to E = (0, 1/2) (right)

Example 7.18. For T x = {x + θ}, 0 < θ < 1/2 irrational, take E = (0, 2θ). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (θ, 2θ). Note that ∫ q dµ = 1 − 2θ. See the plot on the left in Fig. 7.5 and Maple Program 7.7.1.

Example 7.19. For T x = {2x}, take E = (1/6, 5/6). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (1/3, 2/3). Note that ∫ q dµ = 1/3. See Fig. 7.3 and the plot on the right in Fig. 7.5.

Example 7.20. For T x = 4x(1 − x), take E = (1/4, 1). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (3/4, 1). Note that ∫ q dµ = 1/3. See the plot on the left in Fig. 7.6.

Example 7.21. For T x = {βx}, β = (√5 + 1)/2, take E = (1 − 1/β, 1). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (0, 1/β). Note that ∫ q dµ = −1/√5. See the plot on the right in Fig. 7.6. Consult Maple Program 7.7.2.


Fig. 7.5. Failure of mod 2 uniform distribution: T x = {x + θ}, θ = √2 − 1, with respect to E = (0, 2θ) (left) and T x = {2x} with respect to E = (1/6, 5/6) (right)


Fig. 7.6. Failure of mod 2 uniform distribution: T x = 4x(1 − x) with E = (1/4, 1) (left) and T x = {βx} with E = (1 − (1/β), 1) (right)

Remark 7.22. For a generalization to the Bernoulli shifts, see [Ah1]. When T is given by T x = {x + θ}, θ irrational, the mod 2 uniform distribution problem was investigated by W.A. Veech [Ve]. For an interval E define

µ_θ(E) = lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x)

if the limit exists. Let t = length(E). He proved that µ_θ(E) exists for every interval E if and only if θ has bounded partial quotients in its continued fraction expansion. For closely related results, see [ILM],[ILR],[MMY].

7.5 Random Walks on the Unit Circle

Random walks on a finite state space may be regarded as Markov shifts. In this section we investigate random walks on the unit circle in terms of ergodicity of skew product transformations. Let Y be the unit circle identified with the unit interval [0, 1), which is an additive group under addition modulo 1. Let ν denote the normalized Lebesgue measure.


A random walker starts at position p on Y and tosses a fair coin. If it comes up heads then the walker moves to the right by the distance α, and if it comes up tails then the walker moves to the right by the distance β. A sequence of coin tosses can be regarded as a binary expansion of a number x ∈ [0, 1), i.e., if x = (x_1, x_2, x_3, . . .)_2 = Σ_{k=1}^{∞} x_k 2^{−k} with x_k ∈ {0, 1}, then the random walker moves depending on whether x_k = 0 or x_k = 1. Heads and tails correspond to the symbols 0 and 1, respectively. We may ask the following questions:

Question A. Can the walker visit any open neighborhood U of a given point?
Question B. How long does it take to return if the walker starts from a given subset V?

These questions can be answered using skew product transformations. Take X = Π_{1}^{∞} {0, 1} with the (1/2, 1/2)-Bernoulli measure µ, and define the shift transformation T : X → X by

T((x_1, x_2, . . .)) = (x_2, x_3, . . .) .

Let µ × ν denote the product measure on X × Y. Define φ : X → Y by

φ(x) = α if x_1 = 0 ,  φ(x) = β if x_1 = 1 ,

and a skew product transformation T_φ : X × Y → X × Y by

T_φ(x, y) = (T x, y + φ(x)) .

Observe that for n ≥ 1 we have

(T_φ)^n(x, y) = (T^n x, y + φ(x) + · · · + φ(T^{n−1} x)) .

To answer Question A, take an open interval U and consider X × U ⊂ X × Y. If T_φ is ergodic, then the Birkhoff Ergodic Theorem implies that the orbit of almost every (x, p) ∈ X × Y visits X × U with relative frequency

(µ × ν)(X × U) = µ(X)ν(U) = ν(U) .

By projecting onto the Y-axis, we see that for almost every p ∈ Y the random walker visits U with relative frequency ν(U).

For Question B, we first note that the first return time to X × V for T_φ on X × Y is equal to the first return time to V for the random walk in Y. If T_φ is ergodic, then Kac's lemma implies that the average of the first return time to X × V for T_φ is equal to

1/((µ × ν)(X × V)) = 1/(µ(X)ν(V)) = 1/ν(V) .

Now we find a condition on the ergodicity of T_φ in terms of α and β.


Theorem 7.23. The skew product transformation T_φ is ergodic if and only if at least one of α, β is irrational.

Proof. (⇒) If both α and β are rational, then there exists an integer k ≥ 2 such that α = a/k and β = b/k for some a, b ∈ N. Then T_φ^n(x, y) belongs to the subset

X × {y, y + 1/k, . . . , y + (k−1)/k}

for every n ≥ 0. Take F = X × (0, 1/(2k)) ⊂ X × Y. Then the orbit of (x, y) ∈ F never visits the subset X × ⋃_{j=1}^{k} ((2j−1)/(2k), j/k), which contradicts the Birkhoff Ergodic Theorem, and T_φ is not ergodic.

(⇐) Suppose that T_φ is not ergodic. Lemma 7.5 implies that there exist an integer k ≠ 0 and q(x) ≠ 0 such that q(x) = exp(2πikφ(x)) q(T x) where x = (x_1, x_2, . . .). Put ψ(x) = exp(2πikφ(x)). We may express the relation as

q(x_1, x_2, . . .) = ψ(x_1) q(x_2, x_3, . . .)

since φ depends only on x_1. Note that q(x_2, x_3, . . .) = ψ(x_2) q(x_3, x_4, . . .) and so on. Hence for every n ≥ 1

(∗)  q(x_1, x_2, . . .) = ψ(x_1) · · · ψ(x_n) q(x_{n+1}, x_{n+2}, . . .) .

Let µ_{1,n} be the measure on X_{1,n} = Π_{1}^{n} {0, 1} such that every point (a_1, . . . , a_n) has equal measure 2^{−n}, and let µ_{n+1,∞} be the (1/2, 1/2)-Bernoulli measure on X_{n+1,∞} = Π_{n+1}^{∞} {0, 1}. Note that µ on X may be regarded as a product measure of µ_{1,n} and µ_{n+1,∞}. Integrating both sides of (∗) on X, we obtain

∫_X q dµ = ( ∫_{X_{1,n}} ψ(x_1) · · · ψ(x_n) dµ_{1,n} ) ( ∫_{X_{n+1,∞}} q(x_{n+1}, x_{n+2}, . . .) dµ_{n+1,∞} ) .

Since

∫_{X_{1,n}} ψ(x_1) · · · ψ(x_n) dµ_{1,n} = ( ∫_{X_{1,1}} ψ(x_1) dµ_{1,1} )^n = ( (e^{2πikα} + e^{2πikβ})/2 )^n

and since

∫_{X_{n+1,∞}} q(x_{n+1}, x_{n+2}, . . .) dµ_{n+1,∞} = ∫_X q dµ ,

we have

(∗∗)  ∫_X q dµ = ( (e^{2πikα} + e^{2πikβ})/2 )^n ∫_X q dµ .

Suppose that ∫_X q dµ = 0. Integrating both sides of (∗) on a cylinder set [a_1, . . . , a_n], we have

∫_{[a_1,...,a_n]} q dµ = 2^{−n} ψ(a_1) · · · ψ(a_n) ∫_X q dµ = 0 .

Hence the integral of q on every cylinder set is equal to zero, and so q = 0, which is a contradiction. Thus ∫_X q dµ ≠ 0, and from (∗∗) with n = 1 we have

(e^{2πikα} + e^{2πikβ})/2 = 1 .

Thus e^{2πikα} = e^{2πikβ} = 1, which is possible only when both α and β are rational.
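Both directions of Theorem 7.23 are visible in a direct simulation of the walk itself. In the sketch below (the step values, the interval, and the seed are illustrative choices of mine), rational steps trap the walker on a finite coset, while an irrational step lets it enter a fixed target interval:

```python
import math
import random

def walk(alpha, beta, coin_bits, y0):
    """Successive positions of the walker: step alpha on symbol 0, beta on 1."""
    y, path = y0, []
    for b in coin_bits:
        y = (y + (beta if b else alpha)) % 1.0
        path.append(y)
    return path

rng = random.Random(2)
bits = [rng.getrandbits(1) for _ in range(10_000)]

# both steps rational (alpha = 1/4, beta = 1/2): the walk stays on the
# finite coset {y0 + j/4}, so T_phi cannot be ergodic
trapped = walk(0.25, 0.5, bits, y0=0.1)
coset = {round((0.1 + j * 0.25) % 1.0, 9) for j in range(4)}
stays_on_coset = all(round(y, 9) in coset for y in trapped)

# an irrational step (alpha = sqrt(3)/20, beta = -alpha): the walk
# eventually enters the fixed interval (0.45, 0.55)
free = walk(math.sqrt(3) / 20, -math.sqrt(3) / 20, bits, y0=0.1)
hits_interval = any(0.45 < y < 0.55 for y in free)
```

The rational case realizes the trapped set of the (⇒) direction; the irrational case illustrates Question A, though a simulation of course only suggests, and does not prove, ergodicity.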

When α is irrational and β = −α, the problem was studied in [Su] from the viewpoint of number theory. An orbit of T_φ of length 300 is plotted in Fig. 7.7, where we choose α = √3/20 and the starting point (x_0, y_0) = (π − 3, √2 − 1.2). We identify Π_{1}^{∞} {0, 1} and the shift transformation with [0, 1] and x → 2x (mod 1), respectively. The orbit points are uniformly distributed due to the ergodicity. Positions of a random walker are obtained by projecting the orbit points of T_φ onto the vertical axis. Consult Maple Program 7.7.3. See also Maple Program 7.7.4.


Fig. 7.7. An orbit of Tφ of length 300 with α irrational and β = −α

From the same simulation data we plot the first 50 positions of the random walker. In Fig. 7.8 the walker starts from y_0 ≈ 0.4 and each random walk step depends on whether {2^{n−1} x_0} belongs to [0, 1/2) or [1/2, 1), where 1 ≤ n ≤ 50.


Fig. 7.8. The ﬁrst 50 positions of a random walker with α irrational and β = −α


7.6 How to Sketch a Cobounding Function

Let X = [0, 1] and let Y denote the unit circle group identified with [0, 1). Suppose that T : X → X is ergodic with respect to an invariant probability measure. We use the additive notation for the group operation in Y. For φ : X → Y define a skew product T_φ as before. As a concrete example, consider T x = x + θ (mod 1), θ irrational. Choose a continuous function φ : [0, 1) → R such that φ(0) = φ(1) (mod 1). Then φ is regarded as a mapping from Y into itself, and so it has a winding number, which plays an important role in determining ergodic properties of T_φ when φ is sufficiently smooth. See [ILM],[ILR]. If the winding number is nonzero, then φ is not a coboundary up to addition of constants and T_φ is ergodic. See Fig. 7.9 for an orbit of length 1000 with φ(x) = x, which has winding number equal to 1. The vertical axis represents Y. The orbit points are uniformly distributed since T_φ is ergodic.


Fig. 7.9. An orbit of Tφ is uniformly distributed, where T x = x + θ (mod 1) and φ(x) = x

Now consider the case when T_φ is not ergodic. There exist an integer k ≠ 0 and a measurable function q(x) such that

k φ(x) = q(T x) − q(x)  (mod 1) .

Instead of finding q explicitly, we present a method of plotting its graph {(x, q(x)) : x ∈ X} in the product space X × Y based on the mathematical pointillism. First, we observe that

k Σ_{i=0}^{n−1} φ(T^i x) = Σ_{i=0}^{n−1} (q(T^{i+1} x) − q(T^i x)) = q(T^n x) − q(x)  (mod 1) .

Take a starting point (x_0, y_0) for an orbit under T_φ. Then

(T_{kφ})^n(x_0, y_0) = (T^n x_0, y_0 + q(T^n x_0) − q(x_0))  (mod 1) .

What we have are points of the form


(x, q(x) + y_0 − q(x_0))  (mod 1)

and they are on the graph

y = q(x) + y_0 − q(x_0)  (mod 1) .

The ergodicity of T implies that the points T^n(x_0), n ≥ 0, are uniformly distributed for almost every x_0. If q(x) exists and is continuous, then uniform distribution of the sequence {T^n(x_0)} in X usually implies that the orbit {(T_{kφ})^n(x_0, y_0) : n ≥ 0} is dense in the graph of y = q(x) + y_0 − q(x_0) (mod 1). Thus, when we plot sufficiently many points of the form (T_{kφ})^n(x_0, y_0) in X × Y, we have a rough sketch of the graph y = q(x) + y_0 − q(x_0) (mod 1). Since a cobounding function is determined up to addition of constants, we may choose q(x) + y_0 − q(x_0) as q. Observe that we automatically have q(x_0) = y_0 in this case.

For example, consider T x = {x + θ} and a coboundary φ of the form φ(x) = q(T x) − q(x) for some real-valued function q(x), 0 ≤ x < 1. Observe that φ must satisfy ∫_0^1 φ(x) dx = 0. Choose φ with winding number equal to zero and plot an orbit of length 1000 under T_φ with the starting point (x_0, y_0) = (0, 0.7). See Fig. 7.10 and Maple Program 7.7.5.


Fig. 7.10. An orbit of T_φ is dense in the graph of a cobounding function, where T x = {x + θ}, θ = (√5 − 1)/2, for φ(x) = (1/2) sin(2πx), φ(x) = x(1 − x) − 1/6 and φ(x) = 1/4 − |x − 1/2| (from left to right)
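The pointillism can be tested against a case where q is known in advance: build φ = q∘T − q from a chosen q, iterate the skew product, and check that every orbit point lies on the translated graph. A Python sketch for the k = 1 case (the particular q below is my own choice, not one of the functions in Fig. 7.10):

```python
import math

theta = (math.sqrt(5) - 1) / 2     # golden rotation number

def q(x):                          # a known cobounding function, chosen for the test
    return 0.3 * math.sin(2 * math.pi * x)

def phi(x):                        # phi = q o T - q is a coboundary by construction
    return q((x + theta) % 1.0) - q(x)

x0, y0 = 0.0, 0.7
x, y = x0, y0
on_graph = []
for _ in range(1000):
    y = (y + phi(x)) % 1.0         # skew product: T_phi(x, y) = (T x, y + phi(x))
    x = (x + theta) % 1.0
    # circle distance from y to the graph y = q(x) + y0 - q(x0) (mod 1)
    err = (y - (q(x) + y0 - q(x0))) % 1.0
    on_graph.append(min(err, 1.0 - err))
```

The sum of φ along the orbit telescopes, so the deviations collected in `on_graph` are pure floating-point rounding and stay far below plotting resolution.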

Lemma 7.24. For t ∈ R define ‖t‖ = min{|t − n| : n ∈ Z}. Then there exist constants A, B > 0 such that

A ‖t‖ ≤ |e^{2πit} − 1| ≤ B ‖t‖ .

Proof. Define

f(t) = |e^{2πit} − 1| / ‖t‖ for t ∉ Z, and f(t) = 2π for t ∈ Z.

Then f is continuous and periodic. It is continuous on the compact set [0, 1], and there exist a minimum and a maximum of f, say A and B, respectively. Then A ≤ f(t) ≤ B for every t. Since f never vanishes, we have A > 0.
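Since |e^{2πit} − 1| = 2|sin πt|, the extreme values of f are A = 4 (attained at t = 1/2) and B = 2π (the limit as t → 0). A quick numerical check of the two-sided bound (grid and tolerances are mine):

```python
import cmath
import math

def circle_norm(t):
    """||t||: distance from t to the nearest integer."""
    return abs(t - round(t))

# check 4*||t|| <= |e^{2*pi*i*t} - 1| <= 2*pi*||t|| on a grid
ok = True
for k in range(1, 2000):
    t = k / 2000
    val = abs(cmath.exp(2j * math.pi * t) - 1)
    ok &= (4 * circle_norm(t) - 1e-12
           <= val
           <= 2 * math.pi * circle_norm(t) + 1e-12)
```

The small tolerances only absorb floating-point rounding; the lower bound is an equality at t = 1/2 and the upper bound is approached as t tends to an integer.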


We may use Fourier series expansionto ﬁnd q. Let φ(x) = bn e2πinx the2πinx 2πinθ 2πinx and q(x) = cn e . Then q(x + θ) = cn e e and a necessary condition for φ(x) = q(x + θ) − q(x) is given by bn = (e2πinθ − 1) cn . 2πinθ − 1) converges to 0 suﬃciently rapidly as |n| → ∞, then If bn /(e q(x) = cn e2πinx exists in a suitable sense. By Lemma 7.24, the speed of convergence to 0 of the sequence bn /(e2πinθ − 1) as |n| → ∞ is determined by |bn |/||nθ||, especially when ||nθ|| is close to 0. For estimation of |bn | as |n| → ∞, consult Fact 1.30. More precisely, if θ = [a1 , a2 , a3 , . . .] is the continued fraction expansion of θ, and if ak ≤ A for every k, then

qₖ ≤ Aqₖ₋₁ + qₖ₋₂ < (A + 1)qₖ₋₁. Take an integer j such that qₖ ≤ j < qₖ₊₁. Then ||jθ|| ≥ ||qₖθ|| and j||jθ|| ≥ j||qₖθ|| ≥ qₖ||qₖθ||. From Fact 1.36(i) we have

1/((A + 2)qₖ) < 1/(qₖ₊₁ + qₖ) < ||qₖθ|| < 1/qₖ₊₁ .

Hence j||jθ|| > 1/(A + 2) for every j ≥ 1.

> theta:=sqrt(2)-1:
Consider an interval E = [a, b] where a = 0 and b = 2θ.
> a:=0: b:=2*theta:
> evalf(b);
0.828427124
> Ind_E:=x->piecewise(a<=x and x<b,1,0):
> q:=x->piecewise(theta<=x,1,-1):
> T:=x->frac(x+theta):
> plot(q(x)*q(T(x)),x=0..1);
See the right graph in Fig. 7.12.
> SampleSize:=100:
> N:=50:
As N increases, the simulation result converges to the theoretical prediction.
> for k from 1 to SampleSize do
> seed[k]:= evalf(frac(k*Pi)):
> od:
Compute the orbits of length N starting from seed[k].
> Digits:=20:
Since the entropy of an irrational translation modulo 1 is equal to zero, we do not need many digits. For more information consult Sect. 9.2.
> for k from 1 to SampleSize do
> orbit[k,0]:=seed[k]:
> for n from 1 to N-1 do
> orbit[k,n]:= T(orbit[k,n-1]):
> od:
> od:
Find dn(xk) at xk = {kπ}.
> for k from 1 to SampleSize do
> d[k,1]:=modp(Ind_E(orbit[k,0]),2):
> for n from 2 to N do
> d[k,n]:=modp(d[k,n-1]+Ind_E(orbit[k,n-1]),2):
> od:
> od:
> for k from 1 to SampleSize do
> z[k]:=add(d[k,n],n=1..N)/N:
> od:
> rho:=x->1:
Find the exact value of C.
> theta:=sqrt(2)-1:
> C:=int(q(x)*rho(x),x=0..1);
C := −2√2 + 3
> evalf(%);
0.171572876
> g1:=plot((1-C*q(x))/2.,x=0..1,y=0..1):
> g2:=pointplot([seq([seed[k],z[k]],k=1..SampleSize)]):
> display(g1,g2);
See Fig. 7.5.
>

7.7 Maple Programs


7.7.2 Ergodic components of a nonergodic skew product transformation arising from the beta transformation

We find ergodic components of a nonergodic skew product transformation Tφ arising from the beta transformation T. Choose E ⊂ [0, 1] and take φ = 1_E. When Tφ is nonergodic, we find its ergodic components by tracking orbit points. Since an orbit stays inside an ergodic component, if we take a sufficiently long orbit then the orbit points depict an ergodic component; in other words, we employ the mathematical pointillism.
> with(plots):
> Digits:=20:
> beta:=(sqrt(5.0)+1)/2:
> T:=x->frac(beta*x):
Consider an interval E = [a, b].
> a:=1-1/beta;
a := 0.38196601125010515179
> b:=1:
Let S denote Tφ, which is defined on [0, 2], for notational simplicity.
> S:=x->piecewise(0<=x and x<a, T(x), a<=x and x<1, T(x)+1, 1<=x and x<1+a, T(x-1)+1, T(x-1)):
> seed[0]:=0.782:
> Length:=500:
> for i from 1 to Length do seed[i]:=S(seed[i-1]): od:
Plot the points (x, S(x)) on the graph of y = S(x).
> g5:=pointplot([seq([seed[i-1],seed[i]],i=1..Length)], symbol=circle):
> display(g1,g2,g3,g4,g5,s1);
See Fig. 7.13 where the transformation restricted to an ergodic component [1/β, 1 + 1/β] ⊂ [0, 2] is represented by the thick line segments.
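The invariance of the component [1/β, 1 + 1/β] can be verified outside Maple as well. The Python sketch below (an independent reimplementation, not the book's program) identifies a point (x, y) ∈ [0, 1] × {0, 1} with s = x + y ∈ [0, 2], iterates the skew product with φ = 1_{[1−1/β, 1]}, and checks that the orbit starting at s = 0.782 never leaves the claimed ergodic component.

```python
import math

beta = (math.sqrt(5) + 1) / 2
a = 1 - 1 / beta                        # E = [a, 1], phi = indicator of E
T = lambda x: (beta * x) % 1.0
phi = lambda x: 1.0 if x >= a else 0.0

def S(s):
    """T_phi on [0, 2] via the identification s = x + y with y in {0, 1}."""
    x, y = (s, 0) if s < 1 else (s - 1, 1)
    return T(x) + (y + phi(x)) % 2

lo, hi = 1 / beta, 1 + 1 / beta         # claimed ergodic component [1/beta, 1 + 1/beta]
s = 0.782
inside = True
for _ in range(300):
    s = S(s)
    inside = inside and (lo - 1e-6 <= s <= hi + 1e-6)
```

A one-step case check (splitting [0, 2] at a, 1, and 1 + a) shows the set {y = 0, x ∈ [1/β, 1]} ∪ {y = 1, x ∈ [0, 1/β]} is invariant, which is what the assertion tests numerically.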

Fig. 7.13. An orbit of Tφ is inside an ergodic component where T is the beta transformation

7.7.3 Random walks by an irrational angle on the circle

Consider random walks by an irrational angle on the unit circle.
> with(plots):
> Digits:=1000:
> alpha:=sqrt(3.)/20: beta:=1-alpha:
Define φ. See Fig. 7.14.
> phi:=x->piecewise(0<=x and x<1/2,alpha,beta):
> T_phi:=(x,y)->(frac(2.0*x),frac(y+phi(x))):
> Length:=300:
Choose the starting point of an orbit.
> seed[0]:=(evalf(Pi)-3,sqrt(2.0)-1.2):
> for k from 1 to Length do
> seed[k]:=T_phi(seed[k-1]):
> od:
> Digits:=10:


Plot an orbit of the skew product transformation. The shift transformation is identified with T x = 2x (mod 1).
> g1:=pointplot([seq([seed[n]],n=1..Length)]):
> g2:=listplot([[1,0],[1,1],[0,1]],labels=["x","y"]):
> display(g1,g2);
See Fig. 7.7. Plot the first 50 consecutive positions of a random walker as time passes by.
> g3:=listplot([seq([n,seed[n][2]],n=0..50)]):
> g4:=pointplot([seq([n,seed[n][2]],n=0..50)], symbol=circle,labels=["n","y"]):
> display(g3,g4);
See Fig. 7.8.

7.7.4 Skew product transformation and random walks on a cyclic group

Let G = Zn = {0, 1, . . . , n − 1} and X = ∏₁^∞ {1, 2, 3, 4}. Define φ : X → Zn by φ(x) = φ(x1, x2, x3, . . .) = g[k] if x1 = k, 1 ≤ k ≤ 4. Define Tφ on X × Zn by Tφ(x, y) = (T x, y + φ(x)). It describes the random walks on the 1-dimensional lattice Zn with equal probability 1/4 of moving in one of four ways y → y + g[k] (mod n), y ∈ Zn, 1 ≤ k ≤ 4.
> g[1]:=1: g[2]:=-1: g[3]:=2: g[4]:=0:
This defines φ(x).
> SampleSize:=10000:
> n:=10:
> Start:=2:
Choose a starting position of a random walker.
> for i from 1 to SampleSize do
> Position:=Start mod n;
> k:=rand(1..4);
> Position:=Position+g[k()] mod n;
> T:=1;
> while modp(Position,n) <> Start do
> Position:=Position+g[k()] mod n;
> T:=T+1;
> od;
> Arrival_Time[i]:=T;
> od:
Suppose a walker starts from y0. In terms of Tφ, this corresponds to a starting point in X × {y0}, which has measure 1/n. Kac's lemma implies that the average of the first return time to y0 is equal to n if Tφ is ergodic.
> evalf(add(Arrival_Time[i],i=1..SampleSize)/SampleSize);
9.992300000
>
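The Kac's lemma prediction is easy to reproduce in Python. This sketch (a reimplementation of the simulation idea, with a fixed seed for reproducibility) walks on Z_10 with steps {1, −1, 2, 0} chosen uniformly and averages the first return time to the starting position, which should be close to n = 10.

```python
import random

random.seed(0)
g = [1, -1, 2, 0]          # the four step sizes, as in the program above
n = 10                     # walk on Z_n
start = 2

def first_return_time():
    """Number of steps until the walker first returns to its starting point."""
    pos = (start + random.choice(g)) % n
    t = 1
    while pos != start:
        pos = (pos + random.choice(g)) % n
        t += 1
    return t

samples = 20000
mean = sum(first_return_time() for _ in range(samples)) / samples
```

The walk is irreducible (step 1 generates Z_n) and aperiodic (step 0), and the uniform measure is stationary, so the expected return time is exactly n.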


7.7.5 How to plot points on the graph of a cobounding function

We explain how to plot points on the graph of a cobounding function q(x) using the mathematical pointillism.
> with(plots):
Define a function φ(x) = 1/4 − |x − 1/2|.
> phi:=x->1/4-abs(x-1/2):
Plot y = φ(x).
> plot(phi(x),x=0..1);
See Fig. 7.15.

Fig. 7.15. y = φ(x)

Recall that frac(x) does not produce the fractional part of x < 0. Consult Subsect. 1.8.6.
> fractional:=x->x-floor(x):
> Digits:=30:
> m:=1:
Observe what happens as m increases.
> theta:=evalf((-m + sqrt(m^2+4))/2):
Define a skew product transformation Tφ.
> T_phi:=(x,y)->(frac(x+theta),fractional(y+phi(x))):
Choose a cobounding function q(x) such that q(0) = 0.7.
> seed[0]:=(0,0.7):
> Length:=1000:
> for i from 1 to Length do
> seed[i]:=T_phi(seed[i-1]):
> od:
> Digits:=10:
> pointplot([seq([seed[i]],i=1..Length)],symbolsize=1);
See the right graph in Fig. 7.10.
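The caveat about frac(x) for negative arguments applies in other languages too. In Python, x − floor(x) gives the correct fractional part for all real x, matching the helper fractional above:

```python
import math

def fractional(x):
    """Fractional part in [0, 1) that also works for negative x: x - floor(x)."""
    return x - math.floor(x)

vals = [fractional(-0.3), fractional(2.7), fractional(-2.0)]
```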


7.7.6 Fourier series of a cobounding function

Approximate a cobounding function q(x) by the Fourier series.
> with(plots):
> phi:=x->1/4-abs(x-1/2):
Recall that frac(x) does not produce the fractional part of x < 0.
> fractional:=x->x-floor(x):
See Fig. 1.9.
> m:=1:
Try other values for m.
> theta:=evalf((-m + sqrt(m^2+4))/2):
Compute the nth Fourier coefficient bn of φ as in Subsect. 1.8.9.
> N:=30:
> for n from 1 to N do
> b[n]:=1/2*((-1)^n-1)/Pi^2/n^2:
> od:
Find an approximate solution q1(x) for a coboundary.
> q1:=x->2*Re(add(b[n]/(exp(2*Pi*I*n*theta)-1)* exp(2*Pi*I*n*x),n=1..N));
q1 := x → 2 ℜ(add(b[n] e^(2 I π n x) / (e^(2 I π n θ) − 1), n = 1..N))
Modify q1(x) so that the new function qF(x) satisfies qF(0) = 0.7.
> q_F:=x->evalf(fractional(q1(x)-q1(0)+0.7)):
> plot(q_F(x),x=0..1);
See the left graph in Fig. 7.16. Compare it with the right graph in Fig. 7.10.

Fig. 7.16. y = qF (x) (left) and y = qF (x + θ) − qF (x) (right)

> plot(q_F(frac(x+theta)) - q_F(x),x=0..1);
See the right graph in Fig. 7.16. Compare it with Fig. 7.15.
>


7.7.7 Numerical computation of a lower bound for n||nθ||

Check the existence of K > 0 such that n||nθ|| ≥ K for every n ≥ 1. Recall that the distance from x ∈ R to Z satisfies ||x|| = min({x}, 1 − {x}).
> with(plots):
> Digits:=100:
> m:=1:
Try other values for m.
> theta:=evalf((-m + sqrt(m^2+4))/2):
> L:=300:
> for n from 1 to L do
> dist[n]:=min(1-frac(n*theta),frac(n*theta)):
> od:
> Digits:=10:
In the following we are interested in the lower bound, and any value greater than 3 is omitted. In g3, the Greek letter θ is represented by q in SYMBOL font.
> g1:=pointplot([seq([n,n*dist[n]],n=1..L)],symbol=circle, labels=["n","n||n||"]):
> g2:=plot(0,x=0..L,y=0..3):
> g3:=textplot([[0,0.5,‘q‘]],font=[SYMBOL,10]):
> display(g1,g2,g3);
See Fig. 7.17.

Fig. 7.17. n||nθ||, 1 ≤ n ≤ 300, is bounded from below where θ = (√5 − 1)/2

To check that n*dist[n] is bounded from below, we check the boundedness of 1/n/dist[n]. > pointplot([seq([n,1/n/dist[n]],n=1..L)],symbol=circle); See Fig. 7.11.
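The same experiment takes a few lines in Python (an independent sketch, not the book's program): for θ = (√5 − 1)/2 the quantities n||nθ|| stay bounded away from 0, as predicted by the continued fraction bound above.

```python
import math

theta = (math.sqrt(5) - 1) / 2     # m = 1 gives the golden-ratio case

def dist(t):
    """||t|| = min({t}, 1 - {t})."""
    f = t - math.floor(t)
    return min(f, 1 - f)

values = [n * dist(n * theta) for n in range(1, 301)]
lower = min(values)                # numerical lower bound K over 1 <= n <= 300
```

For this θ the minimum is attained at n = 1 and the values at the Fibonacci denominators cluster near 1/√5 ≈ 0.447.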

8 Entropy

What is randomness? How much randomness is there in a set of data? To utilize any possible definition of randomness in the study of unpredictable phenomena, we first define a concept of randomness in a formal mathematical way and see whether it leads to any reasonably acceptable or applicable conclusion. In 1948 C. Shannon [Sh1] defined the amount of randomness, and called it entropy, which was the starting point of information theory. Entropy is the central idea in his theory and it measures the amount of randomness in a sequence on finitely many symbols. Shannon may be regarded as the founder of modern digital communication technology. A.N. Kolmogorov and Ya. Sinai extended the definition of entropy and used it to study general measure preserving transformations in ergodic theory. In data compression, which is one of the main subjects in information theory, a given transformation is always a shift transformation on a shift space, and this is why a transformation is not mentioned explicitly. The most important result on entropy is the Shannon–McMillan–Breiman Theorem. For the entire collection of Shannon's work see [Sh2]. For brief surveys on Shannon's lifelong achievement consult [BCG]. For the physical meaning of entropy see [At].

8.1 Definition of Entropy

The following is a motivational introduction to entropy. First, imagine that there is an experiment with the possible outcomes given by a finite set A = {a1, . . . , ak}. Suppose that the probability of obtaining the result ai is pi, 0 ≤ pi ≤ 1, p1 + · · · + pk = 1. If one of the ai, let us say a1, occurs with probability p1 that is close to 1, then in most trials the outcome would be a1. If a1 indeed happens, we would not be surprised. If we quantitatively measure the magnitude of 'being surprised' in a certain sense, then it would be almost equal to zero if p1 is close to 1. There is not much information gained after the experiment. How can we define the amount of information? Here is a clue from biology: When we receive a signal or an input or a stimulus from a source, we


perceive it using our sensory organs - eyes, ears, skin, nose, etc. It is a well established fact that the magnitude of our perception is proportional to the logarithm of the magnitude of the stimulus. In this sense, the definition of the Richter scale for earthquakes and the method of classifying stars according to their brightness are very well suited to our daily experiences. It is a reasonable guess or an approximation to the real world that, for each possible outcome ai with probability pi, the magnitude of surprise given by the source or stimulant would be equal to 1/pi. Hence the perceived magnitude of stimulus or surprise would be roughly equal to log(1/pi). In short,

information = − log (probability) .

Since any ai can happen, we take the expectation or average over all the ai, thus obtaining the value

Σ_i pi log(1/pi) .

It is called the entropy of A. (We adopt the convention that 0 × log 0 = 0.) A rigorous mathematical formulation is given as follows: An experiment corresponds to a measurable partition P = {E1, . . . , Ek} of a probability space (X, µ). Define the entropy of a partition P by

H(P) = Σ_i pi log(1/pi) = − Σ_i pi log pi

where pi = µ(Ei) and the base of the logarithm is 2 throughout this chapter unless otherwise stated. (Theoretically it does not matter which base we choose for logarithms. But, when the partition has two subsets, the possible maximal value for entropy is log 2, and if we choose the logarithmic base 2, then the maximal value for entropy is log 2 = 1. This choice is convenient especially when we discuss the data compression ratio of a binary sequence in Chap. 14.) In this book we always assume that a partition of a probability measure space consists of measurable subsets. By removing subsets of measure zero from P if necessary, we may assume that pi = µ(Ei) > 0 for every i.

Lemma 8.1. If a partition P consists of k subsets, then H(P) ≤ log k.

Proof. Let P = {E1, . . . , Ek}. Recall that ln a/ln 2 = log2 a for a > 0.
Hence it is enough to show that

− Σ_{i=1}^{k} pi ln pi ≤ ln k

using the Lagrange multiplier method. (Here we use the natural logarithm for the simplicity of diﬀerentiation.) Put

f(p1, . . . , pk, λ) = − Σ_{i=1}^{k} pi ln pi + λ ( Σ_{i=1}^{k} pi − 1 ) .


Solving

∂f/∂pi = − ln pi − 1 + λ = 0 , 1 ≤ i ≤ k ,

together with the constraint p1 + · · · + pk = 1, we observe that the maximum is obtained only when p1 = · · · = pk = 1/k. Hence the maximum is equal to ln k.
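Lemma 8.1 is easy to see numerically. The following Python sketch (an illustration, not from the text) computes H(P) with the 0 log 0 = 0 convention and shows the bound H(P) ≤ log k with equality at the uniform distribution.

```python
import math

def H(ps):
    """Entropy (base 2) of a probability vector, with the convention 0 * log 0 = 0."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

h_fair = H([0.5, 0.5])       # two subsets, uniform: attains log2(2) = 1
h_uniform8 = H([1 / 8] * 8)  # eight subsets, uniform: attains log2(8) = 3
h_skew = H([0.25, 0.75])     # non-uniform: strictly below the bound
```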

Given two partitions P and Q we let P ∨ Q be the partition, called the join of P and Q, consisting of the subsets of the form B ∩ C where B ∈ P and C ∈ Q. The join of more than two partitions is defined similarly. Suppose that T : X → X is a measure preserving transformation. For a partition P we let T^{−j}P = {T^{−j}E1, . . . , T^{−j}Ek} and

Pn = P ∨ T^{−1}P ∨ · · · ∨ T^{−(n−1)}P

and define the entropy of T with respect to P by

h(T, P) = lim_{n→∞} (1/n) H(Pn) .

It is known that the sequence here is monotonically decreasing and that the limit exists. Finally we define the entropy h(T) of T by

h(T) = sup_P h(T, P)

where P ranges over all finite partitions. When there is a need to emphasize the measure µ under consideration, we write h_µ(T) in place of h(T). Sometimes h(T) is called the Kolmogorov–Sinai entropy. Suppose that we are given a measure preserving transformation T on (X, A). A partition P is said to be generating if the σ-algebra generated by T^{−n}P, n ≥ 0, is A. It is known that if P is generating then h(T) = h(T, P).

Theorem 8.2. If a partition P consists of k subsets, then h(T, P) ≤ log k.

Proof. Let P = {E1, . . . , Ek}. Then Pn consists of at most k^n subsets. By

Lemma 8.1 we have H(Pn) ≤ n log k, and hence H(Pn)/n ≤ log k.

Example 8.3. An irrational translation T x = x + θ (mod 1) has entropy zero. For the proof, take a generating partition P = {[0, 1/2), [1/2, 1)}. Then Pn consists of 2n disjoint intervals (or arcs) if we identify the unit interval [0, 1) with the unit circle. By Lemma 8.1 we have H(Pn) ≤ log 2n, and h(T) = lim_{n→∞} (1/n)H(Pn) = 0. See Fig. 8.1 where θ = √2 − 1. Observe that the line graph is under the curve y = log(2x)/x.


Fig. 8.1. y = H(Pn )/n, 1 ≤ n ≤ 15, for T x = {x + θ} together with y = log(2x)/x
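The count "Pn consists of 2n arcs" can be checked by brute force: code many random points with respect to P = {[0, 1/2), [1/2, 1)} and count the distinct words of length n that actually occur. A Python sketch (an illustration, not from the text):

```python
import math
import random

random.seed(1)
theta = math.sqrt(2) - 1
n = 10

def code(x, length):
    """Symbolic coding of the orbit of x under T x = {x + theta} w.r.t. {[0,1/2), [1/2,1)}."""
    word = []
    for _ in range(length):
        word.append(0 if x < 0.5 else 1)
        x = (x + theta) % 1.0
    return tuple(word)

# each n-cylinder is an arc; the 2n partition endpoints cut the circle into at most 2n arcs
blocks = {code(random.random(), n) for _ in range(5000)}
```

At most 2n = 20 of the 2^n = 1024 possible binary words occur, which is why H(Pn)/n → 0.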

Recall Definition 2.22. Let (X, µ) and (Y, ν) be two probability spaces. Suppose that T : X → X and S : Y → Y preserve µ and ν, respectively. If φ : X → Y is measure preserving and satisfies φ ◦ T = S ◦ φ, then S is called a factor of T and φ a factor map. The following result shows that a factor map cannot increase entropy.

Theorem 8.4. If S : Y → Y is a factor of T : X → X, then h(S) ≤ h(T).

Proof. Let φ : X → Y be a factor map and let Q = {F1, . . . , Fk} be a generating partition of Y. Put Ei = φ^{−1}(Fi), 1 ≤ i ≤ k. Then µ(∪_i Ei) = 1 and hence P = φ^{−1}Q = {E1, . . . , Ek} is a partition of X. Note that P need not be generating with respect to T. Put Qn = ∨_{j=0}^{n−1} S^{−j}Q and Pn = ∨_{j=0}^{n−1} T^{−j}P. Take E ∈ Pn. Then E is of the form E = E_{i1} ∩ T^{−1}E_{i2} ∩ · · · ∩ T^{−(n−1)}E_{in} and hence

E = φ^{−1}F_{i1} ∩ T^{−1}φ^{−1}F_{i2} ∩ · · · ∩ T^{−(n−1)}φ^{−1}F_{in}
= φ^{−1}F_{i1} ∩ φ^{−1}S^{−1}F_{i2} ∩ · · · ∩ φ^{−1}S^{−(n−1)}F_{in}
= φ^{−1}(F_{i1} ∩ S^{−1}F_{i2} ∩ · · · ∩ S^{−(n−1)}F_{in}) .

Hence Pn = φ^{−1}Qn = {φ^{−1}F : F ∈ Qn} for every n, and

H(Pn) = − Σ_{F∈Qn} µ(φ^{−1}F) log µ(φ^{−1}F) = − Σ_{F∈Qn} ν(F) log ν(F) = H(Qn) .

Thus

h(T, P) = lim_{n→∞} (1/n) H(Pn) = lim_{n→∞} (1/n) H(Qn) = h(S, Q) .

Hence h(T) ≥ h(T, P) = h(S, Q) = h(S). The last equality holds since Q is generating.

Recall that (X, µ) and (Y, ν) are said to be isomorphic if there exists an almost everywhere bijective mapping φ : X → Y such that ν(φ(E)) = µ(E)


for every measurable subset E ⊂ X. Furthermore, if two measure preserving transformations T : X → X and S : Y → Y satisfy φ ◦ T = S ◦ φ, then they are said to be isomorphic. Consult Sect. 2.4 for the details. Corollary 8.5. If T : X → X and S : Y → Y are isomorphic, then they have the same entropy. Proof. Let φ : X → Y be a factor map. By applying Theorem 8.4 for φ and φ−1 , we obtain h(S) ≤ h(T ) and h(T ) ≤ h(S), respectively. Hence h(T ) = h(S).

Remark 8.6. If a factor map φ is almost everywhere finite-to-one, i.e., the cardinality of φ^{−1}({y}) is finite for almost every y ∈ Y, then h(T) = h(S).

Example 8.7. The entropy of T x = 2x (mod 1) is equal to log 2 = 1. For the proof, take a generating partition P = {[0, 1/2), [1/2, 1)}. Then Pn consists of 2^n intervals of the form [(j − 1) × 2^{−n}, j × 2^{−n}), 1 ≤ j ≤ 2^n. Since H(Pn) = n log 2 = n, we have h(T) = lim_{n→∞} (1/n)H(Pn) = 1. The logistic transformation is isomorphic to T x = 2x (mod 1), and the given partition P is generating for both transformations. Hence in both cases H(Pn)/n converges to the same entropy, which is marked by the horizontal lines in Fig. 8.2. See Maple Program 8.7.1.


Fig. 8.2. y = H(Pn )/n, 1 ≤ n ≤ 10, for T x = {2x} (left) and T x = 4x(1 − x) (right)

Theorem 8.8. Let T : X → X and S : Y → Y be measure preserving transformations. Then we have the following:
(i) If n ≥ 1, then h(T^n) = n h(T).
(ii) If T is invertible, then h(T^{−1}) = h(T).
(iii) Define the product transformation T × S on X × Y by (T × S)(x, y) = (T x, Sy) for x ∈ X and y ∈ Y. Then T × S preserves the product measure and h(T × S) = h(T) + h(S).

Proof. Consult [Pet],[Wa1].


8.2 Entropy of Shift Transformations

Let A = {s1, . . . , sk} be a set of symbols. Consider a shift transformation T on X = ∏₁^∞ A. An element x ∈ X is of the form x = (x1 . . . xn . . .), xn ∈ A. Let P be the partition of X by subsets

[si] = {x ∈ X : x1 = si} , 1 ≤ i ≤ k .

Then Pn consists of cylinder sets of the form [a1, . . . , an] = {x ∈ X : x1 = a1, . . . , xn = an} for (a1, . . . , an) ∈ A^n, and P is generating. For shift spaces in this book we always take the same partition P. Suppose that X is a Bernoulli shift where each symbol si has probability pi. Then

H(Pn) = − Σ_{i1,...,in} p_{i1} · · · p_{in} log(p_{i1} · · · p_{in}) = −n Σ_i pi log pi ,

and h(T) = − Σ_i pi log pi. Thus we have the following theorem.

Theorem 8.9. The entropy of the (p1, . . . , pk)-Bernoulli shift is equal to

− Σ_{i=1}^{k} pi log pi .
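The identity H(Pn) = n h underlying Theorem 8.9 can be confirmed directly by summing over all n-cylinders. A Python sketch for the (1/4, 3/4)-Bernoulli shift (an illustration, not from the text):

```python
import math
from itertools import product

ps = [0.25, 0.75]
h = -sum(p * math.log2(p) for p in ps)       # Theorem 8.9: entropy of the Bernoulli shift

def H_Pn(n):
    """Entropy of the partition into n-cylinders, from the explicit cylinder measures."""
    total = 0.0
    for word in product(range(len(ps)), repeat=n):
        mu = math.prod(ps[i] for i in word)  # mu([a1,...,an]) = p_{a1} ... p_{an}
        total -= mu * math.log2(mu)
    return total
```

For every n, H_Pn(n) equals n·h exactly, so H(Pn)/n is constant, matching the flat left graph in Fig. 8.4.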

The theorem is a generalization of Ex. 8.7 since T x = 2x (mod 1) is isomorphic to the (1/2, 1/2)-Bernoulli shift. Using the Lagrange multiplier method as in Lemma 8.1, we see that

− Σ_{i=1}^{k} pi log pi ≤ log k

and the maximum occurs if and only if p1 = · · · = pk = 1/k. For k = 2, see the graph y = H(p) = −p log p − (1 − p) log(1 − p) in Fig. 8.3. D. Ornstein proved that entropy completely classifies the bi-infinite (i.e., two-sided) Bernoulli shifts; in other words, two bi-infinite Bernoulli shifts are isomorphic if and only if they have the same entropy. For more information, consult [Pet]. A Markov shift space with a stochastic matrix P = (pij)_{1≤i,j≤k} is the set of all infinite sequences on symbols {1, . . . , k} with the probability of transition from i (at time s) to j (at time s + 1) given by Pr(x_{s+1} = j | x_s = i) = pij. If P is irreducible and aperiodic, then all the eigenvalues other than 1 have modulus less than 1. The irreducibility of P implies the ergodicity of the



Fig. 8.3. y = H(p), 0 ≤ p ≤ 1

Markov shift, and the aperiodicity together with the irreducibility gives the mixing property. Let π = (π1, . . . , πk), Σ_i πi = 1, πi > 0, be the unique left eigenvector corresponding to the simple eigenvalue 1, which is called the Perron–Frobenius eigenvector. The measure of a cylinder set [a1, . . . , an], ai ∈ {1, . . . , k}, 1 ≤ i ≤ n, is given by π_{a1} p_{a1 a2} · · · p_{a_{n−1} a_n}. Consult Sect. 2.3.

Theorem 8.10. The entropy of a Markov shift is equal to

− Σ_{i,j} πi pij log pij .

Proof. Let P be the partition given by the 1-blocks [s]1, s ∈ {1, . . . , k}. Then Pn = ∨_{i=0}^{n−1} T^{−i}P consists of cylinder sets of length n. Since Σ_i πi pij = πj and Σ_j pij = 1, we have

H(P_{n+1}) = − Σ_{i0,i1,...,in=1}^{k} π_{i0} p_{i0 i1} · · · p_{i_{n−1} i_n} log(π_{i0} p_{i0 i1} · · · p_{i_{n−1} i_n})
= − Σ_{i0,i1,...,in=1}^{k} π_{i0} p_{i0 i1} · · · p_{i_{n−1} i_n} (log π_{i0} + log p_{i0 i1} + · · · + log p_{i_{n−1} i_n})
= − Σ_{i=1}^{k} πi log πi − n Σ_{i,j=1}^{k} πi pij log pij .

Therefore

h = lim_{n→∞} (1/(n+1)) H(P_{n+1}) = − Σ_{i,j} πi pij log pij .
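Theorem 8.10 can be evaluated for the Markov matrix used in Fig. 8.4. The Python sketch below (an illustration, not from the text) finds the stationary left eigenvector by power iteration and then applies the entropy formula; the result should match the value ≈ 0.6557 quoted for the right graph.

```python
import math

P = [[0.1, 0.9],
     [0.75, 0.25]]      # the stochastic matrix associated with the right graph of Fig. 8.4

# stationary (left Perron-Frobenius) vector: pi = pi P, found by power iteration
pi = [0.5, 0.5]
for _ in range(10000):
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

# entropy of the Markov shift: - sum_{i,j} pi_i p_ij log2 p_ij
h = -sum(pi[i] * P[i][j] * math.log2(P[i][j]) for i in range(2) for j in range(2))
```

For this matrix the exact stationary vector is (5/11, 6/11).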


If we have a Bernoulli shift, then pij = pj for every i. Hence πj = pj for every j. Since Σ_i pi = 1, the entropy is equal to

− Σ_i Σ_j pi pj log pj = − Σ_j pj log pj ,

which is given in Theorem 8.9. For simulations see Fig. 8.4 and Maple Program 8.7.2. Observe that the left graph for the (1/4, 3/4)-Bernoulli shift with its entropy ≈ 0.8113 is almost horizontal since H(Pn)/n is constant for every n ≥ 1. The right graph for the Markov shift associated with the matrix

( 1/10 9/10 )
( 3/4  1/4  )

decreases monotonically, which is a theoretically known fact, and converges to its entropy ≈ 0.6557. The horizontal line in each graph indicates the entropy.


Fig. 8.4. y = H(Pn )/n, 1 ≤ n ≤ 10, for the (1/4, 3/4)-Bernoulli shift (left) and a Markov shift (right)

Example 8.11. Using the isomorphism given in Ex. 2.32 we can find the entropy of the beta transformation T x = βx (mod 1). Recall that 1/β² + 1/β = 1 and β − 1 = 1/β. The stochastic matrix

P = ( 1/β 1/β² )
    ( 1   0    )

has a row eigenvector (β², 1) corresponding to the eigenvalue 1. Hence the Perron–Frobenius eigenvector π = (π0, π1) is given by

π0 = β²/(β² + 1) , π1 = 1/(β² + 1) .

Hence the entropy is equal to

− (β²/(β² + 1)) ((1/β) log(1/β) + (1/β²) log(1/β²)) − (1/(β² + 1)) (1 × log 1 + 0 × log 0) = log β .

See Fig. 8.5. The horizontal line indicates the entropy log β ≈ 0.694.


Fig. 8.5. y = H(Pn )/n, 1 ≤ n ≤ 10, for T x = {βx} in Ex. 8.11

8.3 Partitions and Coding Maps

Recall the definitions in Sect. 2.5. Note that the coding map φ is a factor map.

Example 8.12. (Nongenerating partition) It is important to take a generating partition in computing the entropy h(T). If a partition P is not generating, then it is possible that h(T, P) < h(T) and that we may underestimate the entropy of T. Consider T x = 2x (mod 1) and

P = { E0 = [0, 1/4), E1 = [1/4, 1) } .

Then P is not generating, and h(T, P) < h(T) = log 2. See Fig. 8.6.


Fig. 8.6. y = H(Pn )/n, 1 ≤ n ≤ 10 for T x = {2x} and a nongenerating partition P in Ex. 8.12

Example 8.13. When a partition P consists of many subsets, there is a good chance that h(T, P) is close to h(T). Consider T x = 2x (mod 1) and

P = { E0 = [0, 1/3), E1 = [1/3, 2/3), E2 = [2/3, 1) } .

Then P is generating, and h(T, P) = h(T) = log 2. See Fig. 8.7.


Fig. 8.7. y = H(Pn )/n, 1 ≤ n ≤ 10 for T x = {2x} and P in Ex. 8.13

Example 8.14. A generating partition defines an almost everywhere one-to-one coding map in general. Consider T x = x + θ (mod 1), θ = √2 − 1, and

P = { E0 = [0, 1/2), E1 = [1/2, 1) } .

Note that P is generating. See the left diagram in Fig. 8.8, which employs the mathematical pointillism. The first floor represents φ^{−1}[1] = E1, which is the cylinder set of length 1. The second floor represents the cylinder set of length 2 given by φ^{−1}[1, 0] = E1 ∩ T^{−1}E0, and so on. Finally the 5th floor represents the cylinder set of length 5 given by φ^{−1}[1, 0, 1, 1, 0] = E1 ∩ T^{−1}E0 ∩ T^{−2}E1 ∩ T^{−3}E1 ∩ T^{−4}E0. For the representation y = ψ(x) of the coding map see the right plot in Fig. 8.8. The graph of y = ψ(x) looks almost locally flat for the following reasons: First, for each n there are 2n cylinder sets of length n, and ψ(x) is constant on each cylinder set. Thus there are only 2n possible values for ψ out of 2^n values of the form k × 2^{−n}, 0 ≤ k < 2^n. Second, since the entropy of T is zero, orbits of two nearby points x and x′ stay close to each other for a long time. Thus ψ(x) and ψ(x′) seem to be equal to the naked eye.


Fig. 8.8. Cylinder sets of length n, 1 ≤ n ≤ 5, (left) and y = ψ(x) (right) for T x = {x + θ} and P in Ex. 8.14


Example 8.15. If the coding map defined by a partition P gives an almost everywhere finite-to-one coding map, then h(T) = h(T, P) even when P is not generating. Consider T x = 2x (mod 1) and the partition

P = { E0 = [1/4, 3/4), E1 = [0, 1/4) ∪ [3/4, 1) } .

To show that P is not generating, we observe that a cylinder set is a union of two intervals. See the left diagram in Fig. 8.9 and Maple Program 8.7.3, which employ the mathematical pointillism. The first floor represents φ^{−1}[1] = E1, which is the cylinder set of length 1. The second floor represents the cylinder set of length 2 given by φ^{−1}[1, 0] = E1 ∩ T^{−1}E0, and so on. Finally the 5th floor represents the cylinder set of length 5 given by φ^{−1}[1, 0, 0, 1, 0] = E1 ∩ T^{−1}E0 ∩ T^{−2}E0 ∩ T^{−3}E1 ∩ T^{−4}E0. Note that the cylinder sets are symmetric with respect to x = 1/2. The coding map ψ satisfies ψ(x) = ψ(1 − x). It is a 2-to-1 map. Since x + (1 − x) = 1 and T x + T(1 − x) = 1 for every x, two points x and 1 − x that are symmetric with respect to x = 1/2 are mapped by T to points also symmetric with respect to x = 1/2. Since E0 and E1 are symmetric with respect to x = 1/2, a point x belongs to Ei if and only if 1 − x belongs to Ei. This implies that x and 1 − x are coded into the same binary sequence with respect to P. The right graph shows that the entropy is equal to h(T) = log 2 = h(T, P).


Fig. 8.9. Cylinder sets of length n, 1 ≤ n ≤ 5, y = ψ(x) and y = H(Pn )/n for T x = {2x} and P in Ex. 8.15 (from left to right)

Example 8.16. Consider T x = 2x (mod 1) and the partition

P = { E0 = [0, 1/4), E1 = [1/4, 1) } .

To show that P is not generating, we observe that a cylinder set is a union of many intervals for large n. See the left diagram in Fig. 8.10, which employs the


idea of the mathematical pointillism. The first floor represents φ^{−1}[1] = E1, which is the cylinder set of length 1. The second floor represents the cylinder set of length 2 given by φ^{−1}[1, 1] = E1 ∩ T^{−1}E1, and so on. Finally the 5th floor represents the cylinder set of length 5 given by φ^{−1}[1, 1, 1, 1, 1] = E1 ∩ T^{−1}E1 ∩ T^{−2}E1 ∩ T^{−3}E1 ∩ T^{−4}E1. Fig. 8.6 shows that h(T, P) < h(T), which implies that ψ is not a finite-to-one map. See the right plot in Fig. 8.10.


Fig. 8.10. Cylinder sets of length n, 1 ≤ n ≤ 5, (left) and y = ψ(x) (right) for T x = {2x} and P in Ex. 8.16

8.4 The Shannon–McMillan–Breiman Theorem

Let P be a generating partition of a probability space (X, µ). Let Pn(x) be the unique member in Pn containing x ∈ X. Then the function x → µ(Pn(x)) is constant on each subset in Pn, and

H(Pn) = − Σ_{Pn(x)∈Pn} µ(Pn(x)) log µ(Pn(x)) = − ∫_X log µ(Pn(x)) dµ

where the sum is taken over all subsets in Pn. Hence

h(T, P) = − lim_{n→∞} (1/n) ∫_X log µ(Pn(x)) dµ .

Theorem 8.17 (The Shannon–McMillan–Breiman Theorem). If T is an ergodic transformation on (X, µ) with a finite generating partition P, then for almost every x

− lim_{n→∞} (1/n) log µ(Pn(x)) = h(T) .

(If we integrate the left hand side of the equality, we obtain the definition of entropy.)


Proof. We will prove the theorem only for special cases. For general cases see [Bi],[Pet]. First we consider a Bernoulli shift T on X = ∏₁^∞ {1, . . . , k} where the symbol i ∈ {1, . . . , k} has probability pi. Then Pn(x) = {y ∈ X : y1 = x1, . . . , yn = xn} = [x1, . . . , xn] for x = (x1 x2 x3 . . .) ∈ X and µ(Pn(x)) = p_{x1} · · · p_{xn}. Define fk : X → R by fk((x1 x2 x3 . . .)) = p_{xk}. Then fk(x) = f1(T^{k−1}(x)). The Birkhoff Ergodic Theorem implies that for almost every x

lim_{n→∞} (1/n) log µ(Pn(x)) = lim_{n→∞} (1/n) Σ_{k=1}^{n} log fk(x)
= lim_{n→∞} (1/n) Σ_{k=1}^{n} log f1(T^{k−1}(x))
= ∫_X log f1(x) dµ(x)
= Σ_{i=1}^{k} pi log pi .

The last equality follows from the fact that X = ∪_{i=1}^{k} [i], µ([i]) = pi and f1(x) = pi on [i]. Next, for a Markov shift with a stochastic matrix (pij), we have µ(Pn(x)) = π_{x1} p_{x1 x2} · · · p_{x_{n−1} x_n}. Define fk,j : X → R by fk,j((x1 x2 x3 . . .)) = p_{xk xj}. Note that f_{k,k+1}(x) = f_{1,2}(T^{k−1}(x)). The Birkhoff Ergodic Theorem implies that for almost every x

lim_{n→∞} (1/n) log µ(Pn(x)) = lim_{n→∞} (1/n) log π_{x1} + lim_{n→∞} (1/n) Σ_{k=1}^{n−1} log f_{k,k+1}(x)
= lim_{n→∞} (1/n) Σ_{k=1}^{n−1} log f_{1,2}(T^{k−1}(x))
= ∫_X log f_{1,2}(x) dµ(x)
= Σ_{i,j=1}^{k} πi pij log pij .

The last equality follows from the fact that X = ∪_{i,j=1}^{k} [i, j], µ([i, j]) = πi pij and f_{1,2}(x) = pij on [i, j].


C. Shannon [Sh1] proved Theorem 8.17 for Markov shifts, B. McMillan [Mc1] for convergence in measure, L. Carleson [Ca] and L. Breiman [Brei] for pointwise convergence, A. Ionescu Tulcea [Io] for Lp-convergence, and K.L. Chung [Chu] for cases with countably many symbols and finite entropy. A simulation result for the (2/3, 1/3)-Bernoulli shift is given in Fig. 8.11 where we plot y = −(1/n) log Pn, 10 ≤ n ≤ 1000, evaluated at a single sequence. The entropy is equal to 0.918 . . . and is indicated by the horizontal line. For a simulation with a Markov shift see Maple Program 8.7.4 and Fig. 8.15.


Fig. 8.11. The Shannon–McMillan–Breiman Theorem for the (2/3, 1/3)-Bernoulli shift
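The convergence in Fig. 8.11 is easy to reproduce. For a Bernoulli shift, −(1/n) log µ(Pn(x)) is just the average of − log2 p_{x_k} along the sequence, so the Shannon–McMillan–Breiman limit reduces to the law of large numbers. A Python sketch with a fixed seed (an illustration, not the book's program):

```python
import math
import random

random.seed(7)
p = [2 / 3, 1 / 3]
h = -sum(q * math.log2(q) for q in p)          # entropy of the (2/3, 1/3)-Bernoulli shift

n = 1000
seq = random.choices([0, 1], weights=p, k=n)   # one typical sample sequence
# -(1/n) log2 mu(P_n(x)) for the n-cylinder containing the sampled sequence
estimate = -sum(math.log2(p[s]) for s in seq) / n
```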

Example 8.18. Fig. 8.12 shows the pointwise convergence of − log µ(Pn(x0))/n to entropy as n → ∞ at x0 = π − 3 for T x = x + θ (mod 1), θ = √2 − 1, T x = 2x (mod 1), and T x = βx (mod 1) with respect to generating partitions P. See Maple Program 8.7.5.


Fig. 8.12. y = − log µ(Pn (x0 ))/n, 1 ≤ n ≤ 10, for T x = {x + θ}, T x = {2x} and T x = {βx} (from left to right)

Let X be a shift space and T the left-shift. Consider the partition P given by the ﬁrst coordinate. Let Pn (x) be the relative frequency of the ﬁrst n-block x1 . . . xn in the sequence x = (x1 x2 x3 . . .), i.e.,


Pn(x) = lim_{N→∞} (1/N) card{0 ≤ j < N : x_{1+j} = x1, . . . , x_{n+j} = xn} .

Note that the Birkhoff Ergodic Theorem implies that Pn(x) = µ(Pn(x)) where P is the standard partition of a shift space according to the first coordinates. Note that Pn(x) = Pn(y) if and only if xj = yj, 1 ≤ j ≤ n. Now the Shannon–McMillan–Breiman Theorem states that for almost every x

h = lim_{n→∞} (1/n) log (1/Pn(x)) .

This indicates that an n-block has measure roughly equal to 2^{−nh}. Since the sum of their measures should be close to 1, there are approximately 2^{nh} of them. More precisely, we have the following theorem.

Theorem 8.19 (The Asymptotic Equipartition Property). Let (X, µ) be a probability space and T be an ergodic µ-invariant transformation on X. Suppose that P is a finite generating partition of X. For every ε > 0 and n ≥ 1 there exist subsets in Pn, which are called (n, ε)-typical subsets, satisfying the following:
(i) for every typical subset Pn(x), 2^{−n(h+ε)} < µ(Pn(x)) < 2^{−n(h−ε)},
(ii) the union of all (n, ε)-typical subsets has measure greater than 1 − ε, and
(iii) the number of (n, ε)-typical subsets is between (1 − ε)2^{n(h−ε)} and 2^{n(h+ε)}.

Proof. First, note that for each n ≥ 1 the function x → µ(Pn(x)) is constant on each subset Pn(x) ∈ Pn. The Shannon–McMillan–Breiman Theorem implies that the sequence of the functions −(1/n) log µ(Pn(x)) converges to h in measure as n → ∞. Hence for every ε > 0 there exists N such that n ≥ N implies that

µ{x : 2^{−n(h+ε)} < µ(Pn(x)) < 2^{−n(h−ε)}} > 1 − ε .

For every ε > 0 take sufficiently large n so that the measure of the union of (n, ε)-typical blocks from X is at least 1 − ε. Take m satisfying m log2 M ≥ n(h + ε). Since the number of (n, ε)-typical blocks is bounded by 2^{n(h+ε)}, which is again bounded by M^m, a different codeword can be assigned to each source block that is an (n, ε)-typical block in X. At worst, only the source blocks that are


not (n, ε)-typical will not receive distinct codewords, and the probability that a source block fails to receive its own codeword is less than ε. A refinement of this idea can be used to compress a message. If a sequence has more patterns, i.e., less randomness, then it has less entropy and can be compressed more. In data compression, which is widely used in digital signal transmission and data storage algorithms, the entropy gives the best achievable compression ratio. For more on data compression see Chap. 14.
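The counting in Theorem 8.19 can be carried out exactly for a Bernoulli source by grouping n-blocks according to the number of ones. The following Python sketch (an added illustration, not part of the book's Maple programs; n and ε are arbitrary choices) verifies (ii) and (iii) for the (1/3, 2/3)-Bernoulli shift.

```python
import math

# AEP check for the (1/3, 2/3)-Bernoulli shift: an n-block with k ones has
# measure p^(n-k) * q^k, so the (n, eps)-typical blocks of Theorem 8.19 can be
# counted exactly by grouping blocks by k.
p, q = 1/3, 2/3
h = -p * math.log2(p) - q * math.log2(q)
n, eps = 1000, 0.05

typical_measure = 0.0
typical_count = 0
for k in range(n + 1):
    log2_mu = (n - k) * math.log2(p) + k * math.log2(q)  # log2 of one block's measure
    if n * (h - eps) < -log2_mu < n * (h + eps):         # condition (i)
        typical_count += math.comb(n, k)
        typical_measure += math.comb(n, k) * 2.0 ** log2_mu

print(typical_measure)           # > 1 - eps, as in (ii)
print(math.log2(typical_count))  # between n(h-eps)+log2(1-eps) and n(h+eps), as in (iii)
```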

8.6 Asymptotically Normal Distribution of (− log Pn)/n

It is known that for a mixing Markov shift the values of −(log Pn)/n are approximately normally distributed for large n. In other words, the Central Limit Theorem holds. More precisely, let Φ be the density function of the standard normal distribution given in Definition 1.43 and put

   σ² = lim_{n→∞} Var[− log Pn(x)] / n ,

where 'Var' stands for variance and σ is called the standard deviation. Then

   lim_{n→∞} µ{ x : (− log Pn(x) − nh) / (σ√n) ≤ α } = ∫_{−∞}^{α} Φ(t) dt .

See [Ib],[Yu]. In Fig. 8.13 we take n = 30 and choose sample values at x = T^{j−1} x0, 1 ≤ j ≤ 10^5, where x0 is a typical sequence in the (2/3, 1/3)-Bernoulli shift space. The standard normal distribution curve is drawn together in each plot. See Maple Program 8.7.7.
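For independent symbols the standardized statistic can be sampled directly. The Python sketch below (an added illustration, not the book's Maple Program 8.7.7; the seed, n and the sample count are arbitrary choices) draws independent n-blocks from the (2/3, 1/3)-Bernoulli measure and checks that Z has mean about 0 and standard deviation about 1.

```python
import math
import random

# CLT sketch for Z = (-log2 Pn - n h)/(sigma sqrt(n)) on the (2/3, 1/3)-Bernoulli
# shift; for i.i.d. symbols sigma^2 equals the per-symbol variance of -log2 p(x_i).
p = 2 / 3
h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
second = p * math.log2(p) ** 2 + (1 - p) * math.log2(1 - p) ** 2
sigma = math.sqrt(second - h * h)

rng = random.Random(1)                       # arbitrary seed
n, samples = 30, 20000
zs = []
for _ in range(samples):
    block = [0 if rng.random() < p else 1 for _ in range(n)]
    log2_Pn = sum(math.log2(p) if b == 0 else math.log2(1 - p) for b in block)
    zs.append((-log2_Pn - n * h) / (sigma * math.sqrt(n)))

mean = sum(zs) / samples
std = math.sqrt(sum((z - mean) ** 2 for z in zs) / samples)
print(mean, std)                             # approximately 0 and 1
```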

Fig. 8.13. Probability distributions of Z = (− log Pn − nh)/(σ√n) for the (p, 1−p)-Bernoulli shift, p = 2/3, with the number of bins equal to 120 (left) and 24 (right)

Remark 8.20. For two invertible transformations T, S such that T S = ST , we can deﬁne the entropy of {T m S n : (m, n) ∈ Z2 } on each ray in R2 , which is called the directional entropy. For more information consult [Pkoh].

254

8 Entropy

8.7 Maple Programs 8.7.1 Deﬁnition of entropy: the logistic transformation Calculate the entropy of the logistic transformation using the deﬁnition. > with(plots): > T:=x->4*x*(1-x); > entropy:=log[2](2): > Max_Block:=10: > L:=5000: > L+Max_Block-1; 5009 Generate a sequence of length L + Max Block − 1. We do not need many digits since we use the Birkhoﬀ Ergodic Theorem. > Digits:=100: Choose a partition P = {E0 , E1 } where E0 = [0, a) and E1 = [a, 1). > a:=evalf(0.5): Choose a starting point of an orbit. > seed:=evalf(Pi-3): > for k from 1 to L+Max_Block-1 do > seed:=evalf(T(seed)): > if seed < a then b[k]:=0: else b[k]:=1: fi: > od: > evalf(add(b[k],k=1..L+Max_Block-1)/(L+Max_Block-1)); 0.5056897584 Maple does not perfectly recognize the formula 0 log 0 = 0. > 0*log(0); 0 > K:=0: > K*log(K); Error, (in ln) numeric exception: division by zero

In the following not every cylinder set has positive measure. To avoid log 0 we use log(x + ε) ≈ log x for sufficiently small ε > 0.
> epsilon:=10.0^(-10):
> for n from 1 to Max_Block do
>   for i from 0 to 2^n-1 do freq[n,i]:=0: od:
>   for k from 0 to L-1 do
>     num:=add(b[k+j]*2^(n-j),j=1..n):
>     freq[n,num]:=freq[n,num] + 1:
>   od:
>   h[n]:=-add(freq[n,i]/L*log[2.](freq[n,i]/L+epsilon),i=0..2^n-1)/n:
> od:


Draw a graph.
> g1:=listplot([seq([n,h[n]],n=1..Max_Block)]):
> g2:=pointplot([seq([n,h[n]],n=1..Max_Block)], labels=["n"," "],symbol=circle):
> g3:=plot(entropy,x=0..Max_Block):
> display(g1,g2,g3);
See Fig. 8.2.

8.7.2 Definition of entropy: the Markov shift
Here is a simulation for the definition of entropy of a Markov shift.
> with(plots):
> with(linalg):
> P:=matrix(2,2,[[1/10,9/10],[3/4,1/4]]);
                 P := [ 1/10  9/10 ]
                      [ 3/4   1/4  ]
> eigenvectors(transpose(P));
                 [-13/20, 1, {[-1, 1]}], [1, 1, {[1, 6/5]}]
Find the Perron–Frobenius eigenvector.
> v:=[5/11,6/11]:
Check whether v is an eigenvector with the eigenvalue 1.
> evalm(v&*P);
                 [5/11, 6/11]
> entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]),j=1..2), i=1..2);
                 entropy := 0.6556951559
Choose the maximum block size.
> N:=10:
Generate a typical Markov sequence of length L + N − 1.
> L:=round(2^(entropy*N))*100;
                 L := 9400
If L is not sufficiently large, then the blocks of small measure are not sampled, and the numerical average of log(1/Pn(x)) is less than the theoretical average. Consult a similar idea in Maple Program 12.6.2.
> ran0:=rand(0..9):
> ran1:=rand(0..3):
> x[0]:=1:


> for j from 1 to L+N-1 do
>   if x[j-1]=0 then x[j]:=ceil(ran0()/9):
>   else x[j]:=floor(ran1()/3):
>   fi:
> od:
Estimate the measure of the cylinder set [1] = {x : x1 = 1} and compare it with the theoretical value.
> add(x[i],i=1..L+N-1)/(L+N-1.0);
                 0.5441598470
> 6/11.;
                 0.5454545455
In the following not every cylinder set has positive measure. To avoid log 0 we use log(x + ε) ≈ log x for sufficiently small ε > 0.
> epsilon:=10.0^(-10):
> for n from 1 to N do
>   for k from 0 to 2^n-1 do freq[k]:=0: od:
>   for i from 1 to L do
>     k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>     freq[k]:=freq[k]+1:
>   od:
>   for i from 0 to 2^n-1 do
>     prob[i]:=freq[i]/L+epsilon:
>   od:
>   h[n]:=-(1/n)*add(prob[i]*log[2.0](prob[i]),i=0..2^n-1):
> od:
Draw a graph.
> g1:=listplot([seq([n,h[n]],n=1..N)],labels=["n"," "]):
> g2:=pointplot([seq([n,h[n]],n=1..N)],symbol=circle):
> g3:=plot(entropy,x=0..N):
> display(g1,g2,g3);
See Fig. 8.4.
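The same computation can be sketched outside Maple. The following Python fragment (an added illustration; for a 2-state chain the stationary vector is solvable in closed form) reproduces the stationary vector and the entropy value found above.

```python
import math

# Entropy of the Markov shift with transition matrix P (Maple program 8.7.2):
# h = -sum_i v_i sum_j P_ij log2 P_ij, where v is the stationary probability
# vector with v P = v.
P = [[1 / 10, 9 / 10], [3 / 4, 1 / 4]]

# For a 2-state chain: v1 = P21 / (P12 + P21).
v1 = P[1][0] / (P[0][1] + P[1][0])
v = [v1, 1 - v1]
h = -sum(v[i] * P[i][j] * math.log2(P[i][j]) for i in range(2) for j in range(2))
print(v, h)   # v = [5/11, 6/11], h = 0.6556951...
```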

8.7.3 Cylinder sets of a nongenerating partition: T x = 2x mod 1 We describe how to visualize typical cylinder sets as a nested sequence. Using the mathematical pointillism we plot Jk points in a cylinder set of length k. For example, consider T x = 2x (mod 1). Take a partition P = {E0 , E1 } where E0 = [ 14 , 34 ), E1 = [0, 14 ) ∪ [ 34 , 1). The following shows how to visualize the cylinder sets of increasing length as a series of ﬂoors of increasing height. Each ﬂoor represents a cylinder set of length 1,2,3, respectively. > with(plots): > T:=x->frac(2.0*x): > L:=2000: Generate a sequence {xk }k of length L by the rule xk = i if and only if seed[k] = T k (seed[0]) ∈ Ei . > seed[0]:=evalf(Pi-3):


for k from 1 to L do seed[k]:=evalf(T(seed[k-1])): if (seed[k] > 1/4 and seed[k] < 3/4) then x[k]:=0: else x[k]:=1: fi: od: We will ﬁnd a cylinder set of length 3 represented by [d1 , d2 , d3 ]. > d1:=1: > j:=0: > for k from 1 to L do > if x[k]=d1 then > j:=j+1: > Q1[j]:=seed[k]: > fi: > od: > J1:=j; > > > > > >

J1 := 983 g1:=pointplot([seq([Q1[j],1],j=1..J1)]): g0:=plot(0,x=0..1,labels=["x"," "]): > display(g1,g0); See the left plot in Fig. 8.14. > d2:=0: > j:=0: > for k from 1 to L do > if (x[k]=d1 and x[k+1]=d2) then > j:=j+1: > Q2[j]:=seed[k]: > fi: > od: > J2:=j; > >

J2 := 502 g2:=pointplot([seq([Q2[j],2],j=1..J2)]): g0:=plot(0,x=0..1,labels=["x"," "]): See the second plot in Fig. 8.14. The ﬁrst ﬂoor is a cylinder set of length 1 and the second ﬂoor is a cylinder set of length 2. > d3:=0: > j:=0: > for k from 1 to L do > if (x[k]=d1 and x[k+1]=d2 and x[k+2]=d3) then > j:=j+1: > Q3[j]:=seed[k]: > fi: > od: > J3:=j; > >

> >

J3 := 248 g3:=pointplot([seq([Q3[j],3],j=1..J3)]): display(g3,g2,g1,g0);


See the right plot in Fig. 8.14. The third floor is a cylinder set [d1, d2, d3].

Fig. 8.14. Cylinder sets of length 1, 2, 3 for T x = 2x (mod 1) with respect to P (from left to right)

For a cylinder set of length 5 see Fig. 8.9. The partition P is not generating, but h(T, P) = h(T) since the coding map with respect to P is two-to-one.

8.7.4 The Shannon–McMillan–Breiman Theorem: the Markov shift
Here is a simulation of the Shannon–McMillan–Breiman Theorem for a Markov shift.
> with(plots):
> with(linalg):
Choose a stochastic matrix P.
> P:=matrix(2,2,[[1/2,1/2],[1/5,4/5]]);
                 P := [ 1/2  1/2 ]
                      [ 1/5  4/5 ]
Find the eigenvalues and the eigenvectors of the transpose of P.
> eigenvectors(transpose(P));
                 [1, 1, {[1, 5/2]}], [3/10, 1, {[-1, 1]}]
The eigenvalues are 1 and 3/10, and both have multiplicity 1.
> v:=[2/7,5/7]:
Check whether v is the Perron–Frobenius eigenvector.
> evalm(v&*P);
                 [2/7, 5/7]
Find the entropy.


> entropy:=evalf(-add(add(v[i]*P[i,j]*log[2.](P[i,j]), j=1..2),i=1..2)); entropy := .8013772106 Choose the maximal block length. > N:=1000: Generate a Markov sequence of length N . > ran0:=rand(0..1): > ran1:=rand(0..4): > b[0]:=0: > for j from 1 to N do > if b[j-1]=0 then b[j]:= ran0(): > else b[j]:=ceil(ran1()/4): > fi: > od: Find the numerical estimate of the cylinder set [1] and its theoretical value. > evalf(add(b[j],j=1..N)/N); 0.6940000000 > 5./7; 0.7142857143 Find −(log Pn (x))/n. > for n from 1 to N do > prob[n]:=v[b[1]+1]: > for t from 1 to n-1 do > prob[n]:=prob[n]*P[b[t]+1,b[t+1]+1]: > od: > h[n]:=-log[2.](prob[n])/n: > od: Plot the graph. > g1:=listplot([seq([n,h[n]], n=10..N)]): > g2:=plot(entropy, x=0..N, labels=["n"," "]): > display(g1,g2); See Fig. 8.15.

Fig. 8.15. The Shannon–McMillan–Breiman Theorem for a Markov shift
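The quantity −(1/n) log2 µ([b1, . . . , bn]) computed in the program above can also be sampled in a few lines of Python (an added illustration with an arbitrary seed and length, not the book's Maple code): for a Markov measure the cylinder measure is a product of transition probabilities.

```python
import math
import random

# Shannon-McMillan-Breiman for the Markov shift with P = [[1/2,1/2],[1/5,4/5]]:
# -(1/n) log2 mu([b1,...,bn]) tends to the entropy h for almost every sequence.
P = [[1 / 2, 1 / 2], [1 / 5, 4 / 5]]
v = [2 / 7, 5 / 7]                   # stationary vector, v P = v
h = -sum(v[i] * P[i][j] * math.log2(P[i][j]) for i in range(2) for j in range(2))

rng = random.Random(3)               # arbitrary seed
n = 200_000
b = [0]
for _ in range(n - 1):
    b.append(0 if rng.random() < P[b[-1]][0] else 1)

# mu([b1,...,bn]) = v[b1] * prod_t P[b_t][b_{t+1}]; sum the logs
log2_mu = math.log2(v[b[0]]) + sum(
    math.log2(P[b[t]][b[t + 1]]) for t in range(n - 1))
print(h, -log2_mu / n)               # the two values agree closely
```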


8.7.5 The Shannon–McMillan–Breiman Theorem: the beta transformation
> with(plots):
Take β = (√5 + 1)/2, which is stored as a symbol.
> beta:=(sqrt(5)+1)/2:
Define the β-transformation.
> T:=x->frac(beta*x):
Here is the ergodic invariant probability density ρ(x).
> rho:=x->piecewise(0<=x and x<1/beta, (5+3*sqrt(5))/10, (5+sqrt(5))/10):
The entropy of T is log2 β.
> entropy:=evalf(log[2](beta)):
Choose the maximal block length and the length of the coded sequence.
> Max_Block:=10:
> L:=100000:
> L+Max_Block-1;
                 100009
To estimate the measures of the cylinder sets we apply the Birkhoff Ergodic Theorem, thus we do not need very many digits.
> Digits:=100:
We need a numerical value for β with a reasonable amount of precision to evaluate T x = {βx}.
> beta:=(sqrt(5)+1)/2.0:
Choose a partition P = {E0, E1} of [0, 1) where E0 = [0, a), E1 = [a, 1).
> a:=evalf(1/beta):
Find the measure of the interval [a, 1).
> evalf(int(rho(x),x=1/beta..1),10);
                 0.2763932023
Choose a starting point x = π − 3.
> seed:=evalf(Pi-3):
Find the coded sequence b1, b2, b3, . . . with respect to P by the rule bk = 0 or 1 if and only if T^k(x) ∈ E0 or E1, respectively.
> for k from 1 to L+Max_Block-1 do
>   seed:=T(seed):
>   if seed < a then b[k]:=0:
>   else b[k]:=1:
>   fi:
> od:
In the coded sequence b1, b2, b3, . . ., the block '11' is forbidden, in other words, if bj = 1 then bj+1 = 0. This fact is reflected in the following.
> seq(b[k],k=1..Max_Block);


                 0, 0, 0, 1, 0, 1, 0, 1, 0, 0
> Digits:=10:
The following value should be close to the measure of the interval [a, 1).
> evalf(add(b[k],k=1..L+Max_Block-1)/(L+Max_Block-1));
                 0.2755052045
Estimate the measure of the set Pn(x) by the Birkhoff Ergodic Theorem. The first n bits b1, . . . , bn represent the cylinder set [b1, . . . , bn] = Pn(x). To find µ([b1, . . . , bn]), we compute the relative frequency with which T^k x visits [b1, . . . , bn]. Since T^k x corresponds to the coded sequence b_{1+k}, . . . , b_{n+k}, we observe that T^k x ∈ [b1, . . . , bn] if and only if bj = b_{j+k}, 1 ≤ j ≤ n.
> for n from 1 to Max_Block do
>   freq[n]:=0:
>   for k from 0 to L-1 do
>     for j from 1 to n do
>       if b[j] <> b[j+k] then increase:=0: break:
>       else increase:=1:
>       fi:
>     od:
>     freq[n]:=freq[n] + increase:
>   od:
>   h[n]:= -log[2.](freq[n]/L)/n:
> od:
When the break statement is executed in the above, the result is to exit from the innermost do statement within which it occurs.
> seq(freq[n],n=1..Max_Block);
                 72450, 44899, 27836, 10512, 10512, 3987, 3987, 1508, 1508, 940
Draw the graph.
> g1:=listplot([seq([n,h[n]],n=1..Max_Block)]):
> g2:=pointplot([seq([n,h[n]],n=1..Max_Block)], labels=["n"," "],symbol=circle):
> g3:=plot(entropy,x=0..Max_Block):
> display(g1,g2,g3);
See the right graph in Fig. 8.12.

8.7.6 The Asymptotic Equipartition Property: the Bernoulli shift
Here is a simulation of the Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift.
> with(plots):
Choose the block size.
> n:=1000:
> z:=3:
Choose the probability of having '0'.
> p:=1/z;


                 p := 1/3

Find the probability of having '1'.
> q:=1-p:
Find the entropy of the (p, q)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-q*log[2.](q);
                 entropy := 0.9182958342
Let k denote the number of the symbol '1' in an n-block, and let Prob[k] be the probability that an n-block contains exactly k bits of '1' in some k specified places.
> for k from 0 to n do
>   Prob[k]:=p^(n-k)*q^k:
> od:
Let

   C(n, k) = n! / (k! (n − k)!)

be the binomial coefficient equal to the number of ways of choosing k objects from n distinct objects. It is given by the Maple command binomial(n,k). Consider the binomial probability distribution where the probability of observing exactly k bits of '1' among the n bits of an arbitrary binary n-block is given by Prob[k] × C(n, k). The sum of all probabilities is equal to 1 in the following.
> add(Prob[k]*binomial(n,k),k=0..n);
                 1
Find the average of k, which is equal to µ = nq.
> add(k*Prob[k]*binomial(n,k),k=0..n);
                 2000/3
> mu:=n*q;
                 µ := 2000/3
Find the average of − log(Prob)/n, which is approximately equal to the entropy.
> add(-log[2.0](Prob[k])/n*Prob[k]*binomial(n,k),k=0..n);
                 0.9182958338
Find the standard deviation of k, which is equal to σ = √(nq(1 − q)).
> sigma:=sqrt(n*q*(1-q));
                 σ := 20√5/3
> K1:=round(mu-4*sigma);

K1 := 607 K2:=round(mu+4*sigma);


                 K2 := 726
Define the normal distribution density.
> f:=x-> exp(-(x-mu)^2/2/sigma^2) / sqrt(2*Pi*sigma^2);
                 f := x → exp(−(x − µ)²/(2σ²)) / √(2πσ²)
The interval K1 ≤ x ≤ K2 is an essential range for the values of the normal distribution.
> evalf(int(f(x),x=K1..K2));
                 0.9999342413
Plot the probability density function for the number of occurrences of '1' in an n-block. For sufficiently large values of n the binomial distribution is approximated by the normal distribution with mean µ = nq and variance σ² = nq(1 − q).
> pdf:=pointplot([seq([k-0.5, binomial(n,k)*Prob[k]], k=K1..K2)]):
> g_normal:=plot(f(x),x=0..n):
> display(pdf, g_normal, labels=["k","pdf"]);
See Fig. 8.16 where an enlargement of the left plot on K1 − 50 ≤ k ≤ K2 + 50 is given on the right.


Fig. 8.16. Probability of the number of occurrences of ‘1’ in an n-block of the (1/3, 2/3)-Bernoulli shift with n = 1000
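The numerical claims above are easy to confirm directly (an added Python check, not the book's Maple session): the binomial distribution with n = 1000 and q = 2/3 puts almost all of its mass within four standard deviations of the mean.

```python
import math

# Check the numbers in Maple program 8.7.6: for the binomial(n=1000, q=2/3)
# distribution, the interval [mu - 4 sigma, mu + 4 sigma] carries almost all mass.
n, q = 1000, 2 / 3
mu, sigma = n * q, math.sqrt(n * q * (1 - q))
K1, K2 = round(mu - 4 * sigma), round(mu + 4 * sigma)
mass = sum(math.comb(n, k) * (1 - q) ** (n - k) * q ** k for k in range(K1, K2 + 1))
print(K1, K2, mass)   # 607, 726 and a mass very close to 1
```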

Plot the cumulative density function for the number of occurrences of ‘1’ in an n-block. > cum[-1]:=0.0: This makes every cum[k] a ﬂoating point number. For larger values of n set cum[K1-1]:=0.0: and run the following do loop only for K1 ≤ k ≤ K2 to save time. > for k from 0 to n do > cum[k]:=cum[k-1] + binomial(n,k)*p^(n-k)*q^k: > od: > cum[n];


1.000000000 >

cum[K1]; 0.00004417647482

>

cum[K2];

0.9999772852 cdf:=pointplot([seq([k,cum[k]], k=K1..K2)]): > g0:=plot(0,x=0..K1): > g1:=plot(1,x=K2..n): > display(g0, g1, cdf, labels=["k","cdf"]); See Fig. 8.17. >


Fig. 8.17. Cumulative probability of the number of occurrences of ‘1’ in an n-block of the (1/3, 2/3)-Bernoulli shift with n = 1000

h_point:=pointplot([seq([cum[k],-log[2.](Prob[k])/n], k=K1..K2)]): > for k from K1 to K2 do > h_line[k]:=plot(-log[2.](Prob[k])/n,x=cum[k-1]..cum[k]): > od: Find the minimum of −(log2 Pn )/n given by −(log2 q n )/n. > -log[2.](q); 0.5849625007 Find the maximum of −(log2 Pn )/n given by −(log2 pn )/n. > -log[2.](p); 1.584962501 Choose ε > 0. > epsilon:=0.01: > g2:=plot({entropy - epsilon, entropy + epsilon}, x=0..1): > display(g2, h_point, seq(h_line[k], k=K1..K2), labels=["cum","-(log Pn)/n"]); >

See Fig. 8.18, where the horizontal axis represents the cumulative probability. Two horizontal lines are y = entropy ± ε.



Fig. 8.18. The Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift with n = 1000

Compare the case n = 1000 with the cases n = 100 and n = 10000. We observe the convergence in measure, i.e.,

   Pr{ | −(1/n) log Pn − entropy | ≥ ε } → 0

as n → ∞, where the probability is given by the (1/3, 2/3)-Bernoulli measure. Consult Theorem 8.19 and see Figs. 8.18, 8.19.


Fig. 8.19. The Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift with n = 100 (left) and n = 10000 (right)

8.7.7 Asymptotically normal distribution of (− log Pn )/n: the Bernoulli shift The values of Z = −(log Pn )/n are approximately normally distributed for large n. We consider the Bernoulli shift on two symbols 0 and 1. > with(plots): Choose an integer k ≥ 1. > k:=2: Choose the probability of the symbol ‘1’.


q:=1./(k+1);

q := 0.3333333333 entropy:=-(1-q)*log[2.](1-q)-q*log[2.](q); entropy := 0.9182958340 Choose the sample size. > SampleSize:=100000: Choose the block length. > n:=30: > N:=SampleSize+n-1; >

N := 100029 ran:= rand(0..k): Generate a Bernoulli sequence of length N , which is regarded as the initial N bits of a typical Bernoulli sequence x0 . > for i from 1 to N do > if ran() = k then a[i]:=1: else a[i]:=0: fi: > od: Calculate log Pn (x) at x = T j x0 where x0 = (a1 a2 a3 . . .) is the sequence obtained in the above and T is the shift transformation. > for j from 1 to SampleSize do > num_1:=add(a[s],s=j..n+j-1): > num_0:=n-num_1: > Prob[j]:=(1-q)^num_0 * q^num_1: > logPn[j]:=log[2.](Prob[j]): > od: > h:=entropy: Use n × h as an approximation for the average of − log Pn . > n*h; 27.54887502 > add(-logPn[j],j=1..SampleSize)/SampleSize; 27.59658500 > Var:=add((-logPn[j]-n*h)^2,j=1..SampleSize)/SampleSize; >

Var := 6.552650000 Find an approximation of σ for suﬃciently large n. > sigma:=sqrt(Var/n); σ := 0.4673560385 Deﬁne the normalized random variable Z. > for j from 1 to SampleSize do > Z[j]:=evalf((-logPn[j]-n*h)/sigma/sqrt(n)): > od: Choose the number of bins. > Bin:=48: > Max:=max(seq(Z[j],j=1..SampleSize));

                 Max := 5.859799731
> Min:=min(seq(Z[j],j=1..SampleSize));

Min := −3.515879837 Calculate the width of an interval corresponding to a bin. > width:=(Max-Min)/Bin; width := 0.1953266577 Find the number of occurrences in each bin. > for k from 0 to Bin do freq[k]:= 0: od: for s from 1 to SampleSize do slot:=round((Z[s]-Min)/width): freq[slot]:=freq[slot]+1: od: Here is the so-called bell curve of the standard normal distribution. > Phi:=x->1/(sqrt(2*Pi))*exp(-x^2/2): > g1:=plot(Phi(Z),Z=-5..5): > > > >

for k from 1 to Bin do Q[4*k-3]:=[(k-1/2)*width+Min,0]: Q[4*k-2]:=[(k-1/2)*width+Min,freq[k]/SampleSize/width]: Q[4*k-1]:=[(k+1/2)*width+Min,freq[k]/SampleSize/width]: Q[4*k]:=[(k+1/2)*width+Min,0]: od: Draw the graphs. > g2:=listplot([seq(Q[k],k=1..4*Bin)]): > display(g1,g2); See Fig. 8.20. > > > > > >


Fig. 8.20. The histogram together with the standard normal distribution function

Find the cumulative probability. > cum[-1]:=0: > for i from 0 to Bin do cum[i]:=cum[i-1] + freq[i]: od: Draw the cumulative probability density function (cdf).


for k from 0 to Bin do c[k]:=[(k+0.5)*width+Min,cum[k]/SampleSize]: od: > g3:=listplot([seq(c[k],k=0..Bin)]): Deﬁne the cumulative probability density given in Fig. 1.12. See also Maple Program 1.8.12. > g_cumul:=plot(1/2*erf(1/2*2^(1/2)*x)+1/2, x=-5..5, labels=["x"," "]): > display(g3,g_cumul); See Fig. 8.21, where the cdf approximates the cdf of the standard normal distribution very well. > > >


Fig. 8.21. The cumulative distribution function

9 The Lyapunov Exponent: One-Dimensional Case

The entropy of a differentiable transformation T defined on an interval with an absolutely continuous invariant measure can be computed simply by taking the average of log |T′|. This average is called the Lyapunov exponent, and it measures the speed of divergence of two nearby points under iterations of T. In this chapter we consider only one-dimensional examples. Higher dimensional cases are treated in the next chapter. As an application, suppose that we want to calculate the continued fraction of a real number x up to its nth partial quotient. How can we guarantee the accuracy of the computation? The answer is to start with at least nh significant decimal digits, where h ≈ 1.03 is the entropy of the Gauss transformation with respect to the logarithmic base 10. The same idea is used throughout the book in most numerical experiments that are sensitive to the accuracy of initial data. In the last section a method of finding the optimal number of card shuffles is investigated from the viewpoint of the Lyapunov exponent. For expositions on differentiable dynamical systems see [Ru2],[Y2], and for a comprehensive introduction to Lyapunov exponents see [BaP].

9.1 The Lyapunov Exponent of Differentiable Maps

Let X = [0, 1]. Suppose that T is a piecewise continuously differentiable map on X, i.e., the derivative of T is piecewise continuous. For |x − y| ≈ 0 choose a value of n that is sufficiently large, but not too large. Then we have

   |T(x) − T(y)| ≈ |T′(x)| × |x − y| ,

and

   |T^n(x) − T^n(y)| ≈ ∏_{i=0}^{n−1} |T′(T^i x)| × |x − y| .

Hence

   (1/n) log |T^n(x) − T^n(y)| ≈ (1/n) ∑_{i=0}^{n−1} log |T′(T^i x)| .

(The reason why n should not be too large is that if we let n → ∞ then T^n(x) and T^n(y) may come close to each other for some large n since X is a bounded set. We assume that the right hand side has a limit as n → ∞.) If µ is an ergodic invariant measure for T, then the right hand side converges to ∫_0^1 log |T′| dµ by the Birkhoff Ergodic Theorem. Thus ∫_0^1 log |T′| dµ measures the exponent of the speed of the divergence.

Definition 9.1. The number ∫_0^1 log |T′| dµ is called the Lyapunov exponent of T.

In Theorem 9.4 the Lyapunov exponent will be shown to be equal to the entropy under the following conditions: Let T : X → X be a piecewise C¹ mapping. Assume that there exists a T-invariant probability measure dµ = ρ(x) dx such that A ≤ ρ(x) ≤ B for some constants A, B > 0 for every x. Suppose that P is a countable generating partition of X and that P consists of intervals Ej such that µ(Ej) > 0, ∑_j µ(Ej) = 1. Let Pn = P ∨ T^{−1}P ∨ · · · ∨ T^{−(n−1)}P, i.e., Pn consists of cylinder sets

   [i1, . . . , in] = E_{i1} ∩ T^{−1}E_{i2} ∩ · · · ∩ T^{−(n−1)}E_{in} .

Assume that there exists N such that if n ≥ N then (i) every cylinder set is an interval and (ii) T is monotone and continuously differentiable on any cylinder set in Pn.

First we need two lemmas.

Lemma 9.2. Let λ be Lebesgue measure on [0, 1]. Then

   λ([i2, . . . , in]) = |T′(z_{i1,...,in})| λ([i1, . . . , in])

for some z_{i1,...,in} ∈ [i1, . . . , in] for sufficiently large n.

Proof. Since T([i1, . . . , in]) = [i2, . . . , in] and T is one-to-one on [i1, . . . , in],

   λ([i2, . . . , in]) = ∫_{[i1,...,in]} |T′(x)| dx = |T′(z_{i1,...,in})| λ([i1, . . . , in])

for some z_{i1,...,in} ∈ [i1, . . . , in].

Lemma 9.3. Suppose a sequence {bn}_{n=1}^∞ satisfies

   lim_{n→∞} (b_{n+1} − bn) = C

for some constant C. Then

   lim_{n→∞} bn/n = C .


Proof. For any ε > 0, choose N such that if n ≥ N then |b_{n+1} − bn − C| < ε. Hence

   |b_{N+k} − b_N − kC| ≤ ∑_{j=1}^{k} |b_{N+j} − b_{N+j−1} − C| < kε

and

   | b_{N+k}/(N+k) − b_N/(N+k) − (k/(N+k)) C | < (k/(N+k)) ε .

Letting k → ∞ we obtain

   C − ε ≤ lim inf_{n→∞} bn/n ≤ lim sup_{n→∞} bn/n ≤ C + ε .

Since ε is arbitrary, the proof is complete.

Theorem 9.4. Let h be the entropy of T. With the conditions on T given in the above we have

   h = ∫_0^1 log |T′(x)| dµ .

Proof. Put

   bn = ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i1, . . . , in])

and

   Ln = ∑_{i1,...,in} µ([i1, . . . , in]) log |T′(z_{i1,...,in})| .

Lemma 9.2 implies that

   bn = ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i2, . . . , in]) − Ln
      = ∑_{i2,...,in} µ([i2, . . . , in]) log λ([i2, . . . , in]) − Ln
      = b_{n−1} − Ln .

Hence

   lim_{n→∞} (bn − b_{n−1}) = − lim_{n→∞} Ln = − ∫_0^1 log |T′| dµ .

Lemma 9.3 implies that

   lim_{n→∞} bn/n = − ∫_0^1 log |T′| dµ .

Therefore


   h = − lim_{n→∞} (1/n) ∑_{i1,...,in} µ([i1, . . . , in]) log µ([i1, . . . , in])
     = − lim_{n→∞} (1/n) ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i1, . . . , in])
     = − lim_{n→∞} bn/n
     = ∫_0^1 log |T′| dµ .

The second equality comes from the fact that

   log A + log λ([i1, . . . , in]) ≤ log µ([i1, . . . , in]) ≤ log B + log λ([i1, . . . , in]) .

For transformations with singular invariant measures, Theorem 9.4 does not hold. For example, consider the singular continuous measure µ on [0, 1] obtained from the (p, 1−p)-Bernoulli measure, 0 < p < 1, p ≠ 1/2, by the binary expansion as in Ex. 5.23. Then µ is invariant under T x = 2x (mod 1). Note that the entropy of T with respect to µ is equal to −p log p − (1 − p) log(1 − p), but its Lyapunov exponent is equal to log 2.

Example 9.5. The Lyapunov exponent of an irrational translation modulo 1 is equal to zero since T′ = 1. This is in agreement with Ex. 8.3.

Example 9.6. Consider T x = {2x}. The Lyapunov exponent is equal to log 2 since T′ = 2. This is in agreement with Ex. 8.7.

Example 9.7. Let β = (√5 + 1)/2 and T x = {βx}. Since T′ = β, the Lyapunov exponent is equal to log β. This is in agreement with Ex. 8.11.

Example 9.8. Let 0 < p < 1. Consider the Bernoulli transformation Tp on [0, 1] given by

   Tp x = x/p ,              0 ≤ x < p ,
   Tp x = (x − p)/(1 − p) ,  p ≤ x ≤ 1 .

It preserves Lebesgue measure, and its Lyapunov exponent equals −p log p − (1 − p) log(1 − p).

The boundedness assumption A ≤ ρ(x) ≤ B in Theorem 9.4 fails when the invariant density is unbounded, as for the logistic transformation with ρ(x) = 1/(π√(x(1 − x))), but the argument can be modified. For any small a > 0 the limits as n → ∞ of the above sums over the cylinder sets [i1, . . . , in] ⊂ [a, 1 − a] are equal since ρ(x) is bounded from below and above on [a, 1 − a]. Now we show that for large n

   ∑ µ([i1, . . . , in]) log µ([i1, . . . , in]) ≈ ∑ µ([i1, . . . , in]) log λ([i1, . . . , in]) ,

where the sums are taken over the intervals [i1, . . . , in] inside [0, a] for some small a > 0. The collection of such cylinder sets forms a partition Pa of an interval [0, a1] for some a1 ≤ a, where the endpoints of cylinder sets are ignored. Let 0 < t1 < · · · < tk = a1 be the partition so obtained. Then

   µ([tj, tj+1]) / λ([tj, tj+1]) = ( ∫_{tj}^{tj+1} ρ(x) dx ) / (tj+1 − tj) = ρ(t)

for some tj < t < tj+1. Hence as a → 0 we have

   lim_{n→∞} ∑_{[i1,...,in]∈Pa} µ([i1, . . . , in]) log ( µ([i1, . . . , in]) / λ([i1, . . . , in]) )
     = ∫_0^{a1} ρ(x) log ρ(x) dx ≈ ∫_0^a (1/(π√x)) log (1/(π√x)) dx
     = − (√a log a)/π + (2√a)/π − (2√a log π)/π → 0 .

We have used the fact that ρ(x) ≈ 1/(π√x) if x ≈ 0.

Lemma 9.10.

   ∫_0^{π/2} log sin θ dθ = − (π/2) log 2 .

Proof. Consider an analytic function f (z) = log(1 − z), |z| ≤ r < 1. Its real part u(z) = log |1 − z| is harmonic, and hence the Mean Value Theorem for harmonic functions implies that


   0 = u(0) = (1/2π) ∫_0^{2π} u(re^{iθ}) dθ = (1/2π) ∫_0^{2π} log |1 − re^{iθ}| dθ .

Letting r → 1, we have 0 = ∫_0^{2π} log |1 − e^{iθ}| dθ. Since |1 − e^{iθ}| = 2 sin(θ/2), 0 ≤ θ ≤ 2π, we have

   −2π log 2 = ∫_0^{2π} log sin(θ/2) dθ .

Substituting t = θ/2, we obtain

   −π log 2 = ∫_0^{π} log sin t dt = 2 ∫_0^{π/2} log sin t dt .

Example 9.11. Consider the Gauss transformation T x = {1/x}. Since |T′| = x^{−2}, the Lyapunov exponent with respect to the logarithmic base b > 1 is given by

   ∫_0^1 (−2 log_b x) (1/(ln 2)) (1/(1 + x)) dx = −(2/(ln b ln 2)) ∫_0^1 (ln x/(1 + x)) dx = π²/(6 ln b ln 2) .

We have used the following calculation:

   ∫_0^1 (ln x/(1 + x)) dx = − ∫_0^1 (ln(1 + x)/x) dx   (integration by parts)
     = − ∫_0^1 (1 − x/2 + x²/3 − x³/4 + · · ·) dx
     = ∑_{n=1}^∞ (−1)^n/n² = − ∑_{n=1}^∞ 1/n² + 2 ∑_{n even} 1/n²
     = − (1/2)(π²/6) .

For the last equality we have used

   ∑_{n=1}^∞ 1/(2n)² = (1/2²) ∑_{n=1}^∞ 1/n² = (1/2²)(π²/6) .

See Maple Program 9.7.1.

Example 9.12. Consider the linearized Gauss transformation

   T x = −n(n + 1)(x − 1/n) ,   1/(n + 1) < x < 1/n .

It preserves Lebesgue measure. See the right graph in Fig. 2.1. Its entropy is equal to

   ∑_{n=1}^∞ log n(n + 1) / (n(n + 1)) .
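The value π²/(6 ln b ln 2) can be confirmed numerically (an added Python check; the midpoint rule and the number of subintervals are arbitrary choices) by integrating −2 log_b x against the Gauss density 1/(ln 2 (1 + x)).

```python
import math

# Numerical check of Example 9.11: the Lyapunov exponent of the Gauss map with
# respect to base b is pi^2 / (6 ln b ln 2).  The integrand has an integrable
# logarithmic singularity at 0, which the midpoint rule handles well enough.
b = 10.0
N = 200_000
total = 0.0
for i in range(N):
    x = (i + 0.5) / N                                    # midpoint of subinterval i
    total += (-2 * math.log(x, b)) / (math.log(2) * (1 + x)) / N

exact = math.pi ** 2 / (6 * math.log(b) * math.log(2))
print(total, exact)   # both near 1.0306, matching Table 9.1
```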


Example 9.13. Consider the linearized logarithmic transformation

   T x = −2^n (x − 2^{1−n}) = −2^n x + 2 ,   2^{−n} < x < 2^{1−n} .

It preserves Lebesgue measure. Its entropy is equal to

   log 2 ∑_{n=1}^∞ n 2^{−n} = 2 log 2 .

Theorem 9.14. Let S and T be continuously differentiable transformations on [0, 1] preserving probability measures ρ1(x) dx and ρ2(x) dx, respectively. Suppose that there exists a continuous and piecewise differentiable conjugacy mapping φ : [0, 1] → [0, 1] such that

   T ◦ φ = φ ◦ S .

(We assume that S, T and φ satisfy some integrability conditions as needed in the following proof.) Then S and T have the same Lyapunov exponent. As a corollary, the Lyapunov exponent of the logistic transformation is equal to log 2. Consult Ex. 2.24.

Proof. Since φ is monotone, we assume that φ(0) = 0 and φ(1) = 1. The other case is similar. Since φ is measure preserving,

   ∫_0^{φ(x)} ρ2(t) dt = ∫_0^x ρ1(t) dt .

Hence ρ1(x) = ρ2(φ(x)) φ′(x). From

   T′(φ(x)) φ′(x) = φ′(S(x)) S′(x) ,

we have

   ∫_0^1 log |T′(φ(x))| ρ1(x) dx + ∫_0^1 log |φ′(x)| ρ1(x) dx
     = ∫_0^1 log |φ′(S(x))| ρ1(x) dx + ∫_0^1 log |S′(x)| ρ1(x) dx .

Note that

   ∫_0^1 log |T′(φ(x))| ρ1(x) dx = ∫_0^1 log |T′(φ(x))| ρ2(φ(x)) φ′(x) dx = ∫_0^1 log |T′(y)| ρ2(y) dy ,

which is equal to the Lyapunov exponent of T. By the invariance of ρ1 dx under S,

   ∫_0^1 log |φ′ ◦ S| ρ1 dx = ∫_0^1 log |φ′| ρ1 dx .

Hence the Lyapunov exponent of S, ∫_0^1 log |S′| ρ1 dx, equals that of T.


Now we present a method to estimate the exponent of the growth of the distance between the orbits of two neighboring points.

Definition 9.15. For 0 ≤ x ≤ 1 − 10^{−k}, let x̃ = x + 10^{−k}. Define

   Hn,k(x) = (1/n) log |T^n(x) − T^n(x̃)| + k/n .

Note that Hn,k(x) is close to the Lyapunov exponent for large n, k since

   (1/n) log |T^n(x) − T^n(x̃)| ≈ (1/n) ∑_{i=0}^{n−1} log |T′(T^i x)| + (1/n) log |x − x̃| .

The correction term

   −(1/n) log |x − x̃| = k/n

should not be ignored in numerical simulations because k is also large. As mentioned in the beginning of this section, a value for n used in the experiment should not be too large since |T^n(x) − T^n(x̃)| cannot grow indefinitely.

Simulations for Theorem 9.4 are done in two ways. First, we consider a local version. Choose x0 = π − 3 as an arbitrary point. In Fig. 9.1 Hn,k(x0) is regarded as a function of n, 10 ≤ n ≤ 300, with k fixed, and its graphs are given for the logistic transformation, the Gauss transformation and the Bernoulli transformation Tp with p = 7/10 in Ex. 9.8. The horizontal lines indicate the Lyapunov exponents. We choose the logarithmic base 10 because it is convenient when we consider the number of significant decimal digits.

0.3

0.4 0.3

1

0.2

0.2 0.5

0.1 0

n

300

0

0.1 n

300

0

n

300

Fig. 9.1. y = Hn,k (x0 ) with x0 = π − 3 for the logistic transformation, the Gauss transformation and a Bernoulli transformation (from left to right)

Let D be the number of significant digits, and let h be the Lyapunov exponent. Then in numerical simulations for Hn,k we need to choose suitable combinations of n, k and D. First, we need k < D, otherwise x̃0 is not defined. Second, we need k > nh since

   0 > (1/n) log |T^n x − T^n x̃| ≈ h − k/n .


Third, the maximal value of n should be less than D/h since an application of T erases h significant decimal digits on the average, which is discussed in detail in Sect. 9.2. For the choices of k and D for the simulations presented in Fig. 9.1, consult Table 9.1. See Maple Program 9.7.2. For transformations T with constant |T′| such as T x = 2x (mod 1) and T x = βx (mod 1), the numerical simulation works very well, and the graphs obtained from computer experiments are almost horizontal, and so they are not presented here.

Table 9.1. Choices of k and D for Hn,k(x0), 10 ≤ n ≤ 300

   Transformation   k     D     Lyapunov exponent
   Logistic         200   300   0.3010 · · ·
   Gauss            350   400   1.0306 · · ·
   Bernoulli        200   300   0.2652 · · ·
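The local simulation can be sketched with Python's decimal arithmetic in place of Maple's Digits (an added illustration; the seed 0.1234567891 is an arbitrary choice, not the book's x0 = π − 3, and D = 400, k = 200, n = 300 satisfy the three constraints above).

```python
from decimal import Decimal, getcontext

# High-precision sketch of H_{n,k}(x0) (Definition 9.15) for the logistic map
# T x = 4x(1-x).  The result should be near the Lyapunov exponent
# log10(2) = 0.3010... in base 10.
getcontext().prec = 400          # D = 400 significant digits
k, n = 200, 300                  # k < D, k > n h, n < D/h with h = log10(2)

x = Decimal("0.1234567891")      # arbitrary seed
y = x + Decimal(10) ** (-k)      # x~ = x + 10^(-k)

def T(u):                        # logistic transformation
    return 4 * u * (1 - u)

for _ in range(n):
    x, y = T(x), T(y)

# H_{n,k}(x0) = (1/n) log10 |T^n x - T^n x~| + k/n
H = (abs(x - y).log10() + k) / n
print(H)                         # close to 0.3010
```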

For the global version of the simulation we take x0 = π − 3 and consider the orbit {T^j x0 : 1 ≤ j ≤ 200}. In Fig. 9.2 the points (T^j x0, Hn,k(T^j x0)) are plotted for the logistic transformation, the Gauss transformation and the Bernoulli transformation in Ex. 9.8. The horizontal lines indicate the Lyapunov exponents. For the choices of n, k and D, see Table 9.2, where 'Ave' denotes the sample average along the orbit. See Maple Program 9.7.3.

Fig. 9.2. Plots of (x, Hn,k (x)) for the logistic transformation, the Gauss transformation and a Bernoulli transformation (from left to right)

Table 9.2. Choices of n, k and D for the average of Hn,k

   Transformation   n     k     D     Ave[Hn,k]
   Logistic         100   100   200   0.298
   Gauss            200   300   400   1.017
   Bernoulli        100   100   200   0.273


9.2 Number of Significant Digits and the Divergence Speed

With the rapid development of computer technology some of the abstract and sophisticated mathematical concepts can now be tested experimentally. For example, Maple allows us to do floating point calculations with a practically unlimited number of decimal digits: the error bound can be as small as 10^{−10000} or less if required. Until recently computations with this level of accuracy were practically impossible because of the small amount of random access memory and slow central processing units, even though the software was already available some years ago. The reason why we need very high accuracy in experiments with dynamical systems is that, as we iterate a transformation T, the accuracy evaporates at an exponential rate. The following result gives a formula for how many significant digits are needed in iterating T. Let x and x̃ be two nearby points. As we iterate T, the distance |T^n x − T^n x̃| diverges exponentially:

   |T^n x − T^n x̃| ≈ 10^{nh} × |x − x̃| ,

as shown in the previous section, where h is the Lyapunov exponent with respect to the logarithmic base 10.

Definition 9.16. Let X be the unit interval. Define the nth divergence speed for 0 ≤ x ≤ 1 − 10^{−n} by

   Vn(x) = min{ j ≥ 1 : |T^j x − T^j x̃| ≥ 10^{−1} }   where x̃ = x + 10^{−n} .

Note that for 0 ≤ x < 1 there exists N = N(x) such that Vn(x) is defined for every n ≥ N. The integer Vn may be regarded as the maximal number of iterations after which some precision still remains when we start floating point calculations with n significant digits.

Example 9.17. (i) For T x = 10x (mod 1) we have Vn(x) = n − 1 for 0 ≤ x ≤ 1 − 10^{−n}.
(ii) We may define the divergence speed on the unit circle in an almost identical way. Consider an irrational translation mod 1, which can be identified with a rotation on the circle. Then Vn = +∞ for every n ≥ 2, which is in agreement with the fact that the entropy is equal to 0.

Remark 9.18. Let T be an ergodic transformation on [0, 1] with an absolutely continuous probability measure. Suppose that T is piecewise differentiable. If T has the Lyapunov exponent (or entropy) h > 0, then

   n / Vn(x) ≈ h

if n is sufficiently large. To see why, note that


|T^{Vn} x − T^{Vn} x̃| ≈ 10^{Vn h} × |x − x̃| = 10^{Vn h} 10^{−n}.

Hence

10^{−1} ≈ 10^{Vn h − n}

and

−1 ≤ Vn h − n ≤ 0,

giving

(n − 1)/Vn ≤ h ≤ n/Vn.

In iterating a given transformation T, if T is applied once to the value x with n significant decimal digits, then T x has n − h significant decimal digits on average. Therefore, if we start with n significant decimal digits for x, the above formula suggests that we may iterate T about Vn ≈ n/h times with at least one significant decimal digit remaining in T^{Vn} x. We may use base 2 in defining Vn and use the logarithm with respect to the base 2 in defining the Lyapunov exponent.

Suppose that we employ n = D significant decimal digits in a Maple simulation. Take a starting point of an orbit, e.g., x0 = π − 3. In numerical calculations x0 is stored as x̃0, a numerical approximation of x0 satisfying

|x0 − x̃0| ≈ 10^{−D}

since there are D decimal digits to be used. For every integer j satisfying 1 ≤ j ≤ D/h, there corresponds an integer k ≈ D − j × h such that

T^j x̃0 = 0.a1 ... ak ak+1 ... aD

where the first k decimal digits are significant and the last D − k decimal digits are meaningless, having no relation to the true theoretical orbit point T^j x0. Hence if j ≈ D/h, then T^j x̃0 loses all precision and has nothing to do with T^j x0. In this argument we do not have to worry about truncation errors of size 10^{−D}, which are negligible in comparison with 10^{−k}. A trivial conclusion is that, for simulations with irrational translations modulo 1, we do not need many significant digits.

In Fig. 9.3 for the local version of the divergence speed, the graphs y = n/Vn(x0), x0 = π − 3, are presented as functions of n, 2 ≤ n ≤ 100. As
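Definition 9.16 can be tried directly in exact rational arithmetic. The following is a minimal Python sketch (the book's own programs use Maple; Fraction avoids any floating point loss) computing Vn for the decimal map T x = 10x (mod 1), for which Example 9.17 gives Vn(x) = n − 1:

```python
from fractions import Fraction

def divergence_speed(T, x, n):
    # V_n(x) = min{ j >= 1 : |T^j x - T^j y| >= 1/10 },  y = x + 10^(-n)
    y = x + Fraction(1, 10**n)
    x, y = T(x), T(y)
    j = 1
    while abs(x - y) < Fraction(1, 10):
        x, y = T(x), T(y)
        j += 1
    return j

def decimal_map(x):
    # T x = 10 x (mod 1)
    return (10 * x) % 1

for n in (3, 5, 8):
    print(n, divergence_speed(decimal_map, Fraction(1, 7), n))  # V_n = n - 1
```

The starting point 1/7 is an arbitrary choice whose decimal expansion never terminates; for it the perturbation 10^{−n} never wraps around 1, so the distance is exactly 10^{j−n} after j steps.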


Fig. 9.3. The divergence speed: y = n/Vn (x0 ) at x0 = π−3 for the β-transformation, the logistic transformation and the Gauss transformation (from left to right)

n increases, n/Vn(x0) converges to the Lyapunov exponent, represented by a horizontal line. See Maple Program 9.7.4.

In Fig. 9.4 for the global version of the divergence speed, the points (T^j x0, n/Vn(T^j x0)), 1 ≤ j ≤ 200, for n = 100 are plotted. The horizontal lines indicate the Lyapunov exponents. In the simulations 300 significant decimal digits were used, which is more than enough. See Maple Program 9.7.5.


Fig. 9.4. The divergence speed: Plots of (x, 100/V100 (x)) for the β-transformation, the logistic transformation and the Gauss transformation (from left to right)

9.3 Fixed Points of the Gauss Transformation

Using the idea of the Lyapunov exponent we test the effectiveness of the continued fraction algorithm employed by Maple. In this section we use the logarithm to the base 10. If T is a piecewise continuously differentiable ergodic transformation with an invariant measure ρ(x) dx, then

h = ∫₀¹ log10 |T′(x)| ρ(x) dx = lim_{n→∞} (1/n) Σ_{j=0}^{n−1} log10 |T′(T^j x0)|


for almost every x0. The formula need not hold true for all x0. If x0 is an exceptional point that does not satisfy the above relation, for example, if x0 is a fixed point of T, then the average of log10 |T′(x)| along the orbit x0, T x0, T² x0, ..., is equal to log10 |T′(x0)|.

For example, consider the Gauss transformation T x = {1/x}, 0 < x < 1. Take

x0 = [k, k, k, ...] = 1/(k + 1/(k + 1/(k + ···))).

Then T x0 = x0 and x0 = 1/(k + x0). Note that

x0 = (−k + √(k² + 4))/2.

Since log10 |T′(x0)| = −2 log10 x0, if we are given D significant decimal digits we lose all the significant digits after approximately D/(−2 log10 x0) iterations of T. In the simulation we take D = 100 and 1 ≤ k ≤ 9, k = 10, 20, ..., 100 and k = 200. The number of correct partial quotients in the continued fraction of x0 obtained from Maple computation is denoted by Nk.

Let Ck be the theoretical value for the maximal number of iterations of T that produces the correct values for the partial quotients of x0 = [k, k, k, ...] when the error bound for the initial data is 10^{−D}. In other words, if we let x̃0 be an approximation of x0 such that

|x̃0 − x0| ≈ (1/2) × 10^{−D},

then

T^{Ck−1} x̃0 ∈ Ik = [1/(k+1), 1/k]

and T^{Ck} x̃0 ∉ Ik. Since x0 is a fixed point of T, i.e., T^j x0 = x0 for every j, we have

|T^n x̃0 − x0| / |x̃0 − x0| = |T^n x̃0 − T^n x0| / |x̃0 − x0| ≈ Π_{j=0}^{n−1} |T′(T^j x0)| = |T′(x0)|^n.

Hence

log10 |T^n x̃0 − x0| + D + log10 2 ≈ n log10 |T′(x0)| = n(−2 log10 x0).
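The geometric error growth |T^n x̃0 − x0| ≈ |T′(x0)|^n |x̃0 − x0| can be observed with Python's decimal module. The following sketch (not the book's Maple code) takes k = 2, so x0 = √2 − 1 and the base-10 growth rate per iteration is −2 log10 x0 ≈ 0.7656:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
x0 = Decimal(2).sqrt() - 1          # fixed point [2,2,2,...] of T x = {1/x}
x = x0 + Decimal(10) ** -40         # perturb in the 40th digit
n = 30
for _ in range(n):
    x = (1 / x) % 1                 # Gauss transformation
# measured base-10 growth exponent of the error per iteration
rate = (float((x - x0).copy_abs().log10()) + 40) / n
print(rate)                         # ≈ -2 log10(x0) ≈ 0.7656
```

With 60 working digits the perturbation 10^{−40} dominates rounding noise, so the error grows cleanly by the factor |T′(x0)| = 1/x0² per step until it reaches roughly 10^{−17} after 30 steps.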


Since T^{Ck−1} x̃0 ∈ Ik, we have

|T^{Ck−1} x̃0 − x0| ≤ max{ |x0 − 1/k|, |x0 − 1/(k+1)| }.

Therefore an approximate upper bound Uk for Nk is given by

Uk = ( D + log10 2 + log10 max{ |x0 − 1/k|, |x0 − 1/(k+1)| } ) / (−2 log10 x0) + 1.

On the other hand, since T^{Ck} x̃0 ∉ Ik, we have

|T^{Ck} x̃0 − x0| ≥ min{ |x0 − 1/k|, |x0 − 1/(k+1)| }.

Thus an approximate lower bound Lk for Nk is given by

Lk = ( D + log10 2 + log10 min{ |x0 − 1/k|, |x0 − 1/(k+1)| } ) / (−2 log10 x0).

Based on the simulation results presented in Table 9.3 we conclude that the continued fraction algorithm employed by Maple is more or less optimal. See Maple Program 9.7.6.

Table 9.3. Accuracy of the continued fraction algorithm of Maple: Test result for x0 = [k, k, k, . . .]

   k    Nk     Lk      Uk    |    k    Nk     Lk      Uk
   1   237   237.7   240.0   |   20    37    37.0    38.5
   2   129   129.6   130.6   |   30    33    32.4    33.9
   3    95    95.2    96.4   |   40    30    29.8    31.3
   4    80    78.5    79.8   |   50    28    28.0    29.5
   5    69    68.6    70.0   |   60    27    26.7    28.2
   6    63    62.0    63.4   |   70    26    25.7    27.2
   7    58    57.2    58.7   |   80    25    24.9    26.3
   8    54    53.6    55.1   |   90    25    24.2    25.7
   9    51    50.8    52.2   |  100    24    23.6    25.1
  10    48    48.4    49.9   |  200    20    20.3    21.8
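The bounds Lk and Uk involve only logarithms of x0, so double precision suffices to reproduce the values in Table 9.3. A Python sketch (the helper name `bounds` is ours; the book's version is Maple Program 9.7.6):

```python
import math

def bounds(k, D=100):
    # x0 = [k, k, k, ...] = (-k + sqrt(k^2 + 4)) / 2
    x0 = (-k + math.sqrt(k * k + 4)) / 2
    lo = min(abs(x0 - 1 / k), abs(x0 - 1 / (k + 1)))
    hi = max(abs(x0 - 1 / k), abs(x0 - 1 / (k + 1)))
    denom = -2 * math.log10(x0)
    Lk = (D + math.log10(2) + math.log10(lo)) / denom
    Uk = (D + math.log10(2) + math.log10(hi)) / denom + 1
    return Lk, Uk

print(bounds(10))   # ≈ (48.44, 49.90), matching the k = 10 row of Table 9.3
```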

9.4 Generalized Continued Fractions

Consider the generalized continued fraction transformations T x = {1/x^p} for suitable choices of p. Let x0 and p0 be solutions of the system of two equations

x = 1/x^p − 1,   −p/x^{p+1} = −1.

Their numerical values are x0 ≈ 0.3183657369, p0 ≈ 0.2414851418. Note that x0 is the rightmost intersection point of the graph y = T x with y = x and that the derivative of y = {1/x^{p0}} at x0 is equal to −1. See Fig. 9.5.


Fig. 9.5. y = T x (left) and y = T 2 x (right) for p = p0
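The system determining x0 and p0 can be reduced to one unknown: the fixed-point equation gives p = −ln(1 + x)/ln x, and substituting into the slope condition p = x^{p+1} leaves a single equation in x. A Python bisection sketch (the bracket [0.25, 0.4] is our choice, made by inspecting the sign change of g):

```python
import math

def p_of_x(x):
    # from x = 1/x^p - 1:  (1 + x) x^p = 1  =>  p = -ln(1 + x)/ln(x)
    return -math.log(1 + x) / math.log(x)

def g(x):
    # slope condition p/x^(p+1) = 1, i.e. p = x^(p+1)
    p = p_of_x(x)
    return p - x ** (p + 1)

a, b = 0.25, 0.4                    # g(a) < 0 < g(b)
for _ in range(60):
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
x0 = (a + b) / 2
p0 = p_of_x(x0)
print(x0, p0)                       # ≈ 0.3183657369, 0.2414851418
```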

For p > p0, the modulus of the slope of the tangent line to y = {1/x^p} at the intersection point with y = x is greater than 1 and the rightmost fixed point is a repelling point. See the left graph in Fig. 9.6. For 0 < p < p0, the modulus of the slope of the tangent line is less than 1 and the rightmost fixed point is an attracting point. See the right graph in Fig. 9.6.


Fig. 9.6. y = T 2 x for p = 0.35 > p0 (left) and p = 0.15 < p0 (right)

Theorem 9.19. For p > p0, T has an absolutely continuous invariant density ρ(x) such that

ρ(0) = (1 + 1/p) ρ(1).

Proof. Use Theorem 5.4 and φ′(1) = −p.


Observe in Fig. 9.7 that ρ(x) converges to 1 as p → ∞. Consult [Ah2] for a proof. The shape of the distribution becomes more concentrated around x0 as p ↓ p0.


Fig. 9.7. Invariant probability density functions for T x = {1/x^p} with p = 0.2415, 0.242, 0.244, 0.25, 0.3, 0.5, 1, 2 and 4 (from top left to bottom right). For p = 1 the theoretical pdf is also drawn

The Lyapunov exponent of T x = {1/x^p} is equal to the average of log |(x^{−p})′| = log p − (p + 1) log x with respect to its invariant measure. We choose 10 for the logarithmic base. See Table 9.4. The sample size for each p is at least 10^6. Note that

log p0 − (p0 + 1) log x0 = 0.

If p (> p0) is close to p0, then the density is concentrated around x0 and the integral of log p − (p + 1) log x is close to 0. For p ≫ 1 the Lyapunov exponent may be approximated by ∫₀¹ log |(x^{−p})′| dx = log p + (p + 1) log e.


Table 9.4. Lyapunov exponents for T x = {1/x^p} for p > p0

     p      Estimate   True value   log10 p + (p+1) log10 e
  0.2415     0.004
  0.242      0.019
  0.244      0.044
  0.25       0.083
  0.3        0.230
  0.5        0.538
  1          1.026     1.03064
  2          1.739
  4          2.891                   2.774
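The estimates in Table 9.4 come from Birkhoff averages of log10 p − (p + 1) log10 x along an orbit. A short Python sketch (double precision; the computed orbit is only statistically faithful to the true one, which is enough for the average; the starting point π − 3 follows the text):

```python
import math

def lyapunov_estimate(p, n=200000, x=math.pi - 3):
    # Birkhoff average of log10 |(x^(-p))'| = log10(p) - (p + 1) log10(x)
    s = 0.0
    for _ in range(n):
        s += math.log10(p) - (p + 1) * math.log10(x)
        x = (1 / x ** p) % 1        # T x = {1/x^p}
        if x == 0.0:                # guard against a measure-zero exact hit
            x = math.pi - 3
    return s / n

print(lyapunov_estimate(1.0))       # ≈ 1.03 for the Gauss transformation
```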

In Fig. 9.8 the points (n, T^n a), 1 ≤ n ≤ 1000, are plotted to represent the orbits of T x = {1/x^p} for p = 0.242, 0.244, 0.25, 0.3, 0.5 and 1. The starting point is a = π − 3. As p increases the randomness also increases. For small p, it takes very long for an orbit to escape from a small neighborhood of the rightmost fixed point of T. The horizontal lines represent the rightmost fixed points for p = 0.242, 0.244 and 0.25. For more information see [C1].


Fig. 9.8. Orbits of T x = {1/x^p} of length 1000 for p = 0.242, 0.244, 0.25, 0.3, 0.5 and 1 (from top left to bottom right)

As in Sect. 3.6 we can deﬁne the Khinchin constants for generalized Gauss transformations. For their numerical estimation see [CKc].

286

9 The Lyapunov Exponent: One-Dimensional Case

9.5 Speed of Approximation by Convergents

Let T be an ergodic transformation on the unit interval. Suppose that we have a partition P of [0, 1] into disjoint subintervals, say [0, 1] = ∪_k Ik, the union being finite or infinite depending on the problem. If P is generating, then almost every point x has a unique symbolic representation x = a1 a2 ... ak ... according to the rule T^{k−1} x ∈ I_{ak} for every k ≥ 1. Let Pn(x) be the unique element containing x in the partition Pn = P ∨ T^{−1}P ∨ ··· ∨ T^{−(n−1)}P. In other words,

Pn(x) = I_{a1} ∩ T^{−1}(I_{a2}) ∩ ··· ∩ T^{−(n−1)}(I_{an}).

We are interested in the convergence speed of the nth convergent xn to x, where xn is any convenient point in Pn(x). In general, if Pn(x) is an interval, we choose one of the endpoints of Pn(x) as its nth convergent xn.

Theorem 9.20. Let T : [0, 1] → [0, 1] be an ergodic transformation with an invariant probability measure dµ = ρ(x) dx, where ρ is piecewise continuous and A ≤ ρ(x) ≤ B for some constants 0 < A ≤ B < ∞. Let h be the Lyapunov exponent of T. Consider the case when Pn(x) is an interval for every x. Choose one of the endpoints of Pn(x) as the nth convergent xn of each x. Then

lim inf_{n→∞} −(log |x − xn|)/n ≥ h,

where the same logarithmic base is used to define entropy and logarithm.

Proof. Let ℓ(I) denote the length of an interval I. The Shannon–McMillan–Breiman Theorem states that

lim_{n→∞} (log µ(Pn(x)))/n = −h

for almost every x. Since ρ is continuous at almost every x, we have

µ(Pn(x)) = ∫_{Pn(x)} ρ(t) dt ≈ ρ(x) × ℓ(Pn(x))

where '≈' means 'being approximately equal to' when n is sufficiently large. Hence

lim_{n→∞} (log ℓ(Pn(x)))/n = −h

for almost every x. Since |x − xn| ≤ ℓ(Pn(x)), we have

lim sup_{n→∞} (log |x − xn|)/n ≤ −h.


In many examples the limit infimum and the inequality in the preceding theorem are a limit and an equality, respectively, and if this happens then we may view the symbolic representation as an expansion with the base b ≈ 10^h. For example, the decimal expansion obtained from T x = 10x (mod 1) uses the base b = 10. In this context the expansions corresponding to x → {1/x²} and x → {1/√x} employ bases approximately equal to 10^{1.8} ≈ 63 and 10^{0.5} ≈ 3, respectively. See Fig. 9.9.

Example 9.21. For T x = {10x}, we have the usual decimal expansion

x = 0.a1 a2 a3 ... = (1/10)(a1 + (1/10)(a2 + (1/10)(a3 + ···)))

where

xn = 0.a1 a2 ... an = (1/10)(a1 + (1/10)(a2 + ··· + (1/10) an)).

Hence |x − xn| ≤ 10^{−n}. In this case we can prove that (log |x − xn|)/n converges to −1 for almost every x. First, note that (x − xn) × 10^n = T^n x. For any fixed δ > 0, define An by

An = { x : (log |x − xn|)/n ≤ −1 − δ } = { x : (log T^n x)/n ≤ −δ }.

Let µ denote Lebesgue measure on [0, 1). Since µ(An) = 10^{−δn}, we have Σ_{n=1}^∞ µ(An) < ∞. The Borel–Cantelli Lemma implies that the set of points belonging to An for infinitely many n has measure zero. Therefore for almost every x we have

lim_{n→∞} (log |x − xn|)/n = −1.
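The convergence (log10 |x − xn|)/n → −1 of Ex. 9.21 can be checked exactly with rational arithmetic. A Python sketch (x = 1/7 is an arbitrary choice with a non-terminating decimal expansion):

```python
from fractions import Fraction
import math

x = Fraction(1, 7)
for n in (5, 10, 20):
    xn = Fraction(int(x * 10**n), 10**n)      # truncate to n decimal digits
    rate = -math.log10(float(x - xn)) / n     # approaches 1 as n grows
    print(n, round(rate, 4))
```

Since x − xn = T^n x / 10^n with T^n x ∈ (0, 1), the computed rate is always slightly above 1 and tends to 1.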

Example 9.22. Let β = (√5 + 1)/2. For T x = {βx}, we have the expansion with respect to the base β given by

x = (0.a1 a2 a3 ...)β = (1/β)(a1 + (1/β)(a2 + (1/β)(a3 + ···)))

and

xn = (0.a1 a2 ... an)β = (1/β)(a1 + (1/β)(a2 + ··· + (1/β) an))

where ai ∈ {0, 1}. Hence |x − xn| ≤ β^{−n}. Proceeding as in Ex. 9.21, we can show that (log |x − xn|)/n converges to −log β for almost every x. Note that there is no consecutive string of 1's in the β-expansion of 0 < x < 1; in other words, if ai = 1 then a_{i+1} = 0.

Consider an arbitrary sum Σ_{i=1}^∞ ai β^{−i}, ai ∈ {0, 1}, which is not necessarily obtained from the β-expansion. The representation is not unique. For example,

(1/β)(0 + (1/β)(1 + (1/β)(1 + (1/β)(a4 + ···)))) = (1/β)(1 + (1/β)(0 + (1/β)(0 + (1/β)(a4 + ···)))).


This can be explained by the following fact: Let b > 1 be a real number, and let A = {a1, ..., ak} be a set of real numbers including 0. If all real numbers can be expressed in the form Σ_{i=−∞}^{M} ai b^i, and the set of multiple representations has Lebesgue measure zero, then b is an integer. See the remark at the end of Sect. 5.1 in [Ed].

Example 9.23. Let T : [0, 1] → [0, 1] be the Bernoulli transformation defined by

T x = x/p              if x ∈ [0, p),
T x = (x − p)/(1 − p)  if x ∈ [p, 1].

See also Ex. 2.31. For x ∈ [0, 1) and n ≥ 1, define βn(x) and an(x) by

βn(x) = 1/p        if T^{n−1} x ∈ [0, p),
βn(x) = 1/(1 − p)  if T^{n−1} x ∈ [p, 1],

and

an(x) = 0          if T^{n−1} x ∈ [0, p),
an(x) = p/(1 − p)  if T^{n−1} x ∈ [p, 1].

It can be shown that βn(x) = β1(T^{n−1} x) and an(x) = a1(T^{n−1} x) for every n ≥ 1, and

x = a1/β1 + a2/(β1 β2) + ··· + an/(β1 ··· βn) + T^n x/(β1 ··· βn).

See [DK] for more details. Let

xn = a1/β1 + a2/(β1 β2) + ··· + an/(β1 ··· βn).

Then

0 ≤ x − xn = T^n x/(β1 ··· βn).

As in Ex. 9.21, we have lim_{n→∞} (log T^n x)/n = 0 for almost every x, and hence

lim_{n→∞} −(log(x − xn))/n = lim_{n→∞} (1/n)( −log T^n x + Σ_{k=1}^n log βk(x) )
  = lim_{n→∞} (1/n) Σ_{k=1}^n log β1(T^{k−1} x)
  = ∫₀¹ log β1(x) dx
  = p log(1/p) + (1 − p) log(1/(1 − p)).
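The identity x − xn = T^n x/(β1 ··· βn) of Ex. 9.23 holds exactly, which a Python sketch with Fraction can verify without any rounding (p = 7/10 as in Maple Program 9.7.7; the starting point 1/7 is chosen arbitrarily):

```python
from fractions import Fraction

p = Fraction(7, 10)

def T(x):
    # Bernoulli transformation: x/p on [0, p), (x - p)/(1 - p) on [p, 1]
    return x / p if x < p else (x - p) / (1 - p)

x0 = Fraction(1, 7)
x, prod, xn = x0, Fraction(1), Fraction(0)
for n in range(1, 21):
    if x < p:
        beta, a = 1 / p, Fraction(0)
    else:
        beta, a = 1 / (1 - p), p / (1 - p)
    prod *= beta                    # prod = beta_1 ... beta_n
    xn += a / prod                  # nth convergent
    x = T(x)                        # x = T^n x0
    assert x0 - xn == x / prod      # exact identity from Ex. 9.23
print(float(x0 - xn))               # error shrinks geometrically
```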


Example 9.24. Let T x = {1/x}, 0 < x < 1. For almost every x we have the classical continued fraction expansion

x = 1/(a1 + 1/(a2 + 1/(a3 + ···)))

for which xn = pn/qn, (pn, qn) = 1, and

1/(2 qn q_{n+1}) < |x − pn/qn| < 1/qn².

Hence

−2 (log q_{n+1})/n < (log |x − pn/qn|)/n < −2 (log qn)/n

and

− lim_{n→∞} (log |x − pn/qn|)/n = lim_{n→∞} 2 (log qn)/n = π²/(6 log 2) = h

where log denotes the natural logarithm. The second and the third equalities come from Theorem 3.19 and Ex. 9.11, respectively.

Example 9.25. (Square-root continued fractions) For T x = {1/x²}, we have

x = 1/√([1/x²] + T x).

Since T x = 1/√([1/(T x)²] + T² x), we have

x = 1/√([1/x²] + 1/√([1/(T x)²] + T² x)).

Continuing indefinitely, we obtain

x = 1/√(a1 + 1/√(a2 + 1/√(a3 + ···))) = [a1, a2, a3, ...]

where a1(x) = [1/x²] and an(x) = a1(T^{n−1} x), n ≥ 2. To define xn we use the first n elements a1, ..., an.
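The digits an and the truncation xn of Ex. 9.25 can be sketched in Python (floats lose roughly 1.8 decimal digits per iteration of this map, so only the first few levels are reliable; that is already enough to see the reconstruction converge):

```python
import math

def sqrt_cf_digits(x, n):
    # a_1 = [1/x^2],  a_j = a_1(T^(j-1) x)  for T x = {1/x^2}
    digits = []
    for _ in range(n):
        a = int(1 / x**2)
        digits.append(a)
        x = 1 / x**2 - a
    return digits

def x_n(digits):
    # x_n = 1/sqrt(a_1 + 1/sqrt(a_2 + ... + 1/sqrt(a_n)))
    v = 0.0
    for a in reversed(digits):
        v = 1 / math.sqrt(a + v)
    return v

x = math.pi - 3
d = sqrt_cf_digits(x, 5)
print(d, abs(x_n(d) - x))   # difference shrinks roughly like 10^(-1.8 n)
```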


Example 9.26. (Squared continued fractions) For T x = {1/√x}, we have

x = (a1 + (a2 + (a3 + ···)^{−2})^{−2})^{−2},

where a1(x) = [1/√x] and an(x) = a1(T^{n−1} x), n ≥ 2. The first n elements a1, ..., an are used to define xn.


Fig. 9.9. y = −(log10 |x − xn|)/n, 2 ≤ n ≤ 100, at x = π − 3 for different expansions: decimal, β, Bernoulli, Gauss, T x = {1/x²} and T x = {1/√x} (from top left to bottom right). Horizontal lines indicate entropies

9.6 Random Shuffling of Cards

Given a deck of n cards, what is the minimal (or optimal) number of shuffles to achieve reasonable randomness among the cards? P. Diaconis [Di] regarded the problem as random walks on the symmetric group Sn on n symbols. A permutation g ∈ Sn corresponds to a method of shuffling n cards. A product g1 ··· gk, gi ∈ Sn, 1 ≤ i ≤ k, corresponds to consecutive applications of the shufflings corresponding to the gi. Let µ be a probability distribution on Sn; in other words, there exist g1, ..., gℓ ∈ Sn with µ(gj) > 0, Σ_{j=1}^{ℓ} µ(gj) = 1. A random walk on Sn is given by Pr(g → h) = µ(g^{−1}h) where Pr(g → h) is the probability of moving from g to h. Define the convolution µ ∗ µ(x) = Σ_{g∈Sn} µ(xg^{−1})µ(g). Similarly, the kth convolution µ^{∗k} is defined. The probability of moving from g to h in k


steps is given by µ^{∗k}(g^{−1}h). Let λ be the uniform distribution on Sn, i.e., each permutation has equal probability, and let ||·|| denote the norm defined on the real vector space of real-valued measures M(G) on G by ||ν|| = Σ_{g∈Sn} |ν(g)|, ν ∈ M(G). Now the card shuffling problem can be formulated in terms of convolution: What is the minimal number k for which ||µ^{∗k} − λ|| < ε for a sufficiently small constant ε > 0? For a suitable choice of µ reflecting the usual shuffling methods, Diaconis showed that k is approximately 7.

We introduce an approach based on ergodic theory. A continuous version of typical card shuffling, called riffle shuffling, is illustrated in Fig. 9.10. It may be regarded as a limiting case of shuffling of n cards as n → ∞. The unit interval [0, 1] is a model for a stack of infinitely many cards. Cut the stack at x = p and place the upper stack (Stack B) beside the lower stack (Stack A). Then shuffle Stack A and Stack B evenly.


Fig. 9.10. A continuous version of riﬄe shuﬄing: Stacks A and B correspond to the intervals [0, p] and [p, 1], respectively. They are linearly mapped to [0, 1]
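The decay of ||µ^{∗k} − λ|| can be sketched concretely for S3 in Python. The distribution µ below — mass 1/2 on the identity and 1/4 on each of two adjacent transpositions — is an illustrative choice, not Diaconis's riffle-shuffle measure:

```python
from itertools import permutations

n = 3
G = list(permutations(range(n)))            # the symmetric group S_3

def compose(g, h):                          # (gh)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(n))

mu = {(0, 1, 2): 0.5, (1, 0, 2): 0.25, (0, 2, 1): 0.25}

def convolve(a, b):
    # (a * b)(x) = sum over gh = x of a(g) b(h)
    c = {}
    for g, pg in a.items():
        for h, ph in b.items():
            gh = compose(g, h)
            c[gh] = c.get(gh, 0.0) + pg * ph
    return c

lam = 1.0 / len(G)                          # uniform distribution
nu, dists = dict(mu), []
for k in range(1, 11):
    dists.append(sum(abs(nu.get(g, 0.0) - lam) for g in G))
    nu = convolve(nu, mu)
print(dists[0], dists[-1])                  # ||mu - lam|| = 1.0, then decay
```

Because the two transpositions generate S3 and the identity carries positive mass, µ^{∗k} converges to λ, and the printed norms decrease with k.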

The transformation Tp defined in Ex. 9.8 corresponds to a typical shuffling. Note that Tp is ergodic with respect to Lebesgue measure. Its entropy is given by the Lyapunov exponent H(p) = −p log p − (1 − p) log(1 − p). Here we use the natural logarithm. Choose randomly p1, ..., pk, i.e., the pi are independent and uniformly distributed in [0, 1]. Consider the composite mapping T_{pk} ◦ ··· ◦ T_{p1}. Its entropy is equal to Σ_{i=1}^k H(pi), and its average over [0, 1]^k is equal to

∫₀¹ ··· ∫₀¹ Σ_{i=1}^k H(pi) dp1 ··· dpk = k ∫₀¹ H(p) dp = k/2.
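The value ∫₀¹ H(p) dp = 1/2 and the resulting shuffle count are easy to check numerically; a Python sketch (Monte Carlo with a fixed seed; n = 52 for a standard deck is our illustrative choice):

```python
import math
import random

def H(p):
    # entropy of the riffle map T_p (natural logarithm)
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

random.seed(0)
N = 200000
avg = sum(H(random.random()) for _ in range(N)) / N
print(avg)                     # ≈ 1/2

n_cards = 52
print(2 * math.log(n_cards))   # ≈ 7.9 shuffles to accumulate entropy log n
```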

Suppose that there are n cards. If T_{pk} ◦ ··· ◦ T_{p1} is close to a random shuffling of n cards, then it would look like the transformation x → {nx}. Thus they have almost equal entropy and we may conclude that k ≈ 2 log n.

In Japan and Korea there are traditional card games played with forty-eight cards. Each month in a year is represented by four cards. The cards


are made of small, thick and inflexible material and the shuffling method is different. It is called overhand shuffling. To have a perfect shuffling of n cards by overhand shuffling, approximately n² shuffles are necessary, which is far more than the number of shuffles needed in riffle shuffling. See [Pem] for the proof. From experience, card players know this fact, and that is why one scatters the cards on the table randomly and gathers them into a stack before the cards are shuffled by overhand shuffling. From the viewpoint of ergodic theory this fact is not surprising. The transformations T corresponding to overhand shuffling are interval exchange transformations preserving Lebesgue measure, and they satisfy T′(x) = 1 wherever the derivative is defined. (See Fig. 9.11.) Hence the Lyapunov exponent of T is zero, and it takes a long time to achieve a perfect shuffling.


Fig. 9.11. A continuous version of overhand shuﬄing (left) and the corresponding transformation (right)

9.7 Maple Programs

9.7.1 The Lyapunov exponent: the Gauss transformation

Show that the entropy of the Gauss transformation T is equal to π²/(6 ln 2 ln b) where b > 1 is the logarithmic base in the definition of the Lyapunov exponent.
> T:=x->frac(1/x):
Take the derivative of T(x).
> diff(1/x,x):
Find log_b |T′(x)|.
> log[b](abs(%));
        −2 ln(|x|)/ln(b)
> LE:=x->-2*log[b](x):
Define the invariant probability density ρ(x).
> rho:=x->1/ln(2)*1/(1+x):
Compute ∫ log_b |T′(x)| ρ(x) dx.
> Lyapunov_exponent:=int(LE(x)*rho(x),x=0..1);
        Lyapunov_exponent := π²/(6 ln(b) ln(2))

Choose the logarithmic base b = 10.
> b:=10:
> evalf(Lyapunov_exponent);
        1.030640835
Apply the Birkhoff Ergodic Theorem.
> seed[0]:=evalf(Pi-3):
> SampleSize:=10000:
> for i from 1 to SampleSize do
>   seed[i]:=T(seed[i-1]):
> od:
> add(LE(seed[i]),i=1..SampleSize)/SampleSize;
        1.032996182
The two values obtained from the Birkhoff Ergodic Theorem and the definition of the Lyapunov exponent are very close.

9.7.2 Hn,k: a local version for the Gauss transformation

Check the convergence of Hn,k for the Gauss transformation at a fixed point.
> with(plots):
> T:=x->frac(1.0/x):
> Digits:=400:
> k:=350:
> L:=300:
Choose two neighboring points seed1[0] and seed2[0].
> seed1[0]:=evalf(frac(Pi)):
> seed2[0]:=seed1[0] + 10^(-k):

for n from 1 to L do seed1[n]:=T(seed1[n-1]): od:

for n from 1 to L do seed2[n]:=T(seed2[n-1]): od: > for n from 1 to L do > H[n]:=log[10](abs(seed1[n]-seed2[n]))/n + k/n: > od: > Digits:=10: > entropy:= Pi^2/6/ln(2)/ln(10): > g1:=listplot([seq([n,H[n]],n=1..L)],labels=["n"," "]): > g2:=plot(entropy,x=0..L): > display(g1,g2); See Fig. 9.1. >


9.7.3 Hn,k : a global version for the Gauss transformation Simulate the global version of Hn,k along an orbit {T j x0 : 1 ≤ j ≤ 1000} for the Gauss transformation. > with(plots): > T:=x->frac(1.0/x): > entropy:=Pi^2/6/ln(2)/ln(10): > Digits:=400: > k:=300: > n:=200: > SampleSize:=200: > Length:=SampleSize + n: > seed[0]:=evalf(Pi-3.0): > for i from 1 to Length do seed[i]:=T(seed[i-1]): od: for s from 1 to SampleSize do seed2[1]:=seed[s]+10^(-k): for i from 2 to n do seed2[i]:=T(seed2[i-1]): od: H_nk[s]:=log10(abs(seed[s+n-1]-seed2[n])) / n + k/n: od: > Digits:=10: > g1:=pointplot([seq([seed[s],H_nk[s]],s=1..SampleSize)]): > g2:=plot(entropy,x=0..1): > display(g1,g2); See Fig. 9.2. > > > > >

9.7.4 The divergence speed: a local version for the Gauss transformation Check the convergence of n/Vn for the Gauss transformation at a point. > with(plots): > Digits:=500: > T:=x->frac(1.0/x): > entropy:= Pi^2/6/ln(2)/ln(10): > seed[0]:=evalf(Pi-3.0): > for i from 1 to 500 do seed[i]:=T(seed[i-1]): od: Find the divergence speed. > for k from 1 to 200 do > X0:=evalf(seed[0]+10.0^(-k)): > Xn:=T(X0): > for j from 1 while abs(Xn-seed[j]) < 1/10 do > Xn:=T(Xn): od: > V[k]:=j; > od:


> Digits:=10:
> fig1:=listplot([seq([n,n/V[n]],n=2..100)]):
> fig2:=plot(entropy,x=0..100,labels=["n"," "]):
> display(fig1,fig2);
See Fig. 9.3.

9.7.5 The divergence speed: a global version for the Gauss transformation Simulate a global version of n/Vn for the Gauss transformation. > with(plots): > n:=100: > Digits:=2*n: > T:=x->frac(1.0/x): > SampleSize:=1000: > entropy:=evalf(Pi^2/(6*ln(2)*ln(10)),10): > Length:= SampleSize + ceil(2*n/entropy); > >

Length := 1195 seed[0]:=evalf(Pi-3.0): for i from 1 to Length do seed[i]:=T(seed[i-1]): od:

for s from 1 to SampleSize do seed1[s]:=seed[s]-10^(-n): Xn:=T(seed1[s]): for j from 1 while abs(Xn-seed[j+s]) < 1/10 do Xn:=T(Xn): od: V_n[s]:=j: od: > Digits:=10: > g1:=listplot([seq([evalf(seed[s],10),n/V_n[s]],s=1.. SampleSize)]): > g2:=plot(entropy,x=0..1): > display(g1,g2); See Fig. 9.4 > > > > > > >

9.7.6 Number of correct partial quotients: validity of the Maple algorithm of continued fractions

Find the number of correct partial quotients of xk = (−k + √(k² + 4))/2 in numerical experiments using Maple. See Table 9.3.
> with(numtheory):
We need a Maple package for continued fractions.
> cfrac(sqrt(3)-1,5,'quotients');
        [0, 1, 2, 1, 2, 1, ...]


> cfrac(sqrt(3)-1,5);
        1/(1 + 1/(2 + 1/(1 + 1/(2 + 1/(1 + 1/(2 + ···))))))
What is the difference among the following three commands? The second method does not work, and we receive an error message.
> convert(sqrt(3.)-1,confrac);
        [0, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 3, 2, 8]
> convert(sqrt(3)-1,confrac);

Error, (in convert/confrac) expecting a 3rd argument of type {name, name = algebraic} > >

> Digits:=15:
> convert(sqrt(3.)-1,confrac);
        [0, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 7]
The preceding results depend on the number of significant digits. For a related experiment see Maple Program 3.9.8. Find the fixed points of the Gauss transformation.
> solve(1/x=k+x,x);
        −(1/2) k + (1/2) √(k² + 4),  −(1/2) k − (1/2) √(k² + 4)
> Digits:=100:
> for k from 10 to 100 by 10 do
>   x[k]:=-1/2*k+1/2*sqrt(k^2+4.):
> od:
Count the number of correct partial quotients. All the partial quotients should be identical by the definition of xk.
> for k from 10 to 100 by 10 do
>   convert(x[k],confrac):
> od;
        [0, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 1, 2, 5]
        [0, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 18, 10]
        [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 11, 2]
        ...


The command ‘nops’ counts the number of elements in a given sequence. > for k from 10 to 100 by 10 do > nops(convert(x[k],confrac)): > od; 53 40 36 .. . For k = 10 the length of the sequence given by Maple is 53. The ﬁrst element 0 means that the integer part of x[10] is zero. Observe that the number of incorrect partial quotients 9, 1, 2, 5 at the end of the sequence is four. Hence the number of correct partial quotients is equal to Nk = 53 − 1 − 4 = 48. For k = 20 and k = 30 we have Nk = 37 and Nk = 33 correct partial quotients, respectively. > Digits:=10: > DD:=100: Calculate lower bounds. > for k from 10 to 100 by 10 do > L[k]:=(DD + log10(2.) + log10(min(abs(x[k]-1/k), abs(x[k]-1/(k+1)))))/(-2*log10(x[k])): > od; L10 := 48.43895528 L20 := 37.01517232 L30 := 32.44061585 .. . Calculate upper bounds. > for k from 10 to 100 by 10 do > U[k]:=(DD + log10(2.) + log10(max(abs(x[k]-1/k), abs(x[k]-1/(k+1)))))/(-2*log10(x[k]))+1: > od; U10 := 49.89580130 U20 := 38.49850368 U30 := 33.93082056 .. . 9.7.7 Speed of approximation by convergents: the Bernoulli transformation We study the speed of approximation by convergents for the Bernoulli transformation T deﬁned on the unit interval. > with(plots): > p:=0.7:


> >

T:= x->piecewise(0 seed[0]:=evalf(Pi-3): Generate a sequence of partial quotients. > for n from 1 to Length do > if seed[n-1] < p then > beta[n]:=1/p: a[n]:=0: > else > beta[n]:=1/(1-p): a[n]:=p/(1-p): > fi: > seed[n]:=T(seed[n-1]): > od: In the following prod[n] is the product of β1 , . . . , βn . > prod[0]:=1: > for n from 1 to Length do > prod[n]:=prod[n-1]*beta[n]: > od: Find the nth convergent xn . > x[0]:=0: > for n from 1 to Length do > x[n]:=x[n-1]+a[n]/prod[n]: > c[n]:=-log10(abs(x[n]-seed[0]))/n: > od: Draw the graph. > Digits:=10: > g1:=listplot([seq([n,c[n]],n=2..Length)]): > g2:=plot(entropy,x=0..Length,labels=["n",""]): > display(g1,g2); See Fig. 9.9.

References

[Aa] J. Aaronson, An Introduction to Infinite Ergodic Theory, American Mathematical Society, Providence, 1997.
[AG] M. Adams and V. Guillemin, Measure Theory and Probability, Birkhäuser, Boston, 1996.
[Ad] R.L. Adler, Geodesic flows, interval maps, and symbolic dynamics, Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, edited by T. Bedford, M. Keane and C. Series, 93–121, Oxford University Press, Oxford, 1991.
[AdF] R.L. Adler and L. Flatto, Geodesic flows, interval maps, and symbolic dynamics, Bulletin of the American Mathematical Society 25 (1991), 229–334.
[AdMc] R.L. Adler and M.H. McAndrew, The entropy of Chebyshev polynomials, Transactions of the American Mathematical Society 121 (1966), 236–241.
[Ah1] Y. Ahn, On compact group extension of Bernoulli shifts, Bulletin of the Australian Mathematical Society 61 (2000), 277–288.
[Ah2] Y. Ahn, Generalized Gauss transformations, Applied Mathematics and Computation 142 (2003), 113–122.
[ASY] K.T. Alligood, T.D. Sauer and J.A. Yorke, Chaos – An Introduction to Dynamical Systems, Springer-Verlag, New York, 1996.
[AB] J.A. Anderson and J.M. Bell, Number Theory with Applications, Prentice-Hall, Upper Saddle River, 1996.
[AH] J. Arndt and C. Haenel, Pi – Unleashed, 2nd ed., Springer-Verlag, Berlin, 2001.
[Ar1] V.I. Arnold, Small denominators I: Mappings of the circumference onto itself, American Mathematical Society Translations Series 2 46 (1965), 213–284. (Originally published in Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya 25 (1961), 21–86.)
[Ar2] V.I. Arnold, Geometric Methods in the Theory of Ordinary Differential Equations, 2nd ed., Springer-Verlag, New York, 1988.
[ArA] V.I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, Benjamin, New York, 1968.
[At] P. Atkins, Galileo's Finger: The Ten Great Ideas of Science, Oxford University Press, Oxford, 2003.
[BBC] D.H. Bailey, J.M. Borwein and R.E. Crandall, On the Khintchine constant, Mathematics of Computation 66 (1997), 417–431.

[Bal] V. Baladi, Positive Transfer Operators and Decay of Correlations, World Scientific, Singapore, 2000.
[BaP] L. Barreira and Ya. Pesin, Lyapunov Exponents and Smooth Ergodic Theory, American Mathematical Society, Providence, 2002.
[BaS1] L. Barreira and B. Saussol, Hausdorff dimension of measures via Poincaré recurrence, Communications in Mathematical Physics 219 (2001), 443–463.
[BaS2] L. Barreira and B. Saussol, Product structure of Poincaré recurrence, Ergodic Theory and Dynamical Systems 22 (2002), 33–61.
[BeC] M. Benedicks and L. Carleson, The dynamics of the Hénon map, Annals of Mathematics 133 (1991), 73–169.
[BeY] M. Benedicks and L.-S. Young, Sinai–Bowen–Ruelle measures for certain Hénon maps, Inventiones Mathematicae 112 (1993), 541–576.
[Benn] D.J. Bennett, Randomness, Harvard University Press, Cambridge, 1998.
[BCG] E. Berlekamp, T.M. Cover, R.G. Gallager, S.W. Golomb, J.L. Massey and A.J. Viterbi, Claude Elwood Shannon (1916–2001), Notices of the American Mathematical Society 49 (2002), 8–16.
[Bi] P. Billingsley, Ergodic Theory and Information, Wiley, New York, 1965.
[Bir] G.D. Birkhoff, Proof of the ergodic theorem, Proceedings of the National Academy of Sciences USA 17 (1931), 656–660.
[Bos] M.D. Boshernitzan, Quantitative recurrence results, Inventiones Mathematicae 113 (1993), 617–631.
[Bow] R. Bowen, Invariant measures for Markov maps of the interval, Communications in Mathematical Physics 69 (1979), 1–17.
[BoG] A. Boyarsky and P. Góra, Laws of Chaos: Invariant Measures and Dynamical Systems in One Dimension, Birkhäuser, Boston, 1997.
[Brei] L. Breiman, The individual ergodic theorem of information theory, Annals of Mathematical Statistics 28 (1957), 809–811. Correction, ibid. 31 (1960), 809–810.
[Brem] P. Brémaud, An Introduction to Probabilistic Modeling, Springer-Verlag, New York, 1987.
[BS] M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, Cambridge, 2002.
[BLvS] H. Bruin, S. Luzzatto and S. van Strien, Decay of correlations in one-dimensional dynamics, Annales Scientifiques de l'École Normale Supérieure 36 (2003), 621–646.
[Ca] L. Carleson, Two remarks on the basic theorems of information theory, Mathematica Scandinavica 6 (1958), 175–180.
[Chi] B.V. Chirikov, A universal instability of many-dimensional oscillator systems, Physics Reports 52 (1979), 263–379.
[C1] G.H. Choe, Generalized continued fractions, Applied Mathematics and Computation 109 (2000), 287–299.
[C2] G.H. Choe, Recurrence of transformations with absolutely continuous invariant measures, Applied Mathematics and Computation 129 (2002), 501–516.
[C3] G.H. Choe, A universal law of logarithm of the recurrence time, Nonlinearity 16 (2003), 883–896.
[CHN] G.H. Choe, T. Hamachi and H. Nakada, Mod 2 normal numbers and skew products, Studia Mathematica 165 (2004), 53–60.

[CKc] G.H. Choe and C. Kim, The Khintchine constants for generalized continued fractions, Applied Mathematics and Computation 144 (2003), 397–411.
[CKd1] G.H. Choe and D.H. Kim, Average convergence rate of the first return time, Colloquium Mathematicum 84/85 (2000), 159–171.
[CKd2] G.H. Choe and D.H. Kim, The first return time test of pseudorandom numbers, Journal of Computational and Applied Mathematics 143 (2002), 263–274.
[CS] G.H. Choe and B.K. Seo, Recurrence speed of multiples of an irrational number, Proceedings of the Japan Academy Series A, Mathematical Sciences 77 (2001), 134–137.
[Chu] K.L. Chung, A note on the ergodic theorem of information theory, Annals of Mathematical Statistics 32 (1961), 612–614.
[Ci] Z. Ciesielski, On the functional equation f(t) = g(t) − g(2t), Proceedings of the American Mathematical Society 13 (1962), 388–393.
[CFS] I.P. Cornfeld, S.V. Fomin and Ya.G. Sinai, Ergodic Theory, Springer-Verlag, New York, 1982.
[DF] K. Dajani and A. Fieldsteel, Equipartition of interval partitions and an application to number theory, Proceedings of the American Mathematical Society 129 (2001), 3453–3460.
[DK] K. Dajani and C. Kraaikamp, Ergodic Theory of Numbers, Mathematical Association of America, Washington, D.C., 2002.
[Den] M. Denker, The central limit theorem for dynamical systems, 33–62, Dynamical Systems and Ergodic Theory, Banach Center Publications 23, PWN, Warsaw, 1989.
[Dev] R.L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd ed., Addison-Wesley, Redwood City, 1989.
[Di] P. Diaconis, Group Representations in Probability and Statistics, Institute of Mathematical Statistics, Hayward, 1988.
[DH] F. Diacu and P. Holmes, Celestial Encounters: The Origins of Chaos and Stability, Princeton University Press, Princeton, 1996.
[Dr] A. Drozdek, Elements of Data Compression, Brooks/Cole, Pacific Grove, 2002.
[Du] R.M. Dudley, Real Analysis and Probability, Wadsworth & Brooks/Cole, Pacific Grove, 1988.
[Ec] R. Eckhardt, Stan Ulam, John von Neumann, and the Monte Carlo method, From Cardinals to Chaos, edited by N.G. Cooper, 131–137, Cambridge University Press, Cambridge, 1989.
[Ed] G.A. Edgar, Measure, Topology, and Fractal Geometry, Springer-Verlag, New York, 1990.
[Eg] H.G. Eggleston, The fractional dimension of a set defined by decimal properties, Quarterly Journal of Mathematics Oxford 20 (1949), 31–36.
[EHI] S. Eigen, A. Hajian and Y. Ito, Ergodic measure preserving transformations of finite type, Tokyo Journal of Mathematics 11 (1988), 459–470.
[El] S. Elaydi, Discrete Chaos, Chapman & Hall/CRC, Boca Raton, 1999.
[Fa] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, John Wiley & Sons, Chichester, 1990.
[FW] D.-J. Feng and J. Wu, The Hausdorff dimension of recurrent sets in symbolic spaces, Nonlinearity 14 (2001), 81–85.

[Fl] K. Florek, Une remarque sur la répartition des nombres mξ mod 1, Colloquium Mathematicum 2 (1951), 323–324.
[Fo1] G.B. Folland, A Course in Abstract Harmonic Analysis, CRC Press, Boca Raton, 1995.
[Fo2] G.B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed., John Wiley & Sons, New York, 1999.
[For] R. Fortet, Sur une suite également répartie, Studia Mathematica 9 (1940), 54–70.
[Ga] F.R. Gantmacher, Matrix Theory, Chelsea, New York, 1959.
[HI] A. Hajian and Y. Ito, Transformations that do not accept a finite invariant measure, Bulletin of the American Mathematical Society 84 (1978), 417–427.
[Half] M. Halfant, Analytic properties of Rényi's invariant density, Israel Journal of Mathematics 27 (1977), 1–20.
[Halt] J. Halton, The distribution of the sequence {nξ} (n = 0, 1, . . .), Proceedings of Cambridge Philosophical Society 61 (1965), 665–670.
[Ham] T. Hamachi, On a Bernoulli shift with nonidentical factor measures, Ergodic Theory and Dynamical Systems 1 (1981), 273–283.
[HHJ] D. Hankerson, G.A. Harris and P.D. Johnson, Jr., Introduction to Information Theory and Data Compression, CRC Press, Boca Raton, 1998.
[HK] B. Hasselblatt and A. Katok, A First Course in Dynamics: with a Panorama of Recent Developments, Cambridge University Press, Cambridge, 2003.
[Hel] P. Hellekalek, Good random number generators are (not so) easy to find, Mathematics and Computers in Simulation 46 (1998), 485–505.
[Hels] H. Helson, Harmonic Analysis, 2nd ed., Hindustan Book Agency and Helson Publishing Co., 1995.
[Hen] M. Hénon, A two-dimensional mapping with a strange attractor, Communications in Mathematical Physics 50 (1976), 69–77.
[HoK] F. Hofbauer and G. Keller, Ergodic properties of invariant measures for piecewise monotone transformations, Mathematische Zeitschrift 180 (1982), 119–140.
[Hu] H. Hu, Decay of correlations for piecewise smooth maps with indifferent fixed points, Ergodic Theory and Dynamical Systems 24 (2004), 495–524.
[Hur] A. Hurwitz, Über die Entwicklungen komplexer Größen in Kettenbrüche, Acta Mathematica 11 (1888), 187–200.
[Ib] I.A. Ibragimov, Some limit theorems for stationary processes, Theory of Probability and its Applications 7 (1962), 361–392.
[Io] A. Ionescu Tulcea, Contribution to information theory for abstract alphabets, Arkiv för Matematik 4 (1960), 235–247.
[ILM] A. Iwanik, M. Lemańczyk and C. Mauduit, Piecewise absolutely continuous cocycles over irrational rotations, Journal of the London Mathematical Society 59 (1999), 171–187.
[ILR] A. Iwanik, M. Lemańczyk and D. Rudolph, Absolutely continuous cocycles over irrational rotations, Israel Journal of Mathematics 83 (1993), 73–95.
[Ka1] M. Kac, On the distribution of values of sums of the type f(2^k t), Annals of Mathematics 47 (1946), 33–49.
[Ka2] M. Kac, On the notion of recurrence in discrete stochastic processes, Bulletin of the American Mathematical Society 53 (1947), 1002–1010.

[Ka3] M. Kac, Statistical Independence in Probability, Analysis and Number Theory, Mathematical Association of America, Washington, D.C., 1959.
[Ka4] M. Kac, Enigmas of Chance, University of California Press, Berkeley, 1987.
[KP] S. Kakutani and K. Petersen, The speed of convergence in the Ergodic Theorem, Monatshefte für Mathematik 91 (1981), 11–18.
[KH] A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Cambridge, 1995.
[Kel] G. Keller, Equilibrium States in Ergodic Theory, Cambridge University Press, Cambridge, 1998.
[KemS] J.G. Kemeny and J.L. Snell, Finite Markov Chains, Springer-Verlag, New York, 1976.
[Kh] A.Ya. Khinchin, Continued Fractions, 3rd ed., Dover Publications, New York, 1997.
[KK] C. Kim and D.H. Kim, On the law of logarithm of the recurrence time, Discrete and Continuous Dynamical Systems 10 (2004), 581–587.
[KS] D.H. Kim and B.K. Seo, The waiting time for irrational rotations, Nonlinearity 16 (2003), 1861–1868.
[KLe] Y.-O. Kim and J. Lee, On the Gibbs measures of commuting one-sided subshifts of finite type, Osaka Journal of Mathematics 37 (2000), 175–183.
[Kin] J. King, Three problems in search of a measure, American Mathematical Monthly 101 (1998), 609–628.
[King] J.F.C. Kingman, The ergodic theory of subadditive stochastic processes, Journal of the Royal Statistical Society Series B 30 (1968), 499–510.
[Kit] B. Kitchens, Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts, Springer-Verlag, Berlin, 1998.
[Kn] D. Knuth, The Art of Computer Programming, Vol. 2, 3rd ed., Addison-Wesley, Reading, 1997.
[Ko] I. Kontoyiannis, Asymptotic recurrence and waiting times for stationary processes, Journal of Theoretical Probability 11 (1998), 795–811.
[Kra] L. Kraft, A Device for Quantizing, Grouping and Coding Amplitude Modulated Pulses, Master's Thesis, Massachusetts Institute of Technology, 1949.
[Kre] U. Krengel, Ergodic Theorems, Walter de Gruyter, Berlin, 1985.
[Krey] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, New York, 1978.
[KuN] L. Kuipers and H. Niederreiter, Uniform Distribution of Sequences, John Wiley & Sons, New York, 1974.
[Kuc] R. Kuc, The Digital Information Age, PWS Publishing, Pacific Grove, 1999.
[La] S. Lang, Introduction to Diophantine Approximations, Addison-Wesley, Reading, 1966.
[Lan] G. Langdon, An introduction to arithmetic coding, IBM Journal of Research and Development 28 (1984), 135–149.
[LY] A. Lasota and J. Yorke, On the existence of invariant measures for piecewise monotone transformations, Transactions of the American Mathematical Society 186 (1973), 481–488.
[Lay] D.C. Lay, Linear Algebra and Its Applications, 2nd ed., Addison-Wesley, Reading, 1996.

[Le] P. L'Ecuyer, Efficient and portable combined random number generators, Communications of the Association for Computing Machinery 31 (1988), 742–749.
[LZ] A. Lempel and J. Ziv, Compression of individual sequences via variable rate coding, IEEE Transactions on Information Theory 24 (1978), 530–536.
[LV] P. Liardet and D. Volný, Sums of continuous and differentiable functions in dynamical systems, Israel Journal of Mathematics 98 (1997), 29–60.
[LM] D. Lind and B. Marcus, An Introduction to Symbolic Dynamics, Cambridge University Press, Cambridge, 1995.
[Liv] C. Liverani, Central limit theorem for deterministic systems, International Conference on Dynamical Systems, edited by F. Ledrappier, J. Lewowicz and S. Newhouse, 56–75, Pitman Research Notes in Mathematics Series 362, Longman, Harlow, 1996.
[LSV] C. Liverani, B. Saussol and S. Vaienti, A probabilistic approach to intermittency, Ergodic Theory and Dynamical Systems 19 (1999), 671–685.
[Lo] G. Lohöfer, On the distribution of recurrence times in nonlinear systems, Letters in Mathematical Physics 16 (1988), 139–143.
[LoM] G. Lohöfer and D.H. Mayer, On the gap problem for the sequence ||mβ||, Letters in Mathematical Physics 16 (1988), 145–149.
[Luc] R.W. Lucky, Silicon Dreams, St. Martin's Press, New York, 1991.
[LuV] S. Luzzatto and M. Viana, Parameter exclusions in Hénon-like systems, Russian Mathematical Surveys 58 (2003), 1053–1092.
[Ly] S. Lynch, Dynamical Systems with Applications Using Maple, Birkhäuser, Boston, 2001.
[Mack] G. Mackey, Von Neumann and the early days of ergodic theory, The Legacy of John von Neumann, Proceedings of Symposia in Pure Mathematics 50, 25–38, American Mathematical Society, Providence, 1990.
[Man] R. Mañé, Ergodic Theory and Differentiable Dynamics, Springer-Verlag, Berlin, 1987.
[MMY] S. Marmi, P. Moussa and J.-C. Yoccoz, On the cohomological equation for interval exchange maps, Comptes Rendus Mathématique Académie des Sciences Paris 336 (2003), 941–948.
[Mar] G. Marsaglia, The mathematics of random number generators, The Unreasonable Effectiveness of Number Theory, Proceedings of Symposia in Applied Mathematics 46, 73–90, American Mathematical Society, Providence, 1992.
[MN] M. Matsumoto and T. Nishimura, Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator, ACM Transactions on Modeling and Computer Simulation 8 (1998), 3–30.
[Mau] U. Maurer, A universal statistical test for random bit generators, Journal of Cryptology 5 (1992), 89–105.
[Mc1] B. McMillan, The basic theorems of information theory, Annals of Mathematical Statistics 24 (1953), 196–219.
[Mc2] B. McMillan, Two inequalities implied by unique decipherability, IRE Transactions on Information Theory 2 (1956), 115–116.
[MS] W. de Melo and S. van Strien, One-Dimensional Dynamics, Springer-Verlag, Berlin, 1993.

[Met] N. Metropolis, The beginning of the Monte Carlo method, From Cardinals to Chaos, edited by N.G. Cooper, 125–130, Cambridge University Press, Cambridge, 1989.
[Mu] J.R. Munkres, Topology: A First Course, Prentice-Hall, Englewood Cliffs, 1975.
[NIT] H. Nakada, Sh. Ito and S. Tanaka, On the invariant measure for the transformations associated with some real continued fractions, Keio Engineering Report 30 (1977), 159–175.
[Na] H. Nakada, Metrical theory for a class of continued fraction transformations and their natural extensions, Tokyo Journal of Mathematics 4 (1981), 399–426.
[ND] B. Noble and J.W. Daniel, Applied Linear Algebra, 3rd ed., Prentice-Hall, Englewood Cliffs, 1988.
[Ni] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, Society for Industrial and Applied Mathematics, Vermont, 1992.
[Os] V.I. Oseledec, A multiplicative ergodic theorem: Lyapunov characteristic numbers for dynamical systems, Transactions of the Moscow Mathematical Society 19 (1968), 197–231.
[Ot] E. Ott, Chaos in Dynamical Systems, 2nd ed., Cambridge University Press, Cambridge, 2002.
[OW] D. Ornstein and B. Weiss, Entropy and data compression schemes, IEEE Transactions on Information Theory 39 (1993), 78–83.
[Pkoh] K. Koh Park, On directional entropy functions, Israel Journal of Mathematics 113 (1999), 243–267.
[PM] S. Park and K. Miller, Random number generators: good ones are hard to find, Communications of the Association for Computing Machinery 31 (1988), 1192–1201.
[PC] T.S. Parker and L.O. Chua, Practical Numerical Algorithms for Chaotic Systems, Springer-Verlag, New York, 1989.
[Pa] W. Parry, On the β-expansions of real numbers, Acta Mathematica Academiae Scientiarum Hungaricae 11 (1960), 401–416.
[Pas] R. Pasco, Source Coding Algorithms for Fast Data Compression, Ph.D. Thesis, Stanford University, 1976.
[Pem] R. Pemantle, Randomization time for the overhand shuffle, Journal of Theoretical Probability 2 (1989), 37–49.
[Pet] K. Petersen, Ergodic Theory, Cambridge University Press, Cambridge, 1983.
[Pson] I. Peterson, The Jungle of Randomness: A Mathematical Safari, John Wiley & Sons, New York, 1998.
[Pia] G. Pianigiani, First return map and invariant measures, Israel Journal of Mathematics 35 (1979), 32–48.
[Pit] B. Pitskel, Poisson limit law for Markov chains, Ergodic Theory and Dynamical Systems 11 (1991), 501–513.
[Po] M. Pollicott, Lectures on Ergodic Theory and Pesin Theory on Compact Manifolds, Cambridge University Press, Cambridge, 1993.
[PoY] M. Pollicott and M. Yuri, Dynamical Systems and Ergodic Theory, Cambridge University Press, Cambridge, 1998.
[PTV] W. Press, S. Teukolsky, W. Vetterling and B. Flannery, Numerical Recipes in C, 2nd ed., Cambridge University Press, Cambridge, 1992.

[Re] A. Rényi, Representations for real numbers and their ergodic properties, Acta Mathematica Academiae Scientiarum Hungaricae 8 (1957), 477–493.
[Rie1] G.J. Rieger, Ein Gauss–Kusmin–Lévy-Satz für Kettenbrüche nach nächsten Ganzen, Manuscripta Mathematica 24 (1978), 437–448.
[Rie2] G.J. Rieger, Mischung und Ergodizität bei Kettenbrüchen nach nächsten Ganzen, Journal für die Reine und Angewandte Mathematik 310 (1979), 171–181.
[Ris] J. Rissanen, Generalized Kraft inequality and arithmetic coding, IBM Journal of Research and Development 20 (1976), 198–203.
[RS] A.M. Rockett and P. Szüsz, Continued Fractions, World Scientific, Singapore, 1992.
[Rom] S. Roman, Introduction to Coding and Information Theory, Springer-Verlag, New York, 1996.
[R-E] J. Rousseau-Egele, Un théorème de la limite locale pour une classe de transformations dilatantes et monotones par morceaux, Annals of Probability 11 (1983), 772–788.
[Roy] H.L. Royden, Real Analysis, 3rd ed., Macmillan, New York, 1989.
[Rud1] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, New York, 1976.
[Rud2] W. Rudin, Real and Complex Analysis, 3rd ed., McGraw-Hill, New York, 1986.
[Ru1] D. Ruelle, Ergodic theory of differentiable dynamical systems, Publications Mathématiques de l'Institut des Hautes Études Scientifiques 50 (1979), 27–58.
[Ru2] D. Ruelle, Chaotic Evolution and Strange Attractors, Cambridge University Press, Cambridge, 1989.
[Ru3] D. Ruelle, Chance and Chaos, Princeton University Press, Princeton, 1991.
[R-N] C. Ryll-Nardzewski, On the ergodic theorems II. Ergodic theory of continued fractions, Studia Mathematica 12 (1951), 74–79.
[STV] B. Saussol, S. Troubetzkoy and S. Vaienti, Recurrence, dimensions and Lyapunov exponents, Journal of Statistical Physics 106 (2002), 623–634.
[Sch] J. Schmeling, Dimension theory of smooth dynamical systems, Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, edited by B. Fiedler, 109–129, Springer-Verlag, Berlin, 2001.
[Schw] S. Schwartzman, The Words of Mathematics, Mathematical Association of America, Washington, D.C., 1994.
[Sh1] C. Shannon, A mathematical theory of communication, Bell System Technical Journal 27 (1948), 379–423 and 623–656; reprinted in A Mathematical Theory of Communication by C. Shannon and W. Weaver, University of Illinois Press, 1963.
[Sh2] C. Shannon, Claude Elwood Shannon: Collected Papers, edited by N.J.A. Sloane and A.D. Wyner, IEEE Press, New York, 1993.
[Shi] P.C. Shields, The Ergodic Theory of Discrete Sample Paths, American Mathematical Society, Providence, 1996.
[Sim] M. Simonnet, Measures and Probabilities, Springer-Verlag, New York, 1996.
[Si1] Ya. Sinai, Introduction to Ergodic Theory, Princeton University Press, Princeton, 1976.
[Si2] Ya. Sinai, Topics in Ergodic Theory, Princeton University Press, Princeton, 1994.

[Sl1] N.B. Slater, The distribution of integers N for which {Nθ} < Φ, Proceedings of Cambridge Philosophical Society 46 (1950), 525–534.
[Sl2] N.B. Slater, Gaps and steps for the sequence nθ mod 1, Proceedings of Cambridge Philosophical Society 63 (1967), 1115–1123.
[Spr] J.C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, Oxford, 2003.
[Su] F. Su, Convergence of random walks on the circle generated by an irrational rotation, Transactions of the American Mathematical Society 350 (1998), 3717–3741.
[To] A. Torchinsky, Real Variables, Addison-Wesley, Redwood City, 1988.
[Tu] W. Tucker, Computing accurate Poincaré maps, Physica D 171 (2002), 127–137.
[Ve] W.A. Veech, Strict ergodicity in zero dimensional dynamical systems and Kronecker–Weyl theorem mod 2, Transactions of the American Mathematical Society 140 (1969), 1–33.
[Vo] D. Volný, On limit theorems and category for dynamical systems, Yokohama Mathematical Journal 38 (1990), 29–35.
[vN] J. von Neumann, Proof of the quasi-ergodic hypothesis, Proceedings of the National Academy of Sciences USA 18 (1932), 70–82.
[Wa1] P. Walters, An Introduction to Ergodic Theory, 2nd ed., Springer-Verlag, New York, 1982.
[Wa2] P. Walters, A dynamical proof of the multiplicative ergodic theorem, Transactions of the American Mathematical Society 335 (1993), 245–257.
[We] R.B. Wells, Applied Coding and Information Theory for Engineers, Prentice-Hall, Upper Saddle River, 1999.
[WZ] A.D. Wyner and J. Ziv, Some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression, IEEE Transactions on Information Theory 35 (1989), 1250–1258.
[W1] A.J. Wyner, Strong Matching Theorems and Applications to Data Compression and Statistics, Ph.D. Thesis, Stanford University, 1993.
[W2] A.J. Wyner, More on recurrence and waiting times, Annals of Applied Probability 9 (1999), 780–796.
[Yoc] J.-C. Yoccoz, Il n'y a pas de contre-exemple de Denjoy analytique, Comptes Rendus des Séances de l'Académie des Sciences, Série I, Mathématique 298 (1984), 141–144.
[Y1] L.-S. Young, Ergodic theory of differentiable dynamical systems, Real and Complex Dynamical Systems, edited by B. Branner and P. Hjorth, Proceedings of the NATO Advanced Science Institute Series C: Mathematical and Physical Sciences 464, 293–336, Kluwer Academic Publishers, Dordrecht, 1995.
[Y2] L.-S. Young, Developments in chaotic dynamics, Notices of the American Mathematical Society 45 (1998), 1318–1328.
[Y3] L.-S. Young, Recurrence times and rates of mixing, Israel Journal of Mathematics 110 (1999), 153–188.
[Yu] A.A. Yushkevich, On limit theorems connected with the concept of entropy of Markov chains, Uspehi Matematiceskih Nauk 8 (1953), 177–180.

Index

σ-algebra, 4 abelian group, 14 absolutely continuous measure, 6, 28, 103, 395 additive coboundary, 140, 142 adjacency matrix, 378, 379 almost every point, 6 almost everywhere, 6 alphabet, 252, 378, 417, 418, 420–422 aperiodic matrix, 12, 168, 372, 375 arithmetic coding, 417, 428 Arnold cat mapping, 55, 79 Arnold tongue, 195, 205 Arnold, V.I., 55 Asymptotic Equipartition Property, 251, 252, 364, 422, 427, 428, 436 attractor, 105, 109 automorphism, 15 baker’s transformation, 53, 66, 70, 78, 173 basin of attraction, 105, 109, 131, 349 Bernoulli measure, 6, 59, 69, 81, 100, 170, 178, 223, 224, 229, 265, 402 Bernoulli shift, 59, 61, 66, 70, 81, 222, 242, 249, 252, 253, 261, 265, 272, 366, 367, 369, 370, 379, 381–383, 385, 392, 429, 433, 436 Bernoulli transformation, 69, 172, 272, 276, 288, 297 beta transformation, 50, 69, 76, 144, 179, 200, 221, 231, 244, 260, 272, 287, 397

bijective, 1–4, 61, 62, 218, 240, 310 bin, 43, 115, 209 binary expansion, 6, 60, 62, 66, 99, 100, 141, 170, 171, 223, 429 binary sequence, 5, 28, 60, 81, 84, 170, 238, 247, 364, 381, 392, 399, 420, 425, 427, 429, 436 binomial coeﬃcient, 169, 262, 368 binomial distribution, 262 Birkhoﬀ Ergodic Theorem, 81, 83, 89, 93, 99–101, 104, 105, 118, 145, 164, 170, 175, 177, 212, 216, 223, 251, 252, 260, 270, 288, 293, 399, 420, 428 Birkhoﬀ, G.D., 85 bisection method, 428 bit, 252 block, 252, 260 Borel σ-algebra, 5, 391 Borel measurable, 5 Borel measure, 5, 61 Borel Normal Number Theorem, 99 Borel–Cantelli Lemma, 10, 287 Boshernitzan, M.D., 393 bounded linear map, 9 bounded variation, 11, 139 Bowen, R., 106, 312 Breiman, L., 250 Brouwer Fixed Point Theorem, 4, 12 Cantor function, 193, 202 Cantor measure, 202 Cantor set, 3, 107, 109, 193, 202 Cantor–Lebesgue function, 193

cardinality, 2 Carleson, L., 250 Cauchy sequence, 3 Cauchy–Schwarz inequality, 9, 11 cdf, 24, 43, 77, 192, 202, 210, 263, 268 Central Limit Theorem, 25, 139, 148, 151, 253 Ces` aro sum, 2, 134 change of variables, 23 character, 16, 212 characteristic function, 112 characteristic function of a subset, 2 Chebyshev inequality, 9, 24 Chebyshev polynomial, 49, 73 Chebyshev, P.L., 74 Chung, K.L., 250 closed set, 3 closure, 3, 87, 156, 197 coboundary, 140, 142, 215 cobounding function, 215, 227, 229 cobweb plot, 95 code, 252 codeword, 252, 253, 418 coding map, 66, 67, 241, 246, 247, 258, 418 compact, 3 complete measure, 5 complete metric space, 3 compression rate, 419 compression ratio, 419, 429, 434 conditional measure, 6, 85, 161, 176 continued fraction, 20, 41, 93, 101, 113, 121, 222, 280, 289, 295, 413 backward, 123 Hurwitz, A., 57 Nakada, H., 57 Nakada, H., Ito, Sh., Tanaka, S., 58 Rieger, G.J., 57 continuous function, 3 continuous measure, 6, 94 convergence in a metric space, 3 convergence in measure, 8, 24, 251, 265 convergence in probability, 22 convergent, 20, 286, 289, 298, 413 convex combination, 8 convex function, 8 convolution, 290 correlation, 143–145, 152, 153 countable, 2

counting measure, 5 cumulative density function, 24, 43, 77, 190, 192, 202, 210, 263, 268 cumulative probability, 24, 267, 436 cylinder set, 5, 59, 83, 94, 100, 242, 256, 270, 363, 369, 420 cylindrical coordinate system, 126 data compression, 253, 417 decay of correlation, 143 decimal expansion, 287 decoding, 418 Denjoy Theorem, 192 density function, 6 density zero, 135 Diaconis, P., 290 diameter, 391 dictionary, 426, 427, 435 diﬀeomorphism, 310 diﬀerentiable manifold, 310 dimension of a measure, 400, 402 Dirac measure, 5, 105, 188 discrete measure, 6 divergence speed, 278, 321, 322 double precision, 125, 205 dual group, 16, 214 dyadic rational, 428 edge, 378 ellipsoid, 301 encoding, 418 endomorphism, 14, 19, 53, 87, 137 entropy, 176, 239, 241–244, 271, 292, 320, 364 entropy of a partition, 238 equivalence relation, 60 equivalent metrics, 3 equivalent norms, 3, 169, 378 ergodic component, 56, 85, 218, 219 ergodic transformation, 85–87, 89, 94, 100, 156, 181 error function, 43 Euler constant, 31, 369 Euler theorem, 28 event, 22, 59 eventually expansive, 155 expectation, 22 exponential distribution, 23 extension, 62, 172, 179

factor, 62, 240 factor map, 62, 172, 240, 241 Fatou's lemma, 7, 92, 368 fax code, 417 Feigenbaum, M., 108 first return time, 160, 233, 363, 364, 369–371, 394, 400 first return transformation, 160 fixed point, 4, 194, 296, 333, 345 floating point arithmetic, 29, 31, 122, 125–128, 199, 205, 263, 278 flowchart, 70 Fourier coefficient, 16–18 Fourier series, 16, 228, 235 Fourier transform, 16 Fubini's theorem, 10, 213, 214 Gauss transformation, 51, 112, 122, 148, 156, 274, 280, 289, 292–296, 397 Gauss, C.F., 51 Gaussian elimination, 19, 88 Gelfand's problem, 99 general linear group, 303 generalized Gauss transformation, 282, 397 Gibbs' inequality, 421 golden ratio, 50 graph of a topological shift, 378 group, 14 Haar measure, 15, 16, 53, 87, 137, 212, 213 hardware floating point arithmetic, 125–128, 199, 205 Hausdorff dimension, 109, 380, 392 Hausdorff measure, 392 Hénon attractor, 109, 130, 131, 312, 337, 345, 348, 349, 402, 411 Hénon mapping, 108, 312, 317, 336, 345, 348, 349, 353, 402 Hilbert space, 11, 74 histogram, 60, 98, 116, 170, 179 Hölder continuity, 319 Hölder inequality, 9 homeomorphism, 3, 4, 192 homomorphism, 14 Huffman coding, 423 Huffman tree, 423 hyperbolic point, 333, 355


independent random variables, 22, 24, 142 independent subsets, 10 indicator function, 2, 112, 229 inﬁmum, 2 information theory, 237, 420 inner product, 11 integrable function, 7 integral matrix, 19 interval exchange transformation, 292 invariant measure, 47–52, 54–56, 59, 120, 123 inversive congruential generator, 27 Ionescu Tulcea, A., 250 irrational translation (mod 1), 48, 86, 115, 138, 142, 151, 153, 176, 198, 221, 229, 239, 272, 393, 409, 414 irreducible matrix, 12–14, 167, 375, 378 irreducible polynomial, 89 isomorphic measure spaces, 61, 62, 240 isomorphic transformations, 62–64, 66, 68–70, 75, 170, 171, 218, 241, 242 isomorphism, 14, 61 Jensen’s inequality, 9, 366, 368, 380 join of partitions, 239 Jordan canonical form, 168, 300, 303 Kac’s lemma, 163, 176, 177, 223, 233, 364, 365, 376, 379, 383, 394, 396 Khinchin constant, 102, 121, 123 Khinchin, A., 24 Kolmogorov, A.N., 237 Kolmogorov–Sinai entropy, 239 Kraft’s inequality, 418 Kronecker–Weyl Theorem, 97 Lagrange multiplier method, 238, 242 lambda transformation, 55, 56, 68 Law of Large Numbers, 24, 99, 133, 252 Lebesgue Dominated Convergence Theorem, 7, 94, 166, 374 Lebesgue integral, 7 Lebesgue measure, 5, 15, 48, 49, 52–56, 61, 63, 68, 69, 76, 86, 87, 104, 105, 109, 120, 172, 180, 195 Lempel–Ziv coding, 364, 425 lexicographic ordering, 428 limsup of a sequence of sets, 2

linearized Gauss transformation, 158, 173, 274 linearized logarithmic transformation, 173, 275 locally compact, 15 logistic transformation, 49, 71, 73, 116, 147, 175, 221, 241, 254, 272, 275, 397, 409 loop, 37, 70, 202, 378, 415 Lyapunov exponent, 104, 120, 170, 270–276, 278, 280, 284, 286, 292, 305, 307, 311, 312, 314, 316, 319–322, 325, 328, 396, 409 Markov measure, 59, 69, 82, 171, 399 Markov shift, 167, 168, 242, 253, 255, 258, 371, 375, 376, 388 Maximal Ergodic Theorem, 89 McMillan, B., 250 mean, 22, 139 Mean Ergodic Theorem, 96 measurable function, 5 measurable mapping, 47 measurable partition, 6 measurable space, 5 measurable subset, 4 measure, 4 measure preserving transformation, 47 Mercer’s theorem, 17 Mersenne prime, 28 metric space, 2, 61, 377 minimal homeomorphism, 188, 190 Minkowski’s inequality, 9 mixing, 133–139, 143, 144, 153, 168, 243, 253, 371 mod 2 normal number, 212 modem, 417 modulo measure zero sets, 7 Monotone Convergence Theorem, 7 Monte Carlo method, 26, 45, 46 multiplication by 2 (mod 1), 48, 68, 87, 141, 144, 174, 221, 241, 272, 396 Multiplicative Ergodic Theorem, 305, 307, 334 mutually singular measures, 6, 100 name of a partition, 67 Newton’s method, 52 nonnegative matrix, 11

nonnegative vector, 11 norm, 2, 169, 181, 302, 378 normal distribution, 24, 148, 253, 263, 265, 371 normal number, 99 open set, 3 orientation, 183 Ornstein, D., 242, 364 Ornstein–Weiss formula, 364, 381, 427 orthogonal complement, 11 orthogonal matrix, 300 orthogonal polynomial, 73 orthogonal projection, 17 orthonormal basis, 16 Oseledec, V.I., 302, 305, 307 outer measure, 391 outlier, 367, 410 Parseval’s identity, 17, 146 parsing, 426 partial quotient, 20, 41, 93, 101, 113, 121, 222, 280, 295, 298, 403, 413 partition, 2, 6, 66, 67, 239, 245, 258, 286 pdf, 6, 21, 43, 118, 150, 180, 201, 263 period, 12, 18, 340 periodic point, 108, 340 Perron–Frobenius eigenvalue, 13, 378 Perron–Frobenius eigenvector, 13, 38, 243, 244, 255, 370, 386, 388 Perron–Frobenius Theorem, 13 Pesin, Ya., 319 piecewise linear transformation, 49, 396 Poincar´e Recurrence Theorem, 159, 160 Poincar´e section, 198 pointillism, 54, 55, 67, 78, 106, 114, 127, 162, 164, 180, 195, 207, 226, 231, 234, 246–248, 256 positive matrix, 11 positive vector, 11 preﬁx code, 418 probability density function, 6, 21, 24, 42, 118, 150, 180, 201, 263 probability measure, 5 probability vector, 14 product measure, 10, 213, 241 pseudorandom number, 25 quasirandom points, 25

random number, 25, 170, 363 random variable, 21 random walk, 222, 232, 233 recurrence error, 393 Riemann–Lebesgue lemma, 17 rotation number, 187, 194 Ruelle, D., 106, 302, 312, 320 saddle point, 333, 334, 353 sample space, 21 seed, 25, 28, 70 semi-conjugacy, 62, 64 Shannon, C., 237, 250 Shannon–McMillan–Breiman Theorem, 248, 258, 260, 286, 367 shift transformation, 59 shuffling, 291, 292 significant digits, 29, 41, 81, 83, 104, 113, 122, 170, 176, 254, 276–281, 296, 324, 326, 396, 403, 413 simple function, 5, 48 Sinai, Ya., 106, 237, 312 Sinai–Ruelle–Bowen measure, 106, 312 singular measure, 339, 398, 399 singular value, 299, 300, 310, 323 skew product transformation, 213, 216, 219, 223, 226, 231, 233, 234 solenoid, 106, 125, 126, 311, 316 source block, 252 Source Coding Theorem, 421 spectral measure, 96 spectral theorem, 96 speed of periodic approximation, 408 stable manifold, 317, 334, 345, 358, 361 staircase function, 193, 204 standard deviation, 22, 140, 253, 262 standard mapping, 110, 313, 318, 339, 341, 355, 357, 358, 361 stochastic matrix, 13, 37, 59, 69, 165 strong mixing, 133 Subadditive Ergodic Theorem, 304 support of a measure, 6, 106, 189, 402 supremum, 2 symbolic dynamics, 377 symmetric difference, 1 symmetric group, 290 symmetry of measures, 190, 200


topological conjugacy, 62, 190, 192, 195, 202, 206, 207, 275 topological entropy, 377, 378, 389 topological group, 15 topological shift space, 377, 389 toral automorphism, 19, 54, 79, 87, 88, 120, 127, 132, 150, 316, 320, 331, 336, 401 torus, 18, 54, 106, 126, 313 total variation, 11 transition probability, 13, 59, 171, 242 tree, 418, 419, 423 truncation error, 104 type of an irrational number, 403, 412 typical block, 252, 366, 379, 422, 427, 436, 437 typical point, 82, 83, 104, 170, 229, 399 typical sequence, 81, 82, 252, 364, 428, 429, 434, 436 typical subset, 251, 364, 427 uncountable, 2, 4 uniform convergence, 4, 147 uniform distribution, 23, 89, 97 uniquely ergodic, 197 unitary operator, 11 unstable manifold, 317, 334, 348, 349, 353, 357, 361 variance, 139, 253, 371 variational inequality, 380 variational principle, 377, 380 Veech, W.A., 222 vertex, 378 von Neumann, J., 96 weak convergence, 106, 110 weak mixing, 133, 134, 138, 139, 153 Weiss, B., 364 winding number, 226 Wyner, A.D., 364 XOR, 27 Yoccoz, J.-C., 192 Young, L.-S., 140 Ziv, J., 364

10 The Lyapunov Exponent: Multidimensional Case

How can we measure the randomness of a differentiable mapping T on a multidimensional space? First, find its linear approximation at a point x, given by the Jacobian matrix DT(x), and observe that a sphere of small radius centered at x is approximately mapped to an ellipsoid. By applying DT(T^k x), k ≥ 1, repeatedly along the orbit of x, we observe that the image of the given sphere is still approximated by an ellipsoid and that the lengths of its semi-axes change exponentially. The averages of the exponential growth rates are called the Lyapunov exponents, and they are obtained from the singular values of D(T^n) as n → ∞. The largest Lyapunov exponent is equal to the speed of divergence of two nearby points. For a comprehensive survey consult [Y1].
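The book's numerical experiments are written in Maple; as an added illustration, here is a minimal Python/NumPy sketch (not one of the book's programs) of the idea just described, applied to the Hénon mapping T(x, y) = (1 − ax² + y, bx) with the classical parameters a = 1.4, b = 0.3. Instead of forming D(T^n) explicitly, it applies the Jacobian DT(T^k x) to a single tangent vector and renormalizes at each step, so the average of log ||DT v|| estimates the largest Lyapunov exponent. The starting point (0.1, 0.1) and the iteration counts are arbitrary choices.

```python
import numpy as np

def henon(p, a=1.4, b=0.3):
    """One step of the Henon mapping T(x, y) = (1 - a x^2 + y, b x)."""
    x, y = p
    return np.array([1.0 - a * x * x + y, b * x])

def jacobian(p, a=1.4, b=0.3):
    """Jacobian matrix DT at the point p."""
    x, _ = p
    return np.array([[-2.0 * a * x, 1.0],
                     [b,            0.0]])

def largest_lyapunov(n=20000):
    p = np.array([0.1, 0.1])
    for _ in range(1000):            # discard a transient so p lies near the attractor
        p = henon(p)
    v = np.array([1.0, 0.0])         # tangent vector
    total = 0.0
    for _ in range(n):
        v = jacobian(p) @ v          # apply DT(T^k x) along the orbit
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm                    # renormalize to avoid overflow
        p = henon(p)
    return total / n                 # average exponential growth rate
```

For these parameter values the estimate settles near 0.42, the commonly quoted value for the largest Lyapunov exponent of the Hénon attractor.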

10.1 Singular Values of a Matrix

In this section we give a brief introduction to singular values. In this chapter we need facts only for square matrices, but we consider general matrices because the argument for rectangular matrices is the same. Let A be an n × m real matrix. Its transpose is denoted by A^T. The m × m symmetric matrix A^T A has real eigenvalues, and the corresponding eigenvectors are also real and pairwise orthogonal. If λ ∈ R satisfies A^T A v = λv for some v ∈ R^m, ||v|| = 1, then

||Av||^2 = (Av, Av) = (A^T Av, v) = (λv, v) = λ ,

and so λ ≥ 0. Let

λ1 ≤ · · · ≤ λm

be the eigenvalues of A^T A with corresponding eigenvectors vi, ||vi|| = 1. Put

σi = √λi ,  1 ≤ i ≤ m .

The σi are called the singular values of A. Note that σi = ||Avi|| and σ1 ≤ · · · ≤ σm.

Consider the orthonormal basis {v1, . . . , vm} for R^m. For i ≠ j we have

(Avi, Avj) = (A^T Avi, vj) = (λi vi, vj) = λi (vi, vj) = 0 .

Hence Av1, . . . , Avm are pairwise orthogonal and the rank of A is equal to the number of nonzero singular values. Let

ui = (1/σi) Avi = Avi / ||Avi||  for σi > 0 .

If σi = 0, then choose ui, ||ui|| = 1, arbitrarily so that {u1, . . . , un} is an orthonormal basis for R^n. Then Avi = σi ui for every i. Define an n × n orthogonal matrix U = (u1, . . . , un) using the ui as columns, an m × m orthogonal matrix V = (v1, . . . , vm) using the vi as columns, and an n × m matrix

Σ = [ D 0
      0 0 ]

where D is the diagonal matrix of the positive singular values. Then A = U Σ V^T, which is called a singular value decomposition. See Maple Program 10.7.1.

Theorem 10.1. (i) If two real n × m matrices A and B are orthogonally equivalent, i.e., there exist orthogonal matrices P and Q such that A = P B Q, then A and B have the same set of singular values.
(ii) Let A be a real square matrix. The product of the singular values of A is equal to |det A|, which is equal to the product of the absolute values of the eigenvalues of A.
(iii) If A is a real symmetric matrix, then the absolute values of the eigenvalues of A are the singular values of A.

Proof. (i) Since A^T A = (Q^T B^T P^T) P B Q = Q^T (B^T B) Q, the two matrices A^T A and B^T B have the same set of eigenvalues.
(ii) Let A be an m × m matrix. In the singular value decomposition A = U Σ V^T we have |det U| = |det V| = 1. Hence |det A| = |det Σ| = σ1 · · · σm, where σ1, . . . , σm are the singular values of A. Let A = P J P^{-1} be the Jordan decomposition of A, where the diagonal of the Jordan form J consists of the eigenvalues of A. Then det A = det J, and det A is the product of the eigenvalues.
(iii) Suppose that Av = µv, v ∈ R^m, ||v|| = 1, µ ∈ R. Then A^T Av = A^2 v = µ^2 v, and |µ| is a singular value of A.
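These facts can be checked numerically. The following Python/NumPy sketch is an editorial illustration (not the book's Maple Program 10.7.1); the sample matrix is arbitrary. It verifies that the singular values are the square roots of the eigenvalues of A^T A, and that their product equals |det A| as in Theorem 10.1(ii).

```python
import numpy as np

# Singular values are the square roots of the eigenvalues of A^T A.
A = np.array([[1.0, 2.0], [3.0, 0.0]])

lam = np.linalg.eigvalsh(A.T @ A)          # real eigenvalues, ascending order
sigma_from_eig = np.sqrt(lam)

# Compare with the singular values from the SVD A = U Sigma V^T.
U, s, Vt = np.linalg.svd(A)                # s is returned in descending order
assert np.allclose(sorted(s), sigma_from_eig)

# |det A| equals the product of the singular values (Theorem 10.1(ii)).
assert np.isclose(abs(np.linalg.det(A)), np.prod(s))
```

`np.linalg.svd` also returns the orthogonal factors U and V^T of the decomposition constructed above.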

Here is a geometrical interpretation of singular values. Let A be an n × m real matrix and let vi, ui be given as in the introduction. Then

S = { Σ_{i=1}^{m} xi vi : Σ_{i=1}^{m} xi^2 = 1 }

is the unit sphere in R^m, and

A(S) = {Ax : x ∈ S} = { Σ_{i=1}^{m} xi σi ui : Σ_{i=1}^{m} xi^2 = 1 } .

Note that A(S) is an ellipsoid with its semi-axes given by the Avi of length σi, 1 ≤ i ≤ r.

Example 10.2. (i) Consider

A = [ 1 2
      3 0 ] .

The images of two orthogonal eigenvectors of A^T A are also orthogonal. See Fig. 10.1 and Maple Program 10.7.2.

Fig. 10.1. A matrix sends a circle (left) onto an ellipse (right)

(ii) If one of the singular values is zero, then A(S) may include the interior of an ellipsoid. Consider

A = [ 2  0  0
      0 −1  0 ] .

Then

A^T A = [ 4 0 0
          0 1 0
          0 0 0 ] .

Here we observe that σ1 = 0, σ2 = 1, σ3 = 2 with v1 = (0, 0, 1), v2 = (0, 1, 0), v3 = (1, 0, 0). Hence A(S) is the two-dimensional region (x/2)^2 + y^2 ≤ 1.

Remark 10.3. To define singular values we may use AA^T instead of A^T A. Indeed, if A^T A vi = λi vi, ||vi|| = 1, σi = √λi = ||Avi|| > 0, then we let ui = (1/σi) Avi. Then

A^T ui = (1/σi) A^T A vi = (1/σi) λi vi = σi vi .

Hence

A A^T ui = σi A vi = σi^2 ui = λi ui .

Thus we obtain the same set of nonzero singular values. When A^T A has 0 as one of its eigenvalues, it is not necessarily so with AA^T. For example, if A = (1 0), then

A^T A = [ 1 0
          0 0 ]

has two eigenvalues 1 and 0, but AA^T = (1) has only one eigenvalue 1. To define singular values of a complex matrix A we use A^H A, where A^H is the complex conjugate of A^T. For more information see [Lay],[ND].

Definition 10.4. Let A : R^m → R^m be a linear map. Then the operator norm of A is defined by

||A||_op = sup_{v ≠ 0} ||Av|| / ||v|| = sup_{||v||=1} ||Av|| .

It is easy to check that the operator norm is a norm on the space of all m × m matrices. Note that ||Av|| ≤ ||A||_op ||v|| for every v. If A is invertible, then ||w|| ≤ ||A||_op ||A^{-1} w|| for every w, and ||A^{-1}||_op ≥ (||A||_op)^{-1}.

Theorem 10.5. Suppose that the linear map A in Definition 10.4 is given by an m × m real matrix, again denoted by A. Then ||A||_op is the maximum of the singular values of the matrix A.

Proof. Let λ1 ≤ · · · ≤ λm be the eigenvalues of A^T A with corresponding eigenvectors vi, ||vi|| = 1, 1 ≤ i ≤ m. Since the vi are orthonormal, for v = Σ xi vi with Σ xi^2 = 1 we have

||Av||^2 = (Av, Av) = (A^T Av, v) = (Σ λi xi vi, Σ xi vi) = Σ λi xi^2 .

Hence sup_{||v||=1} ||Av||^2 = λm and ||A||_op = √λm.
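As a quick numerical check of Definition 10.4 and Theorem 10.5 (a hedged Python/NumPy sketch, not from the book, with an arbitrary sample matrix): ||Av|| over random unit vectors never exceeds the largest singular value and comes arbitrarily close to it.

```python
import numpy as np

# The operator norm ||A||_op = sup_{||v||=1} ||Av|| equals the largest
# singular value (Theorem 10.5).  Sampling random unit vectors gives a
# lower bound that approaches it.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [0.0, 3.0]])

sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value
assert np.isclose(np.linalg.norm(A, 2), sigma_max)  # matrix 2-norm = operator norm

v = rng.normal(size=(2, 10000))
v /= np.linalg.norm(v, axis=0)                      # 10000 random unit vectors
vals = np.linalg.norm(A @ v, axis=0)
assert vals.max() <= sigma_max + 1e-12              # never exceeds sigma_max
assert vals.max() > 0.99 * sigma_max                # and gets close to it
```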

10.2 Oseledec's Multiplicative Ergodic Theorem

We prove the Multiplicative Ergodic Theorem, which was first obtained by V.I. Oseledec [Os] and later generalized by many mathematicians. The presentation given in this section follows the idea due to D. Ruelle [Ru1]. Consult [Kre],[Po],[Y1] for more information. See [BaP],[Wa2] for different proofs.

Let GL(m, R) denote the general linear group, which is the set of all m × m real invertible matrices. Take A ∈ GL(m, R). If an eigenvalue of A is of the form λ = a + bi, b ≠ 0, a, b ∈ R, then λ̄ = a − bi is also an eigenvalue of A. Thus, if M is a k × k Jordan block corresponding to λ, then there is another k × k Jordan block M̄ corresponding to λ̄. We then have the 2k × 2k combined block M ⊕ M̄. To find the real Jordan canonical form of A, we replace M ⊕ M̄ by the 2k × 2k real matrix obtained by the following rules: each diagonal entry λ = a + bi of M is replaced by

[ a −b
  b  a ]

and an off-diagonal entry 1 of M is replaced by

[ 1 0
  0 1 ] .

Using the real Jordan canonical form of A, we see that R^m is decomposed into a direct sum R^m = E1 ⊕ · · · ⊕ Er such that (i) A Ei = Ei, (ii) for every v ∈ Ei, v ≠ 0,

lim_{n→∞} (1/n) log ||A^{±n} v|| = ±Li ,

where Li is the logarithm of the absolute value of the eigenvalue corresponding to Ei, and (iii) Σ_i Li dim Ei = log |det A|.

Next, consider a random sequence of matrices A(1), A(2), A(3), . . . and study the growth rate of ||A(n) · · · A(1) v|| for v ∈ R^m. More specifically, we have a measure preserving transformation T on a probability space X and a measurable mapping A : X → GL(m, R). (The measurability condition on A means that the coefficients of A are real-valued measurable functions on X.) In this case, we consider a sequence A(x), A(T x), A(T^2 x), . . . for each x ∈ X. Suppose that A satisfies

∫ log^+ ||A|| dµ < ∞  and  ∫ log^+ ||A^{-1}|| dµ < ∞ ,

where log^+ a = max{log a, 0}. For n ≥ 1 we write

An(x) = A(T^{n−1} x) · · · A(x) .

If T is invertible, for n ≥ 1 we write

A_{−n}(x) = A(T^{−n} x)^{−1} · · · A(T^{−1} x)^{−1} .

Definition 10.6. For x ∈ X and v ∈ R^m, define

L̄^+(x, v) = lim sup_{n→∞} (1/n) log ||An(x) v|| ,

L̲^+(x, v) = lim inf_{n→∞} (1/n) log ||An(x) v|| .

If L̄^+ = L̲^+, then we simply write L^+. Define L̄^−, L̲^− and L^− similarly using A_{−n}(x); for example,

L̄^−(x, v) = lim sup_{n→∞} (1/n) log ||A_{−n}(x) v|| .

The following fact, due to J.F.C. Kingman [King], will be used in proving the Multiplicative Ergodic Theorem. We write f^+ = max{f, 0} for a measurable function f : X → R.

Fact 10.7 (The Subadditive Ergodic Theorem) Let T be a measure preserving transformation on a probability space (X, µ). For n ≥ 1, suppose that φn : X → R is a measurable function such that ∫ (φ1)^+ dµ < ∞ and

φ_{ℓ+n}(x) ≤ φ_ℓ(x) + φ_n(T^ℓ x)

almost everywhere for every ℓ, n ≥ 1. Then there exists a measurable function φ* : X → R ∪ {−∞} such that (i) φ* ∘ T = φ*, (ii) ∫ (φ*)^+ dµ < ∞, and (iii) (1/n) φn → φ* almost everywhere as n → ∞.

Note that, if T is ergodic, then φ* is constant. If the inequality in the second condition for φn is replaced by equality, we have the Birkhoff Ergodic Theorem with φn = Σ_{i=0}^{n−1} f ∘ T^i for a measurable function f.

Remark 10.8. Let W be a k-dimensional subspace of R^m and let A be an invertible matrix identified with the linear map defined by v ↦ Av, v ∈ R^m. Since A is invertible, AW = {Aw : w ∈ W} is also k-dimensional. After identifying both W and AW with R^k using linear isometries, we regard the restriction A|W as a linear map from R^k into itself. If {w1, . . . , wk} ⊂ W is an orthonormal set, then |det (A|W)| is the k-dimensional volume of the parallelepiped spanned by the Awi.

Lemma 10.9. Define

φn^{(k)}(x) = log sup_{dim W = k} |det (An(x)|W)| ,

where the supremum is taken over all k-dimensional subspaces W of R^m. Then {φn^{(k)}}_{n≥1} is subadditive for k < m, and it is additive for k = m.

Proof. Since A_{ℓ+n}(x) = An(T^ℓ x) A_ℓ(x), we have

|det(A_{ℓ+n}(x)|W)| = |det(An(T^ℓ x)|A_ℓ(x)W)| |det(A_ℓ(x)|W)| ,

and hence

sup_W |det(A_{ℓ+n}(x)|W)| ≤ sup_W |det(An(T^ℓ x)|W)| sup_W |det(A_ℓ(x)|W)| ,

where the suprema are taken over all k-dimensional subspaces W, so that we have subadditivity. If k = m, then W = A_ℓ(x)W = R^m, so that we have additivity.

Note that if A(x) has singular values σ1(x) ≤ · · · ≤ σm(x), then

sup_{dim W = k} |det (A(x)|W)| = σ_{m−k+1}(x) × · · · × σm(x) ≤ [σm(x)]^k ,

and hence

φ1^{(k)}(x) = log sup_{dim W = k} |det (A(x)|W)| ≤ k log σm(x) .

Theorem 10.5 implies that

(φ1^{(k)})^+ ≤ k log^+ σm = k log^+ ||A|| ,

which is integrable, and hence the Subadditive Ergodic Theorem holds true. Similarly for A^{−1}: define φn^{(k)} using A_{−n}(x), and then use log^+(σ1^{−1}) = log^+ ||A^{−1}|| to show the integrability condition.

Theorem 10.10 (The Multiplicative Ergodic Theorem for not necessarily invertible cases). Let T be a measure preserving transformation on a probability space X. Suppose that T is not necessarily invertible. For almost every x ∈ X there exist numbers L1(x) < · · · < L_{r(x)}(x) and a sequence of subspaces

{0} = V0(x) ⊊ V1(x) ⊊ · · · ⊊ V_{r(x)}(x) = R^m

such that
(i) L^+(x, v) = Li(x) for v ∈ Vi(x) \ V_{i−1}(x),
(ii) A(x) Vi(x) = Vi(T x),
(iii) r(x), Li(x) and Vi(x) are measurable,
(iv) r(x), Li(x) and dim Vi(x) are constant along orbits of T, and
(v) lim_{n→∞} (1/n) log |det An(x)| = Σ_{i=1}^{r(x)} Li(x) (dim Vi(x) − dim V_{i−1}(x)).

It is possible that L1 = −∞. From (iv) we observe that, if T is ergodic, then r, Li and dim Vi are constant. The numbers Li(x) are called the Lyapunov exponents, and the differences dim Vi(x) − dim V_{i−1}(x), 1 ≤ i ≤ r(x), the corresponding multiplicities. Note that the following is a disjoint union:

R^m = [V_{r(x)}(x) \ V_{r(x)−1}(x)] ∪ · · · ∪ [V2(x) \ V1(x)] ∪ V1(x) .

We give a sketch of the proof given by Ruelle [Ru1]. Sometimes we suppress the variable x in the notation for matrices and eigenvalues when there is no danger of confusion.

Proof. Let σ_{n,1}(x) ≤ · · · ≤ σ_{n,m}(x) be the singular values of An(x). Define φn^{(k)} as in Lemma 10.9. The supremum in the definition of φn^{(k)}(x) is attained when the subspace W is spanned by the eigenvectors of An(x)^T An(x) corresponding to the k largest eigenvalues [σ_{n,i}(x)]^2, m − k + 1 ≤ i ≤ m. In this case,

φn^{(k)}(x) = Σ_{i=m−k+1}^{m} log σ_{n,i}(x) .

Using the Subadditive Ergodic Theorem we see that (1/n) Σ_{i=m−k+1}^{m} log σ_{n,i}(x) converges to a limit for every k. Hence (1/n) log σ_{n,i}(x) converges to a limit for every i. Note that the limit is constant along orbits of T. Since An^T An is symmetric, it has eigenvalues (σ_{n,1})^2 ≤ · · · ≤ (σ_{n,m})^2, and there exists an orthogonal matrix Cn such that Cn^{−1} An^T An Cn = Dn, where Dn = diag((σ_{n,1})^2, . . . , (σ_{n,m})^2) is a diagonal matrix. Then

(An^T An)^{1/2n} = Cn (Dn)^{1/2n} Cn^{−1} .

Now (Dn)^{1/2n} = diag((σ_{n,1})^{1/n}, . . . , (σ_{n,m})^{1/n}) converges as n → ∞ since (1/n) log σ_{n,i} converges. It can then be shown that (An(x)^T An(x))^{1/2n} converges. See [Ru1] for the details. Put

Σ(x) = lim_{n→∞} (An(x)^T An(x))^{1/2n} .

Note that Σ(x) is symmetric. Let σ1(x) < · · · < σ_{r(x)}(x) be the distinct eigenvalues of Σ(x) with corresponding eigenspaces Ui(x). If n is large, then Σ ≈ Cn (Dn)^{1/2n} Cn^{−1}, and so there exists a sequence 0 = J0(x) < J1(x) < · · · < Jr(x) = m such that

log σi = lim_{n→∞} (1/n) log σ_{n,j}  for J_{i−1} < j ≤ Ji .

Put Li(x) = log σi(x). Then Li(T x) = Li(x). Since the eigenvalues of Σ(x) are T-invariant, the number of distinct eigenvalues is T-invariant. Hence r(x) = r(T x). This partially proves (iv). Put V0 = {0} and Vi(x) = U1(x) + · · · + Ui(x), 1 ≤ i ≤ r(x). To prove (i), take v ∈ Vi(x) \ V_{i−1}(x). Then v has a nonzero orthogonal projection u onto Uj(x) for some j, J_{i−1} < j ≤ Ji, and so

||An(x) v|| ≈ ||An(x) u|| ≈ σi(x)^n ||u||

for large n. Hence

lim_{n→∞} (1/n) log ||An(x) v|| = log σi(x) = Li(x) .

For (ii) take v ∈ Vi(x) \ V_{i−1}(x). Then

L^+(T x, A(x)v) = lim_{n→∞} (1/n) log ||An(T x) A(x) v|| = lim_{n→∞} (1/n) log ||A_{n+1}(x) v|| = L^+(x, v) = Li(x) = Li(T x) ,

and A(x)v ∈ Vi(T x). (In general it is not true that A(x) Ui(x) = Ui(T x).) In (iii) all the objects are obtained by taking countable set operations, and they are measurable. Technical details from measure theory are skipped. For the remaining part of (iv), note that A(x) is invertible. Then (ii) implies that dim Vi(T x) = dim A(x)Vi(x) = dim Vi(x), so that dim Vi is constant along orbits. To prove (v), note that

det Σ = lim_{n→∞} (det(An^T An))^{1/2n} = lim_{n→∞} |det An|^{1/n} .

Hence

lim_{n→∞} (1/n) log |det An(x)| = Σ_{i=1}^{r(x)} log σi(x) dim Ui(x) = Σ_{i=1}^{r(x)} Li(x) (dim Vi(x) − dim V_{i−1}(x)) .
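In practice the limits (1/n) log σ_{n,i} are not computed from (An^T An)^{1/2n} directly, since the entries of An overflow quickly. A standard alternative is the QR cocycle, accumulating the logarithms of the diagonal of R at each step. The Python/NumPy sketch below is an editorial illustration (not one of the book's Maple programs); for a constant matrix A the Lyapunov exponents are the logarithms of the absolute eigenvalues.

```python
import numpy as np

# QR cocycle: Q_{k+1} R_{k+1} = A Q_k.  The running averages of
# log|diag R_k| converge to the Lyapunov exponents in decreasing order.
A = np.array([[2.0, 1.0], [1.0, 1.0]])    # constant matrix, eigenvalues (3±sqrt(5))/2
Q = np.eye(2)
acc = np.zeros(2)
n = 400
for _ in range(n):
    Q, R = np.linalg.qr(A @ Q)
    acc += np.log(np.abs(np.diag(R)))
exps = np.sort(acc / n)

# For a constant matrix the exponents are log of the absolute eigenvalues.
expected = np.sort(np.log(np.abs(np.linalg.eigvals(A))))
assert np.allclose(exps, expected, atol=0.01)
```

The telescoping identity — the product of the r11 entries equals the norm of the first column of the accumulated matrix product — is why this avoids overflow while reproducing the singular-value growth rates.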

If T is invertible, we can deduce more.

Theorem 10.11 (The Multiplicative Ergodic Theorem for invertible cases). Let T be an invertible measure preserving transformation on a probability space X. Then for almost every x ∈ X there exist numbers L1(x) < · · · < L_{r(x)}(x) and a decomposition of R^m into

R^m = E1(x) ⊕ · · · ⊕ E_{r(x)}(x)

such that
(i) L^+(x, v) = −L^−(x, v) = Li(x) for 0 ≠ v ∈ Ei(x),
(ii) A(x) Ei(x) = Ei(T x),
(iii) r(x), Li(x) and dim Ei(x) are constant along orbits of T.

Note that L1 > −∞ since −L1 is the largest Lyapunov exponent for T^{−1}. (See the following proof.) From (iii) we see that, if T is ergodic, then r, Li and dim Ei are constant.

Proof. As in Theorem 10.10, the Vi(x), i ≥ 0, are invariant subspaces for the pair (T, A(x)). Consider (T^{−1}, A(T^{−1}x)^{−1}) in place of (T, A(x)). Then we have

{0} ⊊ V_{−r(x)}(x) ⊊ · · · ⊊ V_{−1}(x) = R^m

and the corresponding Lyapunov exponents

−L_{r(x)}(x) < · · · < −L1(x) .

Note that A(T^{−1}x)^{−1} V_{−i}(x) = V_{−i}(T^{−1}x), which is equivalent to V_{−i}(T y) = A(y) V_{−i}(y). First, we check how the Lyapunov exponents are obtained. Put

Σ(x) = lim_{n→∞} (An(x)^T An(x))^{1/2n}

and

Σ̃(x) = lim_{n→∞} (A_{−n}(x)^T A_{−n}(x))^{1/2n} .

Let σ_{n,1}(x) ≤ · · · ≤ σ_{n,m}(x) be the eigenvalues of (An(x)^T An(x))^{1/2n}, and let σ1(x) ≤ · · · ≤ σm(x) be the eigenvalues of Σ(x). Similarly, let σ̃_{n,1}(x) ≤ · · · ≤ σ̃_{n,m}(x) be the eigenvalues of (A_{−n}(x)^T A_{−n}(x))^{1/2n} and σ̃1(x) ≤ · · · ≤ σ̃m(x) the eigenvalues of Σ̃(x). We will show that σ̃j(x) = σ_{m−j+1}(x)^{−1} for every j. Choose ε > 0. Let

Gn = {x ∈ X : |σ_{n,j}(x)^{−1} − σj(x)^{−1}| < ε, 1 ≤ j ≤ m}

and

G̃n = {x ∈ X : |σ̃_{n,j}(x) − σ̃j(x)| < ε, 1 ≤ j ≤ m} .

Note that lim_{n→∞} µ(Gn) = 1 and lim_{n→∞} µ(G̃n) = 1. Since An(T^{−n}x)^{−1} = A_{−n}(x), we have

A_{−n}(x)^T A_{−n}(x) = (An(T^{−n}x)^{−1})^T An(T^{−n}x)^{−1} = [An(T^{−n}x) An(T^{−n}x)^T]^{−1} .

Hence

σ̃_{n,j}(x) = σ_{n,m−j+1}(T^{−n}x)^{−1} .

Take x ∈ G̃n ∩ T^n(Gn). Then

σ̃j(x) ≈ σ̃_{n,j}(x) = σ_{n,m−j+1}(T^{−n}x)^{−1} ≈ σ_{m−j+1}(T^{−n}x)^{−1} = σ_{m−j+1}(x)^{−1} ,

where ≈ indicates an approximation within ε bound and the last equality comes from the fact that σj is constant along orbits of T. Now we let n → ∞ to obtain |σ̃j(x) − σ_{m−j+1}(x)^{−1}| < ε for almost every x. Since ε is arbitrary, we have σ̃j(x) = σ_{m−j+1}(x)^{−1} for almost every x. Therefore

log σ̃j(x) = log(σ_{m−j+1}(x)^{−1}) = − log σ_{m−j+1}(x) .

Next, we claim that V_{i−1}(x) ∩ V_{−i}(x) = {0}. To see why, let S be the set of the points x where V_{i−1}(x) ∩ V_{−i}(x) ≠ {0}. Note that T^{−1}S ⊂ S since

V_{i−1}(T^{−1}x) ∩ V_{−i}(T^{−1}x) = A(T^{−1}x)^{−1} (V_{i−1}(x) ∩ V_{−i}(x)) ≠ {0} ,

and that T(S) ⊂ S since

V_{i−1}(T x) ∩ V_{−i}(T x) = A(x) (V_{i−1}(x) ∩ V_{−i}(x)) ≠ {0} .

For δ > 0 and 2 ≤ i ≤ r(x), consider the set Sn ⊂ S of the points x where

||An(x) u|| ≤ ||u|| e^{n(L_{i−1}(x)+δ)}  and  ||A_{−n}(x) u|| ≤ ||u|| e^{n(−Li(x)+δ)}

for 0 ≠ u ∈ V_{i−1}(x) ∩ V_{−i}(x). Put w = A_{−n}(x) u. Then w ∈ V_{i−1}(T^{−n}x) ∩ V_{−i}(T^{−n}x) since A(x)^{−1} Vj(x) = Vj(T^{−1}x) for every j. Since An(T^{−n}x) w = u, the second inequality becomes

||w|| ≤ ||An(T^{−n}x) w|| e^{n(−Li(x)+δ)}

and hence

||An(T^{−n}x) w|| ≥ ||w|| e^{n(Li(x)−δ)} .

If y ∈ T^{−n} Sn, then

||An(y) u|| ≥ ||u|| e^{n(Li(y)−δ)}

for u ∈ V_{i−1}(y) ∩ V_{−i}(y). Thus Li(x) − L_{i−1}(x) ≤ 2δ for x ∈ Sn ∩ T^{−n} Sn. Since µ(Sn ∩ T^{−n} Sn) → µ(S) as n → ∞, we have Li(x) − L_{i−1}(x) ≤ 2δ for almost every x ∈ S. Since δ is arbitrary, we obtain µ(S) = 0. Now since dim V_{i−1}(x) + dim V_{−i}(x) = m, we have V_{i−1}(x) ⊕ V_{−i}(x) = R^m. Define

Ei(x) = Vi(x) ∩ V_{−i}(x) ,  1 ≤ i ≤ r(x) .

Then

R^m = V_{−1}(x) ∩ [V1(x) ⊕ V_{−2}(x)] ∩ · · · ∩ [V_{r(x)−1}(x) ⊕ V_{−r(x)}(x)] ∩ V_{r(x)}(x) = E1(x) ⊕ · · · ⊕ E_{r(x)}(x) .

10.3 The Lyapunov Exponent of a Differentiable Mapping

In this section we consider the case when the matrices An in the Multiplicative Ergodic Theorem are given by the Jacobian matrices D(T^n), where T is a differentiable mapping on a compact differentiable manifold. Let Df = (∂fi/∂xj)_{i,j} be the Jacobian matrix of f. Since f(x0 + h) ≈ f(x0) + Df(x0) h for h ≈ 0, the behavior of f near x0 can be studied using Df(x0). For example, a volume element at x0 is magnified by the factor |det Df(x0)|. To investigate the growth rate along a given direction, consider a sphere Sr(x0) centered at x0 of a small radius r > 0. By the theory of singular values there exist m orthogonal vectors v1, . . . , vm, ||vi|| = r, such that the image of Sr(x0) under f is approximately an ellipsoid with semi-axes given by Df(x0)v1, . . . , Df(x0)vm.

Take an open set U ⊂ R^m. A function f = (f1, . . . , fn) : U → R^n is called a C^k function if every fi, 1 ≤ i ≤ n, has k continuous partial derivatives. An m-dimensional C^k differentiable manifold is a set X ⊂ R^s, for some s ≥ 1, with a local coordinate system at every x ∈ X satisfying compatibility conditions. More precisely, for x ∈ X there exists an open set U ⊂ R^s containing x and a one-to-one continuous mapping φ_U : U ∩ X → R^m satisfying the following condition: if V is another such open set containing x, then φ_V ∘ φ_U^{−1} is C^k on φ_U(U ∩ V ∩ X). The mapping φ_U is called a local coordinate system at x. A nonempty open subset U ⊂ R^m together with φ_U(x) = x is an m-dimensional differentiable manifold. A circle, a torus and a sphere are examples of compact differentiable manifolds. Let Y be an n-dimensional differentiable manifold. A mapping f : X → Y is said to be C^k if there exist local coordinate systems φ and ψ at every x and f(x), respectively, such that ψ ∘ f ∘ φ^{−1} is C^k as a mapping from an open subset of R^m into R^n. If k = ∞, then f is said to be smooth. From now on we assume that f is C^k for some k ≥ 1. If m = n and f is a bijection whose inverse is also C^k, then f is called a diffeomorphism.

Let T be a diffeomorphism of a compact differentiable manifold X and let µ be a T-invariant ergodic probability measure on X. Take A = DT in Oseledec's Multiplicative Ergodic Theorem. Since [D(T^{−1})(T x)] [DT(x)] = I, we have det[D(T^{−1})(T x)] det[DT(x)] = 1, and so det A(x) ≠ 0. Using the chain rule repeatedly we have for n ≥ 1

D(T^n)(x) = A(T^{n−1}x) · · · A(x) = An(x) .

Since D(T^{−1})(T x) = [DT(x)]^{−1}, we have for n ≥ 1

D(T^{−n})(x) = D(T^{−1})(T^{−n+1}x) · · · D(T^{−1})(T^{−1}x) D(T^{−1})(x)
            = [DT(T^{−n}x)]^{−1} · · · [DT(T^{−2}x)]^{−1} [DT(T^{−1}x)]^{−1} = A_{−n}(x) .

When we consider an m-dimensional torus, we regard it as a subset of R^m, and do all the computations modulo 1 for the sake of notational convenience.

Theorem 10.12. Let X be a compact differentiable manifold with dim X = m. (To simplify the notation we regard X as a subset of R^m.) Let T : X → X be a diffeomorphism and let µ be a T-invariant ergodic probability measure on X. Then at almost every x ∈ X there exist a decomposition of R^m into R^m = E1(x) ⊕ · · · ⊕ Er(x) and constants L1 < · · · < Lr such that
(i) the limits

Li = lim_{n→∞} (1/n) log ||D(T^n)(x) v|| = − lim_{n→∞} (1/n) log ||D(T^{−n})(x) v||

exist for 0 ≠ v ∈ Ei(x), 1 ≤ i ≤ r, and do not depend on v or x,
(ii) dim Ei(x) is constant, 1 ≤ i ≤ r, and
(iii) DT(x) Ei(x) = Ei(T x) for almost every x, 1 ≤ i ≤ r.

Proof. Take A(x) = DT(x) and apply Theorem 10.11.

In the language of the theory of differentiable manifolds we may state the conclusion in Theorem 10.12 as follows: there exist subspaces Ei(x) of the tangent space T_x X of X ('T' for tangent) at almost every x such that

T_x X = E1(x) ⊕ · · · ⊕ Er(x) .

For simulations of the Lyapunov exponents see Figs. 10.2, 10.3, 10.4 and Maple Programs 10.7.3, 10.7.4. For the numerical calculations in this chapter we use logarithms to base 10.

Example 10.13. (Solenoid) Let S^1 = {φ : 0 ≤ φ < 1} and D = {(u, v) ∈ R^2 : u^2 + v^2 ≤ 1}. Note that S^1 is a group under the addition mod 1. Consider the solid torus X = S^1 × D. For 0 < a < 1/2, consider the solenoid mapping T : X → X defined by

T(φ, u, v) = ( 2φ, au + (1/2) cos(2πφ), av + (1/2) sin(2πφ) ) .

See Ex. 3.21. A small ball in X is elongated mainly in the direction of φ and contracted in the directions of u, v. Hence the Lyapunov exponents are

L1 = log a < 0 ,  L2 = log 2  with  dim E1 = 2 ,  dim E2 = 1 .

See Fig. 10.2 and Maple Program 10.7.3.
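A hedged numerical sketch for this example, in Python/NumPy rather than the book's Maple Program 10.7.3, and using natural logarithms instead of the base-10 logarithms of the figures: the QR cocycle along an orbit recovers L1 = log a (with multiplicity 2) and L2 = log 2.

```python
import numpy as np

a = 0.3                      # any value with 0 < a < 1/2

def step(p):                 # the solenoid mapping, phi taken mod 1
    phi, u, v = p
    return np.array([(2.0 * phi) % 1.0,
                     a * u + 0.5 * np.cos(2.0 * np.pi * phi),
                     a * v + 0.5 * np.sin(2.0 * np.pi * phi)])

def jac(p):                  # Jacobian of T at p
    phi = p[0]
    return np.array([[2.0, 0.0, 0.0],
                     [-np.pi * np.sin(2.0 * np.pi * phi), a, 0.0],
                     [ np.pi * np.cos(2.0 * np.pi * phi), 0.0, a]])

# Caveat: floating-point doubling collapses phi to 0 after ~50 steps,
# but the diagonal of the Jacobian -- hence the exponents -- is unaffected.
x = np.array([0.371, 0.1, 0.1])
Q = np.eye(3)
acc = np.zeros(3)
n = 4000
for _ in range(n):
    Q, R = np.linalg.qr(jac(x) @ Q)
    acc += np.log(np.abs(np.diag(R)))
    x = step(x)
exps = np.sort(acc / n)      # ascending: [log a, log a, log 2]

assert np.allclose(exps, [np.log(a), np.log(a), np.log(2.0)], atol=0.01)
```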

Fig. 10.2. The Lyapunov exponents of the solenoid mapping are equal to L1 = log a, L2 = log 2. They are approximated by (log σn)/n, where the σn are the singular values of D(T^n), 50 ≤ n ≤ 1000

Example 10.14. Consider the Hénon mapping

T(x, y) = (1 − ax^2 + y, bx) .

Since

DT = [ ∗ 1
       b 0 ] ,

we have det DT = −b. If |b| < 1, then the area of a small rectangle is strictly decreased under the application of T. Hence a T-invariant bounded subset has Lebesgue measure 0. It is known that, for a set of parameter pairs (a, b) ∈ R^2 of positive Lebesgue measure, the Hénon mapping has a Sinai–Ruelle–Bowen measure µ on its attractor with a positive Lyapunov exponent almost everywhere with respect to µ. For more information consult [ASY],[BeC],[BeY]. For the estimation of the Lyapunov exponents see the left plot in Fig. 10.3, where y = (log σn)/n, 50 ≤ n ≤ 1000, is plotted for each singular value σn of D(T^n). The curves below and above the x-axis are for L1 and L2, respectively. Compare the results with Fig. 10.14.

Fig. 10.3. The Lyapunov exponents are approximated by (log σn)/n, where the σn are the singular values of D(T^n): the Hénon mapping with a = 1.4, b = 0.3 (left), and the modified standard mapping with C = 0.5 (right)

For estimation of the largest Lyapunov exponent on the Hénon attractor, see Fig. 10.4, where (T^j x0, (log σ)/200), 1 ≤ j ≤ 100, are plotted. Compare the results with Fig. 10.15.
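The following Python/NumPy sketch (an editorial illustration, not the book's Maple program; natural logarithms, whereas the figures use base 10) estimates both exponents of the Hénon map by the QR cocycle along an orbit on the attractor. The starting point, transient length, and acceptance window are assumptions of this sketch; the commonly reported value of the largest exponent for a = 1.4, b = 0.3 is about 0.42 in natural log.

```python
import numpy as np

a, b = 1.4, 0.3

def step(p):                               # the Henon mapping
    x, y = p
    return np.array([1.0 - a * x * x + y, b * x])

def jac(p):                                # its Jacobian
    x = p[0]
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

p = np.array([0.1, 0.1])
for _ in range(100):                       # discard a transient to reach the attractor
    p = step(p)

Q = np.eye(2)
acc = np.zeros(2)
n = 20000
for _ in range(n):
    Q, R = np.linalg.qr(jac(p) @ Q)
    acc += np.log(np.abs(np.diag(R)))
    p = step(p)
L2, L1 = np.sort(acc / n)                  # L1 = largest exponent

assert 0.35 < L1 < 0.50                    # reported value is about 0.42
assert abs(L1 + L2 - np.log(b)) < 1e-9     # exponents sum to log|det DT| = log b
```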

Fig. 10.4. The largest Lyapunov exponent is approximated by (log σ)/200, where σ is the largest singular value of D(T^200)(T^j x0), 1 ≤ j ≤ 100: the Hénon mapping with a = 1.4, b = 0.3 (left), and the standard mapping with C = 0.5 (right)

Example 10.15. Let C be a real constant. Consider the modified standard mapping on the two-dimensional torus X defined by

T(x, y) = T_C(x, y) = (2x − y + C sin(2πx), x)  (mod 1) .

Note that T is invertible and

T^{−1}(x, y) = (y, −x + 2y + C sin(2πy))  (mod 1) .

Also observe that T(−x, −y) = −T(x, y) (mod 1). Since

DT = [ ∗ −1
       1  0 ] ,

we have det DT = 1. Hence T preserves the area. If T has an absolutely continuous invariant measure of the form ρ(x, y) dx dy, ρ ≥ 0, then ρ = 1/λ(A) on A ⊂ X and ρ = 0 on X \ A, where λ(A) is the Lebesgue measure of A. To find invariant measures we apply the Birkhoff Ergodic Theorem. In each of the left and the middle plots in Fig. 10.5, with C = 0.5 and C = −0.5 respectively, 5000 orbit points are plotted with suitably chosen starting points. Since

T_C( x + 1/2, y + 1/2 ) = T_{−C}(x, y) + ( 1/2, 1/2 )  (mod 1) ,

an invariant measure of T_{−C} is the translate of an invariant measure of T_C by (1/2, 1/2) modulo 1. Compare the first and the second plots. If U denotes a mapping on X induced by the rotation around the origin by 180 degrees, i.e.,

U(x, y) = (1 − x, 1 − y) = (−x, −y)  (mod 1) ,

then U^{−1} = U and T ∘ U = U ∘ T.

Fig. 10.5. Absolutely continuous ergodic invariant measures of the modiﬁed standard mapping with C = 0.5 and C = −0.5 (left and middle), and singular invariant measures with C = 0.5 (right)

Let µ be an ergodic invariant measure for T, and define a new measure ν by ν(E) = µ(U(E)) for E ⊂ X. Then

ν(T(E)) = µ(U(T(E))) = µ(T(U(E))) = µ(U(E)) = ν(E) .

Hence ν is also T-invariant. To prove the ergodicity of ν, suppose that ν(T(E) △ E) = 0, where △ denotes the symmetric difference. Then

ν(T(E) △ E) = µ(U(T(E)) △ U(E)) = µ(T(U(E)) △ U(E)) ,

and so the ergodicity of µ implies that µ(U(E)) = 0 or 1. Hence ν(E) = 0 or 1. As a special case, consider an absolutely continuous invariant probability measure µ. Then µ = ν by the uniqueness of the absolutely continuous ergodic invariant measure. See the left plot in Fig. 10.5. The union of the three almost flat closed curves in the right plot, located between the third and the fourth closed curves circling around the origin counting from the inside, forms the support of a singular continuous ergodic invariant measure, which is not symmetric with respect to U. Observe that T has two fixed points (0, 0) and (1/2, 1/2). If we pick a point other than the fixed point inside the hole in the first plot as a starting point, then the orbit points are attracted to closed curves, which indicates that there exist singular continuous invariant measures.

For an experiment with the standard mapping we will consider only absolutely continuous invariant measures. Since det D(T^n) = 1 for every n ≥ 1, the product of the two singular values σ_{n,1} and σ_{n,2} is equal to 1, and hence

L1 = lim_{n→∞} (1/n) log σ_{n,1} = lim_{n→∞} (1/n) log (σ_{n,2})^{−1} = −L2 .

For the estimation of the Lyapunov exponents see the right plot in Fig. 10.3. The curves below and above the x-axis are for L1 and L2 , respectively. See also Fig. 10.4.
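The inverse formula stated above can be sanity-checked numerically. This short Python/NumPy sketch (an editorial illustration, not from the book) verifies on random torus points that applying T after T^{−1} returns the original point, mod 1.

```python
import numpy as np

C = 0.5  # as in the example

def T(x, y):
    return ((2.0 * x - y + C * np.sin(2.0 * np.pi * x)) % 1.0, x % 1.0)

def T_inv(x, y):
    return (y % 1.0, (-x + 2.0 * y + C * np.sin(2.0 * np.pi * y)) % 1.0)

def torus_dist(a, b):
    # distance on the circle R/Z
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

rng = np.random.default_rng(2)
for _ in range(100):
    x, y = rng.random(2)
    u, v = T(*T_inv(x, y))
    assert torus_dist(u, x) < 1e-9 and torus_dist(v, y) < 1e-9
```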

10.4 Invariant Subspaces

Let T be a transformation as given in Theorem 10.12. Consider the case when r = 2 and the Lyapunov exponents are given by L1 < L2. Suppose that a tangent vector v at x ∈ X has a nonzero component in E2(x), i.e., v = v1 + v2, v1 ∈ E1(x), v2 ∈ E2(x), v2 ≠ 0. Then

L2 = lim_{n→∞} (1/n) log ||D(T^n)(x) v|| .

We use different methods in finding E1(x) and E2(x). The subspace E1(x) is obtained as the limit of the subspace spanned by the eigenvectors of An(x)^T An(x) corresponding to the smaller eigenvalue. But E2(x) cannot be obtained by the same method: it is not necessarily the limit of the subspace spanned by the eigenvectors vn of An(x)^T An(x) corresponding to the larger eigenvalue. What we really obtain is a vector vn ≈ w ∈ R^2 \ E1(x), where R^2 is identified with the space of all tangent vectors at x. This property is sufficient to calculate L2, but, in general, w ∉ E2(x). To find E2(x), we use T^{−1} instead of T. Let Ei(x, T) and Ei(x, T^{−1}) be the invariant subspaces given in the Oseledec Multiplicative Ergodic Theorem corresponding to T and T^{−1}, respectively. Then

E1(x, T) = E2(x, T^{−1})  and  E2(x, T) = E1(x, T^{−1}) .

Since E1(x, T^{−1}) is obtained by the previous method, we can find E2(x, T). How can we find invariant subspaces of T^{−1} when T^{−1} is not explicitly given? First, observe that for n ≥ 1

D(T^{−n})(x) = [DT(T^{−1}x) · · · DT(T^{−n}x)]^{−1} .

Put Bn(x) = DT(T^{−1}x) · · · DT(T^{−n}x). Then Bn(x) = An(T^{−n}x) and D(T^{−n})(x) = Bn(x)^{−1}. Note that if an invertible matrix M satisfies Mv = λv, λ ≠ 0, then M^{−1}v = λ^{−1}v. Therefore, to obtain E1(x, T^{−1}), we find an eigenvector corresponding to the smaller eigenvalue of the matrix

(Bn(x)^{−1})^T Bn(x)^{−1} = [Bn(x) Bn(x)^T]^{−1} ,

which is also an eigenvector of the matrix Bn(x) Bn(x)^T corresponding to the larger eigenvalue. Once we have Ei(x), we obtain Ei(T^j x), j ≥ 1, using DT(x) Ei(x) = Ei(T x), i = 1, 2. In the following examples we plot the directions of E1(x), E2(x) along an orbit of T. See Maple Programs 10.7.5, 10.7.6.
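This recipe — take the top eigenvector of Bn(x) Bn(x)^T — can be illustrated for the toral automorphism A = [[3, 2], [1, 1]] treated in Example 10.16 below, where Bn = A^n and the expanding direction E2 is spanned by the eigenvector v2 = (1 + √3, 1). A Python/NumPy sketch (an editorial illustration, not the book's Maple program):

```python
import numpy as np

A = np.array([[3.0, 2.0], [1.0, 1.0]])
n = 12
Bn = np.linalg.matrix_power(A, n)        # B_n = A^n for a constant cocycle

# Eigenvector of B_n B_n^T for the LARGER eigenvalue approximates E2(x).
w, U = np.linalg.eigh(Bn @ Bn.T)         # eigenvalues in ascending order
e2 = U[:, -1]                            # top eigenvector

v2 = np.array([1.0 + np.sqrt(3.0), 1.0]) # known expanding eigenvector
v2 /= np.linalg.norm(v2)
# agree up to sign
assert min(np.linalg.norm(e2 - v2), np.linalg.norm(e2 + v2)) < 1e-6
```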

Example 10.16. Consider a toral automorphism defined by

A = [ 3 2
      1 1 ] ,

which has eigenvalues λ1 = 2 − √3 < 1, λ2 = 2 + √3 > 1 with the corresponding eigenvectors v1 = (1 − √3, 1), v2 = (1 + √3, 1). In this case, it is easy to find the Lyapunov exponents and the invariant subspaces. Since A^n v1 = λ1^n v1 and A^n v2 = λ2^n v2, we have E1(x) = {t v1 : t ∈ R}, E2(x) = {t v2 : t ∈ R} and L1 = log λ1 < 0, L2 = log λ2 > 0. See Fig. 10.6, where the Ei(x), i = 1, 2, are plotted along an orbit marked by small circles.

Fig. 10.6. Directions of E1 (x) (left) and E2 (x) (right) for a toral automorphism
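The exponent computation in Example 10.16 can be checked directly (a Python/NumPy sketch, not from the book; n is kept small because for large n floating-point round-off injects an expanding component that swamps λ1^n along the contracting direction).

```python
import numpy as np

A = np.array([[3.0, 2.0], [1.0, 1.0]])
lam1, lam2 = 2.0 - np.sqrt(3.0), 2.0 + np.sqrt(3.0)
v1 = np.array([1.0 - np.sqrt(3.0), 1.0])   # contracting eigenvector, spans E1
v1 /= np.linalg.norm(v1)

n = 10                                      # small n avoids round-off contamination
An = np.linalg.matrix_power(A, n)
# generic vector (nonzero E2 component) grows like lam2^n ...
Lw = np.log(np.linalg.norm(An @ np.array([1.0, 0.0]))) / n
# ... while v1 itself decays like lam1^n
Lv = np.log(np.linalg.norm(An @ v1)) / n

assert abs(Lw - np.log(lam2)) < 0.05       # approx L2 = log(2 + sqrt 3)
assert abs(Lv - np.log(lam1)) < 0.05       # approx L1 = log(2 - sqrt 3)
```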

Example 10.17. Consider the solenoid mapping in Ex. 10.13. The subspaces E1 , E2 are plotted along an orbit on the attractor in Fig. 10.7. The subspace E2 , in the expanding direction, is tangent to the attractor, and has dimension 1. The subspace E1 , in the contracting direction, is parallel to the disks, and has dimension 2. The φ-axis connects two disks, which are identiﬁed.

Fig. 10.7. Directions of E1 (x) (left) and E2 (x) (right) invariant under the solenoid mapping

Example 10.18. Consider the Hénon mapping with a = 1.4, b = 0.3 given in Ex. 10.14. We draw the directions of E1(x), E2(x) at x along an orbit on the attractor as in Ex. 10.17. (Here and in Ex. 10.17 we might plot E1(x), E2(x) at points x outside the basin of attraction, but we choose points on the attractors to show that E2 is tangent to the attractors.) See Figs. 10.8, 10.9. The subspace E2(x), in the expanding direction, is tangent to the Hénon attractor at x on the attractor. At some points the directions of E1(x) and E2(x) almost coincide. Small circles mark the orbit points.

Fig. 10.8. Directions of E1(x) invariant under the Hénon mapping with a = 1.4, b = 0.3

Fig. 10.9. Directions of E2(x) invariant under the Hénon mapping with a = 1.4, b = 0.3

In Fig. 10.10 a region {(x, y) : −1.286 ≤ x ≤ −1.278, 0.379 ≤ y ≤ 0.383} around the leftmost point on the Hénon attractor is magnified. Small circles in the right plot mark the orbit points. It shows that E2(x) is tangent to the Hénon attractor at a point x on the attractor. Compare the directions with the tangential directions of the stable and unstable manifolds in Figs. 11.6, 11.7 in Chap. 11, where we can clearly see that E2 is tangential to the Hénon attractor. The theory of the stable and unstable manifolds may be regarded as an integral version of the invariant subspace theory.

Fig. 10.10. A magniﬁcation of the region around the leftmost point on the H´enon attractor with a = 1.4, b = 0.3 (left). The directions of E2 (x) are indicated (right)

Example 10.19. Consider the standard mapping in Ex. 10.15. Take C = 0.5. We draw the directions of E1 (x) and E2 (x) at x along orbits. See Fig. 10.11 where the directions at 1000 orbit points are indicated in each plot.

Fig. 10.11. Directions of E1 (x) (left) and E2 (x) (right) invariant under the action of the modiﬁed standard mapping with C = 0.5

10.5 The Lyapunov Exponent and Entropy

In this section we discuss the relation between the Lyapunov exponents and entropy. Only a rough sketch of the idea is given. Assume that T is ergodic. First, let

L1 < · · · < Ls ≤ 0 < Ls+1 < · · · < Lr be the Lyapunov exponents of T and consider the subspace W^u(x) = E_{s+1}(x) ⊕ · · · ⊕ E_r(x) defined as the direct sum of the invariant subspaces corresponding to positive Lyapunov exponents. The superscript 'u' denotes the unstable direction, along which two nearby points are separated farther and farther. Put ℓ = dim W^u(x), x ∈ X. Note that ℓ is constant and DT(x)W^u(x) = W^u(Tx). Take A(x) = DT(x) and let di = dim Ei(x) for x ∈ X. Then

∑_{Li>0} di Li = lim_{n→∞} (1/n) ∑_{Li>0} di log σ_{n,i}(x)

= lim_{n→∞} (1/n) log det( A_n(x)|_{W^u(x)} )

= lim_{n→∞} (1/n) log ( det A(T^{n−1}x)|_{W^u(T^{n−1}x)} · · · det A(x)|_{W^u(x)} )

= ∫_X log det( A(x)|_{W^u(x)} ) dµ(x) .

The second and the last equalities are obtained by applying Theorem 10.1 and the Birkhoff Ergodic Theorem, respectively. Now we need a definition: Let (X, d) be a metric space. A function f : X → R^m is Hölder continuous of exponent α > 0 (or f is C^α) if

sup_{x≠y} d(x, y)^{−α} ||f(x) − f(y)|| < ∞ .
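The definition can be tried out on a concrete function. The following Python sketch (an illustrative stand-in; the book's own programs are written in Maple) samples the Hölder quotient for f(x) = √x on [0, 1], which is Hölder continuous of exponent α = 1/2 with constant 1, since |√x − √y| = |x − y|/(√x + √y) ≤ |x − y|^{1/2}.

```python
import math
import random

# Numerically sample the Hoelder quotient |f(x)-f(y)| / |x-y|^alpha
# for f(x) = sqrt(x) on [0,1] with alpha = 1/2; the supremum equals 1.
random.seed(0)
f = math.sqrt
alpha = 0.5
worst = 0.0
for _ in range(100_000):
    x, y = random.random(), random.random()
    if x != y:
        q = abs(f(x) - f(y)) / abs(x - y) ** alpha
        worst = max(worst, q)

print(worst)  # close to, but not exceeding, 1
```

The sampled supremum approaches 1 as the sample size grows; pairs with y very close to 0 realize the extreme quotient.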

A Hölder continuous function is necessarily continuous. If X is a compact manifold, then we regard X as a subset of some Euclidean space.

Theorem 10.20 (Pesin). Let X be a compact differentiable manifold with an absolutely continuous probability measure µ. Suppose that (i) T : X → X is a diffeomorphism and its derivative is C^α, α > 0, (ii) µ is T-invariant, and (iii) T is ergodic with respect to µ. Let Li be the ith Lyapunov exponent and put di = dim Ei. Then

entropy = ∑_{Li>0} di Li ,

where the same logarithmic base is taken in the deﬁnitions of the entropy and the Lyapunov exponents.

Corollary 10.21. Let T be an automorphism of the m-dimensional torus given by an integral matrix A. Let λi be the eigenvalues of A. Then, counting multiplicity, we have

entropy = ∑_{|λi|>1} log |λi| ,

where the same logarithmic base is taken for the entropy and log |λi|.

Proof. Note that DT(x) = A for every x. Let λ be an eigenvalue of A, and let E_{|λ|} be the direct sum of the generalized eigenspaces corresponding to the eigenvalues of modulus |λ|. If 0 ≠ v ∈ E_{|λ|}, then by using the real Jordan canonical form of A we see that (1/n) log ||A^n v|| converges to log |λ|. Hence the positive Lyapunov exponents are given by log |λi| with |λi| > 1.
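Corollary 10.21 is easy to try out numerically. In the Python sketch below (an illustrative stand-in for the book's Maple sessions) we take the torus automorphism given by the integral matrix A = ( 2 1 ; 1 1 ) and compute the entropy as the sum of log10 |λi| over the eigenvalues with |λi| > 1; for a 2 × 2 matrix the eigenvalues come from the quadratic formula.

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - (a + d) lambda + (ad - bc) = 0 (real eigenvalues assumed)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def toral_entropy(A):
    """Entropy (base 10) of the toral automorphism x -> Ax (mod 1):
    the sum of log10|lambda_i| over eigenvalues of modulus > 1."""
    l1, l2 = eigenvalues_2x2(*A[0], *A[1])
    return sum(math.log10(abs(l)) for l in (l1, l2) if abs(l) > 1)

A = [[2, 1], [1, 1]]
h = toral_entropy(A)
print(h)  # log10((3 + sqrt(5))/2), approximately 0.418
```

For this matrix only one eigenvalue, (3 + √5)/2, exceeds 1 in modulus, so the entropy is its base-10 logarithm.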

Theorem 10.22 (Ruelle). Let X be a compact differentiable manifold with a probability measure µ. Suppose that (i) T : X → X is a C¹ diffeomorphism, (ii) µ is T-invariant, and (iii) T is ergodic with respect to µ. Let Li be the ith Lyapunov exponent and put di = dim Ei. Then

entropy ≤ ∑_{Li>0} di Li ,

where the same logarithmic base is taken in the definitions of the entropy and the Lyapunov exponents.

Example 10.23. Consider a two-dimensional toral automorphism given by a matrix A. Since the characteristic equation of A is λ² − (tr A)λ + det A = 0 with |det A| = 1, the eigenvalues λmax and λmin, |λmax| ≥ |λmin|, satisfy |λmax × λmin| = 1. Hence |λmax| ≥ 1 and |λmin| ≤ 1. Thus

entropy = log |λmax| = the largest Lyapunov exponent .

In Fig. 10.12 we plot y = (1/n) log10 σ_{n,max}, 5 ≤ n ≤ 100, where σ_{n,max} is the largest singular value of A^n. Compare the result with Fig. 10.13.

(i) Take A = ( 2 1 ; 1 1 ). It has eigenvalues λmax = (3 + √5)/2, λmin = (3 − √5)/2. The entropy is equal to log10((3 + √5)/2) ≈ 0.418. In this case A^n is symmetric for n ≥ 1, and the eigenvalues and singular values coincide. Hence log10 |λmax| = (1/n) log10 σ_{n,max} for n ≥ 1. See the left graph in Fig. 10.12.

(ii) Take A = ( 2 3 ; 1 2 ). It has eigenvalues λmax = 2 + √3, λmin = 2 − √3. The entropy is equal to log10(2 + √3) ≈ 0.572. See the right graph in Fig. 10.12.
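The convergence (1/n) log10 σ_{n,max} → log10 λmax can be checked directly in case (ii). The Python sketch below is an illustrative stand-in for the Maple session: it forms A^n by exact integer multiplication and obtains the largest singular value of the 2 × 2 matrix from the eigenvalues of the Gram matrix (A^n)ᵀ A^n.

```python
import math

def matmul2(A, B):
    # 2x2 integer matrix product (exact arithmetic, no rounding)
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def largest_singular_value(M):
    # sigma_max = sqrt(largest eigenvalue of M^T M)
    g11 = M[0][0]**2 + M[1][0]**2
    g12 = M[0][0]*M[0][1] + M[1][0]*M[1][1]
    g22 = M[0][1]**2 + M[1][1]**2
    tr, det = g11 + g22, g11*g22 - g12*g12
    mu_max = (tr + math.sqrt(tr*tr - 4*det)) / 2
    return math.sqrt(mu_max)

A = [[2, 3], [1, 2]]          # case (ii): eigenvalues 2 +/- sqrt(3)
An = [[1, 0], [0, 1]]
n = 50
for _ in range(n):
    An = matmul2(An, A)

estimate = math.log10(largest_singular_value(An)) / n
print(estimate, math.log10(2 + math.sqrt(3)))  # both near 0.572
```

The gap between the estimate and log10(2 + √3) shrinks like O(1/n), which is the behavior visible in the right graph of Fig. 10.12.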

Fig. 10.12. The largest Lyapunov exponent log10 λmax of T x = Ax is approximated by (1/n) log10 σ_{n,max}, where σ_{n,max} is the largest singular value of A^n and A is given in Ex. 10.23. The horizontal lines indicate y = log10 λmax

10.6 The Largest Lyapunov Exponent and the Divergence Speed

Let X be an m-dimensional compact differentiable manifold. (We assume that X ⊂ R^m for the simplicity of notation.) Let T : X → X be a diffeomorphism and suppose that the largest Lyapunov exponent of T is positive. Take two neighboring points x, x̃ ∈ X. Let Er(x) be the subspace in Theorem 10.12 with the largest Lyapunov exponent. The vector x − x̃ has a nonzero component in Er(x) with probability 1. (For, if we let Z(x) ⊂ R^m be the subspace spanned by Ek(x), 1 ≤ k ≤ r − 1, then dim Z(x) < m, and hence the m-dimensional volume of Z(x) is zero.) Then, as we iterate T, the distance ||T^k x − T^k x̃|| increases exponentially. More precisely, for two nearby points x, x̃, and for k sufficiently large but not too large, we have

||T^k x − T^k x̃|| ≈ 10^{k Lmax} ||x − x̃||

where Lmax is the largest Lyapunov exponent.

Definition 10.24. For n ≥ 1, take x, x̃ ∈ X such that ||x̃ − x|| = 10^{−n}. If the direction x − x̃ has a nonzero component in Er(x), then we define the nth divergence speed by

Vn(x, x̃) = min{ j ≥ 1 : ||T^j x − T^j x̃|| ≥ 10^{−1} } .

When there is no danger of confusion we write Vn(x) in place of Vn(x, x̃).

Remark 10.25. Let X be a differentiable manifold. To simplify the notation we assume that X is a bounded subset of R^m. Let T : X → X be a diffeomorphism and µ a T-invariant ergodic probability measure on X. Let Lmax be the largest Lyapunov exponent of T. Then for almost every x

n / Vn(x) ≈ Lmax .

To see why, observe that for k sufficiently large but not too large, we have

(1/k) log ||T^k x − T^k x̃|| ≈ Lmax + (1/k) log ||x − x̃||

where the correction term (1/k) log ||x − x̃|| is not negligible if ||x − x̃|| is very small. Take k = Vn(x). Then

−1 / Vn(x) ≈ Lmax + (−n) / Vn(x) .

Hence n/Vn ≈ Lmax for sufficiently large n. We approximate the largest Lyapunov exponent by the divergence speed. For simulations at a single point x0 for 5 ≤ n ≤ 100, see Figs. 10.13, 10.14, and for simulations at 500 points with n = 100 see Fig. 10.15. Compare the results with Figs. 10.3, 10.4. Consult Maple Programs 10.7.7, 10.7.8.
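Remark 10.25 can be tested on a toral automorphism, where Lmax is known exactly. The Python sketch below (an illustrative stand-in for Maple Program 10.7.7) perturbs a point by 10^{−n}, counts the iterations Vn until the two orbits are 10^{−1} apart on the torus, and compares n/Vn with log10 λmax.

```python
import math

# Toral automorphism given by the integral matrix [[2, 1], [1, 1]];
# its largest Lyapunov exponent is log10((3 + sqrt(5))/2), about 0.418.
def T(x, y):
    return ((2*x + y) % 1.0, (x + y) % 1.0)

def torus_dist(p, q):
    # Euclidean distance on the torus (each coordinate taken mod 1)
    dx = abs(p[0] - q[0]); dx = min(dx, 1.0 - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, 1.0 - dy)
    return math.hypot(dx, dy)

def divergence_speed(x0, n):
    """V_n: the first j with ||T^j x - T^j x~|| >= 10^{-1},
    starting from points 10^{-n} apart."""
    p, q, j = x0, (x0[0] + 10.0**(-n), x0[1]), 0
    while torus_dist(p, q) < 0.1:
        p, q, j = T(*p), T(*q), j + 1
    return j

n = 12
V = divergence_speed((math.pi - 3, math.sqrt(2.0) - 1), n)
L_max = math.log10((3 + math.sqrt(5)) / 2)
print(n / V, L_max)  # roughly equal
```

With n = 12 the correction term of order 1/Vn is still visible, so n/Vn slightly overestimates Lmax, exactly as in the plots of Fig. 10.13.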

Fig. 10.13. The divergence speed: y = n/Vn (x0 ) for toral automorphisms given in Ex. 10.23. The horizontal lines indicate the largest Lyapunov exponents

Fig. 10.14. The divergence speed: y = n/Vn (x0 ) for the H´enon mapping with a = 1.4, b = 0.3 (left) and the standard mapping with C = 0.5 (right)

Fig. 10.15. The divergence speed: plots of (T j x0 , n/Vn (T j x0 )), n = 100, 1 ≤ j ≤ 100, for the H´enon mapping with a = 1.4, b = 0.3 (left) and the standard mapping with C = 0.5 (right)

10.7 Maple Programs

10.7.1 Singular values of a matrix

Find the singular values of a matrix.
> with(linalg):
> A:=matrix([[2.0,1,1],[0,-3,0],[2,1,2]]);
        A := [[2.0, 1, 1], [0, -3, 0], [2, 1, 2]]
> det(A);
        -6.0
Find the singular values of A. Choose one of the following two methods. Use the commands Svd and evalf together in the second method.
> SV_A:=singularvals(A);
        SV_A := [0.5605845521, 2.602624117, 4.112431479]
> SV_A:=evalf(Svd(A));
        SV_A := [4.112431479, 2.602624117, 0.5605845521]
The following is the product of the singular values, which is equal to the absolute value of the determinant.
> SV_A[1]*SV_A[2]*SV_A[3];
        6.000000003
Find the eigenvalues of A.
> EV_A:=[eigenvals(A)[1],eigenvals(A)[2],eigenvals(A)[3]];
        EV_A := [3.414213562, 0.5857864376, -3.]
Calculate the product of the eigenvalues, which is equal to the determinant.
> EV_A[1]*EV_A[2]*EV_A[3];
        -6.000000000
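The identity just demonstrated—the product of the singular values equals |det A|—can be reproduced outside Maple. The Python sketch below (illustrative, not from the book) computes the singular values of a 2 × 2 matrix by hand: they are the square roots of the eigenvalues of the Gram matrix AᵀA.

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 matrix: square roots of the
    eigenvalues of the Gram matrix G = A^T A."""
    (a, b), (c, d) = A
    g11, g12, g22 = a*a + c*c, a*b + c*d, b*b + d*d
    tr, det = g11 + g22, g11*g22 - g12*g12
    disc = math.sqrt(max(tr*tr - 4*det, 0.0))
    return math.sqrt((tr + disc) / 2), math.sqrt((tr - disc) / 2)

A = [[2.0, 1.0], [3.0, 0.0]]
s1, s2 = singular_values_2x2(A)
det_A = A[0][0]*A[1][1] - A[0][1]*A[1][0]
print(s1 * s2, abs(det_A))  # both equal 3.0 up to rounding
```

The same Gram-matrix trick is what `Svd` effectively performs in higher dimensions, which is why doubling the number of significant digits is needed when forming BᵀB.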

Find the singular values of A^n.
> n:=100:
> Digits:=200:
If n is large then Digits should be large. In general, if the entries of B have k significant digits, then at least 2k significant digits are needed to calculate B^T B and the singular values. If one of the singular values is close to zero, use the idea explained in Subsect. 10.7.3.
> evalf(Svd(evalm(A&^n)),10);
        [0.2315601860 x 10^54, 0.5047427740 x 10^48, 0.5048859902 x 10^44]

10.7.2 A matrix sends a circle onto an ellipse

Show that a 2 × 2 matrix A sends the unit circle onto an ellipse.
> with(linalg):
> with(plots):
> with(plottools):
Choose the coefficients of a 2 × 2 matrix A. If the coefficients are not given as decimal numbers, the eigenvectors are not normalized. Change the entries of the matrix A and observe what happens.
> A:=matrix(2,2,[[1,2.],[3,0]]);
        A := [[1, 2.], [3, 0]]
> w:=[cos(theta),sin(theta)];
        w := [cos(θ), sin(θ)]
> k:=eigenvectors(transpose(A)&*A);
        k := [3.394448728, 1, {[0.2897841487, -0.9570920265]}],
             [10.60555128, 1, {[-0.9570920265, -0.2897841487]}]
> v1:=k[1][3][1];
        v1 := [0.2897841487, -0.9570920265]
> v2:=k[2][3][1];
        v2 := [-0.9570920265, -0.2897841487]
The vectors v1 and v2 are orthogonal.
> innerprod(v1,v2);
        0.
> g1:=line([(0,0)],[(v1[1],v1[2])]):
> g2:=line([(0,0)],[(v2[1],v2[2])]):
> g3:=plot([w[1],w[2],theta=0..2*Pi]):
Draw the unit circle.
> display(g1,g2,g3);

Find the image of the unit circle.
> image:=evalm(A&*w);
        image := [cos(θ) + 2. sin(θ), 3 cos(θ)]
> u1:=evalm(A&*v1);
        u1 := [-1.624399904, 0.8693524461]
> u2:=evalm(A&*v2);
        u2 := [-1.536660324, -2.871276080]
The two vectors u1 and u2 are orthogonal.
> innerprod(u1,u2);
        -0.1 x 10^(-8)
The above result may be regarded as zero.
> g4:=line([(0,0)],[(u1[1],u1[2])],color=green):
> g5:=line([(0,0)],[(u2[1],u2[2])]):
> g6:=plot([image[1],image[2],theta=0..2*Pi]):
Now we have an ellipse.
> display(g4,g5,g6);
See Fig. 10.1.

10.7.3 The Lyapunov exponents: a local version for the solenoid mapping

Find the Lyapunov exponents of the solenoid mapping at a single point.
> with(linalg):
> with(plots):
> N:=1000:
Here is a general rule for choosing the number of significant decimal digits in computing the Lyapunov exponents: Let Lmin and Lmax denote the smallest and the largest Lyapunov exponents of T. If Lmin < 0 < Lmax, then after N iterations of T the smallest and the largest singular values of the Jacobian matrix of T^N are roughly equal to 10^(N×Lmin) and 10^(N×Lmax), respectively, and we need at least

N × |Lmin| + N × Lmax

decimal digits. (We have to count the number of digits after the decimal point to estimate the negative Lyapunov exponents. If there are not enough significant digits, then the number 10^(N×Lmin) is treated as zero, and Maple produces an error message when we try to compute the logarithm. The digits before the decimal point are used to calculate the positive Lyapunov exponents, and the digits after the decimal point are used for the negative Lyapunov exponents.) If there is not much information on the Lyapunov exponents before we start the simulation, we choose a sufficiently large D for the number of decimal digits, then gradually increase D to see whether our choice for D is sufficient. If there is a significant change, then we need more digits. If there is no visible change, then we may conclude that we have reasonably accurate estimates for L1, L2. For the example in this subsection, Lmin = log10 a ≈ −0.5 and Lmax = log10 2 ≈ 0.3. Hence we have N × (|Lmin| + Lmax) ≈ 800. We take D = 900 since it is better to allow enough room for truncation errors.
> Digits:=900:
> T:=(phi,u,v)->(frac(2.0*phi),a*u+0.5*cos(2*Pi*phi),a*v+0.5*sin(2*Pi*phi));
Find the Jacobian matrix of T.
> DT:=(phi,u,v)->matrix([[2,0,0],[-Pi*sin(2*Pi*phi),a,0],[Pi*cos(2*Pi*phi),0,a]]);
> a:=0.3:
> seed[0]:=(evalf(Pi-3),0.1*sqrt(2.0),sqrt(3.0)-1):
Throw away transient points.
> for i from 1 to 1000 do seed[0]:=evalf(T(seed[0])): od:
Compute an orbit.
> for n from 1 to N do seed[n]:=evalf(T(seed[n-1])): od:
Take the 3 × 3 identity matrix.
> A:=Matrix(3,3,shape=identity):

Find the Lyapunov exponents L1, L2 and L3.
> for n from 1 to N do
>   A:=evalm(DT(seed[n-1]) &* A):
>   K:=evalf(Svd(A));
>   SV:=sort([K[1],K[2],K[3]]);
>   L1[n]:=evalf(log10(SV[1])/n,10);
>   L2[n]:=evalf(log10(SV[2])/n,10);
>   L3[n]:=evalf(log10(SV[3])/n,10);
> od:
Now we do not need many significant digits.
> Digits:=10:
The following three numbers should be approximately equal.
> log10(a); L1[N]; L2[N];
        -0.5228787453
        -0.5231792559
        -0.5228787453
The following two numbers should be approximately equal.
> log10(2.); L3[N];
        0.3010299957
        0.3013305063
> g0:=plot(log10(a),x=0..N,labels=["n"," "],color=green):
> g1:=listplot([seq([n,L1[n]],n=50..N)]):
> display(g1,g0);
See the left plot in Fig. 10.16.
> g2:=listplot([seq([n,L2[n]],n=50..N)]):
> display(g2,g0);
See the middle plot in Fig. 10.16.
> g3:=listplot([seq([n,L3[n]],n=50..N)]):
> g4:=plot(log10(2),x=0..N,labels=["n"," "],color=green):
> display(g3,g4);
See the right plot in Fig. 10.16. See also Figs. 10.2, 10.3.

Fig. 10.16. The Lyapunov exponents of the solenoid mapping are approximated by (log σn )/n where σn are the smallest, the second largest and the largest singular values of D(T n ), 50 ≤ n ≤ 1000 (from left to right)
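The Svd-based programs above need hundreds of digits because the singular values of D(T^n) grow and shrink exponentially. A standard alternative is to re-orthonormalize tangent vectors at every step (Gram–Schmidt), which keeps all quantities of order one. The following Python sketch (a hedged stand-in, not from the book, which works in Maple) applies this idea to the Hénon mapping with a = 1.4, b = 0.3 treated in the next subsection.

```python
import math

# Estimate both Lyapunov exponents (base 10) of the Henon mapping by
# evolving two tangent vectors with the Jacobian and re-orthonormalizing
# them every step, so no high-precision arithmetic is needed.
a, b = 1.4, 0.3

def T(x, y):
    return (y + 1 - a*x*x, b*x)

def lyapunov_pair(n_iter=20000, n_transient=1000):
    x, y = 0.0, 0.0
    for _ in range(n_transient):          # land on the attractor
        x, y = T(x, y)
    u, v = (1.0, 0.0), (0.0, 1.0)
    s_big = s_small = 0.0
    for _ in range(n_iter):
        j11, j12, j21, j22 = -2*a*x, 1.0, b, 0.0   # DT at the current point
        u = (j11*u[0] + j12*u[1], j21*u[0] + j22*u[1])
        v = (j11*v[0] + j12*v[1], j21*v[0] + j22*v[1])
        nu = math.hypot(*u)               # growth along the leading direction
        u = (u[0]/nu, u[1]/nu)
        dot = u[0]*v[0] + u[1]*v[1]       # remove the u-component of v
        v = (v[0] - dot*u[0], v[1] - dot*u[1])
        nv = math.hypot(*v)
        v = (v[0]/nv, v[1]/nv)
        s_big += math.log10(nu)
        s_small += math.log10(nv)
        x, y = T(x, y)
    return s_big/n_iter, s_small/n_iter

L2, L1 = lyapunov_pair()
print(L2, L1)  # roughly 0.18 and -0.70
```

Since |det DT| = b at every point, the two estimates must sum to log10 b ≈ −0.523, which is a useful internal consistency check.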

10.7.4 The Lyapunov exponent: a global version for the Hénon mapping

Find the Lyapunov exponents of the Hénon mapping.
> with(linalg):
> with(plots):
Choose the length of each orbit.
> N:=200:
In the following we will see that L1 ≈ −0.7 and L2 ≈ 0.18, so an optimal value for D is approximately equal to N. See the explanation in Subsect. 10.7.3.
> Digits:=200:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
> seed[0]:=(0,0):
Throw away transient points.
> for i from 1 to 2000 do seed[0]:=T(seed[0]): od:
Choose the number of samples.
> SampleSize:=100:
> for s from 1 to N*SampleSize do
>   seed[s]:=T(seed[s-1]):
> od:
Calculate the Lyapunov exponents using orbits of length N starting from T^((s-1)N+1)(x0) for each 1 ≤ s ≤ 100.
> for s from 1 to SampleSize do
>   A:=DT(seed[(s-1)*N+1]):
>   for n from 1 to N-1 do
>     A:=evalm(DT(seed[(s-1)*N+1+n]) &* A):
>   od:
>   K:=evalf(Svd(A),200);
>   SV:=sort([K[1],K[2]]):
>   L1[s]:=log10(SV[1])/N:
>   L2[s]:=log10(SV[2])/N:
> od:
> Digits:=10:
Find the average of L1.
> ave_L1:=add(L1[s],s=1..SampleSize)/SampleSize;
        ave_L1 := -0.7043197238
> pointplot3d([seq([seed[s],L1[s]],s=1..SampleSize)]);
See Fig. 10.17. Find the average of L2.

Fig. 10.17. The Lyapunov exponent L1 is approximated by (log σ)/200 where σ is the smaller singular value of D(T 200 )(T j x0 ), 1 ≤ j ≤ 100: the H´enon mapping with a = 1.4, b = 0.3

> ave_L2:=add(L2[s],s=1..SampleSize)/SampleSize;
        ave_L2 := 0.1814409784
> pointplot3d([seq([seed[s],L2[s]],s=1..SampleSize)]);
See Fig. 10.4. For conventional algorithms consult [ASY],[PC].

10.7.5 The invariant subspace E1 of the Hénon mapping

Draw the subspaces E1(x) invariant under the action of the Hénon mapping along an orbit using the property DT(x)E1(x) = E1(Tx).
> with(linalg):
> with(plots): with(plottools):
> Digits:=1000:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
> seed[0]:=(0,0):
Throw away transient points.
> for i from 1 to 5000 do seed[0]:=T(seed[0]): od:
> N:=200:
> S:=100:
The length of an orbit, which is given by S, should not exceed N.
> for i from 1 to S+N do seed[i]:=T(seed[i-1]): od:
> A:=DT(seed[0]):
> for n from 1 to N do A:=evalm(DT(seed[n]) &* A); od:
> k:=eigenvectors(evalm(transpose(A) &* A)):
Compare the eigenvalues and choose the eigenvector belonging to the smaller one.
> if k[2][1] <= k[1][1] then d:=2: else d:=1: fi:
> v[0]:=k[d][3][1]:
> for i from 1 to S do
>   v[i]:=evalm(DT(seed[i-1]) &* v[i-1]):
> od:
> Digits:=10:
> for i from 1 to S do dir[i]:=(v[i][1],v[i][2])/norm(v[i]): od:
> for i from 1 to S do point1[i]:=seed[i]-0.1*dir[i]: od:
> for i from 1 to S do point2[i]:=seed[i]+0.1*dir[i]: od:
> for i from 1 to S do
>   L[i]:=line([point1[i]],[point2[i]],color=blue):
> od:
> g1:=display(seq(L[i],i=1..S)):
> g2:=pointplot([seq([seed[i]],i=1..S)],symbol=circle):
> display(g1,g2);
See Fig. 10.8.

10.7.6 The invariant subspace E2 of the Hénon mapping

Plot the subspace E2(x) invariant under the action of the Hénon mapping along an orbit using the property DT(x)E2(x) = E2(Tx).
> with(linalg):
> with(plots): with(plottools):
> Digits:=1000:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
> N:=500:
We start from x_{-N}. Throw away transient points.
> seed[-N]:=(0,0):
> for i from 1 to 5000 do seed[-N]:=T(seed[-N]): od:
> for n from -N+1 to 0 do seed[n]:=T(seed[n-1]): od:
> B:=DT(seed[-N]):
> for n from -N+1 to -1 do
>   B:=evalm(DT(seed[n]) &* B): od:
> k:=eigenvectors(evalm(B &* transpose(B))):
Compare the eigenvalues.
> if k[2][1] >= k[1][1] then d:=2: else d:=1: fi:
> v[0]:=k[d][3][1]:
> S:=200:
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:
> for i from 1 to S do
>   v[i]:=evalm(DT(seed[i-1]) &* v[i-1]): od:

> Digits:=10:
> for i from 0 to S do dir[i]:=(v[i][1],v[i][2])/norm(v[i]): od:
> for i from 0 to S do point1[i]:=seed[i]-0.1*dir[i]: od:
> for i from 0 to S do point2[i]:=seed[i]+0.1*dir[i]: od:
> for i from 0 to S do
>   L[i]:=line([point1[i]],[point2[i]]):
> od:
> g1:=display(seq(L[i],i=0..S)):
> g2:=pointplot([seq([seed[i]],i=0..S)],symbol=circle):
> display(g1,g2);
See Fig. 10.9.

10.7.7 The divergence speed: a local version for a toral automorphism

Plot the graph y = n/Vn(x0), 1 ≤ n ≤ 100, for a toral automorphism T.
> with(plots):
> T:=(x,y)->(frac(3.0*x+y),frac(2*x+y)):
> with(linalg):
Find the associated integral matrix A and its eigenvalues.
> A:=matrix([[3,2],[1,1]]);
> EV:=eigenvals(A);
        EV := 2 + √3, 2 − √3
> max_EV:=max(abs(EV[1]),abs(EV[2])):
> max_Lyapunov:=log[10.](max_EV);
        max_Lyapunov := 0.5719475475
> Digits:=1000:
> Length:=1000:
> seed[0]:=(evalf(Pi-3),sqrt(2.1)-1):
> for i from 1 to Length do seed[i]:=T(seed[i-1]): od:
> N:=100:
Find Vn, 1 ≤ n ≤ N.
> for n from 1 to N do
>   X0:=(seed[0][1]+1/10^n, seed[0][2]):
>   X:=evalf(T(X0)):
>   for j from 1
>   while (X[1]-seed[j][1])^2+(X[2]-seed[j][2])^2 < 1/100
>   do X:=T(X): od:
>   V[n]:=j;
> od:
> Digits:=10:
Draw the graph y = n/Vn, 5 ≤ n ≤ N.
> g1:=listplot([seq([n,n/V[n]],n=5..N)],labels=["n"," "]):
> g2:=plot({max_Lyapunov,0},x=0..N):
> display(g1,g2);
See Fig. 10.14.

10.7.8 The divergence speed: a global version for the Hénon mapping

Plot (x, n/Vn(x)) along x = T^j x0, 1 ≤ j ≤ 500, for the Hénon mapping.
> with(plots):
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> Digits:=1000:
Throw away transient points.
> seed[0]:=(0,0):
> for i from 1 to 10000 do seed[0]:=T(seed[0]): od:
> S:=500:
> L:=1000:
> for i from 1 to S+L do seed[i]:=T(seed[i-1]): od:
> n:=100:
Find Vn.
> for s from 1 to S do
>   X0:=(seed[s][1]+10^(-n),seed[s][2]+10^(-n)):
>   X:=T(X0):
>   for j from 1
>   while (X[1]-seed[s+j][1])^2+(X[2]-seed[s+j][2])^2 < 1/100
>   do X:=T(X):
>   od:
>   V[s]:=j:
> od:
> Digits:=10:
> pointplot3d([seq([seed[s],n/V[s]],s=1..S)],symbol=circle);
See Fig. 10.15.
> evalf(add(n/V[s],s=1..S)/S);
        0.1814595275

11 Stable and Unstable Manifolds

We introduce a new method of plotting the invariant subspaces Ei(x) given in the Multiplicative Ergodic Theorem in Chap. 10 when the transformation T under consideration has hyperbolic fixed (or periodic) points. The method uses the stable and unstable manifolds of fixed (or periodic) points, which are integral versions of the subspaces Ei(x) in the sense that the Ei(x) are tangential either to the stable manifold or to the unstable manifold. These manifolds are also invariant under T. For simplicity of computer experiments we consider only two-dimensional examples such as a toral automorphism, the Hénon mapping and the modified standard mapping. The stable and unstable manifolds are quite useful in understanding the global behavior of T. For example, the stable manifold of a fixed point of the Hénon mapping is part of the boundary of the basin of attraction, and the unstable manifold of a fixed point describes the Hénon attractor.

11.1 Stable and Unstable Manifolds of Fixed Points Let X be a diﬀerentiable manifold. For the sake of notational simplicity, let us assume that X is a subset of R2 with the Euclidean metric d. Let T : X → X be an invertible and continuously diﬀerentiable transformation. Suppose that p ∈ X is a ﬁxed point of T , i.e., T p = p. It is clear that if p is a ﬁxed point of T then it is also a ﬁxed point of T −1 . If none of the absolute values of the eigenvalues of the Jacobian matrix DT (p) is equal to 1, then p is called a hyperbolic ﬁxed point. If DT (p) has two real eigenvalues λ1 , λ2 such that |λ1 | < 1 < |λ2 | , then p is called a saddle point. Since det DT (p) = λ1 λ2 , a hyperbolic ﬁxed point is a saddle point if | det DT (p)| = 1. First, we deﬁne the stable and unstable manifolds of ﬁxed points. The deﬁnitions for periodic points will be given in Sect. 11.4.

Definition 11.1. For a hyperbolic fixed point p of T, we define the stable (invariant) manifold of p by

Ws(p) = Ws(T, p) = {x ∈ X : lim_{n→∞} d(T^n x, p) = 0} ,

and the unstable (invariant) manifold of p by

Wu(p) = Wu(T, p) = {x ∈ X : lim_{n→∞} d(T^{−n} x, p) = 0} .

Note that Wu(T^{−1}, p) = Ws(T, p) and Ws(T^{−1}, p) = Wu(T, p). In the definition of the unstable manifold we cannot require lim_{n→∞} d(T^n x, p) = ∞ since X may be a bounded set. If p is not a hyperbolic fixed point, then in general there exist T-invariant closed curves circling around p. If x ∈ Ws(p), then d(T^n(Tx), p) = d(T^{n+1} x, p) → 0 as n → ∞, and so we have Tx ∈ Ws(p). Hence T(Ws(p)) ⊂ Ws(p). Similarly, T(Wu(p)) ⊂ Wu(p). If p and q are two distinct fixed points, then Ws(p) ∩ Ws(q) = ∅. For, if there were a point x ∈ Ws(p) ∩ Ws(q), then T^n x would converge to both p and q as n → ∞, which is a contradiction. Similarly, if p and q are two distinct fixed points, then Wu(p) ∩ Wu(q) = ∅. It is possible that Ws(p) and Wu(p) intersect at points other than p. Note that D(T^n)(p) = [DT(p)]^n. Let v1 and v2 be eigenvectors of DT(p) corresponding to the eigenvalues λ1 and λ2, |λ1| < 1 < |λ2|, respectively. Take x in a small neighborhood of p and let x = p + c1 v1 + c2 v2 for some c1, c2. If x ∈ Ws(p), then for sufficiently large n > 0 we have

T^n x ≈ T^n p + D(T^n)(p)(c1 v1 + c2 v2) = p + c1 λ1^n v1 + c2 λ2^n v2 ≈ p + c2 λ2^n v2 ,

and hence the c2-component must vanish, for otherwise T^n x could not converge to p. Similarly, if x ∈ Wu(p), then for large n > 0,

T^{−n} x ≈ p + c1 λ1^{−n} v1 + c2 λ2^{−n} v2 ≈ p + c1 λ1^{−n} v1 ,

and hence the c1-component must vanish. More precisely, v1 and v2 are tangent to Ws(p) and Wu(p) at p, respectively. See Fig. 11.1 where p is a saddle point. Flow lines near p describe the action of T, i.e., if x is on one of the flow lines then Tx is also on the same flow line. It is known that at x ∈ Ws(p) the subspace E1(x) in the Multiplicative Ergodic Theorem is tangent to Ws(p). Similarly, at x ∈ Wu(p) the subspace

Fig. 11.1. A saddle point p with flow lines

E2(x) is tangent to Wu(p). This idea can be used to sketch E1(x) and E2(x) without using as many significant digits as we did in Chap. 10. In other words, we first draw Ws(p) and Wu(p), and then find the tangential directions. To find the stable manifold of p we proceed as follows: Choose an arc J ⊂ Ws(p) such that (i) J is an intersection of an open ball centered at p and Ws(p), and (ii) T(J) ⊂ J. Then J ⊂ T^{−1}(J) ⊂ T^{−2}(J) ⊂ · · · and

Ws(p) = ∪_{n=0}^{∞} T^{−n}(J) .

See Fig. 11.2 where J is represented by a thick arc. To prove the equality, take x ∈ T −n (J) for n ≥ 1. Then T n x ∈ J ⊂ Ws (p), and so limm→∞ T m+n x = limm→∞ T m (T n x) = p. Hence x ∈ Ws (p). For the converse, take x ∈ Ws (p). Then T n x ∈ Ws (p). If n is suﬃciently large, then T n x ≈ p, and so T n x ∈ J. Hence x ∈ T −n (J). In simulations, J is a very short line segment tangent to the stable manifold. As T −1 is iterated, the component in the stable direction for T −1 (equivalently, in the unstable direction for T ) shrinks to zero. Thus T −n (J) approximates the stable manifold better and better as n increases.

Fig. 11.2. Generation of the stable manifold of a hyperbolic ﬁxed point p

Next, to find the unstable manifold of p, we choose an arc K ⊂ Wu(p) such that (i) K is an intersection of an open ball centered at p and Wu(p), and (ii) K ⊂ T(K). Then K ⊂ T(K) ⊂ T²(K) ⊂ · · · and

Wu(p) = ∪_{n=0}^{∞} T^n(K) .

See Fig. 11.3 where K is represented by a thick arc. To prove the equality, note that T^n(K) ⊂ T^n(Wu(p)) ⊂ Wu(p) for n ≥ 0. For the converse, take x ∈ Wu(p). Then T^{−n} x ∈ K for sufficiently large n since lim_{n→∞} T^{−n} x = p. Hence x ∈ T^n(K). In simulations, K is a very short line segment tangent to the unstable manifold. As T is iterated, the component in the stable direction for T shrinks to zero, and so T^n(K) approximates the unstable manifold better and better as n increases.
Fig. 11.3. Generation of the unstable manifold of a hyperbolic ﬁxed point p

Example 11.2. Consider the toral automorphism given by

T(x, y) = (3x + 2y, x + y) (mod 1)

on the torus. The matrix DT = ( 3 2 ; 1 1 ) has eigenvalues λ1 = 2 − √3, λ2 = 2 + √3 with corresponding eigenvectors v1 = (1 − √3, 1), v2 = (1 + √3, 1). Since |λ1| < 1 < |λ2|, T has a hyperbolic fixed point p = (0, 0). Note that Ws(p) = {t v1 (mod 1) : −∞ < t < ∞} and Wu(p) = {t v2 (mod 1) : −∞ < t < ∞}. Both sets are dense in the torus since the slopes of the directions are irrational. In Fig. 11.4 parts of Ws(p) and Wu(p) are drawn.
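The eigendata in Example 11.2 are easy to confirm. A small Python check (illustrative, not part of the book's Maple code) verifies A v_i = λ_i v_i and that the eigenvalue moduli straddle 1, which is exactly the hyperbolicity condition.

```python
import math

A = [[3, 2], [1, 1]]
l1, l2 = 2 - math.sqrt(3), 2 + math.sqrt(3)
v1, v2 = (1 - math.sqrt(3), 1.0), (1 + math.sqrt(3), 1.0)

def apply(A, v):
    # matrix-vector product for a 2x2 matrix
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

# A v_i = lambda_i v_i for both eigenpairs
for lam, v in ((l1, v1), (l2, v2)):
    Av = apply(A, v)
    assert abs(Av[0] - lam*v[0]) < 1e-12 and abs(Av[1] - lam*v[1]) < 1e-12

print(abs(l1) < 1 < abs(l2))  # True: (0, 0) is a hyperbolic (saddle) fixed point
```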

11.2 The Hénon Mapping

Consider the Hénon mapping T(x, y) = (−ax² + y + 1, bx) with a = 1.4, b = 0.3. Then T has two hyperbolic fixed points

p = ( (b − 1 + √((b − 1)² + 4a)) / (2a) , b (b − 1 + √((b − 1)² + 4a)) / (2a) )

b − 1 + (b − 1)2 + 4a b(b − 1 + (b − 1)2 + 4a) p= , , 2a 2a

11.2 The H´enon Mapping 1

337

1

y

y

0

0

1

x

x

1

Fig. 11.4. The stable manifold (left) and the unstable manifold (right) of a ﬁxed point (0, 0) of a toral automorphism

and

q = ( (b − 1 − √((b − 1)² + 4a)) / (2a) , b (b − 1 − √((b − 1)² + 4a)) / (2a) ) .

See Fig. 11.5 where two ﬁxed points are marked by small circles. Note that p is on the H´enon attractor and q is not.
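The fixed-point formulas can be verified numerically, together with the fact that both p and q are saddle points. The Python sketch below is illustrative (not the book's Maple): the eigenvalues of DT(x, y) = ( −2ax 1 ; b 0 ) are the roots of λ² + 2axλ − b = 0.

```python
import math

a, b = 1.4, 0.3

def T(x, y):
    return (-a*x*x + y + 1, b*x)

# fixed points: x-coordinates solve a x^2 + (1-b) x - 1 = 0, and y = b x
r = math.sqrt((b - 1)**2 + 4*a)
p = ((b - 1 + r) / (2*a), b * (b - 1 + r) / (2*a))
q = ((b - 1 - r) / (2*a), b * (b - 1 - r) / (2*a))

def jacobian_eigs(x):
    # eigenvalues of DT at (x, y): roots of l^2 + 2ax*l - b = 0
    disc = math.sqrt(a*a*x*x + b)
    return -a*x + disc, -a*x - disc

for fp in (p, q):
    fx, fy = T(*fp)
    assert abs(fx - fp[0]) < 1e-12 and abs(fy - fp[1]) < 1e-12   # T(fp) = fp
    e1, e2 = jacobian_eigs(fp[0])
    print(min(abs(e1), abs(e2)) < 1 < max(abs(e1), abs(e2)))     # saddle: True
```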

Fig. 11.5. Fixed points p and q of the H´enon mapping together with the H´enon attractor

To sketch the stable manifold of p, take a short line segment J passing through p in the stable direction. Using the formula

T^{−1}(u, v) = ( (1/b) v , u + (a/b²) v² − 1 ) ,

find T^{−n}(J) for some sufficiently large n. See Fig. 11.6 for a part of the stable manifold given by T^{−7}(J), and consult Maple Program 11.5.1. As n increases, T^{−n}(J) densely fills the basin of attraction. If a point x0 lies on the intersection of the Hénon attractor and Ws(p), then the tangential direction
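The inverse formula follows by solving u = −ax² + y + 1, v = bx for (x, y); a short Python check (illustrative) confirms the round trip.

```python
a, b = 1.4, 0.3

def T(x, y):
    return (-a*x*x + y + 1, b*x)

def T_inv(u, v):
    # solve v = b*x and u = -a*x^2 + y + 1 for (x, y)
    return (v / b, u + (a / (b*b)) * v*v - 1)

x, y = 0.3, -0.2
u, v = T(x, y)
xr, yr = T_inv(u, v)
print(xr, yr)  # recovers (0.3, -0.2) up to rounding
```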

to Ws (p) coincides with E1 (x0 ) in the Multiplicative Ergodic Theorem. See Fig. 10.8.

Fig. 11.6. The stable manifold of a ﬁxed point p for the H´enon mapping together with the H´enon attractor

To sketch the unstable manifold of p choose a short line segment K passing through p and ﬁnd T n (K) for some suﬃciently large n. See Fig. 11.7 for T 8 (K) where the ﬁxed point p is marked by a circle, and consult Maple Program 11.5.2. The unstable manifold of p approximates the H´enon attractor.

Fig. 11.7. The unstable manifold of a ﬁxed point p of the H´enon mapping

For the stable and unstable manifolds of the other ﬁxed point q, see Fig. 11.8, and consult Maple Program 11.5.3. The point q lies on the boundary of the basin of attraction, and its stable manifold Ws (q) forms a part of the boundary of the basin. Hence Ws (q) cannot be used to ﬁnd E1 (x) at a point

x on the H´enon attractor. To visualize the fact that q is a saddle point, see Maple Program 11.5.4. Even though two unstable manifolds Wu (p) and Wu (q) do not intersect, they are so much intertwined that they appear to overlap.

Fig. 11.8. The stable and unstable manifolds of the ﬁxed point q of the H´enon mapping: The basin of attraction is represented by the dotted region

11.3 The Standard Mapping

Consider the modified standard mapping

T(x, y) = (2x − y + C sin 2πx, x) (mod 1) ,

which was introduced in Ex. 10.15. Choose C = 0.5. There are two fixed points p = (0, 0) and q = (1/2, 1/2), where p is hyperbolic and q is not. The complex eigenvalues of DT(q) are of modulus 1, and there exist singular continuous invariant measures defined on invariant closed curves around q. See the right plot in Fig. 10.5. The stable and unstable manifolds at the hyperbolic fixed point p are presented in Fig. 11.9. The tangential directions of the stable and the unstable manifolds of p coincide with E1 and E2 in Fig. 10.11. Let us check the symmetry of Ws(p), Wu(p). First, define the rotation by 180 degrees around the origin by F(x, y) = (−x, −y) (mod 1). Then F^{−1} T F = T. Take x ∈ Ws(p). Since F^{−1} T^n F x = T^n x → p as n → ∞, we have T^n(Fx) → Fp = p as n → ∞. Thus Fx ∈ Ws(p), and hence F(Ws(p)) ⊂ Ws(p). Since F^{−1} = F, we have Ws(p) ⊂ F^{−1}(Ws(p)) = F(Ws(p)). Thus F(Ws(p)) = Ws(p), and similarly, F(Wu(p)) = Wu(p). Now define G(x, y) = (y, x), which is the reflection with respect to y = x. Then G = G^{−1}. Since

Fig. 11.9. The stable manifold (left) and the unstable manifold (right) of the ﬁxed point (0, 0) of the modiﬁed standard mapping: Compare the tangential directions with Fig. 10.11

G−1 T G(x, y) = G−1 T (y, x) = G−1 (2y − x + C sin 2πy, y) = (y, 2y − x + C sin 2πy) = T −1 (x, y) , we have G−1 T G = T −1 . Take x ∈ Wu (p). Then T −n x → p, G−1 T n Gx → p as n → ∞. Thus T n Gx → Gp = p, Gx ∈ Ws (p), and hence G(Wu (p)) ⊂ Ws (p). Similarly, G(Ws (p)) ⊂ Wu (p). Since Wu (p) ⊂ G−1 (Ws (p)) = G(Ws (p)) and Ws (p) ⊂ G−1 (Wu (p)) = G(Wu (p)), we have Wu (p) ⊂ G(Ws (p)) ⊂ G(G(Wu (p))) ⊂ Wu (p) , and hence G(Ws (p)) = Wu (p). Similarly, G(Wu (p)) = Ws (p). In summary, the left and the right plots in Fig. 11.9 are invariant under the rotation by 180 degrees around the origin, and they are mirror images of each other with respect to y = x.
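Both conjugacies, F^{−1}TF = T and G^{−1}TG = T^{−1}, can be spot-checked in a few lines of Python (illustrative; here T^{−1}(x, y) = (y, 2y − x + C sin 2πy) (mod 1) is the inverse of this version of the standard mapping, as derived above).

```python
import math

C = 0.5

def T(x, y):
    return ((2*x - y + C*math.sin(2*math.pi*x)) % 1.0, x % 1.0)

def T_inv(x, y):
    return (y % 1.0, (2*y - x + C*math.sin(2*math.pi*y)) % 1.0)

def F(x, y):  # rotation by 180 degrees around the origin (mod 1)
    return ((-x) % 1.0, (-y) % 1.0)

def G(x, y):  # reflection with respect to y = x
    return (y, x)

pt = (0.123, 0.456)
# F = F^{-1} and G = G^{-1}, so the conjugacies read F T F = T and G T G = T^{-1}
ftf = F(*T(*F(*pt)))
gtg = G(*T(*G(*pt)))
print(max(abs(u - v) for u, v in zip(ftf, T(*pt))))      # ~ 0
print(max(abs(u - v) for u, v in zip(gtg, T_inv(*pt))))  # ~ 0
```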

11.4 Stable and Unstable Manifolds of Periodic Points

So far in this chapter we have considered the stable and unstable manifolds of a fixed point p. Now we extend the definitions of the stable and unstable manifolds to the case when p is a periodic point of period k ≥ 2, i.e., T^k p = p and T^j p ≠ p for 1 ≤ j < k.

Definition 11.3. Let p be a periodic point of period k ≥ 2. Define the stable (invariant) manifold of p by

Ws(p) = {x ∈ X : lim_{n→∞} d(T^{kn} x, p) = 0} ,

and the unstable (invariant) manifold of p by

Wu(p) = {x ∈ X : lim_{n→∞} d(T^{−kn} x, p) = 0} .

As an interesting example we consider another version of the modified standard mapping
    T(x, y) = (2x + y + C sin 2πx, x) (mod 1).
Its inverse is given by
    T^{-1}(x, y) = (y, x − 2y − C sin 2πy) (mod 1).
If T(x, y) = (x, y), then x = y and 2x + C sin 2πx = 0 (mod 1). Let t1, t2, 0 < t1 < t2 < 1, be the two solutions of 2x + C sin 2πx = 1 other than 1/2. (See Fig. 11.10.) Then t1 + t2 = 1, and a fixed point of T is of the form (z, z), z ∈ {0, t1, 1/2, t2}. It is easy to check that a periodic point of T of period 2 is of the form (z, w), z ≠ w, z, w ∈ {0, t1, 1/2, t2}, and that it satisfies T(z, w) = (w, z). All four fixed points are hyperbolic, and only six periodic points of period 2 are hyperbolic. Consult Maple Program 11.5.5.
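For C = 0.45 the values t1, t2 and the relation T(z, w) = (w, z) can be checked with a short script. The following Python sketch is an illustration paralleling Maple Program 11.5.5 (the bisection brackets [0.05, 0.45] and [0.55, 0.95] are ad hoc choices):

```python
import math

C = 0.45

def T(x, y):
    """The modified standard mapping T(x,y) = (2x + y + C sin 2*pi*x, x) (mod 1)."""
    return ((2*x + y + C*math.sin(2*math.pi*x)) % 1.0, x)

def g(x):
    """Fixed points on the diagonal solve 2x + C sin 2*pi*x = 1."""
    return 2*x + C*math.sin(2*math.pi*x) - 1

def bisect(f, a, b, n=60):
    """Plain bisection on a sign change."""
    for _ in range(n):
        m = 0.5*(a + b)
        if f(a)*f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5*(a + b)

t1 = bisect(g, 0.05, 0.45)
t2 = bisect(g, 0.55, 0.95)

def torus_dist(p, q):
    return max(min(abs(u - v), 1 - abs(u - v)) for u, v in zip(p, q))

coords = [0.0, t1, 0.5, t2]
# T(z, w) = (w, z) for all 16 pairs, so the off-diagonal pairs have period 2
ok = all(torus_dist(T(z, w), (w, z)) < 1e-9 for z in coords for w in coords)
```

The symmetry t1 + t2 = 1 follows from sin 2π(1 − x) = −sin 2πx, and the script confirms it to machine precision.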


Fig. 11.10. The graph of y = 2x + C sin(2πx) − 1 (left), the fixed points (middle) and the periodic points of period 2 (right)

A few ergodic invariant measures of T for C = 0.45 are presented in Fig. 11.11. In the left plot we have the absolutely continuous invariant measure of the form ρ(x, y) dxdy. Let A = {(x, y) : ρ(x, y) > 0}. Since |det DT| = 1, T preserves area. Hence ρ(x, y) = 1/λ(A) on A, where λ denotes Lebesgue measure. In the middle plot we have four singular continuous invariant measures supported on the closed curves surrounding the periodic points (1/2, t1), (1/2, t2) and their images under T. Each of them is a union of two closed curves in the torus. In the


Fig. 11.11. An absolutely continuous invariant measure (left) and two singular continuous invariant measures (middle and right)

right plot, for a singular continuous measure supported on the closed curves surrounding the periodic point (1/2, 0), we have only two closed curves since the two pairs of edges are topologically identified on the torus. Let F1 = (t1, t1), F2 = (1/2, 1/2), F3 = (t2, t2) denote three of the fixed points, and let P1 = (t1, t2), P2 = (t2, t1) denote two of the periodic points. In Fig. 11.12 are plotted the images under T^2 and T^4 of circles centered at the hyperbolic points F1, F2, F3, P2, and at two nonhyperbolic periodic points. The circles centered at hyperbolic points are elongated along the unstable directions, and the circles centered at nonhyperbolic points are more or less rotated around their centers. The images of the circles centered at F1 and F2 are so stretched that they go out of bounds and come back by the modulo 1 operation. The images of a circle centered at P2 under T^j, j odd, are closed curves surrounding P1 = T(P2); they are not drawn here.
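The area-preservation claim used above is immediate from the Jacobian: det DT = −1 identically, whatever the point is. A minimal Python check (illustrative; the grid of sample points is arbitrary):

```python
import math

C = 0.45

def jacobian(x, y):
    """Jacobian of T(x,y) = (2x + y + C sin 2*pi*x, x) (mod 1)."""
    return [[2 + 2*math.pi*C*math.cos(2*math.pi*x), 1.0],
            [1.0, 0.0]]

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

# det DT = -1 at every point, so |det DT| = 1 and T preserves area
dets = [det2(jacobian(i/100.0, j/7.0)) for i in range(100) for j in range(7)]
```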


Fig. 11.12. Images of circles under T 2 (left) and T 4 (right): The circles are centered at F1 , F2 , F3 , P2 and at two unnamed nonhyperbolic periodic points
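The elongation recorded in Fig. 11.12 can be reproduced in a few lines. The Python sketch below is an illustration, not the book's Maple program; the radius 0.03 follows Maple Program 11.5.6, the value of t1 is the root computed in Sect. 11.5.5, and the stretch factor "about 4" is a quick estimate of the dominant eigenvalue of DT^2 at F1.

```python
import math

C = 0.45

def T(x, y):
    """The modified standard mapping (mod 1)."""
    return ((2*x + y + C*math.sin(2*math.pi*x)) % 1.0, x % 1.0)

def tdist(p, q):
    """Euclidean distance on the torus."""
    dx = min(abs(p[0] - q[0]), 1 - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), 1 - abs(p[1] - q[1]))
    return math.hypot(dx, dy)

t1 = 0.2786308603        # root of 2x + C sin 2*pi*x = 1, from Sect. 11.5.5
r = 0.03                 # common radius used in Maple Program 11.5.6
N = 400
circle = [(t1 + r*math.cos(2*math.pi*i/N), t1 + r*math.sin(2*math.pi*i/N))
          for i in range(N)]
for _ in range(2):       # apply T twice
    circle = [T(x, y) for (x, y) in circle]
# the image is an elongated oval: the maximal displacement from F1 = (t1, t1)
# grows roughly by the dominant eigenvalue of DT^2 there (about 4)
max_disp = max(tdist(p, (t1, t1)) for p in circle)
```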


In Fig. 11.13 there are plotted the stable manifolds of F1, F2, F3, P1, P2. Other less interesting fixed or periodic points are not considered here. Only a part of each manifold is drawn. Consult Maple Program 11.5.7.


Fig. 11.13. The stable manifolds of the modiﬁed standard mapping

In Fig. 11.14 there are plotted the unstable manifolds of F1, F2, F3, P1 and P2. Observe that some of the stable manifolds overlap some of the unstable manifolds. For example, the arc connecting F1 and F2 is a part of Wu(F1), and it is also a part of Ws(F2). Since T(−x, −y) = −T(x, y) (mod 1), i.e., F^{-1} ∘ T ∘ F = T, where F is defined in Sect. 11.3, the stable and unstable manifolds of T are invariant under the rotation by 180 degrees, as observed in Figs. 11.13 and 11.14. Let R be the counterclockwise rotation by 90 degrees. Then R^{-1} ∘ T ∘ R = T^{-1}, and so R(Ws) = Wu and R(Wu) = Ws by the same argument as in Sect. 11.3. Compare Figs. 11.13 and 11.14. Since Ws(F3) and Wu(P2) are invariant under T^2, we have
    T^2(Ws(F3) ∩ Wu(P2)) ⊂ Ws(F3) ∩ Wu(P2).
Near the periodic point P2 choose a point
    z0 ∈ Ws(F3) ∩ Wu(P2),

z0 ≠ P2,

label it as 0, and label the images T^n z0 as n. See Fig. 11.15. Observe that T^{2n} z0 ∈ Ws(F3) ∩ Wu(P2) for every n. Consult Maple Program 11.5.8. Since


Fig. 11.14. The unstable manifolds of the modiﬁed standard mapping

F3 is a fixed point, T^n z0 converges to F3 but T^n z0 ≠ F3 for any n. Hence an area element at z0 is squeezed in the direction of Ws(F3) under T^{2n} as n → ∞. Since T preserves area, the area element must be expanded in some other direction. In Fig. 11.15 the unstable manifold of P2 is stretched and folded as it tries to reach F3.


Fig. 11.15. An orbit of a point z0 labeled as 0 on the intersection of the stable and the unstable manifolds of the modiﬁed standard mapping

11.5 Maple Programs


11.5.1 A stable manifold of the Hénon mapping

Here is how to obtain the stable manifold of the fixed point p of the Hénon mapping given in Sect. 11.2.
> with(plots): with(linalg):
Define the Hénon mapping T.
> T:=(x,y)->(y+1-a*x^2,b*x);
Define the inverse of T.
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2);
        S := (x, y) -> (y/b, x - 1 + a*y^2/b^2)
Check whether T∘S is the identity mapping. Observe that T(S(x, y)) = (x, y).
> T(S(x,y));
        x, y
Find the Jacobian matrix.
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
        DT := (x, y) -> matrix([[-2*a*x, 1], [b, 0]])
> a:=1.4: b:=0.3:
Discard the transient points.
> seed[0]:=(0,0):
> for k from 1 to 10000 do seed[0]:=T(seed[0]): od:
Plot the Hénon attractor.
> SampleSize:=2000:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find the fixed points.
> Sol:=solve(b*x+1-a*x^2=x,x);
        Sol := -1.131354477, 0.6313544771
Choose the fixed point p with a positive x-coordinate, which lies on the Hénon attractor.
> if Sol[1] > Sol[2] then J:=1: else J:=2: fi:
> x0:=Sol[J];
        x0 := 0.6313544771
> y0:=b*x0;
        y0 := 0.1894063431
Find the Jacobian matrix DT(p).
> A:=DT(x0,y0):
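The same construction is compact in Python. The sketch below (an illustration paralleling the Maple session, not the author's code) finds p and the stable eigenvector in closed form and builds a few inverse images of the segment J:

```python
import math

a, b = 1.4, 0.3

def T(x, y):
    """The Henon mapping."""
    return (y + 1 - a*x*x, b*x)

def S(x, y):
    """Its inverse."""
    return (y/b, x - 1 + (a/b**2)*y*y)

# fixed points solve a*x^2 + (1-b)*x - 1 = 0; take the positive root
x0 = ((b - 1) + math.sqrt((1 - b)**2 + 4*a)) / (2*a)
y0 = b*x0

# eigenvalues of DT(p) = [[-2*a*x0, 1], [b, 0]] solve l^2 + 2*a*x0*l - b = 0
tr = -2*a*x0
lam_plus = (tr + math.sqrt(tr*tr + 4*b)) / 2
lam_minus = (tr - math.sqrt(tr*tr + 4*b)) / 2
lam_s = lam_plus if abs(lam_plus) < abs(lam_minus) else lam_minus
v = (1.0, b/lam_s)          # eigenvector: the second row gives b*vx = lam*vy

# points of a short segment J through p in the direction v; iterating the
# inverse S on J sweeps out longer and longer pieces of the stable manifold
eps = 1e-4
J = [(x0 + t*eps*v[0], y0 + t*eps*v[1]) for t in (-1.0, -0.5, 0.5, 1.0)]
preimages = [J]
for _ in range(3):
    preimages.append([S(x, y) for (x, y) in preimages[-1]])
```

Forward iteration of T contracts the segment toward p, which is exactly why the stable manifold is generated by iterating S instead.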


Find the eigenvalues, the multiplicities, and the eigenvectors of A.
> Eig:=eigenvectors(A);
        Eig := [-1.923738858, 1, {[-0.9880577560, 0.1540839733]}],
               [0.1559463223, 1, {[-0.4866537468, -0.9361947229]}]
Compare the two eigenvalues and find an eigenvector v in the stable direction.
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1];
        v := [-0.4866537468, -0.9361947229]
Plot the fixed point p.
> fixed:=pointplot([(x0,y0)],symbol=circle):
Choose an arc J on the stable manifold of p, which is approximated by a line segment, also denoted by J, centered at p lying in the directions ±v. Generate N points on J whose images under the iterations of T^{-1} will gradually approximate the stable manifold.
> N:=100000:
> delta:=0.1/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
> stable_p[i]:=(x0+(i-N/2.)*delta1,y0+(i-N/2.)*delta2): od:
Plot J. See the left plot in Fig. 11.16 where the Hénon attractor is also plotted.
> g1:=listplot([[stable_p[1]],[stable_p[N]]]):
> display(g1,attractor,fixed);


Fig. 11.16. Generation of the stable manifold of p for the Hénon mapping: a line segment J tangent to the stable manifold (left) and T^{-1}(J) (right)

Plot T^{-1}(J). See the right plot in Fig. 11.16. To save memory and computing time we use only a subset of the N points to plot a simple graph.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:


> g2:=listplot([seq([stable_p[s*10000]],s=1..N/10000)]):
> display(g2,attractor,fixed);
Plot T^{-2}(J). See the left plot in Fig. 11.17.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:
> g3:=listplot([seq([stable_p[s*2000]],s=1..N/2000)]):
> display(g3,attractor,fixed);


Fig. 11.17. Generation of the stable manifold of p for the Hénon mapping: T^{-2}(J) (left) and T^{-3}(J) (right)

Plot T^{-3}(J). See the right plot in Fig. 11.17.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:
> g4:=listplot([seq([stable_p[s*1000]],s=1..N/1000)]):
> display(g4,attractor,fixed);
Similarly we plot T^{-4}(J) and T^{-5}(J). See Fig. 11.18. For T^{-7}(J) see Fig. 11.6.


Fig. 11.18. Generation of the stable manifold of p for the Hénon mapping: T^{-4}(J) (left) and T^{-5}(J) (right)


11.5.2 An unstable manifold of the Hénon mapping

Here is how to obtain the unstable manifold of the fixed point p of the Hénon mapping in Sect. 11.2. First, we proceed as in Maple Program 11.5.1 and find a hyperbolic fixed point p with a positive x-coordinate. For the beginning of the program see Maple Program 11.5.1.
Find the Jacobian matrix of the Hénon mapping T.
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
Choose the eigenvector v in the unstable direction.
> if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1];
        v := [-0.98805775599470461437, 0.15408397327012552629]
> N:=2000:
> delta:=1./N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
Compute K.
> for i from 1 to N do
> unstable_p[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2):
> od:
Plot a line segment K passing through p and tangent to the unstable manifold.
> listplot([[unstable_p[1]],[unstable_p[N]]]);
See the first plot in Fig. 11.19 where 1000 points on the Hénon attractor are also plotted. Now compute T(K) in the following.
> for i from 1 to N do
> unstable_p[i]:=T(unstable_p[i]):
> od:
Plot T(K). The line segment K shrinks in the stable direction, so T(K) approximates the unstable manifold better than K does.
> listplot([seq([unstable_p[i*10]],i=1..N/10)]);
Compute T^2(K) in the following.
> for i from 1 to N do
> unstable_p[i]:=T(unstable_p[i]):
> od:
Plot T^2(K).
> listplot([seq([unstable_p[i*10]],i=1..N/10)]);
For T^3(K), T^4(K), and T^5(K) we proceed similarly. In Fig. 11.19 the line segment K and its images T^j(K), 1 ≤ j ≤ 5, are given together with the Hénon attractor and the fixed point p (from top left to bottom right).
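For the unstable manifold the direction of iteration is reversed: push a short segment forward under T. A hedged Python sketch (illustrative, with the eigenvector obtained in closed form as before):

```python
import math

a, b = 1.4, 0.3

def T(x, y):
    """The Henon mapping."""
    return (y + 1 - a*x*x, b*x)

# fixed point p on the attractor and the unstable eigenvalue of DT(p)
x0 = ((b - 1) + math.sqrt((1 - b)**2 + 4*a)) / (2*a)
y0 = b*x0
tr = -2*a*x0
lam_u = (tr - math.sqrt(tr*tr + 4*b)) / 2     # about -1.9237, modulus > 1
w = (1.0, b/lam_u)                            # unstable eigenvector

# forward images of a short segment K through p along w sweep out the
# unstable manifold; transverse errors are damped by the stable contraction
ts = [-1 + i/500 for i in range(1001)]
seg = [(x0 + t*1e-3*w[0], y0 + t*1e-3*w[1]) for t in ts]
for _ in range(5):
    seg = [T(x, y) for (x, y) in seg]
# the x-extent of the image grows roughly like |lam_u|^5 times the original
span = max(p[0] for p in seg) - min(p[0] for p in seg)
```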


Fig. 11.19. Generation of the unstable manifold of p for the Hénon mapping: a line segment K tangent to the unstable manifold and its images T^j(K), 1 ≤ j ≤ 5 (from top left to bottom right)

11.5.3 The boundary of the basin of attraction of the Hénon mapping

Choose the fixed point q of the Hénon mapping given in Sect. 11.2. The following shows that q is on the boundary of the basin of attraction.
> with(plots): with(linalg):
> a:=1.4: b:=0.3:
Define the Hénon mapping T. Find T^{-1} and the Jacobian matrix of T.
> T:=(x,y)->(y+1-a*x^2,b*x);
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
Throw away the transient points.
> seed[0]:=(0,0):
> for k from 1 to 2000 do seed[0]:=T(seed[0]): od:
Plot 500 points on the Hénon attractor.
> SampleSize:=500:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find a fixed point q that is not on the Hénon attractor.
> Sol:=solve(b*x+1-a*x^2=x,x);
        Sol := -1.131354477, 0.6313544771
> if Sol[1] < Sol[2] then J:=1: else J:=2: fi:
> x1:=Sol[J];


        x1 := -1.131354477
> y1:=b*x1;
        y1 := -0.3394063431
Plot the fixed point q.
> fixed:=pointplot([(x1,y1)],symbol=circle):
Find the Jacobian matrix at q.
> A:=DT(x1,y1):
> Eig:=eigenvectors(A):
Choose the eigenvector v in the unstable direction.
> if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1]:
Take a line segment L passing through q in the direction of v. Since q lies on the boundary of the basin of attraction, one half of L lies inside the basin and the other half lies outside it, as will be seen in Fig. 11.20.
> N:=10000:
This is the number of points on L.
> delta:=0.5/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
> unstable_q[i]:=(x1+(i-N/2)*delta1,y1+(i-N/2)*delta2):
> od:
To plot L we use only two endpoints.
> g0:=listplot([[unstable_q[1]],[unstable_q[N]]]):
> display(attractor,fixed,g0);
See the left plot in Fig. 11.20 for L, where the Hénon attractor is also plotted.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g1:=listplot([seq([unstable_q[i*100]],i=1..N/100)]):
> display(g1,fixed,attractor);
See the second plot in Fig. 11.20, where the portion of T(L) outside the basin of attraction starts to diverge to infinity and the portion inside the basin starts to move toward the Hénon attractor.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g2:=listplot([seq([unstable_q[i*50]],i=1..N/50)]):
> display(g2,attractor,fixed);
See the right plot in Fig. 11.20. The portion of T^2(L) outside the basin of attraction escapes farther to infinity, and the portion inside the basin almost overlaps the Hénon attractor.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g3:=listplot([seq([unstable_q[i]],i=N/2..N)]):
> display(g3,attractor,fixed);


Fig. 11.20. Generation of the unstable manifold of q for the Hénon mapping: a line segment L tangent to the unstable manifold, and its images T(L), T^2(L) (from left to right)

From now on we consider the part of T^k(L) inside the basin of attraction. The part outside the basin stretches to infinity as k increases.
> for i from N/2 to N do
> unstable_q[i]:=T(unstable_q[i]): od:
> g4:=listplot([seq([unstable_q[i]],i=N/2..N)]):
> display(g4,attractor,fixed);
  ...
Similarly we obtain g5, g6, g7 and g8. See Fig. 11.21 for T^k(L), 3 ≤ k ≤ 8, where the Hénon attractor is also plotted.

Fig. 11.21. Generation of an unstable manifold of q for the Hénon mapping: parts of T^k(L), 3 ≤ k ≤ 8, inside the basin of attraction (from top left to bottom right)

Now we plot a part of the stable manifold of q, which forms a part of the boundary of the basin of attraction.


First, find the tangent vector w in the stable direction.
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> w:=Eig[k][3][1];
        w := [-0.2996032920, 0.9766534317]
Choose a line segment I in the direction of w.
> M:=1000:
This is the number of points on I.
> eta:=0.25/M:
> eta1:=eta*w[1]: eta2:=eta*w[2]:
> for i from 1 to M do
> stable_q[i]:=(x1+(i-M/2)*eta1,y1+(i-M/2)*eta2):
> od:
To plot I we use only two endpoints.
> f0:=listplot([[stable_q[1]],[stable_q[M]]]):
> display(f0,fixed,attractor);
See the left plot in Fig. 11.22, where the Hénon attractor is also plotted.
> for i from 1 to M do stable_q[i]:=S(stable_q[i]): od:
> f1:=listplot([seq([stable_q[i]],i=1..M)]):
> display(f1,fixed,attractor);
See the middle plot in Fig. 11.22.
> for i from 1 to M do stable_q[i]:=S(stable_q[i]): od:
> f2:=listplot([seq([stable_q[i]],i=1..M)]):
> display(f2,fixed,attractor);
See the right plot in Fig. 11.22.


Fig. 11.22. Generation of the stable manifold of q for the Hénon mapping: a line segment I tangent to the stable manifold, and its images T^{-1}(I), T^{-2}(I) (from left to right)

Now plot the stable and the unstable manifolds of q together. > display(g2,g8,f2,fixed); See Fig. 11.8. To ﬁnd the basin of attraction, consult Maple Program 3.9.14.


11.5.4 Behavior of the Hénon mapping near a saddle point

Consider the fixed point q of the Hénon mapping. The following illustrates the existence of a saddle structure around q.
> with(plots): with(linalg):
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
Find the inverse of T.
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2):
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
> seed[0]:=(0,0):
Discard the transient points.
> for k from 1 to 2000 do seed[0]:=T(seed[0]): od:
> SampleSize:=500:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find the fixed point q.
> Sol:=solve(b*x+1-a*x^2=x,x):
> if Sol[1] < Sol[2] then J:=1: else J:=2: fi:
> x0:=Sol[J];
        x0 := -1.131354477
> y0:=b*x0;
        y0 := -0.3394063431
> fixed:=pointplot([(x0,y0)],symbol=circle):
Find the stable manifold of q.
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> w:=Eig[k][3][1]:
> M:=100:
> eta:=0.1/M:
> eta1:=eta*w[1]: eta2:=eta*w[2]:
> for i from 1 to M do
> stable_q[i]:=(x0+(i-0.75*M)*eta1,y0+(i-0.75*M)*eta2):
> od:
> for i from 1 to M do stable_q[i]:=S(S(stable_q[i])): od:
Plot a part of the stable manifold of q near q.
> f3:=listplot([seq([stable_q[i]],i=1..M)]):
We will consider a curve of the form t → (t, C3(t − C1)(t − C2)) that partially surrounds the stable manifold of q. Choose the range t1 ≤ t ≤ t2.


> t1:=-1.6: t2:=1.6:
Find a parabola that partially surrounds the stable manifold of q from the outside.
> C1:=-1.34: C2:=1.4: C3:=1.3:
> outer:=t->(t,C3*(t-C1)*(t-C2)):
Plot the curve J = {outer(t) : t1 ≤ t ≤ t2} and its images under T.
> para_out[0]:=plot([outer(t),t=t1..t2]):
> para_out[1]:=plot([T(outer(t)),t=t1..t2]):
> para_out[2]:=plot([T(T(outer(t))),t=t1..t2]):
> para_out[3]:=plot([T(T(T(outer(t)))),t=t1..t2]):

Attach the labels First, Second, and Third to the images T(J), T^2(J) and T^3(J), respectively. (A different color may be used for each image.)
> order_out[1]:=textplot([T(outer(t1)),‘First‘]):
> order_out[2]:=textplot([T(T(outer(t1))),‘Second‘]):
> order_out[3]:=textplot([T(T(T(outer(t1)))),‘Third‘]):
> display(fixed,f3,attractor,seq(para_out[i],i=0..3), seq(order_out[i],i=1..3));
See Fig. 11.23. The fixed point q is marked by a circle. The parabola J is close to the stable manifold, so it is shrunk into the curve T(J); then it is expanded a little to T^2(J) along the unstable direction. Next, it is further expanded to T^3(J), which approximates a part of the unstable manifold.


Fig. 11.23. The image of the outer parabola is attracted to the unstable manifold of q and diverges to inﬁnity
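The saddle picture can also be probed pointwise. In the Python sketch below (illustrative; the two probe points are hypothetical choices, one just outside the basin boundary near q and one, the origin, well inside the basin) the outside orbit diverges to infinity while the inside orbit stays near the attractor:

```python
a, b = 1.4, 0.3

def T(x, y):
    """The Henon mapping."""
    return (y + 1 - a*x*x, b*x)

def orbit_size(p, k):
    """Largest coordinate magnitude after k steps; inf once the orbit escapes."""
    for _ in range(k):
        p = T(*p)
        if abs(p[0]) > 1e6:          # escaped: |x| grows without bound
            return float('inf')
    return max(abs(p[0]), abs(p[1]))

# hypothetical probe points near q ~ (-1.1314, -0.3394)
escapes = orbit_size((-1.30, -0.39), 50)   # outside the basin boundary
stays = orbit_size((0.0, 0.0), 50)         # the standard in-basin seed
```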

Choose a curve of the form t → (t, D3(t − D1)(t − D2)) that is partially surrounded by the stable manifold of q from the inside.
> D1:=-1.13: D2:=1.19: D3:=1.47:
> inner_c:=t->(t,D3*(t-D1)*(t-D2)):
Plot the curve H = {inner_c(t) : t1 ≤ t ≤ t2} and its images under T.

> para_in[0]:=plot([inner_c(t),t=t1..t2]):
> para_in[1]:=plot([T(inner_c(t)),t=t1..t2]):
> para_in[2]:=plot([T(T(inner_c(t))),t=t1..t2]):
> para_in[3]:=plot([T(T(T(inner_c(t)))),t=t1..t2]):
> para_in[4]:=plot([T(T(T(T(inner_c(t))))),t=t1..t2]):

Attach the labels First, Second, Third and Fourth to the images T(H), T^2(H), T^3(H) and T^4(H), respectively.
> order_in[1]:=textplot([T(inner_c(t1)),‘First‘]):
> order_in[2]:=textplot([T(T(inner_c(t1))),‘Second‘]):
> order_in[3]:=textplot([T(T(T(inner_c(t1)))),‘Third‘]):
> order_in[4]:=textplot([T(T(T(T(inner_c(t1))))),‘Fourth‘]):
> display(fixed,f3,attractor,seq(para_in[i],i=0..4), seq(order_in[i],i=1..4));
See Fig. 11.24. The fixed point q is marked by a circle.


Fig. 11.24. The image of the inner parabola gradually approximates the Hénon attractor

11.5.5 Hyperbolic points of the standard mapping

Check the hyperbolicity of the fixed and the periodic points of the modified standard mapping.
> with(linalg):
> C:=0.45:
> fractional:=x->x-floor(x):
Define the modified standard mapping T.
> T:=(x,y)->(fractional(2*x+y+C*sin(2*Pi*x)),x);
        T := (x, y) -> (fractional(2x + y + C sin(2πx)), x)
Find the Jacobian matrix of T.
> DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]);


        DT := (x, y) -> matrix([[2 + 2*Pi*C*cos(2*Pi*x), 1], [1, 0]])
To find the fixed points, solve 2x + C sin 2πx = 1.
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2);
        t1 := 0.2786308603
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1);
        t2 := 0.7213691397
In the following we check T^2(x_r, y_s) = (x_r, y_s).
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
> for r from 1 to 4 do
> for s from 1 to 4 do
> print(T(T(x[r],y[s]))-(x[r],y[s]));
> od; od;
        0., 0.
        0., 0.
        0., 0.
        0.9999999996, 0.
        ...
Each output is equal to (0, 0) modulo 1 within an acceptable margin of error, and we conclude that T^2(x_r, y_s) = (x_r, y_s). Now we show that T(x_r, y_s) = (y_s, x_r).
> for r from 1 to 4 do
> for s from 1 to 4 do
> print(T(x[r],y[s])-(y[s],x[r]));
> od; od;
        0., 0
        0., 0
        ...
Each output is equal to (0, 0) modulo 1 within an acceptable margin of error. To check the hyperbolicity of the fixed and the periodic points, we compute the eigenvalues of the Jacobian matrix of T^2.
> for r from 1 to 4 do
> for s from 1 to 4 do
> x0:=x[r]: y0:=y[s]:
> A:=evalm(DT(T(x0,y0))&*DT(x0,y0)):
> Eig:=eigenvectors(A):
> if abs(abs(Eig[1][1]) - 1) > 0.00001 then
> print([r,s],Eig[1][1],Eig[2][1],'hyperbolic')
> else print([r,s],Eig[1][1],Eig[2][1],'not hyperbolic'):
> fi:
> od;
> od;


        [1, 1], 0.03958, 25.26, hyperbolic
        [1, 2], 9.103, 0.1099, hyperbolic
        [1, 3], -0.9972 - 0.07492 I, -0.9972 + 0.07492 I, not hyperbolic
        [1, 4], 0.1099, 9.103, hyperbolic
        ...
Check that the complex eigenvalues are of modulus 1 in the above. All the fixed points, which are indexed by [r, r], 1 ≤ r ≤ 4, are hyperbolic.

11.5.6 Images of a circle centered at a hyperbolic point under the standard mapping

We study the behavior of the standard mapping T by tracking the images under T of circles centered at the periodic points.
> with(plots):
> C:=0.45:
> fractional:=x->x-floor(x):
> T:=(x,y)->(fractional(y+2*x+C*sin(2*Pi*x)),x):
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2):
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1):
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
Choose the number of points on each circle.
> N:=500:
Choose the common radius a of the circles.
> a:=0.03:
> for r from 2 to 4 do
> for s from 2 to r do
> for i from 1 to N do
> pt_circle[i]:=(x[r]+a*cos(2*Pi*i/N),y[s]+a*sin(2*Pi*i/N)):
> od:
> image[0]:=pointplot([seq(pt_circle[i],i=1..N)]):
> for k from 1 to 4 do
> for i from 1 to N do
> pt_circle[i]:=T(pt_circle[i]):
> od:
> image[k]:=pointplot([seq(pt_circle[i],i=1..N)]):
> od:
> g[r,s]:=display(image[0],image[2],image[4]):
> od:
> od:
> display(seq(seq(g[r,s],s=2..r),r=2..4));
See Fig. 11.12.
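The same classification can be reproduced without Maple. The following Python sketch is illustrative: it re-derives t1, t2 by bisection and, instead of computing eigenvectors, uses the fact that DT^2 has determinant 1, so a point is hyperbolic exactly when |trace DT^2| > 2. It confirms that all four fixed points and exactly six of the twelve period-2 points are hyperbolic.

```python
import math

C = 0.45
TWO_PI = 2*math.pi

def T(x, y):
    return ((2*x + y + C*math.sin(TWO_PI*x)) % 1.0, x)

def DT(x, y):
    """Jacobian of T; note that y does not appear."""
    return [[2 + TWO_PI*C*math.cos(TWO_PI*x), 1.0],
            [1.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def g(x):
    return 2*x + C*math.sin(TWO_PI*x) - 1

def bisect(f, a, b, n=60):
    for _ in range(n):
        m = 0.5*(a + b)
        if f(a)*f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5*(a + b)

t1 = bisect(g, 0.05, 0.45)
t2 = bisect(g, 0.55, 0.95)
coords = [0.0, t1, 0.5, t2]

# each (z, w) satisfies T^2(z, w) = (z, w); det DT^2 = 1, so the point is
# hyperbolic exactly when |trace DT^2| > 2
hyperbolic = {}
for r, z in enumerate(coords):
    for s, w in enumerate(coords):
        A = matmul(DT(*T(z, w)), DT(z, w))
        hyperbolic[(r, s)] = abs(A[0][0] + A[1][1]) > 2.0

n_fixed_hyp = sum(hyperbolic[(r, r)] for r in range(4))
n_period2_hyp = sum(hyperbolic[(r, s)] for r in range(4)
                    for s in range(4) if r != s)
```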


11.5.7 Stable manifolds of the standard mapping

> with(plots): with(linalg):
> C:=0.45:
> fractional:=x->x-floor(x):
Define the modified standard mapping T and its inverse S.
> T:=(x,y)->(fractional(evalhf(2*x+y+C*sin(2*Pi*x))),x):
> S:=(x,y)->(y,fractional(evalhf(x-2*y-C*sin(2*Pi*y)))):
Find the Jacobian matrix of T.
> DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]):
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2):
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1):
List the coordinates of the fixed and the periodic points of period 2.
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
Choose three of the fixed points and two of the periodic points, and mark them by circles.
> fixed:=pointplot([seq([x[r],y[r]],r=2..4)],symbol=circle):
> periodic:=pointplot([[x[2],y[4]],[x[4],y[2]]], symbol=circle):
Find the stable manifolds of the fixed points.
> N:=1000:
> r:=2:
Choose the number of iterations.
> K1:=12:
Find the stable direction.
> x0:=x[r]: y0:=y[r]:
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2:
> fi:
> v:=Eig[k][3][1]:
Construct N points on a very short segment in the stable direction.
> delta:=0.05/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
> pt_stable[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2):
> od:
> for k from 1 to K1 do
> for i from 1 to N do
> pt_stable[i]:=S(pt_stable[i]):
> od:
> Ws[2][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:


> display(seq(Ws[2][k],k=1..K1),fixed);
See the left plot in Fig. 11.25. If we increase K1, then we can observe the complicated folding of the stable manifold wrapping around the holes seen in Fig. 11.13.


Fig. 11.25. The stable manifolds of the ﬁxed points (t1 , t1 ) (left) and (1/2, 1/2) (right) of the modiﬁed standard mapping

Using the same algorithm we find the stable manifold of (1/2, 1/2). In this case the manifold is simple and we use a small number of points to describe it.
> N:=250:
> r:=3:
> K2:=10:
  ...
> Ws[3][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:
> display(seq(Ws[3][k],k=1..K2),fixed);
See the right plot in Fig. 11.25. The points on the stable manifold are attracted to (1/2, 1/2). The two endpoints do not belong to Ws((1/2, 1/2)). Using the same algorithm we find the stable manifold of (t2, t2).
> N:=1000:
> r:=4:
> K3:=12:
  ...
> Ws[4][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:
> display(seq(Ws[4][k],k=1..K3),fixed);
See the left plot in Fig. 11.26. Find the stable manifolds of the periodic points of period 2. In the following we find Ws((t2, t1)) and Ws((t1, t2)) at the same time.


Fig. 11.26. The stable manifolds of the fixed point (t2, t2) (left) and the periodic points (t1, t2) and (t2, t1) (right) of the modified standard mapping

Choose the number of iterations. > K4:=10: > N:=1000: > (r,s):=(4,2): > x0:=x[r]; y0:=y[s]; x0 := 0.7213691397 y0 := 0.2786308603 Find the stable direction of the Jacobian matrix of T 2 at the periodic point (x0 , y0 ). > A:=evalm(DT(T(x0,y0))&*DT(x0,y0)): > Eig:=eigenvectors(A): > if abs(Eig[1][1]) < abs(Eig[2][1]) then b:=1: > else b:=2: fi: > v:=Eig[b][3][1]: Construct N points on a very short segment in the stable direction. > delta:=0.05/N: > delta1:=delta*v[1]: delta2:=delta*v[2]: > for i from 1 to N do > pt_stable[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2): > od: > for k from 1 to K4 do > for i from 1 to N do > pt_stable[i]:=S(pt_stable[i]): > od: > f[k]:=pointplot([seq([pt_stable[i]],i=1..N)]): > od: > display(seq(f[k],k=1..K4),periodic); See the right plot in Fig. 11.26. For the stable manifolds at the ﬁxed points and the periodic points in one frame, see Fig. 11.13.


11.5.8 Intersection of the stable and unstable manifolds of the standard mapping with(plots): with(linalg): > C:=0.45: > fractional:=x->x-floor(x): > T:=(x,y)->(fractional(evalhf(y+2*x+C*sin(2*Pi*x))),x): > S:=(x,y)->(y,fractional(evalhf(x-2*y-C*sin(2*Pi*y)))): > DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]): > t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2): > t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1): Find the stable manifold of (t2 , t2 ) as in Maple Program 11.5.7. > A:=evalm(DT(t2,t2)): > Eig:=eigenvectors(A): > if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1: > else k:=2: fi: > v:=Eig[k][3][1]: Generate N points, called pt stabl[i], which are approximately on the stable manifold. > N:=2000: > delta:=0.05/N: > delta1:=delta*v[1]: delta2:=delta*v[2]: > for i from 1 to N do > pt_stabl[i]:=(t2+(i-N/2)*delta1,t2+(i-N/2)*delta2): > od: > J:=10: Plot the stable manifold using points of relatively larger size. > for j from 1 to J do > for i from 1 to N do > pt_stabl[i]:=S(pt_stabl[i]): od: > h[j]:=pointplot([seq([pt_stabl[i]],i=1..N)],symbolsize=4): > od: Find the unstable manifold of the periodic point (t2 , t1 ). > B:=evalm(DT(T(t2,t1))&*DT(t2,t1)): > Eig:=eigenvectors(B): > if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1: > else k:=2: fi: > w:=Eig[k][3][1]: > N:=3000: > delta:=0.05/N: > del1:=delta*w[1]: > del2:=delta*w[2]: >


for i from 1 to N do pt_unsta[i]:=(t2+(i-N/2)*del1, t1+(i-N/2)*del2): od: Plot the unstable manifold using points of relatively smaller size. > for j from 1 to J do > for i from 1 to N do > pt_unsta[i]:=T(pt_unsta[i]): > od: > g[j]:=pointplot([seq([pt_unsta[i]],i=1..N)],symbolsize=1): > od: Consider the images of a point z0 on the intersection of the stable manifold of (t2 , t2 ) and the unstable manifold of (t2 , t1 ). > range1:=plot(0.5,x=0.25..0.9,y=0.25..0.9,color=white): > alpha:=0.00675: > z[0]:=(t2 + alpha*w[1],t1 + alpha*w[2]); z0 := 0.7274036685, 0.2816552547 > text[0]:=textplot([z[0]+(0.01,0.02),0]): > L:=10: Label the point zn = T n (z0 ) as n. > for n from 1 to L do > z[n]:=T(z[n-1]): > text[n]:=textplot([z[n]+(0.01,0.02),n]): > od: Using small circles mark the images of a point along the intersection of the stable manifold of (t2 , t2 ) and the unstable manifold of (t2 , t1 ). > circles:=pointplot([seq(z[n],n=0..L)],symbol=circle): > display(range1,seq(g[j],j=1..J),seq(h[j],j=1..J), circles,seq(text[n],n=0..L),text_Ws,text_Wu); See Fig. 11.15. > > >

12 Recurrence and Entropy

The entropy of a shift transformation can be expressed in terms of the first return time. The idea is simple: given a sequence x = (x1 x2 x3 ...), xi ∈ {s1, ..., sk}, find the first recurrence time Rn at which the first n-block x1 ... xn is observed again. Then (log Rn)/n converges to the entropy as n → ∞. If a sequence x is less random, then more repetitive patterns appear in x and it takes less time to observe the same block again; thus we obtain a smaller value for the entropy. For a comprehensive introduction to the first return time and data compression, consult [Shi]. For applications of the first return time in testing random number generators, see [CKd2],[Mau].
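The recipe just described — record the first time the opening n-block reappears and compute (log Rn)/n — takes only a few lines of code. The following Python sketch is illustrative, not from the book; the sequence length, the block length n = 10, and the averaging over 30 starting points are ad hoc choices. It is applied to i.i.d. fair coin flips, whose entropy is log 2, i.e., 1 bit per symbol:

```python
import math
import random

random.seed(1)
N = 60000
x = [random.randrange(2) for _ in range(N)]    # i.i.d. fair coin flips

def rbar(seq, n):
    """First return time: least j >= n with seq[0:n] == seq[j:j+n]."""
    head = seq[:n]
    for j in range(n, len(seq) - n):
        if seq[j:j+n] == head:
            return j
    return None                                 # no recurrence observed

n = 10
samples = []
for start in range(0, 3000, 100):               # 30 starting points, for smoothing
    R = rbar(x[start:], n)
    if R is not None:
        samples.append(math.log(R, 2) / n)
ent_estimate = sum(samples) / len(samples)      # entropy estimate in bits/symbol
```

Averaging over several starting points only smooths the estimate; the theorem below is about a single typical sequence.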

12.1 The First Return Time Formula

Let {s1, ..., sk} be a set of symbols. Consider the shift transformation T defined on X = ∏_{n=1}^{∞} {s1, ..., sk} by (Tx)_n = x_{n+1} for x = (x1 x2 x3 ...). The T-invariance of a measure µ is nothing but the shift invariance, i.e.,
    µ([a1, ..., as]_{n,...,n+s−1}) = µ([a1, ..., as]_{1,...,s}),
where [a1, ..., as]_{n,...,n+s−1} is the cylinder set of length s whose (n + j − 1)th coordinate is the symbol a_j for 1 ≤ j ≤ s. Recall the definition of the first return time R_E in Definition 5.11. If we choose E = [a1, ..., an]_{1,...,n} and take x ∈ E, then
    R_E(x) = min{ j ≥ 1 : x_{j+1} = a1, ..., x_{j+n} = an }.
Let P = {C1, ..., Ck} be the measurable partition of X given by the cylinder sets Ci = [si]_1 = {x ∈ X : x1 = si}. Note that P is a generating partition. Let T^{-j}P, j ≥ 1, be the partition {T^{-j}C1, ..., T^{-j}Ck}. Let Pn(x) ⊂ X be the unique subset that contains x and belongs to the collection
    Pn = P ∨ T^{-1}P ∨ ··· ∨ T^{-(n-1)}P.


Now we define a special form of the first return time by Rn(x) ≡ R_{Pn(x)}(x), i.e.,
    Rn(x) = min{ j ≥ 1 : x_{j+1} = x_1, ..., x_{j+n} = x_n }.
In a picture, Rn(x) is the distance between the start of x and the start of the next occurrence of its first n-block:
    x = a_1 ... a_n ········· a_1 ... a_n ······
        └────── Rn(x) ──────┘

Kac's lemma implies that the average of Rn under the condition xi = ai, 1 ≤ i ≤ n, is equal to 1/Pr(a1 ... an), where Pr(a1 ... an) is the probability of observing a1 ... an in the first n-block of x. The lemma suggests that Rn may be approximated by 1/µ(Pn(x)) in a suitable sense. Since the Shannon–McMillan–Breiman Theorem implies that (1/n) log(1/µ(Pn(x))) converges to the entropy as n → ∞, we expect that (log Rn)/n converges to the entropy, which is true as we shall see in this section.

Here is another viewpoint. According to the Asymptotic Equipartition Property the number of typical subsets in Pn is approximately equal to 2^{nh}. Because of ergodicity an orbit of a typical sequence x would visit almost all the typical subsets in Pn, and hence the return time for almost every starting point would be approximately equal to 2^{nh}.

Now we give an almost identical definition of the first return time: Define
    R̄n(x) = min{ j ≥ n : x_1 ... x_n = x_{j+1} ... x_{j+n} }
for each x = (x1 x2 x3 ...). This definition was used by other researchers. See [OW],[WZ]. Note that Rn and R̄n are defined almost everywhere, that R̄n ≥ n and Rn ≤ R̄n, and that if Rn(x) ≥ n, then Rn(x) = R̄n(x).

Theorem 12.1 (The first return time formula). Let T be the shift transformation on X = ∏_{n=1}^{∞} {s1, ..., sk} with a shift invariant probability measure µ. Suppose that T is ergodic. For x = (x1 x2 x3 ...) ∈ X define
    R̄n(x) = min{ j ≥ n : x_1 ... x_n = x_{j+1} ... x_{j+n} }.
Then, for almost every x,
    lim_{n→∞} (log R̄n(x))/n = entropy.
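Kac's lemma quoted above is easy to see in simulation. The sketch below (illustrative Python; the biased coin and the particular 3-block are arbitrary choices) measures the average gap between occurrences of a fixed block, which is the return time to the corresponding cylinder set; it comes out near 1/Pr(block) = 1/(0.7 · 0.7 · 0.3) ≈ 6.8.

```python
import random

random.seed(7)
N = 200000
p = 0.7
x = [1 if random.random() < p else 0 for _ in range(N)]   # biased coin, P(1) = 0.7

block = (1, 1, 0)               # Pr(block) = 0.7 * 0.7 * 0.3 = 0.147
hits = [i for i in range(N - 3) if tuple(x[i:i+3]) == block]
# gaps between consecutive occurrences are the return times to the
# cylinder set E = [1,1,0]; Kac's lemma predicts an average of 1/Pr(block)
gaps = [b - a for a, b in zip(hits, hits[1:])]
avg_return = sum(gaps) / len(gaps)           # should be near 1/0.147 ~ 6.8
```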

The formula was first studied by Wyner and Ziv [WZ] in relation to data compression algorithms such as Lempel–Ziv coding [LZ]. They proved the convergence in probability, and Ornstein and Weiss [OW] later showed that the convergence is with probability 1, i.e., pointwise almost everywhere. The formula enables us to estimate the entropy by observing a typical sample sequence. In real-world applications there is no infinite binary sequence. If a computer file or a digital message is sufficiently long, then the file or the message is regarded as a part of an infinitely long sequence, and the formula


can be used. The strong points of the formula are its intrinsic beauty and its simple computer implementation. For more details consult [Shi]. Since Rn and R̃n are close to each other, we expect similar results to hold for Rn and R̃n at the same time. In Theorem 12.3 it is shown that the same formula holds for Rn.

Lemma 12.2. Let T be an ergodic shift transformation on a shift space X on finitely many symbols.
(i) If Rn(x) < n, then there exists k, n/3 ≤ k < n, such that R̃k(x) = k.
(ii) If T has positive entropy, then for almost every x there exists N = N(x) such that Rn(x) = R̃n(x) for n ≥ N.

Proof. (i) If Rn(x) = k for some k < n, then x1 . . . xk = x_{k+1} . . . x_{2k} and R̃k(x) = k. If k ≥ n/3, then we are done. If k < n/3, then

x1 . . . x_{n+k} = x1 . . . xk x1 . . . xk x1 . . . xk . . . . . . x1 . . . xl

for some l ≤ k. Hence R̃_{mk}(x) = mk for some m such that n/3 ≤ mk < n.
(ii) Put

E = {x ∈ X : lim sup_{n→∞} (R̃n(x) − Rn(x)) > 0} .

If x ∈ E, then R̃n(x) > Rn(x) for infinitely many n. Hence Rn(x) < n for infinitely many n. Part (i) implies that R̃k(x) = k for infinitely many k and

lim inf_{k→∞} (log R̃k(x))/k = 0 .

Now the Ornstein–Weiss formula implies that E has measure zero.

As a direct application we have the following result.

Theorem 12.3. For an ergodic shift on finitely many symbols,

lim_{n→∞} (log Rn(x))/n = entropy

for almost every x.

Proof. If entropy is positive, we use Lemma 12.2(ii). If entropy is zero, then observe that

lim sup_{n→∞} (log Rn(x))/n ≤ lim sup_{n→∞} (log R̃n(x))/n = lim_{n→∞} (log R̃n(x))/n = 0 .

Remark 12.4. Suppose that every block in ∏_{1}^{∞} {0, 1} has positive probability. Kac's lemma implies that

E[Rn] = ∑_{B∈Pn} E[Rn | B] Pr(B) = ∑_{B∈Pn} (1/Pr(B)) Pr(B) = 2^n .

See Theorem 12.26 for more information.


Example 12.5. (Pointwise version) Simulation results for Theorem 12.3 with the (p, 1 − p)-Bernoulli shifts are given in Fig. 12.1. A horizontal line marks the entropy in each case. For p = 1/2 it takes longer for an n-block to reappear, and we consider Rn for small values of n. For p = 4/5 there are only a few typical n-blocks for small n, and so it does not take too long for a starting block to reappear. That is why we have Rn = 1 for small n. See Maple Program 12.6.1.

Fig. 12.1. y = log Rn(x0)/n at a sequence x0 from the (p, 1 − p)-Bernoulli shift spaces: p = 1/2 and 1 ≤ n ≤ 19 (left) and p = 4/5 and 1 ≤ n ≤ 30 (right)

Example 12.6. (Average version) Simulation results for the averages of the first return time Rn are presented in Figs. 12.2, 12.3. The theoretical average of Rn is equal to 2^n. Jensen's inequality implies that the average of the logarithm is less than or equal to the logarithm of the average. If p is not close to 1/2, then the simulations usually produce smaller values than the theoretical value since our sample size 10^3 was not large enough: There are n-blocks with very small measure, and so the return times to them are very large. If the sample size is small, then these blocks are not sampled, and the average is underestimated. See Maple Program 12.6.2. Consult a similar idea in Maple Program 8.7.2.
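The Jensen step used above — the average of the logarithm never exceeds the logarithm of the average — holds for every finite sample, not just in the limit. A minimal Python check with an arbitrary, purely illustrative sample of return times:

```python
import math

# Hypothetical return-time values for some block length n (illustration only).
sample = [3, 8, 17, 120, 1024, 2]
n = 5
ave_of_log = sum(math.log2(r) / n for r in sample) / len(sample)
log_of_ave = math.log2(sum(sample) / len(sample)) / n
# Jensen's inequality for the concave logarithm:
assert ave_of_log <= log_of_ave
```

This is why the left-hand panels of Figs. 12.2 and 12.3 always lie below the right-hand ones.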

Fig. 12.2. y = Ave[(log Rn)/n] (left) and y = (log Ave[Rn])/n (right) for the (1/2, 1/2)-Bernoulli shift. The horizontal line in the left graph marks the entropy

Fig. 12.3. y = Ave[(log Rn)/n] and y = (log Ave[Rn])/n, 1 ≤ n ≤ 12, for the (1/4, 3/4)-Bernoulli shift. The horizontal line in the left graph marks the entropy

Example 12.7. The pdf's for Rn and (log Rn)/n, n = 10, are given for the (1/4, 3/4)-Bernoulli shift. See Fig. 12.4. A sample value that is far away from most of the other sample values is called an outlier. Outliers are ignored in the left graph, because if all the sample values were used in drawing the graph, then the right tail would be too long. Some values of Rn are equal to 1, which explains the left tail in the right graph. See Maple Program 12.6.3.

Fig. 12.4. Empirical pdf's of Rn (left) and (log Rn)/n (right), n = 10, with the sample size 10^4 for the (1/4, 3/4)-Bernoulli shift

12.2 Lp-Convergence

Here is the Lp-version of the Shannon–McMillan–Breiman Theorem.

Fact 12.8 Let p ≥ 1. Consider an ergodic shift space (X, µ) and its partition Pn into the n-blocks determined by the first n symbols. Define

En(x) = −(1/n) ∑_{B∈Pn} log µ(B) 1_B(x) ,

where 1_B is the indicator function of B. Then En converges to the entropy h for almost every x and also in Lp.


Now we prove Lp-convergence of (log Rn)/n for p ≥ 1.

Theorem 12.9. Let p ≥ 1 and let (X, µ) be an ergodic shift space. Then (log Rn)/n converges to h in Lp.

Proof. In the following we do not distinguish between Rn and R̃n since we are interested in the limits of their integrals, which are equal. The Ornstein–Weiss formula and Fatou's lemma together imply that

h^p ≤ lim inf_{n→∞} ∫_X ((log Rn)/n)^p dµ .

Note that (ln x)^p is concave for sufficiently large x: For p = 1 this is obvious. If p ≥ 2, then

(d²/dx²)(ln x)^p = (p/x²)((p − 1)(ln x)^{p−2} − (ln x)^{p−1}) < 0

for sufficiently large x. Put H0 = −∑_i πi log πi.

12.3 The Nonoverlapping First Return Time

Fig. 12.5. Simulations of Maurer's theorem: the (1/2, 1/2)-Bernoulli shift (left) and the (2/3, 1/3)-Bernoulli shift (right)

Let Pn(x) denote the probability of appearance of the initial n-block in the sequence x = x1 x2 x3 . . .; in other words, Pn(x) = Pr(x1 . . . xn). Let Φ be the density function of the standard normal distribution given in Definition 1.43. Put

σ² = lim_{n→∞} Var[−log Pn(x)] / n ,

where 'Var' stands for variance and σ > 0 is regarded as the normalized standard deviation.

Fact 12.13 ([W1]) For a mixing Markov shift with entropy h we have

lim_{n→∞} Pr( (log R̃n − nh)/(σ √n) ≤ α ) = ∫_{−∞}^{α} Φ(t) dt .

Compare the result with the one in Sect. 8.6. See also Theorem 1.44.

Fact 12.14 (Corollary 1, [Ko]) For an ergodic Markov shift we have

log(R̃n(x) Pn(x)) = o(n^β)

almost surely for any β > 0.

Fact 12.15 (Corollary B5, [W2]) For any ε > 0

−(1 + ε) log n ≤ log(R̃n(x) Pn(x)) ≤ log log n

eventually, almost surely for an ergodic Markov shift. Since

E[log Pn(x)] = ∑ π_{a1} p_{a1 a2} · · · p_{a_{n−1} a_n} log(π_{a1} p_{a1 a2} · · · p_{a_{n−1} a_n})
             = ∑_i πi log πi + (n − 1) ∑_{i,j} πi pij log pij
             = −H0 − (n − 1)h


where the first sum is taken over all n-blocks a1 · · · an, Fact 12.15 implies that

−(1 + ε) log n ≤ E[log R̃n] − (n − 1)h − H0 ≤ log log n

approximately for large n. We prove an extension of Theorem 12.12 to the Markov shifts with a modified definition of the first return time. We obtain a sharper estimate of the convergence rate of its average and, further, propose an algorithm for the estimation of entropy for Markov shifts.

Definition 12.16. The modified nth first return time R(n) is defined by

R(n)(x) = min{j ≥ 1 : x1 . . . xn = x_{2jn+1} . . . x_{2jn+n}} .

Put r = 2^{−n}. Then the expectation of log R(n) equals v(r) in the case of the Bernoulli (1/2, 1/2)-shift. Hence the expectation of log R(n) is approximately equal to n − γ/ln 2 for large n. We investigate the speed of convergence of the average of log R(n) to entropy after being properly normalized. The dependence on the past decreases exponentially, and hence the sampled blocks become almost independent of each other as the gap between the neighboring blocks increases. The following lemma is a restatement of Theorem 5.22.

Lemma 12.17. Let P be an irreducible and aperiodic stochastic matrix. Put Q = lim_{n→∞} P^n. Then there exist constants α > 0 and 0 < d < 1 such that ||P^n − Q||_∞ ≤ α d^n.

Theorem 12.18. If P is irreducible and aperiodic, then

lim_{n→∞} ( E[log R(n)] − (n − 1)h ) = H0 − γ/ln 2 .

Proof. Throughout the proof we simply write x_1^n in place of x1 . . . xn and similarly for other blocks if necessary. Let π denote the Perron–Frobenius eigenvector for P. Put c = α/min_i πi and suppose that k is large enough so that d^{k+1} c < 1, where α and d are the constants obtained in Lemma 12.17. Let T denote the shift transformation on the Markov chain. For arbitrary n-blocks a1 · · · an and b1 · · · bn, we have

Pr( [b1 · · · bn] ∩ T^{−(n+k)}[a1 · · · an] ) = π_{b1} P_{b1 b2} · · · P_{b_{n−1} b_n} (P^{k+1})_{b_n a_1} P_{a1 a2} · · · P_{a_{n−1} a_n} .

Hence

| Pr( [b1 · · · bn] ∩ T^{−(n+k)}[a1 · · · an] ) − Pr(b1 · · · bn) Pr(a1 · · · an) |
  = π_{b1} P_{b1 b2} · · · P_{b_{n−1} b_n} |(P^{k+1})_{b_n a_1} − π_{a1}| P_{a1 a2} · · · P_{a_{n−1} a_n}
  ≤ Pr(b_1^n) Pr(a_1^n) d^{k+1} α/π_{a1}
  ≤ Pr(b_1^n) Pr(a_1^n) d^{k+1} c ,


and

Pr(a_1^n)(1 − d^{n+1} c) ≤ Pr( x_{2n+1}^{2n+n} = a_1^n | x_1^n = b_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1} c) .

Put a1 · · · an = a1^{(0)} · · · an^{(0)} = a1^{(i)} · · · an^{(i)}. Then

Pr( R(n) = i | x_1^n = a_1^n ) = ∑_{a^{(i−1)} ≠ a_1^n} · · · ∑_{a^{(1)} ≠ a_1^n} ∏_{j=1}^{i} A_j

where

A_j = Pr( x_{2n+1}^{2n+n} = a1^{(j)} · · · an^{(j)} | x_1^n = a1^{(j−1)} · · · an^{(j−1)} )

since A_j is equal to the probability of x_{2jn+1}^{2jn+n} = a1^{(j)} · · · an^{(j)} given the condition x_{2(j−1)n+1}^{2(j−1)n+n} = a1^{(j−1)} · · · an^{(j−1)}. Hence

Pr(a_1^n)(1 − d^{n+1} c) U_1 ≤ Pr( R(n) = i | x_1^n = a_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1} c) U_1

where

U_1 = ∑_{a^{(i−1)} ≠ a_1^n} · · · ∑_{a^{(1)} ≠ a_1^n} ∏_{j=1}^{i−1} A_j .

Since

∑_{a^{(i−1)} ≠ a_1^n} A_{i−1} = 1 − Pr( x_{2n+1}^{2n+n} = a_1^n | x_1^n = a1^{(i−2)} · · · an^{(i−2)} ) ,

the sum is bounded by 1 − Pr(a_1^n)(1 + d^{n+1} c) and 1 − Pr(a_1^n)(1 − d^{n+1} c). Hence

Pr(a_1^n)(1 − d^{n+1} c)(1 − Pr(a_1^n)(1 + d^{n+1} c)) U_2
  ≤ Pr( R(n) = i | x_1^n = a_1^n )
  ≤ Pr(a_1^n)(1 + d^{n+1} c)(1 − Pr(a_1^n)(1 − d^{n+1} c)) U_2

where

U_2 = ∑_{a^{(i−2)} ≠ a_1^n} · · · ∑_{a^{(1)} ≠ a_1^n} ∏_{j=1}^{i−2} A_j .

Inductively we have

Pr(a_1^n)(1 − d^{n+1} c)(1 − Pr(a_1^n)(1 + d^{n+1} c))^{i−1}
  ≤ Pr( R(n) = i | x_1^n = a_1^n )
  ≤ Pr(a_1^n)(1 + d^{n+1} c)(1 − Pr(a_1^n)(1 − d^{n+1} c))^{i−1} .


Hence

Pr(a_1^n)(1 − d^{n+1} c) ∑_{i=1}^{∞} (1 − Pr(a_1^n)(1 + d^{n+1} c))^{i−1} log i
  ≤ E[ log R(n) | x_1^n = a_1^n ]
  ≤ Pr(a_1^n)(1 + d^{n+1} c) ∑_{i=1}^{∞} (1 − Pr(a_1^n)(1 − d^{n+1} c))^{i−1} log i .

Let v be the function in Definition 12.11. The average over all n-blocks a1 . . . an is bounded by

((1 − d^{n+1} c)/(1 + d^{n+1} c)) E[ v(Pr(x_1^n)(1 + d^{n+1} c)) ]
  ≤ E[ log R(n) ]
  ≤ ((1 + d^{n+1} c)/(1 − d^{n+1} c)) E[ v(Pr(x_1^n)(1 − d^{n+1} c)) ] .

Multiplying by (1 + d^{n+1} c)/(1 − d^{n+1} c) and subtracting nh, we have

E[ v(Pr(x_1^n)(1 + d^{n+1} c)) ] − nh ≤ ((1 + d^{n+1} c)/(1 − d^{n+1} c)) E[ log R(n) ] − nh

or

E[ v(Pr(x_1^n)(1 + d^{n+1} c)) + log(Pr(x_1^n)(1 + d^{n+1} c)) ]
  − E[ log Pr(x_1^n) + nh ] − log(1 + d^{n+1} c)
  ≤ ((1 + d^{n+1} c)/(1 − d^{n+1} c)) E[ log R(n) ] − nh     (12.1)

and similarly, from the second inequality,

((1 − d^{n+1} c)/(1 + d^{n+1} c)) E[ log R(n) ] − nh
  ≤ E[ v(Pr(x_1^n)(1 − d^{n+1} c)) + log(Pr(x_1^n)(1 − d^{n+1} c)) ]
  − E[ log Pr(x_1^n) + nh ] − log(1 − d^{n+1} c) .     (12.2)

Recall that v(r) + log r converges to −γ/ln 2 as r ↓ 0. For any small δ > 0 there exists n0 such that Pr(x_1^n)(1 + d^{n+1} c) ≤ δ if n ≥ n0. By taking r = Pr(x_1^n)(1 + d^{n+1} c) we can check that the functions v(Pr(x_1^n)(1 + d^{n+1} c)) + log(Pr(x_1^n)(1 + d^{n+1} c)) are bounded by the same upper bound for all n ≥ n0. Hence the Lebesgue Dominated Convergence Theorem implies that

lim_{n→∞} E[ v(Pr(x_1^n)(1 + d^{n+1} c)) + log(Pr(x_1^n)(1 + d^{n+1} c)) ] = −γ/ln 2 .


Recall that E[ log Pr(x_1^n) + nh ] = −H0 + h. As n → ∞, (12.1) implies that

−γ/ln 2 + H0 − h ≤ lim_{n→∞} ( E[log R(n)] − nh ) ,

and similarly (12.2) implies that

lim_{n→∞} ( E[log R(n)] − nh ) ≤ −γ/ln 2 + H0 − h .

Definition 12.19. Theorem 12.18 implies that

lim_{n→∞} E[ log R(n+1) − log R(n) ] = h ,

and in this case we define h(n) by

h(n) = E[ log R(n+1) − log R(n) ] .

We use h(n) to estimate the entropy of a Markov shift associated with an irreducible and aperiodic matrix.

Example 12.20. Consider the Markov shift X = ∏_{1}^{∞} {0, 1, 2} associated with the irreducible and aperiodic matrix

P = ( 1/2 1/4 1/4 ; 1/4 0 3/4 ; 1/2 1/2 0 ) .

Then π = (10/23, 6/23, 7/23), h ≈ 1.168 and H0 − γ/ln 2 ≈ 0.72. (Note that the entropy can be greater than 1 since we use the logarithm to the base 2.) We estimate E[log R(n)] by taking the average along an orbit. See Maple Program 12.6.5 and Fig. 12.6. For more information see [CKd1].
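The book estimates h(n) with Maple Program 12.6.5; the following Python sketch does the analogous computation for the fair-coin Bernoulli shift (entropy h = 1), with all helper names ours. It implements R(n)(x) = min{j ≥ 1 : x1 . . . xn = x_{2jn+1} . . . x_{2jn+n}} and averages log2 R(n) over well-separated starting points of a single orbit.

```python
import math
import random

def modified_return_time(x, n):
    """R(n): smallest j >= 1 with x[0:n] == x[2*j*n : 2*j*n + n] (0-based)."""
    block = x[:n]
    j = 1
    while 2 * j * n + n <= len(x):
        if x[2 * j * n : 2 * j * n + n] == block:
            return j
        j += 1
    return None

# For the all-zeros sequence the block at position 2n already matches: R(n) = 1.
assert modified_return_time([0] * 100, 3) == 1

random.seed(2)
x = [random.randint(0, 1) for _ in range(200000)]

def ave_log_R(n):
    # Average log2 R(n) over starting points spaced far apart along the orbit.
    vals = [modified_return_time(x[s:], n) for s in range(0, 30000, 100)]
    found = [math.log2(r) for r in vals if r is not None]
    return sum(found) / len(found)

# Estimator h(5) = Ave[log R(6)] - Ave[log R(5)]; expected near h = 1.
h_est = ave_log_R(6) - ave_log_R(5)
```

The spacing of the starting points is only a convenience to reduce correlation between samples; the book's Maple program averages along consecutive nonoverlapping blocks instead.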

Fig. 12.6. y = Ave[log R(n)] − (n − 1)h (left) and y = h(n) (right) for Ex. 12.20. The horizontal lines indicate H0 − γ/ln 2 and h, respectively


12.4 Product of the Return Time and the Probability

Kac's lemma implies that

E[Rn Pn] = ∑_{B∈Pn} E[Rn Pn | B] Pr(B)
         = ∑_{B∈Pn} E[Rn | B] Pr(B) Pr(B)
         = ∑_{B∈Pn} (1/Pr(B)) Pr(B) Pr(B)
         = ∑_{B∈Pn} Pr(B) = 1 .

Theorem 12.21. Let X be an irreducible and aperiodic Markov shift space. Let Rn be the nonoverlapping first return time defined in Definition 12.10 and let Pn(x) be the probability of the n-block x1 . . . xn. Then we have

lim_{n→∞} Pr(Rn Pn ≤ ξ) = ∫_{0}^{ξ} e^{−t} dt .

For the proof consult [W2], and for a related result see [Pit]. See Fig. 12.7 and Maple Program 12.6.6 for simulations for the (2/3, 1/3)-Bernoulli shift and for the Markov shift with

P = ( 1/3 2/3 ; 1/4 3/4 ) .

Fig. 12.7. Pdf's of Rn Pn, n = 10, for a Bernoulli shift (left) and a Markov shift (right) with y = e^{−x}
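Theorem 12.21 can be checked by a quick Monte Carlo; the Python sketch below is ours (the book's version is Maple Program 12.6.6). It uses the fair-coin Bernoulli shift, where Pn = 2^{−n} for every n-block, and the plain first return time Rn = min{j ≥ 1 : . . .}, as in that Maple program. The empirical mean of Rn Pn should be near E[Rn Pn] = 1, and the empirical distribution function at 1 near 1 − e^{−1} ≈ 0.632.

```python
import random

random.seed(3)
n, trials, N = 10, 1000, 200000
x = [random.randint(0, 1) for _ in range(N)]
Pn = 2.0 ** (-n)   # probability of any n-block under the fair coin

def return_time(s):
    """First j >= 1 with the n-block at s repeated at s + j (None if not found)."""
    block = x[s:s + n]
    for j in range(1, N - s - n):
        if x[s + j:s + j + n] == block:
            return j
    return None

rs = [return_time(s) for s in range(trials)]
vals = [r * Pn for r in rs if r is not None]
mean = sum(vals) / len(vals)                      # expected near 1
cdf_at_1 = sum(v <= 1 for v in vals) / len(vals)  # expected near 1 - 1/e
```

The samples overlap and are therefore correlated, so this is only a rough check of the limiting exponential law, not a test of its rate of convergence.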

Remark 12.22. (i) For the probability distribution of lim_{n→∞} ln(Rn Pn) see Ex. 1.40.
(ii) As a corollary of Theorem 12.21, we obtain

lim_{n→∞} E[ln(Rn Pn)] = ∫_{0}^{∞} (ln y) e^{−y} dy = −γ ,

lim_{n→∞} Var[ln(Rn Pn)] = ∫_{0}^{∞} (ln y − (−γ))² e^{−y} dy = π²/6 .

Use Maple to verify them.
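Maple is suggested above; the same verification can be done with a short Python Monte Carlo, drawing Y ~ Exp(1) and averaging ln Y and (ln Y + γ)²:

```python
import math
import random

random.seed(4)
N = 200000
gamma = 0.5772156649015329   # Euler's constant
# Guard against a (vanishingly unlikely) exact zero before taking the log.
ys = [max(random.expovariate(1.0), 1e-300) for _ in range(N)]
mean_ln = sum(math.log(y) for y in ys) / N                 # -> -gamma
var_ln = sum((math.log(y) + gamma) ** 2 for y in ys) / N   # -> pi^2/6 ~ 1.6449
```

With 2·10^5 samples both estimates typically agree with the exact values to two decimal places.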


12.5 Symbolic Dynamics and Topological Entropy

A topological dynamical system (X, T) is a compact metric space X together with a continuous mapping T : X → X. Let U be an open cover of X, i.e., U is a collection of open subsets Ui such that ∪_i Ui = X. Let H(U) denote the logarithm of the cardinality of a finite subcover of U with the smallest cardinality, i.e.,

H(U) = min_{U1 ⊂ U} log card(U1) .

Let U ∨ V be the cover made up of U ∩ V where U ∈ U and V ∈ V. The topological entropy of the cover U with respect to T is defined by

h(U, T) = lim_{n→∞} (1/n) H( ⋁_{i=0}^{n−1} T^{−i} U ) .

The topological entropy of T is defined by

htop = sup_U h(U, T) ,

where the supremum is taken over all open covers U. Measure theoretic entropy with respect to a probability measure µ on X is denoted by hµ when there is a possibility of confusion. (Sometimes measure theoretic entropy is called metric entropy for short, or Kolmogorov–Sinai entropy.) It is known that the topological entropy is the supremum of the measure theoretic entropies; this is called the variational principle. See [Man],[PoY]. For an elementary introduction to symbolic dynamics see [DH]. For a related result see [KLe].

Consider the full shift space X0 = ∏_{1}^{∞} {s1, . . . , sk} on k symbols. Define a metric d on X0 by

d(x, y) = 2^{−n} ,   n = min{i ≥ 1 : xi ≠ yi}

for two distinct points x = (x1, x2, . . .) and y = (y1, y2, . . .) in X0. Then X0 is compact. Let T : X0 → X0 be the transformation defined by the left shift, i.e., T((x1, x2, . . .)) = (x2, x3, . . .). Then T is continuous. A T-invariant closed subset X ⊂ X0 is called a topological shift space. The associated transformation on X is the shift mapping T restricted to X. The study of such a mapping is called symbolic dynamics. Let Bn(X) be the collection of n-blocks appearing in sequences in X and let N(n) be its cardinality. An n-block a1 . . . an ∈ Bn(X) is identified with the cylinder set {x ∈ X : xi = ai, 1 ≤ i ≤ n}. For a proof of the following fact consult [LM].

Theorem 12.23. Let X be a topological shift space with the shift transformation T. Then the topological entropy of T is given by

htop = lim_{n→∞} (1/n) log N(n) .
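For the golden mean shift treated in Example 12.25 below — sequences with no two consecutive 1's — N(n) can be counted by brute force and follows the Fibonacci recursion, so the quotient (1/n) log N(n) of Theorem 12.23 approaches log((1 + √5)/2) ≈ 0.694. A Python sketch (helper name ours):

```python
import math
from itertools import product

def count_blocks(n):
    """N(n): 0-1 blocks of length n containing no two consecutive 1's."""
    return sum(
        all(not (b[i] and b[i + 1]) for i in range(n - 1))
        for b in product((0, 1), repeat=n)
    )

# Fibonacci-type counts: N(1), ..., N(5) = 2, 3, 5, 8, 13.
assert [count_blocks(n) for n in range(1, 6)] == [2, 3, 5, 8, 13]

phi = (1 + math.sqrt(5)) / 2
approx = math.log2(count_blocks(16)) / 16   # close to log2(phi) ~ 0.694
```

Brute-force enumeration is exponential in n, so it only illustrates the limit for small n; the adjacency-matrix approach of Theorem 12.24 scales much better.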


A finite graph consists of vertices V = {v1, . . . , vk} and some directed edges connecting vertices. Assume that for any pair of vertices u, v there exists at most one directed edge from u to v. We do not exclude the case when u and v are identical, and in this case an edge is called a loop. Define a k × k matrix A = (Aij)_{1≤i,j≤k}, called an adjacency matrix, by letting Aij be the number of directed edges from vi to vj. A path of length n is a sequence of n connected directed edges e1, . . . , en, or equivalently, it is a series of n + 1 vertices such that ej is the edge from the jth to the (j + 1)th vertex. Then

(A²)ij = ∑_k Aik Akj

is the number of paths of length 2 from vi to vj. Similarly, (A^n)ij is the number of paths of length n from vi to vj. An infinite path is an infinite sequence of connected directed edges. Using the set of vertices as an alphabet, we define a topological shift space X ⊂ ∏_{1}^{∞} V as the set of all infinite paths. Sometimes the shift spaces are defined in terms of edges, not vertices. The two approaches are equivalent. For more information consult [BS],[Kit],[LM].

Theorem 12.24. Let X be a topological shift space defined by a finite graph. Suppose that the corresponding adjacency matrix A is irreducible, i.e., for every (i, j) there exists n such that (A^n)ij > 0. Then

htop = log λA ,

where λA is the Perron–Frobenius eigenvalue of A.

Proof. Recall the definitions of norms in Subsect. 1.1.2 and Sect. 10.1. Note that N(n + 1) is the number of all paths of length n on the graph. Hence

N(n + 1) = ∑_{i,j} (A^n)ij = ||A^n||_1 .

Since the vector space of all k × k matrices is finite-dimensional, all norms on it are equivalent. Thus there exist constants 0 < c1 < c2 such that c1 ||A^n||_1 ≤ ||A^n||_op ≤ c2 ||A^n||_1. Since the operator norm of A^n is dominated by (λA)^n, we have

(1/n) log N(n) ≈ (1/n) log ||A^n||_op ≈ log λA .

Now we take the limit as n → ∞.

Fig. 12.8. A graph represents a shift space

Example 12.25. Consider the set X of infinite sequences on symbols '0' and '1' in which two successive 1's do not appear. Then V = {0, 1} and there is no loop at the vertex 1. See Fig. 12.8. The adjacency matrix is given by

A = ( 1 1 ; 1 0 ) ,

and λA = (1 + √5)/2. Hence htop = log((1 + √5)/2). See Maple Program 12.6.7.

If X is the full shift with k symbols, then htop = log k since N(n) = k^n. The next theorem gives another way to obtain the topological entropy of a shift transformation.

Theorem 12.26. Suppose that we have a topological shift space X. Let µ be a shift-invariant ergodic probability measure on X. Then

lim_{n→∞} (1/n) log ∫_X Rn dµ ≤ htop .

If µ satisfies the additional property that any block appearing in X has positive measure, then we have

lim_{n→∞} (1/n) log ∫_X Rn dµ = htop .

Proof. Take B ∈ Bn(X). If B has positive measure, then Kac's lemma implies ∫_B R_B dµ = 1. Hence

∫_X Rn dµ = ∑_{B∈Bn(X)} ∫_B Rn dµ = card{B ∈ Bn(X) : µ(B) > 0} ≤ N(n) .

If µ(B) > 0 for any B, then we have the equality.
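Theorem 12.24 can also be checked numerically without an eigenvalue routine, using the norm identity from its proof: ||A^n||₁ counts paths and grows like (λA)^n, so (1/n) log ||A^n||₁ approximates log λA. A pure-Python sketch for the golden mean graph of Example 12.25 (helper names ours):

```python
import math

def mat_mult(A, B):
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

A = [[1, 1], [1, 0]]    # adjacency matrix of the golden mean graph
An = [[1, 0], [0, 1]]   # identity; becomes A^n below
n = 200
for _ in range(n):
    An = mat_mult(An, A)
norm1 = sum(sum(row) for row in An)   # ||A^n||_1 = N(n + 1), the path count
estimate = math.log2(norm1) / n       # approaches log2 of the Perron eigenvalue
lam = (1 + math.sqrt(5)) / 2          # exact Perron-Frobenius eigenvalue of A
```

Python's exact big integers make n = 200 unproblematic here; for floating-point matrix powers one would normalize at each step instead.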

The average of Rn is underestimated in general. If the sample size is not sufficiently large, then only the typical n-blocks of relatively large measure (≈ 2^{−nh}) are sampled. In other words, if an n-block has a relatively small probability, then it may not be included in a sample. Since the first return time for such a block would be large by Kac's lemma, an average of Rn over a small number of sample blocks underestimates ∫_X Rn dµ. See Table 12.1 for simulation results with the (3/4, 1/4)-Bernoulli shift.


Table 12.1. Average of the first return time for the (3/4, 1/4)-Bernoulli shift

 n    2^n    Ave[Rn]
 2      4      4.0
 3      8      8.0
 4     16     15.9
 5     32     31.3
 6     64     57.7
 7    128    108.1
 8    256    215.4
 9    512    398.6
10   1024    744.9

Theorem 12.27 (The variational inequality). Let T be an ergodic shift transformation on a shift space (X, µ). The measure theoretic entropy is bounded by the topological entropy.

Proof. Note that the metric version of the nth return time is equal to the first return time defined in the first return time formula. Jensen's inequality implies that

hµ = lim_{n→∞} (1/n) ∫_X log Rn dµ ≤ lim_{n→∞} (1/n) log ∫_X Rn dµ ,

which is bounded by htop by Theorem 12.26.

Remark 12.28 (The variational principle). Let T : X → X be a continuous mapping (or a homeomorphism) of a compact metric space X, and let M(X, T) denote the set of all T-invariant probability measures on X. Then

htop = sup{ hµ(T) : µ ∈ M(X, T) } .

See [Wa1] for the proof.

Remark 12.29. Consider X = ∏_{1}^{∞} {0, 1} with the metric

d(x, y) = 2^{−k} ,   k = min{i ≥ 1 : xi ≠ yi}

for x ≠ y. Put

Eα,β = {x ∈ X : lim inf_{n→∞} (log Rn(x))/n = α, lim sup_{n→∞} (log Rn(x))/n = β} .

It is known that Eα,β has Hausdorff dimension 1 for each pair α, β ∈ [0, ∞] with α ≤ β. Consult [FW] for the proof. For the definition of the Hausdorff dimension see Sect. 13.1.


12.6 Maple Programs

12.6.1 The first return time Rn for the Bernoulli shift

Simulate the Ornstein–Weiss formula for the Bernoulli shift on two symbols '0' and '1'.
> with(plots):
> k:=4:
> q:=1./(k+1):
This is the probability of the symbol '1'.
> entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q);
entropy := .7219280949
> Max_Block:=30:
This is the maximum value for the block length.
> N:=round(2^(entropy*Max_Block));
N := 3308722
This is the total length of the binary sequence x1x2x3 . . .. There is a chance that the recurrence does not occur for block lengths close to Max_Block within the time limit N.
Generate a Bernoulli sequence of length N.
> ran:= rand(0..k):
> for i from 1 to N do x[i]:=trunc(ran()/k): od:
Compare the following with q.
> evalf(add(x[i],i=1..N)/N);
.2001201672
> seq(x[i],i=1..Max_Block);
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0
> evalf(add(x[i],i=1..Max_Block)/Max_Block);
.1666666667
In the above block of length 30 there are fewer 1's than average. Therefore the simulation result would not agree exactly with the theoretical prediction.
Compute the first return time.
> for n from 1 to Max_Block do
>   k:=1;
>   while(add(abs(x[j]-x[j+k]),j=1..n)>0) do k:=k+1: od:
>   R[n]:=k:
> od:
Draw the graph.
> g1:=listplot([seq([n,log[2.](R[n])/n],n=1..Max_Block)]):
> g2:=plot(entropy,x=0..30,labels=["n"," "]):
> display(g1,g2);
See Fig. 12.1.


12.6.2 Averages of (log Rn )/n and Rn for the Bernoulli shift The averages of (log Rn )/n and Rn are computed for the Bernoulli shift on two symbols ‘0’ and ‘1’. > with(plots): Choose an integer. > k:=3: Choose the probability of the symbol ‘1’. > q:=1./(k+1); q := 0.2500000000 > entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q); entropy := 0.8112781245 Choose the maximum value for the block length. > Max_Block:=12: Generate a Bernoulli sequence of length N . We multiply 200 since there is a chance that the recurrence does not occur within the time limit

2^(entropy·Max_Block).
> N:=round(2^(entropy*Max_Block))*200;

N := 170400 ran:= rand(0..k): > for i from 1 to N do x[i]:=trunc(ran()/k): od: Compare the following with q. > evalf(add(x[i],i=1..N)/N); 0.2503345070 Choose the sample size S. > S:=1000: > M:=N - Max_Block - S + 2; M := 169390 Compute the ﬁrst return time R[n, s] where s, 1 ≤ s ≤ S, is the space variable. > for s from 1 to S do > R[0,s]:=1: od: > for s from 1 to S do > for n from 1 to Max_Block do > k:=R[n-1,s]: > i:=1: > while (i else k:=k+1; i:=1; fi; > od; > R[n,s]:=k; > od: > od: >


If the maximum of R[n, s] is greater than M , then there is a premature ending due to the short length of the given sequence. > max(seq(R[Max_Block,s],s=1..S)); 154211 Find the average of R[n, s] with respect to s. Simulations usually produce smaller values than the theoretical prediction equal to 2n since our sample size S is not large enough. When q = 12 and n is large, there are blocks which have very small measures so that their return times are very large by Kac’s lemma. If S is small, then these blocks of negligible probability are not sampled in general, and the average of the return time is underestimated. > for n from 1 to Max_Block do > AveR[n]:=add(R[n,s],s=1..S)/S: > od: Compute the average of (log2 R[n, s])/n with respect to s. > for n from 1 to Max_Block do > AveLog[n]:=add(log[2.](R[n,s])/n,s=1..S)/S; > od: Draw the graphs. > g1:=pointplot([seq([n,AveLog[n]],n=1..Max_Block)]): > g2:=plot(entropy,x=0..Max_Block,labels=["n","y"]): > g3:=listplot([seq([n,AveLog[n]],n=1..Max_Block)]): > display(g1,g2,g3); See the left graph in Fig. 12.3. > for n from 1 to Max_Block do > Top[n]:=log[2.0](add(R[n,s],s=1..S)/S)/n: > od: > g4:=pointplot([seq([n,Top[n]],n=1..Max_Block)]): > g5:=listplot([seq([n,Top[n]],n=1..Max_Block)]): > g6:=plot(1,x=0..Max_Block,labels=["n","y"]): > display(g4,g5,g6); See the right graph in Fig. 12.3. 12.6.3 Probability density function of (log Rn )/n for the Bernoulli shift Find the pdf’s of Rn and (log Rn )/n for the Bernoulli shift transformation on two symbols ‘0’ and ‘1’. > with(plots): Choose the probability of the symbol ‘1’. > q:=0.25: > entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q); entropy := 0.8112781245 Choose the block length. > n:=10: Choose the total length of the sequence.



> N:=round(2^(entropy*n))*1000;

N := 277000 We multiply 1000 since there is a small chance that the recurrence does not occur within a reasonable time limit. Now generate a Bernoulli sequence of length N . > k:=3: > ran:= rand(0..k): > for i from 1 to N do > x[i]:=trunc(ran()/k): > od: Choose the sample size. > S:=10000: Find the ﬁrst return time Rn . > for s from 1 to S do > t:=1; > while(add(abs(x[j]-x[j+t]),j=s..n+s-1)>0) do > t:=t+1: > od: > R[s]:=t; > od: See Fig. 12.4. Draw the pdf of the return time Rn . > Bin1:=2000: > epsilon:=0.0000001: > Max:=max(seq(R[s],s=1..S))+epsilon; Max := 219147. for k from 1 to Bin1 do freq[k]:=0: od: > for s from 1 to S do > slot:=ceil(Bin1*R[s]/Max): > freq[slot]:=freq[slot]+1: > od: > listplot([seq([(i-0.5)*Max/Bin1, freq[i]*Bin1/S/Max], i=1..Bin1/50)]); Draw the pdf of (log Rn )/n. > Bin2:=60: > Max2:=max(seq(log[2.](R[s])/n,s=1..S),2): > for k from 1 to Bin2 do freq2[k]:=0: od: > for s from 1 to S do > slot2:=ceil(Bin2/Max2*log[2.](R[s])/n + epsilon): > freq2[slot2]:=freq2[slot2]+1: > od: > listplot([seq([(i-.5)*Max2/Bin2, freq2[i]*Bin2/S/Max2], i=1..Bin2)]); See Fig. 12.4. >


12.6.4 Convergence speed of the average of log Rn for the Bernoulli shift Check the formula in Theorem 12.12 on the average of log Rn for the Bernoulli shift. > with(plots): > k:=2: > p:=1./(k+1): This is the probability of the symbol ‘1’. > entropy:=-p*log[2.](p)-(1-p)*log[2.](1-p); entropy := 0.9182958340 Choose the maximum value for the block length. > MaxBlock:=10: Choose the sample size S. > S:=1000: > N:=round(2^(entropy*MaxBlock))*1000 + S*MaxBlock; N := 591000 Generate a Bernoulli sequence of length N . > ran:=rand(0..k): > for i from 1 to N do x[i]:=trunc(ran()/k): od: Compare the following with p. > evalf(add(x[i],i=1..N)/N); 0.3341133672 Find the return time R[n, s]. > for n from 1 to MaxBlock do > for s from 1 to S do > t:=n; > while(add(abs(x[j]-x[j+t]),j=n*(s-1)+1..n*s)>0) do > t:=t+n: > od: > R[n,s]:=t/n; > od: > od: Find the average of the logarithm of the return time. > for n from 1 to MaxBlock do > Ave[n]:=-add(log[2.](R[n,q]),q=1..S)/S + n*entropy: > od: Draw the graph. > g1:=plot(0.832746,x=0..MaxBlock,labels=["n","y"]): > g2:=listplot([seq([n,Ave[n]],n=1..MaxBlock)]): > display(g1,g2); See Fig. 12.5.


12.6.5 The nonoverlapping ﬁrst return time Estimate entropy by the nonoverlapping ﬁrst return time for a Markov shift deﬁned by P . > with(plots): > with(linalg): > P:=matrix([[1/2,1/4,1/4],[1/4,0,3/4],[1/2,1/2,0]]); > eigenvectors(transpose(P)); D C √ √ √ 2 2 2 1 7 5 , 1 }], , , 1, { −1 − [1, 1, { , 1, }], [− + 2 2 4 4 6 3 D C √ √ √ 2 2 2 1 , 1 }] ,− , 1, { −1 + [− − 2 2 4 4 Find the Perron–Frobenius eigenvector v. > v:=[10/23,6/23,7/23]: > epsilon:=10.0^(-10): > entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]+epsilon), j=1..3),i=1..3); entropy := 1.168159511 Choose the maximum value for the block length. > Max_Block:=7: > SampleSize:=10000: > N:=ceil(2^(Max_Block*entropy))*1500 + SampleSize; N := 445000 Generate a Markov sequence of length N . > ran0:=rand(0..1): ran1:=rand(1..2): ran2:=rand(0..3): > x[0]:=1: > for j from 1 to N do > if x[j-1]=0 then x[j]:=ran0()*ran1(): > elif x[j-1]=1 then x[j]:=2*ceil(ran2()/3): > else x[j]:=ran0(): > fi: > od: Count the number of appearances of the symbols 0, 1 and 2. > Num[0]:=0: Num[1]:=0: Num[2]:=0: > for i from 1 to N do > Num[x[i]]:=Num[x[i]]+1: > od: Compare the preceding results with the measures of the cylinder sets [0]1 , [1]1 , [2]1 of length 1. > evalf(Num[0]/N-v[1]); 0.0005095261358





> evalf(Num[1]/N-v[2]);
−0.0002875427455
> evalf(Num[2]/N-v[3]);
−0.0002219833903
> M:=trunc((N-SampleSize)/2/Max_Block);

M := 31071 for n from 1 to Max_Block do for s from 1 to SampleSize do t:=1: i:=1: while (i max(seq(R[Max_Block,s],s=1..SampleSize)); 28412 If max ≥ M then there is a possibility of premature ending due to the short length of the given sequence. If this happens, we have to generate a longer sequence and do the simulation again. Calculate h(n) . > for n from 1 to 6 do > h[n]:=Ave[n+1]-Ave[n]: > od: > H0:= add(-v[i]*log[2.0](v[i]),i=1..3); > > > > > > > > > > > > >

H0 := 1.550494982 > evalf(H0-gamma/ln(2)); 0.7177488048 Draw the graphs. > g1:=listplot([seq([n,Ave[n]-(n-1)*entropy], n=1..Max_Block)]): > g2:=plot(H0-gamma/ln(2.),x=0..Max_Block,labels=["n","y"]): > display(g1,g2); See the left graph in Fig. 12.6. > f1:=listplot([seq([n,h[n]],n=1..Max_Block-1)]): > f2:=plot(entropy,x=0..7,labels=["n","y"]): > display(f1,f2); See the right graph in Fig. 12.6.


12.6.6 Probability density function of Rn Pn for the Markov shift

Find the probability density function of the product of the first return time Rn and the probability Pn of an n-block for the Markov shift.
> with(plots):
Construct a typical sequence from a binary Markov shift defined by P.
> with(linalg):
> P:=matrix([[1/3,2/3],[1/4,3/4]]);
> eigenvectors(transpose(P));
[1/12, 1, {[−1, 1]}], [1, 1, {[3/8, 1]}]
Find the Perron–Frobenius eigenvector v.
> v:=[3/11,8/11]:
> SampleSize:=1000:
> entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]),j=1..2), i=1..2);
entropy := .8404647727
Choose the block length.
> n:=10:
> N:=300000:
Generate a Markov sequence of length N.
> ran0:=rand(0..2):
> ran1:=rand(0..3):
> b[0]:=1:
> for j from 1 to N do
>   if b[j-1]=0 then b[j]:=ceil(ran0()/3):
>   else b[j]:=ceil(ran1()/4):
>   end if:
> od:
The measure of the cylinder set {x : x1 = 1} is equal to 8/11. The following two numbers should be approximately equal.
> evalf(add(b[j],j=1..N)/N); 8/11.;
.7256033333
.7272727273
Compute Pn.
> for j from 1 to SampleSize do
>   Prob[j]:=v[b[j]+1]:
>   for t from 0 to n-2 do
>     Prob[j]:=Prob[j]*P[b[j+t]+1,b[j+t+1]+1]:
>   od:
> od:

Compute Rn .


> for s from 1 to SampleSize do
>   t:=1;
>   while (add(abs(b[j]-b[j+t]),j=s..n+s-1) > 0) do t:=t+1: od:
>   ReturnTime[s]:=t;
> od:
Check whether the average of Rn is equal to 2^n.
> evalf(add(ReturnTime[s],s=1..SampleSize)/SampleSize); 2^n;
988.2900000
1024
> for s from 1 to SampleSize do
>   RnPn[s]:=ReturnTime[s]*Prob[s]:
> od:
> Bin:=50:
> Max:=evalf(max(seq(RnPn[s],s=1..SampleSize)));

Max := 11.35826527
We divide the interval 0 ≤ x ≤ Max into 50 subintervals of equal length.
> for i from 1 to Bin do freq[i]:= 0: od:
> for s from 1 to SampleSize do
>   slot:=ceil(Bin*RnPn[s]/Max):
>   freq[slot]:= freq[slot]+1:
> od:
Draw the graphs.
> g1:=listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/SampleSize/Max],i=1..Bin)]):
> g2:=plot(exp(-x),x=0..10):
> display({g1,g2},labels=["Rn Pn","Prob"]);
See Fig. 12.7.

12.6.7 Topological entropy of a topological shift space

Find the topological entropy of a topological shift space defined by a finite graph.
> with(linalg):
Choose an adjacency matrix.
> A:=matrix([[1,1],[1,0]]):
> eigenvals(A);
1/2 + (1/2)√5, 1/2 − (1/2)√5
Calculate log2 λA.
> log[2.](%[1]);
0.6942419136
> ran:=rand(0..1):


> N:=100000:
> x[0]:=1:
> for j from 1 to N do
> if x[j-1]=0 then x[j]:= ran(): else x[j]:=0: fi:
> od:
> n:=15:
> Bin:=2^n:
> for k from 0 to Bin-1 do freq[k]:=0: od:
> SampleSize:=N-n+1;
Compute the number of different blocks that appear in the given sequence.
> for i from 1 to SampleSize do
> k:=add(x[i+d-1]*2^(n-d),d=1..n):
> freq[k]:=freq[k]+1:
> od:
> Num_block:=0:
> for k from 0 to Bin-1 do
> if freq[k] > 0 then Num_block:=Num_block+1: fi:
> od:
> Num_block;
1597
The following should be close to log2 λA.
> evalf(log[2](Num_block)/n);
0.7094099067
> B:=evalm(A&^(n-1));
B := matrix([[610, 377], [377, 233]])
The following is equal to Num_block.
> add(add(B[i,j],i=1..2),j=1..2);
1597
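The count 1597 equals the sum of the entries of A^(n−1). A short Python sketch (hypothetical companion code, not from the text) reproduces it by direct matrix multiplication:

```python
import math

# Number of admissible n-blocks in the topological shift defined by the
# adjacency matrix A = sum of the entries of A^(n-1).  For the golden-mean
# graph A = [[1,1],[1,0]] these sums are Fibonacci numbers.
def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 0]]
n = 15
B = [[1, 0], [0, 1]]            # identity matrix
for _ in range(n - 1):          # B = A^(n-1)
    B = mat_mult(B, A)

num_blocks = sum(sum(row) for row in B)
print(num_blocks)                   # 1597, as in the Maple output
print(math.log2(num_blocks) / n)    # about 0.7094, close to log2 of the golden mean
```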

13 Recurrence and Dimension

Recurrence is one of the central themes in dynamical systems theory. A typical orbit of a given transformation returns to a neighborhood of a starting point infinitely many times with probability 1 under suitable conditions. Consider a transformation T defined on a metric space X and the nth first return time Rn of T to the ball of radius 2^{−n} centered at a starting point. Then (log Rn)/n converges to the Hausdorff dimension of X under suitable conditions, where log denotes the logarithm to the base 2 throughout the chapter. For irrational translations modulo 1 the conclusion does not always hold, and the answer depends on the type of the given irrational number.

13.1 Hausdorff Dimension

Let X be a set and let P(X) be the collection of all subsets of X. An outer measure on X is a function µ∗ : P(X) → [0, ∞] satisfying the following conditions: (i) µ∗(∅) = 0, (ii) if A ⊂ B then µ∗(A) ≤ µ∗(B), and (iii) if Ej, j ≥ 1, are pairwise disjoint then

µ∗(∪_{j=1}^∞ Ej) ≤ Σ_{j=1}^∞ µ∗(Ej) .

Note that an outer measure is defined for all subsets of X. Let (X, d) be a metric space, and let B be the Borel σ-algebra, i.e., the σ-algebra generated by open subsets. The diameter of U ⊂ X is defined by

diam(U) = sup{d(x, y) : x, y ∈ U} .

For A ⊂ X, ε > 0, let

Hα,ε(A) = inf Σ_i diam(Ui)^α ,


where the infimum is taken over all countable coverings of A by subsets Ui with diameters diam(Ui) ≤ ε. The Hausdorff α-measure on X is defined by

Hα(A) = lim_{ε↓0} Hα,ε(A) = sup_{ε>0} Hα,ε(A) .

Then it is an outer measure. There exists a unique value α such that Hs(X) = ∞ if s < α and Hs(X) = 0 if s > α. Such α is called the Hausdorff dimension of (X, d). For X = R^n the Hausdorff dimension is equal to n. If the Hausdorff dimension of X is not an integer, then X is called a fractal set.

Example 13.1. The Cantor set C is a fractal set, and its Hausdorff dimension is equal to log_3 2 ≈ 0.6309. For the proof, recall that C is defined by the limiting process of removing 2^{n−1} open intervals of length 3^{−n} for every n ≥ 1. Let Cn denote the remaining closed set, which consists of closed intervals U_{n,i}, 1 ≤ i ≤ 2^n, of length 3^{−n}. Then C = ∩_{n=1}^∞ Cn and H_{α,3^{−n}}(C) ≤ 2^n 3^{−αn} using {U_{n,i}}_i as a covering. Hence Hα(C) = 0 for α > log_3 2. Now it is enough to show that Hα(C) > 0 for α = log_3 2 using the fact that 1 ≤ Σ_i diam(Ui)^α for any countable covering {Ui}_i of C. To prove it we may assume that each Ui is an interval [ai, bi] where ai = inf{x : x ∈ Ui} and bi = sup{x : x ∈ Ui} since there is no change in the diameter. Next we may assume that each Ui is open by slightly expanding Ui. Since C is compact, there is a finite open subcovering. Thus it suffices to prove that 1 ≤ Σ_i diam(Ui)^α for any finite covering of C by intervals. Since a finite covering of C by intervals is a covering of Cn for some sufficiently large n, each Ui contains intervals of length 3^{−n}. Since the subdivision of intervals does not increase the sum, it remains to verify the inequality when the Ui are the intervals in Cn. This is true since 2^n 3^{−αn} = 1.

Example 13.2. Let X = Π_1^∞ {0, 1} be the (p, 1 − p)-Bernoulli shift space, regarded as the unit interval [0, 1] endowed with the Euclidean metric. Let Xp denote the set of all binary sequences x in X such that

Xp = {x ∈ X : lim_{n→∞} (1/n) Σ_{i=1}^n xi = p} .

In 1949 Eggleston [Eg] proved that the Hausdorff dimension of Xp is equal to the entropy −p log_2 p − (1 − p) log_2(1 − p) of the Bernoulli shift transformation. A similar result can be obtained for a Markov shift space. See [Bi]. For an introduction to Hausdorff measures, consult [Ed], [Fa], [Roy]. For the relation between the Hausdorff dimension and the first return time see [DF].
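The criticality of the exponent log_3 2 for the Cantor set can be seen numerically; the following Python sketch (illustrative only, not from the text) evaluates the α-sums of the natural covers of C.

```python
import math

# At the critical exponent alpha = log_3 2, the natural cover of the Cantor
# set by the 2^n intervals of length 3^{-n} has alpha-sum 2^n * (3^{-n})^alpha = 1
# for every n; for any larger alpha the sum is already below 1.
alpha = math.log(2) / math.log(3)       # log_3 2, about 0.6309

for n in (5, 10, 20):
    s = 2**n * (3.0**(-n))**alpha       # sum of diam(U_{n,i})^alpha
    print(n, s)                         # always 1 up to rounding

print(2**10 * (3.0**(-10))**0.7)        # alpha = 0.7 > log_3 2: sum < 1
```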


13.2 Recurrence Error

Let (X, A, µ) be a probability space and let T : X → X be a µ-preserving transformation. If there exists a metric d on X such that the σ-algebra A is the Borel σ-algebra generated by d, then we call (X, A, µ, d, T) a metric measure preserving system. The mapping T is not assumed to be continuous. Boshernitzan [Bos] proved the following fact.

Fact 13.3 Let (X, A, µ, d, T) be a metric measure preserving system. Assume that for some α > 0, the Hausdorff α-measure Hα agrees with the measure µ on the σ-algebra A. Then for µ-almost every x ∈ X we have

lim inf_{n→∞} n^β d(T^n x, x) ≤ 1 , with β = 1/α .

For X = [0, 1] Lebesgue measure µ coincides with the Hausdorff 1-measure H1 on X. Hence Boshernitzan obtained the following corollary.

Fact 13.4 Let X = [0, 1]. If T : X → X is Lebesgue measure preserving (not necessarily continuous), then, for almost every x ∈ X,

lim inf_{n→∞} n |T^n x − x| ≤ 1 .

The optimal value for the constant on the right hand side is not known. Probably the right hand side is bounded by a smaller constant depending on the transformation. A generalization of Fact 13.4 to absolutely continuous invariant measures is proved in Theorem 13.9.

Definition 13.5. Let (X, d) be a metric space. Define the kth first return time Rk and the kth recurrence error εk by

Rk(x) = min{s ≥ 1 : d(T^s x, x) ≤ 2^{−k}} , and εk(x) = d(T^{Rk(x)} x, x) .

Example 13.6. Let Tx = x + θ (mod 1) where 0 < θ < 1 is irrational. Then T preserves Lebesgue measure and its entropy equals 0. Define a metric d on X = [0, 1) by d(x, y) = ||x − y||, x, y ∈ X, where ||t|| = min{|t − n| : n ∈ Z}. Note that Rk is constant and that for 0 < x < 1 we have

lim inf_{n→∞} n ||T^n x − x|| = lim inf_{n→∞} n |T^n x − x| = lim inf_{n→∞} n ||nθ|| .

If pk/qk is the kth convergent in the continued fraction expansion of θ, then |θ − pk/qk| ≤ qk^{−2} and hence qk ||qk θ|| ≤ 1. By choosing a subsequence of pk/qk, we may have a smaller upper bound. With θ = (√5 − 1)/2 we have

lim inf_{n→∞} n ||nθ|| ≤ 1/√5 .
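For the golden mean rotation the bound 1/√5 is visible along the Fibonacci denominators; the following Python sketch (illustrative, not part of the text) computes q_k ||q_k θ||.

```python
import math

# For theta = (sqrt(5)-1)/2 the convergent denominators are the Fibonacci
# numbers q_k, and q_k * ||q_k theta|| converges to 1/sqrt(5) ≈ 0.4472.
theta = (math.sqrt(5) - 1) / 2

def dist(t):                     # ||t|| = distance to the nearest integer
    return abs(t - round(t))

q_prev, q = 1, 1                 # consecutive Fibonacci denominators
vals = []
for _ in range(15):
    vals.append(q * dist(q * theta))
    q_prev, q = q, q_prev + q

print(vals[-1])                  # about 0.4472
```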

Fig. 13.1. y = Rk εk, 1 ≤ k ≤ 20, for Tx = {x + θ} with θ = (√5 − 1)/2 (left) and θ = e − 2 (right)

For more details, consult [Kh]. In Fig. 13.1 the graphs y = Rk εk, 1 ≤ k ≤ 20, are drawn with θ = (√5 − 1)/2 and θ = e − 2, respectively. See Fig. 13.1 and Maple Program 13.7.1.

Definition 13.7. Let T : X → X be measure preserving on a probability space (X, µ). Choose a measurable subset E with µ(E) > 0. Define the first return time on E by RE(x) = min{j ≥ 1 : T^j x ∈ E} for x ∈ E. It is finite for almost every x ∈ E.

Let (X, A, µ, d, T) be a metric measure preserving system. Consider a ball E of radius 2^{−k} centered at x0. Suppose that T is ergodic. Then RE(x0) = Rk(x0) and in view of Kac's lemma we expect that Rk(x0) is approximately equal to 1/µ(E) in a suitable sense.
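Kac's lemma (expected return time to E equals 1/µ(E) for an ergodic system) can be checked in closed form for the two-state Markov chain of Maple Program 12.6.6. The Python sketch below (illustrative, not from the text) uses first-step analysis.

```python
from fractions import Fraction

# Kac's lemma for the two-state Markov chain with transition matrix P:
# the expected first return time to state 0 equals 1/v_0 = 11/3, where
# v = [3/11, 8/11] is the stationary vector.  First-step analysis: with
# m1 the expected time to hit state 0 starting from state 1,
#   m1 = 1 + P[1][1] * m1,   E[R] = 1 + P[0][1] * m1.
P = [[Fraction(1, 3), Fraction(2, 3)],
     [Fraction(1, 4), Fraction(3, 4)]]

m1 = 1 / (1 - P[1][1])            # expected hitting time of state 0 from state 1
expected_return = 1 + P[0][1] * m1
print(expected_return)            # 11/3 = 1 / (3/11)
```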

13.3 Absolutely Continuous Invariant Measures

We extend Fact 13.4 to transformations with absolutely continuous invariant measures.

Lemma 13.8. Let µ be a finite continuous measure on X = [0, 1] such that µ([a, b]) > 0 for every interval [a, b], a < b. Define d : X × X → R by d(x, y) = µ([α, β]) for x, y ∈ X where α = min{x, y} and β = max{x, y}. Then (i) d is a metric on X, and (ii) µ coincides with the Hausdorff 1-measure H1 on X.


Proof. (i) Clearly d(x, y) = d(y, x). Since µ is continuous, we observe that d(x, y) = 0 if and only if x = y. It remains to show that d(x, y) ≤ d(x, z) + d(z, y). It suffices to consider the case x < y. If x ≤ z ≤ y, then d(x, y) = d(x, z) + d(z, y). If x < y ≤ z, then d(x, y) + d(y, z) = d(x, z). Finally, if z ≤ x < y, then d(z, y) = d(z, x) + d(x, y).
(ii) For A ⊂ X, ε > 0, we have H_{1,ε}(A) = inf Σ_i ri where the infimum is taken over all countable coverings by sets Ui with diameters ri < ε. Observe that ri = µ([ai, bi]) where ai = inf{s : s ∈ Ui}, bi = sup{s : s ∈ Ui}. Hence H_{1,ε}(A) = diam(A) = µ(A) and H1(A) = µ(A) for every interval A ⊂ X. Thus the same is true for every measurable subset A.
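The metric of Lemma 13.8 is simply d(x, y) = |F(y) − F(x)| for the cumulative distribution F of µ. A Python sketch (illustrative; the closed-form F below is computed for the logistic density of Example 13.14 and is not in the text) checks the metric axioms numerically.

```python
import math
import random

# For the logistic invariant density rho(x) = 1/(pi*sqrt(x(1-x))) the
# cumulative distribution is F(x) = (2/pi) * asin(sqrt(x)), so
# d(x, y) = |F(y) - F(x)| = mu([min, max]) as in Lemma 13.8.
def F(x):
    return (2 / math.pi) * math.asin(math.sqrt(x))

def d(x, y):
    return abs(F(y) - F(x))

random.seed(0)
for _ in range(1000):
    x, y, z = (random.random() for _ in range(3))
    # triangle inequality (with a tiny floating-point slack)
    assert d(x, y) <= d(x, z) + d(z, y) + 1e-12

print(d(0, 1))   # total mass of mu: 1
```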

Theorem 13.9. Let X = [0, 1]. If T : X → X is a transformation preserving an absolutely continuous probability density ρ(x) > 0, then for almost every x

lim inf_{n→∞} n |T^n x − x| ≤ 1/ρ(x) .

Proof. As in the original proof of Fact 13.4 we will apply Fact 13.3. As in Lemma 13.8, define a metric d on X by

d(x, y) = ∫_x^y ρ(t) dt

for x, y ∈ X. By Fact 13.3 we have lim inf_{n→∞} n d(T^n x, x) ≤ 1 for almost every x. Note that

lim_{|T^n x−x|→0} d(T^n x, x) / |T^n x − x| = lim_{|T^n x−x|→0} (1/|T^n x − x|) ∫_x^{T^n x} ρ(t) dt = ρ(x) .

Hence

lim inf_{n→∞} n d(T^n x, x) = lim inf_{n→∞} n |T^n x − x| ρ(x) ≤ 1 .

Let (X, A, µ, d, T) be a metric measure preserving system. Fix x ∈ X and choose an increasing sequence {nj}_{j=1}^∞ such that lim_{j→∞} nj d(T^{nj} x, x) = lim inf_{n→∞} n d(T^n x, x). We may assume that {nj d(T^{nj} x, x)}_{j=1}^∞ monotonically decreases to a limit and that {d(T^{nj} x, x)}_{j=1}^∞ monotonically decreases to 0 as j → ∞. For each nj there exists kj such that 2^{−kj−1} < d(T^{nj} x, x) ≤ 2^{−kj}. Since R_{kj}(x) = min{s ≥ 1 : d(T^s x, x) ≤ 2^{−kj}}, we see that R_{kj} ≤ nj and d(T^{R_{kj}} x, x) ≤ 2^{−kj}. For X = [0, 1] we have the following result.


Theorem 13.10. Let X = [0, 1]. If T : X → X preserves an absolutely continuous probability density ρ(x) > 0, then for almost every x

lim inf_{k→∞} Rk(x) εk(x) ≤ 2/ρ(x) .

Proof. With the same notation as in the preceding argument, we have

R_{kj}(x) d(T^{R_{kj}} x, x) ≤ 2 nj d(T^{nj} x, x) .

Hence

lim inf_{k→∞} Rk(x) εk(x) ≤ 2 lim inf_{n→∞} n |T^n x − x| ≤ 2/ρ(x) .

To investigate the behavior of n |T^n x − x| as n → ∞ we consider the subsequence nk = Rk(x) and check the convergence of Rk(x) εk(x) as k → ∞. Take a starting point x0. In the simulations we plot (x, Rk(x) εk(x)) at x = T^j x0, 1 ≤ j ≤ 1000. See Figs. 13.2, 13.3. Note that most of the points lie below y = 2/ρ(x).

In computing the first return time it is crucial to take a sufficient number of significant digits. If T is iterated n times then about n × h decimal digits of accuracy are lost, where h is the Lyapunov exponent of T. Hence, when we begin with D decimal digits, the maximal number of possible iterations of T is about D/h. Since T is applied Rk(x) times, which is of the order of 2^{k−1} by Kac's lemma, we need to have 2^{k−1} ≈ D/h and hence D ≈ 2^{k−1} × h. See Chap. 9. For more details, consult [C2]. See Maple Program 13.7.2.

Example 13.11. Let Tx = {2x}. Then T preserves Lebesgue measure, and hence ρ(x) = 1. See the left graph in Fig. 13.2.

Example 13.12. (A piecewise linear transformation) Define T on [0, 1] by

Tx = 4x (mod 1) for 0 ≤ x < 1/2 , and Tx = 2x (mod 1) for 1/2 ≤ x ≤ 1 .

Then T preserves Lebesgue measure, and hence ρ(x) = 1. Since log |T′(x)| is equal to log 4 on (0, 1/2), and log 2 on (1/2, 1), the Lyapunov exponent is equal to

(1/2) log 4 + (1/2) log 2 = (3/2) log 2 ≈ 0.4515 .

See the middle graph in Fig. 13.2.
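The loss of about log10(2) decimal digits per iteration of the doubling map can be demonstrated directly; the following Python sketch (illustrative, not part of the text) compares a floating-point orbit with an exact rational orbit.

```python
from fractions import Fraction

# Iterating T x = 2x (mod 1) doubles the representation error of the seed
# at every step, so a 53-bit float orbit of 0.1 loses all accuracy after
# roughly 50 iterations, while exact rational arithmetic does not drift.
def T_float(x):
    return (2 * x) % 1.0

def T_exact(x):
    return (2 * x) % 1

x_f = 0.1                    # float: carries a representation error ~5.6e-18
x_e = Fraction(1, 10)        # exact
diverged_at = None
for n in range(1, 80):
    x_f, x_e = T_float(x_f), T_exact(x_e)
    if diverged_at is None and abs(x_f - float(x_e)) > 0.01:
        diverged_at = n

print(diverged_at)           # around 50: the initial error has doubled away
```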

Fig. 13.2. Plots of (x, R10(x) ε10(x)) with y = 2/ρ(x) for Tx = {2x}, a piecewise linear transformation and Tx = {βx} (from left to right)

Example 13.13. Let β = (√5 + 1)/2. Consider the β-transformation Tx = {βx}. Its invariant density is a step function. See the right graph in Fig. 13.2.

Example 13.14. The logistic transformation Tx = 4x(1 − x) has the invariant density ρ(x) = 1/(π√(x(1 − x))). Hence 2/ρ(x) = 2π√(x(1 − x)). See the left graph in Fig. 13.3.

Example 13.15. The Gauss transformation Tx = {1/x} has the invariant density ρ(x) = 1/((ln 2)(1 + x)). Hence 2/ρ(x) = 2(ln 2)(1 + x). See the middle graph in Fig. 13.3.

Example 13.16. (A generalized Gauss transformation) The invariant density for Tx = {1/x^p}, p = 1/4, is given in Fig. 9.7. There is a unique solution a ≈ 0.32472 of the equation T²x = x in the interval 0.1 < x < 0.7. If 0.30042 < x < 0.34799, then |T²x − x| < 2^{−10}, and hence Rk(x) = 2 except at x = a. See the right graph in Fig. 13.3.

Fig. 13.3. Plots of (x, R10(x) ε10(x)) with y = 2/ρ(x) for Tx = 4x(1 − x), Tx = {1/x} and Tx = {1/x^{0.25}} (from left to right)
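The invariant densities quoted in the examples above should each integrate to 1 over [0, 1]; a Python sketch (illustrative only) checks this with a midpoint Riemann sum.

```python
import math

# The logistic and Gauss invariant densities on [0, 1] are probability
# densities, so their integrals over [0, 1] equal 1.
def logistic_rho(x):
    return 1 / (math.pi * math.sqrt(x * (1 - x)))

def gauss_rho(x):
    return 1 / (math.log(2) * (1 + x))

def midpoint_integral(f, n=200000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

print(midpoint_integral(gauss_rho))     # essentially 1
print(midpoint_integral(logistic_rho))  # close to 1 (integrand is singular at 0 and 1)
```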


13.4 Singular Continuous Invariant Measures

Definition 13.17. Let µ be a measure on the real line. Put µ(I)/|I| …

13.6 Irrational Translations Mod 1

The set of the irrational numbers of type 1 has measure 1 and includes the irrational numbers with bounded partial quotients. For the proof see Theorem 32 in [Kh]. For simulations for irrational numbers θ of type 1 see Fig. 13.11, where graphs of y = qn ||qn θ||, 1 ≤ n ≤ 100, are plotted. Recall that

√2 = 1 + [2, 2, 2, 2, 2, 2, 2, 2, . . .] ,
√3 = 1 + [1, 2, 1, 2, 1, 2, 1, 2, . . .] ,
√7 = 2 + [1, 1, 1, 4, 1, 1, 1, 4, . . .] .

For simulations with θ of type τ > 1, consult Maple Program 13.7.4.

Fig. 13.11. y = qn ||qn θ|| for θ = √2 − 1, √3 − 1 and √7 − 2 (from left to right)

Example 13.24. Let θ = [a1, a2, a3, . . .] be the continued fraction expansion of an irrational number 0 < θ < 1. Suppose that ai = 2^{τ^i}, i ≥ 1, for some τ ≥ 1. Then θ is of type τ.
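The construction of Example 13.24 can be tested with exact rational arithmetic; the Python sketch below (an illustrative translation of the idea in Maple Program 13.7.4, not part of the text) builds the convergents for τ = 2 and checks the two-sided bound on ||q_i θ|| used in the proof.

```python
from fractions import Fraction
import math

# Example 13.24 with tau = 2: partial quotients a_i = 2^(2^i).  The proof
# gives 2^(-2^(i+2)+1) <= ||q_i theta|| <= 2^(-2^(i+2)+2); here theta is
# replaced by a deep convergent p_K/q_K, exactly as in the Maple program.
K = 6
a = [None] + [2 ** (2 ** i) for i in range(1, K + 1)]

p = {-1: 1, 0: 0}
q = {-1: 0, 0: 1}
for n in range(1, K + 1):
    p[n] = a[n] * p[n - 1] + p[n - 2]
    q[n] = a[n] * q[n - 1] + q[n - 2]

theta = Fraction(p[K], q[K])

def dist(t):                        # ||t||, computed exactly
    f = t - math.floor(t)
    return min(f, 1 - f)

for i in range(1, K - 1):           # the deepest convergents are unreliable
    d = dist(q[i] * theta)
    assert Fraction(1, 2 ** (2 ** (i + 2) - 1)) <= d <= Fraction(1, 2 ** (2 ** (i + 2) - 2))
print("bounds hold for i = 1 ..", K - 2)
```

Note that q_3 = 16644 here, which is exactly the return-time value appearing in the output of Maple Program 13.7.5.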

Proof. For notational simplicity we prove the fact only for τ = 2. Since q_{i+1} = a_{i+1} q_i + q_{i−1}, by induction we have

2^{2+···+2^i} ≤ q_i < q_i + q_{i−1} ≤ 2^{1+2+···+2^i} .

Since 2 + · · · + 2^i = 2^{i+1} − 2, we have

2^{2^{i+1}−2} ≤ q_i ≤ 2^{2^{i+1}−1} and 2^{2^{i+1}−2} ≤ q_i + q_{i−1} ≤ 2^{2^{i+1}−1} .

Hence

2^{−2^{i+2}+1} ≤ ||q_i θ|| ≤ 2^{−2^{i+2}+2} .

Thus for every ε > 0,

lim inf_{j→∞} j^{2−ε} ||jθ|| = lim inf_{i→∞} q_i^{2−ε} ||q_i θ|| ≤ lim inf_{i→∞} (2^{2^{i+1}−1})^{2−ε} 2^{−2^{i+2}+2} = lim inf_{i→∞} 2^{−(2^{i+1}−1)ε} = 0

and

lim inf_{j→∞} j^{2+ε} ||jθ|| = lim inf_{i→∞} q_i^{2+ε} ||q_i θ|| ≥ lim inf_{i→∞} (2^{2^{i+1}−2})^{2+ε} 2^{−2^{i+2}+1} = lim inf_{i→∞} 2^{(2^{i+1}−2)ε−3} > 0 .


Hence θ is of type 2.

For a simulation of Ex. 13.24 see Maple Program 13.7.5.

Lemma 13.25. (i) If n < − log ||θ||, then Rn = 1. (ii) If n > − log ||θ||, then there exists a unique integer i ≥ 1 such that

||q_i θ|| < 2^{−n} < ||q_{i−1} θ|| ,

and Rn = q_i.

Proof. (i) Use the fact 2^{−n} > ||θ||. (ii) Since ||q_i θ|| decreases monotonically to 0, there exists i such that ||q_i θ|| < 2^{−n} < ||q_{i−1} θ||. Fact 1.36(vi) implies ||q_{i−1} θ|| ≤ ||jθ|| for 1 ≤ j < q_i. Hence ||jθ|| > 2^{−n} for 1 ≤ j ≤ q_i − 1, and hence Rn ≥ q_i. Since ||q_i θ|| < 2^{−n}, we have Rn = q_i.

Theorem 13.26. For any irrational θ, we have

lim sup_{n→∞} (log Rn)/n = 1 .

Proof. Choose i such that ||q_i θ|| < 2^{−n} < ||q_{i−1} θ|| as in Lemma 13.25. Then 2^{−n} < ||q_{i−1} θ|| < q_i^{−1} and Rn = q_i < 2^n. Hence

lim sup_{n→∞} (log Rn)/n ≤ 1 .

For the other direction, suppose n_i satisfies 2^{−n_i} < ||q_{i−1} θ|| < 2^{−n_i+1}. Since ||jθ|| > 2^{−n_i} for j < q_i and

1/(2 q_i) < 1/(q_i + q_{i−1}) < ||q_{i−1} θ|| < 2^{−n_i+1} ,

we have R_{n_i} ≥ q_i > 2^{n_i−2}. Therefore

lim sup_{n→∞} (log Rn)/n ≥ lim sup_{i→∞} (log R_{n_i})/n_i ≥ 1 .
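Lemma 13.25 says that for an irrational rotation the return times Rn are exactly the convergent denominators; the following Python sketch (illustrative, not from the text) verifies this for the golden mean rotation by brute force.

```python
import math

# For T x = {x + theta}, Lemma 13.25 gives R_n = q_i, the convergent
# denominator with ||q_i theta|| < 2^-n < ||q_{i-1} theta||.  For the golden
# mean the q_i are the Fibonacci numbers, which we check by direct search.
theta = (math.sqrt(5) - 1) / 2          # continued fraction [1, 1, 1, ...]

def dist(t):                             # ||t||
    return abs(t - round(t))

qs = [1, 1]                              # Fibonacci denominators
while qs[-1] < 10**5:
    qs.append(qs[-1] + qs[-2])

for n in range(2, 16):
    # prediction: smallest convergent denominator with ||q theta|| < 2^-n
    predicted = next(q for q in qs if dist(q * theta) < 2**-n)
    # brute force: R_n = min{ j >= 1 : ||j theta|| <= 2^-n }
    j = 1
    while dist(j * theta) > 2**-n:
        j += 1
    assert j == predicted
print("R_n equals a Fibonacci denominator for n = 2 .. 15")
```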


Lemma 13.27. If β > 0 satisfies

lim inf_{n→∞} n^β ||nθ|| < ∞ ,

then

lim inf_{n→∞} (log Rn)/n ≤ 1/β .

Proof. Put L = lim inf_{n→∞} n^β ||nθ||. Then there exists a sequence {n_j}_{j=1}^∞ such that (n_j)^β ||n_j θ|| ≤ L + 1. Choose a constant c > 0 such that L + 1 ≤ 2^c. Take a sequence {m_j}_{j=1}^∞ satisfying 2^{m_j+c} ≤ (n_j)^β < 2^{m_j+c+1}. Then

||n_j θ|| ≤ (L + 1)/(n_j)^β ≤ 2^c 2^{−m_j−c} = 2^{−m_j} ,

and hence R_{m_j} ≤ n_j < 2^{(m_j+c+1)/β}. Therefore

lim inf_{n→∞} (log Rn)/n ≤ lim inf_{j→∞} (log R_{m_j})/m_j ≤ lim inf_{j→∞} (m_j + c + 1)/(β m_j) = 1/β .

Theorem 13.28. If θ is of type τ, then

lim inf_{n→∞} (log Rn)/n = 1/τ .

Proof. Since θ is of type τ, for every ε > 0 and j ≥ 1, there exists Cε > 0 such that j^{τ+ε} ||jθ|| ≥ Cε. By Lemma 13.25

q_i^{τ+ε} ≥ Cε/||q_i θ|| > 2^n Cε .

Hence (τ + ε) log q_i > n + log Cε and

lim inf_{n→∞} (log q_i)/n ≥ 1/(τ + ε)

for every ε > 0. By Lemma 13.25 we have

lim inf_{n→∞} (log Rn)/n = lim inf_{n→∞} (log q_i)/n ≥ 1/τ .

For the other direction, note that

lim inf_{j→∞} j^{τ−ε} ||jθ|| = 0 < ∞

for every ε > 0. Lemma 13.27 implies that for every ε > 0

lim inf_{n→∞} (log Rn)/n ≤ 1/(τ − ε) ,

and hence

lim inf_{n→∞} (log Rn)/n ≤ 1/τ .

Example 13.29. See Fig. 13.12 for simulations with irrational numbers of type τ = 3/2 and of type τ = 3. The horizontal lines indicate the graphs y = 1/τ. Theoretically in the right graph we expect that R40 ≈ 2^{40} ≈ 10^{12}, which is impossible to obtain in a computer experiment in any near future. It is estimated from a computer experiment which lasted for several hours that R40 ≥ 2988329. So instead of using the exact value for R40 we choose R40 = 2988329 to draw the graph. The true graph would have a very steep line segment on the interval 39 ≤ n ≤ 40. This agrees with the fact that the limit infimum is equal to 1/3. Consult Maple Program 13.7.5.


Fig. 13.12. y = (log Rn )/n for T x = {x + θ} with θ of type 3/2 (left) and of type 3 (right)

In conclusion we have the following result.

Corollary 13.30. lim_{n→∞} (log Rn)/n = 1 if and only if θ is of type 1.

Proof. Use Theorems 13.26, 13.28.

Fig. 13.13. y = (log Rn)/n for Tx = {x + θ} with θ = (√5 − 1)/2 (left) and θ = e − 2 (right)

For simulations see Fig. 13.13 and Maple Program 13.7.5. For closely related results see [CS],[KS]. Remark 13.31. Here is a heuristic argument on the dependence of the ﬁrst return time on the type of θ. Let T x = x + θ (mod 1). Suppose that there exists a sequence of rational numbers pn /qn , (pn , qn ) = 1, satisfying θ − pn < C 1 , qn qn 1+α for some C > 0 and α ≥ 1. Deﬁne Tn by Tn x = x +

pn qn

(mod 1) .

Then for 1 ≤ j < qn ,

j T x − Tn j x = jθ − jpn < C 1 . qn qn α

Thus T is approximated by Tn . Note that Tn has a period qn , and so we expect that T qn is close to the identity transformation for suﬃciently large n. For larger values of α the approximation is better and T behaves more like a periodic transformation. In this case the ﬁrst return time Rk is close to qn for reasonably large values of k. Hence Rk does not grow very fast. The type of θ determines the speed of the approximation by periodic transformations of T by Tn , which in turn determines the speed of the periodic approximation. For a related result, see [CFS].

13.7 Maple Programs

We investigate the properties of the metric version of the first return time, and simulate generalizations of Boshernitzan's theorem.


13.7.1 The recurrence error

We investigate the product of the return time and the recurrence error for an irrational translation modulo 1. (If we are interested only in finding the Hausdorff dimension, we do not need many significant digits as explained in Remark 13.22.)
> with(plots):
Since an irrational translation modulo 1 has zero Lyapunov exponent, we do not need many significant digits.
> Digits:=50:
Choose an irrational number θ.
> theta:=evalf(sqrt(5)-1)/2:
> T:= x->frac(x+theta):
> N:=20:
It is convenient to choose x0 = 1/2 as the starting point of an orbit. There is no need to use the definition of ||x|| in the following.
> X0:=0.5:
> for n from 1 to N do
> Xn:=T(X0):
> for j from 1 while abs(Xn-X0) > 1/2.^n do
> Xn:= T(Xn):
> od:
> R[n]:=j:
> Error[n]:=evalf(abs(Xn-X0),10):
> od:
> Digits:=10:
> listplot([seq([n,R[n]*Error[n]],n=1..N)],labels=["n",""]);
See Fig. 13.1.

13.7.2 The logistic transformation

Investigate the metric version of the first return time for the logistic transformation. (If we are interested only in finding the Hausdorff dimension, we do not need many significant digits as explained in Remark 13.22.)
> with(plots):
> T:=x->4.0*x*(1.0-x):
> rho:=x->1/(Pi*sqrt(x*(1-x))):
This program takes very long to finish. Try first n = 10.
> n:=14:
Choose the sample size.
> S:=1000:
> seed[0]:=evalf(Pi-3.0):
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:


Calculate the first return time.
> Digits:=10000:
Try Digits:=2000 first.
> for i from 1 to S do
> Xn:=T(seed[i]):
> for j from 1 while abs(Xn-seed[i]) > 2.0^(-n) do
> Xn:=T(Xn):
> od:
> Return[i]:=j;
> Error[i]:=evalf(abs(Xn-seed[i]),10):
> od:
> Digits:=10:
Plot the points (x, Rn(x) εn(x)).
> g1:=pointplot([seq([seed[i],Return[i]*Error[i]],i=1..S)]):
> g2:=plot(2/rho(x),x=0..1,labels=["x"," "]):
> display(g1,g2);
See Fig. 13.3 for the output with n = 10. Plot the points (x, log Rn(x)/n).
> pointplot([seq([seed[i],log[2](Return[i])/n],i=1..S)]);
See the left plot in Fig. 13.6. Plot the pdf of Rn.
> Bin:=100:
Find the maximum of the first return time.
> Max:=max(seq(Return[i],i=1..S));
Max := 87820
The following shows that for outliers the number of significant digits was not large enough.
> ceil(entropy*Max);
26437
Find the frequency corresponding to each bin.
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to S do
> slot:=ceil(Bin*Return[j]/Max):
> freq[slot]:=freq[slot]+1:
> od:
Plot the pdf.
> gA:=listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/S/Max],i=1..Bin)]):
> gB:=plot(0,x=0..Max*0.8,labels=["Rn","Prob"]):
> display(gA,gB);
See Fig. 13.14. The area under the graph should be equal to 1. Now find the pdf of (log Rn)/n.
> Bin2:=30:



Fig. 13.14. Pdf of Rn , n = 14, for T x = 4x(1 − x)

> Max2:=max(seq(log[2.0](Return[i])/n,i=1..S),1.5);
Max2 := 1.5
> add(log[2.0](Return[i])/n,i=1..S)/S;
0.8544914490
We will consider log Rn + ε for small ε > 0 to avoid the case when Rn = 1.
> epsilon:=10.^(-10):
> for k from 1 to Bin2 do freq2[k]:= 0: od:
> for j from 1 to S do
> slot2:=ceil(Bin2/Max2*(log[2.](Return[j])/n+epsilon)):
> freq2[slot2] := freq2[slot2]+1:
> od:
Plot the pdf.
> gC:=listplot([seq([(i-0.5)*Max2/Bin2,freq2[i]*Bin2/S/Max2],i=1..Bin2)]):
> gD:=plot(0,x=0..1.5,labels=[" ","Prob"]):
> display(gC,gD);
See the right graph in Fig. 13.6.

13.7.3 The Hénon mapping

Check the validity of the first return time formula for the Hénon mapping. For a related result consult [C3]. (If we are interested only in finding the dimension of the attractor, we do not need many digits. See Remark 13.22.)
> with(plots):
It takes very long to finish the program. Start the experiment with n = 3 and Digits:=100.
> n:=8:
> Digits:=20000:
> a:=1.4:
> b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
> seed[0]:=(0,0):


Throw away transient points.
> for i from 1 to 5000 do seed[0]:=T(seed[0]): od:
> SampleSize:=1000:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
Calculate the first return time Rn.
> for i from 1 to SampleSize do
> Xn:=T(seed[i]):
> for j from 1 while (Xn[1]-seed[i][1])^2+(Xn[2]-seed[i][2])^2 > 2^(-2*n) do
> Xn:=T(Xn):
> od:
> ReturnTime[i]:=j;
> od:
> Digits:=10:
Find the average of (log Rn)/n.
> add(log[2.](ReturnTime[i])/n,i=1..SampleSize)/SampleSize;
1.205964669
This result agrees with the fact that the Hausdorff dimension is greater than 1. In the following we can rotate the three-dimensional plot by moving the mouse with a button being pressed. See the left plot in Fig. 13.10.
> pointplot3d([seq([seed[i],log[2](ReturnTime[i])/n],i=1..SampleSize)],axes=normal);
Find the maximum of (log2 Rn)/n.
> Max:=max(seq(log[2.0](ReturnTime[i])/n,i=1..SampleSize)):
Divide the interval [0, Max] into 27 bins.
> Bin:=27:
> epsilon:=0.000000001:
Find the frequency corresponding to each bin.
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to SampleSize do
> slot:=ceil(Bin/Max*(log[2](ReturnTime[j])/n+epsilon)):
> freq[slot]:=freq[slot]+1:
> od:
Plot the pdf of (log Rn)/n.
> listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/SampleSize/Max],i=1..Bin)]);
See the right graph in Fig. 13.10.

13.7.4 Type of an irrational number

Construct an irrational number θ of type τ and check the validity of the construction.
> with(plots):


Let qn be the denominator of the nth convergent in the continued fraction expansion of θ. Compute qn ||qn θ||, 1 ≤ n ≤ K, K = 50. There may be a margin of error so we choose an integer slightly greater than 50 for K.
> K:=52:
Choose τ = 6/5.
> tau:=1.2;
τ := 1.2
Define the partial quotients of θ.
> for n from 1 to K do
> a[n]:=trunc(2^(tau^n)):
> od:
Find the numerators and the denominators of the first K convergents of θ. We use the recursive relation for the denominators of the convergents.
> q[-1]:=0:
> q[0]:=1:
> for n from 1 to K do
> q[n]:=a[n]*q[n-1]+q[n-2]:
> od:
Here is the recursive relation for the numerators of the convergents.
> p[-1]:=1:
> p[0]:=0:
> for n from 1 to K do
> p[n]:=a[n]*p[n-1]+p[n-2]:
> od:
Note that |θ − pn/qn| ≤ qn^{−2}. We use the nth convergent pn/qn for sufficiently large n in place of θ, and the error is less than qn^{−2}. Find the number of decimal digits to calculate qn^{−2}.
> evalf(2*log10(q[K]));
47334.62374
In the approximation of θ using its first K partial quotients, find an optimal number of significant decimal digits needed to ensure the accuracy.
> Digits:=ceil(%);
Digits := 47335
Define θ.
> theta:=evalf(p[K]/q[K]):
Calculate ||qn θ||.
> for n from 1 to K do
> dist[n]:=min(1-frac(q[n]*theta),frac(q[n]*theta)):
> od:
We needed high precision arithmetic in constructing θ, but we no longer need many significant digits in iterating the irrational translation modulo 1.
> Digits:=10:
Check whether the numerical value of ||qn θ|| is accurate for every n within an acceptable margin.

> seq([n,evalf(dist[n],10)],n=K-4..K);

[48, 0.3060616812·10^{−13695}], [49, 0.9570885589·10^{−16435}], [50, 0.3759446621·10^{−19722}], [51, 0.4876731853·10^{−23667}], [52, 0.1·10^{−23667}]
In the above output the value for n = 52 does not have enough significant digits so it should be discarded. Other values for n ≤ 51 seem to be acceptable. We can safely use the first L = 50 values.
> L:=50:
Plot the graph y = (qn)^τ ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^tau*dist[n]],n=1..L)]);
See the left graph in Fig. 13.15. Plot the graph y = (qn)^{τ−0.0001} ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^(tau-0.0001)*dist[n]],n=1..L)]);
See the middle graph in Fig. 13.15. Plot the graph y = (qn)^{τ+0.0001} ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^(tau+0.0001)*dist[n]],n=1..L)]);
See the right graph in Fig. 13.15.

Fig. 13.15. y = (qn)^τ ||qn θ||, y = (qn)^{τ−0.0001} ||qn θ|| and y = (qn)^{τ+0.0001} ||qn θ|| for τ = 6/5 (from left to right)

13.7.5 An irrational translation mod 1

Consider Tx = {x + θ} where θ is an irrational number of type τ = 2. For the construction of θ consult Maple Program 13.7.4.
> with(plots):
> K:=10:
> tau:=2:

> for n from 1 to K do
> a[n]:=trunc(2^(tau^n)):
> od:
> q[-1]:=0:
> q[0]:=1:
> for n from 1 to K do
> q[n]:=a[n]*q[n-1]+q[n-2]:
> od:
> evalf(2*log10(q[K])):
> Digits:=ceil(%);
Digits := 1232
> p[-1]:=1:
> p[0]:=0:
> for n from 1 to K do
> p[n]:=a[n]*p[n-1]+p[n-2]:
> od:
> theta:=evalf(p[K]/q[K]):
We do not need many significant digits.
> Digits:=100:
> T:=x->frac(x+theta):
> N:=30:
> evalf(1./2^N,10);
0.9313225746·10^{−9}
It is convenient to choose x0 = 1/2 as the starting point of an orbit.
> X0:=0.5:
Find the first return time Rn(x0). To speed up the computation we use the fact Rn(x0) ≥ Rn−1(x0) in the following do loop.
> Xn:=T(X0):
> R[0]:=1:
> for n from 1 to N do
> for j from R[n-1] do
> if abs(Xn - X0) < 1./2^n then R[n]:=j: break:
> else Xn:=T(Xn): fi:
> od:
> od:
> seq(R[n],n=1..N);
1, 1, 4, 4, 4, 4, 65, 65, 65, 65, 65, 65, 65, 65, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644
> Digits:=10:
Plot the graph y = (log2 Rn)/n, together with y = 1/τ.
> g1:=listplot([seq([n,log[2](R[n])/n], n=2..N)]):
> g2:=pointplot([seq([n,log[2](R[n])/n], n=2..N)]):
> g3:=plot(1/tau, x=0..N, labels=["n","(log Rn)/n"]):
> display(g1,g2,g3);


See Fig. 13.16.

Fig. 13.16. y = (log Rn)/n, 2 ≤ n ≤ 30, for Tx = {x + θ} with θ of type 2

14 Data Compression

Data compression in digital technology is explained in terms of entropy, and three compression algorithms are introduced: Huffman coding, Lempel–Ziv coding and arithmetic coding. Historically Huffman coding was invented first, but these days Lempel–Ziv coding is the most widely used. Arithmetic coding was introduced relatively recently. In our mathematical model a computer file under consideration is assumed to be an infinitely long binary sequence, and entropy is the best possible compression ratio.

Most computer users are familiar with compressed files. Usually computer software is stored in compressed format and it has to be decompressed before it is installed on a computer. This is done almost automatically nowadays. As another example, consider a binary source emitting a signal ('0' or '1') every second. (Imagine a really slow modem.) But the problem is that the signal has to be transmitted through a channel (imagine a telephone line) that can accept and send only 0.9 binary symbols per second. How can we send the information in this case? We use data compression so that the information content of our compressed signal is not changed but the length is reduced.

Throughout the chapter the codes are assumed to be binary and 'log' denotes the logarithm to the base 2 unless stated otherwise. For practical implementation of the data compression algorithms consult [HHJ],[We]. For the fax code see [Kuc]. For an elementary introduction to information theory see [Luc].

14.1 Coding

An alphabet is a finite set A = {s1, . . . , sk}. Its elements are called (source) symbols. We imagine a signal source that emits symbols si. Consider the set X of infinite sequences of symbols from A, i.e., X = Π_1^∞ A = A × A × · · · . Let A^n denote the set of blocks (or words) of length n made up of symbols from A and let A∗ = ∪_{n=1}^∞ A^n denote the set of all blocks of finite length. For example, {0, 1}∗ is the set of all finite sequences of 0's and 1's.


Definition 14.1. (i) A binary code for A is a mapping C : A → {0, 1}∗. For s ∈ A, C(s) is called the codeword for s and its length is denoted by ℓC(s). The average length L(C) of C is defined by

L(C) = Σ_{i=1}^k Pr(si) ℓC(si) ,

where Pr(si) is the probability of si.
(ii) A code C is called a prefix code if there do not exist two distinct symbols si, sj, such that C(si) is the beginning of C(sj).
(iii) Define a coding map C∗ : A∗ → {0, 1}∗ by C∗(si1 · · · sin) = C(si1) · · · C(sin) for a sequence of symbols si1 · · · sin. We sometimes identify a code C with C∗. The process of applying C∗ is called 'coding' or 'encoding', and if it is invertible then the reverse process is called 'decoding'. For a code C to be useful, it should be decodable.

Example 14.2. Let A = {s1, s2, s3}.
(i) Define a code C, which is not a prefix code, by

C(s1) = 0 , C(s2) = 1 , C(s3) = 01 .

(ii) Define a prefix code C′ by

C′(s1) = 00 , C′(s2) = 11 , C′(s3) = 01 .
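The practical difference between the two codes of Example 14.2 can be illustrated in a few lines of Python (a hypothetical sketch, not part of the text): the prefix code decodes greedily and uniquely, while for the first code the string 01 is ambiguous.

```python
# The prefix code C' of Example 14.2(ii): no codeword begins another, so a
# stream of bits can be decoded greedily, reading left to right.
C_prime = {"s1": "00", "s2": "11", "s3": "01"}

def encode(symbols, code):
    return "".join(code[s] for s in symbols)

def decode_prefix(bits, code):
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:      # for a prefix code the first match is the only one
            out.append(inverse[buf])
            buf = ""
    return out

msg = ["s3", "s1", "s2", "s3"]
bits = encode(msg, C_prime)
print(bits)                     # 01001101
assert decode_prefix(bits, C_prime) == msg
```

For the non-prefix code C of Example 14.2(i) the same greedy decoder would read 01 as s1 s2, with no way to recover the alternative source string s3.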

Prefix codes are uniquely decodable. If a sequence (or string) made up of C(s) is given, then the sequence (or string) of inverse images of C(s) can be reconstructed uniquely. In Ex. 14.2, C is not uniquely decodable since the string 01 may have come from two different strings s1 s2 and s3 by C. A binary code can be represented as a binary tree. A full binary tree of length M is a binary tree such that, except for the end nodes, every node has two nodes originating from it. Note that a full binary tree of length M has 2^M end nodes. For examples of binary trees see Sect. 14.3.

Theorem 14.3 (Kraft's inequality). Let A = {s1, . . . , sk} be an alphabet. If C is a binary prefix code for A, then

Σ_{i=1}^k 2^{−ℓC(si)} ≤ 1 .

Conversely, if a set of integers ℓi, 1 ≤ i ≤ k, satisfies

Σ_{i=1}^k 2^{−ℓi} ≤ 1 ,

then there exists a binary prefix code C for A with ℓC(si) = ℓi, 1 ≤ i ≤ k.


Proof. Let

    M = max_{1≤i≤k} ℓ_C(s_i) .

Recall that C is represented by a binary tree of length M whose end nodes correspond to codewords of C. Choose an arbitrary end node corresponding to C(s_i). Consider an imaginary subtree of length M − ℓ_C(s_i) originating from the node corresponding to s_i. It has 2^{M−ℓ_C(s_i)} end nodes. These subtrees do not overlap and all of them are part of a full binary tree of length M with 2^M end nodes. Hence

    Σ_{i=1}^{k} 2^{M−ℓ_C(s_i)} ≤ 2^M .

For the converse, suppose that ℓ_1 ≤ ... ≤ ℓ_k = M and consider the full binary tree F_M of length M. First, consider the subtree T_1 of length M − ℓ_1 whose end nodes are the first 2^{M−ℓ_1} end nodes of F_M. Define C(s_1) to be the codeword corresponding to the root of T_1. Proceed similarly for s_2 with the remaining 2^M − 2^{M−ℓ_1} end nodes of F_M, and for s_3 with the remaining 2^M − 2^{M−ℓ_1} − 2^{M−ℓ_2} end nodes of F_M. This is possible since Σ_{i=1}^{k} 2^{−ℓ_i} ≤ 1. □

Theorem 14.3 was obtained by Kraft [Kra] and McMillan.
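The converse construction is easy to test. The following Python sketch (Python rather than the book's Maple; the function names are ours) checks Kraft's inequality for a list of codeword lengths and assigns codewords by walking the end nodes of the full binary tree from left to right, as in the proof:

```python
def kraft_sum(lengths):
    """Sum of 2^(-l) over the codeword lengths l."""
    return sum(2.0 ** -l for l in lengths)

def prefix_code_from_lengths(lengths):
    """Build a binary prefix code with the given lengths, assuming the
    Kraft sum is at most 1, by taking leftmost unused subtrees."""
    assert kraft_sum(lengths) <= 1.0
    code, node, prev_len = [], 0, 0
    for l in sorted(lengths):
        node <<= (l - prev_len)              # descend to depth l
        code.append(format(node, f'0{l}b'))  # codeword = path to this node
        node += 1                            # skip past the used subtree
        prev_len = l
    return code

print(kraft_sum([2, 2, 2]))                  # 0.75 <= 1, as in Example 14.2(ii)
print(prefix_code_from_lengths([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

No codeword produced this way is a prefix of another, since each chosen node closes off its whole subtree.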

14.2 Entropy and Data Compression Ratio

In digital technology large data files, which are strings of binary symbols, are compressed to save hard disk space and communication time. Lossless compression of a data file shortens the length of the string using a method that guarantees the recovery of the original string. In other words, a lossless compression algorithm is a one-to-one mapping defined on the set of all finite or infinite strings. When we compress video images such as a digitized movie, the nonessential part is ignored and lost for better compression; this is called lossy compression. In this book we consider only lossless data compression, and compression means lossless compression. We define the compression ratio and the compression rate as follows:

    compression ratio = (length of compressed string) / (length of original string)

and compression rate = 1 − compression ratio. How much a computer file can be compressed or how much a message can be shortened depends on the amount of information or randomness contained in the given data. The more redundancy there is, the less information we have.
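As a trivial illustration of the two definitions (a Python sketch of ours, not one of the book's Maple programs):

```python
def compression_ratio(original_bits: str, compressed_bits: str) -> float:
    # length of compressed string divided by length of original string
    return len(compressed_bits) / len(original_bits)

def compression_rate(original_bits: str, compressed_bits: str) -> float:
    # compression rate = 1 - compression ratio
    return 1.0 - compression_ratio(original_bits, compressed_bits)

# A 300-bit file compressed to 229 bits, the outcome of Maple Program 14.6.1:
ratio = compression_ratio('0' * 300, '0' * 229)
print(round(ratio, 4))   # 0.7633
```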

Table 14.1. Correspondence of ideas

    Measure Theory              Information Theory
    an alphabet                 source symbols
    a shift space               infinitely long data files
    a cylinder set              a block
    a shift invariant measure   a stationary probability
    entropy                     compression ratio

This is why we need the concept of entropy. Information theory has its own language; see Table 14.1. If a probability distribution is given on A = {s_1, ..., s_k}, then the average quantity of information of A (or the entropy of A) is defined by

    H(A) = Σ_{i=1}^{k} Pr(s_i) log (1 / Pr(s_i)) ,

where Pr(s) is the probability of s. If there exists a shift-invariant measure (or a stationary probability) µ on X = ∏_{1}^{∞} A, then consider the nth average quantity of information of X defined by

    H_n(X) = H(A^n) = Σ_B Pr(B) log (1 / Pr(B)) ,

where the sum is taken over all blocks B of length n. This is nothing but the entropy of the nth refinement P_n = P ∨ T^{−1}P ∨ ··· ∨ T^{−(n−1)}P, where P is the partition of X by the cylinder sets of length 1 corresponding to the first coordinates. Recall that lim_{n→∞} (1/n) H_n(X) is the entropy of the shift transformation; see Chap. 8. If we have a Bernoulli measure on X, then

    H_n(X) = n H_1(X) = n H(A) ,

and the entropy is equal to H(A). So far we have defined the entropy of a partition while the measure is fixed. The problem with the real life application of entropy is to find an appropriate measure on the set of all possible strings made of 0's and 1's that appear in a given binary sequence x_0. To construct a mathematical model we assume that the length of x_0 is infinite, that is, x_0 ∈ X = ∏_{1}^{∞} {0,1}. The transformation under consideration is always the left shift mapping T, and so it is not mentioned explicitly in information theory. The Birkhoff Ergodic Theorem defines an ergodic shift invariant probability measure µ: the µ-measure of a cylinder set (or block)

    C = {x ∈ X : x_t = a_1, ..., x_{t+n−1} = a_n} = [a_1, ..., a_n]_{t,t+n−1}

is defined by


    µ(C) = lim_{k→∞} (1/k) Σ_{j=1}^{k} 1_C(T^{j−1} x_0) .

Equivalently, it is the relative frequency of the n-block a_1 ... a_n in x_0. Then µ is T-invariant and ergodic, and x_0 is a typical point for µ as far as the Birkhoff Ergodic Theorem is concerned. Note that even a single sequence x_0 defines a probability measure µ.
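The limit above can be approximated from a finite prefix of x_0 by counting overlapping occurrences of a block. Here is a Python sketch (the helper name and the short sample sequence are ours):

```python
from collections import Counter

def empirical_block_measure(x, n):
    """Approximate mu of every n-block from a finite prefix of x_0 by the
    relative frequency of its (overlapping) occurrences -- a finite-k
    version of the limit defining mu."""
    total = len(x) - n + 1
    counts = Counter(tuple(x[i:i + n]) for i in range(total))
    return {block: c / total for block, c in counts.items()}

x0 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # a finite piece of a binary sequence
mu = empirical_block_measure(x0, 2)
print(mu[(0, 1)])   # 3 occurrences among 9 windows -> 0.333...
```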

Lemma 14.4 (Gibbs' inequality). Let {p_1, ..., p_k} and {q_1, ..., q_k} be two probability distributions. Then

    − Σ_{i=1}^{k} p_i log p_i ≤ − Σ_{i=1}^{k} p_i log q_i .

The equality holds only when p_i = q_i for every i.

Proof. We may assume that p_i > 0 for every i. Since log t ≤ t − 1 for t > 0, we have

    log (q_i / p_i) ≤ q_i / p_i − 1

with equality only when p_i = q_i. Hence

    Σ_i p_i log (q_i / p_i) ≤ Σ_i p_i (q_i / p_i − 1) = Σ_i q_i − Σ_i p_i = 0 .

The equality holds only when p_i = q_i for every i. □

Corollary 14.5. Suppose that A has k symbols. Then H(A) ≤ log k. The equality holds only when p_i = 1/k for every i.

Proof. Let p_i be the probability of the ith symbol and take q_i = 1/k in Lemma 14.4. □

Theorem 14.6 (Source Coding Theorem). Let PC(A) denote the set of all prefix codes for an alphabet A = {s_1, ..., s_k} with p_i = Pr(s_i). Then

    H(A) ≤ inf_{C ∈ PC(A)} L(C) < H(A) + 1 .

Proof. For the lower bound, take an arbitrary prefix code C with lengths ℓ_i = ℓ_C(s_i) and put w = Σ_j 2^{−ℓ_j} and q_i = 2^{−ℓ_i}/w. By Kraft's inequality, w ≤ 1. Lemma 14.4 implies

    Σ_i p_i ℓ_i = Σ_i p_i (− log q_i − log w) = − Σ_i p_i log q_i − log w ≥ − Σ_i p_i log p_i = H(A) .

For the upper bound the problem is to minimize Σ_i p_i ℓ_i with the constraint Σ_i 2^{−ℓ_i} ≤ 1. Let ℓ_i = ⌈log(1/p_i)⌉, where ⌈t⌉ is the smallest integer greater than or equal to t. Then

    Σ_i 2^{−ℓ_i} = Σ_i 2^{−⌈log(1/p_i)⌉} ≤ Σ_i 2^{− log(1/p_i)} = Σ_i p_i = 1 .

Hence there exists a prefix code C_0 with lengths ℓ_i, 1 ≤ i ≤ k, and

    L(C_0) = Σ_i p_i ℓ_i < Σ_i p_i (log(1/p_i) + 1) = H(A) + 1 . □
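The code lengths ⌈log(1/p_i)⌉ used in the upper-bound argument can be tested directly. A Python sketch with an arbitrary sample distribution (names ours):

```python
from math import ceil, log2

def shannon_lengths(probs):
    # l_i = ceil(log2(1/p_i)), the lengths used in the proof of Theorem 14.6
    return [ceil(log2(1.0 / p)) for p in probs]

def avg_length(probs, lengths):
    return sum(p * l for p, l in zip(probs, lengths))

def entropy(probs):
    return -sum(p * log2(p) for p in probs)

p = [0.4, 0.3, 0.2, 0.1]          # an arbitrary example distribution
lengths = shannon_lengths(p)      # [2, 2, 3, 4]
H, L = entropy(p), avg_length(p, lengths)
assert sum(2.0 ** -l for l in lengths) <= 1.0   # Kraft's inequality holds
assert H <= L < H + 1                           # the bounds of Theorem 14.6
```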

Example 14.7. Consider a binary prefix code C : A → {0,1}* such that Pr(0) = 0.1, Pr(1) = 0.9. Since A = {0,1}, the coding map C is determined by C(0) and C(1). We have two choices for C with the minimal average length: either C(0) = 0, C(1) = 1 or C(0) = 1, C(1) = 0. In each case L(C) = 1 and the inequality of the preceding theorem is satisfied.

Let A = {0,1}, and consider A² = {00, 01, 10, 11}, which is the set of blocks of length 2. Assume that the (p, 1−p)-Bernoulli measure, where p = 0.4, is the shift invariant measure on X = ∏_{1}^{∞} A. Note that Pr(00) = 0.16, Pr(01) = Pr(10) = 0.24 and Pr(11) = 0.36. Among the probabilities of the blocks of length n, the difference between the smallest and the largest probabilities becomes greater and greater as n increases.

Theorem 8.19 on the Asymptotic Equipartition Property states that the probability of a typical block of length n is close to 2^{−nh}, where h is the entropy. Now, for sufficiently large n, we assign short codewords of length close to nh to typical n-blocks (there are approximately 2^{nh} of them) and long codewords to non-typical n-blocks to achieve optimal compression. On average an n-block is coded into a codeword of length nh since the set of non-typical blocks has negligible probability. Thus the average codeword length per input symbol is close to the entropy. More precisely, we have the following theorem:

Theorem 14.8. Given a finite alphabet A, let µ be a shift invariant probability measure on X = ∏_{1}^{∞} A. Let C^{(n)} be an optimal binary prefix code for A^n for each n. Then

    lim_{n→∞} L(C^{(n)}) / n = h ,

where h is the entropy of X with respect to µ.

Proof. Recall that A^n is identified with the set of blocks of length n and that Pr(B) = µ(B) for B ∈ A^n. By Theorem 14.6 applied to C^{(n)} we have

    H(A^n) ≤ L(C^{(n)}) < H(A^n) + 1 .

Hence

    H(A^n)/n ≤ L(C^{(n)})/n < H(A^n)/n + 1/n .

Now we take the limits as n → ∞. □


14.3 Huffman Coding

Huffman coding was invented by D.A. Huffman while he was a student at MIT around 1950. It assigns the shortest codeword to the most frequently appearing symbol. It was widely used before the invention of the Lempel–Ziv algorithm. Suppose that there are k symbols s_1, s_2, ..., s_k with the corresponding probabilities p_1 ≤ p_2 ≤ ... ≤ p_k. The following steps explain the algorithm for constructing a Huffman tree for k = 8:

The first step: Combine the two smallest probabilities p_1, p_2 to obtain p_{12} = p_1 + p_2 with a new symbol s_{12} that replaces s_1, s_2. Thus we have k − 1 symbols s_{12}, s_3, ..., s_k with corresponding probabilities p_{12}, p_3, ..., p_k. See Fig. 14.1.

[Fig. 14.1. The first step in the Huffman tree algorithm: the probabilities 0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2, with the two smallest combined into p_1 + p_2 = 0.1]

Intermediate steps: Rearrange p_{12}, p_3, ..., p_k in increasing order and the corresponding symbols accordingly. Go back to the first step and proceed inductively on k until we are left with two symbols. See Fig. 14.2.

[Fig. 14.2. The second step in the Huffman tree algorithm: the next merge produces p_1 + p_2 + p_3 = 0.2]

The last step: Draw a binary tree starting from the root. Assign 0 and 1 to the top left and top right branches. Similarly assign 0 and 1 when there is a splitting of a branch into two branches. To make the algorithm systematic we assign 0 to the left branch and 1 to the right. See Fig. 14.3.

[Fig. 14.3. The last step in the Huffman tree algorithm: the completed binary tree, with internal node sums 0.1, 0.2, 0.25, 0.35, 0.4, 0.6 and root 1]

Using the Huffman tree obtained above we define the coding map. For example, suppose we have eight source symbols with the probabilities given in Fig. 14.1. They are already in increasing order. In subsequent figures we gradually construct a binary tree, and finally in Fig. 14.4 we define a coding map C. Its formula is presented in Table 14.2. Note that L(C) = 2.9.

[Fig. 14.4. The coding map obtained from the Huffman tree algorithm: each left branch is labeled 0 and each right branch 1]


Table 14.2. A coding map C for Huffman coding

    Symbol   Probability   Codeword   ℓ_C(s_i)
    s_1      0.05          00000      5
    s_2      0.05          00001      5
    s_3      0.10          0001       4
    s_4      0.10          010        3
    s_5      0.15          011        3
    s_6      0.15          001        3
    s_7      0.20          10         2
    s_8      0.20          11         2

Note that the construction of the Huffman tree is not unique. Historically Huffman coding was widely used, but now Lempel–Ziv coding is mostly used since it does not need a priori information on the probability distribution of the source symbols. For more details see [HHJ], [Rom]. For better performance we regard blocks of length n as source symbols for some sufficiently large fixed n. For example, suppose we have a binary sequence a_1 a_2 a_3 .... Consider the nonoverlapping blocks of length n regarded as source symbols:

    a_1 ... a_n ,  a_{n+1} ... a_{2n} ,  a_{2n+1} ... a_{3n} ,

and so on. There are 2^n different such symbols, some of which might have zero probability. Label them in the lexicographic order as

    s_1 = 0...00 ,
    s_2 = 0...01 ,
    s_3 = 0...10 ,
    ...
    s_{2^n − 1} = 1...10 ,
    s_{2^n} = 1...11 .

Now we define an optimal coding map C^{(n)} using the Huffman tree and apply Theorem 14.8. Consult Maple Program 14.6.1. For other applications of the Huffman algorithm see Chap. 2 in [Brem].
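As a cross-check of the tree construction, the merging loop can be sketched compactly in Python (our own code, not the book's Maple program; ties may be broken differently than in the figures, so individual codeword lengths can differ from Table 14.2, but the minimal average length cannot):

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    # Repeatedly merge the two smallest probabilities, as in Sect. 14.3;
    # every symbol under the merged node moves one level deeper.
    tick = count()                                # tie-breaker for equal sums
    heap = [(p, next(tick), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)           # two smallest probabilities
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tick), s1 + s2))
    return lengths

p = [0.05, 0.05, 0.10, 0.10, 0.15, 0.15, 0.20, 0.20]
L = sum(pi * li for pi, li in zip(p, huffman_lengths(p)))
print(round(L, 6))   # 2.9, the average length of Table 14.2
```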

14.4 Lempel–Ziv Coding

Lempel–Ziv coding was first introduced in [LZ]. It is simple to implement. The algorithm has become popular as the standard algorithm for file compression because of its speed and efficiency. This code can be used without knowledge


of the probability distribution of the information source, and yet it will achieve an asymptotic compression ratio equal to the entropy of the information source. (Here by an information source we mean a shift space X = ∏_{1}^{∞} {0,1} with an ergodic shift invariant probability measure. A signal from an information source corresponds to a binary sequence x ∈ X. Receiving a signal means that we read x_1, x_2, x_3, ... sequentially. When we have a binary message, we regard it as a part of an infinitely long binary sequence.) The probability distribution of the source sequence is implicitly reflected in the first return time of the initial n-blocks through the Ornstein–Weiss formula. We will consider a binary source for the sake of simplicity.

A parsing of a binary string x_1 x_2 ... x_n is a division of the string into phrases, separated by commas. A distinct parsing is a parsing such that no two phrases are identical. The Lempel–Ziv algorithm described below gives a distinct parsing of the source sequence. The source sequence is sequentially parsed into strings that have not appeared so far. For example, if the string is 1011010100010..., we parse it as 1, 0, 11, 01, 010, 00, 10, .... After each comma, we read along the input sequence until we come to the shortest string that has not been marked before. Since this is the shortest such string, all its prefixes must have occurred earlier. In particular, the string consisting of all but the last bit of this string must have occurred earlier. See the dictionary given in Table 14.3. We code this phrase by giving the location of the prefix and the value of the last bit. For example, the code for the above sequence is

    (000,1) (000,0) (001,1) (010,1) (100,0) (010,0) (001,0)

where the first number of each pair gives the index of the prefix and the second number gives the last bit of the phrase. Obviously, if we parse more of the given sequence, then we should reserve more bits in computer memory for the allocation of the prefix.

Decoding of the coded sequence is straightforward. Let C_N be the number of phrases in the sequence obtained from the parsing of an input sequence of length N. We need log C_N bits to describe the location of the prefix to the phrase and 1 bit to describe the last bit. The total length of the compressed sequence is approximately C_N (log C_N + 1) bits. The following fact is known.

Theorem 14.9. Let X = ∏_{1}^{∞} {0,1} be the product space and let T be the shift transformation defined by T(x_1 x_2 ... x_k ...) = (x_2 x_3 ... x_{k+1} ...). Suppose that µ is an ergodic T-invariant probability measure. Then

    lim_{N→∞} C_N (log C_N + 1) / N = h .


Table 14.3. A dictionary obtained from Lempel–Ziv parsing

    Order of Appearance   Word
    1                     1
    2                     0
    3                     11
    4                     01
    5                     010
    6                     00
    7                     10
    ...                   ...

Hence for almost every sequence x ∈ X, the average length per symbol of the compressed sequence does not exceed the entropy of the sequence.

Here is a heuristic argument on the optimal performance of the Lempel–Ziv algorithm based on the Asymptotic Equipartition Property: We are given a finite binary sequence of length N. After it is parsed into C_N different blocks, the average block length is about

    n ≈ N / C_N .

Those C_N blocks may be regarded as typical blocks, and the Asymptotic Equipartition Property implies that there exist approximately 2^{(N/C_N)h} typical blocks of length n ≈ N/C_N. Hence we have C_N ≈ 2^{(N/C_N)h} and

    (C_N log C_N) / N ≈ h ,

which is what we have in Theorem 14.9. Here is another heuristic argument based on the Ornstein–Weiss formula on the first return time: Since the shift transformation is ergodic, if we start from one of the C_N typical subsets of length approximately equal to

    n ≈ N / C_N ,

then the orbit should visit almost all typical subsets almost equally often. Thus we expect R_n ≈ C_N, and hence

    h ≈ (log R_n) / n ≈ (log C_N) / (N / C_N) ≈ (C_N log C_N) / N .

See Maple Program 14.6.2.
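The parsing and its inverse fit in a few lines. A Python sketch of the dictionary construction (ours; the Maple version in Sect. 14.6.2 stores phrases with a leading marker bit instead of strings):

```python
def lz_parse(bits):
    """Sequentially parse 'bits' into phrases that have not appeared before;
    each phrase is coded as (dictionary index of its prefix, last bit)."""
    index = {'': 0}            # phrase -> order of appearance (0 = empty)
    code, current = [], ''
    for b in bits:
        current += b
        if current not in index:
            code.append((index[current[:-1]], b))  # prefix occurred earlier
            index[current] = len(index)
            current = ''
    return code                # a trailing incomplete phrase is dropped

def lz_decode(code):
    """Rebuild the source sequence from the (prefix index, bit) pairs."""
    phrases = ['']
    for i, b in code:
        phrases.append(phrases[i] + b)
    return ''.join(phrases[1:])

# The example of the text: 1011010100010 parses as 1, 0, 11, 01, 010, 00, 10
code = lz_parse('1011010100010')
print(code)   # [(0, '1'), (0, '0'), (1, '1'), (2, '1'), (4, '0'), (2, '0'), (1, '0')]
print(lz_decode(code) == '1011010100010')   # True
```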


14.5 Arithmetic Coding

Arithmetic coding is based on the simple fact that binary digital information corresponds to a real number through binary expansion. It was introduced by P. Elias in an unpublished work, then improved by R. Pasco [Pas] and J. Rissanen [Ris]. Practical algorithms for arithmetic coding have been found relatively recently. For the history see the appendix in [Lan]. For more information consult [Dr], [HHJ].

We are given an idealized infinitely long data file. It is represented by a typical sequence x_0 in X = ∏_{1}^{∞} {0,1}, endowed with the shift-invariant measure µ determined by the Birkhoff Ergodic Theorem. Let h denote the entropy of T. On the collection of blocks of length n the lexicographic ordering is given: the n-block 0...00 comes first and 0...01 next, and so on until we have the n-block 1...1 last. In other words, if a_k ∈ {0,1}, 1 ≤ k ≤ n, satisfies

    1 + Σ_{k=1}^{n} a_k 2^{k−1} = j ,

then a_1 ... a_n is the jth block. Let p_j denote the probability of the jth block. Consider the partition of [0,1] by the points t_0 = 0 and

    t_k = Σ_{j=1}^{k} p_j ,  1 ≤ k ≤ 2^n .

Note that t_{2^n} = 1. Now we have a correspondence between the set of all n-blocks and the set of the intervals

    I_1 = [t_0, t_1) ,  I_2 = [t_1, t_2) ,  ...,  I_{2^n} = [t_{2^n − 1}, t_{2^n}] .

The Asymptotic Equipartition Property implies that among the 2^n blocks of length n there are approximately 2^{nh} of them with measures approximately equal to 2^{−nh}. The other blocks form a set of negligible size. Equivalently, among the 2^n intervals I_1, ..., I_{2^n} there are about 2^{nh} of them with lengths approximately equal to 2^{−nh}. See Maple Program 14.6.3.

Now here is the basic idea behind arithmetic coding. Given an infinitely long data file represented by x_0 ∈ X, we consider the first n bits in x_0 = (a_1, ..., a_n, ...). With probability close to 1, the cylinder set C(n, x_0) = [a_1, ..., a_n]_{1,n} of length n containing x_0 is one of the typical n-blocks. Suppose that C(n, x_0) corresponds to an interval I_j, 1 ≤ j ≤ 2^n. Recall that the length of I_j is approximately equal to 2^{−nh}. Pick a dyadic rational number r(n, x_0) from I_j in such a way that the denominator of r(n, x_0) is minimal. This can be done by applying the bisection method repeatedly m times until we have 2^{−m} ≈ 2^{−nh}.


The arithmetic coding maps the first n bits in x_0 to the m bits in the binary expansion of r(n, x_0). The compression ratio is given by

    m / n ≈ h ,

which shows that arithmetic coding is optimal.
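The interval construction and the bisection step can be sketched as follows (a toy Python version of ours; for simplicity it handles a Bernoulli measure and a single n-block at a time):

```python
from itertools import product

def block_intervals(p, n):
    """Subintervals of [0,1], one per n-block, in the ordering of the text:
    block a_1...a_n has index j = 1 + sum a_k 2^(k-1); here p = Pr(0)."""
    blocks = sorted(product([0, 1], repeat=n),
                    key=lambda b: sum(a * 2 ** k for k, a in enumerate(b)))
    intervals, t = {}, 0.0
    for b in blocks:
        prob = 1.0
        for a in b:
            prob *= p if a == 0 else 1.0 - p
        intervals[b] = (t, t + prob)     # [t_{j-1}, t_j)
        t += prob
    return intervals

def arithmetic_encode(block, intervals):
    """Bisect [0,1] until a dyadic interval [a, a + 2^-m) fits inside the
    block's interval; the m bits of a are the code r(n, x0)."""
    lo, hi = intervals[block]
    bits, a, w = '', 0.0, 1.0
    while not (lo <= a and a + w <= hi):
        w /= 2.0
        if (lo + hi) / 2.0 >= a + w:     # keep the half containing the midpoint
            a += w
            bits += '1'
        else:
            bits += '0'
    return bits

# For the fair-coin measure every 3-block has probability 1/8 = 2^-3,
# so every 3-block is coded with exactly 3 bits (m = nh with h = 1):
iv = block_intervals(0.5, 3)
print({b: arithmetic_encode(b, iv) for b in iv})
```

For a biased measure the typical blocks get intervals of length about 2^{−nh} and hence codes of about nh bits, as claimed above.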

14.6 Maple Programs

We present Maple programs for two data compression algorithms: Huffman coding and Lempel–Ziv coding. Readers should increase the lengths of the input sequences to observe that the compression ratio really converges to the entropy. A method to visualize typical subsets in arithmetic coding is also given.

14.6.1 Huffman coding

The following Maple program for the Huffman algorithm is a slight modification of a program written by B.K. Seo.
> z:=4:
> p:=1./z;
        p := 0.2500000000
> q:=1-p;
        q := 0.7500000000
Find the entropy of the (p, q)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-q*log[2.](q);
        entropy := 0.8112781245
Choose the number of bits, which is the length of a binary sequence.
> N:=300:
Generate a typical (p, q)-Bernoulli sequence of length N. Increase N gradually to check the convergence of the compression ratio to the entropy.
> ran:=rand(0..z-1):
> for i from 1 to N do
>   x[i]:=ceil(ran()/(z-1)):
> od:
Print the first 30 bits.
> seq(x[j],j=1..30);
        1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
The following number should be close to q.
> evalf(add(x[j],j=1..N)/N);
        0.7833333333
Choose a divisor of N.


> n:=3:
Divide the sequence x[i] into blocks of length n. Find the number of those blocks.
> Number_block:=N/n;
        Number_block := 100
List the n-blocks obtained from the sequence x[i].
> for i from 1 to N by n do
>   for j from 0 to n-1 do
>     printf("%d",x[i+j]);
>   od;
>   printf("%c"," ");
> od;

        111 101 101 111 101 101 111 111 111 111 110 111 001 111 101 011 111
        011 111 111 101 111 110 111 111 111 100 111 101 110 111 111 010 110
        110 111 111 110 011 110 101 110 101 101 110 110 100 101 111 111 100
        111 110 011 111 011 111 111 111 100 010 011 110 111 110 111 001 111
        111 111 011 111 101 101 110 101 011 111 111 111 111 111 011 101 110
        111 111 011 111 110 101 110 110 111 111 010 111 011 111 001

Calculate the frequencies of the n-blocks.
> for k from 0 to 2^n-1 do
>   freq[k]:=0:
> od:
> for i from 1 to N by n do
>   k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>   freq[k]:=freq[k]+1:
> od:
Two branches in a Huffman tree meet at a node. To each node we assign three objects A, B, C, where A is the branches under the node, B is the set of source symbols under the node, and C is the sum of frequencies under the node.
> for i from 0 to 2^n-1 do
>   Node[i]:=[i,{i},freq[i]]:
> od:
> seq(Node[i],i=0..2^n-1);
        [0, {0}, 0], [1, {1}, 3], [2, {2}, 3], [3, {3}, 11], [4, {4}, 4],
        [5, {5}, 16], [6, {6}, 18], [7, {7}, 45]
List the frequencies of the n-blocks.
> for i from 0 to 2^n-1 do
>   printf("%c","(");
>   for j from 1 to n do
>     printf("%d",modp(floor(Node[i][1]/2^(n-j)),2));
>   od;
>   printf(",%d",Node[i][3]);
>   printf("%c ",")");
> od;
        (000,0) (001,3) (010,3) (011,11) (100,4) (101,16) (110,18) (111,45)


Sort the n-blocks according to the frequencies.
> for i from 0 to 2^n-1 do
>   for j from 0 to 2^n-2-i do
>     if Node[j][3] > Node[j+1][3] then
>       temp:=Node[j];
>       Node[j]:=Node[j+1];
>       Node[j+1]:=temp;
>     fi:
>   od;
> od;
List the n-blocks together with their frequencies in the increasing order of frequencies.
> for i from 0 to 2^n-1 do
>   printf("%c", "(");
>   for j from 1 to n do
>     printf("%d", modp(floor(Node[i][1]/2^(n-j)),2));
>   od;
>   printf(",%d", Node[i][3]);
>   printf("%c ", ")");
> od;
        (000,0) (001,3) (010,3) (100,4) (011,11) (101,16) (110,18) (111,45)

Build the Huffman tree and define the coding map at the same time.
> for i from 0 to 2^n-1 do
>   Code[i] := [0,0]:
> od:
We need commands for set operations: union, in, nops. See Subsect. 1.8.2.
> for i from 0 to 2^n-2 do
>   for j from 1 to nops(Node[i][2]) do
>     Code[Node[i][2][j]][1] := Code[Node[i][2][j]][1]
>         + 0*2^(Code[Node[i][2][j]][2]):
>     Code[Node[i][2][j]][2] := Code[Node[i][2][j]][2] + 1:
>   od:
>   for j from 1 to nops(Node[i+1][2]) do
>     Code[Node[i+1][2][j]][1] := Code[Node[i+1][2][j]][1]
>         + 1*2^(Code[Node[i+1][2][j]][2]):
>     Code[Node[i+1][2][j]][2] := Code[Node[i+1][2][j]][2] + 1:
>   od:
>   Node[i+1][1] := [Node[i][1],Node[i+1][1]]:
>   Node[i+1][2] := Node[i][2] union Node[i+1][2]:
>   Node[i+1][3] := Node[i][3] + Node[i+1][3]:
>   for j from i+1 to 2^n-2 do
>     if Node[j][3] >= Node[j+1][3] then
>       temp:=Node[j]:
>       Node[j]:=Node[j+1]:
>       Node[j+1]:=temp:
>     fi:
>   od:
> od:


The following information is equivalent to the Huffman tree that we look for.
> Node[2^n-1][1];
        [7, [[[4, [2, [0, 1]]], 3], [5, 6]]]
Check the additional information.
> Node[2^n-1][2];
        {0, 1, 2, 3, 4, 5, 6, 7}
> Node[2^n-1][3];
        100
> seq(Code[i],i=0..2^n-1);
        [38, 6], [39, 6], [18, 5], [5, 3], [8, 4], [6, 3], [7, 3], [0, 1]
> for i from 0 to 2^n-1 do
>   printf("%c"," ");
>   for j from 1 to n do
>     printf("%d",modp(floor(i/2^(n-j)),2));
>   od;
>   printf("-->");
>   for j from 1 to Code[i][2] do
>     printf("%d",modp(floor(Code[i][1]/2^(Code[i][2]-j)),2));
>   od;
>   printf("%c"," ");
> od:
        000-->100110 001-->100111 010-->10010 011-->101
        100-->1000 101-->110 110-->111 111-->0
In the following c[i], 1 ≤ i ≤ L, is the coded sequence.
> L:=0:
> for i from 1 to N by n do
>   k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>   s:=Code[k][1]:
>   for j from 0 to Code[k][2]-1 do
>     c[L+Code[k][2]-j]:=modp(s,2):
>     s:=floor(s/2):
>   od:
>   L:=L+Code[k][2]:
> od:
The following is the length of the coded sequence.
> L;
        229
> for i from 1 to L do
>   printf("%d",c[i]):
> od;
        011011001101100000111010011101101010101001100111000100001101
        110010010111111001111011111101111101101111111000110001000011
        110101010001000100101011110111010011100010101101101111101010
        0000101110111001010111110111111001001001010100111


Compare the following with entropy.
> Compression_ratio:=evalf(L/N);
        Compression_ratio := 0.7633333333
Define the decoding map.
> for i from 0 to 2^n-1 do
>   Decode(Code[i]) := i:
> od:
The following defines a set. Note that the order structure is now lost.
> Code_words:={seq(Code[i],i=0..2^n-1)};
        Code_words := {[0, 1], [6, 3], [7, 3], [38, 6], [39, 6], [18, 5], [5, 3], [8, 4]}
> M:=0:
> w:=0:
> d:=0:
> for i from 1 to L do
>   w:=w*2+c[i]:
>   d:=d+1:
>   if [w,d] in Code_words then
>     for j from 1 to n do
>       y[M+j]:=modp(floor(Decode([w,d])/2^(n-j)),2):
>     od:
>     M:=M+n:
>     w:=0:
>     d:=0:
>   fi:
> od:
The following is the length of the decoded sequence.
> M;
        300
Check whether the decoding recovers the original input sequence.
> add(abs(x[i]-y[i]),i=1..N);
        0

14.6.2 Lempel–Ziv coding

A Maple program for the Lempel–Ziv algorithm is presented.
> z:=5:
> p:=1/z:
> q:=1.-p:
Calculate the entropy of the (p, q)-Bernoulli shift.
> entropy:=-p*log[2.](p)-q*log[2.](q);
        entropy := 0.7219280949
> ran:=rand(0..z-1):
Choose the length of an input sequence.
> N:=50:


Choose a considerably larger value for N to check the convergence of the compression ratio to the entropy as N → ∞.
> for i from 1 to N do
>   x[i]:=ceil(ran()/(z-1)):
> od:
Generate a typical (p, q)-Bernoulli sequence of length N.
> seq(x[i],i=1..N);
        1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0
Calculate the relative frequency of the bit '1', which should be approximately equal to q.
> evalf(add(x[i],i=1..N)/N);
        0.7800000000
> C:=0:
After an input sequence is parsed, a phrase is regarded as a binary number, which is again represented as a binary block. To distinguish blocks such as '01' and '1' we place the leading bit '1' in front of them as a marker, so the two blocks '01' and '1' are now written as '101' and '11'.
> for i from 1 to N do
>   block:=2+x[i]:
>   address:=0:
>   for j from 1 to C do
>     if block=phrase[j] then
>       address:=j:
>       i:=i+1:
>       block:=block*2+x[i]:
>     fi:
>   od:
>   C:=C+1:
>   phrase[C]:=block:
>   LZ[C]:=[address,x[i]]:
> od:
Find the number of parsed phrases.
> C;
        16
> for i from 1 to C do
>   num_bit[i]:=max(1,ceil(log[2.](max(1,LZ[i][1]))))+1;
> od:
> seq(num_bit[i],i=1..C);
        2, 2, 2, 3, 3, 2, 2, 4, 4, 5, 5, 4, 4, 4, 5, 3
> seq(phrase[i],i=1..C);
        3, 2, 7, 14, 15, 5, 6, 13, 31, 63, 127, 11, 26, 27, 23, 28 + x51
For the sequence of parsed phrases, delete the leading bit '1' in the following binary blocks.
> seq(convert(phrase[i],binary),i=1..C-1);


11, 10, 111, 1110, 1111, 101, 110, 1101, 11111, 111111, 1111111, 1011, 11010, 11011, 10111 > > >

> for i from 1 to C do
>   power[i]:=trunc(log[2.](phrase[i]))+1;
> od:

Find the parsed phrases.
> for i from 1 to C-1 do
>   for j from 2 to power[i] do
>     printf("%d",modp(floor(phrase[i]/2^(power[i]-j)),2)):
>   od:
>   printf("%c", " "):
> od:
        1 0 11 110 111 01 10 101 1111 11111 111111 011 1010 1011 0111

Here is the Lempel–Ziv dictionary.
> seq(LZ[i],i=1..C);

        [0, 1], [0, 0], [1, 1], [3, 0], [3, 1], [2, 1], [1, 0], [7, 1],
        [5, 1], [9, 1], [10, 1], [6, 1], [8, 0], [8, 1], [12, 1], [4, x51]
Compare the following with entropy.
> Compression_ratio:=evalf(add(num_bit[i],i=1..C)/N);

        Compression_ratio := 1.080000000
The compression ratio is greater than 1! We need a longer input sequence to observe compression in this case. Now we decode the compressed sequence back to the original sequence.
> n:=0:
> for i from 1 to C do
>   a := LZ[i]:
>   for j from 1 to i do
>     Decoded_Bit[j] := a[2]:
>     if a[1] = 0 then leng := j: break:
>     else a := LZ[a[1]]:
>     fi:
>   od:
>   for k from 1 to leng do
>     y[n+k] := Decoded_Bit[leng-k+1]:
>   od:
>   n := n + leng:
> od:
In the preceding program the command break is used to exit from the do loop on j.
> seq(y[i],i=1..N);
        1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
        1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0


14.6.3 Typical intervals arising from arithmetic coding

We present a method of visualizing the intervals corresponding to the typical blocks in the Asymptotic Equipartition Property.
> with(plots):
> with(plottools):
Consider the (p, 1 − p)-Bernoulli shift.
> z:=3;
        z := 3
> p:=1./z;
        p := 0.3333333333
Find the entropy of the (p, 1 − p)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-(1-p)*log[2.](1-p);
        entropy := 0.9182958340
Choose the length of a binary sequence.
> n:=7:
> 2^n;
        128
Generate a typical (p, 1 − p)-Bernoulli sequence of length n.
> ran:=rand(0..z-1):
> for i from 1 to n do
>   a[i]:=ceil(ran()/(z-1)):
> od:
The string a_1 ... a_n is regarded as a binary integer. It is converted into a decimal integer J, 0 ≤ J < 2^n.
> seq(a[i],i=1..n);
        0, 1, 0, 1, 1, 1, 1
> J:=add(a[k]*2^(k-1),k=1..n);
        J := 122
Calculate the probabilities of the n-blocks.
> for j from 0 to 2^n-1 do
>   temp:=j:
>   for k from 1 to n do
>     b[n-k+1]:=modp(temp,2):
>     temp:=(temp-b[n-k+1])/2:
>   od:
>   num_1:=add(b[k],k=1..n):
>   prob[j]:=p^(n-num_1)*(1-p)^num_1:
> od:
The sum of all probabilities should be equal to 1.
> add(prob[j],j=0..2^n-1);
        1.000000000
Calculate the cumulative probabilities.


> t[-1]:=0:
> for j from 0 to 2^n-1 do
>   t[j]:=t[j-1] + prob[j]:
> od:
The following should be equal to 1.
> t[2^n-1];
        1.000000000
Draw the short vertical lines dividing the intervals.
> L[-1]:=line([0,-0.1],[0,0.1]):
> for j from 0 to 2^n-1 do
>   L[j]:=line([t[j],-0.1],[t[j],0.1]):
> od:
Plot the representation of the partition of [0, 1] by 2^n intervals.
> g1:=display(seq(L[j],j=-1..2^n-1)):
> g2:=plot(0,x=0..1,y=-0.5..0.5,axes=none):
> g_circ:=pointplot([(t[J-1]+t[J])/2,0.15],symbol=circle):
> display(g1,g2,g_circ);
The Jth interval under consideration is indicated by a circle. See Fig. 14.5.

[Fig. 14.5. A partition of the unit interval in the arithmetic coding; the axis runs from 0 to 1]

Calculate an approximate number of the typical intervals corresponding to the typical n-blocks.
> round(2^(n*entropy));
        86
Calculate a value that approximates the lengths of the intervals corresponding to the typical n-blocks.
> evalf(2^(-n*entropy));
        0.01161335932
> prob[J];
        0.01463191587


In Memory of My Father

Preface

This book is about probabilistic aspects of discrete dynamical systems and their computer simulations. Basic measure theory is used as a theoretical tool to describe probabilistic phenomena. All of the simulations are done using the mathematical software Maple, which enables us to translate theoretical ideas into computer programs almost word for word, avoiding extraneous technical details. The required familiarity with computer programming is kept minimal. Thus even theoretical mathematicians can appreciate the experimental nature of one of the fascinating areas in mathematics.

A new philosophy is introduced even for those who are already familiar with numerical simulations of dynamical systems. Most computer experiments in this book employ many significant digits, depending on the entropy of the given transformation, so that they can still produce meaningful results after many iterations. This is why we can do experiments that have not been tried elsewhere before. It is the main point of the book. Readers should regard the Maple programs as being as important as the theoretical explanations of mathematical facts. The book is designed in such a way that the theoretical and experimental parts complement each other.

A dynamical system is a set of points together with a transformation rule that describes the movement of the points. Sometimes the points lose their geometric meaning and are just elements of an abstract set that has certain structures depending on the type of the problem under consideration. The theory of dynamical systems studies the long-term statistical behavior of the points under the transformation rules. When the given set of points has a probabilistic structure, the theory of dynamical systems is called ergodic theory. The aim of the book is to study computational aspects of ergodic theory, including data compression. Many mathematical concepts introduced in the book are illustrated by computer simulations.
That gives a diﬀerent ﬂavor to ergodic theory, and may attract a new generation of students who like to play with computers. Some theorems look diﬀerent, at least to beginners, when the abstract facts are explained in terms of computer simulations. Many computer experiments


in the book are numerical simulations, but others are based on symbolic computation. Readers are strongly encouraged to try all the experiments. Students with less mathematical background may skip some of the technical proofs and concentrate on the computer experiments instead.

The author wishes the book to be interdisciplinary and easily accessible to people with diverse backgrounds: for example, students and researchers in science and engineering who want to study discrete dynamical systems, or anyone interested in the theoretical background of data compression, even though the book is intended mainly for graduate students and advanced undergraduates in pure and applied mathematics. Mathematical rigor is maintained throughout, but the statements of facts and theorems can be easily understood by looking at the included figures and tables or by doing the computer simulations given at the end of each chapter. This enables even students outside mathematics to understand the concepts introduced in the book.

Maple programs for the computer experiments presented in a chapter are collected in the last section of the same chapter. Each Maple program is divided into as many small components as possible, so that even readers with no experience in programming can see how a mathematical idea is translated into a series of Maple commands.

Four fundamental ideas of ergodic theory are explained in the book: the Birkhoff Ergodic Theorem on uniform distribution in Chapter 3, the Shannon–McMillan–Breiman Theorem on entropy in Chapter 8, the Lyapunov exponent of a differentiable mapping in Chapters 9 and 10, and the first return time formula for entropy by Wyner, Ziv, Ornstein and Weiss in Chapter 12. Chapter 1 is a collection of mathematical facts and Maple commands. Chapter 2 introduces measure preserving transformations. Chapter 4 explains the Central Limit Theorem for dynamical systems. Chapter 5 presents miscellaneous facts related to ergodicity, including Kac's lemma on the first return time; it also covers the mixing property of a transformation. In Chapter 6 homeomorphisms of the unit circle are studied. Chapter 7, on mod 2 normal numbers, is almost independent of the other chapters. Chapter 11 illustrates how to sketch stable and unstable manifolds, whose tangential directions can be obtained using the method in Chapter 10. Chapter 13 discusses the relation between recurrence time and Hausdorff dimension. Chapter 14 introduces data compression at an elementary level; this topic does not belong to conventional ergodic theory, but readers will find that compression algorithms are closely related to many ideas in earlier chapters. There are many important topics not covered in this book, and readers are encouraged to move on to other books later.

This book is an outgrowth of the author's efforts, over more than ten years, to test the possibility of using computer experiments in studying rigorous theoretical mathematics. During that period the clock speed of the central processing units of personal computers increased about a hundredfold. Now it takes at most a few minutes to run most of the Maple programs in the book. Maple not only does symbolic computations but also allows us to use a


practically unlimited number of significant digits in floating point calculations. Because of the sensitive dependence on initial conditions in nonlinear dynamical systems, a computer experiment can produce meaningless output after many iterations of a transformation. Once we begin with sufficiently many digits, however, the iterations can be carried out without paying much attention to the sensitive dependence on initial data. The optimal number of significant digits can be given in terms of the Lyapunov exponent.

Mathematical software such as Maple is an indispensable tool for anyone who studies nonlinear phenomena through computer simulations with sufficiently many significant digits. It can be compared to a microscope for a biologist or a telescope for an astronomer. Because of the sensitive dependence on initial data, even a slight change in the starting point of an iteration can have an enormous cumulative effect on its orbit. That is why we need high precision to specify a seed for iteration, and this may be regarded as using a microscope. As for the second aspect, once we begin a computer experiment with sufficiently many significant digits we can observe reasonably distant future orbit points within an acceptable margin of error for practical purposes. In this respect a mathematician may be compared to an astronomer using a telescope.

Comments from readers are welcome. For supplementary materials on the topics covered in the book, including Maple programs, please check the author's homepage http://shannon.kaist.ac.kr/choe/ or send email to [email protected]
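The loss of significant digits under iteration is easy to see for oneself. The sketch below (in Python rather than the book's Maple; Python's decimal module plays the role Maple's Digits setting plays in the book) iterates the logistic map T(x) = 4x(1−x), whose Lyapunov exponent is log 2, so ordinary 53-bit floating point loses roughly one binary digit of accuracy per step and becomes meaningless after about 50 iterations, while a 150-digit orbit remains accurate far longer.

```python
from decimal import Decimal, getcontext

def logistic_orbit_float(x0, n):
    """Iterate T(x) = 4x(1-x) in ordinary double precision."""
    x = x0
    orbit = [x]
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        orbit.append(x)
    return orbit

def logistic_orbit_decimal(x0, n, digits=150):
    """Iterate T(x) = 4x(1-x) with `digits` significant decimal digits."""
    getcontext().prec = digits
    x = Decimal(x0)
    orbit = [x]
    four, one = Decimal(4), Decimal(1)
    for _ in range(n):
        x = four * x * (one - x)
        orbit.append(x)
    return orbit

lo = logistic_orbit_float(0.3, 100)
hi = logistic_orbit_decimal('0.3', 100, digits=150)
# The two orbits start from (essentially) the same seed; the tiny
# initial rounding error is amplified by a factor of about 2 per step.
for n in (20, 100):
    print(n, abs(float(hi[n]) - lo[n]))
```

After 20 iterations the two orbits still agree to many digits; after 100 iterations the double-precision orbit carries no information about the true one.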

Geon Ho Choe

Acknowledgements

The first draft of this book was written while the author was on sabbatical leave at the University of California, Berkeley during the 1998–1999 academic year. He wishes to thank his former thesis advisor Henry Helson for his warm encouragement over many years, for sharing his office, and for correcting grammatical errors in the manuscript. The author also thanks William Bade and Marc Rieffel, who helped him in times of need many years ago. He wishes to thank Kyewon Koh Park for introducing him to Ornstein and Weiss's paper on the first return time, and for many other things.

Although today's personal computers are fast enough for most experiments presented in this book, that was not the case in the past. The author had to upgrade his hardware and software many times as computers became faster and faster over the years. The following organizations provided the necessary funds: Korea Advanced Institute of Science and Technology, the Global Analysis Research Center at Seoul National University, the Korea Science and Engineering Foundation, the Ministry of Information and Communication, and the Korea Research Foundation. The final stage of revision was done while the author was staying at Imperial College London.

The author thanks the following people who have helped him in writing the book: Dong Pyo Chi, Jaigyoung Choe, Yun Sung Choi, Karma Dajani, Michael Eastham, John Elgin, Michael Field, Stefano Galatolo, William Geller, Young Hwa Ha, Toshihiro Hamachi, Boris Hasselblatt, Peter Hellekalek, Huyi Hu, Shunsuke Ihara, Yuji Ito, the late Anzelm Iwanik, Brunon Kaminski, Dohan Kim, Dong-Gyeom Kim, Hong Jong Kim, Soon-Ki Kim, Young-One Kim, Cor Kraaikamp, Jeroen Lamb, Jungseob Lee, Mariusz Lemańczyk, Stefano Luzzatto, Stefano Marmi, Makoto Mori, Hitoshi Nakada, Masakazu Nasu, Shigeyoshi Ogawa, Khashayar Pakdaman, Karl Petersen, Norbert Riedel, Klaus Schmidt, Jörg Schmeling, Fritz Schweiger, William A. Veech, Dalibor Volný, Peter Walters, the colleagues at KAIST including Sujin Shin, and many other teachers and friends. The author also thanks the many students who have attended his classes, and his former and current students Young-Ho Ahn, Hyun Jin Jang, Myeong Geun


Jeong, Dae Hyeon Kang, Mihyun Kang, Bong Jo Kim, Chihurn Kim, Dong Han Kim, Miseon Lee and Byoung Ki Seo for their help in various ways. The staff at Springer have been very supportive and patient for almost six years. The author thanks the editor Martin Peters, the publishing director Joachim Heinze, Ruth Allewelt and Claudia Rau. He is also grateful to several anonymous reviewers who gave many helpful suggestions at various stages. The author thanks his mother and mother-in-law for their love and care. Finally he wishes to thank his wife for her love and patience.

Contents

1 Prerequisites
  1.1 Point Set Topology
    1.1.1 Sets and functions
    1.1.2 Metric spaces
  1.2 Measures and Lebesgue Integration
    1.2.1 Measure
    1.2.2 Lebesgue integration
  1.3 Nonnegative Matrices
    1.3.1 Perron–Frobenius Theorem
    1.3.2 Stochastic matrices
  1.4 Compact Abelian Groups and Characters
    1.4.1 Compact abelian groups
    1.4.2 Characters
    1.4.3 Fourier series
    1.4.4 Endomorphisms of a torus
  1.5 Continued Fractions
  1.6 Statistics and Probability
    1.6.1 Independent random variables
    1.6.2 Change of variables formula
    1.6.3 Statistical laws
  1.7 Random Number Generators
    1.7.1 What are random numbers?
    1.7.2 How to generate random numbers
    1.7.3 Random number generator in Maple
  1.8 Basic Maple Commands
    1.8.1 Restart, colon, semicolon, Digits
    1.8.2 Set theory and logic
    1.8.3 Arithmetic
    1.8.4 How to plot a graph
    1.8.5 Calculus: differentiation and integration
    1.8.6 Fractional and integral parts of real numbers
    1.8.7 Sequences and printf
    1.8.8 Linear algebra
    1.8.9 Fourier series
    1.8.10 Continued fractions I
    1.8.11 Continued fractions II
    1.8.12 Statistics and probability
    1.8.13 Lattice structures of linear congruential generators
    1.8.14 Random number generators in Maple
    1.8.15 Monte Carlo method

2 Invariant Measures
  2.1 Invariant Measures
  2.2 Other Types of Continued Fractions
  2.3 Shift Transformations
  2.4 Isomorphic Transformations
  2.5 Coding Map
  2.6 Maple Programs
    2.6.1 The logistic transformation
    2.6.2 Chebyshev polynomials
    2.6.3 The beta transformation
    2.6.4 The baker's transformation
    2.6.5 A toral automorphism
    2.6.6 Modified Hurwitz transformation
    2.6.7 A typical point of the Bernoulli measure
    2.6.8 A typical point of the Markov measure
    2.6.9 Coding map for the logistic transformation

3 The Birkhoff Ergodic Theorem
  3.1 Ergodicity
  3.2 The Birkhoff Ergodic Theorem
  3.3 The Kronecker–Weyl Theorem
  3.4 Gelfand's Problem
  3.5 Borel's Normal Number Theorem
  3.6 Continued Fractions
  3.7 Approximation of Invariant Density Functions
  3.8 Physically Meaningful Singular Invariant Measures
  3.9 Maple Programs
    3.9.1 The Gauss transformation
    3.9.2 Discrete invariant measures
    3.9.3 A cobweb plot
    3.9.4 The Kronecker–Weyl Theorem
    3.9.5 The logistic transformation
    3.9.6 An unbounded invariant density and the speed of growth at a singularity
    3.9.7 Toral automorphisms
    3.9.8 The Khinchin constant
    3.9.9 An infinite invariant measure
    3.9.10 Cross sections of the solenoid
    3.9.11 The solenoid
    3.9.12 Feigenbaum's logistic transformations
    3.9.13 The Hénon attractor
    3.9.14 Basin of attraction for the Hénon attractor

4 The Central Limit Theorem
  4.1 Mixing Transformations
  4.2 The Central Limit Theorem
  4.3 Speed of Correlation Decay
  4.4 Maple Programs
    4.4.1 The Central Limit Theorem: σ > 0 for the logistic transformation
    4.4.2 The Central Limit Theorem: the normal distribution for the Gauss transformation
    4.4.3 Failure of the Central Limit Theorem: σ = 0 for an irrational translation mod 1
    4.4.4 Correlation coefficients of T x = 2x mod 1
    4.4.5 Correlation coefficients of the beta transformation
    4.4.6 Correlation coefficients of an irrational translation mod 1

5 More on Ergodicity
  5.1 Absolutely Continuous Invariant Measures
  5.2 Boundary Conditions for Invariant Measures
  5.3 Kac's Lemma on the First Return Time
  5.4 Ergodicity of Markov Shift Transformations
  5.5 Singular Continuous Invariant Measures
  5.6 An Invertible Extension of a Transformation on an Interval
  5.7 Maple Programs
    5.7.1 Piecewise defined transformations
    5.7.2 How to sketch the graph of the first return time transformation
    5.7.3 Kac's lemma for the logistic transformation
    5.7.4 Kac's lemma for an irrational translation mod 1
    5.7.5 Bernoulli measures on the unit interval
    5.7.6 An invertible extension of the beta transformation

6 Homeomorphisms of the Circle
  6.1 Rotation Number
  6.2 Topological Conjugacy and Invariant Measures
  6.3 The Cantor Function and Rotation Number
  6.4 Arnold Tongues
  6.5 How to Sketch a Conjugacy Using Rotation Number
  6.6 Unique Ergodicity
  6.7 Poincaré Section of a Flow
  6.8 Maple Programs
    6.8.1 The rotation number of a homeomorphism of the unit circle
    6.8.2 Symmetry of invariant measures
    6.8.3 Conjugacy and the invariant measure
    6.8.4 The Cantor function
    6.8.5 The Cantor function as a cumulative density function
    6.8.6 Rotation numbers and a staircase function
    6.8.7 Arnold tongues
    6.8.8 How to sketch a topological conjugacy of a circle homeomorphism I
    6.8.9 How to sketch a topological conjugacy of a circle homeomorphism II

7 Mod 2 Uniform Distribution
  7.1 Mod 2 Normal Numbers
  7.2 Skew Product Transformations
  7.3 Mod 2 Normality Conditions
  7.4 Mod 2 Uniform Distribution for General Transformations
  7.5 Random Walks on the Unit Circle
  7.6 How to Sketch a Cobounding Function
  7.7 Maple Programs
    7.7.1 Failure of mod 2 normality for irrational translations mod 1
    7.7.2 Ergodic components of a nonergodic skew product transformation arising from the beta transformation
    7.7.3 Random walks by an irrational angle on the circle
    7.7.4 Skew product transformation and random walks on a cyclic group
    7.7.5 How to plot points on the graph of a cobounding function
    7.7.6 Fourier series of a cobounding function
    7.7.7 Numerical computation of a lower bound for n||nθ||

8 Entropy
  8.1 Definition of Entropy
  8.2 Entropy of Shift Transformations
  8.3 Partitions and Coding Maps
  8.4 The Shannon–McMillan–Breiman Theorem
  8.5 Data Compression
  8.6 Asymptotically Normal Distribution of (−log Pn)/n
  8.7 Maple Programs
    8.7.1 Definition of entropy: the logistic transformation
    8.7.2 Definition of entropy: the Markov shift
    8.7.3 Cylinder sets of a nongenerating partition: T x = 2x mod 1
    8.7.4 The Shannon–McMillan–Breiman Theorem: the Markov shift
    8.7.5 The Shannon–McMillan–Breiman Theorem: the beta transformation
    8.7.6 The Asymptotic Equipartition Property: the Bernoulli shift
    8.7.7 Asymptotically normal distribution of (−log Pn)/n: the Bernoulli shift

9 The Lyapunov Exponent: One-Dimensional Case
  9.1 The Lyapunov Exponent of Differentiable Maps
  9.2 Number of Significant Digits and the Divergence Speed
  9.3 Fixed Points of the Gauss Transformation
  9.4 Generalized Continued Fractions
  9.5 Speed of Approximation by Convergents
  9.6 Random Shuffling of Cards
  9.7 Maple Programs
    9.7.1 The Lyapunov exponent: the Gauss transformation
    9.7.2 Hn,k: a local version for the Gauss transformation
    9.7.3 Hn,k: a global version for the Gauss transformation
    9.7.4 The divergence speed: a local version for the Gauss transformation
    9.7.5 The divergence speed: a global version for the Gauss transformation
    9.7.6 Number of correct partial quotients: validity of Maple algorithm of continued fractions
    9.7.7 Speed of approximation by convergents: the Bernoulli transformation

10 The Lyapunov Exponent: Multidimensional Case
  10.1 Singular Values of a Matrix
  10.2 Oseledec's Multiplicative Ergodic Theorem
  10.3 The Lyapunov Exponent of a Differentiable Mapping
  10.4 Invariant Subspaces
  10.5 The Lyapunov Exponent and Entropy
  10.6 The Largest Lyapunov Exponent and the Divergence Speed
  10.7 Maple Programs
    10.7.1 Singular values of a matrix
    10.7.2 A matrix sends a circle onto an ellipse
    10.7.3 The Lyapunov exponents: a local version for the solenoid mapping
    10.7.4 The Lyapunov exponent: a global version for the Hénon mapping
    10.7.5 The invariant subspace E1 of the Hénon mapping
    10.7.6 The invariant subspace E2 of the Hénon mapping
    10.7.7 The divergence speed: a local version for a toral automorphism
    10.7.8 The divergence speed: a global version for the Hénon mapping

11 Stable and Unstable Manifolds
  11.1 Stable and Unstable Manifolds of Fixed Points
  11.2 The Hénon Mapping
  11.3 The Standard Mapping
  11.4 Stable and Unstable Manifolds of Periodic Points
  11.5 Maple Programs
    11.5.1 A stable manifold of the Hénon mapping
    11.5.2 An unstable manifold of the Hénon mapping
    11.5.3 The boundary of the basin of attraction of the Hénon mapping
    11.5.4 Behavior of the Hénon mapping near a saddle point
    11.5.5 Hyperbolic points of the standard mapping
    11.5.6 Images of a circle centered at a hyperbolic point under the standard mapping
    11.5.7 Stable manifolds of the standard mapping
    11.5.8 Intersection of the stable and unstable manifolds of the standard mapping

12 Recurrence and Entropy
  12.1 The First Return Time Formula
  12.2 L^p-Convergence
  12.3 The Nonoverlapping First Return Time
  12.4 Product of the Return Time and the Probability
  12.5 Symbolic Dynamics and Topological Entropy
  12.6 Maple Programs
    12.6.1 The first return time Rn for the Bernoulli shift
    12.6.2 Averages of (log Rn)/n and Rn for the Bernoulli shift
    12.6.3 Probability density function of (log Rn)/n for the Bernoulli shift
    12.6.4 Convergence speed of the average of log Rn for the Bernoulli shift
    12.6.5 The nonoverlapping first return time
    12.6.6 Probability density function of Rn Pn for the Markov shift
    12.6.7 Topological entropy of a topological shift space

13 Recurrence and Dimension
  13.1 Hausdorff Dimension
  13.2 Recurrence Error
  13.3 Absolutely Continuous Invariant Measures
  13.4 Singular Continuous Invariant Measures
  13.5 The First Return Time and the Dimension
  13.6 Irrational Translations Mod 1
  13.7 Maple Programs
    13.7.1 The recurrence error
    13.7.2 The logistic transformation
    13.7.3 The Hénon mapping
    13.7.4 Type of an irrational number
    13.7.5 An irrational translation mod 1

14 Data Compression
  14.1 Coding
  14.2 Entropy and Data Compression Ratio
  14.3 Huffman Coding
  14.4 Lempel–Ziv Coding
  14.5 Arithmetic Coding
  14.6 Maple Programs
    14.6.1 Huffman coding
    14.6.2 Lempel–Ziv coding
    14.6.3 Typical intervals arising from arithmetic coding

References

Index

1 Prerequisites

The study of discrete dynamical systems requires ideas and techniques from diverse fields including mathematics, probability theory, statistics, physics, chemistry, biology, the social sciences and engineering. Within mathematics alone, we need various facts from classical analysis, measure theory, linear algebra, topology and number theory. This chapter will help the reader build sufficient background to understand the material in later chapters without much difficulty. We introduce the notations, definitions, concepts and fundamental facts that are needed later. In the last section some basic Maple commands are presented.

1.1 Point Set Topology

1.1.1 Sets and functions

Let N, Z, Q, R and C denote the sets of natural numbers, integers, rational numbers, real numbers and complex numbers, respectively. The difference of two sets A, B is given by A \ B = {x : x ∈ A and x ∉ B}, and their symmetric difference by A △ B = (A ∪ B) \ (A ∩ B). If A ⊂ X, then A^c denotes the complement of A, i.e., A^c = X \ A.

Let X and Y be two sets. A function (or a mapping or a map) f : X → Y is a rule that assigns a unique element f(x) ∈ Y to every x ∈ X. In this case, X and Y are called the domain and the range (or codomain) of f, respectively. For x ∈ X, the element f(x) is called the image of x under f. Sometimes the range of f means the set {f(x) : x ∈ X}. If {f(x) : x ∈ X} = Y, then f is said to be onto. If x_1 ≠ x_2 implies f(x_1) ≠ f(x_2), then f is said to be one-to-one. If f : X → Y is onto and one-to-one, then f is said to be bijective, and its inverse f^{−1} exists.

Suppose that f : X → Y is not necessarily invertible. The inverse image of E ⊂ Y is defined by f^{−1}(E) = {x : f(x) ∈ E}. Then f^{−1}(E ∪ F) = f^{−1}(E) ∪ f^{−1}(F), f^{−1}(E ∩ F) = f^{−1}(E) ∩ f^{−1}(F) and f^{−1}(Y \ E) = X \ f^{−1}(E).
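The three inverse-image identities can be verified on a finite example. The sketch below (in Python rather than the book's Maple; the sets X, Y and the squaring map are arbitrary choices for illustration) deliberately uses a map that is not one-to-one, to stress that the identities require no invertibility.

```python
# Finite check of the inverse-image identities for a non-invertible map.
X = set(range(-5, 6))        # domain
Y = set(range(0, 26))        # codomain (contains all squares of X)
f = lambda x: x * x          # not one-to-one on X

def preimage(E):
    """f^{-1}(E) = {x in X : f(x) in E}."""
    return {x for x in X if f(x) in E}

E = {0, 1, 4}
F = {4, 9, 16}
assert preimage(E | F) == preimage(E) | preimage(F)   # unions
assert preimage(E & F) == preimage(E) & preimage(F)   # intersections
assert preimage(Y - E) == X - preimage(E)             # complements
```

Note that the forward image does not enjoy the intersection identity; only the inverse image does, which is why measure theory is built on inverse images.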


The number of elements of a set A is called the cardinality of A, denoted by card A. If f : X → Y is bijective, then we say that X and Y have the same cardinality. If X has finitely many elements or if X and N have the same cardinality, then X is said to be countable. The sets Z and Q are countable. A countable union of countable sets is also countable. If X is not countable, then X is said to be uncountable. The sets R and C are uncountable.
Take a subset A ⊂ X. Then 1_A denotes the characteristic function or the indicator function of A, i.e., 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 if x ∉ A. A partition of a set X is a collection of pairwise disjoint subsets E_λ, λ ∈ Λ, such that ⋃_{λ∈Λ} E_λ = X. If there are finitely (or countably) many subsets under consideration we call the partition finite (or countable).
Suppose that A is a set of real numbers. If there exists α ∈ R ∪ {+∞} such that x ≤ α for every x ∈ A, then α is called an upper bound of A. There exists a unique upper bound α such that if t < α then t is not an upper bound of A. We call α the least upper bound, or the supremum, of A, and write α = sup A. Similarly, we define the greatest lower bound β, or the infimum, of A using lower bounds of A, and write β = inf A.
Let {a_n}_{n=1}^∞ be a sequence of real numbers. Define

$$\limsup_{n\to\infty} a_n = \inf_{n\ge 1}\,\sup_{k\ge n} a_k\ ,\qquad \liminf_{n\to\infty} a_n = \sup_{n\ge 1}\,\inf_{k\ge n} a_k\ .$$

Both limits exist even when {a_n}_{n=1}^∞ does not converge. If {a_n}_{n=1}^∞ converges to a limit L, then the lim sup and the lim inf are both equal to L. A bounded and monotone sequence in R converges to a limit.
Let {A_n}_{n=1}^∞ be a sequence of subsets of X. Define

$$\limsup_{n\to\infty} A_n = \bigcap_{m\ge 1}\,\bigcup_{n\ge m} A_n\ .$$

This set comprises the points x satisfying x ∈ A_n for infinitely many n.
The Cesàro sum of a sequence a₁, a₂, a₃, ... of numbers is a new sequence c₁, c₂, c₃, ... defined by c_n = (a₁ + ··· + a_n)/n. If {a_n}_{n=1}^∞ converges, then its Cesàro sum also converges to the same limit.

1.1.2 Metric spaces

A metric on a set S is a function d : S × S → R such that
(i) d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y,
(ii) d(x, y) = d(y, x),
(iii) d(x, z) ≤ d(x, y) + d(y, z).
A set S together with a metric d, denoted by (S, d), is called a metric space.
A norm on a vector space V is a function ||·|| : V → R such that
(i) ||v|| ≥ 0 for v ∈ V, and ||v|| = 0 if and only if v = 0,
(ii) ||cv|| = |c| ||v|| for every scalar c and v ∈ V,
(iii) ||v₁ + v₂|| ≤ ||v₁|| + ||v₂|| for v₁, v₂ ∈ V.
A normed space is a metric space with the metric d(x, y) = ||x − y||. Two norms


||·|| and ||·||′ on a vector space V are said to be equivalent if there exist constants A, B > 0 such that A||x|| ≤ ||x||′ ≤ B||x|| for x ∈ V. All norms on a finite-dimensional vector space are equivalent.
The n-dimensional Euclidean space Rⁿ has many norms. For 1 ≤ p < ∞ let ||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p} and ||x||_∞ = max_{1≤i≤n} |x_i|. Then (Rⁿ, ||·||_p), 1 ≤ p ≤ ∞, is a normed space.
A sequence x₁, x₂, x₃, ... in a metric space (X, d) converges to a limit x̃ if the numbers d(x_n, x̃) converge to 0. In this case, we write x̃ = lim_{n→∞} x_n. A sequence x₁, x₂, x₃, ... in a metric space (X, d) is called a Cauchy sequence if for any ε > 0 there exists N depending on ε such that n, m ≥ N implies d(x_n, x_m) < ε. A Cauchy sequence in X need not converge to a limit in X. For example, the sequence 1, 1.4, 1.41, 1.414, 1.4142, ... in Q converges to √2 ∉ Q. A metric space X is said to be complete if every Cauchy sequence converges to a limit in X. The Euclidean space Rⁿ with the metric d_p(x, y) = ||x − y||_p, 1 ≤ p ≤ ∞, is complete. Note that d₂ gives the usual Euclidean metric.
Two metrics d and d′ on the same set X are equivalent if there exist constants A, B > 0 such that A d(x, y) ≤ d′(x, y) ≤ B d(x, y) for every x, y ∈ X. If d and d′ are equivalent, then a sequence converges with respect to d if and only if it converges with respect to d′. All the metrics d_p on Rⁿ are equivalent.
Let (X, d) be a metric space. An open ball centered at x₀ of radius r > 0 is the set B_r(x₀) = {x ∈ X : d(x, x₀) < r}. An open set V ⊂ X is a set such that, for every x₀ ∈ V, there exists r > 0 for which B_r(x₀) ⊂ V. A set is closed if its complement is open. For example, closed intervals and closed disks are closed sets. The closure of a set A, denoted by Ā, is the smallest closed set containing A. In Rⁿ the closure of an open ball B_r(x₀) is the closed ball {x : d(x, x₀) ≤ r}.
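For a concrete feel for the equivalence of norms on Rⁿ, note that ||x||_∞ ≤ ||x||₂ ≤ ||x||₁ ≤ n||x||_∞ for every x ∈ Rⁿ. A short Python check (an illustration, not from the book):

```python
import math

def norm(x, p):
    """The p-norm ||x||_p on R^n; p = float('inf') gives the max norm."""
    if math.isinf(p):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -4.0, 12.0]
n1, n2, ninf = norm(x, 1), norm(x, 2), norm(x, float('inf'))
# ||x||_inf <= ||x||_2 <= ||x||_1 <= n ||x||_inf : all three norms are equivalent
assert ninf <= n2 <= n1 <= len(x) * ninf
```

Here ||x||₁ = 19, ||x||₂ = 13 and ||x||_∞ = 12, so the chain of inequalities is strict.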
A function f from X into another metric space Y is continuous if lim_{n→∞} f(x_n) = f(lim_{n→∞} x_n) for every convergent sequence {x_n}. An equivalent condition is the following: f⁻¹(B) is open in X for every open ball B ⊂ Y. If f : X → Y is bijective and continuous and if f⁻¹ is also continuous, then f is called a homeomorphism, and X and Y are said to be homeomorphic.
Let X be a metric space. An open cover U of A ⊂ X is a collection of open subsets V_α such that A ⊂ ⋃_α V_α. A subcover of U is a subcollection of U that is still a cover of A. A subset K is said to be compact if every open cover of K has a subcover consisting of finitely many subsets.

Fact 1.1 (Bolzano–Weierstrass Theorem) Let X be a metric space. A set K ⊂ X is compact if and only if every sequence in K has a subsequence that converges to a point in K.

Fact 1.2 (Heine–Borel Theorem) A subset A ⊂ Rⁿ is compact if and only if A is closed and bounded.

For example, a bounded closed interval is compact. For another example, consider the Cantor set C. Let F₁ = [0, 1/3] ∪ [2/3, 1] be the subset obtained by removing the middle third (1/3, 2/3) from [0, 1]. Let F₂ be the subset obtained


by removing the middle thirds (1/9, 2/9) and (7/9, 8/9) from the closed intervals [0, 1/3] and [2/3, 1], respectively. Inductively we define a closed set F_n for every n ≥ 1. Put C = ⋂_{n=1}^∞ F_n. Then C is compact and its interior is empty, i.e., C does not contain any nonempty open interval. The set C consists of the points x = Σ_{i=1}^∞ b_i 3^{-i}, b_i = 0 or 2 for every i ≥ 1. For example, C contains the point 1/4 = (0.020202...)₃. Note that C is uncountable and has Lebesgue measure zero.

Fact 1.3 (i) Let X, Y be compact metric spaces. If f : X → Y is bijective and continuous, then f⁻¹ is also continuous, and f is a homeomorphism.
(ii) A continuous function f : K → R defined on a compact set K has a maximum and a minimum. In other words, there exist points x₁, x₂ ∈ K such that f(x₁) = min_{x∈K} f(x) and f(x₂) = max_{x∈K} f(x).
(iii) A sequence of functions {f_n}_{n=1}^∞ is said to converge uniformly to f if for every ε > 0 there exists N such that n ≥ N implies |f(x) − f_n(x)| < ε for all x. In this case, if f_n is continuous for every n ≥ 1, then f is also continuous.

Fact 1.4 (Brouwer Fixed Point Theorem) Let B be the closed unit ball in Rⁿ. If f : B → B is a continuous mapping, then f has a fixed point, i.e., there exists x ∈ B such that f(x) = x.

Consult [Mu] for a proof. The conclusion is valid for any set B that is homeomorphic to the closed unit ball in Rⁿ. For more details on the basic facts presented in this section consult [Rud1].
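The membership criterion for C (only digits 0 and 2 in some base-3 expansion) is easy to test numerically. A Python sketch (an illustration, not from the book) checks that 1/4 has the ternary expansion 0.020202…:

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n base-3 digits of a rational x in [0, 1), computed exactly."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)        # integer part is the next base-3 digit
        digits.append(d)
        x -= d
    return digits

digits = ternary_digits(Fraction(1, 4), 10)
assert digits == [0, 2, 0, 2, 0, 2, 0, 2, 0, 2]
# every digit is 0 or 2, consistent with 1/4 belonging to the Cantor set
assert all(d in (0, 2) for d in digits)
```

Exact rational arithmetic via `Fraction` avoids the floating-point drift that would eventually corrupt the digit stream.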

1.2 Measures and Lebesgue Integration

1.2.1 Measure

Given a set X we want to measure the size of a subset A of X. When X is not a set of countably many elements, it is not always possible to measure the size of every subset of X in a consistent and mathematically meaningful way. So we choose only some, but sufficiently many, subsets of X and define their sizes. The subsets under consideration are called measurable subsets and a collection of them is called a σ-algebra. Axiomatically, a σ-algebra A is a collection of subsets satisfying the following conditions:
(i) ∅, X ∈ A,
(ii) if A ∈ A, then X \ A ∈ A, and
(iii) if A₁, A₂, A₃, ... ∈ A, then ⋃_{n=1}^∞ A_n ∈ A.
A measure µ is a rule that assigns a nonnegative real number or +∞ to each measurable subset. More precisely, a measure is a function µ : A → [0, +∞) ∪ {+∞} that satisfies the following conditions:
(i) µ(A) ∈ [0, +∞) ∪ {+∞} for every A ∈ A,
(ii) µ(∅) = 0, and
(iii) if A₁, A₂, A₃, ... are pairwise disjoint measurable subsets, then

$$\mu\Big(\bigcup_{n=1}^{\infty} A_n\Big) = \sum_{n=1}^{\infty} \mu(A_n)\ .$$

If µ(X) = 1, then µ is called a probability measure.

Example 1.5. (i) The counting measure µ counts the number of elements, i.e., µ(A) is the cardinality of A.
(ii) Take x ∈ X. The Dirac measure (or point mass) δ_x at x is defined by δ_x(A) = 1 if x ∈ A, and δ_x(A) = 0 if x ∉ A.

A set X with a σ-algebra A is called a measurable space and denoted by (X, A). If a measure µ is chosen for (X, A), then it is called a measure space and denoted by (X, A, µ) or (X, µ). A measure space (X, A, µ) is said to be complete if, whenever a subset N ∈ A satisfies µ(N) = 0 and A ⊂ N, then A ∈ A and µ(A) = 0. A measure can be extended to a complete measure, i.e., the σ-algebra can be extended to include all the subsets of sets of measure zero.
Let Rⁿ be the n-dimensional Euclidean space. An n-dimensional rectangle in Rⁿ is a set of the form [a₁, b₁] × ··· × [a_n, b_n], where the closed intervals may be replaced by open or half-open intervals. Consider the collection R of n-dimensional rectangles and define a set function µ : R → [0, +∞) ∪ {+∞} by the usual concept of length for n = 1, area for n = 2, and volume for n ≥ 3. Then µ is extended to the σ-algebra generated by R. Finally, it is extended again to a complete measure, which is called the n-dimensional Lebesgue measure.
A function f : (X, A) → R is said to be measurable if the inverse image of any open interval belongs to A. A simple function s(x) is a measurable function of the form s(x) = Σ_{i=1}^n α_i 1_{E_i}(x) for some measurable subsets E_i and constants α_i, 1 ≤ i ≤ n. For a measurable function f ≥ 0, there exists an increasing sequence of simple functions s_n ≥ 0 such that s_n(x) converges to f(x) for every x.
On a metric space X, the smallest σ-algebra B containing all the open subsets is called the Borel σ-algebra. If the σ-algebra under consideration is the Borel σ-algebra, then measurable subsets and measurable functions are called Borel measurable subsets and Borel measurable functions, respectively.
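One standard choice of the increasing sequence of simple functions is s_n(x) = min(⌊2ⁿ f(x)⌋/2ⁿ, n); the book does not spell this construction out, but the following Python sketch illustrates it:

```python
import math

def simple_approx(f, n):
    """Standard simple-function approximation of a function f >= 0:
    s_n(x) = floor(2^n f(x)) / 2^n, truncated at n. Then s_n <= s_{n+1} <= f
    and s_n(x) -> f(x) for every x."""
    def s_n(x):
        return min(math.floor(2 ** n * f(x)) / 2 ** n, n)
    return s_n

f = lambda x: math.sqrt(x)          # a nonnegative function on [0, 1]
xs = [i / 100 for i in range(101)]
for n in range(1, 8):
    s, t = simple_approx(f, n), simple_approx(f, n + 1)
    assert all(s(x) <= t(x) <= f(x) for x in xs)               # increasing, below f
assert max(f(x) - simple_approx(f, 20)(x) for x in xs) < 1e-5  # pointwise convergence
```

Each s_n takes only finitely many values, so it is a simple function whenever f is measurable.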
A measure on B is called a Borel measure. A continuous function f : X → R is Borel measurable.
Let S = {0, 1}. Define the infinite product space X = ∏_{1}^∞ S = S × S × ···. An element of X is an infinite binary sequence x = (x₁, x₂, x₃, ...) where x_i = 0 or 1. Define a cylinder set of length n by

[a₁, ..., a_n] = {x ∈ X : x₁ = a₁, ..., x_n = a_n} .

Let R be the collection of cylinder sets. For 0 ≤ p ≤ 1, define a set function µ_p : R → [0, ∞) by

µ_p([a₁, ..., a_n]) = p^k (1 − p)^{n−k}


where k is the number of times that the symbol '0' appears in a₁, ..., a_n. Then µ_p is extended to a measure, also denoted by µ_p, defined on the σ-algebra generated by R. The new measure is called the (p, 1 − p)-Bernoulli measure.
A number x in the unit interval has a binary expansion x = Σ_{i=1}^∞ a_i 2^{-i}, a_i ∈ {0, 1}, and it can be identified with the sequence (a₁, a₂, a₃, ...) ∈ X. For p = 1/2 we can identify µ_p with Lebesgue measure on [0, 1]. Here we ignore the binary representations whose tails are all equal to 1.
A measure is continuous if a set containing only one point has measure zero. In this case, by the countable additivity of a measure, a countable set has measure zero. Bernoulli measures for 0 < p < 1 are continuous. A measure is discrete if there exists a subset A of countably many points such that the complement of A has measure zero.
If K ⊂ X satisfies µ(X \ K) = 0, then K is called a support of µ. For X ⊂ Rⁿ a measure µ on X is singular if there exists a measurable subset K ⊂ X such that K has Lebesgue measure zero and µ(X \ K) = 0. If p ≠ 1/2, then the Bernoulli measure µ_p, which can be regarded as a measure on [0, 1], is singular. Two measures µ₁, µ₂ on X are mutually singular if there exist disjoint measurable subsets A, B such that µ₁(B) = 0 and µ₂(A) = 0.
Let µ be a measure on X ⊂ Rⁿ. If a measurable function ρ(x) ≥ 0 satisfies µ(E) = ∫_E ρ(x) dx for any measurable subset E, then µ is said to be absolutely continuous and ρ is called a density function. In this case

$$\int_X f(x)\, d\mu = \int_X f(x)\rho(x)\, dx$$

for any measurable function f, and we write dµ = ρ dx. If µ is a probability measure, then ρ is called a probability density function, or a pdf for short.
Let (X, µ) be a measure space. Let P(x) be a property whose validity depends on x ∈ X. If P(x) holds true for every x ∈ A with µ(X \ A) = 0, then we say that P(x) is true almost everywhere (or, for almost every x in X) with respect to µ.
Let A be a measurable subset of a probability measure space (X, µ). If µ(A) > 0, then we define a new measure µ_A on X by µ_A(E) = µ(E ∩ A)/µ(A). Then µ_A is called a conditional measure.
A partition P = {E_j : j ≥ 1} of a measurable space (X, A) is said to be measurable if E_j ∈ A for every j.

1.2.2 Lebesgue integration

Now we want to compute the average of a function defined on X with respect to a measure µ. It is not possible in general to consider all the functions. Let f ≥ 0 be a bounded measurable function on X, i.e., there is a constant M > 0 such that 0 ≤ f(x) < M. Put

$$E_{n,k} = f^{-1}\Big(\Big[\frac{k-1}{2^n}M,\ \frac{k}{2^n}M\Big)\Big)\ .$$


The measurability of f guarantees the measurability of E_{n,k}, and the size of E_{n,k} can be determined by µ. As n → ∞, we take the limit of

$$\sum_{k=1}^{2^n} \frac{k}{2^n}\,M \times \mu(E_{n,k})\ ,$$

which is called the Lebesgue integral of f on (X, µ) and denoted by

$$\int_X f\, d\mu\ .$$

The key idea is that in the Lebesgue integral we partition the vertical axis, not the horizontal axis as in the Riemann integral.
If f is real-valued, then f = f₊ − f₋ for some nonnegative functions f₊ and f₋, and we define

$$\int_X f\, d\mu = \int_X f_+\, d\mu - \int_X f_-\, d\mu\ .$$
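The approximating sums Σ_k (kM/2ⁿ) µ(E_{n,k}) can be computed explicitly for f(x) = x on [0, 1] with Lebesgue measure: each level set E_{n,k} is an interval of length M/2ⁿ, and the sums converge to ∫₀¹ x dx = 1/2. A Python sketch (an illustration, not from the book; the helper names are ours):

```python
def lebesgue_sum(level_measure, M, n):
    """Approximating sum  sum_{k=1}^{2^n} (k M / 2^n) * mu(E_{n,k})  for a bounded
    measurable f with 0 <= f < M, where level_measure(lo, hi) = mu(f^{-1}([lo, hi)))."""
    return sum((k * M / 2 ** n) * level_measure((k - 1) * M / 2 ** n, k * M / 2 ** n)
               for k in range(1, 2 ** n + 1))

# For f(x) = x on [0, 1] with Lebesgue measure, mu(f^{-1}([lo, hi))) = hi - lo.
approx = lebesgue_sum(lambda lo, hi: hi - lo, M=1.0, n=12)
assert abs(approx - 0.5) < 1e-3   # the sums converge to the integral 1/2
```

For this f the sum evaluates to 1/2 + 2^{-(n+1)}, so the error halves with each refinement of the vertical axis.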

For complex-valued measurable functions, we integrate the real and imaginary parts separately.
Let 1_A be the characteristic function of a measurable subset A. Then 1_A is a measurable function and ∫_X 1_A(x) dµ = µ(A) for any measure µ on X.
In measure theory two measurable subsets A₁, A₂ are regarded as being identical if µ(A₁ △ A₂) = 0. In this case we say that A₁ = A₂ modulo measure zero sets and write A₁ ≈ A₂. (In most cases we simply write A₁ = A₂ to denote A₁ ≈ A₂ if there is no danger of confusion.)
A measurable function f is said to be integrable if its integral is finite. If two functions differ on a subset of measure zero, then their integrals are equal. We then regard them as identical functions. The set of all integrable functions is a vector space.

Fact 1.6 (Monotone Convergence Theorem) Let 0 ≤ f₁ ≤ f₂ ≤ ··· be a monotone sequence of measurable functions on a measure space (X, µ). Then

$$\lim_{n\to\infty} \int_X f_n\, d\mu = \int_X \lim_{n\to\infty} f_n\, d\mu\ .$$

Fact 1.7 (Fatou's Lemma) Let f_n ≥ 0, n ≥ 1, be a sequence of measurable functions on a measure space (X, µ). Then

$$\int_X \liminf_{n\to\infty} f_n\, d\mu \le \liminf_{n\to\infty} \int_X f_n\, d\mu\ .$$

Fact 1.8 (Lebesgue Dominated Convergence Theorem) Let f_n, n ≥ 1, be a sequence of measurable functions on a measure space (X, µ). Suppose that lim_{n→∞} f_n(x) exists for every x ∈ X and that there exists an integrable function g ≥ 0 such that |f_n(x)| ≤ g(x) for every n. Then

$$\lim_{n\to\infty} \int_X f_n\, d\mu = \int_X \lim_{n\to\infty} f_n\, d\mu\ .$$


For 1 ≤ p < ∞ define

$$||f||_p = \Big(\int_X |f|^p\, d\mu\Big)^{1/p} ,$$

and for p = ∞ define

$$||f||_\infty = \inf\{C : \mu\{x : |f(x)| > C\} = 0\}\ .$$

For 1 ≤ p ≤ ∞ let

$$L^p(X, \mu) = \{f : ||f||_p < \infty\}\ .$$

Then L^p(X, µ) is a vector space with the norm ||·||_p, and it is complete as a metric space. Suppose that µ(X) < ∞. It is known that L^p(X) ⊂ L^r(X) for 1 ≤ r < p ≤ ∞. Hence L^p(X) ⊂ L¹(X) for any 1 ≤ p ≤ ∞.
A sequence of functions {f_n}_n, f_n ∈ L^p, is said to converge in L^p to f ∈ L^p if lim_{n→∞} ||f_n − f||_p = 0. A sequence of measurable functions {g_n}_n is said to converge in measure to g if for every ε > 0,

$$\lim_{n\to\infty} \mu(\{x : |g_n(x) - g(x)| > \varepsilon\}) = 0\ .$$

Various modes of convergence are compared in the following:

Fact 1.9 Suppose that 1 ≤ p < ∞.
(i) If f_n → f in L^p, then f_n → f in measure.
(ii) If f_n → f in measure and |f_n| ≤ g ∈ L^p for all n, then f_n → f in L^p.
(iii) If f_n, f ∈ L^p and f_n → f almost everywhere, then f_n → f in L^p if and only if ||f_n||_p → ||f||_p.

Proof. Here only (i) is proved. Let A_{n,ε} = {x : |f_n(x) − f(x)| ≥ ε}. Then

$$\int |f_n - f|^p\, d\mu \ge \int_{A_{n,\varepsilon}} |f_n - f|^p\, d\mu \ge \varepsilon^p\, \mu(A_{n,\varepsilon})\ ,$$

and hence µ(A_{n,ε}) ≤ ε^{-p} ∫ |f_n − f|^p dµ → 0.

Fact 1.10 Assume that µ(X) < ∞.
(i) For 1 ≤ p ≤ ∞, if f_n → f in L^p, then f_n → f in measure.
(ii) If f_n(x) → f(x) almost everywhere, then f_n → f in measure.

A function φ : R → R is said to be convex if for any λ₁, ..., λ_n ≥ 0 such that Σ_{i=1}^n λ_i = 1, we have

$$\varphi\Big(\sum_{i=1}^n \lambda_i x_i\Big) \le \sum_{i=1}^n \lambda_i\, \varphi(x_i)$$

for x_i ∈ R. The sum Σ_{i=1}^n λ_i x_i is called a convex combination of the x_i.


Fact 1.11 (Jensen's inequality) Let (X, µ) be a probability measure space and let f : X → R be a measurable function. If φ : R → R is convex and φ ∘ f is integrable, then

$$\varphi\Big(\int_X f\, d\mu\Big) \le \int_X \varphi(f(x))\, d\mu\ .$$
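On a finite probability space Jensen's inequality is nothing but convexity applied to a convex combination: with φ(x) = x² it says (Σᵢ pᵢ fᵢ)² ≤ Σᵢ pᵢ fᵢ². A Python check (an illustration, not from the book):

```python
# Jensen's inequality phi(∫ f dmu) <= ∫ phi(f) dmu on a finite probability space,
# with the convex function phi(x) = x^2.
phi = lambda x: x * x
p = [0.2, 0.5, 0.3]          # a probability vector: the measure of each atom
f = [1.0, -2.0, 4.0]         # a measurable function = a value on each atom

mean_f = sum(pi * fi for pi, fi in zip(p, f))            # ∫ f dmu
mean_phi_f = sum(pi * phi(fi) for pi, fi in zip(p, f))   # ∫ phi(f) dmu
assert phi(mean_f) <= mean_phi_f
```

Here φ(∫f dµ) = 0.16 while ∫φ(f) dµ = 7, a large gap; the gap is exactly the variance of f when φ(x) = x².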

Fact 1.12 (Hölder's inequality) Let 1 < p < ∞, 1 < q < ∞, 1/p + 1/q = 1. Then

$$||fg||_1 \le ||f||_p\, ||g||_q\ .$$

Equality holds if and only if there exist constants C₁ ≥ 0 and C₂ ≥ 0, not both equal to 0, such that C₁|f(x)|^p = C₂|g(x)|^q for almost every x. If p = 2 we have the Cauchy–Schwarz inequality.

Fact 1.13 (Minkowski's inequality) Let 1 ≤ p ≤ ∞. Then

$$||f + g||_p \le ||f||_p + ||g||_p\ .$$

If 1 < p < ∞, then equality holds if and only if there exist constants C₁ ≥ 0 and C₂ ≥ 0, not both zero, such that C₁f = C₂g almost everywhere. For p = 1 equality holds if and only if there exists a measurable function h ≥ 0 such that f(x)h(x) = g(x) for almost every x satisfying f(x)g(x) ≠ 0.

Let V, V′ be vector spaces over a set of scalars F = R or C. A mapping L : V → V′ is said to be linear if L(c₁v₁ + c₂v₂) = c₁L(v₁) + c₂L(v₂) for v₁, v₂ ∈ V and c₁, c₂ ∈ F. Let (X₁, ||·||₁) and (X₂, ||·||₂) be two normed spaces. A linear mapping T : X₁ → X₂ is bounded if

$$||T|| = \sup\{||Tx||_2 : ||x||_1 \le 1\} < \infty\ .$$

In this case, we have ||Tx||₂ ≤ ||T|| ||x||₁ for x ∈ X₁,

and ||T|| is called the norm of T. A bounded linear map is continuous. A complete normed space is called a Banach space. For 1 ≤ p ≤ ∞, L^p is a Banach space. Another example of a Banach space is the set of continuous functions on [a, b] with the norm ||f|| = max_{a≤x≤b} |f(x)|.

Fact 1.14 (Chebyshev's inequality) If f ∈ L^p(X, µ), 1 ≤ p < ∞, then for any ε > 0 we have

$$\mu(\{x : |f(x)| > \varepsilon\}) \le \frac{1}{\varepsilon^p} \int_X |f|^p\, d\mu\ .$$

Proof. Let A_ε = {x : |f(x)| > ε}. Then

$$\int_X |f|^p\, d\mu \ge \int_{A_\varepsilon} |f|^p\, d\mu \ge \varepsilon^p\, \mu(A_\varepsilon)\ .$$

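Chebyshev's inequality (Fact 1.14) can be verified directly on a finite probability space, where the integral is a finite sum. A Python check with p = 2 (an illustration, not from the book):

```python
# Chebyshev: mu({|f| > eps}) <= eps^{-p} ∫ |f|^p dmu, checked on a finite space.
p_exp = 2
mu = [0.1, 0.4, 0.3, 0.2]        # probabilities of the four atoms
f = [5.0, 0.5, -1.0, 2.0]        # values of f on the atoms

for eps in (0.25, 1.0, 3.0):
    lhs = sum(m for m, v in zip(mu, f) if abs(v) > eps)          # mu(|f| > eps)
    rhs = sum(m * abs(v) ** p_exp for m, v in zip(mu, f)) / eps ** p_exp
    assert lhs <= rhs
```

The bound is far from sharp for small ε (the left side can never exceed 1), but it becomes informative as ε grows.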

Definition 1.15. Let (X, µ) be a probability space. A sequence of measurable subsets {A_n}_{n=1}^∞ is said to be independent if, for any 1 ≤ i₁ < ··· < i_k, we have

µ(A_{i₁} ∩ ··· ∩ A_{i_k}) = µ(A_{i₁}) ··· µ(A_{i_k}) .

Fact 1.16 (Borel–Cantelli Lemma) Let {A_n}_{n=1}^∞ be a sequence of measurable subsets of a probability space (X, µ).
(i) If Σ_{n=1}^∞ µ(A_n) < ∞, then µ(lim sup_{n→∞} A_n) = 0.
(ii) Suppose that A₁, A₂, ... are independent. If Σ_{n=1}^∞ µ(A_n) = ∞, then µ(lim sup_{n→∞} A_n) = 1.
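Part (i) can be illustrated on [0, 1] with Lebesgue measure and the nested sets A_n = [0, 1/n²]: the series Σ µ(A_n) = Σ 1/n² converges, the tail unions ⋃_{n≥m} A_n = [0, 1/m²] shrink, and hence µ(lim sup A_n) = 0. A Python check (an illustration, not from the book):

```python
# Borel-Cantelli (i) on ([0,1], Lebesgue) with A_n = [0, 1/n^2]:
# sum mu(A_n) = sum 1/n^2 < infinity, and since the A_n are nested,
# mu( union_{n >= m} A_n ) = mu(A_m) = 1/m^2 -> 0, so mu(limsup A_n) = 0.
total = sum(1.0 / n ** 2 for n in range(1, 10 ** 6))
assert total < 1.645                      # partial sums stay below pi^2/6

tail_measures = [1.0 / m ** 2 for m in (1, 10, 100, 1000)]
assert all(t2 < t1 for t1, t2 in zip(tail_measures, tail_measures[1:]))
assert tail_measures[-1] < 1e-5           # mu(limsup A_n) <= 1/m^2 for every m
```

With A_n = [0, 1/n] instead, the series diverges; those A_n are not independent, so part (ii) does not apply, and indeed lim sup A_n = {0} still has measure zero.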

Given measure spaces (X, A, µ) and (Y, B, ν), define the product measure µ × ν on X × Y as follows: Consider the smallest σ-algebra on X × Y, denoted by A × B, containing the rectangles A × B where A ∈ A, B ∈ B. Define µ × ν on rectangles by (µ × ν)(A × B) = µ(A)ν(B). It is known that µ × ν can be extended to a measure on (X × Y, A × B). For example, if µ is the Lebesgue measure on R, then µ × µ is the Lebesgue measure on R².
Consider the product measure space (X × Y, A × B, µ × ν). Let f : X × Y → C be measurable with respect to A × B. Define f_x : Y → C by f_x(y) = f(x, y) and f^y : X → C by f^y(x) = f(x, y). Then f_x is measurable with respect to B for each x, and f^y is measurable with respect to A for each y.

Theorem 1.17 (Fubini's theorem). Let (X, A, µ) and (Y, B, ν) be measure spaces. If f(x, y) : X × Y → C is measurable with respect to A × B and if

$$\int_X \int_Y |f(x, y)|\, d\nu(y)\, d\mu(x) < \infty\ ,$$

then

$$\int_{X\times Y} f(x, y)\, d(\mu\times\nu) = \int_X \int_Y f(x, y)\, d\nu\, d\mu = \int_Y \int_X f(x, y)\, d\mu\, d\nu\ .$$

In short, multiple integrals can be calculated as iterated integrals. For more details on product measures see [AG],[Rud2].

Definition 1.18. Let f : [a, b] → R and let P be a partition of [a, b] given by x₀ = a < x₁ < ··· < x_n = b. Put

$$V_{[a,b]}(f, P) = \sum_{k=1}^{n} |f(x_k) - f(x_{k-1})|$$


and

$$V_{[a,b]}(f) = \sup_P V_{[a,b]}(f, P)\ ,$$

where the supremum is taken over all finite partitions P. If V_{[a,b]}(f) < ∞, then f is said to be of bounded variation and V_{[a,b]}(f) is called the total variation of f on [a, b].

An inner product on a complex vector space V is a function (·,·) : V × V → C such that
(i) (v₁ + v₂, w) = (v₁, w) + (v₂, w) and (v, w₁ + w₂) = (v, w₁) + (v, w₂),
(ii) (cv, w) = c(v, w) and (v, cw) = $\bar{c}$(v, w), c ∈ C,
(iii) (v, w) = $\overline{(w, v)}$,
(iv) (v, v) ≥ 0.
The Cauchy–Schwarz inequality holds: |(v, w)| ≤ ||v|| ||w||. If (v, v) = 0 only for v = 0, then the inner product is said to be positive definite. In this case we define a norm by ||v|| = (v, v)^{1/2}. Thus a positive definite inner product space is a normed space. If the resulting metric is complete, then V is called a Hilbert space. Of all L^p spaces, the case p = 2 is special: L² is a Hilbert space. We can define an inner product by (f, g) = ∫_X f(x)$\overline{g(x)}$ dµ for f, g ∈ L²(X, µ).
Let V be a vector space over a set of scalars F = R or C. A subspace W ⊂ V is said to be invariant under a linear operator L if Lw ∈ W for every w ∈ W. If Lv = λv for some v ≠ 0 in V and λ ∈ F, then v and λ are called an eigenvector and an eigenvalue of L, respectively.
Given a Hilbert space H, an invertible linear operator U : H → H is said to be unitary if (Uf, Ug) = (f, g) for f, g ∈ H. This is equivalent to the condition that (Uf, Uf) = (f, f) for every f ∈ H. If U has an eigenvalue λ, then |λ| = 1. If W is invariant under U, then the orthogonal complement W⊥ = {f : (f, g) = 0 for every g ∈ W} is also invariant under U.
Let L₁ and L₂ be bounded linear operators in Hilbert spaces H₁ and H₂, respectively. If there is a linear isomorphism, i.e., a linear one-to-one and onto mapping S : H₁ → H₂, such that (Sv, Sw)_{H₂} = (v, w)_{H₁} for v, w ∈ H₁ and SL₁ = L₂S, then L₁ and L₂ are said to be unitarily equivalent. For the proofs of the facts in this section consult [Rud2],[To].
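For a monotone f the sum in Definition 1.18 telescopes, so every partition already attains the supremum V_{[a,b]}(f) = |f(b) − f(a)|. A Python sketch (an illustration, not from the book):

```python
def variation(f, partition):
    """V_[a,b](f, P) = sum_k |f(x_k) - f(x_{k-1})| over a = x_0 < ... < x_n = b."""
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

f = lambda x: x * x          # increasing on [0, 1], so V_[0,1](f) = f(1) - f(0) = 1
for n in (2, 10, 100):
    P = [i / n for i in range(n + 1)]
    assert abs(variation(f, P) - 1.0) < 1e-9   # every partition gives the same value
```

For non-monotone functions, refining the partition can only increase the sum, which is why the definition takes a supremum.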

1.3 Nonnegative Matrices

1.3.1 Perron–Frobenius Theorem

We say that a matrix A = (a_ij) is nonnegative (positive) if a_ij ≥ 0 (a_ij > 0) for every i, j. In this case we write A ≥ 0 (A > 0). Similarly we define nonnegative and positive vectors. If a nonzero square matrix A ≥ 0 has an eigenvector v = (v_i) > 0, then the corresponding eigenvalue λ is positive. To see why, note that there exists i such that (Av)_i = Σ_j a_ij v_j > 0. Since λv_i = (Av)_i > 0 and v_i > 0, we have λ > 0.
We present the Perron–Frobenius Theorem, and prove the main part using the Brouwer Fixed Point Theorem. Eigenvectors of A will be fixed points of a continuous mapping defined in terms of A.


Throughout the section ||·||₁ denotes the 1-norm on Rⁿ, i.e., ||x||₁ = Σ_i |x_i|, x = (x₁, ..., x_n). Put S = {v ∈ Rⁿ : ||v||₁ = 1, v_i ≥ 0 for every i}.

Lemma 1.19. If none of the columns of A ≥ 0 is a zero vector, then there exists an eigenvalue λ > 0 with an eigenvector w ≥ 0.

Proof. Let A be an n × n matrix. Take v ∈ S. Then Av is a convex linear combination of the columns of A and ||Av||₁ > 0. Define f : S → S by

$$f(v) = \frac{1}{||Av||_1}\, Av\ .$$

Then f is continuous and has a fixed point w ∈ S by the Brouwer Fixed Point Theorem since S is homeomorphic to the closed unit ball in R^{n−1}. Hence Aw/||Aw||₁ = w, and by putting λ = ||Aw||₁ we have Aw = λw.
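The map f in the proof also suggests a numerical procedure: for a positive matrix, iterating v ↦ Av/||Av||₁ converges to the fixed point w (this is power iteration, normalized in the 1-norm). A Python sketch (an illustration, not from the book):

```python
def perron_iterate(A, steps=100):
    """Iterate the map f(v) = Av / ||Av||_1 from the proof of Lemma 1.19.
    For a positive matrix the iterates converge to a Perron-Frobenius eigenvector."""
    n = len(A)
    v = [1.0 / n] * n                                   # start inside the simplex S
    for _ in range(steps):
        Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(Av)                                     # ||Av||_1, all entries >= 0
        v = [t / s for t in Av]
    return v

A = [[2.0, 1.0],
     [1.0, 2.0]]
w = perron_iterate(A)
Aw = [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]
lam = sum(Aw)                   # since ||w||_1 = 1, lambda = ||Aw||_1
assert abs(lam - 3.0) < 1e-9    # Perron eigenvalue 3, eigenvector (1/2, 1/2)
```

The Brouwer theorem only guarantees existence of a fixed point; convergence of the iteration needs positivity (or irreducibility plus aperiodicity), which Theorem 1.23 below makes precise.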

Lemma 1.20. If A ≥ 0 has an eigenvalue λ > 0 with an eigenvector w > 0, then

$$\lambda = \lim_{m\to\infty} \Big(\sum_{ij} (A^m)_{ij}\Big)^{1/m}\ .$$

(This implies that λ is unique.) If µ ∈ C is an eigenvalue of A, then |µ| ≤ λ.

Proof. We may assume ||w||₁ = 1. Put C = min_j w_j > 0. Since ||A^m w||₁ = Σ_{ij} (A^m)_{ij} w_j, we have

$$C \sum_{ij} (A^m)_{ij} \le ||A^m w||_1 \le \sum_{ij} (A^m)_{ij}\ .$$

From ||A^m w||₁ = ||λ^m w||₁ = λ^m,

$$C^{1/m} \Big(\sum_{ij} (A^m)_{ij}\Big)^{1/m} \le \lambda \le \Big(\sum_{ij} (A^m)_{ij}\Big)^{1/m}\ .$$

Now let m → ∞. For the second statement, choose v ∈ Cⁿ such that Av = µv and ||v||₁ = Σ_j |v_j| = 1. Put u = (|v₁|, ..., |v_n|) ∈ Rⁿ. Then

$$|\mu|^m = ||\mu^m v||_1 = ||A^m v||_1 \le ||A^m u||_1 \le \sum_{ij} (A^m)_{ij}\ ,$$

and hence |µ| ≤ (Σ_{ij} (A^m)_{ij})^{1/m} for every m.

Deﬁnition 1.21. A matrix A ≥ 0 is said to be irreducible if for any i, j there exists a positive integer m depending on i, j such that (Am )ij > 0. The period of a state i, denoted by Per(i), is the greatest common divisor of those integers n ≥ 1 for which (An )ii > 0. If no such integers exist, we deﬁne Per(i) = ∞. The period of A is the greatest common divisor of the numbers Per(i) that are ﬁnite, or is ∞ if Per(i) = ∞ for all i. A matrix is said to be aperiodic if it has period 1.


If A is irreducible, then all the states have the same period, and so the period of A is the period of any of its states. If A is irreducible and aperiodic, then there exists n ≥ 1 satisfying (Aⁿ)_{ij} > 0 for every i, j. For example, consider

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.\qquad\text{Then}\quad A^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}\quad\text{for every } n \ge 1.$$

Thus A is not irreducible, but it is aperiodic.

Lemma 1.22. Let A be an irreducible matrix. Then any eigenvector w ≥ 0 corresponding to a positive eigenvalue is positive. (Hence the eigenvector w obtained in Lemma 1.19 is positive, and the conclusion of Lemma 1.20 holds.)

Proof. Since w ≠ 0, there exists w_k > 0. For any i there exists m depending on i such that (A^m)_{ik} > 0. Since A^m w = λ^m w, we have Σ_j (A^m)_{ij} w_j = λ^m w_i and 0 < (A^m)_{ik} w_k ≤ λ^m w_i.

Theorem 1.23 (Perron–Frobenius Theorem). Let A be irreducible. Then there is a unique eigenvalue λ_A > 0 satisfying the following:
(i) |µ| ≤ λ_A for every eigenvalue µ.
(ii) The positive eigenvector w corresponding to λ_A is unique up to scalar multiplication.
(iii) λ_A is a simple root of the characteristic polynomial of A.
(iv) If A is also aperiodic, then |µ| < λ_A for any eigenvalue µ ≠ λ_A.

Proof. (i) Use Lemmas 1.19, 1.20, 1.22.
(ii) Assume that there exists v ≠ 0, not a scalar multiple of w, such that Av = λ_A v. Consider w + tv, −∞ < t < ∞, which is also an eigenvector corresponding to λ_A. Note that, if t is close to 0, then it is a positive vector. We can choose t = t₀ so that w + t₀v ≥ 0 and at least one of the components of w + t₀v is zero. By Lemma 1.22 this is impossible unless w + t₀v = 0.
(iii),(iv) Consult p.112 and p.129 in [LM], respectively.
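Lemma 1.20 gives a directly computable formula for λ_A. A Python check on an irreducible aperiodic matrix (an illustration, not from the book): the quantities (Σ_{ij}(A^m)_{ij})^{1/m} approach the Perron–Frobenius eigenvalue, here the golden ratio.

```python
def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def perron_estimate(A, m):
    """(sum_{ij} (A^m)_{ij})^{1/m}, which converges to lambda_A by Lemma 1.20."""
    P = A
    for _ in range(m - 1):
        P = mat_mult(P, A)
    return sum(sum(row) for row in P) ** (1.0 / m)

A = [[1.0, 1.0],
     [1.0, 0.0]]     # irreducible and aperiodic; lambda_A = (1 + sqrt(5))/2
golden = (1 + 5 ** 0.5) / 2
assert abs(perron_estimate(A, 60) - golden) < 0.05
```

The convergence is slow (the error decays like 1/m in the exponent), so in practice power iteration as in Lemma 1.19 is preferred; the formula is valuable as a theoretical characterization.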

The eigenvalue λ_A is called the Perron–Frobenius eigenvalue and a corresponding positive eigenvector is called a Perron–Frobenius eigenvector. For more information see [Ga],[Kit].

1.3.2 Stochastic matrices

Definition 1.24. A square matrix P = (p_ij), with p_ij ≥ 0 for every i, j, is called a stochastic matrix if Σ_j p_ij = 1 for every i.

If P is a stochastic matrix then Pⁿ, n ≥ 1, is also a stochastic matrix. The value p_ij represents the probability of the transition from the ith state to the jth state, and (Pⁿ)_ij represents the probability of the transition from the ith state to the jth state in n steps. When stochastic matrices are discussed, we multiply row vectors from the left as a convention. So when we say that v is an eigenvector of P we mean vP = λv for some constant λ. Obviously the results for column operations obtained in Subsect. 1.3.1 are also valid for row operations.


Theorem 1.25. Let P be a stochastic matrix. Then the following hold:
(i) P has an eigenvalue 1.
(ii) There exists v ≥ 0, v ≠ 0, such that vP = v.
(iii) If P is irreducible, then there exists a unique row eigenvector π = (π₁, ..., π_n) such that πP = π with π_i > 0 and Σ_i π_i = 1.

Proof. (i) The column vector u = (1, ..., 1)^T ∈ Rⁿ satisfies Pu = u. Hence 1 is an eigenvalue of P.
(ii) Since none of the rows of P is the zero vector, we can apply the idea from the proof of Lemma 1.19 to prove the existence of a positive eigenvalue λ with a nonnegative eigenvector. Take f(v) = vP. In this case, for v ∈ S,

$$||vP||_1 = \sum_j (vP)_j = \sum_j \sum_i v_i p_{ij} = \sum_i v_i \Big(\sum_j p_{ij}\Big) = \sum_i v_i = 1\ ,$$

so f maps S into S. Hence we may choose λ = 1.
(iii) Apply Theorem 1.23(ii).

If P is irreducible, then the transition from an arbitrary state to another arbitrary state through intermediate states is possible. If P is irreducible and aperiodic, then π is a steady-state solution of the probabilistic transition problem, i.e., after a sufficiently long time the probability that we are at state i is close to π_i, whatever the initial probability distribution is. A vector (p₁, ..., p_n) ≥ 0 is called a probability vector if Σ_i p_i = 1. Consult Theorem 5.21 and Maple Program 1.8.8.
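The book computes such steady states in Maple (Program 1.8.8); an equivalent Python sketch (an illustration only) iterates v ↦ vP from an arbitrary probability vector until it settles at π:

```python
def step(v, P):
    """One transition of a row probability vector: v -> vP."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]          # an irreducible aperiodic stochastic matrix

v = [1.0, 0.0]            # an arbitrary initial distribution
for _ in range(200):
    v = step(v, P)

# v approximates the unique pi with pi P = pi; here pi = (5/6, 1/6).
assert abs(v[0] - 5 / 6) < 1e-9 and abs(v[1] - 1 / 6) < 1e-9
assert abs(sum(v) - 1.0) < 1e-9
```

Solving πP = π directly gives π₁ = 5π₂, so π = (5/6, 1/6); the iteration converges geometrically at the rate of the second eigenvalue, here 0.4.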

1.4 Compact Abelian Groups and Characters

1.4.1 Compact abelian groups

A group is a set G that is closed under an operation · : G × G → G satisfying
(i) (g₁ · g₂) · g₃ = g₁ · (g₂ · g₃) for g₁, g₂, g₃ ∈ G,
(ii) there is an identity element e ∈ G such that g · e = e · g = g for g ∈ G,
(iii) for every g ∈ G there exists an inverse g⁻¹ ∈ G such that g · g⁻¹ = g⁻¹ · g = e.
A group is said to be abelian if the operation is commutative: g₁ · g₂ = g₂ · g₁ for every g₁, g₂. In most cases the group multiplication symbol '·' is omitted. When the elements of the group are numbers or vectors and the operation is the usual addition, the additive notation is often used. For A ⊂ G, we write gA = {ga : a ∈ A} and Ag = {ag : a ∈ A}.
Let G and H be groups. A mapping φ : G → H is called a homomorphism if φ(g₁g₂) = φ(g₁)φ(g₂) for g₁, g₂ ∈ G. In this case, φ(g⁻¹) = φ(g)⁻¹ and the identity element in G is sent to the identity element in H. If G = H, then φ is called an endomorphism. If φ : G → H is one-to-one and onto, then it is an isomorphism. If an endomorphism is also an isomorphism, then


it is an automorphism. If φ : G → H is a homomorphism, then the kernel {g ∈ G : φ(g) = e}, denoted by ker φ, and the range {φ(g) : g ∈ G} are subgroups of G and H, respectively.
A metric space (X, d) is locally compact if for every x₀ ∈ X there exists r > 0 such that {x ∈ X : d(x, x₀) ≤ r} is compact. A topological group is a group G with a topological structure, e.g., a metric, such that the following operations are continuous: (i) (g₁, g₂) → g₁g₂, (ii) g → g⁻¹. If G is compact, then it is called a compact group. When G and H are topological groups, we implicitly assume that a given homomorphism φ is continuous.
A measure µ on a topological group G is left-invariant if µ(gA) = µ(A) for every g ∈ G. A right-invariant measure is similarly defined. On an abelian group a left-invariant measure is also right-invariant. On a locally compact group there exist, under some regularity conditions, left-invariant measures and right-invariant measures, called left and right Haar measures, respectively. On compact groups, left and right Haar measures coincide and they are finite. For example, the circle T = {e^{2πix} : 0 ≤ x < 1} is regarded as the unit interval with its endpoints identified, and its Haar measure is Lebesgue measure. For more information consult [Fo1],[Roy].

Theorem 1.26. (i) A closed subgroup of the circle group T with infinitely many elements is T itself.
(ii) A finite subgroup of T with n elements is of the form {1, w, ..., w^{n−1}} where w = e^{2πi/n}.

Proof. (i) Let H be an infinite closed subgroup of T. Since H is compact, it has a limit point a. In any small neighborhood of a there exists an element of H, say b. Since a/b and b/a are in H and since they are close to 1, 1 is also a limit point. In short, for every n ≥ 1 there exists u = e^{2πiθ} ∈ H such that θ is real and |θ| < 1/n. Since {u^k : k ∈ Z} ⊂ H, any interval of length greater than 1/n contains an element of H. Therefore H is dense in T, and hence H = T.
(ii) If a finite subgroup H has n elements, then z^n = 1 for every z ∈ H. Since the polynomial equation z^n = 1 has only n solutions, and since H has n elements, all the points satisfying z^n = 1 belong to H. □

Theorem 1.27. (i) The automorphisms of T are given by z → z and z → z̄. (ii) The homomorphisms of T are given by z → z^n for some n ∈ Z. (iii) The homomorphisms of T^n to T are of the form (z1, . . . , zn) → z1^{k1} · · · zn^{kn} for some (k1, . . . , kn) ∈ Z^n.

Proof. (i) Let h : T → T be an isomorphism. Since (−1)^2 = 1, we have h(−1)^2 = h(1) = 1, and hence h(−1) = ±1. Since h is one-to-one, h(−1) = −1. Since i^2 = −1, we have h(i)^2 = h(−1) = −1, and hence h(i) = ±i. First, consider the case h(i) = i. The image of the arc with the endpoints 1 and i is the same arc since a continuous image of a connected set is connected. By induction we see that h(e^{2πi/2^k}) = e^{2πi/2^k}, and hence h(e^{2πim/2^k}) = h(e^{2πi/2^k})^m = e^{2πim/2^k}. Thus h(z) = z for points z in a dense subset. Since h is continuous, h(z) = z on T. For the case h(i) = −i, we consider the automorphism g(z) = h(z̄) and use the previous result.

(ii) Let h : T → T be a homomorphism. Assume that h ≠ 1. Since h is continuous, the range h(T) is a connected subgroup, which implies that h(T) = T. Since ker h is a closed subgroup, but not the whole group, we see that ker h is finite. By Theorem 1.26 we have ker h = {1, w, . . . , w^{n−1}} where w = e^{2πi/n} for some n ≥ 1. Let Jn be the closed arc from the endpoint 1 to w in the counterclockwise direction. Then h(Jn) = T and h is one-to-one in the interior of Jn. Note that h(e^{2πi/(2n)}) = −1 since h(w) = 1. Hence h(e^{2πi/(4n)}) = ±i. First, consider the case h(e^{2πi/(4n)}) = i. Proceeding as in Part (i), we see that h(z) = z^n on Jn. Take q ∈ T. Since q = z^k where k ≥ 1 and z ∈ Jn, we observe that h(q) = h(z^k) = h(z)^k = (z^n)^k = (z^k)^n = q^n. In the case h(e^{2πi/(4n)}) = −i, we have h(z) = z^{−n}, n ≥ 1.

(iii) Let h : T^n → T be a homomorphism. An element (z1, . . . , zn) ∈ T^n is a product of n elements uj = (1, . . . , 1, zj, 1, . . . , 1). Hence h((z1, . . . , zn)) = h(u1) · · · h(un). Now use Part (ii) for each h(uj). □

1.4.2 Characters

Let G be a compact abelian group. A character of G is a continuous mapping χ : G → C such that (i) |χ(g)| = 1, (ii) χ(g1 g2) = χ(g1)χ(g2). In other words, a character of G is a homomorphism from G into the unit circle group. Note that χ(e) = 1 and χ(g^{−1}) = 1/χ(g), the complex conjugate of χ(g). A trivial character χ is the one that sends every element to 1. In that case, we write χ = 1. The set of all characters is also a group. It is called the dual group and is denoted by Ĝ. The following facts are known: (i) the dual group of G1 × G2 is Ĝ1 × Ĝ2, (ii) the dual group of Ĝ is G.

Example 1.28. (i) The characters of Zn = {0, 1, . . . , n − 1} are given by
\[ \chi_k(j) = e^{2\pi i jk/n}, \quad j \in \mathbb Z_n, \ 0 \le k \le n-1. \]
(ii) The characters of T are given by χk(z) = z^k, k ∈ Z. See Theorem 1.27(ii).

Let µ be the Haar measure on a compact abelian group G. For f ∈ L1(G, µ) define the Fourier transform f̂ : Ĝ → C by
\[ \hat f(\chi) = \int_G f(g)\, \chi(g^{-1})\, d\mu. \]

For each χ, f̂(χ) is called a Fourier coefficient of f. Note that |f̂(χ)| ≤ ||f||_1 for every χ ∈ Ĝ. The characters form an orthonormal basis for L2(G, µ); in other words, for f ∈ L2(G, µ) we have the Fourier series expansion
\[ f(x) = \sum_{\chi \in \widehat G} \hat f(\chi)\, \chi(x) \]
and
\[ \int_G |f(x)|^2\, d\mu = \sum_{\chi \in \widehat G} |\hat f(\chi)|^2. \]
Furthermore, for f1, f2 ∈ L2(G, µ), we have
\[ \int_G f_1(x)\, \overline{f_2(x)}\, d\mu = \sum_{\chi \in \widehat G} \hat f_1(\chi)\, \overline{\hat f_2(\chi)}, \]
which is called Parseval's identity.

For g ∈ G, consider the unitary operator Ug on L2(G, µ) defined by (Ug f)(x) = f(gx). Since for a character χ,
\[ (U_g \chi)(x) = \chi(gx) = \chi(g)\chi(x), \]
we have Ug(Σ cχ χ) = Σ cχ χ(g) χ. Thus Ug = Σ χ(g) Pχ, where Pχ is the orthogonal projection onto the 1-dimensional subspace spanned by χ.

1.4.3 Fourier series

A classical example of the Fourier transform on a compact abelian group is the Fourier series on the unit circle identified with [0, 1). If f(x) is a periodic function of period 1, then f may be regarded as a function on [0, 1] with f(0) = f(1), and so it may be regarded as a function on the unit circle. As for the pointwise convergence of the Fourier series the following fact is known: If f(x) is continuous with f(0) = f(1) and if f′(x) is piecewise continuous with jump discontinuities, then
\[ f(x) = \sum_{n \in \mathbb Z} \hat f(n)\, e^{2\pi i n x} \]

where
\[ \hat f(n) = \int_0^1 f(x)\, e^{-2\pi i n x}\, dx \]
and the convergence is absolute and uniform.

Fact 1.29 (Mercer's theorem) Let f be an integrable function on [0, 1]. Then
\[ \lim_{|n| \to \infty} \hat f(n) = 0. \]
For the proof see [Hels]. The same type of theorem for the Fourier transform on the real line is called the Riemann–Lebesgue lemma. Define the Fourier coefficients of a measure µ on the unit circle [0, 1) by
\[ \hat\mu(n) = \int_0^1 e^{-2\pi i n x}\, d\mu(x). \]
Observe that µ̂(0) = 1 and |µ̂(n)| ≤ 1 for every n. If dµ = f(x) dx for an integrable function f, then µ̂(n) = f̂(n) for every n.
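The orthonormality of the characters is easy to see directly in the finite case of Example 1.28. The following sketch is in Python rather than the book's Maple; the modulus n = 8 is an arbitrary illustrative choice.

```python
import cmath

n = 8  # arbitrary illustrative modulus

# Characters of Z_n (Example 1.28): chi_k(j) = e^{2 pi i j k / n}.
def chi(k, j):
    return cmath.exp(2j * cmath.pi * j * k / n)

# With normalized counting measure (the Haar measure on Z_n), the inner
# product <chi_k, chi_l> = (1/n) sum_j chi_k(j) conj(chi_l(j)) equals 1
# when k = l and 0 otherwise: the characters are orthonormal.
for k in range(n):
    for l in range(n):
        ip = sum(chi(k, j) * chi(l, j).conjugate() for j in range(n)) / n
        target = 1.0 if k == l else 0.0
        assert abs(ip - target) < 1e-12
print("characters of Z_8 are orthonormal")
```

The same computation with any other modulus n works verbatim; only the range of k changes.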


Two integrable functions on [0, 1] with the same Fourier coefficients are equal almost everywhere. Furthermore, two measures with the same Fourier coefficients are also identical. Here is a proof: Let V be the vector space of all continuous functions f on [0, 1] such that f(0) = f(1), with the norm ||f|| = max |f(x)|. A Borel probability measure µ may be regarded as a linear mapping from V into C defined by µ(f) = ∫_0^1 f dµ. Then |µ(f)| ≤ ||f|| and µ : V → C is continuous. If two measures have the same Fourier coefficients, then they coincide on the set of the trigonometric polynomials, which is dense in V. Hence the two measures are identical. For a different proof see [Hels].

Fact 1.30 Let f be a continuous function on the unit circle, so that it can be regarded as a continuous function of period 1 on the real line. If f is k-times differentiable on the unit circle, and if f^{(k)} is integrable, then
\[ n^k \hat f(n) \to 0 \quad \text{as } |n| \to \infty. \]

Proof. First, consider k = 1. Then
\[ \widehat{f'}(n) = \int_0^1 f'(x) e^{-2\pi i n x}\, dx = \left[ f(x) e^{-2\pi i n x} \right]_0^1 - (-2\pi i n) \int_0^1 f(x) e^{-2\pi i n x}\, dx = 2\pi i n\, \hat f(n). \]
In general, for k ≥ 1, \widehat{f^{(k)}}(n) = (2\pi i n)^k \hat f(n). Hence
\[ |n^k \hat f(n)| = \frac{1}{(2\pi)^k}\, |\widehat{f^{(k)}}(n)|. \]
Now apply Fact 1.29 to f^{(k)}. □

Example 1.31. The nth Fourier coefficient of f(x) = 1/4 − |x − 1/2| is given by
\[ \hat f(n) = \begin{cases} 0, & n \text{ even}, \\[4pt] -\dfrac{1}{\pi^2 n^2}, & n \text{ odd}. \end{cases} \]
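The coefficients in Example 1.31 can be verified numerically; the book's own check is Maple Program 1.8.9, and the following is an equivalent Python sketch. The midpoint rule with an even number of cells keeps the kink of f at x = 1/2 on a cell boundary.

```python
import cmath
import math

def fhat(n, N=100000):
    """Approximate the nth Fourier coefficient of f(x) = 1/4 - |x - 1/2|
    on [0,1] by the midpoint rule (N even, so the kink at 1/2 is a cell edge)."""
    s = 0j
    for i in range(N):
        x = (i + 0.5) / N
        f = 0.25 - abs(x - 0.5)
        s += f * cmath.exp(-2j * cmath.pi * n * x)
    return s / N

# Compare with the exact values: 0 for even n, -1/(pi^2 n^2) for odd n.
for n in (1, 2, 3, 4, 5):
    exact = 0.0 if n % 2 == 0 else -1.0 / (math.pi ** 2 * n ** 2)
    assert abs(fhat(n) - exact) < 1e-6
print("fhat(n) = -1/(pi^2 n^2) for odd n, 0 for even n")
```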

See Maple Program 1.8.9.

1.4.4 Endomorphisms of a torus

Let T^n denote the n-dimensional torus {(x1, . . . , xn) : 0 ≤ xi < 1, 1 ≤ i ≤ n}, in the additive notation, which may be identified with the additive quotient group R^n/Z^n. We state some basic facts: (i) A finite subgroup of T is of the form {0, 1/n, . . . , (n−1)/n} for some n. (ii) An automorphism of T satisfies either


x → x or x → 1 − x for every x ∈ T. (iii) A homomorphism of T is of the form x → kx (mod 1) for some k ∈ Z. (iv) A homomorphism of T^n to T is of the form (x1, . . . , xn) → k1 x1 + · · · + kn xn (mod 1) for some (k1, . . . , kn) ∈ Z^n. We will use additive and multiplicative notations interchangeably in this book. A matrix is called an integral matrix if all the entries are integers.

Theorem 1.32. Let A and B denote n × n integral matrices. Then we have the following: (i) Define a mapping φ : T^n → T^n by φ(x) = Ax (mod 1). Then φ is an endomorphism of T^n. (ii) If φ and ψ are endomorphisms of T^n represented by A and B, then the composite map ψ ◦ φ is represented by BA. (iii) Every endomorphism of T^n is of the form given in (i). (iv) An endomorphism φ : T^n → T^n, φ(x) = Ax (mod 1), is onto if and only if det A ≠ 0. (v) An endomorphism φ : T^n → T^n, φ(x) = Ax (mod 1), is one-to-one and onto if and only if det A = ±1.

Proof. (i),(ii) Obvious. (iii) Let πi : T^n → T be the projection onto the ith coordinate. Then πi ◦ φ : T^n → T is a homomorphism. Now use Theorem 1.27(iii). (iv) If det A ≠ 0, then the linear mapping L_A : R^n → R^n, L_A(x) = Ax, is onto, and hence φ is also onto. Suppose det A = 0. Then det A^T = 0 and there exists v ≠ 0, v ∈ Q^n, such that A^T v = 0. Note that we can obtain v using Gaussian elimination. Since the entries of A^T are rational numbers and since Gaussian elimination uses only addition, subtraction, multiplication and division by nonzero numbers, we can choose v so that the entries of v are also rational numbers. Furthermore, by multiplying by a suitable constant if necessary, we can choose v so that its entries are integers. Let v = (k1, . . . , kn) and define a homomorphism χ : T^n → T by χ(x1, . . . , xn) = k1 x1 + · · · + kn xn (mod 1). Since A^T v = 0, we have v · Ay = A^T v · y = 0 for y ∈ R^n. Hence the range of φ is included in ker χ, which is a proper closed subgroup of T^n, and φ is not onto. (v) Note that the inverse of an automorphism φ is also an automorphism.
Let A and B be two integral matrices representing φ and φ^{−1}. By (ii) we see that AB = BA = I, and hence (det A)(det B) = 1. Since the determinant of an integral matrix is also an integer, we conclude that det A = ±1. Conversely, suppose det A = ±1. Then A^{−1} is also an integral matrix since the (j, i)th entry of A^{−1} is given by (−1)^{i+j} det A_{ij}/det A, where A_{ij} is the (n−1) × (n−1) matrix obtained by deleting the ith row and the jth column of A. Let ψ be the endomorphism of T^n defined by A^{−1}. By (ii) we see that ψ ◦ φ and φ ◦ ψ are endomorphisms of T^n and that both are represented by the identity matrix. Hence ψ ◦ φ = φ ◦ ψ = id, and so φ is invertible. □

Example 1.33. Consider the toral endomorphism φ given by
\[ A = \begin{pmatrix} 3 & 1 \\ 1 & 1 \end{pmatrix}. \]
Then φ is not one-to-one since φ(x + 1/2, y + 1/2) = φ(x, y).
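Example 1.33 is easy to check numerically; a short Python sketch (the matrix entries are those of the example):

```python
# Example 1.33: A = [[3, 1], [1, 1]] acts on T^2 by phi(x) = A x (mod 1).
A = [[3, 1], [1, 1]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det == 2  # det != 0: phi is onto; det != +-1: phi is not one-to-one

def phi(x, y):
    return ((A[0][0] * x + A[0][1] * y) % 1.0,
            (A[1][0] * x + A[1][1] * y) % 1.0)

# phi identifies (x, y) with (x + 1/2, y + 1/2), because
# A * (1/2, 1/2) = (2, 1), which is (0, 0) mod 1.
x, y = 0.3, 0.7
p1 = phi(x, y)
p2 = phi((x + 0.5) % 1.0, (y + 0.5) % 1.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(p1, p2))
print("phi(x + 1/2, y + 1/2) == phi(x, y): not one-to-one")
```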


1.5 Continued Fractions

Let 0 < θ < 1 be an irrational number, and consider its continued fraction expansion
\[ \theta = \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}}\;. \]
The natural numbers an are called the partial quotients and we write θ = [a1, a2, a3, . . .]. Take p_{−1} = 1, p_0 = 0, q_{−1} = 0, q_0 = 1. For n ≥ 1 choose pn, qn such that (pn, qn) = 1 and
\[ \frac{p_n}{q_n} = [a_1, a_2, \dots, a_n] = \cfrac{1}{a_1 + \cfrac{1}{\cdots + \cfrac{1}{a_n}}}\;. \]
The rational numbers pn/qn are called the convergents.

Example 1.34. (i) Let θ0 = [k, k, k, . . .] for k ≥ 1. Then θ0 = 1/(k + θ0), and
\[ \theta_0 = \frac{-k + \sqrt{k^2 + 4}}{2}. \]
(ii) It is known that e = 2.718281828459045 . . . satisfies
\[ e = 2 + [1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, \dots, 1, 2n, 1, \dots]. \]
See Maple Programs 1.8.10, 1.8.11.

Definition 1.35. For x ∈ R put
\[ \|x\| = \min_{n \in \mathbb Z} |x - n|. \]

Thus ||x|| is the distance from x to Z.

Fact 1.36
(i) q_{n+1} = a_{n+1} q_n + q_{n−1}, p_{n+1} = a_{n+1} p_n + p_{n−1}
(ii) p_{n+1} q_n − p_n q_{n+1} = (−1)^n
(iii) The signs of the sequence θ − p_n/q_n, n ≥ 1, alternate.
(iv) \[ \frac{1}{q_n(q_{n+1} + q_n)} < \left| \theta - \frac{p_n}{q_n} \right| < \frac{1}{q_n q_{n+1}}, \quad n \ge 0 \]
(v) \[ \frac{1}{q_{n+1} + q_n} < \| q_n \theta \| < \frac{1}{q_{n+1}}, \quad n \ge 1 \]
(vi) If 1 ≤ j < q_{n+1}, then ||jθ|| ≥ |q_n θ − p_n| = ||q_n θ||. Hence ||q_n θ|| decreases monotonically to 0 as n → ∞.


Proof. For (i)–(v) consult [AB],[Kh],[KuN],[La]. For (vi) we follow the proof in Chap. 2, Sect. 3 of [RS]. Suppose that ||jθ|| = |jθ − h| for some h ∈ Z. Consider the equation jθ − h = A(q_{n+1}θ − p_{n+1}) + B(q_n θ − p_n). Since θ is irrational, we obtain
\[ \begin{pmatrix} p_{n+1} & p_n \\ q_{n+1} & q_n \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} h \\ j \end{pmatrix}. \]
Since the determinant of the matrix is equal to (−1)^n, we see that A and B are integers. Since (p_k, q_k) = 1 and j < q_{n+1}, we see that h/j is different from p_{n+1}/q_{n+1} and that B ≠ 0. If A = 0, then |jθ − h| = |B(q_n θ − p_n)| ≥ |q_n θ − p_n|. Now suppose that AB ≠ 0. Then A and B have opposite signs since j < q_{n+1}. Since q_{n+1}θ − p_{n+1} and q_n θ − p_n also have opposite signs, we have |jθ − h| = |A(q_{n+1}θ − p_{n+1})| + |B(q_n θ − p_n)| ≥ |q_n θ − p_n|. □
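The recurrence in Fact 1.36(i) and the estimate (v) are easy to check numerically. Here is a Python sketch (the book's checks are Maple Programs 1.8.10 and 1.8.11), using the partial quotients of e − 2 from Example 1.34(ii):

```python
import math
from fractions import Fraction

def convergents(partial_quotients):
    """p_n/q_n for theta = [a1, a2, ...], via Fact 1.36(i) with
    p_{-1} = 1, p_0 = 0, q_{-1} = 0, q_0 = 1."""
    p_prev, p = 1, 0
    q_prev, q = 0, 1
    out = []
    for a in partial_quotients:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

# Example 1.34(ii): e - 2 = [1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]
cs = convergents([1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8])
theta = math.e - 2
assert abs(float(cs[-1]) - theta) < 1e-7

# Fact 1.36(v): 1/(q_{n+1} + q_n) < ||q_n theta|| < 1/q_{n+1}
for n in range(len(cs) - 1):
    qn, qn1 = cs[n].denominator, cs[n + 1].denominator
    dist = abs(qn * theta - round(qn * theta))   # ||q_n theta||
    assert 1.0 / (qn1 + qn) < dist < 1.0 / qn1
print("convergents of e - 2 satisfy Fact 1.36(v)")
```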

1.6 Statistics and Probability

We introduce basic facts. For more information, consult [AG],[Du],[Fo2].

1.6.1 Independent random variables

The terminology in probability theory is different from that in measure theory but there is a correspondence between them. See Table 1.1. A sample space Ω with a probability P is a probability measure space with a probability measure µ. A random variable X on Ω is a measurable function X : Ω → R. (The values of X are usually denoted by x.) Define a probability measure µX on R by
\[ \mu_X(A) = P(X^{-1}(A)) = \Pr(X \in A), \]
where Pr(X ∈ A) denotes the probability that a value of X belongs to A ⊂ R. The probability measure µX has all the information on the distribution of values of X. For a measurable function h : R → R, we have
\[ \int_\Omega h(X(\omega))\, dP(\omega) = \int_{-\infty}^{\infty} h(x)\, d\mu_X(x). \]
Taking h(x) = |x|^p, we have
\[ \int_\Omega |X(\omega)|^p\, dP(\omega) = E[\,|X|^p\,] = \int_{-\infty}^{\infty} |x|^p\, d\mu_X(x). \]
If there exists f_X ≥ 0 on R such that
\[ \int_A f_X(x)\, dx = \mu_X(A) \]
for every A, then f_X is called a probability density function (pdf) of X.


The expectation (or mean) and the variance of X are defined by
\[ E[X] = \int_{-\infty}^{\infty} x\, d\mu_X(x) \]
and
\[ \sigma^2[X] = \int_{-\infty}^{\infty} (x - E[X])^2\, d\mu_X(x). \]
The standard deviation, denoted by σ, is the square root of the variance. If two random variables X and Y are identically distributed, i.e., they have the same associated probability measures dµX and dµY on R, then they have the same expectation and the same variance.

Table 1.1. Comparison of terminology

  Measure Theory                         Probability Theory
  a probability measure space X          a sample space Ω
  x ∈ X                                  ω ∈ Ω
  a σ-algebra A                          a σ-field F
  a measurable subset A                  an event E
  a probability measure µ                a probability P
  µ(A)                                   P(E)
  a measurable function f                a random variable X
  f(x)                                   x, a value of X
  a characteristic function χE           an indicator function 1E
  Lebesgue integral ∫X f dµ              expectation E[X]
  almost everywhere                      almost surely, or with probability 1
  convergence in L1                      convergence in mean
  convergence in measure                 convergence in probability
  conditional measure µA(B)              conditional probability Pr(B|A)

Definition 1.37. A sequence of random variables Xi : Ω → R, i ≥ 1, is said to be independent if the sequence of subsets {X_i^{-1}(B_i)}_{i=1}^{∞} is independent for all Borel measurable subsets Bi ⊂ R. (For independent subsets consult Definition 1.15.)

Fact 1.38 Let X1, . . . , Xn be independent random variables.
(i) For any Borel measurable subsets Bi ⊂ R we have
\[ \Pr(X_1 \in B_1, \dots, X_n \in B_n) = \Pr(X_1 \in B_1) \cdots \Pr(X_n \in B_n). \]
(ii) If −∞ < E[Xi] < ∞ for every i, then
\[ E[X_1 \cdots X_n] = E[X_1] \cdots E[X_n]. \]
(iii) If E[Xi^2] < ∞ for every i, then
\[ \sigma^2[X_1 + \cdots + X_n] = \sigma^2[X_1] + \cdots + \sigma^2[X_n]. \]


1.6.2 Change of variables formula

Let X be a random variable with a continuous pdf f_X, i.e., the probability of finding a value of X between x and x + ∆x is f_X(x)|∆x|. If y = g(x) is a monotone function, then we define Y = g(X). Let f_Y be the pdf for Y, that is, the probability of finding a value of Y between y and y + ∆y is f_Y(y)|∆y|. Note that f_X(x)|∆x| = f_Y(y)|∆y| implies
\[ f_Y(y) = f_X(x) \left| \frac{\Delta x}{\Delta y} \right|. \]
Hence
\[ f_Y(y) = f_X(x)\, \frac{1}{|g'(x)|} = f_X(g^{-1}(y))\, \frac{1}{|g'(g^{-1}(y))|}. \]

Example 1.39. Let X be a random variable uniformly distributed in [0, 1], i.e., 0 ≤ X ≤ 1 and f_X(x) = 1 for 0 ≤ x ≤ 1, f_X(x) = 0 elsewhere. Define a new random variable Y = − ln(X), i.e., g(x) = − ln x. Then 0 ≤ Y ≤ ∞. Since g′(x) = −1/x and x = e^{−y}, we have
\[ f_Y(y) = e^{-y}, \quad y \ge 0. \]
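Example 1.39 is the basis of the standard "inverse transform" way of producing exponential samples from uniform ones. A Python sketch (the sample size and tolerances are illustrative choices, not from the text):

```python
import math
import random

random.seed(1)

# If X is uniform on (0,1], then Y = -ln X has pdf e^{-y}, y >= 0 (Example 1.39).
# Using 1 - random.random() keeps the argument of log strictly positive.
N = 200000
ys = [-math.log(1.0 - random.random()) for _ in range(N)]

assert min(ys) >= 0.0
assert abs(sum(ys) / N - 1.0) < 0.02        # E[Y] = 1 for the pdf e^{-y}
tail = sum(1 for y in ys if y > 1.0) / N
assert abs(tail - math.exp(-1.0)) < 0.01    # Pr(Y > 1) = e^{-1}
print("Y = -ln X behaves like an exponential variable")
```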

Example 1.40. If a random variable X has the pdf f_X(x) = e^{−x}, x ≥ 0, then the pdf f_Y of Y = ln X satisfies
\[ f_Y(y) = f_X(x)\, x = e^{-e^y} e^y = e^{\,y - e^y}. \]
See Fig. 1.1.

Fig. 1.1. Pdf's of X (left) and Y = log X (right) when X has an exponential distribution

1.6.3 Statistical laws

There are various versions of the Law of Large Numbers. In this subsection the symbol µ denotes a mean, not a measure.


Theorem 1.41 (Weak Law of Large Numbers). Let X1, X2, X3, . . . be a sequence of independent identically distributed L2-random variables with mean µ and variance σ^2. If Sn = X1 + · · · + Xn, then (1/n)Sn converges to µ in measure as n → ∞. In other words, for any ε > 0,
\[ \lim_{n\to\infty} \Pr\left( \left| \frac{X_1 + \cdots + X_n}{n} - \mu \right| > \varepsilon \right) = 0. \]
Proof. Use Chebyshev's inequality (Fact 1.14) with p = 2. Then
\[ \Pr\left( \left| \frac{1}{n} \sum_{j=1}^{n} (X_j - \mu) \right| > \varepsilon \right) \le \frac{1}{n^2 \varepsilon^2} \sum_{j=1}^{n} E[(X_j - \mu)^2] = \frac{\sigma^2}{n \varepsilon^2}. \]
Now let n → ∞. □
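The Chebyshev bound σ²/(nε²) from the proof can be compared with simulated frequencies. A Python sketch with uniform X_i (so µ = 1/2 and σ² = 1/12); the sample sizes and the slack term are our own illustrative choices:

```python
import random

random.seed(7)

mu, var, eps = 0.5, 1.0 / 12.0, 0.05
trials = 2000
for n in (10, 100, 1000):
    bad = sum(
        1 for _ in range(trials)
        if abs(sum(random.random() for _ in range(n)) / n - mu) > eps
    )
    freq = bad / trials
    bound = var / (n * eps * eps)
    # The observed frequency respects Chebyshev's bound (up to sampling slack)
    # and shrinks as n grows, as the Weak Law predicts.
    assert freq <= bound + 0.03
    print(n, freq, "Chebyshev bound:", bound)
```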

The following fact is due to A. Khinchin.

Theorem 1.42 (Strong Law of Large Numbers). Let X1, X2, X3, . . . be a sequence of independent identically distributed L1-random variables with mean µ. If Sn = X1 + · · · + Xn, then (1/n)Sn converges to µ almost surely as n → ∞.

Definition 1.43. The probability density function of the standard normal distribution on R is defined by
\[ \Phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \]
i.e., the standard normal variable X satisfies
\[ \Pr(X \le x) = \int_{-\infty}^{x} \Phi(t)\, dt. \]

For the graph y = Φ(x) see Fig. 1.2. Most of the probability (≈ 99.7%) is concentrated in the interval −3 ≤ x ≤ 3. Consult Maple Program 1.8.12.

Fig. 1.2. The pdf of the standard normal distribution

The cumulative distribution function (cdf) of a given probability distribution on R is defined by x → Pr(X ≤ x), −∞ < x < ∞. Note that a cdf is a monotonically increasing function. See Fig. 1.3 for the graph of the cdf of the standard normal distribution.


Fig. 1.3. The cdf of the standard normal distribution

Theorem 1.44 (The Central Limit Theorem). Let X1, X2, X3, . . . be a sequence of independent identically distributed L2-random variables with mean µ and variance σ^2. If Sn = X1 + · · · + Xn, then
\[ \lim_{n\to\infty} \Pr\left( \frac{S_n - n\mu}{\sigma\sqrt{n}} \le x \right) = \int_{-\infty}^{x} \Phi(t)\, dt, \quad -\infty < x < \infty. \]
In other words, if n observations are randomly drawn from a not necessarily normal population with mean µ and standard deviation σ, then for sufficiently large n the distribution of the sample mean Sn/n is approximately normally distributed with mean µ and standard deviation σ/√n. For more information consult [Fo2].
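The Central Limit Theorem is easy to observe in a simulation. A Python sketch with uniform summands (n and the number of trials are arbitrary illustrative choices):

```python
import math
import random

random.seed(42)

# X_i uniform on [0,1]: mu = 1/2, sigma^2 = 1/12. For moderate n,
# (S_n - n*mu)/(sigma*sqrt(n)) should be approximately standard normal.
n, trials = 48, 20000
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)
zs = [(sum(random.random() for _ in range(n)) - n * mu) / (sigma * math.sqrt(n))
      for _ in range(trials)]

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Compare the empirical cdf of the normalized sums with the normal cdf.
for x in (-1.0, 0.0, 1.0):
    empirical = sum(1 for z in zs if z <= x) / trials
    assert abs(empirical - std_normal_cdf(x)) < 0.02
print("normalized sums match the standard normal cdf")
```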

1.7 Random Number Generators

1.7.1 What are random numbers?

There is no practical way of obtaining a perfectly random sequence of numbers. One might consider measuring the intensity of cosmic rays or movements of small particles suspended in fluid and use the measured data as a source of random numbers. Such efforts have failed. The data usually have patterns or deviate from theoretical predictions based on properties of ideal random numbers. Furthermore, when numerical experiments are done, they need to be repeated by other researchers to be confirmed independently. That is why we need simple and fast algorithms for generating random numbers. Computer languages such as Fortran and C have built-in programs and they employ generators of the form x_{n+1} = a x_n + b (mod M) for some suitably chosen values of a, b and M. We start with a seed x_0 and proceed recursively. The constants x_0, a, b, M should be chosen with great care. Since computer generated random numbers are produced by deterministic rules, they are not truly random. They are called pseudorandom numbers. In practical applications they behave as if they were perfectly random, and they can be used to simulate random phenomena. From time to time we use the term random number generators (RNG's) instead of pseudorandom number generators (PRNG's) for the sake of simplicity if there is no danger of confusion. One more remark on terminology: quasirandom points are almost


uniformly distributed points in a multidimensional cube in a suitable sense, and they are different from what we call random points in this book. For more information consult [Ni]. Consider a function f(x1, . . . , xs) defined on the s-dimensional cube Q = ∏_{k=1}^{s} I_k ⊂ R^s, I_k = [0, 1]. Suppose that we want to integrate f(x1, . . . , xs) over Q numerically. We might try the following method based on Riemann integration. First, we partition each interval I_k, 1 ≤ k ≤ s, into n subintervals. Note that the number of small subcubes in Q is equal to n^s. Next, choose a point q_i, 1 ≤ i ≤ n^s, from each subcube, evaluate f(q_i) and take the average over such points q_i. The difficulty in such an algorithm is that even when the dimension s is modest, say s = 10, the sum of n^s numbers becomes practically impossible to compute. To avoid such a difficulty we employ the so-called Monte Carlo method based on randomness. Choose m points randomly from {q_i : 1 ≤ i ≤ n^s}, where m is sufficiently large but very small compared with n^s, and take the average of f over these m points. The method works well and it is the only practical way of obtaining a good estimate of the integral of f over Q in a reasonable time limit. The error of estimation depends on the number m of points chosen. Once the sample size m is given, the effectiveness of estimation depends on how to choose m points randomly from n^s points. To do that we use a PRNG. See Maple Program 1.8.15. For the history of the Monte Carlo method, see [Ec],[Met]. For serious simulations really good generators are needed, a demand that is never satisfied fully as theoretical models in applications become more sophisticated and computer hardware is upgraded constantly to do faster computation. As the demand for good random number generators grows, there also arises the need for efficient methods to test newly invented algorithms. There are currently many tests for PRNG's, and some of them are listed in [Kn].
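The grid-versus-random comparison can be made concrete. A Python sketch of the Monte Carlo estimate for s = 10; the integrand x_1^2 + · · · + x_s^2, whose integral over the cube is s/3, is our own illustrative choice, not one from the text:

```python
import random

random.seed(0)

# Monte Carlo integration of f(x_1,...,x_s) = x_1^2 + ... + x_s^2 over [0,1]^10.
# Exact value: s/3. A grid with n = 10 points per axis would need 10^10
# evaluations; m = 100000 random points already give a good estimate.
s, m = 10, 100000
total = 0.0
for _ in range(m):
    total += sum(random.random() ** 2 for _ in range(s))
estimate = total / m

assert abs(estimate - s / 3.0) < 0.05
print("Monte Carlo estimate:", estimate, "exact:", s / 3.0)
```

The error behaves like σ(f)/√m, independently of the dimension s, which is why the method scales where grid summation does not.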
One of the standard tests is to check whether there is a lattice structure in the set of the points (x_{3i+1}, x_{3i+2}, x_{3i+3}), i ≥ 0, where the integers x_n are generated by a generator. See Fig. 1.4 and Maple Program 1.8.13 for some bad linear congruential generators.

Fig. 1.4. Lattice structures of the points (x_{3i+1}, x_{3i+2}, x_{3i+3}), 0 ≤ i ≤ 2000, given by LCG(M, a, b, x0) for a = 137, b = 187, M = 2^8 (left) and a = 6559, b = 0, M = 2^30 (right)


For an elementary introduction to random numbers and some historical remarks see [Benn],[Pson], and for a survey see [Hel]. For a comprehensive introduction, consult [Kn].

1.7.2 How to generate random numbers

A linear congruential generator, denoted by LCG(M, a, b, x0), is an algorithm given by
\[ x_{n+1} := a x_n + b \pmod{M} \]
with initial seed x0.

Fact 1.45 The period length of LCG(M, a, b, x0) is equal to M if and only if (i) b is relatively prime to M, (ii) a − 1 is a multiple of p for every prime p dividing M, and (iii) a − 1 is a multiple of 4 if M is a multiple of 4.

For the proof see [Kn]. In the 1960s IBM developed an LCG called Randu, which turned out to be less effective than expected. These days once a new generator is invented, it is often compared with Randu to test its performance. ANSI (American National Standards Institute) and Microsoft employ LCG's in their C libraries. For a prime number p, an inversive congruential generator ICG(p, a, b, x0) is defined by
\[ x_{n+1} := a x_n^{-1} + b \pmod{p} \]
with initial seed x0, where x^{-1} is the multiplicative inverse of x modulo p. Table 1.2 is a list of random number generators that are currently used in practice, except Randu, which is included for comparison.

Table 1.2. Pseudorandom number generators

  Generator   Algorithm                                   Period
  Randu       LCG(2^31, 65539, 0, 1)                      2^29
  ANSI        LCG(2^31, 1103515245, 12345, 141421356)     2^31
  Microsoft   LCG(2^31, 214013, 2531011, 141421356)       2^31
  Fishman     LCG(2^31 − 1, 950706376, 0, 3141)           2^31 − 2
  ICG         ICG(2^31 − 1, 1, 1, 0)                      2^31 − 1
  Ran0        LCG(2^31 − 1, 16807, 0, 3141)               2^31 − 2
  Ran1        Ran0 with shuffle                           > 2^31 − 2
  Ran2        L'Ecuyer's algorithm with shuffle           > 2.3 × 10^18
  Ran3        x_n = x_{n−55} − x_{n−24} (mod 2^31)        ≤ 2^55 − 1
  MT          bitwise XOR operation                       2^19937 − 1
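Fact 1.45 can be tested on small moduli by measuring cycle lengths directly. A Python sketch; the toy parameters M = 16, a = 5, b = 3 are our own, chosen to satisfy conditions (i)–(iii):

```python
def lcg_period(M, a, b, x0):
    """Cycle length of x -> a*x + b (mod M) starting from seed x0."""
    seen = {}
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x = (a * x + b) % M
        i += 1
    return i - seen[x]

# Fact 1.45 for M = 16, a = 5, b = 3:
# (i) gcd(3, 16) = 1; (ii) 2 | 16 and 2 | a - 1 = 4; (iii) 4 | 16 and 4 | 4.
# All three conditions hold, so the period is the full M = 16.
assert lcg_period(16, 5, 3, 0) == 16
# Violating (iii): a = 3 gives a - 1 = 2, which is not a multiple of 4.
assert lcg_period(16, 3, 3, 0) < 16
print("full period 16 with a = 5, shorter period with a = 3")
```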

The generators Ran0, Ran1, Ran2 and Ran3 are from [PTV]. Ran0 is an LCG found by Park and Miller [PM]. Ran1 is Ran0 with Bays-Durham shuﬄe.


Ran2 is L'Ecuyer's algorithm [Le] with shuffle, where L'Ecuyer's algorithm is a combination of two LCG's with different moduli. Ran3 is Knuth's algorithm [Kn]. MT is the Mersenne Twister [MN]. The name comes from the fact that the period is a Mersenne prime, i.e., both 19937 and 2^19937 − 1 are prime. One might consider the possibility of using a chaotic dynamical system as a random number generator. For instance, if T : [0, 1] → [0, 1] is a suitably chosen chaotic mapping whose orbit points x, Tx, T^2 x, . . . for some x are more or less random in [0, 1], then define a binary sequence b_n, n ≥ 1, by the rule: b_n = 0 (or 1, respectively) if and only if T^{n−1} x ∈ [0, 1/2) (or T^{n−1} x ∈ [1/2, 1), respectively). Then b_n may be regarded as being sufficiently random. See Chap. 13 in [BoG] for a discussion of using Tx = (π + x)^5 (mod 1) as a random number generator, which is suitable only for small scale simulations. For extensive simulations it is recommended to use well-known generators that have passed various tests.

1.7.3 Random number generator in Maple

Maple uses an LCG of the form x_{n+1} = a x_n + b (mod M) where
\[ a = 427419669081, \quad b = 0, \quad M = 999999999989 \]
and x0 = 1. Note that M is prime. Hence the generator has period M − 1. This follows from Euler's theorem: If M > 1 is a positive integer and if a is relatively prime to M, then a^{φ(M)} ≡ 1 (mod M), where φ(1) = 1 and φ(M) is the number of positive integers less than M and relatively prime to M. See Maple Program 1.8.14. If we want a prime number for M that is close to 10^12, then M = 10^12 − 11 is such a choice. Between 10^12 − 20 and 10^12 + 20, it is the only prime number. The command rand() generates a random number between 1 and M − 1. We never obtain 0 since a has a multiplicative inverse modulo M. Maple produces the first several thousand digits of π = 3.1415 . . . very quickly. We often choose π − 3 or π/10 as a starting point (or a seed) of an orbit of a transformation with an absolutely continuous invariant measure on the unit interval. For empirical evidence of randomness of digits of π, see [AH].
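With the parameters quoted above, the instance of Euler's theorem behind the period claim can be checked directly. A Python sketch (the parameters a and M are those stated for Maple's generator; everything else is illustrative):

```python
# Maple-style multiplicative generator: x_{n+1} = a * x_n (mod M), M prime.
a, M = 427419669081, 999999999989

# Euler's theorem with prime M (Fermat's little theorem):
# a^(phi(M)) = a^(M-1) = 1 (mod M), so the period divides M - 1.
assert pow(a, M - 1, M) == 1

# Since a is invertible mod M, the orbit of the seed x0 = 1 never reaches 0.
x = 1
for _ in range(1000):
    x = (a * x) % M
    assert 1 <= x <= M - 1
print("first 1000 values lie in [1, M - 1]")
```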

1.8 Basic Maple Commands

Computer simulations given in this book are indispensable for understanding theoretical results. The software Maple is used for symbolic and numerical experiments. It can handle a practically unlimited number of decimal digits. Some of the Maple algorithms given in this book are not fully optimized for efficiency in memory requirement and computing speed because we hope to translate theoretical ideas into computer programs without introducing


any complicated programming skills. Furthermore, sometimes a less polished program is easier to understand. Maple procedures are not used in Maple programs presented in this book. Readers with a little experience in computing may improve some of the programs. We present some of the essential Maple commands. They are organized into subsections. Please try to read from the beginning. For more information on a special topic, type a keyword right after the question mark '?'.

1.8.1 Restart, colon, semicolon, Digits

Sometimes, during the course of Maple computation, we need to restart from the beginning and reset the values assigned to all the variables. If that is the case, we use the command restart. (Maple instructions in this book, which are given by a user, are printed in a teletype font.)
> restart;
A command should end with a semicolon ';' and the result of computation is displayed. If a command ends with colon ':' then the result of computation is not displayed. Sometimes output of a Maple command is not presented in this book to save space.
>

# Anything written after the symbol # is ignored by Maple.

>

Digits:=10;

Digits := 10
This proclaims that the number of significant decimal digits is set to ten. Maple can use a practically unlimited number of decimal digits in floating point calculation. The default value is equal to ten; in other words, if the number of significant digits is not declared formally, then Maple assumes that it is set to 10. Do not start a Maple experiment with too many digits unnecessarily. It slows down the computation. Start with a small number of digits in the beginning. It is recommended to try a simple toy model first.
> Digits;
10
This confirms the number of significant decimal digits employed by Maple at the moment.

1.8.2 Set theory and logic

Define two sets A and B.
> A:={1,2,a};
A := {1, 2, a}

B:={0,1}; B := {0, 1}


Find the elements of A.
> op(A);
1, 2, a
Find the number of elements (or cardinality) of A.
> nops(A);
3
Find A ∪ B.
> A union B;
{0, 1, 2, a}
Find A \ B.
> A minus B;
{2, a}
Find A ∩ B.
> A intersect B;
{1}
Here is a mathematical statement that 1 ∈ A.
> 1 in A;
1 in {1, 2, a}
Check the validity of the previous statement (denoted by '%'). The logical command evalb (b for 'Boolean') evaluates a statement and returns true or false.
> evalb(%);
true
Check the validity of the statement inside the parentheses.
> evalb(0 in A);
false
The following is an if . . . then . . . conditional statement. Choose a number K.
> K:=1;
K := 1
> if K > 2 then X:=1: else X:=0: fi:
Note that fi is a short form of end if. Now check the value of X.
> X;
0
In the following a <> b means the compound statement (a < b or a > b), and it is equivalent to a ≠ b.
> evalb(-1 <> 1);
true


1.8.3 Arithmetic

Here are several examples of basic operations of Maple. Some of the results are stored in computer memory as abstract objects.
> 1+0.7;
1.7
> Pi;
π
> sqrt(2);
√2
> a-b;
a − b
> x*y^2;
x y^2
> %;
x y^2
The symbol '%' denotes the result obtained from the preceding operation.
> gamma;
γ
> evalf(%,10);
.5772156649
This is the Euler constant γ = lim_{n→∞} (∑_{k=1}^{n} 1/k − ln n). The command evalf (f for 'floating') evaluates a given expression and returns a number in a floating point format.
> sqrt(2.);
1.414213562
> evalf(Pi,30);
3.14159265358979323846264338328
> evalf(exp(1),20);
2.7182818284590452354
Maple understands standard mathematical identities.
> sin(x)^2+cos(x)^2;
sin(x)^2 + cos(x)^2

> simplify(%);
1
> sum(1/n^2,n=1..infinity);
π^2/6


In general, two dots represent a range for addition, integration and plotting, etc. In the preceding summation, we take the sum for all n ≥ 1. The commands sum and add are different. In the following, sum is used for the symbolic addition.
> sum(sin(n),n=1..N);
\[ \frac{1}{2}\,\frac{\sin(1)\cos(N+1)}{\cos(1)-1} - \frac{1}{2}\sin(N+1) + \frac{1}{2}\sin(1) - \frac{1}{2}\,\frac{\sin(1)\cos(1)}{\cos(1)-1} \]
The command add is used for numerical addition of numbers.
> add(exp(n),n=1..N);
Error, unable to execute add
This does not work since N is a symbol, not a specified number. The command sum may be used for a finite sum of numbers, too. It takes longer and consumes more computer memory if we try to add very many numbers. When we add reasonably many numbers there seems to be no big difference in general.

1.8.4 How to plot a graph

In its early days Maple was developed for students and researchers who often had modest computer hardware resources. Developers of Maple thought that the users would need only a small subset of commands in a single task. When we start Maple it loads a part of the whole set of Maple commands and gradually adds more as the user instructs. To plot a graph we need to include the package 'plots'.
> with(plots):
Plot a graph. Double dots denote the range.
> plot(sin(x)/x, x=0..10, y=-1.1..1.1);
See Fig. 1.5. The graph suggests that the limit as x → 0 is equal to 1. The same technique can be used instead of L'Hospital's rule.

Fig. 1.5. y = (sin x)/x, 0 < x < 10

Fig. 1.5. y = (sin x)/x, 0 < x < 10

Find the Taylor series. The following result shows that x = 0 is a removable singularity of (sin x)/x. The command series may be used in some cases.

1.8 Basic Maple Commands >

33

taylor(sin(x)/x,x=0,10); 1 1 4 1 1 1 − x2 + x − x6 + x8 + O(x9 ) 6 120 5040 362880

Now let us draw a graph of a function f(x, y) of two variables.
> f:=(x,y)->x*y/sqrt(x^2+y^2);
f := (x, y) → xy/√(x^2 + y^2)
Use the command plot3d to plot the graph z = f(x, y) in R^3. Plot the graph around the origin where the function is not yet defined. The function can be extended continuously at the origin by defining f(0, 0) = 0.
> plot3d(f(x,y),x=-0.00001..0.00001,y=-0.00001..0.00001, scaling=constrained);
See Fig. 1.6. By pointing the cursor of the mouse at the graph and moving it while pressing a button, we can rotate the graph.

Fig. 1.6. z = f(x, y) where |x|, |y| < 10^{−5}

In an introductory analysis course students have diﬃculty in checking the continuity or diﬀerentiability of a function f (x, y) with a possible singularity at (0, 0). By plotting the graph on gradually shrinking small squares around the origin, we can check whether f is continuous or diﬀerentiable at (0, 0). The continuity of f at (0, 0) is equivalent to the connectedness of the graph z = f (x, y) at (0, 0), and the diﬀerentiability of f is equivalent to the existence of the tangent plane. For example, plot the graph on |x| ≤ a, |y| ≤ a for a = 0.01, a = 0.001, and so on. In the above example, it is easy to see that there is no tangent plane at (0, 0). Here are several options for plotting: Use labels for names of coordinate axes, tickmarks for the number of tickmarks on the axes, and color for the color of the curve, thickness for the thickness of the curve. We can choose a symbol, for example, symbol=circle. For the graph obtained from the following command see Fig. 8.3. > plot(-x*ln(x)-(1-x)*ln(1-x),x=0..1,labels=["p","H(p)"], tickmarks=[2,3],color=blue,thickness=3);


1 Prerequisites

When we want a line graph connecting the points, we use the command listplot. To plot only the points, we use the command pointplot. These commands can be used interchangeably if we specify the style by style=point or style=line. When we draw two graphs together in a single frame, we use display:
> g1:=pointplot([[1,1/2],[1/2,1],[1/2,1/2]],symbol=circle):
> g2:=plot(x^2, x=-1..1, tickmarks=[2,2]):
> display(g1,g2);
See Fig. 1.7.


Fig. 1.7. Display of two graphs

The Maple programs presented in this book usually do not contain detailed options for drawing because we want to emphasize the essential ideas and to save space. Maple programs from the author's homepage contain all the necessary plot options to draw the graphs in the text. All the graphs, figures, and diagrams in this book were drawn by the author using only Maple.
1.8.5 Calculus: differentiation and integration
Maple can differentiate and integrate a function symbolically. First let us integrate a function. The first slot in int( , ) is for a function and the second slot for the variable x. Integrate the function 1/x with respect to x.
> int(1/x,x);
ln(x)
Here log may be used in place of ln. For the logarithm to the base b, use log[b].
> log[2](2^k);
ln(2^k) / ln(2)
Simplify the preceding result, represented by the symbol '%'.

> simplify(%);
k
Integrate 1/√(x(1 − x)) over the interval 0 ≤ x ≤ 1.
> int(1/(sqrt(x*(1-x))),x=0..1);
π
Now differentiate a function with respect to x.
> diff(exp(x^2),x);
2 x e^(x²)
Substitute a condition into the previous result (%).
> subs(x=-1,%);
−2 e
> ?subs
The question mark '?' followed by a command explains the meaning of the command. In this case, the command subs is explained.
1.8.6 Fractional and integral parts of real numbers
Many transformations on the unit interval are defined in terms of fractional parts of some real-valued functions. Here are five basic commands: (i) frac(x) is the fractional part of x, (ii) ceil(x) is the smallest integer ≥ x, (iii) floor(x) is the greatest integer ≤ x, (iv) trunc(x) truncates x to the nearest integer towards 0, and (v) round(x) rounds x to the nearest integer. The command ceil comes from the word 'ceiling', and the commands floor and trunc coincide for x ≥ 0 and differ for x < 0.
> frac(1.3);
.3
> frac(-1.3);
−.3
> ceil(0.4);
1
> floor(1.8);
1
> trunc(-10.7);
−10
> round(-10.7);
−11
Draw the graphs of the preceding piecewise linear functions. Observe the values of frac(x) for x < 0. Without the plot option discont=true, the graphs would be connected at the discontinuities.
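The five integer-part commands have direct counterparts in Python's standard library, which can serve as a cross-check of the values above (a sketch, not part of the Maple session; the helper frac is ours, defined so that it matches Maple's sign convention):

```python
import math

def frac(x):
    # Maple's frac: fractional part with the sign of x, frac(x) = x - trunc(x)
    return x - math.trunc(x)

assert abs(frac(1.3) - 0.3) < 1e-12
assert abs(frac(-1.3) + 0.3) < 1e-12      # frac keeps the sign for x < 0
assert math.ceil(0.4) == 1
assert math.floor(1.8) == 1
assert math.trunc(-10.7) == -10           # truncation towards 0
assert math.floor(-10.7) == -11           # floor and trunc differ for x < 0
assert round(-10.7) == -11
```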

> plot(frac(x),x=-3..3,discont=true);
> plot(ceil(x),x=-3..3,discont=true);
> plot(floor(x),x=-3..3,discont=true);
See Fig. 1.8.


Fig. 1.8. y = frac(x), y = ceil(x) and y = floor(x) (from left to right)

> plot(trunc(x),x=-3..3,discont=true);
> plot(round(x),x=-3..3,discont=true);
Define a new piecewise function.
> fractional:=x->x-floor(x):
> plot(fractional(x),x=-3..3,discont=true);
See Fig. 1.9.


Fig. 1.9. y = trunc(x), y = round(x) and y = fractional(x) (from left to right)

1.8.7 Sequences and printf
A pair of brackets encloses a list, a finite sequence, or a vector. Define a list aa.
> aa:=[1,3,5,7,-2,-4,-6];
aa := [1, 3, 5, 7, −2, −4, −6]
> pointplot([seq([i,aa[i]],i=1..7)]);
See the left plot in Fig. 1.10. To obtain a line graph, use listplot.


Fig. 1.10. A pointplot (left) and a listplot (right) of a sequence

Here is how to print a list of characters. We use a do loop. For each i, 1 ≤ i ≤ 7, the task given between do and end do is executed. Note that od is a short form of end do.
> for i from 1 to 7 do
> printf("%c","(");
> printf("%d",aa[i]);
> printf("%c",")" );
> printf("%c"," " );
> od;
(1) (3) (5) (7) (-2) (-4) (-6)
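The same loop can be written in any language with C-style formatting; a Python sketch (illustrative only) that builds the identical output string character by character:

```python
aa = [1, 3, 5, 7, -2, -4, -6]

# reproduce "(1) (3) (5) (7) (-2) (-4) (-6) " exactly as the Maple
# do loop prints it: "%c", "%d", "%c", "%c" in turn
out = ""
for v in aa:
    out += "(" + str(v) + ")" + " "

print(out)
```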

The Maple command printf is used to display expressions. It corresponds to the function of the same name in the C programming language. It displays an expression using the formatting specifications placed inside " ", such as %c for a single character or %d for an integer. A blank space produces a blank. To display a single symbol such as a parenthesis, place it inside " ".
1.8.8 Linear algebra
First, we need to load the package for linear algebra.
> with(linalg):
> A:=matrix([[2/4,1/4,1/4],[2/6,3/6,1/6],[1/5,2/5,2/5]]);
A := [[1/2, 1/4, 1/4], [1/3, 1/2, 1/6], [1/5, 2/5, 2/5]]
The entries of each row sum to 1. This is an example of a stochastic matrix. Find the determinant, the rank and the trace of A.
> det(A);
1/20

> rank(A);
3
> trace(A);
7/5
Here is the characteristic polynomial of A.
> charpoly(A,x);
x³ − (7/5) x² + (9/20) x − 1/20
Find the eigenvalues of A. The symbol I denotes the complex number √−1.
> eigenvals(A);
1, 1/5 − (1/10) I, 1/5 + (1/10) I
We consider row operations and find row eigenvectors. Note that the column eigenvectors of the transpose of A are the row eigenvectors of A.
> B:=transpose(A):
The following gives the eigenvalues with their multiplicities and eigenvectors.
> Eigen:=eigenvectors(B);
Eigen := [1, 1, {[7/5, 3/2, 1]}], [1/5 + (1/10) I, 1, {[−1/3 + (2/3) I, −2/3 − (2/3) I, 1]}], [1/5 − (1/10) I, 1, {[−1/3 − (2/3) I, −2/3 + (2/3) I, 1]}]
Find the Perron–Frobenius eigenvector by choosing the positive eigenvector.
> Positive_Vector:=Eigen[1];
Positive_Vector := [1, 1, {[7/5, 3/2, 1]}]
> Positive_Vector[3];
{[7/5, 3/2, 1]}
> u:=Positive_Vector[3][1];
u := [7/5, 3/2, 1]
> a:=add(u[i],i=1..3);
a := 39/10
Find the normalized Perron–Frobenius eigenvector v.
> v:=evalm((1/a)*u);
v := [14/39, 5/13, 10/39]

This may be a row vector or a column vector. In the following v is regarded as a row vector.
> evalm(v&*A);
[14/39, 5/13, 10/39]
So v is an eigenvector corresponding to the eigenvalue 1.
> evalf(%);
[.3589743590, .3846153846, .2564102564]
> evalm(A^10):
> evalf(%);
[.3589743038 .3846155735 .2564101227]
[.3589742237 .3846153940 .2564103823]
[.3589746391 .3846151061 .2564102548]
Note that all the rows are almost equal to v. Here is how to obtain the inner product of two vectors.
> w1:=vector([4,2,1]);
w1 := [4, 2, 1]
> w2:=vector([1,-2,-3]);
w2 := [1, −2, −3]
Calculate the inner product of w1 and w2.
> innerprod(w1,w2);
−3
1.8.9 Fourier series
Compute the Fourier coefficients b_n, −N ≤ n ≤ N, of f(x) = 1/4 − |x − 1/2|.
> with(plots):
> f:=x->1/4-abs(x-1/2);
f := x → 1/4 − |x − 1/2|

> plot(f(x),x=0..1);
See Fig. 1.11.
> int(f(x)*exp(-2*Pi*I*n*x),x=0..1);
(−π n I − 2 + (e^(π n I))² π n I − 2 (e^(π n I))² + 4 e^(π n I)) / (8 (e^(π n I))² π² n²)
> subs(exp(Pi*I*n)=(-1)^n,%);
(−π n I − 2 + ((−1)^n)² π n I − 2 ((−1)^n)² + 4 (−1)^n) / (8 ((−1)^n)² π² n²)
> simplify(%);
−((−1)^(−2n) π n I + 2 (−1)^(−2n) − π n I + 2 + 4 (−1)^(1−n)) / (8 π² n²)
> subs({(-1)^(-2*n)=1,(-1)^(1-n)=-(-1)^n},%);
−(4 − 4 (−1)^n) / (8 π² n²)
> b[n]:=simplify(%);
b_n := (−1 + (−1)^n) / (2 π² n²)

> N:=5:
Increase N if a more detailed graph is needed.
> for n from 1 to N do
> b[n]:=(-1+(-1)^n)/(2*Pi^2*n^2):
> od:
> seq(b[n],n=1..N);
−1/π², 0, −1/(9 π²), 0, −1/(25 π²)
Draw the graph of the N-th partial Fourier series and compare it with the graph of f(x). For large values of N the two graphs become almost identical.
> Fourier:=x->2*Re(add(b[n]*exp(2*Pi*n*I*x),n=1..N));
Fourier := x → 2 Re(add(b_n e^(2 I π n x), n = 1..N))
> plot(Fourier(x),x=0..1);
See Fig. 1.11.
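Since the b_n are real, 2 Re(b_n e^{2πinx}) = 2 b_n cos(2πnx), so the partial series can be evaluated with cosines alone. A Python sketch of the convergence check (illustrative, not part of the Maple session; note that b_0 = ∫₀¹ f = 0 here):

```python
import math

def b(n):
    # b_n = (-1 + (-1)^n) / (2 pi^2 n^2), as derived above
    return (-1 + (-1) ** n) / (2 * math.pi ** 2 * n ** 2)

def fourier(x, N=200):
    # real form of 2 * Re( sum b_n exp(2 pi i n x) ); b_0 = 0 for this f
    return 2 * sum(b(n) * math.cos(2 * math.pi * n * x) for n in range(1, N + 1))

def f(x):
    return 0.25 - abs(x - 0.5)

# the tail of the series is O(1/N), so with N = 200 the partial sum
# agrees with f to well under 0.01 everywhere
for x in (0.1, 0.25, 0.5, 0.9):
    assert abs(fourier(x) - f(x)) < 1e-2
```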


Fig. 1.11. y = f (x) (left) and its partial Fourier series (right)

1.8.10 Continued fractions I
Find the continued fraction of an irrational number θ.
> k:=5.:
> theta:=-k/2+sqrt(k^2+4)/2;
θ := 0.192582404
> convert(theta,confrac,cvgts);

[0, 5, 5, 5, 5, 5, 5, 2, 1, 2]
In the preceding result all the partial quotients except the first one should theoretically be equal to k = 5. Because we did not have enough significant digits, the last three values are not correct. By increasing Digits, we obtain more correct partial quotients.
> cvgts;
[0, 1/5, 5/26, 26/135, 135/701, 701/3640, 3640/18901, 7981/41442, 11621/60343, 31223/162128]
> for i from 1 to 5 do
> a[i]:=evalf(cvgts[i]-theta):
> od;
a_1 := −0.192582404
a_2 := 0.0074175960
a_3 := −0.0002747117
a_4 := 0.0000101886
a_5 := −0.3783 10^(−6)
1.8.11 Continued fractions II
Here is an iterative algorithm to construct a number θ with specified partial quotients.
> restart;
> N:=20:
In the following do not use the command fsolve (= float + solve), which would yield a numerical solution.
> solve(x=1/(k+x),x);
−k/2 + √(k² + 4)/2, −k/2 − √(k² + 4)/2
Find the theoretical value.
> x0:=-1/2*k+1/2*(k^2+4)^(1/2);
x0 := −k/2 + √(k² + 4)/2
Generate a sequence of partial quotients a_n.
> k:=1:
> for n from 1 to N do a[n]:=k: od:
List the sequence a_n, 1 ≤ n ≤ N.
> seq(a[n],n=1..N);
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
> continued[1]:=1/a[N]:
> for i from 2 to N do
> continued[i]:=1/(a[N-i+1]+continued[i-1]):
> od:
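Both phenomena, the loss of partial quotients in finite precision and the exact backward evaluation of a finite continued fraction, can be sketched in Python (illustrative only; names are ours). For θ = (√29 − 5)/2 one has 1/θ = 5 + θ, so every partial quotient is 5; double precision keeps the first several correct, just as Maple's default Digits does. Exact rational arithmetic evaluates 2 + [0; 1, 2, 1, 1, 4, 1, 1, 6] to 1264/465, close to e.

```python
import math
from fractions import Fraction

# (a) expand theta = (sqrt(29) - 5)/2 in double precision
theta = (-5 + math.sqrt(29)) / 2
x = theta
quotients = []
for _ in range(6):
    a = math.floor(1 / x)
    quotients.append(a)
    x = 1 / x - a
assert quotients == [5, 5, 5, 5, 5, 5]   # still exact this early

# (b) evaluate a finite continued fraction backwards, exactly
a = [1, 2, 1, 1, 4, 1, 1, 6]
tail = Fraction(0)
for q in reversed(a):
    tail = 1 / (q + tail)
num = 2 + tail
assert num == Fraction(1264, 465)
assert abs(float(num) - math.e) < 1e-4
```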

> theta:=continued[N];
θ := 6765/10946
Calculate |θ − x0|.
> evalf(theta-x0,10);

−0.3 10^(−8)
Now for the second example, we take e = 2.718281828459045 . . . .
> restart;
Generate a sequence of partial quotients a_n.
> N:=8:
> for n from 1 to 3*N do a[n]:=1: od:
> for n from 1 to N do a[3*n-1]:=2*n: od:
List the sequence a_n, 1 ≤ n ≤ 3N.
> seq(a[n],n=1..3*N);
1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, 1, 1, 14, 1, 1, 16, 1
> continued[1]:=1/a[N]:
> for i from 2 to N do
> continued[i]:=1/(a[N-i+1]+continued[i-1]):
> od:
> num:=2+continued[N];
num := 1264/465
> evalf(num-exp(1.0),10);
−.2258 10^(−5)
1.8.12 Statistics and probability
The probability density function of the standard normal variable X is given as follows:
> f:=x->1/sqrt(2*Pi)*exp(-x^2/2);

f := x → e^(−x²/2) / √(2π)
Its integral over the whole real line is equal to 1 since it represents a probability distribution.
> int(f(x),x=-infinity..infinity);
1
> plot(f(x),x=-4..4);
See Fig. 1.2. Find the maximum value of f(x).
> f(0);

(1/2) √2 / √π
> evalf(%);
0.3989422802
Find Pr(−1 ≤ X ≤ 1).
> int(f(x),x=-1.0..1.0);
0.6826894921
Find Pr(−2 ≤ X ≤ 2).
> int(f(x),x=-2.0..2.0);
0.9544997361
Find Pr(−3 ≤ X ≤ 3).
> int(f(x),x=-3.0..3.0);
0.9973002039
Define the error function erf(x) by
erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt .
See Fig. 1.12.


Fig. 1.12. The error function

Define the cumulative distribution function (cdf) of X by Pr(−∞ < X ≤ x), which is expressed in terms of the error function.
> int(f(t), t=-infinity..x);
(1/2) erf((√2/2) x) + 1/2
> plot(1/2*erf(1/2*2^(1/2)*x)+1/2, x=-4..4);
See Fig. 1.3.
In this book we often sketch a pdf f(x) of a random variable based on finitely many sample values, say x₁, . . . , xₙ. Let a = min xᵢ, b = max xᵢ. Then we divide the range [a, b] into subintervals of equal length, each of which is called a bin or an urn, and count the number of times the values xᵢ fall into each bin. Using these frequencies we plot a histogram or a line graph. As we take more sample values and more bins, the plot approximates the theoretical prediction better and better.
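The normal probabilities above can be verified against Python's built-in error function (a sketch; the helper names Phi and prob are ours):

```python
import math

def Phi(x):
    # standard normal cdf via erf, as in the text:
    # Pr(X <= x) = (1/2) erf(x/sqrt(2)) + 1/2
    return 0.5 * math.erf(x / math.sqrt(2)) + 0.5

def prob(a, b):
    return Phi(b) - Phi(a)

assert abs(prob(-1, 1) - 0.6826894921) < 1e-8
assert abs(prob(-2, 2) - 0.9544997361) < 1e-8
assert abs(prob(-3, 3) - 0.9973002039) < 1e-8
# maximum of the density at x = 0
assert abs(1 / math.sqrt(2 * math.pi) - 0.3989422802) < 1e-8
```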


1.8.13 Lattice structures of linear congruential generators
The set of ordered triples of pseudorandom numbers generated by a bad linear congruential generator displays a lattice structure. See Fig. 1.4.
> with(plots):
Define a linear congruential generator LCG(M, a, b, x0).
> M:=2^8:
> a:=137:
> b:=187:
> x[0]:=10:
Choose the number of points in a cube.
> N:=10000:
> for n from 1 to 3*N do
> x[n]:=modp(a*x[n-1]+b,M):
> od:
Rotate the following three-dimensional plot.
> pointplot3d([seq([x[3*i+1],x[3*i+2],x[3*i+3]],i=0..N-1)], axes=boxed,tickmarks=[0,0,0],orientation=[45,55]);
Do the same experiment with M = 2³⁰, a = 65539, b = 0 and x0 = 1.
1.8.14 Random number generators in Maple
Maple uses a linear congruential generator of the form x_{n+1} = a x_n + b (mod M) where a = 427419669081, b = 0, M = 999999999989 and x0 = 1. Note that M is prime.
> M:=999999999989;
Factorize M.
> ifactor(M);
(999999999989)
Is M prime?
> isprime(M);
true
If we want a prime number M close to 10¹², then M = 10¹² − 11 is such a choice. Between 10¹² − 20 and 10¹² + 20 it is the only prime, as seen in the following:
> for i from -20 to 20 do
> if isprime(10^12+i)=true then print(i): fi:
> od;
−11
The following commands all give the same answer:
> 100 mod 17; modp(100,17); irem(100,17);
15
15
15
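The small generator LCG(2⁸, 137, 187, 10) of Subsect. 1.8.13 is easy to reproduce outside Maple; the consecutive triples it emits are exactly the points that pointplot3d arranges into planes. A Python sketch (illustrative only):

```python
# LCG(M, a, b, x0) with the deliberately bad parameters from the text
M, a, b = 2 ** 8, 137, 187
x = 10
seq = []
for _ in range(9):
    x = (a * x + b) % M
    seq.append(x)

assert seq[0] == 21 and seq[1] == 248   # (137*10 + 187) mod 256, etc.

# consecutive non-overlapping triples, as plotted in Fig. 1.4
triples = [tuple(seq[i:i + 3]) for i in range(0, 9, 3)]
assert len(triples) == 3 and all(len(t) == 3 for t in triples)
```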


Find the multiplicative inverse of 10 modulo 17. Note that 17 is prime.
> 1/10 mod 17; modp(1/10,17); irem(1/10,17);
12
12
Error, wrong number (or type) of parameters in function irem

In the preceding three commands the third one does not work. The command rand() generates random numbers between 1 and M − 1.
> rand(); rand(); rand();
427419669081
321110693270
343633073697
Let us check the formula used by Maple.
> a:=427419669081:
> M:=999999999989:
> x[0]:=1:
> for n from 0 to 2 do
> x[n+1]:=modp(a*x[n],M):
> od;
x1 := 427419669081
x2 := 321110693270
x3 := 343633073697
Here is how to generate random numbers between L and N:
> restart;
> L:=3:
> N:=7:
> ran:=rand(L..N):
> ran(); ran(); ran();
4
3
5
1.8.15 Monte Carlo method
As the first example of the Monte Carlo method we estimate the average of a function f(x) on an interval by choosing sample points randomly.
> f:= x-> sin(x):
Choose an interval [a, b].
> a:=0: b:=Pi:
Integrate the given function.
> int(f(x),x=a..b);
2
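The recurrence x_{n+1} = a x_n (mod M) from Subsect. 1.8.14 can be reproduced in a few lines of Python, confirming the three rand() values quoted above (a sketch, not part of the Maple session):

```python
# Maple's rand() uses x_{n+1} = a * x_n mod M with these constants
a = 427419669081
M = 999999999989          # = 10**12 - 11, prime
x = 1
outputs = []
for _ in range(3):
    x = a * x % M
    outputs.append(x)

# matches the three values returned by rand() in the text
assert outputs[0] == 427419669081
assert outputs[1] == 321110693270
assert outputs[2] == 343633073697
```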


Choose the number of points where the given function is evaluated.
> SampleSize:=1000:
> Integ:=0:
> for i from 1 to SampleSize do
> ran:=evalf(rand()/999999999989*(b-a)+a):
> f_value:=f(ran):
> Integ:=Integ + f_value:
> od:
> Approx_Integral:=evalf(Integ/SampleSize*(b-a));
Approx_Integral := 2.019509977
For the second application of the Monte Carlo method, we count the number of points inside the unit circle in the first quadrant and estimate π/4.
> restart:
We have to restart because we need to redefine the function f, which was already used above. If we used a new name g instead of f, there would be no need to restart.
> f:=x-> 1 - floor(x):
Choose the number of points where the given function is evaluated.
> SampleSize:=1000:
> Count:=0:
> for i from 1 to SampleSize do
> ran1:=evalf(rand()/999999999989):
> ran2:=evalf(rand()/999999999989):
> f_value:=f(ran1^2+ran2^2):
> Count:=Count+f_value:
> od:
Compare the following values:
> Approx_Area:=evalf(Count/SampleSize);

Approx_Area := .7800000000
> True_Area:=evalf(Pi/4);
True_Area := .7853981635
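Both Monte Carlo experiments translate directly to Python's random module (a sketch with a fixed seed for reproducibility; the variable names are ours, and with only a few thousand samples the estimates are approximate, exactly as in the Maple run above):

```python
import math
import random

random.seed(2005)
N = 10000

# (a) average of sin on [0, pi] times the interval length: estimates
# the integral of sin over [0, pi], which equals 2
total = sum(math.sin(random.uniform(0, math.pi)) for _ in range(N))
approx_integral = total / N * math.pi
assert abs(approx_integral - 2.0) < 0.1

# (b) fraction of random points of the unit square inside the quarter
# circle: estimates pi/4
count = sum(1 for _ in range(N)
            if random.random() ** 2 + random.random() ** 2 < 1)
approx_area = count / N
assert abs(approx_area - math.pi / 4) < 0.05
```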

2 Invariant Measures

We introduce a discrete dynamical system in terms of a measure preserving transformation T defined on a probability space. In other words, we consider the semigroup {T^n : n ∈ N} or the group {T^n : n ∈ Z} according as T is non-invertible or invertible. The approach can handle not only purely mathematical concepts but also physical phenomena in nature and data compression in information theory. Examples of such measure preserving transformations include an irrational translation modulo 1, multiplication by 2 modulo 1, the beta transformation, the Gauss transformation, a toral automorphism, the baker's transformation and the shift transformation on the set of infinite binary sequences. A method of classifying them is also introduced.

2.1 Invariant Measures
Definition 2.1. (i) Let (X₁, µ₁) and (X₂, µ₂) be measure spaces. We say that a mapping T : X₁ → X₂ is measurable if T⁻¹(E) is measurable for every measurable subset E ⊂ X₂. The mapping is measure preserving if µ₁(T⁻¹E) = µ₂(E) for every measurable subset E ⊂ X₂. When X₁ = X₂ and µ₁ = µ₂, we call T a transformation.
(ii) If a measurable transformation T : X → X preserves a measure µ, then we say that µ is T-invariant (or invariant under T). If T is invertible and if both T and T⁻¹ are measurable and measure preserving, then we call T an invertible measure preserving transformation.
Theorem 2.2. Let (X, A, µ) be a measure space. The following statements are equivalent:
(i) A transformation T : X → X preserves µ.
(ii) For any f ∈ L¹(X, µ) we have
∫_X f(x) dµ = ∫_X f(T(x)) dµ .
(iii) Define a linear operator U_T on L^p(X, µ) by (U_T f)(x) = f(Tx). Then U_T is norm-preserving, i.e., ||U_T f||_p = ||f||_p.
Proof. First we show the equivalence of (i) and (ii). To show (ii) ⇒ (i), take f = 1_E. Then
µ(E) = ∫_X f(x) dµ = ∫_X f(T(x)) dµ = ∫_X 1_{T⁻¹E}(x) dµ = µ(T⁻¹E) .
To show (i) ⇒ (ii), observe first that a complex-valued measurable function f can be written as a sum
f = f₁ − f₂ + i (f₃ − f₄)
where i = √−1 and each f_j is real, nonnegative and measurable. Thus we may assume that f is real-valued and f ≥ 0. If f = 1_E for some measurable subset E, then
∫_X f(x) dµ = µ(E) = µ(T⁻¹E) = ∫_X 1_{T⁻¹E}(x) dµ = ∫_X f(T(x)) dµ .
By linearity the same relation holds for a simple function f. Now for a general nonnegative function f ∈ L¹ choose an increasing sequence of simple functions s_n ≥ 0 converging to f pointwise. Then {s_n ∘ T}, n ≥ 1, is an increasing sequence and it converges to f ∘ T pointwise. Hence, by the Monotone Convergence Theorem,
∫_X f(Tx) dµ = lim_{n→∞} ∫_X s_n(Tx) dµ = lim_{n→∞} ∫_X s_n(x) dµ = ∫_X f(x) dµ .
The proof of the equivalence of (i) and (iii) is almost identical.

Example 2.3. (Irrational translations modulo 1) Let X = [0, 1). Consider
T x = x + θ (mod 1)
for an irrational number 0 < θ < 1. Since T preserves the one-dimensional length, Lebesgue measure is invariant under T. See the left graph in Fig. 2.1.
Example 2.4. (Multiplication by 2 modulo 1) Let X = [0, 1). Consider
T x = 2x (mod 1) .
Then T preserves Lebesgue measure. Even though the map doubles the length of an interval I, its inverse image has two pieces in general, each of which has half the length of I. When we add them, the sum equals the original length of I. For a generalization, see Ex. 2.11.
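The length-halving argument for T x = 2x (mod 1) can be seen numerically: for a fine grid of points, the fraction whose image lands in [0, c) stays close to c, because the preimage of [0, c) has total length c. A Python sketch (illustrative only):

```python
# T x = 2x (mod 1) preserves Lebesgue measure: for a fine grid of points,
# the fraction landing in [0, c) after one step stays close to c.
N = 100000
grid = [(i + 0.5) / N for i in range(N)]
for c in (0.1, 0.37, 0.5, 0.9):
    hits = sum(1 for x in grid if (2 * x) % 1 < c)
    assert abs(hits / N - c) < 1e-3
```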

Fig. 2.1. The translation by θ = √2 − 1 modulo 1 (left) and a piecewise linear transformation (right)

Example 2.5. (A piecewise linear mapping) Let X = [0, 1). Define
T x = 2x (mod 1) for 0 ≤ x < 1/2 , and T x = 4x (mod 1) for 1/2 ≤ x < 1 .
Then T preserves Lebesgue measure since the inverse image of [0, a] consists of three intervals of lengths a/2, a/4 and a/4. See the right graph in Fig. 2.1.
Example 2.6. (The logistic transformation) Let X = [0, 1]. Consider
T x = 4x(1 − x) .
The invariant probability density function of T is given by
ρ(x) = 1 / (π √(x(1 − x))) ,

which is unbounded but has its integral equal to 1. See Fig. 2.2. The same density function ρ(x) is invariant under transformations obtained from the Chebyshev polynomials. See Maple Programs 2.6.1, 2.6.2.
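The invariance of ρ can be observed empirically: a long orbit of the logistic map spends a fraction of its time in an interval [a, b] close to ∫ₐᵇ ρ(x) dx, which equals (1/π)[arcsin(2x − 1)] evaluated between the endpoints, or equivalently (2/π)[arcsin √x]. A Python sketch (illustrative; the seed 0.1234 is an arbitrary choice of ours):

```python
import math

# iterate the logistic map and compare the orbit's empirical distribution
# with the arcsine density rho(x) = 1/(pi sqrt(x(1-x)))
x = 0.1234
N = 200000
count = 0
for _ in range(N):
    x = 4 * x * (1 - x)
    if 0.4 <= x <= 0.6:
        count += 1

# exact invariant measure of [0.4, 0.6] via the antiderivative (2/pi) asin(sqrt(x))
expected = (math.asin(math.sqrt(0.6)) - math.asin(math.sqrt(0.4))) * 2 / math.pi
assert abs(count / N - expected) < 0.05
```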


Fig. 2.2. y = 4x(1 − x) (left) and y = ρ(x) (right)


Example 2.7. (The beta transformation) Take β = (√5 + 1)/2 = 1.618 · · · , which satisfies β² − β − 1 = 0. Hence β − 1 = 1/β = (√5 − 1)/2, which is called the golden ratio. See Fig. 2.3, where two similar rectangles can be found.


Fig. 2.3. The golden ratio: two rectangles are similar

Let X = [0, 1). Consider the so-called β-transformation
T x = βx (mod 1) .
Its invariant probability density is given by
ρ(x) = β³/(1 + β²) for 0 ≤ x < 1/β , and ρ(x) = β²/(1 + β²) for 1/β ≤ x < 1 .
2.6 Maple Programs
2.6.1 The logistic transformation
> g1:=plot(T(x),x=0..1,y=0..1,axes=boxed):
> g2:=listplot([[1/2,0],[1/2,1]],color=green):
> display(g1,g2);
Define a probability density function ρ(x).
> rho:= x->1/(Pi*sqrt(x*(1-x)));
ρ := x → 1/(π √(x (1 − x)))
Since ρ(x) → +∞ as x → 0+ or x → 1−, we draw the graph y = ρ(x) only on the interval [ε, 1 − ε] for some small ε > 0. See Fig. 2.2.


plot(rho(x),x=0.01..0.99);

Check whether ∫₀¹ ρ(x) dx = 1.
> int(rho(x),x=0..1);
1
Find the inverse images of b, where b is a point on the positive y-axis.
> a:=solve(T(x)=b,x);
a := 1/2 + (1/2)√(1 − b), 1/2 − (1/2)√(1 − b)
In the following definition note that a1 ≤ a2.
> a1:=1/2-1/2*sqrt(1-b);
a1 := 1/2 − (1/2)√(1 − b)
> a2:=1/2+1/2*sqrt(1-b);
a2 := 1/2 + (1/2)√(1 − b)
Find the inverse images of b = 2/3 under T.
> b:=2/3:
> a1;
1/2 − (1/6)√3
> a2;
1/2 + (1/6)√3
> g5:=listplot([[a1,0],[a1,b]],color=green):
> g6:=listplot([[a2,0],[a2,b]],color=green):
> g7:=plot(b,x=0..1,color=green,tickmarks=[2,2]):
> display(g1,g5,g6,g7);
See Fig. 2.23.


Fig. 2.23. Two inverse images of b = 2/3 under the logistic transformation

From now on we treat b again as a symbol.

> b:='b';
b := b
> a1;
1/2 − (1/2)√(1 − b)
> a2;
1/2 + (1/2)√(1 − b)
Find the measure of T⁻¹([b, 1]).
> measure1:=int(rho(x),x=a1..a2);
measure1 := 2 arcsin(√(1 − b)) / π
Find the measure of [b, 1].
> measure2:=int(rho(x),x=b..1);
measure2 := −(1/π) (−π + 2 arcsin(2 b − 1))/2
Compare the two values.
> d:=measure1-measure2;
d := 2 arcsin(√(1 − b))/π + (1/π) (−π + 2 arcsin(2 b − 1))/2
Assume 0 < b < 1. When we make assumptions about a variable, it is printed with an appended tilde.
> assume(b < 1, b > 0):
To show d = 0 we show sin(πd) = 0.
> dd:=simplify(Pi*d);
dd := 2 arcsin(√(1 − b~)) − π/2 + arcsin(2 b~ − 1)
> expand(sin(dd)):
> simplify(%);
0
This shows that d is an integer. Since |d| < 1, we conclude d = 0.
2.6.2 Chebyshev polynomials
Consider the transformations f : [0, 1] → [0, 1] obtained from Chebyshev polynomials T : [−1, 1] → [−1, 1] with deg T ≥ 2 after normalization of the domains and ranges. When deg T = 2, we obtain the logistic transformation. The transformations share the same invariant density function.
> with(plots):
We need a package for orthogonal polynomials.
> with(orthopoly);


[G, H, L, P, T, U]
The letter 'T' comes from the name of the Russian mathematician P. L. Chebyshev, whose name used to be transliterated as Tchebyshev, Tchebycheff or Tschebyscheff; that is why 'T' stands for Chebyshev polynomials. Let w(x) = (1 − x²)^(−1/2) and let H = L²([−1, 1], w dx) be the Hilbert space with the inner product given by (u, v) = ∫₋₁¹ u(x)v(x)w(x) dx for u, v ∈ H. The Chebyshev polynomials are orthogonal in H. Let us check it! Choose integers m ≠ n. Take small integers for speedy calculation.
> m:=5:
> n:=12:
> int(T(m,x)*T(n,x)/sqrt(1-x^2),x=-1..1);
0
Chebyshev polynomials are defined on [−1, 1], and therefore we normalize the domain so that the induced transformations are defined on [0, 1]. Choose the degree of a Chebyshev polynomial.
> Deg:=3;
Define a Chebyshev polynomial.
> Chev:=x->T(Deg,x):
> plot(Chev(x),x=-1..1);
The graph of T is omitted to save space. Normalize the domain and the range of T. In this subsection f denotes the transformation since 'T' is reserved for the Chebyshev polynomial.
> f:=x->expand((Chev(2*x-1)+1)/2):
Now f is a transformation defined on [0, 1].
> f(x);
16 x³ − 24 x² + 9 x
Draw the graph y = f(x).
> plot(f(x),x=0..1);
See Fig. 2.24.


Fig. 2.24. A transformation f deﬁned by a Chebyshev polynomial


Now we prove that the transformations deﬁned by Chebyshev polynomials all share the same invariant density. It is known that T (n, cos θ) = cos(nθ). > T(Deg,cos(theta))-cos(Deg*theta); 4 cos(θ)3 − 3 cos(θ) − cos(3 θ) >

simplify(%);
0
Define a topological conjugacy φ based on the formula in [AdMc]. It will be used as an isomorphism between the two measure preserving transformations f and g. The formula in Ex. 2.24 may be used, too.
> phi:=x->(1+cos(Pi*x))/2;
φ := x → 1/2 + (1/2) cos(π x)
Draw the graph y = φ(x).
> plot(phi(x),x=0..1,y=0..1,axes=boxed);
See the left graph in Fig. 2.25.


Fig. 2.25. The topological conjugacy φ(x) (left) and its inverse (right)

Find the inverse of φ.
> psi:=x->arccos(2*x-1)/Pi;
ψ := x → arccos(2 x − 1) / π
Draw the graph y = ψ(x) = φ⁻¹(x).
> plot(psi(x),x=0..1,y=0..1,axes=boxed);
See the right graph in Fig. 2.25. Check φ(ψ(x)) = x.
> phi(psi(x));
x
Define a transformation g(x) = ψ(f(φ(x))).
> g:=x->psi(f(phi(x)));

g := x → ψ(f(φ(x)))


Draw the graph y = g(x).
> plot(g(x),x=0..1,y=0..1,axes=boxed):
See Fig. 2.26.


Fig. 2.26. A transformation g conjugate to f through φ

It is obvious that g preserves Lebesgue measure dx on X = [0, 1]. Find the invariant probability measure µ for f. The conjugacy gives the commutative diagram
(X, dx) --g--> (X, dx)
   |φ             |φ
   v              v
 (X, µ) --f-->  (X, µ)
Note that the inverse image of [φ(x), 1] under φ is [0, x], which has Lebesgue measure equal to x. For µ to be an f-invariant measure, it must satisfy µ([φ(x), 1]) = x for 0 ≤ x ≤ 1. Thus µ([y, 1]) = ψ(y) and µ([0, y]) = 1 − ψ(y) for 0 ≤ y ≤ 1.
> -diff(psi(y),y);
1 / (π √(−y² + y))
Hence
ρ(y) = d/dy (1 − ψ(y)) = 1 / (π √(y(1 − y))) .

2.6.3 The beta transformation
Find the invariant measure of the β-transformation.
> with(plots):
> beta:=(1+sqrt(5))/2:
Define the transformation T.


> T:=x-> frac(beta*x):
Define the invariant probability density function ρ(x).
> rho:=x->piecewise(0<=x and x<1/beta, beta^3/(1+beta^2), 1/beta<=x and x<1, beta^2/(1+beta^2)):
> plot(T(x),x=0..1,y=0..1,axes=boxed);
See the left graph in Fig. 2.4.
> b0:=T(1);
b0 := −1/2 + (1/2)√5
> plot(rho(x),x=0..1);
See the right graph in Fig. 2.4.
> int(rho(x),x=0..1);
(15 + 7√5) / ((5 + √5)(2 + √5))
> simplify(%);
1
When we make assumptions about a variable, a tilde is appended.
> assume(b >= 0, b <= 1):
> b;
b~
> m1:=beta^3/(1+beta^2):
> m2:=beta^2/(1+beta^2):
Find the cumulative distribution function (cdf) for the invariant measure. In the following, cdf(b) is the measure of the interval [0, b].
> cdf:=piecewise( b<1/beta, m1*b, b>=1/beta, m1*(1/beta)+m2*(b-1/beta) ):
> plot(cdf(b),b=0..1);
See Fig. 2.27.
Find the inverse image (or inverse images) of a point b on the y-axis.
> a1:=solve(beta*x=b,x);
a1 := 2 b~ / (1 + √5)
> a2:=solve(beta*x-1=b,x);
a2 := 2 (1 + b~) / (1 + √5)
In the following, cdf_inverse(b) is the measure of the inverse image of [0, b].
> cdf_inverse:=piecewise( b<b0, m1*a1+m2*(a2-1/beta), b>=b0, m1*a1+m2*(1-1/beta) ):
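The identity cdf = cdf_inverse that the following commands verify symbolically can also be checked numerically. A Python sketch of the same piecewise formulas (the names mirror the Maple variables; this is an illustration, not part of the session):

```python
import math

beta = (1 + math.sqrt(5)) / 2
m1 = beta ** 3 / (1 + beta ** 2)     # density value on [0, 1/beta)
m2 = beta ** 2 / (1 + beta ** 2)     # density value on [1/beta, 1)
b0 = beta - 1                        # = 1/beta = T(1)

def cdf(b):
    # measure of [0, b] under the invariant density
    return m1 * b if b < 1 / beta else m1 / beta + m2 * (b - 1 / beta)

def cdf_inverse(b):
    # measure of T^{-1}([0, b]) for T x = beta x (mod 1)
    a1 = b / beta                    # preimage on the first branch
    a2 = (1 + b) / beta              # preimage on the second branch
    if b < b0:
        return m1 * a1 + m2 * (a2 - 1 / beta)
    return m1 * a1 + m2 * (1 - 1 / beta)

for b in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(cdf(b) - cdf_inverse(b)) < 1e-12
```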

> plot(cdf_inverse(b),b=0..1);
> simplify(cdf-cdf_inverse);
0
This shows that ρ(x) is invariant under T.
2.6.4 The baker's transformation
Using mathematical pointillism we draw images of a rectangle under the iterates of the baker's transformation. See Fig. 2.8.
> with(plots):
Define the baker's transformation.
> T:=(x,y)->(frac(2*x), 0.5*y+0.5*trunc(2.0*x));
T := (x, y) → (frac(2 x), 0.5 y + 0.5 trunc(2.0 x))
> S:=4000:
Choose a starting point of an orbit of length S.
> seed[0]:=(sqrt(3.0)-1,evalf(Pi-3)):
Generate S points evenly scattered in the unit square.
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the left plot in Fig. 2.28. Generate S points evenly scattered in C = {(x, y) : 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1}.
> for i from 1 to S do
> image0[i]:=(seed[i][1]/2,seed[i][2]): od:
> pointplot({seq([image0[i]],i=1..S)},symbolsize=1);
See the right plot in Fig. 2.28. Find T(C).
> for i from 1 to S do image1[i]:=T(image0[i]): od:
> pointplot({seq([image1[i]],i=1..S)},symbolsize=1);
See the first graph in Fig. 2.8.


Fig. 2.28. Evenly scattered points in the unit square (left) and a rectangle C (right)

Find T²(C).
> for i from 1 to S do image2[i]:=T(T(image0[i])): od:
> pointplot({seq([image2[i]],i=1..S)},symbolsize=1);
See the second graph in Fig. 2.8. Find T³(C).
> for i from 1 to S do image3[i]:=T(T(T(image0[i]))): od:
> pointplot({seq([image3[i]],i=1..S)},symbolsize=1);
See the third graph in Fig. 2.8. In the preceding three do loops, if we want to save computing time and memory, we may repeatedly use seed[i]:=T(seed[i]): and plot seed[i], 1 ≤ i ≤ S. See Maple Program 2.6.5.
2.6.5 A toral automorphism
Find the successive images of D = {(x, y) : 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1/2} under the Arnold cat mapping.
> with(plots):
Define a toral automorphism.
> T:=(x,y)->(frac(2*x+y),frac(x+y));
> seed[0]:=(evalf(Pi-3),sqrt(2.0)-1):
> S:=4000:
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:
Find S points in D.
> for i from 1 to S do
> seed[i]:=(seed[i][1]/2,seed[i][2]/2): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See Fig. 2.29. Find T(D).
> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the first plot in Fig. 2.9.
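The cat map comes from the integer matrix [[2, 1], [1, 1]] with determinant 1, so it is area preserving and invertible on the torus, with inverse matrix [[1, −1], [−1, 2]]. A quick Python round-trip check, independent of the Maple session (illustrative; the helper names are ours):

```python
def T(x, y):
    # Arnold cat map on the torus
    return ((2 * x + y) % 1, (x + y) % 1)

def T_inv(x, y):
    # inverse cat map, from the inverse matrix [[1, -1], [-1, 2]]
    return ((x - y) % 1, (-x + 2 * y) % 1)

pts = [(0.125, 0.375), (0.71875, 0.03125), (0.5, 0.25)]
for x, y in pts:
    u, v = T(x, y)
    xx, yy = T_inv(u, v)
    assert abs(xx - x) < 1e-12 and abs(yy - y) < 1e-12
```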


Fig. 2.29. Evenly scattered points in the square D

Find T²(D).
> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the second plot in Fig. 2.9. Find T³(D).
> for i from 1 to S do seed[i]:=T(seed[i]): od:
> pointplot({seq([seed[i]],i=1..S)},symbolsize=1);
See the third plot in Fig. 2.9.
2.6.6 Modified Hurwitz transformation
> with(plots):
Define two constants α and β.
> alpha:=(sqrt(5)-1)/2: beta:=(3-sqrt(5))/2:
Define a new command.
> bracket:=x->piecewise( 0 x, -floor(-x + beta) );
Define the transformation T.
> T:=x-> 1/x - bracket(1/x);
Define a probability density where C is an undetermined normalizing constant.
> rho:=x->piecewise(-alpha
2.6.7 A typical point of the Bernoulli measure
> N_decimal:=10000:
> N_decimal*log[2.](10);
33219.28095
> N_binary:=33300:
This is the number of binary significant digits needed to produce a decimal number with N_decimal significant decimal digits.
> evalf(2^(-N_binary),10);

0.5025096306 10^(−10024)
> for j from 1 to N_binary do d[j]:=ceil(ran()/3): od:

Find the number of the bit '1' in the binary sequence {d_j}.
> num_1:=add(d[j],j=1..N_binary);
num_1 := 24915
The following number should be close to 3/4 by the Birkhoff Ergodic Theorem. See Chap. 3 for more information.
> evalf(num_1 / N_binary);
0.7481981982
Convert the binary number 0.d1 d2 d3 . . . into a decimal number.
> Digits:=N_decimal:
> M:=N_binary/100;
M := 333


To compute Σ_{s=1}^{33300} d_s 2^(−s), calculate 100 partial sums first and then add them all. In the following partial_sum[k] is stored as a quotient of two integers.
> for k from 1 to 100 do
> partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M): od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..100));
x0 := 0.964775085321818215215 . . .
This is a typical point for the Bernoulli measure represented on [0, 1].
2.6.8 A typical point of the Markov measure
Find a typical binary Markov sequence x0 defined by a stochastic matrix P. Consult Sect. 2.3 and see Fig. 2.31.
> with(linalg):
> P:=matrix([[1/3,2/3],[1/2,1/2]]);
P := [[1/3, 2/3], [1/2, 1/2]]


Fig. 2.31. The graph corresponding to P

Choose the positive vector in the following:
> eigenvectors(transpose(P));
[1, 1, {[1, 4/3]}], [−1/6, 1, {[−1, 1]}]
The Perron–Frobenius eigenvector is given by the following:
> v:=[3/7,4/7]:
> evalm(v&*P);
[3/7, 4/7]
Observe that the rows of Pⁿ converge to v. See Theorem 5.21.
> evalf(evalm(P&^20),10);
[0.4285714286 0.5714285714]
[0.4285714286 0.5714285714]

> 3/7.;
0.4285714286
> 4/7.;
0.5714285714
Construct a typical binary Markov sequence of length N_binary.
> ran0:=rand(0..2): ran1:=rand(0..1):
> N_decimal:=10000:
> N_decimal*log[2.](10);
33219.28095
> N_binary:=33300;

N_binary := 33300
> evalf(2.0^(-N_binary),5);

0.50251 10^(−10024)
As in the previous case this shows that the necessary number of binary digits is 33300 when we use 10000 significant decimal digits.
> d[0]:=1:
> for j from 1 to N_binary do
> if d[j-1]=0 then d[j]:=ceil(ran0()/2):
> else d[j]:=ran1(): fi
> od:
Count the number of the binary symbol 1.
> num_1:=add(d[j],j=1..N_binary);
num_1 := 19111
The following is an approximate measure of the cylinder set [1]₁, which should be close to 4/7 = 0.5714 . . . by the Birkhoff Ergodic Theorem in Chap. 3.
> evalf(num_1 / N_binary,10);
0.5739039039
Convert the binary number 0.d1 d2 d3 . . . into a decimal number.
> Digits:=N_decimal;
Digits := 10000
In computing add(d[s]/2^s,s=1..33300), we first divide it into 10 partial sums, calculate each partial sum separately, and finally add them all.
> M:=N_binary/10;
M := 3330
In the following partial_sum[k] is stored as a quotient of two integers.
> for k from 1 to 10 do
> partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M): od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..10));
x0 := 0.58976852039782534571049721 . . .
This is a typical point for the Markov measure represented on [0, 1].
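The construction can be mirrored in Python: from state 0 move to 1 with probability 2/3, from state 1 with probability 1/2, matching the rows of P. The long-run frequency of the symbol 1 approaches the stationary probability 4/7 (a sketch with a fixed seed, illustrative only):

```python
import random

random.seed(7)

# simulate the Markov chain with P = [[1/3, 2/3], [1/2, 1/2]]
N = 50000
d = 1
ones = 0
for _ in range(N):
    if d == 0:
        d = 1 if random.random() < 2 / 3 else 0
    else:
        d = 1 if random.random() < 1 / 2 else 0
    ones += d

# stationary probability of the symbol 1 is 4/7 = 0.5714...
assert abs(ones / N - 4 / 7) < 0.02
```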


2 Invariant Measures

2.6.9 Coding map for the logistic transformation
Let E0 = [0, 1/2), E1 = [1/2, 1]. For x ∈ [0, 1] define a binary sequence (b_n) by T^{n−1}x ∈ E_{b_n}. We identify (b1, b2, . . .) with ψ(x) = Σ_n b_n 2^{−n} and sketch the graph of ψ. Consult Sect. 2.5 for the details.
> with(plots):
> Digits:=100:
Choose the logistic transformation.
> T:=x-> 4*x*(1-x):
Choose the number of points on the graph.
> N:=2000:
> M:=12:
Choose a starting point of an orbit.
> seed[0]:=evalf(Pi-3):
Find φ(x).
> for n from 1 to N+M-1 do
> seed[n]:=T(seed[n-1]):
> if seed[n] < 1/2 then b[n]:=0: else b[n]:=1: fi:
> od:
Find ψ(x).
> for n from 1 to N do
> psi[n]:=add(b[n+i-1]/2^i,i=1..M): od:
We don't need many Digits now.
> Digits:=10:
> pointplot([seq([seed[n],psi[n]],n=1..N)],symbolsize=1);
See Fig. 2.32.

Fig. 2.32. y = ψ(x) for the logistic transformation
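The truncated coding map can be sketched in Python as well. Note a caveat the Maple program handles with `Digits:=100`: double-precision orbits of the logistic map diverge from the true orbit after roughly 50 iterates, so this sketch only reproduces the qualitative picture of Fig. 2.32, not the exact orbit.

```python
import math

# Logistic map T(x) = 4x(1 - x); symbols b[n] = 0 if T^n x < 1/2, else 1.
T = lambda x: 4.0 * x * (1.0 - x)

N, M = 2000, 12            # number of plotted points, bits kept per point
seed = [math.pi - 3.0]     # same starting point as the Maple program
for _ in range(N + M - 1):
    seed.append(T(seed[-1]))
b = [0 if x < 0.5 else 1 for x in seed]

# psi at T^n x, truncated to M binary digits: sum_{i=1}^{M} b[n+i-1] / 2^i
psi = [sum(b[n + i - 1] / 2**i for i in range(1, M + 1)) for n in range(1, N + 1)]

# the pairs (T^n x, psi(T^n x)) sketch the graph y = psi(x) of Fig. 2.32
pairs = list(zip(seed[1:N + 1], psi))
```

By construction the first bit dominates: points with T^n x ≥ 1/2 always land in the upper half of the graph, which is the staircase structure visible in the figure.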

4 The Central Limit Theorem

The Birkhoff Ergodic Theorem may be viewed as a dynamical systems theory version of the Strong Law of Large Numbers presented in Subsect. 1.6.3 on statistical laws. The Central Limit Theorem also has its dynamical systems theory version. More precisely, under suitable conditions on a function f and a transformation T the sequence {f ∘ T^i : i ≥ 0} becomes asymptotically statistically independent, and (1/n) Σ_{i=0}^{n−1} f(T^i x) is asymptotically normally distributed after normalization. One of the conditions on T is a property called mixing, which is stronger than ergodicity. The speed of correlation decay is also presented later in the chapter.

4.1 Mixing Transformations

When we have two points on the unit interval and consider their orbits under an irrational translation mod 1, their relative positions are not changed at all. In other words, they are not mixed.

Definition 4.1. Let T be a measure preserving transformation of a probability space (X, A, µ). Then T is said to be weak mixing if for every A, B ∈ A

lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |µ(T^{−k}A ∩ B) − µ(A)µ(B)| = 0 .

It is said to be mixing, or strong mixing, if

lim_{n→∞} µ(T^{−n}A ∩ B) = µ(A)µ(B) .

It is easy to see that mixing implies weak mixing and weak mixing implies ergodicity. See Theorem 3.12. For a measure preserving transformation T define an isometry U_T on L²(X, µ) by

U_T f(x) = f(Tx) ,   f ∈ L²(X, µ) .
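The mixing condition can be checked by hand for the doubling map Tx = {2x} on dyadic intervals. The following Python sketch (the choice A = B = [0, 1/2) is ours, for illustration) computes µ(T^{−n}A ∩ B) exactly with rational arithmetic:

```python
from fractions import Fraction

def mu_preimage_cap_B(n):
    """Lebesgue measure of T^{-n}A ∩ B for T x = {2x}, A = B = [0, 1/2).

    T^{-n}[0, 1/2) is the disjoint union of the 2^n intervals
    [k/2^n, k/2^n + 1/2^(n+1)), k = 0, ..., 2^n - 1.
    """
    half = Fraction(1, 2)
    total = Fraction(0)
    for k in range(2**n):
        lo = Fraction(k, 2**n)
        hi = lo + Fraction(1, 2**(n + 1))
        overlap = min(hi, half) - max(lo, Fraction(0))  # intersect with B
        if overlap > 0:
            total += overlap
    return total

# Already for every n >= 1 the measure equals µ(A)µ(B) = 1/4 exactly.
for n in range(1, 7):
    assert mu_preimage_cap_B(n) == Fraction(1, 4)
```

For dyadic intervals the limit in the definition is attained after finitely many steps; for general A, B it only holds asymptotically.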


Let (·, ·) denote the inner product on L²(X, µ). Then

(f, 1) = ∫_X f dµ .

In the following three theorems, only Theorem 4.5 will be proved. Other cases can be proved similarly.

Theorem 4.2. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is ergodic.
(ii) For f, g ∈ L²(X, µ),
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} (U_T^k f, g) = (f, 1)(1, g) .
(iii) For f ∈ L²(X, µ),
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} (U_T^k f, f) = (f, 1)(1, f) .

Theorem 4.3. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is weak mixing.
(ii) For f, g ∈ L²(X, µ),
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, g) − (f, 1)(1, g)| = 0 .
(iii) For f ∈ L²(X, µ),
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, f) − (f, 1)(1, f)| = 0 .
(iv) For f ∈ L²(X, µ),
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} |(U_T^k f, f) − (f, 1)(1, f)|² = 0 .

For the equivalence of (iii) and (iv) we use Lemma 4.4. For its proof consult [Pet],[Wa1]. In the following lemma we give a necessary and sufficient condition for the convergence of the Cesàro sum.


Lemma 4.4. Let {a_n}_{n=1}^∞ be a bounded sequence of nonnegative numbers. The following statements are equivalent:
(i) lim_{n→∞} (1/n) Σ_{k=1}^n a_k = 0 .
(ii) There exists J ⊂ N of density zero, i.e.,
lim_{n→∞} card{k ∈ J : 1 ≤ k ≤ n} / n = 0 ,
such that the subsequence of a_n with n ∉ J converges to 0.
(iii) lim_{n→∞} (1/n) Σ_{k=1}^n a_k² = 0 .

Theorem 4.5. Let (X, µ) be a probability space and let T : X → X be a measure preserving transformation. The following statements are equivalent:
(i) T is mixing.
(ii) For f ∈ L²(X, µ),
lim_{n→∞} (U_T^n f, f) = (f, 1)(1, f) .
(iii) For f, g ∈ L²(X, µ),
lim_{n→∞} (U_T^n f, g) = (f, 1)(1, g) .

Proof. Throughout the proof we will write U in place of U_T for notational simplicity.
(i) ⇒ (ii). Take a simple function f = Σ_{i=1}^k c_i 1_{E_i}, E_i ⊂ X. Then

U^n f = Σ_{i=1}^k c_i 1_{T^{−n}E_i}

and

(U^n f, f) = Σ_{i,j=1}^k c_i c̄_j µ(T^{−n}E_i ∩ E_j) → Σ_{i,j=1}^k c_i c̄_j µ(E_i)µ(E_j) = (f, 1)(1, f) .

For h ∈ L² the Cauchy-Schwarz inequality implies |(h, 1)| ≤ ||h||₂ ||1||₂ = ||h||₂ since µ(X) = 1. Take g ∈ L². Since simple functions are dense in L²,


given ε > 0 there exists a simple function f such that ||f − g||₂ < ε. Let n be an integer such that
|(U^n f, f) − (f, 1)(1, f)| < ε .
Since U is an isometry, ||U^n f − U^n g||₂ = ||f − g||₂ < ε. The Cauchy-Schwarz inequality implies that
|(U^n f, f) − (U^n g, g)| = |(U^n f, f) − (U^n f, g) + (U^n f, g) − (U^n g, g)|
  ≤ |(U^n f, f − g)| + |(U^n f − U^n g, g)|
  ≤ ||U^n f||₂ ||f − g||₂ + ||U^n f − U^n g||₂ ||g||₂
  < ||f||₂ ε + ε ||g||₂ < ε (2||g||₂ + ε)
and
|(f, 1)(1, f) − (g, 1)(1, g)| = | |(f, 1)|² − |(g, 1)|² |
  = ( |(f, 1)| + |(g, 1)| ) | |(f, 1)| − |(g, 1)| |
  ≤ ( ||f||₂ + ||g||₂ ) | (f, 1) − (g, 1) |
  ≤ ( ||f||₂ + ||g||₂ ) | (f − g, 1) |
  ≤ ( ||f||₂ + ||g||₂ ) ||f − g||₂ ≤ (2||g||₂ + ε) ε .
Hence
|(U^n g, g) − (g, 1)(1, g)|
  = |(U^n g, g) − (U^n f, f) + (U^n f, f) − (f, 1)(1, f) + (f, 1)(1, f) − (g, 1)(1, g)|
  ≤ ε(2||g||₂ + ε) + ε + ε(2||g||₂ + ε) .
Letting ε → 0, we conclude that lim_{n→∞} (U^n g, g) = (g, 1)(1, g).
(ii) ⇒ (iii). From (U^n(f + g), f + g) → (f + g, 1)(1, f + g) as n → ∞, we have

(∗)   (U^n f, g) + (U^n g, f) → (f, 1)(1, g) + (g, 1)(1, f) .

Substituting if for f, we obtain
i(U^n f, g) − i(U^n g, f) → i(f, 1)(1, g) − i(g, 1)(1, f) ,
and so

(∗∗)   (U^n f, g) − (U^n g, f) → (f, 1)(1, g) − (g, 1)(1, f) .

Adding (∗) and (∗∗), we have (U^n f, g) → (f, 1)(1, g).
(iii) ⇒ (i). Choose f = 1_A and g = 1_B for measurable subsets A and B. Then U^n f = 1_{T^{−n}A} and

(U^n f, g) = ∫ 1_{T^{−n}A} 1_B dµ = µ(T^{−n}A ∩ B) .

Hence T is mixing.


Theorem 4.6. Let G be a compact abelian group and let T : G → G be an onto endomorphism. Then T is ergodic if and only if T is mixing.

Note that if G is a finite group consisting of more than one element, then T is not ergodic since T leaves invariant the one point set consisting of the identity element of G, which has positive measure. Recall that if an endomorphism T is onto, then it preserves Haar measure. See Ex. 2.11.

Proof. It suffices to show that ergodicity implies mixing. Define U by Uf = f∘T for f ∈ L². Take a character χ of G. Then U^n χ, n ≥ 1, is also a character of G since
(U^n χ)(g₁g₂) = χ(T^n(g₁g₂)) = χ(T^n(g₁)T^n(g₂)) = χ(T^n(g₁))χ(T^n(g₂)) .
Note that either (U^n χ, χ) = 0 or (U^n χ, χ) = 1 for n ≥ 1 since characters are orthonormal. Assume χ ≠ 1. Then U^n χ ≠ χ for any n ≥ 1. If it were not so, put
n₀ = min{n ≥ 1 : U^n χ = χ} .
Then U^{kn₀}χ = χ, and hence
lim_{n→∞} (1/n) Σ_{j=1}^n (U^j χ, χ) ≥ 1/n₀ > 0 ,
which contradicts the ergodicity of T. (Or, we may use the fact that f = χ + Uχ + ··· + U^{n₀−1}χ is T-invariant. Since (f, χ) = 1, f is not constant.) Thus (U^n χ, χ) = 0 for every n ≥ 1.
Similarly, let χ₁, χ₂ be two distinct characters such that χ₁ ≠ 1 and χ₂ ≠ 1. If U^k χ₁ = χ₂ for some k ≥ 1, then
(U^{n+k}χ₁, χ₂) = (U^n χ₂, χ₂) = 0 ,   n ≥ 1
by the previous argument. Thus (U^n χ₁, χ₂) = 0 for sufficiently large n.
Consider a finite linear combination of characters f = Σ_{j=0}^N c_j χ_j where χ₀ = 1. Then for large n
(U^n f, f) = ( Σ_j c_j U^n χ_j , Σ_k c_k χ_k ) = Σ_{j,k} c_j c̄_k (U^n χ_j, χ_k) = c₀c̄₀ .
Since (f, 1) = c₀, we have
lim_{n→∞} (U^n f, f) = (f, 1)(1, f) .
Proceeding as in the proof of Theorem 4.5, and using the fact that the set of all finite linear combinations of characters is dense in L², we can show that (U^n f, f) converges to (f, 1)(1, f) for arbitrary f ∈ L².


Theorem 4.7. A weak mixing transformation has no eigenvalue except 1.

Proof. Let (X, µ) be a probability space and let T : X → X be a weak mixing transformation. Suppose that λ ≠ 1 is an eigenvalue of T, i.e., there exists a nonconstant L²-function f such that f(Tx) = λf(x) with ||f||₂ = 1. Define Uf(x) = f(Tx). Then (U^j f, f) = λ^j (f, f). The Cauchy-Schwarz inequality implies that |(f, 1)|² < (f, f)(1, 1) = 1, where we have the strict inequality because f is not constant. (See Fact 1.12.) Hence
lim_{n→∞} (1/n) Σ_{j=0}^{n−1} |(U^j f, f) − (f, 1)(1, f)| = lim_{n→∞} (1/n) Σ_{j=0}^{n−1} | λ^j − |(f, 1)|² | > 0 ,
which contradicts Theorem 4.3.

Example 4.8. An irrational translation Tx = x + θ (mod 1) is not weak mixing since U_T(e^{2πinx}) = e^{2πinθ} e^{2πinx} for n ∈ Z. Or, we may take f(x) = e^{2πix} and use Theorem 4.3. See Fig. 4.1 and Maple Program 4.4.6 for a simulation with f(x) = x and θ = √3 − 1.

Fig. 4.1. An irrational translation Tx = {x + θ} is not mixing. The inner product (f∘T^n, f) does not converge as n → ∞ (left) but the Cesàro sum converges (right) for f(x) = x and θ = √3 − 1

4.2 The Central Limit Theorem

For an ergodic measure preserving transformation T on a probability space (X, µ), take a measurable function (i.e., a random variable) f on X. Let

(S_n f)(x) = Σ_{i=0}^{n−1} f(T^i x) .

We consider asymptotic statistical distributions of (1/n) S_n f. The mean and the variance of f are given by

µ(f) = ∫_X f(x) dµ

and

∫_X (f(x) − µ(f))² dµ ,

respectively. The symbol µ stands for two things in this chapter: the mean and the measure. Note that f∘T^i has the same probability distribution for every i ≥ 0, i.e., Pr(f∘T^i ∈ E) = Pr(f ∈ E) since

µ((f∘T^i)^{−1}(E)) = µ(T^{−i}(f^{−1}(E))) = µ(f^{−1}(E)) .

Note that the mean of (1/n) S_n f is equal to µ(f) and its variance is given by

∫_X ( (S_n f − nµ(f)) / n )² dµ .

The Central Limit Theorem does not necessarily hold true for general transformations. See [LV],[Vo]. In the following a sufficient condition for a special class of transformations is given.

Theorem 4.9 (The Central Limit Theorem). Let T be a piecewise C¹ and expanding transformation on [0, 1], i.e., there is a partition 0 = a₀ < a₁ < . . . < a_{k−1} < a_k = 1 such that T is C¹ on each [a_{i−1}, a_i] and |T′(x)| ≥ B for some constant B > 1. (At the endpoints of an interval we consider directional derivatives.) Assume that 1/|T′(x)| is a function of bounded variation. Suppose that T is weak mixing with respect to an invariant probability measure µ. Let f be a function of bounded variation such that the equation f(x) = g(Tx) − g(x) has no solution g of bounded variation. Then

σ² = lim_{n→∞} ∫₀¹ ( (S_n f − nµ(f)) / √n )² dµ > 0

and, for every α,

lim_{n→∞} µ( { x : (S_n f(x) − nµ(f)) / (σ√n) ≤ α } ) = ∫_{−∞}^α Φ(t) dt ,

where

Φ(t) = (1/√(2π)) e^{−t²/2} .


Consult [HoK],[R-E] for a proof. Also see an expository article by Young [Y2]. For more details consult [Bal],[BoG],[Den].

Remark 4.10. The nth standard deviation is defined by

σ_n = ( ∫₀¹ ( (S_n f − nµ(f)) / √n )² dµ )^{1/2} .

Then σ_n converges to a positive constant for transformations satisfying the conditions in Theorem 4.9. See Fig. 4.2 for n = 10, 20, . . . , 100 and Maple Program 4.4.1. If n is large, then the values of (1/n) S_n f(x), for random choices of x, are distributed around the average µ(f) in a bell-shaped form with standard deviation σ_n/√n. See Fig. 4.3 and Maple Program 4.4.2.

Fig. 4.2. σn converges to a positive constant as n → ∞. Simulations for Tx = {2x}, Tx = {βx} and Tx = 4x(1 − x) with f(x) = 1_{[1/2,1)}(x) (from left to right)
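For Tx = {2x} with f = 1_{[1/2,1)} the values f(T^i x) of a Lebesgue-typical point behave like independent fair coin flips (see Example 4.14 below), so σ = 1/2. A hedged Python sketch of the σ_n estimate: we simulate the digits directly with a seeded RNG rather than iterating the map in double precision, which would collapse to 0 after about 50 steps; the seed and sample size are arbitrary choices.

```python
import math
import random

random.seed(0)  # arbitrary seed for reproducibility

def sigma_n(n, num_samples=2000):
    """Estimate the nth standard deviation of S_n f for T x = {2x},
    f = 1_[1/2,1): S_n f is a sum of n independent fair coin flips."""
    mu_f = 0.5
    acc = 0.0
    for _ in range(num_samples):
        s = sum(random.getrandbits(1) for _ in range(n))
        acc += (s - n * mu_f) ** 2 / n
    return math.sqrt(acc / num_samples)

estimates = [sigma_n(n) for n in (10, 50, 100)]
print(estimates)  # each estimate should be near sigma = 1/2
```

This reproduces the flat line of the left panel of Fig. 4.2; for {βx} or the logistic map one would need the high-precision orbit computation of Maple Program 4.4.1.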

Definition 4.11. An integrable function f is called an additive coboundary if there exists an integrable function g satisfying f(x) = g(Tx) − g(x). A necessary condition for the existence of such g is ∫₀¹ f(x) dµ = 0.

Remark 4.12. If f is an additive coboundary of the form f = g∘T − g where g ∈ L²(X, µ), then µ(f) = 0 and

S_n f(x) = Σ_{i=0}^{n−1} f(T^i x) = Σ_{i=0}^{n−1} ( g(T^{i+1}x) − g(T^i x) ) = g(T^n x) − g(x) .
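The telescoping identity can be verified exactly. A small Python check with rational arithmetic, taking Tx = {2x} and an arbitrary illustrative choice of g (both choices are ours, not from the text):

```python
from fractions import Fraction

T = lambda x: (2 * x) % 1          # doubling map, exact on rationals
g = lambda x: x * x - x            # arbitrary illustrative choice of g
f = lambda x: g(T(x)) - g(x)       # f is then an additive coboundary

x = Fraction(3, 7)
n = 25
orbit = [x]
for _ in range(n):
    orbit.append(T(orbit[-1]))

Snf = sum(f(orbit[i]) for i in range(n))
assert Snf == g(orbit[n]) - g(orbit[0])   # S_n f = g(T^n x) - g(x)
```

Since S_n f stays bounded by 2 sup|g|, the normalized sums S_n f/√n tend to 0, which is exactly why σ = 0 below.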


In this case σ = 0 and the Central Limit Theorem does not hold since

σ² = lim_{n→∞} (1/n) ∫₀¹ (g(T^n x) − g(x))² dµ
   = lim_{n→∞} (1/n) [ ∫₀¹ g(T^n x)² dµ − 2 ∫₀¹ g(T^n x)g(x) dµ + ∫₀¹ g² dµ ]
   = lim_{n→∞} (2/n) [ ∫₀¹ g² dµ − ∫₀¹ g(T^n x)g(x) dµ ]
   ≤ lim_{n→∞} (2/n) [ ∫₀¹ g² dµ + ( ∫₀¹ g(T^n x)² dµ )^{1/2} ( ∫₀¹ g² dµ )^{1/2} ]
   = lim_{n→∞} (4/n) ∫₀¹ g² dµ = 0 .

It is known that f is an additive coboundary if and only if σ = 0. For a proof consult [BoG]. The proof for Tx = {2x} was first obtained by M. Kac in 1938, and subsequent generalizations are found in [For],[Ka1],[Ci].

Remark 4.13. Let T be weak mixing. If E is a measurable subset such that 0 < µ(E) < 1, then the function 1_E(x) − µ(E) is not an additive coboundary. For, if we let 1_E(x) − µ(E) = g(Tx) − g(x) then
exp(2πig(Tx)) = exp(−2πiµ(E)) exp(2πig(x)) .
Hence exp(−2πiµ(E)) is an eigenvalue of the linear operator U_T f(x) = f(Tx), which is a contradiction since exp(−2πiµ(E)) ≠ 1.

Example 4.14. (Multiplication by 2 modulo 1) For x ∈ X = [0, 1) we have the binary expansion

x = Σ_{i=1}^∞ d_i(x) 2^{−i} ,   d_i(x) ∈ {0, 1} .

Taking Tx = {2x} and f(x) = 1_E(x), E = [1/2, 1), in the Birkhoff Ergodic Theorem, we have the Normal Number Theorem, i.e.,

lim_{n→∞} (1/n) Σ_{i=1}^n d₁(T^{i−1}x) = 1/2

for almost every x. Note that the classical Central Limit Theorem in Subsect. 1.6.3 gives more information. Note that d_i = d₁∘T^{i−1}, i ≥ 1. They have the same mean

∫ d_i(x) dx = ∫ d₁(T^{i−1}x) dx = ∫ d₁(x) dx = 1/2


and the same variance

∫ ( d_i(x) − 1/2 )² dx = ∫ ( d₁(x) − 1/2 )² dx = 1/4 .

They are identically distributed, i.e.,

Pr(d_i = 0) = Pr(d_i = 1) = 1/2

for i ≥ 1. (This fact may be used to show that the d_i have the same mean and variance.) Furthermore, they are independent. Hence we have

lim_{n→∞} Pr( ( Σ_{i=1}^n d₁∘T^{i−1} − n/2 ) / ( (1/2)√n ) ≤ a ) = ∫_{−∞}^a Φ(t) dt .

The probability on the left hand side is given by Lebesgue measure on the unit interval. This can be derived from Theorem 4.9. See the left graph in Fig. 4.3, where the smooth curve is the density function for the standard normal distribution. The number of random samples is 10⁴; in other words, we select 10⁴ points from [0, 1] uniformly according to the invariant probability measure, which is Lebesgue measure in this case.

Fig. 4.3. The Central Limit Theorem: Simulations for Tx = {2x}, Tx = {βx} and Tx = 4x(1 − x) with n = 100 and f(x) = 1_{[1/2,1)}(x) (from left to right)

Example 4.15. (Irrational translation) Let 0 < θ < 1 be irrational, and let Tx = {x + θ}. In this case, T is not weak mixing and the Central Limit Theorem does not hold. For a simulation, we first take a function f which is not an additive coboundary to avoid the trivial case. For example, choose an interval E = (a, b) and put f(x) = 1_E(x) − (b − a). Then f is not an additive coboundary if

b − a ∉ Z·θ + Z = {mθ + n : m, n ∈ Z} .

For, if we had f(x) = g(x + θ) − g(x), then
exp(2πi(1_E(x) − (b − a))) = exp(2πig(x + θ)) exp(−2πig(x)) .
Put q(x) = exp(2πig(x)). Then
q(x + θ) = e^{−2πi(b−a)} q(x) .
Hence e^{−2πi(b−a)} = e^{2πinθ} for some n, which is a contradiction. For another choice of f(x) see Fig. 4.4 and Maple Program 4.4.3.

Fig. 4.4. Failures of the Central Limit Theorem: σn → 0 as n → ∞ for Tx = {x + θ}, θ = √2 − 1, with f(x) = 1_{[1/2,1)}(x) (left) and f(x) = x (right)

4.3 Speed of Correlation Decay

Throughout the section we use the natural logarithm. Let T be a measure preserving transformation on (X, µ). For a real-valued function f ∈ L²(X, µ) the nth correlation coefficient r_n(f) is defined by

r_n(f) = | ∫_X f(T^n x)f(x) dµ − ( ∫_X f dµ )² | .

Theorem 4.5 implies that T is mixing if and only if r_n(f) → 0 as n → ∞ for every f. If T satisfies the conditions given in Theorem 4.9, r_n(f) → 0 exponentially. More precisely, there exist constants C > 0 and α > 0 such that r_n(f) ≤ C e^{−αn}, where C depends on f and α does not depend on f. For a proof consult [BoG]. Note that

lim inf_{n→∞} ( −(log r_n(f)) / n ) ≥ α .

This property can be observed in the following simulations. If n is large, it is hard to estimate r_n(f) accurately due to the margin of error in numerical computation. See Maple Program 4.4.5.


Example 4.16. (Exponential decay) Correlation coefficients of Tx = {2x} are exponentially decreasing as n → ∞. See Maple Program 4.4.4 and Fig. 4.5 for the theoretical calculation with f(x) = x and 1 ≤ n ≤ 15. In the right graph −(log r_n(f))/n converges to log 2 ≈ 0.693, which is marked by a horizontal line. If n is large, say n = 10, then r_n(f) is very small theoretically. In this case, due to the margin of error in estimating the integral ∫ f(T^n x)f(x) dµ by the Birkhoff Ergodic Theorem, it is difficult to numerically estimate the exponential decay of correlations.

Fig. 4.5. Correlation of Tx = {2x} is exponentially decreasing: y = r_n(f), y = 1/r_n(f) and y = −(log r_n(f))/n, where f(x) = x (from left to right)

For g(x) = sin(2πx) we have g(T^n x) = sin(2^{n+1}πx) and r_n(g) = 0, n ≥ 1, since ∫₀¹ sin(2πmx) sin(2πnx) dx = 0 for m ≠ n. Similarly, for h(x) = 1_E(x), E = [0, 1/2), we have r_n(h) = 0, n ≥ 1, since ∫₀¹ h(T^n x)h(x) dx = 1/4.

Example 4.17. (Exponential decay) Let β = (√5 + 1)/2. Correlation coefficients of Tx = {βx} are exponentially decreasing as n → ∞. The transformation T is isomorphic to the Markov shift in Ex. 2.32, and hence it is mixing. See Fig. 4.6 for a simulation with f(x) = x and 1 ≤ n ≤ 10, where the sample size is 10⁶.

Fig. 4.6. Correlation of Tx = {βx} is exponentially decreasing: y = r_n(f), y = 1/r_n(f) and y = −(log r_n(f))/n, where f(x) = x (from left to right)


Example 4.18. (Polynomial decay) For 0 < α < 1, define

T_α x = x(1 + 2^α x^α) for 0 ≤ x < 1/2 ,   T_α x = 2x − 1 for 1/2 ≤ x ≤ 1 .

It is shown in [Pia] that there exists an absolutely continuous ergodic invariant probability measure dµ = ρ(x) dx for every 0 < α < 1. In Fig. 4.7 the graph y = ρ(x) is sketched by applying the Birkhoff Ergodic Theorem. See also Maple Program 3.9.6. For α = 1 there exists an absolutely continuous ergodic invariant measure µ such that µ([0, 1]) = ∞. Consult [Aa].

Fig. 4.7. The graphs of y = ρ(x) (left) and y = 1/ρ(x) (right) for Ex. 4.18 with α = 0.5

If f is Lipschitz continuous, then there exists a constant C > 0 such that r_n(f) ≤ C n^{−(1/α)+1}. In general, consider a piecewise smooth expanding transformation T on the unit interval that has the form Tx = x + x^{1+α} + o(x^{1+α}) near 0, where 0 < α < 1. The notation o(x^p) means a function φ(x) such that

lim_{x→0+} φ(x) x^{−p} = 0 .

The rate of decay of correlations is polynomial for a Lipschitz continuous function f, i.e., there exist positive constants C₁ and C₂ such that

C₁ n^{−(1/α)+1} ≤ r_n(f) ≤ C₂ n^{−(1/α)+1} .

The density function ρ has the order x^{−α} as x → 0. For the proof consult [Hu]. See also [BLvS],[Bal],[LSV],[Y3]. In Fig. 4.8 we choose 10⁶ samples with α = 0.5 and f(x) = x, and plot y = r_n(f), y = 1/r_n(f) and y = n^{(1/α)−1} r_n(f), 1 ≤ n ≤ 30. In this example the rate of decay is not fast, and so we can numerically estimate r_n within an acceptable margin of error. Thus we can take a relatively large value for n.


Fig. 4.8. Polynomial decay of correlations: y = r_n(f), y = 1/r_n(f) and y = n r_n(f) for α = 0.5 and f(x) = x (from left to right)

Example 4.19. (Irrational translations) Let 0 < θ < 1 be irrational. Correlation coefficients of Tx = {x + θ} do not decrease to 0 as n → ∞. To see why, choose a nonconstant real-valued L²-function f(x). Then

t ↦ ∫₀¹ f(x + t)f(x) dx

is a continuous function of t but not constant as t varies. Hence there exist a < b and C > 0 such that

| ∫₀¹ f(x + t)f(x) dx − ( ∫₀¹ f dx )² | > C

for every a < t < b. Observe that there exists a sequence {n_k}_{k=1}^∞ such that a < {n_k θ} < b for every k. Hence for every k

r_{n_k}(f) = | ∫₀¹ f(x + n_k θ)f(x) dx − ( ∫₀¹ f dx )² | > C .

To see why t ↦ ∫ f(x + t)f(x) dx is a nonconstant continuous function of t, let

f(x) = Σ_{n=−∞}^∞ a_n e^{2πinx}

be the Fourier expansion of f. Note that there exists n ≠ 0 such that a_n ≠ 0 since f is not constant. Then

f(x + t) = Σ_{n=−∞}^∞ a_n e^{2πint} e^{2πinx} ,

and Parseval's identity implies that

∫₀¹ f(x + t)f(x) dx = Σ_{n=−∞}^∞ |a_n|² e^{2πint} ,


which is not a constant function of t since there exists n ≠ 0 such that |a_n|² ≠ 0. To see that the function

t ↦ Σ_{n=−∞}^∞ |a_n|² e^{2πint}

is continuous, observe that

| Σ_{n=−∞}^∞ |a_n|² e^{2πint} − Σ_{n=−N}^N |a_n|² e^{2πint} | ≤ Σ_{|n|>N} |a_n|² → 0

as N → ∞. Since the convergence is uniform, the limit is continuous.
Simulation results for Tx = {x + θ} with f(x) = x and θ = √3 − 1 are presented in Fig. 4.9. The graph y = r_n(f), 1 ≤ n ≤ 30, is not decreasing as n increases. See Maple Program 4.4.6.
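For f(x) = x the correlation integral has a closed form that makes the non-decay easy to check: ∫₀¹ {x + t}·x dx = 1/3 − t/2 + t²/2 for 0 ≤ t < 1, so r_n(f) = |1/12 − {nθ}/2 + {nθ}²/2|. A Python sketch (the closed form is our own elementary computation, not from the text) confirming that the values keep returning to the order of 1/12 instead of decaying:

```python
import math

theta = math.sqrt(3.0) - 1.0

def r(n):
    """r_n(f) for T x = {x + theta}, f(x) = x, via the closed form
    |∫ f(x + nθ)f(x) dx − (∫ f dx)²| = |1/12 − t/2 + t²/2|, t = {nθ}."""
    t = (n * theta) % 1.0
    return abs(1.0 / 12.0 - t / 2.0 + t * t / 2.0)

values = [r(n) for n in range(1, 201)]
print(max(values[99:]))  # even for n in [100, 200] the correlations are not small
```

Whenever {nθ} is close to 0 or 1 (which happens for infinitely many n by equidistribution), r_n(f) is close to 1/12, matching Fig. 4.9.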

Fig. 4.9. Correlations of an irrational translation modulo 1 do not converge to zero: y = r_n(f) where f(x) = x

4.4 Maple Programs 4.4.1 The Central Limit Theorem: σ > 0 for the logistic transformation Check the Central Limit Theorem for the logistic transformation. > with(plots): > T:=x->4.0*x*(1-x): > density:=x->1/(Pi*sqrt(x*(1-x))): > entropy:=log10(2.): For the deﬁnition of entropy consult Chap. 8. For the reason why we have to consider entropy in deciding the number of signiﬁcant digits see Sect. 9.2.


Num:=10000: This is the number of the samples. A smaller value, say 1000, wouldn’t do. > N:=100: Choose a multiple of 10. > L:=Num+N-1; L := 10099 Choose a slightly larger integer than entropy × N . > Digits:=ceil(N*entropy)+20; >

Digits := 51 Take a test function. We may choose any other test function such as f (x) = x. > f:=x->trunc(2*x): Find the average of f . > mu_f:=int(f(x)*density(x),x=0..1); 1 mu f := 2 Choose a starting point of an orbit of T . > seed[0]:=evalf(Pi-3): Find an orbit of T . > for k from 1 to L do seed[k]:=T(seed[k-1]): od: > Digits:=10: Find σn for n = 10, 20, 30, . . . and draw a line graph. > for n from 10 to N by 10 do > for i from 1 to Num do > Sn[i]:=add(f(seed[k]),k=i..i+n-1): > od: > var[n]:=add((Sn[i]-n*mu_f)^2/n,i=1..Num)/Num: > sigma[n]:=sqrt(var[n]): > od: > listplot([seq([n*10,sigma[n*10]],n=1..N/10)],labels= ["n"," "]); See Fig. 4.2. 4.4.2 The Central Limit Theorem: the normal distribution for the Gauss transformation Check the normal distribution in the Central Limit Theorem for the Gauss transformation. > with(plots): Here is the standard normal distribution function. > Phi:=x->1/(sqrt(2*Pi))*exp(-x^2/2): > T:=x->frac(1.0/x): > entropy:=evalf(Pi^2/6/log(2)/log(10));


entropy := 1.030640835
> f:=x->trunc(2.0*x):
Choose the number of the samples.
> Num_Sample:=10000:
Choose the size of each random sample.
> n:=100:
Choose a slightly larger integer than entropy × n.
> Digits:=ceil(n*entropy)+10;
Digits := 114
> seed[0]:=evalf(Pi-3):
Find many orbits of T of length n. On each of them calculate Sn.
> for i from 1 to Num_Sample do
> for k from 1 to n do
> seed[k]:=T(seed[k-1]):
> od:
> Sn[i]:=evalf(add(f(seed[k]),k=1..n),10):
> seed[0]:=seed[n]:
> od:
We no longer need many significant digits.
> Digits:=10:
> mu:=add(Sn[i]/n,i=1..Num_Sample)/Num_Sample;
µ := 0.4145300000
Find the variance.
> var:=add((Sn[i]-n*mu)^2/n,i=1..Num_Sample)/Num_Sample;
var := 0.2166699100
Find the standard deviation.
> sigma:=sqrt(var);
σ := 0.4654781520
> for i from 1 to Num_Sample do
> Zn[i]:=(Sn[i]-n*mu)/(sigma*sqrt(n)):
> od:
Find the gap between the values of the normalized average Zn[i].
> gap:=1/(sigma*sqrt(n));
gap := 0.2148328543
> 10/gap;
46.54781520
> Bin:=46:
We partition the interval [−5, 5] into exactly 46 subintervals. To see why, try 45 or 47. This phenomenon occurs since f(x) is integer-valued.
> width:=10.0/Bin;
This is the length of the subintervals.

> for k from 1 to Bin do freq[k]:=0: od:
> for i from 1 to Num_Sample do
> slot:=ceil((Zn[i]+5.0)/width):
> freq[slot]:=freq[slot]+1:
> od:
We draw the probability density function.
> g1:=listplot([seq([(i-0.5)*width-5,freq[i]/Num_Sample/width],i=1..Bin)],labels=[" "," "]):
> g2:=plot(Phi(x),x=-5..5):
> display(g1,g2);
See Fig. 4.10. We may use a cumulative probability density function. Its graph does not depend on the number of bins too much.

Fig. 4.10. The Central Limit Theorem for the Gauss transformation: σn converges (left) and the numerical data fit the standard normal distribution (right)

For more experiments try the toral automorphism

T(x, y) = (2x + y, x + y)   (mod 1) .

See Fig. 4.11 for a simulation with f(x, y) = 1_{[1/2,1)}(x).

Fig. 4.11. The Central Limit Theorem for a toral automorphism: σn converges (left) and the numerical data fit the standard normal distribution (right)


4.4.3 Failure of the Central Limit Theorem: σ = 0 for an irrational translation mod 1
We show that σ = 0 for an irrational translation modulo 1.
> with(plots):
Choose an irrational number θ.
> theta:=sqrt(2.0)-1:
> T:=x->frac(x+theta):
> Num:=1000:
> N:=100:
> L:=Num+N-1;
L := 1099
Take a test function f(x).
> f:=x->trunc(2.0*x):
Compute µ(f).
> mu_f:=int(f(x),x=0..1);
Choose a starting point of an orbit of T.
> seed[0]:=evalf(Pi-3):
> for k from 1 to L do seed[k]:=T(seed[k-1]): od:
Find σn.
> for n from 1 to N do
> for i from 1 to Num do
> Sn[i]:=add(f(seed[k]),k=i..i+n-1):
> od:
> var[n]:=add((Sn[i]-n*mu_f)^2,i=1..Num)/Num/n:
> sigma[n]:=sqrt(var[n]):
> od:
> listplot([seq([n,sigma[n]],n=1..N)],labels=["n"," "]);
See Fig. 4.4.

4.4.4 Correlation coefficients of T x = 2x mod 1
Derive theoretically r_n(f) for Tx = 2x (mod 1) with f(x) = x.
> T:=x->frac(2.0*x):
> f:=x->x:
Compute µ(f).
> mu_f:=int(f(x),x=0..1):
Put I_j = [(j − 1)·2^{−n}, j·2^{−n}]. Then

∫₀¹ f(T^n x)f(x) dx = Σ_{j=1}^{2^n} ∫_{I_j} {2^n x}·x dx = Σ_{j=1}^{2^n} ∫_{I_j} (2^n x − (j − 1))·x dx .


Find r_n(f). For the symbolic summation in the following use the command sum instead of the command add.
> r[n]:=abs(sum(int(x*(2^n*x-j+1),x=(j-1)*2^(-n)..j*2^(-n)), j=1..2^n) - 1/4);
Simplify the preceding result.
> simplify(%);
(1/12)·2^{−n}
Thus r_n(f) = 2^{−n}/12, which decays exponentially. See Fig. 4.5.
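The same sum can be cross-checked exactly in Python with rational arithmetic (this check is ours, not part of the book's Maple session):

```python
from fractions import Fraction

def correlation(n):
    """r_n(f) = |∫₀¹ {2^n x}·x dx − 1/4| computed exactly for f(x) = x."""
    m = 2**n
    total = Fraction(0)
    for j in range(1, m + 1):
        a, b = Fraction(j - 1, m), Fraction(j, m)
        # ∫ (m·x − (j−1))·x dx has antiderivative m·x³/3 − (j−1)·x²/2
        F = lambda x: Fraction(m) * x**3 / 3 - (j - 1) * x**2 / 2
        total += F(b) - F(a)
    return abs(total - Fraction(1, 4))

for n in range(1, 9):
    assert correlation(n) == Fraction(1, 12 * 2**n)
print("r_n(f) = 2^(-n)/12 confirmed for n = 1..8")
```

This agrees with the slope log 2 seen in the right panel of Fig. 4.5, since −(log r_n)/n → log 2.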

4.4.5 Correlation coeﬃcients of the beta transformation We introduce a general method to estimate correlation coeﬃcients. > with(plots): Deﬁne the β-transformation and the invariant density function. > Digits:=100; > beta:=(sqrt(5.)+1)/2: > T:=x->frac(beta*x): Deﬁne the invariant probability density. > density:=x->piecewise(0 M:=100000: Here M is the orbit length in the Birkhoﬀ Ergodic Theorem. Take f (x) = x. > f:=x->x: Compute µ(f ). > mu_f:=int(f(x)*density(x),x=0..1): > seed[0]:=evalf(Pi-3): > for k from 1 to N+M-1 do > seed[k]:=T(seed[k-1]): > seed[k-1]:=evalf(T(seed[k-1]),10): #This saves memory. > od: > Digits:=10: In the following we use the idea that

∫₀¹ f(T^n x)f(x) dµ ≈ (1/M) Σ_{k=0}^{M−1} f(T^{n+k}x₀) f(T^k x₀) .

Find the nth correlation coeﬃcient. > for n from 1 to N do > r[n]:=abs(add(f(seed[n+k])*f(seed[k]),k=0..M-1)/M-mu_f^2): > od: Plot y = rn . > listplot([seq([n,r[n]],n=1..N)],labels=["n","r"]); Plot y = 1/rn . > listplot([seq([n,1/r[n]],n=1..N)],labels=["n","1/r"]); Plot y = − log rn . > listplot([seq([n,-log(r[n])/n],n=1..N)],labels= ["n","-(log r)/n"]); See Fig. 4.6. 4.4.6 Correlation coeﬃcients of an irrational translation mod 1 Show that the correlation coeﬃcients of an irrational translation modulo 1 do not converge to zero. > with(plots): > theta:=sqrt(3.0)-1: > N:=30: > T:=x->frac(x+theta): > f:=x->x: > mu_f:=int(f(x),x=0..1): Find the inner product of f ◦ T j and f . Here we use a diﬀerent idea from the one in Subsect. 4.4.5. Since T preserves Lebesgue measure, we choose the sample points i/S, 1 ≤ i ≤ S. > S:=1000: > for j from 0 to N do > inner_prod[j]:=add(f(frac(i/S+j*theta))*f(i/S),i=1..S)/S: > od: The following shows that T is not mixing. > f0:=plot(mu_f^2,x=0..N,labels=["n","inner product"]): > f1:=listplot([seq([n,inner_prod[n]],n=0..N)]): > display(f0,f1); See the left graph in Fig. 4.1. The following shows that T is not weak mixing. > for n from 1 to N do > Cesaro1[n]:=add(abs(inner_prod[j]-mu_f^2),j=0..n-1)/n: > od: > listplot([seq([n,Cesaro1[n]],n=1..N)]);

Fig. 4.12. An irrational translation mod 1 is not weak mixing

See Fig. 4.12. The limit is positive. The following shows that T is ergodic. > for n from 1 to N do > Cesaro2[n]:=add(inner_prod[j],j=0..n-1)/n: > od: > h0:=plot(mu_f^2,x=0..N,labels=["n","Cesaro sum"]): > h1:=listplot([seq([n,Cesaro2[n]],n=1..N)]): > display(h0,h1); See the right graph in Fig. 4.1. Find the nth correlation coeﬃcient. > for n from 1 to N do > r[n]:=abs(inner_prod[n]-mu_f^2): > od: > listplot([seq([n,r[n]],n=1..N)],labels=["n","r"]); See Fig. 4.9.
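Program 4.4.6 translates directly. A hedged Python sketch using the same sample-point idea (the numerical tolerances below are our own choices): the plain Cesàro average of the inner products tends to (f, 1)(1, f) = 1/4 (ergodicity), while the Cesàro average of |inner product − 1/4| stays away from 0 (failure of weak mixing).

```python
import math

theta = math.sqrt(3.0) - 1.0
N, S = 300, 1000
mu_f = 0.5                       # ∫₀¹ x dx for f(x) = x

# inner[j] ≈ ∫ f(T^j x) f(x) dx via the sample points i/S, as in the Maple code
inner = [sum(((i / S + j * theta) % 1.0) * (i / S) for i in range(1, S + 1)) / S
         for j in range(N)]

# not weak mixing: Cesàro average of |(U^j f, f) − (f,1)(1,f)| has a positive limit
cesaro1 = sum(abs(p - mu_f**2) for p in inner) / N
# ergodic: plain Cesàro average converges to (f,1)(1,f) = 1/4
cesaro2 = sum(inner) / N

print(cesaro1, cesaro2)
```

The first value reproduces the positive limit of Fig. 4.12; the second reproduces the convergence in the right graph of Fig. 4.1.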

5 More on Ergodicity

For a transformation deﬁned on an interval, conditions for the existence of an absolutely continuous invariant probability measure are given in the ﬁrst section, and a related property in the second one. An ergodic transformation visits a subset E of positive measure over and over with probability 1, and the average of the recurrence times between consecutive occurrences is equal to the reciprocal of the measure of E. Simply put, it takes longer to return to a set of a smaller size. This fact was proved by M. Kac. For his autobiography, see [Ka4]. Later, conditions for the ergodicity of a Markov shift are given in terms of the associated stochastic matrix. In the last section is presented an example of a noninvertible transformation with an invertible extension. For a collection of interesting examples of ergodic theory consult [ArA], and for applications to statistical mechanics see [Kel]. For an exposition on dynamical systems see [Ru3]. For continuous dynamical systems consult [HK],[Ly], and for complex dynamical systems see [El],[Ly].

5.1 Absolutely Continuous Invariant Measures

For a piecewise differentiable mapping on an interval, the existence of an absolutely continuous invariant measure is known under various conditions. First we consider a mapping with finitely many discontinuities.

Definition 5.1. Take X = [0, 1]. A piecewise differentiable transformation T : X → X is said to be eventually expansive if some iterate of T has its derivative bounded away from 1 in modulus, i.e., there exist n ∈ N and a constant C > 1 such that |(T^n)′| ≥ C where the derivative is defined.

The following fact has many variants and it is called a folklore theorem.

Fact 5.2 (Finite partition) Let 0 = a₀ < a₁ < ··· < a_n = 1 be a partition of X = [0, 1]. Put I_i = (a_{i−1}, a_i) for 1 ≤ i ≤ n. Suppose that T : X → X satisfies the following:


(i) T|_{I_i} has a C²-extension to the closure of I_i,
(ii) T|_{I_i} is strictly monotone,
(iii) T(I_i) is a union of intervals I_j,
(iv) there exists an integer s such that T^s(I_i) = X, and
(v) T is eventually expansive.
Then T has an ergodic invariant probability measure µ such that dµ = ρ dx where ρ is piecewise continuous and satisfies A ≤ ρ(x) ≤ B for some constants 0 < A ≤ B.

For the proof see [AdF]. For the case of transformations with infinitely many discontinuities see [Bow].

Fact 5.3 (Continuous density) Let {∆_i} be a countable partition of [0, 1] by subintervals ∆_i = (a_i, a_{i−1}) satisfying a₀ = 1, a_i < a_{i−1} and lim_i a_i = 0. Suppose that a map T on [0, 1] satisfies the following:
(i) T|_{∆_i} has a C²-extension to the closure of ∆_i,
(ii) T|_{∆_i} is strictly monotone,
(iii) T(∆_i) = [0, 1],
(iv) sup_i { sup_{x₁∈∆_i} |T″(x₁)| / inf_{x₂∈∆_i} |T′(x₂)|² } < ∞, and
(v) T is eventually expansive.
Then T has an ergodic invariant probability measure µ such that dµ = ρ dx where ρ is continuous and satisfies A ≤ ρ(x) ≤ B for some constants 0 < A ≤ B.

For the proof see [Half]. For similar results see [Ad],[CFS],[LY],[Re],[Si2]. In [Half] a condition for the differentiability of ρ(x) is given. As a corollary we obtain the ergodicity of the Gauss transformation.

5.2 Boundary Conditions for Invariant Measures

Theorem 5.4. Suppose that $\varphi : (0,1] \to \mathbb{R}$ satisfies the following:
(i) $\varphi(0+) = +\infty$, $\varphi(1) = 1$,
(ii) $\varphi'(x) < 0$ for $0 < x < 1$,
(iii) $\varphi$ is twice continuously differentiable and $\varphi''(x) > 0$,
(iv) $\displaystyle \sup_i \frac{\sup_{x \in \Delta_i} |\varphi''(x)|}{\inf_{y \in \Delta_i} |\varphi'(y)|^2} < \infty$, where $\Delta_i = (\varphi^{-1}(i+1), \varphi^{-1}(i)\,]$, and
(v) $\sum_{n=1}^{\infty} 1/|\varphi'(\varphi^{-1}(n))| < \infty$, where $\varphi'(1)$ is understood as one-sided.
Define $T$ on $[0,1]$ by $T0 = 0$ and $Tx = \{\varphi(x)\}$ for $0 < x \le 1$. Assume that $T$ is eventually expansive. Then there exists a continuous function $\rho$ such that $d\mu = \rho\,dx$ is an ergodic invariant probability measure for $T$ and
\[ \rho(0) = \Bigl(1 - \frac{1}{\varphi'(1)}\Bigr)\rho(1)\,. \]


Proof. From Fact 5.3 there exists such a continuous function $\rho$. Assume that $1/C \le \rho \le C$. Since $Tx \in (0,\alpha)$ if and only if $\varphi(x) \in \bigcup_{n=1}^{\infty}(n, n+\alpha)$, we have
\[
\mu((0,\alpha)) = \mu\bigl(T^{-1}(0,\alpha)\bigr)
= \mu\Bigl(\bigcup_{n=1}^{\infty}\bigl(\varphi^{-1}(\alpha+n), \varphi^{-1}(n)\bigr)\Bigr)
= \sum_{n=1}^{\infty} \mu\bigl(\varphi^{-1}(\alpha+n), \varphi^{-1}(n)\bigr)
\]
and
\[
\int_0^{\alpha} \rho\,dx = \sum_{n=1}^{\infty} \int_{\varphi^{-1}(\alpha+n)}^{\varphi^{-1}(n)} \rho\,dx\,.
\]
Put
\[ f_k(\alpha) = \sum_{n=1}^{k} \int_{\varphi^{-1}(\alpha+n)}^{\varphi^{-1}(n)} \rho\,dx \]
for $0 < \alpha < 1$. Then
\[ f_k'(x) = \sum_{n=1}^{k} \frac{-\rho(\varphi^{-1}(x+n))}{\varphi'(\varphi^{-1}(x+n))} \]
and
\[
\sum_{n=k_1}^{k_2} \Bigl|\frac{-\rho(\varphi^{-1}(x+n))}{\varphi'(\varphi^{-1}(x+n))}\Bigr|
\le C \sum_{n=k_1}^{k_2} \frac{1}{|\varphi'(\varphi^{-1}(x+n))|}
\le C \sum_{n=k_1}^{k_2} \frac{1}{|\varphi'(\varphi^{-1}(n))|}
\]
since $|\varphi'|$ is monotonically decreasing. Since the right hand side converges to $0$ as $k_1$ and $k_2$ increase to infinity, $f_k'$ converges uniformly on $[0,1]$. Therefore we can differentiate the series
\[ \sum_{n=1}^{\infty} \int_{\varphi^{-1}(x+n)}^{\varphi^{-1}(n)} \rho(x)\,dx \]
term by term, and obtain
\[ \rho(x) = \sum_{n=1}^{\infty} \frac{-\rho(\varphi^{-1}(x+n))}{\varphi'(\varphi^{-1}(x+n))}\,. \]
Since
\[ \rho(0) = -\sum_{n=1}^{\infty} \frac{\rho(\varphi^{-1}(n))}{\varphi'(\varphi^{-1}(n))} \]
and
\[ \rho(1) = -\sum_{n=1}^{\infty} \frac{\rho(\varphi^{-1}(n+1))}{\varphi'(\varphi^{-1}(n+1))} = -\sum_{n=2}^{\infty} \frac{\rho(\varphi^{-1}(n))}{\varphi'(\varphi^{-1}(n))}\,, \]
we see that
\[ \rho(0) - \rho(1) = -\frac{\rho(\varphi^{-1}(1))}{\varphi'(\varphi^{-1}(1))}\,. \]
Now use $\varphi(1) = 1$.
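Theorem 5.4 can be sanity-checked numerically. The sketch below (Python; the book's own programs are in Maple) uses $\varphi(x) = 1/x$, i.e. the Gauss map $Tx = \{1/x\}$, whose invariant density $\rho(x) = \frac{1}{\ln 2}\frac{1}{1+x}$ is classical, and verifies both the series identity from the proof and the boundary relation $\rho(0) = (1 - 1/\varphi'(1))\rho(1)$. The function names and the truncation length are arbitrary choices for illustration.

```python
import math

def rho(x):            # invariant density of the Gauss map
    return 1.0 / (math.log(2.0) * (1.0 + x))

def phi_inv(y):        # inverse of phi(x) = 1/x
    return 1.0 / y

def dphi(x):           # phi'(x) = -1/x^2
    return -1.0 / (x * x)

# Series identity rho(x) = sum_n -rho(phi^{-1}(x+n)) / phi'(phi^{-1}(x+n))
def series(x, terms=200000):
    s = 0.0
    for n in range(1, terms + 1):
        g = phi_inv(x + n)
        s += -rho(g) / dphi(g)
    return s

x = 0.3
assert abs(series(x) - rho(x)) < 1e-4      # truncation error is O(1/terms)

# Boundary relation rho(0) = (1 - 1/phi'(1)) rho(1); here phi'(1) = -1
lhs = rho(0.0)
rhs = (1.0 - 1.0 / dphi(1.0)) * rho(1.0)
assert abs(lhs - rhs) < 1e-12
print("series identity and boundary relation verified")
```

For the Gauss map the series actually telescopes, which is why the agreement is exact up to truncation.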

Corollary 5.5. $\rho(0) > \rho(1)$ since $\varphi' < 0$.

Example 5.6. The differentiability condition on $\varphi$ is indispensable. Consider the linearized Gauss transformation
\[ Tx = -n(n+1)\Bigl(x - \frac{1}{n}\Bigr)\,, \qquad \frac{1}{n+1} < x < \frac{1}{n}\,, \quad n \ge 1\,, \]
introduced in Chap. 2. Note that $\varphi(x)$ is not differentiable at $x = \varphi^{-1}(n)$. Since $T$ preserves Lebesgue measure, we have $\rho \equiv 1$, which does not satisfy the above relation. See Fig. 5.1 and Maple Program 5.7.1. The sample size is $10^6$ and the number of bins is 100 for the simulations presented in Figs. 5.1, 5.2.

Fig. 5.1. The linearized Gauss transformation (left) and a simulation of its invariant density function (right)
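That the linearized Gauss transformation preserves Lebesgue measure can be checked directly: each branch on $(\frac{1}{n+1}, \frac{1}{n})$ is affine onto $(0,1)$ with slope $-n(n+1)$, so the preimage of an interval $(c,d)$ has length $\sum_n (d-c)/(n(n+1))$, which telescopes to $d-c$. A short Python sketch of this computation (the test interval and truncation length are arbitrary):

```python
c, d = 0.2, 0.7                 # an arbitrary test interval (c, d) in (0, 1)
N = 10**6                       # number of branches summed; the tail is O(1/N)

# each branch contributes preimage length (d - c) / (n (n + 1))
total = sum((d - c) / (n * (n + 1)) for n in range(1, N + 1))

# sum_n 1/(n(n+1)) telescopes to 1, so the preimage has the same length
assert abs(total - (d - c)) < 1e-5
print("preimage length:", total)
```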

Remark 5.7. We may go on further to obtain relations among higher order derivatives at $x = 0, 1$. The differentiability of $\rho(x)$ is investigated by Halfant [Half]. Assuming the differentiability of $\rho$ and suitable growth conditions to allow term by term differentiation where needed, we can derive the following:
\[ \Bigl(1 - \frac{1}{[\varphi'(1)]^2}\Bigr)\rho'(1) = \rho'(0) - \frac{\varphi''(1)}{[\varphi'(1)]^3}\,\rho(1)\,. \]
For the proof see [C1].


Example 5.8. Consider $Tx = 1/x \pmod 1$. Choose $\Delta_i = (\frac{1}{i+1}, \frac{1}{i}]$. Then $\rho(x) = \frac{1}{\ln 2}\,\frac{1}{1+x}$ satisfies the conclusions in Theorem 5.4 and Remark 5.7.

Example 5.9. Define the perturbed Gauss transformation
\[ Tx = \frac{1}{2}\Bigl(\frac{1}{x} - x\Bigr) \pmod 1 \]
and the logarithmic transformation
\[ Tx = -\ln x \pmod 1\,, \]
whose invariant densities are given in Fig. 5.2. In both cases $\rho(0) = 2\,\rho(1)$.

Fig. 5.2. Invariant density functions for Tx = {(1/x − x)/2} (left) and Tx = {−ln x} (right)

5.3 Kac's Lemma on the First Return Time

Theorem 5.10 (The Poincaré Recurrence Theorem). Let $T$ be a measure preserving transformation on a probability space $(X, \mu)$. If $E \subset X$ is measurable and $\mu(E) > 0$, then almost every point $x \in E$ is recurrent, i.e., there exists a sequence $0 < n_1 < n_2 < \cdots$ such that $T^{n_j}x \in E$ for every $j$.

Proof. Let $E_n = \bigcup_{k=n}^{\infty} T^{-k}E$. Then $\bigcap_{n=0}^{\infty} E_n$ is the set of all points that visit $E$ infinitely often under positive iterations by $T$. Put $F = E \cap (\bigcap_{n=0}^{\infty} E_n)$. If $x \in F$ then there is a sequence $0 < n_1 < n_2 < \cdots$ such that $T^{n_i}x \in E$ for every $i$. Since $T^{n_j - n_i}(T^{n_i}x) = T^{n_j}x \in E$ for every $j > i$ where $i$ is fixed, $T^{n_i}x$ returns to $E$ infinitely often. Hence $T^{n_i}x \in F$. This is true for every $i$, and so $x \in F$ returns to $F$ infinitely often. To show that $\mu(F) = \mu(E)$, we note that $T^{-1}E_n = E_{n+1}$ and $\mu(E_n) = \mu(E_{n+1})$. Since $E_0 \supset E_1 \supset E_2 \supset \cdots$, we have


\[ \mu\Bigl(\bigcap_{n=0}^{\infty} E_n\Bigr) = \lim_{n\to\infty} \mu(E_n) = \mu(E_0)\,. \]
Similarly, since $E \cap E_0 \supset E \cap E_1 \supset E \cap E_2 \supset \cdots$, we have
\[ \mu(F) = \mu\Bigl(\bigcap_{n=0}^{\infty} (E \cap E_n)\Bigr) = \lim_{n\to\infty} \mu(E \cap E_n) = \mu(E \cap E_0) = \mu(E) \]
where the last equality comes from the fact that $E \subset E_0$.

Definition 5.11. Let $T : X \to X$ be a measure preserving transformation on a probability space $(X,\mu)$. Assume that $E \subset X$ satisfies $\mu(E) > 0$. For $x \in E$ we define the first return time in $E$ by
\[ R_E(x) = \min\{n \ge 1 : T^n x \in E\}\,. \]
The Poincaré Recurrence Theorem guarantees that $R_E(x) < \infty$ for almost every $x \in E$. Define the first return transformation $T_E : E \to E$ almost everywhere on $E$ by $T_E(x) = T^{R_E(x)}x$. For simulation results for $y = T_E(x)$ see Fig. 5.3.

Fig. 5.3. Plots of (x, TE(x)) with E = [0, 1/2] for Tx = {x + θ}, θ = √2 − 1, Tx = {2x} and Tx = 4x(1 − x) (from left to right)

Remark 5.12. (i) If $\alpha < 1$, then $\{x \in E : R_E(x) \le \alpha\} = \emptyset$. For $\alpha \ge 1$, let $k = [\alpha]$. Then
\[ \{x \in E : R_E(x) \le \alpha\} = E \cap (T^{-1}E \cup \cdots \cup T^{-k}E)\,. \]
Hence $R_E : E \to \mathbb{R}$ is a measurable function.
(ii) Let $E_k = \{x \in E : R_E(x) = k\}$. For a measurable subset $C \subset E$ we have
\[ T_E^{-1}(C) = \bigcup_{k=1}^{\infty} (E_k \cap T^{-k}C)\,. \]
Thus $T_E^{-1}(C)$ is a measurable subset, and hence $T_E$ is a measurable mapping.


Example 5.13. Let $T$ be an irrational translation modulo 1. Take an interval $E \subset [0,1)$. Then the first return time $R_E$ assumes at most three values. Furthermore, if $k_1 < k_2 < k_3$ are three values of $R_E$, then $k_1 + k_2 = k_3$. See Fig. 5.7 and Maple Program 5.7.4. For the proof and related results see [Fl],[Halt],[Lo],[LoM],[Sl1],[Sl2].

Take $E \subset X$ with $\mu(E) > 0$. Define the conditional measure $\mu_E$ on $E$ by $\mu_E(A) = \mu(A)/\mu(E)$ for measurable subsets $A \subset E$.

Theorem 5.14. (i) If $T$ is a measure preserving transformation on a probability space $(X,\mu)$ and if $\mu(E) > 0$, then $T_E$ preserves $\mu_E$.
(ii) Furthermore, if $T$ is ergodic, then $T_E$ is also ergodic.

Proof. (i) First, we consider the case when $T$ is invertible. Then $T_E : E \to E$ is one-to-one. To show that $T_E$ preserves $\mu_E$, for $A \subset E$ let
\[ A_n = \{x \in A : R_E(x) = n\}\,. \]
Then $A_n$ is measurable and we have a pairwise disjoint union $A = \bigcup_{n=1}^{\infty} A_n$. Hence $T_E(A) = \bigcup_{n=1}^{\infty} T_E(A_n)$. Note that $T_E(A_n) = T^n(A_n)$. Hence
\[ \mu_E(T_E A) = \sum_{n=1}^{\infty} \mu_E(T^n A_n) = \sum_{n=1}^{\infty} \mu_E(A_n) = \mu_E(A)\,. \]

When $T$ is not invertible, consult [DK],[Kel],[Pia].
(ii) Let $B \subset E$ be an invariant set for $T_E$ such that $\mu_E(B) > 0$. We will show that $\mu_E(B) = 1$. Note that $B = T_E^{-1}(B) = T_E^{-2}(B) = T_E^{-3}(B) = \cdots$, and hence
\[ B = \Bigl(\bigcup_{n=0}^{\infty} T^{-n}B\Bigr) \cap E\,. \]
Since $T$ is ergodic, we have
\[ \mu\Bigl(\bigcup_{n=0}^{\infty} T^{-n}B\Bigr) = 1\,, \]
and hence
\[ \bigcup_{n=0}^{\infty} T^{-n}B = X\,. \]
It follows that $B = E$.
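Theorem 5.14(i) can be illustrated numerically. The Python sketch below (the book's own programs are in Maple; the parameters here are illustrative choices) builds the first return map $T_E$ for the irrational translation $Tx = \{x + \theta\}$ with $E = [0, 1/2)$ and checks that the fraction of points mapped into a subinterval $J \subset E$ matches $|J|/|E|$, i.e. that $T_E$ preserves the conditional (normalized Lebesgue) measure.

```python
import math

THETA = math.sqrt(2.0) - 1.0          # irrational rotation angle
A, B = 0.0, 0.5                        # E = [A, B)

def T(x):
    return (x + THETA) % 1.0

def T_E(x, max_iter=10000):
    """First return map on E: iterate T until the orbit re-enters E."""
    y = T(x)
    for _ in range(max_iter):
        if A <= y < B:
            return y
        y = T(y)
    raise RuntimeError("no return within max_iter")

N = 20000
points = [A + (B - A) * (k + 0.5) / N for k in range(N)]   # uniform grid on E
images = [T_E(x) for x in points]

# Measure preservation: #{x : T_E(x) in J} / N should be |J| / |E|
J_LO, J_HI = 0.1, 0.3
frac = sum(1 for y in images if J_LO <= y < J_HI) / N
assert abs(frac - (J_HI - J_LO) / (B - A)) < 0.01
print("empirical fraction:", frac)
```

Since $T_E$ for a rotation is an interval exchange, the uniform grid stays uniform up to $O(1/N)$ boundary effects, which is why a tight tolerance works.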


Remark 5.15. Take $X = [0,1]$. Let us sketch the graph of $T_E : E \to E$ based on mathematical pointillism. Choose a dense subset $\{x_1, x_2, x_3, \ldots\} \subset E$. Plot the ordered pairs $(x_j, T_E(x_j))$. If $T_E$ is reasonably nice, e.g., piecewise continuous, then the ordered pairs are dense in $G = \{(x, T_E(x)) : x \in E\}$. See Fig. 5.3 where 5000 points are plotted. Consult Maple Program 5.7.2.

Remark 5.16. Assume that $T$ is ergodic and $0 < \mu(E) < 1$. By Theorem 5.14 $T_E$ is ergodic, but $(T_E)^2$ need not be so. If $E$ is of the form $E = F \,\triangle\, T^{-1}F$ for some $F$ with $\mu(F) > 0$, then $(T_E)^2 : E \to E$ is not ergodic. To see why, first consider the case when $F \cap T^{-1}F = \emptyset$. Then $T_E(T^{-1}F) = F$ and $T_E(F) = T^{-1}F$. Note that $F$ and $T^{-1}F$ are invariant under $(T_E)^2$. Next, if $F \cap T^{-1}F = A$ with $\mu(A) > 0$, then put $B_1 = T^{-1}F \setminus A$, $B_2 = F \setminus A$. Then $T_E(B_1) = B_2$, $T_E(B_2) = B_1$. Hence $B_1, B_2$ are invariant under $(T_E)^2$. Therefore, in either case, $(T_E)^2$ is not ergodic. See Figs. 5.4, 5.5, 5.6, where $(T_E)^2$ has two ergodic components in each example. For a related idea consult Sect. 7.3.
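For the rotation example of Fig. 5.4 this splitting can be observed directly. A Python sketch under the assumptions $\theta = \sqrt2 - 1$, $E = [0, 2\theta)$, $F = [\theta, 2\theta)$, so that $T^{-1}F = [0, \theta)$ and $F \cap T^{-1}F = \emptyset$: every point of $T^{-1}F$ should land in $F$ after one application of $T_E$ and come back to $T^{-1}F$ after two.

```python
import math

THETA = math.sqrt(2.0) - 1.0
A, B = 0.0, 2.0 * THETA               # E = [0, 2*theta)

def T(x):
    return (x + THETA) % 1.0

def T_E(x):
    y = T(x)
    while not (A <= y < B):           # the rotation returns quickly
        y = T(y)
    return y

# T^{-1}F = [0, theta); F = [theta, 2*theta)
for k in range(1, 1000):
    x = THETA * k / 1000.0            # sample points of T^{-1}F
    y1 = T_E(x)
    y2 = T_E(y1)
    assert THETA <= y1 < B            # one step lands in F
    assert 0.0 <= y2 < THETA          # two steps return to T^{-1}F
print("(T_E)^2 leaves [0, theta) invariant")
```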

Fig. 5.4. Plots of (x, TE(x)) (left) and (x, TE²(x)) (right) with E = [0, 2θ] and F = [θ, 2θ] for Tx = {x + θ}, θ = √2 − 1

Fig. 5.5. Plots of (x, TE(x)) (left) and (x, TE²(x)) (right) with E = [1/4, 3/4] and F = [0, 1/2] for Tx = {2x}

Fig. 5.6. Plots of (x, TE(x)) (left) and (x, TE²(x)) (right) with E = [1/4, 1] and F = [3/4, 1] for Tx = 4x(1 − x)

Theorem 5.17 (Kac's lemma). If $T$ is an ergodic measure preserving transformation on a probability space $(X,\mu)$ and if $\mu(E) > 0$, then
\[ \int_E R_E(x)\,d\mu = 1\,, \]
in other words, $\int_E R_E\,d\mu_E = 1/\mu(E)$.

We give two proofs. The first one is only for invertible transformations.

Proof. ($T$ is invertible) For $n \ge 1$, let $E_n = \{x \in E : R_E(x) = n\}$. We have pairwise disjoint unions $E = \bigcup_{n=1}^{\infty} E_n$ and
\[ X = \bigcup_{n=1}^{\infty} \bigcup_{k=0}^{n-1} T^k E_n\,. \]
Hence
\[ \int_E R_E\,d\mu = \sum_{n=1}^{\infty} \int_{E_n} R_E\,d\mu = \sum_{n=1}^{\infty} n\,\mu(E_n) = \mu(X) = 1\,. \]

Proof. ($T$ is not necessarily invertible) Take $x \in E$ and consider the orbit of $x$ under the map $T_E : E \to E$:
\[ x,\ T_E(x),\ \ldots,\ T_E^{\ell}(x),\ \ldots,\ T_E^{L}(x)\,. \]
Put $N = \sum_{\ell=0}^{L-1} R_E(T_E^{\ell}x)$. Then $N$ is the time duration for the orbit of $x$ under $T$ to revisit $E$ exactly $L$ times, i.e.,
\[ \sum_{n=1}^{N} 1_E(T^n x) = L\,. \]


Applying the Birkhoff Ergodic Theorem for $T_E$ and $T$, we have
\[ \int_E R_E\,d\mu_E = \lim_{L\to\infty} \frac{1}{L}\sum_{\ell=0}^{L-1} R_E(T_E^{\ell}x) = \lim_{N\to\infty} \frac{N}{\sum_{n=1}^{N} 1_E(T^n x)} = \frac{1}{\mu(E)}\,. \]

For the original proof by Kac, see [Ka2]. For a version of Kac's lemma on infinite measure spaces consult [Aa]. See Maple Programs 5.7.3, 5.7.4.

Remark 5.18. Here is how to sketch the graph of the first return time $R_E$ based on mathematical pointillism when $X = [0,1]$ and $E \subset X$ is an interval. First, choose sufficiently many points $x$ in $E$ that are more or less uniformly distributed. Several thousand points will do. Next, calculate $R_E(x)$, then plot the points $(x, R_E(x))$. If we take sufficiently many points, then the plot will look like the graph of $y = R_E(x)$, $x \in E$. See Figs. 5.7, 5.8, 5.9.
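Both Kac's lemma and the three-value property of Example 5.13 can be checked in a few lines. The Python sketch below mirrors the setting of Maple Program 5.7.4 ($Tx = \{x + \theta\}$, $\theta = \sqrt2 - 1$, $E = [0, 0.05)$) on a grid of points of $E$: the return times take exactly three values with $k_1 + k_2 = k_3$, and their average approximates $1/\mu(E) = 20$. The grid size and iteration cap are arbitrary.

```python
import math

THETA = math.sqrt(2.0) - 1.0
B = 0.05                                # E = [0, B)

def return_time(x, max_iter=100000):
    y = (x + THETA) % 1.0
    for n in range(1, max_iter + 1):
        if y < B:
            return n
        y = (y + THETA) % 1.0
    raise RuntimeError("no return")

N = 1000
grid = [B * (k + 0.5) / N for k in range(N)]
times = [return_time(x) for x in grid]

values = sorted(set(times))
assert len(values) == 3                         # exactly three values
assert values[0] + values[1] == values[2]       # k1 + k2 = k3
mean = sum(times) / N
assert abs(mean - 1.0 / B) < 0.5                # Kac: mean close to 20
print("return-time values:", values, "mean:", mean)
```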

Fig. 5.7. Plots of (x, RE(x)) for Tx = {x + θ}, θ = √2 − 1, on E = [0, 1/2] (left) and E = [1/2, 1] (right)

Fig. 5.8. Plots of (x, RE(x)) for Tx = {2x} on E = [0, 1/2] (left) and E = [1/2, 1] (right)


Fig. 5.9. Plots of (x, RE (x)) for T x = 4x(1 − x) on E = [0, 1/2] (left) and E = [1/2, 1] (right)

5.4 Ergodicity of Markov Shift Transformations

A product of two stochastic matrices $A$ and $B$ is also a stochastic matrix since the sum of the $i$th row of $AB$ is equal to
\[ \sum_j (AB)_{ij} = \sum_j \sum_k A_{ik} B_{kj} = \sum_k A_{ik} \sum_j B_{kj} = \sum_k A_{ik} = 1\,. \]

Hence if $P$ is a stochastic matrix then $P^n$ is a stochastic matrix for any $n \ge 1$ and so is $\frac{1}{N}\sum_{n=0}^{N-1} P^n$.

Lemma 5.19. Let $P$ be a stochastic matrix. If a vector $\pi = (\pi_i)_i > 0$ satisfies $\pi P = \pi$ and $\sum_i \pi_i = 1$, then we have the following:
(i) $Q = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} P^n$ exists.
(ii) $Q$ is a stochastic matrix.
(iii) $QP = PQ = Q$.
(iv) If $vP = v$, then $vQ = v$.
(v) $Q^2 = Q$.

Proof. Let $P$ be a $k \times k$ stochastic matrix. For $A = \{0, 1, \ldots, k-1\}$ let $T$ be the shift transformation on the infinite product space $X = \prod_1^{\infty} A$ and let $\mu$ denote the Markov measure on $X$ defined by $P$ and $\pi$. Note that $T$ is not necessarily ergodic.
(i) Let $A = [i]_1$, $B = [j]_1$ be two cylinder sets of length 1 where $i, j \in A$. Then
\[ T^{-n}A \cap B = \bigcup_{a_1, \ldots, a_{n-1}} [j, a_1, \ldots, a_{n-1}, i]_{1,n+1} \]
where the union is taken over arbitrary choices for $a_1, \ldots, a_{n-1} \in A$. Then
\[ \mu(T^{-n}A \cap B) = \sum_{a_1, \ldots, a_{n-1}} \mu([j, a_1, \ldots, a_{n-1}, i]) = \sum_{a_1, \ldots, a_{n-1}} \pi_j P_{j,a_1} \cdots P_{a_{n-1},i} = \pi_j (P^n)_{ji}\,. \]


The Birkhoff Ergodic Theorem implies that for almost every $x \in X$
\[ \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} 1_A(T^n x) = f_i^*(x) \]
where $f_i^*$ is $T$-invariant and
\[ \int_X f_i^*\,d\mu = \int_X 1_A\,d\mu = \mu(A) = \pi_i\,. \]
Since $1_A(T^n x)\,1_B(x) = 1_{T^{-n}A \cap B}(x)$, by the Lebesgue Dominated Convergence Theorem we have
\[
\int_X f_i^*(x) 1_B(x)\,d\mu
= \lim_{N\to\infty} \int_X \frac{1}{N}\sum_{n=1}^{N} 1_A(T^n x)\,1_B(x)\,d\mu
= \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} \mu(T^{-n}A \cap B)
= \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} \pi_j (P^n)_{ji}
\]
and
\[ \lim_{N\to\infty} \frac{1}{N}\sum_{n=1}^{N} (P^n)_{ji} = \frac{1}{\pi_j} \int_X f_i^*(x) 1_B(x)\,d\mu\,. \]
Hence the limit of $\frac{1}{N}\sum_{n=1}^{N} P^n$ exists.
(ii) Since the sum of any row of $\frac{1}{N}\sum_{n=1}^{N} P^n$ is equal to 1, its limit $Q$ has the same property.
(iii) Since
\[ P\,\Bigl(\frac{1}{N}\sum_{n=1}^{N} P^n\Bigr) = \Bigl(\frac{1}{N}\sum_{n=1}^{N} P^n\Bigr) P = \frac{1}{N}\sum_{n=2}^{N+1} P^n\,, \]
by taking limits we have $PQ = QP = Q$.
(iv) Since $v\,\bigl(\frac{1}{N}\sum_{n=1}^{N} P^n\bigr) = v$ for every $N$, we have $vQ = v$.
(v) Since
\[ Q\,\Bigl(\frac{1}{N}\sum_{n=1}^{N} P^n\Bigr) = \frac{1}{N}\sum_{n=1}^{N} Q P^n = Q \]
for every $N$ from (iii), we have $Q^2 = Q$.

Now we find conditions for ergodicity of Markov shift transformations.
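Lemma 5.19 is easy to probe numerically. The Python sketch below computes the Cesàro average $\frac1N\sum_{n=0}^{N-1}P^n$ for a small stochastic matrix (an arbitrary example) and checks (ii), (iii) and (v) to within the truncation error; the average converges at rate $O(1/N)$.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cesaro(P, N):
    """(1/N) * (P^0 + P^1 + ... + P^{N-1})."""
    n = len(P)
    acc = [[0.0] * n for _ in range(n)]
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # P^0 = I
    for _ in range(N):
        for i in range(n):
            for j in range(n):
                acc[i][j] += power[i][j] / N
        power = mat_mul(power, P)
    return acc

def dist(A, B):
    return max(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

P = [[0.9, 0.1], [0.2, 0.8]]
Q = cesaro(P, 20000)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in Q)   # (ii) Q is stochastic
assert dist(mat_mul(Q, P), Q) < 1e-3                  # (iii) QP = Q
assert dist(mat_mul(P, Q), Q) < 1e-3                  # (iii) PQ = Q
assert dist(mat_mul(Q, Q), Q) < 1e-3                  # (v)  Q^2 = Q
```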


Theorem 5.20. Let $P$ be a $k \times k$ stochastic matrix. Suppose that a vector $\pi = (\pi_i)_i > 0$ satisfies $\pi P = \pi$ and $\sum_i \pi_i = 1$. For $A = \{0, 1, \ldots, k-1\}$ let $T$ be a Markov shift transformation on $X = \prod_1^{\infty} A$ associated with $P$ and $\pi$. (By Lemma 5.19, $Q = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} P^n$ exists.) Then the following are equivalent:
(i) $T$ is ergodic.
(ii) All rows of $Q$ are identical.
(iii) Every entry of $Q$ is positive.
(iv) $P$ is irreducible.
(v) The subspace $\{v : vP = v\}$ has dimension 1.

Proof. (i) $\Rightarrow$ (ii). Let $A = [i]_1$, $B = [j]_1$. The ergodicity of $T$ implies
\[ \mu(A)\mu(B) = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} \pi_j (P^n)_{ji} = \pi_j Q_{ji}\,, \]
and hence $\pi_i \pi_j = \pi_j Q_{ji}$. Since $\pi_j > 0$, we have $Q_{ji} = \pi_i$, which implies that every row of $Q$ is equal to $\pi$.
(ii) $\Rightarrow$ (iii). Recall that $\pi Q = \pi$. Let $Q_j = Q_{ij}$ for every $i$. From $\sum_i \pi_i Q_{ij} = \pi_j$, we have $\sum_i \pi_i Q_j = \pi_j$. Since $\sum_i \pi_i = 1$, we have $Q_j = \pi_j > 0$.
(iii) $\Rightarrow$ (iv). Note that for every $(i,j)$,
\[ 0 < Q_{ij} = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} (P^n)_{ij}\,, \]
and hence $(P^n)_{ij} > 0$ for some $n \ge 1$.
(iv) $\Rightarrow$ (v). Suppose that $v \ne 0$ is not a constant multiple of $\pi$ and satisfies $vP = v$. For some $t_0 \ne 0$, there exist $i, j$ such that $(\pi + t_0 v)_i = 0$, $(\pi + t_0 v)_j > 0$ and $(\pi + t_0 v)_{\ell} \ge 0$ for every $\ell$. Put $w = \pi + t_0 v$. Take $n$ such that $(P^n)_{ji} > 0$. Then $wP^n = w$, and hence
\[ 0 = w_i = (wP^n)_i = \sum_{\ell} w_{\ell} (P^n)_{\ell i} \ge w_j (P^n)_{ji} > 0\,, \]
which is a contradiction.
(v) $\Rightarrow$ (i). Suppose that $T$ is not ergodic. Then there exist two cylinder sets $A$, $B$ such that $\mu(A) > 0$, $\mu(B) > 0$ satisfying
\[ \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} \mu(T^{-n}A \cap B) \ne \mu(A)\mu(B)\,. \]
(See the remark following Theorem 3.12.) Let
\[ A = [i_1, \ldots, i_r]_a^{a+r-1}\,, \qquad B = [j_1, \ldots, j_s]_b^{b+s-1}\,. \]
If $a + n \ge b + s$, we have


\[ \mu(T^{-n}A \cap B) = \pi_{j_1} P_{j_1 j_2} \cdots P_{j_{s-1} j_s} (P^{a-b+n-s+1})_{j_s i_1} P_{i_1 i_2} \cdots P_{i_{r-1} i_r}\,. \]
Hence
\[ \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} \mu(T^{-n}A \cap B) = \pi_{j_1} P_{j_1 j_2} \cdots P_{j_{s-1} j_s}\, Q_{j_s i_1}\, P_{i_1 i_2} \cdots P_{i_{r-1} i_r} = \mu(B)\,\frac{Q_{j_s i_1}}{\pi_{i_1}}\,\mu(A)\,. \]
Hence $Q_{j_s i_1} \ne \pi_{i_1}$. Since $QP = Q$, every row of $Q$ is an eigenvector of $P$ with eigenvalue 1. Hence $\pi$ and the $j_s$th row of $Q$ are two independent eigenvectors with the same eigenvalue 1, which is a contradiction.
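The equivalences of Theorem 5.20 suggest a simple numerical test. The Python sketch below takes an irreducible stochastic matrix (chosen for illustration), forms the Cesàro average $Q$, and checks (ii) and (iii): every row of $Q$ agrees with the stationary vector $\pi$, and every entry of $Q$ is positive.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def cesaro(P, N):
    # (1/N) * (P^0 + ... + P^{N-1})
    n = len(P)
    acc = [[0.0] * n for _ in range(n)]
    power = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(N):
        for i in range(n):
            for j in range(n):
                acc[i][j] += power[i][j] / N
        power = mat_mul(power, P)
    return acc

# Irreducible chain: state 0 always moves to 1, state 1 moves to 0 or stays.
P = [[0.0, 1.0], [0.5, 0.5]]
pi = [1.0 / 3.0, 2.0 / 3.0]            # stationary vector: pi P = pi
Q = cesaro(P, 20000)
for row in Q:
    assert all(q > 0 for q in row)                          # (iii)
    assert max(abs(q - p) for q, p in zip(row, pi)) < 1e-3  # (ii) rows = pi
```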

Note that ergodicity of a Markov shift implies that every row of $Q$ is equal to $\pi$. Now we find mixing conditions for Markov shift transformations.

Theorem 5.21. Let $P$ be a $k \times k$ stochastic matrix. Suppose that a vector $\pi = (\pi_i)_i > 0$ satisfies $\pi P = \pi$ and $\sum_i \pi_i = 1$. For $A = \{0, 1, \ldots, k-1\}$ let $T$ be a Markov shift transformation on $X = \prod_1^{\infty} A$ associated with $P$ and $\pi$. Then the following are equivalent:
(i) $T$ is mixing.
(ii) $(P^n)_{ij}$ converges to $\pi_j$ as $n \to \infty$ for every $i, j$.
(iii) $P$ is irreducible and aperiodic.

Proof. Let $\mu$ denote the Markov measure on $X$ defined by $P$ and $\pi$. Put $Q = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} P^n$.
(i) $\Rightarrow$ (ii). Let $A = [j]_1$, $B = [i]_1$ be two cylinder sets of length 1 where $i, j \in A$. Since $\mu(T^{-n}A \cap B)$ converges to $\mu(A)\mu(B)$ as $n \to \infty$, $\pi_i (P^n)_{ij}$ converges to $\pi_i \pi_j$.
(ii) $\Rightarrow$ (iii). Since $P^n$ is close to $Q$ for sufficiently large $n$, and since every entry of $Q$ is positive, we observe that $(P^n)_{ij} > 0$ for sufficiently large $n$.
(iii) $\Rightarrow$ (i). It suffices to show that $\mu(T^{-n}A \cap B) \to \mu(A)\mu(B)$ as $n \to \infty$ for cylinder sets $A$ and $B$. Let
\[ A = [i_1, \ldots, i_r]_a^{a+r-1}\,, \qquad B = [j_1, \ldots, j_s]_b^{b+s-1}\,. \]
Let $J$ denote the Jordan canonical form of $P$. Since an eigenvalue $\lambda \ne 1$ of $P$ satisfies $|\lambda| < 1$, the Jordan block $M_{\lambda}$ corresponding to $\lambda \ne 1$ satisfies $M_{\lambda}^n \to 0$ as $n \to \infty$. Suppose that the first Jordan block is given by $M_1 = (1)$, corresponding to the simple eigenvalue 1; then $J^n$ converges to the diagonal matrix $\mathrm{diag}(1, 0, \ldots, 0)$. Hence $P^n$ converges to a limit. Since $\frac{1}{N}\sum_{n=1}^{N} P^n$ converges to $Q$, $P^n$ converges to $Q$. Hence
\[ \lim_{n\to\infty} \mu(T^{-n}A \cap B) = \lim_{n\to\infty} \pi_{j_1} P_{j_1 j_2} \cdots P_{j_{s-1} j_s} (P^n)_{j_s i_1} P_{i_1 i_2} \cdots P_{i_{r-1} i_r} = \mu(B)\,Q_{j_s i_1}\,\frac{1}{\pi_{i_1}}\,\mu(A) = \mu(B)\,\pi_{i_1}\,\frac{1}{\pi_{i_1}}\,\mu(A) = \mu(B)\mu(A)\,. \]
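The convergence in Theorem 5.21(ii), and its exponential rate (Theorem 5.22 below), can be observed directly. For a $2 \times 2$ aperiodic chain with simple eigenvalues $1$ and $\lambda_2$, the spectral decomposition gives $P^n = Q + \lambda_2^n(I - Q)$, so $\|P^n - Q\|_\infty$ shrinks by the factor $|\lambda_2|$ each step. A Python sketch (matrix chosen for illustration; here $\lambda_2 = 0.7$ and $\pi = (2/3, 1/3)$):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.9, 0.1], [0.2, 0.8]]
lam2 = P[0][0] + P[1][1] - 1.0        # second eigenvalue: trace - 1 = 0.7
Q = [[2.0 / 3.0, 1.0 / 3.0], [2.0 / 3.0, 1.0 / 3.0]]   # rows = stationary pi

def err(n):
    """Sup-norm distance between P^n and Q."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        M = mat_mul(M, P)
    return max(abs(M[i][j] - Q[i][j]) for i in range(2) for j in range(2))

# Exponential convergence: the error decays by the factor |lam2| per step.
for n in range(5, 15):
    assert abs(err(n + 1) / err(n) - abs(lam2)) < 1e-6
```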


Theorem 5.22. In Theorem 5.21 (ii), the speed of convergence is exponential, i.e., $\|P^n - Q\| \to 0$ exponentially as $n \to \infty$ for any norm $\|\cdot\|$ on the vector space $V$ of $k \times k$ complex matrices.

Proof. Fix a norm $\|\cdot\|$ on the space $V$. Let $S$ be an invertible matrix such that $S^{-1}PS = J$ where $J$ is the Jordan canonical form of $P$ as in the proof of the theorem. Then
\[ \|A\|' = \|S^{-1}AS\|\,, \qquad A \in V\,, \]
defines a new norm $\|\cdot\|'$ on $V$. Since all norms on $V$ are equivalent, there exist constants $0 < \alpha \le \beta$ such that $\alpha\|A\| \le \|S^{-1}AS\| \le \beta\|A\|$ for every $A \in V$. Hence
\[ \alpha\|P^n - Q\| \le \|S^{-1}(P^n - Q)S\| \le \beta\|P^n - Q\|\,. \]
Since $S^{-1}(P^n - Q)S = J^n - \mathrm{diag}(1, 0, \ldots, 0)$, the speed of the convergence of $P^n$ to $Q$ is equivalent to the speed of convergence of $J^n$ to $\mathrm{diag}(1, 0, \ldots, 0)$ for any norm on $V$. Now we use the norm
\[ \|A\|_{\infty} = \max_{1 \le i,j \le k} |A_{ij}|\,. \]
Its restriction to any subspace of $V$ will also be denoted by $\|\cdot\|_{\infty}$. For a Jordan block $M_{\lambda} = \lambda I + N$ corresponding to $\lambda$, $|\lambda| < 1$, we have $N^k = 0$. Hence for $n \ge k$
\[ (M_{\lambda})^n = \sum_{j=0}^{k-1} C(n,j)\,\lambda^{n-j} N^j \]
where $C(n,j)$ is the binomial coefficient. Let
\[ \eta = \max\{\|I\|_{\infty}, \|N\|_{\infty}, \ldots, \|N^{k-1}\|_{\infty}\}\,. \]
Then for $n \ge k$
\[ \|(M_{\lambda})^n\|_{\infty} \le \eta \sum_{j=0}^{k-1} C(n,j)\,|\lambda|^{n-j} \le \eta\,k\,n^{k-1}\,|\lambda|^{n-k+1}\,. \]
Thus $(M_{\lambda})^n \to 0$ exponentially. Now use
\[ \|J^n - \mathrm{diag}(1, 0, \ldots, 0)\|_{\infty} \le \max_{|\lambda| < 1} \|(M_{\lambda})^n\|_{\infty}\,. \]

> T:=x->-trunc(1/x)*(trunc(1/x)+1)*(x-1/(1+trunc(1/x)))+1;
T := x → −trunc(1/x) (trunc(1/x) + 1) (x − 1/(1 + trunc(1/x))) + 1
> plot(T(x),x=0..1);
See Fig. 5.1. Here is another example, which is a formula of a linearized logarithmic transformation in a closed form.
> S:= x->-2^(trunc(-log[2.0](x))+1)*x+2;
S := x → −2^(trunc(−log[2.0](x))+1) x + 2
> plot({frac(-log[2](x)),S(x)},x=0..1);
See Fig. 5.15.

Fig. 5.15. y = {−log₂ x} and its linearized version

5.7.2 How to sketch the graph of the first return time transformation

We introduce a method to sketch the graph of the first return time transformation without its explicit formula.
> with(plots):
Take T x = 2x (mod 1).
> T:=x->frac(2*x):
> Digits:=100:
Choose an interval [a, b] ⊂ [0, 1] where the first return time is defined.
> a:=1/4:
> b:=3/4:
> SampleSize:=5000:
Take points in the interval [a, b].
> for i from 1 to SampleSize do
>   seed[i]:=a+frac(sqrt(3.0)*i)*(b-a):
> od:
For x ∈ [a, b] find TE(x).
> for i from 1 to SampleSize do
>   X:=T(seed[i]):
>   for j from 1 while (a > X or b < X) do
>     X:=T(X):
>   od:
>   Y[i]:=X:
> od:
> Digits:=10:
Plot sufficiently many points on the graph of TE. See the left graph in Fig. 5.5.
> g1:=pointplot([seq([seed[i],Y[i]],i=1..SampleSize)], symbolsize=1):
> g2:=plot((a+b)/2,x=a..b,y=a..b,labels=["x"," "]):
> display(g1,g2);
Now we sketch the graph y = (TE)²(x).
> Digits:=200:

5.7 Maple Programs


We need twice as many significant digits for (TE)² since we need to compute the second return time.
> for i from 1 to SampleSize do
>   X:=T(seed[i]):
>   for j from 1 while (a > X or b < X) do X:=T(X): od:
>   X2:=T(X):
>   for j from 1 while (a > X2 or b < X2) do X2:=T(X2): od:
>   Z[i]:=X2:
> od:
> Digits:=10:
Plot sufficiently many points on the graph of (TE)².
> g3:=pointplot([seq([seed[i],Z[i]],i=1..SampleSize)], symbolsize=1):
> display(g3,g2);
See the right graph in Fig. 5.5.

5.7.3 Kac's lemma for the logistic transformation

We simulate Kac's lemma for the logistic transformation.
> with(plots):
> Digits:=100:
> x[0]:=evalf(Pi-3):
Choose an interval [a, b] ⊂ [0, 1] where the first return time is defined.
> a:=0.5:
> b:=1:
Choose a transformation.
> T:=x->4.0*x*(1-x):
> rho:=x->1/(Pi*sqrt(x*(1-x))):
> SampleSize:=10000:
Choose points in [0, 1].
> for i from 1 to SampleSize do x[i]:=T(x[i-1]): od:
Choose points in [a, b].
> j:=1:
> for i from 1 to SampleSize do
>   if (x[i] > a and x[i] < b) then seed[j]:=x[i]:
>     j:=j+1: fi:
> od:
Find the number of points in the interval [a, b].
> N:=j-1;
N := 5002
The Birkhoff Ergodic Theorem implies that the following two numbers should be approximately equal.
> evalf(int(rho(x),x=a..b),10);
.5000000000


> evalf(N/SampleSize,10);
.5002000000
Taking sample points from [a, b] is equivalent to using the conditional measure on [a, b].
> for j from 1 to N do
>   orbit:=T(seed[j]):
>   for k from 1 while(orbit < a or orbit > b) do
>     orbit:=T(orbit):
>   od:
>   ReturnTime[j]:=k:
> od:
We no longer need many digits.
> Digits:=10;
Kac's lemma states that the average of the first return time is equal to the reciprocal of the measure of the given set.
> evalf(add(ReturnTime[j],j=1..N)/N);
1.999000400
> 1/int(rho(x),x=a..b);
2.000000000
Plot the points (x, RE(x)). See Fig. 5.9.
> pointplot([seq([seed[j],ReturnTime[j]],j=1..N)]);

5.7.4 Kac's lemma for an irrational translation mod 1

An irrational translation modulo 1 has a special property: there are three values for the first return time in general and the sum of the two smaller values is equal to the maximum.
> with(plots):
An irrational translation modulo 1 has entropy zero, and so there is no need to take many significant digits. For the definition of entropy consult Chap. 8, and for the choice of the number of significant digits see Remark 9.18.
> Digits:=50:
Choose the interval [a, b] where the first return time is defined.
> a:=0:
> b:=0.05:
Choose θ.
> theta:=sqrt(2.0)-1:
> T:=x->frac(x+theta):
> SampleSize:=1000:
> x[0]:=evalf(Pi-3):
> for i from 1 to SampleSize do x[i]:=T(x[i-1]): od:
> j:=1:


> for i from 1 to SampleSize do
>   if (x[i] > a and x[i] < b) then seed[j]:=x[i]:
>     j:=j+1: fi:
> od:
Find the number of points inside the interval [a, b]. We will compute the first return time at the points thus obtained.
> N:=j-1;
N := 50
Check the Birkhoff Ergodic Theorem.
> evalf(b-a,10);
0.05
> evalf(N/SampleSize,10);
0.05000000000
> for j from 1 to N do
>   orbit:=T(seed[j]):
>   for k from 1 while(orbit < a or orbit > b) do
>     orbit:=T(orbit):
>   od:
>   ReturnTime[j]:=k:
> od:
> Digits:=10:
In the following there are only three values of the first return time and the maximum of the first return time is the sum of the two smaller values.
> seq(ReturnTime[j],j=1..N);
12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 17, 12, 29, 12, 17, 12, 29, 29, 12, 29, 29, 12, 17, 12, 29, 29, 12
> Max:=max(seq(ReturnTime[j],j=1..N));
Max := 29
> Min:=min(seq(ReturnTime[j],j=1..N));
Min := 12
> for j from 1 to N do
>   if (ReturnTime[j] > Min and ReturnTime[j] < Max) then
>     Middle[k]:=ReturnTime[j]: k:=k+1: fi:
> od:
Find the number of times when Min < ReturnTime < Max.
> K:=k-1;
K := 8
> seq(Middle[k],k=1..K);
17, 17, 17, 17, 17, 17, 17, 17
The following two numbers are approximately equal by Kac's lemma.
> evalf(add(ReturnTime[j],j=1..N)/N);
19.94000000
> evalf(1/(b-a),10);
20.00000000
> pointplot([seq([seed[j],ReturnTime[j]],j=1..N)]);
See Fig. 5.16.

Fig. 5.16. The first return time of an irrational translation mod 1

5.7.5 Bernoulli measures on the unit interval

Regard the (1/4, 3/4)-Bernoulli measure as being defined on the unit interval.
> with(plots):
Construct a typical point x0, or use Maple Program 2.6.7.
> z:=4:
> ran:=rand(0..z-1):
> N:=10000:
> evalf(2^(-N),10);
0.5012372749 10^(−3010)
> Digits:=3020:
Generate the (1/4, 3/4)-Bernoulli sequence {d_j}_j.
> for j from 1 to N do
>   d[j]:=ceil(ran()/(z-1)):
> od:
Convert the binary number (0.d1 d2 d3 ...)₂ into a decimal number.
> M:=N/100;
M := 100
The following method is faster than the direct summation of all terms.
> for k from 1 to 100 do
>   partial_sum[k]:=add(d[s+M*(k-1)]/2^s,s=1..M):
> od:
> x0:=evalf(add(partial_sum[k]/2^(M*(k-1)),k=1..100)):


> T:=x->frac(2*x):
> S:=10000:
Divide the unit interval into 64 subintervals and find the relative frequency p_i of visiting the ith subinterval and plot the points (i − 0.5, p_i).
> Bin:=64:
> for k from 1 to Bin do freq[k]:=0: od:
> seed:=x0:
> for j from 1 to S do
>   seed:=T(seed):
>   slot:=ceil(Bin*seed):
>   freq[slot]:=freq[slot]+1:
> od:
To draw lines we need an additional package.
> with(plottools):
> Digits:=10:
> for i from 1 to Bin do
>   g[i]:=line([(i-0.5)/Bin,freq[i]*Bin/S],[(i-0.5)/Bin,0]):
> od:
Draw a histogram as an approximation of the (1/4, 3/4)-Bernoulli measure.
> display(seq(g[i],i=1..Bin),labels=["x"," "]);
> Digits:=ceil(1000*log10(2))+10;
Digits := 312
Plot an orbit of length 1000.
> orbit[0]:=x0:
> for j from 1 to 1000 do
>   orbit[j]:=T(orbit[j-1]):
>   orbit[j-1]:=evalf(orbit[j-1],10):
> od:
> Digits:=10:
> pointplot([seq([i,orbit[i]],i=1..1000)],labels=["n","x"]):
See Fig. 5.10.

5.7.6 An invertible extension of the beta transformation

Find an invertible extension of the β-transformation Tβ(x) = {βx} using the idea in Sect. 5.6.
> with(plots):
> beta:=(sqrt(5.0)+1)/2;
β := 1.618033988

> a:=1/(3-beta);
a := 0.7236067974
> a*beta;
1.170820392
The following calculations show that a = β²/(1 + β²) and 1/β = β − 1.
> beta^2/(1+beta^2);
0.7236067975
> 1/beta;
0.6180339890
Define an invertible transformation T on the domain D given in Fig. 5.17. It is a union of three rectangles. Let us call the left top rectangle R1, the left bottom one R2 and the right one R3. Note that the roof of D is given by the invariant density function of Tβ.

Fig. 5.17. The domain D = R1 ∪ R2 ∪ R3

> g0:=textplot([[1/beta,0,‘1/b‘]],font=[SYMBOL,9]):
> g00:=textplot([[0,a,"a"]],font=[HELVETICA,9]):
> g1:=listplot([[0,a],[1/beta,a],[1/beta,0]],color=green):
> g2:=listplot([[0,a*beta],[1/beta,a*beta],[1/beta,a],[1,a],[1,0]]):
> display(g0,g00,g1,g2);
See Fig. 5.17. We want to find an invertible mapping T : D → D satisfying
T(R1 ∪ R2) = R2 ∪ R3 and T(R3) = R1.
Define T.
> T:=(x,y)->(frac(beta*x),(1/beta)*y+trunc(beta*x)*a);
T := (x, y) → (frac(β x), y/β + trunc(β x) a)
Note that T horizontally stretches D by the factor β and squeezes vertically by the factor 1/β. So it preserves the two-dimensional Lebesgue measure λ. Define φ : D → [0, 1] by φ(x, y) = x. Then Tβ ∘ φ = φ ∘ T. Now we study the images of the region R1 ∪ R2 under T and T². First, represent R1 ∪ R2 using mathematical pointillism.
> seed[0]:=(evalf(Pi-3),0.1):


> Arnold:=(x,y)->(frac(2*x+y),frac(x+y));
> S:=3000:
> for i from 1 to S do seed[i]:=Arnold(seed[i-1]): od:
The points seed[i] are more or less uniformly scattered in the unit square. They are modified to generate points in R1 ∪ R2.
> for i from 1 to S do
>   image0[i]:=(seed[i][1]/beta,seed[i][2]*(a*beta)):
> od:
> pointplot({seq([image0[i]],i=1..S)},symbolsize=1);
See the first diagram in Fig. 5.18 for R1 ∪ R2.
> for i from 1 to S do image1[i]:=T(image0[i]): od:
> pointplot([seq([image1[i]],i=1..S)],symbolsize=1);
See the second diagram in Fig. 5.18 for T(R1 ∪ R2) = R2 ∪ R3.
> for i from 1 to S do image2[i]:=T(image1[i]): od:
> pointplot({seq([image2[i]],i=1..S)},symbolsize=1);
See the third diagram in Fig. 5.18 for T²(R1 ∪ R2).

Fig. 5.18. Dotted regions represent R1 ∪ R2, T(R1 ∪ R2) and T²(R1 ∪ R2) (from left to right)

Now we show the ergodicity of $T$. If $E \subset D$ is an invariant subset of positive measure, then for $\varepsilon > 0$ there exist rectangles $F_1, \ldots, F_k \subset D$ such that $\bigcup_{i=1}^{k} F_i$ approximates $E$ within an error bound of $\varepsilon$, i.e., $\lambda\bigl(E \,\triangle\, \bigcup_{i=1}^{k} F_i\bigr) < \varepsilon$. Since $T^n E = E$ for every $n \in \mathbb{Z}$, we have $\lambda\bigl(E \,\triangle\, \bigcup_{i=1}^{k} T^n F_i\bigr) < \varepsilon$. If $n > 0$ is sufficiently large, then $T^n F_i$ consists mostly of horizontal strips, and $T^{-n}F_i$ consists mostly of vertical strips. See Fig. 5.19. Let $\|\cdot\|$ denote the $L^1$-norm. Then
\[ \|1_U - 1_V\| = \int |1_U - 1_V|\,d\lambda = \lambda(U \,\triangle\, V) \]
for $U, V \subset D$. Since both $A = \bigcup_{i=1}^{k} T^n F_i$ and $B = \bigcup_{i=1}^{k} T^{-n} F_i$ approximate $E$ within the error bound of $\varepsilon$, we observe that


Fig. 5.19. Dotted regions represent a rectangle F and its images T k F , 1 ≤ k ≤ 8 (from top left to bottom right)

\[
\begin{aligned}
\lambda((A \cap B) \,\triangle\, A) = \|1_{A\cap B} - 1_A\|
&\le \|1_A 1_B - (1_E)^2\| + \|1_E - 1_A\| \\
&= \|(1_A - 1_E)(1_B - 1_E) + (1_{E\cap A} - 1_E) + (1_{E\cap B} - 1_E)\| + \|1_E - 1_A\| \\
&\le \|(1_A - 1_E)(1_B - 1_E)\| + \|1_{E\cap A} - 1_E\| + \|1_{E\cap B} - 1_E\| + \|1_E - 1_A\| \\
&\le \|1_A - 1_E\| + \|1_{E\cap A} - 1_E\| + \|1_{E\cap B} - 1_E\| + \|1_E - 1_A\| < 4\varepsilon\,.
\end{aligned}
\]
We used the fact $|1_B(x) - 1_E(x)| \le 1$ in the third inequality. Since $A$ and $B$ mostly consist of horizontal and vertical strips respectively, $A \cap B$ approximates $A$ and $B$ only if there is not much area outside the strips. Thus $\lambda(A)$ and $\lambda(B)$ are close to 1, and we conclude that $\lambda(E) = 1$.

6 Homeomorphisms of the Circle

A homeomorphism of the unit circle is regarded as a transformation. If its rotation number is irrational, then the homeomorphism is equivalent to an irrational rotation. We ﬁnd numerically the conjugacy mapping that gives the equivalence of the homeomorphism and the corresponding rotation.

6.1 Rotation Number

Consider a homeomorphism $T$ of the unit circle $S^1$. For the sake of notational simplicity, the unit circle is identified with the interval $[0,1)$. Assume that $T$ preserves orientation, i.e., $T$ winds around the circle counterclockwise. Then there exists a continuous strictly increasing function $f : \mathbb{R} \to \mathbb{R}$ such that
(i) $f(x+1) = f(x) + 1$, $x \in \mathbb{R}$,
(ii) $Tx = \{f(x)\}$, $0 \le x < 1$.
We say that $f$ represents $T$. Note that $f$ is not unique: we may use $g$ defined by $g(x) = f(x) + k$ for any integer $k$. See Figs. 6.1, 6.2. The most elementary example is given by a translation $Tx = x + \theta \pmod 1$ (or, a rotation by $e^{2\pi i\theta}$ on the unit circle), which is represented by $f(x) = x + \theta$.

Fig. 6.1. Graphs of f(x) = x + a sin(2πx) + b, b = √2 − 1, for a = 0.1 (left) and a = 1/(2π) (right)

Fig. 6.2. Homeomorphisms of the circle represented by f(x) = x + a sin(2πx) + b, b = √2 − 1, for a = 0.1 (left) and a = 1/(2π) (right)

Let $f^{(0)}(x) = x$, $f^{(1)}(x) = f(x)$, and $f^{(n)}(x) = f(f^{(n-1)}(x))$, $n \ge 1$. Observe that $f^{(n)}$ is monotone and $f^{(n)}(x+k) = f^{(n)}(x) + k$ for every $k \in \mathbb{Z}$.

Theorem 6.1. Let $T$ be an orientation preserving homeomorphism of the unit circle represented by $f : \mathbb{R} \to \mathbb{R}$. Define
\[ \rho(T, f) = \lim_{n\to\infty} \frac{f^{(n)}(x) - x}{n}\,. \]
Then the limit exists and does not depend on $x$.

Proof. We follow the argument given in [CFS]. There are two possibilities depending on the existence of a fixed point of $T$.
Case I. Suppose that there exists $j \in \mathbb{Z}$, $j \ne 0$, such that $T^j x_0 = x_0$ for some $x_0$. If $j < 0$, then $x_0 = T^{-j}x_0$, and we may assume $j > 0$. Since $f^{(j)}(x_0) = x_0 \pmod 1$, there exists $k \in \mathbb{Z}$ such that $f^{(j)}(x_0) = x_0 + k$. Hence
\[ f^{(2j)}(x_0) = f^{(j)}(f^{(j)}(x_0)) = f^{(j)}(x_0 + k) = f^{(j)}(x_0) + k = x_0 + 2k\,, \]
and in general we have $f^{(qj)}(x_0) = x_0 + qk$ for $q \in \mathbb{N}$. Take an integer $n$. Then $n = qj + r$, $0 \le r < j$. Hence
\[ f^{(n)}(x_0) = f^{(qj+r)}(x_0) = f^{(r)}(f^{(qj)}(x_0)) = f^{(r)}(x_0 + qk) = f^{(r)}(x_0) + qk\,, \]
and so we have
\[ \lim_{n\to\infty} \frac{f^{(n)}(x_0)}{n} = \lim_{n\to\infty} \Bigl( \frac{f^{(r)}(x_0)}{n} + \frac{qk}{qj + r} \Bigr) = \frac{k}{j} \]
since $q \to \infty$ as $n \to \infty$. In this case, the limit exists at $x_0$ and is rational.
Case II. Suppose that there exists no fixed point of $T^j$ for any $j$. Since the mapping $f^{(j)}(x) - x$ is continuous, its range $\{f^{(j)}(x) - x : x \in \mathbb{R}\}$ is a


connected subset of $\mathbb{R}$. Since $f^{(j)}(x) - x \notin \mathbb{Z}$, the range is included in an open interval of the form $(p, p+1)$ for some integer $p$ depending on $j$, and hence $p < f^{(j)}(x) - x < p + 1$ for every $x$. Thus $x + p < f^{(j)}(x) < x + p + 1$ for every $x$. Then
\[ f^{(\ell j)}(0) = f^{(j)}(f^{((\ell-1)j)}(0)) < f^{((\ell-1)j)}(0) + p + 1 < \cdots < \ell(p+1)\,. \]
Similarly,
\[ \ell p < \cdots < f^{((\ell-1)j)}(0) + p < f^{(j)}(f^{((\ell-1)j)}(0)) = f^{(\ell j)}(0)\,. \]
Hence for every $\ell \ge 1$
\[ \frac{p}{j} < \frac{f^{(\ell j)}(0)}{\ell j} < \frac{p+1}{j}\,. \]
It follows that
\[ \Bigl| \frac{f^{(\ell j)}(0)}{\ell j} - \frac{f^{(j)}(0)}{j} \Bigr| < \frac{1}{j}\,. \]
Switching the roles of $j$ and $\ell$, we have
\[ \Bigl| \frac{f^{(\ell j)}(0)}{\ell j} - \frac{f^{(\ell)}(0)}{\ell} \Bigr| < \frac{1}{\ell}\,. \]
Thus
\[ \Bigl| \frac{f^{(j)}(0)}{j} - \frac{f^{(\ell)}(0)}{\ell} \Bigr| \le \Bigl| \frac{f^{(j)}(0)}{j} - \frac{f^{(\ell j)}(0)}{\ell j} \Bigr| + \Bigl| \frac{f^{(\ell j)}(0)}{\ell j} - \frac{f^{(\ell)}(0)}{\ell} \Bigr| < \frac{1}{j} + \frac{1}{\ell}\,. \]
Therefore $\{f^{(j)}(0)/j\}_{j \ge 1}$ is a Cauchy sequence, and it converges. By Cases I and II we now know that there is at least one point $x_0 \in \mathbb{R}$ where the limit exists. Take an arbitrary point $x \in \mathbb{R}$. Then we can find $J \in \mathbb{Z}$ depending on $x$ such that
\[ x_0 + J \le x < x_0 + J + 1\,. \]
Since $f^{(n)}$ is monotone, we have $f^{(n)}(x_0 + J) \le f^{(n)}(x) < f^{(n)}(x_0 + J + 1)$ and hence
\[ f^{(n)}(x_0) + J \le f^{(n)}(x) < f^{(n)}(x_0) + J + 1\,. \]
Since
\[ \Bigl| \frac{f^{(n)}(x)}{n} - \frac{f^{(n)}(x_0)}{n} \Bigr| < \frac{|J| + 1}{n} \to 0 \]
as $n \to \infty$, the limit exists at every point and is independent of $x$.


Remark 6.2. In the proof of Theorem 6.1 we have shown that if T^j x0 = x0 for some j ≥ 1 then ρ(T, f) = k/j for some k ≥ 1. For example, consider the circle homeomorphism T represented by

f(x) = x + 0.1 sin(2πx) + 1/2 .

Then f(0) = 1/2 and f(1/2) = 1. Hence T^2(0) = 0, and so ρ(T, f) = 1/2. See Fig. 6.3 where the simulation is done with a point x0 ∉ {0, 1/2}. See also Fig. 6.12.
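The claim ρ(T, f) = 1/2 is easy to confirm numerically by iterating a lift of T, in the spirit of Maple Program 6.8.1 below. Here is a Python sketch (Python rather than Maple, so it runs without a CAS; the starting point 0.3 and the step count are arbitrary illustrative choices):

```python
import math

def f(x, a=0.1, b=0.5):
    # a lift of the circle homeomorphism: f(x + 1) = f(x) + 1
    return x + a*math.sin(2.0*math.pi*x) + b

def rotation_number(x0=0.3, n=10_000):
    # approximate rho(T, f) = lim (f^(n)(x0) - x0)/n at a finite n
    x = x0
    for _ in range(n):
        x = f(x)
    return (x - x0)/n
```

For b = 1/2 the estimate settles at 1/2 regardless of the starting point, in agreement with Remark 6.2.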

Fig. 6.3. Plots of f^(n)(x0)/n, 1 ≤ n ≤ 30, (left) and (f^(1000)(x) − x)/1000, 0 ≤ x ≤ 1, (right) where f(x) = x + 0.1 sin(2πx) + 0.5

Theorem 6.3. Let T be an orientation preserving homeomorphism of the unit circle represented by f : R → R. Then ρ(T, f) is rational if and only if T^n has a fixed point for some n ≥ 1.

Proof. It suffices to show that if ρ(T, f) is rational then T^n has a fixed point for some n ≥ 1. The other direction has already been proved in the course of proving Theorem 6.1, as explained in Remark 6.2.

First consider the special case ρ(T, f) = 0. We will show that T x0 = x0 for some x0. Suppose not. Then f(x) − x ∉ Z for any x. Since f(x) − x is continuous, either f(x) − x > 0 for every x or f(x) − x < 0 for every x. If f(x) > x for every x, then f(0) > 0. Since f is increasing, so is f^(n) for every n ≥ 1. Hence

f^(n+1)(0) = f^(n)(f(0)) > f^(n)(0) .

Therefore

f^(n)(0) > f^(n−1)(0) > · · · > f^(2)(0) > f(0) > 0 .

If f^(n)(0) → ∞ as n → ∞, then f^(N)(0) > 1 for some N ≥ 1. Hence

f^(2N)(0) = f^(N)(f^(N)(0)) > f^(N)(1) = f^(N)(0) + 1 > 2 .

In general, f^(kN)(0) > k. Hence

ρ(T, f) = lim_{k→∞} f^(kN)(0)/(kN) ≥ lim_{k→∞} k/(kN) = 1/N > 0 ,

which is a contradiction. It follows that {f^(n)(0)}_{n=1}^∞ is bounded, and hence it converges to some x0. Then

f(x0) = f( lim_{n→∞} f^(n)(0) ) = lim_{n→∞} f(f^(n)(0)) = lim_{n→∞} f^(n+1)(0) = x0 .

If f(x) < x for every x, then f(0) < 0 and we proceed similarly.

Now let ρ(T, f) = p/q for p, q ∈ N, (p, q) = 1, q ≥ 2. Note that g(x) = f^(q)(x) − p represents T^q and

g^(2)(x) = f^(q)(g(x)) − p = f^(q)(f^(q)(x) − p) − p = f^(2q)(x) − 2p .

In general, g^(n)(x) = f^(nq)(x) − np, and hence

lim_{n→∞} g^(n)(x)/n = lim_{n→∞} f^(nq)(x)/n − p = q lim_{n→∞} f^(nq)(x)/(nq) − p = q (p/q) − p = 0 .

It follows that ρ(T^q, g) = 0. By the preceding argument for the first case, we conclude that T^q has a fixed point.

Definition 6.4. Observe that if g(x) = f(x) + k for some k ∈ Z, then

ρ(T, g) = ρ(T, f) + k .

Define the rotation number ρ(T) of T by

ρ(T) = ρ(T, f)  (mod 1)

where f is any function representing T. Among all functions representing T there is a unique function f such that 0 ≤ f(0) < 1, which is assumed throughout the rest of the chapter. In this case, ρ(T) = ρ(T, f). For a simulation see Fig. 6.4 and Maple Program 6.8.1.

Remark 6.5. Theoretically, there is no difference between the limits of

(f^(n)(x) − x)/n   and   f^(n)(x)/n

as n → ∞. For finite n, however, there is a very small difference that cannot be ignored in computer experiments. Even for n = 1000 this small difference may cause a serious problem. See Fig. 6.4 and Maple Program 6.8.8.

Fig. 6.4. Plots of f^(n)(0)/n, 1 ≤ n ≤ 30, (left) and (f^(1000)(x) − x)/1000 at x = T^j(0), 1 ≤ j ≤ 100, (right) where f(x) = x + 0.1 sin(2πx) + √2 − 1. The horizontal line indicates the sample average

6.2 Topological Conjugacy and Invariant Measures

Let X be a compact metric space. If T : X → X is continuous, then there exists a Borel probability measure invariant under T. For, if we take an arbitrary point x0 ∈ X, we then define µn by

µn = (1/n) Σ_{k=0}^{n−1} δ_{T^k x0} ,

where δa is the Dirac measure concentrated on a. Measures can be regarded as bounded linear functionals on the space of continuous functions on X, and the set of all measures µn has a limit point, which can be shown to be invariant under T. Therefore, any homeomorphism of the circle has an invariant measure. For the details, consult Chap. 1, Sect. 8 in [CFS].

Definition 6.6. Let T : X → X be a homeomorphism of a topological space X. Then T is said to be minimal if {T^n x : n ∈ Z} is dense in X for every x ∈ X.

Theorem 6.7. Let T : S¹ → S¹ be a minimal homeomorphism. If µ is a T-invariant probability measure on S¹, then µ(A) > 0 for any nonempty open subset A.

Proof. First, observe that there exists x0 ∈ S¹ such that µ(U) > 0 for any neighborhood U of x0. If not, for any x ∈ S¹ there would exist an open neighborhood Vx of x such that µ(Vx) = 0. Since

S¹ = ∪_{x∈S¹} Vx ,

the compactness of S 1 implies that there are ﬁnitely many points x1 , . . . , xn satisfying S 1 = Vx1 ∪ · · · ∪ Vxn . Hence µ(S 1 ) ≤ µ(Vx1 ) + · · · + µ(Vxn ) = 0, which is a contradiction.

Next, take a nonempty open subset A. Then there exists n such that T^n x0 ∈ A, i.e., x0 ∈ T^{−n}A. Since T is a homeomorphism, T^{−n}A is an open neighborhood of x0. Hence µ(T^{−n}A) > 0. Since µ is T-invariant, µ(A) = µ(T^{−n}A) > 0.

Remark 6.8. Take a concrete example given by a family of homeomorphisms Ta,b of the circle represented by

f(x) = x + a sin(2πx) + b ,   |a| < 1/(2π) .

Then f′(x) > 0 and f(x + 1) = f(x) + 1. If b = 1/2, then there exist two invariant measures with their supports on two sets of periodic points of period 2 given by {0, 1/2} and {x1 ≈ 0.2, T(x1) ≈ 0.8}. (See Fig. 6.5 where we see that T^10(x) is close to one of these four points for most x.) Hence the rotation number is equal to 1/2 by Remark 6.2.
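Which of the two period-2 orbits attracts a generic orbit can be read off from the derivative of T² along each cycle; for a = 0.1, b = 1/2 the cycle {0, 1/2} has multiplier (1 + 0.2π)(1 − 0.2π) < 1. A short Python check (the starting point 0.3 and the iteration count are arbitrary choices):

```python
import math

A, B = 0.1, 0.5

def T(x):
    # the circle homeomorphism T_{a,b}, reduced mod 1
    return (x + A*math.sin(2.0*math.pi*x) + B) % 1.0

def df(x):
    # derivative of the lift f(x) = x + A*sin(2*pi*x) + B
    return 1.0 + 2.0*math.pi*A*math.cos(2.0*math.pi*x)

mult = df(0.0)*df(0.5)     # multiplier of T^2 along the cycle {0, 1/2}

x = 0.3                    # a generic starting point
for _ in range(200):
    x = T(x)
dist = min(x, abs(x - 0.5), 1.0 - x)   # circle distance to the cycle {0, 1/2}
```

Since mult < 1, a generic orbit is absorbed by {0, 1/2}; the other period-2 orbit is repelling and shows up in Fig. 6.5 only at the crossings between the plateaus of T^10.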

Fig. 6.5. y = Tx, y = T²x and y = T^10 x where T is represented by f(x) = x + 0.1 sin(2πx) + 0.5 (from left to right)

For other values of b, invariant measures obtained with a = 0.1 are shown in Fig. 6.6.

Fig. 6.6. Invariant measures of homeomorphisms of the circle represented by f(x) = x + 0.1 sin(2πx) + b for b = 1/3, b = 1/4 and b = 1/5 (from left to right)


The invariant measures of Ta,b and Ta,1−b are symmetric with respect to x = 1/2 since 1 − Ta,b(1 − x) = Ta,1−b(x). See Fig. 6.7 and Maple Program 6.8.2.
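The symmetry identity 1 − Ta,b(1 − x) = Ta,1−b(x) can be verified pointwise on the circle; a small Python check over a grid (the grid size and the value b = 0.484 are illustrative, and distances are measured mod 1):

```python
import math

def T(x, a, b):
    # T_{a,b} reduced mod 1
    return (x + a*math.sin(2.0*math.pi*x) + b) % 1.0

def circle_dist(u, v):
    d = abs(u - v) % 1.0
    return min(d, 1.0 - d)

a, b = 0.1, 0.484
err = max(circle_dist((1.0 - T(1.0 - k/97.0, a, b)) % 1.0,
                      T(k/97.0, a, 1.0 - b))
          for k in range(97))
# err stays at floating point round-off level
```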

Fig. 6.7. Invariant measures of homeomorphisms represented by f(x) = x + 0.1 sin(2πx) + b (left) and f(x) = x + 0.1 sin(2πx) + 1 − b (right) for b = 0.484

Definition 6.9. Two homeomorphisms T1 and T2 of a topological space X are said to be topologically conjugate if there exists a homeomorphism φ of X such that φ ∘ T1 = T2 ∘ φ. Such a mapping φ is called a topological conjugacy:

        T1
   X ------> X
   |         |
 φ |         | φ
   v         v
   X ------> X
        T2

In the following theorem an invariant measure of a homeomorphism of the unit circle plays a crucial role in constructing a topological conjugacy.

Theorem 6.10. Let T be a homeomorphism of the circle with an irrational rotation number θ. Take a Borel probability measure µ on S¹ invariant under T. Define the cumulative density function φ : S¹ → S¹ by

φ(x) = µ([0, x]) .

Then we have the following:
(i) φ is a continuous, monotonically increasing, onto function.
(ii) φ satisfies φ ∘ T = Rθ ∘ φ where Rθ is the rotation by θ, i.e., the following diagram is commutative:

        T
   S¹ -----> S¹
   |         |
 φ |         | φ
   v         v
   S¹ -----> S¹
        Rθ

(iii) For each x ∈ S¹, φ⁻¹(x) is either a point or a closed interval in S¹.
(iv) φ is a homeomorphism if and only if T is minimal.


Proof. (i) If there were a point x0 ∈ S¹ such that µ({x0}) > 0, then the invariance under T would imply that µ({T^n x0}) > 0 for every n, which would be impossible unless T^n x0 = x0 for some n. This would imply that the rotation number is rational, which contradicts Theorem 6.3. Hence µ is a continuous measure, and so φ is a continuous function. Observe that φ(0) = 0 and φ is onto.

(ii) Note that µ([a, c]) = µ([a, b]) + µ([b, c]) (mod 1) for any choice of a, b, c ∈ S¹. Here [x, y] are intervals in the circle and we use the convention that [y, x] = [y, 1) ∪ [0, x] for 0 ≤ x < y < 1. Hence

φ(Tx) = µ([0, Tx]) = µ([0, T0]) + µ([T0, Tx]) (mod 1)
      = µ([0, T0]) + µ([0, x]) (mod 1)
      = µ([0, T0]) + φ(x) (mod 1) .

Let f : R → R be the function representing T with 0 ≤ f(0) < 1, and consider 0, f(0), f^(2)(0), . . . , f^(n)(0). Then

[0, f^(n)(0)] = [0, f(0)] ∪ [f(0), f^(2)(0)] ∪ · · · ∪ [f^(n−1)(0), f^(n)(0)] .

Define an infinite measure ν on R by

ν(A) = Σ_{j=−∞}^{∞} µ(A ∩ [j, j + 1) − j)

for A ⊂ R, where B − j = {b − j : b ∈ B}, B ⊂ R. Note that ν([0, j]) = j and ν([f^(j−1)(0), f^(j)(0)]) = µ([0, T0]) for every j ≥ 1. Hence

ν([0, f^(n)(0)]) = n µ([0, T0]) .

Let ⌊t⌋ denote the greatest integer less than or equal to t ∈ R. Then

θ = lim_{n→∞} f^(n)(0)/n
  = lim_{n→∞} ⌊f^(n)(0)⌋/n
  = lim_{n→∞} ν([0, ⌊f^(n)(0)⌋])/n
  = lim_{n→∞} ν([0, f^(n)(0)])/n = µ([0, T0]) .

Therefore φ(Tx) = φ(x) + θ (mod 1).


(iii) Take x ∈ S¹ and let t0 ∈ φ⁻¹(x). Choose t ∈ S¹ with t0 < t. Then t ∈ φ⁻¹(x) if and only if µ([t0, t]) = 0, and in this case [t0, t] ⊂ φ⁻¹(x). Define

b = sup{t ≥ t0 : µ([t0, t]) = 0} .

Similarly, define

a = inf{t ≤ t0 : µ([t, t0]) = 0} .

Then [a, b] = φ⁻¹(x).

(iv) If φ is a homeomorphism, then all the topological properties of Rθ are carried over to T. Since θ is irrational, every orbit of Rθ is dense, and hence every orbit of T is dense. Now assume that T is minimal. To show that φ is a homeomorphism it is enough to show that φ is one-to-one. (See Fact 1.3.) Take x1 < x2. Theorem 6.7 implies that φ(x2) − φ(x1) = µ([x1, x2]) > 0.

For a simulation see Fig. 6.8 and Maple Program 6.8.3. The cumulative density function (cdf) is the topological conjugacy φ. Compare it also with the one given in Fig. 6.13.

2 1 0

x

1

0

x

1

Fig. 6.8. An invariant pdf (left) and the corresponding cdf (right) for T x = x + 0.15 sin(2πx) + 0.33 (mod 1). Compare the cdf with the one given in Fig. 6.13

In 1932 A. Denjoy proved the following theorem.

Fact 6.11 (Denjoy Theorem) Let T be a homeomorphism of the unit circle represented by f with an irrational rotation number θ. If f is continuously differentiable with f′ > 0 and if f′ is of bounded variation on the circle, then there is a topological conjugacy φ such that φ ∘ T ∘ φ⁻¹ = Rθ where Rθ is the rotation by θ.

For the proof and related results, consult [Ar2], [CFS], [Dev], [KH], [MS]. J.-C. Yoccoz [Yoc] proved that if an analytic homeomorphism of the circle preserves orientation and has no periodic point, then it is conjugate to a rotation.


6.3 The Cantor Function and Rotation Number

Let Rt denote the rotation on the circle by t. For example, R1 is the identity mapping. If T is a circle homeomorphism without periodic points represented by f and if 0 < t, then

ρ(T, f) < ρ(Rt ∘ T, f + t) .

See [MS] for the proof. Hence the mapping t ↦ ρ(Rt ∘ T, f + t) is monotonically increasing. Furthermore, Theorem 6.3 implies that it is strictly increasing at t if ρ(Rt ∘ T, f + t) is irrational.

Let C be the Cantor set. A point x ∈ C is of the form x = Σ_{j=1}^{∞} aj 3^{−j}, aj ∈ {0, 2}. Define h : C → [0, 1] by

h(x) = Σ_{j=1}^{∞} bj 2^{−j}   where   bj = aj/2 .

Note that h is onto [0, 1]. Take x, y ∈ C such that x < y. If x and y are two endpoints of one of the middle third open intervals removed from [0, 1] to construct C, then h(x) = h(y) = k/2^ℓ for some integers k, ℓ. If x and y are not two endpoints of one of the middle third open intervals, then h(x) < h(y). We extend h to a continuous mapping from [0, 1] onto itself by defining h to be constant on each removed middle third interval. For example, h(1/3) = h(2/3) = 1/2 and h(1/2) = 1/2. The function thus obtained is called the Cantor function or the Cantor–Lebesgue function. Observe that the Cantor function has derivative equal to zero at almost every x. See Fig. 6.9 and Maple Programs 6.8.4, 6.8.5.
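The digit description of h translates directly into a short program, much as in Maple Program 6.8.4: read off ternary digits greedily, stop when the digit 1 appears (the point then lies in a removed middle third), and emit binary digits. A Python sketch (the cutoff depth is an arbitrary precision choice; valid for 0 ≤ x < 1):

```python
def cantor(x, depth=40):
    # value of the Cantor function at x, read off the ternary expansion of x
    total, p = 0.0, 0.5
    for _ in range(depth):
        d = int(3.0*x)          # next ternary digit of x
        x = 3.0*x - d
        if d == 1:              # x lies in a removed middle third interval
            return total + p    # h is constant, of the form k/2^l, there
        total += (d//2)*p       # ternary digit 2 -> binary digit 1
        p /= 2.0
    return total                # x is (numerically) in the Cantor set
```

For instance cantor(1/3) = cantor(2/3) = 1/2, and cantor(1/4) = 1/3 since 1/4 = 0.0202...₃.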

Fig. 6.9. The Cantor function

Fix a, |a| < 1/(2π). Consider the function ρ : [0, 1] → [0, 1] where ρ(b) is the rotation number of f(x) = x + a sin(2πx) + b. It is known that y = ρ(b) is a staircase function, i.e., ρ is a variant of the Cantor function. More precisely, for every rational number r, ρ⁻¹(r) has nonempty interior. See Fig. 6.10 and Maple Program 6.8.6.

Fig. 6.10. The Cantor type function defined by rotation numbers ρ(b) of f(x) = x + 0.15 sin(2πx) + b, 0 ≤ b < 1

6.4 Arnold Tongues

Consider the rotation number ρ(a, b) of f(x) = x + a sin(2πx) + b as a function of a and b. In Fig. 6.11 we plot z = ρ(a, b) on the rectangle 0 ≤ a < 1/(2π), 0 ≤ b ≤ 1.

Fig. 6.11. Rotation numbers of f(x) = x + a sin(2πx) + b

To ﬁnd the region {(a, b) : ρ(a, b) = 0} we check the existence of the ﬁxed points of f . (See Remark 6.2.) From x + a sin(2πx) + b = x we have a sin(2πx) = −b, and so a ≥ b. To ﬁnd the region {(a, b) : ρ(a, b) = 1} we solve x + a sin(2πx) + b = x + 1 , that is, a sin(2πx) = 1 − b. Hence a ≥ 1 − b.
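These two triangular regions are easy to confirm numerically: with a = 0.15, any b < a yields ρ(a, b) = 0 (the lift then has an attracting fixed point, so f^(n)(0) stays bounded), and any b > 1 − a yields ρ(a, b) = 1. A Python sketch (the parameter values and iteration count are illustrative):

```python
import math

def rotation_number(a, b, n=5000):
    # estimate rho(a, b) via the lift f(x) = x + a*sin(2*pi*x) + b
    x = 0.0
    for _ in range(n):
        x = x + a*math.sin(2.0*math.pi*x) + b
    return x/n

a = 0.15
r0 = rotation_number(a, 0.10)   # b < a: inside the left triangle, rho = 0
r1 = rotation_number(a, 0.95)   # b > 1 - a: inside the right triangle, rho = 1
```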


Consider the set {(a, b) : ρ(a, b) ∈ Q}. The level sets for rational rotation numbers are called the Arnold tongues after V.I. Arnold. In Fig. 6.12 the boundaries of the Arnold tongues are plotted for several rational numbers with small denominators (≤ 5). The left and right triangular regions correspond to ρ = 0 and ρ = 1, respectively. The tongues corresponding to the rotation numbers

1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5

are given from left to right. It is known that the Lebesgue measure of the set {b ∈ [0, 1] : ρ(a, b) ∈ Q} converges to 0 as a → 0. Consult [Ar1], [Ot] for more information and see Maple Program 6.8.7.

Fig. 6.12. Arnold tongues

6.5 How to Sketch a Conjugacy Using Rotation Number

We present a method of plotting the graph of a topological conjugacy based on the mathematical pointillism. Let T be a homeomorphism of the circle with irrational rotation number θ. Let Rt be the rotation by t on the unit circle. Suppose that we have the relation φ ∘ T = Rθ ∘ φ for some homeomorphism φ : S¹ → S¹. We may assume that φ(0) = 0. Otherwise, put t = φ(0) and ψ = R−t ∘ φ. Then φ = Rt ∘ ψ. Hence

(Rt ∘ ψ) ∘ T = Rθ ∘ (Rt ∘ ψ) ,
ψ ∘ T = R−t ∘ Rθ ∘ Rt ∘ ψ = R−t+θ+t ∘ ψ = Rθ ∘ ψ .

Observe that ψ(0) = 0.


Now we describe how to find a mapping φ satisfying the above relation without using Theorem 6.10. First, we look for φ such that φ(0) = 0. Let G be the graph of y = φ(x), i.e., G = {(x, φ(x)) : x ∈ S¹}. Since φ ∘ T = Rθ ∘ φ, we have φ ∘ T^n = Rnθ ∘ φ for n ∈ Z. Hence φ(T^n x) = {φ(x) + nθ}. Take x = 0. Then φ(T^n 0) = {nθ} and (T^n 0, {nθ}) ∈ G. Define γ : S¹ → G by

γ(y) = (φ⁻¹(y), y) .

If φ is a homeomorphism, then γ is also a homeomorphism and

γ({nθ}) = (T^n 0, {nθ}) .

Since the points {nθ}, n ≥ 0, are dense in S¹, the points γ({nθ}), n ≥ 0, are dense in G, and they uniquely define φ(x) on S¹.

For a simulation consider homeomorphisms of the circle represented by f(x) = x + a sin(2πx) + b, |a| < 1/(2π). See Fig. 6.13 where 1000 points are plotted on the graph of y = φ(x). Compare it with the cumulative density function in Fig. 6.8. See Maple Program 6.8.8. For another experiment with a circle homeomorphism see Maple Program 6.8.9.

Fig. 6.13. The points (T^n 0, {nθ}), n ≥ 0, approximate the graph of the topological conjugacy φ of a homeomorphism represented by f(x) = x + 0.15 sin(2πx) + 0.33
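The construction can be sanity-checked against a case where φ is known in closed form, as in Maple Program 6.8.9: take an explicit piecewise-linear circle homeomorphism φ with φ(0) = 0, define T = φ⁻¹ ∘ Rθ ∘ φ, and verify that the points (T^n 0, {nθ}) land on the graph of φ. A Python sketch (the branch 3x − 2 on [3/4, 1] completes φ under the assumption that it is piecewise linear with slopes 1/3 and 3):

```python
import math

THETA = math.sqrt(2.0) - 1.0      # irrational rotation number

def phi(x):
    # piecewise-linear circle homeomorphism with phi(0) = 0
    return x/3.0 if x < 0.75 else 3.0*x - 2.0

def psi(y):
    # inverse of phi
    return 3.0*y if y < 0.25 else (y + 2.0)/3.0

def T(x):
    # T = psi o R_theta o phi, so T has rotation number theta
    return psi((phi(x) + THETA) % 1.0)

def circle_dist(u, v):
    d = abs(u - v) % 1.0
    return min(d, 1.0 - d)

x, err = 0.0, 0.0
for n in range(1, 60):
    x = T(x)
    # the point (T^n 0, {n*theta}) should lie on the graph y = phi(x)
    err = max(err, circle_dist(phi(x), (n*THETA) % 1.0))
```

Here err stays at round-off level, confirming that the plotted points trace out the graph of the conjugacy.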

6.6 Unique Ergodicity

Theorem 6.12. Let X be a measurable space with a σ-algebra A and let T : X → X be a measurable transformation. If there exists only one T-invariant probability measure µ on (X, A), then T is ergodic with respect to µ.

Proof. Suppose that there exists A ∈ A such that µ(A) > 0 and T⁻¹A = A. Define a conditional measure ν by ν(B) = µ(B ∩ A)/µ(A) for B ∈ A. Then ν(X) = 1 and ν(T⁻¹B) = ν(B). Hence ν = µ, and so µ(B) = µ(B ∩ A)/µ(A) for every B ∈ A. If we take B = X ∖ A, then µ(X ∖ A) = 0. Thus µ(A) = 1, and the ergodicity of T is proved.


Definition 6.13. A continuous transformation T on a compact metric space X is said to be uniquely ergodic if there exists only one T-invariant Borel probability measure on X. Note that T is then ergodic with respect to the unique T-invariant measure.

Example 6.14. A translation Tx = {x + θ} is uniquely ergodic if and only if θ is irrational.

Proof. First, suppose that θ is rational and of the form θ = p/q for some integers p, q ≥ 1 such that (p, q) = 1. Define a T-invariant probability measure on {0, 1/q, . . . , (q−1)/q} ⊂ S¹ by assigning probability 1/q to each point. Then we have a T-invariant probability measure other than Lebesgue measure, and hence T is not uniquely ergodic.

Next, assume that θ is irrational. If µ is a T-invariant probability measure, then

∫₀¹ e^{−2πinx} dµ = ∫₀¹ e^{−2πin(x+θ)} dµ ,

and hence µ̂(n) = e^{−2πinθ} µ̂(n) for every n. If n ≠ 0, then µ̂(n) = 0. Hence µ is Lebesgue measure by the uniqueness theorem in Subsect. 1.4.3.
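Unique ergodicity of the irrational translation is visible numerically: Birkhoff averages of the character e^{2πix} along any orbit converge to its Lebesgue integral, which is 0; for a rational angle they would not. A Python sketch (the angle, starting point, and sample size are arbitrary illustrative choices):

```python
import cmath
import math

theta = math.sqrt(2.0) - 1.0     # an irrational angle
N = 10_000

x, total = 0.3, 0.0 + 0.0j       # arbitrary starting point on the circle
for _ in range(N):
    x = (x + theta) % 1.0
    total += cmath.exp(2.0j*math.pi*x)
avg = abs(total)/N               # tends to 0 as N grows
```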

For more general transformations we have the following:

Theorem 6.15. If T : S¹ → S¹ is a homeomorphism with irrational rotation number, then it is uniquely ergodic.

Proof. First, we consider the case when T is minimal. Let µ be a T-invariant Borel probability measure. Define a mapping φ as in Theorem 6.10. Since T is minimal, φ is a homeomorphism. Let G be the graph of y = φ(x). Then the mapping γ : S¹ → G defined by γ(y) = (φ⁻¹(y), y) is a homeomorphism. See Sect. 6.5. The points γ({nθ}), n ≥ 0, uniquely define φ(x) on S¹ with the condition φ(0) = 0. Hence by Theorem 6.10 an invariant probability measure µ is uniquely determined.

Next, suppose that T is not minimal. Let K denote the closure of the points T^n 0, n ≥ 0, in S¹. Then φ is determined on K as in the previous case. Recall that φ is continuous and monotonically increasing. Observe that

S¹ ∖ K = ∪_j (aj, bj)

where the (aj, bj) are pairwise disjoint open intervals. Suppose that µ((aj, bj)) > 0 for some j. Then φ(aj) < φ(bj) and there would exist n such that

φ(aj) < {nθ} < φ(bj) .

This implies that {nθ} = φ(T^k 0) for some k and aj < T^k 0 < bj, which is a contradiction. Hence µ((aj, bj)) = 0 for every j, and φ is constant on each interval [aj, bj]. Thus φ is determined uniquely on S¹, and so is µ.

For a different proof see [CFS], [Wa1].


6.7 Poincaré Section of a Flow

A continuous flow {Ft : t ∈ R} on a metric space X is a one-parameter group of homeomorphisms Ft of X such that (i) Ft ∘ Fs = Ft+s for t, s ∈ R, (ii) F0 is the identity mapping on X, and (iii) (Ft)⁻¹ = F−t. We assume that (t, x) ↦ Ft(x) is continuous. A flow line is a curve t ↦ Ft(x), t ∈ R. It passes through x at t = 0. When an autonomous ordinary differential equation (du/dt, dv/dt) = W(u, v) is given for a vector field W, the solution curves of the differential equation are flow lines.

A circle homeomorphism can be obtained from a continuous flow on the torus X = S¹ × S¹. Let E = {v = 0} = S¹ × {0}, which is a circle obtained by taking the intersection of X with a plane perpendicular to X. See Fig. 6.14. Given a continuous flow on X, define a mapping T : E → E as follows: Take x ∈ E and consider the flow line starting from x. Let Tx be the first point where the flow line hits E. Then T is a homeomorphism of the unit circle E. The set E is called the Poincaré section and T the Poincaré mapping.

Fig. 6.14. A circle homeomorphism T is defined by a Poincaré section

The Poincaré section method reduces the dimension of the phase space under consideration, so that the problem becomes simpler and some of the properties of the continuous flow can be obtained by studying the discrete action of a mapping T. For example, the following statements are equivalent:
(i) T : E → E has a periodic point of period k ≥ 1.
(ii) There exists a closed flow line that winds around the torus k times in the v-direction.
See [DH] for a historical introduction to the Poincaré mapping in celestial mechanics, and consult [Tu] for accurate numerical computations.

Example 6.16. Fix w = (a, b) ∈ R², and define Ft(x) = x + tw (mod 1), where t ∈ R, x = (u, v) ∈ X. Then we have T(u, 0) = (u + a/b, 0) (mod 1), or, ignoring the second coordinate, Tu = u + a/b (mod 1). See Fig. 6.15. Consult [Si1] for converting a smooth flow on the torus into a linear flow with an irrational slope by making a suitable change of coordinates.
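Example 6.16 can be simulated by integrating the linear flow in small time steps and locating the first return to the section E = {v = 0} by interpolation; the return map should agree with the translation u ↦ u + a/b (mod 1). A Python sketch (the step size and flow direction are arbitrary choices):

```python
import math

a, b = 1.0, math.sqrt(2.0)       # direction w = (a, b) of the linear flow, b > 0
dt = 1e-4                        # integration time step

def poincare_return(u0):
    # follow the flow line from (u0, 0) until v first returns to 0 (mod 1)
    u, v = u0, 0.0
    while True:
        if v + b*dt >= 1.0:              # about to cross the section v = 0
            s = (1.0 - v)/b              # fractional step up to the crossing
            return (u + a*s) % 1.0
        u, v = (u + a*dt) % 1.0, v + b*dt

u1 = poincare_return(0.2)
predicted = (0.2 + a/b) % 1.0            # translation by a/b, as in Example 6.16
```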

Fig. 6.15. A translation mod 1 is defined by a Poincaré section of a linear flow. The bottom edge represents E = {(u, 0) ∈ X}

6.8 Maple Programs

Simulations with rotation numbers and conjugacy mappings are presented.

6.8.1 The rotation number of a homeomorphism of the unit circle

Calculate the rotation number of a circle homeomorphism.
> with(plots):
> evalf(1/(2*Pi));
0.15915494309189533576
Choose a constant a, |a| < 1/(2π).
> a:=0.1:
> b:=sqrt(2.)-1:
Take f(x) = x + a sin(2πx) + b. Use the command evalhf in evaluating transcendental functions.
> f:=x->evalhf(x+a*sin(2*Pi*x)+b):
Calculate the rotation number of T at a point x = 0.
> seed[0]:=0:
Choose the number of iterations in estimating the rotation number.
> L:=30:
> for n from 1 to L do seed[n]:=f(seed[n-1]): od:
Calculate the rotation number at seed[0].
> for j from 1 to L do
>   rho[j]:=(seed[j]-seed[0])/j:
> od:
Plot the approximation of the rotation number ρ(x) at x = 0.
> listplot([seq([n,rho[n]],n=1..L)],labels=["n"," "]);
See Fig. 6.4.


6.8.2 Symmetry of invariant measures

The following is a simulation of Remark 6.8.
> with(plots):
> Digits:=50:
Consider the β-transformation. See the left graph in Fig. 6.16.
> beta:=(1+sqrt(5.))/2:
> T:=x->frac(beta*x);
> plot(T(x),x=0..1);

Fig. 6.16. y = Tx (left) and an approximate invariant pdf with y = ρ(x) (right)

Define the invariant density ρ(x); for the golden mean β it is the Parry density, equal to (5 + 3√5)/10 on [0, 1/β) and (5 + √5)/10 on [1/β, 1).
> rho:=x->piecewise(x<1/beta,(5+3*sqrt(5.))/10,(5+sqrt(5.))/10):
Choose the number of bins and the sample size, and find the frequencies of the orbit points.
> Bin:=100: SampleSize:=10^5: seed:=evalf(Pi-3):
> for j from 1 to Bin do freq[j]:=0: od:
> for i from 1 to SampleSize do seed:=T(seed):
>   slot:=ceil(Bin*seed):
>   freq[slot]:=freq[slot]+1:
> od:
Compare ρ(x) with the approximate invariant density for T obtained by the Birkhoff Ergodic Theorem. See the right plot in Fig. 6.16.
> g3:=listplot([seq([(i-0.5)/Bin,freq[i]*Bin/SampleSize],i=1..Bin)]):
> g4:=plot(rho(x),x=0..1):
> display(g3,g4);
Define S using the symmetry.
> S:=x->1-T(1-x):
Draw the graph y = Sx. See the left graph in Fig. 6.17.
> plot(S(x),x=0..1);

Fig. 6.17. y = Sx (left) and its numerical approximation of the invariant pdf with y = ρ(1 − x) (right)

Find an approximate invariant pdf for S.
> for j from 1 to Bin do freq2[j]:=0: od:
> for i from 1 to SampleSize do
>   seed:=S(seed):
>   slot:=ceil(Bin*seed):
>   freq2[slot]:=freq2[slot]+1:
> od:
Compare ρ(1 − x) with the approximate invariant density for S.
> g7:=listplot([seq([frac((i-0.5)/Bin),freq2[i]*Bin/SampleSize],i=1..Bin)]):
> g8:=plot(rho(1-x),x=0..1):
> display(g7,g8);
See the right plot in Fig. 6.17.

6.8.3 Conjugacy and the invariant measure

The following is a simulation of Theorem 6.10.
> with(plots):
> a:=0.15: b:=0.33;
Define a circle homeomorphism.
> T:=x->frac(x+a*sin(2*Pi*x)+b):
> S:=10^6:
> Bin:=100:
Find the probability density function (pdf).
> for j from 1 to Bin do freq[j]:=0: od:
> seed:=0:
> for s from 1 to S do seed:=T(seed):
>   slot:=ceil(evalhf(Bin*seed)):
>   freq[slot]:=freq[slot]+1: od:


Plot the pdf. See the left graph in Fig. 6.8.
> listplot([seq([frac((i-0.5)/Bin),freq[i]*Bin/S],i=1..Bin)]);
Find the cumulative density function (cdf).
> cum[0]:=0:
> for i from 1 to Bin do cum[i]:=cum[i-1]+freq[i]: od:
Plot the cdf, which is the topological conjugacy.
> listplot([seq([frac((i-0.5)/Bin),cum[i]/S],i=1..Bin)]);
See the right graph in Fig. 6.8.

6.8.4 The Cantor function

Sketch the standard Cantor function arising from the Cantor ternary set.
> with(plots):
Choose the number of points to be plotted on the graph of the Cantor function.
> N:=2000:
For each xn, 1 ≤ n ≤ N, calculate the value yn of the Cantor function. An extra condition that j < 100 is needed to avoid an infinite loop. Without such a condition the do loop would not end if xn belongs to the Cantor set.
> for n from 1 to N do
>   x[n]:=evalhf(frac(n/N+Pi)):
>   seed:=x[n]:
>   j:=0:
>   a:='a':
>   while (a <> 1 and j < 100) do
>     j:=j+1:
>     a:=trunc(3.0*seed);
>     seed:=frac(3.0*seed):
>     b[j]:=a/2;
>   od:
>   y[n]:=add(b[i]*2^(-i*b[i]),i=1..j-1) + 2^(-j):
> od:
The points (xn, yn) approximate the graph of the Cantor function.
> pointplot([seq([x[n],y[n]],n=1..N)],symbolsize=1);
See Fig. 6.9.

6.8.5 The Cantor function as a cumulative density function

The Cantor function is obtained as a cdf of the probability measure, called the Cantor measure, supported on the Cantor set.
> with(plots): with(plottools):
> T:=x->frac(3.0*x):
> entropy:=log10(3.0);
entropy := 0.4771212547


Find a typical point seed[0] belonging to the Cantor set. Use N digits in the ternary expansion (d1, d2, d3, . . .), dj ∈ {0, 2}, of seed[0].
> N:=21000:
> 3.0^(-N);
0.2842175472 × 10^(−10019)
> Digits:=10020:
Generate a sequence of dj ∈ {0, 2}, 1 ≤ j ≤ N, with probability 1/2 for each.
> ran:=rand(0..1):
> for j from 1 to N do
>   d[j]:=ran()*2:
> od:
> seed[0]:=evalf(add(d[s]/3^s,s=1..N)):
Choose the size of samples. It is bounded by the number of iterations after which orbit points lose any meaningful significant digits. Consult Sect. 9.2.
> S:=trunc(Digits/entropy)-20;

S := 20980
> for i from 1 to S do
>   seed[i]:=T(seed[i-1]):
>   seed[i-1]:=evalf(seed[i-1],10):
> od:
> Digits:=10:
> Bin:=3^5:
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to S do
>   slot:=ceil(Bin*seed[j]):
>   freq[slot]:=freq[slot]+1:
> od:
> for i from 1 to Bin do
>   g[i]:=line([(i-0.5)/Bin,freq[i]*Bin/S],[(i-0.5)/Bin,0]):
> od:
> display(seq(g[i],i=1..Bin),labels=["x"," "]);
See Fig. 6.18.

Fig. 6.18. Approximation of the Cantor measure


Find the cumulative density function.
> cum[0]:=0:
> for i from 1 to Bin do
>   cum[i]:=cum[i-1]+freq[i]:
> od:
> listplot([seq([frac((i-0.5)/Bin),cum[i]/S],i=1..Bin)]);

6.8.6 Rotation numbers and a staircase function

Here is how to sketch a staircase function arising from rotation numbers of a circle homeomorphism.
> with(plots):
> a:=0.15;
Choose the number of iterations to find the rotation number.
> N:=500:
Decide the increment of b. It is given by 1/Step.
> Step:=1000:
For each value of b find the rotation number.
> for i from 1 to Step do
>   b:=evalhf((i-0.5)/Step):
>   f:=x->evalhf(x+a*sin(2*Pi*x)+b):
>   seed:=0:
>   for n from 1 to N do seed:=f(seed): od:
>   rho[i]:=seed/N:
> od:
Plot the staircase function in Fig. 6.10.
> pointplot([seq([(i-0.5)/Step,rho[i]],i=1..Step)],symbolsize=1);
For further experiments take the periodic function g(x) of period 1 given by the periodic extension of the left graph in Fig. 6.19 and consider f(x) = x + ag(x) + b, a = 0.5. It yields a staircase function on the right.

Fig. 6.19. The graph of g(x) on 0 ≤ x ≤ 1 (left) and a staircase function arising from f(x) = x + ag(x) + b, a = 0.5 (right)


6.8.7 Arnold tongues

The following is a modification of a C program written by B.J. Kim. Use evalhf for the hardware floating point arithmetic with double precision.
> with(plots):
Take f(x) = x + a sin(2πx) + b. Choose the number of iterations in calculating the rotation number.
> N:=1000:
Choose the number of intermediate points for the a-axis and the b-axis. To obtain Fig. 6.12 increase Num1.
> Num1:=35: Num2:=1000:
Calculate the increments of a and b.
> Inc_a:=evalhf(1/2/Pi/Num1):
> Inc_b:=evalhf(1/Num2):
Choose the rotation numbers corresponding to the Arnold tongues. Take rational numbers with denominators not greater than 5.
> [0,1,1/2,1/3,2/3,1/4,3/4,1/5,2/5,3/5,4/5];
Sort the rotation numbers in increasing order.
> Ratio:=sort(%);
Ratio := [0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1]
Count the number of elements in the sequence Ratio.
> M:=nops(Ratio);
M := 11
Choose an error bound for the estimation of the rotation number. If it is too large, then the plot would be meaningless. If it is too small, then there may be no plot. For small values we have to increase N.
> Error_bound:=0.001:
> for i from 1 to Num1 do
>   for j from 0 to Num2 do
>     seed:=0.0:
>     for k from 1 to N do
>       seed:=evalhf(seed+i*Inc_a*sin(2*Pi*seed)+j*Inc_b): od:
>     rho[j]:=evalhf(seed/N):
>   od:
>   for m from 1 to M do S:=1:
>     for g from 0 to Num2 do
>       if abs(rho[g]-Ratio[m]) < Error_bound then
>         b_rational[S]:=g/Num2:
>         S:=S+1: fi:
>     od:
>     min_b[i,m]:=b_rational[1]:
>     max_b[i,m]:=b_rational[S-1]:
>   od:
> od:


Find the left and right boundaries of the Arnold tongues. For small values of Num1, it is better to use listplot to connect points.
> for m from 1 to M do
>   L[m]:=pointplot({seq([min_b[i,m],i*Inc_a],i=1..Num1)}):
>   R[m]:=pointplot({seq([max_b[i,m],i*Inc_a],i=1..Num1)}):
> od:
> g1:=display(seq([L[m],R[m]],m=1..M)):
> g2:=plot(0,x=0..1,y=0..1/2/Pi):
> display(g1,g2,labels=["b","a"]);
See Fig. 6.12.

6.8.8 How to sketch a topological conjugacy of a circle homeomorphism I

Now we explain how to simulate the idea in Sect. 6.5.
> with(plots):
> a:=0.15: b:=0.33:
Define a function f representing a circle homeomorphism T.
> f:=x->evalhf(x+a*sin(2*Pi*x)+b):
Choose an orbit of length 2000.
> SampleSize:=2000:
> orbit[0]:=0:
> for i from 1 to SampleSize do
>   orbit[i]:=frac(f(orbit[i-1])): od:
Calculate the rotation number of T at orbit points.
> Length:=2000:
> for i from 1 to SampleSize do
>   seed[0]:=orbit[i]:
>   for n from 1 to Length do seed[n]:=f(seed[n-1]): od:
>   rotation[i]:=(seed[Length]-seed[0])/Length:
> od:
> average:=add(rotation[i],i=1..SampleSize)/SampleSize;
average := 0.3143096817
> rotation_num:=average:
A sharp estimate for the rotation number ρ(T) is essential in plotting points on the graph y = φ(x). A slight change in the estimate would destroy the plot completely. This is why we adopted the version in Definition 6.4. If we had used f^(n)(x)/n as an approximation of ρ(T), then we would have slightly overestimated values, which would not be good enough for our purpose.
> seed[0]:=0:
> N:=2000:
This is the number of points on the graph of the topological conjugacy.
> for n from 1 to N do seed[n]:=frac(f(seed[n-1])): od:


Plot points on the graph of the topological conjugacy using the idea of the mathematical pointillism. See Fig. 6.13.
> pointplot([seq([seed[n],frac(n*rotation_num)],n=1..N)]);
Change the value of rotation_num slightly and observe what happens. The method would not work! Consult Remark 6.5.
> rotation_num:=average + 0.0001;
rotation_num := 0.3144096817
> pointplot([seq([seed[n],frac(n*rotation_num)],n=1..N)]);
See Fig. 6.20.

y

0

x

1

Fig. 6.20. The points (T n 0, {nθ}) with a less precise value for θ

6.8.9 How to sketch a topological conjugacy of a circle homeomorphism II

Here is another example of how to find a topological conjugacy φ numerically for a circle homeomorphism T based on the mathematical pointillism. We consider a case when φ is already known and compare it with a numerical solution.
> with(plots):
> Digits:=20:
Choose an irrational rotation number 0 < θ < 1.
> theta:=frac(sqrt(2.)):
Define an irrational rotation R to which T is conjugate through φ.
> R:=x->frac(x+theta);
R := x → frac(x + θ)
Draw the graph y = R(x). See Fig. 6.21.
> plot(R(x),x=0..1);
Define the topological conjugacy φ.
> phi:=x->piecewise(x >= 0 and x < 3/4, (1/3)*x, x >= 3/4 and x <= 1, 3*x-2):
> plot(phi(x),x=0..1);
Find ψ = φ⁻¹.
> psi:=x->piecewise(x >= 0 and x < 1/4, 3*x, x >= 1/4 and x <= 1, (x+2)/3):
> plot(psi(x),x=0..1);

> T:=x->psi(R(phi(x)));
T := x → ψ(R(φ(x)))
Draw the graph y = T x. See Fig. 6.23.
> plot(T(x),x=0..1);
Recall that θ is the rotation number of T.
> seed[0]:=0:

Fig. 6.23. y = T x

Choose the number of points on the graph of the conjugacy. To emphasize the key idea we choose a small number here.
> N:=30:
> for n from 1 to N do seed[n]:=T(seed[n-1]): od:
Plot the points of the form (T^n 0, {nθ}), which are on the graph y = φ(x).
> pointplot([seq([seed[n],frac(n*theta)],n=1..N)]);
See Fig. 6.24.

Fig. 6.24. Plot of points (T^n 0, {nθ})

Now we apply the Birkhoff Ergodic Theorem.
> SampleSize:=10^4:
Since the pdf is piecewise continuous, it is better to take sufficiently many bins to obtain a nice graph.
> Bin:=200:
> for s from 1 to Bin do freq[s]:=0: od:
> start:=evalf(Pi-3):
> for i from 1 to SampleSize do
> start:=T(start):
> slot:=ceil(Bin*start):
> freq[slot]:=freq[slot]+1:
> od:


6 Homeomorphisms of the Circle

Sketch the invariant pdf ρ(x) based on the preceding numerical experiment. Theoretically it can be obtained by differentiating φ(x). Thus we have

ρ(x) = 1/3 for 0 < x < 3/4,  and  ρ(x) = 3 for 3/4 < x < 1.

See Fig. 6.25.
> listplot([seq([frac((i-0.5)/Bin),freq[i]*Bin/SampleSize], i=1..Bin)]);


Fig. 6.25. An approximate invariant pdf for T
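The invariant measure of T is the pushforward of Lebesgue measure under ψ, so the orbit should spend about µ([0, 3/4)) = Leb(φ([0, 3/4))) = 1/4 of its time in [0, 3/4). A Python sketch of this consistency check (our own, using the closed form T^n(x0) = ψ({φ(x0) + nθ}) with φ(x0) = 0):

```python
from math import sqrt

theta = sqrt(2.0) % 1.0          # the rotation number, as in the Maple session

def psi(x):
    # psi = phi^{-1}: 3x on [0, 1/4), (x + 2)/3 on [1/4, 1)
    return 3 * x if x < 0.25 else (x + 2) / 3

# T^n(0) = psi({n*theta}); count visits to [0, 3/4)
N = 100_000
hits = sum(1 for n in range(1, N + 1) if psi((n * theta) % 1.0) < 0.75)
print(hits / N)   # close to the integral of rho over (0, 3/4), i.e. 1/4
```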

Find the cumulative distribution function (cdf).
> cum[0]:=0:
> for i from 1 to Bin do
> cum[i]:=cum[i-1]+freq[i]:
> od:
Plot the cdf for T together with the plot obtained in Fig. 6.24. They match well, as expected. See Fig. 6.26.
> listplot([seq([frac((i-0.5)/Bin),cum[i]/SampleSize], i=1..Bin)]);


Fig. 6.26. An approximate cdf for T together with the points (T^n 0, {nθ})

7 Mod 2 Uniform Distribution

We construct a sequence of 0's and 1's from a dynamical system on a probability space X and a measurable set E, and define mod 2 uniform distribution. Let 1_E be the characteristic function of E. For an ergodic transformation T : X → X, consider the sequence d_n(x) ∈ {0, 1} defined by

d_n(x) ≡ Σ_{k=1}^{n} 1_E(T^{k-1} x)  (mod 2).

The mod 2 uniform distribution problem is to investigate the convergence of

(1/N) Σ_{n=1}^{N} d_n(x)

to a limit as N → ∞. Contrary to our intuition, the limit may not be equal to 1/2. This is related to the ergodicity of the skew product transformation.
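A small Python illustration of the definition (our own example, using the binary digits of x = 1/3, whose parity pattern d_n = 0, 1, 1, 0, ... makes the average exactly 1/2):

```python
from itertools import islice

def digits_of_one_third():
    # binary digits of 1/3 = 0.010101..._2
    while True:
        yield 0
        yield 1

def mod2_average(bits, N):
    """(1/N) * sum_{n=1}^{N} d_n where d_n = (a_1 + ... + a_n) mod 2."""
    total, d = 0, 0
    for a in islice(bits, N):
        d = (d + a) % 2
        total += d
    return total / N

# d_n for x = 1/3 is the periodic pattern 0,1,1,0,... so the average is 1/2
print(mod2_average(digits_of_one_third(), 1000))
```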

7.1 Mod 2 Normal Numbers

Let X = [0, 1) with Lebesgue measure µ. Define T x = 2x (mod 1). Let

x = Σ_{k=1}^{∞} a_k 2^{-k},  a_k ∈ {0, 1},

be the binary representation of x. Then a_k = 1_E(T^{k-1} x) where E = [1/2, 1). The classical normal number theorem states that almost every x is normal, i.e., (1/n) Σ_{k=1}^{n} a_k converges to 1/2.

Definition 7.1. Define d_n(x) ∈ {0, 1} by

d_n(x) ≡ Σ_{k=1}^{n} a_k  (mod 2).


Then x is called a mod 2 normal number if

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x) = 1/2.

Theorem 7.2. Almost every x is a mod 2 normal number.

Proof. The proof may be obtained following the general arguments in later sections, but we present a proof here to emphasize the key idea. Note that X is a group under addition modulo 1. Take E = [1/2, 1). Define T_1 on X × {0, 1} by

T_1(x, y) = (2x, 1_E(x) + y).

Here {0, 1} is regarded as the additive group Z_2, and the addition is done modulo 2. Then T_1 is measure preserving and

T_1^n(x, y) = (2^n x, d_n(x) + y).

Note that L^2(X × {0, 1}) = L^2(X) ⊕ L^2(X)χ where the character χ on {0, 1} is defined by χ(y) = e^{πiy} and L^2(X)χ = {f(x)χ(y) : f ∈ L^2(X)}. Suppose that T_1 is not ergodic. Then there is a nonconstant invariant function of the form h_1(x) + h_2(x)e^{πiy}. Hence

h_1(x) + h_2(x)e^{πiy} = h_1(2x) + h_2(2x) exp(πi 1_E(x)) e^{πiy}

and

h_1(x) − h_1(2x) = (−h_2(x) + h_2(2x) exp(πi 1_E(x))) e^{πiy}.

Since the left hand side does not depend on y, we have h_1(x) = h_1(2x) and h_2(x) = h_2(2x) exp(πi 1_E(x)). Since x ↦ {2x} is ergodic, h_1 is constant. Let us show that h_2 is also constant. First, note that 1_E(1 − x) = 1 − 1_E(x) and exp(πi 1_E(1 − x)) = −exp(πi 1_E(x)). Hence

h_2(1 − x) = −h_2(2(1 − x)) exp(πi 1_E(x))

and

h_2(x) h_2(1 − x) = −h_2(2x) h_2(2(1 − x)).

Put q(x) = h_2(x) h_2(1 − x). Then

q(2x) = −q(x),

which is a contradiction since x ↦ {2x} is mixing and has no eigenvalue other than 1. Thus T_1 is ergodic.

Let ν be the Haar measure on Z_2, i.e., ν({0}) = ν({1}) = 1/2. For an integrable function f defined on X × {0, 1}, the Birkhoff Ergodic Theorem implies that

lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(T_1^n(x, y)) = ∫_{X×{0,1}} f d(µ × ν)

for almost every (x, y) ∈ X × {0, 1}. Since X × {0, 1} can be regarded as a union of two copies of X, we have

lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(T_1^n(x, 0)) = ∫_{X×{0,1}} f d(µ × ν)

for almost every x ∈ X. Choose f : X × {0, 1} → C defined by f(x, y) = y. Then Fubini's theorem implies that

lim_{N→∞} (1/N) Σ_{n=0}^{N-1} d_n(x) = ∫_{X×{0,1}} y d(µ × ν) = ∫_{{0,1}} y dν = 1/2.

7.2 Skew Product Transformations

Consider a probability space (X, µ) and a compact abelian group G with the normalized Haar measure ν. Let µ × ν denote the product measure on X × G. Let T be a measure preserving transformation on (X, µ) and let φ : X → G be a measurable function.

Definition 7.3. A skew product transformation Tφ on X × G is defined by

Tφ(x, g) = (T x, φ(x)g).

Theorem 7.4. A skew product transformation Tφ is measure preserving.

Proof. It is enough to show that (µ × ν)(Tφ^{-1}(A × B)) = (µ × ν)(A × B) for measurable subsets A ⊂ X, B ⊂ G, since measurable rectangles generate the σ-algebra in the product measurable space X × G. Note that

(µ × ν)(Tφ^{-1}(A × B)) = ∫_{X×G} 1_{Tφ^{-1}(A×B)}(x, g) d(µ × ν)
  = ∫_{X×G} 1_{A×B}(Tφ(x, g)) d(µ × ν)
  = ∫_{X×G} 1_A(T x) 1_B(φ(x)g) d(µ × ν)
  = ∫_X 1_A(T x) ( ∫_G 1_{φ(x)^{-1}B}(g) dν(g) ) dµ(x).

In the last equality we used Fubini's theorem. Since

∫_G 1_{φ(x)^{-1}B}(g) dν(g) = ν(φ(x)^{-1}B) = ν(B),

we have

(µ × ν)(Tφ^{-1}(A × B)) = ( ∫_X 1_A(T x) dµ(x) ) ν(B) = µ(A) ν(B) = (µ × ν)(A × B).

Hence Tφ preserves µ × ν.

Define an isometry on L^2(X × G) by U_{Tφ}(h(x, g)) = h(Tφ(x, g)). Let Ĝ denote the dual group of G and put

H_χ = L^2(X)χ(g) = {f(x)χ(g) : f ∈ L^2(X)}.

Then we have an orthogonal direct sum decomposition

L^2(X × G) = ⊕_{χ∈Ĝ} H_χ.

Since U_{Tφ}(f(x)χ(g)) = f(T x)χ(φ(x))χ(g), each direct summand H_χ is an invariant subspace of U_{Tφ} and U_{Tφ}|H_χ : H_χ → H_χ satisfies

f(x)χ(g) ↦ f(T x)χ(φ(x))χ(g).

Lemma 7.5. Let T : X → X be an ergodic transformation, G a compact abelian group, and φ : X → G a measurable function. Let Tφ be the skew product transformation. Then the following are equivalent:
(i) Tφ is not ergodic on X × G.
(ii) U_{Tφ} has an eigenvalue 1 with a nonconstant eigenfunction.
(iii) There exists a measurable function f on X of modulus 1 such that χ(φ(x)) = f(x) f(T x)^{-1} for some χ ∈ Ĝ, χ ≠ 1.

Proof. It is clear that (i) and (ii) are equivalent. We will show that (ii) and (iii) are equivalent. Assume that (ii) holds true. Let Σ_{χ∈Ĝ} f_χ(x)χ(g) be a nonconstant eigenfunction of U_{Tφ} corresponding to the eigenvalue 1. Then

Σ_{χ∈Ĝ} f_χ(x)χ(g) = Σ_{χ∈Ĝ} f_χ(T x)χ(φ(x))χ(g).

By the uniqueness of the Fourier expansion, we have f_χ(x) = f_χ(T x)χ(φ(x)) for every χ. Since T is ergodic, f_1 is constant, where f_1 corresponds to χ = 1. Since Σ_χ f_χ(x)χ(g) is not constant, there exists χ ≠ 1 such that f_χ(x) =


f_χ(T x)χ(φ(x)) ≠ 0. By taking absolute values we see that |f_χ(x)| = |f_χ(T x)|. Hence f_χ has constant modulus.

Conversely, assume that a measurable function f satisfies

χ(φ(x)) = f(x) f(T x)^{-1}

for some χ ≠ 1. Then we have f(T x)χ(φ(x)) = f(x). Hence f(x)χ(g) is a nonconstant eigenfunction of U_{Tφ} for the eigenvalue 1.

Definition 7.6. A function φ : X → G is called a G-coboundary if there exists a function q : X → G satisfying

φ(x) = q(x) q(T x)^{-1}.

Such a function q is not unique in general, and one such function is called a cobounding function for φ.

Remark 7.7. Suppose that T is ergodic and let S^1 = {z ∈ C : |z| = 1}.
(i) The characters of G = S^1 are given by χ_n, χ_n(z) = z^n for z ∈ G. Then the condition in Lemma 7.5(iii) becomes φ(x)^n = f(x)f(T x)^{-1} for some n ≠ 0, i.e., φ(x)^n is a coboundary for some n ≠ 0.
(ii) The characters of G = {−1, 1} are given by 1 and χ where χ(z) = z for z ∈ G. Hence the coboundary condition becomes φ(x) = f(x)f(T x)^{-1}.
(iii) Let G be a compact subgroup of S^1, i.e., either G = S^1 or G = {1, a, . . . , a^{k-1}} for some k ≥ 1. If f(T x)φ(x) = f(x) for some f : X → G, f ≠ 0, then |f(T x)| = |f(x)|. The ergodicity of T implies that f has constant modulus. Thus we may assume that φ(x) = f(x)f(T x)^{-1} with |f(x)| = 1. Such a function f is not unique. For example, cf for any c ∈ C, |c| = 1, satisfies the same functional equation.

Let 1_E be the characteristic function of E ⊂ X. Define d_n(x) ∈ {0, 1} by

d_n(x) ≡ Σ_{k=1}^{n} 1_E(T^{k-1} x)  (mod 2).

Put

e_n(x) = exp(πi d_n(x)) = 1 − 2 d_n(x) ∈ {±1}.

Then the mod 2 uniform distribution is equivalent to

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = 0.
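When φ = exp(πi 1_E) is a coboundary, φ(x) = q(x)q(T x), the product of φ along an orbit telescopes to e_n(x) = q(x) q(T^n x). This can be checked exactly in Python on the period-3 orbit of x = 1/7 under the doubling map, with E = (1/6, 5/6) and F = (1/3, 2/3) (our choice of example, taken from the next sections):

```python
from fractions import Fraction as F

T = lambda x: (2 * x) % 1                          # doubling map, exact on rationals
in_E = lambda x: F(1, 6) < x < F(5, 6)             # E = (1/6, 5/6)
q = lambda x: -1 if F(1, 3) < x < F(2, 3) else 1   # q = exp(pi*i*1_F), F = (1/3, 2/3)

x = F(1, 7)                                        # periodic: 1/7 -> 2/7 -> 4/7 -> 1/7
d, y = 0, x
for n in range(1, 13):
    d = (d + in_E(y)) % 2                          # d_n
    y = T(y)                                       # y = T^n x
    e_n = 1 - 2 * d                                # e_n = exp(pi*i*d_n)
    assert e_n == q(x) * q(y)                      # e_n(x) = q(x) q(T^n x)
print("e_n(x) = q(x) q(T^n x) holds along the orbit of 1/7")
```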


Theorem 7.8. Consider the multiplicative group G = {1, −1}. Let T be an ergodic transformation on a probability space (X, µ). Take a measurable subset E ⊂ X. Put φ = exp(πi 1_E) and let Tφ be the skew product transformation on X × G induced by φ.
(i) Suppose that Tφ : X × G → X × G is ergodic. Then for almost every x

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = 0.

(ii) Suppose that Tφ is not ergodic. Then for almost every x

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = q(x) ∫_X q dµ

where φ(x) = q(x)q(T x)^{-1} and q(x) ∈ {±1}.

Proof. Let ν be the Haar measure on G. Recall that the dual group Ĝ consists of the trivial homomorphism 1 = 1_G and χ defined by χ(y) = y.
(i) For an integrable function f(x, y) on X × G, the Birkhoff Ergodic Theorem implies that

lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(Tφ^n(x, y)) = ∫_{X×G} f(x, y) d(µ × ν)

for almost every (x, y) ∈ X × G. Since X × G can be regarded as a union of two copies of X, we have

lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(Tφ^n(x, 1)) = ∫_{X×G} f(x, y) d(µ × ν)

for almost every x ∈ X. Choose f(x, y) = y. Then

lim_{N→∞} (1/N) Σ_{n=1}^{N} e_n(x) = ∫_{X×{−1,1}} y d(µ × ν) = ∫_{{−1,1}} y dν = 0.

(ii) By Lemma 7.5, we have φ(x) = q(x)q(T x) with q(x) ∈ {±1} almost everywhere. Hence e_n(x) = q(x)q(T^n x) for every n ≥ 1. Now use

(1/N) Σ_{n=1}^{N} e_n(x) = q(x) (1/N) Σ_{n=1}^{N} q(T^n x)

together with the Birkhoff Ergodic Theorem.

When the sequence (1/N) Σ_{n=1}^{N} e_n(x) fails to converge to zero, Part (ii) provides us with information on the extent of irregularity. That is, the sequence converges to a real-valued function of modulus |∫_X q dµ|.


Theorem 7.9. Assume that case (ii) holds in Theorem 7.8. If φ(x) = exp(πi 1_E(x)) = q(x)q(T x) for some {±1}-valued function q, then

| ∫_X q dµ | ≤ 1 − µ(E).

Proof. Since q(x) is real, it is of the form q(x) = exp(πi 1_B(x)) for some measurable subset B, and

exp(πi 1_E(x)) = exp(πi 1_B(x)) exp(πi 1_{T^{-1}B}(x))

and E = B △ T^{-1}B modulo measure zero sets, where △ denotes the symmetric difference of two sets. Hence

µ(E) ≤ µ(B) + µ(T^{-1}B) = 2 µ(B).

On the other hand,

exp(πi 1_E(x)) = (−q)(x)(−q)(T x) = exp(πi 1_C(x)) exp(πi 1_{T^{-1}C}(x))

where C = X \ B. Hence E = C △ T^{-1}C modulo measure zero sets and

µ(E) ≤ 2 µ(C) = 2(1 − µ(B)).

Since

∫_X q dµ = 1 × (1 − µ(B)) + (−1) × µ(B) = 1 − 2 µ(B),

we have

µ(E) − 1 ≤ ∫_X q dµ ≤ 1 − µ(E).

Remark 7.10. (i) Roughly speaking, as the size of E increases, the extent of the irregularity becomes smaller.
(ii) It is possible that µ(E) is small and the integral of q is close to 1. For example, choose a small set B of positive measure such that B ∩ T^{-1}B = ∅. Put E = B ∪ T^{-1}B. Take x ∈ X \ E. The orbit of x visits T^{-1}B if and only if it visits B right after it visits T^{-1}B. Hence the orbit of x visits E twice in a row. The Birkhoff Ergodic Theorem implies that the orbit of x visits E only after staying outside E for a long time, since the measure of B is small. Hence the sequence e_n(x), n ≥ 1, is given by

1, . . . , 1, −1, 1, . . . , 1, −1, 1, . . . , 1, . . .

where there are many 1's between two consecutive −1's.
(iii) There exists a measurable set B ⊂ X such that E = B △ T^{-1}B if and only if exp(πi 1_E) is a coboundary. For, if there exists a real-valued function q such that exp(πi 1_E(x)) = q(x)q(T x), then q is of the form exp(πi 1_B) and this implies the required condition.


Theorem 7.11. Take G = Z_2 = {±1}. Let T be an ergodic transformation on X and let φ be a Z_2-valued measurable function on X. If Tφ is not ergodic, then it has exactly two ergodic components of equal measure, namely

{(x, q(x)) : x ∈ X}  and  {(x, −q(x)) : x ∈ X},

where q is a Z_2-cobounding function for φ.

Proof. Since Tφ is not ergodic, Lemma 7.5 implies that φ is a coboundary and there exists q such that φ(x) = q(x)q(T x). Define a bijective mapping Q : X × Z_2 → X × Z_2 by Q(x, z) = (x, q(x)z). Define S : X × Z_2 → X × Z_2 by S(x, z) = (T x, z). Then Q and S are measure preserving, and we have

(Q ∘ Tφ)(x, z) = Q(T x, φ(x)z) = (T x, q(T x)φ(x)z) = (T x, q(x)z) = (S ∘ Q)(x, z),

i.e., Tφ is isomorphic to S, which has two ergodic components X × {+1} and X × {−1}. Note that Q^{-1} = Q and that the following diagram commutes:

               Tφ
    X × Z_2 ──────→ X × Z_2
       │               │
     Q │               │ Q
       ↓       S       ↓
    X × Z_2 ──────→ X × Z_2

Hence Tφ has two ergodic components Q(X × {+1}) = {(x, q(x)) : x ∈ X} and Q(X × {−1}) = {(x, −q(x)) : x ∈ X}.

Remark 7.12. In Theorem 7.11, Tφ restricted to X × {+1} is isomorphic to T since

Tφ(x, q(x)) = (T x, φ(x)q(x)) = (T x, q(T x)q(x)q(x)) = (T x, q(T x)).

Similarly, Tφ restricted to X × {−1} is isomorphic to T. This can be seen in Figs. 7.2, 7.3 for T x = 2x (mod 1).
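The conjugation in Theorem 7.11 can be verified pointwise. A Python sketch (our own check) with T x = 2x (mod 1), E = (1/6, 5/6) and cobounding function q = exp(πi 1_F), F = (1/3, 2/3); the rationals k/101 avoid all boundary points:

```python
from fractions import Fraction as F

T = lambda x: (2 * x) % 1
q = lambda x: -1 if F(1, 3) < x < F(2, 3) else 1          # cobounding function
phi = lambda x: -1 if F(1, 6) < x < F(5, 6) else 1        # phi = exp(pi*i*1_E) = q * (q o T)

T_phi = lambda x, z: (T(x), phi(x) * z)                   # skew product
S     = lambda x, z: (T(x), z)                            # trivial extension T x id
Q     = lambda x, z: (x, q(x) * z)                        # Q(x, z) = (x, q(x) z)

for k in range(1, 101):
    x = F(k, 101)
    for z in (1, -1):
        assert Q(*T_phi(x, z)) == S(*Q(x, z))             # Q o T_phi = S o Q
print("Q conjugates T_phi to S")
```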

7.3 Mod 2 Normality Conditions

In this section we study the skew product of the transformation T x = 2x (mod 1) with the group Z_2 = {−1, 1} from the viewpoint of interval mappings. The subsets

X₊ = X × {+1},  X₋ = X × {−1}

are identified with the intervals [0, 1), [1, 2), respectively, and Tφ given in Theorem 7.8 is regarded as being defined on the interval X̃ = [0, 2). The new mapping on X̃ is denoted by T̃φ. If T̃φ is not ergodic, then it has two ergodic components in X̃, each having Lebesgue measure 1. As in the other chapters, set relations are understood as being so modulo measure zero sets.


Example 7.13. (An ergodic skew product) Choose E = (1/2, 1) and φ = exp(πi 1_E). Then T̃φ can be regarded as the piecewise linear mapping given in Fig. 7.1. Note that it is ergodic.


Fig. 7.1. An ergodic skew product Tφ defined by E = (1/2, 1) ⊂ X = [0, 1) regarded as a mapping T̃φ on X̃ = [0, 2)

Example 7.14. (A nonergodic skew product) Choose E = (1/4, 3/4) and φ = exp(πi 1_E). Put F = (1/2, 1) and q = exp(πi 1_F). Then E = F △ T^{-1}F and φ(x) = q(x)q(T x). Hence T̃φ can be regarded as the piecewise linear mapping given in Fig. 7.2. Note that it has two ergodic components K₁ = (1/2, 3/2) and K₂ = X̃ − K₁ = (0, 1/2) ∪ (3/2, 2).


Fig. 7.2. A nonergodic skew product Tφ defined by E = (1/4, 3/4) ⊂ X = [0, 1) regarded as a mapping T̃φ on X̃ = [0, 2)
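A Python sketch of Example 7.14 (our own small experiment): realizing T̃φ on [0, 2) directly and checking that an orbit started inside K₁ = (1/2, 3/2) never leaves it. The point 3/5 has an exact period-4 orbit under the doubling-map skew product:

```python
from fractions import Fraction as F

# T~_phi on [0, 2) for T x = {2x} and E = (1/4, 3/4), as in Fig. 7.2:
# the level flips exactly when the base point lies in E
def S(x):
    base, level = (x, 0) if x < 1 else (x - 1, 1)
    if F(1, 4) < base < F(3, 4):
        level = 1 - level
    return (2 * base) % 1 + level

x = F(3, 5)                       # a point of the component K1 = (1/2, 3/2)
orbit = []
for _ in range(20):
    orbit.append(x)
    x = S(x)
assert all(F(1, 2) < p < F(3, 2) for p in orbit)
print("the orbit stays inside the ergodic component (1/2, 3/2)")
```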

Example 7.15. (A nonergodic skew product) Choose E = (1/6, 5/6) and φ = exp(πi 1_E). Put F = (1/3, 2/3) and q = exp(πi 1_F). Then E = F △ T^{-1}F and φ(x) = q(x)q(T x). Hence T̃φ can be regarded as the piecewise linear mapping given in Fig. 7.3. Note that it has two ergodic components K₁ = (0, 1/3) ∪ (2/3, 1) ∪ (4/3, 5/3) and K₂ = X̃ − K₁.


Fig. 7.3. A nonergodic skew product Tφ defined by E = (1/6, 5/6) ⊂ X = [0, 1) regarded as a mapping T̃φ on X̃ = [0, 2)

Using T̃φ as in the previous examples we obtain the following theorem.

Theorem 7.16. Let E = (a, b) ⊂ (0, 1). Then Tφ is not ergodic if and only if E = (1/4, 3/4) or E = (1/6, 5/6).

Proof. In each of Figs. 7.2, 7.3 we can find two ergodic components. Hence Tφ is not ergodic if E = (1/4, 3/4) or E = (1/6, 5/6). These are the only exceptions: for any other interval, Tφ is ergodic. For the proof see [Ah1],[CHN].

Theorem 7.17. (i) For any interval E ≠ (1/6, 5/6), almost every x ∈ [0, 1) is a mod 2 normal number with respect to E.
(ii) For E = (1/6, 5/6), almost every x ∈ [0, 1) is not a mod 2 normal number with respect to E. More precisely, for E = (1/6, 5/6),

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 2/3  for almost every x ∈ (1/3, 2/3),
lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 1/3  for almost every x ∈ (0, 1/3) ∪ (2/3, 1).

Proof. First, Theorem 7.16 implies that, for any interval E such that E ≠ (1/6, 5/6) and E ≠ (1/4, 3/4), almost every x ∈ [0, 1) is mod 2 normal with respect to E.

Next, consider E = (1/4, 3/4). Put F = (1/2, 1). Then E = F △ T^{-1}F, and φ(x) = exp(πi 1_E(x)) = q(x)q(T x) where q = exp(πi 1_F). Then ∫_0^1 q dx = 0, and Theorem 7.8(ii) implies that lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n = 1/2 for almost every x. In this case we have mod 2 normality with respect to E even though exp(πi 1_E) is a coboundary.

There remains only the exceptional case E = (1/6, 5/6), for which mod 2 normality with respect to E does not hold. Put F = (1/3, 2/3). Then E = F △ T^{-1}F. Now use ∫_0^1 q dx = 1/3.
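The two limit values in Theorem 7.17(ii) can be seen exactly on periodic orbits of the doubling map (periodic points are not "almost every x", but the averages along them already equal the limits). A Python sketch:

```python
from fractions import Fraction as F

T = lambda x: (2 * x) % 1
in_E = lambda x: F(1, 6) < x < F(5, 6)        # E = (1/6, 5/6)

def mod2_average(x, N):
    """(1/N) * sum_{n=1}^{N} d_n(x) along the orbit of x."""
    d, total = 0, 0
    for _ in range(N):
        d = (d + in_E(x)) % 2
        total += d
        x = T(x)
    return F(total, N)

# 3/7 lies in (1/3, 2/3): the average is 2/3; 1/7 lies outside: it is 1/3
assert mod2_average(F(3, 7), 3000) == F(2, 3)
assert mod2_average(F(1, 7), 3000) == F(1, 3)
print("limits 2/3 and 1/3 as in Theorem 7.17")
```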


7.4 Mod 2 Uniform Distribution for General Transformations

For general transformations, we consider the cases when the corresponding skew products are not ergodic. Since d_n = (1 − e_n)/2, we have

lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x) = 1/2 − (1/2) q(x) ∫_X q dµ.

In Fig. 7.4 simulations for mod 2 normality are given for T x = {x + θ}, θ = √2 − 1, and T x = {2x}. In Figs. 7.5, 7.6 simulations for failure of mod 2 normality are given for T x = {x + θ}, T x = {2x}, T x = 4x(1 − x) and T x = {βx}, β = (√5 + 1)/2.


Fig. 7.4. Mod 2 uniform distribution: T x = {x + θ}, θ = √2 − 1, with respect to E = (0, θ) (left) and T x = {2x} with respect to E = (0, 1/2) (right)

Example 7.18. For T x = {x + θ}, 0 < θ < 1/2 irrational, take E = (0, 2θ). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (θ, 2θ). Note that ∫ q dµ = 1 − 2θ. See the plot on the left in Fig. 7.5 and Maple Program 7.7.1.

Example 7.19. For T x = {2x}, take E = (1/6, 5/6). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (1/3, 2/3). Note that ∫ q dµ = 1/3. See Fig. 7.3 and the plot on the right in Fig. 7.5.

Example 7.20. For T x = 4x(1 − x), take E = (1/4, 1). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (3/4, 1). Note that ∫ q dµ = 1/3. See the plot on the left in Fig. 7.6.

Example 7.21. For T x = {βx}, β = (√5 + 1)/2, take E = (1 − 1/β, 1). Then exp(πi 1_E(x)) = q(x)q(T x) is satisfied by q = exp(πi 1_F), F = (0, 1/β). Note that ∫ q dµ = −1/√5. See the plot on the right in Fig. 7.6. Consult Maple Program 7.7.2.


Fig. 7.5. Failure of mod 2 uniform distribution: T x = {x + θ}, θ = √2 − 1, with respect to E = (0, 2θ) (left) and T x = {2x} with respect to E = (1/6, 5/6) (right)


Fig. 7.6. Failure of mod 2 uniform distribution: T x = 4x(1 − x) with E = (1/4, 1) (left) and T x = {βx} with E = (1 − 1/β, 1) (right)

Remark 7.22. For a generalization to the Bernoulli shifts, see [Ah1]. When T is given by T x = {x + θ}, θ irrational, the mod 2 uniform distribution problem was investigated by W.A. Veech [Ve]. For an interval E define

µ_θ(E) = lim_{N→∞} (1/N) Σ_{n=1}^{N} d_n(x)

if the limit exists. Let t = length(E). He proved that µ_θ(E) exists for every interval E if and only if θ has bounded partial quotients in its continued fraction expansion. For closely related results, see [ILM],[ILR],[MMY].

7.5 Random Walks on the Unit Circle

Random walks on a finite state space may be regarded as Markov shifts. In this section we investigate random walks on the unit circle in terms of ergodicity of skew product transformations. Let Y be the unit circle identified with the unit interval [0, 1), which is an additive group under addition modulo 1. Let ν denote the normalized Lebesgue measure.


A random walker starts at position p on Y and tosses a fair coin. If it comes up heads then the walker moves to the right by the distance α, and if it comes up tails then the walker moves to the right by the distance β. A sequence of coin tosses can be regarded as the binary expansion of a number x ∈ [0, 1): if x = (x_1, x_2, x_3, . . .)_2 = Σ_{k=1}^{∞} x_k 2^{-k} with x_k ∈ {0, 1}, then at the k-th step the random walker moves depending on whether x_k = 0 or x_k = 1. Heads and tails correspond to the symbols 0 and 1, respectively. We may ask the following questions:

Question A. Can the walker visit any open neighborhood U of a given point?
Question B. How long does it take to return if the walker starts from a given subset V?

These questions can be answered using skew product transformations. Take X = ∏_{k=1}^{∞} {0, 1} with the (1/2, 1/2)-Bernoulli measure µ, and define the shift transformation T : X → X by

T((x_1, x_2, . . .)) = (x_2, x_3, . . .).

Let µ × ν denote the product measure on X × Y. Define φ : X → Y by

φ(x) = α if x_1 = 0,  φ(x) = β if x_1 = 1,

and a skew product transformation Tφ : X × Y → X × Y by

Tφ(x, y) = (T x, y + φ(x)).

Observe that for n ≥ 1 we have

(Tφ)^n(x, y) = (T^n x, y + φ(x) + · · · + φ(T^{n-1} x)).

To answer Question A, take an open interval U and consider X × U ⊂ X × Y. If Tφ is ergodic, then the Birkhoff Ergodic Theorem implies that the orbit of almost every (x, p) ∈ X × Y visits X × U with relative frequency

(µ × ν)(X × U) = µ(X)ν(U) = ν(U).

By projecting onto the Y-axis, we see that for almost every p ∈ Y the random walker visits U with relative frequency ν(U).

For Question B, we first note that the first return time to X × V for Tφ on X × Y is equal to the first return time to V for the random walk in Y. If Tφ is ergodic, then Kac's lemma implies that the average of the first return time to X × V for Tφ is equal to

1/((µ × ν)(X × V)) = 1/(µ(X)ν(V)) = 1/ν(V).

Now we find a condition on the ergodicity of Tφ in terms of α and β.


Theorem 7.23. The skew product transformation Tφ is ergodic if and only if at least one of α, β is irrational.

Proof. (⇒) If both α and β are rational, then there exists an integer k ≥ 2 such that α = a/k and β = b/k for some a, b ∈ N. Then Tφ^n(x, y) belongs to the subset

X × {y, y + 1/k, . . . , y + (k−1)/k}

for every n ≥ 0. Take F = X × (0, 1/(2k)) ⊂ X × Y. Then the orbit of (x, y) ∈ F never visits the subset X × ∪_{j=1}^{k} ((2j−1)/(2k), j/k), which contradicts the Birkhoff Ergodic Theorem, and Tφ is not ergodic.

(⇐) Suppose that Tφ is not ergodic. Lemma 7.5 implies that there exist an integer k ≠ 0 and q(x) ≠ 0 such that

q(x) = exp(2πik φ(x)) q(T x)

where x = (x_1, x_2, . . .). Put ψ(x_1) = exp(2πik φ(x)). We may express the relation as

q(x_1, x_2, . . .) = ψ(x_1) q(x_2, x_3, . . .)

since φ depends only on x_1. Note that q(x_2, x_3, . . .) = ψ(x_2) q(x_3, x_4, . . .) and so on. Hence for every n ≥ 1

q(x_1, x_2, . . .) = ψ(x_1) · · · ψ(x_n) q(x_{n+1}, x_{n+2}, . . .).   (∗)

Let µ_{1,n} be the measure on X_{1,n} = ∏_{k=1}^{n} {0, 1} such that every point (a_1, . . . , a_n) has equal measure 2^{-n}, and let µ_{n+1,∞} be the (1/2, 1/2)-Bernoulli measure on X_{n+1,∞} = ∏_{k=n+1}^{∞} {0, 1}. Note that µ on X may be regarded as the product measure of µ_{1,n} and µ_{n+1,∞}. Integrating both sides of (∗) over X, we obtain

∫_X q dµ = ( ∫_{X_{1,n}} ψ(x_1) · · · ψ(x_n) dµ_{1,n} ) ( ∫_{X_{n+1,∞}} q(x_{n+1}, x_{n+2}, . . .) dµ_{n+1,∞} ).

Since

∫_{X_{1,n}} ψ(x_1) · · · ψ(x_n) dµ_{1,n} = ( ∫_{X_{1,1}} ψ(x_1) dµ_{1,1} )^n = ( (e^{2πikα} + e^{2πikβ})/2 )^n

and since

∫_{X_{n+1,∞}} q(x_{n+1}, x_{n+2}, . . .) dµ_{n+1,∞} = ∫_X q dµ,

we have

∫_X q dµ = ( (e^{2πikα} + e^{2πikβ})/2 )^n ∫_X q dµ.   (∗∗)

Suppose that ∫_X q dµ = 0. Integrating both sides of (∗) over a cylinder set [a_1, . . . , a_n], we have

∫_{[a_1,...,a_n]} q dµ = 2^{-n} ψ(a_1) · · · ψ(a_n) ∫_X q dµ = 0.

Hence the integral of q on every cylinder set is equal to zero, and so q = 0, which is a contradiction. Thus ∫_X q dµ ≠ 0, and from (∗∗) with n = 1 we have

(e^{2πikα} + e^{2πikβ})/2 = 1.

Thus e^{2πikα} = e^{2πikβ} = 1, which is possible only when both α and β are rational.
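The forward direction can be watched directly: with α = 1/4 and β = 1/2 (so k = 4) every position of the walker stays in the coset y_0 + {0, 1/4, 1/2, 3/4}. A Python sketch (our own; the seed and step count are arbitrary):

```python
from fractions import Fraction as F
import random

# both alpha and beta rational with denominator k = 4: the walker's position
# is confined to {y0 + m/4 (mod 1)}, so T_phi cannot be ergodic
alpha, beta = F(1, 4), F(1, 2)
y0 = F(1, 10)
random.seed(0)

y, seen = y0, set()
for _ in range(500):
    y = (y + (alpha if random.random() < 0.5 else beta)) % 1
    seen.add(y)
assert seen <= {(y0 + F(m, 4)) % 1 for m in range(4)}
print("positions confined to 4 points:", sorted(seen))
```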

When α is irrational and β = −α, the problem was studied in [Su] from the viewpoint of number theory. An orbit of Tφ of length 300 is plotted in Fig. 7.7, where we choose α = √3/20 and the starting point (x_0, y_0) = (π − 3, √2 − 1.2). We identify ∏_{k=1}^{∞} {0, 1} and the shift transformation with [0, 1] and x ↦ 2x (mod 1), respectively. The orbit points are uniformly distributed due to the ergodicity. Positions of the random walker are obtained by projecting the orbit points of Tφ onto the vertical axis. Consult Maple Program 7.7.3. See also Maple Program 7.7.4.


Fig. 7.7. An orbit of Tφ of length 300 with α irrational and β = −α

From the same simulation data we plot the first 50 positions of the random walker. In Fig. 7.8 the walker starts from y_0 ≈ 0.4 and each step depends on whether {2^{n-1} x_0} belongs to [0, 1/2) or [1/2, 1), where 1 ≤ n ≤ 50.


Fig. 7.8. The ﬁrst 50 positions of a random walker with α irrational and β = −α


7.6 How to Sketch a Cobounding Function

Let X = [0, 1] and let Y denote the unit circle group identified with [0, 1). Suppose that T : X → X is ergodic with respect to an invariant probability measure. We use additive notation for the group operation in Y. For φ : X → Y define a skew product Tφ as before.

As a concrete example, consider T x = x + θ (mod 1), θ irrational. Choose a continuous function φ : [0, 1) → R such that φ(0) = φ(1) (mod 1). Then φ is regarded as a mapping from Y into itself, and so it has a winding number, which plays an important role in determining ergodic properties of Tφ when φ is sufficiently smooth. See [ILM],[ILR]. If the winding number is nonzero, then φ is not a coboundary up to addition of constants and Tφ is ergodic. See Fig. 7.9 for an orbit of length 1000 with φ(x) = x, which has winding number equal to 1. The vertical axis represents Y. The orbit points are uniformly distributed since Tφ is ergodic.


Fig. 7.9. An orbit of Tφ is uniformly distributed, where T x = x + θ (mod 1) and φ(x) = x

Now consider the case when Tφ is not ergodic. There exist an integer k ≠ 0 and a measurable function q(x) such that

k φ(x) = q(T x) − q(x)  (mod 1).

Instead of finding q explicitly, we present a method of plotting its graph {(x, q(x)) : x ∈ X} in the product space X × Y based on mathematical pointillism. First, we observe that

k Σ_{i=0}^{n-1} φ(T^i x) = Σ_{i=0}^{n-1} ( q(T^{i+1} x) − q(T^i x) ) = q(T^n x) − q(x)  (mod 1).

Take a starting point (x_0, y_0) for an orbit under T_{kφ}. Then

(T_{kφ})^n(x_0, y_0) = (T^n x_0, y_0 + q(T^n x_0) − q(x_0))  (mod 1).

What we have are points of the form


(x, q(x) + y_0 − q(x_0))  (mod 1),

and they are on the graph

y = q(x) + y_0 − q(x_0)  (mod 1).

The ergodicity of T implies that the points T^n(x_0), n ≥ 0, are uniformly distributed for almost every x_0. If q(x) exists and is continuous, then uniform distribution of the sequence {T^n(x_0)} in X usually implies that the orbit {(T_{kφ})^n(x_0, y_0) : n ≥ 0} is dense in the graph of y = q(x) + y_0 − q(x_0) (mod 1). Thus, when we plot sufficiently many points of the form (T_{kφ})^n(x_0, y_0) in X × Y, we have a rough sketch of the graph y = q(x) + y_0 − q(x_0) (mod 1). Since a cobounding function is determined up to addition of constants, we may choose q(x) + y_0 − q(x_0) as q. Observe that we automatically have q(x_0) = y_0 in this case.

For example, consider T x = {x + θ} and a coboundary φ of the form φ(x) = q(T x) − q(x) for some real-valued function q(x), 0 ≤ x < 1. Observe that φ must satisfy ∫_0^1 φ(x) dx = 0. Choose φ with winding number equal to zero and plot an orbit of length 1000 under Tφ with the starting point (x_0, y_0) = (0, 0.7). See Fig. 7.10 and Maple Program 7.7.5.
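For φ(x) = (1/2) sin 2πx the cobounding function is available in closed form, since φ has only the Fourier modes n = ±1. The following Python sketch (our own check) verifies that the orbit of Tφ really lies on the graph y = q(x) + y_0 − q(x_0) (mod 1):

```python
import cmath, math

theta = (math.sqrt(5) - 1) / 2                     # the rotation angle
phi = lambda x: 0.5 * math.sin(2 * math.pi * x)    # coboundary with winding number 0

# phi = q o T - q with c_{+-1} = b_{+-1}/(e^{+-2 pi i theta} - 1), b_{+1} = -i/4
c1 = (-0.25j) / (cmath.exp(2j * math.pi * theta) - 1)
q = lambda x: 2 * (c1 * cmath.exp(2j * math.pi * x)).real

assert abs(q(0.3 + theta) - q(0.3) - phi(0.3)) < 1e-12   # q solves the equation

x0, y0 = 0.0, 0.7
x, y = x0, y0
for _ in range(1000):
    x, y = (x + theta) % 1, (y + phi(x)) % 1       # one step of T_phi
    target = (q(x) + y0 - q(x0)) % 1
    diff = abs(y - target)
    assert min(diff, 1 - diff) < 1e-9              # orbit point on the graph
print("orbit lies on the graph of y = q(x) + y0 - q(x0) (mod 1)")
```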


Fig. 7.10. An orbit of Tφ is dense in the graph of a cobounding function, where T x = {x + θ}, θ = (√5 − 1)/2, for φ(x) = (1/2) sin(2πx), φ(x) = x(1 − x) − 1/6 and φ(x) = 1/4 − |x − 1/2| (from left to right)

Lemma 7.24. For t ∈ R define ||t|| = min{|t − n| : n ∈ Z}. Then there exist constants A, B > 0 such that

A ||t|| ≤ |e^{2πit} − 1| ≤ B ||t||.

Proof. Define

f(t) = |e^{2πit} − 1| / ||t||  for t ∉ Z,  f(t) = 2π  for t ∈ Z.

Then f is continuous and periodic. Since it is continuous on the compact set [0, 1], it attains a minimum and a maximum, say A and B, respectively. Then A ≤ f(t) ≤ B for every t. Since f does not vanish, we have A > 0.


We may use Fourier series expansions to find q. Let φ(x) = Σ_n b_n e^{2πinx} and q(x) = Σ_n c_n e^{2πinx}. Then q(x + θ) = Σ_n c_n e^{2πinθ} e^{2πinx}, and a necessary condition for φ(x) = q(x + θ) − q(x) is given by

b_n = (e^{2πinθ} − 1) c_n.

If b_n/(e^{2πinθ} − 1) converges to 0 sufficiently rapidly as |n| → ∞, then q(x) = Σ_n c_n e^{2πinx} exists in a suitable sense. By Lemma 7.24, the speed of convergence to 0 of the sequence b_n/(e^{2πinθ} − 1) as |n| → ∞ is determined by |b_n|/||nθ||, especially when ||nθ|| is close to 0. For estimation of |b_n| as |n| → ∞, consult Fact 1.30. More precisely, if θ = [a_1, a_2, a_3, . . .] is the continued fraction expansion of θ, and if a_k ≤ A for every k, then

q_k ≤ A q_{k-1} + q_{k-2} < (A + 1) q_{k-1}.

Take an integer j such that q_k ≤ j < q_{k+1}. Then ||jθ|| ≥ ||q_k θ|| and

j ||jθ|| ≥ j ||q_k θ|| ≥ q_k ||q_k θ||.

From Fact 1.36(i) we have

1/((A + 2) q_k) < 1/(q_{k+1} + q_k) < ||q_k θ|| < 1/q_k.

Hence q_k ||q_k θ|| > 1/(A + 2), and therefore j ||jθ|| > 1/(A + 2) for every j ≥ 1.

7.7 Maple Programs

7.7.1 Mod 2 uniform distribution for an irrational translation mod 1

We simulate the limit in Theorem 7.8(ii) for T x = {x + θ}, θ = √2 − 1, following Example 7.18.
> theta:=sqrt(2)-1:
Consider an interval E = [a, b] where a = 0 and b = 2θ.
> a:=0: b:=2*theta:
> evalf(b);
0.828427124
> Ind_E:=x->piecewise(a<x and x<b,1,0):
Define the cobounding function q = exp(πi 1_F), F = (θ, 2θ).
> q:=x->piecewise(theta<x and x<2*theta,-1,1):
> T:=x->frac(x+theta):
> plot(q(x)*q(T(x)),x=0..1);
See the right graph in Fig. 7.12.
> SampleSize:=100:
> N:=50:
As N increases, the simulation result converges to the theoretical prediction.
> for k from 1 to SampleSize do
> seed[k]:=evalf(frac(k*Pi)):
> od:
Compute the orbits of length N starting from seed[k].
> Digits:=20:
Since the entropy of an irrational translation modulo 1 is equal to zero, we do not need many digits. For more information consult Sect. 9.2.
> for k from 1 to SampleSize do
> orbit[k,0]:=seed[k]:
> for n from 1 to N-1 do
> orbit[k,n]:=T(orbit[k,n-1]):
> od:
> od:
Find d_n(x_k) at x_k = {kπ}.
> for k from 1 to SampleSize do
> d[k,1]:=modp(Ind_E(orbit[k,0]),2):
> for n from 2 to N do
> d[k,n]:=modp(d[k,n-1]+Ind_E(orbit[k,n-1]),2):
> od:
> od:
> for k from 1 to SampleSize do
> z[k]:=add(d[k,n],n=1..N)/N:
> od:
> rho:=x->1:
Find the exact value of C.
> theta:=sqrt(2)-1:
> C:=int(q(x)*rho(x),x=0..1);
C := −2√2 + 3
> evalf(%);
0.171572876
> g1:=plot((1-C*q(x))/2.,x=0..1,y=0..1):
> g2:=pointplot([seq([seed[k],z[k]],k=1..SampleSize)]):
> display(g1,g2);
See Fig. 7.5.
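For θ = √2 − 1 = [2, 2, 2, . . .] the small-divisor inequalities quoted from Fact 1.36(i) can be checked numerically. A Python sketch (our own) over the convergent denominators q_k = 2q_{k-1} + q_{k-2}:

```python
import math

theta = math.sqrt(2) - 1          # continued fraction [2, 2, 2, ...]

def dist(t):                      # ||t|| = distance from t to the nearest integer
    return abs(t - round(t))

# denominators of the convergents of theta: q_k = 2 q_{k-1} + q_{k-2}
qs = [1, 2]
while qs[-1] < 10_000:
    qs.append(2 * qs[-1] + qs[-2])

for k in range(len(qs) - 1):
    qk, qk1 = qs[k], qs[k + 1]
    assert 1 / (qk1 + qk) < dist(qk * theta) < 1 / qk1
print("1/(q_{k+1}+q_k) < ||q_k theta|| < 1/q_{k+1} for all computed convergents")
```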


7.7.2 Ergodic components of a nonergodic skew product transformation arising from the beta transformation

We find ergodic components of a nonergodic skew product transformation Tφ arising from the beta transformation T. Choose E ⊂ [0, 1] and take φ = 1_E. When T̃φ is nonergodic, we find its ergodic components by tracking orbit points. Since an orbit stays inside an ergodic component, if we take a sufficiently long orbit then the orbit points depict an ergodic component; in other words, we employ mathematical pointillism.
> with(plots):
> Digits:=20:
> beta:=(sqrt(5.0)+1)/2:
> T:=x->frac(beta*x):
Consider an interval E = [a, b].
> a:=1-1/beta;
a := 0.38196601125010515179
> b:=1:
Let S denote T̃φ, which is defined on [0, 2], for notational simplicity.
> S:=x->piecewise(0<=x and x<a, T(x), a<=x and x<1, T(x)+1, 1<=x and x<1+a, T(x-1)+1, T(x-1)):
> seed[0]:=0.782:
> Length:=500:
> for i from 1 to Length do seed[i]:=S(seed[i-1]): od:
Plot the points (x, S(x)) on the graph of y = S(x).
> g5:=pointplot([seq([seed[i-1],seed[i]],i=1..Length)], symbol=circle):
> display(g1,g2,g3,g4,g5,s1);
See Fig. 7.13, where the transformation restricted to the ergodic component [1/β, 1 + 1/β] ⊂ [0, 2] is represented by the thick line segments.


Fig. 7.13. An orbit of Tφ is inside an ergodic component where T is the beta transformation

7.7.3 Random walks by an irrational angle on the circle

Consider random walks by an irrational angle on the unit circle.
> with(plots):
> Digits:=1000:
> alpha:=sqrt(3.)/20: beta:=1-alpha:
Define φ. See Fig. 7.14.
> phi:=x->piecewise(0<=x and x<1/2,alpha,beta):
Define the skew product transformation Tφ.
> T_phi:=(x,y)->(frac(2.0*x),frac(y+phi(x))):
> Length:=300:
Choose the starting point of an orbit.
> seed[0]:=(evalf(Pi)-3,sqrt(2.0)-1.2):
> for k from 1 to Length do
> seed[k]:=T_phi(seed[k-1]):
> od:
> Digits:=10:

7.7 Maple Programs

233

Plot an orbit of the skew product transformation. The shift transformation is identiﬁed with T x = 2x (mod 1). > g1:=pointplot([seq([seed[n]],n=1..Length)]): > g2:=listplot([[1,0],[1,1],[0,1]],labels=["x","y"]): > display(g1,g2); See Fig. 7.7. Plot the ﬁrst 50 consecutive positions of a random walker as time passes by. > g3:=listplot([seq([n,seed[n][2]],n=0..50)]): > g4:=pointplot([seq([n,seed[n][2]],n=0..50)], symbol=circle,labels=["n","y"]): > display(g3,g4); See Fig. 7.8. 7.7.4 Skew product transformation and random walks on a cyclic group ∞ Let G = Zn = {0, 1, . . . , n − 1} and X = 1 {1, 2, 3, 4}. Deﬁne φ : X → Zn by φ(x) = φ(x1 , x2 , x3 , . . .) = g[k] if x1 = k , 1 ≤ k ≤ 4 . Deﬁne Tφ on X × Zn by Tφ (x, y) = (T x, y + φ(x)). It describes the random walks on the 1-dimensional lattice Zn with equal probability 14 of moving in one of four ways y → y + g[k] (mod n), y ∈ Zn , 1 ≤ k ≤ 4. g[1]:=1: g[2]:=-1: g[3]:=2: g[4]:=0: This deﬁnes φ(x). > SampleSize:=10000: > n:=10: > Start:=2: Choose a starting position of a random walker. > for i from 1 to SampleSize do > Position:=Start mod n; > k:=rand(1..4); > Position:=Position+g[k()] mod n: > T:=1; > while modp(Position,n) Start do > Position:=Position+g[k()] mod n; > T:=T+1; > od; > Arrival_Time[i]:=T; > od: Suppose a walker starts from y0 . In terms of Tφ , this corresponds to a starting point in X ×{y0 }, which has measure n1 . Kac’s lemma implies that the average of the ﬁrst return time to y0 is equal to n if Tφ is ergodic. > evalf(add(Arrival_Time[i],i=1..SampleSize)/SampleSize); 9.992300000 >

234

7 Mod 2 Uniform Distribution

7.7.5 How to plot points on the graph of a cobounding function We explain how to plot points on the graph of a cobounding function q(x) using the mathematical pointillism. > with(plots): Deﬁne a function φ(x) = 41 − |x − 21 |. > phi:=x->1/4-abs(x-1/2): Plot y = φ(x). > plot(phi(x),x=0..1); See Fig. 7.15.

0.4 y

0.2

0 –0.2

x

1

–0.4

Fig. 7.15. y = φ(x)

Recall that frac(x) does not produce the fractional part of x < 0. Consult Subsect. 1.8.6. > fractional:=x->x-floor(x): > Digits:=30: > m:=1: Observe what happens as m increases. > theta:=evalf((-m + sqrt(m^2+4))/2): Deﬁne a skew product transformation Tφ . > T_phi:=(x,y)->(frac(x+theta),fractional(y+phi(x))): Choose a cobounding function q(x) such that q(0) = 0.7. > seed[0]:=(0,0.7): > Length:=1000: > for i from 1 to Length do > seed[i]:=T_phi(seed[i-1]): > od: > Digits:=10: > pointplot([seq([seed[i]],i=1..Length)],symbolsize=1); See the right graph in Fig. 7.10.

7.7 Maple Programs

235

7.7.6 Fourier series of a cobounding function Approximate a cobounding function q(x) by the Fourier series. > with(plots): > phi:=x->1/4-abs(x-1/2): Recall that frac(x) does not produce the fractional part of x < 0. > fractional:=x->x-floor(x): See Fig. 1.9. > m:=1: Try other values for m. > theta:=evalf((-m + sqrt(m^2+4))/2): Compute the nth Fourier coeﬃcient bn of φ as in Subsect. 1.8.9. > N:=30: > for n from 1 to N do > b[n]:=1/2*((-1)^n-1)/Pi^2/n^2: > od: Find an approximate solution q1 (x) for a coboundary. > q1:=x->2*Re(add(b[n]/(exp(2*Pi*I*n*theta)-1)* exp(2*Pi*I*n*x),n=1..N)); bn e(2 I π n x) , n = 1..N )) e(2 I π n θ) − 1 Modify q1 (x) so that the new function qF (x) satisﬁes qF (0) = 0.7. > q_F:=x->evalf(fractional(q1(x)-q1(0)+0.7)): > plot(q_F(x),x=0..1); See the left graph in Fig. 7.16. Compare it with the right graph in Fig. 7.10. q1 := x → 2 (add(

1 0.4 y y

0.2

0 –0.2 0

x

1

x

1

–0.4

Fig. 7.16. y = qF (x) (left) and y = qF (x + θ) − qF (x) (right)

plot(q_F(frac(x+theta)) - q_F(x),x=0..1); See the right graph in Fig. 7.16. Compare it with Fig. 7.15. >

236

7 Mod 2 Uniform Distribution

7.7.7 Numerical computation of a lower bound for n||nθ|| Check the existence of K > 0 such that n||nθ|| ≥ K for every n ≥ 1. Recall that the distance from x ∈ R to Z satisﬁes ||x|| = min({x}, 1 − {x}). > with(plots): > Digits:=100: > m:=1: Try other values for m. > theta:=evalf((-m + sqrt(m^2+4))/2): > L:=300: > for n from 1 to L do > dist[n]:=min(1-frac(n*theta),frac(n*theta)): > od: > Digits:=10: In the following we are interested in the lower bound, and any value greater than 3 is omitted. In g3, the Greek letter θ is represented by q in SYMBOL font. > g1:=pointplot([seq([n,n*dist[n]],n=1..L)],symbol=circle, labels=["n","n||n||"]): > g2:=plot(0,x=0..L,y=0..3): > g3:=textplot([[0,0.5,‘q‘]],font=[SYMBOL,10]): > display(g1,g2,g3); See Fig. 7.17 3 2 n ||n θ || 1

0

100

n

200

300

√ Fig. 7.17. n||nθ||, 1 ≤ n ≤ 300, is bounded from below where θ = ( 5 − 1)/2

To check that n*dist[n] is bounded from below, we check the boundedness of 1/n/dist[n]. > pointplot([seq([n,1/n/dist[n]],n=1..L)],symbol=circle); See Fig. 7.11.

8 Entropy

What is randomness? How much randomness is there in a set of data? To utilize any possible deﬁnition of randomness in the study of unpredictable phenomena, we ﬁrst deﬁne a concept of randomness in a formal mathematical way and see whether it leads to any reasonably acceptable or applicable conclusion. In 1948 C. Shannon [Sh1] deﬁned the amount of randomness, and called it entropy, which was the starting point of information theory. Entropy is the central idea in his theory and it measures the amount of randomness in a sequence on ﬁnitely many symbols. Shannon may be regarded as the founder of modern digital communication technology. A.N. Kolmogorov and Ya. Sinai extended the deﬁnition of entropy and used it to study general measure preserving transformations in ergodic theory. In data compression, which is one of main subjects in information theory, a given transformation is always a shift transformation on a shift space, and this is why a transformation is not mentioned explicitly. The most important result on entropy is the Shannon–McMillan–Breiman Theorem. For the entire collection of Shannon’s work see [Sh2]. For brief surveys on Shannon’s lifelong achievement consult [BCG]. For the physical meaning of entropy see [At].

8.1 Deﬁnition of Entropy The following is a motivational introduction to entropy. First, imagine that there is an experiment with the possible outcome given by a ﬁnite set A = {a1 , . . . , ak }. Suppose that the probability of obtaining the result ai is pi , 0 ≤ pi ≤ 1, p1 + · · · + pk = 1. If one of the ai , let us say a1 , occurs with probability p1 that is close to 1, then in most trials the outcome would be a1 . If a1 indeed happens, we would not be surprised. If we quantitatively measure the magnitude of ‘being surprised’ is a certain sense, then it would be almost equal to zero if p1 is close to 1. There is not much information gained after the experiment. How can we deﬁne the amount of information? Here is a clue from biology: When we receive a signal or an input or a stimulus from a source, we

238

8 Entropy

perceive it using our sensory organs - eyes, ears, skin, nose, etc. It is a well established fact that the magnitude of our perception is proportional to the logarithm of the magnitude of the stimulus. In this sense, the deﬁnition of the Richter scale for earthquakes and the method of classifying stars according to their brightness are very well suited to our daily experiences. It is a reasonable guess or an approximation to the real world that, for each possible outcome ai with probability pi , the magnitude of surprise given by the source or stimulant would be equal to 1/pi . Hence the perceived magnitude of stimulus or surprise would be roughly equal to log(1/pi ). In short, information = − log (probability) . Since any ai can happen, we take the expectation or average over all the ai thus obtaining the value 1 . pi log pi i It is called the entropy of A. (We adopt the convention that 0 × log 0 = 0.) A rigorous mathematical formulation is given as follows: An experiment corresponds to a measurable partition P = {E1 , . . . , Ek } of a probability space (X, µ). Deﬁne the entropy of a partition P by 1 =− pi log pi pi log H(P) = pi i i where pi = µ(Ei ) and the base of the logarithm is 2 throughout this chapter unless otherwise stated. (Theoretically it does not matter which base we choose for logarithms. But, when the partition has two subsets, the possible maximal value for entropy is log 2, and if we choose the logarithmic base 2, then the maximal value for entropy is log 2 = 1. This choice is convenient especially when we discuss data compression ratio of a binary sequence in Chap. 14.) In this book we always assume that a partition of a probability measure space consists of measurable subsets. By removing subsets of measure zero from P if necessary, we may assume that pi = µ(Ei ) > 0 for every i. Lemma 8.1. If a partition P consists of k subsets, then H(P) ≤ log k. Proof. Let P = {E1 , . . . , Ek }. Recall that ln a/ ln 2 = log2 a for a > 0. 
Hence it is enough to show that −

k

pi ln pi ≤ ln k

i=1

using the Lagrange multiplier method. (Here we use the natural logarithm for the simplicity of diﬀerentiation.) Put

k k pi ln pi + λ pi − 1 . f (p1 , . . . , pk , λ) = − i=1

i=1

8.1 Deﬁnition of Entropy

239

Solving ∂f = − ln pi − 1 + λ = 0 , ∂pi together with the constraint

1≤i≤k,

p1 + · · · + pk = 1 , we observe that the maximum is obtained only when p1 = · · · = pk = Hence the maximum is equal to ln k.

1 k.

Given two partitions P and Q we let P ∨ Q be the partition, called the join of P and Q, consisting of the subsets of the form B ∩ C where B ∈ P and C ∈ Q. The join of more than two partitions is deﬁned similarly. Suppose that T : X → X is a measure preserving transformation. For a partition P we let T −j P = {T −j E1 , . . . , T −j Ek } and

Pn = P ∨ T −1 P ∨ · · · ∨ T −(n−1) P

and deﬁne the entropy of T with respect to P by h(T, P) = lim

n→∞

1 H(Pn ) . n

It is known that the sequence here is monotonically decreasing and that the limit exists. Finally we deﬁne the entropy h(T ) of T by h(T ) = sup h(T, P) P

where P ranges over all ﬁnite partitions. When there is a need to emphasize the measure µ under consideration, we write hµ (T ) in place of h(T ). Sometimes h(T ) is called the Kolmogorov–Sinai entropy. Suppose that we are given a measure preserving transformation T on (X, A). A partition P is said to be generating if the σ-algebra generated by T −n P, n ≥ 0, is A. It is known that if P is generating then h(T ) = h(T, P) . Theorem 8.2. If a partition P consists of k subsets, then h(T, P) ≤ log k. Proof. Let P = {E1 , . . . , Ek }. Then Pn consists of at most k n subsets. By

Lemma 8.1 we have H(Pn ) ≤ n log k, and hence H(Pn )/n ≤ log k. Example 8.3. An irrational translation T x = x + θ (mod 1) has :entropy 9 zero. For the proof, take a generating partition P = [0, 12 ), [ 12 , 1) . Then Pn consists of 2n disjoint intervals (or arcs) if we identify the unit interval [0, 1) with the unit circle. By Lemma 8.1 we have √ H(Pn ) ≤ log 2n, and h(T ) = limn→∞ n1 H(Pn ) = 0. See Fig. 8.1 where θ = 2 − 1. Observe that the line graph is under the curve y = log(2x)/x.

240

8 Entropy 1

0.5

0

5

n 10

15

Fig. 8.1. y = H(Pn )/n, 1 ≤ n ≤ 15, for T x = {x + θ} together with y = log(2x)/x

Recall Deﬁnition 2.22. Let (X, µ) and (Y, ν) be two probability spaces. Suppose that T : X → X and S : Y → Y preserve µ and ν, respectively. If φ : X → Y is measure preserving and satisﬁes φ ◦ T = S ◦ φ, then S is called a factor of T and φ a factor map. The following result shows that a factor map cannot increase entropy. Theorem 8.4. If S : Y → Y is a factor of T : X → X, then h(S) ≤ h(T ). Proof. Let φ : X → Y be a factor map and let Q = {F1 , . . . , Fk } be a generating partition of Y . Put Ei = φ−1 (Fi ), 1 ≤ i ≤ k. Then µ( i Ei ) = 1 and hence P = φ−1 Q = {E1 , . . . , Ek } is a partition of X. Note that P need not ;n−1 ;n−1 be generating with respect to T . Put Qn = j=0 S −j Q and Pn = j=0 T −j P. Take E ∈ Pn . Then E is of the form E = Ei1 ∩ T −1 Ei2 ∩ · · · ∩ T −(n−1) Ein and hence E = φ−1 Fi1 ∩ T −1 φ−1 Fi2 ∩ · · · ∩ T −(n−1) φ−1 Fin = φ−1 Fi1 ∩ φ−1 S −1 Fi2 ∩ · · · ∩ φ−1 S −(n−1) Fin = φ−1 (Fi1 ∩ S −1 Fi2 ∩ · · · ∩ S −(n−1) Fin ) .

Hence Pn = φ−1 Qn = {φ−1 F : F ∈ Qn } for every n, and µ(φ−1 F ) log µ(φ−1 F ) H(Pn ) = − F ∈Qn

=−

ν(F ) log ν(F ) = H(Qn ) .

F ∈Qn

Thus

1 1 H(Pn ) = lim H(Qn ) = h(S, Q) . n→∞ n n Hence h(T ) ≥ h(T, P) = h(S, Q) = h(S). The last equality holds since Q is generating.

h(T, P) = lim

n→∞

Recall that (X, µ) and (Y, ν) are said to be isomorphic if there exists an almost everywhere bijective mapping φ : X → Y such that ν(φ(E)) = µ(E)

8.1 Deﬁnition of Entropy

241

for every measurable subset E ⊂ X. Furthermore, if two measure preserving transformations T : X → X and S : Y → Y satisfy φ ◦ T = S ◦ φ, then they are said to be isomorphic. Consult Sect. 2.4 for the details. Corollary 8.5. If T : X → X and S : Y → Y are isomorphic, then they have the same entropy. Proof. Let φ : X → Y be a factor map. By applying Theorem 8.4 for φ and φ−1 , we obtain h(S) ≤ h(T ) and h(T ) ≤ h(S), respectively. Hence h(T ) = h(S).

Remark 8.6. If a factor map φ is almost everywhere ﬁnite-to-one, i.e., the cardinality of φ−1 ({y}) is ﬁnite for almost every y ∈ Y , then h(T ) = h(S). Example 8.7. The entropy of T x = 2x 9(mod 1) is equal to log 2 = 1. For the : proof, take a generating partition P = [0, 21 ), [ 21 , 1) . Then Pn consists of 2n intervals of the form (j − 1) × 2−n , j × 2−n , 1 ≤ j ≤ 2n . Since H(Pn ) = n log 2 = n, we have h(T ) = limn→∞ n1 H(Pn ) = 1. The logistic transformation is isomorphic to T x = 2x (mod 1), and the given partition P is generating for both transformations. Hence in both cases H(Pn )/n converges to the same entropy, which is marked by the horizontal lines in Fig. 8.2. See Maple Program 8.7.1.

1

1

0.5

0.5

0

n

10

0

n

10

Fig. 8.2. y = H(Pn )/n, 1 ≤ n ≤ 10, for T x = {2x} (left) and T x = 4x(1 − x) (right)

Theorem 8.8. Let T : X → X and S : Y → Y be measure preserving transformations. Then we have the following: (i) If n ≥ 1, then h(T n ) = n h(T ). (ii) If T is invertible, then h(T −1 ) = h(T ). (iii) Deﬁne the product transformation T × S on X × Y by (T × S)(x, y) = (T x, Sy) for x ∈ X and y ∈ Y . Then T × S preserves the product measure and h(T × S) = h(T )h(S). Proof. Consult [Pet],[Wa1].

242

8 Entropy

8.2 Entropy of Shift Transformations Let A = {s1 , . . . , sk } be a set of symbols. Consider a shift transformation T ∞ on X = 1 A. An element x ∈ X is of the form x = (x1 . . . xn . . .), xn ∈ A. Let P be the partition of X by subsets [si ] = {x ∈ X : x1 = si } ,

1≤i≤k.

Then Pn consists of cylinder sets of the form [a1 , . . . , an ] = {x ∈ X : x1 = a1 , . . . , xn = an } for (a1 , . . . , an ) ∈ An , and P is generating. For shift spaces in this book we always take the same partition P. Suppose that X is a Bernoulli shift where each symbol si has probability pi . Then pi1 · · · pin log(pi1 · · · pin ) = −n H(Pn ) = − pi log pi , i1 ,...,in

and h(T ) = −

i

i

pi log pi . Thus we have the following theorem.

Theorem 8.9. The entropy of the (p1 , . . . , pk )-Bernoulli shift is equal to −

k

pi log pi .

i=1

The theorem is a generalization of Ex. 8.7 since T x = 2x (mod 1) is isomorphic to the ( 12 , 12 )-Bernoulli shift. Using the Lagrange multiplier method as in Lemma 8.1, we see that −

k

pi log pi ≤ log k

i=1

and the maximum occurs if and only if p1 = · · · = pk = k1 . For k = 2, see the graph y = H(p) = −p log p − (1 − p) log(1 − p) in Fig. 8.3. D. Ornstein proved that entropy completely classiﬁes the bi-inﬁnite (i.e., two-sided) Bernoulli shifts; in other words, two bi-inﬁnite Bernoulli shifts are isomorphic if and only if they have the same entropy. For more information, consult [Pet]. A Markov shift space with a stochastic matrix P = (pij )1≤i,j≤k is the set of all inﬁnite sequences on symbols {1, . . . , k} with the probability of transition from i (at time s) to j (at time s + 1) given by Pr(xs+1 = j | xs = i) = pij . If P is irreducible and aperiodic, then all the eigenvalues other than 1 have modulus less than 1. The irreducibility of P implies the ergodicity of the

8.2 Entropy of Shift Transformations

243

1

H(p)

0

p

1

Fig. 8.3. y = H(p), 0 ≤ p ≤ 1

Markov shift, and the aperiodicity together with the irreducibility gives the mixing property. Let π = (π1 , . . . , πk ), i πi = 1, πi > 0, be the unique left eigenvector corresponding to the simple eigenvalue 1, which is called the Perron–Frobenius eigenvector. The measure of a cylinder set [a1 , . . . , an ], ai ∈ {1, . . . , k}, 1 ≤ i ≤ n, is given by πa1 pa1 a2 · · · pan−1 an . Consult Sect. 2.3. Theorem 8.10. The entropy of a Markov shift is equal to πi pij log pij . − i,j

Proof. Let P be the partition given by the 1-blocks [s]1 , s ∈ {1, . . . , k}. Then ;n−1 Pn = i=0 T −i P consists of cylinder sets of length n. Since i πi pij = πj and j pij = 1, we have H(Pn+1 ) k

=−

πi0 pi0 i1 · · · pin−1 in log(πi0 pi0 i1 · · · pin−1 in )

i0 ,i1 ,...,in =1 k

=−

πi0 pi0 i1 · · · pin−1 in log πi0 + log pi0 i1 + · · · + log pin−1 in

i0 ,i1 ,...,in =1

=−

k

πi log πi − n

i=1

k

Therefore h = lim

n→∞

πi pij log pij .

i,j=1

1 πi pij log pij . H(Pn+1 ) = − n+1 i,j

244

8 Entropy

If we have a Bernoulli shift, then pij = pj for every i. Hence πj = pj for every j. Since i pi = 1, the entropy is equal to pi pj log pj = − pj log pj , − i

j

j

which is given in Theorem 8.9. For simulations see Fig. 8.4 and Maple Program 8.7.2. Observe that the left graph for the ( 41 , 43 )-Bernoulli shift with its entropy ≈ 0.8113 is almost horizontal since H(Pn )/n is constant forevery n ≥ 1. The right graph for the Markov 1/10 9/10 shift associated with the matrix decreases monotonically, which 3/4 1/4 is a theoretically known fact, and converges to its entropy ≈ 0.6557. The horizontal line in each graph indicates the entropy. 1

1

0.8

0.8

0.6

0.6

0.4

0.4

0.2

0.2

0

0

10

n

n

10

Fig. 8.4. y = H(Pn )/n, 1 ≤ n ≤ 10, for the (1/4, 3/4)-Bernoulli shift (left) and a Markov shift (right)

Example 8.11. Using the isomorphism given in Ex. 2.32 we can ﬁnd the entropy of the beta transformation T x = βx (mod 1). Recall that β12 + β1 = 1 and β − 1 = β1 . The stochastic matrix 1/β 1/β 2 P = 1 0 has a row eigenvector (β 2 , 1) corresponding to the eigenvalue 1. Hence the Perron–Frobenius eigenvector π = (π0 , π1 ) is given by π0 =

β2 , +1

β2

π1 =

1 . β2 + 1

Hence the entropy is equal to β2 1 1 1 1 1 − 2 log + 2 log 2 − 2 (1 × log 1 + 0 × log 0) = log β . β +1 β β β β β +1 See Fig. 8.5. The horizontal line indicates the entropy log β ≈ 0.694.

8.3 Partitions and Coding Maps

245

1 0.8 0.6 0.4 0.2 0

n

10

Fig. 8.5. y = H(Pn )/n, 1 ≤ n ≤ 10, for T x = {βx} in Ex. 8.11

8.3 Partitions and Coding Maps Recall the deﬁnitions in Sect. 2.5. Note that the coding map φ is a factor map. Example 8.12. (Nongenerating partition) It is important to take a generating partition in computing the entropy h(T ). If a partition P is not generating, then it is possible that h(T, P) < h(T ) and that we may underestimate the entropy of T . Consider T x = 2x (mod 1) and & 0 1 1 P = E0 = [0, ), E1 = [ , 1) . 4 4 Then P is not generating, and h(T, P) < h(T ) = log 2. See Fig. 8.6. 1 0.8 0.6 0.4 0.2 0

n

10

Fig. 8.6. y = H(Pn )/n, 1 ≤ n ≤ 10 for T x = {2x} and a nongenerating partition P in Ex. 8.12

Example 8.13. When a partition P consists of many subsets, there is a good chance that h(T, P) is close to h(T ). Consider T x = 2x (mod 1) and & 0 1 1 2 2 P = E0 = [0, ), E1 = [ , ), E2 = [ , 1) . 3 3 3 3 Then P is generating, and h(T, P) = h(T ) = log 2. See Fig. 8.7.

246

8 Entropy 1.5 1 0.5 0

10

n

Fig. 8.7. y = H(Pn )/n, 1 ≤ n ≤ 10 for T x = {2x} and P in Ex. 8.13

Example 8.14. A generating partition deﬁnes an almost everywhere one-to-one √ coding map in general. Consider T x = x + θ (mod 1), θ = 2 − 1, and & 0 1 1 P = E0 = [0, ), E1 = [ , 1) . 2 2 Note that P is generating. See the left diagram in Fig. 8.8, which employs the mathematical pointillism. The ﬁrst ﬂoor represents φ−1 [1] = E1 , which is the cylinder set of length 1. The second ﬂoor represents the cylinder set of length 2 given by φ−1 [1, 0] = E1 ∩ T −1 E0 , and so on. Finally the 5th ﬂoor represents the cylinder set of length 5 given by φ−1 [1, 0, 1, 1, 0] = E1 ∩ T −1 E0 ∩ T −2 E1 ∩ T −3 E1 ∩ T −4 E0 . For the representation y = ψ(x) of the coding map see the right plot in Fig. 8.8. The graph of y = ψ(x) looks almost locally ﬂat for the following reasons: First, for each n there are 2n cylinder sets of length n, and ψ(x) is constant on each cylinder set. Thus there are only 2n possible values for ψ out of 2n values of the form k × 2−n , 0 ≤ k < 2n . Second, since the entropy of T is zero, orbits of two nearby points x and x stay close to each other for a long time. Thus ψ(x) and ψ(x ) seem to be equal to the naked eye. 5

1

y

0

x

1

0

x

1

Fig. 8.8. Cylinder sets of length n, 1 ≤ n ≤ 5, (left) and y = ψ(x) (right) for T x = {x + θ} and P in Ex. 8.14

8.3 Partitions and Coding Maps

247

Example 8.15. If the coding map deﬁned by a partition P gives an almost everywhere ﬁnite-to-one coding map, then h(T ) = h(T, P) even when P is not generating. Consider T x = 2x (mod 1) and the partition & 0 1 3 1 3 P = E0 = [ , ), E1 = [0, ) ∪ [ , 1) . 4 4 4 4 To show that P is not generating, we observe that a cylinder set is a union of two intervals. See the left diagram in Fig. 8.9 and Maple Program 8.7.3, which employ the mathematical pointillism. The ﬁrst ﬂoor represents φ−1 [1] = E1 , which is the cylinder set of length 1. The second ﬂoor represents the cylinder set of length 2 given by φ−1 [1, 0] = E1 ∩ T −1 E0 , and so on. Finally the 5th ﬂoor represents the cylinder set of length 5 given by φ−1 [1, 0, 0, 1, 0] = E1 ∩ T −1 E0 ∩ T −2 E0 ∩ T −3 E1 ∩ T −4 E0 . Note that the cylinder sets are symmetric with respect to x = 21 . The coding map ψ satisﬁes ψ(x) = ψ(1 − x). It is a 2-to-1 map. Since x + (1 − x) = 1 and T x + T (1 − x) = 1 for every x, two points x and 1 − x that are symmetric with respect to x = 21 are mapped by T to points also symmetric with respect to x = 21 . Since E0 and E1 are symmetric with respect to x = 21 , a point x belongs Ei if and only if 1 − x belongs to Ei . This implies that x and 1 − x are coded into the same binary sequence with respect to P. The right graph shows that entropy is equal to h(T ) = log 2 = h(T, P). 5

1 1 y

0

x

1

0.5

0

x

1

0

n

10

Fig. 8.9. Cylinder sets of length n, 1 ≤ n ≤ 5, y = ψ(x) and y = H(Pn )/n for T x = {2x} and P in Ex. 8.15 (from left to right)

Example 8.16. Consider T x = 2x (mod 1) and the partition & 0 1 1 P = E0 = [0, ), E1 = [ , 1) . 4 4 To show that P is not generating, we observe that a cylinder set is a union of many intervals for large n. See the left diagram in Fig. 8.10, which employs the

248

8 Entropy

idea of the mathematical pointillism. The ﬁrst ﬂoor represents φ−1 [1] = E1 , which is the cylinder set of length 1. The second ﬂoor represents the cylinder set of length 2 given by φ−1 [1, 1] = E1 ∩ T −1 E1 , and so on. Finally the 5th ﬂoor represents the cylinder set of length 5 given by φ−1 [1, 1, 1, 1, 1] = E1 ∩ T −1 E1 ∩ T −2 E1 ∩ T −3 E1 ∩ T −4 E1 . Fig. 8.6 shows that h(T, P) < h(T ), which implies that ψ is not a ﬁnite-to-one map. See the right plot in Fig. 8.10. 5

1

y

0

x

1

0

x

1

Fig. 8.10. Cylinder sets of length n, 1 ≤ n ≤ 5, (left) and y = ψ(x) (right) for T x = {2x} and P in Ex. 8.16

8.4 The Shannon–McMillan–Breiman Theorem Let P be a generating partition of a probability space (X, µ). Let Pn (x) be the unique member in Pn containing x ∈ X. Then the function x → µ(Pn (x)) is constant on each subset in Pn , and µ(Pn (x)) log µ(Pn (x)) = − log µ(Pn (x)) dµ H(Pn ) = − Pn (x)∈Pn

X

where the sum is taken over all subsets in Pn . Hence 1 h(T, P) = − lim log µ(Pn (x)) dµ . n→∞ n X Theorem 8.17 (The Shannon–McMillan–Breiman Theorem). If T is an ergodic transformation on (X, µ) with a ﬁnite generating partition P, then for almost every x log µ(Pn (x)) − lim = h(T ) . n→∞ n (If we integrate the left hand side of the equality, we obtain the deﬁnition of entropy.)

8.4 The Shannon–McMillan–Breiman Theorem

249

Proof. We will prove the theorem only for special cases. For general cases see ∞ [Bi],[Pet]. First we consider a Bernoulli shift T on X = 1 {1, . . . , k} where the symbol i ∈ {1, . . . , k} has probability pi . Then Pn (x) = {y ∈ X : y1 = x1 , . . . , yn = xn } = [x1 , . . . , xn ] for x = (x1 x2 x3 . . .) ∈ X and µ(Pn (x)) = px1 · · · pxn . Deﬁne fk : X → R by fk ((x1 x2 x3 . . .)) = pxk . Then fk (x) = f1 (T k−1 (x)). The Birkhoﬀ Ergodic Theorem implies that for almost every x 1 1 log fk (x) log µ(Pn (x)) = lim n→∞ n n→∞ n n

lim

= lim

n→∞

log f1 (T k−1 (x))

k=1

log f1 (x) dµ(x)

= =

1 n

k=1 n

X k

pi log pi .

i=1

k The last equality follows from the fact that X = i=1 [i], µ([i]) = pi and f1 (x) = pi on [i]. Next, for a Markov shift with a stochastic matrix (pij ), we have µ(Pn (x)) = πx1 px1 x2 · · · pxn−1 xn . Deﬁne fk,j : X → R by fk,j ((x1 x2 x3 . . .)) = pxk xj . Note that fk,k+1 (x) = f1,2 (T k−1 (x)). The Birkhoﬀ Ergodic Theorem implies that for almost every x n−1 1 1 1 log fk,k+1 (x) log πx1 + lim log µ(Pn (x)) = lim n→∞ n n→∞ n n→∞ n i=1

lim

1 log f1,2 (T k−1 (x)) n→∞ n i=1 log f1,2 (x) dµ(x) = n−1

= lim

X

=

k

πi pij log pij .

i,j=1

The last equality follows from the fact that X = and f1,2 (x) = pij on [i, j].

k

i,j=1 [i, j],

µ([i, j]) = πi pij

250

8 Entropy

C. Shannon [Sh1] proved Theorem 8.17 for Markov shifts, B. McMillan [Mc1] for convergence in measure, L. Carleson [Ca] and L. Breiman [Brei] for pointwise convergence, A. Ionescu Tulcea [Io] for Lp -convergence, and K.L. Chung [Chu] for cases with countably many symbols and ﬁnite entropy. A simulation result for the ( 32 , 31 )-Bernoulli shift is given in Fig. 8.11 where we plot y = − n1 log Pn , 10 ≤ n ≤ 1000, evaluated at a single sequence. The entropy is equal to 0.918 . . . and is indicated by the horizontal line. For a simulation with a Markov shift see Maple Program 8.7.4 and Fig. 8.15.

1

0.5

0

n

1000

Fig. 8.11. The Shannon–McMillan–Breiman Theorem for the ( 32 , 31 )-Bernoulli shift

Example 8.18. Fig. 8.12 shows the pointwise convergence of − log√ µ(Pn (x0 ))/n to entropy as n → ∞ at x0 = π − 3 for T x = x + θ (mod 1), θ = 2 − 1, T x = 2x (mod 1), and T x = βx (mod 1) with respect to generating partitions P. See Maple Program 8.7.5. 1

2 1

0.8 0.6

1

0.5

0.4 0.2

0

n

10

0

n

10

0

n

10

Fig. 8.12. y = − log µ(Pn (x0 ))/n, 1 ≤ n ≤ 10, for T x = {x + θ}, T x = {2x} and T x = {βx} (from left to right)

Let X be a shift space and T the left-shift. Consider the partition P given by the ﬁrst coordinate. Let Pn (x) be the relative frequency of the ﬁrst n-block x1 . . . xn in the sequence x = (x1 x2 x3 . . .), i.e.,

8.4 The Shannon–McMillan–Breiman Theorem

251

1 card{0 ≤ j < N : x1+j = x1 , . . . , xn+j = xn } . N →∞ N

Pn (x) = lim

Note that the Birkhoﬀ Ergodic Theorem implies that Pn (x) = µ(Pn (x)) where P is the standard partition of a shift space according to the ﬁrst coordinates. Note that Pn (x) = Pn (y) if and only if xj = yj , 1 ≤ j ≤ n. Now the Shannon– McMillan–Breiman Theorem states that for almost every x h = lim

n→∞

1 1 . log Pn (x) n

This indicates that an n-block has measure roughly equal to 2−nh . Since the sum of their measures should be close to 1, there are approximately 2nh of them. More precisely, we have the following theorem. Theorem 8.19 (The Asymptotic Equipartition Property). Let (X, µ) be a probability space and T be an ergodic µ-invariant transformation on X. Suppose that P is a ﬁnite generating partition of X. For every ε > 0 and n ≥ 1 there exist subsets in Pn , which are called (n, ε)-typical subsets, satisfying the following: (i) for every typical subset Pn (x), 2−n(h+ε) < µ(Pn (x)) < 2−n(h−ε) , (ii) the union of all (n, ε)-typical subsets has measure greater than 1 − ε, and (iii) the number of (n, ε)-typical subsets is between (1 − ε)2n(h−ε) and 2n(h+ε) . Proof. First, note that for each n ≥ 1 the function x → µ(Pn (x)) is constant on each subset Pn (x) ∈ Pn . The Shannon–McMillan–Breiman Theorem implies that the sequence of the functions 1 log µ(Pn (x)) n converges to h in measure as n → ∞. Hence for every ε > 0 there exists N such that n ≥ N implies that 0 & 1 0 take suﬃciently large n so that the measure of the union of (n, ε)-typical blocks from X is at least 1 − ε. Take m satisfying m log2 M ≥ n(h + ε) . Since the number of (n, ε)-typical blocks is bounded by 2n(h+ε) , which is again bounded by M m , a diﬀerent codeword can be assigned to each source block that is an (n, ε)-typical block in X. At worst, only the source blocks that are

8.6 Asymptotically Normal Distribution of (− log Pn )/n

253

not (n, ε)-typical will not receive distinct codewords and the probability that a source block does not receive its own codeword is less than ε. A reﬁnement of this idea can be used to compress a message. If there are more patterns, i.e., less randomness in a given sequence, then it has less entropy and can be compressed more. In data compression, which is widely used in digital signal transmission and data storage algorithms, entropy is the best possible compression ratio. For more on data compression see Chap. 14.

8.6 Asymptotically Normal Distribution of (− log Pn)/n It is known that for a mixing Markov shift the values of −(log Pn )/n are approximately normally distributed for large n. In other words, the Central Limit Theorem holds. More precisely, let Φ be the density function of the standard normal distribution given in Deﬁnition 1.43 and put Var[− log Pn (x)] , n→∞ n

σ 2 = lim

where ‘Var’ stands for variance and σ is called the standard deviation. Then & 0 α − log Pn (x) − nh √ x: lim µ ≤α = Φ(t) dt . n→∞ σ n −∞ See [Ib],[Yu]. In Fig. 8.13 we take n = 30 and choose sample values at x = T j−1 x0 , 1 ≤ j ≤ 105 , where x0 is a typical sequence in the ( 32 , 31 )-Bernoulli shift space. The standard normal distribution curve is drawn together in each plot. See Maple Program 8.7.7. 2

0.4

1

–4

–2

0

2

Z

4

–4

–2

0

2

Z

4

√ Fig. 8.13. Probability distributions of Z = (− log Pn − nh)/(σ n) for the (p, 1− p)Bernoulli shift, p = 2/3, with the number of bins equal to 120 (left) and 24 (right)

Remark 8.20. For two invertible transformations T, S such that T S = ST , we can deﬁne the entropy of {T m S n : (m, n) ∈ Z2 } on each ray in R2 , which is called the directional entropy. For more information consult [Pkoh].

254

8 Entropy

8.7 Maple Programs 8.7.1 Deﬁnition of entropy: the logistic transformation Calculate the entropy of the logistic transformation using the deﬁnition. > with(plots): > T:=x->4*x*(1-x); > entropy:=log[2](2): > Max_Block:=10: > L:=5000: > L+Max_Block-1; 5009 Generate a sequence of length L + Max Block − 1. We do not need many digits since we use the Birkhoﬀ Ergodic Theorem. > Digits:=100: Choose a partition P = {E0 , E1 } where E0 = [0, a) and E1 = [a, 1). > a:=evalf(0.5): Choose a starting point of an orbit. > seed:=evalf(Pi-3): > for k from 1 to L+Max_Block-1 do > seed:=evalf(T(seed)): > if seed < a then b[k]:=0: else b[k]:=1: fi: > od: > evalf(add(b[k],k=1..L+Max_Block-1)/(L+Max_Block-1)); 0.5056897584 Maple does not perfectly recognize the formula 0 log 0 = 0. > 0*log(0); 0 > K:=0: > K*log(K); Error, (in ln) numeric exception: division by zero

In the following not every cylinder set has positive measure. To avoid log 0 we use log(x + ε) ≈ log x for suﬃciently small ε > 0. > epsilon:=10.0^(-10): > for n from 1 to Max_Block do > for i from 0 to 2^n-1 do freq[n,i]:=0: od: > for k from 0 to L-1 do > num:=add(b[k+j]*2^(n-j),j=1..n): > freq[n,num]:= freq[n,num] + 1: > od: > h[n]:=-add(freq[n,i]/L*log10(freq[n,i]/L+epsilon),i= 0..2^n-1)/n: > od:

8.7 Maple Programs

255

Draw a graph. > g1:=listplot([seq([n,h[n]],n=1..Max_Block)]): > g2:=pointplot([seq([n,h[n]],n=1..Max_Block)], labels=["n"," "],symbol=circle): > g3:=plot(entropy,x=0..Max_Block): > display(g1,g2,g3); See Fig. 8.2. 8.7.2 Deﬁnition of entropy: the Markov shift Here is a simulation for the deﬁnition of entropy of a Markov shift. > with(plots): > with(linalg): > P:=matrix(2,2,[[1/10,9/10],[3/4,1/4]]); ⎡ 1 9 ⎤ ⎢ 10 10 ⎥ P := ⎣ ⎦ 3 1 4 4 > eigenvectors(transpose(P)); 6 −13 , 1, {[−1, 1]}], [1, 1, { 1, }] [ 5 20 Find the Perron–Frobenius eigenvector. > v:=[5/11,6/11]: Check whether v is an eigenvector with the eigenvalue 1. > evalm(v&*P); 5 6 , 11 11 > entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]),j=1..2), i=1..2); entropy := 0.6556951559 Choose the maximum block size. > N:=10: Generate a typical Markov sequence of length L + N − 1. > L:=round(2^(entropy*N))*100; L := 9400 If L is not suﬃciently large, then the blocks of small measure are not sampled, and the numerical average of log(1/Pn (x)) is less than the theoretical average. Consult a similar idea in Maple Program 12.6.2. > ran0:=rand(0..9): > ran1:=rand(0..3): > x[0]:=1:


> for j from 1 to L+N-1 do
>   if x[j-1]=0 then x[j]:=ceil(ran0()/9):
>   else x[j]:=floor(ran1()/3):
>   fi:
> od:
Estimate the measure of the cylinder set [1] = {x : x1 = 1} and compare it with the theoretical value.
> add(x[i],i=1..L+N-1)/(L+N-1.0);
                           0.5441598470
> 6/11.;
                           0.5454545455
In the following not every cylinder set has positive measure. To avoid log 0 we use log(x + ε) ≈ log x for sufficiently small ε > 0.
> epsilon:=10.0^(-10):
> for n from 1 to N do
>   for k from 0 to 2^n-1 do freq[k]:=0: od:
>   for i from 1 to L do
>     k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>     freq[k]:=freq[k]+1:
>   od:
>   for i from 0 to 2^n-1 do prob[i]:=freq[i]/L+epsilon: od:
>   h[n]:=-(1/n)*add(prob[i]*log[2.0](prob[i]),i=0..2^n-1):
> od:
Draw a graph.
> g1:=listplot([seq([n,h[n]],n=1..N)],labels=["n"," "]):
> g2:=pointplot([seq([n,h[n]],n=1..N)],symbol=circle):
> g3:=plot(entropy,x=0..N):
> display(g1,g2,g3);
See Fig. 8.4.
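The entropy value computed above can be reproduced from the stationary vector alone; a minimal Python sketch:

```python
import math

# Transition matrix of the Markov shift from the text, with stationary
# (Perron-Frobenius) row vector v = (5/11, 6/11) satisfying vP = v.
P = [[1/10, 9/10], [3/4, 1/4]]
v = [5/11, 6/11]

# check stationarity: (vP)_j = sum_i v_i P_ij
vP = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]
assert all(abs(vP[j] - v[j]) < 1e-12 for j in range(2))

# entropy of the Markov shift: h = -sum_i v_i sum_j P_ij log2 P_ij
h = -sum(v[i] * P[i][j] * math.log2(P[i][j])
         for i in range(2) for j in range(2))
print(round(h, 10))   # 0.6556951559, as in the Maple session
```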

8.7.3 Cylinder sets of a nongenerating partition: T x = 2x mod 1

We describe how to visualize typical cylinder sets as a nested sequence. Using mathematical pointillism we plot Jk points in a cylinder set of length k. For example, consider T x = 2x (mod 1). Take a partition P = {E0, E1} where E0 = [1/4, 3/4) and E1 = [0, 1/4) ∪ [3/4, 1). The following shows how to visualize the cylinder sets of increasing length as a series of floors of increasing height. Each floor represents a cylinder set of length 1, 2, 3, respectively.
> with(plots):
> T:=x->frac(2.0*x):
> L:=2000:
Generate a sequence {xk}k of length L by the rule xk = i if and only if seed[k] = T^k(seed[0]) ∈ Ei.
> seed[0]:=evalf(Pi-3):


> for k from 1 to L do
>   seed[k]:=evalf(T(seed[k-1])):
>   if (seed[k] > 1/4 and seed[k] < 3/4) then x[k]:=0:
>   else x[k]:=1:
>   fi:
> od:
We will find a cylinder set of length 3 represented by [d1, d2, d3].
> d1:=1:
> j:=0:
> for k from 1 to L do
>   if x[k]=d1 then
>     j:=j+1:
>     Q1[j]:=seed[k]:
>   fi:
> od:
> J1:=j;
                             J1 := 983
> g1:=pointplot([seq([Q1[j],1],j=1..J1)]):
> g0:=plot(0,x=0..1,labels=["x"," "]):
> display(g1,g0);
See the left plot in Fig. 8.14.
> d2:=0:
> j:=0:
> for k from 1 to L do
>   if (x[k]=d1 and x[k+1]=d2) then
>     j:=j+1:
>     Q2[j]:=seed[k]:
>   fi:
> od:
> J2:=j;
                             J2 := 502
> g2:=pointplot([seq([Q2[j],2],j=1..J2)]):
> g0:=plot(0,x=0..1,labels=["x"," "]):
See the second plot in Fig. 8.14. The first floor is a cylinder set of length 1 and the second floor is a cylinder set of length 2.
> d3:=0:
> j:=0:
> for k from 1 to L do
>   if (x[k]=d1 and x[k+1]=d2 and x[k+2]=d3) then
>     j:=j+1:
>     Q3[j]:=seed[k]:
>   fi:
> od:
> J3:=j;
                             J3 := 248
> g3:=pointplot([seq([Q3[j],3],j=1..J3)]):
> display(g3,g2,g1,g0);


See the right plot in Fig. 8.14. The third floor is a cylinder set [d1, d2, d3].

Fig. 8.14. Cylinder sets of length 1, 2, 3 for T x = 2x (mod 1) with respect to P (from left to right)

For a cylinder set of length 5 see Fig. 8.9. The partition P is not generating, but h(T, P) = h(T) since the coding map with respect to P is two-to-one.

8.7.4 The Shannon–McMillan–Breiman Theorem: the Markov shift

Here is a simulation of the Shannon–McMillan–Breiman Theorem for a Markov shift.
> with(plots):
> with(linalg):
Choose a stochastic matrix P.
> P:=matrix(2,2,[[1/2,1/2],[1/5,4/5]]);
                           [ 1/2  1/2 ]
                      P := [          ]
                           [ 1/5  4/5 ]
Find the eigenvalues and the eigenvectors of the transpose of P.
> eigenvectors(transpose(P));
              [1, 1, {[1, 5/2]}], [3/10, 1, {[-1, 1]}]
The eigenvalues are 1 and 3/10, and both have multiplicity 1.
> v:=[2/7,5/7]:
Check whether v is the Perron–Frobenius eigenvector.
> evalm(v&*P);
                            [2/7, 5/7]
Find the entropy.


> entropy:=evalf(-add(add(v[i]*P[i,j]*log[2.](P[i,j]),j=1..2),i=1..2));
                      entropy := 0.8013772106
Choose the maximal block length.
> N:=1000:
Generate a Markov sequence of length N.
> ran0:=rand(0..1):
> ran1:=rand(0..4):
> b[0]:=0:
> for j from 1 to N do
>   if b[j-1]=0 then b[j]:=ran0():
>   else b[j]:=ceil(ran1()/4):
>   fi:
> od:
Find the numerical estimate of the measure of the cylinder set [1] and its theoretical value.
> evalf(add(b[j],j=1..N)/N);
                           0.6940000000
> 5./7;
                           0.7142857143
Find −(log Pn(x))/n.
> for n from 1 to N do
>   prob[n]:=v[b[1]+1]:
>   for t from 1 to n-1 do
>     prob[n]:=prob[n]*P[b[t]+1,b[t+1]+1]:
>   od:
>   h[n]:=-log[2.](prob[n])/n:
> od:
Plot the graph.
> g1:=listplot([seq([n,h[n]], n=10..N)]):
> g2:=plot(entropy, x=0..N, labels=["n"," "]):
> display(g1,g2);
See Fig. 8.15.
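A Python sketch of the same experiment (our own random seed and path length; the recursion for Pn(x) is the one used in the Maple loop above):

```python
import math, random

# Markov chain from the text: P = [[1/2,1/2],[1/5,4/5]], stationary v = (2/7, 5/7)
P = [[1/2, 1/2], [1/5, 4/5]]
v = [2/7, 5/7]
h = -sum(v[i] * P[i][j] * math.log2(P[i][j]) for i in range(2) for j in range(2))

rng = random.Random(1)          # fixed seed for reproducibility
N = 20000
b = [0]
for _ in range(N):
    b.append(1 if rng.random() < P[b[-1]][1] else 0)

# Shannon-McMillan-Breiman: -(1/n) log2 Pn(x) -> h along a typical path
logp = math.log2(v[b[1]]) + sum(math.log2(P[b[t]][b[t+1]]) for t in range(1, N))
print(round(h, 4), round(-logp / N, 4))
```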


Fig. 8.15. The Shannon–McMillan–Breiman Theorem for a Markov shift


8.7.5 The Shannon–McMillan–Breiman Theorem: the beta transformation

> with(plots):
Take β = (√5 + 1)/2, which is stored as a symbol.
> beta:=(sqrt(5)+1)/2:
Define the β-transformation.
> T:=x->frac(beta*x):
Here is the ergodic invariant probability density ρ(x), which equals (5 + 3√5)/10 on [0, 1/β) and (5 + √5)/10 on [1/β, 1).
> rho:=x->piecewise(x < 1/beta, (5+3*sqrt(5))/10, (5+sqrt(5))/10):
> entropy:=log[2.](beta):
> Max_Block:=10:
> L:=100000:
> L+Max_Block-1;
                              100009
To estimate the measures of the cylinder sets we apply the Birkhoff Ergodic Theorem, thus we do not need very many digits.
> Digits:=100:
We need a numerical value for β with a reasonable amount of precision to evaluate T x = {βx}.
> beta:=(sqrt(5)+1)/2.0:
Choose a partition P = {E0, E1} of [0, 1) where E0 = [0, a), E1 = [a, 1).
> a:=evalf(1/beta):
Find the measure of the interval [a, 1).
> evalf(int(rho(x),x=1/beta..1),10);
                           0.2763932023
Choose a starting point x = π − 3.
> seed:=evalf(Pi-3):
Find the coded sequence b1, b2, b3, . . . with respect to P by the rule bk = 0 or 1 if and only if T^k(x) ∈ E0 or E1, respectively.
> for k from 1 to L+Max_Block-1 do
>   seed:=T(seed):
>   if seed < a then b[k]:=0:
>   else b[k]:=1:
>   fi:
> od:
In the coded sequence b1, b2, b3, . . ., the block ‘11’ is forbidden; in other words, if bj = 1 then bj+1 = 0. This fact is reflected in the following.
> seq(b[k],k=1..Max_Block);


              0, 0, 0, 1, 0, 1, 0, 1, 0, 0
> Digits:=10:
The following value should be close to the measure of the interval [a, 1).
> evalf(add(b[k],k=1..L+Max_Block-1)/(L+Max_Block-1));
                           0.2755052045
Estimate the measure of the set Pn(x) by the Birkhoff Ergodic Theorem. The first n bits b1, . . . , bn represent the cylinder set [b1, . . . , bn] = Pn(x). To find µ([b1, . . . , bn]), we compute the relative frequency with which T^k x visits [b1, . . . , bn]. Since T^k x corresponds to the coded sequence b1+k, . . . , bn+k, we observe that T^k x ∈ [b1, . . . , bn] if and only if bj = bj+k, 1 ≤ j ≤ n.
> for n from 1 to Max_Block do
>   freq[n]:=0:
>   for k from 0 to L-1 do
>     for j from 1 to n do
>       if b[j] <> b[j+k] then increase:=0: break:
>       else increase:=1:
>       fi:
>     od:
>     freq[n]:=freq[n]+increase:
>   od:
>   h[n]:=-log[2.](freq[n]/L)/n:
> od:
When the break statement is executed in the above, the result is to exit from the innermost do loop within which it occurs.
> seq(freq[n],n=1..Max_Block);
    72450, 44899, 27836, 10512, 10512, 3987, 3987, 1508, 1508, 940
Draw the graph.
> g1:=listplot([seq([n,h[n]],n=1..Max_Block)]):
> g2:=pointplot([seq([n,h[n]],n=1..Max_Block)],labels=["n"," "],symbol=circle):
> g3:=plot(entropy,x=0..Max_Block):
> display(g1,g2,g3);
See the right graph in Fig. 8.12.
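A double-precision Python sketch of the same coding (we drop the 100-digit arithmetic of the Maple session, so the orbit is only a pseudo-orbit, but its symbol statistics still track the invariant density ρ; the count of ‘11’ blocks should be zero or nearly so, since that block is forbidden in exact arithmetic):

```python
import math

# Coding of the beta transformation T(x) = {beta*x}, beta the golden ratio,
# with respect to E0 = [0, 1/beta), E1 = [1/beta, 1).
beta = (math.sqrt(5) + 1) / 2
a = 1 / beta
x = math.pi - 3
bits = []
for _ in range(100000):
    x = (beta * x) % 1.0
    bits.append(0 if x < a else 1)

# frequency of '1' vs the invariant measure of [1/beta, 1),
# mu([1/beta, 1)) = (1 - 1/beta)*(5 + sqrt(5))/10 = 0.2763...
mu1 = (1 - 1/beta) * (5 + math.sqrt(5)) / 10
pairs_11 = sum(bits[i] & bits[i+1] for i in range(len(bits) - 1))
print(round(sum(bits) / len(bits), 4), round(mu1, 4), pairs_11)
```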

8.7.6 The Asymptotic Equipartition Property: the Bernoulli shift

Here is a simulation of the Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift.
> with(plots):
Choose the block size.
> n:=1000:
> z:=3:
Choose the probability of having ‘0’.
> p:=1/z;


                              p := 1/3

Find the probability of having ‘1’.
> q:=1-p:
Find the entropy of the (p, q)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-q*log[2.](q);
                      entropy := 0.9182958342
Let k denote the number of the symbol ‘1’ in an n-block, and let Prob[k] be the probability that an n-block contains exactly k bits of ‘1’ in some k specified places.
> for k from 0 to n do
>   Prob[k]:=p^(n-k)*q^k:
> od:
Let C(n, k) = n!/(k!(n − k)!) be the binomial coefficient, equal to the number of ways of choosing k objects from n distinct objects. It is given by the Maple command binomial(n,k). Consider the binomial probability distribution where the probability of observing exactly k bits of ‘1’ among the n bits of an arbitrary binary n-block is given by Prob[k] × C(n, k). The sum of all probabilities is equal to 1 in the following.
> add(Prob[k]*binomial(n,k),k=0..n);
                                 1
Find the average of k, which is equal to µ = nq.
> add(k*Prob[k]*binomial(n,k),k=0..n);
                              2000/3
> mu:=n*q;
                           µ := 2000/3
Find the average of −log(Prob)/n, which is approximately equal to the entropy.
> add(-log[2.0](Prob[k])/n*Prob[k]*binomial(n,k),k=0..n);
                           0.9182958338
Find the standard deviation of k, which is equal to σ = √(nq(1 − q)).
> sigma:=sqrt(n*q*(1-q));
                          σ := 20√5/3
> K1:=round(mu-4*sigma);
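The checks above (total probability 1, the average of −log2 Prob/n, µ = nq, σ = √(nq(1 − q))) can be repeated in Python; extreme terms that underflow in double precision contribute negligibly:

```python
import math

n, p = 1000, 1/3
q = 1 - p
entropy = -p * math.log2(p) - q * math.log2(q)        # 0.91829...

total, avg_info = 0.0, 0.0
for k in range(n + 1):
    w = math.comb(n, k) * p**(n - k) * q**k            # Prob[k] * C(n,k)
    total += w
    avg_info += w * (-((n - k) * math.log2(p) + k * math.log2(q)) / n)

mu, sigma = n * q, math.sqrt(n * q * (1 - q))
print(round(total, 6), round(avg_info, 6), round(mu, 2), round(sigma, 3))
```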

                            K1 := 607
> K2:=round(mu+4*sigma);


                            K2 := 726
Define the normal distribution density.
> f:=x->exp(-(x-mu)^2/2/sigma^2)/sqrt(2*Pi*sigma^2);
              f := x → exp(−(x − µ)²/(2σ²))/√(2πσ²)
The interval K1 ≤ x ≤ K2 is an essential range for the values of the normal distribution.
> evalf(int(f(x),x=K1..K2));
                           0.9999342413
Plot the probability density function for the number of occurrences of ‘1’ in an n-block. For sufficiently large values of n the binomial distribution is approximated by the normal distribution with mean µ = nq and variance σ² = nq(1 − q).
> pdf:=pointplot([seq([k-0.5, binomial(n,k)*Prob[k]], k=K1..K2)]):
> g_normal:=plot(f(x),x=0..n):
> display(pdf, g_normal, labels=["k","pdf"]);
See Fig. 8.16, where an enlargement of the left plot on K1 − 50 ≤ k ≤ K2 + 50 is given on the right.


Fig. 8.16. Probability of the number of occurrences of ‘1’ in an n-block of the (1/3, 2/3)-Bernoulli shift with n = 1000

Plot the cumulative distribution function for the number of occurrences of ‘1’ in an n-block.
> cum[-1]:=0.0:
This makes every cum[k] a floating point number. For larger values of n set cum[K1-1]:=0.0: and run the following do loop only for K1 ≤ k ≤ K2 to save time.
> for k from 0 to n do
>   cum[k]:=cum[k-1] + binomial(n,k)*p^(n-k)*q^k:
> od:
> cum[n];


                           1.000000000
> cum[K1];
                         0.00004417647482
> cum[K2];
                           0.9999772852
> cdf:=pointplot([seq([k,cum[k]], k=K1..K2)]):
> g0:=plot(0,x=0..K1):
> g1:=plot(1,x=K2..n):
> display(g0, g1, cdf, labels=["k","cdf"]);
See Fig. 8.17.


Fig. 8.17. Cumulative probability of the number of occurrences of ‘1’ in an n-block of the (1/3, 2/3)-Bernoulli shift with n = 1000

> h_point:=pointplot([seq([cum[k],-log[2.](Prob[k])/n], k=K1..K2)]):
> for k from K1 to K2 do
>   h_line[k]:=plot(-log[2.](Prob[k])/n,x=cum[k-1]..cum[k]):
> od:
Find the minimum of −(log2 Pn)/n, given by −(log2 q^n)/n = −log2 q.
> -log[2.](q);
                           0.5849625007
Find the maximum of −(log2 Pn)/n, given by −(log2 p^n)/n = −log2 p.
> -log[2.](p);
                           1.584962501
Choose ε > 0.
> epsilon:=0.01:
> g2:=plot({entropy - epsilon, entropy + epsilon}, x=0..1):
> display(g2, h_point, seq(h_line[k], k=K1..K2), labels=["cum","-(log Pn)/n"]);

See Fig. 8.18, where the horizontal axis represents the cumulative probability. Two horizontal lines are y = entropy ± ε.



Fig. 8.18. The Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift with n = 1000

Compare the case n = 1000 with the cases n = 100 and n = 10000. We observe the convergence in measure, i.e.,

   Pr{ | −(1/n) log Pn − entropy | ≥ ε } → 0

as n → ∞, where the probability is given by the (1/3, 2/3)-Bernoulli measure. Consult Theorem 8.19 and see Figs. 8.18, 8.19.
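This deviation probability can be computed exactly from the binomial distribution, with no simulation (a Python sketch; logarithms of the binomial weights avoid overflow for large n):

```python
import math

# Pr(| -(1/n) log2 Pn - h | >= eps) under the (1/3, 2/3)-Bernoulli measure,
# computed exactly from the binomial distribution.
p, q, eps = 1/3, 2/3, 0.01
h = -p * math.log2(p) - q * math.log2(q)

def deviation_probability(n):
    total = 0.0
    for k in range(n + 1):
        info = -((n - k) * math.log2(p) + k * math.log2(q)) / n
        if abs(info - h) >= eps:
            # binomial weight C(n,k) p^(n-k) q^k via log-gamma to avoid overflow
            logw = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                    + (n - k) * math.log(p) + k * math.log(q))
            total += math.exp(logw)
    return total

print([round(deviation_probability(n), 3) for n in (100, 1000, 10000)])
```

The printed values decrease toward 0 as n grows, which is the convergence in measure stated above.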


Fig. 8.19. The Asymptotic Equipartition Property for the (1/3, 2/3)-Bernoulli shift with n = 100 (left) and n = 10000 (right)

8.7.7 Asymptotically normal distribution of (− log Pn )/n: the Bernoulli shift The values of Z = −(log Pn )/n are approximately normally distributed for large n. We consider the Bernoulli shift on two symbols 0 and 1. > with(plots): Choose an integer k ≥ 1. > k:=2: Choose the probability of the symbol ‘1’.

> q:=1./(k+1);

                         q := 0.3333333333
> entropy:=-(1-q)*log[2.](1-q)-q*log[2.](q);
                      entropy := 0.9182958340
Choose the sample size.
> SampleSize:=100000:
Choose the block length.
> n:=30:
> N:=SampleSize+n-1;
                           N := 100029
> ran:=rand(0..k):
Generate a Bernoulli sequence of length N, which is regarded as the initial N bits of a typical Bernoulli sequence x0.
> for i from 1 to N do
>   if ran() = k then a[i]:=1: else a[i]:=0: fi:
> od:
Calculate log Pn(x) at x = T^j x0 where x0 = (a1 a2 a3 . . .) is the sequence obtained in the above and T is the shift transformation.
> for j from 1 to SampleSize do
>   num_1:=add(a[s],s=j..n+j-1):
>   num_0:=n-num_1:
>   Prob[j]:=(1-q)^num_0 * q^num_1:
>   logPn[j]:=log[2.](Prob[j]):
> od:
> h:=entropy:
Use n × h as an approximation for the average of −log Pn.
> n*h;
                           27.54887502
> add(-logPn[j],j=1..SampleSize)/SampleSize;
                           27.59658500
> Var:=add((-logPn[j]-n*h)^2,j=1..SampleSize)/SampleSize;
                        Var := 6.552650000
Find an approximation of σ for sufficiently large n.
> sigma:=sqrt(Var/n);
                        σ := 0.4673560385
Define the normalized random variable Z.
> for j from 1 to SampleSize do
>   Z[j]:=evalf((-logPn[j]-n*h)/sigma/sqrt(n)):
> od:
Choose the number of bins.
> Bin:=48:
> Max:=max(seq(Z[j],j=1..SampleSize));

                        Max := 5.859799731
> Min:=min(seq(Z[j],j=1..SampleSize));

                        Min := −3.515879837
Calculate the width of an interval corresponding to a bin.
> width:=(Max-Min)/Bin;
                       width := 0.1953266577
Find the number of occurrences in each bin.
> for k from 0 to Bin do freq[k]:=0: od:
> for s from 1 to SampleSize do
>   slot:=round((Z[s]-Min)/width):
>   freq[slot]:=freq[slot]+1:
> od:
Here is the so-called bell curve of the standard normal distribution.
> Phi:=x->1/(sqrt(2*Pi))*exp(-x^2/2):
> g1:=plot(Phi(Z),Z=-5..5):

> for k from 1 to Bin do
>   Q[4*k-3]:=[(k-1/2)*width+Min,0]:
>   Q[4*k-2]:=[(k-1/2)*width+Min,freq[k]/SampleSize/width]:
>   Q[4*k-1]:=[(k+1/2)*width+Min,freq[k]/SampleSize/width]:
>   Q[4*k]:=[(k+1/2)*width+Min,0]:
> od:
Draw the graphs.
> g2:=listplot([seq(Q[k],k=1..4*Bin)]):
> display(g1,g2);
See Fig. 8.20.


Fig. 8.20. The histogram together with the standard normal distribution function

Find the cumulative probability.
> cum[-1]:=0:
> for i from 0 to Bin do cum[i]:=cum[i-1] + freq[i]: od:
Draw the cumulative distribution function (cdf).


> for k from 0 to Bin do
>   c[k]:=[(k+0.5)*width+Min,cum[k]/SampleSize]:
> od:
> g3:=listplot([seq(c[k],k=0..Bin)]):
Define the cumulative distribution function of the standard normal distribution given in Fig. 1.12. See also Maple Program 1.8.12.
> g_cumul:=plot(1/2*erf(1/2*2^(1/2)*x)+1/2, x=-5..5, labels=["x"," "]):
> display(g3,g_cumul);
See Fig. 8.21, where the cdf approximates the cdf of the standard normal distribution very well.


Fig. 8.21. The cumulative distribution function

9 The Lyapunov Exponent: One-Dimensional Case

The entropy of a differentiable transformation T defined on an interval with an absolutely continuous invariant measure can be computed simply by taking the average of log |T′|. This average is called the Lyapunov exponent, and it measures the speed of divergence of two nearby points under iterations of T. In this chapter we consider only one-dimensional examples. Higher dimensional cases are treated in the next chapter. As an application, suppose that we want to calculate the continued fraction of a real number x up to its nth partial quotient. How can we guarantee the accuracy of the computation? The answer is to start with at least nh significant decimal digits, where h ≈ 1.03 is the entropy of the Gauss transformation with respect to the logarithmic base 10. The same idea is used throughout the book in most numerical experiments that are sensitive to the accuracy of initial data. In the last section a method of finding the optimal number of card shuffles is investigated from the viewpoint of the Lyapunov exponent. For expositions on differentiable dynamical systems see [Ru2],[Y2], and for a comprehensive introduction to Lyapunov exponents see [BaP].

9.1 The Lyapunov Exponent of Differentiable Maps

Let X = [0, 1]. Suppose that T is a piecewise continuously differentiable map on X, i.e., the derivative of T is piecewise continuous. For |x − y| ≈ 0 choose a value of n that is sufficiently large, but not too large. Then we have

   |T(x) − T(y)| ≈ |T′(x)| × |x − y| ,

and

   |Tⁿ(x) − Tⁿ(y)| ≈ ∏_{i=0}^{n−1} |T′(T^i x)| × |x − y| .

Hence

   (1/n) log |Tⁿ(x) − Tⁿ(y)| ≈ (1/n) ∑_{i=0}^{n−1} log |T′(T^i x)| .

(The reason why n should not be too large is that if we let n → ∞ then Tⁿ(x) and Tⁿ(y) may come close to each other for some large n since X is a bounded set. We assume that the right hand side has a limit as n → ∞.) If µ is an ergodic invariant measure for T, then the right hand side converges to ∫₀¹ log |T′| dµ by the Birkhoff Ergodic Theorem. Thus ∫₀¹ log |T′| dµ measures the exponent of the speed of the divergence.
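As a quick numerical check of this Birkhoff average, one can accumulate log |T′| along an orbit (a Python sketch; the guard against a vanishing derivative is ours):

```python
import math

def lyapunov(T, dT, x0, n=100000):
    """Birkhoff average of log|T'| along an orbit, estimating int log|T'| dmu."""
    x, s = x0, 0.0
    for _ in range(n):
        s += math.log(max(abs(dT(x)), 1e-300))   # guard against T'(x) = 0
        x = T(x)
    return s / n

# doubling map: |T'| = 2 everywhere, so the exponent is exactly log 2
print(round(lyapunov(lambda x: (2*x) % 1.0, lambda x: 2.0, math.pi - 3), 4))
# logistic map T(x) = 4x(1-x): the exponent is also log 2 (cf. Theorem 9.14)
print(round(lyapunov(lambda x: 4*x*(1-x), lambda x: 4 - 8*x, math.pi - 3), 3))
```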

Definition 9.1. The number ∫₀¹ log |T′| dµ is called the Lyapunov exponent of T.

In Theorem 9.4 the Lyapunov exponent will be shown to be equal to the entropy under the following conditions: Let T : X → X be a piecewise C¹ mapping. Assume that there exists a T-invariant probability measure dµ = ρ(x) dx such that A ≤ ρ(x) ≤ B for some constants A, B > 0 for every x. Suppose that P is a countable generating partition of X and that P consists of intervals Ej such that µ(Ej) > 0, ∑_j µ(Ej) = 1. Let Pn = P ∨ T⁻¹P ∨ · · · ∨ T^{−(n−1)}P, i.e., Pn consists of cylinder sets [i1, . . . , in] = E_{i1} ∩ T⁻¹E_{i2} ∩ · · · ∩ T^{−(n−1)}E_{in}. Assume that there exists N such that if n ≥ N then (i) every cylinder set is an interval and (ii) T is monotone and continuously differentiable on any cylinder set in Pn. First we need two lemmas.

Lemma 9.2. Let λ be Lebesgue measure on [0, 1]. Then

   λ([i2, . . . , in]) = |T′(z_{i1,...,in})| λ([i1, . . . , in])

for some z_{i1,...,in} ∈ [i1, . . . , in] for sufficiently large n.

Proof. Since T([i1, . . . , in]) = [i2, . . . , in] and T is one-to-one on [i1, . . . , in],

   λ([i2, . . . , in]) = ∫_{[i1,...,in]} |T′(x)| dx = |T′(z_{i1,...,in})| λ([i1, . . . , in])

for some z_{i1,...,in} ∈ [i1, . . . , in], by the Mean Value Theorem for integrals.

Lemma 9.3. Suppose a sequence {bn}_{n=1}^∞ satisfies

   lim_{n→∞} (b_{n+1} − b_n) = C

for some constant C. Then

   lim_{n→∞} b_n/n = C .


Proof. For any ε > 0, choose N such that if n ≥ N then

   |b_{n+1} − b_n − C| < ε .

Hence

   |b_{N+k} − b_N − kC| ≤ ∑_{j=1}^{k} |b_{N+j} − b_{N+j−1} − C| < kε

and

   | b_{N+k}/(N+k) − b_N/(N+k) − (k/(N+k)) C | < (k/(N+k)) ε .

Letting k → ∞ we obtain

   C − ε ≤ lim inf_{n→∞} b_n/n ≤ lim sup_{n→∞} b_n/n ≤ C + ε .

Since ε is arbitrary, the proof is complete.

Theorem 9.4. Let h be the entropy of T. With the conditions on T given in the above we have

   h = ∫₀¹ log |T′(x)| dµ .

Proof. Put

   b_n = ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i1, . . . , in])

and

   L_n = ∑_{i1,...,in} µ([i1, . . . , in]) log |T′(z_{i1,...,in})| .

Lemma 9.2 implies that

   b_n = ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i2, . . . , in]) − L_n
       = ∑_{i2,...,in} µ([i2, . . . , in]) log λ([i2, . . . , in]) − L_n
       = b_{n−1} − L_n .

Hence

   lim_{n→∞} (b_n − b_{n−1}) = − lim_{n→∞} L_n = − ∫₀¹ log |T′| dµ .

Lemma 9.3 implies that

   lim_{n→∞} b_n/n = − ∫₀¹ log |T′| dµ .

Therefore

   h = − lim_{n→∞} (1/n) ∑_{i1,...,in} µ([i1, . . . , in]) log µ([i1, . . . , in])
     = − lim_{n→∞} (1/n) ∑_{i1,...,in} µ([i1, . . . , in]) log λ([i1, . . . , in])
     = − lim_{n→∞} b_n/n
     = ∫₀¹ log |T′| dµ .

The second equality comes from the fact that

   log A + log λ([i1, . . . , in]) ≤ log µ([i1, . . . , in]) ≤ log B + log λ([i1, . . . , in]) .

For transformations with singular invariant measures, Theorem 9.4 does not hold. For example, consider the singular continuous measure µ on [0, 1] obtained from the (p, 1 − p)-Bernoulli measure, 0 < p < 1, p ≠ 1/2, by the binary expansion as in Ex. 5.23. Then µ is invariant under T x = 2x (mod 1). Note that the entropy of T with respect to µ is equal to −p log p − (1 − p) log(1 − p), but its Lyapunov exponent is equal to log 2.

Example 9.5. The Lyapunov exponent of an irrational translation modulo 1 is equal to zero since T′ = 1. This is in agreement with Ex. 8.3.

Example 9.6. Consider T x = {2x}. The Lyapunov exponent is equal to log 2 since T′ = 2. This is in agreement with Ex. 8.7.

Example 9.7. Let β = (√5 + 1)/2 and T x = {βx}. Since T′ = β, the Lyapunov exponent is equal to log β. This is in agreement with Ex. 8.11.

Example 9.8. Let 0 < p < 1. Consider the Bernoulli transformation Tp on [0, 1] given by

   Tp x = x/p ,              0 ≤ x < p ,
   Tp x = (x − p)/(1 − p) ,  p ≤ x < 1 .

It preserves Lebesgue measure, and its Lyapunov exponent is equal to −p log p − (1 − p) log(1 − p).

For each a > 0 the limits as n → ∞ of the above sums over the cylinder sets [i1, . . . , in] ⊂ [a, 1 − a] are equal since ρ(x) is bounded from below and above on [a, 1 − a]. Now we show that for large n

   ∑ µ([i1, . . . , in]) log µ([i1, . . . , in]) ≈ ∑ µ([i1, . . . , in]) log λ([i1, . . . , in])

where the sums are taken over the intervals [i1, . . . , in] inside [0, a] for some small a > 0. The collection of such cylinder sets forms a partition Pa of an interval [0, a1] for some a1 ≤ a, where the endpoints of cylinder sets are ignored. Let 0 < t1 < · · · < tk = a1 be the partition so obtained. Then

   µ([tj, tj+1]) / λ([tj, tj+1]) = ( ∫_{tj}^{tj+1} ρ(x) dx ) / (tj+1 − tj) = ρ(t*)

for some tj < t* < tj+1. Hence as a → 0 we have

   lim_{n→∞} ∑_{[i1,...,in]∈Pa} µ([i1, . . . , in]) log ( µ([i1, . . . , in]) / λ([i1, . . . , in]) )
      = ∫₀^a ρ(x) log ρ(x) dx
      ≈ ∫₀^a (1/(π√x)) log (1/(π√x)) dx
      = − (√a log a)/π + 2√a/π − (2√a log π)/π → 0 .

We have used the fact that ρ(x) ≈ 1/(π√x) if x ≈ 0.
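The Lyapunov exponent claimed in Ex. 9.8 can be checked numerically by averaging log10 |Tp′| along an orbit; for p = 7/10 the base-10 value should be about 0.2652, the value listed later in Table 9.1 (a Python sketch, with the standard two-branch form of Tp assumed):

```python
import math

# Bernoulli transformation T_p: x/p on [0, p), (x-p)/(1-p) on [p, 1).
# It preserves Lebesgue measure; its Lyapunov exponent in base 10 should be
# -p log10 p - (1-p) log10(1-p), about 0.2652 for p = 7/10.
p = 0.7
T  = lambda x: x / p if x < p else (x - p) / (1 - p)
dT = lambda x: 1 / p if x < p else 1 / (1 - p)

x, s, n = math.pi - 3, 0.0, 200000
for _ in range(n):
    s += math.log10(dT(x))
    x = T(x)

expected = -p * math.log10(p) - (1 - p) * math.log10(1 - p)
print(round(s / n, 4), round(expected, 4))
```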

Lemma 9.10.

   ∫₀^{π/2} log sin θ dθ = − (π/2) log 2 .

Proof. Consider an analytic function f(z) = log(1 − z), |z| ≤ r < 1. Its real part u(z) = log |1 − z| is harmonic, and hence the Mean Value Theorem for harmonic functions implies that

   0 = u(0) = (1/2π) ∫₀^{2π} u(re^{iθ}) dθ = (1/2π) ∫₀^{2π} log |1 − re^{iθ}| dθ .

Letting r → 1, we have 0 = ∫₀^{2π} log |1 − e^{iθ}| dθ. Since |1 − e^{iθ}| = 2 sin(θ/2), 0 ≤ θ ≤ 2π, we have

   −2π log 2 = ∫₀^{2π} log sin(θ/2) dθ .

Substituting t = θ/2, we obtain

   −π log 2 = ∫₀^{π} log sin t dt = 2 ∫₀^{π/2} log sin t dt .

Example 9.11. Consider the Gauss transformation T x = {1/x}. Since |T′| = x⁻², the Lyapunov exponent with respect to the logarithmic base b > 1 is given by

   ∫₀¹ (−2 log_b x) · 1/(ln 2 · (1 + x)) dx = (−2/(ln b ln 2)) ∫₀¹ ln x/(1 + x) dx = π²/(6 ln b ln 2) .

We have used the following calculation:

   ∫₀¹ ln x/(1 + x) dx = − ∫₀¹ ln(1 + x)/x dx            (integration by parts)
      = − ∫₀¹ (1 − x/2 + x²/3 − x³/4 + · · ·) dx
      = ∑_{n=1}^∞ (−1)ⁿ/n² = − ∑_{n=1}^∞ 1/n² + 2 ∑_{n even} 1/n²
      = − (1/2) · π²/6 .

For the last equality we have used

   ∑_{n even} 1/n² = ∑_{n=1}^∞ 1/(2n)² = (1/4) · π²/6 .

See Maple Program 9.7.1.

Example 9.12. Consider the linearized Gauss transformation

   T x = −n(n + 1) (x − 1/n) ,   1/(n+1) < x < 1/n .

It preserves Lebesgue measure. See the right graph in Fig. 2.1. Its entropy is equal to

   ∑_{n=1}^∞ log n(n + 1) / (n(n + 1)) .
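Both sides of Ex. 9.11 can be compared numerically (a Python sketch in base b = 10; the early-exit guard is ours and should never trigger for a generic seed):

```python
import math

# Closed form from Ex. 9.11: the Lyapunov exponent of the Gauss map in base b
# is pi^2/(6 ln b ln 2); for b = 10 this is about 1.0306.
b = 10
closed = math.pi**2 / (6 * math.log(b) * math.log(2))

# Birkhoff average of log10 |T'(x)| = -2 log10 x along a Gauss orbit
x, s, n = math.pi - 3, 0.0, 0
for _ in range(100000):
    s += -2 * math.log10(x)
    n += 1
    x = (1 / x) % 1.0
    if x == 0.0:          # measure-zero rational landing; stop if it happens
        break
print(round(closed, 4), round(s / n, 3))
```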


Example 9.13. Consider the linearized logarithmic transformation

   T x = −2ⁿ (x − 2^{1−n}) = −2ⁿ x + 2 ,   2^{−n} < x < 2^{1−n} .

It preserves Lebesgue measure. Its entropy is equal to

   log 2 · ∑_{n=1}^∞ n 2^{−n} = 2 log 2 .

Theorem 9.14. Let S and T be continuously differentiable transformations on [0, 1] preserving probability measures ρ1(x) dx and ρ2(x) dx, respectively. Suppose that there exists a continuous and piecewise differentiable conjugacy mapping φ : [0, 1] → [0, 1] such that

   T ∘ φ = φ ∘ S .

(We assume that S, T and φ satisfy some integrability conditions as needed in the following proof.) Then S and T have the same Lyapunov exponent.

As a corollary, the Lyapunov exponent of the logistic transformation is equal to log 2. Consult Ex. 2.24.

Proof. Since φ is monotone, we assume that φ(0) = 0 and φ(1) = 1. The other case is similar. Since φ is measure preserving,

   ∫₀^x ρ1(t) dt = ∫₀^{φ(x)} ρ2(t) dt .

Hence ρ1(x) = ρ2(φ(x)) φ′(x). From

   T′(φ(x)) φ′(x) = φ′(S(x)) S′(x) ,

we have

   ∫₀¹ log |T′(φ(x))| ρ1(x) dx + ∫₀¹ log |φ′(x)| ρ1(x) dx
      = ∫₀¹ log |φ′(S(x))| ρ1(x) dx + ∫₀¹ log |S′(x)| ρ1(x) dx .

Note that

   ∫₀¹ log |T′(φ(x))| ρ1(x) dx = ∫₀¹ log |T′(φ(x))| ρ2(φ(x)) φ′(x) dx = ∫₀¹ log |T′(y)| ρ2(y) dy ,

which is equal to the Lyapunov exponent of T. By the invariance of ρ1 dx under S,

   ∫₀¹ log |φ′ ∘ S| ρ1 dx = ∫₀¹ log |φ′| ρ1 dx .

Hence the Lyapunov exponent of S equals that of T.


Now we present a method to estimate the exponent of the growth of the distance between the orbits of two neighboring points.

Definition 9.15. For 0 ≤ x ≤ 1 − 10^{−k}, let x̃ = x + 10^{−k}. Define

   H_{n,k}(x) = (1/n) log |Tⁿ(x) − Tⁿ(x̃)| + k/n .

Note that H_{n,k}(x) is close to the Lyapunov exponent for large n, k since

   (1/n) log |Tⁿ(x) − Tⁿ(x̃)| ≈ (1/n) ∑_{i=0}^{n−1} log |T′(T^i x)| + (1/n) log |x − x̃| .

1 k log |x − x 7| = n n should not be ignored in numerical simulations because k is also large. As mentioned in the beginning of this section, a value for n used in the experiment x)| cannot grow indeﬁnitely. should not be too large since |T n (x) − T n (7 Simulations for Theorem 9.4 are done in two ways. First, we consider a local version. Choose x0 = π − 3 as an arbitrary point. In Fig. 9.1 Hn,k (x0 ) is regarded as a function of n, 10 ≤ n ≤ 300, with k ﬁxed, and its graphs are given for the logistic transformation, the Gauss transformation and the 7 Bernoulli transformation Tp with p = 10 in Ex. 9.8. The horizontal lines indicate the Lyapunov exponents. We choose the logarithmic base 10 because it is convenient when we consider the number of signiﬁcant decimal digits. 0.4

1.5

0.3

0.4 0.3

1

0.2

0.2 0.5

0.1 0

n

300

0

0.1 n

300

0

n

300

Fig. 9.1. y = Hn,k (x0 ) with x0 = π − 3 for the logistic transformation, the Gauss transformation and a Bernoulli transformation (from left to right)

Let D be the number of significant digits, and let h be the Lyapunov exponent. Then in numerical simulations for H_{n,k} we need to choose suitable combinations of n, k and D. First, we need k < D, otherwise x̃0 is not defined. Second, we need k > nh since

   0 > (1/n) log |Tⁿx − Tⁿx̃| ≈ h − k/n .


Third, the maximal value of n should be less than D/h since an application of T erases h significant decimal digits on the average, which is discussed in detail in Sect. 9.2. For the choices of k and D for the simulations presented in Fig. 9.1, consult Table 9.1. See Maple Program 9.7.2. For transformations T with constant |T′| such as T x = 2x (mod 1) and T x = βx (mod 1), the numerical simulation works very well, and the graphs obtained from computer experiments are almost horizontal, and so they are not presented here.

Table 9.1. Choices of k and D for H_{n,k}(x0), 10 ≤ n ≤ 300

  Transformation   k    D    Lyapunov exponent
  Logistic         200  300  0.3010 · · ·
  Gauss            350  400  1.0306 · · ·
  Bernoulli        200  300  0.2652 · · ·
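H_{n,k} is easy to reproduce with Python's arbitrary-precision `decimal` module; the following sketch uses the logistic row of Table 9.1 (D = 300, k = 200) with n = 300:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 300                 # D = 300 significant digits
k, n = 200, 300                         # logistic row of Table 9.1

def T(x):
    return 4 * x * (1 - x)

x  = Decimal(math.pi) - 3               # x0 = pi - 3 (to working precision)
xt = x + Decimal(10) ** (-k)            # x~ = x0 + 10^-k
for _ in range(n):
    x, xt = T(x), T(xt)

H = abs(x - xt).log10() / n + Decimal(k) / n
print(round(float(H), 3))               # fluctuates around log10(2) = 0.301
```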

For the global version of the simulation we take x0 = π − 3 and consider the orbit {T^j x0 : 1 ≤ j ≤ 200}. In Fig. 9.2 the points (T^j x0, H_{n,k}(T^j x0)) are plotted for the logistic transformation, the Gauss transformation and the Bernoulli transformation in Ex. 9.8. The horizontal lines indicate the Lyapunov exponents. For the choices of n, k and D, see Table 9.2, where ‘Ave’ denotes the sample average along the orbit. See Maple Program 9.7.3.


Fig. 9.2. Plots of (x, Hn,k (x)) for the logistic transformation, the Gauss transformation and a Bernoulli transformation (from left to right)

Table 9.2. Choices of n, k and D for the average of H_{n,k}

  Transformation   n    k    D    Ave[H_{n,k}]
  Logistic         100  100  200  0.298
  Gauss            200  300  400  1.017
  Bernoulli        100  100  200  0.273


9.2 Number of Significant Digits and the Divergence Speed

With the rapid development of computer technology some of the abstract and sophisticated mathematical concepts can now be tested experimentally. For example, Maple allows us to do floating point calculations with a practically unlimited number of decimal digits; the error bound can be as small as 10^{−10000} or less if required. Until recently computations with this level of accuracy were practically impossible because of the small amount of random access memory and slow central processing units, even though the software was already available some years ago. The reason why we need very high accuracy in experiments with dynamical systems is that, as we iterate a transformation T, the accuracy evaporates at an exponential rate. The following result gives a formula for how many significant digits are needed in iterating T. Let x and x̃ be two nearby points. As we iterate T, the distance |Tⁿx − Tⁿx̃| diverges exponentially:

   |Tⁿx − Tⁿx̃| ≈ 10^{nh} × |x − x̃|

as shown in the previous section, where h is the Lyapunov exponent with respect to the logarithmic base 10.

Definition 9.16. Let X be the unit interval. Define the nth divergence speed for 0 ≤ x ≤ 1 − 10^{−n} by

   Vn(x) = min{ j ≥ 1 : |T^j x − T^j x̃| ≥ 10^{−1} }   where x̃ = x + 10^{−n} .

Note that for 0 ≤ x < 1 there exists N = N(x) such that Vn(x) is defined for every n ≥ N. The integer Vn may be regarded as the maximal number of iterations after which some precision still remains when we start floating point calculations with n significant digits.

Example 9.17. (i) For T x = 10x (mod 1) we have Vn(x) = n − 1 for 0 ≤ x ≤ 1 − 10^{−n}.
(ii) We may define the divergence speed on the unit circle in an almost identical way. Consider an irrational translation mod 1, which can be identified with a rotation on the circle. Then Vn = +∞ for every n ≥ 2, which is in agreement with the fact that the entropy is equal to 0.

Remark 9.18.
Let T be an ergodic transformation on [0, 1] with an absolutely continuous probability measure. Suppose that T is piecewise differentiable. If T has the Lyapunov exponent (or entropy) h > 0, then

   n/Vn(x) ≈ h

if n is sufficiently large. To see why, note that


   |T^{Vn} x − T^{Vn} x̃| ≈ 10^{Vn h} × |x − x̃| = 10^{Vn h} 10^{−n} .

Hence

   10^{−1} ≲ 10^{Vn h − n} ≲ 1   and   −1 ≲ Vn h − n ≲ 0 ,

giving

   (n − 1)/Vn ≲ h ≲ n/Vn .

In iterating a given transformation T, if T is applied once to the value x with n significant decimal digits, then T x has n − h significant decimal digits on average. Therefore, if we start with n significant decimal digits for x, the above formula suggests that we may iterate T about Vn ≈ n/h times with at least one significant decimal digit remaining in T^{Vn} x. We may use base 2 in defining Vn and use the logarithm with respect to the base 2 in defining the Lyapunov exponent. Suppose that we employ n = D significant decimal digits in a Maple simulation. Take a starting point of an orbit, e.g., x0 = π − 3. In numerical calculations x0 is stored as x̃0, which is a numerical approximation of x0 satisfying

   |x0 − x̃0| ≈ 10^{−D}

since there are D decimal digits to be used. For every integer j satisfying 1 ≤ j < D/h, there corresponds an integer k ≈ D − j × h such that

   T^j x̃0 = 0.a1 . . . ak ak+1 . . . aD ,

where the first k digits a1, . . . , ak are significant and the last D − k digits ak+1, . . . , aD are meaningless, i.e., have no meaning in relation to the true theoretical orbit point T^j x0. Hence if j ≈ D/h,

80 loses any amount of precision and has nothing to do with T j x0 . then T j x In this argument we do not have to worry about truncation errors of the size 10−D , which are negligible in comparison with 10−k . A trivial conclusion is that, for simulations with irrational translations modulo 1, we do not need many signiﬁcant digits. In Fig. 9.3 for the local version of the divergence speed, the graphs y = n/Vn (x0 ), x0 = π − 3, are presented as functions of n, 2 ≤ n ≤ 100. As


Fig. 9.3. The divergence speed: y = n/Vn (x0 ) at x0 = π−3 for the β-transformation, the logistic transformation and the Gauss transformation (from left to right)

n increases, n/Vn(x0) converges to the Lyapunov exponent represented by a horizontal line. See Maple Program 9.7.4. In Fig. 9.4 for the global version of the divergence speed, the points (T^j x0, n/Vn(T^j x0)), 1 ≤ j ≤ 200, for n = 100 are plotted. The horizontal lines indicate the Lyapunov exponents. In simulations 300 significant decimal digits were used, which is more than enough. See Maple Program 9.7.5.

1.2 0.31 1.1

0.21

1

0.3

0.9 0.2 0

1

x

0

x

1

0

x

1

Fig. 9.4. The divergence speed: Plots of (x, 100/V100 (x)) for the β-transformation, the logistic transformation and the Gauss transformation (from left to right)
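The divergence-speed experiment of Maple Programs 9.7.4 and 9.7.5 can also be sketched outside Maple. The following is an illustrative Python version (our own sketch, not from the book) using the `mpmath` arbitrary-precision library: perturb the starting point in the $k$-th decimal digit and count how many iterations of the Gauss map are needed before the two orbits separate by more than $0.1$; the ratio $k/V_k$ then approximates the Lyapunov exponent $\pi^2/(6\ln 2\ln 10) \approx 1.0306$.

```python
from mpmath import mp, mpf, pi, frac

mp.dps = 500  # working precision far beyond the largest perturbation used

T = lambda x: frac(1 / x)   # Gauss transformation T x = {1/x}

def divergence_speed(x0, k):
    """V_k: iterations until an orbit started 10^(-k) away from x0
    drifts more than 0.1 from the true orbit."""
    x, y = x0, x0 + mpf(10) ** (-k)
    n = 0
    while abs(x - y) < mpf(1) / 10:
        x, y = T(x), T(y)
        n += 1
    return n

x0 = pi - 3
for k in (50, 100, 200):
    print(k, k / divergence_speed(x0, k))  # ratios approach about 1.03
```

The working precision (500 digits) must exceed the perturbation exponent plus the digits destroyed during the iteration, exactly as discussed above.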

9.3 Fixed Points of the Gauss Transformation

Using the idea of the Lyapunov exponent we test the effectiveness of the continued fraction algorithm employed by Maple. In this section we use the logarithm to the base 10. If $T$ is a piecewise continuously differentiable ergodic transformation with an invariant measure $\rho(x)\,dx$, then
$$h = \int_0^1 \log_{10}|T'(x)|\,\rho(x)\,dx = \lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\log_{10}|T'(T^j x_0)|$$
for almost every $x_0$. The formula need not hold true for all $x_0$. If $x_0$ is an exceptional point that does not satisfy the above relation, for example, if $x_0$ is a fixed point of $T$, then the average of $\log_{10}|T'(x)|$ along the orbit $x_0, Tx_0, T^2x_0, \ldots$ is equal to $\log_{10}|T'(x_0)|$.

For example, consider the Gauss transformation $Tx = \{1/x\}$, $0 < x < 1$. Take
$$x_0 = [k,k,k,\ldots] = \cfrac{1}{k + \cfrac{1}{k + \cfrac{1}{k + \cdots}}}\;.$$
Then $Tx_0 = x_0$ and $x_0 = 1/(k + x_0)$. Note that
$$x_0 = \frac{-k + \sqrt{k^2 + 4}}{2}\;.$$
Since $\log_{10}|T'(x_0)| = -2\log_{10}x_0$, if we are given $D$ significant decimal digits we lose all the significant digits after approximately $D/(-2\log_{10}x_0)$ iterations of $T$. In the simulation we take $D = 100$ and $1 \le k \le 9$, $k = 10, 20, \ldots, 100$ and $k = 200$. The number of correct partial quotients in the continued fraction of $x_0$ obtained from Maple computation is denoted by $N_k$. Let $C_k$ be the theoretical value for the maximal number of iterations of $T$ that produces the correct values for the partial quotients of $x_0 = [k,k,k,\ldots]$ when the error bound for the initial data is $10^{-D}$. In other words, if we let $\hat{x}_0$ be an approximation of $x_0$ such that
$$|\hat{x}_0 - x_0| \approx \frac{1}{2}\times 10^{-D}\,,$$
then
$$T^{C_k - 1}\hat{x}_0 \in I_k = \left(\frac{1}{k+1},\,\frac{1}{k}\right)$$
and $T^{C_k}\hat{x}_0 \notin I_k$. Since $x_0$ is a fixed point of $T$, i.e., $T^j x_0 = x_0$ for every $j$, we have
$$\frac{|T^n \hat{x}_0 - x_0|}{|\hat{x}_0 - x_0|} = \frac{|T^n \hat{x}_0 - T^n x_0|}{|\hat{x}_0 - x_0|} \approx \prod_{j=0}^{n-1}|T'(T^j x_0)| = |T'(x_0)|^n\;.$$
Hence
$$\log_{10}|T^n \hat{x}_0 - x_0| + D + \log_{10}2 \approx n\log_{10}|T'(x_0)| = n(-2\log_{10}x_0)\;.$$

Since $T^{C_k-1}\hat{x}_0 \in I_k$, we have
$$|T^{C_k-1}\hat{x}_0 - x_0| \le \max\left\{\left|x_0 - \frac{1}{k}\right|,\,\left|x_0 - \frac{1}{k+1}\right|\right\}\;.$$
Therefore an approximate upper bound $U_k$ for $N_k$ is given by
$$U_k = \frac{D + \log_{10}2 + \log_{10}\max\{|x_0 - \frac{1}{k}|,\,|x_0 - \frac{1}{k+1}|\}}{-2\log_{10}x_0} + 1\;.$$
On the other hand, since $T^{C_k}\hat{x}_0 \notin I_k$, we have
$$|T^{C_k}\hat{x}_0 - x_0| \ge \min\left\{\left|x_0 - \frac{1}{k}\right|,\,\left|x_0 - \frac{1}{k+1}\right|\right\}\;.$$
Thus an approximate lower bound $L_k$ for $N_k$ is given by
$$L_k = \frac{D + \log_{10}2 + \log_{10}\min\{|x_0 - \frac{1}{k}|,\,|x_0 - \frac{1}{k+1}|\}}{-2\log_{10}x_0}\;.$$

Based on simulation results presented in Table 9.3 we conclude that the continued fraction algorithm employed by Maple is more or less optimal. See Maple Program 9.7.6.

Table 9.3. Accuracy of the continued fraction algorithm of Maple: test result for $x_0 = [k,k,k,\ldots]$

  k   N_k    L_k    U_k  |   k   N_k    L_k    U_k
  1   237  237.7  240.0  |  20    37   37.0   38.5
  2   129  129.6  130.6  |  30    33   32.4   33.9
  3    95   95.2   96.4  |  40    30   29.8   31.3
  4    80   78.5   79.8  |  50    28   28.0   29.5
  5    69   68.6   70.0  |  60    27   26.7   28.2
  6    63   62.0   63.4  |  70    26   25.7   27.2
  7    58   57.2   58.7  |  80    25   24.9   26.3
  8    54   53.6   55.1  |  90    25   24.2   25.7
  9    51   50.8   52.2  | 100    24   23.6   25.1
 10    48   48.4   49.9  | 200    20   20.3   21.8
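The count $N_k$ can be reproduced outside Maple as well. The following Python sketch (ours, using the standard `decimal` module as an illustrative stand-in for Maple's variable-precision arithmetic) computes $x_0 = (-k+\sqrt{k^2+4})/2$ with $D = 100$ digits and counts how many partial quotients come out equal to $k$ before precision is exhausted.

```python
from decimal import Decimal, getcontext

def correct_quotients(k, D=100):
    """Count leading partial quotients equal to k in the continued
    fraction of x0 = (-k + sqrt(k^2 + 4))/2 computed with D digits."""
    getcontext().prec = D
    x = (Decimal(k * k + 4).sqrt() - k) / 2   # fixed point of the Gauss map
    count = 0
    while x != 0:
        a = int(1 / x)          # next partial quotient
        if a != k:
            break               # precision exhausted: quotient goes wrong
        count += 1
        x = 1 / x - a           # Gauss map T x = {1/x}
    return count

for k in (10, 20, 30):
    print(k, correct_quotients(k))   # counts near N_k in Table 9.3
```

The counts land between (roughly) $L_k$ and $U_k$; the exact values may differ from the table by one or two because the rounding model of `decimal` is not identical to Maple's.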

9.4 Generalized Continued Fractions

Consider the generalized continued fraction transformations $Tx = \{1/x^p\}$ for suitable choices of $p$. Let $x_0$ and $p_0$ be solutions of the system of two equations
$$x = \frac{1}{x^p} - 1\,, \qquad -\frac{p}{x^{p+1}} = -1\;.$$
Their numerical values are $x_0 \approx 0.3183657369$, $p_0 \approx 0.2414851418$. Note that $x_0$ is the rightmost intersection point of the graph $y = Tx$ with $y = x$ and that the derivative of $y = \{1/x^{p_0}\}$ at $x_0$ is equal to $-1$. See Fig. 9.5.

Fig. 9.5. y = T x (left) and y = T 2 x (right) for p = p0
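The pair $(x_0, p_0)$ can be recovered numerically with nothing more than nested bisection (an illustrative Python sketch of ours; the book's values presumably come from Maple's solver). For fixed $p$, $g(x) = x^{-p} - 1 - x$ is strictly decreasing on $(0,1)$, so the rightmost fixed point $x(p)$ is found by bisection; $p_0$ is then the $p$ at which the slope modulus $p/x(p)^{p+1}$ equals 1.

```python
def fixed_point(p):
    """Rightmost solution of x = 1/x**p - 1 on (0, 1); the function
    g(x) = x**(-p) - 1 - x is strictly decreasing, so bisect."""
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid ** (-p) - 1 - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def critical_p():
    """p0 such that the slope p/x^(p+1) at the fixed point equals 1."""
    lo, hi = 0.1, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2
        x = fixed_point(mid)
        if mid / x ** (mid + 1) < 1:   # attracting: slope modulus < 1
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p0 = critical_p()
x0 = fixed_point(p0)
print(p0, x0)   # about 0.2414851, 0.3183657
```

The bisection on $p$ relies on the slope at the fixed point increasing with $p$, which is consistent with the attracting/repelling dichotomy described below.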

For $p > p_0$, the modulus of the slope of the tangent line to $y = \{1/x^p\}$ at the intersection point with $y = x$ is greater than 1 and the rightmost fixed point is a repelling point. See the left graph in Fig. 9.6. For $0 < p < p_0$, the modulus of the slope of the tangent line is less than 1 and the rightmost fixed point is an attracting point. See the right graph in Fig. 9.6.

Fig. 9.6. y = T 2 x for p = 0.35 > p0 (left) and p = 0.15 < p0 (right)

Theorem 9.19. For $p > p_0$, $T$ has an absolutely continuous invariant density $\rho(x)$ such that
$$\rho(0) = \left(1 + \frac{1}{p}\right)\rho(1)\;.$$
Proof. Use Theorem 5.4 and $\varphi'(1) = -p$.


Observe in Fig. 9.7 that $\rho(x)$ converges to 1 as $p \to \infty$. Consult [Ah2] for a proof. The shape of the distribution becomes more concentrated around $x_0$ as $p \downarrow p_0$.

Fig. 9.7. Invariant probability density functions for T x = {1/xp } with p = 0.2415, 0.242, 0.244, 0.25, 0.3, 0.5, 1, 2 and 4 (from top left to bottom right). For p = 1 the theoretical pdf is also drawn

The Lyapunov exponent of $Tx = \{1/x^p\}$ is equal to the average of $\log|(x^{-p})'| = \log p - (p+1)\log x$ with respect to its invariant measure. We choose 10 for the logarithmic base. See Table 9.4. The sample size for each $p$ is at least $10^6$. Note that
$$\log p_0 - (p_0 + 1)\log x_0 = 0\;.$$
If $p\,(> p_0)$ is close to $p_0$, then the density is concentrated around $x_0$ and the integral of $\log p - (p+1)\log x$ is close to 0. For $p \gg 1$ the Lyapunov exponent may be approximated by $\int_0^1 \log|(x^{-p})'|\,dx = \log_{10}p + (p+1)\log_{10}e$.


Table 9.4. Lyapunov exponents for $Tx = \{1/x^p\}$ for $p > p_0$

    p     Estimate   True value   log10 p + (p+1) log10 e
 0.2415    0.004
 0.242     0.019
 0.244     0.044
 0.25      0.083
 0.3       0.230
 0.5       0.538
 1         1.026     1.03064
 2         1.739
 4         2.891                  2.774

In Fig. 9.8 the points $(n, T^n a)$, $1 \le n \le 1000$, are plotted to represent the orbits of $Tx = \{1/x^p\}$ for $p = 0.242, 0.244, 0.25, 0.3, 0.5$ and $1$. The starting point is $a = \pi - 3$. As $p$ increases the randomness also increases. For small $p$, it takes very long for an orbit to escape from a small neighborhood of the rightmost fixed point of $T$. The horizontal lines represent the rightmost fixed points for $p = 0.242, 0.244$ and $0.25$. For more information see [C1].

Fig. 9.8. Orbits of T x = {1/xp } of length 1000 for p = 0.242, 0.244, 0.25, 0.3, 0.5 and 1 (from top left to bottom right)

As in Sect. 3.6 we can deﬁne the Khinchin constants for generalized Gauss transformations. For their numerical estimation see [CKc].


9.5 Speed of Approximation by Convergents

Let $T$ be an ergodic transformation on the unit interval. Suppose that we have a partition $\mathcal{P}$ of $[0,1]$ into disjoint subintervals, say $[0,1] = \bigcup_k I_k$, the union being finite or infinite depending on the problem. If $\mathcal{P}$ is generating, then almost every point $x$ has a unique symbolic representation $x = a_1 a_2 \ldots a_k \ldots$ according to the rule $T^{k-1}x \in I_{a_k}$ for every $k \ge 1$. Let $P_n(x)$ be the unique element containing $x$ in the partition $\mathcal{P}_n = \mathcal{P} \vee T^{-1}\mathcal{P} \vee \cdots \vee T^{-(n-1)}\mathcal{P}$. In other words,
$$P_n(x) = I_{a_1} \cap T^{-1}(I_{a_2}) \cap \cdots \cap T^{-(n-1)}(I_{a_n})\;.$$
We are interested in the convergence speed of the $n$th convergent $x_n$ to $x$, where $x_n$ is any convenient point in $P_n(x)$. In general, if $P_n(x)$ is an interval, we choose one of the endpoints of $P_n(x)$ as its $n$th convergent $x_n$.

Theorem 9.20. Let $T : [0,1] \to [0,1]$ be an ergodic transformation with an invariant probability measure $d\mu = \rho(x)\,dx$, where $\rho$ is piecewise continuous and $A \le \rho(x) \le B$ for some constants $0 < A \le B < \infty$. Let $h$ be the Lyapunov exponent of $T$. Consider the case when $P_n(x)$ is an interval for every $x$. Choose one of the endpoints of $P_n(x)$ as the $n$th convergent $x_n$ of each $x$. Then
$$\liminf_{n\to\infty}\left(-\frac{\log|x - x_n|}{n}\right) \ge h\,,$$
where the same logarithmic base is used to define the entropy and the logarithm.

Proof. Let $\ell(I)$ denote the length of an interval $I$. The Shannon–McMillan–Breiman Theorem states that
$$\lim_{n\to\infty}\frac{\log\mu(P_n(x))}{n} = -h$$
for almost every $x$. Since $\rho$ is continuous at almost every $x$, we have
$$\mu(P_n(x)) = \int_{P_n(x)}\rho(t)\,dt \approx \rho(x)\times\ell(P_n(x))$$
where '$\approx$' means 'being approximately equal to' when $n$ is sufficiently large. Hence
$$\lim_{n\to\infty}\frac{\log\ell(P_n(x))}{n} = -h$$
for almost every $x$. Since $|x - x_n| \le \ell(P_n(x))$, we have
$$\limsup_{n\to\infty}\frac{\log|x - x_n|}{n} \le -h\;.$$


In many examples the limit infimum and the inequality in the preceding theorem are a limit and an equality, respectively, and if this happens then we may view the symbolic representation as an expansion with respect to the base $b \approx 10^h$. For example, the decimal expansion obtained from $Tx = 10x \pmod 1$ uses the base $b = 10$. In this context the expansions corresponding to $x \mapsto \{1/x^2\}$ and $x \mapsto \{1/\sqrt{x}\}$ employ bases approximately equal to $10^{1.8} \approx 63$ and $10^{0.5} \approx 3$, respectively. See Fig. 9.9.

Example 9.21. For $Tx = \{10x\}$, we have the usual decimal expansion
$$x = 0.a_1a_2a_3\ldots = \frac{1}{10}\left(a_1 + \frac{1}{10}\left(a_2 + \frac{1}{10}(a_3 + \cdots)\right)\right)$$
where
$$x_n = 0.a_1a_2\ldots a_n = \frac{1}{10}\left(a_1 + \frac{1}{10}\left(a_2 + \cdots + \frac{1}{10}a_n\right)\right)\;.$$
Hence $|x - x_n| \le 10^{-n}$. In this case we can prove that $(\log|x - x_n|)/n$ converges to $-1$ for almost every $x$. First, note that $(x - x_n)\times 10^n = T^n x$. For any fixed $\delta > 0$, define $A_n$ by
$$A_n = \left\{x : \frac{\log|x - x_n|}{n} \le -1 - \delta\right\} = \left\{x : \frac{\log T^n x}{n} \le -\delta\right\}\;.$$
Let $\mu$ denote Lebesgue measure on $[0,1)$. Since $\mu(A_n) = 10^{-\delta n}$, we have $\sum_{n=1}^{\infty}\mu(A_n) < \infty$. The Borel–Cantelli Lemma implies that the set of points belonging to $A_n$ for infinitely many $n$ has measure zero. Therefore for almost every $x$ we have
$$\lim_{n\to\infty}\frac{\log|x - x_n|}{n} = -1\;.$$

Example 9.22. Let $\beta = \frac{\sqrt{5}+1}{2}$. For $Tx = \{\beta x\}$, we have the expansion with respect to the base $\beta$ given by
$$x = (0.a_1a_2a_3\ldots)_\beta = \frac{1}{\beta}\left(a_1 + \frac{1}{\beta}\left(a_2 + \frac{1}{\beta}(a_3 + \cdots)\right)\right)$$
and
$$x_n = (0.a_1a_2\ldots a_n)_\beta = \frac{1}{\beta}\left(a_1 + \frac{1}{\beta}\left(a_2 + \cdots + \frac{1}{\beta}a_n\right)\right)$$
where $a_i \in \{0,1\}$. Hence $|x - x_n| \le \beta^{-n}$. Proceeding as in Ex. 9.21, we can show that $(\log|x - x_n|)/n$ converges to $-\log\beta$ for almost every $x$. Note that there is no consecutive string of 1's in the $\beta$-expansion of $0 < x < 1$; in other words, if $a_i = 1$ then $a_{i+1} = 0$.

Consider an arbitrary sum $\sum_{i=1}^{\infty} a_i\beta^{-i}$, $a_i \in \{0,1\}$, which is not necessarily obtained from the $\beta$-expansion. The representation is not unique. For example,
$$\frac{1}{\beta}\left(0 + \frac{1}{\beta}\left(1 + \frac{1}{\beta}\left(1 + \frac{1}{\beta}(a_4 + \cdots)\right)\right)\right) = \frac{1}{\beta}\left(1 + \frac{1}{\beta}\left(0 + \frac{1}{\beta}\left(0 + \frac{1}{\beta}(a_4 + \cdots)\right)\right)\right)\;.$$


This can be explained by the following fact: let $b > 1$ be a real number, and let $A = \{a_1,\ldots,a_k\}$ be a set of real numbers including 0. If all real numbers can be expressed in the form $\sum_{i=-\infty}^{M} a_i b^i$, and the set of multiple representations has Lebesgue measure zero, then $b$ is an integer. See the remark at the end of Sect. 5.1 in [Ed].

Example 9.23. Let $T : [0,1] \to [0,1]$ be the Bernoulli transformation defined by
$$Tx = \begin{cases} \dfrac{1}{p}\,x & \text{if } x \in [0,p)\,, \\[1ex] \dfrac{1}{1-p}\,(x-p) & \text{if } x \in [p,1]\;. \end{cases}$$
See also Ex. 2.31. For $x \in [0,1)$ and $n \ge 1$, define $\beta_n(x)$ and $a_n(x)$ by
$$\beta_n(x) = \begin{cases} \dfrac{1}{p} & \text{if } T^{n-1}x \in [0,p)\,, \\[1ex] \dfrac{1}{1-p} & \text{if } T^{n-1}x \in [p,1]\,, \end{cases}$$
and
$$a_n(x) = \begin{cases} 0 & \text{if } T^{n-1}x \in [0,p)\,, \\[1ex] \dfrac{p}{1-p} & \text{if } T^{n-1}x \in [p,1]\;. \end{cases}$$
It can be shown that $\beta_n(x) = \beta_1(T^{n-1}x)$ and $a_n(x) = a_1(T^{n-1}x)$ for every $n \ge 1$, and
$$x = \frac{a_1}{\beta_1} + \frac{a_2}{\beta_1\beta_2} + \cdots + \frac{a_n}{\beta_1\cdots\beta_n} + \frac{T^n x}{\beta_1\cdots\beta_n}\;.$$
See [DK] for more details. Let

$$x_n = \frac{a_1}{\beta_1} + \frac{a_2}{\beta_1\beta_2} + \cdots + \frac{a_n}{\beta_1\cdots\beta_n}\;.$$
Then
$$0 \le x - x_n = \frac{T^n x}{\beta_1\cdots\beta_n}\;.$$
As in Ex. 9.21, we have $\lim_{n\to\infty}(\log T^n x)/n = 0$ for almost every $x$, and hence
$$\lim_{n\to\infty}\left(-\frac{\log(x - x_n)}{n}\right) = \lim_{n\to\infty}\frac{1}{n}\left(-\log T^n x + \sum_{k=1}^{n}\log\beta_k(x)\right) = \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\log\beta_1(T^{k-1}x) = \int_0^1 \log\beta_1(x)\,dx = p\log\frac{1}{p} + (1-p)\log\frac{1}{1-p}\;.$$
The third equality comes from the Birkhoff Ergodic Theorem. See Maple Program 9.7.7.
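Maple Program 9.7.7 simulates this convergence; the same experiment can be sketched in Python (an illustrative translation of ours, with names chosen freely): iterate the Bernoulli map, accumulate the convergents $x_n$, and compare $-\log_{10}(x - x_n)/n$ with the base-10 entropy $p\log_{10}(1/p) + (1-p)\log_{10}(1/(1-p))$.

```python
import math

p = 0.7
entropy = p * math.log10(1 / p) + (1 - p) * math.log10(1 / (1 - p))

def bernoulli_rate(x0, n):
    """-log10(x0 - x_n)/n for the Bernoulli map with parameter p."""
    x, xn, prod = x0, 0.0, 1.0
    for _ in range(n):
        if x < p:
            beta, a, x = 1 / p, 0.0, x / p
        else:
            beta, a, x = 1 / (1 - p), p / (1 - p), (x - p) / (1 - p)
        prod *= beta            # prod = beta_1 * ... * beta_n
        xn += a / prod          # convergent x_n
    return -math.log10(x0 - xn) / n

rate = bernoulli_rate(math.pi - 3, 40)
print(rate, entropy)   # the two numbers should be close
```

With double precision, $n$ must stay well below the point where the orbit loses all accuracy (about $16/h$ iterations), which is why $n = 40$ is used here.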


Example 9.24. Let $Tx = \{1/x\}$, $0 < x < 1$. For almost every $x$ we have the classical continued fraction expansion
$$x = \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{a_3 + \cdots}}}$$
for which $x_n = p_n/q_n$, $(p_n, q_n) = 1$, and
$$\frac{1}{2q_nq_{n+1}} < \left|x - \frac{p_n}{q_n}\right| < \frac{1}{q_n^2}\;.$$
Hence
$$-2\,\frac{\log q_{n+1}}{n} < \frac{\log|x - \frac{p_n}{q_n}|}{n} < -2\,\frac{\log q_n}{n}$$
and
$$-\lim_{n\to\infty}\frac{\log|x - \frac{p_n}{q_n}|}{n} = \lim_{n\to\infty}2\,\frac{\log q_n}{n} = \frac{\pi^2}{6\log 2} = h$$
where log denotes the natural logarithm. The second and the third equalities come from Theorem 3.19 and Ex. 9.11, respectively.

Example 9.25. (Square-root continued fractions) For $Tx = \{1/x^2\}$, we have
$$x = \frac{1}{\sqrt{a_1 + Tx}}\;.$$
Since $Tx = \frac{1}{\sqrt{a_2 + T^2x}}$ with $a_2 = \lfloor 1/(Tx)^2\rfloor$, we have
$$x = \frac{1}{\sqrt{a_1 + \dfrac{1}{\sqrt{a_2 + T^2x}}}}\;.$$
Continuing indefinitely, we obtain
$$x = \frac{1}{\sqrt{a_1 + \dfrac{1}{\sqrt{a_2 + \dfrac{1}{\sqrt{a_3 + \cdots}}}}}} = [a_1, a_2, a_3, \ldots]$$
where $a_1(x) = \lfloor 1/x^2\rfloor$ and $a_n(x) = a_1(T^{n-1}x)$, $n \ge 2$. To define $x_n$ we use the first $n$ elements $a_1,\ldots,a_n$.
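The convergence rate of Ex. 9.24 is easy to probe numerically for a point whose partial quotients are known exactly. The sketch below (ours, not from the book) uses the exceptional point $x = \sqrt{2}-1 = [2,2,2,\ldots]$, a fixed point of the Gauss map; as in Sect. 9.3, its rate is $-2\log_{10}(\sqrt{2}-1) \approx 0.766$ rather than the almost-everywhere value $h$. Exact rational convergents avoid floating-point digit extraction.

```python
import math
from fractions import Fraction

def cf_convergent(n):
    """n-th convergent of [2, 2, 2, ...] = sqrt(2) - 1, computed exactly
    by folding the continued fraction from the bottom up."""
    r = Fraction(0)
    for _ in range(n):
        r = Fraction(1, 2 + r.denominator) if False else 1 / (2 + r)
    return r

x = math.sqrt(2) - 1
for n in (5, 10, 15):
    rate = -math.log10(abs(x - float(cf_convergent(n)))) / n
    print(n, rate)   # decreases toward -2*log10(sqrt(2)-1), about 0.766
```

For larger $n$ the error $|x - x_n|$ drops below double precision, so the comparison is only meaningful for moderate $n$.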


Example 9.26. (Squared continued fractions) For $Tx = \{1/\sqrt{x}\}$, we have
$$x = \left(a_1 + \left(a_2 + (a_3 + \cdots)^{-2}\right)^{-2}\right)^{-2}\,,$$
where $a_1(x) = \lfloor 1/\sqrt{x}\rfloor$ and $a_n(x) = a_1(T^{n-1}x)$, $n \ge 2$. The first $n$ elements $a_1,\ldots,a_n$ are used to define $x_n$.


Fig. 9.9. $y = -(\log_{10}|x - x_n|)/n$, $2 \le n \le 100$, at $x = \pi - 3$ for different expansions: decimal, $\beta$, Bernoulli, Gauss, $Tx = \{1/x^2\}$ and $Tx = \{1/\sqrt{x}\}$ (from top left to bottom right). Horizontal lines indicate entropies

9.6 Random Shuffling of Cards

Given a deck of $n$ cards, what is the minimal (or optimal) number of shuffles to achieve reasonable randomness among the cards? P. Diaconis [Di] regarded the problem as random walks on the symmetric group $S_n$ on $n$ symbols. A permutation $g \in S_n$ corresponds to a method of shuffling $n$ cards. A product $g_1\cdots g_k$, $g_i \in S_n$, $1 \le i \le k$, corresponds to consecutive applications of the shufflings corresponding to the $g_i$. Let $\mu$ be a probability distribution on $S_n$; in other words, there exist $g_1,\ldots,g_\ell \in S_n$ with $\mu(g_j) > 0$, $\sum_{j=1}^{\ell}\mu(g_j) = 1$. A random walk on $S_n$ is given by $\Pr(g \to h) = \mu(g^{-1}h)$ where $\Pr(g \to h)$ is the probability of moving from $g$ to $h$. Define the convolution $\mu * \mu(x) = \sum_{g \in S_n}\mu(xg^{-1})\mu(g)$. Similarly, the $k$th convolution $\mu^{*k}$ is defined. The probability of moving from $g$ to $h$ in $k$


steps is given by $\mu^{*k}(g^{-1}h)$. Let $\lambda$ be the uniform distribution on $S_n$, i.e., each permutation has equal probability, and let $\|\cdot\|$ denote the norm defined on the real vector space $\mathcal{M}(G)$ of real-valued measures on $G$ by $\|\nu\| = \sum_{g \in S_n}|\nu(g)|$, $\nu \in \mathcal{M}(G)$. Now the card shuffling problem can be formulated in terms of convolution: what is the minimal number $k$ for which $\|\mu^{*k} - \lambda\| < \varepsilon$ for a sufficiently small constant $\varepsilon > 0$? For a suitable choice of $\mu$ reflecting the usual shuffling methods, Diaconis showed that $k$ is approximately 7.

We introduce an approach based on ergodic theory. A continuous version of typical card shuffling, called riffle shuffling, is illustrated in Fig. 9.10. It may be regarded as a limiting case of shuffling of $n$ cards as $n \to \infty$. The unit interval $[0,1]$ is a model for a stack of infinitely many cards. Cut the stack at $x = p$ and place the upper stack (Stack B) beside the lower stack (Stack A). Then shuffle Stack A and Stack B evenly.

Fig. 9.10. A continuous version of riﬄe shuﬄing: Stacks A and B correspond to the intervals [0, p] and [p, 1], respectively. They are linearly mapped to [0, 1]
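The convolution formulation above can be checked directly on a tiny group. The Python sketch below (our own toy example, not from [Di]) takes $S_3$, a hypothetical shuffling distribution $\mu$ supported on the identity and two adjacent transpositions, and watches $\|\mu^{*k} - \lambda\|$ shrink as $k$ grows.

```python
from itertools import permutations

# Elements of S_3 as tuples; composition (g o h)(i) = g[h[i]].
G = list(permutations(range(3)))
compose = lambda g, h: tuple(g[h[i]] for i in range(3))
inverse = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))

# A toy shuffling distribution (our choice): do nothing, or swap one
# adjacent pair of cards.
mu = {g: 0.0 for g in G}
mu[(0, 1, 2)] = 0.5    # identity
mu[(1, 0, 2)] = 0.25   # swap cards 0 and 1
mu[(0, 2, 1)] = 0.25   # swap cards 1 and 2

def convolve(nu, mu):
    """(nu * mu)(x) = sum over g of nu(x g^-1) mu(g)."""
    return {x: sum(nu[compose(x, inverse(g))] * mu[g] for g in G) for x in G}

dist = dict(mu)               # mu^{*1}
for _ in range(19):
    dist = convolve(dist, mu) # now mu^{*20}
l1 = sum(abs(dist[g] - 1 / 6) for g in G)
print(l1)   # ||mu^{*20} - lambda|| is small: the walk has mixed
```

Because the two transpositions generate $S_3$ and the identity has positive mass, the walk is irreducible and aperiodic, so $\mu^{*k} \to \lambda$.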

The transformation $T_p$ defined in Ex. 9.8 corresponds to a typical shuffling. Note that $T_p$ is ergodic with respect to Lebesgue measure. Its entropy is given by the Lyapunov exponent $H(p) = -p\log p - (1-p)\log(1-p)$. Here we use the natural logarithm. Choose $p_1,\ldots,p_k$ randomly, i.e., the $p_i$ are independent and uniformly distributed in $[0,1]$. Consider the composite mapping $T_{p_k}\circ\cdots\circ T_{p_1}$. Its entropy is equal to $\sum_{i=1}^{k}H(p_i)$, and its average over $[0,1]^k$ is equal to
$$\int_0^1\cdots\int_0^1 \sum_{i=1}^{k}H(p_i)\,dp_1\cdots dp_k = k\int_0^1 H(p)\,dp = \frac{k}{2}\;.$$

Suppose that there are n cards. If Tpk ◦ · · · ◦ Tp1 is close to random shuﬄing of n cards, then it would look like the transformation x → {nx}. Thus they have almost equal entropy and we may conclude that k ≈ 2 log n. In Japan and Korea there are traditional card games played with forty eight cards. Each month in a year is represented by four cards. The cards
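The entropy-matching argument is easy to evaluate numerically (a small Python sketch of ours): a Monte Carlo estimate confirms $\int_0^1 H(p)\,dp = 1/2$, and equating $k/2$ with the entropy $\log n$ of $x \mapsto \{nx\}$ gives $k \approx 2\log n$, which for a standard 52-card deck is about 8, consistent with the "seven shuffles" of Diaconis.

```python
import math, random

# Monte Carlo check that the average entropy of one riffle shuffle is 1/2.
H = lambda p: -p * math.log(p) - (1 - p) * math.log(1 - p)
random.seed(1)
avg = sum(H(random.random()) for _ in range(100_000)) / 100_000
print(avg)              # close to 0.5

# Shuffle count suggested by the entropy argument for n cards.
shuffles = lambda n: 2 * math.log(n)
print(shuffles(52))     # about 7.9
```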


are made of small, thick and inflexible material and the shuffling method is different. It is called overhand shuffling. To have a perfect shuffling of $n$ cards by overhand shuffling, approximately $n^2$ shuffles are necessary, which is far more than the number of shuffles needed in riffle shuffling. See [Pem] for the proof. From experience, card players know this fact, and this is why one has to scatter the cards on the table randomly and gather them into a stack before the cards are shuffled by overhand shuffling. From the viewpoint of ergodic theory this fact is not surprising. The transformations $T$ corresponding to overhand shuffling are interval exchange transformations preserving Lebesgue measure, and they satisfy $|T'(x)| = 1$ wherever the derivative is defined. (See Fig. 9.11.) Hence the Lyapunov exponent of $T$ is zero, and it takes a long time to have a perfect shuffling.

Fig. 9.11. A continuous version of overhand shuﬄing (left) and the corresponding transformation (right)

9.7 Maple Programs

9.7.1 The Lyapunov exponent: the Gauss transformation

Show that the entropy of the Gauss transformation $T$ is equal to $\pi^2/(6\ln 2\ln b)$ where $b > 1$ is the logarithmic base in the definition of the Lyapunov exponent.
> T:=x->frac(1/x):
Take the derivative of T(x).
> diff(1/x,x):
Find log_b |T'(x)|.
> log[b](abs(%));
    -2 ln(|x|)/ln(b)
> LE:=x->-2*log[b](x):
Define the invariant probability density ρ(x).
> rho:=x->1/ln(2)*1/(1+x):


Compute $\int_0^1 \log|T'(x)|\,\rho(x)\,dx$.
> Lyapunov_exponent:=int(LE(x)*rho(x),x=0..1);
    Lyapunov_exponent := π²/(6 ln(b) ln(2))

Choose the logarithmic base b = 10.
> b:=10:
> evalf(Lyapunov_exponent);
    1.030640835
Apply the Birkhoff Ergodic Theorem.
> seed[0]:=evalf(Pi-3):
> SampleSize:=10000:
> for i from 1 to SampleSize do
>   seed[i]:=T(seed[i-1]):
> od:
> add(LE(seed[i]),i=1..SampleSize)/SampleSize;
    1.032996182
Two values obtained from the Birkhoff Ergodic Theorem and the definition of the Lyapunov exponent are very close.

9.7.2 H_{n,k}: a local version for the Gauss transformation

Check the convergence of H_{n,k} for the Gauss transformation at a fixed point.
> with(plots):
> T:=x->frac(1.0/x):
> Digits:=400:
> k:=350:
> L:=300:
Choose two neighboring points seed1[0] and seed2[0].
> seed1[0]:=evalf(frac(Pi)):
> seed2[0]:=seed1[0] + 10^(-k):

for n from 1 to L do seed1[n]:=T(seed1[n-1]): od:

> for n from 1 to L do seed2[n]:=T(seed2[n-1]): od:
> for n from 1 to L do
>   H[n]:=log[10](abs(seed1[n]-seed2[n]))/n + k/n:
> od:
> Digits:=10:
> entropy:=Pi^2/6/ln(2)/ln(10):
> g1:=listplot([seq([n,H[n]],n=1..L)],labels=["n"," "]):
> g2:=plot(entropy,x=0..L):
> display(g1,g2);
See Fig. 9.1.


9.7.3 H_{n,k}: a global version for the Gauss transformation

Simulate the global version of H_{n,k} along an orbit {T^j x_0 : 1 ≤ j ≤ 1000} for the Gauss transformation.
> with(plots):
> T:=x->frac(1.0/x):
> entropy:=Pi^2/6/ln(2)/ln(10):
> Digits:=400:
> k:=300:
> n:=200:
> SampleSize:=200:
> Length:=SampleSize + n:
> seed[0]:=evalf(Pi-3.0):
> for i from 1 to Length do seed[i]:=T(seed[i-1]): od:
> for s from 1 to SampleSize do
>   seed2[1]:=seed[s]+10^(-k):
>   for i from 2 to n do seed2[i]:=T(seed2[i-1]): od:
>   H_nk[s]:=log10(abs(seed[s+n-1]-seed2[n])) / n + k/n:
> od:
> Digits:=10:
> g1:=pointplot([seq([seed[s],H_nk[s]],s=1..SampleSize)]):
> g2:=plot(entropy,x=0..1):
> display(g1,g2);
See Fig. 9.2.

9.7.4 The divergence speed: a local version for the Gauss transformation

Check the convergence of n/V_n for the Gauss transformation at a point.
> with(plots):
> Digits:=500:
> T:=x->frac(1.0/x):
> entropy:=Pi^2/6/ln(2)/ln(10):
> seed[0]:=evalf(Pi-3.0):
> for i from 1 to 500 do seed[i]:=T(seed[i-1]): od:
Find the divergence speed.
> for k from 1 to 200 do
>   X0:=evalf(seed[0]+10.0^(-k)):
>   Xn:=T(X0):
>   for j from 1 while abs(Xn-seed[j]) < 1/10 do Xn:=T(Xn): od:
>   V[k]:=j;
> od:


> Digits:=10:
> fig1:=listplot([seq([n,n/V[n]],n=2..100)]):
> fig2:=plot(entropy,x=0..100,labels=["n"," "]):
> display(fig1,fig2);
See Fig. 9.3.

9.7.5 The divergence speed: a global version for the Gauss transformation

Simulate a global version of n/V_n for the Gauss transformation.
> with(plots):
> n:=100:
> Digits:=2*n:
> T:=x->frac(1.0/x):
> SampleSize:=1000:
> entropy:=evalf(Pi^2/(6*ln(2)*ln(10)),10):
> Length:=SampleSize + ceil(2*n/entropy);
    Length := 1195
> seed[0]:=evalf(Pi-3.0):
> for i from 1 to Length do seed[i]:=T(seed[i-1]): od:
> for s from 1 to SampleSize do
>   seed1[s]:=seed[s]-10^(-n):
>   Xn:=T(seed1[s]):
>   for j from 1 while abs(Xn-seed[j+s]) < 1/10 do Xn:=T(Xn): od:
>   V_n[s]:=j:
> od:
> Digits:=10:
> g1:=listplot([seq([evalf(seed[s],10),n/V_n[s]],s=1..SampleSize)]):
> g2:=plot(entropy,x=0..1):
> display(g1,g2);
See Fig. 9.4.

9.7.6 Number of correct partial quotients: validity of Maple algorithm of continued fractions

Find the number of correct partial quotients of $x_k = (-k + \sqrt{k^2+4})/2$ in numerical experiments using Maple. See Table 9.3.
> with(numtheory):
We need a Maple package for continued fractions.
> cfrac(sqrt(3)-1,5,'quotients');
    [0, 1, 2, 1, 2, 1, ...]

> cfrac(sqrt(3)-1,5);
    1/(1 + 1/(2 + 1/(1 + 1/(2 + 1/(1 + ...)))))
What is the difference among the following three commands? The second method does not work, and we receive an error message.
> convert(sqrt(3.)-1,confrac);


    [0, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 3, 2, 8]
> convert(sqrt(3)-1,confrac);

Error, (in convert/confrac) expecting a 3rd argument of type {name, name = algebraic} > >

> Digits:=15:
> convert(sqrt(3.)-1,confrac);

    [0, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 7]
The preceding results depend on the number of significant digits. For a related experiment see Maple Program 3.9.8. Find the fixed points of the Gauss transformation.
> solve(1/x=k+x, x);
    -1/2 k + 1/2 sqrt(k^2+4), -1/2 k - 1/2 sqrt(k^2+4)
> Digits:=100:
> for k from 10 to 100 by 10 do
>   x[k]:=-1/2*k+1/2*sqrt(k^2+4.):
> od:
Count the number of correct partial quotients. All the partial quotients should be identical by the definition of x_k.
> for k from 10 to 100 by 10 do
>   convert(x[k],confrac):
> od;
    [0, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 1, 2, 5]
    [0, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 18, 10]
    [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 11, 2]
    ...


The command 'nops' counts the number of elements in a given sequence.
> for k from 10 to 100 by 10 do
>   nops(convert(x[k],confrac)):
> od;
    53
    40
    36
    ...
For k = 10 the length of the sequence given by Maple is 53. The first element 0 means that the integer part of x[10] is zero. Observe that the number of incorrect partial quotients 9, 1, 2, 5 at the end of the sequence is four. Hence the number of correct partial quotients is equal to N_k = 53 - 1 - 4 = 48. For k = 20 and k = 30 we have N_k = 37 and N_k = 33 correct partial quotients, respectively.
> Digits:=10:
> DD:=100:
Calculate lower bounds.
> for k from 10 to 100 by 10 do
>   L[k]:=(DD + log10(2.) + log10(min(abs(x[k]-1/k), abs(x[k]-1/(k+1)))))/(-2*log10(x[k])):
> od;
    L10 := 48.43895528
    L20 := 37.01517232
    L30 := 32.44061585
    ...
Calculate upper bounds.
> for k from 10 to 100 by 10 do
>   U[k]:=(DD + log10(2.) + log10(max(abs(x[k]-1/k), abs(x[k]-1/(k+1)))))/(-2*log10(x[k]))+1:
> od;
    U10 := 49.89580130
    U20 := 38.49850368
    U30 := 33.93082056
    ...

298

9 The Lyapunov Exponent: One-Dimensional Case > >

> >

> T:=x->piecewise(0<=x and x<p, x/p, (x-p)/(1-p)):
> entropy:=evalf(p*log10(1/p)+(1-p)*log10(1/(1-p))):
> Length:=100:
> seed[0]:=evalf(Pi-3):
Generate a sequence of partial quotients.
> for n from 1 to Length do
>   if seed[n-1] < p then
>     beta[n]:=1/p: a[n]:=0:
>   else
>     beta[n]:=1/(1-p): a[n]:=p/(1-p):
>   fi:
>   seed[n]:=T(seed[n-1]):
> od:
In the following prod[n] is the product of β_1, ..., β_n.
> prod[0]:=1:
> for n from 1 to Length do
>   prod[n]:=prod[n-1]*beta[n]:
> od:
Find the nth convergent x_n.
> x[0]:=0:
> for n from 1 to Length do
>   x[n]:=x[n-1]+a[n]/prod[n]:
>   c[n]:=-log10(abs(x[n]-seed[0]))/n:
> od:
Draw the graph.
> Digits:=10:
> g1:=listplot([seq([n,c[n]],n=2..Length)]):
> g2:=plot(entropy,x=0..Length,labels=["n",""]):
> display(g1,g2);
See Fig. 9.9.

References

[Aa] J. Aaronson, An Introduction to Infinite Ergodic Theory, American Mathematical Society, Providence, 1997.
[AG] M. Adams and V. Guillemin, Measure Theory and Probability, Birkhäuser, Boston, 1996.
[Ad] R.L. Adler, Geodesic flows, interval maps, and symbolic dynamics, in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, edited by T. Bedford, M. Keane and C. Series, 93–121, Oxford University Press, Oxford, 1991.
[AdF] R.L. Adler and L. Flatto, Geodesic flows, interval maps, and symbolic dynamics, Bulletin of the American Mathematical Society 25 (1991), 229–334.
[AdMc] R.L. Adler and M.H. McAndrew, The entropy of Chebyshev polynomials, Transactions of the American Mathematical Society 121 (1966), 236–241.
[Ah1] Y. Ahn, On compact group extension of Bernoulli shifts, Bulletin of the Australian Mathematical Society 61 (2000), 277–288.
[Ah2] Y. Ahn, Generalized Gauss transformations, Applied Mathematics and Computation 142 (2003), 113–122.
[ASY] K.T. Alligood, T.D. Sauer and J.A. Yorke, Chaos – An Introduction to Dynamical Systems, Springer-Verlag, New York, 1996.
[AB] J.A. Anderson and J.M. Bell, Number Theory with Applications, Prentice-Hall, Upper Saddle River, 1996.
[AH] J. Arndt and C. Haenel, Pi – Unleashed, 2nd ed., Springer-Verlag, Berlin, 2001.
[Ar1] V.I. Arnold, Small denominators I: Mappings of the circumference onto itself, American Mathematical Society Translations Series 2 46 (1965), 213–284. (Originally published in Izvestiya Akademii Nauk SSSR, Seriya Matematicheskaya 25 (1961), 21–86.)
[Ar2] V.I. Arnold, Geometric Methods in the Theory of Ordinary Differential Equations, 2nd ed., Springer-Verlag, New York, 1988.
[ArA] V.I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, Benjamin, New York, 1968.
[At] P. Atkins, Galileo's Finger: The Ten Great Ideas of Science, Oxford University Press, Oxford, 2003.
[BBC] D.H. Bailey, J.M. Borwein and R.E. Crandall, On the Khintchine constant, Mathematics of Computation 66 (1997), 417–431.

[Bal] V. Baladi, Positive Transfer Operators and Decay of Correlations, World Scientific, Singapore, 2000.
[BaP] L. Barreira and Ya. Pesin, Lyapunov Exponents and Smooth Ergodic Theory, American Mathematical Society, Providence, 2002.
[BaS1] L. Barreira and B. Saussol, Hausdorff dimension of measures via Poincaré recurrence, Communications in Mathematical Physics 219 (2001), 443–463.
[BaS2] L. Barreira and B. Saussol, Product structure of Poincaré recurrence, Ergodic Theory and Dynamical Systems 22 (2002), 33–61.
[BeC] M. Benedicks and L. Carleson, The dynamics of the Hénon map, Annals of Mathematics 133 (1991), 73–169.
[BeY] M. Benedicks and L.-S. Young, Sinai–Bowen–Ruelle measures for certain Hénon maps, Inventiones Mathematicae 112 (1993), 541–576.
[Benn] D.J. Bennett, Randomness, Harvard University Press, Cambridge, 1998.
[BCG] E. Berlekamp, T.M. Cover, R.G. Gallager, S.W. Golomb, J.L. Massey and A.J. Viterbi, Claude Elwood Shannon (1916–2001), Notices of the American Mathematical Society 49 (2002), 8–16.
[Bi] P. Billingsley, Ergodic Theory and Information, Wiley, New York, 1965.
[Bir] G.D. Birkhoff, Proof of the ergodic theorem, Proceedings of the National Academy of Sciences USA 17 (1931), 656–660.
[Bos] M.D. Boshernitzan, Quantitative recurrence results, Inventiones Mathematicae 113 (1993), 617–631.
[Bow] R. Bowen, Invariant measures for Markov maps of the interval, Communications in Mathematical Physics 69 (1979), 1–17.
[BoG] A. Boyarsky and P. Góra, Laws of Chaos: Invariant Measures and Dynamical Systems in One Dimension, Birkhäuser, Boston, 1997.
[Brei] L. Breiman, The individual ergodic theorem of information theory, Annals of Mathematical Statistics 28 (1957), 809–811. Correction, ibid. 31 (1960), 809–810.
[Brem] P. Brémaud, An Introduction to Probabilistic Modeling, Springer-Verlag, New York, 1987.
[BS] M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, Cambridge, 2002.
[BLvS] H. Bruin, S. Luzzatto and S. van Strien, Decay of correlations in one-dimensional dynamics, Annales Scientifiques de l'École Normale Supérieure 36 (2003), 621–646.
[Ca] L. Carleson, Two remarks on the basic theorems of information theory, Mathematica Scandinavica 6 (1958), 175–180.
[Chi] B.V. Chirikov, A universal instability of many-dimensional oscillator systems, Physics Reports 52 (1979), 263–379.
[C1] G.H. Choe, Generalized continued fractions, Applied Mathematics and Computation 109 (2000), 287–299.
[C2] G.H. Choe, Recurrence of transformations with absolutely continuous invariant measures, Applied Mathematics and Computation 129 (2002), 501–516.
[C3] G.H. Choe, A universal law of logarithm of the recurrence time, Nonlinearity 16 (2003), 883–896.
[CHN] G.H. Choe, T. Hamachi and H. Nakada, Mod 2 normal numbers and skew products, Studia Mathematica 165 (2004), 53–60.

[CKc] G.H. Choe and C. Kim, The Khintchine constants for generalized continued fractions, Applied Mathematics and Computation 144 (2003), 397–411.
[CKd1] G.H. Choe and D.H. Kim, Average convergence rate of the first return time, Colloquium Mathematicum 84/85 (2000), 159–171.
[CKd2] G.H. Choe and D.H. Kim, The first return time test of pseudorandom numbers, Journal of Computational and Applied Mathematics 143 (2002), 263–274.
[CS] G.H. Choe and B.K. Seo, Recurrence speed of multiples of an irrational number, Proceedings of the Japan Academy Series A, Mathematical Sciences 77 (2001), 134–137.
[Chu] K.L. Chung, A note on the ergodic theorem of information theory, Annals of Mathematical Statistics 32 (1961), 612–614.
[Ci] Z. Ciesielski, On the functional equation f(t) = g(t) − g(2t), Proceedings of the American Mathematical Society 13 (1962), 388–393.
[CFS] I.P. Cornfeld, S.V. Fomin and Ya. G. Sinai, Ergodic Theory, Springer-Verlag, New York, 1982.
[DF] K. Dajani and A. Fieldsteel, Equipartition of interval partitions and an application to number theory, Proceedings of the American Mathematical Society 129 (2001), 3453–3460.
[DK] K. Dajani and C. Kraaikamp, Ergodic Theory of Numbers, Mathematical Association of America, Washington, D.C., 2002.
[Den] M. Denker, The central limit theorem for dynamical systems, in Dynamical Systems and Ergodic Theory, Banach Center Publications 23, 33–62, PWN, Warsaw, 1989.
[Dev] R.L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd ed., Addison-Wesley, Redwood City, 1989.
[Di] P. Diaconis, Group Representations in Probability and Statistics, Institute of Mathematical Statistics, Hayward, 1988.
[DH] F. Diacu and P. Holmes, Celestial Encounters: The Origins of Chaos and Stability, Princeton University Press, Princeton, 1996.
[Dr] A. Drozdek, Elements of Data Compression, Brooks/Cole, Pacific Grove, 2002.
[Du] R.M. Dudley, Real Analysis and Probability, Wadsworth & Brooks/Cole, Pacific Grove, 1988.
[Ec] R. Eckhardt, Stan Ulam, John von Neumann, and the Monte Carlo method, in From Cardinals to Chaos, edited by N.G. Cooper, 131–137, Cambridge University Press, Cambridge, 1989.
[Ed] G.A. Edgar, Measure, Topology, and Fractal Geometry, Springer-Verlag, New York, 1990.
[Eg] H.G. Eggleston, The fractional dimension of a set defined by decimal properties, Quarterly Journal of Mathematics Oxford 20 (1949), 31–36.
[EHI] S. Eigen, A. Hajian and Y. Ito, Ergodic measure preserving transformations of finite type, Tokyo Journal of Mathematics 11 (1988), 459–470.
[El] S. Elaydi, Discrete Chaos, Chapman & Hall/CRC, Boca Raton, 1999.
[Fa] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, John Wiley & Sons, Chichester, 1990.
[FW] D.-J. Feng and J. Wu, The Hausdorff dimension of recurrent sets in symbolic spaces, Nonlinearity 14 (2001), 81–85.

[Fl] K. Florek, Une remarque sur la répartition des nombres mξ mod 1, Colloquium Mathematicum 2 (1951), 323–324.
[Fo1] G.B. Folland, A Course in Abstract Harmonic Analysis, CRC Press, Boca Raton, 1995.
[Fo2] G.B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed., John Wiley & Sons, New York, 1999.
[For] R. Fortet, Sur une suite également répartie, Studia Mathematica 9 (1940), 54–70.
[Ga] F.R. Gantmacher, Matrix Theory, Chelsea, New York, 1959.
[HI] A. Hajian and Y. Ito, Transformations that do not accept a finite invariant measure, Bulletin of the American Mathematical Society 84 (1978), 417–427.
[Half] M. Halfant, Analytic properties of Rényi's invariant density, Israel Journal of Mathematics 27 (1977), 1–20.
[Halt] J. Halton, The distribution of the sequence {nξ} (n = 0, 1, . . .), Proceedings of the Cambridge Philosophical Society 61 (1965), 665–670.
[Ham] T. Hamachi, On a Bernoulli shift with nonidentical factor measures, Ergodic Theory and Dynamical Systems 1 (1981), 273–283.
[HHJ] D. Hankerson, G.A. Harris and P.D. Johnson, Jr., Introduction to Information Theory and Data Compression, CRC Press, Boca Raton, 1998.
[HK] B. Hasselblatt and A. Katok, A First Course in Dynamics: with a Panorama of Recent Developments, Cambridge University Press, Cambridge, 2003.
[Hel] P. Hellekalek, Good random number generators are (not so) easy to find, Mathematics and Computers in Simulation 46 (1998), 485–505.
[Hels] H. Helson, Harmonic Analysis, 2nd ed., Hindustan Book Agency and Helson Publishing Co., 1995.
[Hen] M. Hénon, A two-dimensional mapping with a strange attractor, Communications in Mathematical Physics 50 (1976), 69–77.
[HoK] F. Hofbauer and G. Keller, Ergodic properties of invariant measures for piecewise monotone transformations, Mathematische Zeitschrift 180 (1982), 119–140.
[Hu] H. Hu, Decay of correlations for piecewise smooth maps with indifferent fixed points, Ergodic Theory and Dynamical Systems 24 (2004), 495–524.
[Hur] A. Hurwitz, Über die Entwicklungen komplexer Größen in Kettenbrüche, Acta Mathematica 11 (1888), 187–200.
[Ib] I.A. Ibragimov, Some limit theorems for stationary processes, Theory of Probability and its Applications 7 (1962), 361–392.
[Io] A. Ionescu Tulcea, Contribution to information theory for abstract alphabets, Arkiv för Matematik 4 (1960), 235–247.
[ILM] A. Iwanik, M. Lemańczyk and C. Mauduit, Piecewise absolutely continuous cocycles over irrational rotations, Journal of the London Mathematical Society 59 (1999), 171–187.
[ILR] A. Iwanik, M. Lemańczyk and D. Rudolph, Absolutely continuous cocycles over irrational rotations, Israel Journal of Mathematics 83 (1993), 73–95.
[Ka1] M. Kac, On the distribution of values of sums of the type ∑ f(2^k t), Annals of Mathematics 47 (1946), 33–49.
[Ka2] M. Kac, On the notion of recurrence in discrete stochastic processes, Bulletin of the American Mathematical Society 53 (1947), 1002–1010.

[Ka3] M. Kac, Statistical Independence in Probability, Analysis and Number Theory, Mathematical Association of America, Washington, D.C., 1959.
[Ka4] M. Kac, Enigmas of Chance, University of California Press, Berkeley, 1987.
[KP] S. Kakutani and K. Petersen, The speed of convergence in the Ergodic Theorem, Monatshefte für Mathematik 91 (1981), 11–18.
[KH] A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press, Cambridge, 1995.
[Kel] G. Keller, Equilibrium States in Ergodic Theory, Cambridge University Press, Cambridge, 1998.
[KemS] J.G. Kemeny and J.L. Snell, Finite Markov Chains, Springer-Verlag, New York, 1976.
[Kh] A.Ya. Khinchin, Continued Fractions, 3rd ed., Dover Publications, New York, 1997.
[KK] C. Kim and D.H. Kim, On the law of logarithm of the recurrence time, Discrete and Continuous Dynamical Systems 10 (2004), 581–587.
[KS] D.H. Kim and B.K. Seo, The waiting time for irrational rotations, Nonlinearity 16 (2003), 1861–1868.
[KLe] Y.-O. Kim and J. Lee, On the Gibbs measures of commuting one-sided subshifts of finite type, Osaka Journal of Mathematics 37 (2000), 175–183.
[Kin] J. King, Three problems in search of a measure, American Mathematical Monthly 101 (1994), 609–628.
[King] J.F.C. Kingman, The ergodic theory of subadditive stochastic processes, Journal of the Royal Statistical Society Series B 30 (1968), 499–510.
[Kit] B. Kitchens, Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts, Springer-Verlag, Berlin, 1998.
[Kn] D. Knuth, The Art of Computer Programming, Vol. 2, 3rd ed., Addison-Wesley, Reading, 1997.
[Ko] I. Kontoyiannis, Asymptotic recurrence and waiting times for stationary processes, Journal of Theoretical Probability 11 (1998), 795–811.
[Kra] L. Kraft, A Device for Quantizing, Grouping and Coding Amplitude Modulated Pulses, Master's Thesis, Massachusetts Institute of Technology, 1949.
[Kre] U. Krengel, Ergodic Theorems, Walter de Gruyter, Berlin, 1985.
[Krey] E. Kreyszig, Introductory Functional Analysis with Applications, John Wiley & Sons, New York, 1978.
[KuN] L. Kuipers and H. Niederreiter, Uniform Distribution of Sequences, John Wiley & Sons, New York, 1974.
[Kuc] R. Kuc, The Digital Information Age, PWS Publishing, Pacific Grove, 1999.
[La] S. Lang, Introduction to Diophantine Approximations, Addison-Wesley, Reading, 1966.
[Lan] G. Langdon, An introduction to arithmetic coding, IBM Journal of Research and Development 28 (1984), 135–149.
[LY] A. Lasota and J. Yorke, On the existence of invariant measures for piecewise monotone transformations, Transactions of the American Mathematical Society 186 (1973), 481–488.
[Lay] D.C. Lay, Linear Algebra and Its Applications, 2nd ed., Addison-Wesley, Reading, 1996.

[Le] P. L'Ecuyer, Efficient and portable combined random number generators, Communications of the Association for Computing Machinery 31 (1988), 742–749.
[LZ] A. Lempel and J. Ziv, Compression of individual sequences via variable rate coding, IEEE Transactions on Information Theory 24 (1978), 530–536.
[LV] P. Liardet and D. Volný, Sums of continuous and differentiable functions in dynamical systems, Israel Journal of Mathematics 98 (1997), 29–60.
[LM] D. Lind and B. Marcus, An Introduction to Symbolic Dynamics, Cambridge University Press, Cambridge, 1995.
[Liv] C. Liverani, Central limit theorem for deterministic systems, International Conference on Dynamical Systems, edited by F. Ledrappier, J. Lewowicz and S. Newhouse, 56–75, Pitman Research Notes in Mathematics Series 362, Longman, Harlow, 1996.
[LSV] C. Liverani, B. Saussol and S. Vaienti, A probabilistic approach to intermittency, Ergodic Theory and Dynamical Systems 19 (1999), 671–685.
[Lo] G. Lohöfer, On the distribution of recurrence times in nonlinear systems, Letters in Mathematical Physics 16 (1988), 139–143.
[LoM] G. Lohöfer and D.H. Mayer, On the gap problem for the sequence ||mβ||, Letters in Mathematical Physics 16 (1988), 145–149.
[Luc] R.W. Lucky, Silicon Dreams, St. Martin's Press, New York, 1991.
[LuV] S. Luzzatto and M. Viana, Parameter exclusions in Hénon-like systems, Russian Mathematical Surveys 58 (2003), 1053–1092.
[Ly] S. Lynch, Dynamical Systems with Applications Using Maple, Birkhäuser, Boston, 2001.
[Mack] G. Mackey, Von Neumann and the early days of ergodic theory, The Legacy of John von Neumann, Proceedings of Symposia in Pure Mathematics 50, 25–38, American Mathematical Society, Providence, 1990.
[Man] R. Mañé, Ergodic Theory and Differentiable Dynamics, Springer-Verlag, Berlin, 1987.
[MMY] S. Marmi, P. Moussa and J.-C. Yoccoz, On the cohomological equation for interval exchange maps, Comptes Rendus Mathématique Académie des Sciences Paris 336 (2003), 941–948.
[Mar] G. Marsaglia, The mathematics of random number generators, The Unreasonable Effectiveness of Number Theory, Proceedings of Symposia in Applied Mathematics 46, 73–90, American Mathematical Society, Providence, 1992.
[MN] M. Matsumoto and T. Nishimura, Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator, ACM Transactions on Modeling and Computer Simulation 8 (1998), 3–30.
[Mau] U. Maurer, A universal statistical test for random bit generators, Journal of Cryptology 5 (1992), 89–105.
[Mc1] B. McMillan, The basic theorems of information theory, Annals of Mathematical Statistics 24 (1953), 196–219.
[Mc2] B. McMillan, Two inequalities implied by unique decipherability, IRE Transactions on Information Theory 2 (1956), 115–116.
[MS] W. de Melo and S. van Strien, One-Dimensional Dynamics, Springer-Verlag, Berlin, 1993.

[Met] N. Metropolis, The beginning of the Monte Carlo method, From Cardinals to Chaos, edited by N.G. Cooper, 125–130, Cambridge University Press, Cambridge, 1989.
[Mu] J.R. Munkres, Topology: A First Course, Prentice-Hall, Englewood Cliffs, 1975.
[NIT] H. Nakada, Sh. Ito and S. Tanaka, On the invariant measure for the transformations associated with some real continued fractions, Keio Engineering Report 30 (1977), 159–175.
[Na] H. Nakada, Metrical theory for a class of continued fraction transformations and their natural extensions, Tokyo Journal of Mathematics 4 (1981), 399–426.
[ND] B. Noble and J.W. Daniel, Applied Linear Algebra, 3rd ed., Prentice-Hall, Englewood Cliffs, 1988.
[Ni] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, Society for Industrial and Applied Mathematics, Philadelphia, 1992.
[Os] V.I. Oseledec, A multiplicative ergodic theorem: Lyapunov characteristic numbers for dynamical systems, Transactions of the Moscow Mathematical Society 19 (1968), 197–231.
[Ot] E. Ott, Chaos in Dynamical Systems, 2nd ed., Cambridge University Press, Cambridge, 2002.
[OW] D. Ornstein and B. Weiss, Entropy and data compression schemes, IEEE Transactions on Information Theory 39 (1993), 78–83.
[Pkoh] K. Koh Park, On directional entropy functions, Israel Journal of Mathematics 113 (1999), 243–267.
[PM] S. Park and K. Miller, Random number generators: good ones are hard to find, Communications of the Association for Computing Machinery 31 (1988), 1192–1201.
[PC] T.S. Parker and L.O. Chua, Practical Numerical Algorithms for Chaotic Systems, Springer-Verlag, New York, 1989.
[Pa] W. Parry, On the β-expansions of real numbers, Acta Mathematica Academiae Scientiarum Hungaricae 11 (1960), 401–416.
[Pas] R. Pasco, Source Coding Algorithms for Fast Data Compression, Ph.D. Thesis, Stanford University, 1976.
[Pem] R. Pemantle, Randomization time for the overhand shuffle, Journal of Theoretical Probability 2 (1989), 37–49.
[Pet] K. Petersen, Ergodic Theory, Cambridge University Press, Cambridge, 1983.
[Pson] I. Peterson, The Jungle of Randomness: A Mathematical Safari, John Wiley & Sons, New York, 1998.
[Pia] G. Pianigiani, First return map and invariant measures, Israel Journal of Mathematics 35 (1979), 32–48.
[Pit] B. Pitskel, Poisson limit law for Markov chains, Ergodic Theory and Dynamical Systems 11 (1991), 501–513.
[Po] M. Pollicott, Lectures on Ergodic Theory and Pesin Theory on Compact Manifolds, Cambridge University Press, Cambridge, 1993.
[PoY] M. Pollicott and M. Yuri, Dynamical Systems and Ergodic Theory, Cambridge University Press, Cambridge, 1998.
[PTV] W. Press, S. Teukolsky, W. Vetterling and B. Flannery, Numerical Recipes in C, 2nd ed., Cambridge University Press, Cambridge, 1992.

[Re] A. Rényi, Representations for real numbers and their ergodic properties, Acta Mathematica Academiae Scientiarum Hungaricae 8 (1957), 477–493.
[Rie1] G.J. Rieger, Ein Gauss-Kusmin-Lévy-Satz für Kettenbrüche nach nächsten Ganzen, Manuscripta Mathematica 24 (1978), 437–448.
[Rie2] G.J. Rieger, Mischung und Ergodizität bei Kettenbrüchen nach nächsten Ganzen, Journal für die Reine und Angewandte Mathematik 310 (1979), 171–181.
[Ris] J. Rissanen, Generalized Kraft inequality and arithmetic coding, IBM Journal of Research and Development 20 (1976), 198–203.
[RS] A.M. Rockett and P. Szüsz, Continued Fractions, World Scientific, Singapore, 1992.
[Rom] S. Roman, Introduction to Coding and Information Theory, Springer-Verlag, New York, 1996.
[R-E] J. Rousseau-Egele, Un théorème de la limite locale pour une classe de transformations dilatantes et monotones par morceaux, Annals of Probability 11 (1983), 772–788.
[Roy] H.L. Royden, Real Analysis, 3rd ed., Macmillan, New York, 1989.
[Rud1] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, New York, 1976.
[Rud2] W. Rudin, Real and Complex Analysis, 3rd ed., McGraw-Hill, New York, 1986.
[Ru1] D. Ruelle, Ergodic theory of differentiable dynamical systems, Publications Mathématiques de l'Institut des Hautes Études Scientifiques 50 (1979), 27–58.
[Ru2] D. Ruelle, Chaotic Evolution and Strange Attractors, Cambridge University Press, Cambridge, 1989.
[Ru3] D. Ruelle, Chance and Chaos, Princeton University Press, Princeton, 1991.
[R-N] C. Ryll-Nardzewski, On the ergodic theorems II. Ergodic theory of continued fractions, Studia Mathematica 12 (1951), 74–79.
[STV] B. Saussol, S. Troubetzkoy and S. Vaienti, Recurrence, dimensions and Lyapunov exponents, Journal of Statistical Physics 106 (2002), 623–634.
[Sch] J. Schmeling, Dimension theory of smooth dynamical systems, Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems, edited by B. Fiedler, 109–129, Springer-Verlag, Berlin, 2001.
[Schw] S. Schwartzman, The Words of Mathematics, Mathematical Association of America, Washington, D.C., 1994.
[Sh1] C. Shannon, A mathematical theory of communication, Bell System Technical Journal 27 (1948), 379–423 and 623–656; reprinted in A Mathematical Theory of Communication by C. Shannon and W. Weaver, University of Illinois Press, 1963.
[Sh2] C. Shannon, Claude Elwood Shannon: Collected Papers, edited by N.J.A. Sloane and A.D. Wyner, IEEE Press, New York, 1993.
[Shi] P.C. Shields, The Ergodic Theory of Discrete Sample Paths, American Mathematical Society, Providence, 1996.
[Sim] M. Simonnet, Measures and Probabilities, Springer-Verlag, New York, 1996.
[Si1] Ya. Sinai, Introduction to Ergodic Theory, Princeton University Press, Princeton, 1976.
[Si2] Ya. Sinai, Topics in Ergodic Theory, Princeton University Press, Princeton, 1994.

[Sl1] N.B. Slater, The distribution of the integers N for which {Nθ} < Φ, Proceedings of the Cambridge Philosophical Society 46 (1950), 525–534.
[Sl2] N.B. Slater, Gaps and steps for the sequence nθ mod 1, Proceedings of the Cambridge Philosophical Society 63 (1967), 1115–1123.
[Spr] J.C. Sprott, Chaos and Time-Series Analysis, Oxford University Press, Oxford, 2003.
[Su] F. Su, Convergence of random walks on the circle generated by an irrational rotation, Transactions of the American Mathematical Society 350 (1998), 3717–3741.
[To] A. Torchinsky, Real Variables, Addison-Wesley, Redwood City, 1988.
[Tu] W. Tucker, Computing accurate Poincaré maps, Physica D 171 (2002), 127–137.
[Ve] W.A. Veech, Strict ergodicity in zero dimensional dynamical systems and the Kronecker–Weyl theorem mod 2, Transactions of the American Mathematical Society 140 (1969), 1–33.
[Vo] D. Volný, On limit theorems and category for dynamical systems, Yokohama Mathematical Journal 38 (1990), 29–35.
[vN] J. von Neumann, Proof of the quasi-ergodic hypothesis, Proceedings of the National Academy of Sciences USA 18 (1932), 70–82.
[Wa1] P. Walters, An Introduction to Ergodic Theory, 2nd ed., Springer-Verlag, New York, 1982.
[Wa2] P. Walters, A dynamical proof of the multiplicative ergodic theorem, Transactions of the American Mathematical Society 335 (1993), 245–257.
[We] R.B. Wells, Applied Coding and Information Theory for Engineers, Prentice-Hall, Upper Saddle River, 1999.
[WZ] A.D. Wyner and J. Ziv, Some asymptotic properties of the entropy of a stationary ergodic data source with applications to data compression, IEEE Transactions on Information Theory 35 (1989), 1250–1258.
[W1] A.J. Wyner, Strong Matching Theorems and Applications to Data Compression and Statistics, Ph.D. Thesis, Stanford University, 1993.
[W2] A.J. Wyner, More on recurrence and waiting times, Annals of Applied Probability 9 (1999), 780–796.
[Yoc] J.-C. Yoccoz, Il n'y a pas de contre-exemple de Denjoy analytique, Comptes Rendus des Séances de l'Académie des Sciences, Série I, Mathématique 298 (1984), 141–144.
[Y1] L.-S. Young, Ergodic theory of differentiable dynamical systems, Real and Complex Dynamical Systems, edited by B. Branner and P. Hjorth, NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences 464, 293–336, Kluwer Academic Publishers, Dordrecht, 1995.
[Y2] L.-S. Young, Developments in chaotic dynamics, Notices of the American Mathematical Society 45 (1998), 1318–1328.
[Y3] L.-S. Young, Recurrence times and rates of mixing, Israel Journal of Mathematics 110 (1999), 153–188.
[Yu] A.A. Yushkevich, On limit theorems connected with the concept of entropy of Markov chains, Uspehi Matematiceskih Nauk 8 (1953), 177–180.

Index

σ-algebra, 4 abelian group, 14 absolutely continuous measure, 6, 28, 103, 395 additive coboundary, 140, 142 adjacency matrix, 378, 379 almost every point, 6 almost everywhere, 6 alphabet, 252, 378, 417, 418, 420–422 aperiodic matrix, 12, 168, 372, 375 arithmetic coding, 417, 428 Arnold cat mapping, 55, 79 Arnold tongue, 195, 205 Arnold, V.I., 55 Asymptotic Equipartition Property, 251, 252, 364, 422, 427, 428, 436 attractor, 105, 109 automorphism, 15 baker’s transformation, 53, 66, 70, 78, 173 basin of attraction, 105, 109, 131, 349 Bernoulli measure, 6, 59, 69, 81, 100, 170, 178, 223, 224, 229, 265, 402 Bernoulli shift, 59, 61, 66, 70, 81, 222, 242, 249, 252, 253, 261, 265, 272, 366, 367, 369, 370, 379, 381–383, 385, 392, 429, 433, 436 Bernoulli transformation, 69, 172, 272, 276, 288, 297 beta transformation, 50, 69, 76, 144, 179, 200, 221, 231, 244, 260, 272, 287, 397

bijective, 1–4, 61, 62, 218, 240, 310 bin, 43, 115, 209 binary expansion, 6, 60, 62, 66, 99, 100, 141, 170, 171, 223, 429 binary sequence, 5, 28, 60, 81, 84, 170, 238, 247, 364, 381, 392, 399, 420, 425, 427, 429, 436 binomial coeﬃcient, 169, 262, 368 binomial distribution, 262 Birkhoﬀ Ergodic Theorem, 81, 83, 89, 93, 99–101, 104, 105, 118, 145, 164, 170, 175, 177, 212, 216, 223, 251, 252, 260, 270, 288, 293, 399, 420, 428 Birkhoﬀ, G.D., 85 bisection method, 428 bit, 252 block, 252, 260 Borel σ-algebra, 5, 391 Borel measurable, 5 Borel measure, 5, 61 Borel Normal Number Theorem, 99 Borel–Cantelli Lemma, 10, 287 Boshernitzan, M.D., 393 bounded linear map, 9 bounded variation, 11, 139 Bowen, R., 106, 312 Breiman, L., 250 Brouwer Fixed Point Theorem, 4, 12 Cantor function, 193, 202 Cantor measure, 202 Cantor set, 3, 107, 109, 193, 202 Cantor–Lebesgue function, 193


cardinality, 2 Carleson, L., 250 Cauchy sequence, 3 Cauchy–Schwarz inequality, 9, 11 cdf, 24, 43, 77, 192, 202, 210, 263, 268 Central Limit Theorem, 25, 139, 148, 151, 253 Ces` aro sum, 2, 134 change of variables, 23 character, 16, 212 characteristic function, 112 characteristic function of a subset, 2 Chebyshev inequality, 9, 24 Chebyshev polynomial, 49, 73 Chebyshev, P.L., 74 Chung, K.L., 250 closed set, 3 closure, 3, 87, 156, 197 coboundary, 140, 142, 215 cobounding function, 215, 227, 229 cobweb plot, 95 code, 252 codeword, 252, 253, 418 coding map, 66, 67, 241, 246, 247, 258, 418 compact, 3 complete measure, 5 complete metric space, 3 compression rate, 419 compression ratio, 419, 429, 434 conditional measure, 6, 85, 161, 176 continued fraction, 20, 41, 93, 101, 113, 121, 222, 280, 289, 295, 413 backward, 123 Hurwitz, A., 57 Nakada, H., 57 Nakada, H., Ito, Sh., Tanaka, S., 58 Rieger, G.J., 57 continuous function, 3 continuous measure, 6, 94 convergence in a metric space, 3 convergence in measure, 8, 24, 251, 265 convergence in probability, 22 convergent, 20, 286, 289, 298, 413 convex combination, 8 convex function, 8 convolution, 290 correlation, 143–145, 152, 153 countable, 2

counting measure, 5 cumulative density function, 24, 43, 77, 190, 192, 202, 210, 263, 268 cumulative probability, 24, 267, 436 cylinder set, 5, 59, 83, 94, 100, 242, 256, 270, 363, 369, 420 cylindrical coordinate system, 126 data compression, 253, 417 decay of correlation, 143 decimal expansion, 287 decoding, 418 Denjoy Theorem, 192 density function, 6 density zero, 135 Diaconis, P., 290 diameter, 391 dictionary, 426, 427, 435 diﬀeomorphism, 310 diﬀerentiable manifold, 310 dimension of a measure, 400, 402 Dirac measure, 5, 105, 188 discrete measure, 6 divergence speed, 278, 321, 322 double precision, 125, 205 dual group, 16, 214 dyadic rational, 428 edge, 378 ellipsoid, 301 encoding, 418 endomorphism, 14, 19, 53, 87, 137 entropy, 176, 239, 241–244, 271, 292, 320, 364 entropy of a partition, 238 equivalence relation, 60 equivalent metrics, 3 equivalent norms, 3, 169, 378 ergodic component, 56, 85, 218, 219 ergodic transformation, 85–87, 89, 94, 100, 156, 181 error function, 43 Euler constant, 31, 369 Euler theorem, 28 event, 22, 59 eventually expansive, 155 expectation, 22 exponential distribution, 23 extension, 62, 172, 179

factor, 62, 240 factor map, 62, 172, 240, 241 Fatou's lemma, 7, 92, 368 fax code, 417 Feigenbaum, M., 108 first return time, 160, 233, 363, 364, 369–371, 394, 400 first return transformation, 160 fixed point, 4, 194, 296, 333, 345 floating point arithmetic, 29, 31, 122, 125–128, 199, 205, 263, 278 flowchart, 70 Fourier coefficient, 16–18 Fourier series, 16, 228, 235 Fourier transform, 16 Fubini's theorem, 10, 213, 214 Gauss transformation, 51, 112, 122, 148, 156, 274, 280, 289, 292–296, 397 Gauss, C.F., 51 Gaussian elimination, 19, 88 Gelfand's problem, 99 general linear group, 303 generalized Gauss transformation, 282, 397 Gibbs' inequality, 421 golden ratio, 50 graph of a topological shift, 378 group, 14 Haar measure, 15, 16, 53, 87, 137, 212, 213 hardware floating point arithmetic, 125–128, 199, 205 Hausdorff dimension, 109, 380, 392 Hausdorff measure, 392 Hénon attractor, 109, 130, 131, 312, 337, 345, 348, 349, 402, 411 Hénon mapping, 108, 312, 317, 336, 345, 348, 349, 353, 402 Hilbert space, 11, 74 histogram, 60, 98, 116, 170, 179 Hölder continuity, 319 Hölder inequality, 9 homeomorphism, 3, 4, 192 homomorphism, 14 Huffman coding, 423 Huffman tree, 423 hyperbolic point, 333, 355


independent random variables, 22, 24, 142 independent subsets, 10 indicator function, 2, 112, 229 inﬁmum, 2 information theory, 237, 420 inner product, 11 integrable function, 7 integral matrix, 19 interval exchange transformation, 292 invariant measure, 47–52, 54–56, 59, 120, 123 inversive congruential generator, 27 Ionescu Tulcea, A., 250 irrational translation (mod 1), 48, 86, 115, 138, 142, 151, 153, 176, 198, 221, 229, 239, 272, 393, 409, 414 irreducible matrix, 12–14, 167, 375, 378 irreducible polynomial, 89 isomorphic measure spaces, 61, 62, 240 isomorphic transformations, 62–64, 66, 68–70, 75, 170, 171, 218, 241, 242 isomorphism, 14, 61 Jensen’s inequality, 9, 366, 368, 380 join of partitions, 239 Jordan canonical form, 168, 300, 303 Kac’s lemma, 163, 176, 177, 223, 233, 364, 365, 376, 379, 383, 394, 396 Khinchin constant, 102, 121, 123 Khinchin, A., 24 Kolmogorov, A.N., 237 Kolmogorov–Sinai entropy, 239 Kraft’s inequality, 418 Kronecker–Weyl Theorem, 97 Lagrange multiplier method, 238, 242 lambda transformation, 55, 56, 68 Law of Large Numbers, 24, 99, 133, 252 Lebesgue Dominated Convergence Theorem, 7, 94, 166, 374 Lebesgue integral, 7 Lebesgue measure, 5, 15, 48, 49, 52–56, 61, 63, 68, 69, 76, 86, 87, 104, 105, 109, 120, 172, 180, 195 Lempel–Ziv coding, 364, 425 lexicographic ordering, 428 limsup of a sequence of sets, 2


linearized Gauss transformation, 158, 173, 274 linearized logarithmic transformation, 173, 275 locally compact, 15 logistic transformation, 49, 71, 73, 116, 147, 175, 221, 241, 254, 272, 275, 397, 409 loop, 37, 70, 202, 378, 415 Lyapunov exponent, 104, 120, 170, 270–276, 278, 280, 284, 286, 292, 305, 307, 311, 312, 314, 316, 319–322, 325, 328, 396, 409 Markov measure, 59, 69, 82, 171, 399 Markov shift, 167, 168, 242, 253, 255, 258, 371, 375, 376, 388 Maximal Ergodic Theorem, 89 McMillan, B., 250 mean, 22, 139 Mean Ergodic Theorem, 96 measurable function, 5 measurable mapping, 47 measurable partition, 6 measurable space, 5 measurable subset, 4 measure, 4 measure preserving transformation, 47 Mercer’s theorem, 17 Mersenne prime, 28 metric space, 2, 61, 377 minimal homeomorphism, 188, 190 Minkowski’s inequality, 9 mixing, 133–139, 143, 144, 153, 168, 243, 253, 371 mod 2 normal number, 212 modem, 417 modulo measure zero sets, 7 Monotone Convergence Theorem, 7 Monte Carlo method, 26, 45, 46 multiplication by 2 (mod 1), 48, 68, 87, 141, 144, 174, 221, 241, 272, 396 Multiplicative Ergodic Theorem, 305, 307, 334 mutually singular measures, 6, 100 name of a partition, 67 Newton’s method, 52 nonnegative matrix, 11

nonnegative vector, 11 norm, 2, 169, 181, 302, 378 normal distribution, 24, 148, 253, 263, 265, 371 normal number, 99 open set, 3 orientation, 183 Ornstein, D., 242, 364 Ornstein–Weiss formula, 364, 381, 427 orthogonal complement, 11 orthogonal matrix, 300 orthogonal polynomial, 73 orthogonal projection, 17 orthonormal basis, 16 Oseledec, V.I., 302, 305, 307 outer measure, 391 outlier, 367, 410 Parseval’s identity, 17, 146 parsing, 426 partial quotient, 20, 41, 93, 101, 113, 121, 222, 280, 295, 298, 403, 413 partition, 2, 6, 66, 67, 239, 245, 258, 286 pdf, 6, 21, 43, 118, 150, 180, 201, 263 period, 12, 18, 340 periodic point, 108, 340 Perron–Frobenius eigenvalue, 13, 378 Perron–Frobenius eigenvector, 13, 38, 243, 244, 255, 370, 386, 388 Perron–Frobenius Theorem, 13 Pesin, Ya., 319 piecewise linear transformation, 49, 396 Poincar´e Recurrence Theorem, 159, 160 Poincar´e section, 198 pointillism, 54, 55, 67, 78, 106, 114, 127, 162, 164, 180, 195, 207, 226, 231, 234, 246–248, 256 positive matrix, 11 positive vector, 11 preﬁx code, 418 probability density function, 6, 21, 24, 42, 118, 150, 180, 201, 263 probability measure, 5 probability vector, 14 product measure, 10, 213, 241 pseudorandom number, 25 quasirandom points, 25

random number, 25, 170, 363 random variable, 21 random walk, 222, 232, 233 recurrence error, 393 Riemann–Lebesgue lemma, 17 rotation number, 187, 194 Ruelle, D., 106, 302, 312, 320 saddle point, 333, 334, 353 sample space, 21 seed, 25, 28, 70 semi-conjugacy, 62, 64 Shannon, C., 237, 250 Shannon–McMillan–Breiman Theorem, 248, 258, 260, 286, 367 shift transformation, 59 shuffling, 291, 292 significant digits, 29, 41, 81, 83, 104, 113, 122, 170, 176, 254, 276–281, 296, 324, 326, 396, 403, 413 simple function, 5, 48 Sinai, Ya., 106, 237, 312 Sinai–Ruelle–Bowen measure, 106, 312 singular measure, 339, 398, 399 singular value, 299, 300, 310, 323 skew product transformation, 213, 216, 219, 223, 226, 231, 233, 234 solenoid, 106, 125, 126, 311, 316 source block, 252 Source Coding Theorem, 421 spectral measure, 96 spectral theorem, 96 speed of periodic approximation, 408 stable manifold, 317, 334, 345, 358, 361 staircase function, 193, 204 standard deviation, 22, 140, 253, 262 standard mapping, 110, 313, 318, 339, 341, 355, 357, 358, 361 stochastic matrix, 13, 37, 59, 69, 165 strong mixing, 133 Subadditive Ergodic Theorem, 304 support of a measure, 6, 106, 189, 402 supremum, 2 symbolic dynamics, 377 symmetric difference, 1 symmetric group, 290 symmetry of measures, 190, 200


topological conjugacy, 62, 190, 192, 195, 202, 206, 207, 275 topological entropy, 377, 378, 389 topological group, 15 topological shift space, 377, 389 toral automorphism, 19, 54, 79, 87, 88, 120, 127, 132, 150, 316, 320, 331, 336, 401 torus, 18, 54, 106, 126, 313 total variation, 11 transition probability, 13, 59, 171, 242 tree, 418, 419, 423 truncation error, 104 type of an irrational number, 403, 412 typical block, 252, 366, 379, 422, 427, 436, 437 typical point, 82, 83, 104, 170, 229, 399 typical sequence, 81, 82, 252, 364, 428, 429, 434, 436 typical subset, 251, 364, 427 uncountable, 2, 4 uniform convergence, 4, 147 uniform distribution, 23, 89, 97 uniquely ergodic, 197 unitary operator, 11 unstable manifold, 317, 334, 348, 349, 353, 357, 361 variance, 139, 253, 371 variational inequality, 380 variational principle, 377, 380 Veech, W.A., 222 vertex, 378 von Neumann, J., 96 weak convergence, 106, 110 weak mixing, 133, 134, 138, 139, 153 Weiss, B., 364 winding number, 226 Wyner, A.D., 364 XOR, 27 Yoccoz, J.-C., 192 Young, L.-S., 140 Ziv, J., 364

10 The Lyapunov Exponent: Multidimensional Case

How can we measure the randomness of a differentiable mapping T on a multidimensional space? First, find its linear approximation at a point x, given by the Jacobian matrix DT(x), and observe that a sphere of small radius centered at x is mapped approximately onto an ellipsoid. Applying DT(T^k x), k ≥ 1, repeatedly along the orbit of x, we observe that the image of the given sphere is still approximated by an ellipsoid and that the lengths of its semi-axes grow exponentially. The average exponential growth rates are called the Lyapunov exponents, and they are obtained from the singular values of D(T^n) as n → ∞. The largest Lyapunov exponent equals the speed of divergence of two nearby orbits. For a comprehensive survey consult [Y1].
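The programs in this book are written in Maple; as an illustrative stand-in, the following is a minimal pure-Python sketch (helper names are our own) of the growth-rate idea for the simplest case of a constant Jacobian: the cat map T(x, y) = (2x + y, x + y) (mod 1), whose derivative is the matrix A = (2 1; 1 1) everywhere, so that its largest Lyapunov exponent is the exact value log((3 + √5)/2).

```python
import math

# Constant Jacobian of the cat map T(x, y) = (2x + y, x + y) (mod 1).
A = [[2.0, 1.0], [1.0, 1.0]]

def apply_matrix(M, v):
    """Matrix-vector product in R^2."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def largest_lyapunov(M, v, n):
    """Estimate (1/n) log ||M^n v||, renormalizing v after each step
    so that the iterates do not overflow."""
    total = 0.0
    for _ in range(n):
        v = apply_matrix(M, v)
        norm = math.hypot(v[0], v[1])
        total += math.log(norm)
        v = [v[0]/norm, v[1]/norm]
    return total / n

est = largest_lyapunov(A, [1.0, 0.0], 1000)
exact = math.log((3 + math.sqrt(5)) / 2)   # ≈ 0.9624
print(est, exact)
```

For almost every initial vector v the estimate converges to log((3 + √5)/2); the renormalization at each step is the standard trick for keeping the computation in floating-point range.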

10.1 Singular Values of a Matrix

In this section we give a brief introduction to singular values. In this chapter we need these facts only for square matrices, but we consider general matrices because the argument for rectangular matrices is the same. Let A be an n × m real matrix, and denote its transpose by Aᵀ. The m × m symmetric matrix AᵀA has real eigenvalues, and the corresponding eigenvectors can be chosen to be real and pairwise orthogonal. If λ ∈ R satisfies AᵀAv = λv for some v ∈ Rᵐ, ||v|| = 1, then

||Av||² = (Av, Av) = (AᵀAv, v) = (λv, v) = λ,

and so λ ≥ 0. Let

λ_1 ≤ ··· ≤ λ_m

be the eigenvalues of AᵀA with corresponding eigenvectors v_i, ||v_i|| = 1. Put

σ_i = √λ_i , 1 ≤ i ≤ m.

The σ_i are called the singular values of A. Note that σ_i = ||Av_i|| and σ_1 ≤ ··· ≤ σ_m.


Consider the orthonormal basis {v_1, . . . , v_m} for Rᵐ. For i ≠ j we have

(Av_i, Av_j) = (AᵀAv_i, v_j) = (λ_i v_i, v_j) = λ_i (v_i, v_j) = 0.

Hence Av_1, . . . , Av_m are orthogonal and the rank of A is equal to the number of nonzero singular values. Let u_i = (1/σ_i) Av_i = Av_i / ||Av_i|| for σ_i > 0. If σ_i = 0, then choose u_i, ||u_i|| = 1, arbitrarily so that {u_1, . . . , u_n} is an orthonormal basis for Rⁿ. Then Av_i = σ_i u_i for every i. Define an n × n orthogonal matrix U = (u_1, . . . , u_n) using the u_i as columns, an m × m orthogonal matrix V = (v_1, . . . , v_m) using the v_i as columns, and an n × m matrix in block form

Σ = ( D 0 ; 0 0 ),

where D is the diagonal matrix of the positive singular values. Then A = U Σ Vᵀ, which is called a singular value decomposition. See Maple Program 10.7.1.

Theorem 10.1. (i) If two real n × m matrices A and B are orthogonally equivalent, i.e., there exist orthogonal matrices P and Q such that A = PBQ, then A and B have the same set of singular values.
(ii) Let A be a real square matrix. The product of the singular values of A is equal to |det A|, which is equal to the product of the absolute values of the eigenvalues of A.
(iii) If A is a real symmetric matrix, then the absolute values of the eigenvalues of A are the singular values of A.

Proof. (i) Since AᵀA = (QᵀBᵀPᵀ)PBQ = QᵀBᵀBQ, the two matrices AᵀA and BᵀB have the same set of eigenvalues.
(ii) Let A be an m × m matrix. In the singular value decomposition A = U Σ Vᵀ we have |det U| = |det V| = 1. Hence |det A| = |det Σ| = det Σ = σ_1 ··· σ_m, where σ_1, . . . , σ_m are the singular values of A. Let A = PJP⁻¹ be the Jordan decomposition of A, where the diagonal of the Jordan form J consists of the eigenvalues of A. Then det A = det J, and det A is the product of the eigenvalues.
(iii) Suppose that Av = μv, v ∈ Rᵐ, ||v|| = 1, μ ∈ R. Then AᵀAv = A²v = μ²v, and |μ| is a singular value of A.
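The construction above can be checked numerically. Maple Program 10.7.1 is not reproduced here; as a stand-in, the following pure-Python sketch computes the singular values of a sample 2 × 2 matrix A = (1 2; 3 0) directly from the eigenvalues of AᵀA (via the quadratic characteristic polynomial), and verifies Theorem 10.1(ii): the product of the singular values equals |det A|.

```python
import math

def singular_values_2x2(A):
    """Singular values of a real 2x2 matrix A, computed as the
    square roots of the eigenvalues of the symmetric matrix A^T A."""
    a, b = A[0]
    c, d = A[1]
    p = a*a + c*c          # (A^T A)_{11}
    q = a*b + c*d          # (A^T A)_{12} = (A^T A)_{21}
    r = b*b + d*d          # (A^T A)_{22}
    # Eigenvalues of A^T A from t^2 - (p + r) t + (p r - q^2) = 0.
    mean = (p + r) / 2
    disc = math.sqrt(mean*mean - (p*r - q*q))
    lam1, lam2 = mean - disc, mean + disc
    return math.sqrt(max(lam1, 0.0)), math.sqrt(lam2)  # sigma_1 <= sigma_2

A = [[1.0, 2.0], [3.0, 0.0]]
s1, s2 = singular_values_2x2(A)
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
print(s1 * s2, abs(det))   # both ≈ 6, as Theorem 10.1(ii) predicts
```

Here AᵀA = (10 2; 2 4), with eigenvalues 7 ∓ √13, so σ_1 σ_2 = √((7−√13)(7+√13)) = √36 = 6 = |det A|.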

10.1 Singular Values of a Matrix


Here is a geometrical interpretation of singular values. Let A be an n×m real matrix and let v_i, u_i be given as in the introduction. Then

  S = { Σ_{i=1}^m x_i v_i : Σ_{i=1}^m x_i^2 = 1 }

is the unit sphere in R^m and

  A(S) = { Ax : x ∈ S } = { Σ_{i=1}^m x_i σ_i u_i : Σ_{i=1}^m x_i^2 = 1 } .

Note that A(S) is an ellipsoid with its semi-axes given by the Av_i of length σ_i, 1 ≤ i ≤ r.

Example 10.2. (i) Consider

  A = ( 1 2
        3 0 ) .

The images of two orthogonal eigenvectors of A^T A are also orthogonal. See Fig. 10.1 and Maple Program 10.7.2.


Fig. 10.1. A matrix sends a circle (left) onto an ellipse (right)

(ii) If one of the singular values is zero, then A(S) may include the interior of an ellipsoid. Consider

  A = ( 2  0  0
        0 −1  0 ) .

Then

  A^T A = ( 4 0 0
            0 1 0
            0 0 0 ) .

Here we observe that σ_1 = 0, σ_2 = 1, σ_3 = 2 with v_1 = (0, 0, 1), v_2 = (0, 1, 0), v_3 = (1, 0, 0). Hence A(S) is the two-dimensional region x^2/4 + y^2 ≤ 1.


Remark 10.3. To define singular values we may use A A^T instead of A^T A. For, if A^T A v_i = λ_i v_i, ||v_i|| = 1, σ_i = √λ_i = ||Av_i|| > 0, then we let u_i = (1/σ_i) Av_i. Then

  A^T u_i = (1/σ_i) A^T A v_i = (1/σ_i) λ_i v_i = σ_i v_i .

Hence

  A A^T u_i = σ_i A v_i = σ_i^2 u_i = λ_i u_i .

Thus we obtain the same set of nonzero singular values. When A^T A has 0 as one of its eigenvalues, it is not necessarily so with A A^T. For example, if A = (1 0), then

  A^T A = ( 1 0
            0 0 )

has two eigenvalues 1 and 0, but A A^T = (1) has only one eigenvalue 1. To define singular values of a complex matrix A we use A^H A, where A^H is the complex conjugate of A^T. For more information see [Lay],[ND].

Definition 10.4. Let A : R^m → R^m be a linear map. Then the operator norm of A is defined by

  ||A||_op = sup_{v≠0} ||Av|| / ||v|| = sup_{||v||=1} ||Av|| .

It is easy to check that the operator norm is a norm on the space of all m×m matrices. Note that ||Av|| ≤ ||A||_op ||v|| for every v. If A is invertible, then ||w|| ≤ ||A||_op ||A^{−1} w|| for every w, and ||A^{−1}||_op ≥ (||A||_op)^{−1}.

Theorem 10.5. Suppose that the linear map A in Definition 10.4 is given by an m×m real matrix, again denoted by A. Then ||A||_op is the maximum of the singular values of the matrix A.

Proof. Let λ_1 ≤ ⋯ ≤ λ_m be the eigenvalues of A^T A with corresponding eigenvectors v_i, ||v_i|| = 1, 1 ≤ i ≤ m. Since the v_i are orthonormal, for v = Σ x_i v_i with Σ x_i^2 = 1 we have

  ||Av||^2 = (Av, Av) = (A^T A v, v) = (Σ λ_i x_i v_i, Σ x_i v_i) = Σ λ_i x_i^2 .

Hence sup_{||v||=1} ||Av||^2 = λ_m and ||A||_op = √λ_m. □
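Theorem 10.5 can be checked numerically: maximizing ||Av|| over many sampled unit vectors should reproduce √λ_m. A minimal sketch, again with the matrix of Example 10.2(i) as an arbitrary test case:

```python
import math

# Numerical check of Theorem 10.5: ||A||_op equals the largest singular
# value.  We evaluate ||Av|| over many unit vectors v = (cos t, sin t)
# and compare the maximum with sqrt(lambda_max), where lambda_max is the
# larger eigenvalue of A^T A = ((10, 2), (2, 4)).

A = [[1.0, 2.0], [3.0, 0.0]]

def norm_Av(t):
    v = (math.cos(t), math.sin(t))
    Av = (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])
    return math.hypot(*Av)

op_norm = max(norm_Av(2*math.pi*k/100000) for k in range(100000))

a, b, c = 10.0, 2.0, 4.0
lam_max = (a + c + math.sqrt((a - c)**2 + 4*b*b)) / 2
sigma_max = math.sqrt(lam_max)

print(op_norm, sigma_max)   # the two numbers agree to several digits
```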

10.2 Oseledec’s Multiplicative Ergodic Theorem

We prove the Multiplicative Ergodic Theorem, which was first obtained by V.I. Oseledec [Os] and later generalized by many mathematicians. The presentation given in this section follows the idea due to D. Ruelle [Ru1]. Consult [Kre],[Po],[Y1] for more information. See [BaP],[Wa2] for different proofs.


Let GL(m, R) denote the general linear group, which is the set of all m×m real invertible matrices. Take A ∈ GL(m, R). If an eigenvalue of A is of the form λ = a + bi, b ≠ 0, a, b ∈ R, then λ̄ = a − bi is also an eigenvalue of A. Thus, if M is a k×k Jordan block corresponding to λ, then there is another k×k Jordan block M̄ corresponding to λ̄. We then have the 2k×2k combined block M ⊕ M̄. To find the real Jordan canonical form of A, we replace M ⊕ M̄ by the 2k×2k real matrix obtained by the following rules: each diagonal entry λ = a + bi of M is replaced by

  ( a −b
    b  a )

and each off-diagonal entry 1 of M is replaced by

  ( 1 0
    0 1 ) .

Using the real Jordan canonical form of A, we see that R^m is decomposed into a direct sum R^m = E_1 ⊕ ⋯ ⊕ E_r such that
(i) A E_i = E_i,
(ii) for every v ∈ E_i, v ≠ 0,

  lim_{n→∞} (1/n) log ||A^{±n} v|| = ±L_i

where L_i is the logarithm of the absolute value of the eigenvalue corresponding to E_i, and
(iii) Σ_i L_i dim E_i = log |det A|.

Next, consider a random sequence of matrices A(1), A(2), A(3), … and study the growth rate of ||A(n) ⋯ A(1) v|| for v ∈ R^m. More specifically, we have a measure preserving transformation T on a probability space X and a measurable mapping A : X → GL(m, R). (The measurability condition on A means that the coefficients of A are real-valued measurable functions on X.) In this case, we consider a sequence A(x), A(Tx), A(T^2 x), … for each x ∈ X. Suppose that A satisfies

  ∫ log⁺ ||A|| dµ < ∞   and   ∫ log⁺ ||A^{−1}|| dµ < ∞ ,

where log⁺ a = max{log a, 0}. For n ≥ 1 we write

  A_n(x) = A(T^{n−1} x) ⋯ A(x) .

If T is invertible, for n ≥ 1 we write

  A_{−n}(x) = A(T^{−n} x)^{−1} ⋯ A(T^{−1} x)^{−1} .

Definition 10.6. For x ∈ X and v ∈ R^m, define

  L̄⁺(x, v) = limsup_{n→∞} (1/n) log ||A_n(x) v|| ,
  Ḻ⁺(x, v) = liminf_{n→∞} (1/n) log ||A_n(x) v|| .

If L̄⁺ = Ḻ⁺, then we simply write L⁺. Define L̄⁻, Ḻ⁻ and L⁻ similarly using A_{−n}(x); for example,

  L̄⁻(x, v) = limsup_{n→∞} (1/n) log ||A_{−n}(x) v|| .


The following fact due to J.F.C. Kingman [King] will be used in proving the Multiplicative Ergodic Theorem. We write f⁺ = max{f, 0} for a measurable function f : X → R.

Fact 10.7 (The Subadditive Ergodic Theorem) Let T be a measure preserving transformation on a probability space (X, µ). For n ≥ 1, suppose that φ_n : X → R is a measurable function such that ∫ (φ_1)⁺ dµ < ∞ and

  φ_{ℓ+n}(x) ≤ φ_ℓ(x) + φ_n(T^ℓ x)

almost everywhere for every ℓ, n ≥ 1. Then there exists a measurable function φ* : X → R ∪ {−∞} such that (i) φ* ∘ T = φ*, (ii) ∫ (φ*)⁺ dµ < ∞, and (iii) (1/n) φ_n → φ* almost everywhere as n → ∞.

Note that, if T is ergodic, then φ* is constant. If the inequality in the second condition on φ_n is replaced by equality, we recover the Birkhoff Ergodic Theorem with φ_n = Σ_{i=0}^{n−1} f ∘ T^i for a measurable function f.

Remark 10.8. Let W be a k-dimensional subspace of R^m and let A be an invertible matrix identified with the linear map defined by v ↦ Av, v ∈ R^m. Since A is invertible, AW = {Aw : w ∈ W} is also k-dimensional. After identifying both W and AW with R^k using linear isometries, we regard the restriction A|_W as a linear map from R^k into itself. If {w_1, …, w_k} ⊂ W is an orthonormal set, then |det(A|_W)| is the k-dimensional volume of the parallelepiped spanned by the Aw_i.

Lemma 10.9. Define

  φ_n^{(k)}(x) = log sup_{dim W = k} |det (A_n(x)|_W)| ,

where the supremum is taken over all k-dimensional subspaces W of R^m. Then {φ_n^{(k)}}_{n≥1} is subadditive for k < m, and it is additive for k = m.

Proof. Since A_{ℓ+n}(x) = A_n(T^ℓ x) A_ℓ(x), we have

  |det (A_{ℓ+n}(x)|_W)| = |det (A_n(T^ℓ x)|_{A_ℓ(x)W})| |det (A_ℓ(x)|_W)| ,

and hence

  sup_W |det (A_{ℓ+n}(x)|_W)| ≤ sup_W |det (A_n(T^ℓ x)|_W)| sup_W |det (A_ℓ(x)|_W)| ,

where the supremum is taken over all k-dimensional subspaces W, so that we have subadditivity. If k = m, then W = A_ℓ(x)W = R^m, so that we have additivity. □


Note that if A(x) has singular values σ_1(x) ≤ ⋯ ≤ σ_m(x), then

  sup_{dim W = k} |det (A(x)|_W)| = σ_{m−k+1}(x) × ⋯ × σ_m(x) ≤ [σ_m(x)]^k ,

and hence

  φ_1^{(k)}(x) = log sup_{dim W = k} |det (A(x)|_W)| ≤ k log σ_m(x) .

Theorem 10.5 implies that

  ∫ (φ_1^{(k)})⁺ dµ ≤ k ∫ log⁺ σ_m dµ = k ∫ log⁺ ||A|| dµ < ∞ ,

and hence the Subadditive Ergodic Theorem applies. Similarly for A^{−1}: define φ_n^{(k)} using A_{−n}(x), then use log⁺ (σ_1)^{−1} = log⁺ ||A^{−1}|| to verify the integrability condition.

Theorem 10.10 (The Multiplicative Ergodic Theorem for not necessarily invertible cases). Let T be a measure preserving transformation on a probability space X. (T is not necessarily invertible.) For almost every x ∈ X there exist numbers L_1(x) < ⋯ < L_{r(x)}(x) and a sequence of subspaces

  {0} = V_0(x) ⊊ V_1(x) ⊊ ⋯ ⊊ V_{r(x)}(x) = R^m

such that
(i) L⁺(x, v) = L_i(x) for v ∈ V_i(x) \ V_{i−1}(x),
(ii) A(x) V_i(x) = V_i(Tx),
(iii) r(x), L_i(x) and V_i(x) are measurable,
(iv) r(x), L_i(x) and dim V_i(x) are constant along orbits of T, and
(v) lim_{n→∞} (1/n) log |det A_n(x)| = Σ_{i=1}^{r(x)} L_i(x) (dim V_i(x) − dim V_{i−1}(x)).

It is possible that L_1 = −∞. From (iv) we observe that, if T is ergodic, then r, L_i and dim V_i are constant. The numbers L_i(x) and the dimensions dim V_i(x) − dim V_{i−1}(x), 1 ≤ i ≤ r(x), are called the Lyapunov exponents and the multiplicities, respectively. Note that the following is a disjoint union:

  R^m = [V_{r(x)}(x) \ V_{r(x)−1}(x)] ∪ ⋯ ∪ [V_2(x) \ V_1(x)] ∪ V_1(x) .

We give a sketch of the proof given by Ruelle [Ru1]. Sometimes we suppress the variable x in the notation for matrices and eigenvalues when there is no danger of confusion.
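The setting of Theorem 10.10 can be simulated with a random product of matrices. In the sketch below, T is modeled by i.i.d. coin tosses (a Bernoulli shift) and A(x) is one of two fixed 2×2 matrices; both matrices and all parameters are arbitrary choices for illustration. The largest exponent is estimated from the growth rate (1/n) log ||A_n(x)v||, with the vector renormalized at each step to avoid overflow:

```python
import math, random

# A small illustration of Theorem 10.10: T is a coin-tossing shift and
# A(x) is one of two fixed matrices depending on the current symbol.
# The top Lyapunov exponent is estimated from (1/n) log10 ||A_n(x) v||.
# M0 and M1 are arbitrary choices for the demonstration.

M0 = [[2.0, 1.0], [1.0, 1.0]]
M1 = [[0.0, 1.0], [1.0, 0.0]]   # a norm-preserving coordinate swap

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def top_exponent(n, seed=0):
    rng = random.Random(seed)
    v = [1.0, 0.0]
    total = 0.0
    for _ in range(n):
        M = M0 if rng.random() < 0.5 else M1
        v = matvec(M, v)
        r = math.hypot(*v)
        total += math.log10(r)          # base 10, as in this chapter
        v = [v[0]/r, v[1]/r]            # renormalize
    return total / n

L = top_exponent(200000)
print(L)   # estimate of the largest Lyapunov exponent
```

The renormalization trick is the standard way to evaluate such growth rates: the sum of the step-by-step logarithms telescopes to (1/n) log10 ||A_n(x) v||.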


Proof. Let σ_{n,1}(x) ≤ ⋯ ≤ σ_{n,m}(x) be the singular values of A_n(x). Define φ_n^{(k)} as in Lemma 10.9. The supremum in the definition of φ_n^{(k)}(x) is attained when the subspace W is spanned by the eigenvectors of A_n(x)^T A_n(x) corresponding to the k largest eigenvalues [σ_{n,i}(x)]^2, m−k+1 ≤ i ≤ m. In this case,

  φ_n^{(k)}(x) = Σ_{i=m−k+1}^{m} log σ_{n,i}(x) .

Using the Subadditive Ergodic Theorem we see that (1/n) Σ_{i=m−k+1}^{m} log σ_{n,i}(x) converges to a limit for every k. Hence (1/n) log σ_{n,i}(x) converges to a limit for every i. Note that the limit is constant along orbits of T.

Since A_n^T A_n is symmetric, it has eigenvalues (σ_{n,1})^2 ≤ ⋯ ≤ (σ_{n,m})^2, and there exists an orthogonal matrix C_n such that C_n^{−1} A_n^T A_n C_n = D_n, where D_n = diag((σ_{n,1})^2, …, (σ_{n,m})^2) is a diagonal matrix. Then

  (A_n^T A_n)^{1/2n} = C_n (D_n)^{1/2n} C_n^{−1} .

Now (D_n)^{1/2n} = diag((σ_{n,1})^{1/n}, …, (σ_{n,m})^{1/n}) converges as n → ∞ since (1/n) log σ_{n,i} converges. It can then be shown that (A_n(x)^T A_n(x))^{1/2n} converges. See [Ru1] for the details. Put

  Σ(x) = lim_{n→∞} (A_n(x)^T A_n(x))^{1/2n} .

Note that Σ(x) is symmetric. Let σ_1(x) < ⋯ < σ_{r(x)}(x) be the distinct eigenvalues of Σ(x) with corresponding eigenspaces U_i(x). If n is large, then Σ ≈ C_n (D_n)^{1/2n} C_n^{−1}, and so there exists a sequence 0 = J_0(x) < J_1(x) < ⋯ < J_r(x) = m such that

  log σ_i = lim_{n→∞} (1/n) log σ_{n,j}   for J_{i−1} < j ≤ J_i .

Put L_i(x) = log σ_i(x). Then L_i(Tx) = L_i(x). Since the eigenvalues of Σ(x) are constant along orbits of T, the number of distinct eigenvalues is also constant along orbits. Hence r(x) = r(Tx). This partially proves (iv).

Put V_0 = {0} and V_i(x) = U_1(x) + ⋯ + U_i(x), 1 ≤ i ≤ r(x). To prove (i), take v ∈ V_i(x) \ V_{i−1}(x). Then v has a nonzero orthogonal projection u onto U_j(x) for some j, J_{i−1} < j ≤ J_i, and so

  ||A_n(x) v|| ≈ ||A_n(x) u|| ≈ σ_i(x)^n ||u||

for large n. Hence

  lim_{n→∞} (1/n) log ||A_n(x) v|| = log σ_i(x) = L_i(x) .


For (ii) take v ∈ V_i(x) \ V_{i−1}(x). Then

  L⁺(Tx, A(x)v) = lim_{n→∞} (1/n) log ||A_n(Tx) A(x) v||
                = lim_{n→∞} (1/n) log ||A_{n+1}(x) v||
                = L⁺(x, v) = L_i(x) = L_i(Tx) ,

and A(x)v ∈ V_i(Tx). (In general it is not true that A(x) U_i(x) = U_i(Tx).)

In (iii) all the objects are obtained by taking countable set operations, and they are measurable. Technical details from measure theory are skipped.

For the remaining part of (iv), note that A(x) is invertible. Then (ii) implies that dim V_i(Tx) = dim A(x) V_i(x) = dim V_i(x), and so dim V_i is constant along orbits of T.

To prove (v), note that

  det Σ = lim_{n→∞} ( det (A_n^T A_n) )^{1/2n} = lim_{n→∞} |det A_n|^{1/n} .

Hence

  lim_{n→∞} (1/n) log |det A_n(x)| = Σ_{i=1}^{r(x)} log σ_i(x) dim U_i(x)
                                   = Σ_{i=1}^{r(x)} L_i(x) (dim V_i(x) − dim V_{i−1}(x)) . □

If T is invertible, we can deduce more.

Theorem 10.11 (The Multiplicative Ergodic Theorem for invertible cases). Let T be an invertible measure preserving transformation on a probability space X. Then for almost every x ∈ X there exist numbers L_1(x) < ⋯ < L_{r(x)}(x) and a decomposition of R^m into

  R^m = E_1(x) ⊕ ⋯ ⊕ E_{r(x)}(x)

such that
(i) L⁺(x, v) = −L⁻(x, v) = L_i(x) for 0 ≠ v ∈ E_i(x),
(ii) A(x) E_i(x) = E_i(Tx),
(iii) r(x), L_i(x) and dim E_i(x) are constant along orbits of T.

Note that L_1 > −∞ since −L_1 is the largest Lyapunov exponent for T^{−1}. (See the following proof.) From (iii) we see that, if T is ergodic, then r, L_i and dim E_i are constant.


Proof. As in Theorem 10.10, the V_i(x), i ≥ 0, are invariant spaces for the pair (T, A(x)). Consider (T^{−1}, A(T^{−1}x)^{−1}) in place of (T, A(x)). Then we have

  {0} ⊊ V_{−r(x)}(x) ⊊ ⋯ ⊊ V_{−1}(x) = R^m

and the corresponding Lyapunov exponents

  −L_{r(x)}(x) < ⋯ < −L_1(x) .

Note that A(T^{−1}x)^{−1} V_{−i}(x) = V_{−i}(T^{−1}x), which is equivalent to V_{−i}(Ty) = A(y) V_{−i}(y).

First, we check how the Lyapunov exponents are obtained. Put

  Σ(x) = lim_{n→∞} (A_n(x)^T A_n(x))^{1/2n}

and

  Σ̃(x) = lim_{n→∞} (A_{−n}(x)^T A_{−n}(x))^{1/2n} .

Let σ_{n,1}(x) ≤ ⋯ ≤ σ_{n,m}(x) be the eigenvalues of (A_n(x)^T A_n(x))^{1/2n}, and let σ_1(x) ≤ ⋯ ≤ σ_m(x) be the eigenvalues of Σ(x). Similarly, let σ̃_{n,1}(x) ≤ ⋯ ≤ σ̃_{n,m}(x) be the eigenvalues of (A_{−n}(x)^T A_{−n}(x))^{1/2n} and σ̃_1(x) ≤ ⋯ ≤ σ̃_m(x) the eigenvalues of Σ̃(x). We will show that σ̃_j(x)^{−1} is an eigenvalue of Σ(x) for every j.

Choose ε > 0. Let

  G_n = { x ∈ X : |σ_{n,j}(x)^{−1} − σ_j(x)^{−1}| < ε, 1 ≤ j ≤ m }

and

  G̃_n = { x ∈ X : |σ̃_{n,j}(x) − σ̃_j(x)| < ε, 1 ≤ j ≤ m } .

Note that lim_{n→∞} µ(G_n) = 1 and lim_{n→∞} µ(G̃_n) = 1. Since A_n(T^{−n}x)^{−1} = A_{−n}(x), we have

  A_{−n}(x)^T A_{−n}(x) = (A_n(T^{−n}x)^{−1})^T A_n(T^{−n}x)^{−1} = [A_n(T^{−n}x) A_n(T^{−n}x)^T]^{−1} .

Hence

  σ̃_{n,j}(x) = σ_{n,m−j+1}(T^{−n}x)^{−1} .

Take x ∈ G̃_n ∩ T^n(G_n). Then

  σ̃_j(x) ≈ σ̃_{n,j}(x) = σ_{n,m−j+1}(T^{−n}x)^{−1} ≈ σ_{m−j+1}(T^{−n}x)^{−1} = σ_{m−j+1}(x)^{−1} ,

where ≈ indicates an approximation within an ε bound and the last equality comes from the fact that σ_j is constant along orbits of T. Now we let n → ∞ to obtain |σ̃_j(x) − σ_{m−j+1}(x)^{−1}| < 2ε for almost every x. Since ε is arbitrary, we have σ̃_j(x) = σ_{m−j+1}(x)^{−1} for almost every x. Therefore

  log σ̃_j(x) = log (σ_{m−j+1}(x)^{−1}) = − log σ_{m−j+1}(x) .

Next, we claim that V_{i−1}(x) ∩ V_{−i}(x) = {0}. To see why, let S be the set of the points x where V_{i−1}(x) ∩ V_{−i}(x) ≠ {0}. Note that T^{−1}S ⊂ S since

  V_{i−1}(T^{−1}x) ∩ V_{−i}(T^{−1}x) = A(T^{−1}x)^{−1} (V_{i−1}(x) ∩ V_{−i}(x)) ≠ {0} ,

and that T(S) ⊂ S since

  V_{i−1}(Tx) ∩ V_{−i}(Tx) = A(x) (V_{i−1}(x) ∩ V_{−i}(x)) ≠ {0} .

For δ > 0 and 2 ≤ i ≤ r(x), consider the set S_n ⊂ S of the points x where

  ||A_n(x) u|| ≤ ||u|| e^{n(L_{i−1}(x)+δ)}   and   ||A_{−n}(x) u|| ≤ ||u|| e^{n(−L_i(x)+δ)}

for 0 ≠ u ∈ V_{i−1}(x) ∩ V_{−i}(x). Put w = A_{−n}(x) u. Then w ∈ V_{i−1}(T^{−n}x) ∩ V_{−i}(T^{−n}x) since A(x)^{−1} V_j(x) = V_j(T^{−1}x) for every j. Since A_n(T^{−n}x) w = u, the second inequality becomes

  ||w|| ≤ ||A_n(T^{−n}x) w|| e^{n(−L_i(x)+δ)}

and hence

  ||A_n(T^{−n}x) w|| ≥ ||w|| e^{n(L_i(x)−δ)} .

If y ∈ T^{−n}S_n, then

  ||A_n(y) u|| ≥ ||u|| e^{n(L_i(y)−δ)}

for u ∈ V_{i−1}(y) ∩ V_{−i}(y). Thus L_i(x) − L_{i−1}(x) ≤ 2δ for x ∈ S_n ∩ T^{−n}S_n. Since µ(S_n ∩ T^{−n}S_n) → µ(S) as n → ∞, we have L_i(x) − L_{i−1}(x) ≤ 2δ for almost every x ∈ S. Since δ is arbitrary and L_{i−1} < L_i, we obtain µ(S) = 0.

Now since dim V_{i−1}(x) + dim V_{−i}(x) = m, we have V_{i−1}(x) ⊕ V_{−i}(x) = R^m. Define

  E_i(x) = V_i(x) ∩ V_{−i}(x) ,   1 ≤ i ≤ r(x) .

Then

  R^m = V_{−1}(x) ∩ [V_1(x) ⊕ V_{−2}(x)] ∩ ⋯ ∩ [V_{r(x)−1}(x) ⊕ V_{−r(x)}(x)] ∩ V_{r(x)}(x)
      = E_1(x) ⊕ ⋯ ⊕ E_{r(x)}(x) . □


10.3 The Lyapunov Exponent of a Differentiable Mapping

In this section we consider the case when the matrices A_n in the Multiplicative Ergodic Theorem are given by the Jacobian matrices D(T^n), where T is a differentiable mapping on a compact differentiable manifold.

Let Df = (∂f_i/∂x_j)_{i,j} be the Jacobian matrix of f. Since f(x_0 + h) ≈ f(x_0) + Df(x_0) h for h ≈ 0, the behavior of f near x_0 can be studied using Df(x_0). For example, a volume element at x_0 is magnified by the factor |det Df(x_0)|. To investigate the growth rate along a given direction, consider a sphere S_r(x_0) centered at x_0 of a small radius r > 0. By the theory of singular values there exist m orthogonal vectors v_1, …, v_m, ||v_i|| = r, such that the image of S_r(x_0) under f is approximately an ellipsoid with semi-axes given by Df(x_0)v_1, …, Df(x_0)v_m.

Take an open set U ⊂ R^m. A function f = (f_1, …, f_n) : U → R^n is called a C^k function if every f_i, 1 ≤ i ≤ n, has k continuous partial derivatives. An m-dimensional C^k differentiable manifold is a set X ⊂ R^s, for some s ≥ 1, with a local coordinate system at every x ∈ X satisfying compatibility conditions. More precisely, for x ∈ X there exists an open set U ⊂ R^s containing x and a one-to-one continuous mapping φ_U : U ∩ X → R^m satisfying the following condition: if V is another open set containing x, then φ_V ∘ φ_U^{−1} is C^k on φ_U(U ∩ V ∩ X). The mapping φ_U is called a local coordinate system at x. A nonempty open subset U ⊂ R^m together with φ_U(x) = x is an m-dimensional differentiable manifold. A circle, a torus and a sphere are examples of compact differentiable manifolds.

Let Y be an n-dimensional differentiable manifold. A mapping f : X → Y is said to be C^k if there exist local coordinate systems φ and ψ at every x and f(x), respectively, such that ψ ∘ f ∘ φ^{−1} is C^k as a mapping from an open subset of R^m into R^n. If k = ∞, then f is said to be smooth. From now on we assume that f is C^k for some k ≥ 1. If m = n and f is bijective with a C^k inverse, then f is called a diffeomorphism.

Let T be a diffeomorphism of a compact differentiable manifold X and let µ be a T-invariant ergodic probability measure on X. Take A = DT in Oseledec's Multiplicative Ergodic Theorem. Since [D(T^{−1})(Tx)][DT(x)] = I, we have det[D(T^{−1})(Tx)] det[DT(x)] = 1, and so det A(x) ≠ 0. Using the chain rule repeatedly we have, for n ≥ 1,

  D(T^n)(x) = A(T^{n−1}x) ⋯ A(x) = A_n(x) .

Since D(T^{−1})(Tx) = [DT(x)]^{−1}, we have for n ≥ 1

  D(T^{−n})(x) = D(T^{−1})(T^{−n+1}x) ⋯ D(T^{−1})(T^{−1}x) D(T^{−1})(x)
             = [DT(T^{−n}x)]^{−1} [DT(T^{−n+1}x)]^{−1} ⋯ [DT(T^{−1}x)]^{−1} = A_{−n}(x) .

When we consider an m-dimensional torus, we regard it as a subset of R^m, and do all the computations modulo 1 for the sake of notational convenience.


Theorem 10.12. Let X be a compact differentiable manifold with dim X = m. (To simplify the notation we regard X as a subset of R^m.) Let T : X → X be a diffeomorphism and let µ be a T-invariant ergodic probability measure on X. Then at almost every x ∈ X there exist a decomposition of R^m into R^m = E_1(x) ⊕ ⋯ ⊕ E_r(x) and constants L_1 < ⋯ < L_r such that
(i) the limits

  L_i = lim_{n→∞} (1/n) log ||D(T^n)(x) v|| = − lim_{n→∞} (1/n) log ||D(T^{−n})(x) v||

exist for 0 ≠ v ∈ E_i(x), 1 ≤ i ≤ r, and do not depend on v or x,
(ii) dim E_i(x) is constant, 1 ≤ i ≤ r, and
(iii) DT(x) E_i(x) = E_i(Tx) for almost every x, 1 ≤ i ≤ r.

Proof. Take A(x) = DT(x) and apply Theorem 10.11. □

In the language of the theory of differentiable manifolds we may state the conclusion of Theorem 10.12 as follows: there exist subspaces E_i(x) of the tangent space T_x X of X ('T' for tangent) at almost every x such that

  T_x X = E_1(x) ⊕ ⋯ ⊕ E_r(x) .

For simulations of the Lyapunov exponent see Figs. 10.2, 10.3, 10.4 and Maple Programs 10.7.3, 10.7.4. For the numerical calculations in this chapter we use the logarithmic base 10.

Example 10.13. (Solenoid) Let S^1 = {φ : 0 ≤ φ < 1} and D = {(u, v) ∈ R^2 : u^2 + v^2 ≤ 1}. Note that S^1 is a group under the addition mod 1. Consider the solid torus X = S^1 × D. For 0 < a < 1/2, consider the solenoid mapping T : X → X defined by

  T(φ, u, v) = ( 2φ, au + (1/2) cos(2πφ), av + (1/2) sin(2πφ) ) .

See Ex. 3.21. A small ball in X is elongated mainly in the direction of φ and contracted in the directions of u, v. Hence the Lyapunov exponents are

  L_1 = log a < 0 with dim E_1 = 2 ,   L_2 = log 2 with dim E_2 = 1 .

See Fig. 10.2 and Maple Program 10.7.3.
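A pure-Python sketch of the simulation behind this example, using the tangent-vector method instead of the singular values of D(T^n). The Jacobian of the solenoid mapping is lower triangular with diagonal (2, a, a), so a tangent vector with a nonzero φ-component recovers L_2 = log_10 2; the starting point and iteration count below are arbitrary choices:

```python
import math

# Estimate L2 for the solenoid mapping with a = 0.3 by pushing a tangent
# vector forward under the Jacobian
#
#        (  2                  0  0 )
#   DT = ( -pi sin(2 pi phi)   a  0 )
#        (  pi cos(2 pi phi)   0  a )
#
# and averaging log10 of the growth factor (with renormalization).

a = 0.3
phi, u, v = 0.1234, 0.0, 0.0
w = [1.0, 0.0, 0.0]              # tangent vector with a phi-component
total, n = 0.0, 2000

for _ in range(n):
    s, c = math.sin(2*math.pi*phi), math.cos(2*math.pi*phi)
    w = [2*w[0],
         -math.pi*s*w[0] + a*w[1],
          math.pi*c*w[0] + a*w[2]]
    r = math.sqrt(w[0]**2 + w[1]**2 + w[2]**2)
    total += math.log10(r)
    w = [wi/r for wi in w]       # renormalize
    phi, u, v = (2*phi) % 1.0, a*u + c/2, a*v + s/2

L2 = total / n
print(L2, math.log10(2.0))       # the estimate approaches log10(2)
```

Since det DT = 2a^2, the remaining two exponents sum to log_10(2a^2) − log_10 2 = 2 log_10 a, consistent with L_1 = log_10 a of multiplicity 2.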


Fig. 10.2. The Lyapunov exponents of the solenoid mapping are equal to L1 = log a, L2 = log 2. They are approximated by (log σn )/n where σn are the singular values of D(T n ), 50 ≤ n ≤ 1000

Example 10.14. Consider the Hénon mapping

  T(x, y) = (1 − ax^2 + y, bx) .

Since

  DT = ( ∗ 1
         b 0 ) ,

we have det DT = −b. If |b| < 1, then the area of a small rectangle is strictly decreased under the application of T. Hence a T-invariant bounded subset has Lebesgue measure 0. It is known that, for a set of parameter pairs (a, b) ∈ R^2 of positive Lebesgue measure, the Hénon mapping has a Sinai–Ruelle–Bowen measure µ on its attractor with a positive Lyapunov exponent almost everywhere with respect to µ. For more information consult [ASY],[BeC],[BeY]. For the estimation of the Lyapunov exponents see the left plot in Fig. 10.3, where y = (log σ_n)/n, 50 ≤ n ≤ 1000, is plotted for each singular value σ_n of D(T^n). The curves below and above the x-axis are for L_1 and L_2, respectively. Compare the results with Fig. 10.14.

Fig. 10.3. The Lyapunov exponents are approximated by (log σn )/n where σn are the singular values of D(T n ): the H´enon mapping with a = 1.4, b = 0.3 (left), and the modiﬁed standard mapping with C = 0.5 (right)

For estimation of the largest Lyapunov exponent on the H´enon attractor, see Fig. 10.4, where (T j x0 , (log σ)/200), 1 ≤ j ≤ 100, are plotted. Compare the results with Fig. 10.15.
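The largest exponent can also be estimated without forming D(T^n) explicitly, by pushing a single tangent vector forward with DT (whose first-row entry ∗ is −2ax) and renormalizing at each step; a generic vector has a nonzero E_2-component with probability 1. A sketch, with an arbitrary starting point and burn-in length:

```python
import math

# Estimate the largest Lyapunov exponent (base 10) of the Henon mapping
# with a = 1.4, b = 0.3 via tangent-vector norm growth.  DT at (x, y) is
# ((-2ax, 1), (b, 0)).  Starting point and burn-in are arbitrary.

a, b = 1.4, 0.3
x, y = 0.0, 0.0
for _ in range(1000):            # land on the attractor first
    x, y = 1 - a*x*x + y, b*x

w = [1.0, 0.0]
total, n = 0.0, 20000
for _ in range(n):
    w = [-2*a*x*w[0] + w[1], b*w[0]]   # w <- DT(x, y) w
    r = math.hypot(*w)
    total += math.log10(r)
    w = [w[0]/r, w[1]/r]               # renormalize
    x, y = 1 - a*x*x + y, b*x          # advance the orbit

L_est = total / n
print(L_est)   # positive: sensitive dependence on initial conditions
```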

Fig. 10.4. The largest Lyapunov exponent is approximated by (log σ)/200 where σ is the largest singular value of D(T 200 )(T j x0 ), 1 ≤ j ≤ 100: the H´enon mapping with a = 1.4, b = 0.3 (left), and the standard mapping with C = 0.5 (right)

Example 10.15. Let C be a real constant. Consider the modified standard mapping on the two-dimensional torus X defined by

  T(x, y) = T_C(x, y) = (2x − y + C sin(2πx), x)   (mod 1) .

Note that T is invertible and

  T^{−1}(x, y) = (y, −x + 2y + C sin(2πy))   (mod 1) .

Also observe that T(−x, −y) = −T(x, y) (mod 1). Since

  DT = ( ∗ −1
         1  0 ) ,

we have det DT = 1. Hence T preserves the area. If T has an absolutely continuous invariant measure of the form ρ(x, y) dxdy, ρ ≥ 0, then ρ = 1/λ(A) on A ⊂ X and ρ = 0 on X \ A, where λ(A) is the Lebesgue measure of A. To find invariant measures we apply the Birkhoff Ergodic Theorem. In each of the left and the middle plots in Fig. 10.5, with C = 0.5 and C = −0.5 respectively, 5000 orbit points are plotted with suitably chosen starting points. Since

  T_C(x + 1/2, y + 1/2) = T_{−C}(x, y) + (1/2, 1/2)   (mod 1) ,

an invariant measure of T_{−C} is the translate of an invariant measure of T_C by (1/2, 1/2) modulo 1. Compare the first and the second plots. If U denotes a mapping on X induced by the rotation around the origin by 180 degrees, i.e.,

  U(x, y) = (1 − x, 1 − y) = (−x, −y)   (mod 1) ,

then U^{−1} = U and T ∘ U = U ∘ T.
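The inversion formula can be verified numerically; a quick sketch with C = 0.5 and arbitrary sample points:

```python
import math

# Check that T_C^{-1} really inverts T_C modulo 1 for the modified
# standard mapping.  (The Jacobian DT = ((2 + 2 pi C cos(2 pi x), -1),
# (1, 0)) has determinant 1 at every point, so T preserves area.)

C = 0.5

def T(x, y):
    return ((2*x - y + C*math.sin(2*math.pi*x)) % 1.0, x % 1.0)

def Tinv(x, y):
    return (y % 1.0, (-x + 2*y + C*math.sin(2*math.pi*y)) % 1.0)

def close_mod1(a, b, eps=1e-12):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d) < eps

for k in range(100):
    x, y = (k*0.013) % 1.0, (k*0.029) % 1.0
    u, v = Tinv(*T(x, y))
    assert close_mod1(u, x) and close_mod1(v, y)
print("ok")
```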

Fig. 10.5. Absolutely continuous ergodic invariant measures of the modiﬁed standard mapping with C = 0.5 and C = −0.5 (left and middle), and singular invariant measures with C = 0.5 (right)

Let µ be an ergodic invariant measure for T, and define a new measure ν by ν(E) = µ(U(E)) for E ⊂ X. Then

  ν(T(E)) = µ(U(T(E))) = µ(T(U(E))) = µ(U(E)) = ν(E) .

Hence ν is also T-invariant. To prove the ergodicity of ν, suppose that ν(T(E) △ E) = 0, where △ denotes the symmetric difference. Then

  ν(T(E) △ E) = µ(U(T(E)) △ U(E)) = µ(T(U(E)) △ U(E)) ,

and so the ergodicity of µ implies that µ(U(E)) = 0 or 1. Hence ν(E) = 0 or 1. As a special case, consider an absolutely continuous invariant probability measure µ. Then µ = ν by the uniqueness of the absolutely continuous ergodic invariant measure. See the left plot in Fig. 10.5.

The union of the three almost flat closed curves in the right plot, located between the third and the fourth closed curves circling around the origin counting from the inside, forms the support of a singular continuous ergodic invariant measure, which is not symmetric with respect to U. Observe that T has two fixed points (0, 0) and (1/2, 1/2). If we pick a point other than the fixed point inside the hole in the first plot as a starting point, then the orbit points are attracted to closed curves, which indicates that there exist singular continuous invariant measures. For an experiment with the standard mapping we will consider only absolutely continuous invariant measures.

Since det D(T^n) = 1 for every n ≥ 1, the product of the two singular values σ_{n,1} and σ_{n,2} is equal to 1, and hence

  L_1 = lim_{n→∞} (1/n) log σ_{n,1} = lim_{n→∞} (1/n) log (σ_{n,2})^{−1} = −L_2 .

For the estimation of the Lyapunov exponents see the right plot in Fig. 10.3. The curves below and above the x-axis are for L_1 and L_2, respectively. See also Fig. 10.4.


10.4 Invariant Subspaces

Let T be a transformation as given in Theorem 10.12. Consider the case when r = 2 and the Lyapunov exponents are given by L_1 < L_2. Suppose that a tangent vector v at x ∈ X has a nonzero component in E_2(x), i.e., v = v_1 + v_2, v_1 ∈ E_1(x), v_2 ∈ E_2(x), v_2 ≠ 0. Then

  L_2 = lim_{n→∞} (1/n) log ||D(T^n)(x) v|| .

We use different methods to find E_1(x) and E_2(x). The subspace E_1(x) is obtained as the limit of the subspace spanned by the eigenvectors of A_n(x)^T A_n(x) corresponding to the smaller eigenvalue. But E_2(x) cannot be obtained by the same method: it is not necessarily the limit of the subspace spanned by the eigenvectors v_n of A_n(x)^T A_n(x) corresponding to the larger eigenvalue. What we really obtain is a vector v_n ≈ w ∈ R^2 \ E_1(x), where R^2 is identified with the space of all tangent vectors at x. This property is sufficient to calculate L_2, but, in general, w ∉ E_2(x).

To find E_2(x), we use T^{−1} instead of T. Let E_i(x, T) and E_i(x, T^{−1}) be the invariant subspaces given in the Oseledec Multiplicative Ergodic Theorem corresponding to T and T^{−1}, respectively. Then

  E_1(x, T) = E_2(x, T^{−1})   and   E_2(x, T) = E_1(x, T^{−1}) .

Since E_1(x, T^{−1}) is obtained by the previous method, we can find E_2(x, T).

How can we find invariant subspaces of T^{−1} when T^{−1} is not explicitly given? First, observe that for n ≥ 1

  D(T^{−n})(x) = [ DT(T^{−1}x) ⋯ DT(T^{−n}x) ]^{−1} .

Put B_n(x) = DT(T^{−1}x) ⋯ DT(T^{−n}x). Then B_n(x) = A_n(T^{−n}x) and D(T^{−n})(x) = B_n(x)^{−1}. Note that if an invertible matrix M satisfies Mv = λv, λ ≠ 0, then M^{−1}v = λ^{−1}v. Therefore, to obtain E_1(x, T^{−1}), we find an eigenvector corresponding to the smaller eigenvalue of the matrix

  (B_n(x)^{−1})^T B_n(x)^{−1} = [ B_n(x) B_n(x)^T ]^{−1} ,

which is also an eigenvector of the matrix B_n(x) B_n(x)^T corresponding to the larger eigenvalue. Once we have E_i(x), we obtain E_i(T^j x), j ≥ 1, using DT(x) E_i(x) = E_i(Tx), i = 1, 2. In the following examples we plot the directions of E_1(x), E_2(x) along an orbit of T. See Maple Programs 10.7.5, 10.7.6.


Example 10.16. Consider a toral automorphism defined by

  A = ( 3 2
        1 1 ) ,

which has eigenvalues λ_1 = 2 − √3 < 1, λ_2 = 2 + √3 > 1 with the corresponding eigenvectors v_1 = (1 − √3, 1), v_2 = (1 + √3, 1). In this case, it is easy to find the Lyapunov exponents and the invariant subspaces. Since A^n v_1 = λ_1^n v_1 and A^n v_2 = λ_2^n v_2, we have E_1(x) = {t v_1 : t ∈ R}, E_2(x) = {t v_2 : t ∈ R} and L_1 = log λ_1 < 0, L_2 = log λ_2 > 0. See Fig. 10.6, where the E_i(x), i = 1, 2, are plotted along an orbit marked by small circles.

Fig. 10.6. Directions of E1 (x) (left) and E2 (x) (right) for a toral automorphism
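Example 10.16 also gives a test case for the method of Sect. 10.4: since A_n = A^n here, the eigenvector of A_n^T A_n for the smaller eigenvalue should converge to the direction of v_1 = (1 − √3, 1). A pure-Python sketch:

```python
import math

# Recover the direction of E1 for the toral automorphism A = ((3,2),(1,1))
# as the eigenvector of (A^n)^T A^n for the smaller eigenvalue, and
# compare it with the exact eigenvector v1 = (1 - sqrt(3), 1).

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3.0, 2.0], [1.0, 1.0]]
An = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(12):              # A^12; the direction converges exponentially fast
    An = matmul(An, A)

S = matmul([[An[0][0], An[1][0]], [An[0][1], An[1][1]]], An)   # A_n^T A_n
a, b, c = S[0][0], S[0][1], S[1][1]
lam_min = (a + c - math.sqrt((a - c)**2 + 4*b*b)) / 2
w = (b, lam_min - a)             # eigenvector for the smaller eigenvalue

v1 = (1 - math.sqrt(3.0), 1.0)
cos_angle = abs(w[0]*v1[0] + w[1]*v1[1]) / (math.hypot(*w) * math.hypot(*v1))
print(cos_angle)                 # close to 1: the directions agree
```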

Example 10.17. Consider the solenoid mapping in Ex. 10.13. The subspaces E_1, E_2 are plotted along an orbit on the attractor in Fig. 10.7. The subspace E_2, in the expanding direction, is tangent to the attractor and has dimension 1. The subspace E_1, in the contracting direction, is parallel to the disks and has dimension 2. The φ-axis connects the two disks, which are identified.

Fig. 10.7. Directions of E1 (x) (left) and E2 (x) (right) invariant under the solenoid mapping


Example 10.18. Consider the Hénon mapping with a = 1.4, b = 0.3 given in Ex. 10.14. We draw the directions of E_1(x), E_2(x) at x along an orbit on the attractor as in Ex. 10.17. (Here and in Ex. 10.17 we could plot E_1(x), E_2(x) at points x outside the basin of attraction, but we choose points on the attractors to show that E_2 is tangent to the attractors.) See Figs. 10.8, 10.9. The subspace E_2(x), in the expanding direction, is tangent to the Hénon attractor at each point x on the attractor. At some points the directions of E_1(x) and E_2(x) almost coincide. Small circles mark the orbit points.

Fig. 10.8. Directions of E1 (x) invariant under the H´enon mapping with a = 1.4, b = 0.3

Fig. 10.9. Directions of E2 (x) invariant under the H´enon mapping with a = 1.4, b = 0.3

In Fig. 10.10 a region {(x, y) : −1.286 ≤ x ≤ −1.278, 0.379 ≤ y ≤ 0.383} around the leftmost point on the Hénon attractor is magnified. Small circles in the right plot mark the orbit points. The magnification shows that E_2(x) is tangent to the Hénon attractor at a point x on the attractor. Compare the directions with the tangential directions of the stable and unstable manifolds in Figs. 11.6, 11.7 in Chap. 11, where we can clearly see that E_2 is tangential to the Hénon attractor. The theory of the stable and unstable manifolds may be regarded as an integral version of the invariant subspace theory.

Fig. 10.10. A magniﬁcation of the region around the leftmost point on the H´enon attractor with a = 1.4, b = 0.3 (left). The directions of E2 (x) are indicated (right)

Example 10.19. Consider the modified standard mapping in Ex. 10.15. Take C = 0.5. We draw the directions of E_1(x) and E_2(x) at x along orbits. See Fig. 10.11, where the directions at 1000 orbit points are indicated in each plot.

Fig. 10.11. Directions of E1 (x) (left) and E2 (x) (right) invariant under the action of the modiﬁed standard mapping with C = 0.5

10.5 The Lyapunov Exponent and Entropy

We will discuss the relation between the Lyapunov exponents and entropy in this section. Only a rough sketch of the idea is given. Assume that T is ergodic. First, let

  L_1 < ⋯ < L_s ≤ 0 < L_{s+1} < ⋯ < L_r

be the Lyapunov exponents of T and consider a subspace

  W^u(x) = E_{s+1}(x) ⊕ ⋯ ⊕ E_r(x)

defined by the direct sum of invariant subspaces corresponding to positive Lyapunov exponents. The superscript 'u' denotes the unstable direction, along which two nearby points become separated farther and farther. Put ℓ = dim W^u(x), x ∈ X. Note that ℓ is constant and DT(x) W^u(x) = W^u(Tx).

Take A(x) = DT(x) and let d_i = dim E_i(x) for x ∈ X. Then

  Σ_{L_i>0} d_i L_i = lim_{n→∞} (1/n) Σ_{L_i>0} d_i log σ_{n,i}(x)
                    = lim_{n→∞} (1/n) log |det (A_n(x)|_{W^u(x)})|
                    = lim_{n→∞} (1/n) log ( |det (A(T^{n−1}x)|_{W^u(T^{n−1}x)})| ⋯ |det (A(x)|_{W^u(x)})| )
                    = ∫_X log |det (A(x)|_{W^u(x)})| dµ(x) .

The second and the last equalities are obtained by applying Theorem 10.1 and the Birkhoff Ergodic Theorem, respectively.

Now we need a definition. Let (X, d) be a metric space. A function f : X → R^m is Hölder continuous of exponent α > 0 (or f is C^α) if

  sup_{x≠y} d(x, y)^{−α} ||f(x) − f(y)|| < ∞ .

A Hölder continuous function is necessarily continuous. If X is a compact manifold, then we regard X as a subset of some Euclidean space.

Theorem 10.20 (Pesin). Let X be a compact differentiable manifold with an absolutely continuous probability measure µ. Suppose that (i) T : X → X is a diffeomorphism and its derivative is C^α, α > 0, (ii) µ is T-invariant, and (iii) T is ergodic with respect to µ. Let L_i be the ith Lyapunov exponent and put d_i = dim E_i. Then

  entropy = Σ_{L_i>0} d_i L_i ,

where the same logarithmic base is taken in the definitions of the entropy and the Lyapunov exponents.


Corollary 10.21. Let T be an automorphism of the m-dimensional torus given by an integral matrix A. Let λ_i be the eigenvalues of A. Then, counting multiplicity, we have

  entropy = Σ_{|λ_i|>1} log |λ_i| ,

where the same logarithmic base is taken for the entropy and log |λ_i|.

Proof. Note that DT(x) = A for every x. Let λ be an eigenvalue of A, and let E_{|λ|} be the direct sum of the generalized eigenspaces corresponding to the eigenvalues of modulus |λ|. If 0 ≠ v ∈ E_{|λ|}, then by using the real Jordan canonical form of A we see that (1/n) log ||A^n v|| converges to log |λ|. Hence the positive Lyapunov exponents are given by log |λ_i| with |λ_i| > 1. □

Theorem 10.22 (Ruelle). Let X be a compact differentiable manifold with a probability measure µ. Suppose that (i) T : X → X is a C^1 diffeomorphism, (ii) µ is T-invariant, and (iii) T is ergodic with respect to µ. Let L_i be the ith Lyapunov exponent and put d_i = dim E_i. Then

entropy ≤ ∑_{Li>0} d_i L_i ,

where the same logarithmic base is taken in the definitions of the entropy and the Lyapunov exponents.

Example 10.23. Consider a two-dimensional toral automorphism given by a matrix A. Since the characteristic equation of A is λ² − (tr A)λ + det A = 0 with |det A| = 1, the eigenvalues λmax and λmin, |λmax| ≥ |λmin|, satisfy |λmax λmin| = 1. Hence |λmax| ≥ 1 and |λmin| ≤ 1. Thus

entropy = log |λmax| = the largest Lyapunov exponent .

In Fig. 10.12 we plot y = (1/n) log10 σ_{n,max}, 5 ≤ n ≤ 100, where σ_{n,max} is the largest singular value of A^n. Compare the result with Fig. 10.13.
(i) Take A = [[2, 1], [1, 1]]. It has eigenvalues λmax = (3 + √5)/2, λmin = (3 − √5)/2. The entropy is equal to log10((3 + √5)/2) ≈ 0.418. In this case A^n is symmetric for n ≥ 1, and the eigenvalues and singular values coincide. Hence log |λmax| = (1/n) log σ_{n,max} for n ≥ 1. See the left graph in Fig. 10.12.
(ii) Take A = [[2, 3], [1, 2]]. It has eigenvalues λmax = 2 + √3, λmin = 2 − √3. The entropy is equal to log10(2 + √3) ≈ 0.572. See the right graph in Fig. 10.12.
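The singular-value estimate of Ex. 10.23 can be reproduced in ordinary floating point. A numpy sketch (ours, not the book's Maple) for case (ii):

```python
import numpy as np

# Estimate of the largest Lyapunov exponent of T x = A x (mod 1) via the
# growth rate (1/n) log10 sigma_{n,max} of the largest singular value of A^n,
# as in case (ii) of Ex. 10.23.
A = np.array([[2.0, 3.0], [1.0, 2.0]])

def growth_rate(A, n):
    # largest singular value of A^n, then its averaged log10
    sigma_max = np.linalg.svd(np.linalg.matrix_power(A, n), compute_uv=False)[0]
    return np.log10(sigma_max) / n

for n in (5, 20, 50):
    print(n, growth_rate(A, n))   # tends to log10(2 + sqrt(3)), about 0.572
```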

10.6 The Largest Lyapunov Exponent and the Divergence Speed 0.43

0.59

0.42

0.58

0.41

0.57

0.4 0

n

100

0.56 0

n

321

100

Fig. 10.12. The largest Lyapunov exponent log λmax of T x = Ax is approximated by the largest singular values of An where A is given in Ex. 10.23. The horizontal lines indicate y = log λmax

10.6 The Largest Lyapunov Exponent and the Divergence Speed

Let X be an m-dimensional compact differentiable manifold. (We assume that X ⊂ R^m for simplicity of notation.) Let T : X → X be a diffeomorphism and suppose that the largest Lyapunov exponent of T is positive. Take two neighboring points x, x̃ ∈ X. Let E_r(x) be the subspace in Theorem 10.12 with the largest Lyapunov exponent. The vector x − x̃ has nonzero component in E_r(x) with probability 1. (For, if we let Z(x) ⊂ R^m be the subspace spanned by E_k(x), 1 ≤ k ≤ r − 1, then dim Z(x) < m, and hence the m-dimensional volume of Z(x) is zero.) Then, as we iterate T, the distance ||T^k x − T^k x̃|| increases exponentially. More precisely, for two nearby points x, x̃, and for k sufficiently large but not too large, we have

||T^k x − T^k x̃|| ≈ 10^{k Lmax} ||x − x̃||

where Lmax is the largest Lyapunov exponent.

Definition 10.24. For n ≥ 1, take x, x̃ ∈ X such that ||x̃ − x|| = 10^{−n}. If the direction x − x̃ has nonzero component in E_r(x), then we define the nth divergence speed by

V_n(x, x̃) = min{ j ≥ 1 : ||T^j x − T^j x̃|| ≥ 10^{−1} } .

When there is no danger of confusion we write V_n(x) in place of V_n(x, x̃).

Remark 10.25. Let X be a differentiable manifold. To simplify the notation we assume that X is a bounded subset of R^m. Let T : X → X be a diffeomorphism and µ a T-invariant ergodic probability measure on X. Let Lmax be the largest Lyapunov exponent of T. Then for almost every x

n / V_n(x) ≈ Lmax .

To see why, observe that for k sufficiently large but not too large, we have

(1/k) log ||T^k x − T^k x̃|| ≈ Lmax + (1/k) log ||x − x̃||

where the correction term (1/k) log ||x − x̃|| is not negligible if ||x − x̃|| is very small. Take k = V_n(x). Then

−1/V_n(x) ≈ Lmax − n/V_n(x) .

Hence n/V_n ≈ Lmax for sufficiently large n. We approximate the largest Lyapunov exponent by the divergence speed. For simulations at a single point x0 for 5 ≤ n ≤ 100, see Figs. 10.13, 10.14, and for simulations at 500 points with n = 100 see Fig. 10.15. Compare the results with Figs. 10.3, 10.4. Consult Maple Programs 10.7.7, 10.7.8.
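Definition 10.24 and Remark 10.25 can be tested directly. A Python sketch (ours, with our helper names) for the toral automorphism T(x, y) = (3x + 2y, x + y) (mod 1), whose largest exponent is log10(2 + √3):

```python
import math

# The n-th divergence speed V_n of Definition 10.24: perturb x0 by 10^{-n},
# iterate both points, and count iterations until the toral distance exceeds
# 10^{-1}.  Then n / V_n approximates L_max = log10(2 + sqrt(3)).

def T(p):
    x, y = p
    return ((3 * x + 2 * y) % 1.0, (x + y) % 1.0)

def torus_dist(p, q):
    d = [min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(p, q)]
    return math.hypot(*d)

def divergence_speed(x0, n, Tmap=T):
    x, xt = x0, (x0[0] + 10.0 ** (-n), x0[1])
    j = 0
    while True:
        x, xt, j = Tmap(x), Tmap(xt), j + 1
        if torus_dist(x, xt) >= 0.1:
            return j

n = 12
V = divergence_speed((math.pi - 3, math.sqrt(2.1) - 1), n)
print(n / V, math.log10(2 + math.sqrt(3)))   # the two values are close
```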

Fig. 10.13. The divergence speed: y = n/Vn (x0 ) for toral automorphisms given in Ex. 10.23. The horizontal lines indicate the largest Lyapunov exponents

Fig. 10.14. The divergence speed: y = n/Vn(x0) for the Hénon mapping with a = 1.4, b = 0.3 (left) and the standard mapping with C = 0.5 (right)

Fig. 10.15. The divergence speed: plots of (T^j x0, n/Vn(T^j x0)), n = 100, 1 ≤ j ≤ 100, for the Hénon mapping with a = 1.4, b = 0.3 (left) and the standard mapping with C = 0.5 (right)

10.7 Maple Programs

10.7.1 Singular values of a matrix

Find the singular values of a matrix.
> with(linalg):
> A:=matrix([[2.0,1,1],[0,-3,0],[2,1,2]]);
                A := [2.0   1   1]
                     [ 0   -3   0]
                     [ 2    1   2]
> det(A);
                −6.0
Find the singular values of A. Choose one of the following two methods. Use the commands Svd and evalf together in the second method.
> SV_A:=singularvals(A);
                SV_A := [0.5605845521, 2.602624117, 4.112431479]
> SV_A:=evalf(Svd(A));
                SV_A := [4.112431479, 2.602624117, 0.5605845521]
The following is the product of the singular values, which is equal to the absolute value of the determinant.
> SV_A[1]*SV_A[2]*SV_A[3];
                6.000000003
Find the eigenvalues of A.
> EV_A:=[eigenvals(A)[1],eigenvals(A)[2],eigenvals(A)[3]];
                EV_A := [3.414213562, 0.5857864376, −3.]
Calculate the product of the eigenvalues, which is equal to the determinant.
> EV_A[1]*EV_A[2]*EV_A[3];
                −6.000000000
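The same checks can be repeated in floating point; the following numpy sketch (ours, not part of the book's Maple session) verifies the two product identities:

```python
import numpy as np

# The product of the singular values of a matrix equals |det A|, and the
# product of the eigenvalues equals det A.
A = np.array([[2.0, 1.0, 1.0],
              [0.0, -3.0, 0.0],
              [2.0, 1.0, 2.0]])

sv = np.linalg.svd(A, compute_uv=False)   # singular values, descending
ev = np.linalg.eigvals(A)                 # eigenvalues

print(np.prod(sv))        # product of singular values: |det A| = 6
print(np.prod(ev).real)   # product of eigenvalues: det A = -6
print(np.linalg.det(A))
```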

Find the singular values of A^n.
> n:=100:
> Digits:=200:
If n is large then Digits should be large. In general, if the entries of B have k significant digits, then at least 2k significant digits are needed to calculate B^T B and the singular values. If one of the singular values is close to zero, use the idea explained in Subsect. 10.7.3.
> evalf(Svd(evalm(A&^n)),10);
                [0.2315601860 10^54, 0.5047427740 10^48, 0.5048859902 10^44]

10.7.2 A matrix sends a circle onto an ellipse

Show that a 2 × 2 matrix A sends the unit circle onto an ellipse.
> with(linalg):
> with(plots):
> with(plottools):
Choose the coefficients of a 2 × 2 matrix A. If the coefficients are not given as decimal numbers, the eigenvectors are not normalized. Change the entries of the matrix A and observe what happens.
> A:=matrix(2,2,[[1,2.],[3,0]]);
                A := [1   2.]
                     [3   0 ]
> w:=[cos(theta),sin(theta)];
                w := [cos(θ), sin(θ)]
> k:=eigenvectors(transpose(A)&*A);
                k := [3.394448728, 1, {[0.2897841487, −0.9570920265]}],
                     [10.60555128, 1, {[−0.9570920265, −0.2897841487]}]
> v1:=k[1][3][1];
                v1 := [0.2897841487, −0.9570920265]
> v2:=k[2][3][1];
                v2 := [−0.9570920265, −0.2897841487]
The vectors v1 and v2 are orthogonal.
> innerprod(v1,v2);
                0.
> g1:=line([(0,0)],[(v1[1],v1[2])]):
> g2:=line([(0,0)],[(v2[1],v2[2])]):
> g3:=plot([w[1],w[2],theta=0..2*Pi]):
Draw the unit circle.
> display(g1,g2,g3);

Find the image of the unit circle.
> image:=evalm(A&*w);
                image := [cos(θ) + 2. sin(θ), 3 cos(θ)]
> image[1];
                cos(θ) + 2. sin(θ)
> image[2];
                3 cos(θ)
> u1:=evalm(A&*v1);
                u1 := [−1.624399904, 0.8693524461]
> u2:=evalm(A&*v2);
                u2 := [−1.536660324, −2.871276080]
The two vectors u1 and u2 are orthogonal.
> innerprod(u1,u2);
                −0.1 10^{−8}
The above result may be regarded as zero.
> g4:=line([(0,0)],[(u1[1],u1[2])],color=green):
> g5:=line([(0,0)],[(u2[1],u2[2])]):
> g6:=plot([image[1],image[2],theta=0..2*Pi]):
Now we have an ellipse.
> display(g4,g5,g6);
See Fig. 10.1.

10.7.3 The Lyapunov exponents: a local version for the solenoid mapping

Find the Lyapunov exponent of the solenoid mapping at a single point.
> with(linalg):
> with(plots):
> N:=1000:
Here is a general rule for choosing the number of significant decimal digits in computing the Lyapunov exponents: Let Lmin and Lmax denote the smallest and the largest Lyapunov exponents of T. If Lmin < 0 < Lmax, then after N iterations of T the smallest and the largest singular values of the Jacobian matrix of T^N are roughly equal to 10^{N×Lmin} and 10^{N×Lmax}, respectively, and we need at least

N × |Lmin| + N × Lmax

decimal digits. (We have to count the number of digits after the decimal point to estimate the negative Lyapunov exponents. If there are not enough significant digits, then the number 10^{N×Lmin} is treated as zero, and Maple produces an error message when we try to compute the logarithm. The digits before the decimal point are used to calculate the positive Lyapunov exponents, and the digits after the decimal point are used for the negative Lyapunov exponents.) If there is not much information on the Lyapunov exponents before we start the simulation, we choose a sufficiently large D for the number of decimal digits, then gradually increase D to see whether our choice for D is sufficient. If there is a significant change, then we need more digits. If there is no visible change, then we may conclude that we have reasonably accurate estimates for L1, L2. For the example in this subsection,

Lmin = log10 a ≈ −0.5 and Lmax = log10 2 ≈ 0.3 .

Hence we have N × (|Lmin| + Lmax) ≈ 800. We take D = 900 since it is better to allow enough room for truncation errors.
> Digits:=900:
> T:=(phi,u,v)->(frac(2.0*phi),a*u+0.5*cos(2*Pi*phi),a*v+0.5*sin(2*Pi*phi));
Find the Jacobian matrix of T.
> DT:=(phi,u,v)->matrix([[2,0,0],[-Pi*sin(2*Pi*phi),a,0],[Pi*cos(2*Pi*phi),0,a]]);
> a:=0.3:
> seed[0]:=(evalf(Pi-3),0.1*sqrt(2.0),sqrt(3.0)-1):
Throw away transient points.
> for i from 1 to 1000 do seed[0]:=evalf(T(seed[0])): od:
Compute an orbit.
> for n from 1 to N do seed[n]:=evalf(T(seed[n-1])): od:
Take the 3 × 3 identity matrix.
> A:=Matrix(3,3,shape=identity):

Find the Lyapunov exponents L1, L2 and L3.
> for n from 1 to N do
>   A:=evalm(DT(seed[n-1]) &* A):
>   K:=evalf(Svd(A));
>   SV:=sort([K[1],K[2],K[3]]);
>   L1[n]:=evalf(log10(SV[1])/n,10);
>   L2[n]:=evalf(log10(SV[2])/n,10);
>   L3[n]:=evalf(log10(SV[3])/n,10);
> od:
Now we do not need many significant digits.
> Digits:=10:
The following three numbers should be approximately equal.
> log10(a); L1[N]; L2[N];
                −0.5228787453
                −0.5231792559
                −0.5228787453
The following two numbers should be approximately equal.
> log10(2.); L3[N];
                0.3010299957
                0.3013305063
> g0:=plot(log10(a),x=0..N,labels=["n"," "],color=green):
> g1:=listplot([seq([n,L1[n]],n=50..N)]):
> display(g1,g0);
See the left plot in Fig. 10.16.
> g2:=listplot([seq([n,L2[n]],n=50..N)]):
> display(g2,g0);
See the middle plot in Fig. 10.16.
> g3:=listplot([seq([n,L3[n]],n=50..N)]):
> g4:=plot(log10(2),x=0..N,labels=["n"," "],color=green):
> display(g3,g4);
See the right plot in Fig. 10.16. See also Figs. 10.2, 10.3.

Fig. 10.16. The Lyapunov exponents of the solenoid mapping are approximated by (log σn)/n where σn are the smallest, the second largest and the largest singular values of D(T^n), 50 ≤ n ≤ 1000 (from left to right)
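The huge Digits settings above are needed because the singular values of D(T^N) span hundreds of orders of magnitude. In ordinary floating point the standard workaround is QR re-orthonormalization at every step (the Benettin algorithm): the exponents are the averaged logs of the diagonal of R. A Python sketch of this alternative (ours, not the book's code) for the same solenoid mapping, whose exponents are log10 2, log10 a, log10 a:

```python
import numpy as np

a = 0.3

def T(p):
    phi, u, v = p
    return ((2.0 * phi) % 1.0,
            a * u + 0.5 * np.cos(2 * np.pi * phi),
            a * v + 0.5 * np.sin(2 * np.pi * phi))

def DT(p):
    phi = p[0]
    return np.array([[2.0, 0.0, 0.0],
                     [-np.pi * np.sin(2 * np.pi * phi), a, 0.0],
                     [np.pi * np.cos(2 * np.pi * phi), 0.0, a]])

p = (np.pi - 3, 0.1 * np.sqrt(2.0), np.sqrt(3.0) - 1)
Q = np.eye(3)
N, acc = 2000, np.zeros(3)
for _ in range(N):
    # re-orthonormalize the Jacobian product before it under/overflows
    Q, R = np.linalg.qr(DT(p) @ Q)
    acc += np.log10(np.abs(np.diag(R)))
    p = T(p)
print(acc / N)   # approximately [log10 2, log10 0.3, log10 0.3]
```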

10.7.4 The Lyapunov exponent: a global version for the Hénon mapping

Find the Lyapunov exponents of the Hénon mapping.
> with(linalg):
> with(plots):
Choose the length of each orbit.
> N:=200:
In the following we will see that L1 ≈ −0.7 and L2 ≈ 0.18, so an optimal value for D is approximately equal to N. See the explanation in Subsect. 10.7.3.
> Digits:=200:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
> seed[0]:=(0,0):
Throw away transient points.
> for i from 1 to 2000 do seed[0]:=T(seed[0]): od:
Choose the number of samples.
> SampleSize:=100:
> for s from 1 to N*SampleSize do
>   seed[s]:=T(seed[s-1]):
> od:
Calculate the Lyapunov exponents using orbits of length N starting from T^{(s−1)N+1}(x0) for each 1 ≤ s ≤ 100.
> for s from 1 to SampleSize do
>   A:=DT(seed[(s-1)*N+1]):
>   for n from 1 to N-1 do
>     A:=evalm(DT(seed[(s-1)*N+1+n]) &* A):
>   od:
>   K:=evalf(Svd(A),200);
>   SV:=sort([K[1],K[2]]):
>   L1[s]:=log10(SV[1])/N:
>   L2[s]:=log10(SV[2])/N:
> od:
> Digits:=10:
Find the average of L1.
> ave_L1:=add(L1[s],s=1..SampleSize)/SampleSize;
                ave_L1 := −0.7043197238
> pointplot3d([seq([seed[s],L1[s]],s=1..SampleSize)]);
See Fig. 10.17. Find the average of L2.

Fig. 10.17. The Lyapunov exponent L1 is approximated by (log σ)/200 where σ is the smaller singular value of D(T^200)(T^j x0), 1 ≤ j ≤ 100: the Hénon mapping with a = 1.4, b = 0.3

> ave_L2:=add(L2[s],s=1..SampleSize)/SampleSize;
                ave_L2 := 0.1814409784
> pointplot3d([seq([seed[s],L2[s]],s=1..SampleSize)]);
See Fig. 10.4. For conventional algorithms consult [ASY],[PC].

10.7.5 The invariant subspace E1 of the Hénon mapping

Draw the subspaces E1(x) invariant under the action of the Hénon mapping along an orbit using the property DT(x)E1(x) = E1(Tx).
> with(linalg):
> with(plots): with(plottools):
> Digits:=1000:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
> seed[0]:=(0,0):
Throw away transient points.
> for i from 1 to 5000 do seed[0]:=T(seed[0]): od:
> N:=200:
> S:=100:
The length of an orbit, which is given by S, should not exceed N.
> for i from 1 to S+N do seed[i]:=T(seed[i-1]): od:
> A:=DT(seed[0]):
> for n from 1 to N do A:=evalm(DT(seed[n]) &* A); od:
> k:=eigenvectors(evalm(transpose(A) &* A)):
Compare the eigenvalues.
> if k[2][1] <= k[1][1] then d:=2: else d:=1: fi:
> v[0]:=k[d][3][1]:
> for i from 1 to S do
>   v[i]:=evalm(DT(seed[i-1]) &* v[i-1]):
> od:
> Digits:=10:
> for i from 1 to S do dir[i]:=(v[i][1],v[i][2])/norm(v[i]): od:
> for i from 1 to S do point1[i]:=seed[i]-0.1*dir[i]: od:
> for i from 1 to S do point2[i]:=seed[i]+0.1*dir[i]: od:
> for i from 1 to S do
>   L[i]:=line([point1[i]],[point2[i]],color=blue):
> od:
> g1:=display(seq(L[i],i=1..S)):
> g2:=pointplot([seq([seed[i]],i=1..S)],symbol=circle):
> display(g1,g2);
See Fig. 10.8.

10.7.6 The invariant subspace E2 of the Hénon mapping

Plot the subspace E2(x) invariant under the action of the Hénon mapping along an orbit using the property DT(x)E2(x) = E2(Tx).
> with(linalg):
> with(plots): with(plottools):
> Digits:=1000:
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
> N:=500:
We start from x_{−N}. Throw away transient points.
> seed[-N]:=(0,0):
> for i from 1 to 5000 do seed[-N]:=T(seed[-N]): od:
> for n from -N+1 to 0 do seed[n]:=T(seed[n-1]): od:
> B:=DT(seed[-N]):
> for n from -N+1 to -1 do
>   B:=evalm(DT(seed[n]) &* B): od:
> k:=eigenvectors(evalm(B &* transpose(B))):
Compare the eigenvalues.
> if k[2][1] >= k[1][1] then d:=2: else d:=1: fi:
> v[0]:=k[d][3][1]:
> S:=200:
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:
> for i from 1 to S do
>   v[i]:=evalm(DT(seed[i-1]) &* v[i-1]): od:

> Digits:=10:
> for i from 0 to S do dir[i]:=(v[i][1],v[i][2])/norm(v[i]): od:
> for i from 0 to S do point1[i]:=seed[i]-0.1*dir[i]: od:
> for i from 0 to S do point2[i]:=seed[i]+0.1*dir[i]: od:
> for i from 0 to S do
>   L[i]:=line([point1[i]],[point2[i]]):
> od:
> g1:=display(seq(L[i],i=0..S)):
> g2:=pointplot([seq([seed[i]],i=0..S)],symbol=circle):
> display(g1,g2);
See Fig. 10.9.

10.7.7 The divergence speed: a local version for a toral automorphism

Plot the graph y = n/V_n(x0), 1 ≤ n ≤ 100, for a toral automorphism T.
> with(plots):
> T:=(x,y)->(frac(3.0*x+2*y),frac(x+y)):
> with(linalg):
Find the associated integral matrix A and its eigenvalues.
> A:=matrix([[3,2],[1,1]]);
> EV:=eigenvals(A);
                EV := 2 + √3, 2 − √3
> max_EV:=max(abs(EV[1]),abs(EV[2])):
> max_Lyapunov:=log[10.](max_EV);
                max_Lyapunov := 0.5719475475
> Digits:=1000:
> Length:=1000:
> seed[0]:=(evalf(Pi-3),sqrt(2.1)-1):
> for i from 1 to Length do seed[i]:=T(seed[i-1]): od:
> N:=100:
Find V_n, 1 ≤ n ≤ N.
> for n from 1 to N do
>   X0:=(seed[0][1]+1/10^n, seed[0][2]):
>   X:=evalf(T(X0)):
>   for j from 1
>   while (X[1]-seed[j][1])^2+(X[2]-seed[j][2])^2 < 1/100
>   do X:=T(X): od:
>   V[n]:=j;
> od:
> Digits:=10:

Draw the graph y = n/V_n, 5 ≤ n ≤ N.
> g1:=listplot([seq([n,n/V[n]],n=5..N)],labels=["n"," "]):
> g2:=plot({max_Lyapunov,0},x=0..N):
> display(g1,g2);
See Fig. 10.14.

10.7.8 The divergence speed: a global version for the Hénon mapping

Plot (x, n/V_n(x)) along x = T^j x0, 1 ≤ j ≤ 500, for the Hénon mapping.
> with(plots):
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x);
> Digits:=1000:
Throw away transient points.
> seed[0]:=(0,0):
> for i from 1 to 10000 do seed[0]:=T(seed[0]): od:
> S:=500:
> L:=1000:
> for i from 1 to S+L do seed[i]:=T(seed[i-1]): od:
> n:=100:
Find V_n.
> for s from 1 to S do
>   X0:=(seed[s][1]+10^(-n),seed[s][2]+10^(-n)):
>   X:=T(X0):
>   for j from 1
>   while (X[1]-seed[s+j][1])^2+(X[2]-seed[s+j][2])^2 < 1/100
>   do X:=T(X):
>   od:
>   V[s]:=j:
> od:
> Digits:=10:
> pointplot3d([seq([seed[s],n/V[s]],s=1..S)],symbol=circle);
See Fig. 10.15.
> evalf(add(n/V[s],s=1..S)/S);
                0.1814595275
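As a floating-point cross-check of the averages obtained above, the QR re-orthonormalization idea of Subsect. 10.7.3's alternative also applies to the Hénon mapping. This sketch (ours, not the book's) additionally verifies that the two exponents sum to log10 |det DT| = log10 b:

```python
import numpy as np

a, b = 1.4, 0.3

def T(p):
    x, y = p
    return (y + 1 - a * x * x, b * x)

def DT(p):
    return np.array([[-2 * a * p[0], 1.0], [b, 0.0]])

p = (0.0, 0.0)
for _ in range(1000):          # discard transient points
    p = T(p)

Q = np.eye(2)
N, acc = 20000, np.zeros(2)
for _ in range(N):
    Q, R = np.linalg.qr(DT(p) @ Q)
    acc += np.log10(np.abs(np.diag(R)))
    p = T(p)

print(acc / N)                       # approximately [0.18, -0.70] in base 10
print(acc.sum() / N, np.log10(b))    # the exponents sum to log10 |det DT|
```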

11 Stable and Unstable Manifolds

We introduce a new method of plotting the invariant subspaces Ei(x) given in the Multiplicative Ergodic Theorem in Chap. 10 when the transformation T under consideration has hyperbolic fixed (or periodic) points. The method uses the stable and unstable manifolds of fixed (or periodic) points, which are integral versions of the subspaces Ei(x) in the sense that the Ei(x) are tangential either to the stable manifold or to the unstable manifold. These manifolds are also invariant under T. For simplicity of the computer experiments we consider only two-dimensional examples such as a toral automorphism, the Hénon mapping and the modified standard mapping. The stable and unstable manifolds are quite useful in understanding the global behavior of T. For example, the stable manifold of a fixed point of the Hénon mapping is part of the boundary of the basin of attraction, and the unstable manifold of a fixed point describes the Hénon attractor.

11.1 Stable and Unstable Manifolds of Fixed Points Let X be a diﬀerentiable manifold. For the sake of notational simplicity, let us assume that X is a subset of R2 with the Euclidean metric d. Let T : X → X be an invertible and continuously diﬀerentiable transformation. Suppose that p ∈ X is a ﬁxed point of T , i.e., T p = p. It is clear that if p is a ﬁxed point of T then it is also a ﬁxed point of T −1 . If none of the absolute values of the eigenvalues of the Jacobian matrix DT (p) is equal to 1, then p is called a hyperbolic ﬁxed point. If DT (p) has two real eigenvalues λ1 , λ2 such that |λ1 | < 1 < |λ2 | , then p is called a saddle point. Since det DT (p) = λ1 λ2 , a hyperbolic ﬁxed point is a saddle point if | det DT (p)| = 1. First, we deﬁne the stable and unstable manifolds of ﬁxed points. The deﬁnitions for periodic points will be given in Sect. 11.4.

Definition 11.1. For a hyperbolic fixed point p of T, we define the stable (invariant) manifold of p by

Ws(p) = Ws(T, p) = {x ∈ X : lim_{n→∞} d(T^n x, p) = 0} ,

and the unstable (invariant) manifold of p by

Wu(p) = Wu(T, p) = {x ∈ X : lim_{n→∞} d(T^{−n} x, p) = 0} .

Note that Wu(T^{−1}, p) = Ws(T, p) and Ws(T^{−1}, p) = Wu(T, p). In the definition of the unstable manifold we cannot require lim_{n→∞} d(T^n x, p) = ∞ since X may be a bounded set. If p is not a hyperbolic fixed point, then in general there exist T-invariant closed curves circling around p. If x ∈ Ws(p), then d(T^n(Tx), p) = d(T^{n+1} x, p) → 0 as n → ∞, and so we have Tx ∈ Ws(p). Hence T(Ws(p)) ⊂ Ws(p). Similarly, T(Wu(p)) ⊂ Wu(p). If p and q are two distinct fixed points, then Ws(p) ∩ Ws(q) = ∅. For, if there were a point x ∈ Ws(p) ∩ Ws(q), then T^n x would converge to both p and q as n → ∞, which is a contradiction. Similarly, if p and q are two distinct fixed points, then Wu(p) ∩ Wu(q) = ∅. It is possible that Ws(p) and Wu(p) intersect at points other than p. Note that D(T^n)(p) = [DT(p)]^n. Let v1 and v2 be eigenvectors of DT(p) corresponding to the eigenvalues λ1 and λ2, |λ1| < 1 < |λ2|, respectively. Take x in a small neighborhood of p and let x = p + c1 v1 + c2 v2 for some c1, c2. If x ∈ Ws(p), then for sufficiently large n > 0 we have

T^n x ≈ T^n p + D(T^n)(p)(c1 v1 + c2 v2) = p + c1 λ1^n v1 + c2 λ2^n v2 ≈ p + c2 λ2^n v2 ,

and hence c2 ≈ 0. Similarly, if x ∈ Wu(p), then for large n > 0,

T^{−n} x ≈ p + c1 λ1^{−n} v1 + c2 λ2^{−n} v2 ≈ p + c1 λ1^{−n} v1 ,

and hence c1 ≈ 0. More precisely, v1 and v2 are tangent to Ws(p) and Wu(p) at p, respectively. See Fig. 11.1 where p is a saddle point. Flow lines near p describe the action of T, i.e., if x is on one of the flow lines then Tx is also on the same flow line. It is known that at x ∈ Ws(p) the subspace E1(x) in the Multiplicative Ergodic Theorem is tangent to Ws(p). Similarly, at x ∈ Wu(p) the subspace

Fig. 11.1. A saddle point p with flow lines

E2(x) is tangent to Wu(p). This idea can be used to sketch E1(x) and E2(x) without using as many significant digits as we did in Chap. 10. In other words, we first draw Ws(p) and Wu(p), and then find the tangential directions. To find the stable manifold of p we proceed as follows: Choose an arc J ⊂ Ws(p) such that (i) J is an intersection of an open ball centered at p and Ws(p), and (ii) T(J) ⊂ J. Then J ⊂ T^{−1}(J) ⊂ T^{−2}(J) ⊂ ··· and

Ws(p) = ⋃_{n=0}^{∞} T^{−n}(J) .

See Fig. 11.2 where J is represented by a thick arc. To prove the equality, take x ∈ T −n (J) for n ≥ 1. Then T n x ∈ J ⊂ Ws (p), and so limm→∞ T m+n x = limm→∞ T m (T n x) = p. Hence x ∈ Ws (p). For the converse, take x ∈ Ws (p). Then T n x ∈ Ws (p). If n is suﬃciently large, then T n x ≈ p, and so T n x ∈ J. Hence x ∈ T −n (J). In simulations, J is a very short line segment tangent to the stable manifold. As T −1 is iterated, the component in the stable direction for T −1 (equivalently, in the unstable direction for T ) shrinks to zero. Thus T −n (J) approximates the stable manifold better and better as n increases.

Fig. 11.2. Generation of the stable manifold of a hyperbolic ﬁxed point p

Next, to find the unstable manifold of p, we choose an arc K ⊂ Wu(p) such that (i) K is an intersection of an open ball centered at p and Wu(p), and (ii) K ⊂ T(K). Then K ⊂ T(K) ⊂ T²(K) ⊂ ··· and

Wu(p) = ⋃_{n=0}^{∞} T^n(K) .

See Fig. 11.3 where K is represented by a thick arc. To prove the equality, note that T^n(K) ⊂ T^n(Wu(p)) ⊂ Wu(p) for n ≥ 0. For the converse, take x ∈ Wu(p). Then T^{−n} x ∈ K for sufficiently large n since lim_{n→∞} T^{−n} x = p. Hence x ∈ T^n(K). In simulations, K is a very short line segment tangent to the unstable manifold. As T is iterated, the component in the stable direction for T shrinks to zero, and so T^n(K) approximates the unstable manifold better and better as n increases.

Fig. 11.3. Generation of the unstable manifold of a hyperbolic ﬁxed point p

Example 11.2. Consider the toral automorphism given by

T(x, y) = (3x + 2y, x + y) (mod 1)

on the torus. The matrix DT = [[3, 2], [1, 1]] has eigenvalues λ1 = 2 − √3, λ2 = 2 + √3 with the corresponding eigenvectors v1 = (1 − √3, 1), v2 = (1 + √3, 1). Since |λ1| < 1 < |λ2|, T has a hyperbolic fixed point p = (0, 0). Note that Ws(p) = {t v1 (mod 1) : −∞ < t < ∞} and Wu(p) = {t v2 (mod 1) : −∞ < t < ∞}. Both sets are dense in the torus since the slopes of the directions are irrational. In Fig. 11.4 parts of Ws(p) and Wu(p) are drawn.

11.2 The Hénon Mapping

Consider the Hénon mapping T(x, y) = (−ax² + y + 1, bx) with a = 1.4, b = 0.3. Then T has two hyperbolic fixed points

p = ( (b − 1 + √((b − 1)² + 4a)) / (2a) , b(b − 1 + √((b − 1)² + 4a)) / (2a) )

Fig. 11.4. The stable manifold (left) and the unstable manifold (right) of a ﬁxed point (0, 0) of a toral automorphism

and

q = ( (b − 1 − √((b − 1)² + 4a)) / (2a) , b(b − 1 − √((b − 1)² + 4a)) / (2a) ) .

See Fig. 11.5 where the two fixed points are marked by small circles. Note that p is on the Hénon attractor and q is not.

Fig. 11.5. The fixed points p and q of the Hénon mapping together with the Hénon attractor

To sketch the stable manifold of p, take a short line segment J passing through p in the stable direction. Using the formula

T^{−1}(u, v) = ( (1/b) v , u + (a/b²) v² − 1 ) ,

find T^{−n}(J) for some sufficiently large n. See Fig. 11.6 for a part of the stable manifold given by T^{−7}(J), and consult Maple Program 11.5.1. As n increases, T^{−n}(J) densely fills the basin of attraction. If a point x0 lies on the intersection of the Hénon attractor and Ws(p), then the tangential direction

to Ws (p) coincides with E1 (x0 ) in the Multiplicative Ergodic Theorem. See Fig. 10.8.
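The fixed-point formulas and the inverse mapping are easy to verify numerically. A small Python sketch (ours; the names T, T_inv, p, q are our choices):

```python
import math

# Checking the fixed-point formulas and the inverse of the Henon mapping
# T(x, y) = (-a x^2 + y + 1, b x) with a = 1.4, b = 0.3.
a, b = 1.4, 0.3

def T(p):
    x, y = p
    return (-a * x * x + y + 1, b * x)

def T_inv(p):
    u, v = p
    return (v / b, u - 1 + a * v * v / (b * b))

r = math.sqrt((b - 1) ** 2 + 4 * a)
p = ((b - 1 + r) / (2 * a), b * (b - 1 + r) / (2 * a))
q = ((b - 1 - r) / (2 * a), b * (b - 1 - r) / (2 * a))

print(p)                      # (0.6313..., 0.1894...), cf. Maple Program 11.5.1
print(T(p))                   # equals p up to roundoff: a fixed point
print(T_inv(T((0.3, 0.1))))   # recovers (0.3, 0.1)
```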

Fig. 11.6. The stable manifold of the fixed point p of the Hénon mapping together with the Hénon attractor

To sketch the unstable manifold of p choose a short line segment K passing through p and find T^n(K) for some sufficiently large n. See Fig. 11.7 for T^8(K), where the fixed point p is marked by a circle, and consult Maple Program 11.5.2. The unstable manifold of p approximates the Hénon attractor.

Fig. 11.7. The unstable manifold of the fixed point p of the Hénon mapping

For the stable and unstable manifolds of the other ﬁxed point q, see Fig. 11.8, and consult Maple Program 11.5.3. The point q lies on the boundary of the basin of attraction, and its stable manifold Ws (q) forms a part of the boundary of the basin. Hence Ws (q) cannot be used to ﬁnd E1 (x) at a point

x on the Hénon attractor. To visualize the fact that q is a saddle point, see Maple Program 11.5.4. Even though the two unstable manifolds Wu(p) and Wu(q) do not intersect, they are so intertwined that they appear to overlap.

Fig. 11.8. The stable and unstable manifolds of the fixed point q of the Hénon mapping: The basin of attraction is represented by the dotted region

11.3 The Standard Mapping

Consider the modified standard mapping

T(x, y) = (2x − y + C sin 2πx, x) (mod 1) ,

which was introduced in Ex. 10.15. Choose C = 0.5. There are two fixed points p = (0, 0) and q = (1/2, 1/2), where p is hyperbolic and q is not. The complex eigenvalues of DT(q) are of modulus 1, and there exist singular continuous invariant measures defined on invariant closed curves around q. See the right plot in Fig. 10.5. The stable and unstable manifolds at the hyperbolic fixed point p are presented in Fig. 11.9. The tangential directions of the stable and the unstable manifolds of p coincide with E1 and E2 in Fig. 10.11. Let us check the symmetry of Ws(p), Wu(p). First, define the rotation by 180 degrees around the origin by F(x, y) = (−x, −y) (mod 1). Then F^{−1}TF = T. Take x ∈ Ws(p). Since F^{−1}T^n F x = T^n x → p as n → ∞, we have T^n(Fx) → Fp = p as n → ∞. Thus Fx ∈ Ws(p), and hence F(Ws(p)) ⊂ Ws(p). Since F^{−1} = F, we have Ws(p) ⊂ F^{−1}(Ws(p)) = F(Ws(p)). Thus F(Ws(p)) = Ws(p), and similarly, F(Wu(p)) = Wu(p). Now define G(x, y) = (y, x), which is the reflection with respect to y = x. Then G = G^{−1}. Since

Fig. 11.9. The stable manifold (left) and the unstable manifold (right) of the ﬁxed point (0, 0) of the modiﬁed standard mapping: Compare the tangential directions with Fig. 10.11

G^{−1}TG(x, y) = G^{−1}T(y, x) = G^{−1}(2y − x + C sin 2πy, y) = (y, 2y − x + C sin 2πy) = T^{−1}(x, y) ,

we have G^{−1}TG = T^{−1}. Take x ∈ Wu(p). Then T^{−n} x → p, that is, G^{−1}T^n G x → p as n → ∞. Thus T^n Gx → Gp = p, so Gx ∈ Ws(p), and hence G(Wu(p)) ⊂ Ws(p). Similarly, G(Ws(p)) ⊂ Wu(p). Since Wu(p) ⊂ G^{−1}(Ws(p)) = G(Ws(p)) and Ws(p) ⊂ G^{−1}(Wu(p)) = G(Wu(p)), we have

Wu(p) ⊂ G(Ws(p)) ⊂ G(G(Wu(p))) ⊂ Wu(p) ,

and hence G(Ws(p)) = Wu(p). Similarly, G(Wu(p)) = Ws(p). In summary, the left and the right plots in Fig. 11.9 are invariant under the rotation by 180 degrees around the origin, and they are mirror images of each other with respect to y = x.
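The two conjugacies can be spot-checked numerically. A Python sketch (ours, not the book's code) for C = 0.5:

```python
import math

# Spot-check of the symmetries of T(x, y) = (2x - y + C sin 2 pi x, x) (mod 1):
# F(x, y) = (-x, -y) satisfies F T F = T, and G(x, y) = (y, x) satisfies
# G T G = T^{-1}.
C = 0.5

def T(p):
    x, y = p
    return ((2 * x - y + C * math.sin(2 * math.pi * x)) % 1.0, x % 1.0)

def T_inv(p):
    x, y = p
    return (y % 1.0, (2 * y - x + C * math.sin(2 * math.pi * y)) % 1.0)

F = lambda p: ((-p[0]) % 1.0, (-p[1]) % 1.0)
G = lambda p: (p[1], p[0])

z = (0.123, 0.456)
print(F(T(F(z))), T(z))        # the two pairs agree (mod 1)
print(G(T(G(z))), T_inv(z))    # likewise
```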

11.4 Stable and Unstable Manifolds of Periodic Points

So far in this chapter we have considered the stable and unstable manifolds of a fixed point p. Now we extend the definitions of the stable and unstable manifolds to the case when p is a periodic point of period k ≥ 2, i.e., T^k p = p and T^j p ≠ p for 1 ≤ j < k.

Definition 11.3. Let p be a periodic point of period k ≥ 2. Define the stable (invariant) manifold of p by

Ws(p) = {x ∈ X : lim_{n→∞} d(T^{kn} x, p) = 0} ,

and the unstable (invariant) manifold of p by

Wu(p) = {x ∈ X : lim_{n→∞} d(T^{−kn} x, p) = 0} .

As an interesting example we consider another version of the modified standard mapping

T(x, y) = (2x + y + C sin 2πx, x) (mod 1) .

Its inverse is given by

T^{−1}(x, y) = (y, x − 2y − C sin 2πy) (mod 1) .

If T(x, y) = (x, y), then x = y and 2x + C sin 2πx = 0 (mod 1). Let t1, t2, 0 < t1 < t2 < 1, be the two solutions of 2x + C sin 2πx = 1 other than 0 and 1/2. (See Fig. 11.10.) Then t1 + t2 = 1, and a fixed point of T is of the form (z, z), z ∈ {0, t1, 1/2, t2}. It is easy to check that a periodic point of T of period 2 is of the form (z, w), z ≠ w, z, w ∈ {0, t1, 1/2, t2}, and that it satisfies T(z, w) = (w, z). All four fixed points are hyperbolic, and only six periodic points of period 2 are hyperbolic. Consult Maple Program 11.5.5.
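The fixed points can be located numerically by solving 2x + C sin 2πx = 1. A Python sketch (ours, with our helper names) with C = 0.45, the value used for Fig. 11.11:

```python
import math

C = 0.45

def f(x):
    # f(x) = 0 at the off-center fixed-point coordinates t1 and t2
    return 2 * x + C * math.sin(2 * math.pi * x) - 1

def bisect(lo, hi, tol=1e-12):
    # simple bisection; assumes f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def T(p):
    x, y = p
    return ((2 * x + y + C * math.sin(2 * math.pi * x)) % 1.0, x)

t1 = bisect(0.1, 0.45)   # bracket chosen to avoid the root at 1/2
t2 = 1.0 - t1            # by the symmetry f(1 - x) = -f(x)
print(t1, t2)            # t1 < 1/2 < t2
print(T((t1, t1)))       # a fixed point: returns (t1, t1) up to roundoff
print(T(T((t1, t2))))    # a period-2 point: T(t1, t2) = (t2, t1)
```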

Fig. 11.10. The graph of y = 2x + C sin(2πx) − 1 (left), the fixed points (middle) and the periodic points of period 2 (right)

A few ergodic invariant measures of T for C = 0.45 are presented in Fig. 11.11. In the left plot we have the absolutely continuous invariant measure of the form ρ(x, y) dx dy. Let A = {(x, y) : ρ(x, y) > 0}. Since |det DT| = 1, T preserves area. Hence ρ(x, y) = 1/λ(A) where λ denotes Lebesgue measure. In the middle plot we have 4 singular continuous invariant measures supported on the closed curves surrounding the periodic points (1/2, t1), (1/2, t2) and their images under T. Each of them is a union of two closed curves in the torus. In the

Fig. 11.11. An absolutely continuous invariant measure (left) and two singular continuous invariant measures (middle and right)

right plot, for a singular continuous measure supported on the closed curves surrounding the periodic point (1/2, 0), we have only two closed curves since the two pairs of edges are topologically identified on the torus. Let F1 = (t1, t1), F2 = (1/2, 1/2), F3 = (t2, t2) denote three of the fixed points, and let P1 = (t1, t2), P2 = (t2, t1) denote two of the periodic points. In Fig. 11.12 we plot the images of circles centered at the hyperbolic points F1, F2, F3, P2, and at two nonhyperbolic periodic points, under T² and T⁴. The circles centered at hyperbolic points are elongated along unstable directions, and the circles centered at nonhyperbolic points are more or less rotated around their centers. The images of the circles centered at F1 and F2 are so stretched that they go out of bounds and come back by the modulo 1 operation. The images of a circle centered at P2 under T^j, j odd, are closed curves surrounding P1 = T(P2), which are not drawn here.

Fig. 11.12. Images of circles under T 2 (left) and T 4 (right): The circles are centered at F1 , F2 , F3 , P2 and at two unnamed nonhyperbolic periodic points

In Fig. 11.13 we plot the stable manifolds of F1, F2, F3, P1, P2. Other less interesting fixed or periodic points are not considered here. Only a part of each manifold is drawn. Consult Maple Program 11.5.7.


Fig. 11.13. The stable manifolds of the modiﬁed standard mapping

Fig. 11.14 shows the unstable manifolds of F1, F2, F3, P1 and P2. Observe that some of the stable manifolds overlap some of the unstable manifolds. For example, the arc connecting F1 and F2 is a part of Wu(F1), and it is also a part of Ws(F2). Since T(−x, −y) = −T(x, y) (mod 1), i.e., F^{-1} ∘ T ∘ F = T, where F is defined in Sect. 11.3, the stable and unstable manifolds of T are invariant under rotation by 180 degrees, as observed in Figs. 11.13, 11.14. Let R be the counterclockwise rotation by 90 degrees. Then R^{-1} ∘ T ∘ R = T^{-1}, and so R(Ws) = Wu, R(Wu) = Ws by the same argument as in Sect. 11.3. Compare Figs. 11.13, 11.14. Since Ws(F3) and Wu(P2) are invariant under T^2, we have

T^2(Ws(F3) ∩ Wu(P2)) ⊂ Ws(F3) ∩ Wu(P2) .

Near the periodic point P2 choose a point

z0 ∈ Ws(F3) ∩ Wu(P2) ,

z0 ≠ P2 ,

label it as 0, and label the images T n z0 as n. See Fig. 11.15. Observe that T 2n z0 ∈ Ws (F3 ) ∩ Wu (P2 ) for every n. Consult Maple Program 11.5.8. Since


Fig. 11.14. The unstable manifolds of the modiﬁed standard mapping

F3 is a fixed point, T^n z0 converges to F3 but T^n z0 ≠ F3 for any n. Hence an area element at z0 is squeezed in the direction of Ws(F3) under T^{2n} as n → ∞. Since T preserves area, the area element must be expanded in some other direction. In Fig. 11.15 the unstable manifold of P2 is stretched and folded as it tries to reach F3.


Fig. 11.15. An orbit of a point z0 labeled as 0 on the intersection of the stable and the unstable manifolds of the modiﬁed standard mapping


11.5 Maple Programs

11.5.1 A stable manifold of the Hénon mapping

Here is how to obtain the stable manifold of the fixed point p of the Hénon mapping given in Sect. 11.2.
> with(plots): with(linalg):
Define the Hénon mapping T.
> T:=(x,y)->(y+1-a*x^2,b*x);
Define the inverse of T.
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2);
        S := (x, y) -> (y/b, x - 1 + (a/b^2) y^2)
Check whether T∘S is the identity mapping. Observe that T(S(x, y)) = (x, y).
> T(S(x,y));
        x, y
Find the Jacobian matrix.
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]);
        DT := (x, y) -> matrix([[-2 a x, 1], [b, 0]])
> a:=1.4: b:=0.3:
Discard the transient points.
> seed[0]:=(0,0):
> for k from 1 to 10000 do seed[0]:=T(seed[0]): od:
Plot the Hénon attractor.
> SampleSize:=2000:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find the fixed points.
> Sol:=solve(b*x+1-a*x^2=x,x);
        Sol := -1.131354477, 0.6313544771
Choose the fixed point p with a positive x-coordinate, which lies on the Hénon attractor.
> if Sol[1] > Sol[2] then J:=1: else J:=2: fi:
> x0:=Sol[J];
        x0 := 0.6313544771
> y0:=b*x0;
        y0 := 0.1894063431
Find the Jacobian matrix DT(p).
> A:=DT(x0,y0):


Find the eigenvalues, the multiplicities, and the eigenvectors of A.
> Eig:=eigenvectors(A);
        Eig := [-1.923738858, 1, {[-0.9880577560, 0.1540839733]}],
               [0.1559463223, 1, {[-0.4866537468, -0.9361947229]}]
Compare the two eigenvalues and find an eigenvector v in the stable direction.
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1];
        v := [-0.4866537468, -0.9361947229]
Plot the fixed point p.
> fixed:=pointplot([(x0,y0)],symbol=circle):
Choose an arc J on the stable manifold of p, which is approximated by a line segment, also denoted by J, centered at p and lying in the directions ±v. Generate N points on J whose images under the iterations of T^{-1} will gradually approximate the stable manifold.
> N:=100000:
> delta:=0.1/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
>   stable_p[i]:=(x0+(i-N/2.)*delta1,y0+(i-N/2.)*delta2): od:
Plot J. See the left plot in Fig. 11.16, where the Hénon attractor is also plotted.
> g1:=listplot([[stable_p[1]],[stable_p[N]]]):
> display(g1,attractor,fixed);


Fig. 11.16. Generation of the stable manifold of p for the Hénon mapping: a line segment J tangent to the stable manifold (left) and T^{-1}(J) (right)

Plot T^{-1}(J). See the right plot in Fig. 11.16. To save memory and computing time we use only a subset of the N points to plot a simple graph.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:


> g2:=listplot([seq([stable_p[s*10000]],s=1..N/10000)]):
> display(g2,attractor,fixed);
Plot T^{-2}(J). See the left plot in Fig. 11.17.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:
> g3:=listplot([seq([stable_p[s*2000]],s=1..N/2000)]):
> display(g3,attractor,fixed);


Fig. 11.17. Generation of the stable manifold of p for the Hénon mapping: T^{-2}(J) (left) and T^{-3}(J) (right)

Plot T^{-3}(J). See the right plot in Fig. 11.17.
> for i from 1 to N do stable_p[i]:=S(stable_p[i]): od:
> g4:=listplot([seq([stable_p[s*1000]],s=1..N/1000)]):
> display(g4,attractor,fixed);
Similarly, we plot T^{-4}(J) and T^{-5}(J). See Fig. 11.18. For T^{-7}(J) see Fig. 11.6.


Fig. 11.18. Generation of the stable manifold of p for the Hénon mapping: T^{-4}(J) (left) and T^{-5}(J) (right)
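For readers not using Maple, the same inverse-iteration construction can be sketched in Python. This is an illustrative translation, not the book's program: the closed-form quadratic for the fixed point and for the eigenvalues of the 2×2 Jacobian replaces Maple's solve and eigenvectors calls.

```python
import math

a, b = 1.4, 0.3  # classical Henon parameters, as in the text

def T(x, y):
    # The Henon mapping.
    return (y + 1 - a*x*x, b*x)

def S(x, y):
    # Inverse of T, so that T(S(x, y)) == (x, y).
    return (y/b, x - 1 + (a/b**2)*y**2)

# Fixed point p with positive x-coordinate: solve b*x + 1 - a*x^2 = x.
x0 = ((b - 1) + math.sqrt((b - 1)**2 + 4*a)) / (2*a)
y0 = b*x0

# Eigenvalues of DT(p) = [[-2*a*x0, 1], [b, 0]] solve mu^2 + 2*a*x0*mu - b = 0.
mu = -a*x0 + math.sqrt((a*x0)**2 + b)      # the stable one, |mu| < 1
v = (1.0, b/mu)                            # eigenvector for mu

# N points on a short segment J centered at p along the stable direction v.
N = 2001
norm = math.hypot(*v)
ux, uy = v[0]/norm, v[1]/norm
pts = [(x0 + (i - N//2)*(0.1/N)*ux, y0 + (i - N//2)*(0.1/N)*uy)
       for i in range(N)]

# Successive images under the inverse map approximate the stable manifold.
for _ in range(3):                         # roughly T^{-3}(J)
    pts = [S(x, y) for (x, y) in pts]
```

The analogous unstable-manifold picture is obtained by iterating T instead of S along the unstable eigenvector.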


11.5.2 An unstable manifold of the Hénon mapping

Here is how to obtain the unstable manifold of the fixed point p of the Hénon mapping in Sect. 11.2. First, we proceed as in Maple Program 11.5.1 and find a hyperbolic fixed point p with a positive x-coordinate. For the beginning of the program see Maple Program 11.5.1.
Find the Jacobian matrix of the Hénon mapping T.
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
Choose the eigenvector v in the unstable direction.
> if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1];
        v := [-0.98805775599470461437, 0.15408397327012552629]
> N:=2000:
> delta:=1./N:
> delta1:=delta*v[1]:
> delta2:=delta*v[2]:
Compute K.
> for i from 1 to N do
>   unstable_p[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2):
> od:
Plot a line segment K passing through p and tangent to the unstable manifold.
> listplot([[unstable_p[1]],[unstable_p[N]]]);
See the first plot in Fig. 11.19, where 1000 points on the Hénon attractor are also plotted. Now compute T(K).
> for i from 1 to N do
>   unstable_p[i]:=T(unstable_p[i]):
> od:
Plot T(K). The line segment K is shrinking in the stable direction, so T(K) approximates the unstable manifold better than K does.
> listplot([seq([unstable_p[i*10]],i=1..N/10)]);
Compute T^2(K).
> for i from 1 to N do
>   unstable_p[i]:=T(unstable_p[i]):
> od:
Plot T^2(K).
> listplot([seq([unstable_p[i*10]],i=1..N/10)]);
For T^3(K), T^4(K), and T^5(K) we proceed similarly. In Fig. 11.19 the line segment K and its images T^j(K), 1 ≤ j ≤ 5, are given together with the Hénon attractor and the fixed point p (from top left to bottom right).


Fig. 11.19. Generation of the unstable manifold of p for the Hénon mapping: a line segment K tangent to the unstable manifold and its images T^j(K), 1 ≤ j ≤ 5 (from top left to bottom right)

11.5.3 The boundary of the basin of attraction of the Hénon mapping

Choose the fixed point q of the Hénon mapping given in Sect. 11.2. The following shows that q is on the boundary of the basin of attraction.
> with(plots): with(linalg):
> a:=1.4: b:=0.3:
Define the Hénon mapping T. Find T^{-1} and the Jacobian matrix of T.
> T:=(x,y)->(y+1-a*x^2,b*x);
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2);
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
Throw away the transient points.
> seed[0]:=(0,0):
> for k from 1 to 2000 do seed[0]:=T(seed[0]): od:
Plot 500 points on the Hénon attractor.
> SampleSize:=500:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find a fixed point q that is not on the Hénon attractor.
> Sol:=solve(b*x+1-a*x^2=x,x);
        Sol := -1.131354477, 0.6313544771
> if Sol[1] < Sol[2] then J:=1: else J:=2: fi:
> x1:=Sol[J];


        x1 := -1.131354477
> y1:=b*x1;
        y1 := -0.3394063431
Plot the fixed point q.
> fixed:=pointplot([(x1,y1)],symbol=circle):
Find the Jacobian matrix at q.
> A:=DT(x1,y1):
> Eig:=eigenvectors(A):
Choose the eigenvector v in the unstable direction.
> if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1]:
Take a line segment L passing through q in the direction of v. Since q lies on the boundary of the basin of attraction, half of L lies inside the basin and the other half lies outside, as will be seen in Fig. 11.20.
> N:=10000:
This is the number of points on L.
> delta:=0.5/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
>   unstable_q[i]:=(x1+(i-N/2)*delta1,y1+(i-N/2)*delta2):
> od:
To plot L we use only the two endpoints.
> g0:=listplot([[unstable_q[1]],[unstable_q[N]]]):
> display(attractor,fixed,g0);
See the left plot in Fig. 11.20 for L, where the Hénon attractor is also plotted.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g1:=listplot([seq([unstable_q[i*100]],i=1..N/100)]):
> display(g1,fixed,attractor);
See the second plot in Fig. 11.20, where the portion of T(L) outside the basin of attraction starts to diverge to infinity and the portion inside the basin starts to move toward the Hénon attractor.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g2:=listplot([seq([unstable_q[i*50]],i=1..N/50)]):
> display(g2,attractor,fixed);
See the right plot in Fig. 11.20. The portion of T^2(L) outside the basin of attraction escapes farther to infinity, and the portion inside the basin almost overlaps the Hénon attractor.
> for i from 1 to N do unstable_q[i]:=T(unstable_q[i]): od:
> g3:=listplot([seq([unstable_q[i]],i=N/2..N)]):
> display(g3,attractor,fixed);


Fig. 11.20. Generation of the unstable manifold of q for the Hénon mapping: a line segment L tangent to the unstable manifold, and its images T(L), T^2(L) (from left to right)

From now on we consider the part of T^k(L) inside the basin of attraction. The part outside the basin stretches to infinity as k increases.
> for i from N/2 to N do
>   unstable_q[i]:=T(unstable_q[i]): od:
> g4:=listplot([seq([unstable_q[i]],i=N/2..N)]):
> display(g4,attractor,fixed);
    ...
Similarly we obtain g5, g6, g7 and g8. See Fig. 11.21 for T^k(L), 3 ≤ k ≤ 8, where the Hénon attractor is also plotted.

Fig. 11.21. Generation of an unstable manifold of q for the Hénon mapping: parts of T^k(L), 3 ≤ k ≤ 8, inside the basin of attraction (from top left to bottom right)

Now we plot a part of the stable manifold of q, which forms a part of the boundary of the basin of attraction.


First, find the tangent vector w in the stable direction.
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> w:=Eig[k][3][1];
        w := [-0.2996032920, 0.9766534317]
Choose a line segment I in the direction of w.
> M:=1000:
This is the number of points on I.
> eta:=0.25/M:
> eta1:=eta*w[1]: eta2:=eta*w[2]:
> for i from 1 to M do
>   stable_q[i]:=(x1+(i-M/2)*eta1,y1+(i-M/2)*eta2):
> od:
To plot I we use only the two endpoints.
> f0:=listplot([[stable_q[1]],[stable_q[M]]]):
> display(f0,fixed,attractor);
See the left plot in Fig. 11.22, where the Hénon attractor is also plotted.
> for i from 1 to M do stable_q[i]:=S(stable_q[i]): od:
> f1:=listplot([seq([stable_q[i]],i=1..M)]):
> display(f1,fixed,attractor);
See the middle plot in Fig. 11.22.
> for i from 1 to M do stable_q[i]:=S(stable_q[i]): od:
> f2:=listplot([seq([stable_q[i]],i=1..M)]):
> display(f2,fixed,attractor);
See the right plot in Fig. 11.22.


Fig. 11.22. Generation of the stable manifold of q for the Hénon mapping: a line segment I tangent to the stable manifold, and its images T(I), T^2(I) (from left to right)

Now plot the stable and the unstable manifolds of q together.
> display(g2,g8,f2,fixed);
See Fig. 11.8. To find the basin of attraction, consult Maple Program 3.9.14.


11.5.4 Behavior of the Hénon mapping near a saddle point

Consider the fixed point q of the Hénon mapping. The following illustrates the existence of a saddle structure around q.
> with(plots): with(linalg):
> a:=1.4: b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
Find the inverse of T.
> S:=(x,y)->(1/b*y,x-1+a/b^2*y^2):
> DT:=(x,y)->matrix([[-2*a*x,1],[b,0]]):
> seed[0]:=(0,0):
Discard the transient points.
> for k from 1 to 2000 do seed[0]:=T(seed[0]): od:
> SampleSize:=500:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
> attractor:=pointplot([seq([seed[i]],i=1..SampleSize)]):
Find the fixed point q.
> Sol:=solve(b*x+1-a*x^2=x,x):
> if Sol[1] < Sol[2] then J:=1: else J:=2: fi:
> x0:=Sol[J];
        x0 := -1.131354477
> y0:=b*x0;
        y0 := -0.3394063431
> fixed:=pointplot([(x0,y0)],symbol=circle):
Find the stable manifold of q.
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> w:=Eig[k][3][1]:
> M:=100:
> eta:=0.1/M:
> eta1:=eta*w[1]: eta2:=eta*w[2]:
> for i from 1 to M do
>   stable_q[i]:=(x0+(i-0.75*M)*eta1,y0+(i-0.75*M)*eta2):
> od:
> for i from 1 to M do stable_q[i]:=S(S(stable_q[i])): od:
Plot a part of the stable manifold of q near q.
> f3:=listplot([seq([stable_q[i]],i=1..M)]):
We will consider a curve of the form t → (t, C3(t − C1)(t − C2)) that partially surrounds the stable manifold of q. Choose the range t1 ≤ t ≤ t2.


> t1:=-1.6: t2:=1.6:
Find a parabola that partially surrounds the stable manifold of q from the outside.
> C1:=-1.34: C2:=1.4: C3:=1.3:
> outer:=t->(t,C3*(t-C1)*(t-C2)):
Plot the curve J = {outer(t) : t1 ≤ t ≤ t2} and its images under T.
> para_out[0]:=plot([outer(t),t=t1..t2]):
> para_out[1]:=plot([T(outer(t)),t=t1..t2]):
> para_out[2]:=plot([T(T(outer(t))),t=t1..t2]):
> para_out[3]:=plot([T(T(T(outer(t)))),t=t1..t2]):

Attach the labels First, Second, and Third to the images T(J), T^2(J) and T^3(J), respectively. (A different color may be used for each image.)
> order_out[1]:=textplot([T(outer(t1)),`First`]):
> order_out[2]:=textplot([T(T(outer(t1))),`Second`]):
> order_out[3]:=textplot([T(T(T(outer(t1)))),`Third`]):
> display(fixed,f3,attractor,seq(para_out[i],i=0..3),
    seq(order_out[i],i=1..3));
See Fig. 11.23. The fixed point q is marked by a circle. The parabola J is close to the stable manifold, so it is first shrunk into the curve T(J), then expanded a little to T^2(J) along the unstable direction. Next, it is further expanded to T^3(J), which approximates a part of the unstable manifold.


Fig. 11.23. The image of the outer parabola is attracted to the unstable manifold of q and diverges to inﬁnity

Choose a curve of the form t → (t, D3(t − D1)(t − D2)) that is partially surrounded by the stable manifold of q from the inside.
> D1:=-1.13: D2:=1.19: D3:=1.47:
> inner_c:=t->(t,D3*(t-D1)*(t-D2)):
Plot the curve H = {inner_c(t) : t1 ≤ t ≤ t2} and its images under T.

> para_in[0]:=plot([inner_c(t),t=t1..t2]):
> para_in[1]:=plot([T(inner_c(t)),t=t1..t2]):
> para_in[2]:=plot([T(T(inner_c(t))),t=t1..t2]):
> para_in[3]:=plot([T(T(T(inner_c(t)))),t=t1..t2]):
> para_in[4]:=plot([T(T(T(T(inner_c(t))))),t=t1..t2]):

Attach the labels First, Second, Third and Fourth to the images T(H), T^2(H), T^3(H) and T^4(H), respectively.
> order_in[1]:=textplot([T(inner_c(t1)),`First`]):
> order_in[2]:=textplot([T(T(inner_c(t1))),`Second`]):
> order_in[3]:=textplot([T(T(T(inner_c(t1)))),`Third`]):
> order_in[4]:=textplot([T(T(T(T(inner_c(t1))))),`Fourth`]):
> display(fixed,f3,attractor,seq(para_in[i],i=0..4),
    seq(order_in[i],i=1..4));
See Fig. 11.24. The fixed point q is marked by a circle.


Fig. 11.24. The image of the inner parabola gradually approximates the Hénon attractor

11.5.5 Hyperbolic points of the standard mapping

Check the hyperbolicity of the fixed and the periodic points of the modified standard mapping.
> with(linalg):
> C:=0.45:
> fractional:=x->x-floor(x):
Define the modified standard mapping T.
> T:=(x,y)->(fractional(2*x+y+C*sin(2*Pi*x)),x);
        T := (x, y) -> (fractional(2x + y + C sin(2πx)), x)
Find the Jacobian matrix of T.
> DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]);


        DT := (x, y) -> matrix([[2 + 2πC cos(2πx), 1], [1, 0]])
To find the fixed points, solve 2x + C sin 2πx = 1.
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2);
        t1 := 0.2786308603
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1);
        t2 := 0.7213691397
In the following we check T^2(xr, ys) = (xr, ys).
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
> for r from 1 to 4 do
> for s from 1 to 4 do
>   print(T(T(x[r],y[s]))-(x[r],y[s]));
> od; od;
        0., 0.
        0., 0.
        0., 0.
        0.9999999996, 0.
        ...
Each output is equal to (0, 0) modulo 1 within an acceptable margin of error, and we conclude that T^2(xr, ys) = (xr, ys). Now we show that T(xr, ys) = (ys, xr).
> for r from 1 to 4 do
> for s from 1 to 4 do
>   print(T(x[r],y[s])-(y[s],x[r]));
> od; od;
        0., 0
        0., 0
        ...
Each output is equal to (0, 0) modulo 1 within an acceptable margin of error. To check the hyperbolicity of the fixed and the periodic points, we compute the eigenvalues of the Jacobian matrix of T^2.
> for r from 1 to 4 do
> for s from 1 to 4 do
>   x0:=x[r]: y0:=y[s]:
>   A:=evalm(DT(T(x0,y0))&*DT(x0,y0)):
>   Eig:=eigenvectors(A):
>   if abs(abs(Eig[1][1]) - 1) > 0.00001 then
>     print([r,s],Eig[1][1],Eig[2][1],'hyperbolic')
>   else print([r,s],Eig[1][1],Eig[2][1],'not hyperbolic'):
>   fi:
> od;
> od;


        [1, 1], 0.03958, 25.26, hyperbolic
        [1, 2], 9.103, 0.1099, hyperbolic
        [1, 3], -0.9972 - 0.07492 I, -0.9972 + 0.07492 I, not hyperbolic
        [1, 4], 0.1099, 9.103, hyperbolic
        ...
Check that the complex eigenvalues are of modulus 1 in the above. All the fixed points, which are indexed by [r, r], 1 ≤ r ≤ 4, are hyperbolic.

11.5.6 Images of a circle centered at a hyperbolic point under the standard mapping

We study the behavior of the standard mapping T by tracking the images under T of circles centered at the periodic points.
> with(plots):
> C:=0.45:
> fractional:=x->x-floor(x):
> T:=(x,y)->(fractional(y+2*x+C*sin(2*Pi*x)),x):
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2):
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1):
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
Choose the number of points on each circle.
> N:=500:
Choose the common radius a of the circles.
> a:=0.03:
> for r from 2 to 4 do
> for s from 2 to r do
>   for i from 1 to N do
>     pt_circle[i]:=(x[r]+a*cos(2*Pi*i/N),y[s]+a*sin(2*Pi*i/N)):
>   od:
>   image[0]:=pointplot([seq(pt_circle[i],i=1..N)]):
>   for k from 1 to 4 do
>     for i from 1 to N do
>       pt_circle[i]:=T(pt_circle[i]):
>     od:
>     image[k]:=pointplot([seq(pt_circle[i],i=1..N)]):
>   od:
>   g[r,s]:=display(image[0],image[2],image[4]):
> od:
> od:
> display(seq(seq(g[r,s],s=2..r),r=2..4));
See Fig. 11.12.
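The hyperbolicity test is just 2×2 linear algebra, and since det DT = −1 the Jacobian of T^2 always has determinant 1, which is why the complex eigenvalues automatically have modulus 1. Here is an illustrative Python sketch of the same check (an assumed translation, not the book's code), computing eigenvalues from the trace and the determinant:

```python
import math

C = 0.45
TWO_PI = 2*math.pi

def T(x, y):
    # The modified standard mapping on the unit torus.
    return ((2*x + y + C*math.sin(TWO_PI*x)) % 1.0, x)

def DT(x):
    # Jacobian of T; it depends only on x, and det DT = -1.
    return ((2 + TWO_PI*C*math.cos(TWO_PI*x), 1.0), (1.0, 0.0))

def matmul(A, B):
    # 2x2 matrix product.
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def classify(x0, y0):
    # Eigenvalues of the Jacobian of T^2 at (x0, y0) via trace/determinant.
    A = matmul(DT(T(x0, y0)[0]), DT(x0))
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # equals (-1)*(-1) = 1
    disc = tr*tr - 4*det
    if disc < 0:
        # complex conjugate pair of modulus sqrt(det) = 1: not hyperbolic
        return None, math.sqrt(det)
    lam = (tr + math.copysign(math.sqrt(disc), tr)) / 2
    return (lam, det/lam), max(abs(lam), abs(det/lam))

eigs, mod = classify(0.0, 0.0)      # the fixed point (0, 0): hyperbolic
eigs2, mod2 = classify(0.0, 0.5)    # the periodic point (0, 1/2): not
print(eigs, mod, eigs2, mod2)
```

For the fixed point (0, 0) the larger eigenvalue comes out near 25.26, matching the output [1, 1] above, and for (0, 1/2) the pair is complex of modulus 1, matching [1, 3].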


11.5.7 Stable manifolds of the standard mapping

> with(plots): with(linalg):
> C:=0.45:
> fractional:=x->x-floor(x):
Define the modified standard mapping T and its inverse S.
> T:=(x,y)->(fractional(evalhf(2*x+y+C*sin(2*Pi*x))),x):
> S:=(x,y)->(y,fractional(evalhf(x-2*y-C*sin(2*Pi*y)))):
Find the Jacobian matrix of T.
> DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]):
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2):
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1):
List the coordinates of the fixed points and the periodic points of period 2.
> x[1]:=0: x[2]:=t1: x[3]:=0.5: x[4]:=t2:
> y[1]:=0: y[2]:=t1: y[3]:=0.5: y[4]:=t2:
Choose three of the fixed points and two of the periodic points, and mark them by circles.
> fixed:=pointplot([seq([x[r],y[r]],r=2..4)],symbol=circle):
> periodic:=pointplot([[x[2],y[4]],[x[4],y[2]]],
    symbol=circle):
Find the stable manifolds of the fixed points.
> N:=1000:
> r:=2:
Choose the number of iterations.
> K1:=12:
Find the stable direction.
> x0:=x[r]: y0:=y[r]:
> A:=DT(x0,y0):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2:
> fi:
> v:=Eig[k][3][1]:
Construct N points on a very short segment in the stable direction.
> delta:=0.05/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
>   pt_stable[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2):
> od:
> for k from 1 to K1 do
>   for i from 1 to N do
>     pt_stable[i]:=S(pt_stable[i]):
>   od:
>   Ws[2][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:


> display(seq(Ws[2][k],k=1..K1),fixed);
See the left plot in Fig. 11.25. If we increase K1, then we can observe the complicated folding of the stable manifold wrapping around the holes seen in Fig. 11.13.


Fig. 11.25. The stable manifolds of the ﬁxed points (t1 , t1 ) (left) and (1/2, 1/2) (right) of the modiﬁed standard mapping

Using the same algorithm we find the stable manifold of (1/2, 1/2). In this case the manifold is simple and we use a small number of points to describe it.
> N:=250:
> r:=3:
> K2:=10:
    ...
> Ws[3][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:
> display(seq(Ws[3][k],k=1..K2),fixed);
See the right plot in Fig. 11.25. The points on the stable manifold are attracted to (1/2, 1/2). The two endpoints do not belong to Ws((1/2, 1/2)).
Using the same algorithm we find the stable manifold of (t2, t2).
> N:=1000:
> r:=4:
> K3:=12:
    ...
> Ws[4][k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:
> display(seq(Ws[4][k],k=1..K3),fixed);
See the left plot in Fig. 11.26. Find the stable manifolds of the periodic points of period 2. In the following we find Ws(t2, t1) and Ws(t1, t2) at the same time.


Fig. 11.26. The stable manifolds of the fixed point (t2, t2) (left) and the periodic points (t1, t2) and (t2, t1) (right) of the modified standard mapping

Choose the number of iterations.
> K4:=10:
> N:=1000:
> (r,s):=(4,2):
> x0:=x[r]; y0:=y[s];
        x0 := 0.7213691397
        y0 := 0.2786308603
Find the stable direction of the Jacobian matrix of T^2 at the periodic point (x0, y0).
> A:=evalm(DT(T(x0,y0))&*DT(x0,y0)):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then b:=1:
> else b:=2: fi:
> v:=Eig[b][3][1]:
Construct N points on a very short segment in the stable direction.
> delta:=0.05/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
>   pt_stable[i]:=(x0+(i-N/2)*delta1,y0+(i-N/2)*delta2):
> od:
> for k from 1 to K4 do
>   for i from 1 to N do
>     pt_stable[i]:=S(pt_stable[i]):
>   od:
>   f[k]:=pointplot([seq([pt_stable[i]],i=1..N)]):
> od:
> display(seq(f[k],k=1..K4),periodic);
See the right plot in Fig. 11.26. For the stable manifolds at the fixed points and the periodic points in one frame, see Fig. 11.13.


11.5.8 Intersection of the stable and unstable manifolds of the standard mapping

> with(plots): with(linalg):
> C:=0.45:
> fractional:=x->x-floor(x):
> T:=(x,y)->(fractional(evalhf(y+2*x+C*sin(2*Pi*x))),x):
> S:=(x,y)->(y,fractional(evalhf(x-2*y-C*sin(2*Pi*y)))):
> DT:=(x,y)->matrix([[2+2*Pi*C*cos(2*Pi*x),1],[1,0]]):
> t1:=fsolve(2*x+C*sin(2*Pi*x)-1,x=0..1/2):
> t2:=fsolve(2*x+C*sin(2*Pi*x)-1,x=1/2..1):
Find the stable manifold of (t2, t2) as in Maple Program 11.5.7.
> A:=evalm(DT(t2,t2)):
> Eig:=eigenvectors(A):
> if abs(Eig[1][1]) < abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> v:=Eig[k][3][1]:
Generate N points, called pt_stabl[i], which are approximately on the stable manifold.
> N:=2000:
> delta:=0.05/N:
> delta1:=delta*v[1]: delta2:=delta*v[2]:
> for i from 1 to N do
>   pt_stabl[i]:=(t2+(i-N/2)*delta1,t2+(i-N/2)*delta2):
> od:
> J:=10:
Plot the stable manifold using points of relatively larger size.
> for j from 1 to J do
>   for i from 1 to N do
>     pt_stabl[i]:=S(pt_stabl[i]): od:
>   h[j]:=pointplot([seq([pt_stabl[i]],i=1..N)],symbolsize=4):
> od:
Find the unstable manifold of the periodic point (t2, t1).
> B:=evalm(DT(T(t2,t1))&*DT(t2,t1)):
> Eig:=eigenvectors(B):
> if abs(Eig[1][1]) > abs(Eig[2][1]) then k:=1:
> else k:=2: fi:
> w:=Eig[k][3][1]:
> N:=3000:
> delta:=0.05/N:
> del1:=delta*w[1]:
> del2:=delta*w[2]:


> for i from 1 to N do
>   pt_unsta[i]:=(t2+(i-N/2)*del1, t1+(i-N/2)*del2):
> od:
Plot the unstable manifold using points of relatively smaller size.
> for j from 1 to J do
>   for i from 1 to N do
>     pt_unsta[i]:=T(pt_unsta[i]):
>   od:
>   g[j]:=pointplot([seq([pt_unsta[i]],i=1..N)],symbolsize=1):
> od:
Consider the images of a point z0 on the intersection of the stable manifold of (t2, t2) and the unstable manifold of (t2, t1).
> range1:=plot(0.5,x=0.25..0.9,y=0.25..0.9,color=white):
> alpha:=0.00675:
> z[0]:=(t2+alpha*w[1],t1+alpha*w[2]);
        z0 := 0.7274036685, 0.2816552547
> text[0]:=textplot([z[0]+(0.01,0.02),0]):
> L:=10:
Label the point zn = T^n(z0) as n.
> for n from 1 to L do
>   z[n]:=T(z[n-1]):
>   text[n]:=textplot([z[n]+(0.01,0.02),n]):
> od:
Using small circles mark the images of the point along the intersection of the stable manifold of (t2, t2) and the unstable manifold of (t2, t1).
> circles:=pointplot([seq(z[n],n=0..L)],symbol=circle):
> display(range1,seq(g[j],j=1..J),seq(h[j],j=1..J),
    circles,seq(text[n],n=0..L),text_Ws,text_Wu);
See Fig. 11.15.

12 Recurrence and Entropy

The entropy of a shift transformation can be expressed in terms of the first return time. The idea is simple: given a sequence x = (x1 x2 x3 . . .), xi ∈ {s1, . . . , sk}, find the first return time Rn needed to observe the first n-block x1 . . . xn again. Then (log Rn)/n converges to the entropy as n → ∞. If a sequence x is less random, then more repetitive patterns appear in x and it takes less time to observe the same block again; thus we obtain a smaller value for the entropy. For a comprehensive introduction to the first return time and data compression, consult [Shi]. For applications of the first return time in testing random number generators, see [CKd2], [Mau].

12.1 The First Return Time Formula

Let {s1, . . . , sk} be a set of symbols. Consider the shift transformation T defined on X = ∏_1^∞ {s1, . . . , sk} by (T x)_n = x_{n+1} for x = (x1 x2 x3 . . .). The T-invariance of a measure µ is nothing but shift invariance, i.e.,

µ([a1, . . . , as]_{n,...,n+s−1}) = µ([a1, . . . , as]_{1,...,s})

where [a1, . . . , as]_{n,...,n+s−1} is the cylinder set of length s whose (n + j − 1)th coordinate is the symbol aj for 1 ≤ j ≤ s. Recall the definition of the first return time RE in Definition 5.11. If we choose E = [a1, . . . , an]_{1,...,n} and take x ∈ E, then

RE(x) = min{j ≥ 1 : x_{j+1} = a1, . . . , x_{j+n} = an} .

Let P = {C1, . . . , Ck} be the measurable partition of X given by the cylinder sets Ci = [si]_1 = {x ∈ X : x1 = si}. Note that P is a generating partition. Let T^{−j}P, j ≥ 1, be the partition {T^{−j}C1, . . . , T^{−j}Ck}. Let Pn(x) ⊂ X be the unique subset that contains x and belongs to the collection

Pn = P ∨ T^{−1}P ∨ · · · ∨ T^{−(n−1)}P .


Now we define a special form of the first return time by Rn(x) ≡ R_{Pn(x)}(x), i.e.,

Rn(x) = min{j ≥ 1 : x_{j+1} = x1, . . . , x_{j+n} = xn} .

x = a1 . . . an . . . . . . a1 . . . an . . .
    |<------- Rn(x) ------>|

Kac's lemma implies that the average of Rn under the condition xi = ai, 1 ≤ i ≤ n, is equal to 1/Pr(a1 . . . an), where Pr(a1 . . . an) is the probability of observing a1 . . . an in the first n-block of x. The lemma suggests that Rn may be approximated by 1/µ(Pn(x)) in a suitable sense. Since the Shannon–McMillan–Breiman Theorem implies that −(1/n) log µ(Pn(x)) converges to the entropy as n → ∞, we expect that (log Rn)/n converges to the entropy, which is true as we shall see in this section.

Here is another viewpoint. According to the Asymptotic Equipartition Property the number of typical subsets in Pn is approximately equal to 2^{nh}. Because of ergodicity an orbit of a typical sequence x would visit almost all the typical subsets in Pn, and hence the return time for almost every starting point would be approximately equal to 2^{nh}.

Now we give an almost identical definition of the first return time: define

R̄n(x) = min{j ≥ n : x1 . . . xn = x_{j+1} . . . x_{j+n}}

for each x = (x1 x2 x3 . . .). This definition was used by other researchers. See [OW],[WZ]. Note that Rn and R̄n are defined almost everywhere, that R̄n ≥ n and Rn ≤ R̄n. If Rn(x) ≥ n, then Rn(x) = R̄n(x).

Theorem 12.1 (The first return time formula). Let T be the shift transformation on X = ∏_1^∞ {s1, . . . , sk} with a shift invariant probability measure µ. Suppose that T is ergodic. For x = (x1 x2 x3 . . .) ∈ X define

R̄n(x) = min{j ≥ n : x1 . . . xn = x_{j+1} . . . x_{j+n}} .

Then, for almost every x,

lim_{n→∞} (log R̄n(x))/n = entropy .

The formula was first studied by Wyner and Ziv [WZ] in relation to data compression algorithms such as Lempel–Ziv coding [LZ]. They proved convergence in probability, and Ornstein and Weiss [OW] later showed that the convergence holds with probability 1, i.e., pointwise almost everywhere. The formula enables us to estimate the entropy by observing a typical sample sequence. In real world applications there is no infinite binary sequence; if a computer file or a digital message is sufficiently long, then the file or the message is regarded as a part of an infinitely long sequence, and the formula


can be used. The strong points of the formula are its intrinsic beauty and its simple computer implementation. For more details consult [Shi].

Since Rn and R̄n are close to each other, we expect similar results to hold for Rn and R̄n at the same time. In Theorem 12.3 it is shown that the same formula holds for Rn.

Lemma 12.2. Let T be an ergodic shift transformation on a shift space X on finitely many symbols.
(i) If Rn(x) < n, then there exists k, n/3 ≤ k < n, such that Rk(x) = k.
(ii) If T has positive entropy, then for almost every x there exists N = N(x) such that R̄n(x) = Rn(x) for n ≥ N.

Proof. (i) If Rn(x) = k for some k < n, then x1 . . . xk = x_{k+1} . . . x_{2k} and Rk(x) = k. If k ≥ n/3, then we are done. If k < n/3, then

x1 . . . x_{n+k} = x1 . . . xk x1 . . . xk x1 . . . xk . . . . . . x1 . . . xl

for some l ≤ k. Hence R_{mk}(x) = mk for some m such that n/3 ≤ mk < n.
(ii) Put

E = {x ∈ X : lim sup_{n→∞} (R̄n(x) − Rn(x)) > 0} .

If x ∈ E, then R̄n(x) > Rn(x) for infinitely many n. Hence Rn(x) < n for infinitely many n. Part (i) implies that Rk(x) = k for infinitely many k and

lim inf_{k→∞} (log Rk(x))/k = 0 .

Now the Ornstein–Weiss formula implies that E has measure zero.

As a direct application we have the following result.

Theorem 12.3. For an ergodic shift on finitely many symbols,

lim_{n→∞} (log Rn(x))/n = entropy

for almost every x.

Proof. If the entropy is positive, we use Lemma 12.2(ii). If the entropy is zero, then observe that

lim sup_{n→∞} (log Rn(x))/n ≤ lim sup_{n→∞} (log R̄n(x))/n = lim_{n→∞} (log R̄n(x))/n = 0 .
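The estimator behind these theorems is a few lines in any language. Below is an illustrative Python sketch (not the book's Maple program) of R̄n on a finite sample, applied to a simulated fair coin; the fair coin has entropy log 2, i.e., 1 when logarithms are taken to base 2:

```python
import math
import random

def first_return_time(x, n):
    # Rbar_n(x) = min{ j >= n : x[0:n] == x[j:j+n] }, searched within the
    # finite sample; returns None if the first n-block does not recur.
    block = x[:n]
    for j in range(n, len(x) - n + 1):
        if x[j:j+n] == block:
            return j
    return None

# A periodic sequence has zero entropy, and indeed (log Rbar_n)/n -> 0.
assert first_return_time([0, 1]*100, 4) == 4

# A fair-coin sample: (log2 Rbar_n)/n should be close to 1.
random.seed(1)
coin = [random.randint(0, 1) for _ in range(200000)]
n = 12
R = first_return_time(coin, n)
est = math.log2(R)/n
print(R, est)
```

In practice one averages the estimate over several starting positions of the sample, as in the examples below.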

Remark 12.4. Suppose that every block in ∏_1^∞ {0, 1} has positive probability. Kac's lemma implies that

E[Rn] = Σ_{B∈Pn} E[Rn | B] Pr(B) = Σ_{B∈Pn} (1/Pr(B)) Pr(B) = 2^n .

See Theorem 12.26 for more information.
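The identity in Remark 12.4 is easy to check by simulation. The following is an assumed Python sketch (not the book's code) for the (1/2, 1/2)-Bernoulli shift, where every n-block has probability 2^{-n}:

```python
import random

def R_n(n, rng):
    # Rn = min{ j >= 1 : x_{j+1}...x_{j+n} = x_1...x_n } for a lazily
    # generated fair-coin sequence.
    x = [rng.randint(0, 1) for _ in range(n)]
    block = x[:]
    j = 0
    while True:
        j += 1
        x.append(rng.randint(0, 1))
        if x[j:j+n] == block:
            return j

random.seed(0)
n, trials = 6, 4000
avg = sum(R_n(n, random) for _ in range(trials))/trials
print(avg)   # should be close to 2**n = 64
```

With a small number of trials and a biased coin the average is typically underestimated, which is exactly the effect discussed in Example 12.6.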


Example 12.5. (Pointwise version) Simulation results for Theorem 12.3 with the (p, 1 − p)-Bernoulli shifts are given in Fig. 12.1. A horizontal line marks the entropy in each case. For p = 1/2 it takes longer for an n-block to reappear, and we consider Rn for small values of n. For p = 4/5 there are only a few typical n-blocks for small n, and so it does not take too long for a starting block to reappear. That is why we have Rn = 1 for small n. See Maple Program 12.6.1.


Fig. 12.1. y = log Rn (x0 )/n at a sequence x0 from the (p, 1 − p)-Bernoulli shift spaces: p = 1/2 and 1 ≤ n ≤ 19 (left) and p = 4/5 and 1 ≤ n ≤ 30 (right)

Example 12.6. (Average version) Simulation results for the averages of the first return time Rn are presented in Figs. 12.2, 12.3. The theoretical average of Rn is equal to 2^n. Jensen's inequality implies that the average of the logarithm is less than or equal to the logarithm of the average. If p is not close to 1/2, then the simulations usually produce smaller values than the theoretical value since our sample size 10^3 was not large enough: there are n-blocks with very small measure, and so the return times to them are very large. If the sample size is small, then these blocks are not sampled, and the average is underestimated. See Maple Program 12.6.2. A similar idea appears in Maple Program 8.7.2.


Fig. 12.2. y = Ave[(log Rn )/n] (left) and y = (log Ave[Rn ])/n (right) for the (1/2, 1/2)-Bernoulli shift. The horizontal line in the left graph marks the entropy


Fig. 12.3. y = Ave[(log Rn )/n] and y = (log Ave[Rn ])/n, 1 ≤ n ≤ 12, for the (1/4, 3/4)-Bernoulli shift. The horizontal line in the left graph marks the entropy

Example 12.7. The pdf's for Rn and (log Rn)/n, n = 10, are given for the (1/4, 3/4)-Bernoulli shift. See Fig. 12.4. A sample value that is far away from most of the other sample values is called an outlier. Outliers are ignored in the left graph, because if all the sample values were used in drawing the graph, then the right tail would be too long. Some values of Rn are equal to 1, which explains the left tail in the right graph. See Maple Program 12.6.3.


Fig. 12.4. Empirical pdf’s of Rn (left) and (log Rn )/n (right), n = 10, with the sample size 104 for the (1/4, 3/4)-Bernoulli shift

12.2 Lp-Convergence

Here is the Lp-version of the Shannon–McMillan–Breiman Theorem.

Fact 12.8 Let p ≥ 1. Consider an ergodic shift space (X, µ) and its partition Pn into the n-blocks determined by the first n symbols. Define

En(x) = −(1/n) Σ_{B∈Pn} log µ(B) 1_B(x) ,

where 1_B is the indicator function of B. Then En converges to the entropy h for almost every x and also in Lp.


Now we prove Lp-convergence of (log Rn)/n for p ≥ 1.

Theorem 12.9. Let p ≥ 1 and let (X, µ) be an ergodic shift space. Then (log Rn)/n converges to h in Lp.

Proof. In the following we do not distinguish between Rn and R̃n since we are interested in the limits of their integrals, which are equal. The Ornstein–Weiss formula and Fatou's lemma together imply that

h^p ≤ lim inf_{n→∞} ∫_X ((log Rn)/n)^p dµ .

Note that (ln x)^p is concave for sufficiently large x: for p = 1 this is obvious, and if p ≥ 2, then

d²/dx² (ln x)^p = p ((p − 1)(ln x)^{p−2} − (ln x)^{p−1}) / x² < 0

for x > e^{p−1}.

12.3 The Nonoverlapping First Return Time

Let π denote the stationary probability vector of a Markov shift with transition matrix P. Put

H0 = − Σ_i πi log πi .


Fig. 12.5. Simulations of Maurer’s theorem: the (1/2, 1/2)-Bernoulli shift (left) and the (2/3, 1/3)-Bernoulli shift (right)

Let Pn(x) denote the probability of appearance of the initial n-block in the sequence x = x1x2x3 . . .; in other words, Pn(x) = Pr(x1 . . . xn). Let Φ be the density function of the standard normal distribution given in Definition 1.43. Put

σ² = lim_{n→∞} Var[− log Pn(x)] / n ,

where 'Var' stands for variance and σ > 0 is regarded as the normalized standard deviation.

Fact 12.13 ([W1]) For a mixing Markov shift with entropy h we have

lim_{n→∞} Pr( (log R̃n − nh)/(σ √n) ≤ α ) = ∫_{−∞}^{α} Φ(t) dt .

Compare the result with the one in Sect. 8.6. See also Theorem 1.44.

Fact 12.14 (Corollary 1, [Ko]) For an ergodic Markov shift we have

log(R̃n(x) Pn(x)) = o(n^β)

almost surely for any β > 0.

Fact 12.15 (Corollary B5, [W2]) For any ε > 0

−(1 + ε) log n ≤ log(R̃n(x) Pn(x)) ≤ log log n

eventually, almost surely for an ergodic Markov shift. Since

E[log Pn(x)] = Σ π_{a1} p_{a1a2} · · · p_{a_{n−1}a_n} log(π_{a1} p_{a1a2} · · · p_{a_{n−1}a_n})
= Σ_i πi log πi + (n − 1) Σ_{i,j} πi pij log pij
= −H0 − (n − 1)h ,


where the first sum is taken over all n-blocks a1 · · · an, Fact 12.15 implies that

−(1 + ε) log n ≤ E[log R̃n] − (n − 1)h − H0 ≤ log log n

approximately for large n.
We prove an extension of Theorem 12.12 to the Markov shifts with a modified definition of the first return time. We obtain a sharper estimate of the convergence rate of its average and, further, propose an algorithm for the estimation of entropy for Markov shifts.

Definition 12.16. The modified nth first return time R(n) is defined by

R(n)(x) = min{j ≥ 1 : x1 . . . xn = x_{2jn+1} . . . x_{2jn+n}} .

Put r = 2^{−n}. Then the expectation of log R(n) equals v(r) in the case of the (1/2, 1/2)-Bernoulli shift. Hence the expectation of log R(n) is approximately equal to n − γ/ln 2 for large n. We investigate the speed of convergence of the average of log R(n) to the entropy after proper normalization. The dependence on the past decreases exponentially, and hence odd-numbered blocks become almost independent of each other as the gap between the neighboring blocks increases. The following lemma is a restatement of Theorem 5.22.

Lemma 12.17. Let P be an irreducible and aperiodic stochastic matrix. Put Q = lim_{n→∞} P^n. Then there exist constants α > 0 and 0 < d < 1 such that ||P^n − Q||_∞ ≤ α d^n.

Theorem 12.18. If P is irreducible and aperiodic, then

lim_{n→∞} ( E[log R(n)] − (n − 1)h ) = H0 − γ/ln 2 .

Proof. Throughout the proof we simply write x_1^n in place of x1 . . . xn, and similarly for other blocks if necessary. Let π denote the Perron–Frobenius eigenvector for P. Put c = α / min_i πi and suppose that k is large enough so that d^{k+1} c < 1, where α and d are the constants obtained in Lemma 12.17. Let T denote the shift transformation on the Markov chain. For arbitrary n-blocks a1 · · · an and b1 · · · bn, we have

Pr( [b1 · · · bn] ∩ T^{−(n+k)}[a1 · · · an] ) = π_{b1} P_{b1b2} · · · P_{b_{n−1}b_n} (P^{k+1})_{b_n a_1} P_{a1a2} · · · P_{a_{n−1}a_n} .

Hence

| Pr( [b1 · · · bn] ∩ T^{−(n+k)}[a1 · · · an] ) − Pr(b1 · · · bn) Pr(a1 · · · an) |
= π_{b1} P_{b1b2} · · · P_{b_{n−1}b_n} |(P^{k+1})_{b_n a_1} − π_{a1}| P_{a1a2} · · · P_{a_{n−1}a_n}
≤ Pr(b_1^n) Pr(a_1^n) d^{k+1} α/π_{a1} ≤ Pr(b_1^n) Pr(a_1^n) d^{k+1} c ,

and

Pr(a_1^n)(1 − d^{n+1} c) ≤ Pr( x_{2n+1}^{2n+n} = a_1^n | x_1^n = b_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1} c) .

Put a_1^{(0)} · · · a_n^{(0)} = a1 · · · an = a_1^{(i)} · · · a_n^{(i)}. Then

Pr( R(n) = i | x_1^n = a_1^n ) = Σ_{a_1^{(i−1)}···a_n^{(i−1)} ≠ a_1^n} · · · Σ_{a_1^{(1)}···a_n^{(1)} ≠ a_1^n} ∏_{j=1}^{i} Aj ,

where

Aj = Pr( x_{2n+1}^{2n+n} = a_1^{(j)} · · · a_n^{(j)} | x_1^n = a_1^{(j−1)} · · · a_n^{(j−1)} ) ,

since Aj is equal to the probability of x_{2jn+1}^{2jn+n} = a_1^{(j)} · · · a_n^{(j)} given the condition x_{2(j−1)n+1}^{2(j−1)n+n} = a_1^{(j−1)} · · · a_n^{(j−1)}.

Hence

Pr(a_1^n)(1 − d^{n+1}c) U1 ≤ Pr( R(n) = i | x_1^n = a_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1}c) U1 ,

where

U1 = Σ_{a_1^{(i−1)}···a_n^{(i−1)} ≠ a_1^n} · · · Σ_{a_1^{(1)}···a_n^{(1)} ≠ a_1^n} ∏_{j=1}^{i−1} Aj .

Since

Σ_{a_1^{(i−1)}···a_n^{(i−1)} ≠ a_1^n} A_{i−1} = 1 − Pr( x_{2n+1}^{2n+n} = a_1^n | x_1^n = a_1^{(i−2)} · · · a_n^{(i−2)} ) ,

the sum is bounded by 1 − Pr(a_1^n)(1 + d^{n+1}c) and 1 − Pr(a_1^n)(1 − d^{n+1}c). Hence

Pr(a_1^n)(1 − d^{n+1}c)(1 − Pr(a_1^n)(1 + d^{n+1}c)) U2 ≤ Pr( R(n) = i | x_1^n = a_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1}c)(1 − Pr(a_1^n)(1 − d^{n+1}c)) U2 ,

where

U2 = Σ_{a_1^{(i−2)}···a_n^{(i−2)} ≠ a_1^n} · · · Σ_{a_1^{(1)}···a_n^{(1)} ≠ a_1^n} ∏_{j=1}^{i−2} Aj .

Inductively we have

Pr(a_1^n)(1 − d^{n+1}c)(1 − Pr(a_1^n)(1 + d^{n+1}c))^{i−1} ≤ Pr( R(n) = i | x_1^n = a_1^n ) ≤ Pr(a_1^n)(1 + d^{n+1}c)(1 − Pr(a_1^n)(1 − d^{n+1}c))^{i−1} .


Hence

Pr(a_1^n)(1 − d^{n+1}c) Σ_{i=1}^{∞} (1 − Pr(a_1^n)(1 + d^{n+1}c))^{i−1} log i
≤ E[ log R(n) | x_1^n = a_1^n ]
≤ Pr(a_1^n)(1 + d^{n+1}c) Σ_{i=1}^{∞} (1 − Pr(a_1^n)(1 − d^{n+1}c))^{i−1} log i .

Let v be the function in Definition 12.11. The average over all n-blocks a1 . . . an is bounded by

(1 − d^{n+1}c)/(1 + d^{n+1}c) E[v(Pr(x_1^n)(1 + d^{n+1}c))] ≤ E[log R(n)] ≤ (1 + d^{n+1}c)/(1 − d^{n+1}c) E[v(Pr(x_1^n)(1 − d^{n+1}c))] .

Multiplying by (1 + d^{n+1}c)/(1 − d^{n+1}c) and subtracting nh, we have

E[v(Pr(x_1^n)(1 + d^{n+1}c))] − nh ≤ (1 + d^{n+1}c)/(1 − d^{n+1}c) E[log R(n)] − nh ,

or

E[v(Pr(x_1^n)(1 + d^{n+1}c)) + log(Pr(x_1^n)(1 + d^{n+1}c))] − E[log Pr(x_1^n) + nh] − log(1 + d^{n+1}c) ≤ (1 + d^{n+1}c)/(1 − d^{n+1}c) E[log R(n)] − nh    (12.1)

and similarly, from the second inequality,

(1 − d^{n+1}c)/(1 + d^{n+1}c) E[log R(n)] − nh ≤ E[v(Pr(x_1^n)(1 − d^{n+1}c)) + log(Pr(x_1^n)(1 − d^{n+1}c))] − E[log Pr(x_1^n) + nh] − log(1 − d^{n+1}c) .    (12.2)

Recall that v(r) + log r converges to −γ/ln 2 as r ↓ 0. For any small δ > 0 there exists n0 such that Pr(x_1^n)(1 + d^{n+1}c) ≤ δ if n ≥ n0. By taking r = Pr(x_1^n)(1 + d^{n+1}c) we can check that the functions v(Pr(x_1^n)(1 + d^{n+1}c)) + log(Pr(x_1^n)(1 + d^{n+1}c)) are bounded by the same upper bound for all n ≥ n0. Hence the Lebesgue Dominated Convergence Theorem implies that

lim_{n→∞} E[v(Pr(x_1^n)(1 + d^{n+1}c)) + log(Pr(x_1^n)(1 + d^{n+1}c))] = −γ/ln 2 .


Recall that E[log Pr(x_1^n) + nh] = −H0 + h. As n → ∞, (12.1) implies that

−γ/ln 2 + H0 − h ≤ lim_{n→∞} ( E[log R(n)] − nh ) ,

and similarly (12.2) implies that

lim_{n→∞} ( E[log R(n)] − nh ) ≤ −γ/ln 2 + H0 − h .

Definition 12.19. Theorem 12.18 implies that

lim_{n→∞} E[ log R(n+1) − log R(n) ] = h ,

and in this case we define h(n) by h(n) = E[ log R(n+1) − log R(n) ]. We use h(n) to estimate the entropy of a Markov shift associated with an irreducible and aperiodic matrix.

Example 12.20. Consider the Markov shift X = ∏_{1}^{∞} {0, 1, 2} associated with the irreducible and aperiodic matrix

P = ( 1/2 1/4 1/4 ; 1/4 0 3/4 ; 1/2 1/2 0 ) .

Then π = (10/23, 6/23, 7/23), h ≈ 1.168 and H0 − γ/ln 2 ≈ 0.72. (Note that the entropy can be greater than 1 since we use the logarithm to the base 2.) We estimate E[log R(n)] by taking the average along an orbit. See Maple Program 12.6.5 and Fig. 12.6. For more information see [CKd1].


Fig. 12.6. y = Ave[log R(n) ] − (n − 1)h (left) and y = h(n) (right) for Ex. 12.20. The horizontal lines indicate H0 − γ/ ln 2 and h, respectively
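The numerical constants of Example 12.20 can be reproduced independently of Maple. A hedged Python sketch (the function name stationary is ours; power iteration is one of several ways to get the left eigenvector):

```python
import math

P = [[1/2, 1/4, 1/4],
     [1/4, 0.0, 3/4],
     [1/2, 1/2, 0.0]]

def stationary(P, iters=200):
    """Power iteration for the left eigenvector pi with pi P = pi, sum(pi) = 1."""
    pi = [1/3, 1/3, 1/3]
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    return pi

pi = stationary(P)
h = -sum(pi[i] * P[i][j] * math.log2(P[i][j])
         for i in range(3) for j in range(3) if P[i][j] > 0)
H0 = -sum(p * math.log2(p) for p in pi)
gamma = 0.5772156649015329  # Euler's constant

print([round(23 * p, 6) for p in pi])      # -> [10.0, 6.0, 7.0]
print(round(h, 3))                         # -> 1.168
print(round(H0 - gamma / math.log(2), 3))  # -> 0.718
```

These match the values π = (10/23, 6/23, 7/23), h ≈ 1.168 and H0 − γ/ln 2 ≈ 0.72 quoted in the example.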


12.4 Product of the Return Time and the Probability

Kac's lemma implies that

E[Rn Pn] = Σ_{B∈Pn} E[Rn Pn | B] Pr(B)
= Σ_{B∈Pn} E[Rn | B] Pr(B) Pr(B)
= Σ_{B∈Pn} (1/Pr(B)) Pr(B) Pr(B)
= Σ_{B∈Pn} Pr(B) = 1 .

Theorem 12.21. Let X be an irreducible and aperiodic Markov shift space. Let Rn be the nonoverlapping first return time defined in Definition 12.10 and let Pn(x) be the probability of the n-block x1 . . . xn. Then we have

lim_{n→∞} Pr(Rn Pn ≤ ξ) = ∫_0^ξ e^{−t} dt .

For the proof consult [W2], and for a related result see [Pit]. See Fig. 12.7 and Maple Program 12.6.6 for simulations for the (2/3, 1/3)-Bernoulli shift and for the Markov shift with

P = ( 1/3 2/3 ; 1/4 3/4 ) .


Fig. 12.7. Pdf’s of Rn Pn , n = 10, for a Bernoulli shift (left) and a Markov shift (right) with y = e−x

Remark 12.22. (i) For the probability distribution of lim_{n→∞} ln(Rn Pn) see Ex. 1.40.
(ii) As a corollary of Theorem 12.21, we obtain

lim_{n→∞} E[ln(Rn Pn)] = ∫_0^∞ (ln y) e^{−y} dy = −γ ,

lim_{n→∞} Var[ln(Rn Pn)] = ∫_0^∞ (ln y − (−γ))² e^{−y} dy = π²/6 .

Use Maple to verify them.
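Instead of Maple, the two integrals can be checked by direct numerical quadrature. A Python sketch (the cutoff 50 and the step count are our choices; the log singularity at 0 is integrable, so a fine midpoint rule suffices):

```python
import math

def integrate(f, a, b, steps):
    """Composite midpoint rule on [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

gamma = 0.5772156649015329

# Integral of ln(y) e^{-y} dy over (0, inf) should be -gamma.
mean = integrate(lambda y: math.log(y) * math.exp(-y), 1e-9, 50.0, 1_000_000)
# Integral of (ln(y) + gamma)^2 e^{-y} dy should be pi^2 / 6.
var = integrate(lambda y: (math.log(y) + gamma) ** 2 * math.exp(-y),
                1e-9, 50.0, 1_000_000)

print(round(mean, 4), round(var, 4))
```

The printed values agree with −γ ≈ −0.5772 and π²/6 ≈ 1.6449 to the displayed precision.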


12.5 Symbolic Dynamics and Topological Entropy

A topological dynamical system (X, T) is a compact metric space X together with a continuous mapping T : X → X. Let U be an open cover of X, i.e., a collection of open subsets Ui such that ∪i Ui = X. Let H(U) denote the logarithm of the cardinality of a finite subcover of U with the smallest cardinality, i.e.,

H(U) = min_{U1 ⊂ U} log card(U1) ,

where the minimum is taken over all finite subcovers U1. Let U ∨ V be the cover made up of the sets U ∩ V where U ∈ U and V ∈ V. The topological entropy of the cover U with respect to T is defined by

h(U, T) = lim_{n→∞} (1/n) H( ∨_{i=0}^{n−1} T^{−i} U ) .

The topological entropy of T is defined by

htop = sup_U h(U, T) ,

where the supremum is taken over all open covers U. The measure theoretic entropy with respect to a probability measure µ on X is denoted by hµ when there is a possibility of confusion. (Sometimes measure theoretic entropy is called metric entropy for short, or Kolmogorov–Sinai entropy.) It is known that the topological entropy is the supremum of the measure theoretic entropies; this is called the variational principle. See [Man],[PoY]. For an elementary introduction to symbolic dynamics see [DH]. For a related result see [KLe]. Consider the full shift space X0 = ∏_{1}^{∞} {s1, . . . , sk} on k symbols. Define a metric d on X0 by

d(x, y) = 2^{−n} , n = min{i ≥ 1 : xi ≠ yi}

for two distinct points x = (x1, x2, . . .) and y = (y1, y2, . . .) in X0. Then X0 is compact. Let T : X0 → X0 be the transformation defined by the left shift, i.e., T((x1, x2, . . .)) = (x2, x3, . . .). Then T is continuous. A T-invariant closed subset X ⊂ X0 is called a topological shift space. The associated transformation on X is the shift mapping T restricted to X. The study of such a mapping is called symbolic dynamics. Let Bn(X) be the collection of n-blocks appearing in sequences in X and let N(n) be its cardinality. An n-block a1 . . . an ∈ Bn(X) is identified with the cylinder set {x ∈ X : xi = ai, 1 ≤ i ≤ n}. For a proof of the following fact consult [LM].

Theorem 12.23. Let X be a topological shift space with the shift transformation T. Then the topological entropy of T is given by

htop = lim_{n→∞} (1/n) log N(n) .


A finite graph consists of vertices V = {v1, . . . , vk} and some directed edges connecting vertices. Assume that for any pair of vertices u, v there exists at most one directed edge from u to v. We do not exclude the case when u and v are identical, and in this case the edge is called a loop. Define a k × k matrix A = (Aij)_{1≤i,j≤k}, called an adjacency matrix, where Aij is the number of directed edges from vi to vj. A path of length n is a sequence of n connected directed edges e1, . . . , en, or equivalently, it is a series of n + 1 vertices such that ej is the edge from the jth to the (j + 1)th vertex. Then

(A²)ij = Σ_k Aik Akj

is the number of paths of length 2 from vi to vj. Similarly, (A^n)ij is the number of paths of length n from vi to vj. An infinite path is an infinite sequence of connected directed edges. Using the set of vertices as an alphabet, we define a topological shift space X ⊂ ∏_{1}^{∞} V as the set of all infinite paths. Sometimes shift spaces are defined in terms of edges, not vertices; the two approaches are equivalent. For more information consult [BS],[Kit],[LM].

Theorem 12.24. Let X be a topological shift space defined by a finite graph. Suppose that the corresponding adjacency matrix A is irreducible, i.e., for every (i, j) there exists n such that (A^n)ij > 0. Then

htop = lim_{n→∞} (1/n) log N(n) = log λA ,

where λA is the Perron–Frobenius eigenvalue of A.

Proof. Recall the definitions of norms in Subsect. 1.1.2 and Sect. 10.1. Note that N(n + 1) is the number of all paths of length n on the graph. Hence

N(n + 1) = Σ_{i,j} (A^n)ij = ||A^n||_1 .

Since the vector space of all k × k matrices is finite-dimensional, all norms on it are equivalent. Thus there exist constants 0 < c1 < c2 such that c1 ||A^n||_1 ≤ ||A^n||_op ≤ c2 ||A^n||_1. Since the operator norm of A^n is dominated by (λA)^n, we have

(1/n) log N(n) ≈ (1/n) log ||A^n||_op ≈ log λA .

Now we take the limit as n → ∞.


Fig. 12.8. A graph represents a shift space

Example 12.25. Consider the set X of infinite sequences on the symbols '0' and '1' in which two successive 1's do not appear. Then V = {0, 1} and there is no loop at the vertex 1. See Fig. 12.8. The adjacency matrix is given by

A = ( 1 1 ; 1 0 ) ,

and λA = (1 + √5)/2. Hence htop = log((1 + √5)/2). See Maple Program 12.6.7. If X is the full shift on k symbols, then htop = log k since N(n) = k^n.
The next theorem gives another way to obtain the topological entropy of a shift transformation.

Theorem 12.26. Suppose that we have a topological shift space X. Let µ be a shift-invariant ergodic probability measure on X. Then

lim_{n→∞} (1/n) log ∫_X Rn dµ ≤ htop .

If µ satisfies the additional property that any block appearing in X has positive measure, then we have

lim_{n→∞} (1/n) log ∫_X Rn dµ = htop .

Proof. Take B ∈ Bn(X). If B has positive measure, then Kac's lemma implies ∫_B Rn dµ = 1. Hence

∫_X Rn dµ = Σ_{B∈Bn(X), µ(B)>0} ∫_B Rn dµ = card{B ∈ Bn(X) : µ(B) > 0} ≤ N(n) .

If µ(B) > 0 for any B, then we have the equality.
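For the golden-mean shift of Example 12.25 the block count N(n) of Theorem 12.23 and the adjacency-matrix path count of Theorem 12.24 can be compared directly. A Python sketch (helper names are ours); the number of '11'-free n-blocks follows the Fibonacci recursion, and the sum of the entries of A^n counts paths:

```python
import math

def count_blocks(n):
    """Number N(n) of n-blocks over {0,1} containing no '11',
    via the transfer recursion behind the adjacency matrix A = [[1,1],[1,0]]."""
    end0, end1 = 1, 1            # admissible 1-blocks ending in 0 / in 1
    for _ in range(n - 1):
        end0, end1 = end0 + end1, end0   # 0 may follow anything; 1 only follows 0
    return end0 + end1

def matrix_power_sum(n):
    """Sum of the entries of A^n for A = [[1,1],[1,0]]; equals N(n+1)."""
    a, b, c, d = 1, 0, 0, 1      # identity matrix
    for _ in range(n):
        a, b, c, d = a + c, b + d, a, b  # multiply by A on the left
    return a + b + c + d

assert count_blocks(15) == matrix_power_sum(14) == 1597  # as in Maple Program 12.6.7
print(math.log2(count_blocks(200)) / 200)  # close to log2((1+sqrt(5))/2) = 0.6942...
```

The agreement with the value 1597 produced by Maple Program 12.6.7 illustrates N(n + 1) = Σ(A^n)ij, and (1/n) log2 N(n) approaches log2 λA.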

The average of Rn is underestimated in general. If the sample size is not sufficiently large, then only the typical n-blocks of relatively large measure (≈ 2^{−nh}) are sampled. In other words, if an n-block has a relatively small probability, then it may not be included in a sample. Since the first return time for such a block would be large by Kac's lemma, an average of Rn over a small number of sample blocks underestimates ∫_X Rn dµ. See Table 12.1 for simulation results with the (3/4, 1/4)-Bernoulli shift.


Table 12.1. Average of the first return time for the (3/4, 1/4)-Bernoulli shift

n    2^n    Ave[Rn]
2    4      4.0
3    8      8.0
4    16     15.9
5    32     31.3
6    64     57.7
7    128    108.1
8    256    215.4
9    512    398.6
10   1024   744.9

Theorem 12.27 (The variational inequality). Let T be an ergodic shift transformation on a shift space (X, µ). The measure theoretic entropy is bounded by the topological entropy.

Proof. Note that the metric version of the nth return time is equal to the first return time defined in the first return time formula. Jensen's inequality implies that

hµ = lim_{n→∞} (1/n) ∫_X log Rn dµ ≤ lim_{n→∞} (1/n) log ∫_X Rn dµ ,

and the right hand side is bounded by htop by Theorem 12.26.

Remark 12.28 (The variational principle). Let T : X → X be a continuous mapping (or a homeomorphism) of a compact metric space X, and let M(X, T) denote the set of all T-invariant probability measures on X. Then

htop = sup{hµ(T) : µ ∈ M(X, T)} .

See [Wa1] for the proof.

Remark 12.29. Consider X = ∏_{1}^{∞} {0, 1} with the metric

d(x, y) = 2^{−k} , k = min{i ≥ 1 : xi ≠ yi}

for x ≠ y. Put

Eα,β = {x ∈ X : lim inf_{n→∞} (log Rn(x))/n = α, lim sup_{n→∞} (log Rn(x))/n = β} .

It is known that Eα,β has Hausdorff dimension 1 for each pair α, β ∈ [0, ∞] with α ≤ β. Consult [FW] for the proof. For the definition of the Hausdorff dimension see Sect. 13.1.


12.6 Maple Programs

12.6.1 The first return time Rn for the Bernoulli shift

Simulate the Ornstein–Weiss formula for the Bernoulli shift on two symbols '0' and '1'.

> with(plots):
> k:=4:
> q:=1./(k+1):

This is the probability of the symbol '1'.

> entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q);
entropy := .7219280949
> Max_Block:=30:

This is the maximum value for the block length.

> N:=round(2^(entropy*Max_Block));
N := 3308722

This is the total length of the binary sequence x1 x2 x3 . . .. There is a chance that the recurrence does not occur within the time limit N for block lengths close to Max_Block.

Generate a Bernoulli sequence of length N.

> ran:= rand(0..k):
> for i from 1 to N do x[i]:=trunc(ran()/k): od:

Compare the following with q.

> evalf(add(x[i],i=1..N)/N);
.2001201672
> seq(x[i],i=1..Max_Block);
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0
> evalf(add(x[i],i=1..Max_Block)/Max_Block);
.1666666667

In the above block of length 30 there are fewer 1's than average, so the simulation result need not agree closely with the theoretical prediction.

Compute the first return time.

> for n from 1 to Max_Block do
>   k:=1;
>   while(add(abs(x[j]-x[j+k]),j=1..n)>0) do k:=k+1: od:
>   R[n]:=k:
> od:

Draw the graph.

> g1:=listplot([seq([n,log[2.](R[n])/n],n=1..Max_Block)]):
> g2:=plot(entropy,x=0..30,labels=["n"," "]):
> display(g1,g2);

See Fig. 12.1.


12.6.2 Averages of (log Rn)/n and Rn for the Bernoulli shift

The averages of (log Rn)/n and Rn are computed for the Bernoulli shift on two symbols '0' and '1'.

> with(plots):

Choose an integer.

> k:=3:

Choose the probability of the symbol '1'.

> q:=1./(k+1);
q := 0.2500000000
> entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q);
entropy := 0.8112781245

Choose the maximum value for the block length.

> Max_Block:=12:

Generate a Bernoulli sequence of length N. We multiply by 200 since there is a chance that the recurrence does not occur within the time limit 2^(entropy*Max_Block).

> N:=round(2^(entropy*Max_Block))*200;
N := 170400
> ran:= rand(0..k):
> for i from 1 to N do x[i]:=trunc(ran()/k): od:

Compare the following with q.

> evalf(add(x[i],i=1..N)/N);
0.2503345070

Choose the sample size S.

> S:=1000:
> M:=N - Max_Block - S + 2;
M := 169390

Compute the first return time R[n, s] where s, 1 ≤ s ≤ S, is the space variable.

> for s from 1 to S do
>   R[0,s]:=1: od:
> for s from 1 to S do
>   for n from 1 to Max_Block do
>     k:=R[n-1,s]:
>     i:=1:
>     while (i <= n and k <= M) do
>       if x[s+i-1]=x[s+k+i-1] then i:=i+1:
>       else k:=k+1; i:=1; fi;
>     od;
>     R[n,s]:=k;
>   od:
> od:

If the maximum of R[n, s] is greater than M, then there is a premature ending due to the short length of the given sequence.

> max(seq(R[Max_Block,s],s=1..S));
154211

Find the average of R[n, s] with respect to s. Simulations usually produce smaller values than the theoretical prediction equal to 2^n since our sample size S is not large enough. When q ≠ 1/2 and n is large, there are blocks which have very small measures, so that their return times are very large by Kac's lemma. If S is small, then these blocks of negligible probability are not sampled in general, and the average of the return time is underestimated.

> for n from 1 to Max_Block do
>   AveR[n]:=add(R[n,s],s=1..S)/S:
> od:

Compute the average of (log2 R[n, s])/n with respect to s.

> for n from 1 to Max_Block do
>   AveLog[n]:=add(log[2.](R[n,s])/n,s=1..S)/S;
> od:

Draw the graphs.

> g1:=pointplot([seq([n,AveLog[n]],n=1..Max_Block)]):
> g2:=plot(entropy,x=0..Max_Block,labels=["n","y"]):
> g3:=listplot([seq([n,AveLog[n]],n=1..Max_Block)]):
> display(g1,g2,g3);

See the left graph in Fig. 12.3.

> for n from 1 to Max_Block do
>   Top[n]:=log[2.0](add(R[n,s],s=1..S)/S)/n:
> od:
> g4:=pointplot([seq([n,Top[n]],n=1..Max_Block)]):
> g5:=listplot([seq([n,Top[n]],n=1..Max_Block)]):
> g6:=plot(1,x=0..Max_Block,labels=["n","y"]):
> display(g4,g5,g6);

See the right graph in Fig. 12.3.

12.6.3 Probability density function of (log Rn)/n for the Bernoulli shift

Find the pdf's of Rn and (log Rn)/n for the Bernoulli shift transformation on two symbols '0' and '1'.

> with(plots):

Choose the probability of the symbol '1'.

> q:=0.25:
> entropy:= -(1-q)*log[2.](1-q)-q*log[2.](q);
entropy := 0.8112781245

Choose the block length.

> n:=10:

Choose the total length of the sequence.

> N:=round(2^(entropy*n))*1000;
N := 277000

We multiply by 1000 since there is a small chance that the recurrence does not occur within a reasonable time limit. Now generate a Bernoulli sequence of length N.

> k:=3:
> ran:= rand(0..k):
> for i from 1 to N do
>   x[i]:=trunc(ran()/k):
> od:

Choose the sample size.

> S:=10000:

Find the first return time Rn.

> for s from 1 to S do
>   t:=1;
>   while(add(abs(x[j]-x[j+t]),j=s..n+s-1)>0) do
>     t:=t+1:
>   od:
>   R[s]:=t;
> od:

Draw the pdf of the return time Rn. See Fig. 12.4.

> Bin1:=2000:
> epsilon:=0.0000001:
> Max:=max(seq(R[s],s=1..S))+epsilon;
Max := 219147.
> for k from 1 to Bin1 do freq[k]:=0: od:
> for s from 1 to S do
>   slot:=ceil(Bin1*R[s]/Max):
>   freq[slot]:=freq[slot]+1:
> od:
> listplot([seq([(i-0.5)*Max/Bin1, freq[i]*Bin1/S/Max], i=1..Bin1/50)]);

Draw the pdf of (log Rn)/n.

> Bin2:=60:
> Max2:=max(seq(log[2.](R[s])/n,s=1..S),2):
> for k from 1 to Bin2 do freq2[k]:=0: od:
> for s from 1 to S do
>   slot2:=ceil(Bin2/Max2*log[2.](R[s])/n + epsilon):
>   freq2[slot2]:=freq2[slot2]+1:
> od:
> listplot([seq([(i-.5)*Max2/Bin2, freq2[i]*Bin2/S/Max2], i=1..Bin2)]);

See Fig. 12.4.


12.6.4 Convergence speed of the average of log Rn for the Bernoulli shift

Check the formula in Theorem 12.12 on the average of log Rn for the Bernoulli shift.

> with(plots):
> k:=2:
> p:=1./(k+1):

This is the probability of the symbol '1'.

> entropy:=-p*log[2.](p)-(1-p)*log[2.](1-p);
entropy := 0.9182958340

Choose the maximum value for the block length.

> MaxBlock:=10:

Choose the sample size S.

> S:=1000:
> N:=round(2^(entropy*MaxBlock))*1000 + S*MaxBlock;
N := 591000

Generate a Bernoulli sequence of length N.

> ran:=rand(0..k):
> for i from 1 to N do x[i]:=trunc(ran()/k): od:

Compare the following with p.

> evalf(add(x[i],i=1..N)/N);
0.3341133672

Find the return time R[n, s].

> for n from 1 to MaxBlock do
>   for s from 1 to S do
>     t:=n;
>     while(add(abs(x[j]-x[j+t]),j=n*(s-1)+1..n*s)>0) do
>       t:=t+n:
>     od:
>     R[n,s]:=t/n;
>   od:
> od:

Find the average of the logarithm of the return time.

> for n from 1 to MaxBlock do
>   Ave[n]:=-add(log[2.](R[n,q]),q=1..S)/S + n*entropy:
> od:

Draw the graph.

> g1:=plot(0.832746,x=0..MaxBlock,labels=["n","y"]):
> g2:=listplot([seq([n,Ave[n]],n=1..MaxBlock)]):
> display(g1,g2);

See Fig. 12.5.


12.6.5 The nonoverlapping first return time

Estimate the entropy by the nonoverlapping first return time for a Markov shift defined by P.

> with(plots):
> with(linalg):
> P:=matrix([[1/2,1/4,1/4],[1/4,0,3/4],[1/2,1/2,0]]);
> eigenvectors(transpose(P));

Find the Perron–Frobenius eigenvector v.

> v:=[10/23,6/23,7/23]:
> epsilon:=10.0^(-10):
> entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]+epsilon),j=1..3),i=1..3);
entropy := 1.168159511

Choose the maximum value for the block length.

> Max_Block:=7:
> SampleSize:=10000:
> N:=ceil(2^(Max_Block*entropy))*1500 + SampleSize;
N := 445000

Generate a Markov sequence of length N.

> ran0:=rand(0..1): ran1:=rand(1..2): ran2:=rand(0..3):
> x[0]:=1:
> for j from 1 to N do
>   if x[j-1]=0 then x[j]:=ran0()*ran1():
>   elif x[j-1]=1 then x[j]:=2*ceil(ran2()/3):
>   else x[j]:=ran0():
>   fi:
> od:

Count the number of appearances of the symbols 0, 1 and 2.

> Num[0]:=0: Num[1]:=0: Num[2]:=0:
> for i from 1 to N do
>   Num[x[i]]:=Num[x[i]]+1:
> od:

Compare the preceding results with the measures of the cylinder sets [0]1, [1]1, [2]1 of length 1.

> evalf(Num[0]/N-v[1]);
0.0005095261358
> evalf(Num[1]/N-v[2]);
−0.0002875427455
> evalf(Num[2]/N-v[3]);
−0.0002219833903
> M:=trunc((N-SampleSize)/2/Max_Block);
M := 31071

Compute the modified first return time R[n, s] of Definition 12.16 and the averages of its logarithm.

> for n from 1 to Max_Block do
>   for s from 1 to SampleSize do
>     t:=1: i:=1:
>     while (i <= n and t <= M) do
>       if x[s+i-1]=x[s+2*t*n+i-1] then i:=i+1:
>       else t:=t+1; i:=1; fi;
>     od;
>     R[n,s]:=t;
>   od:
> od:
> for n from 1 to Max_Block do
>   Ave[n]:=add(log[2.](R[n,s]),s=1..SampleSize)/SampleSize:
> od:
> max(seq(R[Max_Block,s],s=1..SampleSize));
28412

If max ≥ M then there is a possibility of premature ending due to the short length of the given sequence. If this happens, we have to generate a longer sequence and do the simulation again.

Calculate h(n).

> for n from 1 to 6 do
>   h[n]:=Ave[n+1]-Ave[n]:
> od:
> H0:= add(-v[i]*log[2.0](v[i]),i=1..3);
H0 := 1.550494982
> evalf(H0-gamma/ln(2));
0.7177488048

Draw the graphs.

> g1:=listplot([seq([n,Ave[n]-(n-1)*entropy],n=1..Max_Block)]):
> g2:=plot(H0-gamma/ln(2.),x=0..Max_Block,labels=["n","y"]):
> display(g1,g2);

See the left graph in Fig. 12.6.

> f1:=listplot([seq([n,h[n]],n=1..Max_Block-1)]):
> f2:=plot(entropy,x=0..7,labels=["n","y"]):
> display(f1,f2);

See the right graph in Fig. 12.6.


12.6.6 Probability density function of Rn Pn for the Markov shift

Find the probability density function of the product of the first return time Rn and the probability Pn of an n-block for the Markov shift.

> with(plots):

Construct a typical sequence from a binary Markov shift defined by P.

> with(linalg):
> P:=matrix([[1/3,2/3],[1/4,3/4]]);
> eigenvectors(transpose(P));

Find the Perron–Frobenius eigenvector v.

> v:=[3/11,8/11]:
> SampleSize:=1000:
> entropy:=add(add(-v[i]*P[i,j]*log[2.](P[i,j]),j=1..2),i=1..2);
entropy := .8404647727

Choose the block length.

> n:=10:
> N:=300000:

Generate a Markov sequence of length N.

> ran0:=rand(0..2):
> ran1:=rand(0..3):
> b[0]:=1:
> for j from 1 to N do
>   if b[j-1]=0 then b[j]:=ceil(ran0()/3):
>   else b[j]:=ceil(ran1()/4):
>   end if
> od;

The measure of the cylinder set {x : x1 = 1} is equal to 8/11. The following two numbers should be approximately equal.

> evalf(add(b[j],j=1..N)/N); 8/11.;
.7256033333
.7272727273

Compute Pn.

> for j from 1 to SampleSize do
>   Prob[j]:=v[b[j]+1]:
>   for t from 0 to n-2 do
>     Prob[j]:=Prob[j]*P[b[j+t]+1,b[j+t+1]+1]:
>   od:
> od:

Compute Rn.

> for s from 1 to SampleSize do
>   t:=1;
>   while (add(abs(b[j]-b[j+t]),j=s..n+s-1) > 0) do t:=t+1: od:
>   ReturnTime[s]:=t;
> od:

Check whether the average of Rn is equal to 2^n.

> evalf(add(ReturnTime[s],s=1..SampleSize)/SampleSize); 2^n;
988.2900000
1024
> for s from 1 to SampleSize do
>   RnPn[s]:=ReturnTime[s]*Prob[s]:
> od:
> Bin:=50:
> Max:=evalf(max(seq(RnPn[s],s=1..SampleSize)));
Max := 11.35826527

We divide the interval 0 ≤ x ≤ Max into 50 subintervals of equal length.

> for i from 1 to Bin do freq[i]:= 0: od:
> for s from 1 to SampleSize do
>   slot:=ceil(Bin*RnPn[s]/Max):
>   freq[slot]:= freq[slot]+1:
> od:

Draw the graphs.

> g1:=listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/SampleSize/Max],i=1..Bin)]):
> g2:=plot(exp(-x),x=0..10):
> display({g1,g2},labels=["Rn Pn","Prob"]);

See Fig. 12.7.

12.6.7 Topological entropy of a topological shift space

Find the topological entropy of a topological shift space defined by a finite graph.

> with(linalg):

Choose an adjacency matrix.

> A:=matrix([[1,1],[1,0]]):
> eigenvals(A);
1/2 + √5/2, 1/2 − √5/2

Calculate log2 λA.

> log[2.](%[1]);
0.6942419136
> ran:=rand(0..1):
> N:=100000:
> x[0]:=1:
> for j from 1 to N do
>   if x[j-1]=0 then x[j]:= ran(): else x[j]:=0: fi:
> od:
> n:=15:
> Bin:=2^n:
> for k from 0 to Bin-1 do freq[k]:=0: od:
> SampleSize:=N-n+1;

Compute the number of different blocks that appear in the given sequence.

> for i from 1 to SampleSize do
>   k:=add(x[i+d-1]*2^(n-d),d=1..n):
>   freq[k]:=freq[k]+1:
> od:
> Num_block:=0:
> for k from 0 to Bin-1 do
>   if freq[k] > 0 then Num_block:=Num_block+1: fi:
> od:
> Num_block;
1597

The following should be close to log2 λA.

> evalf(log[2](Num_block)/n);
0.7094099067
> B:=evalm(A&^(n-1));
B := matrix([[610, 377], [377, 233]])

The following is equal to Num_block.

> add(add(B[i,j],i=1..2),j=1..2);
1597

13 Recurrence and Dimension

Recurrence is one of the central themes in dynamical systems theory. A typical orbit of a given transformation returns to a neighborhood of a starting point infinitely many times with probability 1 under suitable conditions. Consider a transformation T defined on a metric space X and the nth first return time Rn of T to the ball of radius 2^{−n} centered at a starting point. Then (log Rn)/n converges to the Hausdorff dimension of X under suitable conditions, where log denotes the logarithm to the base 2 throughout the chapter. For irrational translations the conclusion does not always hold, and the answer depends on the type of the given irrational number.

13.1 Hausdorff Dimension

Let X be a set and let P(X) be the collection of all subsets of X. An outer measure on X is a function µ* : P(X) → [0, ∞] satisfying the following conditions: (i) µ*(∅) = 0, (ii) if A ⊂ B then µ*(A) ≤ µ*(B), and (iii) if Ej, j ≥ 1, are pairwise disjoint then

µ*( ∪_{j=1}^{∞} Ej ) ≤ Σ_{j=1}^{∞} µ*(Ej) .

Note that an outer measure is defined for all subsets of X. Let (X, d) be a metric space, and let B be the Borel σ-algebra, i.e., the σ-algebra generated by the open subsets. The diameter of U ⊂ X is defined by

diam(U) = sup{d(x, y) : x, y ∈ U} .

For A ⊂ X and ε > 0, let

Hα,ε(A) = inf Σ_i diam(Ui)^α ,


where the infimum is taken over all countable coverings of A by subsets Ui with diameters diam(Ui) ≤ ε. The Hausdorff α-measure on X is defined by

Hα(A) = lim_{ε↓0} Hα,ε(A) = sup_{ε>0} Hα,ε(A) .

Then it is an outer measure. There exists a unique value of α such that Hs(X) = ∞ if s < α and Hs(X) = 0 if s > α. This α is called the Hausdorff dimension of (X, d). For X = R^n the Hausdorff dimension is equal to n. If the Hausdorff dimension of X is not an integer, then X is called a fractal set.

Example 13.1. The Cantor set C is a fractal set, and its Hausdorff dimension is equal to log3 2 ≈ 0.6309. For the proof, recall that C is defined by the limiting process of removing 2^{n−1} open intervals of length 3^{−n} at stage n for every n ≥ 1. Let Cn denote the remaining closed set, which consists of 2^n closed intervals Un,i, 1 ≤ i ≤ 2^n, of length 3^{−n}. Then C = ∩_{n=1}^{∞} Cn and Hα,3^{−n}(C) ≤ 2^n 3^{−αn} using {Un,i}_i as a covering. Hence Hα(C) = 0 for α > log3 2. Now it is enough to show that Hα(C) > 0 for α = log3 2 using the fact that 1 ≤ Σ_i diam(Ui)^α for any countable covering {Ui}_i of C. To prove it we may assume that each Ui is an interval [ai, bi] where ai = inf{x : x ∈ Ui} and bi = sup{x : x ∈ Ui}, since this does not change the diameter. Next we may assume that each Ui is open by slightly expanding it. Since C is compact, there is a finite open subcovering. Thus it suffices to prove that 1 ≤ Σ_i diam(Ui)^α for any finite covering of C by intervals. Since a finite covering of C by intervals is a covering of Cn for some sufficiently large n, each Ui contains intervals of length 3^{−n}. Since subdivision of the intervals does not increase the sum, it remains to verify the inequality when the Ui are the intervals in Cn. This is true since 2^n 3^{−αn} = 1.

Example 13.2. Let X = ∏_{1}^{∞} {0, 1} be the (p, 1 − p)-Bernoulli shift space, regarded as the unit interval [0, 1] endowed with the Euclidean metric. Let Xp denote the set of all binary sequences x in X such that

Xp = {x ∈ X : lim

In 1949 Eggleston[Eg] proved that the Hausdorﬀ dimension of Xp is equal to the entropy −p log2 p − (1 − p) log2 (1 − p) of the Bernoulli shift transformation. A similar result can be obtained for a Markov shift space. See [Bi]. For an introduction to Hausdorﬀ measures, consult [Ed],[Fa],[Roy]. For the relation between the Hausdorﬀ dimension and the ﬁrst return time see [DF].
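The value log_3 2 in Example 13.1 can be seen in a quick numerical sketch (an illustrative Python computation, not part of the original text): covering Cn by its 2^n intervals of length 3^(−n) gives the box-counting estimate log N(ε)/log(1/ε) → log 2/log 3.

```python
from math import log

def cantor_intervals(n):
    """Endpoints of the 2^n closed intervals of length 3^-n making up C_n."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        new = []
        for a, b in intervals:
            t = (b - a) / 3.0
            new.append((a, a + t))       # keep the left third
            new.append((b - t, b))       # keep the right third
        intervals = new
    return intervals

n = 12
N = len(cantor_intervals(n))             # number of covering boxes at scale 3^-n
estimate = log(N) / log(3 ** n)          # log N(eps) / log(1/eps)
print(round(estimate, 4), round(log(2) / log(3), 4))
```

Since N = 2^n exactly, the estimate agrees with log 2/log 3 at every scale; for a set that is not exactly self-similar one would instead look at the slope as n grows.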

13.2 Recurrence Error

Let (X, A, µ) be a probability space and let T : X → X be a µ-preserving transformation. If there exists a metric d on X such that the σ-algebra A is the Borel σ-algebra generated by d, then we call (X, A, µ, d, T) a metric measure preserving system. The mapping T is not assumed to be continuous. Boshernitzan [Bos] proved the following fact.

Fact 13.3 Let (X, A, µ, d, T) be a metric measure preserving system. Assume that for some α > 0, the Hausdorﬀ α-measure Hα agrees with the measure µ on the σ-algebra A. Then for µ-almost every x ∈ X we have

lim inf_{n→∞} n^β d(T^n x, x) ≤ 1 , with β = 1/α .

For X = [0, 1] Lebesgue measure µ coincides with the Hausdorﬀ 1-measure H1 on X. Hence Boshernitzan obtained the following corollary.

Fact 13.4 Let X = [0, 1]. If T : X → X is Lebesgue measure preserving (not necessarily continuous), then, for almost every x ∈ X,

lim inf_{n→∞} n |T^n x − x| ≤ 1 .

The optimal value for the constant on the right hand side is not known. Probably the right hand side is bounded by a smaller constant depending on the transformation. A generalization of Fact 13.4 to absolutely continuous invariant measures is proved in Theorem 13.9.

Deﬁnition 13.5. Let (X, d) be a metric space. Deﬁne the kth ﬁrst return time Rk and the kth recurrence error εk by

Rk(x) = min{ s ≥ 1 : d(T^s x, x) ≤ 2^(−k) } and εk(x) = d(T^{Rk(x)} x, x) .

Example 13.6. Let T x = x + θ (mod 1) where 0 < θ < 1 is irrational. Then T preserves Lebesgue measure and its entropy equals 0. Deﬁne a metric d on X = [0, 1) by d(x, y) = ||x − y||, x, y ∈ X, where ||t|| = min{|t − n| : n ∈ Z}. Note that Rk is constant and that for 0 < x < 1 we have

lim inf_{n→∞} n ||T^n x − x|| = lim inf_{n→∞} n |T^n x − x| = lim inf_{n→∞} n ||nθ|| .

If pk/qk is the kth convergent in the continued fraction expansion of θ, then |θ − pk/qk| ≤ qk^(−2) and hence qk ||qk θ|| ≤ 1. By choosing a subsequence of pk/qk, we may have a smaller upper bound. With θ = (√5 − 1)/2 we have

lim inf_{n→∞} n ||nθ|| ≤ 1/√5 .
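As a quick numerical check (an illustrative Python sketch, not from the text), the convergent denominators qk of θ = (√5 − 1)/2 = [1, 1, 1, . . .] are the Fibonacci numbers, and qk ||qk θ|| converges to 1/√5:

```python
from math import sqrt

def dist_to_int(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

theta = (sqrt(5) - 1) / 2          # continued fraction [1, 1, 1, ...]
q_prev, q = 1, 1                   # Fibonacci denominators q_k
values = []
for _ in range(20):
    values.append(q * dist_to_int(q * theta))
    q_prev, q = q, q + q_prev

# q_k ||q_k theta|| oscillates around and converges to 1/sqrt(5) = 0.4472...
print(values[-1], 1 / sqrt(5))
```

The successive values approach 1/√5 from alternating sides, in line with the bound lim inf n||nθ|| ≤ 1/√5 above.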

Fig. 13.1. y = Rk εk, 1 ≤ k ≤ 20, for T x = {x + θ} with θ = (√5 − 1)/2 (left) and θ = e − 2 (right)

For more details, consult [Kh]. In Fig. 13.1 the graphs y = Rk εk, 1 ≤ k ≤ 20, are drawn with θ = (√5 − 1)/2 and θ = e − 2, respectively. See Fig. 13.1 and Maple Program 13.7.1.

Deﬁnition 13.7. Let T : X → X be measure preserving on a probability space (X, µ). Choose a measurable subset E with µ(E) > 0. Deﬁne the ﬁrst return time on E by RE(x) = min{ j ≥ 1 : T^j x ∈ E } for x ∈ E. It is ﬁnite for almost every x ∈ E. Let (X, A, µ, d, T) be a metric measure preserving system. Consider a ball E of radius 2^(−k) centered at x0. Suppose that T is ergodic. Then RE(x0) = Rk(x0), and in view of Kac's lemma we expect that Rk(x0) is approximately equal to 1/µ(E) in a suitable sense.
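This expectation from Kac's lemma can be illustrated numerically (a Python sketch, not from the text; it assumes the Lebesgue-measure-preserving doubling map T x = {2x} from the later examples, and uses exact rational arithmetic so that no signiﬁcant digits are lost): the sample mean of the return time to E = [0, 2^(−k)) is close to 1/µ(E) = 2^k.

```python
from fractions import Fraction
import random

random.seed(1)

def T(x):
    return (2 * x) % 1               # doubling map; exact on Fraction inputs

k = 5
radius = Fraction(1, 2 ** k)         # E = [0, 2^-k), so mu(E) = 2^-k
den = 10 ** 9 + 7                    # prime denominator: long exact orbits
returns = []
for _ in range(200):
    x = Fraction(random.randrange(den // 2 ** k), den)   # random point of E
    y, n = T(x), 1
    while y >= radius:               # iterate until the orbit re-enters E
        y, n = T(y), n + 1
    returns.append(n)

mean = sum(returns) / len(returns)
print(mean, 2 ** k)                  # Kac: expected return time is 1/mu(E)
```

The sample mean hovers near 2^k = 32; the individual return times ﬂuctuate widely, which is why the convergence statements in this chapter involve lim inf's rather than plain limits.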

13.3 Absolutely Continuous Invariant Measures

We extend Fact 13.4 to transformations with absolutely continuous invariant measures.

Lemma 13.8. Let µ be a ﬁnite continuous measure on X = [0, 1] such that µ([a, b]) > 0 for every interval [a, b], a < b. Deﬁne d : X × X → R by d(x, y) = µ([α, β]) for x, y ∈ X, where α = min{x, y} and β = max{x, y}. Then
(i) d is a metric on X, and
(ii) µ coincides with the Hausdorﬀ 1-measure H1 on X.

Proof. (i) Clearly d(x, y) = d(y, x). Since µ is continuous, we observe that d(x, y) = 0 if and only if x = y. It remains to show that d(x, y) ≤ d(x, z) + d(z, y). It suﬃces to consider the case x < y. If x ≤ z ≤ y, then d(x, y) = d(x, z) + d(z, y). If x < y ≤ z, then d(x, y) + d(y, z) = d(x, z). Finally, if z ≤ x < y, then d(z, y) = d(z, x) + d(x, y).
(ii) For A ⊂ X, ε > 0, we have H_{1,ε}(A) = inf Σ_i r_i, where the inﬁmum is taken over all countable coverings by sets U_i with diameters r_i ≤ ε. Observe that r_i = µ([a_i, b_i]) where a_i = inf{s : s ∈ U_i}, b_i = sup{s : s ∈ U_i}. Hence H_{1,ε}(A) = diam(A) = µ(A) and H1(A) = µ(A) for every interval A ⊂ X. Thus the same is true for every measurable subset A.

Theorem 13.9. Let X = [0, 1]. If T : X → X is a transformation preserving an absolutely continuous probability measure with density ρ(x) > 0, then for almost every x

lim inf_{n→∞} n |T^n x − x| ≤ 1/ρ(x) .

Proof. As in the original proof of Fact 13.4 we apply Fact 13.3. As in Lemma 13.8, deﬁne a metric d on X by

d(x, y) = ∫_x^y ρ(t) dt

for x ≤ y in X. By Fact 13.3 we have lim inf_{n→∞} n d(T^n x, x) ≤ 1 for almost every x. Note that

lim_{|T^n x−x|→0} (1/|T^n x − x|) d(T^n x, x) = lim_{|T^n x−x|→0} (1/(T^n x − x)) ∫_x^{T^n x} ρ(t) dt = ρ(x) .

Hence

lim inf_{n→∞} n d(T^n x, x) = lim inf_{n→∞} n |T^n x − x| ρ(x) ≤ 1 .

Let (X, A, µ, d, T) be a metric measure preserving system. Fix x ∈ X and choose an increasing sequence {n_j}_{j=1}^∞ such that lim_{j→∞} n_j d(T^{n_j} x, x) = lim inf_{n→∞} n d(T^n x, x). We may assume that {n_j d(T^{n_j} x, x)}_{j=1}^∞ monotonically decreases to a limit and that {d(T^{n_j} x, x)}_{j=1}^∞ monotonically decreases to 0 as j → ∞. For each n_j there exists k_j such that 2^(−k_j−1) < d(T^{n_j} x, x) ≤ 2^(−k_j). Since R_{k_j}(x) = min{s ≥ 1 : d(T^s x, x) ≤ 2^(−k_j)}, we see that R_{k_j} ≤ n_j and d(T^{R_{k_j}} x, x) ≤ 2^(−k_j). For X = [0, 1] we have the following result.

Theorem 13.10. Let X = [0, 1]. If T : X → X preserves an absolutely continuous probability measure with density ρ(x) > 0, then for almost every x

lim inf_{k→∞} Rk(x) εk(x) ≤ 2/ρ(x) .

Proof. With the same notation as in the preceding argument, we have

R_{k_j}(x) d(T^{R_{k_j}} x, x) ≤ 2 n_j d(T^{n_j} x, x) .

Hence

lim inf_{k→∞} Rk(x) εk(x) ≤ 2 lim inf_{n→∞} n |T^n x − x| ≤ 2/ρ(x) .

To investigate the behavior of n |T^n x − x| as n → ∞ we consider the subsequence n_k = Rk(x) and check the convergence of Rk(x) εk(x) as k → ∞. Take a starting point x0. In the simulations we plot (x, Rk(x)εk(x)) at x = T^j x0, 1 ≤ j ≤ 1000. See Figs. 13.2, 13.3. Note that most of the points lie below y = 2/ρ(x).

In computing the ﬁrst return time it is crucial to take a suﬃcient number of signiﬁcant digits. If T is iterated n times, then about n × h decimal digits of accuracy are lost, where h is the Lyapunov exponent of T computed with base-10 logarithms. Hence, when we begin with D decimal digits, the maximal number of possible iterations of T is about D/h. Since T is applied Rk(x) times, which is of the order of 2^(k−1) by Kac's lemma, we need to have 2^(k−1) ≈ D/h, and hence D ≈ 2^(k−1) × h. See Chap. 9. For more details, consult [C2]. See Maple Program 13.7.2.

Example 13.11. Let T x = {2x}. Then T preserves Lebesgue measure, and hence ρ(x) = 1. See the left graph in Fig. 13.2.

Example 13.12. (A piecewise linear transformation) Deﬁne T on [0, 1] by

T x = 4x (mod 1) for 0 ≤ x < 1/2 , and T x = 2x (mod 1) for 1/2 ≤ x ≤ 1 .

Then T preserves Lebesgue measure, and hence ρ(x) = 1. Since log |T′(x)| is equal to log 4 on (0, 1/2) and to log 2 on (1/2, 1), the Lyapunov exponent is equal to

(1/2) log 4 + (1/2) log 2 = (3/2) log 2 ≈ 0.4515 .

See the middle graph in Fig. 13.2.
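For the simplest case T x = {2x} of Example 13.11 the experiment can be sketched in Python as well (illustrative, not from the text; exact rational arithmetic with a large prime denominator plays the role of the high-precision Digits setting discussed above): most sample points satisfy Rk(x) εk(x) ≤ 2/ρ(x) = 2.

```python
from fractions import Fraction
import random

random.seed(2)

def T(x):
    return (2 * x) % 1            # T x = {2x}; rho(x) = 1, so 2/rho(x) = 2

k = 10
tol = Fraction(1, 2 ** k)         # recurrence threshold 2^-k
den = 10 ** 9 + 7                 # prime denominator keeps the orbit exact
below, sample = 0, 100
for _ in range(sample):
    x = Fraction(random.randrange(1, den), den)
    y, s = T(x), 1
    while abs(y - x) > tol:       # s becomes R_k(x), |y - x| becomes eps_k(x)
        y, s = T(y), s + 1
    if s * abs(y - x) <= 2:
        below += 1

print(below, "of", sample, "points satisfy R_k(x) eps_k(x) <= 2")
```

In ﬂoating point the same loop would be meaningless for k = 10, since roughly R_k ≈ 2^9 iterations of the doubling map destroy all signiﬁcant digits; the exact fractions sidestep this.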

Fig. 13.2. Plots of (x, R10(x)ε10(x)) with y = 2/ρ(x) for T x = {2x}, a piecewise linear transformation, and T x = {βx} (from left to right)

Example 13.13. Let β = (√5 + 1)/2. Consider the β-transformation T x = {βx}. Its invariant density is a step function. See the right graph in Fig. 13.2.

Example 13.14. The logistic transformation T x = 4x(1 − x) has the invariant density ρ(x) = 1/(π√(x(1 − x))). Hence 2/ρ(x) = 2π√(x(1 − x)). See the left graph in Fig. 13.3.

Example 13.15. The Gauss transformation T x = {1/x} has the invariant density ρ(x) = (1/ln 2) · 1/(1 + x). Hence 2/ρ(x) = 2(ln 2)(1 + x). See the middle graph in Fig. 13.3.

Example 13.16. (A generalized Gauss transformation) The invariant density for T x = {1/x^p}, p = 1/4, is given in Fig. 9.7. There is a unique solution a ≈ 0.32472 of the equation T²x = x in the interval 0.1 < x < 0.7. If 0.30042 < x < 0.34799, then |T²x − x| < 2^(−10), and hence Rk(x) = 2 except at x = a. See the right graph in Fig. 13.3.

Fig. 13.3. Plots of (x, R10(x)ε10(x)) with y = 2/ρ(x) for T x = 4x(1 − x), T x = {1/x} and T x = {1/x^0.25} (from left to right)

13.4 Singular Continuous Invariant Measures

Deﬁnition 13.17. Let µ be a measure on the real line. Consider the ratio µ(I)/|I| for intervals I with |I| → 0.

The set of the irrational numbers of type 1 has measure 1 and includes the irrational numbers with bounded partial quotients. For the proof see Theorem 32 in [Kh]. For simulations for irrational numbers θ of type 1 see Fig. 13.11, where graphs of y = qn ||qn θ||, 1 ≤ n ≤ 100, are plotted. Recall that

√2 = 1 + [2, 2, 2, 2, 2, 2, 2, 2, . . .] ,
√3 = 1 + [1, 2, 1, 2, 1, 2, 1, 2, . . .] ,
√7 = 2 + [1, 1, 1, 4, 1, 1, 1, 4, . . .] .

For simulations with θ of type τ > 1, consult Maple Program 13.7.4.

Fig. 13.11. y = qn ||qn θ|| for θ = √2 − 1, √3 − 1 and √7 − 2 (from left to right)

Example 13.24. Let θ = [a1, a2, a3, . . .] be the continued fraction expansion of an irrational number 0 < θ < 1. Suppose that a_i = ⌊2^(τ^i)⌋, i ≥ 1, for some τ ≥ 1. Then θ is of type τ.

Proof. For notational simplicity we prove the fact only for τ = 2. Since q_{i+1} = a_{i+1} q_i + q_{i−1}, by induction we have

2^(2+···+2^i) ≤ q_i < q_i + q_{i−1} ≤ 2^(1+2+···+2^i) .

Since 2 + · · · + 2^i = 2^(i+1) − 2, we have

2^(2^(i+1)−2) ≤ q_i ≤ 2^(2^(i+1)−1) and 2^(2^(i+1)−2) ≤ q_i + q_{i−1} ≤ 2^(2^(i+1)−1) .

Since 1/(q_{i+1} + q_i) < ||q_i θ|| < 1/q_{i+1}, we obtain

2^(−2^(i+2)+1) ≤ ||q_i θ|| ≤ 2^(−2^(i+2)+2) .

Thus for every ε > 0,

lim inf_{j→∞} j^(2−ε) ||jθ|| = lim inf_{i→∞} q_i^(2−ε) ||q_i θ|| ≤ lim inf_{i→∞} (2^(2^(i+1)−1))^(2−ε) 2^(−2^(i+2)+2) = lim inf_{i→∞} 2^(−(2^(i+1)−1)ε) = 0

and

lim inf_{j→∞} j^(2+ε) ||jθ|| = lim inf_{i→∞} q_i^(2+ε) ||q_i θ|| ≥ lim inf_{i→∞} (2^(2^(i+1)−2))^(2+ε) 2^(−2^(i+2)+1) = lim inf_{i→∞} 2^((2^(i+1)−2)ε−3) > 0 .

Hence θ is of type 2.

For a simulation of Ex. 13.24 see Maple Program 13.7.5.

Lemma 13.25. (i) If n < − log ||θ||, then Rn = 1.
(ii) If n > − log ||θ||, then there exists a unique integer i ≥ 1 such that

||q_i θ|| < 2^(−n) < ||q_{i−1} θ|| ,

and Rn = q_i.

Proof. (i) Use the fact that 2^(−n) > ||θ||.
(ii) Since ||q_i θ|| decreases monotonically to 0, there exists i such that ||q_i θ|| < 2^(−n) < ||q_{i−1} θ||. Fact 1.36(vi) implies ||q_{i−1} θ|| ≤ ||jθ|| for 1 ≤ j < q_i. Hence ||jθ|| > 2^(−n) for 1 ≤ j ≤ q_i − 1, and hence Rn ≥ q_i. Since ||q_i θ|| < 2^(−n), we have Rn = q_i.
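Lemma 13.25 gives a fast way to compute Rn without iterating T at all (an illustrative Python sketch, not from the text, shown for θ = √2 − 1 = [2, 2, 2, . . .]; exact fractions replace the high-precision Digits setting of Maple Program 13.7.5):

```python
from fractions import Fraction

K = 40
qs, ps = [0, 1], [1, 0]           # q_{-1}, q_0 and p_{-1}, p_0
for _ in range(K):                # all partial quotients of sqrt(2)-1 equal 2
    qs.append(2 * qs[-1] + qs[-2])
    ps.append(2 * ps[-1] + ps[-2])
theta = Fraction(ps[-1], qs[-1])  # |theta - p_K/q_K| < q_K^-2: exact enough here

def dist(x):
    """||x||: distance from x to the nearest integer (exact on Fractions)."""
    return abs(x - round(x))

def R(n):
    """First return time R_n for T x = {x + theta}, via Lemma 13.25."""
    eps = Fraction(1, 2 ** n)
    q = qs[1:]                    # q[0] = q_0 = 1, q[1] = q_1, ...
    if eps > dist(theta):         # case (i): n < -log||theta||
        return 1
    for i in range(1, len(q)):    # case (ii): ||q_i theta|| < 2^-n < ||q_{i-1} theta||
        if dist(q[i] * theta) < eps < dist(q[i - 1] * theta):
            return q[i]
    raise ValueError("increase K")

print([R(n) for n in range(1, 9)])   # the convergent denominators 2, 5, 12, 29, 70, ...
```

The return times jump between the convergent denominators of θ, exactly as the lemma predicts.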

Theorem 13.26. For any irrational θ, we have

lim sup_{n→∞} (log Rn)/n = 1 .

Proof. Choose i such that ||q_i θ|| < 2^(−n) < ||q_{i−1} θ|| as in Lemma 13.25. Then 2^(−n) < ||q_{i−1} θ|| < q_i^(−1) and Rn = q_i < 2^n. Hence

lim sup_{n→∞} (log Rn)/n ≤ 1 .

For the other direction, suppose n_i satisﬁes 2^(−n_i) < ||q_{i−1} θ|| < 2^(−n_i+1). Since ||jθ|| > 2^(−n_i) for j < q_i and

1/(2 q_i) < 1/(q_i + q_{i−1}) < ||q_{i−1} θ|| < 1/2^(n_i−1) ,

we have R_{n_i} ≥ q_i > 2^(n_i−2). Therefore

lim sup_{n→∞} (log Rn)/n ≥ lim sup_{i→∞} (log R_{n_i})/n_i ≥ 1 .

Lemma 13.27. If β > 0 satisﬁes

lim inf_{n→∞} n^β ||nθ|| < ∞ ,

then

lim inf_{n→∞} (log Rn)/n ≤ 1/β .

Proof. Put L = lim inf_{n→∞} n^β ||nθ||. Then there exists a sequence {n_j}_{j=1}^∞ such that (n_j)^β ||n_j θ|| ≤ L + 1. Choose a constant c > 0 such that L + 1 ≤ 2^c. Take a sequence {m_j}_{j=1}^∞ satisfying 2^(m_j+c) ≤ (n_j)^β < 2^(m_j+c+1). Then

||n_j θ|| ≤ (L + 1)(n_j)^(−β) ≤ 2^c · 2^(−m_j−c) = 2^(−m_j) ,

and hence R_{m_j} ≤ n_j. Since β log n_j < m_j + c + 1, we obtain

lim inf_{n→∞} (log Rn)/n ≤ lim inf_{j→∞} (log R_{m_j})/m_j ≤ lim_{j→∞} (log n_j)/m_j ≤ 1/β .

Theorem 13.28. If θ is an irrational number of type τ, then

lim inf_{n→∞} (log Rn)/n = 1/τ .

Proof. Since θ is of type τ, for every ε > 0 and j ≥ 1, there exists Cε > 0 such that j^(τ+ε) ||jθ|| ≥ Cε. By Lemma 13.25,

q_i^(τ+ε) ≥ Cε / ||q_i θ|| > 2^n Cε .

Hence (τ + ε) log q_i > n + log Cε and

lim inf_{n→∞} (log q_i)/n ≥ 1/(τ + ε)

for every ε > 0. By Lemma 13.25 we have

lim inf_{n→∞} (log Rn)/n = lim inf_{n→∞} (log q_i)/n ≥ 1/τ .

For the other direction, note that

lim inf_{j→∞} j^(τ−ε) ||jθ|| = 0 < ∞

for every ε > 0. Lemma 13.27 implies that for every ε > 0

lim inf_{n→∞} (log Rn)/n ≤ 1/(τ − ε) ,

and hence

lim inf_{n→∞} (log Rn)/n ≤ 1/τ .

Example 13.29. See Fig. 13.12 for simulations with irrational numbers of type τ = 3/2 and of type τ = 3. The horizontal lines indicate the graphs y = 1/τ. Theoretically in the right graph we expect that R40 ≈ 2^40 ≈ 10^12, which is impossible to obtain in a computer experiment in any near future. It is estimated from a computer experiment which lasted for several hours that R40 ≥ 2988329. So instead of using the exact value for R40 we choose R40 = 2988329 to draw the graph. The true graph would have a very steep line segment on the interval 39 ≤ n ≤ 40. This agrees with the fact that the limit inﬁmum is equal to 1/3. Consult Maple Program 13.7.5.

Fig. 13.12. y = (log Rn)/n for T x = {x + θ} with θ of type 3/2 (left) and of type 3 (right)

As a conclusion we have the following result.

Corollary 13.30. lim_{n→∞} (log Rn)/n = 1 if and only if θ is of type 1.

Proof. Use Theorems 13.26 and 13.28.

Fig. 13.13. y = (log Rn)/n for T x = {x + θ} with θ = (√5 − 1)/2 (left) and θ = e − 2 (right)

For simulations see Fig. 13.13 and Maple Program 13.7.5. For closely related results see [CS], [KS].

Remark 13.31. Here is a heuristic argument on the dependence of the ﬁrst return time on the type of θ. Let T x = x + θ (mod 1). Suppose that there exists a sequence of rational numbers p_n/q_n, (p_n, q_n) = 1, satisfying

|θ − p_n/q_n| < C / q_n^(1+α)

for some C > 0 and α ≥ 1. Deﬁne T_n by T_n x = x + p_n/q_n (mod 1). Then for 1 ≤ j < q_n,

|T^j x − T_n^j x| = |jθ − j p_n/q_n| < C / q_n^α .

Thus T is approximated by T_n. Note that T_n has period q_n, and so we expect that T^{q_n} is close to the identity transformation for suﬃciently large n. For larger values of α the approximation is better and T behaves more like a periodic transformation. In this case the ﬁrst return time Rk is close to q_n for reasonably large values of k. Hence Rk does not grow very fast. The type of θ determines the speed of the approximation of T by the periodic transformations T_n, which in turn determines the speed of the periodic approximation. For a related result, see [CFS].
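The heuristic is easy to see numerically (an illustrative Python check, not from the text): for θ = √2 − 1 the iterate T^{q_n} shifts every point by q_n θ (mod 1), i.e. by at most ||q_n θ||, which shrinks rapidly along the convergent denominators q_n = 2, 5, 12, 29, 70, 169, . . .

```python
from math import sqrt

theta = sqrt(2) - 1                 # continued fraction [2, 2, 2, ...]

def dist(x):
    """||x||: distance from x to the nearest integer."""
    return abs(x - round(x))

# denominators of the convergents, q_n = 2*q_{n-1} + q_{n-2}
q_prev, q = 1, 2
for _ in range(6):
    # T^q moves every point by ||q*theta||, so small values mean
    # T^q is close to the identity transformation
    print(q, dist(q * theta))
    q_prev, q = q, 2 * q + q_prev
```

The displacement ||q_n θ|| drops roughly like 1/q_{n+1}, so along this subsequence T behaves almost periodically, in line with the remark.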

13.7 Maple Programs

We investigate the properties of the metric version of the ﬁrst return time, and simulate generalizations of Boshernitzan's theorem.

13.7.1 The recurrence error

We investigate the product of the return time and the recurrence error for an irrational translation modulo 1. (If we are interested only in ﬁnding the Hausdorﬀ dimension, we do not need many signiﬁcant digits as explained in Remark 13.22.)
> with(plots):
Since an irrational translation modulo 1 has zero Lyapunov exponent, we do not need many signiﬁcant digits.
> Digits:=50:
Choose an irrational number θ.
> theta:=evalf(sqrt(5)-1)/2:
> T:=x->frac(x+theta):
> N:=20:
It is convenient to choose x0 = 1/2 as the starting point of an orbit. There is no need to use the deﬁnition of ||x|| in the following.
> X0:=0.5:
> for n from 1 to N do
>   Xn:=T(X0):
>   for j from 1 while abs(Xn-X0) > 1/2.^n do
>     Xn:=T(Xn):
>   od:
>   R[n]:=j:
>   Error[n]:=evalf(abs(Xn-X0),10):
> od:
> Digits:=10:
> listplot([seq([n,R[n]*Error[n]],n=1..N)],labels=["n",""]);
See Fig. 13.1.

13.7.2 The logistic transformation

Investigate the metric version of the ﬁrst return time for the logistic transformation. (If we are interested only in ﬁnding the Hausdorﬀ dimension, we do not need many signiﬁcant digits as explained in Remark 13.22.)
> with(plots):
> T:=x->4.0*x*(1.0-x):
> rho:=x->1/(Pi*sqrt(x*(1-x))):
This program takes very long to ﬁnish. Try ﬁrst n = 10.
> n:=14:
Choose the sample size.
> S:=1000:
> seed[0]:=evalf(Pi-3.0):
> for i from 1 to S do seed[i]:=T(seed[i-1]): od:

Calculate the ﬁrst return time.
> Digits:=10000:
Try Digits:=2000 ﬁrst.
> for i from 1 to S do
>   Xn:=T(seed[i]):
>   for j from 1 while abs(Xn-seed[i]) > 2.0^(-n) do
>     Xn:=T(Xn):
>   od:
>   Return[i]:=j;
>   Error[i]:=evalf(abs(Xn-seed[i]),10):
> od:
> Digits:=10:
Plot the points (x, Rn(x)εn(x)).
> g1:=pointplot([seq([seed[i],Return[i]*Error[i]],i=1..S)]):
> g2:=plot(2/rho(x),x=0..1,labels=["x"," "]):
> display(g1,g2);
See Fig. 13.3 for the output with n = 10. Plot the points (x, log Rn(x)/n).
> pointplot([seq([seed[i],log[2](Return[i])/n],i=1..S)]);
See the left plot in Fig. 13.6. Plot the pdf of Rn.
> Bin:=100:
Find the maximum of the ﬁrst return time.
> Max:=max(seq(Return[i],i=1..S));
Max := 87820
The following shows that for outliers the number of signiﬁcant digits was not large enough.
> ceil(entropy*Max);
26437
Find the frequency corresponding to each bin.
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to S do
>   slot:=ceil(Bin*Return[j]/Max):
>   freq[slot]:=freq[slot]+1:
> od:
Plot the pdf.
> gA:=listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/S/Max],i=1..Bin)]):
> gB:=plot(0,x=0..Max*0.8,labels=["Rn","Prob"]):
> display(gA,gB);
See Fig. 13.14. The area under the graph should be equal to 1. Now ﬁnd the pdf of (log Rn)/n.
> Bin2:=30:


Fig. 13.14. Pdf of Rn, n = 14, for T x = 4x(1 − x)

> Max2:=max(seq(log[2.0](Return[i])/n,i=1..S),1.5);
Max2 := 1.5
> add(log[2.0](Return[i])/n,i=1..S)/S;
0.8544914490
We will consider log(Rn + ε) for small ε > 0 to avoid the case when Rn = 1.
> epsilon:=10.^(-10):
> for k from 1 to Bin2 do freq2[k]:=0: od:
> for j from 1 to S do
>   slot2:=ceil(Bin2/Max2*(log[2.](Return[j])/n+epsilon)):
>   freq2[slot2]:=freq2[slot2]+1:
> od:
Plot the pdf.
> gC:=listplot([seq([(i-0.5)*Max2/Bin2,freq2[i]*Bin2/S/Max2],i=1..Bin2)]):
> gD:=plot(0,x=0..1.5,labels=[" ","Prob"]):
> display(gC,gD);
See the right graph in Fig. 13.6.

13.7.3 The Hénon mapping

Check the validity of the ﬁrst return time formula for the Hénon mapping. For a related result consult [C3]. (If we are interested only in ﬁnding the dimension of the attractor, we do not need many digits. See Remark 13.22.)
> with(plots):
It takes very long to ﬁnish the program. Start the experiment with n = 3 and Digits:=100.
> n:=8:
> Digits:=20000:
> a:=1.4:
> b:=0.3:
> T:=(x,y)->(y+1-a*x^2,b*x):
> seed[0]:=(0,0):


Throw away transient points.
> for i from 1 to 5000 do seed[0]:=T(seed[0]): od:
> SampleSize:=1000:
> for i from 1 to SampleSize do seed[i]:=T(seed[i-1]): od:
Calculate the ﬁrst return time Rn.
> for i from 1 to SampleSize do
>   Xn:=T(seed[i]):
>   for j from 1 while
>     (Xn[1]-seed[i][1])^2+(Xn[2]-seed[i][2])^2 > 2^(-2*n) do
>     Xn:=T(Xn):
>   od:
>   ReturnTime[i]:=j;
> od:
> Digits:=10:
Find the average of (log Rn)/n.
> add(log[2.](ReturnTime[i])/n,i=1..SampleSize)/SampleSize;
1.205964669
This result agrees with the fact that the Hausdorﬀ dimension is greater than 1. In the following we can rotate the three-dimensional plot by moving the mouse with a button being pressed. See the left plot in Fig. 13.10.
> pointplot3d([seq([seed[i],log[2](ReturnTime[i])/n],i=1..SampleSize)],axes=normal);
Find the maximum of (log Rn)/n.
> Max:=max(seq(log[2.0](ReturnTime[i])/n,i=1..SampleSize)):
Divide the interval [0, Max] into 27 bins.
> Bin:=27:
> epsilon:=0.000000001:
Find the frequency corresponding to each bin.
> for k from 1 to Bin do freq[k]:=0: od:
> for j from 1 to SampleSize do
>   slot:=ceil(Bin/Max*(log[2](ReturnTime[j])/n+epsilon)):
>   freq[slot]:=freq[slot]+1:
> od:
Plot the pdf of (log Rn)/n.
> listplot([seq([(i-0.5)*Max/Bin,freq[i]*Bin/SampleSize/Max],i=1..Bin)]);
See the right graph in Fig. 13.10.

13.7.4 Type of an irrational number

Construct an irrational number θ of type τ and check the validity of the construction.
> with(plots):

13.7.4 Type of an irrational number Construct an irrational number θ of type τ and check the validity of the construction. > with(plots):

13.7 Maple Programs

413

Let qn be the denominator of the nth convergent in the continued fraction expansion of θ. Compute qn ||qn θ||, 1 ≤ n ≤ K, K = 50. There may be a margin of error so we choose an integer slightly greater than 50 for K. > K:=52: Choose τ = 65 . > tau:=1.2; τ := 1.2 Deﬁne the partial quotients of θ. > for n from 1 to K do > a[n]:=trunc(2^(tau^n)): > od: Find the numerators and the denominators of the ﬁrst K convergents of θ. We use the recursive relation for the denominators of the convergents. > q[-1]:=0: > q[0]:=1: > for n from 1 to K do > q[n]:=a[n]*q[n-1]+q[n-2]: > od: Here is the recursive relation for the numerators of the convergents. > p[-1]:=1: > p[0]:=0: > for n from 1 to K do > p[n]:=a[n]*p[n-1]+p[n-2]: > od: Note that |θ − pn /qn | ≤ qn−2 . We use the nth convergent pn /qn for suﬃciently large n in place of θ, and the error is less than qn−2 . Find the number of decimal digits to calculate qn−2 . > evalf(2*log10(q[K])); 47334.62374 In the approximation of θ using its ﬁrst K partial quotients, ﬁnd an optimal number of signiﬁcant decimal digits needed to ensure the accuracy. > Digits:=ceil(%); Digits := 47335 Deﬁne θ. > theta:=evalf(p[K]/q[K]): Calculate ||qn θ||. > for n from 1 to K do > dist[n]:=min(1-frac(q[n]*theta),frac(q[n]*theta)): > od: We needed high precision arithmetic in constructing θ, but we no longer need many signiﬁcant digits in iterating the irrational translation modulo 1. > Digits:=10: Check whether the numerical value of ||qn θ|| is accurate for every n within an acceptable margin.

> seq([n,evalf(dist[n],10)],n=K-4..K);
[48, 0.3060616812×10^(−13695)], [49, 0.9570885589×10^(−16435)], [50, 0.3759446621×10^(−19722)], [51, 0.4876731853×10^(−23667)], [52, 0.1×10^(−23667)]
In the above output the value for n = 52 does not have enough signiﬁcant digits, so it should be discarded. The other values, for n ≤ 51, seem to be acceptable. We can safely use the ﬁrst L = 50 values.
> L:=50:
Plot the graph y = (qn)^τ ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^tau*dist[n]],n=1..L)]);
See the left graph in Fig. 13.15. Plot the graph y = (qn)^(τ−0.0001) ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^(tau-0.0001)*dist[n]],n=1..L)]);
See the middle graph in Fig. 13.15. Plot the graph y = (qn)^(τ+0.0001) ||qn θ||, 1 ≤ n ≤ 50.
> pointplot([seq([n,q[n]^(tau+0.0001)*dist[n]],n=1..L)]);
See the right graph in Fig. 13.15.

Fig. 13.15. y = (qn)^τ ||qn θ||, y = (qn)^(τ−0.0001) ||qn θ|| and y = (qn)^(τ+0.0001) ||qn θ|| for τ = 6/5 (from left to right)

13.7.5 An irrational translation mod 1

Consider T x = {x + θ} where θ is an irrational number of type τ = 2. For the construction of θ consult Maple Program 13.7.4.
> with(plots):
> K:=10:
> tau:=2:

> for n from 1 to K do
>   a[n]:=trunc(2^(tau^n)):
> od:
> q[-1]:=0:
> q[0]:=1:
> for n from 1 to K do
>   q[n]:=a[n]*q[n-1]+q[n-2]:
> od:
> evalf(2*log10(q[K])):
> Digits:=ceil(%);
Digits := 1232
> p[-1]:=1:
> p[0]:=0:
> for n from 1 to K do
>   p[n]:=a[n]*p[n-1]+p[n-2]:
> od:
> theta:=evalf(p[K]/q[K]):
We do not need many signiﬁcant digits.
> Digits:=100:
> T:=x->frac(x+theta):
> N:=30:
> evalf(1./2^N,10);
0.9313225746×10^(−9)
It is convenient to choose x0 = 1/2 as the starting point of an orbit.
> X0:=0.5:
Find the ﬁrst return time Rn(x0). To speed up the computation we use the fact Rn(x0) ≥ R_{n−1}(x0) in the following do loop.
> Xn:=T(X0):
> R[0]:=1:
> for n from 1 to N do
>   for j from R[n-1] do
>     if abs(Xn-X0) < 1./2^n then R[n]:=j: break:
>     else Xn:=T(Xn): fi:
>   od:
> od:
> seq(R[n],n=1..N);
1, 1, 4, 4, 4, 4, 65, 65, 65, 65, 65, 65, 65, 65, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644, 16644
> Digits:=10:
Plot the graph y = (log Rn)/n, 1 ≤ n ≤ N, together with y = 1/τ.
> g1:=listplot([seq([n,log[2](R[n])/n],n=2..N)]):
> g2:=pointplot([seq([n,log[2](R[n])/n],n=2..N)]):
> g3:=plot(1/tau,x=0..N,labels=["n","(log Rn)/n"]):
> display(g1,g2,g3);

See Fig. 13.16.

Fig. 13.16. y = (log Rn)/n, 2 ≤ n ≤ 30, for T x = {x + θ} with θ of type 2

14 Data Compression

Data compression in digital technology is explained in terms of entropy, and three compression algorithms are introduced: Huﬀman coding, Lempel–Ziv coding and arithmetic coding. Historically Huﬀman coding was invented ﬁrst, but these days Lempel–Ziv coding is mostly used. Arithmetic coding was introduced relatively recently. In our mathematical model a computer ﬁle under consideration is assumed to be an inﬁnitely long binary sequence, and entropy is the best possible compression ratio. Most computer users are familiar with compressed ﬁles. Usually computer software is stored in compressed format and it has to be decompressed before it is installed on a computer. This is done almost automatically nowadays. As another example, consider a binary source emitting a signal ('0' or '1') every second. (Imagine a really slow modem.) But the problem is that the signal has to be transmitted through a channel (imagine a telephone line) that can accept and send only 0.9 binary symbols per second. How can we send the information in this case? We use data compression so that the information content of our compressed signal is not changed but the length is reduced. Throughout the chapter the codes are assumed to be binary and 'log' denotes the logarithm to the base 2 unless stated otherwise. For practical implementation of the data compression algorithms consult [HHJ], [We]. For the fax code see [Kuc]. For an elementary introduction to information theory see [Luc].

14.1 Coding

An alphabet is a ﬁnite set A = {s1, . . . , sk}. Its elements are called (source) symbols. We imagine a signal source that emits symbols si. Consider the set X of inﬁnite sequences of symbols from A, i.e., X = Π_{1}^{∞} A = A × A × · · · . Let A^n denote the set of blocks (or words) of length n made up of symbols from A and let A* = ∪_{n=1}^{∞} A^n denote the set of all blocks of ﬁnite length. For example, {0, 1}* is the set of all ﬁnite sequences of 0's and 1's.

Deﬁnition 14.1. (i) A binary code for A is a mapping C : A → {0, 1}*. For s ∈ A, C(s) is called the codeword for s and its length is denoted by ℓ_C(s). The average length L(C) of C is deﬁned by

L(C) = Σ_{i=1}^{k} Pr(s_i) ℓ_C(s_i) ,

where Pr(s_i) is the probability of s_i.
(ii) A code C is called a preﬁx code if there do not exist two distinct symbols s_i, s_j such that C(s_i) is the beginning of C(s_j).
(iii) Deﬁne a coding map C* : A* → {0, 1}* by C*(s_{i1} · · · s_{in}) = C(s_{i1}) · · · C(s_{in}) for a sequence of symbols s_{i1} · · · s_{in}. We sometimes identify a code C with C*. The process of applying C* is called 'coding' or 'encoding', and if it is invertible then the reverse process is called 'decoding'. For a code C to be useful, it should be decodable.

Example 14.2. Let A = {s1, s2, s3}.
(i) Deﬁne a code C, which is not a preﬁx code, by C(s1) = 0, C(s2) = 1, C(s3) = 01.
(ii) Deﬁne a preﬁx code C′ by C′(s1) = 00, C′(s2) = 11, C′(s3) = 01.

Preﬁx codes are uniquely decodable. If a sequence (or string) made up of codewords C(s) is given, then the sequence of inverse images of the codewords can be reconstructed uniquely. In Ex. 14.2, C is not uniquely decodable since the string 01 may have come from the two diﬀerent strings s1s2 and s3 under C. A binary code can be represented as a binary tree. A full binary tree of length M is a binary tree such that, except for the end nodes, every node has two nodes originating from it. Note that a full binary tree of length M has 2^M end nodes. For examples of binary trees see Sect. 14.3.

Theorem 14.3 (Kraft's inequality). Let A = {s1, . . . , sk} be an alphabet. If C is a binary preﬁx code for A, then

Σ_{i=1}^{k} 2^(−ℓ_C(s_i)) ≤ 1 .

Conversely, if a set of integers ℓ_i, 1 ≤ i ≤ k, satisﬁes

Σ_{i=1}^{k} 2^(−ℓ_i) ≤ 1 ,

then there exists a binary preﬁx code C for A with codeword lengths ℓ_C(s_i) = ℓ_i, 1 ≤ i ≤ k.

Proof. Let M = max_{1≤i≤k} ℓ_C(s_i). Recall that C is represented by a binary tree of length M whose end nodes correspond to codewords of C. Choose an arbitrary end node corresponding to C(s_i). Consider an imaginary subtree of length M − ℓ_C(s_i) originating from the node corresponding to s_i. It has 2^(M−ℓ_C(s_i)) end nodes. These subtrees do not overlap and all of them are part of a full binary tree of length M with 2^M end nodes. Hence

Σ_{i=1}^{k} 2^(M−ℓ_C(s_i)) ≤ 2^M .

For the converse, suppose that ℓ_1 ≤ . . . ≤ ℓ_k = M and consider the full binary tree F_M of length M. First, consider the subtree T_1 of length M − ℓ_1 whose end nodes are the ﬁrst 2^(M−ℓ_1) end nodes of F_M. Deﬁne C(s_1) to be the codeword corresponding to the root of T_1. Proceed similarly for s_2 with the remaining 2^M − 2^(M−ℓ_1) end nodes of F_M, and for s_3 with the remaining 2^M − 2^(M−ℓ_1) − 2^(M−ℓ_2) end nodes of F_M. This is possible since Σ_{i=1}^{k} 2^(−ℓ_i) ≤ 1.

Theorem 14.3 was obtained by Kraft [Kra] and McMillan.
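Both unique decodability and Kraft's inequality are easy to check in code (an illustrative Python sketch, not part of the original text), using the preﬁx code C′ of Ex. 14.2: greedy left-to-right decoding emits a symbol as soon as the bits read so far form a codeword, and for a preﬁx code this is unambiguous.

```python
def decode(code, bits):
    """Greedy decoding of a prefix code: emit a symbol whenever the
    bits read so far form a codeword (unique for prefix codes)."""
    inv = {w: s for s, w in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

C = {"s1": "00", "s2": "11", "s3": "01"}        # the prefix code C' of Ex. 14.2
print(decode(C, "000111"))                       # recovers s1, s3, s2 uniquely
print(sum(2 ** -len(w) for w in C.values()))    # Kraft sum: 3/4 <= 1
```

The non-preﬁx code C of Ex. 14.2(i) would fail here: the bits 01 match a codeword before the decoder can tell whether they came from s1s2 or from s3.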

14.2 Entropy and Data Compression Ratio

In digital technology large data ﬁles, which are strings of binary symbols, are compressed to save hard disk space and communication time. Lossless compression of a data ﬁle shortens the length of the string using a method that guarantees the recovery of the original string. In other words, a lossless compression algorithm is a one-to-one mapping deﬁned on the set of all ﬁnite or inﬁnite strings. When we compress video images such as a digitized movie, the nonessential part is ignored and lost for better compression, which is called lossy compression. In this book we consider only lossless data compression, and compression means lossless compression. We deﬁne the compression ratio and the compression rate as follows:

compression ratio = (length of compressed string) / (length of original string)

and compression rate = 1 − compression ratio. How much a computer ﬁle can be compressed or how much a message can be shortened depends on the amount of information or randomness contained in the given data. The more redundancy there is, the less information we have.
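As a concrete illustration (Python with the standard zlib library; not from the text), a highly redundant string compresses to a small fraction of its length while random bytes are essentially incompressible:

```python
import zlib
import random

def ratio(data: bytes) -> float:
    """compression ratio = len(compressed) / len(original)"""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(0)
repetitive = b"abab" * 25000                                  # highly redundant
noise = bytes(random.randrange(256) for _ in range(100000))   # no redundancy

print(round(ratio(repetitive), 3))   # much less than 1
print(round(ratio(noise), 3))        # close to (or slightly above) 1
```

The random input can even grow slightly under compression, because a lossless one-to-one map cannot shorten every string.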

Table 14.1. Correspondence of ideas

  Measure Theory                Information Theory
  an alphabet                   source symbols
  a shift space                 inﬁnitely long data ﬁles
  a cylinder set                a block
  a shift invariant measure     a stationary probability
  entropy                       compression ratio

This is why we need the concept of entropy. Information theory has its own language. See Table 14.1. If a probability distribution is given on A = {s_1, ..., s_k}, then the average quantity of information of A (or, the entropy of A) is defined by
$$H(A) = \sum_{i=1}^{k} \Pr(s_i) \log \frac{1}{\Pr(s_i)} \,,$$
where Pr(s) is the probability of s. If there exists a shift-invariant measure (or a stationary probability) µ on X = \prod_{1}^{\infty} A, then consider the nth average quantity of information of X defined by
$$H_n(X) = H(A^n) = \sum_{B} \Pr(B) \log \frac{1}{\Pr(B)} \,,$$
where the sum is taken over all blocks B of length n. This is nothing but the entropy of the nth refinement P_n = P ∨ T^{-1}P ∨ ··· ∨ T^{-(n-1)}P, where P is the partition of X by the cylinder sets of length 1 corresponding to the first coordinates. Recall that lim_{n→∞} (1/n) H_n(X) is the entropy of the shift transformation. See Chap. 8. If we have a Bernoulli measure on X, then
$$H_n(X) = n H_1(X) = n H(A) \,,$$
and the entropy is equal to H(A).

So far we have defined the entropy of a partition while the measure is fixed. The problem in real life applications of entropy is to find an appropriate measure on the set of all possible strings made of 0's and 1's that appear in a given binary sequence x_0. To construct a mathematical model we assume that the length of x_0 is infinite, that is, x_0 ∈ X = \prod_{1}^{\infty} {0,1}. The transformation under consideration is always the left shift mapping T, and so it is not mentioned explicitly in information theory. The Birkhoff Ergodic Theorem is used to define an ergodic shift invariant probability measure µ, i.e., the µ-measure of a cylinder set or a block
$$C = \{x \in X : x_t = a_1, \ldots, x_{t+n-1} = a_n\} = [a_1, \ldots, a_n]_{t,t+n-1}$$
is defined by
$$\mu(C) = \lim_{k \to \infty} \frac{1}{k} \sum_{j=1}^{k} 1_C(T^{j-1} x_0) \,.$$
Equivalently, it is the relative frequency of the n-block a_1 ... a_n in x_0. Then µ is invariant and ergodic with respect to T, and x_0 is a typical point for µ as far as the Birkhoff Ergodic Theorem is concerned. Note that even a single sequence x_0 defines a probability measure µ.
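The two averages above are easy to evaluate numerically. The programs in this book are written in Maple; purely as an illustration, here is a minimal Python sketch (the function name `entropy` is ours) that computes H(A) for a distribution and checks that H_n(X) = nH(A) for a Bernoulli measure with n = 2:

```python
import math

def entropy(probs):
    """Shannon entropy H = sum p * log2(1/p); zero-probability symbols are skipped."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Entropy of the (0.25, 0.75)-Bernoulli alphabet
h1 = entropy([0.25, 0.75])

# For a Bernoulli measure the 2-block probabilities are products of
# 1-symbol probabilities, so H_2(X) = 2 * H(A).
blocks2 = [p * q for p in (0.25, 0.75) for q in (0.25, 0.75)]
h2 = entropy(blocks2)
```

For the (0.25, 0.75)-Bernoulli measure this gives H(A) ≈ 0.8113, the value that reappears in Maple Program 14.6.1.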

Lemma 14.4 (Gibbs' inequality). Let {p_1, ..., p_k} and {q_1, ..., q_k} be two probability distributions. Then
$$-\sum_{i=1}^{k} p_i \log p_i \le -\sum_{i=1}^{k} p_i \log q_i \,.$$
The equality holds only when p_i = q_i for every i.

Proof. We may assume that p_i > 0 for every i. Since log t ≤ t − 1 for t > 0, we have
$$\log \frac{q_i}{p_i} \le \frac{q_i}{p_i} - 1$$
with equality only when p_i = q_i. Hence
$$\sum_i p_i \log \frac{q_i}{p_i} \le \sum_i p_i \left( \frac{q_i}{p_i} - 1 \right) = \sum_i q_i - \sum_i p_i = 0 \,.$$
The equality holds only when p_i = q_i for every i.

Corollary 14.5. Suppose that A has k symbols. Then
$$H(A) \le \log k \,.$$
The equality holds only when p_i = 1/k for every i.

Proof. Let p_i be the probability of the ith symbol and take q_i = 1/k in Lemma 14.4.
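Both Lemma 14.4 and Corollary 14.5 are easy to check numerically. The following Python fragment is only a sanity check, not part of the book's Maple code; the helper name `cross_entropy` is ours:

```python
import math

def cross_entropy(p, q):
    """-sum p_i log2 q_i; for q = p this is the entropy H(p)."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.1, 0.2, 0.3, 0.4]
q = [0.25, 0.25, 0.25, 0.25]

h_p = cross_entropy(p, p)    # -sum p log p
h_pq = cross_entropy(p, q)   # -sum p log q
# Gibbs' inequality: H(p) <= -sum p log q, with equality iff p = q,
# and Corollary 14.5: H(p) <= log2 k = 2 for k = 4 symbols.
```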

Theorem 14.6 (Source Coding Theorem). Let PC(A) denote the set of all prefix codes for an alphabet A = {s_1, ..., s_k} with p_i = Pr(s_i). Then
$$H(A) \le \inf_{C \in PC(A)} L(C) < H(A) + 1 \,.$$

Proof. For the lower bound, take an arbitrary prefix code C with codeword lengths ℓ_i and put w = \sum_j 2^{-\ell_j} and q_i = 2^{-\ell_i}/w. By Kraft's inequality, w ≤ 1, so −log w ≥ 0. Lemma 14.4 implies
$$\sum_i p_i \ell_i = \sum_i p_i (-\log q_i - \log w) = -\sum_i p_i \log q_i - \log w \ge -\sum_i p_i \log p_i = H(A) \,.$$

For the upper bound the problem is to minimize \sum_i p_i \ell_i with the constraint \sum_i 2^{-\ell_i} \le 1. Let ℓ_i = ⌈log(1/p_i)⌉, where ⌈t⌉ is the smallest integer greater than or equal to t. Then
$$\sum_i 2^{-\ell_i} = \sum_i 2^{-\lceil \log(1/p_i) \rceil} \le \sum_i 2^{-\log(1/p_i)} = \sum_i p_i = 1 \,.$$
Hence there exists a prefix code C_0 with lengths ℓ_i, 1 ≤ i ≤ k, and
$$L(C_0) = \sum_i p_i \ell_i < \sum_i p_i (\log(1/p_i) + 1) = H(A) + 1 \,.$$
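The upper-bound construction in the proof can be replayed numerically. Here is a hedged Python sketch (the helper name `shannon_lengths` is ours), run on the probabilities that reappear in Sect. 14.3; it verifies Kraft's inequality and the bound H(A) ≤ Σ p_i ℓ_i < H(A) + 1:

```python
import math

def shannon_lengths(probs):
    """Code lengths l_i = ceil(log2(1/p_i)) from the upper-bound argument."""
    return [math.ceil(math.log2(1.0 / p)) for p in probs]

p = [0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2]
lengths = shannon_lengths(p)

# Kraft's inequality holds, so a prefix code with these lengths exists.
kraft = sum(2.0 ** -l for l in lengths)

H = sum(pi * math.log2(1.0 / pi) for pi in p)   # entropy H(A)
L = sum(pi * li for pi, li in zip(p, lengths))  # average codeword length
```

Note that these ceiling lengths are merely feasible, not optimal; the Huffman code of the next section does better (L = 2.9 versus L = 3.4 here).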

i

Example 14.7. Consider a binary prefix code C : A → {0,1}* such that Pr(0) = 0.1, Pr(1) = 0.9. Since A = {0,1}, the coding map C is determined by C(0) and C(1). We have two choices for C with the minimal average length: either C(0) = 0, C(1) = 1 or C(0) = 1, C(1) = 0. In each case L(C) = 1 and the inequality of the preceding theorem is satisfied.

Let A = {0,1}, and consider A^2 = {00, 01, 10, 11}, which is the set of blocks of length 2. Assume that the (p, 1−p)-Bernoulli measure, where p = 0.4, is the shift invariant measure on X = \prod_1^∞ A. Note that Pr(00) = 0.16, Pr(01) = Pr(10) = 0.24 and Pr(11) = 0.36. Among the probabilities of the blocks of length n, the difference between the smallest and the largest probabilities becomes greater and greater as n increases.

Theorem 8.19 on the Asymptotic Equipartition Property states that the probability of a typical block of length n is close to 2^{−nh} where h is the entropy. Now, for sufficiently large n, we assign short codewords of length close to nh to typical n-blocks (there are approximately 2^{nh} of them) and long codewords to non-typical n-blocks to achieve optimal compression. On average an n-block is coded into a codeword of length nh since the set of non-typical blocks has negligible probability. Thus the average codeword length per input symbol is close to entropy. More precisely, we have the following theorem:

Theorem 14.8. Given a finite alphabet A, let µ be a shift invariant probability measure on X = \prod_1^∞ A. Let C^{(n)} be an optimal binary prefix code for A^n for each n. Then
$$\lim_{n \to \infty} \frac{L(C^{(n)})}{n} = h \,,$$
where h is the entropy of X with respect to µ.

Proof. Recall that A^n is identified with the set of blocks of length n and that Pr(B) = µ(B) for B ∈ A^n. By Theorem 14.6 applied to C^{(n)} we have
$$H(A^n) \le L(C^{(n)}) < H(A^n) + 1 \,.$$
Hence
$$\frac{H(A^n)}{n} \le \frac{L(C^{(n)})}{n} < \frac{H(A^n)}{n} + \frac{1}{n} \,.$$
Now we take the limits as n → ∞.


14.3 Huffman Coding

Huffman coding was invented by D.A. Huffman while he was a student at MIT around 1950. It assigns the shortest codeword to the most frequently appearing symbol. It was widely used before the invention of the Lempel–Ziv algorithm.

Suppose that there are k symbols s_1, s_2, ..., s_k with the corresponding probabilities p_1 ≤ p_2 ≤ ··· ≤ p_k. The following steps explain the algorithm for constructing a Huffman tree for k = 8:

The first step: Combine the two smallest probabilities p_1, p_2 to obtain p_12 = p_1 + p_2 with a new symbol s_12 that replaces s_1, s_2. Thus we have k − 1 symbols s_12, s_3, ..., s_k with corresponding probabilities p_12, p_3, ..., p_k. See Fig. 14.1.

[Figure: the probabilities 0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2 with the two smallest, p_1 and p_2, combined into p_1 + p_2 = 0.1.]
Fig. 14.1. The first step in the Huffman tree algorithm

Intermediate steps: Rearrange p_12, p_3, ..., p_k in increasing order and the corresponding symbols accordingly. Go back to the first step and proceed inductively on k until we are left with two symbols. See Fig. 14.2.

[Figure: the probabilities 0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2 with p_1 + p_2 + p_3 = 0.2 formed at the next merge.]
Fig. 14.2. The second step in the Huffman tree algorithm

The last step: Draw a binary tree starting from the root. Assign 0 and 1 to the top left and top right branches. Similarly assign 0 and 1 when there is a splitting of a branch into two branches. To make the algorithm systematic we assign 0 to the left branch and 1 to the right. See Fig. 14.3.

[Figure: the completed binary tree; the root 1 splits into 0.6 and 0.4, with internal nodes 0.1, 0.2, 0.25, 0.35 above the leaf probabilities 0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2.]
Fig. 14.3. The last step in the Huffman tree algorithm

Using the Huffman tree obtained above we define the coding map. For example, suppose we have eight source symbols with probabilities given in Fig. 14.1. They are already in increasing order. In subsequent figures we gradually construct a binary tree and finally in Fig. 14.4 we define a coding map C. Its formula is presented in Table 14.2. Note that L(C) = 2.9.

[Figure: the Huffman tree with the branches labeled 0 (left) and 1 (right); reading the labels from the root down to each leaf p_1, ..., p_8 gives the codewords of Table 14.2.]
Fig. 14.4. The coding map obtained from the Huffman tree algorithm

Table 14.2. A coding map C for Huffman coding

  Symbol  Probability  Codeword C(si)  Length
  s1      0.05         00000           5
  s2      0.05         00001           5
  s3      0.10         0001            4
  s4      0.10         010             3
  s5      0.15         011             3
  s6      0.15         001             3
  s7      0.20         10              2
  s8      0.20         11              2

Note that the construction of the Huffman tree is not unique. Historically Huffman coding was widely used, but now Lempel–Ziv coding is mostly used since it does not need a priori information on the probability distribution of the source symbols. For more details see [HHJ], [Rom].

For better performance we regard blocks of length n as source symbols for some sufficiently large fixed n. For example, suppose we have a binary sequence a_1 a_2 a_3 .... Consider the nonoverlapping blocks of length n regarded as source symbols:
a_1 ... a_n,  a_{n+1} ... a_{2n},  a_{2n+1} ... a_{3n},
and so on. There are 2^n different such symbols, some of which might have zero probability. Label them in the lexicographic order as
s_1 = 0...00,  s_2 = 0...01,  s_3 = 0...10,  ...,  s_{2^n − 1} = 1...10,  s_{2^n} = 1...11.
Now we define an optimal coding map C^{(n)} using the Huffman tree and apply Theorem 14.8. Consult Maple Program 14.6.1. For other applications of the Huffman algorithm see Chap. 2 in [Brem].
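The Huffman construction described above — repeatedly merging the two smallest probabilities — is commonly implemented with a priority queue. The book's implementation is the Maple program of Sect. 14.6.1; the following independent Python sketch (our own code, not the book's) reproduces the average codeword length L(C) = 2.9 of Table 14.2:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code; returns a dict symbol index -> codeword string."""
    # Heap entries: (probability, tie-breaker, partial codes for the subtree).
    heap = [(p, i, {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tie = len(probs)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two smallest probabilities
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}     # left branch gets 0
        merged.update({s: "1" + w for s, w in c2.items()})  # right branch gets 1
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = [0.05, 0.05, 0.1, 0.1, 0.15, 0.15, 0.2, 0.2]
code = huffman_code(probs)
avg_len = sum(p * len(code[i]) for i, p in enumerate(probs))
# The tree is not unique, but the optimal average length 2.9 is.
```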

14.4 Lempel–Ziv Coding

Lempel–Ziv coding was first introduced in [LZ]. It is simple to implement. The algorithm has become popular as the standard algorithm for file compression because of its speed and efficiency. This code can be used without knowledge of the probability distribution of the information source and yet will achieve an asymptotic compression ratio equal to the entropy of the information source. (Here by an information source we mean a shift space X = \prod_1^∞ {0,1} with an ergodic shift invariant probability measure. A signal from an information source corresponds to a binary sequence x ∈ X. Receiving a signal means that we read x_1, x_2, x_3, ... sequentially. When we have a binary message, we regard it as a part of an infinitely long binary sequence.) The probability distribution of the source sequence is implicitly reflected in the first return time of the initial n-blocks through the Ornstein–Weiss formula. We will consider a binary source for the sake of simplicity.

A parsing of a binary string x_1 x_2 ... x_n is a division of the string into phrases, separated by commas. A distinct parsing is a parsing such that no two phrases are identical. The Lempel–Ziv algorithm described below gives a distinct parsing of the source sequence. The source sequence is sequentially parsed into strings that have not appeared so far. For example, if the string is 1011010100010..., we parse it as 1, 0, 11, 01, 010, 00, 10, .... After each comma, we read along the input sequence until we come to the shortest string that has not been marked before. Since this is the shortest such string, all its prefixes must have occurred earlier. In particular, the string consisting of all but the last bit of this string must have occurred earlier. See the dictionary given in Table 14.3. We code this phrase by giving the location of the prefix and the value of the last bit. For example, the code for the above sequence is
(000, 1) (000, 0) (001, 1) (010, 1) (100, 0) (010, 0) (001, 0)
where the first number of each pair gives the index of the prefix and the second number gives the last bit of the phrase. Obviously, if we parse more of the given sequence, then we should reserve more bits in computer memory for allocation of the prefix.
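The parsing rule just described — read until the shortest string not seen before, then emit a (prefix index, last bit) pair — can be stated compactly. This is an illustrative Python sketch, not the book's Maple program; on the example string it reproduces the parsing 1, 0, 11, 01, 010, 00, 10 and the code pairs listed above:

```python
def lz_parse(s):
    """Distinct parsing: each phrase is the shortest string not seen before."""
    phrases, seen, current = [], set(), ""
    for bit in s:
        current += bit
        if current not in seen:   # shortest new string found: close the phrase
            seen.add(current)
            phrases.append(current)
            current = ""
    return phrases

phrases = lz_parse("1011010100010")
# Each phrase = (index of an earlier phrase, last bit); 0 is the empty prefix.
pairs = [(phrases.index(ph[:-1]) + 1 if len(ph) > 1 else 0, ph[-1])
         for ph in phrases]
# phrases == ['1', '0', '11', '01', '010', '00', '10']
# pairs   == [(0,'1'), (0,'0'), (1,'1'), (2,'1'), (4,'0'), (2,'0'), (1,'0')]
```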
Decoding of the coded sequence is straightforward. Let C_N be the number of phrases in the sequence obtained from the parsing of an input sequence of length N. We need log C_N bits to describe the location of the prefix to the phrase and 1 bit to describe the last bit. The total length of the compressed sequence is approximately C_N(log C_N + 1) bits. The following fact is known.

Theorem 14.9. Let X = \prod_1^∞ {0,1} be the product space and let T be the shift transformation defined by T(x_1 ... x_k ...) = (x_2 ... x_{k+1} ...). Suppose that µ is an ergodic T-invariant probability measure. Then
$$\lim_{N \to \infty} \frac{C_N(\log C_N + 1)}{N} = h \,.$$


Table 14.3. A dictionary obtained from Lempel–Ziv parsing

  Order of Appearance   Word
  1                     1
  2                     0
  3                     11
  4                     01
  5                     010
  6                     00
  7                     10
  ...                   ...

Hence for almost every sequence x ∈ X, the average length per symbol of the compressed sequence does not exceed the entropy of the sequence.

Here is a heuristic argument on the optimal performance of the Lempel–Ziv algorithm based on the Asymptotic Equipartition Property: We are given a finite binary sequence of length N. After it is parsed into C_N different blocks, the average block length is about
$$n \approx \frac{N}{C_N} \,.$$
Those C_N blocks may be regarded as typical blocks, and the Asymptotic Equipartition Property implies that there exist approximately 2^{(N/C_N)×h} typical blocks of length n ≈ N/C_N. Hence we have C_N ≈ 2^{(N/C_N)×h} and
$$\frac{C_N \log C_N}{N} \approx h \,,$$
which is what we have in Theorem 14.9.

Here is another heuristic argument based on the Ornstein–Weiss formula on the first return time: Since the shift transformation is ergodic, if we start from one of C_N typical subsets of length approximately equal to
$$n \approx \frac{N}{C_N} \,,$$
then the orbit should visit almost all typical subsets almost equally often. Thus we expect R_n ≈ C_N, and hence
$$h \approx \frac{\log R_n}{n} \approx \frac{\log C_N}{N/C_N} = \frac{C_N \log C_N}{N} \,.$$

See Maple Program 14.6.2.
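The convergence in Theorem 14.9 is slow but easy to observe empirically. The following Python experiment is ours, not the book's Maple program; the parameters p and N below are arbitrary choices. It parses a long Bernoulli sequence and evaluates C_N(log_2 C_N + 1)/N:

```python
import math
import random

def lz_phrase_count(bits):
    """Number of phrases C_N in the distinct parsing of a bit string."""
    seen, cur, count = set(), "", 0
    for b in bits:
        cur += b
        if cur not in seen:
            seen.add(cur)
            count += 1
            cur = ""
    return count

random.seed(1)
p = 0.2                                   # probability of the bit '0'
N = 200000
bits = "".join("0" if random.random() < p else "1" for _ in range(N))

C = lz_phrase_count(bits)
ratio = C * (math.log2(C) + 1) / N        # approximate compressed length per symbol
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
# ratio slowly approaches h = 0.7219... from above as N grows
```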


14.5 Arithmetic Coding

Arithmetic coding is based on the simple fact that binary digital information corresponds to a real number through binary expansion. It was introduced by P. Elias in an unpublished work, then improved by R. Pasco [Pas] and J. Rissanen [Ris]. Practical algorithms for arithmetic coding have been found relatively recently. For the history see the appendix in [Lan]. For more information consult [Dr], [HHJ].

We are given an idealized infinitely long data file. It is represented by a typical sequence x_0 in X = \prod_1^∞ {0,1}. The measure µ on X determined by x_0 through the Birkhoff Ergodic Theorem is shift invariant. Let h denote the entropy of T. On the collection of blocks of length n the lexicographic ordering is given: the n-block 0...00 comes first and 0...01 next, and so on until we have the n-block 1...1 last. In other words, if a_k ∈ {0,1}, 1 ≤ k ≤ n, satisfies
$$1 + \sum_{k=1}^{n} a_k 2^{k-1} = j \,,$$
then a_1 ... a_n is the jth block. Let p_j denote the probability of the jth block. Consider the partition of [0, 1] by the points t_0 = 0 and
$$t_k = \sum_{j=1}^{k} p_j \,, \qquad 1 \le k \le 2^n \,.$$
Note that t_{2^n} = 1. Now we have a correspondence between the set of all n-blocks and the set of the intervals
$$I_1 = [t_0, t_1) \,, \quad I_2 = [t_1, t_2) \,, \quad \ldots, \quad I_{2^n} = [t_{2^n - 1}, t_{2^n}] \,.$$
The Asymptotic Equipartition Property implies that among 2^n blocks of length n there are approximately 2^{nh} of them of measures approximately equal to 2^{−nh}. Other blocks form a set of negligible size. Equivalently, among the 2^n intervals I_1, ..., I_{2^n} there are about 2^{nh} of them of lengths approximately equal to 2^{−nh}. See Maple Program 14.6.3.

Now here is the basic idea behind arithmetic coding. Given an infinitely long data file represented by x_0 ∈ X, we consider the first n bits in x_0 = (a_1, ..., a_n, ...). With probability close to 1, the cylinder set C(n, x_0) = [a_1, ..., a_n]_{1,n} of length n containing x_0 is one of the typical n-blocks. Suppose that C(n, x_0) corresponds to an interval I_j, 1 ≤ j ≤ 2^n. Recall that the length of I_j is approximately equal to 2^{−nh}. Pick a dyadic rational number r(n, x_0) from I_j in such a way that the denominator of r(n, x_0) is minimal. This can be done by applying the bisection method repeatedly m times until we have 2^{−m} ≈ 2^{−nh}.


The arithmetic coding maps the first n bits in x_0 to the m bits in the binary expansion of r(n, x_0). The compression ratio is given by
$$\frac{m}{n} \approx h \,,$$
which shows that the arithmetic coding is optimal.
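The interval construction and the bisection step above can be sketched in a few lines. This is an illustrative Python fragment under our own naming (`block_interval`, `dyadic_point`); the book's own visualization is Maple Program 14.6.3:

```python
from itertools import product

def block_interval(block, p):
    """Interval [lo, hi) assigned to a binary n-block when the blocks are ordered
    lexicographically and weighted by a (p, 1-p)-Bernoulli measure."""
    lo = 0.0
    for b in product("01", repeat=len(block)):
        w = "".join(b)
        prob = 1.0
        for bit in w:
            prob *= p if bit == "0" else 1.0 - p
        if w == block:
            return lo, lo + prob
        lo += prob
    raise ValueError("not a binary block")

def dyadic_point(lo, hi):
    """Bisect [0,1] until a midpoint falls inside [lo, hi); the number of steps m
    is the codeword length, with 2^-m comparable to the interval length."""
    a, b, m = 0.0, 1.0, 0
    while True:
        m += 1
        mid = (a + b) / 2
        if lo <= mid < hi:
            return mid, m
        if mid < lo:
            a = mid
        else:
            b = mid

lo, hi = block_interval("0101", p=1/3)  # interval of length (1/3)^2 (2/3)^2
r, m = dyadic_point(lo, hi)             # dyadic rational r = (odd)/2^m inside it
```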

14.6 Maple Programs

We present Maple programs for two data compression algorithms: Huffman coding and Lempel–Ziv coding. Readers should increase the lengths of input sequences to observe that the compression ratio really converges to entropy. A method to visualize typical subsets in arithmetic coding is also given.

14.6.1 Huffman coding

The following Maple program for the Huffman algorithm is a slight modification of a program written by B.K. Seo.
> z:=4:
> p:=1./z;
                    p := 0.2500000000
> q:=1-p;
                    q := 0.7500000000
Find the entropy of the (p, q)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-q*log[2.](q);
                    entropy := 0.8112781245
Choose the number of bits, which is the length of a binary sequence.
> N:=300:
Generate a typical (p, q)-Bernoulli sequence of length N. Increase N gradually to check the convergence of the compression ratio to entropy.
> ran:=rand(0..z-1):
> for i from 1 to N do
>   x[i]:=ceil(ran()/(z-1)):
> od:
Print the first 30 bits.
> seq(x[j],j=1..30);
1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
The following number should be close to q.
> evalf(add(x[j],j=1..N)/N);
                    0.7833333333
Choose a divisor of N.


> n:=3:
Divide the sequence x[i] into blocks of length n. Find the number of those blocks.
> Number_block:=N/n;
                    Number_block := 100
List the n-blocks obtained from the sequence x[i].
> for i from 1 to N by n do
>   for j from 0 to n-1 do
>     printf("%d",x[i+j]);
>   od;
>   printf("%c"," ");
> od;

111 101 101 111 101 101 111 111 111 111 110 111 001 111 101 011 111
011 111 111 101 111 110 111 111 111 100 111 101 110 111 111 010 110
110 111 111 110 011 110 101 110 101 101 110 110 100 101 111 111 100
111 110 011 111 011 111 111 111 100 010 011 110 111 110 111 001 111
111 111 011 111 101 101 110 101 011 111 111 111 111 111 011 101 110
111 111 011 111 110 101 110 110 111 111 010 111 011 111 001

Calculate the frequencies of the n-blocks.
> for k from 0 to 2^n-1 do
>   freq[k]:=0:
> od:
> for i from 1 to N by n do
>   k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>   freq[k]:=freq[k]+1:
> od:
Two branches in a Huffman tree meet at a node. To each node we assign three objects A, B, C where A is the branches under the node, B is the set of source symbols under the node, and C is the sum of frequencies under the node.
> for i from 0 to 2^n-1 do
>   Node[i]:=[i,{i},freq[i]]:
> od:
> seq(Node[i],i=0..2^n-1);
[0, {0}, 0], [1, {1}, 3], [2, {2}, 3], [3, {3}, 11], [4, {4}, 4], [5, {5}, 16], [6, {6}, 18], [7, {7}, 45]
List the frequencies of the n-blocks.
> for i from 0 to 2^n-1 do
>   printf("%c","(");
>   for j from 1 to n do
>     printf("%d",modp(floor(Node[i][1]/2^(n-j)),2));
>   od;
>   printf(",%d",Node[i][3]);
>   printf("%c ",")");
> od;
(000,0) (001,3) (010,3) (011,11) (100,4) (101,16) (110,18) (111,45)



Sort the n-blocks according to the frequencies.
> for i from 0 to 2^n-1 do
>   for j from 0 to 2^n-2-i do
>     if Node[j][3] > Node[j+1][3] then
>       temp:=Node[j];
>       Node[j]:=Node[j+1];
>       Node[j+1]:=temp;
>     fi:
>   od;
> od;
List the n-blocks together with their frequencies in the increasing order of frequencies.
> for i from 0 to 2^n-1 do
>   printf("%c", "(");
>   for j from 1 to n do
>     printf("%d", modp(floor(Node[i][1]/2^(n-j)),2));
>   od;
>   printf(",%d", Node[i][3]);
>   printf("%c ", ")");
> od;
(000,0) (001,3) (010,3) (100,4) (011,11) (101,16) (110,18) (111,45)

Build the Huffman tree and define the coding map at the same time.
> for i from 0 to 2^n-1 do
>   Code[i] := [0,0]:
> od:
We need commands for set operations: union, in, nops. See Subsect. 1.8.2.
> for i from 0 to 2^n-2 do
>   for j from 1 to nops(Node[i][2]) do
>     Code[Node[i][2][j]][1] := Code[Node[i][2][j]][1] + 0*2^(Code[Node[i][2][j]][2]):
>     Code[Node[i][2][j]][2] := Code[Node[i][2][j]][2] + 1:
>   od:
>   for j from 1 to nops(Node[i+1][2]) do
>     Code[Node[i+1][2][j]][1] := Code[Node[i+1][2][j]][1] + 1*2^(Code[Node[i+1][2][j]][2]):
>     Code[Node[i+1][2][j]][2] := Code[Node[i+1][2][j]][2] + 1:
>   od:
>   Node[i+1][1] := [Node[i][1],Node[i+1][1]]:
>   Node[i+1][2] := Node[i][2] union Node[i+1][2]:
>   Node[i+1][3] := Node[i][3] + Node[i+1][3]:
>   for j from i+1 to 2^n-2 do
>     if Node[j][3] >= Node[j+1][3] then
>       temp:=Node[j]:
>       Node[j]:=Node[j+1]:
>       Node[j+1]:=temp:
>     fi:
>   od:
> od:



The following information is equivalent to a Huffman tree that we look for.
> Node[2^n-1][1];
                    [7, [[[4, [2, [0, 1]]], 3], [5, 6]]]
Check the additional information.
> Node[2^n-1][2];
                    {0, 1, 2, 3, 4, 5, 6, 7}
> Node[2^n-1][3];
                    100
> seq(Code[i],i=0..2^n-1);
[38, 6], [39, 6], [18, 5], [5, 3], [8, 4], [6, 3], [7, 3], [0, 1]

> for i from 0 to 2^n-1 do
>   printf("%c"," ");
>   for j from 1 to n do
>     printf("%d",modp(floor(i/2^(n-j)),2));
>   od;
>   printf("-->");
>   for j from 1 to Code[i][2] do
>     printf("%d",modp(floor(Code[i][1]/2^(Code[i][2]-j)),2));
>   od;
>   printf("%c"," ");
> od:

000-->100110 001-->100111 010-->10010 011-->101 100-->1000 101-->110 110-->111 111-->0

In the following c[i], 1 ≤ i ≤ L, is the coded sequence.
> L:=0:
> for i from 1 to N by n do
>   k:=add(x[i+d]*2^(n-1-d),d=0..n-1):
>   s:=Code[k][1]:
>   for j from 0 to Code[k][2]-1 do
>     c[L+Code[k][2]-j]:=modp(s,2):
>     s:=floor(s/2):
>   od:
>   L:=L+Code[k][2]:
> od:
The following is the length of the coded sequence.
> L;
                    229
> for i from 1 to L do
>   printf("%d",c[i]):
> od;
011011001101100000111010011101101010101001100111000100001101
110010010111111001111011111101111101101111111000110001000011
110101010001000100101011110111010011100010101101101111101010
0000101110111001010111110111111001001001010100111


Compare the following with entropy.
> Compression_ratio:=evalf(L/N);
                    Compression_ratio := 0.7633333333
Define the decoding map.
> for i from 0 to 2^n-1 do
>   Decode(Code[i]) := i:
> od:
The following defines a set. Note that the order structure is now lost.
> Code_words:={seq(Code[i],i=0..2^n-1)};
    Code_words := {[0, 1], [6, 3], [7, 3], [38, 6], [39, 6], [18, 5], [5, 3], [8, 4]}
> M:=0:
> w:=0:
> d:=0:
> for i from 1 to L do
>   w:=w*2+c[i]:
>   d:=d+1:
>   if [w,d] in Code_words then
>     for j from 1 to n do
>       y[M+j]:=modp(floor(Decode([w,d])/2^(n-j)),2):
>     od:
>     M:=M+n:
>     w:=0:
>     d:=0:
>   fi:
> od:
The following is the length of the decoded sequence.
> M;
                    300
Check whether the decoding recovers the original input sequence.
> add(abs(x[i]-y[i]),i=1..N);
                    0

14.6.2 Lempel–Ziv coding

A Maple program for the Lempel–Ziv algorithm is presented.
> z:=5:
> p:=1/z:
> q:=1.-p:
Calculate the entropy of the (p, q)-Bernoulli shift.
> entropy:=-p*log[2.](p)-q*log[2.](q);
                    entropy := 0.7219280949
> ran:=rand(0..z-1):
Choose the length of an input sequence.
> N:=50:




Choose a considerably larger value for N to check the convergence of the compression ratio to entropy as N → ∞.
> for i from 1 to N do
>   x[i]:=ceil(ran()/(z-1)):
> od:
Generate a typical (p, q)-Bernoulli sequence of length N.
> seq(x[i],i=1..N);
1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0
Calculate the relative frequency of the bit ‘1’, which should be approximately equal to q.
> evalf(add(x[i],i=1..N)/N);
                    0.7800000000
> C:=0:
After an input sequence is parsed, a phrase is regarded as a binary number, which is again represented as a binary block. To distinguish blocks such as ‘01’ and ‘1’ we place the leading bit ‘1’ in front of them as a marker, so the two blocks ‘01’ and ‘1’ are now written as ‘101’ and ‘11’.
> for i from 1 to N do
>   block:=2+x[i]:
>   address:=0:
>   for j from 1 to C do
>     if block=phrase[j] then
>       address:=j:
>       i:=i+1:
>       block:=block*2+x[i]:
>     fi:
>   od:
>   C:=C+1:
>   phrase[C]:=block:
>   LZ[C]:=[address,x[i]]:
> od:
Find the number of parsed phrases.
> C;
                    16
> for i from 1 to C do
>   num_bit[i]:=max(1,ceil(log[2.](max(1,LZ[i][1]))))+1;
> od:
> seq(num_bit[i],i=1..C);
2, 2, 2, 3, 3, 2, 2, 4, 4, 5, 5, 4, 4, 4, 5, 3
> seq(phrase[i],i=1..C);
3, 2, 7, 14, 15, 5, 6, 13, 31, 63, 127, 11, 26, 27, 23, 28 + x51
For a sequence of parsed phrases, delete the leading bit ‘1’ in the following binary blocks.
> seq(convert(phrase[i],binary),i=1..C-1);



11, 10, 111, 1110, 1111, 101, 110, 1101, 11111, 111111, 1111111, 1011, 11010, 11011, 10111 > > >

for i from 1 to C do power[i]:=trunc(log[2.](phrase[i]))+1; od:

Find the parsed phrases. > for i from 1 to C-1 do > for j from 2 to power[i] do > printf("%d",modp(floor(phrase[i]/2^(power[i]-j)),2)): > od: > printf("%c", " "): > od: 1 0 11 110 111 01 10 101 1111 11111 111111 011 1010 1011 0111

Here is the Lempel–Ziv dictionary. >

seq(LZ[i],i=1..C);

[0, 1], [0, 0], [1, 1], [3, 0], [3, 1], [2, 1], [1, 0], [7, 1], [5, 1], [9, 1], [10, 1], [6, 1], [8, 0], [8, 1], [12, 1], [4, x51]
Compare the following with entropy.
>

Compression_ratio:=evalf(add(num_bit[i],i=1..C)/N);

                    Compression_ratio := 1.080000000
The compression ratio is greater than 1! We need a longer input sequence to observe compression in this case. Now we decode the compressed sequence back to the original sequence.
> n :=0:
> for i from 1 to C do
>   a := LZ[i]:
>   for j from 1 to i do
>     Decoded_Bit[j] := a[2]:
>     if a[1] = 0 then leng := j: break:
>     else a := LZ[a[1]]:
>     fi:
>   od:
>   for k from 1 to leng do
>     y[n+k] := Decoded_Bit[leng-k+1]:
>   od:
>   n := n + leng:
> od:
In the preceding program the command break is used to exit from the do loop on j.
> seq(y[i],i=1..N);
1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0



14.6.3 Typical intervals arising from arithmetic coding

We present a method of visualizing the intervals corresponding to the typical blocks in the Asymptotic Equipartition Property.
> with(plots):
> with(plottools):
Consider the (p, 1 − p)-Bernoulli shift.
> z:=3;
                    z := 3
> p:=1./z;
                    p := 0.3333333333
Find the entropy of the (p, 1 − p)-Bernoulli sequence.
> entropy:=-p*log[2.](p)-(1-p)*log[2.](1-p);
                    entropy := 0.9182958340
Choose the length of a binary sequence.
> n:=7:
> 2^n;
                    128
Generate a typical (p, 1 − p)-Bernoulli sequence of length n.
> ran:=rand(0..z-1):
> for i from 1 to n do
>   a[i]:=ceil(ran()/(z-1)):
> od:
The string a_1 ... a_n is regarded as a binary integer. It is converted into a decimal integer J, 0 ≤ J < 2^n.
> seq(a[i],i=1..n);
                    0, 1, 0, 1, 1, 1, 1
> J:=add(a[k]*2^(k-1),k=1..n);
                    J := 122
Calculate the probabilities of the n-blocks.
> for j from 0 to 2^n-1 do
>   temp:=j:
>   for k from 1 to n do
>     b[n-k+1]:=modp(temp,2):
>     temp:=(temp-b[n-k+1])/2:
>   od:
>   num_1:=add(b[k],k=1..n):
>   prob[j]:=p^(n-num_1)*(1-p)^num_1:
> od:
The sum of all probabilities should be equal to 1.
> add(prob[j],j=0..2^n-1);
                    1.000000000
Calculate the cumulative probabilities.



> t[-1]:=0:
> for j from 0 to 2^n-1 do
>   t[j]:=t[j-1] + prob[j]:
> od:
The following should be equal to 1.
> t[2^n-1];
                    1.000000000
Draw the short vertical lines dividing the intervals.
> L[-1]:=line([0,-0.1],[0,0.1]):
> for j from 0 to 2^n-1 do
>   L[j]:=line([t[j],-0.1],[t[j],0.1]):
> od:
Plot the representation of the partition of [0, 1] by 2^n intervals.
> g1:=display(seq(L[j],j=-1..2^n-1)):
> g2:=plot(0,x=0..1,y=-0.5..0.5,axes=none):
> g_circ:=pointplot([(t[J-1]+t[J])/2,0.15],symbol=circle):
> display(g1,g2,g_circ);
The Jth interval under consideration is indicated by a circle. See Fig. 14.5.

[Figure: the unit interval from 0 to 1 divided into 2^n subintervals, with the Jth interval marked by a circle above it.]
Fig. 14.5. A partition of the unit interval in the arithmetic coding

Calculate an approximate number of the typical intervals corresponding to the typical n-blocks.
> round(2^(n*entropy));
                    86
Calculate a value that approximates the lengths of the intervals corresponding to the typical n-blocks.
> evalf(2^(-n*entropy));
                    0.01161335932
> prob[J];
                    0.01463191587
