The Mathematica GuideBook for Symbolics
Michael Trott
The Mathematica GuideBook for Symbolics
With 848 Illustration...

Author:
Michael Trott

This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!

The Mathematica GuideBook for Symbolics

Michael Trott

The Mathematica GuideBook for Symbolics

With 848 Illustrations

Michael Trott Wolfram Research Champaign, Illinois

Mathematica is a registered trademark of Wolfram Research, Inc. Library of Congress Control Number: 2005928496 ISBN-10: 0-387-95020-6 ISBN-13: 978-0387-95020-4 e-ISBN: 0-387-28815-5

Printed on acid-free paper.

2006 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring St., New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America. 9 8 7 6 5 4 3 2 1 springeronline.com

(HAM)

Preface Bei mathematischen Operationen kann sogar eine gänzliche Entlastung des Kopfes eintreten, indem man einmal ausgeführte Zähloperationen mit Zeichen symbolisiert und, statt die Hirnfunktion auf Wiederholung schon ausgeführter Operationen zu verschwenden, sie für wichtigere Fälle aufspart. When doing mathematics, instead of burdening the brain with the repetitive job of redoing numerical operations which have already been done before, it’s possible to save that brainpower for more important situations by using symbols, instead, to represent those numerical calculations. — Ernst Mach (1883) [45]

Computer Mathematics and Mathematica Computers were initially developed to expedite numerical calculations. A newer, and in the long run, very fruitful field is the manipulation of symbolic expressions. When these symbolic expressions represent mathematical entities, this field is generally called computer algebra [8]. Computer algebra begins with relatively elementary operations, such as addition and multiplication of symbolic expressions, and includes such things as factorization of integers and polynomials, exact linear algebra, solution of systems of equations, and logical operations. It also includes analysis operations, such as definite and indefinite integration, the solution of linear and nonlinear ordinary and partial differential equations, series expansions, and residue calculations. Today, with computer algebra systems, it is possible to calculate in minutes or hours the results that would (and did) take years to accomplish by paper and pencil. One classic example is the calculation of the orbit of the moon, which took the French astronomer Delaunay 20 years [12], [13], [14], [15], [11], [26], [27], [53], [16], [17], [25]. (The Mathematica GuideBooks cover the two other historic examples of calculations that, at the end of the 19th century, took researchers many years of hand calculations [1], [4], [38] and literally thousands of pages of paper.) Along with the ability to do symbolic calculations, four other ingredients of modern general-purpose computer algebra systems prove to be of critical importance for solving scientific problems: † a powerful high-level programming language to formulate complicated problems † programmable two- and three-dimensional graphics † robust, adaptive numerical methods, including arbitrary precision and interval arithmetic † the ability to numerically evaluate and symbolically deal with the classical orthogonal polynomials and special functions of mathematical physics. The most widely used, complete, and advanced general-purpose computer algebra system is Mathematica. Mathematica provides a variety of capabilities such as graphics, numerics, symbolics, standardized interfaces to other programs, a complete electronic document-creation environment (including a full-fledged mathematical typesetting system), and a variety of import and export capabilities. Most of these ingredients are necessary to coherently and exhaustively solve problems and model processes occurring in the natural sciences [41], [58], [21], [39] and other fields using constructive mathematics, as well as to properly represent the results. Conse-

Preface

vi

quently, Mathematica’s main areas of application are presently in the natural sciences, engineering, pure and applied mathematics, economics, finance, computer graphics, and computer science. Mathematica is an ideal environment for doing general scientific and engineering calculations, for investigating and solving many different mathematically expressable problems, for visualizing them, and for writing notes, reports, and papers about them. Thus, Mathematica is an integrated computing environment, meaning it is what is also called a “problem-solving environment” [40], [23], [6], [48], [43], [50], [52].

Scope and Goals The Mathematica GuideBooks are four independent books whose main focus is to show how to solve scientific problems with Mathematica. Each book addresses one of the four ingredients to solve nontrivial and real-life mathematically formulated problems: programming, graphics, numerics, and symbolics. The Programming and the Graphics volume were published in autumn 2004. The four Mathematica GuideBooks discuss programming, two-dimensional, and three-dimensional graphics, numerics, and symbolics (including special functions). While the four books build on each other, each one is self-contained. Each book discusses the definition, use, and unique features of the corresponding Mathematica functions, gives small and large application examples with detailed references, and includes an extensive set of relevant exercises and solutions. The GuideBooks have three primary goals: † to give the reader a solid working knowledge of Mathematica † to give the reader a detailed knowledge of key aspects of Mathematica needed to create the “best”, fastest, shortest, and most elegant solutions to problems from the natural sciences † to convince the reader that working with Mathematica can be a quite fruitful, enlightening, and joyful way of cooperation between a computer and a human. Realizing these goals is achieved by understanding the unifying design and philosophy behind the Mathematica system through discussing and solving numerous example-type problems. While a variety of mathematics and physics problems are discussed, the GuideBooks are not mathematics or physics books (from the point of view of content and rigor; no proofs are typically involved), but rather the author builds on Mathematica’s mathematical and scientific knowledge to explore, solve, and visualize a variety of applied problems. The focus on solving problems implies a focus on the computational engine of Mathematica, the kernel—rather than on the user interface of Mathematica, the front end. (Nevertheless, for a nicer presentation inside the electronic version, various front end features are used, but are not discussed in depth.) The Mathematica GuideBooks go far beyond the scope of a pure introduction into Mathematica. The books also present instructive implementations, explanations, and examples that are, for the most part, original. The books also discuss some “classical” Mathematica implementations, explanations, and examples, partially available only in the original literature referenced or from newsgroups threads. In addition to introducing Mathematica, the GuideBooks serve as a guide for generating fairly complicated graphics and for solving more advanced problems using graphical, numerical, and symbolical techniques in cooperative ways. The emphasis is on the Mathematica part of the solution, but the author employs examples that are not uninteresting from a content point of view. After studying the GuideBooks, the reader will be able to solve new and old scientific, engineering, and recreational mathematics problems faster and more completely with the help of Mathematica—at least, this is the author’s goal. The author also hopes that the reader will enjoy

Preface

vii

using Mathematica for visualization of the results as much as the author does, as well as just studying Mathematica as a language on its own. In the same way that computer algebra systems are not “proof machines” [46], [9], [37], [10], [54], [55], [56] such as might be used to establish the four-color theorem ([2], [22]), the Kepler [28], [19], [29], [30], [31], [32], [33], [34], [35], [36] or the Robbins ([44], [20]) conjectures, proving theorems is not the central theme of the GuideBooks. However, powerful and general proof machines [9], [42], [49], [24], [3], founded on Mathematica’ s general programming paradigms and its mathematical capabilities, have been built (one such system is Theorema [7]). And, in the GuideBooks, we occasionally prove one theorem or another theorem. In general, the author’s aim is to present a realistic portrait of Mathematica: its use, its usefulness, and its strengths, including some current weak points and sometimes unexpected, but often nevertheless quite “thought through”, behavior. Mathematica is not a universal tool to solve arbitrary problems which can be formulated mathematically—only a fraction of all mathematical problems can even be formulated in such a way to be efficiently expressed today in a way understandable to a computer. Rather, it is often necessary to do a certain amount of programming and occasionally give Mathematica some “help” instead of simply calling a single function like Solve to solve a system of equations. Because this will almost always be the case for “real-life” problems, we do not restrict ourselves only to “textbook” examples, where all goes smoothly without unexpected problems and obstacles. The reader will see that by employing Mathematica’s programming, numeric, symbolic, and graphic power, Mathematica can offer more effective, complete, straightforward, reusable, and less likely erroneous solution methods for calculations than paper and pencil, or numerical programming languages. Although the Guidebooks are large books, it is nevertheless impossible to discuss all of the 2,000+ built-in Mathematica commands. So, some simple as well as some more complicated commands have been omitted. For a full overview about Mathematica’s capabilities, it is necessary to study The Mathematica Book [60] in detail. The commands discussed in the Guidebooks are those that an scientist or research engineer needs for solving typical problems, if such a thing exists [18]. These subjects include a quite detailed discussion of the structure of Mathematica expressions, Mathematica input and output (important for the human–Mathematica interaction), graphics, numerical calculations, and calculations from classical analysis. Also, emphasis is given to the powerful algebraic manipulation functions. Interestingly, they frequently allow one to solve analysis problems in an algorithmic way [5]. These functions are typically not so well known because they are not taught in classical engineering or physics-mathematics courses, but with the advance of computers doing symbolic mathematics, their importance increases [47]. A thorough knowledge of: † structural operations on polynomials, rational functions, and trigonometric functions † algebraic operations on polynomial equations and inequalities † process of compilation, its advantages and limits † main operations of calculus—univariate and multivariate differentiation and integration † solution of ordinary and partial differential equations is needed to put the heart of Mathematica—its symbolic capabilities—efficiently and successfully to work in the solution of model and real-life problems. The Mathematica GuideBooks to Symbolics discusses these subjects. The current version of the Mathematica GuideBooks is tailored for Mathematica Version 5.1.

viii

Preface

Content Overview The Mathematica GuideBook for Symbolics has three chapters. Each chapter is subdivided into sections (which occasionally have subsections), exercises, solutions to the exercises, and references. This fourth and last volume of the GuideBooks deals with Mathematica’s symbolic mathematical capabilities—the real heart of Mathematica and the ingredient of the Mathematica software system that makes it so unique and powerful. In addition, this volume discusses and employs the classical orthogonal polynomials and special functions of mathematical physics. To demonstrate the symbolic mathematics power, a variety of problems from mathematics and physics are discussed. Chapter 1 starts with a discussion of the algebraic functions needed to carry out analysis problems effectively. Contrary to classical science/engineering mathematics education, using a computer algebra system makes it often a good idea to rephrase a problem—including when it is from analysis—in a polynomial way to allow for powerful algorithmic treatments. Gröbner bases play a central role in accomplishing this task. This volume discusses in detail the main functions to deal with structural operations on polynomials, polynomial equations and inequalities, and expressions containing quantified variables. Rational functions and expressions containing trigonometric functions are dealt with next. Then the central problems of classical analysis—differentiation, integration, summation, series expansion, and limits—are discussed in detail. The symbolic solving of ordinary and partial differential equations is demonstrated in many examples. As always, a variety of examples show how to employ the discussed functions in various mathematics or physics problems. The Symbolics volume emphasizes their main uses and discusses the specialities of these operations inside a computer algebra system, as compared to a “manual” calculation. Then, generalized functions and Fourier and Laplace transforms are discussed. The main part of the chapter culminates with three examples of larger symbolic calculations, two of them being classic problems. This chapter has more than 150 exercises and solutions treating a variety of symbolic computation examples from the sciences. Chapters 2 and 3 discuss classical orthogonal polynomials and the special functions of mathematical physics. Because this volume is not a treatise on special functions, it is restricted to selected function groups and presents only their basic properties, associated differential equations, normalizations, series expansions, verification of various special cases, etc. The availability of nearly all of the special functions of mathematical physics for all possible arbitrary complex parameters opens new possibilities for the user, e.g., the use of closed formulas for the Green’s functions of commonly occurring partial differential equations or for “experimental mathematics”. These chapters focus on the use of the special functions in a number of physics-related applications in the text as well as in the exercises. The larger examples deal with are the quartic oscillator in the harmonic oscillator basis and the implementation of Felix Klein’s method to solve quintic polynomials in Gauss hypergeometric functions 2F1 . The Symbolics volume employs the built-in symbolic mathematics in a variety of examples. However, the underlying algorithms themselves are not discussed. Many of them are mathematically advanced and outside of the scope of the GuideBooks. Throughout the Symbolics volume, the programming and graphics experience acquired in the first two volumes is used to visualize various mathematics and physics topics.

Preface

ix

The Books and the Accompanying DVDs Each of the GuideBooks comes with a multiplatform DVD. Each DVD contains the fourteen main notebooks, the hyperlinked table of contents and index, a navigation palette, and some utility notebooks and files. All notebooks are tailored for Mathematica 5.1. Each of the main notebooks corresponds to a chapter from the printed books. The notebooks have the look and feel of a printed book, containing structured units, typeset formulas, Mathematica code, and complete solutions to all exercises. The DVDs contain the fully evaluated notebooks corresponding to the chapters of the corresponding printed book (meaning these notebooks have text, inputs, outputs and graphics). The DVDs also include the unevaluated versions of the notebooks of the other three GuideBooks (meaning they contain all text and Mathematica code, but no outputs and graphics). Although the Mathematica GuideBooks are printed, Mathematica is “a system for doing mathematics by computer” [59]. This was the lovely tagline of earlier versions of Mathematica, but because of its growing breadth (like data import, export and handling, operating system-independent file system operations, electronic publishing capabilities, web connectivity), nowadays Mathematica is called a “system for technical computing”. The original tagline (that is more than ever valid today!) emphasized two points: doing mathematics and doing it on a computer. The approach and content of the GuideBooks are fully in the spirit of the original tagline: They are centered around doing mathematics. The second point of the tagline expresses that an electronic version of the GuideBooks is the more natural medium for Mathematica-related material. Long outputs returned by Mathematica, sequences of animations, thousands of web-retrievable references, a 10,000-entry hyperlinked index (that points more precisely than a printed index does) are space-consuming, and therefore not well suited for the printed book. As an interactive program, Mathematica is best learned, used, challenged, and enjoyed while sitting in front of a powerful computer (or by having a remote kernel connection to a powerful computer). In addition to simply showing the printed book’s text, the notebooks allow the reader to: † experiment with, reuse, adapt, and extend functions and code † investigate parameter dependencies † annotate text, code, and formulas † view graphics in color † run animations.

The Accompanying Web Site Why does a printed book need a home page? There are (in addition to being just trendy) two reasons for a printed book to have its fingerprints on the web. The first is for (Mathematica) users who have not seen the book so far. Having an outline and content sample on the web is easily accomplished, and shows the look and feel of the notebooks (including some animations). This is something that a printed book actually cannot do. The second reason is for readers of the book: Mathematica is a large modern software system. As such, it ages quickly in the sense that in the timescale of 101. smallIntegermonths, a new version will likely be available. The overwhelmingly large majority of Mathematica functions and programs will run unchanged in a new version. But occasionally, changes and adaptations might be needed. To accommodate this, the web site of this book—http://www.MathematicaGuideBooks.org—contains a list of changes relevant to the GuideBooks. In addition, like any larger software project, unavoidably, the GuideBooks will contain suboptimal implementations, mistakes, omissions, imperfections, and errors. As they come to his attention, the author will list them at

Preface

x

the book’s web site. Updates to references, corrections [51], hundreds of pages of additional exercises and solutions, improved code segments, and other relevant information will be on the web site as well. Also, information about OS-dependent and Mathematica version-related changes of the given Mathematica code will be available there.

Evolution of the Mathematica GuideBooks A few words about the history and the original purpose of the GuideBooks: They started from lecture notes of an Introductory Course in Mathematica 2 and an advanced course on the Efficient Use of the Mathematica Programming System, given in 1991/1992 at the Technical University of Ilmenau, Germany. Since then, after each release of a new version of Mathematica, the material has been updated to incorporate additional functionality. This electronic/printed publication contains text, unique graphics, editable formulas, runable, and modifiable programs, all made possible by the electronic publishing capabilities of Mathematica. However, because the structure, functions and examples of the original lecture notes have been kept, an abbreviated form of the GuideBooks is still suitable for courses. Since 1992 the manuscript has grown in size from 1,600 pages to more than three times its original length, finally “weighing in” at nearly 5,000 printed book pages with more than: † 18 gigabytes of accompanying Mathematica notebooks † 22,000 Mathematica inputs with more than 13,000 code comments † 11,000 references † 4,000 graphics † 1,000 fully solved exercises † 150 animations. This first edition of this book is the result of more than eleven years of writing and daily work with Mathematica. In these years, Mathematica gained hundreds of functions with increased functionality and power. A modern year-2005 computer equipped with Mathematica represents a computational power available only a few years ago to a select number of people [57] and allows one to carry out recreational or new computations and visualizations—unlimited in nature, scope, and complexity— quickly and easily. Over the years, the author has learned a lot of Mathematica and its current and potential applications, and has had a lot of fun, enlightening moments and satisfaction applying Mathematica to a variety of research and recreational areas, especially graphics. The author hopes the reader will have a similar experience.

Disclaimer In addition to the usual disclaimer that neither the author nor the publisher guarantees the correctness of any formula, fitness, or reliability of any of the code pieces given in this book, another remark should be made. No guarantee is given that running the Mathematica code shown in the GuideBooks will give identical results to the printed ones. On the contrary, taking into account that Mathematica is a large and complicated software system which evolves with each released version, running the code with another version of Mathematica (or sometimes even on another operating system) will very likely result in different outputs for some inputs. And, as a consequence, if different outputs are generated early in a longer calculation, some functions might hang or return useless results.

Preface

xi

The interpretations of Mathematica commands, their descriptions, and uses belong solely to the author. They are not claimed, supported, validated, or enforced by Wolfram Research. The reader will find that the author’s view on Mathematica deviates sometimes considerably from those found in other books. The author’s view is more on the formal than on the pragmatic side. The author does not hold the opinion that any Mathematica input has to have an immediate semantic meaning. Mathematica is an extremely rich system, especially from the language point of view. It is instructive, interesting, and fun to study the behavior of built-in Mathematica functions when called with a variety of arguments (like unevaluated, hold, including undercover zeros, etc.). It is the author’s strong belief that doing this and being able to explain the observed behavior will be, in the long term, very fruitful for the reader because it develops the ability to recognize the uniformity of the principles underlying Mathematica and to make constructive, imaginative, and effective use of this uniformity. Also, some exercises ask the reader to investigate certain “unusual” inputs. From time to time, the author makes use of undocumented features and/or functions from the Developer` and Experimental` contexts (in later versions of Mathematica these functions could exist in the System` context or could have different names). However, some such functions might no longer be supported or even exist in later versions of Mathematica.

Acknowledgements Over the decade, the GuideBooks were in development, many people have seen parts of them and suggested useful changes, additions, and edits. I would like to thank Horst Finsterbusch, Gottfried Teichmann, Klaus Voss, Udo Krause, Jerry Keiper, David Withoff, and Yu He for their critical examination of early versions of the manuscript and their useful suggestions, and Sabine Trott for the first proofreading of the German manuscript. I also want to thank the participants of the original lectures for many useful discussions. My thanks go to the reviewers of this book: John Novak, Alec Schramm, Paul Abbott, Jim Feagin, Richard Palmer, Ward Hanson, Stan Wagon, and Markus van Almsick, for their suggestions and ideas for improvement. I thank Richard Crandall, Allan Hayes, Andrzej Kozlowski, Hartmut Wolf, Stephan Leibbrandt, George Kambouroglou, Domenico Minunni, Eric Weisstein, Andy Shiekh, Arthur G. Hubbard, Jay Warrendorff, Allan Cortzen, Ed Pegg, and Udo Krause for comments on the prepublication version of the GuideBooks. I thank Bobby R. Treat, Arthur G. Hubbard, Murray Eisenberg, Marvin Schaefer, Marek Duszynski, Daniel Lichtblau, Devendra Kapadia, Adam Strzebonski, Anton Antonov, and Brett Champion for useful comments on the Mathematica Version 5.1 tailored version of the GuideBooks. My thanks are due to Gerhard Gobsch of the Institute for Physics of the Technical University in Ilmenau for the opportunity to develop and give these original lectures at the Institute, and to Stephen Wolfram who encouraged and supported me on this project. Concerning the process of making the Mathematica GuideBooks from a set of lecture notes, I thank Glenn Scholebo for transforming notebooks to T E X files, and Joe Kaiping for T E X work related to the printed book. I thank John Novak and Jan Progen for putting all the material into good English style and grammar, John Bonadies for the chapter-opener graphics of the book, and Jean Buck for library work. I especially thank John Novak for the creation of Mathematica 3 notebooks from the T E X files, and Andre Kuzniarek for his work on the stylesheet to give the notebooks a pleasing appearance. My thanks go to Andy Hunt who created a specialized stylesheet for the actual book printout and printed and formatted the 4×1000+ pages of the Mathematica GuideBooks. I thank Andy Hunt for making a first version of the homepage of the GuideBooks and Amy Young for creating the current version of the homepage of the GuideBooks. I thank Sophie Young for a final check of the English. My largest thanks go to Amy Young, who encouraged me to update the whole book over the years and who had a close look at all of my English writing and often improved it considerably. Despite reviews by

xii

Preface

many individuals any remaining mistakes or omissions, in the Mathematica code, in the mathematics, in the description of the Mathematica functions, in the English, or in the references, etc. are, of course, solely mine. Let me take the opportunity to thank members of the Research and Development team of Wolfram Research whom I have met throughout the years, especially Victor Adamchik, Anton Antonov, Alexei Bocharov, Arnoud Buzing, Brett Champion, Matthew Cook, Todd Gayley, Darren Glosemeyer, Roger Germundsson, Unal Goktas, Yifan Hu, Devendra Kapadia, Zbigniew Leyk, David Librik, Daniel Lichtblau, Jerry Keiper, Robert Knapp, Roman Mäder, Oleg Marichev, John Novak, Peter Overmann, Oleksandr Pavlyk, Ulises Cervantes–Pimentel, Mark Sofroniou, Adam Strzebonski, Oyvind Tafjord, Robby Villegas, Tom Wickham–Jones, David Withoff, and Stephen Wolfram for numerous discussions about design principles, various small details, underlying algorithms, efficient implementation of various procedures, and tricks concerning Mathematica. The appearance of the notebooks profited from discussions with John Fultz, Paul Hinton, John Novak, Lou D’Andria, Theodore Gray, Andre Kuzniarek, Jason Harris, Andy Hunt, Christopher Carlson, Robert Raguet–Schofield, George Beck, Kai Xin, Chris Hill, and Neil Soiffer about front end, button, and typesetting issues. It was an interesting and unique experience to work over the last 12 years with five editors: Allan Wylde, Paul Wellin, Maria Taylor, Wayne Yuhasz, and Ann Kostant, with whom the GuideBooks were finally published. Many book-related discussions that ultimately improved the GuideBooks, have been carried out with Jan Benes from TELOS and associates, Steven Pisano, Jenny Wolkowicki, Henry Krell, Fred Bartlett, Vaishali Damle, Ken Quinn, Jerry Lyons, and Rüdiger Gebauer from Springer New York. The author hopes the Mathematica GuideBooks help the reader to discover, investigate, urbanize, and enjoy the computational paradise offered by Mathematica.

Wolfram Research, Inc. April 2005

Michael Trott

Preface

xiii

References 1

A. Amthor. Z. Math. Phys. 25, 153 (1880).

2

K. Appel, W. Haken. J. Math. 21, 429 (1977).

3

A. Bauer, E. Clarke, X. Zhao. J. Automat. Reasoning 21, 295 (1998).

4

A. H. Bell. Am. Math. Monthly 2, 140 (1895).

5

M. Berz. Adv. Imaging Electron Phys. 108, 1 (2000).

6

R. F. Boisvert. arXiv:cs.MS/0004004 (2000).

7

B. Buchberger. Theorema Project (1997). ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1997/97-34/ed-media.nb

8

B. Buchberger. SIGSAM Bull. 36, 3 (2002).

9

S.-C. Chou, X.-S. Gao, J.-Z. Zhang. Machine Proofs in Geometry, World Scientific, Singapore, 1994.

10

A. M. Cohen. Nieuw Archief Wiskunde 14, 45 (1996).

11

A. Cook. The Motion of the Moon, Adam-Hilger, Bristol, 1988.

12

C. Delaunay. Théorie du Mouvement de la Lune, Gauthier-Villars, Paris, 1860.

13

C. Delaunay. Mem. de l’ Acad. des Sc. Paris 28 (1860).

14

C. Delaunay. Mem. de l’ Acad. des Sc. Paris 29 (1867).

15

A. Deprit, J. Henrard, A. Rom. Astron. J. 75, 747 (1970).

16

A. Deprit. Science 168, 1569 (1970).

17

A. Deprit, J. Henrard, A. Rom. Astron. J. 76, 273 (1971).

18

P. J. Dolan, Jr., D. S. Melichian. Am. J. Phys. 66, 11 (1998).

19

S. P. Ferguson, T. C. Hales. arXiv:math.MG/ 9811072 (1998).

20

B. Fitelson. Mathematica Educ. Res. 7, n1, 17 (1998).

21

A. C. Fowler. Mathematical Models in the Applied Sciences, Cambridge University Press, Cambridge, 1997.

22

H. Fritsch, G. Fritsch. The Four-Color Theorem, Springer-Verlag, New York, 1998.

23

E. Gallopoulus, E. Houstis, J. R. Rice (eds.). Future Research Directions in Problem Solving Environments for Computational Science: Report of a Workshop on Research Directions in Integrating Numerical Analysis, Symbolic Computing, Computational Geometry, and Artificial Intelligence for Computational Science, 1991. http://www.cs.purdue.edu/research/cse/publications/tr/92/92-032.ps.gz

24

V. Gerdt, S. A. Gogilidze in V. G. Ganzha, E. W. Mayr, E. V. Vorozhtsov (eds.). Computer Algebra in Scientific Computing, Springer-Verlag, Berlin, 1999.

25

M. C. Gutzwiller, D. S. Schmidt. Astronomical Papers: The Motion of the Moon as Computed by the Method of Hill, Brown, and Eckert, U.S. Government Printing Office, Washington, 1986.

26

M. C. Gutzwiller. Rev. Mod. Phys. 70, 589 (1998).

27

Y. Hagihara. Celestial Mechanics vII/1, MIT Press, Cambridge, 1972.

28

T. C. Hales. arXiv:math.MG/ 9811071 (1998).

29

T. C. Hales. arXiv:math.MG/ 9811073 (1998).

30

T. C. Hales. arXiv:math.MG/ 9811074 (1998).

31

T. C. Hales. arXiv:math.MG/ 9811075 (1998).

32

T. C. Hales. arXiv:math.MG/ 9811076 (1998).

33

T. C. Hales. arXiv:math.MG/ 9811077 (1998).

Preface

xiv 34

T. C. Hales. arXiv:math.MG/ 9811078 (1998).

35

T. C. Hales. arXiv:math.MG/0205208 (2002).

36

T. C. Hales in L. Tatsien (ed.). Proceedings of the International Congress of Mathematicians v. 3, Higher Education Press, Beijing, 2002.

37

J. Harrison. Theorem Proving with the Real Numbers, Springer-Verlag, London, 1998.

38

J. Hermes. Nachrichten Königl. Gesell. Wiss. Göttingen 170 (1894).

39 40

E. N. Houstis, J. R. Rice, E. Gallopoulos, R. Bramley (eds.). Enabling Technologies for Computational Science, Kluwer, Boston, 2000. E. N. Houstis, J. R. Rice. Math. Comput. Simul. 54, 243 (2000).

41

M. S. Klamkin (eds.). Mathematical Modelling, SIAM, Philadelphia, 1996.

42

H. Koch, A. Schenkel, P. Wittwer. SIAM Rev. 38, 565 (1996).

43

Y. N. Lakshman, B. Char, J. Johnson in O. Gloor (ed.). ISSAC 1998, ACM Press, New York, 1998.

44

W. McCune. Robbins Algebras Are Boolean, 1997. http://www.mcs.anl.gov/home/mccune/ar/robbins/

45

E. Mach (R. Wahsner, H.-H. von Borszeskowski eds.). Die Mechanik in ihrer Entwicklung, Akademie-Verlag, Berlin, 1988. D. A. MacKenzie. Mechanizing Proof: Computing, Risk, and Trust, MIT Press, Cambridge, 2001.

46 47

B. M. McCoy. arXiv:cond-mat/0012193 (2000).

48

K. J. M. Moriarty, G. Murdeshwar, S. Sanielevici. Comput. Phys. Commun. 77, 325 (1993).

49

I. Nemes, M. Petkovšek, H. S. Wilf, D. Zeilberger. Am. Math. Monthly 104, 505 (1997).

50

W. H. Press, S. A. Teukolsky. Comput. Phys. 11, 417 (1997).

51

D. Rawlings. Am. Math. Monthly 108, 713 (2001).

52

Problem Solving Environments Home Page. http://www.cs.purdue.edu/research/cse/pses

53

D. S. Schmidt in H. S. Dumas, K. R. Meyer, D. S. Schmidt (eds.). Hamiltonian Dynamical Systems, Springer-Verlag, New York, 1995. S. Seiden. SIGACT News 32, 111 (2001).

54 55

S. Seiden. Theor. Comput. Sc. 282, 381 (2002).

56

C. Simpson. arXiv:math.HO/0311260 (2003).

57

A. M. Stoneham. Phil. Trans. R. Soc. Lond. A 360, 1107 (2002).

58

M. Tegmark. Ann. Phys. 270, 1 (1999).

59

S. Wolfram. Mathematica: A System for Doing Mathematics by Computer, Addison-Wesley, Redwood City, 1992.

60

S. Wolfram. The Mathematica Book, Wolfram Media, Champaign, 2003.

Contents 0.

Introduction and Orientation

xix

CHAmR I

SymbolicComputations 1.0

Remarks

1

1.1

Introduction 1

1.2

Operations on Polynomials

13

1.2.0

Remarks 13

1.2.1

Structural Manipulations on Polynomials 13

1.2.2

Polynomials in Equations 25

1.2.3

Polynomials in Inequalities 50

1.3

Operations on Rational Functions 78

1.4

Operations on Trigonometric Expressions

1.5

Solution of Equations 94

1.6

Classical Analysis 129

1.7

1.6.1

Differentiation 129

1.6.2

Integration 156

1.6.3

Limits 184

1.6.4

Series Expansions 189

1.6.5

Residues 220

1.6.6

Sums 221

88

Differential and Difference Equations 233 1.7.0

Remarks 233

1.7.1

Ordinary Differential Equations 234

1.7.2

Partial Differential Equations 257

1.7.3

Difference Equations 260

1.8

Integral Transforms and Generalized Functions 266

1.9

Additional Symbolics Functions 294

xvi 1.10

Three Applications 298 1.10.0 Remarks 298 1.10.1

Area of a Random Triangle in a Square 298

1.10.2

2 cosH 257 L à la Gauss 312

1.10.3

Implicitization of a Trefoil Knot 321

p

ÅÅÅÅÅÅÅÅÅÅ

Exercises 330 Solutions 371 References 749

CHAPTER 2

Classical Orthogonal Polynomials 2.0

Remarks 803

2.1

General Properties of Orthogonal Polynomials 803

2.2

Hermite Polynomials 806

2.3

Jacobi Polynomials 816

2.4

Gegenbauer Polynomials 823

2.5

Laguerre Polynomials 832

2.6

Legendre Polynomials 842

2.7

Chebyshev Polynomials of the First Kind 849

2.8

Chebyshev Polynomials of the Second Kind 853

2.9

Relationships Among the Orthogonal Polynomials 860

2.10 Ground-State of the Quartic Oscillator 868 Exercises 885 Solutions 897 References 961

xvii

CHAPTER 3

Classical Special Functions 3.0

Remarks 979

3.1

Introduction 989

3.2

Gamma, Beta, and Polygamma Functions 1001

3.3

Error Functions and Fresnel Integrals 1008

3.4

Exponential Integral and Related Functions 1016

3.5

Bessel and Airy Functions 1019

3.6

Legendre Functions 1044

3.7

Hypergeometric Functions 1049

3.8

Elliptic Integrals 1062

3.9

Elliptic Functions 1071

3.10 Product Log Function 1081 3.11

Mathieu Functions 1086

3.12 Additional Special Functions 1109 3.13 Solution of Quintic Polynomials 1110 Exercises 1125 Solutions 1155 References 1393 Index 1431

Introduction and Orientation to The Mathematica GuideBooks 0.1 Overview 0.1.1 Content Summaries The Mathematica GuideBooks are published as four independent books: The Mathematica GuideBook to Programming, The Mathematica GuideBook to Graphics, The Mathematica GuideBook to Numerics, and The Mathematica GuideBook to Symbolics. † The Programming volume deals with the structure of Mathematica expressions and with Mathematica as a programming language. This volume includes the discussion of the hierarchical construction of all Mathematica objects out of symbolic expressions (all of the form head[argument]), the ultimate building blocks of expressions (numbers, symbols, and strings), the definition of functions, the application of rules, the recognition of patterns and their efficient application, the order of evaluation, program flows and program structure, the manipulation of lists (the universal container for Mathematica expressions of all kinds), as well as a number of topics specific to the Mathematica programming language. Various programming styles, especially Mathematica’ s powerful functional programming constructs, are covered in detail. † The Graphics volume deals with Mathematica’s two-dimensional (2D) and three-dimensional (3D) graphics. The chapters of this volume give a detailed treatment on how to create images from graphics primitives, such as points, lines, and polygons. This volume also covers graphically displaying functions given either analytically or in discrete form. A number of images from the Mathematica Graphics Gallery are also reconstructed. Also discussed is the generation of pleasing scientific visualizations of functions, formulas, and algorithms. A variety of such examples are given. † The Numerics volume deals with Mathematica’s numerical mathematics capabilities—the indispensable sledgehammer tools for dealing with virtually any “real life” problem. The arithmetic types (fast machine, exact integer and rational, verified high-precision, and interval arithmetic) are carefully analyzed. Fundamental numerical operations, such as compilation of programs, numerical Fourier transforms, minimization, numerical solution of equations, and ordinary/partial differential equations are analyzed in detail and are applied to a large number of examples in the main text and in the solutions to the exercises. † The Symbolics volume deals with Mathematica’s symbolic mathematical capabilities—the real heart of Mathematica and the ingredient of the Mathematica software system that makes it so unique and powerful. Structural and mathematical operations on systems of polynomials are fundamental to many symbolic calculations and are covered in detail. The solution of equations and differential equations, as well as the classical calculus operations, are exhaustively treated. In addition, this volume discusses and employs the classical

xx

Introduction

orthogonal polynomials and special functions of mathematical physics. To demonstrate the symbolic mathematics power, a variety of problems from mathematics and physics are discussed. The four GuideBooks contain about 25,000 Mathematica inputs, representing more than 75,000 lines of commented Mathematica code. (For the reader already familiar with Mathematica, here is a more precise measure: The LeafCount of all inputs would be about 900,000 when collected in a list.) The GuideBooks also have more than 4,000 graphics, 150 animations, 11,000 references, and 1,000 exercises. More than 10,000 hyperlinked index entries and hundreds of hyperlinks from the overview sections connect all parts in a convenient way. The evaluated notebooks of all four volumes have a cumulative file size of about 20 GB. Although these numbers may sound large, the Mathematica GuideBooks actually cover only a portion of Mathematica’s functionality and features and give only a glimpse into the possibilities Mathematica offers to generate graphics, solve problems, model systems, and discover new identities, relations, and algorithms. The Mathematica code is explained in detail throughout all chapters. More than 13,000 comments are scattered throughout all inputs and code fragments.

0.1.2 Relation of the Four Volumes The four volumes of the GuideBooks are basically independent, in the sense that readers familiar with Mathematica programming can read any of the other three volumes. But a solid working knowledge of the main topics discussed in The Mathematica GuideBook to Programming—symbolic expressions, pure functions, rules and replacements, and list manipulations—is required for the Graphics, Numerics, and Symbolics volumes. Compared to these three volumes, the Programming volume might appear to be a bit “dry”. But similar to learning a foreign language, before being rewarded with the beauty of novels or a poem, one has to sweat and study. The whole suite of graphical capabilities and all of the mathematical knowledge in Mathematica are accessed and applied through lists, patterns, rules, and pure functions, the material discussed in the Programming volume. Naturally, graphics are the center of attention of the The Mathematica GuideBook to Graphics. While in the Programming volume some plotting and graphics for visualization are used, graphics are not crucial for the Programming volume. The reader can safely skip the corresponding inputs to follow the main programming threads. The Numerics and Symbolics volumes, on the other hand, make heavy use of the graphics knowledge acquired in the Graphics volume. Hence, the prerequisites for the Numerics and Symbolics volumes are a good knowledge of Mathematica’s programming language and of its graphics system. The Programming volume contains only a few percent of all graphics, the Graphics volume contains about two-thirds, and the Numerics and Symbolics volume, about one-third of the overall 4,000+ graphics. The Programming and Graphics volumes use some mathematical commands, but they restrict the use to a relatively small number (especially Expand, Factor, Integrate, Solve). And the use of the function N for numerical ization is unavoidable for virtually any “real life” application of Mathematica. The last functions allow us to treat some mathematically not uninteresting examples in the Programming and Graphics volumes. In addition to putting these functions to work for nontrivial problems, a detailed discussion of the mathematics functions of Mathematica takes place exclusively in the Numerics and Symbolics volumes. The Programming and Graphics volumes contain a moderate amount of mathematics in the examples and exercises, and focus on programming and graphics issues. The Numerics and Symbolics volumes contain a substantially larger amount of mathematics. Although printed as four books, the fourteen individual chapters (six in the Programming volume, three in the Graphics volume, two in the Numerics volume, and three in the Symbolics volume) of the Mathematica GuideBooks form one organic whole, and the author recommends a strictly sequential reading, starting from Chapter 1 of the Programming volume and ending with Chapter 3 of the Symbolics volume for gaining the maximum

Introduction

xxi

benefit. The electronic component of each book contains the text and inputs from all the four GuideBooks, together with a comprehensive hyperlinked index. The four volumes refer frequently to one another.

0.1.3 Chapter Structure A rough outline of the content of a chapter is the following: † The main body discusses the Mathematica functions belonging to the chapter subject, as well their options and attributes. Generically, the author has attempted to introduce the functions in a “natural order”. But surely, one cannot be axiomatic with respect to the order. (Such an order of the functions is not unique, and the author intentionally has “spread out” the introduction of various Mathematica functions across the four volumes.) With the introduction of a function, some small examples of how to use the functions and comparisons of this function with related ones are given. These examples typically (with the exception of some visualizations in the Programming volume) incorporate functions already discussed. The last section of a chapter often gives a larger example that makes heavy use of the functions discussed in the chapter. † A programmatically constructed overview of each chapter functions follows. The functions listed in this section are hyperlinked to their attributes and options, as well as to the corresponding reference guide entries of The Mathematica Book. † A set of exercises and potential solutions follow. Because learning Mathematica through examples is very efficient, the proposed solutions are quite detailed and form up to 50% of the material of a chapter. † References end the chapter. Note that the first few chapters of the Programming volume deviate slightly from this structure. Chapter 1 of the Programming volume gives a general overview of the kind of problems dealt with in the four GuideBooks. The second, third, and fourth chapters of the Programming volume introduce the basics of programming in Mathematica. Starting with Chapters 5 of the Programming volume and throughout the Graphics, Numerics, and Symbolics volumes, the above-described structure applies. In the 14 chapters of the GuideBooks the author has chosen a “we” style for the discussions of how to proceed in constructing programs and carrying out calculations to include the reader intimately.

0.1.4 Code Presentation Style The typical style of a unit of the main part of a chapter is: Define a new function, discuss its arguments, options, and attributes, and then give examples of its usage. The examples are virtually always Mathematica inputs and outputs. The majority of inputs is in InputForm are the notebooks. On occasion StandardForm is also used. Although StandardForm mimics classical mathematics notation and makes short inputs more readable, for “program-like” inputs, InputForm is typically more readable and easier and more natural to align. For the outputs, StandardForm is used by default and occasionally the author has resorted to InputForm or FullForm to expose digits of numbers and to TraditionalForm for some formulas. Outputs are mostly not programs, but nearly always “results” (often mathematical expressions, formulas, identities, or lists of numbers rather than program constructs). The world of Mathematica users is divided into three groups, and each of them has a nearly religious opinion on how to format Mathematica code [1], [2]. The author follows the InputForm

xxii

Introduction

cult(ure) and hopes that the Mathematica users who do everything in either StandardForm or Traditional Form will bear with him. If the reader really wants to see all code in either StandardForm or Traditional Form, this can easily be done with the Convert To item from the Cell menu. (Note that the relation between InputForm and StandardForm is not symmetric. The InputForm cells of this book have been line-broken and aligned by hand. Transforming them into StandardForm or TraditionalForm cells works well because one typically does not line-break manually and align Mathematica code in these cell types. But converting StandardForm or TraditionalForm cells into InputForm cells results in much less pleasing results.) In the inputs, special typeset symbols for Mathematica functions are typically avoided because they are not monospaced. But the author does occasionally compromise and use Greek, script, Gothic, and doublestruck characters. In a book about a programming language, two other issues come always up: indentation and placement of the code. † The code of the GuideBooks is largely consistently formatted and indented. There are no strict guidelines or even rules on how to format and indent Mathematica code. The author hopes the reader will find the book’s formatting style readable. It is a compromise between readability (mental parsabililty) and space conservation, so that the printed version of the Mathematica GuideBook matches closely the electronic version. † Because of the large number of examples, a rather imposing amount of Mathematica code is presented. Should this code be present only on the disk, or also in the printed book? If it is in the printed book, should it be at the position where the code is used or at the end of the book in an appendix? Many authors of Mathematica articles and books have strong opinions on this subject. Because the main emphasis of the Mathematica GuideBooks is on solving problems with Mathematica and not on the actual problems, the GuideBooks give all of the code at the point where it is needed in the printed book, rather than “hiding” it in packages and appendices. In addition to being more straightforward to read and conveniently allowing us to refer to elements of the code pieces, this placement makes the correspondence between the printed book and the notebooks close to 1:1, and so working back and forth between the printed book and the notebooks is as straightforward as possible.

0.2 Requirements 0.2.1 Hardware and Software Throughout the GuideBooks, it is assumed that the reader has access to a computer running a current version of Mathematica (version 5.0/5.1 or newer). For readers without access to a licensed copy of Mathematica, it is possible to view all of the material on the disk using a trial version of Mathematica. (A trial version is downloadable from http://www.wolfram.com/products/mathematica/trial.cgi.) The files of the GuideBooks are relatively large, altogether more than 20 GB. This is also the amount of hard disk space needed to store uncompressed versions of the notebooks. To view the notebooks comfortably, the reader’s computer needs 128 MB RAM; to evaluate the evaluation units of the notebooks 1 GB RAM or more is recommended. In the GuideBooks, a large number of animations are generated. Although they need more memory than single pictures, they are easy to create, to animate, and to store on typical year-2005 hardware, and they provide a lot of joy.

Introduction

xxiii

0.2.2 Reader Prerequisites Although prior Mathematica knowledge is not needed to read The Mathematica GuideBook to Programming, it is assumed that the reader is familiar with basic actions in the Mathematica front end, including entering Greek characters using the keyboard, copying and pasting cells, and so on. Freely available tutorials on these (and other) subjects can be found at http://library.wolfram.com. For a complete understanding of most of the GuideBooks examples, it is desirable to have a background in mathematics, science, or engineering at about the bachelor’s level or above. Familiarity with mechanics and electrodynamics is assumed. Some examples and exercises are more specialized, for instance, from quantum mechanics, finite element analysis, statistical mechanics, solid state physics, number theory, and other areas. But the GuideBooks avoid very advanced (but tempting) topics such as renormalization groups [6], parquet approximations [27], and modular moonshines [14]. (Although Mathematica can deal with such topics, they do not fit the character of the Mathematica GuideBooks but rather the one of a Mathematica Topographical Atlas [a monumental work to be carried out by the Mathematica–Bourbakians of the 21st century]). Each scientific application discussed has a set of references. The references should easily give the reader both an overview of the subject and pointers to further references.

0.3 What the GuideBooks Are and What They Are Not 0.3.1 Doing Computer Mathematics As discussed in the Preface, the main goal of the GuideBooks is to demonstrate, showcase, teach, and exemplify scientific problem solving with Mathematica. An important step in achieving this goal is the discussion of Mathematica functions that allow readers to become fluent in programming when creating complicated graphics or solving scientific problems. This again means that the reader must become familiar with the most important programming, graphics, numerics, and symbolics functions, their arguments, options, attributes, and a few of their time and space complexities. And the reader must know which functions to use in each situation. The GuideBooks treat only aspects of Mathematica that are ultimately related to “doing mathematics”. This means that the GuideBooks focus on the functionalities of the kernel rather than on those of the front end. The knowledge required to use the front end to work with the notebooks can easily be gained by reading the corresponding chapters of the online documentation of Mathematica. Some of the subjects that are treated either lightly or not at all in the GuideBooks include the basic use of Mathematica (starting the program, features, and special properties of the notebook front end [16]), typesetting, the preparation of packages, external file operations, the communication of Mathematica with other programs via MathLink, special formatting and string manipulations, computer- and operating system-specific operations, audio generation, and commands available in various packages. “Packages” includes both, those distributed with Mathematica as well as those available from the Mathematica Information Center (http://library.wolfram.com/infocenter) and commercial sources, such as MathTensor for doing general relativity calculations (http://smc.vnet.net/MathTensor.html) or FeynCalc for doing high-energy physics calculations (http://www.feyncalc.org). This means, in particular, that probability and statistical calculations are barely touched on because most of the relevant commands are contained in the packages. The GuideBooks make little or no mention of the machine-dependent possibilities offered by the various Mathematica implementations. For this information, see the Mathematica documentation.

xxiv

Introduction

Mathematical and physical remarks introduce certain subjects and formulas to make the associated Mathematica implementations easier to understand. These remarks are not meant to provide a deep understanding of the (sometimes complicated) physical model or underlying mathematics; some of these remarks intentionally oversimplify matters. The reader should examine all Mathematica inputs and outputs carefully. Sometimes, the inputs and outputs illustrate little-known or seldom-used aspects of Mathematica commands. Moreover, for the efficient use of Mathematica, it is very important to understand the possibilities and limits of the built-in commands. Many commands in Mathematica allow different numbers of arguments. When a given command is called with fewer than the maximum number of arguments, an internal (or user-defined) default value is used for the missing arguments. For most of the commands, the maximum number of arguments and default values are discussed. When solving problems, the GuideBooks generically use a “straightforward” approach. This means they are not using particularly clever tricks to solve problems, but rather direct, possibly computationally more expensive, approaches. (From time to time, the GuideBooks even make use of a “brute force” approach.) The motivation is that when solving new “real life” problems a reader encounters in daily work, the “right mathematical trick” is seldom at hand. Nevertheless, the reader can more often than not rely on Mathematica being powerful enough to often succeed in using a straightforward approach. But attention is paid to Mathematica-specific issues to find time- and memory-efficient implementations—something that should be taken into account for any larger program. As already mentioned, all larger pieces of code in this book have comments explaining the individual steps carried out in the calculations. Many smaller pieces of code have comments when needed to expedite the understanding of how they work. This enables the reader to easily change and adapt the code pieces. Sometimes, when the translation from traditional mathematics into Mathematica is trivial, or when the author wants to emphasize certain aspects of the code, we let the code “speak for itself”. While paying attention to efficiency, the GuideBooks only occasionally go into the computational complexity ([8], [40], and [7]) of the given implementations. The implementation of very large, complicated suites of algorithms is not the purpose of the GuideBooks. The Mathematica packages included with Mathematica and the ones at MathSource (http://library.wolfram.com/database/MathSource ) offer a rich variety of self-study material on building large programs. Most general guidelines for writing code for scientific calculations (like descriptive variable names and modularity of code; see, e.g., [19] for a review) apply also to Mathematica programs. The programs given in a chapter typically make use of Mathematica functions discussed in earlier chapters. Using commands from later chapters would sometimes allow for more efficient techniques. Also, these programs emphasize the use of commands from the current chapter. So, for example, instead of list operation, from a complexity point of view, hashing techniques or tailored data structures might be preferable. All subsections and sections are “self-contained” (meaning that no other code than the one presented is needed to evaluate the subsections and sections). The price for this “self-containedness” is that from time to time some code has to be repeated (such as manipulating polygons or forming random permutations of lists) instead of delegating such programming constructs to a package. Because this repetition could be construed as boring, the author typically uses a slightly different implementation to achieve the same goal.

Introduction

xxv

0.3.2 Programming Paradigms In the GuideBooks, the author wants to show the reader that Mathematica supports various programming paradigms and also show that, depending on the problem under consideration and the goal (e.g., solution of a problem, test of an algorithm, development of a program), each style has its advantages and disadvantages. (For a general discussion concerning programming styles, see [3], [41], [23], [32], [15], and [9].) Mathematica supports a functional programming style. Thus, in addition to classical procedural programs (which are often less efficient and less elegant), programs using the functional style are also presented. In the first volume of the Mathematica GuideBooks, the programming style is usually dictated by the types of commands that have been discussed up to that point. A certain portion of the programs involve recursive, rule-based programming. The choice of programming style is, of course, partially (ultimately) a matter of personal preference. The GuideBooks’ main aim is to explain the operation, limits, and efficient application of the various Mathematica commands. For certain commands, this dictates a certain style of programming. However, the various programming styles, with their advantages and disadvantages, are not the main concern of the GuideBooks. In working with Mathematica, the reader is likely to use different programming styles depending if one wants a quick one-time calculation or a routine that will be used repeatedly. So, for a given implementation, the program structure may not always be the most elegant, fastest, or “prettiest”. The GuideBooks are not a substitute for the study of The Mathematica Book [45] http://documents. wolfram.com/mathematica). It is impossible to acquire a deeper (full) understanding of Mathematica without a thorough study of this book (reading it twice from the first to the last page is highly recommended). It defines the language and the spirit of Mathematica. The reader will probably from time to time need to refer to parts of it, because not all commands are discussed in the GuideBooks. However, the story of what can be done with Mathematica does not end with the examples shown in The Mathematica Book. The Mathematica GuideBooks go beyond The Mathematica Book. They present larger programs for solving various problems and creating complicated graphics. In addition, the GuideBooks discuss a number of commands that are not or are only fleetingly mentioned in the manual (e.g., some specialized methods of mathematical functions and functions from the Developer` and Experimental` contexts), but which the author deems important. In the notebooks, the author gives special emphasis to discussions, remarks, and applications relating to several commands that are typical for Mathematica but not for most other programming languages, e.g., Map, MapAt, MapIndexed, Distribute, Apply, Replace, ReplaceAll, Inner, Outer, Fold, Nest, Nest List, FixedPoint, FixedPointList, and Function. These commands allow to write exceptionally elegant, fast, and powerful programs. All of these commands are discussed in The Mathematica Book and others that deal with programming in Mathematica (e.g., [33], [34], and [42]). However, the author’s experience suggests that a deeper understanding of these commands and their optimal applications comes only after working with Mathematica in the solution of more complicated problems. Both the printed book and the electronic component contain material that is meant to teach in detail how to use Mathematica to solve problems, rather than to present the underlying details of the various scientific examples. It cannot be overemphasized that to master the use of Mathematica, its programming paradigms and individual functions, the reader must experiment; this is especially important, insightful, easily verifiable, and satisfying with graphics, which involve manipulating expressions, making small changes, and finding different approaches. Because the results can easily be visually checked, generating and modifying graphics is an ideal method to learn programming in Mathematica.

xxvi

Introduction

0.4 Exercises and Solutions 0.4.1 Exercises Each chapter includes a set of exercises and a detailed solution proposal for each exercise. When possible, all of the purely Mathematica-programming related exercises (these are most of the exercises of the Programming volume) should be solved by every reader. The exercises coming from mathematics, physics, and engineering should be solved according to the reader’s interest. The most important Mathematica functions needed to solve a given problem are generally those of the associated chapter. For a rough orientation about the content of an exercise, the subject is included in its title. The relative degree of difficulty is indicated by level superscript of the exercise number ( L1 indicates easy, L2 indicates medium, and L3 indicates difficult). The author’s aim was to present understandable interesting examples that illustrate the Mathematica material discussed in the corresponding chapter. Some exercises were inspired by recent research problems; the references given allow the interested reader to dig deeper into the subject. The exercises are intentionally not hyperlinked to the corresponding solution. The independent solving of the exercises is an important part of learning Mathematica.

0.4.2 Solutions The GuideBooks contain solutions to each of the more than 1,000 exercises. Many of the techniques used in the solutions are not just one-line calls to built-in functions. It might well be that with further enhancements, a future version of Mathematica might be able to solve the problem more directly. (But due to different forms of some results returned by Mathematica, some problems might also become more challenging.) The author encourages the reader to try to find shorter, more clever, faster (in terms of runtime as well complexity), more general, and more elegant solutions. Doing various calculations is the most effective way to learn Mathematica. A proper Mathematica implementation of a function that solves a given problem often contains many different elements. The function(s) should have sensibly named and sensibly behaving options; for various (machine numeric, high-precision numeric, symbolic) inputs different steps might be required; shielding against inappropriate input might be needed; different parameter values might require different solution strategies and algorithms, helpful error and warning messages should be available. The returned data structure should be intuitive and easy to reuse; to achieve a good computational complexity, nontrivial data structures might be needed, etc. Most of the solutions do not deal with all of these issues, but only with selected ones and thereby leave plenty of room for more detailed treatments; as far as limit, boundary, and degenerate cases are concerned, they represent an outline of how to tackle the problem. Although the solutions do their job in general, they often allow considerable refinement and extension by the reader. The reader should consider the given solution to a given exercise as a proposal; quite different approaches are often possible and sometimes even more efficient. The routines presented in the solutions are not the most general possible, because to make them foolproof for every possible input (sensible and nonsensical, evaluated and unevaluated, numerical and symbolical), the books would have had to go considerably beyond the mathematical and physical framework of the GuideBooks. In addition, few warnings are implemented for improper or improperly used arguments. The graphics provided in the solutions are mostly subject to a long list of refinements. Although the solutions do work, they are often sketchy and can be considerably refined and extended by the reader. This also means that the provided solutions to the exercises programs are not always very suitable for

Introduction

xxvii

solving larger classes of problems. To increase their applicability would require considerably more code. Thus, it is not guaranteed that given routines will work correctly on related problems. To guarantee this generality and scalability, one would have to protect the variables better, implement formulas for more general or specialized cases, write functions to accept different numbers of variables, add type-checking and error-checking functions, and include corresponding error messages and warnings. To simplify working through the solutions, the various steps of the solution are commented and are not always packed in a Module or Block. In general, only functions that are used later are packed. For longer calculations, such as those in some of the exercises, this was not feasible and intended. The arguments of the functions are not always checked for their appropriateness as is desirable for robust code. But, this makes it easier for the user to test and modify the code.

0.5 The Books Versus the Electronic Components 0.5.1 Working with the Notebooks Each volume of the GuideBooks comes with a multiplatform DVD, containing fourteen main notebooks tailored for Mathematica 4 and compatible with Mathematica 5. Each notebook corresponds to a chapter from the printed books. (To avoid large file sizes of the notebooks, all animations are located in the Animations directory and not directly in the chapter notebooks.) The chapters (and so the corresponding notebooks) contain a detailed description and explanation of the Mathematica commands needed and used in applications of Mathematica to the sciences. Discussions on Mathematica functions are supplemented by a variety of mathematics, physics, and graphics examples. The notebooks also contain complete solutions to all exercises. Forming an electronic book, the notebooks also contain all text, as well as fully typeset formulas, and reader-editable and reader-changeable input. (Readers can copy, paste, and use the inputs in their notebooks.) In addition to the chapter notebooks, the DVD also includes a navigation palette and fully hyperlinked table of contents and index notebooks. The Mathematica notebooks corresponding to the printed book are fully evaluated. The evaluated chapter notebooks also come with hyperlinked overviews; these overviews are not in the printed book. When reading the printed books, it might seem that some parts are longer than needed. The reader should keep in mind that the primary tool for working with the Mathematica kernel are Mathematica notebooks and that on a computer screen and there “length does not matter much”. The GuideBooks are basically a printout of the notebooks, which makes going back and forth between the printed books and the notebooks very easy. The GuideBooks give large examples to encourage the reader to investigate various Mathematica functions and to become familiar with Mathematica as a system for doing mathematics, as well as a programming language. Investigating Mathematica in the accompanying notebooks is the best way to learn its details. To start viewing the notebooks, open the table of contents notebook TableOfContents.nb. Mathematica notebooks can contain hyperlinks, and all entries of the table of contents are hyperlinked. Navigating through one of the chapters is convenient when done using the navigator palette GuideBooksNavigator.nb. When opening a notebook, the front end minimizes the amount of memory needed to display the notebook by loading it incrementally. Depending on the reader’s hardware, this might result in a slow scrolling speed. Clicking the “Load notebook cache” button of the GuideBooksNavigator palette speeds this up by loading the complete notebook into the front end. For the vast majority of sections, subsections, and solutions of the exercises, the reader can just select such a structural unit and evaluate it (at once) on a year-2005 computer (¥512 MB RAM) typically in a matter of

xxviii

Introduction

minutes. Some sections and solutions containing many graphics may need hours of computation time. Also, more than 50 pieces of code run hours, even days. The inputs that are very memory intensive or produce large outputs and graphics are in inactive cells which can be activated by clicking the adjacent button. Because of potentially overlapping variable names between various sections and subsections, the author advises the reader not to evaluate an entire chapter at once. Each smallest self-contained structural unit (a subsection, a section without subsections, or an exercise) should be evaluated within one Mathematica session starting with a freshly started kernel. At the end of each unit is an input cell. After evaluating all input cells of a unit in consecutive order, the input of this cell generates a short summary about the entire Mathematica session. It lists the number of evaluated inputs, the kernel CPU time, the wall clock time, and the maximal memory used to evaluate the inputs (excluding the resources needed to evaluate the Program cells). These numbers serve as a guide for the reader about the to-be-expected running times and memory needs. These numbers can deviate from run to run. The wall clock time can be substantially larger than the CPU time due to other processes running on the same computer and due to time needed to render graphics. The data shown in the evaluated notebooks came from a 2.5 GHz Linux computer. The CPU times are generically proportional to the computer clock speed, but can deviate within a small factor from operating system to operating system. In rare, randomly occurring cases slower computers can achieve smaller CPU and wall clock times than faster computers, due to internal time-constrained simplification processes in various symbolic mathematics functions (such as Integrate, Sum, DSolve, …). The Overview Section of the chapters is set up for a front end and kernel running on the same computer and having access to the same file system. When using a remote kernel, the directory specification for the package Overview.m must be changed accordingly. References can be conveniently extracted from the main text by selecting the cell(s) that refer to them (or parts of a cell) and then clicking the “Extract References” button. A new notebook with the extracted references will then appear. The notebooks contain color graphics. (To rerender the pictures with a greater color depth or at a larger size, choose Rerender Graphics from the Cell menu.) With some of the colors used, black-and-white printouts occasionally give low-contrast results. For better black-and-white printouts of these graphics, the author recommends setting the ColorOutput option of the relevant graphics function to GrayLevel. The notebooks with animations (in the printed book, animations are typically printed as an array of about 10 to 20 individual graphics) typically contain between 60 and 120 frames. Rerunning the corresponding code with a large number of frames will allow the reader to generate smoother and longer-running animations. Because many cell styles used in the notebooks are unique to the GuideBooks, when copying expressions and cells from the GuideBooks notebooks to other notebooks, one should first attach the style sheet notebook GuideBooksStylesheet.nb to the destination notebook, or define the needed styles in the style sheet of the destination notebook.

Introduction

xxix

0.5.2 Reproducibility of the Results The 14 chapter notebooks contained in the electronic version of the GuideBooks were run mostly with Mathematica 5.1 on a 2 GHz Intel Linux computer with 2 GB RAM. They need more than 100 hours of evaluation time. (This does not include the evaluation of the currently unevaluatable parts of code after the Make Input buttons.) For most subsections and sections, 512 MB RAM are recommended for a fast and smooth evaluation “at once” (meaning the reader can select the section or subsection, and evaluate all inputs without running out of memory or clearing variables) and the rendering of the generated graphic in the front end. Some subsections and sections need more memory when run. To reduce these memory requirements, the author recommends restarting the Mathematica kernel inside these subsections and sections, evaluating the necessary definitions, and then continuing. This will allow the reader to evaluate all inputs. In general, regardless of the computer, with the same version of Mathematica, the reader should get the same results as shown in the notebooks. (The author has tested the code on Sun and Intel-based Linux computers, but this does not mean that some code might not run as displayed (because of different configurations, stack size settings, etc., but the disclaimer from the Preface applies everywhere). If an input does not work on a particular machine, please inform the author. Some deviations from the results given may appear because of the following: † Inputs involving the function Random[…] in some form. (Often SeedRandom to allow for some kind of reproducibility and randomness at the same time is employed.) † Mathematica commands operating on the file system of the computer, or make use of the type of computer (such inputs need to be edited using the appropriate directory specifications). † Calculations showing some of the differences of floating-point numbers and the machine-dependent representation of these on various computers. † Pictures using various fonts and sizes because of their availability (or lack thereof) and shape on different computers. † Calculations involving Timing because of different clock speeds, architectures, operating systems, and libraries. † Formats of results depending on the actual window width and default font size. (Often, the corresponding inputs will contain Short.) Using anything other than Mathematica Version 5.1 might also result in different outputs. Examples of results that change form, but are all mathematically correct and equivalent, are the parameter variables used in underdetermined systems of linear equations, the form of the results of an integral, and the internal form of functions like InterpolatingFunction and CompiledFunction. Some inputs might no longer evaluate the same way because functions from a package were used and these functions are potentially built-in functions in a later Mathematica version. Mathematica is a very large and complicated program that is constantly updated and improved. Some of these changes might be design changes, superseded functionality, or potentially regressions, and as a result, some of the inputs might not work at all or give unexpected results in future versions of Mathematica.

xxx

Introduction

0.5.3 Earlier Versions of the Notebooks The first printing of the Programming volume and the Graphics volumes of the Mathematica GuideBooks were published in October 2004. The electronic components of these two books contained the corresponding evaluated chapter notebooks as well as unevaluated versions of preversions of the notebooks belonging to the Numerics and Symbolics volumes. Similarly, the electronic components of the Numerics and Symbolics volume contain the corresponding evaluated chapter notebooks and unevaluated copies of the notebooks of the Programming and Graphics volumes. This allows the reader to follow cross-references and look up relevant concepts discussed in the other volumes. The author has tried to keep the notebooks of the GuideBooks as up-to-date as possible. (Meaning with respect to the efficient and appropriate use of the latest version of Mathematica, with respect to maintaining a list of references that contains new publications, and examples, and with respect to incorporating corrections to known problems, errors, and mistakes). As a result, the notebooks of all four volumes that come with later printings of the Programming and Graphics volumes, as well with the Numerics and Symbolics volumes will be different and supersede the earlier notebooks originally distributed with the Programming and Graphics volumes. The notebooks that come with the Numerics and Symbolics volumes are genuine Mathematica Version 5.1 notebooks. Because most advances in Mathematica Version 5 and 5.1 compared with Mathematica Version 4 occurred in functions carrying out numerical and symbolical calculations, the notebooks associated with Numerics and Symbolics volumes contain a substantial amount of changes and additions compared with their originally distributed version.

0.6 Style and Design Elements 0.6.1 Text and Code Formatting The GuideBooks are divided into chapters. Each chapter consists of several sections, which frequently are further subdivided into subsections. General remarks about a chapter or a section are presented in the sections and subsections numbered 0. (These remarks usually discuss the structure of the following section and give teasers about the usefulness of the functions to be discussed.) Also, sometimes these sections serve to refresh the discussion of some functions already introduced earlier. Following the style of The Mathematica Book [45], the GuideBooks use the following fonts: For the main text, Times; for Mathematica inputs and built-in Mathematica commands, Courier plain (like Plot); and for user-supplied arguments, Times italic (like userArgument1 ). Built-in Mathematica functions are introduced in the following style: MathematicaFunctionToBeIntroduced[typeIndicatingUserSuppliedArgument(s)] is a description of the built-in command MathematicaFunctionToBeIntroduced upon its first appearance. A definition of the command, along with its parameters is given. Here, typeIndicatingUserSuppliedArgument(s) is one (or more) user-supplied expression(s) and may be written in an abbreviated form or in a different way for emphasis.

The actual Mathematica inputs and outputs appear in the following manner (as mentioned above, virtually all inputs are given in InputForm).

Introduction

xxxi

(* A comment. It will be/is ignored as Mathematica input: Return only one of the solutions *) Last[Solve[{x^2 - y == 1, x - y^2 == 1}, {x, y}]]

When referring in text to variables of Mathematica inputs and outputs, the following convention is used: Fixed, nonpattern variables (including local variables) are printed in Courier plain (the equations solved above contained the variables x and y). User supplied arguments to built-in or defined functions with pattern variables are printed in Times italic. The next input defines a function generating a pair of polynomial equations in x and y. equationPair[x_, y_] := {x^2 - y == 1, x - y^2 == 1}

x and y are pattern variables (usimng the same letters, but a different font from the actual code fragments x_ and y_) that can stand for any argument. Here we call the function equationPair with the two arguments u + v and w - z. equationPair[u + v, w - z]

Occasionally, explanation about a mathematics or physics topic is given before the corresponding Mathematica implementation is discussed. These sections are marked as follows:

Mathematical Remark: Special Topic in Mathematics or Physics A short summary or review of mathematical or physical ideas necessary for the following example(s). 1

From time to time, Mathematica is used to analyze expressions, algorithms, etc. In some cases, results in the form of English sentences are produced programmatically. To differentiate such automatically generated text from the main text, in most instances such text is prefaced by “ë” (structurally the corresponding cells are of type "PrintText" versus "Text" for author-written cells). Code pieces that either run for quite long, or need a lot of memory, or are tangent to the current discussion are displayed in the following manner. Make Input

mathematicaCodeWhichEitherRunsVeryLongOrThatIsVeryMemoryIntensive OrThatProducesAVeryLargeGraphicOrThatIsASideTrackToTheSubjectUnder Discussion (* with some comments on how the code works *)

To run a code piece like this, click the Make Input button above it. This will generate the corresponding input cell that can be evaluated if the reader’s computer has the necessary resources. The reader is encouraged to add new inputs and annotations to the electronic notebooks. There are two styles for reader-added material: "ReaderInput" (a Mathematica input style and simultaneously the default style for a new cell) and "ReaderAnnotation" (a text-style cell type). They are primarily intended to be used in the Reading environment. These two styles are indented more than the default input and text cells, have a green left bar and a dingbat. To access the "ReaderInput" and "ReaderAnnotation" styles, press the system-dependent modifier key (such as Control or Command) and 9 and 7, respectively.

xxxii

Introduction

0.6.2 References Because the GuideBooks are concerned with the solution of mathematical and physical problems using Mathematica and are not mathematics or physics monographs, the author did not attempt to give complete references for each of the applications discussed [38], [20]. The references cited in the text pertain mainly to the applications under discussion. Most of the citations are from the more recent literature; references to older publications can be found in the cited ones. Frequently URLs for downloading relevant or interesting information are given. (The URL addresses worked at the time of printing and, hopefully, will be still active when the reader tries them.) References for Mathematica, for algorithms used in computer algebra, and for applications of computer algebra are collected in the Appendix A. The references are listed at the end of each chapter in alphabetical order. In the notebooks, the references are hyperlinked to all their occurrences in the main text. Multiple references for a subject are not cited in numerical order, but rather in the order of their importance, relevance, and suggested reading order for the implementation given. In a few cases (e.g., pure functions in Chapter 3, some matrix operations in Chapter 6), references to the mathematical background for some built-in commands are given—mainly for commands in which the mathematics required extends beyond the familiarity commonly exhibited by non-mathematicians. The GuideBooks do not discuss the algorithms underlying such complicated functions, but sometimes use Mathematica to “monitor” the algorithms. References of the form abbreviationOfAScientificField/yearMonthPreprintNumber (such as quant-ph/0012147) refer to the arXiv preprint server [43], [22], [30] at http://arXiv.org. When a paper appeared as a preprint and (later) in a journal, typically only the more accessible preprint reference is given. For the convenience of the reader, at the end of these references, there is a Get Preprint button. Click the button to display a palette notebook with hyperlinks to the corresponding preprint at the main preprint server and its mirror sites. (Some of the older journal articles can be downloaded free of charge from some of the digital mathematics library servers, such as http://gdz.sub.uni-goettingen.de, http://www.emis.de, http://www.numdam.org, and http://dieper.aib.unilinz.ac.at.) As much as available, recent journal articles are hyperlinked through their digital object identifiers (http://www.doi.org).

0.6.3 Variable Scoping, Input Numbering, and Warning Messages Some of the Mathematica inputs intentionally cause error messages, infinite loops, and so on, to illustrate the operation of a Mathematica command. These messages also arise in the user’s practical use of Mathematica. So, instead of presenting polished and perfected code, the author prefers to illustrate the potential problems and limitations associated with the use of Mathematica applied to “real life” problems. The one exception are the spelling warning messages General::spell and General::spell1 that would appear relatively frequently because “similar” names are used eventually. For easier and less defocused reading, these messages are turned off in the initialization cells. (When working with the notebooks, this means that the pop-up window asking the user “Do you want to automatically evaluate all the initialization cells in the notebook?” should be evaluated should always be answered with a “yes”.) For the vast majority of graphics presented, the picture is the focus, not the returned Mathematica expression representing the picture. That is why the Graphics and Graphics3D output is suppressed in most situations.

Introduction

xxxiii

To improve the code’s readability, no attempt has been made to protect all variables that are used in the various examples. This protection could be done with Clear, Remove, Block, Module, With, and others. Not protecting the variables allows the reader to modify, in a somewhat easier manner, the values and definitions of variables, and to see the effects of these changes. On the other hand, there may be some interference between variable names and values used in the notebooks and those that might be introduced when experimenting with the code. When readers examine some of the code on a computer, reevaluate sections, and sometimes perform subsidiary calculations, they may introduce variables that might interfere with ones from the GuideBooks. To partially avoid this problem, and for the reader’s convenience, sometimes Clear[sequenceOfVariables]and Remove[sequenceOfVariables] are sprinkled throughout the notebooks. This makes experimenting with these functions easier. The numbering of the Mathematica inputs and outputs typically does not contain all consecutive integers. Some pieces of Mathematica code consist of multiple inputs per cell; so, therefore, the line numbering is incremented by more than just 1. As mentioned, Mathematica should be restarted at every section, or subsection or solution of an exercise, to make sure that no variables with values get reused. The author also explicitly asks the reader to restart Mathematica at some special positions inside sections. This removes previously introduced variables, eliminates all existing contexts, and returns Mathematica to the typical initial configuration to ensure reproduction of the results and to avoid using too much memory inside one session.

0.6.4 Graphics In Mathematica 5.1, displayed graphics are side effects, not outputs. The actual output of an input producing a graphic is a single cell with the text Graphics or Graphics3D or GraphicsArray and so on. To save paper, these output cells have been deleted in the printed version of the GuideBooks. Most graphics use an appropriate number of plot points and polygons to show the relevant features and details. Changing the number of plot points and polygons to a higher value to obtain higher resolution graphics can be done by changing the corresponding inputs. The graphics of the printed book and the graphics in the notebooks are largely identical. Some printed book graphics use a different color scheme and different point sizes and line and edge thicknesses to enhance contrast and visibility. In addition, the font size has been reduced for the printed book in tick and axes labels. The graphics shown in the notebooks are PostScript graphics. This means they can be resized and rerendered without loss of quality. To reduce file sizes, the reader can convert them to bitmap graphics using the Cellö Convert ToöBitmap menu. The resulting bitmap graphics can no longer be resized or rerendered in the original resolution. To reduce file sizes of the main content notebooks, the animations of the GuideBooks are not part of the chapter notebooks. They are contained in a separate directory.

xxxiv

Introduction

0.6.5 Notations and Symbols The symbols used in typeset mathematical formulas are not uniform and unique throughout the GuideBooks. Various mathematical and physical quantities (such as normals, rotation matrices, and field strengths) are used repeatedly in this book. Frequently the same notation is used for them, but depending on the context, also different ones are used, e.g. sometimes bold is used for a vector (such as r) and sometimes an arrow (such as ”r). Matrices appear in bold or as doublestruck letters. Depending on the context and emphasis placed, different notations are used in display equations and in the Mathematica input form. For instance, for a time-dependent scalar quantity of one variable yHt; xL, we might use one of many patterns, such as ψ[t][x] (for emphasizing a parametric t-dependence) or ψ[t, x] (to treat t and x on an equal footing) or ψ[t, {x}] (to emphasize the one-dimensionality of the space variable x). Mathematical formulas use standard notation. To avoid confusion with Mathematica notations, the use of square brackets is minimized throughout. Following the conventions of mathematics notation, square brackets are used for three cases: a) Functionals, such as t @ f HtLD HwL for the Fourier transform of a function f HtL. b) Power series coefficients, @xk D H f HxLL denotes the coefficient of xk of the power series expansion of f HxL around x = 0. c) Closed intervals, like @a, bD (open intervals are denoted by Ha, bL). Grouping is exclusively done using parentheses. Upper-case double-struck letters denote domains of numbers, for integers, for nonnegative integers, for rational numbers, for reals, and for complex numbers. Points in n (or n ) with explicitly given coordinates are indicated using curly braces 8c1 , …, cn = 0, 1/2 Floor[n](1 + Floor[n]), Sum[k, {k, 1, n}]] which would correspond to the result returned by Sum for an explicit (real) n. Variables that occur in inequalities will be considered as real-valued by many functions. For instance, for most functions a statement like z2 < -1 will not include parts of the imaginary axis of the z-plane. Many matrix operations, such as Cross[, ] and Det[] stay unevaluated for symbols and . Obviously, here and are not assumed to be complex numbers. There are some more exceptions, and we will encounter them in the following discussions. Generically, the assumption that every variable is a complex one of finite size is very sensible. The complex numbers are an algebraically closed field and enable the inversion of polynomials and more complicated functions. Without using complex numbers, it would be, for instance, impossible to express the three real roots è!!!!!!!! of 5 x3 - 9 x2 + x + 1 = 0 in radicals without using -1 explicitly. (See below for a more detailed discussion of this case.) But in some instances one wants to make certain assumptions about the type of a variable, for example, when ¶ 2 one wants to express that the parameter g in Ÿ-¶ ei g x dx is real so that the integral exists. A few Mathematica functions, notably Simplify, Integrate, Refine, and Assuming have currently the notion of a variable “type”. We will discuss assumptions in Integrate in detail in Subsection 1.6.2. The function Simplify we discussed already in Section 3.5 of the Programming volume [1735], but not in its full generality. Because we will make use of it more frequently later, and because internally Simplify uses functions from all sections of this chapter, we will discuss all of its options now. Simplify[expression, assumptions, options] tries to simplify expression under the assumptions assumptions.

We start with the last arguments of Simplify, its options. In[1]:= Out[1]=

Options[Simplify] 8Assumptions $Assumptions, ComplexityFunction → Automatic, TimeConstraint → 300, TransformationFunctions → Automatic, Trig → True

Pi], {100}] // Timing 84.75 Second, Null

Pi], {100}] // Timing 81.31 Second, Null

0]}&[Sqrt[z^2]] è!!!!!! 9 z2 , z= {#, Refine[z < 0, z > 0]}&[Sqrt[z^2]] è!!!!!! 9 z2 , False= {#, Refine[#, z == Pi/2]}&[Tan[z]] 8Tan@zD, ComplexInfinity< {#, Refine[#, z == E]}&[Round[z]] 8Round@zD, 3< {#, Refine[#, z < -1]}&[Log[z]] 8Log@zD, π + Log@−zD

20]}&[Log[z] > 1] 8Log@zD > 1, Log@zD > 1

1, Refine[z > 0]] True

When the function Assuming appears in nested form, the assumptions are joined. Here is an example. In[73]:= Out[73]=

Assuming[z > 1, Assuming[x < 0, Refine[z > 0 && x < 1]]] True

All currently active assumptions are stored in $Assumptions. $Assumptions gives the currently active assumptions.

By default, $Assumptions has the value True. This means, nothing nontrivial can be derived. In[74]:= Out[74]=

$Assumptions True

Here are the assumptions printed (using a Print-statement) that are active within the inner nested Assuming. In[75]:=

Assuming[z > 1, Assuming[x < 0, Print[$Assumptions]; Simplify[z > 0 && x < 1]]] x < 0 && z > 1

Out[75]=

True

From contradictory assumptions (indicated when recognized as such), follow false statements. In[76]:=

Assuming[z > 1, Assuming[z < -1, Simplify[-1/2 < z < 1/2]]] $Assumptions::cas : Warning: Contradictory assumptionHsL z < −1 && z > 1 encountered. More…

Out[76]=

True

This was a short introduction into a directed use of Simplify and Refine using options and assumption specifications.

1.2 Operations on Polynomials

13

1.2 Operations on Polynomials 1.2.0 Remarks Polynomials and polynomial systems play an extraordinary role in computational symbolic mathematics. In this section, we deal with three aspects of such systems: 1) Structural operations that express polynomials in various canonical forms, 2) manipulations of systems of polynomial equations and 3) manipulations of systems of polynomial inequations (meaning inequalities with Less and Greater, as well as Unequal as their heads). Explicit solutions of polynomial equations (which for most univariate polynomials of degree five or higher) cannot be given in radicals; their solutions will be discussed in Section 1.5. Here we largely focus on operations on polynomials that use their coefficients only.

1.2.1 Structural Manipulations on Polynomials The two most important commands for manipulating polynomials, Expand and Factor, were already introduced in Chapter 3 of the Programming volume [1735]. Note that Factor also works for polynomials in several variables. In[1]:= Out[1]=

In[2]:= Out[2]=

Expand[(1 - x)^3 (3 + y - 2x)^2 (z^2 + 8y)] 72 y − 312 x y + 536 x2 y − 456 x3 y + 192 x4 y − 32 x5 y + 48 y2 − 176 x y2 + 240 x2 y2 − 144 x3 y2 + 32 x4 y2 + 8 y3 − 24 x y3 + 24 x2 y3 − 8 x3 y3 + 9 z2 − 39 x z2 + 67 x2 z2 − 57 x3 z2 + 24 x4 z2 − 4 x5 z2 + 6 y z2 − 22 x y z2 + 30 x2 y z2 − 18 x3 y z2 + 4 x4 y z2 + y2 z2 − 3 x y2 z2 + 3 x2 y2 z2 − x3 y2 z2 Factor[%] −H−1 + xL3 H3 − 2 x + yL2 H8 y + z2 L

The following condition is often ignored. Factor works “properly” only for polynomials whose coefficients are exact (rational) numbers. Thus, for instance, the following example does not work. In[3]:= Out[3]=

Expand[(1.0 - x)^3 (3.0 + y - 2.0 x)^2 (z^2 + 8.0 y)] // Factor −8. H−9. y + 39. x y − 67. x2 y + 57. x3 y − 24. x4 y + 4. x5 y − 6. y2 + 22. x y2 − 30. x2 y2 + 18. x3 y2 − 4. x4 y2 − 1. y3 + 3. x y3 − 3. x2 y3 + 1. x3 y3 − 1.125 z2 + 4.875 x z2 − 8.375 x2 z2 + 7.125 x3 z2 − 3. x4 z2 + 0.5 x5 z2 − 0.75 y z2 + 2.75 x y z2 − 3.75 x2 y z2 + 2.25 x3 y z2 − 0.5 x4 y z2 − 0.125 y2 z2 + 0.375 x y2 z2 − 0.375 x2 y2 z2 + 0.125 x3 y2 z2 L

The following simpler example works, but we highly discourage the use of inexact numbers inside Factor. In[4]:= Out[4]=

x^2 - 5 x + 6. // Factor 1. H−3. + xL H−2. + xL

Results such as the following are much better produced using NRoots or NSolve to achieve a factorization explicitly via solving for the roots. In[5]:=

x^3 - x^2 - 5 x + 5.23 // Factor

Symbolic Computations

14 Out[5]= In[6]:= Out[7]=

1. H−2.19252 + xL H−1.05931 + xL H2.25183 + xL (* better *) Times @@ (x - (x /. NSolve[x^3 - x^2 - 5 x + 5.23 == 0, x])) H−2.19252 + xL H−1.05931 + xL H2.25183 + xL

Using the command Rationalize introduced in Chapter 1 of the Numerics volume [1737], we can convert approximate numbers to nearby rational numbers. (But be aware that for inputs with many-digit high-precision numbers, the functions myFactor might run a long time.) In[8]:=

Out[9]=

myFactor[x_, opts___] := N[Factor[MapAll[Rationalize[#, 0]&, x], opts], (* output precision = input precision *) Precision[x]] myFactor[%%%%] 7.03682 × 10−24 H−4.74876 × 107 + 4.48287 × 107 xL H−1.03683 × 108 + 4.72897 × 107 xL H1.50951 × 108 + 6.70348 × 107 xL

For nonexact integer exponents, Expand and Factor fail. In[10]:= Out[10]=

Expand[(1.0 - x)^3. (3.0 + y - 2.0 x)^2. (z^2 + 8.0 y)^2.] H1. − xL3. H3. − 2. x + yL2. H8. y + z2 L

2.

But note that the application of N to a polynomial does not give numericalized exponents. In[11]:= Out[11]=

N[Expand[(1.0 - x)^3 (3.0 + y - 2.0 x)^2 (z^2 + 8.0 y)^2]] 576. y2 − 2496. x y2 + 4288. x2 y2 − 3648. x3 y2 + 1536. x4 y2 − 256. x5 y2 + 384. y3 − 1408. x y3 + 1920. x2 y3 − 1152. x3 y3 + 256. x4 y3 + 64. y4 − 192. x y4 + 192. x2 y4 − 64. x3 y4 + 144. y z2 − 624. x y z2 + 1072. x2 y z2 − 912. x3 y z2 + 384. x4 y z2 − 64. x5 y z2 + 96. y2 z2 − 352. x y2 z2 + 480. x2 y2 z2 − 288. x3 y2 z2 + 64. x4 y2 z2 + 16. y3 z2 − 48. x y3 z2 + 48. x2 y3 z2 − 16. x3 y3 z2 + 9. z4 − 39. x z4 + 67. x2 z4 − 57. x3 z4 + 24. x4 z4 − 4. x5 z4 + 6. y z4 − 22. x y z4 + 30. x2 y z4 − 18. x3 y z4 + 4. x4 y z4 + 1. y2 z4 − 3. x y2 z4 + 3. x2 y2 z4 − 1. x3 y2 z4

Mathematica factorizes over the integers (“not over the rationals”, and not over the algebraic numbers as long as they do not appear explicitly). This is not a big restriction for rational numbers. The following polynomial over the exact rationals is factored in such a way that the resulting polynomials have integer coefficients and is written out with a common denominator. In[12]:=

Expand[(1/4 - x)^3 (3/2 + y - 2x)^2 (z^2 + 8/5y)^2] // Factor 2

Out[12]=

H−1 + 4 xL3 H3 − 4 x + 2 yL2 H8 y + 5 z2 L − 6400

An interesting theoretical question is the following: Given a polynomial p = ⁄dk=0 ck xk of degree d with integer coefficients ck in the range - f § ck § f , what is the average number of factors of p [1413], [152], [1578], [525]? Here is a simulation for small d and f . In[13]:=

factorNumber[maxDegree_, maxCoefficient_] := Module[{x}, (* count factors *) If[Head[#] === Plus, 1, Length[#]]&[ (* factored random polynomial *) Factor[ Sum[Random[Integer, {-1, 1} maxCoefficient] x^i, {i, 0, maxDegree}]]]]

In[14]:=

Module[{n = 400, dMax = 12, fMax = 20, data}, (* use n random polynomials *) data = Table[Plus @@ Table[factorNumber[d, f], {n}], {d, 2, dMax}, {f, 1, fMax}]/n;

1.2 Operations on Polynomials

15

ListPlot3D[Log[10, data - 1], MeshRange -> {{2, dMax}, {1, fMax}}, PlotRange -> All]]

0 -0.5

20

-1

15

2

10 4 6

5

8 10 12

Polynomials that cannot be factored into multiple x-dependent factors are called irreducible [1439]. In[15]:=

irreducibleQ[poly_, x_] := With[{factors = Select[FactorList[poly], MemberQ[#, x, Infinity]&]}, If[(* at least to x-containing factors exits *) Length[factors] > 1 || (* powers *) factors[[1, 2]] > 1, False, True]] /; PolynomialQ[poly, x] && Exponent[poly, x] > 0

“Most” univariate polynomials are irreducible. The following graphic shows the reducible polynomials of a quadratic, cubic and quartic polynomial over the plane of two coefficients. Reducible polynomials occur along certain lines. In[16]:=

Show[GraphicsArray[#]]& @ Block[{o = 250, α, β}, ListDensityPlot[Table[If[TrueQ[Not[irreducibleQ[#, x]]], 0, 1], {α, -o, o}, {β, -o, o}], Mesh -> False, MeshRange -> {{-o, o}, {-o, o}}, DisplayFunction -> Identity]& /@ (* three polynomials with two parameters each *) {-2 + α x + β x^2, -4 + 3 x + α x^2 + β x^3, -4 - 3 x^2 + α x^3 + β x^4}] 200

200

200

100

100

100

0

0

0

-100

-100

-100

-200

-200

-200

-200 -100

0

100

200

-200 -100

0

100

200

-200 -100

0

100

200

`log np

Given the digits dk of an integer n in base b, we can naturally form the polynomial pb Hn; xL = ⁄k=0b dk xk . Interestingly, when n is a prime number, the polynomial p is irreducible [1424], [937]. In[17]:=

digitPolynomial[k_, b_, x_] := Plus @@ MapIndexed[#1 x^(#2[[1]] - 1)&, Reverse[IntegerDigits[k, b]]]

In[18]:=

(* checking a "random" prime in 100 bases *) Table[irreducibleQ[#, x]& @ digitPolynomial[Prime[123456789], b, x], {b, 2, 1001}] // Union 8True

All, AspectRatio -> 0.4, Frame -> True]] 200

150

100

50

0 1 µ 106

1.002 µ 106

1.004 µ 106

1.006 µ 106

1.008 µ 106

1.01 µ 106

Because no algebraic numbers are computed in the factorization of a univariate polynomial with integer (rational) coefficients, we have the following behavior. In[21]:=

{Factor[x^2 - a^2], Factor[x^2 - 2^2], Factor[x^2 - Sqrt[2]^2]} 8−Ha − xL Ha + xL, H−2 + xL H2 + xL, −2 + x2

Automatic]

1.2 Operations on Polynomials Out[23]=

17

2 è!!!! H 2 + xL

By giving a list of algebraic numbers, one can explicitly specify the extension field. Here, a quartic polynomial 4 è!!!!!! ! is factored. Adjoining 11 allows factoring x4 - 11 into two linear factors with real roots and one quadratic factor with two complex roots. In[24]:= Out[24]=

Adding In[25]:= Out[25]=

Factor[x^4 - 11, Extension -> (11)^(1/4)] è!!!!!!! −H111ê4 − xL H111ê4 + xL H 11 + x2 L

è!!!!!!!! -1 to the extensions allows for a complete factorization of x4 - 11 into linear factors. Factor[x^4 - 11, Extension -> {(11)^(1/4), I}] −H111ê4 − xL H111ê4 − xL H111ê4 + xL H111ê4 + xL

Finding an extension such that a polynomial will factor is largely equivalent to solving polynomial = 0. Here is an unsuccessful trial to factor 3 x3 + 7 x2 - 9. In[26]:= Out[26]=

Factor[x^4 - 3x^3 + 7 x^2 - 9] −9 + 7 x2 − 3 x3 + x4

Here, we use an extension such that the polynomial factors into one linear and one quadratic factor. In[27]:=

Out[27]=

Factor[3 x^3 + 7 x^2 - 9, Extension -> {(1501/2 - (27 Sqrt[2445])/2)^(1/3)}] 1ê3 1 − II−67228 + 4802 22ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 2490394032 è!!!!!!!!!!!!! 2ê3 è!!!!!!!!!!!!! 2ê3 1ê3 1ê3 è!!!!!!!!!!!!! 1501 2 H1501 − 27 2445 L + 27 2 2445 H1501 − 27 2445 L − 86436 xM è!!!!!!!!!!!!! 1ê3 è!!!!!!!!!!!!! 1ê3 2ê3 2ê3 è!!!!!!!!!!!!! I11907 2 H1501 − 27 2445 L + 147 2 2445 H1501 − 27 2445 L + è!!!!!!!!!!!!! 2ê3 è!!!!!!!!!!!!! 2ê3 1ê3 1ê3 è!!!!!!!!!!!!! 1701 2 H1501 − 27 2445 L + 21 2 2445 H1501 − 27 2445 L + 1ê3 2ê3 I134456 + 4802 22ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 1501 21ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 27 21ê3

è!!!!!!!!!!!!! è!!!!!!!!!!!!! 2ê3 2445 H1501 − 27 2445 L M x + 86436 x2 MM

Be aware that factoring of polynomials is a rather complex process (see the general references given in the appendix), which takes some time. Here, the timings for the expansion of (C + 1)^i are compared with the timings for the factorization of the expanded object. (Be aware of the different degrees for Expand and Factor.) In[28]:=

Show[GraphicsArray[ ListPlot[(* reasonable units for a 2-GHz computer *) {#[[1]], 1000 #[[2, 1, 1]]}& /@ #[[1]], Frame -> True, PlotLabel -> #[[2]], FrameLabel -> {"degree", "milliSeconds"}, DisplayFunction -> Identity]& /@ (* Expand and Factor data *) {{(* clear caches for reliable timings *) Table[{i, Developer`ClearCache[]; Timing[Expand[(C + 1)^i];]}, {i, 0, 6000, 50}], "Expand"}, {Table[{i, Developer`ClearCache[]; Timing[Factor[#]]&[Expand[(C + 1)^i]]}, {i, 300}], "Factor"}}]]

Symbolic Computations

18 Expand

Factor

60

150 125 milliSeconds

milliSeconds

50 40 30 20 10

100 75 50 25

0

0 0

1000

2000

3000 4000 degree

5000

6000

0

50

100

150 degree

200

250

300

Let us take a graphical look at the result of expanding a power of a sum. Let the sum total be zero in the form 0 = ⁄n-1 j=0 expH2 p i j ê nL. Then the powers of this sum are also 0, and, by interpreting the partial sums of the expanded power as points in the complex plane, we get a closed path. In[29]:=

expandPicture[{n_, pow_}, opts___] := Show[Graphics[{Thickness[0.002], Line[{Re[#], Im[#]}& /@ (* form the partial sums *) FoldList[Plus, 0, N[(List @@ (* first make list, and then replace to avoid reordering *) (* now comes the expansion *) Expand[Sum[C[i], {i, 0, n - 1}]^pow]) /. C[i_] -> Exp[i I 2Pi/n]]]]}], opts, AspectRatio -> Automatic, Frame -> True, PlotRange -> All, FrameTicks -> None];

In[30]:=

Map[Show[GraphicsArray[ expandPicture[#, (* the parameters {Table[{3, i}, {i, Table[{5, i}, {i, Table[{8, i}, {i,

DisplayFunction -> Identity]& /@ #]]&, for the pictures *) 3, 21, 3}], Table[{4, i}, {i, 2, 20, 4}], 3, 15, 3}], Table[{6, i}, {i, 3, 12, 2}], 2, 8, 2}]}, {1}]

1.2 Operations on Polynomials

19

Here are three more complicated versions of such a graphic. We color the line segments from red to blue. In[31]:=

Show[GraphicsArray[ expandPicture[#, DisplayFunction -> Identity] /. Line[l_] :> With[{n = Length[l]}, MapIndexed[{Hue[0.78 #2[[1]]/n], Line[#1]}&, Partition[l, 2, 1]]]& /@ {{10, 10}, {16, 8}, {36, 4}}]]

Before discussing another algorithmically nontrivial operation for manipulating polynomials—namely, Decompose—we consider a method to rearrange an expression, if possible, into canonical polynomial form. PolynomialQ[polynomial, var] (we know this command from Chapter 5 of the Programming volume [1735]) tests whether polynomial is a polynomial in var. Be aware that PolynomialQ is a purely structural operation. While the expression Hcos2 H1L + sin2 H1L - 1L xx + 2 x2 - 1 is mathematically a polynomial, structurally it is not. As a function ending with Q, PolynomialQ has to return True of False and cannot stay unevaluated. But it is always possible to construct terms of the form hiddenZero nonPolynomialPart so that it is algorithmically undecidable hiddenZero is zero (Richardson theorem [1482], [1484], [1485], [442]). From this, it follows that to guarantee not to get wrong answers from PolynomialQ is not doomed to give wrong results sometimes, it must be a purely structural function. In[32]:= Out[32]=

PolynomialQ[(Sin[1]^2 + Cos[1]^2 - 1) x^x + 2 x^2 - 1, x] False

PolynomialQ[polynomial] tests if polynomial can be considered as a polynomial in at least one variable. Using Collect, we can now write an expression as an explicit polynomial in given variables.

Symbolic Computations

20

Collect[expression, {var1 , var2 , … , varn }, function] writes expression recursively as a polynomial in the variables vari Hi = 1, …, nL and applies the optional function function to the resulting coefficients. If function is omitted, the last argument is assumed to be Identity.

Here we again use our previous polynomial. In[33]:=

polyInxInyInz = (1 - x)^3 (3 + y - 2x)^2 (z^2 + 8y)^2;

Here is this expression as a polynomial in x. In[34]:= Out[34]=

Collect[polyInxInyInz, x] 2

2

2

−4 x5 H8 y + z2 L + x4 H24 + 4 yL H8 y + z2 L + x H−39 − 22 y − 3 y2 L H8 y + z2 L + 2 2

2 2

x H−57 − 18 y − y L H8 y + z L + H9 + 6 y + y L H8 y + z L + x H67 + 30 y + 3 y2 L H8 y + z2 L 3

2

2

2

2

The result of Collect depends on the form of its input. Collect will not expand or factor the resulting coefficients by default. In[35]:= Out[35]=

Collect[Expand[polyInxInyInz], x] 576 y2 + 384 y3 + 64 y4 + 144 y z2 + 96 y2 z2 + 16 y3 z2 + 9 z4 + 6 y z4 + y2 z4 + x5 H−256 y2 − 64 y z2 − 4 z4 L + x4 H1536 y2 + 256 y3 + 384 y z2 + 64 y2 z2 + 24 z4 + 4 y z4 L + x H−2496 y2 − 1408 y3 − 192 y4 − 624 y z2 − 352 y2 z2 − 48 y3 z2 − 39 z4 − 22 y z4 − 3 y2 z4 L + x3 H−3648 y2 − 1152 y3 − 64 y4 − 912 y z2 − 288 y2 z2 − 16 y3 z2 − 57 z4 − 18 y z4 − y2 z4 L + x2 H4288 y2 + 1920 y3 + 192 y4 + 1072 y z2 + 480 y2 z2 + 48 y3 z2 + 67 z4 + 30 y z4 + 3 y2 z4 L

Using the optional third argument of Collect, we can bring the coefficients to a canonical form. In[36]:= Out[36]=

Collect[Expand[polyInxInyInz], x, Factor] 2

2

−4 x5 H8 y + z2 L + H3 + yL2 H8 y + z2 L + 2 2

2

4 x4 H6 + yL H8 y + z L − x H3 + yL H13 + 3 yL H8 y + z2 L − 2 2

2

x3 H57 + 18 y + y2 L H8 y + z L + x2 H67 + 30 y + 3 y2 L H8 y + z2 L

Note that the individual terms are not strictly ordered; in particular, not all terms ( ∂ x0 ) appear one after the other. This is a consequence of the Flat and Orderless attributes of Plus and the canonical order. Here is the same expression as a polynomial in y. In[37]:= Out[37]=

Collect[polyInxInyInz, y] 64 H1 − xL3 y4 + H1 − xL3 y3 H384 − 256 x + 16 z2 L + H1 − xL3 y2 H576 − 768 x + 256 x2 + 96 z2 − 64 x z2 + z4 L + H1 − xL3 y H144 z2 − 192 x z2 + 64 x2 z2 + 6 z4 − 4 x z4 L + H1 − xL3 H9 z4 − 12 x z4 + 4 x2 z4 L

Here it is again as a polynomial in z. In[38]:= Out[38]=

Collect[polyInxInyInz, z] 64 H1 − xL3 y2 H3 − 2 x + yL2 + 16 H1 − xL3 y H3 − 2 x + yL2 z2 + H1 − xL3 H3 − 2 x + yL2 z4

Using as the second argument in Collect the list {x, y} results in a polynomial in x, whose coefficients are polynomials in y, whose coefficients are polynomials in z. In[39]:= Out[39]=

Collect[polyInxInyInz, {x, y}] 64 y4 + 9 z4 + y3 H384 + 16 z2 L + x5 H−256 y2 − 64 y z2 − 4 z4 L + y2 H576 + 96 z2 + z4 L + y H144 z2 + 6 z4 L + x H−192 y4 − 39 z4 + y3 H−1408 − 48 z2 L + y H−624 z2 − 22 z4 L + y2 H−2496 − 352 z2 − 3 z4 LL +

1.2 Operations on Polynomials

21

x3 H−64 y4 − 57 z4 + y3 H−1152 − 16 z2 L + y H−912 z2 − 18 z4 L + y2 H−3648 − 288 z2 − z4 LL + x4 H256 y3 + 24 z4 + y2 H1536 + 64 z2 L + y H384 z2 + 4 z4 LL + x2 H192 y4 + 67 z4 + y3 H1920 + 48 z2 L + y2 H4288 + 480 z2 + 3 z4 L + y H1072 z2 + 30 z4 LL

Here we apply the function C to each of the coefficients in z. In[40]:=

Collect[polyInxInyInz, {x, y}, C] y4 C@64D + x5 Hy2 C@−256D + y C@−64 z2 D + C@−4 z4 DL + C@9 z4 D + y3 C@384 + 16 z2 D + x Hy4 C@−192D + C@−39 z4 D + y3 C@−1408 − 48 z2 D + y C@−624 z2 − 22 z4 D + y2 C@−2496 − 352 z2 − 3 z4 DL + x3 Hy4 C@−64D + C@−57 z4 D + y3 C@−1152 − 16 z2 D + y C@−912 z2 − 18 z4 D + y2 C@−3648 − 288 z2 − z4 DL + y2 C@576 + 96 z2 + z4 D + 4 x Hy3 C@256D + C@24 z4 D + y2 C@1536 + 64 z2 D + y C@384 z2 + 4 z4 DL + y C@144 z2 + 6 z4 D + x2 Hy4 C@192D + C@67 z4 D + y3 C@1920 + 48 z2 D + y2 C@4288 + 480 z2 + 3 z4 D + y C@1072 z2 + 30 z4 DL

Out[40]=

The second argument in Collect need not be an atomic expression, and thus the following expression will be written as a polynomial over co[x]. In[41]:=

Collect[Expand[(co[x] + 4 si[z] + 5 co[x]^3)^4], co[x]] 150 co@xD8 + 500 co@xD10 + 625 co@xD12 + 240 co@xD5 si@zD + 1200 co@xD7 si@zD + 2000 co@xD9 si@zD + 96 co@xD2 si@zD2 + 256 co@xD si@zD3 + 256 si@zD4 + co@xD4 H1 + 960 si@zD2 L + co@xD6 H20 + 2400 si@zD2 L + co@xD3 H16 si@zD + 1280 si@zD3 L

Out[41]=

Collect only reorders. It does not carry out any “mathematical meaning- or content-dependent manipulations” . It only looks at the syntactical structure of expressions. (This means that in the following example, Cos[x]^2 is not rewritten as 1 - Sin[x]^2.) In[42]:= Out[42]=

Collect[Sin[x]^2 + (Cos[x]^2 + Sin[x]^2)^3 + 3 Sin[x]^3 + 7, Sin[x]] 7 + Cos@xD6 + H1 + 3 Cos@xD4 L Sin@xD2 + 3 Sin@xD3 + 3 Cos@xD2 Sin@xD4 + Sin@xD6

Given an expression in several variables, we can use Variables to identify in which variables the expression is a polynomial. Variables[expression] produces a list of the variables in which expression is a polynomial.

For the above polyInxInyInz, we get the expected result. In[43]:= Out[43]=

Variables[polyInxInyInz] 8x, y, z

0 jj$ X ,X >0 jj"x,x>X ﬂlœ l - ¶ < ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 7 x3 - 5 x + 9 k k {{ In[156]:=

Out[156]=

ForAll[∂, ∂ > 0, Exists[X, X > 0, ForAll[x, x > X && Element[, Reals], - ∂ < (2x^3 - 4x + 6)/(7x^3 - 5x + 9) < + ∂]]] // Resolve 2 7

1.3 Operations on Rational Functions The commands Numerator and Denominator introduced at the beginning (Subsection 2.4.1 of the Programming volume [1735]) work for rational numbers and for rational functions, that is, for fractions of polynomials. Here is an example. In[1]:= Out[1]= In[2]:= Out[2]= In[3]:= Out[3]=

ratio = (3 + 6 x + 6 x^2)/(5 y + 6 y^3) 3 + 6 x + 6 x2 5 y + 6 y3 Numerator[ratio] 3 + 6 x + 6 x2 Denominator[ratio] 5 y + 6 y3

The parts of a product that belong to the numerator and the parts that belong to the denominator are determined by the sign of the associated exponents after transformations of the form 1/k^(-l) Ø k^l. Here is an expression that is a product of ten factors. After evaluation, eight have a positive exponent and four have a negative exponent. Some of the negative exponent terms are formatted in the denominator. (Be aware that Exp[expr] is rewritten is Power[E, expr] and the explicit formatting depends on expr.) In[4]:=

expr = a b^2 c^-2 d^(4/3) e^-(5/6) 1/f^(-12/13) g^h i^-j 1/k^-l Exp[-E^2] 2

Out[4]= In[5]:= Out[5]=

a b2 d4ê3 − f12ê13 gh i−j kl c2 e5ê6 {Numerator[expr], Denominator[expr]} 2

9a b2 d4ê3 f12ê13 gh kl , c2 e5ê6 ij =

Here is a nested fraction.

1.3 Operations on Rational Functions In[6]:= Out[6]=

79

nestedFraction = (a/(b + 1) + 2)/(c/(d + 3) + 4) a 2 + 1+b c 4 + 3+d

For nested fractions, the functions Numerator and Denominator take into account only the “outermost” structure. In[7]:= Out[7]=

{Numerator[#], Denominator[#]}&[nestedFraction] a c 92 + , 4 + = 1+b 3+d

For fractions, the command Expand, which multiplies out polynomials, is divided into four parts to facilitate working on the numerators and denominators. Expand[rationalFunction] multiplies out only the numerator of the rationalFunction, and divides all resulting terms by the (unchanged) denominator. ExpandNumerator[rationalFunction] multiplies out only the numerator of the rationalFunction, and divides the result as a single expression by the (unchanged) denominator. ExpandDenominator[rationalFunction] multiplies out only the denominator of the rationalFunction. ExpandAll[rationalFunction] multiplies out the numerator and denominator of rationalFunction, and divides all resulting terms.

We now look at the effect of these four commands on the sum of two ratios of polynomials. In[8]:= Out[8]=

ratio = (1 + 7 y^3)/(2 + 8 x^3)^2 + (1 + 6 x)^3/(1 - 4 y)^2 H1 + 6 xL3 1 + 7 y3 + H1 − 4 yL2 H2 + 8 x3 L2

Except for the lexicographic reordering of the two partial sums, Mathematica did not do anything nontrivial to this input automatically. Now, all numerators are multiplied out, and all resulting parts are individually divided. In[9]:= Out[9]=

Expand[ratio] 1 1 18 x 108 x2 216 x3 7 y3 + + + + + H1 − 4 yL2 H1 − 4 yL2 H1 − 4 yL2 H1 − 4 yL2 H2 + 8 x3 L2 H2 + 8 x3 L2

ExpandNumerator also multiplies out, but does not divide the terms individually. In[10]:= Out[10]=

ExpandNumerator[ratio] 1 + 18 x + 108 x2 + 216 x3 1 + 7 y3 + H1 − 4 yL2 H2 + 8 x3 L2

With ExpandDenominator, the numerator remains unchanged, and only the denominator is multiplied out. In[11]:= Out[11]=

ExpandDenominator[ratio] 1 + 7 y3 H1 + 6 xL3 + 1 − 8 y + 16 y2 4 + 32 x3 + 64 x6

Finally, we multiply everything out. ExpandAll typically produces the largest expressions. Now, the numerator is multiplied out and each of its terms is written over the expanded form of the denominator. In[12]:=

ExpandAll[ratio]

Symbolic Computations

80

Out[12]=

1 7 y3 + + 4 + 32 x3 + 64 x6 4 + 32 x3 + 64 x6 1 18 x 108 x2 216 x3 + + + 1 − 8 y + 16 y2 1 − 8 y + 16 y2 1 − 8 y + 16 y2 1 − 8 y + 16 y2

In the following example, the same happens. Be aware that no common factors are cancelled. In[13]:= Out[13]=

ExpandAll[(1 - x^4)/(1 + x^2)^2] 1 x4 − 1 + 2 x2 + x4 1 + 2 x2 + x4

In nested fractions, ExpandAll works again only the “outermost” structure. In[14]:= Out[14]=

ExpandAll[nestedFraction] 2 a c + c 4 + H1 + bL H4 + L 3+d 3+d

Mapping ExpandAll to all parts allows us to expand all parts. In[15]:=

Out[16]=

(* show all steps *) FixedPointList[MapAll[ExpandAll, #]&, nestedFraction] a 2 + a 1+b , 2 9 c c + c , 4 + 4 + H1 + bL H4 + L 3+d 3+d 3+d a 2 a 2 c + c + c b c , c b c = 4 + 4 + 4 + 4 b + 4 + 4 b + + + 3+d 3+d 3+d 3+d 3+d 3+d

Be aware that there is a difference between Expand //@ expr and ExpandAll[expr]. ExpandAll does not automatically recursively expanded in the inner levels of expressions with Hold-like attributes. In[17]:= Out[17]=

{Expand //@ #, ExpandAll[#]}&[(α + β)^2 + Hold[(α + β)^2]] 8α2 + 2 α β + β2 + Hold@Expand@Expand@Expand@αD + Expand@βDDExpand@2D DD, α2 + 2 α β + β2 + Hold@Hα + βL2 D

{z}] -> z''[x]} 2 z@xDz@xD + 4 x z@xDz@xD Hz @xD + Log@z@xDD z @xDL + z @xD2 z i i @xD + Log@z@xDD z @xD + y x2 j y jz@xDz@xD Hz @xD + Log@z@xDD z @xDL2 + z@xDz@xD j jz z zz z@xD {{ k k

In[8]:= Out[8]=

Expand[ex3 - ex1] x2 z@xDz@xD z @xD + x2 Log@z@xDD z@xDz@xD z @xD − x2 z@xDz@xD z @xD − x2 Log@z@xDD z@xDz@xD z @xD

For a function of several variables, the derivative is not represented by ' in output form, but instead by using numbers in parentheses to specify how many times to differentiate with respect to the corresponding variable. In[9]:= Out[9]=

D[func[t, , ], {t, 2}, {, 3}, {, 3}] funcH2,3,3L @t, , D

Mathematica is able to explicitly differentiate nearly all special functions with respect to their “argument”, but only a few special functions with respect to their “parameters”. Here is the derivative of LegendreP with respect to its first argument. In[10]:= Out[10]=

D[LegendreP[n, z], {n, 1}] LegendrePH1,0L @n, zD

Numerically, these quantities can still be calculated. In[11]:= Out[11]=

N[% /. {z -> 2.04, n -> 0.567}] 1.07925

Here is a high-precision evaluation of the last derivative. In[12]:= Out[12]=

N[%% /. {z -> 204/100, n -> 567/1000}, 22] 1.079254609237523525024

Here, we have to make a remark about the numerical differentiation encountered in the last examples. Whereas the last derivative was evaluated “just fine” by Mathematica, the following “simple” derivative does “not work”. In[13]:=

Abs'[1.]

Out[13]=

Abs @1.D

The reason Abs'[inexactNumber] does not evaluate is the fact that the derivative of Abs does not exist. The derivative is (by definition) the limit limdØ0 H f Hz + dL - f HzLL ê d for any complex d. But for Abs, the result

1.6 Classical Analysis

131

depends on the direction of d approaching 0. Let z = 1; then we have the following result. (Here, we use the soon-to-be-discussed function Limit.) In[14]:=

Out[14]=

absDeriv[zr_, zi_, ϕ_] = With[{z = zr + I zi, δz = δ Exp[I ϕ]}, Limit[ComplexExpand[(Abs[z + δz] - Abs[z])/δz, TargetFunctions -> {Re, Im}], δ -> 0]] HCos@ϕD − Sin@ϕDL Hzr Cos@ϕD + zi Sin@ϕDL è!!!!!!!!!!!!!!!!!!!!!! zi2 + zr2

Here is the direction dependence for z = 1 as a function of argHz - 1L. The two curves show the real and the imaginary parts of absDeriv[1, 0, j]. In[15]:=

Plot[{Re[absDeriv[1, 0, ϕ]], Im[absDeriv[1, 0, ϕ]]}, {ϕ, 0, 2Pi}, PlotRange -> All, Frame -> True, Axes -> False, PlotStyle -> {{Thickness[0.005]}, {Thickness[0.005], Dashing[{0.01, 0.01}]}}] 1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 0

1

2

3

4

5

6

The numerical derivative is always taken for purely real δz. To use the numerical differentiation, we “hide” Abs in the following definition of abs. In[16]:=

(* make abs a numerical function for numerical arguments *) SetAttributes[abs, NumericFunction]; abs[x_?InexactNumberQ] := Abs[x]

For purely real δz and complex z, we have the following behavior of the “derivative”. The right picture shows the result of the numerical differentiation. In[19]:=

Show[GraphicsArray[ Plot3D[#, {x, -1, 1}, {y, -1, 1}, DisplayFunction -> Identity, PlotPoints -> 25]& /@ {absDeriv[x, y, 0], abs'[x + I y]}]]

1 0.5 0 -0.5 -1 -1 1

1 0.5 0 -0.5

-0.5

0 0.5 1

-1

1 0.5 0 -0.5 -1 -1 1

1 0.5 0 -0.5

-0.5

0 0.5 1

-1

While the last two pictures look “sensible”, checking the difference between absDeriv[x, y, 0] and abs' more carefully, we see near the origin the effects of the numerical differentiation.

Symbolic Computations

132 In[20]:=

Plot3D[Abs[absDeriv[x, y, 0] - abs'[x + I y]], {x, -1/4, 1/4}, {y, -1/4, 1/4}, PlotRange -> All, PlotPoints -> 50]

0.4

0.2

0.2

0.1

0 -0.2

0 -0.1

0

-0.1 0.1

-0.2

0.2 0 2

The following picture shows the points used in the numerical differentiation process near the high-precision number 1. In[21]:=

abs[x_?InexactNumberQ] := ((* collect values *) Sow[x]; Abs[x]); Show[Graphics[Line[{{#, 0}, {#, Abs[#]}}]& /@ (* evaluate derivative and return sampled x-values *) Reap[abs'[1``100]][[2, 1]]], PlotRange -> All, Frame -> True] 1.4 1.2 1 0.8 0.6 0.4 0.2 0 0.6

0.8

1

1.2

1.4

From the last result, we conclude that we should not trust the numerical derivatives of discontinuous functions or quickly oscillating functions. Here is the numerical derivative of a step-like function (be aware that we display f HxL and f £ HxL ê 60). In[23]:=

SetAttributes[meander, NumericFunction]; meander[x_?NumberQ] := Sin[x]/Sqrt[Sin[x]^2] Plot[{meander[x], meander'[SetPrecision[x, 100]]/60}, {x, 2, 4}, PlotStyle -> {Hue[0], GrayLevel[0]}, PlotRange -> All, Compiled -> False]

1.6 Classical Analysis

133 1 0.5 2.5

3

3.5

4

-0.5 -1 -1.5 -2

One word of caution is in order here: If very reliable high-precision numerical values of derivatives are needed, it is safer to use the function ND from the package NumericalMath`NLimit` instead of numericalizing symbolic expressions containing unevaluated derivatives using N. N[unevaluatedDerivative] has to choose a scale for sample points. Like other numerical functions, no symbolic analysis of unevaluatedDerivative is carried out, and as a result, the chosen scale may result in mathematically wrong values for the derivative (this is especially the case for values of derivatives near singularities; but in principle higher-order numerical differentiation is difficult [1331]). We should make another remark concerning a potential pitfall when differentiating. Mathematica tacitly assumes that the differentiation with respect to different variables can be interchanged (a condition which is fulfilled for most functions used in practical calculations). This means that for the following well-known example strange Func, we do not get the “expected” derivatives for all values x and y if we specialize x- or y-values in intermediate steps. In[26]:=

strangeFunc[0, 0] = 0; strangeFunc[x_, y_] = x y (x^2 - y^2)/(x^2 + y^2);

Here is the function, its first derivative with respect to x, and its mixed second derivative are shown over the x,y-plane. In[28]:=

Show[GraphicsArray[ Plot3D[Evaluate[D[strangeFunc[x, y], ##]], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 121, Mesh -> False, PlotRange -> All, DisplayFunction -> Identity]& @@@ (* function, one first derivative, and mixed second derivative *) {{}, {x}, {x, y}}]]

0.2 0 -0.2 -1

1 0.5 -0.5

0 0

-0.5 0.5

1

-1

1 0.5 0 -0.5 -1 -1

1 0.5 -0.5

0 0

1 0 -1 -1

-0.5 0.5

1

-1

1 0.5 -0.5

0 0

-0.5 0.5

1

-1

First, we differentiate with respect to x and evaluate at 80, y 0}, x]}& @ strangeFunc[x, y] 8−1, 1

0, 1D

For some applications, the following result is not desired, although correct almost everywhere with respect to the 1D Lebesgue measure on the real axis. In[34]:= Out[34]=

D[%, x] Which@x ≤ 0, 0, x > 0, 0D

Differentiating a univariate piecewise function (head Piecewise) gives the value Indeterminate at the position of the discontinuity. In[35]:= Out[35]=

D[Piecewise[{{1, x > 1}}], x] 0 x < 1 »» x > 1 µ Indeterminate True

Differentiating a multivariate piecewise function (head Piecewise) gives the result valid almost everywhere (including lower-dimensional curves where the result is not pointwise correct). In[36]:= Out[36]=

D[Piecewise[{{1, x + y > 1}}], x] 0

The Mathematica kernel does not recognize the derivative of such discontinuous functions as proportional to Dirac delta function by default (but see Section 1.8). Other case-sensitive functions are differentiated in a similar way.

1.6 Classical Analysis In[37]:= Out[37]=

135

D[If[true, false[unknownVariable], dontKnow[unknownVariable]], unknownVariable] If@true, false @unknownVariableD, dontKnow @unknownVariableDD

This means that D does generically not act in a distributional sense, not for the above-mentioned case of piecewise-defined functions and not for other “closed-form” functions. The following two expressions are so-called differential algebraic constants. In[38]:= Out[38]=

D[{(Sqrt[x^2] - x)/(2x), (Log[x^2] - 2 Log[x])}, x] // Together 80, 0

True, Axes -> False], Plot3D[Im[(Log[(x + I y)^2] - 2 Log[x + I y])], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 30]}]]] 0 -0.2 -0.4

5 1

0

-0.6

0.5

-5 0

-1 1

-0.8

-0.5

-0.5

0 0.5

-1 -1

-0.5

0

0.5

1

1

-1

x

As a result, for such functions the identity Ÿx ∑ f HxL ê ∑ x „ x f HxL - f Hx0 L does not hold for all generic x0 and x. 9 Using a generalized function UnitStep (to be discussed below) and its derivatives (Dirac d function Dirac Delta and its derivative), the last identity hold almost everywhere. In[40]:=

intDiffIdentity[f_, {x0_, x_}] := Integrate[D[f[ξ], ξ], {ξ, x0, x}] == f[x] - f[x0]

In[41]:=

{(* generic, symbolic, not explicitly defined function *) intDiffIdentity[ , {x0, x}], (* distribution--can be differentiated freely *) intDiffIdentity[UnitStep, {-1, 1}], (* piecewise functions ignore jump contributions *) intDiffIdentity[Piecewise[{{1, # >= 0}}]&, {-1, 1}]} 8True, True, False

_]] 8DifferentiationOptions → 8AlwaysThreadGradients → False, DifferentiateHeads → True, DirectHighDerivatives → True, ExcludedFunctions → 8Hold, HoldComplete, Less, LessEqual, Greater, GreaterEqual, Inequality, Unequal, Nand, Nor, Xor, Not, Element, Exists, ForAll, Implies, Positive, Negative, NonPositive, NonNegative False)]; D[fh, {x, 6}] 4 4 4 4 4 4320 x x2 + 5760 x x6 + 11520 x x H1 + xL + 69120 x x5 H1 + xL + 30720 x x9 H1 + xL + 4

4

4

4

4320 x H1 + xL2 + 146880 x x4 H1 + xL2 + 207360 x x8 H1 + xL2 + 46080 x x12 H1 + xL2 + x4

80640

x4

x H1 + xL + 299520 3

4

3

x H1 + xL + 184320 7

4

3

x4

x

11

H1 + xL + 3

4

24576 x x15 H1 + xL3 + 10080 x x2 H1 + xL4 + 100800 x x6 H1 + xL4 + 4

4

4

134400 x x10 H1 + xL4 + 46080 x x14 H1 + xL4 + 4096 x x18 H1 + xL4 In[48]:= Out[48]=

Expand[%%% - %] 0

There is no general rule regarding what is preferable, the setting "DirectHighDerivatives" -> False or "DirectHighDerivatives" -> True . In many cases, "DirectHighDerivatives" -> True will be much faster, but will produce larger results. Here is a “typical” example. In[49]:=

Out[49]=

With[{f = x^3 Log[x^5 + 1] Exp[-x^2], n = 50}, {Developer`SetSystemOptions["DifferentiationOptions" -> ("DirectHighDerivatives" -> True)]; Timing[ByteCount[D[f, {x, n}]]], Developer`SetSystemOptions["DifferentiationOptions" -> ("DirectHighDerivatives" -> False)]; Timing[ByteCount[D[f, {x, n}]]]}] 880.06 Second, 934776 ("DirectHighDerivatives" -> False)]; Timing[ByteCount[D[f, {x, n}]]]}] 8813.47 Second, 16 {"ExcludedFunctions" -> Append["ExcludedFunctions" /. ("DifferentiationOptions" /. Developer`SystemOptions[]), ]}] DifferentiationOptions → 8AlwaysThreadGradients → False, DifferentiateHeads → True, DirectHighDerivatives → True, ExcludedFunctions → 8Hold, HoldComplete, Less, LessEqual, Greater, GreaterEqual, Inequality, Unequal, Nand, Nor, Xor, Not, Element, Exists, ForAll, Implies, Positive, Negative, NonPositive, NonNegative, True}];

In[56]:=

D[ [[x]], x]

Out[56]= In[57]:=

∂x @@xDD (* restore old settings *) Developer`SetSystemOptions[ "DifferentiationOptions" -> {"ExitOnFailure" -> False}];

The next input makes use of fairly high derivatives. We visualize the (normalized) coefficients cHnL k appearing in n-1

1 1 ∑n k ÅÅÅÅÅÅÅÅÅnÅÅÅ J ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ‚ cHnL k ln HzL. n ∑ z lnHzL z lnn+1 HzL k=0 Here, we use n = 1, 2, …, 200.

Symbolic Computations

138 In[59]:=

Show[Graphics3D[ Table[(* the derivative *) deriv = D[1/Log[z], {z, n}]; (* the coefficients *) cl = CoefficientList[Expand[ (-1)^n z^n Log[z]^(n + 1)deriv], Log[z]]; (* colored line *) {Hue[n/ 250], Line[MapIndexed[{#2[[1]] - 1, n, #1}&, (* normalize coefficients *) cl/Max[Abs[cl]]]]}, {n, 200}]], BoxRatios -> {1, 3/2, 0.5}, PlotRange -> All, Axes -> True, ViewPoint -> {0, 3, 1}] 0 50 100 00 150 0 200 1 0.75 0.5 0.25 0 200

150

100

50

0

In addition to explicitly given functions, Mathematica is also able to differentiate certain abstract function(al)s of functions, for example, indefinite integrals or inverse functions [75], [942], [1724], [1622]. In[60]:=

Table[D[InverseFunction[Υ][z], {z, i}], {i, 3}] 1 3 Υ @ΥH−1L @zDD ΥH3L @ΥH−1L @zDD Υ @ΥH−1L @zDD , = 9 , − − Υ @ΥH−1L @zDD Υ @ΥH−1L @zDD3 Υ @ΥH−1L @zDD4 Υ @ΥH−1L @zDD5 2

Out[60]=

Sometimes it is convenient to form derivatives of multivariate functions with respect to all independent variables at once (especially in carrying out vector analysis operations). This can be done with the following syntax. D[toDifferentiate, {vector, n}] differentiates toDifferentiate n times with respect to the vector variable vector. If the n is omitted, it is assumed to be 1.

Here are the first and second derivatives of the scalar function f , that depends on x, y, and z. In[61]:= Out[61]= In[62]:= Out[62]=

D[f[x, y, z], {{x, y, z}, 1}] 8fH1,0,0L @x, y, zD, fH0,1,0L @x, y, zD, fH0,0,1L @x, y, zD< D[f[x, y, z], {{x, y, z}, 2}] 88fH2,0,0L @x, y, zD, fH1,1,0L @x, y, zD, fH1,0,1L @x, y, zD z; Γllu[a_, b_, c_] := Γllu[a, b, c] = (* raise third index with g *) Together[Sum[Γlll[a, b, d] guu[[c, d]], {d, 2}]] /. z[x, y] -> z

The two equations for the geodesics are of the form x≥ HtL = FHxHtL, yHtL, zHtL, x£ HtL, y£ HtL, z£ HtLL and similar for y≥ HtL. To make sure the geodesics stay on the original, implicitly defined surface, and to obtain three equations for the three coordinates, we supplement the two geodesic equations with the differentiated form of the implicit equation (meaning Hgrad holedCubeHxHtL, yHtL, zHtLLL.8x£ HtL, y£ HtL, z£ HtL< = 0 [332]. In[154]:=

geodesicEquations = (# == 0)& /@ Append[ (* second-order odes for x[τ] and y[τ] *) Table[D[[c][τ], τ, τ] + Sum[Γllu[a, b, c] D[[a][τ], τ] D[[b][τ], τ], {a, 2}, {b, 2}], {c, 2}] /. Derivative[n_][xy_][τ] :> Derivative[n][xy], (* differentiated form of the implicit equation of the surface *) D[holedCube /. {x -> x[τ], y -> y[τ], z[x, y] -> z[τ]}, τ] /. xyz_[τ] -> xyz] /. {x -> x[τ], y -> y[τ], z -> z[τ]} /. Derivative[n_][xy_[τ]] :> Derivative[n][xy][τ];

In[155]:=

((geodesicEquations // Simplify) /. {ξ_[τ] -> ξ}) // Simplify // TraditionalForm

Out[155]//TraditionalForm= 2

9I2 x2 y H2 y2 - 1L H6 z2 - 1L x£ y£ H1 - 2 x2 L + 2

x H2 x2 - 1L I4 H6 z2 - 1L x6 + H4 - 24 z2 L x4 + H24 z6 - 24 z4 + 12 z2 - 1L x2 - z2 H1 - 2 z2 L M Hx£ L2 + 2

x H2 x2 - 1L I4 H6 z2 - 1L y6 + H4 - 24 z2 L y4 + H24 z6 - 24 z4 + 12 z2 - 1L y2 - z2 H1 - 2 z2 L M Hy£ L2 + 2

2

z2 H1 - 2 z2 L I4 x6 - 4 x4 + x2 + 4 y6 - 4 y4 + y2 + z2 H1 - 2 z2 L M x££ M ë 2

IH2 z3 - zL H4 x6 - 4 x4 + x2 + 4 y6 + 4 z6 - 4 y4 - 4 z4 + y2 + z2 LM 0, 2

I2 x H2 x2 - 1L y2 H6 z2 - 1L x£ y£ H1 - 2 y2 L +

Symbolic Computations

150 2

y H2 y2 - 1L I4 H6 z2 - 1L x6 + H4 - 24 z2 L x4 + H24 z6 - 24 z4 + 12 z2 - 1L x2 - z2 H1 - 2 z2 L M Hx£ L2 + 2

y H2 y2 - 1L I4 H6 z2 - 1L y6 + H4 - 24 z2 L y4 + H24 z6 - 24 z4 + 12 z2 - 1L y2 - z2 H1 - 2 z2 L M Hy£ L2 + 2 2

2

6

4

2

6

4

2

2

2 2

££

z H1 - 2 z L I4 x - 4 x + x + 4 y - 4 y + y + z H1 - 2 z L M y M ë 2

IH2 z3 - zL H4 x6 - 4 x4 + x2 + 4 y6 + 4 z6 - 4 y4 - 4 z4 + y2 + z2 LM 0, 2

£

x H2 x - 1L x + y H2 y2 - 1L y£ + z H2 z2 - 1L z£ 0=

For a nicer visualization, we calculate a graphic of the surface. In[156]:=

Needs["Graphics`ContourPlot3D`"]

In[157]:=

holedCubeGraphic3D = Graphics3D[{EdgeForm[], (* map in other positions *) Fold[Function[{p, r}, {p, Map[# r&, p, {-2}]}], (* 3D contour plot of holedCube in the first octant *) Cases[ContourPlot3D[Evaluate[holedCube /. z[x, y] -> z], {x, 0, 1.1}, {y, 0, 1.1}, {z, 0, 1.1}, PlotPoints -> {{24, 2}, {20, 2}, {12, 2}}, MaxRecursion -> 1, DisplayFunction -> Identity], _Polygon, {0, Infinity}] /. Polygon[l_] :> (* make diamonds *) Polygon[Plus @@@ Partition[Append[l, First[l]], 2, 1]/2], {{-1, 1, 1}, {1, -1, 1}, {1, 1, -1}}]}];

We visualize the geodesics as lines on the surface. To avoid visually unpleasant intersections between the discretized surface and the discretized geodesics, we define a function liftUp that lifts the geodesics slightly in direction of the local surface normal. In[158]:=

normal[{x_, y_, z_}] = (* gradient gives the normal *) D[holedCube /. z[x, y] -> z, #]& /@ {x, y, z} // Expand;

In[159]:=

liftUp[{x_, y_, z_}, ∂_] = (* move in direction of normal *) {x, y, z} - ∂ #/Sqrt[#.#]&[normal[{x, y, z}]];

In the following graphic, we calculate 64 geodesics. We choose the starting points along the upper front “beam”. The function rStart parametrizes the starting values. In[160]:=

rStart[ϕ_] := rStart[ϕ] = Module[{r}, r /. FindRoot[Evaluate[ holedCube == 0 /. {x -> 0, y -> 0.7 + r Cos[ϕ], z[x, y] -> 0.7 + r Sin[ϕ]}], {r, 0, 1/3}]]

Here are the resulting geodesics. On the nearby smoothed corners of the cube, we see the to-be-expected caustics. In[161]:=

Module[{o = 128, T = 6, , τ1, τ2}, Show[{(* the surface *) holedCubeGraphic3D, Table[ (* solve differential equations for geodesics *) (* avoid messages from caustics that run in problems *) Internal`DeactivateMessages[ nsol = NDSolve[Join[geodesicEquations, (* starting values *) {x[0] == 0, y[0] == 0.7 + rStart[ϕ] Cos[ϕ], z[0] == 0.7 + rStart[ϕ] Sin[ϕ], x'[0] == 1, y'[0] == 0}], {x, y, z}, {τ, -T, T}, MaxSteps -> 2 10^4, PrecisionGoal -> 6, AccuracyGoal -> 6, (* use appropriate method *) Method -> {"Projection", Method -> "StiffnessSwitching",

1.6 Classical Analysis

151

(* stay on surface *) "Invariants" -> {holedCube /. {x -> x[τ], y -> y[τ], z[x, y] -> z[τ]}}}]]; (* parametrized geodesics *) [τ_] := (Append[liftUp[{x[τ], y[τ], z[τ]} /. nsol[[1]], 0.015], {Thickness[0.003], Hue[ϕ/(2Pi)]}]); (* for larger T *) {τ1, τ2} = nsol[[1, 1, 2, 1, 1]]; (* show surface and geodesics *) ParametricPlot3D[[τ], {τ, τ1, τ2}, Compiled -> False, PlotPoints -> Round[200 (τ2 - τ1)], DisplayFunction -> Identity], {ϕ, 0, 2Pi (1 - 1/o), 2Pi/o}]}, DisplayFunction -> $DisplayFunction, Boxed -> False, Axes -> False, ViewPoint -> {2.2, 2.4, 1.6}]]

We end here and leave it to the reader to calculate euthygrammes [885]. For large-scale calculations of this kind arising in general relativity (see [364]), we recommend the advanced (commercially available) Mathematica package MathTensor by L. Parker and S. Christensen [1379] (http://smc.vnet.net/MathSolutions.html); or the package Cartan by H. Soreng (http://store.wolfram.com/view/cartan). For the algorithmic simplification of tensor expressions, see [120] and [1430]. Next, we give an application of differentiation involving graphics: the evolute of a curve, the evolute of the evolute of a curve, the evolute of the evolute of the evolute of a curve, etc. [271].

Mathematical Remark: Evolutes The evolute of a curve is the set of all centers of curvature associated with the curve. For a planar curve given in the parametric form HxHtL, yHtLL, the parametric representation of its evolute is: ij xHtL - y£ HtL Hx£ HtL2 + y£ HtL2 L yHtL + x£ HtL Hx£ HtL2 + y£ HtL2 L yz j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z. x£ HtL y££ HtL - y£ HtL x££ HtL { k x£ HtL y££ HtL - y£ HtL x££ HtL For more on evolutes and related topics, see any textbook on differential geometry, for example, [1119], [1404], [492], [764], and [609]. For curves that are their own evolutes, see [1864].

1

Here we implement the definition directly. We apply Together to get one fraction. The optional function simp simplifies the resulting expressions in a user-specified way. In[162]:=

Evolute[{x_, y_}, t_, simp_:Identity] := simp[{x - #2(#1^2 + #2^2)/(#1 #4 - #2 #3), y + #1(#1^2 + #2^2)/(#1 #4 - #2 #3)}&[ (* compute all derivatives only once *)

Symbolic Computations

152 D[x, t], D[y, t], D[x, {t, 2}], D[y, {t, 2}]] // (* avoid blow-up in size of iterated form *) Together]

For a circle, the set of all centers of curvature is precisely the center of the circle. In[163]:= Out[163]=

Evolute[{Cos[ϑ], Sin[ϑ]}, ϑ] 80, 0

a})&] a2 Cos@ϑD3 − b2 Cos@ϑD3 −a2 Sin@ϑD3 + b2 Sin@ϑD3 9 , = a b

We now iterate the formation of evolutes starting with an ellipse, and we graph the resulting evolutes. We use ten ellipses with different half-axes ratios. The right graphic shows a magnified view of the center region of the left graphic. In[166]:=

Show[GraphicsArray[{Show[#], Show[#, PlotRange -> {{-3, 3}, {-3, 3}}]}&[ Table[ParametricPlot[Evaluate[(* nest forming the evolute *) NestList[Evolute[#, ϑ, (# /. {a_. Sin[ϑ]^2 + a_.Cos[ϑ]^2 -> a})&]&, {α Cos[ϑ], 2 Sin[ϑ]}, 4]], {ϑ, 0, 2Pi}, PlotStyle -> {Hue[0.78 (α - 3/2)]}, Axes -> False, PlotRange -> All, AspectRatio -> 1, DisplayFunction -> Identity, PlotPoints -> 140], (* values of α *) {α, 3/2, 5/2, 1/11}]]]]

A rich field for the generation of nice curves is given by starting the above process, for instance, with Lissajous figures. We could now go on to the analogous situation for surfaces. Here, for an ellipsoid, we construct a picture with the two surfaces formed by going the amount of the principal radii of the curvature in the direction of the normal to the surface [1157]. In[167]:=

With[{(* ellipsoid half axes *)a = 1, b = 3/4, c = 5/4, (* avoid 0/0 in calculations *) ∂ = 10^-12}, Module[{ϕ, ϑ, x, y, z, e, f, g, l, m, n, λ, ν, µ, k, h, cross, normal1, normal, ellipsoid, makeAll}, (* parametrization of the ellipsoid *) {x, y, z} = {a Cos[ϕ] Sin[ϑ], b Sin[ϕ] Sin[ϑ], c Cos[ϑ]}; (* E, F, G from differential geometry of surfaces *)

1.6 Classical Analysis

153

{e, g} = (D[x, #]^2 + D[y, #]^2 + D[z, #]^2)& /@ {ϕ, ϑ}; f = D[x, ϕ] D[x, ϑ] + D[y, ϕ] D[y, ϑ] + D[z, ϕ] D[z, ϑ]; (* L, M, N from differential geometry of surfaces *) {l, n, m} = Det[{{D[x, ##], D[y, ##], D[z, ##]}, D[#, ϕ]& /@ {x, y, z}, D[#, ϑ]& /@ {x, y, z}}]& @@@ {{ϕ, ϕ}, {ϑ, ϑ}, {ϕ, ϑ}}; {λ, ν, µ} = {l, m, n}/Sqrt[e g - f^2]; (* Gaussian curvature and mean curvature *) k = (λ ν - µ^2)/(e g - f^2); h = (g λ - 2 f µ + e ν)/(2 (e g - f^2)); (* normal on the ellipsoid *) normal = #/Sqrt[#.#]&[Cross[D[{x, y, z}, ϕ], D[{x, y, z}, ϑ]]]; (* construct all pieces from the piece of one octant *) makeAll[polys_] := Function[v, Map[v #&, polys, {-2}]] /@ {{ 1, 1, 1}, { 1, 1, -1}, { 1, -1, 1}, {-1, 1, 1}, {-1, -1, 1}, { 1, -1, -1}, {-1, 1, -1}, {-1, -1, -1}}; (* cut a hole in a polygon *) makeHole[Polygon[l_], factor_] := Module[{mp = Plus @@ l/Length[l], L, nOld, nNew}, L = (mp + factor(# - mp))& /@ l; {nOld, nNew} = Partition[Append[#, First[#]]&[#], 2, 1]& /@ {l, L}; {MapThread[Polygon[Join[#1, Reverse[#2]]]&, {nOld, nNew}]}]; (* a sketch of the ellipsoid *) ellipsoid = {Thickness[0.002], (ParametricPlot3D[Evaluate[{x, y, z}], {ϕ, 0, 2Pi}, {ϑ, 0, Pi}, DisplayFunction -> Identity][[1]]) //. Polygon[l_] :> Line[l]}; (* surfaces of the centers of the principal curvatures *) Show[GraphicsArray[ Graphics3D[{ellipsoid, {EdgeForm[], makeAll @ ParametricPlot3D[#, {ϕ, ∂, Pi/2 - ∂}, {ϑ, ∂, Pi/2 - ∂}, DisplayFunction -> Identity][[1]]}}, Boxed -> False, PlotRange -> All]& /@ (({x, y, z} + normal 1/(h + # Sqrt[h^2 - k]))& /@ {+1, -1}) ] /. p_Polygon :> makeHole[p, 0.7], GraphicsSpacing -> 0]]]

We give one more example illustrating the usefulness of symbolic differentiation.

Mathematical Remark: Phase Integral Approximation Here, we are dealing with a method for the approximate solution of the ordinary differential equation (and associated eigenvalue problem):

Symbolic Computations

154

y££ HzL + R2 HzL yHzL = 0,

R2 HzL p 1.

If we assume yHzL has the form z

yHzL = qHzL-1ê2 expJi ‡ qHz£ L dz£ N, qHzL = QHzL gHzL where QHzL is “ arbitrary”, we get the following differential equation for g: d 2 gHxL-1ê2 1 + ¶HQHxLL - gHxL2 + gHxL1ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅ = 0 dx2 z

x = xHzL = ‡ QHz£ L dz£ , d 2 QHzL-1ê2 HRHzL - Q2 HzLL ÅÅÅÅÅÅÅÅÅÅ . ¶HQL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅ + QHzL3ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 Q HzL dz2 Introducing the parameter l (which we will later set to 1) in the differential equation, d 2 gHxL-1ê2 ÅÅÅÅÅÅÅÅÅ = 0 1 + l2 ¶HQHxLL - gHxL2 + l2 gHxL1ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ d x2 ¶ and expanding gHzL in an infinite series in l (with Y2 n+1 = 0) gHzL = ⁄n=0 Y2 n HzL l2 n , we are led to the following £ ££ recurrence formula for Y as a function of ¶HxL and its derivatives ¶ HxL, ¶ HxL, … :

Y0 Y2 n

=1 1 = ÅÅÅÅÅ 2

1 ‚ Y2 a Y2 b - ÅÅÅÅÅÅ 2 a+b=n

0§a,b,§n-1

1 + ÅÅÅÅÅÅ 2

‚

Y2 a Y2 b Y2 g Y2 d

a+b+g+d=n 0§a,b,g,d§n-1

3 1 ‚ :¶ Y2 a Y2 b + ÅÅÅÅÅÅ Y2£ a Y2£ b - ÅÅÅÅÅÅ HY2 a Y2££b + Y2££a Y2 b L> 4 4 a+b=n-1

0§a,b§n-1

where Ya£ = Ya£ HxL = dY a HxL ê dx and x = xHzL. Y2 HxL, Y4 HxL, and Y6 HxL were found earlier by painful hand computations, but from Y8 HzL on, computer algebra becomes necessary [296]. For more details on such asymptotic expansions, see [671], [487], [672], [428], [1006], [1623], [142], [1306], [1883], [1104], and [674]. For the corresponding supersymmetric problem, see [16], [673], [58], and [544]. 1

Here, we want to find the first few nonvanishing Y2 a . We now give an unrefined implementation of the above recurrence. (It is unrefined considering the restriction a + b + g + d = n, and the fourfold sum can be replaced by a threefold sum.) In[168]:=

Υ[_Integer?OddQ] = 0; Υ[0] = 1; Υ[zn_Integer?EvenQ] := Υ[zn] = Module[{n = zn/2}, (* the If is the obvious implementation,

1.6 Classical Analysis

155

but it requires summing over all variables *) 1/2 Sum[If[a + b == n, 1, 0] Υ[2 a] Υ[2 b], {a, 0, n - 1}, {b, 0, n - 1}] 1/2 Sum[If[a + b + c + d == n, 1, 0] Υ[2 a] Υ[2 b] Υ[2 c] Υ[2 d], {a, 0, n - 1}, {b, 0, n - 1}, {c, 0, n - 1}, {d, 0, n - 1}] + 1/2 Sum[If[a + b == n - 1, 1, 0] * (∂[ξ] Υ[2 a] Υ[2 b] + 3/4 D[Υ[2 a], ξ] D[Υ[2 b], ξ] 1/4 (Υ[2 a] D[Υ[2 b], {ξ, 2}] + D[Υ[2 a], {ξ, 2}] Υ[2 b])), {a, 0, n - 1}, {b, 0, n - 1}] // (* keep the results as short as possible *) Expand // Factor]

We now look at the first few Yi HzL. In[171]:= Out[171]= In[172]:= Out[172]= In[173]:= Out[173]= In[174]:= Out[174]=

Υ[2] ∂@ξD 2 Υ[4] 1 H−∂@ξD2 − ∂ @ξDL 8 Υ[6] 1 H2 ∂@ξD3 + 5 ∂ @ξD2 + 6 ∂@ξD ∂ @ξD + ∂H4L @ξDL 32 Υ[8] 1 H−5 ∂@ξD4 − 50 ∂@ξD ∂ @ξD2 − 30 ∂@ξD2 ∂ @ξD − 128 19 ∂ @ξD2 − 28 ∂ @ξD ∂H3L @ξD − 10 ∂@ξD ∂H4L @ξD − ∂H6L @ξDL

The Y[i] for higher orders can also be found in a few seconds. In[175]:= Out[175]=

Timing[Υ[16]] 90.86 Second, 1 I−429 ∂@ξD8 − 60060 ∂@ξD5 ∂ @ξD2 − 316030 ∂@ξD2 ∂ @ξD4 − 12012 ∂@ξD6 ∂ @ξD − 32768 758472 ∂@ξD3 ∂ @ξD2 ∂ @ξD − 496950 ∂ @ξD4 ∂ @ξD − 114114 ∂@ξD4 ∂ @ξD2 − 1794156 ∂@ξD ∂ @ξD2 ∂ @ξD2 − 360932 ∂@ξD2 ∂ @ξD3 − 174317 ∂ @ξD4 − 168168 ∂@ξD4 ∂ @ξD ∂H3L @ξD − 877760 ∂@ξD ∂ @ξD3 ∂H3L @ξD − 1591304 ∂@ξD2 ∂ @ξD ∂ @ξD ∂H3L @ξD − 1533408 ∂ @ξD ∂ @ξD2 ∂H3L @ξD − 118404 ∂@ξD3 ∂H3L @ξD − 562474 ∂ @ξD2 ∂H3L @ξD − 684372 ∂@ξD ∂ @ξD ∂H3L @ξD − 12012 ∂@ξD5 ∂H4L @ξD − 466180 ∂@ξD2 ∂ @ξD2 ∂H4L @ξD − 188760 ∂@ξD3 ∂ @ξD ∂H4L @ξD − 893724 ∂ @ξD2 ∂ @ξD ∂H4L @ξD − 543972 ∂@ξD ∂ @ξD2 ∂H4L @ξD − 2

2

2

800176 ∂@ξD ∂ @ξD ∂H3L @ξD ∂H4L @ξD − 206138 ∂H3L @ξD ∂H4L @ξD − 71786 ∂@ξD2 ∂H4L @ξD − 2

2

163722 ∂ @ξD ∂H4L @ξD − 92664 ∂@ξD3 ∂ @ξD ∂H5L @ξD − 144780 ∂ @ξD3 ∂H5L @ξD − 529776 ∂@ξD ∂ @ξD ∂ @ξD ∂H5L @ξD − 119548 ∂@ξD2 ∂H3L @ξD ∂H5L @ξD − 2

272108 ∂ @ξD ∂H3L @ξD ∂H5L @ξD − 159268 ∂ @ξD ∂H4L @ξD ∂H5L @ξD − 23998 ∂@ξD ∂H5L @ξD − 6006 ∂@ξD4 ∂H6L @ξD − 111020 ∂@ξD ∂ @ξD2 ∂H6L @ξD − 68068 ∂@ξD2 ∂ @ξD ∂H6L @ξD − 76986 ∂ @ξD2 ∂H6L @ξD − 113456 ∂ @ξD ∂H3L @ξD ∂H6L @ξD − 41132 ∂@ξD ∂H4L @ξD ∂H6L @ξD − 2

3431 ∂H6L @ξD − 25168 ∂@ξD2 ∂ @ξD ∂H7L @ξD − 56328 ∂ @ξD ∂ @ξD ∂H7L @ξD − 25688 ∂@ξD ∂H3L @ξD ∂H7L @ξD − 6004 ∂H5L @ξD ∂H7L @ξD − 1716 ∂@ξD3 ∂H8L @ξD − 9210 ∂ @ξD2 ∂H8L @ξD − 11388 ∂@ξD ∂ @ξD ∂H8L @ξD − 4002 ∂H4L @ξD ∂H8L @ξD − 3380 ∂@ξD ∂ @ξD ∂H9L @ξD − 2000 ∂H3L @ξD ∂H9L @ξD − 286 ∂@ξD2 ∂H10L @ξD − 726 ∂ @ξD ∂H10L @ξD − 180 ∂ @ξD ∂H11L @ξD − 26 ∂@ξD ∂H12L @ξD − ∂H14L @ξDM= 2

Symbolic Computations

156

1.6.2 Integration Symbolic integration of functions is one of the most important capabilities of Mathematica. In contrast to many other operations (which can also be carried out by hand by the user, albeit more slowly and probably with more errors), and in addition to standard methods such as integration by parts, substitution, etc., Mathematica makes essential use of special algorithms for the determination of indefinite and definite integrals (see [244], the references cited in the appendix, Chapter 21 of [1917], and the very readable introductions of [1330], [990], [643], and [1205]). Mathematica can find a great many integrals, including many not listed in tables. This holds primarily for integrands that are not special functions; but even for special functions, Mathematica is often able to find a closed-form answer. Nevertheless, once in a while, the user will have to refer to a book such as [1443] for complicated integrals. For most integrals, Mathematica works with algorithms rather than looking in tables. For indefinite integrals, these algorithms are based on the celebrated work by Risch and extensions by Trager and Bronstein [244]. Definite integrals are computed by using contour integration, by the Marichev–Adamchik reduction to Meijer-G functions [1560], [1561], [284], [1650], and [574], or by integration via differentiation (Cauchy contour integral formula). We have already introduced the Integrate command for symbolic integration. In view of its extraordinary importance, we repeat it here. Integrate[integrand, var] finds (if possible) the indefinite integral Ÿ

var

integrand dvar.

Integrate[integrand, {var, lowerLimit, upperLimit}] upperLimit

finds (if possible) the definite integral ŸlowerLimit integrand dvar.

Let us start with a remark similar to the one made in section dealing with the solution of equations using Solve. All variables in integrals are assumed to take generic complex values. So the result of the simple x integral Ÿ xn d x will be 1 ê Hn + 1L xn+1 . In[1]:= Out[1]=

Integrate[x^n, x] x1+n 1+n

This is the correct answer for all complex x and for nearly all complex n (the exception, which is of Lebesgue measure zero with respect to dx, being n = -1). This assumption about all unspecified variables being generic can cause indeterminate expressions when substituting numerical values into the result of an integration containing parameters. The integrand can either be given explicitly or unspecified. Here is such an example of the latter case. In[2]:= Out[2]= In[3]:= Out[3]= In[4]:= Out[4]= In[5]:=

Integrate[f'[x], x] f@xD Integrate[f'[x] f''[x], x] 1 f @xD2 2 Integrate[f'[x] g[x] + f[x] g'[x], x] f@xD g@xD Integrate[Sin[f[x]] f'[x], x]

1.6 Classical Analysis Out[5]=

157

−Cos@f@xDD

Here is a slightly more complicated integral, the Bohlin constant of motion for a damped harmonic oscillator [720]. In[6]:= Out[6]=

Exp[Integrate[(λ1 - λ2) x'[t](x''[t] + (λ1 + λ2) x'[t] + λ1 λ2 x[t])/ ((x'[t] + λ1 x[t])(x'[t] + λ2 x[t])), t]] // Together Hλ1 x@tD + x @tDLλ1 Hλ2 x@tD + x @tDL−λ2

The following product cannot be symbolically integrated without the result containing unevaluated integrals (which would cause recursion). In[7]:= Out[7]=

Integrate[f'[x] g[x], x] ‡ g@xD f @xD ' x

Be aware that in distinction to NIntegrate, the function Integrate has no HoldAll attribute. This means that the scoping behavior in nested integrals is different. Whereas NIntegrate can treat its body as a black box that delivers values at given points (when the corresponding system option is set to avoid the evaluation of the body), the algorithms used in Integrate require unavoidably the evaluation of the integrand. When Integrate carries out an indefinite integral, it does not return any explicit constants of integration. (Implicitly the result given amounts to selecting a concrete constant of integration.) So mathematically identical integrands can result in different indefinite integrals. The following polynomial Hx + 1L4 + 1 shows such a situation. In[8]:=

Out[9]=

(* integrals of original and expanded integrand and difference *) {Integrate[#, x], Integrate[Expand[#], x], Expand[Integrate[#, x] - Integrate[Expand[#], x]]}&[(x + 1)^4 + 1] 1 x5 1 9x + H1 + xL5 , 2 x + 2 x2 + 2 x3 + x4 + , = 5 5 5

Mathematica’s ability to integrate implicitly defined functions can be seen nicely in the following example. Suppose 1 0 HxL = ÅÅÅÅÅÅ 2

x

£ £ j HxL = ‡ H£££ j-1 HxL + 4 HxL j-1 HxL + 2 HxL j-1 HxLL dx, j = 1, 2, ….

These equations are of great practical importance for the construction of the Korteweg–de Vries equation hierarchy. Because the mathematical description of Lax pairs is slightly more complicated, we do not go into details here; see, however, [37], [719], [1204], [1442], [628], [1575], [1576], [325], [1782], [255], [1788], [717]. Note that HxL is not explicitly defined. We now implement the above definition of the j HxL. In[10]:=

[0] = 1/2; [j_] := [j] = Integrate[D[[j - 1], {x, 3}] + 4 [x] D[[j - 1], x] + 2 '[x] [j - 1], x] // Together // Numerator

We look at the first few j HxL; they are “completely” integrated. In[12]:= Out[12]=

{[1], [2], [3]} 8@xD, 3 @xD2 + @xD, 10 @xD3 + 5 @xD2 + 10 @xD @xD + H4L @xD

, Derivative[i_][][x] -> Derivative[i][]}

In[14]:=

Table[KdVShortForm[k], {k, 3}]

Out[14]=

8t , t 6 + H3L , t 30 2 + 20 + 10 H3L + H5L

−1 && Re@oD > −1 && Re@pD > 0, 1+n+p+o p p GammaA p E

xn H1 − xp Lo , 8x, 0, 1 −1 && Re@oD > −1 && Re@pD > 0LDE In[50]:= Out[50]= In[51]:= Out[51]= In[52]:= Out[52]=

Integrate[(Exp[-x] - Exp[-z x])/x, {x, 0, Infinity}] −x − −x z IfARe@zD > 0, Log@zD, IntegrateA , 8x, 0, ∞ 1/2] Exp[-x] Sin[x], {x, 0, Infinity}] 3 πê4 è!!!! 2 H1 + π L

Here are two integrals of rational functions. In[63]:= Out[63]=

Integrate[1/(x^4 + 3 x^2 + 1)^8, {x, 0, Infinity}] 21377637 π è!!!! 160000000 5

1.6 Classical Analysis In[64]:=

163

largeResult = Integrate[1/(x^6 + 3 x^2 + 1)^2, {x, -Infinity, Infinity}]; Short[largeResult, 12]

Out[66]//Short=

J17 Log@Root@1 + 3 #1 + #13 &, 3DD 5ê2 "############################################################### + 53 + −Root@1 + 3 #1 + #13 &, 1D Root@1 + 3 #1 + #13 &, 2D 2

2 π I17 − 6 Root@1 + 3 #1 + #13 &, 1D + 4 Root@1 + 3 #1 + #13 &, 1D M JRoot@1 + 3 #1 + #13 &, 2D

5ê2

Root@1 + 3 #1 + #13 &, 3D

"############################################################ Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D + "############################################################

5ê2

+ "######################################################################################################################## Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D + 5ê2

HRoot@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3DL

NN í

Root@1 + 3 #1 + #13 &, 1D Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D N J900 "####################################################################################################################################################################################

The results returned by Integrate are typically not simplified. (It is always easily possible to apply a simplifying function to the result, but it would be impossible for a user to disable any built-in simplification if it would happen automatically.) Applying RootReduce to the last expression gives a much shorter answer. In[67]:= Out[67]=

In[68]:= Out[68]=

Collect[RootReduce[largeResult], _Log, RootReduce] π Root@−11449 + 17890956 #12 − 7103376000 #14 + 59049000000 #16 &, 2D + Log@Root@1 + 3 #12 + #16 &, 1DD Root@11449 + 17890956 #12 + 7103376000 #14 + 59049000000 #16 &, 1D + Log@Root@1 + 3 #12 + #16 &, 2DD Root@11449 + 17890956 #12 + 7103376000 #14 + 59049000000 #16 &, 2D + Log@Root@1 + 3 #1 + #13 &, 2DD Root@11449 + 71563824 #12 + 113654016000 #14 + 3779136000000 #16 &, 5D + Log@Root@1 + 3 #1 + #13 &, 3DD Root@11449 + 71563824 #12 + 113654016000 #14 + 3779136000000 #16 &, 6D {LeafCount[%], LeafCount[largeResult]} 8191, 2958

Re[n] > 0]

Symbolic Computations

164 3

Out[73]=

3n

1

3n

1 3n nπ 1 3n 2− 2 − 2 H1 + H−1Ln L Gamma@ + D I2 2 + 2 − 2 r Sin@ DM + D H1 + H−1Ln L Gamma@ 2 2 2 2 , 2 = 9 è!!!! è!!!! π 2 π

Because of symmetry (and visible through the factors H1 + H-1Ln L), the odd moments vanish for and and the even moments agree. In[74]:= Out[74]=

Simplify[moment[n] == moment[n, r], Element[n/2, Integers]] True

As some of the above examples show, sometimes Mathematica will produce If statements as results, where the first argument represents a set of conditions on parameters appearing in the integral such that the second argument of If is the integrated form. This form of the result allows giving sufficient conditions for the convergence of the integral depending on parameters appearing in the integrand (and potentially in the integration limits). The last argument contains the unevaluated form of the integral (which is possible because If has the HoldRest attribute) with the negated conditions. Here is an example. In[75]:= Out[75]=

Integrate[Sin[a x] Cos[b x]/x, {x, 0, Infinity}] 1 IfAa − b ∈ Reals && a + b ∈ Reals, π HSign@a − bD + Sign@a + bDL, 4 Cos@b xD Sin@a xD IntegrateA , 8x, 0, ∞ (5 + 3 Sqrt[3])^(1/4)))&[ Simplify[Integrate[1/Sqrt[y] '[y], y]]]] 128 è!!!! 7ê8 è!!!! 9 H5 + 3 3 L H41 + 21 3 L, 469.197= 161

(If we did not know the polynomial , we could calculate it the following way.) In[87]:=

Out[87]=

GroebnerBasis[Numerator[Together[ {(5 - (27 (1 - I Sqrt[3]))/(2^(2/3) ) ((1 + I Sqrt[3]) )/ (2 2^(1/3))) - ^4, (277 + τ + ) - ^3, -2003 + 554 τ + τ^2 - ^2}]], {τ, }, {, }] 83 − 6 4 − 15 8 + 12 − τ

False]; (* 3D plot made piecewise *) Show[Table[Plot3D[Re[[x + I y]], {x, k Pi + ∂, (k + 1) Pi - ∂}, {y, -4, 4}, PlotPoints -> {8, 60}], {k, 0, 3}], AxesLabel -> {"x", "y", None}]}]]]

0.5 4

0 2

-0.5

0 y

0 -2

5 x

10

-4

By adding the piecewise constant function, we can make the antiderivative a continuous function. In[145]:=

In[146]:=

[x_] := Which[x < 1 x == 1 x < 3 x == 3 x < 5

Pi Pi Pi Pi Pi

// // // // //

N, N, N, N, N,

[x], Pi/2/Sqrt[6], [x] + Pi/Sqrt[6], 3Pi/2/Sqrt[6], [x] + 2 Pi/Sqrt[6]]

Plot[[x], {x, 0, 4Pi}] 2.5 2 1.5 1 0.5

2

4

6

8

10

12

Now, the integral is given as the difference of the function value of the upper limit and the lower limit. In[147]:= Out[147]=

[4Pi] - [0] 2 $%%%%%%% π 3

(For more details concerning such pitfalls, see [925], [923], and [926].) Even for an everywhere smooth function, the indefinite integral returned by Mathematica might be discontinuous. The following plots show the real and imaginary parts of He x - 1L ê x and Ÿ Hex - 1L ê x dx along the real axis. The imaginary part (blue curves) of the indefinite integral is discontinuous at x = 0. (This integrand has the special property to give an integral that has a single line where its value is different from its left-side and right-side limits.) In[148]:=

Module[{f, F, x}, f[x_] = (Exp[x] - 1)/x; Show[GraphicsArray[

F[x_] = Integrate[f[x], x];

Symbolic Computations

176 (* show function and indefinite integral along real axis *) Plot[{Re[#1[x]], Im[#1[x]]}, {x, -1, 1}, PlotLabel -> #2, PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]}, DisplayFunction -> Identity, Frame -> True, PlotRange -> {{-1, 1}, {-3.5, 3.5}}]& @@@ {{f, "f[x]"}, {F, "Ÿf[x] d x"}}]]] Ÿ f@xD d x

f@xD 3

3

2

2

1

1

0

0

-1

-1

-2

-2

-3

-3 -0.75 -0.5 -0.25

0

0.25

0.5

0.75

-0.75 -0.5 -0.25

1

0

0.25

0.5

0.75

1

But the indefinite integral was nevertheless correct. In[149]:= Out[149]=

D[Integrate[(Exp[x] - 1)/x, x], x] - (Exp[x] - 1)/x // Simplify 0

As an application of Mathematica integration capabilities, let us briefly discuss a class of parametrically describable minimal surfaces.

Mathematical Remark: Minimal Surfaces Minimal surfaces are surfaces z = f Hx, yL that satisfy the differential equation H1 + f y2 L fxx + 2 fx f y fxy + H1 + fx2 L f yy = 0 or for surfaces given in parametric form 8xHu, vL, yHu, vL, zHu, vL pictVar]

Here are two examples: the Enneper surface with f HxL = 1 and gHxL = x and a Henneberg surface with f HxL = -i ê 2 H1 - x-4 L and gHxL = x. In[151]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, opts = Sequence[Boxed -> False, Axes -> False, PlotRange -> All]}, {(* Enneper surface *) ParametricPlot3D[Evaluate[ WeierstrassMinimalSurface[1, ξ, ξ, r Exp[I ϕ]]], {r, 0, 3}, {ϕ, 0, 2Pi}, PlotPoints -> {116, 80}, Evaluate[opts]], (* Henneberg surface *) ParametricPlot3D[Evaluate[ WeierstrassMinimalSurface[-I/2 (1 - ξ^-4), ξ, ξ, r Exp[I ϕ]]], {r, 0.72, 1}, {ϕ, 0, 2Pi}, PlotPoints -> {16, 40}, Evaluate[opts]]}]]]

Symbolic Computations

178

Here is a spiraling minimal surface related to the behavior of a soap film near a boundary wire [236]. In[152]:=

Block[{γ = 0.02 Pi, wms}, wms = WeierstrassMinimalSurface[ I Exp[-w + I Pi w/(2 Cot[γ/2])], Exp[w], w, r Exp[I ϕ]]; ParametricPlot3D[ Evaluate[Append[wms, SurfaceColor[Hue[ϕ/(2 Pi)]]]], {r, 0, 6}, {ϕ, 0, 2Pi}, PlotPoints -> {40, 160}, Boxed -> False, Axes -> False, PlotRange -> All, BoxRatios -> {1, 1, 2}]]

We could plot many other such (generally unnamed) surfaces, for example, f HxL = x1ê4 + x1ê3 and gHxL = x. In[153]:= Out[153]=

wms = WeierstrassMinimalSurface[ξ^(1/4) + ξ^(1/3), ξ, ξ, r Exp[I ϕ]] 1 5ê4 9− 260 ReAH ϕ rL I− 208 + 80 2 ϕ r2 − 195 H ϕ rL1ê12 + 78 H ϕ rL25ê12 ME, 1 5ê4 − ImAH ϕ rL I208 + 80 2 ϕ r2 + 195 H ϕ rL1ê12 + 78 H ϕ rL25ê12 ME, 260 4 9ê4 3 2 ReA H ϕ rL + H ϕ rL7ê3 E= 9 7

An initial attempt to plot this function does not produce a satisfactory result. (We use lines rather than polygons in the following graphic because the polygons touch each other often, and rendering the corresponding graphic takes a long time.) In[154]:=

Show[Graphics3D[ ParametricPlot3D[Evaluate[%], {r, 0.7, 0.75}, {ϕ, 0.001, 12Pi - 0.001}, PlotPoints -> {2, 600}, PlotRange -> All, Axes -> False, DisplayFunction -> Identity][[1]] //. Polygon[l_] :> Line[Append[l, First[l]]]]]

1.6 Classical Analysis

179

To get the “correct” function values for multivalued functions, we have to modify the results of the indefinite integration; in this case, we take the appropriate nth root. (If we would calculate the integrals through numerically solving a differential equation, we do not encounter such branch cut problems.) For ease of understanding, we view only a small strip. In[155]:=

ParametricPlot3D[Evaluate[(* analytically continue and add color *) Append[wms /. {(r Exp[I ϕ])^n_ -> r^n Exp[I n ϕ]}, SurfaceColor[Hue[ϕ/(12 Pi)], Hue[ϕ/(12 Pi)], 2]]], {r, 0.7, 0.75}, {ϕ, 0.01, 12Pi - 0.01}, PlotPoints -> {2, 600}, PlotRange -> All, Axes -> False]

Many further examples of minimal surfaces exist and are easy to (re)produce in Mathematica. By changing the integrands in the three integrals in the Weierstrass representation from integrand to expHi JL integrand, we can (in dependence on J) look at how a minimal surface evolves to its adjoint surface. For additional examples of minimal surfaces, see [1586], [1164], [1344], [865], [481], [1228], [649], [1178], [1707], [863], [1848], [484], [864], [986], [642], [780], [874], [1698], [1423], [237], [1412], [1708], [985], [1513], [279], [1312], and [1709]. Remark: It is not necessary to use integrals when constructing minimal surfaces. If the above gHxL Ø x and f HxL Ø f £££ HxL, we can write In[156]:=

WeierstrassMinimalSurface[f'''[ξ], ξ, ξ, x] // TraditionalForm

Out[156]//TraditionalForm=

8ReH- f ££ HxL x2 + 2 f £ HxL x - 2 f HxL + f ££ HxLL, -2 ImH f HxLL + 2 ImHx f £ HxLL - ImH f ££ HxLL - ImHx2 f ££ HxLL, 2 ReHx f ££ HxL - f £ HxLL

([m, n] = R[m, n]) /; NonNegative[m] && NonNegative[n] && m [n, m] /; NonNegative[m] && NonNegative[n], HoldPattern[[m_, n_]] :> [-n, -m] /; Negative[m] && Negative[n], HoldPattern[[m_, n_]] :> [n, -m] /; Negative[m], HoldPattern[[m_, n_]] :> [-n, m] /; Negative[n]};

Here is the resistance in the neighborhood of the origin. In[161]:=

With[{n = 5}, ListPlot3D[Table[[i, j], {i, -n, n}, {j, -n, n}], MeshRange -> {{-n, n}, {-n, n}}, PlotRange -> All]];

1 0.75 0.5 0.25 0

4 2 -4

0 -2

0

-2 2

4

-4

The function R makes heavy use of definite integration. For larger values of n and m, it becomes somewhat slow. In[162]:= Out[162]=

{R[10, 10] // Timing, R[8, 12] // Timing} 486215980256 + 10640 π − 62075752 14549535 == 990.43 Second, =, 945.89 Second, 2π 14549535 π

Indefinite integration is often much faster than definite integration. As a result, it is sometimes advantageous to first calculate the indefinite integral and then substitute the integration limits. (Sometimes this might require the “manual” calculation of limits). For this procedure to be correct one must of course know that, inside the integration interval, the indefinite integral is a continuous function without any singularities. For the integrands under consideration this is actually the case, and we use the function Limit (to be discussed in the next subsection) to obtain the values at the integration end points. We also have to take care about contributions from branch cuts of the integral to make sure we use a continuous antiderivative.

1.6 Classical Analysis In[163]:=

RFast[m_, n_] := Module[{upperLimitContribution, lowerLimitContribution, branchCutCorrection}, (* the indefinite integral *) indefInt = Integrate[(1 - ((t - I)/(t + I))^(m + n)* ((t - 1)/(t + 1))^Abs[m - n])/t, t]; (* contributions from the integration limits *) upperLimitContribution = Limit[indefInt, t -> Infinity]; lowerLimitContribution = Limit[indefInt, t -> 0]; (* contribution from making a continuous antiderivative *) branchCutCorrection = If[MemberQ[int, ArcTan[(1 + t)/(-1 + t)], Infinity], 2Pi, 0]; (* simplify result *) Together @ ComplexExpand @ Re (upperLimitContribution + branchCutCorrection lowerLimitContribution)/(2Pi)]

In[164]:=

{RFast[10, 10] // Timing, RFast[8, 12] // Timing}

Out[164]=

181

486215980256 + 10640 πL Re H− 62075752 Re 14549535 == 990.1 Second, =, 94.93 Second, 2π 14549535 π

For the n-dimensional case of such resistor networks, see [419], [420], [1368], [930], [94]; for the continuous analog, see [1002], [1013]; for finite lattices, see [1857]. Mathematica can differentiate expressions arising from computations in which it is not able to explicitly integrate (meaning these expressions contain unevaluated integrals). In[165]:= Out[165]=

D[Integrate[f[x], y], y] f@xD

This also works for integrals in which the variable of differentiation enters in a complicated way in the limits of integration (differentiation of parametric integrals). In[166]:= Out[167]=

Clear[f, x, y]; D[Integrate[f[x], {x, 0, y}], y] f@yD

In[168]:=

D[Integrate[f[x], {x, -x, x}], x]

Out[168]= In[169]:= Out[169]= In[170]:= Out[170]=

f@−xD + f@xD Derivative[1, 0][Integrate[f[x], {x, #1, #2}]&][a, b] −f@aD Derivative[0, 1][Integrate[f[x], {x, #1, #2}]&][a, b] f@bD

We now look at a somewhat more complicated expression: the d’Alembert solution of the one-dimensional wave equation.

Mathematical Remark: d’Alembert Solution of the One-Dimensional Wave Equation Suppose we are given the following differential equation (wave equation) ∑2 uHx, tL ∑2 uHx, tL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅÅÅÅÅÅÅÅ - a2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅ = f Hx, tL ∑t ∑ x2 in 1 ä1+ . Here, uHx, tL is the amplitude of the wave as a function of position x and time t, and a is the inverse phase velocity. The d’Alembert solution for prescribed f Hx, tL is:

Symbolic Computations

182

t x+aHt-tL x+at 1 1 1 uHx, tL = ÅÅÅÅÅÅÅÅÅÅ ‡ ‡ f Hx, tL dx dt + ÅÅÅÅÅÅÅÅÅÅ ‡ u1 HxL dx + ÅÅÅÅÅ Hu0 Hx + atL + u0 Hx - atLL. 2 a 0 x-aHt-tL 2 a x-at 2

Here, u0 HxL is the initial position, and u1 HxL is the initial velocity function; that is, uHx, t = 0L = u0 HxL and ∑uHx, tL ê ∑t§t=0 = u1 HxL. For references, see any textbook on partial differential equations, for example, [1627] and [1047]. For some direct extensions, see [413] , [848], and [1853].

1

We now check this solution. The initial conditions are fulfilled. In[171]:=

u[x_, t_] = 1/(2 a) Integrate[Integrate[f[ξ, τ], {ξ, x - a (t - τ), x + a (t - τ)}], {τ, 0, t}] + 1/(2 a) Integrate[u1[ξ], {ξ, x - a t, x + a t}] + 1/2 (u0[x + a t] + u0[x - a t]);

In[172]:=

{u[x, 0], D[u[x, t], t] /. t -> 0}

Out[172]=

8u0@xD, u1@xD

{Automatic, (# //. HoldPattern[Integrate[c_?(FreeQ[#, τ]&) r_, i_]] :> c Integrate[r, i])&}]&) f@x, tD

Here is a solution for the Schrödinger equation for a particle of mass HtL in a time-dependent linear potential HtL [627]. In[175]:=

Out[175]=

ψ[{x_, t_}, {_, _, _}] = AiryAi[ (x + Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}], {τ, 0, t}] ^3/4 Integrate[1/[τ], {τ, 0, t}]^2)]* Exp[I (^3/2 Integrate[1/[τ], {τ, 0, t}]* (x + Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}], {τ, 0, t}] ^3/6 Integrate[1/[τ], {τ, 0, t}]^2) 1/2 Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}]^2, {τ, 0, t}] x Integrate[[σ], {σ, 0, t}])] 2 τ t Ÿ τ @σD σ i1 3 t 1 y 2 i 1 3 t 1 y 1 t IŸ0 @σD σM j z 0 jx− z− j z j τM j τM +‡ τz τ−x Ÿ t @σD σz j z 2 IŸ0 @τD 6 IŸ0 @τD @τD 2 ‡ @τD 0 k { 0 0 { k

τ

2 t t i y Ÿ0 @σD σ 1 3i 1 y j z j AiryAiA j τz z + ‡ j‡ τz j z jx − zE 4 @τD @τD { k 0 0 k {

1.6 Classical Analysis

183

The solution contains again unevaluated integrals and the Airy function AiHzL. We can verify that it is indeed a solution for any HtL and HtL. In[176]:= Out[176]=

With[{ψ = ψ[{x, t}, {, , }]}, I D[ψ, t] == -1/(2 [t]) D[ψ, x, x] + [t] x ψ] True

// Simplify

As a related example, let us develop a series solution of the differential equation z£ HtL = f HzHtL, tL for small t. We t rewrite the differential equation as an integral equation zHtL = zH0L + Ÿ0 f HzHtL, tL dt and calculate the series expansion of the right-hand side. In[177]:=

Out[177]=

Together /@ Normal[Series[z[0] + Integrate[f[z[τ], τ], {τ, 0, t}], {t, 0, 4}, Analytic -> True]] //. (* replace derivatives of z using the differential equations *) {Derivative[n_][z][0] :> (D[f[z[t], t], {t, n - 1}] /. t -> 0)} 1 t f@z@0D, 0D + z@0D + t2 HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + 2 1 t3 HfH0,2L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + 6 2 f@z@0D, 0D fH1,1L @z@0D, 0D + f@z@0D, 0D2 fH2,0L @z@0D, 0DL + 1 4 t HfH0,3L @z@0D, 0D + 3 HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL fH1,1L @z@0D, 0D + 24 3 f@z@0D, 0D fH1,2L @z@0D, 0D + 3 f@z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL fH2,0L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,2L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + f@z@0D, 0D fH1,1L @z@0D, 0D + f@z@0D, 0D HfH1,1L @z@0D, 0D + f@z@0D, 0D fH2,0L @z@0D, 0DLL + 3 f@z@0D, 0D2 fH2,1L @z@0D, 0D + f@z@0D, 0D3 fH3,0L @z@0D, 0DL

Using f Hz, tL = z, we get the series expansion of expHtL. In[178]:= Out[178]=

% /. f -> (#1&) 1 1 1 z@0D + t z@0D + t2 z@0D + t3 z@0D + t4 z@0D 2 6 24

And using f Hz, tL = 2 t z, we get the series expansion of expHt2 L. In[179]:= Out[179]=

%% /. f -> (2 #1 #2&) 1 z@0D + t2 z@0D + t4 z@0D 2

At this point, we mention that Mathematica can integrate a large class of functions whose antiderivatives can be expressed as elliptic integrals. Typically, such integrands contain roots of polynomials of third or fourth degree. Here are three examples. In[180]:=

Out[180]=

In[181]:=

Integrate[Sqrt[(b^2 - x^2)/(x^2 + a^2)], x] "############## b2 −x2 "################ x2 1 a2 1 + EllipticEAArcSinA"########### − xE, − E a2 +x2 a2 a2 b2 1 "################ x2 "########### − 1 − a2 b2 Integrate[Sqrt[(b^2 - x^2)/(x^2 + a^2)^3], x]

Symbolic Computations

184 b2 − x2 Ha2 + x2 L $%%%%%%%%%%%%%%%%%%%%%%%%%%% Ha2 + x2 L3

Out[181]=

i i i j j x2 j 1 b2 1 x x2 $%%%%%%%%%%%%%%%%% j j j $%%%%%%%%%%%%%%%%% j 1 − 2 j j − 2 xE, − 2 E − 2 − j j jEllipticEA ArcSinhA$%%%%%%%%%%%% 2 j j 1 + j a b a b ja 1 "########### 2 2 − Hb − x L k k b2 k z zy zy 1 b2 y z zz zz EllipticFA ArcSinhA$%%%%%%%%%%%% − 2 xE, − 2 Ez z zz zz b a z z {{{ In[182]:= Out[182]=

Integrate[1/Sqrt[1 - x^3], x] è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!! −H−1L5ê6 − x 2 H−1L5ê6 H−1 + xL 1 + x + x2 EllipticFAArcSinA E, H−1L1ê3 E 31ê4 è!!!!!!!!!!!!!! 1ê4 3 1−x 3

Note that sometimes Mathematica produces an incorrect result for a definite integral. Such cases usually involve integrands with symbolic parameters and branches. One possibility for checking the correctness of integrals is to 1ê10+i compare the result of Integrate with that of NIntegrate. Here is an example: Ÿ1ê10-i lnHz2 - 1L dz. The integrand has a branch cut between -1 and 1. Here, the results of Integrate and NIntegrate do agree. In[183]:= Out[183]=

Integrate[Log[z^2 - 1], {z, 1/10 - I, 1/10 + I}] 1 20 400 40001 − JArcTanA E − 5 J−4 + π + ArcTanA E + LogA ENN 5 199 39999 10000

Here is an example where the two results do not agree. For generic endpoints of a definite integral, Mathematica must carry out the definite integral by first calculating the indefinite integral. Then it must find out if the straight line connecting the integration end points crosses any branch cuts of the antiderivative. In general, this means solving a transcendental equation and finding all relevant solutions. This is a very complicated step, and missing a crossed branch cut causes a different result from the one returned by NIntegrate. In[184]:=

Out[184]=

{N[Integrate[#, {z, -1 - I, -1 + I}]], NIntegrate[#, {z, -1 - I, -1 + I}]}&[ (1 + z^z (1 + Log[z]))/(z + z^z)] // Chop 84.80293 , −1.48025

specificValue, options] finds the limit of function if var Ø specificValue taking into account the option settings options.

Here are four simple examples to start. In[1]:= Out[1]= In[2]:= Out[2]= In[3]:= Out[3]=

Limit[Sin[x]/x, x -> 0] 1 Limit[Exp[-x] x^2, x -> Infinity] 0 Limit[((x + h)^(1/3) - x^(1/3))/h, h -> 0] 1 2ê3 3x

1.6 Classical Analysis In[4]:=

185

Limit[(Tan[x]/x)^(1/x^2), x -> 0] 1ê3

Out[4]=

Here are three slightly more complicated limits, two of the form ¶0 [921] and one of the form 1¶ . In[5]:= Out[5]=

Limit[(1/x)^Tan[x], x -> 0] 1

In[6]:=

Limit[(2 - 2 x)^Tan[Pi x], x -> 1/2] 2êπ

Out[6]= In[7]:= Out[7]=

Limit[(((n - 1)^2 n^n)/(n^n - n))^((n - n^(2 - n))/(n - 1)^2), n -> Infinity] 1

A more complicated limit than contains a binomial coefficient. In[8]:= Out[8]=

Limit[Binomial[n, k] (a/n)^k (1 - a/n)^(n - k), n -> Infinity] ak −a Gamma@1 + kD

The next limit is ¶. In[9]:=

Limit[x^x - x^(Log[x]), x -> Infinity] ∞

Out[9]=

The next limit shows how the logarithm lnHxL arises as the limit of a power function xa . (For continuity, it follows from this that xa and lnHxL should have the same branch cut structure.) In[10]:= Out[10]=

Limit[Integrate[ξ^a, {ξ, 0, x}, Assumptions -> x > 0 && Re[a] > -1] 1/(1 + a), a -> -1] Log@xD

For functions whose limit values depend on the direction from which we approach specificValue, we can use the option Direction. Direction is an option for Limit, and it determines a direction for computing the limit. Default: 1 (from the left) Admissible: -1 (from the right) or complexNumber (in direction complexNumber)

Here is an example of finding the limit of expH1 ê xL as x Ø 0. Using the Direction option in Limit, we can determine both limits. In[11]:= Out[11]=

Limit[Exp[1/x], x -> 0, Direction -> #]& /@ {1, -1} 80, ∞

0, Direction -> #]& /@ {-I, I}

Symbolic Computations

186 80, ∞

0, Direction -> #]& /@ {+1, -1} // ExpToTrig 8Cos@π λD − Sin@π λD, Cos@π λD + Sin@π λD

#]& /@ {1, -1} α

α

9LimitA x , x → 0, Direction → 1E, LimitA x , x → 0, Direction → −1E=

Out[15]=

Under the assumption that the real part of a is positive, the last limit can be found by Limit. In[16]:= Out[16]=

Assuming[Re[α] > 0, Limit[Exp[α/x], x -> 0, Direction -> #]& /@ {1, -1}] 80, ∞

0] Interval@8−1, 1 1], Limit[f[x], x -> 1, Analytic -> True]} 8Limit@f@xD, x → 1D, f@1D

0] −f@zD + f@z + ∂D LimitA , ∂ → 0E ∂

Assuming that f HzL is an analytic function yields, as the result, the derivative f £ HzL.

1.6 Classical Analysis In[20]:= Out[20]=

187

Limit[(f[z + ∂] - f[z])/∂, ∂ -> 0, Analytic -> True] f @zD

Here is a slightly more complicated limit. In[21]:=

Out[21]=

Limit[(f[z + ∂ + ∂^2] + f[z - ∂ - ∂^3/4] - 2 f[z + ∂^2/3])/∂^2, ∂ -> 0, Analytic -> True] f @zD + f @zD 3

Also in the following limit (that gives the Schwarz derivative wH3L ê w£ - 3 ê 2 w££2 ê w£2 ) the option setting Analytic -> True is needed. In[22]:=

Out[22]=

Limit[6 D[Log[(w[z] - w[ζ])/(z - ζ)], z, ζ], z -> ζ, Analytic -> True] // Expand 3 w @ζD2 wH3L @ζD − + w @ζD 2 w @ζD2

The following input reduces also to the Schwarzian derivative [1370], [1371], [1342]. Because w[ξ] appears multiplicative in this expression, this time the option setting Analytic -> True is not needed. In[23]:= Out[23]=

Limit[Derivative[3][Function[z, (z - w[z]/w'[z])/2]][ζ], w[ζ] -> 0] // Expand 3 w @ζD2 wH3L @ζD − + w @ζD 2 w @ζD2

The next input represents a discrete approximation to the nth derivative (n a nonnegative integer) of a function f at x [1842]. In[24]:=

derivativeApproximation[f_, n_, ξ_, ∂_] := Sum[(-1)^k Binomial[n, k] f[ξ + (n - 2k)/2 ∂], {k, 0, n}]/∂^n

In the limit ¶ Ø ¶, we get the explicit derivative for explicit nonnegative integer n. In[25]:= Out[25]=

Table[Limit[derivativeApproximation[f, n, x0, ∂], ∂ -> 0, Analytic -> True], {n, 0, 6}] 8f@x0D, f @x0D, f @x0D, fH3L @x0D, fH4L @x0D, fH5L @x0D, fH6L @x0D

Infinity]

Subtracting the value of the limit allows finding the next terms as a correction term for large, but finite n. In[29]:= Out[30]= In[31]:=

Out[32]= In[33]:=

Out[34]=

(* coefficient of 1/n term vanishes *) Limit[(expr - E) n, n -> Infinity] 0 (* coefficient of 1/n^2 term is finite *) Limit[(expr - E) n^2, n -> Infinity] −1 + 24 (* coefficient of 1/n^3 term is finite *) Limit[(expr - E - (* last term *) (E/24 - 1)/n^2) n^3, n -> Infinity] 2 2 − − 6

Symbolic Computations

188

Limit assumes that its variable approaches the limit point in a continuous manner. This means limits such as the following will stay unevaluated. In[35]:=

Limit[Nest[Sqrt[5 + #]&, 5, n], n -> Infinity] Nest::intnm : Non−negative machine−size è!!!!!!!!!!!!!!!!! integer expected at position 3 in NestA 5 + #1 &, 5, nE. More…

Out[35]= In[36]:=

è!!!!!!!!!!!!!!! LimitANestA 5 + #1 &, 5, nE, n → ∞E

Limit[Nest[1 + 1/#&, 1, n], n -> Infinity] Nest::intnm : Non−negative machine−size 1 integer expected at position 3 in NestA1 + &, 1, nE. More… #1

Out[36]= In[37]:= Out[37]=

1 LimitANestA1 + &, 1, nE, n → ∞E #1 Limit[Prime[n]/Exp[n], n -> Infinity] Limit@−n Prime@nD, n → ∞D

To compute limits when several variables are simultaneously tending toward given values, we have to apply Limit repeatedly. However, constructions of the form Limit[ f Ha, bL, a -> a0 , b -> b0 ] are not allowed. Here is a function, with two different limit values, that depends on the order in which Limit is applied. In[38]:=

Out[39]=

(* use different variable ordering *) {Limit[Limit[(x^2 - y^2)/(x^2 + y^2), x -> 0], y -> 0], Limit[Limit[(x^2 - y^2)/(x^2 + y^2), y -> 0], x -> 0]} 8−1, 1

0, y -> 0] x2 − y2 Limit::optx : Unknown option y in LimitA , x → 0, y → 0E. More… x2 + y2

Out[40]=

x2 − y2 LimitA , x → 0, y → 0E x2 + y2

To conclude this section, we now present a tiny application of Limit concerning the computation of a 2D rotation matrix from infinitesimals [922]: An infinitesimal rotation by an angle je around the z-axis can be described (which is easily seen from the geometry) by x£ = x + je y, y£ = -je x£ + y. Here, x and y are the coordinates of a point before the rotation, and x£ and y£ are the coordinates after the rotation. In matrix form, this is £ ij x yz ij 1 je yz ij x yz z j z. j £ z=j k y { k -je 1 { k y {

Here, je is the infinitesimal angle of rotation. A finite rotation by an angle j can be obtained by n-fold repetition of this small rotation, where n je = j. Here is the limit as n Ø ¶. In[41]:= Out[41]=

MatrixPower[{{1, ϕ/n}, {-ϕ/n, 1}}, n] 1 n−ϕ n 1 n+ϕ n 1 n−ϕ n 1 n+ϕ n 99 J N + J N , J N − J N =, 2 n 2 n 2 n 2 n 1 n−ϕ n 1 n+ϕ n 1 n−ϕ n 1 n+ϕ n 9− J N + J N , J N + J N == 2 n 2 n 2 n 2 n

This is what we get after some reorganization.

1.6 Classical Analysis In[42]:=

189

ComplexExpand[Map[Limit[#, n -> Infinity]&, %, {2}]] // Simplify 88Cos@ϕD, Sin@ϕD Identity] // Internal`DeactivateMessages)& @@@ (* power and difference of powers to Gaussian and decorated Gaussian *) {{Cos[x]^k, All}, {Cos[x]^k - Exp[-k/2 x^2], All}, { δcosExpK[k, x], {0, -50}}}]]]] 1

0

0

0.8

-0.05

-10

-0.1

0.6 0.4

-30

-0.2

0.2 0

-20

-0.15

-40

-0.25 -1.5

-1

-0.5

0

0.5

1

1.5

-1.5 -1 -0.5

0

0.5

1

1.5

-1.5 -1 -0.5

0

0.5

1

1.5

† Laurent Series Now, we have terms with negative powers of x. (Within Mathematica, it is a series with positive powers of 1 ê x.) In[21]:= Out[21]= In[22]:= Out[22]=

Series[1/(x^2 + a^2), {x, Infinity, 3}] 1 2 1 4 J N + OA E x x Series[Sin[x]^-1, {x, 0, 4}] 1 x 7 x3 + + + O@xD5 x 6 360

Note the O[x] terms in the following two examples. In[23]:= Out[23]= In[24]:= Out[24]=

Series[x^-6, {x, 0, 4}] 1 6 + O@xD5 x Series[(1/Sin[x])^4, {x, 0, 4}] 1 2 11 62 x2 41 x4 4 + 2 + + + + O@xD5 x 3x 45 945 2835

The next series has no nonvanishing terms up to order x4 . And the result returned by Series indicates that the first nonvanishing coefficient might appear earliest at order x10 . In[25]:= Out[25]=

Series[(x^2 + 3)/(x^12 - 17), {x, Infinity, 4}] 1 10 OA E x

To get a nontrivial term for the last series, we must calculate more terms. In[26]:=

Series[(x^2 + 3)/(x^12 - 17), {x, Infinity, 12}]

1.6 Classical Analysis

Out[26]=

193

1 10 1 12 1 13 J N + 3 J N + OA E x x x

In case we have a series with many negative power terms and are only interested in the leading terms, we can use a negative value for order. In[27]:= Out[27]=

Series[(1/Sin[x])^1000, {x, 0, -995}] 1 500 125050 1 + 998 + + x1000 3x 9 x996 O@xD994

The trigonometric functions cscHzL and cotHzL have Laurent expansions around z = 0. The next input shows that the function Series is effectively behaving like a listable function (because its second argument is a list, Series cannot carry the Listable attribute). In[28]:= Out[28]=

Series[{Csc[z], Cot[z]}, {z, 1 z 7 z3 1 z 9 + + + O@zD4 , − − z 6 360 z 3

0, 3}] z3 + O@zD4 = 45

Here is a series of a special function (to be discussed in Chapter 3). We use an approximate expansion point to force the numericalization of the resulting coefficients. In[29]:= Out[29]=

Series[Gamma[z], {z, 1/2., 8}] 1.77245 − 3.48023 Hz − 0.5L + 7.79009 Hz − 0.5L2 − 15.7948 Hz − 0.5L3 + 31.8788 Hz − 0.5L4 − 63.9127 Hz − 0.5L5 + 127.943 Hz − 0.5L6 − 255.961 Hz − 0.5L7 + 511.974 Hz − 0.5L8 + O@z − 0.5D9

Here are two series expansions for expressions that tend to e. In[30]:=

Out[31]=

(* expand one time at zero and one time at infinity *) {Series[(1 + 1/n)^n, {n, Infinity, 2}], Series[(1 + n)^(1/n), {n, 0, 2}]} 11 1 2 1 3 n 11 n2 9 − + J N + OA E , − + + O@nD3 = 2n 24 n n 2 24

† Puiseux Series è!!!! The expression z is an independent term in a Puiseux series. The O@xD13ê2 term arises from the order 6 of the series requested and the fact that the nonvanishing terms have fractional exponents with denominator 2. In[32]:= Out[32]=

Series[Sqrt[x], {x, 0, 6}] è!!!! x + O@xD13ê2

The next series can be expressed in powers of x1ê2 . The last argument of the SeriesData-object is 2, meaning that the increments in the powers of the expansion variable are 1 ê 2. In[33]:= Out[33]=

Series[1 x^(1/2) + 3 x^(3/2) + 5 x^(5/2), {x, 0, 6}] è!!!! x + 3 x3ê2 + 5 x5ê2 + O@xD13ê2

In[34]:=

InputForm[%]

Out[34]//InputForm=

SeriesData[x, 0, {1, 0, 3, 0, 5}, 1, 13, 2]

Similarly, the O-term in the following has the value 7 µ H1 ê 7L + 1 ê 7 = 50 ê 7. In[35]:= Out[35]=

Series[x^(1/7), {x, 0, 7}] x1ê7 + O@xD50ê7

For large denominators, the third argument of the underlying SeriesData-object can become a long list. In[36]:=

Series[x^(1/2000) + x^2, {x, 0, 2}][[3]] // Length

Symbolic Computations

194 Out[36]=

4000

The next two series expansions contain logarithms. In[37]:= Out[37]= In[38]:= Out[38]=

Series[x^x, {x, 0, 4}] 1 1 1 1 + Log@xD x + Log@xD2 x2 + Log@xD3 x3 + Log@xD4 x4 + O@xD5 2 6 24 Series[x^(x^2), {x, 0, 3}] 1 + Log@xD x2 + O@xD4

The last example contained a term of the form lnHxL x2 . Logarithmic factors appear in the third argument of the underlying SeriesData-object. In[39]:=

FullForm[%]

Out[39]//FullForm=

SeriesData@x, 0, List@1, 0, Log@xDD, 0, 4, 1D

The function arcsinHzL has three branch points: two square-root–like branch points at ≤1 and a logarithmic branch point at ¶. Looking at the series expansion of ArcSin, these two different types of branch points are clearly visible. In[40]:= Out[40]= In[41]:= Out[41]= In[42]:= Out[42]=

Series[ArcSin[z], {z, Infinity, 3}] π 1 1 1 1 2 1 4 J − Log@4D + LogA EN + J N + OA E 2 2 z 4 z z Series[ArcSin[z], {z, -1, 3}] π è!!!! è!!!!!!!!!!!! Hz + 1L3ê2 3 Hz + 1L5ê2 5 Hz + 1L7ê2 − + 2 z + 1 + + + + O@z + 1D4 è!!!! è!!!! è!!!! 2 6 2 80 2 448 2 Series[ArcSin[z], {z, +1, 3}] Arg@−1+zD π 2 π E + H−1LFloorA− 2 3 Hz − 1L5ê2 5 Hz − 1L7ê2 i è!!!! è!!!!!!!!!!!! Hz − 1L3ê2 4y j z + − + j− 2 z − 1 + è!!!! è!!!! è!!!! + O@z − 1D z 6 2 80 2 448 2 k {

The last expansion at the branch point z = 1 shows the slightly unusual prefactor H-1Ld-argHz-1LêH2 pLt . We will encounter such-type factors frequently when expanding analytic functions on branch points and branch cuts. Such factors ensure that the resulting series expansions are correct in any direction from the expansion point. The discontinuous function d-argHz - 1L ê H2 pLt reflects the fact that the original function arcsinHzL has a line of discontinuity (a branch cut) emerging from the point z = +1. The next input shows that in the last example, the factor is needed to get the sign of the imaginary part just above the branch cut corrected. In[43]:=

Out[44]=

(* function, naive series, and corrected series *) {ArcSin[z], Pi/2 - I Sqrt[2] Sqrt[z - 1], Pi/2 - (-1)^Floor[-(Arg[z - 1]/(2 Pi))] I Sqrt[2] Sqrt[z - 1]} /. z -> 1 + 10^-3 + (* above branch cut *) 10^-10 I // N 81.5708 + 0.0447176 , 1.5708 − 0.0447214 , 1.5708 + 0.0447214

z - eP[i - 1]}], {i, 1, Length[summedSeries]}]

We look at the resulting Riemann surface by showing the values of the various sqrt[i, z] inside their disks of convergence. In[59]:=

Do[points[i] = Table[{Re[#], Im[#], Im[N[sqrt[i, #]]]}&[ N[eP[i] + r Exp[I ϕ]]], {r, 0, 0.99, 0.99/10}, {ϕ, 0, N[2Pi], N[2Pi]/16}], {i, 0, 8}]

In[60]:=

Show[Graphics3D[{ {Thickness[0.002], Table[(* the disks *) {Hue[i/8 0.76], Line /@ points[i], Line /@ Transpose[points[i]]}, {i, 0, 8}]}, {Thickness[0.01], GrayLevel[0.3], Line[{{-1, 0, -5}, {-1, 0, 2}}]}, {Thickness[0.01], (* the continuation path *) Line[N[Append[#, First[#]]]& @ Table[{Re[eP[i]], Im[eP[i]], Im[N[sqrt[i, eP[i]]]]}, {i, 1, 8}]]}}], PlotRange -> All, BoxRatios -> {1, 1, 1.5}, ViewPoint -> {-2, -1, 1.1}, Axes -> True, AxesLabel -> (StyleForm[#, TraditionalForm]& /@ {"x", "y", "Sqrt[1 + x + I y]"})]

1.6 Classical Analysis

231

2 0 Sqrt@1 + x + I yD -2 2 -4 4 2

1

0 y -1 -2

1 0 -1x -2 -3 3

Because of the two-valuedness of H1 + zL1ê2 , the first function sqrt[0, z] (in red) and the last function sqrt[8, z] (in blue) do not coincide, and the branch cut of Sqrt[1 + z] along the negative real axis is—because of the analytic continuation—missing. As another application of Sum, let us look at the Hölder summation method [1744], [1054], [200], [656]. Given a divergent sum (divergent in the limit n Ø ¶) S0HnL = ⁄nj=1 a j one recursively forms the (partial) sums n HnL SkHnL = n-1 ‚ Sk-1 until SkHnL converges (if this happens). j=1

Let us take an example, the series of -Hx + 1L-2 for x = 1. The nth term of the series is given by a j = H-1L j j x j-1 . In[61]:= Out[61]=

Series[-1/(1 + x)^2, {x, 0, 8}] −1 + 2 x − 3 x2 + 4 x3 − 5 x4 + 6 x5 − 7 x6 + 8 x7 − 9 x8 + O@xD9

The first partial sums are formed. In[62]:= Out[62]=

sum1 = Sum[(-1)^j j x^(j - 1), {j, n}] −1 + H−xLn + n H−xLn + n H−xLn x H1 + xL2

The first partial sum does not converge for n Ø ¶. In[63]:= Out[63]=

Table[sum1 /. x -> 1, {n, 12}] 8−1, 1, −2, 2, −3, 3, −4, 4, −5, 5, −6, 6

j], {j, n}]/n // Together −n − 2 x − n x + 2 H−xLn x + n H−xLn x + n H−xLn x2 n H1 + xL3

This partial sum still does not converge. In[65]:= Out[65]= In[66]:= Out[66]=

Table[sum2 /. x -> 1, {n, 12}] 2 3 4 5 6 9−1, 0, − , 0, − , 0, − , 0, − , 0, − , 0= 3 5 7 9 11 {sum2 /. x -> 1 /. (-1)^n -> -1, sum2 /. x -> 1 /. (-1)^n -> +1} −4 − 4 n 9 , 0= 8n

So, let us do one more iteration. In[67]:=

sum3 = Sum[Evaluate[sum2 /. n -> j], {j, n}]/n // Together

Symbolic Computations

232 Out[67]=

1 − Hn + n2 + n x + n2 x + x2 + n x2 − H−xLn x2 − n H1 + nL H1 + xL3 n H−xLn x2 + 2 x HarmonicNumber@nD + 2 n x HarmonicNumber@nD − 2 H−xLn x2 Hypergeometric2F1@1 + n, 1, 2 + n, −xD + 2 x Log@1 + xD + 2 n x Log@1 + xDL

Now, we finally have a convergent sum. In[68]:= Out[68]=

Table[Expand[sum3 /. x -> 1], {n, 12}] // N 8−1., −0.5, −0.555556, −0.416667, −0.453333, −0.377778, −0.405442, −0.354762, −0.377072, −0.339365, −0.3581, −0.328259

1 /. n -> N[10^k, 22]], {k, 20}] // N[#, 2]& 80.089, 0.015, 0.0020, 0.00026, 0.000032, 3.8 × 10−6 , 4.3 × 10−7 , 4.9 × 10−8 , 5.5 × 10−9 , 6.1 × 10−10 , 6.6 × 10−11 , 7.2 × 10−12 , 7.8 × 10−13 , 8.4 × 10−14 , 9.0 × 10−15 , 9.5 × 10−16 , 1.0 × 10−16 , 1.1 × 10−17 , 1.1 × 10−18 , 1.2 × 10−19

False, ColorFunction -> (Hue[0.78#]&)], (* 3D plot for uniform random variables *) ListPlot3D[Log @ Abs @ recursivePartialSumList[ Table[Random[Real, {-1, 1}], {i, 120}], 20], Mesh -> False]}]]] 10 8 -0.5 -1 -1.5

6 4

25

2 0

20 15

0

200

400

600

10 50

75

5 100

800 1000 1200

The next inputs use the Cesàro summation method [216] to establish the value -1 ê 4. In[72]:=

Out[72]= In[73]:=

partialSums = Simplify[#, x > 0]& @ Sum[(-1)^(j + 1) (j + 1) x^j, −1 + H2 + kL H−xL1+k − H1 + kL H−xL2+k H1 + xL2

{j, 0, k}]

(* multiply partial sums with a binomial and sum again *) cesaroSum = Sum[Evaluate[partialSums Binomial[n - k + - 1, n - k]], {k, 0, n}]/Binomial[n + , n] /. x -> 1 // Simplify

1.7 Differential and Difference Equations

233

Gamma@1+n+D 3 Gamma@n+D Hypergeometric2F1@1,−n,1−n−,−1D +

Out[74]= In[75]:=

Out[76]=

2 Gamma@−1+n+D Hypergeometric2F1@2,1−n,2−n−,−1D Gamma@1+D Gamma@D − + Gamma@1+nD Gamma@nD Gamma@D 4 Binomial@n + , nD

(* limits for different values for the parameter p *) Table[Limit[FullSimplify[cesaroSum], n -> Infinity], {, 4}] 1 1 1 1 9− , − , − , − = 4 4 4 4

The function Integrate gives finite results for (some) divergent integrals when using the option setting GenerateConditions -> False. Sum does not have the option GenerateConditions. But the function SymbolicSum`SymbolicSum does. The next input calculates a finite result for the divergent sum ¶ H-1Lk lnHkL. ⁄k=1 In[77]:=

Out[77]=

SymbolicSum`SymbolicSum[(-1)^k Log[k], {k, Infinity}, GenerateConditions -> False] // Simplify 1 π LogA E 2 2

Taking into account that ∑ k ¶ ê ∑ ¶ = k ¶ lnHkL, the last result can be understood in the following way (zeta regularization). In[78]:=

Out[78]=

Normal[Series[D[Sum[(-1)^k k^∂, {k, Infinity}], ∂], {∂, 0, 0}]] // Simplify 1 π LogA E 2 2

We end this subsection by remarking that the symbolic analog of the function NProduct, namely the function Product should be mentioned here. Because its syntax and functionality is largely identical to the one of Sum, we just give three simple examples here. In[79]:=

Out[79]=

{Product[Sin[z + k Pi/ν], {k, 0, ν - 1}], Product[1 - k^-4, {k, 2, Infinity}], Product[(1 - Prime[k]^-2)^4, {k, Infinity}]} Sinh@πD 1296 921−ν Sin@z νD, , = 4π π8

We end with an infinite sum over finite products. In[80]:= Out[80]=

Sum[n!/Product[x + k, {k, n}], {n, Infinity}] 1 −1 + x

1.7 Differential and Difference Equations 1.7.0 Remarks In this section, we discuss another of the very useful Mathematica commands for symbolic computations: DSolve, the function for the symbolic solution of ordinary differential equations (ODEs), systems of ODEs [1667], partial differential equations, and differential-algebraic equations. The function DSolve is quite powerful and will find closed-form solutions to many differential equations. Here we present examples for the most popular classes of differential equations. This listing is far from exhaustive.

Symbolic Computations

234

1.7.1 Ordinary Differential Equations The syntax for solving an ordinary differential equation is straightforward. DSolve[listOfODEsAndInitialValues, listOfFunctions, independentVariable] tries to solve the ODE(s) with potential initial conditions given by listOfODEsAndInitialValues for the functions in listOfFunctions. The independent variable is independentVariable. In the case of a single differential equation without initial conditions with only one unknown function, the first and second arguments can appear without the braces.

We first look at a simple example. The result of a successfully solved differential equation is a list of lists of rules—structurally, like the result of Solve. In[1]:= Out[1]=

y1 = DSolve[y''[x] == x^2, y[x], x] x4 99y@xD → + C@1D + x C@2D== 12

Here is a more complicated example. Similar to the results of Integrate and Sum, DSolve-results often contain special functions, Root-objects, and RootSum-objects. In[2]:= Out[2]=

DSolve[y'[x] == y[x]^2 - x, y[x], x] 1 2 2 2 99y@xD → J−BesselJA− , x3ê2 E C@1D + x3ê2 J−2 BesselJA− , x3ê2 E − 3 3 3 3 4 2 2 2 3ê2 3ê2 BesselJA− , x E C@1D + BesselJA , x E C@1DNN í 3 3 3 3 1 2 1 2 J2 x JBesselJA , x3ê2 E + BesselJA− , x3ê2 E C@1DNN== 3 3 3 3

The next picture shows a visualization of the last solution curves generated by choosing real values from the interval @-4, 4D for the integration constant C[1]. In[3]:=

Show[Graphics[{Thickness[0.002], Table[With[{c = Random[Real, {-3, 3}]}, Line /@ DeleteCases[Partition[Table[{x, -(AiryAiPrime[x] + AiryBiPrime[x] c)/(AiryAi[x] + AiryBi[x] c)}, {x, -4., 4., 1/50.}], 2, 1], (* delete steep vertical parts *) _?(#.#&[Subtract @@ #] > 5&)]], {50}]}], Frame -> True] 4

2

0

-2

-4

-4

-2

0

2

4

The specification of the functions in the second argument of DSolve is analogous to that for NDSolve; that is, if no argument is specified for the function to be found, DSolve returns a pure function (with the dummy

1.7 Differential and Difference Equations

235

variable typically being the independent variable from the input equations). Here this is demonstrated using the simple differential equation y≥ HxL = - yHxL [1199], [1863]. In[4]:= Out[4]=

y2 = DSolve[{y''[x] == -y[x], y[0] == 0}, y, x] 88y → Function@8x With[{ = c++, d = Exponent[p[C], C]}, [, d] /; True]] i j j j j j 99Q → FunctionA8k -f'[x] DiracDelta[x] /. DiracDelta[x] f_[x] :> f[0] DiracDelta[x] DiracDelta@xD

The solution of the initial value problem is obtained using the fundamental solution GHxL (Green’s function) [1704] for arbitrary initial values and adding the initial conditions yHnL H0L in the form n ∑k-1 GHxL ê ∑ xn yHn-kL H0L to the right-hand side as an inhomogeneous term. Here is a simple example—the ⁄k=1 differential equation y≥ HxL + yHxL = e-x with initial conditions yH0L = y0 and y£ H0L = y p . We use DSolve to solve the initial value problem. In[80]:=

Out[80]=

sol = DSolve[{y''[x] y[0] == Cos@xD − + y0 Cos@xD + 2

+ y[x] == Exp[-x], y0, y'[0] == yp}, y[x], x][[1, 1, 2]] // Expand 1 Sin@xD 1 −x Cos@xD2 + + yp Sin@xD + −x Sin@xD2 2 2 2

This is a fundamental solution for this problem. In[81]:=

gf[x_] = Limit[DSolve[{y''[x] + y[x] == DiracDelta[x], (* right sided initial conditions; after δ kicked *)

Symbolic Computations

278

Out[81]=

y[∂] == 0, y'[∂] == 1}, y[x], x][[1, 1, 2]] /. DiracDelta[c_] Sin[c_] :> 0 // Simplify, ∂ -> 0, Direction -> -1] Sin@xD UnitStep@xD

Now, we use the fundamental solution to build the solution of the inhomogeneous equation and to fulfill the initial conditions. In[82]:=

Out[82]=

sol1 = Integrate[Expand[gf[x - ξ] Exp[-ξ]], {ξ, 0, Infinity}, GenerateConditions -> False] + (* the initial conditions as part of the inhomogeneous part *) (gf[x - ξ] yp /. ξ -> 0) + (D[gf[x - ξ], x] y0 /. ξ -> 0) /. DiracDelta[c_] Sin[c_] :> 0 1 y0 Cos@xD UnitStep@xD + yp Sin@xD UnitStep@xD + H−x − Cos@xD + Sin@xDL UnitStep@xD 2

For x > 0 (the region under consideration), the solution so-obtained agrees with the one from DSolve. In[83]:= Out[83]=

Expand[sol - %] // Simplify[#, x > 0]& 0

Within the realm of distributions, differential equations get more solutions than just the classical ones. Let us look at the first-order differential equation x2 u£ HxL = 1. In[84]:=

ode = ξ^2 u'[ξ] - 1;

In the space of ordinary functions, we have the solution uHxL = c1 - 1 ê x. In[85]:= Out[85]=

DSolve[ode == 0, u[ξ], ξ] 1 99u@ξD → − + C@1D== ξ

In the space of generalized functions we have the solution uGF HxL = c1 + c2 qHxL + c3 dHxL - 1 ê x. Let us check this. In[86]:= Out[86]=

uGF[ξ_] = c[1] + c[2] UnitStep[ξ] + c[3] DiracDelta[ξ] - 1/ξ 1 − + c@1D + c@3D DiracDelta@ξD + c@2D UnitStep@ξD ξ

Directly substituting the solution into Mathematica does not give zero. In[87]:= Out[87]=

ξ^2 uGF'[ξ] - 1 // Expand ξ2 c@2D DiracDelta@ξD + ξ2 c@3D DiracDelta @ξD

Using Simplify, we can get zero. In[88]:= Out[88]=

Simplify[%] 0

To get the last zero, we have to add the two rules xn dHxL = 0 and xn dHnL HxL = H-1Ln n ! ê Hn - nL! dHn-nL HxL. In[89]:=

δSimplify[expr_, x_] := With[{rules = {x^n_. Derivative[ν_][DiracDelta][x] :> (-1)^n ν!/(ν - n)! Derivative[ν - n][DiracDelta][x], x^n_. DiracDelta[x] :> 0}}, FixedPoint[Expand[#] //. rules&, expr]]

Now it is straightforward to see that uGF HxL is indeed a solution of the differential equation x2 uHxL = 1. In[90]:= Out[90]=

δSimplify[%%, ξ] 0

1.8 Integral Transforms and Generalized Functions

279

No option of DSolve is currently available to generate solutions of differential equations that are distributions. Let us deal with a slightly more complicated example, the hypergeometric differential equation xH1 - xL y££ HxL + Hg - Ha + b + 1L xL y£ HxL - a b yHxL = 0. Classically, the solutions are hypergeometric functions (see Chapter 3). These become rational functions for integer parameters. Here is an example. In[91]:=

ode2F1[x_, y_, {α_, β_, γ_}] = x (1 - x) y''[x] + (γ - (α + β +1) x) y'[x] - α β y[x];

In[92]:=

With[{α = 12, β = 7, γ = 10}, DSolve[ode2F1[x, y, {α, β, γ}] == 0, y, x]] H28 + 3 x H7 + 2 xLL C@1D 99y → FunctionA8x= γ > β

Here is a distributional solution of our special case of the hypergeometric differential equation. In[95]:= Out[95]=

yGF[x, {12, 7, 10}] 1 1 DiracDeltaH6L @xD − DiracDeltaH7L @xD + DiracDeltaH8L @xD 2 12

Substituting this solution into the differential equation and applying our δSimplify shows that this is indeed a solution. In[96]:= Out[96]=

In[97]:= Out[97]=

With[{α = 12, β = 7, γ = 10, y = Function[x, Evaluate[%]]}, ode2F1[x, y, {α, β, γ}]] // Expand −84 DiracDeltaH6L @xD + 52 DiracDeltaH7L @xD − 20 x DiracDeltaH7L @xD − 12 DiracDeltaH8L @xD + 11 x DiracDeltaH8L @xD − 5 13 x2 DiracDeltaH8L @xD + DiracDeltaH9L @xD − x DiracDeltaH9L @xD + 6 6 1 1 1 x2 DiracDeltaH9L @xD + x DiracDeltaH10L @xD − x2 DiracDeltaH10L @xD 2 12 12 δSimplify[%, x] 0

For some more uses of series of Dirac d distributions, see [289], [1728], [969], [1729]; for a spectacular weak solution of the Euler PDEs, see [1615]; for distributional solutions of functional equations, see [454], [456], [1569], and [372]. As a little application of how to deal with the UnitStep and the DiracDelta function in Mathematica, let us check that yHx, tL = qH2 Hx - k tL g + pL qHp - 2 g Hx - k tLL cosd+1 Hg Hx - k tLL eÂ Hk x-w tL is a “finite length solito-

Symbolic Computations

280

nic” solution (also called compacton [1509], [1093], [1165], [1360], [1475], [1166], [406], [1817], [560], [1818], [1873], [1819]) of the following nonlinear Schrödinger equation [300]: 1 1 ∑ rHx, tL 2 ∑yHx, tL 1 ∑2 yHx, tL + ÅÅÅÅÅ Å x J ÅÅÅÅÅÅÅÅ Å ÅÅÅÅÅÅÅ Å ÅÅÅ Å ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅ N yHx, tL Å ÅÅÅ Å i ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = - ÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 8 rHx, tL ∑x ∑ x2 ∑t 2 êêêêêêêêêê where rHx, tL = yHx, tL yHx, tL, 0 < x < 1, d = x ê H1 - xL, and w = Hk 2 + g2 Hd + 1LL ê 2. (For arbitrarily narrow solitons, see [434].) Here, we implement the equations from above. In[98]:=

In[100]:=

δ = ξ/(1 - ξ); ω = 1/2 (k^2 + γ^2 (1 + δ)); Ω[ψ_] := Module[{ψc = ψ /. c_Complex :> Conjugate[c], ρ, j}, ρ = ψ ψc; ξ/8 (D[ρ, x]/ρ)^2]

Without the finite length restriction (the terms qH2 Hx - k tL g + pL qHp - 2 g Hx - k tLL in yHx, tL, it is straightforward that yHx, tL is a solution of the equation. In[101]:=

ψ[x_, t_] = Cos[γ (x - k t)]^(1 + δ) Exp[I (k x - ω t)];

In[102]:=

Factor[I D[ψ[x, t], t] + 1/2 D[ψ[x, t], {x, 2}] - Ω[ψ[x, t]] ψ[x, t]]

Out[102]=

0

Including the finite length condition makes things a bit more tricky. Here is the finite length solution. In[103]:=

ψ1[x_, t_] = ψ[x, t] UnitStep[2 γ (x - k t) + Pi] UnitStep[Pi - 2 γ (x - k t)];

Just plainly redoing the calculation above will not give the desired result. In[104]:= Out[104]=

Simplify[Factor[I D[ψ1[x, t], t] + 1/2 D[ψ1[x, t], {x, 2}] Ω[ψ1[x, t]] ψ[x, t]]] === 0 False

So let us do the calculation step by step. First, we form the first time derivative with respect to t. In[105]:= Out[105]=

D[ψ1[x, t], x] 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ 2 Ik x− 2 t Ik +γ I1+ γ Cos@H−k t + xL γD1+ DiracDelta@π + 2 H−k t + xL γD UnitStep@π − 2 H−k t + xL γD − 1 t Ik2 +γ2 I1+ ξ MMM 1−ξ

2 Ik x− 2

ξ

1−ξ γ Cos@H−k t + xL γD1+ DiracDelta@π − 2 H−k t + xL γD 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ UnitStep@π + 2 H−k t + xL γD + Ik x− 2 t Ik +γ I1+ k Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ MMM ξ 2 2 ξ 1−ξ 1−ξ Ik x− 2 t Ik +γ I1+ γ J1 + N Cos@H−k t + xL γD Sin@H−k t + xL γD 1−ξ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

We implement a generalization of x dHxL = 0 for the form f HtL dHgHtLL to simplify the expression above. In[106]:=

δrule = Times[factors__, DiracDelta[y_]] :> Module[{t0, factor1}, (* the t such that y vanishes *) t0 = t /. Solve[y == 0, t][[1]]; (* the value of factor at t0 *) factor1 = Times[factors] //. _UnitStep -> 1 /. t -> t0; (* the zero result *) 0 /; ((Together //@ factor1) /. 0^_ -> 0) === 0];

1.8 Integral Transforms and Generalized Functions

281

Applying δrule to the first time derivative gives a better result—no Dirac d functions appear anymore. In[107]:= Out[107]=

timeDeriv1 = Expand[D[ψ1[x, t], t]] /. δrule 1 ξ ξ 2 2 1 MMM 2 1−ξ 1−ξ − Ik x− 2 t Ik +γ I1+ k Cos@H−k t + xL γD1+ 2 UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ ξ 2 2 1 MMM 2 1−ξ 1−ξ γ Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD Ik x− 2 t Ik +γ I1+ 2 1 ξ MMM 2 2 1 1−ξ γ2 ξ UnitStep@π + 2 H−k t + xL γD − J Ik x− 2 t Ik +γ I1+ 2 H1 − ξL ξ

1−ξ Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN + 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ Ik x− 2 t Ik +γ I1+ k γ Cos@H−k t + xL γD Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD + 1 ξ ξ 2 2 1 MMM 1−ξ 1−ξ k γ ξ Cos@H−k t + xL γD Sin@H−k t + xL γD J Ik x− 2 t Ik +γ I1+ 1−ξ

UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN

In a similar way, we deal with the first and second space derivative. In[108]:= Out[108]=

spaceDeriv1 = Expand[D[ψ1[x, t], x]] /. δrule 1

2 +γ2 I1+ ξ MMM 1−ξ

Ik x− 2 t Ik

ξ

1−ξ k Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ UnitStep@π + 2 H−k t + xL γD − Ik x− 2 t Ik +γ I1+ γ Cos@H−k t + xL γD Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ ξ 2 2 1 MMM 1−ξ 1−ξ J Ik x− 2 t Ik +γ I1+ γ ξ Cos@H−k t + xL γD Sin@H−k t + xL γD 1−ξ

UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN In[109]:=

spaceDeriv2 = Expand[D[spaceDeriv1, x]] /. δrule;

The nonlinear term still needs to be dealt with. In[110]:=

Out[111]=

ψ1c = ψ1[x, t] /. c_Complex :> Conjugate[c]; ρ1 = ψ1[x, t] ψ1c 2ξ

1−ξ Cos@H−k t + xL γD2+ UnitStep@π − 2 H−k t + xL γD2 UnitStep@π + 2 H−k t + xL γD2

The rule ruleθ simplifies powers of Heaviside distributions. In[112]:=

ruleθ = u_UnitStep^e_ :> u

Out[112]=

u_UnitStepe_ u

In[113]:=

ρ1 = ρ1 /. ruleθ

Out[113]=

2ξ

1−ξ Cos@H−k t + xL γD2+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

After carrying out the spatial differentiation, we again apply our rule δrule. In[114]:= Out[114]=

ρDeriv1 = Expand[D[ρ1, x]] /. δrule 2ξ

1−ξ −2 γ Cos@H−k t + xL γD1+ Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD 2ξ 1 1−ξ UnitStep@π + 2 H−k t + xL γD − J2 γ ξ Cos@H−k t + xL γD1+ 1−ξ

Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN

In the process of forming the expression 1 ê rHx, tL ∑ rHx, tL ê ∑ x, we must be especially careful. Formally, the terms qHp - 2 Hx - k tL gL qH2 Hx - k tL g + pL cancel because inside Times they are treated like a commutative, associative quantity.

Symbolic Computations

282 In[115]:= Out[115]=

ξ/8 (ρDeriv1/ρ1)^2 // Expand 1 γ2 ξ2 Tan@H−k t + xL γD2 γ2 ξ3 Tan@H−k t + xL γD2 γ2 ξ Tan@H−k t + xL γD2 + + 2 1−ξ 2 H1 − ξL2

We restore the finite length conditions “by hand”. In[116]:= Out[116]=

Ω[ψ1] = % UnitStep[2 γ (x - k t) + Pi] UnitStep[Pi - 2 γ (x - k t)] γ2 ξ2 Tan@H−k t + xL γD2 γ2 ξ3 Tan@H−k t + xL γD2 z 1 i j + y j γ2 ξ Tan@H−k t + xL γD2 + z 1−ξ 2 H1 − ξL2 k2 { UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

Putting everything together, we arrive at the zero we were hoping for. This indeed shows that ψ1[x, t] describes a finite length soliton of the above nonlinear Schrödinger equation. In[117]:= Out[117]=

Factor[Expand[I timeDeriv1 + 1/2 spaceDeriv2 - Ω[ψ1] ψ1[x, t]] /. ruleθ] 0

Here is a space-time picture of the absolute value of the finite length soliton for certain parameters. It is really a localized, moving, shape-invariant solution of a nonlinear wave equation that is concentrated at every time on a compact space domain. For a fixed time (right graphic), one sees that the transition between the zero-elongation and the nonzero-elongation domain is smooth (which is needed to fulfill the second-order differential equation). In[118]:= Out[118]= In[119]:=

Ψ = With[{k = 2, γ = 1/2, ξ = 1/2}, Evaluate[ψ1[x, t]]] 2 9t 1 I− 4 +2 xM CosA H−2 t + xLE UnitStep@π + 2 t − xD UnitStep@π − 2 t + xD 2

Show[GraphicsArray[ Block[{$DisplayFunction = Identity}, {(* 3D plot of the compacton *) Plot3D[Evaluate[Abs[Ψ]], {x, -12, 12}, {t, -4, 4}, Mesh -> False, PlotPoints -> 140, PlotRange -> All], (* plot of the compacton at a fixed time *) Plot[Evaluate[Abs[Ψ] /. t -> 2], {x, -0, 8}, PlotRange -> All, AspectRatio -> 1/3, Frame -> True, Axes -> False]}]]]

1 0.75 0.5 0.25 0 -10

4 2 0 -5

0

1 0.8 0.6 0.4 0.2 0

0

2

4

6

8

-2 5

10

-4

Until now, we encountered only one possibility that Mathematica would return a Dirac d function if we did not input one; this was by differentiation of the UnitStep function. More functions generate generalized functions, also in case one does not explicitly input the UnitStep or the DiracDelta distribution. The most important one is the Fourier transform [1387]. The Fourier transform t @ f HtLD HwL of a function f HtL is defined as ¶ t @ f HtLD HwL = H2 pL-1ê2 Ÿ-¶ ei w t f HtL dt [920]. (The square brackets in the traditional form notation t @ f HtLD HwL indicate the fact that the Fourier transform of f HtL is a linear functional of f HtL and a function of w.)

1.8 Integral Transforms and Generalized Functions

283

FourierTransform[f(t), t, ω] represents the Fourier transform of the function f HtL with respect to the variable t and the kernel ei w t .

Here is the Fourier transform of an “ordinary” function. In[120]:=

Clear[t, ω, x, y, s, Ω, a, b, term] FourierTransform[Exp[-x^2] x^3, x, y] y2

Out[121]=

− 4 y H−6 + y2 L − è!!!! 8 2

The Fourier transformation is a linear operation. In[122]:= Out[122]=

FourierTransform[α Sin[x^2] + β Exp[-x^2], x, y] y2 1 è!!!! y2 y2 J 2 − 4 β + α CosA E − α SinA EN 2 4 4

Derivative operators transform under a Fourier transformation into multiplication operators. This property makes them useful for solving ordinary and partial differential equations [532], [750], [443], [183], [879]. In[123]:= Out[123]=

FourierTransform[y''[x], x, ξ] −ξ2 FourierTransform@y@xD, x, ξD

The Fourier transform of the function 1 is essentially a Dirac d distribution [257]. In[124]:= Out[124]=

FourierTransform[1, t, ω] è!!!!!!!! 2 π DiracDelta@ωD

The following Fourier transform of cosHtL and sinHtL too gives a result that contains Dirac d distribution. In[125]:= Out[125]=

FourierTransform[α Cos[t] + β Sin[t], t, ω] π π α DiracDelta@−1 + ωD + $%%%%%%% β DiracDelta@−1 + ωD + $%%%%%%% 2 2 π π $%%%%%%% α DiracDelta@1 + ωD − $%%%%%%% β DiracDelta@1 + ωD 2 2

Be aware that carrying out the “integral” (using Integrate) will not result in a Dirac d distribution. In[126]:=

Integrate[Exp[I k t] Exp[I ω t], {t, -Infinity, Infinity}, Assumptions -> Im[k] == 0]/(2 Pi) Integrate::idiv : Integral of t Hk+ωL does not converge on 8−∞, ∞ (η[#][x]&)]], x, s]]], s, x] // Expand

Here are the first three partial sums of the hk HxL shown. In[197]:=

Out[198]=

yApproxList[x_] = Rest[FoldList[Plus, 0, Take[yApproxList[x], 3] x2 55 5 x2 x4 91, + Cos@xD, − + − 6 Cos@xD + 2 8 4 8

Table[η[k][x], {k, 0, 5}]]]; 1 1 x2 Cos@xD + Cos@2 xD − 2 x Sin@xD= 2 8

We compare the approximate solutions with a high-precision numerical solution ndsol. The following graphics show, that with each hk HxL the solution becomes substantially better and the fifth approximation has an error less than 10-10 for 0 § x d 0.8. In[199]:=

(* high-precision numerical solution *) ndsol = NDSolve[{yN''[x] + a[x] yN'[x] + b[x] yN[x] == f[yN[x]], yN[0] == 1, yN'[0] == 0}, yN, {x, 0, 5/2}, WorkingPrecision -> 50, MaxSteps -> 10^5, PrecisionGoal -> 30, AccuracyGoal -> 30];

In[201]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, (* order increases from red to blue *) = Table[Hue[k/7], {k, 0, 5}]}, {(* absolute differences *) Plot[Evaluate[Join[yApproxList[x], yN[x] /. ndsol]], {x, 0, 5/2}, PlotRange -> All, PlotStyle -> Prepend[, GrayLevel[0]]], (* logarithms of the differences *) MapIndexed[(δN[#2[[1]]][x_?NumberQ] := Log[10, Abs[SetPrecision[#1, 60] - yN[SetPrecision[x, 60]] /. ndsol[[1]]]])&, yApproxList[x]]; (* show logarithms of the differences *) Plot[Evaluate[Table[δN[k][x], {k, 6}]], {x, 0, 5/2}, PlotRange -> {All, {-10, 2}}, PlotStyle -> ]}]]]

Symbolic Computations

294 7

2

6 0.5 5

1

1.5

2

2.5

-2

4

-4

3

-6

2

-8 0.5

1

1.5

2

2.5

-10

For the application of the Adomian decomposition to boundary value problems, see [455], [1816].

1.9 Additional Symbolics Functions Now, we are nearly at the end of our chapter about symbolic computations. Many features of Mathematica have been discussed, but as many have not been discussed. The next section will deal with some applications of the discussed functions. In addition to the functionality built into the Mathematica kernel, a number of important packages in the standard package directory of Mathematica are useful for symbolic calculations, and they enhance the power of the corresponding built-in functions and offer new functionality. In addition to Calculus`Limit`, Calculus`PDSolve1`, and Calculus`DSolve`, which were already mentioned above, the following packages are often very useful: Calculus`VectorAnalysis`, DiscreteMath` RSolve`, and Calculus`VariationalMethods` . The functions contained in these packages can be deduced immediately from their names. Because of space and time limitations, we look only briefly at what these packages can accomplish. The package Calculus`VariationalMethods` implements the calculation of variational derivatives of integrals and the associated Euler-Lagrange equation (for an introduction to variational calculations, see, e.g., [240], [664], or for somewhat more detail, see [439] and [1806]). In[1]:=

Needs["Calculus`VariationalMethods`"]

In[2]:=

?VariationalD VariationalD@f, u@xD, xD or VariationalD@f, u@x,y,...D, 8x,y,... 1 the series is divergent (corresponding to the singularity of 1 ê H1 - a x2 L at x = 1 ê a). Exchanging summation and integration yields a divergent sum. But due to the automatic Borel summation of SymbolicSum`SymbolicSum for such type sums, we get the a closed-form result as for the integral.

Symbolic Computations

296

¶

‡

0

2 ¶ i¶ e-x 2j zy ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ2ÅÅ dx =° ‡ e-x jjj‚ H1 + kL ak xk zzz dx =° H1 - a xL 0 k k=0 {

¶

i kj

¶

„ H1 + kL a j‡ e k 0 k=0

In[13]:= Out[13]= In[14]:=

Out[14]=

¶

-x2

GHH1 + kL ê 2L y x dxz =° „ H1 + kL ak ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 { k

k=0

Integrate[x^k Exp[-x^2], {x, 0, Infinity}, Assumptions -> k >= 0] 1 1+k GammaA E 2 2 sum = SymbolicSum`SymbolicSum[α^k (1 + k) Gamma[(1 + k)/2]/2, {k, 0, Infinity}, GenerateConditions -> False] − 12 12 è!!!! 1 1 1 1 y i "########### − 1 1 α α z j 2 J π − π − + α2 α − π "######## ErfiA"######## EN 2 α2 Gamma@0, − 1 j α2 α2 α2 2 D z j α z j z − − j z j 2 3 z α α 2 j z z j { k

Using the functions FullSimplify we can show that the sum and the integral are identical. (FullSim plify simplifies identities with special functions, we will discuss it in Chapter 3.) In[15]:= Out[15]=

FullSimplify[int - sum, α < 0] 0

Here is another example. We first sum a series [734] and then recover the nth term. In[16]:= Out[16]= In[17]:=

Sum[([x] - [y])^n/(x - y)^(n + 1) λ^n, {n, Infinity}] // Simplify λ H@xD − @yDL Hx − yL Hx − y − λ @xD + λ @yDL SeriesTerm[%, {λ, 0, n}] // Simplify[#, n > 1]& n

Out[17]=

@xD−@yD I M x−y x−y

As a small application of the function SeriesTerm, we will prove the following identity (due to Ramanujan) about the Taylor series coefficients of three rational functions [856], [570]. 3

3

ij k ij 9 x2 + 53 x + 1 yzyz ij k ij -12 x2 - 26 x + 2 yzyz j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å zz + j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz = k k x3 - 82 x2 - 82 x + 1 {{ k k x3 - 82 x2 - 82 x + 1 {{ 3

ij k ij -10 x2 + 8 x + 2 yzyz j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz + H-1Lk k x3 - 82 x2 - 82 x + 1 {{ k Using the function Series, we can easily explicitly verify the identity for the first few coefficients. In[18]:=

Out[20]=

abc = {1 + 53 x + 9 x^2, 2 - 26 x - 12 x^2, 2 + 8 x - 10 x^2}/ (1 - 82 x - 82 x^2 + x^3); (#1 + #2 - #3)& @@@ Transpose[#[[3]]^3& /@ Series[abc, {x, 0, 12}]] 81, −1, 1, −1, 1, −1, 1, −1, 1, −1, 1, −1, 1

0]& H−1Lk

We end with another application of the series terms also due to Ramanujan: Calculating integrals through series ¶ terms. For a sufficiently nice function f HxL, the kth moment mk @ f HxLD =Ÿ0 xk f HxL dx can be calculated through k the analytic continuation of the series coefficient cHkL = @x D H f HxLL to negative integer k by mk = -H-1L-k k ! H-k - 1L! c-k-1 (Ramanujan’s master theorem [157]). Here is a simple example. In[23]:=

f[x_] = x^2 Exp[-x] Sin[x]^2;

In[24]:=

c[k_] = SeriesTerm[f[x], {x, 0, k}]; intc[k_] = k! (-1)^(-k - 1) (-k - 1)! c[-k - 1] 1 1 IH−1L−4−2 k I−1 + 5 2 H−3−kL Cos@H−3 − kL ArcTan@2DDM 2π H−1 − kL ! k ! Gamma@3 + kD Sin@H−3 − kL πDM

Out[25]=

This is the result of the direct integration. In[26]:= Out[26]=

intI[k_] = Integrate[x^k f[x], {x, 0, Infinity}, Assumptions -> k > 0] 3 k 1 I1 − 5− 2 − 2 Cos@H3 + kL ArcTan@2DDM Gamma@3 + kD 2

For negative integer k, intc[k] is indeterminate. For concrete k we could use Limit or Series to obtain a value. For generic k, we simplify first the Gamma functions using FullSimplify. In[27]:= Out[27]=

intI[k]/intc[k] // FullSimplify // Simplify[#, Element[k, Integers]]& 1

(For calculating series terms of arbitrary order, see [1040], [1041], and [1043].)

Symbolic Computations

298

1.10 Three Applications 1.10.0 Remarks In this section, we will discuss three larger calculations. Here, “larger” mainly refers to the necessary amount of operations to calculate the result and not so much to the number of lines of Mathematica programs to carry it out. The first two are “classical” problems. Historically, the first one was solved in an ingenious method. Here we will implement a straightforward calculation. Carrying out the calculation of an extension of the second one (cosH2 p ê 65537L) took more than 10 years at the end of the nineteenth century. The third problem is a natural continuation from the visualizations discussed in Section 3.3 of the Graphics volume [1736]. The code is adapted to Mathematica Version 5.1. As mentioned in the Introduction, later versions of Mathematica may allow for a shorter implementation and more efficient implementation.

1.10.1 Area of a Random Triangle in a Square In the middle of the last century, J. J. Sylvester proposed calculating the expectation value of the convex hull of n randomly chosen points in a plane square. For n = 1, the problem is trivial, and for n = 2, the question is relatively easy to answer. For n ¥ 3, the straightforward formulation of the problem turns out to be technically quite difficult because of the multiple integrals to be evaluated. In 1885, M. W. Crofton came up with an ingenious trick to solve special cases of this problem. (His formulae are today called Crofton’s theorem.) At the same time, he remarked: The intricacy and difficulty to be encountered in dealing with such multiple integrals and their limits is so great that little success could be expected in attacking such questions directly by this method [direct integration]; and most of what has been done in the matter consists in turning the difficulty by various considerations, and arriving at the result by evading or simplifying the integration. [1031] The general setting of the problem is to calculate the expectation value of the minHn - 1, dL-dimensional volume of the convex hull of n points in d dimensions, for instance, the volume of a random tetrahedron formed by four randomly chosen points in 3 . For details about what is known, the Crofton theorem and related matters, see [35], [262], [562], [263], [1217], [832], [1031], [1238], [277], and [1410]. For an ingenious elementary derivation for the n = 3 case, see [1596]; for a tetrahedron in a cube, see [1910]. For the case of a tetrahedron inside a tetrahedron, see [1196].) In this subsection, we will show that using the integration capabilities of Mathematica it is possible to tackle such problems directly—this means by carrying out the integrations. (This subsection is based on [1733].) In the following, let the plane polygon be a unit square. We will calculate the expectation value of the area of a random triangle within this unit square (by an affine coordinate transformation, the problem in an arbitrary convex quadrilateral can be reduced to this case). Here is a sketch of the situation. In[1]:=

With[{P1 = {0.2, 0.3}, P2 = {0.8, 0.2}, P3 = {0.4, 0.78}}, Show[Graphics[ {{Thickness[0.01], Line[{{0, 0}, {1, 0}, {1, 1}, {0, 1}, {0, 0}}]}, {Thickness[0.002], Hue[0], Line[{P1, P2, P3, P1}]},

1.10 Three Applications

299

{Text["P1", {0.16, 0.26}], Text["P2", {0.84, 0.16}], Text["P3", {0.40, 0.82}]}}], AspectRatio -> Automatic]]

P3

P1 P2

Let 8x1, y1 -1]; (* the upper limit *) uValue = Limit[indefiniteIntegral, ξ -> u, Direction -> +1]; Factor[Together[uValue - lValue]]]

To speed up the indefinite integration and the calculation of the limits, we apply some transformation rules implemented in LogExpand to the expressions. LogExpand splits all Log[expr] into as many subparts as possible to simplify the integrands. Because we know that the integrals we are dealing with are real quantities, we do not have to worry about branch cut problems associated with the logarithm function, and so drop all imaginary parts at the end. In[16]:=

LogExpand[expr_] := PowerExpand //@ Together //@ expr

Now, we have all functions together and can actually carry out the integration. To get an idea about the form of the expressions appearing in the six integrations, let us have a look at the individual integration results of the first region. (The indefinite integrals are typically quite a bit larger than the definite ones, as shown in the following results.) This is the description of the first six-dimensional region. In[17]:= Out[17]=

regions[[1]] 1 −x1 + y1 x2 y1 99x1, 0, =, 8y1, 0, x1 {Automatic, (# /. Log[x_] :> Log[2, x]/Log[2])&}]& @ (Re[Together[Plus @@ Apply[multiDimensionalIntegrate[area, ##]&, regions, {1}]]] // Timing) 11 91133.01 Second, = 288

All p and logH2L terms cancelled, and we got (taking into account the triangles with negative orientation) for the expectation value, the simple result A = 11 ê 144. The degree of difficulty to do multidimensional integrals is often depending sensitively from the order of the integration. As a check of the last result and for comparison, we now first evaluate the three integrations over the yi and then the three integration over the xi . For this situation, we have only 62 six-dimensional regions. In[26]:=

cad2 = GenericCylindricalAlgebraicDecomposition[ signedTriangleArea && unitCube6D, {x1, x2, x3, y1, y2, y3}]; regions2 = Apply[List, Apply[{#3, #1, #5} &, cad2[[1]] //. a_ && (b_ || c_) :> a && b || a && c, {2}], {0, 2}];

Out[30]=

Length[regions2] 62

And doing the integrations and simplifying the result takes now only a few seconds. Again, we obtain the result 11/288. In[31]:=

Out[31]=

Simplify[Together[Re[Plus @@ Apply[multiDimensionalIntegrate[area, ##]&, regions2, {1}]]], TransformationFunctions -> {(# /. Log[k_Integer] :> (Plus @@ ((#2 Log[#1])& @@@ FactorInteger[k])))&}] // Timing 11 932.51 Second, = 288

Symbolic Computations

304

Using numerical integration, we can calculate an approximative value of this integral to support the result 11 ê 144. In[32]:=

Out[32]=

(SeedRandom[111]; NIntegrate[Evaluate[Abs[area]], {x1, 0, 1}, {y1, 0, 1}, {x2, 0, 1}, {y2, 0, 1}, {x3, 0, 1}, {y3, 0, 1}, Method -> QuasiMonteCarlo, MaxPoints -> 10^6, PrecisionGoal -> 3]) 0.0763889

This result confirms the above result. In[33]:= Out[33]=

N[2 %%[[2]]] 0.0763889

We could now go on and calculate the probability distribution for the areas. The six-dimensional integral to be calculated is now 1

1

1

1

1

1

pHAL ~ ‡ ‡ ‡ ‡ ‡ ‡ dHA - H x1 , x2 , x3 , y1 , y2 , y3 LL d y3 dx3 d y2 dx2 d y1 dx1 , 0

0

0

0

0

0

H x1 , x2 , x3 , y1 , y2 , y3 L = †x3 y1 - x2 y1 + x1 y2 - x3 y2 + x2 y3 - x1 y3 §. (Here we temporarily changed A Ø 2 A so that all variables involved range over the interval @0, 1D. This time, before subdividing the integration variable space into subregions, we carry out the integral over y3 to eliminate the Dirac d function. To do this, we use the identity b

b

dHy - x0,k L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dx ‡ dHy - f HxLL dx = ‡ ‚ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ † f £ Hx0,k L§ k a

a

where the x0,k are the zeros of f HxL in @a, bD. Expressing y3 through x1 , x2 , x3 , y1 , y2 , and A yields the following expression. In[34]:=

Out[34]=

soly3 = Solve[A == (* or - *) (-x2 y1 + x3 y1 + x1 y2 - x3 y2 x1 y3 + x2 y3), y3][[1, 1, 2]] −A − x2 y1 + x3 y1 + x1 y2 − x3 y2 x1 − x2

And the derivative from the denominator becomes †x1 - x2 §. In[35]:= Out[35]=

D[-x2 y1 + x3 y1 + x1 y2 - x3 y2 - x1 y3 + x2 y3, y3] −x1 + x2

Now is a good time to obtain a decomposition of the space into subregions. In addition to the constraints following from the geometric constraints of the integration variables being from the unit square, we add three more inequalities: 1) 0 < y3 H x1 , x2 , x3 , y1 , y2 ; AL < 1 to ensure the existence of a zero inside the Dirac d function argument; 2) A > 0 for positive oriented areas; and 3) x1 > x2 to avoid the absolute value in the denominator (the case x1 < x2 follows from symmetry). In[36]:=

cad = Experimental`GenericCylindricalAlgebraicDecomposition[ 0 < soly3 < 1 && A > 0 && x1 > (* or < *) x2 && 0 < x1 < 1 && 0 < x2 < 1 && 0 < x3 < 1 && 0 < y2 < 1 && 0 < y1 < 1, {A, x1, x2, x3, y1, y2}];

1.10 Three Applications

305

This time, we get a total of 1282 subregions. In[37]:= Out[37]=

(l1 = cad[[1]] //. a_ && (b_ || c_) :> (a && b) || (a && c)) // Length 1282

One expects the probability distribution pHAL to be a piecewise smooth function of l. Six l-interval arise naturally from the decomposition. In[38]:= Out[38]= In[39]:=

Union[First /@ l1] 1 1 1 1 1 1 1 1 1 1 0 < A < »» < A < »» < A < »» < A < »» < A < »» < A < 1 6 6 5 5 4 4 3 3 2 2 ASortedRegions = {#[[1, 1, 2]] < A < #[[1, 1, 3]] , Rest /@ #}& /@ Split[Sort[(# /. Inequality[a_, Less, b_, Less, c_] :> {b, a, c} /. And -> List)& /@ (List @@ l1)], #1[[1]] === #2[[1]]&];

Here is the number of regions for the six l-intervals. In[40]:= Out[40]=

{#1, Length[#2] "subregions"}& @@@ ASortedRegions 1 1 1 990 < A < , 317 subregions=, 9 < A < , 324 subregions=, 6 6 5 1 1 1 1 9 < A < , 310 subregions=, 9 < A < , 216 subregions=, 5 4 4 3 1 1 1 9 < A < , 99 subregions=, 9 < A < 1, 16 subregions== 3 2 2

The regions themselves look quite similar to the above ones. In[41]:= Out[41]=

{#1, #2[[1]]}& @@@ ASortedRegions 1 990 < A < , 98x1, 0, A 0

To calculate the sixfold integral, we will follow the already twice successfully-used strategy to first calculate a decomposition of the integration domain. Because of the obvious fourfold rotational symmetry of pHx, yL around the square center 81 ê 2, 1 ê 2 a && b || a && c;

In[59]:=

Length[l1]

Out[59]=

327

All cells span the specified x,y-domain. This means, the density pHx, yL is continuous within this domain. In[60]:= Out[60]=

Union[Take[#, 2]& /@ l1] 1 1 < x < 1 && < y < x 2 2

1.10 Three Applications

309

The cells of the 6D integration domain have similar-looking boundaries as the cells from the above calculations. In[61]:=

xyRegions = (# /. Inequality[a_, Less, b_, Less, c_] :> {b, a, c} /. And -> List)& /@ ((* remove x and y parts *) List @@ Drop[#, 2]& /@ l1);

In[62]:=

xyRegions[[1]] −x + y −x1 y + x2 y 99x1, 0, =, 8x2, 0, x1 {1, 1, 1/2}, Axes -> True]], (* modeled probability *) Module[{d = 60, o = 10^4, data, if}, data = Compile[{}, Module[{T = Table[0, {d}, {d}], p1, p2, p3, xc, yc, mp, σ}, Do[{p1, p2, p3} = Table[Random[], {3}, {2}]; mp = (p1 + p2 + p3)/3; (* orientation of the normals *) σ = Sign[(Reverse[p2 - p1]{1, -1}).(mp - p1)]; (* are discretized square points inside triangle? *) Do[If[σ (Reverse[p2 - p1]{1, -1}).({x, y} - p1) > 0 && σ (Reverse[p3 - p2]{1, -1}).({x, y} - p2) > 0 && σ (Reverse[p1 - p3]{1, -1}).({x, y} - p3) > 0, (* increase counters *) {xc, yc} = Round[{x, y} (d - 1)] + 1; T[[xc, yc]] = T[[xc, yc]] + 1], {x, 0, 1, 1/(d - 1)}, {y, 0, 1, 1/(d - 1)}], {o}]; T]][]; (* interpolated scaled counts *) if = Interpolation[Flatten[MapIndexed[Flatten[ {(#2 - {1, 1})/(d - 1), #1}]&, data/o, {2}], 1]]; (* interpolated observed frequencies *) Plot3D[if[x, y], {x, 0, 1}, {y, 0, 1}, Mesh -> False]]}]]]

0.2

0.2 1

0.1

0.75

0 0

0.1 0 0

0.5 0.25

0.5

0.25 0.75 1

1 0.8 0.6 0.2

0.4 0.4

0.6

0

0.2 0.8 0 8

1

0

We end by integrating the calculated probability density pHx, yL over the unit square. pHx, yL is the probability that the point 8x, y< is inside a randomly chosen triangle. This means the average of pHx, yL is again the area of a randomly chosen triangle, namely 11 ê 144. In[76]:= Out[76]=

8 Integrate[p[x, y], {x, 1/2, 1}, {y, 1/2, x}] 11 144

For a similar probabilistic problem, the Heilbronn triangle problem, see [936].

Symbolic Computations

312

2p 1.10.2 cosI ÅÅÅÅ ÅÅÅ M à la Gauss 257 In the early morning of March 29 in 1796, Carl Friedrich Gauss (while still in bed) recognized how it is possible to construct a regular 17-gon by ruler and compass; or more arithmetically and less geometrically speaking, he 2p ÅÅÅÅ L in terms of square roots and the four basic arithmetic operations of addition, subtraction, expressed cosH ÅÅÅÅ 17 multiplication, and division only. (This discovery was the reason why he decided to become a mathematician j [1472], [704], [1792].) His method works immediately for all primes of the form 22 + 1, so-called Fermat numbers F j [1080]. For j = 0 to 4, we get the numbers 3, 5, 17, 257, and 65537. ( j = 5, …, 14 do not give primes; we return to this at the end of this section.) The problem to be solved is to express the roots of z p = 1, where p is a Fermat prime in square roots. One obvious solution of this equation is z = 1. After dividing z p = 1 by this solution, we get as the new equation to be solved: z p-1 + z p-2 + ∫ + z + 1 = 0. It can be shown that there are no further rational zeros; so this equation cannot be simplified further in an easy l2pi ÅÅÅÅÅÅ M, l integer, way. Let us denote (by following Gauss’s notation here and in the following) the solution expI ÅÅÅÅÅÅÅÅ p êê 1 § l § p - 1 of this equation by l (which is, of course, a solution, but which contains a pth root). Gauss’s idea, which solves the above equation exclusively in square roots, is to group the roots of the above equation in a recursive way such that the explicit values of the sums of these roots can be expressed in numbers and square roots. Each step then rearranges these roots until finally only groups of length two remain. These last groups are j 2p ÅÅÅÅÅ M. then just of the form cosI ÅÅÅÅÅÅÅÅ p Let us describe this idea in more detail. First, we need the number-theoretic notion of a primitive root: the number g is called a primitive root of p if the set of numbers 8gi mod p All, AspectRatio -> Automatic]

In[3]:=

(* reduced residue system exists for the following 128 numbers *) rssNumbers = Flatten[Position[Table[Sort[Array[ PowerMod[i, #, 257]&, 256, 0]] == Range[256], {i, 256}], True]];

In[5]:=

(* visualizations of the powermod sequences *) Function[bs, Show[GraphicsArray[Function[b,

1.10 Three Applications

313

primitiveRootsGraphics[b]] /@ bs]]] /@ (* display nine examples *) Partition[rssNumbers[[{1, 2, 3, 33, 42, 43, 66, 106, 114}]], 3]

Make Input

Show[primitiveRootsGraphics[#]]& /@ rssNumbers

(For some interesting discussions about the number of crossings and the number of regions in such pictures, see [1428].) f

ó The next concept we need is that of the so-called periods. A period l to the primitive root g, containing the root êê l and having length f, is defined by the expression below. (The dependence on the fixed quantities p and g is suppressed.) f -1 êêêêêêêêêêêê f jHp-1L ó l = ‚ l g ÅÅÅÅÅÅÅÅÅÅÅÅf ÅÅÅÅÅÅ j=0

êêêêêêêê êê Because the root l + p is equivalent to the root l, we implement the construction of the periods in the following êê way. (We again use PowerMod because of speed and denote l by R[l].)

Symbolic Computations

314 In[7]:=

period[λ_, f_, p_, g_] := Plus @@ (R /@ Mod[Mod[λ, p] Array[ PowerMod[g^((p - 1)/f), #, p]&, f, 0], p])

Let us look at two examples for the prime 17 and the primitive root 3. In[8]:= Out[8]= In[9]:= Out[9]=

period[1, 8, 17, 3] R@1D + R@2D + R@4D + R@8D + R@9D + R@13D + R@15D + R@16D period[3, 8, 17, 3] R@3D + R@5D + R@6D + R@7D + R@10D + R@11D + R@12D + R@14D

We see that their sum just gives the sum of all roots. This is always the case if p is a Fermat prime; here, the case p = 257 is checked. In[10]:= Out[10]= In[11]:= Out[11]=

period[1, 128, 257, 3] + period[3, 128, 257, 3] == Plus @@ Array[R, 256] True period[5, 128, 257, 3] + period[9, 128, 257, 3] == Plus @@ Array[R, 256] True

Dividing the last period again into subperiods by using the above definition for the periods, we find that the period period[3, 8, 17, 3] can be expressed as the sum of the following periods. In[12]:= Out[12]= In[13]:= Out[13]=

period[3, 4, 17, 3] R@3D + R@5D + R@12D + R@14D period[11, 4, 17, 3] R@6D + R@7D + R@10D + R@11D

It can be shown that one can always represent a period in this way: One root of one of the new periods is êêêêêêêêêêêêêêêê êêêêêêêêêêêêê p-1 êê identical to the old one (l), whereas one root of the other period is generated by the root Jl g ÅÅÅÅÅfÅÅÅÅÅ Å N modH p - 1L. The other roots of the two periods under consideration follow immediately from the above definition of the 17-1 periods. In our example, we have this for one root of the second period: I33 ÅÅÅÅÅÅÅÅ8ÅÅÅÅÅÅ M mod 16 = 11. In doing this division process for the periods repeatedly, we end up in periods of length two. These periods are of the form êê êêêêêêêê êê êêêêêêêê 2p 1 + p - 1, 2 + p - 2, …, which give immediately 2 cosI ÅÅÅÅ ÅÅÅÅ M, 2 cosI2 µ 2 ÅÅÅÅppÅ M, …. To explicitly calculate the p values of the periods in square roots, we need the following theorem: The (numerical) values L1 , L2 of two periods l1 , l2 (which contain no higher roots than square roots and are to be discriminated from the periods l1 , l2 themselves) obtained by splitting one period are the solutions of a quadratic equation. If L1 and L2 are the solutions of L2 + a1 L + a2 = 0; by Vieta’s theorem, we have L1 L2 = -a1 and L1 + L2 = a2 . The sum of the two periods is just the period before splitting, and the (numerical) value of the starting period is -1. It is 2f ò important to observe that the product of two periods of length f, obtained by splitting a period l , can always be expressed as a linear combination of periods of length 2 f . The explicit formula for carrying out this multiplication of two periods is given by f

f

f

f

ò õúúúúúùúúúû õúúúúú ùúúúû õúúúúú ùúúúúû lm = l1 m1 + l2 m1 + ∫ + l f m1 where f

f ó êêêê êêêê êêêê ó êêêê êêêê êêêê l = l1 + l2 + ∫ + l f and m = m1 + m2 + ∫ + m f .

1.10 Three Applications

315

After this multiplication, the periods on the right-hand side can then be expressed as periods of length 2 f or as Å2ÅÅÅÅÅ , they can always be expressed as pure numbers, which ensures that we have pure numbers. (For f = ÅÅÅÅp-1 appropriate starting values for the recursive calculation.) Here, the above two periods of length 8 of p = 17 (period[1, 8, 17, 3] and period[3, 8, 17, 3]; m1 = 1, l1 = 3, l2 = 5, l3 = 6, l4 = 7, l5 = 10, l6 = 11, l7 = 12, l8 = 14) are multiplied in this manner. In[14]:=

Out[14]=

period[ 3 + 1, 8, 17, 3] + period[ 5 + 1, 8, 17, 3] + period[ 6 + 1, 8, 17, 3] + period[ 7 + 1, 8, 17, 3] + period[10 + 1, 8, 17, 3] + period[11 + 1, 8, 17, 3] + period[12 + 1, 8, 17, 3] + period[14 + 1, 8, 17, 3] // Factor 4 HR@1D + R@2D + R@3D + R@4D + R@5D + R@6D + R@7D + R@8D + R@9D + R@10D + R@11D + R@12D + R@13D + R@14D + R@15D + R@16DL

By taking into account the original equation this obviously simplifies to -4. (The value of the period of length 16 was -1.) Å2ÅÅÅÅÅ can be given in closed form. The two values for the periods of length ÅÅÅÅp-1 In[15]:=

{1/2 (-1 + I^(((p - 1)/2)^2) Sqrt[p]), 1/2 (-1 - I^(((p - 1)/2)^2) Sqrt[p])};

This agrees with the direct numerical calculation, as shown here for p = 17. In[16]:= Out[16]= In[17]:= Out[17]=

% /. p -> 17 // N 81.56155, −2.56155< {period[1, 8, 17, 3] /. (R -> (Exp[2Pi I #/17.]&)), period[3, 8, 17, 3] /. (R -> (Exp[2Pi I #/17.]&))} // N // Chop 81.56155, −2.56155

17 the various lists of rules that are in use inside GaussSolve are quite big, we use Dispatch to accelerate their application (with the exception of the list solList, which is not used actively internally, but only serves as a container for the results). In[18]:=

GaussSolve[p:(3 | 5 | 17 | 257 | 65537), Λ_Symbol] := Module[{g = 3, λ, newλs, Timesλ, allλs, rules1, rules2, Simplifyλ, solStep, solArgs, solNList, solList = {Λ[1, p - 1] -> - 1}}, (* the λ’s *) λ[t_, f_] := λ[t, f] = Function[γ, Mod[Mod[t, p] Array[ PowerMod[γ, #, p]&, f, 0], p]][g^((p - 1)/f)]; (* newλs function definition with remembering *) newλs[t_, f_] := newλs[t, f] = {t, Mod[Mod[t, p] PowerMod[g, (p - 1)/f, p], p]}; (* Timesλ function for λ multiplication *) Timesλ[t_, u_, f_] := Plus @@ (Λ[#, f]& /@ Mod[λ[u, f] + t, p]); (* allλs lists *) allλs[p - 1] = {1}; allλs[f_] := allλs[f] = Flatten[Map[newλs[#, 2f]&, allλs[2f], {-1}]]; (* rules1 for λ canonicalization *) rules1[f_] := rules1[f] = Dispatch[Map[Λ[#, f]&, Flatten[Function[a, Apply[Rule, Transpose[{Rest[a], Table[#, {Length[Rest[a]]}]&[First[a]]}], {1}]] /@ (λ[#, f]& /@ allλs[f])], {-1}]]; (* rules2 for λ eliminating one λ *) rules2[(p - 1)/2] = Λ[g, (p - 1)/2] -> - 1 - Λ[1, (p - 1)/2]; rules2[f_] := rules2[f] = Dispatch[ Λ[#[[2, 2]], f] -> Λ[#[[1]], 2f] - Λ[#[[2, 1]], f]& /@ Map[{#, newλs[#, 2f]}&, allλs[2f], {-1}]]; (* Simplifyλ for simplifying products of λs *) Simplifyλ[t_, u_, f_] := Fold[Expand[#1 //. #2]&,

1.10 Three Applications

317

Expand[Timesλ[t, u, f] //. rules1[f]], rules2 /@ (f 2^Range[0, Log[2, (p - 1)/f] - 1])]; (* solStep for period subdivision *) solStep[t_, f_] := Module[{u, v, x1Px2, x1Tx2, sol1, sol2, sol1N, sol2N, numSol1}, {u, v} = newλs[t, f]; x1Px2 = Λ[t, f]; x1Tx2 = Simplifyλ[u, v, f/2]; {sol1, sol2} = # + Sqrt[#^2 - x1Tx2]{1, -1}&[x1Px2/2]; numSol1 = Λ[u, f/2] //. solNList; {sol1N, sol2N} = N[{sol1, sol2} //. solNList]; solList = Flatten[{solList, If[Abs[sol1N - numSol1] < Abs[sol2N - numSol1], {Λ[u, f/2] -> sol1, Λ[v, f/2] -> sol2}, {Λ[u, f/2] -> sol2, Λ[v, f/2] -> sol1}]}]; ]; (* solNList for numerical values of the periods *) solNList = Dispatch[Apply[(Λ @ ##) -> (Plus @@ Exp[N[2Pi I λ[##]/p]])&, Flatten[Function[i, {#, i}& /@ allλs[i]] /@ (2^Range[Log[2, p - 1], 1, -1]), 1], {1}]]; (* stepArgs for period arguments *) stepArgs = Flatten[Function[i, {#, i}& /@ allλs[i]] /@ (2^Range[Log[2, p - 1], 2, -1]), 1]; (* do the work *) solStep @@ #& /@ stepArgs; solList]

Now, let us calculate the two simple cases p = 3 and p = 5 as a warm up. In[19]:= Out[19]= In[20]:= Out[20]=

(Λ[1, 2] //. GaussSolve[3, Λ])/2 1 − 2 (Λ[1, 2] //. GaussSolve[5, Λ])/2 // Expand è!!!! 5 1 − + 4 4

The results agree with the well-known expressions for cosH2 p ê 3L and cosH2 p ê 5L. Here is the list of the values of the periods for p = 17. In[21]:=

(list17 = GaussSolve[17, Λ]) // InputForm

Out[21]//InputForm=

{Λ[1, 16] -> -1, Λ[1, 8] -> Λ[1, 16]/2 + Sqrt[4 + Λ[1, 16]^2/4], Λ[3, 8] -> Λ[1, 16]/2 - Sqrt[4 + Λ[1, 16]^2/4], Λ[1, 4] -> Λ[1, 8]/2 + Sqrt[1 + Λ[1, 8]^2/4], Λ[9, 4] -> Λ[1, 8]/2 - Sqrt[1 + Λ[1, 8]^2/4], Λ[3, 4] -> Λ[3, 8]/2 + Sqrt[1 + Λ[3, 8]^2/4], Λ[10, 4] -> Λ[3, 8]/2 - Sqrt[1 + Λ[3, 8]^2/4], Λ[1, 2] -> Λ[1, 4]/2 + Sqrt[Λ[1, 4]^2/4 - Λ[3, 4]], Λ[13, 2] -> Λ[1, 4]/2 - Sqrt[Λ[1, 4]^2/4 - Λ[3, 4]], Λ[9, 2] -> Λ[9, 4]/2 - Sqrt[1 + Λ[1, 8] + Λ[3, 4] + Λ[9, 4]^2/4], Λ[15, 2] -> Λ[9, 4]/2 + Sqrt[1 + Λ[1, 8] + Λ[3, 4] + Λ[9, 4]^2/4], Λ[3, 2] -> Λ[3, 4]/2 + Sqrt[Λ[1, 4] - Λ[1, 8] + Λ[3, 4]^2/4], Λ[5, 2] -> Λ[3, 4]/2 - Sqrt[Λ[1, 4] - Λ[1, 8] + Λ[3, 4]^2/4], Λ[10, 2] -> Λ[10, 4]/2 - Sqrt[-Λ[1, 4] + Λ[10, 4]^2/4], Λ[11, 2] -> Λ[10, 4]/2 + Sqrt[-Λ[1, 4] + Λ[10, 4]^2/4]} 2p Here is the final expression for cosH ÅÅÅÅ ÅÅÅÅ L. 17 In[22]:=

(Λ[1, 2] //. list17)/2 // Expand // Factor

Symbolic Computations

318

Out[22]=

1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j j−1 + 17 + 2 H17 − 17 L + k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 − 2 H17 − 17 L + 34 H17 − 17 L − 8 2 H17 + 17 L N% z z z {

We numerically check this result. Because the result is 0, we cannot get any significant digit, and so the N::meprec message is issued. In[23]:= Out[23]=

(% - Cos[2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Next is the result for cosH2 µ 2 p ê 17L. (Because we have eliminated most of the L[j, 2]’s with even j, we make use of cosH2 jp ê pL = cosH2 H p - jL p ê pL and use L[15, 2].) In[24]:= Out[24]=

(Λ[15, 2] //. list17)/2 // Expand // Factor 1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j−1 + 17 − 2 H17 − 17 L + j k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 + 2 H17 − 17 L − 34 H17 − 17 L + 8 2 H17 + 17 L N% z z z {

In[25]:= Out[25]=

(% - Cos[2 2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Using the powerful function RootReduce we could also prove the last equality symbolically. In[26]:= Out[26]= In[27]:= Out[27]=

(%% // Simplify // RootReduce) (Together[TrigToExp[Cos[2 2Pi/17]]] // RootReduce) 0 Together[TrigToExp[Cos[2 2Pi/17]]] 1 − H−1L13ê17 H1 + H−1L8ê17 L 2

The last value of interest here is cosH8 µ 2 p ê 17L. In[28]:= Out[28]=

(Λ[9, 2] //. list17)/2 // Expand // Factor 1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j j−1 + 17 − 2 H17 − 17 L − k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 + 2 H17 − 17 L − 34 H17 − 17 L + 8 2 H17 + 17 L N% z z z {

Here is again a quick numerical check for the last result. In[29]:= Out[29]=

(% - Cos[8 2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Now, as promised in the title of this subsection, we calculate cosH2 p ê 257L [1487], [638]. In[30]:=

list257 = GaussSolve[257, Λ];

We select only those parts that are explicitly needed for the evaluation of cosH2 p ê 257L. In[31]:=

Flatten[Function[{lhs, rhs}, (* until we have all needed Λ’s *)

1.10 Three Applications

319

FixedPoint[{#, Complement[Union[Cases[ (* what is in the rhs *) rhs[[#]]& /@ Flatten[Position[lhs, #]& /@ Last[#]], _Λ, {0, Infinity}]], Flatten[#]]}&, (* this we need of course *) {{Λ[1, 2]}}, SameTest -> (Last[#2] === {}&)]][ (* all lhs and rhs from list257 *) First /@ list257, Last /@ list257]]; solListPiD257 = (list257[[#]]& /@ Flatten[Function[lhs, Position[lhs, #]& /@ %][First /@ list257]]); 2p Here is a shortened version of this list of replacement rules necessary to express cosH ÅÅÅÅ ÅÅÅÅÅ L. 257 In[33]:=

solListPiD257 // Short[#, 6]&

Out[33]//Short=

1 1 Λ@1, 4D2 + Λ@1, 16D − Λ@1, 32D + Λ@136, 8D + Λ@197, 4D% , 9Λ@1, 2D → Λ@1, 4D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 2 1 1 Λ@1, 4D → Λ@1, 8D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Λ@1, 8D2 − Λ@3, 8D + Λ@131, 8D − Λ@131, 16D , 2 4 1 Λ@1, 16D → Λ@1, 32D + 2 1 $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% −Λ@1, 32D + Λ@1, 32D2 − Λ@1, 128D + 2 Λ@3, 32D − 2 Λ@3, 64D − Λ@9, 32D , 4 1 1 Λ@1, 32D → Λ@1, 64D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 5 + 2 Λ@1, 64D + Λ@1, 64D2 + Λ@1, 128D% , 28, 2 4 1 1 Λ@243, 32D → Λ@3, 64D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 − Λ@1, 128D + 2 Λ@3, 64D + Λ@3, 64D2% , 2 4 1 1 Λ@27, 64D → Λ@3, 128D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 16 + Λ@3, 128D2 , 2 4 1 1 Λ@81, 32D → Λ@1, 64D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 5 + 2 Λ@1, 64D + Λ@1, 64D2 + Λ@1, 128D% , 2 4 1 1 5 − 2 Λ@1, 64D + 3 Λ@1, 128D + Λ@9, 64D2% = Λ@215, 32D → Λ@9, 64D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 2 2p The value for cosH ÅÅÅÅ ÅÅÅÅÅ L is now easily obtained, but because of its size, we do not display it here. 257 In[34]:= Out[34]=

(cos2PiD257 = (Λ[1, 2] //. Dispatch[solListPiD257])/2) // ByteCount 1822680

It contains only square roots, but it contains a lot of them. In[35]:= Out[35]=

Cases[cos2PiD257, Power[_, 1/2], {0, Infinity}, Heads -> True] // Length 5133

If the reader wants to see all of them, the following code opens a new notebook with the typeset formula for the square root version of cosH2 p ê 257L.

Symbolic Computations

320 Make Input

NotebookPut[Notebook[{Cell[BoxData[ FormBox[MakeBoxes[#, TraditionalForm]&[cos2PiD257], TraditionalForm]], "Output", ShowCellBracket -> False, CellMargins -> {{0, 0}, {5, 5}}, PageWidth -> Infinity, FontColor -> GrayLevel[1], (* allow to see all square roots *) CellHorizontalScrolling -> True]}, WindowSize -> {Automatic, Fit}, Background -> RGBColor[0.31, 0., 0.51], ScrollingOptions -> {"HorizontalScrollRange" -> 500000}, WindowMargins -> {{0, 0}, {Automatic, 10}}, WindowElements -> {"HorizontalScrollBar"}, WindowFrameElements -> {"CloseBox"}]]

Here is a numerical check of the result. In[36]:= Out[36]=

(cos2PiD257 - Cos[2Pi/257]) // SetPrecision[#, 1000]& 0. × 10−996

One could now go on and calculate the following quite large calculation for the denominator 65537. Make Input

l65537 = GaussSolve[65537, L]

It will take around one day on a modern workstation. Here are the first lines of the result (of size 55 MB). {Λ[ 1, Λ[ 1, Λ[ 3, Λ[ 1, Λ[ 9, Λ[ 3, Λ[ 27, Λ[ 1,

65536] 32768] 32768] 16384] 16384] 16384] 16384] 8192]

-> -> -> -> -> -> -> ->

Λ[ 81,

8192] ->

Λ[

9,

8192] ->

Λ[729,

8192] ->

Λ[

8192] ->

3,

-1, Λ[1, 65536]/2 + Sqrt[16384 + Λ[1, 65536]^2/4], Λ[1, 65536]/2 - Sqrt[16384 + Λ[1, 65536]^2/4], Λ[1, 32768]/2 - Sqrt[ 4096 + Λ[1, 32768]^2/4], Λ[1, 32768]/2 + Sqrt[ 4096 + Λ[1, 32768]^2/4], Λ[3, 32768]/2 - Sqrt[ 4096 + Λ[3, 32768]^2/4], Λ[3, 32768]/2 + Sqrt[ 4096 + Λ[3, 32768]^2/4], Λ[1, 16384]/2 - Sqrt[ 1040 + 32 Λ[1, 16384] + Λ[1, 16384]^2/4 + 16 Λ[1, 32768]], Λ[1, 16384]/2 + Sqrt[1040 + 32 Λ[1, 16384] + Λ[1, 16384]^2/4 + 16 Λ[1, 32768]], Λ[9, 16384]/2 + Sqrt[1040 - 32 Λ[1, 16384] + 48 Λ[1, 32768] + Λ[9, 16384]^2/4], Λ[9, 16384]/2 - Sqrt[1040 - 32 Λ[1, 16384] + 48 Λ[1, 32768] + Λ[9, 16384]^2/4], Λ[3, 16384]/2 + Sqrt[1024 - 16 Λ[1, 32768] + 32 Λ[3, 16384] + Λ[3, 16384]^2/4]}

(Although the above implementation strictly follows Gauss’s original work, we could have used more efficient procedures. See [835].) Let us briefly discuss the numbers n for which the value cosH2 p ê nL can be expressed in square roots (or geometrically speaking, which n-gons can be constructed by ruler and compass [466], [825]?). 5

The above-mentioned number 22 - 1 = 4294967295 is not a prime number; the factors are all Fermat numbers F j with j = 0, …, 4. In[37]:=

FactorInteger[2^(2^5) - 1]

1.10 Three Applications Out[37]=

321

883, 1 120, Contours -> {0}, ContourShading -> False, ContourStyle -> {Hue[0.8 (∂ - 55)/6]}, DisplayFunction -> Identity, Frame -> False, PlotLabel -> #6 "-plane"]& @@@ (* the 3 coordinate plane data *)

Symbolic Computations

326 {{x, y, 0, x, y, "x,y"}, {x, 0, z, x, z, "x,z"}, {0, y, z, y, z, "y,z"}}, {∂, 18, 26, 1/3}]]]]] /@ {+1, -1} x,y-plane

x,z-plane

y,z-plane

x,y-plane

x,z-plane

y,z-plane

And here are two 3D plots of the resulting surfaces. By adding a constant to the polynomial, we squeeze the tube and by subtracting a constant, we thicken the tube. In[17]:=

Show[GraphicsArray[ (* show squeezed and fattened version *) Graphics3D[{EdgeForm[], Cases[ ContourPlot3D[Evaluate[treFoilKnotPoly[x, y, z] + #], {x, -5, 5}, {y, -5, 5}, {z, -2, 2}, Boxed -> False, MaxRecursion -> 1, DisplayFunction -> Identity, PlotPoints -> {{21, 6}, {21, 6}, {13, 6}}], _Polygon, Infinity] /. (* cut vertices off *) Polygon[l_] :> Polygon[Plus @@@ Partition[Append[l, l[[1]]], 2, 1]/2]}, Boxed -> False]& /@ (* two constant values *) {8 10^21, -10^23}]]

In a similar manner, one can implicitize many other surfaces, when their parametrization is in terms of trigonometric or hyperbolic functions, for instance, the Klein bottle from Section 2.2.1 of the Graphics volume [1736].

1.10 Three Applications

327

Here is their implicit form together with the code for making a picture of the resulting polynomial. (For the implicitization of a “realistic looking” Klein bottle, see [1734].) Make Input

Needs["Graphics`ContourPlot3D`"] Clear[x, y, z, r, ϕ] Show[Graphics3D[ (* convert back from polar coordinates to Cartesian coordinates *) Apply[{#1 Cos[#2], #1 Sin[#2], #3}&, Cases[ContourPlot3D[Evaluate[ 768 x^4 - 1024 x^5 - 128 x^6 + 512 x^7 - 80 x^8 - 64 x^9 + 16 x^10 + 144 x^2 y^2 - 768 x^3 y^2 - 136 x^4 y^2 + 896 x^5 y^2 - 183 x^6 y^2 176 x^7 y^2 + 52 x^8 y^2 + 400 y^4 + 256 x y^4 - 912 x^2 y^4 + 256 x^3 y^4 + 315 x^4 y^4 - 144 x^5 y^4 - 16 x^6 y^4 + 4 x^8 y^4 904 y^6 - 128 x y^6 + 859 x^2 y^6 - 16 x^3 y^6 - 200 x^4 y^6 + 16 x^6 y^6 + 441 y^8 + 16 x y^8 - 224 x^2 y^8 + 24 x^4 y^8 - 76 y^10 + 16 x^2 y^10 + 4 y^12 - 2784 x^3 y z + 4112 x^4 y z - 968 x^5 y z 836 x^6 y z + 416 x^7 y z - 48 x^8 y z + 1312 x y^3 z + 2976 x^2 y^3 z 5008 x^3 y^3 z - 12 x^4 y^3 z + 2016 x^5 y^3 z - 616 x^6 y^3 z 64 x^7 y^3 z + 32 x^8 y^3 z - 1136 y^5 z - 4040 x y^5 z + 2484 x^2 y^5 z + 2784 x^3 y^5 z - 1560 x^4 y^5 z - 192 x^5 y^5 z + 128 x^6 y^5 z + 1660 y^7 z + 1184 x y^7 z - 1464 x^2 y^7 z 192 x^3 y^7 z + 192 x^4 y^7 z - 472 y^9 z - 64 x y^9 z + 128 x^2 y^9 z + 32 y^11 z - 752 x^4 z^2 + 1808 x^5 z^2 - 1468 x^6 z^2 + 512 x^7 z^2 64 x^8 z^2 + 6280 x^2 y^2 z^2 - 5728 x^3 y^2 z^2 - 4066 x^4 y^2 z^2 + 5088 x^5 y^2 z^2 - 820 x^6 y^2 z^2 - 384 x^7 y^2 z^2 + 96 x^8 y^2 z^2 136 y^4 z^2 - 7536 x y^4 z^2 + 112 x^2 y^4 z^2 + 8640 x^3 y^4 z^2 2652 x^4 y^4 z^2 - 1152 x^5 y^4 z^2 + 400 x^6 y^4 z^2 + 2710 y^6 z^2 + 4064 x y^6 z^2 - 3100 x^2 y^6 z^2 - 1152 x^3 y^6 z^2 + 624 x^4 y^6 z^2 1204 y^8 z^2 - 384 x y^8 z^2 + 432 x^2 y^8 z^2 + 112 y^10 z^2 + 3896 x^3 y z^3 - 7108 x^4 y z^3 + 3072 x^5 y z^3 + 768 x^6 y z^3 768 x^7 y z^3 + 128 x^8 y z^3 - 3272 x y^3 z^3 - 4936 x^2 y^3 z^3 + 8704 x^3 y^3 z^3 - 80 x^4 y^3 z^3 - 2496 x^5 y^3 z^3 + 608 x^6 y^3 z^3 + 2172 y^5 z^3 + 5632 x y^5 z^3 - 2464 x^2 y^5 z^3 - 2688 x^3 y^5 z^3 + 1056 x^4 y^5 z^3 - 1616 y^7 z^3 - 960 x y^7 z^3 + 800 x^2 y^7 z^3 + 224 y^9 z^3 + 752 x^4 z^4 - 1792 x^5 z^4 + 1472 x^6 z^4 - 512 x^7 z^4 + 64 x^8 z^4 - 3031 x^2 y^2 z^4 + 1936 x^3 y^2 z^4 + 2700 x^4 y^2 z^4 2304 x^5 y^2 z^4 + 448 x^6 y^2 z^4 + 697 y^4 z^4 + 3728 x y^4 z^4 + 24 x^2 y^4 z^4 - 3072 x^3 y^4 z^4 + 984 x^4 y^4 z^4 - 1204 y^6 z^4 1280 x y^6 z^4 + 880 x^2 y^6 z^4 + 280 y^8 z^4 - 800 x^3 y z^5 + 1488 x^4 y z^5 - 768 x^5 y z^5 + 128 x^6 y z^5 + 992 x y^3 z^5 + 1016 x^2 y^3 z^5 - 1728 x^3 y^3 z^5 + 480 x^4 y^3 z^5 - 472 y^5 z^5 960 x y^5 z^5 + 576 x^2 y^5 z^5 + 224 y^7 z^5 + 16 x^4 z^6 + 388 x^2 y^2 z^6 - 384 x^3 y^2 z^6 + 96 x^4 y^2 z^6 - 76 y^4 z^6 384 x y^4 z^6 + 208 x^2 y^4 z^6 + 112 y^6 z^6 - 64 x y^3 z^7 + 32 x^2 y^3 z^7 + 32 y^5 z^7 + 4 y^4 z^8 /. (* to polar coordinates *) {x -> r Cos[ϕ], y -> r Sin[ϕ]}], {r, 0.6, 3.3}, {ϕ, 0, 2Pi}, {z, -1.3, 1.3}, PlotPoints -> {18, 40, 24}, MaxRecursion -> 0, DisplayFunction -> Identity], _Polygon, Infinity], {-2}]]]

Symbolic Computations

328

For more on the subject of implicitization of surfaces, see [1197], [351], and [1591] and references cited therein. We end with another implicit surface originating from a trefoil knot. Starting with a parametrized space curve cHtL, we construct the parametrized surface HcHt + a ê 2L + cHt + a ê 2LL ê a (the average of two symmetrically located points with respect to t). The following code calculates the implicit form of this surface for the trefoil knot. We use the function Resultant to eliminate the parametrization variables. For brevity, we express the resulting surface in cylindrical coordinates. Make Input

(* a function to convert from trigonometric to polynomial variables *) [expr_] := Numerator[Together[TrigToExp[expr] /. {t -> Log[T]/I, α -> Log[Α]/I}]] (* make algebraic form of average *) cAv = ((c /. t -> t + α) + (c /. t -> t - α))/2 cAvAlg = [{x, y, z} - cAv]/{I, 1, I} (* eliminate parametrization variables *) res1 = Resultant[cAvAlg[[1]], cAvAlg[[2]], Α] // Factor res2 = Resultant[cAvAlg[[1]], cAvAlg[[3]], Α] // Factor res3 = Resultant[res1[[-1]] /. T -> Sqrt[T2], res2[[-1, 1]] /. T -> Sqrt[T2], T2, Method -> SylvesterMatrix]; (* express implicit form of surface in cylindrical coordinates *) cAvImpl = Factor[res3][[3, 1]] /. {x -> r Cos[ϕ], y -> r Sin[ϕ]} // FullSimplify In[18]:=

cAvImpl = r^6 (2 + r) (r - 2) (1 - 44 r^2 + 64 r^4) + 24 r^4 (-12 - 3 r^2 + 80 r^4) z^2 128 r^2 (-123 + 36 r^2 + 64 r^4) z^4 - 8192 z^6 + r^3 (2 z (993 r^4 - 80 r^6 - 4144 z^2 + 8192 z^4 + r^2 (84 - 5760 z^2)) Cos[3 ϕ] + r^3 (-4 + 177 r^2 - 300 r^4 + 64 r^6 32 (-109 + 48 r^2) z^2) Cos[6 ϕ] - 64 r^6 z Cos[9 ϕ] 16 (3 r^6 (-4 + r^2) + 2 r^2 (69 - 114 r^2 + 64 r^4) z^2 256 (-2 + 3 r^2) z^4) Sin[3 ϕ] + 4 r^3 z (157 - 174 r^2 + 512 z^2) Sin[6 ϕ] - 48 r^6 (-4 + r^2) Sin[9 ϕ]);

In[19]:=

Needs["Graphics`ContourPlot3D`"]

In[20]:=

(* a function for making a hole in a polygon *) makeHole[Polygon[l_], f_] := Module[{mp = Plus @@ l/Length[l], , }, = Append[l, First[l]]; = (mp + f (# - mp))& /@ ; {(* new polygons *) MapThread[Polygon[Join[#1, Reverse[#2]]]&, Partition[#, 2, 1]& /@ {, }]}]

1.10 Three Applications

329

The next pair of graphics shows the parametric and the implicit version of this surface. We make use of the threefold rotational symmetry of the surface in the generation of the implicit plot. In[22]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, polysCart, = {{-1, Sqrt[3], 0}, {-Sqrt[3], -1, 0}, {0, 0, 2}}/2.}, {(* the parametrized 3D plot *) ParametricPlot3D[Evaluate[Append[ ((c /. t -> t + α) + (c /. t -> t - α))/2, {EdgeForm[], SurfaceColor[#, #, 3]&[Hue[(t + Pi)/(2Pi)]]}]], {t, -Pi, Pi}, {α, 0, Pi/2}, Axes -> False, PlotPoints -> {64, 32}, BoxRatios -> {1, 1, 0.6}, PlotRange -> {{-3, 3}, {-3, 3}, {-1, 1}}] /. p_Polygon :> makeHole[p, 0.76], (* the implicit 3D plot; use symmetry *) polysCart = Apply[{#1 Cos[#2], #1 Sin[#2], #3}&, Cases[(* contour plot in cylindrical coordinates *) ContourPlot3D[cAvImpl, {r, 0, 3}, {ϕ, -Pi/3, Pi/3}, {z, -1, 1}, PlotPoints -> {28, 24, 32}, MaxRecursion -> 0], _Polygon, Infinity], {-2}]; Graphics3D[{EdgeForm[], (* generate all three parts of the surface *) {polysCart, Map[ .#&, polysCart, {-2}], Map[..#&, polysCart, {-2}]}} /. p_Polygon :> {SurfaceColor[#, #, 2.4]&[ Hue[Sqrt[#.#]&[0.24 Plus @@ p[[1]]/Length[p[[1]]]]]], makeHole[p, 0.72]}, BoxRatios -> {1, 1, 0.6}]}]]]

For the volume of such tubes, see [309].

Symbolic Computations

330

Exercises 1.L2 The 2 in the Factorization of xi - 1, Heron’s Formula, Volume of Tetrahedron, Circles of Apollonius, Circle ODE, Modular Transformations, Two-Point Taylor Expansion, Quotential Derivatives a) Program a function which finds all i for which numbers other than 0 or ≤1 appear as coefficients of x j

(0 § j § i) in the factorized decomposition of x j - 1 (1 § i § 500) [586]. Do not use temporary variables (no Block or Module constructions). b) Let P1 , P2 , and P3 be three points in the plane. Starting from the formula A = †HP2 - P1 LäHP3 - P1 L§ ê 2 for the area A of the triangle formed by P1 , P2 , and P3 , derive a formula for the area which only contains the lengths of the three sides of the triangle (Heron’s area formula). c) Let P1 , P2 , P3 , and P4 be four points in 3 . Starting from the formula V = HareaOfOneFace height ê 3L for the

volume V of the tetrahedron formed by P1 , P2 , P3 , and P4 , derive a formula for the volume which only contains the lengths of the six edges of the tetrahedron [841]. d) Given are three circles in the plane that touch each other pairwise. In the “middle” between these three circles

now put a fourth circle that touches each of the three others. Calculate the radius of this circle as an explicit function of the radius of the three other circles (see [1630], [416], [155], [1680], [839], and [695]). e) Calculate the differential equation that governs all circles in the x,y-plane (from I.I.5.6 of [896]). f) Show that the three equations

u4 - vHuL4 - 2 u vHuL H1 - u2 vHuL2 L = 0 u6 - vHuL6 + 5 vHuL2 u2 Hu2 - vHuL2 L - 4 u vHuL H1 - u4 vHuL4 L = 0 H1 - u8 L H1 - vHuL8 L - H1 - u vHuLL8 = 0 are solutions of the (so-called modular) differential equation [1438] iji 1 + k 2 y2 1 + l 2 £ 2 yzz £ 2 jjjj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz - J ÅÅÅÅÅÅÅÅ ÅÅÅÅÅ3ÅÅÅÅ N l HkL zz l HkL + 3 l≥ HkL2 - 2 l£ HkL l£££ HkL = 0. j k - k3 l l k { { k 1

1

The change of variables between 8k, l< and 8u, v< is given by k ÅÅ4ÅÅ = u and l ÅÅ4ÅÅ = v. g) The function f HxL

ÅÅÅÅÅÅÅÅÅÅ dx i yz 1-hHxL f HxL eŸ gHxL dx+Ÿ ÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅ dx j jjc + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 1-hHxL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dxzzzz wHxL = c1 e-Ÿ ÅÅÅÅÅÅÅÅ jj 2 ‡ 1 - hHxL k {

fulfills a linear second-order differential equation [700]. Derive this differential equation. h) Prove the following two identities (from [1838] and [897]):

6p 10 p 1 tanJ ÅÅÅÅÅÅ tan-1 H4LN = 2 JcosJ ÅÅÅÅÅÅÅÅÅÅ N + cosJ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ NN 17 17 4 è!!!! 7 i i1 p 1 i 1 yzyz è!!!! ij 1 i 1 yzyzyz cosJ ÅÅÅÅÅÅ N = ÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅ jjjcosjjj ÅÅÅÅÅÅ cos-1 jjj ÅÅÅÅÅÅÅÅè!!!! ÅÅÅÅÅÅÅÅÅÅ zzzz + 3 sinjj ÅÅÅÅÅÅ cos-1 jjj ÅÅÅÅÅÅÅÅè!!!! ÅÅÅÅÅÅÅÅÅÅ zzzzzz 6 k k3 7 6 3 k k 2 7 {{ k 2 7 {{{

Exercises

331

i) Given a rectangular box of size w1 µ h1 µ d1 . Is it possible to put a second box of size w2 µ h2 µ d2 in the first

one such that 1 ê w2 + 1 ê h2 + 1 ê d2 is equal to, less than, or greater than w1 + h1 + d1 ?

j) What geometric object is described by the following three inequalities?

†f x§ + †y§ < 1 ﬂ †f y§ + †z§ < 1 ﬂ †x§ + †f z§ < 1 (f is the Golden ratio.) k) Check the following integral identity [1062]: 2

2 ¶ x x x iè!!!! ¶ f HtL y f HtL y2 1 i i1 ÅÅÅÅÅ dtz dx = ‡ j ÅÅÅÅÅ ‡ f HtL dtyz dx + jjj x ‡ ÅÅÅÅÅÅÅÅÅÅ ÅÅÅ dt + ÅÅÅÅÅÅÅÅ ÅÅÅÅ!Å ‡ f HtL dtzzz . ‡ j‡ ÅÅÅÅÅÅÅÅ è!!! t t { { 0 k x 0 k x 0 x 0 x k { x

l) Check the following identity [1900] for small integer n and r: n

pHak L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Hx - ak Lr+1 ¤nl = 1 Hak - al L k=1

l∫k

r ij y n n yzz i n jj HrL r jj p HxL + „ H-1L j ijj yzz pHr- jL HxL Ajjj‚ Hx - ai L-1 , ‚ Hx - ai L-2 , …, ‚ Hx - ai L- j zzzzzz zzz j jj kj{ j i=1 i=1 {z k i=1 j=1 k {

H-1Lr ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ n r! ¤k=1 Hx - ak L

Here pHzL is a polynomial of degree equal to or less than n; the ak are arbitrary complex numbers and the multivariate polynomials AHt1 , …, t j L are defined through AHt1 , t2 , …, t j L =

‚ k1 ,k2 ,…,k j k1 +2 k2 +∫+ j k j = j

t j kj j! t1 k1 t2 k2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ÅÅÅÅÅÅÅÅ N J ÅÅÅÅÅÅÅÅ N ∫ J ÅÅÅÅÅÅÅÅ N . j 2 k1 ! k 2 ! ∫ k j ! 1

m) Given five points in 2 , find all relations between the oriented areas (calculated, say, with the determinantal

formula from Subsection 1.9.2) of the nine triangles that one can form using the points. n) Is it possible to position six points P1 , …, P6 in the plane in such a way that they have the following integer distances between them [814]?

P1 P2 P3 P4 P5 P6

P1 0 87 158 170 127 68

P2 87 0 85 127 136 131

P3 158 85 0 68 131 174

P4 170 127 68 0 87 158

P5 127 136 131 87 0 85

P6 68 131 174 158 85 0

o) Show that there are no 3 µ 3 Hadamard matrices [78], [1866], [681]. (An n µ n Hadamard matrix Hn is a

matrix with elements ≤1 that fulfills Hn .HTn = n 1n .)

p) The two-point Taylor series of order for a function f HzL analytic in z1 , z2 is defined through [1158]

Symbolic Computations

332

o

f HzL = ‚ Hn Hz1 , z2 L Hz - z1 L + n Hz2 , z1 L Hz - z2 L L Hz - z1 Ln Hz - z2 Ln + Ro+1 Hz, z1 , z2 L. n=0

Here Ro+1 Hz, z1 , z2 L is the remainder term and the coefficients n Hz1 , z2 L are given as f Hz2 L 0 Hz1 , z2 L = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z2 - z1 n

Hk + n - 1L! H-1Lk k f Hn-kL Hz1 L + H-1Ln+1 n f Hn-kL Hz2 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . n Hz1 , z2 L = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ k ! n! Hn - kL! Hz1 - z2 Lk+n+1 k=0

H20L Calculate the two-point Taylor series 0,2 p @sinD HzL of order 20 for f HzL = sinHzL, z1 = 0, and z2 = 2 p. Find maxz1 §z§z2 † f HzL - 20 HzL§.

q) While for a smooth function yHxL, the relation d yHxL ê dx = 1 ê HdxHyL ê d yL holds; the generalization

d n yHxL ê dxn = 1 ê Hd n xHyL ê d yn L for n ¥ 2 in general does not hold. Find functions yHxL such that the generalization holds for n = 2 [245]. Can you find one for n = 3?

r) Define a function (similar to the built-in function D) that implements the quotential derivatives n . ê xn of a function f HxL defined recursively by [1297]

i n-1 f HxL yz n f HxL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅ = ÅÅÅÅÅÅ Å jj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅ z n x k xn-1 { x with the first quotential derivatives . ê x defined as 1 f HxL f HxL f Hq xL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅ = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = lim lnJ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N. qØ1 x f HxL x1 Show that f HyHxLL ê x = f HyHxLL ê y yHxL ê x. Define the multivariate quotential derivative recursively starting with the rightmost ones, meaning 2 f Hx, yL i f Hx, yL y ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = ÅÅÅÅÅÅÅÅ j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z. x y x k y { Show by explicit calculation that f Hx, yL 2 f Hx, yL f Hx, yL 2 f Hx, yL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅ . y x y x y x ¶ s) Conjecture the value of the following sum: ⁄k=1 H¤kj=1 a j-1 ê Hx + a j LL. Here a0 = 1, ak œ , ak ∫ 0, x ∫ 0

[1648]. 2.L1 Horner’s Form, Bernoulli Polynomials, Squared Zeros, Polynomialized Radicals, Zeros of Icosahedral Equation, Iterated Exponentials, Matrix Sign Function, Appell–Nielsen Polynomials a) Given a polynomial pHxL, rewrite it in Horner’s form.

Exercises

333 x+1

b) Bernoulli polynomials Bn HxL are uniquely characterized by the property Ÿx

Bn HtL dt = xn . Use this method to implement the calculation of Bernoulli polynomials Bn HxL. Try to use only built-in variables (with the exception of x and n, of course).

c) Given the polynomial x4 + a3 x3 + a2 x2 + a1 x + a0 with zeros x1 , x2 , x3 , and x4 , calculate the coefficients (as

functions of a0 , a1 , a2 , and a3 ) of a polynomial that has the zeros x21 , x22 , x23 , and x24 . d) Express the real zeros of

-1 + x + 2

3 5 è!!!!!!!!!!!!!! è!!!!!!!!!!!!!! è!!!!!!!!!!!!!! 1 + x2 - 3 1 + x3 + 5 1 + x5 - 4 = 0

as the zeros of a polynomial. e) Show that all nontrivial solutions of x10 + 11 x5 - 1 = 0 stay invariant under the following 60 substitutions:

xöei x ei xö- ÅÅÅÅÅÅ x e j Hei + x He4 + eLL xö ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ x - ei He4 + eL e j Hx - ei He4 + eLL xö- ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ei + x He4 + eL f) Iterated exponentials expHc1 z expHc2 z expHc3 z ∫LLL can be used to approximate functions [966], [1881],

[1882], [47]. Find values for c1 , c2 , …, c10 such that expHc1 z expHc2 z expHc3 z ∫LLL approximates the function 1 + lnH1 + zL around z = 0 as best as possible.

g) Motivate symbolically the result of the following input.

m = Table[1/(i + j + 1), {i, 5}, {j, 5}]; FixedPoint[(# + Inverse[#])/2&, N[m], 100] h) Efficiently calculate the list of coefficients of the polynomial 500

Hx4 + x3 + x2 + x + 1L

Hx2 + x + 1L

1000

Hx + 1L2000

without making use of any polynomial function like Expand, Coefficient, CoefficientList, …. i) What is the minimal distance between the roots of z3 + c2 z + 1 = 0 for real c? j) Let f HkL HzL = f H f Hk-1L HzLL, f H1L HzL = f HzL = z2 - c. Then the following remarkable identity holds [1135], [119]: ¶ ij ¶ y H ÅÅÅÅ2z Lk zk zzz jj expjjj- „ ck ÅÅÅÅÅÅÅÅ zzz = 1 + „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ k z j ¤kj=1 f H jL H0L k k=1 k=1 {

where

Symbolic Computations

334

2k

1 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . ck = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ HkL £ f Hz j L H f HkL £ Hz j L - 1L j=1

The sum appearing in the definition of the ck extends over all 2k roots of f HkL HzL = z. Expand both sides of the identity in a series around z = 0 and check the equality of the terms up to order z4 explicitly. k) Write a one-liner that, for a given integer m, quickly calculates the matrix of values e

x ∑d I ÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅ M sinHxL ce,d = lim ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅ d xØ0 ∑x

for 1 § e § m, 0 § d § m. l) The Appell–Nielsen polynomials pn HzL are defined through the recursion p£n HzL = pn-1 HzL, the symmetry

constraint pn HzL = H-1Ln pn H-z - 1L, and the initial condition p0 HzL = 1 [324], [1341]. Write a one-liner that calculates the first n Appell–Nielsen polynomials. Visualize the polynomials.

m) Write a one-liner that uses Integrate (instead of the typically used D) to derive the first n terms of the

Taylor expansion of a function f around x that is based on the following identity [729], [549] n-1

h h1 hn-1 hk f Hx + hL = „ ÅÅÅÅÅÅÅÅÅ f HkL HxL + ‡ ‡ ∫ ‡ f HnL Hx + hn L dhn … dh2 dh1 . k! 0 0 0 k=0

n) A generalization of the classical Taylor expansion of a function f HxL around a point x0 into functions jk HxL,

k = 0, 1, …, n (where the jk HxL might be other functions that the monomials xk ) can be written as [1839] ƒƒi 0 j1 HxL j0 HxL ƒƒj ƒƒjj j0 Hx0 L j1 Hx0 L ƒƒƒjjj f Hx0 L ƒƒjj £ 1 j£0 Hx0 L j£1 Hx0 L f HxL º - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ †ƒjjjj f Hx0 L W Hj0 Hx0 L, …, jn Hx0 LL ƒƒjj ƒƒjj ª ª ª ƒƒjj ƒƒƒj f HnL Hx L jHnL Hx L jHnL Hx L 0 0 0 ƒk 0 1

yzƒƒƒƒ zzƒƒ zzƒƒ zzzƒƒƒ zz§. zzƒ zzƒƒ zzƒƒ ∏ ª zzƒƒ zƒ HnL ∫ jn Hx0 L {ƒƒƒ

∫ jn HxL ∫ jn Hx0 L ∫ j£n Hx0 L

Here the WHj0 HxL, …, jn HxLL is the Wronskian of the j0 HxL, …, jn HxL and it is assumed not to vanish at x0 . Implement this approximation and approximate f HxL = cosHxL around x0 = 0 through expHxL, expHx ê 2L, …, expHx ê mL. Can this formula be used for m = 25? o) Show that the function [1222] 2

HHHzL + 2 z £ HzLL2 - 4 z £ HzL2 L wHzL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ £ 8 HHzL HzL HHzL + 2 Hz - 1L £ HzLL HHzL + 2 z £ HzLLL where HzL = c1 1 HzL + c 2 2 HzL and f1,2 HzL are solutions of H1 - zL z ££ HzL + H1 - 2 zL £ HzL - f HzL ê 4 = 0 fulfills the following special case of the Painlevé VI equation:

Exercises

335

1 1 1 1 1 1 1 w££ HzL = ÅÅÅÅÅ J ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N w£ HzL2 - J ÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N w£ HzL + z 2 wHzL wHzL - 1 wHzL - z z-1 wHzL - z wHzL HwHzL - 1L HwHzL - zL ij zHz - 1L y + 4zz. ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 Hz - 1L2 z2 k HwHzL - zL2 { 3.L1 Nested Integration, Derivative[-n], PowerFactor, Rational Painlevé II Solutions a) Given that the following definition is plugged into Mathematica, what will be the result of f[2][x]?

f[n_][x_] := Integrate[f[n - 1][x - z], {z, 0, x}] f[0][x_] = Exp[-x]; Consider the evaluation process. How would one change the first two inputs to get the “correct” result as if from Nest[Integrate[# /. {x -> x - z}, {z, 0, x}]&, Exp[-x], 2] b) Find two (univariate) functions f and g, such that Integrate[f, x] + Integrate[g, x] gives a different

result than does Integrate[f + g, x]. Find a (univariate) functions f and integration limits xl , xm , and xu , such that Integrate[f, {x, xl , xu }] gives a different result than does Integrate[f, {x, xl , xm }] + Integrate[f, {x, xm , xu }]. c) What does the following code do?

Derivative[i_Integer?Negative][f_] := With[{pI = Integrate[f[C], C]}, derivative[i + 1][Function[pI] /. C -> #] /; FreeQ[pI, Integrate, {0, Infinity}]] Predict the results of Derivative[+4][Exp[1 #]&] and Derivative[-4][Exp[1 #]&]. d) Is it possible to find a function f Hx, yL such that D[Integrate[f(x, y), x], y] is different from

Integrate[D[f(x, y), y], x]? e) Write a function PowerFactor that does the “reverse” of the function PowerExpand. It should convert products of radicals into one radical with the base having integer powers. It should also convert sums of logarithms into one logarithm and s logHaL into logHas L. f) The rational solutions of w≥ HzL = 2 wHzL3 - 4 z wHzL + 4 k, k œ + (a special Painlevé II equation) can be

expressed in the following way [967], [910], [1215], [952]: ¶ qk HzL xk = expHz x + x3 ê 3L (for k < 0, let Let the polynomials qk HzL be defined by the generating function ⁄k=0 qk HzL = 0). Let the determinants sk HzL be defined by matrices Hai j L0§i, j§k-1 with ai j = qk+i-2 j HzL (for k = 0, let s0 HzL = 1). Then, wk HzL is given as wn HzL = ∑logHsk+1 HzL ê sk HzLL ê ∑ z. Calculate the first few wk HzL explicitly. 4.L1 Differential Equations for the Product, Quotient of Solutions of Linear Second-Order Differential Equations Let y1 HzL and y2 HzL be two linear independent solutions of y££ HzL + f HzL y£ HzL + gHzL yHzL = 0 The product uHzL = y1 HzL y2 HzL obeys a linear third-order differential equation u£££ HzL + a p @ f HzL, gHzLD u££ HzL + b p @ f HzL, gHzLD u£ HzL + c p @ f HzL, gHzLD uHzL = 0

Symbolic Computations

336

The quotient wHzL = y1 HzL ê y2 HzL obeys (Schwarz’s differential operator; see, for instance, [847] and [1906]) w£££ HzL w£ HzL + aq @ f HzL, gHzLD w££ HzL2 + bq @ f HzL, gHzLD w£ HzL2 = 0 Calculate a p , b p , c p and aq , bq . (For analogous equations for the solutions of higher-order differential equations, see [1024].) 5.L1 Singular Points of ODEs, Integral Equation a) First-order ordinary differential equations of the form y£ HxL = PHx, yL ê QHx, yL possess singular points 8x*i , y*i < [215], [1403], [1746], [1105], [467], [949]. These are defined by PHx*i , y*i L = QHx*i , y*i L = 0. It is possible to trace the typical form of the solution curves in the neighborhood of a singular point by solving y£ HxL = Ha x + b yL ê Hc x + d yL. Some typical forms include the following examples:

2 yHxL yHxL + x yHxL y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ x x x yHxL £ a vortex point y HxL = - ÅÅÅÅÅÅÅÅÅÅÅÅÅ x yHxL - x £ an eddy point y HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . yHxL + x

a knot point

Investigate which of the given differential equations can be solved analytically by Mathematica, and plot the behavior of the solution curves in a neighborhood of the singular point 80, 0 1) [1594]: ¶ g

lim ‡

gØ0

1

1 z -1 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅ H1 + x Hzg - 1LL- ÅÅgÅÅ -1 dz. g z2

Let k Hx, a0 + a1 x + ∫ + an xn L stand for the root that is represented by the Root-object Root@a0 + a1 # + ∫ + an #n , k]. Calculate the following integrals symbolically (express the results using Root-objects): d)

2 6 ‡ lnHx L 1 Hx, -x - x x + x L dx

2 7 2 7 2 7 ‡ expH3 Hx, -x - x + x LL lnH3 Hx, -x - x + x LL 3 Hx, -x - x + x L dx

Symbolic Computations

354

2 Hx, -x - x + x3 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ % dx ‡ $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 3 Hx, -x - x + x3 L 2 Hx, -x - x x + x3 L 3 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dx ‡ $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 3 Hx, -x - x x + x3 L 1 2 Hx, -x - x + x3 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å dx ‡ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 3 0 2 Hx, x - x + x L - 1

‡

¶i 1

1 1 zy 1 jj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅ ÅÅÅ zz dx ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ5ÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅ j 5 è!!!! Hx, x + x + x L 5 x 1 x { k

e) Under which conditions on a1 , a2 , a3 can the three roots of the cubic x3 + a1 x2 + a2 x + a3 = 0 be interpreted

as the side length of a nondegenerate triangle [1338]? Visualize the volume in a1 ,a2 ,a3 -space for which this happens. For random a1 , a2 , a3 from the interval @-1, 1D, what is the probability that the roots are the side length of a nondegenerate triangle? 23.L2 Riemann Surface of Cubic Visualize the Riemann surface of xHaL, where x = xHaL is implicitly given by x3 + x2 + a x - 1 ê 2 = 0. Do not use ContourPlot3D. 24.L2 Celestial Mechanics, Lagrange Points a) For the so-called Kepler equation (see [1332], [1699], [1521], [781], [1236], [303], [343], and [388]) L = M + e sinHLL find a series solution for small e in the form n

ijn or n-1 yz L º M + „ jjj ‚ aij e j zzzz sinHi M L j j=i { i=1 k with n around 10. b) Find a short time-series solution (power series in t up to order 10, for example) for the equation of motion for

a body in a spherical symmetric gravitational field (to avoid unnecessary constants, appropriate units are chosen) rHtL r££ HtL = ÅÅÅÅÅÅÅÅÅÅÅÅ3ÅÅ rHtL with the initial conditions rH0L = r0 , r£ H0L = v0 . Here, rHtL is the time-dependent position vector of the body and rHtL = †rHtL§. To shorten the result, introduce the abbreviations v0 .v0 r0 .v0 1 s = ÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅ Å w = ÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅ ÅÅ u = ÅÅÅÅ3ÅÅÅÅ r0 r0 r0 (Do not use explicit lists as vectors, first because this is explicitly dependent on the dimension, and second because it slows down the calculation considerably. It is better to implement an abstract vector type for rHtL and define appropriate rules for it.)

Exercises

355

c) The Lagrange points 8xHmL, yHmL< of the restricted three-body problem are the solutions of the following system of equations [430], [1421], [1755], [801], [1433], [745], [137]:

∑V Hx, yL ∑ VHx, yL - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å = - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = 0. ∑x ∑y The potential VHx, yL is given by the following expression: m 1-m 1 V Hx, yL = - ÅÅÅÅÅÅ Hx2 + y2 L - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅ r1 r2 2 "############################## 2 2 r1 = Hx - x1 L + y r2 =

"############################## Hx - x2 L2 + y2

x1 = -m x2 = 1 - m 1 ÅÅ , calculate all real Calculate explicit symbolic solutions for the Lagrange points. For the parameter value m = ÅÅÅÅ 10 solutions (do not do this by a direct call to Solve).

25.L2 Algebraic Lissajous Curves, Light Ray Reflection Inside a Closed Region Derive an implicit representation f Hx, yL of the Lissajous curves 8xHtL, yHtL< = 8cosHtL, sinH2 tL EliminationOrder]] Out[6]= 8H−1 + h@xDL Hf@xD g@xD w@xD − w@xD f @xD − f@xD w @xD + g@xD w @xD − g@xD h@xD w @xD + h @xD w @xD − w @xD + h@xD w @xDL

0 /. d[c] -> d] Out[8]= 12 c H2 c2 + d2 L H2187 c2 + 648 c8 + 48 c14 + 486 c6 d2 + 72 c12 d2 −

243 c4 d4 − 36 c10 d4 − 189 c2 d6 − 82 c8 d6 − 27 d8 − 4 c6 d8 + 21 c4 d10 + 2 c2 d12 − d14 L

Using the numerical solution of the system allows us to extract the corresponding symbolic solution. In[9]:= FindRoot[Evaluate[{gb == 0, gbD == 0}], {c, 1}, {d, 1.5}] Out[9]= 8c → 1.09112, d → 1.54308< In[10]:= {#, N[#]}& @ Select[{c, d} /. Solve[{gb == 0, gbD == 0}, {c, d}],

# == {1.09112`5, 1.54308`5}&] è!!!!! è!!!!! 3 3 Out[10]= 999 , ==, 881.09112, 1.54308 False]

Symbolic Computations

398

150 100 50 0 20

100 80 60 40

40

60

20 80

100

0

For a semi-closed form of the ce,d , see [1336]. l) It is straightforward to write a one-liner that calculates the first n Appell–Nielsen polynomials. To avoid an explicit counter n, we operate recursively on a two-element list 8≤1, poly -(z + 1)), C][[1]] /. C -> 0][ *) Integrate[p, z] + C]}] @@ #&, {1, 1}, n]

The next plot shows the logarithm of the absolute value of the first 36 Appell–Nielsen polynomials. The steep vertical cusps are the zeros of the polynomials. While the majority of the roots seem to have identical numerical values, most of them are actually slightly different. In[2]:= With[{o = 35},

Plot[Evaluate[Log @ Abs @ AppellNielsenPolynomialList[o, z]], {z, -6, 6}, PlotRange -> {-45, 5}, PlotPoints -> 200, Frame -> True, Axes -> False, PlotStyle -> Table[{Thickness[0.002], Hue[0.8 k/o]}, {k, o + 1}]]] 0 -10 -20 -30 -40 -6

-4

-2

0

2

4

6

HnL

m) Evaluating the multiple integral of f Hx + hn L for a given integer n yields (up to sign and the term f Hx + hL) the first n - 1 terms of the Taylor series. Because the integrand at each integration stage is a complete differential, the n iterated integrations can all be carried out completely. The following function TaylorTerms implements the multiple integral (we use the same integration variable h for each integration). In[1]:= TaylorTerms[n_, {f_, x_, h_}] := Expand[f[x + h] -

Nest[Integrate[#, {h, 0, h}]&, Derivative[n][f][x + h], n]]

And here are the first ten Taylor terms obtained from integration for a not explicitly specified function f . In[2]:= TaylorTerms[10, {f, x, h}]

1 2

1 6

1 24

Out[2]= f@xD + h f @xD + h2 f @xD + h3 fH3L @xD + h4 fH4L @xD +

h7

fH7L @xD

1 h8 fH8L @xD h9 fH9L @xD 1 h5 fH5L @xD + h6 fH6L @xD + + + 120 5040 40320 362880 720

The next input expands f HxL = cosHxL around x = p. In[3]:= TaylorTerms[10, {Cos, Pi, h}]

h2 2

h4 24

h6 720

h8 40320

Out[3]= −1 + − + −

For direct integral analogues of the Taylor formula, see [1358].

Solutions

399

n) Here is the above formula implemented as the function GeneralizedTaylorExpansion. In[1]:= GeneralizedTaylorExpansion[f_, ϕs_, x_, x0_] :=

Module[{n = Length[ϕs], W, Φ}, (* the Wronskian *) W = Det[Table[D[ϕs, {x, k}], {k, 0, n - 1}]] /. x -> x0; (* the second determinant *) Φ = Det[Join[{Prepend[ϕs, 0]}, Table[D[Prepend[ϕs, f], {x, k}], {k, 0, n - 1}] /. x -> x0]]; (* the approximation *) -Expand[Φ/W]]

For small m (say m d 10), it works fine. In[2]:= ExpBasis[m_] := Table[Exp[x/k], {k, m}]

GeneralizedTaylorExpansion[Cos[x], ExpBasis[8], x, 0] // Timing 173539328 xê8 631657481 xê7 Out[3]= 90.27 Second, − + − 10614240 xê6 + 63 72 xê5 xê4 435546875 14643200 1431027 xê3 47840 xê2 5135 x − + − + = 72 9 8 9 504

Due to the calculation of a determinant with symbolic entries, this form is not suited for larger m. The Taylor-like approximation carried out by GeneralizedTaylorExpansion cancels the leading monomial terms in a classical Taylor n ck jk HxL around x = x0 . We can so reformulate the problem to the determination of the ck . expansion of f HxL - ⁄k=0 Assuming no additional degeneracy and the presence of all monomials, the following function GeneralizedTaylor Expansion1 solves for the ck and returns the resulting sum. In[4]:= GeneralizedTaylorExpansion1[f_, ϕs_, x_, x0_] :=

Module[{n = Length[ϕs], vars = Table[C[k], {k, Length[ϕs]}], sol}, sol = Solve[# == 0& /@ CoefficientList[Series[f - vars.ϕs, {x, x0, n - 1}], x], vars]; vars.ϕs /. sol[[1]]]

If all the ck are numbers, the resulting linear system can be solved much more quickly then the above symbolic determinant. In[5]:= GeneralizedTaylorExpansion1[Cos[x], ExpBasis[8], x, 0] // Timing

173539328 xê8 631657481 xê7 63 72 435546875 xê5 14643200 xê4 1431027 xê3 47840 xê2 5135 x − + − + = 72 9 8 9 504

Out[5]= 90.05 Second, − + − 10614240 xê6 +

Now, we can also deal with m = 25. In[6]:= Approx[x_] = GeneralizedTaylorExpansion1[Cos[x], ExpBasis[25], x, 0];

Some of the resulting numbers have up to 50 digits. In[7]:= {#, N[#]}& @ Max[Abs[{Numerator[#], Denominator[#]}& /@

Cases[Approx[x], _Integer | _Rational, {0, Infinity}]]] Out[7]= 823081981002827323123938185744918882846832275390625, 2.3082× 1049

{Hue[0]}], ListPlot[dataApprox // N, PlotRange -> {-2, 2}, PlotJoined -> True]}], (* logarithm of absolute error *) ListPlot[N[{#1, Log[10, Abs[#2 - Cos[#1]]]}]& @@@ dataApprox, PlotRange -> All, PlotJoined -> True]}]]]

Symbolic Computations

400 1

-15

-10

-5

5 5

10

15

-1

-15

-10

-5

5

10

15

-5 -2 -3 -4

-10 -15

o) PainlevéODEVI is the differential operator for the special Painlevé VI equation under consideration. In[1]:= PainlevéODEVI[w_, z_] := D[w, z, z] -

(1/2 (1/w + 1/(w - 1) + 1/(w - z)) D[w, z]^2 (1/z + 1/(z - 1) + 1/(w - z)) D[w, z] + 1/2 w (w - 1)(w - z)/(z^2 (z - 1)^2)(4 + z (z - 1)/(w - z)^2))

The function yChazy is the proposed solution. In[2]:= yChazy[z_] = With[{ = (c1 [1][#] + c2 [2][#])&},

1/8 (([z] + 2z '[z])^2 - 4z '[z]^2)^2/ ([z] '[z] (2(z - 1) '[z] + [z])([z] + 2z '[z]))];

Substituting now yChazy[z] into PainlevéODEVI, replacing the second and third derivatives of [k] by using its defining differential equation, and simplifying the result shows that yChazy[z] is a solution. In[3]:= Together[PainlevéODEVI[yChazy[z], z] //.

{[k_]'''[z] :> (8 (1 - 2z) [k]''[z] - 9 [k]'[z])/(4 z(z - 1)), [k_]''[z] :> (4 (1 - 2z) [k]'[z] - [k][z])/(4 z(z - 1))}] Out[3]= 0

We end by remarking that the explicit solution of HzL is HzL = c1 KHzL + c2 KH1 - zL where K is the complete elliptic integral of the first kind. In[4]:= Together[z (1 - z) ''[z] + (1 - 2z) '[z] - [z]/4 /.

-> Function[z, c[1] EllipticK[z] + c[2] EllipticK[1 - z]]]

Out[4]= 0

3. Nested Integration, Derivative[-n], PowerFactor, Rational Painlevé II Solutions a) First, we look at the actual result. In[1]:= f[n_][x_] := Integrate[f[n - 1][x - z], {z, 0, x}]

f[0][x_] = Exp[-x];

Out[3]=

f[2][x] 1 H−x Cosh@xD + Sinh@xD + x Sinh@xDL 2

We compare it to the following. In[4]:= fn[2][x] = Nest[Integrate[# /. {x -> x - z}, {z, 0, x}]&, Exp[-x], 2] Out[4]= −1 + −x + x In[5]:= Expand[TrigToExp[fn[2][x] - f[2][x]]]

5 −x 4

x 4

−x x 2

Out[5]= −1 + − + x +

The reason for this in the first moment unexpected result is that Integrate does not localize its integration variable. (It is impossible for Integrate to do this because it has no HoldAll attribute, and so it cannot avoid the evaluation of all its arguments before Integrate can go to work.) So the integration variables are not screened from each other in nested integrations. Here is what happens in detail by calculating f[2][x]. f[2][x] Integrate[f[2 - 1][x - z], {z, 0, x}]

Now the two variables (from a mathematical point of view—dummy variables) z interfere.

Solutions

401

f[1][x - z] Integrate[f[0][(x - z) - z], {z, 0, (x - z)}] Integrate[f[0][(x - z) - z], {z, 0, x}] Integrate[Exp[-((x - z) - z)], {z, 0, x}] Integrate[Exp[-((x - z) - z)], {z, 0, x}] Exp[x - 2 z]/2 - Exp[-x]/2 Integrate[Exp[x - 2z]/2 - Exp[-x]/2, {z, 0, x}] Exp[x]/4 - (1 + 2 x)/4 Exp[-x]

By using On[], we could follow all of the above steps in more detail, but because of the extensive output, we do not show it here. To screen the integration variables in nested integrations, we could, for instance, use the following construction for the function definiteIntegrate . (We implement it here only for 1D integrals—the generalization to multidimensional integrals is obvious.) In[6]:= SetAttributes[definiteIntegrate, HoldAll]

definiteIntegrate[integrand_, {iVar_, lowerLimit_, upperLimit_}] := Function[x, Integrate[#, {x, lowerLimit, upperLimit}]& @@ (* avoid evaluation of integrand; substitute new integration variable *) (Hold[integrand] //. iVar -> x)][ (* create a unique integration variable *) Unique[x]]

(Note that definiteIntegrate has the attribute HoldAll and that an additional Hold on the right-hand side is necessary to avoid any evaluation. A unique integration variable is created via Unique[x].) Using the function definiteIntegrate in the recursive definition of f now gives the “expected” result. In[8]:= f1[n_][x_] := definiteIntegrate[f1[n - 1][x - z], {z, 0, x}]

f1[0][x_] = Exp[-x];

Now, we get from f[2][x] the expected result. In[10]:= f1[2][x] Out[10]= −1 + −x + x In[11]:= f1[2][x] - fn[2][x] // TrigToExp // Expand Out[11]= 0

For the simple example under consideration, we could use a simpler way of creating different dummy integration variables. Here is an example. In[12]:= f2[n_][x_] := Integrate[f2[n - 1][x - z[x]], {z[x], 0, x}]

f2[0][x_] = Exp[-x]; f2[2][x] Out[14]= −1 + −x + x

b) Obviously, Integrate[f, x] + Integrate[g, x] and Integrate[f + g, x] can only differ by an x-independent constant. It turns out that finding a pair of functions f and g is not difficult; low-degree polynomials and powers already do the job. In[1]:= Integrate[(1 + x)^2, x] + Integrate[x^α, x]

x3 3

x1+α 1+α

Out[1]= x + x2 + + In[2]:= Integrate[(1 + x)^2 + x^α, x] Out[2]=

1 x1+α H1 + xL3 + 3 1+α

In[3]:= % - %% // Expand Out[3]=

1 3

Symbolic Computations

402

Now, let us deal with the definite integrals. The function f should have a discontinuity at xm . We choose the branch cut of the square root function as the discontinuity. We take xl and xu on opposite sides of the branch cut and xm directly on the branch cut. In[4]:= Integrate[Sqrt[z], {z, -1 - I, -1 + I}] Out[4]=

4 2 2 − H−1 − L3ê2 + H−1 + L3ê2 3 3 3

In[5]:= Integrate[Sqrt[z], {z, -1 - I, 0}] + Integrate[Sqrt[z], {z, 0, -1 + I}]

2 3

2 3

Out[5]= − H−1 − L3ê2 + H−1 + L3ê2 In[6]:= % - %% // Expand

4 3

Out[6]= −

c) First, the input adds a new rule to Derivative (which does not have the attribute Protected) for a negative integer argument for an arbitrary function. Now, we look at the actual code. With evaluates its first argument, which means the local variable pI, and sets the value to Integrate[f[C], C] . In the case that the result does not contain Integrate, pI becomes a pure function by substituting Slot[1] for C and adding the head Function. The whole expression so constructed again has a Derivative wrapped around it, but with the order incremented by one. In summary, this means that taking a Derivative of negative order n is interpreted as an iterated n-fold integration. Let us look at some examples. In[1]:= Derivative[i_Integer?Negative][f_] :=

(* because the test is the whole calculation, use With and then use pI as test and as the result *) With[{pI = Integrate[f[C], C]}, (* test if Integrate appears in result *) Derivative[i + 1][Function[pI] /. C -> #] /; FreeQ[pI, Integrate, {0, Infinity}]] In[2]:= Derivative[-3][Exp] Out[2]= #1 & In[3]:= Derivative[-3][#^3 + Sin[#]&] Out[3]=

#16 + Cos@#1D & 120

Here are the two derivatives Derivative[+4][Exp[1 #]&] and Derivative[-4][Exp[1 #]&]. In[4]:= {Derivative[+4][Exp[1 #]&], Derivative[-4][Exp[1 #]&]} Out[4]= 8#1 &, #1 &

Module[{product = List @@ t, rads, rest}, (* select the radicals *) rads = Cases[product, Power[_, _Rational], {1}]; rest = Complement[product, rads]; (* the new exponent *) exp = LCM @@ Denominator[Last /@ rads]; (Times @@ rads^exp)^(1/exp) (Times @@ rest)];

Here is an example showing rulePower at work. In[2]:= a^(2/3) b^(3/4) c^(4/5) (d + e)^(5/6) f^(1/n) g /. rulePower 1ê60

Out[2]= Ha40 b45 c48 Hd + eL50 L

1

f n g

The rule ruleLogSum rewrites sums of logarithms as one logarithm. In[3]:= ruleLogSum = p:_Plus :>

Module[{sum = List @@ p, logs, rest}, (* select the logarithms *) logs = Cases[sum, _Log, {1}]; rest = Complement[sum, logs]; Plus[Sequence @@ rest, Log[Times @@ (First /@ logs)]]];

Here is an example. The term -Log[c] has the head Times and is not matched by the rule ruleLogSum. In[4]:= Log[a] + Log[b] - Log[c] /. ruleLogSum Out[4]= Log@a bD − Log@cD

The rule ruleLogProduct rewrites products involving logarithms. In[5]:= ruleLogProduct = c_ Log[a_] :> Log[a^c];

Now terms of the form -Log[c] are rewritten too. In[6]:= 1 - Log[a] + Log[b] Log[c] /. ruleLogProduct

1 a

Out[6]= 1 + LogA E + Log@bLog@cD D

The rule ruleLogProduct rewrites products involving logarithms. In[7]:= ruleLogPower = Log[a_]^e_ :> Log[a^(Log[a]^(e - 1))];

Here is an example. In[8]:= Log[a]^3 /. ruleLogPower 2

Out[8]= LogAaLog@aD E

Now, we put all rules together in the function PowerFactor. To make sure that every rule gets applied whenever possible, we use ReplaceRepeated and MapAll. In[9]:= PowerFactor[expr_] := MapAll[(# //. rulePower //. ruleLogSum //.

ruleLogProduct //. ruleLogPower)&, expr]

Here is PowerFactor applied to a more complicated input. In[10]:= 1 + a^(1/3) b^(2/3) c /d^(5/3) (z^3)^(1/2) + Log[s^2] +

((Log[x] + Log[z^2])^2 + 1)^(1/2) + 3(Log[a] - Log[b] Log[c]) + Log[x]^3 Log[y]^3 è!!!!!!! a1ê3 b2ê3 c z3 Out[10]= 1 + + 3 HLog@aD − Log@bD Log@cDL + d5ê3 "################################################################## 2 Log@s D + Log@xD3 Log@yD3 + 1 + HLog@xD + Log@z2 DL2 In[11]:= PowerFactor[%]

Symbolic Computations

404 1ê6 LogA 13 E 2 Log@xD E LogAy i a2 b4 z9 y c + LogAa3 b s IxLogAx M { k d

z Out[11]= 1 + c j j 10 z

LogAyLog@yD E E

E + "############################################################### 1 + LogAHx z2 LLog@x z D E 2

PowerExpand rewrites the expression in the opposite direction. In[12]:= PowerExpand[%]

a1ê3 b2ê3 c z3ê2 d è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 2 Log@sD + Log@xD3 Log@yD3 + 1 + HLog@xD + 2 Log@zDL2

Out[12]= 1 + + 3 Log@aD − 3 Log@bD Log@cD + 5ê3

PowerFactor recovers the above expression. In[13]:= PowerFactor[%] 1ê6 LogA 13 E 2 Log@xD E LogAy i a2 b4 z9 y c + LogAa3 b s IxLogAx M { k d

z Out[13]= 1 + c j j 10 z

LogAyLog@yD E E

E + "############################################################### 1 + LogAHx z2 LLog@x z D E 2

We could now continue and extend the rulePower to complex powers. The above rule rulePower was designed to work with rational powers. For complex powers it will not work. In[14]:= PowerFactor[x^I y^I (1/x)^I (1/y)^I

(1 - I z)^((1 - I)/2) (1 + I z)^((I - 1)/2)] 1 x

1 y

1 − 2

Out[14]= J N x J N y H1 − zL 2

1

H1 + zL− 2 + 2

Now, we have to deal with exponents e and -e appropriately. In[15]:= rulePower = t:_Times?(MemberQ[#, Power[_, _Complex]]&) :>

Module[{product = List @@ t, crads, rest, exp, cradsN}, (* select the radicals *) crads = Cases[product, Power[_, _Complex], {1}]; rest = Complement[product, crads]; (* the new exponent *) exp = LCM @@ Denominator[Last /@ crads]; cradsN = crads^exp; If[exp =!= 1, (Times @@ cradsN)^(1/exp), (* complementary powers *) Times @@ (Function[l, Times @@ (#[[2, 1]]^(l[[1, 2, 2]]/ #[[2, 2]])& /@ l)^l[[1, 2, 2]]] /@ Split[{Sort[{#[[2]], -#[[2]]}], #}& /@ cradsN, #1[[1]] === #2[[1]]&])] (Times @@ rest)]; In[16]:= PowerFactor[x^I y^I (1/x)^I (1/y)^I

(1 - I z)^((1 - I)/2) (1 + I z)^((I - 1)/2)] 1 − z 1− Out[16]= $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% J N 1+ z

f) We use the series of the generating function to define the qk HzL for the first k. In[1]:= q[_, z_] = 0;

(* make definitions for the q *) MapIndexed[(q[#2[[1]], z_] = #1)&, CoefficientList[Series[Exp[z ξ + ξ^3/3], {ξ, 0, 20}], ξ] // Expand];

Given the qk HzL, the definition of the sk HzL is straightforward. In[4]:= σ[0, z_] := 1;

σ[k_, z_] := σ[k, z] = Det[Table[q[k + i - 2 j, z], {i, 0, k - 1}, {j, 0, k - 1}]]

Now, we can calculate the first few wk HzL. In[6]:= kMax = 10;

Do[w[k, z_] = D[Log[σ[k + 1, z]/σ[k, z]], z] // Together // Factor, {k, kMax}]

Solutions

405

Here are the first four wk HzL. In[8]:= Table[w[k, z], {k, 4}]

1 + 2 z3 H−1 + zL z H1 + z + z L

1 z

3 z2 H10 − 2 z3 + z6 L H−1 + zL H1 + z + z L H−5 − 5 z + z L

Out[8]= 9 , 2 , 2 3 6 , 875 − 1750 z3 + 1400 z6 + 250 z9 − 50 z12 + 4 z15 = z H−5 − 5 z3 + z6 L H−175 − 15 z6 + z9 L

Here is a quick check for the correctness of the calculated functions. In[9]:= Table[Together[D[w[k, z], {z, 2}] - (2w[k, z]^3 - 4 z w[k, z] + 4 k)],

{k, kMax}] Out[9]= 80, 0, 0, 0, 0, 0, 0, 0, 0, 0

1, aq -> 1, bq -> 1, i_Integer ws_ -> ws}] Out[6]= 8y2@zD4 y1 @zD2 , y1@zD y2@zD3 y1 @zD y2 @zD, y2@zD3 y1 @zD2 y2 @zD, y1@zD2 y2@zD2 y2 @zD2 , y1@zD y2@zD2 y1 @zD y2 @zD2 , y2@zD2 y1 @zD2 y2 @zD2 , y1@zD2 y2@zD y2 @zD3 , y1@zD y2@zD y1 @zD y2 @zD3 , y1@zD2 y2 @zD4

RationalFunctions] Out[9]= 84 f@zD g@zD w@zD + 2 w@zD g @zD + 2 f@zD2 w @zD + 4 g@zD w @zD + f @zD w @zD + 3 f@zD w @zD + wH3L @zD

{{-2, 2}, {-2, 2}}, Evaluate[opts["Saddle point"]]] Saddle point

For y£ HxL = -x ê yHxL, we get two solutions from DSolve. In[15]:= DSolve[{y'[x] == -x/y[x], y[x0] == y0}, y[x], x] Out[15]= 99y@xD → −

è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! −x2 + x02 + y02 =, 9y@xD → −x2 + x02 + y02 ==

In[16]:= sol5[x_, {x0_, y0_}] = {-Sqrt[-x^2 + x0^2 + y0^2], Sqrt[-x^2 + x0^2 + y0^2]};

Now, we plot both solutions.

Symbolic Computations

410 In[17]:= Show[Table[Plot[Evaluate[sol5[x, {x0, 0}]], {x, -x0, x0},

DisplayFunction -> Identity], {x0, 0.1, 1, 0.1}], DisplayFunction -> $DisplayFunction, Evaluate[opts["Vortex point"]]] Vortex point

Remaining is the differential equation that gives eddy points. Again, we can find a solution, although not explicitly for yHxL. In[18]:= DSolve[{y'[x] == (y[x] - x)/(y[x] + x)}, y[x], x]

Solve::tdep : The equations appear to involve the variables to be solved for in an essentially non−algebraic way. More… y@xD x

y@xD2 x

1 2

Out[18]= SolveAArcTanA E + LogA1 + 2 E C@1D − Log@xD, y@xDE In[19]:= sol6[{x_, y_}] = -2 ArcTan[y/x] + Log[1/(x^2 (1 + y^2/x^2))];

We now plot this result. Unfortunately, for this transcendental equation, ImplicitPlot is of little use because it can only plot polynomial equations. Also, ContourPlot does not give a very good result because of the branch cut of Log. In[20]:= ContourPlot[Evaluate[Re[sol6[{x, y}]]],

{x, -2, 2}, {y, -2, 2}, PlotPoints -> 100, Contours -> 20, ContourShading -> False] 2 1 0 -1 -2

-2

-1

0

1

2

Therefore, we now create a special implementation. We could try a numerical implementation using FindRoot, for example, of the following form. However, it is difficult to represent larger pieces like this. The form of the solution suggests the use of polar coordinates. In[21]:= sol6[{r Cos[ϕ], r Sin[ϕ]}] // Simplify

1 r

Out[21]= −2 ArcTan@Tan@ϕDD + LogA 2 E In[22]:= Solve[% == c, r] 1 H−c−2 ArcTan@Tan@ϕDDL

Out[22]= 99r → − 2

1 H−c−2 ArcTan@Tan@ϕDDL

=, 9r → 2

==

We arrive at the following formula. In[23]:= % // PowerExpand 1 H−c−2 ϕL

Out[23]= 99r → − 2

1 H−c−2 ϕL

=, 9r → 2

==

The final graphics of the integral curves is the following. In[24]:= Show[Table[

ParametricPlot[Evaluate[Exp[c - ϕ]{Cos[ϕ], Sin[ϕ]}], {ϕ, c + 0.1, 3Pi + c}, DisplayFunction -> Identity], {c, 0, 2Pi 24/25, 2Pi/25}],

Solutions

411 DisplayFunction -> $DisplayFunction, PlotRange -> All, Evaluate[opts["Eddy point"]]] Eddy point

We demonstrate the appearance of various singular points in the following example. For a “random” bivariate function yHx, yL, we will integrate the equations x£ HsL = ∑yHxHsL, yHsLL ê ∑ xHsL, y£ HtL = -∑ yHxHsL, yHsLL ê ∑ xHsL. We see saddle points, vortex points, and knot points [85], [448]. In[25]:= Module[{L = 4, pp = 41, T = 5, o = 90, ms = 100, = 3/2,

ps = 21, λ = 2, ipo, ipoX, ipoY, eqs, pathList, nsol}, SeedRandom[123]; (* a streamfunction *) ipo = (* smooth interpolation *) Interpolation[ (* random data *) Flatten[Table[{x, y, Random[Real, {-1, 1}]}, {x, -L, L, 2L/pp}, {y, -L, L, 2L/pp}], 1], InterpolationOrder -> 8]; (* derivatives *) {ipoX, ipoY} = D[ipo[x[t], y[t]], #]& /@ {x[t], y[t]}; (* differential equations for flow lines *) eqs = Thread[{x'[t], y'[t]} == #/Sqrt[#.#]&[{ipoX, -ipoY}]]; (* calculate flow lines *) pathList = Table[ ((* solve for flow lines *) Internal`DeactivateMessages[ nsol = NDSolve[Join[eqs, {x[0] == x0, y[0] == y0}], {x, y}, {t, 0, #}, MaxSteps -> ms, PrecisionGoal -> 3, AccuracyGoal -> 3]]; (* visualize flow lines *) ParametricPlot[Evaluate[{x[t], y[t]} /. nsol], {t, 0, DeleteCases[nsol[[1, 1, 2, 1, 1]], 0.][[1]]}, (* color flow lines differently *) PlotStyle -> {{Thickness[0.002], RGBColor[ (x0 + )/(2 ), 0.2, (y0 + )/(2 )]}}, DisplayFunction -> Identity, PlotPoints -> 200])& /@ {T, -T}, (* grid of initial conditions *) {x0, - , , 2 /ps}, {y0, - , , 2 /ps}]; (* display flow lines and stream function *) Show[(* contour plot of the stream function *) {ContourPlot[Evaluate[ipo[x, y]], {x, -L/2, L/2}, {y, -L/2, L/2}, PlotPoints -> 400, Contours -> 60, ContourLines -> False, PlotRange -> All, DisplayFunction -> Identity], Show[pathList]}, DisplayFunction -> $DisplayFunction, Frame -> True, Axes -> False, FrameTicks -> False, PlotRange -> {{-λ, λ}, {-λ, λ}}, AspectRatio -> Automatic]]

Symbolic Computations

412

For higher-order singularities, see [1726]. b) We start with implementing the exact solution for separable kernels. The function iSolve (named in analogy to DSolve) attempts this. Because a kernel might be separable, but structurally not in separated form, we allow for an optional function that attempts to separate the kernel. While we could be more elaborate with respect to matching the pattern of a Fredholm integral equation of the second kind, we require here the canonical form. The step-by-step implementation of iSolve is self-explanatory. In[1]:= iSolve[eq:(y_[x_] + λ_ Integrate[_ y[ξ_], {ξ_, a_, b_}] == f_),

y_, x_, _:Identity] := Module[{ = Integrate, intExpand, eq1, integrals, Rules, functions, eq2, eqs, s, separableQ}, (* thread integrals over sums and pull integration variable-independent out *) intExpand = Function[int, (int //. [p_Plus, i_] :> ( [#, i]& /@ p) //. HoldPattern[Integrate[c_?(FreeQ[#, ξ, Infinity]&) rest_, {ξ, a, b}]] :> c Integrate[rest, {ξ, a, b}])]; (* separate kernel *) eq1 = intExpand[ExpandAll[ //@ (Subtract @@ eq)]]; (* integrals over y[ζ] and kernel functions *) integrals = Union[Cases[eq1, _ , Infinity]]; (* replace integrals by variables [i] *) Rules = Rule @@@ Transpose[{integrals /. ξ -> ζ_, s = Array[, Length[integrals]]}]; (* was the kernel separable? *) separableQ = FreeQ[Rules, x, Infinity]; (* kernel functions h_j[.] *) functions = ((First /@ integrals)/y[ξ]) /. ξ -> x; (* replace integrals by variables *) eq2 = eq1 /. Rules; (* make linear system in the [i] *) eqs = intExpand[ExpandAll[ [eq2 #, {x, a, b}]& /@ functions]] //. Rules; (* solve linear system, backsubstitute into eq2 and solve for y[x] *) Solve[(eq2 /. Solve[(# == 0)& /@ eqs, s][[1]]) == 0, y[x]] /; (* was iSolve applicable? *) separableQ] 1

The next inputs solve the example equation yHxL - l Ÿ0 sinHx + xL yHxL dx = cosHxL. In[2]:= [x_, ξ_] := Sin[x + ξ]

f[x_] := Cos[x] IEq = y[x] - λ Integrate[[x, ξ] y[ξ], {ξ, 0, 1}] == f[x]; IEqSol = iSolve[y[x] - λ Integrate[[x, ξ] y[ξ], {ξ, 0, 1}] == f[x], y, x, TrigExpand] // Simplify 2 Hλ Cos@2 − xD − H−4 + λL Cos@xD + 2 λ Sin@xDL Out[5]= 99y@xD → − == −8 + λ2 H1 + Cos@2DL + 8 λ Sin@1D2

Here is a quick check for the correctness of the result. In[6]:= yExact = IEqSol[[1, 1, 2]];

IEq /. y -> Function[x, Evaluate[yExact]] // Simplify

Solutions

413 Out[7]= True

In the calculation of the truncated Fredholm and Neumann resolvents, we have to carry out many definite integrals. Because we do not worry about convergence and hope to carry out all integrals successfully term by term, we do not use the built-in function Integrate directly, but rather implement a function integrate, that expands products and powers. In[8]:= integrate[l_List, i_] := Integrate[#, i]& /@ l

integrate[p_Plus, i_] := Integrate[#, i]& /@ p integrate[p:Times[___, _Plus] | p:Power[_Plus, _Integer], i_] := integrate[Expand[p], i] integrate[e_, i_] := Integrate[e, i]

The function FredholmResolventList calculates a list of the successive resolvent approximations arising from truncating the Fredholm minor and the Fredholm determinant at lo+1 . In[12]:= FredholmResolventList[_, {ξ_, a_, b_}, {x_, ξ_}, o_, _:Identity] :=

Module[{c, d, , , , }, (* make recursive definitions for Fredholm minor and determinant *) (* avoid variable interference by applying Set and SetDelayed *) Set @@ {[_, _], /. {x -> , ξ -> }}; Set @@ {d[0][_, _], [, ]}; SetDelayed @@ {d[k_][_, _], Unevaluated @ With[{p = Pattern[#, _]& @@ {}, p = Pattern[#, _]& @@ {}}, d[k][p, p] = @ (c[k] [, ] k integrate[[, ] d[k - 1][, ], {, a, b}])]}; c[0] := 1; c[k_] := c[k] = @ integrate[d[k - 1][, ], {, a, b}]; (* calculate c[k] and d[k] recursively and form successive resolvent approximations *) Divide @@ Transpose[Rest[FoldList[Plus, 0, Table[(-1)^k/k! λ^k {d[k][x, ξ], c[k]}, {k, 0, o}]]]]]

For the example integral equation, all higher ck and dk Hx, xL vanish identically and we obtain the exact solution. In[13]:= FSerKernels = FredholmResolventList[[x, ξ], {ξ, 0, 1}, {x, ξ}, 3,

Simplify] − 1 λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD 1 − λ Sin@1D

2 , Out[13]= 9Sin@x + ξD, 2 1 λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD − 2 , 1 − 14 λ2 Cos@1D2 − λ Sin@1D2 1 − λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD 2 = 1 − 14 λ2 Cos@1D2 − λ Sin@1D2

In[14]:= yFSerSols[x_] = f[x] + λ integrate[FSerKernels f[ξ], {ξ, 0, 1}]; In[15]:= yFSerSols[x][[-1]] == yExact // Simplify Out[15]= True

We end with the implementation of the iterated kernels. The function NeumannResolventList calculates the resolvent arising from o + 1 iterated kernels. In[16]:= NeumannResolventList[_, {ξ_, a_, b_}, {x_, ξ_}, o_, _:Identity] :=

Module[{, , , , kernels}, Set @@ {[_, _], /. {x -> , ξ -> }}; kernels = NestList[[integrate[[, ] (# /. -> ), {, 0, 1}]]&, [, ], o] /. { -> x, -> ξ}; Rest[FoldList[Plus, 0, MapIndexed[λ^(#2[[1]] - 1) #1&, kernels]]]]

The iterated kernels become increasingly complicated functions. In[17]:= NSerKernels = NeumannResolventList[[x, ξ], {ξ, 0, 1},

{x, ξ}, 5, Simplify];

{LeafCount /@ NSerKernels, Short[NSerKernels, 12]} Out[19]= 984, 26, 78, 168, 292, 454 3/2}], {x, 0, 20}, (* setting options to get a pretty picture *) PlotRange -> All, PlotPoints -> 200, PlotStyle -> {Thickness[0.007], Thickness[0.002], Thickness[0.002], {Thickness[0.002], Dashing[{0.02, 0.02}]}, {Thickness[0.002], Dashing[{0.02, 0.02}]}}, Frame -> True, FrameLabel -> ({#["r"], #["V"], None, "∂"}&[ StyleForm["r", FontWeight -> "Bold", FontSize -> 6]&])] 2

0

¶

r

1

-1 -2 0

5

10

r

15

20

For the practical importance of such conditions, see [301], [1474], [1887], [1888], [113], and [1663]. For a nontrivial background potential, see [1367]; for bound states in gaps, see [1508]. b) The function GraeffeSolve implements the calculation of the polynomials pk HzL and the root zn,k . After the †zk § are calculated as precisely as possible given the initial precision prec, ≤ zk is formed and the appropriate sign is selected. In[1]:= Off[RuleDelayed::rhs];

GraeffeSolve[poly_, z_Symbol, prec_] := Module[{k = 1, oldRoots = {0, 0}, newRoots}, Clear[p]; p[0, ζ_] = N[poly /. z -> ζ, prec]; (* polynomial recursion *) p[k_, ζ_] := p[k, ζ_] = Expand[p[k - 1, ] p[k - 1, -]] /. (* avoid 0. z^o *) {_?(# == 0&) -> 0, ^n_ :> ζ^(n/2)}; While[FreeQ[p[k, ζ], Overflow[] | Underflow[], Infinity] && (coeffs = CoefficientList[p[k, ζ], ζ]; (* next polynomial; normalized *) p[k, ζ_] = Expand[p[k, ζ]/Max[Abs[coeffs]]]; (* new root approximations *) newRoots = Abs[Divide @@@ Partition[coeffs, 2, 1]]^(2^-k); (* are roots still changing? *) newRoots =!= oldRoots), oldRoots = newRoots; k++]; {z -> #}& /@ (* add sign *) Select[Join[newRoots, -newRoots], (poly /. z -> #) == 0&]]

Here the function GraeffeSolve is used to solve p = z5 + 5 z4 - 10 z3 - 10 z2 + 5 z + 2 = 0. We start with 100 digits. In[3]:= poly[z_] := 2 + 5 z - 10 z^2 - 10 z^3 + 5 z^4 + z^5;

(* display shortened result *) (grs = GraeffeSolve[poly[z], z, 110]) // N[#, 10]& Out[5]= 88z → 0.5973232647 {{-1, -1}, {+1, -1}, {-1, -1}},

Solutions

419 Lighting -> False, PlotRange -> All, BoxRatios -> {1, 1, 0.7}, AxesLabel -> {x, y, None}, Boxed -> True, TextStyle -> {FontFamily -> "Times", FontSize -> 6}, PlotLabel -> "SF[" ToString[n] ", " ToString[k] "]"]] /; (k Identity], {i, numKnots[1]}]]] SF@1, 1D

SF@1, 2D

SF@1, 3D

In[13]:= Show[GraphicsArray[#]]& /@

Table[ShapeFunctionPlot[2, 3i + j, 12, DisplayFunction -> Identity], {i, 0, 1}, {j, 3}] SF@2, 1D

SF@2, 2D

SF@2, 3D

SF@2, 4D

SF@2, 5D

SF@2, 6D

In[14]:= (* suppress message for only one picture in the last row *)

Show[GraphicsArray[#]]& /@ Table[ShapeFunctionPlot[3, 3i + j, 12, DisplayFunction -> Identity], {i, 0, 2}, {j, 3}] SF@3, 1D

SF@3, 2D

SF@3, 3D

Symbolic Computations

420 SF@3, 4D

SF@3, 5D

SF@3, 6D

SF@3, 7D

SF@3, 8D

SF@3, 9D

In[16]:= ShapeFunctionPlot[3, 10, 12] SF@3, 10D

We turn now to the computation of the integrals of the element vector and to the entries in the stiffness and mass matrices. Because these involve integrals of the shape functions yi Hx, yL over the triangle with vertices P1 = 8x1 , y1 x1} // Simplify Out[21]= x3 η + x2 ξ − x1 H−1 + η + ξL In[22]:= y[ξ_, η_] = (ay ξ + by η + cy) /.

{ay -> - y1 + y2, by -> - y1 + y3, cy -> y1} // Simplify Out[22]= y3 η + y2 ξ − y1 H−1 + η + ξL

We then get the following Jacobian determinant. In[23]:= Simplify[Det[Outer[D, {x[ξ, η], y[ξ, η]}, {ξ, η}]]] Out[23]= x3 Hy1 − y2L + x1 Hy2 − y3L + x2 H−y1 + y3L 1 1-x p

Next, we implement the relation Ÿ0 Ÿ0

x hq dh dx = p! q ! ê H p + q + 2L (for our applications p and q are positive integers).

The function TriangularIntegration implements the integration of polynomials over the unit triangle. In[24]:= (* Additivity of the integration *)

TriangularIntegration[p_Plus, {x_, y_}] := TriangularIntegration[#, {x, y}]& /@ p; (* Factors that do not depend on the integration variables are moved in front of the integral *) TriangularIntegration[c_ z_, {x_, y_}] := c TriangularIntegration[z, {x, y}] /; FreeQ[c, x] && FreeQ[c, y]; (* let q be 0 *) TriangularIntegration[x_^p_., {x_, y_}] := TriangularIntegration[x^p, {x, y}] = p!/(p + 2)!; (* let p be 0 *) TriangularIntegration[y_^q_., {x_, y_}] := TriangularIntegration[x^q, {x, y}] = q!/(q + 2)!; (* the actual integration formula *) TriangularIntegration[x_^p_. y_^q_., {x_, y_}] := TriangularIntegration[x^p y^q, {x, y}] = (p! q!) /(p + q + 2)!; (* integration of a constant *) TriangularIntegration[c_, {x_, y_}] := (c/2) /; FreeQ[c, x] && FreeQ[c, y];

(For the efficient integration of analytic functions over triangles, see [446].) By comparing our triangular integration with the built-in command Integrate, we see that our work was justified. In[36]:= Timing[TriangularIntegration[a + b x + c y^2 + d x^3 y^6 +

a 2

b 6

c 12

d 9240

e x^12 y^ 16, {x, y}]] e 26466926850

Out[36]= 90. Second, + + + + =

In[37]:= Timing[Integrate[a + b x + c y^2 + d x^3 y^6 + e x^12 y^ 16,

a 2

{x, 0, 1}, {y, 0, 1 - x}]] // Simplify b c d e 6 12 9240 26466926850

Out[37]= 91.35 Second, + + + + =

Now to the heart of this problem: the computation of the element vector and the mass and stiffness matrices. For the element vector, we have HeL fi = ‡ yi Hx, yL dxdy = J ‡ jHeL i Hx, hL dx dh = J fi . RT

UT

Here, RT denotes the real triangle, whereas UT denotes the unit triangle. J is the Jacobian determinant †∑Hx, yL ê ∑ Hx, hL§. We get this relationship by means of the relations HeL xHx, hL = ‚ x j jHeL j Hx, hL, yHx, hL = ‚ y j j j Hx, hL j

j

Symbolic Computations

422

(where x j , y j are the coordinates of the point P j in the actual triangle RT), which hold for the isoparametric mappings yi HxHx, hL, yHx, hLL = jHeL i Hx, hL. Thus, we compute only the element vector in the unit triangle fiHeL (i.e., we do not explicitly write the Jacobian determinant). In[38]:= ElementVectorElement[n_Integer?Positive, i_Integer?Positive] :=

(ElementVectorElement[n, i] = TriangularIntegration[Expand[ShapeFunction[n, i, ξ, η]], {ξ, η}]) /; (i All, Frame -> True, Axes -> False, PlotStyle -> {PointSize[0.008]}, DisplayFunction -> Identity], (* values over the base points; coloring according to size *) Graphics3D[{Hue[0.76 #[[2]]/Max[evec]], Line[{Append[#[[1]], 0], Append[#[[1]], Abs[#[[2]]]]}]}& /@ Transpose[{Table[PD[n, k], {k, numKnots[n]}], evec}], BoxRatios -> {1, 1, 0.5}, PlotRange -> All, Axes -> True]}]]]

0.1 0.2 0.15 0.1 0.05 0 0

0 -0.1 -0.2 0

20

40

60

1 0.75 0.5 0.25

0.5

80

0.25 0.75

1

0

The computation of the mass matrix is essentially analogous to that for the eigenvector. Using similar notation as in the element vector case, we have HeL HeL mij = ‡ yi Hx, yL y j Hx, yL dxdy = J ‡ jHeL i Hx, hL j j Hx, hL dx dh = J mij . RT

UT

Again, we find only the coordinate-free part. In[44]:= MassMatrixElement[n_Integer?Positive,

i_Integer?Positive, j_Integer?Positive] := (MassMatrixElement[n, i, j] = (* because of symmetry *) MassMatrixElement[n, j, i] = TriangularIntegration[Expand[ShapeFunction[n, i, ξ, η] * ShapeFunction[n, j, ξ, η]], {ξ, η}]) /; ((i J, -(-(x2 y1) + x3 y1 + x1 y2 - x3 y2 - x1 y3 + x2 y3) -> -J}) // Simplify x3 H−y + y1L + x1 Hy − y3L + x H−y1 + y3L x2 Hy − y1L + x Hy1 − y2L + x1 H−y + y2L Out[48]= 99ξ → , η → == x3 Hy1 − y2L + x1 Hy2 − y3L + x2 H−y1 + y3L J ∑ ∑ ∑ ∑ We now can calculate the following four quantities: ÅÅÅÅ ÅÅ xHx, yL, ÅÅÅÅ ÅÅ hHx, yL, ÅÅÅÅ ÅÅ xHx, yL, ÅÅÅÅ ÅÅ hHx, yL. ∑x ∑y ∑y ∑y

In[49]:= Ξ[x_, y_] = (x1 y - x3 y - x

Η[x_, y_] = (x2 y - x1 y + x

y1 + x3 y1 + x y3 - x1 y3)/J; y1 - x2 y1 - x y2 + x1 y2)/J;

In[51]:= {dξdx = D[Ξ[x, y], x], dηdx = D[Η[x, y], x],

dξdy = D[Ξ[x, y], y], dηdy = D[Η[x, y], y]} −y1 + y3 y1 − y2 x1 − x3 −x1 + x2 J J J J

Out[51]= 9 , , , =

∑ ∑ ∑ ∑ We now rewrite ÅÅÅÅ ÅÅ yi Hx, yL ÅÅÅÅ ÅÅ y j Hx, yL + ÅÅÅÅ ÅÅ y Hx, yL ÅÅÅÅ ÅÅ y j Hx, yL in the form ∑x ∑x ∑y i ∑y

Symbolic Computations

424

∑ ij ∑ HeL y Hx, hLz ÿ j ÅÅÅÅÅÅÅÅÅ ji Hx, hL ÿ ÅÅÅÅÅÅ ÅÅÅ jHeL ∑x j k ∑x {

2 2 jijijj ÅÅÅÅ∑ÅÅ ÅÅÅ xHx, yLyz + ijj ÅÅÅÅ∑ÅÅÅÅÅÅ xHx, yLyz zyz + ∑ x ∑ y { k { k k {

2 2 ∑ yz ij ∑ yz zy yz jiij ∑ ij ∑ HeL j ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÿ ÅÅÅÅÅÅÅÅÅÅ jHeL j Hx, hL ÿ jj ÅÅÅÅÅÅÅÅÅ hHx, yL + j ÅÅÅÅÅÅÅÅÅÅ hHx, yL z + ∑ x ∑ y ∑h ∑ h { k {{ { kk k ij ÅÅÅÅ∑ÅÅÅÅÅ jHeL Hx, hL ÿ ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hL + ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hL ÿ ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hLyz µ j i ∑h j ∑h i ∑x j k ∑x { ∑ ∑ ∑ y ij ∑ z j ÅÅÅÅÅÅÅÅÅ xHx, yL ÿ ÅÅÅÅÅÅÅÅÅ hHx, yL + ÅÅÅÅÅÅÅÅÅÅ xHx, yL ÿ ÅÅÅÅÅÅÅÅÅÅ hHx, yL ∑x ∑y ∑y { k ∑x

and introduce 2 2 Hx3 - x1 L2 + Hy3 - y1 L2 ii ∑ y y y i ∑ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ A = jjjj ÅÅÅÅÅÅÅÅÅ xHx, yLzz + jj ÅÅÅÅÅÅÅÅÅÅ xHx, yLzz zz J = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑ y ∑ x { { { k kk 2 2 Hx2 - x1 L2 + Hy2 - y1 L2 ii ∑ y i ∑ y y C = jjjj ÅÅÅÅÅÅÅÅÅ hHx, yLzz + jj ÅÅÅÅÅÅÅÅÅÅ hHx, yLzz zz J = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑ x ∑ y { k { k k { Hy3 - y1 L Hy2 - y1 L + Hx3 - x1 L Hx2 - x1 L ∑ ∑ ∑ ∑ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . B = ijj ÅÅÅÅÅÅ ÅÅÅ xHx, yL ÅÅÅÅÅÅÅÅÅÅ hHx, yL + ÅÅÅÅÅÅÅÅÅÅ xHx, yL ÅÅÅÅÅÅÅÅÅÅ hHx, yLyz J = - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑x ∑y ∑y { k ∑x

This leads to the following result: ∑ ∑ ∑ ∑ sij = ‡ ijj ÅÅÅÅÅÅ ÅÅÅ yi Hx, yL ÅÅÅÅÅÅÅÅÅÅ y j Hx, yL + ÅÅÅÅÅÅÅÅÅÅ yi Hx, yL ÅÅÅÅÅÅÅÅÅÅ y j Hx, yLyz dxdh ∑x ∑y ∑y { RT k ∑ x ∑ HeL ∑ HeL = A ‡ ÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅÅ j j Hx, hL dxdh + ∑x UT ∑x ∑ ∑ HeL C ‡ ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅÅ jHeL j Hx, hL dxdh + ∑h ∑h UT ∑ HeL ∑ HeL ∑ HeL i ∑ yz B ‡ jj ÅÅÅÅÅÅÅÅÅÅ jHeL i Hx, hL ÅÅÅÅÅÅÅÅÅÅ j j Hx, hL + ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅ j j Hx, hL dxdh. ∑h ∑h ∑x { UT k ∑ x In[52]:= StiffnessMatrixElement[n_Integer?Positive,

i_Integer?Positive, j_Integer?Positive] := (StiffnessMatrixElement[n, i, j] = (* because of symmetry *) StiffnessMatrixElement[n, j, i] = With[{SF = ShapeFunction}, (* sum of the three terms *) A TriangularIntegration[ Expand[D[SF[n, i, ξ, η], ξ] D[SF[n, j, ξ, η], ξ]], {ξ, η}] + C TriangularIntegration[ Expand[D[SF[n, i, ξ, η], η] D[SF[n, j, ξ, η], η]], {ξ, η}] + B TriangularIntegration[ Expand[D[SF[n, i, ξ, η], ξ] D[SF[n, j, ξ, η], η] + D[SF[n, i, ξ, η], η] D[SF[n, j, ξ, η], ξ]], {ξ, η}]]) /; ((i {1, 1}]& A 2A 2B 2B 2C A + B + C2 + B6 6B + C6 − − 0 − − 2 6 3 3 3 3 A B + 6 6

Out[55]//TableForm=

A 2

B C + 6 6 2A 2B − 3 3

−

B6

B − 6

2A 2B − − 3 3

2B 3

0

C 2

0

2B 3

2B 2C − − 3 3

2A 2B − − 3 3

0

4A 4B 4C + + 3 3 3

4B 4C − − 3 3

4B 3

0

2B 3

2B 3

4B 4C − − 3 3

4B 4C 4A + + 3 3 3

4A 4B − − 3 3

2B 2C − − 3 3

0

2B 2C − − 3 3

4B 3

4A 4B − − 3 3

4B 4C 4A + + 3 3 3

−

For a larger order, we will visualize the resulting mass and stiffness matrices. Here are these two matrices shown for n = 10 for the unit triangle. In[56]:= With[{n = 10},

Show[GraphicsArray[ ListDensityPlot[(* scale *) ArcTan[#], PlotRange -> All, Mesh -> False, DisplayFunction -> Identity]& /@ (* calculate exact mass and stiffness matrices *) {MassMatrix[n], StiffnessMatrix[n] /. {A -> 1, C -> 1, B -> 0}}]]] 60

60

50

50

40

40

30

30

20

20

10

10

0

0

10

20

30

40

50

60

0

0

10

20

30

40

50

60

The subject of finite elements contains many other opportunities for programming with Mathematica. For example, we mention algorithms for minimizing the bandwidth of sparse matrices (following, e.g., Cuthill–McKee [424], Gibbs–Poole– Stockmeyer ([723] and [711]), or Sloan [1625]). Because of their special nature, we do not go any further into the explicit implementation of these finite-element computations. Hp,dL b) We start by implementing the interpolating functions ck,l HxL. Using the function InterpolatingPolynomial , their construction is straightforward for explicitly given integers e, p, d, k, and l. While the unexpanded form has a better stability for numerical evaluation, we expand the functions here to speed-up the integrations to be carried out later.

In[1]:= χ[p_, d_][k_, l_, ξ_] :=

Expand[InterpolatingPolynomial[ Table[{j/p, Table[KroneckerDelta[j, k]* KroneckerDelta[l, i], {i, 0, d}]}, {j, 0, p}], ξ]]

Here are two examples: In[2]:= {χ[3, 0][0, 0, ξ], χ[2, 2][1, 1, ξ]}

11 ξ 2

9 ξ3 2

Out[2]= 91 − + 9 ξ2 − , −32 ξ3 + 160 ξ4 − 288 ξ5 + 224 ξ6 − 64 ξ7 = H p,dL

We sidestep a moment and visualize some of the ck,l HxL. The function maxAbs[p, d][k, l] calculates the maximum of Hp,dL the absolute value of the ck,l HxL over the x-interval @0, 1D. In[3]:= maxAbs[p_, d_][k_, l_] :=

Module[{f = χ[p, d][k, l, ξ], extξs}, (* solve for extrema *) extξs = Select[N[{ToRules[Roots[D[f, ξ, ξ] == 0, ξ, Cubics -> False, Quartics -> False]]}, 50], (Im[ξ /. #] == 0 && 0 1}}]]]]

The magnitude of the functions decreases quickly with higher-order continuity.

Symbolic Computations

426 In[4]:= With[{p = 4, d = 4},

Table[{j, Max[Table[maxAbs[p, d][i, j], {i, 0, p}]] // N}, {j, 0, d}]] Out[4]= 880, 5.69702 None, PlotRange -> All, Frame -> True, Axes -> False] Show[GraphicsArray[#]]& /@ Table[ Table[graph[µ, µ][k, l], {l, 0, µ}, {k, 0, µ}], {µ, 3}]

Solutions

427

H p,dL

Before starting the implementation of the functions to solve the eigenvalue problem, we will renumber the ck,l . For fixed p Hp,dL and d, we want to number the functions cHp,dL HxL using one index to easily assemble the global finite element k,l HxL = ch matrices. We number them consecutively with increasing k, and within each k with increasing l. The function reducesIn dices does the inverse: given the linear numbering h, it generates the pairs Hk, lL. In[10]:= reducesIndices[p_, d_][h_] :=

Sequence[Floor[h/(d + 1)], h - (d + 1) Floor[h/(d + 1)]]

Here are the sixteen pairs corresponding to cH3,3L h HxL. In[11]:= Table[{k, {reducesIndices[3, 3][k]}}, {k, 0, 15}] Out[11]= 880, 80, 0 22, MaxIterations -> 40]

8848, 12, 6 e[b], b] /. {e'[b] -> 0,

e[b] -> e}) == 0, evEq[0] == 0}, {b, e}], (Im[e] == 0 && b > 0 /. N[#])&] 13 è!!!!! Out[16]= 99e → , b → 2 == 16

Here is a sketch of the behavior of the yHb; xL, including more terms. In[17]:= Show[GraphicsArray[#]]& /@

Partition[Table[ ListPlot[Table[{b, NRoots[evEq[i] == 0, e][[1, 2]]}, {b, 0.8, 2.5, 0.025}], PlotRange -> {0.8, 0.82}, PlotJoined -> True, AxesOrigin -> {0.8, 0.8}, DisplayFunction -> Identity, PlotLabel -> StyleForm["evEq[" ToString[i] "]", "MR"]], {i, 6}], 3] evEq@1D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

evEq@2D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

1 1.25 1.5 1.75 2 2.25 2.5

evEq@4D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

evEq@5D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

evEq@3D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

evEq@6D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

1 1.25 1.5 1.75 2 2.25 2.5

Let us now numerically compute the minimizing values for b. We compare three different methods for the case of evEq[3]. One method is to use FindMinimum for the lowest value of e, which we calculate by solving the polynomial in e with NRoots. In[18]:= oFevEq3[_?NumericQ] :=

Block[{b = }, NRoots[evEq[3] == 0, e, 20][[1, 2]]] Timing[FindMinimum[oFevEq3[b], {b, ##}, WorkingPrecision -> 25, PrecisionGoal -> 12, Compiled -> False]& @@@ (* two initial intervals *) {{11/10, 12/10}, {17/10, 18/10}}] Out[19]= 80.02 Second, 880.8074145723427270178250488, 8b → 1.203732086388522417922241 0) /. e[b] -> e) == 0, evEq[3] == 0} . This would not have resulted in a faster solution. Actually, the quality of the solution is not guaranteed. In[22]:= Sort[Select[NSolve[{((D[evEq[3] /. e -> e[b], b] /. e'[b] -> 0) /.

e[b] -> e) == 0, evEq[3] == 0}, {e, b}], Im[e] == 0 && Im[b] == 0 && Re[b] > 0 /. #&], #1[[1, 2]] < #2[[1, 2]]&] // Timing Out[22]= 82.38 Second, 88e → 0.804175, b → 1.72205 0) /. e[b] -> e, evEq[n]}] In[30]:= Timing[frSolve[3, {12/10, 18/10}]] Out[30]= 80.04 Second, 88e → 0.8074145723427270178250477, b → 1.203732086388840409673660 a})] /. {Cos[x_]^2 + Sin[x_]^2 -> 1}

Here are the computations of some Jacobian determinants with the times required. In[7]:= timings[k_Integer] := {k, {Timing[NaivJacobiDeterminant[k]],

Timing[FastJacobiDeterminant[k]]}} In[8]:= Table[timings[k], {k, 2, 7}] Out[8]= 882, 880.01 Second, r 0, x[ϕ] -> x},

(* algebraic relation between Sin and Cos *) Sin[ϕ]^2 + Cos[ϕ]^2 - 1}, {Cos[ϕ], Cos[ϕMax], h, l}, {Sin[ϕ], x}, MonomialOrder -> EliminationOrder] /. {Cos[ϕ] -> c, Cos[ϕMax] -> cm} // Factor Out[14]= 8c2 H3 c − 2 cmL l H−h − l + c lL H−h + c2 h − l + c2 l + c3 l + cm l − 2 c2 cm lL

Michael Trott

The Mathematica GuideBook for Symbolics

With 848 Illustrations

Michael Trott Wolfram Research Champaign, Illinois

Mathematica is a registered trademark of Wolfram Research, Inc. Library of Congress Control Number: 2005928496 ISBN-10: 0-387-95020-6 ISBN-13: 978-0387-95020-4 e-ISBN: 0-387-28815-5

Printed on acid-free paper.

2006 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring St., New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America. 9 8 7 6 5 4 3 2 1 springeronline.com

(HAM)

Preface Bei mathematischen Operationen kann sogar eine gänzliche Entlastung des Kopfes eintreten, indem man einmal ausgeführte Zähloperationen mit Zeichen symbolisiert und, statt die Hirnfunktion auf Wiederholung schon ausgeführter Operationen zu verschwenden, sie für wichtigere Fälle aufspart. When doing mathematics, instead of burdening the brain with the repetitive job of redoing numerical operations which have already been done before, it’s possible to save that brainpower for more important situations by using symbols, instead, to represent those numerical calculations. — Ernst Mach (1883) [45]

Computer Mathematics and Mathematica Computers were initially developed to expedite numerical calculations. A newer, and in the long run, very fruitful field is the manipulation of symbolic expressions. When these symbolic expressions represent mathematical entities, this field is generally called computer algebra [8]. Computer algebra begins with relatively elementary operations, such as addition and multiplication of symbolic expressions, and includes such things as factorization of integers and polynomials, exact linear algebra, solution of systems of equations, and logical operations. It also includes analysis operations, such as definite and indefinite integration, the solution of linear and nonlinear ordinary and partial differential equations, series expansions, and residue calculations. Today, with computer algebra systems, it is possible to calculate in minutes or hours the results that would (and did) take years to accomplish by paper and pencil. One classic example is the calculation of the orbit of the moon, which took the French astronomer Delaunay 20 years [12], [13], [14], [15], [11], [26], [27], [53], [16], [17], [25]. (The Mathematica GuideBooks cover the two other historic examples of calculations that, at the end of the 19th century, took researchers many years of hand calculations [1], [4], [38] and literally thousands of pages of paper.) Along with the ability to do symbolic calculations, four other ingredients of modern general-purpose computer algebra systems prove to be of critical importance for solving scientific problems: † a powerful high-level programming language to formulate complicated problems † programmable two- and three-dimensional graphics † robust, adaptive numerical methods, including arbitrary precision and interval arithmetic † the ability to numerically evaluate and symbolically deal with the classical orthogonal polynomials and special functions of mathematical physics. The most widely used, complete, and advanced general-purpose computer algebra system is Mathematica. Mathematica provides a variety of capabilities such as graphics, numerics, symbolics, standardized interfaces to other programs, a complete electronic document-creation environment (including a full-fledged mathematical typesetting system), and a variety of import and export capabilities. Most of these ingredients are necessary to coherently and exhaustively solve problems and model processes occurring in the natural sciences [41], [58], [21], [39] and other fields using constructive mathematics, as well as to properly represent the results. Conse-

Preface

vi

quently, Mathematica’s main areas of application are presently in the natural sciences, engineering, pure and applied mathematics, economics, finance, computer graphics, and computer science. Mathematica is an ideal environment for doing general scientific and engineering calculations, for investigating and solving many different mathematically expressable problems, for visualizing them, and for writing notes, reports, and papers about them. Thus, Mathematica is an integrated computing environment, meaning it is what is also called a “problem-solving environment” [40], [23], [6], [48], [43], [50], [52].

Scope and Goals The Mathematica GuideBooks are four independent books whose main focus is to show how to solve scientific problems with Mathematica. Each book addresses one of the four ingredients to solve nontrivial and real-life mathematically formulated problems: programming, graphics, numerics, and symbolics. The Programming and the Graphics volume were published in autumn 2004. The four Mathematica GuideBooks discuss programming, two-dimensional, and three-dimensional graphics, numerics, and symbolics (including special functions). While the four books build on each other, each one is self-contained. Each book discusses the definition, use, and unique features of the corresponding Mathematica functions, gives small and large application examples with detailed references, and includes an extensive set of relevant exercises and solutions. The GuideBooks have three primary goals: † to give the reader a solid working knowledge of Mathematica † to give the reader a detailed knowledge of key aspects of Mathematica needed to create the “best”, fastest, shortest, and most elegant solutions to problems from the natural sciences † to convince the reader that working with Mathematica can be a quite fruitful, enlightening, and joyful way of cooperation between a computer and a human. Realizing these goals is achieved by understanding the unifying design and philosophy behind the Mathematica system through discussing and solving numerous example-type problems. While a variety of mathematics and physics problems are discussed, the GuideBooks are not mathematics or physics books (from the point of view of content and rigor; no proofs are typically involved), but rather the author builds on Mathematica’s mathematical and scientific knowledge to explore, solve, and visualize a variety of applied problems. The focus on solving problems implies a focus on the computational engine of Mathematica, the kernel—rather than on the user interface of Mathematica, the front end. (Nevertheless, for a nicer presentation inside the electronic version, various front end features are used, but are not discussed in depth.) The Mathematica GuideBooks go far beyond the scope of a pure introduction into Mathematica. The books also present instructive implementations, explanations, and examples that are, for the most part, original. The books also discuss some “classical” Mathematica implementations, explanations, and examples, partially available only in the original literature referenced or from newsgroups threads. In addition to introducing Mathematica, the GuideBooks serve as a guide for generating fairly complicated graphics and for solving more advanced problems using graphical, numerical, and symbolical techniques in cooperative ways. The emphasis is on the Mathematica part of the solution, but the author employs examples that are not uninteresting from a content point of view. After studying the GuideBooks, the reader will be able to solve new and old scientific, engineering, and recreational mathematics problems faster and more completely with the help of Mathematica—at least, this is the author’s goal. The author also hopes that the reader will enjoy

Preface

vii

using Mathematica for visualization of the results as much as the author does, as well as just studying Mathematica as a language on its own. In the same way that computer algebra systems are not “proof machines” [46], [9], [37], [10], [54], [55], [56] such as might be used to establish the four-color theorem ([2], [22]), the Kepler [28], [19], [29], [30], [31], [32], [33], [34], [35], [36] or the Robbins ([44], [20]) conjectures, proving theorems is not the central theme of the GuideBooks. However, powerful and general proof machines [9], [42], [49], [24], [3], founded on Mathematica’ s general programming paradigms and its mathematical capabilities, have been built (one such system is Theorema [7]). And, in the GuideBooks, we occasionally prove one theorem or another theorem. In general, the author’s aim is to present a realistic portrait of Mathematica: its use, its usefulness, and its strengths, including some current weak points and sometimes unexpected, but often nevertheless quite “thought through”, behavior. Mathematica is not a universal tool to solve arbitrary problems which can be formulated mathematically—only a fraction of all mathematical problems can even be formulated in such a way to be efficiently expressed today in a way understandable to a computer. Rather, it is often necessary to do a certain amount of programming and occasionally give Mathematica some “help” instead of simply calling a single function like Solve to solve a system of equations. Because this will almost always be the case for “real-life” problems, we do not restrict ourselves only to “textbook” examples, where all goes smoothly without unexpected problems and obstacles. The reader will see that by employing Mathematica’s programming, numeric, symbolic, and graphic power, Mathematica can offer more effective, complete, straightforward, reusable, and less likely erroneous solution methods for calculations than paper and pencil, or numerical programming languages. Although the Guidebooks are large books, it is nevertheless impossible to discuss all of the 2,000+ built-in Mathematica commands. So, some simple as well as some more complicated commands have been omitted. For a full overview about Mathematica’s capabilities, it is necessary to study The Mathematica Book [60] in detail. The commands discussed in the Guidebooks are those that an scientist or research engineer needs for solving typical problems, if such a thing exists [18]. These subjects include a quite detailed discussion of the structure of Mathematica expressions, Mathematica input and output (important for the human–Mathematica interaction), graphics, numerical calculations, and calculations from classical analysis. Also, emphasis is given to the powerful algebraic manipulation functions. Interestingly, they frequently allow one to solve analysis problems in an algorithmic way [5]. These functions are typically not so well known because they are not taught in classical engineering or physics-mathematics courses, but with the advance of computers doing symbolic mathematics, their importance increases [47]. A thorough knowledge of: † structural operations on polynomials, rational functions, and trigonometric functions † algebraic operations on polynomial equations and inequalities † process of compilation, its advantages and limits † main operations of calculus—univariate and multivariate differentiation and integration † solution of ordinary and partial differential equations is needed to put the heart of Mathematica—its symbolic capabilities—efficiently and successfully to work in the solution of model and real-life problems. The Mathematica GuideBooks to Symbolics discusses these subjects. The current version of the Mathematica GuideBooks is tailored for Mathematica Version 5.1.

viii

Preface

Content Overview The Mathematica GuideBook for Symbolics has three chapters. Each chapter is subdivided into sections (which occasionally have subsections), exercises, solutions to the exercises, and references. This fourth and last volume of the GuideBooks deals with Mathematica’s symbolic mathematical capabilities—the real heart of Mathematica and the ingredient of the Mathematica software system that makes it so unique and powerful. In addition, this volume discusses and employs the classical orthogonal polynomials and special functions of mathematical physics. To demonstrate the symbolic mathematics power, a variety of problems from mathematics and physics are discussed. Chapter 1 starts with a discussion of the algebraic functions needed to carry out analysis problems effectively. Contrary to classical science/engineering mathematics education, using a computer algebra system makes it often a good idea to rephrase a problem—including when it is from analysis—in a polynomial way to allow for powerful algorithmic treatments. Gröbner bases play a central role in accomplishing this task. This volume discusses in detail the main functions to deal with structural operations on polynomials, polynomial equations and inequalities, and expressions containing quantified variables. Rational functions and expressions containing trigonometric functions are dealt with next. Then the central problems of classical analysis—differentiation, integration, summation, series expansion, and limits—are discussed in detail. The symbolic solving of ordinary and partial differential equations is demonstrated in many examples. As always, a variety of examples show how to employ the discussed functions in various mathematics or physics problems. The Symbolics volume emphasizes their main uses and discusses the specialities of these operations inside a computer algebra system, as compared to a “manual” calculation. Then, generalized functions and Fourier and Laplace transforms are discussed. The main part of the chapter culminates with three examples of larger symbolic calculations, two of them being classic problems. This chapter has more than 150 exercises and solutions treating a variety of symbolic computation examples from the sciences. Chapters 2 and 3 discuss classical orthogonal polynomials and the special functions of mathematical physics. Because this volume is not a treatise on special functions, it is restricted to selected function groups and presents only their basic properties, associated differential equations, normalizations, series expansions, verification of various special cases, etc. The availability of nearly all of the special functions of mathematical physics for all possible arbitrary complex parameters opens new possibilities for the user, e.g., the use of closed formulas for the Green’s functions of commonly occurring partial differential equations or for “experimental mathematics”. These chapters focus on the use of the special functions in a number of physics-related applications in the text as well as in the exercises. The larger examples deal with are the quartic oscillator in the harmonic oscillator basis and the implementation of Felix Klein’s method to solve quintic polynomials in Gauss hypergeometric functions 2F1 . The Symbolics volume employs the built-in symbolic mathematics in a variety of examples. However, the underlying algorithms themselves are not discussed. Many of them are mathematically advanced and outside of the scope of the GuideBooks. Throughout the Symbolics volume, the programming and graphics experience acquired in the first two volumes is used to visualize various mathematics and physics topics.

Preface

ix

The Books and the Accompanying DVDs Each of the GuideBooks comes with a multiplatform DVD. Each DVD contains the fourteen main notebooks, the hyperlinked table of contents and index, a navigation palette, and some utility notebooks and files. All notebooks are tailored for Mathematica 5.1. Each of the main notebooks corresponds to a chapter from the printed books. The notebooks have the look and feel of a printed book, containing structured units, typeset formulas, Mathematica code, and complete solutions to all exercises. The DVDs contain the fully evaluated notebooks corresponding to the chapters of the corresponding printed book (meaning these notebooks have text, inputs, outputs and graphics). The DVDs also include the unevaluated versions of the notebooks of the other three GuideBooks (meaning they contain all text and Mathematica code, but no outputs and graphics). Although the Mathematica GuideBooks are printed, Mathematica is “a system for doing mathematics by computer” [59]. This was the lovely tagline of earlier versions of Mathematica, but because of its growing breadth (like data import, export and handling, operating system-independent file system operations, electronic publishing capabilities, web connectivity), nowadays Mathematica is called a “system for technical computing”. The original tagline (that is more than ever valid today!) emphasized two points: doing mathematics and doing it on a computer. The approach and content of the GuideBooks are fully in the spirit of the original tagline: They are centered around doing mathematics. The second point of the tagline expresses that an electronic version of the GuideBooks is the more natural medium for Mathematica-related material. Long outputs returned by Mathematica, sequences of animations, thousands of web-retrievable references, a 10,000-entry hyperlinked index (that points more precisely than a printed index does) are space-consuming, and therefore not well suited for the printed book. As an interactive program, Mathematica is best learned, used, challenged, and enjoyed while sitting in front of a powerful computer (or by having a remote kernel connection to a powerful computer). In addition to simply showing the printed book’s text, the notebooks allow the reader to: † experiment with, reuse, adapt, and extend functions and code † investigate parameter dependencies † annotate text, code, and formulas † view graphics in color † run animations.

The Accompanying Web Site Why does a printed book need a home page? There are (in addition to being just trendy) two reasons for a printed book to have its fingerprints on the web. The first is for (Mathematica) users who have not seen the book so far. Having an outline and content sample on the web is easily accomplished, and shows the look and feel of the notebooks (including some animations). This is something that a printed book actually cannot do. The second reason is for readers of the book: Mathematica is a large modern software system. As such, it ages quickly in the sense that in the timescale of 101. smallIntegermonths, a new version will likely be available. The overwhelmingly large majority of Mathematica functions and programs will run unchanged in a new version. But occasionally, changes and adaptations might be needed. To accommodate this, the web site of this book—http://www.MathematicaGuideBooks.org—contains a list of changes relevant to the GuideBooks. In addition, like any larger software project, unavoidably, the GuideBooks will contain suboptimal implementations, mistakes, omissions, imperfections, and errors. As they come to his attention, the author will list them at

Preface

x

the book’s web site. Updates to references, corrections [51], hundreds of pages of additional exercises and solutions, improved code segments, and other relevant information will be on the web site as well. Also, information about OS-dependent and Mathematica version-related changes of the given Mathematica code will be available there.

Evolution of the Mathematica GuideBooks A few words about the history and the original purpose of the GuideBooks: They started from lecture notes of an Introductory Course in Mathematica 2 and an advanced course on the Efficient Use of the Mathematica Programming System, given in 1991/1992 at the Technical University of Ilmenau, Germany. Since then, after each release of a new version of Mathematica, the material has been updated to incorporate additional functionality. This electronic/printed publication contains text, unique graphics, editable formulas, runable, and modifiable programs, all made possible by the electronic publishing capabilities of Mathematica. However, because the structure, functions and examples of the original lecture notes have been kept, an abbreviated form of the GuideBooks is still suitable for courses. Since 1992 the manuscript has grown in size from 1,600 pages to more than three times its original length, finally “weighing in” at nearly 5,000 printed book pages with more than: † 18 gigabytes of accompanying Mathematica notebooks † 22,000 Mathematica inputs with more than 13,000 code comments † 11,000 references † 4,000 graphics † 1,000 fully solved exercises † 150 animations. This first edition of this book is the result of more than eleven years of writing and daily work with Mathematica. In these years, Mathematica gained hundreds of functions with increased functionality and power. A modern year-2005 computer equipped with Mathematica represents a computational power available only a few years ago to a select number of people [57] and allows one to carry out recreational or new computations and visualizations—unlimited in nature, scope, and complexity— quickly and easily. Over the years, the author has learned a lot of Mathematica and its current and potential applications, and has had a lot of fun, enlightening moments and satisfaction applying Mathematica to a variety of research and recreational areas, especially graphics. The author hopes the reader will have a similar experience.

Disclaimer In addition to the usual disclaimer that neither the author nor the publisher guarantees the correctness of any formula, fitness, or reliability of any of the code pieces given in this book, another remark should be made. No guarantee is given that running the Mathematica code shown in the GuideBooks will give identical results to the printed ones. On the contrary, taking into account that Mathematica is a large and complicated software system which evolves with each released version, running the code with another version of Mathematica (or sometimes even on another operating system) will very likely result in different outputs for some inputs. And, as a consequence, if different outputs are generated early in a longer calculation, some functions might hang or return useless results.

Preface

xi

The interpretations of Mathematica commands, their descriptions, and uses belong solely to the author. They are not claimed, supported, validated, or enforced by Wolfram Research. The reader will find that the author’s view on Mathematica deviates sometimes considerably from those found in other books. The author’s view is more on the formal than on the pragmatic side. The author does not hold the opinion that any Mathematica input has to have an immediate semantic meaning. Mathematica is an extremely rich system, especially from the language point of view. It is instructive, interesting, and fun to study the behavior of built-in Mathematica functions when called with a variety of arguments (like unevaluated, hold, including undercover zeros, etc.). It is the author’s strong belief that doing this and being able to explain the observed behavior will be, in the long term, very fruitful for the reader because it develops the ability to recognize the uniformity of the principles underlying Mathematica and to make constructive, imaginative, and effective use of this uniformity. Also, some exercises ask the reader to investigate certain “unusual” inputs. From time to time, the author makes use of undocumented features and/or functions from the Developer` and Experimental` contexts (in later versions of Mathematica these functions could exist in the System` context or could have different names). However, some such functions might no longer be supported or even exist in later versions of Mathematica.

Acknowledgements Over the decade, the GuideBooks were in development, many people have seen parts of them and suggested useful changes, additions, and edits. I would like to thank Horst Finsterbusch, Gottfried Teichmann, Klaus Voss, Udo Krause, Jerry Keiper, David Withoff, and Yu He for their critical examination of early versions of the manuscript and their useful suggestions, and Sabine Trott for the first proofreading of the German manuscript. I also want to thank the participants of the original lectures for many useful discussions. My thanks go to the reviewers of this book: John Novak, Alec Schramm, Paul Abbott, Jim Feagin, Richard Palmer, Ward Hanson, Stan Wagon, and Markus van Almsick, for their suggestions and ideas for improvement. I thank Richard Crandall, Allan Hayes, Andrzej Kozlowski, Hartmut Wolf, Stephan Leibbrandt, George Kambouroglou, Domenico Minunni, Eric Weisstein, Andy Shiekh, Arthur G. Hubbard, Jay Warrendorff, Allan Cortzen, Ed Pegg, and Udo Krause for comments on the prepublication version of the GuideBooks. I thank Bobby R. Treat, Arthur G. Hubbard, Murray Eisenberg, Marvin Schaefer, Marek Duszynski, Daniel Lichtblau, Devendra Kapadia, Adam Strzebonski, Anton Antonov, and Brett Champion for useful comments on the Mathematica Version 5.1 tailored version of the GuideBooks. My thanks are due to Gerhard Gobsch of the Institute for Physics of the Technical University in Ilmenau for the opportunity to develop and give these original lectures at the Institute, and to Stephen Wolfram who encouraged and supported me on this project. Concerning the process of making the Mathematica GuideBooks from a set of lecture notes, I thank Glenn Scholebo for transforming notebooks to T E X files, and Joe Kaiping for T E X work related to the printed book. I thank John Novak and Jan Progen for putting all the material into good English style and grammar, John Bonadies for the chapter-opener graphics of the book, and Jean Buck for library work. I especially thank John Novak for the creation of Mathematica 3 notebooks from the T E X files, and Andre Kuzniarek for his work on the stylesheet to give the notebooks a pleasing appearance. My thanks go to Andy Hunt who created a specialized stylesheet for the actual book printout and printed and formatted the 4×1000+ pages of the Mathematica GuideBooks. I thank Andy Hunt for making a first version of the homepage of the GuideBooks and Amy Young for creating the current version of the homepage of the GuideBooks. I thank Sophie Young for a final check of the English. My largest thanks go to Amy Young, who encouraged me to update the whole book over the years and who had a close look at all of my English writing and often improved it considerably. Despite reviews by

xii

Preface

many individuals any remaining mistakes or omissions, in the Mathematica code, in the mathematics, in the description of the Mathematica functions, in the English, or in the references, etc. are, of course, solely mine. Let me take the opportunity to thank members of the Research and Development team of Wolfram Research whom I have met throughout the years, especially Victor Adamchik, Anton Antonov, Alexei Bocharov, Arnoud Buzing, Brett Champion, Matthew Cook, Todd Gayley, Darren Glosemeyer, Roger Germundsson, Unal Goktas, Yifan Hu, Devendra Kapadia, Zbigniew Leyk, David Librik, Daniel Lichtblau, Jerry Keiper, Robert Knapp, Roman Mäder, Oleg Marichev, John Novak, Peter Overmann, Oleksandr Pavlyk, Ulises Cervantes–Pimentel, Mark Sofroniou, Adam Strzebonski, Oyvind Tafjord, Robby Villegas, Tom Wickham–Jones, David Withoff, and Stephen Wolfram for numerous discussions about design principles, various small details, underlying algorithms, efficient implementation of various procedures, and tricks concerning Mathematica. The appearance of the notebooks profited from discussions with John Fultz, Paul Hinton, John Novak, Lou D’Andria, Theodore Gray, Andre Kuzniarek, Jason Harris, Andy Hunt, Christopher Carlson, Robert Raguet–Schofield, George Beck, Kai Xin, Chris Hill, and Neil Soiffer about front end, button, and typesetting issues. It was an interesting and unique experience to work over the last 12 years with five editors: Allan Wylde, Paul Wellin, Maria Taylor, Wayne Yuhasz, and Ann Kostant, with whom the GuideBooks were finally published. Many book-related discussions that ultimately improved the GuideBooks, have been carried out with Jan Benes from TELOS and associates, Steven Pisano, Jenny Wolkowicki, Henry Krell, Fred Bartlett, Vaishali Damle, Ken Quinn, Jerry Lyons, and Rüdiger Gebauer from Springer New York. The author hopes the Mathematica GuideBooks help the reader to discover, investigate, urbanize, and enjoy the computational paradise offered by Mathematica.

Wolfram Research, Inc. April 2005

Michael Trott

Preface

xiii

References 1

A. Amthor. Z. Math. Phys. 25, 153 (1880).

2

K. Appel, W. Haken. J. Math. 21, 429 (1977).

3

A. Bauer, E. Clarke, X. Zhao. J. Automat. Reasoning 21, 295 (1998).

4

A. H. Bell. Am. Math. Monthly 2, 140 (1895).

5

M. Berz. Adv. Imaging Electron Phys. 108, 1 (2000).

6

R. F. Boisvert. arXiv:cs.MS/0004004 (2000).

7

B. Buchberger. Theorema Project (1997). ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1997/97-34/ed-media.nb

8

B. Buchberger. SIGSAM Bull. 36, 3 (2002).

9

S.-C. Chou, X.-S. Gao, J.-Z. Zhang. Machine Proofs in Geometry, World Scientific, Singapore, 1994.

10

A. M. Cohen. Nieuw Archief Wiskunde 14, 45 (1996).

11

A. Cook. The Motion of the Moon, Adam-Hilger, Bristol, 1988.

12

C. Delaunay. Théorie du Mouvement de la Lune, Gauthier-Villars, Paris, 1860.

13

C. Delaunay. Mem. de l’ Acad. des Sc. Paris 28 (1860).

14

C. Delaunay. Mem. de l’ Acad. des Sc. Paris 29 (1867).

15

A. Deprit, J. Henrard, A. Rom. Astron. J. 75, 747 (1970).

16

A. Deprit. Science 168, 1569 (1970).

17

A. Deprit, J. Henrard, A. Rom. Astron. J. 76, 273 (1971).

18

P. J. Dolan, Jr., D. S. Melichian. Am. J. Phys. 66, 11 (1998).

19

S. P. Ferguson, T. C. Hales. arXiv:math.MG/ 9811072 (1998).

20

B. Fitelson. Mathematica Educ. Res. 7, n1, 17 (1998).

21

A. C. Fowler. Mathematical Models in the Applied Sciences, Cambridge University Press, Cambridge, 1997.

22

H. Fritsch, G. Fritsch. The Four-Color Theorem, Springer-Verlag, New York, 1998.

23

E. Gallopoulus, E. Houstis, J. R. Rice (eds.). Future Research Directions in Problem Solving Environments for Computational Science: Report of a Workshop on Research Directions in Integrating Numerical Analysis, Symbolic Computing, Computational Geometry, and Artificial Intelligence for Computational Science, 1991. http://www.cs.purdue.edu/research/cse/publications/tr/92/92-032.ps.gz

24

V. Gerdt, S. A. Gogilidze in V. G. Ganzha, E. W. Mayr, E. V. Vorozhtsov (eds.). Computer Algebra in Scientific Computing, Springer-Verlag, Berlin, 1999.

25

M. C. Gutzwiller, D. S. Schmidt. Astronomical Papers: The Motion of the Moon as Computed by the Method of Hill, Brown, and Eckert, U.S. Government Printing Office, Washington, 1986.

26

M. C. Gutzwiller. Rev. Mod. Phys. 70, 589 (1998).

27

Y. Hagihara. Celestial Mechanics vII/1, MIT Press, Cambridge, 1972.

28

T. C. Hales. arXiv:math.MG/ 9811071 (1998).

29

T. C. Hales. arXiv:math.MG/ 9811073 (1998).

30

T. C. Hales. arXiv:math.MG/ 9811074 (1998).

31

T. C. Hales. arXiv:math.MG/ 9811075 (1998).

32

T. C. Hales. arXiv:math.MG/ 9811076 (1998).

33

T. C. Hales. arXiv:math.MG/ 9811077 (1998).

Preface

xiv 34

T. C. Hales. arXiv:math.MG/ 9811078 (1998).

35

T. C. Hales. arXiv:math.MG/0205208 (2002).

36

T. C. Hales in L. Tatsien (ed.). Proceedings of the International Congress of Mathematicians v. 3, Higher Education Press, Beijing, 2002.

37

J. Harrison. Theorem Proving with the Real Numbers, Springer-Verlag, London, 1998.

38

J. Hermes. Nachrichten Königl. Gesell. Wiss. Göttingen 170 (1894).

39 40

E. N. Houstis, J. R. Rice, E. Gallopoulos, R. Bramley (eds.). Enabling Technologies for Computational Science, Kluwer, Boston, 2000. E. N. Houstis, J. R. Rice. Math. Comput. Simul. 54, 243 (2000).

41

M. S. Klamkin (eds.). Mathematical Modelling, SIAM, Philadelphia, 1996.

42

H. Koch, A. Schenkel, P. Wittwer. SIAM Rev. 38, 565 (1996).

43

Y. N. Lakshman, B. Char, J. Johnson in O. Gloor (ed.). ISSAC 1998, ACM Press, New York, 1998.

44

W. McCune. Robbins Algebras Are Boolean, 1997. http://www.mcs.anl.gov/home/mccune/ar/robbins/

45

E. Mach (R. Wahsner, H.-H. von Borszeskowski eds.). Die Mechanik in ihrer Entwicklung, Akademie-Verlag, Berlin, 1988. D. A. MacKenzie. Mechanizing Proof: Computing, Risk, and Trust, MIT Press, Cambridge, 2001.

46 47

B. M. McCoy. arXiv:cond-mat/0012193 (2000).

48

K. J. M. Moriarty, G. Murdeshwar, S. Sanielevici. Comput. Phys. Commun. 77, 325 (1993).

49

I. Nemes, M. Petkovšek, H. S. Wilf, D. Zeilberger. Am. Math. Monthly 104, 505 (1997).

50

W. H. Press, S. A. Teukolsky. Comput. Phys. 11, 417 (1997).

51

D. Rawlings. Am. Math. Monthly 108, 713 (2001).

52

Problem Solving Environments Home Page. http://www.cs.purdue.edu/research/cse/pses

53

D. S. Schmidt in H. S. Dumas, K. R. Meyer, D. S. Schmidt (eds.). Hamiltonian Dynamical Systems, Springer-Verlag, New York, 1995. S. Seiden. SIGACT News 32, 111 (2001).

54 55

S. Seiden. Theor. Comput. Sc. 282, 381 (2002).

56

C. Simpson. arXiv:math.HO/0311260 (2003).

57

A. M. Stoneham. Phil. Trans. R. Soc. Lond. A 360, 1107 (2002).

58

M. Tegmark. Ann. Phys. 270, 1 (1999).

59

S. Wolfram. Mathematica: A System for Doing Mathematics by Computer, Addison-Wesley, Redwood City, 1992.

60

S. Wolfram. The Mathematica Book, Wolfram Media, Champaign, 2003.

Contents 0.

Introduction and Orientation

xix

CHAmR I

SymbolicComputations 1.0

Remarks

1

1.1

Introduction 1

1.2

Operations on Polynomials

13

1.2.0

Remarks 13

1.2.1

Structural Manipulations on Polynomials 13

1.2.2

Polynomials in Equations 25

1.2.3

Polynomials in Inequalities 50

1.3

Operations on Rational Functions 78

1.4

Operations on Trigonometric Expressions

1.5

Solution of Equations 94

1.6

Classical Analysis 129

1.7

1.6.1

Differentiation 129

1.6.2

Integration 156

1.6.3

Limits 184

1.6.4

Series Expansions 189

1.6.5

Residues 220

1.6.6

Sums 221

88

Differential and Difference Equations 233 1.7.0

Remarks 233

1.7.1

Ordinary Differential Equations 234

1.7.2

Partial Differential Equations 257

1.7.3

Difference Equations 260

1.8

Integral Transforms and Generalized Functions 266

1.9

Additional Symbolics Functions 294

xvi 1.10

Three Applications 298 1.10.0 Remarks 298 1.10.1

Area of a Random Triangle in a Square 298

1.10.2

2 cosH 257 L à la Gauss 312

1.10.3

Implicitization of a Trefoil Knot 321

p

ÅÅÅÅÅÅÅÅÅÅ

Exercises 330 Solutions 371 References 749

CHAPTER 2

Classical Orthogonal Polynomials 2.0

Remarks 803

2.1

General Properties of Orthogonal Polynomials 803

2.2

Hermite Polynomials 806

2.3

Jacobi Polynomials 816

2.4

Gegenbauer Polynomials 823

2.5

Laguerre Polynomials 832

2.6

Legendre Polynomials 842

2.7

Chebyshev Polynomials of the First Kind 849

2.8

Chebyshev Polynomials of the Second Kind 853

2.9

Relationships Among the Orthogonal Polynomials 860

2.10 Ground-State of the Quartic Oscillator 868 Exercises 885 Solutions 897 References 961

xvii

CHAPTER 3

Classical Special Functions 3.0

Remarks 979

3.1

Introduction 989

3.2

Gamma, Beta, and Polygamma Functions 1001

3.3

Error Functions and Fresnel Integrals 1008

3.4

Exponential Integral and Related Functions 1016

3.5

Bessel and Airy Functions 1019

3.6

Legendre Functions 1044

3.7

Hypergeometric Functions 1049

3.8

Elliptic Integrals 1062

3.9

Elliptic Functions 1071

3.10 Product Log Function 1081 3.11

Mathieu Functions 1086

3.12 Additional Special Functions 1109 3.13 Solution of Quintic Polynomials 1110 Exercises 1125 Solutions 1155 References 1393 Index 1431

Introduction and Orientation to The Mathematica GuideBooks 0.1 Overview 0.1.1 Content Summaries The Mathematica GuideBooks are published as four independent books: The Mathematica GuideBook to Programming, The Mathematica GuideBook to Graphics, The Mathematica GuideBook to Numerics, and The Mathematica GuideBook to Symbolics. † The Programming volume deals with the structure of Mathematica expressions and with Mathematica as a programming language. This volume includes the discussion of the hierarchical construction of all Mathematica objects out of symbolic expressions (all of the form head[argument]), the ultimate building blocks of expressions (numbers, symbols, and strings), the definition of functions, the application of rules, the recognition of patterns and their efficient application, the order of evaluation, program flows and program structure, the manipulation of lists (the universal container for Mathematica expressions of all kinds), as well as a number of topics specific to the Mathematica programming language. Various programming styles, especially Mathematica’ s powerful functional programming constructs, are covered in detail. † The Graphics volume deals with Mathematica’s two-dimensional (2D) and three-dimensional (3D) graphics. The chapters of this volume give a detailed treatment on how to create images from graphics primitives, such as points, lines, and polygons. This volume also covers graphically displaying functions given either analytically or in discrete form. A number of images from the Mathematica Graphics Gallery are also reconstructed. Also discussed is the generation of pleasing scientific visualizations of functions, formulas, and algorithms. A variety of such examples are given. † The Numerics volume deals with Mathematica’s numerical mathematics capabilities—the indispensable sledgehammer tools for dealing with virtually any “real life” problem. The arithmetic types (fast machine, exact integer and rational, verified high-precision, and interval arithmetic) are carefully analyzed. Fundamental numerical operations, such as compilation of programs, numerical Fourier transforms, minimization, numerical solution of equations, and ordinary/partial differential equations are analyzed in detail and are applied to a large number of examples in the main text and in the solutions to the exercises. † The Symbolics volume deals with Mathematica’s symbolic mathematical capabilities—the real heart of Mathematica and the ingredient of the Mathematica software system that makes it so unique and powerful. Structural and mathematical operations on systems of polynomials are fundamental to many symbolic calculations and are covered in detail. The solution of equations and differential equations, as well as the classical calculus operations, are exhaustively treated. In addition, this volume discusses and employs the classical

xx

Introduction

orthogonal polynomials and special functions of mathematical physics. To demonstrate the symbolic mathematics power, a variety of problems from mathematics and physics are discussed. The four GuideBooks contain about 25,000 Mathematica inputs, representing more than 75,000 lines of commented Mathematica code. (For the reader already familiar with Mathematica, here is a more precise measure: The LeafCount of all inputs would be about 900,000 when collected in a list.) The GuideBooks also have more than 4,000 graphics, 150 animations, 11,000 references, and 1,000 exercises. More than 10,000 hyperlinked index entries and hundreds of hyperlinks from the overview sections connect all parts in a convenient way. The evaluated notebooks of all four volumes have a cumulative file size of about 20 GB. Although these numbers may sound large, the Mathematica GuideBooks actually cover only a portion of Mathematica’s functionality and features and give only a glimpse into the possibilities Mathematica offers to generate graphics, solve problems, model systems, and discover new identities, relations, and algorithms. The Mathematica code is explained in detail throughout all chapters. More than 13,000 comments are scattered throughout all inputs and code fragments.

0.1.2 Relation of the Four Volumes The four volumes of the GuideBooks are basically independent, in the sense that readers familiar with Mathematica programming can read any of the other three volumes. But a solid working knowledge of the main topics discussed in The Mathematica GuideBook to Programming—symbolic expressions, pure functions, rules and replacements, and list manipulations—is required for the Graphics, Numerics, and Symbolics volumes. Compared to these three volumes, the Programming volume might appear to be a bit “dry”. But similar to learning a foreign language, before being rewarded with the beauty of novels or a poem, one has to sweat and study. The whole suite of graphical capabilities and all of the mathematical knowledge in Mathematica are accessed and applied through lists, patterns, rules, and pure functions, the material discussed in the Programming volume. Naturally, graphics are the center of attention of the The Mathematica GuideBook to Graphics. While in the Programming volume some plotting and graphics for visualization are used, graphics are not crucial for the Programming volume. The reader can safely skip the corresponding inputs to follow the main programming threads. The Numerics and Symbolics volumes, on the other hand, make heavy use of the graphics knowledge acquired in the Graphics volume. Hence, the prerequisites for the Numerics and Symbolics volumes are a good knowledge of Mathematica’s programming language and of its graphics system. The Programming volume contains only a few percent of all graphics, the Graphics volume contains about two-thirds, and the Numerics and Symbolics volume, about one-third of the overall 4,000+ graphics. The Programming and Graphics volumes use some mathematical commands, but they restrict the use to a relatively small number (especially Expand, Factor, Integrate, Solve). And the use of the function N for numerical ization is unavoidable for virtually any “real life” application of Mathematica. The last functions allow us to treat some mathematically not uninteresting examples in the Programming and Graphics volumes. In addition to putting these functions to work for nontrivial problems, a detailed discussion of the mathematics functions of Mathematica takes place exclusively in the Numerics and Symbolics volumes. The Programming and Graphics volumes contain a moderate amount of mathematics in the examples and exercises, and focus on programming and graphics issues. The Numerics and Symbolics volumes contain a substantially larger amount of mathematics. Although printed as four books, the fourteen individual chapters (six in the Programming volume, three in the Graphics volume, two in the Numerics volume, and three in the Symbolics volume) of the Mathematica GuideBooks form one organic whole, and the author recommends a strictly sequential reading, starting from Chapter 1 of the Programming volume and ending with Chapter 3 of the Symbolics volume for gaining the maximum

Introduction

xxi

benefit. The electronic component of each book contains the text and inputs from all the four GuideBooks, together with a comprehensive hyperlinked index. The four volumes refer frequently to one another.

0.1.3 Chapter Structure A rough outline of the content of a chapter is the following: † The main body discusses the Mathematica functions belonging to the chapter subject, as well their options and attributes. Generically, the author has attempted to introduce the functions in a “natural order”. But surely, one cannot be axiomatic with respect to the order. (Such an order of the functions is not unique, and the author intentionally has “spread out” the introduction of various Mathematica functions across the four volumes.) With the introduction of a function, some small examples of how to use the functions and comparisons of this function with related ones are given. These examples typically (with the exception of some visualizations in the Programming volume) incorporate functions already discussed. The last section of a chapter often gives a larger example that makes heavy use of the functions discussed in the chapter. † A programmatically constructed overview of each chapter functions follows. The functions listed in this section are hyperlinked to their attributes and options, as well as to the corresponding reference guide entries of The Mathematica Book. † A set of exercises and potential solutions follow. Because learning Mathematica through examples is very efficient, the proposed solutions are quite detailed and form up to 50% of the material of a chapter. † References end the chapter. Note that the first few chapters of the Programming volume deviate slightly from this structure. Chapter 1 of the Programming volume gives a general overview of the kind of problems dealt with in the four GuideBooks. The second, third, and fourth chapters of the Programming volume introduce the basics of programming in Mathematica. Starting with Chapters 5 of the Programming volume and throughout the Graphics, Numerics, and Symbolics volumes, the above-described structure applies. In the 14 chapters of the GuideBooks the author has chosen a “we” style for the discussions of how to proceed in constructing programs and carrying out calculations to include the reader intimately.

0.1.4 Code Presentation Style The typical style of a unit of the main part of a chapter is: Define a new function, discuss its arguments, options, and attributes, and then give examples of its usage. The examples are virtually always Mathematica inputs and outputs. The majority of inputs is in InputForm are the notebooks. On occasion StandardForm is also used. Although StandardForm mimics classical mathematics notation and makes short inputs more readable, for “program-like” inputs, InputForm is typically more readable and easier and more natural to align. For the outputs, StandardForm is used by default and occasionally the author has resorted to InputForm or FullForm to expose digits of numbers and to TraditionalForm for some formulas. Outputs are mostly not programs, but nearly always “results” (often mathematical expressions, formulas, identities, or lists of numbers rather than program constructs). The world of Mathematica users is divided into three groups, and each of them has a nearly religious opinion on how to format Mathematica code [1], [2]. The author follows the InputForm

xxii

Introduction

cult(ure) and hopes that the Mathematica users who do everything in either StandardForm or Traditional Form will bear with him. If the reader really wants to see all code in either StandardForm or Traditional Form, this can easily be done with the Convert To item from the Cell menu. (Note that the relation between InputForm and StandardForm is not symmetric. The InputForm cells of this book have been line-broken and aligned by hand. Transforming them into StandardForm or TraditionalForm cells works well because one typically does not line-break manually and align Mathematica code in these cell types. But converting StandardForm or TraditionalForm cells into InputForm cells results in much less pleasing results.) In the inputs, special typeset symbols for Mathematica functions are typically avoided because they are not monospaced. But the author does occasionally compromise and use Greek, script, Gothic, and doublestruck characters. In a book about a programming language, two other issues come always up: indentation and placement of the code. † The code of the GuideBooks is largely consistently formatted and indented. There are no strict guidelines or even rules on how to format and indent Mathematica code. The author hopes the reader will find the book’s formatting style readable. It is a compromise between readability (mental parsabililty) and space conservation, so that the printed version of the Mathematica GuideBook matches closely the electronic version. † Because of the large number of examples, a rather imposing amount of Mathematica code is presented. Should this code be present only on the disk, or also in the printed book? If it is in the printed book, should it be at the position where the code is used or at the end of the book in an appendix? Many authors of Mathematica articles and books have strong opinions on this subject. Because the main emphasis of the Mathematica GuideBooks is on solving problems with Mathematica and not on the actual problems, the GuideBooks give all of the code at the point where it is needed in the printed book, rather than “hiding” it in packages and appendices. In addition to being more straightforward to read and conveniently allowing us to refer to elements of the code pieces, this placement makes the correspondence between the printed book and the notebooks close to 1:1, and so working back and forth between the printed book and the notebooks is as straightforward as possible.

0.2 Requirements 0.2.1 Hardware and Software Throughout the GuideBooks, it is assumed that the reader has access to a computer running a current version of Mathematica (version 5.0/5.1 or newer). For readers without access to a licensed copy of Mathematica, it is possible to view all of the material on the disk using a trial version of Mathematica. (A trial version is downloadable from http://www.wolfram.com/products/mathematica/trial.cgi.) The files of the GuideBooks are relatively large, altogether more than 20 GB. This is also the amount of hard disk space needed to store uncompressed versions of the notebooks. To view the notebooks comfortably, the reader’s computer needs 128 MB RAM; to evaluate the evaluation units of the notebooks 1 GB RAM or more is recommended. In the GuideBooks, a large number of animations are generated. Although they need more memory than single pictures, they are easy to create, to animate, and to store on typical year-2005 hardware, and they provide a lot of joy.

Introduction

xxiii

0.2.2 Reader Prerequisites Although prior Mathematica knowledge is not needed to read The Mathematica GuideBook to Programming, it is assumed that the reader is familiar with basic actions in the Mathematica front end, including entering Greek characters using the keyboard, copying and pasting cells, and so on. Freely available tutorials on these (and other) subjects can be found at http://library.wolfram.com. For a complete understanding of most of the GuideBooks examples, it is desirable to have a background in mathematics, science, or engineering at about the bachelor’s level or above. Familiarity with mechanics and electrodynamics is assumed. Some examples and exercises are more specialized, for instance, from quantum mechanics, finite element analysis, statistical mechanics, solid state physics, number theory, and other areas. But the GuideBooks avoid very advanced (but tempting) topics such as renormalization groups [6], parquet approximations [27], and modular moonshines [14]. (Although Mathematica can deal with such topics, they do not fit the character of the Mathematica GuideBooks but rather the one of a Mathematica Topographical Atlas [a monumental work to be carried out by the Mathematica–Bourbakians of the 21st century]). Each scientific application discussed has a set of references. The references should easily give the reader both an overview of the subject and pointers to further references.

0.3 What the GuideBooks Are and What They Are Not 0.3.1 Doing Computer Mathematics As discussed in the Preface, the main goal of the GuideBooks is to demonstrate, showcase, teach, and exemplify scientific problem solving with Mathematica. An important step in achieving this goal is the discussion of Mathematica functions that allow readers to become fluent in programming when creating complicated graphics or solving scientific problems. This again means that the reader must become familiar with the most important programming, graphics, numerics, and symbolics functions, their arguments, options, attributes, and a few of their time and space complexities. And the reader must know which functions to use in each situation. The GuideBooks treat only aspects of Mathematica that are ultimately related to “doing mathematics”. This means that the GuideBooks focus on the functionalities of the kernel rather than on those of the front end. The knowledge required to use the front end to work with the notebooks can easily be gained by reading the corresponding chapters of the online documentation of Mathematica. Some of the subjects that are treated either lightly or not at all in the GuideBooks include the basic use of Mathematica (starting the program, features, and special properties of the notebook front end [16]), typesetting, the preparation of packages, external file operations, the communication of Mathematica with other programs via MathLink, special formatting and string manipulations, computer- and operating system-specific operations, audio generation, and commands available in various packages. “Packages” includes both, those distributed with Mathematica as well as those available from the Mathematica Information Center (http://library.wolfram.com/infocenter) and commercial sources, such as MathTensor for doing general relativity calculations (http://smc.vnet.net/MathTensor.html) or FeynCalc for doing high-energy physics calculations (http://www.feyncalc.org). This means, in particular, that probability and statistical calculations are barely touched on because most of the relevant commands are contained in the packages. The GuideBooks make little or no mention of the machine-dependent possibilities offered by the various Mathematica implementations. For this information, see the Mathematica documentation.

xxiv

Introduction

Mathematical and physical remarks introduce certain subjects and formulas to make the associated Mathematica implementations easier to understand. These remarks are not meant to provide a deep understanding of the (sometimes complicated) physical model or underlying mathematics; some of these remarks intentionally oversimplify matters. The reader should examine all Mathematica inputs and outputs carefully. Sometimes, the inputs and outputs illustrate little-known or seldom-used aspects of Mathematica commands. Moreover, for the efficient use of Mathematica, it is very important to understand the possibilities and limits of the built-in commands. Many commands in Mathematica allow different numbers of arguments. When a given command is called with fewer than the maximum number of arguments, an internal (or user-defined) default value is used for the missing arguments. For most of the commands, the maximum number of arguments and default values are discussed. When solving problems, the GuideBooks generically use a “straightforward” approach. This means they are not using particularly clever tricks to solve problems, but rather direct, possibly computationally more expensive, approaches. (From time to time, the GuideBooks even make use of a “brute force” approach.) The motivation is that when solving new “real life” problems a reader encounters in daily work, the “right mathematical trick” is seldom at hand. Nevertheless, the reader can more often than not rely on Mathematica being powerful enough to often succeed in using a straightforward approach. But attention is paid to Mathematica-specific issues to find time- and memory-efficient implementations—something that should be taken into account for any larger program. As already mentioned, all larger pieces of code in this book have comments explaining the individual steps carried out in the calculations. Many smaller pieces of code have comments when needed to expedite the understanding of how they work. This enables the reader to easily change and adapt the code pieces. Sometimes, when the translation from traditional mathematics into Mathematica is trivial, or when the author wants to emphasize certain aspects of the code, we let the code “speak for itself”. While paying attention to efficiency, the GuideBooks only occasionally go into the computational complexity ([8], [40], and [7]) of the given implementations. The implementation of very large, complicated suites of algorithms is not the purpose of the GuideBooks. The Mathematica packages included with Mathematica and the ones at MathSource (http://library.wolfram.com/database/MathSource ) offer a rich variety of self-study material on building large programs. Most general guidelines for writing code for scientific calculations (like descriptive variable names and modularity of code; see, e.g., [19] for a review) apply also to Mathematica programs. The programs given in a chapter typically make use of Mathematica functions discussed in earlier chapters. Using commands from later chapters would sometimes allow for more efficient techniques. Also, these programs emphasize the use of commands from the current chapter. So, for example, instead of list operation, from a complexity point of view, hashing techniques or tailored data structures might be preferable. All subsections and sections are “self-contained” (meaning that no other code than the one presented is needed to evaluate the subsections and sections). The price for this “self-containedness” is that from time to time some code has to be repeated (such as manipulating polygons or forming random permutations of lists) instead of delegating such programming constructs to a package. Because this repetition could be construed as boring, the author typically uses a slightly different implementation to achieve the same goal.

Introduction

xxv

0.3.2 Programming Paradigms In the GuideBooks, the author wants to show the reader that Mathematica supports various programming paradigms and also show that, depending on the problem under consideration and the goal (e.g., solution of a problem, test of an algorithm, development of a program), each style has its advantages and disadvantages. (For a general discussion concerning programming styles, see [3], [41], [23], [32], [15], and [9].) Mathematica supports a functional programming style. Thus, in addition to classical procedural programs (which are often less efficient and less elegant), programs using the functional style are also presented. In the first volume of the Mathematica GuideBooks, the programming style is usually dictated by the types of commands that have been discussed up to that point. A certain portion of the programs involve recursive, rule-based programming. The choice of programming style is, of course, partially (ultimately) a matter of personal preference. The GuideBooks’ main aim is to explain the operation, limits, and efficient application of the various Mathematica commands. For certain commands, this dictates a certain style of programming. However, the various programming styles, with their advantages and disadvantages, are not the main concern of the GuideBooks. In working with Mathematica, the reader is likely to use different programming styles depending if one wants a quick one-time calculation or a routine that will be used repeatedly. So, for a given implementation, the program structure may not always be the most elegant, fastest, or “prettiest”. The GuideBooks are not a substitute for the study of The Mathematica Book [45] http://documents. wolfram.com/mathematica). It is impossible to acquire a deeper (full) understanding of Mathematica without a thorough study of this book (reading it twice from the first to the last page is highly recommended). It defines the language and the spirit of Mathematica. The reader will probably from time to time need to refer to parts of it, because not all commands are discussed in the GuideBooks. However, the story of what can be done with Mathematica does not end with the examples shown in The Mathematica Book. The Mathematica GuideBooks go beyond The Mathematica Book. They present larger programs for solving various problems and creating complicated graphics. In addition, the GuideBooks discuss a number of commands that are not or are only fleetingly mentioned in the manual (e.g., some specialized methods of mathematical functions and functions from the Developer` and Experimental` contexts), but which the author deems important. In the notebooks, the author gives special emphasis to discussions, remarks, and applications relating to several commands that are typical for Mathematica but not for most other programming languages, e.g., Map, MapAt, MapIndexed, Distribute, Apply, Replace, ReplaceAll, Inner, Outer, Fold, Nest, Nest List, FixedPoint, FixedPointList, and Function. These commands allow to write exceptionally elegant, fast, and powerful programs. All of these commands are discussed in The Mathematica Book and others that deal with programming in Mathematica (e.g., [33], [34], and [42]). However, the author’s experience suggests that a deeper understanding of these commands and their optimal applications comes only after working with Mathematica in the solution of more complicated problems. Both the printed book and the electronic component contain material that is meant to teach in detail how to use Mathematica to solve problems, rather than to present the underlying details of the various scientific examples. It cannot be overemphasized that to master the use of Mathematica, its programming paradigms and individual functions, the reader must experiment; this is especially important, insightful, easily verifiable, and satisfying with graphics, which involve manipulating expressions, making small changes, and finding different approaches. Because the results can easily be visually checked, generating and modifying graphics is an ideal method to learn programming in Mathematica.

xxvi

Introduction

0.4 Exercises and Solutions 0.4.1 Exercises Each chapter includes a set of exercises and a detailed solution proposal for each exercise. When possible, all of the purely Mathematica-programming related exercises (these are most of the exercises of the Programming volume) should be solved by every reader. The exercises coming from mathematics, physics, and engineering should be solved according to the reader’s interest. The most important Mathematica functions needed to solve a given problem are generally those of the associated chapter. For a rough orientation about the content of an exercise, the subject is included in its title. The relative degree of difficulty is indicated by level superscript of the exercise number ( L1 indicates easy, L2 indicates medium, and L3 indicates difficult). The author’s aim was to present understandable interesting examples that illustrate the Mathematica material discussed in the corresponding chapter. Some exercises were inspired by recent research problems; the references given allow the interested reader to dig deeper into the subject. The exercises are intentionally not hyperlinked to the corresponding solution. The independent solving of the exercises is an important part of learning Mathematica.

0.4.2 Solutions The GuideBooks contain solutions to each of the more than 1,000 exercises. Many of the techniques used in the solutions are not just one-line calls to built-in functions. It might well be that with further enhancements, a future version of Mathematica might be able to solve the problem more directly. (But due to different forms of some results returned by Mathematica, some problems might also become more challenging.) The author encourages the reader to try to find shorter, more clever, faster (in terms of runtime as well complexity), more general, and more elegant solutions. Doing various calculations is the most effective way to learn Mathematica. A proper Mathematica implementation of a function that solves a given problem often contains many different elements. The function(s) should have sensibly named and sensibly behaving options; for various (machine numeric, high-precision numeric, symbolic) inputs different steps might be required; shielding against inappropriate input might be needed; different parameter values might require different solution strategies and algorithms, helpful error and warning messages should be available. The returned data structure should be intuitive and easy to reuse; to achieve a good computational complexity, nontrivial data structures might be needed, etc. Most of the solutions do not deal with all of these issues, but only with selected ones and thereby leave plenty of room for more detailed treatments; as far as limit, boundary, and degenerate cases are concerned, they represent an outline of how to tackle the problem. Although the solutions do their job in general, they often allow considerable refinement and extension by the reader. The reader should consider the given solution to a given exercise as a proposal; quite different approaches are often possible and sometimes even more efficient. The routines presented in the solutions are not the most general possible, because to make them foolproof for every possible input (sensible and nonsensical, evaluated and unevaluated, numerical and symbolical), the books would have had to go considerably beyond the mathematical and physical framework of the GuideBooks. In addition, few warnings are implemented for improper or improperly used arguments. The graphics provided in the solutions are mostly subject to a long list of refinements. Although the solutions do work, they are often sketchy and can be considerably refined and extended by the reader. This also means that the provided solutions to the exercises programs are not always very suitable for

Introduction

xxvii

solving larger classes of problems. To increase their applicability would require considerably more code. Thus, it is not guaranteed that given routines will work correctly on related problems. To guarantee this generality and scalability, one would have to protect the variables better, implement formulas for more general or specialized cases, write functions to accept different numbers of variables, add type-checking and error-checking functions, and include corresponding error messages and warnings. To simplify working through the solutions, the various steps of the solution are commented and are not always packed in a Module or Block. In general, only functions that are used later are packed. For longer calculations, such as those in some of the exercises, this was not feasible and intended. The arguments of the functions are not always checked for their appropriateness as is desirable for robust code. But, this makes it easier for the user to test and modify the code.

0.5 The Books Versus the Electronic Components 0.5.1 Working with the Notebooks Each volume of the GuideBooks comes with a multiplatform DVD, containing fourteen main notebooks tailored for Mathematica 4 and compatible with Mathematica 5. Each notebook corresponds to a chapter from the printed books. (To avoid large file sizes of the notebooks, all animations are located in the Animations directory and not directly in the chapter notebooks.) The chapters (and so the corresponding notebooks) contain a detailed description and explanation of the Mathematica commands needed and used in applications of Mathematica to the sciences. Discussions on Mathematica functions are supplemented by a variety of mathematics, physics, and graphics examples. The notebooks also contain complete solutions to all exercises. Forming an electronic book, the notebooks also contain all text, as well as fully typeset formulas, and reader-editable and reader-changeable input. (Readers can copy, paste, and use the inputs in their notebooks.) In addition to the chapter notebooks, the DVD also includes a navigation palette and fully hyperlinked table of contents and index notebooks. The Mathematica notebooks corresponding to the printed book are fully evaluated. The evaluated chapter notebooks also come with hyperlinked overviews; these overviews are not in the printed book. When reading the printed books, it might seem that some parts are longer than needed. The reader should keep in mind that the primary tool for working with the Mathematica kernel are Mathematica notebooks and that on a computer screen and there “length does not matter much”. The GuideBooks are basically a printout of the notebooks, which makes going back and forth between the printed books and the notebooks very easy. The GuideBooks give large examples to encourage the reader to investigate various Mathematica functions and to become familiar with Mathematica as a system for doing mathematics, as well as a programming language. Investigating Mathematica in the accompanying notebooks is the best way to learn its details. To start viewing the notebooks, open the table of contents notebook TableOfContents.nb. Mathematica notebooks can contain hyperlinks, and all entries of the table of contents are hyperlinked. Navigating through one of the chapters is convenient when done using the navigator palette GuideBooksNavigator.nb. When opening a notebook, the front end minimizes the amount of memory needed to display the notebook by loading it incrementally. Depending on the reader’s hardware, this might result in a slow scrolling speed. Clicking the “Load notebook cache” button of the GuideBooksNavigator palette speeds this up by loading the complete notebook into the front end. For the vast majority of sections, subsections, and solutions of the exercises, the reader can just select such a structural unit and evaluate it (at once) on a year-2005 computer (¥512 MB RAM) typically in a matter of

xxviii

Introduction

minutes. Some sections and solutions containing many graphics may need hours of computation time. Also, more than 50 pieces of code run hours, even days. The inputs that are very memory intensive or produce large outputs and graphics are in inactive cells which can be activated by clicking the adjacent button. Because of potentially overlapping variable names between various sections and subsections, the author advises the reader not to evaluate an entire chapter at once. Each smallest self-contained structural unit (a subsection, a section without subsections, or an exercise) should be evaluated within one Mathematica session starting with a freshly started kernel. At the end of each unit is an input cell. After evaluating all input cells of a unit in consecutive order, the input of this cell generates a short summary about the entire Mathematica session. It lists the number of evaluated inputs, the kernel CPU time, the wall clock time, and the maximal memory used to evaluate the inputs (excluding the resources needed to evaluate the Program cells). These numbers serve as a guide for the reader about the to-be-expected running times and memory needs. These numbers can deviate from run to run. The wall clock time can be substantially larger than the CPU time due to other processes running on the same computer and due to time needed to render graphics. The data shown in the evaluated notebooks came from a 2.5 GHz Linux computer. The CPU times are generically proportional to the computer clock speed, but can deviate within a small factor from operating system to operating system. In rare, randomly occurring cases slower computers can achieve smaller CPU and wall clock times than faster computers, due to internal time-constrained simplification processes in various symbolic mathematics functions (such as Integrate, Sum, DSolve, …). The Overview Section of the chapters is set up for a front end and kernel running on the same computer and having access to the same file system. When using a remote kernel, the directory specification for the package Overview.m must be changed accordingly. References can be conveniently extracted from the main text by selecting the cell(s) that refer to them (or parts of a cell) and then clicking the “Extract References” button. A new notebook with the extracted references will then appear. The notebooks contain color graphics. (To rerender the pictures with a greater color depth or at a larger size, choose Rerender Graphics from the Cell menu.) With some of the colors used, black-and-white printouts occasionally give low-contrast results. For better black-and-white printouts of these graphics, the author recommends setting the ColorOutput option of the relevant graphics function to GrayLevel. The notebooks with animations (in the printed book, animations are typically printed as an array of about 10 to 20 individual graphics) typically contain between 60 and 120 frames. Rerunning the corresponding code with a large number of frames will allow the reader to generate smoother and longer-running animations. Because many cell styles used in the notebooks are unique to the GuideBooks, when copying expressions and cells from the GuideBooks notebooks to other notebooks, one should first attach the style sheet notebook GuideBooksStylesheet.nb to the destination notebook, or define the needed styles in the style sheet of the destination notebook.

Introduction

xxix

0.5.2 Reproducibility of the Results The 14 chapter notebooks contained in the electronic version of the GuideBooks were run mostly with Mathematica 5.1 on a 2 GHz Intel Linux computer with 2 GB RAM. They need more than 100 hours of evaluation time. (This does not include the evaluation of the currently unevaluatable parts of code after the Make Input buttons.) For most subsections and sections, 512 MB RAM are recommended for a fast and smooth evaluation “at once” (meaning the reader can select the section or subsection, and evaluate all inputs without running out of memory or clearing variables) and the rendering of the generated graphic in the front end. Some subsections and sections need more memory when run. To reduce these memory requirements, the author recommends restarting the Mathematica kernel inside these subsections and sections, evaluating the necessary definitions, and then continuing. This will allow the reader to evaluate all inputs. In general, regardless of the computer, with the same version of Mathematica, the reader should get the same results as shown in the notebooks. (The author has tested the code on Sun and Intel-based Linux computers, but this does not mean that some code might not run as displayed (because of different configurations, stack size settings, etc., but the disclaimer from the Preface applies everywhere). If an input does not work on a particular machine, please inform the author. Some deviations from the results given may appear because of the following: † Inputs involving the function Random[…] in some form. (Often SeedRandom to allow for some kind of reproducibility and randomness at the same time is employed.) † Mathematica commands operating on the file system of the computer, or make use of the type of computer (such inputs need to be edited using the appropriate directory specifications). † Calculations showing some of the differences of floating-point numbers and the machine-dependent representation of these on various computers. † Pictures using various fonts and sizes because of their availability (or lack thereof) and shape on different computers. † Calculations involving Timing because of different clock speeds, architectures, operating systems, and libraries. † Formats of results depending on the actual window width and default font size. (Often, the corresponding inputs will contain Short.) Using anything other than Mathematica Version 5.1 might also result in different outputs. Examples of results that change form, but are all mathematically correct and equivalent, are the parameter variables used in underdetermined systems of linear equations, the form of the results of an integral, and the internal form of functions like InterpolatingFunction and CompiledFunction. Some inputs might no longer evaluate the same way because functions from a package were used and these functions are potentially built-in functions in a later Mathematica version. Mathematica is a very large and complicated program that is constantly updated and improved. Some of these changes might be design changes, superseded functionality, or potentially regressions, and as a result, some of the inputs might not work at all or give unexpected results in future versions of Mathematica.

xxx

Introduction

0.5.3 Earlier Versions of the Notebooks The first printing of the Programming volume and the Graphics volumes of the Mathematica GuideBooks were published in October 2004. The electronic components of these two books contained the corresponding evaluated chapter notebooks as well as unevaluated versions of preversions of the notebooks belonging to the Numerics and Symbolics volumes. Similarly, the electronic components of the Numerics and Symbolics volume contain the corresponding evaluated chapter notebooks and unevaluated copies of the notebooks of the Programming and Graphics volumes. This allows the reader to follow cross-references and look up relevant concepts discussed in the other volumes. The author has tried to keep the notebooks of the GuideBooks as up-to-date as possible. (Meaning with respect to the efficient and appropriate use of the latest version of Mathematica, with respect to maintaining a list of references that contains new publications, and examples, and with respect to incorporating corrections to known problems, errors, and mistakes). As a result, the notebooks of all four volumes that come with later printings of the Programming and Graphics volumes, as well with the Numerics and Symbolics volumes will be different and supersede the earlier notebooks originally distributed with the Programming and Graphics volumes. The notebooks that come with the Numerics and Symbolics volumes are genuine Mathematica Version 5.1 notebooks. Because most advances in Mathematica Version 5 and 5.1 compared with Mathematica Version 4 occurred in functions carrying out numerical and symbolical calculations, the notebooks associated with Numerics and Symbolics volumes contain a substantial amount of changes and additions compared with their originally distributed version.

0.6 Style and Design Elements 0.6.1 Text and Code Formatting The GuideBooks are divided into chapters. Each chapter consists of several sections, which frequently are further subdivided into subsections. General remarks about a chapter or a section are presented in the sections and subsections numbered 0. (These remarks usually discuss the structure of the following section and give teasers about the usefulness of the functions to be discussed.) Also, sometimes these sections serve to refresh the discussion of some functions already introduced earlier. Following the style of The Mathematica Book [45], the GuideBooks use the following fonts: For the main text, Times; for Mathematica inputs and built-in Mathematica commands, Courier plain (like Plot); and for user-supplied arguments, Times italic (like userArgument1 ). Built-in Mathematica functions are introduced in the following style: MathematicaFunctionToBeIntroduced[typeIndicatingUserSuppliedArgument(s)] is a description of the built-in command MathematicaFunctionToBeIntroduced upon its first appearance. A definition of the command, along with its parameters is given. Here, typeIndicatingUserSuppliedArgument(s) is one (or more) user-supplied expression(s) and may be written in an abbreviated form or in a different way for emphasis.

The actual Mathematica inputs and outputs appear in the following manner (as mentioned above, virtually all inputs are given in InputForm).

Introduction

xxxi

(* A comment. It will be/is ignored as Mathematica input: Return only one of the solutions *) Last[Solve[{x^2 - y == 1, x - y^2 == 1}, {x, y}]]

When referring in text to variables of Mathematica inputs and outputs, the following convention is used: Fixed, nonpattern variables (including local variables) are printed in Courier plain (the equations solved above contained the variables x and y). User supplied arguments to built-in or defined functions with pattern variables are printed in Times italic. The next input defines a function generating a pair of polynomial equations in x and y. equationPair[x_, y_] := {x^2 - y == 1, x - y^2 == 1}

x and y are pattern variables (usimng the same letters, but a different font from the actual code fragments x_ and y_) that can stand for any argument. Here we call the function equationPair with the two arguments u + v and w - z. equationPair[u + v, w - z]

Occasionally, explanation about a mathematics or physics topic is given before the corresponding Mathematica implementation is discussed. These sections are marked as follows:

Mathematical Remark: Special Topic in Mathematics or Physics A short summary or review of mathematical or physical ideas necessary for the following example(s). 1

From time to time, Mathematica is used to analyze expressions, algorithms, etc. In some cases, results in the form of English sentences are produced programmatically. To differentiate such automatically generated text from the main text, in most instances such text is prefaced by “ë” (structurally the corresponding cells are of type "PrintText" versus "Text" for author-written cells). Code pieces that either run for quite long, or need a lot of memory, or are tangent to the current discussion are displayed in the following manner. Make Input

mathematicaCodeWhichEitherRunsVeryLongOrThatIsVeryMemoryIntensive OrThatProducesAVeryLargeGraphicOrThatIsASideTrackToTheSubjectUnder Discussion (* with some comments on how the code works *)

To run a code piece like this, click the Make Input button above it. This will generate the corresponding input cell that can be evaluated if the reader’s computer has the necessary resources. The reader is encouraged to add new inputs and annotations to the electronic notebooks. There are two styles for reader-added material: "ReaderInput" (a Mathematica input style and simultaneously the default style for a new cell) and "ReaderAnnotation" (a text-style cell type). They are primarily intended to be used in the Reading environment. These two styles are indented more than the default input and text cells, have a green left bar and a dingbat. To access the "ReaderInput" and "ReaderAnnotation" styles, press the system-dependent modifier key (such as Control or Command) and 9 and 7, respectively.

xxxii

Introduction

0.6.2 References Because the GuideBooks are concerned with the solution of mathematical and physical problems using Mathematica and are not mathematics or physics monographs, the author did not attempt to give complete references for each of the applications discussed [38], [20]. The references cited in the text pertain mainly to the applications under discussion. Most of the citations are from the more recent literature; references to older publications can be found in the cited ones. Frequently URLs for downloading relevant or interesting information are given. (The URL addresses worked at the time of printing and, hopefully, will be still active when the reader tries them.) References for Mathematica, for algorithms used in computer algebra, and for applications of computer algebra are collected in the Appendix A. The references are listed at the end of each chapter in alphabetical order. In the notebooks, the references are hyperlinked to all their occurrences in the main text. Multiple references for a subject are not cited in numerical order, but rather in the order of their importance, relevance, and suggested reading order for the implementation given. In a few cases (e.g., pure functions in Chapter 3, some matrix operations in Chapter 6), references to the mathematical background for some built-in commands are given—mainly for commands in which the mathematics required extends beyond the familiarity commonly exhibited by non-mathematicians. The GuideBooks do not discuss the algorithms underlying such complicated functions, but sometimes use Mathematica to “monitor” the algorithms. References of the form abbreviationOfAScientificField/yearMonthPreprintNumber (such as quant-ph/0012147) refer to the arXiv preprint server [43], [22], [30] at http://arXiv.org. When a paper appeared as a preprint and (later) in a journal, typically only the more accessible preprint reference is given. For the convenience of the reader, at the end of these references, there is a Get Preprint button. Click the button to display a palette notebook with hyperlinks to the corresponding preprint at the main preprint server and its mirror sites. (Some of the older journal articles can be downloaded free of charge from some of the digital mathematics library servers, such as http://gdz.sub.uni-goettingen.de, http://www.emis.de, http://www.numdam.org, and http://dieper.aib.unilinz.ac.at.) As much as available, recent journal articles are hyperlinked through their digital object identifiers (http://www.doi.org).

0.6.3 Variable Scoping, Input Numbering, and Warning Messages Some of the Mathematica inputs intentionally cause error messages, infinite loops, and so on, to illustrate the operation of a Mathematica command. These messages also arise in the user’s practical use of Mathematica. So, instead of presenting polished and perfected code, the author prefers to illustrate the potential problems and limitations associated with the use of Mathematica applied to “real life” problems. The one exception are the spelling warning messages General::spell and General::spell1 that would appear relatively frequently because “similar” names are used eventually. For easier and less defocused reading, these messages are turned off in the initialization cells. (When working with the notebooks, this means that the pop-up window asking the user “Do you want to automatically evaluate all the initialization cells in the notebook?” should be evaluated should always be answered with a “yes”.) For the vast majority of graphics presented, the picture is the focus, not the returned Mathematica expression representing the picture. That is why the Graphics and Graphics3D output is suppressed in most situations.

Introduction

xxxiii

To improve the code’s readability, no attempt has been made to protect all variables that are used in the various examples. This protection could be done with Clear, Remove, Block, Module, With, and others. Not protecting the variables allows the reader to modify, in a somewhat easier manner, the values and definitions of variables, and to see the effects of these changes. On the other hand, there may be some interference between variable names and values used in the notebooks and those that might be introduced when experimenting with the code. When readers examine some of the code on a computer, reevaluate sections, and sometimes perform subsidiary calculations, they may introduce variables that might interfere with ones from the GuideBooks. To partially avoid this problem, and for the reader’s convenience, sometimes Clear[sequenceOfVariables]and Remove[sequenceOfVariables] are sprinkled throughout the notebooks. This makes experimenting with these functions easier. The numbering of the Mathematica inputs and outputs typically does not contain all consecutive integers. Some pieces of Mathematica code consist of multiple inputs per cell; so, therefore, the line numbering is incremented by more than just 1. As mentioned, Mathematica should be restarted at every section, or subsection or solution of an exercise, to make sure that no variables with values get reused. The author also explicitly asks the reader to restart Mathematica at some special positions inside sections. This removes previously introduced variables, eliminates all existing contexts, and returns Mathematica to the typical initial configuration to ensure reproduction of the results and to avoid using too much memory inside one session.

0.6.4 Graphics In Mathematica 5.1, displayed graphics are side effects, not outputs. The actual output of an input producing a graphic is a single cell with the text Graphics or Graphics3D or GraphicsArray and so on. To save paper, these output cells have been deleted in the printed version of the GuideBooks. Most graphics use an appropriate number of plot points and polygons to show the relevant features and details. Changing the number of plot points and polygons to a higher value to obtain higher resolution graphics can be done by changing the corresponding inputs. The graphics of the printed book and the graphics in the notebooks are largely identical. Some printed book graphics use a different color scheme and different point sizes and line and edge thicknesses to enhance contrast and visibility. In addition, the font size has been reduced for the printed book in tick and axes labels. The graphics shown in the notebooks are PostScript graphics. This means they can be resized and rerendered without loss of quality. To reduce file sizes, the reader can convert them to bitmap graphics using the Cellö Convert ToöBitmap menu. The resulting bitmap graphics can no longer be resized or rerendered in the original resolution. To reduce file sizes of the main content notebooks, the animations of the GuideBooks are not part of the chapter notebooks. They are contained in a separate directory.

xxxiv

Introduction

0.6.5 Notations and Symbols The symbols used in typeset mathematical formulas are not uniform and unique throughout the GuideBooks. Various mathematical and physical quantities (such as normals, rotation matrices, and field strengths) are used repeatedly in this book. Frequently the same notation is used for them, but depending on the context, also different ones are used, e.g. sometimes bold is used for a vector (such as r) and sometimes an arrow (such as ”r). Matrices appear in bold or as doublestruck letters. Depending on the context and emphasis placed, different notations are used in display equations and in the Mathematica input form. For instance, for a time-dependent scalar quantity of one variable yHt; xL, we might use one of many patterns, such as ψ[t][x] (for emphasizing a parametric t-dependence) or ψ[t, x] (to treat t and x on an equal footing) or ψ[t, {x}] (to emphasize the one-dimensionality of the space variable x). Mathematical formulas use standard notation. To avoid confusion with Mathematica notations, the use of square brackets is minimized throughout. Following the conventions of mathematics notation, square brackets are used for three cases: a) Functionals, such as t @ f HtLD HwL for the Fourier transform of a function f HtL. b) Power series coefficients, @xk D H f HxLL denotes the coefficient of xk of the power series expansion of f HxL around x = 0. c) Closed intervals, like @a, bD (open intervals are denoted by Ha, bL). Grouping is exclusively done using parentheses. Upper-case double-struck letters denote domains of numbers, for integers, for nonnegative integers, for rational numbers, for reals, and for complex numbers. Points in n (or n ) with explicitly given coordinates are indicated using curly braces 8c1 , …, cn = 0, 1/2 Floor[n](1 + Floor[n]), Sum[k, {k, 1, n}]] which would correspond to the result returned by Sum for an explicit (real) n. Variables that occur in inequalities will be considered as real-valued by many functions. For instance, for most functions a statement like z2 < -1 will not include parts of the imaginary axis of the z-plane. Many matrix operations, such as Cross[, ] and Det[] stay unevaluated for symbols and . Obviously, here and are not assumed to be complex numbers. There are some more exceptions, and we will encounter them in the following discussions. Generically, the assumption that every variable is a complex one of finite size is very sensible. The complex numbers are an algebraically closed field and enable the inversion of polynomials and more complicated functions. Without using complex numbers, it would be, for instance, impossible to express the three real roots è!!!!!!!! of 5 x3 - 9 x2 + x + 1 = 0 in radicals without using -1 explicitly. (See below for a more detailed discussion of this case.) But in some instances one wants to make certain assumptions about the type of a variable, for example, when ¶ 2 one wants to express that the parameter g in Ÿ-¶ ei g x dx is real so that the integral exists. A few Mathematica functions, notably Simplify, Integrate, Refine, and Assuming have currently the notion of a variable “type”. We will discuss assumptions in Integrate in detail in Subsection 1.6.2. The function Simplify we discussed already in Section 3.5 of the Programming volume [1735], but not in its full generality. Because we will make use of it more frequently later, and because internally Simplify uses functions from all sections of this chapter, we will discuss all of its options now. Simplify[expression, assumptions, options] tries to simplify expression under the assumptions assumptions.

We start with the last arguments of Simplify, its options. In[1]:= Out[1]=

Options[Simplify] 8Assumptions $Assumptions, ComplexityFunction → Automatic, TimeConstraint → 300, TransformationFunctions → Automatic, Trig → True

Pi], {100}] // Timing 84.75 Second, Null

Pi], {100}] // Timing 81.31 Second, Null

0]}&[Sqrt[z^2]] è!!!!!! 9 z2 , z= {#, Refine[z < 0, z > 0]}&[Sqrt[z^2]] è!!!!!! 9 z2 , False= {#, Refine[#, z == Pi/2]}&[Tan[z]] 8Tan@zD, ComplexInfinity< {#, Refine[#, z == E]}&[Round[z]] 8Round@zD, 3< {#, Refine[#, z < -1]}&[Log[z]] 8Log@zD, π + Log@−zD

20]}&[Log[z] > 1] 8Log@zD > 1, Log@zD > 1

1, Refine[z > 0]] True

When the function Assuming appears in nested form, the assumptions are joined. Here is an example. In[73]:= Out[73]=

Assuming[z > 1, Assuming[x < 0, Refine[z > 0 && x < 1]]] True

All currently active assumptions are stored in $Assumptions. $Assumptions gives the currently active assumptions.

By default, $Assumptions has the value True. This means, nothing nontrivial can be derived. In[74]:= Out[74]=

$Assumptions True

Here are the assumptions printed (using a Print-statement) that are active within the inner nested Assuming. In[75]:=

Assuming[z > 1, Assuming[x < 0, Print[$Assumptions]; Simplify[z > 0 && x < 1]]] x < 0 && z > 1

Out[75]=

True

From contradictory assumptions (indicated when recognized as such), follow false statements. In[76]:=

Assuming[z > 1, Assuming[z < -1, Simplify[-1/2 < z < 1/2]]] $Assumptions::cas : Warning: Contradictory assumptionHsL z < −1 && z > 1 encountered. More…

Out[76]=

True

This was a short introduction into a directed use of Simplify and Refine using options and assumption specifications.

1.2 Operations on Polynomials

13

1.2 Operations on Polynomials 1.2.0 Remarks Polynomials and polynomial systems play an extraordinary role in computational symbolic mathematics. In this section, we deal with three aspects of such systems: 1) Structural operations that express polynomials in various canonical forms, 2) manipulations of systems of polynomial equations and 3) manipulations of systems of polynomial inequations (meaning inequalities with Less and Greater, as well as Unequal as their heads). Explicit solutions of polynomial equations (which for most univariate polynomials of degree five or higher) cannot be given in radicals; their solutions will be discussed in Section 1.5. Here we largely focus on operations on polynomials that use their coefficients only.

1.2.1 Structural Manipulations on Polynomials The two most important commands for manipulating polynomials, Expand and Factor, were already introduced in Chapter 3 of the Programming volume [1735]. Note that Factor also works for polynomials in several variables. In[1]:= Out[1]=

In[2]:= Out[2]=

Expand[(1 - x)^3 (3 + y - 2x)^2 (z^2 + 8y)] 72 y − 312 x y + 536 x2 y − 456 x3 y + 192 x4 y − 32 x5 y + 48 y2 − 176 x y2 + 240 x2 y2 − 144 x3 y2 + 32 x4 y2 + 8 y3 − 24 x y3 + 24 x2 y3 − 8 x3 y3 + 9 z2 − 39 x z2 + 67 x2 z2 − 57 x3 z2 + 24 x4 z2 − 4 x5 z2 + 6 y z2 − 22 x y z2 + 30 x2 y z2 − 18 x3 y z2 + 4 x4 y z2 + y2 z2 − 3 x y2 z2 + 3 x2 y2 z2 − x3 y2 z2 Factor[%] −H−1 + xL3 H3 − 2 x + yL2 H8 y + z2 L

The following condition is often ignored. Factor works “properly” only for polynomials whose coefficients are exact (rational) numbers. Thus, for instance, the following example does not work. In[3]:= Out[3]=

Expand[(1.0 - x)^3 (3.0 + y - 2.0 x)^2 (z^2 + 8.0 y)] // Factor −8. H−9. y + 39. x y − 67. x2 y + 57. x3 y − 24. x4 y + 4. x5 y − 6. y2 + 22. x y2 − 30. x2 y2 + 18. x3 y2 − 4. x4 y2 − 1. y3 + 3. x y3 − 3. x2 y3 + 1. x3 y3 − 1.125 z2 + 4.875 x z2 − 8.375 x2 z2 + 7.125 x3 z2 − 3. x4 z2 + 0.5 x5 z2 − 0.75 y z2 + 2.75 x y z2 − 3.75 x2 y z2 + 2.25 x3 y z2 − 0.5 x4 y z2 − 0.125 y2 z2 + 0.375 x y2 z2 − 0.375 x2 y2 z2 + 0.125 x3 y2 z2 L

The following simpler example works, but we highly discourage the use of inexact numbers inside Factor. In[4]:= Out[4]=

x^2 - 5 x + 6. // Factor 1. H−3. + xL H−2. + xL

Results such as the following are much better produced using NRoots or NSolve to achieve a factorization explicitly via solving for the roots. In[5]:=

x^3 - x^2 - 5 x + 5.23 // Factor

Symbolic Computations

14 Out[5]= In[6]:= Out[7]=

1. H−2.19252 + xL H−1.05931 + xL H2.25183 + xL (* better *) Times @@ (x - (x /. NSolve[x^3 - x^2 - 5 x + 5.23 == 0, x])) H−2.19252 + xL H−1.05931 + xL H2.25183 + xL

Using the command Rationalize introduced in Chapter 1 of the Numerics volume [1737], we can convert approximate numbers to nearby rational numbers. (But be aware that for inputs with many-digit high-precision numbers, the functions myFactor might run a long time.) In[8]:=

Out[9]=

myFactor[x_, opts___] := N[Factor[MapAll[Rationalize[#, 0]&, x], opts], (* output precision = input precision *) Precision[x]] myFactor[%%%%] 7.03682 × 10−24 H−4.74876 × 107 + 4.48287 × 107 xL H−1.03683 × 108 + 4.72897 × 107 xL H1.50951 × 108 + 6.70348 × 107 xL

For nonexact integer exponents, Expand and Factor fail. In[10]:= Out[10]=

Expand[(1.0 - x)^3. (3.0 + y - 2.0 x)^2. (z^2 + 8.0 y)^2.] H1. − xL3. H3. − 2. x + yL2. H8. y + z2 L

2.

But note that the application of N to a polynomial does not give numericalized exponents. In[11]:= Out[11]=

N[Expand[(1.0 - x)^3 (3.0 + y - 2.0 x)^2 (z^2 + 8.0 y)^2]] 576. y2 − 2496. x y2 + 4288. x2 y2 − 3648. x3 y2 + 1536. x4 y2 − 256. x5 y2 + 384. y3 − 1408. x y3 + 1920. x2 y3 − 1152. x3 y3 + 256. x4 y3 + 64. y4 − 192. x y4 + 192. x2 y4 − 64. x3 y4 + 144. y z2 − 624. x y z2 + 1072. x2 y z2 − 912. x3 y z2 + 384. x4 y z2 − 64. x5 y z2 + 96. y2 z2 − 352. x y2 z2 + 480. x2 y2 z2 − 288. x3 y2 z2 + 64. x4 y2 z2 + 16. y3 z2 − 48. x y3 z2 + 48. x2 y3 z2 − 16. x3 y3 z2 + 9. z4 − 39. x z4 + 67. x2 z4 − 57. x3 z4 + 24. x4 z4 − 4. x5 z4 + 6. y z4 − 22. x y z4 + 30. x2 y z4 − 18. x3 y z4 + 4. x4 y z4 + 1. y2 z4 − 3. x y2 z4 + 3. x2 y2 z4 − 1. x3 y2 z4

Mathematica factorizes over the integers (“not over the rationals”, and not over the algebraic numbers as long as they do not appear explicitly). This is not a big restriction for rational numbers. The following polynomial over the exact rationals is factored in such a way that the resulting polynomials have integer coefficients and is written out with a common denominator. In[12]:=

Expand[(1/4 - x)^3 (3/2 + y - 2x)^2 (z^2 + 8/5y)^2] // Factor 2

Out[12]=

H−1 + 4 xL3 H3 − 4 x + 2 yL2 H8 y + 5 z2 L − 6400

An interesting theoretical question is the following: Given a polynomial p = ⁄dk=0 ck xk of degree d with integer coefficients ck in the range - f § ck § f , what is the average number of factors of p [1413], [152], [1578], [525]? Here is a simulation for small d and f . In[13]:=

factorNumber[maxDegree_, maxCoefficient_] := Module[{x}, (* count factors *) If[Head[#] === Plus, 1, Length[#]]&[ (* factored random polynomial *) Factor[ Sum[Random[Integer, {-1, 1} maxCoefficient] x^i, {i, 0, maxDegree}]]]]

In[14]:=

Module[{n = 400, dMax = 12, fMax = 20, data}, (* use n random polynomials *) data = Table[Plus @@ Table[factorNumber[d, f], {n}], {d, 2, dMax}, {f, 1, fMax}]/n;

1.2 Operations on Polynomials

15

ListPlot3D[Log[10, data - 1], MeshRange -> {{2, dMax}, {1, fMax}}, PlotRange -> All]]

0 -0.5

20

-1

15

2

10 4 6

5

8 10 12

Polynomials that cannot be factored into multiple x-dependent factors are called irreducible [1439]. In[15]:=

irreducibleQ[poly_, x_] := With[{factors = Select[FactorList[poly], MemberQ[#, x, Infinity]&]}, If[(* at least to x-containing factors exits *) Length[factors] > 1 || (* powers *) factors[[1, 2]] > 1, False, True]] /; PolynomialQ[poly, x] && Exponent[poly, x] > 0

“Most” univariate polynomials are irreducible. The following graphic shows the reducible polynomials of a quadratic, cubic and quartic polynomial over the plane of two coefficients. Reducible polynomials occur along certain lines. In[16]:=

Show[GraphicsArray[#]]& @ Block[{o = 250, α, β}, ListDensityPlot[Table[If[TrueQ[Not[irreducibleQ[#, x]]], 0, 1], {α, -o, o}, {β, -o, o}], Mesh -> False, MeshRange -> {{-o, o}, {-o, o}}, DisplayFunction -> Identity]& /@ (* three polynomials with two parameters each *) {-2 + α x + β x^2, -4 + 3 x + α x^2 + β x^3, -4 - 3 x^2 + α x^3 + β x^4}] 200

200

200

100

100

100

0

0

0

-100

-100

-100

-200

-200

-200

-200 -100

0

100

200

-200 -100

0

100

200

-200 -100

0

100

200

`log np

Given the digits dk of an integer n in base b, we can naturally form the polynomial pb Hn; xL = ⁄k=0b dk xk . Interestingly, when n is a prime number, the polynomial p is irreducible [1424], [937]. In[17]:=

digitPolynomial[k_, b_, x_] := Plus @@ MapIndexed[#1 x^(#2[[1]] - 1)&, Reverse[IntegerDigits[k, b]]]

In[18]:=

(* checking a "random" prime in 100 bases *) Table[irreducibleQ[#, x]& @ digitPolynomial[Prime[123456789], b, x], {b, 2, 1001}] // Union 8True

All, AspectRatio -> 0.4, Frame -> True]] 200

150

100

50

0 1 µ 106

1.002 µ 106

1.004 µ 106

1.006 µ 106

1.008 µ 106

1.01 µ 106

Because no algebraic numbers are computed in the factorization of a univariate polynomial with integer (rational) coefficients, we have the following behavior. In[21]:=

{Factor[x^2 - a^2], Factor[x^2 - 2^2], Factor[x^2 - Sqrt[2]^2]} 8−Ha − xL Ha + xL, H−2 + xL H2 + xL, −2 + x2

Automatic]

1.2 Operations on Polynomials Out[23]=

17

2 è!!!! H 2 + xL

By giving a list of algebraic numbers, one can explicitly specify the extension field. Here, a quartic polynomial 4 è!!!!!! ! is factored. Adjoining 11 allows factoring x4 - 11 into two linear factors with real roots and one quadratic factor with two complex roots. In[24]:= Out[24]=

Adding In[25]:= Out[25]=

Factor[x^4 - 11, Extension -> (11)^(1/4)] è!!!!!!! −H111ê4 − xL H111ê4 + xL H 11 + x2 L

è!!!!!!!! -1 to the extensions allows for a complete factorization of x4 - 11 into linear factors. Factor[x^4 - 11, Extension -> {(11)^(1/4), I}] −H111ê4 − xL H111ê4 − xL H111ê4 + xL H111ê4 + xL

Finding an extension such that a polynomial will factor is largely equivalent to solving polynomial = 0. Here is an unsuccessful trial to factor 3 x3 + 7 x2 - 9. In[26]:= Out[26]=

Factor[x^4 - 3x^3 + 7 x^2 - 9] −9 + 7 x2 − 3 x3 + x4

Here, we use an extension such that the polynomial factors into one linear and one quadratic factor. In[27]:=

Out[27]=

Factor[3 x^3 + 7 x^2 - 9, Extension -> {(1501/2 - (27 Sqrt[2445])/2)^(1/3)}] 1ê3 1 − II−67228 + 4802 22ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 2490394032 è!!!!!!!!!!!!! 2ê3 è!!!!!!!!!!!!! 2ê3 1ê3 1ê3 è!!!!!!!!!!!!! 1501 2 H1501 − 27 2445 L + 27 2 2445 H1501 − 27 2445 L − 86436 xM è!!!!!!!!!!!!! 1ê3 è!!!!!!!!!!!!! 1ê3 2ê3 2ê3 è!!!!!!!!!!!!! I11907 2 H1501 − 27 2445 L + 147 2 2445 H1501 − 27 2445 L + è!!!!!!!!!!!!! 2ê3 è!!!!!!!!!!!!! 2ê3 1ê3 1ê3 è!!!!!!!!!!!!! 1701 2 H1501 − 27 2445 L + 21 2 2445 H1501 − 27 2445 L + 1ê3 2ê3 I134456 + 4802 22ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 1501 21ê3 H1501 − 27 è!!!!!!!!!!!! 2445! L + 27 21ê3

è!!!!!!!!!!!!! è!!!!!!!!!!!!! 2ê3 2445 H1501 − 27 2445 L M x + 86436 x2 MM

Be aware that factoring of polynomials is a rather complex process (see the general references given in the appendix), which takes some time. Here, the timings for the expansion of (C + 1)^i are compared with the timings for the factorization of the expanded object. (Be aware of the different degrees for Expand and Factor.) In[28]:=

Show[GraphicsArray[ ListPlot[(* reasonable units for a 2-GHz computer *) {#[[1]], 1000 #[[2, 1, 1]]}& /@ #[[1]], Frame -> True, PlotLabel -> #[[2]], FrameLabel -> {"degree", "milliSeconds"}, DisplayFunction -> Identity]& /@ (* Expand and Factor data *) {{(* clear caches for reliable timings *) Table[{i, Developer`ClearCache[]; Timing[Expand[(C + 1)^i];]}, {i, 0, 6000, 50}], "Expand"}, {Table[{i, Developer`ClearCache[]; Timing[Factor[#]]&[Expand[(C + 1)^i]]}, {i, 300}], "Factor"}}]]

Symbolic Computations

18 Expand

Factor

60

150 125 milliSeconds

milliSeconds

50 40 30 20 10

100 75 50 25

0

0 0

1000

2000

3000 4000 degree

5000

6000

0

50

100

150 degree

200

250

300

Let us take a graphical look at the result of expanding a power of a sum. Let the sum total be zero in the form 0 = ⁄n-1 j=0 expH2 p i j ê nL. Then the powers of this sum are also 0, and, by interpreting the partial sums of the expanded power as points in the complex plane, we get a closed path. In[29]:=

expandPicture[{n_, pow_}, opts___] := Show[Graphics[{Thickness[0.002], Line[{Re[#], Im[#]}& /@ (* form the partial sums *) FoldList[Plus, 0, N[(List @@ (* first make list, and then replace to avoid reordering *) (* now comes the expansion *) Expand[Sum[C[i], {i, 0, n - 1}]^pow]) /. C[i_] -> Exp[i I 2Pi/n]]]]}], opts, AspectRatio -> Automatic, Frame -> True, PlotRange -> All, FrameTicks -> None];

In[30]:=

Map[Show[GraphicsArray[ expandPicture[#, (* the parameters {Table[{3, i}, {i, Table[{5, i}, {i, Table[{8, i}, {i,

DisplayFunction -> Identity]& /@ #]]&, for the pictures *) 3, 21, 3}], Table[{4, i}, {i, 2, 20, 4}], 3, 15, 3}], Table[{6, i}, {i, 3, 12, 2}], 2, 8, 2}]}, {1}]

1.2 Operations on Polynomials

19

Here are three more complicated versions of such a graphic. We color the line segments from red to blue. In[31]:=

Show[GraphicsArray[ expandPicture[#, DisplayFunction -> Identity] /. Line[l_] :> With[{n = Length[l]}, MapIndexed[{Hue[0.78 #2[[1]]/n], Line[#1]}&, Partition[l, 2, 1]]]& /@ {{10, 10}, {16, 8}, {36, 4}}]]

Before discussing another algorithmically nontrivial operation for manipulating polynomials—namely, Decompose—we consider a method to rearrange an expression, if possible, into canonical polynomial form. PolynomialQ[polynomial, var] (we know this command from Chapter 5 of the Programming volume [1735]) tests whether polynomial is a polynomial in var. Be aware that PolynomialQ is a purely structural operation. While the expression Hcos2 H1L + sin2 H1L - 1L xx + 2 x2 - 1 is mathematically a polynomial, structurally it is not. As a function ending with Q, PolynomialQ has to return True of False and cannot stay unevaluated. But it is always possible to construct terms of the form hiddenZero nonPolynomialPart so that it is algorithmically undecidable hiddenZero is zero (Richardson theorem [1482], [1484], [1485], [442]). From this, it follows that to guarantee not to get wrong answers from PolynomialQ is not doomed to give wrong results sometimes, it must be a purely structural function. In[32]:= Out[32]=

PolynomialQ[(Sin[1]^2 + Cos[1]^2 - 1) x^x + 2 x^2 - 1, x] False

PolynomialQ[polynomial] tests if polynomial can be considered as a polynomial in at least one variable. Using Collect, we can now write an expression as an explicit polynomial in given variables.

Symbolic Computations

20

Collect[expression, {var1 , var2 , … , varn }, function] writes expression recursively as a polynomial in the variables vari Hi = 1, …, nL and applies the optional function function to the resulting coefficients. If function is omitted, the last argument is assumed to be Identity.

Here we again use our previous polynomial. In[33]:=

polyInxInyInz = (1 - x)^3 (3 + y - 2x)^2 (z^2 + 8y)^2;

Here is this expression as a polynomial in x. In[34]:= Out[34]=

Collect[polyInxInyInz, x] 2

2

2

−4 x5 H8 y + z2 L + x4 H24 + 4 yL H8 y + z2 L + x H−39 − 22 y − 3 y2 L H8 y + z2 L + 2 2

2 2

x H−57 − 18 y − y L H8 y + z L + H9 + 6 y + y L H8 y + z L + x H67 + 30 y + 3 y2 L H8 y + z2 L 3

2

2

2

2

The result of Collect depends on the form of its input. Collect will not expand or factor the resulting coefficients by default. In[35]:= Out[35]=

Collect[Expand[polyInxInyInz], x] 576 y2 + 384 y3 + 64 y4 + 144 y z2 + 96 y2 z2 + 16 y3 z2 + 9 z4 + 6 y z4 + y2 z4 + x5 H−256 y2 − 64 y z2 − 4 z4 L + x4 H1536 y2 + 256 y3 + 384 y z2 + 64 y2 z2 + 24 z4 + 4 y z4 L + x H−2496 y2 − 1408 y3 − 192 y4 − 624 y z2 − 352 y2 z2 − 48 y3 z2 − 39 z4 − 22 y z4 − 3 y2 z4 L + x3 H−3648 y2 − 1152 y3 − 64 y4 − 912 y z2 − 288 y2 z2 − 16 y3 z2 − 57 z4 − 18 y z4 − y2 z4 L + x2 H4288 y2 + 1920 y3 + 192 y4 + 1072 y z2 + 480 y2 z2 + 48 y3 z2 + 67 z4 + 30 y z4 + 3 y2 z4 L

Using the optional third argument of Collect, we can bring the coefficients to a canonical form. In[36]:= Out[36]=

Collect[Expand[polyInxInyInz], x, Factor] 2

2

−4 x5 H8 y + z2 L + H3 + yL2 H8 y + z2 L + 2 2

2

4 x4 H6 + yL H8 y + z L − x H3 + yL H13 + 3 yL H8 y + z2 L − 2 2

2

x3 H57 + 18 y + y2 L H8 y + z L + x2 H67 + 30 y + 3 y2 L H8 y + z2 L

Note that the individual terms are not strictly ordered; in particular, not all terms ( ∂ x0 ) appear one after the other. This is a consequence of the Flat and Orderless attributes of Plus and the canonical order. Here is the same expression as a polynomial in y. In[37]:= Out[37]=

Collect[polyInxInyInz, y] 64 H1 − xL3 y4 + H1 − xL3 y3 H384 − 256 x + 16 z2 L + H1 − xL3 y2 H576 − 768 x + 256 x2 + 96 z2 − 64 x z2 + z4 L + H1 − xL3 y H144 z2 − 192 x z2 + 64 x2 z2 + 6 z4 − 4 x z4 L + H1 − xL3 H9 z4 − 12 x z4 + 4 x2 z4 L

Here it is again as a polynomial in z. In[38]:= Out[38]=

Collect[polyInxInyInz, z] 64 H1 − xL3 y2 H3 − 2 x + yL2 + 16 H1 − xL3 y H3 − 2 x + yL2 z2 + H1 − xL3 H3 − 2 x + yL2 z4

Using as the second argument in Collect the list {x, y} results in a polynomial in x, whose coefficients are polynomials in y, whose coefficients are polynomials in z. In[39]:= Out[39]=

Collect[polyInxInyInz, {x, y}] 64 y4 + 9 z4 + y3 H384 + 16 z2 L + x5 H−256 y2 − 64 y z2 − 4 z4 L + y2 H576 + 96 z2 + z4 L + y H144 z2 + 6 z4 L + x H−192 y4 − 39 z4 + y3 H−1408 − 48 z2 L + y H−624 z2 − 22 z4 L + y2 H−2496 − 352 z2 − 3 z4 LL +

1.2 Operations on Polynomials

21

x3 H−64 y4 − 57 z4 + y3 H−1152 − 16 z2 L + y H−912 z2 − 18 z4 L + y2 H−3648 − 288 z2 − z4 LL + x4 H256 y3 + 24 z4 + y2 H1536 + 64 z2 L + y H384 z2 + 4 z4 LL + x2 H192 y4 + 67 z4 + y3 H1920 + 48 z2 L + y2 H4288 + 480 z2 + 3 z4 L + y H1072 z2 + 30 z4 LL

Here we apply the function C to each of the coefficients in z. In[40]:=

Collect[polyInxInyInz, {x, y}, C] y4 C@64D + x5 Hy2 C@−256D + y C@−64 z2 D + C@−4 z4 DL + C@9 z4 D + y3 C@384 + 16 z2 D + x Hy4 C@−192D + C@−39 z4 D + y3 C@−1408 − 48 z2 D + y C@−624 z2 − 22 z4 D + y2 C@−2496 − 352 z2 − 3 z4 DL + x3 Hy4 C@−64D + C@−57 z4 D + y3 C@−1152 − 16 z2 D + y C@−912 z2 − 18 z4 D + y2 C@−3648 − 288 z2 − z4 DL + y2 C@576 + 96 z2 + z4 D + 4 x Hy3 C@256D + C@24 z4 D + y2 C@1536 + 64 z2 D + y C@384 z2 + 4 z4 DL + y C@144 z2 + 6 z4 D + x2 Hy4 C@192D + C@67 z4 D + y3 C@1920 + 48 z2 D + y2 C@4288 + 480 z2 + 3 z4 D + y C@1072 z2 + 30 z4 DL

Out[40]=

The second argument in Collect need not be an atomic expression, and thus the following expression will be written as a polynomial over co[x]. In[41]:=

Collect[Expand[(co[x] + 4 si[z] + 5 co[x]^3)^4], co[x]] 150 co@xD8 + 500 co@xD10 + 625 co@xD12 + 240 co@xD5 si@zD + 1200 co@xD7 si@zD + 2000 co@xD9 si@zD + 96 co@xD2 si@zD2 + 256 co@xD si@zD3 + 256 si@zD4 + co@xD4 H1 + 960 si@zD2 L + co@xD6 H20 + 2400 si@zD2 L + co@xD3 H16 si@zD + 1280 si@zD3 L

Out[41]=

Collect only reorders. It does not carry out any “mathematical meaning- or content-dependent manipulations” . It only looks at the syntactical structure of expressions. (This means that in the following example, Cos[x]^2 is not rewritten as 1 - Sin[x]^2.) In[42]:= Out[42]=

Collect[Sin[x]^2 + (Cos[x]^2 + Sin[x]^2)^3 + 3 Sin[x]^3 + 7, Sin[x]] 7 + Cos@xD6 + H1 + 3 Cos@xD4 L Sin@xD2 + 3 Sin@xD3 + 3 Cos@xD2 Sin@xD4 + Sin@xD6

Given an expression in several variables, we can use Variables to identify in which variables the expression is a polynomial. Variables[expression] produces a list of the variables in which expression is a polynomial.

For the above polyInxInyInz, we get the expected result. In[43]:= Out[43]=

Variables[polyInxInyInz] 8x, y, z

0 jj$ X ,X >0 jj"x,x>X ﬂlœ l - ¶ < ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 7 x3 - 5 x + 9 k k {{ In[156]:=

Out[156]=

ForAll[∂, ∂ > 0, Exists[X, X > 0, ForAll[x, x > X && Element[, Reals], - ∂ < (2x^3 - 4x + 6)/(7x^3 - 5x + 9) < + ∂]]] // Resolve 2 7

1.3 Operations on Rational Functions The commands Numerator and Denominator introduced at the beginning (Subsection 2.4.1 of the Programming volume [1735]) work for rational numbers and for rational functions, that is, for fractions of polynomials. Here is an example. In[1]:= Out[1]= In[2]:= Out[2]= In[3]:= Out[3]=

ratio = (3 + 6 x + 6 x^2)/(5 y + 6 y^3) 3 + 6 x + 6 x2 5 y + 6 y3 Numerator[ratio] 3 + 6 x + 6 x2 Denominator[ratio] 5 y + 6 y3

The parts of a product that belong to the numerator and the parts that belong to the denominator are determined by the sign of the associated exponents after transformations of the form 1/k^(-l) Ø k^l. Here is an expression that is a product of ten factors. After evaluation, eight have a positive exponent and four have a negative exponent. Some of the negative exponent terms are formatted in the denominator. (Be aware that Exp[expr] is rewritten is Power[E, expr] and the explicit formatting depends on expr.) In[4]:=

expr = a b^2 c^-2 d^(4/3) e^-(5/6) 1/f^(-12/13) g^h i^-j 1/k^-l Exp[-E^2] 2

Out[4]= In[5]:= Out[5]=

a b2 d4ê3 − f12ê13 gh i−j kl c2 e5ê6 {Numerator[expr], Denominator[expr]} 2

9a b2 d4ê3 f12ê13 gh kl , c2 e5ê6 ij =

Here is a nested fraction.

1.3 Operations on Rational Functions In[6]:= Out[6]=

79

nestedFraction = (a/(b + 1) + 2)/(c/(d + 3) + 4) a 2 + 1+b c 4 + 3+d

For nested fractions, the functions Numerator and Denominator take into account only the “outermost” structure. In[7]:= Out[7]=

{Numerator[#], Denominator[#]}&[nestedFraction] a c 92 + , 4 + = 1+b 3+d

For fractions, the command Expand, which multiplies out polynomials, is divided into four parts to facilitate working on the numerators and denominators. Expand[rationalFunction] multiplies out only the numerator of the rationalFunction, and divides all resulting terms by the (unchanged) denominator. ExpandNumerator[rationalFunction] multiplies out only the numerator of the rationalFunction, and divides the result as a single expression by the (unchanged) denominator. ExpandDenominator[rationalFunction] multiplies out only the denominator of the rationalFunction. ExpandAll[rationalFunction] multiplies out the numerator and denominator of rationalFunction, and divides all resulting terms.

We now look at the effect of these four commands on the sum of two ratios of polynomials. In[8]:= Out[8]=

ratio = (1 + 7 y^3)/(2 + 8 x^3)^2 + (1 + 6 x)^3/(1 - 4 y)^2 H1 + 6 xL3 1 + 7 y3 + H1 − 4 yL2 H2 + 8 x3 L2

Except for the lexicographic reordering of the two partial sums, Mathematica did not do anything nontrivial to this input automatically. Now, all numerators are multiplied out, and all resulting parts are individually divided. In[9]:= Out[9]=

Expand[ratio] 1 1 18 x 108 x2 216 x3 7 y3 + + + + + H1 − 4 yL2 H1 − 4 yL2 H1 − 4 yL2 H1 − 4 yL2 H2 + 8 x3 L2 H2 + 8 x3 L2

ExpandNumerator also multiplies out, but does not divide the terms individually. In[10]:= Out[10]=

ExpandNumerator[ratio] 1 + 18 x + 108 x2 + 216 x3 1 + 7 y3 + H1 − 4 yL2 H2 + 8 x3 L2

With ExpandDenominator, the numerator remains unchanged, and only the denominator is multiplied out. In[11]:= Out[11]=

ExpandDenominator[ratio] 1 + 7 y3 H1 + 6 xL3 + 1 − 8 y + 16 y2 4 + 32 x3 + 64 x6

Finally, we multiply everything out. ExpandAll typically produces the largest expressions. Now, the numerator is multiplied out and each of its terms is written over the expanded form of the denominator. In[12]:=

ExpandAll[ratio]

Symbolic Computations

80

Out[12]=

1 7 y3 + + 4 + 32 x3 + 64 x6 4 + 32 x3 + 64 x6 1 18 x 108 x2 216 x3 + + + 1 − 8 y + 16 y2 1 − 8 y + 16 y2 1 − 8 y + 16 y2 1 − 8 y + 16 y2

In the following example, the same happens. Be aware that no common factors are cancelled. In[13]:= Out[13]=

ExpandAll[(1 - x^4)/(1 + x^2)^2] 1 x4 − 1 + 2 x2 + x4 1 + 2 x2 + x4

In nested fractions, ExpandAll works again only the “outermost” structure. In[14]:= Out[14]=

ExpandAll[nestedFraction] 2 a c + c 4 + H1 + bL H4 + L 3+d 3+d

Mapping ExpandAll to all parts allows us to expand all parts. In[15]:=

Out[16]=

(* show all steps *) FixedPointList[MapAll[ExpandAll, #]&, nestedFraction] a 2 + a 1+b , 2 9 c c + c , 4 + 4 + H1 + bL H4 + L 3+d 3+d 3+d a 2 a 2 c + c + c b c , c b c = 4 + 4 + 4 + 4 b + 4 + 4 b + + + 3+d 3+d 3+d 3+d 3+d 3+d

Be aware that there is a difference between Expand //@ expr and ExpandAll[expr]. ExpandAll does not automatically recursively expanded in the inner levels of expressions with Hold-like attributes. In[17]:= Out[17]=

{Expand //@ #, ExpandAll[#]}&[(α + β)^2 + Hold[(α + β)^2]] 8α2 + 2 α β + β2 + Hold@Expand@Expand@Expand@αD + Expand@βDDExpand@2D DD, α2 + 2 α β + β2 + Hold@Hα + βL2 D

{z}] -> z''[x]} 2 z@xDz@xD + 4 x z@xDz@xD Hz @xD + Log@z@xDD z @xDL + z @xD2 z i i @xD + Log@z@xDD z @xD + y x2 j y jz@xDz@xD Hz @xD + Log@z@xDD z @xDL2 + z@xDz@xD j jz z zz z@xD {{ k k

In[8]:= Out[8]=

Expand[ex3 - ex1] x2 z@xDz@xD z @xD + x2 Log@z@xDD z@xDz@xD z @xD − x2 z@xDz@xD z @xD − x2 Log@z@xDD z@xDz@xD z @xD

For a function of several variables, the derivative is not represented by ' in output form, but instead by using numbers in parentheses to specify how many times to differentiate with respect to the corresponding variable. In[9]:= Out[9]=

D[func[t, , ], {t, 2}, {, 3}, {, 3}] funcH2,3,3L @t, , D

Mathematica is able to explicitly differentiate nearly all special functions with respect to their “argument”, but only a few special functions with respect to their “parameters”. Here is the derivative of LegendreP with respect to its first argument. In[10]:= Out[10]=

D[LegendreP[n, z], {n, 1}] LegendrePH1,0L @n, zD

Numerically, these quantities can still be calculated. In[11]:= Out[11]=

N[% /. {z -> 2.04, n -> 0.567}] 1.07925

Here is a high-precision evaluation of the last derivative. In[12]:= Out[12]=

N[%% /. {z -> 204/100, n -> 567/1000}, 22] 1.079254609237523525024

Here, we have to make a remark about the numerical differentiation encountered in the last examples. Whereas the last derivative was evaluated “just fine” by Mathematica, the following “simple” derivative does “not work”. In[13]:=

Abs'[1.]

Out[13]=

Abs @1.D

The reason Abs'[inexactNumber] does not evaluate is the fact that the derivative of Abs does not exist. The derivative is (by definition) the limit limdØ0 H f Hz + dL - f HzLL ê d for any complex d. But for Abs, the result

1.6 Classical Analysis

131

depends on the direction of d approaching 0. Let z = 1; then we have the following result. (Here, we use the soon-to-be-discussed function Limit.) In[14]:=

Out[14]=

absDeriv[zr_, zi_, ϕ_] = With[{z = zr + I zi, δz = δ Exp[I ϕ]}, Limit[ComplexExpand[(Abs[z + δz] - Abs[z])/δz, TargetFunctions -> {Re, Im}], δ -> 0]] HCos@ϕD − Sin@ϕDL Hzr Cos@ϕD + zi Sin@ϕDL è!!!!!!!!!!!!!!!!!!!!!! zi2 + zr2

Here is the direction dependence for z = 1 as a function of argHz - 1L. The two curves show the real and the imaginary parts of absDeriv[1, 0, j]. In[15]:=

Plot[{Re[absDeriv[1, 0, ϕ]], Im[absDeriv[1, 0, ϕ]]}, {ϕ, 0, 2Pi}, PlotRange -> All, Frame -> True, Axes -> False, PlotStyle -> {{Thickness[0.005]}, {Thickness[0.005], Dashing[{0.01, 0.01}]}}] 1 0.8 0.6 0.4 0.2 0 -0.2 -0.4 0

1

2

3

4

5

6

The numerical derivative is always taken for purely real δz. To use the numerical differentiation, we “hide” Abs in the following definition of abs. In[16]:=

(* make abs a numerical function for numerical arguments *) SetAttributes[abs, NumericFunction]; abs[x_?InexactNumberQ] := Abs[x]

For purely real δz and complex z, we have the following behavior of the “derivative”. The right picture shows the result of the numerical differentiation. In[19]:=

Show[GraphicsArray[ Plot3D[#, {x, -1, 1}, {y, -1, 1}, DisplayFunction -> Identity, PlotPoints -> 25]& /@ {absDeriv[x, y, 0], abs'[x + I y]}]]

1 0.5 0 -0.5 -1 -1 1

1 0.5 0 -0.5

-0.5

0 0.5 1

-1

1 0.5 0 -0.5 -1 -1 1

1 0.5 0 -0.5

-0.5

0 0.5 1

-1

While the last two pictures look “sensible”, checking the difference between absDeriv[x, y, 0] and abs' more carefully, we see near the origin the effects of the numerical differentiation.

Symbolic Computations

132 In[20]:=

Plot3D[Abs[absDeriv[x, y, 0] - abs'[x + I y]], {x, -1/4, 1/4}, {y, -1/4, 1/4}, PlotRange -> All, PlotPoints -> 50]

0.4

0.2

0.2

0.1

0 -0.2

0 -0.1

0

-0.1 0.1

-0.2

0.2 0 2

The following picture shows the points used in the numerical differentiation process near the high-precision number 1. In[21]:=

abs[x_?InexactNumberQ] := ((* collect values *) Sow[x]; Abs[x]); Show[Graphics[Line[{{#, 0}, {#, Abs[#]}}]& /@ (* evaluate derivative and return sampled x-values *) Reap[abs'[1``100]][[2, 1]]], PlotRange -> All, Frame -> True] 1.4 1.2 1 0.8 0.6 0.4 0.2 0 0.6

0.8

1

1.2

1.4

From the last result, we conclude that we should not trust the numerical derivatives of discontinuous functions or quickly oscillating functions. Here is the numerical derivative of a step-like function (be aware that we display f HxL and f £ HxL ê 60). In[23]:=

SetAttributes[meander, NumericFunction]; meander[x_?NumberQ] := Sin[x]/Sqrt[Sin[x]^2] Plot[{meander[x], meander'[SetPrecision[x, 100]]/60}, {x, 2, 4}, PlotStyle -> {Hue[0], GrayLevel[0]}, PlotRange -> All, Compiled -> False]

1.6 Classical Analysis

133 1 0.5 2.5

3

3.5

4

-0.5 -1 -1.5 -2

One word of caution is in order here: If very reliable high-precision numerical values of derivatives are needed, it is safer to use the function ND from the package NumericalMath`NLimit` instead of numericalizing symbolic expressions containing unevaluated derivatives using N. N[unevaluatedDerivative] has to choose a scale for sample points. Like other numerical functions, no symbolic analysis of unevaluatedDerivative is carried out, and as a result, the chosen scale may result in mathematically wrong values for the derivative (this is especially the case for values of derivatives near singularities; but in principle higher-order numerical differentiation is difficult [1331]). We should make another remark concerning a potential pitfall when differentiating. Mathematica tacitly assumes that the differentiation with respect to different variables can be interchanged (a condition which is fulfilled for most functions used in practical calculations). This means that for the following well-known example strange Func, we do not get the “expected” derivatives for all values x and y if we specialize x- or y-values in intermediate steps. In[26]:=

strangeFunc[0, 0] = 0; strangeFunc[x_, y_] = x y (x^2 - y^2)/(x^2 + y^2);

Here is the function, its first derivative with respect to x, and its mixed second derivative are shown over the x,y-plane. In[28]:=

Show[GraphicsArray[ Plot3D[Evaluate[D[strangeFunc[x, y], ##]], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 121, Mesh -> False, PlotRange -> All, DisplayFunction -> Identity]& @@@ (* function, one first derivative, and mixed second derivative *) {{}, {x}, {x, y}}]]

0.2 0 -0.2 -1

1 0.5 -0.5

0 0

-0.5 0.5

1

-1

1 0.5 0 -0.5 -1 -1

1 0.5 -0.5

0 0

1 0 -1 -1

-0.5 0.5

1

-1

1 0.5 -0.5

0 0

-0.5 0.5

1

-1

First, we differentiate with respect to x and evaluate at 80, y 0}, x]}& @ strangeFunc[x, y] 8−1, 1

0, 1D

For some applications, the following result is not desired, although correct almost everywhere with respect to the 1D Lebesgue measure on the real axis. In[34]:= Out[34]=

D[%, x] Which@x ≤ 0, 0, x > 0, 0D

Differentiating a univariate piecewise function (head Piecewise) gives the value Indeterminate at the position of the discontinuity. In[35]:= Out[35]=

D[Piecewise[{{1, x > 1}}], x] 0 x < 1 »» x > 1 µ Indeterminate True

Differentiating a multivariate piecewise function (head Piecewise) gives the result valid almost everywhere (including lower-dimensional curves where the result is not pointwise correct). In[36]:= Out[36]=

D[Piecewise[{{1, x + y > 1}}], x] 0

The Mathematica kernel does not recognize the derivative of such discontinuous functions as proportional to Dirac delta function by default (but see Section 1.8). Other case-sensitive functions are differentiated in a similar way.

1.6 Classical Analysis In[37]:= Out[37]=

135

D[If[true, false[unknownVariable], dontKnow[unknownVariable]], unknownVariable] If@true, false @unknownVariableD, dontKnow @unknownVariableDD

This means that D does generically not act in a distributional sense, not for the above-mentioned case of piecewise-defined functions and not for other “closed-form” functions. The following two expressions are so-called differential algebraic constants. In[38]:= Out[38]=

D[{(Sqrt[x^2] - x)/(2x), (Log[x^2] - 2 Log[x])}, x] // Together 80, 0

True, Axes -> False], Plot3D[Im[(Log[(x + I y)^2] - 2 Log[x + I y])], {x, -1, 1}, {y, -1, 1}, PlotPoints -> 30]}]]] 0 -0.2 -0.4

5 1

0

-0.6

0.5

-5 0

-1 1

-0.8

-0.5

-0.5

0 0.5

-1 -1

-0.5

0

0.5

1

1

-1

x

As a result, for such functions the identity Ÿx ∑ f HxL ê ∑ x „ x f HxL - f Hx0 L does not hold for all generic x0 and x. 9 Using a generalized function UnitStep (to be discussed below) and its derivatives (Dirac d function Dirac Delta and its derivative), the last identity hold almost everywhere. In[40]:=

intDiffIdentity[f_, {x0_, x_}] := Integrate[D[f[ξ], ξ], {ξ, x0, x}] == f[x] - f[x0]

In[41]:=

{(* generic, symbolic, not explicitly defined function *) intDiffIdentity[ , {x0, x}], (* distribution--can be differentiated freely *) intDiffIdentity[UnitStep, {-1, 1}], (* piecewise functions ignore jump contributions *) intDiffIdentity[Piecewise[{{1, # >= 0}}]&, {-1, 1}]} 8True, True, False

_]] 8DifferentiationOptions → 8AlwaysThreadGradients → False, DifferentiateHeads → True, DirectHighDerivatives → True, ExcludedFunctions → 8Hold, HoldComplete, Less, LessEqual, Greater, GreaterEqual, Inequality, Unequal, Nand, Nor, Xor, Not, Element, Exists, ForAll, Implies, Positive, Negative, NonPositive, NonNegative False)]; D[fh, {x, 6}] 4 4 4 4 4 4320 x x2 + 5760 x x6 + 11520 x x H1 + xL + 69120 x x5 H1 + xL + 30720 x x9 H1 + xL + 4

4

4

4

4320 x H1 + xL2 + 146880 x x4 H1 + xL2 + 207360 x x8 H1 + xL2 + 46080 x x12 H1 + xL2 + x4

80640

x4

x H1 + xL + 299520 3

4

3

x H1 + xL + 184320 7

4

3

x4

x

11

H1 + xL + 3

4

24576 x x15 H1 + xL3 + 10080 x x2 H1 + xL4 + 100800 x x6 H1 + xL4 + 4

4

4

134400 x x10 H1 + xL4 + 46080 x x14 H1 + xL4 + 4096 x x18 H1 + xL4 In[48]:= Out[48]=

Expand[%%% - %] 0

There is no general rule regarding what is preferable, the setting "DirectHighDerivatives" -> False or "DirectHighDerivatives" -> True . In many cases, "DirectHighDerivatives" -> True will be much faster, but will produce larger results. Here is a “typical” example. In[49]:=

Out[49]=

With[{f = x^3 Log[x^5 + 1] Exp[-x^2], n = 50}, {Developer`SetSystemOptions["DifferentiationOptions" -> ("DirectHighDerivatives" -> True)]; Timing[ByteCount[D[f, {x, n}]]], Developer`SetSystemOptions["DifferentiationOptions" -> ("DirectHighDerivatives" -> False)]; Timing[ByteCount[D[f, {x, n}]]]}] 880.06 Second, 934776 ("DirectHighDerivatives" -> False)]; Timing[ByteCount[D[f, {x, n}]]]}] 8813.47 Second, 16 {"ExcludedFunctions" -> Append["ExcludedFunctions" /. ("DifferentiationOptions" /. Developer`SystemOptions[]), ]}] DifferentiationOptions → 8AlwaysThreadGradients → False, DifferentiateHeads → True, DirectHighDerivatives → True, ExcludedFunctions → 8Hold, HoldComplete, Less, LessEqual, Greater, GreaterEqual, Inequality, Unequal, Nand, Nor, Xor, Not, Element, Exists, ForAll, Implies, Positive, Negative, NonPositive, NonNegative, True}];

In[56]:=

D[ [[x]], x]

Out[56]= In[57]:=

∂x @@xDD (* restore old settings *) Developer`SetSystemOptions[ "DifferentiationOptions" -> {"ExitOnFailure" -> False}];

The next input makes use of fairly high derivatives. We visualize the (normalized) coefficients cHnL k appearing in n-1

1 1 ∑n k ÅÅÅÅÅÅÅÅÅnÅÅÅ J ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ‚ cHnL k ln HzL. n ∑ z lnHzL z lnn+1 HzL k=0 Here, we use n = 1, 2, …, 200.

Symbolic Computations

138 In[59]:=

Show[Graphics3D[ Table[(* the derivative *) deriv = D[1/Log[z], {z, n}]; (* the coefficients *) cl = CoefficientList[Expand[ (-1)^n z^n Log[z]^(n + 1)deriv], Log[z]]; (* colored line *) {Hue[n/ 250], Line[MapIndexed[{#2[[1]] - 1, n, #1}&, (* normalize coefficients *) cl/Max[Abs[cl]]]]}, {n, 200}]], BoxRatios -> {1, 3/2, 0.5}, PlotRange -> All, Axes -> True, ViewPoint -> {0, 3, 1}] 0 50 100 00 150 0 200 1 0.75 0.5 0.25 0 200

150

100

50

0

In addition to explicitly given functions, Mathematica is also able to differentiate certain abstract function(al)s of functions, for example, indefinite integrals or inverse functions [75], [942], [1724], [1622]. In[60]:=

Table[D[InverseFunction[Υ][z], {z, i}], {i, 3}] 1 3 Υ @ΥH−1L @zDD ΥH3L @ΥH−1L @zDD Υ @ΥH−1L @zDD , = 9 , − − Υ @ΥH−1L @zDD Υ @ΥH−1L @zDD3 Υ @ΥH−1L @zDD4 Υ @ΥH−1L @zDD5 2

Out[60]=

Sometimes it is convenient to form derivatives of multivariate functions with respect to all independent variables at once (especially in carrying out vector analysis operations). This can be done with the following syntax. D[toDifferentiate, {vector, n}] differentiates toDifferentiate n times with respect to the vector variable vector. If the n is omitted, it is assumed to be 1.

Here are the first and second derivatives of the scalar function f , that depends on x, y, and z. In[61]:= Out[61]= In[62]:= Out[62]=

D[f[x, y, z], {{x, y, z}, 1}] 8fH1,0,0L @x, y, zD, fH0,1,0L @x, y, zD, fH0,0,1L @x, y, zD< D[f[x, y, z], {{x, y, z}, 2}] 88fH2,0,0L @x, y, zD, fH1,1,0L @x, y, zD, fH1,0,1L @x, y, zD z; Γllu[a_, b_, c_] := Γllu[a, b, c] = (* raise third index with g *) Together[Sum[Γlll[a, b, d] guu[[c, d]], {d, 2}]] /. z[x, y] -> z

The two equations for the geodesics are of the form x≥ HtL = FHxHtL, yHtL, zHtL, x£ HtL, y£ HtL, z£ HtLL and similar for y≥ HtL. To make sure the geodesics stay on the original, implicitly defined surface, and to obtain three equations for the three coordinates, we supplement the two geodesic equations with the differentiated form of the implicit equation (meaning Hgrad holedCubeHxHtL, yHtL, zHtLLL.8x£ HtL, y£ HtL, z£ HtL< = 0 [332]. In[154]:=

geodesicEquations = (# == 0)& /@ Append[ (* second-order odes for x[τ] and y[τ] *) Table[D[[c][τ], τ, τ] + Sum[Γllu[a, b, c] D[[a][τ], τ] D[[b][τ], τ], {a, 2}, {b, 2}], {c, 2}] /. Derivative[n_][xy_][τ] :> Derivative[n][xy], (* differentiated form of the implicit equation of the surface *) D[holedCube /. {x -> x[τ], y -> y[τ], z[x, y] -> z[τ]}, τ] /. xyz_[τ] -> xyz] /. {x -> x[τ], y -> y[τ], z -> z[τ]} /. Derivative[n_][xy_[τ]] :> Derivative[n][xy][τ];

In[155]:=

((geodesicEquations // Simplify) /. {ξ_[τ] -> ξ}) // Simplify // TraditionalForm

Out[155]//TraditionalForm= 2

9I2 x2 y H2 y2 - 1L H6 z2 - 1L x£ y£ H1 - 2 x2 L + 2

x H2 x2 - 1L I4 H6 z2 - 1L x6 + H4 - 24 z2 L x4 + H24 z6 - 24 z4 + 12 z2 - 1L x2 - z2 H1 - 2 z2 L M Hx£ L2 + 2

x H2 x2 - 1L I4 H6 z2 - 1L y6 + H4 - 24 z2 L y4 + H24 z6 - 24 z4 + 12 z2 - 1L y2 - z2 H1 - 2 z2 L M Hy£ L2 + 2

2

z2 H1 - 2 z2 L I4 x6 - 4 x4 + x2 + 4 y6 - 4 y4 + y2 + z2 H1 - 2 z2 L M x££ M ë 2

IH2 z3 - zL H4 x6 - 4 x4 + x2 + 4 y6 + 4 z6 - 4 y4 - 4 z4 + y2 + z2 LM 0, 2

I2 x H2 x2 - 1L y2 H6 z2 - 1L x£ y£ H1 - 2 y2 L +

Symbolic Computations

150 2

y H2 y2 - 1L I4 H6 z2 - 1L x6 + H4 - 24 z2 L x4 + H24 z6 - 24 z4 + 12 z2 - 1L x2 - z2 H1 - 2 z2 L M Hx£ L2 + 2

y H2 y2 - 1L I4 H6 z2 - 1L y6 + H4 - 24 z2 L y4 + H24 z6 - 24 z4 + 12 z2 - 1L y2 - z2 H1 - 2 z2 L M Hy£ L2 + 2 2

2

6

4

2

6

4

2

2

2 2

££

z H1 - 2 z L I4 x - 4 x + x + 4 y - 4 y + y + z H1 - 2 z L M y M ë 2

IH2 z3 - zL H4 x6 - 4 x4 + x2 + 4 y6 + 4 z6 - 4 y4 - 4 z4 + y2 + z2 LM 0, 2

£

x H2 x - 1L x + y H2 y2 - 1L y£ + z H2 z2 - 1L z£ 0=

For a nicer visualization, we calculate a graphic of the surface. In[156]:=

Needs["Graphics`ContourPlot3D`"]

In[157]:=

holedCubeGraphic3D = Graphics3D[{EdgeForm[], (* map in other positions *) Fold[Function[{p, r}, {p, Map[# r&, p, {-2}]}], (* 3D contour plot of holedCube in the first octant *) Cases[ContourPlot3D[Evaluate[holedCube /. z[x, y] -> z], {x, 0, 1.1}, {y, 0, 1.1}, {z, 0, 1.1}, PlotPoints -> {{24, 2}, {20, 2}, {12, 2}}, MaxRecursion -> 1, DisplayFunction -> Identity], _Polygon, {0, Infinity}] /. Polygon[l_] :> (* make diamonds *) Polygon[Plus @@@ Partition[Append[l, First[l]], 2, 1]/2], {{-1, 1, 1}, {1, -1, 1}, {1, 1, -1}}]}];

We visualize the geodesics as lines on the surface. To avoid visually unpleasant intersections between the discretized surface and the discretized geodesics, we define a function liftUp that lifts the geodesics slightly in direction of the local surface normal. In[158]:=

normal[{x_, y_, z_}] = (* gradient gives the normal *) D[holedCube /. z[x, y] -> z, #]& /@ {x, y, z} // Expand;

In[159]:=

liftUp[{x_, y_, z_}, ∂_] = (* move in direction of normal *) {x, y, z} - ∂ #/Sqrt[#.#]&[normal[{x, y, z}]];

In the following graphic, we calculate 64 geodesics. We choose the starting points along the upper front “beam”. The function rStart parametrizes the starting values. In[160]:=

rStart[ϕ_] := rStart[ϕ] = Module[{r}, r /. FindRoot[Evaluate[ holedCube == 0 /. {x -> 0, y -> 0.7 + r Cos[ϕ], z[x, y] -> 0.7 + r Sin[ϕ]}], {r, 0, 1/3}]]

Here are the resulting geodesics. On the nearby smoothed corners of the cube, we see the to-be-expected caustics. In[161]:=

Module[{o = 128, T = 6, , τ1, τ2}, Show[{(* the surface *) holedCubeGraphic3D, Table[ (* solve differential equations for geodesics *) (* avoid messages from caustics that run in problems *) Internal`DeactivateMessages[ nsol = NDSolve[Join[geodesicEquations, (* starting values *) {x[0] == 0, y[0] == 0.7 + rStart[ϕ] Cos[ϕ], z[0] == 0.7 + rStart[ϕ] Sin[ϕ], x'[0] == 1, y'[0] == 0}], {x, y, z}, {τ, -T, T}, MaxSteps -> 2 10^4, PrecisionGoal -> 6, AccuracyGoal -> 6, (* use appropriate method *) Method -> {"Projection", Method -> "StiffnessSwitching",

1.6 Classical Analysis

151

(* stay on surface *) "Invariants" -> {holedCube /. {x -> x[τ], y -> y[τ], z[x, y] -> z[τ]}}}]]; (* parametrized geodesics *) [τ_] := (Append[liftUp[{x[τ], y[τ], z[τ]} /. nsol[[1]], 0.015], {Thickness[0.003], Hue[ϕ/(2Pi)]}]); (* for larger T *) {τ1, τ2} = nsol[[1, 1, 2, 1, 1]]; (* show surface and geodesics *) ParametricPlot3D[[τ], {τ, τ1, τ2}, Compiled -> False, PlotPoints -> Round[200 (τ2 - τ1)], DisplayFunction -> Identity], {ϕ, 0, 2Pi (1 - 1/o), 2Pi/o}]}, DisplayFunction -> $DisplayFunction, Boxed -> False, Axes -> False, ViewPoint -> {2.2, 2.4, 1.6}]]

We end here and leave it to the reader to calculate euthygrammes [885]. For large-scale calculations of this kind arising in general relativity (see [364]), we recommend the advanced (commercially available) Mathematica package MathTensor by L. Parker and S. Christensen [1379] (http://smc.vnet.net/MathSolutions.html); or the package Cartan by H. Soreng (http://store.wolfram.com/view/cartan). For the algorithmic simplification of tensor expressions, see [120] and [1430]. Next, we give an application of differentiation involving graphics: the evolute of a curve, the evolute of the evolute of a curve, the evolute of the evolute of the evolute of a curve, etc. [271].

Mathematical Remark: Evolutes The evolute of a curve is the set of all centers of curvature associated with the curve. For a planar curve given in the parametric form HxHtL, yHtLL, the parametric representation of its evolute is: ij xHtL - y£ HtL Hx£ HtL2 + y£ HtL2 L yHtL + x£ HtL Hx£ HtL2 + y£ HtL2 L yz j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z. x£ HtL y££ HtL - y£ HtL x££ HtL { k x£ HtL y££ HtL - y£ HtL x££ HtL For more on evolutes and related topics, see any textbook on differential geometry, for example, [1119], [1404], [492], [764], and [609]. For curves that are their own evolutes, see [1864].

1

Here we implement the definition directly. We apply Together to get one fraction. The optional function simp simplifies the resulting expressions in a user-specified way. In[162]:=

Evolute[{x_, y_}, t_, simp_:Identity] := simp[{x - #2(#1^2 + #2^2)/(#1 #4 - #2 #3), y + #1(#1^2 + #2^2)/(#1 #4 - #2 #3)}&[ (* compute all derivatives only once *)

Symbolic Computations

152 D[x, t], D[y, t], D[x, {t, 2}], D[y, {t, 2}]] // (* avoid blow-up in size of iterated form *) Together]

For a circle, the set of all centers of curvature is precisely the center of the circle. In[163]:= Out[163]=

Evolute[{Cos[ϑ], Sin[ϑ]}, ϑ] 80, 0

a})&] a2 Cos@ϑD3 − b2 Cos@ϑD3 −a2 Sin@ϑD3 + b2 Sin@ϑD3 9 , = a b

We now iterate the formation of evolutes starting with an ellipse, and we graph the resulting evolutes. We use ten ellipses with different half-axes ratios. The right graphic shows a magnified view of the center region of the left graphic. In[166]:=

Show[GraphicsArray[{Show[#], Show[#, PlotRange -> {{-3, 3}, {-3, 3}}]}&[ Table[ParametricPlot[Evaluate[(* nest forming the evolute *) NestList[Evolute[#, ϑ, (# /. {a_. Sin[ϑ]^2 + a_.Cos[ϑ]^2 -> a})&]&, {α Cos[ϑ], 2 Sin[ϑ]}, 4]], {ϑ, 0, 2Pi}, PlotStyle -> {Hue[0.78 (α - 3/2)]}, Axes -> False, PlotRange -> All, AspectRatio -> 1, DisplayFunction -> Identity, PlotPoints -> 140], (* values of α *) {α, 3/2, 5/2, 1/11}]]]]

A rich field for the generation of nice curves is given by starting the above process, for instance, with Lissajous figures. We could now go on to the analogous situation for surfaces. Here, for an ellipsoid, we construct a picture with the two surfaces formed by going the amount of the principal radii of the curvature in the direction of the normal to the surface [1157]. In[167]:=

With[{(* ellipsoid half axes *)a = 1, b = 3/4, c = 5/4, (* avoid 0/0 in calculations *) ∂ = 10^-12}, Module[{ϕ, ϑ, x, y, z, e, f, g, l, m, n, λ, ν, µ, k, h, cross, normal1, normal, ellipsoid, makeAll}, (* parametrization of the ellipsoid *) {x, y, z} = {a Cos[ϕ] Sin[ϑ], b Sin[ϕ] Sin[ϑ], c Cos[ϑ]}; (* E, F, G from differential geometry of surfaces *)

1.6 Classical Analysis

153

{e, g} = (D[x, #]^2 + D[y, #]^2 + D[z, #]^2)& /@ {ϕ, ϑ}; f = D[x, ϕ] D[x, ϑ] + D[y, ϕ] D[y, ϑ] + D[z, ϕ] D[z, ϑ]; (* L, M, N from differential geometry of surfaces *) {l, n, m} = Det[{{D[x, ##], D[y, ##], D[z, ##]}, D[#, ϕ]& /@ {x, y, z}, D[#, ϑ]& /@ {x, y, z}}]& @@@ {{ϕ, ϕ}, {ϑ, ϑ}, {ϕ, ϑ}}; {λ, ν, µ} = {l, m, n}/Sqrt[e g - f^2]; (* Gaussian curvature and mean curvature *) k = (λ ν - µ^2)/(e g - f^2); h = (g λ - 2 f µ + e ν)/(2 (e g - f^2)); (* normal on the ellipsoid *) normal = #/Sqrt[#.#]&[Cross[D[{x, y, z}, ϕ], D[{x, y, z}, ϑ]]]; (* construct all pieces from the piece of one octant *) makeAll[polys_] := Function[v, Map[v #&, polys, {-2}]] /@ {{ 1, 1, 1}, { 1, 1, -1}, { 1, -1, 1}, {-1, 1, 1}, {-1, -1, 1}, { 1, -1, -1}, {-1, 1, -1}, {-1, -1, -1}}; (* cut a hole in a polygon *) makeHole[Polygon[l_], factor_] := Module[{mp = Plus @@ l/Length[l], L, nOld, nNew}, L = (mp + factor(# - mp))& /@ l; {nOld, nNew} = Partition[Append[#, First[#]]&[#], 2, 1]& /@ {l, L}; {MapThread[Polygon[Join[#1, Reverse[#2]]]&, {nOld, nNew}]}]; (* a sketch of the ellipsoid *) ellipsoid = {Thickness[0.002], (ParametricPlot3D[Evaluate[{x, y, z}], {ϕ, 0, 2Pi}, {ϑ, 0, Pi}, DisplayFunction -> Identity][[1]]) //. Polygon[l_] :> Line[l]}; (* surfaces of the centers of the principal curvatures *) Show[GraphicsArray[ Graphics3D[{ellipsoid, {EdgeForm[], makeAll @ ParametricPlot3D[#, {ϕ, ∂, Pi/2 - ∂}, {ϑ, ∂, Pi/2 - ∂}, DisplayFunction -> Identity][[1]]}}, Boxed -> False, PlotRange -> All]& /@ (({x, y, z} + normal 1/(h + # Sqrt[h^2 - k]))& /@ {+1, -1}) ] /. p_Polygon :> makeHole[p, 0.7], GraphicsSpacing -> 0]]]

We give one more example illustrating the usefulness of symbolic differentiation.

Mathematical Remark: Phase Integral Approximation Here, we are dealing with a method for the approximate solution of the ordinary differential equation (and associated eigenvalue problem):

Symbolic Computations

154

y££ HzL + R2 HzL yHzL = 0,

R2 HzL p 1.

If we assume yHzL has the form z

yHzL = qHzL-1ê2 expJi ‡ qHz£ L dz£ N, qHzL = QHzL gHzL where QHzL is “ arbitrary”, we get the following differential equation for g: d 2 gHxL-1ê2 1 + ¶HQHxLL - gHxL2 + gHxL1ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅ = 0 dx2 z

x = xHzL = ‡ QHz£ L dz£ , d 2 QHzL-1ê2 HRHzL - Q2 HzLL ÅÅÅÅÅÅÅÅÅÅ . ¶HQL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅ + QHzL3ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 Q HzL dz2 Introducing the parameter l (which we will later set to 1) in the differential equation, d 2 gHxL-1ê2 ÅÅÅÅÅÅÅÅÅ = 0 1 + l2 ¶HQHxLL - gHxL2 + l2 gHxL1ê2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ d x2 ¶ and expanding gHzL in an infinite series in l (with Y2 n+1 = 0) gHzL = ⁄n=0 Y2 n HzL l2 n , we are led to the following £ ££ recurrence formula for Y as a function of ¶HxL and its derivatives ¶ HxL, ¶ HxL, … :

Y0 Y2 n

=1 1 = ÅÅÅÅÅ 2

1 ‚ Y2 a Y2 b - ÅÅÅÅÅÅ 2 a+b=n

0§a,b,§n-1

1 + ÅÅÅÅÅÅ 2

‚

Y2 a Y2 b Y2 g Y2 d

a+b+g+d=n 0§a,b,g,d§n-1

3 1 ‚ :¶ Y2 a Y2 b + ÅÅÅÅÅÅ Y2£ a Y2£ b - ÅÅÅÅÅÅ HY2 a Y2££b + Y2££a Y2 b L> 4 4 a+b=n-1

0§a,b§n-1

where Ya£ = Ya£ HxL = dY a HxL ê dx and x = xHzL. Y2 HxL, Y4 HxL, and Y6 HxL were found earlier by painful hand computations, but from Y8 HzL on, computer algebra becomes necessary [296]. For more details on such asymptotic expansions, see [671], [487], [672], [428], [1006], [1623], [142], [1306], [1883], [1104], and [674]. For the corresponding supersymmetric problem, see [16], [673], [58], and [544]. 1

Here, we want to find the first few nonvanishing Y2 a . We now give an unrefined implementation of the above recurrence. (It is unrefined considering the restriction a + b + g + d = n, and the fourfold sum can be replaced by a threefold sum.) In[168]:=

Υ[_Integer?OddQ] = 0; Υ[0] = 1; Υ[zn_Integer?EvenQ] := Υ[zn] = Module[{n = zn/2}, (* the If is the obvious implementation,

1.6 Classical Analysis

155

but it requires summing over all variables *) 1/2 Sum[If[a + b == n, 1, 0] Υ[2 a] Υ[2 b], {a, 0, n - 1}, {b, 0, n - 1}] 1/2 Sum[If[a + b + c + d == n, 1, 0] Υ[2 a] Υ[2 b] Υ[2 c] Υ[2 d], {a, 0, n - 1}, {b, 0, n - 1}, {c, 0, n - 1}, {d, 0, n - 1}] + 1/2 Sum[If[a + b == n - 1, 1, 0] * (∂[ξ] Υ[2 a] Υ[2 b] + 3/4 D[Υ[2 a], ξ] D[Υ[2 b], ξ] 1/4 (Υ[2 a] D[Υ[2 b], {ξ, 2}] + D[Υ[2 a], {ξ, 2}] Υ[2 b])), {a, 0, n - 1}, {b, 0, n - 1}] // (* keep the results as short as possible *) Expand // Factor]

We now look at the first few Yi HzL. In[171]:= Out[171]= In[172]:= Out[172]= In[173]:= Out[173]= In[174]:= Out[174]=

Υ[2] ∂@ξD 2 Υ[4] 1 H−∂@ξD2 − ∂ @ξDL 8 Υ[6] 1 H2 ∂@ξD3 + 5 ∂ @ξD2 + 6 ∂@ξD ∂ @ξD + ∂H4L @ξDL 32 Υ[8] 1 H−5 ∂@ξD4 − 50 ∂@ξD ∂ @ξD2 − 30 ∂@ξD2 ∂ @ξD − 128 19 ∂ @ξD2 − 28 ∂ @ξD ∂H3L @ξD − 10 ∂@ξD ∂H4L @ξD − ∂H6L @ξDL

The Y[i] for higher orders can also be found in a few seconds. In[175]:= Out[175]=

Timing[Υ[16]] 90.86 Second, 1 I−429 ∂@ξD8 − 60060 ∂@ξD5 ∂ @ξD2 − 316030 ∂@ξD2 ∂ @ξD4 − 12012 ∂@ξD6 ∂ @ξD − 32768 758472 ∂@ξD3 ∂ @ξD2 ∂ @ξD − 496950 ∂ @ξD4 ∂ @ξD − 114114 ∂@ξD4 ∂ @ξD2 − 1794156 ∂@ξD ∂ @ξD2 ∂ @ξD2 − 360932 ∂@ξD2 ∂ @ξD3 − 174317 ∂ @ξD4 − 168168 ∂@ξD4 ∂ @ξD ∂H3L @ξD − 877760 ∂@ξD ∂ @ξD3 ∂H3L @ξD − 1591304 ∂@ξD2 ∂ @ξD ∂ @ξD ∂H3L @ξD − 1533408 ∂ @ξD ∂ @ξD2 ∂H3L @ξD − 118404 ∂@ξD3 ∂H3L @ξD − 562474 ∂ @ξD2 ∂H3L @ξD − 684372 ∂@ξD ∂ @ξD ∂H3L @ξD − 12012 ∂@ξD5 ∂H4L @ξD − 466180 ∂@ξD2 ∂ @ξD2 ∂H4L @ξD − 188760 ∂@ξD3 ∂ @ξD ∂H4L @ξD − 893724 ∂ @ξD2 ∂ @ξD ∂H4L @ξD − 543972 ∂@ξD ∂ @ξD2 ∂H4L @ξD − 2

2

2

800176 ∂@ξD ∂ @ξD ∂H3L @ξD ∂H4L @ξD − 206138 ∂H3L @ξD ∂H4L @ξD − 71786 ∂@ξD2 ∂H4L @ξD − 2

2

163722 ∂ @ξD ∂H4L @ξD − 92664 ∂@ξD3 ∂ @ξD ∂H5L @ξD − 144780 ∂ @ξD3 ∂H5L @ξD − 529776 ∂@ξD ∂ @ξD ∂ @ξD ∂H5L @ξD − 119548 ∂@ξD2 ∂H3L @ξD ∂H5L @ξD − 2

272108 ∂ @ξD ∂H3L @ξD ∂H5L @ξD − 159268 ∂ @ξD ∂H4L @ξD ∂H5L @ξD − 23998 ∂@ξD ∂H5L @ξD − 6006 ∂@ξD4 ∂H6L @ξD − 111020 ∂@ξD ∂ @ξD2 ∂H6L @ξD − 68068 ∂@ξD2 ∂ @ξD ∂H6L @ξD − 76986 ∂ @ξD2 ∂H6L @ξD − 113456 ∂ @ξD ∂H3L @ξD ∂H6L @ξD − 41132 ∂@ξD ∂H4L @ξD ∂H6L @ξD − 2

3431 ∂H6L @ξD − 25168 ∂@ξD2 ∂ @ξD ∂H7L @ξD − 56328 ∂ @ξD ∂ @ξD ∂H7L @ξD − 25688 ∂@ξD ∂H3L @ξD ∂H7L @ξD − 6004 ∂H5L @ξD ∂H7L @ξD − 1716 ∂@ξD3 ∂H8L @ξD − 9210 ∂ @ξD2 ∂H8L @ξD − 11388 ∂@ξD ∂ @ξD ∂H8L @ξD − 4002 ∂H4L @ξD ∂H8L @ξD − 3380 ∂@ξD ∂ @ξD ∂H9L @ξD − 2000 ∂H3L @ξD ∂H9L @ξD − 286 ∂@ξD2 ∂H10L @ξD − 726 ∂ @ξD ∂H10L @ξD − 180 ∂ @ξD ∂H11L @ξD − 26 ∂@ξD ∂H12L @ξD − ∂H14L @ξDM= 2

Symbolic Computations

156

1.6.2 Integration Symbolic integration of functions is one of the most important capabilities of Mathematica. In contrast to many other operations (which can also be carried out by hand by the user, albeit more slowly and probably with more errors), and in addition to standard methods such as integration by parts, substitution, etc., Mathematica makes essential use of special algorithms for the determination of indefinite and definite integrals (see [244], the references cited in the appendix, Chapter 21 of [1917], and the very readable introductions of [1330], [990], [643], and [1205]). Mathematica can find a great many integrals, including many not listed in tables. This holds primarily for integrands that are not special functions; but even for special functions, Mathematica is often able to find a closed-form answer. Nevertheless, once in a while, the user will have to refer to a book such as [1443] for complicated integrals. For most integrals, Mathematica works with algorithms rather than looking in tables. For indefinite integrals, these algorithms are based on the celebrated work by Risch and extensions by Trager and Bronstein [244]. Definite integrals are computed by using contour integration, by the Marichev–Adamchik reduction to Meijer-G functions [1560], [1561], [284], [1650], and [574], or by integration via differentiation (Cauchy contour integral formula). We have already introduced the Integrate command for symbolic integration. In view of its extraordinary importance, we repeat it here. Integrate[integrand, var] finds (if possible) the indefinite integral Ÿ

var

integrand dvar.

Integrate[integrand, {var, lowerLimit, upperLimit}] upperLimit

finds (if possible) the definite integral ŸlowerLimit integrand dvar.

Let us start with a remark similar to the one made in section dealing with the solution of equations using Solve. All variables in integrals are assumed to take generic complex values. So the result of the simple x integral Ÿ xn d x will be 1 ê Hn + 1L xn+1 . In[1]:= Out[1]=

Integrate[x^n, x] x1+n 1+n

This is the correct answer for all complex x and for nearly all complex n (the exception, which is of Lebesgue measure zero with respect to dx, being n = -1). This assumption about all unspecified variables being generic can cause indeterminate expressions when substituting numerical values into the result of an integration containing parameters. The integrand can either be given explicitly or unspecified. Here is such an example of the latter case. In[2]:= Out[2]= In[3]:= Out[3]= In[4]:= Out[4]= In[5]:=

Integrate[f'[x], x] f@xD Integrate[f'[x] f''[x], x] 1 f @xD2 2 Integrate[f'[x] g[x] + f[x] g'[x], x] f@xD g@xD Integrate[Sin[f[x]] f'[x], x]

1.6 Classical Analysis Out[5]=

157

−Cos@f@xDD

Here is a slightly more complicated integral, the Bohlin constant of motion for a damped harmonic oscillator [720]. In[6]:= Out[6]=

Exp[Integrate[(λ1 - λ2) x'[t](x''[t] + (λ1 + λ2) x'[t] + λ1 λ2 x[t])/ ((x'[t] + λ1 x[t])(x'[t] + λ2 x[t])), t]] // Together Hλ1 x@tD + x @tDLλ1 Hλ2 x@tD + x @tDL−λ2

The following product cannot be symbolically integrated without the result containing unevaluated integrals (which would cause recursion). In[7]:= Out[7]=

Integrate[f'[x] g[x], x] ‡ g@xD f @xD ' x

Be aware that in distinction to NIntegrate, the function Integrate has no HoldAll attribute. This means that the scoping behavior in nested integrals is different. Whereas NIntegrate can treat its body as a black box that delivers values at given points (when the corresponding system option is set to avoid the evaluation of the body), the algorithms used in Integrate require unavoidably the evaluation of the integrand. When Integrate carries out an indefinite integral, it does not return any explicit constants of integration. (Implicitly the result given amounts to selecting a concrete constant of integration.) So mathematically identical integrands can result in different indefinite integrals. The following polynomial Hx + 1L4 + 1 shows such a situation. In[8]:=

Out[9]=

(* integrals of original and expanded integrand and difference *) {Integrate[#, x], Integrate[Expand[#], x], Expand[Integrate[#, x] - Integrate[Expand[#], x]]}&[(x + 1)^4 + 1] 1 x5 1 9x + H1 + xL5 , 2 x + 2 x2 + 2 x3 + x4 + , = 5 5 5

Mathematica’s ability to integrate implicitly defined functions can be seen nicely in the following example. Suppose 1 0 HxL = ÅÅÅÅÅÅ 2

x

£ £ j HxL = ‡ H£££ j-1 HxL + 4 HxL j-1 HxL + 2 HxL j-1 HxLL dx, j = 1, 2, ….

These equations are of great practical importance for the construction of the Korteweg–de Vries equation hierarchy. Because the mathematical description of Lax pairs is slightly more complicated, we do not go into details here; see, however, [37], [719], [1204], [1442], [628], [1575], [1576], [325], [1782], [255], [1788], [717]. Note that HxL is not explicitly defined. We now implement the above definition of the j HxL. In[10]:=

[0] = 1/2; [j_] := [j] = Integrate[D[[j - 1], {x, 3}] + 4 [x] D[[j - 1], x] + 2 '[x] [j - 1], x] // Together // Numerator

We look at the first few j HxL; they are “completely” integrated. In[12]:= Out[12]=

{[1], [2], [3]} 8@xD, 3 @xD2 + @xD, 10 @xD3 + 5 @xD2 + 10 @xD @xD + H4L @xD

, Derivative[i_][][x] -> Derivative[i][]}

In[14]:=

Table[KdVShortForm[k], {k, 3}]

Out[14]=

8t , t 6 + H3L , t 30 2 + 20 + 10 H3L + H5L

−1 && Re@oD > −1 && Re@pD > 0, 1+n+p+o p p GammaA p E

xn H1 − xp Lo , 8x, 0, 1 −1 && Re@oD > −1 && Re@pD > 0LDE In[50]:= Out[50]= In[51]:= Out[51]= In[52]:= Out[52]=

Integrate[(Exp[-x] - Exp[-z x])/x, {x, 0, Infinity}] −x − −x z IfARe@zD > 0, Log@zD, IntegrateA , 8x, 0, ∞ 1/2] Exp[-x] Sin[x], {x, 0, Infinity}] 3 πê4 è!!!! 2 H1 + π L

Here are two integrals of rational functions. In[63]:= Out[63]=

Integrate[1/(x^4 + 3 x^2 + 1)^8, {x, 0, Infinity}] 21377637 π è!!!! 160000000 5

1.6 Classical Analysis In[64]:=

163

largeResult = Integrate[1/(x^6 + 3 x^2 + 1)^2, {x, -Infinity, Infinity}]; Short[largeResult, 12]

Out[66]//Short=

J17 Log@Root@1 + 3 #1 + #13 &, 3DD 5ê2 "############################################################### + 53 + −Root@1 + 3 #1 + #13 &, 1D Root@1 + 3 #1 + #13 &, 2D 2

2 π I17 − 6 Root@1 + 3 #1 + #13 &, 1D + 4 Root@1 + 3 #1 + #13 &, 1D M JRoot@1 + 3 #1 + #13 &, 2D

5ê2

Root@1 + 3 #1 + #13 &, 3D

"############################################################ Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D + "############################################################

5ê2

+ "######################################################################################################################## Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D + 5ê2

HRoot@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3DL

NN í

Root@1 + 3 #1 + #13 &, 1D Root@1 + 3 #1 + #13 &, 2D Root@1 + 3 #1 + #13 &, 3D N J900 "####################################################################################################################################################################################

The results returned by Integrate are typically not simplified. (It is always easily possible to apply a simplifying function to the result, but it would be impossible for a user to disable any built-in simplification if it would happen automatically.) Applying RootReduce to the last expression gives a much shorter answer. In[67]:= Out[67]=

In[68]:= Out[68]=

Collect[RootReduce[largeResult], _Log, RootReduce] π Root@−11449 + 17890956 #12 − 7103376000 #14 + 59049000000 #16 &, 2D + Log@Root@1 + 3 #12 + #16 &, 1DD Root@11449 + 17890956 #12 + 7103376000 #14 + 59049000000 #16 &, 1D + Log@Root@1 + 3 #12 + #16 &, 2DD Root@11449 + 17890956 #12 + 7103376000 #14 + 59049000000 #16 &, 2D + Log@Root@1 + 3 #1 + #13 &, 2DD Root@11449 + 71563824 #12 + 113654016000 #14 + 3779136000000 #16 &, 5D + Log@Root@1 + 3 #1 + #13 &, 3DD Root@11449 + 71563824 #12 + 113654016000 #14 + 3779136000000 #16 &, 6D {LeafCount[%], LeafCount[largeResult]} 8191, 2958

Re[n] > 0]

Symbolic Computations

164 3

Out[73]=

3n

1

3n

1 3n nπ 1 3n 2− 2 − 2 H1 + H−1Ln L Gamma@ + D I2 2 + 2 − 2 r Sin@ DM + D H1 + H−1Ln L Gamma@ 2 2 2 2 , 2 = 9 è!!!! è!!!! π 2 π

Because of symmetry (and visible through the factors H1 + H-1Ln L), the odd moments vanish for and and the even moments agree. In[74]:= Out[74]=

Simplify[moment[n] == moment[n, r], Element[n/2, Integers]] True

As some of the above examples show, sometimes Mathematica will produce If statements as results, where the first argument represents a set of conditions on parameters appearing in the integral such that the second argument of If is the integrated form. This form of the result allows giving sufficient conditions for the convergence of the integral depending on parameters appearing in the integrand (and potentially in the integration limits). The last argument contains the unevaluated form of the integral (which is possible because If has the HoldRest attribute) with the negated conditions. Here is an example. In[75]:= Out[75]=

Integrate[Sin[a x] Cos[b x]/x, {x, 0, Infinity}] 1 IfAa − b ∈ Reals && a + b ∈ Reals, π HSign@a − bD + Sign@a + bDL, 4 Cos@b xD Sin@a xD IntegrateA , 8x, 0, ∞ (5 + 3 Sqrt[3])^(1/4)))&[ Simplify[Integrate[1/Sqrt[y] '[y], y]]]] 128 è!!!! 7ê8 è!!!! 9 H5 + 3 3 L H41 + 21 3 L, 469.197= 161

(If we did not know the polynomial , we could calculate it the following way.) In[87]:=

Out[87]=

GroebnerBasis[Numerator[Together[ {(5 - (27 (1 - I Sqrt[3]))/(2^(2/3) ) ((1 + I Sqrt[3]) )/ (2 2^(1/3))) - ^4, (277 + τ + ) - ^3, -2003 + 554 τ + τ^2 - ^2}]], {τ, }, {, }] 83 − 6 4 − 15 8 + 12 − τ

False]; (* 3D plot made piecewise *) Show[Table[Plot3D[Re[[x + I y]], {x, k Pi + ∂, (k + 1) Pi - ∂}, {y, -4, 4}, PlotPoints -> {8, 60}], {k, 0, 3}], AxesLabel -> {"x", "y", None}]}]]]

0.5 4

0 2

-0.5

0 y

0 -2

5 x

10

-4

By adding the piecewise constant function, we can make the antiderivative a continuous function. In[145]:=

In[146]:=

[x_] := Which[x < 1 x == 1 x < 3 x == 3 x < 5

Pi Pi Pi Pi Pi

// // // // //

N, N, N, N, N,

[x], Pi/2/Sqrt[6], [x] + Pi/Sqrt[6], 3Pi/2/Sqrt[6], [x] + 2 Pi/Sqrt[6]]

Plot[[x], {x, 0, 4Pi}] 2.5 2 1.5 1 0.5

2

4

6

8

10

12

Now, the integral is given as the difference of the function value of the upper limit and the lower limit. In[147]:= Out[147]=

[4Pi] - [0] 2 $%%%%%%% π 3

(For more details concerning such pitfalls, see [925], [923], and [926].) Even for an everywhere smooth function, the indefinite integral returned by Mathematica might be discontinuous. The following plots show the real and imaginary parts of He x - 1L ê x and Ÿ Hex - 1L ê x dx along the real axis. The imaginary part (blue curves) of the indefinite integral is discontinuous at x = 0. (This integrand has the special property to give an integral that has a single line where its value is different from its left-side and right-side limits.) In[148]:=

Module[{f, F, x}, f[x_] = (Exp[x] - 1)/x; Show[GraphicsArray[

F[x_] = Integrate[f[x], x];

Symbolic Computations

176 (* show function and indefinite integral along real axis *) Plot[{Re[#1[x]], Im[#1[x]]}, {x, -1, 1}, PlotLabel -> #2, PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]}, DisplayFunction -> Identity, Frame -> True, PlotRange -> {{-1, 1}, {-3.5, 3.5}}]& @@@ {{f, "f[x]"}, {F, "Ÿf[x] d x"}}]]] Ÿ f@xD d x

f@xD 3

3

2

2

1

1

0

0

-1

-1

-2

-2

-3

-3 -0.75 -0.5 -0.25

0

0.25

0.5

0.75

-0.75 -0.5 -0.25

1

0

0.25

0.5

0.75

1

But the indefinite integral was nevertheless correct. In[149]:= Out[149]=

D[Integrate[(Exp[x] - 1)/x, x], x] - (Exp[x] - 1)/x // Simplify 0

As an application of Mathematica integration capabilities, let us briefly discuss a class of parametrically describable minimal surfaces.

Mathematical Remark: Minimal Surfaces Minimal surfaces are surfaces z = f Hx, yL that satisfy the differential equation H1 + f y2 L fxx + 2 fx f y fxy + H1 + fx2 L f yy = 0 or for surfaces given in parametric form 8xHu, vL, yHu, vL, zHu, vL pictVar]

Here are two examples: the Enneper surface with f HxL = 1 and gHxL = x and a Henneberg surface with f HxL = -i ê 2 H1 - x-4 L and gHxL = x. In[151]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, opts = Sequence[Boxed -> False, Axes -> False, PlotRange -> All]}, {(* Enneper surface *) ParametricPlot3D[Evaluate[ WeierstrassMinimalSurface[1, ξ, ξ, r Exp[I ϕ]]], {r, 0, 3}, {ϕ, 0, 2Pi}, PlotPoints -> {116, 80}, Evaluate[opts]], (* Henneberg surface *) ParametricPlot3D[Evaluate[ WeierstrassMinimalSurface[-I/2 (1 - ξ^-4), ξ, ξ, r Exp[I ϕ]]], {r, 0.72, 1}, {ϕ, 0, 2Pi}, PlotPoints -> {16, 40}, Evaluate[opts]]}]]]

Symbolic Computations

178

Here is a spiraling minimal surface related to the behavior of a soap film near a boundary wire [236]. In[152]:=

Block[{γ = 0.02 Pi, wms}, wms = WeierstrassMinimalSurface[ I Exp[-w + I Pi w/(2 Cot[γ/2])], Exp[w], w, r Exp[I ϕ]]; ParametricPlot3D[ Evaluate[Append[wms, SurfaceColor[Hue[ϕ/(2 Pi)]]]], {r, 0, 6}, {ϕ, 0, 2Pi}, PlotPoints -> {40, 160}, Boxed -> False, Axes -> False, PlotRange -> All, BoxRatios -> {1, 1, 2}]]

We could plot many other such (generally unnamed) surfaces, for example, f HxL = x1ê4 + x1ê3 and gHxL = x. In[153]:= Out[153]=

wms = WeierstrassMinimalSurface[ξ^(1/4) + ξ^(1/3), ξ, ξ, r Exp[I ϕ]] 1 5ê4 9− 260 ReAH ϕ rL I− 208 + 80 2 ϕ r2 − 195 H ϕ rL1ê12 + 78 H ϕ rL25ê12 ME, 1 5ê4 − ImAH ϕ rL I208 + 80 2 ϕ r2 + 195 H ϕ rL1ê12 + 78 H ϕ rL25ê12 ME, 260 4 9ê4 3 2 ReA H ϕ rL + H ϕ rL7ê3 E= 9 7

An initial attempt to plot this function does not produce a satisfactory result. (We use lines rather than polygons in the following graphic because the polygons touch each other often, and rendering the corresponding graphic takes a long time.) In[154]:=

Show[Graphics3D[ ParametricPlot3D[Evaluate[%], {r, 0.7, 0.75}, {ϕ, 0.001, 12Pi - 0.001}, PlotPoints -> {2, 600}, PlotRange -> All, Axes -> False, DisplayFunction -> Identity][[1]] //. Polygon[l_] :> Line[Append[l, First[l]]]]]

1.6 Classical Analysis

179

To get the “correct” function values for multivalued functions, we have to modify the results of the indefinite integration; in this case, we take the appropriate nth root. (If we would calculate the integrals through numerically solving a differential equation, we do not encounter such branch cut problems.) For ease of understanding, we view only a small strip. In[155]:=

ParametricPlot3D[Evaluate[(* analytically continue and add color *) Append[wms /. {(r Exp[I ϕ])^n_ -> r^n Exp[I n ϕ]}, SurfaceColor[Hue[ϕ/(12 Pi)], Hue[ϕ/(12 Pi)], 2]]], {r, 0.7, 0.75}, {ϕ, 0.01, 12Pi - 0.01}, PlotPoints -> {2, 600}, PlotRange -> All, Axes -> False]

Many further examples of minimal surfaces exist and are easy to (re)produce in Mathematica. By changing the integrands in the three integrals in the Weierstrass representation from integrand to expHi JL integrand, we can (in dependence on J) look at how a minimal surface evolves to its adjoint surface. For additional examples of minimal surfaces, see [1586], [1164], [1344], [865], [481], [1228], [649], [1178], [1707], [863], [1848], [484], [864], [986], [642], [780], [874], [1698], [1423], [237], [1412], [1708], [985], [1513], [279], [1312], and [1709]. Remark: It is not necessary to use integrals when constructing minimal surfaces. If the above gHxL Ø x and f HxL Ø f £££ HxL, we can write In[156]:=

WeierstrassMinimalSurface[f'''[ξ], ξ, ξ, x] // TraditionalForm

Out[156]//TraditionalForm=

8ReH- f ££ HxL x2 + 2 f £ HxL x - 2 f HxL + f ££ HxLL, -2 ImH f HxLL + 2 ImHx f £ HxLL - ImH f ££ HxLL - ImHx2 f ££ HxLL, 2 ReHx f ££ HxL - f £ HxLL

([m, n] = R[m, n]) /; NonNegative[m] && NonNegative[n] && m [n, m] /; NonNegative[m] && NonNegative[n], HoldPattern[[m_, n_]] :> [-n, -m] /; Negative[m] && Negative[n], HoldPattern[[m_, n_]] :> [n, -m] /; Negative[m], HoldPattern[[m_, n_]] :> [-n, m] /; Negative[n]};

Here is the resistance in the neighborhood of the origin. In[161]:=

With[{n = 5}, ListPlot3D[Table[[i, j], {i, -n, n}, {j, -n, n}], MeshRange -> {{-n, n}, {-n, n}}, PlotRange -> All]];

1 0.75 0.5 0.25 0

4 2 -4

0 -2

0

-2 2

4

-4

The function R makes heavy use of definite integration. For larger values of n and m, it becomes somewhat slow. In[162]:= Out[162]=

{R[10, 10] // Timing, R[8, 12] // Timing} 486215980256 + 10640 π − 62075752 14549535 == 990.43 Second, =, 945.89 Second, 2π 14549535 π

Indefinite integration is often much faster than definite integration. As a result, it is sometimes advantageous to first calculate the indefinite integral and then substitute the integration limits. (Sometimes this might require the “manual” calculation of limits). For this procedure to be correct one must of course know that, inside the integration interval, the indefinite integral is a continuous function without any singularities. For the integrands under consideration this is actually the case, and we use the function Limit (to be discussed in the next subsection) to obtain the values at the integration end points. We also have to take care about contributions from branch cuts of the integral to make sure we use a continuous antiderivative.

1.6 Classical Analysis In[163]:=

RFast[m_, n_] := Module[{upperLimitContribution, lowerLimitContribution, branchCutCorrection}, (* the indefinite integral *) indefInt = Integrate[(1 - ((t - I)/(t + I))^(m + n)* ((t - 1)/(t + 1))^Abs[m - n])/t, t]; (* contributions from the integration limits *) upperLimitContribution = Limit[indefInt, t -> Infinity]; lowerLimitContribution = Limit[indefInt, t -> 0]; (* contribution from making a continuous antiderivative *) branchCutCorrection = If[MemberQ[int, ArcTan[(1 + t)/(-1 + t)], Infinity], 2Pi, 0]; (* simplify result *) Together @ ComplexExpand @ Re (upperLimitContribution + branchCutCorrection lowerLimitContribution)/(2Pi)]

In[164]:=

{RFast[10, 10] // Timing, RFast[8, 12] // Timing}

Out[164]=

181

486215980256 + 10640 πL Re H− 62075752 Re 14549535 == 990.1 Second, =, 94.93 Second, 2π 14549535 π

For the n-dimensional case of such resistor networks, see [419], [420], [1368], [930], [94]; for the continuous analog, see [1002], [1013]; for finite lattices, see [1857]. Mathematica can differentiate expressions arising from computations in which it is not able to explicitly integrate (meaning these expressions contain unevaluated integrals). In[165]:= Out[165]=

D[Integrate[f[x], y], y] f@xD

This also works for integrals in which the variable of differentiation enters in a complicated way in the limits of integration (differentiation of parametric integrals). In[166]:= Out[167]=

Clear[f, x, y]; D[Integrate[f[x], {x, 0, y}], y] f@yD

In[168]:=

D[Integrate[f[x], {x, -x, x}], x]

Out[168]= In[169]:= Out[169]= In[170]:= Out[170]=

f@−xD + f@xD Derivative[1, 0][Integrate[f[x], {x, #1, #2}]&][a, b] −f@aD Derivative[0, 1][Integrate[f[x], {x, #1, #2}]&][a, b] f@bD

We now look at a somewhat more complicated expression: the d’Alembert solution of the one-dimensional wave equation.

Mathematical Remark: d’Alembert Solution of the One-Dimensional Wave Equation Suppose we are given the following differential equation (wave equation) ∑2 uHx, tL ∑2 uHx, tL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅÅÅÅÅÅÅÅ - a2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅ = f Hx, tL ∑t ∑ x2 in 1 ä1+ . Here, uHx, tL is the amplitude of the wave as a function of position x and time t, and a is the inverse phase velocity. The d’Alembert solution for prescribed f Hx, tL is:

Symbolic Computations

182

t x+aHt-tL x+at 1 1 1 uHx, tL = ÅÅÅÅÅÅÅÅÅÅ ‡ ‡ f Hx, tL dx dt + ÅÅÅÅÅÅÅÅÅÅ ‡ u1 HxL dx + ÅÅÅÅÅ Hu0 Hx + atL + u0 Hx - atLL. 2 a 0 x-aHt-tL 2 a x-at 2

Here, u0 HxL is the initial position, and u1 HxL is the initial velocity function; that is, uHx, t = 0L = u0 HxL and ∑uHx, tL ê ∑t§t=0 = u1 HxL. For references, see any textbook on partial differential equations, for example, [1627] and [1047]. For some direct extensions, see [413] , [848], and [1853].

1

We now check this solution. The initial conditions are fulfilled. In[171]:=

u[x_, t_] = 1/(2 a) Integrate[Integrate[f[ξ, τ], {ξ, x - a (t - τ), x + a (t - τ)}], {τ, 0, t}] + 1/(2 a) Integrate[u1[ξ], {ξ, x - a t, x + a t}] + 1/2 (u0[x + a t] + u0[x - a t]);

In[172]:=

{u[x, 0], D[u[x, t], t] /. t -> 0}

Out[172]=

8u0@xD, u1@xD

{Automatic, (# //. HoldPattern[Integrate[c_?(FreeQ[#, τ]&) r_, i_]] :> c Integrate[r, i])&}]&) f@x, tD

Here is a solution for the Schrödinger equation for a particle of mass HtL in a time-dependent linear potential HtL [627]. In[175]:=

Out[175]=

ψ[{x_, t_}, {_, _, _}] = AiryAi[ (x + Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}], {τ, 0, t}] ^3/4 Integrate[1/[τ], {τ, 0, t}]^2)]* Exp[I (^3/2 Integrate[1/[τ], {τ, 0, t}]* (x + Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}], {τ, 0, t}] ^3/6 Integrate[1/[τ], {τ, 0, t}]^2) 1/2 Integrate[1/[τ] Integrate[[σ], {σ, 0, τ}]^2, {τ, 0, t}] x Integrate[[σ], {σ, 0, t}])] 2 τ t Ÿ τ @σD σ i1 3 t 1 y 2 i 1 3 t 1 y 1 t IŸ0 @σD σM j z 0 jx− z− j z j τM j τM +‡ τz τ−x Ÿ t @σD σz j z 2 IŸ0 @τD 6 IŸ0 @τD @τD 2 ‡ @τD 0 k { 0 0 { k

τ

2 t t i y Ÿ0 @σD σ 1 3i 1 y j z j AiryAiA j τz z + ‡ j‡ τz j z jx − zE 4 @τD @τD { k 0 0 k {

1.6 Classical Analysis

183

The solution contains again unevaluated integrals and the Airy function AiHzL. We can verify that it is indeed a solution for any HtL and HtL. In[176]:= Out[176]=

With[{ψ = ψ[{x, t}, {, , }]}, I D[ψ, t] == -1/(2 [t]) D[ψ, x, x] + [t] x ψ] True

// Simplify

As a related example, let us develop a series solution of the differential equation z£ HtL = f HzHtL, tL for small t. We t rewrite the differential equation as an integral equation zHtL = zH0L + Ÿ0 f HzHtL, tL dt and calculate the series expansion of the right-hand side. In[177]:=

Out[177]=

Together /@ Normal[Series[z[0] + Integrate[f[z[τ], τ], {τ, 0, t}], {t, 0, 4}, Analytic -> True]] //. (* replace derivatives of z using the differential equations *) {Derivative[n_][z][0] :> (D[f[z[t], t], {t, n - 1}] /. t -> 0)} 1 t f@z@0D, 0D + z@0D + t2 HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + 2 1 t3 HfH0,2L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + 6 2 f@z@0D, 0D fH1,1L @z@0D, 0D + f@z@0D, 0D2 fH2,0L @z@0D, 0DL + 1 4 t HfH0,3L @z@0D, 0D + 3 HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL fH1,1L @z@0D, 0D + 24 3 f@z@0D, 0D fH1,2L @z@0D, 0D + 3 f@z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL fH2,0L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,2L @z@0D, 0D + fH1,0L @z@0D, 0D HfH0,1L @z@0D, 0D + f@z@0D, 0D fH1,0L @z@0D, 0DL + f@z@0D, 0D fH1,1L @z@0D, 0D + f@z@0D, 0D HfH1,1L @z@0D, 0D + f@z@0D, 0D fH2,0L @z@0D, 0DLL + 3 f@z@0D, 0D2 fH2,1L @z@0D, 0D + f@z@0D, 0D3 fH3,0L @z@0D, 0DL

Using f Hz, tL = z, we get the series expansion of expHtL. In[178]:= Out[178]=

% /. f -> (#1&) 1 1 1 z@0D + t z@0D + t2 z@0D + t3 z@0D + t4 z@0D 2 6 24

And using f Hz, tL = 2 t z, we get the series expansion of expHt2 L. In[179]:= Out[179]=

%% /. f -> (2 #1 #2&) 1 z@0D + t2 z@0D + t4 z@0D 2

At this point, we mention that Mathematica can integrate a large class of functions whose antiderivatives can be expressed as elliptic integrals. Typically, such integrands contain roots of polynomials of third or fourth degree. Here are three examples. In[180]:=

Out[180]=

In[181]:=

Integrate[Sqrt[(b^2 - x^2)/(x^2 + a^2)], x] "############## b2 −x2 "################ x2 1 a2 1 + EllipticEAArcSinA"########### − xE, − E a2 +x2 a2 a2 b2 1 "################ x2 "########### − 1 − a2 b2 Integrate[Sqrt[(b^2 - x^2)/(x^2 + a^2)^3], x]

Symbolic Computations

184 b2 − x2 Ha2 + x2 L $%%%%%%%%%%%%%%%%%%%%%%%%%%% Ha2 + x2 L3

Out[181]=

i i i j j x2 j 1 b2 1 x x2 $%%%%%%%%%%%%%%%%% j j j $%%%%%%%%%%%%%%%%% j 1 − 2 j j − 2 xE, − 2 E − 2 − j j jEllipticEA ArcSinhA$%%%%%%%%%%%% 2 j j 1 + j a b a b ja 1 "########### 2 2 − Hb − x L k k b2 k z zy zy 1 b2 y z zz zz EllipticFA ArcSinhA$%%%%%%%%%%%% − 2 xE, − 2 Ez z zz zz b a z z {{{ In[182]:= Out[182]=

Integrate[1/Sqrt[1 - x^3], x] è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!! −H−1L5ê6 − x 2 H−1L5ê6 H−1 + xL 1 + x + x2 EllipticFAArcSinA E, H−1L1ê3 E 31ê4 è!!!!!!!!!!!!!! 1ê4 3 1−x 3

Note that sometimes Mathematica produces an incorrect result for a definite integral. Such cases usually involve integrands with symbolic parameters and branches. One possibility for checking the correctness of integrals is to 1ê10+i compare the result of Integrate with that of NIntegrate. Here is an example: Ÿ1ê10-i lnHz2 - 1L dz. The integrand has a branch cut between -1 and 1. Here, the results of Integrate and NIntegrate do agree. In[183]:= Out[183]=

Integrate[Log[z^2 - 1], {z, 1/10 - I, 1/10 + I}] 1 20 400 40001 − JArcTanA E − 5 J−4 + π + ArcTanA E + LogA ENN 5 199 39999 10000

Here is an example where the two results do not agree. For generic endpoints of a definite integral, Mathematica must carry out the definite integral by first calculating the indefinite integral. Then it must find out if the straight line connecting the integration end points crosses any branch cuts of the antiderivative. In general, this means solving a transcendental equation and finding all relevant solutions. This is a very complicated step, and missing a crossed branch cut causes a different result from the one returned by NIntegrate. In[184]:=

Out[184]=

{N[Integrate[#, {z, -1 - I, -1 + I}]], NIntegrate[#, {z, -1 - I, -1 + I}]}&[ (1 + z^z (1 + Log[z]))/(z + z^z)] // Chop 84.80293 , −1.48025

specificValue, options] finds the limit of function if var Ø specificValue taking into account the option settings options.

Here are four simple examples to start. In[1]:= Out[1]= In[2]:= Out[2]= In[3]:= Out[3]=

Limit[Sin[x]/x, x -> 0] 1 Limit[Exp[-x] x^2, x -> Infinity] 0 Limit[((x + h)^(1/3) - x^(1/3))/h, h -> 0] 1 2ê3 3x

1.6 Classical Analysis In[4]:=

185

Limit[(Tan[x]/x)^(1/x^2), x -> 0] 1ê3

Out[4]=

Here are three slightly more complicated limits, two of the form ¶0 [921] and one of the form 1¶ . In[5]:= Out[5]=

Limit[(1/x)^Tan[x], x -> 0] 1

In[6]:=

Limit[(2 - 2 x)^Tan[Pi x], x -> 1/2] 2êπ

Out[6]= In[7]:= Out[7]=

Limit[(((n - 1)^2 n^n)/(n^n - n))^((n - n^(2 - n))/(n - 1)^2), n -> Infinity] 1

A more complicated limit than contains a binomial coefficient. In[8]:= Out[8]=

Limit[Binomial[n, k] (a/n)^k (1 - a/n)^(n - k), n -> Infinity] ak −a Gamma@1 + kD

The next limit is ¶. In[9]:=

Limit[x^x - x^(Log[x]), x -> Infinity] ∞

Out[9]=

The next limit shows how the logarithm lnHxL arises as the limit of a power function xa . (For continuity, it follows from this that xa and lnHxL should have the same branch cut structure.) In[10]:= Out[10]=

Limit[Integrate[ξ^a, {ξ, 0, x}, Assumptions -> x > 0 && Re[a] > -1] 1/(1 + a), a -> -1] Log@xD

For functions whose limit values depend on the direction from which we approach specificValue, we can use the option Direction. Direction is an option for Limit, and it determines a direction for computing the limit. Default: 1 (from the left) Admissible: -1 (from the right) or complexNumber (in direction complexNumber)

Here is an example of finding the limit of expH1 ê xL as x Ø 0. Using the Direction option in Limit, we can determine both limits. In[11]:= Out[11]=

Limit[Exp[1/x], x -> 0, Direction -> #]& /@ {1, -1} 80, ∞

0, Direction -> #]& /@ {-I, I}

Symbolic Computations

186 80, ∞

0, Direction -> #]& /@ {+1, -1} // ExpToTrig 8Cos@π λD − Sin@π λD, Cos@π λD + Sin@π λD

#]& /@ {1, -1} α

α

9LimitA x , x → 0, Direction → 1E, LimitA x , x → 0, Direction → −1E=

Out[15]=

Under the assumption that the real part of a is positive, the last limit can be found by Limit. In[16]:= Out[16]=

Assuming[Re[α] > 0, Limit[Exp[α/x], x -> 0, Direction -> #]& /@ {1, -1}] 80, ∞

0] Interval@8−1, 1 1], Limit[f[x], x -> 1, Analytic -> True]} 8Limit@f@xD, x → 1D, f@1D

0] −f@zD + f@z + ∂D LimitA , ∂ → 0E ∂

Assuming that f HzL is an analytic function yields, as the result, the derivative f £ HzL.

1.6 Classical Analysis In[20]:= Out[20]=

187

Limit[(f[z + ∂] - f[z])/∂, ∂ -> 0, Analytic -> True] f @zD

Here is a slightly more complicated limit. In[21]:=

Out[21]=

Limit[(f[z + ∂ + ∂^2] + f[z - ∂ - ∂^3/4] - 2 f[z + ∂^2/3])/∂^2, ∂ -> 0, Analytic -> True] f @zD + f @zD 3

Also in the following limit (that gives the Schwarz derivative wH3L ê w£ - 3 ê 2 w££2 ê w£2 ) the option setting Analytic -> True is needed. In[22]:=

Out[22]=

Limit[6 D[Log[(w[z] - w[ζ])/(z - ζ)], z, ζ], z -> ζ, Analytic -> True] // Expand 3 w @ζD2 wH3L @ζD − + w @ζD 2 w @ζD2

The following input reduces also to the Schwarzian derivative [1370], [1371], [1342]. Because w[ξ] appears multiplicative in this expression, this time the option setting Analytic -> True is not needed. In[23]:= Out[23]=

Limit[Derivative[3][Function[z, (z - w[z]/w'[z])/2]][ζ], w[ζ] -> 0] // Expand 3 w @ζD2 wH3L @ζD − + w @ζD 2 w @ζD2

The next input represents a discrete approximation to the nth derivative (n a nonnegative integer) of a function f at x [1842]. In[24]:=

derivativeApproximation[f_, n_, ξ_, ∂_] := Sum[(-1)^k Binomial[n, k] f[ξ + (n - 2k)/2 ∂], {k, 0, n}]/∂^n

In the limit ¶ Ø ¶, we get the explicit derivative for explicit nonnegative integer n. In[25]:= Out[25]=

Table[Limit[derivativeApproximation[f, n, x0, ∂], ∂ -> 0, Analytic -> True], {n, 0, 6}] 8f@x0D, f @x0D, f @x0D, fH3L @x0D, fH4L @x0D, fH5L @x0D, fH6L @x0D

Infinity]

Subtracting the value of the limit allows finding the next terms as a correction term for large, but finite n. In[29]:= Out[30]= In[31]:=

Out[32]= In[33]:=

Out[34]=

(* coefficient of 1/n term vanishes *) Limit[(expr - E) n, n -> Infinity] 0 (* coefficient of 1/n^2 term is finite *) Limit[(expr - E) n^2, n -> Infinity] −1 + 24 (* coefficient of 1/n^3 term is finite *) Limit[(expr - E - (* last term *) (E/24 - 1)/n^2) n^3, n -> Infinity] 2 2 − − 6

Symbolic Computations

188

Limit assumes that its variable approaches the limit point in a continuous manner. This means limits such as the following will stay unevaluated. In[35]:=

Limit[Nest[Sqrt[5 + #]&, 5, n], n -> Infinity] Nest::intnm : Non−negative machine−size è!!!!!!!!!!!!!!!!! integer expected at position 3 in NestA 5 + #1 &, 5, nE. More…

Out[35]= In[36]:=

è!!!!!!!!!!!!!!! LimitANestA 5 + #1 &, 5, nE, n → ∞E

Limit[Nest[1 + 1/#&, 1, n], n -> Infinity] Nest::intnm : Non−negative machine−size 1 integer expected at position 3 in NestA1 + &, 1, nE. More… #1

Out[36]= In[37]:= Out[37]=

1 LimitANestA1 + &, 1, nE, n → ∞E #1 Limit[Prime[n]/Exp[n], n -> Infinity] Limit@−n Prime@nD, n → ∞D

To compute limits when several variables are simultaneously tending toward given values, we have to apply Limit repeatedly. However, constructions of the form Limit[ f Ha, bL, a -> a0 , b -> b0 ] are not allowed. Here is a function, with two different limit values, that depends on the order in which Limit is applied. In[38]:=

Out[39]=

(* use different variable ordering *) {Limit[Limit[(x^2 - y^2)/(x^2 + y^2), x -> 0], y -> 0], Limit[Limit[(x^2 - y^2)/(x^2 + y^2), y -> 0], x -> 0]} 8−1, 1

0, y -> 0] x2 − y2 Limit::optx : Unknown option y in LimitA , x → 0, y → 0E. More… x2 + y2

Out[40]=

x2 − y2 LimitA , x → 0, y → 0E x2 + y2

To conclude this section, we now present a tiny application of Limit concerning the computation of a 2D rotation matrix from infinitesimals [922]: An infinitesimal rotation by an angle je around the z-axis can be described (which is easily seen from the geometry) by x£ = x + je y, y£ = -je x£ + y. Here, x and y are the coordinates of a point before the rotation, and x£ and y£ are the coordinates after the rotation. In matrix form, this is £ ij x yz ij 1 je yz ij x yz z j z. j £ z=j k y { k -je 1 { k y {

Here, je is the infinitesimal angle of rotation. A finite rotation by an angle j can be obtained by n-fold repetition of this small rotation, where n je = j. Here is the limit as n Ø ¶. In[41]:= Out[41]=

MatrixPower[{{1, ϕ/n}, {-ϕ/n, 1}}, n] 1 n−ϕ n 1 n+ϕ n 1 n−ϕ n 1 n+ϕ n 99 J N + J N , J N − J N =, 2 n 2 n 2 n 2 n 1 n−ϕ n 1 n+ϕ n 1 n−ϕ n 1 n+ϕ n 9− J N + J N , J N + J N == 2 n 2 n 2 n 2 n

This is what we get after some reorganization.

1.6 Classical Analysis In[42]:=

189

ComplexExpand[Map[Limit[#, n -> Infinity]&, %, {2}]] // Simplify 88Cos@ϕD, Sin@ϕD Identity] // Internal`DeactivateMessages)& @@@ (* power and difference of powers to Gaussian and decorated Gaussian *) {{Cos[x]^k, All}, {Cos[x]^k - Exp[-k/2 x^2], All}, { δcosExpK[k, x], {0, -50}}}]]]] 1

0

0

0.8

-0.05

-10

-0.1

0.6 0.4

-30

-0.2

0.2 0

-20

-0.15

-40

-0.25 -1.5

-1

-0.5

0

0.5

1

1.5

-1.5 -1 -0.5

0

0.5

1

1.5

-1.5 -1 -0.5

0

0.5

1

1.5

† Laurent Series Now, we have terms with negative powers of x. (Within Mathematica, it is a series with positive powers of 1 ê x.) In[21]:= Out[21]= In[22]:= Out[22]=

Series[1/(x^2 + a^2), {x, Infinity, 3}] 1 2 1 4 J N + OA E x x Series[Sin[x]^-1, {x, 0, 4}] 1 x 7 x3 + + + O@xD5 x 6 360

Note the O[x] terms in the following two examples. In[23]:= Out[23]= In[24]:= Out[24]=

Series[x^-6, {x, 0, 4}] 1 6 + O@xD5 x Series[(1/Sin[x])^4, {x, 0, 4}] 1 2 11 62 x2 41 x4 4 + 2 + + + + O@xD5 x 3x 45 945 2835

The next series has no nonvanishing terms up to order x4 . And the result returned by Series indicates that the first nonvanishing coefficient might appear earliest at order x10 . In[25]:= Out[25]=

Series[(x^2 + 3)/(x^12 - 17), {x, Infinity, 4}] 1 10 OA E x

To get a nontrivial term for the last series, we must calculate more terms. In[26]:=

Series[(x^2 + 3)/(x^12 - 17), {x, Infinity, 12}]

1.6 Classical Analysis

Out[26]=

193

1 10 1 12 1 13 J N + 3 J N + OA E x x x

In case we have a series with many negative power terms and are only interested in the leading terms, we can use a negative value for order. In[27]:= Out[27]=

Series[(1/Sin[x])^1000, {x, 0, -995}] 1 500 125050 1 + 998 + + x1000 3x 9 x996 O@xD994

The trigonometric functions cscHzL and cotHzL have Laurent expansions around z = 0. The next input shows that the function Series is effectively behaving like a listable function (because its second argument is a list, Series cannot carry the Listable attribute). In[28]:= Out[28]=

Series[{Csc[z], Cot[z]}, {z, 1 z 7 z3 1 z 9 + + + O@zD4 , − − z 6 360 z 3

0, 3}] z3 + O@zD4 = 45

Here is a series of a special function (to be discussed in Chapter 3). We use an approximate expansion point to force the numericalization of the resulting coefficients. In[29]:= Out[29]=

Series[Gamma[z], {z, 1/2., 8}] 1.77245 − 3.48023 Hz − 0.5L + 7.79009 Hz − 0.5L2 − 15.7948 Hz − 0.5L3 + 31.8788 Hz − 0.5L4 − 63.9127 Hz − 0.5L5 + 127.943 Hz − 0.5L6 − 255.961 Hz − 0.5L7 + 511.974 Hz − 0.5L8 + O@z − 0.5D9

Here are two series expansions for expressions that tend to e. In[30]:=

Out[31]=

(* expand one time at zero and one time at infinity *) {Series[(1 + 1/n)^n, {n, Infinity, 2}], Series[(1 + n)^(1/n), {n, 0, 2}]} 11 1 2 1 3 n 11 n2 9 − + J N + OA E , − + + O@nD3 = 2n 24 n n 2 24

† Puiseux Series è!!!! The expression z is an independent term in a Puiseux series. The O@xD13ê2 term arises from the order 6 of the series requested and the fact that the nonvanishing terms have fractional exponents with denominator 2. In[32]:= Out[32]=

Series[Sqrt[x], {x, 0, 6}] è!!!! x + O@xD13ê2

The next series can be expressed in powers of x1ê2 . The last argument of the SeriesData-object is 2, meaning that the increments in the powers of the expansion variable are 1 ê 2. In[33]:= Out[33]=

Series[1 x^(1/2) + 3 x^(3/2) + 5 x^(5/2), {x, 0, 6}] è!!!! x + 3 x3ê2 + 5 x5ê2 + O@xD13ê2

In[34]:=

InputForm[%]

Out[34]//InputForm=

SeriesData[x, 0, {1, 0, 3, 0, 5}, 1, 13, 2]

Similarly, the O-term in the following has the value 7 µ H1 ê 7L + 1 ê 7 = 50 ê 7. In[35]:= Out[35]=

Series[x^(1/7), {x, 0, 7}] x1ê7 + O@xD50ê7

For large denominators, the third argument of the underlying SeriesData-object can become a long list. In[36]:=

Series[x^(1/2000) + x^2, {x, 0, 2}][[3]] // Length

Symbolic Computations

194 Out[36]=

4000

The next two series expansions contain logarithms. In[37]:= Out[37]= In[38]:= Out[38]=

Series[x^x, {x, 0, 4}] 1 1 1 1 + Log@xD x + Log@xD2 x2 + Log@xD3 x3 + Log@xD4 x4 + O@xD5 2 6 24 Series[x^(x^2), {x, 0, 3}] 1 + Log@xD x2 + O@xD4

The last example contained a term of the form lnHxL x2 . Logarithmic factors appear in the third argument of the underlying SeriesData-object. In[39]:=

FullForm[%]

Out[39]//FullForm=

SeriesData@x, 0, List@1, 0, Log@xDD, 0, 4, 1D

The function arcsinHzL has three branch points: two square-root–like branch points at ≤1 and a logarithmic branch point at ¶. Looking at the series expansion of ArcSin, these two different types of branch points are clearly visible. In[40]:= Out[40]= In[41]:= Out[41]= In[42]:= Out[42]=

Series[ArcSin[z], {z, Infinity, 3}] π 1 1 1 1 2 1 4 J − Log@4D + LogA EN + J N + OA E 2 2 z 4 z z Series[ArcSin[z], {z, -1, 3}] π è!!!! è!!!!!!!!!!!! Hz + 1L3ê2 3 Hz + 1L5ê2 5 Hz + 1L7ê2 − + 2 z + 1 + + + + O@z + 1D4 è!!!! è!!!! è!!!! 2 6 2 80 2 448 2 Series[ArcSin[z], {z, +1, 3}] Arg@−1+zD π 2 π E + H−1LFloorA− 2 3 Hz − 1L5ê2 5 Hz − 1L7ê2 i è!!!! è!!!!!!!!!!!! Hz − 1L3ê2 4y j z + − + j− 2 z − 1 + è!!!! è!!!! è!!!! + O@z − 1D z 6 2 80 2 448 2 k {

The last expansion at the branch point z = 1 shows the slightly unusual prefactor H-1Ld-argHz-1LêH2 pLt . We will encounter such-type factors frequently when expanding analytic functions on branch points and branch cuts. Such factors ensure that the resulting series expansions are correct in any direction from the expansion point. The discontinuous function d-argHz - 1L ê H2 pLt reflects the fact that the original function arcsinHzL has a line of discontinuity (a branch cut) emerging from the point z = +1. The next input shows that in the last example, the factor is needed to get the sign of the imaginary part just above the branch cut corrected. In[43]:=

Out[44]=

(* function, naive series, and corrected series *) {ArcSin[z], Pi/2 - I Sqrt[2] Sqrt[z - 1], Pi/2 - (-1)^Floor[-(Arg[z - 1]/(2 Pi))] I Sqrt[2] Sqrt[z - 1]} /. z -> 1 + 10^-3 + (* above branch cut *) 10^-10 I // N 81.5708 + 0.0447176 , 1.5708 − 0.0447214 , 1.5708 + 0.0447214

z - eP[i - 1]}], {i, 1, Length[summedSeries]}]

We look at the resulting Riemann surface by showing the values of the various sqrt[i, z] inside their disks of convergence. In[59]:=

Do[points[i] = Table[{Re[#], Im[#], Im[N[sqrt[i, #]]]}&[ N[eP[i] + r Exp[I ϕ]]], {r, 0, 0.99, 0.99/10}, {ϕ, 0, N[2Pi], N[2Pi]/16}], {i, 0, 8}]

In[60]:=

Show[Graphics3D[{ {Thickness[0.002], Table[(* the disks *) {Hue[i/8 0.76], Line /@ points[i], Line /@ Transpose[points[i]]}, {i, 0, 8}]}, {Thickness[0.01], GrayLevel[0.3], Line[{{-1, 0, -5}, {-1, 0, 2}}]}, {Thickness[0.01], (* the continuation path *) Line[N[Append[#, First[#]]]& @ Table[{Re[eP[i]], Im[eP[i]], Im[N[sqrt[i, eP[i]]]]}, {i, 1, 8}]]}}], PlotRange -> All, BoxRatios -> {1, 1, 1.5}, ViewPoint -> {-2, -1, 1.1}, Axes -> True, AxesLabel -> (StyleForm[#, TraditionalForm]& /@ {"x", "y", "Sqrt[1 + x + I y]"})]

1.6 Classical Analysis

231

2 0 Sqrt@1 + x + I yD -2 2 -4 4 2

1

0 y -1 -2

1 0 -1x -2 -3 3

Because of the two-valuedness of H1 + zL1ê2 , the first function sqrt[0, z] (in red) and the last function sqrt[8, z] (in blue) do not coincide, and the branch cut of Sqrt[1 + z] along the negative real axis is—because of the analytic continuation—missing. As another application of Sum, let us look at the Hölder summation method [1744], [1054], [200], [656]. Given a divergent sum (divergent in the limit n Ø ¶) S0HnL = ⁄nj=1 a j one recursively forms the (partial) sums n HnL SkHnL = n-1 ‚ Sk-1 until SkHnL converges (if this happens). j=1

Let us take an example, the series of -Hx + 1L-2 for x = 1. The nth term of the series is given by a j = H-1L j j x j-1 . In[61]:= Out[61]=

Series[-1/(1 + x)^2, {x, 0, 8}] −1 + 2 x − 3 x2 + 4 x3 − 5 x4 + 6 x5 − 7 x6 + 8 x7 − 9 x8 + O@xD9

The first partial sums are formed. In[62]:= Out[62]=

sum1 = Sum[(-1)^j j x^(j - 1), {j, n}] −1 + H−xLn + n H−xLn + n H−xLn x H1 + xL2

The first partial sum does not converge for n Ø ¶. In[63]:= Out[63]=

Table[sum1 /. x -> 1, {n, 12}] 8−1, 1, −2, 2, −3, 3, −4, 4, −5, 5, −6, 6

j], {j, n}]/n // Together −n − 2 x − n x + 2 H−xLn x + n H−xLn x + n H−xLn x2 n H1 + xL3

This partial sum still does not converge. In[65]:= Out[65]= In[66]:= Out[66]=

Table[sum2 /. x -> 1, {n, 12}] 2 3 4 5 6 9−1, 0, − , 0, − , 0, − , 0, − , 0, − , 0= 3 5 7 9 11 {sum2 /. x -> 1 /. (-1)^n -> -1, sum2 /. x -> 1 /. (-1)^n -> +1} −4 − 4 n 9 , 0= 8n

So, let us do one more iteration. In[67]:=

sum3 = Sum[Evaluate[sum2 /. n -> j], {j, n}]/n // Together

Symbolic Computations

232 Out[67]=

1 − Hn + n2 + n x + n2 x + x2 + n x2 − H−xLn x2 − n H1 + nL H1 + xL3 n H−xLn x2 + 2 x HarmonicNumber@nD + 2 n x HarmonicNumber@nD − 2 H−xLn x2 Hypergeometric2F1@1 + n, 1, 2 + n, −xD + 2 x Log@1 + xD + 2 n x Log@1 + xDL

Now, we finally have a convergent sum. In[68]:= Out[68]=

Table[Expand[sum3 /. x -> 1], {n, 12}] // N 8−1., −0.5, −0.555556, −0.416667, −0.453333, −0.377778, −0.405442, −0.354762, −0.377072, −0.339365, −0.3581, −0.328259

1 /. n -> N[10^k, 22]], {k, 20}] // N[#, 2]& 80.089, 0.015, 0.0020, 0.00026, 0.000032, 3.8 × 10−6 , 4.3 × 10−7 , 4.9 × 10−8 , 5.5 × 10−9 , 6.1 × 10−10 , 6.6 × 10−11 , 7.2 × 10−12 , 7.8 × 10−13 , 8.4 × 10−14 , 9.0 × 10−15 , 9.5 × 10−16 , 1.0 × 10−16 , 1.1 × 10−17 , 1.1 × 10−18 , 1.2 × 10−19

False, ColorFunction -> (Hue[0.78#]&)], (* 3D plot for uniform random variables *) ListPlot3D[Log @ Abs @ recursivePartialSumList[ Table[Random[Real, {-1, 1}], {i, 120}], 20], Mesh -> False]}]]] 10 8 -0.5 -1 -1.5

6 4

25

2 0

20 15

0

200

400

600

10 50

75

5 100

800 1000 1200

The next inputs use the Cesàro summation method [216] to establish the value -1 ê 4. In[72]:=

Out[72]= In[73]:=

partialSums = Simplify[#, x > 0]& @ Sum[(-1)^(j + 1) (j + 1) x^j, −1 + H2 + kL H−xL1+k − H1 + kL H−xL2+k H1 + xL2

{j, 0, k}]

(* multiply partial sums with a binomial and sum again *) cesaroSum = Sum[Evaluate[partialSums Binomial[n - k + - 1, n - k]], {k, 0, n}]/Binomial[n + , n] /. x -> 1 // Simplify

1.7 Differential and Difference Equations

233

Gamma@1+n+D 3 Gamma@n+D Hypergeometric2F1@1,−n,1−n−,−1D +

Out[74]= In[75]:=

Out[76]=

2 Gamma@−1+n+D Hypergeometric2F1@2,1−n,2−n−,−1D Gamma@1+D Gamma@D − + Gamma@1+nD Gamma@nD Gamma@D 4 Binomial@n + , nD

(* limits for different values for the parameter p *) Table[Limit[FullSimplify[cesaroSum], n -> Infinity], {, 4}] 1 1 1 1 9− , − , − , − = 4 4 4 4

The function Integrate gives finite results for (some) divergent integrals when using the option setting GenerateConditions -> False. Sum does not have the option GenerateConditions. But the function SymbolicSum`SymbolicSum does. The next input calculates a finite result for the divergent sum ¶ H-1Lk lnHkL. ⁄k=1 In[77]:=

Out[77]=

SymbolicSum`SymbolicSum[(-1)^k Log[k], {k, Infinity}, GenerateConditions -> False] // Simplify 1 π LogA E 2 2

Taking into account that ∑ k ¶ ê ∑ ¶ = k ¶ lnHkL, the last result can be understood in the following way (zeta regularization). In[78]:=

Out[78]=

Normal[Series[D[Sum[(-1)^k k^∂, {k, Infinity}], ∂], {∂, 0, 0}]] // Simplify 1 π LogA E 2 2

We end this subsection by remarking that the symbolic analog of the function NProduct, namely the function Product should be mentioned here. Because its syntax and functionality is largely identical to the one of Sum, we just give three simple examples here. In[79]:=

Out[79]=

{Product[Sin[z + k Pi/ν], {k, 0, ν - 1}], Product[1 - k^-4, {k, 2, Infinity}], Product[(1 - Prime[k]^-2)^4, {k, Infinity}]} Sinh@πD 1296 921−ν Sin@z νD, , = 4π π8

We end with an infinite sum over finite products. In[80]:= Out[80]=

Sum[n!/Product[x + k, {k, n}], {n, Infinity}] 1 −1 + x

1.7 Differential and Difference Equations 1.7.0 Remarks In this section, we discuss another of the very useful Mathematica commands for symbolic computations: DSolve, the function for the symbolic solution of ordinary differential equations (ODEs), systems of ODEs [1667], partial differential equations, and differential-algebraic equations. The function DSolve is quite powerful and will find closed-form solutions to many differential equations. Here we present examples for the most popular classes of differential equations. This listing is far from exhaustive.

Symbolic Computations

234

1.7.1 Ordinary Differential Equations The syntax for solving an ordinary differential equation is straightforward. DSolve[listOfODEsAndInitialValues, listOfFunctions, independentVariable] tries to solve the ODE(s) with potential initial conditions given by listOfODEsAndInitialValues for the functions in listOfFunctions. The independent variable is independentVariable. In the case of a single differential equation without initial conditions with only one unknown function, the first and second arguments can appear without the braces.

We first look at a simple example. The result of a successfully solved differential equation is a list of lists of rules—structurally, like the result of Solve. In[1]:= Out[1]=

y1 = DSolve[y''[x] == x^2, y[x], x] x4 99y@xD → + C@1D + x C@2D== 12

Here is a more complicated example. Similar to the results of Integrate and Sum, DSolve-results often contain special functions, Root-objects, and RootSum-objects. In[2]:= Out[2]=

DSolve[y'[x] == y[x]^2 - x, y[x], x] 1 2 2 2 99y@xD → J−BesselJA− , x3ê2 E C@1D + x3ê2 J−2 BesselJA− , x3ê2 E − 3 3 3 3 4 2 2 2 3ê2 3ê2 BesselJA− , x E C@1D + BesselJA , x E C@1DNN í 3 3 3 3 1 2 1 2 J2 x JBesselJA , x3ê2 E + BesselJA− , x3ê2 E C@1DNN== 3 3 3 3

The next picture shows a visualization of the last solution curves generated by choosing real values from the interval @-4, 4D for the integration constant C[1]. In[3]:=

Show[Graphics[{Thickness[0.002], Table[With[{c = Random[Real, {-3, 3}]}, Line /@ DeleteCases[Partition[Table[{x, -(AiryAiPrime[x] + AiryBiPrime[x] c)/(AiryAi[x] + AiryBi[x] c)}, {x, -4., 4., 1/50.}], 2, 1], (* delete steep vertical parts *) _?(#.#&[Subtract @@ #] > 5&)]], {50}]}], Frame -> True] 4

2

0

-2

-4

-4

-2

0

2

4

The specification of the functions in the second argument of DSolve is analogous to that for NDSolve; that is, if no argument is specified for the function to be found, DSolve returns a pure function (with the dummy

1.7 Differential and Difference Equations

235

variable typically being the independent variable from the input equations). Here this is demonstrated using the simple differential equation y≥ HxL = - yHxL [1199], [1863]. In[4]:= Out[4]=

y2 = DSolve[{y''[x] == -y[x], y[0] == 0}, y, x] 88y → Function@8x With[{ = c++, d = Exponent[p[C], C]}, [, d] /; True]] i j j j j j 99Q → FunctionA8k -f'[x] DiracDelta[x] /. DiracDelta[x] f_[x] :> f[0] DiracDelta[x] DiracDelta@xD

The solution of the initial value problem is obtained using the fundamental solution GHxL (Green’s function) [1704] for arbitrary initial values and adding the initial conditions yHnL H0L in the form n ∑k-1 GHxL ê ∑ xn yHn-kL H0L to the right-hand side as an inhomogeneous term. Here is a simple example—the ⁄k=1 differential equation y≥ HxL + yHxL = e-x with initial conditions yH0L = y0 and y£ H0L = y p . We use DSolve to solve the initial value problem. In[80]:=

Out[80]=

sol = DSolve[{y''[x] y[0] == Cos@xD − + y0 Cos@xD + 2

+ y[x] == Exp[-x], y0, y'[0] == yp}, y[x], x][[1, 1, 2]] // Expand 1 Sin@xD 1 −x Cos@xD2 + + yp Sin@xD + −x Sin@xD2 2 2 2

This is a fundamental solution for this problem. In[81]:=

gf[x_] = Limit[DSolve[{y''[x] + y[x] == DiracDelta[x], (* right sided initial conditions; after δ kicked *)

Symbolic Computations

278

Out[81]=

y[∂] == 0, y'[∂] == 1}, y[x], x][[1, 1, 2]] /. DiracDelta[c_] Sin[c_] :> 0 // Simplify, ∂ -> 0, Direction -> -1] Sin@xD UnitStep@xD

Now, we use the fundamental solution to build the solution of the inhomogeneous equation and to fulfill the initial conditions. In[82]:=

Out[82]=

sol1 = Integrate[Expand[gf[x - ξ] Exp[-ξ]], {ξ, 0, Infinity}, GenerateConditions -> False] + (* the initial conditions as part of the inhomogeneous part *) (gf[x - ξ] yp /. ξ -> 0) + (D[gf[x - ξ], x] y0 /. ξ -> 0) /. DiracDelta[c_] Sin[c_] :> 0 1 y0 Cos@xD UnitStep@xD + yp Sin@xD UnitStep@xD + H−x − Cos@xD + Sin@xDL UnitStep@xD 2

For x > 0 (the region under consideration), the solution so-obtained agrees with the one from DSolve. In[83]:= Out[83]=

Expand[sol - %] // Simplify[#, x > 0]& 0

Within the realm of distributions, differential equations get more solutions than just the classical ones. Let us look at the first-order differential equation x2 u£ HxL = 1. In[84]:=

ode = ξ^2 u'[ξ] - 1;

In the space of ordinary functions, we have the solution uHxL = c1 - 1 ê x. In[85]:= Out[85]=

DSolve[ode == 0, u[ξ], ξ] 1 99u@ξD → − + C@1D== ξ

In the space of generalized functions we have the solution uGF HxL = c1 + c2 qHxL + c3 dHxL - 1 ê x. Let us check this. In[86]:= Out[86]=

uGF[ξ_] = c[1] + c[2] UnitStep[ξ] + c[3] DiracDelta[ξ] - 1/ξ 1 − + c@1D + c@3D DiracDelta@ξD + c@2D UnitStep@ξD ξ

Directly substituting the solution into Mathematica does not give zero. In[87]:= Out[87]=

ξ^2 uGF'[ξ] - 1 // Expand ξ2 c@2D DiracDelta@ξD + ξ2 c@3D DiracDelta @ξD

Using Simplify, we can get zero. In[88]:= Out[88]=

Simplify[%] 0

To get the last zero, we have to add the two rules xn dHxL = 0 and xn dHnL HxL = H-1Ln n ! ê Hn - nL! dHn-nL HxL. In[89]:=

δSimplify[expr_, x_] := With[{rules = {x^n_. Derivative[ν_][DiracDelta][x] :> (-1)^n ν!/(ν - n)! Derivative[ν - n][DiracDelta][x], x^n_. DiracDelta[x] :> 0}}, FixedPoint[Expand[#] //. rules&, expr]]

Now it is straightforward to see that uGF HxL is indeed a solution of the differential equation x2 uHxL = 1. In[90]:= Out[90]=

δSimplify[%%, ξ] 0

1.8 Integral Transforms and Generalized Functions

279

No option of DSolve is currently available to generate solutions of differential equations that are distributions. Let us deal with a slightly more complicated example, the hypergeometric differential equation xH1 - xL y££ HxL + Hg - Ha + b + 1L xL y£ HxL - a b yHxL = 0. Classically, the solutions are hypergeometric functions (see Chapter 3). These become rational functions for integer parameters. Here is an example. In[91]:=

ode2F1[x_, y_, {α_, β_, γ_}] = x (1 - x) y''[x] + (γ - (α + β +1) x) y'[x] - α β y[x];

In[92]:=

With[{α = 12, β = 7, γ = 10}, DSolve[ode2F1[x, y, {α, β, γ}] == 0, y, x]] H28 + 3 x H7 + 2 xLL C@1D 99y → FunctionA8x= γ > β

Here is a distributional solution of our special case of the hypergeometric differential equation. In[95]:= Out[95]=

yGF[x, {12, 7, 10}] 1 1 DiracDeltaH6L @xD − DiracDeltaH7L @xD + DiracDeltaH8L @xD 2 12

Substituting this solution into the differential equation and applying our δSimplify shows that this is indeed a solution. In[96]:= Out[96]=

In[97]:= Out[97]=

With[{α = 12, β = 7, γ = 10, y = Function[x, Evaluate[%]]}, ode2F1[x, y, {α, β, γ}]] // Expand −84 DiracDeltaH6L @xD + 52 DiracDeltaH7L @xD − 20 x DiracDeltaH7L @xD − 12 DiracDeltaH8L @xD + 11 x DiracDeltaH8L @xD − 5 13 x2 DiracDeltaH8L @xD + DiracDeltaH9L @xD − x DiracDeltaH9L @xD + 6 6 1 1 1 x2 DiracDeltaH9L @xD + x DiracDeltaH10L @xD − x2 DiracDeltaH10L @xD 2 12 12 δSimplify[%, x] 0

For some more uses of series of Dirac d distributions, see [289], [1728], [969], [1729]; for a spectacular weak solution of the Euler PDEs, see [1615]; for distributional solutions of functional equations, see [454], [456], [1569], and [372]. As a little application of how to deal with the UnitStep and the DiracDelta function in Mathematica, let us check that yHx, tL = qH2 Hx - k tL g + pL qHp - 2 g Hx - k tLL cosd+1 Hg Hx - k tLL eÂ Hk x-w tL is a “finite length solito-

Symbolic Computations

280

nic” solution (also called compacton [1509], [1093], [1165], [1360], [1475], [1166], [406], [1817], [560], [1818], [1873], [1819]) of the following nonlinear Schrödinger equation [300]: 1 1 ∑ rHx, tL 2 ∑yHx, tL 1 ∑2 yHx, tL + ÅÅÅÅÅ Å x J ÅÅÅÅÅÅÅÅ Å ÅÅÅÅÅÅÅ Å ÅÅÅ Å ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅ N yHx, tL Å ÅÅÅ Å i ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = - ÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 8 rHx, tL ∑x ∑ x2 ∑t 2 êêêêêêêêêê where rHx, tL = yHx, tL yHx, tL, 0 < x < 1, d = x ê H1 - xL, and w = Hk 2 + g2 Hd + 1LL ê 2. (For arbitrarily narrow solitons, see [434].) Here, we implement the equations from above. In[98]:=

In[100]:=

δ = ξ/(1 - ξ); ω = 1/2 (k^2 + γ^2 (1 + δ)); Ω[ψ_] := Module[{ψc = ψ /. c_Complex :> Conjugate[c], ρ, j}, ρ = ψ ψc; ξ/8 (D[ρ, x]/ρ)^2]

Without the finite length restriction (the terms qH2 Hx - k tL g + pL qHp - 2 g Hx - k tLL in yHx, tL, it is straightforward that yHx, tL is a solution of the equation. In[101]:=

ψ[x_, t_] = Cos[γ (x - k t)]^(1 + δ) Exp[I (k x - ω t)];

In[102]:=

Factor[I D[ψ[x, t], t] + 1/2 D[ψ[x, t], {x, 2}] - Ω[ψ[x, t]] ψ[x, t]]

Out[102]=

0

Including the finite length condition makes things a bit more tricky. Here is the finite length solution. In[103]:=

ψ1[x_, t_] = ψ[x, t] UnitStep[2 γ (x - k t) + Pi] UnitStep[Pi - 2 γ (x - k t)];

Just plainly redoing the calculation above will not give the desired result. In[104]:= Out[104]=

Simplify[Factor[I D[ψ1[x, t], t] + 1/2 D[ψ1[x, t], {x, 2}] Ω[ψ1[x, t]] ψ[x, t]]] === 0 False

So let us do the calculation step by step. First, we form the first time derivative with respect to t. In[105]:= Out[105]=

D[ψ1[x, t], x] 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ 2 Ik x− 2 t Ik +γ I1+ γ Cos@H−k t + xL γD1+ DiracDelta@π + 2 H−k t + xL γD UnitStep@π − 2 H−k t + xL γD − 1 t Ik2 +γ2 I1+ ξ MMM 1−ξ

2 Ik x− 2

ξ

1−ξ γ Cos@H−k t + xL γD1+ DiracDelta@π − 2 H−k t + xL γD 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ UnitStep@π + 2 H−k t + xL γD + Ik x− 2 t Ik +γ I1+ k Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ MMM ξ 2 2 ξ 1−ξ 1−ξ Ik x− 2 t Ik +γ I1+ γ J1 + N Cos@H−k t + xL γD Sin@H−k t + xL γD 1−ξ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

We implement a generalization of x dHxL = 0 for the form f HtL dHgHtLL to simplify the expression above. In[106]:=

δrule = Times[factors__, DiracDelta[y_]] :> Module[{t0, factor1}, (* the t such that y vanishes *) t0 = t /. Solve[y == 0, t][[1]]; (* the value of factor at t0 *) factor1 = Times[factors] //. _UnitStep -> 1 /. t -> t0; (* the zero result *) 0 /; ((Together //@ factor1) /. 0^_ -> 0) === 0];

1.8 Integral Transforms and Generalized Functions

281

Applying δrule to the first time derivative gives a better result—no Dirac d functions appear anymore. In[107]:= Out[107]=

timeDeriv1 = Expand[D[ψ1[x, t], t]] /. δrule 1 ξ ξ 2 2 1 MMM 2 1−ξ 1−ξ − Ik x− 2 t Ik +γ I1+ k Cos@H−k t + xL γD1+ 2 UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ ξ 2 2 1 MMM 2 1−ξ 1−ξ γ Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD Ik x− 2 t Ik +γ I1+ 2 1 ξ MMM 2 2 1 1−ξ γ2 ξ UnitStep@π + 2 H−k t + xL γD − J Ik x− 2 t Ik +γ I1+ 2 H1 − ξL ξ

1−ξ Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN + 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ Ik x− 2 t Ik +γ I1+ k γ Cos@H−k t + xL γD Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD + 1 ξ ξ 2 2 1 MMM 1−ξ 1−ξ k γ ξ Cos@H−k t + xL γD Sin@H−k t + xL γD J Ik x− 2 t Ik +γ I1+ 1−ξ

UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN

In a similar way, we deal with the first and second space derivative. In[108]:= Out[108]=

spaceDeriv1 = Expand[D[ψ1[x, t], x]] /. δrule 1

2 +γ2 I1+ ξ MMM 1−ξ

Ik x− 2 t Ik

ξ

1−ξ k Cos@H−k t + xL γD1+ UnitStep@π − 2 H−k t + xL γD 1

2

2

ξ

ξ

MMM 1−ξ 1−ξ UnitStep@π + 2 H−k t + xL γD − Ik x− 2 t Ik +γ I1+ γ Cos@H−k t + xL γD Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD − 1 ξ ξ 2 2 1 MMM 1−ξ 1−ξ J Ik x− 2 t Ik +γ I1+ γ ξ Cos@H−k t + xL γD Sin@H−k t + xL γD 1−ξ

UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN In[109]:=

spaceDeriv2 = Expand[D[spaceDeriv1, x]] /. δrule;

The nonlinear term still needs to be dealt with. In[110]:=

Out[111]=

ψ1c = ψ1[x, t] /. c_Complex :> Conjugate[c]; ρ1 = ψ1[x, t] ψ1c 2ξ

1−ξ Cos@H−k t + xL γD2+ UnitStep@π − 2 H−k t + xL γD2 UnitStep@π + 2 H−k t + xL γD2

The rule ruleθ simplifies powers of Heaviside distributions. In[112]:=

ruleθ = u_UnitStep^e_ :> u

Out[112]=

u_UnitStepe_ u

In[113]:=

ρ1 = ρ1 /. ruleθ

Out[113]=

2ξ

1−ξ Cos@H−k t + xL γD2+ UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

After carrying out the spatial differentiation, we again apply our rule δrule. In[114]:= Out[114]=

ρDeriv1 = Expand[D[ρ1, x]] /. δrule 2ξ

1−ξ −2 γ Cos@H−k t + xL γD1+ Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD 2ξ 1 1−ξ UnitStep@π + 2 H−k t + xL γD − J2 γ ξ Cos@H−k t + xL γD1+ 1−ξ

Sin@H−k t + xL γD UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γDN

In the process of forming the expression 1 ê rHx, tL ∑ rHx, tL ê ∑ x, we must be especially careful. Formally, the terms qHp - 2 Hx - k tL gL qH2 Hx - k tL g + pL cancel because inside Times they are treated like a commutative, associative quantity.

Symbolic Computations

282 In[115]:= Out[115]=

ξ/8 (ρDeriv1/ρ1)^2 // Expand 1 γ2 ξ2 Tan@H−k t + xL γD2 γ2 ξ3 Tan@H−k t + xL γD2 γ2 ξ Tan@H−k t + xL γD2 + + 2 1−ξ 2 H1 − ξL2

We restore the finite length conditions “by hand”. In[116]:= Out[116]=

Ω[ψ1] = % UnitStep[2 γ (x - k t) + Pi] UnitStep[Pi - 2 γ (x - k t)] γ2 ξ2 Tan@H−k t + xL γD2 γ2 ξ3 Tan@H−k t + xL γD2 z 1 i j + y j γ2 ξ Tan@H−k t + xL γD2 + z 1−ξ 2 H1 − ξL2 k2 { UnitStep@π − 2 H−k t + xL γD UnitStep@π + 2 H−k t + xL γD

Putting everything together, we arrive at the zero we were hoping for. This indeed shows that ψ1[x, t] describes a finite length soliton of the above nonlinear Schrödinger equation. In[117]:= Out[117]=

Factor[Expand[I timeDeriv1 + 1/2 spaceDeriv2 - Ω[ψ1] ψ1[x, t]] /. ruleθ] 0

Here is a space-time picture of the absolute value of the finite length soliton for certain parameters. It is really a localized, moving, shape-invariant solution of a nonlinear wave equation that is concentrated at every time on a compact space domain. For a fixed time (right graphic), one sees that the transition between the zero-elongation and the nonzero-elongation domain is smooth (which is needed to fulfill the second-order differential equation). In[118]:= Out[118]= In[119]:=

Ψ = With[{k = 2, γ = 1/2, ξ = 1/2}, Evaluate[ψ1[x, t]]] 2 9t 1 I− 4 +2 xM CosA H−2 t + xLE UnitStep@π + 2 t − xD UnitStep@π − 2 t + xD 2

Show[GraphicsArray[ Block[{$DisplayFunction = Identity}, {(* 3D plot of the compacton *) Plot3D[Evaluate[Abs[Ψ]], {x, -12, 12}, {t, -4, 4}, Mesh -> False, PlotPoints -> 140, PlotRange -> All], (* plot of the compacton at a fixed time *) Plot[Evaluate[Abs[Ψ] /. t -> 2], {x, -0, 8}, PlotRange -> All, AspectRatio -> 1/3, Frame -> True, Axes -> False]}]]]

1 0.75 0.5 0.25 0 -10

4 2 0 -5

0

1 0.8 0.6 0.4 0.2 0

0

2

4

6

8

-2 5

10

-4

Until now, we encountered only one possibility that Mathematica would return a Dirac d function if we did not input one; this was by differentiation of the UnitStep function. More functions generate generalized functions, also in case one does not explicitly input the UnitStep or the DiracDelta distribution. The most important one is the Fourier transform [1387]. The Fourier transform t @ f HtLD HwL of a function f HtL is defined as ¶ t @ f HtLD HwL = H2 pL-1ê2 Ÿ-¶ ei w t f HtL dt [920]. (The square brackets in the traditional form notation t @ f HtLD HwL indicate the fact that the Fourier transform of f HtL is a linear functional of f HtL and a function of w.)

1.8 Integral Transforms and Generalized Functions

283

FourierTransform[f(t), t, ω] represents the Fourier transform of the function f HtL with respect to the variable t and the kernel ei w t .

Here is the Fourier transform of an “ordinary” function. In[120]:=

Clear[t, ω, x, y, s, Ω, a, b, term] FourierTransform[Exp[-x^2] x^3, x, y] y2

Out[121]=

− 4 y H−6 + y2 L − è!!!! 8 2

The Fourier transformation is a linear operation. In[122]:= Out[122]=

FourierTransform[α Sin[x^2] + β Exp[-x^2], x, y] y2 1 è!!!! y2 y2 J 2 − 4 β + α CosA E − α SinA EN 2 4 4

Derivative operators transform under a Fourier transformation into multiplication operators. This property makes them useful for solving ordinary and partial differential equations [532], [750], [443], [183], [879]. In[123]:= Out[123]=

FourierTransform[y''[x], x, ξ] −ξ2 FourierTransform@y@xD, x, ξD

The Fourier transform of the function 1 is essentially a Dirac d distribution [257]. In[124]:= Out[124]=

FourierTransform[1, t, ω] è!!!!!!!! 2 π DiracDelta@ωD

The following Fourier transform of cosHtL and sinHtL too gives a result that contains Dirac d distribution. In[125]:= Out[125]=

FourierTransform[α Cos[t] + β Sin[t], t, ω] π π α DiracDelta@−1 + ωD + $%%%%%%% β DiracDelta@−1 + ωD + $%%%%%%% 2 2 π π $%%%%%%% α DiracDelta@1 + ωD − $%%%%%%% β DiracDelta@1 + ωD 2 2

Be aware that carrying out the “integral” (using Integrate) will not result in a Dirac d distribution. In[126]:=

Integrate[Exp[I k t] Exp[I ω t], {t, -Infinity, Infinity}, Assumptions -> Im[k] == 0]/(2 Pi) Integrate::idiv : Integral of t Hk+ωL does not converge on 8−∞, ∞ (η[#][x]&)]], x, s]]], s, x] // Expand

Here are the first three partial sums of the hk HxL shown. In[197]:=

Out[198]=

yApproxList[x_] = Rest[FoldList[Plus, 0, Take[yApproxList[x], 3] x2 55 5 x2 x4 91, + Cos@xD, − + − 6 Cos@xD + 2 8 4 8

Table[η[k][x], {k, 0, 5}]]]; 1 1 x2 Cos@xD + Cos@2 xD − 2 x Sin@xD= 2 8

We compare the approximate solutions with a high-precision numerical solution ndsol. The following graphics show, that with each hk HxL the solution becomes substantially better and the fifth approximation has an error less than 10-10 for 0 § x d 0.8. In[199]:=

(* high-precision numerical solution *) ndsol = NDSolve[{yN''[x] + a[x] yN'[x] + b[x] yN[x] == f[yN[x]], yN[0] == 1, yN'[0] == 0}, yN, {x, 0, 5/2}, WorkingPrecision -> 50, MaxSteps -> 10^5, PrecisionGoal -> 30, AccuracyGoal -> 30];

In[201]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, (* order increases from red to blue *) = Table[Hue[k/7], {k, 0, 5}]}, {(* absolute differences *) Plot[Evaluate[Join[yApproxList[x], yN[x] /. ndsol]], {x, 0, 5/2}, PlotRange -> All, PlotStyle -> Prepend[, GrayLevel[0]]], (* logarithms of the differences *) MapIndexed[(δN[#2[[1]]][x_?NumberQ] := Log[10, Abs[SetPrecision[#1, 60] - yN[SetPrecision[x, 60]] /. ndsol[[1]]]])&, yApproxList[x]]; (* show logarithms of the differences *) Plot[Evaluate[Table[δN[k][x], {k, 6}]], {x, 0, 5/2}, PlotRange -> {All, {-10, 2}}, PlotStyle -> ]}]]]

Symbolic Computations

294 7

2

6 0.5 5

1

1.5

2

2.5

-2

4

-4

3

-6

2

-8 0.5

1

1.5

2

2.5

-10

For the application of the Adomian decomposition to boundary value problems, see [455], [1816].

1.9 Additional Symbolics Functions Now, we are nearly at the end of our chapter about symbolic computations. Many features of Mathematica have been discussed, but as many have not been discussed. The next section will deal with some applications of the discussed functions. In addition to the functionality built into the Mathematica kernel, a number of important packages in the standard package directory of Mathematica are useful for symbolic calculations, and they enhance the power of the corresponding built-in functions and offer new functionality. In addition to Calculus`Limit`, Calculus`PDSolve1`, and Calculus`DSolve`, which were already mentioned above, the following packages are often very useful: Calculus`VectorAnalysis`, DiscreteMath` RSolve`, and Calculus`VariationalMethods` . The functions contained in these packages can be deduced immediately from their names. Because of space and time limitations, we look only briefly at what these packages can accomplish. The package Calculus`VariationalMethods` implements the calculation of variational derivatives of integrals and the associated Euler-Lagrange equation (for an introduction to variational calculations, see, e.g., [240], [664], or for somewhat more detail, see [439] and [1806]). In[1]:=

Needs["Calculus`VariationalMethods`"]

In[2]:=

?VariationalD VariationalD@f, u@xD, xD or VariationalD@f, u@x,y,...D, 8x,y,... 1 the series is divergent (corresponding to the singularity of 1 ê H1 - a x2 L at x = 1 ê a). Exchanging summation and integration yields a divergent sum. But due to the automatic Borel summation of SymbolicSum`SymbolicSum for such type sums, we get the a closed-form result as for the integral.

Symbolic Computations

296

¶

‡

0

2 ¶ i¶ e-x 2j zy ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ2ÅÅ dx =° ‡ e-x jjj‚ H1 + kL ak xk zzz dx =° H1 - a xL 0 k k=0 {

¶

i kj

¶

„ H1 + kL a j‡ e k 0 k=0

In[13]:= Out[13]= In[14]:=

Out[14]=

¶

-x2

GHH1 + kL ê 2L y x dxz =° „ H1 + kL ak ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 { k

k=0

Integrate[x^k Exp[-x^2], {x, 0, Infinity}, Assumptions -> k >= 0] 1 1+k GammaA E 2 2 sum = SymbolicSum`SymbolicSum[α^k (1 + k) Gamma[(1 + k)/2]/2, {k, 0, Infinity}, GenerateConditions -> False] − 12 12 è!!!! 1 1 1 1 y i "########### − 1 1 α α z j 2 J π − π − + α2 α − π "######## ErfiA"######## EN 2 α2 Gamma@0, − 1 j α2 α2 α2 2 D z j α z j z − − j z j 2 3 z α α 2 j z z j { k

Using the functions FullSimplify we can show that the sum and the integral are identical. (FullSim plify simplifies identities with special functions, we will discuss it in Chapter 3.) In[15]:= Out[15]=

FullSimplify[int - sum, α < 0] 0

Here is another example. We first sum a series [734] and then recover the nth term. In[16]:= Out[16]= In[17]:=

Sum[([x] - [y])^n/(x - y)^(n + 1) λ^n, {n, Infinity}] // Simplify λ H@xD − @yDL Hx − yL Hx − y − λ @xD + λ @yDL SeriesTerm[%, {λ, 0, n}] // Simplify[#, n > 1]& n

Out[17]=

@xD−@yD I M x−y x−y

As a small application of the function SeriesTerm, we will prove the following identity (due to Ramanujan) about the Taylor series coefficients of three rational functions [856], [570]. 3

3

ij k ij 9 x2 + 53 x + 1 yzyz ij k ij -12 x2 - 26 x + 2 yzyz j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å zz + j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz = k k x3 - 82 x2 - 82 x + 1 {{ k k x3 - 82 x2 - 82 x + 1 {{ 3

ij k ij -10 x2 + 8 x + 2 yzyz j@x D j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz + H-1Lk k x3 - 82 x2 - 82 x + 1 {{ k Using the function Series, we can easily explicitly verify the identity for the first few coefficients. In[18]:=

Out[20]=

abc = {1 + 53 x + 9 x^2, 2 - 26 x - 12 x^2, 2 + 8 x - 10 x^2}/ (1 - 82 x - 82 x^2 + x^3); (#1 + #2 - #3)& @@@ Transpose[#[[3]]^3& /@ Series[abc, {x, 0, 12}]] 81, −1, 1, −1, 1, −1, 1, −1, 1, −1, 1, −1, 1

0]& H−1Lk

We end with another application of the series terms also due to Ramanujan: Calculating integrals through series ¶ terms. For a sufficiently nice function f HxL, the kth moment mk @ f HxLD =Ÿ0 xk f HxL dx can be calculated through k the analytic continuation of the series coefficient cHkL = @x D H f HxLL to negative integer k by mk = -H-1L-k k ! H-k - 1L! c-k-1 (Ramanujan’s master theorem [157]). Here is a simple example. In[23]:=

f[x_] = x^2 Exp[-x] Sin[x]^2;

In[24]:=

c[k_] = SeriesTerm[f[x], {x, 0, k}]; intc[k_] = k! (-1)^(-k - 1) (-k - 1)! c[-k - 1] 1 1 IH−1L−4−2 k I−1 + 5 2 H−3−kL Cos@H−3 − kL ArcTan@2DDM 2π H−1 − kL ! k ! Gamma@3 + kD Sin@H−3 − kL πDM

Out[25]=

This is the result of the direct integration. In[26]:= Out[26]=

intI[k_] = Integrate[x^k f[x], {x, 0, Infinity}, Assumptions -> k > 0] 3 k 1 I1 − 5− 2 − 2 Cos@H3 + kL ArcTan@2DDM Gamma@3 + kD 2

For negative integer k, intc[k] is indeterminate. For concrete k we could use Limit or Series to obtain a value. For generic k, we simplify first the Gamma functions using FullSimplify. In[27]:= Out[27]=

intI[k]/intc[k] // FullSimplify // Simplify[#, Element[k, Integers]]& 1

(For calculating series terms of arbitrary order, see [1040], [1041], and [1043].)

Symbolic Computations

298

1.10 Three Applications 1.10.0 Remarks In this section, we will discuss three larger calculations. Here, “larger” mainly refers to the necessary amount of operations to calculate the result and not so much to the number of lines of Mathematica programs to carry it out. The first two are “classical” problems. Historically, the first one was solved in an ingenious method. Here we will implement a straightforward calculation. Carrying out the calculation of an extension of the second one (cosH2 p ê 65537L) took more than 10 years at the end of the nineteenth century. The third problem is a natural continuation from the visualizations discussed in Section 3.3 of the Graphics volume [1736]. The code is adapted to Mathematica Version 5.1. As mentioned in the Introduction, later versions of Mathematica may allow for a shorter implementation and more efficient implementation.

1.10.1 Area of a Random Triangle in a Square In the middle of the last century, J. J. Sylvester proposed calculating the expectation value of the convex hull of n randomly chosen points in a plane square. For n = 1, the problem is trivial, and for n = 2, the question is relatively easy to answer. For n ¥ 3, the straightforward formulation of the problem turns out to be technically quite difficult because of the multiple integrals to be evaluated. In 1885, M. W. Crofton came up with an ingenious trick to solve special cases of this problem. (His formulae are today called Crofton’s theorem.) At the same time, he remarked: The intricacy and difficulty to be encountered in dealing with such multiple integrals and their limits is so great that little success could be expected in attacking such questions directly by this method [direct integration]; and most of what has been done in the matter consists in turning the difficulty by various considerations, and arriving at the result by evading or simplifying the integration. [1031] The general setting of the problem is to calculate the expectation value of the minHn - 1, dL-dimensional volume of the convex hull of n points in d dimensions, for instance, the volume of a random tetrahedron formed by four randomly chosen points in 3 . For details about what is known, the Crofton theorem and related matters, see [35], [262], [562], [263], [1217], [832], [1031], [1238], [277], and [1410]. For an ingenious elementary derivation for the n = 3 case, see [1596]; for a tetrahedron in a cube, see [1910]. For the case of a tetrahedron inside a tetrahedron, see [1196].) In this subsection, we will show that using the integration capabilities of Mathematica it is possible to tackle such problems directly—this means by carrying out the integrations. (This subsection is based on [1733].) In the following, let the plane polygon be a unit square. We will calculate the expectation value of the area of a random triangle within this unit square (by an affine coordinate transformation, the problem in an arbitrary convex quadrilateral can be reduced to this case). Here is a sketch of the situation. In[1]:=

With[{P1 = {0.2, 0.3}, P2 = {0.8, 0.2}, P3 = {0.4, 0.78}}, Show[Graphics[ {{Thickness[0.01], Line[{{0, 0}, {1, 0}, {1, 1}, {0, 1}, {0, 0}}]}, {Thickness[0.002], Hue[0], Line[{P1, P2, P3, P1}]},

1.10 Three Applications

299

{Text["P1", {0.16, 0.26}], Text["P2", {0.84, 0.16}], Text["P3", {0.40, 0.82}]}}], AspectRatio -> Automatic]]

P3

P1 P2

Let 8x1, y1 -1]; (* the upper limit *) uValue = Limit[indefiniteIntegral, ξ -> u, Direction -> +1]; Factor[Together[uValue - lValue]]]

To speed up the indefinite integration and the calculation of the limits, we apply some transformation rules implemented in LogExpand to the expressions. LogExpand splits all Log[expr] into as many subparts as possible to simplify the integrands. Because we know that the integrals we are dealing with are real quantities, we do not have to worry about branch cut problems associated with the logarithm function, and so drop all imaginary parts at the end. In[16]:=

LogExpand[expr_] := PowerExpand //@ Together //@ expr

Now, we have all functions together and can actually carry out the integration. To get an idea about the form of the expressions appearing in the six integrations, let us have a look at the individual integration results of the first region. (The indefinite integrals are typically quite a bit larger than the definite ones, as shown in the following results.) This is the description of the first six-dimensional region. In[17]:= Out[17]=

regions[[1]] 1 −x1 + y1 x2 y1 99x1, 0, =, 8y1, 0, x1 {Automatic, (# /. Log[x_] :> Log[2, x]/Log[2])&}]& @ (Re[Together[Plus @@ Apply[multiDimensionalIntegrate[area, ##]&, regions, {1}]]] // Timing) 11 91133.01 Second, = 288

All p and logH2L terms cancelled, and we got (taking into account the triangles with negative orientation) for the expectation value, the simple result A = 11 ê 144. The degree of difficulty to do multidimensional integrals is often depending sensitively from the order of the integration. As a check of the last result and for comparison, we now first evaluate the three integrations over the yi and then the three integration over the xi . For this situation, we have only 62 six-dimensional regions. In[26]:=

cad2 = GenericCylindricalAlgebraicDecomposition[ signedTriangleArea && unitCube6D, {x1, x2, x3, y1, y2, y3}]; regions2 = Apply[List, Apply[{#3, #1, #5} &, cad2[[1]] //. a_ && (b_ || c_) :> a && b || a && c, {2}], {0, 2}];

Out[30]=

Length[regions2] 62

And doing the integrations and simplifying the result takes now only a few seconds. Again, we obtain the result 11/288. In[31]:=

Out[31]=

Simplify[Together[Re[Plus @@ Apply[multiDimensionalIntegrate[area, ##]&, regions2, {1}]]], TransformationFunctions -> {(# /. Log[k_Integer] :> (Plus @@ ((#2 Log[#1])& @@@ FactorInteger[k])))&}] // Timing 11 932.51 Second, = 288

Symbolic Computations

304

Using numerical integration, we can calculate an approximative value of this integral to support the result 11 ê 144. In[32]:=

Out[32]=

(SeedRandom[111]; NIntegrate[Evaluate[Abs[area]], {x1, 0, 1}, {y1, 0, 1}, {x2, 0, 1}, {y2, 0, 1}, {x3, 0, 1}, {y3, 0, 1}, Method -> QuasiMonteCarlo, MaxPoints -> 10^6, PrecisionGoal -> 3]) 0.0763889

This result confirms the above result. In[33]:= Out[33]=

N[2 %%[[2]]] 0.0763889

We could now go on and calculate the probability distribution for the areas. The six-dimensional integral to be calculated is now 1

1

1

1

1

1

pHAL ~ ‡ ‡ ‡ ‡ ‡ ‡ dHA - H x1 , x2 , x3 , y1 , y2 , y3 LL d y3 dx3 d y2 dx2 d y1 dx1 , 0

0

0

0

0

0

H x1 , x2 , x3 , y1 , y2 , y3 L = †x3 y1 - x2 y1 + x1 y2 - x3 y2 + x2 y3 - x1 y3 §. (Here we temporarily changed A Ø 2 A so that all variables involved range over the interval @0, 1D. This time, before subdividing the integration variable space into subregions, we carry out the integral over y3 to eliminate the Dirac d function. To do this, we use the identity b

b

dHy - x0,k L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dx ‡ dHy - f HxLL dx = ‡ ‚ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ † f £ Hx0,k L§ k a

a

where the x0,k are the zeros of f HxL in @a, bD. Expressing y3 through x1 , x2 , x3 , y1 , y2 , and A yields the following expression. In[34]:=

Out[34]=

soly3 = Solve[A == (* or - *) (-x2 y1 + x3 y1 + x1 y2 - x3 y2 x1 y3 + x2 y3), y3][[1, 1, 2]] −A − x2 y1 + x3 y1 + x1 y2 − x3 y2 x1 − x2

And the derivative from the denominator becomes †x1 - x2 §. In[35]:= Out[35]=

D[-x2 y1 + x3 y1 + x1 y2 - x3 y2 - x1 y3 + x2 y3, y3] −x1 + x2

Now is a good time to obtain a decomposition of the space into subregions. In addition to the constraints following from the geometric constraints of the integration variables being from the unit square, we add three more inequalities: 1) 0 < y3 H x1 , x2 , x3 , y1 , y2 ; AL < 1 to ensure the existence of a zero inside the Dirac d function argument; 2) A > 0 for positive oriented areas; and 3) x1 > x2 to avoid the absolute value in the denominator (the case x1 < x2 follows from symmetry). In[36]:=

cad = Experimental`GenericCylindricalAlgebraicDecomposition[ 0 < soly3 < 1 && A > 0 && x1 > (* or < *) x2 && 0 < x1 < 1 && 0 < x2 < 1 && 0 < x3 < 1 && 0 < y2 < 1 && 0 < y1 < 1, {A, x1, x2, x3, y1, y2}];

1.10 Three Applications

305

This time, we get a total of 1282 subregions. In[37]:= Out[37]=

(l1 = cad[[1]] //. a_ && (b_ || c_) :> (a && b) || (a && c)) // Length 1282

One expects the probability distribution pHAL to be a piecewise smooth function of l. Six l-interval arise naturally from the decomposition. In[38]:= Out[38]= In[39]:=

Union[First /@ l1] 1 1 1 1 1 1 1 1 1 1 0 < A < »» < A < »» < A < »» < A < »» < A < »» < A < 1 6 6 5 5 4 4 3 3 2 2 ASortedRegions = {#[[1, 1, 2]] < A < #[[1, 1, 3]] , Rest /@ #}& /@ Split[Sort[(# /. Inequality[a_, Less, b_, Less, c_] :> {b, a, c} /. And -> List)& /@ (List @@ l1)], #1[[1]] === #2[[1]]&];

Here is the number of regions for the six l-intervals. In[40]:= Out[40]=

{#1, Length[#2] "subregions"}& @@@ ASortedRegions 1 1 1 990 < A < , 317 subregions=, 9 < A < , 324 subregions=, 6 6 5 1 1 1 1 9 < A < , 310 subregions=, 9 < A < , 216 subregions=, 5 4 4 3 1 1 1 9 < A < , 99 subregions=, 9 < A < 1, 16 subregions== 3 2 2

The regions themselves look quite similar to the above ones. In[41]:= Out[41]=

{#1, #2[[1]]}& @@@ ASortedRegions 1 990 < A < , 98x1, 0, A 0

To calculate the sixfold integral, we will follow the already twice successfully-used strategy to first calculate a decomposition of the integration domain. Because of the obvious fourfold rotational symmetry of pHx, yL around the square center 81 ê 2, 1 ê 2 a && b || a && c;

In[59]:=

Length[l1]

Out[59]=

327

All cells span the specified x,y-domain. This means, the density pHx, yL is continuous within this domain. In[60]:= Out[60]=

Union[Take[#, 2]& /@ l1] 1 1 < x < 1 && < y < x 2 2

1.10 Three Applications

309

The cells of the 6D integration domain have similar-looking boundaries as the cells from the above calculations. In[61]:=

xyRegions = (# /. Inequality[a_, Less, b_, Less, c_] :> {b, a, c} /. And -> List)& /@ ((* remove x and y parts *) List @@ Drop[#, 2]& /@ l1);

In[62]:=

xyRegions[[1]] −x + y −x1 y + x2 y 99x1, 0, =, 8x2, 0, x1 {1, 1, 1/2}, Axes -> True]], (* modeled probability *) Module[{d = 60, o = 10^4, data, if}, data = Compile[{}, Module[{T = Table[0, {d}, {d}], p1, p2, p3, xc, yc, mp, σ}, Do[{p1, p2, p3} = Table[Random[], {3}, {2}]; mp = (p1 + p2 + p3)/3; (* orientation of the normals *) σ = Sign[(Reverse[p2 - p1]{1, -1}).(mp - p1)]; (* are discretized square points inside triangle? *) Do[If[σ (Reverse[p2 - p1]{1, -1}).({x, y} - p1) > 0 && σ (Reverse[p3 - p2]{1, -1}).({x, y} - p2) > 0 && σ (Reverse[p1 - p3]{1, -1}).({x, y} - p3) > 0, (* increase counters *) {xc, yc} = Round[{x, y} (d - 1)] + 1; T[[xc, yc]] = T[[xc, yc]] + 1], {x, 0, 1, 1/(d - 1)}, {y, 0, 1, 1/(d - 1)}], {o}]; T]][]; (* interpolated scaled counts *) if = Interpolation[Flatten[MapIndexed[Flatten[ {(#2 - {1, 1})/(d - 1), #1}]&, data/o, {2}], 1]]; (* interpolated observed frequencies *) Plot3D[if[x, y], {x, 0, 1}, {y, 0, 1}, Mesh -> False]]}]]]

0.2

0.2 1

0.1

0.75

0 0

0.1 0 0

0.5 0.25

0.5

0.25 0.75 1

1 0.8 0.6 0.2

0.4 0.4

0.6

0

0.2 0.8 0 8

1

0

We end by integrating the calculated probability density pHx, yL over the unit square. pHx, yL is the probability that the point 8x, y< is inside a randomly chosen triangle. This means the average of pHx, yL is again the area of a randomly chosen triangle, namely 11 ê 144. In[76]:= Out[76]=

8 Integrate[p[x, y], {x, 1/2, 1}, {y, 1/2, x}] 11 144

For a similar probabilistic problem, the Heilbronn triangle problem, see [936].

Symbolic Computations

312

2p 1.10.2 cosI ÅÅÅÅ ÅÅÅ M à la Gauss 257 In the early morning of March 29 in 1796, Carl Friedrich Gauss (while still in bed) recognized how it is possible to construct a regular 17-gon by ruler and compass; or more arithmetically and less geometrically speaking, he 2p ÅÅÅÅ L in terms of square roots and the four basic arithmetic operations of addition, subtraction, expressed cosH ÅÅÅÅ 17 multiplication, and division only. (This discovery was the reason why he decided to become a mathematician j [1472], [704], [1792].) His method works immediately for all primes of the form 22 + 1, so-called Fermat numbers F j [1080]. For j = 0 to 4, we get the numbers 3, 5, 17, 257, and 65537. ( j = 5, …, 14 do not give primes; we return to this at the end of this section.) The problem to be solved is to express the roots of z p = 1, where p is a Fermat prime in square roots. One obvious solution of this equation is z = 1. After dividing z p = 1 by this solution, we get as the new equation to be solved: z p-1 + z p-2 + ∫ + z + 1 = 0. It can be shown that there are no further rational zeros; so this equation cannot be simplified further in an easy l2pi ÅÅÅÅÅÅ M, l integer, way. Let us denote (by following Gauss’s notation here and in the following) the solution expI ÅÅÅÅÅÅÅÅ p êê 1 § l § p - 1 of this equation by l (which is, of course, a solution, but which contains a pth root). Gauss’s idea, which solves the above equation exclusively in square roots, is to group the roots of the above equation in a recursive way such that the explicit values of the sums of these roots can be expressed in numbers and square roots. Each step then rearranges these roots until finally only groups of length two remain. These last groups are j 2p ÅÅÅÅÅ M. then just of the form cosI ÅÅÅÅÅÅÅÅ p Let us describe this idea in more detail. First, we need the number-theoretic notion of a primitive root: the number g is called a primitive root of p if the set of numbers 8gi mod p All, AspectRatio -> Automatic]

In[3]:=

(* reduced residue system exists for the following 128 numbers *) rssNumbers = Flatten[Position[Table[Sort[Array[ PowerMod[i, #, 257]&, 256, 0]] == Range[256], {i, 256}], True]];

In[5]:=

(* visualizations of the powermod sequences *) Function[bs, Show[GraphicsArray[Function[b,

1.10 Three Applications

313

primitiveRootsGraphics[b]] /@ bs]]] /@ (* display nine examples *) Partition[rssNumbers[[{1, 2, 3, 33, 42, 43, 66, 106, 114}]], 3]

Make Input

Show[primitiveRootsGraphics[#]]& /@ rssNumbers

(For some interesting discussions about the number of crossings and the number of regions in such pictures, see [1428].) f

ó The next concept we need is that of the so-called periods. A period l to the primitive root g, containing the root êê l and having length f, is defined by the expression below. (The dependence on the fixed quantities p and g is suppressed.) f -1 êêêêêêêêêêêê f jHp-1L ó l = ‚ l g ÅÅÅÅÅÅÅÅÅÅÅÅf ÅÅÅÅÅÅ j=0

êêêêêêêê êê Because the root l + p is equivalent to the root l, we implement the construction of the periods in the following êê way. (We again use PowerMod because of speed and denote l by R[l].)

Symbolic Computations

314 In[7]:=

period[λ_, f_, p_, g_] := Plus @@ (R /@ Mod[Mod[λ, p] Array[ PowerMod[g^((p - 1)/f), #, p]&, f, 0], p])

Let us look at two examples for the prime 17 and the primitive root 3. In[8]:= Out[8]= In[9]:= Out[9]=

period[1, 8, 17, 3] R@1D + R@2D + R@4D + R@8D + R@9D + R@13D + R@15D + R@16D period[3, 8, 17, 3] R@3D + R@5D + R@6D + R@7D + R@10D + R@11D + R@12D + R@14D

We see that their sum just gives the sum of all roots. This is always the case if p is a Fermat prime; here, the case p = 257 is checked. In[10]:= Out[10]= In[11]:= Out[11]=

period[1, 128, 257, 3] + period[3, 128, 257, 3] == Plus @@ Array[R, 256] True period[5, 128, 257, 3] + period[9, 128, 257, 3] == Plus @@ Array[R, 256] True

Dividing the last period again into subperiods by using the above definition for the periods, we find that the period period[3, 8, 17, 3] can be expressed as the sum of the following periods. In[12]:= Out[12]= In[13]:= Out[13]=

period[3, 4, 17, 3] R@3D + R@5D + R@12D + R@14D period[11, 4, 17, 3] R@6D + R@7D + R@10D + R@11D

It can be shown that one can always represent a period in this way: One root of one of the new periods is êêêêêêêêêêêêêêêê êêêêêêêêêêêêê p-1 êê identical to the old one (l), whereas one root of the other period is generated by the root Jl g ÅÅÅÅÅfÅÅÅÅÅ Å N modH p - 1L. The other roots of the two periods under consideration follow immediately from the above definition of the 17-1 periods. In our example, we have this for one root of the second period: I33 ÅÅÅÅÅÅÅÅ8ÅÅÅÅÅÅ M mod 16 = 11. In doing this division process for the periods repeatedly, we end up in periods of length two. These periods are of the form êê êêêêêêêê êê êêêêêêêê 2p 1 + p - 1, 2 + p - 2, …, which give immediately 2 cosI ÅÅÅÅ ÅÅÅÅ M, 2 cosI2 µ 2 ÅÅÅÅppÅ M, …. To explicitly calculate the p values of the periods in square roots, we need the following theorem: The (numerical) values L1 , L2 of two periods l1 , l2 (which contain no higher roots than square roots and are to be discriminated from the periods l1 , l2 themselves) obtained by splitting one period are the solutions of a quadratic equation. If L1 and L2 are the solutions of L2 + a1 L + a2 = 0; by Vieta’s theorem, we have L1 L2 = -a1 and L1 + L2 = a2 . The sum of the two periods is just the period before splitting, and the (numerical) value of the starting period is -1. It is 2f ò important to observe that the product of two periods of length f, obtained by splitting a period l , can always be expressed as a linear combination of periods of length 2 f . The explicit formula for carrying out this multiplication of two periods is given by f

f

f

f

ò õúúúúúùúúúû õúúúúú ùúúúû õúúúúú ùúúúúû lm = l1 m1 + l2 m1 + ∫ + l f m1 where f

f ó êêêê êêêê êêêê ó êêêê êêêê êêêê l = l1 + l2 + ∫ + l f and m = m1 + m2 + ∫ + m f .

1.10 Three Applications

315

After this multiplication, the periods on the right-hand side can then be expressed as periods of length 2 f or as Å2ÅÅÅÅÅ , they can always be expressed as pure numbers, which ensures that we have pure numbers. (For f = ÅÅÅÅp-1 appropriate starting values for the recursive calculation.) Here, the above two periods of length 8 of p = 17 (period[1, 8, 17, 3] and period[3, 8, 17, 3]; m1 = 1, l1 = 3, l2 = 5, l3 = 6, l4 = 7, l5 = 10, l6 = 11, l7 = 12, l8 = 14) are multiplied in this manner. In[14]:=

Out[14]=

period[ 3 + 1, 8, 17, 3] + period[ 5 + 1, 8, 17, 3] + period[ 6 + 1, 8, 17, 3] + period[ 7 + 1, 8, 17, 3] + period[10 + 1, 8, 17, 3] + period[11 + 1, 8, 17, 3] + period[12 + 1, 8, 17, 3] + period[14 + 1, 8, 17, 3] // Factor 4 HR@1D + R@2D + R@3D + R@4D + R@5D + R@6D + R@7D + R@8D + R@9D + R@10D + R@11D + R@12D + R@13D + R@14D + R@15D + R@16DL

By taking into account the original equation this obviously simplifies to -4. (The value of the period of length 16 was -1.) Å2ÅÅÅÅÅ can be given in closed form. The two values for the periods of length ÅÅÅÅp-1 In[15]:=

{1/2 (-1 + I^(((p - 1)/2)^2) Sqrt[p]), 1/2 (-1 - I^(((p - 1)/2)^2) Sqrt[p])};

This agrees with the direct numerical calculation, as shown here for p = 17. In[16]:= Out[16]= In[17]:= Out[17]=

% /. p -> 17 // N 81.56155, −2.56155< {period[1, 8, 17, 3] /. (R -> (Exp[2Pi I #/17.]&)), period[3, 8, 17, 3] /. (R -> (Exp[2Pi I #/17.]&))} // N // Chop 81.56155, −2.56155

17 the various lists of rules that are in use inside GaussSolve are quite big, we use Dispatch to accelerate their application (with the exception of the list solList, which is not used actively internally, but only serves as a container for the results). In[18]:=

GaussSolve[p:(3 | 5 | 17 | 257 | 65537), Λ_Symbol] := Module[{g = 3, λ, newλs, Timesλ, allλs, rules1, rules2, Simplifyλ, solStep, solArgs, solNList, solList = {Λ[1, p - 1] -> - 1}}, (* the λ’s *) λ[t_, f_] := λ[t, f] = Function[γ, Mod[Mod[t, p] Array[ PowerMod[γ, #, p]&, f, 0], p]][g^((p - 1)/f)]; (* newλs function definition with remembering *) newλs[t_, f_] := newλs[t, f] = {t, Mod[Mod[t, p] PowerMod[g, (p - 1)/f, p], p]}; (* Timesλ function for λ multiplication *) Timesλ[t_, u_, f_] := Plus @@ (Λ[#, f]& /@ Mod[λ[u, f] + t, p]); (* allλs lists *) allλs[p - 1] = {1}; allλs[f_] := allλs[f] = Flatten[Map[newλs[#, 2f]&, allλs[2f], {-1}]]; (* rules1 for λ canonicalization *) rules1[f_] := rules1[f] = Dispatch[Map[Λ[#, f]&, Flatten[Function[a, Apply[Rule, Transpose[{Rest[a], Table[#, {Length[Rest[a]]}]&[First[a]]}], {1}]] /@ (λ[#, f]& /@ allλs[f])], {-1}]]; (* rules2 for λ eliminating one λ *) rules2[(p - 1)/2] = Λ[g, (p - 1)/2] -> - 1 - Λ[1, (p - 1)/2]; rules2[f_] := rules2[f] = Dispatch[ Λ[#[[2, 2]], f] -> Λ[#[[1]], 2f] - Λ[#[[2, 1]], f]& /@ Map[{#, newλs[#, 2f]}&, allλs[2f], {-1}]]; (* Simplifyλ for simplifying products of λs *) Simplifyλ[t_, u_, f_] := Fold[Expand[#1 //. #2]&,

1.10 Three Applications

317

Expand[Timesλ[t, u, f] //. rules1[f]], rules2 /@ (f 2^Range[0, Log[2, (p - 1)/f] - 1])]; (* solStep for period subdivision *) solStep[t_, f_] := Module[{u, v, x1Px2, x1Tx2, sol1, sol2, sol1N, sol2N, numSol1}, {u, v} = newλs[t, f]; x1Px2 = Λ[t, f]; x1Tx2 = Simplifyλ[u, v, f/2]; {sol1, sol2} = # + Sqrt[#^2 - x1Tx2]{1, -1}&[x1Px2/2]; numSol1 = Λ[u, f/2] //. solNList; {sol1N, sol2N} = N[{sol1, sol2} //. solNList]; solList = Flatten[{solList, If[Abs[sol1N - numSol1] < Abs[sol2N - numSol1], {Λ[u, f/2] -> sol1, Λ[v, f/2] -> sol2}, {Λ[u, f/2] -> sol2, Λ[v, f/2] -> sol1}]}]; ]; (* solNList for numerical values of the periods *) solNList = Dispatch[Apply[(Λ @ ##) -> (Plus @@ Exp[N[2Pi I λ[##]/p]])&, Flatten[Function[i, {#, i}& /@ allλs[i]] /@ (2^Range[Log[2, p - 1], 1, -1]), 1], {1}]]; (* stepArgs for period arguments *) stepArgs = Flatten[Function[i, {#, i}& /@ allλs[i]] /@ (2^Range[Log[2, p - 1], 2, -1]), 1]; (* do the work *) solStep @@ #& /@ stepArgs; solList]

Now, let us calculate the two simple cases p = 3 and p = 5 as a warm up. In[19]:= Out[19]= In[20]:= Out[20]=

(Λ[1, 2] //. GaussSolve[3, Λ])/2 1 − 2 (Λ[1, 2] //. GaussSolve[5, Λ])/2 // Expand è!!!! 5 1 − + 4 4

The results agree with the well-known expressions for cosH2 p ê 3L and cosH2 p ê 5L. Here is the list of the values of the periods for p = 17. In[21]:=

(list17 = GaussSolve[17, Λ]) // InputForm

Out[21]//InputForm=

{Λ[1, 16] -> -1, Λ[1, 8] -> Λ[1, 16]/2 + Sqrt[4 + Λ[1, 16]^2/4], Λ[3, 8] -> Λ[1, 16]/2 - Sqrt[4 + Λ[1, 16]^2/4], Λ[1, 4] -> Λ[1, 8]/2 + Sqrt[1 + Λ[1, 8]^2/4], Λ[9, 4] -> Λ[1, 8]/2 - Sqrt[1 + Λ[1, 8]^2/4], Λ[3, 4] -> Λ[3, 8]/2 + Sqrt[1 + Λ[3, 8]^2/4], Λ[10, 4] -> Λ[3, 8]/2 - Sqrt[1 + Λ[3, 8]^2/4], Λ[1, 2] -> Λ[1, 4]/2 + Sqrt[Λ[1, 4]^2/4 - Λ[3, 4]], Λ[13, 2] -> Λ[1, 4]/2 - Sqrt[Λ[1, 4]^2/4 - Λ[3, 4]], Λ[9, 2] -> Λ[9, 4]/2 - Sqrt[1 + Λ[1, 8] + Λ[3, 4] + Λ[9, 4]^2/4], Λ[15, 2] -> Λ[9, 4]/2 + Sqrt[1 + Λ[1, 8] + Λ[3, 4] + Λ[9, 4]^2/4], Λ[3, 2] -> Λ[3, 4]/2 + Sqrt[Λ[1, 4] - Λ[1, 8] + Λ[3, 4]^2/4], Λ[5, 2] -> Λ[3, 4]/2 - Sqrt[Λ[1, 4] - Λ[1, 8] + Λ[3, 4]^2/4], Λ[10, 2] -> Λ[10, 4]/2 - Sqrt[-Λ[1, 4] + Λ[10, 4]^2/4], Λ[11, 2] -> Λ[10, 4]/2 + Sqrt[-Λ[1, 4] + Λ[10, 4]^2/4]} 2p Here is the final expression for cosH ÅÅÅÅ ÅÅÅÅ L. 17 In[22]:=

(Λ[1, 2] //. list17)/2 // Expand // Factor

Symbolic Computations

318

Out[22]=

1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j j−1 + 17 + 2 H17 − 17 L + k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 − 2 H17 − 17 L + 34 H17 − 17 L − 8 2 H17 + 17 L N% z z z {

We numerically check this result. Because the result is 0, we cannot get any significant digit, and so the N::meprec message is issued. In[23]:= Out[23]=

(% - Cos[2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Next is the result for cosH2 µ 2 p ê 17L. (Because we have eliminated most of the L[j, 2]’s with even j, we make use of cosH2 jp ê pL = cosH2 H p - jL p ê pL and use L[15, 2].) In[24]:= Out[24]=

(Λ[15, 2] //. list17)/2 // Expand // Factor 1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j−1 + 17 − 2 H17 − 17 L + j k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 + 2 H17 − 17 L − 34 H17 − 17 L + 8 2 H17 + 17 L N% z z z {

In[25]:= Out[25]=

(% - Cos[2 2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Using the powerful function RootReduce we could also prove the last equality symbolically. In[26]:= Out[26]= In[27]:= Out[27]=

(%% // Simplify // RootReduce) (Together[TrigToExp[Cos[2 2Pi/17]]] // RootReduce) 0 Together[TrigToExp[Cos[2 2Pi/17]]] 1 − H−1L13ê17 H1 + H−1L8ê17 L 2

The last value of interest here is cosH8 µ 2 p ê 17L. In[28]:= Out[28]=

(Λ[9, 2] //. list17)/2 // Expand // Factor 1 16

i j è!!!!!!! "################################ è!!!!!!! # j j j j−1 + 17 − 2 H17 − 17 L − k y "################################ è!!!!!!! "################################ è!!!!!!! # "################################### è!!!!!!! è!!!!!!! # z z $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 2 J34 + 6 17 + 2 H17 − 17 L − 34 H17 − 17 L + 8 2 H17 + 17 L N% z z z {

Here is again a quick numerical check for the last result. In[29]:= Out[29]=

(% - Cos[8 2Pi/17]) // SetPrecision[#, 1000]& 0. × 10−1000

Now, as promised in the title of this subsection, we calculate cosH2 p ê 257L [1487], [638]. In[30]:=

list257 = GaussSolve[257, Λ];

We select only those parts that are explicitly needed for the evaluation of cosH2 p ê 257L. In[31]:=

Flatten[Function[{lhs, rhs}, (* until we have all needed Λ’s *)

1.10 Three Applications

319

FixedPoint[{#, Complement[Union[Cases[ (* what is in the rhs *) rhs[[#]]& /@ Flatten[Position[lhs, #]& /@ Last[#]], _Λ, {0, Infinity}]], Flatten[#]]}&, (* this we need of course *) {{Λ[1, 2]}}, SameTest -> (Last[#2] === {}&)]][ (* all lhs and rhs from list257 *) First /@ list257, Last /@ list257]]; solListPiD257 = (list257[[#]]& /@ Flatten[Function[lhs, Position[lhs, #]& /@ %][First /@ list257]]); 2p Here is a shortened version of this list of replacement rules necessary to express cosH ÅÅÅÅ ÅÅÅÅÅ L. 257 In[33]:=

solListPiD257 // Short[#, 6]&

Out[33]//Short=

1 1 Λ@1, 4D2 + Λ@1, 16D − Λ@1, 32D + Λ@136, 8D + Λ@197, 4D% , 9Λ@1, 2D → Λ@1, 4D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 2 1 1 Λ@1, 4D → Λ@1, 8D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Λ@1, 8D2 − Λ@3, 8D + Λ@131, 8D − Λ@131, 16D , 2 4 1 Λ@1, 16D → Λ@1, 32D + 2 1 $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% −Λ@1, 32D + Λ@1, 32D2 − Λ@1, 128D + 2 Λ@3, 32D − 2 Λ@3, 64D − Λ@9, 32D , 4 1 1 Λ@1, 32D → Λ@1, 64D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 5 + 2 Λ@1, 64D + Λ@1, 64D2 + Λ@1, 128D% , 28, 2 4 1 1 Λ@243, 32D → Λ@3, 64D + $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 − Λ@1, 128D + 2 Λ@3, 64D + Λ@3, 64D2% , 2 4 1 1 Λ@27, 64D → Λ@3, 128D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 16 + Λ@3, 128D2 , 2 4 1 1 Λ@81, 32D → Λ@1, 64D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 5 + 2 Λ@1, 64D + Λ@1, 64D2 + Λ@1, 128D% , 2 4 1 1 5 − 2 Λ@1, 64D + 3 Λ@1, 128D + Λ@9, 64D2% = Λ@215, 32D → Λ@9, 64D − $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 4 2 2p The value for cosH ÅÅÅÅ ÅÅÅÅÅ L is now easily obtained, but because of its size, we do not display it here. 257 In[34]:= Out[34]=

(cos2PiD257 = (Λ[1, 2] //. Dispatch[solListPiD257])/2) // ByteCount 1822680

It contains only square roots, but it contains a lot of them. In[35]:= Out[35]=

Cases[cos2PiD257, Power[_, 1/2], {0, Infinity}, Heads -> True] // Length 5133

If the reader wants to see all of them, the following code opens a new notebook with the typeset formula for the square root version of cosH2 p ê 257L.

Symbolic Computations

320 Make Input

NotebookPut[Notebook[{Cell[BoxData[ FormBox[MakeBoxes[#, TraditionalForm]&[cos2PiD257], TraditionalForm]], "Output", ShowCellBracket -> False, CellMargins -> {{0, 0}, {5, 5}}, PageWidth -> Infinity, FontColor -> GrayLevel[1], (* allow to see all square roots *) CellHorizontalScrolling -> True]}, WindowSize -> {Automatic, Fit}, Background -> RGBColor[0.31, 0., 0.51], ScrollingOptions -> {"HorizontalScrollRange" -> 500000}, WindowMargins -> {{0, 0}, {Automatic, 10}}, WindowElements -> {"HorizontalScrollBar"}, WindowFrameElements -> {"CloseBox"}]]

Here is a numerical check of the result. In[36]:= Out[36]=

(cos2PiD257 - Cos[2Pi/257]) // SetPrecision[#, 1000]& 0. × 10−996

One could now go on and calculate the following quite large calculation for the denominator 65537. Make Input

l65537 = GaussSolve[65537, L]

It will take around one day on a modern workstation. Here are the first lines of the result (of size 55 MB). {Λ[ 1, Λ[ 1, Λ[ 3, Λ[ 1, Λ[ 9, Λ[ 3, Λ[ 27, Λ[ 1,

65536] 32768] 32768] 16384] 16384] 16384] 16384] 8192]

-> -> -> -> -> -> -> ->

Λ[ 81,

8192] ->

Λ[

9,

8192] ->

Λ[729,

8192] ->

Λ[

8192] ->

3,

-1, Λ[1, 65536]/2 + Sqrt[16384 + Λ[1, 65536]^2/4], Λ[1, 65536]/2 - Sqrt[16384 + Λ[1, 65536]^2/4], Λ[1, 32768]/2 - Sqrt[ 4096 + Λ[1, 32768]^2/4], Λ[1, 32768]/2 + Sqrt[ 4096 + Λ[1, 32768]^2/4], Λ[3, 32768]/2 - Sqrt[ 4096 + Λ[3, 32768]^2/4], Λ[3, 32768]/2 + Sqrt[ 4096 + Λ[3, 32768]^2/4], Λ[1, 16384]/2 - Sqrt[ 1040 + 32 Λ[1, 16384] + Λ[1, 16384]^2/4 + 16 Λ[1, 32768]], Λ[1, 16384]/2 + Sqrt[1040 + 32 Λ[1, 16384] + Λ[1, 16384]^2/4 + 16 Λ[1, 32768]], Λ[9, 16384]/2 + Sqrt[1040 - 32 Λ[1, 16384] + 48 Λ[1, 32768] + Λ[9, 16384]^2/4], Λ[9, 16384]/2 - Sqrt[1040 - 32 Λ[1, 16384] + 48 Λ[1, 32768] + Λ[9, 16384]^2/4], Λ[3, 16384]/2 + Sqrt[1024 - 16 Λ[1, 32768] + 32 Λ[3, 16384] + Λ[3, 16384]^2/4]}

(Although the above implementation strictly follows Gauss’s original work, we could have used more efficient procedures. See [835].) Let us briefly discuss the numbers n for which the value cosH2 p ê nL can be expressed in square roots (or geometrically speaking, which n-gons can be constructed by ruler and compass [466], [825]?). 5

The above-mentioned number 22 - 1 = 4294967295 is not a prime number; the factors are all Fermat numbers F j with j = 0, …, 4. In[37]:=

FactorInteger[2^(2^5) - 1]

1.10 Three Applications Out[37]=

321

883, 1 120, Contours -> {0}, ContourShading -> False, ContourStyle -> {Hue[0.8 (∂ - 55)/6]}, DisplayFunction -> Identity, Frame -> False, PlotLabel -> #6 "-plane"]& @@@ (* the 3 coordinate plane data *)

Symbolic Computations

326 {{x, y, 0, x, y, "x,y"}, {x, 0, z, x, z, "x,z"}, {0, y, z, y, z, "y,z"}}, {∂, 18, 26, 1/3}]]]]] /@ {+1, -1} x,y-plane

x,z-plane

y,z-plane

x,y-plane

x,z-plane

y,z-plane

And here are two 3D plots of the resulting surfaces. By adding a constant to the polynomial, we squeeze the tube and by subtracting a constant, we thicken the tube. In[17]:=

Show[GraphicsArray[ (* show squeezed and fattened version *) Graphics3D[{EdgeForm[], Cases[ ContourPlot3D[Evaluate[treFoilKnotPoly[x, y, z] + #], {x, -5, 5}, {y, -5, 5}, {z, -2, 2}, Boxed -> False, MaxRecursion -> 1, DisplayFunction -> Identity, PlotPoints -> {{21, 6}, {21, 6}, {13, 6}}], _Polygon, Infinity] /. (* cut vertices off *) Polygon[l_] :> Polygon[Plus @@@ Partition[Append[l, l[[1]]], 2, 1]/2]}, Boxed -> False]& /@ (* two constant values *) {8 10^21, -10^23}]]

In a similar manner, one can implicitize many other surfaces, when their parametrization is in terms of trigonometric or hyperbolic functions, for instance, the Klein bottle from Section 2.2.1 of the Graphics volume [1736].

1.10 Three Applications

327

Here is their implicit form together with the code for making a picture of the resulting polynomial. (For the implicitization of a “realistic looking” Klein bottle, see [1734].) Make Input

Needs["Graphics`ContourPlot3D`"] Clear[x, y, z, r, ϕ] Show[Graphics3D[ (* convert back from polar coordinates to Cartesian coordinates *) Apply[{#1 Cos[#2], #1 Sin[#2], #3}&, Cases[ContourPlot3D[Evaluate[ 768 x^4 - 1024 x^5 - 128 x^6 + 512 x^7 - 80 x^8 - 64 x^9 + 16 x^10 + 144 x^2 y^2 - 768 x^3 y^2 - 136 x^4 y^2 + 896 x^5 y^2 - 183 x^6 y^2 176 x^7 y^2 + 52 x^8 y^2 + 400 y^4 + 256 x y^4 - 912 x^2 y^4 + 256 x^3 y^4 + 315 x^4 y^4 - 144 x^5 y^4 - 16 x^6 y^4 + 4 x^8 y^4 904 y^6 - 128 x y^6 + 859 x^2 y^6 - 16 x^3 y^6 - 200 x^4 y^6 + 16 x^6 y^6 + 441 y^8 + 16 x y^8 - 224 x^2 y^8 + 24 x^4 y^8 - 76 y^10 + 16 x^2 y^10 + 4 y^12 - 2784 x^3 y z + 4112 x^4 y z - 968 x^5 y z 836 x^6 y z + 416 x^7 y z - 48 x^8 y z + 1312 x y^3 z + 2976 x^2 y^3 z 5008 x^3 y^3 z - 12 x^4 y^3 z + 2016 x^5 y^3 z - 616 x^6 y^3 z 64 x^7 y^3 z + 32 x^8 y^3 z - 1136 y^5 z - 4040 x y^5 z + 2484 x^2 y^5 z + 2784 x^3 y^5 z - 1560 x^4 y^5 z - 192 x^5 y^5 z + 128 x^6 y^5 z + 1660 y^7 z + 1184 x y^7 z - 1464 x^2 y^7 z 192 x^3 y^7 z + 192 x^4 y^7 z - 472 y^9 z - 64 x y^9 z + 128 x^2 y^9 z + 32 y^11 z - 752 x^4 z^2 + 1808 x^5 z^2 - 1468 x^6 z^2 + 512 x^7 z^2 64 x^8 z^2 + 6280 x^2 y^2 z^2 - 5728 x^3 y^2 z^2 - 4066 x^4 y^2 z^2 + 5088 x^5 y^2 z^2 - 820 x^6 y^2 z^2 - 384 x^7 y^2 z^2 + 96 x^8 y^2 z^2 136 y^4 z^2 - 7536 x y^4 z^2 + 112 x^2 y^4 z^2 + 8640 x^3 y^4 z^2 2652 x^4 y^4 z^2 - 1152 x^5 y^4 z^2 + 400 x^6 y^4 z^2 + 2710 y^6 z^2 + 4064 x y^6 z^2 - 3100 x^2 y^6 z^2 - 1152 x^3 y^6 z^2 + 624 x^4 y^6 z^2 1204 y^8 z^2 - 384 x y^8 z^2 + 432 x^2 y^8 z^2 + 112 y^10 z^2 + 3896 x^3 y z^3 - 7108 x^4 y z^3 + 3072 x^5 y z^3 + 768 x^6 y z^3 768 x^7 y z^3 + 128 x^8 y z^3 - 3272 x y^3 z^3 - 4936 x^2 y^3 z^3 + 8704 x^3 y^3 z^3 - 80 x^4 y^3 z^3 - 2496 x^5 y^3 z^3 + 608 x^6 y^3 z^3 + 2172 y^5 z^3 + 5632 x y^5 z^3 - 2464 x^2 y^5 z^3 - 2688 x^3 y^5 z^3 + 1056 x^4 y^5 z^3 - 1616 y^7 z^3 - 960 x y^7 z^3 + 800 x^2 y^7 z^3 + 224 y^9 z^3 + 752 x^4 z^4 - 1792 x^5 z^4 + 1472 x^6 z^4 - 512 x^7 z^4 + 64 x^8 z^4 - 3031 x^2 y^2 z^4 + 1936 x^3 y^2 z^4 + 2700 x^4 y^2 z^4 2304 x^5 y^2 z^4 + 448 x^6 y^2 z^4 + 697 y^4 z^4 + 3728 x y^4 z^4 + 24 x^2 y^4 z^4 - 3072 x^3 y^4 z^4 + 984 x^4 y^4 z^4 - 1204 y^6 z^4 1280 x y^6 z^4 + 880 x^2 y^6 z^4 + 280 y^8 z^4 - 800 x^3 y z^5 + 1488 x^4 y z^5 - 768 x^5 y z^5 + 128 x^6 y z^5 + 992 x y^3 z^5 + 1016 x^2 y^3 z^5 - 1728 x^3 y^3 z^5 + 480 x^4 y^3 z^5 - 472 y^5 z^5 960 x y^5 z^5 + 576 x^2 y^5 z^5 + 224 y^7 z^5 + 16 x^4 z^6 + 388 x^2 y^2 z^6 - 384 x^3 y^2 z^6 + 96 x^4 y^2 z^6 - 76 y^4 z^6 384 x y^4 z^6 + 208 x^2 y^4 z^6 + 112 y^6 z^6 - 64 x y^3 z^7 + 32 x^2 y^3 z^7 + 32 y^5 z^7 + 4 y^4 z^8 /. (* to polar coordinates *) {x -> r Cos[ϕ], y -> r Sin[ϕ]}], {r, 0.6, 3.3}, {ϕ, 0, 2Pi}, {z, -1.3, 1.3}, PlotPoints -> {18, 40, 24}, MaxRecursion -> 0, DisplayFunction -> Identity], _Polygon, Infinity], {-2}]]]

Symbolic Computations

328

For more on the subject of implicitization of surfaces, see [1197], [351], and [1591] and references cited therein. We end with another implicit surface originating from a trefoil knot. Starting with a parametrized space curve cHtL, we construct the parametrized surface HcHt + a ê 2L + cHt + a ê 2LL ê a (the average of two symmetrically located points with respect to t). The following code calculates the implicit form of this surface for the trefoil knot. We use the function Resultant to eliminate the parametrization variables. For brevity, we express the resulting surface in cylindrical coordinates. Make Input

(* a function to convert from trigonometric to polynomial variables *) [expr_] := Numerator[Together[TrigToExp[expr] /. {t -> Log[T]/I, α -> Log[Α]/I}]] (* make algebraic form of average *) cAv = ((c /. t -> t + α) + (c /. t -> t - α))/2 cAvAlg = [{x, y, z} - cAv]/{I, 1, I} (* eliminate parametrization variables *) res1 = Resultant[cAvAlg[[1]], cAvAlg[[2]], Α] // Factor res2 = Resultant[cAvAlg[[1]], cAvAlg[[3]], Α] // Factor res3 = Resultant[res1[[-1]] /. T -> Sqrt[T2], res2[[-1, 1]] /. T -> Sqrt[T2], T2, Method -> SylvesterMatrix]; (* express implicit form of surface in cylindrical coordinates *) cAvImpl = Factor[res3][[3, 1]] /. {x -> r Cos[ϕ], y -> r Sin[ϕ]} // FullSimplify In[18]:=

cAvImpl = r^6 (2 + r) (r - 2) (1 - 44 r^2 + 64 r^4) + 24 r^4 (-12 - 3 r^2 + 80 r^4) z^2 128 r^2 (-123 + 36 r^2 + 64 r^4) z^4 - 8192 z^6 + r^3 (2 z (993 r^4 - 80 r^6 - 4144 z^2 + 8192 z^4 + r^2 (84 - 5760 z^2)) Cos[3 ϕ] + r^3 (-4 + 177 r^2 - 300 r^4 + 64 r^6 32 (-109 + 48 r^2) z^2) Cos[6 ϕ] - 64 r^6 z Cos[9 ϕ] 16 (3 r^6 (-4 + r^2) + 2 r^2 (69 - 114 r^2 + 64 r^4) z^2 256 (-2 + 3 r^2) z^4) Sin[3 ϕ] + 4 r^3 z (157 - 174 r^2 + 512 z^2) Sin[6 ϕ] - 48 r^6 (-4 + r^2) Sin[9 ϕ]);

In[19]:=

Needs["Graphics`ContourPlot3D`"]

In[20]:=

(* a function for making a hole in a polygon *) makeHole[Polygon[l_], f_] := Module[{mp = Plus @@ l/Length[l], , }, = Append[l, First[l]]; = (mp + f (# - mp))& /@ ; {(* new polygons *) MapThread[Polygon[Join[#1, Reverse[#2]]]&, Partition[#, 2, 1]& /@ {, }]}]

1.10 Three Applications

329

The next pair of graphics shows the parametric and the implicit version of this surface. We make use of the threefold rotational symmetry of the surface in the generation of the implicit plot. In[22]:=

Show[GraphicsArray[ Block[{$DisplayFunction = Identity, polysCart, = {{-1, Sqrt[3], 0}, {-Sqrt[3], -1, 0}, {0, 0, 2}}/2.}, {(* the parametrized 3D plot *) ParametricPlot3D[Evaluate[Append[ ((c /. t -> t + α) + (c /. t -> t - α))/2, {EdgeForm[], SurfaceColor[#, #, 3]&[Hue[(t + Pi)/(2Pi)]]}]], {t, -Pi, Pi}, {α, 0, Pi/2}, Axes -> False, PlotPoints -> {64, 32}, BoxRatios -> {1, 1, 0.6}, PlotRange -> {{-3, 3}, {-3, 3}, {-1, 1}}] /. p_Polygon :> makeHole[p, 0.76], (* the implicit 3D plot; use symmetry *) polysCart = Apply[{#1 Cos[#2], #1 Sin[#2], #3}&, Cases[(* contour plot in cylindrical coordinates *) ContourPlot3D[cAvImpl, {r, 0, 3}, {ϕ, -Pi/3, Pi/3}, {z, -1, 1}, PlotPoints -> {28, 24, 32}, MaxRecursion -> 0], _Polygon, Infinity], {-2}]; Graphics3D[{EdgeForm[], (* generate all three parts of the surface *) {polysCart, Map[ .#&, polysCart, {-2}], Map[..#&, polysCart, {-2}]}} /. p_Polygon :> {SurfaceColor[#, #, 2.4]&[ Hue[Sqrt[#.#]&[0.24 Plus @@ p[[1]]/Length[p[[1]]]]]], makeHole[p, 0.72]}, BoxRatios -> {1, 1, 0.6}]}]]]

For the volume of such tubes, see [309].

Symbolic Computations

330

Exercises 1.L2 The 2 in the Factorization of xi - 1, Heron’s Formula, Volume of Tetrahedron, Circles of Apollonius, Circle ODE, Modular Transformations, Two-Point Taylor Expansion, Quotential Derivatives a) Program a function which finds all i for which numbers other than 0 or ≤1 appear as coefficients of x j

(0 § j § i) in the factorized decomposition of x j - 1 (1 § i § 500) [586]. Do not use temporary variables (no Block or Module constructions). b) Let P1 , P2 , and P3 be three points in the plane. Starting from the formula A = †HP2 - P1 LäHP3 - P1 L§ ê 2 for the area A of the triangle formed by P1 , P2 , and P3 , derive a formula for the area which only contains the lengths of the three sides of the triangle (Heron’s area formula). c) Let P1 , P2 , P3 , and P4 be four points in 3 . Starting from the formula V = HareaOfOneFace height ê 3L for the

volume V of the tetrahedron formed by P1 , P2 , P3 , and P4 , derive a formula for the volume which only contains the lengths of the six edges of the tetrahedron [841]. d) Given are three circles in the plane that touch each other pairwise. In the “middle” between these three circles

now put a fourth circle that touches each of the three others. Calculate the radius of this circle as an explicit function of the radius of the three other circles (see [1630], [416], [155], [1680], [839], and [695]). e) Calculate the differential equation that governs all circles in the x,y-plane (from I.I.5.6 of [896]). f) Show that the three equations

u4 - vHuL4 - 2 u vHuL H1 - u2 vHuL2 L = 0 u6 - vHuL6 + 5 vHuL2 u2 Hu2 - vHuL2 L - 4 u vHuL H1 - u4 vHuL4 L = 0 H1 - u8 L H1 - vHuL8 L - H1 - u vHuLL8 = 0 are solutions of the (so-called modular) differential equation [1438] iji 1 + k 2 y2 1 + l 2 £ 2 yzz £ 2 jjjj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ zz - J ÅÅÅÅÅÅÅÅ ÅÅÅÅÅ3ÅÅÅÅ N l HkL zz l HkL + 3 l≥ HkL2 - 2 l£ HkL l£££ HkL = 0. j k - k3 l l k { { k 1

1

The change of variables between 8k, l< and 8u, v< is given by k ÅÅ4ÅÅ = u and l ÅÅ4ÅÅ = v. g) The function f HxL

ÅÅÅÅÅÅÅÅÅÅ dx i yz 1-hHxL f HxL eŸ gHxL dx+Ÿ ÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅ dx j jjc + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 1-hHxL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dxzzzz wHxL = c1 e-Ÿ ÅÅÅÅÅÅÅÅ jj 2 ‡ 1 - hHxL k {

fulfills a linear second-order differential equation [700]. Derive this differential equation. h) Prove the following two identities (from [1838] and [897]):

6p 10 p 1 tanJ ÅÅÅÅÅÅ tan-1 H4LN = 2 JcosJ ÅÅÅÅÅÅÅÅÅÅ N + cosJ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ NN 17 17 4 è!!!! 7 i i1 p 1 i 1 yzyz è!!!! ij 1 i 1 yzyzyz cosJ ÅÅÅÅÅÅ N = ÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅ jjjcosjjj ÅÅÅÅÅÅ cos-1 jjj ÅÅÅÅÅÅÅÅè!!!! ÅÅÅÅÅÅÅÅÅÅ zzzz + 3 sinjj ÅÅÅÅÅÅ cos-1 jjj ÅÅÅÅÅÅÅÅè!!!! ÅÅÅÅÅÅÅÅÅÅ zzzzzz 6 k k3 7 6 3 k k 2 7 {{ k 2 7 {{{

Exercises

331

i) Given a rectangular box of size w1 µ h1 µ d1 . Is it possible to put a second box of size w2 µ h2 µ d2 in the first

one such that 1 ê w2 + 1 ê h2 + 1 ê d2 is equal to, less than, or greater than w1 + h1 + d1 ?

j) What geometric object is described by the following three inequalities?

†f x§ + †y§ < 1 ﬂ †f y§ + †z§ < 1 ﬂ †x§ + †f z§ < 1 (f is the Golden ratio.) k) Check the following integral identity [1062]: 2

2 ¶ x x x iè!!!! ¶ f HtL y f HtL y2 1 i i1 ÅÅÅÅÅ dtz dx = ‡ j ÅÅÅÅÅ ‡ f HtL dtyz dx + jjj x ‡ ÅÅÅÅÅÅÅÅÅÅ ÅÅÅ dt + ÅÅÅÅÅÅÅÅ ÅÅÅÅ!Å ‡ f HtL dtzzz . ‡ j‡ ÅÅÅÅÅÅÅÅ è!!! t t { { 0 k x 0 k x 0 x 0 x k { x

l) Check the following identity [1900] for small integer n and r: n

pHak L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Hx - ak Lr+1 ¤nl = 1 Hak - al L k=1

l∫k

r ij y n n yzz i n jj HrL r jj p HxL + „ H-1L j ijj yzz pHr- jL HxL Ajjj‚ Hx - ai L-1 , ‚ Hx - ai L-2 , …, ‚ Hx - ai L- j zzzzzz zzz j jj kj{ j i=1 i=1 {z k i=1 j=1 k {

H-1Lr ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ n r! ¤k=1 Hx - ak L

Here pHzL is a polynomial of degree equal to or less than n; the ak are arbitrary complex numbers and the multivariate polynomials AHt1 , …, t j L are defined through AHt1 , t2 , …, t j L =

‚ k1 ,k2 ,…,k j k1 +2 k2 +∫+ j k j = j

t j kj j! t1 k1 t2 k2 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ÅÅÅÅÅÅÅÅ N J ÅÅÅÅÅÅÅÅ N ∫ J ÅÅÅÅÅÅÅÅ N . j 2 k1 ! k 2 ! ∫ k j ! 1

m) Given five points in 2 , find all relations between the oriented areas (calculated, say, with the determinantal

formula from Subsection 1.9.2) of the nine triangles that one can form using the points. n) Is it possible to position six points P1 , …, P6 in the plane in such a way that they have the following integer distances between them [814]?

P1 P2 P3 P4 P5 P6

P1 0 87 158 170 127 68

P2 87 0 85 127 136 131

P3 158 85 0 68 131 174

P4 170 127 68 0 87 158

P5 127 136 131 87 0 85

P6 68 131 174 158 85 0

o) Show that there are no 3 µ 3 Hadamard matrices [78], [1866], [681]. (An n µ n Hadamard matrix Hn is a

matrix with elements ≤1 that fulfills Hn .HTn = n 1n .)

p) The two-point Taylor series of order for a function f HzL analytic in z1 , z2 is defined through [1158]

Symbolic Computations

332

o

f HzL = ‚ Hn Hz1 , z2 L Hz - z1 L + n Hz2 , z1 L Hz - z2 L L Hz - z1 Ln Hz - z2 Ln + Ro+1 Hz, z1 , z2 L. n=0

Here Ro+1 Hz, z1 , z2 L is the remainder term and the coefficients n Hz1 , z2 L are given as f Hz2 L 0 Hz1 , z2 L = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z2 - z1 n

Hk + n - 1L! H-1Lk k f Hn-kL Hz1 L + H-1Ln+1 n f Hn-kL Hz2 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . n Hz1 , z2 L = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ k ! n! Hn - kL! Hz1 - z2 Lk+n+1 k=0

H20L Calculate the two-point Taylor series 0,2 p @sinD HzL of order 20 for f HzL = sinHzL, z1 = 0, and z2 = 2 p. Find maxz1 §z§z2 † f HzL - 20 HzL§.

q) While for a smooth function yHxL, the relation d yHxL ê dx = 1 ê HdxHyL ê d yL holds; the generalization

d n yHxL ê dxn = 1 ê Hd n xHyL ê d yn L for n ¥ 2 in general does not hold. Find functions yHxL such that the generalization holds for n = 2 [245]. Can you find one for n = 3?

r) Define a function (similar to the built-in function D) that implements the quotential derivatives n . ê xn of a function f HxL defined recursively by [1297]

i n-1 f HxL yz n f HxL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅ = ÅÅÅÅÅÅ Å jj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅ z n x k xn-1 { x with the first quotential derivatives . ê x defined as 1 f HxL f HxL f Hq xL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅ = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = lim lnJ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N. qØ1 x f HxL x1 Show that f HyHxLL ê x = f HyHxLL ê y yHxL ê x. Define the multivariate quotential derivative recursively starting with the rightmost ones, meaning 2 f Hx, yL i f Hx, yL y ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = ÅÅÅÅÅÅÅÅ j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ z. x y x k y { Show by explicit calculation that f Hx, yL 2 f Hx, yL f Hx, yL 2 f Hx, yL ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅ . y x y x y x ¶ s) Conjecture the value of the following sum: ⁄k=1 H¤kj=1 a j-1 ê Hx + a j LL. Here a0 = 1, ak œ , ak ∫ 0, x ∫ 0

[1648]. 2.L1 Horner’s Form, Bernoulli Polynomials, Squared Zeros, Polynomialized Radicals, Zeros of Icosahedral Equation, Iterated Exponentials, Matrix Sign Function, Appell–Nielsen Polynomials a) Given a polynomial pHxL, rewrite it in Horner’s form.

Exercises

333 x+1

b) Bernoulli polynomials Bn HxL are uniquely characterized by the property Ÿx

Bn HtL dt = xn . Use this method to implement the calculation of Bernoulli polynomials Bn HxL. Try to use only built-in variables (with the exception of x and n, of course).

c) Given the polynomial x4 + a3 x3 + a2 x2 + a1 x + a0 with zeros x1 , x2 , x3 , and x4 , calculate the coefficients (as

functions of a0 , a1 , a2 , and a3 ) of a polynomial that has the zeros x21 , x22 , x23 , and x24 . d) Express the real zeros of

-1 + x + 2

3 5 è!!!!!!!!!!!!!! è!!!!!!!!!!!!!! è!!!!!!!!!!!!!! 1 + x2 - 3 1 + x3 + 5 1 + x5 - 4 = 0

as the zeros of a polynomial. e) Show that all nontrivial solutions of x10 + 11 x5 - 1 = 0 stay invariant under the following 60 substitutions:

xöei x ei xö- ÅÅÅÅÅÅ x e j Hei + x He4 + eLL xö ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ x - ei He4 + eL e j Hx - ei He4 + eLL xö- ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ei + x He4 + eL f) Iterated exponentials expHc1 z expHc2 z expHc3 z ∫LLL can be used to approximate functions [966], [1881],

[1882], [47]. Find values for c1 , c2 , …, c10 such that expHc1 z expHc2 z expHc3 z ∫LLL approximates the function 1 + lnH1 + zL around z = 0 as best as possible.

g) Motivate symbolically the result of the following input.

m = Table[1/(i + j + 1), {i, 5}, {j, 5}]; FixedPoint[(# + Inverse[#])/2&, N[m], 100] h) Efficiently calculate the list of coefficients of the polynomial 500

Hx4 + x3 + x2 + x + 1L

Hx2 + x + 1L

1000

Hx + 1L2000

without making use of any polynomial function like Expand, Coefficient, CoefficientList, …. i) What is the minimal distance between the roots of z3 + c2 z + 1 = 0 for real c? j) Let f HkL HzL = f H f Hk-1L HzLL, f H1L HzL = f HzL = z2 - c. Then the following remarkable identity holds [1135], [119]: ¶ ij ¶ y H ÅÅÅÅ2z Lk zk zzz jj expjjj- „ ck ÅÅÅÅÅÅÅÅ zzz = 1 + „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ k z j ¤kj=1 f H jL H0L k k=1 k=1 {

where

Symbolic Computations

334

2k

1 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . ck = „ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ HkL £ f Hz j L H f HkL £ Hz j L - 1L j=1

The sum appearing in the definition of the ck extends over all 2k roots of f HkL HzL = z. Expand both sides of the identity in a series around z = 0 and check the equality of the terms up to order z4 explicitly. k) Write a one-liner that, for a given integer m, quickly calculates the matrix of values e

x ∑d I ÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅ M sinHxL ce,d = lim ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅ d xØ0 ∑x

for 1 § e § m, 0 § d § m. l) The Appell–Nielsen polynomials pn HzL are defined through the recursion p£n HzL = pn-1 HzL, the symmetry

constraint pn HzL = H-1Ln pn H-z - 1L, and the initial condition p0 HzL = 1 [324], [1341]. Write a one-liner that calculates the first n Appell–Nielsen polynomials. Visualize the polynomials.

m) Write a one-liner that uses Integrate (instead of the typically used D) to derive the first n terms of the

Taylor expansion of a function f around x that is based on the following identity [729], [549] n-1

h h1 hn-1 hk f Hx + hL = „ ÅÅÅÅÅÅÅÅÅ f HkL HxL + ‡ ‡ ∫ ‡ f HnL Hx + hn L dhn … dh2 dh1 . k! 0 0 0 k=0

n) A generalization of the classical Taylor expansion of a function f HxL around a point x0 into functions jk HxL,

k = 0, 1, …, n (where the jk HxL might be other functions that the monomials xk ) can be written as [1839] ƒƒi 0 j1 HxL j0 HxL ƒƒj ƒƒjj j0 Hx0 L j1 Hx0 L ƒƒƒjjj f Hx0 L ƒƒjj £ 1 j£0 Hx0 L j£1 Hx0 L f HxL º - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ †ƒjjjj f Hx0 L W Hj0 Hx0 L, …, jn Hx0 LL ƒƒjj ƒƒjj ª ª ª ƒƒjj ƒƒƒj f HnL Hx L jHnL Hx L jHnL Hx L 0 0 0 ƒk 0 1

yzƒƒƒƒ zzƒƒ zzƒƒ zzzƒƒƒ zz§. zzƒ zzƒƒ zzƒƒ ∏ ª zzƒƒ zƒ HnL ∫ jn Hx0 L {ƒƒƒ

∫ jn HxL ∫ jn Hx0 L ∫ j£n Hx0 L

Here the WHj0 HxL, …, jn HxLL is the Wronskian of the j0 HxL, …, jn HxL and it is assumed not to vanish at x0 . Implement this approximation and approximate f HxL = cosHxL around x0 = 0 through expHxL, expHx ê 2L, …, expHx ê mL. Can this formula be used for m = 25? o) Show that the function [1222] 2

HHHzL + 2 z £ HzLL2 - 4 z £ HzL2 L wHzL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ £ 8 HHzL HzL HHzL + 2 Hz - 1L £ HzLL HHzL + 2 z £ HzLLL where HzL = c1 1 HzL + c 2 2 HzL and f1,2 HzL are solutions of H1 - zL z ££ HzL + H1 - 2 zL £ HzL - f HzL ê 4 = 0 fulfills the following special case of the Painlevé VI equation:

Exercises

335

1 1 1 1 1 1 1 w££ HzL = ÅÅÅÅÅ J ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N w£ HzL2 - J ÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ + ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ N w£ HzL + z 2 wHzL wHzL - 1 wHzL - z z-1 wHzL - z wHzL HwHzL - 1L HwHzL - zL ij zHz - 1L y + 4zz. ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ j ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 2 Hz - 1L2 z2 k HwHzL - zL2 { 3.L1 Nested Integration, Derivative[-n], PowerFactor, Rational Painlevé II Solutions a) Given that the following definition is plugged into Mathematica, what will be the result of f[2][x]?

f[n_][x_] := Integrate[f[n - 1][x - z], {z, 0, x}] f[0][x_] = Exp[-x]; Consider the evaluation process. How would one change the first two inputs to get the “correct” result as if from Nest[Integrate[# /. {x -> x - z}, {z, 0, x}]&, Exp[-x], 2] b) Find two (univariate) functions f and g, such that Integrate[f, x] + Integrate[g, x] gives a different

result than does Integrate[f + g, x]. Find a (univariate) functions f and integration limits xl , xm , and xu , such that Integrate[f, {x, xl , xu }] gives a different result than does Integrate[f, {x, xl , xm }] + Integrate[f, {x, xm , xu }]. c) What does the following code do?

Derivative[i_Integer?Negative][f_] := With[{pI = Integrate[f[C], C]}, derivative[i + 1][Function[pI] /. C -> #] /; FreeQ[pI, Integrate, {0, Infinity}]] Predict the results of Derivative[+4][Exp[1 #]&] and Derivative[-4][Exp[1 #]&]. d) Is it possible to find a function f Hx, yL such that D[Integrate[f(x, y), x], y] is different from

Integrate[D[f(x, y), y], x]? e) Write a function PowerFactor that does the “reverse” of the function PowerExpand. It should convert products of radicals into one radical with the base having integer powers. It should also convert sums of logarithms into one logarithm and s logHaL into logHas L. f) The rational solutions of w≥ HzL = 2 wHzL3 - 4 z wHzL + 4 k, k œ + (a special Painlevé II equation) can be

expressed in the following way [967], [910], [1215], [952]: ¶ qk HzL xk = expHz x + x3 ê 3L (for k < 0, let Let the polynomials qk HzL be defined by the generating function ⁄k=0 qk HzL = 0). Let the determinants sk HzL be defined by matrices Hai j L0§i, j§k-1 with ai j = qk+i-2 j HzL (for k = 0, let s0 HzL = 1). Then, wk HzL is given as wn HzL = ∑logHsk+1 HzL ê sk HzLL ê ∑ z. Calculate the first few wk HzL explicitly. 4.L1 Differential Equations for the Product, Quotient of Solutions of Linear Second-Order Differential Equations Let y1 HzL and y2 HzL be two linear independent solutions of y££ HzL + f HzL y£ HzL + gHzL yHzL = 0 The product uHzL = y1 HzL y2 HzL obeys a linear third-order differential equation u£££ HzL + a p @ f HzL, gHzLD u££ HzL + b p @ f HzL, gHzLD u£ HzL + c p @ f HzL, gHzLD uHzL = 0

Symbolic Computations

336

The quotient wHzL = y1 HzL ê y2 HzL obeys (Schwarz’s differential operator; see, for instance, [847] and [1906]) w£££ HzL w£ HzL + aq @ f HzL, gHzLD w££ HzL2 + bq @ f HzL, gHzLD w£ HzL2 = 0 Calculate a p , b p , c p and aq , bq . (For analogous equations for the solutions of higher-order differential equations, see [1024].) 5.L1 Singular Points of ODEs, Integral Equation a) First-order ordinary differential equations of the form y£ HxL = PHx, yL ê QHx, yL possess singular points 8x*i , y*i < [215], [1403], [1746], [1105], [467], [949]. These are defined by PHx*i , y*i L = QHx*i , y*i L = 0. It is possible to trace the typical form of the solution curves in the neighborhood of a singular point by solving y£ HxL = Ha x + b yL ê Hc x + d yL. Some typical forms include the following examples:

2 yHxL yHxL + x yHxL y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ , y£ HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ x x x yHxL £ a vortex point y HxL = - ÅÅÅÅÅÅÅÅÅÅÅÅÅ x yHxL - x £ an eddy point y HxL = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . yHxL + x

a knot point

Investigate which of the given differential equations can be solved analytically by Mathematica, and plot the behavior of the solution curves in a neighborhood of the singular point 80, 0 1) [1594]: ¶ g

lim ‡

gØ0

1

1 z -1 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅ H1 + x Hzg - 1LL- ÅÅgÅÅ -1 dz. g z2

Let k Hx, a0 + a1 x + ∫ + an xn L stand for the root that is represented by the Root-object Root@a0 + a1 # + ∫ + an #n , k]. Calculate the following integrals symbolically (express the results using Root-objects): d)

2 6 ‡ lnHx L 1 Hx, -x - x x + x L dx

2 7 2 7 2 7 ‡ expH3 Hx, -x - x + x LL lnH3 Hx, -x - x + x LL 3 Hx, -x - x + x L dx

Symbolic Computations

354

2 Hx, -x - x + x3 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ % dx ‡ $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 3 Hx, -x - x + x3 L 2 Hx, -x - x x + x3 L 3 ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ dx ‡ $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 3 Hx, -x - x x + x3 L 1 2 Hx, -x - x + x3 L ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å dx ‡ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ 3 0 2 Hx, x - x + x L - 1

‡

¶i 1

1 1 zy 1 jj ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅ ÅÅÅ zz dx ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ5ÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅ j 5 è!!!! Hx, x + x + x L 5 x 1 x { k

e) Under which conditions on a1 , a2 , a3 can the three roots of the cubic x3 + a1 x2 + a2 x + a3 = 0 be interpreted

as the side length of a nondegenerate triangle [1338]? Visualize the volume in a1 ,a2 ,a3 -space for which this happens. For random a1 , a2 , a3 from the interval @-1, 1D, what is the probability that the roots are the side length of a nondegenerate triangle? 23.L2 Riemann Surface of Cubic Visualize the Riemann surface of xHaL, where x = xHaL is implicitly given by x3 + x2 + a x - 1 ê 2 = 0. Do not use ContourPlot3D. 24.L2 Celestial Mechanics, Lagrange Points a) For the so-called Kepler equation (see [1332], [1699], [1521], [781], [1236], [303], [343], and [388]) L = M + e sinHLL find a series solution for small e in the form n

ijn or n-1 yz L º M + „ jjj ‚ aij e j zzzz sinHi M L j j=i { i=1 k with n around 10. b) Find a short time-series solution (power series in t up to order 10, for example) for the equation of motion for

a body in a spherical symmetric gravitational field (to avoid unnecessary constants, appropriate units are chosen) rHtL r££ HtL = ÅÅÅÅÅÅÅÅÅÅÅÅ3ÅÅ rHtL with the initial conditions rH0L = r0 , r£ H0L = v0 . Here, rHtL is the time-dependent position vector of the body and rHtL = †rHtL§. To shorten the result, introduce the abbreviations v0 .v0 r0 .v0 1 s = ÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅ Å w = ÅÅÅÅÅÅÅÅÅ2ÅÅÅÅÅ ÅÅ u = ÅÅÅÅ3ÅÅÅÅ r0 r0 r0 (Do not use explicit lists as vectors, first because this is explicitly dependent on the dimension, and second because it slows down the calculation considerably. It is better to implement an abstract vector type for rHtL and define appropriate rules for it.)

Exercises

355

c) The Lagrange points 8xHmL, yHmL< of the restricted three-body problem are the solutions of the following system of equations [430], [1421], [1755], [801], [1433], [745], [137]:

∑V Hx, yL ∑ VHx, yL - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ Å = - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ = 0. ∑x ∑y The potential VHx, yL is given by the following expression: m 1-m 1 V Hx, yL = - ÅÅÅÅÅÅ Hx2 + y2 L - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ - ÅÅÅÅÅÅÅÅ r1 r2 2 "############################## 2 2 r1 = Hx - x1 L + y r2 =

"############################## Hx - x2 L2 + y2

x1 = -m x2 = 1 - m 1 ÅÅ , calculate all real Calculate explicit symbolic solutions for the Lagrange points. For the parameter value m = ÅÅÅÅ 10 solutions (do not do this by a direct call to Solve).

25.L2 Algebraic Lissajous Curves, Light Ray Reflection Inside a Closed Region Derive an implicit representation f Hx, yL of the Lissajous curves 8xHtL, yHtL< = 8cosHtL, sinH2 tL EliminationOrder]] Out[6]= 8H−1 + h@xDL Hf@xD g@xD w@xD − w@xD f @xD − f@xD w @xD + g@xD w @xD − g@xD h@xD w @xD + h @xD w @xD − w @xD + h@xD w @xDL

0 /. d[c] -> d] Out[8]= 12 c H2 c2 + d2 L H2187 c2 + 648 c8 + 48 c14 + 486 c6 d2 + 72 c12 d2 −

243 c4 d4 − 36 c10 d4 − 189 c2 d6 − 82 c8 d6 − 27 d8 − 4 c6 d8 + 21 c4 d10 + 2 c2 d12 − d14 L

Using the numerical solution of the system allows us to extract the corresponding symbolic solution. In[9]:= FindRoot[Evaluate[{gb == 0, gbD == 0}], {c, 1}, {d, 1.5}] Out[9]= 8c → 1.09112, d → 1.54308< In[10]:= {#, N[#]}& @ Select[{c, d} /. Solve[{gb == 0, gbD == 0}, {c, d}],

# == {1.09112`5, 1.54308`5}&] è!!!!! è!!!!! 3 3 Out[10]= 999 , ==, 881.09112, 1.54308 False]

Symbolic Computations

398

150 100 50 0 20

100 80 60 40

40

60

20 80

100

0

For a semi-closed form of the ce,d , see [1336]. l) It is straightforward to write a one-liner that calculates the first n Appell–Nielsen polynomials. To avoid an explicit counter n, we operate recursively on a two-element list 8≤1, poly -(z + 1)), C][[1]] /. C -> 0][ *) Integrate[p, z] + C]}] @@ #&, {1, 1}, n]

The next plot shows the logarithm of the absolute value of the first 36 Appell–Nielsen polynomials. The steep vertical cusps are the zeros of the polynomials. While the majority of the roots seem to have identical numerical values, most of them are actually slightly different. In[2]:= With[{o = 35},

Plot[Evaluate[Log @ Abs @ AppellNielsenPolynomialList[o, z]], {z, -6, 6}, PlotRange -> {-45, 5}, PlotPoints -> 200, Frame -> True, Axes -> False, PlotStyle -> Table[{Thickness[0.002], Hue[0.8 k/o]}, {k, o + 1}]]] 0 -10 -20 -30 -40 -6

-4

-2

0

2

4

6

HnL

m) Evaluating the multiple integral of f Hx + hn L for a given integer n yields (up to sign and the term f Hx + hL) the first n - 1 terms of the Taylor series. Because the integrand at each integration stage is a complete differential, the n iterated integrations can all be carried out completely. The following function TaylorTerms implements the multiple integral (we use the same integration variable h for each integration). In[1]:= TaylorTerms[n_, {f_, x_, h_}] := Expand[f[x + h] -

Nest[Integrate[#, {h, 0, h}]&, Derivative[n][f][x + h], n]]

And here are the first ten Taylor terms obtained from integration for a not explicitly specified function f . In[2]:= TaylorTerms[10, {f, x, h}]

1 2

1 6

1 24

Out[2]= f@xD + h f @xD + h2 f @xD + h3 fH3L @xD + h4 fH4L @xD +

h7

fH7L @xD

1 h8 fH8L @xD h9 fH9L @xD 1 h5 fH5L @xD + h6 fH6L @xD + + + 120 5040 40320 362880 720

The next input expands f HxL = cosHxL around x = p. In[3]:= TaylorTerms[10, {Cos, Pi, h}]

h2 2

h4 24

h6 720

h8 40320

Out[3]= −1 + − + −

For direct integral analogues of the Taylor formula, see [1358].

Solutions

399

n) Here is the above formula implemented as the function GeneralizedTaylorExpansion. In[1]:= GeneralizedTaylorExpansion[f_, ϕs_, x_, x0_] :=

Module[{n = Length[ϕs], W, Φ}, (* the Wronskian *) W = Det[Table[D[ϕs, {x, k}], {k, 0, n - 1}]] /. x -> x0; (* the second determinant *) Φ = Det[Join[{Prepend[ϕs, 0]}, Table[D[Prepend[ϕs, f], {x, k}], {k, 0, n - 1}] /. x -> x0]]; (* the approximation *) -Expand[Φ/W]]

For small m (say m d 10), it works fine. In[2]:= ExpBasis[m_] := Table[Exp[x/k], {k, m}]

GeneralizedTaylorExpansion[Cos[x], ExpBasis[8], x, 0] // Timing 173539328 xê8 631657481 xê7 Out[3]= 90.27 Second, − + − 10614240 xê6 + 63 72 xê5 xê4 435546875 14643200 1431027 xê3 47840 xê2 5135 x − + − + = 72 9 8 9 504

Due to the calculation of a determinant with symbolic entries, this form is not suited for larger m. The Taylor-like approximation carried out by GeneralizedTaylorExpansion cancels the leading monomial terms in a classical Taylor n ck jk HxL around x = x0 . We can so reformulate the problem to the determination of the ck . expansion of f HxL - ⁄k=0 Assuming no additional degeneracy and the presence of all monomials, the following function GeneralizedTaylor Expansion1 solves for the ck and returns the resulting sum. In[4]:= GeneralizedTaylorExpansion1[f_, ϕs_, x_, x0_] :=

Module[{n = Length[ϕs], vars = Table[C[k], {k, Length[ϕs]}], sol}, sol = Solve[# == 0& /@ CoefficientList[Series[f - vars.ϕs, {x, x0, n - 1}], x], vars]; vars.ϕs /. sol[[1]]]

If all the ck are numbers, the resulting linear system can be solved much more quickly then the above symbolic determinant. In[5]:= GeneralizedTaylorExpansion1[Cos[x], ExpBasis[8], x, 0] // Timing

173539328 xê8 631657481 xê7 63 72 435546875 xê5 14643200 xê4 1431027 xê3 47840 xê2 5135 x − + − + = 72 9 8 9 504

Out[5]= 90.05 Second, − + − 10614240 xê6 +

Now, we can also deal with m = 25. In[6]:= Approx[x_] = GeneralizedTaylorExpansion1[Cos[x], ExpBasis[25], x, 0];

Some of the resulting numbers have up to 50 digits. In[7]:= {#, N[#]}& @ Max[Abs[{Numerator[#], Denominator[#]}& /@

Cases[Approx[x], _Integer | _Rational, {0, Infinity}]]] Out[7]= 823081981002827323123938185744918882846832275390625, 2.3082× 1049

{Hue[0]}], ListPlot[dataApprox // N, PlotRange -> {-2, 2}, PlotJoined -> True]}], (* logarithm of absolute error *) ListPlot[N[{#1, Log[10, Abs[#2 - Cos[#1]]]}]& @@@ dataApprox, PlotRange -> All, PlotJoined -> True]}]]]

Symbolic Computations

400 1

-15

-10

-5

5 5

10

15

-1

-15

-10

-5

5

10

15

-5 -2 -3 -4

-10 -15

o) PainlevéODEVI is the differential operator for the special Painlevé VI equation under consideration. In[1]:= PainlevéODEVI[w_, z_] := D[w, z, z] -

(1/2 (1/w + 1/(w - 1) + 1/(w - z)) D[w, z]^2 (1/z + 1/(z - 1) + 1/(w - z)) D[w, z] + 1/2 w (w - 1)(w - z)/(z^2 (z - 1)^2)(4 + z (z - 1)/(w - z)^2))

The function yChazy is the proposed solution. In[2]:= yChazy[z_] = With[{ = (c1 [1][#] + c2 [2][#])&},

1/8 (([z] + 2z '[z])^2 - 4z '[z]^2)^2/ ([z] '[z] (2(z - 1) '[z] + [z])([z] + 2z '[z]))];

Substituting now yChazy[z] into PainlevéODEVI, replacing the second and third derivatives of [k] by using its defining differential equation, and simplifying the result shows that yChazy[z] is a solution. In[3]:= Together[PainlevéODEVI[yChazy[z], z] //.

{[k_]'''[z] :> (8 (1 - 2z) [k]''[z] - 9 [k]'[z])/(4 z(z - 1)), [k_]''[z] :> (4 (1 - 2z) [k]'[z] - [k][z])/(4 z(z - 1))}] Out[3]= 0

We end by remarking that the explicit solution of HzL is HzL = c1 KHzL + c2 KH1 - zL where K is the complete elliptic integral of the first kind. In[4]:= Together[z (1 - z) ''[z] + (1 - 2z) '[z] - [z]/4 /.

-> Function[z, c[1] EllipticK[z] + c[2] EllipticK[1 - z]]]

Out[4]= 0

3. Nested Integration, Derivative[-n], PowerFactor, Rational Painlevé II Solutions a) First, we look at the actual result. In[1]:= f[n_][x_] := Integrate[f[n - 1][x - z], {z, 0, x}]

f[0][x_] = Exp[-x];

Out[3]=

f[2][x] 1 H−x Cosh@xD + Sinh@xD + x Sinh@xDL 2

We compare it to the following. In[4]:= fn[2][x] = Nest[Integrate[# /. {x -> x - z}, {z, 0, x}]&, Exp[-x], 2] Out[4]= −1 + −x + x In[5]:= Expand[TrigToExp[fn[2][x] - f[2][x]]]

5 −x 4

x 4

−x x 2

Out[5]= −1 + − + x +

The reason for this in the first moment unexpected result is that Integrate does not localize its integration variable. (It is impossible for Integrate to do this because it has no HoldAll attribute, and so it cannot avoid the evaluation of all its arguments before Integrate can go to work.) So the integration variables are not screened from each other in nested integrations. Here is what happens in detail by calculating f[2][x]. f[2][x] Integrate[f[2 - 1][x - z], {z, 0, x}]

Now the two variables (from a mathematical point of view—dummy variables) z interfere.

Solutions

401

f[1][x - z] Integrate[f[0][(x - z) - z], {z, 0, (x - z)}] Integrate[f[0][(x - z) - z], {z, 0, x}] Integrate[Exp[-((x - z) - z)], {z, 0, x}] Integrate[Exp[-((x - z) - z)], {z, 0, x}] Exp[x - 2 z]/2 - Exp[-x]/2 Integrate[Exp[x - 2z]/2 - Exp[-x]/2, {z, 0, x}] Exp[x]/4 - (1 + 2 x)/4 Exp[-x]

By using On[], we could follow all of the above steps in more detail, but because of the extensive output, we do not show it here. To screen the integration variables in nested integrations, we could, for instance, use the following construction for the function definiteIntegrate . (We implement it here only for 1D integrals—the generalization to multidimensional integrals is obvious.) In[6]:= SetAttributes[definiteIntegrate, HoldAll]

definiteIntegrate[integrand_, {iVar_, lowerLimit_, upperLimit_}] := Function[x, Integrate[#, {x, lowerLimit, upperLimit}]& @@ (* avoid evaluation of integrand; substitute new integration variable *) (Hold[integrand] //. iVar -> x)][ (* create a unique integration variable *) Unique[x]]

(Note that definiteIntegrate has the attribute HoldAll and that an additional Hold on the right-hand side is necessary to avoid any evaluation. A unique integration variable is created via Unique[x].) Using the function definiteIntegrate in the recursive definition of f now gives the “expected” result. In[8]:= f1[n_][x_] := definiteIntegrate[f1[n - 1][x - z], {z, 0, x}]

f1[0][x_] = Exp[-x];

Now, we get from f[2][x] the expected result. In[10]:= f1[2][x] Out[10]= −1 + −x + x In[11]:= f1[2][x] - fn[2][x] // TrigToExp // Expand Out[11]= 0

For the simple example under consideration, we could use a simpler way of creating different dummy integration variables. Here is an example. In[12]:= f2[n_][x_] := Integrate[f2[n - 1][x - z[x]], {z[x], 0, x}]

f2[0][x_] = Exp[-x]; f2[2][x] Out[14]= −1 + −x + x

b) Obviously, Integrate[f, x] + Integrate[g, x] and Integrate[f + g, x] can only differ by an x-independent constant. It turns out that finding a pair of functions f and g is not difficult; low-degree polynomials and powers already do the job. In[1]:= Integrate[(1 + x)^2, x] + Integrate[x^α, x]

x3 3

x1+α 1+α

Out[1]= x + x2 + + In[2]:= Integrate[(1 + x)^2 + x^α, x] Out[2]=

1 x1+α H1 + xL3 + 3 1+α

In[3]:= % - %% // Expand Out[3]=

1 3

Symbolic Computations

402

Now, let us deal with the definite integrals. The function f should have a discontinuity at xm . We choose the branch cut of the square root function as the discontinuity. We take xl and xu on opposite sides of the branch cut and xm directly on the branch cut. In[4]:= Integrate[Sqrt[z], {z, -1 - I, -1 + I}] Out[4]=

4 2 2 − H−1 − L3ê2 + H−1 + L3ê2 3 3 3

In[5]:= Integrate[Sqrt[z], {z, -1 - I, 0}] + Integrate[Sqrt[z], {z, 0, -1 + I}]

2 3

2 3

Out[5]= − H−1 − L3ê2 + H−1 + L3ê2 In[6]:= % - %% // Expand

4 3

Out[6]= −

c) First, the input adds a new rule to Derivative (which does not have the attribute Protected) for a negative integer argument for an arbitrary function. Now, we look at the actual code. With evaluates its first argument, which means the local variable pI, and sets the value to Integrate[f[C], C] . In the case that the result does not contain Integrate, pI becomes a pure function by substituting Slot[1] for C and adding the head Function. The whole expression so constructed again has a Derivative wrapped around it, but with the order incremented by one. In summary, this means that taking a Derivative of negative order n is interpreted as an iterated n-fold integration. Let us look at some examples. In[1]:= Derivative[i_Integer?Negative][f_] :=

(* because the test is the whole calculation, use With and then use pI as test and as the result *) With[{pI = Integrate[f[C], C]}, (* test if Integrate appears in result *) Derivative[i + 1][Function[pI] /. C -> #] /; FreeQ[pI, Integrate, {0, Infinity}]] In[2]:= Derivative[-3][Exp] Out[2]= #1 & In[3]:= Derivative[-3][#^3 + Sin[#]&] Out[3]=

#16 + Cos@#1D & 120

Here are the two derivatives Derivative[+4][Exp[1 #]&] and Derivative[-4][Exp[1 #]&]. In[4]:= {Derivative[+4][Exp[1 #]&], Derivative[-4][Exp[1 #]&]} Out[4]= 8#1 &, #1 &

Module[{product = List @@ t, rads, rest}, (* select the radicals *) rads = Cases[product, Power[_, _Rational], {1}]; rest = Complement[product, rads]; (* the new exponent *) exp = LCM @@ Denominator[Last /@ rads]; (Times @@ rads^exp)^(1/exp) (Times @@ rest)];

Here is an example showing rulePower at work. In[2]:= a^(2/3) b^(3/4) c^(4/5) (d + e)^(5/6) f^(1/n) g /. rulePower 1ê60

Out[2]= Ha40 b45 c48 Hd + eL50 L

1

f n g

The rule ruleLogSum rewrites sums of logarithms as one logarithm. In[3]:= ruleLogSum = p:_Plus :>

Module[{sum = List @@ p, logs, rest}, (* select the logarithms *) logs = Cases[sum, _Log, {1}]; rest = Complement[sum, logs]; Plus[Sequence @@ rest, Log[Times @@ (First /@ logs)]]];

Here is an example. The term -Log[c] has the head Times and is not matched by the rule ruleLogSum. In[4]:= Log[a] + Log[b] - Log[c] /. ruleLogSum Out[4]= Log@a bD − Log@cD

The rule ruleLogProduct rewrites products involving logarithms. In[5]:= ruleLogProduct = c_ Log[a_] :> Log[a^c];

Now terms of the form -Log[c] are rewritten too. In[6]:= 1 - Log[a] + Log[b] Log[c] /. ruleLogProduct

1 a

Out[6]= 1 + LogA E + Log@bLog@cD D

The rule ruleLogProduct rewrites products involving logarithms. In[7]:= ruleLogPower = Log[a_]^e_ :> Log[a^(Log[a]^(e - 1))];

Here is an example. In[8]:= Log[a]^3 /. ruleLogPower 2

Out[8]= LogAaLog@aD E

Now, we put all rules together in the function PowerFactor. To make sure that every rule gets applied whenever possible, we use ReplaceRepeated and MapAll. In[9]:= PowerFactor[expr_] := MapAll[(# //. rulePower //. ruleLogSum //.

ruleLogProduct //. ruleLogPower)&, expr]

Here is PowerFactor applied to a more complicated input. In[10]:= 1 + a^(1/3) b^(2/3) c /d^(5/3) (z^3)^(1/2) + Log[s^2] +

((Log[x] + Log[z^2])^2 + 1)^(1/2) + 3(Log[a] - Log[b] Log[c]) + Log[x]^3 Log[y]^3 è!!!!!!! a1ê3 b2ê3 c z3 Out[10]= 1 + + 3 HLog@aD − Log@bD Log@cDL + d5ê3 "################################################################## 2 Log@s D + Log@xD3 Log@yD3 + 1 + HLog@xD + Log@z2 DL2 In[11]:= PowerFactor[%]

Symbolic Computations

404 1ê6 LogA 13 E 2 Log@xD E LogAy i a2 b4 z9 y c + LogAa3 b s IxLogAx M { k d

z Out[11]= 1 + c j j 10 z

LogAyLog@yD E E

E + "############################################################### 1 + LogAHx z2 LLog@x z D E 2

PowerExpand rewrites the expression in the opposite direction. In[12]:= PowerExpand[%]

a1ê3 b2ê3 c z3ê2 d è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 2 Log@sD + Log@xD3 Log@yD3 + 1 + HLog@xD + 2 Log@zDL2

Out[12]= 1 + + 3 Log@aD − 3 Log@bD Log@cD + 5ê3

PowerFactor recovers the above expression. In[13]:= PowerFactor[%] 1ê6 LogA 13 E 2 Log@xD E LogAy i a2 b4 z9 y c + LogAa3 b s IxLogAx M { k d

z Out[13]= 1 + c j j 10 z

LogAyLog@yD E E

E + "############################################################### 1 + LogAHx z2 LLog@x z D E 2

We could now continue and extend the rulePower to complex powers. The above rule rulePower was designed to work with rational powers. For complex powers it will not work. In[14]:= PowerFactor[x^I y^I (1/x)^I (1/y)^I

(1 - I z)^((1 - I)/2) (1 + I z)^((I - 1)/2)] 1 x

1 y

1 − 2

Out[14]= J N x J N y H1 − zL 2

1

H1 + zL− 2 + 2

Now, we have to deal with exponents e and -e appropriately. In[15]:= rulePower = t:_Times?(MemberQ[#, Power[_, _Complex]]&) :>

Module[{product = List @@ t, crads, rest, exp, cradsN}, (* select the radicals *) crads = Cases[product, Power[_, _Complex], {1}]; rest = Complement[product, crads]; (* the new exponent *) exp = LCM @@ Denominator[Last /@ crads]; cradsN = crads^exp; If[exp =!= 1, (Times @@ cradsN)^(1/exp), (* complementary powers *) Times @@ (Function[l, Times @@ (#[[2, 1]]^(l[[1, 2, 2]]/ #[[2, 2]])& /@ l)^l[[1, 2, 2]]] /@ Split[{Sort[{#[[2]], -#[[2]]}], #}& /@ cradsN, #1[[1]] === #2[[1]]&])] (Times @@ rest)]; In[16]:= PowerFactor[x^I y^I (1/x)^I (1/y)^I

(1 - I z)^((1 - I)/2) (1 + I z)^((I - 1)/2)] 1 − z 1− Out[16]= $%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% J N 1+ z

f) We use the series of the generating function to define the qk HzL for the first k. In[1]:= q[_, z_] = 0;

(* make definitions for the q *) MapIndexed[(q[#2[[1]], z_] = #1)&, CoefficientList[Series[Exp[z ξ + ξ^3/3], {ξ, 0, 20}], ξ] // Expand];

Given the qk HzL, the definition of the sk HzL is straightforward. In[4]:= σ[0, z_] := 1;

σ[k_, z_] := σ[k, z] = Det[Table[q[k + i - 2 j, z], {i, 0, k - 1}, {j, 0, k - 1}]]

Now, we can calculate the first few wk HzL. In[6]:= kMax = 10;

Do[w[k, z_] = D[Log[σ[k + 1, z]/σ[k, z]], z] // Together // Factor, {k, kMax}]

Solutions

405

Here are the first four wk HzL. In[8]:= Table[w[k, z], {k, 4}]

1 + 2 z3 H−1 + zL z H1 + z + z L

1 z

3 z2 H10 − 2 z3 + z6 L H−1 + zL H1 + z + z L H−5 − 5 z + z L

Out[8]= 9 , 2 , 2 3 6 , 875 − 1750 z3 + 1400 z6 + 250 z9 − 50 z12 + 4 z15 = z H−5 − 5 z3 + z6 L H−175 − 15 z6 + z9 L

Here is a quick check for the correctness of the calculated functions. In[9]:= Table[Together[D[w[k, z], {z, 2}] - (2w[k, z]^3 - 4 z w[k, z] + 4 k)],

{k, kMax}] Out[9]= 80, 0, 0, 0, 0, 0, 0, 0, 0, 0

1, aq -> 1, bq -> 1, i_Integer ws_ -> ws}] Out[6]= 8y2@zD4 y1 @zD2 , y1@zD y2@zD3 y1 @zD y2 @zD, y2@zD3 y1 @zD2 y2 @zD, y1@zD2 y2@zD2 y2 @zD2 , y1@zD y2@zD2 y1 @zD y2 @zD2 , y2@zD2 y1 @zD2 y2 @zD2 , y1@zD2 y2@zD y2 @zD3 , y1@zD y2@zD y1 @zD y2 @zD3 , y1@zD2 y2 @zD4

RationalFunctions] Out[9]= 84 f@zD g@zD w@zD + 2 w@zD g @zD + 2 f@zD2 w @zD + 4 g@zD w @zD + f @zD w @zD + 3 f@zD w @zD + wH3L @zD

{{-2, 2}, {-2, 2}}, Evaluate[opts["Saddle point"]]] Saddle point

For y£ HxL = -x ê yHxL, we get two solutions from DSolve. In[15]:= DSolve[{y'[x] == -x/y[x], y[x0] == y0}, y[x], x] Out[15]= 99y@xD → −

è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! è!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! −x2 + x02 + y02 =, 9y@xD → −x2 + x02 + y02 ==

In[16]:= sol5[x_, {x0_, y0_}] = {-Sqrt[-x^2 + x0^2 + y0^2], Sqrt[-x^2 + x0^2 + y0^2]};

Now, we plot both solutions.

Symbolic Computations

410 In[17]:= Show[Table[Plot[Evaluate[sol5[x, {x0, 0}]], {x, -x0, x0},

DisplayFunction -> Identity], {x0, 0.1, 1, 0.1}], DisplayFunction -> $DisplayFunction, Evaluate[opts["Vortex point"]]] Vortex point

Remaining is the differential equation that gives eddy points. Again, we can find a solution, although not explicitly for yHxL. In[18]:= DSolve[{y'[x] == (y[x] - x)/(y[x] + x)}, y[x], x]

Solve::tdep : The equations appear to involve the variables to be solved for in an essentially non−algebraic way. More… y@xD x

y@xD2 x

1 2

Out[18]= SolveAArcTanA E + LogA1 + 2 E C@1D − Log@xD, y@xDE In[19]:= sol6[{x_, y_}] = -2 ArcTan[y/x] + Log[1/(x^2 (1 + y^2/x^2))];

We now plot this result. Unfortunately, for this transcendental equation, ImplicitPlot is of little use because it can only plot polynomial equations. Also, ContourPlot does not give a very good result because of the branch cut of Log. In[20]:= ContourPlot[Evaluate[Re[sol6[{x, y}]]],

{x, -2, 2}, {y, -2, 2}, PlotPoints -> 100, Contours -> 20, ContourShading -> False] 2 1 0 -1 -2

-2

-1

0

1

2

Therefore, we now create a special implementation. We could try a numerical implementation using FindRoot, for example, of the following form. However, it is difficult to represent larger pieces like this. The form of the solution suggests the use of polar coordinates. In[21]:= sol6[{r Cos[ϕ], r Sin[ϕ]}] // Simplify

1 r

Out[21]= −2 ArcTan@Tan@ϕDD + LogA 2 E In[22]:= Solve[% == c, r] 1 H−c−2 ArcTan@Tan@ϕDDL

Out[22]= 99r → − 2

1 H−c−2 ArcTan@Tan@ϕDDL

=, 9r → 2

==

We arrive at the following formula. In[23]:= % // PowerExpand 1 H−c−2 ϕL

Out[23]= 99r → − 2

1 H−c−2 ϕL

=, 9r → 2

==

The final graphics of the integral curves is the following. In[24]:= Show[Table[

ParametricPlot[Evaluate[Exp[c - ϕ]{Cos[ϕ], Sin[ϕ]}], {ϕ, c + 0.1, 3Pi + c}, DisplayFunction -> Identity], {c, 0, 2Pi 24/25, 2Pi/25}],

Solutions

411 DisplayFunction -> $DisplayFunction, PlotRange -> All, Evaluate[opts["Eddy point"]]] Eddy point

We demonstrate the appearance of various singular points in the following example. For a “random” bivariate function yHx, yL, we will integrate the equations x£ HsL = ∑yHxHsL, yHsLL ê ∑ xHsL, y£ HtL = -∑ yHxHsL, yHsLL ê ∑ xHsL. We see saddle points, vortex points, and knot points [85], [448]. In[25]:= Module[{L = 4, pp = 41, T = 5, o = 90, ms = 100, = 3/2,

ps = 21, λ = 2, ipo, ipoX, ipoY, eqs, pathList, nsol}, SeedRandom[123]; (* a streamfunction *) ipo = (* smooth interpolation *) Interpolation[ (* random data *) Flatten[Table[{x, y, Random[Real, {-1, 1}]}, {x, -L, L, 2L/pp}, {y, -L, L, 2L/pp}], 1], InterpolationOrder -> 8]; (* derivatives *) {ipoX, ipoY} = D[ipo[x[t], y[t]], #]& /@ {x[t], y[t]}; (* differential equations for flow lines *) eqs = Thread[{x'[t], y'[t]} == #/Sqrt[#.#]&[{ipoX, -ipoY}]]; (* calculate flow lines *) pathList = Table[ ((* solve for flow lines *) Internal`DeactivateMessages[ nsol = NDSolve[Join[eqs, {x[0] == x0, y[0] == y0}], {x, y}, {t, 0, #}, MaxSteps -> ms, PrecisionGoal -> 3, AccuracyGoal -> 3]]; (* visualize flow lines *) ParametricPlot[Evaluate[{x[t], y[t]} /. nsol], {t, 0, DeleteCases[nsol[[1, 1, 2, 1, 1]], 0.][[1]]}, (* color flow lines differently *) PlotStyle -> {{Thickness[0.002], RGBColor[ (x0 + )/(2 ), 0.2, (y0 + )/(2 )]}}, DisplayFunction -> Identity, PlotPoints -> 200])& /@ {T, -T}, (* grid of initial conditions *) {x0, - , , 2 /ps}, {y0, - , , 2 /ps}]; (* display flow lines and stream function *) Show[(* contour plot of the stream function *) {ContourPlot[Evaluate[ipo[x, y]], {x, -L/2, L/2}, {y, -L/2, L/2}, PlotPoints -> 400, Contours -> 60, ContourLines -> False, PlotRange -> All, DisplayFunction -> Identity], Show[pathList]}, DisplayFunction -> $DisplayFunction, Frame -> True, Axes -> False, FrameTicks -> False, PlotRange -> {{-λ, λ}, {-λ, λ}}, AspectRatio -> Automatic]]

Symbolic Computations

412

For higher-order singularities, see [1726]. b) We start with implementing the exact solution for separable kernels. The function iSolve (named in analogy to DSolve) attempts this. Because a kernel might be separable, but structurally not in separated form, we allow for an optional function that attempts to separate the kernel. While we could be more elaborate with respect to matching the pattern of a Fredholm integral equation of the second kind, we require here the canonical form. The step-by-step implementation of iSolve is self-explanatory. In[1]:= iSolve[eq:(y_[x_] + λ_ Integrate[_ y[ξ_], {ξ_, a_, b_}] == f_),

y_, x_, _:Identity] := Module[{ = Integrate, intExpand, eq1, integrals, Rules, functions, eq2, eqs, s, separableQ}, (* thread integrals over sums and pull integration variable-independent out *) intExpand = Function[int, (int //. [p_Plus, i_] :> ( [#, i]& /@ p) //. HoldPattern[Integrate[c_?(FreeQ[#, ξ, Infinity]&) rest_, {ξ, a, b}]] :> c Integrate[rest, {ξ, a, b}])]; (* separate kernel *) eq1 = intExpand[ExpandAll[ //@ (Subtract @@ eq)]]; (* integrals over y[ζ] and kernel functions *) integrals = Union[Cases[eq1, _ , Infinity]]; (* replace integrals by variables [i] *) Rules = Rule @@@ Transpose[{integrals /. ξ -> ζ_, s = Array[, Length[integrals]]}]; (* was the kernel separable? *) separableQ = FreeQ[Rules, x, Infinity]; (* kernel functions h_j[.] *) functions = ((First /@ integrals)/y[ξ]) /. ξ -> x; (* replace integrals by variables *) eq2 = eq1 /. Rules; (* make linear system in the [i] *) eqs = intExpand[ExpandAll[ [eq2 #, {x, a, b}]& /@ functions]] //. Rules; (* solve linear system, backsubstitute into eq2 and solve for y[x] *) Solve[(eq2 /. Solve[(# == 0)& /@ eqs, s][[1]]) == 0, y[x]] /; (* was iSolve applicable? *) separableQ] 1

The next inputs solve the example equation yHxL - l Ÿ0 sinHx + xL yHxL dx = cosHxL. In[2]:= [x_, ξ_] := Sin[x + ξ]

f[x_] := Cos[x] IEq = y[x] - λ Integrate[[x, ξ] y[ξ], {ξ, 0, 1}] == f[x]; IEqSol = iSolve[y[x] - λ Integrate[[x, ξ] y[ξ], {ξ, 0, 1}] == f[x], y, x, TrigExpand] // Simplify 2 Hλ Cos@2 − xD − H−4 + λL Cos@xD + 2 λ Sin@xDL Out[5]= 99y@xD → − == −8 + λ2 H1 + Cos@2DL + 8 λ Sin@1D2

Here is a quick check for the correctness of the result. In[6]:= yExact = IEqSol[[1, 1, 2]];

IEq /. y -> Function[x, Evaluate[yExact]] // Simplify

Solutions

413 Out[7]= True

In the calculation of the truncated Fredholm and Neumann resolvents, we have to carry out many definite integrals. Because we do not worry about convergence and hope to carry out all integrals successfully term by term, we do not use the built-in function Integrate directly, but rather implement a function integrate, that expands products and powers. In[8]:= integrate[l_List, i_] := Integrate[#, i]& /@ l

integrate[p_Plus, i_] := Integrate[#, i]& /@ p integrate[p:Times[___, _Plus] | p:Power[_Plus, _Integer], i_] := integrate[Expand[p], i] integrate[e_, i_] := Integrate[e, i]

The function FredholmResolventList calculates a list of the successive resolvent approximations arising from truncating the Fredholm minor and the Fredholm determinant at lo+1 . In[12]:= FredholmResolventList[_, {ξ_, a_, b_}, {x_, ξ_}, o_, _:Identity] :=

Module[{c, d, , , , }, (* make recursive definitions for Fredholm minor and determinant *) (* avoid variable interference by applying Set and SetDelayed *) Set @@ {[_, _], /. {x -> , ξ -> }}; Set @@ {d[0][_, _], [, ]}; SetDelayed @@ {d[k_][_, _], Unevaluated @ With[{p = Pattern[#, _]& @@ {}, p = Pattern[#, _]& @@ {}}, d[k][p, p] = @ (c[k] [, ] k integrate[[, ] d[k - 1][, ], {, a, b}])]}; c[0] := 1; c[k_] := c[k] = @ integrate[d[k - 1][, ], {, a, b}]; (* calculate c[k] and d[k] recursively and form successive resolvent approximations *) Divide @@ Transpose[Rest[FoldList[Plus, 0, Table[(-1)^k/k! λ^k {d[k][x, ξ], c[k]}, {k, 0, o}]]]]]

For the example integral equation, all higher ck and dk Hx, xL vanish identically and we obtain the exact solution. In[13]:= FSerKernels = FredholmResolventList[[x, ξ], {ξ, 0, 1}, {x, ξ}, 3,

Simplify] − 1 λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD 1 − λ Sin@1D

2 , Out[13]= 9Sin@x + ξD, 2 1 λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD − 2 , 1 − 14 λ2 Cos@1D2 − λ Sin@1D2 1 − λ H−Cos@x − ξD + Cos@1 − x − ξD Sin@1DL + Sin@x + ξD 2 = 1 − 14 λ2 Cos@1D2 − λ Sin@1D2

In[14]:= yFSerSols[x_] = f[x] + λ integrate[FSerKernels f[ξ], {ξ, 0, 1}]; In[15]:= yFSerSols[x][[-1]] == yExact // Simplify Out[15]= True

We end with the implementation of the iterated kernels. The function NeumannResolventList calculates the resolvent arising from o + 1 iterated kernels. In[16]:= NeumannResolventList[_, {ξ_, a_, b_}, {x_, ξ_}, o_, _:Identity] :=

Module[{, , , , kernels}, Set @@ {[_, _], /. {x -> , ξ -> }}; kernels = NestList[[integrate[[, ] (# /. -> ), {, 0, 1}]]&, [, ], o] /. { -> x, -> ξ}; Rest[FoldList[Plus, 0, MapIndexed[λ^(#2[[1]] - 1) #1&, kernels]]]]

The iterated kernels become increasingly complicated functions. In[17]:= NSerKernels = NeumannResolventList[[x, ξ], {ξ, 0, 1},

{x, ξ}, 5, Simplify];

{LeafCount /@ NSerKernels, Short[NSerKernels, 12]} Out[19]= 984, 26, 78, 168, 292, 454 3/2}], {x, 0, 20}, (* setting options to get a pretty picture *) PlotRange -> All, PlotPoints -> 200, PlotStyle -> {Thickness[0.007], Thickness[0.002], Thickness[0.002], {Thickness[0.002], Dashing[{0.02, 0.02}]}, {Thickness[0.002], Dashing[{0.02, 0.02}]}}, Frame -> True, FrameLabel -> ({#["r"], #["V"], None, "∂"}&[ StyleForm["r", FontWeight -> "Bold", FontSize -> 6]&])] 2

0

¶

r

1

-1 -2 0

5

10

r

15

20

For the practical importance of such conditions, see [301], [1474], [1887], [1888], [113], and [1663]. For a nontrivial background potential, see [1367]; for bound states in gaps, see [1508]. b) The function GraeffeSolve implements the calculation of the polynomials pk HzL and the root zn,k . After the †zk § are calculated as precisely as possible given the initial precision prec, ≤ zk is formed and the appropriate sign is selected. In[1]:= Off[RuleDelayed::rhs];

GraeffeSolve[poly_, z_Symbol, prec_] := Module[{k = 1, oldRoots = {0, 0}, newRoots}, Clear[p]; p[0, ζ_] = N[poly /. z -> ζ, prec]; (* polynomial recursion *) p[k_, ζ_] := p[k, ζ_] = Expand[p[k - 1, ] p[k - 1, -]] /. (* avoid 0. z^o *) {_?(# == 0&) -> 0, ^n_ :> ζ^(n/2)}; While[FreeQ[p[k, ζ], Overflow[] | Underflow[], Infinity] && (coeffs = CoefficientList[p[k, ζ], ζ]; (* next polynomial; normalized *) p[k, ζ_] = Expand[p[k, ζ]/Max[Abs[coeffs]]]; (* new root approximations *) newRoots = Abs[Divide @@@ Partition[coeffs, 2, 1]]^(2^-k); (* are roots still changing? *) newRoots =!= oldRoots), oldRoots = newRoots; k++]; {z -> #}& /@ (* add sign *) Select[Join[newRoots, -newRoots], (poly /. z -> #) == 0&]]

Here the function GraeffeSolve is used to solve p = z5 + 5 z4 - 10 z3 - 10 z2 + 5 z + 2 = 0. We start with 100 digits. In[3]:= poly[z_] := 2 + 5 z - 10 z^2 - 10 z^3 + 5 z^4 + z^5;

(* display shortened result *) (grs = GraeffeSolve[poly[z], z, 110]) // N[#, 10]& Out[5]= 88z → 0.5973232647 {{-1, -1}, {+1, -1}, {-1, -1}},

Solutions

419 Lighting -> False, PlotRange -> All, BoxRatios -> {1, 1, 0.7}, AxesLabel -> {x, y, None}, Boxed -> True, TextStyle -> {FontFamily -> "Times", FontSize -> 6}, PlotLabel -> "SF[" ToString[n] ", " ToString[k] "]"]] /; (k Identity], {i, numKnots[1]}]]] SF@1, 1D

SF@1, 2D

SF@1, 3D

In[13]:= Show[GraphicsArray[#]]& /@

Table[ShapeFunctionPlot[2, 3i + j, 12, DisplayFunction -> Identity], {i, 0, 1}, {j, 3}] SF@2, 1D

SF@2, 2D

SF@2, 3D

SF@2, 4D

SF@2, 5D

SF@2, 6D

In[14]:= (* suppress message for only one picture in the last row *)

Show[GraphicsArray[#]]& /@ Table[ShapeFunctionPlot[3, 3i + j, 12, DisplayFunction -> Identity], {i, 0, 2}, {j, 3}] SF@3, 1D

SF@3, 2D

SF@3, 3D

Symbolic Computations

420 SF@3, 4D

SF@3, 5D

SF@3, 6D

SF@3, 7D

SF@3, 8D

SF@3, 9D

In[16]:= ShapeFunctionPlot[3, 10, 12] SF@3, 10D

We turn now to the computation of the integrals of the element vector and to the entries in the stiffness and mass matrices. Because these involve integrals of the shape functions yi Hx, yL over the triangle with vertices P1 = 8x1 , y1 x1} // Simplify Out[21]= x3 η + x2 ξ − x1 H−1 + η + ξL In[22]:= y[ξ_, η_] = (ay ξ + by η + cy) /.

{ay -> - y1 + y2, by -> - y1 + y3, cy -> y1} // Simplify Out[22]= y3 η + y2 ξ − y1 H−1 + η + ξL

We then get the following Jacobian determinant. In[23]:= Simplify[Det[Outer[D, {x[ξ, η], y[ξ, η]}, {ξ, η}]]] Out[23]= x3 Hy1 − y2L + x1 Hy2 − y3L + x2 H−y1 + y3L 1 1-x p

Next, we implement the relation Ÿ0 Ÿ0

x hq dh dx = p! q ! ê H p + q + 2L (for our applications p and q are positive integers).

The function TriangularIntegration implements the integration of polynomials over the unit triangle. In[24]:= (* Additivity of the integration *)

TriangularIntegration[p_Plus, {x_, y_}] := TriangularIntegration[#, {x, y}]& /@ p; (* Factors that do not depend on the integration variables are moved in front of the integral *) TriangularIntegration[c_ z_, {x_, y_}] := c TriangularIntegration[z, {x, y}] /; FreeQ[c, x] && FreeQ[c, y]; (* let q be 0 *) TriangularIntegration[x_^p_., {x_, y_}] := TriangularIntegration[x^p, {x, y}] = p!/(p + 2)!; (* let p be 0 *) TriangularIntegration[y_^q_., {x_, y_}] := TriangularIntegration[x^q, {x, y}] = q!/(q + 2)!; (* the actual integration formula *) TriangularIntegration[x_^p_. y_^q_., {x_, y_}] := TriangularIntegration[x^p y^q, {x, y}] = (p! q!) /(p + q + 2)!; (* integration of a constant *) TriangularIntegration[c_, {x_, y_}] := (c/2) /; FreeQ[c, x] && FreeQ[c, y];

(For the efficient integration of analytic functions over triangles, see [446].) By comparing our triangular integration with the built-in command Integrate, we see that our work was justified. In[36]:= Timing[TriangularIntegration[a + b x + c y^2 + d x^3 y^6 +

a 2

b 6

c 12

d 9240

e x^12 y^ 16, {x, y}]] e 26466926850

Out[36]= 90. Second, + + + + =

In[37]:= Timing[Integrate[a + b x + c y^2 + d x^3 y^6 + e x^12 y^ 16,

a 2

{x, 0, 1}, {y, 0, 1 - x}]] // Simplify b c d e 6 12 9240 26466926850

Out[37]= 91.35 Second, + + + + =

Now to the heart of this problem: the computation of the element vector and the mass and stiffness matrices. For the element vector, we have HeL fi = ‡ yi Hx, yL dxdy = J ‡ jHeL i Hx, hL dx dh = J fi . RT

UT

Here, RT denotes the real triangle, whereas UT denotes the unit triangle. J is the Jacobian determinant †∑Hx, yL ê ∑ Hx, hL§. We get this relationship by means of the relations HeL xHx, hL = ‚ x j jHeL j Hx, hL, yHx, hL = ‚ y j j j Hx, hL j

j

Symbolic Computations

422

(where x j , y j are the coordinates of the point P j in the actual triangle RT), which hold for the isoparametric mappings yi HxHx, hL, yHx, hLL = jHeL i Hx, hL. Thus, we compute only the element vector in the unit triangle fiHeL (i.e., we do not explicitly write the Jacobian determinant). In[38]:= ElementVectorElement[n_Integer?Positive, i_Integer?Positive] :=

(ElementVectorElement[n, i] = TriangularIntegration[Expand[ShapeFunction[n, i, ξ, η]], {ξ, η}]) /; (i All, Frame -> True, Axes -> False, PlotStyle -> {PointSize[0.008]}, DisplayFunction -> Identity], (* values over the base points; coloring according to size *) Graphics3D[{Hue[0.76 #[[2]]/Max[evec]], Line[{Append[#[[1]], 0], Append[#[[1]], Abs[#[[2]]]]}]}& /@ Transpose[{Table[PD[n, k], {k, numKnots[n]}], evec}], BoxRatios -> {1, 1, 0.5}, PlotRange -> All, Axes -> True]}]]]

0.1 0.2 0.15 0.1 0.05 0 0

0 -0.1 -0.2 0

20

40

60

1 0.75 0.5 0.25

0.5

80

0.25 0.75

1

0

The computation of the mass matrix is essentially analogous to that for the eigenvector. Using similar notation as in the element vector case, we have HeL HeL mij = ‡ yi Hx, yL y j Hx, yL dxdy = J ‡ jHeL i Hx, hL j j Hx, hL dx dh = J mij . RT

UT

Again, we find only the coordinate-free part. In[44]:= MassMatrixElement[n_Integer?Positive,

i_Integer?Positive, j_Integer?Positive] := (MassMatrixElement[n, i, j] = (* because of symmetry *) MassMatrixElement[n, j, i] = TriangularIntegration[Expand[ShapeFunction[n, i, ξ, η] * ShapeFunction[n, j, ξ, η]], {ξ, η}]) /; ((i J, -(-(x2 y1) + x3 y1 + x1 y2 - x3 y2 - x1 y3 + x2 y3) -> -J}) // Simplify x3 H−y + y1L + x1 Hy − y3L + x H−y1 + y3L x2 Hy − y1L + x Hy1 − y2L + x1 H−y + y2L Out[48]= 99ξ → , η → == x3 Hy1 − y2L + x1 Hy2 − y3L + x2 H−y1 + y3L J ∑ ∑ ∑ ∑ We now can calculate the following four quantities: ÅÅÅÅ ÅÅ xHx, yL, ÅÅÅÅ ÅÅ hHx, yL, ÅÅÅÅ ÅÅ xHx, yL, ÅÅÅÅ ÅÅ hHx, yL. ∑x ∑y ∑y ∑y

In[49]:= Ξ[x_, y_] = (x1 y - x3 y - x

Η[x_, y_] = (x2 y - x1 y + x

y1 + x3 y1 + x y3 - x1 y3)/J; y1 - x2 y1 - x y2 + x1 y2)/J;

In[51]:= {dξdx = D[Ξ[x, y], x], dηdx = D[Η[x, y], x],

dξdy = D[Ξ[x, y], y], dηdy = D[Η[x, y], y]} −y1 + y3 y1 − y2 x1 − x3 −x1 + x2 J J J J

Out[51]= 9 , , , =

∑ ∑ ∑ ∑ We now rewrite ÅÅÅÅ ÅÅ yi Hx, yL ÅÅÅÅ ÅÅ y j Hx, yL + ÅÅÅÅ ÅÅ y Hx, yL ÅÅÅÅ ÅÅ y j Hx, yL in the form ∑x ∑x ∑y i ∑y

Symbolic Computations

424

∑ ij ∑ HeL y Hx, hLz ÿ j ÅÅÅÅÅÅÅÅÅ ji Hx, hL ÿ ÅÅÅÅÅÅ ÅÅÅ jHeL ∑x j k ∑x {

2 2 jijijj ÅÅÅÅ∑ÅÅ ÅÅÅ xHx, yLyz + ijj ÅÅÅÅ∑ÅÅÅÅÅÅ xHx, yLyz zyz + ∑ x ∑ y { k { k k {

2 2 ∑ yz ij ∑ yz zy yz jiij ∑ ij ∑ HeL j ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÿ ÅÅÅÅÅÅÅÅÅÅ jHeL j Hx, hL ÿ jj ÅÅÅÅÅÅÅÅÅ hHx, yL + j ÅÅÅÅÅÅÅÅÅÅ hHx, yL z + ∑ x ∑ y ∑h ∑ h { k {{ { kk k ij ÅÅÅÅ∑ÅÅÅÅÅ jHeL Hx, hL ÿ ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hL + ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hL ÿ ÅÅÅÅ∑ÅÅÅÅÅÅ jHeL Hx, hLyz µ j i ∑h j ∑h i ∑x j k ∑x { ∑ ∑ ∑ y ij ∑ z j ÅÅÅÅÅÅÅÅÅ xHx, yL ÿ ÅÅÅÅÅÅÅÅÅ hHx, yL + ÅÅÅÅÅÅÅÅÅÅ xHx, yL ÿ ÅÅÅÅÅÅÅÅÅÅ hHx, yL ∑x ∑y ∑y { k ∑x

and introduce 2 2 Hx3 - x1 L2 + Hy3 - y1 L2 ii ∑ y y y i ∑ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ A = jjjj ÅÅÅÅÅÅÅÅÅ xHx, yLzz + jj ÅÅÅÅÅÅÅÅÅÅ xHx, yLzz zz J = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑ y ∑ x { { { k kk 2 2 Hx2 - x1 L2 + Hy2 - y1 L2 ii ∑ y i ∑ y y C = jjjj ÅÅÅÅÅÅÅÅÅ hHx, yLzz + jj ÅÅÅÅÅÅÅÅÅÅ hHx, yLzz zz J = ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑ x ∑ y { k { k k { Hy3 - y1 L Hy2 - y1 L + Hx3 - x1 L Hx2 - x1 L ∑ ∑ ∑ ∑ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ . B = ijj ÅÅÅÅÅÅ ÅÅÅ xHx, yL ÅÅÅÅÅÅÅÅÅÅ hHx, yL + ÅÅÅÅÅÅÅÅÅÅ xHx, yL ÅÅÅÅÅÅÅÅÅÅ hHx, yLyz J = - ÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅÅ J ∑x ∑y ∑y { k ∑x

This leads to the following result: ∑ ∑ ∑ ∑ sij = ‡ ijj ÅÅÅÅÅÅ ÅÅÅ yi Hx, yL ÅÅÅÅÅÅÅÅÅÅ y j Hx, yL + ÅÅÅÅÅÅÅÅÅÅ yi Hx, yL ÅÅÅÅÅÅÅÅÅÅ y j Hx, yLyz dxdh ∑x ∑y ∑y { RT k ∑ x ∑ HeL ∑ HeL = A ‡ ÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅÅ j j Hx, hL dxdh + ∑x UT ∑x ∑ ∑ HeL C ‡ ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅÅ jHeL j Hx, hL dxdh + ∑h ∑h UT ∑ HeL ∑ HeL ∑ HeL i ∑ yz B ‡ jj ÅÅÅÅÅÅÅÅÅÅ jHeL i Hx, hL ÅÅÅÅÅÅÅÅÅÅ j j Hx, hL + ÅÅÅÅÅÅÅÅÅÅ ji Hx, hL ÅÅÅÅÅÅÅÅÅ j j Hx, hL dxdh. ∑h ∑h ∑x { UT k ∑ x In[52]:= StiffnessMatrixElement[n_Integer?Positive,

i_Integer?Positive, j_Integer?Positive] := (StiffnessMatrixElement[n, i, j] = (* because of symmetry *) StiffnessMatrixElement[n, j, i] = With[{SF = ShapeFunction}, (* sum of the three terms *) A TriangularIntegration[ Expand[D[SF[n, i, ξ, η], ξ] D[SF[n, j, ξ, η], ξ]], {ξ, η}] + C TriangularIntegration[ Expand[D[SF[n, i, ξ, η], η] D[SF[n, j, ξ, η], η]], {ξ, η}] + B TriangularIntegration[ Expand[D[SF[n, i, ξ, η], ξ] D[SF[n, j, ξ, η], η] + D[SF[n, i, ξ, η], η] D[SF[n, j, ξ, η], ξ]], {ξ, η}]]) /; ((i {1, 1}]& A 2A 2B 2B 2C A + B + C2 + B6 6B + C6 − − 0 − − 2 6 3 3 3 3 A B + 6 6

Out[55]//TableForm=

A 2

B C + 6 6 2A 2B − 3 3

−

B6

B − 6

2A 2B − − 3 3

2B 3

0

C 2

0

2B 3

2B 2C − − 3 3

2A 2B − − 3 3

0

4A 4B 4C + + 3 3 3

4B 4C − − 3 3

4B 3

0

2B 3

2B 3

4B 4C − − 3 3

4B 4C 4A + + 3 3 3

4A 4B − − 3 3

2B 2C − − 3 3

0

2B 2C − − 3 3

4B 3

4A 4B − − 3 3

4B 4C 4A + + 3 3 3

−

For a larger order, we will visualize the resulting mass and stiffness matrices. Here are these two matrices shown for n = 10 for the unit triangle. In[56]:= With[{n = 10},

Show[GraphicsArray[ ListDensityPlot[(* scale *) ArcTan[#], PlotRange -> All, Mesh -> False, DisplayFunction -> Identity]& /@ (* calculate exact mass and stiffness matrices *) {MassMatrix[n], StiffnessMatrix[n] /. {A -> 1, C -> 1, B -> 0}}]]] 60

60

50

50

40

40

30

30

20

20

10

10

0

0

10

20

30

40

50

60

0

0

10

20

30

40

50

60

The subject of finite elements contains many other opportunities for programming with Mathematica. For example, we mention algorithms for minimizing the bandwidth of sparse matrices (following, e.g., Cuthill–McKee [424], Gibbs–Poole– Stockmeyer ([723] and [711]), or Sloan [1625]). Because of their special nature, we do not go any further into the explicit implementation of these finite-element computations. Hp,dL b) We start by implementing the interpolating functions ck,l HxL. Using the function InterpolatingPolynomial , their construction is straightforward for explicitly given integers e, p, d, k, and l. While the unexpanded form has a better stability for numerical evaluation, we expand the functions here to speed-up the integrations to be carried out later.

In[1]:= χ[p_, d_][k_, l_, ξ_] :=

Expand[InterpolatingPolynomial[ Table[{j/p, Table[KroneckerDelta[j, k]* KroneckerDelta[l, i], {i, 0, d}]}, {j, 0, p}], ξ]]

Here are two examples: In[2]:= {χ[3, 0][0, 0, ξ], χ[2, 2][1, 1, ξ]}

11 ξ 2

9 ξ3 2

Out[2]= 91 − + 9 ξ2 − , −32 ξ3 + 160 ξ4 − 288 ξ5 + 224 ξ6 − 64 ξ7 = H p,dL

We sidestep a moment and visualize some of the ck,l HxL. The function maxAbs[p, d][k, l] calculates the maximum of Hp,dL the absolute value of the ck,l HxL over the x-interval @0, 1D. In[3]:= maxAbs[p_, d_][k_, l_] :=

Module[{f = χ[p, d][k, l, ξ], extξs}, (* solve for extrema *) extξs = Select[N[{ToRules[Roots[D[f, ξ, ξ] == 0, ξ, Cubics -> False, Quartics -> False]]}, 50], (Im[ξ /. #] == 0 && 0 1}}]]]]

The magnitude of the functions decreases quickly with higher-order continuity.

Symbolic Computations

426 In[4]:= With[{p = 4, d = 4},

Table[{j, Max[Table[maxAbs[p, d][i, j], {i, 0, p}]] // N}, {j, 0, d}]] Out[4]= 880, 5.69702 None, PlotRange -> All, Frame -> True, Axes -> False] Show[GraphicsArray[#]]& /@ Table[ Table[graph[µ, µ][k, l], {l, 0, µ}, {k, 0, µ}], {µ, 3}]

Solutions

427

H p,dL

Before starting the implementation of the functions to solve the eigenvalue problem, we will renumber the ck,l . For fixed p Hp,dL and d, we want to number the functions cHp,dL HxL using one index to easily assemble the global finite element k,l HxL = ch matrices. We number them consecutively with increasing k, and within each k with increasing l. The function reducesIn dices does the inverse: given the linear numbering h, it generates the pairs Hk, lL. In[10]:= reducesIndices[p_, d_][h_] :=

Sequence[Floor[h/(d + 1)], h - (d + 1) Floor[h/(d + 1)]]

Here are the sixteen pairs corresponding to cH3,3L h HxL. In[11]:= Table[{k, {reducesIndices[3, 3][k]}}, {k, 0, 15}] Out[11]= 880, 80, 0 22, MaxIterations -> 40]

8848, 12, 6 e[b], b] /. {e'[b] -> 0,

e[b] -> e}) == 0, evEq[0] == 0}, {b, e}], (Im[e] == 0 && b > 0 /. N[#])&] 13 è!!!!! Out[16]= 99e → , b → 2 == 16

Here is a sketch of the behavior of the yHb; xL, including more terms. In[17]:= Show[GraphicsArray[#]]& /@

Partition[Table[ ListPlot[Table[{b, NRoots[evEq[i] == 0, e][[1, 2]]}, {b, 0.8, 2.5, 0.025}], PlotRange -> {0.8, 0.82}, PlotJoined -> True, AxesOrigin -> {0.8, 0.8}, DisplayFunction -> Identity, PlotLabel -> StyleForm["evEq[" ToString[i] "]", "MR"]], {i, 6}], 3] evEq@1D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

evEq@2D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

1 1.25 1.5 1.75 2 2.25 2.5

evEq@4D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

evEq@5D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

evEq@3D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

evEq@6D 0.8175 0.815 0.8125 0.81 0.8075 0.805 0.8025

1 1.25 1.5 1.75 2 2.25 2.5

1 1.25 1.5 1.75 2 2.25 2.5

Let us now numerically compute the minimizing values for b. We compare three different methods for the case of evEq[3]. One method is to use FindMinimum for the lowest value of e, which we calculate by solving the polynomial in e with NRoots. In[18]:= oFevEq3[_?NumericQ] :=

Block[{b = }, NRoots[evEq[3] == 0, e, 20][[1, 2]]] Timing[FindMinimum[oFevEq3[b], {b, ##}, WorkingPrecision -> 25, PrecisionGoal -> 12, Compiled -> False]& @@@ (* two initial intervals *) {{11/10, 12/10}, {17/10, 18/10}}] Out[19]= 80.02 Second, 880.8074145723427270178250488, 8b → 1.203732086388522417922241 0) /. e[b] -> e) == 0, evEq[3] == 0} . This would not have resulted in a faster solution. Actually, the quality of the solution is not guaranteed. In[22]:= Sort[Select[NSolve[{((D[evEq[3] /. e -> e[b], b] /. e'[b] -> 0) /.

e[b] -> e) == 0, evEq[3] == 0}, {e, b}], Im[e] == 0 && Im[b] == 0 && Re[b] > 0 /. #&], #1[[1, 2]] < #2[[1, 2]]&] // Timing Out[22]= 82.38 Second, 88e → 0.804175, b → 1.72205 0) /. e[b] -> e, evEq[n]}] In[30]:= Timing[frSolve[3, {12/10, 18/10}]] Out[30]= 80.04 Second, 88e → 0.8074145723427270178250477, b → 1.203732086388840409673660 a})] /. {Cos[x_]^2 + Sin[x_]^2 -> 1}

Here are the computations of some Jacobian determinants with the times required. In[7]:= timings[k_Integer] := {k, {Timing[NaivJacobiDeterminant[k]],

Timing[FastJacobiDeterminant[k]]}} In[8]:= Table[timings[k], {k, 2, 7}] Out[8]= 882, 880.01 Second, r 0, x[ϕ] -> x},

(* algebraic relation between Sin and Cos *) Sin[ϕ]^2 + Cos[ϕ]^2 - 1}, {Cos[ϕ], Cos[ϕMax], h, l}, {Sin[ϕ], x}, MonomialOrder -> EliminationOrder] /. {Cos[ϕ] -> c, Cos[ϕMax] -> cm} // Factor Out[14]= 8c2 H3 c − 2 cmL l H−h − l + c lL H−h + c2 h − l + c2 l + c3 l + cm l − 2 c2 cm lL