Yuri A. Melnikov
Green’s Functions and Infinite Products: Bridging the Divide
Yuri A. Melnikov
Department of Mathematical Sciences, Computational Sciences Program
Middle Tennessee State University
Murfreesboro, TN 37132-0001, USA
[email protected]

ISBN 978-0-8176-8279-8    e-ISBN 978-0-8176-8280-4
DOI 10.1007/978-0-8176-8280-4
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011937161
Mathematics Subject Classification (2010): 40A20, 65N80

© Springer Science+Business Media, LLC 2011
All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.birkhauser-science.com)
To my beloved grandchildren: Yulya, Afanasy, and Mashen’ka
Preface
Two traditional mathematical concepts, classical in their own fields, are brought forward in this brief volume. Reviewing these concepts separately, with no connection to each other, would look quite natural, but bringing them together within a single book is a different story. The point is that the concepts are drawn from subject areas of mathematics that have no evident points of contiguity. That is why the reader might be intrigued by our intention in this book to explore their mutual fusion. This endeavor provides a basis for a challenging and nontrivial investigation.

The first of the two concepts is the Green’s function. It represents an important topic in standard courses on differential equations and is customarily covered in most texts in the field. The second concept, that of the infinite product, belongs, in turn, to classical mathematical analysis. As to Green’s functions for partial differential equations, it is not common practice in existing textbooks for careful consideration to be given to the procedures used for their construction. On the other hand, standard texts on mathematical analysis do not usually confront the infinite product representation of elementary functions. A simultaneous review of just these two subject areas (the construction of Green’s functions and the infinite product representation of elementary functions) constitutes the context of the present book.

Green’s functions for the two-dimensional Laplace equation are most widely represented in relevant texts. They are conventionally constructed using the method of images, conformal mapping, or eigenfunction expansion. The present volume focuses on the construction of such Green’s functions for a wide range of boundary-value problems. A comprehensive review of the traditional methods is provided, with emphasis on the infinite-product-containing expressions of Green’s functions, which are obtained by the method of images.
This provides a background for the central theme in this book, which is the development of an innovative approach to the representation of elementary functions in terms of infinite products. The intention in the present volume is not just to familiarize the reader in the traditional manner with the state of things in the area, but rather to reach beyond traditions. That is, we plan not only to introduce the classical topics of the construction of Green’s functions and the infinite product representation of elementary functions, but also to present a challenging investigation into the intersection of these fields.
To be well prepared for the presentation in this book, the reader is required to have a reasonably solid background in the standard undergraduate courses of calculus and differential equations. In addition, the reader would definitely benefit from at least a superficial knowledge of the basics of numerical analysis.

There is good reason to believe that this piece of work is original. To the author’s best knowledge, there are no analogous books available on the market. That is why we anticipate that the book will not be overlooked by the professional community. It might, for example, be adopted as supplementary reading for an undergraduate course or as a seminar topic within the scope of a pure or applied mathematics curriculum. Infinite Product Representation of Elementary Functions, A Further Linking of Differential Equations with Calculus, or Broadening the Use of Green’s Functions might be the title for such a course or seminar topic.

The very first results on the Green’s-function-based approach to the infinite product representation of elementary functions were reported not long ago, and the first printed publications on progress in this field appeared just recently. It then took us over three years to ultimately come up with this book, which was originally intended as a text for an elective course within the Computational Sciences Ph.D. program just launched at Middle Tennessee State University.

It is with pleasure and gratitude that the author acknowledges the editorial services provided by the staff of Birkhäuser Boston, with special thanks to Tom Grasso, senior mathematics editor, for his professional treatment of nontrivial situations. Although the editing process was not fast, smooth, and painless, it significantly improved the quality of the presentation and definitely made this book a much better read.
The opening phase of our work on this project was partially funded by a 2008 Summer Research Grant awarded by the Faculty Research and Creative Activity Committee at Middle Tennessee State University. This created a propitious work environment, promoted progress at later stages of the project, and made a decisive contribution to its prompt completion.

Murfreesboro, USA
Yuri A. Melnikov
Contents

1  Introduction  1

2  Infinite Products and Elementary Functions  17
   2.1  Euler’s Classical Representations  17
   2.2  Alternative Derivations  24
   2.3  Other Elementary Functions  28
   2.4  Chapter Exercises  41

3  Green’s Functions for the Laplace Equation  43
   3.1  Construction by the Method of Images  43
   3.2  Method of Conformal Mapping  54
   3.3  Chapter Exercises  59

4  Green’s Functions for ODE  61
   4.1  Construction by Defining Properties  61
   4.2  Method of Variation of Parameters  72
   4.3  Chapter Exercises  82

5  Eigenfunction Expansion  85
   5.1  Background of the Approach  85
   5.2  Cartesian Coordinates  86
   5.3  Polar Coordinates  105
   5.4  Chapter Exercises  118

6  Representation of Elementary Functions  121
   6.1  Method of Images Extends Frontiers  122
   6.2  Trigonometric Functions  131
   6.3  Hyperbolic Functions  141
   6.4  Chapter Exercises  149

7  Hints and Answers to Chapter Exercises  151
   7.1  Chapter 2  151
   7.2  Chapter 3  153
   7.3  Chapter 4  153
   7.4  Chapter 5  155
   7.5  Chapter 6  157

References  159
Index  161
Chapter 1
Introduction
Our objective in putting together this volume has been to develop a supplementary text for an elective upper-division undergraduate or graduate course or seminar that might be offered within the scope of a pure or applied mathematics curriculum. A quite unexpected treatment is delivered herein of two subjects that one might hardly have anticipated considering together in a single book. This makes the book an original and unique read, and a good choice for those who are open to challenges and welcome the unexpected.

The reader is invited on an interesting voyage, with the subject matter resting upon two concepts taken from different subject areas of mathematics. These concepts are (i) the infinite product, which represents a standard topic in courses of mathematical analysis, and (ii) the Green’s function, representing a significant topic in courses on differential equations. To be more specific, we will concentrate in this book on the infinite product representation of elementary functions and the Green’s function of boundary-value problems for the two-dimensional Laplace equation.

It would probably not be an exaggeration to assert that none of the existing relevant textbooks in mathematics covers both of the concepts that are to be explored herein. Consequently, the two concepts have probably never been considered together in a single traditionally offered course. The reader might therefore be concerned about the reason for presenting both concepts in our volume. Indeed, what is the driving force for considering them together this time around? The resolution of this concern will be found on examining the very recent developments reported in [27] and [28]. It appears that a diligent analysis of the two concepts reveals an unlooked-for outcome that happens to be extremely rewarding: a novel approach to the approximation of functions, which provides never-before-reported infinite product representations of some elementary functions.
According to the title, the present volume is not designed to focus exclusively on either differential equations or mathematical analysis. The subtitle brings a necessary clarification. It suggests that both subject areas, whose fusion forms the indispensable background for our presentation, are going to be covered to a certain extent, with emphasis on the establishment of their productive linking.

The motivation for writing this book is due in large measure to the many years of our work on the construction of computer-friendly expressions for Green’s functions for applied partial differential equations. The results of that work have been reported in a series of publications over the past decades. To get a sense of the distinctive features of this work, the reader might examine some of our publications [11, 12, 17, 21, 23, 25]. The most complete and useful list of efficient representations of Green’s functions recently constructed can be found in [16]. It was just recently, however, that the idea emerged to compare alternative forms of Green’s functions that had been obtained for a variety of boundary-value problems stated for the two-dimensional Laplace equation. The comparison appeared really nontrivial. It ultimately gave birth to a score of infinite product representations of some trigonometric and hyperbolic functions.

The idea is not, of course, new. The reader might recall some other areas of mathematics in which a comparison of equivalent but different-looking forms of some statement results in interesting developments. As might be learned from mathematical analysis, the theory of infinite products is closely related to that of infinite series. The latter, in turn, represents one of the major driving forces in the core courses of undergraduate mathematics. Infinite series usually receive more or less complete and detailed coverage in standard undergraduate texts in both pure and applied mathematics curricula. Indeed, Taylor, Fourier, and other types of series represent a convenient tool for mathematical analysis. They traditionally play a significant role in the standard courses of calculus, complex variables, differential equations, numerical analysis, and others. Infinite products, in contrast, are not as well and fully covered in standard texts on mathematical analysis.
Nevertheless, they quite frequently arise in different areas of mathematics [5, 20], and are, as well as infinite series, successfully implemented as a tool in the description of a number of subjects, such as the approximation of functions in particular. Although the fundamental results on the infinite product representation of elementary functions can be traced back to Euler’s era [1], mathematicians all over the globe are still working in this area [2–4, 7, 14, 22, 24], reporting on different theoretical and computational aspects of this topic.

Since the representation of elementary functions by infinite products constitutes the leading theme in this work, it would be appropriate to provide the reader with at least a brief introduction to the basic terminology as well as to the chief concepts of infinite products. An introduction of this sort would be, in our opinion, reasonable to make this book more consistent, self-contained, and easier to read. In addition, the reader will later be familiarized with the concept of Green’s function for the two-dimensional Laplace equation. This concept will be briefly reviewed in the introduction and then discussed in more detail in Chaps. 3, 4, and 5, where we give an overview of the major solution methods that are traditionally used for the construction of Green’s functions. In doing so, our special emphasis will be on the method of images, which results, for some problems, in an infinite-product-containing representation of the Green’s function.

We begin with a brief review of the fundamentals of infinite products by assuming that $a_1, a_2, \ldots, a_k, \ldots$ represent nonzero complex numbers, and consider the product

$$a_1 \cdot a_2 \cdots a_k \cdots = \prod_{k=1}^{\infty} a_k. \tag{1.1}$$
The concept of convergence must be, of course, of the same critical importance for infinite products as it is for infinite series. To introduce this concept, we form the finite product

$$P_N = \prod_{k=1}^{N} a_k = a_1 \cdot a_2 \cdots a_N$$

and call it the $N$th partial product of (1.1). It is said that the infinite product in (1.1) is convergent (we will also use the wording the infinite product converges) if there exists a finite limit $P \neq 0$ to which the sequence

$$P_1, P_2, \ldots, P_N, \ldots \tag{1.2}$$

of partial products of (1.1) converges as $N$ approaches infinity. That is,

$$P = \lim_{N \to \infty} P_N.$$

The number $P$ is called the value of (1.1). Similarly to the case of infinite series, if an infinite product is not convergent, then we say that it diverges or is divergent.

An important comment must be offered at this point as to the convergence of an infinite product. If some of the terms in (1.1) are equal to zero, then the infinite product is said to be convergent if it converges when the zero terms are excluded. With this, the value of the infinite product with zero terms is said to be zero. This comment is required for the infinite product to possess a property of finite products. That is, the value of a convergent infinite product is zero if and only if at least one of its terms is zero. Note also that if none of the terms $a_k$ of the infinite product in (1.1) is zero, and the limit of the sequence in (1.2) is zero ($P = 0$), then we say that the product diverges to zero. Hence, two options are on the table as to the convergence of an infinite product. Namely, an infinite product either converges (when $P$ is finite, but not zero) or diverges (when $P$ is either infinite or zero).

It is evident that if the infinite product in (1.1) converges, then the limit of its general term must equal unity:

$$\lim_{k \to \infty} a_k = 1. \tag{1.3}$$

This assertion immediately follows from the obvious relation

$$a_N = \frac{P_N}{P_{N-1}} \tag{1.4}$$
for the general term of the product in (1.1) in terms of two successive partial products. Indeed, taking the limit in (1.4), one obtains for a convergent infinite product

$$\lim_{N \to \infty} a_N = \frac{\lim_{N \to \infty} P_N}{\lim_{N \to \infty} P_{N-1}} = \frac{P}{P} = 1.$$

Hence, the condition in (1.3) is necessary for the infinite product in (1.1) to converge. To give some illustrations of this claim, we consider a few examples. Take the infinite product

$$\prod_{k=1}^{\infty} \frac{(k+1)^2}{k(k+2)} \tag{1.5}$$
and explore its convergence by taking a close look at its $N$th partial product, written down explicitly as

$$P_N = \prod_{k=1}^{N} \frac{(k+1)^2}{k(k+2)} = \frac{2^2}{1 \cdot 3} \cdot \frac{3^2}{2 \cdot 4} \cdot \frac{4^2}{3 \cdot 5} \cdots \frac{(N-1)^2}{(N-2)N} \cdot \frac{N^2}{(N-1)(N+1)} \cdot \frac{(N+1)^2}{N(N+2)}.$$

It is evident that after a series of cancellations in the above expression, $P_N$ becomes

$$P_N = \frac{2(N+1)}{N+2},$$

verifying that the limit of the above is a finite number. Indeed,

$$\lim_{N \to \infty} P_N = \lim_{N \to \infty} \frac{2(N+1)}{N+2} = 2.$$

Thus, the infinite product in (1.5) really does converge. And what about the condition in (1.3)? It is obviously met, since

$$\lim_{k \to \infty} a_k = \lim_{k \to \infty} \frac{(k+1)^2}{k(k+2)} = 1.$$
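The telescoping computation above is easy to confirm numerically. The following sketch (Python; the helper name is ours, not the book's) builds the partial products of (1.5) directly and compares them with the closed form 2(N + 1)/(N + 2):

```python
from math import isclose

def partial_product(factor, k_start, N):
    """Compute P_N, the product of factor(k) for k = k_start..N."""
    P = 1.0
    for k in range(k_start, N + 1):
        P *= factor(k)
    return P

# Factors of the product (1.5): (k + 1)^2 / (k(k + 2))
factor_15 = lambda k: (k + 1) ** 2 / (k * (k + 2))

for N in (10, 100, 1000):
    P_N = partial_product(factor_15, 1, N)
    closed = 2 * (N + 1) / (N + 2)   # the telescoped form
    assert isclose(P_N, closed, rel_tol=1e-9)

# The value of (1.5), i.e., the limit of P_N, is 2
assert abs(partial_product(factor_15, 1, 10**5) - 2.0) < 1e-3
```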
For another example of the necessity of the condition in (1.3), consider the following infinite product:

$$\prod_{k=2}^{\infty} \frac{k^2-1}{k^2}. \tag{1.6}$$

To check for convergence, consider its $N$th partial product, which reads in this case

$$P_N = \prod_{k=2}^{N} \frac{k^2-1}{k^2} = \prod_{k=2}^{N} \frac{(k-1)(k+1)}{k^2} = \frac{1 \cdot 3}{2^2} \cdot \frac{2 \cdot 4}{3^2} \cdot \frac{3 \cdot 5}{4^2} \cdots \frac{(N-2)N}{(N-1)^2} \cdot \frac{(N-1)(N+1)}{N^2} = \frac{N+1}{2N},$$

providing

$$\lim_{N \to \infty} P_N = \lim_{N \to \infty} \frac{N+1}{2N} = \frac{1}{2}.$$

Hence, the infinite product in (1.6) is indeed convergent. It is also clear that the condition in (1.3) is met. That is,

$$\lim_{k \to \infty} \frac{k^2-1}{k^2} = 1.$$
To explore the convergence of the next infinite product,

$$\prod_{k=2}^{\infty} \frac{k+(-1)^k}{k}, \tag{1.7}$$

its odd-index partial product $P_{2N-1}$ and even-index partial product $P_{2N}$ should be analyzed separately. The point is that these partial products look formally different. We can show, however, that the sequence $P_3, P_5, P_7, \ldots, P_{2N-1}$ of the odd-index partial products, as well as the sequence $P_2, P_4, P_6, P_8, \ldots, P_{2N}$ of the even-index partial products of (1.7), converge to the same value, implying that the infinite product in (1.7) is convergent. To verify this conjecture, take a look first at the odd-index partial product $P_{2N-1}$:

$$P_{2N-1} = \prod_{k=2}^{2N-1} \frac{k+(-1)^k}{k} = \frac{3}{2} \cdot \frac{2}{3} \cdot \frac{5}{4} \cdot \frac{4}{5} \cdots \frac{2N-1}{2N-2} \cdot \frac{2N-2}{2N-1},$$

which implies that $P_3, P_5, P_7, \ldots, P_{2N-1}$ represents just the sequence of 1's. Hence, 1 is its limit. The even-index partial product $P_{2N}$ of (1.7) is, in turn,

$$P_{2N} = \prod_{k=2}^{2N} \frac{k+(-1)^k}{k} = \frac{3}{2} \cdot \frac{2}{3} \cdot \frac{5}{4} \cdot \frac{4}{5} \cdot \frac{7}{6} \cdots \frac{2N-1}{2N-2} \cdot \frac{2N-2}{2N-1} \cdot \frac{2N+1}{2N} = \frac{2N+1}{2N},$$

which clearly converges to 1, implying that the infinite product in (1.7) is indeed convergent, with the value 1. The necessary condition in (1.3) is met as well, since the limit of the general term equals unity:

$$\lim_{k \to \infty} \frac{k+(-1)^k}{k} = 1.$$
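Both examples can be double-checked in a few lines of Python (a sketch; the helper names are ours): the partial products of (1.6) approach 1/2, while the even- and odd-index partial products of (1.7) squeeze toward 1.

```python
def partial(factor, k_start, N):
    """Partial product of factor(k) over k = k_start..N."""
    P = 1.0
    for k in range(k_start, N + 1):
        P *= factor(k)
    return P

# (1.6): the partial products (N + 1)/(2N) approach 1/2
f16 = lambda k: (k * k - 1) / (k * k)
assert abs(partial(f16, 2, 10**5) - 0.5) < 1e-4

# (1.7): odd-index partial products are exactly 1; even-index ones are (2N + 1)/(2N)
f17 = lambda k: (k + (-1) ** k) / k
assert abs(partial(f17, 2, 999) - 1.0) < 1e-9            # P_999
assert abs(partial(f17, 2, 1000) - 1001 / 1000) < 1e-9   # P_1000
```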
Thus, the analysis just completed for the infinite products in (1.5), (1.6), and (1.7) simply illustrates the necessity (which has actually been proven earlier) of the condition in (1.3) for the convergence of an infinite product. To show that the condition in (1.3) is not sufficient for convergence, we may offer a single counterexample. Let us take the product

$$\prod_{k=1}^{\infty} \frac{k+1}{k}, \tag{1.8}$$

the general term of which approaches unity as $k$ goes to infinity, yet the product is divergent. The divergence can be proven by showing that the $N$th partial product,

$$P_N = \frac{2}{1} \cdot \frac{3}{2} \cdot \frac{4}{3} \cdots \frac{N}{N-1} \cdot \frac{N+1}{N} = N+1,$$

is unbounded as $N$ goes to infinity. This provides convincing evidence of the divergence of the product in (1.8). The example just considered allows us to declare that the condition in (1.3), being necessary for the convergence of infinite products, is not, however, sufficient. In other words, if the condition in (1.3) is not met, then the product in (1.1) diverges. If, however, the condition in (1.3) is met, then the product might either converge or diverge.

It is interesting to recall that the situation with infinite products just discussed resembles that for infinite series. That is, if an infinite series $\sum_{k=1}^{\infty} b_k$ converges, then the limit of its general term $b_k$ must be zero. The converse assertion, that a series necessarily converges if the limit of its general term is zero, is, however, untrue. This point is traditionally illustrated in calculus with the remarkable example of the harmonic series $\sum_{k=1}^{\infty} 1/k$, which diverges despite the fact that its general term $1/k$ approaches zero as $k$ goes to infinity.

Let us revisit the infinite product in (1.1) and assume that all its terms are nonzero, which means that if its general term is rewritten as $a_k = 1 + \beta_k$, then $\beta_k \neq -1$ for every $k$. Now rewrite (1.1) as

$$\prod_{k=1}^{\infty} (1+\beta_k). \tag{1.9}$$

From the necessary condition for convergence of the product in (1.1), it follows that if the product in (1.9) converges, then the condition

$$\lim_{k \to \infty} \beta_k = 0 \tag{1.10}$$

must be satisfied.
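The distinction between the necessary condition and actual convergence can be probed numerically on the counterexample (1.8): the general term tends to unity, yet the partial products $P_N = N + 1$ grow without bound. A short sketch (the helper name is ours):

```python
def partial(factor, k_start, N):
    """Partial product of factor(k) over k = k_start..N."""
    P = 1.0
    for k in range(k_start, N + 1):
        P *= factor(k)
    return P

f18 = lambda k: (k + 1) / k   # general term of (1.8); tends to unity

# The necessary condition (1.3) is met...
assert abs(f18(10**6) - 1.0) < 1e-5

# ...yet the partial products telescope to N + 1 and grow without bound
for N in (10, 100, 1000):
    assert abs(partial(f18, 1, N) - (N + 1)) < 1e-6
```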
Taking logarithms in (1.9), one obtains the series

$$\sum_{k=1}^{\infty} \log(1+\beta_k). \tag{1.11}$$

Since the logarithm is a multiple-valued function, a single branch of the logarithmic function in (1.11) (say, the principal one) can be chosen. Let the number $S_N$ represent a partial sum of (1.11), and assume that the series converges. This implies that a finite limit $S$ of $S_N$ exists as $N$ goes to infinity. From (1.11), it follows in this case that the partial product $P_N$ of (1.9) is expressed in terms of $S_N$ as $P_N = e^{S_N}$. By the continuity property, we conclude that the value $P$ of the product in (1.9) is expressed in terms of the sum $S$ of the series in (1.11) as $P = e^S$, which cannot, of course, be zero.

As with infinite series, the notion of absolute convergence can also be introduced for the infinite product in (1.9). Namely, we say that the product in (1.9) converges absolutely if the product

$$\prod_{k=1}^{\infty} (1+|\beta_k|)$$

converges. A necessary and sufficient condition for absolute convergence of the above product is that the series

$$\sum_{k=1}^{\infty} \beta_k$$

be absolutely convergent. Clearly, this assertion is equivalent to another one, that the series

$$\sum_{k=1}^{\infty} |\log(1+\beta_k)| \tag{1.12}$$
is convergent if and only if the series

$$\sum_{k=1}^{\infty} |\beta_k| \tag{1.13}$$

is convergent. Proof of the latter claim can immediately be accomplished by the limit comparison test. Indeed, since convergence of either the series in (1.12) or the series in (1.13) implies (1.10), we take the limit

$$\lim_{\beta_k \to 0} \frac{\log(1+\beta_k)}{\beta_k}$$

and expand the logarithm in a Taylor series. This yields

$$\lim_{\beta_k \to 0} \frac{\log(1+\beta_k)}{\beta_k} = \lim_{\beta_k \to 0} \frac{1}{\beta_k}\left(\beta_k - \frac{1}{2}\beta_k^2 + \frac{1}{3}\beta_k^3 - \cdots\right) = \lim_{\beta_k \to 0}\left(1 - \frac{1}{2}\beta_k + \frac{1}{3}\beta_k^2 - \cdots\right) = 1,$$

which justifies our assertion.

Another terminological issue is also important for the material in this book. That is, we say that the product in (1.9) converges conditionally if it converges, whereas the product

$$\prod_{k=1}^{\infty} |1+\beta_k|$$
diverges. As with infinite series, the commutativity property [5] holds for absolutely convergent infinite products but does not do so for conditionally convergent ones. This means that the order of factors in an absolutely convergent infinite product can be arbitrarily rearranged without affecting the product's value. If, however, an infinite product converges conditionally, then a rearrangement may affect the product's convergence in the sense that it might change its value. To justify the latter assertion, we will present an example of a conditionally convergent infinite product and show that by rearranging the order of its factors we can obtain for it an arbitrarily preassigned value. In doing so, recall the infinite product in (1.7) and rewrite it as

$$\prod_{k=2}^{\infty} \left(1+\frac{(-1)^k}{k}\right). \tag{1.14}$$
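The exponential connection $P = e^S$ between a product and the series of logarithms of its factors can be checked directly. In this sketch we take $\beta_k = 1/k^2$ (our illustrative choice, for which both the product and the series converge absolutely); for this choice the value of the product is classically known to be $\sinh\pi/\pi$, from Euler's product for the hyperbolic sine:

```python
from math import exp, log, pi, sinh

beta = lambda k: 1.0 / (k * k)   # absolutely summable choice of beta_k

N = 2000
P_N, S_N = 1.0, 0.0
for k in range(1, N + 1):
    P_N *= 1.0 + beta(k)
    S_N += log(1.0 + beta(k))

# The partial product is the exponential of the partial sum of logarithms
assert abs(P_N - exp(S_N)) < 1e-9

# Closed-form value: prod(1 + 1/k^2) = sinh(pi)/pi
assert abs(P_N - sinh(pi) / pi) < 1e-2
```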
As we have recently proved, this product converges to the value of unity. The convergence is, however, conditional, because the product in (1.8),

$$\prod_{k=2}^{\infty} \left(1+\frac{1}{k}\right),$$

as we also recently figured out, is divergent. To illustrate the fact that the order of factors in (1.14) matters, or to show, in other words, that rearranging the order of its factors might affect its value, observe that the factors with a plus sign in (1.14) alternate with those having a minus sign. Indeed,

$$\prod_{k=2}^{\infty} \left(1+\frac{(-1)^k}{k}\right) = \left(1+\frac{1}{2}\right)\left(1-\frac{1}{3}\right)\left(1+\frac{1}{4}\right)\left(1-\frac{1}{5}\right)\cdots.$$
Let $M$ and $N$ be two positive integers, and rearrange the order of factors in (1.14) in such a way that segments $T_M$ of $M$ factors representing sums alternate with segments $T_N$ of $N$ factors representing differences. The first of the segments $T_M$ appears as

$$T_M = \left(1+\frac{1}{2}\right)\left(1+\frac{1}{4}\right)\left(1+\frac{1}{6}\right)\cdots\left(1+\frac{1}{2M}\right),$$

while the first of the segments $T_N$ reads

$$T_N = \left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1-\frac{1}{7}\right)\cdots\left(1-\frac{1}{2N+1}\right).$$

Rewrite the segments $T_M$ and $T_N$ in a compact form. That is,

$$T_M = \frac{3}{2}\cdot\frac{5}{4}\cdot\frac{7}{6}\cdots\frac{2M+1}{2M} = \frac{(2M+1)!!}{(2M)!!}$$

and

$$T_N = \frac{2}{3}\cdot\frac{4}{5}\cdot\frac{6}{7}\cdots\frac{2N}{2N+1} = \frac{(2N)!!}{(2N+1)!!}.$$

This makes the $(M+N)k$th partial product $P_{(M+N)k}$ of the rearranged infinite product equal to

$$P_{(M+N)k} = \frac{(2Mk+1)!!\,(2Nk)!!}{(2Mk)!!\,(2Nk+1)!!}. \tag{1.15}$$
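The partial products in (1.15) are easy to evaluate numerically; the sketch below (helper names are ours) builds them incrementally from the double-factorial ratios and, for $M = 4$, $N = 1$, shows the rearranged product settling near 2 rather than 1. It also checks the equivalent form of Wallis's formula used in the limit analysis that this section carries out:

```python
import math

def rearranged_partial(M, N, k):
    """(M+N)k-th partial product (1.15): (2Mk+1)!!(2Nk)!! / ((2Mk)!!(2Nk+1)!!)."""
    P = 1.0
    for j in range(1, M * k + 1):   # (2Mk+1)!!/(2Mk)!! = prod of (2j+1)/(2j)
        P *= (2 * j + 1) / (2 * j)
    for j in range(1, N * k + 1):   # (2Nk)!!/(2Nk+1)!! = prod of (2j)/(2j+1)
        P *= (2 * j) / (2 * j + 1)
    return P

# With M = N = 1 the original ordering is recovered, and the value is 1
assert abs(rearranged_partial(1, 1, 50) - 1.0) < 1e-9

# With M = 4, N = 1 the rearranged product approaches 2, not 1
assert abs(rearranged_partial(4, 1, 200) - 2.0) < 0.01

# Wallis in the equivalent form: (2k+1)!! / ((2k)!! * sqrt(k)) -> 2/sqrt(pi)
k = 5000
w = 1.0
for j in range(1, k + 1):
    w *= (2 * j + 1) / (2 * j)
w /= math.sqrt(k)
assert abs(w - 2 / math.sqrt(math.pi)) < 1e-3
```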
To compute the value of the rearranged infinite product, one is required to take the limit of its partial products in (1.15) as $k$ approaches infinity. Before going any further with the limit process, we recall the classical [5] Wallis formula

$$\lim_{k \to \infty} \frac{(2k)!!}{(2k-1)!!\sqrt{k}} = \sqrt{\pi}$$

and convert it to a form that is more convenient for the development that follows. In doing so, rewrite the above as

$$\lim_{k \to \infty} \frac{(2k-1)!!\sqrt{k}}{(2k)!!} = \frac{1}{\sqrt{\pi}}.$$

Clearly, upon multiplying the numerator and denominator in the above by $(2k+1)\sqrt{k}$, we transform it into

$$\lim_{k \to \infty} \frac{(2k+1)!!\cdot k}{(2k)!!\cdot\sqrt{k}\cdot(2k+1)} = \frac{1}{\sqrt{\pi}}.$$

The limit on the left-hand side can be decomposed into the product of two limits:

$$\lim_{k \to \infty} \frac{(2k+1)!!}{(2k)!!\sqrt{k}} \cdot \lim_{k \to \infty} \frac{k}{2k+1} = \frac{1}{\sqrt{\pi}}.$$

Since the second of the two limits is $1/2$, the above relation reads

$$\lim_{k \to \infty} \frac{(2k+1)!!}{(2k)!!\sqrt{k}} = \frac{2}{\sqrt{\pi}},$$
which can be considered an equivalent version of Wallis's formula. We recall now the rearranged infinite product in (1.14) and compute its value $V$ by taking the limit of its $(M+N)k$th partial product $P_{(M+N)k}$ in (1.15) as $k$ approaches infinity. This yields

$$V = \lim_{k \to \infty} \frac{(2Mk+1)!!\,(2Nk)!!}{(2Mk)!!\,(2Nk+1)!!} = \sqrt{\frac{M}{N}}\,\lim_{k \to \infty} \frac{(2Mk+1)!!}{(2Mk)!!\sqrt{Mk}} \cdot \lim_{k \to \infty} \frac{(2Nk)!!\sqrt{Nk}}{(2Nk+1)!!},$$

which, in light of the second version of Wallis's formula, reads

$$V = \sqrt{\frac{M}{N}} \cdot \frac{2}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}}{2} = \sqrt{\frac{M}{N}}.$$

Hence, the infinite product in (1.14), rearranged in the way just described, might either increase or decrease in value depending upon the integers $M$ and $N$. Indeed, if the rearrangement is made, say, such that every four factors representing sums ($M = 4$) are followed by a single factor representing a difference ($N = 1$), then the value of the resultant infinite product is twice the value of the original infinite product in (1.14).

Completing our brief review of the fundamentals of infinite products, we turn to functional products and let $\beta_k(z)$ be a function defined on a set $S$ for each positive integer $k$. Then we say that the infinite product
$$\prod_{k=1}^{\infty} (1+\beta_k(z)) \tag{1.16}$$

converges uniformly in $S$ if the condition $1+\beta_k(z) \neq 0$ holds for all $k$, and the sequence of partial products

$$P_N(z) = \prod_{k=1}^{N} (1+\beta_k(z))$$

of (1.16) converges uniformly in $S$ to a function $P(z)$ that never vanishes in $S$. Clearly, the infinite product in (1.16) converges uniformly if the series

$$\sum_{k=1}^{\infty} \beta_k(z)$$

is uniformly convergent on $S$.
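A classical functional product of exactly the type (1.16), and one central to this book's theme, is Euler's representation $\sin(\pi z)/(\pi z) = \prod_{k=1}^{\infty}(1 - z^2/k^2)$, with $\beta_k(z) = -z^2/k^2$; on any bounded set the series $\sum |\beta_k(z)|$ converges uniformly. A quick numerical sketch (the function name and truncation level are our choices):

```python
import math

def sine_product(z, terms=20000):
    """Partial product of Euler's sin(pi z)/(pi z) = prod(1 - z^2/k^2)."""
    P = 1.0
    for k in range(1, terms + 1):
        P *= 1.0 - (z * z) / (k * k)
    return P

# Compare the truncated product with the elementary function it represents
for z in (0.25, 0.5, 1.5):
    exact = math.sin(math.pi * z) / (math.pi * z)
    assert abs(sine_product(z) - exact) < 1e-3
```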
Let $\beta_k(z)$ represent continuous functions in $S$ for each $k$. It can be shown that if the infinite product in (1.16) converges uniformly in $S$, then the product value $P(z)$ is continuous in $S$. Keep in mind that the key theme in the present volume is the representation of elementary functions in terms of infinite products. We believe that with the review just completed, the reader is prepared to cope with the rest of the material in this book where infinite products emerge.

Our introductory review takes a turn, at this point, to the second of the two concepts, which, along with the concept of infinite product, represents the keystone in this volume. That is the concept of Green's function. In order to introduce the Green's function notion for the two-dimensional Laplace equation, we consider, in two-dimensional Euclidean space, a simply connected region $\Omega$ bounded by a piecewise smooth contour $L$, and formulate a boundary-value problem in which the Poisson equation

$$\nabla^2 u(P) = -f(P), \quad P \in \Omega, \tag{1.17}$$

is subject to the homogeneous boundary condition

$$B[u(P)] = 0, \quad P \in L, \tag{1.18}$$
where $\nabla^2$ represents the Laplace operator, the right-hand side $f(P)$ is a function continuous in $\Omega$, and $B$ is an operator of boundary conditions. Assume that the problem in (1.17) and (1.18) is well posed, implying that it has a unique solution. This means that the corresponding homogeneous problem, where the boundary condition in (1.18) is imposed on $L$ for the Laplace equation

$$\nabla^2 u(P) = 0, \quad P \in \Omega, \tag{1.19}$$

has only the trivial solution $u(P) \equiv 0$. If so, then the solution $u(P)$ of the problem in (1.17) and (1.18) can be expressed [8, 15, 18] in the integral form

$$u(P) = \iint_{\Omega} G(P,Q)\,f(Q)\,d\Omega(Q), \tag{1.20}$$

with the kernel $G(P,Q)$ being called the Green's function of the homogeneous boundary-value problem in (1.19) and (1.18). The relation in (1.20) reveals a special feature of the Green's function. Indeed, once the latter is available, the solution of the problem in (1.17) and (1.18) is a matter of computing the integral in (1.20), which can be considered a direct consequence of the second Green's formula [8].

We use some standard terminology in our book for the arguments $P$ and $Q$ of the Green's function. They are commonly referred to as the observation point for $P$ (another customarily used term is the field point) and the source point for $Q$. For any location of the source point $Q \in \Omega$, the Green's function $G(P,Q)$, as a function of the coordinates of the field point $P$, possesses the following defining properties:
1. At any location of $P \in \Omega$, except at $P = Q$, $G(P,Q)$ is a harmonic function of $P$; that is,

$$\nabla_P^2 G(P,Q) = 0, \quad P \neq Q.$$

2. For $P = Q$, $G(P,Q)$ possesses a logarithmic singularity of the type

$$\frac{1}{2\pi} \ln \frac{1}{|P-Q|}.$$

3. $G(P,Q)$ satisfies the boundary condition in (1.18); that is,

$$B_P[G(P,Q)] = 0, \quad P \in L.$$

In compliance with the defining properties, the Green's function $G(P,Q)$ of the problem in (1.19) and (1.18) can be expressed as

$$G(P,Q) = \frac{1}{2\pi} \ln \frac{1}{|P-Q|} + R(P,Q),$$

with the second additive component $R(P,Q)$ referred to as the regular part of the Green's function. $R(P,Q)$ represents a function of the coordinates of $P$ that is harmonic everywhere in $\Omega$. That is,

$$\nabla_P^2 R(P,Q) = 0, \quad P \in \Omega \ \text{and} \ Q \in \Omega.$$
1 (x − ξ )2 + (y + η)2 ln 4π (x − ξ )2 + (y − η)2
(1.21)
for the Dirichlet problem posed in the half-plane {−∞ < x < ∞, 0 < y < ∞}. It is evident that the denominator component in (1.21) constitutes the singular part of G(x, y; ξ, η), whereas the component R(x, y; ξ, η) =
1 ln (x − ξ )2 + (y + η)2 4π
represents its regular part. Representations of the type in (1.21) are compact and convenient to work with in various applications. It is worth noting, however, that there unfortunately exist only a few such closed analytical forms of Green’s functions available in the literature.
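The closed form in (1.21) is easy to verify numerically. The following sketch (Python; the source location, observation point, and step size are illustrative choices) checks that G is harmonic away from the source and vanishes on the boundary y = 0:

```python
import math

def G(x, y, xi, eta):
    """Half-plane Dirichlet Green's function of (1.21)."""
    return math.log(((x - xi)**2 + (y + eta)**2) /
                    ((x - xi)**2 + (y - eta)**2)) / (4 * math.pi)

xi, eta = 0.3, 0.7          # source point (illustrative)
x, y, h = 1.1, 0.5, 1e-3    # observation point and mesh step

# Five-point finite-difference Laplacian in (x, y): should be ~0 away from the source.
laplacian = (G(x + h, y, xi, eta) + G(x - h, y, xi, eta) +
             G(x, y + h, xi, eta) + G(x, y - h, xi, eta) -
             4 * G(x, y, xi, eta)) / h**2

boundary_value = G(2.0, 0.0, xi, eta)   # Dirichlet condition on y = 0
print(laplacian, boundary_value)
```

Since (y + η)² = (y − η)² on the boundary, the logarithm's argument is exactly 1 there, and the boundary value vanishes identically.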
Some other Green’s functions for the Laplace equation that are available in literature are expressed in a form containing elementary functions and trigonometric series, such as ∞ (r)n 1 1 G(r, ϕ; , ψ) = − 2β cos n(ϕ − ψ) 2π β n(n + β) n=1
1 2 ln r − 2r cos(ϕ − ψ) + 2 4π
1 ln 1 − 2r cos(ϕ − ψ) + r 2 2 , − 4π −
obtained in [16] for the mixed boundary-value problem ∂ + β G(1, ϕ; , ψ) = 0, β > 0, ∂r
(1.22)
(1.23)
posed on the unit disk {0 < r < 1, 0 < ϕ ≤ 2π}. Clearly, the regular part of the Green’s function presented in (1.22) can be written as
1 ln 1 − 2r cos(ϕ − ψ) + r 2 2 4π ∞ 1 1 1 n − 2β (r) cos n(ϕ − ψ) . + 2π β n(n + β)
R(r, ϕ; , ψ) = −
n=1
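Forms of the type in (1.22) are straightforward to program. A brief sketch (Python; the value of β, the source location, and the truncation level are illustrative, and the source coordinate ϱ is written as rho) evaluates the truncated series and checks the boundary condition (1.23) by a central difference in r:

```python
import math

def G(r, phi, rho, psi, beta, terms=60):
    """Truncated evaluation of the mixed-problem Green's function (1.22)."""
    th = phi - psi
    s = sum((r * rho)**n * math.cos(n * th) / (n * (n + beta))
            for n in range(1, terms + 1))
    return ((1 / beta - 2 * beta * s) / (2 * math.pi)
            - math.log(r**2 - 2 * r * rho * math.cos(th) + rho**2) / (4 * math.pi)
            - math.log(1 - 2 * r * rho * math.cos(th) + (r * rho)**2) / (4 * math.pi))

beta, rho, psi, phi = 2.0, 0.4, 0.9, 2.1   # illustrative values
h = 1e-5

# Residual of the mixed condition (d/dr + beta)G = 0 at r = 1.
dGdr = (G(1 + h, phi, rho, psi, beta) - G(1 - h, phi, rho, psi, beta)) / (2 * h)
residual = dGdr + beta * G(1.0, phi, rho, psi, beta)
print(residual)
```

Because rϱ < 1, the series converges geometrically, so a modest truncation already drives the residual near machine precision.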
Representations of the type in (1.22) are also quite convenient for practical implementations, because their singular components are explicitly expressed, while the series in their regular parts are uniformly convergent. This makes forms of the type in (1.22) computer-friendly and allows efficient computing by a truncation of the series. A number of Green's functions obtained in such a mixed form can be found in [16]. In most other cases, however, Green's functions of boundary-value problems for the Laplace equation can be expressed neither in an elementary-function form nor in a mixed form of the type in (1.22). For example, we have the classical [18] Fourier double-series form

$$G(x,y;\xi,\eta) = \frac{4}{ab}\sum_{m,n=1}^{\infty}\frac{\sin\mu x \,\sin\nu y \,\sin\mu\xi \,\sin\nu\eta}{\mu^2 + \nu^2} \qquad (1.24)$$

of the Green's function for the Dirichlet problem stated on the rectangle {0 < x < a, 0 < y < b}, where the parameters μ and ν are expressed in terms of the summation indices m and n and the rectangle's dimensions a and b as

$$\mu = \frac{m\pi}{a} \quad \text{and} \quad \nu = \frac{n\pi}{b}.$$
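A direct implementation of (1.24) is immediate, although, as the following sketch suggests (Python; the evaluation points and truncation levels are illustrative), the truncated double sum settles only slowly at interior points:

```python
import math

def G_rect(x, y, xi, eta, a=1.0, b=1.0, N=40):
    """Truncated Fourier double series (1.24) on the rectangle."""
    total = 0.0
    for m in range(1, N + 1):
        mu = m * math.pi / a
        for n in range(1, N + 1):
            nu = n * math.pi / b
            total += (math.sin(mu * x) * math.sin(nu * y) *
                      math.sin(mu * xi) * math.sin(nu * eta)) / (mu**2 + nu**2)
    return 4 * total / (a * b)

# Successive truncations at an interior point, away from the source.
vals = [G_rect(0.25, 0.25, 0.5, 0.5, N=N) for N in (10, 20, 40)]
side = G_rect(0.0, 0.3, 0.5, 0.5)   # Dirichlet condition on the side x = 0
print(vals, side)
```

The boundary condition holds exactly term by term (each term carries a factor sin 0 = 0), whereas the interior values keep drifting as N grows.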
The computability of series representations of the type in (1.24) is limited by their nonuniform convergence. The latter is unavoidable, because a Green's function for the Laplace equation is, by definition, not regular and therefore does not meet the convergence requirements for Fourier series [5]. Hence, a certain regularization is required in order to convert the Green's function shown in (1.24) to a form appropriate for immediate computer implementation. Some recommendations for such a conversion can be found in [16, 19, 24]. It is worth noting that multiple forms of Green's functions are available for many boundary-value problems for the Laplace equation. To illustrate this point, recall the mixed problem in (1.23) for the unit disk and its Green's function shown in (1.22). An alternative to the representation in (1.22) of this Green's function,
$$G(r,\varphi;\varrho,\psi) = \frac{1}{2\pi}\left[\frac{1}{2}\ln\frac{1 - 2r\varrho\cos(\varphi-\psi) + (r\varrho)^2}{r^2 - 2r\varrho\cos(\varphi-\psi) + \varrho^2} - \frac{1}{\beta}\right]$$
$$\qquad + \frac{1}{\pi}\,\mathrm{Re}\left[\left(r\varrho\, e^{i(\varphi-\psi)}\right)^{-\beta}\int_0^{r\varrho\, e^{i(\varphi-\psi)}}\frac{\zeta^{\beta-1}}{1-\zeta}\,d\zeta\right], \qquad (1.25)$$
is available in [5]. The standard abbreviation Re denotes the real part of a function of a complex variable. Note that the representation in (1.25) and the one in (1.22) are mathematically equivalent; computationally, however, the two forms are not. Indeed, the one in (1.25) is less suitable for computer implementation than that of (1.22), because the regular part in (1.25) requires greater computational effort to evaluate. The multiplicity of forms in which Green's functions can potentially be represented is instrumental in the present book. It represents the driving force of the investigation reported in Chap. 6. Taking advantage of this multiplicity, we will later derive some interesting infinite product representations of trigonometric and hyperbolic functions by comparing alternative forms of Green's functions for the Laplace equation. As to the material of the present volume, it is organized in five chapters. The focus in Chap. 2 is on the classical [1] Euler infinite product representations of the trigonometric and hyperbolic sine and cosine functions. We explain how Euler derived them and also review some other known derivation procedures developed later. In addition, the reader is introduced to the derivation of some other infinite product representations of elementary functions that are available in mathematical handbooks [6] or [9]. In Chaps. 3 and 5, we turn to Green's functions of boundary-value problems for the two-dimensional Laplace equation. Our treatment is specific in that we do not concentrate on theoretical aspects but mostly analyze a variety of methods that are traditionally used for the construction of Green's functions. In doing so, special attention is paid to the methods of images, conformal mapping, and eigenfunction expansion. By extending the frontiers of the method of images, we obtain alternative infinite-product-containing expressions for some classical Green's functions.
This provides background for the key developments in the present work.
Chapter 4 makes a sharp turn, departing from the field of partial differential equations and focusing instead on ordinary differential equations. It might seem that this material lies outside the book's focus, because it is not directly related to Green's functions for the Laplace equation; the purpose of Chap. 4, however, is to prepare the reader for a better comprehension of our work in Chap. 5, where we return to the Laplace equation. A vast number of Green's functions are obtained for this equation by means of the method of eigenfunction expansion. It is worth noting that this method is applicable, and appears efficient, not only for the Laplace equation but also for many other applied partial differential equations. Chapter 6 is central to the book. An innovative approach [28] is presented and developed for expressing elementary functions in terms of infinite products. Our work on Green's functions, discussed in detail earlier in the book, is instrumental for Chap. 6. A number of infinite product expansions of elementary functions are obtained. Some of those are simply alternatives to forms already available in the classical literature; some others, however, are derived for functions whose infinite product representations are unavailable in existing handbooks. To enhance the usefulness of this volume as a textbook, many illustrative examples are offered in most of its sections, to assist the instructor in class preparation and to give the student more effective material for study. Every chapter begins with a review guide outlining the basic concepts covered in the chapter. To reflect the widespread idea that a text is only as good as its problems, a set of carefully designed challenging exercises is offered at the end of each chapter. The exercises provide opportunities for the reader to explore the concepts of the chapter in more detail. Hints, comments, and answers to most of those exercises are available in the book.
The author hopes that the discussion initiated in this brief volume will motivate the reader to learn more about our approach to the representation of elementary functions in terms of infinite products. He believes that the book might arouse the reader's curiosity and awaken a desire to better understand the nature of the intersection of the subjects of Green's functions and infinite products. This could promote further progress in this challenging field that bridges the divide between the two subjects.
Chapter 2
Infinite Products and Elementary Functions
The objective in this chapter is to lay out a working background for dealing with infinite products and their possible applications. The reader will be familiarized with a specific topic that is not often included in traditional courses of mathematical analysis, namely the infinite product representation of elementary functions. It is known [9] that the theory of some special functions is, to a certain extent, linked to infinite products. In this regard, one might recall, for example, the elliptic integrals, the gamma function, Riemann's zeta function, and others. But note that special functions are not targeted in this book at all. Our scope is limited exclusively to the use of infinite products for the representation of elementary functions. We will recall and discuss those infinite product representations of elementary functions that are available in the current literature. Note that they have been derived by different methods, but their number is limited. In Sect. 2.1, Euler's classical derivation procedure will be analyzed. His elegant elaborations in this field were directed toward the derivation of infinite product representations for the trigonometric as well as the hyperbolic sine and cosine functions. The work of Euler on infinite products was inspirational [26] for many generations of mathematicians. It will be frequently referred to in this brief volume as well. Some alternative derivation techniques proposed for infinite product representations of trigonometric functions will be reviewed in detail in Sect. 2.2. The closing Sect. 2.3 brings to the reader's attention a variety of possible techniques for the derivation of infinite product forms of other elementary functions. We will instruct the reader on how to obtain the infinite product representations of elementary functions that are available in standard texts and handbooks.
2.1 Euler’s Classical Representations Both infinite series and infinite products could potentially be helpful in the area of approximation of functions. Infinite series represent a traditional instrument in contemporary mathematics. One of its classical implementations is the representation of Y.A. Melnikov, Green’s Functions and Infinite Products, DOI 10.1007/978-0-8176-8280-4_2, © Springer Science+Business Media, LLC 2011
17
18
2
Infinite Products and Elementary Functions
functions, which is applicable to different areas of mathematical analysis. Approximation of functions and numerical differentiation and integration can be pointed out as some, but not the only, such areas. Although infinite products have also been known and developed for centuries [26], and can potentially be used in solving a variety of mathematical problems, the range of their known implementations is not as broad as that of infinite series. The focus in the present volume is on just one of many possible implementations of infinite products, namely the representation of elementary functions. Pioneering results in this field were obtained over two hundred fifty years ago. They are associated with the name of one of the most prominent mathematicians of all time, Leonhard Euler. According to historians [26], his mind had been preoccupied with this topic for quite a long span of time. And it took him nearly ten years to ultimately derive the following now classical representation for the trigonometric sine function: ∞ x2 1− 2 2 . (2.1) sin x = x k π k=1
We will analyze in this section the derivation procedure proposed by Euler and also review, in further sections, some other procedures proposed later for the derivation of the representation in (2.1). Euler also showed that his procedure appears effective for the trigonometric cosine function and derived the following infinite product representation:

$$\cos x = \prod_{k=1}^{\infty}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\right). \qquad (2.2)$$
It is evident from the classical relations

$$\sin iz = i\sinh z \quad \text{and} \quad \cos iz = \cosh z$$

between the trigonometric and hyperbolic functions, which represent the analytic continuation of the trigonometric functions into the complex plane, that the infinite product representations

$$\sinh x = x\prod_{k=1}^{\infty}\left(1 + \frac{x^2}{k^2\pi^2}\right) \qquad (2.3)$$

and

$$\cosh x = \prod_{k=1}^{\infty}\left(1 + \frac{4x^2}{(2k-1)^2\pi^2}\right) \qquad (2.4)$$
for the hyperbolic sine and cosine functions directly follow from (2.1) and (2.2), respectively. As we will show later, Euler’s direct approach can be successfully applied to the derivation of the representations in (2.3) and (2.4).
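The four representations (2.1)–(2.4) are easy to test numerically. A short sketch (Python; the argument and the truncation level are illustrative) compares partial products against the library functions; switching the sign of x² in the factors switches between the trigonometric and hyperbolic cases:

```python
import math

def euler_product(x, terms, sign=-1, odd=True):
    """Partial products of (2.1)-(2.4): sign=-1 gives the trigonometric case,
    sign=+1 the hyperbolic one; odd=True is the sine-type product,
    odd=False the cosine-type product."""
    if odd:
        p = x
        for k in range(1, terms + 1):
            p *= 1 + sign * x**2 / (k * math.pi)**2
    else:
        p = 1.0
        for k in range(1, terms + 1):
            p *= 1 + sign * 4 * x**2 / ((2 * k - 1) * math.pi)**2
    return p

x, N = 1.3, 20000
print(euler_product(x, N) - math.sin(x),
      euler_product(x, N, odd=False) - math.cos(x),
      euler_product(x, N, sign=1) - math.sinh(x),
      euler_product(x, N, sign=1, odd=False) - math.cosh(x))
```

The residuals shrink only like 1/N, which already hints at the convergence behavior examined later in this section.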
2.1 Euler’s Classical Representations
19
To let the reader enjoy the elegance of the approach, we will consider first the case of the representation in (2.1) and follow it in some detail. In doing so, we write down the trigonometric sine function, using Euler's formula, in the exponential form

$$\sin x = \frac{e^{ix} - e^{-ix}}{2i},$$

and replace the exponential functions with their limit expressions, reducing the above to

$$\sin x = \frac{1}{2i}\lim_{n\to\infty}\left[\left(1 + \frac{ix}{n}\right)^n - \left(1 - \frac{ix}{n}\right)^n\right] = -\frac{i}{2}\lim_{n\to\infty}\left[\left(1 + \frac{ix}{n}\right)^n - \left(1 - \frac{ix}{n}\right)^n\right]. \qquad (2.5)$$

We then apply Newton's binomial formula to both polynomials in the brackets. This yields

$$\left(1 + \frac{ix}{n}\right)^n = 1 + n\,\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^2 + \cdots = \sum_{k=0}^{n}\binom{n}{k}\frac{(ix)^k}{n^k} \qquad (2.6)$$

and

$$\left(1 - \frac{ix}{n}\right)^n = 1 - n\,\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^2 - \cdots = \sum_{k=0}^{n}(-1)^k\binom{n}{k}\frac{(ix)^k}{n^k}. \qquad (2.7)$$

Once these expressions are substituted into (2.5), all the real terms in the brackets (the terms in even powers of x) cancel out. As soon as the common factor of 2ix is factored out of the remaining odd-power terms, the right-hand side of (2.5) reduces to a compact form, and we have

$$\sin x = x\lim_{n\to\infty}\sum_{k=0}^{(n-1)/2}(-1)^k\binom{n}{2k+1}\frac{x^{2k}}{n^{2k+1}}. \qquad (2.8)$$
Of all the stages in Euler's procedure, which as a whole represents a real work of art, the next stage is perhaps the most critical and decisive. Factoring the polynomial in (2.8) into the trigonometric form

$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{(1+\cos 2k\pi/n)}{(1-\cos 2k\pi/n)}\,\frac{x^2}{n^2}\right),$$

after trivial trigonometric transformations, we obtain

$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2\cos^2 k\pi/n}{n^2\sin^2 k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2}{n^2\tan^2 k\pi/n}\right).$$
To take the limit, the second additive term in the parentheses of the above finite product is multiplied and divided by the factor k²π². This yields

$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2}{k^2\pi^2}\,\frac{k^2\pi^2}{n^2\tan^2 k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2}{k^2\pi^2}\left(\frac{k\pi/n}{\tan k\pi/n}\right)^2\right),$$

which can be written, on account of the standard limit

$$\lim_{\vartheta\to 0}\frac{\vartheta}{\tan\vartheta} = 1,$$

as the classical Euler representation

$$\sin x = x\prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right).$$
An interesting observation can be drawn from a comparison of the above infinite product form with the classical Maclaurin series expansion

$$\sin x = \sum_{k=0}^{\infty}\frac{(-1)^k x^{2k+1}}{(2k+1)!}$$

of the sine function. The two forms share a common feature yet differ at the same time. As to the common feature, both the partial products of Euler's infinite product representation and the partial sums of the Maclaurin expansion are odd-degree polynomials. What makes the two forms different is that the partial products of Euler's representation are somewhat more faithful to the sine function: they share the zeros x_k = kπ with the original sine function, whereas the Maclaurin partial sums do not. It is evident that this property of the infinite product representation could be essential in applications. To examine the convergence pattern of Euler's representation and compare it to that of Maclaurin's series, the reader is invited to take a close look at Figs. 2.1 and 2.2, where sequences of Euler partial products and Maclaurin partial sums are depicted, illustrating the difference between the two formulations. As to the derivation of the infinite product representation of the trigonometric cosine function, which was shown in (2.2), we diligently follow the procedure just described for the sine function. That is, after using Euler's formula

$$\cos x = \frac{e^{ix} + e^{-ix}}{2}$$
2.1 Euler’s Classical Representations
21
Fig. 2.1 Convergence of the series expansion for sin x
Fig. 2.2 Convergence of the product expansion for sin x
and expressing the exponential functions in the limit form

$$\cos x = \frac{1}{2}\lim_{n\to\infty}\left[\left(1 + \frac{ix}{n}\right)^n + \left(1 - \frac{ix}{n}\right)^n\right],$$

we substitute the Newtonian polynomials from (2.6) and (2.7) into the right-hand side of the above relation. It can readily be seen that, in contrast to the case of the sine function, all the odd-power terms cancel out, and we subsequently arrive at the following even-degree polynomial-containing representation

$$\cos x = \lim_{n\to\infty}\sum_{k=0}^{(n-1)/2}(-1)^k\binom{n}{2k}\frac{x^{2k}}{n^{2k}}$$

for the cosine function. The polynomial under the limit sign can be factored in a similar way as in (2.8). In this case, we obtain

$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{[1+\cos(2k-1)\pi/n]}{[1-\cos(2k-1)\pi/n]}\,\frac{x^2}{n^2}\right),$$
Fig. 2.3 Convergence of the product expansion for cos x
which, after a trivial trigonometric transformation, becomes

$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2\cos^2(2k-1)\pi/2n}{n^2\sin^2(2k-1)\pi/2n}\right) = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{x^2}{n^2\tan^2(2k-1)\pi/2n}\right).$$

Similarly to the case of the sine function, we take the limit in the above relation, which requires some additional algebra. That is, the second additive term in the parentheses of the finite product is multiplied and divided by (2k − 1)²π²/4n². This yields

$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\,\frac{(2k-1)^2\pi^2}{4n^2\tan^2(2k-1)\pi/2n}\right),$$

which immediately transforms into

$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\left(\frac{(2k-1)\pi/2n}{\tan(2k-1)\pi/2n}\right)^2\right).$$

The latter, in turn, reads ultimately as the classical Euler expansion for the cosine shown in (2.2):

$$\cos x = \prod_{k=1}^{\infty}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\right).$$
Note that, similarly to the case of the sine function, the above infinite product representation also shares the zeros x_k = (2k − 1)π/2 with the original cosine function. The convergence pattern of the above infinite product representation can be observed in Fig. 2.3.
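The shared-zeros property noted above is easy to observe in code. In the sketch below (Python; the truncation order is illustrative), the N-factor partial product of (2.1) vanishes identically at x = π, while the Maclaurin partial sum of the same polynomial degree does not:

```python
import math

def euler_partial_product(x, N):
    """N-factor partial product of (2.1): an odd polynomial of degree 2N + 1."""
    p = x
    for k in range(1, N + 1):
        p *= 1 - x**2 / (k * math.pi)**2
    return p

def maclaurin_partial_sum(x, N):
    """Partial sum of the sine Maclaurin series, also of degree 2N + 1."""
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(N + 1))

N = 8
at_pi_product = euler_partial_product(math.pi, N)   # the k = 1 factor is zero
at_pi_series = maclaurin_partial_sum(math.pi, N)    # small, but not zero
print(at_pi_product, at_pi_series)
```

The same experiment at x = π/2 with the cosine-type product of (2.2) behaves analogously.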
2.1 Euler’s Classical Representations
23
We turn now to the case of the hyperbolic sine function, whose expansion is presented in (2.3). Its derivation can be conducted in a manner similar to that for the trigonometric sine. Indeed, representing the hyperbolic sine function in the exponential form

$$\sinh x = \frac{e^{x} - e^{-x}}{2},$$

one customarily expresses both exponential functions in the limit form. This results in

$$\sinh x = \frac{1}{2}\lim_{n\to\infty}\left[\left(1 + \frac{x}{n}\right)^n - \left(1 - \frac{x}{n}\right)^n\right]. \qquad (2.9)$$

Once the Newton binomial formula is used for both polynomials in the brackets, one obtains

$$\left(1 + \frac{x}{n}\right)^n = 1 + n\,\frac{x}{n} + \frac{n(n-1)}{2!}\,\frac{x^2}{n^2} + \cdots = \sum_{k=0}^{n}\binom{n}{k}\frac{x^k}{n^k}$$

and

$$\left(1 - \frac{x}{n}\right)^n = 1 - n\,\frac{x}{n} + \frac{n(n-1)}{2!}\,\frac{x^2}{n^2} - \cdots = \sum_{k=0}^{n}(-1)^k\binom{n}{k}\frac{x^k}{n^k}.$$

As in the derivation for the trigonometric sine function, all the even-power terms in x in (2.9) cancel out, while the remaining odd-power terms possess a common factor of 2x. Once the latter is factored out, the expression in (2.9) simplifies to the compact form

$$\sinh x = x\lim_{n\to\infty}\sum_{k=0}^{(n-1)/2}\binom{n}{2k+1}\frac{x^{2k}}{n^{2k+1}},$$

which factors as

$$\sinh x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 + \frac{(1+\cos 2k\pi/n)}{(1-\cos 2k\pi/n)}\,\frac{x^2}{n^2}\right).$$

Elementary trigonometric transformations yield

$$\sinh x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 + \frac{x^2\cos^2 k\pi/n}{n^2\sin^2 k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1 + \frac{x^2}{n^2\tan^2 k\pi/n}\right).$$
Fig. 2.4 Convergence of the product expansion for sinh x
Taking the limit as in the case of the trigonometric sine function, one ultimately transforms the above relation into the classical Euler form in (2.3):

$$\sinh x = x\prod_{k=1}^{\infty}\left(1 + \frac{x^2}{k^2\pi^2}\right).$$
The convergence pattern of the above product representation can be observed in Fig. 2.4. As to the derivation procedure for the hyperbolic cosine function, we will not go through its specifics, because it can be accomplished in exactly the same way as that for the trigonometric cosine. To better understand the peculiarities of the procedure, the reader is nevertheless strongly encouraged to work carefully through its details. It is worth noting that since Euler, various procedures have been proposed for the derivation of the infinite product representations of the trigonometric and hyperbolic functions. In the next section, we review some of those procedures. Over a dozen infinite product representations of elementary functions are available in current handbooks (see, for example, [9]). The present volume reviews them in detail and describes, in addition, an interesting approach to the problem based on the construction of Green's functions for the two-dimensional Laplace equation. This results, in particular, in infinite product representations [28] alternative to those in (2.1) and (2.2) for the trigonometric sine and cosine functions. A number of otherwise unavailable infinite product representations will also be derived for some other trigonometric and hyperbolic functions.
2.2 Alternative Derivations

In all fairness, Euler's derivations of the infinite product representations of the trigonometric and hyperbolic sine and cosine functions, reviewed in Sect. 2.1, must be referred to as classical. This designation is justified by the chronology alone: Euler was the first to propose such a derivation.
The reader will later be exposed to an unusual approach to the representation of elementary functions by infinite products, proposed by the author. This approach resulted [28] in novel representations for many elementary functions. But before going any further into the details of that approach, let us revisit the classical Euler representation of the trigonometric sine function and proceed through some of its other derivations, which are well known and can readily be found in the classical literature [5] on the subject. The first of those derivations relies on DeMoivre's formula [5] for a complex number in trigonometric form. It is written down here for an odd exponent 2n + 1:

$$(\cos w + i\sin w)^{2n+1} = \cos(2n+1)w + i\sin(2n+1)w. \qquad (2.10)$$

On the other hand, using the binomial formula, the left-hand side of the above can be expanded as

$$(\cos w + i\sin w)^{2n+1} = \cos^{2n+1}w + i(2n+1)\cos^{2n}w\,\sin w - \binom{2n+1}{2}\cos^{2n-1}w\,\sin^2 w$$
$$\qquad - i\binom{2n+1}{3}\cos^{2n-2}w\,\sin^3 w + \cdots + (-1)^n i\,\sin^{2n+1}w. \qquad (2.11)$$

Equating the imaginary parts of the right-hand sides in (2.10) and (2.11), we obtain

$$\sin(2n+1)w = (2n+1)\cos^{2n}w\,\sin w - \binom{2n+1}{3}\cos^{2n-2}w\,\sin^3 w + \cdots + (-1)^n\sin^{2n+1}w$$
$$\qquad = \sin w\left[(2n+1)\cos^{2n}w - \binom{2n+1}{3}\cos^{2n-2}w\,\sin^2 w + \cdots + (-1)^n\sin^{2n}w\right]. \qquad (2.12)$$
Since the second factor (the one in the brackets) contains only even exponents of the sine and cosine functions, it can be represented as a polynomial P_n(sin² w) whose degree in sin² w never exceeds n. On the other hand, for any fixed value of n, the left-hand side of (2.12) vanishes at the n points w_k = kπ/(2n + 1), k = 1, 2, 3, . . . , n, of the open interval (0, π/2). This implies that the zeros of P_n(s) are the values s_k = sin² w_k, allowing the polynomial to be expressed as

$$P_n(s) = \beta\prod_{k=1}^{n}\left(1 - \frac{s}{\sin^2 w_k}\right), \qquad (2.13)$$
where the factor β is yet to be determined. To determine it, we rewrite the relation in (2.12), in light of (2.13), in the compact form

$$\frac{\sin(2n+1)w}{\sin w} = \beta\prod_{k=1}^{n}\left[1 - \left(\frac{\sin w}{\sin w_k}\right)^2\right] \qquad (2.14)$$

in terms of w_k and take the limit as w approaches zero:

$$\lim_{w\to 0}\frac{\sin(2n+1)w}{\sin w} = \beta\lim_{w\to 0}\prod_{k=1}^{n}\left[1 - \left(\frac{\sin w}{\sin w_k}\right)^2\right].$$

The limit on the left-hand side of the above is 2n + 1, while the limit of the product on the right-hand side is equal to 1. This yields for the factor β the value 2n + 1, and the relation in (2.14) transforms into

$$\sin(2n+1)w = (2n+1)\sin w\prod_{k=1}^{n}\left[1 - \left(\frac{\sin w}{\sin(k\pi/(2n+1))}\right)^2\right]. \qquad (2.15)$$

Substituting x = (2n + 1)w, we rewrite (2.15) as

$$\sin x = (2n+1)\sin\frac{x}{2n+1}\prod_{k=1}^{n}\left[1 - \left(\frac{\sin(x/(2n+1))}{\sin(k\pi/(2n+1))}\right)^2\right]. \qquad (2.16)$$

Since

$$\lim_{n\to\infty}(2n+1)\sin\frac{x}{2n+1} = x,$$

while

$$\lim_{n\to\infty}\frac{\sin(x/(2n+1))}{\sin(k\pi/(2n+1))} = \frac{x}{k\pi},$$

the relation in (2.16) transforms, as n approaches infinity, into the classical Euler representation in (2.1):

$$\sin x = x\prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right).$$
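Note that (2.16) is an exact identity for every finite n, not only in the limit. This is easily confirmed numerically (Python; the argument is illustrative):

```python
import math

def demoivre_form(x, n):
    """Right-hand side of the finite identity (2.16)."""
    w = x / (2 * n + 1)
    p = (2 * n + 1) * math.sin(w)
    for k in range(1, n + 1):
        p *= 1 - (math.sin(w) / math.sin(k * math.pi / (2 * n + 1)))**2
    return p

x = 2.4
# Exact (to rounding) already at small n, unlike a truncated infinite product.
print(demoivre_form(x, 3), demoivre_form(x, 10), math.sin(x))
```

This contrasts with the truncated infinite product (2.1), which only approximates sin x for any finite number of factors.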
Clearly, the derivation procedure just reviewed is based on a totally different idea compared to that used by Euler. Recall another alternative derivation of the Euler representation of the sine function, which can be carried out using the Laurent series expansion [5]

$$\cot z - \frac{1}{z} = \sum_{k=-\infty}^{\infty}\left(\frac{1}{z - k\pi} + \frac{1}{k\pi}\right) \qquad (2.17)$$

for the cotangent function of a complex variable. Note that the summation in (2.17) assumes that the k = 0 term is omitted.
Evidently, the opening terms of the above series have isolated singular points (poles) in any bounded region D of the complex plane. If, however, a few initial terms of the series in (2.17) are truncated, then the series becomes absolutely and uniformly convergent in a bounded region. This assertion can be justified by considering the general term

$$\frac{1}{z - k\pi} + \frac{1}{k\pi} = \frac{z}{k\pi(z - k\pi)}$$

of the series, for which the following estimate holds:

$$\left|\frac{z}{k\pi(z - k\pi)}\right| = \left|\frac{z}{k^2\pi(z/k - \pi)}\right| \le \frac{T}{\pi|T/k - \pi|}\cdot\frac{1}{k^2},$$

where T represents the upper bound of the modulus of the variable z, that is, |z| < T. It can be shown that the first factor on the right-hand side of the above inequality has the finite limit T/π² as k approaches infinity. Thus, the series in (2.17) converges (at the rate of 1/k²) absolutely and uniformly in any bounded region. In other words, both the left-hand side and the right-hand side in (2.17) are regular functions at z = 0. This makes it possible for the series in (2.17) to be integrated term by term. Taking advantage of this fact, we integrate both sides in (2.17) along a path joining the origin z = 0 to a point z ∈ D. This yields
$$\left[\log\frac{\sin z}{z}\right]_{0}^{z} = \sum_{k=-\infty}^{\infty}\left[\log(z - k\pi) + \frac{z}{k\pi}\right]_{0}^{z},$$

and after choosing the branch of the logarithm that vanishes at the origin, we obtain

$$\log\frac{\sin z}{z} = \sum_{k=-\infty}^{\infty}\left[\log\left(1 - \frac{z}{k\pi}\right) + \frac{z}{k\pi}\right] = \sum_{k=-\infty}^{\infty}\log\left[\left(1 - \frac{z}{k\pi}\right)\exp\frac{z}{k\pi}\right] = \log\prod_{k=-\infty}^{\infty}\left(1 - \frac{z}{k\pi}\right)\exp\frac{z}{k\pi}. \qquad (2.18)$$
Exponentiating (2.18), we rewrite it as

$$\sin z = z\prod_{k=-\infty}^{\infty}\left(1 - \frac{z}{k\pi}\right)\exp\frac{z}{k\pi}. \qquad (2.19)$$
Recall that the k = 0 factor is omitted in the above infinite product. Coupling then the kth factor

$$\left(1 - \frac{z}{k\pi}\right)\exp\frac{z}{k\pi}$$

and the (−k)th factor

$$\left(1 + \frac{z}{k\pi}\right)\exp\left(-\frac{z}{k\pi}\right)$$

in (2.19), we ultimately obtain the classical Euler representation of (2.1):

$$\sin z = z\prod_{k=1}^{\infty}\left(1 - \frac{z^2}{k^2\pi^2}\right).$$
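The role of the exponential convergence factors in (2.19) can also be seen numerically: without them a one-sided partial product keeps drifting toward zero, while the factors exp(z/kπ) stabilize it, and the paired two-sided product reproduces sin z. A sketch (Python; z and the truncation level are illustrative):

```python
import math

z, N = 1.0, 100000
bare = pos = neg = 1.0
for k in range(1, N + 1):
    t = z / (k * math.pi)
    bare *= 1 - t                    # kth factor without the exponential
    pos *= (1 - t) * math.exp(t)     # kth factor of (2.19)
    neg *= (1 + t) * math.exp(-t)    # (-k)th factor of (2.19)

two_sided = z * pos * neg            # paired factors: the exponentials cancel
print(bare, pos, two_sided, math.sin(z))
```

In the paired form, the exponentials of the kth and (−k)th factors cancel exactly, which is precisely the coupling used in the derivation above.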
So, two different derivations for the expansion in (2.1) have been reviewed in this section, both alternative to the classical Euler procedure discussed in Sect. 2.1. This issue will be revisited in Chap. 6, where yet another alternative derivation procedure for infinite product representations of elementary functions will be presented. It was recently proposed by the author and reported in [27, 28], and is based on a novel approach. The objective in the next section is to introduce the reader to the limited number of infinite product representations of elementary functions that can be found in the current literature.
2.3 Other Elementary Functions

The classical Euler representations of the trigonometric and hyperbolic sine and cosine functions, whose derivation has been reproduced in this volume, can be employed in obtaining infinite product expansions for some other elementary functions. However, only a limited number of those expansions are available in the literature. All of them are listed in handbooks on the subject (see, for example, [6, 9]). In this section, we revisit the expressions for elementary functions in terms of infinite products available in the literature and advise the reader on methods that can be applied for their derivation. In doing so, we begin with the representation

$$\cos x - \cos y = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\left(1 - \frac{x^2}{(2k\pi+y)^2}\right)\left(1 - \frac{x^2}{(2k\pi-y)^2}\right), \qquad (2.20)$$

listed in [9] as #1.432(1). In order to derive it, the difference of cosines on the left-hand side of (2.20) can be converted to the product form

$$\cos x - \cos y = 2\sin\frac{y-x}{2}\,\sin\frac{y+x}{2}$$

and then multiplied and divided by the factor sin²(y/2), yielding

$$\cos x - \cos y = 2\,\frac{\sin^2(y/2)}{\sin^2(y/2)}\,\sin\frac{y-x}{2}\,\sin\frac{y+x}{2}.$$
Leaving the sin²(y/2) factor in the numerator in its current form while expressing the other three sine factors with the aid of the classical Euler infinite product representation in (2.1), one obtains

$$\cos x - \cos y = 2\,\frac{y^2 - x^2}{y^2}\,\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\left(1 - \frac{(y+x)^2}{4k^2\pi^2}\right)\prod_{k=1}^{\infty}\left(1 - \frac{(y-x)^2}{4k^2\pi^2}\right)\times\left[\prod_{k=1}^{\infty}\left(1 - \frac{y^2}{4k^2\pi^2}\right)\right]^{-2},$$

which can be rewritten in a more compact form. To proceed with this, we combine all three infinite products into a single product form. This yields

$$\cos x - \cos y = 2\,\frac{y^2 - x^2}{y^2}\,\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{\bigl[1 - \frac{(y+x)^2}{4k^2\pi^2}\bigr]\bigl[1 - \frac{(y-x)^2}{4k^2\pi^2}\bigr]}{\bigl(1 - \frac{y^2}{4k^2\pi^2}\bigr)^2},$$

or, after performing elementary algebra on the expression under the product sign, we have

$$\cos x - \cos y = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{[4k^2\pi^2 - (x+y)^2][4k^2\pi^2 - (x-y)^2]}{(4k^2\pi^2 - y^2)^2}.$$

Upon factoring the differences of squares under the product sign, the above relation transforms into

$$\cos x - \cos y = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{[2k\pi + (x+y)][2k\pi - (x+y)]}{(2k\pi+y)^2}\cdot\frac{[2k\pi + (x-y)][2k\pi - (x-y)]}{(2k\pi-y)^2}.$$

At this point, we regroup the numerator factors under the product sign. That is, we combine the first and fourth factors, as well as the second and third factors. This yields

$$\cos x - \cos y = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{[(2k\pi+y) + x][(2k\pi+y) - x]}{(2k\pi+y)^2}\cdot\frac{[(2k\pi-y) - x][(2k\pi-y) + x]}{(2k\pi-y)^2},$$

reducing the above relation to the form

$$\cos x - \cos y = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{(2k\pi+y)^2 - x^2}{(2k\pi+y)^2}\cdot\frac{(2k\pi-y)^2 - x^2}{(2k\pi-y)^2}$$
$$\qquad = 2\left(1 - \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\left(1 - \frac{x^2}{(2k\pi+y)^2}\right)\left(1 - \frac{x^2}{(2k\pi-y)^2}\right).$$
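The chain just completed can be spot-checked numerically. A sketch (Python; the arguments and the truncation level are illustrative) compares a truncated right-hand side of (2.20) against the left-hand side:

```python
import math

def cos_diff_product(x, y, terms=20000):
    """Truncated right-hand side of the representation (2.20)."""
    p = 2 * (1 - x**2 / y**2) * math.sin(y / 2)**2
    for k in range(1, terms + 1):
        p *= (1 - x**2 / (2 * k * math.pi + y)**2) \
           * (1 - x**2 / (2 * k * math.pi - y)**2)
    return p

x, y = 1.1, 2.3
print(cos_diff_product(x, y), math.cos(x) - math.cos(y))
```

The agreement improves like 1/N in the truncation level N, as expected from the 1/k² decay of the factors' deviations from 1.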
This completes the derivation of the representation in (2.20). A derivation procedure similar to that just described can be employed for obtaining another infinite product expression of an elementary function. This is the representation

$$\cosh x - \cos y = 2\left(1 + \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\left(1 + \frac{x^2}{(2k\pi+y)^2}\right)\left(1 + \frac{x^2}{(2k\pi-y)^2}\right), \qquad (2.21)$$

which is also available in the existing literature (see #1.432(2) in [9]). To put the derivation of (2.21) on the effective track just used for (2.20), we express the hyperbolic cosine function in terms of the trigonometric cosine, cosh x = cos ix, and simply trace out the procedure described earlier in detail for the case of (2.20):

$$\cosh x - \cos y = \cos ix - \cos y = 2\,\frac{\sin^2(y/2)}{\sin^2(y/2)}\,\sin\frac{y+ix}{2}\,\sin\frac{y-ix}{2}$$
$$= 2\sin^2\frac{y}{2}\cdot\frac{y+ix}{2}\prod_{k=1}^{\infty}\left(1 - \frac{(y+ix)^2}{4k^2\pi^2}\right)\cdot\frac{y-ix}{2}\prod_{k=1}^{\infty}\left(1 - \frac{(y-ix)^2}{4k^2\pi^2}\right)\cdot\left[\frac{y^2}{4}\prod_{k=1}^{\infty}\left(1 - \frac{y^2}{4k^2\pi^2}\right)^2\right]^{-1}.$$

Upon grouping all the infinite product factors, the above reads

$$2\,\frac{y^2 + x^2}{y^2}\,\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{\bigl[1 - \frac{(y+ix)^2}{4k^2\pi^2}\bigr]\bigl[1 - \frac{(y-ix)^2}{4k^2\pi^2}\bigr]}{\bigl(1 - \frac{y^2}{4k^2\pi^2}\bigr)^2},$$

and transforms then as

$$2\left(1 + \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{[4k^2\pi^2 - (ix+y)^2][4k^2\pi^2 - (ix-y)^2]}{(4k^2\pi^2 - y^2)^2}$$
$$= 2\left(1 + \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{[2k\pi - (ix+y)][2k\pi + (ix+y)]}{(2k\pi+y)^2}\cdot\frac{[2k\pi - (ix-y)][2k\pi + (ix-y)]}{(2k\pi-y)^2}$$
$$= 2\left(1 + \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\frac{(2k\pi+y)^2 + x^2}{(2k\pi+y)^2}\cdot\frac{(2k\pi-y)^2 + x^2}{(2k\pi-y)^2}$$
$$= 2\left(1 + \frac{x^2}{y^2}\right)\sin^2\frac{y}{2}\prod_{k=1}^{\infty}\left(1 + \frac{x^2}{(2k\pi+y)^2}\right)\left(1 + \frac{x^2}{(2k\pi-y)^2}\right).$$
We turn now to another infinite product representation of an elementary function that is available in the literature,

\[
\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}=\prod_{k=1}^{\infty}\left(1+\frac{(-1)^{k}x}{2k-1}\right),
\tag{2.22}
\]

listed in [9], for example, as #1.433. This infinite product converges at the slow rate of $1/k$. We can offer two alternative expansions of the function $\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}$ whose convergence rate is notably faster compared to that of (2.22).

To derive the first such expansion, we convert the difference of trigonometric functions in (2.22) to a single cosine function. This can be done by multiplying and dividing it by a factor of $\sqrt{2}/2$:

\[
\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}
=\sqrt{2}\left(\frac{\sqrt{2}}{2}\cos\frac{\pi x}{4}-\frac{\sqrt{2}}{2}\sin\frac{\pi x}{4}\right)
=\sqrt{2}\left(\cos\frac{\pi}{4}\cos\frac{\pi x}{4}-\sin\frac{\pi}{4}\sin\frac{\pi x}{4}\right)
=\sqrt{2}\cos\frac{\pi(1+x)}{4}.
\]

Upon expressing the above cosine function by the classical Euler infinite product form in (2.2), the first alternative version of the expansion in (2.22) appears as

\[
\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}=\sqrt{2}\cos\frac{\pi(1+x)}{4}
=\sqrt{2}\prod_{k=1}^{\infty}\left(1-\frac{(1+x)^{2}}{4(2k-1)^{2}}\right).
\tag{2.23}
\]

If, in contrast to the derivation just completed, the left-hand side of (2.22) is similarly expressed as a single sine function,

\[
\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}=\sqrt{2}\sin\frac{\pi(1-x)}{4},
\]

then one arrives, with the aid of the classical Euler infinite product form for the sine function in (2.1), at another alternative representation to that in (2.22),

\[
\cos\frac{\pi x}{4}-\sin\frac{\pi x}{4}
=\frac{\sqrt{2}\,\pi}{4}(1-x)\prod_{k=1}^{\infty}\left(1-\frac{(1-x)^{2}}{16k^{2}}\right).
\tag{2.24}
\]
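The difference in truncation error between (2.22) and (2.24) can be observed numerically before turning to the plots. The following sketch is ours (the helper names `p_222` and `p_224` are ad hoc truncations of the two products, not notation from the text):

```python
import math

def p_222(x, n):
    # n-th partial product of the slowly convergent representation (2.22)
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1.0 + ((-1) ** k) * x / (2 * k - 1)
    return prod

def p_224(x, n):
    # n-th partial product of the alternative representation (2.24)
    prod = math.sqrt(2.0) * math.pi * (1.0 - x) / 4.0
    for k in range(1, n + 1):
        prod *= 1.0 - (1.0 - x) ** 2 / (16.0 * k * k)
    return prod

x = 0.3
exact = math.cos(math.pi * x / 4) - math.sin(math.pi * x / 4)
for n in (2, 5, 10):
    print(n, abs(p_222(x, n) - exact), abs(p_224(x, n) - exact))
```

The errors printed for (2.24) shrink roughly like $1/n$ of a $1/k^{2}$-tailed product, while those of (2.22) decay only at the $1/k$ rate noted above.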
Fig. 2.5 Convergence of the representation in (2.22)
Fig. 2.6 Convergence of the representation in (2.24)
It is evident that the versions in (2.23) and (2.24) are more efficient computationally than that in (2.22). Indeed, they converge at the rate $1/k^{2}$, in contrast to the rate $1/k$ for the expansion in (2.22). As to the representations in (2.23) and (2.24), it can be shown that the convergence of the latter must be slightly faster. This assertion follows directly from a comparison of the denominators in their fractional components. Indeed, the inequality $4(2k-1)^{2}=16k^{2}-16k+4<16k^{2}$ holds for every positive integer $k$, since $16k-4>0$. The relative convergence of the representations in (2.22) and (2.24) can be observed in Figs. 2.5 and 2.6, where their second, fifth, and tenth partial products are plotted on the interval $[0,\pi]$.

Derivation of the next infinite product representation of an elementary function, which is available in [9] (see #1.434),

\[
\cos^{2}x=\frac{(\pi+2x)^{2}}{4}\prod_{k=1}^{\infty}\left(1-\frac{(\pi+2x)^{2}}{4k^{2}\pi^{2}}\right)^{2},
\tag{2.25}
\]

is as straightforward as it gets. Indeed, once the cosine function is converted to the sine form

\[
\cos^{2}x=\sin^{2}\left(\frac{\pi}{2}+x\right),
\]

the implementation of the classical representation for the sine function in (2.1) completes the job.

At this point, we turn to another representation,

\[
\frac{\sin\pi(x+a)}{\sin\pi a}
=\frac{x+a}{a}\prod_{k=1}^{\infty}\left(1-\frac{x}{k-a}\right)\left(1+\frac{x}{k+a}\right),
\tag{2.26}
\]

which is presented in [9] as #1.435. If the sine functions in the numerator and denominator are expressed in terms of the classical Euler form, then (2.26) reads

\[
\frac{\sin\pi(x+a)}{\sin\pi a}
=\frac{\pi(x+a)\prod_{k=1}^{\infty}\bigl[1-\frac{\pi^{2}(x+a)^{2}}{k^{2}\pi^{2}}\bigr]}{\pi a\prod_{k=1}^{\infty}\bigl[1-\frac{\pi^{2}a^{2}}{k^{2}\pi^{2}}\bigr]}.
\]

And upon performing a chain of straightforward transformations, the above representation converts ultimately into (2.26):

\[
\frac{\sin\pi(x+a)}{\sin\pi a}
=\frac{x+a}{a}\prod_{k=1}^{\infty}\frac{1-\frac{(x+a)^{2}}{k^{2}}}{1-\frac{a^{2}}{k^{2}}}
=\frac{x+a}{a}\prod_{k=1}^{\infty}\frac{(1-\frac{x+a}{k})(1+\frac{x+a}{k})}{(1-\frac{a}{k})(1+\frac{a}{k})}
\]
\[
=\frac{x+a}{a}\prod_{k=1}^{\infty}\frac{(k-a)-x}{k-a}\cdot\frac{(k+a)+x}{k+a}
=\frac{x+a}{a}\prod_{k=1}^{\infty}\left(1-\frac{x}{k-a}\right)\left(1+\frac{x}{k+a}\right).
\]
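The identity (2.26) is easy to spot-check numerically. The sketch below is an illustration of ours (function name and sample values are ad hoc); it compares a truncation of the product against the sine ratio directly:

```python
import math

def sine_ratio_product(x, a, n=2000):
    # Truncation of (2.26): sin(pi (x + a)) / sin(pi a)
    prod = (x + a) / a
    for k in range(1, n + 1):
        prod *= (1.0 - x / (k - a)) * (1.0 + x / (k + a))
    return prod

x, a = 0.2, 0.3
exact = math.sin(math.pi * (x + a)) / math.sin(math.pi * a)
print(sine_ratio_product(x, a), exact)
```

Each factor equals $1-x(x+2a)/(k^{2}-a^{2})$, so the truncation error again decays like $1/n$ of a $1/k^{2}$ tail.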
For another infinite product representation of an elementary function available in the literature, we turn to

\[
1-\frac{\sin^{2}\pi x}{\sin^{2}\pi a}=\prod_{k=-\infty}^{\infty}\left(1-\frac{x^{2}}{(k-a)^{2}}\right),
\tag{2.27}
\]

which is listed as #1.436 in [9].

To proceed with the derivation in this case, we convert the infinite product in (2.27) to an equivalent form. In doing so, we isolate the term with $k=0$ (which is equal to $1-x^{2}/a^{2}$) of the product, and group the $k$th and the $-k$th terms by pairs. This transforms the relation in (2.27) into

\[
1-\frac{\sin^{2}\pi x}{\sin^{2}\pi a}
=\left(1-\frac{x^{2}}{a^{2}}\right)\prod_{k=1}^{\infty}\left(1-\frac{x^{2}}{(k-a)^{2}}\right)\left(1-\frac{x^{2}}{(k+a)^{2}}\right).
\tag{2.28}
\]

To verify the above identity, transform its left-hand side as

\[
1-\frac{\sin^{2}\pi x}{\sin^{2}\pi a}=\frac{\sin^{2}\pi a-\sin^{2}\pi x}{\sin^{2}\pi a}
\]

and decompose the numerator as a difference of squares:

\[
\frac{\sin^{2}\pi a-\sin^{2}\pi x}{\sin^{2}\pi a}
=\frac{(\sin\pi a-\sin\pi x)(\sin\pi a+\sin\pi x)}{\sin^{2}\pi a}.
\tag{2.29}
\]

At the next step, convert the difference and the sum of the sine functions in (2.29) to the product forms

\[
\sin\pi a-\sin\pi x=2\sin\frac{\pi(a-x)}{2}\cos\frac{\pi(a+x)}{2}
\]

and

\[
\sin\pi a+\sin\pi x=2\sin\frac{\pi(a+x)}{2}\cos\frac{\pi(a-x)}{2}.
\]

With this, we regroup the numerator in (2.29) as

\[
2\sin\frac{\pi(a+x)}{2}\cos\frac{\pi(a+x)}{2}\cdot
2\sin\frac{\pi(a-x)}{2}\cos\frac{\pi(a-x)}{2},
\]

where the first double product represents the sine function $\sin\pi(a+x)$, while the second double product is $\sin\pi(a-x)$. This finally transforms the left-hand side in (2.28) into

\[
\frac{\sin\pi(a+x)\sin\pi(a-x)}{\sin^{2}\pi a}.
\]

At this point, replacing all the sine functions with their classical Euler infinite product form, we rewrite the above as

\[
\frac{(a+x)(a-x)}{a^{2}}
\prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(a+x)^{2}}{k^{2}}\bigr]\bigl[1-\frac{(a-x)^{2}}{k^{2}}\bigr]}{\bigl(1-\frac{a^{2}}{k^{2}}\bigr)^{2}},
\]

which transforms into

\[
\frac{a^{2}-x^{2}}{a^{2}}
\prod_{k=1}^{\infty}\frac{[k^{2}-(a+x)^{2}][k^{2}-(a-x)^{2}]}{(k-a)^{2}(k+a)^{2}}.
\tag{2.30}
\]

The numerator under the infinite product sign can be decomposed as

\[
(k-a-x)(k+a+x)(k-a+x)(k+a-x).
\]

So, grouping the first factor with the third, and the second with the fourth, one converts the numerator in (2.30) into

\[
[(k-a)^{2}-x^{2}][(k+a)^{2}-x^{2}],
\]

which transforms (2.30) to

\[
\left(1-\frac{x^{2}}{a^{2}}\right)\prod_{k=1}^{\infty}
\frac{[(k-a)^{2}-x^{2}][(k+a)^{2}-x^{2}]}{(k-a)^{2}(k+a)^{2}}
\]

and finally to

\[
\left(1-\frac{x^{2}}{a^{2}}\right)\prod_{k=1}^{\infty}
\left(1-\frac{x^{2}}{(k-a)^{2}}\right)\left(1-\frac{x^{2}}{(k+a)^{2}}\right).
\]
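The paired form (2.28) just obtained can be checked numerically. The following sketch is ours (the helper names `lhs` and `rhs` are ad hoc); it compares both sides of the identity at a sample point:

```python
import math

def lhs(x, a):
    # Left-hand side of (2.28)
    return 1.0 - math.sin(math.pi * x) ** 2 / math.sin(math.pi * a) ** 2

def rhs(x, a, n=2000):
    # Truncation of the paired product on the right of (2.28)
    prod = 1.0 - x * x / (a * a)
    for k in range(1, n + 1):
        prod *= (1.0 - x * x / (k - a) ** 2) * (1.0 - x * x / (k + a) ** 2)
    return prod

x, a = 0.15, 0.4
print(lhs(x, a), rhs(x, a))
```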
This completes the derivation of the representation in (2.27).

The next infinite product representation of an elementary function that will be reviewed here is also taken from [9]. It is #1.437:

\[
\frac{\sin 3x}{\sin x}=-\prod_{k=-\infty}^{\infty}\left[1-\left(\frac{2x}{x+k\pi}\right)^{2}\right].
\tag{2.31}
\]
To verify this identity, we decompose first the difference of squares in the product as

\[
-\prod_{k=-\infty}^{\infty}\left[1-\left(\frac{2x}{x+k\pi}\right)^{2}\right]
=-\prod_{k=-\infty}^{\infty}\left(1-\frac{2x}{x+k\pi}\right)\left(1+\frac{2x}{x+k\pi}\right)
\]

and then convert the above infinite product to an equivalent form. Namely, by splitting off the term with $k=0$, which is evidently equal to $-3$, and pairing the $k$th and the $-k$th terms, the above product transforms into

\[
3\prod_{k=1}^{\infty}\left(1-\frac{2x}{x+k\pi}\right)\left(1-\frac{2x}{x-k\pi}\right)
\left(1+\frac{2x}{x+k\pi}\right)\left(1+\frac{2x}{x-k\pi}\right)
\]

and

\[
3\prod_{k=1}^{\infty}\frac{k\pi-x}{x+k\pi}\cdot\frac{-k\pi-x}{x-k\pi}
\cdot\frac{3x+k\pi}{x+k\pi}\cdot\frac{3x-k\pi}{x-k\pi}.
\]
Clearly, the first two factors under the product sign cancel, leaving the right-hand side of (2.31) as

\[
3\prod_{k=1}^{\infty}\frac{(3x-k\pi)(3x+k\pi)}{(x+k\pi)(x-k\pi)}.
\tag{2.32}
\]

As to the left-hand side in (2.31), we reduce both the sine functions in it to the infinite product form

\[
\frac{\sin 3x}{\sin x}
=\frac{3x}{x}\prod_{k=1}^{\infty}\frac{1-\frac{(3x)^{2}}{k^{2}\pi^{2}}}{1-\frac{x^{2}}{k^{2}\pi^{2}}}
=3\prod_{k=1}^{\infty}\frac{9x^{2}-k^{2}\pi^{2}}{x^{2}-k^{2}\pi^{2}},
\]

which is identical to the expression in (2.32). Thus, the identity in (2.31) is ultimately verified.

We turn next to an infinite product representation of another elementary function,

\[
\frac{\cosh x-\cos\alpha}{1-\cos\alpha}
=\prod_{k=-\infty}^{\infty}\left[1+\left(\frac{x}{2k\pi+\alpha}\right)^{2}\right],
\tag{2.33}
\]

which is listed in [9] as #1.438. To verify this identity, we transform its left-hand side as

\[
\frac{\cosh x-\cos\alpha}{1-\cos\alpha}=\frac{\cos ix-\cos\alpha}{1-\cos\alpha}
=\frac{\sin\frac{\alpha+ix}{2}\sin\frac{\alpha-ix}{2}}{\sin^{2}\frac{\alpha}{2}}.
\]
We then express the sine functions by the classical Euler infinite product form, and perform some obvious elementary transformations. This yields

\[
\frac{\frac{\alpha+ix}{2}\cdot\frac{\alpha-ix}{2}}{\frac{\alpha^{2}}{4}}
\prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(\alpha+ix)^{2}}{4k^{2}\pi^{2}}\bigr]\bigl[1-\frac{(\alpha-ix)^{2}}{4k^{2}\pi^{2}}\bigr]}{\bigl(1-\frac{\alpha^{2}}{4k^{2}\pi^{2}}\bigr)^{2}},
\]

or

\[
\frac{\alpha^{2}+x^{2}}{\alpha^{2}}
\prod_{k=1}^{\infty}\frac{[4k^{2}\pi^{2}-(\alpha+ix)^{2}][4k^{2}\pi^{2}-(\alpha-ix)^{2}]}{(2k\pi+\alpha)^{2}(2k\pi-\alpha)^{2}},
\]

which transforms as

\[
\left(1+\frac{x^{2}}{\alpha^{2}}\right)
\prod_{k=1}^{\infty}\frac{(2k\pi-\alpha-ix)(2k\pi+\alpha+ix)(2k\pi-\alpha+ix)(2k\pi+\alpha-ix)}{(2k\pi+\alpha)^{2}(2k\pi-\alpha)^{2}}.
\]

Combining the first factor with the third, and the second with the fourth in the numerator, one converts the above into

\[
\left(1+\frac{x^{2}}{\alpha^{2}}\right)
\prod_{k=1}^{\infty}\frac{[(2k\pi-\alpha)^{2}+x^{2}][(2k\pi+\alpha)^{2}+x^{2}]}{(2k\pi-\alpha)^{2}(2k\pi+\alpha)^{2}},
\]
which can be represented as

\[
\left(1+\frac{x^{2}}{\alpha^{2}}\right)
\prod_{k=1}^{\infty}\left(1+\frac{x^{2}}{(2k\pi-\alpha)^{2}}\right)\left(1+\frac{x^{2}}{(2k\pi+\alpha)^{2}}\right).
\tag{2.34}
\]

It can be shown that the above infinite product (where the multiplication is assumed from one to infinity) transforms to that in (2.33), where we "multiply" from negative infinity to positive infinity. To justify this assertion, we formally break down the product in (2.34) into two pieces,

\[
\left(1+\frac{x^{2}}{\alpha^{2}}\right)
\prod_{m=1}^{\infty}\left(1+\frac{x^{2}}{(2m\pi-\alpha)^{2}}\right)\cdot
\prod_{k=1}^{\infty}\left(1+\frac{x^{2}}{(2k\pi+\alpha)^{2}}\right),
\]

and change the multiplication index in the first of the products via $m=-k$. This converts the above expression to

\[
\left(1+\frac{x^{2}}{\alpha^{2}}\right)
\prod_{k=-1}^{-\infty}\left(1+\frac{x^{2}}{(2k\pi+\alpha)^{2}}\right)\cdot
\prod_{k=1}^{\infty}\left(1+\frac{x^{2}}{(2k\pi+\alpha)^{2}}\right),
\]

which is just the right-hand side of the relation in (2.33). Thus, the identity in (2.33) is verified.

We have finished our review of infinite product expansions of elementary functions that can be directly derived with the aid of the classical Euler representations for the trigonometric sine and cosine functions. A few expansions, whose derivation will be conducted in the remaining part of this section, illustrate a variety of other possible approaches to the problem.

Let us recall an alternative to Euler's (2.1) infinite product expansion of the trigonometric sine function. That is,

\[
\sin x=x\prod_{k=1}^{\infty}\cos\frac{x}{2^{k}},
\tag{2.35}
\]

which also has been known for centuries and is listed, in particular, in [9] as #1.439. A formal comment is appropriate as to the convergence of the infinite product in (2.35). It converges to nonzero values of the sine function for any value of the variable $x$ that does not make the argument of the cosine equal to $\pi/2+n\pi$, whereas it diverges to zero at such values of $x$, matching zero values of the sine function.

The derivation strategy that we are going to pursue in the case of (2.35) is based on the definition of the value of an infinite product. The strategy has two stages. First, a compact expression must be derived for the $K$th partial product $P_{K}(x)$,

\[
P_{K}(x)=\prod_{k=1}^{K}\cos\frac{x}{2^{k}},
\]
of the infinite product in (2.35). Then the limit of $P_{K}(x)$ is obtained as $K$ approaches infinity.

To obtain a compact form of the partial product $P_{K}(x)$ for (2.35), we rewrite its first factor $\cos\frac{x}{2}$ as

\[
\cos\frac{x}{2}=\frac{2\sin\frac{x}{2}\cos\frac{x}{2}}{2\sin\frac{x}{2}}
=\frac{\sin x}{2\sin\frac{x}{2}}.
\]

Similarly, the second factor $\cos\frac{x}{2^{2}}$ and the third factor $\cos\frac{x}{2^{3}}$ in $P_{K}(x)$ turn out to be

\[
\cos\frac{x}{2^{2}}=\frac{2\sin\frac{x}{2^{2}}\cos\frac{x}{2^{2}}}{2\sin\frac{x}{2^{2}}}
=\frac{\sin\frac{x}{2}}{2\sin\frac{x}{2^{2}}}
\]

and

\[
\cos\frac{x}{2^{3}}=\frac{2\sin\frac{x}{2^{3}}\cos\frac{x}{2^{3}}}{2\sin\frac{x}{2^{3}}}
=\frac{\sin\frac{x}{2^{2}}}{2\sin\frac{x}{2^{3}}}.
\]

Proceeding like this with the next-to-last factor $\cos\frac{x}{2^{K-1}}$ and the last factor $\cos\frac{x}{2^{K}}$ in $P_{K}(x)$, we express them as

\[
\cos\frac{x}{2^{K-1}}
=\frac{2\sin\frac{x}{2^{K-1}}\cos\frac{x}{2^{K-1}}}{2\sin\frac{x}{2^{K-1}}}
=\frac{\sin\frac{x}{2^{K-2}}}{2\sin\frac{x}{2^{K-1}}}
\]

and

\[
\cos\frac{x}{2^{K}}
=\frac{2\sin\frac{x}{2^{K}}\cos\frac{x}{2^{K}}}{2\sin\frac{x}{2^{K}}}
=\frac{\sin\frac{x}{2^{K-1}}}{2\sin\frac{x}{2^{K}}}.
\]

Once all the factors are put together, we have a series of cancellations, and the partial product $P_{K}(x)$ eventually reduces to the form

\[
P_{K}(x)=\frac{\sin x}{2^{K}\sin\frac{x}{2^{K}}}.
\tag{2.36}
\]

Upon multiplying the numerator and denominator in (2.36) by $x$ and regrouping the factors,

\[
P_{K}(x)=\frac{x\sin x}{2^{K}x\sin\frac{x}{2^{K}}}
=\frac{\sin x}{x}\cdot\frac{\frac{x}{2^{K}}}{\sin\frac{x}{2^{K}}},
\]

the partial product of the representation in (2.35) is prepared for taking the limit. Thus, we finally obtain

\[
\lim_{K\to\infty}P_{K}(x)=\prod_{k=1}^{\infty}\cos\frac{x}{2^{k}}
=\frac{\sin x}{x}\lim_{K\to\infty}\frac{\frac{x}{2^{K}}}{\sin\frac{x}{2^{K}}}
=\frac{\sin x}{x},
\]

which completes the derivation of the representation in (2.35).
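Both the telescoped form (2.36) and the limit just obtained are easy to confirm numerically. The sketch below is an illustration of ours (function names are ad hoc):

```python
import math

def partial_product(x, K):
    # P_K(x) = prod_{k=1}^{K} cos(x / 2^k), computed factor by factor
    prod = 1.0
    for k in range(1, K + 1):
        prod *= math.cos(x / 2.0 ** k)
    return prod

def compact_form(x, K):
    # The telescoped form (2.36): sin x / (2^K sin(x / 2^K))
    return math.sin(x) / (2.0 ** K * math.sin(x / 2.0 ** K))

x = 1.1
for K in (3, 10, 25):
    print(K, partial_product(x, K), compact_form(x, K))
```

For moderate $K$ the two expressions agree to machine precision, and both approach $\sin x/x$ as $K$ grows.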
Recall another infinite product representation,

\[
\sinh x=x\prod_{k=1}^{\infty}\cosh\frac{x}{2^{k}},
\tag{2.37}
\]

which is available in [20]. It is evident that its derivation can be conducted in exactly the same way as for the one in (2.35).

In what follows, the strategy just illustrated will be applied to the derivation of an infinite product representation for another elementary function, that is,

\[
\frac{1}{1-x}=\prod_{k=0}^{\infty}\left(1+x^{2^{k}}\right),\qquad |x|<1.
\tag{2.38}
\]
It can also be found in [20]. To proceed with the derivation, we transform the general term in (2.38) as

\[
1+x^{2^{k}}=\frac{1-x^{2^{k+1}}}{1-x^{2^{k}}}
\]

and write down the $K$th partial product $P_{K}(x)$ of the representation in (2.38) (which contains the first $K$ factors) explicitly as

\[
P_{K}(x)=\prod_{k=0}^{K-1}\left(1+x^{2^{k}}\right)
=\frac{1-x^{2}}{1-x}\cdot\frac{1-x^{4}}{1-x^{2}}\cdot\frac{1-x^{8}}{1-x^{4}}
\cdot\ \cdots\ \cdot
\frac{1-x^{2^{K-1}}}{1-x^{2^{K-2}}}\cdot\frac{1-x^{2^{K}}}{1-x^{2^{K-1}}}.
\]

It is evident that nearly all the terms in the above product cancel. Indeed, the only terms left are the denominator $1-x$ of the first factor and the numerator $1-x^{2^{K}}$ of the last factor. This reduces the partial product $P_{K}(x)$ to the compact form

\[
P_{K}(x)=\prod_{k=0}^{K-1}\left(1+x^{2^{k}}\right)=\frac{1-x^{2^{K}}}{1-x},
\]

whose limit, as $K$ approaches infinity, is

\[
\lim_{K\to\infty}\prod_{k=0}^{K-1}\left(1+x^{2^{k}}\right)
=\lim_{K\to\infty}\frac{1-x^{2^{K}}}{1-x}=\frac{1}{1-x}
\]

for values of $x$ such that $|x|<1$.

From a comparison of the infinite product representation in (2.38) with the Maclaurin series $\sum_{k=0}^{\infty}x^{k}$ of the function $1/(1-x)$, it follows that the two are equivalent to each other, with the relation

\[
P_{K}=S_{2^{K}}
\]

between the partial product of (2.38) and the partial sum of the series. This observation means that the infinite product in (2.38) converges, at least formally, at a much faster rate.

To complete the review of methods customarily used for the infinite product representation of elementary functions, let us recall an approach to the square root function $\sqrt{1+x}$, which is described in [20], for example. The function is first transformed as

\[
\sqrt{1+x}=\frac{2(x+1)}{x+2}\sqrt{(1+x)\,\frac{(x+2)^{2}}{4(x+1)^{2}}},
\tag{2.39}
\]

and the radicand on the right-hand side is then simplified as

\[
(1+x)\,\frac{(x+2)^{2}}{4(x+1)^{2}}=\frac{(x+2)^{2}}{4(x+1)},
\]

resulting in

\[
\sqrt{1+x}=\frac{2(x+1)}{x+2}\sqrt{\frac{x^{2}+4x+4}{4x+4}}
=\frac{2(x+1)}{x+2}\sqrt{1+\frac{x^{2}}{4x+4}}.
\tag{2.40}
\]

This suggests for the radical factor

\[
\sqrt{1+\frac{x^{2}}{4(x+1)}}
\]

of the right-hand side in (2.40) the same transformation that has just been applied to the function $\sqrt{1+x}$ in (2.39). This yields

\[
\sqrt{1+x}=\frac{2(x+1)}{x+2}\cdot
\frac{2\bigl(\frac{x^{2}}{4(x+1)}+1\bigr)}{\frac{x^{2}}{4(x+1)}+2}
\sqrt{1+\frac{\bigl(\frac{x^{2}}{4(x+1)}\bigr)^{2}}{4\bigl(\frac{x^{2}}{4(x+1)}+1\bigr)}}.
\]

Proceeding further with this algorithm, one arrives at the infinite product representation

\[
\sqrt{1+x}=\prod_{k=0}^{\infty}\frac{2(A_{k}+1)}{A_{k}+2}
\tag{2.41}
\]

for the square root function, where the parameter $A_{k}$ can be obtained from the recurrence

\[
A_{0}=x,\qquad A_{k+1}=\frac{A_{k}^{2}}{4(A_{k}+1)},\qquad k=0,1,2,\ldots.
\]
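The quadratic convergence of the recurrence (each $A_{k+1}$ is roughly the square of $A_{k}/2$) is what makes (2.41) so fast. The sketch below is an illustration of ours (function name is ad hoc):

```python
import math

def sqrt_product(x, n_factors=5):
    # Partial product of (2.41) with A_0 = x and
    # the recurrence A_{k+1} = A_k^2 / (4 (A_k + 1))
    a = x
    prod = 1.0
    for _ in range(n_factors):
        prod *= 2.0 * (a + 1.0) / (a + 2.0)
        a = a * a / (4.0 * (a + 1.0))
    return prod

x = 3.0
for n in (1, 2, 3, 4):
    print(n, sqrt_product(x, n), math.sqrt(1.0 + x))
```

Even for $x=3$, four factors already give $\sqrt{4}=2$ to about seven decimal places.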
Fig. 2.7 Convergence of the expansion in (2.41)
It appears that the convergence rate of the expansion in (2.41) is extremely fast. This assertion is illustrated in Fig. 2.7, where the partial products $P_{0}$, $P_{1}$, and $P_{2}$ of the representation are depicted. This completes the review that we intended to provide the reader of infinite product representations of elementary functions available in the current literature. In the next chapter, the reader's attention will be directed to a totally different subject. Namely, we will begin a review of a collection of methods that are traditionally used for the construction of Green's functions for the two-dimensional Laplace equation. The purpose of such a sharp turn is twofold. First, we aim to give a review of the available procedures for the construction of Green's functions for a variety of boundary-value problems for the Laplace equation that is more comprehensive than those found in other relevant sources. Second, one of those procedures plays a significant role in Chap. 6, where an innovative approach to the expression of elementary functions in terms of infinite products will be discussed.
2.4 Chapter Exercises

2.1 Use Euler's approach and derive the infinite product representation in (2.4) for the hyperbolic cosine function.

2.2 Verify the infinite product representation in (2.24).

2.3 Derive the infinite product representation in (2.37) for the hyperbolic sine function.

2.4 Verify the infinite product representation

\[
\cos x-\sin x=\prod_{n=1}^{\infty}\left(1+\frac{(-1)^{n}4x}{(2n-1)\pi}\right).
\]
2.5 Derive an infinite product representation for the function $a\sin x+b\cos x$, where $a$ and $b$ are real factors.

2.6 Derive an infinite product representation for the function $\sin x+\sin y$.

2.7 Derive an infinite product representation for the function $\cos x+\cos y$.

2.8 Verify the infinite product representation

\[
\tan x+\cot x=\frac{1}{x}\prod_{k=1}^{\infty}\left(1+\frac{4x^{2}}{k^{2}\pi^{2}-4x^{2}}\right).
\]
2.9 Derive an infinite product representation for the function $\cot x+\cot y$.

2.10 Verify the infinite product representation

\[
\cosh x-\cosh y=\frac{x^{2}-y^{2}}{2}
\prod_{k=1}^{\infty}\left(1+\frac{x^{2}+y^{2}}{2k^{2}\pi^{2}}+\frac{(x^{2}-y^{2})^{2}}{16k^{4}\pi^{4}}\right).
\]
2.11 Derive an infinite product representation for the function coth x + coth y.
Chapter 3
Green’s Functions for the Laplace Equation
Our recent work reported in [27, 28] provides convincing evidence of a surprising linkage between the topics of approximation of functions and the Green’s function for some partial differential equations. The linkage appears promising and extremely productive. It has generated an unlooked-for approach to the infinite product representation of elementary functions. Our work here focuses on a comprehensive review of two standard methods that can potentially be (and are, actually) used for the construction of Green’s functions to boundary-value problems for the two-dimensional Laplace equation. These are the method of images, which is reviewed in Sect. 3.1, and the method of conformal mapping, whose review is given in Sect. 3.2. The present chapter is primarily designed to provide a preparatory basis for Chap. 6, which plays a central role in the entire volume. An innovative approach is proposed in that chapter to the infinite product representation of some elementary functions, in particular for a number of trigonometric and hyperbolic functions.
3.1 Construction by the Method of Images

We begin our exposure to the collection of methods that are traditionally used for the construction of Green's functions for the two-dimensional Laplace equation with the method of images. It is probably the simplest of all and represents one of the classical approaches to the problem. It is included in nearly every text on partial differential equations. The scheme of the method is transparent and its algorithm is straightforward, but its applicability is, however, very limited. Only a few closed forms of Green's function, as expressed in terms of elementary functions, can be obtained by the method of images.

The objective of the method is to obtain a closed analytical form of the regular component $R(P,Q)$ of the Green's function

\[
G(P,Q)=-\frac{1}{2\pi}\ln|P-Q|+R(P,Q),\qquad P,Q\in\Omega,
\tag{3.1}
\]
for the well-posed boundary-value problem

\[
\nabla^{2}u(P)=0,\qquad P\in\Omega,
\tag{3.2}
\]
\[
Tu(P)=0,\qquad P\in L,
\tag{3.3}
\]

stated for the Laplace equation. Commonly used terminology [5, 8, 13, 15, 18] applies in this volume to the setting in (3.2) and (3.3). Namely, the latter is said to be the Dirichlet problem if $T$ represents the identity operator $T\equiv I$. The Neumann problem corresponds to $T\equiv\partial/\partial n$, where $n$ is the normal direction to the boundary $L$. The case of $T\equiv\partial/\partial n-\beta$, where $\beta$ is a function of the coordinates of $P$, is usually referred to as the Robin problem; in some sources it is called the mixed problem.

Recall that the singular component $-\frac{1}{2\pi}\ln|P-Q|$ of $G(P,Q)$ can be interpreted as the response at a field (observation) point $P$ to a unit source placed at an arbitrary point $Q$. With this in mind, the regular component $R(P,Q)$ of $G(P,Q)$ is intended, in the method of images, to be expressed as a response to a finite number of unit sources and sinks placed at points $Q_{1}^{*},Q_{2}^{*},\ldots,Q_{m}^{*}$ outside the region $\Omega$. None of those sources and sinks can, according to the definition of the Green's function, be located inside $\Omega$. This makes the regular component

\[
R(P,Q)=\sum_{j=1}^{m}\pm\frac{1}{2\pi}\ln|P-Q_{j}^{*}|
\tag{3.4}
\]

a harmonic function at every point $P$ in $\Omega$ (since all the source points $Q_{j}^{*}$ are outside $\Omega$). The plus sign in (3.4) corresponds to a sink, and the minus to a source. Clearly, $G(P,Q)$, with such a regular component $R(P,Q)$, represents a harmonic function at every point $P\in\Omega$, except at $P=Q$. In addition, the boundary condition in (3.3) is supposed to be satisfied by appropriately choosing locations for $Q_{1}^{*},Q_{2}^{*},\ldots,Q_{m}^{*}$. That is, the trace of the singular component $-T[\frac{1}{2\pi}\ln|P-Q|]$ on the boundary line $L$ is supposed to be compensated by $T[R(P,Q)]$.

Example 3.1 For the first example on the use of the method of images, we consider a classical case of the Dirichlet problem for the upper half-plane $\Omega(x,y)=\{-\infty<x<\infty,\ y>0\}$, and construct its Green's function. The influence of the unit source at a point $Q(\xi,\eta)$ (the singular component of the Green's function),

\[
-\frac{1}{4\pi}\ln\bigl[(x-\xi)^{2}+(y-\eta)^{2}\bigr],
\]

can be compensated, in this case, with a single unit sink placed at the point $Q^{*}(\xi,-\eta)$ located in the lower half-plane and symmetric to $Q(\xi,\eta)$ about the boundary $y=0$ of the half-plane. With the influence of this sink given as

\[
\frac{1}{4\pi}\ln\bigl[(x-\xi)^{2}+(y+\eta)^{2}\bigr],
\]
Fig. 3.1 Derivation of the Green’s function for the quarter-plane
the Green’s function of the Dirichlet problem for the upper half-plane is finally found as G(x, y; ξ, η) =
1 (x − ξ )2 + (y + η)2 . ln 4π (x − ξ )2 + (y − η)2
(3.5)
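The two defining properties of (3.5) — that it vanishes on the boundary $y=0$ and is positive inside the half-plane — can be confirmed at a glance numerically. The sketch below is an illustration of ours (the function name and sample points are ad hoc):

```python
import math

def G_half_plane(x, y, xi, eta):
    # The Green's function (3.5) of the Dirichlet problem on y > 0,
    # with the unit source at (xi, eta) and its image sink at (xi, -eta)
    num = (x - xi) ** 2 + (y + eta) ** 2
    den = (x - xi) ** 2 + (y - eta) ** 2
    return math.log(num / den) / (4.0 * math.pi)

# zero on the boundary y = 0, positive at an interior point
print(G_half_plane(0.7, 0.0, 0.2, 1.5), G_half_plane(0.7, 0.9, 0.2, 1.5))
```

On $y=0$ the numerator and denominator coincide, so the logarithm vanishes identically, which is exactly the image-cancellation argument above.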
Example 3.2 As our next example, we consider another classical case of the Dirichlet problem for the quarter-plane (one may refer to it as the infinite wedge of $\pi/2$), $\Omega(r,\varphi)=\{0<r<\infty,\ 0<\varphi<\pi/2\}$. Since the distance between two points $(r_{1},\varphi_{1})$ and $(r_{2},\varphi_{2})$ is defined in polar coordinates as

\[
\sqrt{r_{1}^{2}-2r_{1}r_{2}\cos(\varphi_{1}-\varphi_{2})+r_{2}^{2}},
\]

the singular component of the Green's function $G(r,\varphi;\rho,\psi)$ reads as

\[
-\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\bigr],
\tag{3.6}
\]

which represents the response at an observation point $M(r,\varphi)\in\Omega$ to the unit source (labeled here and later with a plus sign) placed at $A(\rho,\psi)\in\Omega$ (see Fig. 3.1). In order to compensate the trace of the function in (3.6) (or, in other words, to support the Dirichlet condition) on the boundary segment $y=0$, we place a unit sink (labeled with an asterisk) at $D(\rho,2\pi-\psi)$. The influence of this sink is given by

\[
\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos\bigl(\varphi-(2\pi-\psi)\bigr)+\rho^{2}\bigr].
\tag{3.7}
\]

Similarly, with a unit sink at $B(\rho,\pi-\psi)$, whose influence is defined as

\[
\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos\bigl(\varphi-(\pi-\psi)\bigr)+\rho^{2}\bigr],
\tag{3.8}
\]
Fig. 3.2 Dirichlet–Neumann problem for the quarter-plane
we compensate the trace of (3.6) on the boundary segment $x=0$, while to compensate the traces of the functions in (3.7) and (3.8) on $x=0$ and $y=0$, respectively, a unit source is required at $C(\rho,\pi+\psi)$, with the influence

\[
-\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos\bigl(\varphi-(\pi+\psi)\bigr)+\rho^{2}\bigr].
\tag{3.9}
\]

Hence, the sum of the components in (3.6), (3.7), (3.8), and (3.9), which converts to the compact form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{2}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(n\pi-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-((n-1)\pi+\psi))+\rho^{2}},
\tag{3.10}
\]

represents the Green's function of the Dirichlet problem for the infinite wedge $\{0<r<\infty,\ 0<\varphi<\pi/2\}$.

Example 3.3 Note that if compensatory sources and sinks are placed, for the infinite wedge $\Omega(r,\varphi)=\{0<r<\infty,\ 0<\varphi<\pi/2\}$, in a manner different from that just described in Example 3.2, then the method of images enables us to construct the Green's function for a certain mixed boundary-value problem. Proceeding in compliance with the scheme depicted in Fig. 3.2, one obtains the Green's function

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}
\ln\left[\frac{r^{2}-2r\rho\cos(\varphi-(2\pi-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}}
\times\frac{r^{2}-2r\rho\cos(\varphi-(\pi+\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\pi-\psi))+\rho^{2}}\right]
\tag{3.11}
\]
of the Dirichlet–Neumann boundary-value problem for the infinite wedge of π/2, with Dirichlet and Neumann boundary conditions imposed on the boundary segments y = 0 and x = 0, respectively.
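The boundary behavior of the wedge Green's function (3.10) can be verified numerically as well. The sketch below is ours (the function name and sample values are ad hoc); it evaluates (3.10) on both edges of the quarter-plane, where a Dirichlet Green's function must vanish:

```python
import math

def G_quarter_plane(r, phi, rho, psi):
    # The Green's function (3.10) of the Dirichlet problem for the
    # wedge 0 < phi < pi/2; sinks in the numerator, sources in the denominator
    g = 0.0
    for n in (1, 2):
        num = r * r - 2 * r * rho * math.cos(phi - (n * math.pi - psi)) + rho * rho
        den = r * r - 2 * r * rho * math.cos(phi - ((n - 1) * math.pi + psi)) + rho * rho
        g += math.log(num / den)
    return g / (4.0 * math.pi)

rho, psi = 1.4, 0.5
# both edges phi = 0 and phi = pi/2 give zero
print(G_quarter_plane(2.3, 0.0, rho, psi), G_quarter_plane(2.3, math.pi / 2, rho, psi))
```

On each edge, every sink in the numerator is the mirror image of a source in the denominator, so the logarithms cancel in pairs.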
Fig. 3.3 Dirichlet problem for the infinite wedge of π/3
In a series of examples that follow, we show that although the method of images appears productive for a number of boundary-value problems stated on infinite wedges, it does not work for some of them.

Example 3.4 Consider the Dirichlet problem for the wedge of $\pi/3$, $\Omega(r,\varphi)=\{0<r<\infty,\ 0<\varphi<\pi/3\}$. To construct the Green's function, the reader could follow, in this case, the procedure in detail by examining the scheme depicted in Fig. 3.3. In order to compensate the influence of the singular component

\[
-\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\bigr]
\]

of the Green's function on the boundary fragment $\varphi=0$, we place a compensatory unit sink at $F(\rho,2\pi-\psi)$, while another unit sink is required at $B(\rho,2\pi/3-\psi)$ to support the Dirichlet condition on $\varphi=\pi/3$. To compensate the trace of the latter sink on the boundary fragment $\varphi=0$, a unit source is required at $E(\rho,4\pi/3+\psi)$. The trace of the latter source is compensated on $\varphi=\pi/3$ with a unit sink at $D(\rho,4\pi/3-\psi)$, while the trace of this sink is compensated on $\varphi=0$ with a unit source placed at $C(\rho,2\pi/3+\psi)$. Thus, the aggregate influence of the five compensatory sources and sinks located outside $\Omega$, as shown in Fig. 3.3, represents the regular component $R(r,\varphi;\rho,\psi)$ of the Green's function of the Dirichlet problem for the wedge of $\pi/3$. The Green's function itself is ultimately obtained in the form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{3}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(\frac{2n\pi}{3}-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{2(n-1)\pi}{3}+\psi))+\rho^{2}}.
\tag{3.12}
\]
In contrast to the case of the mixed problem considered in Example 3.3 for the wedge of π/2, the method of images fails for the problem considered in the next example.
Fig. 3.4 Failure of the method of images for a mixed problem
Example 3.5 To follow the procedure in detail and observe its failure for the Dirichlet–Neumann problem stated for the wedge of $\pi/3$, the reader is referred to the scheme of Fig. 3.4. Clearly, the Dirichlet condition on $\varphi=0$ is supported with a unit sink placed at $F(\rho,2\pi-\psi)$. To allow this sink to support the Neumann condition on $\varphi=\pi/3$, a unit sink is also required at $C(\rho,2\pi/3+\psi)$. As to the Neumann condition on $\varphi=\pi/3$, the unit source at $A(\rho,\psi)$ must be supported with a unit source at $B(\rho,2\pi/3-\psi)$, which, in turn, should be paired with a unit sink placed at $E(\rho,4\pi/3+\psi)$. The latter sink has to be paired with a unit sink at $D(\rho,4\pi/3-\psi)$ for the Neumann condition supported on $\varphi=\pi/3$. If we now take a look at the two sinks at $C(\rho,2\pi/3+\psi)$ and $D(\rho,4\pi/3-\psi)$, they do not, unfortunately, support the Dirichlet condition on the boundary fragment $\varphi=0$. And this is what indicates, in fact, the failure of the method for the mixed problem under consideration.

With the next example, we extend the number of problems stated on infinite wedges for which the method of images does work.

Example 3.6 Consider the case of the Dirichlet problem stated on the infinite wedge of $\pi/4$, $\Omega(r,\varphi)=\{0<r<\infty,\ 0<\varphi<\pi/4\}$. The scheme depicted in Fig. 3.5 allows the reader to follow the procedure in detail and helps ultimately to obtain the Green's function that we are looking for in the compact form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{4}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(\frac{n\pi}{2}-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{(n-1)\pi}{2}+\psi))+\rho^{2}}.
\tag{3.13}
\]
Example 3.7 The Green’s function of the mixed problem for the wedge of π/4 can also be obtained by the method of images. To justify this claim, consider the statement with the Dirichlet and Neumann conditions imposed on the boundary segments ϕ = 0 and ϕ = π/4, respectively.
Fig. 3.5 Dirichlet problem stated on the infinite wedge of π/4
Fig. 3.6 Dirichlet–Neumann problem on the infinite wedge of π/4
In order to trace out the image method, we examine the scheme shown in Fig. 3.6. Combining the influence of the eight sources and sinks that emerge in this case, we obtain the Green's function that we are looking for in the form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{2}
\ln\left[\frac{r^{2}-2r\rho\cos(\varphi-(\frac{(2n-1)\pi}{2}+\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{(2n-1)\pi}{2}-\psi))+\rho^{2}}
\times\frac{r^{2}-2r\rho\cos(\varphi-(n\pi-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-((n-1)\pi+\psi))+\rho^{2}}\right].
\tag{3.14}
\]
Example 3.8 As to the Dirichlet problem for the wedge of $\pi/6$, the scheme of the method of images results in twelve unit sources and sinks, the aggregate of which represents the Green's function of interest, which appears in the form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{6}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(\frac{n\pi}{3}-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{(n-1)\pi}{3}+\psi))+\rho^{2}}.
\tag{3.15}
\]
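The wedge formulas (3.12), (3.13), and (3.15) all share one pattern: sinks at the angles $n\alpha-\psi$ in the numerators and sources at $(n-1)\alpha+\psi$ in the denominators, for a suitable angular step $\alpha$ and number of terms $N$. The sketch below is ours (the function name and its parametrization are ad hoc); it encodes that pattern and checks that (3.15), with $N=6$ and $\alpha=\pi/3$, vanishes on both edges of the wedge of $\pi/6$:

```python
import math

def G_wedge(r, phi, rho, psi, N, alpha):
    # Common pattern of (3.12), (3.13), (3.15): N image pairs with
    # sinks at angles n*alpha - psi and sources at (n-1)*alpha + psi
    g = 0.0
    for n in range(1, N + 1):
        num = r * r - 2 * r * rho * math.cos(phi - (n * alpha - psi)) + rho * rho
        den = r * r - 2 * r * rho * math.cos(phi - ((n - 1) * alpha + psi)) + rho * rho
        g += math.log(num / den)
    return g / (4.0 * math.pi)

rho, psi = 1.2, 0.3
# the wedge of pi/6, formula (3.15): zero on phi = 0 and phi = pi/6
print(G_wedge(2.0, 0.0, rho, psi, 6, math.pi / 3),
      G_wedge(2.0, math.pi / 6, rho, psi, 6, math.pi / 3))
```

Reflection across either edge maps every source angle onto a sink angle of the system, which is why both boundary values come out as zero.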
Analysis of the boundary-value problems for infinite wedges considered so far allows a couple of generalizations. The following two examples are presented in order to provide the reader with details.

Example 3.9 Observing the expression derived earlier for the Green's function of the Dirichlet problem on the wedge of $\pi/2$ and presented in (3.10), along with the one obtained for the wedge of $\pi/4$ (see (3.13)), we arrive at the generalization

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{2^{k}}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(\frac{n\pi}{2^{k-1}}-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{(n-1)\pi}{2^{k-1}}+\psi))+\rho^{2}},
\tag{3.16}
\]

representing the Green's function of the Dirichlet problem for the wedge of $\pi/2^{k}$, where $k=0,1,2,\ldots$. It is worth noting that the case of $k=0$, which corresponds to the wedge of $\pi$, or, in other words, to the upper half-plane $y>0$, reads from (3.16) as

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}
\ln\frac{r^{2}-2r\rho\cos(\varphi+\psi)+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}},
\]

representing the Green's function derived earlier (see (3.5)) and expressed here in polar coordinates.

Example 3.10 Upon analyzing the expressions in (3.12) and (3.15), obtained for the wedges of $\pi/3$ and $\pi/6$, we obtain the Green's function of the Dirichlet problem for the wedge of $\pi/(3\cdot 2^{k})$ in the form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\sum_{n=1}^{3\cdot 2^{k}}
\ln\frac{r^{2}-2r\rho\cos(\varphi-(\frac{2n\pi}{3\cdot 2^{k}}-\psi))+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-(\frac{2(n-1)\pi}{3\cdot 2^{k}}+\psi))+\rho^{2}},
\tag{3.17}
\]
where $k=0,1,2,\ldots$. Observe that the case $k=0$ corresponds to the wedge of $\pi/3$, while the case of $k=1$ is associated with the wedge of $\pi/6$, and so on.

Earlier in this section, we presented a convincing example of a problem for which the method of images may fail when applied to a mixed (Dirichlet–Neumann) boundary-value problem stated on a wedge. Note that the method is not necessarily effective even for the Dirichlet problem on a wedge. The following example is presented to justify this assertion.

Example 3.11 Consider the Dirichlet problem for the infinite wedge $\Omega(r,\varphi)=\{0<r<\infty,\ 0<\varphi<2\pi/3\}$ and try to construct its Green's function. The failure of the method can be observed, in this case, with the aid of the scheme shown in Fig. 3.7. Let a unit source (which produces the singular component of the Green's function) be located at $A(\rho,\psi)\in\Omega$. To compensate its trace on the fragment $\varphi=0$ of the boundary of $\Omega$, place a compensatory sink at $D(\rho,2\pi-\psi)\notin\Omega$.
Fig. 3.7 Failure of the method of images for a Dirichlet problem
The trace of the latter on the boundary fragment $\varphi=2\pi/3$ is compensated, in turn, with a unit source at $C(\rho,4\pi/3+\psi)\notin\Omega$, whose trace on $\varphi=0$ should be compensated with a unit sink at $B(\rho,2\pi/3-\psi)$, which is, unfortunately, located inside $\Omega$. And this is what justifies the failure of the method. Why so? Because compensatory sources and sinks cannot, according to the definition of the Green's function, be located inside $\Omega$.

Thus, the above example illustrates the fact that the method of images may fail in the construction of the Green's function for the Dirichlet problem stated on a wedge that allows cyclic symmetry. To observe some other cases in which the method fails, the reader is invited (in the chapter exercises) to apply the procedure to other wedges (of $2\pi/5$ or $2\pi/7$, for example) also allowing cyclic symmetry.

Based on the experience gained so far, it sounds reasonable to make the following observation: the method of images appears workable for Dirichlet problems stated on wedges of $\pi/k$, where $k$ represents an integer. But a word of caution is appropriate as to the above assertion. It is just an assertion, and the reader is strongly encouraged to prove it rigorously.

Example 3.12 For the next illustrative example on the effective implementation of the method of images, let us apply it to the construction of the Green's function for another classical case, the Dirichlet problem stated on the disk of radius $a$. The strategy of tackling the current problem with the method of images is based on an obvious observation concerning the shape of equipotential lines in the field generated by a point source or sink. Since these lines represent concentric circles centered at the generating point, the following statement looks reasonable.

That is, for every location $A$ of a unit source inside the disk, there exists a proper location $B$ of the compensatory unit sink outside the disk such that the circumference of the disk is an equipotential line for the field generated by both the source and the sink.

Applying the strategy just described, we assume that the disk is centered at the origin of the polar coordinate system $r,\varphi$ and let the unit source generating the
Fig. 3.8 Derivation of the Green’s function for disk
singular component

\[
-\frac{1}{4\pi}\ln\bigl[r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\bigr]
\tag{3.18}
\]
(3.19)
is located, must be on the extension of the radial line of A. In other words, the angular coordinate ψ of B must be the same as that of A. As to the radial coordinate of B, it should be determined from the condition that the sum of (3.18) and (3.19) is a constant, say λ, when M is taken to C (r = a). For the sake of convenience, we express λ as λ=−
1 ln μ. 4π
(3.20)
This yields
$$-\frac{1}{4\pi}\ln\frac{a^{2}-2a\rho\cos(\varphi-\psi)+\rho^{2}}{a^{2}-2a\bar{\rho}\cos(\varphi-\psi)+\bar{\rho}^{2}}=-\frac{1}{4\pi}\ln\mu,$$
or
$$a^{2}-2a\rho\cos(\varphi-\psi)+\rho^{2}=\mu\left(a^{2}-2a\bar{\rho}\cos(\varphi-\psi)+\bar{\rho}^{2}\right).$$
Making the substitution
$$\bar{\rho}=\omega\rho, \qquad (3.21)$$
we transform the above equation into
$$a^{2}-2a\rho\cos(\varphi-\psi)+\rho^{2}=\mu\left(a^{2}-2a\omega\rho\cos(\varphi-\psi)+\omega^{2}\rho^{2}\right). \qquad (3.22)$$
Clearly, the equation in (3.22) must hold for any value of ϕ − ψ. So, by assuming, for instance, ϕ − ψ = π/2 (which implies cos(ϕ − ψ) = 0), we reduce (3.22) to
$$a^{2}+\rho^{2}=\mu\left(a^{2}+\omega^{2}\rho^{2}\right). \qquad (3.23)$$
Subtracting (3.23) from (3.22), we in turn have
$$2a\rho\cos(\varphi-\psi)=2\mu a\omega\rho\cos(\varphi-\psi).$$
This simply means that μω = 1, that is, the values of μ and ω are reciprocals. Substitution of μ = 1/ω into (3.23) yields
$$a^{2}+\rho^{2}=\frac{1}{\omega}\left(a^{2}+\omega^{2}\rho^{2}\right).$$
The above equation can be rewritten as
$$a^{2}(\omega-1)=\rho^{2}\omega(\omega-1). \qquad (3.24)$$
So, ω = 1 represents one of the roots of the quadratic equation in (3.24). This root is meaningless, however, because the relation in (3.21) would then place the compensatory point B (see Fig. 3.8) at A itself. The second root ω = a²/ρ² of (3.24) implies that
$$\bar{\rho}=\frac{a^{2}}{\rho} \quad\text{and}\quad \mu=\frac{\rho^{2}}{a^{2}}.$$
Thus, we have found the location B(a²/ρ, ψ) at which the compensatory unit sink should be placed. Such a point is usually referred to as the image of A about the circumference of the disk. We have also found the value of λ in (3.20):
$$\lambda=-\frac{1}{4\pi}\ln\frac{\rho^{2}}{a^{2}}.$$
To complete the construction of the Green's function, observe that the unit sink at B generates the potential field
$$\frac{1}{4\pi}\ln\left(\frac{a^{4}}{\rho^{2}}-2r\,\frac{a^{2}}{\rho}\cos(\varphi-\psi)+r^{2}\right)$$
at a point M(r, ϕ) inside the disk. Hence, the potential field generated at M(r, ϕ) by both the unit source at A and the compensatory unit sink at B is defined as
$$\frac{1}{4\pi}\ln\frac{a^{4}-2r\rho a^{2}\cos(\varphi-\psi)+r^{2}\rho^{2}}{\rho^{2}\left(r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\right)}. \qquad (3.25)$$
In other words, (3.25) represents a function that is harmonic everywhere inside the disk except at the source point (ρ, ψ). In addition, the function in (3.25)
takes on the constant value −(1/4π) ln(ρ²/a²) on the boundary of the disk. Thus, compensating the function in (3.25) with the opposite value −(1/4π) ln(a²/ρ²), one ultimately obtains the Green's function of the Dirichlet problem for the disk of radius a in the form
$$G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\ln\frac{a^{4}-2r\rho a^{2}\cos(\varphi-\psi)+r^{2}\rho^{2}}{\rho^{2}\left(r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\right)}-\frac{1}{4\pi}\ln\frac{a^{2}}{\rho^{2}},$$
which reduces to
$$G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\ln\frac{a^{4}-2r\rho a^{2}\cos(\varphi-\psi)+r^{2}\rho^{2}}{a^{2}\left(r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}\right)}. \qquad (3.26)$$
Note that another, equivalent, compact form,
$$G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\ln\frac{|z\bar{\zeta}-a^{2}|}{a\,|z-\zeta|},$$
is often used in the literature for the representation in (3.26), where complex-variable notation is employed for the observation point z = r(cos ϕ + i sin ϕ) and the source point ζ = ρ(cos ψ + i sin ψ). It is important to note that all the Green's functions presented in this section are expressed in closed analytical form (in terms of a finite number of elementary functions). Later in this book, we will further expand the sphere of productive use of the method of images. But the purpose of that expansion will be different from just obtaining Green's functions. The method will be employed to obtain specific representations of some Green's functions, with a subsequent focus on the approximation of elementary functions.
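As a quick numerical sanity check — our own illustration, not part of the book's derivation, with hypothetical function names — the representation in (3.26) can be evaluated directly: it must vanish for observation points on the circumference r = a (the boundary condition) and be symmetric with respect to an interchange of the observation and source points (the reciprocity of the Green's function).

```python
import math

def disk_green(a, r, phi, rho, psi):
    """Green's function (3.26) of the Dirichlet problem for a disk of radius a."""
    num = a**4 - 2.0 * r * rho * a**2 * math.cos(phi - psi) + (r * rho) ** 2
    den = a**2 * (r**2 - 2.0 * r * rho * math.cos(phi - psi) + rho**2)
    return math.log(num / den) / (4.0 * math.pi)

a, rho, psi = 2.0, 1.1, 0.7
# Boundary property: G vanishes for any observation point on r = a.
for phi in (0.0, 1.0, 2.5):
    assert abs(disk_green(a, a, phi, rho, psi)) < 1e-12
# Symmetry of the Green's function in its two points.
assert abs(disk_green(a, 0.5, 0.3, rho, psi)
           - disk_green(a, rho, psi, 0.5, 0.3)) < 1e-12
# Positive inside the disk, blowing up logarithmically near the source point.
assert disk_green(a, 1.0, psi, rho, psi) > 0.0
```

The boundary check succeeds exactly because at r = a the numerator of the argument of the logarithm factors as a²(a² − 2aρ cos(ϕ − ψ) + ρ²), canceling the denominator.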
3.2 Method of Conformal Mapping

Another method that has traditionally been employed for the construction of Green's functions for the two-dimensional Laplace equation is the method of conformal mapping [8, 13, 16]. It is rooted in the classical theory of complex analysis. To introduce its background, let w(z, ζ) represent a function of a complex variable that maps a simply connected region Ω bounded by L conformally onto the interior of the unit disk |w| ≤ 1, with the point z = ζ mapped into the center w = 0 of the disk, that is, w(ζ, ζ) = 0. It is worth noting that the conformal mapping of a simply connected region onto a disk is not unique. Indeed, it is defined up to an arbitrary rotation about the disk's center. As the reader may have learned from a course in complex analysis [5], the Green's function of the Dirichlet problem
$$\nabla^{2}u(P)=0,\quad P\in\Omega, \qquad (3.27)$$
$$u(P)=0,\quad P\in L, \qquad (3.28)$$
can be expressed in terms of the mapping function w(z, ζ) as
$$G(P,Q)=-\frac{1}{2\pi}\ln\bigl|w(z,\zeta)\bigr|, \qquad (3.29)$$
where z = x + iy represents the observation point P, while ζ = ξ + iη represents the source point Q. This statement can readily be justified. In doing so, we observe that since w = w(z, ζ) performs a conformal mapping of Ω, it is an analytic function of z in Ω and w(z, ζ) = 0 if z = ζ, while dw/dz ≠ 0 everywhere in Ω, the point z = ζ included. Consequently, z = ζ represents a simple zero of w(z, ζ). That is why one can express the latter in the form
$$w(z,\zeta)=(z-\zeta)\,\Phi(z,\zeta), \qquad (3.30)$$
with Φ(z, ζ) representing an analytic function of z in Ω that is nonzero at z = ζ, that is, Φ(ζ, ζ) ≠ 0. Since an analytic function of an analytic function is also analytic, we have that the function
$$\ln\Phi(z,\zeta)=\ln\bigl|\Phi(z,\zeta)\bigr|+i\arg\Phi(z,\zeta) \qquad (3.31)$$
is analytic in Ω. Then the real component ln|Φ(z, ζ)| in (3.31) represents a harmonic function in Ω, and so, obviously, does
$$-\frac{1}{2\pi}\ln\bigl|\Phi(z,\zeta)\bigr|.$$
Hence, in light of the relation in (3.30), the function in (3.29) reads as
$$-\frac{1}{2\pi}\ln\bigl|w(z,\zeta)\bigr|=-\frac{1}{2\pi}\ln|z-\zeta|-\frac{1}{2\pi}\ln\bigl|\Phi(z,\zeta)\bigr|,$$
which can be rewritten in terms of the Cartesian coordinates of z and ζ as
$$-\frac{1}{2\pi}\ln\bigl|w(z,\zeta)\bigr|=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^{2}+(y-\eta)^{2}}-\frac{1}{2\pi}\ln\bigl|\Phi(z,\zeta)\bigr|.$$
The reader can easily verify that the component
$$-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^{2}+(y-\eta)^{2}}$$
is a harmonic function of x and y almost everywhere in Ω. More specifically, it is harmonic at every point (x, y) ∈ Ω except at (x, y) = (ξ, η). This implies that the function in (3.29), as a function of x and y, is also harmonic everywhere in Ω except at (x, y) = (ξ, η). So, what has already been shown is that the function in (3.29) meets two of the three defining properties of the Green's function. Indeed, it is harmonic everywhere in Ω, except at (x, y) = (ξ, η), and possesses a logarithmic singularity as (x, y) →
(ξ, η). But what about the third defining property? Does the function in (3.29) vanish on the boundary L of Ω? The answer is yes, because from the fact that the function w(z, ζ) maps L onto the circumference of the unit disk, it follows that |w(z, ζ)| = 1 for z ∈ L, based on which we have
$$-\frac{1}{2\pi}\ln\bigl|w(z,\zeta)\bigr|=0 \quad\text{for } z\in L.$$
Thus, (3.29) indeed represents the Green's function for the Dirichlet problem stated in (3.27) and (3.28). In what follows, we present a few examples of the construction of Green's functions by the method of conformal mapping.

Example 3.13 Let the method of conformal mapping be applied to the Dirichlet problem stated on the unit disk |z| ≤ 1. The family of functions w(z, ζ) that maps the unit disk conformally onto itself, with the point z = ζ mapped onto the disk's center, is defined [5] as
$$w(z,\zeta)=e^{i\beta}\,\frac{z-\zeta}{z\bar{\zeta}-1},$$
where β is a real parameter that is responsible for the rotation of the disk about its center. For the sake of uniqueness, we neglect the rotation by assuming β = 0. In compliance with (3.29), one arrives at the expression
$$G(z,\zeta)=-\frac{1}{2\pi}\ln\left|\frac{z-\zeta}{z\bar{\zeta}-1}\right|=\frac{1}{2\pi}\ln\left|\frac{z\bar{\zeta}-1}{z-\zeta}\right| \qquad (3.32)$$
for the Green's function that we are looking for. Expressing the observation point z and the source point ζ in polar coordinates,
$$z=r(\cos\varphi+i\sin\varphi) \quad\text{and}\quad \zeta=\rho(\cos\psi+i\sin\psi),$$
we transform the numerator in the argument of the logarithm in (3.32) as
$$z\bar{\zeta}-1=r(\cos\varphi+i\sin\varphi)\,\rho(\cos\psi-i\sin\psi)-1$$
$$=r\rho\bigl[(\cos\varphi\cos\psi+\sin\varphi\sin\psi)+i(\sin\varphi\cos\psi-\cos\varphi\sin\psi)\bigr]-1$$
$$=r\rho\cos(\varphi-\psi)-1+ir\rho\sin(\varphi-\psi).$$
The modulus of the above appears as
$$\bigl|z\bar{\zeta}-1\bigr|=\sqrt{\bigl(r\rho\cos(\varphi-\psi)-1\bigr)^{2}+\bigl(r\rho\sin(\varphi-\psi)\bigr)^{2}}=\sqrt{r^{2}\rho^{2}-2r\rho\cos(\varphi-\psi)+1}. \qquad (3.33)$$
The denominator in the argument of the logarithm in (3.32) represents the distance between z and ζ, which is
$$|z-\zeta|=\sqrt{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}}. \qquad (3.34)$$
Substituting (3.33) and (3.34) into (3.32), we finally obtain the Green's function of the Dirichlet problem for the unit disk:
$$G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\ln\frac{r^{2}\rho^{2}-2r\rho\cos(\varphi-\psi)+1}{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}}. \qquad (3.35)$$
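Since (3.35) must coincide with the image-method result (3.26) once a is set to unity, the agreement of the two methods can be confirmed in a few lines. The snippet below is our own sketch (the function names are hypothetical), not part of the book's text:

```python
import math

def disk_green_conformal(r, phi, rho, psi):
    """Unit-disk Green's function (3.35), obtained via conformal mapping."""
    num = (r * rho) ** 2 - 2.0 * r * rho * math.cos(phi - psi) + 1.0
    den = r**2 - 2.0 * r * rho * math.cos(phi - psi) + rho**2
    return math.log(num / den) / (4.0 * math.pi)

def disk_green_images(r, phi, rho, psi):
    """Green's function (3.26) from the method of images, with a = 1."""
    num = 1.0 - 2.0 * r * rho * math.cos(phi - psi) + (r * rho) ** 2
    den = r**2 - 2.0 * r * rho * math.cos(phi - psi) + rho**2
    return math.log(num / den) / (4.0 * math.pi)

# The two constructions agree at arbitrary interior points ...
for (r, phi) in ((0.2, 0.1), (0.7, 2.0), (0.99, -1.3)):
    assert abs(disk_green_conformal(r, phi, 0.5, 0.9)
               - disk_green_images(r, phi, 0.5, 0.9)) < 1e-14
# ... and both vanish on the circumference r = 1.
assert abs(disk_green_conformal(1.0, 0.3, 0.5, 0.9)) < 1e-12
```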
The reader may compare this representation with the one derived earlier in Sect. 3.1 (see (3.26), where a is to be set equal to unity).

Example 3.14 The method of conformal mapping will be used here to construct the Green's function of the Dirichlet problem for the infinite strip Ω = {−∞ < x < ∞, 0 ≤ y ≤ π}. In a course on complex analysis [5], the reader may have learned that the family of functions
$$w(z,\zeta)=e^{i\beta}\,\frac{e^{z}-e^{\zeta}}{e^{z}-e^{\bar{\zeta}}} \qquad (3.36)$$
maps the infinite strip conformally onto the unit disk |w| ≤ 1, while the point z = ζ is mapped onto the disk's center w = 0. For the sake of uniqueness, we again assume β = 0 for the rotation parameter. Before substituting the mapping function from (3.36) into (3.29), we express the observation point and the source point in Cartesian coordinates,
$$z=x+iy \quad\text{and}\quad \zeta=\xi+i\eta,$$
and then transform the modulus of the numerator in (3.36) by means of the classical Euler formula:
$$\bigl|e^{z}-e^{\zeta}\bigr|=\sqrt{\mathrm{Re}^{2}\bigl(e^{z}-e^{\zeta}\bigr)+\mathrm{Im}^{2}\bigl(e^{z}-e^{\zeta}\bigr)},$$
where the real and imaginary parts read
$$\mathrm{Re}\bigl(e^{z}-e^{\zeta}\bigr)=e^{x}\cos y-e^{\xi}\cos\eta \quad\text{and}\quad \mathrm{Im}\bigl(e^{z}-e^{\zeta}\bigr)=e^{x}\sin y-e^{\xi}\sin\eta.$$
Trivial complex algebra further yields
$$\bigl|e^{z}-e^{\zeta}\bigr|=\sqrt{e^{2x}+e^{2\xi}-2e^{x+\xi}\cos(y-\eta)}=e^{\xi}\sqrt{1-2e^{x-\xi}\cos(y-\eta)+e^{2(x-\xi)}}.$$
The modulus of the denominator in (3.36) transforms similarly into
$$\bigl|e^{z}-e^{\bar{\zeta}}\bigr|=e^{\xi}\sqrt{1-2e^{x-\xi}\cos(y+\eta)+e^{2(x-\xi)}}.$$
This puts the Green's function we are looking for in the form
$$G(x,y;\xi,\eta)=\frac{1}{4\pi}\ln\frac{1-2e^{x-\xi}\cos(y+\eta)+e^{2(x-\xi)}}{1-2e^{x-\xi}\cos(y-\eta)+e^{2(x-\xi)}}. \qquad (3.37)$$
An equivalent but more compact form for this Green's function can be obtained by multiplying both the numerator and the denominator in (3.37) by e^(ξ−x). This yields
$$G(x,y;\xi,\eta)=\frac{1}{4\pi}\ln\frac{e^{x-\xi}+e^{\xi-x}-2\cos(y+\eta)}{e^{x-\xi}+e^{\xi-x}-2\cos(y-\eta)},$$
which transforms, upon dividing the numerator and the denominator of the argument of the logarithm by 2, into the equivalent form
$$G(x,y;\xi,\eta)=\frac{1}{4\pi}\ln\frac{\cosh(x-\xi)-\cos(y+\eta)}{\cosh(x-\xi)-\cos(y-\eta)}. \qquad (3.38)$$
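The representation (3.38) is easy to probe numerically. The following sketch (ours, with a hypothetical function name) verifies that it vanishes on both edges y = 0 and y = π of the strip and is positive inside, as a Green's function of the Dirichlet problem should be:

```python
import math

def strip_green(x, y, xi, eta):
    """Green's function (3.38) of the Dirichlet problem for the strip 0 <= y <= pi."""
    num = math.cosh(x - xi) - math.cos(y + eta)
    den = math.cosh(x - xi) - math.cos(y - eta)
    return math.log(num / den) / (4.0 * math.pi)

xi, eta = 0.4, 1.2
for x in (-3.0, 0.0, 2.0):
    assert abs(strip_green(x, 0.0, xi, eta)) < 1e-12       # vanishes on y = 0
    assert abs(strip_green(x, math.pi, xi, eta)) < 1e-12   # vanishes on y = pi
assert strip_green(0.5, 1.1, xi, eta) > 0.0                # positive inside
```

The boundary values vanish identically because cos(y + η) = cos(y − η) when y = 0, and cos(π + η) = cos(π − η) when y = π.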
Example 3.15 The half-plane Ω = {−∞ < x < ∞, y ≥ 0} maps conformally onto the unit disk (with the point z = ζ mapped onto the disk's center w = 0) by a family of functions, one of which is [5]
$$w(z,\zeta)=\frac{z-\zeta}{z-\bar{\zeta}}.$$
Thus, the Green's function of the Dirichlet problem for the Laplace equation on the half-plane is given by
$$G(z,\zeta)=-\frac{1}{2\pi}\ln\left|\frac{z-\zeta}{z-\bar{\zeta}}\right|,$$
which reads in Cartesian coordinates as
$$G(x,y;\xi,\eta)=\frac{1}{4\pi}\ln\frac{(x-\xi)^{2}+(y+\eta)^{2}}{(x-\xi)^{2}+(y-\eta)^{2}}, \qquad (3.39)$$
while in polar coordinates it is
$$G(r,\varphi;\rho,\psi)=\frac{1}{4\pi}\ln\frac{r^{2}-2r\rho\cos(\varphi+\psi)+\rho^{2}}{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}}. \qquad (3.40)$$
Recall that the representations of the Green's function for the half-plane shown in (3.39) and (3.40) were already obtained in Sect. 3.1 by the method of images (see Examples 3.1 and 3.9).
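The connection between the two derivations can be made concrete: (3.39) is exactly a unit source at ζ minus a unit sink at the mirror image ζ̄. The sketch below (our illustration; the names are ours) checks this identity and the boundary condition on y = 0:

```python
import math

def halfplane_green(x, y, xi, eta):
    """Green's function (3.39) for the upper half-plane y >= 0."""
    num = (x - xi) ** 2 + (y + eta) ** 2
    den = (x - xi) ** 2 + (y - eta) ** 2
    return math.log(num / den) / (4.0 * math.pi)

def halfplane_green_images(x, y, xi, eta):
    """The same function assembled as a source at zeta minus a sink at conj(zeta)."""
    z, zeta = complex(x, y), complex(xi, eta)
    return (-math.log(abs(z - zeta))
            + math.log(abs(z - zeta.conjugate()))) / (2.0 * math.pi)

for (x, y) in ((0.3, 0.9), (-1.0, 2.0)):
    assert abs(halfplane_green(x, y, 0.5, 1.5)
               - halfplane_green_images(x, y, 0.5, 1.5)) < 1e-12
assert abs(halfplane_green(2.0, 0.0, 0.5, 1.5)) < 1e-12  # vanishes on y = 0
```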
Note that in each of the problems reviewed so far in this section, the regions under consideration are mapped conformally onto the unit disk by an elementary function. The problem that we face in Example 3.16 below represents a challenge: it aims at the Green's function of the Dirichlet problem stated on a rectangle, and the rectangle cannot [5], unfortunately, be mapped conformally onto the interior of a disk by an elementary function.

Example 3.16 Construct the Green's function of the Dirichlet problem stated for the two-dimensional Laplace equation on the rectangle Ω = {0 ≤ x ≤ a, 0 ≤ y ≤ b}. From [5] the reader will have learned that the rectangle maps conformally onto the unit disk (with a point z = ζ mapped onto the disk's center w = 0) by the function
$$w(z,\zeta)=\frac{W(z-\zeta;\omega_{1},\omega_{2})\cdot W(z+\zeta;\omega_{1},\omega_{2})}{W(z-\bar{\zeta};\omega_{1},\omega_{2})\cdot W(z+\bar{\zeta};\omega_{1},\omega_{2})},$$
defined in terms of the special function W(t; ω₁, ω₂), the Weierstrass elliptic function. The parameters ω₁ and ω₂ are determined through the dimensions of the rectangle as ω₁ = 2a and ω₂ = 2ib. To compute the Weierstrass function in practice, the reader might use its series representation given in [9], for example,
$$W(t;\omega_{1},\omega_{2})=\frac{1}{t^{2}}+\sum_{m,n}\left[\frac{1}{(t-2m\omega_{1}-2n\omega_{2})^{2}}-\frac{1}{(2m\omega_{1}+2n\omega_{2})^{2}}\right],$$
where in the summation we assume that the indices m and n are not equal to zero simultaneously. Thus, the Green's function of the Dirichlet problem for the rectangle is expressed in terms of the Weierstrass function as
$$G(z,\zeta)=\frac{1}{2\pi}\ln\left|\frac{W(z-\bar{\zeta};2a,2ib)\cdot W(z+\bar{\zeta};2a,2ib)}{W(z-\zeta;2a,2ib)\cdot W(z+\zeta;2a,2ib)}\right|. \qquad (3.41)$$
In Chap. 5, we will revisit the construction of the Green's function of the Dirichlet problem for a rectangle, and an alternative form of (3.41) will be derived using the method of eigenfunction expansion.
3.3 Chapter Exercises

3.1 Derive the Green's function presented in (3.13) for the Dirichlet problem stated for the Laplace equation on the infinite wedge of π/4.

3.2 Derive the Green's function presented in (3.15) for the Dirichlet problem stated for the Laplace equation on the infinite wedge of π/6.
3.3 Show that the method of images fails in the construction of the Green's function of the Dirichlet problem on the infinite wedge of 2π/5.

3.4 Show that the method of images fails in the construction of the Green's function of the Dirichlet problem on the infinite wedge of 2π/7.

3.5 Prove that the method of images is efficient for the construction of the Green's function of the Dirichlet problem stated on the infinite wedge of π/k, where k is an integer.
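Exercises 3.3–3.5 lend themselves to a small numerical experiment. The sketch below — our own illustration, not part of the book, with hypothetical names — builds the image system for a wedge of opening α by repeated reflections (rotated sources at ψ + 2mα alternating with reflected sinks at −ψ + 2mα). For α = π/k the assembled function vanishes on both edges and no compensatory singularity falls inside the wedge; for α = 2π/5 a sink lands inside the region, which is exactly how the method breaks down there:

```python
import math

def orbit_size(alpha):
    """Smallest n with n*(2*alpha) a multiple of 2*pi: the number of rotated
    copies generated by successive reflections across the wedge edges."""
    n, step, two_pi = 1, 2.0 * alpha, 2.0 * math.pi
    while True:
        r = (n * step) % two_pi
        if min(r, two_pi - r) < 1e-9:
            return n
        n += 1

def images_for_wedge(alpha, psi):
    """Polar angles of the image system for a unit source at angle psi inside
    the wedge 0 < phi < alpha, reduced modulo 2*pi."""
    n, two_pi = orbit_size(alpha), 2.0 * math.pi
    sources = [(psi + 2.0 * m * alpha) % two_pi for m in range(n)]
    sinks = [(-psi + 2.0 * m * alpha) % two_pi for m in range(n)]
    return sources, sinks

def green_by_images(alpha, r, phi, rho, psi):
    """Candidate Green's function assembled from the images (all at radius rho)."""
    sources, sinks = images_for_wedge(alpha, psi)
    z = complex(r * math.cos(phi), r * math.sin(phi))
    pt = lambda th: rho * complex(math.cos(th), math.sin(th))
    g = sum(math.log(abs(z - pt(th))) for th in sinks)
    g -= sum(math.log(abs(z - pt(th))) for th in sources)
    return g / (2.0 * math.pi)

# Wedge of pi/3: the function vanishes on both edges ...
alpha, rho, psi = math.pi / 3.0, 1.3, 0.4
assert abs(green_by_images(alpha, 0.8, 0.0, rho, psi)) < 1e-12
assert abs(green_by_images(alpha, 2.0, alpha, rho, psi)) < 1e-12
# ... and no compensatory sink falls strictly inside the wedge:
_, snks = images_for_wedge(alpha, psi)
assert all(not (0.0 < th < alpha) for th in snks)
# Wedge of 2*pi/5 (Exercise 3.3): a compensatory sink lands inside the region.
bad = 2.0 * math.pi / 5.0
_, snks5 = images_for_wedge(bad, psi)
assert any(0.0 < th < bad for th in snks5)
```

This is, of course, only evidence, not the proof requested in Exercise 3.5.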
Chapter 4
Green’s Functions for ODE
As was convincingly shown in Chap. 3, the methods of images and conformal mapping are helpful in obtaining Green's functions for the two-dimensional Laplace equation. It is worth noting, at the same time, that the number of problems for which these methods are productive is notably limited. To support this assertion, recall that mixed boundary-value problems with Robin conditions imposed on a piece of the boundary are not within the reach of these methods. An alternative approach to the construction of Green's functions for many partial differential equations is the method of eigenfunction expansion. We do not, however, immediately proceed with its coverage, postponing it to Chap. 5, where its potential will be explored in full detail. The reason for this is methodological: certain preparatory work is in order before turning to that method, and so we change topics, shifting from partial to ordinary differential equations. Our objective is to ease the reader's grasp of the material of Chap. 5, where intensive work on the Laplace equation will be resumed. The topic of Green's functions for linear ordinary differential equations is therefore explored here in some detail. A consistent use of the experience gained in the current chapter will prove critical for the later work on the method of eigenfunction expansion.
4.1 Construction by Defining Properties

In contrast to partial differential equations, for which the construction of Green's functions represents in most cases a challenge, the case of linear ordinary differential equations is to a large extent a routine exercise. It is included and discussed in nearly every undergraduate textbook in the field. A standard procedure is based on the defining properties of Green's functions. Since the discussion in Chap. 5 is limited to second-order differential equations, our presentation here will be related to the homogeneous equation
$$L\bigl[y(x)\bigr]\equiv p_{0}(x)\frac{d^{2}y(x)}{dx^{2}}+p_{1}(x)\frac{dy(x)}{dx}+p_{2}(x)\,y(x)=0, \qquad (4.1)$$
subject to the homogeneous boundary conditions
$$M_{i}\bigl[y(a),y(b)\bigr]\equiv\sum_{k=1}^{2}\left[\alpha_{i,k-1}\frac{d^{\,k-1}y(a)}{dx^{k-1}}+\beta_{i,k-1}\frac{d^{\,k-1}y(b)}{dx^{k-1}}\right]=0,\quad i=1,2, \qquad (4.2)$$
where the coefficients p_j(x) in (4.1) are continuous functions on (a, b), with leading coefficient p₀(x) ≠ 0, while M_i[y(a), y(b)] in (4.2) represent linearly independent forms with constant coefficients α_{i,k−1} and β_{i,k−1}. It is assumed that for every fixed subscript i, at least one of the coefficients α_{i,k−1} and β_{i,k−1} is nonzero. This keeps the total number of boundary conditions in (4.2) at exactly two.

From a course on differential equations [5, 10], one learns that if the boundary-value problem stated in (4.1) and (4.2) is well posed (if, in other words, the problem has only the trivial solution y(x) ≡ 0), then it has a unique Green's function. We call g(x, s) the Green's function for the boundary-value problem stated in (4.1) and (4.2) if, as a function of its first variable x, it meets the following defining properties for every s ∈ (a, b):

1. On both intervals [a, s) and (s, b], g(x, s) is a continuous function having continuous derivatives up to second order, and it satisfies the homogeneous equation in (4.1) on (a, s) and (s, b), i.e., L[g(x, s)] = 0 for x ∈ (a, s) and L[g(x, s)] = 0 for x ∈ (s, b).

2. g(x, s) is continuous at x = s:
$$\lim_{x\to s^{+}}g(x,s)-\lim_{x\to s^{-}}g(x,s)=0.$$

3. The first-order derivative of g(x, s) has a jump discontinuity at x = s:
$$\lim_{x\to s^{+}}\frac{\partial g(x,s)}{\partial x}-\lim_{x\to s^{-}}\frac{\partial g(x,s)}{\partial x}=-\frac{1}{p_{0}(s)},$$
where p₀(s) represents the leading coefficient in (4.1).

4. g(x, s) satisfies the boundary conditions in (4.2), i.e.,
$$M_{i}\bigl[g(a,s),g(b,s)\bigr]=0 \quad (i=1,2).$$

Two standard approaches to the construction of Green's functions for linear ordinary differential equations are traditionally recommended [5, 8]. The first of them is based, as already mentioned, on the defining properties just listed and represents, in fact, a constructive proof of the existence and uniqueness theorem for the given statement. The idea of the second approach is different. It is rooted in Lagrange's method of variation of parameters, which is usually used for finding particular solutions to inhomogeneous linear equations. To trace out the procedure of the approach based on the defining properties, let the functions y₁(x) and y₂(x) constitute a fundamental set of solutions for the equation
in (4.1). That is, y₁(x) and y₂(x) are particular solutions of the equation that are linearly independent on (a, b). In compliance with property 1 of the definition, for any arbitrarily fixed value of s ∈ (a, b), the Green's function g(x, s) must be a solution of the equation in (4.1) in (a, s) (to the left of s), as well as in (s, b) (to the right of s). Since any solution of (4.1) can be expressed as a linear combination of y₁(x) and y₂(x), one may write g(x, s) in the following form:
$$g(x,s)=\begin{cases}\displaystyle\sum_{j=1}^{2}y_{j}(x)A_{j}(s), & a\le x\le s,\\[2mm] \displaystyle\sum_{j=1}^{2}y_{j}(x)B_{j}(s), & s\le x\le b,\end{cases} \qquad (4.3)$$
where A_j(s) and B_j(s) (j = 1, 2) represent functions to be determined. Clearly, the total number of these functions is four, and to complete the construction, we are required to find them all. To succeed in this endeavor, an appropriate engine must be created, and for that we turn to the remaining defining properties of the Green's function. A close analysis shows that one linear equation for A_j(s) and B_j(s) can be obtained with the aid of property 2, a single linear equation comes from property 3, and another two linear equations follow from property 4. From the strategy just sketched, it follows that a system of four linear algebraic equations can be obtained in the four functions A_j(s) and B_j(s) (j = 1, 2). The question that remains unanswered, however, is whether this system is well posed. To answer it, one must take a close look at the coefficient matrix of the system to find out whether it is nonsingular. This requires an analysis of the proposed strategy in full detail.

First, it is evident that by virtue of property 2, which stipulates the continuity of g(x, s) at x = s, one derives the linear algebraic equation
$$\sum_{j=1}^{2}C_{j}(s)\,y_{j}(s)=0 \qquad (4.4)$$
in the two unknown functions
$$C_{j}(s)=B_{j}(s)-A_{j}(s) \quad (j=1,2). \qquad (4.5)$$
Another linear equation in C₁(s) and C₂(s) can be derived by turning to property 3. This yields
$$\sum_{j=1}^{2}C_{j}(s)\,\frac{dy_{j}(s)}{dx}=-\frac{1}{p_{0}(s)}. \qquad (4.6)$$
Hence, the relation in (4.4), along with that in (4.6), forms a system of two simultaneous linear algebraic equations in C₁(s) and C₂(s). The determinant of the coefficient matrix of this system is not zero, because it represents the Wronskian of the fundamental set of solutions {y_j(x), j = 1, 2}.
Thus, the system in (4.4) and (4.6) has a unique solution. In other words, one can readily obtain explicit expressions for C₁(s) and C₂(s). This implies that, in view of (4.5), two linear relations are already available for the four functions A_j(s) and B_j(s). In order to obtain the remaining two, we take advantage of property 4. In doing so, let us first break down the forms M_i[y(a), y(b)] in (4.2) into two additive components as
$$M_{i}\bigl[y(a),y(b)\bigr]=S_{i}\bigl[y(a)\bigr]+T_{i}\bigl[y(b)\bigr] \quad (i=1,2),$$
with the forms S_i and T_i being defined as
$$S_{i}\bigl[y(a)\bigr]=\sum_{k=1}^{2}\alpha_{i,k-1}\,y^{(k-1)}(a) \quad\text{and}\quad T_{i}\bigl[y(b)\bigr]=\sum_{k=1}^{2}\beta_{i,k-1}\,y^{(k-1)}(b).$$
In compliance with property 4, we substitute the expression for g(x, s) from (4.3) into (4.2) and obtain
$$M_{i}\bigl[g(a,s),g(b,s)\bigr]\equiv S_{i}\bigl[g(a,s)\bigr]+T_{i}\bigl[g(b,s)\bigr]=0 \quad (i=1,2). \qquad (4.7)$$
Since the operator S_i governs the values of g(x, s) at the left endpoint x = a of the interval [a, b], while the operator T_i governs its values at the right endpoint x = b, the upper branch
$$\sum_{j=1}^{2}y_{j}(x)A_{j}(s)$$
of g(x, s) from (4.3) goes into S_i[g(a, s)], while the lower branch
$$\sum_{j=1}^{2}y_{j}(x)B_{j}(s)$$
of g(x, s) must be substituted into T_i[g(b, s)], resulting in
$$\sum_{j=1}^{2}\Bigl[S_{i}\bigl[y_{j}(a)\bigr]A_{j}(s)+T_{i}\bigl[y_{j}(b)\bigr]B_{j}(s)\Bigr]=0 \quad (i=1,2).$$
Replacing the expressions for A_j(s) in the above system with the differences B_j(s) − C_j(s), in accordance with (4.5), one rewrites the system in the form
$$\sum_{j=1}^{2}\Bigl[S_{i}\bigl[y_{j}(a)\bigr]\bigl(B_{j}(s)-C_{j}(s)\bigr)+T_{i}\bigl[y_{j}(b)\bigr]B_{j}(s)\Bigr]=0 \quad (i=1,2).$$
Combining the terms with B_j(s) and moving the terms with C_j(s) to the right-hand side, we obtain
$$\sum_{j=1}^{2}\Bigl[S_{i}\bigl[y_{j}(a)\bigr]+T_{i}\bigl[y_{j}(b)\bigr]\Bigr]B_{j}(s)=\sum_{j=1}^{2}S_{i}\bigl[y_{j}(a)\bigr]C_{j}(s) \quad (i=1,2).$$
Upon recalling the relation in (4.7), the above equations can finally be rewritten in the form
$$\sum_{j=1}^{2}M_{i}\bigl[y_{j}(a),y_{j}(b)\bigr]B_{j}(s)=\sum_{j=1}^{2}S_{i}\bigl[y_{j}(a)\bigr]C_{j}(s) \quad (i=1,2). \qquad (4.8)$$
Thus, the relations in (4.8) constitute a system of two linear algebraic equations in B_j(s). The coefficient matrix of this system is nonsingular because the forms M_i are linearly independent, and its right-hand-side vector is defined in terms of the values of C_j(s), which have already been found. The system consequently has a unique solution for B₁(s) and B₂(s). Once these are available, unique expressions for A_j(s) can readily be obtained from (4.5). Hence, upon substituting the expressions obtained for A_j(s) and B_j(s) into (4.3), we obtain an explicit representation of the Green's function that we are looking for.

In what follows, a series of examples is presented in which a number of different boundary-value problems are considered, illustrating the described approach to the construction of Green's functions in detail.

Example 4.1 We start with a simple boundary-value problem in which the differential equation
$$\frac{d^{2}y(x)}{dx^{2}}=0,\quad x\in(0,a), \qquad (4.9)$$
is subject to the boundary conditions
$$\frac{dy(0)}{dx}=0,\qquad \frac{dy(a)}{dx}+hy(a)=0, \qquad (4.10)$$
with h representing a nonzero constant. Before going any further with the construction procedure, we must make sure that the unique Green's function for the problem in (4.9) and (4.10) really does exist. That is, we are required to check whether the problem has only the trivial solution. The most elementary fundamental set of solutions for the equation in (4.9) is represented by y₁(x) ≡ 1 and y₂(x) ≡ x. Therefore, the general solution y_g(x) of (4.9) can be written as a linear combination of y₁(x) and y₂(x),
$$y_{g}(x)=D_{1}+D_{2}x,$$
where D₁ and D₂ represent arbitrary constants. Substitution of y_g(x) into the boundary conditions of (4.10) yields the homogeneous system of linear algebraic equations in D₁ and D₂
$$D_{2}=0,\qquad hD_{1}+(1+ah)D_{2}=0.$$
It is evident that the only solution of this system is D₁ = D₂ = 0. This implies that the problem in (4.9) and (4.10) is well posed. There thus exists a unique Green's function g(x, s), and according to defining property 1, one can look for it in the form
$$g(x,s)=\begin{cases}A_{1}(s)+xA_{2}(s), & 0\le x\le s,\\ B_{1}(s)+xB_{2}(s), & s\le x\le a.\end{cases} \qquad (4.11)$$
Introducing then, as suggested in (4.5), C₁(s) = B₁(s) − A₁(s) and C₂(s) = B₂(s) − A₂(s), we form a system of linear algebraic equations in these unknowns:
$$C_{1}(s)+sC_{2}(s)=0,\qquad C_{2}(s)=-1,$$
whose unique solution is C₁(s) = s and C₂(s) = −1. The first boundary condition in (4.10), being satisfied with the upper branch of g(x, s), results in A₂(s) = 0. Recall that the upper branch is chosen because x = 0 belongs to the interval 0 ≤ x ≤ s. Since B₂(s) = C₂(s) + A₂(s), we conclude that B₂(s) = −1. The second boundary condition in (4.10), being treated with the lower branch of g(x, s), yields
$$B_{2}(s)+h\bigl(B_{1}(s)+aB_{2}(s)\bigr)=0,$$
resulting in B₁(s) = (1 + ah)/h. And finally, since A₁(s) = B₁(s) − C₁(s), we find that A₁(s) = [1 + h(a − s)]/h. Substituting these into (4.11), we ultimately obtain the Green's function
$$g(x,s)=\frac{1}{h}\begin{cases}1+h(a-s), & 0\le x\le s,\\ 1+h(a-x), & s\le x\le a,\end{cases} \qquad (4.12)$$
that we are looking for.

Take another look at the problem setting in (4.9) and (4.10). If h = 0, then the boundary conditions in (4.10) become
$$\frac{dy(0)}{dx}=0,\qquad \frac{dy(a)}{dx}=0. \qquad (4.13)$$
It is evident that the boundary-value problem in (4.9) and (4.13) has no Green's function, because it allows infinitely many solutions (any function y(x) = const represents a solution) and is therefore ill posed. This conclusion is also justified by the form of the Green's function in (4.12). Indeed, if h = 0, then g(x, s) in (4.12) is undefined. On the other hand, if h → ∞, then the boundary conditions in (4.10) transform into
$$\frac{dy(0)}{dx}=0,\qquad y(a)=0, \qquad (4.14)$$
and the Green's function of the problem in (4.9) and (4.14) can be obtained from that in (4.12) by taking the limit as h → ∞, resulting in
$$g(x,s)=\begin{cases}a-s, & 0\le x\le s,\\ a-x, & s\le x\le a.\end{cases} \qquad (4.15)$$

Example 4.2 Let us construct the Green's function for the boundary-value problem stated by the differential equation
$$\frac{d^{2}y(x)}{dx^{2}}-k^{2}y(x)=0,\quad x\in(0,\infty), \qquad (4.16)$$
subject to boundary conditions imposed as
$$y(0)=0,\qquad |y(\infty)|<\infty. \qquad (4.17)$$
It can readily be shown that the conditions of existence and uniqueness for the Green's function are met in this case. Indeed, since a fundamental set of solutions for the equation in (4.16) can be written as
$$y_{1}(x)\equiv e^{kx},\qquad y_{2}(x)\equiv e^{-kx},$$
its general solution is
$$y_{g}(x)=D_{1}e^{kx}+D_{2}e^{-kx}.$$
The first condition in (4.17) implies D₁ + D₂ = 0, while the second condition in (4.17) requires D₁ = 0, resulting in D₂ = 0. This ensures, in fact, the existence of a unique Green's function for the formulation in (4.16) and (4.17), which can be expressed in the form
$$g(x,s)=\begin{cases}A_{1}(s)e^{kx}+A_{2}(s)e^{-kx}, & x\le s,\\ B_{1}(s)e^{kx}+B_{2}(s)e^{-kx}, & s\le x.\end{cases} \qquad (4.18)$$
Defining C_j(s) = B_j(s) − A_j(s) (j = 1, 2), one obtains the well-posed system of linear algebraic equations
$$e^{ks}C_{1}(s)+e^{-ks}C_{2}(s)=0,\qquad ke^{ks}C_{1}(s)-ke^{-ks}C_{2}(s)=-1,$$
in C₁(s) and C₂(s). Its solution is
$$C_{1}(s)=-\frac{1}{2k}e^{-ks},\qquad C_{2}(s)=\frac{1}{2k}e^{ks}.$$
The first condition in (4.17) implies
$$A_{1}(s)+A_{2}(s)=0, \qquad (4.19)$$
while the second condition results in B₁(s) = 0, because the exponential function e^(kx) is unbounded as x approaches infinity, and the only way to satisfy the second condition in (4.17) is to set B₁(s) equal to zero. This immediately yields
$$A_{1}(s)=\frac{1}{2k}e^{-ks},$$
and the relation in (4.19) consequently provides
$$A_{2}(s)=-\frac{1}{2k}e^{-ks}.$$
Hence, based on the known values of C₂(s) and A₂(s), one obtains
$$B_{2}(s)=\frac{1}{2k}\left(e^{ks}-e^{-ks}\right).$$
Upon substituting the values of the coefficients A_j(s) and B_j(s) just found into (4.18), one finally obtains the Green's function
$$g(x,s)=\frac{1}{2k}\begin{cases}e^{k(x-s)}-e^{-k(x+s)}, & x\le s,\\ e^{k(s-x)}-e^{-k(x+s)}, & s\le x,\end{cases} \qquad (4.20)$$
of the problem posed by (4.16) and (4.17). It is evident that we can rewrite it in the compact form
$$g(x,s)=\frac{1}{2k}\left(e^{-k|x-s|}-e^{-k(x+s)}\right),\quad 0\le x,s<\infty.$$
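The compact form of (4.20) makes the defining properties easy to verify by direct computation. The snippet below — our sketch, not from the book; the derivative formula is our own hand computation — checks the boundary condition at x = 0, the decay at infinity, and the jump of the derivative, which must equal −1/p₀(s) = −1 here:

```python
import math

def g_halfline(x, s, k=1.5):
    """Compact form of (4.20): Green's function of y'' - k^2 y = 0 on (0, inf),
    with y(0) = 0 and y bounded at infinity."""
    return (math.exp(-k * abs(x - s)) - math.exp(-k * (x + s))) / (2.0 * k)

def dgdx(x, s, k=1.5):
    """Analytic x-derivative of g_halfline away from x = s."""
    sign = -1.0 if x > s else 1.0
    return (sign * k * math.exp(-k * abs(x - s))
            + k * math.exp(-k * (x + s))) / (2.0 * k)

k, s = 1.5, 0.7
assert abs(g_halfline(0.0, s, k)) < 1e-14      # boundary condition at x = 0
assert g_halfline(50.0, s, k) < 1e-20          # decays at infinity
jump = dgdx(s + 1e-9, s, k) - dgdx(s - 1e-9, s, k)
assert abs(jump + 1.0) < 1e-6                  # derivative jump equals -1/p0 = -1
```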
Example 4.3 Consider a boundary-value problem for the equation in (4.16) but stated over a different domain,
$$\frac{d^{2}y(x)}{dx^{2}}-k^{2}y(x)=0,\quad x\in(0,a), \qquad (4.21)$$
and subject to boundary conditions written as
$$y(0)=y(a),\qquad \frac{dy(0)}{dx}=\frac{dy(a)}{dx}. \qquad (4.22)$$
This boundary-value problem represents an important type of formulation in the applied sciences: the relations in (4.22) specify conditions of a-periodicity of the solution to be found. Using the experience accumulated so far, the reader can easily show that the above problem has only the trivial solution, which ensures the existence of a unique Green's function for it. Clearly, the beginning stage of the construction procedure can be reiterated from Example 4.2. We again express the Green's function as in (4.18) and recall the coefficients C₁(s) and C₂(s):
$$C_{1}(s)=-\frac{1}{2k}e^{-ks},\qquad C_{2}(s)=\frac{1}{2k}e^{ks}. \qquad (4.23)$$
Satisfying the first condition in (4.22), we utilize the upper branch of the Green's function from (4.18) to compute the value of g at x = 0, while its lower branch is used for the value at x = a. This results in
$$A_{1}(s)+A_{2}(s)=B_{1}(s)e^{ka}+B_{2}(s)e^{-ka}. \qquad (4.24)$$
Upon satisfying the second condition in (4.22), we compute the derivative at x = 0 using the upper branch from (4.18), while the value of the derivative at x = a is computed using the lower branch. This yields
$$A_{1}(s)-A_{2}(s)=B_{1}(s)e^{ka}-B_{2}(s)e^{-ka}. \qquad (4.25)$$
So the relations in (4.24) and (4.25), along with those in (4.23), form a system of four linear algebraic equations in A₁(s), A₂(s), B₁(s), and B₂(s). To find the values of A₁(s) and B₁(s), we add (4.24) and (4.25) together. This provides us with
$$A_{1}(s)-B_{1}(s)e^{ka}=0, \qquad (4.26)$$
while the first relation in (4.23) can be rewritten in the form
$$-A_{1}(s)+B_{1}(s)=-\frac{1}{2k}e^{-ks}. \qquad (4.27)$$
Solving equations (4.26) and (4.27) simultaneously, we obtain
$$A_{1}(s)=\frac{e^{k(a-s)}}{2k\left(e^{ka}-1\right)},\qquad B_{1}(s)=\frac{e^{-ks}}{2k\left(e^{ka}-1\right)}.$$
To find the values of A₂(s) and B₂(s), we subtract (4.25) from (4.24). This results in
$$A_{2}(s)-B_{2}(s)e^{-ka}=0. \qquad (4.28)$$
Rewriting the second relation from (4.23) in the form
$$-A_{2}(s)+B_{2}(s)=\frac{1}{2k}e^{ks}, \qquad (4.29)$$
we solve equations (4.28) and (4.29) simultaneously. This yields
$$A_{2}(s)=\frac{e^{ks}}{2k\left(e^{ka}-1\right)},\qquad B_{2}(s)=\frac{e^{k(a+s)}}{2k\left(e^{ka}-1\right)}.$$
Substituting the values of A₁(s), A₂(s), B₁(s), and B₂(s) just found into (4.18), we finally obtain the compact form
$$g(x,s)=\frac{e^{-k(|x-s|-a)}+e^{k|x-s|}}{2k\left(e^{ka}-1\right)},\quad 0\le x,s\le a, \qquad (4.30)$$
of the Green's function of the boundary-value problem posed by (4.21) and (4.22).
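At this point the whole construction engine of this section — continuity at s, the derivative jump, and the two boundary forms, solved as one linear system — can be exercised in executable form. The sketch below is our own illustration (the names and the toy solver are hypothetical, not from the book); it encodes the periodicity conditions (4.22) as the forms M₁ = y(0) − y(a) and M₂ = y′(0) − y′(a) and reproduces the closed form (4.30):

```python
import math

def solve_lin(A, rhs):
    """Gaussian elimination with partial pivoting (enough for these 4x4 systems)."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def green_ode(y, dy, p0, S, T, a, b, s):
    """Defining-property construction: assemble the four linear equations in
    A1, A2, B1, B2 (continuity, jump, and the boundary forms
    M_i[y] = alpha_i0*y(a) + alpha_i1*y'(a) + beta_i0*y(b) + beta_i1*y'(b),
    with S[i] = (alpha_i0, alpha_i1) and T[i] = (beta_i0, beta_i1))."""
    rows = [[y[0](s), y[1](s), -y[0](s), -y[1](s)],          # property 2
            [-dy[0](s), -dy[1](s), dy[0](s), dy[1](s)]]      # property 3
    rhs = [0.0, -1.0 / p0(s)]
    for i in range(2):                                       # property 4
        rows.append([S[i][0] * y[j](a) + S[i][1] * dy[j](a) for j in range(2)] +
                    [T[i][0] * y[j](b) + T[i][1] * dy[j](b) for j in range(2)])
        rhs.append(0.0)
    A1, A2, B1, B2 = solve_lin(rows, rhs)
    return lambda x: (A1 * y[0](x) + A2 * y[1](x) if x <= s
                      else B1 * y[0](x) + B2 * y[1](x))

# Example 4.3: y'' - k^2 y = 0 on (0, a) with the periodicity conditions (4.22).
k, a_len, s = 1.0, 2.0, 0.8
g = green_ode((lambda x: math.exp(k * x), lambda x: math.exp(-k * x)),
              (lambda x: k * math.exp(k * x), lambda x: -k * math.exp(-k * x)),
              lambda x: 1.0,
              S=[(1.0, 0.0), (0.0, 1.0)], T=[(-1.0, 0.0), (0.0, -1.0)],
              a=0.0, b=a_len, s=s)

def g_exact(x):  # closed form (4.30)
    d = abs(x - s)
    return (math.exp(k * (a_len - d)) + math.exp(k * d)) / (2.0 * k * (math.exp(k * a_len) - 1.0))

for x in (0.0, 0.3, 0.8, 1.5, 2.0):
    assert abs(g(x) - g_exact(x)) < 1e-10
```

The same routine, pointed at other coefficient rows S and T, covers every example of this section, which is precisely the sense in which the construction by defining properties is "a routine exercise."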
dy d (mx + b) = 0, x ∈ (0, a), (4.31) dx dx be subject to the boundary conditions dy(0) = 0, dx
y(a) = 0,
(4.32)
where we assume that m > 0 and b > 0, which implies that mx + b = 0 on the interval [0, a]. The fundamental set of solutions y1 (x) ≡ 1,
y2 (x) ≡ ln(mx + b),
(4.33)
required for the construction of the Green's function for the problem in (4.31) and (4.32) can be obtained by two successive integrations of the governing equation. Indeed, the first integration yields
$$(mx+b)\frac{dy}{dx}=C_{1}.$$
Dividing the above equation through by mx + b and multiplying by dx, we separate the variables,
$$dy=C_{1}\,\frac{dx}{mx+b},$$
and finally obtain the general solution of the equation in (4.31) in the form
$$y(x)=\frac{C_{1}}{m}\ln(mx+b)+C_{2},$$
which implies that the functions in (4.33) indeed constitute a fundamental set of solutions for (4.31).
It can be easily shown that the problem in (4.31) and (4.32) has only the trivial solution. Hence, there exists a unique Green's function, which can be represented in the form
$$g(x,s) = \begin{cases} A_1(s) + A_2(s)\ln(mx+b), & \text{for } 0 \le x \le s,\\ B_1(s) + B_2(s)\ln(mx+b), & \text{for } s \le x \le a. \end{cases} \quad (4.34)$$
Tracing out our construction procedure, we obtain the system of linear algebraic equations
$$C_1(s) + C_2(s)\ln(ms+b) = 0, \qquad mC_2(s) = -1,$$
in C_j(s) = B_j(s) − A_j(s) (j = 1, 2). Its solution is
$$C_1(s) = \frac{1}{m}\ln(ms+b), \qquad C_2(s) = -\frac{1}{m}. \quad (4.35)$$
The first boundary condition in (4.32) yields A2(s) = 0. Consequently, we have B2(s) = −1/m. The second condition in (4.32) gives
$$B_1(s) + B_2(s)\ln(ma+b) = 0,$$
resulting in B1(s) = [ln(ma + b)]/m, which provides us with
$$A_1(s) = \frac{1}{m}\ln\frac{ma+b}{ms+b}.$$
Substituting the values of A_j(s) and B_j(s) just found into (4.34), we obtain the Green's function that we are looking for in the form
$$g(x,s) = \frac{1}{m}\begin{cases} \ln\bigl((ma+b)/(ms+b)\bigr), & \text{for } 0 \le x \le s,\\ \ln\bigl((ma+b)/(mx+b)\bigr), & \text{for } s \le x \le a. \end{cases} \quad (4.36)$$
Sometimes in the applied sciences, we face boundary-value problems on finite intervals where one of the endpoints is singular for the governing differential equation. Our algorithm can be successfully used for constructing Green's functions of such problems as well. As an illustration of this point, we offer the following example.

Example 4.5 Construct the Green's function of the boundary-value problem for the differential equation
$$\frac{d}{dx}\left[x\,\frac{dy(x)}{dx}\right] = 0, \quad x \in (0,a), \quad (4.37)$$
subject to boundary conditions written as
$$|y(0)| < \infty, \qquad \frac{dy(a)}{dx} + hy(a) = 0. \quad (4.38)$$
Note that the boundedness condition at x = 0 is written in a shorthand form. It is understood in the sense that the limit of y(x) as x approaches zero is bounded. The left endpoint x = 0 of the domain is a point of singularity for the governing equation. Therefore, instead of formulating a traditional boundary condition at this point, we require in (4.38) that y(0) be bounded. Clearly, a fundamental set of solutions of the equation in (4.37) can be written as
$$y_1(x) \equiv 1, \qquad y_2(x) \equiv \ln x.$$
The problem in (4.37) and (4.38) is well posed (has only the trivial solution), allowing a unique Green's function in the form
$$g(x,s) = \begin{cases} A_1(s) + A_2(s)\ln x, & \text{for } 0 \le x \le s,\\ B_1(s) + B_2(s)\ln x, & \text{for } s \le x \le a. \end{cases} \quad (4.39)$$
In compliance with our procedure, we form a system of linear algebraic equations
$$C_1(s) + C_2(s)\ln s = 0, \qquad s^{-1}C_2(s) = -s^{-1},$$
whose solution is C1(s) = ln s and C2(s) = −1. The boundedness of the Green's function at x = 0 implies A2(s) = 0. Consequently, B2(s) = −1, while the second condition in (4.38) yields
$$\frac{B_2(s)}{a} + h\bigl[B_1(s) + B_2(s)\ln a\bigr] = 0.$$
Hence, B1(s) = 1/(ah) + ln a, and ultimately, A1(s) = 1/(ah) − ln(s/a). Thus, substituting the values of A_j(s) and B_j(s) just found into (4.39), we obtain the Green's function that we are looking for in the form
$$g(x,s) = \frac{1}{ah} - \begin{cases} \ln(s/a), & \text{for } 0 \le x \le s,\\ \ln(x/a), & \text{for } s \le x \le a. \end{cases} \quad (4.40)$$
It is clearly seen that if the parameter h is equal to zero, then the Green's function in (4.40) is undefined. This agrees with the setting in (4.37) and (4.38), which becomes ill posed if h = 0.
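As a numerical illustration (added here, with arbitrarily chosen parameter values), the Green's function in (4.40) can be checked against a case solvable by hand: for f(x) ≡ 1, two direct integrations of (x y′)′ = −1 under the conditions (4.38) give y(x) = a − x + 1/h, and the integral of g(x, s)f(s) over (0, a) must reproduce exactly this function:

```python
import math

a, h = 2.0, 0.7  # arbitrary test values, a > 0 and h > 0

def g(x, s):
    # Green's function (4.40): 1/(a h) minus ln(s/a) for x <= s, minus ln(x/a) for s <= x,
    # i.e., the logarithm is always taken at the larger of the two arguments
    return 1.0 / (a * h) - math.log(max(x, s) / a)

def solution_via_green(x, n=50000):
    # y(x) = integral_0^a g(x, s) f(s) ds with f = 1, midpoint rule
    ds = a / n
    return sum(g(x, (i + 0.5) * ds) for i in range(n)) * ds

for x in (0.3, 1.0, 1.7):
    print(solution_via_green(x), a - x + 1.0 / h)  # the two columns should agree
```

The midpoint rule is sufficient here because, for fixed x > 0, the kernel is piecewise smooth in s with only a kink at s = x.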
4.2 Method of Variation of Parameters

We turn now to another approach that has traditionally been applied to the construction of Green's functions for ordinary differential equations. This is the one rooted in Lagrange's method of variation of parameters. The idea behind this approach is based on the following classical assertion, referred to in some sources (see, for example, [8]) as Hilbert's theorem.
If g(x, s) represents the Green's function to the homogeneous boundary-value problem posed by (4.1) and (4.2), then the unique solution of the corresponding inhomogeneous equation
$$p_0(x)\frac{d^2y(x)}{dx^2} + p_1(x)\frac{dy(x)}{dx} + p_2(x)y(x) = -f(x), \quad x \in (a,b), \quad (4.41)$$
with a right-hand-side function f(x) continuous on [a, b], subject to the homogeneous boundary conditions in (4.2), can be expressed by the integral
$$y(x) = \int_a^b g(x,s)f(s)\,ds. \quad (4.42)$$
For simplicity in the presentation that follows, we consider a particular case of the boundary conditions in (4.2). That is, choosing the coefficients α_{j,k−1} and β_{j,k−1} as α_{1,0} = β_{2,0} = 1, while α_{1,1} = β_{1,0} = β_{1,1} = α_{2,0} = α_{2,1} = β_{2,1} = 0, reduces (4.2) to
$$y(a) = 0, \qquad y(b) = 0. \quad (4.43)$$
The boundary-value problem stated in (4.41) and (4.43) has a unique solution, which implies, of course, that the corresponding homogeneous problem has only the trivial solution. Let y1(x) and y2(x) represent two linearly independent particular solutions of the homogeneous equation corresponding to that in (4.41). We then express the general solution of (4.41) itself, in compliance with the method of variation of parameters, in the form
$$y(x) = C_1(x)y_1(x) + C_2(x)y_2(x), \quad (4.44)$$
where C1(x) and C2(x) represent functions that are at least twice differentiable and are yet to be found. The idea of expressing the solution in the form of (4.44) might not look reasonable, since the equation in (4.41) delivers just a single relation for the two functions C1(x) and C2(x). This presumes a certain degree of freedom in choosing a second relation, which would allow us to define C1(x) and C2(x) uniquely. Lagrange's method of variation of parameters provides an effective and elegant choice of such a relation. The direct substitution of y(x) from (4.44) into (4.41) would result in a cumbersome single second-order differential equation in the two unknown functions C1(x) and C2(x). In order to avoid such an unfortunate complication, Lagrange's method
proceeds as follows. First, differentiate the function y(x) in (4.44) using the product rule,
$$y'(x) = C_1'y_1 + C_1y_1' + C_2'y_2 + C_2y_2', \quad (4.45)$$
and then, keeping in mind the degree of freedom mentioned above, make the simplifying assumption
$$C_1'y_1 + C_2'y_2 = 0, \quad (4.46)$$
transforming (4.45) into
$$y'(x) = C_1y_1' + C_2y_2'. \quad (4.47)$$
Hence, the second derivative of y(x) is expressed as
$$y''(x) = C_1'y_1' + C_1y_1'' + C_2'y_2' + C_2y_2''. \quad (4.48)$$
We now substitute the expressions of the functions y(x), y′(x), and y″(x) from (4.44), (4.47), and (4.48) into (4.41), yielding for its left-hand side
$$p_0\bigl(C_1'y_1' + C_1y_1'' + C_2'y_2' + C_2y_2''\bigr) + p_1\bigl(C_1y_1' + C_2y_2'\bigr) + p_2\bigl(C_1y_1 + C_2y_2\bigr).$$
Rearranging the order of terms, we rewrite this as
$$C_1\bigl(p_0y_1'' + p_1y_1' + p_2y_1\bigr) + C_2\bigl(p_0y_2'' + p_1y_2' + p_2y_2\bigr) + p_0\bigl(C_1'y_1' + C_2'y_2'\bigr). \quad (4.49)$$
Since y1 = y1(x) and y2 = y2(x) represent particular solutions of the homogeneous equation corresponding to (4.41), we have p0y1″ + p1y1′ + p2y1 = 0 as well as p0y2″ + p1y2′ + p2y2 = 0. This reduces the equation in (4.49) to
$$C_1'(x)y_1'(x) + C_2'(x)y_2'(x) = -f(x)\,p_0^{-1}(x). \quad (4.50)$$
The relations in (4.46) and (4.50) represent a system of linear algebraic equations in C1′(x) and C2′(x). The system is well posed (has a unique solution) because the determinant of its coefficient matrix is, up to sign, the Wronskian
$$W(x) = y_1'(x)y_2(x) - y_2'(x)y_1(x) \ne 0$$
of the fundamental set of solutions y1(x) and y2(x). Solving the system in (4.46) and (4.50), we obtain
$$C_1'(x) = -\frac{y_2(x)f(x)}{p_0(x)W(x)}, \qquad C_2'(x) = \frac{y_1(x)f(x)}{p_0(x)W(x)}.$$
Straightforward integration of the above expressions yields
$$C_1(x) = -\int_a^x \frac{y_2(s)f(s)}{p_0(s)W(s)}\,ds + H_1$$
and
$$C_2(x) = \int_a^x \frac{y_1(s)f(s)}{p_0(s)W(s)}\,ds + H_2,$$
where H1 and H2 are arbitrary constants of integration. Upon substituting these expressions for C1(x) and C2(x) into (4.44), we obtain the general solution of the equation in (4.41) in the form
$$y(x) = H_1y_1(x) + H_2y_2(x) + y_2(x)\int_a^x \frac{y_1(s)f(s)}{p_0(s)W(s)}\,ds - y_1(x)\int_a^x \frac{y_2(s)f(s)}{p_0(s)W(s)}\,ds.$$
Since s represents the variable of integration, the factors y1(x) and y2(x) of the integral-containing terms represent functions of x and can be formally moved inside the integrals. Once this is done and the two integral terms are combined, we obtain
$$y(x) = H_1y_1(x) + H_2y_2(x) + \int_a^x \frac{y_1(s)y_2(x) - y_1(x)y_2(s)}{p_0(s)W(s)}\,f(s)\,ds. \quad (4.51)$$
To determine the values of H1 and H2, we satisfy the boundary conditions in (4.43) with the above expression for y(x). This yields the system of linear algebraic equations
$$\begin{pmatrix} y_1(a) & y_2(a)\\ y_1(b) & y_2(b)\end{pmatrix}\begin{pmatrix}H_1\\ H_2\end{pmatrix} = \begin{pmatrix}0\\ P(a,b)\end{pmatrix} \quad (4.52)$$
in H1 and H2, where P(a, b) is defined as
$$P(a,b) = \int_a^b \frac{R(b,s)}{p_0(s)W(s)}\,f(s)\,ds$$
and R(b, s) = y1(b)y2(s) − y1(s)y2(b). With this, we arrive at the solution to the system in (4.52) in the form
$$H_1 = -\int_a^b \frac{y_2(a)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds$$
and
$$H_2 = \int_a^b \frac{y_1(a)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds.$$
Upon substituting these into (4.51), we obtain the solution of the boundary-value problem posed in (4.41) and (4.43) as
$$y(x) = -\int_a^x \frac{R(x,s)f(s)}{p_0(s)W(s)}\,ds + \int_a^b \frac{R(a,x)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds.$$
This representation can be rewritten in the single-integral form
$$y(x) = \int_a^b g(x,s)f(s)\,ds, \quad (4.53)$$
whose kernel function g(x, s) is found in two pieces. For x ≤ s, it is defined as
$$g(x,s) = \frac{R(a,x)R(b,s)}{p_0(s)R(a,b)W(s)}, \quad x \le s,$$
while for x ≥ s, we obtain
$$g(x,s) = \frac{R(a,x)R(b,s) - R(x,s)R(a,b)}{p_0(s)R(a,b)W(s)}, \quad x \ge s.$$
After a trivial but quite cumbersome transformation, the above expression can be simplified to
$$g(x,s) = \frac{R(a,s)R(b,x)}{p_0(s)R(a,b)W(s)}, \quad x \ge s.$$
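The two-branch formula just obtained is easy to exercise numerically. The sketch below (an added illustration, not part of the original derivation) specializes it to the simplest instance of (4.41) and (4.43), namely y″ = −f on (0, 1) with y1(x) ≡ 1 and y2(x) ≡ x, and compares the resulting kernel with the classical Green's function min(x, s)(1 − max(x, s)) of that problem. The Wronskian is taken in the ordering W = y1′y2 − y2′y1, the convention under which the formulas above reproduce the classical kernel; with the opposite ordering the overall sign of the kernel flips.

```python
A, B = 0.0, 1.0          # the interval (a, b) of (4.43)
y1 = lambda t: 1.0       # fundamental set of y'' = 0
y2 = lambda t: t

def R(u, v):
    # R(u, v) = y1(u) y2(v) - y1(v) y2(u)
    return y1(u) * y2(v) - y1(v) * y2(u)

def green(x, s, p0=1.0, W=-1.0):
    # W = y1'*y2 - y2'*y1 = -1 for y1 = 1, y2 = t (constant in this case)
    common = p0 * R(A, B) * W
    if x <= s:
        return R(A, x) * R(B, s) / common
    return R(A, s) * R(B, x) / common   # the simplified x >= s branch

for x in (0.2, 0.5, 0.9):
    for s in (0.1, 0.5, 0.8):
        print(green(x, s), min(x, s) * (1.0 - max(x, s)))  # columns should agree
```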
Thus, since the solution to the problem posed in (4.41) and (4.43) is found as a single integral of the type in (4.42), we conclude that the kernel function g(x, s) in (4.53) does in fact represent the Green's function to the corresponding homogeneous boundary-value problem. So, the approach based on the method of variation of parameters can successfully be used to actually construct Green's functions. We present below a number of examples illustrating some peculiarities of this approach that emerge in practical situations.

Example 4.6 Apply the procedure based on the method of variation of parameters to the construction of the Green's function for the homogeneous equation corresponding to
$$\frac{d^2y(x)}{dx^2} + k^2y(x) = -f(x), \quad x \in (0,a), \quad (4.54)$$
subject to the homogeneous boundary conditions
$$y'(0) = 0, \qquad y'(a) = 0. \quad (4.55)$$
We assume that the right-hand-side function f(x) in (4.54) is continuous and therefore integrable on (0, a).
It can easily be shown that the homogeneous problem corresponding to that in (4.54) and (4.55) has only the trivial solution. This implies that the conditions of existence and uniqueness of the Green's function are met and the latter can be constructed. Since the functions y1(x) ≡ sin kx and y2(x) ≡ cos kx represent a fundamental set of solutions for the corresponding homogeneous equation, the general solution to (4.54) can be expressed as
$$y(x) = C_1(x)\sin kx + C_2(x)\cos kx. \quad (4.56)$$
The system of linear algebraic equations in C1′(x) and C2′(x), which has been derived, in general, in (4.46) and (4.50), appears in this case as
$$\begin{pmatrix} \sin kx & \cos kx\\ k\cos kx & -k\sin kx\end{pmatrix}\begin{pmatrix}C_1'(x)\\ C_2'(x)\end{pmatrix} = \begin{pmatrix}0\\ -f(x)\end{pmatrix},$$
providing us with the following solution:
$$C_1'(x) = -\frac{1}{k}\cos kx\,f(x), \qquad C_2'(x) = \frac{1}{k}\sin kx\,f(x).$$
Integrating, we obtain
$$C_1(x) = -\int_0^x \frac{1}{k}\cos ks\,f(s)\,ds + H_1$$
and
$$C_2(x) = \int_0^x \frac{1}{k}\sin ks\,f(s)\,ds + H_2.$$
Upon substituting these into (4.56) and carrying out an obvious transformation, we obtain
$$y(x) = -\int_0^x \frac{1}{k}\sin k(x-s)f(s)\,ds + H_1\sin kx + H_2\cos kx. \quad (4.57)$$
To determine the values of H1 and H2, we differentiate y(x):
$$y'(x) = -\int_0^x \cos k(x-s)f(s)\,ds + H_1k\cos kx - H_2k\sin kx.$$
From the first condition in (4.55), it follows that H1 = 0, while the second condition yields
$$-\int_0^a \cos k(a-s)f(s)\,ds - H_2k\sin ka = 0,$$
from which we immediately obtain
$$H_2 = -\int_0^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$
Upon substituting the values of H1 and H2 just found into (4.57) and correspondingly regrouping the integrals, one obtains
$$y(x) = -\int_0^x \frac{\sin k(x-s)}{k}\,f(s)\,ds - \cos kx\int_0^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds. \quad (4.58)$$
Both of the above integrals can be combined and written in a compact single-integral form. To help the reader proceed more easily through this transformation, we formally add the term
$$\int_x^a 0\cdot f(s)\,ds$$
to the first of the two integrals in (4.58) and break down the second one as
$$\cos kx\int_0^x \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds + \cos kx\int_x^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$
Then y(x) is represented as a sum of four definite integrals, in two of which the integration is carried out from 0 to x, while in the other two we integrate from x to a. Grouping the integrals by pairs, we have
$$y(x) = -\int_0^x \left[\frac{\sin k(x-s)}{k} + \cos kx\,\frac{\cos k(a-s)}{k\sin ka}\right]f(s)\,ds - \int_x^a \cos kx\,\frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds$$
$$= -\int_0^x \cos ks\,\frac{\cos k(a-x)}{k\sin ka}\,f(s)\,ds - \int_x^a \cos kx\,\frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$
Note that in the first integral above, the variables x and s satisfy the inequality x ≥ s, since x represents the upper limit of integration, whereas in the second integral x is the lower limit, implying x ≤ s. Hence, the representation for y(x) that we just came up with can be viewed as the single integral
$$y(x) = \int_0^a g(x,s)f(s)\,ds, \quad (4.59)$$
whose kernel function g(x, s) is defined in two pieces as
$$g(x,s) = -\frac{1}{k\sin ka}\begin{cases}\cos kx\cos k(a-s), & \text{for } x \le s,\\ \cos ks\cos k(a-x), & \text{for } s \le x.\end{cases} \quad (4.60)$$
Thus, since the solution of the boundary-value problem stated in (4.54) and (4.55) is expressed as the integral in (4.59), g(x, s) represents the Green's function to the homogeneous boundary-value problem corresponding to that in (4.54) and (4.55).

Example 4.7 Consider the inhomogeneous equation
$$\frac{d^2y(x)}{dx^2} - k^2y(x) = -f(x), \quad (4.61)$$
subject to the homogeneous boundary conditions
$$y'(0) = 0, \qquad y(a) = 0. \quad (4.62)$$
It can be shown that the homogeneous boundary-value problem corresponding to that of (4.61) and (4.62) has only the trivial solution, which means that the above problem itself has a unique solution. This justifies the existence and uniqueness of the Green's function. Earlier, in Example 4.2, while dealing with the homogeneous equation corresponding to (4.61), we presented the set of functions
$$y_1(x) \equiv e^{kx}, \qquad y_2(x) \equiv e^{-kx}$$
as its fundamental set of solutions. Hence, the general solution of (4.61) itself can be represented by
$$y(x) = C_1(x)e^{kx} + C_2(x)e^{-kx}. \quad (4.63)$$
Tracing out the procedure of Lagrange's method, one obtains expressions for C1(x) and C2(x) in the form
$$C_1(x) = -\int_0^x \frac{1}{2k}e^{-ks}f(s)\,ds + H_1$$
and
$$C_2(x) = \int_0^x \frac{1}{2k}e^{ks}f(s)\,ds + H_2.$$
By virtue of the substitution of these into (4.63), we obtain
$$y(x) = H_1e^{kx} + H_2e^{-kx} - \int_0^x \frac{1}{k}\sinh k(x-s)f(s)\,ds. \quad (4.64)$$
The first boundary condition y′(0) = 0 in (4.62) implies that H1 = H2, while the second condition y(a) = 0 yields
$$H_1 = H_2 = \int_0^a \frac{\sinh k(a-s)}{2k\cosh ka}\,f(s)\,ds.$$
Substituting these into (4.64), we obtain
$$y(x) = \int_0^a \frac{\cosh kx\,\sinh k(a-s)}{k\cosh ka}\,f(s)\,ds - \int_0^x \frac{1}{k}\sinh k(x-s)f(s)\,ds.$$
Conducting transformations of the above integrals in compliance with our routine, the Green's function g(x, s) to the homogeneous boundary-value problem corresponding to that in (4.61) and (4.62) ultimately reads as
$$g(x,s) = \frac{1}{k\cosh ka}\begin{cases}\cosh kx\,\sinh k(a-s), & \text{for } x \le s,\\ \cosh ks\,\sinh k(a-x), & \text{for } s \le x.\end{cases} \quad (4.65)$$
The example below focuses on another special feature of Lagrange's method. It is designed to demonstrate the capability of the method in managing problems stated on an unbounded region with boundedness conditions imposed.

Example 4.8 Let us return to the equation in (4.61), and let it be subject to the following boundary conditions:
$$y'(0) - hy(0) = 0, \qquad |y(\infty)| < \infty. \quad (4.66)$$
It can readily be checked that there exists a unique Green's function for the homogeneous boundary-value problem corresponding to that posed by (4.61) and (4.66). The reader is recommended to justify this fact in Exercise 4.6. The general solution of the equation in (4.61) was presented earlier in (4.64). In the present case, however, it is more beneficial to express it, in contrast to the mixed hyperbolic–exponential form in (4.64), completely in terms of exponential functions. That is,
$$y(x) = H_1e^{kx} + H_2e^{-kx} + \int_0^x \frac{1}{2k}\bigl[e^{k(s-x)} - e^{k(x-s)}\bigr]f(s)\,ds. \quad (4.67)$$
The point is that the form in (4.67) will be more practical in view of the necessity to treat the boundedness condition |y(∞)| < ∞ in the discussion that follows. Indeed, splitting off both exponential terms under the integral sign and grouping together the terms containing the factor e^{kx} and those containing the factor e^{−kx}, we transform (4.67) into
$$y(x) = \left[H_1 - \int_0^x \frac{e^{-ks}}{2k}f(s)\,ds\right]e^{kx} + \left[H_2 + \int_0^x \frac{e^{ks}}{2k}f(s)\,ds\right]e^{-kx}. \quad (4.68)$$
It is clearly seen that the boundedness condition |y(∞)| < ∞ implies that the factor of the positive exponential term e^{kx} in (4.68) must vanish as x approaches infinity. This implies
$$H_1 = \int_0^\infty \frac{e^{-ks}}{2k}\,f(s)\,ds,$$
while the first condition in (4.66) subsequently yields
$$H_2 = \frac{k-h}{k+h}H_1 = \int_0^\infty \frac{k-h}{2k(k+h)}\,e^{-ks}f(s)\,ds.$$
Upon substituting the expressions for H1 and H2 just found into (4.67) and rewriting its integral component again in a more compact hyperbolic-function-containing form, we obtain
$$y(x) = -\int_0^x \frac{1}{k}\sinh k(x-s)f(s)\,ds + \int_0^\infty \frac{e^{-ks}}{2k}\bigl(e^{kx} + h^*e^{-kx}\bigr)f(s)\,ds,$$
where h* = (k − h)/(k + h). From this representation, the Green's function to the problem in (4.61) and (4.66) ultimately appears as
$$g(x,s) = \frac{1}{2k}\begin{cases}e^{-ks}\bigl(e^{kx} + h^*e^{-kx}\bigr), & \text{for } x \le s,\\ e^{-kx}\bigl(e^{ks} + h^*e^{-ks}\bigr), & \text{for } s \le x.\end{cases} \quad (4.69)$$
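The kernel (4.69) can be confirmed directly against the defining properties of a Green's function: continuity at x = s, a jump of −1 in ∂g/∂x across x = s (since p0 ≡ 1 here), the boundary condition g′ₓ(0, s) − h g(0, s) = 0, and decay as x → ∞. A short check (an added illustration) with arbitrarily chosen values of k, h, and s, using central differences for the derivatives:

```python
import math

k, h = 1.3, 0.4                  # arbitrary positive test values
hs = (k - h) / (k + h)           # the quantity h* of (4.69)

def g(x, s):
    # Green's function (4.69)
    if x <= s:
        return math.exp(-k * s) * (math.exp(k * x) + hs * math.exp(-k * x)) / (2 * k)
    return math.exp(-k * x) * (math.exp(k * s) + hs * math.exp(-k * s)) / (2 * k)

def dgdx(x, s, eps=1e-6):
    return (g(x + eps, s) - g(x - eps, s)) / (2 * eps)

s = 2.0
print(g(s - 1e-9, s) - g(s + 1e-9, s))        # continuity at x = s: ~0
print(dgdx(s + 1e-3, s) - dgdx(s - 1e-3, s))  # jump of dg/dx across x = s: ~ -1
print(dgdx(0.0, s) - h * g(0.0, s))           # boundary condition at x = 0: ~0
print(g(60.0, s))                             # boundedness: decays as x grows
```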
In the example that follows, a boundary-value problem for another equation with variable coefficients is considered.

Example 4.9 Consider another second-order linear equation with variable coefficients
$$\frac{d}{dx}\left[\bigl(\beta^2x^2 + 1\bigr)\frac{dy(x)}{dx}\right] = -f(x), \quad x \in (0,a), \quad (4.70)$$
subject to the boundary conditions
$$y(0) = 0, \qquad y(a) = 0. \quad (4.71)$$
The above boundary-value problem is well posed (the reader is recommended to justify this assertion in Exercise 4.7), ensuring the existence of a unique Green's function for the corresponding homogeneous problem. Since by now the reader should have gained a great deal of experience, we will describe the construction procedure only briefly. Due to the self-adjoint form of the equation in (4.70), the two components of its fundamental set of solutions can in this case be obtained by two successive integrations of the corresponding homogeneous equation. This gives us y1(x) ≡ 1 and y2(x) ≡ arctan βx, yielding the general solution to the inhomogeneous equation in the form
$$y(x) = \int_0^x \frac{1}{\beta}\arctan\frac{\beta(s-x)}{1+\beta^2xs}\,f(s)\,ds + D_1 + D_2\arctan\beta x.$$
By satisfying the boundary conditions in (4.71), one determines the constants D1 and D2:
$$D_1 = 0, \qquad D_2 = \int_0^a \frac{\arctan\beta a - \arctan\beta s}{\beta\arctan\beta a}\,f(s)\,ds.$$
Substituting these into the above expression for the general solution of (4.70) and rearranging the integral terms, one obtains the solution to the original boundary-value problem as
$$y(x) = \int_0^a g(x,s)f(s)\,ds,$$
where the kernel g(x, s) represents the Green's function
$$g(x,s) = \frac{1}{\beta K}\begin{cases}\arctan\beta x\,(K - \arctan\beta s), & \text{for } 0 \le x \le s,\\ \arctan\beta s\,(K - \arctan\beta x), & \text{for } s \le x \le a,\end{cases} \quad (4.72)$$
for the homogeneous problem corresponding to that in (4.70) and (4.71), where K = arctan βa.

We believe that having developed the necessary flexibility in dealing with ordinary differential equations, the reader will feel comfortable working on the material in the next chapter.
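The kernel (4.72) lends itself to the same kind of verification against the defining properties: vanishing at both endpoints by (4.71), continuity at x = s, and a jump of −1/p0(s) = −1/(β²s² + 1) in ∂g/∂x across x = s. The check below (an added illustration with arbitrary β, a, and s) uses the normalization K = arctan βa with the overall factor 1/(βK), which is the form that satisfies the jump condition:

```python
import math

beta, a = 0.8, 1.5               # arbitrary test values
K = math.atan(beta * a)          # K = arctan(beta a)

def g(x, s):
    # Kernel (4.72): (1/(beta K)) * arctan(beta x<) * (K - arctan(beta x>))
    lo, hi = min(x, s), max(x, s)
    return math.atan(beta * lo) * (K - math.atan(beta * hi)) / (beta * K)

def dgdx(x, s, eps=1e-6):
    return (g(x + eps, s) - g(x - eps, s)) / (2 * eps)

s = 0.6
jump = dgdx(s + 1e-3, s) - dgdx(s - 1e-3, s)
print(g(0.0, s), g(a, s))                      # boundary values: both ~0
print(jump, -1.0 / (beta**2 * s**2 + 1.0))     # jump versus -1/p0(s)
```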
4.3 Chapter Exercises

4.1 Show that the trivial solution is the only solution to the boundary-value problem stated in (4.21) and (4.22).

4.2 Justify the well-posedness of the boundary-value problem stated in (4.31) and (4.32).

4.3 Prove that the boundary-value problem stated in (4.54) and (4.55) is uniquely solvable.

4.4 Construct the Green's function for the homogeneous equation corresponding to (4.54), subject to the boundary conditions y(0) = y′(1) = 0.
4.5 Construct the Green's function for the homogeneous equation corresponding to (4.61), subject to the boundary conditions y′(0) − hy(0) = y(a) = 0.

4.6 Prove that the boundary-value problem stated in (4.61) and (4.66) is well posed.

4.7 Prove that the boundary-value problem stated in (4.70) and (4.71) is well posed.
Chapter 5
Eigenfunction Expansion
Having departed for a while from the main focus of the book in the previous chapter, where the emphasis was on ordinary differential equations, we are going to return in the present chapter to partial differential equations. The reader will be provided with a comprehensive review of another approach that has been traditionally employed for the construction of Green’s functions for partial differential equations. The method of eigenfunction expansion will be used, representing one of the most productive and recommended methods in the field. Our objective in reviewing the method of eigenfunction expansion is twofold. First, we want to assist the reader in the derivation of Green’s functions for a variety of applied partial differential equations. Our second goal is to lay out a preparatory basis for Chap. 6, which is, to a certain extent, central for the entire volume. In that chapter, upon comparison of different forms of Green’s functions, some infinite product representations are derived for a number of trigonometric and hyperbolic functions. After presenting introductory comments in the brief section below, we develop a procedure based on the eigenfunction expansion method to derive a number of Green’s functions in Sects. 5.2 and 5.3. The first of these touches upon problems stated in Cartesian coordinates, while problems formulated in polar coordinates are dealt with in Sect. 5.3.
5.1 Background of the Approach

Earlier, in Chap. 3, the reader was familiarized with two standard approaches that are traditionally used for the construction of Green's functions for the Laplace equation in two dimensions. These approaches are based on the methods of images and conformal mapping. Another standard approach in the field is based on the method of eigenfunction expansion [15, 18]. The number of problems for which this method appears productive is notably wider than the number of problems successfully treated by either of the two other methods.
In the introduction to this book, it was mentioned that the solution u(P) to the well-posed boundary-value problem
$$\nabla^2 u(P) = -f(P), \quad P \in \Omega, \quad (5.1)$$
$$B\bigl[u(P)\bigr] = 0, \quad P \in L, \quad (5.2)$$
stated for the Poisson (inhomogeneous) equation can be expressed in the integral form
$$u(P) = \int_\Omega G(P,Q)f(Q)\,d\Omega(Q) \quad (5.3)$$
in terms of the Green's function G(P, Q) of the corresponding homogeneous problem (stated for the Laplace equation). This gives a hint as to a possible technique for constructing Green's functions. It could aim at expressing the solution u(P) to the problem in (5.1) and (5.2) in the integral representation form of (5.3). In other words, when solving the problem in (5.1) and (5.2), the goal is not just to obtain its solution u(P) by any means and in any form. The intention is rather more specific: the solution should be in the form of (5.3), providing an explicit expression for G(P, Q). An algorithm that uses the method of eigenfunction expansion appears efficient in such an endeavor.
5.2 Cartesian Coordinates

In what follows, the particulars of the approach based on the method of eigenfunction expansion and its specific features are clarified and explained as we pass through a series of illustrative examples in which problems are stated in Cartesian coordinates. In the first example, the reader has an opportunity to go into a more or less detailed description of the approach.

Example 5.1 We revisit the Dirichlet problem for the Laplace equation stated on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}. This problem has already been considered in Chap. 3. Two equivalent forms of its Green's function were presented there in (3.37) and (3.38). They were obtained by the method of conformal mapping. We are going to describe now an alternative derivation procedure, which will be explained by turning to the following boundary-value problem:
$$\frac{\partial^2 u(x,y)}{\partial x^2} + \frac{\partial^2 u(x,y)}{\partial y^2} = -f(x,y), \quad (x,y) \in \Omega, \quad (5.4)$$
$$u(x,0) = u(x,b) = 0. \quad (5.5)$$
In addition to the conditions in (5.5), it is assumed that the function u(x, y) is bounded as x approaches infinity,
$$\lim_{x\to-\infty}|u(x,y)| < \infty, \qquad \lim_{x\to\infty}|u(x,y)| < \infty,$$
while the right-hand-side function f(x, y) is integrable on Ω, implying that the improper integral
$$\int_\Omega f(x,y)\,d\Omega(x,y)$$
is convergent. Recall that if the classical separation of variables method is applied to the homogeneous problem
$$\frac{\partial^2 U(x,y)}{\partial x^2} + \frac{\partial^2 U(x,y)}{\partial y^2} = 0, \quad (x,y) \in \Omega,$$
$$U(x,0) = U(x,b) = 0,$$
corresponding to (5.4) and (5.5), then its solution U(x, y) is given by
$$U(x,y) = \sum_{n=1}^{\infty} X_n(x)Y_n(y).$$
This yields the following eigenvalue problem in Y_n(y):
$$\frac{d^2Y_n(y)}{dy^2} + \nu^2 Y_n(y) = 0, \quad y \in (0,b),$$
$$Y_n(0) = Y_n(b) = 0,$$
whose eigenvalues and eigenfunctions are found as ν = nπ/b, with n = 1, 2, 3, …, and Y_n(y) = sin νy. Following then, with the above in mind, the procedure of the eigenfunction expansion method, we express the solution u(x, y) of the problem in (5.4) and (5.5) in terms of the eigenfunctions Y_n(y),
$$u(x,y) = \sum_{n=1}^{\infty} u_n(x)\sin\nu y, \quad (5.6)$$
which represents the expansion of the two-variable function u(x, y) in a Fourier sine series with respect to one of the variables. The right-hand-side function f(x, y) in (5.4) is also expanded in terms of Y_n(y):
$$f(x,y) = \sum_{n=1}^{\infty} f_n(x)\sin\nu y. \quad (5.7)$$
Once the expansions from (5.6) and (5.7) are substituted into (5.4), we obtain
$$\sum_{n=1}^{\infty}\left[\frac{d^2u_n(x)}{dx^2} - \nu^2u_n(x)\right]\sin\nu y = -\sum_{n=1}^{\infty} f_n(x)\sin\nu y.$$
Equating the coefficients of the two series in the above relation yields the ordinary differential equation
$$\frac{d^2u_n(x)}{dx^2} - \nu^2u_n(x) = -f_n(x), \quad -\infty < x < \infty, \quad (5.8)$$
in the coefficients u_n(x) of the series in (5.6). Clearly, the boundedness conditions
$$\lim_{x\to-\infty}|u_n(x)| < \infty, \qquad \lim_{x\to\infty}|u_n(x)| < \infty, \quad (5.9)$$
must be imposed on u_n(x) to make the problem setting in (5.8) and (5.9) well posed. To construct the Green's function of the above boundary-value problem, we may choose either the approach employing the defining properties or the one based on the method of variation of parameters. Choosing the latter, we trace out its procedure, which was described in detail in Sect. 4.2. That is, we express the general solution to (5.8) in the form
$$u_n(x) = C_1(x)e^{\nu x} + C_2(x)e^{-\nu x}, \quad (5.10)$$
which yields the well-posed system of linear algebraic equations
$$\begin{pmatrix} e^{\nu x} & e^{-\nu x}\\ \nu e^{\nu x} & -\nu e^{-\nu x}\end{pmatrix}\begin{pmatrix}C_1'(x)\\ C_2'(x)\end{pmatrix} = \begin{pmatrix}0\\ -f_n(x)\end{pmatrix}$$
in C1′(x) and C2′(x), whose solution is obtained as
$$C_1'(x) = -\frac{1}{2\nu}e^{-\nu x}f_n(x), \qquad C_2'(x) = \frac{1}{2\nu}e^{\nu x}f_n(x).$$
Expressions for C1(x) and C2(x),
$$C_1(x) = -\left[\frac{1}{2\nu}\int_{-\infty}^{x} e^{-\nu\xi}f_n(\xi)\,d\xi + D_1\right]$$
and
$$C_2(x) = \frac{1}{2\nu}\int_{-\infty}^{x} e^{\nu\xi}f_n(\xi)\,d\xi + D_2,$$
are found by integration. Substituting these into (5.10), we obtain
$$u_n(x) = \left[\frac{1}{2\nu}\int_{-\infty}^{x} e^{\nu\xi}f_n(\xi)\,d\xi + D_2\right]e^{-\nu x} - \left[\frac{1}{2\nu}\int_{-\infty}^{x} e^{-\nu\xi}f_n(\xi)\,d\xi + D_1\right]e^{\nu x}. \quad (5.11)$$
The first boundedness condition in (5.9) requires that the factor of e^{−νx} in (5.11) be zero as x approaches negative infinity. This yields D2 = 0. The second condition in (5.9), in turn, implies that the factor of e^{νx} is zero as x approaches infinity. This yields
$$D_1 = -\frac{1}{2\nu}\int_{-\infty}^{\infty} e^{-\nu\xi}f_n(\xi)\,d\xi.$$
After the values of D1 and D2 that were just found are substituted into (5.11), the solution of the boundary-value problem posed in (5.8) and (5.9) is found as
$$u_n(x) = \int_{-\infty}^{x} \frac{1}{2\nu}e^{\nu(\xi-x)}f_n(\xi)\,d\xi + \int_{x}^{\infty} \frac{1}{2\nu}e^{\nu(x-\xi)}f_n(\xi)\,d\xi,$$
which reads as a single integral
$$u_n(x) = \int_{-\infty}^{\infty} g_n(x,\xi)f_n(\xi)\,d\xi, \quad (5.12)$$
whose kernel is expressed as
$$g_n(x,\xi) = \frac{1}{2\nu}e^{-\nu|x-\xi|}, \quad -\infty < x, \xi < \infty.$$
Thus, the above represents the Green's function of the homogeneous boundary-value problem corresponding to that in (5.8) and (5.9). With the aid of the Euler–Fourier formula, the coefficient f_n(ξ) in the series of (5.7) is expressed through the right-hand-side function of the equation in (5.4) as
$$f_n(\xi) = \frac{2}{b}\int_0^b f(\xi,\eta)\sin\nu\eta\,d\eta.$$
By substitution of the above into (5.12) and then substituting the coefficients u_n(x) into (5.6), the solution to the problem in (5.4) and (5.5) is obtained in the form
$$u(x,y) = \int_0^b\!\!\int_{-\infty}^{\infty} \left[\sum_{n=1}^{\infty}\frac{1}{\pi n}e^{-\nu|x-\xi|}\sin\nu y\sin\nu\eta\right]f(\xi,\eta)\,d\xi\,d\eta, \quad (5.13)$$
which suggests that, in view of (5.3), the kernel
$$G(x,y;\xi,\eta) = \sum_{n=1}^{\infty}\frac{1}{\pi n}e^{-\nu|x-\xi|}\sin\nu y\sin\nu\eta \quad (5.14)$$
of the integral representation in (5.13) represents the Green's function to the homogeneous boundary-value problem corresponding to that in (5.4) and (5.5). The series in (5.14) is nonuniformly convergent. Due to the logarithmic singularity, it diverges, in fact, when the observation point (x, y) coincides with the source point (ξ, η). This makes the above series form of the Green's function somewhat
inconvenient for numerical implementations. But the situation can be radically improved, because the series is actually summable. To sum it, we transform (5.14) into
$$G(x,y;\xi,\eta) = \sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{2\pi n}\bigl[\cos\nu(y-\eta) - \cos\nu(y+\eta)\bigr]$$
$$= \frac{1}{2\pi}\left[\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\cos\nu(y-\eta) - \sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\cos\nu(y+\eta)\right] \quad (5.15)$$
and recall the classical [5, 6, 9] summation formula
$$\sum_{n=1}^{\infty}\frac{p^n}{n}\cos n\vartheta = -\frac{1}{2}\ln\bigl(1 - 2p\cos\vartheta + p^2\bigr), \quad (5.16)$$
which holds if its parameters meet the constraints p < 1 and 0 ≤ ϑ < 2π. It is evident that the series in (5.15) are of the type in (5.16), with p = e^{−ω|x−ξ|}, where ω = π/b, so that e^{−ν|x−ξ|} = pⁿ, and that the constraints on the parameters p and ϑ are met. Indeed, it is clear that
$$e^{-\omega|x-\xi|} \le 1, \qquad 0 \le \omega|y-\eta| < 2\pi, \qquad 0 \le \omega(y+\eta) < 2\pi.$$
Hence, the series in (5.15) appear summable, which yields the analytical representation
$$G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\frac{1 - 2e^{\omega(x-\xi)}\cos\omega(y+\eta) + e^{2\omega(x-\xi)}}{1 - 2e^{\omega(x-\xi)}\cos\omega(y-\eta) + e^{2\omega(x-\xi)}} \quad (5.17)$$
for the Green's function to the homogeneous boundary-value problem corresponding to that in (5.4) and (5.5). Here ω = π/b. At this point, the reader is referred to the expression in (3.37) of Chap. 3, which was obtained (by the method of conformal mapping) as the Green's function of the Dirichlet problem for the infinite strip Ω = {−∞ < x < ∞, 0 < y < π} of width π. Clearly, if we assume b = π, implying ω = 1, then the expression in (5.17) reduces to that of (3.37). Note that, similarly to the conversion of the representation in (3.37) into that in (3.38) undertaken in Sect. 3.2, the expression shown in (5.17) converts into
$$G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\frac{\cosh\omega(x-\xi) - \cos\omega(y+\eta)}{\cosh\omega(x-\xi) - \cos\omega(y-\eta)}. \quad (5.18)$$
Recall that the conversion is accomplished by multiplying the numerator and denominator in (5.17) by the factor 2e^{ω(ξ−x)}, with subsequent use of the Euler formula for the hyperbolic cosine function.
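The agreement between the series form (5.14) and the closed form (5.18) is easy to confirm numerically at any point with x ≠ ξ, where the series converges geometrically. A brief check (an added illustration) with arbitrarily chosen points and b = 1:

```python
import math

b = 1.0
omega = math.pi / b
x, y, xi, eta = 0.3, 0.4, 0.1, 0.7   # observation and source points, x != xi

# Partial sum of the series representation (5.14), nu = n*pi/b
series = sum(
    math.exp(-(n * math.pi / b) * abs(x - xi))
    * math.sin(n * math.pi * y / b) * math.sin(n * math.pi * eta / b)
    / (math.pi * n)
    for n in range(1, 200)
)

# Closed analytical form (5.18)
closed = (1.0 / (4 * math.pi)) * math.log(
    (math.cosh(omega * (x - xi)) - math.cos(omega * (y + eta)))
    / (math.cosh(omega * (x - xi)) - math.cos(omega * (y - eta)))
)

print(series, closed)  # the two values should agree to high accuracy
```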
Another shorthand expression for (5.17) can be obtained upon introducing the complex variables
$$z = x + iy \quad \text{and} \quad \zeta = \xi + i\eta$$
for the observation point (x, y) and the source point (ξ, η). Indeed, recalling the Euler formula e^z = e^x(cos y + i sin y) for the complex exponent, one reduces the representation in (5.17) to a compact form. That is,
$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{|1 - e^{\omega(z-\bar{\zeta})}|}{|1 - e^{\omega(z-\zeta)}|}, \quad (5.19)$$
where the bar on ζ stands for the complex conjugate.

Example 5.2 We turn to the construction of the Green's function for the Dirichlet problem
$$u(x,0) = u(x,b) = u(0,y) = 0 \quad (5.20)$$
stated for the Laplace equation on the semi-infinite strip = {0 < x < ∞, 0 < y < b}. Consider the boundary-value problem posed in (5.4) and (5.20) on . In addition to the conditions in (5.20), it is required for the function u(x, y) to be bounded as x approaches infinity, while the right-hand-side function f (x, y) in (5.4) is assumed to be integrable on . Following the scheme of the eigenfunction expansion method, the functions u(x, y) and f (x, y) are expanded, analogously to the case in Example 5.1, in the Fourier sine series of (5.6) and (5.7). This yields the boundary-value problem d 2 un (x) − ν 2 un (x) = −fn (x), 0 < x < ∞, dx 2 lim |un (x)| < ∞, un (0) = 0, x→∞
in the coefficients un (x) of the series in (5.6). Recall that the Green’s function gn (x, ξ ) to the above problem was obtained earlier, in Chap. 4 (see the form in (4.20)). Using our current notation, we express it as 1 −ν|x−ξ | e − e−ν(x+ξ ) , for 0 ≤ x, ξ ≤ ∞. gn (x, ξ ) = 2ν Tracing out the procedure used for the setting in Example 5.1, a series expansion of the Green’s function for the homogeneous boundary-value problem correspond-
92
5
Eigenfunction Expansion
ing to that in (5.4) and (5.20) is obtained as ∞
G(x, y; ξ, η) =
2 gn (x, ξ ) sin νy sin νη, b n=1
and after employing the summation formula from (5.16), the above representation transforms to the closed analytical form

$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{\bigl|1-e^{\omega(z-\bar\zeta)}\bigr|\bigl|1-e^{\omega(z+\bar\zeta)}\bigr|}{\bigl|1-e^{\omega(z-\zeta)}\bigr|\bigl|1-e^{\omega(z+\zeta)}\bigr|}, \tag{5.21}$$

where ω = π/b. Equivalence of this to the form

$$G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\frac{\bigl(\cosh\omega(x+\xi)-\cos\omega(y-\eta)\bigr)\bigl(\cosh\omega(x-\xi)-\cos\omega(y+\eta)\bigr)}{\bigl(\cosh\omega(x-\xi)-\cos\omega(y-\eta)\bigr)\bigl(\cosh\omega(x+\xi)-\cos\omega(y+\eta)\bigr)}, \tag{5.22}$$
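The asserted equivalence of (5.21) and (5.22) can be checked directly. In the sketch below (the strip width and the test points are our own sample choices), both forms are evaluated at several field points and compared:

```python
import cmath
import math

b = 1.5
w = math.pi / b

def G21(x, y, xi, eta):
    # complex-variable form (5.21); the bar denotes complex conjugation
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(w * (z - zeta.conjugate()))) * abs(1 - cmath.exp(w * (z + zeta.conjugate())))
    den = abs(1 - cmath.exp(w * (z - zeta))) * abs(1 - cmath.exp(w * (z + zeta)))
    return math.log(num / den) / (2 * math.pi)

def G22(x, y, xi, eta):
    # real hyperbolic-trigonometric form (5.22)
    ch, c = math.cosh, math.cos
    num = (ch(w * (x + xi)) - c(w * (y - eta))) * (ch(w * (x - xi)) - c(w * (y + eta)))
    den = (ch(w * (x - xi)) - c(w * (y - eta))) * (ch(w * (x + xi)) - c(w * (y + eta)))
    return math.log(num / den) / (4 * math.pi)

gap = max(abs(G21(x, y, 0.5, 0.6) - G22(x, y, 0.5, 0.6))
          for x, y in [(0.3, 0.2), (1.1, 0.9), (2.0, 1.4)])
print(gap)   # ~ 0: the two representations coincide
```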
which is usually given for this Green's function in the literature [18], can readily be verified using the algebra explained earlier in Example 5.1.

So far, we have used the method of eigenfunction expansion as an alternative way to construct some Green's functions already available in the literature. In what follows, in contrast, the method is applied to a mixed boundary-value problem whose Green's function probably cannot be obtained otherwise.

Example 5.3 Consider the Poisson equation

$$\frac{\partial^2 u(x,y)}{\partial x^2} + \frac{\partial^2 u(x,y)}{\partial y^2} = -f(x,y), \quad (x,y)\in\Omega, \tag{5.23}$$

stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}, and subject, in this case, to the boundary conditions

$$\frac{\partial u(0,y)}{\partial x} - \beta u(0,y) = 0, \qquad u(x,0) = u(x,b) = 0, \qquad \beta \ge 0. \tag{5.24}$$

By virtue of the Fourier sine-series expansions

$$u(x,y) = \sum_{n=1}^{\infty} u_n(x)\sin\nu y \quad\text{and}\quad f(x,y) = \sum_{n=1}^{\infty} f_n(x)\sin\nu y, \qquad \nu = \frac{n\pi}{b},$$
we obtain the boundary-value problem

$$\frac{d^2 u_n(x)}{dx^2} - \nu^2 u_n(x) = -f_n(x), \quad 0 < x < \infty,$$
$$\frac{du_n(0)}{dx} - \beta u_n(0) = 0, \qquad \lim_{x\to\infty}|u_n(x)| < \infty,$$

in the coefficients u_n(x) of the above series expansion for u(x, y). Following the procedure of the method of variation of parameters, the Green's function g_n(x, ξ) to the above problem is found in the form

$$g_n(x,\xi) = \frac{1}{2\nu}\begin{cases} e^{-\nu\xi}\bigl(e^{\nu x} + \beta^* e^{-\nu x}\bigr), & x \le \xi, \\[4pt] e^{-\nu x}\bigl(e^{\nu\xi} + \beta^* e^{-\nu\xi}\bigr), & x \ge \xi, \end{cases} \tag{5.25}$$

where β* = (ν − β)/(ν + β). The solution to (5.23) and (5.24) is then obtained as

$$u(x,y) = \int_0^b\!\!\int_0^\infty \Biggl[\frac{2}{b}\sum_{n=1}^{\infty} g_n(x,\xi)\sin\nu y\sin\nu\eta\Biggr] f(\xi,\eta)\,d\xi\,d\eta. \tag{5.26}$$

This implies that the kernel

$$G(x,y;\xi,\eta) = \frac{2}{b}\sum_{n=1}^{\infty} g_n(x,\xi)\sin\nu y\sin\nu\eta \tag{5.27}$$

of (5.26) represents the Green's function to the problem in (5.24). Thus, the Green's function that we are looking for is formally obtained. But what about the computability of the representation in (5.27)? In contrast to closed analytical forms, such as those obtained in Examples 5.1 and 5.2, the one in (5.27) is not, unfortunately, suitable for immediate computer implementation. This is so because the series in (5.27) does not (and cannot) converge uniformly. We have already touched upon this phenomenon earlier in this book: no series-only representation of a Green's function for the two-dimensional Laplace equation can converge uniformly, owing to the logarithmic singularity of the Green's function.

To give the reader a sense of the level of accuracy attainable by the series expansion in (5.27), Fig. 5.1 exhibits profiles of G(x, y; ξ, η) for various partial sums. The problem was specified by the parameters b = 1 and β = 0.5. The source point is fixed at (0.1, 0.5), and the profile of G(x, 0.3; 0.1, 0.5) is depicted in a neighborhood of the source point. Although increasing the order of the partial sum provides a reasonable improvement, the approximation in the immediate vicinity of the source point remains quite poor. In other words, any attempt to approximate the Green's function with a partial sum of (5.27) is ineffectual, at least in a neighborhood of the source point (ξ, η).
Fig. 5.1 Profiles of different partial sums of the series in (5.27)
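The slow convergence of (5.27) near the source point can be reproduced with a few lines of code. The sketch below (using the parameters b = 1 and β = 0.5 of Fig. 5.1; the evaluation points are our own) compares the truncation error of the partial sums near the source with that at a point well away from it:

```python
import math

b, beta = 1.0, 0.5

def partial_sum(N, x, y, xi, eta):
    # Nth partial sum of (5.27), with g_n taken from (5.25)
    s = 0.0
    lo, hi = min(x, xi), max(x, xi)
    for n in range(1, N + 1):
        nu = n * math.pi / b
        bstar = (nu - beta) / (nu + beta)
        gn = (math.exp(-nu * (hi - lo)) + bstar * math.exp(-nu * (hi + lo))) / (2 * nu)
        s += gn * math.sin(nu * y) * math.sin(nu * eta)
    return 2 * s / b

xi, eta = 0.1, 0.5
near, far = (0.1, 0.45), (0.6, 0.5)
ref_near = partial_sum(20000, *near, xi, eta)
ref_far = partial_sum(20000, *far, xi, eta)
err_near = sum(abs(partial_sum(N, *near, xi, eta) - ref_near) for N in range(50, 60)) / 10
err_far = sum(abs(partial_sum(N, *far, xi, eta) - ref_far) for N in range(50, 60)) / 10
print(err_near, err_far)   # slow convergence near the source, geometric decay far away
```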
Recall now the cases covered earlier in Examples 5.1 and 5.2, where we managed to sum the series expansions of Green's functions. Observe, for example, how the series in (5.14) converts to the closed analytical form in (5.18). In contrast to those cases, the series in (5.27) cannot be summed completely, but its singular component can be split off, radically improving its computability. Since plain truncation of the series in (5.27) does not work, an extra effort is required to enhance its computability. One possible way of doing so was proposed in [12]. The idea is to split the expression for g_n(x, ξ) in (5.25) into two parts, one of which contains the components responsible for the singularity and allows a complete summation, while the other part leads to a uniformly convergent series. In doing so, we rewrite the coefficient g_n(x, ξ) in the form

$$g_n(x,\xi) = \frac{1}{2\nu}\Bigl(e^{-\nu|x-\xi|} + \frac{\nu-\beta}{\nu+\beta}\,e^{-\nu(x+\xi)}\Bigr), \quad 0 < x, \xi < \infty,$$

and represent the factor (ν − β)/(ν + β) of its second exponential function e^{−ν(x+ξ)} as

$$\frac{\nu-\beta}{\nu+\beta} = 1 - \frac{2\beta}{\nu+\beta}.$$

This yields

$$g_n(x,\xi) = \frac{1}{2\nu}\Bigl(e^{-\nu|x-\xi|} + e^{-\nu(x+\xi)} - \frac{2\beta}{\nu+\beta}\,e^{-\nu(x+\xi)}\Bigr).$$

Upon substituting the above into (5.27), we rewrite the latter as

$$G(x,y;\xi,\eta) = \frac{1}{b}\sum_{n=1}^{\infty}\frac{1}{\nu}\bigl(e^{-\nu|x-\xi|} + e^{-\nu(x+\xi)}\bigr)\sin\nu y\sin\nu\eta - \frac{2\beta}{b}\sum_{n=1}^{\infty}\frac{e^{-\nu(x+\xi)}}{\nu(\nu+\beta)}\sin\nu y\sin\nu\eta, \qquad \nu = \frac{n\pi}{b}.$$
Clearly, the first of the above two series is summable; the summation can be accomplished in the same way as in Examples 5.1 and 5.2. The second series does not allow a summation, but it converges uniformly, and we may leave it in its current form without significantly deteriorating the computability of the whole expression. Thus, a computer-friendly representation of the Green's function to the mixed boundary-value problem in (5.24) for the Laplace equation posed on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b} is finally obtained as

$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{\bigl|1-e^{\omega(z-\bar\zeta)}\bigr|\bigl|1-e^{\omega(z+\zeta)}\bigr|}{\bigl|1-e^{\omega(z-\zeta)}\bigr|\bigl|1-e^{\omega(z+\bar\zeta)}\bigr|} - \frac{2\beta}{b}\sum_{n=1}^{\infty}\frac{e^{-\nu(x+\xi)}}{\nu(\nu+\beta)}\sin\nu y\sin\nu\eta, \qquad \omega = \frac{\pi}{b}. \tag{5.28}$$
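As a consistency check (not part of the original derivation), the split representation (5.28) can be compared against a brute-force summation of (5.27) at a point away from the source, where the latter series converges rapidly. The parameters below are our own sample values:

```python
import cmath
import math

b, beta = 1.0, 0.75
w = math.pi / b
x, y, xi, eta = 0.7, 0.3, 0.2, 0.6

def G_series(N):
    # direct summation of (5.27)
    s = 0.0
    lo, hi = min(x, xi), max(x, xi)
    for n in range(1, N + 1):
        nu = n * w
        bstar = (nu - beta) / (nu + beta)
        gn = (math.exp(-nu * (hi - lo)) + bstar * math.exp(-nu * (hi + lo))) / (2 * nu)
        s += gn * math.sin(nu * y) * math.sin(nu * eta)
    return 2 * s / b

def G_closed(N):
    # representation (5.28): summed logarithm plus a uniformly convergent series
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(w * (z - zeta.conjugate()))) * abs(1 - cmath.exp(w * (z + zeta)))
    den = abs(1 - cmath.exp(w * (z - zeta))) * abs(1 - cmath.exp(w * (z + zeta.conjugate())))
    s = math.log(num / den) / (2 * math.pi)
    for n in range(1, N + 1):
        nu = n * w
        s -= 2 * beta / b * math.exp(-nu * (x + xi)) / (nu * (nu + beta)) * math.sin(nu * y) * math.sin(nu * eta)
    return s

gap = abs(G_series(400) - G_closed(400))
print(gap)   # ~ 0 away from the source point
```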
If β = 0, then the above reduces to the Green's function

$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{\bigl|1-e^{\omega(z-\bar\zeta)}\bigr|\bigl|1-e^{\omega(z+\zeta)}\bigr|}{\bigl|1-e^{\omega(z-\zeta)}\bigr|\bigl|1-e^{\omega(z+\bar\zeta)}\bigr|} \tag{5.29}$$

of the boundary-value problem

$$\frac{\partial u(0,y)}{\partial x} = 0, \qquad u(x,0) = u(x,b) = 0,$$

for the Laplace equation on the region Ω = {0 < x < ∞, 0 < y < b}.

We turn again to the expression in (5.28). Its series component converges at a rapid rate. To be more specific, we estimate its N th remainder

$$R_N(x,y;\xi,\eta) = \sum_{n=N+1}^{\infty}\frac{e^{-\nu(x+\xi)}}{\nu(\nu+\beta)}\sin\nu y\sin\nu\eta. \tag{5.30}$$
The exponential and trigonometric factors of the general term in this series never exceed unity. Since the parameter β is nonnegative, we arrive at the following estimate for the absolute value of R_N:

$$|R_N(x,y;\xi,\eta)| \le \sum_{n=N+1}^{\infty}\frac{1}{\nu(\nu+\beta)} \le \sum_{n=N+1}^{\infty}\frac{1}{\nu^2} = \frac{b^2}{\pi^2}\sum_{n=N+1}^{\infty}\frac{1}{n^2} = \frac{b^2}{\pi^2}\Biggl(\sum_{n=1}^{\infty}\frac{1}{n^2} - \sum_{n=1}^{N}\frac{1}{n^2}\Biggr).$$

The infinite series in parentheses can be summed [9], yielding

$$|R_N(x,y;\xi,\eta)| \le \frac{b^2}{\pi^2}\Biggl(\frac{\pi^2}{6} - \sum_{n=1}^{N}\frac{1}{n^2}\Biggr). \tag{5.31}$$
Notice first that the above inequality is quite compact and very simple to use. Second, it provides a uniform estimate and is therefore valid at any point of Ω. Third, it follows from our derivation that the estimate is relatively coarse. The latter observation makes it advisable to revisit the analysis of (5.30). In doing so, we replace the trigonometric factors with unity and express the parameter ν in terms of n. This yields

$$|R_N(x,y;\xi,\eta)| \le \sum_{n=N+1}^{\infty}\frac{e^{-\nu(x+\xi)}}{\nu(\nu+\beta)} = \frac{b^2}{\pi^2}\sum_{n=N+1}^{\infty}\frac{e^{-\nu(x+\xi)}}{n(n+\beta_0)},$$

where β₀ = βb/π. In the case β₀ ≥ 1, the above estimate can be improved. That is,

$$|R_N(x,y;\xi,\eta)| \le \frac{b^2}{\pi^2}\sum_{n=N+1}^{\infty}\frac{e^{-\nu(x+\xi)}}{n(n+1)} = \frac{b^2}{\pi^2}\Biggl(\sum_{n=1}^{\infty}\frac{e^{-\nu(x+\xi)}}{n(n+1)} - \sum_{n=1}^{N}\frac{e^{-\nu(x+\xi)}}{n(n+1)}\Biggr).$$

Note that the infinite series in the brackets is summable. Using the standard summation formula [9]

$$\sum_{n=1}^{\infty}\frac{p^n}{n(n+1)} = 1 - \frac{1-p}{p}\ln\frac{1}{1-p}, \qquad p^2 < 1,$$

where p = e^{−ν(x+ξ)} with ν = π/b, we arrive at the following estimate for the remainder in (5.30):

$$|R_N(x,y;\xi,\eta)| \le \frac{b^2}{\pi^2}\Biggl(1 + \bigl(e^{\nu(x+\xi)}-1\bigr)\ln\bigl(1-e^{-\nu(x+\xi)}\bigr) - \sum_{n=1}^{N}\frac{p^n}{n(n+1)}\Biggr). \tag{5.32}$$
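Both remainder estimates are straightforward to test numerically. The sketch below (with our own sample values b = 1 and β = 4, so that β₀ ≥ 1 as (5.32) requires) confirms that the actual remainder is dominated by the nonuniform bound (5.32), which in turn is far sharper than the uniform bound (5.31):

```python
import math

b, beta = 1.0, 4.0            # beta0 = beta*b/pi >= 1, as (5.32) requires
x, xi, y, eta = 0.25, 0.35, 0.4, 0.7
N = 5

def term(n):
    nu = n * math.pi / b
    return math.exp(-nu * (x + xi)) / (nu * (nu + beta)) * math.sin(nu * y) * math.sin(nu * eta)

actual = abs(sum(term(n) for n in range(N + 1, 5000)))
bound31 = (b / math.pi) ** 2 * (math.pi ** 2 / 6 - sum(1 / n ** 2 for n in range(1, N + 1)))
p = math.exp(-math.pi * (x + xi) / b)
full = 1 - (1 - p) / p * math.log(1 / (1 - p))   # sum of p^n/(n(n+1)) over all n >= 1
bound32 = (b / math.pi) ** 2 * (full - sum(p ** n / (n * (n + 1)) for n in range(1, N + 1)))
print(actual, bound32, bound31)   # actual <= bound (5.32) <= bound (5.31)
```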
The above estimate, unlike that in (5.31), is nonuniform. Indeed, its right-hand side depends on the observation and source points to which the estimate is applied, making it more flexible in practical computing. In other words, it allows the user to apply different truncations of the series in (5.28) in different zones of Ω in order to maintain a desired accuracy level over the entire region.

The improvement achieved by the foregoing transformation of the series-only form of the Green's function in (5.27) is remarkable. This can be fully appreciated when the profile depicted in Fig. 5.2 is compared with those shown earlier in Fig. 5.1. In Fig. 5.2, the representation in (5.28) is shown with only the tenth partial sum of its series component accounted for.

Example 5.4 The method of eigenfunction expansion will be used here to construct the Green's function of the Dirichlet problem for the Laplace equation stated on a rectangle. This is our second look at the problem: it was considered in Chap. 3, where its Green's function was obtained by the method of conformal mapping (see the representation in (3.41)). The representation in (3.41) is expressed in terms of a special (Weierstrass) function that is not tabulated, making it inconvenient in computing.
Fig. 5.2 Convergence of the representation in (5.28)
The objective in the current example is to derive an alternative representation of the Green's function for the Laplace equation on a rectangle; we aim, in other words, at a form that is more easily computable. In doing so, consider the boundary-value problem

$$\frac{\partial^2 u(x,y)}{\partial x^2} + \frac{\partial^2 u(x,y)}{\partial y^2} = -f(x,y), \quad (x,y)\in\Omega, \tag{5.33}$$

$$u(x,0) = u(x,b) = u(0,y) = u(a,y) = 0, \tag{5.34}$$

on the rectangle Ω = {0 < x < a, 0 < y < b}, where f(x, y) is assumed to be integrable (continuous) on the closure of Ω.

It is assumed that the reader has learned from a course on differential equations (see, for example, [5, 15]) that the components of the set of functions

$$U_{m,n}(x,y) = \sin\mu x\sin\nu y, \qquad \mu = \frac{m\pi}{a}, \quad \nu = \frac{n\pi}{b}, \quad m,n = 1,2,3,\dots,$$

represent eigenfunctions of the Dirichlet problem for the Laplace operator on the rectangle Ω. Indeed, one can directly check that every component of the set U_{m,n}(x, y) satisfies the conditions in (5.34) and is also a solution of the static Klein-Gordon equation

$$\frac{\partial^2 U_{m,n}(x,y)}{\partial x^2} + \frac{\partial^2 U_{m,n}(x,y)}{\partial y^2} + \lambda^2 U_{m,n}(x,y) = 0, \quad (x,y)\in\Omega,$$

if the parameter λ is defined in terms of the indices m and n as λ² = μ² + ν². This motivates the strategy of our approach to the problem in (5.33) and (5.34): we represent its solution u(x, y) in the eigenfunction expansion (double Fourier sine series) form

$$u(x,y) = \sum_{m,n=1}^{\infty} u_{m,n}\sin\mu x\sin\nu y \tag{5.35}$$
and expand also the right-hand-side function f(x, y) in (5.33) in the double Fourier sine series

$$f(x,y) = \sum_{m,n=1}^{\infty} f_{m,n}\sin\mu x\sin\nu y. \tag{5.36}$$

Once the expansions from (5.35) and (5.36) are substituted into (5.33), we have

$$-\sum_{m,n=1}^{\infty}\bigl(\mu^2+\nu^2\bigr)u_{m,n}\sin\mu x\sin\nu y = -\sum_{m,n=1}^{\infty} f_{m,n}\sin\mu x\sin\nu y.$$

Equating the corresponding coefficients of the series on the left-hand and right-hand sides of the above equation yields

$$u_{m,n} = \frac{f_{m,n}}{\mu^2+\nu^2}.$$

With the aid of the Euler-Fourier formula, the coefficients f_{m,n} in the series of (5.36) are expressed as

$$f_{m,n} = \frac{4}{ab}\int_0^b\!\!\int_0^a f(\xi,\eta)\sin\mu\xi\sin\nu\eta\,d\xi\,d\eta.$$

By substituting this expression for f_{m,n} into the formula for the coefficients u_{m,n}, and then substituting the coefficients u_{m,n} into (5.35), we obtain the solution of the problem posed by (5.33) and (5.34) in the form

$$u(x,y) = \int_0^b\!\!\int_0^a \Biggl[\frac{4}{ab}\sum_{m,n=1}^{\infty}\frac{\sin\mu x\sin\nu y\sin\mu\xi\sin\nu\eta}{\mu^2+\nu^2}\Biggr] f(\xi,\eta)\,d\xi\,d\eta.$$

Since the solution of the problem in (5.33) and (5.34) is expressed in the integral form of (5.3), the kernel of the above,

$$G(x,y;\xi,\eta) = \frac{4}{ab}\sum_{m,n=1}^{\infty}\frac{\sin\mu x\sin\nu y\sin\mu\xi\sin\nu\eta}{\mu^2+\nu^2}, \tag{5.37}$$

represents the Green's function of the Dirichlet problem stated on the rectangle Ω = {0 < x < a, 0 < y < b}.

It is evident that computability represents a critical issue for the double series in (5.37). Addressing this issue in the forthcoming analysis, let a = π and b = π for simplicity. This transforms (5.37) into

$$G(x,y;\xi,\eta) = \frac{4}{\pi^2}\sum_{m,n=1}^{\infty}\frac{\sin mx\sin ny\sin m\xi\sin n\eta}{m^2+n^2}, \tag{5.38}$$

which is the Green's function for the square Ω = {0 < x < π, 0 < y < π}.
Fig. 5.3 Convergence of the representation of (5.38)
To examine the convergence rate of the series in (5.38), we depict, in Fig. 5.3, profiles of its (M, N)th partial sum for various values of the truncation parameters M and N. The x-coordinate of the field point is fixed at x = π/2, while the source point (ξ, η) is chosen as (π/2, 2). Two important observations can be made from the data in Fig. 5.3, and both of them indicate the low computational potential of the expression in (5.38). First, the logarithmic singularity is poorly approximated when the series is truncated. Second, a high-frequency oscillation dramatically reduces its practicality. Note that the oscillation is not entirely eliminated even in the case M = 100 and N = 100. This implies, in particular, that the accuracy in computing derivatives of the Green's function (which are frequently required in applications) should be even lower than that of the function itself.

Hence, some work is required to enhance the computational potential of the representation in (5.38). In [25], for example, it was proposed to rearrange the double summation in (5.38) as

$$\frac{4}{\pi^2}\sum_{n=1}^{\infty}\Biggl(\sum_{m=1}^{\infty}\frac{\sin mx\sin m\xi}{m^2+n^2}\Biggr)\sin ny\sin n\eta,$$

which after some trivial algebra reads

$$\frac{4}{\pi^2}\sum_{n=1}^{\infty}\Biggl(\frac{1}{2}\sum_{m=1}^{\infty}\frac{\cos m(x-\xi)-\cos m(x+\xi)}{m^2+n^2}\Biggr)\sin ny\sin n\eta.$$

Breaking the m-series into two, the above is transformed into

$$\frac{2}{\pi^2}\sum_{n=1}^{\infty}\Biggl(\sum_{m=1}^{\infty}\frac{\cos m(x-\xi)}{m^2+n^2} - \sum_{m=1}^{\infty}\frac{\cos m(x+\xi)}{m^2+n^2}\Biggr)\sin ny\sin n\eta. \tag{5.39}$$
Fig. 5.4 Convergence of the representation in (5.40)
In compliance with the standard summation formula [9]

$$\sum_{m=1}^{\infty}\frac{\cos m\beta}{m^2+\alpha^2} = \frac{\pi}{2\alpha}\,\frac{\cosh\alpha(\pi-\beta)}{\sinh\alpha\pi} - \frac{1}{2\alpha^2},$$

where the parameter β is assumed to satisfy 0 ≤ β ≤ 2π, each of the m-series in (5.39) is analytically summable. Carrying out the summation, we reduce the double series in (5.38) to

$$\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{\cosh n\bigl(\pi-|x-\xi|\bigr)-\cosh n\bigl(\pi-(x+\xi)\bigr)}{n\sinh n\pi}\sin ny\sin n\eta,$$

or

$$\frac{2}{\pi}\sum_{n=1}^{\infty} T_n(x,\xi)\sin ny\sin n\eta, \tag{5.40}$$

where the coefficient T_n(x, ξ) is defined as

$$T_n(x,\xi) = \frac{1}{n\sinh n\pi}\begin{cases}\sinh n(\pi-x)\sinh n\xi, & x \ge \xi,\\[4pt] \sinh n(\pi-\xi)\sinh nx, & x \le \xi.\end{cases}$$

To analyze the convergence of the single-series form in (5.40), we depict, in Fig. 5.4, profiles of its N th partial sum in a manner similar to that of Fig. 5.3. Comparison of the data in Figs. 5.3 and 5.4 clearly indicates that the single-series expression of the Green's function works somewhat better in approximating the basic logarithmic singularity. On the other hand, the high-frequency oscillation is still present in the single-series form; moreover, it becomes even more pronounced. So neither of the two series representations of the Green's function obtained so far is computationally efficient, leaving room for further improvement.

A significant step in that direction can be made by accelerating the convergence of the form in (5.40). This can be done by operating on either branch of the coefficient T_n(x, ξ). For specificity, we choose the one valid for x ≥ ξ and transform
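Both steps above lend themselves to a direct numerical check. The following sketch (the parameters are our own sample values) first verifies the m-series summation formula and then compares the single-series form (5.40) with the original double series (5.38); the tolerance in the second comparison is deliberately loose, since the double series itself converges slowly:

```python
import math

# check the tabulated m-series summation formula
alpha, b_ = 1.7, 2.3   # 0 <= b_ <= 2*pi
lhs = sum(math.cos(m * b_) / (m * m + alpha * alpha) for m in range(1, 200001))
rhs = math.pi * math.cosh(alpha * (math.pi - b_)) / (2 * alpha * math.sinh(alpha * math.pi)) \
      - 1 / (2 * alpha ** 2)
formula_gap = abs(lhs - rhs)

# compare the single series (5.40) with the double series (5.38)
x, y, xi, eta = 1.2, 0.8, 1.9, 2.0

def T(n):
    lo, hi = min(x, xi), max(x, xi)
    return math.sinh(n * (math.pi - hi)) * math.sinh(n * lo) / (n * math.sinh(n * math.pi))

single = 2 / math.pi * sum(T(n) * math.sin(n * y) * math.sin(n * eta) for n in range(1, 60))
M = 1200
sx = [0.0] + [math.sin(m * x) * math.sin(m * xi) for m in range(1, M)]
sy = [0.0] + [math.sin(n * y) * math.sin(n * eta) for n in range(1, M)]
double = 4 / math.pi ** 2 * sum(sx[m] * sy[n] / (m * m + n * n)
                                for m in range(1, M) for n in range(1, M))
print(formula_gap, abs(single - double))
```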
the series as

$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\sinh n\pi\cosh nx - \sinh nx\cosh n\pi}{n\sinh n\pi}\sinh n\xi\sin ny\sin n\eta,$$

or

$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\cosh nx - \sinh nx\coth n\pi}{n}\sinh n\xi\sin ny\sin n\eta.$$

Adding and subtracting the term sinh nx in the numerator, we rewrite the above as

$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\bigl[\cosh nx - \sinh nx + \sinh nx(1-\coth n\pi)\bigr]\sinh n\xi\sin ny\sin n\eta,$$

from which, upon removing the brackets, we have

$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sinh nx(1-\coth n\pi)\sinh n\xi\sin ny\sin n\eta + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}(\cosh nx - \sinh nx)\sinh n\xi\sin ny\sin n\eta.$$

It can readily be shown that the second of the above two series is analytically summable. To proceed with the summation, we convert its hyperbolic functions into exponential form. This yields

$$\frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}(1-\coth n\pi)\sinh nx\sinh n\xi\sin ny\sin n\eta + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}e^{-nx}\,\frac{e^{n\xi}-e^{-n\xi}}{2}\sin ny\sin n\eta,$$

which transforms, by means of elementary algebra, into

$$\frac{1}{2\pi}\sum_{n=1}^{\infty}\frac{1}{n}\bigl(e^{-n(x-\xi)}-e^{-n(x+\xi)}\bigr)\bigl(\cos n(y-\eta)-\cos n(y+\eta)\bigr) - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\sinh nx\sinh n\xi}{n e^{n\pi}\sinh n\pi}\sin ny\sin n\eta.$$
When the brackets are removed in the first of the above two series, it breaks into four pieces, each of which allows analytical summation in compliance with the
standard formula from (5.16). This converts the Green's function in (5.40) into

$$G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\frac{1-2e^{-(x-\xi)}\cos(y+\eta)+e^{-2(x-\xi)}}{1-2e^{-(x-\xi)}\cos(y-\eta)+e^{-2(x-\xi)}} + \frac{1}{4\pi}\ln\frac{1-2e^{-(x+\xi)}\cos(y-\eta)+e^{-2(x+\xi)}}{1-2e^{-(x+\xi)}\cos(y+\eta)+e^{-2(x+\xi)}} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\sinh nx\sinh n\xi}{n e^{n\pi}\sinh n\pi}\sin ny\sin n\eta.$$

Following some elementary transformations, the logarithmic terms reduce to the more compact form

$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{\bigl|1-e^{z-\bar\zeta}\bigr|\bigl|1-e^{z+\bar\zeta}\bigr|}{\bigl|1-e^{z-\zeta}\bigr|\bigl|1-e^{z+\zeta}\bigr|} - \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{\sinh nx\sinh n\xi}{n e^{n\pi}\sinh n\pi}\sin ny\sin n\eta, \tag{5.41}$$

where the complex-variable notation z = x + iy and ζ = ξ + iη,
as introduced earlier in Example 5.1, is used for the points (x, y) and (ξ, η).

The computational superiority of the version in (5.41) over those in (5.38) and (5.40) cannot be disputed, mainly because the basic logarithmic singularity of the Green's function is expressed analytically in (5.41). Indeed, it is contained in the term

$$\frac{1}{2\pi}\ln\frac{1}{\bigl|1-e^{z-\zeta}\bigr|}. \tag{5.42}$$

To verify this fact, we expand the exponential e^{z−ζ} in a Taylor series and substitute it into (5.42). This yields

$$\frac{1}{2\pi}\ln\frac{1}{\bigl|1-e^{z-\zeta}\bigr|} = \frac{1}{2\pi}\ln\frac{1}{\bigl|(z-\zeta)\bigl(1+\frac{1}{2!}(z-\zeta)+\frac{1}{3!}(z-\zeta)^2+\cdots\bigr)\bigr|} = \frac{1}{2\pi}\ln\frac{1}{|z-\zeta|} - \frac{1}{2\pi}\ln\Bigl|1+\frac{1}{2!}(z-\zeta)+\frac{1}{3!}(z-\zeta)^2+\cdots\Bigr|,$$

where the first logarithmic term in fact represents the fundamental solution of the Laplace equation, while the second logarithm is a regular function that vanishes at z = ζ.
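The structure of (5.41), an explicit logarithmic singularity plus a rapidly convergent series, can be verified numerically. The sketch below (the source point is our own arbitrary choice) checks the Dirichlet conditions on two sides of the square and the boundedness of the regular part near the source:

```python
import cmath
import math

def G41(x, y, xi, eta, N=80):
    # representation (5.41): explicit logarithmic part plus a series correction
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(z - zeta.conjugate())) * abs(1 - cmath.exp(z + zeta.conjugate()))
    den = abs(1 - cmath.exp(z - zeta)) * abs(1 - cmath.exp(z + zeta))
    s = math.log(num / den) / (2 * math.pi)
    for n in range(1, N + 1):
        s -= (2 / math.pi) * math.sinh(n * x) * math.sinh(n * xi) \
             / (n * math.exp(n * math.pi) * math.sinh(n * math.pi)) \
             * math.sin(n * y) * math.sin(n * eta)
    return s

xi, eta = 2.2, 1.3
side = G41(math.pi, 1.0, xi, eta)       # Dirichlet side x = pi (handled by the series)
bottom = G41(1.5, 0.0, xi, eta)         # Dirichlet side y = 0 (handled by the logarithm)
reg = [G41(xi + d, eta, xi, eta) + math.log(d) / (2 * math.pi) for d in (1e-3, 1e-5)]
print(side, bottom, reg)   # sides ~ 0; the regular part stays bounded near the source
```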
Table 5.1  Convergence of (5.41) for the source point (3π/4, π/2)

  Field point, x/π    Truncation parameter, N
                      10          20          50          100         200
  0.185               .0299291    .0299291    .0299291    .0299291    .0299291
  0.385               .0767002    .0767002    .0767002    .0767002    .0767002
  0.585               .1752671    .1752671    .1752671    .1752671    .1752671
  0.785               .3851038    .3851034    .3851033    .3851033    .3851033
  0.985               .0171972    .0171947    .0171937    .0171936    .0171936

Table 5.2  Convergence of (5.41) for the source point (0.99π, π/2)

  Field point, x/π    Truncation parameter, N
                      10          20          50          100         200
  0.185               .0010733    .0010733    .0010733    .0010733    .0010733
  0.385               .0027066    .0027066    .0027066    .0027066    .0027066
  0.585               .0057374    .0057367    .0057363    .0057363    .0057363
  0.785               .0137082    .0136948    .0136931    .0136928    .0136928
  0.985               .3066126    .2703686    .2567230    .2560739    .2560668
It is worth noting that although the basic logarithmic singularity is explicit in (5.41), the representation as a whole still has a computational drawback. That is, its convergence rate varies with the location of the field and the source point. In other words, the convergence of the series in (5.41) is nonuniform. Indeed, the series converges at a relatively fast rate unless both (x, y) and (ξ, η) are close to the boundary segment x = π . This feature of series expansions of Green’s functions could be referred to as the near-boundary singularity. The data in Tables 5.1 and 5.2 are presented to illustrate the near-boundary singularity of the representation in (5.41). The data in Table 5.1 were obtained for a source point relatively remote from the boundary segment x = π , whereas in Table 5.2, the source point is quite close to the boundary. As can be seen from Table 5.1, the data are nearly indifferent to the truncation, indicating a rapid convergence of the series (only the last row slightly varies with N ). The data in Table 5.2 are, in contrast, significantly affected by N , revealing poor convergence if both the field and the source point approach the boundary. The convergence of the representation in (5.41) can be further improved. By an elementary transformation, it reduces to a form that contains a series that is uniformly convergent. That is,
$$G(x,y;\xi,\eta) = \frac{1}{2\pi}\ln\frac{\bigl|1-e^{z-\bar\zeta}\bigr|\bigl|1-e^{z+\bar\zeta}\bigr|}{\bigl|1-e^{z-\zeta}\bigr|\bigl|1-e^{z+\zeta}\bigr|} + \frac{1}{4\pi}\ln\frac{\bigl|1-e^{z_1+\bar\zeta_1}\bigr|\bigl|1-e^{z_2+\bar\zeta_2}\bigr|}{\bigl|1-e^{z_1+\zeta_1}\bigr|\bigl|1-e^{z_2+\zeta_2}\bigr|} + \sum_{n=1}^{\infty}\frac{e^{n\pi}\cosh n(x-\xi) - \cosh n\pi\cosh n(x+\xi)}{\pi n e^{2n\pi}\sinh n\pi}\,S(y,\eta), \tag{5.43}$$

where

$$z_1 = (x+\pi) + iy, \qquad \zeta_1 = (\xi+\pi) + i\eta, \qquad z_2 = (x-\pi) + iy, \qquad \zeta_2 = (\xi-\pi) + i\eta,$$
and S(y, η) = sin ny sin nη.

To ensure accurate computation of values of G(x, y; ξ, η), we obtain an estimate of the series remainder in (5.43). In doing so, we write it down as

$$|R_N(x,y;\xi,\eta)| = \Biggl|\sum_{n=N+1}^{\infty}\frac{e^{n\pi}\cosh n(x-\xi) - \cosh n\pi\cosh n(x+\xi)}{\pi n e^{2n\pi}\sinh n\pi}\,S(y,\eta)\Biggr| \le \sum_{n=N+1}^{\infty}\frac{\bigl|e^{n\pi}\cosh n(x-\xi) - \cosh n\pi\cosh n(x+\xi)\bigr|}{\pi n e^{2n\pi}\sinh n\pi}.$$

Since the second additive term cosh nπ cosh n(x + ξ) in the numerator is never negative, the estimation procedure can be continued as

$$|R_N(x,y;\xi,\eta)| \le \sum_{n=N+1}^{\infty}\frac{e^{n\pi}\cosh n(x-\xi)}{\pi n e^{2n\pi}\sinh n\pi} \le \sum_{n=N+1}^{\infty}\frac{e^{-n\pi}\cosh nx}{\pi n\sinh n\pi} \le \frac{1}{\pi}\sum_{n=N+1}^{\infty}\frac{e^{-n\pi}}{n} = \frac{1}{\pi}\Biggl(\sum_{n=1}^{\infty}\frac{e^{-n\pi}}{n} - \sum_{n=1}^{N}\frac{e^{-n\pi}}{n}\Biggr).$$

The infinite series in the above expression is analytically summable [9], leading ultimately to the estimate

$$|R_N(x,y;\xi,\eta)| \le \frac{1}{\pi}\Biggl(\ln\frac{1}{1-e^{-\pi}} - \sum_{n=1}^{N}\frac{e^{-n\pi}}{n}\Biggr),$$

which indicates extremely rapid convergence of the series in (5.43): an error level of order, say, 10⁻⁸ can be attained for any location of the field and source points with the truncation parameter as low as N = 5. The superiority of the latter form of the Green's function over all other forms obtained so far is illustrated in Fig. 5.5, where its profile G(π/2, y; π/2, 2) is exhibited as in Figs. 5.3 and 5.4.
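The final bound is easy to evaluate. The sketch below (added for illustration) confirms the summation of the exponential series and tabulates the bound for a few truncation levels, consistent with the claim that N = 5 already suffices for an accuracy of order 10⁻⁸:

```python
import math

total = math.log(1 / (1 - math.exp(-math.pi)))   # sum of exp(-n*pi)/n over all n >= 1
direct = sum(math.exp(-n * math.pi) / n for n in range(1, 200))
check = abs(total - direct)

tails = {N: (total - sum(math.exp(-n * math.pi) / n for n in range(1, N + 1))) / math.pi
         for N in (2, 5, 10)}
print(check, tails)   # the tail bound for N = 5 is already below 1e-8
```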
Fig. 5.5 Convergence of the representation of (5.43)
Note that the representation in (5.43) of the Green's function of the Dirichlet problem for the Laplace equation is by far more computer-friendly than those in (5.38), (5.40), and (5.41). Two features of (5.43) support this claim: the analytical form of the basic logarithmic singularity and the uniform convergence of its series term. The latter feature allows complete elimination of the high-frequency oscillation by truncating the series to its low partial sums; the fifth partial sum, for example, is accounted for in Fig. 5.5.
5.3 Polar Coordinates

We begin our presentation in this section with a problem that has already been treated twice in the present volume. In Chap. 3, the classical expression

$$G(r,\varphi;\rho,\psi) = \frac{1}{4\pi}\ln\frac{a^4 - 2ra^2\rho\cos(\varphi-\psi) + r^2\rho^2}{a^2\bigl(r^2 - 2r\rho\cos(\varphi-\psi) + \rho^2\bigr)} \tag{5.44}$$

of the Green's function
of the Dirichlet problem for a disk of radius a was constructed by the methods of images and conformal mapping. This time around, the eigenfunction expansion method will be used for its derivation. We thereby provide the necessary background for a later application of this method to the construction of Green's functions for some other problems for which the methods of images and conformal mapping fail.

Example 5.5 On the disk Ω = {0 < r < a, 0 ≤ ϕ < 2π} of radius a, the boundary-value problem

$$\lim_{r\to 0}|u(r,\varphi)| < \infty \quad\text{and}\quad u(a,\varphi) = 0 \tag{5.45}$$

is considered for the Poisson equation

$$\frac{1}{r}\frac{\partial}{\partial r}\Bigl(r\frac{\partial u(r,\varphi)}{\partial r}\Bigr) + \frac{1}{r^2}\frac{\partial^2 u(r,\varphi)}{\partial\varphi^2} = -f(r,\varphi), \quad (r,\varphi)\in\Omega. \tag{5.46}$$
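Before rederiving (5.44), it is worth confirming its basic properties numerically. The sketch below (the radius and the sample points are our own choices) checks that (5.44) vanishes on the boundary r = a, is symmetric in the field and source points, and grows logarithmically at the source:

```python
import math

def G44(r, phi, rho, psi, a):
    # classical disk Green's function (5.44)
    num = a ** 4 - 2 * r * a * a * rho * math.cos(phi - psi) + (r * rho) ** 2
    den = a * a * (r * r - 2 * r * rho * math.cos(phi - psi) + rho * rho)
    return math.log(num / den) / (4 * math.pi)

a = 2.0
edge = G44(a, 1.0, 0.7, 0.4, a)                                  # zero on r = a
sym = G44(0.9, 1.0, 0.7, 0.4, a) - G44(0.7, 0.4, 0.9, 1.0, a)    # reciprocity
near = G44(0.7 + 1e-6, 0.4, 0.7, 0.4, a)                         # logarithmic growth
print(edge, sym, near)
```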
Note that the boundedness condition as r approaches zero is required in the above problem because r = 0 represents a singular point of the governing equation. As the reader has already learned in this section, our objective in the method of eigenfunction expansion is to express the solution of the problem in (5.45) and (5.46) in integral form, which in this case reads

$$u(r,\varphi) = \int_0^{2\pi}\!\!\int_0^a G(r,\varphi;\rho,\psi)f(\rho,\psi)\,\rho\,d\rho\,d\psi, \tag{5.47}$$

where ρ dρ dψ represents the area element in polar coordinates. This gives the Green's function G(r, ϕ; ρ, ψ) in which we are interested.

Taking into account the 2π-periodicity of the solution u(r, ϕ) of the problem in (5.45) and (5.46) with respect to the variable ϕ, we expand it in the trigonometric Fourier series

$$u(r,\varphi) = \frac{1}{2}u_0(r) + \sum_{n=1}^{\infty}\bigl(u_n^c(r)\cos n\varphi + u_n^s(r)\sin n\varphi\bigr). \tag{5.48}$$

The right-hand-side function f(r, ϕ) in (5.46) is also represented by the Fourier series

$$f(r,\varphi) = \frac{1}{2}f_0(r) + \sum_{n=1}^{\infty}\bigl(f_n^c(r)\cos n\varphi + f_n^s(r)\sin n\varphi\bigr). \tag{5.49}$$
By substitution of the expansions from (5.48) and (5.49) into (5.46) and equating the corresponding coefficients of the series on both sides, we derive the following linear ordinary differential equation:

$$\frac{d}{dr}\Bigl(r\frac{du_n(r)}{dr}\Bigr) - \frac{n^2}{r}u_n(r) = -rf_n(r), \qquad n = 0,1,2,\dots, \tag{5.50}$$

in the coefficients u_n(r) of the expansion in (5.48). At the current stage of our development, we omit the superscripts on u_n(r) and f_n(r) for notational convenience. The relations in (5.45) imply that the solution u_n(r) of (5.50) should satisfy the boundary conditions

$$\lim_{r\to 0}|u_n(r)| < \infty \quad\text{and}\quad u_n(a) = 0. \tag{5.51}$$

It is worth noting that the fundamental set of solutions of the homogeneous equation corresponding to (5.50) for the case n = 0 differs from that for the case n ≥ 1. This means that in constructing the Green's function of the boundary-value problem in (5.50) and (5.51), the two cases must be considered separately.

In the case n = 0, the boundary-value problem in (5.50) and (5.51) reduces to

$$\frac{d}{dr}\Bigl(r\frac{du_0(r)}{dr}\Bigr) = -rf_0(r), \tag{5.52}$$

$$\lim_{r\to 0}|u_0(r)| < \infty \quad\text{and}\quad u_0(a) = 0, \tag{5.53}$$
with the functions u(r) = ln r and u(r) = 1 representing a fundamental set of solutions of the homogeneous equation corresponding to (5.52). Hence, the general solution of (5.52) can be written, by the method of variation of parameters, as

$$u_0(r) = C_1(r)\ln r + C_2(r). \tag{5.54}$$

Substituting this into (5.52) and following the routine of the method, we obtain

$$C_1'(r) = -rf_0(r) \quad\text{and}\quad C_2'(r) = r\ln r\,f_0(r).$$

Integration of these expressions yields

$$C_1(r) = -\int_0^r \rho f_0(\rho)\,d\rho + D_1 \quad\text{and}\quad C_2(r) = \int_0^r \rho\ln\rho\,f_0(\rho)\,d\rho + D_2.$$

Once the above quantities are substituted into (5.54) and the integral terms are combined, the general solution of (5.52) is found as

$$u_0(r) = \int_0^r \rho\ln\frac{\rho}{r}\,f_0(\rho)\,d\rho + D_1\ln r + D_2.$$

The values

$$D_1 = 0 \quad\text{and}\quad D_2 = -\int_0^a \rho\ln\frac{\rho}{a}\,f_0(\rho)\,d\rho$$

of the constants D₁ and D₂ are obtained by taking advantage of the boundary conditions in (5.53). Upon substituting these into the above expression for u₀(r), the solution of the boundary-value problem in (5.52) and (5.53) reads

$$u_0(r) = \int_0^r \rho\ln\frac{\rho}{r}\,f_0(\rho)\,d\rho - \int_0^a \rho\ln\frac{\rho}{a}\,f_0(\rho)\,d\rho,$$

which can be rewritten in the single-integral form

$$u_0(r) = \int_0^a g_0(r,\rho)f_0(\rho)\,\rho\,d\rho, \tag{5.55}$$

where the kernel

$$g_0(r,\rho) = -\begin{cases}\ln(\rho/a), & r \le \rho,\\[4pt] \ln(r/a), & r \ge \rho,\end{cases} \tag{5.56}$$
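The kernel (5.56) can be validated against a case that is solvable by hand. For f₀ ≡ 1, direct integration of (5.52) with the conditions (5.53) gives u₀(r) = (a² − r²)/4 (our own worked example, not from the text); the sketch below reproduces this from (5.55) by quadrature:

```python
import math

a = 1.5

def g0(r, rho):
    # kernel (5.56), written with max()
    return -math.log(max(r, rho) / a)

def u0(r, M=20000):
    # u0(r) = integral of g0(r, rho) * f0(rho) * rho over (0, a), with f0 = 1,
    # by the composite trapezoid rule (the integrand vanishes at both endpoints)
    h = a / M
    return sum(g0(r, k * h) * (k * h) for k in range(1, M)) * h

gap = max(abs(u0(r) - (a * a - r * r) / 4) for r in (0.3, 0.9, 1.2))
print(gap)   # ~ 0: the kernel reproduces the exact solution of (5.52), (5.53)
```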
represents the Green's function of the homogeneous problem corresponding to that posed by (5.52) and (5.53).

We turn now to the case n ≥ 1; that is, we consider the boundary-value problem in (5.50) and (5.51) as it is. Since the equation in (5.50) is of Cauchy-Euler type, its fundamental set of solutions can be formed with the functions u(r) = rⁿ and u(r) = r⁻ⁿ. This yields the general solution of (5.50) in the form

$$u_n(r) = C_1(r)r^n + C_2(r)r^{-n},$$

and after proceeding through the variation-of-parameters routine, we derive the above as

$$u_n(r) = \frac{1}{2n}\int_0^r\Bigl[\Bigl(\frac{\rho}{r}\Bigr)^n - \Bigl(\frac{r}{\rho}\Bigr)^n\Bigr]\rho f_n(\rho)\,d\rho + D_1 r^n + D_2 r^{-n}. \tag{5.57}$$

The boundary conditions in (5.51) yield

$$D_2 = 0 \quad\text{and}\quad D_1 = \frac{1}{2n}\int_0^a\Bigl[\Bigl(\frac{1}{\rho}\Bigr)^n - \Bigl(\frac{\rho}{a^2}\Bigr)^n\Bigr]\rho f_n(\rho)\,d\rho.$$

Upon substituting these into (5.57), we obtain

$$u_n(r) = \frac{1}{2n}\int_0^r\Bigl[\Bigl(\frac{\rho}{r}\Bigr)^n - \Bigl(\frac{r}{\rho}\Bigr)^n\Bigr]\rho f_n(\rho)\,d\rho + \frac{1}{2n}\int_0^a\Bigl[\Bigl(\frac{r}{\rho}\Bigr)^n - \Bigl(\frac{r\rho}{a^2}\Bigr)^n\Bigr]\rho f_n(\rho)\,d\rho,$$

or, using more compact notation,

$$u_n(r) = \int_0^a g_n(r,\rho)f_n(\rho)\,\rho\,d\rho, \tag{5.58}$$

where the kernel g_n(r, ρ) is defined in two pieces. For r ≤ ρ, it reads

$$g_n(r,\rho) = \frac{1}{2n}\Bigl[\Bigl(\frac{r}{\rho}\Bigr)^n - \Bigl(\frac{r\rho}{a^2}\Bigr)^n\Bigr], \quad r \le \rho,$$

while for r ≥ ρ, we have

$$g_n(r,\rho) = \frac{1}{2n}\Bigl[\Bigl(\frac{\rho}{r}\Bigr)^n - \Bigl(\frac{r\rho}{a^2}\Bigr)^n\Bigr], \quad r \ge \rho.$$

The expression for u_n(r) in (5.58) suggests that the cosine and sine coefficients in the Fourier series of (5.48) can be written as

$$u_n^c(r) = \int_0^a g_n(r,\rho)f_n^c(\rho)\,\rho\,d\rho \tag{5.59}$$

and

$$u_n^s(r) = \int_0^a g_n(r,\rho)f_n^s(\rho)\,\rho\,d\rho, \tag{5.60}$$
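A similar hand-checkable case validates the kernel g_n. For f_n(ρ) = ρⁿ, solving (5.50) with the conditions (5.51) directly gives u_n(r) = rⁿ(a² − r²)/(4(n + 1)) (our own worked example, not from the text); the quadrature below reproduces it from (5.58):

```python
import math

a, n = 1.5, 3

def g_n(r, rho):
    # the two-piece kernel, written with min/max
    lo, hi = min(r, rho), max(r, rho)
    return ((lo / hi) ** n - (r * rho / a ** 2) ** n) / (2 * n)

def u_n(r, M=20000):
    # u_n(r) = integral of g_n(r, rho) * f_n(rho) * rho with f_n(rho) = rho**n
    h = a / M
    return sum(g_n(r, k * h) * (k * h) ** (n + 1) for k in range(1, M)) * h

exact = lambda r: r ** n * (a * a - r * r) / (4 * (n + 1))
gap = max(abs(u_n(r) - exact(r)) for r in (0.4, 1.0, 1.3))
print(gap)   # ~ 0: the kernel reproduces the solution of (5.50), (5.51)
```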
where, in compliance with the Euler-Fourier formulas,

$$f_n^c(\rho) = \frac{1}{\pi}\int_0^{2\pi} f(\rho,\psi)\cos n\psi\,d\psi, \qquad n = 0,1,2,\dots, \tag{5.61}$$

and

$$f_n^s(\rho) = \frac{1}{\pi}\int_0^{2\pi} f(\rho,\psi)\sin n\psi\,d\psi, \qquad n = 1,2,3,\dots. \tag{5.62}$$

Upon substituting the expressions for f_n^c(ρ) and f_n^s(ρ) from (5.61) and (5.62) into (5.55), (5.59), and (5.60), and then substituting the coefficients u₀(r), u_n^c(r), and u_n^s(r) into (5.48), we obtain the solution of the boundary-value problem posed by (5.45) and (5.46) in the form

$$u(r,\varphi) = \int_0^{2\pi}\!\!\int_0^a \frac{1}{\pi}\Biggl[\frac{1}{2}g_0(r,\rho) + \sum_{n=1}^{\infty}g_n(r,\rho)\bigl(\cos n\varphi\cos n\psi + \sin n\varphi\sin n\psi\bigr)\Biggr]f(\rho,\psi)\,\rho\,d\rho\,d\psi,$$

which can be written in a more compact form once the factor of g_n(r, ρ) in the above series is collapsed into a single trigonometric function. That is,

$$u(r,\varphi) = \int_0^{2\pi}\!\!\int_0^a \frac{1}{\pi}\Biggl[\frac{1}{2}g_0(r,\rho) + \sum_{n=1}^{\infty}g_n(r,\rho)\cos n(\varphi-\psi)\Biggr]f(\rho,\psi)\,\rho\,d\rho\,d\psi. \tag{5.63}$$

Since the expression ρ dρ dψ represents the area element in polar coordinates, we observe that the solution of the boundary-value problem in (5.45) and (5.46) is obtained in the integral form of (5.47). Indeed, the integration in (5.63) is taken over the entire disk Ω. This allows us to conclude that the kernel of the above integral,

$$G(r,\varphi;\rho,\psi) = \frac{1}{2\pi}\Biggl(g_0(r,\rho) + 2\sum_{n=1}^{\infty}g_n(r,\rho)\cos n(\varphi-\psi)\Biggr), \tag{5.64}$$

represents the Green's function of the Dirichlet problem for the Laplace equation on the disk of radius a.

To proceed with the summation of the series term in G(r, ϕ; ρ, ψ), either branch of g₀(r, ρ) and g_n(r, ρ) can be used. Taking, for instance, the branch valid for r ≤ ρ and substituting into (5.64), we have

$$G(r,\varphi;\rho,\psi) = \frac{1}{2\pi}\Biggl(-\ln\frac{\rho}{a} + \sum_{n=1}^{\infty}\frac{1}{n}\Bigl[\Bigl(\frac{r}{\rho}\Bigr)^n - \Bigl(\frac{r\rho}{a^2}\Bigr)^n\Bigr]\cos n(\varphi-\psi)\Biggr).$$

Recalling the summation formula from (5.16), we arrive at
$$G(r,\varphi;\rho,\psi) = \frac{1}{2\pi}\Biggl(-\ln\frac{\rho}{a} - \frac{1}{2}\ln\Bigl[1 - 2\frac{r}{\rho}\cos(\varphi-\psi) + \frac{r^2}{\rho^2}\Bigr] + \frac{1}{2}\ln\Bigl[1 - 2\frac{r\rho}{a^2}\cos(\varphi-\psi) + \frac{r^2\rho^2}{a^4}\Bigr]\Biggr),$$

which, after some trivial algebra, reads as the familiar form

$$G(r,\varphi;\rho,\psi) = \frac{1}{4\pi}\ln\frac{a^4 - 2ra^2\rho\cos(\varphi-\psi) + r^2\rho^2}{a^2\bigl(r^2 - 2r\rho\cos(\varphi-\psi) + \rho^2\bigr)}$$
of the Green's function of the Dirichlet problem on the disk of radius a.

In the following example, a derivation that yields a computer-friendly representation of a Green's function is introduced as an alternative to the classical form, whose computer implementation is not that straightforward.

Example 5.6 Let us turn to the mixed boundary-value problem

$$\frac{\partial u(a,\varphi)}{\partial r} + \beta u(a,\varphi) = 0, \qquad \beta > 0, \tag{5.65}$$

stated on the disk Ω = {0 < r < a, 0 ≤ ϕ < 2π}. Recall that due to the singularity of the Laplace operator at the point r = 0, the boundedness condition as r approaches zero also applies, to make the problem well posed.

Tracing out the procedure of the method of eigenfunction expansion for the setting in (5.46) and (5.65), we expand its solution u(r, ϕ) and the right-hand-side function f(r, ϕ) of the equation in (5.46) in the Fourier series of (5.48) and (5.49), respectively. This yields, for our setting, the boundary-value problem

$$\frac{d}{dr}\Bigl(r\frac{du_n(r)}{dr}\Bigr) - \frac{n^2}{r}u_n(r) = -rf_n(r), \qquad n = 0,1,2,\dots, \tag{5.66}$$

$$\lim_{r\to 0}|u_n(r)| < \infty \quad\text{and}\quad \frac{du_n(a)}{dr} + \beta u_n(a) = 0, \tag{5.67}$$
in the coefficients u_n(r) of the expansion in (5.48). As in the treatment of the problem in Example 5.5, the cases n = 0 and n ≥ 1 are considered separately. For n = 0, the Green's function of the homogeneous boundary-value problem corresponding to that in (5.66) and (5.67) is found as

$$g_0(r,\rho) = \frac{1}{a\beta} - \ln\frac{\rho}{a}, \qquad r \le \rho,$$

while the case n ≥ 1 yields

$$g_n(r,\rho) = \frac{1}{2n}\Bigl[\Bigl(\frac{r}{\rho}\Bigr)^n + \Bigl(\frac{r\rho}{a^2}\Bigr)^n\Bigr] - \frac{a\beta}{n(n+a\beta)}\Bigl(\frac{r\rho}{a^2}\Bigr)^n, \qquad r \le \rho.$$
This leads to the Green’s function of the homogeneous setting corresponding to that in (5.46) and (5.65) in the form 1 1 G(r, ϕ; , ψ) = − ln 2π aβ a n n n ∞ r 2aβ r r 1 cos n(ϕ − ψ) , + − + n (n + aβ) a 2 a2 n=1
where the series is partially summable. By applying the standard summation formula from (5.16), we convert the above representation to
1 1 G(r, ϕ; , ψ) = − ln − L1 (r, ϕ; , ψ) − L2 (r, ϕ; , ψ) 2π aβ a n ∞ r 2aβ − cos n(ϕ − ψ) , n(n + aβ) a 2 n=1
where
$$L_1(r,\varphi;\varrho,\psi)=\frac{1}{2}\ln\left(1-2\,\frac{r}{\varrho}\cos(\varphi-\psi)+\frac{r^2}{\varrho^2}\right)$$
and
$$L_2(r,\varphi;\varrho,\psi)=\frac{1}{2}\ln\left(1-2\,\frac{r\varrho}{a^2}\cos(\varphi-\psi)+\frac{r^2\varrho^2}{a^4}\right).$$
Following trivial transformations, this cumbersome form reduces to a more compact one as
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[\frac{1}{a\beta}+\ln\frac{a^3}{|z-\zeta|\,|z\bar{\zeta}-a^2|}-\sum_{n=1}^{\infty}\frac{2a\beta}{n(n+a\beta)}\left(\frac{r\varrho}{a^2}\right)^{n}\cos n(\varphi-\psi)\Biggr]. \qquad (5.68)$$
Clearly, the series in (5.68) converges at the rate 1/n², making the entire representation convenient for computer implementation. It is worth noting that the boundary condition in (5.65) reduces to the Dirichlet type if the parameter β is taken to infinity. In compliance with this note, the limit of the expression in (5.68) as β approaches infinity should represent the Green's function for the Dirichlet problem on the disk of radius a. Indeed, taking the limit in (5.68), one arrives at
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[\ln\frac{a^3}{|z-\zeta|\,|z\bar{\zeta}-a^2|}-\sum_{n=1}^{\infty}\frac{2}{n}\left(\frac{r\varrho}{a^2}\right)^{n}\cos n(\varphi-\psi)\Biggr], \qquad (5.69)$$
where the series sums as
$$\sum_{n=1}^{\infty}\frac{2}{n}\left(\frac{r\varrho}{a^2}\right)^{n}\cos n(\varphi-\psi)=-2\ln\frac{|z\bar{\zeta}-a^2|}{a^2},$$
transforming (5.69) to the familiar form
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\ln\frac{|z\bar{\zeta}-a^2|}{a\,|z-\zeta|}$$
of the Green's function for the Dirichlet problem on the disk of radius a, which was just derived in Example 5.5.

So far in this section, we have dealt with more or less trivial problems, those whose Green's functions can be found in existing texts on partial differential equations. In the examples that follow, we turn to a series of boundary-value problems for the Laplace equation whose Green's functions are not so readily available.

Example 5.7 Consider the Dirichlet problem stated for the Laplace equation on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}, and look for a computer-friendly form of its Green's function. Acting in compliance with our strategy, we subject the Poisson equation (5.46) of Example 5.5 to the boundary conditions
$$u(a,\varphi)=0,\qquad u(b,\varphi)=0. \qquad (5.70)$$
It appears that the procedure of the method of eigenfunction expansion is efficient in this case. Tracing it out, we expand the functions u(r, ϕ) and f(r, ϕ) in (5.46) in the Fourier series shown earlier in (5.48) and (5.49). This yields the boundary-value problem
$$u_n(a)=0 \quad\text{and}\quad u_n(b)=0 \qquad (5.71)$$
stated for the equation in (5.50) of Example 5.5. Recall again that the cases n = 0 and n ≥ 1 in (5.50) must be treated separately. For n = 0, the boundary-value problem in (5.50) and (5.71) transforms, for this setting, into
$$\frac{d}{dr}\left(r\,\frac{du_0(r)}{dr}\right)=-rf_0(r), \qquad (5.72)$$
$$u_0(a)=0 \quad\text{and}\quad u_0(b)=0, \qquad (5.73)$$
and a solution for the above problem is found in the integral form
$$u_0(r)=\int_a^b g_0(r,\varrho)\,f_0(\varrho)\,\varrho\,d\varrho, \qquad (5.74)$$
where the kernel
$$g_0(r,\varrho)=\frac{1}{\ln(b/a)}\begin{cases}\ln(r/a)\,\ln(b/\varrho), & r\le\varrho,\\[4pt] \ln(\varrho/a)\,\ln(b/r), & r\ge\varrho,\end{cases} \qquad (5.75)$$
represents the Green's function of the homogeneous problem corresponding to that posed by (5.72) and (5.73). Following our procedure for the solution of the problem in (5.50) and (5.71), in the case of n ≥ 1, we arrive at
$$u_n(r)=\int_a^b g_n(r,\varrho)\,f_n(\varrho)\,\varrho\,d\varrho, \qquad (5.76)$$
with the kernel function found as
$$g_n(r,\varrho)=\frac{(r\varrho)^{-n}}{2n\,(b^{2n}-a^{2n})}\begin{cases}(b^{2n}-\varrho^{2n})(r^{2n}-a^{2n}), & r\le\varrho,\\[4pt] (b^{2n}-r^{2n})(\varrho^{2n}-a^{2n}), & r\ge\varrho,\end{cases} \qquad (5.77)$$
representing the Green's function of the homogeneous problem corresponding to that posed by (5.50) and (5.71). Upon substituting the expressions from (5.74) and (5.76) into the expansion of (5.48), the solution to the boundary-value problem stated by (5.46) and (5.70) reduces to the volume integral
$$u(r,\varphi)=\frac{1}{\pi}\int_0^{2\pi}\!\!\int_a^b\Biggl[\frac{g_0(r,\varrho)}{2}+\sum_{n=1}^{\infty}g_n(r,\varrho)\cos n\varphi\cos n\psi+\sum_{n=1}^{\infty}g_n(r,\varrho)\sin n\varphi\sin n\psi\Biggr]f(\varrho,\psi)\,\varrho\,d\varrho\,d\psi,$$
which can be rewritten as
$$u(r,\varphi)=\frac{1}{\pi}\int_0^{2\pi}\!\!\int_a^b\Biggl[\frac{g_0(r,\varrho)}{2}+\sum_{n=1}^{\infty}g_n(r,\varrho)\cos n(\varphi-\psi)\Biggr]f(\varrho,\psi)\,\varrho\,d\varrho\,d\psi. \qquad (5.78)$$
As soon as the solution to the problem in (5.46) and (5.70) appears in the integral form of (5.3), we conclude that the kernel function
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[g_0(r,\varrho)+2\sum_{n=1}^{\infty}g_n(r,\varrho)\cos n(\varphi-\psi)\Biggr] \qquad (5.79)$$
in (5.78) represents the Green's function of the Dirichlet problem for the Laplace equation stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}.

Close analysis shows that the representation in (5.79) does not guarantee a high level of accuracy in computing the Green's function. Causing this is the appearance
Fig. 5.6 Profile of the representation of (5.79), with N = 10
Fig. 5.7 Profile of the representation of (5.79), with N = 100
of the coefficient g_n(r, ρ), which gives rise to two different types of singularity in the series in (5.79). The first of the singularities is of the principal logarithmic type, which shows up whenever the field point (r, ϕ) approaches the source point (ρ, ψ), whereas the second could be called the near-boundary type. It shows up whenever both the field point and the source point approach either the inner (r = a) or the outer (r = b) fragment of the boundary of Ω. The accuracy level attainable in the direct valuation of the expansion in (5.79) can be observed in Fig. 5.6, where the profile G(r, ϕ; 2.0, 4π/9) of the Green's function for the annular region with a = 1.0 and b = 3.0 is depicted. The series in (5.79) was truncated to its tenth partial sum, which is clearly insufficient for a reasonable approximation. To find out how the order of a partial sum affects the accuracy level attainable by the expansion in (5.79), we present Fig. 5.7. As in Fig. 5.6, the profile G(r, ϕ; 2.0, 4π/9) of the Green's function is depicted, with the 100th partial sum of the series in (5.79) accounted for. Clearly, such a radical increase in the order of the partial sum notably improves the accuracy level overall, but it still remains low very close to the angular coordinate ψ of the source point.
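The slow convergence just described is easy to reproduce numerically. The following sketch (plain Python; the values a = 1.0, b = 3.0, ρ = 2.0, ψ = 4π/9 mirror the profiles of Figs. 5.6 and 5.7, while the particular field point is an arbitrary choice for illustration) evaluates partial sums of (5.79) near the angular coordinate of the source point:

```python
from math import log, cos, pi

a, b = 1.0, 3.0              # annulus radii, as in Figs. 5.6 and 5.7
rho, psi = 2.0, 4 * pi / 9   # source point (rho, psi)

def g0(r):
    # n = 0 coefficient (5.75), branch r <= rho
    return log(r / a) * log(b / rho) / log(b / a)

def gn(r, n):
    # n >= 1 coefficient (5.77), branch r <= rho, written through the ratios
    # (r/rho)^n, (a^2/(r rho))^n, (rho/b)^(2n), (a/b)^(2n) to avoid overflow
    return ((r / rho) ** n - (a * a / (r * rho)) ** n) \
        * (1 - (rho / b) ** (2 * n)) / (2 * n * (1 - (a / b) ** (2 * n)))

def G_naive(r, phi, N):
    # truncated eigenfunction-expansion representation (5.79)
    s = g0(r) + 2 * sum(gn(r, n) * cos(n * (phi - psi)) for n in range(1, N + 1))
    return s / (2 * pi)

r, phi = 1.9, psi + 0.05     # field point close to the source angle
ref = G_naive(r, phi, 4000)  # well-converged reference value
for N in (10, 100, 1000):
    print(N, abs(G_naive(r, phi, N) - ref))
```

The printed truncation errors decrease only gradually with N, in line with the behavior observed in the two figures.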
As follows from the analysis of the data in Figs. 5.6 and 5.7, the costly involvement of higher partial sums in computing the nonuniformly convergent series in (5.79) can hardly be considered productive. However, an effective way of improving the convergence of the series prior to its computer implementation might be found. This can be done by some analytical work on the coefficient g_n(r, ρ). Taking its branch, which is valid for r ≤ ρ, we have
$$g_n(r,\varrho)=\frac{(b^{2n}-\varrho^{2n})(r^{2n}-a^{2n})}{2n\,(r\varrho)^n}\left[\frac{1}{b^{2n}-a^{2n}}-\frac{1}{b^{2n}}+\frac{1}{b^{2n}}\right]$$
$$=\frac{1}{2n\,(r\varrho)^n}\left[\frac{a^{2n}}{b^{2n}(b^{2n}-a^{2n})}\,(b^{2n}-\varrho^{2n})(r^{2n}-a^{2n})+\frac{1}{b^{2n}}\,(b^{2n}-\varrho^{2n})(r^{2n}-a^{2n})\right].$$
With this, the series in (5.79) breaks into two pieces, the first of which, the one associated with the term
$$\frac{a^{2n}}{b^{2n}(b^{2n}-a^{2n})},$$
is uniformly convergent. The other series, the one associated with the term 1/b^{2n}, is nonuniformly convergent. But it allows a complete summation using the standard formula from (5.16). This yields a computer-friendly form for the Green's function as
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[g_0(r,\varrho)+2\sum_{n=1}^{\infty}g_n^{*}(r,\varrho)\cos n(\varphi-\psi)\Biggr]$$
$$+\frac{1}{4\pi}\ln\frac{a^4-2a^2r\varrho\cos(\varphi-\psi)+r^2\varrho^2}{r^2-2r\varrho\cos(\varphi-\psi)+\varrho^2}+\frac{1}{4\pi}\ln\frac{b^4-2b^2r\varrho\cos(\varphi-\psi)+r^2\varrho^2}{b^4r^2-2a^2b^2r\varrho\cos(\varphi-\psi)+a^4\varrho^2}, \qquad (5.80)$$
where the coefficient g_n*(r, ρ) is found as
$$g_n^{*}(r,\varrho)=\frac{a^{2n}(b^{2n}-\varrho^{2n})(r^{2n}-a^{2n})}{2n\,(b^2r\varrho)^n\,(b^{2n}-a^{2n})}. \qquad (5.81)$$
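A direct comparison with the raw expansion offers a quick numerical sanity check of (5.80) and (5.81). In the sketch below (the annulus radii and the field/source data are arbitrary sample values), a short partial sum of the improved representation is matched against a long partial sum of the original series in (5.79); the coefficients are rewritten through ratios such as (r/ρ)ⁿ to avoid overflow for large n:

```python
from math import log, cos, pi

a, b = 1.0, 3.0          # sample annulus radii
r, rho = 1.5, 2.0        # field and source radii, r <= rho branch
th = 0.7                 # angular difference phi - psi

def gn_raw(n):
    # coefficient (5.77), branch r <= rho, in overflow-safe ratio form
    return ((r / rho) ** n - (a * a / (r * rho)) ** n) \
        * (1 - (rho / b) ** (2 * n)) / (2 * n * (1 - (a / b) ** (2 * n)))

def gn_star(n):
    # coefficient (5.81); note that g_n* is just (a^2/b^2)^n times g_n
    return (a * a / (b * b)) ** n * gn_raw(n)

g0 = log(r / a) * log(b / rho) / log(b / a)   # n = 0 coefficient (5.75)

# raw representation (5.79): slowly convergent, needs many terms
G_raw = (g0 + 2 * sum(gn_raw(n) * cos(n * th) for n in range(1, 2001))) / (2 * pi)

# computer-friendly representation (5.80): two logarithms plus a fast series
log1 = log((a**4 - 2 * a * a * r * rho * cos(th) + (r * rho) ** 2)
           / (r * r - 2 * r * rho * cos(th) + rho * rho)) / (4 * pi)
log2 = log((b**4 - 2 * b * b * r * rho * cos(th) + (r * rho) ** 2)
           / (b**4 * r * r - 2 * a * a * b * b * r * rho * cos(th)
              + a**4 * rho * rho)) / (4 * pi)
G_cf = (g0 + 2 * sum(gn_star(n) * cos(n * th) for n in range(1, 61))) / (2 * pi) \
    + log1 + log2

print(G_raw, G_cf)   # the two representations agree
```

Since g_n* carries the extra factor (a²/b²)ⁿ, sixty terms of the improved series already match two thousand terms of the raw one.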
Recall that the expression for G(r, ϕ; ρ, ψ) in (5.80) is valid for r ≤ ρ. The following steps should be taken to convert the expression in (5.80) to a form valid for r ≥ ρ: (i) choose the corresponding branch of g₀(r, ρ); (ii) interchange the variables r and ρ in (5.81); and (iii) replace the denominator of the second logarithmic term in (5.80) with
$$b^4\varrho^2-2a^2b^2r\varrho\cos(\varphi-\psi)+a^4r^2.$$

Fig. 5.8 Profile of the representation of (5.82), with N = 10

A shorthand complex-variable-based notation can be introduced for the arguments of the logarithmic terms in (5.80), so that the expression for the Green's function of the Dirichlet problem on the annular region of radii a and b finally reads
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[g_0(r,\varrho)+2\sum_{n=1}^{\infty}g_n^{*}(r,\varrho)\cos n(\varphi-\psi)\Biggr]+\frac{1}{2\pi}\ln\frac{|a^2-z\bar{\zeta}|\,|b^2-z\bar{\zeta}|}{|z-\zeta|\,|b^2z-a^2\zeta|}, \qquad (5.82)$$
where the factor |b²z − a²ζ| in the denominator holds for r ≤ ρ, while for r ≥ ρ it must be replaced with |b²ζ − a²z|.

It can easily be shown that the representation in (5.82) is notably more efficient than that of (5.79). Two features support this assertion. First, the principal singularity term is expressed analytically. Second, the series in (5.82) converges uniformly, allowing a fairly accurate valuation at a relatively low cost. The smooth graph in Fig. 5.8 convincingly supports the efficient computability of the above representation of the Green's function. As in Figs. 5.6 and 5.7, the profile G(r, ϕ; 2.0, 4π/9) of the Green's function is depicted in Fig. 5.8 for the annulus of radii a = 1.0 and b = 3.0, with the series in (5.82) truncated to its tenth partial sum.

Example 5.8 We turn now to a mixed boundary-value problem. As such, the “Dirichlet–Neumann” setting
$$u(a,\varphi)=0,\qquad \frac{\partial u(b,\varphi)}{\partial r}=0, \qquad (5.83)$$
for the Laplace equation is considered on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Tracing out the procedure of the method of eigenfunction expansion for the setting in (5.46) and (5.83), we arrive at the series representation in (5.79) for the Green's function of the corresponding homogeneous boundary-value problem. Expressions for the coefficients g₀(r, ρ) and g_n(r, ρ) of the series in (5.79) that are valid for r ≤ ρ are found, in this case, as
$$g_0(r,\varrho)=\ln\frac{r}{a} \quad\text{and}\quad g_n(r,\varrho)=\frac{(b^{2n}+\varrho^{2n})(r^{2n}-a^{2n})}{2n\,(r\varrho)^n\,(b^{2n}+a^{2n})},$$
and their forms valid for r ≥ ρ can be obtained from those above by interchanging the variables r and ρ. Upon substituting g₀(r, ρ) and g_n(r, ρ) just shown into (5.79) and proceeding through some algebra, we obtain the computer-friendly form
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[\ln\frac{|b^2z-a^2\zeta|\,|a^2-z\bar{\zeta}|}{a\,|z|\,|z-\zeta|\,|b^2-z\bar{\zeta}|}+\sum_{n=1}^{\infty}g_n^{*}(r,\varrho)\cos n(\varphi-\psi)\Biggr] \qquad (5.84)$$
of the Green's function of the “Dirichlet–Neumann” problem for the annulus of radii a and b. The r ≤ ρ branch of g_n*(r, ρ) is found, in this case, as
$$g_n^{*}(r,\varrho)=\frac{a^{2n}(b^{2n}+\varrho^{2n})(a^{2n}-r^{2n})}{n\,(b^2r\varrho)^n\,(b^{2n}+a^{2n})},$$
while for its r ≥ ρ branch, the variables r and ρ in g_n*(r, ρ) must be interchanged. In addition, the factors |z| and |b²z − a²ζ| in the argument of the logarithmic term in (5.84) hold for r ≤ ρ, while for r ≥ ρ they must be replaced with |ζ| and |b²ζ − a²z|, respectively. The series in (5.84) converges uniformly, allowing an accurate immediate valuation of the Green's function by truncating the series to the Nth partial sum. To verify this claim, the reader is encouraged to take a close look at the coefficient g_n*(r, ρ) of the series in (5.84). It is also recommended that some profiles of this representation be depicted for different values of the truncation parameter N.

Example 5.9 Consider another mixed boundary-value problem for the Laplace equation, that is, the “Neumann–Dirichlet” problem
$$\frac{\partial u(a,\varphi)}{\partial r}=0,\qquad u(b,\varphi)=0, \qquad (5.85)$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. The Green's function of the homogeneous boundary-value problem corresponding to that in (5.46) and (5.85), constructed by the eigenfunction expansion method, appears, this time around, again as the series representation in (5.79). Expressions for the coefficients g₀(r, ρ) and g_n(r, ρ) of the series that are valid for r ≤ ρ are found in this case as
$$g_0(r,\varrho)=\ln\frac{b}{\varrho} \quad\text{and}\quad g_n(r,\varrho)=\frac{(b^{2n}-\varrho^{2n})(r^{2n}+a^{2n})}{2n\,(r\varrho)^n\,(b^{2n}+a^{2n})},$$
while for r ≥ ρ, the variables r and ρ in the above must be interchanged.
As in the derivation in the previous example, we substitute the above expressions for the components g₀(r, ρ) and g_n(r, ρ) into (5.79). Performing then some trivial algebra, the branch r ≤ ρ of the Green's function that we are looking for is obtained in the computer-friendly form
$$G(r,\varphi;\varrho,\psi)=\frac{1}{2\pi}\Biggl[\ln\frac{|\zeta|\,|b^2z-a^2\zeta|\,|b^2-z\bar{\zeta}|}{b^3\,|z-\zeta|\,|a^2-z\bar{\zeta}|}+\sum_{n=1}^{\infty}g_n^{*}(r,\varrho)\cos n(\varphi-\psi)\Biggr], \qquad (5.86)$$
where
$$g_n^{*}(r,\varrho)=\frac{a^{2n}(\varrho^{2n}-b^{2n})(a^{2n}+r^{2n})}{n\,(b^2r\varrho)^n\,(b^{2n}+a^{2n})}. \qquad (5.87)$$
Note that to obtain the branch of G(r, ϕ; ρ, ψ) valid for r ≥ ρ, the variables r and ρ in (5.87) should be interchanged, while the factors |ζ| and |b²z − a²ζ| in the argument of the logarithmic term in (5.86) must be replaced with |z| and |b²ζ − a²z|, respectively. Uniform convergence of the series in (5.86) can again be justified upon analysis of its coefficient g_n*(r, ρ). To illustrate this assertion, the reader is advised to depict some profiles of the Green's function by playing around with different values of the truncation parameter N.
5.4 Chapter Exercises

5.1 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the boundary-value problem
$$u(x,0)=\frac{\partial u(x,b)}{\partial y}=0,$$
stated on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}.

5.2 Construct the Green's function of the Laplace equation for the boundary-value problem
$$u(0,y)=u(x,0)=\frac{\partial u(x,b)}{\partial y}=0,$$
stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}.

5.3 Construct the Green's function of the Laplace equation for the boundary-value problem
$$\frac{\partial u(0,y)}{\partial x}=u(x,0)=\frac{\partial u(x,b)}{\partial y}=0,$$
stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}.

5.4 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the mixed boundary-value problem
$$u(0,y)=u(x,0)=u(x,b)=0,\qquad \frac{\partial u(a,y)}{\partial x}+\beta u(a,y)=0,$$
where β ≥ 0, stated on the rectangle Ω = {0 < x < a, 0 < y < b}.

5.5 Construct the Green's function of the Laplace equation for the “Dirichlet–mixed” problem
$$u(a,\varphi)=0,\qquad \frac{\partial u(b,\varphi)}{\partial r}+\beta u(b,\varphi)=0,\quad \beta\ge 0,$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Notice that in the case β = 0, your representation of the Green's function reduces to that of Example 5.8, whereas in the case of β approaching infinity, it reduces to the form derived in Example 5.7.

5.6 Construct the Green's function of the Laplace equation for the “mixed–Dirichlet” problem
$$u(b,\varphi)=0,\qquad \frac{\partial u(a,\varphi)}{\partial r}-\beta u(a,\varphi)=0,\quad \beta\ge 0,$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which the parameter β either is equal to zero or approaches infinity.

5.7 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “Neumann–mixed” problem
$$\frac{\partial u(a,\varphi)}{\partial r}=0,\qquad \frac{\partial u(b,\varphi)}{\partial r}+\beta u(b,\varphi)=0,\quad \beta>0,$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Explain why, in the case β = 0, the problem is ill posed, implying that its Green's function does not exist. Consider also the case of β approaching infinity.

5.8 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “mixed–Neumann” boundary-value problem
$$\frac{\partial u(a,\varphi)}{\partial r}-\beta u(a,\varphi)=0,\qquad \frac{\partial u(b,\varphi)}{\partial r}=0,$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which the parameter β either is equal to zero or approaches infinity.

5.9 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “mixed–mixed” boundary-value problem
$$\frac{\partial u(a,\varphi)}{\partial r}-\beta_1 u(a,\varphi)=0,\qquad \frac{\partial u(b,\varphi)}{\partial r}+\beta_2 u(b,\varphi)=0,\quad \beta_1,\beta_2\ge 0,$$
stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Explain why, in the case that both the parameters β₁ and β₂ are equal to zero, the problem is ill posed, implying that its Green's function does not exist. Observe also that some other Green's functions for the annular region obtained earlier in this chapter follow from the present one.
Chapter 6
Representation of Elementary Functions
While the first five chapters in this book have touched upon more or less standard topics, the material of the present chapter goes in another direction. The reader will probably find it surprising. Indeed, the notions of infinite product and Green’s function, discussed in detail earlier in this volume, have customarily been included in texts on mathematical analysis and differential equations, respectively. The present chapter, in contrast, discusses an unusual idea that has never been explored in texts before. That is, a technique, reported for the first time in [27, 28], is employed here for obtaining infinite product representations for a number of elementary functions. The technique is based on the comparison of alternative expressions of Green’s functions for the two-dimensional Laplace equation that are constructed by different methods. Some standard boundary-value problems posed on regions of a regular configuration are considered. Classical closed analytical forms of Green’s functions for such problems are compared with those obtained by the method of images in the infinite product form. This comparison appears extremely fruitful. It provides a number of infinite product representations for some trigonometric and hyperbolic functions. As outlined in Chap. 3, the method of images is useful for obtaining closed analytical expressions of Green’s functions for a certain class of boundary-value problems posed for the Laplace equation. The sphere of successful implementation of this method is limited, however, to a narrow class of problems. We begin our presentation in Sect. 6.1 by considering problems for which the method of images does not represent the best choice for the construction of the Green’s function, because some other classical methods allow one to obtain the Green’s function in a more compact computer-friendly form. But it is worth noting that Green’s functions themselves are not considered as the ultimate goal. 
The form in which they are expressed is what is at issue. To broaden the limited frontiers of successful application of the method of images, one arrives at expressions of Green's functions in terms of infinite products. Those expressions are no match for the compact ones available in the literature and obtained by other classical methods (see Chaps. 3 and 5 for examples). But what makes such expressions of Green's functions really valuable is that they are used here for the derivation of some identities involving infinite products.
The reader will be surprised by the number of infinite product representations of trigonometric and hyperbolic functions derived in Sects. 6.2 and 6.3. They were obtained with the aid of the infinite-product-containing identities that we managed to obtain in Sect. 6.1. Some of these representations are just alternatives for those classical ones already available in the literature [9], while others were unavailable prior to the first report on them in [27, 28].
6.1 Method of Images Extends Frontiers

The method of images, which is traditionally used for the construction of Green's functions for the Laplace equation, is well described in Chap. 3. The idea behind the method is to find the location and intensity of point sources and sinks outside the region in such a way that the homogeneous boundary conditions imposed on the region's boundary are satisfied for any location of a unit source inside the region. The method of images represents one of the standard approaches in the field. From Chap. 3, the reader may conclude that the complete list of problems allowing a successful implementation of the method of images is quite short. This is indeed true for the list of such problems for which the method results in a closed analytical form of Green's functions. It includes only the Dirichlet problem for a half-plane; the Dirichlet, Neumann, and Dirichlet–Neumann problems for a quarter-plane; the Dirichlet problem for a disk; and the Dirichlet problem for some infinite wedge-shaped regions. In [27] and [28] a nontrivial accomplishment was reported for the first time on the application of the method of images to the derivation of infinite product representations of elementary functions. The method was used for the derivation of Green's functions for the infinite and semi-infinite strip, with Dirichlet and Neumann boundary conditions imposed. Comparison of such infinite-product-containing representations of Green's functions with their classical analytical forms brings an unexpected discovery. To lay out a working background for our approach to the derivation of infinite product representations for a number of elementary functions, we will revisit some classical expressions of Green's functions for the Laplace equation that have conventionally been obtained by a variety of methods. Alternative representations of those Green's functions are later constructed here by the method of images in the infinite product form.
Comparison of the two representations of the same Green's function entails a number of “summation” formulas for infinite functional products.

Example 6.1 We begin our presentation by considering the Dirichlet problem for the Laplace equation stated on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}. The closed analytical form
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\sqrt{\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}},\quad \omega=\frac{\pi}{b}, \qquad (6.1)$$
Fig. 6.1 Derivation of an alternative representation for (6.1)
of the Green's function for this problem is available in standard texts [15, 18] on partial differential equations. As follows from Chaps. 3 and 5, it can be derived by either the method of conformal mapping or the method of eigenfunction expansion. In [16], for example, it was obtained by a modified version of the method of eigenfunction expansion. That version was first proposed in [12]. It provides a computer-friendly form of the Green's function, which becomes possible due to either complete (as in the case under consideration) or partial summation of its series representation. In what follows, it will be explicitly shown how another expression (alternative to that in (6.1)) can be obtained by the method of images for the Green's function of the Dirichlet problem for the infinite strip. To follow the method, the reader is advised to take a close look at the scheme presented in Fig. 6.1. We place a unit source S₀⁺ at an arbitrary point A(ξ, η) inside Ω. The response to S₀⁺ at a point M(x, y) represents the fundamental solution
$$G_0^{+}(x,y;\xi,\eta)=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+(y-\eta)^2}$$
of the Laplace equation. Clearly, the function G₀⁺(x, y; ξ, η) conflicts with the Dirichlet conditions on the boundary fragments y = 0 and y = b (it does not vanish on these lines). To compensate the traces of G₀⁺(x, y; ξ, η) on y = 0 and y = b, we place two unit sinks S₁,₀⁻ and S₁,b⁻ at the points B(ξ, −η) and C(ξ, 2b − η), which represent the images of (ξ, η) about the lines y = 0 and y = b, respectively. The responses to these sinks at (x, y) evidently are
$$G_{1,0}^{-}(x,y;\xi,-\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+(y+\eta)^2}$$
and
$$G_{1,b}^{-}(x,y;\xi,2b-\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(2b-\eta)\bigr)^2}.$$
But the functions G₁,₀⁻(x, y; ξ, −η) and G₁,b⁻(x, y; ξ, 2b − η) leave nonzero traces on the boundary lines y = 0 and y = b. These traces can be compensated
with the unit sources S₂,₀⁺ and S₂,b⁺ located at D(ξ, −2b + η) and E(ξ, 2b + η). The responses to these at (x, y) are given as
$$G_{2,0}^{+}(x,y;\xi,-2b+\eta)=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(-2b+\eta)\bigr)^2}$$
and
$$G_{2,b}^{+}(x,y;\xi,2b+\eta)=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(2b+\eta)\bigr)^2}.$$
Traces of the functions G₂,₀⁺(x, y; ξ, −2b + η) and G₂,b⁺(x, y; ξ, 2b + η) on y = 0 and y = b can, in turn, be compensated with the unit sinks S₃,₀⁻ and S₃,b⁻ located at F(ξ, −2b − η) and H(ξ, 4b − η), respectively. Following the described procedure of properly placing compensatory unit sources that alternate with unit sinks, the Green's function G = G(x, y; ξ, η) that we are looking for is obtained in the infinite series form
$$G=G_0^{+}+\sum_{i=1}^{\infty}\bigl(G_{2i-1,0}^{-}+G_{2i-1,b}^{-}\bigr)+\sum_{i=1}^{\infty}\bigl(G_{2i,0}^{+}+G_{2i,b}^{+}\bigr).$$
Since the terms of this series represent logarithmic functions, its Nth partial sum
$$S_N(x,y;\xi,\eta)=G_0^{+}+\sum_{i=1}^{N}\bigl(G_{2i-1,0}^{-}+G_{2i-1,b}^{-}\bigr)+\sum_{i=1}^{N}\bigl(G_{2i,0}^{+}+G_{2i,b}^{+}\bigr)$$
can be written as a single logarithm of a product:
$$S_N(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-N}^{N}\sqrt{\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}}.$$
Taking the limit as N approaches infinity, we obtain the final form of the Green's function that we are looking for as
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-\infty}^{\infty}\sqrt{\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}}. \qquad (6.2)$$
Thus, (6.2) provides another representation of the Green's function for the Dirichlet problem for the Laplace equation stated on the infinite strip. The above can be considered an alternative to the classical form presented in (6.1). It is evident, however, that the representation in (6.2) cannot be recommended for practical use, since it is not that computer-friendly compared to the closed form in (6.1). But computability is not an issue in the discussion that follows.
The radicand in either (6.1) or (6.2) is a fraction whose numerator and denominator represent a distance between two points. Hence, the radicands are nonnegative
quantities, which allows us to obtain the identity
$$\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+\bigl(y+(\eta-2nb)\bigr)^2}{(x-\xi)^2+\bigl(y-(\eta-2nb)\bigr)^2}=\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}.$$
This relation can be interpreted as a “summation” formula for the infinite product. In order to reduce the above to a more compact form, we assume that b = π, and introducing the parameters β = x − ξ, 2t = y + η, and 2u = y − η, we obtain the multivariable identity
$$\prod_{n=-\infty}^{\infty}\frac{\beta^2+4(t-n\pi)^2}{\beta^2+4(u+n\pi)^2}=\frac{1-2e^{\beta}\cos 2t+e^{2\beta}}{1-2e^{\beta}\cos 2u+e^{2\beta}}. \qquad (6.3)$$
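Although (6.3) is put to analytical use below, it can also be spot-checked numerically. A minimal sketch follows (the sample values of β, t, and u are arbitrary choices from the ranges quoted in (6.4)); note that the n and −n factors of the product cancel each other's O(1/n) drift, so a symmetric truncation |n| ≤ N is essential:

```python
import numpy as np

beta, t, u = 0.3, 1.0, 0.5   # arbitrary sample values within the ranges of (6.4)
N = 100_000                  # symmetric truncation |n| <= N of the product

n = np.arange(-N, N + 1)
lhs = np.prod((beta**2 + 4 * (t - n * np.pi) ** 2)
              / (beta**2 + 4 * (u + n * np.pi) ** 2))

rhs = (1 - 2 * np.exp(beta) * np.cos(2 * t) + np.exp(2 * beta)) \
    / (1 - 2 * np.exp(beta) * np.cos(2 * u) + np.exp(2 * beta))

print(lhs, rhs)              # the two sides agree
```

The residual discrepancy shrinks roughly like 1/N, reflecting the slow (conditional) convergence of the doubly infinite product.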
To obtain ranges for the parameters β, t, and u in (6.3), we recall that both the observation point (x, y) and the source point (ξ, η) are interior to the infinite strip Ω. This makes the identity in (6.3) valid (at least, formally) for
$$-\infty<\beta<\infty,\qquad 0<t<\pi,\qquad 0\le u<\pi/2, \qquad (6.4)$$
given that the parameters β and u are not equal to zero at the same time. But it is important to note that if the product in (6.3) happens to be uniformly convergent for a wider range of the variables t and u, then the constraints on these variables in (6.4) can be revised accordingly. The identity in (6.3), along with other identities to be derived in this section, will play a significant role in the further development.

Example 6.2 Reviewing other classical Green's functions, we consider a mixed boundary-value problem for the Laplace equation on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}, with the Dirichlet condition imposed on y = 0, while the Neumann condition is imposed on y = b. Recall the Green's function for this problem, which is expressed in [16] as
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\sqrt{\frac{1+2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\cdot\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1+2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}},\quad \omega=\frac{\pi}{2b}. \qquad (6.5)$$
The scheme presented in Fig. 6.2 may help the reader to follow the procedure of the method of images, which is similar to that described earlier for the Dirichlet problem. We look for an alternative representation to (6.5) of the Green's function for the Dirichlet–Neumann problem stated on the infinite strip. It can again be obtained as an aggregate response to an infinite number of properly spaced unit sources and sinks. Their locations are chosen in compliance with the following pattern.
Fig. 6.2 Derivation of an alternative representation for (6.5)
To compensate the trace of the fundamental solution G₀⁺(x, y; ξ, η) on the boundary line y = 0, a unit sink S₁,₀⁻ is placed at the point B(ξ, −η), with the response at M(x, y) given by
$$G_{1,0}^{-}(x,y;\xi,-\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+(y+\eta)^2}.$$
As to the Neumann condition on y = b, it can be supported by placing a unit source S₁,b⁺ at the point C(ξ, 2b − η). This yields
$$G_{1,b}^{+}(x,y;\xi,2b-\eta)=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(2b-\eta)\bigr)^2}.$$
The trace of the function G₁,b⁺(x, y; ξ, 2b − η) on the boundary line y = 0 can, in turn, be compensated with a unit sink S₂,₀⁻ placed at D(ξ, −2b + η), with the response at (x, y)
$$G_{2,0}^{-}(x,y;\xi,-2b+\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(-2b+\eta)\bigr)^2},$$
while the Neumann condition on y = b can be supported with a unit sink S₂,b⁻ located at E(ξ, 2b + η), whose response at (x, y) reads as
$$G_{2,b}^{-}(x,y;\xi,2b+\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(2b+\eta)\bigr)^2}.$$
The trace of the function G₂,b⁻(x, y; ξ, 2b + η) on y = 0 can, in turn, be compensated with a unit source S₃,₀⁺ placed at F(ξ, −2b − η). The response of this source reads as
$$G_{3,0}^{+}(x,y;\xi,-2b-\eta)=-\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y+(2b+\eta)\bigr)^2},$$
while the Neumann condition on y = b can be supported with a unit sink S₃,b⁻ at H(ξ, 4b − η), whose response at (x, y) is given as
$$G_{3,b}^{-}(x,y;\xi,4b-\eta)=\frac{1}{2\pi}\ln\sqrt{(x-\xi)^2+\bigl(y-(4b-\eta)\bigr)^2}.$$
Continuing this process and proceeding in compliance with the scheme described in Example 6.1, the Green's function that we are looking for is ultimately obtained in the following infinite product form:
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-\infty}^{\infty}\sqrt{\frac{(x-\xi)^2+(y+\eta+4nb)^2}{(x-\xi)^2+(y-\eta+4nb)^2}\cdot\frac{(x-\xi)^2+\bigl(y-\eta+2(2n+1)b\bigr)^2}{(x-\xi)^2+\bigl(y+\eta+2(2n+1)b\bigr)^2}}, \qquad (6.6)$$
which can be viewed as an alternative to the closed analytical form exhibited earlier in (6.5). By comparison of the equivalent expressions in (6.6) and (6.5), one arrives at the multivariable identity
$$\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+(y+\eta+4nb)^2}{(x-\xi)^2+(y-\eta+4nb)^2}\cdot\frac{(x-\xi)^2+\bigl(y-\eta+2(2n+1)b\bigr)^2}{(x-\xi)^2+\bigl(y+\eta+2(2n+1)b\bigr)^2}$$
$$=\frac{1+2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\cdot\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1+2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}.$$
To obtain a more compact form for this relation, we assume b = π/2, which evidently implies that ω = 1, and introduce the parameters β = x − ξ, t = y + η, and u = y − η. This yields
$$\prod_{n=-\infty}^{\infty}\frac{[\beta^2+(t+2n\pi)^2][\beta^2+(u+(2n+1)\pi)^2]}{[\beta^2+(u+2n\pi)^2][\beta^2+(t+(2n+1)\pi)^2]}=\frac{(1-2e^{\beta}\cos t+e^{2\beta})(1+2e^{\beta}\cos u+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1+2e^{\beta}\cos t+e^{2\beta})}. \qquad (6.7)$$
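As with (6.3), the identity in (6.7) lends itself to a quick numerical spot-check. The sketch below (arbitrary sample values of β, t, and u) truncates the product symmetrically; here the four factors within each value of n already balance one another, so convergence is noticeably faster than for (6.3):

```python
import numpy as np

beta, t, u = 0.4, 1.2, 0.7   # arbitrary sample values
N = 20_000                   # symmetric truncation |n| <= N
n = np.arange(-N, N + 1)

lhs = np.prod((beta**2 + (t + 2 * n * np.pi) ** 2)
              * (beta**2 + (u + (2 * n + 1) * np.pi) ** 2)
              / ((beta**2 + (u + 2 * n * np.pi) ** 2)
                 * (beta**2 + (t + (2 * n + 1) * np.pi) ** 2)))

eb = np.exp(beta)
rhs = ((1 - 2 * eb * np.cos(t) + eb**2) * (1 + 2 * eb * np.cos(u) + eb**2)) \
    / ((1 - 2 * eb * np.cos(u) + eb**2) * (1 + 2 * eb * np.cos(t) + eb**2))

print(lhs, rhs)              # the two sides agree
```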
The above identity, along with that in (6.3) and some others to be obtained later in this section, is crucial for the major issue of the present chapter, which is the derivation of infinite product representations for some elementary functions. We turn now to other classical Green’s functions and apply our technique based on the method of images to some boundary-value problems formulated for the Laplace equation on a semi-infinite strip.
Fig. 6.3 Derivation of an alternative representation for (6.8)
Example 6.3 Consider first the Dirichlet problem on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}. The classical compact form
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\sqrt{\frac{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\cdot\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}},\quad \omega=\frac{\pi}{b}, \qquad (6.8)$$
of its Green's function can be found in most classical sources. In [16], for example, it was obtained by the modified version of the method of eigenfunction expansion. Another form of the Green's function for the problem under consideration will be obtained here with the aid of the method of images. To trace its procedure in a way similar to that described in detail in Examples 6.1 and 6.2, the reader is invited to follow, in this case, the derivation scheme depicted in Fig. 6.3. The potential field generated by a unit source acting at an arbitrary point A(ξ, η) in Ω can be compensated on the edges y = 0 and y = b with unit sources and sinks placed at the regular set of points B(ξ, −η), C(ξ, 2b − η), D(ξ, −2b + η), E(ξ, 2b + η), F(ξ, −2b − η), H(ξ, 4b − η), and so on. All these points are located outside of Ω. In other words, these sources and sinks allow us to satisfy the homogeneous Dirichlet boundary conditions imposed on the edges y = 0 and y = b of Ω. As to the boundary condition imposed on the edge x = 0, the influence of the sources and sinks acting at A, B, C, D, E, F, H, and so on can, in turn, be compensated on that boundary line with unit sources and sinks if we place them at another set of points K(−ξ, η), L(−ξ, −η), N(−ξ, 2b − η), P(−ξ, −2b + η), R(−ξ, 2b + η), S(−ξ, −2b − η), T(−ξ, 4b − η), and so on. It is evident that the latter sources and sinks do not conflict with the boundary conditions on y = 0 and y = b. Thus, upon combining the influence of all the compensatory sources and sinks shown in Fig.
6.3, one arrives at an alternative form to (6.8) of the Green's function of the Dirichlet problem for the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}. After some trivial algebra, it is ultimately obtained in the infinite-product-containing form

$$
G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-\infty}^{\infty}\left[\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}\times\frac{(x+\xi)^2+(y-\eta+2nb)^2}{(x+\xi)^2+(y+\eta-2nb)^2}\right].\tag{6.9}
$$
Similarly to the development in the problems considered in Examples 6.1 and 6.2, by equating the arguments of the logarithmic functions in the alternative expressions (6.9) and (6.8), one obtains in this case the following multivariable identity:

$$
\prod_{n=-\infty}^{\infty}\frac{[(x-\xi)^2+(y+\eta-2nb)^2][(x+\xi)^2+(y-\eta+2nb)^2]}{[(x-\xi)^2+(y-\eta+2nb)^2][(x+\xi)^2+(y+\eta-2nb)^2]}
$$
$$
=\frac{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}.
$$

To convert the above identity to a more compact form, we assume b = π and introduce the parameters α = x + ξ, β = x − ξ, t = y + η, and u = y − η. This reduces the identity to

$$
\prod_{n=-\infty}^{\infty}\frac{[\beta^2+(t-2n\pi)^2][\alpha^2+(u+2n\pi)^2]}{[\beta^2+(u+2n\pi)^2][\alpha^2+(t-2n\pi)^2]}=\frac{(1-2e^{\alpha}\cos u+e^{2\alpha})(1-2e^{\beta}\cos t+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1-2e^{\alpha}\cos t+e^{2\alpha})}.\tag{6.10}
$$
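The identity in (6.10) lends itself to a quick numerical spot-check. The sketch below is illustrative only; the function names, the truncation level N, and the sample values of α, β, t, and u are the editor's arbitrary choices, not part of the original text. It compares a symmetrically truncated left-hand product against the closed exponential form on the right:

```python
import math

def lhs_truncated(alpha, beta, t, u, N):
    """Symmetric truncation (n = -N, ..., N) of the product in (6.10)."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= ((beta**2 + (t - 2*n*math.pi)**2) * (alpha**2 + (u + 2*n*math.pi)**2)) / \
             ((beta**2 + (u + 2*n*math.pi)**2) * (alpha**2 + (t - 2*n*math.pi)**2))
    return p

def rhs(alpha, beta, t, u):
    """Closed exponential form on the right-hand side of (6.10)."""
    q = lambda s, v: 1.0 - 2.0*math.exp(s)*math.cos(v) + math.exp(2.0*s)
    return (q(alpha, u) * q(beta, t)) / (q(beta, u) * q(alpha, t))

a, b, t, u = 0.7, 0.3, 1.1, 0.4
print(lhs_truncated(a, b, t, u, 2000), rhs(a, b, t, u))
```

Because of cancellation between the n and −n factors, the symmetric truncation converges quite rapidly, so a moderate N already reproduces the right-hand side to many digits.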
Note that the above identity, along with those of (6.3) and (6.7), creates a background for our further work on the infinite product representation of elementary functions. Example 6.4 As another example for the semi-infinite strip = {0 < x < ∞, 0 < y < b}, we consider a mixed boundary-value problem. That is, let Dirichlet conditions be imposed on the boundary fragments y = 0 and y = b, while the Neumann condition is imposed on x = 0. The compact form 1 1 − 2eω(x+ξ ) cos ω(y + η) + e2ω(x+ξ ) ln G(x, y; ξ, η) = 2π 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ ) π 1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ ) × , ω = , (6.11) ω(x+ξ ) 2ω(x+ξ ) b 1 − 2e cos ω(y − η) + e
Fig. 6.4 Derivation of an alternative representation for (6.11)
of the Green’s function for this Dirichlet–Neumann problem is presented, for example, in [16]. An alternative form to (6.11) of the Green’s function can be derived with the aid of the scheme exhibited in Fig. 6.4. As the previous example suggests, the traces of the fundamental solution (the field generated by a unit source acting at an arbitrary point A(ξ, η) in ) on the edges y = 0 and y = b are compensated with unit sources and sinks placed at a set of points exterior to : B(ξ, −η), C(ξ, 2b − η), D(ξ, −2b + η), E(ξ, 2b + η), F (ξ, −2b − η), H (ξ, 4b − η), and so on. To satisfy the Neumann condition imposed on the edge x = 0, the influence of the sources and sinks acting at A, B, C, D, E, F , H , and so on can, similarly to the Dirichlet problem, be compensated with unit sources and sinks if we place them at the set of points K(−ξ, η), L(−ξ, −η), N (−ξ, 2b − η), P (−ξ, −2b + η), R(−ξ, 2b + η), S(−ξ, −2b − η), T (−ξ, 4b − η), and so on exterior to . The order of sources and sinks is, however, different from that suggested earlier for the Dirichlet problem. Proceeding further with the method of images, one arrives at the infinite-productcontaining representation ∞ (x − ξ )2 + (y + η − 2nb)2 1 ln G(x, y; ξ, η) = 2π n=−∞ (x − ξ )2 + (y − η + 2nb)2 (x + ξ )2 + (y + η − 2nb)2 × (6.12) (x + ξ )2 + (y − η + 2nb)2 for the Green’s function under consideration. Setting equal the arguments of the logarithmic functions in (6.11) and (6.12), we obtain the following multivariable identity: ∞ (x − ξ )2 + (y + η − 2nb)2 (x − ξ )2 + (y − η + 2nb)2 n=−∞
×
(x + ξ )2 + (y + η − 2nb)2 (x + ξ )2 + (y − η + 2nb)2
6.2 Trigonometric Functions
=
131
1 − 2eω(x+ξ ) cos ω(y + η) + e2ω(x+ξ ) 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ ) ×
1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ ) . 1 − 2eω(x+ξ ) cos ω(y − η) + e2ω(x+ξ )
Similarly to the case of the Dirichlet problem considered earlier in Example 6.3, we simplify the above identity by assuming b = π and introducing the parameters α = x + ξ, β = x − ξ, t = y + η, and u = y − η. This yields

$$
\prod_{n=-\infty}^{\infty}\frac{[\beta^2+(t-2n\pi)^2][\alpha^2+(t-2n\pi)^2]}{[\beta^2+(u+2n\pi)^2][\alpha^2+(u+2n\pi)^2]}=\frac{(1-2e^{\alpha}\cos t+e^{2\alpha})(1-2e^{\beta}\cos t+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1-2e^{\alpha}\cos u+e^{2\alpha})}.\tag{6.13}
$$
The infinite-product-containing identities that have been derived so far in this section (see (6.3), (6.7), (6.10), and (6.13)) will be referred to repeatedly in Sects. 6.2 and 6.3. They will play a key role in our study: we will use them to derive infinite product representations of some elementary (trigonometric and hyperbolic) functions, representations that have not been reported before.
6.2 Trigonometric Functions

At this point, the reader is prepared for a new turn in our presentation. We are going to address a subject area that bridges the topics of Green's function and infinite product. The infinite product representation of elementary functions will be explored to a certain extent. The infinite-product-containing identities derived in the previous section create a convenient background for our work.

We begin by recalling the identity presented in (6.3) of Sect. 6.1 and assume a zero value for the parameter β. This converts the identity into the compact form

$$
\prod_{n=-\infty}^{\infty}\frac{(t-n\pi)^2}{(u+n\pi)^2}=\frac{1-\cos 2t}{1-\cos 2u}=\sin^2 t\,\csc^2 u,\tag{6.14}
$$

where the parameter t can take on any real value. As to the parameter u, it cannot equal nπ, with n = 0, ±1, ±2, .... It is evident that the identity that we just arrived at in (6.14) holds if the identity

$$
\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{\sin t}{\sin u}\tag{6.15}
$$

holds as well. The above identity represents, in fact, an infinite product expansion of the two-variable function

$$
F(t,u)=\frac{\sin t}{\sin u}.
$$
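The expansion of F(t, u) is easy to probe numerically. In the sketch below (illustrative only; the function name, the sample point, and the truncation level N are the editor's assumptions), the product in (6.15) is truncated symmetrically over n = −N, ..., N:

```python
import math

def sine_ratio(t, u, N):
    """Partial product of (6.15), truncated symmetrically at |n| = N."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= (t - n*math.pi) / (u + n*math.pi)
    return p

t, u = 1.0, 0.6
print(sine_ratio(t, u, 20000), math.sin(t) / math.sin(u))
```

Since the tail of the paired product behaves like a 1/k² series, a fairly large N is needed for several correct digits.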
To analyze the convergence of the infinite product representation in (6.15), we isolate its term with n = 0, which clearly is t/u, and group the pairs of terms with n = k and n = −k. This yields

$$
\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{t}{u}\prod_{k=1}^{\infty}\frac{(t-k\pi)(t+k\pi)}{(u+k\pi)(u-k\pi)}=\frac{t}{u}\prod_{k=1}^{\infty}\frac{t^2-k^2\pi^2}{u^2-k^2\pi^2}
$$
$$
=\frac{t}{u}\prod_{k=1}^{\infty}\frac{t^2-u^2+u^2-k^2\pi^2}{u^2-k^2\pi^2}=\frac{t}{u}\prod_{k=1}^{\infty}\left[1+\frac{t^2-u^2}{u^2-k^2\pi^2}\right].
$$
The form that the product in (6.15) reduces to implies [5, 9] that it converges uniformly if the series

$$
\sum_{k=1}^{\infty}\frac{t^2-u^2}{u^2-k^2\pi^2}
$$

does. But the above converges like the p-series (also referred to in some sources as the generalized harmonic series) with p = 2, that is, at the rate 1/k². Hence, it converges uniformly [9] for any finite value of t and any u ≠ kπ. This makes it possible to conclude that the constraints put on the parameters t and u in Sect. 6.1 (see (6.4)) can be revised. This, in turn, implies that the product in (6.15) converges uniformly to the value of the function F(t, u) at any point (t, u) in its domain.

In what follows, the reader will be introduced to a number of infinite product representations for single-variable trigonometric functions that can be obtained from the identities in (6.14) or (6.15). Note that most of the representations we arrive at in this section were reported for the first time in [27] and [28].

Let us revisit the two-variable identity in (6.15) and assume u = π/2. The identity transforms in this case to the expansion

$$
\sin t=\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(2n+1)\pi}\tag{6.16}
$$

of the sine function in an infinite product. Uniform convergence of this expansion evidently follows from the analysis that we just completed for the infinite product in (6.15). The expansion in (6.16) can be transformed by means of the approach applied earlier to the relation in (6.15). That is, by isolating the term with n = 0, which equals 2t/π, and coupling the terms with n = k and n = −k, we can convert the expansion in (6.16) into

$$
\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\frac{4(t^2-k^2\pi^2)}{(1-4k^2)\pi^2},
$$
which, after some trivial algebra, reads

$$
\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left[1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right].\tag{6.17}
$$
Thus, it appears that an infinite product representation is obtained for the trigonometric sine function. But this raises a natural question about the relationship between the representation in (6.17) and the classical [9] Euler expansion

$$
\sin t=t\prod_{k=1}^{\infty}\left(1-\frac{t^2}{k^2\pi^2}\right),\tag{6.18}
$$
which has been referenced and dealt with multiple times in this volume. Close analysis shows that the forms in (6.17) and (6.18) are unrelated, meaning that neither of them follows from the other. This makes it possible to assert that (6.17) is simply an alternative to (6.18).

Note that the representation in (6.18) has been around in mathematics for more than two hundred and fifty years, owing to the genius of Leonhard Euler [1]. It is obvious that his name needs no recommendation. It is known to everyone who is at least superficially familiar with the history of the natural sciences. That phenomenal Swiss mathematician made countless decisive contributions to different areas of mathematics, mechanics, and engineering sciences. To provide the reader with some perspective on the intellectual greatness of Euler, we recall a comment of another giant, who represents an indisputable authority in the mathematical sciences. Being impressed with and inspired by the beauty and elegance of Euler's ideas, which had influenced a huge army of his pupils and followers, the French mathematician and physicist Pierre Simon Laplace [26] once exclaimed: "Read Euler, read Euler, he is the master of us all." What could be more convincing than such recognition!

It can be seen clearly that the infinite products in (6.17) and (6.18) converge at the same rate. This assertion follows from the form of their general terms: both products converge at the rate 1/k². It appears from close observation, however, that the actual convergence of the product in (6.17) is somewhat faster than that of (6.18). This observation by no means conflicts with the a priori estimate, but rather compares the practical convergence of the two expansions. The latter point is well illustrated in Figs. 6.5 and 6.6, where, to give a clear view of the convergence rate of both representations, we display graphs of their Kth partial products

$$
\frac{2t}{\pi}\prod_{k=1}^{K}\left[1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right]\qquad\text{and}\qquad t\prod_{k=1}^{K}\left(1-\frac{t^2}{k^2\pi^2}\right).
$$
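The comparison behind Figs. 6.5 and 6.6 is easy to reproduce. The following sketch is the editor's illustration (function names, sample points, and the values of K are arbitrary choices); it evaluates the two Kth partial products side by side with the library sine:

```python
import math

def partial_617(t, K):
    """Kth partial product of the expansion (6.17)."""
    p = 2*t / math.pi
    for k in range(1, K + 1):
        p *= 1 + (4*t*t - math.pi**2) / ((1 - 4*k*k) * math.pi**2)
    return p

def partial_618(t, K):
    """Kth partial product of the classical Euler expansion (6.18)."""
    p = t
    for k in range(1, K + 1):
        p *= 1 - t*t / (k*k * math.pi**2)
    return p

for t in (0.5, 1.0, 2.0):
    print(t, partial_617(t, 10), partial_618(t, 10), math.sin(t))
```

Which of the two truncations is closer at small K depends on the sample point; both settle onto sin t as K grows, at the 1/k² rate discussed above.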
Fig. 6.5 Convergence of the expansions in (6.17) and (6.18), K = 5
Fig. 6.6 Convergence of the expansions in (6.17) and (6.18), K = 10
The case K = 5 is depicted in Fig. 6.5, and that of K = 10 is shown in Fig. 6.6. These illustrations provide a sense of the convergence rate of both expansions in (6.17) (diamonds) and (6.18) (boxes).

The infinite product representation

$$
\cos t=\sin\left(\frac{\pi}{2}-t\right)=\frac{\pi-2t}{\pi}\prod_{k=1}^{\infty}\left[1+\frac{4t(t-\pi)}{(1-4k^2)\pi^2}\right]\tag{6.19}
$$
for the cosine function directly follows from the expansion in (6.17), representing an alternative to another classical [9] Euler form

$$
\cos t=\prod_{k=1}^{\infty}\left[1-\frac{4t^2}{(2k-1)^2\pi^2}\right].\tag{6.20}
$$
We revisit again the identity in (6.15) and let the parameter t there equal π/2. This converts (6.15) to the representation

$$
\csc t=\prod_{n=-\infty}^{\infty}\frac{(1-2n)\pi}{2(t+n\pi)}=\frac{\pi}{2t}\prod_{k=1}^{\infty}\left[1+\frac{\pi^2-4t^2}{4(t^2-k^2\pi^2)}\right]\tag{6.21}
$$

for the cosecant function. From the appearance of the second additive term in the brackets in (6.21), it is evident that the infinite product converges uniformly to the values of the cosecant function at every point in the domain of this function, and the convergence rate of this representation is 1/k².

Observe that from the representations exhibited in (6.21) and (6.16), it follows that

$$
\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(2n+1)\pi}\equiv\prod_{n=-\infty}^{\infty}\frac{2(t+n\pi)}{(1-2n)\pi}.
$$
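Both products in this identity are symmetric truncations of expansions of sin t, which makes the identity easy to check numerically. The sketch below is the editor's illustration (function names, the sample point t, and the truncation levels are arbitrary choices):

```python
import math

def prod_616(t, N):
    """Symmetric partial product of sin t in the form (6.16)."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= 2*(t - n*math.pi) / ((2*n + 1)*math.pi)
    return p

def prod_621_reciprocal(t, N):
    """Symmetric partial product of the reciprocal of (6.21)."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= 2*(t + n*math.pi) / ((1 - 2*n)*math.pi)
    return p

t = 0.8
print(prod_616(t, 10000), prod_621_reciprocal(t, 10000), math.sin(t))
```

Over a symmetric range of n the two partial products contain exactly the same factors, so they agree to within rounding error at every truncation level, and both approach sin t.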
The equivalence of the two infinite products in the above identity is evident, given that each of them is unchanged by the replacement of the product index n with −n.

It appears that the identity shown in (6.15) might help in deriving alternative forms for other rare infinite product expansions available in the literature. To verify this assertion, we recall the representation

$$
\frac{\sin 3t}{\sin t}=-\prod_{n=-\infty}^{\infty}\left[1-\left(\frac{2t}{t+n\pi}\right)^2\right],\tag{6.22}
$$
which appears in [9]. If both the variables t and u in

$$
\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{\sin t}{\sin u}
$$

are expressed in terms of a single variable as t := At and u := Bt, where A and B are real constants, then the above relation transforms into

$$
\frac{\sin At}{\sin Bt}=\prod_{n=-\infty}^{\infty}\frac{At-n\pi}{Bt+n\pi}.
$$

This yields

$$
\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2}=\frac{A}{B}\prod_{k=1}^{\infty}\left[1+\frac{(A^2-B^2)t^2}{B^2t^2-k^2\pi^2}\right],\quad Bt\neq n\pi,\tag{6.23}
$$
from which the compact expansion

$$
\frac{\sin 3t}{\sin t}=\prod_{n=-\infty}^{\infty}\frac{3t-n\pi}{t+n\pi},\quad t\neq n\pi,\tag{6.24}
$$
follows as a particular case, apparently representing an alternative to the expansion in (6.22). A close analysis reveals, however, the equivalence of the expansions in (6.24) and (6.22). This assertion can readily be verified using the procedure that was applied earlier to the product in (6.15). That is, we isolate the terms with n = 0 in (6.22) and (6.24), which are equal to −3 and 3, respectively, and pair the terms with n = k and n = −k. This ultimately transforms the product in (6.24) into

$$
\prod_{n=-\infty}^{\infty}\frac{3t-n\pi}{t+n\pi}=3\prod_{k=1}^{\infty}\frac{(3t-k\pi)(3t+k\pi)}{(t+k\pi)(t-k\pi)}=3\prod_{k=1}^{\infty}\frac{9t^2-k^2\pi^2}{t^2-k^2\pi^2}.
$$
As to the product in (6.22), we can prove that it reduces to the same expression. Indeed, after trivial transformations we have

$$
-\prod_{n=-\infty}^{\infty}\left[1-\left(\frac{2t}{t+n\pi}\right)^2\right]=3\prod_{k=1}^{\infty}\left[1-\left(\frac{2t}{t+k\pi}\right)^2\right]\left[1-\left(\frac{2t}{t-k\pi}\right)^2\right]
$$
$$
=3\prod_{k=1}^{\infty}\left[1+\frac{16t^4}{(t^2-k^2\pi^2)^2}-\frac{4t^2}{(t+k\pi)^2}-\frac{4t^2}{(t-k\pi)^2}\right]
$$
$$
=3\prod_{k=1}^{\infty}\frac{9t^4-10k^2\pi^2t^2+k^4\pi^4}{(t^2-k^2\pi^2)^2}=3\prod_{k=1}^{\infty}\frac{9t^2-k^2\pi^2}{t^2-k^2\pi^2}.
$$
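A numerical cross-check of this reduction is straightforward. The sketch below (the editor's illustration; function names, the sample point, and the truncation levels are arbitrary) evaluates the paired partial products of (6.22) and (6.24):

```python
import math

def partial_622(t, K):
    """Paired partial product of (6.22)."""
    p = 3.0  # minus sign in front of (6.22) times its n = 0 factor, -(1 - 4) = 3
    for k in range(1, K + 1):
        p *= (1 - (2*t/(t + k*math.pi))**2) * (1 - (2*t/(t - k*math.pi))**2)
    return p

def partial_624(t, K):
    """Paired partial product of (6.24)."""
    p = 3.0  # the n = 0 factor, 3t/t
    for k in range(1, K + 1):
        p *= (9*t*t - k*k*math.pi**2) / (t*t - k*k*math.pi**2)
    return p

t = 0.5
print(partial_622(t, 1000), partial_624(t, 1000), math.sin(3*t)/math.sin(t))
```

Since the two paired general terms are algebraically identical, the partial products coincide to machine precision at every K, and both slowly (at the 1/k² rate) approach sin 3t / sin t.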
Thus, the expansions in (6.22) and (6.24) are indeed equivalent. The reader is encouraged, in Exercise 6.5, to obtain a graphical illustration of the equivalence of these two expansions.

Note that the expansion in (6.23) can be obtained by directly expressing sin At and sin Bt with the aid of the representation in (6.17). Indeed, the latter suggests that

$$
\sin At=\frac{2At}{\pi}\prod_{k=1}^{\infty}\left[1+\frac{4A^2t^2-\pi^2}{(1-4k^2)\pi^2}\right]=\frac{2At}{\pi}\prod_{k=1}^{\infty}\frac{4(A^2t^2-k^2\pi^2)}{(1-4k^2)\pi^2},
$$

while

$$
\sin Bt=\frac{2Bt}{\pi}\prod_{k=1}^{\infty}\left[1+\frac{4B^2t^2-\pi^2}{(1-4k^2)\pi^2}\right]=\frac{2Bt}{\pi}\prod_{k=1}^{\infty}\frac{4(B^2t^2-k^2\pi^2)}{(1-4k^2)\pi^2}.
$$

Thus we have

$$
\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2}.
$$
Interestingly enough, the Euler expansion in (6.18) also directly leads to that in (6.23).

We continue with the derivation of infinite product representations for trigonometric functions. In doing so, let us assume A = 2 and B = 1 in (6.23). This yields

$$
\frac{\sin 2t}{\sin t}=2\cos t=\prod_{n=-\infty}^{\infty}\frac{2t-n\pi}{t+n\pi}.
$$

This immediately yields another uniformly convergent expansion for the cosine function,

$$
\cos t=\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{2t-n\pi}{t+n\pi}=\prod_{k=1}^{\infty}\left[1+\frac{3t^2}{t^2-k^2\pi^2}\right],\tag{6.25}
$$
which represents yet another alternative to those exhibited earlier in (6.19) and (6.20).

Yet another infinite product representation for the cosine function can be obtained from that in (6.23) if we assume there A = 1/2 and B = 1. This yields

$$
\frac{\sin t/2}{\sin t}=\prod_{n=-\infty}^{\infty}\frac{t-2n\pi}{2(t+n\pi)},
$$

or

$$
\frac{\sin^2 t/2}{\sin^2 t}=\frac{1}{2(1+\cos t)}=\prod_{n=-\infty}^{\infty}\frac{(t-2n\pi)^2}{4(t+n\pi)^2},
$$

from which, solving for cos t, we obtain another alternative infinite product representation for the cosine function:

$$
\cos t=-1+\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{4(t+n\pi)^2}{(t-2n\pi)^2}.\tag{6.26}
$$
To analyze the convergence of this form, an approach will be applied similar to that used earlier for the form in (6.15). That is, by isolating the term with n = 0, which is equal to 4, and pairing the terms with n = k and n = −k, we rewrite the representation in (6.26) in the form

$$
\cos t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2-k^2\pi^2)^2}{(t^2-4k^2\pi^2)^2}=-1+2\prod_{k=1}^{\infty}\left[1+\frac{3t^2(5t^2-8k^2\pi^2)}{(t^2-4k^2\pi^2)^2}\right].
$$

Since the degree of the polynomial in k in the denominator is two units higher than that in the numerator (four against two), we conclude that the product in (6.26) converges at the rate 1/k².
Fig. 6.7 Different product expansions of the cosine function
Another alternative infinite product expansion for the cosine function directly follows from the identity

$$
\prod_{n=-\infty}^{\infty}\frac{(t-n\pi)^2}{(u+n\pi)^2}=\frac{1-\cos 2t}{1-\cos 2u},
$$

which we saw earlier in (6.14). Indeed, assuming in the above t := t/2 and u = π/4, we reduce it to

$$
\cos t=1-\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2}.\tag{6.27}
$$
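As an aside, the cosine alternatives (6.20), (6.25), (6.26), and (6.27) can all be compared numerically, in the spirit of Fig. 6.7. The sketch below is illustrative only (function names, the sample point, and K are the editor's choices); each routine evaluates the paired Kth partial product of the corresponding expansion:

```python
import math

def cos_620(t, K):
    # Euler's classical form (6.20)
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 - 4*t*t / ((2*k - 1)**2 * math.pi**2)
    return p

def cos_625(t, K):
    # (6.25): factors 1 + 3t^2/(t^2 - k^2 pi^2)
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 + 3*t*t / (t*t - k*k*math.pi**2)
    return p

def cos_626(t, K):
    # (6.26), paired form: -1 + 2 * prod 16(t^2 - k^2 pi^2)^2 / (t^2 - 4k^2 pi^2)^2
    p = 2.0
    for k in range(1, K + 1):
        p *= 16 * (t*t - k*k*math.pi**2)**2 / (t*t - 4*k*k*math.pi**2)**2
    return p - 1.0

def cos_627(t, K):
    # (6.27), paired form: 1 - (4t^2/pi^2) * prod 16(t^2 - 4k^2 pi^2)^2 / ((1 - 16k^2)^2 pi^4)
    p = 4*t*t / math.pi**2
    for k in range(1, K + 1):
        p *= 16 * (t*t - 4*k*k*math.pi**2)**2 / ((1 - 16*k*k)**2 * math.pi**4)
    return 1.0 - p

t = 0.5
print(cos_620(t, 10), cos_625(t, 10), cos_626(t, 10), cos_627(t, 10), math.cos(t))
```

All four truncations share the theoretical 1/k² rate, while their constants, and hence their practical accuracy at small K, differ.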
The reader is encouraged to analyze the convergence of this representation in the way suggested earlier. So, as follows from our presentation, the alternative expansions obtained thus far for the cosine function (see the expansions in (6.19), (6.20), (6.25), (6.26), and (6.27)) converge at the same rate 1/k², although our experience reveals some differences in their practical convergence. To justify this assertion, take a look at Fig. 6.7, which gives a view of the actual convergence. The 10th partial products are depicted for each expansion involved. Note that of all the expansions, the one in (6.27) (smaller box curve) appears to be the "most accurate," followed by the one in (6.19) (cross curve) and the classical Euler expansion in (6.20) (circle curve).

The expansion in (6.15) has already been used in this section for the derivation of a number of infinite product representations of trigonometric functions. And yet it can be helpful in the development of some other infinite product expansions. To support this claim, let us show that it can, for example, be used to generate such an expansion for the trigonometric tangent function. In doing so, we introduce a single variable in (6.15) by leaving the t variable as it is, while expressing the u variable as u := π/2 − t. This yields for the right-hand side in (6.15)

$$
\frac{\sin t}{\sin u}=\frac{\sin t}{\sin(\pi/2-t)}=\tan t,
$$
transforming the whole relation in (6.15) into

$$
\tan t=\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(1+2n)\pi-2t}.\tag{6.28}
$$
The uniform convergence of this representation, for any value of t in the domain of the tangent function, clearly follows from the analysis of the expansion in (6.15) that was completed earlier in this section.

An alternative to the infinite product representation in (6.28) for the tangent function can be obtained from the identity in (6.7). Indeed, if the parameter β is set equal to zero in (6.7), then the latter reads as

$$
\prod_{n=-\infty}^{\infty}\frac{(t+2n\pi)^2[u+(2n+1)\pi]^2}{(u+2n\pi)^2[t+(2n+1)\pi]^2}=\frac{(1-\cos t)(1+\cos u)}{(1-\cos u)(1+\cos t)}=\tan^2\frac{t}{2}\cot^2\frac{u}{2}.
$$

It is evident that the above identity holds, for values of t and u from the domain of the function tan²(t/2) cot²(u/2), if the identity

$$
\tan\frac{t}{2}\cot\frac{u}{2}=\prod_{n=-\infty}^{\infty}\frac{(t+2n\pi)[u+(2n+1)\pi]}{(u+2n\pi)[t+(2n+1)\pi]}\tag{6.29}
$$
also holds. The identity in (6.29) can be further transformed. Assuming u = π/2 and t/2 := t, we arrive at

$$
\tan t=\prod_{n=-\infty}^{\infty}\frac{2(3+4n)(t+n\pi)}{(1+4n)[2t+(2n+1)\pi]}.\tag{6.30}
$$
Uniform convergence of this infinite product, for any value of t in the domain of the tangent function, can be verified if we transform it into

$$
\tan t=\frac{6t}{2t+\pi}\prod_{k=1}^{\infty}\frac{4(9-16k^2)(t^2-k^2\pi^2)}{(1-16k^2)[(2t+\pi)^2-4k^2\pi^2]}
$$

and rewrite it in an equivalent form as

$$
\tan t=\frac{6t}{2t+\pi}\prod_{k=1}^{\infty}\left[1+\frac{(4t-\pi)[8t+(1+16k^2)\pi]}{(1-16k^2)[(2t+\pi)^2-4k^2\pi^2]}\right].
$$

Since the series

$$
\sum_{k=1}^{\infty}\frac{(4t-\pi)[8t+(1+16k^2)\pi]}{(1-16k^2)[(2t+\pi)^2-4k^2\pi^2]}
$$
Fig. 6.8 Convergence pattern of the expansions in (6.28) and (6.30)
converges at the rate 1/k², the infinite product in (6.30) converges uniformly to the tangent function at every point in its domain. Figure 6.8 gives a clear view of the convergence rate of the expansions in (6.28) and (6.30); the 10th partial products are shown. It is important to note that one of these expansions approximates the exact values of the tangent function strictly from above, whereas the other one does so strictly from below. Note also that this sandwich-type feature holds for every value of the truncation parameter K, making convenient the simultaneous use of both expansions in (6.28) and (6.30).

It is evident that the relation that we obtained in (6.30) yields the infinite product representation

$$
\cot t=\prod_{n=-\infty}^{\infty}\frac{(1+4n)[2t+(2n+1)\pi]}{2(3+4n)(t+n\pi)},\quad t\neq n\pi,\tag{6.31}
$$
for the cotangent function, while the alternative infinite product expansion

$$
\cot t=\prod_{n=-\infty}^{\infty}\frac{(1+2n)\pi-2t}{2(t-n\pi)},\quad t\neq n\pi,\tag{6.32}
$$
for the cotangent function follows from the expansion for the tangent function in (6.28). By the way, the representation in (6.31) can be directly obtained from that in (6.29) by letting t = π/2 and making the substitution u/2 := t.

Another infinite product representation of a trigonometric function can be directly obtained from that in (6.29), namely

$$
\frac{\tan At}{\tan Bt}=\prod_{n=-\infty}^{\infty}\frac{(At+n\pi)[2Bt+(1+2n)\pi]}{(Bt+n\pi)[2At+(1+2n)\pi]}.\tag{6.33}
$$

This follows from the identity in (6.29) if a single variable t is introduced there as t/2 := At and u/2 := Bt, where A and B are real constants that meet the following constraints: At ≠ (1 + 2n)π/2 and Bt ≠ nπ, n = 0, ±1, ±2, ....
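Returning for a moment to the two tangent expansions: the sandwich behavior of (6.28) and (6.30) noted above is easy to observe numerically. The sketch below is the editor's illustration (function names, the sample point, and K are arbitrary; t must avoid odd multiples of π/2), using the paired forms of both products:

```python
import math

def tan_628(t, K):
    """Paired Kth partial product of (6.28)."""
    p = 2*t / (math.pi - 2*t)  # n = 0 factor
    for k in range(1, K + 1):
        p *= 4*(t*t - k*k*math.pi**2) / ((math.pi - 2*t)**2 - 4*k*k*math.pi**2)
    return p

def tan_630(t, K):
    """Paired Kth partial product of (6.30)."""
    p = 6*t / (2*t + math.pi)  # n = 0 factor
    for k in range(1, K + 1):
        p *= 4*(9 - 16*k*k)*(t*t - k*k*math.pi**2) / \
             ((1 - 16*k*k)*((2*t + math.pi)**2 - 4*k*k*math.pi**2))
    return p

t = 0.5
print(tan_628(t, 10), math.tan(t), tan_630(t, 10))
```

For this sample point the partial products of (6.28) increase toward tan t while those of (6.30) decrease toward it, so the exact value stays bracketed at every K.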
In Exercise 6.8, the reader is advised to analyze the representation in (6.33) and determine its convergence rate. This can be accomplished using the approach employed earlier in this section.

In the next section, we will show that the use of the identities derived earlier in Sect. 6.1 can be extended to another class of elementary functions. Namely, those identities also allow one to derive infinite product representations for some hyperbolic functions.
6.3 Hyperbolic Functions

The identities derived earlier in Sect. 6.1 (see (6.3), (6.7), (6.10), and (6.13)) are also helpful in obtaining some infinite product representations for hyperbolic functions. But before we proceed with specifics, let us revisit some of the infinite product expansions obtained in Sect. 6.2 for trigonometric functions and figure out what those expansions transform to with the aid of the analytic continuation formulas

$$
i\sin iz=-\sinh z,\qquad \cos iz=\cosh z.\tag{6.34}
$$
Similarly to the conversion of the classical Euler infinite product expansion for the trigonometric sine function (see (2.1) of Chap. 2) into the expansion for the hyperbolic sine function in (2.3), which was accomplished with the aid of the first of the formulas in (6.34), the expansion

$$
\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left[1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right]
$$

derived in (6.17) converts into the expansion

$$
\sinh t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left[1-\frac{4t^2+\pi^2}{(1-4k^2)\pi^2}\right]\tag{6.35}
$$
of the hyperbolic sine function.

Some infinite product representations of the hyperbolic cosine function can also be directly obtained from those derived earlier for the trigonometric cosine. Taking

$$
\cos t=\prod_{k=1}^{\infty}\left[1+\frac{3t^2}{t^2-k^2\pi^2}\right]
$$

from (6.25), for example, and utilizing the second of the formulas in (6.34), we have

$$
\cosh t=\prod_{k=1}^{\infty}\left[1+\frac{3t^2}{t^2+k^2\pi^2}\right].\tag{6.36}
$$
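The two hyperbolic continuations just obtained can be sanity-checked numerically. The sketch below is the editor's illustration (function names, the sample point, and K are arbitrary choices):

```python
import math

def sinh_635(t, K):
    """Kth partial product of (6.35)."""
    p = 2*t / math.pi
    for k in range(1, K + 1):
        p *= 1 - (4*t*t + math.pi**2) / ((1 - 4*k*k) * math.pi**2)
    return p

def cosh_636(t, K):
    """Kth partial product of (6.36)."""
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 + 3*t*t / (t*t + k*k*math.pi**2)
    return p

t = 1.0
print(sinh_635(t, 20000), math.sinh(t), cosh_636(t, 20000), math.cosh(t))
```

Both truncations inherit the 1/k² convergence rate of their trigonometric counterparts.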
The alternative representation

$$
\cos t=-1+\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{4(t+n\pi)^2}{(t-2n\pi)^2}
$$

for the trigonometric cosine shown in (6.26) also works. But before going to its analytic continuation, we convert it to the equivalent form

$$
\cos t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2-k^2\pi^2)^2}{(t^2-4k^2\pi^2)^2}.
$$

This yields

$$
\cosh t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2+k^2\pi^2)^2}{(t^2+4k^2\pi^2)^2}.\tag{6.37}
$$
Another alternative to the two infinite product representations for cosh t just presented follows from

$$
\cos t=1-\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2},
$$

shown in (6.27). Converting it first to the equivalent form

$$
\cos t=1-\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2-4k^2\pi^2)^2}{(1-16k^2)^2\pi^4},
$$

we then obtain

$$
\cosh t=1+\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2+4k^2\pi^2)^2}{(1-16k^2)^2\pi^4}.\tag{6.38}
$$
Some other infinite product representations of trigonometric functions can also be immediately converted upon analytic continuation. Revisiting, for example, the relation

$$
\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2},
$$

derived earlier in Sect. 6.2, we convert it into a corresponding relation written in terms of hyperbolic functions, which reads as

$$
\frac{\sinh At}{\sinh Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2+k^2\pi^2}{B^2t^2+k^2\pi^2}.\tag{6.39}
$$
The infinite product representations shown in (6.35)–(6.39) are converted from the corresponding representations obtained earlier for trigonometric functions. Analytic continuation was used as an instrument for that. The list of such conversions
can be further extended. We are not going to explore this track in more detail but encourage the reader to do so.

In the remaining part of this section, we will be investigating the potential of the identities derived in Sect. 6.1. To begin, we revisit first the identity in (6.3). By assuming for the variables t and u the values t = 0 and u = π/2, we transform (6.3) into the single-variable identity

$$
\prod_{n=-\infty}^{\infty}\frac{\beta^2+4n^2\pi^2}{\beta^2+(1+2n)^2\pi^2}=\frac{(1-e^{\beta})^2}{(1+e^{\beta})^2}.
$$

The exponential expression on the right-hand side can be rewritten in terms of a hyperbolic function, transforming the above into

$$
\tanh^2\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{\beta^2+4n^2\pi^2}{\beta^2+(1+2n)^2\pi^2},\tag{6.40}
$$

from which, by introducing the variable t := β/2, we have

$$
\tanh^2 t=\prod_{n=-\infty}^{\infty}\frac{4(t^2+n^2\pi^2)}{4t^2+(1+2n)^2\pi^2}.\tag{6.41}
$$
This delivers the following dual expansion for the hyperbolic tangent function:

$$
\tanh t=\pm\prod_{n=-\infty}^{\infty}2\sqrt{\frac{t^2+n^2\pi^2}{4t^2+(1+2n)^2\pi^2}},\tag{6.42}
$$

where the expansion with the plus sign,

$$
\tanh t=\prod_{n=-\infty}^{\infty}2\sqrt{\frac{t^2+n^2\pi^2}{4t^2+(1+2n)^2\pi^2}},
$$

holds for t ≥ 0, while for the expansion

$$
\tanh t=-\prod_{n=-\infty}^{\infty}2\sqrt{\frac{t^2+n^2\pi^2}{4t^2+(1+2n)^2\pi^2}}
$$

with the minus sign, the variable t is assumed less than zero.

It can be shown that the infinite product representations in (6.41) and (6.42) converge uniformly for −∞ < t < ∞. We will verify this assertion by the method used earlier for the product in (6.15). In doing so, we isolate first the term with n = 0 in (6.41), which is equal to

$$
\frac{4t^2}{4t^2+\pi^2},
$$
and then pair the terms with n = k and n = −k. This transforms the expansion in (6.41) into

$$
\tanh^2 t=\frac{4t^2}{4t^2+\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2+k^2\pi^2)^2}{[4t^2+(1+2k)^2\pi^2][4t^2+(1-2k)^2\pi^2]}.\tag{6.43}
$$

We then rewrite the general term

$$
\frac{16(t^2+k^2\pi^2)^2}{[4t^2+(1+2k)^2\pi^2][4t^2+(1-2k)^2\pi^2]}
$$

of the infinite product in (6.43) in the equivalent form

$$
1-\frac{\pi^2[8t^2+(1-8k^2)\pi^2]}{[4t^2+(1+2k)^2\pi^2][4t^2+(1-2k)^2\pi^2]}.\tag{6.44}
$$
Hence, since the numerator in the second additive term in (6.44) represents a second-degree polynomial in k, whereas the degree of its denominator is four, we conclude that the infinite product in (6.41) is indeed convergent, with convergence rate of order 1/k².

Another infinite product representation for a hyperbolic function can be obtained from another multivariable identity also derived earlier in Sect. 6.1. That is, assigning in (6.7) the values of 0 and π for the variables t and u, respectively, we read it as

$$
\left(\frac{1-e^{\beta}}{1+e^{\beta}}\right)^4=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)[\beta^2+4(1+n)^2\pi^2]}{[\beta^2+(1+2n)^2\pi^2]^2},
$$

which converts to the infinite product expansion

$$
\tanh^4\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)[\beta^2+4(1+n)^2\pi^2]}{[\beta^2+(1+2n)^2\pi^2]^2}.
$$

The above can be rewritten as

$$
\tanh^4 t=\prod_{n=-\infty}^{\infty}\frac{16(t^2+n^2\pi^2)[t^2+(1+n)^2\pi^2]}{[4t^2+(1+2n)^2\pi^2]^2}\tag{6.45}
$$

with the equivalent form

$$
\tanh^4 t=\prod_{n=-\infty}^{\infty}\left[1+\frac{\pi^2[8(t^2-n(n+1)\pi^2)-\pi^2]}{[4t^2+(1+2n)^2\pi^2]^2}\right].
$$

The uniform convergence of the above infinite product can be proven, for any real value of t, in the way applied earlier to the identity in (6.41).
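The paired form (6.43) of the hyperbolic tangent expansion admits a direct numerical check. The sketch below is illustrative only (the function name, sample points, and K are the editor's choices):

```python
import math

def tanh_sq_643(t, K):
    """Kth partial product of the paired expansion (6.43) of tanh^2 t."""
    p = 4*t*t / (4*t*t + math.pi**2)  # the n = 0 term
    for k in range(1, K + 1):
        p *= 16 * (t*t + k*k*math.pi**2)**2 / \
             ((4*t*t + (1 + 2*k)**2 * math.pi**2) * (4*t*t + (1 - 2*k)**2 * math.pi**2))
    return p

for t in (0.3, 1.0):
    print(t, tanh_sq_643(t, 20000), math.tanh(t)**2)
```

The agreement improves at the 1/k² rate established in (6.44).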
The identity in (6.10) can also be used to derive some infinite product representations for hyperbolic functions. Indeed, assuming the values of 0 and π for the variables t and u, respectively, we arrive at the expansion

$$
\coth^2\frac{\alpha}{2}\tanh^2\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)[\alpha^2+(1+2n)^2\pi^2]}{(\alpha^2+4n^2\pi^2)[\beta^2+(1+2n)^2\pi^2]}.\tag{6.46}
$$

It is worth noting that the expansion in (6.40) follows from that in (6.46) if α is taken to infinity. If, on the other hand, the limit is taken in (6.46) as β approaches infinity, then one arrives at the expansion

$$
\coth^2\frac{\alpha}{2}=\prod_{n=-\infty}^{\infty}\frac{\alpha^2+(1+2n)^2\pi^2}{\alpha^2+4n^2\pi^2},
$$

which reads as

$$
\coth^2 t=\prod_{n=-\infty}^{\infty}\frac{4t^2+(1+2n)^2\pi^2}{4(t^2+n^2\pi^2)},\quad t\neq 0,\tag{6.47}
$$
if the variable t is introduced in terms of α as t := α/2. Note that the above expansion has been derived from the identity in (6.10), but it can also be directly obtained as a reciprocal of the expansion in (6.41). The dual expansion for the hyperbolic cotangent function follows from that in (6.47) as

$$
\coth t=\pm\prod_{n=-\infty}^{\infty}\frac{1}{2}\sqrt{\frac{4t^2+(1+2n)^2\pi^2}{t^2+n^2\pi^2}},\quad t\neq 0,\tag{6.48}
$$

with

$$
\coth t=\prod_{n=-\infty}^{\infty}\frac{1}{2}\sqrt{\frac{4t^2+(1+2n)^2\pi^2}{t^2+n^2\pi^2}}
$$

holding for t > 0, while

$$
\coth t=-\prod_{n=-\infty}^{\infty}\frac{1}{2}\sqrt{\frac{4t^2+(1+2n)^2\pi^2}{t^2+n^2\pi^2}}
$$

holds for t < 0.

Uniform convergence of the expansion in (6.47), for any nonzero value of t, becomes evident after we reduce it to

$$
\coth^2 t=\frac{4t^2+\pi^2}{4t^2}\prod_{k=1}^{\infty}\frac{[4t^2+(1+2k)^2\pi^2][4t^2+(1-2k)^2\pi^2]}{16(t^2+k^2\pi^2)^2}
$$
and then to

$$
\coth^2 t=\frac{4t^2+\pi^2}{4t^2}\prod_{k=1}^{\infty}\left[1+\frac{\pi^2[8t^2+(1-8k^2)\pi^2]}{16(t^2+k^2\pi^2)^2}\right].
$$
Since the numerator of the second additive component in the brackets represents a second-degree polynomial in k, while the degree of the denominator polynomial is two units higher, the expansion in (6.47) converges at a rate of order 1/k². Clearly enough, the expansions in (6.47) and (6.48) can be directly obtained from those in (6.41) and (6.42), respectively.

An interesting infinite product representation for a single-variable hyperbolic function follows from the two-variable identity in (6.46). Indeed, if a new variable t is introduced there as α = 2At and β = 2Bt, where A and B represent real constants, with A ≠ 0, then the right-hand side of the identity in (6.46) transforms into the infinite product expansion

$$
\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}\tag{6.49}
$$

of the function

$$
F(t)=\frac{\tanh^2 Bt}{\tanh^2 At},
$$

whose domain clearly is the set of all real numbers except for t = 0. Hence, the expansion in (6.49) must converge uniformly in the domain of F(t). This statement can, however, be further strengthened with the assertion that the expansion in (6.49) converges at t = 0 as well, and its value at t = 0 is the limit of F(t) as t approaches zero, that is,

$$
\lim_{t\to 0}\frac{\tanh^2 Bt}{\tanh^2 At}=\frac{B^2}{A^2}.
$$
This assertion is not evident. To come up with its verification, we split off the n = 0 term in (6.49). This yields

$$
\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}
=\frac{B^2(4A^2t^2+\pi^2)}{A^2(4B^2t^2+\pi^2)}\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)[4A^2t^2+(1+2k)^2\pi^2]}{(A^2t^2+k^2\pi^2)[4B^2t^2+(1+2k)^2\pi^2]}
$$
$$
\times\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)[4A^2t^2+(1-2k)^2\pi^2]}{(A^2t^2+k^2\pi^2)[4B^2t^2+(1-2k)^2\pi^2]}.
$$
Combining the two infinite products into one, we can rewrite the above relation as

$$
\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}
$$
$$
=\frac{B^2(4A^2t^2+\pi^2)}{A^2(4B^2t^2+\pi^2)}\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)^2[16A^4t^4+8(1+4k^2)A^2t^2\pi^2+(1-4k^2)^2\pi^4]}{(A^2t^2+k^2\pi^2)^2[16B^4t^4+8(1+4k^2)B^2t^2\pi^2+(1-4k^2)^2\pi^4]}.
$$

It can readily be seen that the general term in the last infinite product equals unity at t = 0, implying that the value of the expansion in (6.49) at t = 0 is indeed B²/A². This allows us finally to obtain the expansion

$$
\frac{\tanh^2 Bt}{\tanh^2 At}=\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]},
$$
which is valid for any real value of t. Another infinite product expansion of a multivariable function can be obtained from the identity presented in (6.13) of Sect. 6.1. In doing so, we assume for the variables t and u in (6.13) the values of t = π and u = π/2. This yields ∞ (1 + eα )2 (1 + eβ )2 16[α 2 + (1 − 2n)2 π 2 ][β 2 + (1 − 2n)2 π 2 ] = . (6.50) (1 + e2α )(1 + e2β ) [4α 2 + (1 + 4n)2 π 2 ][4β 2 + (1 + 4n)2 π 2 ] n=−∞
To simplify the above relation, we take advantage of its specific (symmetric) form. Indeed, introducing a new variable t by setting t = α = β, we reduce (6.50) to
$$\frac{(1+e^{t})^4}{(1+e^{2t})^2}=\prod_{n=-\infty}^{\infty}\frac{16[t^2+(1-2n)^2\pi^2]^2}{[4t^2+(1+4n)^2\pi^2]^2},$$
which further reduces to
$$\frac{(1+e^{t})^2}{1+e^{2t}}=\prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}, \tag{6.51}$$
or, converting the left-hand side of the above to hyperbolic function form, we transform (6.51) into
$$\frac{1+\cosh t}{\cosh t}=\prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}.$$
This leads to the compact infinite product expansion
$$\operatorname{sech} t=-1+\prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2} \tag{6.52}$$
[Fig. 6.9: Convergence of the representation in (6.52)]
of the hyperbolic secant function. The above representation converges uniformly for any value of t. We verify this assertion by our customary procedure. For that, the infinite product in (6.52) is transformed as
$$\prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}
=\frac{4(t^2+\pi^2)}{4t^2+\pi^2}\prod_{k=1}^{\infty}\frac{16[t^2+(1+2k)^2\pi^2][t^2+(1-2k)^2\pi^2]}{[4t^2+(1+4k)^2\pi^2][4t^2+(1-4k)^2\pi^2]}.$$
After some trivial algebra with the general term of the above product, the infinite product representation of the hyperbolic secant function exhibited in (6.52) converts into the equivalent form
$$\operatorname{sech} t=-1+\frac{4(t^2+\pi^2)}{4t^2+\pi^2}\prod_{k=1}^{\infty}\biggl(1+\frac{3\pi^2[8t^2+\pi^2(5-32k^2)]}{[4t^2+(1+4k)^2\pi^2][4t^2+(1-4k)^2\pi^2]}\biggr). \tag{6.53}$$
From a comparison of the highest degree of the multiplication index k in the numerator (which is two) and in the denominator (which is four) of (6.53), it follows that the expansion in (6.52) converges uniformly for any value of t, and its convergence rate is of order 1/k². To give a sense of the actual convergence of the expansion derived in (6.52), graphs of its partial products
$$\operatorname{sech} t\approx-1+\prod_{n=-N}^{N}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}$$
are depicted in Fig. 6.9 for N = 5, 10, and 50.
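The partial products behind Fig. 6.9 are easy to reproduce numerically. The following sketch is an editorial illustration (the function name and sample point are ours); it compares truncations of (6.52) with sech t and shows the error shrinking as N grows:

```python
import math

def sech_partial(t, N):
    """Truncation of (6.52): -1 + prod_{n=-N}^{N} 4[t^2+(1-2n)^2 pi^2] / (4t^2+(1+4n)^2 pi^2)."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= 4 * (t * t + (1 - 2 * n) ** 2 * math.pi ** 2) / (4 * t * t + (1 + 4 * n) ** 2 * math.pi ** 2)
    return p - 1.0

t = 1.0
# absolute errors of the truncations shown in Fig. 6.9
errors = [abs(sech_partial(t, N) - 1 / math.cosh(t)) for N in (5, 10, 50)]
```

Since the general term of (6.53) deviates from unity by O(1/k²), the tail of the product contributes an error of order 1/N, which is exactly the slow but steady convergence the figure illustrates.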
6.4 Chapter Exercises
6.1 Using the method of eigenfunction expansion, derive the expression presented in (6.1) for the Green's function of the Dirichlet problem stated for the Laplace equation on the infinite strip {−∞ < x < ∞, 0 < y < b}.

6.2 Using the method of eigenfunction expansion, derive the expression presented in (6.5) for the Green's function of the Dirichlet–Neumann problem stated for the Laplace equation on the infinite strip {−∞ < x < ∞, 0 < y < b}.

6.3 Using the method of eigenfunction expansion, derive the expression presented in (6.8) for the Green's function of the Dirichlet problem stated for the Laplace equation on the semi-infinite strip {0 < x < ∞, 0 < y < b}.

6.4 Using the method of eigenfunction expansion, derive the expression presented in (6.11) for the Green's function of the mixed problem stated for the Laplace equation on the semi-infinite strip {0 < x < ∞, 0 < y < b}.

6.5 Illustrate the equivalence of the expansions presented in (6.22) and (6.24) by graphing their partial products.

6.6 Prove the convergence of the infinite product representation derived in (6.27) for the cosine function.

6.7 Prove the convergence of the infinite product representation derived in (6.28) for the tangent function.

6.8 Determine the convergence rate of the infinite product representation obtained in (6.33).

6.9 Derive an infinite product representation for the function tan x − cot x and determine its convergence rate.
Chapter 7
Hints and Answers to Chapter Exercises
7.1 Chapter 2

2.5 Transforming the function in the statement as
$$a\sin x+b\cos x=\sqrt{a^2+b^2}\left(\frac{a}{\sqrt{a^2+b^2}}\sin x+\frac{b}{\sqrt{a^2+b^2}}\cos x\right)$$
and introducing the argument
$$\varphi=\arccos\frac{a}{\sqrt{a^2+b^2}},$$
which implies
$$\varphi=\arcsin\frac{b}{\sqrt{a^2+b^2}}=\arctan\frac{b}{a},$$
we have
$$a\sin x+b\cos x=\sqrt{a^2+b^2}\,\sin(x+\varphi),$$
which, in compliance with the Euler expansion in (2.1), yields
$$a\sin x+b\cos x=\sqrt{a^2+b^2}\,\bigl(x+\arctan(b/a)\bigr)\prod_{k=1}^{\infty}\left(1-\frac{(x+\arctan(b/a))^2}{k^2\pi^2}\right).$$
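The final identity of 2.5 lends itself to a quick numerical check. The sketch below is an editorial illustration (the sample values a = 2, b = 1, the point x, and the truncation level are ours); note that the arctan(b/a) branch used here presumes a > 0:

```python
import math

def expansion_2_5(x, a, b, N):
    """Truncated infinite product form of a*sin(x) + b*cos(x) from the answer to 2.5."""
    z = x + math.atan2(b, a)          # x + arctan(b/a), assuming a > 0
    p = math.sqrt(a * a + b * b) * z  # prefactor of the Euler sine product
    for k in range(1, N + 1):
        p *= 1 - z * z / (k * k * math.pi ** 2)
    return p

a, b, x = 2.0, 1.0, 0.5
approx = expansion_2_5(x, a, b, 20000)
exact = a * math.sin(x) + b * math.cos(x)
```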
2.6 Expressing the sum of sine functions in the statement in the product form
$$\sin x+\sin y=2\sin\frac{x+y}{2}\cos\frac{x-y}{2}=2\sin\frac{x+y}{2}\sin\frac{\pi-(x-y)}{2}$$
and replacing the sine functions with their Euler infinite product representations, one arrives at
$$\sin x+\sin y=\frac{(x+y)(\pi-(x-y))}{2}\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^2}{4k^2\pi^2}\right)\left(1-\frac{(\pi-(x-y))^2}{4k^2\pi^2}\right),$$
which can be rewritten, after elementary algebra, as
$$\sin x+\sin y=\frac{(x+y)(\pi-(x-y))}{2}\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^2+(\pi-(x-y))^2}{4k^2\pi^2}+\frac{(x+y)^2(\pi-(x-y))^2}{16k^4\pi^4}\right).$$
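The prefactor (x + y)(π − (x − y))/2 carried by the two sine factors (restored here editorially, since it does not survive cleanly in all printings) is easy to confirm numerically. A sketch, with sample points of our choosing:

```python
import math

def expansion_2_6(x, y, N):
    """Truncated product form of sin(x) + sin(y) from the answer to 2.6."""
    u, v = x + y, math.pi - (x - y)
    p = u * v / 2  # prefactors of the two Euler sine products combined with the leading 2
    for k in range(1, N + 1):
        p *= (1 - u * u / (4 * k * k * math.pi ** 2)) * (1 - v * v / (4 * k * k * math.pi ** 2))
    return p

approx = expansion_2_6(0.5, 0.3, 20000)
exact = math.sin(0.5) + math.sin(0.3)
```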
2.7 If the sum of cosine functions is expressed in the product form
$$\cos x+\cos y=2\cos\frac{x+y}{2}\cos\frac{x-y}{2}$$
and each of the cosine functions on the right-hand side is replaced with its Euler infinite product representation in (2.2), then one obtains
$$\cos x+\cos y=2\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^2}{(2k-1)^2\pi^2}\right)\left(1-\frac{(x-y)^2}{(2k-1)^2\pi^2}\right),$$
which converts, with the aid of elementary algebra, into
$$\cos x+\cos y=2\prod_{k=1}^{\infty}\left(1-\frac{2(x^2+y^2)}{(2k-1)^2\pi^2}+\frac{(x^2-y^2)^2}{(2k-1)^4\pi^4}\right).$$

2.9
Using the elementary trigonometric identity
$$\cot x+\cot y=\frac{\sin(x+y)}{\sin x\,\sin y}$$
and expressing the sine functions on the right-hand side with the Euler representation in (2.1), one obtains
$$\cot x+\cot y=\frac{x+y}{xy}\prod_{k=1}^{\infty}\frac{1-\frac{(x+y)^2}{k^2\pi^2}}{\bigl(1-\frac{x^2}{k^2\pi^2}\bigr)\bigl(1-\frac{y^2}{k^2\pi^2}\bigr)},$$
which transforms ultimately into
$$\cot x+\cot y=\frac{x+y}{xy}\prod_{k=1}^{\infty}\left(1-\frac{xy(xy+2k^2\pi^2)}{(k^2\pi^2-x^2)(k^2\pi^2-y^2)}\right).$$
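The final form of 2.9 can be checked directly against cot x + cot y. The sketch below is an editorial illustration (sample points and truncation level ours; the points are chosen inside (0, π) away from the poles):

```python
import math

def expansion_2_9(x, y, N):
    """Truncated product form of cot(x) + cot(y) from the answer to 2.9."""
    p = (x + y) / (x * y)
    for k in range(1, N + 1):
        kk = k * k * math.pi ** 2
        p *= 1 - x * y * (x * y + 2 * kk) / ((kk - x * x) * (kk - y * y))
    return p

x, y = 0.7, 0.4
approx = expansion_2_9(x, y, 20000)
exact = math.cos(x) / math.sin(x) + math.cos(y) / math.sin(y)
```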
2.11
Converting the sum of hyperbolic cotangent functions into the product
$$\coth x+\coth y=\frac{\sinh(x+y)}{\sinh x\,\sinh y}$$
and using the classical Euler infinite product representation in (2.3) for the right-hand side, one arrives at
$$\coth x+\coth y=\frac{x+y}{xy}\prod_{k=1}^{\infty}\frac{1+\frac{(x+y)^2}{k^2\pi^2}}{\bigl(1+\frac{x^2}{k^2\pi^2}\bigr)\bigl(1+\frac{y^2}{k^2\pi^2}\bigr)},$$
which transforms into
$$\coth x+\coth y=\frac{x+y}{xy}\prod_{k=1}^{\infty}\left(1+\frac{xy(2k^2\pi^2-xy)}{(k^2\pi^2+x^2)(k^2\pi^2+y^2)}\right).$$
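The hyperbolic counterpart in 2.11 admits the same kind of numerical confirmation (again an editorial sketch with sample points of our choosing):

```python
import math

def expansion_2_11(x, y, N):
    """Truncated product form of coth(x) + coth(y) from the answer to 2.11."""
    p = (x + y) / (x * y)
    for k in range(1, N + 1):
        kk = k * k * math.pi ** 2
        p *= 1 + x * y * (2 * kk - x * y) / ((kk + x * x) * (kk + y * y))
    return p

x, y = 0.7, 0.4
approx = expansion_2_11(x, y, 20000)
exact = math.cosh(x) / math.sinh(x) + math.cosh(y) / math.sinh(y)
```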
7.2 Chapter 3

3.3 In an attempt to construct the Green's function for the Dirichlet problem for the Laplace equation on the infinite wedge Ω = {0 < r < ∞, 0 < ϕ < 2π/5}, let the unit source (which produces the singular component of the Green's function) be located at (ϱ, ψ) ∈ Ω. To compensate its trace on the fragment ϕ = 0 of the boundary of Ω, place a unit sink at (ϱ, 2π − ψ) ∉ Ω. The trace of the latter on the boundary fragment ϕ = 2π/5 is compensated, in turn, with a unit source at (ϱ, 4π/5 + ψ) ∉ Ω, whose trace on ϕ = 0 must be compensated with a unit sink at (ϱ, 6π/5 − ψ) ∉ Ω, whose trace on ϕ = 2π/5 requires compensation by a source at (ϱ, 8π/5 + ψ) ∉ Ω. And to compensate the latter's trace on ϕ = 0, we must put a sink at (ϱ, 2π/5 − ψ), which is, unfortunately, located inside Ω. This is what causes the failure of the method in this case.
7.3 Chapter 4

4.1 Express the general solution of the equation in (4.21) as
$$y_g(x)=D_1\exp kx+D_2\exp(-kx),$$
where D1 and D2 are arbitrary constants. The first of the conditions in (4.22) yields
$$D_1+D_2=D_1\exp ka+D_2\exp(-ka),$$
while from the second condition, we have
$$D_1-D_2=D_1\exp ka-D_2\exp(-ka).$$
These two relations represent the homogeneous system of linear algebraic equations
$$\begin{pmatrix}1-\exp ka & 1-\exp(-ka)\\ 1-\exp ka & \exp(-ka)-1\end{pmatrix}\begin{pmatrix}D_1\\ D_2\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix},$$
having only the trivial solution D1 = D2 = 0, because the coefficient matrix of the system is nonsingular. Indeed, its determinant 2(1 − exp ka)(exp(−ka) − 1) is nonzero.

4.2
Express the general solution of the equation in (4.31) as
$$y_g(x)=D_1\ln(mx+b)+D_2.$$
The first of the boundary conditions in (4.32) yields mD1/b = 0, while the second condition yields D1 ln(ma + b) + D2 = 0. Hence, D1 = D2 = 0, which implies that the boundary-value problem stated in (4.31) and (4.32) has only the trivial solution; that is, it is well posed.

4.3 Proving that the problem in (4.54) and (4.55) has a unique solution is equivalent to showing that the corresponding homogeneous problem has only the trivial solution, which is indeed true. To support this claim, express the general solution of the homogeneous equation as
$$y_g(x)=D_1\sin kx+D_2\cos kx.$$
The first boundary condition in (4.55) yields D1 = 0, while from the second condition it follows that D1 cos ka − D2 cos ka = 0, implying that D2 is also zero.

4.4 The Green's function is found as
$$g(x,s)=\frac{1}{k\cos k}\begin{cases}\sin kx\,\cos k(s-1), & x\le s,\\[2pt] \sin ks\,\cos k(x-1), & x\ge s.\end{cases}$$
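The kernel in 4.4 can be checked against the defining properties of a Green's function. The sketch below is an editorial illustration; since the statement of (4.54)-(4.55) is not reproduced here, it assumes, as the form of the kernel suggests, the equation y'' + k²y = 0 on (0, 1) with y(0) = 0 and y'(1) = 0, and verifies continuity at x = s together with the unit jump of the derivative across the diagonal:

```python
import math

def g(x, s, k=1.3):
    """Green's function from the answer to 4.4 (k = 1.3 is an arbitrary test value)."""
    if x <= s:
        return math.sin(k * x) * math.cos(k * (s - 1)) / (k * math.cos(k))
    return math.sin(k * s) * math.cos(k * (x - 1)) / (k * math.cos(k))

s, h = 0.6, 1e-6
# continuity of g at the source point x = s
cont = abs(g(s - h, s) - g(s + h, s))
# the jump of dg/dx across x = s (one-sided finite differences) should equal -1
left = (g(s, s) - g(s - h, s)) / h
right = (g(s + h, s) - g(s, s)) / h
jump = right - left
# assumed boundary conditions: g vanishes at x = 0, dg/dx vanishes at x = 1
bc0 = g(0.0, s)
bc1 = (g(1.0, s) - g(1.0 - h, s)) / h
```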
4.5 The Green's function is obtained in the form
$$g(x,s)=\frac{1}{kE(a)}\begin{cases}E(x)\sinh k(s-a), & x\le s,\\[2pt] E(s)\sinh k(x-a), & x\ge s,\end{cases}$$
where
$$E(p)=\exp kp+\lambda\exp(-kp)\quad\text{and}\quad\lambda=\frac{k-h}{k+h}.$$
7.4 Chapter 5

5.1 A closed form of the Green's function is obtained as
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\frac{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}.$$
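With the conjugate ζ̄ restored in the numerator (an editorial reconstruction; the overbars do not survive in all printings), the closed form in 5.1 behaves as the Green's function of a Dirichlet problem on the strip 0 < y < b should: it vanishes on both boundary lines and is positive inside. A numerical spot check, with the width b and the sample points chosen by us:

```python
import cmath
import math

def G(z, zeta, b=1.0):
    """Closed-form Green's function from the answer to 5.1, conjugate source term restored."""
    E = lambda w: cmath.exp(cmath.pi * w / (2 * b))
    num = abs(1 + E(z - zeta.conjugate())) * abs(1 - E(z - zeta.conjugate()))
    den = abs(1 + E(z - zeta)) * abs(1 - E(z - zeta))
    return math.log(num / den) / (2 * math.pi)

b = 1.0
zeta = complex(0.3, 0.4)                    # source point inside the strip 0 < y < b
g_bottom = G(complex(1.2, 0.0), zeta, b)    # on the boundary line y = 0
g_top = G(complex(-0.7, b), zeta, b)        # on the boundary line y = b
g_inside = G(complex(0.3, 0.7), zeta, b)    # generic interior point
```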
Here and further in the answers for Exercises 5.2-5.4, the complex variable notations z = x + iy and ζ = ξ + iη are used for the field point and the source point, respectively; an overbar denotes complex conjugation.

5.2
The Green's function is obtained in the form
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\Biggl[\frac{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}\times\frac{\bigl|1+\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|}{\bigl|1+\exp\frac{\pi(z+\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{2b}\bigr|}\Biggr].$$

5.3 The Green's function is obtained in the form
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\Biggl[\frac{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}\times\frac{\bigl|1+\exp\frac{\pi(z+\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{2b}\bigr|}{\bigl|1+\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|}\Biggr].$$
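The bar placement in the expression for 5.2 (again an editorial reconstruction) can be sanity-checked: with the conjugates in the numerators, the expression vanishes on all three boundary lines x = 0, y = 0, and y = b of the semi-infinite strip, as the Green's function of a Dirichlet problem must. A numerical sketch, with width b and sample points of our choosing:

```python
import cmath
import math

def G52(z, zeta, b=1.0):
    """Green's function from the answer to 5.2, conjugates restored in the numerators."""
    E = lambda w: cmath.exp(cmath.pi * w / (2 * b))
    def ratio(w_top, w_bot):
        return (abs(1 + E(w_top)) * abs(1 - E(w_top))) / (abs(1 + E(w_bot)) * abs(1 - E(w_bot)))
    zc = zeta.conjugate()
    return math.log(ratio(z - zc, z - zeta) * ratio(z + zc, z + zeta)) / (2 * math.pi)

b = 1.0
zeta = complex(0.5, 0.3)                 # source point inside the semi-infinite strip
vals = [G52(complex(0.0, 0.6), zeta, b), # on the boundary line x = 0
        G52(complex(0.8, 0.0), zeta, b), # on the boundary line y = 0
        G52(complex(0.4, b), zeta, b)]   # on the boundary line y = b
```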
5.4 A computer-friendly expression for the Green's function is obtained in the form
$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\frac{\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{b}\bigr|}{\bigl|1-\exp\frac{\pi(z-\zeta)}{b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{b}\bigr|}
-\frac{4}{b}\sum_{n=1}^{\infty}\frac{(\beta-\nu)\sinh\nu x\,\sinh\nu\xi}{\nu[(\nu+\beta)\exp 2\nu a+(\nu-\beta)]}\,\sin\nu y\,\sin\nu\eta,$$
where ν = nπ/b.

5.5 Tracing out the standard procedure of the method of eigenfunction expansion, one obtains the Green's function in the Fourier series form
$$G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\Biggl[k_0(r,\rho)+2\sum_{n=1}^{\infty}k_n(r,\rho)\cos n(\varphi-\psi)\Biggr],$$
where the function k0(r,ρ) is found as
$$k_0(r,\rho)=\frac{1}{1+\beta b\ln(b/a)}\begin{cases}\ln(r/a)\,\bigl(1+\beta b\ln(b/\rho)\bigr), & r\le\rho,\\[2pt] \ln(\rho/a)\,\bigl(1+\beta b\ln(b/r)\bigr), & r\ge\rho,\end{cases}$$
while the expression for the coefficient kn(r,ρ), valid for r ≤ ρ, reads
$$k_n(r,\rho)=\frac{(r^{2n}-a^{2n})\,[n(b^{2n}+\rho^{2n})+\beta b(b^{2n}-\rho^{2n})]}{2n(r\rho)^n\,[n(b^{2n}+a^{2n})+\beta b(b^{2n}-a^{2n})]}.$$
Note that the expression for kn(r,ρ) valid for r ≥ ρ can be obtained from the above with the variables r and ρ interchanged. A close analysis reveals a slow convergence of the series in the above expression for G(r,ϕ;ρ,ψ). Indeed, it converges at the rate 1/n, notably diminishing the practicality of the representation. After improving the convergence in the way described in the current chapter, a computer-friendly form of the Green's function is ultimately obtained as
$$G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\Biggl[\ln\frac{|a^2-z\bar\zeta|}{|z|\,|z-\zeta|}+k_0(r,\rho)+\sum_{n=1}^{\infty}k_n^{*}(r,\rho)\cos n(\varphi-\psi)\Biggr],$$
where the coefficient kn*(r,ρ) of the series component is found, for r ≤ ρ, as
$$k_n^{*}(r,\rho)=\frac{(r^{2n}-a^{2n})(a^{2n}-\rho^{2n})(\beta b-n)}{n(r\rho)^n\,[b^{2n}(\beta b+n)-a^{2n}(\beta b-n)]},$$
while for an expression valid for r ≥ ρ, the variables r and ρ are interchanged. Here and further in the answer to Exercise 5.7, the complex variable notations z = r(cos ϕ + i sin ϕ) and ζ = ρ(cos ψ + i sin ψ) are used for the field point and the source point, respectively.

5.7
A computer-friendly form of the Green's function is obtained as
$$G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\Biggl[\ln\frac{|z|\,|\zeta|^2}{|z-\zeta|\,|a^2-z\bar\zeta|}+k_0^{*}(r,\rho)-\sum_{n=1}^{\infty}k_n^{*}(r,\rho)\cos n(\varphi-\psi)\Biggr],$$
where the function k0*(r,ρ) and the coefficient kn*(r,ρ) of the series component are found, for r ≤ ρ, as
$$k_0^{*}(r,\rho)=\frac{1}{\beta b}+\ln\frac{b}{\rho}$$
and
$$k_n^{*}(r,\rho)=\frac{(r^{2n}+a^{2n})(a^{2n}+\rho^{2n})(\beta b-n)}{n(r\rho)^n\,[b^{2n}(\beta b+n)+a^{2n}(\beta b-n)]},$$
while the variables r and ρ must be interchanged in the above expressions for k0*(r,ρ) and kn*(r,ρ) to make them valid for r ≥ ρ.
7.5 Chapter 6

6.6 To prove the convergence of the infinite product
$$\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2},$$
convert it to the form
$$\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2-4k^2\pi^2)^2}{(1-16k^2)^2\pi^4}$$
and then transform it into
$$\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\left(1+\frac{(\pi^2-4t^2)\,[(32k^2-1)\pi^2-4t^2]}{(1-16k^2)^2\pi^4}\right).$$
Hence, the infinite product in (6.27) indeed converges, and its convergence rate is of order 1/k².

6.7
The infinite product
$$\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(1+2n)\pi-2t}$$
in (6.28) converges because it is equivalent to
$$\frac{2t}{\pi-2t}\prod_{k=1}^{\infty}\frac{4(t^2-k^2\pi^2)}{[(1+2k)\pi-2t][(1-2k)\pi-2t]},$$
which can be rewritten as
$$\frac{2t}{\pi-2t}\prod_{k=1}^{\infty}\left(1-\frac{\pi(\pi-4t)}{[(1+2k)\pi-2t][(1-2k)\pi-2t]}\right),$$
revealing a convergence rate of order 1/k².

6.8
Show that the representation in the statement is equivalent to
$$\frac{A(2Bt+\pi)}{B(2At+\pi)}\prod_{k=1}^{\infty}\frac{(A^2t^2-k^2\pi^2)\,[(1+2k)\pi+2Bt][(1-2k)\pi+2Bt]}{(B^2t^2-k^2\pi^2)\,[(1+2k)\pi+2At][(1-2k)\pi+2At]},$$
which, in turn, is equivalent to
$$\frac{A(2Bt+\pi)}{B(2At+\pi)}\prod_{k=1}^{\infty}\left(1+\frac{\pi t(A-B)\,[\pi t(A+B)+4(ABt^2+k^2\pi^2)]}{(B^2t^2-k^2\pi^2)\,[(1+2k)\pi+2At][(1-2k)\pi+2At]}\right).$$
This reveals that the convergence rate of the representation in (6.33) is of order 1/k².

6.9
Transform the function in the statement as
$$\tan t-\cot t=\frac{\sin t}{\cos t}-\frac{\cos t}{\sin t}=-2\cot 2t$$
and replace the cotangent function with the infinite product (see (6.32)). This yields
$$\tan t-\cot t=2\prod_{n=-\infty}^{\infty}\frac{4t-(1+2n)\pi}{2(2t-n\pi)}.$$
The above converts into the infinite product
$$\tan t-\cot t=\frac{4t-\pi}{2t}\prod_{k=1}^{\infty}\left(1+\frac{\pi(\pi-8t)}{4(4t^2-k^2\pi^2)}\right),$$
whose convergence rate is of order 1/k².
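As a final check, the representation obtained in 6.9 can be compared with −2 cot 2t numerically. The sketch below is an editorial illustration (the sample point t = 0.4 and the truncation level are ours; t is kept away from the poles at kπ/2):

```python
import math

def tan_minus_cot(t, N):
    """Truncated product form of tan(t) - cot(t) from the answer to 6.9."""
    p = (4 * t - math.pi) / (2 * t)
    for k in range(1, N + 1):
        p *= 1 + math.pi * (math.pi - 8 * t) / (4 * (4 * t * t - k * k * math.pi ** 2))
    return p

t = 0.4
approx = tan_minus_cot(t, 2000)
exact = math.tan(t) - 1 / math.tan(t)   # equals -2*cot(2t)
```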
165 U Unbounded, 6, 68, 80 Undergraduate course of calculus, viii Undergraduate course of differential equations, viii Undergraduate mathematics, 2 Undergraduate textbook, 61 Unexpected treatment, 1 Uniform convergence, 105, 118, 132, 139, 144, 145 Unique solution, 11, 64–66, 73, 74, 79, 154 Uniqueness, 56, 57 Unit disk, 13, 14, 54, 56–59 Unit sink, 44, 45, 47, 48, 51, 53, 123, 124, 126, 153 Unit source, 44–51, 53, 122–126, 128, 130, 153 Unity, 3, 6, 8, 57, 95, 96, 147 Unlooked-for approach, 43 Unlooked-for outcome, 1 Upper bound, 27 Upper-division course/seminar, 1 Upper half-plane, 44, 45, 50 V Value, 3, 5–10, 25, 26, 37, 39, 53, 54, 63–65, 68–72, 75, 77, 78, 89, 95, 99, 104, 107, 117, 118, 131, 132, 135, 139, 140, 143–148 Variation of parameters, 62, 72, 73, 76, 88, 93, 107, 108 W Wallis formula, 9, 10 Weierstrass elliptic function, 59 Well posed, 11, 62, 63, 66, 72, 74, 81, 83, 88, 110, 154 Word of caution, 51 Work of art, 19 Z Zero terms, 3