De Gruyter Expositions in Mathematics 55 Editors Victor P. Maslov, Moscow, Russia Walter D. Neumann, New York, USA Markus J. Pflaum, Boulder, USA Dierk Schleicher, Bremen, Germany Raymond O. Wells, Bremen, Germany
Rainer Picard Des McGhee
Partial Differential Equations A Unified Hilbert Space Approach
De Gruyter
Mathematical Subject Classification 2010: 35-01, 35-02, 34G10, 44A10, 47A60, 47D06, 47E05, 47F05, 47G10.
ISBN 978-3-11-025026-8 e-ISBN 978-3-11-025027-5 ISSN 0938-6572 Library of Congress Cataloging-in-Publication Data Picard, R. H. (Rainer H.) Partial differential equations : a unified Hilbert space approach / by Rainer Picard, Des McGhee. p. cm. ⫺ (De Gruyter expositions in mathematics ; 55) Includes bibliographical references and index. ISBN 978-3-11-025026-8 (alk. paper) 1. Hillbert space. 2. Differential equations, Partial. I. McGhee, D. F. II. Title. QA322.4.P53 2011 5151.733⫺dc22 2011004423
Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available in the Internet at http://dnb.d-nb.de. ” 2011 Walter de Gruyter GmbH & Co. KG, Berlin/New York Typesetting: Da-TeX Gerd Blumenstein, Leipzig, www.da-tex.de Printing and binding: Hubert & Co. GmbH & Co. KG, Göttingen ⬁ Printed on acid-free paper Printed in Germany www.degruyter.com
Dedication We dedicate this book to the memory of Cliff Bartlett (1923–2010), a mathematician, artist and singer and a member of the Department of Mathematics (as it then was), University of Strathclyde for over thirty years, retiring in the mid-1980s. Cliff offered great encouragement and support to both of us in the early parts of our careers, to DM as a newly appointed lecturer and to RP during a year at Strathclyde as a Visiting Lecturer. Rainer Picard, Des McGhee
Preface
Given that there is a multitude of well-written and useful textbooks and monographs on partial differential equations, one of the obvious concerns of prospective authors might be why there should be another one written. So, it is appropriate to explain at the outset some of the particular features which make this book different from other texts. It will be obvious to the reader that the present book is not a run-off-the-mill text book on linear partial differential equations. First of all, the whole approach – although (with some additional work) extendable to a more general Banach space setting – is established in a Hilbert space setting, as the title of this monograph indicates. Of course, a Banach space setting is more general and sometimes more appropriate, but usually the core results rely nevertheless on a Hilbert space solution theory, a fact sometimes only tacitly acknowledged. We hope to show that, for presenting core ideas, our focus on a Hilbert space setting is not a constraint, but rather a highly suitable approach for providing a more transparent and even fairly elementary framework for presenting the main issues in the discussion of a solution theory for partial differential equations. The reader may also find many topics, dealt with elsewhere, presented here in a slightly different flavor. Indeed, the building blocks (such as extrapolation and interpolation spaces, sums of operators, vector-valued Laplace transform) are largely well known with some of the ideas dating back to the early 1960s, see e.g. [35], [26] for the idea of interpolation/extrapolation spaces. Therefore, it has been and still is surprising to us that the full power of these concepts, which we utilize in this approach, has not been previously exploited to the extent we have found so useful. The differences are somewhat subtle and a more superficial reader may fail to appreciate how different our perspective on the theory of linear partial differential equations is. Indeed, it is this perspective on our approach which may be considered the most innovative feature of this monograph. In contrast to many other books, which are either focusing on specific types of partial differential equations or on a collection of tools for solving a variety of problems associated with various specific linear partial differential equations, we are attempting to assume a more global point of view on the issues involved. Our approach can be classified as a functional analytical one, but this says very little, since nowadays it is the accepted standard to employ functional analytic language to formulate PDE problems. It may, however, come as a surprise that a Hilbert space setting is sufficiently general to cover the core issues of solving PDE problems. We focus on the case of linear partial differential equations, which is of course in one way a severe constraint, but given that by a rule of thumb non-linear problems, if they are at all well-posed, are frequently solvable by using a priori estimates and a fixed
viii
Preface
point argument based on perturbations of the linear theory, we see the restriction to linear partial differential equations more as foundation laying rather than an exclusion of non-linear issues. A natural guideline for approaching problem solving is provided by Hadamard’s celebrated criteria for well-posedness:1 I Uniqueness: there is at most one solution, I Existence: there is at least one solution (at least for a dense set of data), I Continuous Dependence: the dependence of the solution on the data is locally (or weakly locally) uniformly continuous. Compared to these fundamental requirements, qualitative properties of a solution are a secondary consideration. This remark applies in particular to the issue of regularity in connection with solving linear partial differential equations. The historical focus on regularity issues has fostered a number of guiding ideas, which in the light of our approach appear as occasionally misleading. Among these are the notions that I partial differential equations are best classified according to the regularizing properties of their solution operators, I one type of space should be used for solving “all” problems associated with partial differential equations, I problems involving elliptic partial differential equations are “easier” than those involving parabolic partial differential equations, which are again “easier” than those involving hyperbolic partial differential equations. The systematic approach presented here will shed a different and hopefully more illuminating light on these and other issues by I proposing a different classification scheme, I advocating the construction of “tailor-made” distribution spaces adapted to the particular equation at hand, 1
These requirements are straightforwardly illustrated, if we consider the “problem” of finding a solution of the equation F .x/ D f , where F denotes a mapping between – say – metric spaces. The properties are that F must be injective, with dense range and with F 1 being (at least weakly) locally uniformly continuous. We note in particular that the (weakly) local uniform continuity of F 1 allows the extension of F to its closure F given by F .x/ WD lim F .y/ for any sequence y converging to x such that F .y/ also converges. Indeed, for two sequences y .k/ , k D 0; 1, converging to x such that F .y .k/ /, k D 0; 1, are also convergent, we have that F .y .0/ /, F .y .1/ / must converge to the same limit, so that F is well-defined, i.e. F is closable. Weakly local uniform continuity is indeed characterized by mapping Cauchy sequences to Cauchy sequences.
Preface
ix
I showing that from our point of view parabolic and hyperbolic partial differential equations are, in a way, “easier” than elliptic partial differential equations. It will also be seen that our general framework is sufficiently powerful to be applied to general evolution equations. However, to flesh out the ideas presented, we shall illustrate by examples and consider particular systems of partial differential equations from mathematical physics. Accordingly, the book is divided into several natural parts. In Chapter 1 we supply some additional material on functional analysis2 in Hilbert space which may be difficult to find elsewhere. Chapter 2 introduces the idea of what we shall call Sobolev lattices. In Chapter 3, as a first application, we consider partial differential equations with constant coefficients in RnC1 , n 2 N. The results are extended to tempered distributions (set in a suitably extended Sobolev lattice). Then in Chapter 4 the ideas presented are transferred to a more general framework covering a large class of abstract evolution equations. In Chapter 5 this general setting is exemplified by applications to a variety of initial-boundary value problems from mathematical physics. The concluding Chapter 6 offers a new approach to initial boundary value problems by expanding on the ideas and concepts presented. The material is based on lecture notes developed for introductory and advanced graduate level courses on partial differential equations and functional analysis given by the first author over the past three decades at the Rheinische Friedrich-WilhelmsUniversität at Bonn, Germany, at the University of Wisconsin-Milwaukee, Wisconsin, USA, and at the Technische Universität Dresden, Germany, and on a series of lectures given at the Strathclyde University, Glasgow, Scotland, UK. This development from lecture notes has led to proofs being given in more detail than one might normally expect in a monograph. This may slow the readers progress, but it should, however, make the text not only useful as a resource for courses on the topic but also make it suitable as a text for a reading course or for self-study. Apart from the novel approach the material presented in this monograph may in many ways be considered elementary, however, researchers will nevertheless find new results for particular evolutionary system from mathematical physics in later parts of this monograph as well as a very different perspective on seemingly familiar evolutionary problems. This book has been in preparation for a number of years and many colleagues have contributed to the effort either directly through discussion and comment on research papers or at seminar or indirectly through general support and encouragement. The first of us (RP) would like to particularly acknowledge the hospitality of the Department and Mathematics and Statistics, University of Strathclyde, that he has enjoyed first as a young Visiting Researcher and more recently as a Visiting Professor, while DM would mention all colleagues, past and present, in the Department and Mathematics and Statistics, University of Strathclyde, but particularly Adam McBride, Wilson 2
We have chosen to assume that the reader is familiar with basic functional analysis in Hilbert space.
x
Preface
Lamb and Mike Grinfeld of the Applied Analysis Group. Both of us would like to thank Rolf Leis and Gary Roach for encouragement and support throughout our careers. Finally, we both wish to acknowledge the love and support of our wives, Brigitte and Alison, and families – their patience and understanding (of the effort required if not the mathematics itself!) has been necessary, and we hope sufficient, to see this task to a successful conclusion. Dresden / Glasgow, January 2011
Rainer Picard, Des McGhee
Contents
Preface
vii
Nomenclature
xv
1
. . . . .
1 1 2 3 15 19
2
Sobolev Lattices 2.1 Sobolev Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Sobolev Lattices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Sobolev Lattices from Tensor Products of Sobolev Chains . . . . . . .
30 30 56 65
3
Linear Partial Differential Equations with Constant Coefficients 3.1 Partial Differential Equations in H1 .@ C e/ . . . . . . . . . . . . 3.1.1 First Steps Towards a Solution Theory . . . . . . . . . . . . . 3.1.2 The Tarski–Seidenberg Theorem and some Consequences . . 3.1.3 Regularity Loss .0; : : : ; 0/ . . . . . . . . . . . . . . . . . . . 3.1.4 Classification of Partial Differential Equations . . . . . . . . 3.1.5 The Classical Classification of Partial Differential Equations . 3.1.6 Elliptic, Parabolic, Hyperbolic? . . . . . . . . . . . . . . . . 3.1.7 Evolutionary Expressions in Canonical Form . . . . . . . . . 3.1.8 Functions of @ and Convolutions . . . . . . . . . . . . . . . 3.1.9 Systems and Scalar Equations . . . . . . . . . . . . . . . . . 3.1.10 Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.11 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . 3.1.12 Some Applications to Linear Partial Differential Equations of Mathematical Physics . . . . . . . . . . . . . . . . . . . . . 3.1.12.1 Transport Equation . . . . . . . . . . . . . . . . . 3.1.12.2 Acoustics . . . . . . . . . . . . . . . . . . . . . . 3.1.12.3 Thermodynamics . . . . . . . . . . . . . . . . . . 3.1.12.4 Electrodynamics . . . . . . . . . . . . . . . . . . . 3.1.12.5 Elastodynamics . . . . . . . . . . . . . . . . . . .
72 72 72 76 89 90 94 104 107 114 121 125 142
Elements of Hilbert Space Theory 1.1 Hilbert Space . . . . . . . . . . . . . . . . . . 1.2 Some Construction Principles of Hilbert Spaces 1.2.1 Direct Sums of Hilbert Spaces . . . . . 1.2.2 Dual Spaces . . . . . . . . . . . . . . . 1.2.3 Tensor Products of Hilbert Spaces . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
156 156 160 170 171 181
xii
Contents
3.2
4
3.1.12.6 Fluid Dynamics . . . . . . . . . . . . . . . . . . . 3.1.12.7 Quantum Mechanics . . . . . . . . . . . . . . . . . Partial Differential Equations in H1 .jD j/ . . . . . . . . . . . . . 3.2.1 Extension of the Solution Theory to H1 .jD j/. . . . . . . . 3.2.2 Some Applications to Linear Partial Differential Equations of Mathematical Physics . . . . . . . . . . . . . . . . . . . . . 3.2.2.1 Helmholtz Equation in R3 . . . . . . . . . . . . . . 3.2.2.2 Helmholtz Equation in R2 . . . . . . . . . . . . . . 3.2.2.3 Cauchy–Riemann Operator . . . . . . . . . . . . . 3.2.2.4 Wave Equation in R2 (Method of Descent) . . . . . 3.2.2.5 Plane Waves . . . . . . . . . . . . . . . . . . . . . 3.2.2.6 Linearized Navier–Stokes Equations . . . . . . . . 3.2.2.7 Electro- and Magnetostatics . . . . . . . . . . . . . 3.2.2.8 Force-free Magnetic Fields . . . . . . . . . . . . . 3.2.2.9 Beltrami Field Expansions . . . . . . . . . . . . . 3.2.3 Convolutions in H1 .jD j/; 2 RnC1 . . . . . . . . . . . . 3.2.4 Integral Representations of Convolutions with Fundamental Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.4.1 An Integral Representation of the Solution of the Transport Equation . . . . . . . . . . . . . . . . . 3.2.4.2 Potentials, Single and Double Layers in R3 . . . . . 3.2.4.3 Electro- and Magnetostatics (Biot–Savart’s Law) . . 3.2.4.4 Potential Theory in R2 . . . . . . . . . . . . . . . 3.2.4.5 Cauchy’s Integral Formula . . . . . . . . . . . . . 3.2.4.6 Integral Representations of Solutions of the Helmholtz Equation in R3 . . . . . . . . . . . . . . 3.2.4.7 Retarded Potentials . . . . . . . . . . . . . . . . . 3.2.4.8 Integral Representations of Solutions of the TimeHarmonic Maxwell Equations . . . . . . . . . . . .
Linear Evolution Equations 4.1 Linear Operator Equations in Sobolev Lattices . . . . . . . . . . . . . 4.1.1 Polynomials of Commuting Operators . . . . . . . . . . . . . 4.1.2 Polynomials of Commuting, Selfadjoint Operators . . . . . . 4.2 Evolution Equations with Polynomials of Operators as Coefficients . . 4.2.1 Classification of Operator Polynomials with Time Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Causality of Evolutionary Problems . . . . . . . . . . . . . . 4.2.3 Abstract Initial Value Problems . . . . . . . . . . . . . . . . 4.2.4 Systems and Scalar Equations . . . . . . . . . . . . . . . . . 4.2.5 First-Order-in-Time Evolution Equations in Sobolev Lattices .
195 198 205 205 223 223 226 229 229 232 233 235 236 238 243 248 248 249 275 278 281 285 296 297 303 303 303 304 305 305 308 316 329 334
Contents 5
6
xiii
Some Evolution Equations of Mathematical Physics 5.1 Schrödinger Type Equations . . . . . . . . . . . . . . . . . . . . . 5.1.1 The Selfadjoint Laplace Operator . . . . . . . . . . . . . . 5.1.2 Some Perturbations . . . . . . . . . . . . . . . . . . . . . . 5.1.2.1 Bounded Perturbations . . . . . . . . . . . . . . 5.1.2.2 Relatively Bounded Perturbations (the Coulomb Potential) . . . . . . . . . . . . . . . . . . . . . . 5.2 Heat Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2.1 The Selfadjoint Operator Case . . . . . . . . . . . . . . . . 5.2.1.1 Prescribed Dirichlet and Neumann Boundary Data 5.2.1.2 Transmission Initial Boundary Value Problem . . 5.2.2 Stefan Boundary Condition . . . . . . . . . . . . . . . . . . 5.2.3 Lower Order Perturbations . . . . . . . . . . . . . . . . . . 5.3 Acoustics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 Dirichlet and Neumann Boundary Condition . . . . . . . . 5.3.2 Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . 5.3.3 Reversible Heat Transport . . . . . . . . . . . . . . . . . . 5.4 Electrodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 The Electric Boundary Condition . . . . . . . . . . . . . . 5.4.2 Some Decomposition Results . . . . . . . . . . . . . . . . 5.4.3 The Extended Maxwell System . . . . . . . . . . . . . . . 5.4.4 The Vectorial Wave Equation for the Electromagnetic Field . 5.5 Elastodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.1 The Rigid Boundary Condition . . . . . . . . . . . . . . . . 5.5.2 Free Boundary Condition . . . . . . . . . . . . . . . . . . . 5.5.3 Shear and Pressure Waves . . . . . . . . . . . . . . . . . . 5.6 Plate Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Thermo-Elasticity . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . .
338 338 338 342 342
. . . . . . . . . . . . . . . . . . . . . .
346 349 350 357 366 369 371 372 374 377 382 385 387 391 394 400 407 409 412 413 415 420
A “Royal Road” to Initial Boundary Value Problems 6.1 A Class of Evolutionary Material Laws . . . . . . . . . . . . . 6.2 Evolutionary Dynamics and Material Laws . . . . . . . . . . . 6.2.1 The Shape of Evolutionary Problems with Material Laws 6.2.2 Some Special Cases . . . . . . . . . . . . . . . . . . . 6.2.3 Material Laws via Differential Equations . . . . . . . . 6.2.4 Coupled Systems . . . . . . . . . . . . . . . . . . . . . 6.2.5 Initial Value Problems . . . . . . . . . . . . . . . . . . 6.2.6 Memory Problems . . . . . . . . . . . . . . . . . . . . 6.3 Some Applications . . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 Reversible Heat Transfer . . . . . . . . . . . . . . . . . 6.3.2 Models of Thermoelasticity . . . . . . . . . . . . . . . 6.3.3 Thermo-Piezo-Electro-Magnetism . . . . . . . . . . . .
. . . . . . . . . . . .
426 427 431 431 437 440 441 443 447 453 454 455 457
. . . . . . . . . . . .
. . . . . . . . . . . .
xiv
Contents
Conclusion
459
Bibliography
461
Index
465
Nomenclature
!
mapping, as in A ! B
!
strong convergence, as in xn ! x1 for the convergence of a sequence .xn /n2N to its limit x1 in the norm topology
*
weak convergence, as in xn * x1 for the weak convergence of a sequence .xn /n2N to its weak limit x1 , i.e. in the weak topology
^
logical “and”
_ V
logical (non-exclusive) “or”
W
n!1
n!1
:::
for all : : : / for every : : :
:::
there is : : : / there exists : : :
7!
maps to, as in x 7! x 2
AN
closure of A
A
adjoint relation or mapping to a relation or mapping A
AV
closure of a closable operator A restricted to elements in CV 1 ./ for some open subset RnC1 , n 2 N
cof.A/
co-factor matrix associated with a square matrix A
cof.A/>
transposed co-factor matrix associated with a square matrix A, adjunct matrix
z
complex conjugate of a complex number z
L.H; H /
continuous linear mappings on H
CV 1 ./
space of smooth functions with compact support contained in the open subset RnC1 , n 2 N
@ b @
derivatives .@0 ; @1 ; : : : ; @n / in R1Cn , n 2 N
@0 b @
time derivative in R1Cn , n 2 N
b @ b @> ˚
spatial derivatives .@1 ; @2 ; : : : ; @n / in R1Cn with @0 denoting the time derivative, n 2 N same as grad same as curl same as div direct sum, orthogonal sum
xvi L
Nomenclature direct summation sign or orthogonal summation sign as in
L t2M
Ht
Div
divergence of .1; 1/-tensor fields
2
element sign as in x 2 C
2
element function as in x D2 .¹xº/ giving the element of a set containing only one element
L
Fourier–Laplace transform with parameter 2 RnC1
L temporal Fourier–Laplace transform with parameter 2 R b f . i / same as L f Grad
symmetric part of the covariant derivative of a vector field
H;k
short for Hk .@ C /, k 2 Z
H;0;0
short for H;0 ˝ H
HL
Hardy–Lebesgue space
i
imaginary unit
h j iX
inner product of the inner product space X
E 1=2 ŒX
inner product space derived from the inner product space X by modifying the inner product h j i X to h j E i X , where E W X ! X is continuous, linear, symmetric and strictly positive definite
dre
smallest integer greater than or equal to the real number r (roof)
brc T
largest integer less than or equal to the real number r (integer part, floor) T T V big intersection symbol, as in M D ¹yj X2M y 2 X º or X2M X
spatial Laplacian, same as b @2 or jb @j
2 jb @j
negative spatial Laplacian, same as or b @2
b @2
spatial Laplacian, same as or jb @j
dsx
line element at a point x
ds
line element
Lin
linear hull, as in LinK A, the smallest linear space over the field K containing the set A
A
the relation ¹.a; b/j .a; b/ 2 Aº with A X Y
ŒA
the set of all negatives of elements in the set A
Œ¹0ºf
null space or kernel of a mapping or function f
N.f /
null space or kernel of a mapping or function f
C
field or set of complex numbers
Re
real part
2
2
Nomenclature
xvii
Im
imaginary part
K
field or set of numbers (either K D R or K D C)
N
monoid or set of natural numbers ¹0; 1; 2; : : :º
R
field or set of real numbers
Z
group or set of integers
e
vector .1; 1; : : : ; 1/
kAk
operator norm of a linear operator A between normed linear spaces
kAkX!Y
operator norm of a linear operator A W X ! Y , X; Y normed linear spaces
?
orthogonal, as in x ? y
M?
ortho-complement of M ,
AŒM
the post-set or image of M with respect to a mapping or function A
AŒM
the post-set or co-domain of M with respect to the binary relation A
AŒX
the post-set of X of a relation A X Y , co-domain, range or image of the mapping or function A
R.A/
range of the mapping or function A
2B
the power set of B
AB
the set of (left-total) mappings from B into A
ŒM A
the pre-image of M with respect to a mapping or function A
ŒM A
the pre-set of M with respect to a binary relation A
ŒY A
the pre-set of a relation A, the pre-image of Y with respect to a mapping or function A or the domain of A
D.A/
the domain of a mapping or function A
Cartesian product as in X Y or Cartesian multiplication sign as in s2M Hx , where X; Y; M; Hs are sets, t 2 M
vector product in R3 as in x y, where x; y 2 R3
PC
orthogonal projector onto the closed subspace C
%.A/
resolvent set of operator A
%1 .A/ ˇ Aˇ
Sobolev lattice resolvent set of A
M
Aˇrestricted to M for a mapping A W D.A/ X ! Y , i.e. the mapping AˇM W D.A/ \ M X ! Y where x 7! A.x/
RH
Riesz mapping, which unitarily maps H onto H
.A/
spectrum of operator A
C .A/
continuous spectrum of A
xviii
Nomenclature
P .A/
point spectrum of A
R.A/
residual spectrum of A
P 1 .A/ Sobolev lattice point spectrum of A R1 .A/ Sobolev lattice residual spectrum of A 1 .A/
Sobolev lattice spectrum of A
C 1 .A/ Sobolev lattice continuous spectrum of A supp
support
supp0
support in direction 0
supp0
temporal support
dSx
surface element at x
dS
surface element
a
˝
algebraic tensor product
˝ S
tensor product big union symbol, as in
S
M D ¹yj
W X2M
y 2 X º or
grad
vector analytic differential operator grad, gradient
curl
vector analytic differential operator curl, curl
div
vector analytic differential operator div, divergence
d Vx
volume element at x
dV
volume element
S X2M
X
Chapter 1
Elements of Hilbert Space Theory
We shall assume that the reader is familiar with major Hilbert space concepts (see e.g. [46, 6, 51, 2] as general references). For later parts of this monograph this includes the spectral theorem for unbounded selfadjoint operators and even finite commuting families of such operators, [10, 42]. However, in order to introduce particular notations and to refresh the readers memory, we shall start by briefly recapping some of the basic ideas.
1.1
Hilbert Space
Definition 1.1.1. An inner product space is an ordered pair X D .L; h j i/ consisting of a linear space L D .M; C; .˛/˛2K / over the field of real numbers K D R or the field of complex numbers K D C (with .x; y/ 7! x C y being the addition in L and x 7! ˛ x the multiplication by the scalar ˛ 2 K) and a functional h j i W M M ! K; .x; y/ 7! hxjyi called the inner product1 , which satisfies (a) for all x 2 M :
hxjxi 2 R0
.WD ¹˛ 2 R j ˛ 0º/;
(b) hxj i W M ! K, is a linear functional2 for every fixed x 2 M , (c) for all x; y 2 M :
hxjyi D hyjxi ;
where : : : denotes complex conjugation, (d) for all x 2 M : hxjxi D 0 ) x D 0 2 L: It is customary not to distinguish notationally between the set M , the linear space L and the inner-product space X thus allowing to speak of the elements of M as elements of the inner product space X or to say that X is a linear space. For easier 1 If only properties (a)–(c) are required, then X is called a semi-inner-product space and h j i is referred to as a semi-inner-product. 2 Note that thus we have chosen the inner product to be linear in the second factor.
2
Chapter 1 Elements of Hilbert Space Theory
reference the (semi-)inner product of an (semi-)inner-product space X will be denoted by h j iX . So, let X be an inner product space. Then the functional p x 7! hxjxiX defines a norm on X (or semi-norm, if property (d) is not required). The crucial triangle inequality follows from the Cauchy–Schwarz inequality p p jhxjyiX j hxjxiX hyjyiX for all x; y 2 X . The latter can be easily seen from the non-negativity of the determinant of the (non-negative) Gramian matrix function hxjxiX hxjyiX .x; y/ 7! hyjxiX hyjyiX associated with the inner product. As a matter of convenience we shall denote the associated (semi-)norm by j jX . Definition 1.1.2. Let X be an inner product space. If X is complete with respect to the norm j jX , then we say that X is a Hilbert space. A standard procedure of constructing Hilbert spaces is by completion of an innerproduct space. Therefore an inner-product space is also known as a pre-Hilbert space. As a prominent example we may consider L2 .R/ as the completion of the space of step functions, i.e. the linear span LinC ¹I j I bounded interval in Rº of characteristic functions I of bounded, left-closed intervals I in R, with respect to the norm induced by the inner product Z .f; g/ 7!
f .t / g.t / dt: R
Lebesgue integration theory shows that L2 .R/ may be identified with (equivalence) classes of square-integrable functions (coinciding pair-wise almost everywhere, compare e.g. the appendix in [50]).
1.2
Some Construction Principles of Hilbert Spaces
A closed subspace of a Hilbert space is also a Hilbert space. Beyond this trivial observation there are three main constructions for obtaining new Hilbert spaces from given Hilbert spaces, which we shall consider more closely: first the direct sum ˚, then the construction of Hilbert spaces based on duality and finally the tensor product ˝ of Hilbert spaces.
Section 1.2 Some Construction Principles of Hilbert Spaces
3
1.2.1 Direct Sums of Hilbert Spaces Direct sums of Hilbert spaces play an important role for the discussion of systems of equations and in particular for understanding linear operators in Hilbert spaces. Definition 1.2.1. Let H S denote the class of all real or complex Hilbert spaces (that is with scalar field K equal to R or C respectively) and let H. / W M ! H S, x 7! Hx , be a mapping, where M is a given set. Then the Cartesian product3 x2M Hx is also a set and naturally gives rise to a linear space by defining the component-wise linear structure .˛ v C w/.t / D ˛ v.t / C w.t / 2 H t
for all t 2 M; ˛ 2 K; v; w 2 Hx : x2M
Let W be the subset of all w 2 x2M Hx such that w.t / D 0 2 H t for all t 2 M but one. Then LinK W is a linear subspace of the Cartesian product space and equipped with the inner product hvjwi WD
X
hv.t / j w.t /iH t
for all v; w 2 LinK W
(1.2.1)
t2M
is a pre-Hilbert space. The completion of this pre-Hilbert space is called the direct sum of H. / , denoted by M Hx : x2M
If M D ¹0; : : : ; nº; n 2 N, N denoting the set of natural numbers (starting with 0), we also write H0 ˚ ˚ Hn L or tD0;:::;n H t or we use a suggestive column matrix notation 0
1 H0 : @ :: A Hn 3
Recall that the Cartesian product ˇ ^ ± ° [ ˇ Hx ˇ f .x/ 2 Hx : Hx WD f W M ! x2M
x2M
x2M
V W We use here and in the following occasionally the standard quantifiers ::: (“for every : : :”) and ::: (“there is a (at least one) : : :”) with possibly additional constraints. T S We prefer these to the more frequently used 8 : : : and 9 : : : because of their analogy to and and also to the logical symbols ^ (“and”) and _ (“or”), respectively. The latter symbols will also be occasionally used in composite mathematical expressions.
4
Chapter 1 Elements of Hilbert Space Theory
for the direct sum of .H0 ; : : : ; Hn /. The elements .xi /iD0;:::;n .x0 ; : : : ; xn / of such a finite direct sum will also be denoted by x0 ˚ ˚ xn or, in correspondence to the column matrix notation for the direct sum, by a column matrix 0 1 x0 B :: C @ : A: xn Remark 1.2.2. The linearity of the structure is easily checked. Note also that the sum in (1.2.1) is indeed a finite sum, since except for finitely many terms the inner products are zero. If M is a finite set then the pre-Hilbert space constructed here is already complete. The simplicity of the construction ofL such a direct sum is reflected in the simplicity of obtaining an orthonormal set4 in x2M Hx from orthonormal sets in the spaces H t ; t 2 M . Indeed, if ! t is an orthonormal set in Hilbert space H t , t 2 M , then the set Lof all ! 2 x2M !x with !.t / D 0 for all but one t 2 M is an orthonormal set in x2M Hx . Moreover, if all !L t are complete orthonormal sets in Hilbert space H t for t 2 M then ! is complete in x2M Hx . Example 1.1. C nC1 with the Euclidean inner product is a direct sum of n copies of C (as Hilbert space with inner product .x; y/ 7! x y): C nC1 D
M
C D `2 .¹0; : : : ; nº; C/:
tD0;:::;n
The spaces `2 .M; C/ WD
M
C;
M an arbitrary set;
t2M
are frequently used as model spaces. As a first application of the concept of a direct sum we recall the following variant of the projection theorem. 4
An orthonormal set ! is a subset of an inner-product space X with hxjxi D 1
for all x 2 ! and hxjyi D 0 for all x; y 2 ! with x ¤ y. Such an orthonormal set is called complete if its linear span LinK ! is dense in X , i.e. the closure of its linear span equals X : LinK ! D X:
Section 1.2 Some Construction Principles of Hilbert Spaces
5
Theorem 1.2.3. Let H be a real or complex Hilbert space and C a closed subspace of H . Then we have H D C ˚ C?
(1.2.2)
in the sense of unitary equivalence. Here ˇ ^ ° ± ˇ hxjyiH D 0 : C ? WD x 2 H ˇ y2C
The unitary equivalence expressed in (1.2.2) allows us to speak of an orthogonal decomposition of H . Indeed, we have that every x 2 H can be written in a unique way as x D u C v with u 2 C and v 2 C ? . The mapping PC W H ! H; x 7! PC .x/;
(1.2.3)
where PC .x/ is the element of the singleton ¹u 2 C j x u 2 C ? º, i.e.5 PC .x/ WD 2.¹u 2 C j x u 2 C ? º/; is a well-defined linear mapping called the orthogonal projector associated with C . Writing 1H or simply 1 for the identity mapping in H , we see that6 I C is the range7 PC ŒH of PC I C ? is the range .1 PC /ŒH of .1 PC / and I PC ı PC D PC . 5 As for the notation we recall that 2 is a relation between classes. The notation 2. / indicates, however, that 2 should here be understood as a mapping, which is only possible if the arguments are sets with a single element, i.e. a singleton. So, the domain of M 7! 2.M / is the class of all singletons and the value 2.M / is the element of the singleton M :
¹xº 7! 2.¹xº/ WD x: 6
In general an orthogonal projector is defined in terms of these three properties. For a binary relation R (or a binary correspondence .R; X Y /, where R is a relation in X Y , i.e. a subset of X Y ), we define the pre-set of a set M under the relation R as ° ˇ _ ± ˇ ŒM R WD x ˇ y2M 7
.x;y/2R
6
Chapter 1 Elements of Hilbert Space Theory
Example 1.2. Consider the direct sum of Hilbert spaces M Ht : H D t2M
Then the mappings Hs W H ! H; x 7! Hs .x/ with
´ x.s/ .Hs .x//.t / WD 0
for t D s; otherwise
are orthogonal projectors for all s 2 M . Recalling that .Hs .x//.s/ D x.s/ 2 Hs , we see that Hs ŒH can (and occasionally will) be identified with the (tagged) Hilbert space Hs , s 2 M . We note the following important consequence of the projection Theorem 1.2.3. Theorem 1.2.4. Let H be a Hilbert space with scalar field K and M H a subset. Then LinK M D M ?? : and the post-set of a set N under the relation R as ° ˇ ˇ RŒN WD y ˇ
_
± x2N :
.x;y/2R
V
We recall that if R is right-unique, i.e. x;y;z ..x; y/; .x; z/ 2 R H) y D z/, then R is called a function. A correspondence .R; X Y / with R right-unique is commonly referred to as a mapping, more commonly written as R W ŒY R X ! Y; x 7! R.x/; where as a consequence R.x/ WD 2.¹yj.x; y/ 2 Rº/ is well-defined. Here ŒY R is called the domain of the mapping R, sometimes also written as D.R/. The post-set RŒX of X under the right-unique R is referred to as the range of the function or mapping. It is customary although occasionally confusing to make little distinction between functions and mappings. A small part of the resulting confusion is that some authors speak of multi-valued mappings, when they should say relations or correspondences. Since mathematics strictly speaking only deals with sets (even “elements” are sets in the framework of the Zermelo–Fraenkel set theory), to speak of set-valued mappings is also not too helpful. What is meant and could easily be said is that in a canonical way every correspondence .R; X Y / gives rise to a mapping ŒY R X ! 2Y ; x 7! RŒ¹xº; which is frequently identified with the correspondence .R; X Y /.
Section 1.2 Some Construction Principles of Hilbert Spaces
7
Moreover, if N M then M ? ˚ .M ?? \ N ? / D N ? : Let .xn /n be a sequence in a Hilbert space H . One says .xn /n converges weakly to n!1
x1 in H , in symbols xn * x1 in H , if ^ hujxn iH ! hujx1 iH
as n ! 1:
u2H
The usual convergence in norm is also referred to as strong convergence. That weak convergence is really a weaker concept than strong convergence is implied by the Cauchy–Schwarz inequality jhujxn x1 iH j jujH jxn x1 jH which clearly shows that strong convergence implies weak convergence. Orthonormal sequences provide examples of not strongly convergent sequences which are, however, weakly convergent to zero. Despite this apparent difference in strength, the above result can be seen to yield the following characterization of the closure of a subspace, which contains the nucleus of the issue of “weak versus strong”. Proposition 1.2.5. Let M be a subspace of the Hilbert space H . Then ˇ ° ± _ n!1 ˇ MN D x1 2 H ˇ xn * x1 : .xn /n 2M N
Remark 1.2.6. This result states in the obvious terminology that the strong closure of M is equal to the weak closure of M . Proof. Since weak convergence follows from strong convergence, we obviously have ˇ ° ± _ n!1 ˇ MN x1 2 H ˇ .xn * x1 / : .xn /n 2M N
So, let x1 2 H be such that there is a sequence8 .xn /n 2 M N with n!1
xn * x1 : Since hujxn iH D 0 for all n 2 N and u 2 M ? , we also see hujx1 iH D 0 for all u 2 M ? , that is x1 2 M ?? D MN : 8 In general, the class of mappings defined on a class A and values in a class B is suggestively written as B A . Sequences are mappings with domain N, the natural numbers, which we assume to begin with zero (as suggested by there construction based on the empty set). Sequences in M are therefore elements of M N .
8
Chapter 1 Elements of Hilbert Space Theory
The decomposition argument employed here can also be utilized to obtain the following weak compactness result. Proposition 1.2.7. Let .xn /n 2 H N be a bounded sequence in a Hilbert space H . Then there is a subsequence .e x n /n 2 H N such that n!1
e x n * x1 for some x1 2 H . Proof. As argued above, for u 2 ¹xn j n 2 Nº? we have hujxn iH D 0 for all n 2 N and so it suffices to consider the subspace H0 WD LinK ¹xn j n 2 Nº in place of the Hilbert space H . Note that .xn /n is a bounded sequence in H0 and if the weak limit x1 exists it must also be in H0 . By virtue of the Weierstrass selection theorem .k/ .k1/ /n such that of calculus we now obtain recursively subsequences .xn /n of .xn .k/ .0/ .k/ .hxk jxn iH /n converges in C, k 2 N, where .xn /n D .xn /n . Obviously, .xn /n is a subsequence of .xn /n for all k 2 N. We also see that by our taking subsequences .k/ consecutively we even have that .hxs jxn iH /n converges for all s k. We claim that .n/ the diagonal sequence .xn /n has the desired property. Clearly, the diagonal sequence .n/ is also a subsequence of .xn /n . Moreover, .xn /nDk;.kC1/;::: yields a subsequence .k/ .n/ of .xn /n2N for all k 2 N. Therefore, .hxs jxn iH /n converges for all s 2 N. By Gram–Schmidt orthonormalisation (ignoring elements of the sequence linearly dependent on earlier elements) we obtain a countable complete orthonormal set o0 for H0 . Let .un /n be an enumeration of o0 . By construction, uN is just a linear .n/ combination of .xn /nD1;:::;N for all N 2 N, so we have that .hus jxn iH /n converges for all s 2 N. We are therefore led to define x1 WD
1 X sD1
lim hus jxn.n/ iH us
n!1
which is well-defined because v u1 uX .n/ t jhus jxn iH j2 D jxn.n/ jH sup¹:jxn jH jn 2 Nº DW c0 < 1: sD1
Note that this also implies jx1 jH c0 . Moreover, we find for any uD
1 X sD1
hus juiH us 2 H
Section 1.2 Some Construction Principles of Hilbert Spaces
9
that N ˇD X ˇ E ˇ ˇ ˇ ˇ jhu j xn.n/ x1 iH j ˇ hus juiH us ˇ xn.n/ x1 ˇ H
sD1
1 ˇ ˇD X E ˇ ˇ ˇ ˇ hus juiH us ˇ xn.n/ x1 ˇ Cˇ sDN C1
H
N ˇ ˇX ˇ ˇ hus juiH h us j xn.n/ x1 iH ˇ ˇ sD1
v u 1 u X jhus juiH j2 ; C 2 c0 t sDN C1
and we see that the last term can be made small independently of n by choosing N sufficiently large. With fixed N the first term then can be made small by making n sufficiently large. This is the desired convergence statement. Remark 1.2.8. The result can be restated as saying that closed, bounded sets are weakly (sequentially) compact, in the sense that every sequence in such a set contains a weakly convergent subsequence with weak limit in the set. In contrast, all closed, bounded sets in a Hilbert space H are compact if and only if the closed unit ball is compact, which in turn is equivalent to H being finite-dimensional. We have already mentioned the construction of Hilbert spaces as closed subspaces of Hilbert spaces. In connection with the direct sum of Hilbert spaces and in accordance with earlier remarks we make the following definition about subspaces of a direct sum of Hilbert spaces. It will often be convenient and intuitive to think of mappings, and in particular linear mappings between Hilbert spaces H0 and H1 in the way they were originally made precise in the framework of set theory, namely as particular subsets of the Cartesian product H0 H1 or rather of the direct sum H0 ˚H1 . In this sense, a subset A of H0 ˚ H1 is a (binary) relation between H0 and H1 . If in addition A is right-unique, i.e. ^ .x ˚ y0 ; x ˚ y1 2 A ) y0 D y1 /; x2H0 ; y0 ;y1 2H1
then A defines a mapping9 A W D.A/ H0 ! H1 ; x 7! A.x/; 9 The original subset A is then occasionally referred to as the graph of this mapping for which we use the same name A.
10
Chapter 1 Elements of Hilbert Space Theory
where
ˇ _ ° ± ˇ D.A/ WD x 2 H0 ˇ x˚y 2A y2H1
and A.x/ is the element of the singleton ¹y 2 H1 j x ˚ y 2 Aº, in symbols A.x/ WD 2.¹y 2 H1 j x ˚ y 2 Aº/: If a relation A is a linear subspace of H0 ˚ H1 , i.e. ^ .x0 ˚ x1 C ˛ y0 ˚ y1 D .x0 C ˛ y0 / ˚ .x1 C ˛ y1 / 2 A/; x0 ˚x1 ;y0 ˚y1 2A; ˛2K
then A is also called a linear relation. For a relation A in H0 ˚ H1 we write ˇ _ ° ± ˇ AŒ WD y 2 H1 ˇ x˚y 2A ; x2
ˇ _ ± ˇ Œ‚A WD x 2 H0 ˇ x ˚ y 2 A D A1 Œ‚; °
y2‚
where A1 denotes the inverse relation A1 WD ¹y ˚ x 2 H1 ˚ H0 j x ˚ y 2 Aº to A. AŒ is called the post-set of under the relation A and Œ‚A is called the pre-set of ‚ under the relation A. AŒH0 is simply called the post-set of A and ŒH1 A the pre-set of A. If A is a mapping then R.A/ WD AŒH0 is called the range of A and D.A/ D ŒH1 A is the domain of A. The set N.A/ WD Œ¹0ºA D A1 Œ¹0º is the null space of A. Note that A.x/ D 2.AŒ¹xº/: A right-unique linear relations is of course nothing but a linear mapping in the usual sense. Lemma 1.2.9. A right-unique relation A H0 ˚ H1 , Hi ; i D 0; 1, Hilbert spaces, is linear if and only if A W D.A/ H0 ! H1 is linear. Proof. The only thing to check is that linearity of a mapping is indeed the same as linearity of the subspace A. This becomes clear by observing that A.˛ x C y/ D ˛ Ax C Ay is the same as saying ˛ .x; Ax/ C .y; Ay/ D .˛ x C y; ˛ Ax C Ay/ 2 A:
Section 1.2 Some Construction Principles of Hilbert Spaces
11
This lemma yields the justification for calling a right-unique linear relation A a linear mapping or – synonymously – a linear operator. Remark 1.2.10. For linear mappings A we use the simplified multiplicative notation Ax WD A.x/ for x 2 D.A/: It should be noted that scalar multiplication ˛ as a mapping in a Hilbert space H is also a linear operator. With little risk of confusion we will therefore also in many instances simplify notation by letting ˛ x WD ˛ x
for all ˛ 2 K; x 2 H;
where K denotes the scalar field of H . Recall that for a mapping A W D.A/ H0 ! H1 the subset A H0 ˚H1 is often referred to as the graph of A. The restriction of the norm in H0 ˚H1 to A H0 ˚H1 is therefore also known as the graph norm j jD.A/ associated with the mapping A: ^
jxjD.A/ WD
q q 2 2 2 2 jxjH C jyj D jxjH C jA.x/jH D j.x; y/jH0 ˚H1 : H1 0 0 1
.x;y/2A
For a linear operator A the domain D.A/ of A is a linear subspace of H0 . A linear operator A is called a closed linear operator if A is a closed subspace of H0 ˚ H1 . A linear operator A is called a closable linear operator if A is a closed subspace of H0 ˚ H1 and a right-unique relation. In this case A is a closed linear operator. If a linear operator A is closed it is a Hilbert space (as a closed subspace of H0 ˚ H1 ). With the graph norm the domain D.A/ is also a Hilbert space, which clearly is unitarily equivalent via the bijective isometry D.A/ ! A; x 7! .x ˚ A.x// to A, where D.A/ is equipped with the graph norm j jD.A/ and A with the norm of H0 ˚ H1 . To establish a Hilbert space as the completion of D.A/ of some closable linear mapping A, is one of the standard procedures for constructing Hilbert spaces. Any dense subset of D.A/ in the sense of the graph norm of A is known as a core of A. In this sense, if A is closable that D.A/ is naturally a core for the closed linear N operator A. Sometimes we will have occasion to consider linear operators between finite direct sums of Hilbert spaces. In these casesLa matrix notation L is appropriate and convee niently suggestive. Let A W D.A/ H ! 7 k kD0;:::;m j D0;:::;n H j be a linear
12
Chapter 1 Elements of Hilbert Space Theory
operator. Define Aj k WD H ej A Hk ; j D 0; : : : ; n and k D 0; : : : ; m, with the notation of example (1.2). With the identification introduced there we can consider Aj k as the mapping ej ; Aj k W D.Aj k / Hk ! H x 7! Aj k .x/ with D.Aj k / WD Hk .D.A//, j D 0; : : : ; n and k D 0; : : : ; m. Now, if \ M D.Aj k / D.A/ D kD0;:::;m j D0;:::;n
holds, then for
we may write
0
1 x0 M B C x D @ ::: A 2 D.A/ Hk kD0;:::;m xm 0
10 1 A00 A0m x0 B :: : : : CB : C @ : : :: A @ :: A WD Ax: An0 Anm xm
Conversely, an operator matrix
0
1 A00 A0m B :: : : : C @ : : :: A An0 Anm L L e j with domain defines an operator from kD0;:::;m Hk to j D0;:::;n H M \ D.Aj k /: kD0;:::;m j D0;:::;n
In some cases it may actually be feasible to expand the matrix notation and to also arrange the argument in a more general matrix form to insinuate action of an operator matrix on a matrix of arguments of suitable size. Since this notation takes its intuition from elementary matrix algebra, we shall freely use this extension of notation without further explanation. As a first, simple application of this matrix notation we will recall the concept of complexification. Definition 1.2.11. Let H be a real Hilbert space. Then the complexification HC of H can be defined as H H ˚H D H
Section 1.2 Some Construction Principles of Hilbert Spaces but equipped with a modified scalar multiplication by ˛ 2 C defined by H H ! ; H H x0 x0 7! ˛ x1 x1 with
˛
x0 x1
WD
Re ˛ I m ˛ I m ˛ Re ˛
for
x0 x1
x0 x1
2
D
H H
.Re ˛/ x0 .I m ˛/ x1 .I m ˛/ x0 C .Re ˛/ x1
13
(1.2.4)
:
Remark 1.2.12. It is straightforward to confirm that the complexification HC is a complex Hilbert space. In this context it should be noted that any complex Hilbert space H can also be considered as a real Hilbert space by restricting the scalar field to R and using Re.hjiH / as inner product. Thus henceforth we restrict our attention to the case of complex Hilbert spaces. The corresponding reasoning for real Hilbert space is usually just a simplification of the arguments for complex Hilbert space. Returning to our view of a linear mapping A W D.A/ H0 ! H1 as a particular pre-Hilbert (sub)space of H0 ˚ H1 , we note that, by the projection theorem, we have the orthogonal decomposition H0 ˚ H1 D A ˚ A? for any closed linear relation A H0 ˚ H1 . The closed linear relation A? is of particular interest. Definition 1.2.13. Let A be a relation in H0 ˚ H1 . Then A WD . A? /1 will be called the adjoint relation. Here A D .1/A WD ¹.x; y/ 2 H0 ˚ H1 j .x; y/ 2 Aº: If A is a linear mapping, it is called the adjoint operator.
14
Chapter 1 Elements of Hilbert Space Theory
Remark 1.2.14. The reason for defining this peculiar looking combination of operations lies in the property that hxjviH0 D hyjuiH1
(1.2.5)
for all .x; y/ 2 A and .u; v/ 2 A . In this form, the ortho-complement is closer to the construction of adjoint operators (see later). Lemma 1.2.15. Let A be a relation in H0 ˚ H1 . Then .A1 /? D .A1 /? D .A? /1 D ..A/? /1 D .A? /1 :
(1.2.6)
In other words, the three operations involved in the definition of A commute. Proof. Let .x; y/ 2 .A1 /? H1 ˚ H0 , i.e. ^ h.x; y/j.u; v/iH1 ˚H0 D 0: .u;v/2A1
Then we have the following chain of equivalences .x; y/ 2 .A1 /? ^ , h.x; y/j.v; u/iH1 ˚H0 D h.x; y/j.v; u/iH1 ˚H0 D 0 .u;v/2A
,
^
h.y; x/j.u; v/iH0 ˚H1 D 0
.u;v/2A
, .y; x/ 2 A? , .x; y/ 2 .A? /1 : As a consequence we obtain the following result directly from Theorem 1.2.4. Theorem 1.2.16. Let H0 ˚ H1 be a direct sum of two (complex) Hilbert spaces and let A H0 ˚ H1 be a relation. Then LinC A D A : Proof. According to (1.2.6) and by definition of the adjoint relation we have A D ...A1 /? /1 /? D ...A?? /1 /1 / D A?? D LinC A: Here we have used .1/.1/ D 1 and that Theorem 1.2.4 holds.
Section 1.2 Some Construction Principles of Hilbert Spaces
15
Remark 1.2.17. Note that A is indeed a double ortho-complement. Therefore Theorem 1.2.4 applies. So, if A is a linear relation then its (strong) closure AN is also equal to its weak closure. If AN is a linear mapping then it can be also characterized as the N adjoint relation of A . Since the latter is referred to as a weak characterization of A, we have established that weak (closure) equals strong (closure). The projection theorem now gives the following useful decomposition result. Theorem 1.2.18. Let H0 ˚ H1 be a direct sum of two (complex) Hilbert spaces and A H0 ˚ H1 a closed, linear relation. Then we have the orthogonal decompositions H0 D Œ¹0ºA ˚ A ŒH1 and
H1 D Œ¹0ºA ˚ AŒH0 :
Proof. By a straightforward calculation we have ^ y ? AŒH0 , 0 D hyjviH1 D h.0; y/j.u; v/iH0 ˚H1 .u;v/2A
, .0; y/ 2 A? , .y; 0/ 2 A , y 2 Œ¹0ºA : This shows that AŒH0 ? D Œ¹0ºA and the second decomposition follows from the projection theorem in H0 ˚ H1 . The first decomposition now follows by replacing A by A and using Theorem 1.2.16. Remark 1.2.19. With A D PC as the orthogonal projector defined in (1.2.3) and H D H0 D H1 we recover from Theorem 1.2.18 the earlier Theorem 1.2.3. It can be shown that PC D PC , i.e. PC is selfadjoint10 , and Œ¹0ºPC D .1 PC /ŒH .
1.2.2 Dual Spaces As an important application of Theorem 1.2.18 we shall now recall the Riesz representation theorem, which provides a construction of another Hilbert space H as the 10 A relation satisfying A D A is called selfadjoint. A relation is called skew-selfadjoint if A D A , which can be simply stated as A1 D A? :
If A is an operator, then we speak of a selfadjoint or a skew-selfadjoint operator, respectively. If for a linear relation A we have A A , we say A is Hermitian. A Hermitian operator A W D.A/ H0 ! H0 is characterized by the property hAxj yiH0 D hxj AyiH0 for all x; y 2 D.A/. If A is a Hermitian operator and D.A/ is dense in H0 , i.e. A is densely defined, then A is called symmetric.
16
Chapter 1 Elements of Hilbert Space Theory
dual of a given Hilbert space H . For this construction, let first H WD ¹f 2 C H jf linear ^ f Lipschitz continuousº: Since any mapping into the numbers is called a functional, the elements of H are referred to as continuous11 , linear functionals. On H we do not use the linear structure induced by C, but rather define the following modified12 linear structure: .˛f C g/.t / WD ˛ f .t / C g.t / for all ˛ 2 C; f; g 2 H ; t 2 H:
(1.2.7)
Thus, H becomes a linear space. We choose this particular linear structure in order that the Riesz mapping (see Definition 1.2.21 below) becomes linear. We state without proof the Riesz representation theorem. Theorem 1.2.20 (Riesz representation theorem). Let H be a complex Hilbert space. For any f 2 H there is a unique w 2 H such that f .x/ D hwjxiH
for all x 2 H:
Definition 1.2.21. Let H be a complex Hilbert space. The mapping RH W H ! H; f 7! RH .f /; V where RH .f / WD 2.¹w 2 H j x2H f .x/ D hwjxiH :º/, is – according to the Riesz representation theorem – a well-defined mapping called the Riesz operator or the Riesz mapping. The Riesz mapping enjoys the following properties: Lemma 1.2.22. Let H be a complex Hilbert space. The Riesz mapping RH W H ! H is a linear mapping and norm-preserving13 in the sense that jRH f jH D jf jLip 11 12
for all f 2 H :
Recall that for linear mappings Lipschitz continuity is equivalent to continuity. With the imaginary unit i we have in particular that for f 2 H 1 f D f .i /: i
13
The functional j jLip given by ² jf jLip WD sup
ˇ ³ jf .x/ f .y/jB1 ˇˇ x; y 2 D; x ¤ y ˇ jx yjB0
(1.2.8)
Section 1.2 Some Construction Principles of Hilbert Spaces
17
Proof. The linearity14 follows straightforwardly from the definition of RH and the properties of the inner product. For all f; g 2 H ; y 2 H and ˛ 2 C we calculate hRH .˛ f C g/ j yiH D .˛ f C g/.y/ D ˛ f .y/ C g.y/ D ˛ hRH .f / j yiH C hRH .g/ j yiH D h˛ RH .f / C RH .g/ j yiH I that is RH .˛f C g/ D ˛ RH f C RH g: Moreover, by the Cauchy–Schwarz inequality we have for all y 2 H jf .y/j D jhRH f j yiH j jRH f jH jyjH
(1.2.9)
and so jf jLip jRH f jH : Since equality can occur in (1.2.9) (e.g. for y D RH f ), we see that jRH f jH is also the best possible constant, i.e. (1.2.8) holds. From (1.2.8) we also see that j jLip actually is a pre-Hilbert space norm, since the left-hand side is clearly a pre-Hilbert space norm associated with the inner product .x; y/ 7! hRH x j RH yiH : Thus, we have that H equipped with this inner product is a pre-Hilbert space. Lemma 1.2.23. Let H be a complex Hilbert space. The Riesz mapping RH W H ! H is onto. Proof. Let y 2 H be arbitrary. The linear functional fy W H ! C; x 7! hy j xiH , is clearly linear and by the Cauchy–Schwarz inequality also Lipschitz continuous. Indeed, as in the proof of the previous lemma we have jhy j iH jLip D jyjH : for the linear space of Lipschitz continuous functions f W D B0 ! B1 , B0 ; B1 normed linear spaces, constitutes a semi-norm. For a fixed argument x0 2 D the functional f 7! jf .x0 /jB1 C jf jLip defines a norm for such functions. For the subspace of linear mappings we may choose x0 D 0 to see that for these j jLip is already a norm. In the latter case we also write kf k WD jf jLip and f 7! kf k is referred to as the operator norm of the linear mapping f . In some cases, where domain and range spaces may be unclear, we shall also write k kX!Y if linear mappings from normed linear space X to normed linear space Y are concerned. 14 The particular choice of linear structure is designed to make the Riesz mapping a linear operator.
18
Chapter 1 Elements of Hilbert Space Theory
Thus, we have fy 2 H . Moreover, by definition fy .x/ D hy j xiH D hRH fy j xiH for all x 2 H and therefore RH fy D y: (1.2.10) As a consequence we have Theorem 1.2.24. Let H be a complex Hilbert space. Then H is also a Hilbert space (the so-called dual space to H ) and the Riesz mapping RH W H ! H is unitary. Proof. Given our previous results we only need to show completeness of H . So, let .fn /n be a Cauchy sequence in H . Then, since RH is norm-preserving, we also have that .RH fn /n is a Cauchy sequence in H . By the completeness of H we have y WD lim RH fn 2 H: n!1
According to (1.2.10) we have RH fy D y for fy W H ! C; x 7! hy j xiH . Thus, (again by norm-preservation) from RH fn ! y as n ! 1 we obtain fn ! fy
as n ! 1:
Since RH is norm-preserving it is clearly one-to-one and by the above indeed a unitary mapping. Remark 1.2.25. It is worth comparing the adjoint to the dual relation. Let A H0 ˚ H1 be a linear relation, then the dual relation is given by A} WD .Aa /1 H1 ˚ H0 ; where the so-called annihilator Aa of A is defined as Aa WD ¹f 2 H0 ˚ H1 j f .x/ D 0 for all x 2 Aº: The connection between A and A} is given by composition with the corresponding Riesz mappings RHi ; i D 0; 1, : A D RH0 A} RH 1
(1.2.11)
The dual relation is the more general concept, since it makes sense in a Banach space setting, whereas the adjoint uses the concept of orthogonality. In case of rightuniqueness one speaks of the dual mapping or dual operator. As an example, given 1 the Riesz mapping RH W H ! H , then RH D RH W H ! H . However, } W H ! H is such that RH } g.RH f / D .RH g/.f /
Section 1.2 Some Construction Principles of Hilbert Spaces
19
for all f; g 2 H . According to (1.2.11) we have } 1 D RH D RH RH RH RH
and it follows that } : 1H D RH RH
Thus we have } 1 D RH RH D RH :
In the following we shall always make use of the implied identification of H with H via the unitary mapping } RH RH W H ! H :
In some instances it may also be useful to even identify H with H .
1.2.3 Tensor Products of Hilbert Spaces A final construction principle for new Hilbert spaces we shall explore is the so-called tensor product of Hilbert spaces. It can be rooted on the concept of multi-linear forms on Hilbert spaces. We shall be more detailed here since this concept – although extremely useful for the discussion of partial differential equations – is rarely included in elementary functional analysis texts. Let .Hi /iD0;:::;n be a family of .n C 1/ (complex) Hilbert spaces. A mapping W H0 Hn ! C is called continuous .n C 1/-linear form on H0 Hn if y 7! .: : : ; xi1 ; y; xiC1 ; : : :/ 2 Hi for xk 2 Hk for k D 0; : : : ; n, k ¤ i , i D 0; : : : ; n. The implied linear structure for such .n C 1/-linear forms f; g is given by ^
.˛ f C g/.x/ WD ˛ f .x/ C g.x/:
˛2C; x2H0 Hn
Given xi 2 Hi , i D 0; : : : ; n, a continuous .n C 1/-linear form x0 ˝ ˝ xn on H0 Hn can be defined by x0 ˝ ˝ xn .u0 ; : : : ; un / WD hx0 ju0 iH0 hxn jun iHn
20
Chapter 1 Elements of Hilbert Space Theory
for ui 2 Hi , i D 0; : : : ; n. Such continuous .n C 1/-linear forms are referred to as decomposable continuous .n C 1/-linear form. Hence, we can generate a linear space called the algebraic tensor product of .Hi /iD0;:::;n W˝ WD
a O
Hk WD LinC ¹x0 ˝ ˝ xn j xi 2 Hi ; i D 0; : : : ; nº:
kD0;:::;n
Defining hx0 ˝ ˝ xn j u0 ˝ ˝ un iH0 ˝˝Hn WD hx0 ju0 iH0 hxn jun iHn
(1.2.12)
for all $x_i, u_i \in H_i$, $i = 0, \dots, n$, we obtain a candidate for an inner product for $W_{\otimes}$ by sesqui-linear extension. Since the values may depend on the representation, this process may a priori merely create a sesqui-linear relation, i.e. a relation which is linear w.r.t. the second component and conjugate linear w.r.t. the first component. More precisely, we have a relation between $W_{\otimes} \times W_{\otimes}$ and $\mathbb{C}$, i.e. a subset $B$ of $(W_{\otimes} \times W_{\otimes}) \times \mathbb{C}$ given by the set of all ordered pairs of the form $(a, b)$ with
$$a = \Bigl(\,\sum_{i\in\mathbb{N}} \alpha_i\, x_{0,i} \otimes \cdots \otimes x_{n,i}\,,\ \sum_{j\in\mathbb{N}} \beta_j\, u_{0,j} \otimes \cdots \otimes u_{n,j}\Bigr)$$
and
$$b = \sum_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \overline{\alpha_i}\, \beta_j\, \langle x_{0,i} \otimes \cdots \otimes x_{n,i} \mid u_{0,j} \otimes \cdots \otimes u_{n,j} \rangle_{H_0 \otimes\cdots\otimes H_n},$$
where $\alpha, \beta \in \mathbb{C}^{\mathbb{N}}$ have only finitely many non-vanishing entries and $x_k, u_k \in H_k^{\mathbb{N}}$, $k = 0, \dots, n$,
such that, as we shall demonstrate, if $((x, y), \omega), ((x, z), \zeta) \in B$ then also $((x, \varepsilon y + z), \varepsilon\omega + \zeta) \in B$ and $((y, x), \overline{\omega}) \in B$ for all $\omega, \zeta, \varepsilon \in \mathbb{C}$, $x, y, z \in W_{\otimes}$. From these properties we shall derive that we have actually defined a sesqui-linear functional on $W_{\otimes}$.

So, consider arbitrary linear combinations $x = \sum_{i\in\mathbb{N}} \alpha_i\, x_{0,i} \otimes \cdots \otimes x_{n,i}$, $y = \sum_{j\in\mathbb{N}} \beta_j\, y_{0,j} \otimes \cdots \otimes y_{n,j}$ and $z = \sum_{k\in\mathbb{N}} \gamma_k\, z_{0,k} \otimes \cdots \otimes z_{n,k}$. Then we see
$$\varepsilon y + z = \varepsilon \sum_{j\in\mathbb{N}} \beta_j\, y_{0,j} \otimes \cdots \otimes y_{n,j} + \sum_{k\in\mathbb{N}} \gamma_k\, z_{0,k} \otimes \cdots \otimes z_{n,k}$$
and
$$\begin{aligned}
&\sum_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \overline{\alpha_i}\, \varepsilon\beta_j\, \langle x_{0,i} \otimes \cdots \otimes x_{n,i} \mid y_{0,j} \otimes \cdots \otimes y_{n,j} \rangle_{H_0 \otimes\cdots\otimes H_n} + \sum_{i\in\mathbb{N}} \sum_{k\in\mathbb{N}} \overline{\alpha_i}\, \gamma_k\, \langle x_{0,i} \otimes \cdots \otimes x_{n,i} \mid z_{0,k} \otimes \cdots \otimes z_{n,k} \rangle_{H_0 \otimes\cdots\otimes H_n} \\
&\quad = \varepsilon \sum_{i\in\mathbb{N}} \sum_{j\in\mathbb{N}} \overline{\alpha_i}\, \beta_j\, \langle x_{0,i} \otimes \cdots \otimes x_{n,i} \mid y_{0,j} \otimes \cdots \otimes y_{n,j} \rangle_{H_0 \otimes\cdots\otimes H_n} + \sum_{i\in\mathbb{N}} \sum_{k\in\mathbb{N}} \overline{\alpha_i}\, \gamma_k\, \langle x_{0,i} \otimes \cdots \otimes x_{n,i} \mid z_{0,k} \otimes \cdots \otimes z_{n,k} \rangle_{H_0 \otimes\cdots\otimes H_n},
\end{aligned}$$
showing the desired linearity property. Using (1.2.12) we also obtain X X ˛i ˇj hx0;i ˝ ˝ xn;i j y0;j ˝ ˝ yn;j iH0 ˝˝Hn i2N j 2N
D
X X
˛i ˇj hx0;i j y0;j iH0 hxn;i j yn;j iHn
i2N j 2N
D
X X
˛i ˇj hy0;j j x0;i iH0 hyn:j j xn;i iHn
i2N j 2N
D
X X
˛i ˇj hy0;j ˝ ˝ yn;j j x0;i ˝ ˝ xn;i iH0 ˝˝Hn
: (1.2.13)
i2N j 2N
This shows the desired conjugate symmetry. From (1.2.13) we also get X X ˛i ˛j hx0;i ˝ ˝ xn;i j x0;j ˝ ˝ xn;j iH0 ˝˝Hn i2N j 2N
D
X X
˛i ˛j hx0;i j x0;j iH0 hxn;i j xn;j iHn :
(1.2.14)
i2N j 2N .k/
Let .Aij /i;j be a non-negative root of the Gramian .hxk;i j xk;j iHk /i;j , k D 0; : : : ; n. Then we obtain from (1.2.14) X X ˛i ˛j hx0;i ˝ ˝ xn;i j x0;j ˝ ˝ xn;j iH0 ˝˝Hn i2N j 2N
D
X X X
i2N j 2N s0
D
X s0
X
X X sn
i
sn
.0/
.0/
.n/
.n/
˛i ˛j Ais0 As0 j Aisn Asn j .0/
.n/
˛i As0 i Asn i
X j
.0/ .n/ ˛j As0 j Asn j :
The last term is, however, the square of the norm of w in `2 .N nC1 /, where X .0/ .n/ ˛i As0 i Asn i ws0 sn WD i2N
for s D .s0 ; : : : ; sn / 2 N nC1 , and so X X ˛i ˛j hx0;i ˝ ˝ xn;i j x0;j ˝ ˝ xn;j iH0 ˝˝Hn 0: i2N j 2N
Next, we want to show definiteness of our candidate for a semi-inner product. So let X X ˛i ˛j hx0;i ˝ ˝ xn;i j x0;j ˝ ˝ xn;j iH0 ˝˝Hn D 0: i2N j 2N
Then, since 2 Re
X
˛i hx0;i ˝ ˝ xn;i j u0 ˝ ˝ un iH0 ˝˝Hn
i2N
C hu0 ˝ ˝ un j u0 ˝ ˝ un iH0 ˝˝Hn X X D ˛i ˛j hx0;i ˝ ˝ xn;i j x0;j ˝ ˝ xn;j iH0 ˝˝Hn i2N j 2N
C 2 Re
X
˛i hx0;i ˝ ˝ xn;i j u0 ˝ ˝ un iH0 ˝˝Hn
i2N
C hu0 ˝ ˝ un j u0 ˝ ˝ un iH0 ˝˝Hn ˇX ˇ2 ˇ ˇ Dˇ ˛i x0;i ˝ ˝ xn;i C u0 ˝ ˝ un ˇ
H0 ˝˝Hn
i2N
0
for all ui 2 Hi ; i D 0; : : : ; n, this implies by a standard argument on quadratic forms that we must also have X ˛i hx0;i ˝ ˝ xn;i j u0 ˝ ˝ un iH0 ˝˝Hn D 0; i2N
for all ui 2 Hi ; i D 0; : : : ; n. This is, however, by definition X X ˛i hx0;i ju0 iH0 hxn;i jun iHn D ˛i x0;i ˝ ˝ xn;i .u0 ; : : : ; un / 0D i2N
i2N
P for all ui 2 Hi ; i D 0; : : : ; n. In other words the .n C 1/-linear form i2N ˛i x0;i ˝ ˝ xn;i is the zero functional. We may finally conclude independence of the chosen representation. Indeed, if X X ˛i x0;i ˝ ˝ xn;i D ˇj y0;j ˝ ˝ yn;j i2N
j 2N
then
X
˛i hx0;i ˝ ˝ xn;i j u0 ˝ ˝ un iH0 ˝˝Hn
i2N
X
j 2N
D
X
ˇj hy0;j ˝ ˝ yn;j j u0 ˝ ˝ un iH0 ˝˝Hn ˛i x0;i ˝ ˝ xn;i .u0 ; : : : ; un /
i2N
X
ˇj y0;j ˝ ˝ yn;j .u0 ; : : : ; un /
j 2N
D 0: This shows that DX
ˇ E ˇ ˛i x0;i ˝ ˝ xn;i ˇ u0 ˝ ˝ un
H0 ˝˝Hn
i2N
WD
X
˛i hx0;i ˝ ˝ xn;i j u0 ˝ ˝ un iH0 ˝˝Hn
i2N
defines a conjugate linear functional of the first factor. By conjugate symmetry, the analogous result follows for the second factor, i.e. ˇX DX E ˇ ˛i x0;i ˝ ˝ xn;i ˇ ˇj u0;j ˝ ˝ un;j i2N
WD
X X
i2N
H0 ˝˝Hn
˛i ˇj hx0;i ˝ ˝ xn;i j u0;j ˝ ˝ un;j iH0 ˝˝Hn
i2N j 2N
defines a sesqui-linear functional on $W_{\otimes}$ with the properties of an inner product. With this we can now define the tensor product of Hilbert spaces.

Definition 1.2.26. Let $(H_i)_{i=0,\dots,n}$ be a family of $(n+1)$ (complex) Hilbert spaces. The completion of the pre-Hilbert space $\bigotimes^{a}_{k=0,\dots,n} H_k$ with respect to the induced norm $|\,\cdot\,|_{H_0\otimes\cdots\otimes H_n}$ is called the tensor product of $(H_i)_{i=0,\dots,n}$ and denoted by $\bigotimes_{i=0,\dots,n} H_i$ or by $H_0 \otimes \cdots \otimes H_n$. Let $V_i$ be a subspace of $H_i$, $i = 0, \dots, n$. The pre-Hilbert space $\bigotimes^{a}_{k=0,\dots,n} V_k$ is called the algebraic tensor product of $(V_i)_{i=0,\dots,n}$ and also denoted by $V_0 \otimes^{a} \cdots \otimes^{a} V_n$.

The concept of tensor products simplifies dramatically if the Hilbert spaces are a priori constructed as completions of pre-Hilbert spaces of functions.

Proposition 1.2.27. Let $(H_i)_{i=0,\dots,n}$ be a family of $(n+1)$ (complex) Hilbert spaces and $(V_i)_{i=0,\dots,n}$ a corresponding family of pre-Hilbert spaces of functions, i.e. $V_i$
C Mi ; Mi sets, such that Hi is the completion of Vi , i D 0; : : : ; n. Then the subspace V0 Vn WD LinC ¹.t0 ; : : : ; tn / 7! f0 .t0 / fn .tn / j .f0 ; : : : ; fn / 2 V0 Vn º of C M0 Mn (with the linear structure induced by C) equipped with the inner product hji a a defined by
6
V0 ˝˝Vn
hFf j Fg i
a
WD hf0 jg0 iH0 hfn jgn iHn
a
V0 ˝˝Vn
(1.2.15)
for all f D .f0 ; : : : ; fn /; g D .g0 ; : : : ; gn / in V0 Vn where Ff abbreviates the mapping .t0 ; : : : ; tn / 7! f0 .t0 / fn .tn /, is a pre-Hilbert space. The mapping F.f0 ;:::;fn / 7! f0 ˝ ˝ fn extends by linearity to an isometry JH0 ˝˝Hn W
6 D
6
V0 Vn ! H0 ˝ ˝ Hn . The completion of V0 Vn is then unitarily equivalent to H0 ˝ ˝ Hn , where the unitary mapping is given by the continuous extension JH0 ˝˝Hn of JH0 ˝˝Hn to this completion. Proof. Although the result is just a special case of a general result, the implicit claim that hji a a is actually an inner product (rather than a semi-inner prodV0 ˝˝Vn
6
uct) needs to be confirmed. To see this definiteness, we consider a linear combination P i ˛i Ffi 2 V0 Vn satisfying ˇX DX E ˇ ˛i Ffi ˇ ˛j Ffj D 0: a a i
V0 ˝˝Vn
j
From the Cauchy–Schwarz inequality we obtain that D ˇX E ˇ ˛i Ffi D0 Fg ˇ a a V0 ˝˝Vn
i
for all g 2 V0 Vn . This yields D ˇX E ˇ 0 D Fg ˇ ˛i Ffi a
a
V0 ˝˝Vn
i
D
X i
˛i hFg j Ffi i
a
a
V0 ˝˝Vn
and using definition (1.2.15) (and letting fi D .fi0 ; : : : ; fin /; g D .g0 ; : : : ; gn /), we obtain X 0D ˛i hg0 j fi0 iV0 hgn j fin iVn i
D ˇX E ˇ D g0 ˇ ˛i hg1 j fi1 iV1 hgn j fin iVn fi0 : i
V0
Since V0 is a pre-Hilbert space and g0 2 V0 was arbitrary, this implies X ˛i fi0 .t0 / hg1 j fi1 iV1 hgn j fin iVn D 0 i
for all t0 2 M0 . By induction we get as desired X ˛i fi0 .t1 / fin .tn / D 0 for all ti 2 Mi ; i D 0; : : : ; n: i
This tensor product construction is extremely useful for our context, since it allows for the transition from one-variable to multi-variable function spaces.

Example 1.3. $\ell^2(\mathbb{Z}^{n+1}) = \bigotimes_{k=0,\dots,n} \ell^2(\mathbb{Z})$. Recalling that the elements of $V_i := \ell^2(\mathbb{Z})$ are complex number sequences, we realize that with $M_i = \mathbb{Z}$ we encounter an instance of the last proposition. The correspondence
$$(z_{i_0} \cdots z_{i_n})_{(i_0,\dots,i_n)\in\mathbb{Z}^{n+1}} \mapsto (z_{i_0})_{i_0\in\mathbb{Z}} \otimes \cdots \otimes (z_{i_n})_{i_n\in\mathbb{Z}}$$
extends to a linear isometry $I$. Since the elements generating $\ell^2(\mathbb{Z}^{n+1})$ (according to its definition) have product form
$$\chi_{\{(i_0,\dots,i_n)\}}(j_0,\dots,j_n) = \chi_{\{i_0\}}(j_0) \cdots \chi_{\{i_n\}}(j_n)$$
for $(i_0,\dots,i_n), (j_0,\dots,j_n) \in \mathbb{Z}^{n+1}$, we see that $I$ extends to a unitary mapping $\overline{I}$ from $\ell^2(\mathbb{Z}^{n+1})$ onto $\bigotimes_{k=0,\dots,n} \ell^2(\mathbb{Z})$.

Example 1.4. $L^2(\mathbb{R}^{n+1}) = \bigotimes_{k=0,\dots,n} L^2(\mathbb{R})$. This result follows by the same arguments as the previous example. For this we first need to notice that it is sufficient to use characteristic functions $\chi_{I_0 \times\cdots\times I_n}$ with $I_k = [a_k, b_k[$, $a_k < b_k$, $k = 0, \dots, n$, in order to generate $L^2(\mathbb{R}^{n+1})$, since $|\chi_{I_0 \times\cdots\times I_n} - \chi_{J_0 \times\cdots\times J_n}|_{L^2(\mathbb{R}^{n+1})} = 0$ for $J_k = \,]a_k, b_k[$, $J_k = \,]a_k, b_k]$ or $J_k = [a_k, b_k]$, $k = 0, \dots, n$. The linear span of these restricted characteristic functions is however a function space. Moreover, we have
$$\chi_{I_0 \times\cdots\times I_n}(t_0,\dots,t_n) = \chi_{I_0}(t_0) \cdots \chi_{I_n}(t_n) \quad \text{for } (t_0,\dots,t_n)\in\mathbb{R}^{n+1},$$
from which the claim follows in the same way as for the previous example.

Example 1.5. Similar to the previous examples, it can be seen that in general the completion $L^2(\mathbb{R}, H)$ of the space of $H$-valued step functions
$$\mathrm{Lin}_{\mathbb{C}}\{\, t \mapsto \chi_{[a,b[}(t)\, h \mid h \in H \wedge a, b \in \mathbb{R} \,\}$$
is just $L^2(\mathbb{R}) \otimes H$ for every Hilbert space $H$. The next proposition gives the details in a more general setting.
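As a quick plausibility check for Examples 1.3 and 1.4 (our own sketch, not from the text, with finite-dimensional spaces standing in for $\ell^2$ and $L^2$), decomposable tensors can be realized as Kronecker products, and the inner product then factorizes exactly as prescribed by (1.2.12).

import numpy as np

rng = np.random.default_rng(1)
m, p = 3, 5
x0, u0 = rng.standard_normal((2, m)) + 1j * rng.standard_normal((2, m))
x1, u1 = rng.standard_normal((2, p)) + 1j * rng.standard_normal((2, p))

# Decomposable tensors of C^m (x) C^p realized as Kronecker products in C^(m*p).
X = np.kron(x0, x1)
U = np.kron(u0, u1)

lhs = np.vdot(X, U)                          # <x0 (x) x1 | u0 (x) u1>
rhs = np.vdot(x0, u0) * np.vdot(x1, u1)      # <x0|u0> <x1|u1>, cf. (1.2.12)
print(np.allclose(lhs, rhs))                 # True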
As a final variant of the same reasoning we record the next proposition for later use.
Proposition 1.2.28. Let H0 and H1 be (complex) Hilbert spaces and V0 a preHilbert space of functions, i.e. V0 C M ; M a set, such that H0 is the completion of V0 . Then the subspace V0 H1 WD LinC ¹t 7! f .t / w j f0 2 V0 ^ w 2 H1 º of H1M (with the linear structure induced by H1 ) equipped with the inner product hji a
3
V0 ˝H1
defined by hf w j g vi
a
V0 ˝H1
WD hf jgiH0 hwjviH1
for all f; g 2 V0 ; w; v 2 H1
3
where f w denotes the mapping t 7! f .t / w is a pre-Hilbert space. The mapping f w 7! f ˝w extends by linearity to an isometry JH0 ˝H1 W V0 H1 ! H0 ˝H1 . The completion of V0 H1 is then unitarily equivalent to H0 ˝H1 , where the unitary mapping is the continuous extension JH0 ˝H1 of JH0 ˝H1 to this completion.
3
C
3
Proof. The result follows the same way as the previous P one. To see the definiteness of hji a we consider again a linear combination i ˛i fi wi 2 V0 H1 satisfying V0 ˝H1
DX
ˇX E ˇ ˛i fi wi ˇ ˛i fi wi
i
a
V0 ˝H1
i
D 0:
From the Cauchy–Schwarz inequality we obtain ˇX E D X ˇ ˛i fi wi a D ˛i hf j fi iV0 hw j wi iH1 f wˇ V0 ˝H1
i
i
D ˇX E ˇ D f ˇ ˛i fi hw j wi iH1
V0
i
D0
for all f 2 V0 ; w 2 H1 . Since V0 is a pre-Hilbert space and f 2 V0 is arbitrary, we get X ˛i fi .t / hw j wi iH1 D 0 (1.2.16) i
for all t 2 M . But (1.2.16) implies D ˇX E ˇ ˛i fi .t / wi wˇ i
which in turn yields
X
H1
D 0;
˛i fi .t / wi D 0
i
for all t 2 M , since once again H1 is a pre-Hilbert space and w 2 H1 is arbitrary.
This result shows that tensor products can be employed to manage the transition from scalar-valued function spaces to spaces of Hilbert space-valued functions. We also have the following useful general density result:

Lemma 1.2.29. Let $(H_i)_{i=0,\dots,n}$ be a family of $(n+1)$ (complex) Hilbert spaces and $(S_i)_{i=0,\dots,n}$ a corresponding family of respective subsets. If $V_i := \mathrm{Lin}_{\mathbb{C}} S_i$ is dense in $H_i$ for $i = 0, \dots, n$, then
$$V_0 \otimes^{a} \cdots \otimes^{a} V_n = \mathrm{Lin}_{\mathbb{C}}\{ x_0 \otimes \cdots \otimes x_n \mid x_i \in S_i \text{ for } i = 0, \dots, n \}$$
is dense in $H_0 \otimes \cdots \otimes H_n$.

Proof. It suffices to approximate decomposable elements, i.e. elements of the form $x_0 \otimes \cdots \otimes x_n \in H_0 \otimes \cdots \otimes H_n$. Let $v_i := (v_{i,j})_j$ be a sequence approximating $x_i \in H_i$, $i = 0, \dots, n$. Then we can estimate
$$\begin{aligned}
|x_0 \otimes \cdots \otimes x_n - v_{0,j} \otimes \cdots \otimes v_{n,j}|_{H_0 \otimes\cdots\otimes H_n}
&\le \sum_{k=0}^{n} |v_{0,j} \otimes \cdots \otimes v_{k-1,j} \otimes (x_k - v_{k,j}) \otimes x_{k+1} \otimes \cdots \otimes x_n|_{H_0 \otimes\cdots\otimes H_n} \\
&\le \sum_{k=0}^{n} |v_{0,j}|_{H_0} \cdots |v_{k-1,j}|_{H_{k-1}}\, |x_k - v_{k,j}|_{H_k}\, |x_{k+1}|_{H_{k+1}} \cdots |x_n|_{H_n}.
\end{aligned}$$
Since the last term is a finite sum of products of norms of a difference tending to zero and bounded factors, the right-hand side goes to zero. Thus we have
$$v_{0,j} \otimes \cdots \otimes v_{n,j} \to x_0 \otimes \cdots \otimes x_n \quad \text{as } j \to \infty.$$
Based on this lemma we can now address the issue of generating orthonormal sets in tensor product spaces from orthonormal sets in the “factor” spaces. Proposition 1.2.30. Let .Hi /iD0;:::;n be a family of n (complex) Hilbert spaces and .oi /iD0;:::;n a corresponding family of respective orthonormal sets. Then15 Œo0 ˝ ˝ Œon WD ¹x0 ˝ ˝ xn j xi 2 oi for i D 0; : : : ; nº 15 Here we utilize a general notational scheme, which we shall use throughout. Generating for a mapping or rather an operation f W M0 Mn ! N a new set by applying f to all elements of certain subsets 0 M0 ; : : : ; n Mn , will be indicated by using square brackets rather than round brackets as one does when f is applied in the normal way to elements m0 2 M0 ; : : : ; mn 2 Mn , i.e. we write f .m0 ; : : : ; mn / briefly for f ..m0 ; : : : ; mn // for the evaluation of f at .m0 ; : : : ; mn / 2 M0 Mn , but f Œ 0 n for the set ¹f .m0 ; : : : ; mn /j .m0 ; : : : ; mn / 2 0 n º. Since frequently the operation symbol is written between the arguments, we arrive at notations such as
$$[\{1,2,3\}] + [\{3,4\}] = \{4,5,6,7\} \quad\text{or}\quad [\{u,v\}] \otimes [\{x,y\}] = \{u\otimes x,\, u\otimes y,\, v\otimes x,\, v\otimes y\}$$
or the one used here for applying $\otimes$ repeatedly.
is an orthonormal set in H0 ˝ ˝ Hn . Moreover, if oi is complete in Hi for i D 0; : : : ; n, then Œo0 ˝ ˝ Œon is a complete orthonormal set in H0 ˝ ˝ Hn . Proof. Let x0 ˝ ˝ xn ; y0 ˝ ˝ yn 2 Œo0 ˝ ˝ Œon . Then hx0 ˝ ˝ xn jy0 ˝ ˝ yn i D hx0 jy0 iH0 hxn jyn iHn ´ 1 if x0 D y0 ; : : : ; xn D yn ; D 0 otherwise; which shows that indeed Œo0 ˝ ˝ Œon is an orthonormal set in H0 ˝ ˝ Hn . Recalling that for an orthonormal set ok to be complete means that Vk WD LinC ok is dense in Hk , k D 0; : : : n, the rest of the proposition follows from the previous lemma. Since closed linear operators between Hilbert spaces are just (right-unique) closed linear subspaces of the direct sum of the Hilbert spaces, it would be natural to define the tensor product of linear operators as tensor products of these subspaces. There is, however, a more useful (algebraic) concept of a tensor product for linear operators Ak H0k ˚ H1k ; k D 0; : : : ; n, defined as the linear relation given by16 a
a
.A0 ˝ ˝An /.x0 ˝ ˝ xn / WD A0 x0 ˝ ˝ An xn
(1.2.17)
for all xk 2 D.Ak /; k D 0; : : : ; n. That (1.2.17) defines a linear operator a
a
a
a
.A0 ˝ ˝An / W D.A0 / ˝ ˝ D.An / H00 ˝ ˝ H0n ! H10 ˝ ˝ H1n and not just a linear relation is not obvious and needs a careful Nconsideration. We must confirm that different representations of an element x 2 akD0;:::;n D.Ak / do a
a
not lead to let .0; w/ 2 A0 ˝ ˝An , Pdifferent ‘images’. To show right-uniqueness P i.e. w D i ˛i A0 x0;i ˝ ˝ An xn;i and i ˛i x0;i ˝ ˝ xn;i D 0. The latter is the same as X 0D ˛i hx0;i jv0 iH00 hxn;i jvn iH0n i
for all vk 2 H0k ; k D 0; : : : ; n. If .xk;i /i is linearly independent for k D 0; : : : ; n, then all coefficients ˛i must be zero. In this case w D 0. Otherwise, let .yk;j /j be a maximal linearly independent subfamily of .xk;i /i so that for all i X xk;i D ak;ij yk;j j 16
a
We shall have to live with the ambiguity that for linear mappings A; B the mapping A˝B is not the tensor product of A; B considered as linear spaces,
for suitable coefficients ak;ij . Then X
˛i x0;i ˝ ˝ xn;i D
XX
i
i
j0
X
˛i a0;ij0 an;ijn y0;j0 ˝ ˝ yn;jn
jn
D0 and due to linear independence X
˛i a0;ij0 an;ijn D 0
i
for all multi-indices .j0 ; : : : ; jn / appearing in the sum. Trivially this yields XX X ˛i a0;ij0 an;ijn A0 y0;j0 ˝ ˝ An yn;jn D 0 i
j0
jn
and by the linearity of the operators .Ak /kD0;:::;n 0D
X
˛i A0
X
i
D
X
a0;ij0 y0;j0 ˝ ˝ An
j0
X
an;ijn yn;jn
jn
˛i A0 x0;i ˝ ˝ An xn;i D w:
i a
a
Thus right-uniqueness of A0 ˝ ˝An – as defined above – is shown. a
a
If A0 ˝ ˝An is a closable, linear operator then we define a
a
A0 ˝ ˝ An D A0 ˝ ˝An : We have
a
a
a
a
a
a
A˝B ˝C D .A˝B/˝C D A˝.B ˝C / and in the closable case A ˝ B ˝ C D .A ˝ B/ ˝ C D A ˝ .B ˝ C /; so that we may focus our attention on the case of two factors with the general case following by induction. With this rather brief review of Hilbert space concepts, we are ready to consider the functional analytical frame work utilized in our discussion of partial differential equations.
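For matrices acting on finite-dimensional spaces, the algebraic tensor product of operators defined in (1.2.17) is just the Kronecker product; the following short sketch (ours, not from the text) checks the defining relation on decomposable tensors.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(3)
y = rng.standard_normal(4)

AB = np.kron(A, B)                   # matrix of A (x) B on C^3 (x) C^4 = C^12
lhs = AB @ np.kron(x, y)             # (A (x) B)(x (x) y)
rhs = np.kron(A @ x, B @ y)          # A x (x) B y, cf. (1.2.17)
print(np.allclose(lhs, rhs))         # True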
Chapter 2
Sobolev Lattices
In order to make the concept of a Sobolev (or extrapolation) lattice more easily digestible, we first consider a slightly simpler and more commonly discussed concept, namely that of a chain of Hilbert spaces.
2.1
Sobolev Chains
The starting point of our discussion is the idea of a Gelfand triple. We begin with a first observation formulated in the following lemma.

Lemma 2.1.1. Let $H_0, H_1$ be Hilbert spaces such that $H_1 \overset{j}{\hookrightarrow} H_0$ (in the sense of $j$ being a dense and continuous embedding¹). Then
$$H_0^* \overset{j^{\diamond}}{\hookrightarrow} H_1^*.$$
Here $j^{\diamond}$ denotes the dual mapping associated with $j$.

Proof. By assumption we have that the range $j[H_1]$ is dense in $H_0$ and the null space of $j$ is $\{0\}$. We need to show that $j^{\diamond}$ is one-to-one and that $j^{\diamond}[H_0^*]$ is dense in $H_1^*$. By the unitary character of the Riesz mappings, it suffices to establish the analogous properties for $j^* = R_{H_1}\, j^{\diamond}\, R_{H_0}^{-1}$. We have by the projection theorem
$$H_1 = \overline{j^*[H_0]} \oplus \{ x \in H_1 \mid j x = 0 \}$$
and
$$H_0 = \overline{j[H_1]} \oplus \{ y \in H_0 \mid j^* y = 0 \}.$$
Thus, the density of $j^*[H_0]$ in $H_1$ and the injectivity of $j^*$ follow from the injectivity of $j$ and the density of $j[H_1]$ in $H_0$, respectively. The best embedding constant for both embeddings is the same, namely the operator norm
$$\|j\| = \|j^*\| = \|j^{\diamond}\|. \qquad (2.1.1)$$

¹ A continuous, linear mapping $j\colon H_1 \to H_0$ from a Hilbert space $H_1$ into a Hilbert space $H_0$ is in this context called an embedding if it is injective. If $j$ has dense range $j[H_1]$ in $H_0$, then we speak of a dense embedding. We may identify $j[H_1]$ with $H_1$ and so think of $H_1$ as a subspace of $H_0$ and $j$ as given by the identity $j\colon H_1 \to H_0$, $x \mapsto x$; thus, linearity and injectivity become obvious. This particular $j$ has dense range if and only if $H_1$ is a dense subspace of $H_0$. The continuity of $j$ can here be expressed by the existence of a constant $C_0$ such that $|x|_{H_0} \le C_0 |x|_{H_1}$ holds for all $x \in H_1$. To underscore that $j$ is generated by $x \mapsto x$ we frequently write $1_{H_1 \to H_0}$ or $1_{H_1 \hookrightarrow H_0}$ for this embedding (as we write $1_{H_0}$, i.e. $1_{H_0 \to H_0}$ – or even shorter just $1$ – for the identity mapping in $H_0$).
It is customary to identify H1 with the subspace j ŒH1 H0 and we shall do so henceforth. In this case, the embedding j W H1 ! H0 becomes the canonical embedding 1H1 ,!H0 2 L.H1 ; H0 / given by 1H1 ,!H0 W H1 ! H0 ; x 7! x: } Then, its dual 1H gives the corresponding canonical embedding of H0 into H1 . 1 ,!H0 We have by definition of the dual mapping that } .1H f /.x/ D f .1H1 ,!H0 x/ 1 ,!H0
for all f 2 H0 and x 2 H1 . Consequently, } 1H f D f ı 1H1 ,!H0 1 ,!H0
for all f 2 H0 . Combining the two continuous and dense embeddings with the Riesz mapping RH0 gives 1H1 ,!H0
} 1H
RH0
1 ,!H0
* H1 ,! H0 ) H0 ,! H1 : RH
0
If we choose to identify H0 and H0 , then RH0 turns into the identity 1H0 and we may glue these embeddings even closer together } 1H
1H1 ,!H0
1 ,!H0
H1 ,! H0 D H0 ,! H1 or more concisely 1H1 ,!H0
} 1H
1 ,!H0
H1 ,! H0 ,! H1 : As a consequence of identifying H0 and H0 we have f .x/ D hf j xiH0
(2.1.2)
for all f 2 H0 D H0 and all x 2 H0 D H0 . For f 2 H0 and g 2 H1 , we have by definition of h j iH1 and RH1 } } h1H f j giH1 D hRH1 .1H f / j RH1 giH1 1 ,!H0 1 ,!H0 } D .1H f /.RH1 g/ D f .1H1 ,!H0 RH1 g/: 1 ,!H0
With (2.1.2) we have f .1H1 ,!H0 RH1 g/ D hf j 1H1 ,!H0 RH1 giH0 and so } hf j giH1 D h1H f j giH1 D hf j 1H1 ,!H0 RH1 giH0 D hf j RH1 giH0 1 ,!H0 (2.1.3)
for all f 2 H0 and g 2 H1 . A triple .H1 ; H0 ; H1 / with H1 H0 and H1 densely and continuously embedded in H0 via x 7! x is known as a Gelfand triple2 or a rigged Hilbert space. Realizing that the canonical dense and continuous embedding ‘,!’ may be considered as an order relation, we see that ¹H1 ; H0 ; H1 º is a totally ordered set, i.e. a chain. We shall therefore refer to the triple .H1 ; H0 ; H1 / also as a Gelfand chain. Since 1H1 ,!H0 is a bounded operator on H1 , the unbounded “identity” mapping given by 1H1 H0 !H1 WD .1H1 ,!H0 /1 is a closed operator and so A WD .1H1 H0 !H1 / 1H1 H0 !H1 is a non-negative3 , selfadjoint operator A H0 ˚ H0 satisfying D.A/ H1 . We p recall that for the selfadjoint, strictly positive operator square root4 A of A we have 2 By making the appropriate natural identifications, we may and shall – as a matter of convenience – always assume that H0 H1 : 3
A is actually a (strictly) positive operator. Indeed, we find jxjH0 jAxjH0 hxjAxiH0 D hxj.1H1 H0 !H1 / 1H1 H0 !H1 xiH0 D h1H1 H0 !H1 xj1H1 H0 !H1 xiH1 D hxjxiH1
1 hxjxiH0 k1H1 ,!H0 k2
for all x 2 D.A/ and so A
1 : k1H1 ,!H0 k2
4 A convenient way to think of the square root of A is in the framework of the function calculus associated with A via the spectral theorem, compare e.g. [6]. It follows that p 1 A : k1H1 ,!H0 k
that
p j AxjH0 D jxjH1
p for all x 2 D. A/ D H1 :
Now, since H1 H1 is dense in H1 , we may consider the densely-defined operator e .1/ W H1 H1 ! H1 ; C p x 7! Ax: e / if there is an f 2 H such that Then y 2 D.C 1 .1/ p e .1/ x j yiH D h Ax j yiH D hx j f iH hC 1 1 1 e .1/ / D H1 and utilizing (2.1.3), the definition of the Riesz map and for all x 2 D.C the identification of H0 with H0 we find hx j f iH1 D hRH1 x j RH1 f iH1 D x.RH1 f / D hx j RH1 f iH0 and on the other hand p hx j f iH1 D h Ax j yiH1 p D hRH1 Ax j RH1 yiH1 p D Ax.RH1 y/ p D h Ax j RH1 yiH0 : Thus p h Ax j RH1 yiH0 D hx j RH1 f iH0 e .1/ / D H1 and so for y 2 D.C e / we read off that RH1 y 2 for all x 2 D.C .1/ p D. A/ and p A RH1 y D RH1 f so that
p e C .1/ RH1 A RH1 : p ŒD. A/ we obtain the reverse inclusion. This shows that Similarly, for y 2 RH 1 p e C .1/ is unitarily equivalent to A and therefore also selfadjoint with the same specp e is strictly positive and selfadjoint and consequently trum as A. In particular, C .1/
e D C e D C e .1/ DW C.1/ C .1/ .1/
is a strictly positive, selfadjoint p operator in H1 . Moreover, for x 2 H0 H1 we recall (2.1.2) and, noting that A is onto H0 , we get ˇ ˇ ² ³ ² ³ jx.y/j ˇˇ jhx j yiH0 j ˇˇ jxjH1 D sup y 2 H1 n ¹0º D sup y 2 H1 n ¹0º jyjH1 ˇ jyjH1 ˇ ˇ ² p 1 p ³ jh A x j A yiH0 j ˇˇ D sup p ˇ y 2 H1 n ¹0º j A yjH0 ˇ ² p 1 ³ jh A x j ziH0 j ˇˇ D sup ˇ z 2 H0 n ¹0º jzjH0 p 1 D j A xjH0 :
We see in particular that p e .1/ C.1/ H1 ˚ H1 : A H0 ˚ H0 H1 ˚ H1 and C p This suggests that we consider A as a linear operator in H1 ˚ H1 and then by definition we have p A C.1/ or C.0/ WD
p
A D C.1/ jH0 ;
1 C.0/
p
1
A
1 D C.1/ jH0 :
This association between the Gelfand triple and the operator C.0/ motivates the following definition. Definition 2.1.2. Let C H ˚ H be a densely defined, closed linear operator in a Hilbert space H with 0 in the resolvent set %.C /5 . Then the triple .H1 .C /; H; H1 .C // 5 We recall the definition of the resolvent set associated with a linear operator B acting in a Hilbert space H :
%.B/ WD ¹ 2 C j . B/ is injective, . B/ŒH is dense in H ; . B/1 is boundedº: The resolvent set associated with a closed linear operator B can be shown to be always open, its complement is therefore closed in C and known as the spectrum .B/ of the linear operator B. We take the opportunity to also recall the major parts of the spectrum. The point spectrum P .B/ is defined as the set P .B/ WD ¹ 2 C j . B/ is not injectiveº; the continuous spectrum C .B/ is defined as the set C .B/ WD ¹ 2 C j . B/ is injective, ŒH . B/1 D D.. B/1 / is dense in H , . B/1 is not uniformly continuousº:
with H1 .C / denoting D.C / equipped with the norm jC jH and H1 .C / the dual space of H1 .C / (denoting analogously D.C / equipped with the norm jC jH ) will be called a short (abstract) Sobolev chain. We need to verify that this is well-defined, i.e. that .H1 .C /; H; H1 .C // is a chain. That H1 .C / is a Hilbert space follows since C was assumed to be closed and jxjH D jC 1 C xjH kC 1 k jC xjH D kC 1 k jxjH1 .C / :
(2.1.4)
Indeed, hC j C iH is an inner product in H1 .C / (with definiteness following from the invertibility of C ). Moreover, according to (2.1.4) we have a continuous mapping of H1 .C / into H given by the identity and therefore an embedding, and since C is densely defined we also have H1 .C / dense in H . Thus, we have established the canonical embedding H1 .C / ,! H: The analogous results hold for C in place of C so that H1 .C / ,! H: Therefore we also have the canonical dual embedding H ,! H1 .C / DW H1 .C /: Identifying H with H , we get H1 .C / ,! H D H ,! H1 .C / and so .H1 .C /; H; H1 .C // is totally ordered with respect to the canonical dense and continuous embeddings, i.e. indeed a chain. We see that, by definition, a short Sobolev chain associated with a selfadjoint operator C is a Gelfand chain. Conversely, we already noticed that every Gelfand chain may be reformulated as a short Sobolev chain with the role of C played by the selfadp joint operator A, where A is the selfadjoint operator associated with the canonical embedding of H1 in H0 . As our terminology suggests, there are also long Sobolev chains extending short ones. Before they can be properly defined we need some supporting results. and the residual spectrum R .B/ is defined as the set R .B/ WD ¹ 2 C j . B/ is injective, ŒH . B/1 D D.. B/1 / is not dense in H º: These spectral parts are clearly disjoint and together make up all of the spectrum: .B/ D P .B/ [ R .B/ [ C .B/:
Lemma 2.1.3. Let C H ˚ H be a densely defined, closed linear operator in a Hilbert space H with 0 2 %.C /. Then C n is also closed and densely defined for every n 2 N. Moreover, we have 0 2 %.C n / and jC n xjH kC 1 kn jxjH
(2.1.5)
for all x 2 D.C n /. In particular, D.C n / equipped with the norm jC n jH is a Hilbert space denoted by Hn .C /, n 2 N. The mapping CkC1;k W HkC1 .C / ! Hk .C /; x 7! C x is unitary, k 2 N. Proof. We prove this result by induction. For n D 0; 1 the result is obviously true by assumption. So let C k be densely defined and closed for all k 2 N; k n. We first show that C nC1 is closed. Let x D .xk /k be a sequence in D.C nC1 / with xk ! u and C nC1 xk ! y as k ! 1. If we can show that u 2 D.C nC1 / and C u D y, we have confirmed closedness of C nC1 . Since 0 2 %.C /, we get C n xk D C 1 C nC1 xk ! C 1 y for k ! 1. By the assumed closedness of C n we get C 1 y D C n u and consequently C n u 2 D.C /, i.e. u 2 D.C nC1 /. Applying C to both sides we see that y D C nC1 u. Now we want to show that C nC1 is densely defined. Let x be a sequence in D.C n / approximating Cy; y 2 D.C /. Then we have C 1 xk ! y
in H
and C n xk D C nC1 C 1 xk , i.e. .C 1 xk /k is a sequence in D.C nC1 / converging to y 2 D.C /. Since on the other hand D.C / is dense in H , the desired denseness of D.C nC1 / in H follows. Next we show estimate (2.1.5). Clearly, jC n yjH kC 1 kn jyjH for all y 2 H . Letting y D C n x with x 2 D.C n / gives jxjH D jC n C n xjH kC 1 kn jC n xjH which yields estimate (2.1.5). If we can show that C n is onto, then we have 0 2 %.C n /. But this is clear, since C n C n y D y
and C n y 2 D.C n / for every y 2 H . That Hn .C / is a Hilbert space is now also clear. Indeed, hC n j C n iH is clearly a semi-inner product and from (2.1.5) definiteness follows. Therefore only completeness remains to be shown. Again by (2.1.5) we have, with ˛ WD 1CkC11 kn 2 0; 1Œ, kC 1 kn 1 C kC 1 kn
q
2 2 jxjH C jC n xjH ˛ kC 1 kn jxjH C .1 ˛/ jC n xjH
jC n xjH q 2 2 jxjH C jC n xjH for all x 2 D.C n / and completeness follows from the completeness of the domain of a closed operator with respect to the graph norm. Finally, we have jCkC1;k yjHk .C / D jC yjHk .C / D jC kC1 yjH D jyjHkC1 .C / for all y 2 D.C kC1 /; k 2 N. Thus, CkC1;k is isometric. Moreover, CkC1;k is also onto: if y 2 D.C k / then x D C 1 y 2 D.C kC1 / and CkC1;k x D C x D C C 1 y D y. So, CkC1;k is indeed unitary. Recalling C also satisfies the assumptions imposed on C , i.e. C is a densely defined, closed linear operator with 0 2 %.C / D %.C / , we also get Hilbert spaces Hn .C /;
n 2 N:
We introduce their dual spaces and define Hn .C / WD Hn .C / for n 2 N. Realizing that Hn .C / ,! H ,! Hn .C / D Hn .C / constitutes a Gelfand chain .Hn .C /; H; Hn .C / / we have that H H0 .C / D H0 .C / is densely and continuously embedded in Hn .C / via the corresponding canonical embedding. Definition 2.1.4. Let C H ˚ H be a densely defined, closed linear operator in Hilbert space H with 0 2 %.C /. Then the family of Hilbert spaces .Hn .C //n2N is called the positive Sobolev chain associated with C , the family of Hilbert spaces .Hn .C //n2N is called the negative Sobolev chain associated with C . The family .Hn .C //n2Z is called the (long) Sobolev chain associated with C .
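A concrete instance of Definition 2.1.4 (our own sketch, not from the text): in a truncated model of $\ell^2$ take $C$ to be multiplication by $k+1$, so that $0 \in \rho(C)$ and $\|C^{-1}\| = 1$; the norms of the chain $(H_n(C))_{n\in\mathbb{Z}}$ are then weighted $\ell^2$ norms with weights $(k+1)^{2n}$, and they increase with $n$.

import numpy as np

K = 50                                  # truncation: modes k = 0, ..., K-1
k = np.arange(K)
c = k + 1.0                             # C = multiplication by k+1 (selfadjoint, C >= 1)

def chain_norm(x, n):
    """|x|_{H_n(C)} = |C^n x|_H in the truncated model (n may be negative)."""
    return np.linalg.norm(c**n * x)

rng = np.random.default_rng(3)
x = rng.standard_normal(K) / (k + 1.0)**2    # decays fast enough to lie in H_2(C)

for n in range(-2, 3):
    print(n, chain_norm(x, n))
# monotonicity |x|_{H_n} <= ||C^{-1}|| |x|_{H_{n+1}} of the chain, cf. (2.1.5) and (2.1.6)
print(all(chain_norm(x, n) <= chain_norm(x, n + 1) + 1e-12 for n in range(-2, 2)))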
Remark 2.1.5. We see that a long Sobolev chain joins the positive Sobolev chain associated with C and the dual of the positive Sobolev chain associated with C . Thus, in general Hn .C / is not the dual space of Hn .C /. The purpose of the construction of a long Sobolev chain is to produce a continuous extension of the operator C to all spaces of the chain. Speaking of a Sobolev chain is justified by the follow result. Theorem 2.1.6. Let .Hn .C //n2Z be the Sobolev chain associated with the operator C . Then we have for all n 2 Z and k 2 N HnCk .C / ,! Hn .C / (in the sense of the canonical embeddings being dense and continuous). The mappings e nC1;n W D.C jnjC1 / HnC1 .C / ! Hn .C /; C x 7! C x e nC1;n for all n 2 Z. have unitary closures CnC1;n WD C Proof. That for all n 2 N HnC1 .C / ,! Hn .C /
(2.1.6)
is a consequence of 0 2 %.C /: jyjHn .C / D jC n yjH D jC 1 C nC1 yjH kC 1 k jyjHnC1 .C / for all y 2 HnC1 .C /. Since D.C / is dense in H , we have a sequence x D .xk /k in D.C / such that xk ! C nC1 y for k ! 1 and so we also see that C n C n xk ! C n y for k ! 1 and that .C n xk /k is a sequence in D.C nC1 / converging to y in Hn .C /. Since y 2 D.C nC1 / was arbitrary, this shows the density of the embedding. Also for n 2 Z 0 for all 2 RnC1 . Then there is a positive constant C and a rational number c 2 Q such that6 P ./ C ji ˙ ejce for all 2 RnC1 : Proof. Noting that the set E defined as E WD ¹.x; y; / 2 R3Cn j P ./ C y D 0 ^ x ji ˙ ej2e D 0º is semi-algebraic in R3Cn , we also see that subgraph fE of this set is well-defined for all sufficiently large positive arguments. Indeed, fE is a negative function since P is positive. Hence by the previous result we have fE .x/ D C1 x a .1 C o.1// as x ! C1, where C1 2 R0 such that (with c D 2a) P ./ C ji ˙ ejce 6
for all 2 RnC1 :
Here and in the following an expression of the form $|z|^{\alpha}$, $z \in \mathbb{C}^{n+1}$, $\alpha \in \mathbb{R}^{n+1}$, abbreviates $|z_0|^{\alpha_0} \cdots |z_n|^{\alpha_n}$. The symbol $e$ here is our abbreviation for $(1, \dots, 1)$.
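The algebraic lower bound just stated can be made plausible numerically. The sketch below (our own illustration, not from the text) takes the strictly positive polynomial $P(x_0, x_1) = (x_0 x_1 - 1)^2 + x_0^2$, whose infimum over $\mathbb{R}^2$ is $0$, and observes that its minimum over the box $\max_k |x_k| \le R$ decays only at an algebraic rate in $R$, in line with a bound of the form $P(x) \ge C\, |i x \pm e|^{c e}$ for some rational $c$.

import numpy as np

def P(x0, x1):
    # strictly positive on R^2, yet with infimum 0 "at infinity"
    return (x0 * x1 - 1.0)**2 + x0**2

Rs = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
mins = []
for R in Rs:
    x1 = np.linspace(-R, R, 4001)
    x0 = x1 / (x1**2 + 1.0)          # exact minimizer of the quadratic x0 -> P(x0, x1)
    mins.append(P(x0, x1).min())

# log-log slope is about -2: the minimum decays algebraically, not faster
slope = np.polyfit(np.log(Rs), np.log(np.array(mins)), 1)[0]
print(mins, slope)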
Corollary 3.1.8. Let $P$ be a polynomial in $\mathbb{R}^{n+1}$, $n \in \mathbb{N}$, with $P(i x) \ne 0$ for all $x \in \mathbb{R}^{n+1}$. Then there is a positive constant $C$ and a rational number $c \in \mathbb{Q}$ such that
$$|P(i x)| \ge C\, |i x \pm e|^{c e} \quad \text{for all } x \in \mathbb{R}^{n+1}.$$

Proof. Applying the previous corollary to the polynomial $x \mapsto |P(i x)|^2$ yields the result.

For later use we note the following result.

Corollary 3.1.9. Let $P$ be a polynomial in $\mathbb{R}^{n+1}$, $n \in \mathbb{N}$, with $P(i x + \varrho\, \nu) \ne 0$ for a fixed $\nu \in \mathbb{R}^{n+1}$, $|\nu| = 1$, and all $x \in \mathbb{R}^{n+1}$, $\varrho \in [\varrho_0, \infty[$, $\varrho_0 \in \mathbb{R}$. Then there is a positive constant $C$ and a rational number $c \in \mathbb{Q}$ such that
$$|P(i x + \varrho\, \nu)| \ge C\, (|i x + \varrho\, \nu|^2 + e)^{c e} \quad \text{for all } \varrho \in [\varrho_0, \infty[,\ x \in \mathbb{R}^{n+1}.$$
Proof. Noting that E WD ¹.x; y; ; %/ 2 R1C1C.nC1/C1 j x .j i C % j2 C e/e D 0 ^ jP .i C % /j2 C y D 0 ^ % %0 º is also semi-algebraic and applying the arguments of the proof of Corollary 3.1.7 to the real polynomial .; %/ 7! jP .i C % /j2 the result follows. Finally, Corollary 3.1.8 also answers our current question about the meaning of invertibility for partial differential equations. Theorem 3.1.10. Let P .@/ be a linear ..s C 1/ .s C 1//-partial differential expression in RnC1 , n; s 2 N. If P .@/ is invertible in H1 .@ C e/ then P .@/1 W HnC1;sC1;tC1;1 .@ C e/ ! HnC1;sC1;tC1;1 .@ C e/ exists as a continuous8 Here and in the following we use as a convenient abbreviation x 2 C e WD .x02 C 1; : : : ; xn2 C 1/, nC1 . With some care, there is little danger confusing this with the quite different x D .x0 ; : : : ; xP n/ 2 C n 2 notation x D kD0 xk2 , x D .x0 ; : : : ; xn / 2 C nC1 , which we also shall use. The first interpretation is only used if x 2 occurs in conjunction with another vector. 8 By this we mean that for every A 2 .ZnC1 /.sC1/.t C1/ there is a B 2 .ZnC1 /.sC1/.t C1/ such that 7
HA .@ C e/ ! HB .@ C e/; ˆ 7! P .@/1 ˆ is a continuous mapping. Indeed, closer inspection of the proof yields a stronger result, namely that there is a D 2 .ZnC1 /.sC1/.t C1/ such that for every A 2 .ZnC1 /.sC1/.t C1/ HA .@ C e/ ! HAD .@ C e/; ˆ 7! P .@/1 ˆ is continuous.
operator for every t 2 N. The inverse is given by Cramer’s rule as P .@/1 D P .@ C /1 D det.P .@ C //1 cof.P .@ C //> : Proof. Of course we would expect the solution operator to be given as P .@ C /1 D det.P .@ C //1 cof.P .@ C //> : Since cof.P .@ C//> as a partial differential operator is continuous in H1 .@ Ce/, it remains to be seen that det.P .@ C //1 W H1 .@ C e/ ! H1 .@ C e/ is continuous. For this it suffices to interpret this as the inverse of a scalar partial differential operator. The latter is equivalent to showing that det.P .i m C //1 W H1 .i m C e/ ! H1 .i m C e/ is continuous. Let ' 2 .CV 1 .RnC1 //.sC1/.tC1/ and consider the function x 7! det.P .i x C %0 //1 '.x/ with 0 2 RnC1 , j0 j D 1, i.e. a normalized vector, and % 2 Œ0; 1Œ, which is a well-defined element of .CV 1 .RnC1 //.sC1/.tC1/ since P .@/ was assumed to be invertible in H1 .@ C e/, D % 0 . By Corollary 3.1.8, there is a rational exponent c such that jdet.P .i C //j C .ji C j2 C e/ce
for all 2 RnC1 :
(Recall that in ji Cj2 Ce the term ji Cj2 stands for .ji 0 C0 j2 ; : : : ; ji n Cn j2 /, whereas in the expression ji Cj2 C1 the term ji Cj2 would have to be interpreted as ji 0 C 0 j2 C C ji n C n j2 in order to make the expressions meaningful.) Therefore .ji C /j2 C e/ce jdet.P .i C //j1 C 1 for all 2 RnC1 . Moreover, we estimate jP .i C /1b ' .i C /j D jdet.P .i C //1 cof.P .i C //>b ' .i C /j C 1 j.ji C j2 C e/ce cof.P .i C //>b ' .i C /j for all 2 RnC1 . Since all entries in cof.P .i C //> can be uniformly estimated by a term of the form C 0 .ji C /j C e/˛ for some suitably large ˛ 2 N nC1 , we find e .jj2 C e/ˇ jb ' .i C /j C ' .i C /j jP .i C /1b
(3.1.6)
e 2 R>0 , a multi-index ˇ 2 ZnC1 and all 2 RnC1 . Applying for some constant C the inverse Fourier–Laplace transform to the resulting norm inequality we get e j'j;ACˇE jP .@ C /1 'j;A C
(3.1.7)
for every A 2 .ZnC1 /.sC1/.tC1/ where we have used the notation ˇE WD .ˇ/j D0;:::;sIkD0;:::;t : Since .CV 1 .RnC1 //.sC1/.tC1/ is dense in HACˇE .im C e/, we conclude by continuous extension that (3.1.7) holds for all ' 2 HACˇE .@ C e/.
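The proof is constructive: on the Fourier–Laplace side the solution operator is division by the symbol. The following sketch (our own illustration, not from the text; a periodic FFT serves as a crude stand-in for the Fourier–Laplace transform, with $\nu = 0$) applies this to $P(\partial) = 1 - \partial_0^2 - \partial_1^2$, whose symbol $1 + x_0^2 + x_1^2$ has no real zeros, and checks the residual.

import numpy as np

N, L = 128, 2 * np.pi
t = np.linspace(0, L, N, endpoint=False)
T0, T1 = np.meshgrid(t, t, indexing="ij")
f = np.exp(np.cos(T0) + np.sin(2 * T1))          # a smooth right-hand side

xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # discrete frequencies
XI0, XI1 = np.meshgrid(xi, xi, indexing="ij")
symbol = 1.0 + XI0**2 + XI1**2                   # P(i xi) for P(d) = 1 - d0^2 - d1^2

u = np.fft.ifft2(np.fft.fft2(f) / symbol).real   # divide by the symbol, as in the proof above

res = np.fft.ifft2(symbol * np.fft.fft2(u)).real - f   # apply P(d) spectrally and compare
print(np.abs(res).max())                         # ~ machine precision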
e do not Remark 3.1.11. We note that the multi-index ˇ and the continuity constant C depend on t 2 N. As a matter of convenience we shall henceforth again write simply CV 1 .RnC1 / for the linear space of matrix-valued functions .CV 1 .RnC1 //.sC1/.tC1/ and, as in the case of the notation H˙1 .@ C /, leave it to the context to determine the appropriate matrix size. Theorem 3.1.10 exhibits the solution theory for partial differential equations with constant coefficients which we are pursuing. There are, however, several finer points which still need to be discussed. The continuity of the inverse of P .@/ is a particular case of the general concept of continuity for operators in a Sobolev lattice. We recall that an operator W W HnC1;sC1;tC1;1 .@ C e/ ! HnC1;sC1;tC1;1 .@ C e/ is said to be continuous in HnC1;sC1;tC1;1 .@ C e/ if ^
_
A2.ZnC1 /.sC1/.t C1/
B2.ZnC1 /.sC1/.t C1/
W jHA .@ Ce/ W
HA .@ C e/ ! HB .@ C e/ continuous: In the particular case of interest to us, we have the stronger property of continuity: ^
_
^
2N B2.ZnC1 /.sC1/.t C1/ A2.ZnC1 /.sC1/.t C1/ ;sup¹Ak jkD0;:::;sºinf¹Aj jj D0;:::;sºD
W jHA .@ Ce/ W HA .@ C e/ ! HAB .@ C e/ continuous: Indeed, if W is a function of @ , then continuity is equivalent to this stronger continuity property. Proposition 3.1.12. Let F .i / W RnC1 ! C .sC1/.sC1/ be a matrix of Borel functions Fij .i / W RnC1 ! C, i; j D 0; : : : ; s, and let F .@ /jHA .@ Ce/ W HA .@ C e/ ! HB .@ C e/, 7! F .@ / WD .Fij .@ / /i;j be continuous for some A 2 .ZnC1 /.sC1/.uC1/ , B 2 .ZnC1 /.sC1/.uC1/ , s; u 2 N. Then, for every w 2 N, 2 N, we have that for all C 2 .ZnC1 /.sC1/.wC1/ such that sup¹Ckj jk D 0; : : : ; sI j D 0; : : : ; wº inf¹Ckj jk D 0; : : : ; sI j D 0; : : : ; wº D there is a D 2 .RnC1 /.sC1/.wC1/ such that F .@ /jHC .@ Ce/ W HC .@ C e/ ! HC D .@ C e/ continuous:
Proof. By assumption jF .@ / j;B C0 j j;A for some A 2 .ZnC1 /.sC1/.uC1/ ; B 2 .Z nC1 /.sC1/.uC1/ , some positive constant C0 and for all 2 HA .@ C e/. Since the concept of continuity does not depend on the number of columns and therefore works for all u if it is only assumed to hold for u D 0, it is sufficient to consider the case u D 0. Observing that by the function calculus of spectral theory Fij .@ / j@ C ej ' D j@ C ej Fij .@ /' for all ' in a dense subset of HBi C .@ C e/, 2 ZnC1 , we find that s s ˇX ˇX ˇ2 ˇ2 ˇ ˇ ˇ ˇ Fik .@ / 'k ˇ Dˇ .@ C e/ Fik .@ / 'k ˇ ˇ kD0
;Bi C
;Bi
kD0
s ˇX ˇ2 ˇ ˇ Dˇ Fik .@ / .@ C e/ 'k ˇ
;Bi
kD0
C02
s X
j.@ C e/ 'k j2;Ak
kD0
D C02
s X
j'k j2;Ak C
kD0
follows by continuous extension for all 'k 2 HAk C .@ C e/; k D 0; : : : ; s, with 2 ZnC1 arbitrary. Thus, jF .@ / 'j2;BCG C02
s X s X
j'j2;Ak CGj
j D0 kD0
holds for all ' 2 HACG .@ C e/, where G 2 .ZnC1 /sC1 is arbitrary. For Ak C Gj Ck we can estimate jF .@ / 'j2;BCG C02
s X
j'j2;ACGk C02 .s C 1/j'j2;C :
kD0
This is the case for Gj D inf¹.Ck Ak /jk D 0; : : : ; sº, j D 0; : : : ; s. With this B C G D .Bj C inf¹.Ck Ak /jk D 0; : : : ; sº/j D0;:::;s and B C G C D holds for D WD .sup¹Ck jk D 0; : : : ; sº inf¹Bj jj D 0; : : : ; sº inf¹Ck jk D 0; : : : ; sº C sup¹Ak jk D 0; : : : ; sº/j D0;:::;s .Cj Bj inf¹.Ck Ak /jk D 0; : : : ; sº/j D0;:::;s D C B G:
We are led to the following definition. Definition 3.1.13. Let F .i / W RnC1 ! C .sC1/.sC1/ be a matrix of Borel functions Fij .i / W RnC1 ! C, i; j D 0; : : : ; s, and let F .@ / be continuous in H1 .@ C e/, 2 R, s 2 N. If A WD .Ak /kD0;:::;s 2 .N nC1 /.sC1/ is such that9 _ ^ ^ jFkj .i /j C0 .ji C j2 C e/Ak =2 a:e:
C0 2R>0 2RnC1 k;j 2¹0;:::;sº
then we shall say that F .@ / has a regularity loss of at most A . Correspondingly, if ^ ^ _ jFkj .i /j C0 .ji C j2 C e/Ak =2 a:e:
C0 2R>0 2RnC1 k;j 2¹0;:::;sº
we shall speak of a regularity gain of at least A for F .@ /. If there is a possible regularity loss A which is minimal with respect to the order relation (3.1.3), then we shall call A a minimal regularity loss of F .@ / in H1 .@ Ce/, (i.e. there is no regularity loss B with component-wise B A and B ¤ A). A regularity gain A will be called maximal regularity gain of F .@ / in H1 .@ C e/, if A is maximal with respect to the same order relation, (i.e. there is no regularity gain B A with B ¤ A). As a matter of convenience we shall allow for an entry 1, if the F .@ / if the regularity gain or loss is unbounded and still speak of maximal regularity gain and minimal regularity loss. Of course our main interest here is to use these concepts for partial differential operators P .@ C / and their inverses. Let us consider some examples. P Example 3.3. Consider the (negative) Laplacian WD @2 WD nkD0 @2k as a .1 1/-partial differential expression. We find 2 D 02 C C n2
n Y
.j2 C 1/
j D0
showing a regularity loss of at most .2; : : : ; 2/ in H1 .@ C e/. In this Sobolev chain, there is no possibility of improving this estimate. Indeed, assuming 2 D 02 C C n2 C
n Y
.j2 C 1/˛j =2
j D0
with one of the ˛j , say for j D 0, satisfying 0 ˛0 < 2, and setting all but the 0-th component of equal to zero, gives 02 C .1 C 02 /˛0 =2 ; 9 Here the symbol is saying that needs only hold almost everywhere in the sense of the Lebesgue measure in RnC1 . a:e:
which leads to a contradiction for large arguments 0 2 R. Therefore, the only minimal regularity loss in H1 .@ C e/ is indeed equal to .2; : : : ; 2/. The differential operator @2 C 1 is invertible in H1 .@ C e/ and, in contrast to the minimal regularity loss statement for @2 C 1 above, .@2 C 1/1 has regularity gain of (at least) ˛ for all ˛ 2 N nC1 with j˛j1 D 2. Indeed, for ˛ 2 N nC1 such that j˛j1 D 2 and all x 2 RnC1 we have with y D .x02 ; : : : ; xn2 / n Y
.y C e/˛ D
.xj2 C 1/˛j
j D0 n X n Y
j D0
˛j xk2 C 1
kD0
2
where x 2 WD
.x C 1/2 ;
Pn
2 kD0 xk
is used as an abbreviation. For any ˛ 2 N nC1 with k D
j˛j1 > 2 we cannot have such an estimate, since
.t 2 C1/k=2 .nC1/ t 2 C1
is unbounded10 for t 2
R>0 . Thus, every ˛ 2 N nC1 with j˛j1 D 2 yields a maximal regularity gain for .@2 C 1/1 . Example 3.4. Consider P .@/ WD .@0 @1 C1/2 @21 in R2 . P .@/ is obviously invertible in H1 .@ C e/. P .@/1 features a regularity loss. In fact, we find .x02 C 1/ P .i x0 ; i x1 / 1 showing a regularity loss of at most .2; 0/. Moreover, this estimate implies .x02 C 1/s=2 P .i x0 ; i x1 / .x02 C 1/s=21 ! 0 for all s 2 1; 2Œ. That the lower estimate for such s cannot be improved can be seen by observing that .x02 C 1/s=2 P .i x0 ; i x1 / D .x02 C 1/s=21 for all s 2 1; 2Œ when x1 D
x0 . x02 C1
Furthermore,
.x12 C 1/s=2 P .i x0 ; i x1 / D for all s 2 R when x1 D
x0 x02 C1
1C
s=2 x02 .x02 C1/2 x02 C 1
!0
as x0 ! 1. As a consequence we also find
.x02 C 1/s=2 .x12 C 1/t=2 P .i x0 ; i x1 / ! 0 for all s 2 1; 2Œ; t 2 R, x1 D
x0 x02 C1
and x0 ! 1. Therefore, ˛ D .2; 0/ is the
(only) minimal (multi-index) regularity loss for P .@/1 . 10
This quotient refers to the case x D t e.
Example 3.5. Consider the 2 2-partial differential expression11 @2 @0 b P .@0 ; b @/ WD 1 @0 in R1Cn . Then det.P .@0 ; b @// D @20 b @2 is invertible with respect to D .%; 0; : : : ; 0/, % 2 R n ¹0º. We have for every x0 2 R, b x D .x1 ; : : : ; xn / 2 Rn , det.P .i x0 C %; ib x // D .i x0 C %/2 C b x 2 D x02 C %2 C b x 2 C 2 i % x0 and so jdet.P .i x0 C %; ib x //j2 D .x02 %2 b x 2 /2 C 4 %2 x02 D .x02 b x 2 /2 C %4 C 2 x02 %2 C 2 %2 b x2 %4 C 2 x02 %2 C 2 %2 b x2
(3.1.8)
%2 .%2 C 2 x02 C 2b x2/ x2/ %2 .%2 C x02 C b %2 .%2 C xk2 /
for k D 0; : : : ; n. This proves invertibility with respect to all such as well as a @2 /1 of at least ˛ 2 N nC1 with j˛j1 D 1. Moreover, regularity gain for .@20 b x 2 /2 C %4 C 2 x02 %2 C 2 %2 b x 2 / D %2 .%2 C 2 x02 C 2b x2/ ..x02 b D %2 .%2 C 4 x02 / for x02 D b x 2 . Let k; s 2 ¹1; : : : ; nº, k ¤ s, and xk D xs D
p1 2
x0 . Then
.xs2 C 1/1 .1 C xk2 /1 %2 .%2 C 4 x02 / D 4 .2 C x02 /2 %2 .%2 C 4 x02 / ! 0 as x0 ! 1. If one of the indices, say s, is equal to zero then choosing xk D x0 also yields .x02 C 1/1 .1 C xk2 /1 %2 .%2 C 4 x02 / D .1 C x02 /2 %2 .%2 C 4 x02 / ! 0 as x0 ! 1. This shows that the regularity gains of ˛ 2 N nC1 with j˛j1 D 1 for .@20 b @2 /1 are maximal. However, @0 b @2 P .@0 ; b @/1 D .@20 b @2 /1 1 @0 11
Since here b @ abbreviates .@1 ; : : : ; @n /, we have b @2 D
P
2 kD1;:::;n @k .
features a regularity loss. We consider the absolute value squared corresponding to the matrix entry with the most detrimental regularity property, i.e. .b x 2 /2 .x02 b x 2 /2 C %2 .%2 C 2 .x02 C b x 2 //
.b x 2 /2 1 !1 2 max.1; %4 / .x02 b x 2 /2 C 1 C x02 C b x2
for x02 D b x 2 and x0 ! 1 showing that there can be no regularity gain. On the other hand, by elementary calculus we estimate .b x 2 /2 1 .b x 2 /2 min.1; %4 / .x02 b .x02 b x 2 /2 C %2 .%2 C 2 .x02 C b x 2 // x 2 /2 C 1 C x02 C b x2
1 .1 C x02 /; min.1; %4 /
suggesting a regularity loss of at most .1; 0; : : : ; 0/ for det.P .@0 ; b @//1 b @2 and so, for 1 P .@0 ; b @/ , a (maximal) regularity loss ..1; 0; : : : ; 0/; .0; 0; : : : ; 0//. Example 3.6. Consider the 2 2-partial differential expression @0 @1 P .@0 ; @1 / WD @1 @0 in R1C1 . Then, by the previous example with n D 1, det.P .@0 ; @1 // D @20 @21 is invertible with respect to D .%; 0/, % 2 R n ¹0º, and so we have that P .@0 ; @1 / is also invertible. However, considering i x0 C % i x1 P .i x0 C %; i x1 /1 D .x02 %2 x12 2 i % x0 /1 ; i x1 i x0 C % yields a regularity loss ..0; 0/; .0; 0//. Indeed, we estimate x12 1 2: 2 2 2 2 2 2 2 2% .x0 x1 / C % .% C 2 .x0 C x1 // There is also a maximal regularity gain equal to ..0; 0/; .0; 0//. Example 3.7. Consider P .@0 ; b @/ WD @0 b @2 in R1C.nC1/ , n 2 N. Then P .@0 ; b @/ is invertible with respect to D .%; 0; : : : ; 0/ for % > 0, since jP .i x0 C %; ib x /j2 D x 2 C %/2 %2 > 0. Moreover, x02 C .b x02 C .b x 2 C %/2 min.1; %2 / .x02 C .b x 2 C 1/2 /
and x 2 C %/2 x02 C .b x 2 /2 C %2 x02 C .b min.1; %2 / .x02 C 1/u ..b x 2 /2 C 1/v min.1; %
2
/ .x02
u
C 1/
n Y
.xj4 C 1/ˇj v
j D1
min.1; %2 / .x02 C 1/u
n 1 Y 2 .xj C 1/2ˇj v ; 2 j D1
for every ˇ 2 N nC1 ; jˇj1 D 1, u; v 2 ¹0; 1º, u C v D 1. Therefore, all multi-indices of the form ˛ D .˛0 ; 2 / with 2 N n ; jj1 D 1 ˛0 , and ˛0 2 ¹0; 1º are maximal regularity gains for P .@0 ; b @/1 in H1 .@ C e/.
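The elementary estimate underlying Example 3.7 can be spot-checked numerically; the sketch below (ours, not from the text, with $n = 1$, i.e. two spatial variables) samples the inequality $x_0^2 + (\hat{x}^2 + \varrho)^2 \ge \min(1, \varrho^2)\,(x_0^2 + (\hat{x}^2 + 1)^2)$ at random points.

import numpy as np

rng = np.random.default_rng(7)
rho = 0.3
ok = True
for _ in range(10_000):
    x0 = rng.uniform(-50, 50)
    xhat = rng.uniform(-50, 50, size=2)
    lhs = x0**2 + (xhat @ xhat + rho)**2
    rhs = min(1.0, rho**2) * (x0**2 + (xhat @ xhat + 1.0)**2)
    ok = ok and (lhs >= rhs - 1e-12)
print(ok)    # True on all sampled points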
3.1.3 Regularity Loss .0; : : : ; 0/ Of particular interest are ..s C1/.s C1//-partial differential operators P .@ C/ for which the solution operator P .@ C /1 features no regularity loss, because these can be considered as normal operators with zero in the resolvent set in each space HA .@ C e/ of the Sobolev lattice .HA .@ C e//A2.ZnC1 /.sC1/.t C1/ . In this case the inverse P .@ C /1 jHA .@ Ce/ W HA .@ C e/ ! HA .@ C e/ is the inverse of the normal operator P .@ C/jHA .@ Ce/ W DA .P .@ C// HA .@ Ce/ ! HA .@ Ce/. Here DA .P .@ C // WD P .@ C /1 HA .@ C e/ is the domain of the normal operator P .@ C /jHA .@ Ce/ . From the general theory of Sobolev lattices and earlier observations on the independence on the number of columns t of the data, it is clear that it suffices to consider the case of multi-index A D 0 2 .ZnC1 /sC1 , i.e. there is only one column. To have zero in the resolvent set, we would need an estimate of the form jP .@ C /uj;0 c0 juj;0 for some c0 2 R>0 and all u 2 D0 .P .@ C //. Applying the Fourier–Laplace transform, we see that this is equivalent to v uX Z s ˇX ˇ2 u s ˇ ˇ t Pkj .i x C / uj .x/ˇ dx D jP .i m C /uj0 ˇ kD0
RnC1 j D0
c0 juj0 v uX Z u s t D c0 j D0
RnC1
juj .x/j2 dx
for all u 2 D0 .P .i m C //. We observe that s ˇX s ˇ2 X ˇ ˇ Pkj .i x C / uj .x/ˇ ˇ kD0 j D0
D
s s X X
Pkj .i x C / uj .x/
D
uj .x/
D
j D0
s s X X
Pkj .i x C / Pkm .i x C / um .x/
mD0 kD0
j D0 s X
Pkm .i x C / um .x/
mD0
kD0 j D0 s X
s X
uj .x/
s s X X
Pkj .i x C / Pkm .i x C / um .x/:
mD0 kD0
Thus, we are led to the point-wise condition P .i x C / P .i x C / c02 ;
(3.1.9)
where P .i x C / D .Pj i .i x C / /i;j D1;:::;s . We notice that P .i m C / D P .i m C /, so that (3.1.9) is the (strict) positive definiteness of the partial differential expression P .@ C / P .@ C /. Linear algebra tells us that (3.1.9) can be stated as the uniform positivity of the smallest eigenvalue of P .i x C / P .i x C /, in other words the uniform positivity of the smallest characteristic value of P .i x C/.
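Condition (3.1.9), the uniform positivity of the smallest characteristic value of $P(i x + \nu)$, is easy to probe numerically. The sketch below (our own, not from the text) does this for the $2\times 2$ system of Example 3.6 with $\nu = (\varrho, 0)$: the smallest singular value of the symbol stays bounded below by $\varrho$ on the sampled grid.

import numpy as np

rho = 1.0
smallest = np.inf
for x0 in np.linspace(-40, 40, 81):
    for x1 in np.linspace(-40, 40, 81):
        M = np.array([[1j * x0 + rho, 1j * x1],
                      [1j * x1, 1j * x0 + rho]])       # P(i x + nu) for Example 3.6
        smallest = min(smallest, np.linalg.svd(M, compute_uv=False)[-1])
print(smallest)    # about rho: condition (3.1.9) holds here with c0 = rho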
3.1.4 Classification of Partial Differential Equations The main prerequisite for our solution theory for a partial differential expression is the requirement to ensure that there are no real zeros of the corresponding polynomial obtained as the determinant of the Fourier–Laplace transformed partial differential expression. We shall say that an ..s C 1/ .s C 1//-partial differential expression P .@/ is well-composed, if there is a 2 RnC1 such that P .@/ is invertible with respect to ; (we shall then say that P .@/ is well-composed with respect to ). Clearly, P .@/ is well-composed with respect to a parameter vector 2 RnC1 , if and only if the problem of finding u 2 H1 .@ C e/ solving P .@/u D f for every given f 2 H1 .@ C e/ is a well-posed problem, i.e. P .@ C /1 exists as a continuous operator on H1 .@ C e/. A problem involving such a P .@/ is also called well-posed in the sense of Petrovskii (with respect to ). As indicated in Remark 3.1.11 there are further distinctions between various types of partial differential operators. They hinge eventually on the issue of causality. Postponing this deeper discussion until later, we would here like to give a preparatory, purely formal classification of well-composed partial differential expressions, the importance of which will only become clear later, when the issue of causality is actually
discussed. We mention this background here to motivate the terminology used in the next definition. Definition 3.1.14. Let P .@/ be an ..s C 1/ .s C 1//-partial differential expression 1 in RnC1 and 2 RnC1 n ¹0º. Define .0/ WD jj , the direction of the vector . (a) If there is a %0 2 R such that
det.P .i x C % .0/ // has no real zero for all % 2 Œ%0 ; 1Œ; P .@/ is called evolutionary in direction .0/ . (b) If P .@/ is evolutionary in directions .0/ and .0/ , then P .@/ is called reversibly evolutionary in direction .0/ . (c) If P .@/ is evolutionary in direction .0/ and non-evolutionary in direction .0/ , then P .@/ will be called irreversibly evolutionary in direction .0/ . (d) If P .@/ is for a direction .0/ 2 RnC1 not evolutionary, then P .@/ will be called non-evolutionary in direction .0/ . (e) If P .@/ is non-evolutionary in direction .0/ for all directions .0/ 2 RnC1 , then P .@/ will be called non-evolutionary. We shall illustrate this classification by examples. The classification of systems follows the classification of their determinant. Therefore it suffices to consider scalar partial differential expressions. P Example 3.8. The negative Laplacian @2 D nkD0 @2k is non-evolutionary in RnC1 ; n 2 N>0 . Indeed, .i C % .0/ /2 D 2 C %2 C 2 i % .0/ D 0 for % 2 R>0 is equivalent to ? .0/ and jj D %, a system which always has a real solution in RnC1 for n 2 N>0 . Example 3.9. Consider an ..sC1/.sC1//-partial differential expression P .@/ in R, which simply is an ..s C 1/ .s C 1//-matrix of ordinary differential expressions with constant coefficients. Then, det.P .@// is a scalar ordinary differential expression and Q WD det.P .// a 1-dimensional polynomial. In R the only possibilities for directions are .0/ D ˙1. Let %0 > max¹jRe j j Q. / D 0; 2 Cº. Then Q.i x ˙ %/ D 0 has no real solutions for % 2 Œ%0 ; 1Œ: This proves that in the 1-dimensional case, P .@/ is always reversibly evolutionary. Example 3.10. Q.@0 ; b @/ D @20 b @2 in R1C.nC1/ , n 2 N, where b @ D .@1 ; : : : ; @nC1 / Choosing .0/ D .˙1; 0; : : : ; 0/, we see from estimate (3.1.8) that Q.@0 ; b @/ is reversibly evolutionary in direction .0/ . Let us now consider a general direction .0/ WD ..0/;0 ; .1/ / WD ..0/;0 ; .0/;1 ; : : : ; .0/;nC1 / 2 R1C.nC1/
with .1/ ¤ 0. Then for % 2 R>0 Q.i x0 C % .0/;0 ; i b x C % .1/ / x C % .1/ /2 D .i x0 C % .0/;0 /2 .ib 2 D x02 C 2 i % x0 .0/;0 C b x 2 2 i %b x .1/ %2 .j.1/ j2 .0/;0 / 2 D x02 C b x 2 %2 .j.1/ j2 .0/;0 / C 2 i % .x0 .0/;0 b x .1/ /:
This is zero if and only if x0 2 R and b x 2 RnC1 satisfy x0 .0/;0 D b x .1/ ; 2 b x 2 D x02 C %2 .j.1/ j2 .0/;0 /:
Since b x D .b x .1/ / j.1/ j2 .1/ C with ? .1/ , for this to hold we require 2 .b x .1/ /2 j.1/ j2 C 2 D x02 .0/;0 j.1/ j2 C 2
and 2 2 0 D x02 x02 .0/;0 j.1/ j2 2 C %2 .j.1/ j2 .0/;0 / x02 2 D .j.1/ j2 .0/;0 / %2 C 2: j.1/ j2 2 2 This is possible if and only if j.1/ j2 .0/;0 0. Since j.1/ j2 C .0/;0 D 1, the 1 b latter is equivalent to j.0/;0 j p . In other words, Q.@0 ; @/ is non-evolutionary for
all directions v.0/ with j.0/;0 j j.0/;0 j > sets
p1 2
2 p1 . 2
In all other directions .0/ , i.e. directions with (and therefore j.1/ j < p1 ), Q.@0 ; b @/ is reversibly evolutionary. The 2
¹.t; x/ 2 R1C.nC1/ jt 2 D x 2 ^ ˙t 0º D ¹.t; x/ 2 R1C.nC1/ jt D ˙jxj º describe a double cone12 hypersurface. This double cone separates the regions of the parameter vector 2 R1C.nC1/ , where Q.@0 ; b @/ D Q..@0 ; b @/ C / is invertible (reversibly evolutionary in direction jj ) from those where it is non-invertible (non@/ C / evolutionary in direction ). For on the double cone hypersurface Q..@0 ; b jj
is also non-invertible (and if ¤ 0, also non-evolutionary in direction
). jj
12 The two cones are closely connected to the so-called forward or backward light-cone, respectively, although their discussion here occurs after Fourier–Laplace transformation.
Example 3.11. Consider P .@0 ; b @/ WD @0 b @2 in R1C.nC1/ , n 2 N, where b @ D .@1 ; : : : ; @nC1 /. Then from Example 3.7 we see that P .@0 ; b @/ is evolutionary with respect to the direction .1; 0; : : : ; 0/. Let us consider an arbitrary direction .0/ WD ..0/;0 ; .1/ / WD ..0/;0 ; .0/;1 ; : : : ; .0/;nC1 / 2 R1C.nC1/ with .1/ ¤ 0. Then x C % .1/ / D i x0 C % .0/;0 .ib x C % .1/ /2 P .i x0 C % .0/;0 ; ib D i x0 C % .0/;0 C b x 2 2 i %b x .1/ %2 j.1/ j2 D i .x0 2 %b x .1/ / .%2 j.1/ j2 b x 2 % .0/;0 /: As in the Example 3.10, for this to have a real zero we need 1 b x 2 D x02 %2 j.1/ j2 C 2 4 and hence
1 %2 j.1/ j2 % .0/;0 D x02 %2 j.1/ j2 C 2 ; 4 which is only possible for 1 %2 j.1/ j2 % .0/;0 x02 %2 j.1/ j2 0: 4 @/ can be evolutionary only in directions for which %j.1/ j2 This shows that P .@0 ; b .0/;0 < 0 for all sufficiently large % 2 R>0 . Therefore, .1; 0; : : : ; 0/ is the only direction in which P .@0 ; b @/ is evolutionary and so P .@0 ; b @/ is irreversibly evolutionary in this direction. Example 3.12. The system of Example 3.6 belongs to a particular class of partial differential expressions, the so-called symmetric hyperbolic systems. They are defined as ..s C 1/ .s C 1//-partial differential expressions in R1C.nC1/ of the form E0 @0 C A.b @/ C B
P where A.i / D i nC1 kD1 Ak k with Ak , k D 1; : : : ; n C 1, real symmetric, which yields a skew-selfadjoint matrix A.i / for every 2 RnC1 , where B 2 R.sC1/.sC1/ is arbitrary and E0 is assumed to be a positive definite matrix. Now consider the matrix .i C %/ E0 C A.i / C B where ; % 2 R and 2 RnC1 . We find Re..i C %/ x E0 x C x A.i / x C x B x/ D % x E0 x C x Re.B/ x
(3.1.10)
94
Chapter 3 Partial Differential Equations with Constant Coefficients
for all x 2 C sC1 . Using the Cauchy–Schwarz inequality we obtain .j%j "0 ˇ0 /jxj j..i C %/ E0 C A.i / C B/xj for all 2 R; 2 RnC1 ; x 2 C sC1 , where "0 is the smallest eigenvalue of E0 and ˇ0 the largest absolute value of the eigenvalues of Re.B/ D 12 .B C B /. As a consequence, .i C%/ E0 CA.i /CB must be non-singular for all 2 R; 2 RnC1 and therefore invertible for all D .%; 0; : : : ; 0/; % 2 R; j%j > ˇ"00 . Thus, symmetric hyperbolic systems are in particular reversibly evolutionary in direction .1; 0; : : : ; 0/ and their solution operator features no regularity loss. Remark 3.1.15. It is worth noting that the above classification scheme also contains possibly surprising limit cases. For example, any P .@/ that is invertible with respect to D .0; : : : ; 0/, where P is a polynomial in .n C 1/ variables, can be considered as reversibly evolutionary with respect to direction .0; : : : ; 0; 1/ if P is considered as a polynomial in .n C 2/ variables. Indeed, considering P as a polynomial function in .n C 2/ variables, i.e. considering Q W C nC1 C ! C; .x0 ; : : : ; xn ; xnC1 / 7! P .x0 ; : : : ; xn / we find that .x0 ; : : : ; xn ; xnC1 / 7! Q.i x0 ; : : : ; i xn ; i xn C %/ does not have any real zeros for arbitrary % 2 R. Thus, Q.@/ is evolutionary in direction .0; : : : ; 0; ˙1/.
3.1.5 The Classical Classification of Partial Differential Equations Although in our approach we shall not need to use the classical classification into elliptic, parabolic and hyperbolic differential expressions, it is instructive to establish the relationship to the above classification. At the same time it is convenient to expand our ‘jargon’ and to link up with other sources of reference. In order to make these connections we first have to gain more insight into the concept of ‘evolutionarity’. The question of finding real zeros can be reduced to the discussion of real zeros of 1-dimensional polynomials considering the other variables merely as parameters. Let P be a polynomial in .n C 1/ variables, n 2 N. Then using x D .x .0/ / .0/ C
with WD .x .x .0/ / .0/ / ? .0/ we may write P .i x C % .0/ / D P .i . .0/ C //; with WD x .0/ i %. Noting that, conversely, if for a given ? .0/ there is a root of 7! P .i . .0/ C // with I m D %, then x WD .Re / .0/ C is a real root of x 7! P .i x C % .0/ /, we obtain the following characterization.
Proposition 3.1.16. P .@/ is evolutionary in direction .0/ if and only if all roots of 7! P .i . .0/ C // have imaginary parts which are bounded below uniformly with respect to all ? .0/ ; 2 RnC1 . In other words, defining ° ˇ _ ± ˇ P .i . .0/ C // D 0 ; N.P .i /; .0/ / WD ˇ ?.0/
we must have I mŒN.P .i /; .0/ / bounded below. P .@/ is reversibly evolutionary in direction .0/ if and only if I mŒN.P .i /; .0/ / is also bounded above, i.e. I mŒN.P .i /; .0/ / is a bounded set, otherwise P .@/ is irreversibly evolutionary. P .@/ is non-evolutionary in direction .0/ if and only if I mŒN.P .i /; .0/ / is neither bounded below nor above. Proof. The result is clear from the transformation: RnC1 ! R ˚ ¹.0/ º? ; x 7! .x .0/ ; x .x .0/ / .0/ / with R ˚ ¹.0/ º? ! RnC1 ; .s; / 7! s .0/ C
as inverse. Remark 3.1.17. We note that the requirements on can be relaxed to admitting all
2 RnC1 . Indeed, if is a zero of 7! P .i . .0/ C // for an arbitrary 2 RnC1 , then C . .0/ / is a root of 7! P .i . .0/ C . . .0/ / .0/ ///, where now . . .0/ / .0/ / ? .0/ . Moreover, I m D I m. C . .0/ //, so that the characterization given in the proposition does not change. P If P .x/ WD ˛ a˛ x ˛ is a polynomial of degree #P , then P .i . .0/ C // D
X ˛
a˛ .i . .0/ C //˛ D
#P X
Qk . / k
kD0
where Qk is a polynomial of degree at most equal to #P k for k D 0; : : : ; #P . Here kŠ Qk . / equals the k-th derivative of the polynomial 7! P 1-dimensional ˛ . The polynomial P .i . .0/ C //, in particular Q#P . / D i#p a ˛ j˛j1 D#P .0/ part of P of highest order is called the principal part Pmain . With this notion we have that Q#P . / is independent of and equal to Pmain .i .0/ / D i#P Pmain ..0/ /. It may happen that Pmain ..0/ / D 0 in which case the direction .0/ is called a characteristic direction, otherwise it is called a non-characteristic direction of P .@/. The same terminology is used for square systems P .@/, if det.P .@//main has such directions.
96
Chapter 3 Partial Differential Equations with Constant Coefficients
Proposition 3.1.18. Let P .@/ be a scalar partial differential expression evolutionary in a non-characteristic direction .0/ . Then Pmain .@/ is reversibly evolutionary in direction .0/ . Moreover, N.Pmain .i /; .0/ / R. P ˛ Proof. Let P .@/ D ˛ aa @ . Since .0/ is assumed to be non-characteristic, the degree #P .i . .0/ C// of 7! P .i . .0/ C // is equal to #P and the leading coefficient of 7! P .i . .0/ C // is just i#P Pmain ..0/ / ¤ 0. The proposition follows if we can show that all roots of 7! Pmain . .0/ C / are real. For D 0 we have Pmain . .0/ C / D Pmain . .0/ / D Pmain ..0/ / #P , i.e. D 0 is the only root and has multiplicity #P . So, let ¤ 0 and define p.; / WD P .i . .0/ C //: Then, with % WD j j, WD %1 , WD %1 , we get p.; / D
#P X
pk .; / D
kD0
#P X
%k pk . ; /;
kD0
P
where pk .; / WD Pk .i. .0/ C // WD i k j˛jDk a˛ . .0/ C /˛ are homogeneous polynomials of degree k. In particular, we have p#P D i#P Pmain and our task is to show that p#P .; / D 0 has only real roots. For this we consider q% . ; / WD %#P p.; / D
#P X
%k#P pk . ; /
kD0
D p#P . ; / C
(3.1.11)
#P 1 #P k1 1 1 X pk . ; / % % kD0
and observe that q% . ; / ! p#P . ; / as % ! 1. Moreover, using the fact that the roots of a polynomial may be arranged to be continuous functions of the coefficients, we may consider the family of roots . k .1=%; //kD1;:::;#P of q% . ; / (with possible repetitions according to their multiplicity) as a family of continuous functions .s; / 7! k .s; / in .R n ¹0º/ RnC1 . Now, we achieve the result by an indirect argument. Assume there is a real 0 ? .0/ , j0 j D 1, such that we have a root 0 of p#P . ; 0 / D 0 with I m 0 ¤ 0. Since p#P . ; 0 / D .1/#P p#P . ; 0 / we may assume that I m 0 < 0. With the continuous dependence of the roots on the coefficients, see e.g. [21, appendix], we must therefore have a k 2 ¹1; : : : ; #P º such that k .s; 0 / ! 0 as s ! 0. A-fortiori we have I m k .s; 0 / ! I m 0 as s ! 0. Observing that % WD % k .1=%; 0 / is a root of p.; %0 / D P .i . .0/ C %0 // we have I m % % I m 0 D o.%/ as % ! 1
Section 3.1 Partial Differential Equations in H1 .@ C e/
97
and so, recalling I m 0 < 0 we see that I mŒN.P . i /; .0/ / cannot be bounded below, contradicting the assumption of P .@/ being evolutionary. Remark 3.1.19. A scalar partial differential expression satisfying the assumption of this proposition is called hyperbolic in the sense of Gårding or Gårding hyperbolic or – briefly – hyperbolic in direction .0/ . Correspondingly, an ..sC1/.sC1//-partial differential expression is called Gårding hyperbolic in direction .0/ if its determinant is Gårding hyperbolic in direction .0/ . As a by-product of the proof we also have the following consequence. Corollary 3.1.20. Let P .@/ D Pmain .@/ be evolutionary in a non-characteristic direction .0/ then P .@/ is reversibly evolutionary in direction .0/ . Remark 3.1.21. The proposition and the corollary state that every partial differential expression which is Gårding hyperbolic in direction .0/ has a Gårding hyperbolic principal part and that the principal part is evolutionary in a non-characteristic direction .0/ , i.e. Gårding hyperbolic in direction .0/ , if and only if it is reversibly evolutionary in this direction. The converse holds under an additional assumption. Proposition 3.1.22. Let P .@/ be a scalar partial differential expression such that .0/ is a non-characteristic direction for P .@/ and 7! Pmain . .0/ C / has only simple zeros for all ? .0/ ; 2 RnC1 n ¹0º. Then P .@/ is reversibly evolutionary in direction .0/ . Proof. The assumed simplicity of the roots implies (by the inverse function theorem) the smooth dependence of the roots on the coefficients of the polynomial, compare e.g. [21, appendix]. In particular, we have Lipschitz continuous dependence, a fact we will have to utilize in proving this result. We shall use the notations of the proof of the previous proposition. So let a normalized 0 ? .0/ be given and let 0 be a solution of p#P . ; 0 / D 0. By assumption we have I m 0 D 0. Therefore, we must have a k 2 ¹1; : : : ; #P º such that I m k .s; 0 / ! I m 0 D 0 as s ! 0. By the Lipschitz continuity of .s; 0 / 7! k .s; 0 / we get jI m k .t; 0 / I m k .s; 0 /j C jt sj for some C 2 R>0 uniformly with respect to 0 2 RnC1 , j0 j D 1, and letting s ! 0 jI m k .t; 0 /j D jI m k .t; 0 / I m 0 j C jt j:
(3.1.12)
Recalling that % WD % k .1=%; 0 / is a root of p.; %0 / D P .i . .0/ C %0 // D 0 we have, using (3.1.12), with some generic constant C 2 R>0 jI m % j D j% I m k .1=%; 0 /j C
for all % 2 R>0 :
98
Chapter 3 Partial Differential Equations with Constant Coefficients
Observing that any root of p.; / for 2 RnC1 n BRC1 .0; %0 / yields such 0 and 0 , the desired boundedness of I mŒN.P .i /; .0/ / follows. Remark 3.1.23. The assumptions of this proposition give rise to the concept of strict hyperbolicity. An ..s C 1/ .s C 1//-partial differential expression is called strictly hyperbolic in direction .0/ if .0/ is a non-characteristic direction of det.P .@// and 7! det.P /main . .0/ C / has only simple zeros for all ? .0/ ; 2 RnC1 n ¹0º. That the additional assumption is not merely technical is shown in the following example, where a Gårding hyperbolic principal part does not yield an evolutionary partial differential expression due to multiple roots of the principal part. Considering only the principal part is therefore not sufficient to establish well-(com)posedness. Example 3.13. Let P .@0 ; @1 / WD .@0 @1 /2 @1 . We see that Pmain .1; 0/ D 1, i.e. .1; 0/ is a non-characteristic direction of P .@0 ; @1 /. Pmain .@0 ; @1 / D .@0 @1 /2 is reversibly evolutionary (as the square of the reversibly evolutionary (and strictly hyperbolic) operator @0 @1 ) and so also hyperbolic (but not strictly hyperbolic) in this direction. Investigating the zeros of .ix0 C%i x1 /2 i x1 yields .x0 x1 /2 D %2 and 2 % .x0 x1 / D x1 D ˙2 %2 . Thus, x1 D ˙2 %2 and x0 D ˙% C x1 provides 4 real zeros for every % 2 R n ¹0º showing that P .@0 ; @1 / is not invertible with respect to arbitrary 2 R ¹0º. That there are reversibly evolutionary problems which are not Gårding hyperbolic can be seen by the following example. Example 3.14. Consider P .@0 ; b @/ WD @0 C i b @2 , b @ D .@1 ; : : : ; @n /, the so-called Schrödinger operator. Clearly, Pmain .1; 0; : : : ; 0/ D 0 and so .1; 0; : : : ; 0/ is a characteristic direction of @0 C i b @2 . However, i x0 C % i x 2 has no real zeros for % ¤ 0. 2 b Therefore, @0 C i @ is reversibly evolutionary in direction .1; 0; : : : ; 0/. We have restricted our attention to the scalar case, since the classification is expressed in terms of determinants. We will, however, summarize our findings for general ..s C 1/ .s C 1//-partial differential expressions. Theorem 3.1.24. Let P .@/ be an ..s C 1/ .s C 1//-partial differential expression. P .@/ is evolutionary in a non-characteristic direction .0/ if and only if P .@/ is hyperbolic in this direction. If det.P .@//main is strictly hyperbolic in direction .0/ then P .@/ is hyperbolic in this direction. Proof. The result follows immediately from the definitions of the terms (reducing the issues to the scalar case) and the previous propositions. Proposition 3.1.16 shows how trying to characterize being reversibly evolutionary in terms of terms of imaginary parts of roots can fail. Clearly, I mŒN.P .i /; .0/ /
Section 3.1 Partial Differential Equations in H1 .@ C e/
99
has to be unbounded for P .@/ not to be reversibly evolutionary in direction .0/ . We define N.P .i . .0/ C /// WD ¹ jP .i . .0/ C // D 0º
(3.1.13)
and [
N.P; .0/ ; r/ WD
N.P .i . .0/ C ///
?.0/ ;jjr
° ˇ _ ± ˇ D ˇ det.P /.i . .0/ C // D 0 ^ j j r :
(3.1.14)
?.0/
Then N.P .i . .0/ C /// D
[ r2R0
D
[
N.P; .0/ ; r/ [
N.P .i . .0/ C ///:
r2R0 ?.0/ ;jjr
Definition 3.1.25. Let P .@/ be irreversibly evolutionary in a characteristic direction .0/ . If additionally inf I mŒN.det.P /; .0/ ; r/ ! 1
as r ! 1
(3.1.15)
then we shall call P .@/ purely irreversibly evolutionary in direction .0/ . If the polynomial 7! det.P .i . .0/ C /// is non-constant and has a highest order term with a coefficient, which is independent of , then P .@/ is called parabolic in direction .0/ . Remark 3.1.26. The additional condition (3.1.15) serves to avoid contamination with reversibly evolutionary parts, i.e. to avoid the possibility that some roots may have bounded imaginary parts. Generally, the highest order term of the 1-dimensional polynomial 7! det.P /.i . .0/ C // will depend on , in fact even the degree #det.P /.i . .0/ C// of 7! det.P /.i . .0/ C // may depend on . We have that the leading coefficient is given by the polynomial Q#P .i . .0/ C// . / D
1 #det.P /.i . .0/ C// Š
.#det.P /.i . .0/ C// /
. 7! det.P /.i . .0/ C ///
For the parabolic case it is assumed that this is a constant.
:
100
Chapter 3 Partial Differential Equations with Constant Coefficients
A final case of failing to be evolutionary is the following. Definition 3.1.27. Let P .@/ be a ..s C 1/ .s C 1//-partial differential expression that is non-evolutionary in all direction .0/ and let .I mŒN.det.P /; .0/ ; r//˙ WD I mŒN.det.P /; .0/ ; r/ \ ˙ŒR>0 : If additionally inf .I mŒN.det.P /; .0/ ; r//C ! 1 and
sup .I mŒN.det.P /; .0/ ; r// ! 1
(3.1.16)
as r ! 1, then we shall call P .@/ purely non-evolutionary. If, additionally, all directions .0/ are non-characteristic, then P .@/ is called elliptic. Note that, since P .@/ is non-evolutionary, the sets .I mŒN.det.P /; .0/ ; r//˙ must be non-empty and are obviously bounded below or above, respectively, by zero. Example 3.15. P .@0 ; @1 / WD @20 @21 is purely non-evolutionary and also elliptic (note P ..0/ / D Pmain ..0/ / D 1 for all directions .0/ ). Example 3.16. P .@0 ; @1 / WD @20 C @41 is purely non-evolutionary, but not elliptic (note Pmain .1; 0/ D 0). Elliptic operators are particularly easy to characterize. Proposition 3.1.28. Let P .@/ be an ..s C 1/ .s C 1//-partial differential expression in .n C 2/ variables, n 2 N. Then P .@/ has no characteristic directions if and only if P .@/ is elliptic. Proof. If P .@/ is elliptic, then by definition there are no characteristic directions. We need to show that it suffices to require all directions to be non-characteristic in order to have an elliptic partial differential expression, (in other words, we need to demonstrate that the conditions on the imaginary parts of the roots are superfluous). Without loss of generality we may again restrict our attention to the scalar case. So, let Pmain ..0/ / ¤ 0 for all directions .0/ , i.e. for all .0/ 2 RnC2 with j.0/ j D 1. Let ˇ _ ° ± ˇ 0 WD inf jI m j ˇ .jj D 1 ^ Pmain . .0/ C / D 0/ : ?.0/
We claim that 0 > 0. Assume the contrary, i.e. let there be a sequence of zeros . k /k and normalized parameter vectors .k /k such that I m k ! 0 as k ! 1. Without loss of generality we may assume that this sequence .k /k is convergent to some 1 (otherwise pick a convergent subsequence). Since by continuous dependence
Section 3.1 Partial Differential Equations in H1 .@ C e/
101
of the corresponding roots we must also have boundedness of the sequence of corresponding zeros . k /k , we may likewise assume without loss of generality that . k /k is convergent to some 1 . Going to the limit we have Pmain . 1 .0/ C 1 / D 0
C
1 .0/ and therefore j 1 as a characteristic direction, contradicting the assumption. 1 .0/ C 1 j This confirms that 0 > 0. Defining analogously ˇ _ ° ± ˇ 1=% WD inf jI m j ˇ .jj D 1 ^ q% . ; / D 0/
?.0/
we have by the continuous dependence of the roots, which carries over to the infimum of the mapping s 7! s that 1=% ! 0 as % ! 1. In particular, we have 1=% 0 =2 for all sufficiently large % 2 R. As in the proof of Proposition 3.1.18, we conclude that the roots of P .i . .0/ C // satisfy 1 jI m j % 1=% % 0 2 and therefore jI mŒN.P; .0/ ; r/j ! 1 as r ! 1:
(3.1.17)
Ellipticity in the sense of our definition follows if we can show that .I mŒN.det.P /; .0/ ; r//˙ are both non-empty. Noting that by homogeneity Pmain .i . ..0/ / // D .1/#P Pmain .i . .0/ C // we see that both signs must occur for the imaginary parts of the roots. By continuous dependence of the roots on the coefficients the same must hold true for all sufficiently large j j and so .I mŒN.det.P /; .0/ ; r//˙ are non-empty sets and, using (3.1.17) we get inf .I mŒN.det.P /; .0/ ; r//C ! 1; sup .I mŒN.det.P /; .0/ ; r// ! 1; as r ! 1.
102
Chapter 3 Partial Differential Equations with Constant Coefficients
Remark 3.1.29. This last result could motivate the definition of ellipticity as P .@/ having no characteristic directions, which is actually the usual way. For systematic reasons the above definition appears to be more appropriate. Example 3.17. The partial differential expression @1 C i @2 in R2 is called the Cauchy–Riemann operator13 . Since there are no non-vanishing real roots .x1 ; x2 / of i x1 x2 , we see that @1 C i @2 is elliptic. Likewise .@1 C i @2 /2 is elliptic. Both differential expressions are non-invertible with respect to all 2 R2 . So this example gives an elliptic operator for which our solution theory utterly fails. This is not too surprising since the spectrum of @1 C i @2 as a normal operator in L2 .R2 / is all of C, so that a reasonable solution theory cannot be expected in this framework. We shall later develop an extension of our theory which in some instances gives a possibility of solutions in the spectrum, see Subsection 3.2. How bad such operators really can be, even ignoring our special approach, is seen from the following related example by Bitsadze (1953). Let f be analytic in a neighborhood U of the closed unit disc in R2 , i.e. f is smooth and satisfies .@1 C i @2 /f D 0 (classically) in U . Then u defined as the mapping .x; y/ 7! .1 x 2 y 2 / f .x; y/ solves the equation14 .@1 C i @2 /2 u D 0 and u satisfies the homogeneous Dirichlet boundary condition u D 0 on the boundary of the unit disc. Thus, since f is arbitrary, this seemingly simple boundary value problem has already an infinite number of solutions. In order to avoid pathologies like Bitsadze’s example other concepts of ellipticity have been devised. Definition 3.1.30. A scalar elliptic partial differential expression P .@/ is called strictly elliptic if P .@/ is elliptic and the number of zeros of P . .0/ C / (counted according to their multiplicities) with positive imaginary parts is equal to the number with negative imaginary parts for all directions .0/ and all ? .0/ with j j sufficiently large. Equivalently, we could have said that the number of zeros of Pmain . .0/ C / (counted according to their multiplicities) with positive imaginary parts is equal to the number with negative imaginary parts for all directions .0/ and all ? .0/ ; ¤ 0. P .@/ strictly elliptic implies that P must be of even order. Therefore, @1 C i @2 cannot be strictly elliptic, but also .@1 C i @2 /2 cannot be strictly elliptic because it inherits this failure from @1 C i @2 . 13 Recall that functions u satisfying .@ C i @ /u D 0 in the classical sense in a domain of R2 are 1 2 known as analytic functions in . Using the complex notation .x; y/ 7! x C i y the Cauchy–Riemann operator is often written as @z@ WD .@1 C i @2 / with z D x C i y and so z D x i y. 14 Equivalently, in complex notation this is . @ /2 u D 0. Note that @ .1 jzj2 / D z is analytic. @z @z
Section 3.1 Partial Differential Equations in H1 .@ C e/
103
Remark 3.1.31. It is remarkable that strict ellipticity is really only needed for the two-dimensional case. Lopatinski has shown, see e.g. [24], that for n 3 every elliptic differential expression is strictly elliptic. The proof is based on a homotopy argument, which we briefly describe here. Let P .@/ be elliptic and assume that Pmain ..0/ C / has different numbers of zeros with positive and negative imaginary parts for a particular direction .0/ and some ? .0/ ; jj D 1. Then, by homogeneity, Pmain ..0/ / must have the reverse discrepancy. If we can find a continuous connection between C and , then we must have a vanishing imaginary part of a zero of the polynomial for some other value of contradicting the ellipticity. Indeed, we may take a continuous family of rotations15 .A t / t2Œ0;1 leaving the direction .0/ invariant and such that A0 D and A1 D . Let . k .A t //kD1;:::;#P be the continuous family of roots of Pmain ..0/ C A t /. Then at least one of them must change sign to account for the reversal of the discrepancy between roots with positive and negative imaginary parts and by continuity this root must have a vanishing imaginary part for some intermediate t 2 0; 1Œ. This would yield a real root contradicting the assumed ellipticity. The reasoning of the last remark also yields the following statement. Proposition 3.1.32. Let P .@/ be a scalar elliptic partial differential expression. Then P .@/ is strictly elliptic if and only if the number of zeros of Pmain . .0/ C / (counted according to their multiplicities) with positive imaginary parts is equal to the number with negative imaginary parts for some direction .0/ and some ? .0/ , ¤ 0. Proof. Let the number of roots of Pmain . .0/ C / with positive imaginary parts equal the number of those with negative imaginary parts. We may assume without loss of generality that j j D 1. Now, let .A t / t2Œ0;1 be a continuous family of rotations turning .0/ into e .0/ and into e
. Then the continuity argument16 of Remark 3.1.31 yields that the equality of the number of roots with positive and negative imaginary parts must be maintained. We found that the principal part of a scalar partial differential expression P .@/ has no characteristic direction if and only if P .@/ is elliptic. This non-vanishing of Pmain on the unit sphere can be enforced by the following stronger condition. Definition 3.1.33. A scalar elliptic partial differential expression P .@/ in RnC1 is called strongly elliptic if P .@/ is elliptic and there is a 2 C such that Re. Pmain .x// > 0 15
for all x 2 RnC1 n ¹0º:
This is where the dimension n 3 comes into play. Note that this rotation can be done in dimensions greater or equal to 2. Moreover, recall that all ordinary differential expressions with constant coefficients are reversibly evolutionary. Thus, there are no 1-dimensional elliptic operators. This contradicts some statements found in the literature where ellipticity is characterized by regularity properties. 16
104
Chapter 3 Partial Differential Equations with Constant Coefficients
Remark 3.1.34. Clearly, we may normalize such a number to have absolute value 1. By the homogeneity of the polynomial Pmain it would also be sufficient to impose this condition on the unit sphere in RnC1 . It can be shown that strong ellipticity implies strict ellipticity. In two dimensions the two concepts actually coincide. That this is not the case in higher dimensions can be seen from the example @41 C @42 @43 C i .@21 C @22 /@23 which turns out to be strictly elliptic but not strongly elliptic, see [24].
3.1.6 Elliptic, Parabolic, Hyperbolic? The terms ‘elliptic’, ‘parabolic’, ‘hyperbolic’ have their historical origin in the discussion of scalar, second order, partial differential expressions in R2 . In this subsection we want to indicate, the origin of this choice of terminology. The partial differential expression with real scalar coefficients P .@1 ; @2 / WD a11 @21 2 a12 @1 @2 a22 @22 is Fourier transformed into the quadratic form q.x1 ; x2 / WD a11 x12 C 2 a12 x1 x2 C a22 x22 a11 a12 x1 D . x1 ; x2 / : a12 a22 x2
a12 If A WD aa11 is positive (or negative) definite, i.e. the matrix has two eigenvalues 12 a22 of equal sign, then this quadratic form is the highest order term in the equation of an ellipse and P .@1 ; @2 / is called elliptic. If A has two eigenvalues of opposite sign, then q is the highest order term of the equation of a hyperbola. In this case there is a characteristic direction, since q.x/ D x > A x assumes both signs (e.g. for the two normalized eigenvectors) and must therefore vanish for a direction .0/ (compare the argument in the proof of Proposition 3.1.32). Since it is a homogeneous, quadratic form we have q.˙.0/ / D 0. These are the only direction where the quadratic form is zero and any other direction .0/ is therefore non-characteristic. To show this, we calculate q. .0/ C / D a11 . .0/;1 C 1 /2 C 2 a12 . .0/;1 C 1 /. .0/;2 C 2 / C a22 . .0/;2 C 2 /2 > D q..0/ / 2 C 2 .0/ A C q. / > and so if q. / q..0/ / ..0/ A /2 < 0 we would have two different real roots which would make P .@1 ; @2 / strictly hyperbolic. In the present two dimensional case we have 2 ŒR..0/;2 ; .0/;1 / for ? .0/ .
Section 3.1 Partial Differential Equations in H1 .@ C e/
105
Then, with j.0/ j D 1, > q. / q..0/ / ..0/ A /2 2 2 D .a11 .0/;1 C 2 a12 .0/;1 .0/;2 C a22 .0/;2 / 2 2 .a11 .0/;2 2 a12 .0/;1 .0/;2 C a22 .0/;1 / 2 2 a12 .0/;2 C a22 .0/;2 .0/;1 /2 .a11 .0/;1 .0/;2 C a12 .0/;1
D det.A/ < 0; for all ¤ 0. Finally, if A has one non-vanishing eigenvalue and one eigenvalue equal to zero, then q is as we shall see the leading term of the equation of a parabola. Let x.0/ 2 R2 > x/ C be the normalized eigenvector for the eigenvalue 0. Then, with x D x.0/ .x.0/ b x .0/ .b x> x/ where b x .0/ WD .x.0/;2 ; x.0/;1 /> , we get .0/ 2 q.x/ D x > A x D b x> x .0/ .b x> .0/ Ab .0/ x/ D q.x i s x.0/ /
(3.1.18)
for all s 2 R. Since eigensolutions for different eigenvalues are orthogonal, b x .0/ is the other eigenvector and 1 D b x> Ab x ¤ 0 the corresponding eigenvalue. With re.0/ .0/ gards to P .@1 ; @2 / the direction x.0/ is characteristic. Therefore with D .1 ; 2 / D % x.0/ ; % ¤ 0, recalling Ax.0/ D 0, i.e. a11 x.0/;1 C a12 x.0/;2 D a12 x.0/;1 C a22 x.0/;2 D 0, we get P .@/ D P .@ C % x.0/ / D a11 .@1 ;1 C 1 /2 2a12 .@1 ;1 C 1 /.@2 ;2 C 2 / a22 .@2 ;2 C 2 /2 D P .@ / C %2 P .x.0/ / 2% .a11 x.0/;1 @1 ;1 C a12 x.0/;1 @2 ;2 C a12 x.0/;2 @1 ;1 C a22 x.0/;2 @2 ;2 / D P .@ / 2% ..a11 x.0/;1 C a12 x.0/;2 /@1 ;1 C .a12 x.0/;1 C a22 x.0/;2 /@2 ;2 / D P .@ /;
(3.1.19)
> and using column vector notation we can decompose .@1 ;1 @2 ;2 /> D x.0/ .x.0/ @ /C > b x .0/ .b x .0/ @ / to find that
P .@ / D .@1 ;1 @2 ;2 / A .@1 ;1 @2 ;2 /> D .@1 ;1 @2 ;2 / A .x.0/;2 x.0/;1 /> ..x.0/;2 x.0/;1 /@ / D 1 ..x.0/;2 x.0/;1 /@ /2 :
(3.1.20)
106
Chapter 3 Partial Differential Equations with Constant Coefficients
This shows that the principal part is just a multiple of a second order directional derivative. Consider now the more general second order partial differential expression Q.@/ D P .@ / C a> @ C b, a 2 C 2 (written as a column vector) and b 2 C associated with P .@/. From (3.1.19) and (3.1.20) we see Q.@ C / D P .@ / C a> .@ C / C b D 1 ..x.0/;2 ; x.0/;1 /@ /2 C a> .@ C / C b: After Fourier–Laplace transformation we are led to consider the polynomial y 7! 1 ..x.0/;2 ; x.0/;1 / y/2 C a> .i y C / C b: We see that we would have real zeros if I m a> y C Re a> C Re b C 1 ..x.0/;2 ; x.0/;1 / y/2 D 0
(3.1.21)
and Re a> y C I m a> C I m b D 0:
(3.1.22)
Representing a as a linear combination of x.0/ and b x .0/ WD .x.0/;2 ; x.0/;1 / we obtain > a D a0 C b a0 with a0 WD x.0/ .x.0/ a/; b a0 WD b x> .b x .0/ a/ and so from (3.1.21) we .0/ obtain 0 D I mb a> a> x .0/ y/2 C Re a0> I m a0> y 0 y C Reb 0 C Re b C 1 .b D I m.b x .0/ a/b x> x .0/ y/2 I m.x.0/ a/ x0> y C % Re.x.0/ a/; 0 y C Re b C 1 .b and from (3.1.22) we get > > x> 0 D Re.b x .0/ a/b 0 y C Re.x.0/ a/ x0 y C I m.x.0/ a/ x0 C I m b > D Re.b x .0/ a/b x> 0 y C Re.x.0/ a/ x0 y C % I m.x.0/ a/ C I m b:
If Re.x.0/ a/ ¤ 0, then we can eliminate x0> y D and obtain
I m.b x .0/ a/ I m.x.0/ a/
Re.b x .0/ a/b x> 0 yC% I m.x.0/ a/CI m b Re.x.0/ a/
Re.b x .0/ a/ x .0/ y/2 b x> 0 y C Re b C 1 .b Re.x.0/ a/
I m.x.0/ a/
% I m.x.0/ a/ C I m b % Re.x.0/ a/ D 0: Re.x.0/ a/
To avoid real zeros for all sufficiently large % 2 R>0 we must have 1 Re.x.0/ a/ < 0:
Section 3.1 Partial Differential Equations in H1 .@ C e/
107
In this case Q.@/ would be irreversibly evolutionary in direction x.0/ . Moreover, the leading coefficient a> x.0/ of the polynomial 7! Q.i . x.0/ C s b x .0/ // is constant and therefore Q.@/ is parabolic. We find that the principal part of Q.@/ is given by P .@/ D Qmain .@/ D Qmain .@ C / D P .@ / D 1 .b x .0/ @ /2 D 1 .b x .0/ @/2 : Comparison with (3.1.18) shows Qmain .i x C / D q.i x C /. This coincidence of the highest order parts of Q.i x C /, ¤ 0 and A D 0, with the highest order term q.x/ of a parabola finally motivates the use of the term ‘parabolic’17 for this type of partial differential expression. Note, however, that with regards to the full polynomial Q.i C / this correspondence is rather superficial since, in the parabolic case, Q.i C / has non-real coefficients.
3.1.7 Evolutionary Partial Differential Expressions in Canonical Form It is common and convenient to single out the particular direction in which a specific partial differential expression is evolutionary as one of the coordinate directions. We shall choose the coordinate labelled by 0 to make its particular role apparent. In many applications this direction is identified as the forward time direction and hence @0 would be the time-derivative. The other directions (orthogonal to time) are then referred to as spatial directions. To emphasize the particular role that the time direction plays we have chosen to refer to the underlying domain as R1Cn , where 1 refers to the time dimension and n 2 N is the spatial18 dimension. In order to standardize our approach, we need to determine how the structure we are working in is affected by a change of coordinates. Lemma 3.1.35. Let A 2 R.nC1/.nC1/ be a non-singular matrix, n 2 N. Then the induced rescaling operator A j V
C1 .RnC1 /
W CV 1 .RnC1 / L2 .RnC1 / ! CV 1 .RnC1 / L2 .RnC1 /
defined by A j V
C1 .RnC1 /
WD jdet.A/j1=2 .A /
for 2 CV 1 .RnC1 / extends to a unitary mapping A in L2 .RnC1 /. 17
It should be observed that this terminology is only formally motivated. The Fourier transform of e.g. the parabolic partial differential expression @0 @21 yields the polynomial .x0 ; x1 / 7! i x0 C x12 which has complex coefficients and so does not represent a parabola in R2 . The correspondence to a parabola can, however, be established in a formal way, if we think of @ as real (ignoring that @ is actually purely imaginary, i.e. skew-selfadjoint), 2 ŒR>0 .1; 0/. 18 In the case n D 0 we are dealing with evolutionary ordinary differential expressions. (To ensure that the spatial dimension is at least 1 we sometimes write R1C.nC1/ for n 2 N).
108
Chapter 3 Partial Differential Equations with Constant Coefficients
Proof. The result follows by an elementary calculation. Following the standard convention of identifying x D .x0 ; : : : ; xn / 2 RnC1 with the column matrix 0 1 x0 B :: C .nC1/1 @ : A2R xn we have
Z
Z 2
RnC1
j.A /.x/j dx D Z D
j .Ax/j2 jdet.A/j dx
RnC1
j .y/j2 dy
RnC1
for all 2 CV 1 .RnC1 /, from which the result follows by taking limits. Lemma 3.1.36. Let A 2 R.nC1/.nC1/ be a non-singular matrix. Then for every P s 2 RnC1 we have (with @ s WD nkD0 sk @k ) that A .@ s/ D .@ A1 s/ A holds for all 2 CV 1 .RnC1 / and all s 2 RnC1 . Proof. Let t 2 RnC1 . Then by the chain rule ..@ t / A /.x/ D jdet.A/j1=2
n X
ti @j .Ax/ Aj i
i;j D0
D .A .@ .At // /.x/: With s WD A t we obtain
.@ A1 s/ A D A .@ s/
on CV 1 .RnC1 /. Formally @ s can and will also be interpreted as the matrix product s > @ D @> s, where @ D .@0 ; : : : ; @n / is analogously identified with the formal ..n C 1/ 1/-matrix 0 1 @0 B :: C @ : A; @n i.e. the vector analytical nabla operator. In this spirit we write .A /1 @ and obtain according to the last lemma ..A /1 @ s/A D A .@ s/ for any s 2 RnC1 .
(3.1.23)
Section 3.1 Partial Differential Equations in H1 .@ C e/
109
Lemma 3.1.37. Let A 2 R.nC1/.nC1/ be a non-singular matrix. Then A j V
C1 .RnC1 /
W CV 1 .RnC1 / H˛ .@ C e/ ! H˛ ..A /1 @ C e/
extends to a unitary mapping for every ˛ 2 ZnC1 . Thus, the Sobolev lattices .Hˇ ..A /1 @ C e//ˇ 2ZnC1 ;
.Hˇ .@ C e//ˇ 2ZnC1 ;
are (unitarily) equivalent. Proof. With the previous lemma we find ..A /1 @ C e/ A D A .@ C e/ and so A .@ C e/˛ D ..A /1 @ C e/˛ A for all ˛ 2 ZnC1 . Moreover, 2 jA jH 1 Ce/ ˛
[email protected] /
D
Z RnC1
j...A /1 @ C e/˛ A /.x/j2 dx
Z D
RnC1
j...A /1 @ C e/˛ .A //.x/j2 jdet.A/j dx
Z D
RnC1
j..@ C e/˛ /.Ax/j2 jdet.A/j dx
Z D D
j..@ C e/˛ /.y/j2 dy
RnC1 2 j jH ˛ .@Ce/
for all 2 CV 1 .RnC1 /. Again by taking limits the result follows for arbitrary 2 H˛ .@ C e/. The required density of CV 1 .RnC1 / in H˛ ..A /1 @ C e/, however, needs to be shown. First we recall the density of CV 1 .RnC1 / in H˛ .@ C e/ for all ˛ 2 ZnC1 . We also have the estimate . 2 C e/kei D .i2 C 1/k .jj2 C 1/k . 2 C e/ke for all 2 RnC1 , k 2 N. Replacing by .A /1 we get ...A /1 /2 C e/˛ .j.A /1 j2 C 1/j˛j1 max¹1; k.A /1 k2 ºj˛j1 .jj2 C 1/j˛j1 max¹1; k.A /1 k2 ºj˛j1 . 2 C e/j˛j1 e
110
Chapter 3 Partial Differential Equations with Constant Coefficients
for all 2 RnC1 ; ˛ 2 N nC1 . Similarly, we estimate . 2 C e/˛ .jj2 C 1/j˛j1 max¹1; kA k2 ºj˛j1 .j.A /1 j2 C 1/j˛j1 max¹1; kA k2 ºj˛j1 ...A /1 /2 C e/j˛j1 e for all 2 RnC1 ; ˛ 2 N nC1 . This shows, via an application of the Fourier transform L0 , that H˙1 ..A /1 @ C e/ D H˙1 .@ C e/ in the sense that ^
_
Hˇ ..A /1 @ C e/ ,! H˛ .@ C e/
(3.1.24)
H˛ .@ C e/ ,! Hˇ ..A /1 @ C e/:
(3.1.25)
ˇ 2ZnC1 ˛2ZnC1
and
^
_
˛2ZnC1 ˇ 2ZnC1
The estimate also yields the inverse continuity statements ^
_
Hˇ ..A /1 @ C e/ ,! H˛ .@ C e/
(3.1.26)
H˛ .@ C e/ ,! Hˇ ..A /1 @ C e/:
(3.1.27)
˛2ZnC1 ˇ 2ZnC1
and
^
_
ˇ 2ZnC1 ˛2ZnC1
This shows in particular the denseness of CV 1 .RnC1 / in Hˇ ..A /1 @ C e/ for every ˇ 2 ZnC1 . Let Hˇ ..A /1 @ C e/ be given with ˇ 2 N nC1 large. Then there is a ˛ˇ ˇ; ˛ˇ 2 N nC1 , such that H˛ˇ .@ C e/ ,! Hˇ ..A /1 @ C e/. Then there is also a ˇ ˛ˇ ; ˇ 2 N nC1 , such that Hˇ ..A /1 @ C e/ ,! H˛ˇ .@ C e/. Since Hˇ ..A /1 @ C e/ is dense in Hˇ ..A /1 @ C e/, we also have the density of the continuous embedding H˛ˇ .@Ce/ ,! Hˇ ..A /1 @Ce/. Now the density of CV 1 .RnC1 / in H˛ˇ .@ C e/ yields the desired density of CV 1 .RnC1 / in Hˇ .@ .A /1 C e/. The continuity statements (3.1.24), (3.1.25), (3.1.26), (3.1.27) also show that the lattices .Hˇ .@ .A /1 Ce//ˇ 2ZnC1 and .Hˇ .@Ce//ˇ 2ZnC1 are equivalent. Finally, from our initial calculation for functions in CV 1 .RnC1 / we conclude that the correspondence is indeed unitary. The last result can now be refined further.
Section 3.1 Partial Differential Equations in H1 .@ C e/
111
Theorem 3.1.38. Let A 2 R.nC1/.nC1/ be a non-singular matrix. Then A j V
C1 .RnC1 /
W CV 1 .RnC1 / H˛ .@ C e/ ! H˛ ..A /1 @A C e/
extends to a unitary mapping for every ˛ 2 .ZnC1 /.sC1/.tC1/ ; s; t 2 N; 2 RnC1 . Moreover, the Sobolev lattices .Hˇ ..A /1 @A C e//ˇ 2.ZnC1 /.sC1/.t C1/ and .Hˇ .@ C e//ˇ 2.ZnC1 /.sC1/.t C1/ are equivalent. Proof. We may restrict our attention to the case s D t D 0, (recall the convention not to note the matrix size for .CV 1 .RnC1 //.sC1/.tC1/ ). The result for D 0 is the previous lemma. The case of arbitrary 2 RnC1 follows similarly: 2 jA jH 1 @ Ce/ ˛ ..A / A Z D j..A /1 @A C e/˛ A .x/j2 exp.2 A x/ dx RnC1 Z D j...A /1 @ C e/˛ . .A ///.x/j2 jdet.A/j exp.2A x/ dx RnC1 Z D j..@ C e/˛ /.Ax/j2 jdet.A/j exp.2 Ax/ dx nC1 R Z D j.@ C e/˛ .y/j2 exp.2 y/ dy
D
RnC1 2 j jH ; ˛ .@ Ce/
for all 2 CV 1 .RnC1 /. The remaining results follow in the same way as in the proof of the previous lemma by using LA as the appropriate spectral representation (rather than the (n-dimensional) Fourier transform F D L0 /. Finally, to arrive at the corresponding spaces with @ A replacing @ we note the following result. Theorem 3.1.39. Let A 2 R.nC1/.nC1/ be a non-singular matrix. Then the Sobolev lattices .Hˇ .@A C e//ˇ 2.ZnC1 /.sC1/.t C1/ and .Hˇ .@ C e//ˇ 2.ZnC1 /.sC1/.t C1/ are equivalent. Proof. We may again restrict our attention to the case s D t D 0. According to the previous theorem we only need to show that the Sobolev lattices .H˛ ..A /1 @A C e//˛2ZnC1 ;
.H˛ .@ A C e//˛2ZnC1
are equivalent. Using the Laplace transform LA , this amounts to comparing the operators i .A /1 x C e and i x C e. Let B D .Bij /i;j D0;:::;n , so that the adjoint is
112
Chapter 3 Partial Differential Equations with Constant Coefficients
B D .Bji /i;j D0;:::;n . Obviously, we can estimate X
1C
xi2 j.i x C e/2e j D .1 C x02 / .1 C xn2 /:
iD0;:::;n
Therefore we obtain Y
j.i B x C e/˛ j2 D
j D0;:::;n
Y
X 2 ˛j 1C xi Bji iD0;:::;n
1C
j D0;:::;n
X iD0;:::;n
°
max 1 C max 1 C
jBji j2
˛j
iD0;:::;n
ˇ ±j˛j1 ˇ jBij j2 ˇj D 0; : : : ; n 1C
X iD0;:::;n
°
X
xi2
X
X
xi2
j˛j1
iD0;:::;n
ˇ ±j˛j1 ˇ jBij j2 ˇj D 0; : : : ; n j.i x C e/2e jj˛j1
iD0;:::;n
and so ²s j.i B x C e/ j max 1C
X
˛
iD0;:::;n
jBij
ˇ ³j˛j1 ˇ j.i x C e/j˛j1 e j ˇ D 0; : : : ; n
j2 ˇj
for all x 2 RnC1 . Choosing B D A1 we obtain in particular j.i .A /1 x C e/˛ j ˇ ²s ³j˛j1 X ˇ max 1C j.A1 /ij j2 ˇˇj D 0; : : : ; n j.i x C e/j˛j1 e j
(3.1.28)
iD0;:::;n
for all x 2 RnC1 . On the other hand, substituting y WD B x we also get ²s j.i y C e/˛ j max
1C
X iD0;:::;n
ˇ ³j˛j1 ˇ jBij j2 ˇˇj D 0; : : : ; n j.i .B /1 y C e/j˛j1 e j
for all y 2 RnC1 and choosing B D A we obtain ²s j.i y C e/˛ j max
1C
X iD0;:::;n
ˇ ³j˛j1 ˇ jAij j2 ˇˇj D 0; : : : ; n j.i .A /1 y C e/j˛j1 e j
(3.1.29) for all y 2 RnC1 . Equations (3.1.28) and (3.1.29) show the equivalence of the Sobolev lattices, since ˛ 2 ZnC1 is arbitrary.
Section 3.1 Partial Differential Equations in H1 .@ C e/
113
This theorem leads to the following consideration. Let P .@/ be an arbitrary partial differential operator evolutionary in direction 0 2 RnC1 . Then with D % 0 , % 2 R>0 , n X P .@/ D P .@ C / D P .0 @ C %/ 0 C .i @/ i iD1
where ¹i j i D 1; : : : nº is an orthonormal set chosen such that ¹0 º [ ¹i j i D 1; : : : nº is a complete orthonormal set in RnC1 . Choosing a unitary transformation A such that this complete orthonormal set is transformed into the canonical basis ¹ei j i D 0; : : : ; nº by A1 such that Ae0 D 0 ;
Aei D i
for i D 1; : : : ; n;
i.e. in block matrix notation A D .0 1 n / written as a row of columns, leads to A P .@/ D P .A @/ A ; (3.1.30) Pn where we have abbreviated A@ D @0 0 C iD1 @i i . Note that since A is unitary by construction, we have .A /1 D A: So, rather than solving the equation P .@/u D f in H1 .@ C e/, we may, according to the last two theorems, solve the equation Q.@0 ; : : : ; @n / w D g WD A f in H1 .@e0 C e/, where Q.@0 ; : : : ; @n / D P .A@/ is evolutionary in direction e0 . Following the convention introduced in Example 3.5, we shall for sake of emphasis write Q.@0 ; b @/ WD Q.@0 ; @1 ; : : : ; @n /; where it is now understood that Q.@0 ; b @/ is evolutionary in direction e0 , which is the time direction and all other directions e1 ; : : : ; en are now spatial directions. Instead of referring to RnC1 as the underlying domain we write R1Cn to underscore that the ‘zeroth’ variable is singled out as the time-direction. An ..s C 1/ .s C 1//-partial differential operator Q.@0 ; b @/ in R1Cn , where e0 is understood as the direction in which Q.@0 ; b @/ is evolutionary, is said to be in standard form. This partial differential expression can now be discussed in an (equivalent) associated Sobolev lattice with D e0 .
114
Chapter 3 Partial Differential Equations with Constant Coefficients
Example 3.18. The partial differential expressions of the wave equation @20 b @2 , @2 and of the heat equation @0 b @2 are typical of the Schrödinger equation @0 i b examples of evolutionary partial differential expressions in standard form. In many circumstances it will be convenient to assume that a problem is already formulated in standard form and we shall make use of this option whenever it appears to be advantageous.
3.1.8 Functions of @ and Convolutions So far we have encountered polynomials and their inverses as particular functions of the Abelian system, i.e. the family of commuting selfadjoint operators, 1 1 1 @ WD @0;0 ; : : : ; @n;n : i i i Of course, more general functions may be of interest also. As a generalization of so called convolution integrals given by Z
.x y/ .y/ dy . /.x/ WD RnC1
for suitably well-behaved functions, e.g. ; 2 CV 1 .RnC1 /, convolutions in the framework of Sobolev lattices turn out to be closely related to the concept of functions of @ . We give the following formal definition extending the idea of convolutions to matrix operators. Definition 3.1.40. Let f 2 HnC1;sC1;tC1;1 .@ C e/, n; s; t 2 N. Then, with19 b. i / WD L f; f
(3.1.31)
This notation is motivated by the custom to denote the Fourier transform of an f as b f . Indeed, for
2 CV 1 .RnC1 / we recall Z b
.x/ D .2/.nC1/=2 exp.ixy/ .y/ dy 19
RnC1
for x 2 RnC1 , which – by complex extension – yields Z b
. i / WD .2/.nC1/=2 exp.i. i/y/ .y/ dy Z D .2/.nC1/=2
RnC1 RnC1
exp.i y/ exp.y/ .y/ dy
D .L /. / for 2 RnC1 , 2 RnC1 . This observation justifies the notation b f . i / for L f .
Section 3.1 Partial Differential Equations in H1 .@ C e/
115
the operator b f WD .2/.nC1/=2 f
1 .@ C / i
b.m i / L D .2/.nC1/=2 L f is called the convolution operator associated with f . For g 2 HnC1;tC1;rC1;1 .@ C e/, r 2 N, we call f g 2 HnC1;sC1;rC1;1 .@ C e/ the convolution of f with b.m i /b g in H1 .@ C e/, whenever it is well-defined, i.e. if f g . i / 2 HnC1;sC1;rC1;1 .i m C e/. Remark 3.1.41. According to the definition we see for example that, if we have an b.x element f 2 HnC1;sC1;tC1;1 .@ C e/, n; s; t 2 N, such that x 7! .i x C e/˛ f nC1 i / is a bounded measurable matrix-valued function for some ˛ 2 Z , then f g is the well-defined convolution of f with g in H1 .@ Ce/ for arbitrary g 2 H1 .@ C e/. Note that for a convolution to make sense we must of course have that g is in the b. 1 @ i / and this means in particular that the matrix sizes must fit domain of f i b. 1 @ i / does not depend on r 2 N in any significant together. As noted earlier, f i way. Recalling that, for every 2 RnC1 , the multiplication operator exp. m/ is a unitary mapping from L2 .RnC1 ; exp.2 x/ dx/ to L2 .RnC1 /, we see that this mapping extends by continuity to the Sobolev lattice .H˛ .@ C e//˛2ZnC1 . We have, for 2 CV 1 .RnC1 /, j j;˛ D j.@ C e/˛ j;0 D j exp. m/ .@ C e/˛ j0;0 D j.@0 C e/˛ exp. m/ j0;0 D j exp. m/ j0;˛ ; and this extends by density to all 2 H˛ .@ C e/, ˛ 2 ZnC1 . Recalling that exp. m/ maps CV 1 .RnC1 / onto itself, this shows that the mapping exp. m/ W H˛ .@ C e/ ! H˛ .@0 C e/ is unitary for every ˛ 2 ZnC1 . Since @ D exp. m/ @0 exp. m/ and therefore @ D @.0 ;:::;n / is unitarily equivalent to @0 D @.0;:::;0/ , we also have that, for every Borel function f W RnC1 ! C, 1 1 f @ D exp. m/ f @0 exp. m/: i i
116
Chapter 3 Partial Differential Equations with Constant Coefficients
In particular we have that exp. m/ W H˛ .@ C e/ ! H˛ .@0 C e/ is unitary for ˛ 2 ZnC1 ; 2 RnC1 . Moreover, noting that exp. m/ is the inverse operator of exp. m/, we also find that .@ C e/˛ exp.2 m/ .@ C e/˛ W H˛ .@ C e/ ! H˛ .@ C e/ and .@ C e/˛ exp.2 m/ .@ C e/˛ W H˛ .@ C e/ ! H˛ .@ C e/ are unitary for all ˛ 2 ZnC1 ; 2 RnC1 . Thus, we see that H˛ .@ C e/ ,! H0 .@0 C e/ ,! H˛ .@ C e/; where the unitary mappings exp. m/ .@ C e/˛ and .@ C e/˛ exp. m/ are interpreted as embeddings. Recall that H0 .@0 C e/ is simply L2 .RnC1 /. We want to show that this constitutes a Gelfand chain. For this we need to show that H˛ .@ C e/ D H˛ .@ C e/; where now, however, the duality pairing is obtained by continuously extending the L2 .RnC1 /-inner product rather than the H0 .@ C e/-inner product. So, let f 2 H˛ .@ C e/ . Then for all 2 H˛ .@ C e/ f . / D hR˛; f j i;˛ D h.@ C e/˛ R˛; f j .@ C e/˛ i;0 D hexp. m/ .@ C e/˛ R˛; f j exp. m/ .@ C e/˛ i0;0 D h.@0 C e/˛ exp. m/ R˛; f j .@0 C e/˛ exp. m/ i0;0 D h.@ C e/˛ exp.2 m/ .@ C e/˛ exp.2 m/ R˛; f j exp.2 m/ i0;0 : We have .@ C e/˛ exp.2 m/ .@ C e/˛ exp.2 m/ R˛; f 2 H˛ .@ C e/ and exp.2 m/ 2 H˛ .@ C e/. Thus, .@ Ce/˛ exp.2m/ .@ Ce/˛ exp.2m/ R˛; W H˛ .@ Ce/ ! H˛ .@ Ce/ is unitary and so indeed H˛ .@ C e/ D H˛ .@ C e/ in the sense of unitary equivalence. Since the Riesz map R˛; can be factorized in the form R˛; D .@ C e/˛ .@ C e/˛ ;
Section 3.1 Partial Differential Equations in H1 .@ C e/
117
we find .@ C e/˛ exp.2 m/ .@ C e/˛ exp.2 m/ R˛; D .@ C e/˛ exp.2 m/ .@ C e/˛ exp.2 m/ .@ C e/˛ .@ C e/˛ D .@ C e/˛ exp. m/ .@ C e/˛ exp. m/ D .@ C e/˛ .@ C e/˛ D 1: In other words, the stated unitary equivalence is canonical, as it is just the identity. We therefore have for f 2 H˛ .@ C e/ D H˛ .@ C e/, 2 H˛ .@ C e/, that f . / D hf j exp.2 m/ i0;0 : In particular, the sesqui-linear form h j i0;0 , as the continuous extension of the inner product of L2 .RnC1 /, gives the duality pairing between H˛ .@ C e/ and H˛ .@ C e/ for every ˛ 2 ZnC1 ; 2 RnC1 . In this framework we shall see that the convolution, as the notation suggests, is largely independent of 2 RnC1 . We have, for ; g component-wise in CV 1 .RnC1 / and f 2 H1 .@ C e/, hf g j i;0 b.m i /b D .2/.nC1/=2 hf g . i / j b
. i /i0;0 Z X bij .m i / b
ik .m i / dx D .2/.nC1/=2 f gj k .m i / b i;j;k
D .2/.nC1/=2
XZ i;j
.nC1/=2
D .2/
RnC1
RnC1
bij .m i / f
X
ik .m i / dx b gj k .m i / b
k
b.m i / j b hf
.m i /b g . i / i0;0 :
We note that, with20 .1 g/.x/ WD g.x/, x 2 RnC1 , Z 1 .L g/ .x/ D exp.ix y/ exp. y/ g.y/ dy .2/.nC1/=2 RnC1 Z 1 exp.ix y/ exp. y/ g.y/ dy D .2/.nC1/=2 RnC1 Z 1 D exp.ix y/ exp. y/ exp.2 y/ g.y/ dy .2/.nC1/=2 RnC1 D .L .1 exp.2 m/ g //.x/ 20
In general A is the rescaling operator given by p .A '/.x/ WD jdet.A/j '.Ax/
for a regular linear mapping A W RnC1 ! RnC1 , ' a complex-valued function on RnC1 , n 2 N. Here A is just multiplication by 1.
118
Chapter 3 Partial Differential Equations with Constant Coefficients
and so .2/.nC1/=2 L .m/ .L g/ D L . 1 exp.2 m/ g /, where XZ
1 exp.2 m/ g D
ik . y/ exp.2 y/ gj k .y/ dy: k
RnC1
Thus, hf g j i;0 D hf g j exp.2 m/ i0;0 D hf j 1 exp.2 m/ g i;0 D hf j exp.2 m/ 1 exp.2 m/ g i0;0 D hf j .exp.2 m/ / 1 g i0;0 ; and in particular hf g j i0;0 D hf j
1 g i0;0
(3.1.32)
for all f 2H˛ .@ C e/ and ; g 2 CV 1 .RnC1 /. Equality (3.1.32) shows that the b. 1 @ i/ is dependent on 2 RnC1 convolution operator f D .2/.nC1/=2 f i only as far as the domain is concerned. This appears plausible taking into account that formally @ C D @ C D @. The solution theory specifies that the solution of the ..s C 1/ .s C 1//-partial differential equation P .@ C / u D f can be found in the form u D L P .i m C /1 L f assuming that P .@/ is invertible with respect to 2 RnC1 . Noting that P .i C/1 2 HnC1;sC1;sC1;1 . i m C e/ and defining G WD .2/.nC1/=2 L P .i C/1 2 HnC1;sC1;sC1;1 .@ C e/ we have that L G WD .2/.nC1/=2 P .i C/1 and so u D L .2/.nC1/=2 .L G/.m/ L f D G f: The element G is known as the fundamental solution or Green’s function (s D 0) or Green’s tensor (for s 1) of P .@/. Clearly, the fundamental solution satisfies the equation P .@ C / G D .2/.nC1/=2 L 1..sC1/.sC1// ;
Section 3.1 Partial Differential Equations in H1 .@ C e/
119
where 1..sC1/.sC1// stands for the ..s C 1/ .s C 1//-identity matrix, which is considered here as an element of HnC1;sC1;sC1;1 .i m C e/. More precisely we have 1 0 1 0 0 :: C : B : C B 0 :: C 2 HA .i m C e/ B :: : : @ : : 0A 0 0 1 with
1 e C1 C1 :: C : B : C B C1 : : ADB : C: :: @ :: : C1 A C1 C1 e 0
Since 0 2 H1 .i m C e/, we only need to see that the constant function 1 is in He .i m C e/. This follows since j.i m C e/e 1jL2 .RnC1 / D j.i C e/e jL2 .RnC1 / nC1 D j.i C 1/1 jL 2 .R/
Z D
R
1 dx 1 C x2
.nC1/=2 D .nC1/=2 < 1:
Consequently, the right-hand side of the equation for the fundamental solution satisfies L 1..sC1/.sC1// 2 HA .@ C e/: The element 0 B B ı 1..sC1/.sC1// WD B @
ı 0 0 :: : : 0 :: :: :: : 0 : 0 0 ı
1 C C C WD .2/.nC1/=2 L 1..sC1/.sC1// A
where the so-called Dirac ı-distribution is given by ı D .2/.nC1/=2 L 1:
120
Chapter 3 Partial Differential Equations with Constant Coefficients
To understand its action we calculate hı j i;0 D .2/.nC1/=2 hL 1 j i;0 D .2/.nC1/=2 h1 j L i0;0 Z .nC1/=2 b
.x i / dx D .2/ RnC1
D .2/.nC1/=2 exp. 0/
Z RnC1
exp.i 0 x/ b
.x i / dx
D .L b
. i //.0/ D .0/ for all 2 CV 1 .RnC1 /. By density it follows that for any 2 He .@ C e/ we have that .0/ is well-defined as limk!1 k .0/ for any sequence . k /k2N converging to in He .@ C e/. The number .0/ is referred to as the evaluation or trace of at 0. The Dirac ı-distribution can be distinguished from 0 only by elements 2 He .@ C e/ with .0/ ¤ 0. It is therefore used to model strongly localized objects or phenomena such as elementary particles, mass points, light spots, short impulses etc. in varied applications. The particular relevance in the current context is that the fundamental solution G of an invertible ..s C 1/ .s C 1//-partial differential expression P .@/ D P .@ C /, 2 RnC1 is given as the solution 21 of P .@/ G D ı 1..sC1/.sC1// : As indicated above, the solution for an arbitrary right-hand side f is given by convolution with the fundamental solution. It is interesting to note that finding the fundamental solution is essentially a scalar problem. We found that by Cramer’s rule P .i C /1 D
1 cof.P .i C //> ; det.P .i C //
D cof.P .i m C //>
1 1..sC1/.sC1// : det.P .i C //
Recalling that cof.P .@ C //> is also an ..s C 1/ .s C 1//-partial differential expression, we have that the fundamental solution G of P .@ C / is given by G D cof.P .@ C //> g 1..sC1/.sC1// ; 21 Using the latter imagery for the Dirac ı-distribution as a ‘short impulse’ the fundamental solution G is also referred to as the impulse response. In the current framework the fundamental solution may depend on the choice of 2 RnC1 .
Section 3.1 Partial Differential Equations in H1 .@ C e/
121
where g is the fundamental solution of the scalar partial differential expression given by det.P .@ C //, i.e. g D .2/.nC1/=2 L
1 : det.P .i C //
We shall later have the opportunity to explicitly find fundamental solutions for particular differential expressions. The apparently rather intimate relationship between general partial differential expressions and scalar partial differential expressions will now be investigated further.
3.1.9 Systems and Scalar Equations The use of the Fourier–Laplace transform as a spectral representation in conjunction with the Sobolev lattice structure makes dealing with partial differential expressions almost a purely algebraic issue. We shall exploit this fact to reveal the equivalence of systems of partial differential equations with scalar partial differential equations, so that it will appear merely as a matter of convenience which approach to use. Let us assume we have P .@ C / u D f 2 H1 .@ C / with P .@ C / being an ..s C 1/ .s C 1//-partial differential expression, 2 RnC1 . We can arrive at s scalar partial differential equations with identical scalar partial differential expression satisfied by u component-wise. The main tool here is the Cayley–Hamilton theorem which we shall recall now. Theorem 3.1.42 (Cayley–Hamilton theorem). Let A 2 C ..sC1/.sC1// be a matrix PsC1 with Q.t / WD det.t 1..sC1/.sC1// A/ DW kD0 ak t k as characteristic polynomial, s 2 N. Then sC1 X ak Ak D 0: kD0
In other words, a matrix A is a ‘root’ of its characteristic polynomial. Let k , k D 0; : : : ; s, denote the roots of Q. Then Q.t / D
s Y
.t k /:
kD0
In case of multiple roots it can happen that some of the corresponding multiple linear factors .t k / can be omitted without destroying the property that A is a ‘root’ of the resulting polynomial of lower degree. Such a non-trivial polynomial of lowest degree is called the minimal polynomial22 of A. Note that the leading coefficient in any case is 1. As an obvious consequence we have our next result. 22 For normal matrices the minimal polynomial consists of the product of the linear factors associated with the distinct roots (ignoring multiplicity). This is the situation we encounter in many applications.
122
Chapter 3 Partial Differential Equations with Constant Coefficients
P# Theorem 3.1.43. For the minimal polynomial .t / WD kD0 ck t k of degree # s C 1, associated with an ..s C 1/ .s C 1//-matrix A we have # X
ck Ak D 0:
kD0
Note that if A is invertible then c0 ¤ 0 as of course is a0 D .1/s det.A/ ¤ 0. In this case, multiplication by A1 yields # 1 1
A
D
X ckC1 Ak c0
(3.1.33)
s X akC1 k A : a0
(3.1.34)
kD0
and A1 D
kD0
The coefficients are polynomials in the entries of A. Applying these findings to P .@/ in place of A yields an interesting way to construct P .@/1 in terms of powers of P .@/. We also find det.P .@// D .1/sC1 a0 .@/ D .1/s
sC1 X
ak .@/ P .@/k
(3.1.35)
kD1
or, using the minimal polynomial, c0 .@/ D
# X
ck .@/ P .@/k ;
kD1
where ak .@/; cj .@/ are scalar partial differential expressions (to be applied component-wise), k D 0; : : : ; s C1I j D 0; : : : ; # . Applying these formulas to the solution u of P .@/u D f we get det.P .@// u D .1/sC1 a0 .@/ u D .1/s
sC1 X
ak .@/ P .@/k u
kD1
D .1/s
sC1 X
(3.1.36)
ak .@/ P .@/k1 f;
kD1
or using the minimal
polynomial23
c0 .@/ u D
# X kD1
23
instead
ck .@/ P .@/k u D
# X
ck .@/ P .@/k1 f:
kD1
We make here the implicit assumption that the coefficients are of polynomial type, i.e. we take the minimal polynomial to be the polynomial of lowest degree with polynomial coefficients.
Section 3.1 Partial Differential Equations in H1 .@ C e/
123
Summarizing, we have found the following correspondence. Theorem 3.1.44. Let P .@/ be an ..s C 1/ .s C 1//-partial differential expression, invertible with respect to 2 RnC1 and let u; f 2 H1 .@ C e/ be such that P .@/ u D f:
(3.1.37)
PsC1 Furthermore, let t 7! a .i x C / t k be the characteristic polynomial of PrC1 kD0 k P .i xC/ and t 7! kD0 ck .i xC/ t k its minimal polynomial (asC1 D crC1 D 1). Then u also solves the equations det.P .@// u D .1/s
s X
akC1 .@/ P .@/k f
(3.1.38)
kD0
and c0 .@/ u D
r X
ckC1 .@/ P .@/k f
(3.1.39)
kD0
where ak .@/; cj .@/ are scalar partial differential expressions, k D 0; : : : ; s C 1I j D 0; : : : ; r C 1. Conversely, every solution of (3.1.38) or (3.1.39) also solves (3.1.37). Proof. The first part of the result is clear from our considerations above. It remains to see the converse. For sake of brevity, we only consider the case of equation (3.1.38); the other case follows analogously. From (3.1.35) we have det.P .@// D .1/s
sC1 X
ak .@/ P .@/k
kD1
and using this in the right-hand side of (3.1.38) leads to det.P .@// .u P .@/1 f / D 0: Since det.P .@// is invertible (since P .@/ was assumed to be invertible) we get u P .@/1 f D 0 and so u solves (3.1.37). This result shows that the components of u do indeed solve a scalar partial differential equation of the form det.P .@// w D h or c0 .@/ w D h;
124
Chapter 3 Partial Differential Equations with Constant Coefficients
respectively. Thus a system of partial differential equations is always equivalent to a diagonal system of scalar partial differential equations involving the same scalar partial differential expression. Sometimes we wish to convert a scalar equation into a system while reducing the order of differentiation with respect to a particular choice of coordinate direction. Although, we do not need the partial differential expression to be evolutionary, we shall write P .@0 ; b @/ for the scalar partial differential expression, which is to be converted to a system that is first order with respect to @0 . More specifically let @/ WD P .@0 ; b
sC1 X
ak .b @/ @k0
kD0
with asC1 .b @/ D 1. Then the well-known procedure from the theory of ordinary differential equations translates this into the ..s C 1/ .s C 1//-partial differential equation 0 1 1 0 0 @0 1 0 1 0 u 0 B C : : : :: :: :: B 0 CB B : C @0 B C B @0 u C C B :: C B : CB :: :: :: C D B C: B :: C : : : : B C 0 :: C B CB A @ 0 A @ B C 0 @0 1 @ 0 A @s0 u f a0 .b @/ a1 .b @/ as1 .b @/ @0 C as .b @/ With 0 B B B B b A.@/ WD B B B @
this simplifies to
0
1
0
0 :: :
0 :: :
1 :: :
0 0 b b a0 .@/ a1 .@/
1 0 :: C : C C C ; 0 C C C 0 1 A b as1 .@/ as .b @/ :: : :: :
1 u C B B @0 u C B V WD B : C C; @ :: A
1 0 B :: C B C @// V D F WD B : C: .@0 C A.b @0A f
0
@s0 u
0
(3.1.40)
@// D P .@0 ; b @/ and so invertibility is preserved. We note that det.@0 C A.b Let V be a solution of (3.1.40). Then with u WD V0 we recover from the first s rows Vk D @0 Vk1 D @k0 u for k D 1; : : : ; s
Section 3.1 Partial Differential Equations in H1 .@ C e/
125
as well as P .@0 ; b @/u D f from the last row. Of course there are many systems equivalent to a given scalar partial differential equation and the above way may not be the smartest way to get an equivalent system. Examples 3.5 and 3.6 provide two systems equivalent to the wave equation of which only the second features regularity loss 0. The first system – displaying a non-vanishing regularity loss – is, however, obtained in the way described above. Thus, since every evolutionary higher order scalar equation with constant leading coefficient with respect to powers of @0 can be brought into the form of a system of first order with respect to @0 , we may consider an ..s C1/.s C1//-partial differential expression evolutionary in direction .1; 0; : : : ; 0/ of the form P .@0 ; b @/ D @0 C A.b @/; where A.b @/ is an ..s C 1/ .s C 1//-partial differential expression in the spatial derivatives only, as the standard first order form of an ..s C 1/ .s C 1//-partial differential expression evolutionary in direction .1; 0; : : : ; 0/ 2 R1Cn .
3.1.10 Causality Intrinsic in the term ‘evolutionary’ is the expectation that solutions associated with evolutionary operators are ‘causal’. With the direction of evolution as the time direction we know what ‘before’ and ‘after’ means and causality will be understood as the property that the solution can be non-zero only ‘after’ the right-hand side is non-zero. To make this more precise we need to define the support of an element in 1 H1 .@ C e/ in direction 0 WD jj , 2 RnC1 n ¹0º. Definition 3.1.45. Let W RnC1 ! C be a continuous function. We define the support supp of as [ supp WD RnC1 n ¹ j open in RnC1 ^ D 0 in º: V
nC1
Two linear functionals f; g 2 C C1 .R / are called equal on if f g D 0 on , i.e. f . / D g. / for all 2 CV 1 .RnC1 / with supp . Moreover, we define the support supp f of a linear functional f on CV 1 .RnC1 / as [ supp f WD RnC1 n ¹ j open in RnC1 ^ f D 0 in º: Let .0/ be a particular direction in RnC1 . Then [ supp.0/ f WD R n ¹I j I open in R ^ f D 0 in ŒI .0/ C ¹.0/ º? º is called the support of f in direction .0/ . By definition we have that the orthogonal projection of supp f on LinR ¹.0/ º D ŒR .0/ is given by Œsupp.0/ f .0/ :
126
Chapter 3 Partial Differential Equations with Constant Coefficients
If .0/ D .1; 0; : : : ; 0/ D e0 is the evolutionary direction of a partial differential expression in R1Cn under consideration, then we shall briefly speak of the time support of f and write supp0 f WD suppe0 f: Here .0/ D .1; 0; : : : ; 0/ is identified as the positive time direction e0 . Remark 3.1.46. A straight-forward calculation shows [ supp.0/ f D R n ¹I j I open in R ^ RnC1 n supp f ŒI .0/ C ¹.0/ º? º \ D ¹R n I j I open in R ^ supp f ŒR n I .0/ C ¹.0/ º? º \ D ¹C j C closed in R ^ supp f ŒC .0/ C ¹.0/ º? º: As a matter of convenience, we may also rotate .0/ into a suitable direction. Indeed, let A W RnC1 ! RnC1 be unitary, so that A is its inverse. We note that 7! ı A is a bijection on CV 1 .RnC1 / and define V
nC1 /
A W C C1 .R
V
nC1 /
! C C1 .R
;
u 7! Au nC1
V
with u. ı A / DW Au. /, where C C1 .R / is the linear space of functionals on CV 1 .RnC1 /. We find ^ f D 0 in , f . / D 0 2CV 1 ./
,
^
f . / D 0
ıA2CV 1 .A Œ/
,
^
f . ı A ı A / D 0
ıA2CV 1 .A Œ/
,
^
Af . ı A/ D 0
ıA2CV 1 .A Œ/
,
^
Af . / D 0
2CV 1 .A Œ/
,
Af D 0 in A Œ
and so supp Af D A Œsupp f :
Section 3.1 Partial Differential Equations in H1 .@ C e/
127
As a consequence we have
$$\operatorname{supp} f \subseteq [C]\,\nu^{(0)} + \{\nu^{(0)}\}^{\perp} \iff A^*[\operatorname{supp} f] \subseteq A^*\big[[C]\,\nu^{(0)} + \{\nu^{(0)}\}^{\perp}\big] \iff \operatorname{supp} A f \subseteq [C]\,A^*\nu^{(0)} + \{A^*\nu^{(0)}\}^{\perp}$$
and therefore also supp.0/ f D suppA .0/ Af: This observation allows for the convenient assumption that .0/ D .1; 0; : : : ; 0/ is the standard direction of concern in the discussion of causality (compare Lemma 3.1.48 below). We are now ready to formulate our concept of causality. Definition 3.1.47. Let W be a mapping from a subset of linear functional defined on CV 1 .RnC1 / with values in the set of such functionals. If inf supp.0/ .f g/ inf supp.0/ .W .f / W .g//
(3.1.41)
for all f; g in the domain of W then we call W (forward) causal in direction .0/ . If W is (forward) causal in direction .0/ , we shall also say that W is backward causal in direction .0/ or just backward causal, if the direction is understood from the context (in particular if .0/ is the positive time direction). Here we interpret inf supp.0/ f D 1 if supp.0/ f is not bounded below and inf supp.0/ f D C1 if supp.0/ f is empty, so that (3.1.41) is only restrictive if we take f with supp.0/ f bounded below. If W is linear then W .0/ D 0 and we may assume g D 0 for an equivalent description of causality. As a word of warning, we point out that, without topology in the background, the concepts of support and causality as introduced here can be quite meaningless. It is, however, convenient to introduce these concepts as a “jargon” without explicit reference to a particular topology, which in fact will vary. The gap can, in these cases, which we consider, be completed via a density argument and continuous extension with respect to the particular topology used. For example, any element in H1 .@ C e/ can and will be identified with a linear functional on CV 1 .RnC1 / via CV 1 .RnC1 / ! C;
(3.1.42)
' 7! h j exp.2m/'i;0 : In the above terminology we obviously have, for any 2 RnC1 , that P .@/ is causal in any direction .0/ . Indeed, hP .@ C / u j i;0 D hu j P .@ C i / i;0
for all 2 CV 1 .RnC1 /. Since D 0 in implies P .@ C i / D 0 in , we have supp P .@ C i / supp and so u D 0 in implies P .@ C / u D 0 in . Thus supp P .@ C / u supp u: Recalling the above remark, this implies in particular that supp.0/ P .@ C / u supp.0/ u and so inf supp.0/ u inf supp.0/ P .@ C / u; where the direction .0/ is arbitrary and need not be linked to . The interesting part is whether the inverse, i.e. the solution operator, is also causal. In the light of our earlier consideration of transforming coordinates and in consequence Sobolev lattices, it is worth noting the following general result. V
Lemma 3.1.48. Let $W : D(W) \subseteq \mathring{C}_\infty(\mathbb{R}^{n+1})' \to \mathring{C}_\infty(\mathbb{R}^{n+1})'$ be causal in direction $\nu^{(0)}$ and let $A : \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ be a rotation. Then the mapping $A \circ W \circ A^*$ defined by $u \mapsto A(W(A^* u))$ is causal in direction $A^*\nu^{(0)}$.

Proof. The condition $\inf \operatorname{supp}_{\nu^{(0)}}(f - g) \le \inf \operatorname{supp}_{\nu^{(0)}}(W(f) - W(g))$ is equivalent to
$$\inf \operatorname{supp}_{A^*\nu^{(0)}}(A f - A g) \le \inf \operatorname{supp}_{A^*\nu^{(0)}}\big(A(W(f)) - A(W(g))\big) = \inf \operatorname{supp}_{A^*\nu^{(0)}}\big(A(W(A^* A f)) - A(W(A^* A g))\big).$$
Replacing $Af$ and $Ag$ for $f$ and $g$, we obtain the desired result.

Example 3.19. As an important example, consider the 1-dimensional situation $\partial_\nu + (\nu + \beta) : H_{-\infty}(\partial_\nu + 1) \to H_{-\infty}(\partial_\nu + 1)$ for $\nu \in \mathbb{R}$. According to the above, this is a causal operator in direction $\pm 1$. Since
$$|i x + (\nu + \beta)|^2 = x^2 + (\nu + \beta)^2 \ge (\nu + \beta)^2$$
we see that $\partial_\nu + (\nu + \beta)$ is reversibly evolutionary²⁴. Note, however, that if $\beta = -\nu$ we have $\partial_\nu + (\nu + \beta) = \partial_\nu$, which is not even invertible with respect to $\nu$. Indeed, $0$ belongs to the spectrum of $\partial_\nu$, and we shall see later that invertibility in the corresponding Sobolev lattice also fails. Let us now consider $(\partial_\nu + (\nu + \beta))^{-1}$ for $\beta \ne -\nu$. To study this operator we first solve the ordinary differential equation
$$\partial u + \beta\, u = f$$
for, say, $f \in \mathring{C}_\infty(\mathbb{R})$ classically. By the variation of constants method, the general solution is given by
$$u(x) = C \exp(-\beta x) + \exp(-\beta x) \int_{]-\infty,x]} \exp(\beta s)\, f(s)\, ds.$$
To make sure that $u$ lies in the underlying weighted space, we have to adjust the constants and find, for $\beta > -\nu$,
$$u(x) = \exp(-\beta x) \int_{]-\infty,x]} \exp(\beta s)\, f(s)\, ds.$$
Similarly, for $\beta < -\nu$ we get
$$u(x) = -\exp(-\beta x) \int_{[x,\infty[} \exp(\beta s)\, f(s)\, ds.$$
With
$$g_{\beta,\nu}(x) := \operatorname{sgn}(\beta + \nu) \begin{cases} \exp(-\beta x) & \text{for } \operatorname{sgn}(\beta + \nu)\, x \ge 0,\\ 0 & \text{for } \operatorname{sgn}(\beta + \nu)\, x < 0 \end{cases}$$
we simply obtain $u = g_{\beta,\nu} * f$ in the sense of a convolution integral. Note in particular that
$$g_{\beta,\nu} \in H_0(\partial_\nu + e).$$
²⁴ This is of course also clear from our general observation that ordinary differential expressions are always reversibly evolutionary.
Indeed, we calculate
$$|g_{\beta,\nu}|_{\nu,0}^2 = \int_{\mathbb{R}} |g_{\beta,\nu}(x)|^2 \exp(-2\nu x)\, dx = \int_{\operatorname{sgn}(\beta+\nu)\,\mathbb{R}_{>0}} \exp(-2\beta x)\, \exp(-2\nu x)\, dx = \int_{\operatorname{sgn}(\beta+\nu)\,\mathbb{R}_{>0}} \exp(-2(\beta+\nu)\, x)\, dx = \int_{\mathbb{R}_{>0}} \exp(-2\,|\beta+\nu|\, x)\, dx = \frac{1}{2\,|\beta+\nu|}.$$
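As a purely numerical cross-check (our illustration, not part of the text; the grid, the bump function $f$ and the parameter values are arbitrary choices), the following sketch verifies the variation of constants formula, the causality of the resulting solution operator for $\beta + \nu > 0$, and the value of the weighted norm just computed.

```python
# Sketch: u(x) = exp(-beta*x) * int_{-inf}^{x} exp(beta*s) f(s) ds solves u' + beta*u = f,
# vanishes to the left of supp f (causality for beta + nu > 0), and the kernel
# g(x) = exp(-beta*x) (x >= 0) satisfies |g|_{nu,0}^2 = 1/(2*(beta + nu)).
import numpy as np

beta, nu = 1.5, 0.5
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

# smooth bump supported in [1, 3] as right-hand side
f = np.where(np.abs(x - 2.0) < 1.0,
             np.exp(-1.0 / np.maximum(1.0 - (x - 2.0) ** 2, 1e-300)), 0.0)

u = np.exp(-beta * x) * np.cumsum(np.exp(beta * x) * f) * dx   # variation of constants
g = np.where(x >= 0.0, np.exp(-beta * x), 0.0)                 # causal kernel

print("max |u' + beta*u - f|  :", np.max(np.abs(np.gradient(u, dx) + beta * u - f)))  # ~ grid error
print("max |u| left of supp f :", np.max(np.abs(u[x < 1.0])))                          # ~ 0 (causality)
print("weighted kernel norm^2 :", np.trapz(g ** 2 * np.exp(-2.0 * nu * x), x),
      "  exact:", 1.0 / (2.0 * (beta + nu)))
```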
Moreover, by uniqueness and continuous extension we have that gˇ; D .@ C .ˇ C //1 for 2 R; ˇ ¤ . Now, observe that hgˇ; f j i;0 D h.@ C .ˇ C //1 f j i;0 D hf j.@ C .ˇ C //1 i;0 D hf j.@ C .ˇ //1 i;0 D hf j.@ C . C .ˇ 2///1 i;0 D hf jgˇ 2; i;0 ; for all 2 CV 1 .R/, C ˇ ¤ 0. Let a WD inf supp f 2 R so that f D 0 on 1; aŒ. With supp 1; aŒ we therefore have hf j i;0 D 0: We also read off from the convolution integral that gˇ 2; 2 C1 .R/ and sup supp sup supp gˇ 2; , if ˇC > 0. Moreover, since the multiplication operator .i m C .ˇ C //1 maps H1 .i m C e/ into itself, we also have gˇ 2;
2 H1 .@ C e/. Since a simple cut-off with25 j1 ŒN;N .m/ yields that j1 Here j" .x/ WD "1 j.x="/, x 2 R, with j 2 CV 1 .R/ and supp.j / D Œ1; 1 may be chosen arbitrarily. A specific well-known choice is given by 1 j.x/ D C exp 1 x2 25
for $x \in\, ]-1,1[$, and $j(x) = 0$ otherwise, where $C := 1 \big/ \int_{]-1,1[} \exp\!\big(-\tfrac{1}{1-t^2}\big)\, dt$.
ŒN;N .m/ gˇ 2; ! gˇ 2; in H1 .@ C e/ as N ! 1, we also see that 0 D lim hf jj1 ŒN;N .m/ gˇ 2; i;0 N !1
D hf j gˇ 2; i;0 D h.@ C .ˇ C //1 f j i;0 for all 2 CV 1 .R/ with supp 1; aŒ . This proves that, in the case ˇ C > 0, .@ C .ˇ C //1 f D 0
on 1; aŒ
or inf supp.@ i .ˇ C //1 f a. Since a D inf supp f , the desired inequality follows for ˇ C > 0 proving that .@ C .ˇ C //1 is causal in direction C1. In the case ˇ C < 0 causality in direction 1 can be shown analogously. Since sgn.ˇ C / D sgn./ for jj sufficiently large, indeed for jj > jˇj, we may reformulate our findings in the following way: .@ C .ˇ C //1 is causal in direction sgn./ for all sufficiently large jj (in fact for jj > jˇj). Example 3.20. The last causality result carries over to the higher dimensional case following the same line of reasoning, since the cut-off procedure and the integration to obtain the solution to the ordinary differential equations can be carried out for directional derivatives in arbitrary dimensions. Thus, for any direction .0/ 2 RnC1 and D % .0/ 2 RnC1 ; % 2 R>0 , and ˇ 2 RnC1 such that k C ˇk ¤ 0, k D 0; : : : ; n, .@ C .ˇ C //˛ W H1 .@ C e/ ! H1 .@ C e/ is causal in direction .0/ for all ˛ 2 ZnC1 . This follows by induction from the causality of .@;k C .ˇk C k //1 W H1 .@ C e/ ! H1 .@ C e/ in direction .0/ . This is clear if .0/k ¤ 0. For .0/k D 0 we have .@;k C .ˇk C k //1 D .@k C ˇk /1 : This operator, however, cannot have any influence on the support in direction .0/ , since in this case .0/ ? ek . Considering the translation operator h W H1 .@ C e/ ! H1 .@ C e/, h 2 RnC1 , defined as the continuous extension of the mapping .h /.x/ WD .x C h/
for all 2 CV 1 .RnC1 /:
We can confirm that causality is, for (Borel) functions of @ , neutral with respect to shifts, i.e. translation of the argument. We calculate first Z jh j2;0 D j .x C h/j2 exp.2 x/ dx nC1 R Z D j .y/j2 exp.2 .y h// dy RnC1
D exp.2 h/ j j2;0 for all 2 CV 1 .RnC1 /. Therefore exp. h/ h W H0 .@ C e/ ! H0 .@ C e/ is unitary since h is clearly onto with inverse h W H0 .@ C e/ ! H0 .@ C e/. Moreover, we have h .@ C e/ D .@ C e/ h (3.1.43) on CV 1 .RnC1 / and so by induction h .@ C e/˛ D .@ C e/˛ h for all ˛ 2 ZnC1 . Consequently, we also get by continuous extension that exp. h/ h W H˛ .@ C e/ ! H˛ .@ C e/ is unitary for every ˛ 2 ZnC1 . In addition, due to the above commutativity property, we have that h commutes with P .@/ D P .@ C/ D P ..@ Ce/C.e// and so also with P .@/1 if P .@/ is invertible with respect to . We shall exploit this fact in the proof of our next theorem. Alternatively, these commutativity results can of course be confirmed (and extended to Borel functions of @ ) by applying the Fourier–Laplace transform. In particular, we note for future consideration that for functions of @ it suffices to test causality with such elements f satisfying inf supp.0/ f D 0: Remark 3.1.49. That h commutes with differentiation is not much of a surprise if we note that h is actually a function of @ . Indeed, we have26 h D exp.h / exp.h @ / D exp.h .@ C // and so exp. h/ h D exp.h @ / which confirms again the above unitarity statement. 26
²⁶ It may be interesting to note that this formally corresponds to Taylor’s formula, since $\exp(h z) = \sum_{k \in \mathbb{N}} \frac{h^k}{k!}\, z^k$.
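The remark that $\tau_h$ is a function of $\partial$ can also be made concrete numerically (our sketch; the Gaussian and the grid are arbitrary choices): on the Fourier side, translation by $h$ is multiplication by $\exp(ih\xi)$, i.e. the spectral representation of $\exp(h\,\partial)$.

```python
# Sketch: tau_h = exp(h * d/dx) acts as the Fourier multiplier exp(i*h*xi).
import numpy as np

N, L, h = 4096, 40.0, 0.7
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
psi = np.exp(-x ** 2)                              # smooth, effectively periodic test function

xi = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)      # grid frequencies
shifted = np.fft.ifft(np.exp(1j * h * xi) * np.fft.fft(psi)).real

print("max |exp(h d/dx) psi - psi(. + h)| =", np.max(np.abs(shifted - np.exp(-(x + h) ** 2))))
```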
Theorem 3.1.50. Let P .@/ be an ..s C 1/ .s C 1//-partial differential expression evolutionary in direction .0/ 2 RnC1 , s; n 2 N. Then P .@ C /1 W H1 .@ C e/ ! H1 .@ C e/ is well-defined and causal in direction .0/ with D % .0/ for all sufficiently large % 2 R>0 . Proof. That P .@ C /1 W H1 .@ C e/ ! H1 .@ C e/ exists as a continuous operator for all D % .0/ with % 2 R>0 sufficiently large follows from our solution theory. Moreover, we have the existence of an ˛ D .˛0 ; b ˛ / 2 N nC1 , b ˛ 2 N n , such ˛ 1 nC1 that .i x C e/ P .i x C / is uniformly bounded in x 2 R , 2 ŒR>0 .0/ , with jj %0 > 0 for some %0 2 R>0 . According to our earlier considerations and in particular due to Lemma 3.1.48, we may assume without loss of generality that .0/ D .1; 0; : : : ; 0/. Therefore, we are concerned with an expression of the form ˛ .i x0 C 1/˛0 .ib x C e/b P .i x0 C %; ib x /1
bounded uniformly for x0 2 R, b x 2 Rn , % 2 R%0 . Clearly ji x0 C 1j2 D x02 C 1 x02 C .1 C %/2 D ji x0 C % C 1j2 and so there exists a constant C 2 R>0 such that ˛ x C e/b P .i x0 C %; ib x /1 j C j..i x0 C %/ C 1/˛0 .ib
(3.1.44)
for all x0 2 R, b x 2 Rn , % 2 R%0 . Since .@ C C e/˛ is causal in every direction and the composition of causal operators is again causal, it suffices to show that .@ C C e/˛ P .@ C /1 is causal in direction .0/ in order to show that P .@ C /1 is causal in this direction. For this we first note that (3.1.44) can be written in the form j.i C C %0 C 1; i C e/˛ P .i C C %0 ; i /1 j C for all 2 R, 2 R>0 , 2 Rn . Letting z D i we obtain that j.i z C %0 C 1; i C e/˛ P .i z C %0 ; i /1 j C
(3.1.45)
for all 2 Rn , z 2 R i R>0 . Now let f 2 Hˇ .@ C e/, ˇ 2 .ZnC1 /s1 D .ZnC1 /s be such that a D inf supp.0/ f 2 R. Then, we have to show that inf supp.0/ ..@ C . C e//˛ P .@ C /1 f / a:
Without loss of generality we may assume that f 2 H0 .@ C e/ since, from Example 3.20, we have that the bounded bijections .@ C e C / W H .@ C e/ ! H0 .@ C e/, 2 ZnC1 are already causal in direction .0/ . Due to the above noted interaction with translation, we may focus our attention on the case a D 0. Under these simplifying assumptions, we first observe that .L .@ C C e/˛ P .@ C / f /.x/ D .i z C %0 C 1; i x1 C 1; : : : ; i xn C 1/˛ b.z i %0 ; x1 ; : : : ; xn / P .i z C %0 ; i x1 ; : : : ; i xn /1 f for almost all x D .x1 ; : : : ; xn / 2 Rn . The mapping .z; x1 ; : : : ; xn / 7! .i z C %0 C 1; i x1 C 1; : : : ; i xn C 1/˛ b.z i %0 ; x1 ; : : : ; xn / P .i z C %0 ; x1 ; : : : ; xn /1 f
(3.1.46)
can be considered as a mapping from R i R>0 into L2 .Rn / and so, given any 2 CV 1 .Rn /, we may define a mapping S W R i R>0 ! C; b.z i %0 ; /i0 : z 7! hFn j .i z C%0 C1; i C e/˛ P .i z C%0 ; i /1 f By the converse of the Paley–Wiener theorem (see e.g. [52]), we know that the mapb.z i %0 ; ; : : : ; / is analytic27 and ping F W R i R>0 ! L2 .Rn / given by z 7! f so the mapping S is also analytic in R i R>0 . Indeed, t 7! exp.%0 t /h j f .t; /i0 2 L2 .R>0 / and so its Fourier transform z 7! h j .L%0 ˝ 1/f .z; /i0 D z 7! hFn j .L%0 ˝ Fn /f .z; /i0 b.z i %0 ; /i0 D z 7! hFn j f extends to an analytic function in R i R>0 . It can be seen that b.z i %0 ; /i0 S 0 .z/ D hFn j Q.z; / f b 0 .z i %0 ; /i0 C hFn j .i z C %0 C 1; i C e/˛ P .i z C %0 ; i /1 f where Q denotes the derivative of z 7! .i z C %0 C 1; i C e/˛ P .i z C %0 ; i /1 : 27
Recall that weak analyticity is equivalent to analyticity, see e.g. [46].
The family of matrix-valued functions z 7! Q.z;b x /, b x 2 Rn , is by the rules of differentiation still entry-by-entry absolutely and locally uniformly bounded above by a term of the form C .ib x C e/ for some 2 Zn , C 2 R>0 . Thus, S is seen to be analytic. Moreover, by (3.1.45) we have jS. i /j0;0 C j j0 jF . i /j0;0 D C j j0 jf j.%0 C ;0;:::;0/;0 C j j0 jf j.%0 ;0;:::;0/;0 for all 2 R>0 . Thus, we also get that S is in the so-called Hardy–Lebesgue space ˇ ° ˇ (3.1.47) HL WD f W ŒR iŒR>0 ! C ˇ f analytic ; ± ^ f . i/ 2 L2 .R/; sup¹jf . i/jL2 .R/ j 2 R>0 º < 1 : 2R>0
The Paley–Wiener theorem (see e.g. [52]), now yields that S can be continuously extended to the closed halfplane ŒR iŒR0 yielding on R an element s of L2 .R/ such that F s 2 L2 .R/ has support in R0 , i.e. h j F si0;0 D h exp.% m1 / j L% si%;0 D 0 for all
2 CV 1 .R/ with supp
R0 , let .0/ ;R 2 CV 1.0/ .RnC1 / be defined by nC1 , with28 .0/ ;R .x/ WD 0 ..0/ x R/; x 2 R Z 0 .t / WD
j.s/ ds Œt;1Œ
Here j is the well-known element of CV 1 .R/ constructed earlier. Of course, any element ' 2 R CV 1 .RnC1 / with R '.x/ dx D 1 could be used. 28
Section 3.1 Partial Differential Equations in H1 .@ C e/
139
C .@ for all t 2 R. Then, for every % %0 and f 2 H1 % .0/ C e/, the limit
U WD lim P .@% .0/ C % .0/ /1 R!1
.0/ ;R
f
C .@ exists in H1 % .0/ C e/ and satisfies
hU j P .@/ i0;0 D hf j i0;0
(3.1.49)
for all 2 CV 1 .RnC1 /.
Proof. First we note that
.0/ ;R
;C
2 CV 1.0/ .RnC1 / so that
.0/ ;R .m/ f
2 H1 .@% .0/ C e/:
The solution uR 2 H1 .@% .0/ C e/ of P .@% .0/ C % .0/ / uR D
.0/ ;R .m/ f
is, for % %0 , well-defined. From the definition of inf supp.0/ .
.0/ ;R1 .m/
.0/ ;R2 .m// f
.0/ ;R , we get, for all R1 ; R2
inf supp.0/ .
.0/ ;R1
D inf supp. . R1 /
2 R,
.0/ ;R2 /
. R2 //
min.R1 ; R2 / 1: Therefore, by causality in direction .0/ , the corresponding solutions must also satisfy inf supp.0/ .uR1 uR2 / min.R1 ; R2 / 1: In other words, we have uR1 D uR2
in ŒR<min¹R1 ;R2 º1 .0/ C ¹.0/ º? :
Thus R 7! huR j i%0 .0/ ;0 is eventually constant (i.e. for sufficiently large argument); we can define a linear functional U by U. / WD hU j i%0 .0/ ;0 WD lim huR j i%0 .0/ ;0 R!1
for all 2 CV 1 .RnC1 /. Moreover, by the same argument it follows that for every ;C 2 CV 1.0/ .RnC1 / we have .m/ U D .m/ uR if R is sufficiently large. Thus, in ;C particular, .m/ U 2 H1 .@% .0/ C e/ and, since 2 CV 1.0/ .RnC1 / was arbitrary, C .@% .0/ C e/: U 2 H1
Moreover, we have hU j P .@/ i0;0 D lim huR j P .@/ i0;0 R!1
D lim huR j exp.2%.0/ m/ P .@/ i%0 .0/ ;0 R!1
D lim huR j P .@% .0/ C % .0/ / exp.2%.0/ m/ i%0 .0/ ;0 R!1
D lim hP .@% .0/ C % .0/ / uR j exp.2%.0/ m/ i%0 .0/ ;0 R!1
D lim h
.0/ ;R .m/ f
j exp.2%.0/ m/ i%0 .0/ ;0
D lim h
.0/ ;R .m/ f
j i0;0
R!1 R!1
D lim hf j R!1
.0/ ;R .m/ i0;0
D hf j i0;0 D f . / for all 2 CV 1 .RnC1 /. Remark 3.1.55. The particular form of .0/ ;R is used here only for the sake of definiteness. Any sequence . k /k in C1 .RnC1 / with supp.0/ k bounded above as well as supp.0/ .1 k / bounded below and inf supp.0/ .1 k / ! 1 as k ! 1 could be used to serve the same purpose, producing the same U . Given any linear functional U on CV 1 .RnC1 /, and defining hU j i;0 WD hU j exp.2 m/ i0;0 WD U.exp.2 m/ / for 2 CV 1 .RnC1 /, 2 RnC1 , we see that (3.1.49) can equivalently be written (with D % .0/ ) as hU j i;0 D hf j P .@ C / i;0 : This notation makes (3.1.49), in a more suggestive way, a generalization of the solution theory in H1 .@% .0/ C e/. We have chosen to write (3.1.49) in order to be closer to the usual concept of a weak solution. C .@ The element U 2 H1 % .0/ C e/ will – because of the relation (3.1.49) – be C .@ called a weak solution in H1 % .0/ C e/ of the equation C P .@/ U D f 2 H1 .@% .0/ C e/:
This solution U will also be referred to as a solution in the weak sense. The last lemma shows its existence; the next shows uniqueness.
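A minimal illustration of relation (3.1.49) (our example; we write $P(\partial)^{\sim}$ for the formally adjoint expression appearing there): take $n = 0$, $P(\partial) = \partial_0$ and $f = \delta$. Then $U = \chi_{\mathbb{R}_{>0}}$, since for every $\varphi \in \mathring{C}_\infty(\mathbb{R})$
$$\langle U \mid \partial_0^{\sim} \varphi \rangle_{0,0} = -\int_0^{\infty} \varphi'(t)\, dt = \varphi(0) = \langle \delta \mid \varphi \rangle_{0,0},$$
so $U$ is the (causal) weak solution of $\partial_0 U = \delta$, in accordance with $\partial_0 \chi_{\mathbb{R}_{>0}} = \delta$ noted in (3.1.54) below.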
Lemma 3.1.56. Let P .@/ be a partial differential expression evolutionary in direction .0/ and let %0 2 R>0 be such that P .@% .0/ C% .0/ / is invertible in H1 .@% .0/ Ce/ C .@ for all % %0 . A weak solution of the equation P .@/ U D f in H1 % .0/ C e/ is unique. C .@ Proof. Let U 2 H1 % .0/ C e/ be a weak solution of
P .@/ U D 0: Then
.0/ ;R .m/ U
2 H1 .@% .0/ C e/ solves the equation
P .@/
.0/ ;R .m/ U
D .P .@/
.0/ ;R .m/
.0/ ;R .m/ P .@// U
2 H1 .@% .0/ C e/:
Since .P .@/ .0/ ;R .m/ .0/ ;R .m/ P .@// contains terms with at least one differentiation acting on .0/ ;R we have inf supp.0/ .P .@/
.0/ ;R .m/
.0/ ;R .m/ P .@// U
R 1:
Therefore, by causality we must also have inf supp.0/
.0/ ;R .m/ U
R 1:
Letting R ! 1 we see that U. / D 0 for all 2 CV 1 .RnC1 /, i.e. U D 0. This lemma establishes the existence of the solution operator C C P .@/1 W H1 .@% .0/ C e/ ! H1 .@% .0/ C e/:
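To see the commutator argument in the simplest case (our illustration): for $P(\partial) = \partial_0$ and $\nu^{(0)} = e_0$ one computes
$$\partial_0\big(\varphi_{e_0,R}(m)\,U\big) - \varphi_{e_0,R}(m)\,\partial_0 U = \big(\partial_0 \varphi_{e_0,R}\big)(m)\, U = -\,j(m_0 - R)\, U,$$
and since $j$ is supported in $[-1,1]$, this term is supported in the time slab $R-1 \le x_0 \le R+1$; in particular its time support has infimum at least $R-1$, which is exactly the property the uniqueness proof uses.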
Theorem 3.1.57. Let P .@/ be a partial differential expression evolutionary in direction .0/ and let %0 2 R>0 be such that P .@% .0/ C % .0/ / is invertible in C .@ H1 .@% .0/ C e/ for all % %0 . Then, for every f 2 H1 % .0/ C e/, we have a C unique solution U 2 H1 .@% .0/ C e/ solving P .@/ U D f in the weak sense. Moreover, U depends continuously on f in the sense that the mapping 1 .0/ ;R .m/ f 7! .0/ ;R2 .m/ P .@/ f C .@ is continuous in H1 % .0/ C e/ for every R 2 R.
Proof. The unique existence has been established in Lemma 3.1.54 (existence) and Lemma 3.1.56 (uniqueness). The continuity statement needs to be shown. We find P .@/
.0/ ;R .m/ U
D
.0/ ;R .m/ f
C .P .@/
.0/ ;R .m/
2 H1 .@% .0/ C e/
.0/ ;R .m/ P .@// U
and so .0/ ;R .m/ U
D uR C P .@% .0/ C % .0/ /1 .P .@/
.0/ ;R .m/
.0/ ;R .m/ P .@// U:
Recalling from the proof of Lemma 3.1.56 that, due to causality, inf supp.0/ P .@% .0/ C % .0/ /1 .P .@/
.0/ ;R .m/
.0/ ;R .m/ P .@// U
R1
we find .0/ ;R2 .m/ U
D
.0/ ;R2 .m/
.0/ ;R .m/ U
D
.0/ ;R2 .m/ uR :
This yields the desired continuity result. Thus, we see that we really do not lose much generality by restricting our attention to the globally constrained Sobolev lattices H1 .@ C e/. For this reason we shall also henceforth not bother to point out the corresponding semi-local in time result, which can always easily be obtained by the above ideas.
3.1.11 Initial Value Problems As we have shown earlier, we may for an evolutionary problem always assume that the direction of evolution is the first coordinate direction. To emphasize this, it is usual to renumber the coordinate to refer to the direction of evolution as the direction of the ‘0-th’ variable. We recall that the corresponding direction .0/ D e0 D .1; 0; : : : ; 0/ 2 R1Cn , n 2 N, is frequently referred to as ‘time’. We have adopted this terminology already to discuss various applications we shall turn to in detail later. In this spirit we shall, henceforth, always consider evolutionary problems in the form P .@0 ; b @/ u D f , where it is understood that P .@0 ; b @/ is evolutionary in direction e0 making @0 the timederivative and b @ D .@1 ; : : : ; @n / indicates the spatial derivatives. We shall consider solving this equation in H1 .@% e0 C .1; e// D H1 .@0;% C 1; b @ C e/, for % 2 R>0 b sufficiently large (to assure that P .@0;% C%; @/ is invertible in H1 .@0;% C1; b @Ce/), e WD .1; : : : ; 1/ 2 Rn . Recalling the tensor product construction used to obtain H1 .@0;% C 1; b @ C e/, we note that @ C e/ D H˛0 .@0;% C 1/ ˝ Hb .b @ C e/ H˛ .@0;% C 1; b ˛ where ˛ D .˛0 ; ˛1 ; : : : ; ˛n / 2 Z1Cn , b ˛ D .˛1 ; : : : ; ˛n / 2 Zn . When ˛ is a multi.ij / index ..s C 1/ .t C 1//-matrix .˛ /iD0;:::;sIj D0;:::;t with .1 C n/-tuples as entries .ij / .ij / .ij / .ij / ˛D ˛ .ij / D .˛0 ; ˛1 ; : : : ; ˛n / in Z1Cn , then ˛0 D .˛0 /iD0;:::;sIj D0;:::;t and b .ij / .ij / .ij / .ij / n ˛ D .˛1 ; : : : ; ˛n / in Z . It is .b ˛ /iD0;:::;sIj D0;:::;t with multi-index entries b
to single out the time variable that we shall also write @ C e/ H1 .@0;% C 1/ ˝ H1 .b and H˛0 .@0;% C 1/ ˝ H1 .b @ C e/
for
for H1 .@0;% C 1; b @ C e/ [ ˛ b
H˛0 .@0;% C 1/ ˝ Hb .b @ C e/: ˛
Thus we shall recover the intuition of an initial value problem from an abstract problem of the form P .@0 ; b @/ u D f where in this context f has a specific form. To analyze this issue more closely we note that the polynomial matrix P can be written in the form P .i ; i / D
p X
aj .i / .i /j
j D0
where aj is a polynomial matrix for j D 0; : : : ; p, p 2 N. We assume that ap is not the zero polynomial matrix, so that p is indeed the maximal degree of the single variable polynomial matrix 7! P .i ; i / for 2 RnC1 . For the type of problem we wish to discuss here, it is customary to assume that ap is a constant invertible matrix and without loss of generality we assume ap 1.ss/ :
(3.1.50)
Thus, we have P .@0 ; b @/ D
p X
j p aj .b @/ @0 D @0 C
j D0
p1 X
j aj .b @/ @0 :
j D0
Now let u be smooth on R0 Rn , e.g. u D R>0 .m0 / ; 2 CV 1 .RnC1 /, and define f WD P .@0 ; b @/ u. Then f is smooth on R>0 and by definition u solves classically the equation
P .@0 ; b @/ u D f
on R>0 . The regular part of f , i.e. the part in H0 .@0;% C 1; b @ C e/, is just f0 WD R>0 .m0 / P .@0 ; b @/ . Moreover, u D f D 0 on the time interval R0 .m0 / P .@0 ; b @// and letting Now, f f0 D .P .@0 ; b e WD exp.2% m0 /
we get @/ R>0 .m0 / R>0 .m0 / P .@0 ; b @// j i% e0 ;0 h.P .@0 ; b D h.P .@0 ; b @/ R>0 .m0 / R>0 .m0 / P .@0 ; b @// j exp.2% m0 / i0;0 D h j R>0 .m0 / P .@0 ; b @/ ei0;0 hR>0 .m0 / P .@0 ; b @/ j ei0;0 Z D .h j P .@0 ; b @/ ei0 .t / hP .@0 ; b @/ j ei0 .t // dt Œ0;1Œ
Z D
p X
Œ0;1Œ j D1
Z D
j .haj .b @/ j .@0 /j ei0 .t / h.@0 aj .b @/ /.t; / j e.t; /i0 / dt
p X
j .h@0 aj .b @/ j .@0 /j 1 ei0 .t / h@0 aj .b @/ j ei0 .t // dt
Œ0;1Œ j D1
Z
p X
.@0 haj .b @/ j .@0 /j 1 ei0 .t // dt:
(3.1.51)
Œ0;1Œ j D1
Continuing in this fashion we obtain eventually @/ R>0 .m0 / R>0 .m0 / P .@0 ; b @// j i% e0 ;0 h.P .@0 ; b Z
D
D
jX 1 p X h@k0 aj .b @/ j .@0 /j k1 ei0 .t / dt @0
Œ0;1Œ j D1
1 p jX X
kD0
haj .b @/ .@k0 /.0; / j ..@0 /j k1 e/.0; /i0
j D1 kD0
D
1 p jX X
hı ˝ .aj .b @/ .@k0 /.0; // j .@0 /j k1 ei0;0
j D1 kD0
D
1 p jX X
j k1
ı ˝ .aj .b @/ .@k0 /.0; // j ei0;0
j k1
ı ˝ .aj .b @/ .@k0 /.0; // j i;0 :
h@0
j D1 kD0
D
1 p jX X
h@0
j D0 kD0
Thus we have found f f0 D
1 p jX X j D0 kD0
j k1
@0
ı ˝ .aj .b @/ .@k0 /.0; //:
(3.1.52)
In other words, u solves the equation @/ u D f0 C P .@0 ; b
1 p jX X
j k1
@0
ı ˝ .aj .b @/ .@k0 /.0; //:
j D0 kD0
Obviously, we have .@k0 u/.0C; / D .@k0 /.0; / for k D 0; : : : ; p 1, which is what in classical terms would be called initial conditions at time zero. This suggests the following definition of an initial value problem in the framework of our Sobolev lattice structure. Definition 3.1.58. Let P .@0;% C %; b @/ be an evolutionary partial differential expression and %0 2 R>0 be such that P .@0;% C %; b @/ is invertible for all % %0 . Then the problem of finding the solution u 2 H1 .@0;% C 1; b @ C e/ of P .@0 ; b @/ u D f0 C
1 p jX X
j k1
@0
ı ˝ .aj .b @/ u0;k /;
(3.1.53)
j D0 kD0
where f0 2 H0 .@0;% C 1/ ˝ H1 .b @ C e/ with supp0 f0 R0 and u0;k 2 b H1 .@ C e/, k D 0; : : : ; p 1, % %0 , is called the abstract initial value problem for P .@0 ; b @/ with initial data .u0;k /kD0;:::;p1 and source term f0 . If specifically f0 D 0, we shall speak of the abstract initial value problem for P .@0 ; b @/ with initial data .u0;k /kD0;:::;p1 . That the abstract initial value problem has a unique solution u 2 H1 .@0;% C 1; b @ C e/ which depends continuously on the data f0 and .u0;k /kD0;:::;p1 is clear from the general theory. The question arises, however, in what sense does this solution assume its initial data, i.e. do we and in which sense do we have right-continuity (with respect to time) of the solution u and its first .p 1/ time derivatives. Pp Pj 1 j k1 Before we explore this question let us first consider the term j D0 kD0 @0 ı ˝ .aj .b @/ u0;k / on the right-hand side of (3.1.53). Recalling that in the 1-dimensional case we found @0 R>0 D ı;
(3.1.54)
it is not hard to see that
$$\partial_0^{\,j}\, \frac{m_0^k}{k!}\, \chi_{\mathbb{R}_{>0}} = \begin{cases} \dfrac{m_0^{\,k-j}}{(k-j)!}\, \chi_{\mathbb{R}_{>0}} & \text{for } j < k,\\[0.5ex] \chi_{\mathbb{R}_{>0}} & \text{for } j = k,\\[0.5ex] \partial_0^{\,j-k-1}\, \delta & \text{for } j > k \end{cases}$$
for all j; k 2 N. This yields, for j > k, j k1
@0
j k
ı D @0
j
R>0 D @0
mk0 kŠ R>0
and so 1 p jX X
j k1
@0
j D0 kD0
D P .@0 ; b @/
p1 X
p1 X kD0
D P .@0 ; b @/
jX 1
aj .b @/ @0 j
j D0
kD0
D P .@0 ; b @/
p X
ı ˝ .aj .b @/ u0;k / D
p1 X kD0
mk0 ˝ u0;k kŠ R>0
p X
kD0 j aj .b @/ @0
j D0
mk0 ˝ u0;k kŠ R>0
p1 X kDj
mk0 ˝ u0;k kŠ R>0
p p1 X X mkj mk0 0 R>0 ˝ u0;k aj .b @/ ˝ u0;k kŠ .k j /Š R>0 j D0
kDj
pj p X X1 mk mk0 0 R>0 ˝ u0;k aj .b @/ ˝ u0;kCj : kŠ kŠ R>0 j D0
kD0
This shows that solving the abstract initial value problem for data f0 , .u0;k /kD0;:::;p1 is equivalent to solving the abstract initial value problem with source term f0
p X
aj .b @/
j D0
pj X1 kD0
mk0 ˝ u0;kCj kŠ R>0
and vanishing initial data .0/kD0;:::;p1 . Let w denote the solution of the latter problem. Then the solution u of the first problem is given by uDwC
p1 X kD0
mk0 ˝ u0;k : kŠ R>0
(3.1.55)
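For instance (our illustration with $p = 2$, $a_1 = 0$, $a_0 = 1$ and no spatial part), for the abstract initial value problem $\partial_0^2 u + u = \partial_0\,\delta \otimes u_{0,0} + \delta \otimes u_{0,1}$ the above reduction reads
$$w := u - \chi_{\mathbb{R}_{>0}}\,\big(u_{0,0} + m_0\, u_{0,1}\big), \qquad \partial_0^2 w + w = -\,\chi_{\mathbb{R}_{>0}}\,\big(u_{0,0} + m_0\, u_{0,1}\big),$$
so that $w$ solves an initial value problem with vanishing initial data and a regular right-hand side, and $u$ is recovered from $w$ by (3.1.55).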
As a matter of simplification we may therefore assume that the initial data are all vanishing. To connect with point-wise concepts we shall use the following simple version of a so-called Sobolev embedding lemma. Lemma 3.1.59. Let D .%; 0; : : : ; 0/, where % 2 R>0 and ˛ 2 Zn , n 2 N. Then the mapping CV 1 .R1Cn / H1 .@0;% C 1/ ˝ H˛ .b @ C e/ ! ¹ 2 C0 .R; H˛ .b @ C e//j j j;˛;1;1=2 < 1º; u 7! .t 7! u.t; //
has a continuous extension to all of H1 .@0;% C 1/ ˝ H˛ .b @ C e/ (sometimes called the “trace operator” or the “operator of point-wise evaluation in time”) with respect to the norm of H1 .@0;% C 1/ ˝ H˛ .b @ C e/ and the image norm29 j j;˛;1;1=2 WD sup¹j exp.%t / .t /j˛ j t 2 Rº ² j exp.%t / .t / exp.%s/ .s/j˛ C sup jt sj1=2
ˇ ³ ˇ ˇ t; s 2 R ^ t ¤ s ˇ
@ C e/ for 2 CV 1 .R1Cn /. Moreover, for all u 2 H1 .@0;% C 1/ ˝ H˛ .b 1 sup¹j exp.%t /. u/.t /j˛ j t 2 Rº p j@0 uj;.0;˛/ 2% and
ˇ ³ j exp.%t /. u/.t / exp.%s/. u/.s/j˛ ˇˇ sup t; s 2 R ^ t ¤ s j@ uj;.0;˛/ : ˇ jt sj1=2 ²
@ C e/ and . k /k a sequence in CV 1 .RnC1 / In particular, if f 2 H1 .@0;% C 1/ ˝ H˛ .b converging to f in H1 .@0;% C1/˝H˛ .b @Ce/, then limk!1 k .t; / exists in H˛ .b @Ce/ for all t 2 R, is independent of the particular approximating sequence and equals f . Furthermore, the mapping is injective. Proof. For 2 CV 1 .R1Cn / we have ˇZ t ˇ ˇ ˇ ˇ @0 .u/ duˇˇ j .t / .s/j˛ D ˇ s
˛
sˇ Z ˇsˇ Z t ˇ ˇ ˇ ˇ ˇ t 2 ˇˇ exp.2%u/ duˇˇ ˇˇ j@0 .u/j˛ exp.2%u/ duˇˇ s s s j exp.2%t / exp.2%s/j j@0 j;.0;˛/ 2% s p j exp.2%t / exp.2%s/j j@0 j;.0;˛/ jt sj 2%jt sj p jt sj sup¹exp.%x/j x 2 ¹t; sºº j@0 j;.0;˛/
from which we can read off the desired continuity. With s ! 1, we also see from the second inequality that s 1 j .t /j exp.%t / j j;.1;˛/ : 2% 29
Note that the first term on the right is the so-called Morgenstern norm, [32].
Moreover, we also calculate j exp.%t / .t / exp.%s/ .s/j˛ ˇZ t ˇ ˇ ˇ D ˇˇ .@0 .exp.%m0 / //.u/ duˇˇ s ˛ sˇ Z ˇ p ˇ ˇ t jt sj ˇˇ j.@0 .exp.%m0 / //.u/j2˛ duˇˇ s sˇ Z ˇ p ˇ ˇ t 2 ˇ j.@0;% /.u/j˛ exp.2%u/ duˇˇ jt sj ˇ s p jt sj j@ j;.0;˛/ ; which shows that the mapping under consideration is a well-defined continuous linear mapping. This mapping can now be extended by the obvious uniform continuity to all of H1 .@0;% C 1/ ˝ H˛ .b @ C e/ due to the density of CV 1 .R1Cn / in H1 .@0;% C 1/ ˝ @ C e/. H˛ .b By substituting k j for in the first or second estimate, we see that . k .t; //k is a Cauchy sequence in H˛ .b @ C e/, if . k /k is a Cauchy sequence in H1 .@0;% C 1/ ˝ b H˛ .@ C e/. Moreover, let . k /k be another such sequence. Then, with k k in place of , we see that k .t; / k .t; / ! 0 in H˛ .@ C e/ as k ! 1, since k k ! 0 in H1 .@0;% C 1/ ˝ H˛ .@ C e/ as k ! 1. This shows that the extension is indeed well-defined by f .t / D lim k .t; /; t 2 R; k!1
where f WD limk!1 k . Finally, to see that W H1 .@0;% C 1/ ˝ H˛ .b @ C e/ ! ¹ 2 C0 .R; H˛ .b @ C e//j j j;˛;1;1=2 < 1º is injective, assume . k /k 2 CV 1 .RnC1 /N with j k j;˛;1;1=2 ! 0 as k ! 1 is @ C e/ with limit f 2 H1 .@0;% C 1/ ˝ a Cauchy sequence in H1 .@0;% C 1/ ˝ H˛ .b b H˛ .@ C e/. We need to show that f D 0. Letting k ! 1 in the equality h@0 k j i.%;0/;.0;˛/ D h k j .@0;% C %/ i.%;0/;.0;˛/ ; where
2 CV 1 .RnC1 / is arbitrary, we obtain h@0 f j i.%;0/;.0;˛/ D 0
Section 3.1 Partial Differential Equations in H1 .@ C e/ for every
149
2 CV 1 .RnC1 / from which we conclude @0 f D 0
@ C e/ and so in H1 .@0;% C 1/ ˝ H˛ .b f D0
in H1 .@0;% C 1/ ˝ H˛ .b @ C e/
follows. Remark 3.1.60. From the estimate p j .t / .s/j˛ jt sj sup¹exp.%x/j x 2 ¹t; sºº j@0 j;.0;˛/ we see that, even for % D 0, we obtain the continuity estimate p j .t / .s/j˛ jt sj j@0 j0;.0;˛/ for all 2 H1 .@0 C 1/ ˝ H˛ .b @ C e/. It is common practice to identify u 2 H1 .@0;% C1/˝H˛ .b @Ce/ with the continuous mapping t 7! u.t / and so to simply write u.t / for . u/.t /, t 2 R. In the evolutionary case, we know that u D 0 on R0 be such that P .@0;% C %; b @/ is invertible for all % %0 . Then @/1 ŒHk .@0;% C 1/ ˝ H1 .b @ C e/ HpCk .@0;% C 1/ ˝ H1 .b @ C e/ P .@0;% C %; b
150
Chapter 3 Partial Differential Equations with Constant Coefficients
for all k 2 Z. Moreover, for every k 2 Z there is a ˇ 2 Zn such that jP .@0;% C %; b @/1 f j;.pCk;˛/ C jf j;.k;˛Cˇ / for some constant C 2 R>0 and all ˛ 2 Zn , f 2 Hk .@0;% C 1/ ˝ Hˇ .b @ C e/. @/u D f . Then Proof. Let P .@0 ; b p p P .@0 ; b @/ u D @0 @0 f
and so p @0
.u
p @0
f/C
p1 X
j aj .b @/ @0 u D 0
j D0
or p
p
@0 .u @0 f / C
p1 X
j p aj .b @/ @0 .u @0 f / D
j D0
p1 X
j p aj .b @/ @0 @0 f
j D0
D
p X
aps .b @/ @s 0 f
sD1
D
@1 0
p1 X
j apj 1 .b @/ @0 f:
j D0
Defining recursively f0 WD f
and
fkC1 WD
p1 X
j apj 1 .b @/ @0 fk
j D0
for k 2 N, we first get f1 2 Hk .@0;% C 1/ ˝ H1 .b @ C e/ and p @/ .u @0 f0 / D @1 P .@0 ; b 0 f1 :
By repeating this construction we confirm by a simple induction that fj 2 Hk .@0;% C @ C e/ for all j 2 N and 1/ ˝ H1 .b N 1 X j p P .@0 ; b @/ u @0 @0 fj D @N fN 0 j D0 p
for all N 2 N. Indeed, the case N D 0 is clear and with uN WD u@0 assuming P .@0 ; b @/ uN D @N fN ; 0
PN 1 j D0
j
@0 fj ,
we get p1 X j p N 1 b P .@0 ; @/ .uN @0 @0 fN / D @0 apj 1 .b @/ @0 @N fN 0 j D0 .N C1/
D @0
p1 X
j apj 1 .b @/ @0 fN
j D0 .N C1/
D @0
fN C1
and p
fN uN C1 D uN @0 @N 0 Du
p @0
N 1 X
j
p
@0 fj @0 @N fN 0
j D0 p
D u @0
N X
j
@0 fj :
j D0
@ C e/. Then, since P .@0 ; b @/1 has a finite Now, let f 2 Hk .@0;% C 1/ ˝ H1 .b regularity loss, for N 2 N sufficiently large we get that u
p @0
N X
j @0 fj 2 HpCk .@0;% C 1/ ˝ H1 .b @ C e/
j D0
and so, since
PN
j j D0 @0
fj 2 Hk .@0;% C 1/ ˝ H1 .b @ C e/, also
u 2 HpCk .@0;% C 1/ ˝ H1 .b @ C e/: We also read off that by construction 1 @/1 f D P .@0;% C %; b @/1 @N fN C1 C @0 u D P .@0;% C %; b 0
p
N X
j
@0 fj
j D0
from which finally the desired continuity estimate follows (for N 2 N sufficiently large). In other words, we have shown that the time-regularity of a solution is always equal to time-degree p of the polynomial plus the time-regularity of the right-hand side f .
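As a concrete reading of this statement (our example): for the heat expression $P(\partial_0,\hat\partial) = \partial_0 - \hat\partial^2$ we have $p = 1$, so
$$f \in H_k(\partial_{0,\varrho} + 1) \otimes H_{-\infty}(\hat\partial + e) \;\Longrightarrow\; u = P(\partial_{0,\varrho} + \varrho, \hat\partial)^{-1} f \in H_{k+1}(\partial_{0,\varrho} + 1) \otimes H_{-\infty}(\hat\partial + e),$$
i.e. exactly one time derivative is gained, while any spatial regularity loss is absorbed into the $H_{-\infty}(\hat\partial + e)$ component.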
Pp1 Theorem 3.1.62. Let P .@0;% C %; b @/ D .@0;% C %/p C j D0 aj .b @/ .@0;% C %/j be an evolutionary partial differential expression and let %0 2 R>0 be such that @/ is invertible for all % %0 . Furthermore, let u 2 H1 .@0;% C1; b @Ce/ P .@0;% C%; b be the solution of the corresponding abstract initial value problem for a source term f and initial data .u0;k /kD0;:::;p1 , % %0 . Then, .@k0 u/.0C/ D u0;k
for k D 0; : : : ; p 1:
Proof. We have 1 p jX X
@/ u D f C P .@0 ; b
j k1
@0
ı ˝ .aj .b @/ u0;k /:
j D0 kD0
According to (3.1.55) we also have pj p1 p X mk X X1 mk b R>0 ˝ u0;k D f aj .b @/ ˝ u0;kCj : P .@0 ; @/ u kŠ kŠ R>0 j D0
kD0
kD0
Since the right-hand side is in H0 .@0;% C 1/ ˝ H1 .b @ C e/, we find from Lemma 3.1.61 that u
p1 X kD0
mk ˝ u0;k 2 Hp .@0;% C 1/ ˝ H1 .b @ C e/: kŠ R>0
This yields, in particular, p1 X
j
j
j
p X
Uj WD @0 u @0
kD0
D @0 u
kDj
jX 1
mkj ˝ u0;k .k j /Š R>0
j k1
@0
mk ˝ u0;k kŠ R>0 (3.1.56)
ı ˝ u0;k 2 H1 .@0;% C 1/ ˝ H1 .b @ C e/
kD0
for j D 0; : : : ; p 1. We have Uj .0/ D 0
for j D 0; : : : ; p 1;
(3.1.57)
since causality yields that Uj D 0 on R0
Since by assumption p 2 N, this shows that t 7! w.t / must be continuous (indeed Hölder continuous with exponent 12 ). Observing that h1 . .t C h; x/ .t; x// .@0 /.t; x/ Z 1 h D ..@0 /.t C s; x/ .@0 /.t; x// ds h 0 Z Z 1 h tCs 2 D .@0 /.r; x/ dr ds; h 0 t for all 2 CV 1 .RnC1 /, h 2 R, we get, similar to the proof of Lemma 3.1.59, jh1 . .t C h; x/ .t; x// .@0 /.t; /j˛ ˇZ Z ˇ ˇ 1 ˇˇ h tCs 2 ˇ j.@
/.r; /j dr ds ˛ 0 ˇ ˇ jhj 0 t ˇZ Z ˇ1=2 ˇ Z h Z tCs ˇ1=2 ˇ ˇ ˇ 1 ˇˇ h tCs 2 2 ˇ ˇ ˇ exp.2% r/ dr ds j.@
/.r; /j exp.2% r/ dr ds 0 ˛ ˇ ˇ ˇ jhj ˇ 0 t 0 t ˇ1=2 ˇZ h ˇ exp.2% s/ 1 ˇˇ 1 p ds ˇ j@20 j.%;0/;.0;˛/ exp.%t / ˇˇ 2% jhj 0 ˇ ˇ p ˇ exp.2% h/ 1 2%h ˇ1=2 2 ˇ ˇ j@ j.%;0/;.0;˛/ jhj exp.%t / ˇ 0 ˇ .2%h/2 r jhj exp.%t / exp.%jhj/ j@20 j.%;0/;.0;˛/ 2 r jhj exp.%t / exp.%jhj/ .1 C %/2 j j.%;0/;.2;˛/ : 2
Replacing $\psi$ by $\partial_0^{\,j-1}\psi$ and taking limits we obtain
jh1 .@0 .t C h; / @0 .t; // .@0 /.t; /j˛ r jhj j 1 exp.%t / exp.%jhj/ .1 C %/2 j@0 j.%;0/;.2;˛/ 2 r jhj exp.%t / exp.%jhj/ .1 C %/j C1 j j.%;0/;.j C1;˛/ 2 for all 2 Hp .@0;% C 1/ ˝ H˛ .b @ C e/, j D 1; : : : ; p 1, ˛ 2 ZnC1 . This proves that every 2 Hp .@0;% C 1/ ˝ H˛ .b @ C e/ is .p 1/-times classically differentiable and the classical derivatives at a point t coincide with the point-wise in time evaluation of the generalized derivative at t . We already know that these derivatives are continuous (indeed Hölder continuous with exponent 12 ). Having seen that w is .p 1/-times continuously differentiable and since
wjR>0 D ujR>0
p1 X kD0
.mjR>0 /k ˝ u0;k ; kŠ
we get the same differentiability property for u on R>0 . The regularity mechanism shown in the last result is actually a consequence of a more general situation described in the following lemma. @ C e/. Then Lemma 3.1.64. Let u 2 R>0 .m0 / Hp .@0;% C 1/ ˝ H1 .b
u
p1 X kD0
mk ˝ .@k0 u/.0C/ 2 Hp .@0;% C 1/ ˝ H1 .b @ C e/: kŠ R>0
Proof. Let w D u H1 .b @ C e/ and
Pp
mk kD0 kŠ
R>0 ˝ .@k0 u/.0C/. Then w 2 H0 .@0;% C 1/ ˝
p
h@0 wj i.%;0/;0 D h wj.@0;% C %/p i.%;0/;0 D h uj.@0;% C %/p i.%;0/;0
p1 XZ kD0
R>0
tk h u0;k j..@0;% C %/p /.t /i0 exp.2%t / dt: kŠ
Now, by assumption u D R>0 .m0 / U with U 2 Hp .@0;% C 1/ ˝ H1 .b @ C e/ and so p
h@0 wj i.%;0/;0 D h R>0 .m0 / U j.@0;% C %/p i.%;0/;0 Z D hU.t /j..@0;% C %/p /.t /i0 exp.2%t / dt R>0
Z D
R>0
hU.t /j..@0 /p exp.2%m0 / /.t /i0 dt
Z D
R>0
h@0 U.t /j..@0 /p1 exp.2%m0 / /.t /i0 dt
C hU.0/j..@0 /p1 exp.2%m0 / /.0/i0 : Repeating this integration by parts argument we obtain by induction that p
p
h @0 uj i.%;0/;0 D hR>0 .m0 / @0 U j i.%;0/;0 C
p1 X
h.@k0 U /.0/j..@0 /pk1 exp.2%m0 / /.0/i0
kD0 p
D hR>0 .m0 / @0 U j i.%;0/;0 C
p1 X
hı ˝ .@k0 U /.0/j.@0 /pk1 exp.2%m0 / i0;0
kD0 p
D hR>0 .m0 / @0 U j i.%;0/;0 C
D p1 X
pk1
@0
ˇ E ˇ ı ˝ .@k0 U /.0/ˇ
kD0
.%;0/;0
:
Comparison with the above yields p
p
h@0 wj i.%;0/;0 D hR>0 .m0 / @0 U j i.%;0/;0 : This can be estimated by p
p
jh@0 wj i.%;0/;0 j j@0 U j.%;0/;˛ j j.%;0/;˛ p for some sufficiently large ˛ 2 N nC1 , proving that @0 w 2 H0 .@0;% C1/˝H˛ .b @Ce/. b Since @0 D @0;% C % is a bijection between HkC1 .@0;% C 1/ ˝ Hˇ .@ C e/ and @ C e/ for all k 2 Z; ˇ 2 ZnC1 , we get Hk .@0;% C 1/ ˝ Hˇ .b
w 2 Hp .@0;% C 1/ ˝ H˛ .b @ C e/:
These findings suggest that we formulate the classical initial value problem in the Sobolev lattice space H1 .@.%;0/ C e/, with % 2 R>0 sufficiently large, and seek to find u 2 R>0 .m0 / ŒHp .@0;% C 1/ ˝ H1 .b @ C e/ such that P .@0 ; b @/ u D f 2 R>0 .m0 / ŒH0 .@0;% C 1/ ˝ H˛ .b @ C e/ and
j .@0 u/.0C/ D u0;j 2 H1 .b @ C e/
on R>0
for j D 0; : : : ; p 1:
In the above we have just proved existence of a solution to this problem.
3.1.12 Some Applications to Linear Partial Differential Equations of Mathematical Physics

To illustrate the framework developed so far, we shall now consider a number of examples and explicitly calculate fundamental solutions as elements of the Sobolev lattice associated with $\partial_\nu + e$. The technical ideas are of course classical and well known, see e.g. [48].

3.1.12.1 Transport Equation
Possibly the simplest evolutionary partial differential expression is featured by the transport equation .@0 C s b @/ u D f; 1 s with where s 2 Rn n ¹0º, which describes transport of a substance in direction jsj scalar velocity jsj. The term s b @ is frequently referred to as a drift or convection
term. Considering the polynomial expression ji C % C i s j2 D . C s /2 C %2 , % 2 Rn¹0º, we see that .@0 Cs b @/ is reversibly evolutionary (in the forward, % > 0, or backward, % < 0, time direction). As our earlier considerations suggest the (forward) fundamental solution s;C can be found by calculating the inverse Fourier–Laplace transform 1 1 s;C D L .%;0/ i m0 C % C i s m .2/.nC1/=2 which can be done by applying Cauchy’s integral formula. By definition we have (with D .%; 0; : : : ; 0/, % 2 R>0 ,) for 2 CV 1 .RnC1 / ˇ ˇ 1 1 ˇ b hs;C j i;0 D
. i / .2/.nC1/=2 i m0 C % C i s m ˇ 0;0 Z Z 1 1 b
. i %; / d d (3.1.58) D .2/.nC1/=2 R Rn i C % i s Z R Z 1 1 b D
. i %; / d d : lim .2/.nC1/=2 R!1 R Rn i C % i s
Assuming as a matter of convenience that $\varphi = \varphi_0 \otimes \psi$
R
R
Z
D Z D and then Z
Z Rn
Z
Rn
Rn
Z
, we evaluate further
1 b
. i %; / d d i C % i s R R R R
1 b
0 . i %/ b./ d d i C % i s 1 b
0 . i %/ d b./ d ; i C % i s
R
1 b
0 . i %/ d R i C % i s Z Z R 1 1 exp.i t / exp.%t / 0 .t / dt d Dp 2 R i C % i s R Z Z R 1 1 Dp exp.i t / exp.%t / 0 .t / d dt: 2 R R i C % i s
Now, Cauchy’s integral formula yields Z 1 exp.i t / d D 2 exp. t %/ exp.i t s / R; i C % i s where R; is the closed curve connecting consecutively the points R, R i R, R i R, R; R in the complex plane by straight lines. Taking R;C as the closed curve connecting consecutively the points R; R; R C i R; R C i R; R in the complex plane by straight lines, since the pole i % s is outside of the contour R;C , we get Z 1 exp.i t / d D 0: R;C i C % i s These observations yield Z
R
1 exp.i t / d i C % is R Z R 1 exp.i.R C i u/t / du Ci R C i u C i% C s 0 Z R 1 exp.i .u C i R/ t / du i u C i R C i% C s R Z R 1 exp.i.R C i u/t / du D 0 i R C i u C i% C s 0
(3.1.59)
and Z i
R
1 exp.i t / d C i% C s
R
Z
Ci Z Ci Z i
R
1 exp.i.R i u/t / du R i u C i % C s
R
1 exp.i .u i R/ t / du u iR C i% C s
0
R R 0
(3.1.60)
1 exp.i.R i u/t / du R iu C i% C s
D 2 exp. t %/ exp.i t s /: For t > 0 we see from (3.1.60) that Z
R
1 exp.i t / d ! 2 exp. t %/ exp.i t s / C i% C s
i R
as R ! 1. For t < 0 we likewise see from (3.1.59) that Z
R
1 exp.i t / d ! 0 C i% C s
R
as R ! 1. The latter confirms our knowledge that s;C D 0 on R0 .t / exp.2 t %/ exp.i t s / 0 .t / dt ! .2/1=2
i
R
as R ! 1. Substituting this into (3.1.58) we get Z Z n=2 hs;C j i;0 D .2/ exp.2 t %/ exp.i t s / 0 .t / dt b./ d Z D .2/n=2 Z D .2/n=2 and noting that 1 .2/n=2
Rn
Rn
R>0
Z Rn
R>0
Z
R>0
Z
Rn
exp.i t s / 0 .t / b./ exp.2 t %/ dt d exp.i t s / b./ d 0 .t / exp.2 t %/ dt;
exp.i t s / b./ d D
.t s/
we get Z hs;C j i;0 D
R>0
0 .t / .t s/ exp.2 t %/ dt:
(3.1.61)
Let M be a d -dimensional submanifold in RmC1 and define the linear functional Z ıM . / WD
.x/ d Vx (3.1.62) x2M
for all 2 CV 1 .RmC1 /, where d V denotes the d -dimensional volume element30 on M . We shall call ıM a ı-layer31 on M . Using the ı-layer on the line t 7! .t; t s/, i.e. on ŒR>0 .1; s/ as a 1-dimensional submanifold in Rn , we see that Z 1
.t; x/ exp.2 t %/ d V.t;x/ hs;C j i;0 D p 1 C s 2 .t;x/2ŒR>0 .1;s/ 1 D p ıŒR>0 .1;s/ .exp.2 m0 %/ /: 1 C s2 Writing the latter as usual in the form ˇ ˇ 1 1 ıŒR>0 .1;s/ .exp.2 m0 %/ / D p ıŒR>0 .1;s/ ˇˇ ; p 2 2 1Cs 1Cs ;0 we get s;C D p
1 1 C s2
ıŒR>0 .1;s/ :
As a by-product we have that @ C e/: ıŒR>0 .1;s/ 2 H1 .@0;% C 1; b From the general theory we know that p
1 1 C s2
ıŒR>0 .1;s/
30 In the case d D 1, we shall also write ds for d V and speak of a line element. If d D 2 or x x d D m then M is a surface or hypersurface, respectively, and we also write dSx for d Vx and call dSx the surface element at x. 31 Even the case d D 0 can be incorporated by noting that a 0-dimensional manifold is composed of isolated points and for an isolated point set ¹aº we let
ı¹aº WD a ı: In this sense we have ı¹0º D ı:
satisfies the initial value problem with initial data ı. In particular, we have then 1 ıŒR>0 .1;s/ .0C/ D ı p 1 C s2
in H1 .b @ C e/:
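Numerically, the statement that the associated solution operator is a pure shift can be checked directly (our sketch in one space dimension; grid and initial datum are arbitrary choices): $u(t,x) = u_0(x - t s)$ solves $\partial_0 u + s\,\partial_1 u = 0$.

```python
# Sketch: u(t, x) = u0(x - t*s) solves the transport equation d_t u + s * d_x u = 0.
import numpy as np

s = 2.0
t = np.linspace(0.0, 1.0, 401)
x = np.linspace(-5.0, 5.0, 801)
T, X = np.meshgrid(t, x, indexing="ij")

u0 = lambda y: np.exp(-y ** 2)
u = u0(X - s * T)

residual = np.gradient(u, t, axis=0) + s * np.gradient(u, x, axis=1)
print("max |d_t u + s d_x u| =", np.max(np.abs(residual)))   # ~ discretization error
```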
An alternative description of s;C can be given by observing that (3.1.61) can be understood as Z hs;C j i;0 D
.t; u C t s/ exp.2 t %/ d V.t;u/ Z
.t;u/2R>0 ¹0º
Z
t2R>0
D D
t2R>0
hıj t s .t; /i0 exp.2 t %/ dt ht s ıj .t; /i0 exp.2 t %/ dt:
In other words, s;C .t C/ D R0 .t / t s ı D R0 .t / ı¹t sº D R0 .t / ı. t s/; where the last expression is a common abuse of notation motivated by formally applying the rules of substitution for integrals.

3.1.12.2 Acoustics
The propagation of acoustic waves in a homogeneous isotropic medium is, with suitable scaling, modelled approximately by the linearized equations of acoustics given in block matrix form by a .4 4/-partial differential equation !! v 0 b @ @0 C D f; > b p @ 0 where
0
1 @1 b @ WD @ @2 A D grad @3
and b @> D .@1 @2 @3 / D div
and v denotes the velocity field and p the pressure distribution in R1C3 . For 0 1 @ 0 0 0 @1 ! ! B 0 @ 0 0 @2 C @ @0 b 0 b @ C B D P .@0 ; b @/ D @0 C D @ 0 0 @0 @3 A b b @> 0 @ > @0 @1 @2 @3 @0
we find
det P .@0 ; b @/ D .@20 b @2 / @20 :
Recall that P .@0 ; b @/ is reversibly evolutionary in the (forward or backward) time direction. In the general a case of a homogeneous medium, there is an additional (con0 stant) block matrix 0 c describing the acoustic material properties. Here a is a positive definite, symmetric .3 3/-matrix and c a positive constant. The acoustic field is then governed by the equation ! 1 a0 v b D f; (3.1.63) A0 .@/ @0 0c p @ where A0 .b @/ WD 0> b . b @ 0 These equations are closely related to the symmetric hyperbolic system
a0 0c
@0 C
3 X 0 ei @i ei> 0 iD1
where ek , k D 1; 2; 3, are the canonical basis vectors of R3 , which is therefore also reversibly evolutionary. The system decomposes into Newton’s law @0 v C a1 b @p D @> v D f2 . Applying the f1 and the conservation of mass equation @0 p C c 1 b substitution R given by .t; x/ 7! .c 1=2 t; a1=2 x/ p we get with the associated isometric re-scaling operation R (R u WD det.R/uıR), R @0 D @0 c 1=2 R ;
R b @> D b @> a1=2 R
and so @0 C
R
a0 0c
1
D @0 c D
1=2
C
a1=2 0 0 c 1=2
a0 0c
1
0 b @ > b @ 0 1
and
R b @ D a1=2 b @ R
!!
!! 0 a1=2 b @ R b @> a1=2 0 ! 1=2 1=2 0 b @ a 0 c c 1=2 a1=2 @0 C > b 0 1 0 @ 0
Thus, from the equation (3.1.63) above, we obtain !! 1=2 a 0 b @ 0 c 1=2 a1=2 R v D R f: @0 C b R p 0 c 1=2 @> 0
0 1
! R :
This shows that without loss of generality we may restrict our attention to the isotropic case, i.e. a D 1.33/ , c D 1. The fundamental solution is given by !! ! 0 b @ 0 b @ GC WD @0 @1 0 g1;C 1.44/ C R>0 ˝ ı 1.44/ ; b b @> 0 @> 0 where g1;C is the fundamental solution for the scalar partial differential expression @20 b @2 of the so-called wave equation. This becomes apparent from considering the minimal polynomial 7! .@0 / ..@0 /2 b @2 /: @ , @/ WD 0> b It follows that, with A0 .b b @ 0 @//1 .@0 A0 .b 2 b2 1 D .3 @20 3 .@0 A0 .b @// @0 C .@0 A0 .b @//2 b @2 / @1 0 .@0 @ / 2 b2 1 D .@20 C 3 A0 .b @/ @0 2 A0 .b @/ @0 C A0 .b @/2 b @2 / @1 0 .@0 @ / 2 b2 1 D .@20 C A0 .b @/ @0 C A0 .b @/2 b @2 / @1 0 .@0 @ / :
We can re-write this as 2 b2 1 @//1 D .@20 b @2 C A0 .b @/.@0 C A0 .b @/// @1 .@0 A0 .b 0 .@0 @ / 2 b2 1 D .@0 C A0 .b @// A0 .b @/ @1 C @1 0 .@0 @ / 0 :
(3.1.64)
In order to determine the fundamental solution of the partial differential equations of acoustics it therefore remains to calculate the fundamental solution g1;C of the wave equation, which can again be done routinely by applying residue calculus of complex analysis. As in the previous case we find .2/2 hg1;C j i;0 ˇ ˇ 1 ˇb D
. i / .i m0 C %/2 C m2 ˇ 0;0 Z Z 1 b D
. i %; / d d 2 2 (3.1.65) R R3 .i C %/ C Z Z 1 b D lim lim
. i %; / d d "!0C R!1 2ŒR;R 2R3 nB.0;"/ 2 i jj .i C%Ci jj/ Z Z 1 b
. i %; / d d : C lim lim "!0C R!1 2ŒR;R 2R3 nB.0;"/ 2 i jj .i C%i jj/
Section 3.1 Partial Differential Equations in H1 .@ C e/ With D 0 ˝ we evaluate further Z Z
2ŒR;R
Z
D
2R3 nB.0;"/
Z
2R3 nB.0;"/
Z D
Z
2R3 nB.0;"/
1 b
. i %; / d d 2 jj . C i % ˙ jj/
2ŒR;R
1 b
0 . i %/ b./ d d 2 jj . C i % ˙ jj/
2ŒR;R
1 b
0 . i %/ d b./ d ; 2 jj . C i % ˙ jj/
where Z
1 b
0 . i %/ d
2ŒR;R C i % ˙ jj Z Z 1 1 exp.i t / exp.%t / 0 .t / dt d Dp 2 2ŒR;R C i % ˙ jj R Z Z 1 1 exp.i t / exp.%t / 0 .t / d dt: Dp C i % ˙ jj 2 R 2ŒR;R
Now, residue calculus yields Z 1 exp.i t / d D 2i exp. t %/ exp.˙i t jj/ R; C i % ˙ jj Z
and
R;C
1 exp.i t / d D 0; C i % ˙ jj
where R;˙ are the contours in the previous example. For t > 0 we get Z
2ŒR;R
1 exp.i t / d ! 2i exp. t %/ exp.˙i t jj/ C i % ˙ jj
and for t < 0 we have Z
2ŒR;R
1 exp.i t / d ! 0 C i % ˙ jj
as R ! 1. Thus, by an exchange of limits we have now Z 1 b
. i %/ d 2 jj2 0 . C i %/
2ŒR;R p Z 2 R>0 .t / exp.2 t %/ sin.t jj/ 0 .t / dt ! jj R
as R ! 1. Substituting this into (3.1.65) yields hg1;C j i;0 1 D .2/3=2 D
Z lim
"!0C
1 .2/3=2 lim
2R3 nB.0;"/
"!0C R!1
R>0
2B.0;R/nB.0;"/
Z lim
exp.2 t %/
sin. t jj/
0 .t / dt b./ d jj
Z
lim
1 .2/3=2 lim
R>0
Z
"!0C R!1
D
Z
Z R>0
2B.0;R/nB.0;"/
exp.2 t %/
sin. t jj/
0 .t / dt b./ d jj
sin. t jj/ b ./ d exp.2 t %/ 0 .t / dt: jj
Using Z 2B.0;R/nB.0;"/
Z
D 4
r2Œ";R
sin. t jj/ exp.i x/ d jj
sin.t r/ sin.r jxj/ dr; jxj
(3.1.66)
we get with an interchange of integration and introducing polar coordinates hg1;C j i;0 1 D2 lim lim .2/2 "!0C R!1
1 D2 lim lim .2/2 "!0C R!1
Z
Z R>0
Z
Z
R3
Z R>0
R>0
sin.t r/ sin.r jxj/ dr .x/ dx jxj r2Œ";R exp.2 t %/ 0 .t / dt Z sin.t r/ sin.r u/ dr .u/ u du r2Œ";R
exp.2 t %/ 0 .t / dt; R where .u/ WD 2S 2 .u / dS is the integral of 7! .u / over the 2-dimensional unit sphere S 2 , u 2 R>0 . Using
.u/ D .u/
for $u \in \mathbb{R}$ we obtain
Z
D
R>0
sin.t r/ sin.r u/ dr .u/ u du
r2Œ";R
Z
r2Œ";R
.cos..t u/ r/ cos..t C u/ r// dr .u/ u du
sin..t u/ R/ sin..t C u/ R/
.u/ u du D .t u/ .t C u/ R>0 Z sin..t u/ "/ sin..t C u/ "/
.u/ u du .t u/ .t C u/ R>0 Z Z sin..t u/ R/ sin..t u/ "/ D
.u/ u du
.u/ u du: .t u/ .t u/ R R Z
Employing the fact that
Z lim
R!1
R
Z
and lim
R!0C
R
sin.u R/ '.u/ du D '.0/ u
sin. u R/ '.u/ du D 0 u
for all ' 2 CV 1 .R/, we get (recall D .%; 0/ 2 R>0 R3 ) Z 1
.t / t exp.2 t %/ 0 .t / dt hg1;C j i;0 D .2/2 R>0 Z Z 1 D .t / dS t exp.2 t %/ 0 .t / dt 4 R>0 2S 2 Z Z 1 1 .x/ dSx exp.2 t %/ 0 .t / dt D 4 R>0 t x2t ŒS 2 Z 1 1 exp.2 jxj %/ .jxj; x/ dx D 4 x2R3 jxj 1 1 D
p ıŒR>0 .¹1ºS 2 / exp.2 m0 %/ m0 4 2 ˇ ˇ 1 1 D ; p ıŒR>0 .¹1ºS 2 / ˇˇ m0 ;0 4 2 for 0 2 CV 1 .R n ¹0º/. Consequently, we have g1;C .t / D
1 1 p ıŒR>0 .¹1ºS 2 / .t; / for t 2 R n ¹0º: 4 2 t
(3.1.67)
To complete the proof we need to show (3.1.66) and (3.1.67). To see (3.1.66) we first realize that the substitution 7! A, where A is a rotation, does not change the value of the integral Z sin. t jj/ exp.i x/ d jj 2B.0;R/nB.0;"/ Z sin. t jA j/ D exp.i A Ax/ d jA j 2B.0;R/nB.0;"/ Z sin. t jj/ exp.i Ax/ d : D jj 2B.0;R/nB.0;"/ Therefore, we may choose without loss of generality x D .0; 0; jxj/ and we get Z sin. t jj/ exp.i x/ d jj 2B.0;R/nB.0;"/ Z Z D 2 sin. t r/ r exp.i r cos # jxj/ sin # dr d # #2Œ0;
r2Œ";R
Z D 2
s2Œ1;C1
Z D 2
r2Œ";R
Z D 4
Z
r2Œ";R
Z
r2Œ";R
s2Œ1;C1
sin. t r/ exp.i r s jxj/ r dr ds sin. t r/ exp.i r s jxj/ ds r dr
sin. t r/ sin. r jxj/ dr: jxj
It remains to show (3.1.67). But since ' has compact support, we have, for some W 2 R>0 sufficiently large, Z Z sin. u R/ sin. u R/ '.u/ du D '.u/ du u u R ŒW;W Z '.u/ '.0/ D sin. u R/ du u ŒW;W Z sin. u R/ du: C '.0/ u ŒW;W Since u 7! ŒW;W .u/ '.u/'.0/ is in L1 .R/, i.e. Lebesgue integrable over R, the u Riemann–Lebesgue lemma shows that Z '.u/ '.0/ du ! 0 as R ! 1: sin. u R/ u ŒW;W The desired result now follows since Z Z sin.u R/ sin.v/ du D dv u v ŒW;W ŒW R;W R
converges to the well-known improper integral Z sin.u/ du D u R as R ! 1. This concludes the construction of the fundamental solution of the wave equation. The cone ŒR>0 .¹1º S 2 / D ¹.t; x/ 2 R1C3 j jxj D t ^ t 2 R>0 º is called the (forward) light cone and its closure is the support of the fundamental solution. Comparing with the abstract initial value problem .@20 b @2 /u D @0 ı ˝ u0;0 C ı ˝ u0;1 with initial data .u0;0 ; u0;1 / D .0; ı/, the formula for the fundamental solution yields g1;C .0C/ D 0; so that t 7! g1;C .t C/ is continuous as a mapping into H1 .b @ C e/ and @0 g1;C .0C/ D ı:
(3.1.68)
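As a consistency check of (3.1.68) (our computation, with $M_\psi$ our notation for the spherical mean): pairing $g_{1,+}(t)$, $t > 0$, with a test function $\psi$ gives
$$\langle g_{1,+}(t) \mid \psi\rangle_0 = \frac{1}{4\pi t}\int_{t\,[S^2]} \psi(y)\, dS_y = t\, M_\psi(t), \qquad M_\psi(t) := \frac{1}{4\pi}\int_{S^2} \psi(t\theta)\, dS_\theta,$$
so $\langle g_{1,+}(t)\mid\psi\rangle_0 \to 0$ as $t \to 0+$ (matching $g_{1,+}(0+) = 0$), while $\partial_t\big(t\,M_\psi(t)\big) = M_\psi(t) + t\,M_\psi'(t) \to \psi(0)$ as $t \to 0+$ (matching $\partial_0 g_{1,+}(0+) = \delta$).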
To consider this more closely we have to calculate @0 g1;C : h@0 g1;C j i;0 D hg1;C j .@0 C 2 %/ i;0 Z Z 1 D .t / dS t .@0 .exp.2 m0 %/ 0 /.t / dt 4 R>0 2S 2 Z Z 1 .t / dS exp.2 t %/ 0 .t / dt D 4 R>0 2S 2 Z Z 1 .t @/ .t / dS exp.2 t %/ 0 .t / dt; C 4 R>0 2S 2 for D 0 ˝
. Thus, we obtain ˇ ˇ 1 h@0 g1;C j i;0 D ıŒR>0 .¹1ºS 2 / ˇˇ p 2 4 2 t ;0 ˇ ˇ 1 b ˇ C ıŒR>0 .¹1ºS 2 / ˇ.b m @/ p 4 2 t 2 ;0
for 0 2 CV 1 .R n ¹0º/. Consequently, we have @0 g1;C .t / D
4
1 p
2 t2
ıR>0 ¹1ºS 2 .t; /
1 p
4 2 t 2
.b @m b/ ıR>0 ¹1ºS 2 .t; /
for t 2 R n ¹0º. In particular, we have with (3.1.68) 1 1 ıR>0 ¹1ºS 2 .t; / p .b @m b/ ıR>0 ¹1ºS 2 .t; / ! ı p 2 4 2 t 4 2 t 2 Since we know that t 7!
1p 1 ı 2 .t; / 4 2 t ŒR>0 .¹1ºS /
as t ! 0 C :
is continuous on R n ¹0º in the
sense of continuity in H1 .b @ C e/ and reading off that ˇ Z ˇ 1 1 1 1 ˇ D .t / .x/ dSx p ıŒR>0 .¹1ºS 2 / .t; /ˇ 4 R>0 t x2t ŒS 2 4 2 t 0 we also have R>0 .t / 1 1 ıŒR>0 .¹1ºS 2 / .t; / D ı t ŒS 2 p 4 t 4 2 t for t 2 Rn¹0º. For the fundamental solution, we also have an alternative formulation. We first observe that, by the rules of substitution for integrals, we have for bounded, measurable w, ' 2 CV 1 .R/ and 2 CV 1 .R1C3 / 2hw.m0 / '.m20 m2 /j i;0 Z Z Z D2 '.t 2 r 2 / .t; r x0 / r 2 dx0 dr w.t / exp.2% t / dt R
Z D
Z
Z
1;t 2
R
Z D
S2
Œ0;1Œ
Z
S2
Z
'.u/ R>0
Z
C
R
Z
'.u/ R0 this is Z Z 1 2 2 hR>0 .m0 / ı.m0 m /j i;0 D
.t; t x0 / t dx0 exp.2% t / dt 2 Œ0;1Œ S 2 (3.1.69) and so we obtain 1 g1;C .t; / D .t / ı.t 2 . /2 / 2 R>0 1 .t / ı t ŒS 2 D 4 t R>0 1 D p ıR ¹1ºS 2 .t; / for t 2 R; t ¤ 0: 4 t 2 >0
Remark 3.1.65. In this expression for g1;C .t /, restricting our attention to t ¤ 0 is not important since the behavior at t D 0 is known explicitly. In general one may define (in accordance with the rules of substitution for functions32 ) the functional representation ı ı ‰, at least on an open set U , by hı ı ‰j i0;0 Z WD
x2N.‰/
.x/ dSx p jdet.d0 ‰.x//j det.1 C d1 ‰.x/ .d0 ‰.x/ d0 ‰.x/ /1 d1 ‰.x//
for all 2 CV 1 .U /, where ‰ W U RnC1 ! RmC1 , n; m 2 N, m < n, is continuously differentiable on an open set U with det.d0 ‰.x0 ; x1 // ¤ 0, x D .x0 ; x1 / 2 U with x0 2 RmC1 and x1 2 Rnm , in a neighborhood of N.‰/ D ¹x 2 U RnC1 j ‰.x/ D 0º. For m D 0 this simplifies to a line integral Z hı ı ‰j i0;0 WD
.x/ x2N.S/
1 dsx : jgrad ‰.x/j
p With ‰.t; x/ D t 2 x 2 we get jgrad ‰.t; x/j D 2 t 2 C x 2 (n D 3; m D 0) and so the representation (3.1.69) is recovered as a particular case on N.‰/ D ŒR>0 .¹1º S 2 / with U D R4 n ¹0º. The particular shape of the standard reversibly evolutionary partial differential equations of mathematical physics shared by the system of linear acoustics also suggests an alternative approach reducing the system size to just a scalar partial differential equation. We note that @20 A0 .b @/2 D .@0 C A0 .b @// .@0 A0 .b @// has block diagonal form and therefore decouples v and p. Thus we may focus our attention on just one part, the other (block) unknown being easily reconstructed from the original first order equation. More specifically, we may focus on the scalar equation .@20 b @/ p D .@20 b @> b @ 2 / p D @ 0 f2 C b @ f1 which is just the wave equation. The velocity field can then easily be reconstructed since 1 b v D @1 0 @p C @0 f2 : 32
Here d0 and d1 denote differentiation with respect to the first and second block variable, i.e. d0 ‰.x0 ; x1 / WD d.y 7! ‰.y; x1 //.x0 /; d1 ‰.x0 ; x1 / WD d.y 7! ‰.x0 ; y//.x1 /:
3.1.12.3 Thermodynamics
According to Fourier, heat conduction obeys the linear partial differential equation P .@0 ; b @/ # D f with
P .@0 ; b @/ D @0 b @2 ;
the so-called heat equation33 . Here # denotes the scalar heat distribution function in R1C3 and f models heat sources (or sinks). Inserting an additional drift term s b @ for increased generality yields @0 C s b @ b @2 : The corresponding fundamental solution gs is given by gs D
1 1 : L.%;0/ 2 4 i m0 C i s m bCm b2 C %
Once again we give the explicit evaluation. As in the case of the transport equation, we find using residue calculus hgs j 0 ˝
i;0 Z
Z 1 exp. t jj2 / exp.i t s / 0 .t / b./ d exp.2%t / dt .2/3=2 R>0 R3 Z Z Z 1 D exp. t jj2 / 0 .t / exp.i .x t s/ / .x/ dx d .2/3 R>0 R3 R3 exp.2%t / dt: D
33
The equation results from considering the determinant of a system associated with ! 0 1 b q @ : D Q # b @> @0
where q denotes the so-called heat flux, Q models the influence of given heat sources. Other thermal parameters are assumed to be constants and eliminated by suitable scaling. The first row models Fourier’s law. Our standard solution procedure can be started as usual, noting that 7! . 1/. 2 .1 C @0 / C .@0 b @2 // is the minimal polynomial. Alternatives to Fourier’s law have been proposed leading to quite different systems. E.g. the Catta b neo model of heat conduction associated with 1C @0 @ or the Jeffreys type heat conduction model b @> @0 b b associated with 1C @0 @C @0 @ , ; 2 R>0 . The determinants are @20 . @20 C @0 b @2 / and
b @>
@0
@2 /, respectively, showing that the Cattaneo model is reversibly evolution@20 . @20 C @0 .1 C @0 /b ary, whereas the Jeffreys model is irreversibly evolutionary.
We observe that for t 2 R>0 Z 1 .2 t /3=4 exp. t jj2 / exp.i .x t s/ / d D .F .p2t //.x t s/: .2/3=2 R3 Since the Gauss distribution function is invariant under the spatial Fourier transform F D L0 , its interaction with rescaling yields hgs j 0 ˝ i;0 Z Z D .4 t /3=2 exp. jx t sj2 =.4t // 0 .t / .x/ dx exp.2%t / dt R>0
R3
for 0 2 CV 1 .R n ¹0º/. This proves that gs .t; x/ D R>0 .t / .4 t /3=2 exp.jx t sj2 =.4t //
for t 2 R; t ¤ 0; x 2 R3 :
In particular, we have here that gs is indeed given by a function. From our general results on initial value problems we also know the behavior at t D 0. We therefore have in particular .4 t /3=2 exp.j t sj2 =.4t // ! ı
as t ! 0C;
where the convergence is in H1 .b @ C e/. 3.1.12.4
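The limit behaviour and the equation itself are easy to confirm numerically (our sketch; we use the one-dimensional analogue of the kernel, and the test function is an arbitrary choice).

```python
# Sketch: the 1-d drift heat kernel k(t, x) = (4*pi*t)**-0.5 * exp(-(x - t*s)**2 / (4*t))
# satisfies d_t k + s*d_x k - d_x^2 k = 0 for t > 0 and k(t, .) -> delta as t -> 0+.
import numpy as np

s = 1.0
x = np.linspace(-20.0, 20.0, 4001)
k = lambda t: (4.0 * np.pi * t) ** -0.5 * np.exp(-(x - t * s) ** 2 / (4.0 * t))

t, dt = 1.0, 1e-4
residual = (k(t + dt) - k(t - dt)) / (2.0 * dt) + s * np.gradient(k(t), x) \
           - np.gradient(np.gradient(k(t), x), x)
print("PDE residual at t = 1:", np.max(np.abs(residual)))     # ~ discretization error

phi = lambda y: np.cos(y) * np.exp(-y ** 2 / 50.0)             # smooth test function, phi(0) = 1
for tt in (1.0, 0.1, 0.01):
    print("t =", tt, "  <k(t, .), phi> =", np.trapz(k(tt) * phi(x), x))   # -> phi(0) = 1
```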
Electrodynamics
The Maxwell System.
With 0
1 0 @3 @2 b @ WD @ @3 0 @1 A @2 @1 0 representing the operator curl, the differential operator of Maxwell’s equations J E b (3.1.70) M.@0 ; @/ D K H governing the propagation of electromagnetic fields .E; H / in a homogeneous medium occupying R3 can be written in (block) matrix form as a .66/-partial differential expression ! 1 b " 0 0 @ M.@0 ; b @/ WD @0 0 b @ 0 ! @0 "1b @ D ; 1 b @ @0
where "; are positive definite, symmetric .3 3/-matrices. Here the first block row is called Ampère’s law and the second block row is Faraday’s law. With A D det. /1=2 1=2 one finds formally with the associated isometric re-scaling34 operation A @ D 1=2 b @ 1=2 A : A b Transformation with A therefore leads to: ! ! @0 "1b @0 @ "1 1=2 b @ 1=2 A D A @ @0 @ 1=2 1=2b @0 1 b and so @ @0 1=2 "1 1=2 b b @ @0 34
!
1=2 A E 1=2 A H
D
1=2 0 0 1=2
A F:
Generally, for any symmetric real matrix A we have by a transformation of variables Ab @ D .A1b @/ A ;
where .A1b @/ is to be understood as a formal cross product operation. Moreover, one finds Ab @ D .A1b @/ A D
1 Ab @ A A : det.A/
Indeed, following the algebraic argument in [28] we first observe that for x; y; z 2 R3 we have by definition of the cross product x .y z/ D det..x y z// where .x y z/ is the block matrix notation with x; y; z represented as column vectors. Thus, we have x ..Ay/ .Az// D det..x Ay Az// D det..A A1 x Ay Az// D det.A .A1 x y z// D det.A/ det..A1 x y z// D det.A/ A1 x .y z/ D det.A/ x A1 .y z/ for all x; y; z 2 R3 and so, letting u D Ay, we obtain the matrix equality 1 A .u/ A D .A1 u/ det.A/ for every u 2 R3 . The latter implies, by a Fourier transform argument, the stated transformation formula.
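A numerical spot-check of the transformation formula derived in this footnote may be helpful; the sketch below is our addition and uses randomly chosen data. It tests the identity $(Ay)\times(Az) = \det(A)\,A^{-\top}(y\times z)$, which is the matrix form of the computation above.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
A = A + A.T                      # the footnote works with symmetric A; any invertible A does here
y, z = rng.normal(size=3), rng.normal(size=3)

lhs = np.cross(A @ y, A @ z)
rhs = np.linalg.det(A) * np.linalg.inv(A).T @ np.cross(y, z)
print(np.allclose(lhs, rhs))     # expected: True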
Since 1=2 "1 1=2 D j"1=2 1=2 j2 is also symmetric and positive definite, without loss of generality we may assume that D 1.33/ . In the so-called vacuum case we have – up to scaling – that " D D 1.33/ . In this case we find simply ! @0 b @ det D @20 .@20 b @2 /2 ; b @ @0
@0 b @
@2 are is reversibly evolutionary, since @0 and @20 b b @ @0 reversibly evolutionary. In the general case we have that ! !! 1 " 0 0 b @ @0 "1b @ " 0 D @0 0 0 @ @0 b @ 0 1 b
which re-confirms that
and, with ei , i D 1; 2; 3, the canonical basis vectors of R3 , ! 3 X " 0 " 0 0 ei 0 b @ @0 @0 D @i 0 0 ei 0 b @ 0 iD1
is still a symmetric hyperbolic system, which shows that M.@0 ; b @/ is reversibly evolutionary according to our general result for such systems. For the vacuum case, the fundamental solution can be determined in the following way. We first observe ! !2 ! 0 b @ 0 b @ 2 b D 0; 1.66/ @ b @ 0 b @ 0 which indicates that the minimal polynomial of M.@0 ; b @/ WD
@0 b @
b @
@0
is
@2 /: .@0 / ..@0 /2 b Noting that this is the same as in the acoustic case, replacing A.b @/ with M0 .b @/ WD M.0; @/, we find immediately that 2 b2 1 @/1 D .@20 b @2 C M0 .b @/.@0 C M0 .b @/// @1 M.@0 ; b 0 .@0 @ / 2 b2 1 D .@0 C M0 .b @// M0 .b @/ @1 C @1 0 .@0 @ / 0 1.66/ :
(3.1.71)
Thus, the fundamental solution is given by @/1 ı 1.66/ GC WD M.@0 ; b D .@0 C M0 .b @// M0 .b @/ @1 0 g1;C 1.66/ C R0 ˝ ı 1.66/ D M0 .b @/ g1;C 1.66/ C M0 .b @/2 @1 0 g1;C 1.66/ C R0 ˝ ı 1.66/ ;
where g1;C denotes the fundamental solution of the wave equation. Using M0 .b @/2 D .M0 .b @/2 b @2 / .@20 b @2 / C @20 , we also obtain GC D .@0 C M0 .b @// g1;C 1.66/ C .M0 .b @/2 b @2 / @1 0 g1;C 1.66/ : From the vector identity jj2 x D . x/ x, we have by inverse Fourier transform with respect to that b @2 1.33/ D b @b @> C b @ b @ and it follows that ! b grad div 0 @b @> 0 2 b2 b .M0 .@/ @ 1.66/ / D D ; 0 grad div 0 b @b @> where grad and div denote the generalized vector analytic operations J given by ap> b b plying @ and @ , respectively. Therefore, if the right-hand side K of (3.1.70) is E is simply divergence-free, i.e. satisfies div J D div K D 0, then the solution H given by E J g1;C J b D GC D .@0 C M0 .@// ; H K g1;C K where the convolution operator g1;C is to be understood component-wise. Remark 3.1.66. Given that the electromagnetic field .E; H / is real-valued, an alternative approach could have been taken. We may write F WD E C i H as a complexvalued representation of the electromagnetic field. Then Maxwell’s equations reduce to .@0 C i b @/ F D .@0 C i curl/ F D S WD J C i K; where curl stands for b @ and is just a generalization of the vector analytic operation with the same name. Here 7! .@0 / ..@0 /2 b @2 / is the characteristic polynomial and consequently, with M0 .b @/ replaced by i b @ in (3.1.71), we obtain 2 b2 1 .@0 C i b @/1 D .@0 i b @/ i b @ @1 C @1 0 .@0 @ / 0 2 b2 1 b b D i b @ .@20 b @2 /1 @1 C @1 0 @ @ .@0 @ / 0 :
Using b @2 1.33/ D b @b @> C b @ b @, we get bb> 2 b2 1 @/1 D .@0 i b @/ .@20 b @2 /1 @1 .@0 C i b 0 @ @ .@0 @ / : This leads to the corresponding (forward) fundamental solution bb> C WD .@0 i b @/ gC 1.33/ @1 0 @ @ gC 1.33/ :
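The determinant and minimal polynomial claimed above for the vacuum case can be checked at the level of Fourier symbols. The sympy sketch below is our addition: replacing $\partial_0$ by a scalar $l$ and $\hat\partial$ by $\mathrm{i}\xi$ (so that $\hat\partial\times$ becomes $\mathrm{i}\,\xi\times$ and $\partial_0^2 - \hat\partial^2$ becomes $l^2+|\xi|^2$), the $6\times6$ symbol has determinant $l^2(l^2+|\xi|^2)^2$; which off-diagonal block carries the minus sign does not affect the determinant.

import sympy as sp

l, x1, x2, x3 = sp.symbols('l xi1 xi2 xi3')
Xi = sp.Matrix([[0, -x3, x2], [x3, 0, -x1], [-x2, x1, 0]])   # the matrix of xi x (cross product)

top = sp.Matrix.hstack(l*sp.eye(3), -sp.I*Xi)
bottom = sp.Matrix.hstack(sp.I*Xi, l*sp.eye(3))
M = sp.Matrix.vstack(top, bottom)                            # symbol of the vacuum Maxwell expression

expected = l**2 * (l**2 + x1**2 + x2**2 + x3**2)**2
print(sp.expand(M.det() - expected))                         # expected: 0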
The relation $\hat\partial^{2} = \hat\partial\,\hat\partial^{\top} - \hat\partial\times\,\hat\partial\times$ can also be considered in a different way. Consider
$$P_{\mathrm{curl}} := -(\hat\partial^{2})^{-1}\,\hat\partial\times\,\hat\partial\times, \qquad P_{\mathrm{grad}} := (\hat\partial^{2})^{-1}\,\hat\partial\,\hat\partial^{\top},$$
which are well-defined as bounded operators in $H_{\alpha}(\partial_{0,\varrho}+1,\hat\partial+e)$ for every $\alpha \in \mathbb{Z}^{n+1}$. Indeed they are orthogonal projectors with
$$P_{\mathrm{curl}} + P_{\mathrm{grad}} = 1_{(3\times3)}, \tag{3.1.72}$$
which is a form of the so-called Helmholtz decomposition35 . With Pcurl 0 Pgrad 0 Pcurl ˚ Pcurl D D ; Pgrad ˚ Pgrad ; 0 Pcurl 0 Pgrad we have for the fundamental solution of Maxwell’s equations @// gC 1.66/ b @2 Pgrad ˚ Pgrad @1 GC D .@0 C M0 .b 0 gC 1.66/ D .@0 C M0 .b @// Pcurl ˚ Pcurl gC 1.66/ C @0 Pgrad ˚ Pgrad gC 1.66/ b @2 Pgrad ˚ Pgrad @1 0 gC 1.66/ D .@0 C M0 .b @// Pcurl ˚ Pcurl gC 1.66/ C Pgrad ˚ Pgrad R>0 ˝ ı 1.66/ ; where the term Pgrad ˚ Pgrad R>0 ˝ ı 1.66/ is the part of the fundamental solution in the null space of M0 .b @/. If the material is conducting, then the current density j in the right-hand side J D "1 j contains a contribution j0 produced by the electromagnetic field itself. The dependence on the field is commonly given by Ohm’s law: j0 D E; where the non-negative symmetric matrix denotes the conductivity (and so J0 WD "1 j0 D "1 E). In this case, Maxwell’s equations assume the form ! @0 C "1 "1b @ E J ; D 1 b K H @0 @ where now J WD J J0 ; K are given external forcing terms. Considering again the vacuum case " D D 1 and as multiplication by a positive constant, we get ! @ @0 C b D @0 .@0 C / .@20 C @0 b @2 /2 ; det b @ @0 35 The classical Helmholtz decomposition states – roughly speaking – that every sufficiently smooth field in RnC1 can be written as a sum of a curl of a certain field and a gradient of a certain potential.
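On the Fourier side the two projectors act as the multipliers $P_{\mathrm{grad}}(\xi) = \xi\xi^\top/|\xi|^2$ and $P_{\mathrm{curl}}(\xi) = 1 - \xi\xi^\top/|\xi|^2$; this multiplier form is our (standard) reading of the definitions above, and the short numpy sketch below, our addition, confirms the projector properties behind (3.1.72) for a sample frequency.

import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(size=3)                       # a sample (nonzero) frequency
P_grad = np.outer(xi, xi) / np.dot(xi, xi)
P_curl = np.eye(3) - P_grad

print(np.allclose(P_grad @ P_grad, P_grad),     # projector
      np.allclose(P_curl @ P_curl, P_curl),     # projector
      np.allclose(P_grad @ P_curl, 0),          # mutually orthogonal
      np.allclose(P_grad + P_curl, np.eye(3)))  # decomposition of the identity, cf. (3.1.72)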
so that the minimal polynomial is 7! .@0 / .@0 C / ..@0 /2 C .@0 / b @2 /: Without elaborating the details, we note that in this case we need to find the fundamental solution for the scalar expression @2 ; @20 C @0 b which for > 0 is the partial differential expression of a damped wave equation, also known as the telegraph equation. Investigating the real zeros of the polynomial 7! .i C %/2 C .i C %/ C 2 for 2 Rn , n 2 N, % 2 R n ¹0º, from 2 C 2i % C %2 C i C % C 2 D 0 we obtain the system of real equations 2 C I m D %2 C 2 C Re %; 2% C Re C I m % D 0: Here we have assumed 2 C, which is more general than currently needed. For Im % % ¤ 12 Re , the second equation implies D 2%CRe , and so
Im % 2% C Re
2
2 Im .%2 %.2% C Re // 2% C Re 2 Im D .%2 C Re %/ 2% C Re
.I m /2 % D 2% C Re
D %2 C 2 C Re % from the first equation. Thus, the constraint %2 C Re % > 0 prevents real zeros. From this we see that for j%j > jRe j there are no real roots for these choices of % 2 R. Thus, reversible evolutionarity is maintained for any 2 C. Recall that, for the telegraph equation, we have specifically 2 R>0 . Similar to earlier considerations, the partial differential expression of the telegraph equation re-appears by eliminating the magnetic field from Maxwell’s equations. Differentiating @0 E C E b @ H D J with respect to time and substituting @0 H D b @ E C K yields @20 E C @0 E C b @ b @ E D @0 J C b @ K:
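The root analysis carried out above for the telegraph symbol can be probed numerically. The sketch below is our addition with sample values of $\sigma$ and $\varrho$ satisfying $\varrho^2 + \operatorname{Re}\sigma\,\varrho > 0$; it confirms that the polynomial $\zeta \mapsto (\mathrm{i}\zeta+\varrho)^2 + \sigma(\mathrm{i}\zeta+\varrho) + |\xi|^2$ acquires no real zeros as $|\xi|^2$ ranges over a grid of non-negative values.

import numpy as np

sigma = 0.7 + 2.3j
rho = 1.5                                   # rho**2 + Re(sigma)*rho = 3.3 > 0
for k in np.linspace(0.0, 50.0, 200):       # k plays the role of |xi|^2 >= 0
    # (i*z + rho)**2 + sigma*(i*z + rho) + k, written out in powers of z
    coeffs = [-1.0, 1j*(2*rho + sigma), rho**2 + sigma*rho + k]
    roots = np.roots(coeffs)
    assert np.all(np.abs(roots.imag) > 1e-9), (k, roots)
print("no real roots found")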
@ D 0 we get b @> E D .@0 C /1 b @> J (in H1 .@0;% C 1; b Since b @> b @ C e/ for > 2 b b b b b % 2 R>0 ), and using we @ @ E @ E D C@ @ E get the .3 3/-telegraph equation @20 E C @0 E b @2 E D @0 J C b @ K .@0 C /1 b @b @> J : The magnetic field can now be recovered from the electric field 1 b H WD @1 0 @ E C @0 K:
The telegraph equation can be modified by the following observation. It is easy to see that exp.s m0 / W H˛ .@0;% C 1; b @ C e/ ! H˛ .@0;%Cs C 1; b @ C e/ is unitary for every ˛ 2 Z1C3 , %; s 2 R. With % 2 R>0 and s D 2 2 R>0 , we get that ! @0 C 12 b @ exp. m0 =2/ J exp. m0 =2/ E D : b exp. m0 =2/ H exp. m0 =2/ K @ @0 12 @ C 1 b @ is now The corresponding minimal polynomial of 0 2 b @ @0 12 1 1 2 2 b2 7! @0 @0 C .@0 / @ : 2 2 4
The relevant scalar partial differential expression 2 @2 @20 b 4 is known as the partial differential expression of the modified telegraph equation. Again investigating the associated polynomial 7! .i C %/2 C 2
2 4
for 2 Rn , n 2 N, % 2 R n ¹0º, in the more general case 2 C, from 2 C 2i % C %2 C 2
2 D0 4
we obtain the system of real equations 2 C
.Re /2 .I m /2 D %2 C 2 ; 4 Re I m 2% C D 0: 2
For % ¤ 0 the second equation implies D Re 4%I m and so .Re /2 .I m /2 .Re /2 .I m /2 D %2 C 2 : C 16%2 4 This leads to the non-existence of real roots if .Re /2 .I m /2 .Re /2 .I m /2 < %2 ; C 2 16% 4 which is the same as .4%2 .Re /2 /.4%2 C .I m /2 / > 0: Thus, there are no real roots and the system is reversibly evolutionary if j%j >
1 jRe j: 2
Recall that for the telegraph equation we have assumed 2 R>0 . In the case Re D 0 and I m ¤ 0 we obtain the so-called Klein–Gordon equation. The Extended Maxwell System. It is frequently convenient to consider Maxwell’s 36 equations in the following J form. Emodified b @> b Noting that M.@0 ; @/ H D K implies (since b @ D 0) @0 b @>
E H
WD
@> E @0 b @> H @0 b
! D
! b @> J ; b @> K
we may write Maxwell’s equations in the redundant form 0 0 11 0 1 0 b> 0 b @> 0 0 0 @1 0 @ J Bb B C C B @ 0 CC B J EC B@ 0 b B CDB B@0 B CC B B @ A b b K @ 0 @ 0 @ AA H @ @ 1b> > 0 b @ 0 0 @ 0 0 @ K 36
1 C C C A
Variants of this idea to reformulate Maxwell’s equations have been discovered and re-discovered by a number of authors in various contexts. Possibly the earliest reference is [34] and consequently one finds the term “Ohmura system”. It was discovered in the context of a quaternionic formulation of Maxwell’s equations. It was later re-discovered independently in various forms e.g. in [19], [36], [11]. In [36], however, the full functional analytical context has been described in the general anisotropic, inhomogeneous case, showing that Maxwell’s equations are indeed equivalent to an evolution equation involving two commuting selfadjoint operators. The extended Maxwell system can be utilized for standard elliptic regularity for the spatial part of Maxwell’s equations and shows advantages in applications involving the integral equations method, see e.g. [45, 47].
Remark 3.1.67. It may be preferable to write 0 11 0 0 0 0b @ b @ H B CC B @ > 0 CC B 0 B 0 0b B CC B B@0 B b b @ @ @ 0 0 AA @ E @ 0 b @> 0 0 0
1
0
K 1b> C B @0 @ J CDB A B J @ 1b> @0 @ K
1 C C C: A
In this way we see that also the extended Maxwell system is of the typical block form of reversibly evolutionary expressions of mathematical physics @0 W ; W @0 @ b @ . This transformation is a particular instance of a general here with W WD b @> 0 b mechanism for tri-diagonal skew-selfadjoint block operator matrix A D .Aj;k /j;k with AkC1;k D Ak;kC1 for k D 1; : : : ; 2n 1 (all other entries being zero). Indeed, consider the permutation ' given by ¹1; : : : ; 2nº ! ¹1; : : : ; 2nº; k 7! 2.k b.k 1/=ncn/ 1 C b.k 1/=ncn which induces a row operation %' W B D .Bi;j /i;j D1;:::;2n 7! .B'.i/;j /i;j D1;:::;2n . Then the image of the block identity matrix corresponding to the block operator matrix structure of A yields a mapping P' , such that %' .A/ D P' A: With P' we now obtain that P' AP' is of the desired superblock structure 0 W : W 0 The .8 8/-partial differential expression corresponding to the extended Maxwell system is 0 1 @> 0 0 @0 b B b C @ 0 C B @ @0 b MM.@0 ; b @/ WD B (3.1.73) C @ 0 b @ A @ @0 b 0 0 b @> @0 and now has @2 /4 .@20 b as determinant and 7! .@0 /2 b @2
as minimal polynomial. Indeed, we find that 11 0 0 0 0 0 b @> 0 0 0 b @> C B Bb B C B @ 0 CC B @ 0 B@ 0 b Bb B @0 C B C B @0 B C B @ 0 b @ 0 b @ @ 0 b @ AA @ @ > b 0 0 @ 0 0 0
0 b @ 0 b @>
11 0 CC 0 CC @2 ; CC D @20 b b @ AA 0
and so the fundamental solution of this form of Maxwell’s equation has the simple form 11 0 0 0 b @> 0 0 CC B Bb @ 0 CC B B@ 0 b CC g1;C 1.88/ ; B@0 C B @ @ 0 b @ 0 b @ AA 0 0 b @> 0 where g1;C is the fundamental solution of the wave equation in R1C3 . That the solution is indeed of the form 0 1 0 BEC B C @H A 0 is merely a structural condition on the data, which must have the form 1 0 1 0 1b> F1 @0 @ F2 C B F2 C B F2 C B CDB B C: @ F3 A @ F3 A 1 > b F4 @0 @ F3 Conversely, if (3.1.74) holds then the solution 0 1 u1 B u2 C B C @ u3 A u4 must satisfy @> u2 D b @> F2 ; @0 b @0 u1 b @> u2 D F1 ;
@0 b @> u3 D b @> F3 ; @0 u4 C b @> u3 D F4 ;
and so b> @0 u1 D F1 C b @> u2 D F1 C @1 0 @ F2 D 0; b> @0 u4 D F4 b @> u3 D F4 @1 0 @ F3 D 0; implying u1 D u4 D 0.
(3.1.74)
3.1.12.5 Elastodynamics
Let the displacement of an elastic solid with mass density matrix M from its equilibrium state be given by a function u D .us /sD1;2;3 of space and time. Then the motion of the body is governed by the equation M @20 u D div T C F; where T D .Tij /i;j D1;2;3 denotes the stress tensor and F an external forcing term. The elastic properties of a medium can be described in terms of the rate of change of the displacement vector field u, i.e. the derivative b @u WD .@k us /s;kD1;2;3 . The stress tensor T D .Tij /i;j D1;2;3 can formally be considered as a matrix-valued function depending on this rate of change. In the simplest case this dependence is linear and homogeneous, i.e. we have the so-called Hooke’s law Tij D
3 X
Cij ks @k us
k;sD1
for i; j D 1; 2; 3, or
T D Cb @u
for short, where C D .Cij ks /i;j;k;sD1;2;3 with real constants Cij ks known as elasticity moduli, i; j; k; s D 1; 2; 3, should be symmetric in the sense that Cij ks D Cksij
(3.1.75)
for i; j; k; s D 1; 2; 3. The symmetry requirement reduces the original 81 constants to P 45 elasticity moduli. Denoting the trace 3iD1 Xi i of a matrix X D .Xij /i;j D1;2;3 2 C 33 by trace.T /, we see that .X; Y / 7! trace.X Y / provides an inner product in C 33 . The induced norm p X 7! trace.X X / is known as the Frobenius norm. In order to obtain an evolutionary equation, C should be non-negative with respect to this inner product. In general there may be non-trivial elements in the null space Œ¹0ºC of C W C 33 ! 33 C . The projection theorem yields the orthogonal decomposition C 33 D Œ¹0ºC ˚ C ŒC 33 and in particular the subspace C ŒC 33 reduces C so that C ŒC 33 ! C ŒC 33 ; U 7! C U is a well-defined, positive definite mapping.
More specifically, it is commonly assumed that Œ¹0ºC D ¹U 2 C 33 j U C U > D 0º; i.e. that C vanishes on antisymmetric matrices. This amounts to the additional symmetry constraint Cij ks D Cijsk (3.1.76) for i; j; k; s D 1; 2; 3 effectively reducing the number of elasticity moduli to 21. This assumption makes the stress tensor T (real) symmetric, i.e. T > D T . Indeed, Tij D
3 X
Cij ks @k us D
k;sD1
3 X
Cksj i @k us D
k;sD1
3 X
Cj iks @k us D Tj i
k;sD1
for i; j D 1; 2; 3. Moreover, we see from (3.1.75) and (3.1.76) that Cij kl D Clkj i D Cklj i D Cij lk D Cj ilk for i; j; k; l D 1; 2; 3 holds. Consequently, 1 T D .T C T > / 2 1 D .C @u C .C @u/> / 2 3 3 X 1 X D Cij kl @k ul C Cj ikl @k ul i;j 2 k;lD1
D
1 2
3 X
k;lD1
Cij kl @k ul C
k;lD1
Cj ilk @l uk
k;lD1
i;j
1 .@k ul C @l uk / i;j 2 k;lD1 1 .@k ul C @l uk / DC : 2 k;l D
3 X
3 X
Cij kl
Thus, we see that only the real symmetric part of @u is needed in Hooke’s law. We recall that, under these assumptions, we have merely 21 real constants left as elasticity moduli – rather than 45 in the general symmetric case. Noting that 0
X11 @ X21 X31
PD W C 33 ! C 33 ; 1 0 1 X11 0 X12 X13 0 X22 X23 A 7! @ 0 X22 0 A X32 X33 0 0 X33
defines an orthogonal projector with respect to the Frobenius inner product, it is preferable (under the additional symmetry assumption (3.1.76)) to use the modified Frobenius inner product (inducing an equivalent norm) h j i WD .X; Y / 7! trace..PD X / PD Y / C 3 X
D
1 trace...1 PD /X / .1 PD /Y / 2
Xks Yks
k;sD1;ks
for all complex .3 3/-matrices X D .Xij /i;j D1;2;3 and Y D .Yij /i;j D1;2;3 with X> D X, Y > D Y . The tensor C is still positive definite with respect to the modified inner product in the sense that, for some positive constant c0 , we have 3 X
hX jC X i D
3 X
Xij Cij ks Xks
i;j D1; ij k;sD1
D
3 3 X X iD1 k;sD1
D
3 3 X X
3 X
Xii Ci iks Xks C
C2
iD1 kD1
C
Xij Cij ks Xks
i;j D1; i<j k;sD1
Xii Ci ikk Xkk
3 X
3 X
3 X
3 X
Xij Cij ks Xks
iD1 k;sD1;k<s 3 X
Xij Cij kk Xkk
i;j D1; i<j kD1
C2
3 X
3 X
Xij Cij ks Xks
i;j D1; i<j k;sD1;k<s 3 X
D hC X jX i c0
Xks Xks
k;sD1;ks
for all complex .3 3/-matrices X D .Xij /i;j D1;2;3 with X > D X . Following the so-called Voigt convention [49] we may re-write the above Cartesian tensors in an easier accessible matrix form. For this we first observe that the space of symmetric .3 3/-matrices can be identified with 6-vectors due to the relation hKjKi D
3 X i;j D1;ij
jKij j2 ;
for symmetric matrices K D .Kij /i;j D1;2;3 , which shows the isometry of the mapping V given by 1 0 K11 B K22 C 0 1 C B K11 K12 K13 V B K33 C C: B @ K12 K22 K23 A 7 !B K23 C B C K13 K23 K33 @ K31 A K12 We also see that V
0
1 .@k ul C @l uk / 2 k;l
1 @1 u1 B C @2 u2 B C C B @3 u3 C B1 D B .@2 u3 C @3 u2 / C D W B2 C B1 C @ 2 .@3 u1 C @1 u3 / A 1 2 .@1 u2
with
0
1 B0 B B0 W DB B0 B @0 0
0 1 0 0 0 0
C @2 u1 / 0 0 1 0 0 0
0
@1 B 0 B B 0 B B 0 B @ @3 @2
0 @2 0 @3 0 @1
1 0 0 1 0 C C u1 C @3 C @ A u2 @2 C C u3 @1 A 0
1 0 0 0 0 0 0C C 0 0 0C C: 1 C 2 0 0C 1 0 0A 2
0 0
1 2
Hooke’s law can now be written in the matrix form as 1 .@k ul C @l uk / W V T DW V C 2 k;l 0 1 1 0 T11 @1 0 0 B 0 @2 0 C 0 1 B T22 C B C C B B T33 C B 0 0 @3 C u1 1 B C C B @ A DW V C V W B C u2 D B T23 C 0 @ @ 3 2 B B C C @ T31 A @ @3 0 @1 A u3 @2 @1 0 T12 and so
1 T11 B T22 C C B B T33 C 1 C B B T23 C D W V C V W C B @ T31 A T12 0
0
@1 B 0 B B 0 B B 0 B @ @3 @2
0 @2 0 @3 0 @1
1 0 0 1 0 C C u1 @3 C C @ u2 A: @2 C C u3 @1 A 0
Thus, C is transformed into the matrix 0
C1111 C1122 C1133 2 C1123 B C1122 C2222 C2233 2 C2223 B B C1133 C2233 C3333 2 C2333 W B B 2 C1123 2 C2223 2 C2333 4 C2323 B @ 2 C1113 2 C1322 2 C1333 4 C1323 2 C1112 2 C1222 2 C1233 4 C1223 0 C1111 C1122 C1133 C1123 C1113 B C1122 C2222 C2233 C2223 C1322 B B C1133 C2233 C3333 C2333 C1333 DB B C1123 C2223 C2333 C2323 C1323 B @ C1113 C1322 C1333 C1323 C1313 C1112 C1222 C1233 C1223 C1213
1 2 C1113 2 C1112 2 C1322 2 C1222 C C 2 C1333 2 C1233 C CW 4 C1323 4 C1223 C C 4 C1313 4 C1213 A 4 C1213 4 C1212 1 C1112 C1222 C C C1233 C C: C1223 C C C1213 A C1212
Hooke’s law takes on the form 1 0 T11 C1111 B T22 C B C1122 C B B B T33 C B C1133 C B B B T23 C D B C1123 B C B @ T31 A @ C1113 T12 C1112 0
C1122 C2222 C2233 C2223 C1322 C1222
C1133 C2233 C3333 C2333 C1333 C1233
C1123 C2223 C2333 C2323 C1323 C1223
C1113 C1322 C1333 C1323 C1313 C1213
10 C1112 @1 B 0 C1222 C CB B C1233 C CB 0 C C1223 C B B 0 C1213 A @ @3 C1212 @2
0 @2 0 @3 0 @1
1 0 0 1 0 C C u1 @3 C C @ u2 A: @2 C C u3 @1 A 0
Now, with little danger of confusion, again using T to denote the column matrix 0
1 T11 B T22 C B C B T33 C B C B T23 C; B C @ T31 A T12 C for the matrix 0
C1111 B C1122 B B C1133 B B C1123 B @ C1113 C1112
C1122 C2222 C2233 C2223 C1322 C1222
C1133 C2233 C3333 C2333 C1333 C1233
C1123 C2223 C2333 C2323 C1323 C1223
C1113 C1322 C1333 C1323 C1313 C1213
1 C1112 C1222 C C C1233 C C C1223 C C C1213 A C1212
and Grad for
0
@1 B 0 B B 0 B B 0 B @ @3 @2
0 @2 0 @3 0 @1
1 0 0 C C @3 C C; @2 C C @1 A 0
we get Hooke’s law as T D C Grad u:
(3.1.77)
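The Voigt identification used above is easy to exercise numerically. The sketch below is our addition: it stores a symmetric $3\times3$ matrix as the 6-vector $(K_{11},K_{22},K_{33},K_{23},K_{31},K_{12})$ and checks that the modified Frobenius inner product turns into the plain Euclidean one, $\langle K|K\rangle = \sum_{i\le j}|K_{ij}|^2$.

import numpy as np

def voigt(K):
    """Voigt 6-vector of a symmetric 3x3 matrix (ordering 11, 22, 33, 23, 31, 12)."""
    return np.array([K[0, 0], K[1, 1], K[2, 2], K[1, 2], K[2, 0], K[0, 1]])

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
K = A + A.T                                  # a symmetric test matrix

off = K - np.diag(np.diag(K))
modified_norm2 = np.sum(np.diag(K)**2) + 0.5*np.sum(off**2)   # the modified Frobenius norm squared
print(np.isclose(modified_norm2, np.sum(voigt(K)**2)))        # expected: True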
The Newton law governing the elastic displacement is now M @20 u Div T D F;
(3.1.78)
where 1 T11 1 B T22 C 0 C @ 1 0 0 0 @3 @ 2 B B T33 C B C A @ D 0 @2 0 @ 3 @ 1 0 B T23 C B C 0 0 @3 @ 2 @ 1 0 @ T31 A T12 0
Div T WD
3 X kD1
@k Tj k
j D1;2;3
and the .3 3/-matrix M 2 R33 is assumed to be positive definite and symmetric. Substituting Hooke’s law into (3.1.78), we obtain the system .@20 M 1 Div C Grad/ u D M 1 F
(3.1.79)
for the displacement. Before we analyze this system, let us take a look at an alternative option which is to formulate this system as first order in time revealing that the equations of elastodynamics can also be formulated as a symmetric hyperbolic system. For this we let v WD @0 u to obtain M @0 v Div T D F and, by differentiating Hooke’s law, we get @0 T D C Grad v: Thus we are led to the block matrix system 1 1 v M 0 0 Div v M F @0 D : T 0 C Grad 0 T 0
(3.1.80)
We see that (3.1.80) is a symmetric hyperbolic system. The corresponding partial differential expression is 1.33/ @0 M 1 Div b E.@0 ; @/ WD @/; D @0 E0 .b C Grad 1.66/ @0 where E0 .b @/ WD E.0; b @/. Studying a general 9 9-matrix system more explicitly would require us to find roots of a 9-th order polynomial. This can only be successfully done in very special cases. To reduce the number of parameters let us consider simpler materials. First we assume that M D c 1.33/ for c 2 R>0 . Without loss of generality we may then also assume c D 1 (otherwise we re-scale). A case which can be handled in further detail, is the so-called isotropic case where 0 1 2 C 0 0 0 B 2 C 0 0 0C B C B 2 C 0 0 0 C B C: C DB 0 0 0 0C B 0 C @ 0 0 0 0 0A 0
0
0
0
0
The eigenvalues of C are $\mu$, $2\mu$ and $2\mu+3\lambda$ with multiplicities 3, 2 and 1, respectively. The quantities $\lambda, \mu$ are known as Lamé constants and for the medium to be isotropic (C positive definite) we must have
$$\mu > 0, \qquad 2\mu + 3\lambda > 0$$
or
$$\mu > 0, \qquad -\tfrac{2}{3}\,\mu < \lambda. \tag{3.1.81}$$
The characteristic polynomial in this case is found to be
.@0 /3 ..@0 /2 b @2 /2 ..@0 /2 .2 C / b @2 /: Noting that (3.1.81) implies 2 C > 0 we see that there are two wave equations involved with different time-scales (velocities). A closer look reveals that the minimal polynomial is of smaller degree .@0 / ..@0 /2 b @2 / ..@0 /2 .2 C / b @2 /: Indeed, in this particular case we find Div C Grad 0 2 b E0 .@/ D 0 C Grad Div and calculate E0 .b @/.E0 .b @/2 b @2 / .E0 .b @/2 .2 C / b @2 / D 0:
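The eigenvalue statement for the isotropic stiffness matrix is easily confirmed numerically; the sketch below is our addition with sample Lamé constants satisfying (3.1.81).

import numpy as np

lam, mu = 2.0, 0.7                       # sample Lame constants with mu > 0 and 2*mu + 3*lam > 0
C = np.full((3, 3), lam)
C[np.arange(3), np.arange(3)] = 2*mu + lam
C = np.block([[C, np.zeros((3, 3))],
              [np.zeros((3, 3)), mu*np.eye(3)]])

print(np.sort(np.linalg.eigvalsh(C)))
# expected: mu three times, 2*mu twice, 2*mu + 3*lam once, i.e. [0.7 0.7 0.7 1.4 1.4 7.4]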
The solution operator evaluates to E.@0 ; b @/1 b 2 b2 1 C E0 .b D @1 @/ .@20 .2 C / b @2 /1 0 C E0 .@/ .@0 @ / 2 2 b2 1 C E0 .b b2 1 C E0 .b @/2 @1 @/2 @1 0 .@0 @ / 0 .@0 .2 C / @ /
C .E0 .b @/3 .@0 C E0 .b @// @0 E0 .b @// .@20 b @2 /1 .@20 .2 C / b @2 /1 2 b2 1 2 b2 1 C E0 .b @/4 @1 0 .@0 @ / .@0 .2 C / @ / :
The (forward) fundamental solution is then given by GE;C D R>0 ˝ ı C E0 .b @/ g ;C C E0 .b @/ g2 C;C C E0 .b @/2 .R>0 ˝ ı/ g ;C C E0 .b @/2 .R>0 ˝ ı/ g2 C;C C .E0 .b @/3 .@0 C E0 .b @/// @0 E0 .b @/ g ;C g2 C;C C E0 .b @/4 .R>0 ˝ ı/ g ;C g2 C;C ; where g;C denotes the (forward) fundamental solution of .@20 b @2 /, 2 R>0 , which can be obtained from the fundamental solution of the wave equation above by a simple rescaling of space g;C D 3=4 1 ˝ 1=p g1;C D 3=2 g1;C .; 1=2 /: Indeed, @2 / 1 ˝ 1=p g1;C D 1 ˝ 1=p .@20 b @2 / g1;C D 1 ˝ 1=p ı .@20 b and h1 ˝ 1=p ıj i;0 D hıj1 ˝ p i;0 D 3=4 .0/: Note that to recover the displacement from the velocity vector an additional integration, i.e. application of @1 0 or convolution with .R>0 ˝ ı/, is needed. As convenient as the formulation as a symmetric hyperbolic system sometimes is, it suffers from the considerable size of the resulting system. In the present case of elastodynamics we could obtain a smaller system by the observation made for the acoustics equations. We find, however, that the original equation (3.1.78) already has the desired form 2 1 @20 u M 1 Div C Grad u D @1 Div C Grad/ v 0 .@0 M
D M 1 F
(3.1.82)
and the stress tensor T can be found from Hooke’s law (3.1.77) as soon as we have solved (3.1.82). The .3 3/-partial differential expression @/ WD @20 M 1 Div C Grad DW @20 E0 .b @/ E.@0 ; b
can be analyzed more comfortably due to its smaller size. We only consider here the case M D 1.33/ and briefly discuss several special classes of media: (a) (monoclinic media) In this case we have generically 0
C1111 B C1122 B B C1133 C DB B 0 B @ 0 C1112
C1122 C2222 C2233 0 0 C1222
C1133 0 0 C2233 0 0 C3333 0 0 0 C2323 C1323 0 C1323 C1313 C1233 0 0
1 C1112 C1222 C C C1233 C C 0 C C 0 A C1212
with 13 material constants. This case is characterized by leaving the equations invariant under reflexion .x; y; z/ 7! .x; y; z/: The 13 constants have to satisfy certain constraints to make sure that C is positive definite. We certainly must require that 1 C1111 C1122 C1133 @ C1122 C2222 C2233 A and C2323 C1323 are positive definite C1323 C1313 C1133 C2233 C3333 0
(3.1.83)
and C1212 > 0;
(3.1.84)
as well as det.C / > 0. Due to the particular block structure C D we find
1 W B 1 0 1
A W W B
A W W B
1 B 1 W
0 1
D
A W B 1 W 0 0 B
and so we see that C is positive definite if and only if in addition to (3.1.83), (3.1.84) we have that 1 0 1 0 C1112 C1111 C1122 C1133 @ C1122 C2222 C2233 A .C1212 /1 @ C1222 A C1112 C1222 C1233 C1133 C2233 C3333 C1233 is positive definite.
(b) (rhombic media) This case is characterized by mirror symmetries with respect to all coordinate planes .x; y; z/ 7! .x; y; z/; .x; y; z/ 7! .x; y; z/; .x; y; z/ 7! .x; y; z/: These symmetries reduce the number of free parameters even further. We find in this case that C reduced to 0 1 0 0 C1111 C1122 C1133 0 B C1122 C2222 C2233 0 0 0 C B C B C1133 C2233 C3333 0 0 0 C B C: B 0 0 0 C2323 0 0 C B C @ 0 0 0 0 C1313 0 A 0 0 0 0 0 C1212 For C to be positive definite we must require that the .3 3/-matrix 1 0 C1111 C1122 C1133 @ C1122 C2222 C2233 A C1133 C2233 C3333 is positive definite and C2323 ; C1313 ; C1212 > 0: (c) (hexagonal media) In media with a hexagonal symmetry we have 0 1 0 0 C1111 C1122 C1133 0 B C1122 C1111 C1133 0 C 0 0 B C B C1133 C1133 C3333 0 C 0 0 B C C DB C 0 0 0 C 0 0 2323 C B A @ 0 0 0 0 C2323 0 1 0 0 0 0 0 2 .C1111 C1122 / with 5 material constants. Positive definiteness of C requires the constraints C1111 C1122 > 0;
C2323 > 0;
1 2 .C1111 C C1112 /C3333 > C1133 : 2
(d) (cubic media) If there are three orthogonal symmetry axes then 0 1 0 0 C1111 C1122 C1122 0 B C1122 C1111 C1122 0 0 0 C B C B C1122 C1122 C1111 0 C 0 0 C C DB B 0 0 0 C2323 0 0 C B C @ 0 0 0 0 C2323 0 A 0 0 0 0 0 C2323
with only 3 material constants. Indeed, this case is characterized by leaving the equations invariant under spatial transformations .x; y; z/ 7! .y; x; z/; .x; y; z/ 7! .x; z; y/; .x; y; z/ 7! .z; y; x/; as well as under reflexions .x; y; z/ 7! .x; y; z/; .x; y; z/ 7! .x; y; z/; .x; y; z/ 7! .x; y; z/: Geometrically this means axial symmetry with respect to all coordinate axes and mirror symmetry with respect to all coordinate planes. Indeed, the mirror symmetries reduce C to 1 0 0 0 C1111 C1122 C1133 0 C B B C1122 C2222 C2233 0 0 0 C B C C BC C C 0 0 0 C B 1133 2233 3333 C: B B 0 C 0 0 C 0 0 2323 B C B 0 0 0 0 C1313 0 C @ A 0 0 0 0 0 C1212 This is a rhombic medium. The additional axial symmetries now lead to 1 0 0 C1111 C1122 C1122 0 C B B C1122 C1111 C1122 0 0 0 C C B BC 0 0 C C B 1122 C1122 C1111 0 C: B C B 0 0 0 C 0 0 2323 B C B 0 0 0 0 C2323 0 C @ A 0 0 0 0 0 C2323 0
The required conditions for C to be positive definite can be easily found as C2323 > 0;
C1111 C1122 > 0;
C1111 C 2 C1122 > 0
or C2323 > 0;
1 C1111 > C1122 > C1111 : 2
The isotropic case mentioned above is a particular instance of such a cubic system. Utilizing the fact that the only linearly independent Cartesian 4-tensors invariant under rotation, i.e. isotropic tensors, are37 .i; j; k; s/ 7! ¹0º .i j / ¹0º .k s/; .i; j; k; s/ 7! ¹0º .i k/ ¹0º .j s/; .i; j; k; s/ 7! ¹0º .i s/ ¹0º .k j / we find that in the isotropic case we must have Cij ks D ¹0º .i j / ¹0º .k s/ C ¹0º .i k/ ¹0º .j s/ C ¹0º .i s/ ¹0º .k j / for i; j; k; s D 1; 2; 3 with parameters ; ; 2 R. Together with the symmetry constraint (3.1.76) on C we find that D and so Cij ks D ¹0º .i j / ¹0º .k s/ C ¹0º .i k/ ¹0º .j s/ C ¹0º .i s/ ¹0º .k j / for i; j; k; s D 1; 2; 3. This yields in the cubic case an additional reduction C1111 D 2 C ; C1122 D ; C1212 DD C1221 D ; leaving just 2 elasticity moduli known as Lamé constants. Note that C D 1.66/ and the rather special case 1 0 0 0 C1111 C1122 C1122 0 B C1122 C1111 C1122 0 0 0 C C B B C1122 C1122 C1111 0 0 0 C B C C DB 0 0 0 C1122 0 0 C B C @ 0 0 0 0 C1122 0 A 0 0 0 0 0 C1122 are examples of cubic media. The latter has been singled out since in this case we obtain a decoupled system, i.e. the system is diagonal. Indeed, for cubic systems, we @/ the matrix obtain for E0 .b 0 2 1 @1 0 0 @ b @ @/ D .C1111 C1122 2 C2323 / @ 0 @22 0 A C2323 b E0 .b 0 0 @23 C .C1122 C 2 C2323 / b @b @> 37 The characteristic functions used here are nothing but the components of the so-called Kroneckerı-tensor.
reducing to E0 .b @/ D b @ b @ C.2 C / b @b @> in the isotropic case. The fundamental solution in the case of a cubic medium can now in principle be calculated according to our usual procedure in terms of the fundamental solution of a sixth order scalar partial differential equation. Depending on the type of cubic medium, the corresponding polynomial may be reducible thus further simplifying the construction of a fundamental solution. Here we consider only the isotropic case, where the characteristic polynomial is given by det.E.@0 ; b @/ 1.33/ / D .@20 .2 C / b @2 /.@20 b @2 /2 : The minimal polynomial in the isotropic case is .@20 .2 C / b @2 / .@20 b @2 / as can be seen from .E0 .b @/ .2 C / b @2 / .E0 .b @/ b @2 / D 0; which is implied by @/ .2 C / b @2 / D . C / b @ b @; .E0 .b .E0 .b @/ b @2 / D . C / b @b @> :
(3.1.85)
Thus, .@20 .2 C / b @2 / .@20 b @2 / 133 D E.@0 ; b @/2 C .2 @20 .3 C / b @2 / E.@0 ; b @/ and so .@20 .2 C /b @2 /.@20 b @2 / E.@0 ; b @/1 D E.@0 ; b @/ C 2 @20 .3 C /b @2 D @20 C E0 .b @/ .3 C / b @2 or E.@0 ; b @/1 D .@20 .3 C / b @2 / .@20 .2 C / b @2 /1 .@20 b @2 /1 C E0 .b @/ .@20 .2 C / b @2 /1 .@20 b @2 /1 :
(3.1.86)
This can be further evaluated as E.@0 ; b @/1 1 2 1 1 .@ .2 C / b @2 D @2 / C .@20 b @2 / b 2 0 2 2 1 2 b .2 C / @ .@20 .2 C / b @2 /1 .@20 b @2 /1 2 C E0 .b @/ .@20 .2 C / b @2 /1 .@20 b @2 /1 1 1 D .@20 .2 C / b @2 /1 C .@20 b @2 /1 2 2 1 C .E0 .b @/ .2 C / b @2 / .@20 .2 C / b @2 /1 .@20 b @2 /1 2 1 C .E0 .b @/ b @2 / .@20 .2 C / b @2 /1 .@20 b @2 /1 2 1 1 D .@20 .2 C / b @2 /1 C .@20 b @2 /1 2 2 1 @ b @ .@20 .2 C / b C . C / b @2 /1 .@20 b @2 /1 2 1 @b @> .@20 .2 C / b C . C / b @2 /1 .@20 b @2 /1 : 2 A corresponding formula for the fundamental solution in terms of the fundamental solutions g ;C and g2 C;C follows. Moreover, we find with (3.1.85) b @ b @ .@20 E0 .b @// D .@20 b @2 / b @ b @ and
b @b @> .@20 E0 .b @// D .@20 .2 C / b @2 / b @b @> :
@ b @ Cb @b @> , we get by subtracting Since b @2 D b b @2 .@20 E0 .b @// D .@20 b @2 / b @ b @ .@20 .2 C / b @2 / b @b @> : In terms of the fundamental solutions g ;C and g2 C;C , we have .@20 b @2 /b @ b @ g ;C 1.33/ D b @ b @ .@20 E0 .b @//g ;C 1.33/ Db @ b @ ı 1.33/ and @2 /b @b @> g2 C;C 1.33/ D b @b @> .@20 E0 .b @//g2 C;C 1.33/ .@20 .2 C /b D b @b @> ı 1.33/ :
By adding we get b @2 .@20 E0 .b @// GE;C D .@20 E0 .b @// .b @ b @ g ;C 1.33/ b @b @> g2 C;C 1.33/ / and so
b @2 GE;C D b @ b @ g ;C C b @b @> g2 C;C :
With this we get GE;C D Pcurl g ;C C Pgrad g2 C;C : Here Pcurl g ;C is associated with the so-called shear waves and Pgrad g2 C;C with the so-called pressure waves. b grad .i / b Remark 3.1.68. It is interesting to note that P
./ WD .F Pgrad /./ and 3 b V b P curl .i / ./ WD .F Pcurl /./, 2 R n ¹0º, 2 C1 .R3 /, define projections on the eigenspaces of E0 .i / for all 2 R3 n ¹0º, (here F and b denote the spatial Fourier transform). Indeed, a spectral representation of a general matrix of polynomials A.i / of the form X A.i / D k .i / Pk .i /; k
where the eigenvalues 7! k .i / and the entries of the projections 7! Pk .i / onto corresponding eigenspaces are Borel functions, can be used to obtain the solution operator in the form X @//1 D .@0 k .b @//1 Pk .b @/: .@0 A.b k
Analogously, we have @//1 D .@20 A.b
X
.@20 k .b @//1 Pk .b @/:
k
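As a small numerical aside (our addition), the two wave families identified above travel with different speeds: $\sqrt{\mu}$ for the shear part and $\sqrt{2\mu+\lambda}$ for the pressure part, and under the isotropy constraints (3.1.81) the pressure waves are always the faster ones since $2\mu+\lambda > \mu$.

import math

lam, mu = 2.0, 0.7                    # sample Lame constants satisfying (3.1.81)
c_shear, c_pressure = math.sqrt(mu), math.sqrt(2*mu + lam)
print(c_shear, c_pressure, c_pressure > c_shear)   # approx. 0.837, 1.844, True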
3.1.12.6 Fluid Dynamics
The dynamics of viscous fluids is commonly described by the Navier–Stokes system. If v denotes the velocity distribution and p the pressure density of the fluid, Newton’s law, assuming constant mass density, takes the particular form @0 v b @/v D f @2 v C b @p C .v >b with the typical non-linearity appearing as a directional derivative of the velocity field in its own direction, 2 R>0 . The system needs to be amended by suitable conditions
on the pressure density. In the so-called incompressible case it is assumed that the velocity field is divergence-free, i.e. b @> v D 0: Dropping the non-linear term we end up with the so-called Stokes system in the incompressible case which can be discussed within our framework. I Stokes equation (constant pressure, incompressible case). According to the above introductory remarks, the Stokes equation in the incompressible case is given by the system 0 10 1 0 1 @2 0 0 @1 @0 b f1 v1 B CB C B C 2 b B 0 @0 @ 0 @2 C B v2 C B f2 C D ; (3.1.87) B C @ 0 0 @0 b @2 @3 A @ v3 A @ f3 A p 0 @1 @2 @3 0 which is associated with a non-invertible operator. Assuming, however, that a solution p of 0 1 f1 2 >@ b b f2 A ; @ p D @ f3 is known, determining v reduces to discussing the .3 3/-partial differential expression S0 .@0 ; b @/ WD @0 C b @ b @: Since, from a mathematical point of view, the effect of this is equivalent to assuming b @p D 0, we refer to this case as the constant pressure case. The characteristic polynomial of S0 .@0 ; b @/ is .@0 / .@0 b @2 /2 and the minimal polynomial is @2 / D 2 .2 @0 b @2 / C @0 .@0 b @2 /: .@0 / .@0 b This shows that S0 .@0 ; b @/ is irreversibly evolutionary. Moreover, we find b2 1 .@0 C b @ b @/1 D .@0 b @2 /1 .b @2 C b @ b @/@1 0 .@0 @ / b2 1 D .@0 b @2 /1 b @b @> @1 0 .@0 @ / ; and the fundamental solution GS is therefore given by @b @> @1 GS D g0 1.33/ b 0 g0 1.33/ ;
where g0 is the fundamental solution of the heat equation. Using the projections Pcurl ; Pgrad we obtain @2 @1 GS D g0 1.33/ Pgradb 0 g0 1.33/ D Pcurl g0 1.33/ C Pgrad R>0 ˝ ı 1.33/ : The condition that the velocity field u satisfying @/ u D f S0 .@0 ; b
(3.1.88)
is divergence-free appears as a constraint on the data rather than an additional equation (which would make this system formally overdetermined). Indeed, applying div D b @> to (3.1.88) yields @0 b @> u D b @> f and so u is divergence-free whenever the data f are divergence-free. I Linearized Navier–Stokes equation (constant pressure, incompressible case), Oseen system. Adding a drift term yields the linearized38 Navier–Stokes operator in the constant pressure case N0;s .@0 ; b @/ WD @0 C @ @ Cs > @: The determinant is given by @/ .@0 C s >b @ b @2 /2 @/ D .@0 C s >b det N0;s .@0 ; b and so the characteristic polynomial is now @/ .@0 C s >b @ b @2 /2 : 7! .@0 C s >b The minimal polynomial is @/ .@0 C s >b @ b @2 / 7! .@0 C s >b and so @ Cb @ b @/1 D .@0 C s >b @ b @2 /1 .@0 C s >b @/1 .@0 C s >b @ b @2 /1 : b @b @> .@0 C s >b 38
In the non-linear case the direction of drift s=jsj is proportional to the velocity field itself. The non-linear drift term is then v 7! .v >b @/ v; with 2 R>0 . Using a more concise block matrix notation, the Navier–Stokes equations are of the form ! ! v f @0 Cb @ b @ b @ v .v >b @/ 0 : D C 0 p p b 0 0 @> 0
The fundamental solution is now given by @b @> s;C gs 1.33/ ; GN WD gs 1.33/ b where s;C is the (forward) fundamental solution of the transport equation and gs is the fundamental solution of the heat equation with drift term. Using Pgrad we can separate the pure shift contribution and obtain @2 s;C gs 1.33/ GN D gs 1.33/ Pgrad b @/gs ı/ 1.33/ D gs 1.33/ Pgrad s;C ..@0 C s >b D gs 1.33/ C Pgrad s;C 1.33/ Pgrad gs 1.33/ D Pcurl gs 1.33/ C Pgrad s;C 1.33/ : As in the previous case, the condition that the velocity field u satisfying @/ u D f N0;s .@0 ; b
(3.1.89)
is divergence-free appears as a constraint on the data rather than an additional equation (which would make this system formally overdetermined). Applying @> to (3.1.89) leads to .@0 C s >b @/ b @> u D b @> f; which is a transport equation for div u. Again we have that u is divergence-free if the data f are. 3.1.12.7
Quantum Mechanics
The last examples to illustrate the utility of our Sobolev lattice framework are taken from quantum mechanics.

I Schrödinger equation in vacuum. The scalar partial differential expression associated with this equation is of the form
$$\partial_0 - \mathrm{i}\,\hat\partial^{2},$$
which is reversibly evolutionary (as a curiosity we mention that the Schrödinger equation is – in a different context – also known as the parabolic wave equation). Here the (forward) fundamental solution can be found by the same techniques as before and is given by the function³⁹
$$g_{S,+}(t,x) = \chi_{\mathbb{R}_{>0}}(t)\,(4\pi t)^{-3/2}\,\exp\bigl(\mathrm{i}\,|x|^{2}/(4t)\bigr)\,\exp(-3\pi\mathrm{i}/4)$$
for $t \in \mathbb{R}\setminus\{0\}$, $x \in \mathbb{R}^{3}$. The behavior at $t = 0$ is again determined by our general results on initial value problems. In particular,
$$\chi_{\mathbb{R}_{>0}}(t)\,(4\pi t)^{-3/2}\,\exp\bigl(\mathrm{i}\,|\,\cdot\,|^{2}/(4t)\bigr)\,\exp(-3\pi\mathrm{i}/4) \to \delta \qquad \text{as } t \to 0+.$$

³⁹ It may be interesting to note that this fundamental solution appears also by the formal substitution $t \mapsto \mathrm{i}t$ into the fundamental solution of the heat equation.
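Analogously to the heat kernel, the stated Schrödinger kernel can be checked symbolically; the sympy sketch below is our addition and verifies $\partial_0 g_{S,+} - \mathrm{i}\,\hat\partial^2 g_{S,+} = 0$ for $t > 0$ (the constant phase $\exp(-3\pi\mathrm{i}/4)$ plays no role in this computation).

import sympy as sp

t = sp.symbols('t', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r2 = x1**2 + x2**2 + x3**2
g = (4*sp.pi*t)**sp.Rational(-3, 2) * sp.exp(sp.I*r2/(4*t))

residual = sp.diff(g, t) - sp.I*(sp.diff(g, x1, 2) + sp.diff(g, x2, 2) + sp.diff(g, x3, 2))
print(sp.simplify(residual))   # expected: 0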
I Dirac equation. The Dirac operator $Q_0(\partial_0,\hat\partial)$ is usually given as the $(4\times4)$-partial differential expression with the block matrix form
$$Q_0(\partial_0,\hat\partial) := \begin{pmatrix} \partial_0 + \mathrm{i} & C(\hat\partial) \\ C(\hat\partial) & \partial_0 - \mathrm{i} \end{pmatrix}.$$
Here
$$C(\hat\partial) := \begin{pmatrix} \partial_3 & \partial_1 - \mathrm{i}\,\partial_2 \\ \partial_1 + \mathrm{i}\,\partial_2 & -\partial_3 \end{pmatrix} = \sum_{k=1}^{3} \Pi_k\,\partial_k, \quad\text{where}$$
$$\Pi_1 := \begin{pmatrix} 0 & +1 \\ +1 & 0 \end{pmatrix}, \qquad \Pi_2 := \begin{pmatrix} 0 & -\mathrm{i} \\ +\mathrm{i} & 0 \end{pmatrix}, \qquad \Pi_3 := \begin{pmatrix} +1 & 0 \\ 0 & -1 \end{pmatrix}$$
are known as Pauli matrices. Applying the unitary transformation given by the block matrix $\frac{1}{\sqrt{2}}\begin{pmatrix} +\mathrm{i} & +1 \\ -\mathrm{i} & +1 \end{pmatrix}$ to $Q_0(\partial_0,\hat\partial)$ we obtain
! b i C i C. @/ @ 0 @/ WD Q1 .@0 ; b i i C.b @/ @0 ! 1 Ci C1 @/ i @0 1 C C.b @/ i @0 C 1 C C.b D 2 i C1 i C.b @/ C @0 i i C.b @/ C @0 i ! 1 Ci C1 @0 C i C.b @/ i Ci D : C1 C1 2 i C1 C.b @/ @0 i @/ has the typical form of reversibly The latter may be a preferable form since Q1 .@0 ; b evolutionary expressions of mathematical physics @0 W ; W @0 where W WD i i C.b @/. It is worth noting that 0 i C.b @/ b i C.@/ 0 so that b @2 .
$$\begin{pmatrix} 0 & C(\hat\partial) \\ C(\hat\partial) & 0 \end{pmatrix}^{2} = \begin{pmatrix} C(\hat\partial)^{2} & 0 \\ 0 & C(\hat\partial)^{2} \end{pmatrix} = \hat\partial^{2},$$
so that
$$\begin{pmatrix} 0 & -\mathrm{i}\,C(\hat\partial) \\ -\mathrm{i}\,C(\hat\partial) & 0 \end{pmatrix}$$
is seen to be a (selfadjoint) square root of the negative Laplacian $-\hat\partial^{2}$.
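The algebraic fact behind this square-root relation is that the Pauli matrices anticommute and square to the identity, so that the symbol $C(\xi) = \sum_k \Pi_k\,\xi_k$ satisfies $C(\xi)^2 = |\xi|^2\,1_{(2\times2)}$. The sympy sketch below is our addition and verifies exactly this.

import sympy as sp

x1, x2, x3 = sp.symbols('xi1 xi2 xi3', real=True)
Pi1 = sp.Matrix([[0, 1], [1, 0]])
Pi2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
Pi3 = sp.Matrix([[1, 0], [0, -1]])

C = Pi1*x1 + Pi2*x2 + Pi3*x3
print(sp.simplify(C*C - (x1**2 + x2**2 + x3**2)*sp.eye(2)))   # expected: the zero matrix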
The characteristic polynomial of course remains unchanged by this transformation and is given by det
@0 C i C.b @/ b C.@/ @0 i
! D det
@/ @0 i C i C.b b i i C.@/ @0
!
@2 C 1/2 : D ..@0 /2 b We obtain .@0 /2 b @2 C 1 D @20 b @ 2 C 1 2 @0 C 2 as minimal polynomial of the two variants of Dirac operators Q0 .@0 ; b @/ and Q1 .@0 ; b @/ as the calculation 0 i C i C.b @/ b i i C.@/ 0
!2 D
1.22/ .b @2 1/ 0 0 1.22/ .b @2 1/
!
@2 1/; D 1.44/ .b confirms. Correspondingly, we have, for i D 0; 1, Qi .@0 ; b @/1 D .2 @0 Qi .@0 ; b @// .@20 b @2 C 1/1 D Qi .@0 ; b @/ .@20 b @2 C 1/1 : Therefore, the (forward) fundamental solution Fi;C associated with Qi .@0 ; b @/ is given by Fi;C WD Qi .@0 ; b @/ gKG;C where gKG;C is the (forward) fundamental solution of the (unit mass) Klein–Gordon operator @2 b @2 C 1: 0
By comparison we see that this is formally the partial differential expression of the modified telegraph equation with e D i. We note j..i 0 C %/2 C 2 /1 j j..i 0 C %/ C i jj/1 ..i 0 C %/ i jj/1 j %2 and therefore j.@20 b @2 /1 jH
b
b
˛ .@0;% C1;@Ce/!H˛ .@0;% C1;@Ce/
%2
for all ˛ 2 ZnC1 . Thus, we obtain, for % > 1, the Neumann series representation of .@20 b @2 C 1/1 .@20 b @2 C 1/1 D .@20 b @2 /1 .1 C .@20 b @2 /1 /1 D .@20 b @2 /1
1 X
.b @2 @20 /k
kD0
D
1 X
.1/k .@20 b @2 /k1 ;
kD0
in every H˛ .@0;% C 1; b @ C e/, ˛ 2 ZnC1 . Applying this to the Dirac-ı-distribution yields 1 X k gKG;C D g1;C .1/k g1;C ; kD2 k k where g1;C denotes the k-fold convolution g1;C D g1;C g1;C of the fundamental solution g1;C of the wave equation with itself. P k k The term 1 kD2 .1/ g1;C evaluates to the function p J1 . t 2 j j2 / 1 .t j j/ p ; .t; x/ 7! 4 R>0 t 2 j j2
where J1 is the Bessel function with index 1, see e.g. [30], therefore we have p 1 1 J1 . t 2 j j2 / gKG;C .t / D .t / ı t ŒS 2 .t j j/ p 4 t R>0 4 R>0 t 2 j j2 p 1 1 J1 . t 2 j j2 / 2 2 D .t / ı.t j j / .t j j/ p ; 2 R>0 4 R>0 t 2 j j2 for t 2 R n ¹0º. The fundamental solution gKG;C can be calculated as a variant of the fundamental solution of the wave equation. The behavior at t D 0 is again known from our general results. 40 Remark 3.1.69. As a curiosity unit i as we observe that – interpreting @3 the@complex qi E3 i @2 0 1 the real 2 2-matrix C1 0 – we may re-write the term @1 Ci @2 1@ E2 i E1 3 40
It should be emphasized that complex numbers can be (and are perhaps best) defined as real 2 2matrices of the form x y ; x; y 2 R: y x The usual matrix multiplication coincides with complex multiplication. It may be interesting to note that the block matrix structure x y ; x; y 2 C; y x generalizes the complex numbers to quaternions.
as
@3 q C .@1 E2 @2 E1 / @1 q C .@2 E3 @3 E2 /
As a consequence we see
Ci
@3 E3 @2 E2 @1 E1 : @2 q C .@3 E1 @1 E3 /
1 q iE3 C @0 i C i C.b @/ B B E2 iE1 C @ b p iH3 A i i C.@/ @0 H2 iH1 1 0 q iE3 p iH3 p iH3 b B @0 E2 iE1 C i C.@/ H2 iH1 i H2 iH1 C C DB @ A p iH3 q iE3 q iE3 b @0 i C.@/ i H2 iH1 E2 iE1 E2 iE1 0 !
0
1 @0 q @3 H3 @2 H2 @1 H1 C H3 @3 p C .@1 H2 @2 H1 / @0 E3 p @ @1 p C .@2 H3 @3 H2 / @0 E1 H2 @2 p C .@3 H1 @1 H3 / @0 E2 C H1 A: @3 q C .@1 E2 @2 E1 / C @0 H3 C q @0 p @3 E3 @2 E2 @1 E1 E3 i C @1 q C .@2 E3 @3 E2 / C @0 H1 C E2 @2 q C .@3 E1 @1 E3 / C @0 H2 E1 i
D
Assuming that the functions p; q and the vector fields E D .E1 ; E2 ; E3 / and H D .H1 ; H2 ; H3 / – as the applications require – are real-valued, we get 1 0 @0 p div E E3 B @0 E1 @1 p .@2 H3 @3 H2 / C H2 C C B B @0 E2 @2 p .@3 H1 @1 H3 / H1 C C B B @0 E3 @3 p .@1 H2 @2 H1 / C p C C B B @0 H1 C @1 q C .@2 E3 @3 E2 / C E2 C C B B @0 H2 C @2 q C .@3 E1 @1 E3 / E1 C C B @ @0 H3 C @3 q C .@1 E2 @2 E1 / C q A @0 q C div H H3 10 1 0 p @0 div 0 B grad @0 rot 0 C B E C CB C DB @ 0 rot @0 grad A @ H A 0 0 div @0 q 0 .0 1 0/ 1 0 .0 0 0/ 1 0.0/1 0 B 0 0 10 0 0 0 B B @ 1 A @ 0 0 0 A @ 1 0 0 A B B 0 0 00 000 CB B001 0 0 1 01 00 0 01 B B @ 0 A @ 1 0 0 A @ 0 0 0 A B @ 0 000 0 00 .0/ .0 0 0/ .0 0 1/ 0
1 0.0/1 0 C C0 @0AC p C CB E 0 B 0 1C @ 0 C C H C @0AC q 1 A .0/
showing a correspondence to the extended Maxwell system.
1 C C A
Indeed, transformation with the block matrix 0 1 0 010 B 1 0 0 0 C B C @ 0 1 0 0A 0 001 yields 0
0 B 1 B @ 0 0
0 0 1 0
1 0 0 0
10 0 p BE 0C CB 0A@H 1 q
1
0
1 H C B p C CDB C A @ E A q
and so 10 1> 10 0 0 010 0 @0 div 0 CB C B 0C C B grad @0 rot 0 C B 1 0 0 0 C A @ A @ 0 rot @0 grad 0 1 0 0A 0 0 0 div @0 0 001 1 10 10 1 0 0 010 @0 div 0 0 1 0 0 B 1 0 0 0 C B grad @0 rot 0 C B 0 0 1 0 C CB CB C DB @ 0 1 0 0A@ 0 rot @0 grad A @ 1 0 0 0 A 0 001 0 0 div @0 0 0 01 1 0 10 0 010 0 @0 div 0 B 1 0 0 0 C B rot grad @0 0 C CB C DB @ 0 1 0 0 A @ @0 0 rot grad A 0 001 0 div 0 @0 0 1 0 rot grad @0 B 0 @ 0 C 0 div C DB @ rot grad @0 0 A div 0 0 @0
0
0 B 1 B @ 0 0 0
0 0 1 0
1 0 0 0
as well as 0
0
0 B 1 B @ 0 0
0 0 1 0
1 0 0 0
0.0/1 B 0 1B @ A 0 B B 1 B 0 0C CB0 1 0AB B 0 1 B B@0A @ 0 .0/
.0 0/ 0 1 1 000 @0 0 0A 0 000 1 0 10 @ 1 0 0 A 0 00 .0 0 0/
0 .0 0 0/ 1 0 10 @ 1 0 0 A 0 0 0 01 000 @0 0 0A 000 .0 0 1/
1 0.0/1 0 C C0 @0AC 0 C C B 1 0 B 0 1C @ 0 C C 0 @0AC C 0 1 A .0/
0 0 1 0
1 0 0 0
1> 0 0C C 0A 1
0
0
0 B B 1 DB @ 0 0
0 0 1 0
1 0 0 0
.0/ B001 1B 0 B B@1A CB 0CB 0 CB0 1 0AB 0 B @ A 1 B B 0 @ 0 .0/
.0 1 0/ 0 1 000 @0 0 0A 000 0 1 0 10 @ 1 0 0 A 0 00 .0 0 0/
0
0
0 B B 1 DB @ 0 0
0 0 1 0
1 0 0 0
.0 0 0/ B0 0 1 01 1B 0 B B @ 1 0 0 A CB 0CB 0 00 1 CB 0 0AB 0 0 0 B @ A 1 B B 000 @ 000
1 000 B @0 0 0A B B 000 B B .0 0 0/ B 1 D B0 B 0 10 B B @ 1 0 0 A B @ 0 00 .0
0 1/
.0 1 0/ 0 1 000 @0 0 0A 000 0 1 0 10 @ 1 0 0 A
0 .0/
0 00 .0 0 0/
0 10 1 0 0 10 @ 0 A @ 1 0 0 A 0 0 00 .0/ .0 0 1/ 0 1 0 1 0 000 @0A @0 0 0A 1 .0/
1 .0/ 0 1C 0 C0 @0AC C 0 B 0 C CB0 0 1CB 0 C@1 C @0AC 0 C 1 A
.0/ 1 0 @ 1 A 0 0 1 0 @0A 0
.0 0 1/ 0 0
.0 0 0/ 1 0 10 @ 1 0 0 A 0 00 0 1 000 @0 0 0A 000 .0 0 1/ 0
000 .0 0 0/
1 0 0 0
0 1 0 0
1 0 C 0C C 0A 1
.0/ 1 .0/ 0 1C 0 C @0AC C C 0 C 0 1C 0 C C @0AC C 1 A .0/
0 11 0 @0AC C 1 C C C .0/ C 0 1 C: 0 C C @0AC C 0 A .0/
Thus we obtain again the block structure of reversibly evolutionary systems of @0typical W mathematical physics W @0 , where
rot div
W WD
0
! grad 0
00
1 0 11 0 10 0 B@ C A @ 1 0 0 0AC B CB C 0 00 1 A @ .0 0 1/ .0/
1 0 1 0 B rot C @ 1 0 0 A B DB 0 0 0 @ div C.0 0 1/ 0
0 11 0 @ grad C 0 A C C C: 1 A 0
3.2 Linear Partial Differential Equations with Constant Coefficients in $H_{-\infty}(|D_{\nu}|)$
3.2.1 Extension of the Solution Theory to H1 .jD j/. The solution theory developed and exemplified in the previous sections clearly is wholly satisfactory for evolutionary problems as well as for non-evolutionary but invertible differential expressions. It is a remarkable fact, however, that for every partial differential expression with constant coefficients a fundamental solution can be constructed in H1 .jDj/ D S , the space of tempered distributions. This result is originally due to Hörmander, [20]. It has later found a beautiful expression as an application of algebraic methods in the study of rings of differential operators, [12]. Since exp.m/ P .@/ D P .@C/ exp.m/ and exp.m/ W H1 .jD j/ ! H1 .jDj/ are continuous, we have from this result the existence of a fundamental solution in H1 .jD j/ for every 2 RnC1 . In the evolutionary case it may happen, however, that the causal fundamental solution, which is of foremost interest, is not, due to exponential growth, in H1 .jDj/ as examples from ordinary differential equations readily show. Thus, the fundamental solution in H1 .jDj/ may not be the one we are looking for. Moreover, in the non-invertible case we shall see non-uniqueness results. In many such cases of interest, however, additional constraints can be found to enforce uniqueness. Despite these added difficulties the prospect of discussing fundamental solutions even in non-invertible cases is a strong incentive for an extension of the solution theory. These remarks may suffice to motivate the expansion of our solution theory to the Sobolev lattice .H˛ .jD j//˛2ZnC1 , 2 RnC1 , which in contrast to the previously considered exponentially weighted Sobolev lattice will be referred to as the tempered, exponentially weighted Sobolev lattice. Rather than re-doing the whole approach, however, we shall perform this expansion more simply by continuous extension of the solution theory already established. We first recall that the lattice .Hˇ .jD j//ˇ 2ZnC1 satisfies H˛ .jD j/ ,! H˛ .@ C e/ for all ˛ 2 N nC1 . By duality this implies the continuity of the dense embedding H˛ .@ C e/ ,! H˛ .jD j/ for all ˛ 2 N nC1 . This shows that in particular H1 .@ C e/ ,! H1 .jD j/ densely and continuously. We shall use the notation HnC1;sC1;tC1;1 .jD j/, s; t 2 N, 2 RnC1 , for H1 .jD j/ in the analogous sense to HnC1;sC1;tC1;1 .@ C e/ for our new solution spaces, if the added information is needed for sake of clarity.
As a preparatory step to simplify further calculations, we need the following lemmata. Lemma 3.2.1. For all ˇ 2 N nC1 , 2 H1 .jD j/ jjD jˇ j;0 D jjD jˇ 2bˇ=2c .D D /bˇ=2c j;0 D jDˇ 2bˇ=2c .D D /bˇ=2c j;0 : Moreover, C
X
(3.2.90)
jm˛ @ j2;0
˛Cˇ;˛;2N nC1
jDˇ 2bˇ=2c .D D /bˇ=2c j2;0 X C 1 jm˛ @ j2;0
(3.2.91)
˛Cˇ;˛;2N nC1
and C
X
j@ m˛ j2;0
˛Cˇ;˛;2N nC1
jDˇ 2bˇ=2c .D D /bˇ=2c j2;0 X C 1 j@ m˛ j2;0
(3.2.92)
˛Cˇ;˛;2N nC1
for some fixed C 2 R>0 and for all ˇ 2 N nC1 , 2 CV 1 .RnC1 /. Proof. Equality (3.2.90) follows from abstract functional analytical arguments. It remains to consider (3.2.91) and (3.2.92). Again we observe that it suffices to deal with the case D 0. We first recall that formally D D p1 .m @/; D D p1 .m C @/ 2 2 so that 2jDk j20;0 D j@k j20;0 C jmk j20;0 2 Reh@k j mk i0;0 D j@k j20;0 C jmk j20;0 C j j20;0 and 2jDk j20;0 D j@k j20;0 C jmk j20;0 j j20;0 : Combining these equalities we get 4jDk Dk j20;0 D 2j@k Dk j20;0 C 2jmk Dk j20;0 2jDk j20;0 D j @2k C mk @k C j20;0 C j mk @k C m2k j20;0 (3.2.93) j@k j20;0 jmk j20;0 j j20;0 :
The first term on the right-hand side can be further evaluated to yield j@2k @k mk j20;0 D j@2k mk @k j20;0 D j@2k j20;0 C jmk @k j20;0 2 Reh.@2k 1/ j mk @k i0;0 D j@2k j20;0 C j j20;0 C jmk @k j20;0 2 Reh@2k j i0;0 2 Reh@2k j mk @k i0;0 C 2 Reh j mk @k i0;0 D j@2k j20;0 C jmk @k j20;0 C 3j@k j20;0 : We get similarly jmk @k m2k j20;0 D jmk @k j20;0 C jm2k j20;0 C 3jmk j20;0 and so for the original term in (3.2.93) 4jDk Dk j20;0 D j@2k j20;0 C 2jmk @k j20;0 C 2j@k j20;0 C 2jmk j20;0 C jm2k j20;0 j j20;0 : The equivalence of norms (3.2.91) now follows by induction. We have 2Dk Dk D .mk C @k /.mk @k / D @2k C m2k C 1 and .Dk Dk /sC1 D .Dk Dk /s Dk Dk D 12 .Dk Dk /s .@2k C m2k C 1/ and so also 1 j.Dk Dk /sC1 j20;0 D j.Dk Dk /s .@2k C m2k C 1/ j20;0 2 X j C jmik @k .@2k C m2k C 1/ j20;0 iCj 2s; i;j 2N
X
C
j C2
jmik @k
j
C miC2 @k j20;0 k
iCj 2s; i;j 2N
C 0 jjDk j2sC1 j20;0 ; where we have estimated the lower order terms by the last term according to the induction assumption. Evaluating j C2
jmtk @k
j
C mtC2 @k j20;0 k
j C2
D jmtk @k
j
j20;0 C jmtC2 @k j20;0 k j
j C2
2 RehmtC2 @k j mtk @k k j C2
D jmtk @k
i0;0
j
j20;0 C jmtC2 @k j20;0 k j C1
C 2jmtC1 @k k
j
j C1
j20;0 C 4 .t C 1/ RehmtC1 @k j mtk @k k
i0;0 ;
we obtain X
j.Dk Dk /sC1 j20;0 C
j
jmtk @k j20;0
tCj 2sC2; t;j 2N
C 00 jjDk j2sC1 j20;0 : Together with j.Dk Dk /sC1 j0;0 D jjDk j2sC2 j0;0 D jDk jDk j2sC1 j0;0 jjDk j2sC1 j0;0 ; we obtain the desired lower estimate X
jjDk j2sC2 j20;0 C
j
jmik @k j20;0
iCj 2sC2; i;j 2N
for some generic constant C 2 R>0 provided we have X
jjDk j2sC1 j20;0 C
j
jmtk @k j20;0
tCj 2sC1; t;j 2N
for some generic constant C 2 R>0 . The odd case, however, follows in a similar fashion if we assume it holds for the even case. Indeed, jjDk j2sC1 j20;0 D jDk jDk j2s j20;0 jjDk j2s Dk j20;0 C 0 jjDk j2s j20;0 X j C1 j C jmtk @k mtC1 @k j20;0 k tCj 2s; t;j 2N
C 0 jjDk j2s j20;0 X j C jmtk @k j20;0 C 0 jjDk j2s j20;0 ; tCj 2sC1; t;j 2N
for generic constants C; C 0 2 R>0 . Together with jjDk j2sC1 j0;0 D jDk jDk j2s j0;0 jjDk j2s j0;0 the desired estimate follows also in this case. Thus, we have by induction jjDk js j0;0 C
X tCj s; t;j 2N
j
jmtk @k j20;0
for all 2 CV 1 .RnC1 / and s 2 N, where the constant C 2 R>0 only depends on s. The corresponding upper estimate follows by a much simpler induction argument using the triangle inequality, the details of which may be left as an exercise. The case 2 RnC1 now follows by unitary equivalence. The general result (3.2.91) is now clear since the occurring operations commute for different variables. Since mk formally commutes with @k up to terms of lower order the remaining estimate (3.2.92) now follows. In the proof of the previous lemma we have made use of a rough estimate of ‘lower order terms’ due to differentiating products. The interaction of differentiation and multiplication can of course be made precise in the form of a ‘Leibniz rule’. Lemma 3.2.2. A Leibniz rule holds for D and D in the sense that D˛ .
X
/D
2jˇ j1 =2
ˇ CD˛; ˇ;2N nC1
and D˛ .
X
/D
ˇ CD˛; ˇ;2N nC1
˛ ˇ
2jˇ j1 =2
.@/ˇ D
˛ ˇ
@ˇ D
2 for all 2 C1 .RnC1 / such that Dˇ isQbounded ˛i for all ˇ ˛ and all ˛ n nC1 H1 .D /, ˛; ˇ 2 N . Here ˇ D iD0 ˇi for ˛ D .˛0 ; : : : ; ˛n /, ˇ D .ˇ0 ; : : : ; ˇn / 2 N nC1 . Proof. Let Then Dk; .
2 CV 1 .RnC1 /. 1 / D p .@k C mk C / . / 2 1 1 1 D p .@k / p @k C p .mk C / 2 2 2 1 D p .@k / C Dk; : 2
(3.2.94)
The Leibniz rule follows in this case by the usual induction argument. The case with D instead of D follows in the same way. It is clear that the role of and can be formally interchanged. Based on this observation, we shall now derive somewhat of a converse of the Leibniz rule. We
claim
D˛
X
D
jˇ j1 =2
2
ˇ CD˛; ˇ;2N nC1
˛ ˇ
D ..@ˇ / /:
(3.2.95)
This follows also by induction. Indeed,
D˛Cek
D
D˛
Dk;
X
D
jˇ j1 =2
2
ˇ CD˛; ˇ;2N nC1
˛ ˇ
D .@ˇ Dk;
/:
According to (3.2.94) we have @ˇ Dk;
1 / C p @ˇ Cek 2
D Dk; .@ˇ
and so
D˛Cek
X
D
jˇ j1 =2
2
ˇ CD˛; ˇ;2N nC1
X
D
jˇ j1 =2
2
ˇ CD˛; ˇ;2N nC1
X
C
ˇ CD˛; ˇ;2N nC1
ˇ CD˛Cek ; k
ˇ CD˛; ˇk
D .Dk; .@ˇ
/ C @ˇ Cek
/
DCek .@ˇ
/
2jˇ Cek j1 =2 D .@ˇ Cek 2jˇ j1 =2
1;ˇ;2N nC1
2jˇ j1 =2
1; ˇ;2N nC1
˛ ˇ
/
D .@ˇ
˛ ˇ ek
/
D .@ˇ
/:
m mC1 (as a simple consequence of m C k1 D k ˛ ˛ ˛Cekk ˛Ce k D 0 D ˛ D ˛Cek , we get the desired for m; k 2 N, k 1) and 0 equality. The analogous result holds for D replaced by D , i.e. we also have Since
ˇ
C
D˛
˛ ˇ ek
D
˛ ˇ
X
C
˛ ˇ
X
D
˛
˛ ˇ
D
˛Cek ˇ
X ˇ CD˛; ˇ;2N nC1
2jˇ j1 =2
˛ ˇ
D ..@/ˇ
/:
(3.2.96)
Now let f 2 H1 .D /. Then we have using (3.2.96) hD˛ . f /j i;0 D h f jD˛ i;0 D hf j D˛ i;0 ˇ X ˇ D f ˇˇ ˇ ˇ D f ˇˇ D
2jˇ j1 =2
ˇ CD˛Iˇ;2N nC1
X
2jˇ j1 =2
ˇ CD˛Iˇ;2N nC1
X
2jˇ j1 =2
ˇ CD˛Iˇ;2N nC1
˛ ˇ
˛ ˇ ˛ ˇ
D ..@/ˇ / ;0
D ...@/ˇ / / ;0
h..@/ˇ /D f j i;0 :
Since 2 CV 1 .RnC1 / was arbitrary, the stated Leibniz rule also follows in this case. With these results we can now extend our earlier solution theory which was summarized in Theorem 3.1.10. Let P .@/ be invertible with respect to 2 RnC1 . Then x 7! .i x C e/˛ P .i x C /1 is bounded for some ˛ 2 N nC1 and using the product rule for matrix valued functions we obtain @k .i m C e/˛ P .i m C /1 D i .˛/ .i m C e/˛ek P .i m C /1 C i.im C e/˛ P .im C /1 .@k P /.im C /P .i m C /1 C .im C e/˛ P .im C /1 @k D i .˛/ .i m C e/ek ..i m C e/˛ P .i m C /1 / C i.i m C e/˛ ..i m C e/˛ P .i m C /1 /.@k P /.i m C / ..i m C e/˛ P .i m C /1 / C .i m C e/˛ P .i m C /1 @k : This yields by a lengthy but in principle elementary induction, making heavy use of Lemma 3.2.1, that _ ^ .i mCe/˛ P .i mC/1 jH .jDj/ W H .jDj/ ! H .jDj/ is continuous: 2N nC1 2N nC1
Thus, with ˇ 2 N nC1 given and noting that P enjoys the same continuity property as P , we find a possibly rather large 2 N nC1 such that jhP .i m C /1 j i0;0 j D jh jP .i m C /1 i0;0 j j j0;ˇ jP .i m C /1 j0;ˇ D j j0;ˇ j.i m C e/˛ .i m C e/˛ P .i m C /1 j0;ˇ C1 j j0;ˇ j.i m C e/˛ P .i m C /1 j0;˛Cˇ D C1 j j0;ˇ jjDj˛Cˇ .i m C e/˛ P .i m C /1 j0;0 C2 j j0;ˇ jjDj
j0;0
D C2 j j0;ˇ j j0; for all ; 2 CV 1 .RnC1 /. This, together with the lattice structure of the Sobolev lattice .H˛ .jDj//˛2ZnC1 and the density of CV 1 .RnC1 / in H1 .jDj/, shows that P .i m C /1 W H1 .jDj/ ! H1 .jDj/
(3.2.97)
is continuous. Thus, we find that the solution theory carries over to the larger space H1 .jDj/ simply by continuous extension. To confirm that P .i m C / indeed has an inverse we need to see that P .i m C / u D 0 has only the trivial solution. This follows from 0 D hP .i m C / uj i0;0 D hujP .i m C / i0;0 for all
2 CV 1 .RnC1 /, and by noting that P .i m C /j V
C1 .RnC1 /
W CV 1 .RnC1 / !
CV 1 .RnC1 / is a bijection (due to the assumed invertibility). Indeed, this yields huj i0;0 D 0 for all 2 CV 1 .RnC1 /, i.e. u D 0. Theorem 3.2.3. Let P .@/ be a ..s C 1/ .s C 1//-partial differential expression in RnC1 . If P .@/ is invertible with respect to 2 RnC1 then P .@/1 WD P .@ C /1 W HnC1;sC1;tC1;1 .jD j/ ! HnC1;sC1;tC1;1 .jD j/ exists as a continuous operator for every t 2 N, via continuous extension of P .@ C /1 W HnC1;sC1;tC1;1 .@ C e/ HnC1;sC1;tC1;1 .jD j/ ! HnC1;sC1;tC1;1 .jD j/:
Proof. It should be noted that by applying the inverse Fourier–Laplace transform we get from (3.2.97) the existence of the continuous operator $S:=\mathcal{L}_\nu^{*}\,P(\mathrm{i}m+\nu)^{-1}\,\mathcal{L}_\nu$. The unitary equivalence of $S$ with $P(\mathrm{i}m+\nu)^{-1}$ yields that $S$ is indeed the inverse of $P(\partial+\nu)$: $S=P(\partial+\nu)^{-1}$.

Remark 3.2.4. In contrast to the theory in $H_{-\infty}(\partial+e)$ we do not get a fixed loss of regularity $\beta-\alpha$ for $P_\nu(\partial)^{-1}\big|_{H_\alpha(|D_\nu|)}:H_\alpha(|D_\nu|)\to H_\beta(|D_\nu|)$, where $\beta\in\mathbb{Z}^{n+1}$, $\beta\leq\alpha$, is assumed to be maximal. In other words, we do have continuity of $P_\nu(\partial)^{-1}:H_{-\infty}(|D_\nu|)\to H_{-\infty}(|D_\nu|)$, but not of finite order. This is the price we have to pay for enlarging the class of admissible right-hand sides.

It is also important that the concept of causality of the solution operator carries over to $H_{-\infty}(|D_\nu|)$. For this we identify elements $\Phi$ of $H_{-\infty}(|D_\nu|)$ as linear functionals on $\mathring{C}_\infty(\mathbb{R}^{n+1})$ in the same way as in (3.1.42) by considering $\varphi\mapsto\langle\Phi\,|\,\exp(-2\nu m)\,\varphi\rangle_{\nu,0}$ on $\mathring{C}_\infty(\mathbb{R}^{n+1})$. Indeed, let $P(\partial)$ be evolutionary in direction $\nu^{(0)}$ and $P_\nu(\partial)^{-1}f=0$ in an open set $G_{a_0,\nu^{(0)}}:=(a_0+\mathbb{R}_{<0})\,\nu^{(0)}+\{\nu^{(0)}\}^{\perp}$, whenever $f=0$ in $G_{a_0,\nu^{(0)}}$. Then, if additionally $f\in H_{-\infty}(\partial+e)$, $\nu=\varrho\,\nu^{(0)}$, with $\varrho\in\mathbb{R}_{>0}$ sufficiently large, we have by definition $\langle P_\nu(\partial)^{-1}f\,|\,\varphi\rangle_{\nu,0}=0$ for all $\varphi\in\mathring{C}_\infty(\mathbb{R}^{n+1})$ with $\operatorname{supp}\varphi\subseteq G_{a_0,\nu^{(0)}}$. This property carries over from $f\in H_{-\infty}(\partial+e)$ to $f\in H_{-\infty}(|D_\nu|)$ by approximation (using a standard cut-off technique). Thus, we obtain the following causality result.

Theorem 3.2.5. Let $P(\partial)$ be an $((s+1)\times(s+1))$-partial differential expression in $\mathbb{R}^{n+1}$ evolutionary in direction $\nu^{(0)}$. Then, with $\nu=\varrho\,\nu^{(0)}$ and $\varrho\in\mathbb{R}_{>0}$ sufficiently large, $P(\partial+\nu)^{-1}:H_{-\infty}(|D_\nu|)\to H_{-\infty}(|D_\nu|)$ is causal in direction $\nu^{(0)}$.

Proof. We know that $P(\partial+\nu)^{-1}\big|_{H_{-\infty}(\partial+e)}:H_{-\infty}(\partial+e)\to H_{-\infty}(\partial+e)$ is causal in direction $\nu^{(0)}$. Therefore, if $g\in H_\beta(\partial+e)$, with $\beta$ a matrix of multi-indices in $(\mathbb{N}^{n+1})^{(s+1)\times1}=(\mathbb{N}^{n+1})^{s+1}$, is such that $a=\inf\operatorname{supp}_{\nu^{(0)}}g\in\mathbb{R}$, then we have that $\inf\operatorname{supp}_{\nu^{(0)}}P(\partial+\nu)^{-1}g\geq a$, or
\[
\langle P(\partial+\nu)^{-1}g\,|\,\varphi\rangle_{\nu,0}=0
\]
for all $\varphi\in\mathring{C}_\infty(\mathbb{R}^{n+1})$ with $\sup\operatorname{supp}_{\nu^{(0)}}\varphi<a$. Now, let $\omega\in C_\infty(\mathbb{R})$ be such that $\operatorname{supp}\omega$ is bounded above and $\operatorname{supp}(1-\omega)$ is bounded below. Then $\omega_R\in C_\infty(\mathbb{R}^{n+1})$ given by
\[
\omega_R(x_0,\dots,x_n):=\omega(x_0-R)\cdots\omega(x_n-R)
\]
for $x=(x_0,\dots,x_n)\in\mathbb{R}^{n+1}$, $R\in\mathbb{R}_{>0}$, satisfies $\omega_R(x)\to1$ as $R\to\infty$ for all $x\in\mathbb{R}^{n+1}$. Moreover, $\partial^{\alpha}\omega_R(x)\to0$ as $R\to\infty$ for all $x\in\mathbb{R}^{n+1}$, $\alpha\in\mathbb{N}^{n+1}\setminus\{0\}$. As a consequence of the uniform boundedness of these terms, we find by an application of the Leibniz rule
\[
\omega_R(m)\,g\in H_\beta(\partial+e)
\]
and
\[
\omega_R(m)\,g\to g\quad\text{in }H_\beta(|D_\nu|)\quad\text{as }R\to\infty
\]
for all $g\in H_\beta(|D_\nu|)$. Thus, we get in particular, with $a=\inf\operatorname{supp}_{\nu^{(0)}}g\in\mathbb{R}$,
\[
\langle P(\partial+\nu)^{-1}\,\omega_R(m)\,g\,|\,\varphi\rangle_{\nu,0}=0
\]
for all $R\in\mathbb{R}$, $\varphi\in\mathring{C}_\infty(\mathbb{R}^{n+1})$ with $\sup\operatorname{supp}_{\nu^{(0)}}\varphi<a$. Letting $R\to\infty$ and using the continuity of $P(\partial+\nu)^{-1}$, we get
\[
\langle P(\partial+\nu)^{-1}g\,|\,\varphi\rangle_{\nu,0}=0
\]
for all $\varphi\in\mathring{C}_\infty(\mathbb{R}^{n+1})$ with $\sup\operatorname{supp}_{\nu^{(0)}}\varphi<a$. This proves that causality in direction $\nu^{(0)}$ is preserved.

Thus we have extended our previous solution theory to the new Sobolev lattice. The added benefit is that even if $P(\partial)$ is not invertible, there is a fundamental solution in $H_{-\infty}(|D_\nu|)$, which can always be constructed. In our specific cases of interest the construction is indeed fairly elementary. It is interesting to note that the concepts of the spectrum and the resolvent set of an operator in $H_{-\infty}(C)$, where $C=\partial+e$ or $C=|D_\nu|$, $\nu\in\mathbb{R}^{n+1}$, are of particular interest, and they carry over from the Hilbert space case in the obvious way.

Definition 3.2.6. Let $(H_\alpha(C))_{\alpha\in\mathbb{Z}^{n+1}}$ be a Sobolev lattice associated with an operator family $C$ and let $W:H_{-\infty}(C)\to H_{-\infty}(C)$ be a continuous linear operator in $H_{-\infty}(C)$. Then the (Sobolev lattice) resolvent set of $W$ is given by
\[
\varrho_{-\infty}(W):=\{\lambda\in\mathbb{C}\,|\,(\lambda-W)\text{ injective}\ \wedge\ (\lambda-W)[H_{-\infty}(C)]=H_{-\infty}(C)\ \wedge\ (\lambda-W)^{-1}\text{ continuous}\}
\]
and correspondingly the spectrum of $W$ in $H_{-\infty}(C)$ is
\[
\sigma_{-\infty}(W):=\mathbb{C}\setminus\varrho_{-\infty}(W).
\]
The spectral parts – (Sobolev lattice) point spectrum, (Sobolev lattice) residual spectrum and (Sobolev lattice) continuous spectrum, respectively – are defined (as in the Hilbert space case): P 1 .W / WD ¹ 2 C j.W / not injectiveº; R1 .W / WD ¹ 2 Cj.W / injective ^ .W /ŒH1 .C / ¤ H1 .C /º; C 1 .W / WD ¹ 2 Cj.W / injective ^ .W /H1 .C / D H1 .C / ^ .W /1 not continuousº: Returning to our current special situation, we distinguish the exponentially weighted Sobolev lattice from the tempered, exponentially weighted Sobolev lattice in the following by writing a lower index ‘ ’ to denote the spectral parts of the case C D jD j, 2 RnC1 . Example 3.21. Consider the special situation of the differential expression @0 p b @ in R1Cn , n 2 N>0 , p 2 Rn ; jpj D 1, in H1 .@0; C 1; b @ C e/, 2 R. Since @ D @0; p b @ . / @0 p b is invertible if Re ¤ , we see that @/: C n .i ŒR C / %1 .@0 p b Let now D 1i . / 2 R. Application of the Fourier–Laplace transform leads to b i W H1 .i m0 C 1; i m b C e/ ! H1 .i m0 C 1; i m b C e/: i m0 i p m Since the spectral information is preserved, i.e. @/ D P 1 .i m0 i p m b C / D P 1 .i m0 i p m b / C ; P 1 .@0 p b R1 .@0 p D/ D R1 .i m0 i p m b C / D R1 .i m0 i p m b/ C ; C 1 .@0 p D/ D C 1 .i m0 i p m b C / D C 1 .i m0 i p m b/ C ; b. From the above we already have it suffices to consider i m0 i p m b/ R 1 .m0 p m b C e/ can We would like to show equality. Since – as we know – H1 .i m0 C 1; i m 1Cn / such that be characterized as those elements of Lloc .R 2 b C e/˛ .i m0 C 1/˛0 .i m
2 L2 .R1Cn /
for some ˛ D .˛0 ; b ˛ / 2 Z1Cn and since S WD ¹.x0 ; y/j x0 2 R; y 2 Rn ; x0 p y D º b / is injective. Moreover, since is a Lebesgue null set, we see that .m0 p m CV 1 .R1Cn n S / is dense in L2 .R1Cn / and b C e/˛ .m0 p m b /j V .i m0 C 1/˛0 .i m
C1 .R1Cn nS /
is a bijection in CV 1 .R1Cn nS /, we have that .m0 pb m /ŒH1 .im0 C1; ib m Ce/ is dense in H1 .im0 C 1; i m b C e/. Let u" WD .m0 p m b i "/1 " , with " WD ".nC1/=2 .. . ; 0//="/, " 2 R>0 , and 2 CV 1 .RnC1 / such that supp BRnC1 .0; 1/ and Z j .y/j2 dy D 1: RnC1
Then we have b / u" D .m0 p m b / .m0 p m b i "/1 " .m0 p m D " C i " .m0 p m b i "/1 " : b C e/, ˇ 2 Z1Cn , but The right-hand side is bounded in every Hˇ .i m0 C 1; i m Z Z 2 .x02 C 1/˛0 .b x 2 C e/˛ ju" .x0 ;b x /j2 dx0 dx juj˛ D Z D
RnC1
RnC1
inf¹.x02
!1
Z
R
R
.x02 C 1/˛0 .b x 2 C e/˛ j " .x0 ;b x /j2 dx0 dx .x0 p b x /2 C "2
C 1/˛0 .b x 2 C e/˛ j .x0 ;b x / 2 BRnC1 .. ; 0/; "/º .3 C jpj2 /"2
as " ! 0:
b /1 is unbounded and so Consequently, we find that .m0 p m 1 .m0 p m b/ D C 1 .m0 p m b/ D R: Thus we have 1 .@0; C p b @/ D C 1 .@0; C p b @/ D i ŒR C : b @ acting in H1 .jD0; j; jDj/. Alternatively, we could have considered @0 p b b Then we would have found C n .i ŒR C / %1; .@0; C p @/. For D
Section 3.2 Partial Differential Equations in H1 .jD j/ 1 i .
217
/ 2 R we see that for any f 2 CV 1 .R/ we have that 2 C1 .R1Cn / given
by
.x0 ; y/ WD exp. x0 / exp.i x0 / f .x0 C p y/ b for x0 2 R and y 2 Rn can be considered as an element of H1 .jD0; j; jDj/. Indeed, jh j i;0 j2 D jh j exp.2m0 / i0;0 j2 ˇZ ˇ2 ˇ ˇ D ˇˇ exp.i x0 / f .x0 C p b x / exp.x0 / .x0 ;b x / d.x0 ;b x /ˇˇ R1Cn
Z
R1Cn
2 sup¹jf .t /jjt 2 Rº exp.x0 / j .x0 ;b x /j d.x0 ;b x/
sup¹jf .t /j2 jt 2 Rº .nC1/ Z .x02 C 1/.b x 2 C e/e exp.2x0 /j .x0 ;b x /j2 d.x0 ;b x/ R1Cn
b Similarly we for all 2 CV 1 .R1Cn /. Thus, we deduce 2 H.1;e/ .jD0; j; jDj/. b Moreover, can show that classical derivatives of are also in H.1;e/ .jD0; j; jDj/. we find by construction of that .@0 p b @ .i C // D 0 since point-wise we have @0 .x0 ; y/ D .i C / exp. x0 / exp.i x0 / f .x0 C p y/ C exp. x0 / exp.i x0 / .@f /.x0 C p y/ and
b @ .x0 ; y/ D exp. x0 / exp.i x0 / .@f /.x0 C p y/ p
(where @f denotes the 1-dimensional derivative of f ) for all x0 2 R and y 2 Rn (recall jpj D 1). Thus, we find in this case – in contrast to the above – 1; .@0; C p b @/ D P 1; .@0; C p b @/ D i R C : Example 3.22. Consider the differential expression @2 in RnC1 , n 2 N>0 , as a continuous operator in H1 .@ C e/ for some fixed 2 RnC1 . By the Fourier– Laplace transform we are led to discuss .i m C /2 D m2 2 2 i m
in H1 .i m C e/. For ¤ 0 and with WD x jj2 .x / (and noting that ? ) we calculate j .i x C /2 j2 D .x 2 2 Re /2 C .2 x C I m /2 D ..jj2 .x / C /2 2 Re /2 C .2 x C I m /2 D .jj4 .x /2 C 2 2 Re /2 C .2 x C I m /2 : For this to be zero for an x 2 R we must have x D 12 I m and jj4 .x /2 C 2 2 Re D 14 jj4 .I m /2 C 2 2 Re D 0 so that 1 x D I m jj2 C 2 with ? in RnC1 such that 1 2 D 2 C Re jj4 .I m /2 : 4 For n 2 N1 this is possible if only if 1 2 C Re jj4 .I m /2 0; 4 i.e. for in the ‘interior’ ¹ 2 C j 2 C Re 14 jj4 .I m /2 0º enclosed by the parabola ¹ 2 C j 2 C Re 14 jj4 .I m /2 D 0º. In the ‘exterior’ of this parabola given by the open set ¹ 2 C j 2 C Re 14 jj4 .I m /2 < 0º the differential expression @2 is invertible in H1 .@ C e/. Consequently, we have ˇ ³ ² ˇ 2 1 4 2 ˇ 2 C ˇ C Re jj .I m / < 0 %1 .@2 /: 4 For 2 ¹ 2 C j 2 C Re 14 jj4 .I m /2 0º we see that the real zeros of the polynomial x 7! .i x C /2 constitute a Lebesgue null set (indeed if n > 1 an .n 2/-dimensional sphere in a hyperplane orthogonal to ). By the argument of the previous example we find again ˇ ² ³ ˇ 2 1 4 2 ˇ 2 C ˇ C Re jj .I m / 0 C 1 .@2 / [ %1 .@2 /: 4 To show that actually ˇ ³ ² ˇ 2 1 4 2 ˇ 2 C ˇ C Re jj .I m / 0 D C 1 .@2 / 4
(3.2.98)
we need to see the failure of continuity of .@2 /1 for all values in the set ¹ 2 C j 2 C Re 14 jj4 .I m /2 0º. A similar but slightly refined argument to the one used in the previous example yields the result. Choose a real zero 1 x D I m jj2 C 2 with ? in RnC1 such that 1 2 D 2 C Re jj4 .I m /2 4 and consider u" WD
1 j.i m C
/2
C j C i "
"
with " WD ".nC1/=2 .. x /="/, where 2 CV 1 .RnC1 / is such that supp
BRnC1 .0; 1/ and Z RnC1
j .y/j2 dy D 1:
Then we have ..i m C /2 / u" D D
..i m C /2 /
" j.i m C /2 C j C i " ..i m C /2 / C i " 1
" i "
" : 2 2 j.i m C / C j C i " j.i m C / C j C i "
The right-hand side is bounded in every Hˇ .i m C e/, ˇ 2 ZnC1 , but for u" we find Z ju" j2˛ D .x 2 C e/˛ ju" .x/j2 dx Z D
RnC1
RnC1
.x 2 C e/˛ j " .x/j2 dx j.i x C /2 C j2 C "2
inf¹.x 2 C e/˛ j x 2 BRnC1 .x ; "/º ! 1 as " ! 0C: sup¹j.x i /2 j2 j x 2 BRnC1 .x ; "/º C "2
This yields the desired result (3.2.98). Considering @2 in H1 .jD j/, we see that for 2 C 12 22 jj4 .1 2 /2 0, D .1 C i 2 /2 , 2 RnC1 n ¹0º, we have w WD ı¹x º is an element in H1 .jDj/ that solves the equation ..i m C /2 / w D 0: Consequently, u WD L w 2 H1 .jD j/ solves .@2 / u D 0:
In this case we get ˇ ² ³ ˇ 2 1 4 2 ˇ 2 C ˇ C Re jj .I m / 0 D P 1; .@2 / D 1; .@2 /: 4 In the remaining case D 0, we find by the same reasoning that R0 D C 1 .@2 / D P 1; .@2 /: As the reasoning – in particular for the last example – indicates, there is a general spectral result here. Theorem 3.2.7. Let P .@/ be an .L L/-differential expression in RnC1 , L 2 N>0 . If the set of all real zeros N WD ¹xjx 2 RnC1 ; det.P .i x C / / D 0º of x 7! det.P .i xC/ / is a Lebesgue null set for every 2 C, i.e. x 7! det.P .i xC/ / is not the zero polynomial, then we have 1 .P .@ C // D C 1 .det P .@ C // D P 1; .det P .@ C // D 1; .P .@ C // D ¹ 2 Cj N non-emptyº and %1 .P .@ C // D %1; .P .@ C //: Proof. Let 2 C be such that N is empty. Then we have, according to our solution theory, 2 %1 .det P .@ C // D %1 .P .@ C //. Since N is in any case a Lebesgue null set (by assumption), the injectivity of det.P .i mC/ / and therefore .P .i m C / / in H1 .i m C e/ is clear. Likewise, we have the density of the range of det.P .i m C / / in H1 .i m C e/, since CV 1 .RnC1 n N / is dense in H1 .i m C e/ and det.P .i m C / / W CV 1 .RnC1 n N / ! CV 1 .RnC1 n N / is bijective. 1 Let u" WD jdet.P .i mC//jCi
, " WD ".nC1/=2 .. x /="/, x 2 N , 2 " " CV 1 .RnC1 / such that supp BRnC1 .0; 1/ and Z j .y/j2 dy D 1: RnC1
Then we have det.P .i m C / /
" jdet.P .i m C / /j C i " det.P .i m C / / C i "
" D jdet.P .i m C / /j C i " 1
" : i" jdet.P .i m C / /j C i "
det.P .i m C / / u" D
The right-hand side is bounded in every Hˇ .i m C e/, ˇ 2 ZnC1 , but for u" we find Z ju" j2˛ D Z D
RnC1
.x 2 C e/˛ ju" .x/j2 dx .x 2 C e/˛ j " .x/j2 dx jdet.P .i x C / /j2 C "2
RnC1
inf¹.x 2 C e/˛ j x 2 BRnC1 .x ; "/º sup¹jdet.P .i ." x C x / C / /j2 j x 2 BRnC1 .x ; "/º C "2
! 1 as " ! 0: This confirms that 2 C 1 .det P .@ C //: Thus, we have shown ¹ j N emptyº %1 .det P .@ C // D %1 .P .@ C // and ¹ j N non-emptyº C 1 .det P .@ C //: Consequently, we have 1 .det P .@ C // D 1 .P .@ C // ¹ j N non-emptyº C 1 .det P .@ C // and so the desired equality follows. Considering the case of H1 .jD j/, we note that %1 .P .@ C // %1; .P .@ C //: Now ı¹x º 2 H1 .jDj/ satisfies det.P .i m C / / ı¹x º D 0 and so u WD L ı¹x º D exp. m/ exp.i x m/ 2 H1 .jD j/ satisfies det.P .@ C / / u D 0: Thus, we find ¹ j N non-emptyº P 1; .det P .@ C // and the remaining equalities follow.
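The criterion of Theorem 3.2.7 is easy to probe numerically for a concrete expression: one only has to decide whether the symbol has a real zero for a given value of the spectral parameter. The following sketch is an added illustration, not part of the original text; it takes the scalar expression of Example 3.22 with the arbitrarily chosen weight nu = (1, 0) in R^2 and scans a grid of real x for zeros of (ix + nu)^2 - lambda.

```python
import numpy as np

# Minimal numerical probe of the Theorem 3.2.7 criterion for the scalar
# symbol (ix + nu)^2 in R^2 (cf. Example 3.22).  The weight nu = (1, 0)
# and the grid are illustrative assumptions.
nu = np.array([1.0, 0.0])

def min_abs_symbol(lam, grid=np.linspace(-6.0, 6.0, 601)):
    """Smallest |(ix + nu)^2 - lam| over a grid of real x in R^2."""
    X0, X1 = np.meshgrid(grid, grid)
    sym = (1j * X0 + nu[0])**2 + (1j * X1 + nu[1])**2
    return np.abs(sym - lam).min()

print(min_abs_symbol(0.5))   # ~ 0: a real zero exists, lambda is in the spectrum
print(min_abs_symbol(2.0))   # bounded away from 0: lambda is in the resolvent set
```

A minimum close to zero indicates that the chosen value belongs to the (Sobolev lattice) spectrum, while a minimum bounded away from zero is consistent with membership in the resolvent set.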
Finally, the definition of a convolution carries over to the present situation in a slightly modified form. Via the Fourier–Laplace transform, the concept of convolution is reduced to multiplication. Since for $F\in H_{-\infty}(|D|)$ pointwise evaluation may rarely make sense, we define multiplication by $F$,
\[
F\,\cdot\,(m):H_{\infty}(|D|)\subseteq H_{-\infty}(|D|)\to H_{-\infty}(|D|),\qquad \psi\mapsto\psi(m)\,F,
\]
with
\[
\langle\psi(m)\,F\,|\,\varphi\rangle_{0,0}=\langle F\,|\,\psi(m)^{*}\varphi\rangle_{0,0}
\]
for $\psi,\varphi\in H_{\infty}(|D|)$. Note that $\psi(m)^{*}\varphi\in H_{\infty}(|D|)$ follows from a Leibniz type formula, see Subsection 3.2.3.

Definition 3.2.8. Let $f\in H_{n+1,s+1,t+1,-\infty}(|D_\nu|)$, $n,s,t\in\mathbb{N}$. Then the mapping
\[
f\ast:=(2\pi)^{(n+1)/2}\,\hat{f}\bigl(\tfrac{1}{\mathrm{i}}(\partial+\nu)\bigr)=(2\pi)^{(n+1)/2}\,\mathcal{L}_\nu^{*}\,\hat{f}(m-\mathrm{i}\nu)\,\mathcal{L}_\nu
\]
is continuous in $H_{n+1,s+1,t+1,\infty}(|D_\nu|)$ and will be called the convolution operator associated with $f$. If
\[
f\ast:H_{n+1,s+1,t+1,\infty}(|D_\nu|)\subseteq H_{n+1,s+1,t+1,\alpha}(|D_\nu|)\to H_{n+1,s+1,t+1,\beta}(|D_\nu|)
\]
is continuous for some $\alpha,\beta\in\mathbb{Z}^{n+1}$, then the continuous extension to the space $H_{n+1,s+1,t+1,\alpha}(|D_\nu|)$ will again be denoted by $f\ast$ or $(2\pi)^{(n+1)/2}\hat{f}\bigl(\tfrac{1}{\mathrm{i}}(\partial+\nu)\bigr)$ and will also be referred to as a convolution operator. In this case, for $g\in H_{n+1,t+1,r+1,\alpha}(|D_\nu|)$, $r\in\mathbb{N}$, we call $f\ast g$ the convolution of $f$ with $g$ in $H_{-\infty}(|D_\nu|)$.

The proof that convolutions of this form are well-defined is postponed to Subsection 3.2.3. Indeed, we shall not develop the idea of convolution in full generality, since we are mainly interested in very particular types of convolution, e.g. convolution with fundamental solutions and convolution with elements having compact support. The details of the issues involved in considering such special cases will be presented in Subsection 3.2.3. For our present purposes it suffices to consider the invertible case and convolution with the fundamental solution. According to our extension of the solution theory, this particular convolution carries over to $H_{-\infty}(|D_\nu|)$. Let $P(\partial)$ be invertible with respect to $\nu\in\mathbb{R}^{n+1}$ and let $G_\nu$ be the corresponding fundamental solution. Then by definition
\[
P_\nu(\partial)^{-1}=G_\nu\ast
\]
on H1 .jD j/ H1 .@ Ce/ but also in H1 .@ Ce/ H1 .jD j/. By continuous extension we obtain that P .@/1 D G as continuous mappings in H1 .jD j/.
3.2.2 Some Applications to Linear Partial Differential Equations of Mathematical Physics

3.2.2.1 Helmholtz Equation in R3
We recall that @2 . ˙ i /2 D @2 C .i /2 D @2 C .˙i /2 is invertible for every 2 R>0 , e.g. in H1 .@ C e/. The fundamental solution can easily be determined as the inverse Fourier transform of x 7! .2/3=2 .x 21.˙i /2 / . By applying residue calculus we find that the fundamental solution g˙i is given by the function g˙i .x/ D Indeed, with
exp.˙i jxj/ exp. jxj/ 4 jxj
for x ¤ 0:
2 CV 1 .R3 /,
hg i j i0;0
Z 1 1 b./ d D lim 2 3=2 "!0C 2R3 nB.0;"/ . ˙ i /2 .2/ Z 1 1 b./ d lim D .2/3=2 "!0C 2R3 nB.0;"/ 2 jj .jj C ˙ i / Z 1 1 b./ d : lim C 3=2 "!0C 2R3 nB.0;"/ 2 jj .jj i / .2/
Introducing polar coordinates, Z 1 1 b./ d 2 2 3=2 .2/ 2R3 nB.0;"/ . ˙ i / Z Z 1 r2 b.r0 / dr dS D 0 .2/3=2 r2Œ";1Œ 0 2S 2 r 2 . ˙ i /2 Z Z Z r2 1 D exp.ir0 x/ .x/ dx dr dS 0 .2/3 r2Œ";1Œ 0 2S 2 x2R3 r 2 . ˙ i /2 Z Z Z 1 r2 D exp.ir0 x/ dS 0 .x/ dr dx .2/3 x2R3 r2Œ";1Œ r 2 . ˙ i /2 0 2S 2 Z Z 1 r sin.r jxj/ dr .x/ dx: D 2 2 2 2 jxj x2R3 r2Œ";1Œ r . ˙ i /
Here we have made use of the rotational symmetry of the domain and integrand to turn x 2 R3 into 0
1 0 @ 0 A jxj in
R 0 2S 2
exp.i r0 x/ dS 0 . In standard polar coordinates we obtain 0
1 cos ' cos B C 0 D @ sin ' cos A sin and dS 0 D cos d' d and so for x 2 R3 n ¹0º Z
Z 0 2S 2
exp.i r0 x/ dS 0 D 2
exp.i rjxj sin / cos d Z
D 2 D 4
2Œ=2;C=2
exp.i rjxj s/ ds s2Œ1;C1
sin.rjxj/ : r jxj
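The spherical surface integral used in this step can be confirmed numerically. The sketch below is an added illustration; it integrates exp(i r theta'·x) over the unit sphere in polar coordinates and compares the result with 4 pi sin(r|x|)/(r|x|).

```python
import numpy as np
from scipy.integrate import dblquad

r, ax = 2.0, 1.5   # r and |x|; the integral only depends on the product r*|x|

# integrate exp(i r |x| cos(theta)) over the unit sphere S^2
re, _ = dblquad(lambda th, ph: np.cos(r * ax * np.cos(th)) * np.sin(th),
                0, 2 * np.pi, lambda ph: 0.0, lambda ph: np.pi)
im, _ = dblquad(lambda th, ph: np.sin(r * ax * np.cos(th)) * np.sin(th),
                0, 2 * np.pi, lambda ph: 0.0, lambda ph: np.pi)

print(re, im)                                   # imaginary part vanishes by symmetry
print(4 * np.pi * np.sin(r * ax) / (r * ax))    # the claimed value
```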
R r Now, r2R>0 r 2 .˙i sin. r jxj/ dr can again be evaluated using residue calculus. /2 First we observe Z Z 1 r r sin. r jxj/ dr D sin. r jxj/ dr 2 2 2 2 r2R r . ˙ i /2 r2R>0 r . ˙ i / Z r 1 D exp.i r jxj/ dr 2 2i r2R r . ˙ i /2 Z 1 1 D exp.i r jxj/ dr 4i r2R r C . ˙ i / Z 1 1 exp.i r jxj/ dr: C 4i r2R r . ˙ i / Then, for 2 R n ¹0º, we find Z r2R
´ 1 2 i exp. i jxj/ exp. jxj/ for > 0; exp. i r jxj/ dr D r . C i / 0 for < 0
and Z r2R
´ 1 2 i exp. i jxj/ exp. jxj/ for < 0; exp. i r jxj/ dr D r C . C i / 0 for > 0:
Consequently, for > 0, we get Z r sin.r jxj/ dr D exp.˙ i jxj/ exp. jxj/ 2 . ˙ i /2 r 2 r2R>0 and so hg i j i0;0
Z Z r sin. r jxj/ 1 dr D lim 2 2 2 2 .2/ "!0C jxj x2R3 r2Œ";1Œ r . ˙ i / Z exp.˙ i jxj/ exp. jxj/ D .x/ dx; 3 4 jxj x2R
.x/ dx
from which the result follows. Since $g_{\pm\mathrm{i}\varepsilon}\in L_{1,\mathrm{loc}}(\mathbb{R}^3)$, i.e. locally Lebesgue integrable (even in $L_{2,\mathrm{loc}}(\mathbb{R}^3)$, and even uniformly with respect to $\kappa\in\mathbb{R}$, $\varepsilon\in\mathbb{R}_{>0}$), we have that $g_{\pm\mathrm{i}\varepsilon}$ is a regular distribution. Note that we must have $g_{\mp\mathrm{i}\varepsilon}=(\mathcal{L}_{\pm}\otimes1)\,g_{1,\pm}(\,\cdot\,,\,\cdot\,)$, where $\mathcal{L}_{\pm}\otimes1$ is the Fourier–Laplace transform with respect to time only and $g_{1,\pm}$ is the forward or backward fundamental solution for $\partial_0^2-\hat{\partial}^2$, respectively. Obviously we also have that $g_{\pm\mathrm{i}\varepsilon}\in H_{-\infty}(\hat{\partial}+e)$ as long as $\varepsilon\in\mathbb{R}_{>0}$. Since
\[
g_{\pm\mathrm{i}\varepsilon}\to g_{\pm\mathrm{i}0}\quad\text{in }H_{-\infty}(|D|)\quad\text{as }\varepsilon\to0+,
\]
where
\[
g_{\pm\mathrm{i}0}(x)=\frac{\exp(\pm\mathrm{i}\kappa|x|)}{4\pi|x|},
\]
we can use $g_{\pm\mathrm{i}0}$ as fundamental solutions for the Helmholtz equation with $\kappa\in\mathbb{R}$. Usually, one assumes $\kappa\in[0,\infty[$ and calls $\kappa$ the wave number. For $\kappa\neq0$ the non-uniqueness is obvious (there are infinitely many solutions $u\in H_{-\infty}(|D|)$ of the corresponding homogeneous equation). But for $\kappa=0$, in which case $g_L:=g_{0\pm\mathrm{i}0}$ is given by
\[
g_L(x)=\frac{1}{4\pi|x|}\quad\text{for }x\in\mathbb{R}^3\setminus\{0\},
\]
we also have non-uniqueness. The regular distribution given by $g_L$ is a fundamental solution for the so-called Laplace operator $\partial^2$.
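As a quick symbolic cross-check of the limit case (added here as an illustration, with the overall sign and normalisation convention left aside), the function x -> exp(i kappa |x|)/(4 pi |x|) is annihilated by the Helmholtz expression away from the origin; the delta contribution at the origin is of course not visible in such a pointwise computation.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
kappa = sp.symbols('kappa', positive=True)

r = sp.sqrt(x**2 + y**2 + z**2)
g = sp.exp(sp.I * kappa * r) / (4 * sp.pi * r)   # candidate fundamental solution

laplacian = sum(sp.diff(g, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian + kappa**2 * g))     # 0 for r != 0
```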
3.2.2.2 Helmholtz Equation in R2
In R2 the situation is slightly more complicated. We consider again (for > 0) hg i j i0;0
Z
1 b./ d "!0C R!1 2B.0;R/nB.0;"/ . ˙ i /2 Z Z r 1 b.r 0 / ds dr lim lim D 0 2 "!0C R!1 r2Œ";R r 2 . ˙ i /2 0 2S 1 Z r 1 lim lim D .2/2 "!0C R!1 r2Œ";R r 2 . ˙ i /2 Z Z exp.ir0 x/ .x/ dx ds 0 dr
1 D 2
lim
lim
0 2S 1
x2R2
2
Z Z r 1 lim lim D 2 2 .2/ "!0C R!1 x2R2 r2Œ";R r . ˙ i /2 Z exp.i r0 x/ ds 0 dr .x/ dx: 0 2S 1
Now the usual rotational symmetry argument yields Z Z 1 1 exp.i r0 x/ ds 0 D exp.i r jxj cos '/ d' DW J0 . r jxj/: 2 0 2S 1 2 '2Œ0;2 Thus we have hg i j i0;0 1 D 2
lim
R!1
Z x2R2
Z r2Œ0;R
r2
r J0 . r jxj/ dr . ˙ i /2
.x/ dx:
Fortunately, for x ¤ 0 the integral Z Z r s J . r jxj/ dr D J .s/ ds 2 . ˙ i /2 0 2 .. ˙ i / jxj/2 0 r s R>0 R>0 has known values, see [16, formula 6.532, 4]: Z s i .1/ H .. i / jxj/ J0 .s/ ds D 2 2 2 0 R>0 s .. i / jxj/ and
Z R>0
s i .2/ J0 .s/ ds D H0 .. C i / jxj/ s 2 .. C i / jxj/2 2
where H0 ; H0 are the Hankel functions for index 0 of first and second kind re.1/ .2/ spectively. Noting that H0 .z/ D H0 .z /, z 2 C n ¹0º, we have i .2/ gCi .x/ D H0 .. C i / jxj/ and 4
gi .x/ D
i .1/ H .. i / jxj/: 4 0
For ¤ 0 we can let ! 0C and obtain a limit in H1 .jDj/ i .2/ gCi 0 .x/ D H0 . jxj/; 4
i .1/ gi 0 .x/ D C H0 . jxj/: 4
These are both fundamental solutions of the Helmholtz equation in R2 .@2 2 / g˙i 0 D ı;
2 R n ¹0º:
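That the Hankel functions appearing here solve the two-dimensional Helmholtz equation away from the origin can be checked numerically. The following sketch is an added illustration using scipy's Hankel function of the first kind and a second-order finite-difference Laplacian; the choice of kappa and of the evaluation point is arbitrary.

```python
import numpy as np
from scipy.special import hankel1

kappa, h = 1.3, 1e-3
x0, y0 = 0.7, -0.4                    # a point away from the origin

def u(x, y):                          # radial Hankel solution H_0^{(1)}(kappa |x|)
    return hankel1(0, kappa * np.hypot(x, y))

lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
print(abs(lap + kappa**2 * u(x0, y0)))   # ~ 0, up to O(h^2) discretisation error
```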
The case D 0 is, however, not achieved by taking the limit ! 0. Indeed, the .1/ .2/ asymptotic behaviors of the Hankel functions H0 ; H0 are given by .1/
H0 .z/ D
2i ln.z/ C C.1/ C o.1/;
.2/
H0 .z/ D
2i ln.z/ C C.2/ C o.1/
as z ! 0 for some constants C.i/ ; i D 1; 2, and so we have no convergence of g˙i 0 in H1 .jDj/ as ! 0 in R. We have, however, convergence of w WD g˙i 0 ˙
1 1 . ln.j j/ C˙;sgn./ / ! ln.j j/ 2 2
in H1 .jDj/ as ! 0C or ! 0. Here C˙;sgn./ denotes a constant depending on the sign of 2 R. Clearly we have @2 w D ı C 2 w ! ı
in H1 .jDj/ as ! 0
and therefore w0 WD
1 ln.j j/ 2
may serve as a fundamental solution for @2 in the case n D 2. Another way of looking at this fundamental solution is to consider it as a limit of the solution of @2 0 D ı ˝ ı ˝ ŒR1 ;R2 in R3 (compare ‘method of descent’ below) as R1 ; R2 ! C1. For this we study the limit of g0 ı ˝ ı ˝ ŒR1 ;R2
as R1 ; R2 ! 1. To discuss this limit in a suitable sense, we first evaluate for 2 CV 1 .R3 / (and with x D .x1 ; x2 ; x3 /) hg0 ı ˝ ı ˝ ŒR1 ;R2 j i0 D hı ˝ ı ˝ ŒR1 ;R2 j g0 i0 Z Z 1 D .y1 ; y2 ; x3 y3 / dy dx3 ŒR1 ;R2 R3 4j.y1 ; y2 ; y3 /j Z Z 1 .y1 ; y2 ; y3 / dy dx3 D ŒR1 ;R2 R3 4j.y1 ; y2 ; x3 y3 /j Z Z 1 dx3 .y1 ; y2 ; y3 / dy D 3 4j.y ; y 1 2 ; x3 y3 /j R ŒR1 ;R2 p Z x3 C R1 C j.x1 ; x2 /j2 C .x3 C R1 /2 1 D ln .x/ dx: p 4 R3 x3 R2 C j.x1 ; x2 /j2 C .x3 R2 /2 Now
p j.x1 ; x2 /j2 C .x3 C R1 /2 ln p x3 R2 C j.x1 ; x2 /j2 C .x3 R2 /2 p x3 R2 C j.x1 ; x2 /j2 C .x3 R2 /2 D ln p x3 C R1 C j.x1 ; x2 /j2 C .x3 C R1 /2 j.x1 ; x2 /j2 D ln p p .x3 C R1 C j.x1 ; x2 /j2 C .x3 C R1 /2 /.R2 x3 C j.x1 ; x2 /j2 C .x3 R2 /2 /
x 3 C R1 C
D 2 ln.j.x1 ; x2 /j/ R 1 R2 C ln p p .x3 C R1 C j.x1 ; x2 /j2 C .x3 C R1 /2 /.R2 x3 C j.x1 ; x2 /j2 C .x3 R2 /2 / ln.R1 R2 /;
for sufficiently large R1 ; R2 2 R>0 . Noting that the integral only extends over the compact set supp and R 1 R2 p p 2 .x3 C R1 C j.x1 ; x2 /j C .x3 C R1 /2 /.R2 x3 C j.x1 ; x2 /j2 C .x3 R2 /2 / !1 as R1 ; R2 ! 1, we get 1 hln.R1 R2 / j i0 lim hg0 ı ˝ ı ˝ ŒR1 ;R2 j i0 C R1 ; R2 !1 4 1 D hln.j.m1 ; m2 /j/ j i0 2
for all $\varphi\in\mathring{C}_\infty(\mathbb{R}^3)$. Thus, we have established $\tfrac{1}{2\pi}\ln(|(m_1,m_2)|)$ as a somewhat peculiar limit case of a three-dimensional potential.

3.2.2.3 Cauchy–Riemann Operator
Obviously, in the non-invertible case, fundamental solutions are more difficult to find. This difficulty is akin to the problems one encounters when trying to solve $Au=f$ when $0\in C\sigma(A)$. In this respect the Cauchy–Riemann operator $\partial_C:=\partial_1+\mathrm{i}\partial_2$ in $\mathbb{R}^2$ is a particularly notable example, since $C\sigma_{-\infty}(\partial_C)=\mathbb{C}$. Fortunately, noting that $\partial_C^{*}=-\partial_1+\mathrm{i}\partial_2=-(\partial_1-\mathrm{i}\partial_2)$, we have $\partial_C^{*}\partial_C=\partial_C\partial_C^{*}=-\partial^2$, and so the previous example yields that $g_{CR}:=\partial_C^{*}w_0=\tfrac{1}{2\pi}\,\partial_C^{*}\ln(|\cdot|)$ is a fundamental solution for the Cauchy–Riemann operator. We find
\[
g_{CR}(x_1,x_2)=\frac{1}{2\pi(x_1+\mathrm{i}x_2)}
\]
as a regular distribution which, in the usual complex notation (identifying $\mathbb{R}^2$ and $\mathbb{C}$ via the isometry $(x_1,x_2)\mapsto z:=x_1+\mathrm{i}x_2$), can be re-written as
\[
g_{CR}(z)=\frac{1}{2\pi z}.
\]
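A small symbolic check (an added illustration) confirms that the candidate g_CR is annihilated by the Cauchy–Riemann operator away from the origin, which is the pointwise part of the fundamental-solution property; the delta at the origin again requires the distributional argument given above.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
g_cr = 1 / (2 * sp.pi * (x1 + sp.I * x2))       # candidate fundamental solution

# (d/dx1 + i d/dx2) g_cr vanishes away from the origin
print(sp.simplify(sp.diff(g_cr, x1) + sp.I * sp.diff(g_cr, x2)))   # 0
```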
3.2.2.4 Wave Equation in R2 (Method of Descent)
One may suspect that the solution of the wave equation in R2 may be found as the solution of the three-dimensional wave equation with the right-hand side independent of one of the variables. Noting that the constant function 1 … H1 .@ C 1/ but 1 2 H1 .jDj/, we see that in order to pursue this idea we require the extension of the solution theory to H1 .jD j/; D .%; 0; 0; 0/. Consider the uniquely determined solution u 2 H1 .jD0;% j/ ˝ H1 .jDj/; % 2 R>0 , of the wave equation @2 / u D ı ˝ ı ˝ ı ˝ 1 .@20 b in R1C3 . We expect u to be independent of the last variable and then u interpreted in R1C2 would solve the equation .@20 b @2 / u D ı ˝ ı ˝ ı and so u would be the fundamental solution of the wave equation in R1C2 . This approach to fundamental solutions of lower dimension is known as the method of descent. We know that u D g1;C ı ˝ ı ˝ ı ˝ 1
and calculate41 huj i;0 D huj exp.2% m0 / i0;0 D hg1;C ı ˝ ı ˝ ı ˝ 1j exp.2% m0 / i0;0 D hı ˝ ı ˝ ı ˝ 1 j .1 g1;C / exp.2% m0 / i0;0
D hı ˝ ı ˝ ı ˝ 1 j i0;0 Z D .0; 0; 0; s/ ds; R
where we have used as an ad-hoc abbreviation .t; x/ WD hg1;C j .t;x/ exp.2% m0 / i0;0 ; and .t;x/ is the translation operation given by ..t;x/ '/.s; y/ D '.s C t; y C x/ for complex-valued functions ' defined on R1C3 . We find Z .0; 0; 0; s/ ds R Z D hg1;C j .0;0;0;s/ exp.2% m0 / i0;0 ds R Z D hg1;C j .0;0;0;s/ i;0 ds R Z Z 1 exp.2 jxj %/ .jxj; x1 ; x2 ; x3 C s/ dx ds D 3 4jxj R x2R Z Z Z 1 D t exp.2 t %/
.t; t x1 ; t x2 ; s/ dSx ds dt: 4 t2Œ0;1Œ R x2S 2 Noting that Z
Z R
D2 41
x2S 2
.t; t x1 ; t x2 ; s/ dSx ds
Z
Z
R
.x1 ;x2 /2B.0;1/
.t; t x1 ; t x2 ; s/ d.x1 ; x2 / ds; q 1 x12 x22
Recall that $\sigma_A$ denotes the rescaling operator given by
\[
(\sigma_A\varphi)(x)=\sqrt{|\det(A)|}\;\varphi(Ax)
\]
for a regular linear mapping $A:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$. Here $n=2$ and $A$ is just multiplication by $-1$.
we get 2huj i;0 Z Z D t exp.2 t %/
R
t2Œ0;1Œ
Z D
Z .x1 ;x2 /2B.0;1/
R
Z
R
t exp.2 t %/ t2Œ0;1Œ
.x1 ;x2 /2B.0;1/
Z
Z
R
.t; t x1 ; t x2 ; s/ d.x1 ; x2 / ds dt q 1 x12 x22
.t; t x1 ; t x2 ; s/ ds d.x1 ; x2 / dt q 1 x12 x22
.t; x1 ; x2 ; s/ ds d.x1 ; x2 / dt q t2Œ0;1Œ .x1 ;x2 /2B.0;t/ t 2 x12 x22 Z Z Z exp.2 t %/
.t; x1 ; x2 ; s/ ds d.x1 ; x2 / dt D q 2 2 2 t2Œ0;1Œ .x1 ;x2 /2B.0;t/ R t x1 x2 Z ¹.t;x/2R1C2 jt >jxjº .t; x/ Z D
.t; x; s/ ds dx exp.2 t %/ dt: p t 2 jxj2 .t;x/2R1C2 R D
exp.2 t %/
R
Consequently, we have found the solution u given by the function q R>0 .t x12 C x22 / u.t; x1 ; x2 ; x3 / D for t ¤ 0; x1 ; x2 ; x3 2 R: q 2 t 2 x12 x22 In particular, we see that u is independent of the last variable in the sense that uDw˝1 for some w 2 H1 .jD0;% j/ ˝ H1 .j.D1 ; D2 /j/. Indeed, let .@20 @21 @22 /w D ı ˝ ı ˝ ı in R1C2 . Then by letting w j i;0 he w j ˝ !i;0 WD he
Z !.s/ ds R
we have a continuous embedding w 7! w e of H1 .jD0;% j/ ˝ H1 .j.D1 ; D2 /j/ into H1 .jD0;% j/ ˝ H1 .j.D1 ; D2 ; D3 /j/. Due to uniqueness of solution we must have w.t; x1 ; x2 / D
¹.t;x/2R1C2 jt >jxjº .t; x1 ; x2 / q 2 t 2 x12 x22
for t ¤ 0; x1 ; x2 2 R:
Since $w$ coincides with the fundamental solution of the wave equation in $\mathbb{R}^{1+2}$, we also know its behavior at $t=0$. Continuing with the method of descent for one more step yields that the fundamental solution $g$ of the wave equation in $\mathbb{R}^{1+1}$ is given by
\[
g(t,x)=\tfrac{1}{2}\,\chi_{\mathbb{R}_{>0}}(t-|x|),\qquad t,x\in\mathbb{R}.
\]
The details for this step are left as an exercise.
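The density obtained by the method of descent can also be checked pointwise: inside the forward light cone the function 1/(2 pi sqrt(t^2 - |x|^2)) satisfies the homogeneous two-dimensional wave equation. The following sketch is an added symbolic illustration; the distributional behaviour on the light cone itself is not captured by it.

```python
import sympy as sp

t, x, y = sp.symbols('t x y', positive=True)    # work formally in the region t > |(x, y)|
w = 1 / (2 * sp.pi * sp.sqrt(t**2 - x**2 - y**2))

wave = sp.diff(w, t, 2) - sp.diff(w, x, 2) - sp.diff(w, y, 2)
print(sp.simplify(wave))    # 0 inside the forward light cone
```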
3.2.2.5 Plane Waves
The construction in the previous subsection points in a particular direction of application. The extension to the tempered, exponentially weighted Sobolev lattice allows for particular elements of interest not previously included. Another instance of this type of generalization is associated with the idea of a plane wave. Let us consider again the wave equation in $\mathbb{R}^{1+n}$ with initial data $f$, $g$:
\[
(\partial_0^2-\hat{\partial}^2)\,u=\delta\otimes g+\partial_0\delta\otimes f.
\]
If the initial data satisfy
\[
x\cdot\hat{\partial}\,f=0=x\cdot\hat{\partial}\,g
\]
for all $x\in\mathbb{R}^{n+1}$, $x\perp p$, for some fixed direction $p\in\mathbb{R}^{n+1}$, $|p|=1$, then, due to uniqueness, we also have $x\cdot\hat{\partial}\,u=0$ for all $x\in\mathbb{R}^{n+1}$, $x\perp p$. A solution with this property is called a plane wave. In this case, the wave equation reduces to
\[
(\partial_0^2-(p\cdot\hat{\partial})^2)\,u=(\partial_0+p\cdot\hat{\partial})\,(\partial_0-p\cdot\hat{\partial})\,u=\delta\otimes g+\partial_0\delta\otimes f,
\]
and so decomposes into two transport equations. Indeed, we have
\[
(\partial_0^2-(p\cdot\hat{\partial})^2)^{-1}=(\partial_0+p\cdot\hat{\partial})^{-1}(\partial_0-p\cdot\hat{\partial})^{-1}
=\tfrac{1}{2}\bigl((\partial_0+p\cdot\hat{\partial})^{-1}+(\partial_0-p\cdot\hat{\partial})^{-1}\bigr)\,\partial_0^{-1}
\]
and so
\[
u=\tfrac{1}{2}\bigl((\partial_0+p\cdot\hat{\partial})^{-1}+(\partial_0-p\cdot\hat{\partial})^{-1}\bigr)\,\partial_0^{-1}\,(\delta\otimes g+\partial_0\delta\otimes f)
=\tfrac{1}{2}\bigl((\partial_0+p\cdot\hat{\partial})^{-1}+(\partial_0-p\cdot\hat{\partial})^{-1}\bigr)\,(\chi_{\mathbb{R}_{\geq0}}\otimes g+\delta\otimes f).
\]
Introducing the solution operator for the initial value problem for the transport equation (in direction $p$)
\[
W_{\pm}f:=(\partial_0\pm p\cdot\hat{\partial})^{-1}\,\delta\otimes f
\]
for $f\in H_{-\infty}(|(D_1,\dots,D_n)|)$ satisfying $x\cdot\hat{\partial}\,f=0$ for all $x\in\mathbb{R}^{n+1}$, $x\perp p$, we obtain
\[
u=\tfrac{1}{2}\bigl(W_+f+\partial_0^{-1}W_+g\bigr)+\tfrac{1}{2}\bigl(W_-f+\partial_0^{-1}W_-g\bigr),
\]
showing that $u$ is composed of a part propagating in direction $p$ and another part propagating in the opposite direction $-p$. If for example $f=\delta_{\{p\}^{\perp}}$ and $g=0$, then
\[
u=\tfrac{1}{2}\,\chi_{\mathbb{R}_{>0}}(m_0)\,\bigl(\delta_{\{(s,y)\,|\,s-y\cdot p=0\}}+\delta_{\{(s,y)\,|\,s+y\cdot p=0\}}\bigr)
\]
or
\[
u(t)=\tfrac{1}{2}\,\chi_{\mathbb{R}_{>0}}(t)\,\bigl(\delta_{\{y\,|\,t-y\cdot p=0\}}+\delta_{\{y\,|\,t+y\cdot p=0\}}\bigr).
\]
If we wanted the solution just to be $\chi_{\mathbb{R}_{>0}}(m_0)\,\delta_{\{(s,y)\,|\,s+y\cdot p=0\}}$ instead, we would have to choose $f=\delta_{\{p\}^{\perp}}$ and $g=p\cdot\hat{\partial}\,\delta_{\{p\}^{\perp}}$.
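The decomposition of a plane wave into two counter-propagating transport solutions is, in one spatial variable, just d'Alembert's formula. The sketch below is an added illustration under simplifying assumptions (one space dimension, p = 1, smooth sample data f and g); it verifies that the superposition of the two one-way waves solves the wave equation with the prescribed initial data.

```python
import sympy as sp

t, x, s = sp.symbols('t x s', real=True)
f = sp.exp(-x**2)            # sample initial displacement
g = x * sp.exp(-x**2)        # sample initial velocity

# superposition of the two transport (one-way wave) solutions: d'Alembert's formula
u = (f.subs(x, x - t) + f.subs(x, x + t)) / 2 \
    + sp.integrate(g.subs(x, s), (s, x - t, x + t)) / 2

print(sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)))   # 0: wave equation holds
print(sp.simplify(u.subs(t, 0) - f))                      # 0: correct initial value
print(sp.simplify(sp.diff(u, t).subs(t, 0) - g))          # 0: correct initial velocity
```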
3.2.2.6 Linearized Navier–Stokes Equations (Incompressible Case)
Since we are now quite familiar with the construction of fundamental solutions, we may be brief in our discussion of the variants of the linearized equations of fluid dynamics. I Stokes equation with pressure (incompressible case). The .4 4/-partial differential expression of the Stokes equation including pressure (in R1C3 ) is given by (see (3.1.87)) ! b b b C @ @ @ @ 0 S1 .@0 ; b : @/ WD b @> 0 An elementary calculation shows that the characteristic polynomial is . 2 @0 b @2 / .@0 b @2 /2 and the minimal polynomial is . 2 @0 b @2 / .@0 b @2 /: @/ is non-invertible (and so non-evolutionary), since its determiWe see that S1 .@0 ; b nant is @2 /2 : b @2 .@0 b
Nevertheless, a fundamental solution with time-support R0 can be found in the usual way expressed in terms of a fundamental solution for the Laplace operator and the fundamental solution of the heat equation. We shall not give the details in this case. The problem can be reduced to an evolutionary problem by eliminating the second component in the same way as discussed earlier, see Subsection 3.1.12.6. Note, however, that even for well-behaved right-hand sides we cannot expect the solution to be in H1 .@0;% C 1; b @ C e/ but only in H1 .jD0;% j/ ˝ H1 .j.D1 ; D2 ; D3 /j/. The non-uniqueness issue of the solution p of the equation b @2 p D g in H1 .jD0;% j/ ˝ H1 .j.D1 ; D2 ; D3 /j/ must also be addressed (see our later discussion of harmonic polynomials). I Linearized Navier–Stokes equations (incompressible case). Here we have an additional drift term: ! b b b b C @ @ s @ @ @ 0 N1;s .@0 ; b @/ WD : b @> 0 @, we find Comparing with the previous case, simply by replacing @0 with @0 s b . 2 .@0 s b @/ b @2 / .@0 s b @ b @2 /2 as characteristic polynomial and . 2 .@0 s b @/ b @2 / .@0 s b @ b @2 / @/ remains non-evolutionary since its as the minimal polynomial. Of course, N1;s .@0 ; b determinant is b @2 .@0 s b @ b @2 /2 : A fundamental solution can be found in the usual way expressed in terms of a fundamental solution for the Laplace operator and the fundamental solution of the heat equation with drift term. We omit the details. I Static Stokes equations. The static Stokes equations result from the dynamic Stokes equations, if we assume that the solution does not vary with time. The partial differential expression associated with them is given by ! b b b @ @ @ S1 .0; b @/ D b 0 @> which will be considered in R3 . The determinant is now .b @2 /3 and the characteristic polynomial is given by 7! . 2 C b @2 / . b @2 /2 :
The minimal polynomial is @2 / . b @2 / D 3 . 2 /b @2 .b @2 /2 7! . 2 C b D . 2 . 1/b @2 / .b @2 /2 : Noting that our earlier reasoning based on the Cayley–Hamilton theorem still applies, we see that a fundamental solution of S1 .0; @/is given by GsS D .S1 .0; b @/2 S1 .0; b @/ b @2 C b @2 / g 0
.2/
.2/
where g0 WD g0 g0 with g0 denoting a fundamental solution of Laplace’s equa.2/ tion. This can be re-written (using @2 g0 D g0 / as .2/ GsS D S1 .0; b @/2 g0 C S1 .0; b @/ g0 g0 :
3.2.2.7
Electro- and Magnetostatics
Maxwell’s equations in the expanded form involve the partial differential expression MM.@0 ; b @/, see (3.1.73). The corresponding static case results if we assume the solution is time-independent, which leads to 0 1 0 b @> 0 0 C B b @ 0 C B @ 0 b MM.0; b @/ D B C @ 0 b @ 0 b @A 0 0 b @> 0 in R3 . In this case we get
MM.0; b @/2 D b @2 188
and so, analogously to the Cauchy–Riemann operator, we have a fundamental solution @/ g0 188 G0;MM WD MM.0; b as a fundamental solution, where g0 is a fundamental solution of the Laplace operator b @2 . Similarly, in R2 we find 0 1 0 @1 @2 0 B @1 0 0 @2 C C MM.0; b @/ D B @ @2 0 0 @1 A 0 @2 @1 0 and again
@2 144 : MM.0; b @/2 D b
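The key algebraic fact used here, namely that the static Maxwell expression squares to the scalar operator times the identity, can be verified on the level of symbols. The following sketch is an added illustration only: since the printed matrix does not fix the signs of its entries, one consistent sign pattern for the 4x4 symbol in R^2 is assumed here, and the point is merely that such a constant-coefficient first-order matrix squares to (xi1^2 + xi2^2) times the 4x4 identity.

```python
import sympy as sp

xi1, xi2 = sp.symbols('xi1 xi2', real=True)

# One consistent sign choice for the 4x4 symbol in R^2 (illustrative assumption,
# not necessarily the sign pattern of the printed matrix).
M = sp.Matrix([[0,   xi1,  xi2,  0],
               [xi1, 0,    0,    xi2],
               [xi2, 0,    0,   -xi1],
               [0,   xi2, -xi1,  0]])

print(sp.simplify(M * M - (xi1**2 + xi2**2) * sp.eye(4)))   # zero matrix
```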
Thus, we obtain
e 0;MM WD MM.0; b G @/ g0 144
as the fundamental solution in this case⁴².

3.2.2.8 Force-free Magnetic Fields
Applying the (forward) Fourier–Laplace transform to Maxwell’s equations in the form discussed in Remark 3.1.66 yields the expression .i m0 C 1 % C i b @/
(3.2.99)
where % 2 R>0 . With 2 R replacing the operator m0 , a pointwise interpretation leads to the partial differential expression . C i % b @/ D curl C C i %;
2 R:
The case % ! 0C leads in turn to considering an equation of the form b @ H H D F: If F D 0, a solution H can be regarded as a magnetic field with vanishing Lorentz force, since then b @ H and H must be parallel. In this case, H would be called force-free in the region where F D 0. The determinant det.b @ / D .b @2 2 / reveals b @ as non-invertible and so non-evolutionary. Moreover, the characteristic polynomial is found as 7! . C / .b @2 . C /2 / D 3 3 2 C .b @2 3 2 / C .b @2 2 /: Correspondingly, with g˙i 0 denoting our earlier fundamental solution for the differential expression .b @2 2 /, we find for ¤ 0 @ /2 3 .b @ / . 2 b @2 / C 2 2 / 1 g˙i 0 G0;F F WD ..b D .b @ /2 1 g˙i 0 3 .b @ / g˙i 0 2 g˙i 0 1 ı D 1 @ @ g˙i 0 @ g˙i 0 1 ı; as a fundamental solution for force-free (FF) fields. For 2 R n ¹0º, magnetic fields in H1 .j.D1 ; D2 ; D3 /j/ which are force-free in all of R3 can be found from the characteristic polynomial. We have .b @ /..b @ /2 3 .b @ / C .3 2 C b @2 // D .b @2 2 /: 42 Note that, although, the fundamental solution g of b @2 does not decay in R2 , its derivatives do de0 cay. Thus, unique representations (see later) of decaying solutions can be given in terms of convolutions with e G 0;MM .
Therefore every (scalar) solution u of the homogeneous Helmholtz equation .b @2 2 / u D 0 yields a force-free solution Ha via Ha WD ..b @ /2 3 .b @ / C .3 2 C b @2 // u a for every a 2 C 3 . Such u can be found as the inverse Fourier transform of an element in H1 .j.D1 ; D2 ; D3 /j/ with support on @B.0; jj/. Again, we could consider the expanded system ! b @ b @ 1.44/ b @> 0 instead. Here we find ! ! b @ b @ 1.44/ b @> 0
! ! b @ b @ C 1.44/ D b @> 0
b @ b @ b @> 0
!2 2 1.44/
and b @ b @ > b @ 0
!2 D D
! b @ b @ b @b @> 0 @ 0 b @>b ! b @2 1.33/ 0 D b @2 1.44/ : 0 b @2
Thus, a fundamental solution is found for all 2 C from a fundamental solution g of the Helmholtz equation: ! ! b @ b @ C 1.44/ g 1.44/ ; G WD b @> 0 (recall that g is uniquely determined for 2 C n R). In order for the equation ! ! b @ b @ H F 1.44/ D > b ' f @ 0 to reduce to the above system we must again require the compatibility condition b @> F C f D 0: Then, for 2 C n R we find .b @2 2 / ' D b @> F C f D 0
(3.2.100)
and so ' D 0. For 2 R, we cannot exclude the possibility of a non-trivial component '. However, under assumption (3.2.100), a solution given in the form F G f is always of the desired form: F G D g f
! ! b F @ b @ C 1.44/ > b f @ 0 b @ F b @f C F D g : 0
From
! ! b @ b @ 1.44/ b @> 0
! ! b @ b @ @2 2 C 1.44/ D b b @> 0
it is also clear how to obtain force-free magnetic fields in R3 in this framework. Let U D 0 and b @> U C u D 0: .b @2 2 / u Then the first component of ! ! b b U @ b @ @ U b @u C U D C 1.44/ b u 0 @> 0 is a force-free magnetic field. 3.2.2.9
Beltrami Field Expansions
Force-free magnetic fields in R3 are also known as Beltrami fields. We shall now consider an associated generalized eigensolution expansion. According to our general functional-analytical considerations, we need to find a suitable diagonalization of i m. Note that such an investigation takes places in H3;3;3;1 .@ C e/. It is only later that we shall refer to the extended space H3;3;3;1 .jDj/. With43 1 0 m1 m3 m1 m3 C qi m2 jmj q i m2 jmj pm1 p p 2 jmj B 2 jmj m21 Cm22 2 jmj m21 Cm22 C C B C B m2 m3 C i m1 jmj m2 m3 q qi m1 jmj C pm2 p p V .m/ WD B 2 2 2 2 C 2 jmj B 2 jmj m1 Cm2 2 jmj m1 Cm2 B C q q @ A 2 2 m Cm m2 Cm2 2 p1 2 jmj
pm3 2 jmj
2 p1 2 jmj
43 The matrix V .x/ is uniquely determined by the diagonal matrix only almost everywhere and up to right-multiplication by a diagonal matrix with diagonal entries of absolute value 1 almost everywhere.
Section 3.2 Partial Differential Equations in H1 .jD j/ 1 Cjmj 0 0 i m D V .m/ @ 0 0 0 A V .m/ : 0 0 jmj 0
we have
Thus, we obtain the following generalized eigensolution expansion 1 0 100 FC1 WD F V .m/ @ 0 0 0 A; 000 0 1 000 F0 WD F V .m/ @ 0 1 0 A; 000 1 0 000 WD F V .m/ @ 0 0 0 A; F1 001 and so F h WD F V .m/ h D FC1 hC1 C F0 h0 C F1 h1 ;
with
X
h WD
hj ;
hk WD Fk f;
j D1;0;C1
E.g. V .x/ could be replaced by 0 B B V .x/ B @
The structure of V .x/ is in both cases 0
x1 Ci x2 q x12 Cx22
0
0
0 0
1 0
0
v1 .x/ @ v2 .x/ v3 .x/
x1 =jxj x2 =jxj x3 =jxj
x1 i x2 q x12 Cx22
1 C C C: A
1 v1 .x/ v2 .x/ A; v3 .x/
where in the latter case 1 v1 .x/ D p .1 C .x1 =jxj C i x2 =jxj/ x1 =jxj .1 C x3 =jxj/1 /; 2 1 v2 .x/ D p .i C .x1 =jxj C i x2 =jxj/ x2 =jxj .1 C x3 =jxj/1 /; 2 1 v3 .x/ D p .x1 =jxj C i x2 =jxj/ 2 for x … ¹0º ¹0º R0 , compare [33].
where Fk denotes the adjoint of Fk , k D 1; 0; C1. Applying the generalized eigensolution transform, e.g. to Maxwell’s operator .@0 C i b @/, yields44 the unitary equivalence 1 0 @0 C i jmj 0 0 A F D @0 C i b 0 0 @0 F @ @ 0 0 @0 i jmj X Fj .@0 C i j jmj/ Fj : D j D1;0;C1
The solution of
@0 U C i b @U DF
is therefore given by X U D
Fj .@0 C i j jmj/1 Fj F
j D1;0;C1
D
X
Fj R0 .m0 / exp.i j m0 jmj/ 0 Fj F
j D1;0;C1
D .R0 .m0 / exp.i m0 j@j// 0 P1 F C R0 .m0 / 0 P0 F C .R0 .m0 / exp.i m0 j@j// 0 PC1 F; where Pj WD Fj Fj are orthogonal projectors for j D 1; 0; C1, and 0 denotes convolution with respect to time. As pointed out before, this derivation makes sense in H3;3;3;1 .@ C e/. Clearly, for example, a point source of the form F ı¹p0 º a0 with a0 2 C 3 n ¹0º does not make sense if p0 2 ¹0º R ¹0º. For other p0 , however, we can define F ı¹p0 º a0 WD lim F j" .m p0 / a0 "!0C
and we find F ı¹p0 º a0 D exp.i m p0 / V .p0 / a0 : 44 One might prefer to consider the unitary mapping F F instead, yielding a unitary equivalence between 1 0 @0 C jb @j 0 0 C B 0 0 @0 A @ 0 0 @0 jb @j
and @0 C ib @.
Moreover, we have
1 jmj 0 0 b @ F ı¹p0 º a0 D F @ 0 0 0 A ı¹p0 º a0 0 0 jmj 1 0 jp0 j 0 0 D F ı¹p0 º @ 0 0 0 A a0 0 0 jp0 j X D j jp0 j Fj ı¹p0 º a0 ; 0
j D1;0;C1
that is b @ Fj ı¹p0 º a0 D j jp0 j Fj ı¹p0 º a0 ;
j D 1; 0; C1:
The latter justifies the term ‘generalized eigensolution’, since every non-trivial Fj ı¹p0 º a0 2 H3;3;3;1 .jDj/ is indeed an eigenfunction corresponding to the eigenvalue j jp0 j. The question of generalized eigenfunction expansions could have been attacked with the extended Maxwell operator ! ib @ b @ @0 C : b @> 0 In this case we would need to diagonalize the block matrix i m m : m 0 The discussion proceeds in the framework of H3;4;4;1 .@Ce/. We first find a unitary matrix 0 1 m1 m3 m2 m3 i m1 jmj qi m2 jmj q pm1 pm2 p i i p 2 jmj B 2 jmj 2 jmj m21 Cm22 2 jmj m21 Cm22 C B C B m2 m2 m3 C i m1 jmj m1 m3 C i m2 jmj C m1 q q p C Bp p i ip 2 2 2 2 C B 2 jmj 2 jmj 2 jmj m Cm 2 jmj m Cm 1 2 1 2 C V .m/ WD B q C B m21 Cm22 C B m3 1 p p C Bp 0 B 2 jmj C 2 jmj 2 q @ A m21 Cm22 m 1 3 p p p 0 2
such that
0 jmj B 0 i m m D V .m/ B @ 0 m 0 0
2 jmj
2 jmj
1 0 0 0 jmj 0 0 C C V .m/ : 0 jmj 0 A 0 0 jmj
Since we have multiple diagonal elements, we are led to combine the resulting generalized eigensolution expansion in the following way: 0 1 1000 B0 1 0 0C C FC WD F V .m/ B @ 0 0 0 0 A; 0000 1 0 0000 B0 0 0 0C C F WD F V .m/ B @ 0 0 1 0 A; 0001 and so F h WD F V .m/ h D FC hC C F h
where h WD hC C h ;
h˙ WD F˙ f:
Applying generalized eigensolution transform to the extended Maxwell operator this @ b @ @0 C ib yields the unitary equivalence b @> 0 0 1 0 0 0 @0 C i jmj B C 0 @0 C i jmj 0 0 CF F B A @ 0 0 @0 i jmj 0 0 0 0 @0 i jmj ! ib @ b @ D @0 C > b @ 0 .@0 C i jmj/ FC C F .@0 i jmj/ FC : D FC
The solution of @0 U C
! ib @ b @ U DF b @> 0
is therefore given by U D FC .@0 C i jmj/1 FC F C CF .@0 i jmj/1 F F D FC .R0 .m0 / exp.i m0 jmj// 0 FC F
C F .R0 .m0 / exp.i m0 jmj// 0 F F D .R0 .m0 / exp.i m0 j@j// 0 P F C .R0 .m0 / exp.i m0 j@j// 0 PC F;
where P˙ WD F˙ F˙
are orthogonal projections. We now find F ı¹p0 º a0 D exp.i m p0 / V .p0 / a0 for a0 2 C 4 and p0 2 R3 n ¹0º. Moreover, we have ! ib @ b @ F˙ ı¹p0 º a0 D ˙ jp0 j F˙ ı¹p0 º a0 2 H3;4;4;1 .jDj/ b @> 0 showing that F˙ ı¹p0 º a0 is a generalized eigensolutions. Here we conclude our initial discussion of examples illustrating the utility of our extension of the solution theory to H1 .jD j/ for suitable choices of 2 RnC1 . The convolution concept introduced above has enabled us to represent solutions in terms of convolutions of a fundamental solution with suitable right-hand sides. Whereas in the invertible case we have seen that the unique solution can be given in terms of a convolution for arbitrary right-hand sides in the appropriate Sobolev lattice, the situation is – not surprisingly – more complicated in the non-invertible case. Although there is still a fundamental solution in H1 .jD j/, the right-hand sides cannot be chosen arbitrarily. This is reflected in difficulties to ensure the existence of convolutions in H1 .jDj/.
3.2.3 Convolutions with Particular Elements of $H_{-\infty}(|D_\nu|)$, $\nu\in\mathbb{R}^{n+1}$

The existence of convolutions is often a difficult issue and we shall not attempt to consider this issue in full generality. Generally speaking we can say that, whenever the product of the Fourier–Laplace transforms is well-defined, then so is the convolution of the original distributions. For our purposes it is sufficient to describe appropriate special cases where the convolution is well-defined. We want to do this in slightly more generality than in the above examples, where only convolution with fundamental solutions in the invertible case was considered. A first special case will be the consideration of convolutions with $\psi\in H_{\infty}(|D_\nu|)$. In this case we have formally
j i;0 D .2/.nC1/=2 hb g .m i / b. i /jb
. i /i0;0
. i/i0;0 : D .2/.nC1/=2 hb. i/jb g .m i / b
It suffices to show that b g .m i / jH˛ .jD0 j/ W H˛ .jD0 j/ ! H˛ .jD0 j/ for all sufficiently large ˛ 2 N nC1 , in order to confirm the existence of g in g .m i / jH˛ .jD0 j/ H1 .jD j/ for all 2 H1 .jD j/. The stated continuity of b
follows from the Leibniz rule using the fact that for g 2 H1 .jD j/ all derivatives of b g . i / 2 H1 .jDj/ are bounded. This follows from the Sobolev type estimate derived below. As by-products of this observation we have that the convolution defined in Definition 3.2.8 is indeed well-defined and that H1 .jDj/ is a multiplicative algebra, i.e.
WD .m/
D
.m/ 2 H1 .jDj/
for ; 2 H1 .jDj/. It is not always feasible to impose such strong assumptions on one of the convolution factors. For many purposes it will, however, be sufficiently general to consider convolutions with elements in H1 .jD j/ having compact support. The following theorem describes this particular class of convolutions. Theorem 3.2.9. Let f 2 H1 .jD j/ have compact support. Then g f exists in H1 .jD j/ for any g 2 H1 .jD j/. Proof. For simplicity let D 0. Then, since f 2 H1 .jDj/, we know that f is well-defined on H1 .jDj/. Moreover, since f has compact support, we may take a real-valued cut-off function 0 2 CV 1 .RnC1 / with 0 D 1 on a ball around the origin containing the support of f to obtain f D 0 .m/ f: From this we see that f 2 H1 .@ C e/: Indeed, let f 2 H2˛ .jDj/ for some ˛ 2 N nC1 . Then, from Lemma 3.2.1 and the Leibniz rule, we obtain hf j i0;0 D h 0 .m/f j i0;0 D hf j 0 .m/ i0;0 C j.D D/˛ . 0 .m/ /j0;0 X C jm @ˇ . 0 .m/ /j0;0 Cˇ 2˛
C
X
j@ˇ j0;0
Cˇ 2˛
C j.@ C e/2˛ j0;0 for some generic constant C 2 R>0 and all 2 CV 1 .RnC1 /. Thus, we see that f 2 H2˛ .@ C e/. As before, reversing the argument with the isometric re-scaling operator 1 , we find h
0
gj i0;0 D hg j .1
0/
i0;0
for all ; 0 2 CV 1 .RnC1 /. Clearly, .1 0 / 2 CV 1 .RnC1 / H1 .jDj/. We shall see that the convolution .1 0 / can be extended simply by continuous extension to 0 D f 2 H1 .jDj/ with supp. 0 / compact and indeed we still have .1 0 / 2 H1 .jDj/. We may assume that the support of 0 is contained in the set where the cut-off function 0 is equal to 1. The Fourier transform b of such a WD 1 0 D 1 0 .m/ 0 2 H1 .@ C e/ is then differentiable to any order. Moreover, since @ b 2 H1 .i m C e/ for every 2 N nC1 , we have that for every 2 N nC1 there exists ˛ 2 N nC1 such that Z j .i x C e/˛ @ b.x/j2 dx < 1: RnC1
To continue our proof and for later purposes we need a simple .n C 1/-dimensional version of Sobolev’s lemma. Lemma 3.2.10. The mapping CV 1 .RnC1 / H2e .jDj/ ! ¹ 2 C0 .RnC1 /j j j1;1=2 < 1º; u 7! .t 7! u.t // has a continuous extension to all of H2e .jDj/ (“trace operator”) with respect to the norm of H2e .jDj/ and the image norm ˇ ² ³ j .t / .s/j ˇˇ nC1 nC1 º C sup ^t ¤s : j j1;1=2 WD sup¹j .t /j j t 2 R t; s 2 R jt sj1=2 ˇ In particular, if f 2 H2e .jDj/ and . k /k is a sequence in CV 1 .RnC1 / converging to f in H2e .jDj/, then limk!1 k exists in C0 .RnC1 /, is independent of the particular approximating sequence and equals f . Furthermore, the mapping is injective. Proof. By the fundamental theorem of calculus we obtain
.x0 ; : : : ; xn / Z Z D D
@0 @n .t0 ; : : : ; tn / dtn dt0 1;x0 1;xn Z x0 Z xn j.it0 C 1/ .itn C 1/j@0 @n .t0 ; : : : ; tn / 1
j.it0 C 1/ .itn C 1/j
1
dtn dt0
and so, applying the Cauchy–Schwarz inequality, Z nC1 2 2 j .x0 ; : : : ; xn /j j.it C 1/j dt j.i m C 1/e @e j20;0
R nC1
j.i m C 1/e @e j20;0 :
Similarly, we get
.x0 ; : : : ; xn / .y0 ; : : : ; yn / n X D . .x0 ; : : : ; xk ; ykC1 ; : : : ; yn / .x0 ; : : : ; xk1 ; yk ; : : : ; yn // D
kD0 n X
Z
Z 1;x0
kD0
Z
1;yn
Z
xk
1;xk1 yk
Z 1;ykC1
@0 @n .t0 ; : : : ; tn /dtn dt0
and consequently j .x0 ; : : : ; xn / .y0 ; : : : ; yn /j2 n X n jarctan.xk / arctan.yk /j j.i m C 1/e @e j20;0 n
p
kD0 n X
jxk yk j j.i m C 1/e @e j20;0
kD0
n C 1 n jx yj j.i m C 1/e @e j20;0 ;
showing the required continuity estimate. Note that the term j.i m C 1/e @e j0;0 can be further estimated by jjDj2e j20;0 D j.D D/e j20;0 . The desired injectivity follows since convergence in H2e .jDj/ implies convergence in L2 .RnC1 /. Indeed, if f D 0, then by construction we have a sequence . k /k converging to f in H2e .jDj/, thus in L2 .RnC1 /, and to zero point-wise. Therefore, f must vanish. Returning to the proof of the last theorem, we note that, according to this lemma, we also have that for every 2 N nC1 there is an ˛ 2 N nC1 such that the function x 7! .i x C e/˛ @ b.x/ is bounded. Therefore, for any ˇ 2 N nC1 , we can find a sufficiently large ˛ 2 N nC1 , ˛ e, such that jjDjˇ b.m/ b
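In one space dimension the estimate of Lemma 3.2.10 reads sup|phi|^2 <= pi * integral of (t^2+1)|phi'(t)|^2 dt, since the integral of (1+t^2)^(-1) over the real line equals pi. The following numerical sketch is an added illustration with a Gaussian test function (it is not part of the proof) and simply confirms the inequality in this special case.

```python
import numpy as np
from scipy.integrate import quad

phi  = lambda t: np.exp(-t**2)                 # test function
dphi = lambda t: -2 * t * np.exp(-t**2)        # its derivative

rhs = np.pi * quad(lambda t: (t**2 + 1) * dphi(t)**2, -np.inf, np.inf)[0]
lhs = phi(0.0)**2                              # sup |phi|^2 is attained at t = 0
print(lhs, rhs, lhs <= rhs)                    # the Sobolev-type bound holds
```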
j20;0 X C
jm @ b.m/ b
/j20;0
Cˇ; ;2N nC1 ˇ
ˇ2 X ˇ ˇ
b bˇ ˇ C .@ /.m/ @ ˇ ˇm 0;0 Cˇ; ;2N nC1 ˇ
CD; ;2N nC1 ˇ2 X X ˇ ˇ C˛ ˛ b bˇ ˇ C .i m C e/ .@ /.m/@ ˇ ˇ.i m C e/ X
Cˇ; ;2N nC1
C 0
X
Cˇ; ;2N nC1
j20;0 C 0 jjDj˛Cˇ b
CD; ;2N nC1
j20;0 jm .i m C e/˛ @b
0;0
for some generic constant C 2 R>0 and 0 D sup¹j.i x C e/˛ @ b.x/j2 j x 2 RnC1 ; 2 N nC1 ; ˇº: We obtain an estimate of the form 0 C j.i m C e/e˛ @ bj20;0 D C j.@ C e/e˛ m 0 .m/ j20;0 C j.@ C e/e˛ j20;0 for some generic constant C 2 R>0 (independent of and ). Thus, we obtain by applying inverse Fourier transformation for every ˇ 2 N nC1 a sufficiently large 2 N nC1 such that jjDjˇ ..1
0/
/j0;0 C j.@ C e/ j0;0 jjDj Cˇ j0;0
(3.2.101)
for some generic constant C 2 R>0 (independent of and ). Since ˇ 2 N nC1 was arbitrary this shows that .1 0 / 2 H1 .jDj/. Estimate (3.2.101) finally also shows that convolution .1 0 / can be extended by continuity to .1 f / . In particular, this confirms that f g 2 H1 .jDj/ is well-defined. The result carries over to the general case 2 RnC1 . For this, note in particular that, since f has compact support, we have that f 2 H1 .jD j/ for any 2 RnC1 . For later purposes we record the following slight refinement of the above version of Sobolev’s lemma. Corollary 3.2.11. For f 2 H2e .jDj/ we have that f can be considered as a continuous function, i.e. f 2 C0 .RnC1 /. If f 2 H4e .jDj/ then f is in C1 .RnC1 /. If f 2 H1 .jDj/ then f is in C1 .RnC1 /. Proof. By the fundamental theorem of calculus we obtain @k .x0 ; : : : ; xn / D
1 h
D
1 h
Z Z
.x0 ; : : : ; xk1 ; xk C h; xkC1 ; : : : ; xn / .x0 ; : : : ; xn / h
xk Ch xk
.@k .x0 ; : : : ; xn / @k .x0 ; : : : ; xk1 ; tk ; xkC1 ; : : : ; xn // dtk
xk Ch Z xk
xk
tk
@2k .x0 ; : : : ; xk1 ; s; xkC1 ; : : : ; xn / ds dtk
and so, applying the above Sobolev lemma, ˇ2 ˇ ˇ ˇ ˇ@k .x0 ; : : : ; xn / .x0 ; : : : ; xk1 ; xk C h; xkC1 ; : : : ; xn / .x0 ; : : : ; xn / ˇ ˇ ˇ h jhj nC1 j.i m C 1/e @eC2ek j20;0 holds for all 2 H4e .jDj/. Since k D 0; : : : ; n is arbitrary, we read off that has continuous partial derivatives. The last statement follows by induction.
In special non-invertible cases it is customary to establish particular solution theories by continuous extension to suitable spaces. Uniqueness is enforced by additional constraints. It is not surprising that these adjustments have to be made, since we are trying to solve in the non-invertible case (i.e. ‘in the spectrum’), where by definition the standard solution theory fails. The observation that convolutions can sometimes be written as convolution type integral operators is frequently utilized in this context. The next subsection provides a discussion of some of the issues involved.
3.2.4 Integral Representations of Convolutions with Fundamental Solutions

It may be comforting to know that convolutions as abstract objects are not always far away from convolution integrals. In order to obtain integral representations one has to further restrict the class of elements the convolution is applied to. We shall frequently utilize $H_{\infty}(|D_\nu|)$ for appropriate $\nu\in\mathbb{R}^{n+1}$ or even $\mathring{C}_\infty(\mathbb{R}^{n+1})$ for this purpose.

3.2.4.1 An Integral Representation of the Solution of the Transport Equation

We recall that, for the forward fundamental solution $s_{\nu,+}$, we found, with the particular parameter choice $\nu=(\varrho,0,\dots,0)$,
\[
\langle s_{\nu,+}\,|\,\varphi\rangle_{\nu,0}=\langle s_{\nu,+}\,|\,\exp(-2\varrho m_0)\,\varphi\rangle_{0,0}=\int_{\mathbb{R}_{>0}}\varphi(t,t\,s)\,\exp(-2t\varrho)\,\mathrm{d}t
\]
for all $\varphi\in\mathring{C}_\infty(\mathbb{R}^{1+n})$. Thus, we find for the convolution $s_{\nu,+}\ast\psi$, $\psi\in\mathring{C}_\infty(\mathbb{R}^{1+n})$, that
\[
(\sigma_{-1}\psi\ast\varphi)(t,x)=\int_{\mathbb{R}}\int_{\mathbb{R}^n}\psi(-(t-r),-(x-y))\,\varphi(r,y)\,\mathrm{d}r\,\mathrm{d}y,
\]
\[
\langle s_{\nu,+}\ast\psi\,|\,\varphi\rangle_{0,0}=\langle s_{\nu,+}\,|\,(\sigma_{-1}\psi)\ast\varphi\rangle_{0,0}
=\int_{\mathbb{R}_{>0}}\int_{\mathbb{R}^{1+n}}\psi(-t+r,-t\,s+y)\,\varphi(r,y)\,\mathrm{d}r\,\mathrm{d}y\,\mathrm{d}t
=\int_{\mathbb{R}^{1+n}}\Bigl(\int_{\mathbb{R}_{>0}}\psi(-t+r,-t\,s+y)\,\mathrm{d}t\Bigr)\varphi(r,y)\,\mathrm{d}r\,\mathrm{d}y.
\]
Consequently,
\[
(s_{\nu,+}\ast\psi)(t,x)=\int_{\mathbb{R}_{>0}}\psi(t-r,x-r\,s)\,\mathrm{d}r=\int_{]-\infty,t[}\psi(u,x-(t-u)\,s)\,\mathrm{d}u
\]
for $\psi\in\mathring{C}_\infty(\mathbb{R}^{1+n})$.
3.2.4.2 Potentials, Single and Double Layers in R3
Recalling that $\partial^2$ in $\mathbb{R}^3$ has $g_0$ given by $x\mapsto\frac{1}{4\pi|x|}$ as a fundamental solution, we find for $\varphi,\psi\in\mathring{C}_\infty(\mathbb{R}^3)$
\[
\langle g_0\ast\psi\,|\,\varphi\rangle_{0,0}=\langle g_0\,|\,(\sigma_{-1}\psi)\ast\varphi\rangle_{0,0}
=\int_{\mathbb{R}^3}\frac{1}{4\pi|x|}\int_{\mathbb{R}^3}\psi(x+y)\,\varphi(y)\,\mathrm{d}y\,\mathrm{d}x
=\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{1}{4\pi|x|}\,\psi(x+y)\,\varphi(y)\,\mathrm{d}x\,\mathrm{d}y
=\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{1}{4\pi|y-z|}\,\psi(z)\,\varphi(y)\,\mathrm{d}z\,\mathrm{d}y.
\]
The latter shows that $g_0\ast\psi$ is given by the function
\[
x\mapsto\int_{\mathbb{R}^3}\frac{1}{4\pi|x-z|}\,\psi(z)\,\mathrm{d}z=\Bigl(\frac{1}{4\pi r}\ast\psi\Bigr)(x),\qquad x\in\mathbb{R}^3,
\]
where $r(x)=|x|$ and
\[
\frac{1}{4\pi r}(x):=\begin{cases}\dfrac{1}{4\pi|x|}&\text{for }x\neq0,\\[1mm]0&\text{for }x=0.\end{cases}
\]
Thus, for sufficiently well-behaved right-hand sides $f$, a particular solution $u$ of
\[
\partial^2u=f
\tag{3.2.102}
\]
can be given as a convolution integral
\[
u(x)=\Bigl(\frac{1}{4\pi r}\ast f\Bigr)(x)=\int_{\mathbb{R}^3}\frac{1}{4\pi|x-z|}\,f(z)\,\mathrm{d}z.
\]
We already noted that we cannot expect uniqueness of solution to hold. In fact, every solution of (3.2.102) is given by
\[
u=\frac{1}{4\pi r}\ast f+u_0,
\tag{3.2.103}
\]
where $u_0$ is a solution of
\[
\partial^2u_0=0.
\tag{3.2.104}
\]
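For a concrete right-hand side the convolution integral can be evaluated numerically. The following sketch is an added illustration (not from the text): it estimates the potential of the indicator function of the unit ball by Monte Carlo integration and compares it with the classical closed form (3R^2 - |x|^2)/6, valid inside a ball of radius R = 1; questions of the overall sign convention are left aside here.

```python
import numpy as np

rng = np.random.default_rng(0)

def newton_potential(x, n=200_000):
    """Monte Carlo estimate of int_{|z|<1} dz / (4*pi*|x - z|)."""
    z = rng.uniform(-1, 1, size=(n, 3))
    z = z[np.einsum('ij,ij->i', z, z) < 1]          # keep samples in the unit ball
    vals = 1 / (4 * np.pi * np.linalg.norm(x - z, axis=1))
    return vals.mean() * (4 / 3) * np.pi            # mean value times ball volume

x = np.array([0.5, 0.0, 0.0])
exact_inside = (3 - np.dot(x, x)) / 6               # (3 R^2 - |x|^2)/6 with R = 1
print(newton_potential(x), exact_inside)
```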
Remark 3.2.12. For g0 , the term ‘potential’ is frequently used. This term is physically motivated since, in suitable physical contexts, grad g0 can be interpreted as a force. Conversely, a force K generated by a function V as K D grad V
is called a potential force and V is called a potential of K. If div K D 0 in an open set , then the force is called divergence-free in and the region in which this holds is called source-free. In this terminology, the above g0 is a potential for K WD grad g0 and K is divergence-free in R3 n ¹0º. This is why a solution theory for the equation @2 u D f is also called a ‘potential theory’. From (3.2.103) and (3.2.104) we see that in order to establish a solution theory for (3.2.102) we need to gain additional insight into the null space of @2 . Although we are specifically concerned here with the three-dimensional case, we shall consider the general n-dimensional case to analyze the null space. Lemma 3.2.13. If u 2 Hn;1;1;1 .jDj/ satisfies @2 u D 0, then u is a polynomial. u satisfies the Proof. Let u be a solution of @2 u D 0. Then, its Fourier transform b equation m2 b u D 0: As a consequence we have that suppb u ¹0º: We shall now show that b u 2 LinC ¹@˛ ı j˛ 2 N nC1 º so that the claim of the lemma then follows by applying the inverse Fourier transformation. We have that b u 2 H2˛ .jDj/ for some ˛ 2 N nC1 . Then, there must be a constant C 2 R>0 such that jhb u j i0;0 j C j.D D/˛ j0;0
(3.2.105)
for all 2 CV 1 .RnC1 /. Consider h 2 CV 1 .R/ with supp h Œ1; 1, and supp.1 h/ Rn 1=2; C1=2 Œ: Q Then g" .x/ WD nkD0 h.xk ="/ for x D .x0 ; : : : ; xn / 2 RnC1 and " 2 R>0 defines a family .g" /"2R>0 of functions in CV 1 .RnC1 /. First we observe that u j .1 g" / i0;0 C hb u j g" i0;0 D hb u j g" i0;0 : hb u j i0;0 D hb
For " D 2 we obtain from (3.2.105) with a new constant C 0 jhb u j i0;0 j D jhb u j g2 i0;0 j (3.2.106)
C j.D D/˛ .g2 /j0;0 X C0 j@ j0;0 02˛
for all 2 CV 1 .RnC1 /. For WD . .m/ we have 2 CV 1 .RnC1 / and
P
1 2N nC1 ;jj1 j2˛j1 C1 Š
@ .0/ m /g1
@ .0/ D 0 for multi-indices 2 N nC1 ; jj1 j2˛j1 C 1. Consequently, we have a generic constant C such that j@ .x/j C jxjj2˛j1 C1jj1 for all x 2 RnC1 and so noting that @ g" D @ g1 .="/ "jj1 we get u j g" jhb u j i0;0 j D jhb
X
i0;0 j C 0
j@ .g"
/j0;0 ! 0 as " ! 0C:
02˛
Thus, we have shown that u j g1 .m/ i0;0 hb u j i0;0 D hb ˇ X ˇ D b u ˇˇ
2N nC1 ;jj1 j2˛j1 C1
X
D
2N nC1 ;jj1 j2˛j1 C1
D
1 @ .0/ m g1 Š
0;0
1 @ .0/ hb u jm g1 i0;0 Š
X 2N nC1 ;jj1 j2˛j1 C1
hb u jm g1 i0;0
ˇ 1 ˇˇ @ ıˇ Š 0;0
for all 2 CV 1 .RnC1 /, showing that b u 2 LinC ¹@˛ ı j˛ 2 N nC1 º. The homogeneous solutions of the Laplace equation are therefore polynomials called harmonic polynomials. The last lemma implies uniqueness of solution of
(3.2.102) up to arbitrary harmonic polynomials. To find harmonic polynomials45 is a purely algebraic question about relations between coefficients. For suitable right-hand sides, existence of solutions can be established via convolution. For example, if the right-hand side f of the .n C 1/-dimensional Laplace equation @2 u D f 45 Harmonic polynomials are obtainable in a systematic way by analytic means involving the Kelvin transform. The Kelvin transform is induced by the following reflexion at the boundary of the unit ball
s W RnC1 n ¹0º ! RnC1 n ¹0º; x 7!
1 x jxj2
with s W RnC1 n ¹0º ! RnC1 n ¹0º; x 7!
1 x jxj2
as its inverse. The Kelvin transform is now given as K W C1 .RnC1 n ¹0º/ ! C1 .RnC1 n ¹0º/; u 7! jmj1n u ı s: The main feature of the Kelvin transform is that it transforms harmonic functions in an open set RnC1 n ¹0º to harmonic functions in the open set sŒ RnC1 n ¹0º. Harmonic functions in RnC1 n ¹0º are given e.g. by the fundamental solution g0 and their derivatives. Here g0 depends on the dimension in the following way: 8 1 for n D 0; ˆ 1
[email protected];1/j jxj n
for x 2
RnC1
n ¹0º. Thus, p˛ WD
K.@˛ g0 /
are all analytic in RnC1 n ¹0º, ˛ 2 N nC1 . In particular,
8 1 ˆ 1
for x 2 RnC1 n ¹.0; 0/º. Due to the decay at infinity of @˛ g0 for ˛ 2 N nC1 , n 2 N>1 , or ˛ 2 N 2 n¹.0; 0/º or ˛ 2 Nn¹0; 1º, the singularity of p˛ at 0 is removable and so p˛ must be a (homogeneous) harmonic polynomial in these cases. For n D 0 we have 1 p0 .x/ D ; 2
1 p1 .x/ D x 2
for x 2 R describing a basis of all harmonic polynomials in R. For n D 1 we have that .K.@˛ g0 //˛2N 2 n¹.0;0/º consists of non-constant, homogeneous, harmonic polynomials spanning all harmonic polynomials in R2 with p.0/ D 0, [44]. For n > 1 we have that .K.@˛ g0 //˛2N nC1 spans all harmonic polynomials in RnC1 , for more details see [44].
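Footnote 45 lends itself to a quick machine check. The following sympy sketch is our addition (it is not part of the text and uses the common normalization $g_0(x) = 1/(4\pi|x|)$ for $\mathbb{R}^3$, so constants and signs may differ from the conventions used here); it verifies that $g_0$ is harmonic away from the origin and that the Kelvin transform of $\partial_1 g_0$ coincides with the linear, hence harmonic, polynomial $-x_1/(4\pi)$.

```python
# Symbolic sanity check for the three-dimensional case n + 1 = 3 (assumed normalization).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)
g0 = 1/(4*sp.pi*r)                         # Newtonian kernel in R^3

def laplacian(u):
    return sum(sp.diff(u, v, 2) for v in (x1, x2, x3))

def kelvin(u):
    # (K u)(x) = |x|^{-1} u(x/|x|^2), the Kelvin transform for n + 1 = 3
    return u.subs({x1: x1/r**2, x2: x2/r**2, x3: x3/r**2}, simultaneous=True) / r

print(sp.simplify(laplacian(g0)))          # expected: 0 (g0 is harmonic away from 0)

p1 = kelvin(sp.diff(g0, x1))               # should agree with -x1/(4*pi) pointwise
pt = {x1: 0.3, x2: -1.2, x3: 0.7}
print(float(p1.subs(pt)), float(-pt[x1]/(4*sp.pi)))
```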
is assumed to be in CV 1 .RnC1 /, n 2 N>1 , then uniqueness of solution can be established by imposing additional decay conditions at 1 in order to exclude harmonic polynomials as contributors to a solution. Let us return to the case n D 3. (The case n D 2 will be treated separately). We first note the following regularity result. Lemma 3.2.14. Let u 2 H1 .jD1 j; jD2 j; jD3 j/ satisfy @2 u 2 CV 1 .R3 /: Then u 2 C1 .R3 /: Proof. Since (3-dimensional) polynomials are certainly in C1 .R3 /, we only need to consider the regularity of 1 uD f 4 r with f WD @2 u 2 CV 1 .R3 /. We see that Z 1 f .x z/ dz; u.x/ D R3 4 jzj
x 2 R3 ;
from which we can read off the desired regularity by noting that we may differentiate u by differentiating f ‘under the integral’. A local variant of this regularity result also holds. It will be derived for arbitrary dimensions. Lemma 3.2.15. Let u 2 H1 .jDj/ and @2 u D f C g; with g 2 C1 .RnC1 / and f; g 2 H1 .jDj/ with supp f compact, n 2 N. Then u 2 C1 .RnC1 n supp f /: Proof. Consider j" .x0 ; : : : ; xn / D "n1 j.x0 ="/ j.xn ="/, where j 2 CV 1 .R/ is chosen as in Example 3.19. Since regularity is a local property, we may just consider an arbitrary open cube Q.x; %0 / WD x0 %0 ; x0 C %0 Œ xn %0 ; xn C %0 Œ with x 2 RnC1 n supp f and %0 2 R>0 so small that Q.x; 2 %0 / RnC1 n supp f . Since u 2 H1 .jDj/ we have u 2 Hk .jDj/ for some k 2 N nC1 and with 0 WD j"0 Q.x;%0 / , 0 < "0 < %0 , we see that 0 has compact support and 0 .m/ u
2 Hk .@ C e/:
Letting
WD j"1 Q.x;%0 "0 / , 0 < "1 < %0 "0 , we have 1 .m/ u
D
1 .m/
0 .m/u
and @2
1 .m/ u
D
1 .m/ .f
D
1 .m/ g
C .@2
C g/ 2 .@
1 /.m/
2 .@
1 /.m/
.@
1 /.m/
0 .m/u:
@u C .@2
1 /.m/ u
0 /.m/ u
Since 1 and 0 have compact support the right-hand side is a distribution with compact support. Therefore @2
1 .m/ u
C
1u
D
1 .m/ g
C .@2
2 .@
1 /.m/
.@
1 /.m/
0 .m/u
C
0 /.m/u/ 1 .m/
0 .m/ u
which yields 1 .m/ u
D .@2 C 1/1 . C .@
2
1 /.m/
1 .m/ g 0 .m/u
2 @.@
C
1 .m/
1 .m/
0 .m/u/
0 .m/ u/
in H1 .@ C e/. We note that 1 .m/ g 2 H1 .@ C e/ and obtain that .@2 C 1/1 . 1 .m/ g C .@2 1 /.m/ 0 .m/u C 1 .m/ 0 .m/ u/ is an element of intersection T 2 2 1 is bounded for j D 0; : : : ; n. j D0;:::;n HkC2ej .@Ce/, since .@j ˙1/ .@ C1/ 2 Similarly, we find that the contributing part .@ C 1/1 @.@ 1 .m/ 0 .m/u/ is T in j D0;:::;n HkCej .@ C e/, since .@k ˙ 1/ .@2 C 1/1 @j is bounded for j; k D 0; : : : ; n. Thus, we have \ HkCej .@ C e/: 1 .m/ u 2 j D0;:::;n
Defining recursively
mC1
we obtain, assuming "j WD
WD j"mC1 Q.x;% 2j C1 m
, j 2 N, that 1
Pm 0 j D0 "j /
, 0 < "m < % 0
Pm
j D0 "j ,
on Q.x; %0 /
for 0 < < %0 and all m 2 N. With D %20 and 0 WD j%0 =8 Q.x;%0 =4/ , we have
0 .m/ u D 0 .m/ s .m/ u for all s 2 N and by induction
0 .m/ u 2 H1 .@ C e/ and so also
0 .m/ u 2 H1 .jDj/: By Lemma 3.2.10 and Corollary 3.2.11 we obtain 0 .m/ u 2 C1 .RnC1 / or u 2 C1 .Q.x; %0 =8//:
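As a numerical illustration of the convolution solution and of the interior regularity just proved, one can take $f$ to be the characteristic function of the unit ball. The following Monte-Carlo sketch is our addition and assumes the standard Newtonian kernel $1/(4\pi|x-y|)$ (normalization and signs as in the classical literature, not necessarily as in the text); the potential it computes has an elementary closed form which is real-analytic away from $\operatorname{supp} f$, in line with Lemma 3.2.15.

```python
# Monte-Carlo check of the Newtonian potential of the unit ball B:
#   N(x) = int_B (4*pi*|x-y|)^{-1} dy = (3 - |x|^2)/6 for |x| <= 1,  1/(3*|x|) for |x| >= 1.
import numpy as np

rng = np.random.default_rng(0)

def newton_potential_mc(x, samples=200_000):
    # sample uniformly from the unit ball by rejection from the cube [-1, 1]^3
    pts = rng.uniform(-1.0, 1.0, size=(samples, 3))
    pts = pts[np.sum(pts**2, axis=1) <= 1.0]
    vol_ball = 4.0 * np.pi / 3.0
    integrand = 1.0 / (4.0 * np.pi * np.linalg.norm(pts - x, axis=1))
    return vol_ball * integrand.mean()

def newton_potential_exact(x):
    s = np.linalg.norm(x)
    return (3.0 - s**2) / 6.0 if s <= 1.0 else 1.0 / (3.0 * s)

for x in (np.zeros(3), np.array([0.5, 0.0, 0.0]), np.array([2.0, 1.0, 0.0])):
    print(newton_potential_mc(x), newton_potential_exact(x))
```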
In the same way a more general regularity result follows. It is frequently referred to as Weyl’s lemma. Corollary 3.2.16. Let u 2 H1 .jDj/ and @2 u C a @u C b u D f C g; with g 2 C1 .RnC1 / and f; g 2 H1 .jDj/ with supp f compact, a 2 C nC1 ; b 2 C, n 2 N. Then u 2 C1 .RnC1 n supp f /: For the extension of the solution theory for @2 , the next inequalities – known as Poincaré estimates – are of interest. Lemma 3.2.17. Let uD
1 f 4 r
with f 2 CV 1 .R3 /. Then we have ˇ ˇ ˇ 1 ˇ C ˇ ˇ u ˇ r C i ˇ 4 j.r C i/ f j0 0 R where C WD sup¹ R3 jxyj1jyj5=2 dyjx 2 R3 ^ jxj D 1º. Proof. We have46 ˇ2 Z ˇ ˇ ˇ 1 ˇ u.x/ˇˇ dx ˇ R3 jxj C i ˇZ ˇ2 Z ˇ ˇ 1 1 ˇ ˇ dx f .z/ dz D ˇ ˇ 2 R3 jxj C 1 R3 4 jx zj Z 1 2 R3 jxj C 1 ˇ2 ˇZ ˇ ˇ 1 2 1=2 1=4 ˇ .jzj C 1/ jzj jf .z/j dz ˇˇ dx ˇ 2 1=2 1=4 jzj R3 4 jx zj .jzj C 1/ Z Z 1 1 1 dz 16 2 R3 jxj2 C 1 R3 jx zj .jzj2 C 1/ jzj1=2 Z jzj2 C 1 1=2 jzj jf .z/j2 dz dx: R3 jx zj 46
We remind the reader that, for the sake of brevity, we are using here the customary formal estimation process, which finds its justification only in the validity of the 'last' estimate. The existence of all previously used integrals then follows by tracing the formal argument in reverse order, which finally makes the formal estimates rigorous.
We find for x ¤ 0 with the substitution y D z=jxj that Z
Z
1
R3
jx zj .jzj2 C 1/ jzj1=2
1 dz jx zj jzj5=2 Z 1 1 D dy 1=2 5=2 jxj R3 jx=jxj yj jyj C ; jxj1=2
dz
R3
(3.2.107)
where ²Z C WD sup
R3
ˇ ³ ˇ 1 3 ˇ dy x 2 R ^ jxj D 1 : jx yj jyj5=2 ˇ
Inserting this into the estimate for u and using (3.2.107) once again, we obtain as desired ˇ2 ˇ ˇ ˇ 1 ˇ u.x/ˇˇ dx ˇ R3 jxj C i Z Z C 1 jzj2 C 1 1=2 jzj jf .z/j2 dz dx 16 2 R3 .jxj2 C 1/ jxj1=2 R3 jx zj Z Z 1 C dx .jzj2 C 1/ jzj1=2 jf .z/j2 dz D 2 2 1=2 jx zj 16 R3 R3 .jxj C 1/ jxj Z C2 .jzj2 C 1/ jf .z/j2 dz: 16 2 R3
Z
Considering r as a multiplication operator in L2 .R3 /, we see that we are here concerned with the spaces H˙1 .r C i/ of the associated Sobolev chain .Hk .r C i//k2Z . The set CV 1 .R3 / can be seen to be dense in H1 .r C i/ by a simple cut-off argument, (which we will not carry out in detail). The density of CV 1 .R3 / in H1 .r C i/ (H0 .r Ci/ WD L2 .R3 /) then shows that the last lemma yields that 41 r W CV 1 .R3 / H1 .r Ci/ ! H1 .r Ci/ can be extended by continuity to a continuous mapping from H1 .r C i/ to H1 .r C i/. As a consequence we obtain Theorem 3.2.18. Let f 2 H1 .r C i/. Then @2 u D f has a unique solution in H1 .r C i/, so that .@2 /1 W H1 .r C i/ ! H1 .r C i/ is a bounded, linear operator.
Proof. Let S W H1 .r C i/ ! H1 .r C i/ denote the continuous extension of the convolution operator 41 r W CV 1 .R3 / H1 .r C i/ ! H1 .r C i/. Then, since Sf 2 H1 .r C i/ ,! H.1;1;1/ .jD1 j; jD2 j; jD3 j/, we have @2 Sf D f
in H.3;3;3/ .jD1 j; jD2 j; jD3 j/;
(3.2.108)
for f 2 CV 1 .R3 /. Since convergence of f in H1 .r C i/ implies convergence in the space H0 .jD1 j; jD2 j; jD3 j/ and convergence of Sf in H1 .r C i/ implies convergence in H.1;1;1/ .jD1 j; jD2 j; jD3 j/, we have by continuity of @2 in H1 .jD1 j; jD2 j; jD3 j/ and by taking limits that equation (3.2.108) still holds for f 2 H1 .r C i/. Thus we have existence of a solution in H1 .r C i/. Uniqueness now follows since a solution of @2 u D 0 must be a (harmonic) polynomial and the only polynomial in H1 .r C i/ is the zero polynomial. The continuity of the solution operator is known explicitly by construction according to Lemma 3.2.17 and has indeed been utilized already for the above approximation. The particular asymptotic behavior of the solutions just introduced is also reflected in the so-called Poincaré estimate in R3 . We shall record this type of estimates in the following lemma (again for arbitrary dimensions). Lemma 3.2.19. For all
$\psi \in \mathring{C}_\infty(\mathbb{R}^{n+1})$, $n \in \mathbb{N}$, we have the following estimates.

(a) If $n = 0$ then $|r^{-1}\psi|_0^2 \le 4\,|\partial\psi|_0^2$ with $\psi \in \mathring{C}_\infty(\mathbb{R}\setminus\{0\})$.

(b) If $n = 1$ then $|(r\,\ln(r))^{-1}\psi|_0^2 \le 4\,\bigl(|\partial_0\psi|_0^2 + |\partial_1\psi|_0^2\bigr)$ with$^{47}$ $\psi \in \mathring{C}_\infty\bigl(\mathbb{R}^2\setminus(\{0\}\cup\partial B_{\mathbb{R}^2}(0,1))\bigr)$.

(c) If $n > 1$ then
$$|r^{-1}\psi|_0^2 \le \frac{4}{(n-1)^2}\sum_{k=0}^{n}|\partial_k\psi|_0^2.$$
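Before turning to the proof, a quick numerical sanity check of part (c) for $n+1 = 3$ (constant $4/(n-1)^2 = 4$) may be helpful. The sketch below is our addition; it uses the radial trial function $\psi(x) = e^{-|x|^2/2}$, which is not compactly supported but is easily approximated by elements of $\mathring{C}_\infty$.

```python
# Radial quadrature check of | r^{-1} psi |_0^2 <= 4 | grad psi |_0^2 in R^3.
import numpy as np

r = np.linspace(1e-6, 20.0, 400_001)
dr = r[1] - r[0]
psi = np.exp(-r**2 / 2.0)
dpsi = -r * psi                                        # |grad psi| = |psi'(r)| for radial psi

lhs = np.sum((psi / r)**2 * 4.0 * np.pi * r**2) * dr       # | r^{-1} psi |_0^2
rhs = 4.0 * np.sum(dpsi**2 * 4.0 * np.pi * r**2) * dr      # 4 * | grad psi |_0^2

print(lhs, rhs, lhs <= rhs)   # ~ 11.14, ~ 33.41, True
```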
Proof. For 2 CV 1 .R/ we clearly have Z j˛.r/ .r/ C @ .r/j2 r n dr 0 R0
(3.2.109)
47 Recall that we denote a ball of radius R 2 R >0 centered at x in some normed space E by BE .x; R/. Its boundary, the sphere, is denoted by @BE .x; R/.
where the sufficiently regular real-valued coefficient function ˛ is to be determined later. We first obtain at least formally Z Z j@ .r/j2 r n dr j˛.r/ .r/j2 r n dr R0
R0
Z
2 Re Z D Z D
˛.r/ .r/ @ .r/ r n dr R0
Z 2 n
R0
R0
j˛.r/ .r/j r dr
˛.r/@.j .r/j2 /r n dr R0
.r n @.r n ˛.r// ˛.r/2 /j .r/j2 r n dr:
We shall attempt to determine ˛ in such a way that r n @ .r n ˛.r// ˛.r/2 c0 ˛.r/2
(3.2.110)
is satisfied for some c0 2 R>0 . Singularities of suitable ˛ may result in further constraints on to make the above calculation rigorous. Let us first consider the case n D 0. In this case, we want to satisfy @ ˛.r/ .1 C c0 / ˛.r/2 : Clearly, ˛.r/ WD =r, 2 R, satisfies this estimate if .1 C c0 / 2 > 0 or
We set WD
1 ./ > 0: 1 C c0 1 1Cc0
2 R>0 and choose c0 such that c0 .1 C c0 /2
is maximal, i.e. c0 D 1. This concludes the case n D 0. Note that in this case, we have assumed additionally that 2 CV 1 .R n ¹0º/ to avoid problems associated with the singularity of ˛. For n > 1, (3.2.110) also holds for ˛.r/ D =r if .n 1/ .1 C c0 / 2 > 0 or
.n 1/ > 0: .1 C c0 /
With $c_0 = 1$ we get
$$\int_{\mathbb{R}_{>0}} \bigl|r^{-1}\psi(r)\bigr|^2\, r^n\,\mathrm{d}r \le \frac{4}{(n-1)^2}\int_{\mathbb{R}_{>0}} \bigl|\partial\psi(r)\bigr|^2\, r^n\,\mathrm{d}r.$$
Replacing by r 7! .r x0 /, 2 CV 1 .RnC1 /, and integrating with respect to x0 in the unit sphere we obtain (polar coordinates!) the desired estimate jr
1
j20
ˇ ˇm 4 ˇ @ .n 1/2 ˇ jmj
Finally, for n D 1, let ˛.r/ WD
. r ln.r/
ˇ2 nC1 X ˇ 4 ˇ j@k ˇ .n 1/2 0
j20 :
kD1
Then the desired inequality (3.2.110) holds if
.1 C c0 / 2 > 0: Here we must assume additionally 2 CV 1 .R n ¹0; 1º/. The same reasoning as in the case n > 1 now yields the desired Poincaré type inequality. The constraints on imply the requirement 2 CV 1 .R2 n .¹0º [ @BR2 .0; 1///. Remark 3.2.20. The connection between these estimates and the Laplacian is given by the following observation. The above estimates are valid in the closed subspace HnC1 generated by the closure of the specified functions in CV 1 .RnC1 / with respect to the Hilbert space norm 48
7! j@ j0 in H1 .jDj/. Defining49 grad W H.nC1/ ˇ.r/1 ŒL2 .RnC1 / !
M
L2 .RnC1 /
0;:::;n 48
According to the above Poincaré type estimates this norm is equivalent to q
7! jˇ.r/ j20 C j@ j20 ;
where the weight ˇ.r/ is given by ´ ˇ.r/ WD 49
1 r
1 r ln r
for n ¤ 1; for n D 1:
Here .m/1 ŒL2 .RnC1 / abbreviates the Hilbert space ¹ 2 L2;loc .RnC1 /j .m/ 2 L2 .RnC1 /º
with norm
7! j .m/ j0 ; where is a measurable function vanishing only on a Lebesgue null set.
as the closure of DnC1 CV 1 .RnC1 / ˇ.r/1 ŒL2 .RnC1 / ! L2 .RnC1 /;
7! @ and div W D.div/
M
L2 .RnC1 / ! ˇ.r/ ŒL2 .RnC1 /
0;:::;n
as its negative adjoint, i.e. ‰ 2 D.div/ if there is an f 2 ˇ.r/ ŒL2 .RnC1 / such that hgrad j‰i0 D hˇ.r/ j ˇ.r/1 f i0 for all 2 DnC1 , where 8 V ˆ 1 ;
hgrad j‰i0 D hˇ.r/ jˇ.r/1 f i0
± ;
2DnC1
we may consider the composition div grad as a particular realization of the negative Laplacian @2 . We have D. div grad/ D ¹ 2 HnC1 j grad 2 D.div/º. Now the above Poincaré type estimates yield, for 2 D. div grad/ jˇ.r/ j20 CnC1 jgrad j20 D CnC1 hgrad
j grad i0
CnC1 h j div grad i0 CnC1 hˇ.r/ j ˇ.r/1 div grad i0 CnC1 jˇ.r/ j0 jˇ.r/1 div grad j0 ; where
´ CnC1 WD
2 jn1j
for n ¤ 1;
2
for n D 1:
Consequently, we have
$$|\beta(r)\,\psi|_0 \le C_{n+1}\,\bigl|\beta(r)^{-1}\operatorname{div}\operatorname{grad}\psi\bigr|_0$$
for all $\psi \in D(-\operatorname{div}\operatorname{grad})$. The solution theory of the equation
$$-\operatorname{div}\operatorname{grad} u = f$$
is now provided by a simple application of Riesz representation formula. Indeed, let f 2 ˇ.r/ŒL2 .RnC1 /. Then f generates a continuous linear functional 7! hˇ.r/1 f j ˇ.r/ i0 on HnC1 , since jhˇ.r/1 f j ˇ.r/ i0 j jˇ.r/1 f j0 jˇ.r/ j0 CnC1 jˇ.r/1 f j0 j@ j0 for all 2 D.grad/ D HnC1 . This functional can be represented by an element u 2 HnC1 as hgrad u j grad i0 D h@ u j @ i0 D hˇ.r/1 f j ˇ.r/ i0 for every
2 HnC1 . By definition of div, this implies that grad u 2 D.div/ and div grad u D f:
This shows existence. Since the argument can be reversed uniqueness also follows and since the Riesz map is the solution operator, continuous dependence on the data is also clear. In particular for n D 2, our earlier solution theory for the 3-dimensional case is reproduced. Returning for sake of simplicity to the case n D 3, we now want to briefly outline the possibility of considering so-called boundary value problems in the framework of our “whole-space” solution theory. Let R3 be a bounded domain with a smooth boundary @, i.e. is a bounded, smooth submanifold with boundary in R3 . In order to find the proper formulation for a boundary value problem in this setting, let us assume that u is a smooth function on R3 such that @2 u D f: Using the characteristic function of as a ‘cut-off’-function we obtain @2 w D f C 2
3 X
.@k / @k u C .@2 / u
kD1
with w WD u. We want to evaluate the distributions @k , k D 1; 2; 3, and @2 .
We find, for all $\psi \in \mathring{C}_\infty(\mathbb{R}^3)$,
$$\langle \partial_k\chi_\Omega \mid \psi\rangle_0 = -\langle \chi_\Omega \mid \partial_k\psi\rangle_0 = -\int_{\mathbb{R}^3}\chi_\Omega(x)\,\partial_k\psi(x)\,\mathrm{d}x = -\int_\Omega \partial_k\psi(x)\,\mathrm{d}x = -\int_{\partial\Omega} n_k(x)\,\psi(x)\,\mathrm{d}S_x,$$
where $n_k$, $k = 1, 2, 3$, denotes the $k$-th component of the exterior normal vector to the boundary manifold $\partial\Omega$ (according to Gauss' theorem). The latter, however, is by definition equal to $-\langle n_k\,\delta_{\partial\Omega} \mid \psi\rangle_0$ and so
$$\partial_k\chi_\Omega = -n_k\,\delta_{\partial\Omega}.$$
(3.2.111)
Since $\delta_{\partial\Omega}$ appears to describe Dirac-$\delta$-sources spread over $\partial\Omega$ one speaks of a monopole layer or single layer on $\partial\Omega$, here with a weight factor $-n_k$, $k = 1, 2, 3$. Now, it is easy to calculate
$$\partial^2\chi_\Omega = \sum_{k=1}^{3}\partial_k^2\chi_\Omega = -\sum_{k=1}^{3}\partial_k\bigl(n_k\,\delta_{\partial\Omega}\bigr) = -\Bigl(\sum_{k=1}^{3} n_k\,\partial_k\delta_{\partial\Omega} + \sum_{k=1}^{3}(\partial_k n_k)\,\delta_{\partial\Omega}\Bigr).$$
Thus,
$$\partial^2\chi_\Omega = -\bigl(n\cdot\partial\,\delta_{\partial\Omega} + (\operatorname{div} n)\,\delta_{\partial\Omega}\bigr) =: -\partial\cdot(n\,\delta_{\partial\Omega}).$$
3 ˇ E ˇ X D E ˇ ˇ @k .nk ı@ / ˇ D ı@ ˇ n k @k 0
kD1
kD1
Z D
3 X
@
Z D
nk .x/ @k .x/ dSx
kD1
lim Z
@ h!0
D lim
h!0
0
@
.x C h n/ .x h n/ dSx 2h .x C h n/ .x h n/ dSx : 2h
The latter can be interpreted as 3 X kD1
@k .nk ı@ / D lim
h!0
1 .hn ı@ hn ı@ / 2h
and is therefore referred to with physical justification50 as a dipole layer or double layer. As a consequence we find the following form of our induced equation for w: @2 w D f C .n @u/ ı@ C u @ .n ı@ /: The monopole layer ı@ is weighted by n @ u and the dipole layer @ .n ı@ / is weighted by u. Since ı@ has support on @ the weight factors are also only used on ı@ . Consequently, n @ u and u need only be known on @ in order to recover w from the right-hand side. The data uj@ is referred to as Dirichlet boundary data and n @ uj@ is known as Neumann boundary data for the current problem. The right-hand side is a distribution with compact support and thus we have a solution w0 WD
1 . f C .n @u/ ı@ C u @ .n ı@ //: 4 r
(3.2.112)
This solution is smooth outside of the support of f and away from @ according to our above regularity lemma. Moreover, we have the following asymptotic decay property. Lemma 3.2.21. Let f 2 H1 .jD1 j; jD2 j; jD3 j/ with supp f compact. Then RnŒR;R .r/
1 f 2 H1 .r C i/ 4 r
for all sufficiently large R 2 R>0 . Proof. We have @2
1 f Df 4 r
and using a real-valued cut-off function 0 2 CV 1 .R3 / with 0 D 1 on supp f we obtain, as in the proof of Lemma 3.2.15, 1 1 1 2 @ .1 0 / f D .1 0 / f C 2 @ 0 @ f C @2 0 f 4 r 4 r 4 r 1 1 f C @2 0 f: D 2 @ 0 @ 4 r 4 r 50 Indeed, ı hn @ hn ı@ is the sum of a (shifted) monopole layer hn ı@ and another monopole layer hn ı@ of opposite sign (‘charge’) (shifted in the opposite direction). Such a configuration is referred to as a dipole. Since we consider here the limit of the scaled monopole layer sum
1 . ı C .hn ı@ // 2h hn @ as h ! 0, we are actually considering something one might call an ‘infinitesimal’ dipole layer.
The right-hand side, according to Lemma 3.2.15, is an element in CV 1 .R3 /. We calculate with a second real-valued function 1 2 CV 1 .R3 / with 1 D 1 on supp f and
0 D 1 on supp 1 : ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ ˇ 1 ˇ ˇ 1 ˇDˇ ˇ ˇ .1 0 / f ˇˇ f ˇˇ .1 0 / ˇ ˇ ˇ ˇ 4 r 4 r 0 0 ˇ ˇ ˇ ˇ ˇ ˇ 1 ˇ .1 0 / D ˇˇ f ˇˇ ˇ 4 r 0 ˇ ˇ ˇ ˇ 1 ˇ ˇ ˇ ˇ ˇ .1 0 / D ˇ 1 f ˇ ˇ 4 r 0 ˇ ˇ ˇ ˇ 1 .1 0 / ˇˇ jf j˛ ˇˇ.@ C e/˛ 1 4 r 0
for some ˛ 2 N 3 . Considering the last term, ˇ ˇ2 ˇ ˇ ˇ.@ C e/˛ 1 1 .1 0 / ˇ ˇ ˇ 4 r 0 ˇ2 Z Z ˇ ˇ ˇ 1 ˇ.@ C e/˛ 1 .1
D .y// .y/ dy .x/ˇˇ dx 0 ˇ R3 R3 4 j yj ˇ2 ˇZ Z ˇ ˇ 1 ˛ ˇ D .y/ dy ˇˇ dx ˇ 3 .@ C e/ 1 4 j yj .x/ .1 0 .y// supp 1 R j.r C i/ j20 ˇ Z Z ˇ 2 1 ˇ ˛ .jyj C 1/ ˇ .@ C e/ 1 supp 1
R3
It is not hard to see that ˇ Z ˇ .jyj2 C 1/1 ˇˇ .@ C e/˛ 1 R3
which confirms that ˇ ˇ ˇ .1 0 / 1 f ˇ 4 r
ˇ2 ˇ 1 .x/ .1 0 .y// ˇˇ dy dx: 4 j yj
ˇ2 ˇ 1 .x/ .1 0 .y// ˇˇ dy dx < 1; 4 j yj
ˇ ˇ ˇ ˇ ˇ ˇ ˇ jf j˛ ˇ.@ C e/˛ 1 1 .1 0 / ˇ ˇ ˇ ˇ 4 r 0
jf j˛ C0 j.r C i/ j0 ; for all
2 CV 1 .R3 / and some C0 2 R>0 . This proves that .1 0 /
1 f 2 H1 .r C i/ 4 r
ˇ ˇ ˇ ˇ
0
and so in particular RnŒR;R .r/
1 1 f D RnŒR;R .r/ .1 0 / f 2 H1 .r C i/ 4 r 4 r
for all sufficiently large R 2 R>0 . From (3.2.112) and Lemma 3.2.21 we find that RnŒR;R .r/ .w0 w/ 2 H1 .r C i/
(3.2.113)
for sufficiently large R 2 R>0 . Since w0 w must be a harmonic polynomial and the only polynomial satisfying (3.2.113) is the zero polynomial we have w D w0 D
1 . f C .n @u/ ı@ C u @ .n ı@ //: 4 r
Thus we have found what is known as the representation theorem, which states how a solution can be expressed in terms of the source term and Dirichlet and Neumann boundary data. Clearly, the same reasoning can be applied to the case where is unbounded if @ and supp f remain bounded. Theorem 3.2.22. Let w 2 H1 .jD1 j; jD2 j; jD3 j/ be such that RnŒR;R .r/ w 2 H1 .r C i/
(3.2.114)
for a sufficiently large R 2 R>0 , wjN continuously differentiable in a neighborhood of the smooth, compact boundary @ of a domain R3 and @2 w D f C .n @w/j@ ı@ C wj@ @ .n ı@ /
(3.2.115)
with f 2 H0 .jD1 j; jD2 j; jD3 j/. Then wD
1 1 f C ..n @w/j@ ı@ C wj@ @ .n ı@ //: 4 r 4 r
Proof. The result is just a slight refinement of the above derivation by being more specific about the ‘smoothness’ of w. The above uniqueness argument yields the given representation. However, we need to convince ourselves that the right-hand side of (3.2.115) defines a distribution with compact support. Let us first consider terms of the form ' ı@ where ' is a continuous function on @. We have sZ ˇZ ˇ sZ ˇ ˇ '.x/ .x/ dSx ˇˇ j'.x/j2 dSx jh' ı@ j i0 j D ˇˇ @
@
@
j .x/j2 dSx
and with n as a continuously differentiable extension of the exterior unit normal to all of Z Z 2 j .x/j dSx D jnj2 j .x/j2 dSx @
@
D
3 Z X kD1
Z D
@k .nk j ./j2 /.x/ dx Z
.div n/.x/j .x/j2 dx C 2 Re
n @ .x/ .x/ dx
C12 j j2e for some constant C1 2 R>0 and all
2 CV 1 .R3 /. This implies51 sZ
jh' ı@ j i0 j C1
@
j'.x/j2 dSx j je
so that ' ı@ 2 H.1;1;1/ .@ C e/ H1 .@ C e/: Similarly we also see that ' @ .n ı@ / 2 H1 .@ C e/ if ' is continuously differentiable in a neighborhood of @. It is interesting to investigate the term 1 ..n @w/j@ ı@ C wj@ @ .n ı@ // 4 r more closely. It can indeed be written as a boundary integral. We calculate ˇ ˇ ˇ ˇ 1 1 ˇ ˇ .' ı@ / ˇ D ' ı@ ˇ 4 r 4 r 0 0 Z Z 1 .y/ dy dSx D '.x/ @ R3 4 jx yj Z Z 1 dSx .y/ dy: D '.x/ 3 4 jx yj R @ Thus we see that 1 .' ı@ / D 4 r
Z @
1 '.x/ dSx : 4 j xj
(3.2.116)
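A short numerical check of (3.2.116) may be useful; the sketch below is our addition and uses the standard normalization of the kernel $1/(4\pi|x-y|)$. For the unit sphere and constant weight $\varphi = 1$ the single layer potential equals $1$ inside and $1/|x|$ outside, which is Newton's theorem for surface charges.

```python
# Quadrature check of the single layer potential of the unit sphere with density 1.
import numpy as np

def single_layer_unit_sphere(x, n_theta=400, n_phi=800):
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing='ij')
    y = np.stack((np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)), axis=-1)
    dS = np.sin(T) * (np.pi / (n_theta - 1)) * (2.0 * np.pi / n_phi)   # surface element
    kernel = 1.0 / (4.0 * np.pi * np.linalg.norm(y - x, axis=-1))
    return np.sum(kernel * dS)

for x in ([0.0, 0.0, 0.0], [0.5, 0.2, 0.0], [0.0, 0.0, 3.0]):
    x = np.asarray(x)
    exact = 1.0 if np.linalg.norm(x) <= 1.0 else 1.0 / np.linalg.norm(x)
    print(single_layer_unit_sphere(x), exact)
```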
51 Here it becomes apparent that it would also be sufficient to require ' 2 L .@/. More subtle trace 2 theorems would yield even more general boundary weights '.
Slightly more involved, we find ˇ ˇ 1 .' @ .n ı@ // ˇˇ 4 r 0 ˇ ˇ 1 ˇ D ' @ .n ı@ / ˇ 4 r 0 ˇ ˇ 1 D @ .n ı@ / ˇˇ ' 4 r 0 ˇ 3 X ˇ 1 D nk ı@ ˇˇ @k ' 4 r kD1
D
3 Z X kD1
Z
D Z
@
nk .x/ @k ' @
Z R3
R3
0
1 .y/ dy .x/ dSx 4 j yj
1 .y/ dy dSx 4 jx yj 1 .x/ .y/ dy dSx ; .n @/ 4 j yj
.n @ '/.x/ '.x/
@
Z
Z
R3
and so 1 .' @ .n ı@ // 4 r Z 1 D n.y/ @ '.y/ dSy @ 4 j yj Z 1 .y/ '.y/ dSy : n.y/ @ z 7! 4 j zj @
(3.2.117)
Here the somewhat clumsy looking notation @ .z 7! 4 j1 zj / has been used to ensure that the differentiation with respect to correct variable is properly expressed. Using (3.2.116) and (3.2.117) we obtain 1 ..n @w/j@ ı@ C wj@ @ .n ı@ // (3.2.118) 4 r Z 1 1 n.y/ @w.y/ w.y/ n.y/ @ z 7! .y/ dSy : D 4 j yj 4 j zj @ Denoting the directional derivative .n@/ evaluated at a point y 2 R3 in the classical way by @
.y/ WD ..n @/ /.y/ @n.y/
the above representation can be written slightly more traditionally (but ambiguously) as
$$w = -\frac{1}{4\pi r}*\chi_\Omega f + \int_{\partial\Omega}\Bigl(\frac{1}{4\pi|\,\cdot\,-y|}\,\frac{\partial w}{\partial n(y)}(y) - w(y)\,\frac{\partial}{\partial n(y)}\frac{1}{4\pi|\,\cdot\,-y|}\Bigr)\,\mathrm{d}S_y.$$
Of course, noting that
$$\chi_K\,\frac{1}{4\pi r} \in L_2(\mathbb{R}^3)$$
for any compact set $K$, we may write the representation formula of potential theory fully in integral form:
$$w = -\int_\Omega \frac{1}{4\pi|\,\cdot\,-y|}\,f(y)\,\mathrm{d}y + \int_{\partial\Omega}\Bigl(\frac{1}{4\pi|\,\cdot\,-y|}\,\frac{\partial w}{\partial n(y)}(y) - w(y)\,\frac{\partial}{\partial n(y)}\frac{1}{4\pi|\,\cdot\,-y|}\Bigr)\,\mathrm{d}S_y.$$
N We can of course also In our construction we have assumed that w D 0 in R3 n . patch two non-trivial solutions together in the following way: w WD wC C w ; with w supported in and wC supported in R3 n , where is a bounded domain with smooth boundary. Thus, we have @2 w D f C .n @w /j@ ı@ C w j@ @ .n ı@ / and @2 wC D .n @wC /j@ ı@ wC j@ @ .n ı@ /; where n denotes the exterior normal to the boundary @ of . Note that n is then N Thus we have the exterior normal to the boundary @ of the domain R3 n . @2 w D f Œn @w@ ı@ Œw@ @ .n ı@ / with Œn @w@ WD .n @wC /j@ .n @w /j@ and Œw@ WD wC j@ w j@ as ‘jump terms’ across the boundary @ (from to its complement). The only solution with the asymptotic behavior of (3.2.114) can then be represented by Z 1 wD f .y/ dy 4 j yj Z 1 1 @ Œn @w@ .y/ Œw@ .y/ dSy : 4 j yj @n.y/ 4 j yj @
Under suitable regularity assumptions consider, for arbitrary data f and jump values g; h (g is vector-valued), the solution of @2 w D f g ı@ h @ .n ı@ / given by Z wD
@
By Gauss’ theorem Z Z f .x/ dx D
0D
@ 1 1 g.y/ h.y/ dSy : 4j yj @n.y/ 4j yj Z
2
and analogously
Therefore,
Z
1 f .y/ dy 4j yj
(3.2.119)
@ w.x/ dx D
Z
@
n.x/ @w .x/ dSx
Z @2 w.x/ dx D
N R3 n
Z
@
n.x/ @wC .x/ dSx :
Z
f .x/ dx D
@
Œn @w@ .x/ dSx :
(3.2.120)
This indicates that the data are not completely arbitrary but must be compatible in the sense of (3.2.120). In order for w to be a solution with jumps g; h respectively, it is necessary to have Z Z
f .x/ dx D
g.x/ dSx : @
If one is interested in a solution in a domain , then the actual choice of an extension to the complement of is irrelevant, and the degrees of freedom introduced by the arbitrariness of the extension can be utilized to adapt the solution in in such a way that it matches certain boundary conditions. For example, assuming f 0 and Œn @w@ D 0, we get Z @ 1 Œw@ .y/ dSy : (3.2.121) wD @n.y/ 4 j yj @ Under suitable assumptions the limit to the boundary can be performed yielding the so-called jump conditions: Z 1 1 @ w j@ D Œw@ C dSy on @ (3.2.122) Œw@ .y/ 2 @n.y/ 4 j yj @ and wC j@
1 D Œw@ C 2
Z Œw@ .y/ @
1 @ dSy @n.y/ 4 j yj
on @:
(3.2.123)
In other words we have wC j@ C w j@ D 2
Z Œw@ .y/ @
@ 1 dSy @n.y/ 4 j yj
on @:
(3.2.124)
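The jump relations can be made concrete numerically. The sketch below is our addition and is deliberately agnostic about the sign conventions obscured above: for the unit sphere and constant density the double layer integral is $-1$ inside, $0$ outside and $-1/2$ in the limit on the boundary, so the boundary value is the mean of the two one-sided limits, in agreement with (3.2.124).

```python
# Double layer potential of the unit sphere with constant density 1:
#   D(x) = int_S d/dn(y) (1/(4*pi*|x-y|)) dS_y, with n the exterior unit normal.
import numpy as np

def double_layer_unit_sphere(x, n_theta=600, n_phi=1200):
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing='ij')
    y = np.stack((np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)), axis=-1)
    n = y                                            # exterior unit normal of the unit sphere
    dS = np.sin(T) * (np.pi / (n_theta - 1)) * (2.0 * np.pi / n_phi)
    d = x - y
    dist = np.linalg.norm(d, axis=-1)
    # d/dn(y) (1/(4*pi*|x-y|)) = n(y) . (x - y) / (4*pi*|x-y|^3)
    kernel = np.sum(n * d, axis=-1) / (4.0 * np.pi * dist**3)
    return np.sum(kernel * dS)

print(double_layer_unit_sphere(np.array([0.3, 0.0, 0.0])))   # ~ -1  (inside)
print(double_layer_unit_sphere(np.array([2.0, 0.0, 0.0])))   # ~  0  (outside)
```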
If, for example, wC j@ D dC is given (Dirichlet boundary condition; exterior Dirichlet boundary value problem; for the interior Dirichlet boundary value problem w j@ would have to be prescribed) then Œw@ can be determined as a solution of Z 1 1 @ dSy on @: Œw@ .y/ dC D Œw@ C 2 @n.y/ 4 j yj @ That a solution exists is due to a Fredholm alternative resulting from the compactness of Z 1 @ ' 7! dSy on @ '.y/ @n.y/ 4 j yj @ as an operator in L2 .@/. Similar ideas can be utilized for the Neumann boundary condition, where n @w˙ is prescribed, respectively (exterior and interior Neumann boundary value problem). Example 3.23. It may be illuminating and entertaining to note that similar arguments can be used in the 1-dimensional case. We can even demonstrate the integral equation method in this rather trivial case. The fundamental solution is here given by x 7! 12 jxj. The decomposition of R into a bounded open interval and the unbounded complement now needs five parts: R D 1; a1 Œ [ ¹a1 º [ a1 ; aC1 Œ [ ¹aC1 º [ aC1 ; C1Œ ; a1 < aC1 . Accordingly, we assemble
D 1;a1 Œ .m/ 1 C a1 ;aC1 Œ .m/ 0 C aC1 ;1Œ .m/ C1 with k – say – harmonic in k (with 0 D a1 ; aC1 Œ , 1 D aC1 ; C1Œ and 1 D 1; a1 Œ ). Then the graph of k must be a straight line segment in k for k D 1; 0; C1. Thus, for y 2 R n ¹a1 ; aC1 º we expect (for volume source term f D 0 and using the “surface element” dS somewhat jokingly, since the boundary “integral” is of course nothing but the difference of the evaluations at the boundary points) Z 1
.y/ D .Œ@ .x/ jy xj Œ .x/ sgn jy xj/ dSx 2 @ a1 ;aC1 Œ 1 1 D C Œ@ .a1 / jy a1 j Œ@ .aC1 / jy aC1 j 2 2 1 1 Œ .a1 / sgn.y a1 / C Œ .aC1 / sgn.y aC1 /; 2 2
where jumps are taken always by subtracting the term stemming from the bounded part from the term stemming from the unbounded part Œ@ .a1 / D @ 1 .a1 / @ 0 .a1 C/; Œ@ .aC1 / D @ C1 .aC1 C/ @ 0 .aC1 /; Œ .a1 / D 1 .a1 / 0 .a1 C/; Œ .aC1 / D C1 .aC1 C/ 0 .aC1 /: Note that the unit normal vector – taken as pointing into the unbounded part – is at a1 just 1 and at aC1 it is C1. As an asymptotic condition52 enforcing unique solvability and the validity of the representation formula we may use here
.x/ C1 jxj C0 sgn.x/ ! 0
as x ! ˙1
(3.2.125)
for some constants C0 ; C1 2 C. In the 1-dimensional case the only possible harmonic polynomials are spanned by x 7! 1 and x 7! x and these are indeed excluded by (3.2.125). Since ˙1 .y/ D @ ˙1 .a˙1 ˙/ .y a˙1 / C ˙1 .a˙1 ˙/, the asymptotic assumption (3.2.125) yields 1 1 C1 D @ C1 .aC1 C/ D @ 1 .a1 / D C Œ@ .aC1 / Œ@ .a1 / 2 2 and C0 D @ C1 .aC1 / aC1 C C1 .aC1 / D @ 1 .a1 / a1 1 .a1 / 1 1 1 1 D C Œ@ .a1 / a1 Œ@ .aC1 / aC1 Œ .a1 / C Œ .aC1 /: 2 2 2 2 With regards to the above mentioned boundary integral equation method, we note here, considering the limits to the boundary, that we have “jump relations” 1 1 1
.a1 / D Œ@ .aC1 / .aC1 a1 / C Œ .a1 / Œ .aC1 /; 2 2 2 1 1 1
.a1 C/ D Œ@ .aC1 / .aC1 a1 / Œ .a1 / Œ .aC1 /; 2 2 2 1 1 1
.aC1 / D C Œ@ .a1 / .aC1 a1 / Œ .a1 / Œ .aC1 /; 2 2 2 1 1 1
.aC1 C/ D C Œ@ .a1 / .aC1 a1 / Œ .a1 / C Œ .aC1 /; 2 2 2 52 Of course in this case the asymptotic behavior is in fact the actual vanishing of .x/ C jxj 1 C0 sgn.x/ (for jxj > max¹ja1 j; jaC1 jº).
or
.aC1 / C .aC1 C/ 1 D C Œ@ .a1 / .aC1 a1 / 2 2
.a1 / C .a1 C/ 1 D Œ@ .aC1 / .aC1 a1 / 2 2
1 Œ .a1 /; 2 1 Œ .aC1 /: 2
Œ .a˙1 / to suitably reduce the degrees of freeAssuming Œ@ .a˙1 / D .aC1 1 a1 / dom, we obtain in the so called interior domain case for Dirichlet boundary conditions, i.e. the data .a˙1 / are prescribed, 1 Œ .aC1 /
.aC1 / D :
.a1 C/ 2 Œ .a1 / Œ .a˙1 / we obtain in the so called exteSimilarly choosing Œ@ .a˙1 / D .aC1˙1 a1 / rior domain case for Dirichlet boundary conditions, i.e. .a˙1 ˙/ prescribed, 1 Œ .aC1 /
.aC1 C/ D :
.a1 / 2 Œ .a1 / Thus, we have Œ .aC1 / D ˙2 .aC1 ˙/, Œ .a1 / D 2 .a1 ˙/, where the upper sign refers to the exterior, the lower sign to the interior domain case. Consequently,
.y/ D ˙
1 1
.a1 / jy a1 j ˙
.aC1 ˙/ jy aC1 j .aC1 a1 / .aC1 a1 /
.a1 / sgn.y a1 / ˙ .aC1 ˙/ sgn.y aC1 /; where the upper sign and lower sign provide the solution of the exterior and interior boundary value problem, respectively. To consider Neumann type boundary conditions, i.e. @ .a1 C/ D @ .aC1 / for the interior domain case53 and @ .a1 / D @ .aC1 C/ for the exterior domain case54 , we let Œ D 0 in the representation formula and obtain Z 1
.y/ D Œ@ .x/ jy xj dSx ; 2 @ a1 ;aC1 Œ 1 1 D Œ@ .a1 / jy a1 j C Œ@ .aC1 / jy aC1 j: 2 2 Differentiating we find Z 1 Œ@ .x/ sgn.y x/ dSx ; @ .y/ D 2 @ a1 ;aC1 Œ 1 1 D Œ@ .a1 / sgn.y a1 / C Œ@ .aC1 / sgn.y aC1 /: 2 2
R Note that the equality in this case means @ a1 ;aC1 Œ @ .x/ dSx D 0, which is a necessary and as we shall see sufficient condition for solvability. 54 This equality of boundary data is induced by the asymptotic constraint. 53
Since uniqueness in this case only holds up to a constant, we need to impose further constraints. In the interior domain case we require, for example, Z
.x/ dx D 0; a1 ;aC1 Œ
and in the exterior domain case we impose as an additional requirement a boundary condition at infinity by sharpening the asymptotic constraint to
.x/ C1 jxj ˇ0 sgn.x/ ! 0
as x ! ˙1
for some C1 2 C with ˇ0 2 C prescribed as a “boundary condition at ˙1”. In the interior domain case we get 1 1 @ .a1 C/ D @ .aC1 / D Œ@ .a1 / Œ@ .aC1 / 2 2 and Z 0D
.x/ dx Z
D
a1 ;aC1 Œ
a1 ;aC1 Œ
1 D 2
1 1 Œ@ .a1 / jy a1 j C Œ@ .aC1 / jy aC1 j dx 2 2
Z
a1 ;aC1 Œ
.Œ@ .a1 / .y a1 / C Œ@ .aC1 / .y aC1 // dx
1 .aC1 a1 /2 .a1 aC1 /2 D Œ@ .a1 / Œ@ .aC1 / : 2 2 2 Thus, we have Œ@ .aC1 / Œ@ .a1 /
! D
@ .aC1 / @ .aC1 /
!
and so
.y/ D
1 @ .aC1 / .jy a1 j jy aC1 j/: 2
For the exterior domain case, we may argue in the following way. Here we have 1 1 @ .a1 / D @ .aC1 C/ D Œ@ .a1 / C Œ@ .aC1 /: 2 2
On the other hand, with Œ D 0 we have 1 1 Œ@ .a1 / C Œ@ .aC1 / 2 2 D @ .a1 C/ D @ .aC1 /
.a1 C/ .aC1 / aC1 a1
.a1 / .aC1 C/ D aC1 a1 .@ .a1 / a1 ˇ0 / .@ .aC1 C/ aC1 C ˇ0 / D aC1 a1 @ .aC1 C/ .a1 C a1 / C ˇ0 D 2 : aC1 a1 D
Solving for the jump terms yields Œ@ .aC1 / C1 C1 D C1 1 Œ@ .a1 /
2
@.aC1 C/ .a1 Ca1 /Cˇ0 aC1 a1
!
@ .aC1 C/
or @ .aC1 C/ .a1 C a1 / C ˇ0 C @ .aC1 C/ aC1 a1 @ .aC1 C/ .a1 C 3a1 / C ˇ0 ; D aC1 a1 @ .aC1 C/ .a1 C a1 / C ˇ0 @ .aC1 C/ Œ@ .a1 / D 2 aC1 a1 @ .aC1 C/ .3a1 C a1 / C ˇ0 : D aC1 a1
Œ@ .aC1 / D 2
Consequently, we have found
.y/ D
@ .aC1 C/ .3a1 C a1 / C ˇ0 jy a1 j 2.aC1 a1 / @ .aC1 C/ .a1 C 3a1 / C ˇ0 jy aC1 j 2.aC1 a1 /
as the solution in this case. Note that @ .y/ D
@ .aC1 C/ .3a1 C a1 / C ˇ0 sgn.y a1 / 2.aC1 a1 / @ .aC1 C/ .a1 C 3a1 / C ˇ0 sgn.y aC1 / 2.aC1 a1 /
and so @ .aC1 C/ .3a1 C a1 / C ˇ0 2.aC1 a1 / @ .aC1 C/ .a1 C 3a1 / C ˇ0 2.aC1 a1 / @ .aC1 C/ a1 @ .aC1 C/ a1 D˙ .aC1 a1 / .aC1 a1 /
@ .a˙1 ˙/ D ˙
D ˙ @ .aC1 C/: This should suffice as an outline of the ideas. We shall not pursue boundary integral methods further since we will later encounter an – in many respects – more general method to approach boundary value problems in the framework of Hilbert space methods. 3.2.4.3
Electro- and Magnetostatics (Biot–Savart’s Law)
In Subsection 3.2.2.7, a fundamental solution associated with the equation 10 1 0 0 b @> 0 0 0 0 CB b b BEC B @ 0 @ 0 E CB C B MM.0; b @/ B C @H A D B @ 0 b @ 0 b @A@H 0 0 0 0 b @> 0 0
1 J1 C B J2 C C CDB A @ J3 A DW F: 0 1
0
(3.2.126)
of electro- and magnetostatics in free space was found as 1 1.88/ : @/ G0;MM WD MM.0; b 4 r Therefore a solution of (3.2.126) is given by 0
1 0 b @> 0 0 B b C @ 0 C B @ 0 b G0;MM F D g0 MM.0; b @/ F D g0 B CF @ 0 b @ 0 b @A 0 0 b @> 0 1 1 0 0 0 0 0 .b @g0 /> 0 b @> 0 0 C C Bb B b 0 Cb @ g0 0 C @ 0 C B @g B @ 0 b g0 F D B 0 D B C C F: @ 0 b @ 0 b @ g0 0 b @g0 A @ 0 b @A 0 0 b @> 0 0 0 .b @g0 /> 0
Here 0 b @g0 D
1 4 r 3
1 m1 @ m2 A DW 1 m b 4 r 3 m3
and 1 0 @3 g0 @2 g0 b 0 @1 g0 A @g0 WD @ @3 g0 @2 g0 @1 g0 0 1 0 0 m3 m2 1 @ 1 m3 0 m1 A DW D m b: 3 4 r 4 r 3 m2 m1 0 0
It is worth noting that the source term must have a specific form for the system (3.2.126) to have a chance of being well-posed. Since it is equivalent to curl H D b @ H D J2 ; curl E D b @ E D J3 ; div E D b @> E D J1 ;
div H D b @> H D 0;
we must have that J2 ; J3 satisfy div Jk D 0;
k D 2; 3:
In this case we get 0 G0;MM F D
1 4 r 3 0
D
0 0 Bm Bb 0 @ 0 b m 0 0
1 1 B B r3 4 @
0 m b 0 0
1 0 0C CF 0A 0
1 0 m b J1 C r13 m b J3 C C: 1 A m b J 2 3 r 0
for suitable right-hand sides, e.g. for all data in H1 .jDj/ with compact support. Uniqueness can be enforced as in the case of potential theory. Indeed, a U satisfying MM.0; b @/ U D 0 also must satisfy MM.0; b @/ MM.0; b @/ U D b @2 U D 0. Thus, we obtain the following result (we omit the details of the argument).
Theorem 3.2.23. Let E; H 2 H1 .jD1 j; jD2 j; jD3 j/ be such that curl H D J2 ;
curl E D J3 ;
div E D J1 ;
div H D 0
(3.2.127)
with J1 ; J2 ; J3 2 H1 .jD1 j; jD2 j; jD3 j/ having compact supports and div J2 D div J3 D 0: Then E 2 C1 .R3 n .supp J1 [ supp J3 //, H 2 C1 .R3 n supp J2 / are given by ED
1 1 m b J1 C m b J3 ; 4 r 3 4 r 3
H D
1 m b J2 4 r 3
and satisfy RnŒR;R .r/ E; RnŒR;R .r/ H 2 H0 .jD1 j; jD2 j; jD3 j/
(3.2.128)
for a sufficiently large R 2 R>0 . A solution of (3.2.127) satisfying the asymptotic constraint (3.2.128) is uniquely determined. In electrostatics one usually assumes J3 D 0 and so the formula for the electric field simplifies to 1 1 J1 ; m b J1 D grad 4 r 3 4 r where J1 is a prescribed charge distribution. Thus, electrostatics yields nothing new compared to potential theory. The formula 1 H D m b J2 4 r 3 for the magnetic field H is a form of the so-called Biot–Savart law describing a magnetic field generated by a current density J2 . To model currents confined to a ‘wire’ the distribution ı is used, where is a curve in R3 modeling the wire. So letting J2 D I t ı , I 2 R, (t denotes the unit tangent to ), the induced magnetic field assumes the more specific form ED
1 m b t ı : 4 r 3 To write this in integral form, we calculate ˇ ˇ ˇ ˇ 1 1 ˇ ˇ m b t ı D I t ı m b I ˇ ˇ 4 r 3 4 r 3 0 0 Z Z 1 DI t.x/ .x y/ .y/ dy dsx 3 R3 4 jx yj Z Z 1 DI t.x/ .x y/ .y/ dsx dy 4 jx yj3 R3 Z Z 1 DI t.x/ .x y/ dsx .y/ dy 3 4 jx yj3 R H DI
for all
2 CV 1 .R3 n /. Here we have used that Z 1 1 m b D . y/ 3 3 4 r R3 4 j yj
.y/ dy:
Thus, we obtain in R3 n 1 m b t ı D I H DI 4 r 3
Z
1 . y/ t.y/ dsy : 4 j yj3
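The Biot–Savart integral can be checked numerically against the classical on-axis formula for a circular loop. The sketch below is our addition (unit current $I = 1$, loop radius $a$; orientation and sign conventions are not pinned down here, so only magnitudes are compared): on the axis of symmetry, $|H(0,0,z)| = a^2/(2(a^2+z^2)^{3/2})$.

```python
# Line-integral evaluation of the Biot-Savart field of a circular loop of radius a.
import numpy as np

def biot_savart_loop(x, a=1.0, n=20_000):
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = np.stack((a * np.cos(s), a * np.sin(s), np.zeros_like(s)), axis=-1)   # loop points
    t = np.stack((-np.sin(s), np.cos(s), np.zeros_like(s)), axis=-1)          # unit tangent
    ds = a * 2.0 * np.pi / n                                                  # line element
    d = x - y
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    integrand = np.cross(d, t) / (4.0 * np.pi * dist**3)
    return np.sum(integrand, axis=0) * ds

for z in (0.0, 0.5, 2.0):
    H = biot_savart_loop(np.array([0.0, 0.0, z]))
    exact = 1.0 / (2.0 * (1.0 + z**2)**1.5)
    print(np.linalg.norm(H), exact)
```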
It is here implicitly assumed that div.t ı / D 0: This condition means that
Z
0 D hdiv.t ı / j i0 D hı j t grad i0 D
t.x/ grad .x/ dsx ;
which is indeed satisfied as follows from the Stokes theorem (dsx , dSx denoting the line element and the surface element, respectively) Z Z t.x/ grad .x/ dsx D n.x/ curl grad .x/ dSx D 0
F
which holds if we assume that is the boundary manifold of an oriented surface F , with n its suitably oriented normal. The integral representation holds therefore for ‘closed wire loops’. 3.2.4.4
Potential Theory in R2
The particular form of the Poincaré inequality in the 2-dimensional case already hints at the additional complications of a potential theory in $\mathbb{R}^2$. The solutions of $\partial^2 u = 0$ are harmonic polynomials and the only harmonic polynomial $w$ with the asymptotic behavior
$$\frac{1}{r\,\ln(r)}\,\chi_{\mathbb{R}\setminus[-R,R]}(r)\,w \in H_0(|D|) \qquad(3.2.129)$$
at $\infty$ is a constant. The shortcoming of this weight, which according to the Poincaré estimate in $\mathbb{R}^2$ is the natural weight to choose, is the failure to produce uniqueness via the asymptotic behavior at $\infty$. Since for $f \in \mathring{C}_\infty(\mathbb{R}^2)$
$$u = \frac{1}{2\pi}\,\ln(r)*f$$
is a solution of $\partial^2 u = f$, it is also clear that we should not impose a better asymptotic behavior at $\infty$ than
$$\frac{1}{r\,(\ln(r))^2}\,\chi_{\mathbb{R}\setminus[-R,R]}(r)\,w \in H_0(|D|),$$
(3.2.130)
in order to maintain existence in all of R2 . Thus, we get unique existence55 only up to addition of an arbitrary constant. We summarize our findings: Theorem 3.2.24. Let u 2 H1 .jD1 j; jD2 j/ and @2 u D f C g;
(3.2.131)
with g 2 C1 .R2 / and f; g 2 H1 .jD1 j; jD2 j/ with supp f compact . Then u 2 C1 .R2 n supp f /: If 1 .r/ u 2 H0 .jD1 j; jD2 j/ r .ln.r//2 RnŒR;R
(3.2.132)
for a sufficiently large R 2 R>0 then solutions of (3.2.131) can differ only by a constant. For g D 0 a solution satisfying (3.2.132) is given by56 uD
1 ln.r/ f C C; 2
where C 2 C is arbitrary. In order to enforce uniqueness, a more subtle asymptotic constraint needs to be found. For a solution u 2 H1 .jD1 j; jD2 j/ of @2 u D f with f having compact support we shall require the existence of a constant C 2 C such that u.x/ C ln.jxj/ ! 0 55
as jxj ! 1:
(3.2.133)
Uniqueness could be enforced by an ‘orthogonality-to-1’ condition: Z 1 u.x/ dx D 0 2 R2 nB 2 .0;R0 / jxj .ln.jxj// R
if we assume that supp f BR2 .0; R0 / for some R0 2 R>1 . The claimed regularity of u outside of supp f follows by Weyl’s Lemma 3.2.15, 3.2.16, in the case of R2 . Alternatively, we could require Z u.x/ dx D 0
for any fixed, bounded open set in the complement of supp f . Due to regularity outside of supp f even evaluation at a point x 2 R2 n supp f could be used to fix the unknown constant. 56 The convolution 1 ln.r/ f is often referred to as the logarithmic potential of f . 2
Indeed, since @2 u has compact support and u is smooth outside this support, we may choose a cut-off function R 2 CV 1 .R2 / with R D 1 on BR2 .0; R/ supp.@2 u/, such that @2 ..1
R /.m/ u/
D 2 .@
R .m// @u
C .@2
R /.m/ u
DW u 2 CV 1 .R2 /:
Therefore, .1 R /.m/ u D g u C P for some harmonic polynomial P . The asymptotic behavior of the right-hand side is completely determined by the growth behavior of polynomials and the convolution term, which can now be written as a convolution integral Z 1 .g u /.x/ D ln.jx yj/ u .y/ dy 2 R2 Z Z jx yj 1 D ln
u .y/ dy ln.jxj/
u .y/ dy: 2 R2 jxj R2 Since 1
jyj jxj
jxyj jxj
1C
jyj jxj
for jxj > jyj and ln is monotone, we see that Z
.g u /.x/ C ln.jxj/
R2
u .y/ dy ! 0
as jxj ! 1. Thus we have that uP is a solution with the desired asymptotic behavior. But we also know that two solutions uk , k D 0; 1, for the same right-hand side can only differ by a harmonic polynomial, say u0 u1 D P . Assuming that both also satisfy (3.2.133) with certain constants Ck , i.e. uk .x/ Ck ln.jxj/ ! 0
as jxj ! 1;
k D 0; 1;
implies P .C0 C1 / ln.jxj/ ! 0
as jxj ! 1:
Therefore, we must have P D 0 and C0 D C1 , so that u0 D u1 follows. Thus, we have found the following refinement of our previous theorem. Theorem 3.2.25. Let u 2 H1 .jD1 j; jD2 j/ satisfy @2 u D f with f 2 H1 .jD1 j; jD2 j/ having compact support. Then u 2 C1 .R2 n supp f /:
(3.2.134)
If in addition we require that u satisfies u.x/ C ln.jxj/ ! 0 as jxj ! 1 for some constant C 2 C, then the solution of (3.2.134) is determined uniquely and given by uD
1 ln.r/ f: 2
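The asymptotic condition $u(x) - C\ln(|x|) \to 0$ can be illustrated numerically. The Monte-Carlo sketch below is our addition and uses the normalization $\frac{1}{2\pi}\ln(r)$ (signs may differ from the conventions of the text): for $f$ the characteristic function of the unit disk the logarithmic potential equals $\frac{1}{2}\ln|x|$ outside the disk, so the constant $C$ is fixed by the total "charge" $\int f/(2\pi)$.

```python
# Logarithmic potential of the unit disk: (1/(2*pi)) * int_{|y|<1} ln|x-y| dy = (1/2)*ln|x| for |x| >= 1.
import numpy as np

rng = np.random.default_rng(1)

def log_potential_disk_mc(x, samples=400_000):
    pts = rng.uniform(-1.0, 1.0, size=(samples, 2))
    pts = pts[np.sum(pts**2, axis=1) <= 1.0]       # uniform points in the unit disk
    area = np.pi
    vals = np.log(np.linalg.norm(pts - x, axis=1)) / (2.0 * np.pi)
    return area * vals.mean()

for R in (1.5, 3.0, 10.0):
    x = np.array([R, 0.0])
    print(log_potential_disk_mc(x), 0.5 * np.log(R))
```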
Remark 3.2.26. Compare the corresponding asymptotic condition in the 1-dimensional case (Example 3.23, (3.2.125)) that some constants C0 ; C1 2 C must exist such that u.x/ C1 jxj C0 sgn.x/ ! 0 as jxj ! 1: (3.2.135) 3.2.4.5
Cauchy’s Integral Formula
It is interesting to note how the theory of analytic functions fits into the setup developed here. Recall that a fundamental solution for the Cauchy–Riemann operator @C 1 is given by gCR WD @C w0 D 2 @C ln.j j/. Indeed, we found that gCR .x1 ; x2 / D
1 2 .x1 C i x2 /
as a regular distribution, which in the usual complex notation (identifying R2 and C via the isometry .x1 ; x2 / 7! z WD x1 C i x2 ) is written as gCR .z/ D
1 : 2 z
Thus, a solution u of @C u D f
(3.2.136)
for f 2 H1 .jD1 j; jD2 j/ with supp f compact is given by 1 f: 2 z Since any solution u of (3.2.136) with f D 0 must also solve uD
@2 u D 0; we have again that such u must be a harmonic polynomial p satisfying additionally @C p D 0: Such polynomials are called analytic polynomials and are the elements of LinC ¹.x1 ; x2 / 7! .x1 C i x2 /k j k 2 Nº: With this observation we see that uniqueness can again be enforced by a suitable decay condition. In summary, we obtain the following result.
Theorem 3.2.27. Let u 2 H1 .jD1 j; jD2 j/ and @C u D f C g;
(3.2.137)
with g 2 C1 .R2 / and f 2 H1 .jD1 j; jD2 j/ with supp f compact. Then u 2 C1 .R2 n supp f /: If in addition we require RnŒR;R .r/ u 2 H1 .r C i/
(3.2.138)
for a sufficiently large R 2 R>0 then u is uniquely determined. For g D 0 this solution is given by 1 uD f: 2 z It should be noted that a solution of (3.2.137) is also a solution of @2 u D @C f:
(3.2.139)
Conversely, for a solution u of (3.2.139), h WD @C u f must be a polynomial satisfying @C h D 0: (3.2.140) A polynomial h satisfying (3.2.140) is called a conjugate analytic polynomial and must have the form h.x1 ; x2 / D
N X
aj .x1 i x2 /j ;
j D0
which is usually written in complex notation as h.z / WD
N X
aj .z /j :
j D0
Thus, conjugate analytic polynomials are elements of LinC ¹.x1 ; x2 / 7! .x1 i x2 /k j k 2 Nº: We see, in particular, that potential theory in R2 can be considered as a generalization of the theory of the Cauchy–Riemann operator. The theory of analytic functions is particularly concerned with distributions f satisfying @C f D 0
in a domain , i.e. so called analytic functions in . In other words supp.@C f / \ D ;: From the regularity result of potential theory in R2 (Weyl’s Lemma 3.2.16) we know that f 2 C1 .R2 n supp.@C f //. Let f be analytic in a neighborhood of , i.e. supp.@C f / R2 n . Then f satisfies @C . f / D .@C / f C @C f D .@C / f: If we assume that is a bounded submanifold of R2 with sufficiently smooth boundary @ then we find by Gauss’ theorem h@C j i0 D h j @C i0 Z D @C .x/ dx Z D .n1 .x/ C i n2 .x// .x/ dsx ; @
n1
where n D n2 denotes the exterior normal to the boundary curve @ of , dsx is the line element at x 2 @. The unit tangent vector (in the counterclockwise direction) to this curve is then given by t1 n2 t WD D : t2 n1 Thus we find
Z h@C j i0 D Z
@
@
.n1 .x/ i n2 .x// .x/ dsx .t2 .x/ C i t1 .x// .x/ dsx
and so @C D i .t1 C i t2 / ı@ Using the identification .x1 ; x2 / 7! x1 C i x2 of R2 and C we reuse the notation57 t for t1 C i t2 and so @C D n ı@ D i t ı@ : 57
In this “complex notation” we have i n D t:
Applying the Solution Theorem 3.2.27, we obtain a representation formula f D
1 1 .t f ı@ / 2i z
for f in in terms of the boundary values of f on @. This is the well-known 1 1 Cauchy integral formula. This becomes apparent if we write the convolution 2i z .t f ı@ / in integral form. We have ˇ ˇ 1 1 f .t ı@ / ˇˇ 2i z 0 ˇ ˇ 1 1 ˇ ı@ ˇ f t 1 D 2i z 0 Z Z 1 1 D f .x/ t.x/ .y/ dy dsx 2i @ R2 x1 C i x2 C y1 i y2 Z Z 1 1 D f .x/ t.x/ dsx .y/ dy; 2i R2 @ x1 C i x2 C y1 i y2
for all
2 CV 1 .R2 n @/. Thus we have Z
1 . f /.y1 ; y2 / D 2i
@
t.x/
1 f .x/ dsx x1 i x2 C y1 C i y2
in R2 n @, which in complex notation with z D x1 C i x2 , dz D dx1 C i dx2 D t.x/ dsx , D y1 C i y2 , becomes 1 f ./ D 2i and 1 0D 2i
Z @
Z @
1 f .z/ dz z
1 f .z/ dz z
in
in C n :
Here we have again written for the corresponding set ¹x Ciy j .x; y/ 2 º in C. To reproduce Cauchy’s integral theorem we only need to make use of Gauss’s theorem: Z
Z
.@C f /.x/ dx D
.n1 .x/ C i n2 .x// f .x/ dsx Z D i .t1 .x/ C i t2 .x// f .x/ dsx @ Z D i f .z/ dz: @
@
This implies for f analytic in a neighborhood of that Z f .z/ dz D 0: @
We summarize our findings in the following theorem. Theorem 3.2.28. Let w 2 H1 .jD1 j; jD2 j/ be such that @C w D f with f 2 H1 .jD1 j; jD2 j/, supp f compact and RnŒR;R .r/ w 2 H1 .r C i/
(3.2.141)
for a sufficiently large R 2 R>0 . Then wD
1 f: 2 z
Moreover, if R2 is a bounded domain with smooth, compact boundary @ such that supp f \ D ;, then w is (by definition) analytic in a neighborhood of and for .x1 ; x2 / 2 R2 n @ 1 .t w ı@ / .x1 ; x2 / w.x1 ; x2 / D 2 i z Z 1 1 w.z/ dz: D 2 i @ z x1 i x2 For any w analytic in a neighborhood of we also have Z w.z/ dz D 0: @
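Theorem 3.2.28 is easy to test numerically; the sketch below (our addition) checks the Cauchy integral formula and Cauchy's integral theorem for $f(z) = e^z$ on the unit circle.

```python
# Trapezoidal evaluation of Cauchy integrals over the unit circle (spectrally accurate).
import numpy as np

n = 4000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
z = np.exp(1j * s)               # unit circle, counterclockwise
dz = 1j * z * (2.0 * np.pi / n)  # dz = i*z*ds

f = np.exp(z)

def cauchy(zeta):
    return np.sum(f / (z - zeta) * dz) / (2.0j * np.pi)

print(cauchy(0.3 - 0.2j), np.exp(0.3 - 0.2j))   # agree (zeta inside the circle)
print(cauchy(1.7 + 0.5j))                        # ~ 0 (zeta outside)
print(np.sum(f * dz))                            # ~ 0 (Cauchy's integral theorem)
```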
3.2.4.6
Integral Representations of Solutions of the Helmholtz Equation in R3
Fundamental solutions associated with the Helmholtz operator @2 2 , 2 R, were found as regular distributions g˙i 0 .x/ D
exp.˙ i jxj/ ; 4 jxj
x ¤ 0;
and can be recognized as the Fourier transform g i 0 D .L0 ˝ I g1;˙ /. ; /
with respect to time of the forward and backward causal fundamental solutions g1;˙ of the wave operator @20 @2 , respectively. The direction of causality, i.e. forward or backward in time, is preserved in the sign occurring in g i 0 . Of course, since .L0 ˝ I g1; /. ; / D .L0 ˝ I g1; /. ; / D g˙i 0 D g i 0 D .L0 ˝ I g1;˙ /. ; / this distinction becomes void if we consider not as a variable, but as a fixed real number. The terms outgoing fundamental solution for gi 0 and incoming fundamental solution for gCi 0 are used as a reminder58 of the time-dependent background. The outgoing and incoming fundamental solution can be detected by the condition m RnŒR;R .jmj/ @ g i 0 ˙ i g i 0 2 H0 .r i/ D L2 .R3 / (3.2.142) jmj for all sufficiently large R 2 R>0 . These two conditions will re-appear below as the outgoing and incoming Sommerfeld radiation condition to enforce uniqueness of solution. Indeed, following our usual line of reasoning, we always have existence of a solution for the equation .@2 2 / u D f; (3.2.143) where f 2 H1 .jD1 j; jD2 j; jD3 j/ and supp f is compact. The convolutions g˙i 0 f provide two solutions for (3.2.143). Uniqueness, therefore, is a more intricate matter. We note that by the reasoning of Lemma 3.2.15, we have smoothness of any solution u of (3.2.143) outside of supp f . Since both fundamental solutions have the same decay behavior as the potential 41 r , we obtain as above that RnŒR;R .r/ g˙i 0 f 2 H1 .r i/
(3.2.144)
for all sufficiently large R 2 R>0 . Thus, we cannot hope to enforce uniqueness by this condition alone. In lieu of (3.2.142), we shall impose either the so-called outgoing Sommerfeld radiation condition m RnŒR;R .jmj/ @ u C i u 2 H0 .r i/ (3.2.145) jmj for all sufficiently large R 2 R>0 or alternatively the incoming Sommerfeld radiation condition m @ u i u 2 H0 .r i/ (3.2.146) RnŒR;R .jmj/ jmj for all sufficiently large R 2 R>0 on a solution u of (3.2.143). That imposing either one of these radiation conditions does not exclude both gi 0 f and gCi 0 f as solutions needs to be justified. 58 Historically, this terminology is used in the reverse order. This corresponds to having g 1;˙ as the Fourier transform of 7! g˙i 0 (rather than the inverse Fourier transform). The above choice is consistent with the present approach and therefore more appropriate for our framework.
Lemma 3.2.29. Let f 2 H1 .jD1 j; jD2 j; jD3 j/ and supp f compact. Then m RnŒR;R .jmj/ @ g i 0 f ˙ i g i 0 f 2 H0 .r i/ jmj for all sufficiently large R 2 R>0 , 2 R n ¹0º. Proof. It suffices to consider the outgoing case, the remaining case being analogous. So let u WD gi 0 f and 0 2 CV 1 .R3 / real-valued with 0 D 1 on supp f . Then we continue to argue as in the proof of Lemma 3.2.21. We have .@2 2 /gi 0 f D f and .@2 2 / ..1 0 / gi 0 f / D .1 0 / f C 2 @ 0 @gi 0 f C @2 0 gi 0 f D 2 @ 0 @gi 0 f C @2 0 gi 0 f: The right-hand side is an element in CV 1 .R3 /. We calculate with a second real-valued function 1 2 CV 1 .R3 / with 1 D 1 on supp f and 0 D 1 on supp 1 : ˇ ˇ ˇ ˇ ˇ ˇ m ˇ ˇ @ C i .1 0 /.m/ gi 0 f ˇˇ ˇ ˇ jmj 0 ˇ ˇ ˇ ˇ ˇ ˇ m ˇ ˇ ˇ i D ˇ gi 0 f ˇ.1 0 /.m/ @ ˇ jmj 0 ˇ ˇ ˇ ˇ ˇ ˇ m ˇ i D ˇˇ f ˇˇgCi 0 .1 0 /.m/ @ ˇ jmj 0 ˇ ˇ ˇ ˇ ˇ ˇ m ˇ i D ˇˇ 1 f ˇˇgCi 0 .1 0 /.m/ @ ˇ jmj 0 ˇ ˇ ˇ ˇ m ˛ ˇ ˇ i jf j˛ ˇ.@ C e/ 1 .m/gCi 0 .1 0 /.m/ @ ˇ jmj 0 for some ˛ 2 N 3 . Considering the last term ˇ ˇ2 ˇ ˇ ˇ ˇ.@ C e/˛ 1 .m/ gCi 0 .1 0 /.m/ @ m i ˇ ˇ jmj 0 Z ˇ Z ˇ exp.i j yj/ ˇ.@ C e/˛ 1 .m/ .1 0 .y// D ˇ 3 4j yj R R3 ˇ2 ˇ m @ i .y/ dy .x/ˇˇ dx: jmj
By integration by parts we find exp.i jx yj/ m .1 0 .y// @ i .y/ dy 4 jx yj jmj R3 Z exp.i jx yj/ exp.i jx yj/ .x y/ y .1 0 .y// C y @ 0 .y/ D 3 4 jx yj jyj 4 jx yjjyj R3 exp.i jx yj/ .x y/ y C i .1 0 .y// 1C .y/ dy 4 jx yj jx yjjyj
Z
and so – using that .a C b C c/2 3.a2 C b 2 C c 2 / – we can estimate ˇ ˇ2 ˇ ˇ ˇ ˇ.@ C e/˛ 1 .m/ gCi 0 .1 0 /.m/ @ m i ˇ ˇ jmj 0 Z ˇ2 Z ˇ ˇ ˇ exp.i j yj/ ˛ ˇ . y/y.1 0 .y// .y/ dy .x/ˇˇ dx 3 .@ C e/ 1 ˇ 3 R3 R3 4j yj jyj Z ˇ2 Z ˇ ˇ ˇ exp.i j yj/ ˛ ˇ.@ C e/ 1 C3 y@ 0 .y/ .y/ dy .x/ˇˇ dx ˇ 4j yjjyj R3 R3 Z Z ˇ ˇ .x y/ y exp.i j yj/ ˛ ˇ C 3j j ˇ.@ C e/ 1 3 4j yjjyj 1 C jx yjjyj .1 0 .y// R3 R ˇ2 ˇ .y/ dy .x/ˇˇ dx: Noting that ˇ ˇ ˇ .@ C e/˛ 1 .m/ exp.i j yj/ 1 C . y/ y .x/ ˇ j yj j yjjyj ˇ2 ˇ .1 0 .y//@ 0 .y/ ˇˇ dy dx < 1;
Z
Z R3
R3
ˇ ˇ2 ˇ ˇ ˇ .@ C e/˛ 1 .m/ exp.i j yj/ . y/ .x/ .1 0 .y// ˇ dy dx ˇ ˇ 4 j yj3
Z
Z R3
R3
< 1; and also Z
Z R3
R3
ˇ ˇ2 ˇ ˇ ˇ .@ C e/˛ 1 .m/ exp.i j yj/ .x/ @ 0 .y/ ˇ dy dx < 1; ˇ ˇ j yj
we find59
ˇ ˇ m ˇ @ C i .1 0 / gi 0 f ˇ jmj
ˇ ˇ ˇ ˇ ˇ jf j˛ C0 j j0 ˇ ˇ ˇ 0
2 CV 1 .R3 / and some C0 2 R>0 independent of . This proves that m @ C i .1 0 / gi 0 f 2 H0 .r C i/ jmj
for all
and so in particular m RnŒR;R .r/ @ C i gi 0 f jmj m @ C i .1 0 / gi 0 f 2 H0 .r C i/ D RnŒR;R .r/ jmj
for a (and therefore all) sufficiently large R 2 R>0 . The radiation condition now implies uniqueness in an intricate way. First we observe the following result. Lemma 3.2.30. Let u 2 H1 .jD1 j; jD2 j; jD3 j/ be a solution of .@2 k 2 / u D 0; If either
or
59
k 2 R n ¹0º:
(3.2.147)
m @ u C ik u 2 H0 .jD1 j; jD2 j; jD3 j/ jmj m @ u i k u 2 H0 .jD1 j; jD2 j; jD3 j/ jmj
For the second integral the required decay for large jyj and arbitrary x 2 supp 1 follows from .x y/ y jx yjjyj C .x y/ y 1C D jx yjjyj jx yjjyj
for all y 2 R3 n ¹0º.
D
jx yjjyj .x y/ .x y/ .x y/ x C jx yjjyj jx yjjyj
D
.jyj jx yj/ .x y/ x C jyj jx yjjyj
2
jxj jyj
2
sup¹jxj j x 2 supp 1 º jyj
then u 2 H0 .jD1 j; jD2 j; jD3 j/: Proof. We again restrict our attention to the outgoing case (the incoming case following by replacing k by k). Noting that u 2 C1 .R3 /, we can calculate classically Z u.x/ .@2 k 2 /u.x/ dx 0D B
Z
R3
.0;R/
Z
D
B
@u.x/ @u.x/ dx k
R
Z
C
.0;R/ 3
u.x/
@B
R3
.0;R/
2
u.x/ u.x/ dx B
.0;R/ 3
R
x @u.x/ dSx ; jxj
which implies, by taking imaginary parts (recall 2 R), that Z x @u.x/ dSx D 0: u.x/ Im jxj @B 3 .0;R/ R
Integration with respect to R 2 R>0 yields Z x @u.x/ dx 0 D Im u.x/ jxj B 3 .0;R/ R Z x D Im @u.x/ C i k u.x/ dx u.x/ jxj B 3 .0;R/ R Z ik u.x/ u.x// dx B
R3
.0;R/
and so, applying the Cauchy–Schwarz inequality, Z jkj ju.x/j2 dx B
R3
.0;R/
sZ
B
R3
.0;R/
v uZ u ju.x/j2 dx t
B
R3
.0;R/
ˇ ˇ2 ˇ ˇ x ˇ dx: ˇ @u.x/ C i k u.x/ ˇ ˇ jxj
Letting R ! 1 we obtain
ˇ ˇ ˇ 1 ˇ m @ u C i k uˇˇ juj0 p ˇˇ k jmj 0
and in particular u 2 H0 .jD1 j; jD2 j; jD3 j/:
This lemma still does not yield uniqueness, but only that solutions of the homogeneous problem satisfying the prescribed asymptotic behavior feature an even stronger decay. The missing link to proving uniqueness is the following result. Lemma 3.2.31. Let u 2 H1 .jD1 j; jD2 j; jD3 j/ and supp.@2 k 2 /u BR3 .0; R0 / for some R0 2 R>0 . Then either supp u BR3 .0; R0 / or60 Z ju.x/j2 dx p R B
R3
.0;R/nB
R3
.0;R0 /
for some p 2 R>0 and all sufficiently large R 2 R>0 . Proof. We follow the presentation in [29], p. 59–60. Let R0 2 R>0 such that supp.@2 k 2 / u BR3 .0; R0 /: Then using spherical harmonics .Sn;j /j D0;:::;2n; n2N , i.e. the system of eigenfunctions of the Laplace–Beltrami operator 0 , which form a complete orthonormal system in L2 .@BR3 .0; 1// with respect to the inner product Z . ; / 7!
.x/ .x/ dSx ; @B
R3
.0;1/
we find, calculating in polar coordinates, that Z vn;j .r/ WD r u.rx/ Sn;j .x/ dSx @B
R3
.0;1/
satisfies
n.n C 1/ 2 @ vn;j .r/ C k vn;j .r/ r2 2 Z @ 2 @ D .ru.rx// C k 2 r u.rx/ Sn;j .x/ dSx .r u.rx// C @r r @r @B 3 .0;1/ R Z 1 u.rx/ n.n C 1/ Sn;j .x/ dSx @B 3 .0;1/ r R 2 Z @ 2 @ 2 D .r u.rx// C .ru.rx// C k r u.rx/ Sn;j .x/ dSx @r r @r @B 3 .0;1/ R Z 1 C u.rx/ 0 Sn;j .x/ dSx r @B 3 .0;1/ 2
R
60
This type of estimate is referred to as Rellich’s estimate.
Z D
@B
R3
.0;1/
2
2 @ .ru.rx// r @r 1 C k 2 r u.rx/ C 0 u.rx/ Sn;j .x/ dSx r @ @r
.r u.rx// C
Z D r
@B
.0;1/ 3
..@2 k 2 / u.rx// Sn;j .x/ dSx D 0
R
for r > R0 . Here we have used that 2 1 @ 2 @ 2 C 2 0 C D@ D @r r @r r and that Sn;j is an eigenfunction of 0 associated with the eigenvalue n.n C 1/, j D 0; : : : ; 2n, n 2 N. Now if u does not eventually vanish identically then there must be a g WD vn0 ;j0 , j0 2 ¹0; : : : 2n0 º; n0 2 N, and a radius R1 > R0 such that g.R1 / ¤ 0:
(3.2.148)
Since g must satisfy the ordinary differential equation n0 .n0 C 1/ @2 g.r/ C k 2 g.r/ D 0; r2 we know that g is a linear combination of the associated fundamental system of modp p ified Bessel functions r Jn0 C1=2 .k r/, r Yn0 C1=2 .k r/ (compare [30], p. 139) associated with this equation. Thus we have p p g.r/ D a1 r Jn0 C1=2 .k r/ C a2 r Yn0 C1=2 .k r/ (with a1 a2 ¤ 0 because of (3.2.148)!). The asymptotic behavior of the Bessel functions for large arguments is known to be p n0 C 1 1 C O.r 1 /; r Jn0 C1=2 .k r/ D p cos kr 2 k p n0 C 1 1 C O.r 1 /; r Yn0 C1=2 .k r/ D p sin kr 2 k as r ! 1. Since sin and cos are orthogonal in L2 .I / where I is a period interval, we have Z n0 C 1 n0 C 1 sin kr dr cos kr 2 2 ŒR0 ;R Z 1 cos.u/ sin.u/ du D 0 D k kŒR0 ;R.n0 C1/=2
Section 3.2 Partial Differential Equations in H1 .jD j/ if k.R R0 / 2 2N. Therefore, we obtain Z Z 2 ju.x/j dx B
.0;R/nB 3
R
.0;R0 / 3
ŒR0 ; R
R
jg.r/j2 dr
1 j.a1 ; a2 /j2 N C O.ln.R// 2 2 k 2 1 k D .R R0 / C O.ln.R// j.a1 ; a2 /j2 2 2 k 2 2 1 j.a1 ; a2 /j2 R 8 3 k D
N C R0 2 for R D 2 k R 2 R>2=k
2 N k
C R0 , N 2 N, sufficiently large. Since for arbitrary
2 2 R k k
k .R R0 / C R0 R; 2
we have Z B
Z R3
.0;R/nB
R3
.0;R0 /
ju.x/j2 dx
B
R3
.0;R0 /nB
R3
.0;R0 /
ju.x/j2 dx
1 j.a1 ; a2 /j2 R0 8 3 k 1 j.a1 ; a2 /j2 R; 16 3 k
k for R > 2 , R > R0 sufficiently large and R0 WD 2 b 2 .R R0 /c C R0 . Therefore k k 1 2 the desired estimate follows with p WD 16 3 k j.a1 ; a2 /j > 0.
We can now derive a solution theorem for the Helmholtz equation. Theorem 3.2.32. An element u 2 H1 .jD1 j; jD2 j; jD3 j/ solving the equation .@2 k 2 / u D f;
k 2 R n ¹0º;
with f 2 H1 .jD1 j; jD2 j; jD3 j/ such that supp f is compact and satisfying the Sommerfeld radiation condition m RnŒR;R .jmj/ @ u i k u 2 H0 .jD1 j; jD2 j; jD3 j/ (3.2.149) jmj for a sufficiently large R 2 R>0 , exists uniquely and is given by u D gkCi 0 f:
Proof. The existence of a solution u is clear, since we know by Lemma 3.2.29 that u D gkCi 0 f is such a solution. Now let w be another solution satisfying (3.2.149) with u replaced by w. Then the difference v WD u w satisfies .@2 k 2 / v D 0 and (3.2.149) with v in place of u. Since v must be a smooth function, we also have that the assumptions of Lemma 3.2.30 are satisfied and so v 2 H0 .jD1 j; jD2 j; jD3 j/: According to Lemma 3.2.31 we therefore must have that v D 0 everywhere. Similar representation results to those developed for potential theory can be given for the Helmholtz equation. Theorem 3.2.33. Let w 2 H1 .jD1 j; jD2 j; jD3 j/ be such that the radiation condition (3.2.149) holds for a sufficiently large R 2 R>0 , with wjN continuously differentiable in a neighborhood of the smooth, compact boundary @ of a domain R3 and @2 w k 2 w D f C
ˇ 1 1 ˇ .n @w/ˇ@ ı@ C 2 w ˇ@ @ .n ı@ / 2 2 2
(3.2.150)
with f 2 H0 .jD1 j; jD2 j; jD3 j/, supp f compact. Then w D gkCi 0 f C gkCi 0 ..n @w/j@ ı@ C wj@ @ .n ı@ // or Z
wD
exp.i k j yj/ f .y/ dy 4 j yj Z exp.i k j yj/ @ @ exp.i k j yj/ C w.y/ w.y/ dSy : 4 j yj @n.y/ @n.y/ 4 j yj @
The method of boundary integral equations indicated for the potential theoretical case can be utilized analogously. We shall, however, not pursue these ideas here. Example 3.24. For its entertainment value we shall, however, in continuation of our considerations in Example 3.23 have a look at the 1-dimensional case to see what a suitable radiation condition might be in this case. For > 0 we have x 7! .i / jxj/ exp.i as fundamental solution for @2 C .i C /2 . Thus, we have 2 i .i / ui D
1 exp.i . i / j j/ f 2 i . i /
Section 3.2 Partial Differential Equations in H1 .jD j/
295
as the solution of .@2 . i /2 /u D f: Here ui is the evaluation at 2 R n ¹0º of the Fourier–Laplace transform of the (forward causal!) solution w of .@20 @2 /w D ı ˝ f:
(3.2.151)
Letting ! 0C, we see that ui0C D 2 1i exp.i j j/f is a solution of .@2 2 /u D f . In the same way we can derive uCi 0C D 2 1i exp.i j j/ f as the only other solution of .@2 2 /u D f resulting from a backward causal solution of (3.2.151). To enforce uniqueness we need to distinguish these two solutions. Taking our lead from the above 3-dimensional considerations, we shall assume that f has compact support and find that the following radiation conditions hold: @u i0C .x/ ˙ i sgn.x/ u i0C .x/ ! 0 as x ! C1 and x ! 1. This shows that ui0C is characterized by what we shall call the outgoing radiation condition @ui0C .x/ C i sgn.x/ ui0C .x/ ! 0 as x ! ˙1, whereas uCi0C is characterized by an analogous incoming radiation condition @uCi0C .x/ i sgn.x/ uCi0C .x/ ! 0 as x ! ˙1. Note that it suffices to require these conditions only at C1 (or at 1) to ensure the desired uniqueness. Since .@2 2 / D .i @/.i C @/, we are also led to consider – say – @ C i for 2 R n ¹0º. Here the (outgoing) fundamental solution is x 7! R>0 .x/ exp.i x/ D
1 .@ i /.x 7! exp.i jxj//: 2i
We read off (with w D .@ i /u) from the above that w.x/ ! 0 as x ! 1 now characterizes the outgoing solution of .@ C i /w D f , whereas the same asymptotic behavior as x ! C1 characterizes an incoming solution. Consider finally the system of two equal Helmholtz type equations
i @ 0 0 i C @
.i C @/u .i @/u
D
f f
or equivalently (by applying unitary and idempotent matrix 1 2
1 C1 1 1
f f
D
f 0
D
i @ @ i
p1 1 1 2 1 1
)
i u : @u
We read off (with $u_2 = \partial u$, $u_1 = \mathrm{i}\kappa\, u$) that the outgoing radiation condition now assumes the form $u_2(x) + \operatorname{sgn}(x)\,u_1(x) \to 0$ as $x\to\pm\infty$. The incoming radiation condition is now of the form $u_2(x) - \operatorname{sgn}(x)\,u_1(x) \to 0$ as $x\to\pm\infty$. Recall that it suffices to require these only for $x\to+\infty$ (or for $x\to-\infty$) to ensure uniqueness of a corresponding outgoing or incoming solution. Note also that these asymptotic conditions resemble the Silver–Müller radiation conditions, which will be discussed in connection with Maxwell's equation, see Subsection 3.2.4.8 below.
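As a quick consistency check, added here for orientation: for $x\neq0$ one has $\partial\,e^{\mathrm{i}\kappa|x|} = \mathrm{i}\kappa\,\operatorname{sgn}(x)\,e^{\mathrm{i}\kappa|x|}$, so for $u = e^{\mathrm{i}\kappa|\cdot|}$ the combination $\partial u - \mathrm{i}\kappa\operatorname{sgn}(x)\,u$ vanishes identically away from the origin, while for $u = e^{-\mathrm{i}\kappa|\cdot|}$ it is the opposite combination $\partial u + \mathrm{i}\kappa\operatorname{sgn}(x)\,u$ that vanishes; which of the two deserves the name outgoing is fixed by the sign convention of the Fourier–Laplace transform used above.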
3.2.4.7  Retarded Potentials
This example brings us back to the invertible case, indeed the reversibly evolutionary case. We return briefly to the wave equation. We mention it here not because it particularly requires the extension of the solution theory we have been illustrating above, but because of its intimate relation to the previous cases of $\partial^2$ (potential theory, Laplace equation, Poisson equation) and of $\partial^2 - k^2$ (Helmholtz equation). We found that the (forward causal) fundamental solution of the wave equation in $\mathbb{R}^{1+3}$ is given by
$$g_{1,+}(t,\cdot) = \chi_{\mathbb{R}_{>0}}(t)\,\frac{1}{4\pi t}\,\delta_{t[S^2]} \qquad\text{for } t\in\mathbb{R},\ t\neq0.$$
We want to record a suitable integral representation also in this context. We find for, say, $f\in\overset{\circ}{C}_\infty(\mathbb{R}^{1+3})$ that
$$(g_{1,+}*f)(t,x) = \bigl\langle\, g_{1,+}\bigl((t,x)-\cdot\,\bigr)\bigm|f\,\bigr\rangle_{0,0} = \frac{1}{4\pi}\int_{y\in\mathbb{R}^3}\frac{1}{|y-x|}\,f\bigl(t-|y-x|,\,y\bigr)\,\mathrm{d}y.$$
Because of the similarity of the integral expression to a potential term, and observing the retardation $|y-x|$ in the time-dependence of $f$, an integral expression of this form is referred to as a retarded potential. It gives an integral expression for the convolution with the fundamental solution of the wave equation. Of course, in the backward causal case, we obtain similarly an advanced potential type integral expression.
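For orientation we add a simple special case (a formal illustration only, with unit wave speed as above): for a point-like spatial source $f(t,y) = \varphi(t)\,\delta(y)$ with smooth, compactly supported signal $\varphi$, the retarded potential reduces for $x\neq0$ to
$$u(t,x) = \frac{\varphi(t-|x|)}{4\pi\,|x|},$$
i.e. the signal emitted at the origin is observed at $x$ with the travel-time delay $|x|$ and the static $1/(4\pi|x|)$ attenuation familiar from potential theory.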
3.2.4.8  Integral Representations of Solutions of the Time-Harmonic Maxwell Equations
The time-harmonic Maxwell equations, which can be understood as a limit case of the Fourier–Laplace transform with respect to time of the time-dependent Maxwell’s equations, are given by the system 0 1 0 10 1 i @> 0 0 0 0 B E C B @ i @ 0 C B E C C B CB C .MM.0; @/ i / B @ H A D @ 0 @ i @ A @ H A D F: (3.2.152) 0 0 0 0 @> i Associated fundamental solutions are easily found since .MM.0; @/ i /.MM.0; @/ C i / D @2 C 2 : Therefore, given the fundamental solution g˙i 0 of the wave equation (see Subsection 3.2.2.1), we find that G˙i 0 WD .MM.0; @/ C i / g˙i 0 1.88/ D i .MM.0; m=jmj/ ˙ 1/g˙i 0 1.88/ C
1 MM.0; m=jmj/g˙i 0 1.88/ ; jmj
can be used as fundamental solutions. Again one speaks of incoming and outgoing fundamental solution, respectively. The two can be distinguished according to their asymptotic behavior by noting that .MM.0; m=jmj 1/ .MM.0; m=jmj ˙ 1/ D 0 in R3 n ¹0º and so .MM.0; m=jmj/ 1/G˙i 0 1 .MM.0; m=jmj/ 1/MM.0; m=jmj/g˙i 0 1.88/ jmj 1 .MM.0; m=jmj/ 1/ g˙i 0 1.88/ ; D jmj D
and RnŒR;R .jmj/ .MM.0; m=jmj/ G˙i 0 G˙i 0 / 2 H0 .r i/ for all R 2 R>0 . This observation leads to the so-called incoming and outgoing Silver–Müller radiation condition, respectively. Lemma 3.2.34. Let F 2 H1 .jD1 j; jD2 j; jD3 j/ with supp F compact. Then RnŒR;R .jmj/ .MM.0; m=jmj/ G˙i 0 F G˙i 0 F / 2 H0 .r i/ for all sufficiently large R 2 R>0 .
Proof. We follow the now familiar line of reasoning. Again it suffices to consider the outgoing case. So let U WD Gi 0 F and 0 2 CV 1 .R3 / real-valued with 0 D 1 on supp F . Now we argue as in the proof of Lemma 3.2.21. We have .MM.0; @/ i / Gi 0 F D F and so .MM.0; @/ i / ..1 0 / Gi 0 F / D .1 0 / F .MM.0; @/ 0 1.88/ / Gi 0 F D .MM.0; @/ 0 1.88/ / Gi 0 F: The right-hand side is an element in CV 1 .R3 /. We calculate further with a second real-valued function 1 2 CV 1 .R3 / with 1 D 1 on supp F and 0 D 1 on supp 1 : jh.MM.0; m=jmj/ C 1/ .1 0 / Gi 0 F j i0 j D jh gi 0 .MM.0; @/ C i / F j.1 0 / .MM.0; m=jmj/ C 1/ i0 j D jh F j .MM.0; @/ i / gCi 0 ..1 0 / .MM.0; m=jmj/ C 1/ /i0 j D jh 1 F j .MM.0; @/ i /gCi 0 ..1 0 /.MM.0; m=jmj/ C 1/ /i0 j jF j˛ j.@ C e/˛ 1 .MM.0; @/ i /gCi 0 ..1 0 /.MM.0; m=jmj/ C 1/ /j0 for some ˛ 2 N 3 . Observing that .MM.0; .y x/=jy xj/ 1/.MM.0; y=jyj/ C 1/ D ..MM.0; .y x/=jy xj/ MM.0; y=jyj// C .MM.0; y=jyj/ 1//.MM.0; y=jyj/ C 1/ D ..MM.0; .y x/=jy xj/ MM.0; y=jyj// .MM.0; y=jyj/ C 1/ D .MM.0; .y x/=jy xj y=jyj/ .MM.0; y=jyj/ C 1/ and yx y D jy xj jyj
Z Œ0;1
Z
D and
Z
s
.@k C i/
Œ0;1
Œ0;1
y sx s! 7 jy s xj
0
1 x dt jy t xj
.t / dt Z Œ0;1
.y t x/ y t x x dt; jy t xj2 jy t xj
.y t x/ y t x 1 xC x dt D O.jyjs1 / jy t xj jy t xj2 jy t xj
for large jyj and uniformly for x in a bounded set, we can estimate j.@ C e/˛ 1 gCi 0 ..1 0 / .MM.0; m=jmj/ C 1.88/ / /j20 C02 j j20 : Consequently, we have jh.MM.0; m=jmj/ C 1.88/ / .1 0 / Gi 0 F j i0 j jF j˛ C0 j j0 for all
2 CV 1 .R3 / and some C0 2 R>0 . This proves that .MM.0; m=jmj/ C 1.88/ / .1 0 / Gi 0 F 2 H0 .r C i/
and so in particular
$$\chi_{\mathbb{R}\setminus[-R,R]}(r)\,\bigl(MM(0,m/|m|) - 1^{(8\times 8)}\bigr)\,G_{\omega+\mathrm{i}0} * F = \chi_{\mathbb{R}\setminus[-R,R]}(r)\,\bigl(MM(0,m/|m|) - 1^{(8\times 8)}\bigr)\,(1-\varphi_0)\,G_{\omega+\mathrm{i}0} * F \in H_0(r+\mathrm{i})$$
for a (and therefore all) sufficiently large $R \in \mathbb{R}_{>0}$.

With regards to the asymptotic behavior of solutions subject to the radiation condition, we have the following result.

Lemma 3.2.35. Let $U \in H_{-\infty}(|D_1|,|D_2|,|D_3|)$ be a solution of $(MM(0,\partial) - \mathrm{i}\omega)\,U = 0$. If either $(MM(0,m/|m|) + 1^{(8\times 8)})\,U \in H_0(r-\mathrm{i})$ or $(MM(0,m/|m|) - 1^{(8\times 8)})\,U \in H_0(r-\mathrm{i})$, then
$$U \in H_0(r+\mathrm{i}).$$

Proof. We again restrict our attention to the outgoing case. Noting that $U \in C_\infty(\mathbb{R}^3)$, we can calculate classically
$$0 = \operatorname{Re}\int_{B_{\mathbb{R}^3}(0,R)} U(x)^{*}\,\bigl(MM(0,\partial) - \mathrm{i}\omega\bigr)\, U(x)\,\mathrm{d}x = \operatorname{Re}\int_{\partial B_{\mathbb{R}^3}(0,R)} U(x)^{*}\, MM(0,x/|x|)\, U(x)\,\mathrm{d}S_x$$
$$= \operatorname{Re}\int_{\partial B_{\mathbb{R}^3}(0,R)} U(x)^{*}\bigl(MM(0,x/|x|)\,U(x) + U(x)\bigr)\,\mathrm{d}S_x - \int_{\partial B_{\mathbb{R}^3}(0,R)} U(x)^{*}\,U(x)\,\mathrm{d}S_x,$$
which implies by integration with respect to $R \in \mathbb{R}_{>0}$
$$0 = \operatorname{Re}\int_{B_{\mathbb{R}^3}(0,R)} U(x)^{*}\bigl(MM(0,x/|x|)\,U(x) + U(x)\bigr)\,\mathrm{d}x - \int_{B_{\mathbb{R}^3}(0,R)} U(x)^{*}\,U(x)\,\mathrm{d}x,$$
and so, applying the Cauchy–Schwarz inequality,
$$\int_{B_{\mathbb{R}^3}(0,R)} |U(x)|^2\,\mathrm{d}x \le \sqrt{\int_{B_{\mathbb{R}^3}(0,R)} |U(x)|^2\,\mathrm{d}x}\;\sqrt{\int_{B_{\mathbb{R}^3}(0,R)} \bigl|MM(0,x/|x|)\,U(x) + U(x)\bigr|^2\,\mathrm{d}x}.$$
This proves, with $R\to\infty$,
$$|U|_0 \le |MM(0,m/|m|)\,U + U|_0$$
and in particular $U \in H_0(r-\mathrm{i})$.

Thus, we obtain the following solvability result.

Theorem 3.2.36. Let $U \in H_{-\infty}(|D_1|,|D_2|,|D_3|)$ be a solution of
$$(MM(0,\partial) - \mathrm{i}\omega)\,U = F$$
with $\omega \in \mathbb{R}\setminus\{0\}$, $F \in H_{-\infty}(|D_1|,|D_2|,|D_3|)$ such that $\operatorname{supp} F$ is compact, and let $U$ satisfy the Sommerfeld radiation condition
$$\chi_{\mathbb{R}\setminus[-R,R]}(|m|)\,\bigl(MM(0,m/|m|)\,U + U\bigr) \in H_0(r-\mathrm{i}) \tag{3.2.153}$$
for a sufficiently large $R \in \mathbb{R}_{>0}$. Then $U$ is uniquely determined and given by $U = G_{\omega-\mathrm{i}0} * F$.

Proof. The existence of a solution $U$ is clear since we know by Lemma 3.2.34 that $U = G_{\omega-\mathrm{i}0} * F$ is such a solution. Now let $W$ be another solution satisfying (3.2.153) with $U$ replaced by $W$. Then the difference $V := U - W$ satisfies $(MM(0,\partial) - \mathrm{i}\omega)\,V = 0$ and (3.2.153) with $V$ in place of $U$. Since $V$ must be a smooth function, we also have that the assumptions of Lemma 3.2.35 are satisfied, so that $V \in H_0(r-\mathrm{i})$.
Since .@2 2 / V D .MM.0; @/ C i / .MM.0; @/ i / V D 0 we therefore must also have – according to Lemma 3.2.31 – that V D 0 everywhere. In this case we also want to derive a boundary integral expression. So let .MM.0; @/ i / U D F in R3 n @, where is again a bounded domain of R3 with sufficiently smooth boundary @. Given the characteristic function of we let U WD U and correspondingly UC WD R3 n U . Assuming sufficient smoothness of U˙ up to the boundary @ D @.R3 n /, we can again calculate .MM.0; @/ i / U as a distribution: .MM.0; @/ i / U D .MM.0; @/ i / . U C R3 n UC / D F C R3 n F C .MM.0; @/ 188 / .U UC /: Here we have used that @k R3 n D @k . Recalling that @k D nk ı@ ;
(3.2.154)
we obtain .MM.0; @/ i / U D F C R3 n F C MM.0; n/ ı@ .UC U /: If F is such that F D F C R3 n F , which is the case for F 2 H0 .@ C e/, then we have, with ŒU @ WD UC U , .MM.0; @/ / U D F C MM.0; n/ ı@ ŒU @ : Assuming supp F to be compact, we obtain U D Gi 0 F C Gi 0 MM.0; n/ ı@ ŒU @ as a unique solution in the sense of Theorem 3.2.36. This solution is again smooth outside of the support of F and away from @ according to Weyl’s Lemma 3.2.16 (recalling that MM.0; @/2 D @2 ). As before we can now establish these convolutions as integral representations: Z Gi 0 .x y/ F .y/ dy U.x/ D R3 Z C Gi 0 .x y/ MM.0; n.y// ŒU @ .y/ dSy @
for x 2 R3 n @.
There are, however, still some structural results missing. Recalling equation (3.2.152), we notice that F is not completely arbitrary. So let us consider the specific form of the fundamental solution U D Gi 0 F D gi 0 .MM.0; @/ C i / F: Introducing the specific block structure of (3.2.152), we write F D .F1 F2 F3 F4 /> and U D .U1 U2 U3 U4 /> . In the case of a usual electro-magnetic field, we would want to have the scalar contributions to U , i.e. U1 ; U4 to vanish. This can only be the case if the first and the last component of .MM.0; @/ C i / F vanish. Thus we must have @ F2 C i F1 D 0; @ F3 i F4 D 0: (3.2.155) In this case the solution theory reduces to the actual electro-magnetic case of equation (3.2.152). The expansion of the system can be regarded as a mathematical trick to include the divergence conditions in a conveniently accessible system. Conditions (3.2.155) are then the compatibility conditions for time-harmonic electro-magnetic fields. The radiation condition now reduces to m U2 2 H0 .r i/; jmj
m U3 C U2 2 H0 .r i/; jmj m U2 C U3 2 H0 .r i/; jmj m U3 2 H0 .r i/: jmj
Noting that by elementary vector analysis all other relations follow from the third relation, it is clear that it suffices to impose m U2 C U3 2 H0 .r i/ jmj
(3.2.156)
as a condition implying uniqueness in this case. Relation (3.2.156) is nothing but the classical (in our sense outgoing) Silver–Müller radiation condition for time-harmonic electro-magnetic fields.
Chapter 4
Linear Evolution Equations
4.1  Linear Operator Equations with Constant Coefficients in Sobolev Lattices
4.1.1  Polynomials of Commuting Operators

We have developed a solution theory for polynomials in the family $\partial = (\partial_0,\dots,\partial_n)$, $n\in\mathbb{N}$, of partial derivatives, relying strongly on the availability of a spectral theorem associated with the family of normal operators $\frac{1}{\mathrm{i}}\partial$ (provided by the Fourier–Laplace transform). To generalize our findings, let us first consider a general Sobolev lattice $(H_\alpha(C))_{\alpha\in\mathbb{Z}^{n+1}}$ associated with a family $C$ of $(n+1)$ operators. In order to approach the corresponding general solution theory in $H_{-\infty}(C) := \bigcup_{\alpha\in\mathbb{Z}^{n+1}} H_\alpha(C)$, we recall Definition 3.2.6 of spectrum and resolvent set of an operator in $H_{-\infty}(C)$. In the complete generality that we have here for the family $C$, we cannot expect a more detailed solution theory than given by Definition 3.2.6. We simply have to live with the trivial statement provided by the following definition and its similarly trivial consequences.

Definition 4.1.1. Let $(H_\alpha(C))_{\alpha\in\mathbb{Z}^{n+1}}$ be the Sobolev lattice associated with a family $C = (C_0,\dots,C_n)$ of operators. For an $(L+1)\times(L+1)$-polynomial matrix function $P$ ($L\in\mathbb{N}$) we shall say $P(C)$ is well-composed if $0\in\rho_{-\infty}(\det P(C))$.

If $P(C)$ is well-composed, then
$$P(C)\,u = f \tag{4.1.1}$$
has a unique solution $u\in H_{-\infty}(C)$ for every $f\in H_{-\infty}(C)$ and the solution depends continuously on the data in the sense that $P(C)^{-1}: H_{-\infty}(C)\to H_{-\infty}(C)$ is continuous. Note that here $H_{-\infty}(C)$ abbreviates the Sobolev lattice structure $(H_A(C))_{A\in(\mathbb{Z}^{n+1})^{(L+1)\times(t+1)}}$ for some $t\in\mathbb{N}$. To simplify notation we occasionally extend our earlier convention by writing simply $H_\beta(C)$ for $H_A(C)$ if the index matrix $A = (\beta)_{i=0,\dots,L;\,j=0,\dots,t}$, $t\in\mathbb{N}$, $\beta\in\mathbb{Z}^{n+1}$, where the actual matrix size is clear from the context. The solution can be given in purely algebraic terms, for example by Cramer's rule
$$P(C)^{-1} = (\det P(C))^{-1}\operatorname{cof}(P(C))^{\top}, \tag{4.1.2}$$
or alternatively via the Cayley–Hamilton Theorem. If P .C / is well-composed, we shall say that problem (4.1.1) is well-posed. If more is known about C , more sophisticated solvability statements can be derived. In the following subsection we shall investigate the particular case where C is a family of commuting selfadjoint operators.
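To make the cofactor formula (4.1.2) concrete, we add a small worked case (a sketch under the assumptions of Definition 4.1.1, not part of the original text): for $L = 1$, i.e. a $2\times2$ polynomial matrix
$$P(C) = \begin{pmatrix} P_{00}(C) & P_{01}(C)\\ P_{10}(C) & P_{11}(C)\end{pmatrix},$$
the entries commute, being polynomials in the single commuting family $C$, so $\det P(C) = P_{00}(C)P_{11}(C) - P_{01}(C)P_{10}(C)$, and well-composedness means $0\in\rho_{-\infty}(\det P(C))$. Cramer's rule (4.1.2) then reads
$$P(C)^{-1} = (\det P(C))^{-1}\begin{pmatrix} P_{11}(C) & -P_{01}(C)\\ -P_{10}(C) & P_{00}(C)\end{pmatrix},$$
so that solving $P(C)u = f$ amounts to applying two polynomial operators and inverting a single scalar polynomial operator.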
4.1.2 Polynomials of Commuting, Selfadjoint Operators We consider the special case of polynomials with respect to a finite family of commuting, normal operators. Recalling the decomposability of normal operators into a selfadjoint real and imaginary part, we may indeed consider polynomials (with complex .L C 1/ .L C 1/-matrices as coefficients) in a finite family A D .Ak /kD0;:::;n of .n C 1/ commuting, selfadjoint operators in a Hilbert space H , (L; n 2 N), and consider a solution theory in Sobolev lattice structures of the form .H˛ .A ˙ i e//˛2.ZnC1 /.LC1/.sC1/ ;
s 2 N:
If V denotes a spectral representation associated with A, then such a polynomial operator matrix P .A/ would be unitarily equivalent to multiplication by P .m/ D V P .A/ V in the appropriate L2 -type space, which we shall denote by H0;A .i mCe/. The ideas we used for partial differential equations carry now over to this case with some mild modifications. Definition 4.1.2. Let A D .Ak /kD0;:::;n be a family of .nC1/ commuting, selfadjoint operators in Hilbert space H with corresponding spectral families .…Ak /kD0;:::;n . Then ˇ ³ [² ^ Z ˇ nC1 nC1 ˇ 2 R n d j…A0 . 0 / …An . n / j D 0 OR ˇO open; 2H
O
is called the support of the spectral measure or the (joint) spectrum .A/ of A, [42], [10]. Definition 4.1.3. Let A D .Ak /kD0;:::;n be a family of .nC1/ commuting, selfadjoint operators and P a polynomial .L C 1/ .L C 1/-matrix in n variables, n; L 2 N. Then the operator P .A/ is called invertible, if x 7! det.P .x// has no zeros in .A/ RnC1 and x 7! .x i e/˛ .det P .x//1 (4.1.3) is bounded on .A/ for some ˛ 2 N nC1 . Comparing this with the case A D . 1i @0 ; : : : ; 1i @n /, which we dealt with in our earlier discussion of the Sobolev lattice associated with .@0 ; : : : ; @n /, see Subsection 3.1.1, we have there that . 1i @0 ; : : : ; 1i @n / D RnC1 and condition (4.1.3) was shown to follow. In general, condition (4.1.3) has to be imposed explicitly. As a consequence we obtain the following special case of our solution theorem.
Theorem 4.1.4. Let $A = (A_k)_{k=0,\dots,n}$ be a family of $(n+1)$ commuting, selfadjoint operators in a Hilbert space $H$ and $P$ a polynomial $(L+1)\times(L+1)$-matrix in $(n+1)$ variables, $n,L\in\mathbb{N}$, such that $P(A)$ is invertible. Then
$$P(A)\,u = f \tag{4.1.4}$$
has for every $f\in H_{-\infty}(A+\mathrm{i}e)$ a unique solution $u\in H_{-\infty}(A+\mathrm{i}e)$. The solution operator $P(A)^{-1}: H_{-\infty}(A+\mathrm{i}e)\to H_{-\infty}(A+\mathrm{i}e)$ is continuous. In other words, problem (4.1.4) is well-posed, i.e. $0\in\rho_{-\infty}(\det P(A))$.

Proof. The result is clear since this is just a special case of our discussion in the previous subsection. That $P(A)^{-1} = (\det P(A))^{-1}\operatorname{cof}(P(A))^{\top}$ is the continuous inverse in $H_{-\infty}(A+\mathrm{i}e)$ is, due to the spectral representation, a purely algebraic question (compare (4.1.2)). Technically, we find the solution by letting
$$u = V^{*}\,P(m)^{-1}\,V\,f,$$
where $V: H_{-\infty}(A+\mathrm{i}e)\to H_{-\infty,A}(\mathrm{i}m+e)$ denotes the continuous extension of the spectral representation $V$ to $H_{-\infty}(A+\mathrm{i}e)$. Here the abbreviation $H_{-\infty,A}(\mathrm{i}m+e)$ denotes the lattice structure obtained from $\mathrm{i}m+e$ acting on the $L^2$-type image space of $V$.

In the following we shall be more interested in considering evolution equations involving polynomials of operators. Indeed, the general solution theory so far does not explicitly provide for an evolutionary direction. We shall remedy this by fixing one of the operators to be the weighted time-derivative $\partial_{0,\nu}$, $\nu\in\mathbb{R}$.
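To make the formula $u = V^{*}\,P(m)^{-1}\,V\,f$ tangible, here is a minimal finite-dimensional sketch we add (assumptions: the family $A$ is replaced by a single real symmetric matrix and $P$ by a scalar polynomial without zeros on the spectrum; this is an illustration only, not the operator-theoretic setting of the theorem):

import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                        # selfadjoint (real symmetric) stand-in
lam, V = np.linalg.eigh(A)               # eigendecomposition: A = V @ diag(lam) @ V.T

P = lambda x: x**2 + 2.0 * x + 5.0       # sample polynomial, nonzero on the real spectrum
f = rng.standard_normal(4)

# transform to the spectral side, divide by P(lam), transform back
u = V @ ((V.T @ f) / P(lam))

P_of_A = A @ A + 2.0 * A + 5.0 * np.eye(4)   # the operator polynomial P(A)
print(np.linalg.norm(P_of_A @ u - f))        # ~1e-15: u indeed solves P(A) u = f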
4.2  Evolution Equations with Polynomials of Operators as Coefficients
4.2.1 Classification of Operator Polynomials with Time Differentiation Guided by our earlier considerations we are led to study general expressions of the form P .@0 ; A/ WD P .@0; C ; A/;
where P is a polynomial in n C 1 variables, 2 R. Here we have used the natural abbreviations @0; WD @ ˝1H1 .A/ D .@0 /˝1H1 .A/ and Ak for 1H1 .@ 1/ ˝ Ak , k D 1; : : : ; n. Returning to the general case, we shall assume that the operator family A WD .A1 ; : : : ; An / is a family of n commuting, densely-defined, closed, linear operators with non-empty resolvent Tset acting in a Hilbert space H , and without loss of generality we shall assume 0 2 iD1;:::;n %.Ai /. Then, @0 WD @0; C , P .@0 ; A/ is a continuous operator in H1 .@0; C ; A/ (this is H1 .C / with the associated operator family C D .@0; C ; A1 ; : : : ; An / ). Note that H˛0 .@ C / ˝ H.˛1 ;:::;˛n / .A/ D H.˛0 ;˛1 ;:::;˛n / .@0; C ; A/ D H˛ .@0; C ; A/ for ˛ WD .˛0 ; ˛1 ; : : : ; ˛n / 2 Z1Cn . The norm and inner product of H˛ .@0; C ; A/, ˛ 2 Z1Cn , will again be denoted by j j;˛ and hji;˛ , respectively. We shall also expand these concepts – as in the case of partial differential equations – to matrices of such spaces, i.e. to matrices of multi-indices. We recall these ideas briefly. Let ˛ D .˛ij /i;j 2 .Z1Cn /.sC1/.tC1/ , s; t 2 N. Then M M H˛ .@0; C ; A/ D H˛ij .@0; C ; A/: iD0;:::;s j D0;:::;t
For the latter we may again use the natural notation H˛ .@0; C ; A/ D .H˛ij .@0; C ; A//iD0;:::;sIj D0;:::;t as a matrix of spaces. We shall utilize the tensor product extension of the 1-dimensional Fourier–Laplace transform L L WD L ˝ 1H so that L is the Fourier–Laplace transform with respect to time only). Its extension to the Sobolev lattice will again be denoted simply be L , 2 R. We have as a corresponding variant of the associated “spectral representation” L @0 D .im0 C /L :
(4.2.5)
We also read off that L1 D L ˝ 1H : 1 Of course, L , L1 are both continuous linear transformations on the associated Sobolev lattice structure H1 .@0; C ; A/. Our earlier classification scheme for PDE needs to be adapted to this situation, where only the time-direction is available for the Fourier–Laplace transform. 1 With this construction we basically by-pass more intricate issues of vector-valued Laplace transforms, which are unavoidable in the Banach space case, see e.g. [5].
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 307 Definition 4.2.1. Let P be a polynomial in n C 1 variables with quadratic matrices of size .s C 1/ as coefficients, s; n 2 N, and let A WD .A1 ; : : : ; An / be a family of n densely-defined, commuting, closed, linear operators acting in a Hilbert space H T with 0 2 iD1;:::;n %.Ai /. If, for some C 2 R>0 the operator family .P .; A/ W H1 .A/ ! H1 .A//2i ŒRCŒR>C satisfies
\
02
%1 .det P .; A//
2i ŒRCŒR>C
and, for some k 2 N, ˛ 2 N n , i ŒR C ŒR>C ! L.H; H /; 7! k .det P .; A//1 A˛ is bounded in i ŒR C ŒR>C , we shall call P .@0 ; A/ (forward) evolutionary (for > C ). Here L.H; H / denotes the space of continuous linear mappings on H . If, for some 2 R0 , then P .@0; C ; A/1 W H1 .@0; C ; A/ ! H1 .@0; C ; A/ is backward causal for all 2 R . Proof. Since P .@0 ; A/ is forward evolutionary C > 0 if and only if P .@0 ; A/ is backward evolutionary C < 0, we may focus on the first case. That P .@0; C ; A/1 W H1 .@0; C ; A/ ! H1 .@0; C ; A/ exists as a continuous operator for all C > 0 follows since P .@0; C ; A/ is evolutionary by assumption and Cramer’s rule yields P .@0; C ; A/1 D .det P .@0; C ; A//1 cof.P .@0; C ; A//> ;
(4.2.9)
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 311 which follows as in the case of matrix algebra from det P .@0; C ; A/ 1.ss/ D P .@0; C ; A/ cof.P .@0; C ; A//> By assumption, we have the existence of ˛ 2
Nn
on H1 .@0; C ; A/:
and k 2 N such that
.x; / 7! .i x C /k .det P .i x C ; A//1 A˛ is uniformly bounded for x 2 R and C > 0. This can be written in the form k. C C //k .det P . C C /; A//1 A˛ k C
(4.2.10)
for some C 2 R>0 and all 2 i R C R>0 . It suffices to show that .@0; C /k .det P .@0; C ; A//1 A4˛3ˇ is causal for some suitable ˇ 2 N n . Let f 2 H .@0; C ; A/, 2 .Z1Cn /s1 D .Z1Cn /s be such that a D inf supp0 f 2 R. Then, we have to show that inf supp0 ..@0; C /k .det P .@0; C ; A//1 A4˛3ˇ f / a: Without loss of generality (translation in time) we may assume a D 0 and also that f 2 H0 .@0; C; A/, since multiplication by powers of A does not influence the time support and @1 0 is causal. Thus we can define a mapping S W i R C R>0 ! H given by b 1 . C C / : (4.2.11) S./ WD . C C /k .det P . C C /; A//1 A4˛3ˇ f i b in (3.1.31). By the converse of the Paley–Wiener theorem Recall the definition of f b. 1 .CC //i0 is we know that the mapping F W iŒRCŒR>0 ! C given by 7! h jf i b. 1 . C C // is likewise analytic. Consequently, analytic for every 2 H . So, 7! f i the mapping S given by S
7! h jS./i0 is analytic in i R C R>0 for every 2 H . Moreover, by (4.2.10) we have, for all 2 R>0 , D C C , b. i . C C //j0;0 D C jL f j0;0 D C jf j;0 : jx 7! S .i x C /j0 C jf Thus, we get that S is in the Hardy–Lebesgue space HL, defined in (3.1.47). The Paley–Wiener theorem, with D C C , now yields Z
.t / .F h j S.i C/i0 /.t / dt D h ˝ jF ˝ 1 S.i C/i0;0 R
D hexp. m0 / ˝ j L ˝ 1 S.i C/i;0 D0
for all 2 H0 .A / and 2 CV 1 .R/ with supp0 R0 , jj sufficiently large, resp.) is frequently referred to as the (forward/backward) propagator. If P .@0; C ; A/ is (forward/backward) evolutionary then P .@0; C ; A/1 is in a certain sense independent of as long as ˙ > ˙ for some ˙ 2 R>0 . Recalling that elements of H1 .@0; C 1; A/ can be considered as a linear functionals on CV 1 .R/ ˝ H1 .A/, by letting 7! f . / WD hf j i0;0 WD hf j exp.2 m0 / i;0 ; we see that solutions for different values of are already comparable. Conversely, to a characterize a functional on CV 1 .R/ ˝ H1 .A/ as an element of H1 .@0; C 1; A/ it would suffice to establish that it extends to a continuous linear functional on a space Hk; .@0; C1; A / for some k 2 Z, 2 Zn . This is based on the alternative duality relation Hk; .@0; C 1; A/ D Hk; .@0; C 1; A / D Hk; .@0; C 1; A / for k 2 Z, 2 Zn , based on h j i0;0 D h j i0;0;0 as duality pairing rather than h j i;0;0 . Indeed, any f 2 Hk; .@0; C 1; A / defines a bounded linear functional on Hk; .@0; C 1; A/ as can be read off from hf j i0;0 D hf j A .@0; C 1/k .@0; C 1/k A i0;0 D hf j exp.2m0 / A .@0; C 1/k .@0; C 1/k A i;0 D hf j A .@0; C 1/k exp.2m0 / .@0; C 1/k A i;0 D h.@0; C 1/k .A / f j exp.2m0 / .@0; C 1/k A i;0 jf j;;k; j exp.2m0 / .@0; C 1/k A j;0;0 D jf j;;k; j.@0; C 1/k A j;0 D jf j;;k; j j;k; a
for all 2 CV 1 .R/ ˝ H1 .A/. Here we have used exp.2m0 / @0; D exp.2m0 / exp.m0 / @0 exp.m0 / D exp.m0 / @0 exp.m0 / exp.2m0 / D @0; exp.2m0 /:
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 313 a
Now let F 2 Hk; .@0; C 1; A/ , i.e. F is a linear functional on CV 1 .R/ ˝ H1 .A/ satisfying jF . /j C0 j j;k; : Substituting D exp.2 m0 /
a
2 CV 1 .R/ ˝ H1 .A/, we obtain
where
jF .exp.2 m0 / /j C0 j exp.2 m0 / j;k; and recall from our previous calculation that j exp.2 m0 / j;k; D j.@0; C 1/k A exp.2 m0 / j;0 D j.@0; C 1/k A
j;0
D j j;k; : This brings us back to our standard duality, that is 7! F .exp.2 m0 / / corresponds to an element f 2 Hk; .@0; C1; A / D Hk; .@0; C1; A/ , given by hf j i;0 D F .exp.2 m0 / / a
for all 2 CV 1 .R/ ˝ H1 .A/. Recalling that we chose to identify the element f 2 Hk; .@0; C 1; A / with the functional 7! f . / WD hf j i0;0 WD hf j exp.2 m0 / i;0 a
on CV 1 .R/ ˝ H1 .A/, we see that f DF a
as functionals on CV 1 .R/ ˝ H1 .A/. We may introduce as before the (maximal) continuous extension of the C-valued mapping . ; / 7! h j i0;0 a
a
from .CV 1 .R/ ˝ H1 .A// .CV 1 .R/ ˝ H1 .A // to the duality pairing between the spaces H˛ .@0; C 1; A/ and H˛ .@0; C 1; A / and we shall denote the continuous extension again by h j i0;0 leaving the dependence on ; ˛ and A or A to the context. Note in particular the resulting suggestive – but slightly ambiguous – relation h j i0;0 D h j i0;0
for all 2 H˛ .@0; C 1; A/ and 2 H˛ .@0; C 1; A / for arbitrary 2 R; ˛ 2 ZnC1 . In the light of our above duality considerations, it may happen that an element f 2 H1 .@0; C 1; A/ considered as a functional 7! hf j exp.2m0 / i;0 D a hf j i0;0 for 2 CV 1 .R/ ˝H1 .A / turns out to be continuous in a slightly different sense, namely that there is a C 2 R>0 such that jhf j i0;0 j C j j;e ;˛ a
for every 2 CV 1 .R/ ˝ H1 .A / and some ˛ 2 ZnC1 , e 2 R. Such an f can be considered as an element of H˛ .@0;e C 1; A/ H .@ 1 0;e C 1; A/. It is in this sense that we shall say f 2 H1 .@0; C 1; A/ \ H1 .@0;e C 1; A/: In order to compare H˛ .@0; C 1; A/ and H˛ .@0;e C 1; A/, we note that a
CV 1 .R/ ˝ H1 .A/ H˛ .@0;e C 1; A/ ! H˛ .@0; C 1; A/;
7! exp.. e /m0 / extends to a unitary mapping between H˛ .@0;e C 1; A/ and H˛ .@0; C 1; A/, ˛ 2 nC1 Z , ; e 2 R. This follows since /m0 / exp.. e /m0 / .@0;e C 1/ D .@0; C 1/ exp.. e and so by induction ˛0 b ˛ ˛0 b ˛ /m0 / exp.. e /m0 / .@0;e C 1/ A D .@0; C 1/ A exp.. e a
2 R. Here we needed to use that the image of on CV 1 .R/ ˝ H1 .A/, ˛ 2 ZnC1 , ; e a CV 1 .R/ ˝ H1 .A/ under the mapping .@0;e C 1/ is dense in H1 .@0;e C 1; A/. The resulting unitary mapping will simply be denoted as exp.. e /m0 /. Theorem 4.2.6. Let P .@0; C ; A/ be forward/backward evolutionary and causal for ˙ > ˙˙ , ˙˙ 2 R>0 , respectively. Then for ˙; ˙e > ˙˙ and given data f 2 H1 .@0; C 1; A/ \ H1 .@0;e C 1; A/ we have u WD P .@0; C ; A/1 f 2 H1 .@0; C 1; A/ \ H1 .@0;e C 1; A/; ue ; A/1 f 2 H1 .@0; C 1; A/ \ H1 .@0;e WD P .@0;e Ce C 1; A/ and u D ue a
as linear functionals on CV 1 .R/ ˝ H1 .A /.
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 315 Proof. We calculate hu j P .@0; C ; A / ˝ wi;0 D hP .@0; C ; A/1 f j P .@0; C ; A / ˝ wi;0 D hP .@0; C ; A/ P .@0; C ; A/1 f j ˝ wi;0 D hf j ˝ wi;0 D hf j exp.2m0 / ˝ wi0;0 D hf j exp.2.e /m0 / ˝ wie ;0 D hf j P .@0;e ; A /1 P .@0;e ; A / exp.2.e /m0 / ˝ wie Ce Ce ;0 D hue m0 /P .@0 ; A / exp.2m0 / ˝ wie j exp.2e ;0 D hue m0 / P .@0; C ; A / exp.2m0 / ˝ wie j exp.2e ;0 D hue /m0 / P .@0; C ; A / ˝ wie j exp.2.e ;0 : a
By the density2 of P .@0 C 2; A /ŒCV 1 .R/ ˝ H1 .A / in H1 .@0; C 1; A / we obtain /m0 / ˆie hu j ˆi;0 D hue j exp.2.e ;0 for all ˆ 2 H1 .@0; C 1; A /. Note that exp.2.e /m0 / W H˛ .@0; C 1; A / ! a nC1 . Specializing this to C V 1 .R/ ˝H1 .A / H˛ .@0;e C1; A / is unitary for all ˛ 2 Z we obtain hu j ˝ wi;0 D hue //m0 / ˝ wie j exp.2.e ;0 for all w 2 H1 .A / and 2 CV 1 .R/. On the other hand, we have for all such w and hu j ˝ wi;0 D hu j exp.2m0 / ˝ wi0;0 ; so that with exp.2m0 / replaced by we find hu j ˝ wi0;0 D hue j ˝ wi0;0 for all w 2 H1 .A / and 2 CV 1 .R/ and the result follows. 2
Note that with P .@0; C ; A/ being invertible, we have for a suitable choice of ˛0 2 N, b ˛ 2 Zn , ˛ .i m0 C /˛0 P .i m0 C ; A/1 Ab
as a bounded operator in H0 .i m0 C ; A/. Taking adjoints, we also have ˛ .i m0 C /˛0 P .i m0 C ; A /1 .A /b
as a bounded operator in H0 .i m0 C ; A /, which in turn shows that P .@0; C ; A / is invertible. Thus, in particular, the desired density statement is valid, due the continuity of such expressions in H1 .@0; C ; A /.
Remark 4.2.7. As before, we note explicitly that here also $P(\partial_0,A)$ as an operator polynomial matrix may still contain parts with no time derivatives or even purely algebraic equations (abstract differential-algebraic systems). In the following we shall focus on the forward evolutionary case.
4.2.3 Abstract Initial Value Problems We recall3 that for ¤ 0 .A/ H˛ .@0; C ; A/ D H˛0 .@ C / ˝ Hb ˛ where ˛ D .˛0 ; ˛1 ; : : : ; ˛n / 2 Z1Cn , b ˛ D .˛1 ; : : : ; ˛n / 2 Zn . If ˛ is to mean .ij / .ij / .ij / a matrix .˛ .ij / /iD0;:::;sIj D0;:::;t of multi-indices ˛ .ij / D .˛0 ; ˛1 ; : : : ; ˛n / in .ij / Z1Cn , then we have ˛0 D .˛0 /iD0;:::;sIj D0;:::;t and b ˛ D .b ˛ .ij / /iD0;:::;sIj D0;:::;t .ij / .ij / with b ˛ .ij / D .˛1 ; : : : ; ˛n / 2 Zn , i D 0; : : : ; sI j D 0; : : : ; t . It is in this sense of separated time that we shall also write H1 .@0; C / ˝ H1 .A/ and H˛0 .@0; C / ˝ H1 .A/
for
for H1 .@0; C ; A/ [
H˛0 .@0; C / ˝ Hb .A/: ˛
˛ b To analyze the issue of initial boundary value problems more closely, we note that the polynomial P can be written in the form
$$P(\lambda, A) = \sum_{j=0}^{p} a_j(A)\,\lambda^{j},$$
where $a_j$ is an $(s\times s)$-matrix polynomial for $j = 0,\dots,p$ and $p\in\mathbb{N}$. We assume that $a_p$ is not the zero polynomial, so that $p$ is indeed the degree of the single variable operator matrix polynomial $\lambda\mapsto P(\lambda,A)$. For classical initial value problems we routinely assume that $a_p$ is indeed a constant invertible matrix and without loss of generality we therefore assume
$$a_p = 1_H. \tag{4.2.12}$$
Therefore, we also have
$$P(\partial_0, A) = \sum_{j=0}^{p} a_j(A)\,\partial_0^{\,j} = \partial_0^{\,p} + \sum_{j=0}^{p-1} a_j(A)\,\partial_0^{\,j}.$$
In analogy to the partial differential equation case we consider the following abstract initial value problem. 3 We have agreed that occasionally we shall use as a matter of convenience @ 0; C instead of @0; C 1, since ¤ 0. This amounts simply to choosing an equivalent norm.
Definition 4.2.8. Let $P(\partial_{0,\nu}+\nu, A)$ be evolutionary for all $\nu > C > 0$. Then the problem of finding the solution $u\in H_{-\infty}(\partial_{0,\nu}+\nu, A)$, $\nu > C$, of
$$P(\partial_0,A)\,u = f_0 + \sum_{j=0}^{p}\sum_{k=0}^{j-1}\partial_0^{\,j-k-1}\,\delta\otimes\bigl(a_j(A)\,u_{0,k}\bigr), \tag{4.2.13}$$
where $f_0\in H_0(\partial_{0,\nu}+\nu)\otimes H_{-\infty}(A)$ with $\operatorname{supp}_0 f_0\subseteq[0,\infty[$ and $u_{0,k}\in H_{-\infty}(A)$, $k=0,\dots,p-1$, is called the abstract initial value problem for $P(\partial_0,A)$ with initial data $(u_{0,k})_{k=0,\dots,p-1}$ and source term $f_0$. If specifically $f_0 = 0$, we shall speak of the abstract initial value problem for $P(\partial_0,A)$ with initial data $(u_{0,k})_{k=0,\dots,p-1}$.

That the abstract initial value problem has a unique solution $u\in H_{-\infty}(\partial_{0,\nu}+\nu, A)$ which depends continuously on the data $f_0$ and $(u_{0,k})_{k=0,\dots,p-1}$ is clear from the general theory. However, we want to determine the sense in which this solution assumes its initial data. Noting the analogy to the partial differential equation case, we see that there is little change other than the replacement of $\widehat{\partial}+e$ by the general operator family $A$. We find
$$\begin{aligned}
\sum_{j=0}^{p}\sum_{k=0}^{j-1}\partial_0^{\,j-k-1}\,\delta\otimes\bigl(a_j(A)\,u_{0,k}\bigr)
&= \sum_{j=0}^{p} a_j(A)\,\partial_0^{\,j}\sum_{k=0}^{j-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}\\
&= P(\partial_0,A)\sum_{k=0}^{p-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}
 - \sum_{j=0}^{p} a_j(A)\,\partial_0^{\,j}\sum_{k=j}^{p-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}\\
&= P(\partial_0,A)\sum_{k=0}^{p-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}
 - \sum_{j=0}^{p} a_j(A)\sum_{k=j}^{p-1}\frac{m^{k-j}}{(k-j)!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}\\
&= P(\partial_0,A)\sum_{k=0}^{p-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}
 - \sum_{j=0}^{p} a_j(A)\sum_{k=0}^{p-j-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k+j}.
\end{aligned}$$
Thus, solving the abstract initial value problem for data $f_0$, $(u_{0,k})_{k=0,\dots,p-1}$ is equivalent to solving the abstract initial value problem with source term
$$f_0 - \sum_{j=0}^{p} a_j(A)\sum_{k=0}^{p-j-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k+j}$$
and vanishing initial data $(0)_{k=0,\dots,p-1}$, since if $w$ denotes the solution of the latter problem, then the solution $u$ of (4.2.13) is given by
$$u = w + \sum_{k=0}^{p-1}\frac{m^{k}}{k!}\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,k}. \tag{4.2.14}$$
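For orientation we add the spelled-out case $p=2$ (a sketch in the notation of (4.2.13) and (4.2.14), not part of the original text): the right-hand side of (4.2.13) becomes
$$f_0 + \delta\otimes\bigl(u_{0,1} + a_1(A)\,u_{0,0}\bigr) + \partial_0\,\delta\otimes u_{0,0},$$
the modified source term reads
$$f_0 - a_0(A)\bigl(\chi_{\mathbb{R}_{>0}}\otimes u_{0,0} + m\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,1}\bigr) - a_1(A)\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,1},$$
and (4.2.14) gives $u = w + \chi_{\mathbb{R}_{>0}}\otimes u_{0,0} + m\,\chi_{\mathbb{R}_{>0}}\otimes u_{0,1}$, which is the familiar reduction of a second order initial value problem to one with homogeneous initial data.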
We may therefore assume without loss of generality that the initial data are all vanishing. To connect with point-wise concepts, we shall use the following Sobolev embedding lemma, which is formulated for elements in $\overset{\circ}{C}_\infty(\mathbb{R})\overset{a}{\otimes} H_{-\infty}(A)$. For such $\varphi = \sum_k \varphi_k\otimes u_k$ we shall write
$$\varphi(t) := \sum_k \varphi_k(t)\,u_k, \qquad t\in\mathbb{R}.$$
Lemma 4.2.9. For 2 R>0 , ˛ 2 Zn , n 2 N, the embedding a
CV 1 .R/ ˝ H1 .A/ H1 .@0;% C %/ ˝ H˛ .A/ ! ¹ 2 C0 .R; H˛ .A//j j j;1;1=2 < 1º; u 7! .t 7! u.t // has a continuous extension to all of H1 .@0;% C %/ ˝ H˛ .A/ (“trace operator”) with respect to the norm j j;.1;˛/ WD j j.;0/;.1;˛/ of H1 .@0;% C %/ ˝ H˛ .A/ and the image norm j j;1;1=2 WD sup¹j exp.t / .t /j˛ j t 2 Rº ² j exp.t / .t / exp.s/ .s/j˛ C sup jt sj1=2
ˇ ³ ˇ ˇ t; s 2 R ^ t ¤ s ˇ
a
for 2 CV 1 .R/ ˝ H1 .A/. Moreover, we have 1 sup¹j exp.t /. u/.t /j˛ j t 2 Rº p j@0 j;.0;˛/ 2 and sup
²
ˇ ³ j exp.t /. u/.t / exp.s/. u/.s/j˛ ˇˇ t; s 2 R ^ t ¤ s j@0; uj;.0;˛/ ˇ jt sj1=2
for all u 2 H1 .@0; C / ˝ H˛ .A/. In particular, if f 2 H1 @0; C / ˝ H˛ .A/ and a . k /k a sequence in CV 1 .R/ ˝ H1 .A/ converging to f in H1 .@0; C / ˝ H˛ .A/, then limk!1 k .t; / exists in H˛ .A/ for all t 2 R, is independent of the particular approximating sequence and equals f . Furthermore, the mapping is injective.
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 319 Proof. The proof is analogous to the earlier Sobolev embedding lemma, the main modification being to have A in place of b @ C e (compare the proof of Lemma 3.1.59). As a consequence of the Lemma 4.2.9 we have a more general version of our earlier considerations for point-wise evaluation. Lemma 4.2.10 (Sobolev embedding lemma). Let f 2 H1 .@0; C / ˝ H˛ .A/, 2 a R n ¹0º, ˛ 2 Zn , and . k /k a sequence in CV 1 .R/ ˝ H1 .A/ converging to f in H1 .@0; C / ˝ H˛ .A/. Then limk!1 k .t / exists in H˛ .A/ for all t 2 R and is independent of the particular approximating sequence. Proof. The existence result follows from the previous lemma by substituting k j for in the first or second estimate, respectively, showing that . k .t //k is a Cauchy sequence in H˛ .A/, since . k /k is a Cauchy sequence in H1 .@0; C / ˝ H˛ .A/. Moreover, if . k /k is another such sequence, then with k k in place of we see that k .t / k .t / ! 0 in H˛ .A/ as k ! 1, since k k ! 0 in H1 .@0; C / ˝ H˛ .A/ as k ! 1. From Lemma 4.2.9 we see that we can also in this more general situation define the point-wise evaluation in time (or time trace) of any f 2 H1 .@0; C / ˝ H˛ .A/ by letting f .t / WD .f /.t / ˇ ° ± ˇ 2 lim k .t /ˇ. k /k 2 .CV 1 .R/ ˝ H1 .A//N ^ lim k D f : k!1
k!1
Moreover, according to Lemma 3.1.59 we have that t 7! f .t / 2 H˛ .A/ is (locally) Hölder continuous on R with Hölder exponent 12 . Having made this connection to point-wise continuity (of the point-wise evaluation in time) we need the next lemma to obtain a corresponding regularity result. Lemma 4.2.11. Let P .@0; C ; A/ D .@0; C /p C evolutionary for all > C > 0. Then
Pp1 j D0
aj .A/ .@0; C /j be
P .@0; C ; A/1 .Hk .@0; C / ˝ H1 .A// HpCk .@0; C / ˝ H1 .A/ for all k 2 Z. Moreover, the mapping f 7! P .@0; C ; A/1 f is continuous from Hk .@0; C / ˝ H1 .A/ to HpCk .@0; C / ˝ H1 .A/ for all k 2 Z.
Proof. Let P .@0 ; A/u D f . Then p
p
P .@0 ; A/ u D @0 @0 f and so p
p
@0 .u @0 f / C
p1 X
j
aj .A/ @0 u D 0
j D0
which gives p
p
@0 .u @0 f / C
p1 X
j
p
aj .A/ @0 .u @0 f / D
j D0
p1 X
j
p
aj .A/ @0 @0 f
j D0
D
p X
aps .A/ @s 0 f
sD1
D @1 0
p1 X
j
apj 1 .A/ @0 f:
j D0
Defining recursively f0 WD f
and
fnC1 WD
p1 X
j
apj 1 .A/ @0 fn
j D0
we have p
P .@0 ; A/ .u @0 f0 / D @1 0 f1 : By induction we can show that N X j p 1 @0 fj D @N fN C1 P .@0 ; A/ u @0 0 j D0
for all N 2 N. Now let f 2 Hk .@0; C / ˝ H1 .A/. Since P .@0 ; A/ has a finite regularity loss, for N 2 N sufficiently large we get that p
u @0
N X
j
@0 fj 2 HpCk .@0; C / ˝ H1 .A/
j D0
and hence u 2 HpCk .@0; C / ˝ H1 .A/:
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 321 We also read off that, by construction, u D P .@0; C ; A/1 f p
1 fN C1 C @0 D P .@0; C ; A/1 @N 0
N X
j
@0 fj
j D0
from which the desired continuity follows (for N 2 N sufficiently large). Thus the time-regularity of a solution is always equal to time-degree p of the polynomial plus the time-regularity of the right-hand side f . This observation can be extended to the following local regularity result. Lemma 4.2.12. Let f 2 H1 .@0; C ; A/ be such that '.m0 / f 2 Hk .@0; C / ˝ H1 .A/ for all ' 2 CV 1 .I / for some open interval I . Then .m0 / P .@0; C ; A/1 f 2 HpCk .@0; C / ˝ H1 .A/ for all
I I. 2 CV 1 .e I / for every open sub-interval e I with e
Proof. The argument follows the line of reasoning in the proof of Lemma 3.2.15. Let u WD P .@0; C ; A/1 f with P .@0; C ; A/ D
p X
aj .A/ .@0; C /j
j D0
where ap .A/ D 1. Then P .@0; C ; A/ '.m0 / u p X
D '.m0 / f C
j D0
j X j aj .A/ .@k0 '/.m0 / .@0; C /j k u k
(4.2.15)
kD1
since by Leibniz’ rule .@0;
j X j C / .'.m0 / u/ D .@k0 '/.m0 / .@0; C /j k u: k j
kD0
Let f 2 Hs .@0; C/ ˝ H1 .A/, s 2 Z. Then, by Lemma 4.2.11, u 2 HpCs .@0; C/ ˝ H1 .A/. We see that the right-hand side of (4.2.15) is in Hmin¹sC1;k/ .@0; C / ˝ H1 .A/. Therefore, '.m0 / u 2 HpCmin¹sC1;kº .@0; C / ˝ H1 .A/ 2 Hmin¹sCpC1;kCpº .@0; C / ˝ H1 .A/:
If s C p C 1 k C p, we are finished. If not, then we continue in the iterative fashion used in the proof of Lemma 3.2.15. Let CV 1 . 1; 1Œ/ be a R I D a; bŒ and let j 2 1 given non-negative function satisfying R j D 1. With j" D " j. ="/, " 2 R>0 , define4 nC1 WD j"nC1 ŒaCPn " ;bPn " , n 2 N, where "j WD 2jC1 , j 2 N, j D0 j
0 < < .b a/=8. Then, for all n 2 N, n
1
j D0 j
on Œa C ; b :
With 0 WD j ŒaC2;b2 , we have 0 .m0 / u D 0 .m0 / and by induction
n .m0 / u
for all n 2 N
0 .m0 / u 2 Hmin¹sCpCt;pCkº .@0; C / ˝ H1 .A/ for all t 2 N. In particular, we have
0 .m0 / u 2 HpCk .@0; C / ˝ H1 .A/:k Since 0 D 1 on Œa C 3 ; b 3 we have, for every .m0 / u D
2 CV 1 .Œa C 3 ; b 3 /,
.m0 / 0 .m0 / u 2 HpCk .@0; C / ˝ H1 .A/:
Since 2 R>0 can be chosen arbitrarily small, the result follows. As a consequence we formulate the following result. Lemma 4.2.13. Let f 2 H1 .@0; C ; A/ with supp0 f D ¹0º. Then, for every open sub-interval e I R n ¹0º, .m0 / P .@0; C ; A/1 f 2 H1 .@0; C / ˝ H1 .A/ I /. Moreover, we have for the restriction of the H1 .A/-valued for all 2 CV 1 .e function t 7! .P .@0; C ; A/1 f /.t / to R>0 .t 7! .P .@0; C ; A/1 f /.t //jR>0 2 C1 .R>0 ; H1 .A//: Proof. The first result is just a special case of Lemma 4.2.12 and by the Sobolev embedding lemma we get the second part. Pp1 Theorem 4.2.14. Let P .@0; C ; A/ D .@0; C /p C j D0 aj .A/ .@0; C /j be evolutionary for all > C > 0. Furthermore, let u 2 H1 .@0; C ; A/ be the solution of the corresponding abstract initial value problem for a source term f and initial data .u0;k /kD0;:::;p1 , > C . Then, .@k0 u/.0C/ D u0;k
for k D 0; : : : ; p 1:
4 Such a family of convolution operators .j / " "2R>0 based on the family of functions .j" /"2R>0 is called a family of mollifiers j" , " 2 R>0 .
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 323 Proof. We have P .@0 ; A/ u D f C
1 p jX X
j k1
@0
ı ˝ .aj .A/ u0;k /:
j D0 kD0
Analogous to (3.1.55), we also have pj p1 p X mk X X1 mk P .@0 ; A/ u ˝ u0;k D f aj .A/ ˝ u0;kCj : kŠ R>0 kŠ R>0 j D0
kD0
kD0
Since the right-hand side is in H0 .@0; C / ˝ H1 .A/, we find u
p1 X kD0
mk ˝ u0;k 2 Hp .@0; C / ˝ H1 .A/: kŠ R>0
This yields in particular Uj WD
j @0 u
j @0
p1 X kD0
j
D @0 u
p1 X kDj
jX 1
mkj ˝ u0;k .k j /Š R>0
j k1
@0
mk ˝ u0;k kŠ R>0 (4.2.16)
ı ˝ u0;k 2 H1 .@0; C / ˝ H1 .A/
kD0
for j D 0; : : : ; p 1. We have Uj .0/ D 0
for j D 0; : : : ; p 1;
(4.2.17)
since causality yields that Uj .t / D 0 for t 2 R0
Since by assumption p 2 Z>0 , this shows that t 7! w.t / must be continuous (indeed Hölder continuous with exponent 12 ). Observing that
.t C h/ .t / 1 .@0 /.t / D h h
Z
h 0
..@0 /.t C s/ .@0 /.t // ds;
a
for all 2 CV 1 .R/ ˝ H1 .A/, h 2 R, we get, as in the proof of Lemma 3.1.59, ˇ ˇ ˇ .t C h/ .t / ˇ ˇ ˇ .@
/.t / 0 ˇ ˇ h ˛ 2 jhj1=2 exp.t / max.exp.h/; 1/ j@0 j;.1;˛/ 2 jhj1=2 exp.t / max.exp.h/; 1/ j j;.2;˛/ : j 1
Replacing by @0
and taking limits we obtain
ˇ ˇ j 1 ˇ ˇ @0 .t C h/ @j0 1 .t / j ˇ .@0 /.t /ˇˇ ˇ h ˛ j 1
2 jhj1=2 exp.t / max.exp.h/; 1/ j@0 2 jhj
1=2
j;.2;˛/
exp.t / max.exp.h/; 1/ j j;.j C1;˛/ ;
for all 2 Hp .@0; C / ˝ H˛ .A/, j D 1; : : : ; p 1, ˛ 2 Zn . This proves that every 2 Hp .@0; C / ˝ H˛ .A/ is .p 1/-times classically differentiable and the classical derivatives at a point t coincide with the point-wise evaluation of the generalized derivative at t . We already know that these derivatives are continuous (indeed Hölder continuous with exponent 12 ). Corollary 4.2.16. Under the assumptions of Theorem 3:1:62 the solution u of the abstract initial value problem with no source term is infinitely (weakly) continuously differentiable on R0 with respect to time, in the sense that t 7! hwju.t C/i0 2 C1 .R0 / for every w 2 H1 .A/.
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 325 Proof. We have P .@0 ; A/ u D .@0; C /p u C
p1 X
aj .A/ .@0; C /j u
j D0
D
1 p jX X
j k1
@0
(4.2.18)
ı ˝ .aj .A/ u0;k /:
j D0 kD0
Since by Lemma 4.2.13 we already know that ujR>0 2 C1 .R>0 ; H1 .A//, it remains to investigate the limit case as the time parameter approaches zero from the right. Corollary 4.2.15 implies that .@0; C /p u D
p1 X
aj .A/ .@0; C /j u
on R>0
(4.2.19)
j D0
is right-continuous at 0, i.e. p @0 u.0C/
p
D .@0; C / u.0C/ D
p1 X
aj .A/ .@0; C /j u.0C/ exists:
j D0
Differentiating (4.2.19), we see by induction that all derivatives have right-sided limits at zero: j @0 u.0C/ exists for all j 2 N: Observing that, for every w 2 H1 .A/, j
j
for t 2 R>0
hwj@0 u.t /i0 D .@0 hwju./i0 /.t / and
j
j
hwj@0 u.0C/i0 D .@0 hwju./i0 /.0C/ for every j 2 N, we deduce the infinite continuous differentiability of the function t 7! hwju.t C/i0 in R0 for every w 2 H1 .A/. In particular j
j
.@0 hwju./i0 /.t / .@0 hwju./i0 /.0C/ j C1 .@0 hwju./i0 /.0C/ t j
j
hwj@0 u.t /i0 hwj@0 u.0C/i0 j C1 hwj@0 u.0C/i0 t ˇ j ˇ @0 u.t / @j0 u.0C/ j C1 ˇ D wˇ @0 u.0C/ ! 0 t D
0
as t ! 0C for every w 2 H1 .A/.
Our findings suggest the formulation of a classical initial value problem in H1 .A/: find u 2 R>0 .m0 / ŒHp .@0; C / ˝ H1 .A/ such that P .@0 ; A/ u D f 2 R>0 .m0 / ŒH0 .@0; C / ˝ H1 .A/
on R>0
(4.2.20)
and j
.@0 u/.0C/ D u0;j 2 H1 .A/
for j D 0; : : : ; p 1:
(4.2.21)
In the above we have proved existence of a solution to this problem. Lemma 4.2.17. Let p 2 N and u 2 R>0 .m0 / ŒHp .@0; C / ˝ H1 .A/. Then
u
p1 X kD0
mk ˝ .@k0 u/.0C/ 2 Hp .@0; C / ˝ H1 .A/: kŠ R>0
Pp1 k Proof. Let w D u kD0 mkŠ R>0 ˝ u0;k with u0;k WD .@k0 u/.0C/. Then w 2 H0 .@0; C / ˝ H1 .A/ and p
h@0 wj i;0 D h wj.@0; C /p i;0 D h uj.@0; C /p i;0 p1 XZ tk h u0;k j..@0; C /p /.t /i0 exp.2t / dt: kŠ R>0 kD0
Now, we note that by assumption u D R>0 .m0 / U for some U 2 Hp .@0; C / ˝ H1 .A/ and so h u j.@0; C /p i;0 D h R>0 .m0 / U j.@0; C /p i;0 Z D hU.t /j..@0; C /p /.t /i0 exp.2t / dt R>0
Z D
R>0
hU.t /j..@0 /p exp.2m0 / /.t /i0 dt
Z D
R>0
h@0 U.t /j..@0 /p1 exp.2m0 / /.t /i0 dt
C hU.0/j..@0 /p1 exp.2m0 / /.0/i0 :
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 327 Using integration by parts we obtain p
p
h @0 uj i;0 D hR>0 .m0 / @0 U j i;0 C
p1 X
h.@k0 U /.0/j..@0 /pk1 exp.2m0 / /.0/i0;0
kD0 p
D hR>0 .m0 / @0 U j i;0 C
p1 X
hı ˝ .@k0 U /.0/j.@0 /pk1 exp.2m0 / i0;0
kD0 p
D hR>0 .m0 / @0 U j i;0 C
D p1 X
pk1
@0
ˇ E ˇ ı ˝ .@k0 U /.0/ˇ
kD0
;0
:
Comparison with the above yields p
p
h@0 wj i;0 D hR>0 .m0 / @0 U j i;0 : This gives the estimate p
p
jh@0 wj i;0 j j@0 U j;˛ j j;˛ p
for some sufficiently large ˛ 2 N n , proving that @0 w 2 H0 .@0; C / ˝ H˛ .A/. Since @0 D @0; C is a bijection between HkC1 .@0; C / ˝ Hˇ .A/ and Hk .@0; C / ˝ Hˇ .A/ for all k 2 Z; ˇ 2 Zn , we get w 2 Hp .@0; C / ˝ H˛ .A/ and therefore in particular that w is .p 1/-times continuously differentiable. Since wjR>0 D ujR>0
p1 X kD0
.mjR>0 /k ˝ u0;k ; kŠ
we get the same differentiability property for u. Let u be a solution of (4.2.20) subject to the initial conditions (4.2.21). Then we have, using Lemma 4.2.17, that p1 X mk R>0 ˝ u0;k .@0; C /p u kŠ kD0
C
p1 X j D0
jX 1 mk R>0 ˝ u0;k D f aj .A/ .@0; C /j u kŠ kD0
in H0 .@0; C / ˝ H1 .A/. This gives P .@0 ; A/ u D f C
1 p jX X
j k1
@0
ı ˝ .aj .A/ u0;k /:
(4.2.22)
j D0 kD0
Thus, we see that u also satisfies the abstract initial value problem with source term f and initial data .u0;k /kD0;:::;p1 . The latter is, however, uniquely determined and we deduce the uniqueness of solution the classical initial value problem (4.2.20)–(4.2.21) in H1 .A/. Summarizing we have the following result. Pp1 Theorem 4.2.18. Let P .@0; C ; A/ D .@0; C /p C j D0 aj .A/ .@0; C /j be evolutionary for all > C > 0. An element u 2 H1 .@0; C ; A/ is a solution of the corresponding abstract initial value problem for a source term f and initial data .u0;k /kD0;:::;p1 if and only if it solves the classical initial value problem (4.2.20), (4.2.21) in H1 .A/. In particular, we have existence and uniqueness of the solution of the classical initial value problem (4.2.20), (4.2.21) in H1 .A/. The independence of the solution of the specific choice of the parameter as discussed earlier takes on a particular form in the present context. Pp1 Theorem 4.2.19. Let P .@0; C ; A/ D .@0; C /p C j D0 aj .A/ .@0; C /j be evolutionary for all > C > 0. Furthermore, let u 2 H1 .@0; C ; A/ be the solution of the corresponding abstract initial value problem for a source term f and initial data .u0;k /kD0;:::;p1 , > C . Then, for a source term f 2 H1 .@0; C ; A/ \ H1 .@0;e ; A/ Ce u .t / D ue .t / in H1 .A/
for all t 2 R>0 ;
and ; e > C . Proof. We have from the earlier result that hu j ˝ wi0;0 D hue j ˝ wi0;0 for all 2 CV 1 .R/ and w 2 H1 .A/. Therefore, Z Z hu .t / j wi0 .t / dt D hue .t / j wi0 .t / dt R>0
R>0
and consequently hu .t / j wi0 D hue .t / j wi0 for all t 2 R>0 and w 2 H1 .A/. From this the desired equality follows.
4.2.4  Systems and Scalar Equations

Continuing in complete analogy with our partial differential equation approach, we shall now exhibit the equivalence of an $((s+1)\times(s+1))$-system, $s\in\mathbb{N}$, to a 'scalar', i.e. $(1\times1)$-system. As a matter of convenience, we are particularly interested in reducing our considerations to systems which are first order in time. The main tool here is again the Cayley–Hamilton theorem: if the characteristic polynomial of $P(\partial_0,A)$ is given by
$$\lambda \mapsto \sum_{k=0}^{s+1} a_k(\partial_0,A)\,\lambda^{k},$$
then
$$a_0(\partial_0,A) = -\sum_{k=1}^{s+1} a_k(\partial_0,A)\,P(\partial_0,A)^{k},$$
where $a_k(\partial_0,A)$ are $(1\times1)$-matrix operator polynomials (to be applied componentwise), $k=0,\dots,s+1$. Applying this formula to the solution $u$ of $P(\partial_0,A)u = f$ we get⁵
$$a_0(\partial_0,A)\,u = -\sum_{k=1}^{s+1} a_k(\partial_0,A)\,P(\partial_0,A)^{k}\,u = -\sum_{k=1}^{s+1} a_k(\partial_0,A)\,P(\partial_0,A)^{k-1} f. \tag{4.2.23}$$
The same conclusion holds for the minimal polynomial. In the case of the characteristic polynomial we have a0 .@0 ; A/ D det.P .@0 ; A//. Summarizing, we have found the following correspondence. Theorem 4.2.20. Let P .@0 ; A/ be an ..s C 1/ .s C 1//-matrix operator polynomial invertible with respect to 2 R and let u; f 2 H1 .@0; C ; A/ be such that P .@0 ; A/u D f:
PsC1
(4.2.24)
k Furthermore, let t 7! kD0 ak .@0 ; A/ t be the characteristic polynomial or the minimal polynomial of P .@0 ; A/. Then u also solves the equations
a0 .@0; C ; A/ u D
s X
akC1 .@0; C ; A/ P .@0; C ; A/k f
kD0 5
Comparison with Cramer’s rule shows that
s X
ak .@0 ; A/ P .@0 ; A/k1
kD1
is nothing but the transpose of the formal cofactor matrix of P .@0 ; A/.
(4.2.25)
where ak .@0 ; A/ are .1 1/-matrix operator polynomials, k D 0; : : : ; s C 1. In the case of a characteristic polynomial we have a0 .@0; C ; A/ D det.P .@0; C ; A//. Conversely, every solution of (4.2.25) also solves (4.2.24). Proof. The argument follows exactly as in the partial differential equations in RnC1 case, compare Theorem 3.1.44. Details may safely be left to the reader. This result shows that a system of polynomial operator equations is equivalent to a diagonal system with the same scalar polynomial operators as diagonal entry. Let us turn now to the issue of converting a scalar polynomial operator equation into a system while reducing the order of differentiation with respect to time. Consider a .1 1/-matrix operator polynomial P .@0 ; A/ WD
$$P(\partial_0,A) := \sum_{k=0}^{s+1} a_k(A)\,\partial_0^{\,k}$$
with $a_{s+1}(A)=1$. Then
$$\begin{pmatrix}
\partial_0 & -1 & 0 & \cdots & 0\\
0 & \partial_0 & -1 & \ddots & \vdots\\
\vdots & & \ddots & \ddots & 0\\
0 & \cdots & 0 & \partial_0 & -1\\
a_0(A) & a_1(A) & \cdots & a_{s-1}(A) & \partial_0 + a_s(A)
\end{pmatrix}
\begin{pmatrix} u\\ \partial_0 u\\ \vdots\\ \partial_0^{\,s-1}u\\ \partial_0^{\,s}u\end{pmatrix}
=\begin{pmatrix} 0\\ \vdots\\ \vdots\\ 0\\ f\end{pmatrix}.$$
With
$$A(A) := \begin{pmatrix}
0 & 1 & 0 & \cdots & 0\\
\vdots & \ddots & \ddots & \ddots & \vdots\\
 & & \ddots & \ddots & 0\\
0 & \cdots & & 0 & 1\\
-a_0(A) & -a_1(A) & \cdots & -a_{s-1}(A) & -a_s(A)
\end{pmatrix}
\quad\text{and}\quad
V := \begin{pmatrix} u\\ \partial_0 u\\ \vdots\\ \partial_0^{\,s}u\end{pmatrix}$$
this simplifies to
$$(\partial_0 - A(A))\,V = F := \begin{pmatrix}0\\ \vdots\\ 0\\ f\end{pmatrix}. \tag{4.2.26}$$
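As a concrete finite-dimensional illustration of the block structure in (4.2.26) (added here as a sketch; we assume $A$ is replaced by an $n\times n$ matrix so that the coefficients $a_k(A)$ are plain matrices), the following builds the block companion matrix $A(A)$:

import numpy as np

def companion_block(coeffs):
    """coeffs = [a_0, ..., a_s] as (n, n) arrays; the leading coefficient a_{s+1} is the identity."""
    s1, n = len(coeffs), coeffs[0].shape[0]
    M = np.zeros((s1 * n, s1 * n))
    for j in range(s1 - 1):                          # identity blocks on the superdiagonal
        M[j * n:(j + 1) * n, (j + 1) * n:(j + 2) * n] = np.eye(n)
    for j, a in enumerate(coeffs):                   # last block row: -a_0, -a_1, ..., -a_s
        M[-n:, j * n:(j + 1) * n] = -a
    return M

# For n = 1 the eigenvalues of A(A) are the roots of z^{s+1} + a_s z^s + ... + a_0,
# i.e. of det(z - A(A)) = P(z, A).
a0, a1 = np.array([[2.0]]), np.array([[3.0]])
M = companion_block([a0, a1])
print(np.sort(np.linalg.eigvals(M)))                 # roots of z^2 + 3 z + 2: -2 and -1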
We note that $\det(\partial_0 - A(A)) = P(\partial_0,A)$ and so invertibility is preserved. Let $V$ be a solution of (4.2.26). Then with $u := V_0$ we recover from the first $s$ rows $V_k = \partial_0^{\,k}u$ for $k = 0,\dots,s$, as well as $P(\partial_0,A)\,u = f$ from the last row. Taking into account that every evolutionary higher order scalar polynomial operator equation (with leading coefficient 1) can be brought into the form of a system of first order with respect to time derivatives, we may consider an evolutionary $((s+1)\times(s+1))$-matrix polynomial operator, $s\in\mathbb{N}$, of the form
$$P(\partial_0,A) = \partial_0 - A(A), \tag{4.2.27}$$
where A.A/ is an ..s C 1/ .s C 1//-matrix polynomial operator of A alone, as the standard first order form of a matrix polynomial operator. The systems of evolution equations of mathematical physics can all be brought into this form. We shall therefore focus on evolutionary polynomial operator matrices of this particular form. Indeed, frequently we are in the situation that there is no particular ‘internal’ fine structure of A.A/, in which case we simply have dependence on just a single operator A and A./ is just a scalar polynomial of degree 1: A.A/ D c A C d for c 2 C n ¹0º, d 2 C. As a further simplification we assume – as is frequently encountered – that Re 1 .A.A// WD ¹Re j 2 1 .A.A//º is bounded above or, more restrictively, Re .A.A// WD ¹Re j 2 .A.A//º is bounded above: Then the solution theory simplifies and, to conclude this sub-section, we shall focus on this particular case. Proposition 4.2.21. Let A be a closed, densely defined, linear operator in a Hilbert space H , 0 2 %.A/ and let @0 c A d be forward evolutionary, c 2 C n ¹0º, d 2 C. Moreover, let Re.c .A/ C d / be bounded above. Then 7! r . c A d /1 is bounded in i R C R>C for some C 2 R>0 , r 2 N. Proof. We have Re.c .A/ C d / D Re .c A C d / C and 7! k . c A d /1 As
(4.2.28)
is bounded in i R C R>C for some C 2 R>0 , k; s 2 N. The resolvent equation now yields that .c A/1 C . c A d /1 D . d / . c A d /1 .c A/1 D c 1 . c A d /1 A1 c 1 d . c A d /1 A1 : Thus, we have k1 A . c A d /1 As D c 1 k1 As C c 1 k . c A d /1 As c 1 d k1 . c A d /1 As D c 1 k1 As c 1 .1 d 1 / . k . c A d /1 As /: This shows that at the price of increasing the power of 1 we may lower the power of A1 up to and including A D .A1 /1 . A simple induction argument now yields the result with r WD k C s. As a by-product of the above proof we note the following result. Corollary 4.2.22. Let A be a closed, densely defined, linear operator in a Hilbert space H such that 0 2 %.A/ and let @0 c Ad be forward evolutionary, c 2 C n¹0º, d 2 C. Moreover, let Re.c .A/ C d / bounded above. Then 7! k A . c A d /1 As1 is bounded in i R C R>C for some C 2 R and some r 2 N and all k; s 2 Z1 with k C s D r. Proof. The result follows as above observing additionally that we also have kC1 . c A d /1 As1 D k As1 C c k . c A d /1 As C d A1 k . c A d /1 As D k As1 C .c C d A1 / . k . c A d /1 As /: The last two results allow us to speak of a regularity loss of r 2 N, if the constant r in the formulation of Corollary 4.2.22 is minimal. As another consequence
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 333 of choosing our rather special case we obtain a more detailed regularity result6 . This result reflects the apparent trade-off between time and ‘space’ regularity inherent in the previous corollary (a “regularity see-saw”, so to speak). Lemma 4.2.23. Let .@0; C c A d / be evolutionary for all > C > 0 and have regularity loss r 2 N. Then the mapping f 7! .@0; C c A d /1 f is continuous from Hs .@0; C / ˝ Hk .A/ to Hsj .@0; C / ˝ Hkh .A/ for all k; s 2 Z and j; h 2 Z1 with j C h D r. Proof. Let .@0 c A d / u D f 2 Hs .@0; C / ˝ Hk .A/. Then, as in the proof of Lemma 4.2.11 (with A replacing the role of the time-derivative), we find .@0 c A d / u D A A1 f and so .@0 c A d /.u c 1 A1 f / D dc 1 A1 f c 1 A1 @0 f 2 Hs1 .@0; C / ˝ HkC1 .A/: Continuing in this fashion we obtain by induction N 1 X fj D fN 2 HsN .@0; C / ˝ HkCN .A/ .@0 c A d / u c 1 A1 j D0
where fnC1 WD d c 1 A1 fn c 1 A1 @0 fn , f0 D f , n 2 N. Now, we read off u c 1 A1
N 1 X
fj 2 HsN j0 .@0; C / ˝ HkCN h0 .A/
j D0
for some fixed h0 ; j0 2 N, h0 C j0 D r, and all N 2 N1 . Consequently, we have u 2 HsN j0 .@0; C / ˝ Hmin¹kCN h0 ;kC1º .A/: 6 Strengthening the assumptions on the behavior of the resolvent further eventually yields even more specific regularity results. Under suitable assumptions, strong continuity of the fundamental solution G can be shown to hold in every space Hj .A/, j 2 Z, of the Sobolev lattice .Hk .A//k2Z . The semigroup G. C 0/jR0 is then a strongly continuous family of bounded operators in Hj .A/, j 2 Z, (compare Subsection 4.2.5). Although the required assumptions are satisfied for the examples we are going to investigate more closely, we will rarely make use of such higher regularity results. We will pursue such issues only in the normal operator case and here regularity results are easily accessible through the use of the spectral theorem.
Noting that s N j0 C k C N h0 D s C k r, we get, with N D h0 C 1, that u 2 Hsr1 .@0; C / ˝ HkC1 .A/
(4.2.29)
and similarly, for N D h0 , that u 2 Hsr .@0; C / ˝ Hk .A/:
(4.2.30)
The latter shows that we could, without loss of generality, assume j0 D r and h0 D 0. With this additional observation we get similarly N 1 X 1 .@0 c A d / u @0 fj D fN 2 HsN .@0; C / ˝ HkCN .A/ j D0 1 where fnC1 WD cA@1 0 fn C d @0 fn , f0 D f , n 2 N. This time we read off
u
@1 0
N 1 X
fj 2 HsCN r .@0; C / ˝ HkN .A/
j D0
and so u 2 Hmin¹sC1;sCN rº .@0; C / ˝ HkN .A/: With j WD r N C 1 0 and h WD N 1 we get, for N D 0; : : : ; r C 1, u 2 HsC1j .@0; C / ˝ Hkh1 .A/:
(4.2.31)
From (4.2.30), (4.2.29) and (4.2.31) we see that u 2 Hsj .@0; C / ˝ Hkh .A/ for j C h D r, j; h 2 Z1 .
4.2.5 First-Order-in-Time Evolution Equations in Sobolev Lattices The solution theory carries over to the particular case of a first-order-in-time matrix polynomial operator of the form 1 0 A0 s .A/ @0 A0 0 .A/ A0 1 .A/ A0 .s1/ .A/ C B :: :: :: :: :: .@0 A.A// D @ A : : : : : As 0 .A/
As 1 .A/ As .s1/ .A/
@0 As s .A/
where A.A/ D .Ai j .A//i;j D0;:::;s . The associated abstract initial value problem with source term F0 and initial data U0 now takes on the simple form .@0 A.A//U D F0 C ı ˝ U0 :
Section 4.2 Evolution Equations with Polynomials of Operators as Coefficients 335 The propagator .@0 A.A//1 has a particularly noteworthy property if restricted to data of the form ı ˝ U0 . Let us consider this case more closely, i.e. we shall inspect the mapping S W H1 .A/ ! H1 .@0; C ; A/; U0 7! .@0 A.A//1 ı ˝ U0 : From the discussion of general initial value problems, we know that the point-wise evaluation S.t / U0 WD ..@0 A.A//1 ı ˝ U0 /.t C 0/ exists in H1 .A/ for t 2 R and that S.0/ U0 WD ..@0 A.A//1 ı ˝ U0 /.0C/ D U0 . Moreover, we found that t 7! S.t / U0 is continuous on R n ¹0º. We see that for s 2 R0 the mappings t 7! R0 .t / S.t C s/ U0 and t 7! S.t / S.s/ U0 both yield a solution of the equation .@0 A.A// U D 0
on R n ¹0º
and both assume at t D 0 the initial data S.s/ U0 . By the established uniqueness of solution of the classical initial value problem, we must have R0 .t / S.t C s/ U0 D S.t / S.s/ U0
(4.2.32)
for all t 2 R, s 2 R0 , and U0 2 H1 .A/. Restricting our attention to the time range R0 yields simply S.t C s/ U0 D S.t / S.s/ U0
(4.2.33)
for all t; s 2 R0 and U0 2 H1 .A/. In other words, S considered as a family of continuous operators .S.t // t2R0 forms a sub-semigroup of the semigroup of continuous, linear operators on H1 .A/ with respect to composition. Moreover, this semigroup is according to (4.2.33) homomorphic to the semigroup R0 with respect to addition C WD CjR0 R0 and we have shown that this homomorphism is also strongly continuous in the sense that t 7! S.t / U0 is continuous on R0 for every U0 2 H1 .A/. Thus, .S.t // t2R0 is in this sense strongly homeomorphic to the semigroup .R0 ; C/. We usually speak, although somewhat imprecisely, of the strongly continuous, one-parameter semigroup S D .S.t // t2R0 generated by A.A/ and write suggestively S.t / DW exp.t A.A//
for t 2 R0 :
With this we have S.t / D R0 .t / exp.t A.A// for t 2 R.
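In finite dimensions the objects above become completely concrete, which may help to fix ideas. The following Python sketch (a toy illustration under the assumption of a matrix generator; scipy is used for the matrix exponential, neither of which is part of the theory above) computes S(t) = exp(tA) and checks the semigroup law (4.2.33) together with the recovery of the initial data at t = 0+.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))      # stands in for the (here: matrix) generator
    S = lambda t: expm(t * A)            # S(t) = exp(t A)

    t, s = 0.7, 1.3
    print(np.allclose(S(t + s), S(t) @ S(s)))   # semigroup law (4.2.33), True up to rounding
    print(np.allclose(S(0.0), np.eye(4)))       # S(0) U0 = U0: the initial data are attained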
Remark 4.2.24. It is also noteworthy that the mapping t 7! S.t / D R0 .t / exp.t A.A//; yielding for every U0 2 H1 .A/ a solution t 7! S.t / U0 of the classical initial value problem with initial data ı ˝ U0 , is in the present general context what we earlier called the (forward) fundamental solution. Indeed, .@0 A.A// S./ U0 D ı ˝ U0 for all U0 2 H1 .A/, or briefly .@0 A.A// S D ı ˝ ./; where ı ˝ ./ W H1 .A/ ! H1 .@0; C ; A/ denotes the mapping U0 7! ı ˝ U0 . Continuing the analogy, we can define S F WD L .i m0 C A.A//1 L F D .@0 A.A//1 F for F 2 H1 .@0; C ; A/. In the light of this remark, we shall refer to S as the (abstract) (forward/backward) fundamental solution of .@0 A.A// as well as the (forward/backward) semigroup associated with A.A/. For sufficiently well-behaved F , we obtain (in the forward evolutionary case) the classical Duhamel principle
$$(S\,F)(t) \;=\; \int_{\mathbb{R}} S(t-s)\,F(s)\,ds \;=\; \int_{-\infty}^{t} S(t-s)\,F(s)\,ds.$$
More specifically, we have the following result. Theorem 4.2.25. Let S be the forward fundamental solution associated with @0 A.A/ and let F 2 R0 .m0 / H1 .@0; C / ˝ H1 .A/. Then
$$(S\,F)(t) \;=\; \int_{0}^{t} S(t-s)\,F(s)\,ds.$$
Proof. By uniqueness of solution it suffices to see that the mapping W given by
$$t \mapsto \int_{0}^{t} S(t-s)\,F(s)\,ds$$
defines a solution of $(\partial_0 - A(A))\,U = F$. First we note that $W(0+) = 0$.
Furthermore, Z hW j ˝ wi;0 D Z D
R>0 1
Z
0
Z
0
D D
1
1 0
hW .t /jwi0 .t / exp.2 t / dt Z
t
Z
0
Z
s
hS.t s/ F .s/ jwi0 .t / exp.2 t / ds dt 1
hS.t s/ F .s/ jwi0 .t / exp.2 t / dt ds 1
0
hS.u/ F .s/ jwi0 .u C s/ exp.2 .u C s// du ds
and so we find by integration by parts (utilizing Corollary 4.2.16) that hW j.@0; C / ˝ w ˝ A .A / wi;0 Z 1Z 1 D hS.u/ F .s/ jA .A / wi0 .u C s/ exp.2 .u C s// du ds Z C
0
1
0
Z D
hS.u/ F .s/ jwi0 .@0; C / .u C s/ exp.2 .u C s// du ds
0
1
Z
1
0C 1 Z 1
0
0C
Z C
1
0
Z C
0
Z
0
hA.A/ S.u/ F .s/ j wi0 .u C s/ exp.2 .u C s// du ds h@0 S.u/ F .s/ jwi0 .u C s/ exp.2 .u C s// du ds
1
hF .s/ jwi0 .s/ exp.2 .s// ds
D hF j ˝ wi;0 : From this, however, we read off .@0 A.A// W D F
on R>0 :
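For readers who prefer a computational check of Theorem 4.2.25: in the matrix case the Duhamel integral can be evaluated by quadrature and compared with a direct numerical solution of the initial value problem. The following Python sketch is only an illustration; the matrix A, the source F and the use of scipy are assumptions made here, not data from the text.

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0], [-1.0, -0.1]])        # a sample (matrix) generator
    F = lambda t: np.array([np.sin(t), 0.0])         # a sample source term

    def duhamel(t, n=2000):
        """Evaluate int_0^t S(t-s) F(s) ds with S(r) = exp(r A), composite trapezoidal rule."""
        s = np.linspace(0.0, t, n)
        vals = np.array([expm((t - si) * A) @ F(si) for si in s])
        ds = s[1] - s[0]
        return ds * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

    # direct solution of u' = A u + F, u(0) = 0, for comparison
    sol = solve_ivp(lambda t, u: A @ u + F(t), (0.0, 5.0), [0.0, 0.0], rtol=1e-9, atol=1e-12)
    print(duhamel(5.0), sol.y[:, -1])                # both values agree up to quadrature error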
We shall now turn our attention to a substantial body of examples, which will not only serve to illustrate the application of our theoretical findings, but will also provide the opportunity to gain deeper insight into various specific problems occurring in the study of more complicated partial differential equations, such as boundary value problems and transmission problems.
Chapter 5
Some Evolution Equations of Mathematical Physics
5.1 Schrödinger Type Equations
We have considered Schrödinger type equations in free-space in the form
$$(\partial_0 - \mathrm{i}\,b\,\partial^2)\,u = f.$$
(5.1.1)
As a straightforward generalization to what one might call an abstract Schrödinger type operator we could consider @0 A; where A is a closed, densely defined, linear operator in a Hilbert space H where c0 > sup Re .A/ inf Re .A/ > c0 ;
(5.1.2)
for some c0 2 R>0 and for some s; k 2 N z 7! z s .z A/1 .c0 C A/k is uniformly H -bounded in iR Rc0 . Condition (5.1.2) ensures that the reversible character of the free-space Schrödinger operator is maintained in the abstract case. For physical reasons A skew-selfadjoint is the most appropriate case to investigate. In this case we have Re .A/ D ¹0º and z 7! .z A/1 is uniformly bounded in i R Rc0 for all c0 2 R>0 . Here, however, we will not pursue the abstract case, but instead we shall focus mainly on the case of the selfadjoint Laplace operator (and some of its perturbations) in a non-empty, open set Rn . More complex features of the spatial part of (5.1.1) such as variable coefficients and prescribed boundary data, will be considered later in connection with the heat equation, where these issues are more pertinent.
5.1.1 The Selfadjoint Laplace Operator Let Rn be an open, non-empty set and consider the linear operator M grad j V W CV 1 ./ L2 ./ ! L2 ./; C1 ./
kD1;:::;n
' 7! grad ' D .@i '/iD1;:::;n :
Recalling that a mapping from a subset of some Hilbert space S into another Hilbert space T can be considered as a subset of the direct sum S ˚ T and defining V WD grad j grad V
C1 ./
M
L2 ./ ˚
L2 ./
kD1;:::;n
V a linear operator? That is: Is the linear operator gradj the question arises: Is grad CV 1./ L V closable? This is the same as asking: Is grad L2 ./ ˚ kD1;:::;n L2 ./ rightV i D 1; 2. Then there are unique? To answer this question let .x; y .i/ / 2 grad, .i/
.i/
sequences ..xk ; yk //k , i D 1; 2, in grad j V
C1 ./
.i/
with
.i/
.xk ; yk /k ! .x; y .i/ / as k ! 1: V and By linearity we have .0; w/ WD .0; y .1/ y .2/ / 2 grad .1/
.2/
.1/
.2/
.uk ; grad uk / WD .xk xk ; yk yk /k ! .0; w/
as k ! 1:
L
V is a vanishes, then grad closed linear operator. By integration by parts we have (since uk 2 CV 1 ./) If we can show that w D .w1 ; : : : ; wn / 2 ^
kD1;:::;n L2 ./
h.ˆ1 ; : : : ˆn /jgrad uk iLkD1;:::;n L2 ./ (5.1.3)
ˆi 2CV 1 ./; iD1;:::;n
C hdiv.ˆ1 ; : : : ˆn /j uk iL2 ./ D 0 where div.ˆ1 ; : : : ˆn / WD
P
iD1;:::;n @i ˆi ,
the vector analytical divergence. Letting
k ! 1 in (5.1.3), we get for all ˆi 2 CV 1 ./, i D 1; : : : ; n, h.ˆ1 ; : : : ˆn /j.w1 ; : : : ; wn /iLkD1;:::;n L2 ./ D
n X
hˆi jwi iL2 ./ D 0:
iD1
P Since CV 1 ./ is dense in L2 ./, we get that niD1 hwi jwi iL2 ./ D 0 and so w D y .1/ y .2/ D 0. V of the closed, linear operator grad V equipped Thus, in particular, the domain D.grad/ V with the graph norm j jgrad V is the Hilbert space H1 .jgradj C i/. This Hilbert space (also known as the Sobolev space HV 1 ./) is used to generalize differentiation and the homogeneous Dirichlet boundary condition to arbitrary non-empty, open sets Rn .
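The integration-by-parts identity (5.1.3) that drives the closability argument has an exact discrete counterpart, which may serve as a sanity check. In the following Python sketch (a one-dimensional finite-difference toy model of my own, not the operators defined above) the discrete gradient G acts on node values vanishing at the boundary, the discrete divergence is taken to be -G^T, and the identity <Phi, G u> + <div Phi, u> = 0 then holds exactly.

    import numpy as np

    n = 50
    h = 1.0 / n
    # G maps the n-1 interior node values (boundary values fixed to zero, i.e. a discrete
    # homogeneous Dirichlet condition) to difference quotients on the n cells
    G = np.zeros((n, n - 1))
    for i in range(n - 1):
        G[i, i] += 1.0 / h
        G[i + 1, i] -= 1.0 / h
    div = -G.T                                       # discrete analogue of div = -(grad)^*

    rng = np.random.default_rng(5)
    u, Phi = rng.standard_normal(n - 1), rng.standard_normal(n)
    print(np.allclose(Phi @ (G @ u) + (div @ Phi) @ u, 0.0))    # True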
Similarly, we may consider the operator M M V 1 ./ div jL C W L2 ./ ! L2 ./; V kD1;:::;n
C1 ./
kD1;:::;n
kD1;:::;n
7! div D
X
@i i
kD1;:::;n
V As above, we find that which according to (5.1.3) is formally adjoint to grad. V WD div jL div
kD1;:::;n
CV 1 ./
V and grad V define is a well-defined closed linear operator. Moreover, the adjoints of div closed linear operators and so we define V ; grad WD .div/
V div WD .grad/
as Hilbert space realisations of the vector analytical operations gradient and divergence. By construction we have V grad; grad
V div : div
(5.1.4)
Since all these operators are closed, their domains are complex Hilbert spaces with respect to the graph norm for which we introduce particular notations V / WD H1 .jgradj V C i/; H.grad; V / WD H1 .jdivj V C i/; H.div; H.grad; / WD H1 .jgrad j C i/; H.div; / WD H1 .jdiv j C i/; where the noted dependence on the open set is occasionally useful, in particular if different such open sets are used. As a consequence of (5.1.4) we have the inclusions V / H.grad; /; H.grad;
V / H.div; /: H.div;
We note that V div grad;
V grad div
V and are of the form C C for a closed, densely defined operator C (here C D grad 2 C D grad) and therefore selfadjoint, non-negative operators in L ./. They can be V grad. V Defining considered as restrictions of div grad and extensions of div j V
C1 ./
WD div gradj V
C1 ./
V With we again have a closable linear operator whose closure we shall denote by . V WD ./
(5.1.5)
V div V grad V div grad V div grad ;
(5.1.6)
V div V grad V div V grad div grad :
(5.1.7)
we have
We shall concentrate on selfadjoint realizations of the negative Laplacian of the particV /. ular form C C . Let V be a closed subspace of H.grad; / containing H.grad; Then V models so-called enforced, homogeneous boundary conditions for div grad. With C D grad jV we get A WD 1i C C as a skew-selfadjoint operator. The subspace W D D.C / of H.div; / models boundary conditions imposed on div known as (homogeneous) natural boundary conditions or (homogeneous) free boundary conditions, since they are induced by those imposed on grad by restricting it to V . We note V / V H.grad; /; H.grad; V / W WD D..grad jV / / H.div; /: H.div; V / (homogeneous Dirichlet1 boundary As extreme cases we single out V D H.grad; V / condition or homogeneous boundary condition of first kind) and W D H.div; 2 (homogeneous Neumann boundary condition or homogeneous boundary condition of second kind). The homogeneous Neumann boundary condition is – in terms of the terminology introduced here – a natural boundary condition, whereas the homogeneous Dirichlet boundary condition is an enforced boundary condition. In all of these cases the resulting Schrödinger type evolution equation can be solved in the associated Sobolev lattice .Hj .@0; C; A1//j 2Z2 . More general situations can be considered conveniently as additive modifications (so-called perturbations) of A D 1i C C . The next section is devoted to a brief discussion of some of the issues involved.
1 In classical terms this boundary condition encodes “vanishing at the boundary”. Note, however, V / is a suitable generalization even for cases of a “rough” boundary P in that containment in H.grad; which evaluation at the boundary has no meaning. 2 This is a boundary condition for a generalized vector field encoding in classical terms the “vanishing V / is a suitable of the normal component” at the boundary. Note, however, that containment in H.div; P in which “normal component” and its evaluation generalization even for cases of a “rough” boundary at the boundary has no meaning.
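A quick matrix experiment may make the construction A = (1/i) C*C above more tangible. The Python sketch below (matrices standing in for the unbounded operators, an assumption made purely for illustration) confirms that such an A is skew-selfadjoint and that the resulting propagator exp(tA) is unitary, reflecting the reversible character of the Schrödinger type evolution.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(6)
    C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    A = (C.conj().T @ C) / 1j                     # A = (1/i) C* C

    print(np.allclose(A.conj().T, -A))            # A is skew-selfadjoint
    U = expm(2.5 * A)
    print(np.allclose(U.conj().T @ U, np.eye(4))) # exp(t A) is unitary (norm preserving)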
5.1.2 Some Perturbations

5.1.2.1 Bounded Perturbations
Depending on the properties of the perturbation B W D.B/ H ! H , the discussion of an operator A W D.A/ H ! H can assist in the analysis of an operator of the form A C B. The case of a bounded operator B W H ! H is particularly simple to handle. Although, for A we have the above selfadjoint Laplacian in mind, much of what we say here remains valid for a larger class of operators. We first note the following perturbation lemma. Lemma 5.1.1. Let A W D.A/ H ! H be a densely defined, closed, linear operator in Hilbert space H and B W H ! H be a bounded, linear operator. Then ACB is also a densely defined, closed, linear operator. Moreover, 1 ¹ 2 %.A/ j jBjH !H < j. A/1 jH !H º %.A C B/:
(5.1.8)
Proof. The closedness of A C B is clear from earlier results. It remains to show the inclusion (5.1.8). So, let 2 %.A/. Then . A/1 .1H B . A/1 /1 exists as a bounded operator by a Neumann series argument provided jB . A/1 jH !H < 1: This is the case if 1 jBjH !H < j. A/1 jH !H DW A . /
(5.1.9)
and the resulting operator is the inverse of . A B/: . A/1 .1H B . A/1 /1 D ..1H B . A/1 /. A//1 D . A B/1 which shows that 2 %.A C B/. This yields (5.1.8). For a skew-selfadjoint operator3 A, i.e. i A is selfadjoint,
A . / D dist. ; .A// dist. ; iR/ D jRe j 3
In general, for normal operators A we have for 2 %.A/ 1
A . / D j. A/1 jH !H D dist. ; .A//:
This observation can be deduced from the spectral integral representation of . A/1 .
(5.1.10)
and so application of Lemma 5.1.1 yields i R ˙ R>jBjH !H %.A C B/: Noting that in the skew-selfadjoint case we have j. A/1 jH !H
1 jRe j
(5.1.11)
we find j. A B/1 jH !H j. A/1 jH !H j.1H B . A/1 /1 jH !H
1 jRe j jBjH !H
for jRe j > jBjH !H . This shows that @0 A B remains invertibly evolutionary with regularity loss 0. The associated abstract Sobolev lattice .Hj .@0; C ; .A C B/ C C0 //j 2Z2 with C0 2 R, C0 > jBjH !H , provides the environment to solve the corresponding abstract Schrödinger equation. We summarize our findings in the following general theorem. Theorem 5.1.2. Let A W D.A/ H ! H be a skew-selfadjoint operator in Hilbert space H and B W H ! H be a bounded, linear operator. Then .@0 A B/ is reversibly evolutionary and the abstract Schrödinger equation .@0 A B/ u D f has a unique solution in the Sobolev lattice .Hj .@0; C ; .A C B/ C C0 //j 2Z2 for every f 2 H1 .@0; C ; .A C B/ C C0 /, 2 R, jj > C0 > jBjH !H . Proof. In the light of the above the result is clear. If we do not insist on the solution theory remaining valid for the whole Sobolev lattice, then we can handle even time-dependent perturbations by the same Neumann series argument used above. Theorem 5.1.3. Let A W D.A/ H ! H be a skew-selfadjoint operator in Hilbert a space H and let B W CV 1 .R/ ˝ V0 ! W0 be linear, where V0 is dense in H and W0 H0 .@0; C / ˝ H for all sufficiently large 2 R>0 . Moreover, let B be causal in the sense that inf supp0 B' inf supp0 '
for all ' 2 CV 1 .R/ ˝ V0 and jB'j;0 C j'j;0 a
for some C 2 R>0 and all ' 2 CV 1 .R/ ˝V0 and all sufficiently large 2 R>0 . Then, for 2 R>0 sufficiently large the abstract Schrödinger equation .@0 A B/ u D f C ı ˝ u0 ;
(5.1.12)
where B denotes the closure of B in .H0 .@0; C / ˝ H / ˚ .H0 .@0; C / ˝ H /, has a unique solution u in R>0 .m0 /H.0;0/ .@0; C ; A 1/ for every given data f 2 R>0 .m0 /H.0;0/ .@0; C ; A 1/ and u0 2 H D H0 .A 1/. Proof. From (5.1.11) we see that j.i C A/1 jH !H
1
for all 2 R. Consequently we find4 j.@0 A/1 jH0 .@0; C/˝H !H0 .@0; C/˝H
1 :
With the closure B of B as an operator in H0 .@0; C / ˝ H we have that (5.1.12) is equivalent to .1H0 .@0; C/˝H .@0 A/1 B/ u D .@0 A/1 f C .@0 A/1 ı ˝ u0 : Since 1 jBjH0 .@0; C/˝H !H0 .@0; C/˝H 1 C;
j.@0 A/1 BjH0 .@0; C/˝H !H0 .@0; C/˝H
the operator on the left-hand side can be inverted in H0 .@0; C / ˝ H by a Neumann series provided > C . The unique solvability thus follows if we can show that the right-hand side is in H0 .@0; C / ˝ H . This is, however, clear for the first term .@0 A/1 f . So, let us consider the second term .@0 A/1 ı ˝ u0 4
This can also be seen directly by noting that Rehuj.@0 A/ui;0;0 D Rehuj@0 ui;0;0 D hujui;0;0
for all u 2 H.1;1/ .@0; C ; A 1/.
which is the solution of the initial value problem .@0 A/ v D 0 with initial condition v.0C/ D u0 : Application of the Fourier–Laplace transform yields 1 j. i m0 C A/1 1 ˝ u0 j20;0 2 Z 1 D j. i C A/1 u0 j20 d : 2 R
j.@0 A/1 ı ˝ u0 j2;0 D
Utilizing the spectral theorem for the selfadjoint operator 1i A with spectral family … we obtain Z Z 1 j.@0 A/1 ı ˝ u0 j2;0 D j. i C i /1 j20 d hu0 j…./u0 i d 2 R R Z Z 1 j. i C i /1 j2 d d hu0 j…./u0 i D 2 R R Z Z 1 D j. i t C /1 j2 dt d hu0 j…./u0 i 2 R R Z 1 1 D dt ju0 j20 2 2 R t C 2 1 ju0 j20 : D 2 Thus, .@0 A/1 ı ˝ u0 2 H0 .@0; C / ˝ H and the Neumann series argument is applicable giving the solution of (5.1.12) as u D .1H0 .@0; C/˝H .@0 A/1 B/1 ..@0 A/1 f C .@0 A/1 ı ˝ u0 /: Noting that .@0 A/1 and B are causal, it is not hard to see that the operator .1H0 .@0; C/˝H .@0 A/1 B/1 is also causal (being given by a Neumann series of causal operators). Remark 5.1.4. In the context of the study of Schrödinger operators one is particularly interested in the skew-selfadjoint case for which B must be assumed to be skewsymmetric.
5.1.2.2 Relatively Bounded Perturbations (the Coulomb Potential)
Unbounded perturbations are more difficult to investigate. For example, the sum of two selfadjoint operators need not be (essentially) selfadjoint. We shall only consider a particular case of interest in connection with perturbations of Schrödinger equations, the so-called Coulomb potential V .m/ D 1=jmj, initially defined on CV 1 .Rn n ¹0º/ as an unbounded operator in L2 ./. The operator A WD i div grad V is given in terms of the Laplacian div grad V with the homogeneous Dirichlet condition on the boundary of the open set Rn . Other boundary conditions of the form discussed above can be dealt with similarly. If 0 2 Rn n , the “potential” V .m/ would be a bounded perturbation, so we shall assume 0 2 . For c 2 R, the symmetry of C c V .m/ with domain CV 1 . n ¹0º/ is clear. The solution theory associated with the (formal) Schrödinger type operator @0 C i . C c V .m// can be based on the question of essential selfadjointness or selfadjoint extendibility of C c V .m/. Although the natural setting would be n D 3, let us keep n 2 N>2 general, since the mathematical arguments used will work the same way. We first note that V .m/ is actually well-defined on CV 1 ./, since
$$x \mapsto \frac{1}{|x|}\,\psi(x)$$
defines an element in L2 ./ for every $\psi$ in CV 1 ./, as can be seen by introducing polar coordinates:
$$\int_{\mathbb{R}^n} \Big|\frac{1}{|x|}\,\psi(x)\Big|^{2}\, dx \;=\; \int_{0}^{\infty}\!\int_{x_0\in S^{n-1}} \frac{1}{r^{2}}\,|\psi(r\,x_0)|^{2}\, r^{\,n-1}\, dV_{x_0}\, dr \;=\; \int_{0}^{\infty}\!\int_{x_0\in S^{n-1}} |\psi(r\,x_0)|^{2}\, r^{\,n-3}\, dV_{x_0}\, dr.$$
Here d Vx0 denotes the volume element of the .n 1/-dimensional unit sphere S n1 , x0 2 S n1 . Due to the symmetry of V .m/, we see that the closure V .m/ is a V well-defined closed operator. The Poincaré inequality now shows that D.grad/ D.V .m// and 2 V j0 jgrad jV .m/ j0 n2 V V is continuously embedded for all 2 D.grad/. In particular, we have that H.grad/ in H.V .m//, where in general, for a closed, linear operator C in a Hilbert space, we define H.C / as the Hilbert space obtained from D.C / with the graph norm j jC WD
q j j20 C jC j20
for 2 D.C /. As before we sometimes choose for sake of clarity to indicate the deV /, H.grad; / etc. Since pendence on the underlying domain and write H.grad; 1=2 V D H.grad; V / D H1 .jgradj V i / and jAj V H.grad/ D jgradj, we find that V V j0 D jjgradj j jgrad 0 D jjAj1=2 j20
p 1 D h jA i0 j j0 jA j0 D 2 . " jA j0 / p j j0 2 " 1 "jA j20 C j j2 4" 0 V i /, i.e. 2 D.A/. Thus, renaming for all " 2 R>0 and all 2 H2 .jgradj again as ", we get jc V .m/ j20
4c 2 " .n2/2
4c 2 4 c4 V j0 "jA j20 C j j2 j grad .n 2/2 .n 2/4 " 0
V i /. This observation leads to the general for all " 2 R>0 and all 2 H2 .jgradj concept of what is known as a perturbation relatively bounded with respect to A. Definition 5.1.5. Let B W D.B/ H ! H and A W D.A/ H ! H be closed, linear operators with D.A/ D.B/. Then the operator B is called a relatively bounded operator with respect to A (briefly A-bounded) if the canonical embedding of H.A/ ,! H.B/ is continuous. In this case there must be a constant C0 2 R>0 such that j j20 C jB j20 C0 .jA j20 C j j20 / for all 2 D.A/. Introducing an additional degree of freedom, we can require that there are constants C1 ; C2 2 R>0 such that jB j0 C1 jA j0 C C2 j j0 holds for all 2 D.A/. The point of this variation lies in the concept of the A-bound of such a B defined by ˇ _ ° ± ^ ˇ jB j0 C1 jA j0 C C2 j j0 : jBj.A/ WD inf C1 2 R>0 ˇ C2 2R>0 2D.A/
For an operator B with D.A/ D.B/ there is no problem defining A C B with D.A C B/ WD D.A/. It is, however, not clear if this sum is a closed operator. To answer this question we record the following result.
Lemma 5.1.6. Let A W D.A/ H ! H and B W D.B/ H ! H be closed, linear operators with D.A/ D.B/ such that B is A-bounded with jBj.A/ < 1. Then A C B is closed. Proof. Let ..A C B/ k /k2N and . k /k2N be convergent in H . We need to show that lim .A C B/ k D .A C B/ lim k :
k!1
k!1
Since j.A C B/ j0 jA j0 C jB j0 .C1 C 1/ jA j0 C C2 j j0 and jA j0 j.A C B/ j0 C jB j0 j.A C B/ j0 C C1 jA j0 C C2 j j0 for all 2 D.A/, we get .1 C1 / jA j0 C j j0 j.A C B/ j0 C .C2 C 1/ j j0 .C1 C 1/ jA j0 C .2 C2 C 1/ j j0 for all 2 D.A/. Observing that, since jBj.A/ < 1, we can choose C1 < 1, we see that the graph norms of A and A C B are equivalent. Therefore, since A is closed, we have that limk!1 A k D A limk!1 k . Moreover, by A-boundedness, limk!1 B k D B limk!1 k and, by uniqueness of limits, limk!1 .ACB/ k D A limk!1 k C B limk!1 k . As an example, consider the above situation where we clearly have jc V .m/j.A/ D 0 V Thus, we have in particular that where A WD i div grad. i A C c V .m/ is a closed symmetric operator. We want to establish that i A C c V .m/ is selfadjoint. For this it suffices to show that c0 2 %.iA C c V .m// for some sufficiently large 2 c0 2 R>0 . We estimate with " D .n2/ and using the Poincaré inequality 8 h j.i A C c V .m// i0 D h ji A i0 C c h jV .m/ i0 V jgrad V i0 C c h jV .m/ i0 D hgrad V j20 jcj j j0 jV .m/ j0 jgrad
V j20 " jV .m/ j20 1 jcj2 j j0 jgrad 4" 2 1 V 2 c jgrad j20 j j20 2 .n 2/2
2 c2 j j20 .n 2/2 2
2c for 2 D.A/. Therefore, if Re < .n2/ 2 then … P .i ACc V .m//[C .i AC
c V .m// since for those 2 C injectivity and bounded invertibility can be read off from the above estimate. It remains to exclude the possibility of residual spectrum. But from the above we also see that V V i0 C c h jV .m/ i0 . ; / 7! hgrad jgrad V defines a symmetric, sesqui-linear form on H.grad/, indeed, the sesqui-linear form 2 c2 V V . ; / 7! hgrad jgrad i0 C c h jV .m/ i0 C 1 C h j i0 .n 2/2 V generating an equivalent norm. Thus, the surjectivity is an inner product for H.grad/ 2 c2 of i ACc V .m/C.1C .n2/2 / also follows by simply applying the Riesz representation theorem. In summary we have the following solution theory. V in Hilbert space L2 ./ and V .m/ as Theorem 5.1.7. Consider A D i div grad above. Then i A C c V .m/, c 2 R, is selfadjoint, .@0 A C i c V .m// is reversibly evolutionary and the abstract Schrödinger equation .@0 A C i c V .m// u D f has a unique solution in the Sobolev lattice .Hj .@0; C ; .i A C c V .m// i //j 2Z2 for every f 2 H1 .@0; C ; .i A C c V .m// i /, 2 R n ¹0º.
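The key quantitative ingredient above is the Hardy type bound |V(m) phi|_0 <= (2/(n-2)) |grad phi|_0. For a single radial test function in n = 3 this can be checked by elementary numerical quadrature; the following Python sketch (the Gaussian profile is my own choice of test function, not taken from the text) compares the two sides after the common angular factor 4*pi has been cancelled.

    import numpy as np
    from scipy.integrate import quad

    phi = lambda r: np.exp(-r**2)                 # a smooth, rapidly decaying radial profile
    dphi = lambda r: -2.0 * r * np.exp(-r**2)     # its radial derivative

    # for radial functions in R^3:  |phi/|x||^2 integrates phi(r)^2 (the r^2 from the
    # volume element cancels 1/r^2), |grad phi|^2 integrates phi'(r)^2 * r^2
    lhs, _ = quad(lambda r: phi(r) ** 2, 0.0, np.inf)
    rhs, _ = quad(lambda r: dphi(r) ** 2 * r ** 2, 0.0, np.inf)
    print(np.sqrt(lhs) <= 2.0 / (3 - 2) * np.sqrt(rhs))   # True: about 0.79 <= 1.37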
5.2 Heat Equation
The heat equation (also known as diffusion equation) we considered in Rn was associated with the differential expression @0 b @2 : The type of generalization one might be tempted to consider here is to replace again b @2 by a general operator A W D.A/ H ! H such that @0 C A is irreversibly evolutionary. Again we shall be mostly concerned with a more specific choice of A based on suitable restrictions of operators such as div grad and variants thereof. Topics which shall be discussed in this context are
I variable coefficients I inhomogeneous boundary data I mixed type boundary conditions I regularity of solution. We shall focus initially again on the selfadjoint case.
5.2.1 The Selfadjoint Operator Case The heat equation describes the evolution of a temperature distribution in a region given as an open set in R3 and is typically of the form %0 c P D div q C Q;
q D grad ;
where is the temperature, P its time-derivative, q denotes the so-called heat flux, Q models the influence of given heat sources, c denotes the specific heat, %0 is mass density and denotes the thermal conductivity tensor describing the thermal properties of the underlying medium. The second equality is known as Fourier’s law. These equations result in the formal evolution equation5 %0 c P D div. grad / C Q; This model equation naturally raises the issue of variable coefficients, since one might expect mass density, specific heat and thermal conductivity to vary with location. Our initial focus on the selfadjoint case imposes serious constraints on the material properties that can be dealt with. On the other hand, it will turn out that in other respects the material properties can be quite general. Allowing for variable coefficients is surprisingly simple to accomplish. Since it does not affect the mathematical reasoning presented, we may assume open in Rn with n 2 N>0 arbitrary. Consider a bounded, symmetric mapping "W
n1 M kD0
2
L ./ !
n1 M
L2 ./:
kD0
For sake of clarity we have here explicitly indicated the number of L2 -type component spaces. Soon we shall write again simply " W L2 ./ ! L2 ./ leaving the context 5
Alternatively, we could consider the associated PDAE (partial differential-algebraic equation) %0 c @0 div Q D grad 1 q 0
directly, compare Section 6.3.1. For sake of simplicity and familiarity, however, we are focussing here on the evolution equation form of heat transfer.
to determine the number of component spaces intended. In addition we shall assume that " is (strictly) positive definite, i.e. that there is a constant "0 2 R>0 such that h j" i0 "0 h j i0
for all 2
n1 M
L2 ./:
kD0 2 ./ is a bounded, symmetric, Similarly we shall assume that W L2 ./ ! LL n1 2 (strictly) positive definite operator. Considering now kD0 L ./ with the new inner 1 1 1=2 product . ; / 7! h j" i0 D h" j i0 D h" j"1=2 i0 DW h j i"1 ;0 , Ln1 2 suggestively denoted by "1=2 Œ kD0 L ./ or (dropping again the reference to the number of component spaces) simply "1=2 ŒL2 ./, and similarly L2 ./ with the new inner product
. ; / 7! h j 1 i0 D h 1
j i0 D h 1=2
j 1=2 i0 DW h j i 1 ;0 ;
denoted by 1=2 ŒL2 ./, we define6 div W D.div/ "1=2 ŒL2 ./ ! 1=2 ŒL2 ./; ˆ 7! div ˆ; and similarly " grad W D.grad/ 1=2 ŒL2 ./ ! "1=2 ŒL2 ./; 7! " grad : We also define quite naturally V WD " grad j " grad V D.grad/ and V WD div j div V : D.div/ Lemma 5.2.1. We have V D . div/ " grad and V D ." grad/ : div Ln1 2 Ln1 2 Note that as sets "1=2 Œ kD0 L ./ and kD0 L ./ are equal; the respective norms are equivalent due to the assumptions imposed on ". Analogously, 1=2 ŒL2 ./ and L2 ./ are likewise equal as sets and the corresponding norms are equivalent. 6
Proof. The results follow by straightforward calculation. The cases being analogous, we only consider the first. Let 2 D.. div/ /, i.e. h div ‰j i 1 ;0 D h‰j. div/ i"1 ;0 for all ‰ 2 D. div/ D D.div/. On the other hand hdiv ‰j i0 D h div ‰j i 1 ;0 D h‰j. div/ i"1 ;0 D h‰j"1 . div/ i0 V in L2 ./, we read off that 2 D.grad/ V and for all ‰ 2 D.div/. Since div D grad V D "1 . div/ ; grad V . div/ . Since the argument can be reversed, equality which shows " grad follows. So in our weighted L2 ./-structure, we have again that V div " grad is of the form C C and therefore selfadjoint and non-negative. Of course, the operator V " grad is also of this form and so are all operators7 div div jW " grad jV where the subspaces V; W are the same associated subspaces as for the case " D 1, D 1 (compare the previous discussion of Schrödinger type equations). The above particular construction is clearly a general feature, which for deeper insight and later reference will be recorded in abstract terms. Lemma 5.2.2. Let A W D.A/ H0 ! H1 be densely defined and closed, linear operator, where Hi is a Hilbert space with inner product hjii and "i W Hi ! Hi is bounded, symmetric and positive definite, i D 0; 1. Then Hi equipped with the weighted inner product .u; v/ 7! huj"1 i viHi is also a Hilbert space denoted by 1=2 "i ŒHi , i D 0; 1. Moreover, 1=2
1=2
"1 A W D.A/ "0 ŒH0 ! "1 ŒH1 ; x 7! "1 Ax 7
It is most remarkable that these operators are selfadjoint without assuming any additional regularity of ". In classical terms one would need of course at least some kind of differentiability on " in order to make sense of div " grad. In the generalized way div " grad is defined here taking ", to be (multiplicative) discontinuous coefficients would allow us to include so-called transmission problems, where ", have jump discontinuities across certain separating interfaces (modeling composites of different media). We shall address this issue in more detail later.
is also densely defined and closed, and for 1=2
1=2
"0 A W D.A / "1 ŒH1 ! "0 ŒH0 ; x 7! "0 A x we have ."1 A/ D "0 A : Proof. That "1 A is closed and densely defined is clear. We calculate its adjoint. For u 2 D.."1 A/ / we must have 1 h"1 Axj"1 1 uiH1 D hxj"0 ."1 A/ uiH0
and so hAxjuiH1 D hxj"1 0 ."1 A/ uiH0
for all x 2 D."1 A/ D D.A/. We read off that u 2 D.A / and A u D "1 0 ."1 A/ :
Conversely, if u 2 D.A / we can see that also u 2 D.."1 A/ /. Corollary 5.2.3. Let A W D.A/ H0 ! H1 be a densely defined and closed, linear operator, where Hi is a Hilbert space with inner product hjii and "i W Hi ! Hi is bounded, symmetric and positive definite, i D 0; 1. Then "0 A "1 A 1=2
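In finite dimensions Lemma 5.2.2 is a short computation with weighted inner products, which the following Python sketch makes explicit (real random matrices are used as stand-ins, so A* is simply the transpose; this is an illustrative assumption, not the general Hilbert space setting of the lemma).

    import numpy as np

    rng = np.random.default_rng(2)
    def spd(n):                                   # a random symmetric, positive definite weight
        M = rng.standard_normal((n, n))
        return M @ M.T + n * np.eye(n)

    A = rng.standard_normal((3, 3))
    eps0, eps1 = spd(3), spd(3)

    # adjoint of T = eps1 A with respect to the inner products <x, eps_i^{-1} y>:
    # <T x, eps1^{-1} y> = <x, eps0^{-1} T_adj y>  forces  T_adj = eps0 T^T eps1^{-1}
    T = eps1 @ A
    T_adj = eps0 @ T.T @ np.linalg.inv(eps1)
    print(np.allclose(T_adj, eps0 @ A.T))         # i.e. (eps1 A)* = eps0 A*, as in Lemma 5.2.2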
is selfadjoint in "0 ŒH0 and non-negative. Proof. The result follows from the previous lemma, since every operator of the form C C with C closed and densely defined is selfadjoint and non-negative. Corollary 5.2.4. Let H be a Hilbert space with inner product h j i and let A W D.A/ H ! H be selfadjoint and " W H ! H be bounded and positive definite. Then "A is selfadjoint in "1=2 ŒH , i.e. with respect to the inner product h j " i. Proof. The result is a rather special case of the previous lemma. Let us return to the particular operator under consideration. For sake of brevity let us restrict our attention to the Neumann type boundary condition8 , i.e. V " grad : A WD div In the light of our abstract considerations, we obtain a corresponding solvability result. 8 The Neumann type boundary condition " grad 2 D.div/ V corresponds in classical terms to ‘n P " grad D 0 on ’.
V " grad be defined in 1=2 ŒL2 ./ as above. Then Theorem 5.2.5. Let A WD div .@0 C A/ is forward evolutionary and the evolution equation .@0 C A/ u D f has for arbitrary 2 R>0 a unique solution u 2 H1 .@0; C ; j" grad j i / for every given f 2 H1 .@0; C ; j" grad j i / and the solution depends continuously on the data f . More specifically, for every given f 2 H.;/ .@0; C ; j" grad j C i /, ; 2 Z, there exists a unique solution u 2 H.;C2/ .@0; C ; j" grad j C i / \ H.C1;/ .@0; C ; j" grad j C i /: Moreover, the solution depends continuously on the data f in the sense that 1 juj;.;C2/ C juj;.C1;/ 1 C jf j;.;/ : min¹1; º Proof. The solution theory is just a summary of our previous findings, see Lemma 4.2.23. It remains to show the continuous dependence estimate. We may assume without loss of generality that D D 0. Since u D .@0; C A C /1 f; we need to confirm the boundedness of p p .A C 1/.@0; C A C /1 D . A i/. A C i/.@0; C A C /1 and .@0; C /.@0; C A C /1 in H0 .@0; C / ˝ 1=2 ŒL2 ./ to show the claimed regularity gain. This, however, can be seen by applying the spectral theorem for the non-negative, selfadjoint operator A (with … as spectral family). We obtain Z j C 1j2 j.A C 1/.i C A C /1 wj20 d j…./wj20 2 C 2 j C j R0
1 jwj20 min¹1; 2 º
for every w 2 H0 D 1=2 ŒL2 ./. Utilizing the temporal Fourier–Laplace transform L , we obtain Z 1 2 j.A C 1/.i C A C /1 L f . /j20 d j.A C 1/.@0; C A C / f j;0 D R Z 1 jL f . /j20 d min¹1; 2 º R 1 D jf j2;0 : min¹1; 2 º
Hence j.@0; C A C /1 f j;0;2 D j.A C 1/.@0; C A C /1 f j;0
1 jf j;0 : min¹1; º
Similarly, for the operator .@0; C /.@0; C A C /1 we find j.@0; C A C /1 f j2;1;0 D j.@0; C /.@0; C A C /1 f j2;0 Z D j.i C /.i C A C /1 L f . /j20 d R Z Z D j.i C /.i C C /1 j2 d j…./L f . /j20 d R
R0
Z Z D Z
R
R
R0
2 C 2 d j…./L f . /j20 d 2 C . C /2
jL f . /j20 d D jf j2;0 :
Thus, we have 1
j.@0; C A C /
1
f j;1;0 C j.@0; C A C /
f j;0;2
1C
1 jf j;0 min¹1; º
and the desired result follows. Noting that the proof of this result is completely abstract we record the following general by-product. Corollary 5.2.6. Let A be a selfadjoint operator in H0 . Then .@0 C A2 / is forward evolutionary and the evolution equation .@0 C A2 / u D f has for arbitrary 2 R>0 a unique solution u 2 H1 .@0; C ; A C i / for every given f 2 H1 .@0; C ; A C i / and the solution depends continuously on the data f . More specifically, for every given f 2 H.;/ .@0; C ; A C i /, ; 2 Z, a unique solution u 2 H.;C2/ .@0; C ; A C i / \ H.C1;/ .@0; C ; A C i / exists and the solution depends continuously on the data f in the sense that 1 juj;.;C2/ C juj;.C1;/ 1 C jf j;.;/ : min¹1; º Furthermore, we have 1 1 juj;.;C1/ p p jf j;.;/ : min¹1; º
(5.2.13)
In particular, if the right-hand side f D ı ˝ u0 for some u0 2 H .A C i/, then p
juj;.0;C1/
p ju0 j : min¹1; º
Proof. The result is mostly just a summary of our findings in the previous theorem. Only the last estimates have to be shown. We only need to consider the case D 0, D 0. To show the claimed regularity gain we need to find a bound for .A C i/.@0; C A2 C /1 : This, however, can be seen as before by applying the spectral theorem for the selfadjoint operator A (with … as spectral family). We obtain with a bit of calculus9 1 1 2 C 1 max ; . 2 C /2 2 and so Z 2
1
j.A C i/.i C A C /
wj20
Z
R0
2 C 1 d j…./wj20 . 2 C /2 C 2
2 C 1 d j…./wj20 4 2 2 R0 C 2 C 1 1 max 2 ; jwj20
9 On R the function z 7! z1 D 2. 1/ of the derivative
z.1/ z2
z 7!
D
1 z
.1/ z2
has possible maxima at z0 D and at the zero
1 2. 1/ 1 2. 1/ C D 1 : z z2 z3 z2
At z D the value is
1 2
and at z D 2. 1/ the value is i.e. if 2. Thus, observing
1 . 4.1/
The latter value, however, is only admissible if 2. 1/ ,
1 1 1 4. 1/ 2 we get
² max
for 2 R2
ˇ ³ ³ ² 1 1 z . 1/ ˇˇ z 2 R max ; : ˇ z2 2
for every w 2 H0 . From this we find j.@0; C A2 C /1 f j2;.0;1/ D j.A C i/.@0; C A2 C /1 f j2;.0;0/ Z D j.A C i/.i C A2 C /1 L f . /j20 d R Z 1 1 max 2 ; jL f . /j20 d R 1 1 max 2 ; jf j2;0 : Thus, we have the desired estimate (5.2.13). The last inequality follows from the same technical ideas. We find j.@0; C A2 C /1 ı ˝ u0 j2;.0;1/ D j.A C i/.@0; C A2 C /1 ı ˝ u0 j2;.0;0/ D j.A C i/.i m0 C A2 C /1 u0 j20;.0;0/ Z Z 2 C 1 D d j…./u0 j20 d 2 C /2 C 2 . R R0 Z Z 1 . 2 C 1/ d d j…./u0 j20 D 2 2 2 R0 R . C / C Z 2 C 1 D d j…./u0 j20 2C R0
1 ju0 j20 : min¹1; º
This confirms our last estimate. We shall now address the issue of inhomogeneous boundary data in the Dirichlet and Neumann case.

5.2.1.1 Prescribed Dirichlet and Neumann Boundary Data
Recall that we have generalized the idea of u ‘vanishing at the boundary’ to mean q q V V C 1/ D H.grad; V /: u 2 H1 . div " grad i/ D H1 . div " grad If we want to say that u equals g on the boundary of it seems natural to state that V /; u g 2 H.grad;
p where g 2 H1 . jgrad j2 C 1/ D H.grad; /. Therefore we impose the generalized inhomogeneous Dirichlet boundary condition in the form q V C 1/ D H1 .j" gradj V C i/ D H.grad; V /: u g 2 H1 . div " grad For the inhomogeneous Neumann boundary condition we formulate analogously q V C 1/ D H1 .j divj V C i/ D H.div; V / " grad u ‰ 2 H1 . " grad div q V div C1/ D H.div; /. for ‰ 2 H1 . " grad Let us first consider the situation that div " grad u C u D f 2 L2 ./
(5.2.14)
V / u g 2 H.grad;
(5.2.15)
and with g 2 H.grad; /. Here (5.2.14) generalizes classical differentiation by application of the second order differential expression div " grad. The operator div " grad has in general too large a spectrum. Therefore, we will V try to make the transition to the selfadjoint operator div " grad. The following q V C 1/. We first note that calculations will be done in H1 . div " grad q V C 1/: div " grad g 2 H1 . div " grad To see this we only have to show that – following the spirit of our construction of chains of Hilbert spaces – q V C 1/: divŒL2 ./ H1 . div " grad For ‰ 2 L2 ./ we have (by definition) h j div ‰i ;0 WD h" grad j‰i";0 q V C 1/. The Cauchy–Schwarz inequality now yields for all 2 H1 . div " grad that jh div ‰j i ;0 j D jh‰j" grad i";0 j j‰j";0 j" grad j";0 q j‰j";0 j" grad j2";0 C j j2 ;0 ;
which shows that q V C 1/: div ‰ 2 H1 . div " grad Thus, we have in particular q V C 1/ div " grad g 2 H1 . div " grad and div " grad u C div " grad g D f C div " grad g or div " grad.u g/ D f C div " grad g q V C 1/. Because of boundary condition (5.2.15), as an equality in H1 . div " grad this is equivalent to V div " grad.u g/ D f C div " grad g or V u D f C div " grad g div " grad V g: div " grad The original problem (5.2.14), (5.2.15) is now translated into an operator equation V involving the selfadjoint operator A WD div " grad. The inhomogeneity in the boundary condition appears here as an additional source term (compare with initial value problems!). Remark 5.2.7. It may be somewhat irritating to realize that many Dirichlet data g V / lead to the same boundary condition. In fact, any element taken from gCŒH.grad; yields the same boundary condition, since V D0 div " grad div " grad
(5.2.16)
V /. This observation suggests that the boundary data may be for all 2 H.grad; V /, which in the considered as elements of the quotient space H.grad; /=H.grad; context of Hilbert space can be identified with V / D N. div " grad C1/: H.grad; / H.grad; This difference is indeed orthogonal in the sense of the inner product . ; / 7! h" grad j" grad i"1 ;0 C h j i 1 ;0 D hgrad j" grad i0 C h j 1 i0 :
V /. This can Conversely, any 2 H.grad; / satisfying (5.2.16) must be in H.grad; be seen from the latter decomposition. Let 2 H.grad; /. Then there is a unique V / such that 0 2 N. div " grad C1/, i.e.
0 2 H.grad; div " grad. 0 / C . 0 / D 0: With (5.2.16) we see that V 0 D div " grad. 0 / div " grad. 0 / V D . 0 / div " grad. 0 / and so, since
q V C1D div " grad
2
V C1 div " grad
is a composition of unitary mappings, we find as desired V /:
D 0 2 H.grad; Henceforth, we shall, however, not be particularly troubled by taking any representer V / rather than the particular of the corresponding equivalence class C ŒH.grad; representer in N. div " grad C1/ and will continue to refer to those as Dirichlet data. V / yields the same Similarly we observe that any element taken from ‰ C H.div; Neumann type boundary condition, since V ‰D0 div ‰ div
(5.2.17)
V /. Conversely, any ‰ 2 H.div; / satisfying (5.2.17) must be for all ‰ 2 H.div; V /. This observation suggests that Neumann boundary data may be considin H.div; V /, which in the context of ered as elements of the quotient space H.div; /=H.div; Hilbert space can similarly be identified with V / D N." grad div C1/: H.div; / H.div; Also in this case, however, we shall refer to representers of such equivalence classes as Neumann data. Carrying our observations heuristically over to the time-dependent case, we are led to the following formulation of an abstract initial boundary value problem with Dirichlet boundary condition. Definition 5.2.8. Let V u D f C ı ˝ u0 C . div " grad g div " grad V g/ (5.2.18) .@0 div " grad/
for f 2 R0 .m0 / ŒH0 .@0; C / ˝ L2 ./; u0 2 L2 ./ and g 2 R0 .m0 / ŒH0 .@0; C / ˝ H.grad; /: Then (5.2.18) is called the abstract Dirichlet initial boundary value problem with (volume) source term f , initial data u0 and Dirichlet data g. Note that this problem lies well within the range of our solution theory. The rightV C i /. hand side is in H.1;2/ .@0; C ; j" gradj Similarly, we may take our orientation for the Neumann boundary condition from the time-independent equation (5.2.14) with the generalized boundary condition V / " grad u ‰ 2 H.div;
(5.2.19)
for ‰ 2 H.div; /. Now, div ‰ 2 L2 ./ and so div." grad u ‰/ D f C div ‰: Noting (5.2.19) we obtain V " grad u D f C div ‰ div V ‰: div Again we note that the Neumann type boundary condition is translated into an additional source term. Thus, we are motivated to give the following definition for the time-dependent case. Definition 5.2.9. Let V " grad/ u D f C ı ˝ u0 C . div ‰ div V ‰/ .@0 div
(5.2.20)
for f 2 R0 .m0 / ŒH0 .@0; C / ˝ L2 ./, u0 2 L2 ./ and ‰ 2 R0 .m0 / ŒH0 .@0; C / ˝ H.div; /: Then (5.2.20) is called the abstract Neumann initial boundary value problem with (volume) source term f , initial data u0 and Neumann data ‰. Here we see that the right-hand side is in H.1;1/ .@0; C; j" gradjCi /. Before we continue, let us briefly investigate the meaning of the additional source term generated by the Dirichlet and Neumann data, respectively, in case of smooth boundary and data (up to the boundary). We also assume that " D ".m/ is given as a smooth (spatial)
multiplication operator. Since no time-differentiation is involved in the boundary term V ‰/, we may for sake of notational simplicity consider only the time. div ‰ div independent situation. In general, the term V g/ W WD . div " grad g div " grad V C i/. We find is only in H2 .j" gradj V V gi"1 ;0 j" grad g " grad h jW i 1 ;0 D h" grad V V D h" grad j" grad gi"1 ;0 h div " grad jgi 1 ;0 V V D h" grad jgrad gi0 hdiv " grad jgi0 V C i/. The last expression is in classical terms equal to the for all 2 H2 .j" gradj boundary integral Z V n.x/ ".x/ grad .x/ g.x/ dSx ; @
where n denotes the exterior normal to @, and so represents (utilizing the distribution ı@ ) V V h" grad jn ı@ gi0 D h" grad j" n ı@ gi"1 ;0 D h j div." n ı@ g/i 1 ;0 for all
V C i/. Thus, we have found that in ‘classical’ terms 2 H2 .j" gradj V g D div." n ı@ g/; W D div " grad g div " grad
i.e. the boundary term W is in this case given as a double or dipole layer. In the general case, we may take this observation to define the complex symbol term div." n ı@ g/ by W although there may not be a sufficiently smooth boundary to define a normal vector field and the distribution ı@ separately. Similarly, in the Neumann case, for V ‰; X WD div ‰ div we find that h jX i 1 ;0 D h j div ‰i 1 ;0 C h" grad j‰i"1 ;0 D h jdiv ‰i0 C hgrad j‰i0 ;
for all
2 H.grad; / which corresponds to the classical boundary integral Z .x/ n.x/ ‰.x/ dSx ; @
if
is also assumed to be smooth up to the boundary. So we have V ‰ D n ‰ ı@ ; div ‰ div
i.e. Neumann data generalize a single or monopole layer distribution. The corresponding classical Neumann boundary condition is n " grad u D n ‰ on @ assuming all data and the boundary of to be sufficiently smooth (n the exterior normal vector field to @). In the general case, we may likewise take this observation to define the complex symbol term n ‰ ı@ by X although there may not be a sufficiently smooth boundary to have a normal field n anywhere on @. Returning to our main track of reasoning, we shall now try to answer the question in which sense solutions of the abstract Dirichlet and Neumann initial boundary value problems actually satisfy the boundary condition. The Dirichlet Initial Boundary Value Problem. Let u be the solution in the Sobolev V C i/ of the abstract inhomogeneous Dirichlet lattice space H1 .@0; C ; j" gradj problem V u D f C ı ˝ u0 C . div " grad g div " grad V g/ (5.2.21) .@0 div " grad/ with (volume) source term f , initial data u0 and Dirichlet data g. Since V D R0 .m0 / H0 .@0; C/ ˝ L2 ./; f 2 R0 .m0 / H.0;0/ .@0; C; j" gradjCi/ V C i/ u0 2 H0 .j" gradj and g 2 R0 .m0 / H.0;1/ .@0; C ; j" gradj C i/; V C i/ and so the solution also the right-hand side is in H.1;2/ .@0; C ; j" gradj 10 satisfies V C i/: u 2 H.1;2/ .@0; C ; j" gradj We also note that V C i/: u g 2 H.1;1/ .@0; C ; j" gradj 10 We do not need the better regularity results for abstract heat equations given earlier. This makes the discussion more useful for other initial boundary value problems.
In particular, we have V C i/: @0 .u g/ 2 H.2;1/ .@0; C ; j" gradj From the equation we read now off that V 2 .u g/ D div " grad.u V V C i/: j" gradj g/ 2 H.2;1/ .@0; C ; j" gradj Thus, from V C i/ H.2;1/ .@0; C ; j" gradj V C i/ .u g/ 2 H.1;1/ .@0; C ; j" gradj and V 2 .u g/ 2 H.2;1/ .@0; C ; j" gradj V C i/; j" gradj we see V C i/: .u g/ 2 H.2;1/ .@0; C ; j" gradj Thus, we are led to the formulation of the boundary condition V u g 2 H1 .@0; C / ˝ H.grad/:
(5.2.22)
.@0 div " grad/ u D f C ı ˝ u0 :
(5.2.23)
From (5.2.18) we get
V C i/ satisfying (5.2.23), (5.2.22) also satisConversely, a u 2 H1 .@0; C ; j" gradj fies (5.2.18). Thus we have found that (5.2.23), (5.2.22) is an equivalent formulation of the abstract Dirichlet initial boundary value problem. Recalling the equivalent – more classically looking – formulation of the initial condition, we find that the abstract Dirichlet initial boundary value problem is equivalent to the problem of finding V i / such that u 2 R0 .m0 / H1 .@0; C / ˝ H1 .j" gradj .@0 div " grad/ u D f
on R>0 ;
V .u g/ 2 H1 .@0; C / ˝ H.grad/; u.0C/ D u0
(5.2.24)
V i /: in H1 .j" gradj
Remark 5.2.10. The rather coarse argument used here would also carry over to the case of the Schrödinger equation discussed earlier and also yield a solution theory with inhomogeneous Dirichlet boundary conditions. To underscore this generality we have also weakened purposely the formulation with regards to regularity. A more specific regularity could be formulated from the above reasoning.
Section 5.2 Heat Equation
365
The Neumann Initial Boundary Value Problem. Let us consider now the solution V i / of the abstract inhomogeneous Neumann initial u 2 H1 .@0; C ; j" gradj boundary value problem V " grad/ u D f C ı ˝ u0 C . div ‰ div V ‰/ .@0 div
(5.2.25)
with (volume) source term f , initial data u0 and Neumann data ‰. The right-hand side is in H.1;1/ .@0; C ; j" grad j i / and consequently u 2 H.1;1/ .@0; C ; j" grad j i /: From our general regularity results we can trade-off regularity to obtain u 2 H.2;1/ .@0; C ; j" grad j i /: This yields that j" grad j u 2 H.2;0/ .@0; C ; j" grad j i / or V i/ " grad u 2 H.2;0/ .@0; C ; j divj and @0 u 2 H.3;1/ .@0; C ; j" grad j i /: The equation now yields that V div." grad u ‰/ 2 H.3;0/ .@0; C ; j" grad j i /: Observing that also V i / H.3;0/ .@0; C ; j divj V i /; ." grad u ‰/ 2 H.2;0/ .@0; C ; j divj we find V i /: ." grad u ‰/ 2 H.3;1/ .@0; C ; j divj Leaving the details as an exercise to the reader, we arrive in the same way as above at the equivalent formulation of the abstract inhomogeneous Neumann initial boundary value problem, which is to find u 2 R0 .m0 / H1 .@0; C / ˝ H1 .j" grad j i / such that .@0 div " grad/ u D f
on R>0 ;
V ." grad u ‰/ 2 H1 .@0; C / ˝ H.div/; u.0C/ D u0
(5.2.26)
in H1 .j" grad j i /:
Remark 5.2.11. Again the rather coarse argument used here carries over to the case of the Schrödinger equation discussed earlier and would also yield a solution theory with inhomogeneous Neumann boundary conditions. Note that we have again weakened purposely the formulation with regards to regularity. A more specific regularity result could be formulated.
5.2.1.2 Transmission Initial Boundary Value Problem
In various applications it is of interest not only to have an impenetrable interface as modelled by boundary value problems, but also to consider penetrable interfaces between different subregions. This may occur with the heat equation if the material properties are different in different regions of the underlying domain . Our assumptions on the coefficients are actually general enough to cover this situation. Indeed, assume that consists of two parts 0 ; 1 and an interior interface , i.e. 0 ; 1 open subsets of such that 0 [ 1 D ;
@0 \ @1 \ D :
We shall assume that is a Lebesgue null set in . Moreover, let the material properties of j be encoded in "j ; j W L2 .j / ! L2 .j /. Then the mappings " WD "0 0 .m/ C "1 1 .m/;
WD 0 0 .m/ C 1 1 .m/
inherit the required properties, that is positive definiteness and boundedness, to ensure that for example Dirichlet or Neumann boundary value problems are well-posed in in the sense discussed above. Assuming sufficient classical regularity of ", the interface and the solution in 0 and 1 up to the interface , it is not hard to see that solving such a problem implies the classical transmission conditions on Œu WD .u/0 .u/1 D 0
and
n Œ" grad u WD n .." grad u/0 ." grad u/1 / D 0;
where . /j denotes the continuous extension of . /jj to j [ , j D 0; 1. In other words, u and " grad u do not jump across the interface . These jump conditions are in the present language generalized simply by stating that u 2 H.grad; /;
" grad u 2 H.div; /;
which is intrinsic to the solution concept we are using. So, there is no additional theory required to model this type of transmission problem11 . It may happen, however, that – speaking in classical terms – the jump terms are explicitly known to be non-trivial and, as in the case of inhomogeneous boundary data, the question of inhomogeneous jump data arises. We may take our guideline from the boundary data case. Let h 2 H1 .@0; C / ˝ H.grad; / and g 2 H1 .@0; C / ˝ H.div; / be given. Then uC0 h 2 H1 .@0; C / ˝ H.grad; /; (5.2.27) " grad.uC0 h/ 0 " grad hC0 g 2 H1 .@0; C / ˝ H.div; /
(5.2.28)
11 Note that this also models the so-called screen problem, where the critical interface is only a part of . Note that across parts of , where " is e.g. multiplication by a sufficiently regular function, we have locally that u is also regular and so no jump occurs.
can be used to model prescribed jumps across the interface. Assuming for simplicity homogeneous Dirichlet boundary data on @, we have to find u 2 H1 .@0; C ; V i / such that j" gradj V u/ D f C ı ˝ u0 @0 u div." grad
on R .0 [ 1 /:
To link the problem in 0 [ 1 to provide a formulation in , we have to utilize the transmission conditions (5.2.27), (5.2.28). We first note that equivalently we have V V h C g/ C div g @0 u div." grad.u C 0 h/ 0 " grad 0 0 D f C ı ˝ u0 V / and g 2 H0 .@0; C / ˝ H.div; / are given. where h 2 H0 .@0; C / ˝ H.grad; The latter equation, however, now holds in . Thus we arrive at the following problem as a model for the case of inhomogeneous transmission data: find u 2 H1 .@0; C; V i / such that j" gradj V V V D f C ı ˝ u0 C div." grad. .@0 div " grad/u 0 h/ 0 " grad h C 0 g/: V V The term div." grad. 0 h/ 0 " grad h/ corresponds to the jump of u across the interface and the term div.0 g/ C 0 div g models the jump of the ‘normal component’ of " grad u across 12 . V /, Theorem 5.2.12. Let 2R>0 , f 2R>0 .m0 / H0 .@0; C/˝L2 ./, u0 2H.grad; V / and g 2 R .m0 / H0 .@0; C / ˝ h 2 R>0 .m0 / H0 .@0; C / ˝ H.grad; >0 H.div; / be given. Then the transmission problem, that is the problem to find u 2 V / such that R>0 .m0 / H1 .@0; C / ˝ H.grad; V Df .@0 div " grad/u
on R>0 .0 [ 1 /
and the initial condition u.0C/ D u0 ; the boundary condition V / u C 0 h 2 H1 .@0; C / ˝ H.grad; and the transmission condition V h C g 2 H1 .@0; C / ˝ H.div; / V " grad.u C 0 h/ 0 " grad 0 are satisfied, has a unique solution u 2 R>0 .m0 / H0 .@0; C / ˝ L2 ./. 12
Note that for sufficiently regular g the latter term is grad 0 g, whereas for sufficiently regular h V V V the term " grad. 0 h/ 0 " grad h equals h " grad.0 /. These terms can be recognized as ı n g and h ı " n if is a sufficiently smooth .n 1/-dimensional submanifold. Here n denotes the normal to chosen as exterior to 0 .
Proof. By the above reasoning we find that a solution of the transmission problem satisfies @0 .u R>0 ˝ u0 /
(5.2.29)
V V h C g/ C div g D f div." grad.u C 0 h/ 0 " grad 0 0 in H0 .@0; C / ˝ L2 .0 [ 1 / and so, since is a Lebesgue null set, in H0 .@0; C / ˝ L2 ./. This is equivalent to the evolution problem V V V D f C ı ˝ u0 C div." grad. .@0 div " grad/u 0 h/ 0 " grad h C 0 g/: In particular this observation yields uniqueness. A solution of the latter problem, however, also satisfies the equation in R>0 .0 [ 1 / and the initial condition. It remains to show that V / u C 0 h 2 H1 .@0; C / ˝ H.grad; and V V h C g 2 H1 .@0; C / ˝ H.div; / w WD " grad.u C 0 h/ 0 " grad 0 in order to complete the existence proof. First, we read off from (5.2.29) that div w 2 H1 .@0; C / ˝ L2 ./:
(5.2.30)
Moreover, we have V .@0 div " grad/.u C 0 h/ D f C ı ˝ u0 V h g/ C @0 h div.0 " grad 0 0 where the right-hand side is in V C i/ D H1 .@0; C / ˝ H1 .j" gradj V C i/ H1 .@0; C / ˝ H1=2 . div " grad and our regularity argument in the proof of Lemma 4.2.23 shows that we can gain some spatial regularity. Indeed we get V C i/ u C 0 h 2 H2 .@0; C / ˝ H1 .j" gradj and so V /: u C 0 h 2 H1 .@0; C / ˝ H.grad; This implies that w 2 H1 .@0; C / ˝ L2 ./. Together with (5.2.30) we therefore obtain w 2 H1 .@0; C / ˝ H.div; /.
5.2.2 Stefan Boundary Condition Considering the abstract Neumann initial boundary value problem V " grad/ u D f C ı ˝ u0 C . div ‰ div V ‰/ .@0 div or more intuitively .@0 div " grad/ u D f
on R>0 ;
V ." grad u ‰/ 2 H1 .@0; C / ˝ H.div/; u.0C/ D u0
in H1 .j" grad j i /;
with volume source term f , Neumann boundary data ‰ and initial data u0 , it might happen that ‰ is not actually given, but depends on the solution itself. Let h 2 L1 ./ be a (generalized) vector field such that div h is also bounded and measurable, i.e. div h 2 L1 ./. Here div h is to be understood as the functional Z ' 7! h.x/ grad '.x/ dx
on CV 1 ./, i.e. in the – so-called – sense of distributions , and div h 2 L1 ./ means that there is a bounded, measurable function w such that Z Z h.x/ grad '.x/ dx D w.x/ '.x/ dx
for all ' 2 CV 1 ./. Up to differences on Lebesgue null sets, this w is uniquely determined. The case of the Neumann type boundary condition with ‰ WD u h C ‚ is referred to as the inhomogeneous Stefan initial boundary value problem, which is, more explicitly, to find u 2 R0 .m0 / H1 .@0; C / ˝ H1 .j" grad j i / such that .@0 div " grad/ u D f
on R>0 ;
V ." grad u u h ‚/ 2 H1 .@0; C / ˝ H.div/; u.0C/ D u0
(5.2.31)
in H1 .j" grad j i /:
It may be instructive to note that in classical terms the boundary condition would read n " grad u .n h/ u D n ‚ on @ if all data and the boundary of where sufficiently smooth (n denoting the exterior normal vector field to @). This type of boundary condition is not only known as a Stefan boundary condition, but also as a boundary condition of the third kind (the
first kind being the Dirichlet boundary condition and the second kind the Neumann boundary condition), a Robin boundary condition, an impedance boundary condition or even a mixed type boundary condition. In our present framework we may and shall discuss this type of boundary condition as a perturbation problem. We need to solve V " grad/ u D f C ı ˝ u0 C . div.u h C ‚/ div.u V .@0 div h C ‚//: If ‚ has the same properties that we have assumed for ‰, then the properties of the boundary term depend on V V . div.u h/ div.u h// D . u div h C .h grad u/ div.u h//: Assuming u 2 H.0;1/ .@0; C ; j" gradj i/, the last term on the right is in the space H.0;1/ .@0; C; j" gradji/ while the remaining two terms are slightly more regular being in H.0;0/ .@0; C ; j" gradj i/. By applying Theorem 5.2.5 we have that V " grad/1 W H.0;1/ .@0; C; j" gradji/ ! H.0;1/ .@0; C; j" gradji/; .@0 div and V " grad/1 W H.1;1/ .@0; C; j" gradji/ ! H.0;1/ .@0; C; j" gradji/; .@0 div are continuous. Moreover, for 1 we have according to Corollary 5.2.6 that V " grad/1 W H.0;0/ .@0; C; j" gradji/ ! H.0;1/ .@0; C; j" gradji/; .@0 div has a norm bounded by 1=2 . Thus, V " grad/1 . div.u h/ div.u V h// u 7! .@0 div is a contraction in H.0;1/ .@0; C ; j" gradj i/ if 2 R1 is chosen sufficiently large. For such and assuming that u0 2 H1 .j" gradj i/, the unique solution u 2 H.0;1/ .@0; C ; j" gradj i/ is a fixed point of the mapping V " grad/1 . div.u h/ div.u V h// C F; u 7! .@0 div where V " grad/1 .f C ı ˝ u0 div ‚ C div‚/: V F D .@0 div The additional assumption that u0 2 H1 . j" gradj i/ yields that F is indeed in H.0;1/ .@0; C ; j" gradj i/ as needed for the contraction argument. Note, that due to the specific perturbation argument used, uniqueness is only obtained in H.0;1/ .@0; C ; j" gradj i/, not in the space H1 .@0; C ; j" gradj i/. We summarize our findings. Theorem 5.2.13. Let 2 R>0 be sufficiently large, f 2 R>0 .m0 / H0 .@0; C / ˝ L2 ./, u0 2 H.grad; /, ‚ 2 R>0 .m0 / H0 .@0; C /˝H.div; / and h 2 L1 ./ such that div h 2 L1 ./. Then the Stefan initial boundary value problem (5.2.31) has a unique solution u 2 R>0 .m0 / H0 .@0; C / ˝ H.grad; /.
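The fixed point argument used for the Stefan boundary condition is, at its core, a Picard iteration for a map whose linear part has been made a contraction by choosing the exponential weight parameter sufficiently large. The following Python sketch (plain matrices standing in for the solution operator and for the boundary-term operator; the explicit rescaling plays the role of choosing the weight large) shows the mechanism in the simplest possible setting.

    import numpy as np

    rng = np.random.default_rng(3)
    G = rng.standard_normal((5, 5)); G /= 2.0 * np.linalg.norm(G, 2)   # ||G|| = 1/2
    B = rng.standard_normal((5, 5)); B /= 2.0 * np.linalg.norm(B, 2)   # ||B|| = 1/2
    F = rng.standard_normal(5)
    # the map u -> G B u + F is a contraction, since ||G B|| <= 1/4 < 1

    u = np.zeros(5)
    for _ in range(200):                         # Picard iteration
        u = G @ B @ u + F
    print(np.allclose(u, np.linalg.solve(np.eye(5) - G @ B, F)))        # True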
5.2.3 Lower Order Perturbations The perturbation argument in the last section sheds light on the general question of perturbations of lower order. Consider for example the homogeneous Neumann boundary condition V " grad B/u D f; .@0 div V " grad. Forwhere B is a suitable operator considered as a ‘perturbation’ of div mally, we obtain the fixed point problem V " grad/1 Bu C .@0 div V " grad/1 f u D .@0 div
(5.2.32)
V " grad/1 B could be made sufficiently and we would have a solution if .@0 div small in some sense. Since we are dealing here with a normal operator, the spectral theorem may be used to do some fine tuning. According to Corollary 5.2.6 we have V " grad, the mapping .@0 C A/1 is continuous since that, for A D div 1 j.@0; C A C /1 f j;;C1 p jf j;; p for f 2 H.;/ .@0; C ; A C i/, ; 2 Z, 2 R1 . p p Thus, if B W H.;C1/ .@0; C ; A C i/ ! H.;/ .@0; C ; A C i/ is bounded 1 V uniformly with respect p to , then .@0 div " grad/ B is bounded in the space H.;C1/ .@0; C ; A C i/ with an operator norm less than 1 for sufficiently large. In summary we obtain the following result. Theorem 5.2.14. Let f 2 H.;/ .@0; C ; j" grad j C i/, ; 2 Z, and let B W D.B/ H.;C1/ .@0; C ; j" grad j C i// ! H.;/ .@0; C ; j" grad j C i/ be a bounded, densely defined, linear operator with bound C > 0 independent of 1 sufficiently large. Then V " grad B/u D f .@0 div
(5.2.33)
has a unique solution u 2 H.;C1/ .@0; C ; j" grad j C i/ depending continuously on the data f for all sufficiently large 2 R1 . Typical examples are given by operators of the form B D 1H .@0; C// ˝b induced by a bounded mapping b W HC1 .j" grad j C i/ ! H .j" grad j C i/, ; 2 Z. We see that the inhomogeneous Stefan boundary condition discussed in the previous subsection is just a special case of this situation ( D D 0). It should be clear that suitable time dependence can be incorporated as well.
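The fixed-point remark above can be made slightly more explicit through the usual Neumann series (a sketch in ad-hoc shorthand: $L$ stands for $\partial_{0}-\operatorname{div}\varepsilon\operatorname{grad}$ equipped with the chosen boundary condition, and the norm is that of the space in which Theorem 5.2.14 is formulated): if $\|L^{-1}B\|<1$, then
$$u=(1-L^{-1}B)^{-1}L^{-1}f=\sum_{k=0}^{\infty}\bigl(L^{-1}B\bigr)^{k}L^{-1}f,$$
with the series converging in that space, which is precisely the unique solvability and continuous dependence asserted in the theorem.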
5.3 Acoustics
The equations of linear acoustics can be given in the form 0 grad v F D ; @0 C R div 0 p f where R incorporates certain material properties and the right-hand side is a given general source term, which may include initial data, volume sources and boundary data. As a matter of simplification we only consider the case where R W L2 ./ ! L2 ./ is bounded and positive definite, a non-empty, open subset of R3 . We abbreviate 0 grad A WD R : div 0 Selecting boundary conditions for the vector analytic differential expressions grad, div leads to the discussion of boundary value problems similar tothe heat equation considered previously. If R is of diagonal form – say R D 0" 0 – then it is frequently customary to eliminate the velocity field v from the equations. Indeed, formally applying div to the first equation and substituting div v from the second equation of the system @0 v C " grad p D F; @0 p C div v D f (5.3.34) yields13 @0 1 .f @0 p/ C div " grad p D div F or @20 p div " grad p D div F C @0 f: This calculation14 is made rigorous by considering the interaction between operations V and div with the structure of the lattice, which is addressed in our abstract like grad transmutation result Lemma 2.1.16. 13 Recalling the vector analytic identity curl grad D 0, we can conclude from (5.3.34) that the vorticity curl "1 v satisfies @0 curl "1 v D curl "1 F:
Thus, formally (with $t=0$ as initial time)
$$\operatorname{curl}\varepsilon^{-1}v(t)=\operatorname{curl}\varepsilon^{-1}v(0+)+\int_{0}^{t}\operatorname{curl}\varepsilon^{-1}F(s)\,ds.$$
This formal calculation can also be performed rigorously in the framework developed in the following.
14 For an abstract $2\times2$ operator matrix $\begin{pmatrix}A&B\\C&D\end{pmatrix}$ with $D$ continuously invertible this elimination step yields $\begin{pmatrix}A-BD^{-1}C&0\\C&D\end{pmatrix}$.
As an application of our abstract transmutation result we note: Lemma 5.3.1. The mappings V W H.grad; V / 1=2 ŒL2 ./ ! "1=2 ŒL2 ./ " grad and V W H.div; / "1=2 ŒL2 ./ ! 1=2 ŒL2 ./ div D ." grad/ have continuous extensions V W H1 .@0; C ; j" gradj V C i / ! H1 .@0; C ; j divj C i / " grad and V C i /; div W H1 .@0; C ; j divj C i / ! H1 .@0; C ; j" gradj respectively. Proof. This result is clear from the general theory and the tensor product structure of the spaces H.k;h/ .@0; C ; j divj C i / D Hk .@0; C / ˝ Hh .j divj C i / and V C i / D Hk .@0; C / ˝ Hh .j" gradj V C i/ H.k;h/ .@0; C ; j" gradj for k; h 2 Z. Thus, one is led to consider equations of the form .@20 C A/u D G where A can, by a suitable choice of boundary condition, be realized as an operator of the type discussed in connection with the heat equation. In the simplest case this reduces to the wave equation. For many purposes it is, however, more convenient to consider the first order system directly.
Here A BD 1 C is known as the Schur complement of D. In this sense the two equations are obtained by time-differentiating the Schur complements of the diagonal entries of the system.
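The algebra behind this footnote is the standard block Gauss elimination step; we record it only for orientation ($A$, $B$, $C$, $D$ as in the footnote, $D$ continuously invertible):
$$\begin{pmatrix}1&-BD^{-1}\\0&1\end{pmatrix}\begin{pmatrix}A&B\\C&D\end{pmatrix}=\begin{pmatrix}A-BD^{-1}C&0\\C&D\end{pmatrix},$$
so inverting the original block operator matrix amounts to inverting the Schur complement $A-BD^{-1}C$ together with the entry $D$.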
5.3.1 Dirichlet and Neumann Boundary Condition If we introduce specifically Dirichlet or Neumann boundary conditions as constraints for the spatial operator we obtain the operator matrices ! ! V 0 grad 0 grad ; ; V 0 div div 0 respectively. In both cases the general form15 of these operator matrices yields that the resulting operators are skew-selfadjoint in a 4-component L2 ./. Thus, since R is bounded, (strictly) positive definite, we have that ! ! V 0 grad 0 grad AD WD R ; AN WD R V 0 div 0 div are skew-selfadjoint in R1=2 L2 ./, i.e. with respect to the weighted inner product hjR1 i0 of L2 ./. Note that in contrast to A, the operators AD ; AN are suitable for constructing Sobolev lattices. To implement non-trivial Dirichlet boundary data we can proceed as before. Theorem 5.3.2. Let 2 R>0 , f; F 2 R>0 .m0 / ŒH0 .@0; C / ˝ L2 ./, p0 ; v0 2 L2 ./ and h 2 R>0 .m0 / ŒH0 .@0; C / ˝ H.grad; / be given. Then the Dirichlet initial boundary value problem: find U 2 H.1;1/ .@0; C ; jAj C i / \ R>0 .m0 / H.1;1/ .@0; C ; AD C 1 / such that .@0 C A/U D U.0C/ D
F f
on R>0 ;
v0 ; p0
(5.3.35)
0 2 H1 .@0; C / ˝ H1 .AD C 1/; U h has a unique solution16 depending continuously on the data. 0 A , It is a straightforward calculation to see that any .2 2/-operator-matrix of the form A 0 where A H1 ˚ H2 is a densely defined, closed linear operator, is skew-selfadjoint in H1 ˚ H2 with domain D.A/ ˚ D.A /. 16 Note that the equation holds naturally in H .1;0/ .@0; C ; jAj C i /. Since 15
D.AD / D H.1/ .AD C 1 / H.1/ .jAj C i / D D.A/ and so H.0/ .jAj C 1 / H.1/ .jAj C 1 / H.1/ .AD C 1 /; we may consider the equation to hold in H.1;1/ .@0; C ; AD C 1 /.
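For the reader's convenience, the formal computation behind footnote 15 is the following (a sketch for the representative sign convention $\begin{pmatrix}0&-A^{*}\\A&0\end{pmatrix}$; the domain statement $D(A)\oplus D(A^{*})$ is the part that uses the closedness assumptions):
$$\Bigl\langle\begin{pmatrix}0&-A^{*}\\A&0\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}\Big|\begin{pmatrix}u\\v\end{pmatrix}\Bigr\rangle=\langle Ax|v\rangle-\langle A^{*}y|u\rangle=\langle x|A^{*}v\rangle-\langle y|Au\rangle=-\Bigl\langle\begin{pmatrix}x\\y\end{pmatrix}\Big|\begin{pmatrix}0&-A^{*}\\A&0\end{pmatrix}\begin{pmatrix}u\\v\end{pmatrix}\Bigr\rangle,$$
so the operator matrix is skew-symmetric, and skew-selfadjointness follows once its adjoint is identified with the same matrix on $D(A)\oplus D(A^{*})$.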
Proof. First we observe that
$$(\partial_{0}+A)U=\begin{pmatrix}F\\f\end{pmatrix}+\delta\otimes\begin{pmatrix}v_{0}\\p_{0}\end{pmatrix}\quad\text{on }\mathbb{R}.$$
Including the boundary data we obtain
$$\partial_{0}U+A_{D}\Bigl(U-\begin{pmatrix}0\\h\end{pmatrix}\Bigr)=\begin{pmatrix}F\\f\end{pmatrix}+\delta\otimes\begin{pmatrix}v_{0}\\p_{0}\end{pmatrix}-A\begin{pmatrix}0\\h\end{pmatrix}$$
and so
$$(\partial_{0}+A_{D})U=\begin{pmatrix}F\\f\end{pmatrix}+\delta\otimes\begin{pmatrix}v_{0}\\p_{0}\end{pmatrix}+(A_{D}-A)\begin{pmatrix}0\\h\end{pmatrix}.$$
Since this is an equation associated with an evolutionary operator .@0 C AD / we see uniqueness of the stated Dirichlet initial boundary value problem. Conversely, the uniquely determined solution of the latter equation satisfies the initial condition v0 U.0C/ D : p0 Moreover, we have 0 0 C AD U @0 U h h v0 F 0 0 Cı˝ D @0 2 H1 .@0; C / ˝ H0 .AD C 1/: A f h h p0 From this follows U
0 2 H1 .@0; C / ˝ H0 .AD C 1/: h
By our regularity result we also have 0 U 2 H2 .@0; C / ˝ H1 .AD C 1/ h which is the desired boundary condition. Likewise, continuous dependence holds in H1 .@0; C / ˝ H1 .AD C 1/: Note that the equation @0 U C AD U D
F f
Cı˝
holds in H2 .@0; C / ˝ H2 .AD C 1/.
v0 p0
C .AD A/
0 h
By assumption we have 0 2 H0 .@0; C / ˝ H1 .jAj C i/ h and so, since H1 .AD C 1/ H1 .jAj C i/, we have U 2 H2 .@0; C / ˝ H1 .jAj C i/: Thus the equation
.@0 A/U D F C ı ˝
v0 p0
holds in H3 .@0; C / ˝ H0 .jAj C i/ D H3 .@0; C / ˝ H0 .AD C 1/. Remark 5.3.3. With U D
v p
the Dirichlet boundary condition reduces to
V /: p h 2 H1 .@0; C / ˝ H.grad; Analogously, we can deal with the Neumann case. Theorem 5.3.4. Let 2 R>0 , f; F 2 R>0 .m0 / ŒH0 .@0; C / ˝ L2 ./, p0 ; v0 2 L2 ./ and g 2 R>0 .m0 / ŒH0 .@0; C / ˝ H.div; / be given. Then the Neumann initial boundary value problem: find U 2 H.1;1/ .@0; C ; jAj C i / \ R>0 .m0 / ŒH.1;1/ .@0; C ; AN C 1 / such that .@0 C A/U D U.0C/ D U
F f
on R>0 ;
v0 ; p0
(5.3.36)
g 2 H1 .@0; C / ˝ H1 .AN C 1/; 0
has a unique solution depending continuously on the data. Proof. As for the Dirichlet case we first observe that .@0 C A/
v p
D
F f
Cı˝
v0 p0
on R :
Together with the boundary condition we get that @0
v p
C AN
vg p
D
F f
Cı˝
v0 p0
g A 0
or .@0 C AN /
v p
D
F f
Cı˝
v0 p0
C .AN
g : A/ 0
Conversely, the latter implies that (5.3.36) holds and so existence and uniqueness of solution follows. Likewise, continuous dependence holds in H1 .@0; C / ˝ H1 .AN C 1/. Note that also H1 .jAj C i/ H1 .AN C 1/ holds, so that the original equation holds in H.1;1/ .@0; C ; AN C 1 /. By assumption we have that g 2 H0 .@0; C / ˝ H1 .jAj C i/ 0 and so, since H1 .AN C 1/ H1 .jAj C i/; we obtain U 2 H2 .@0; C / ˝ H1 .jAj C i/: Thus the equation
.@0 A/U D F C ı ˝
v0 p0
holds also in H3 .@0; C / ˝ H0 .jAj C i/ D H3 .@0; C / ˝ H0 .AN C 1/. Remark 5.3.5. The Neumann boundary condition reduces with U D
v p
to
V /: v g 2 H1 .@0; C / ˝ H.div;
5.3.2 Wave Equation As indicated above, it is customary to consider the second order equation obtained from the first order system of linear acoustics by eliminating the velocity field. Recall in this respect Theorem 2.1.16. In order for this to work, one assumes that R D 0" 0 , i.e. R has block-diagonal form. We recall that the resulting equation is formally @20 p div " grad p D div F C @0 f
and is referred to as a wave equation. Clearly we have incurred a loss of regularity on the right-hand side, but since the equation can, in the case of the Dirichlet boundary condition, be considered as !! V div F C @0 f @0 p 0 div " grad @0 D p 0 1 0 our solution theory17 can be applied. 17
By Cramer’s rule we find for A selfadjoint and non-negative 1 @0 A A @0 D .@20 C A/1 1 @0 1 @0 p p p @0 . A/2 .@0 C i A/1 .@0 i A/1 ; D 1 @0 p p which is clearly the continuous inverse in H1 .@0; C ; A C i / ˚ H1 .@0; C ; A C i /. We note that if 0 2 %.A/ then 0 A 1 0 p p p p is even skew-selfadjoint in H0 . A C i / ˚ H1 . A C i / (with domain H1 . A C i / ˚ H2 . A C i /). V Recall that in our current context we have specifically A D div " grad. Alternatively, the abstract wave equation .@20 C A/u D f can be factorised into a forward and a backward abstract Schrödinger type operator p p .@0 C i A/.@0 i A/u D .@20 C A/u Df and can consequently be translated in a more symmetric first order in time system of the form p 1 @0 i A @0 f p u 1 p D : 0 i A @0 u @0 i A It is noteworthy that here
0 p
i
p A 0
i A p p p p is skew-selfadjoint in H0 . A C i / ˚ H0 . A C i / (with domain H1 . A C i / ˚ H1 . A C i /). If it seems preferable to avoid complex coefficients we could also consider p 1 @p0 A @0 f p u1 D 0 A @0 u A @0 as a reformulation, which is unitarily equivalent by the transformation matrix 10 0i . Here we find – without further conditions – that p 0 A p A 0 p p p p is skew-selfadjoint in the space H0 . ACi /˚H0 . ACi / (with domain H1 . ACi /˚H1 . ACi /).
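As a quick check of the factorisation used in this footnote (a formal computation; $A$ is selfadjoint and non-negative, so $\sqrt{A}$ is given by the spectral theorem and commutes with $\partial_{0}$):
$$(\partial_{0}+\mathrm{i}\sqrt{A})(\partial_{0}-\mathrm{i}\sqrt{A})=\partial_{0}^{2}-\mathrm{i}\sqrt{A}\,\partial_{0}+\mathrm{i}\sqrt{A}\,\partial_{0}+\bigl(\sqrt{A}\bigr)^{2}=\partial_{0}^{2}+A.$$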
Conversely, it is not hard to see that the solution u D uu01 of !! V u0 div F C @0 f 0 div " grad @0 D u1 0 1 0 satisfies @0 u1 D u0 ;
V u1 D div F C @0 f @0 u0 div " grad
and so V u1 D div F C @0 f: @20 u1 div " grad The natural setting in which this equation can be considered rigorously is the space V C i /. Utilizing our findings, we may now let H1 .@0; C ; j" gradj V C i/ p WD u1 2 H1 .@0; C ; j" gradj and V v WD @1 0 .F " grad p/ 2 H1 .@0; C ; j divj C i / and we get that @20 p C div @0 v D @0 f and so by time integration !! V v F 0 " grad @0 C D : p f div 0 Note that, for F 2 H1 .@0; C ; j divj C i /, the term div F is well-defined as V C i /. The underlying lattice structure for the an element in H1 .@0; C ; j" gradj V C i / ˚ H1 .@0; C ; j divj C i /. system is associated with H1 .@0; C ; j" gradj In the case of the Neumann boundary condition we would of course find that V grad p D div F C @0 f; @20 p div" and so, by letting V v WD @1 0 .F " grad p/ 2 H1 .@0; C ; j divj C i /; we get @0 C
0 " grad V div 0
!!
v p
D
F f
as an equivalent equation. The latter system is now acting in V C i / ˚ H1 .@0; C ; j" gradj C i /: H1 .@0; C ; j divj This form of a system seems to be somewhat preferable over the crude first order in time re-formulation used initially for the wave equation, since the typical block of the spatial operator is more obvious. structure C0 C 0
Let us now consider the following more general abstract wave equation .@20 C A/ p D f which as above can be shown to be equivalent to 0 A u f D : @0 C 1 0 p 0 Here A W D.A/ H ! H denotes a closed, densely defined linear operator with non-empty resolvent set in a Hilbert space H . According 0to Aour general solution theory we would need to know more about the spectrum of 1 0 . We first note that solving 0 A u f0 i C D 1 0 p f1 is equivalent to solving ..i C /2 C A/ p D f0 C .i C / f1 : is bounded above if and only if Re .A/ is bounded above and Thus, Re 01 A 0 I m .A/ is bounded. Indeed, the mapping i R C R>0 ! C; z 7! z 2 has q y x C iy ! 7 ip q C p 2 2 2 xC x Cy
xC
p x2 C y2 p 2
q p xC x 2 Cy 2 p as inverse. If x is bounded above and y is bounded then is also bounded; 2 indeed, for x 2 1; c0 and y 2 Œc0 ; c0 for some c0 2 R>0 , we get
q xC
p p p x2 C y2 x C jxj C jyj 3c0 p : p p 2 2 2 q
p xC x 2 Cy 2 p . As a result we obtain the following soluIf y is unbounded then so is 2 tion theory for abstract wave equations.
Theorem 5.3.6. Let A W D.A/ H ! H denote a closed, densely defined operator in Hilbert space H with RehpjA piH C1 hpjpiH and jI mhpjA piH j C0 hpjpiH for all p 2 D.A/ and some C0 ; C1 2 R>0 . Then for every f 2 H1 .@0; C ; A i c0 / with c0 ; 2 R>0 sufficiently large, the abstract wave equation .@20 C A/ p D f has a unique solution in H1 .@0; C ; A i c0 /. Moreover, the solution operator .@20 C A/1 is continuous and causality holds. Proof. The existence and uniqueness statement is clear from our considerations above if a suitable boundedness of 1 1 0 A .i C / A i C C D 1 .i C / 1 0 .i C / A ..i C /2 C A/1 D 1 .i C / can be shown. Here, the inverse has been calculated by formally applying Cramer’s rule. It suffices to see uniform boundedness of A ..i C /2 C A/1 .i C /2 but this is clear since A ..i C/2 CA/1 .i C/2 D .i C/2 ..i C/2 CA/1 . Thus, unique existence and causality follows. Moreover, hpj..i C /2 C A/ piH D .i C /2 hpjpiH C hpjA piH D .. 2 2 / C 2i / hpjpiH C hpjA piH and so Rehpj..i C /2 C A/ piH D . 2 2 / hpjpiH C RehpjA piH ; I mhpj..i C /2 C A/ piH D 2 hpjpiH C I mhpjA piH : For j j =2, we find from our assumptions that 3 2 hpjpiH C RehpjA piH 4 2 3 C1 hpjpiH 4
Rehpj..i C /2 C A/ piH
and, for j j > =2, . 2 C0 / hpjpiH 2 hpjpiH jI mhpjA piH j 2 j j hpjpiH jI mhpjA piH j jI mhpj..i C /2 C A/ piH j:
From the assumptions we see that
$$\bigl\|\bigl((\mathrm{i}\lambda+\nu)^{2}+A\bigr)^{-1}\bigr\|\le\min\Bigl\{\frac{1}{\nu^{2}-C_{0}},\ \frac{1}{\tfrac{3}{4}\nu^{2}-C_{1}}\Bigr\}$$
for all $\nu>\max\bigl\{\sqrt{C_{0}},\,2\sqrt{C_{1}/3}\bigr\}$ and $\lambda\in\mathbb{R}$. This observation yields the desired continuous dependence result by applying the inverse Fourier–Laplace transform with respect to time. This abstract result could be complemented by suitable regularity statements to obtain similar results for initial boundary value problems. We shall not pursue this here but rather leave this development to the interested reader.
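Schematically, the passage from this resolvent bound to continuous dependence can be recorded as follows (a sketch; we keep writing $\mathrm{i}\lambda+\nu$ for the Fourier–Laplace variable): since the Fourier–Laplace transform diagonalises $\partial_{0}$, the solution $p=(\partial_{0}^{2}+A)^{-1}f$ satisfies an estimate of the form
$$\|p\|\le\Bigl(\sup_{\lambda\in\mathbb{R}}\bigl\|\bigl((\mathrm{i}\lambda+\nu)^{2}+A\bigr)^{-1}\bigr\|\Bigr)\,\|f\|$$
in the corresponding exponentially weighted space, and the supremum is finite by the bound just obtained.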
5.3.3 Reversible Heat Transport The constant coefficient equations modelling heat conduction and acoustic waves in Rn have an essential difference (inherited by our current more complicated problems), which shows up in their respective fundamental solutions. The wave equation has a fundamental solution with compact support at any given fixed, positive time, whereas the fundamental solution of the heat equation has support equal to Rn for any positive time. These observations are occasionally referred to by saying that the wave equation features a finite propagation speed, whereas the heat equation features infinite propagation speed. The latter clearly collides with physical concerns and so, particularly for an elementary particle and low temperature scale, a hyperbolic heat flow model has been suggested based on the conversation law %0 c P D div q C Q and a modified Fourier’s law (with 2 R>0 ) qP C q D grad due to Cattaneo. This amounts to considering18 %0 c R C %0 c P div grad D QP C Q 18
This is actually a system associated with an operator matrix of the form %0 c 0 0 0 0 div Q @0 C C D : 1 1 grad 0 q 0 0 0
It is, however, customary to consider the resulting scalar equation for the temperature . Indeed, the standard heat equation is obtained by a simple Gauss elimination step in the block matrix div %0 c @0 grad 1 @0 C 1 yielding
%0 c @0 div. 1 @0 C 1 /1 grad 0 : grad 1 @0 C 1
or (with the coefficients rescaled by $\tfrac{1}{\varrho_{0}c}$)
R C P div grad D QP C Q: Introducing suitable boundary conditions, this can be turned into a well-defined reversible evolution equation. For example, with homogeneous Dirichlet boundary data, V C 1/. The details of the we obtain a solution theory in H1 .@0; C ; div grad discussion may be left as an exercise. We shall, however, discuss the behavior of the solution as ! 0C for the case of homogeneous Dirichlet data. In this case, returning to our operator setting, the temperature distribution satisfies @20 C @0 C A D @0 Q0 C Q0 V Here A is as a selfadjoint operator (provided where Q0 WD Q, A WD div grad. 1 " WD and WD %0 c satisfy the respective assumptions in the above general theory). The solution operator . @20 C @0 C A/1 . @0 C 1/ D . .@0; C /2 C .@0; C / C A/1 . .@0; C / C 1/ can now be analyzed by spectral methods. We first find . .@0; C /2 C .@0; C / C A/1 . .@0; C / C 1/ D . .@0; C /2 C .@0; C / C A/1 . .@0; C /2 C .@0; C //.@0; C /1 D .@0; C /1 . .@0; C /2 C .@0; C / C A/1 A.@0; C /1 : Clearly, we have that .; / 7!
p A. .i C /2 C .i C / C A/1
is uniformly bounded for 2 R>0 and all 2 R. Indeed, we estimate by formally taking A 2 R0 and looking for minima of s 7! . s C 2 C C A/2 C .2 C 1/2 s on R0 , which can be found at s D 0 or where its derivative vanishes, i.e. 2 . s. / C 2 C C A/ C .2 C 1/2 D 0: The top left entry %0 c@0 div. 1 @0 C 1 /1 grad D . @0 C 1/1 .%0 c@20 C %0 c@0 div grad/ yields the differential operator %0 c @20 C %0 c @0 div grad of the heat equation with the Cattaneo modification. The case D 0 leads to the standard heat equation. In this sense the above heat equation is obtained as the Schur complement of the original block operator system.
Thus, we have a possible minimum at 2 C C A .2 C 1/2 2 2 1 A D 2 2 2 1 2 D 1 A 2
s. / D
if this happens to be non-negative. In the latter case, where A C .2 C1/2
1 2
1 2 , C 1/2 s.
C 2
the value of the function can be estimated from below by . 2 /2 C .2 In the case s D 0 there is a uniform lower bound .CA/2 . Thus, we obtain the estimate p j A. .i C /2 C .i C / C A/1 j p ² p ³ A A 1 ; max A : . C A/ . .2 C1/2 /2 C .2 C 1/2 s. / R>0 2 2 p r Observing that r 7! r 2 Cc is maximal at r D c and that p A 2
. .2 C1/ /2 C .2 C 1/2 s. / 2 p 3=2 A D 2 .2 C 1/2 .A C . 2 C1 2 / . C p 3=2 A D .2 C 1/2 .A C . /2 14 / p 3=2 q 1 2 2 A ; 1 .2 C 1/2 . 2 A C . /2 /
1 2 //
this can be estimated by p j A. .i C /2 C .i C / C A/1 j p 3=2 ³ ² 1 2 max p ; 2 2.2 C 1/2 p ² ³ 1 2 max p ; : 2 2.2 C 1/2 3=2 Now r 7!
r
.r 2 C1/2
is maximal at r D
p1 3
with value
p 3 3 16
and so
p ³ ² p 3 3 1 2 1 j A. .i C / C .i C / C A/ j p max 1; : 16 2
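The two elementary maximisations used above can be checked directly (a side computation, independent of the operator-theoretic setting): for $c\in\mathbb{R}_{>0}$,
$$\frac{d}{dr}\,\frac{r}{r^{2}+c}=\frac{c-r^{2}}{(r^{2}+c)^{2}},\qquad\text{so}\qquad\max_{r\ge0}\frac{r}{r^{2}+c}=\frac{1}{2\sqrt{c}}\ \text{ at } r=\sqrt{c},$$
and
$$\frac{d}{dr}\,\frac{r}{(r^{2}+1)^{2}}=\frac{1-3r^{2}}{(r^{2}+1)^{3}},\qquad\text{so}\qquad\max_{r\ge0}\frac{r}{(r^{2}+1)^{2}}=\frac{1/\sqrt{3}}{(4/3)^{2}}=\frac{3\sqrt{3}}{16}\ \text{ at } r=\tfrac{1}{\sqrt{3}}.$$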
From the spectral theorem for A we now obtain p 1 j A. .i C /2 C .i C / C A/1 uj0 p juj0 2 p for all u 2 H0 . A C i / and 2 R
p
3163
. Noting that
..@0; C / A/ D ..@0; C / ..A C 1/ 1// D .@0; C / .A C 1 / .@0; C / D .@0; C / .A C 1/ .1 .A C 1/1 / and applying the inverse Fourier–Laplace transform with respect to time, we obtain uniform boundedness for ! 0C of . .@0; C /2 C .@0; C / C A/1 . .@0; C / C 1/ as a mapping from H.kC1;sC1/ .@0; C ; any k; s 2 Z. In particular, we see
p
A C i / to H.k;s/ .@0; C ;
p
A C i / for
. @20 C @0 C A/1 . @0 C 1/Q0 ! .@0 C A/1 Q0 p p in H1 .@0; C ; A C i / as ! 0C for every Q0 2 H1 .@0; C ; A C i /, 2 R p3 . This observation justifies the replacement of the heat equation by the
4
Cattaneo model for small 2 R>0 .
5.4
Electrodynamics
Maxwell’s equations consist of Ampère’s law and Faraday’s law P C curl E D 0; B
P curl H C J D 0; D
(5.4.37)
where the physical constants have been absorbed by a suitable choice of scale. Here D is the electric displacement current, B the magnetic flux density, E the electric field, H the magnetic field and J the external current density. Recalling that div curl D 0, we conclude from (5.4.37) that P D div J; div D P D 0: div B
386
Chapter 5 Some Evolution Equations of Mathematical Physics
This yields formally (with t D 0 as initial time)
Z
q.t / WD div D.t / D div D.0C/
t
div J.s/ ds; 0
div B.t / D div B.0C/: The so-called charge density q thus satisfies the charge conservation law qP C div J D 0:
(5.4.38)
Likewise we see that div B is conserved over time and it is generally assumed in physics that the conserved value of div B is equal to zero. E The induced field D B is linked to the electromagnetic field H by so-called material relations, which in the linear case are mostly assumed to be block diagonal D " 0 E D B 0 H although the following considerations are largely independent of this particular form. Assuming that the material does not change with time we are led to consider the formal evolution equation 1 1 " 0 " 0 E E 0 curl J @0 D : 0 0 H H curl 0 0 To turn this into a proper evolution equation we need to impose suitable boundary conditions and construct appropriate Hilbert spaces and Sobolev lattices. The generation of such structures can be carried out in a fashion similar to the case of acoustics. The starting point is to define the operator curl j V
C1 ./
W CV 1 ./ L2 ./ ! L2 ./; 1 0 1 0 @2 E3 @3 E2 E1 @ E2 A 7! @ @3 E1 @1 E3 A E2 @1 E2 @2 E1
where is an arbitrary non-empty open subset of R3 . Clearly, the operator curl j V C1./ is densely defined and therefore has an adjoint operator .curl j V / , which we deC1 ./ note simply by curl. Indeed, integration by parts yields classically hˆj curl ‰i0 D hcurl ˆj‰i0
(5.4.39)
for all vector fields ˆ; ‰ 2 CV 1 ./, in fact even for ˆ 2 CV 1 ./ and ‰ 2 C1 ./. This justifies our notation curl introduced here and shows in particular that curl j V
C1 ./
curl
Section 5.4 Electrodynamics
387
confirming, as our choice of notation already suggests, that curl j V
C1 ./
is indeed the
restriction of curl to the domain CV 1 ./. It follows that curl is densely defined and so V Thus, we curl j V is closable. We shall denote the closure of curl j V by curl. C1 ./ C1 ./ have V curl : curl j curl CV 1 ./
V and curl are Hilbert spaces when Recall that the domains of the closed operators curl V / equipped with the graph inner product. We shall denote these spaces by H.curl; and H.curl; /, respectively. Note that V / D H1 .jcurl V j C i /; H.curl;
H.curl; / D H1 .j curl j C i /;
V j C i //k2Z and .Hk .j curl j C i //k2Z are the associated Sobolev where .Hk .jcurl lattices. By construction we have achieved that – by the definition of the adjoint operator – the classical integration-by-parts relation (5.4.39) is generalized to V hˆjcurl‰i 0 D hcurl ˆj‰i0
V /: for all ˆ 2 H.curl; /; ‰ 2 H.curl;
(5.4.40)
V / provides a generalized formulation of the This shows that ‰ being in H.curl; classical boundary condition ‘n ‰ D 0 on @’, where n denotes the exterior normal V / and v 2 H.div; V / were field to @, in much the same way as ' 2 H.grad; utilized to generalize ‘' D 0 on @’ and ‘n v D 0 on @’. Note that, in contrast to the classical situation however, the generalized boundary conditions do not need 0 A the existence of the normal field n to @. From the general structure A of the 0 0 curl operator matrix M WD curl it is clear – compare footnote on page 374 – that V 0 V /˚H.curl; / in the Hilbert this is a skew-selfadjoint operator with domain H.curl; 2 2 2 2 2 space .L ./ ˚ L ./ ˚ L .// ˚ .L ./ ˚ L ./ ˚ L2 .// for which we shall again simply write L2 ./, the intended number of components being obvious from the context. This operator matrix leads us to the study of the boundary condition of total reflection, also known as the electric boundary condition.
5.4.1 The Electric Boundary Condition Given a bounded, linear, selfadjoint, positive definite operator E W L2 ./ ! L2 ./ it follows from our general considerations that ME WD E 1 M
388
Chapter 5 Some Evolution Equations of Mathematical Physics
is – according to Corollary 5.2.4 – also skew-selfadjoint in the space E 1=2 ŒL2 ./. Thus, Maxwell’s equations are transformed into the initial boundary value problem E D E 1 F on R>0 ; .@0 ME / H E (5.4.41) 2 H1 .@0; C / ˝ H1 .ME C 1/; H E E0 2 E 1=2 ŒL2 ./: .0C/ D H H0 and added an appropriate initial Here we have used the abbreviation F WD J 0 condition. Remark 5.4.1. As in the acoustic case the boundary condition reduces to V /: E 2 H1 .@0; C / ˝ H.curl; V / is frequently The generalized boundary condition E 2 H1 .@0; C / ˝ H.curl; 19 referred to as the boundary condition of total reflection or the electric boundary condition. Following our by now familiar line of reasoning we are led to consider an evolution equation of the form .@0 ME /U D .E 1 F C ı ˝ U0 /
(5.4.42)
in H1 .@0; C ; ME C 1 /, looking for a solution U of (5.4.41) (or (5.4.42)) in the subspace R>0 .m0 / ŒH1 .@0; C / ˝ H0 .ME C 1 /. Theorem 5.4.2. Let source term F 2 R>0 .m0 / ŒH0 .@0; C / ˝ H0 .ME C 1 / and initial data U0 2 L2 ./ D H0 .ME C 1 / be given. Then there is a unique solution U 2 R>0 .m0 / ŒH1 .@0; C / ˝ H0 .ME C 1 / of (5.4.41) depending continuously on the data. E be the solution of (5.4.42). We claim that this is the desired Proof. Let U D H solution of (5.4.41). To see this we only need to show that the boundary condition is satisfied, since the remaining claims follow easily. We read off that U 2 H1 .@0; C / ˝ H0 .ME C 1 /; ME U D @0 U .E 1 F C ı ˝ U0 / 2 H1 .@0; C / ˝ H0 .ME C 1 / 19
The reference to the phenomenon of total reflection comes from the observation that in such a case the incoming electric field Ein and the reflected field Eout have the same tangential component, so that for the difference of these fields we have n .Ein Eout / D 0 on the reflecting boundary .n normal field).
Section 5.4 Electrodynamics and so
U D
E H
389
2 H1 .@0; C / ˝ H1 .ME C 1 /
V / ˚ H.curl; /, the boundary condition from which, using H1 .ME C 1 / D H.curl; V / E 2 H1 .@0; C / ˝ H.curl; follows. Since (5.4.41) can be transcribed into (5.4.42) uniqueness follows also for (5.4.41). Continuous dependence for example in H1 .@0; C / ˝ H0 .ME C 1 / follows from the general theory applied to the particular problem at hand. An inhomogeneous boundary condition can be modelled in a way analogous to the case of acoustics. To implement the generalized boundary condition V / E G 2 H1 .@0; C / ˝ H.curl; with G 2 R>0 .m0 / ŒH0 .@0; C / ˝ H.curl; / given, we only need to add a corresponding boundary source term of the form 0 G E 1 C ME 2 H0 .@0; C / ˝ H1 .ME C 1 / curl G 0 to the right-hand side of (5.4.41) and (5.4.42). We shall not pursue this further. In order to reconstruct the induced charge conservation law – derived previously somewhat heuristically – in a rigorous way, we require the following lemma. Lemma 5.4.3. We have the following inclusions: V / H.curl; V /; gradŒH.grad;
(5.4.43)
V / H.div; V /; curlŒH.curl;
(5.4.44)
curlŒH.curl; / H.div; /;
(5.4.45)
gradŒH.grad; / H.curl; /:
(5.4.46)
curl grad D 0 on H.grad; /;
(5.4.47)
V grad V D 0 on H.grad; V /; curl
(5.4.48)
div curl D 0 on H.curl; /;
(5.4.49)
V curl V D 0 on H.curl; V /: div
(5.4.50)
Moreover,
390
Chapter 5 Some Evolution Equations of Mathematical Physics
Furthermore, the orthogonality relations V / ? curlŒH.curl; /; gradŒH.grad;
(5.4.51)
V / ? gradŒH.grad; / curlŒH.curl;
(5.4.52)
hold. V / if and only if V D curl we get that ‰ 2 H.curl; Proof. From curl hˆjf i0 D hcurl ˆj‰i0
for all ˆ 2 H.curl; /
(5.4.53)
V ', ' 2 CV 1 ./, we find for some f 2 L2 ./. For ‰ D grad V grad V 'i0 D 0 V 'i0 D hˆjcurl hcurl ˆjgrad for all ˆ 2 H.curl; / (by definition of H.curl; / and because curl grad D 0 on CV 1 ./ by classical vector analysis). Consequently, by a density argument we have V 'i0 D 0 hcurl ˆjgrad
(5.4.54)
V grad, we obtain (5.4.51). Reading this as for all ˆ 2 H.curl; / and, since grad V ', we observe that grad V ' satisfies (5.4.53) with f D 0. Thus, we a property of grad V V have that grad ' 2 H.curl; / and V grad V 'D0 curl V /. The inclusion curlŒH.curl; V / H.div; V / follows for every ' 2 H.grad; in an analogous way. The remaining inclusions follow similarly. We only consider curlŒH.curl; / H.div; / to show how the reasoning has to be adjusted. Since V D div, we get from (5.4.54) that curl ˆ H.div; / and by definition .grad/ div curl ˆ D 0 for every ˆ 2 H.curl; /. As a consequence of this lemma, for the solution U D
E H
of (5.4.42), we have
V / @0 H ı ˝ H0 D curl E 2 H1 .@0; C / ˝ curlŒH.curl; and so B R>0 ˝ H0 WD H R>0 ˝ H0 V D @1 0 curl E 2 H1 .@0; C / ˝ curlŒH.curl; /:
Section 5.4 Electrodynamics
391
V / H1 .@0; C /˝H.div; V /, this implies Since H1 .@0; C /˝curlŒH.curl; V div.B R>0 ˝ H0 / D 0 in H1 .@0; C / ˝ H0 .jgrad j C i / and so V V H0 divB D R>0 ˝ div in H1 .@0; C / ˝ H1 .jgrad j C i /. Similarly, we also have @0 "E ı" E0 C J D curl H 2 H1 .@0; C / ˝ curlŒH.curl; / and so D R>0 ˝ "E0 C @1 0 J WD "E R>0 ˝ "E0 D @1 0 curl H 2 H1 .@0; C / ˝ curlŒH.curl; /: Since H1 .@0; C /˝curlŒH.curl; / H1 .@0; C /˝H.div; /, this implies div.D R>0 ˝ "E0 C @1 0 J/ D 0 V C i/ and so in H1 .@0; C / ˝ H0 .jgradj q WD divD D R>0 ˝ div"E0 @1 0 divJ DW R>0 ˝ q0 @1 0 divJ; V C i/, which is the rigorous form of the conservation in H1 .@0; C / ˝ H1 .jgradj of charge equation.
5.4.2 Some Decomposition Results In order to better understand the forthcoming construction of an extended Maxwell system, we need a number of decomposition results. Lemma 5.4.4. The following inclusions hold V / H.grad; R3 /; H.grad; V / H.curl; R3 /; H.curl; V / H.div; R3 /: H.div; Proof. The results are clear since CV 1 ./ WD ¹' 2 C1 .R3 / j supp ' º is clearly contained in H.grad; R3 /, H.curl; R3 / and H.div; R3 /, respectively.
392
Chapter 5 Some Evolution Equations of Mathematical Physics
Lemma 5.4.5. We have the following orthogonal decompositions of L2 ./: V / ˚ H 0 .grad; /; L2 ./ D divŒH.div;
(5.4.55)
V / ˚ H 0 .curl; /; L2 ./ D curlŒH.curl;
(5.4.56)
V / ˚ H 0 .div; /; L2 ./ D gradŒH.grad;
(5.4.57)
where H 0 .A; / denotes the null space N.A/ of the differential operator A acting on functions or vectors defined on . Note that H 0 .grad; / D LinC ¹K jK finitely measurable, connected component of º; (5.4.58) and we have the orthogonal decompositions V \ N.div// H 0 .div; / D curlŒH.curl; / ˚ .N.curl/
(5.4.59)
V \ N.curl//: H 0 .curl; / D gradŒH.grad; / ˚ .N.div/
(5.4.60)
and Moreover, we have V / D ¹0º; H 0 .grad;
(5.4.61)
V / D curlŒH.curl; V / ˚ .H0 .curl; / \ H0 .div; V //; H 0 .div;
(5.4.62)
V / D gradŒH.grad; V / ˚ .H 0 .curl; V / \ H 0 .div; //: H 0 .curl;
(5.4.63)
V curl, V grad V yields decompositions (5.4.55), Proof. Applying Theorem 1.2.18 to div, (5.4.56), (5.4.57). In the same way we obtain for the null space H 0 .div; / of div and H 0 .curl; / of curl the decompositions (5.4.59), (5.4.60). For the latter we consider curl as an operator mapping into H 0 .div; / and grad as an operator mapping into H 0 .curl; /. This can be done according to Lemma 5.4.3. Then, applying Theorem 1.2.18 again, yields indeed (5.4.59), (5.4.60). It remains to consider (5.4.58), (5.4.61). If grad ' D 0 then div grad ' D 0 and by local regularity we see that ' 2 C1 ./. But from classical vector analysis we know that ' must be constant on every connected component of . Since a non-vanishing constant is only square-integrable over an open set if this set has finite measure, the desired result for H 0 .grad; / follows. V /. From Lemma 5.4.4, we see that we have grad ' D 0 Finally consider H 0 .grad; 3 in R . This, however, implies that ' D 0 by the previous argument. 0 V 0 Remark 5.4.6. The space Hcurl V ./ WD H .curl; / \ H .div; / is known as the space of harmonic Dirichlet fields. The space of so-called harmonic Neumann fields 0 0 V is given by Hdiv V ./ WD H .curl; / \ H .div; /.
Section 5.4 Electrodynamics
393
Corollary 5.4.7. For L2 ./ we have the following orthogonal decompositions V / ˚ H ./ ˚ gradŒH.grad; /; L2 ./ D curlŒH.curl; V div V L2 ./ D curlŒH.curl; / ˚ Hcurl V ./ ˚ gradŒH.grad; /: Proof. The result follows simply by substituting the null space decompositions given in (5.4.60) and (5.4.59) into the decompositions (5.4.56), (5.4.57), respectively. The above decomposition results may also be recovered by the following observation. Lemma 5.4.8. The operators V W H.grad; / ˚ H.curl; V / L2 ./ ˚ L2 ./ ! L2 ./; .grad; curl/ ' ˚ ˆ 7! grad ' C curl ˆ and V curl/ W H.grad; V / ˚ H.curl; / L2 ./ ˚ L2 ./ ! L2 ./; .grad; ' ˚ ˆ 7! grad ' C curl ˆ are closed, densely defined, linear operators. Their respective adjoints are given by ! V div V / L2 ./ ! L2 ./ ˚ L2 ./; W H.curl; / \ H.div; curl div ˆ ˆ 7! curl ˆ and div V curl
! V / \ H.div; / L2 ./ ! L2 ./ ˚ L2 ./; W H.curl; ˆ 7!
div ˆ : curl ˆ
Proof. That the first two operators are densely defined linear operators is clear. These operators are clearly closable and their adjoints are as stated. The result follows if we can show that the first two operators are indeed closed. Since the arguments are similar we consider only the second operator. So let grad 'k C curl ˆk ! w, 'k ! '1 V / D and ˆk ! ˆ1 as k ! 1. Since by Lemma 5.4.3 we have gradŒH.grad; V V gradŒH.grad; / ? curlŒH.curl; /, we conclude that grad 'k ! w0 ;
curl ˆk ! w1
as k ! 1;
394
Chapter 5 Some Evolution Equations of Mathematical Physics
V / and w1 the orwhere w0 is the orthogonal projection of w onto gradŒH.grad; V and curl thogonal projection of w onto curlŒH.curl; /. By the closedness of grad we get V /; '1 2 H.grad;
V ' 1 D w0 ; grad
ˆ1 2 H.curl; /;
curl ˆ1 D w1
and so V / ˚ H.curl; /; '1 ˚ ˆ1 2 H.grad;
V '1 C curl ˆ1 D w0 C w1 D w; grad
V curl/. which confirms the closedness of .grad;
5.4.3 The Extended Maxwell System The connection of Maxwell’s equations to a vectorial wave equation, which we established in the constant-coefficient-case by including a suitable term of the form grad div, can also be interpreted as a modification of the original first order system. Indeed, guided by the observation that 0
12 0 div 0 0 B grad 0 curl 0 C B C @ 0 curl 0 grad A 0 0 div 0 0 1 div grad 0 0 0 B C 0 curl curl C grad div 0 0 C DB @ A 0 0 curl curl C grad div 0 0 0 0 div grad is the Laplacian , we are led to propose the following system: 1 0 1 10 10 1 ˛ 0 0 0 0 0 0 0 0 0 B E C B 0 "1 0 C B 0 0 curl 0 C B E C 0 C B CB CB C @0 B @ H A @ 0 0 1 0 A @ 0 curl 0 0 A @ H A 0 0 0 0 0 0 0 0 0 ˇ 1 0 10 1 1 0 1 0 1 0 div 0 0 ˛ 0 0 0 0 @0 div " F B grad 0 0 C CB C B B F 0 C C: CB 0 " 0 0 CB E C B B @ 0 0 0 grad A @ 0 0 0 A @ H A D @ A G 1 @0 div G 0 0 div 0 0 0 0 ˇ 0 0
Section 5.4 Electrodynamics
395
Considering the case of the electric boundary condition leads us to study the evolution equation 0 1 0 10 1 0 0 0 0
BEC B C C B 0 0 curl 0 E 1 B C CB C @0 B A @ H A E @ 0 curl @ V HA 0 0 0 0 0 0 0 1 0 1 0 1 0 div 0 0 @1
0 div " F B V C C B F 0 C B EC B grad 0 0 C; CDB B CE B A A @ @ G H @ 0 0 0 grad A 1 V G V ˇ @0 div 0 0 div 0 where20
0
˛ B0 EDB @0 0 Letting
0 0 0
0 M;E WD E 1 0
and N;E we obtain
0 " 0 0
0 B V B grad WD B @ 0 0
1 0 0C C: 0A ˇ
0 0 B0 0 B @ 0 curl V 0 0
0 curl 0 0
1 0 0C C 0A 0
1 div 0 0 C 0 0 0 C CE 0 0 grad A V 0 div 0
1
BEC C @0 B @ H A .M;E C N;E / 0
where
1
BEC B C D K N;E @1 K; 0 @H A 0
1 0 B F C C B @ G A: 0 0
K WD E 1
20 As we shall see later, from a mathematician’s perspective E can be completely general as long as it is bounded, symmetric and positive definite in a suitable Hilbert space setting.
396
Chapter 5 Some Evolution Equations of Mathematical Physics
The operators M;E and N;E are easily seen to be commuting skew-selfadjoint operators in the weighted Hilbert space E 1=2 ŒL2 ./ (with 8 scalar L2 ./-type component spaces). Indeed, we find from our earlier observations that M;E N;E D 0
on D.N;E /;
N;E M;E D 0
on D.M;E /:
Consequently, D..M;E C 1/.N;E C 1// D D..N;E C 1/.M;E C 1// D D.M;E / \ D.N;E / and so .M;E C 1/1 .N;E C 1/1 ŒL2 ./ D .N;E C 1/1 .M;E C 1/1 ŒL2 ./ D D.M;E / \ D.N;E / and .M;E C 1/1 .N;E C 1/1 D .N;E C 1/1 .M;E C 1/1 : Therefore, for this problem it is natural to construct a corresponding Sobolev lattice based on three operators: .H˛ .@0; C ; M;E C 1; N;E C 1 //˛2Z3 : The natural question arises: Does the corresponding unique solution U 2 H1 .@0; C ; M;E C 1; N;E C 1 / of @0 U .M;E C N;E / U D K N;E @1 0 K
(5.4.64)
solve Maxwell’s equations? Applying N;E to (5.4.64) yields @0 N;E U N;E N;E U D N;E K N;E N;E @1 0 K; which rearranges to give 1 @0 .N;E U N;E @1 0 K/ N;E .N;E U N;E @0 K/ D 0:
Since this equation is – according to our theory – also uniquely solvable in H1 .@0; C ; M;E C 1; N;E C 1 / we must have N;E U N;E @1 0 K D0
(5.4.65)
Section 5.4 Electrodynamics
397
and so from (5.4.64) we obtain @0 U M;E U D K:
(5.4.66)
If additionally we utilize the knowledge that the first and last component of K are zero, then we read off that the first and last component of @0 U must also vanish. Consequently U must also have vanishing first and last components. In other words, the artificially introduced unknowns and are actually zero and the original Maxwell equations are completely recovered together with the conservation law contained in (5.4.65): 0 1 0 BEC 1 C N;E B @ H A N;E @0 K D 0: 0 These calculation work for any continuous, selfadjoint, strictly positive definite E. In the block diagonal case, the last equation reduces to div " .@0 E C F / D 0;
V .@0 H G/ D 0: div
We shall not formulate a theorem for the corresponding solution theory, since this is clearly a simple application of our abstract theory. The extended Maxwell system now opens another view of the vectorial wave equation. Indeed, observing that .@0 C .M;E C N;E // .@0 .M;E C N;E // D @20 .M;E C N;E /2 D @20 .M;E /2 .N;E /2 ; we are lead to the equivalent extended vectorial wave equation @20 U .M;E /2 U .N;E /2 U D .@0 C .M;E C N;E // .K N;E @1 0 K/ D @0 K C M;E K .N;E /2 @1 0 K in H1 .@0; C ; M;E C 1; N;E C 1 /. The equivalence to Maxwell’s equations is a consequence of the reversible evolutionarity of .@0 .M;E C N;E //, which also makes the operator .@0 C .M;E C N;E // invertible in the Sobolev lattice space setting of H1 .@0; C ; M;E C 1; N;E C 1 /. In particular, due to this equivalence, we have U D .@0 M;E /1 K D .@0 .M;E C N;E //1 .K N;E @1 0 K/ D .@0 .M;E C N;E //1 K .@0 N;E /1 N;E @1 0 K D .@20 .M;E /2 .N;E /2 /1 .@0 K C M;E K .N;E /2 @1 0 K/ D .@20 .M;E /2 .N;E /2 /1 @0 K C .@20 .M;E /2 /1 M;E K .@20 .N;E /2 /1 .N;E /2 @1 0 K in H1 .@0; C ; M;E C 1; N;E C 1 /.
398
Chapter 5 Some Evolution Equations of Mathematical Physics
Let us turn our attention to the operators M;E and N;E , which are skew-selfadjoint and commuting. Let … 1 M;E . / D 1; . 1i M;E /, … 1 N;E . / D 1; . 1i N;E / i
i
and … 1 .M;E CN;E / . / D 1; . 1i .M;E C N;E // be the respective spectral fami
ilies associated with the selfadjoint operators 1i M;E ; 1i N;E and 1i .M;E C N;E /. Taking into account that, due to the orthogonality of eigenspaces associated with nonzero spectral values, ¹0º .M;E / \ .N;E /; (5.4.67) we shall show that … 1 .M;E CN;E / . / D 1; i
1 .M;E C N;E / i
can be expressed in terms of the other spectral families. Let P;m and P;n denote the orthogonal projectors onto M;E ŒE 1=2 ŒL2 ./ and N;E ŒE 1=2 ŒL2 ./, respectively, and let P;0 denotes the orthogonal projector onto N.M;E / \ N.N;E / D N.M;E C N;E /. Then, similar to our earlier orthogonal decomposition results, by applying the projection theorem twice we have the orthogonal decomposition E 1=2 ŒL2 ./ D M;E ŒE 1=2 ŒL2 ./ ˚ .N.M;E / \ N.N;E // ˚ N;E ŒE 1=2 ŒL2 ./ and P;m C P;0 C P;n D 1 on E 1=2 ŒL2 ./. Hence … 1 .M;E CN;E / . / D … 1 M;E . /P;m C Œ0;1Œ . / P;0 C … 1 N;E . /P;n : i
i
i
In the case that E is block-diagonal, i.e. 0
˛ B0 EDB @0 0
0 " 0 0
0 0 0
1 0 0C C; 0A ˇ
a significant simplification is possible. For this we first recall that 0
M;E
0 0 0 1 curl B0 0 " DB @ 0 1 curl V 0 0 0 0
1 0 0C C 0A 0
Section 5.4 Electrodynamics
399
and 0
N;E
1 0 div " 0 0 B V C 0 0 B grad ˛ 0 C DB C 0 0 grad ˇ A @ 0 V 0 0 div 0 C D N;E C N;E
with 0
0
C N;E
1 0 div " 0 0 B grad V ˛ 0 0 0C C; WD B @ 0 0 0 0A 0 0 00
N;E
00 B0 0 B WD B 0 0 @
0 0 0
V 0 0 div
1 0 C 0 C : grad ˇ C A 0
C have the same ; N;E We note that, except for the location of the zeros, M;E ; N;E abstract structure. Note that 0 A0 M;E D 0 ˚ ˚ 0; A0 0 0 A1 C ˚ 033 ˚ 0 N;E WD A1 0
and N;E
WD 0 ˚ 033 ˚
0 A2 A2 0
V A1 WD grad V ˛ and A2 WD div V so that where A0 WD 1 curl, A0 A1 D 0;
A2 A0 D 0:
Since the derivation of the above spectral results for the extended Maxwell system is completely abstract, similar formulae hold for the spectral families of A0 A01 , 1 0 A C separately. We shall 2 ; N ; N and so for the spectral families of M ;E ;E ;E A2 0 leave the details as an exercise. ˙ Note that @0 N;E corresponds to the acoustic system in the case of homogeneous Dirichlet and Neumann boundary condition, respectively. This observation means that we can consider the extended Maxwell system to be an extended acoustic system by suitably modifying the right-hand side.
400
Chapter 5 Some Evolution Equations of Mathematical Physics
5.4.4 The Vectorial Wave Equation for the Electromagnetic Field
Under the usual assumption that the material matrix E is block diagonal E D 0" 0 on L2 ./3 ˚ L2 ./3 , we may – as we did in the case of acoustics – eliminate one unknown, the magnetic field H or the electric field E, from the Maxwell system " @0 E curl H D J; @0 H C curl E D G; to obtain formal abstract wave equations @20 E C "1 curl 1 curl E D "1 @0 J C "1 curl 1 G
(5.4.68)
@20 H C 1 curl "1 curl H D 1 @0 G C 1 curl "1 J;
(5.4.69)
or respectively, as corresponding Schur complements. The same can be achieved by applying 1 " 0 curl 0 @0 C curl 0 0 1 to the equation 1 1 " " E E 0 curl J 0 0 D : @0 H H curl 0 G 0 1 0 1 This yields 1 1 " " E E 0 curl 0 curl 0 0 2 @0 H H curl 0 curl 0 0 1 0 1 1 " curl 1 curl E E 0 2 D @0 C H H 0 1 curl "1 curl 1 1 1 " J " 0 curl " J 0 0 0 D @0 C G curl 0 G 0 1 0 1 0 1 1 " @0 J C "1 curl 1 G D ; 1 @0 G C 1 curl "1 J i.e. both formal abstract wave equations in one step. Choosing the appropriate boundary conditions and interpreting @0 as the normal operator @0 D @0; C , we obtain abstract wave equations in the proper sense. This is due to the fact that, with ME being skew-selfadjoint (by choice of one of the type of boundary conditions discussed in the above), we have .@0 C ME /.@0 ME / D @20 ME2 and if E is block-diagonal so is @20 ME2 . From this, it is not hard to see that the diagonal blocks of ME2 must
Section 5.4 Electrodynamics
401
be selfadjoint operators in the corresponding component space. Selfadjointness of the associated differential operators can also be seen by directly inspecting "1 curl W H.curl; / 1=2 ŒL2 ./ ! "1=2 ŒL2 ./; ˆ 7! "1 curl ˆ; V The formal abstract wave equations (5.4.68) and as well as its adjoint 1 curl. (5.4.69) now assume the rigorous form V E D "1 @0 J C "1 curl 1 G @20 E C "1 curl 1 curl and V 1 curl H D 1 @0 G C 1 curl" V 1 J @20 H C 1 curl" for the case of the electric boundary condition. The solution theory can be developed V j C i / and H1 .@0; C ; j"1 curl j C i /, respectively. in H1 .@0; C ; j 1 curl Here it suffices to solve one of the two equations – say the one for the electric field E – the other follows by defining 1 H WD @1 curl E C 1 G/: 0 .
(5.4.70)
1 1 The integration @1 0 in (5.4.70) should be understood as @0 D .@0; C / . That H is well-defined is again a consequence of Lemma 2.1.16, leading to the following analogue of Lemma 5.3.1.
V and "1 curl: Lemma 5.4.9. We have the following continuous extensions of 1 curl V W H1 .@0; C ; j 1 curl V j C i/ ! H1 .@0; C ; j"1 curl j C i/; 1 curl V j C i/: "1 curl W H1 .@0; C ; j"1 curl j C i/ ! H1 .@0; C ; j 1 curl Proof. The proof is analogous to that of Lemma 5.3.1 and we shall therefore omit the details. As a result we obtain the equivalence of the abstract wave equations for the electric and magnetic field with Maxwell’s equations. V j C i/ be the solution of Theorem 5.4.10. Let E 2 H1 .@0; C ; j 1 curl V E D @0 F0 C "1 curl F1 @20 E C "1 curl 1 curl V j C i/, F1 2 H1 .@0; C ; j"1 curl j C i/. with F0 2 H1 .@0; C ; j 1 curl Then we have 1 V H WD @1 curl E C F1 / 2 H1 .@0; C ; j"1 curl j C i/ 0 .
402
Chapter 5 Some Evolution Equations of Mathematical Physics
and
@0
E H
"1 curl V 1 curl 0 0
!
E H
D
F0 F1
V j C i/ ˚ H1 .@0; C ; j"1 curl j C i/. in H1 .@0; C ; j 1 curl Proof. The magnetic field 1 V curl E C F1 / 2 H1 .@0; C ; j"1 curl j C i/ H WD @1 0 .
is well-defined and we read off that @20 E "1 curl @0 H D @0 F0 V j C i/. Applying @1 (‘integration’) yields that in H1 .@0; C ; j 1 curl 0 @0 E "1 curl H D F0 V j C i/. Ampère’s law in H1 .@0; C ; j 1 curl V E D F1 @0 H C 1 curl follows by applying @0 to the definition of H . The large null spaces occurring here are sometimes undesirable and so the abstract wave equations are frequently modified to include divergence terms. Formally, from Maxwell’s equations we obtain @0 div " E C div J D 0;
@0 div H D div G
or div " E D @1 0 div J;
div H D @1 0 div G:
V we have from (5.4.68) Then, applying grad, V E grad V ˛ div " E @20 E C "1 curl 1 curl V D @0 F0 C "1 curl F1 C @1 0 grad ˛ div " F0 :
(5.4.71)
This equation is sometimes referred to as vectorial wave equation. Noticing that V V ˛ div " "1 curl 1 curl; grad are selfadjoint in "1=2 ŒL2 ./ is not enough to make the formal argument rigorous and establish (5.4.71) as an abstract wave equation. We shall need the following additional result.
Section 5.4 Electrodynamics
403
Theorem 5.4.11. The operators V "1 curl 1 curl;
V ˛ div " grad
are commuting, selfadjoint operators in "1=2 ŒL2 ./. Here ˛,"; are selfadjoint, positive definite, bounded linear mappings in L2 ./. Proof. Selfadjointness follows from the form C C of the operators where C is closed, densely defined (compare Lemma 5.2.2 and Corollary 5.2.3). From Corollary 5.4.7 V / in "1=2 ŒL2 ./ and also we have "1 ŒcurlŒH.curl; / ? gradŒH.grad; div " ."1 curl/ D 0;
"1 curl grad ˛ D 0:
Noting "1 ŒcurlŒH.curl; / "1 ŒH.div; / and V / grad ˛ ŒH.gradV ˛ ; / H.curl; V and grad V ˛ div " have orthogonal ranges and we deduce that "1 curl 1 curl V grad V ˛ div " D 0 "1 curl 1 curl V ˛ div "/ and on D.grad V D grad V ˛ div curl 1 curl V D0 V ˛ div " "1 curl 1 curl grad V on D."1 curl 1 curl/. This implies that V "1 curl 1 curl;
V ˛ div " grad
V commutes with the continuous do indeed commute. We show that "1 curl 1 curl 1 V ˛ div " C 1/ , i.e. linear operator .grad V ˆ V ˛ div " C 1/1 "1 curl 1 curl U WD .grad V grad V ˛ div " C 1/1 ˆ D "1 curl 1 curl. V for ˆ 2 D."1 curl 1 curl/. From V ˛ div " C 1/ U D "1 curl 1 curl V ˆ .grad V ˛ div " C 1/ U 2 D.grad V ˛ div "/ and we conclude that .grad V ˛ div " .grad V ˛ div " C 1/ U D 0 grad V ˛ div " C 1/ grad V ˛ div " U: D .grad
404
Chapter 5 Some Evolution Equations of Mathematical Physics
V ˛ div " U D 0 and so Consequently, we must have grad V ˛ div " C 1/ U D "1 curl 1 curl V ˆ: U D .grad V ˛ div " C 1/1 ˆ, i.e. On the other hand, letting W WD .grad V ˛ div " C 1/ W D ˆ .grad V that we see by applying "1 curl 1 curl V grad V ˛ div " C 1/ W D "1 curl 1 curl V W "1 curl 1 curl. V ˆ; D "1 curl 1 curl which finally proves the desired commutativity. As a consequence we also have that V j; j 1 curl
jdiv " j
are commuting selfadjoint operators with V j jdiv " j D 0 j 1 curl
on H.div; /;
V jD0 jdiv " j j 1 curl
V /: on H.curl;
From this observation, it seems appropriate to consider the abstract wave equation in V j C i; jdiv " j C i /. The natural question arises: Can we H1 .@0; C ; j 1 curl recover a solution of the original Maxwell equation? This requires a generalization of Lemma 5.4.9. V "1 curl, Lemma 5.4.12. We have the following continuous extensions of 1 curl, V , div ", grad V ˛ and grad ˇ: div V W H1 .@0; C ; j 1 curl V j C i; jdiv"j C i/ 1 curl V j C i/; ! H1 .@0; C ; j"1 curl j C i; jdiv V j C i/ "1 curl W H1 .@0; C ; j"1 curl j C i; jdiv V j C i; jdiv"j C i/; ! H1 .@0; C ; j 1 curl V W H1 .@0; C ; j"1 curl j C i; jdiv V j C i/ ! H1 .@0; C ; jgrad ˇ j C i/; div V j C i; jdiv "j C i / ! H1 .@0; C ; jgrad V ˛ j C i /; div " W H1 .@0; C ; j 1 curl V jCi/; grad ˇ W H1 .@0; C; jgrad ˇ jCi/ ! H1 .@0; C; j"1 curl jCi; jdiv
Section 5.4 Electrodynamics
405
and V ˛ j C i/ ! H1 .@0; C ; j 1 curl V j C i; jdiv "j C i/: V ˛ W H1 .@0; C ; jgrad grad Here ˛; ˇ; "; are selfadjoint, positive definite, bounded linear mappings in L2 ./. Moreover, we have the equalities V grad V ˛D0 1 curl V ˛jC i/ in the sense of H1 .@0; C; j"1 curljC i; jdiv V jC i/, on H1 .@0; C; jgrad "1 curl grad ˇ D 0 V on H1 .@0; C; jgrad ˇjC i/ in the sense of H1 .@0; C; j 1 curljC i; jdiv "jC i/, V 1 curl V D0 div V on H1 .@0; C; j 1 curljCi; jdiv "jCi/ in the sense of H1 .@0; C; jgrad ˇjCi/ and div " "1 curl D 0 V jC i/ in the sense of H1 .@0; C; jgrad V ˛jC i/. on H1 .@0; C; j"1 curljC i; jdiv Proof. The annihilation relations follow from those for the operators in the base Hilbert space L2 ./. Those in turn allow to refine the mapping properties stated in Lemma 5.4.9 to those claimed in this lemma. We shall not dwell on the details of the reasoning. This leads to the following equivalence statement. V j C i; jdiv " j C i / be the solution Theorem 5.4.13. Let E 2 H1 .@0; C ; j 1 curl of V E grad V ˛ div " E @20 E C "1 curl 1 curl V D @0 F0 C "1 curl F1 @1 0 grad ˛ div " F0
(5.4.72)
with V j C i; jdiv " j C i / F0 2 H1 .@0; C ; j 1 curl and V j C i/: F1 2 H1 .@0; C ; j"1 curl j C i; jdiv Then we have 1 V V j C i/ H WD @1 curl E C F1 / 2 H1 .@0; C ; j"1 curl j C i; jdiv 0 .
406
Chapter 5 Some Evolution Equations of Mathematical Physics
and
@0
E H
"1 curl V 1 curl 0 0
!
E H
D
F0 F1
(5.4.73)
in V j C i; jdiv"j C i/ ˚ H1 .@0; C ; j"1 curl j C i; jdiv V j C i/: H1 .@0; C ; j 1 curl Moreover, we have @0 div " E D div " F0 ;
V H D div V F1 @0 div
V ˛ j C i / and H1 .@0; C ; jgrad ˇ j C i /, respectively. in H1 .@0; C ; jgrad Conversely, any such solution of (5.4.73) yields a solution of (5.4.71). Such solutions exist uniquely and depend continuously on the data. V ˛ div " to the equation yields Proof. Applying grad V ˛ div "E .grad V div "/2 E @20 grad 2 V ˛ div " F0 @1 V D @0 grad 0 .grad ˛ div "/ F0
or V V ˛ div "/.grad V ˛ div " E @1 0 D .@20 grad 0 grad ˛ div " F0 / V grad V ˛ div "/.grad V ˛ div " E @1 V D .@20 C "1 curl 1 curl 0 grad ˛ div "F0 /: From the latter we obtain V V ˛ div " E D @1 grad 0 grad ˛ div " F0 ; which in turn implies V E @20 E C "1 curl 1 curl V j C i; jdiv " j C i /: D @0 F0 C "1 curl F1 2 H1 .@0; C ; j 1 curl We define 1 V V j C i/ H WD @1 curl E C F1 / 2 H1 .@0; C ; j"1 curl j C i; jdiv 0 .
and read off @20 E "1 curl @0 H D @0 F0 V j C i; jdiv " j C i /. Applying @1 yields in H1 .@0; C ; j 1 curl 0 @0 E "1 curl H D F0
Section 5.5 Elastodynamics
407
V j C i; jdiv " j C i /. Ampère’s law in H1 .@0; C ; j 1 curl V E D F1 @0 H C 1 curl follows by applying @0 to the definition of H . We read off that V .@0 H F1 / D 0 div or V H D div V F1 @0 div in H1 .@0; C ; jgrad ˇ j C i/. Faraday’s law implies that div " .@0 E C F0 / D 0 V ˛ j C i /. Consequently, we get holds in H1 .@0; C ; jgrad @0 div " E C div " F0 D 0: The converse calculation, done formally above, can now be performed rigorously. Existence and uniqueness as well as continuous dependence follow now from the corresponding properties for the vectorial wave equation.
5.5
Elastodynamics
We recall from Subsection 3.1.12.5 the formal framework of elastodynamics in order to develop a solution theory for elastic waves in a medium confined to an open set R3 . Writing T for 1 0 T11 B T22 C C B B T33 C B C B T23 C; B C @ T31 A T12 C for the matrix
0
11 22 33 23 13 12 C11 C11 C11 C11 C11 C11
B 22 BC B 11 B 33 BC B 11 B 23 BC B 11 B 13 BC @ 11 12 C11
1
C 22 C 33 C 23 C 22 C 22 C C22 22 22 13 12 C C 33 33 33 33 33 C C22 C33 C12 C23 C23 C C 22 33 23 13 12 C C23 C23 C23 C23 C23 C C 13 33 13 13 13 C C22 C13 C23 C13 C12 A 22 33 23 13 12 C12 C12 C12 C12 C12
408
Chapter 5 Some Evolution Equations of Mathematical Physics
with bounded, linear operators Cijmn W L2 ./ ! L2 ./ and formally Grad for
0
@1 B 0 B B 0 B B 0 B @ @3 @2
0 @2 0 @3 0 @1
1 0 0 C C @3 C C; @2 C C @1 A 0
we get Hooke’s law as T D C Grad u:
(5.5.74)
The Newton’s law governing elastic displacement is now M @20 u Div T D F; where
(5.5.75)
1 T11 1 B T22 C 0 C @ 1 0 0 0 @3 @ 2 B B T33 C B C A @ Div T WD 0 @2 0 @3 @1 0 B T23 C B C 0 0 @3 @ 2 @ 1 0 @ T31 A 0
T12 and the .3 3/-matrix M is positive definite and symmetric. Substituting Hooke’s law (5.5.74) into (5.5.75) we obtain formally the system .@20 M 1 Div C Grad/ u D M 1 F
(5.5.76)
for the displacement u. Alternatively, we may let v WD @0 u and we get from (5.5.75) M @0 v Div T D F and, by differentiating Hooke’s law with respect to time, we get @0 T D C Grad v: Thus we are led to the formal block matrix system 1 1 v M 0 0 Div v M F @0 D T 0 C Grad 0 T 0
(5.5.77)
which – given a suitable choice of boundary conditions – will involve an operator with the abstract structure 0 A @0 C A 0
Section 5.5 Elastodynamics where A W D.A/ The linear mapping
L2 iD0
409
L2 ./ !
CW
8 M
L5
L2 ./ is closed and densely defined.
iD0
L2 ./ !
iD0
8 M
L2 ./
iD0
0 A is clearly skewis assumed to be strictly positive definite. The operator C A 0 L selfadjoint in the weighted L2 -type space C 1=2 Œ 8iD0 L2 ./. We shall again write simply L2 ./ for the L2 -type Hilbert spaces appearing leaving it to the context to determine how many components are involved. If C has the particular diagonal block structure we started out with, i.e. 1 M 0 CD ; 0 C
then C
0 A A 0
D
0 B B 0
with21 B D M 1 A :
B D C A;
There are various options of choosing boundary conditions to rigorously establish this particular structure.
5.5.1 The Rigid Boundary Condition Analogous to the case of acoustic waves with a homogeneous Dirichlet boundary condition, we start out with Grad j V
C1 ./
W CV 1 ./ L2 ./ ! L2 ./; v 7! Grad v
and define V WD Grad j Grad V
C1 ./
and Div WD .Grad j V
C1 ./
V / D .Grad/:
The associated evolution problem is then @0 21
v T
C
0 Div V Grad 0
!
v T
D
M 1 F 0
Note that the upper index refers here to different inner products.
410
Chapter 5 Some Evolution Equations of Mathematical Physics
which is solvable in the usual way in H1 .@0; C ; A C 1/ where ! 0 Div A WD C V Grad 0 is a skew-selfadjoint operator. The implied boundary condition is simply the vanishing of the displacement u D @1 0 v and so of v on the boundary. This can be seen from the following lemma. V be defined as above in L2 ./, where is an open set in R3 . Lemma 5.5.1. Let Grad Then we have V / WD H1 .jGrad V j C i/ D H.grad; V / ˚ H.grad; V / ˚ H.grad; V / H.Grad; as a Banach space isomorphism22 . Proof. The result follows from inequalities 3 3 X 1 X jgrad ui j20 j Grad uj20 4 jgrad ui j20 2 iD1
(5.5.78)
iD1
for all u 2 CV 1 ./. The lower estimate is a variant of an estimate usually referred to as Korn’s inequality in the Dirichlet case. To obtain (5.5.78) we calculate using integration by parts, j Grad uj20
Z
3 X
D
2
i;j D1;i¤j
3 Z X i;j D1
D
j@i uj .x/ C @j ui .x/j dx C
3 Z X iD1
j@i ui .x/j2 dx
1 j .@i uj .x/ C @j ui .x//j2 dx 2
3 Z 1 X .@i uj .x/ C @j ui .x// .@i uj .x/ C @j ui .x// dx 4 i;j D1
D
Z 3 3 1 X 1 X jgrad ui j20 C Re .@j ui .x/ @i uj .x// dx 2 2 iD1
D
1 2
3 X iD1
i;j D1
jgrad ui j20 C
1 jdiv uj20 ; 2
22 In other words, the norms of the two Hilbert spaces are equivalent but the inner products are different
Section 5.5 Elastodynamics
411
from which the lower estimates follow. Since by the Cauchy–Schwarz inequality we have 3 X
Z Re
i;j D1
.@j ui .x/ @i uj .x// dx
3 Z X
.@j ui .x/ @j ui .x// dx
i;j D1
for all u 2 CV 1 ./ the upper estimate also follows. Indeed, we have Z
3 X
j Grad uj20 D
i;j D1;i¤j
3 Z X i;j D1
2
4
j@i uj .x/ C @j ui .x/j2 dx C
3 Z X iD1
j@i ui .x/j2 dx
j@i uj .x/ C @j ui .x/j2 dx
3 Z X i;j D1 3 Z X i;j D1
2
j@i uj .x/j dx C 2
3 X i;j D1
Z Re
.@j ui .x/ @i uj .x// dx
j@i uj .x/j2 dx:
In the case where C is block-diagonal, the Sobolev lattice can be refined to V j C i/ ˚ H1 .jM 1 Div j C i// H1 .@0; C / ˝ .H1 .jC Grad V j C i/ ˚ H1 .@0; C ; jM 1 Div j C i/: D H1 .@0; C ; jC Grad This follows in complete analogy to the acoustic case. In particular, we have the following result, which is a special case of our general considerations and therefore needs no proof. V and its adjoint Lemma 5.5.2. We have the following continuous extensions of C Grad 1 M Div: V W H1 .@0; C ; jC Grad V j C i / ! H1 .@0; C ; jM 1 Div j C i / C Grad and V j C i/: M 1 Div W H1 .@0; C ; jM 1 Div j C i/ ! H1 .@0; C ; jC Grad A reduction in the number of unknown solution components can be obtained by considering the original second order equation in the form V u D M 1 F @20 u M 1 Div C Grad
412
Chapter 5 Some Evolution Equations of Mathematical Physics
V j C i /. The stress tensor T is then which can be considered in H1 .@0; C ; jC Grad found simply by letting V u: T WD C Grad Since the ideas are analogous to the case of linear acoustics, we may leave the details to the reader.
5.5.2 Free Boundary Condition Alternatively and completely analogous to the acoustic problem with Neumann boundary condition, we can start with Div j V
C1 ./
W CV 1 ./ L2 ./ ! L2 ./; T 7! Div T
and define V WD Div j Div V
C1 ./
and Grad WD .Div j V
C1 ./
V : / D .Div/
The associated evolution problem is then @0
v T
C
V 0 Div Grad 0
!
v T
D
M 1 F 0
e C 1/ where which is solvable in the usual way in H1 .@0; C ; A ! V 0 Div e WD C A Grad 0 is clearly a skew-selfadjoint operator. The implied boundary condition generalizes the classical boundary condition NT D
3 X iD1
ni Tj i
j D1;2;3
D0
on @, where n D .n1 ; n2 ; n3 / denotes the exterior normal to @ and 1 0 n 1 0 0 0 n3 n2 N WD @ 0 n2 0 n3 0 n1 A: 0 0 n3 n2 n1 0
Section 5.5 Elastodynamics
413
In the case where C is block-diagonal, the Sobolev lattice can be refined to V j C i// H1 .@0; C / ˝ .H1 .jC Grad j C i/ ˚ H1 .jM 1 Div V j C i/: D H1 .@0; C ; jC Grad j C i/ ˚ H1 .@0; C ; jM 1 Div Of course, we then also have the following result, which validates the solution theory in this refined Sobolev lattice structure. Lemma 5.5.3. We have the following continuous extensions of C Grad and its adjoint V M 1 Div: V j C i/ C Grad W H1 .@0; C ; jC Grad j C i / ! H1 .@0; C ; jM 1 Div and V W H1 .@0; C ; jM 1 Div V j C i / ! H1 .@0; C ; jC Grad j C i /: M 1 Div As in the Dirichlet case a reduction in the number of unknown solution components can be obtained by considering the original second order equation in the form V C Grad u D M 1 F @20 u M 1 Div which can be interpreted in H1 .@0; C ; jC Grad j C i /. The stress tensor T is V j C i / by now recovered as an element in H1 .@0; C ; jM 1 Div T D C Grad u:
5.5.3 Shear and Pressure Waves In the case of an isotropic, homogeneous medium, there is a remarkable connection to Maxwell’s equations. We recall that in this case C is determined by the two Lamé constants ; satisfying 2 > 0; C > 0 3 and by suitable scaling we may assume that the constant M D 1. Consequently, in this case we have V C Grad D curl curl .2 C / grad div : M 1 Div
(5.5.79)
This equation is identical to the vectorial wave equation that was discussed in connection with the equations of electrodynamics with 2 C as dielectric constant and 1 as the permeability parameter. By a suitable choice of boundary condition,
.2 C/
414
Chapter 5 Some Evolution Equations of Mathematical Physics
for example the usual electric boundary conditions of electrodynamics, we actually obtain the following extended Maxwell system: 1 0 0 0 0 0 1 C B0 0 B 2 C curl 0 C @0 U B CU V @ 0 .2 C / curl 0 0A 0 0 0 0 0 1 0 .2 C / div 0 0 B grad 0 0 0 C B V C B U 0 0 grad C @ 0 A 1 V 0 0 .2 C/ div 0 1 0 0 B @1 F C 0 C DB @ 0 A; 0 where 1 .2 C / @1 0 div u C B u C: U WD B @ .2 C / @1 curl V uA 0 0 0
Conversely, if 1 p BuC C U DB @sA r 0
is a solution of the extended Maxwell system 1 0 0 0 0 0 1 C B0 0 B 2 C curl 0 C @0 U B CU V @ 0 .2 C / curl 0 0A 0 0 0 0 0 1 0 0 .2 C / div 0 0 0 B grad C B @1 F 0 0 0 C B V 0 B U DB @ 0 0 0 grad C A @ 0 1 V 0 div 0 0 0 .2 C/
1 C C; A
Section 5.6 Plate Dynamics
415
then we find that p D .2 C / @1 0 div u;
V s D .2 C / @1 0 curl u;
r D 0:
Moreover, we find V p D div F @20 p .2 C / div grad and V curl s grad div V s D @20 s C curl V curl s D curlF: V @20 s C curl In other words, p satisfies a scalar wave equation and is therefore, using an acoustic analogy, referred to as a pressure wave. Similarly, s satisfies a vectorial wave equation, is divergence-free and is therefore referred to as a shear wave. Given this intimate relation to the extended Maxwell system, there is no need to repeat the general solution theory for this special case here. We note, however, that the displacement u can be recovered from p and s via the relation uD
5.6
1 2 V @1 curl s C @1 0 grad p C @0 F: 2 C 0
(5.5.80)
Plate Dynamics
The dynamics of an elastic plate can, in linear approximation, be formally written as @20 ' C m1 div Div C Grad grad ' D m1 f; where ' W ! R denotes the deformation of the plate from its flat resting position in R2 , m denotes mass density and C is the elastic tensor. Since we have previously discussed Div and Grad only in R3 , we need to specify how we interpret these operators in R2 . In analogy to the 3-dimensional case we let Div T WD
2 X iD1
@i Tj i
j D1;2
;
Grad u WD
1 .@i uj C @j ui / 2 i;j D1;2
for a smooth symmetric tensor T and a smooth vector field u. In a matrix notation analogous to the one introduced above, we identify T with 1 0 T11 @ T22 A; T12 C with the matrix
0
11 22 12 C11 C11 C11
1
B 22 22 22 C @ C11 C22 C12 A; 12 22 12 C11 C12 C12
416
Chapter 5 Some Evolution Equations of Mathematical Physics
where Cijmn W L2 ./ ! L2 ./ are bounded, linear operators, and formally Grad with 0 1 @1 0 @ 0 @2 A @2 @1 and Div with @1 0 @ 2 : 0 @ 2 @1 The operator matrix C and the operator m W L2 ./ ! L2 ./ are again assumed to be positive definite and symmetric in L2 ./. Formally we have 1 1 0 0 @21 @1 0 @1 Grad grad D @ 0 @2 A D @ @22 A: @2 @2 @1 2 @1 @ 2 In the homogeneous, isotropic case we have specifically 1 0 2 C 0 2 C 0 A; m D 1; C D @ 0 0 where, due to the required positive definiteness, we must have > 0;
C > 0:
In this case23 , we obtain formally 10 0 1 @21 0 2 2 2 C 2 C 0 A @ @22 A @ 1 ; @ 2 ; 2 @1 @ 2 @ 0 0 2 @1 @ 2 0 1 2 2 2 2 .2 C / @12 C @22 D @1 ; @2 ; 2 @1 @2 @ .2 C / @2 C @1 A 2 @1 @ 2 23
Note that
1 1 10 0 .2 C / @1 @2 2 C 0 @1 0 @ 0 @ 1 2 @ @ @1 .2 C /@2 A 2 C 0 A @ 0 @2 A D 0 @2 @1 @2 @1 @2 @1 0 0 .2 C / @21 C @22 . C /@1 @2 D . C /@1 @2 .2 C /@22 C @21 @1 @22 @1 @2 D .2 C / .@1 ; @2 / C @2 @1 @2 @21 @1 @2 D .2 C / .@1 ; @2 / C .@2 ; @1 /; @2 @1
@1 0 @2 0 @2 @1
0
is the 2-dimensional analogue to the representation (5.5.79).
Section 5.6 Plate Dynamics
417
D .2 C / @41 C 2 .2 C / @21 @22 C .2 C / @42 D .2 C /.@21 C @22 /2 : The appearance of .@21 C @22 /2 D . div grad/2 is slightly misleading here with regard to reasonable boundary conditions, since the most natural choice of boundary conditions for the plate equation does not make this operator the square of an operator associated with div grad. The inspection of physically more suitable boundary conditions can be done along the same lines of reasoning as in previous cases. We begin by defining Gradgrad j V
C1 ./
W CV 1 ./ L2 ./ ! L2 ./; ' 7! Grad grad ';
and divDiv j V
C1 ./
W CV 1 ./ L2 ./ ! L2 ./;
7! div Div :
These are formally adjoint operators, densely defined and therefore closable. We ‚ V ƒ ‚ V ƒ shall write Gradgrad and divDiv for their respective closure and let ‚ V ƒ divDiv WD .Gradgrad/ and
‚ V ƒ Gradgrad WD .divDiv/ :
Note that here we wrote purposely without spaces to avoid confusion with the compositions div Div, Grad grad, which are in general different operators. For the latter operators to coincide with the former, further constraints on boundary regularity would be required, which we wish to avoid. ‚ V ƒ Focussing first on the deformation ', it is clear that being in the domain of Gradgrad is the most restrictive boundary condition that can be imposed. Analogous to the acoustic case we shall call this the (generalized) Dirichlet boundary condition. Since ‚ V ƒ Gradgrad D .divDiv/ Gradgrad; ‚ V ƒ we have24 ' 2 H.Gradgrad/ if and only if ' 2 H.Gradgrad/ and hGradgrad 'jT i0 h'jdivDiv T i D 0 24 Recall that H.A/ denotes the Hilbert space given as the domain D.A/ of A equipped with the graph inner product.
418
Chapter 5 Some Evolution Equations of Mathematical Physics
for all T 2 H.divDiv/. This means, in classical terms, i.e. if everything is sufficiently regular and well-behaved, Z hGradgrad 'jT i0 h'jdivDiv T i D ..N grad '/ T ' n Div T / ds D 0 @
for all T which are smooth up to the boundary. Here 1 0 n1 0 N WD @ 0 n2 A; n2 n1 where n D .n1 ; n2 / denotes the exterior normal to @, and so 1 0 T11 0 n T C n T n n 1 2 @ 1 11 2 12 T22 A D : N T D 0 n2 n1 n2 T22 C n1 T12 T12 With T12 D 0 and T11 D T22 arbitrary and observing that
n Div T D
2 X
nj @i Tj i
i;j D1
can be chosen arbitrarily and independently, we see that ' D n grad ' D 0 on @: Thus, we can say that this classical boundary condition is generalized by ‚ V ƒ ' 2 H.Gradgrad/: It is also clear why this boundary condition is referred to as the clamped plate boundary condition. Imposing this boundary condition, the plate equation attains the following abstract wave equation form: ‚ V ƒ @20 ' C m1 divDiv C Gradgrad ' D f: Writing the plate equation as a first-order-in-time system we obtain 1 0 1 0 divDiv @ ' f @0 ' m 0 @ 0 A @0 D ; ‚ V ƒ T 0 C T 0 Gradgrad 0 where
‚ V ƒ T WD C Gradgrad ':
Section 5.6 Plate Dynamics
419
Of course, the material properties which are encoded in the block operator matrix 1 m 0 0 C can be chosen more generally as long as boundedness and positive definiteness are preserved. We shall, however, not pursue this possibility. If we retain the particular block diagonal matrix form, then the system can be written more simply as 0 1 @0 m1 divDiv @ ' f 0 @ A D : ‚ V ƒ T 0 C Gradgrad @ 0
The operator matrix is of the abstract form A @0 A @0
(5.6.81)
if considered in the suitably weighted Hilbert space m1=2 ŒL2 ./ ˚ C 1=2 ŒL2 ./. If we want to maintain the abstract form of (5.6.81) but have no boundary constraints on ', i.e. we only have ' 2 H.Gradgrad/, then there is a boundary condition induced on T . Since this boundary condition is induced by letting ' be unconstrained on the boundary, it is known as the free plate boundary condition (also as the Neumann boundary condition). Of course, a boundary condition on T is still a boundary condition on the higher derivatives of '. Let us determine the specific classical form of this boundary condition for T , which is less than obvious. Since ‚ V ƒ divDiv D .Gradgrad/ divDiv; ‚ V ƒ we have T 2 H.divDiv/ if and only if T 2 H.divDiv/ and hGradgrad jT i0 h j divDiv T i0 D 0 2 H.Gradgrad/. This means in classical terms Z ..N grad / T hGradgrad jT i0 h j divDiv T i0 D
for all
@
n Div T / ds D 0
for all which are smooth up to the boundary. Now, and n grad can be chosen arbitrarily and so, since by applying integration by parts25 on the submanifold @ 25 Here we have used that grad D @ , where s is the arc length parameter, and no boundary terms @s occur in integration by parts on the boundary curve. The term denotes the curvature of the boundary, which is given by @ D n: @s
420
Chapter 5 Some Evolution Equations of Mathematical Physics
2 the unit tangent vector on (letting n WD nn12 be the exterior unit normal, WD n n1 @ and noting that nn C D 1), we get Z ..N grad / T n Div T / ds 0D @ Z D ..N n n grad / T C .N grad / T n Div T / ds @ Z D ..N n n grad / T C . grad / N T n Div T / ds @ Z D ..N n n grad / T . grad/ N T n Div T / ds @ Z D ..n grad / n N T n N T N . grad/T @
. grad N /T
n Div T / ds;
and we obtain n N T D 0;
N . grad/T . grad N /T n Div T D 0
on @
as the classical free plate or Neumann boundary conditions generalized by T 2 ‚ V ƒ H.divDiv/. The system 0 1 ‚ V ƒ 1 @ @0 m divDiv A @0 ' D f T 0 C Gradgrad @0 corresponds to the abstract wave equation ‚ V ƒ @20 ' C m1 divDiv C Gradgrad ' D f: As in the acoustic case, other boundary conditions lying “between” the condition of ‚ V ƒ ' 2 H.Gradgrad/ and ' 2 H.Gradgrad/ (together with the (by taking adjoints) induced boundary condition for T ) could be also considered. However, since there is little new as far as our abstract solution theory is concerned, we leave this to the interested reader.
5.7
Thermo-Elasticity
As an example of a coupling between different physical effects we shall now consider the equations of linear thermo-elasticity. M @20 u Div.C Grad u / D F; w @0 div grad C 0 Grad @0 u D f;
(5.7.82)
Section 5.7 Thermo-Elasticity
421
where, as well as the well-known coefficients of elasticity and of heat transport w WD 2 % 0 c, we have a numerical parameter 0 2 R>0 and a coupling term W L ./ ! L 6 2 kD1 L ./ denoting a bounded linear operator. As usual we shall not continue to indicate the number of L2 ./-components in the range, which will be clear from the context. The operator matrix26 results from a modified Hooke’s law known as the Duhamel–Neumann law, namely T D C Grad u : Differentiating this and letting v WD @0 u; we obtain @0 .T C / D C Grad v or @0 .C 1 T C C 1 / D Grad v: Substituting into (5.7.82) we obtain M @0 v Div T D F; @0 .01 w C C 1 .T C // 01 div grad D 01 f : This modified system can again be conveniently represented as a first order system: 0 1 0 1 M 0 0 v 1 @ 0 C 1 A @ C @0 T A 1 1 1 0 C 0 w C C 0 10 1 0 1 0 Div 0 v F A @ T A D @ 0 A: 0 @ Grad 0 1 0 0 0 div grad 01 f If 01 w, M and C 1 are bounded, symmetric and positive definite (with uniform constant c0 2 R>0 ) then so is 1 0 M 0 0 A: C 1 C 1 Q WD @ 0 1 1 1 0 C 0 w C C 26 represents the symmetric stress-temperature tensor D . / ij i;j D1;2;3 , which following Voigt’s convention we reformulate as 1 0 11 B 22 C C B B 33 C C B B 23 C C B @ 31 A 12
where ij W L2 ./ ! L2 ./, i; j D 1; 2; 3.
422
Chapter 5 Some Evolution Equations of Mathematical Physics
Indeed, with W D .a; b; c/ we have hW jQW i0 D hajM ai0 C hb C cjC 1 .b C c/i0 C hcj01 w ci0 c0 .hajai0 C 0 hb C cjb C ci0 C hcj ci0 / for any choice of 0 2 0; 1 Œ. We shallpspecify 0 in more detail later. Since 2jRehbjci0 j D 2jReh p1 bj 2ci0 j 12 jbj20 C 2jcj20 , we may estimate 2 further: hW jQW i0 c0 .hajai0 C 0 hbjbi0 C 2 0 Rehbjci0 C 0 hcjci0 C hcj ci0 / 1 c0 hajai0 C 0 hbjbi0 0 hcjci0 C hcj ci0 2 1 2 c0 hajai0 C 0 hbjbi0 C .1 0 kk /hcj ci0 2 and so choosing 0 < 0 < min¹1; kk2 º, we see that Q is positive definite. Symmetry of Q is clear. In a more standard form, this system is 1 1 0 0 10 1 @0 u 0 Div 0 F @0 u A @ T A D Q1 @ 0 A: 0 @0 @ T A Q1 @ Grad 0 1 0 0 0 div grad 01 f 0
Introducing suitable boundary conditions, we note that 1 0 Div 0 C B V 0 A @ Grad 0 V 0 0 01 div grad 0
is a normal operator in an 5-component L2 ./-space, since it is a linear combination of the commuting selfadjoint operators 0 0 Div 1@ V Grad 0 i 0 0
1 0 0 A; 0
0
0 @0 0
0 0 0
1 0 A: 0 1 V 0 div grad
Section 5.7 Thermo-Elasticity
423
In the weighted Hilbert space H given by27 H WD Q1=2 ŒL2 ./; the thermo-elasticity operator TC defined as 0 1 0 Div 0 B V C 0 0 TC WD Q1 @ Grad A V 0 0 01 div grad is not a normal operator, unless there is no coupling, i.e. D 0, in which case Q is block diagonal. It is clear, however, that we still have a closed operator and that 1 0 0 Div 0 C B V 0 0 .TC / D Q1 @ Grad (5.7.83) A DW T : 1 V 0 0 div grad 0
Moreover, with 1 v P WD @ T A; 0
due to the skew-selfadjointness of the upper left 2 2-block, we find RehP jT˙ P iH D RehP jQT
˙ P i0
V i0 D Reh j01 div grad 1 V i0 h jdiv grad 0 1 V V i0 ; D hgrad j grad 0 D
for all P in the domain of T˙ . Theorem 5.7.1. The operators .@0 T˙ / are both forward evolutionary. 27
Note that the norm in H corresponds to the energy norm as can be seen from hP jP iH D hP jQP i0 D hvjM vi0 C hT C jC 1 .T C /i0 C 01 hj i0 ;
where we have abbreviated
0
1 v P WD @ T A:
424
Chapter 5 Some Evolution Equations of Mathematical Physics
Proof. Let us first consider the location of the resolvent set of T˙ . We have found that 1 V V i0 RehP jT˙ P iH D hgrad j grad (5.7.84) 0 where 0
1 v P WD @ T A; and so for z 2 C RehP j.z T˙ /P iH D Re z hP jP iH C
1 V V i0 ; hgrad j grad 0
Re z hP jP iH : This shows that .z T˙ / is injective if Re z > 0 and in this case k.z T˙ /1 k
1 : Re z
(5.7.85)
Moreover, since H D .z T /1 Œ¹0º ˚ .z T˙ /ŒH and Re z D Re z , we also have that .z T˙ /ŒH D H : Thus, we see that iR C R>0 %.T˙ / and, together with (5.7.85), we see indeed that @0 T˙ are both forward evolutionary. This result establishes in particular that we have a forward causal solution theory of the thermo-elastic equations .@0 TC /U D F: For every given F 2 H1 .@0; C ; TC 1/ where 2 R>0 there is a unique solution U 2 H1 .@0; C ; TC 1/. The initial boundary value problem could now be discussed along analogous lines as for earlier problems. We shall not elaborate on the details.
Section 5.7 Thermo-Elasticity
425
Remark 5.7.2 (Spectral properties of TC in free space R3 ). We have not previously discussed the free space case with constant coefficients. We only consider the homogeneous isotropic case, i.e. M D D 133 , 0 D 1, w D 1 and 2 133 C e e 0 C D 0 133 with the Lamé constants28 ; satisfying > 0; Moreover, we assume
3 C 2 > 0:
0 1 1 B1C B C B1C e C B D B CD : 0 B0C @0A 0
The characteristic polynomial of this system coincides with the one given in [29] although our system has a different form. The spectrum lies on the non-negative part of imaginary axis, the negative part of the real line and additionally on some algebraic curve depending on the coupling parameter (compare the illustrations in [29]).
28 We have implemented here a notational adjustment using in place of the earlier notation to avoid confusion with the standard notation for the thermal conductivity tensor.
Chapter 6
A “Royal Road” to Initial Boundary Value Problems of Mathematical Physics
The examples discussed in the previous chapter illustrate a general (often hidden) structure of coupled problems of classical mathematical physics. They involve a “conservation law” of the form @0 W C AU D F; where A is, in simple cases, just a normal operator with .A/ Œi R C ŒR0 . The model is completed by a “material relation” which, in simple cases, is of the form W D M U C @1 0 BU; where M; B are continuous linear operators and M is positive definite (or at least non-negative). The combination of various physical effects is exhibited in a block diagonal form of the evolutionary system 0 1 0 10 1 0 1 W0 A00 0 U0 F0 B C B CB C B C @0 @ ::: A C @ ::: : : : ::: A @ ::: A D @ ::: A; 0 Ann Wn Un Fn and the coupling is due to the material law 0 1 0 10 1 0 10 1 W0 M00 M0n U0 B00 B0n U0 B :: C B :: : : : CB : C : CB : C 1 B : : @ : AD@ : : :: A @ :: A C @0 @ :: : : :: A @ :: A: Mn0 Mnn Bn0 Bnn Wn Un Un So, coupling between various physical effects can occur via non-vanishing offdiagonal elements in 0 1 M00 M0n B C M D @ ::: : : : ::: A Mn0 Mnn or in the lower order term
1 B00 B0n C B B D @ ::: : : : ::: A: Bn0 Bnn 0
Section 6.1 A Class of Evolutionary Material Laws
427
For example, in the case of the thermo-elastic system, we have B D 0 and so the coupling is purely due to non-vanishing off-diagonal entries in the bounded, symmetric, positive definite M . More surprisingly, the standard initial boundary value problems of mathematical physics, used to illustrate theoretical considerations, in standard cases require a skewself-adjoint operator A. The apparent universality of this fact has to our knowledge not been noted before. It offers a transparent approach to the classical initial boundary value problems of mathematical physics and makes the discussion of more complex material behaviour naturally accessible1 .
6.1
A Class of Evolutionary Material Laws
With the extended Fourier–Laplace transform we will be able to describe the class of material laws as functions of @0 . Let .M.z//z2BC .r;r/ be a holomorphic family of 1 uniformly bounded linear operators. Then, for > 2r , we define 1 1 M.@0 / WD L M L : i m0 C Note that for r 2 R>0 BC .r; r/ ! Œi R C ŒR>1=.2r/ ; z 7! z 1 is a bijection. Theorem 6.1.1. Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded 1 linear operators on H . Then for > 2r , M.@1 0 / W H;0;0 ! H;0;0 , H;0;0 WD a H;0 ˝ H , is forward causal in the sense that M.@1 / restricted to CV 1 .R/ ˝ H is 0 a
forward causal (see Definition 4:2:4). Here, CV 1 .R/ ˝ H is considered as a space of functionals by interpreting the tensor product ˝ w, 2 CV 1 .R/, w 2 H , as a
˝ w W CV 1 .R/ ˝ H ! C;
Z
˝ v 7! . ˝ w/.
˝ v/ WD
R
.t / .t / dt hwjvi:
Proof. For the time-translation h given as the bounded linear extension of a
h W CV 1 .R/ ˝ H H;0;0 ! H;0;0 ;
˝ w 7! . C h/ ˝ w 1 This comparatively simple and accessible approach to evolutionary problems prompts us to speak of a “royal road”, alluding to a legend saying that Menaechmus (380–320 b.c.) replied to Alexander the Great’s request for an easier way of learning mathematics that “there is no Royal Road to mathematics”.
428
Chapter 6 A Royal Road to Initial Boundary Value Problems
for all 2 CV 1 .R/, t; h 2 R, we have h D exp.h@0 / and so we obtain the commutator relation 1 h M.@1 0 / D M.@0 / h :
Thus, to test for causality, we may assume without loss of generality that 2 CV 1 .R/ has support supp. / with inf supp. / D 0 and due to translation invariance we only need to show that inf supp0 .M.@1 0 / ˝ w/ 0 for all w 2 H . Obviously, ˝ w 2 L2 .R>0 ; H / and so, by the converse of the Paley–Wiener theorem, L ˝w 2 HL˝H (HL denotes the Hardy–Lebesgue space introduced in (3.1.47)). From the assumed holomorphy and uniform boundedness of .M.z//z2BC .r;r/ we obtain M
1 L ˝ w 2 HL ˝ H: i m0 C
Using the Paley–Wiener theorem we get 1 L0 M L ˝ w 2 L2 .R>0 ; H / i m0 C so that
M.@1 0 /
1 ˝w D L ˝ w i m0 C 1 D exp.m0 /L0 M L ˝ w 2 H;0;0 i m0 C L M
and so inf supp0 .M.@1 0 / ˝ w/ 0. Remark 6.1.2. We note here that the seemingly rather restrictive assumption on the material law operator M.@1 0 / is largely unavoidable if causality is to be maintained. Due to translation invariance, we see that M.@1 0 / also commutes with @0 and therefore has a canonical continuous extension to H1 .@ C / ˝ H , which we shall denote by the same symbol.
Section 6.1 A Class of Evolutionary Material Laws
429
There is a hidden issue here obscured by our convenient choice of notation. This is that it may not be obvious that the definition of M.@1 0 / is indeed independent of the choice of the parameter 2 R>0 . This independence is, however, a consequence of the analyticity of M . We first note the following lemma. Lemma 6.1.3. Let H be a Hilbert space and let F W ŒR0 C iŒR ! H be analytic. If, for all 2 R>0 , Z R!1 F .z/ dz ! 0; z2ŒŒ0;1˙iR
^ Z
then
2R0
Z z2iŒR
F .z/ dz D
F .z/ dz: z2CiŒR
Proof. By Cauchy’s integral formula Z BR
F .z/ dz D 0;
where BR is the piece-wise smooth path defined by t 7! i.2tR C R/; t 7! t iR; t 7! C i.2tR R/; t 7! .1 t / C iR with parameter t ranging in Œ0; 1. Thus, noting Z Z
Z z2iŒŒR;R
z2CiŒŒR;R
we obtain
F .z/ dz D 2Ri Z F .z/ dz D 2Ri
Z
1
F .i.2tR C R//dt;
0 1 0
F . C i.2tR R//dt;
Z z2iŒŒR;R
Z
D
Z D
F .z/ dz C
F .z/ dz z2CiŒŒR;R
1
F .t iR/dt C
0
Z
1 0
Z
F .t iR/dt C
1
F ..1 t / C iR/dt
0 1 0
F .t C iR/dt:
430
Chapter 6 A Royal Road to Initial Boundary Value Problems
The desired result follows if Z 0
1
R!1
F .t ˙ iR/dt ! 0:
The latter, however, holds by assumption, since Z 1 Z
F .t ˙ iR/dt D 0
F .z/ dz:
z2ŒŒ0;1˙iR
Theorem 6.1.4. Let H be a Hilbert space and let G W ŒR0 C iŒR ! H be analytic and bounded. Then ^ ^ G.@0; C /f D G.@0; C /f in L2;loc .R; H /: ; 2R>0 f 2H;0;0 \H;0;0
Proof. By a density argument, it suffices to consider f D ' ˝ h, where ' 2 CV 1 .R/ and h 2 H . We have .G.@0; C /f /.t / Z 1 Dp exp.it .s i// b ' .s i/ G.is C /h ds 2 R Z 1 D p exp.t .z C // b ' .i.z C // G.z C /h dz: i 2 z2iŒR For F .z/ WD exp.t .z C // b ' .i.z C // G.z C /h Z 1 D exp.t .z C // p exp.r.z C //'.r/ dr G.z C /h 2 R we get, with C0 2 R>0 as the bound of G, ˇ ˇ p ˇZ ˇ 2 ˇˇ F .z/ dz ˇˇ z2ŒŒ0;1˙iR ˇZ 1 ˇ exp.t .w ˙ iR C // D ˇˇ 0
ˇ ˇ exp.r.w ˙ iR C //'.r/dr G.w ˙ iR C /h dw ˇˇ R ˇ ˇZ 1Z ˇ ˇ ˇ exp..r t /.w ˙ iR C //'.r/ dr G.w ˙ iR C /h dw ˇˇ Dˇ R 0 ˇZ ˇ ˇ ˇ ˇ exp. i.r t /R .r t //'.r/ dr ˇˇ C0 jhjH sup¹exp.jr t j /j r 2 supp 'ºˇ Z
R
Section 6.2 Evolutionary Dynamics and Material Laws
431
ˇ ˇZ ˇ C.t / ˇˇ 0 exp. iuR/.exp..u C t //.' .u C t / '.u C t /// dr ˇˇ ˇ R R
C.t / R!1 j supp.'/j1=2 j@0; 'j;0;0 ! 0 R
where C WD C0 exp.t /jhjH sup¹exp.jr t j /j r 2 supp 'º. Assuming without loss of generality < , we get from Lemma 6.1.3 with D .G.@0; C /f /.t / Z 1 exp.t .z C // b ' .i.z C // G.z C /h dz D p i 2 z2. /CiŒR Z 1 exp.t .z C // b ' .i.z C // G.z C /h dz D p i 2 z2iŒR D .G.@0; C /f /.t / for every t 2 R. Since G.@0; C /f and G.@0; C /f are both in L2;loc .R; H /, they must be equal. The result shows that M.@1 0 / can be defined as the continuous extensions of M.@1 0 /j V
a
C1 .R/˝H
;
which according to the above theorem, is independent of 2 R>0 as long as is sufficiently large.
6.2
Evolutionary Dynamics and Material Laws
6.2.1 The Shape of Evolutionary Problems with Material Laws Let A be a skew-self-adjoint operator in a Hilbert space H , and consider its canonical extension to H1 .@ C / ˝ Hk .A C 1/, k D 1; 0; 1. Then our aim is to find U; V 2 H1 .@ C / ˝ H such that, for a given f 2 H1 .@ C / ˝ H , we have2 @0 V C AU D f
(6.2.1)
V D M.@1 0 /U;
(6.2.2)
and 2
The equation C @0 V C AU D f
with C bounded and strictly positive definite (time-independent, i.e. commuting with @0 ) may appear more general but a change of inner product of H to h jC iH reduces it to the canonical form with A replaced by C 1 A.
432
Chapter 6 A Royal Road to Initial Boundary Value Problems
1 where, for r < 2 , .M.z//z2BC .r;r/ is a uniformly bounded, holomorphic family3 of linear operators in H . As shown in Theorem 6.1.1, the operator M.@1 0 / is forward causal and we shall speak of a causal material operator. To obtain a solution theory, we require an additional constraint on such causal material operators4 , namely there exists a constant c 2 R>0 such that:
1 Re.z 1 M.z// WD .z 1 M.z/ C .z /1 M.z/ / c 2
(6.2.3)
for all z 2 BC .r; r/. The way the problem (6.2.1), (6.2.2) is presented, it is clear that equation (6.2.1) can be considered to hold only in the sense of equality in H1 .@ C / ˝ H1 .A C 1/. To establish the operator @0 M.@1 0 /CA in H1 .@ C/˝H , it suffices to consider the well- and densely defined mapping .@0 M.@1 0 / C A/jH1 .@ C/˝H1 .AC1/ given by H1 .@ C / ˝ H1 .A C 1/ H0 .@ C / ˝ H ! H0 .@ C / ˝ H; U 7! .@0 M.@1 0 / C A/U: We also note that since a bounded A can easily be incorporated in a corresponding modified material law, we only need to be concerned with two cases I A unbounded, skew-self-adjoint or I A D 0. 3 For applications M should usually also be real in the following sense: If H is a Hilbert space generated by a dense set of complex-valued functions defined on a subset of some RN , N 2 N, then
f 7! f is defined by f .x/ D .f .x// . This operation is component-wise carried over to direct sums of such spaces. In such a situation one is lead to require M.z/ D M.z / for what could be called real material laws. Since M is analytic, for every % 2 0; 2 rŒ there is a power series expansion for all % 2 0; 2 rŒ M.z/ D
1 X
Ak .%/.z %/k
kD0
and the condition imposed yields that all coefficient operators Ak .%/ must be real in the sense that they commute with conjugation Ak .%/U D Ak .%/U for all U 2 H . 4 If this condition fails we have a particularly degenerate case, which occurs for example in the incompressible Stokes system. Therefore, we shall refer to such a case as the (I)-degenerate case. The degeneracy shows up in the solution theory of the incompressible Stokes system in so far as there is well-posedness only under particular additional constraints on the underlying domain.
Section 6.2 Evolutionary Dynamics and Material Laws
433
Since the formal adjoint .@0 M.@1 0 / A/jH1 .@ C/˝H1 .AC1/ is clearly also densely 1 defined, we have that .@0 M.@0 /CA/jH1 .@ C/˝H1 .AC1/ is a closable linear operator and we may interpret .@0 M.@1 0 / C A/ as the closure
.@0 M.@1 0 / C A/jH1 .@ C/˝H1 .AC1/ : Since @0 commutes with @0 M.@1 0 / C A, we have again a canonical extension of @0 M.@1 / C A to H .@ C / ˝ H for which we shall re-use the same notation. 1 0 Since M.@1 / and A do not necessarily commute, we need to approach the solution 0 theory more carefully. Due to the unitarity of the mappings Hk .@ C / ˝ Hs .A C 1/ ! H0 .@ C / ˝ H;
7! @k0 .A C 1/s for k; s 2 Z, within the framework of the construction of the Sobolev lattice .Hk .@ C / ˝ Hs .A C 1//k;s2Z ; we only need to be concerned with the situation for k D s D 0. Lemma 6.2.1. Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded linear operators on a Hilbert space H satisfying (6.2.3) and let A be skew-self-adjoint 1 in H . Then, for > 2r , the operator .@0 M.@1 0 / C A/jH1 .@ C/˝H1 .AC1/ is boundedly invertible. Proof. For all ˆ 2 H1 .@ C / ˝ H1 .A C 1/ jˆjH;0 ˝H j.@0 M.@1 0 / C A/ˆjH;0 ˝H jRehˆj.@0 M.@1 0 / C A/ˆiH;0 ˝H j D jRehˆj@0 M.@1 0 /ˆiH;0 ˝H j D jhˆjRe.@0 M.@1 0 //ˆiH;0 ˝H j ˇ ˇ ˇ ˇ ˇ ˇ 1 ˇ L ˆ D ˇˇ L ˆˇˇRe .i m0 C /M ˇ i m0 C H0;0 ˝H c jhL ˆjL ˆiH0;0 ˝H j D c hˆjˆiH;0 ˝H ; where c 2 R>0 is the constant in (6.2.3). Thus, the inverse operator 1 ..@0 M.@1 0 / C A/jH;1 ˝H1;A /
exists and is indeed bounded. With k kX!Y denoting the operator norm for continuous linear operators from Banach space X to Banach space Y , we have that 1 1 k..@0 M.@1 : 0 / C A/jH1 .@ C/˝H1 .AC1/ / kH;0 ˝H !H;0 ˝H c
(6.2.4)
434
Chapter 6 A Royal Road to Initial Boundary Value Problems
Corollary 6.2.2. Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded linear operators on a Hilbert space H satisfying (6.2.3) and let A be skew-self-adjoint 1 in H . Then, for > 2r , the formal adjoint .@0 M..@0 /1 / A/jH1 .@ C/˝H1 .AC1/ is boundedly invertible. Proof. The proof is completely analogous to that of the last lemma. Indeed, even the constant estimating the inverse is the same: k..@0 M..@0 /1 / A/jH1 .@ C/˝H1 .AC1/ /1 kH0 .@ C/˝H !H0 .@ C/˝H c 1 : Lemma 6.2.3. Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded linear operators on a Hilbert space H satisfying (6.2.3) and let A be skew-self-adjoint 1 in H . Then, for > 2r , the operator .@0 M.@1 0 / C A/jH1 .@ C/˝H1 .AC1/ has dense range. Proof. To find u 2 H1 .@ C / ˝ H1 .A C 1/ solving .@0 M.@1 0 / C A/u D f for sufficiently many f 2 H0 .@ C / ˝ H , we first observe that 1 ŒN 1;N C1 @ ŒH0 .@ C / ˝ H D H;0;N ˝ H i where
H;0;N WD ŒN 1;N C1
1 @ ŒH0 .@ C / i
yields a family of Hilbert sub-spaces exhausting H0 .@ C / ˝ H in the sense that H0 .@ C / ˝ H D
[
H;0;N ˝ H :
N 2N
The operator ŒN 1;N C1 . 1i @ / is to be understood here in the sense of the function calculus of the self-adjoint operator 1i @ , that is 1 ŒN 1;N C1 @ D L ŒN 1;N C1 .m/ L : i Moreover, @0 jH;0;N ˝H W H;0;N ˝ H ! H;0;N ˝ H; ' ˝ w 7! @0 ' ˝ w
Section 6.2 Evolutionary Dynamics and Material Laws is a bounded operator for every N 2 N, indeed k@0 jH;0;N ˝H kH;0;N ˝H !H;0;N ˝H
435
q .N C 1/2 C 2 :
Considering now the task of finding a solution u 2 H;0;N ˝ H of 1 .@0 M.@1 0 / C A/u D .@0 jH;0;N ˝H M.@0 / C A/u D f 2 H;0;N ˝ H;
we first note that H;0;N ˝ H1 .A C 1/ H;0;N ˝ H ! H;0;N ˝ H; u 7! .@0 jH;0;N ˝H M.@1 0 / C A/u defines a closed linear operator with domain H;0;N ˝ H1 .A C 1/. Noting that the leading part @0 jH;0;N ˝H M.@1 0 / is a bounded operator, we find that its adjoint is given as an operator in H;0;N ˝ H with domain H;0;N ˝ H1 .A C 1/ by 1 u 7! .@0 jH;0;N ˝H M.@1 0 / C A/ u D .@0 jH;0;N ˝H M..@0 / / A/u
which according to Corollary 6.2.2 is boundedly invertible and has therefore only a trivial null space N.@0 jH;0;N ˝H M..@0 /1 / A/ D ¹0º. By the bounded invertibility of @0 jH;0;N ˝H M.@1 0 / C A, the closedness of the range .@0 jH;0;N ˝H M.@1 0 / C A/ŒH;0;N ˝ H1 .A C 1/ in H;0;N ˝ H follows also. Thus, according to the projection theorem we have the orthogonal decomposition H;0;N ˝ H D N.@0 jH;0;N ˝H M..@0 /1 / A/ ˚ .@0 jH;0;N ˝H M.@1 0 / C A/ŒH;0;N ˝ H1 .A C 1/ D .@0 jH;0;N ˝H M.@1 0 / C A/ŒH;0;N ˝ H1 .A C 1/; i.e. .@0 jH;0;N ˝H M.@1 0 / C A/ is surjective. Thus we have shown that, for every f 2 H;0;N ˝ H , there is a unique u 2 H;0;N ˝ H1 .A C 1/ such that 1 .@0 M.@1 0 / C A/u D .@0 jH;0;N ˝H M.@0 / C A/u D f:
Since this is true for all N 2 N, we see that the range [ [ H;0;N ˝ H D .@0 jH;0;N ˝H M.@1 0 / C A/ŒH;0;N ˝ H1 .A C 1/ N 2N
N 2N
.@0 M.@1 0 / C A/ŒH1 .@ C / ˝ H1 .A C 1/ is dense in H0 .@ C / ˝ H .
436
Chapter 6 A Royal Road to Initial Boundary Value Problems
By taking closures, we see that 1 @0 M.@1 0 / C A D .@0 M.@0 / C A/jH1 .@ C/˝H1 .AC1/
is an injective mapping onto H0 .@ C / ˝ H with a bounded inverse. For a full evolutionary solution theory we need to have a causal solution operator. This is our next result. Lemma 6.2.4 (Causal Solution Operator). Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded linear operators on a Hilbert space H satisfying (6.2.3) 1 and let A be skew-self-adjoint in H . Then, for > 2r , 1 W H0 .@ C / ˝ H ! H0 .@ C / ˝ H .@0 M.@1 0 / C A/ a
1 restricted to C V 1 .R/ ˝ H is is forward causal in the sense that .@0 M.@1 0 / C A/ forward causal according to our definition of causal mappings. Here, as in Theoa rem 6:1:1, CV 1 .R/ ˝ H is considered as a space of functionals by interpreting ˝ w for every 2 CV 1 .R/; w 2 H , as the functional a
˝ w W CV 1 .R/ ˝ H ! C; ˝ v 7! . ˝ w/.
Z ˝ v/ WD
R
.t / .t / dt hwjvi:
Proof. Due to time-translation invariance we only have to show that R>0 .m0 /ŒH0 .@ C / ˝ H D exp.m0 /ŒL2 .R>0 ; H / is mapped into itself. After Fourier–Laplace transform this amounts to showing that the operator ..i m0 C/M. i m01C /CA/1 maps the Hardy–Lebesgue space HL˝H into itself. By the isometry properties of the Fourier–Laplace transform we have from (6.2.4) by taking closures that 1 1 .i m0 C /M c 1 ; C A 2 i m0 C L .R/˝H !L2 .R/˝H and so it suffices to show that z 7! .i z C /M
1 iz C
1 CA
is analytic in ŒR i ŒR>0 or equivalently z 7! .z 1 M.z/ C A/1
(6.2.5)
Section 6.2 Evolutionary Dynamics and Material Laws
437
1 analytic in BC .r; r/ for r D 2 . This follows, however, since the composition of analytic mappings is analytic and the resolvent
7! . C A/1 of A is analytic in BC .r; r/ C n i ŒR. Indeed, the complex derivative of (6.2.5) is z 7!
1 1 .z M.z/ C A/1 .z M 0 .z/ M.z//.z 1 M.z/ C A/1 z2
in BC .r; r/. Thus, we have finally established the solution theory for our problem class and we summarize our findings as the following theorem. Theorem 6.2.5 (Solution Theory). Let .M.z//z2BC .r;r/ be a holomorphic family of uniformly bounded linear operators on a Hilbert space H satisfying (6.2.3) and let A 1 be skew-self-adjoint in H . Then, for > 2r , we have, for every f 2 Hk .@ C/˝H , k 2 Z, a unique solution .U; V / 2 .Hk .@ C / ˝ H /2 of the problem @0 V C AU D f; V D M.@1 0 /U: Moreover, the solution depends continuously on the data f 2 Hk .@ C / ˝ H and is causal in the sense that inf supp0 .U /;
inf supp0 .V / inf supp0 .f /:
Proof. The result follows immediately from our previous considerations in the space H0 .@ C / ˝ H by utilizing the unitary equivalence to Hk .@ C / ˝ H via @k0 D .@ C /k for k 2 Z.
6.2.2 Some Special Cases For applications in mathematical physics, several particular cases are important: (a) The Regular Case M.z/ D M0 C M1 .z/ with M0 self-adjoint, M0 c0 > 0 and k Re.z 1 M1 .z//k c1 Re z 1 for z 2 BC .r; r/, c1 < c0 . The latter can clearly be realized for the particular choice M1 .z/ D z MC .z/ with .MC .z//z2BC .r;r/ analytic and uniformly bounded and r 2 R>0 sufficiently small.
438
Chapter 6 A Royal Road to Initial Boundary Value Problems
(b) The (P)-Degenerate Case M.z/ D M0 C M1 .z/ with M0 self-adjoint, M0 0 with a non-trivial null space N.M0 /, but M0 c0 > 0 on its range M0 ŒH and Re.z 1 M1 .z// c1 > 0 on N.M0 /. The latter can be realized for the particular choice M1 .z/ D z MC .z/ with .MC .z//z2BC.r;r/ analytic, uniformly bounded and Re MC .z/ c1 > 0
(6.2.6)
on N.M0 /. (c) The 0-Analytic Case5 P k M.z/ D 1 kD0 Mk z with a positive radius of convergence and M0 is self-adjoint, M0 0 and condition (6.2.3) holds. If M0 is strictly positive definite, we are in the regular case. If M0 has a non-trivial null space N.M0 /, but M0 c0 > 0 on the range M0 ŒH and (6.2.6) holds with MC .z/ D
1 X
MkC1 z k ;
kD0
i.e. we have Re M1 c2 > 0 for some c2 2 R, we are in the (P)-degenerate case. The regular case occurs predominantly in the equations of mathematical physics that we have considered and so compared to the (P)-degenerate case it can be considered the generic case. In the term “(P)-degenerate”, the letter (P) indicates that this case occurs in connection with parabolic evolution equations. In the 0-analytic case it is easy to see from examples that this degeneracy appears when time-reversibility is lost. As a trade-off for this loss, however, better regularity properties are obtained. In the 0-analytic case, the regular case is characterized for sufficiently large 2 R>0 simply by M0 being strictly positive definite. The (P)-degenerate case is for sufficiently large 2 R>0 simply described by M0 being strictly positive definite on the range of M0 and M1 being strictly positive definite on the non-trivial null space N.M0 / of M0 . 5 The simple case M.z/ D M C zM is closely related to operator classes, so-called port0 1 Hamiltonians, considered in [27].
Section 6.2 Evolutionary Dynamics and Material Laws
439
Remark 6.2.6. (a) If M0 can be diagonalized by a unitary transformation W as D D W M0 W where D is diagonal, then the material law can be simplified to e WD W V D D U eCM f1 .@1 e V 0 /U ; where e WD W U; U 1 f1 .@1 M 0 / WD W M1 .@0 /W:
The equation (6.2.1) is then becomes e WD W f eCe eDf @0 V AU with e A WD W A W; D.e A/ WD W ŒD.A/: Note that e A is still skew-self-adjoint. (b) In the above cases, @0 M1 .@1 0 / may also be considered as a perturbation of the operator .M0 @0 C A/; so that a solution theory can be based on perturbation arguments. Generally speaking one could also think of expanding the above solution theory to incorporate small “admissible perturbations” B, which – considered as operators in H;0 ˝ H for 2 R>0 sufficiently large – satisfy !1
1 k.@0 M.@1 ! 0: 0 / C A/ BkH0 .@ C/˝H !H0 .@ C/˝H
This, however, translates at least formally into the above model with a slightly modified material law: 1 1 f.@1 V DM 0 /U D M.@0 /U C @0 BU:
440
Chapter 6 A Royal Road to Initial Boundary Value Problems
6.2.3 Material Laws via Differential Equations Material properties are frequently modelled by describing separate material dynamics, see e.g. [1] in the case of visco-elastic materials and [23] in the case of ferroelectric media. We wish to demonstrate that such descriptions – assuming a suitable linearization – indeed lead to the above form of material laws. A typical situation is an evolutionary system of the form @0 v C Au D f;
v D J u C Kw;
with J W H0 ! H0 ; K W H1 ! H0 as bounded linear operators, A skew-self-adjoint in H0 and w is linked to u by another dynamical process @0 x C Bw D W u;
x D Lw;
involving another bounded, linear operator W W H0 ! H1 and a skew-self-adjoint or bounded operator B in H1 . This can be seen to be of the general form given above in several ways. Assuming for example that L is symmetric and strictly positive definite in H1 , we may solve the system governing the material dynamics to obtain @0 v C Au D f; v D J u C K.@0 C L1 B/1 L1 W u: The material law is now formally 1 1 1 M.@1 0 /u D J u C K.@0 C L B/ L W u
and, provided suitable assumptions are imposed on the bounded linear operators J W H0 ! H0 ; K W H1 ! H0 ; W W H0 ! H1 between Hilbert spaces H0 and H1 , the above solution theory applies. If B is a bounded linear operator in H1 , we could, however, also consider a more direct approach by building a combined system. First, we see that J @0 u C K @0 w C Au D f; L@0 w C Bw W u D 0: Substituting @0 w from the second equation into the first yields J @0 u K L1 Bw C L1 W u C Au D f; L@0 w C Bw W u D 0: This can be written in the form r A0 u f @0 C D s 0 0 w 0
Section 6.2 Evolutionary Dynamics and Material Laws
441
with a modified material law 1 r J 0 u L W KL1 B u D C @1 ; 0 s 0 L w W B w which is now of a suitable form to be covered by the above approach with underlying 0 skew-self-adjoint in H . Hilbert space H D H0 ˚ H1 and A 0 0 Another case is the model of ferro-electric material suggested by Greenberg, MacCamy and Coffman, [18], which can be written abstractly as: @0 v C Au D f;
v D J u C Kw;
@20 w C B Bw D W u;
with J W H0 ! H0 ; K W H1 ! H0 as bounded linear operators, A skew-self-adjoint in H0 and B W D.B/ H1 ! Y a closed, densely defined operator between Hilbert spaces H1 and Y . (Compare [3] for a first study of the specific model and [40] for the particular re-formulation we wish to present.) The second order evolution equation occurring here clearly translates into the first order system @0 w @0 w 0 B Wu @0 C D ; Bw Bw 0 B 0 is now, due to its particular form, skewwhere the resulting operator matrix B0 B 0 self adjoint in H0 ˚ Y . Thus, introducing new unknowns ! D @0 w and D Bw, the coupled system is seen to be of the form 1 1 0 10 0 f u A .0 0/ v A @ ! A D @ 0 A; 0 B C@ 0 @0 x 0
0 B 0 1 10 0 u J .0 0/ v 1 0 A@ ! A D@ 0 x
0 01 0 10 1 0 .K 0/ u @ W 0 0 A @ ! A; C @1 0 0 00
which is indeed formally of the above form and, under suitable assumptions, accessible to the above solution theory. The underlying Hilbert space H is H0 ˚ .H1 ˚ Y /. In any case, we do not seem to get anything structurally new when material properties are described via differential equations.
6.2.4 Coupled Systems The specific form of the material law allows a transparent discussion on how to couple different physical phenomena to obtain a suitable evolutionary problem, compare [8]
442
Chapter 6 A Royal Road to Initial Boundary Value Problems
for a formal observation supporting this approach. We want to focus on the abstract structure of coupled systems. Systems of interest – ignoring coupling for the moment – can be combined simply by writing them together in diagonal block operator matrix form: 0 1 0 1 0 1 U0 f0 V0 B :: C B :: C B :: C @0 @ : A C A @ : A D @ : A; Vn Un fn where
1 0 B :: C B 0 : C C ADB C B :: :: @ : : 0 A 0 0 An L inherits6 the skew-self-adjointness in H D kD0;:::;n Hk from its skew-self-adjoint diagonal entries Ak W D.Ak / Hk ! Hk , k D 0; : : : ; n. The combined material laws now take the simple diagonal form 0 1 0 0 M00 .@1 0 1 1 0 0 / V0 U0 B C : : : :: :: :: B CB : C 0 B C C @ :: A: B V D @ ::: A D M in .@1 0 /U D B C :: :: :: A @ : : : 0 Vn Un 1 0 0 Mnn .@0 / 0
A0
0 :: :
Coupling between these phenomena now can be modelled by expanding M to contain off-diagonal entries 0 1 1 M00 .@1 0 / M0n .@0 / B C :: :: :: M ex .@1 A : : : 0 / WD @ 1 Mn0 .@1 0 / Mnn .@0 / 0 0 0 M00 .@1 0 / B :: : : : : B : : : 0 B B :: :: :: @ : : : 0 0 0 Mnn .@1 0 /
1 C C C: C A
The full material law now is of the familiar form V D M.@1 0 /U then A Indeed, if each Ak , k D 0; : : : ; n, is of the standard block operator matrix form C0 C 0 is a tri-diagonal block operator matrix discussed earlier in Remark 3.1.67, which by asimple symmetric 0 W . permutation of rows and columns can be brought into the standard super-block form W 0 6
Section 6.2 Evolutionary Dynamics and Material Laws
443
with 1 1 M00 .@1 0 / M0n .@0 / C B :: :: :: M.@1 A: : : : 0 / WD @ 1 1 Mn0 .@0 / Mnn .@0 / 0
We shall have occasion to see this mechanism in action in the some applications which we shall discuss later.
6.2.5 Initial Value Problems Before going into specific applications we wish to address two question, the first of which is how standard initial value problems are to be represented in our framework. For initial value problems we have specifically 1 M.@1 0 / D M 0 C @ 0 M1 :
The abstract initial value problems consists in finding U 2 H1 .@ C / ˝ H such that .@0 M0 C M1 C A/U D F C ı ˝ V0 ; where F 2 R0 .m0 /ŒH0 .@ C / ˝ H and V0 2 M0 ŒH are given data. Existence, uniqueness and continuous dependence are provided by our general solution theory. In particular, U 2 H1 .@ C / ˝ H and so M0 U 2 H0 .@ C / ˝ H1 .A C 1/. Remark 6.2.7. Note that, in this simple case, from .@0 M0 C M1 C A/U D G 2 H0 .@ C / ˝ H0 .A C 1/; by separating symmetric and antisymmetric parts (sym WD Re, asym WD i I m), we get ..M0 C sym.M1 // C .asym.@0 / M0 C asym.M1 / C A//U D G: Here .asym.@0 / M0 C asym.M1 / C A/ is skew-self-adjoint in the space H0 .@ C / ˝ H0 .A/. Thus we obtain Rehˆj.@0 M.@1 0 / C A/ˆiH;0;0 D hˆj.M0 C symM1 /ˆiH;0;0 ./hˆjˆiH;0;0 with ./ D inf¹hˆj.M0 C symM1 /ˆiH;0;0 jˆ 2 BH;0;0 .0; 1/º.
444
Chapter 6 A Royal Road to Initial Boundary Value Problems
In order to show that the initial condition M0 U.0C/ D V0 holds, more regularity is required. Let V0 2 M0 ŒH be such that there is a U0 2 D.A/ such that V0 D M0 U0 . Then U R0 ˝ U0 satisfies the equation @0 M0 .U R0 ˝ U0 / C M1 .U R0 ˝ U0 / C A.U R0 ˝ U0 / D F R0 ˝ M1 U0 R0 ˝ AU0 : Then, from F R0 ˝ M1 U0 R0 ˝ AU0 2 H;0;0 , we get by our solution theory that U R0 ˝ U0 2 H;0;0 and so we read off that M0 .U R0 ˝ U0 / 2 H1 .@ C / ˝ H1 .A C 1/. Consequently, we get as before by Sobolev’s embedding lemma that t 7! .M0 .U R0 ˝ U0 //.t / is continuous in H1 .A C 1/ and .M0 U /.0C/ D M0 U0 D V0 : Note that if M0 has a bounded inverse then t 7! .U R0 ˝U0 /.t / is also continuous in H1 .A C 1/ and so U.0C/ D U0 : If M0 has a non-trivial null space, we have to take into account the notationally subtle difference between .M0 U /.0C/, which under the given assumptions exists in H1 .A C 1/, and M0 U.0C/, which may not exist at all. Remark 6.2.8. In the special, but rather common, case that A D C0 C and M0 0 has a corresponding 2 2-block matrix structure such that the orthogonal projection onto the range of M0 coincides with the canonical projection 0 onto the first block component, a more specific argument is available to obtain the initial condition. We first note that, in this situation, the abstract initial value problem assumes the form M1;00 M1;01 u M0;00 0 0 C C C v 0 0 M1;10 M1;11 C 0 M0;00 u0 DF Cı˝ : 0
@0
For u0 2 D.C /, we obtain the initial condition in the form u.0C/ D u0 ; where the limit is in H1 .jC j C i/. In summary, as in earlier cases, we have the following well-posedness result, which is a direct consequence of our abstract solution theory.
Section 6.2 Evolutionary Dynamics and Material Laws
445
Theorem 6.2.9 (Abstract Initial Value Problem without Memory). Let M0 C @1 0 M1 be a material law operator and A W D.A/ H ! H skew-self-adjoint. Then the abstract initial value problem .@0 M0 C M1 C A/U D F C ı ˝ M0 U0 ; with F 2 R0 .m0 /ŒH0 .@ C / ˝ H and U0 2 H1 .A C 1/ given data, has for all sufficiently large 2 R>0 a unique solution in H1 .@ C / ˝ H depending continuously on the data such that for some constant C0 2 R>0 solutions U can be estimated in terms of data by jU j;1;0 C0 .jF j;1;0 C 1=2 jM0 U0 j0 /: Moreover, M0 .U R>0 ˝U0 / 2 H1 .@ C/˝H1 .AC1/ and the initial condition .M0 U /.0C/ D M0 U0 holds in H1 .A C 1/. Furthermore, for some constant C1 2 R>0 we also have jU j;0;0 C1 .jF j;0;0 C 1=2 jU0 j1 /: Proof. Existence and uniqueness of a solution U 2 H1 .@ C / ˝ H and also the first continuity estimate is clear from the general theory. Then, using @0 R0 D ı, we get @0 M0 .U R0 ˝ U0 / C M1 .U R0 ˝ U0 / C A.U R0 ˝ U0 / D F R0 ˝ M1 U0 R0 ˝ AU0 showing that U R0 ˝ U0 2 H0 .@ C / ˝ H and giving the continuity estimate e 0 .jF j;0;0 C 1=2 jU0 j1 /: jU R0 ˝ U0 j;0;0 C From the latter we obtain e 0 .jF j;0;0 C 1=2 jU0 j1 / C jR ˝ U0 j;0;0 jU j;0;0 C 0 e 0 C 1/.jF j;0;0 C 1=2 jU0 j1 /: .C On the other hand, from the equation for the difference term U R0 ˝ U0 , we also read off that @0 M0 .U R0 ˝ U0 / D F R0 ˝ M1 U0 R0 ˝ AU0 M1 .U R0 ˝ U0 / A.U R0 ˝ U0 / 2 H0 .@ C / ˝ H1 .A C 1/
446
Chapter 6 A Royal Road to Initial Boundary Value Problems
and so M0 .U R0 ˝ U0 / 2 H1 .@ C / ˝ H1 .A C 1/: Thus, by Sobolev’s embedding lemma, t 7! .M0 .U R0 ˝ U0 //.t / is continuous and vanishes on R0 . Then '0 R0 D 0
on R>0
and with W WD U C .'0 R0 / ˝ U0 we get @0 M0 .W '0 ˝ U0 / C M1 U C AU D F; @0 M0 .W '0 ˝ U0 / C M1 .W .'0 R0 / ˝ U0 / CA.W .'0 R0 / ˝ U0 / D F: From the latter we have @0 M0 W C M1 W C AW D F C '00 ˝ M0 U0 C .'0 R0 / ˝ M1 U0 C .'0 R0 / ˝ AU0 as a substitute problem with no initial data. Note that here the right-hand side coincides with F on R>0 and also W DU
on R>0 :
6.2.6 Memory Problems As a notational convention, in the following we shall assume that any term of the form Mx .@1 0 / with some index x denotes an operator generated by a bounded, analytic family .Mx .z//z2BC .r;r/ , r 2 R>0 , of continuous linear mappings on H . Whenever we write Mx .0/ we make the implicit assumption that the limit limz!0 Mx .z/ exists and Mx .0/ denotes this limit.
448
Chapter 6 A Royal Road to Initial Boundary Value Problems
We shall continue to use notations of the form Mx with some index x to describe bounded, linear operators in H . With this convention a material law operator 1 1 M.@1 0 / D M0 C @0 M1 .@0 /
(6.2.7)
would involve a memory effect for the evolutionary problem .@0 M.@1 0 / C A/U D F if .M1 .z//z2BC .r;r/ is non-constant, r 2 R>0 . It is typical for such kind of problems to assume that F 2 H0 .@ C / ˝ H so that t 7! .M.@1 0 /U /.t / is a priori continuous in H1 .A C 1/, due to the fact that /U 2 H .@ C / ˝ H1 .A C 1/. M.@1 1 0 It is easy to underestimate the power of this framework, in particular in connection with memory problems. Therefore we shall briefly discuss a few special cases of such memory terms, in order to illustrate the multitude of possible material laws, even in the linear case that we are concerned with in this expository monograph. As typical memory terms one might consider: (a) Delay terms: It may not be obvious that the backward time-shift h , h 2 R0 , leads to a memory type material law operator, but indeed, one finds h D exp.h@0 / D exp.h=@1 0 / and z 7! exp.h=z/ is analytic in C n ¹0º with an essential singularity at 0. For h 2 R>0 the function z 7! exp.h=z/ is even uniformly bounded by 1 in the half-space ŒR>0 C iŒR and so also in any ball BC .r; r/, r 2 R>0 . (b) Convolution terms: Consider M1 .@1 0 / to be of the form 1 2 1 M1 .@1 0 / D M1 C @0 M2 C @0 M3 .@0 /
D M1 C .R0 ˝ M2 / CG where
2 Z 1 1 1 G WD t 7! exp.t / exp.it s/ M3 ds 2 is C is C R
is a continuous operator-valued function. The convolution operator G denotes the continuous extension of the linear mapping given by the convolution integral Z ' ˝ h 7! t 7! G.t s/h'.s/ ds R
Section 6.2 Evolutionary Dynamics and Material Laws
449
a
on CV 1 .R/ ˝ H to a continuous linear mapping in H0 .@ C / ˝ H . Then 2 1 1 L G u D L u M3 im0 C im0 C and so 1 G u D @2 0 M3 .@0 /u:
So, the resulting material law takes on a more familiar form known from a variety of applications: V D M0 U C @1 0 M1 C G U: Writing @1 0 , i.e. time-integration, as convolution with a characteristic function we get .R0 ˝ M1 / as the continuous extension of the convolution integral operator Z .R0 ˝ M1 / ' WD t 7!
R
Z
D t 7!
R0 .t s/ M1 '.s/ ds
Rt
M1 '.s/ ds
D @1 0 M1 '; a
which is well-defined for ' 2 CV 1 .R/ ˝ H . Thus we obtain V D M0 U C I U with I WD R0 ˝ M1 C G. Moreover, allowing the identity to be written as convolution with Dirac’s ı we even could interpret the whole material law as a convolution V DH U with H WD ı ˝ M0 C R0 ˝ M1 C G. From our perspective, however, the issue of the existence of a convolution integral is only of peripheral interest and we therefore continue to use the more powerful concept of an operator-valued function of @1 0 to describe material laws. (c) Fractional integrals and fractional derivatives: The set ¹@˛0 j˛ 2 Rº of suitably defined fractional powers of @0 provides an example for a continuous one-parameter group of operators in H1 .@ C/ with respect to composition. For sufficiently wellbehaved arguments ' and ˛ 2 R>0 , we have a point-wise formula as a convolution integral Z t 1 ˛ .@0 '/.t / D .t s/˛1 '.s/ ds: .˛/ 1
450
Chapter 6 A Royal Road to Initial Boundary Value Problems
In general we let d˛e ˛d˛e
@˛0 WD @0 @0
for ˛ 2 R. Note that d˛e ˛ 2 Œ0; 1Œ. This formula shows that @˛0 , ˛ 2 R, transforms real-valued arguments ' to real-valued images @˛0 ', i.e. it commutes with conjugation. We have ˛ 1 ˛ 1 ˛ @0 D .@0 / D L L i m0 C for ˛ 2 Œ0; 1Œ. In particular, we have that .@˛ 0 /˛2R0 is a continuous one-parameter semigroup in Hk .@ C / for every k 2 Z. We note in particular that z 7! z ˛ D jzj˛ .cos.˛ arg.z// i sin.˛ arg.z///; ˛ 2 Œ0; 1Œ is analytic and uniformly bounded in ŒR>0 CiŒR. Here arg W ŒR>0 CiŒR ! ; Œ. Moreover, we have for ˛ 2 Œ0; 1 m0 Re. C i m0 /˛ D . 2 C m20 /˛=2 cos ˛ arctan ˛ cos ˛ : 2 Using again the notation @˛0 for the canonical extension to the Sobolev lattice .Hr .@ C / ˝ Hs .A C 1//.r;s/2Z ; we may consider memory terms in (6.2.7) of the form M1 .@1 0 /D
L X
1 k M˛k @˛ C @1 0 M2 .@0 / 0
kD0
with M˛k continuous linear operators in H0 .A C 1/, ˛k 2 Œ0; 1Œ for k D 0; : : : ; L. Indeed, we could even consider material law operators of the form 1 M.@1 0 / D M0 C M˛ .@0 /
with M˛ .@1 0 / WD
L X
1 k M˛k @˛ C @1 0 M1 .@0 /; 0
kD0
2 0; 1Œ L
has all different components. For ˛k 2 Œ0; 1Œ we see that here M˛k , where ˛ k D 0; : : : ; L, needs to be self-adjoint, non-negative and such that L X 1˛k M˛ k cos .1 ˛k / M0 C C Re M1 .0/ 2 kD0
is strictly positive definite, uniformly for all sufficiently large 2 R>0 , in order for M.@1 0 / to satisfy the solvability condition.
Section 6.2 Evolutionary Dynamics and Material Laws
451
This may suffice for an indication of the multitude of memory terms covered by our framework. The standard perspective of memory problems slightly differs from the view point discussed so far. It is usual to assume that the solution of .@0 M0 C M1 .@1 0 / C A/U D F is known up to a certain point in time, which we may choose as 0, and consider only the future development, rather than making the equivalent observation that F is known on the whole time line. In other words we consider only the future development UR>0 WD R>0 .m0 /U of U and consider UR0 .m0 //U as data. Of course we then have U D UR0 D .UR0 ˝ U0 / C .UR>0 R>0 ˝ U0 /; where we assume that U0 is such that .M0 U /.0/ D M0 U0 : With this we have, by the assumption that UR0 .m0 //, we obtain .@0 M0 .UR0 ˝ U0 / C .1 R>0 .m0 //M1 .@1 0 /UR0 R>0 ˝ U0 / 1 C R>0 .m0 / M1 .@1 0 /UR0 C AUR>0
D R>0 .m0 /F and so we get .@0 M0 C M1 .@1 0 / C A/UR>0 D R>0 .m0 /F R>0 .m0 / M1 .@1 0 /UR0 . Although, there is little to add to our abstract solution theory here, we shall formulate a result covering this particular case.
452
Chapter 6 A Royal Road to Initial Boundary Value Problems
Theorem 6.2.10 (Initial Value Problem with Memory). Let G 2 R>0 .m0 /ŒH0 .@ C / ˝ H0 .A C 1/ and UR0 .m0 //ŒH0 .@ C / ˝ H0 .A C 1/ \ H0 .@ C / ˝ H1 .A C 1/ and let M0 UR0 .m0 //ŒH1 .@ C / ˝ H0 .A C 1/ be given. Then the solution W 2 H1 .@ C / ˝ H0 .A C 1/ of 1 .@0 M0 C M1 .@1 0 / C A/W D G R>0 .m0 /M1 .@0 /UR