Springer Series in Computational Mathematics Editorial Board R.L. Graham J. Stoer R. Varga
23
Alﬁo Quarteroni · Alberto Valli
Numerical Approximation of Partial Differential Equations With 59 Figures and 17 Tables
Alfio Quarteroni, École Polytechnique Fédérale de Lausanne, Chaire de Modélisation et Calcul Scientifique (CMCS), Station 8, 1015 Lausanne, Switzerland, alfio.quarteroni@epfl.ch
Alberto Valli Università di Trento Dipartimento di Matematica Via Sommarive, 14 38050 Povo TN, Italy
[email protected] and MOX, Politecnico di Milano 20133 Milan, Italy
First softcover printing 2008
ISBN 978-3-540-85267-4
e-ISBN 978-3-540-85268-1
DOI 10.1007/978-3-540-85268-1
Springer Series in Computational Mathematics ISSN 0179-3632
Library of Congress Control Number: 97160884
Mathematics Subject Classification (1991): 65Mxx, 65Nxx, 65Dxx, 65Fxx, 35Jxx, 35Kxx, 35Lxx, 35Q30, 76Mxx
© 2008, 1994 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Cover design: WMXDesign GmbH, Heidelberg
Typesetting: by the authors using a Springer TeX macro package
Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
To Fulvia and Tiziana
Preface
Everything is more simple than one thinks but at the same time more complex than one can understand.
Johann Wolfgang von Goethe

To reach the point that is unknown to you, you must take the road that is unknown to you.
St. John of the Cross

This is a book on the numerical approximation of partial differential equations (PDEs). Its scope is to provide a thorough illustration of numerical methods (especially those stemming from the variational formulation of PDEs), carry out their stability and convergence analysis, derive error bounds, and discuss the algorithmic aspects relative to their implementation. A sound balancing of theoretical analysis, description of algorithms and discussion of applications is our primary concern. Many kinds of problems are addressed: linear and nonlinear, steady and time-dependent, having either smooth or non-smooth solutions. Besides model equations, we consider a number of (initial) boundary value problems of interest in several fields of applications.

Part I is devoted to the description and analysis of general numerical methods for the discretization of partial differential equations. A comprehensive theory of Galerkin methods and their variants (Petrov-Galerkin and generalized Galerkin), as well as of collocation methods, is developed for the spatial discretization. This theory is then specified to two numerical subspace realizations of remarkable interest: the finite element method (conforming, non-conforming, mixed, hybrid) and the spectral method (Legendre and Chebyshev expansion). For unsteady problems we will illustrate finite difference and fractional-step schemes for marching in time. Finite differences will also be extensively considered in Parts II and III in the framework of convection-diffusion problems and hyperbolic equations. For the latter we will also address, briefly, the schemes based on finite volumes.
For the solution of algebraic systems, which are typically very large and sparse, we review classical and modern techniques, both direct and iterative with preconditioning, for both symmetric and non-symmetric matrices. A short account will also be given of multigrid and domain decomposition methods.

Parts II and III are respectively devoted to steady and unsteady problems. For each (initial) boundary value problem we consider, we illustrate the main theoretical results about well-posedness, i.e., concerning existence, uniqueness and a-priori estimates. Afterwards, we reconsider and analyze the previously mentioned numerical methods for the problem at hand, we derive the corresponding algebraic formulation, and we comment on the solution algorithms.

To begin with, we consider all classical equations of mathematical physics: elliptic equations for potential problems, parabolic equations for heat diffusion, hyperbolic equations for wave propagation phenomena. Furthermore, we discuss extensively advection-diffusion equations for passive scalars and the Navier-Stokes equations (together with their linearized version, the Stokes problem) for viscous incompressible flows. We also derive the equations of fluid dynamics in their general form. Unfortunately, the limitation of space and our own experience have resulted in the omission of many important topics that we would have liked to include (for example, the Saint-Venant model for shallow water equations, the system of linear elasticity and the biharmonic equation for membrane displacement and thin plate bending, the drift-diffusion and hydrodynamic models for semiconductor devices, and the Navier-Stokes and Euler equations for compressible flows).

This book is addressed to graduate students as well as to researchers and specialists in the field of numerical simulation of partial differential equations. As a graduate text for Ph.D. courses it may be used in its entirety. Part I may be regarded as a one-quarter introductory course on variational numerical methods for PDEs.
Parts II and III deal with its application to the numerical approximation of time-independent and time-dependent problems, respectively, and could be taught through the two remaining quarters. However, other arrangements may work well. For instance, supplementing Part I with Chapters 6, 11 and most of Chapter 14 may be suitable for a one-semester course. The rest of the book could be covered in the second semester. From a different perspective, Part I plus Chapters 8, 9, 10, 12, 13 and 14 can be regarded as an introduction to numerical fluid dynamics. Other combinations are also conceivable.

The authors are grateful to Drs. C. Byrne and J. Heinze of Springer-Verlag for their encouragement throughout this project. The assistance of the technical staff of Springer-Verlag has contributed to the final shaping of the manuscript. This book benefits from our experience in teaching these subjects over the past years in different academic institutions (the University of Minnesota at Minneapolis, the Catholic University of Brescia and the Polytechnic of Milan for the first author, the University of Trento for the second author),
and from students' reactions. Help was given to us by several friends and collaborators who read parts of the manuscript or provided figures or tables. In this connection we are happy to thank V.I. Agoshkov, Yu.A. Kuznetsov, D. Ambrosi, L. Bergamaschi, S. Delladio, M. Manzini, M. Paolini, F. Pasquarelli, L. Stolcis, E. Zampieri, A. Zaretti and in particular C. Bernini, P. Gervasio and F. Saleri. We would also like to thank Ms. R. Holliday for having edited the language of the entire manuscript. Finally, the expert and incredibly adept typing of the TeX files by Ms. C. Foglia has been invaluable.

Milan and Trento, May 1994
Alfio Quarteroni Alberto Valli
In the second printing of this book we have corrected several misprints, and introduced some modifications to the original text. More precisely, we have slightly changed Sections 2.3.4, 3.4.1, 8.4 and 12.3, and we have added some further comments to Remark 8.2.1. We have also completed the references for those papers that appeared after 1994.

Milan and Trento, December 1996
Alfio Quarteroni Alberto Valli
Table of Contents
Part I. Basic Concepts and Methods for PDEs' Approximation

1. Introduction
   1.1 The Conceptual Path Behind the Approximation
   1.2 Preliminary Notation and Function Spaces
   1.3 Some Results About Sobolev Spaces
   1.4 Comparison Results

2. Numerical Solution of Linear Systems
   2.1 Direct Methods
       2.1.1 Banded Systems
       2.1.2 Error Analysis
   2.2 Generalities on Iterative Methods
   2.3 Classical Iterative Methods
       2.3.1 Jacobi Method
       2.3.2 Gauss-Seidel Method
       2.3.3 Relaxation Methods (S.O.R. and S.S.O.R.)
       2.3.4 Chebyshev Acceleration Method
       2.3.5 The Alternating Direction Iterative Method
   2.4 Modern Iterative Methods
       2.4.1 Preconditioned Richardson Method
       2.4.2 Conjugate Gradient Method
   2.5 Preconditioning
   2.6 Conjugate Gradient and Lanczos-like Methods for Non-Symmetric Problems
       2.6.1 GCR, Orthomin and Orthodir Iterations
       2.6.2 Arnoldi and GMRES Iterations
       2.6.3 Bi-CG, CGS and Bi-CGSTAB Iterations
   2.7 The Multi-Grid Method
       2.7.1 The Multi-Grid Cycles
       2.7.2 A Simple Example
       2.7.3 Convergence
   2.8 Complements
3. Finite Element Approximation
   3.1 Triangulation
   3.2 Piecewise-Polynomial Subspaces
       3.2.1 The Scalar Case
       3.2.2 The Vector Case
   3.3 Degrees of Freedom and Shape Functions
       3.3.1 The Scalar Case: Triangular Finite Elements
       3.3.2 The Scalar Case: Parallelepipedal Finite Elements
       3.3.3 The Vector Case
   3.4 The Interpolation Operator
       3.4.1 Interpolation Error: the Scalar Case
       3.4.2 Interpolation Error: the Vector Case
   3.5 Projection Operators
   3.6 Complements
4. Polynomial Approximation
   4.1 Orthogonal Polynomials
   4.2 Gaussian Quadrature and Interpolation
   4.3 Chebyshev Expansion
       4.3.1 Chebyshev Polynomials
       4.3.2 Chebyshev Interpolation
       4.3.3 Chebyshev Projections
   4.4 Legendre Expansion
       4.4.1 Legendre Polynomials
       4.4.2 Legendre Interpolation
       4.4.3 Legendre Projections
   4.5 Two-Dimensional Extensions
       4.5.1 The Chebyshev Case
       4.5.2 The Legendre Case
   4.6 Complements
5. Galerkin, Collocation and Other Methods
   5.1 An Abstract Reference Boundary Value Problem
       5.1.1 Some Results of Functional Analysis
   5.2 Galerkin Method
   5.3 Petrov-Galerkin Method
   5.4 Collocation Method
   5.5 Generalized Galerkin Method
   5.6 Time-Advancing Methods for Time-Dependent Problems
       5.6.1 Semi-Discrete Approximation
       5.6.2 Fully-Discrete Approximation
   5.7 Fractional-Step and Operator-Splitting Methods
   5.8 Complements
Part II. Approximation of Boundary Value Problems

6. Elliptic Problems: Approximation by Galerkin and Collocation Methods
   6.1 Problem Formulation and Mathematical Properties
       6.1.1 Variational Form of Boundary Value Problems
       6.1.2 Existence, Uniqueness and A-Priori Estimates
       6.1.3 Regularity of Solutions
       6.1.4 On the Degeneracy of the Constants in Stability and Error Estimates
   6.2 Numerical Methods: Construction and Analysis
       6.2.1 Galerkin Method: Finite Element and Spectral Approximations
       6.2.2 Spectral Collocation Method
       6.2.3 Generalized Galerkin Method
   6.3 Algorithmic Aspects
       6.3.1 Algebraic Formulation
       6.3.2 The Finite Element Case
       6.3.3 The Spectral Collocation Case
   6.4 Domain Decomposition Methods
       6.4.1 The Schwarz Method
       6.4.2 Iteration-by-Subdomain Methods Based on Transmission Conditions at the Interface
       6.4.3 The Steklov-Poincaré Operator
       6.4.4 The Connection Between Iteration-by-Subdomain Methods and the Schur Complement System
7. Elliptic Problems: Approximation by Mixed and Hybrid Methods
   7.1 Alternative Mathematical Formulations
       7.1.1 The Minimum Complementary Energy Principle
       7.1.2 Saddle-Point Formulations: Mixed and Hybrid Methods
   7.2 Approximation by Mixed Methods
       7.2.1 Setting up and Analysis
       7.2.2 An Example: the Raviart-Thomas Finite Elements
   7.3 Some Remarks on the Algorithmic Aspects
   7.4 The Approximation of More General Constrained Problems
       7.4.1 Abstract Formulation
       7.4.2 Analysis of Stability and Convergence
       7.4.3 How to Verify the Uniform Compatibility Condition
   7.5 Complements
8. Steady Advection-Diffusion Problems
   8.1 Mathematical Formulation
   8.2 A One-Dimensional Example
       8.2.1 Galerkin Approximation and Centered Finite Differences
       8.2.2 Upwind Finite Differences and Numerical Diffusion
       8.2.3 Spectral Approximation
   8.3 Stabilization Methods
       8.3.1 The Artificial Diffusion Method
       8.3.2 Strongly Consistent Stabilization Methods for Finite Elements
       8.3.3 Stabilization by Bubble Functions
       8.3.4 Stabilization Methods for Spectral Approximation
   8.4 Analysis of Strongly Consistent Stabilization Methods
   8.5 Some Numerical Results
   8.6 The Heterogeneous Method
9. The Stokes Problem
   9.1 Mathematical Formulation and Analysis
   9.2 Galerkin Approximation
       9.2.1 Algebraic Form of the Stokes Problem
       9.2.2 Compatibility Condition and Spurious Pressure Modes
       9.2.3 Divergence-Free Property and Locking Phenomena
   9.3 Finite Element Approximation
       9.3.1 Discontinuous Pressure Finite Elements
       9.3.2 Continuous Pressure Finite Elements
   9.4 Stabilization Procedures
   9.5 Approximation by Spectral Methods
       9.5.1 Spectral Galerkin Approximation
       9.5.2 Spectral Collocation Approximation
       9.5.3 Spectral Generalized Galerkin Approximation
   9.6 Solving the Stokes System
       9.6.1 The Pressure-Matrix Method
       9.6.2 The Uzawa Method
       9.6.3 The Arrow-Hurwicz Method
       9.6.4 Penalty Methods
       9.6.5 The Augmented-Lagrangian Method
       9.6.6 Methods Based on Pressure Solvers
       9.6.7 A Global Preconditioning Technique
   9.7 Complements
10. The Steady Navier-Stokes Problem
    10.1 Mathematical Formulation
        10.1.1 Other Kinds of Boundary Conditions
        10.1.2 An Abstract Formulation
    10.2 Finite Dimensional Approximation
        10.2.1 An Abstract Approximate Problem
        10.2.2 Approximation by Mixed Finite Element Methods
        10.2.3 Approximation by Spectral Collocation Methods
    10.3 Numerical Algorithms
        10.3.1 Newton Methods and the Continuation Method
        10.3.2 An Operator-Splitting Algorithm
    10.4 Stream Function-Vorticity Formulation of the Navier-Stokes Equations
    10.5 Complements
Part III. Approximation of Initial-Boundary Value Problems

11. Parabolic Problems
    11.1 Initial-Boundary Value Problems and Weak Formulation
        11.1.1 Mathematical Analysis of Initial-Boundary Value Problems
    11.2 Semi-Discrete Approximation
        11.2.1 The Finite Element Case
        11.2.2 The Case of Spectral Methods
    11.3 Time-Advancing by Finite Differences
        11.3.1 The Finite Element Case
        11.3.2 The Case of Spectral Methods
    11.4 Some Remarks on the Algorithmic Aspects
    11.5 Complements

12. Unsteady Advection-Diffusion Problems
    12.1 Mathematical Formulation
    12.2 Time-Advancing by Finite Differences
        12.2.1 A Sharp Stability Result for the θ-scheme
        12.2.2 A Semi-Implicit Scheme
    12.3 The Discontinuous Galerkin Method for Stabilized Problems
    12.4 Operator-Splitting Methods
    12.5 A Characteristic Galerkin Method

13. The Unsteady Navier-Stokes Problem
    13.1 The Navier-Stokes Equations for Compressible and Incompressible Flows
        13.1.1 Compressible Flows
        13.1.2 Incompressible Flows
    13.2 Mathematical Formulation and Behaviour of Solutions
    13.3 Semi-Discrete Approximation
    13.4 Time-Advancing by Finite Differences
    13.5 Operator-Splitting Methods
    13.6 Other Approaches
    13.7 Complements
14. Hyperbolic Problems
    14.1 Some Instances of Hyperbolic Equations
        14.1.1 Linear Scalar Advection Equations
        14.1.2 Linear Hyperbolic Systems
        14.1.3 Initial-Boundary Value Problems
        14.1.4 Nonlinear Scalar Equations
    14.2 Approximation by Finite Differences
        14.2.1 Linear Scalar Advection Equations and Hyperbolic Systems
        14.2.2 Stability, Consistency, Convergence
        14.2.3 Nonlinear Scalar Equations
        14.2.4 High Order Shock Capturing Schemes
    14.3 Approximation by Finite Elements
        14.3.1 Galerkin Method
        14.3.2 Stabilization of the Galerkin Method
        14.3.3 Space-Discontinuous Galerkin Method
        14.3.4 Schemes for Time-Discretization
    14.4 Approximation by Spectral Methods
        14.4.1 Spectral Collocation Method: the Scalar Case
        14.4.2 Spectral Collocation Method: the Vector Case
        14.4.3 Time-Advancing and Smoothing Procedures
    14.5 Second Order Linear Hyperbolic Problems
    14.6 The Finite Volume Method
    14.7 Complements

References

Subject Index
1. Introduction
Numerical approximation of partial differential equations is an important branch of Numerical Analysis. Often, it demands a knowledge of many aspects of the problem. First of all, the physical background of the problem is required in order to understand the behaviour of the expected solutions. This may often lead to the choice of convenient numerical methods. Secondly, the modern formulation of the problem based on the variational (weak) form ought to be considered, as it allows the search for generalized solutions in Hilbert (or Banach) functional spaces. Variational techniques yield a-priori estimates for the solution, which in turn indicate in which kind of norms any virtual numerical solution can be proven to be stable. Furthermore, results about smoothness of the mathematical solutions may suggest the numerical methodology to be used and, consequently, determine the kind of accuracy that can be achieved. The latter is pointed out by the error analysis. Clearly, specific attention should be paid to the algorithmic aspects concerned with the choice of any numerical method.

This book aims at providing general ideas on the numerical approximation of partial differential equations, although (obviously) not all possible existing methods will be considered. In this respect, we mainly focus on variational numerical methods for the discretization of space derivatives, and on finite difference and fractional-step methods for advancing unsteady problems in time. Whenever possible, we present the unifying approach behind a priori different numerical strategies, provide a general theory for their analysis, and illustrate a variety of algorithms that can be used to compute the effective numerical solution of the problem at hand, taking into consideration its algebraic structure. Consequently, we try to avoid using technicalities (or tricks, or algorithms) that work only in very specific situations, or that are not supported by a sound theoretical background.
Some problems (and methods) are discussed on a case-to-case basis, but very often they are included in a single logical unit (say, a chapter or a section).
1.1 The Conceptual Path Behind the Approximation

We consider a great number of mathematical problems, and numerical methods for their solution. For the approximation of any given boundary value problem, we schematically illustrate in Fig. 1.1.1 the decision path that needs to be followed.

Level [1] is the boundary value problem at hand under its weak formulation, accounting for the prescribed boundary conditions.

Level [2] provides the kind of discretization (or numerical method) that can be pursued in order to reduce the given problem to one having finite dimension. Of course, the strategy adopted will determine the structure of the numerical problem. Throughout this book we mainly consider two kinds of discretization. The former is the Galerkin method, together with its remarkable variant, the Petrov-Galerkin method, which is based on an integral formulation of the differential problem. The second discretization we consider is the collocation method, which is, instead, based on the fulfillment of the differential equations at some selected points of the computational domain. We then reformulate the collocation method in a generalized Galerkin mode, precisely by combining the Galerkin approach with the numerical evaluation of integrals using Gaussian formulae. To a lesser extent, we will address finite difference schemes for space discretization, especially for nonlinear convection-diffusion equations and for problems of wave propagation. For the latter we will also present the approach based on the finite volume method, which is very popular in computational fluid dynamics. Finally, we will briefly illustrate the elementary principles of the domain decomposition method, an approach which offers the best promise for the parallel solution of large problems in the field of scientific computing. Other approaches are often encountered in the literature as well, but they will only be addressed incidentally in this book.
Level [3] specifies the nature of the subspaces used in the approximation. Typically, we have piecewise-polynomial functions of low degree when using finite elements, and global algebraic polynomials of high degree for spectral methods. These two remarkable cases will be discussed and analyzed in some of their variants (mixed finite elements, Legendre and Chebyshev spectral collocation methods). The choice operated at this level determines the functional structure of the numerical solution and the kind of accuracy that can be achieved, besides affecting the topological form of the resulting algebraic system.

At level [4] the selection of convenient algorithms needs to be accomplished to solve the algebraic problem, exploiting as much as possible the topological structure and the properties of the associated matrices. We illustrate all the important methods available nowadays for solving large scale symmetric and
Fig. 1.1.1. The conceptual path behind the approximation (levels [1] to [4])
nonsymmetric systems, including a short presentation of preconditioning and multigrid techniques.

For initial-boundary value problems, between levels [1] and [2] the time-discretization needs to be carried out. Quite often, this is performed using finite difference divided quotients to approximate the time derivatives. However, other approaches can be pursued, especially for differential operators of complex structure. An instance is provided by strategies based on fractional steps, which allow the splitting of the operator into sub-pieces that are advanced in time in an independent fashion. Both finite difference and fractional-step approaches will be extensively investigated in this book. We will briefly mention other strategies as well, e.g., discontinuous finite elements, characteristic Galerkin and Taylor-Galerkin schemes.

Let us finally underline that, in Part I of the book, the flow reported in Fig. 1.1.1 will be followed in the reverse order (i.e., bottom up). Proceeding backward from level [4] to level [2], we start from algorithms for solving linear algebraic systems (Chapter 2), then discuss the basic concepts of finite element (Chapter 3) and polynomial (Chapter 4) approximations. In Chapter 5 we introduce the discretization methods of level [2], as well as those for the time differencing. At this stage we also present three examples of boundary value problems in order to provide the reader with some concrete guidelines.

All the numerical results presented in this book have been obtained in a double precision mode on an IBM RISC 560 (having 50 MHz) with 128 MB RAM. Peak performance is 100 MFlops (31 MFlops on LINPACK, 84 MFlops on TPP). The compiler is IBM AIX XL FORTRAN/6000 Version 2.3.
1.2 Preliminary Notation and Function Spaces

In this Section we introduce some definitions and notations which will often be used in the sequel. For a complete presentation we refer the interested reader to, e.g., Yosida (1974) or Brezis (1983) (see also Brezzi and Gilardi (1987) for a comprehensive and easy-to-read proofless presentation).
(i) Hilbert and Banach spaces

Let V be a (real) linear space. A scalar product on V is a bilinear map (·,·) : V × V → ℝ such that (w,v) = (v,w) for each w,v ∈ V (symmetry), (v,v) ≥ 0 for each v ∈ V (positivity), and (v,v) = 0 if and only if v = 0. A seminorm is a map ‖·‖ : V → ℝ such that ‖v‖ ≥ 0 for each v ∈ V, ‖cv‖ = |c| ‖v‖ for each c ∈ ℝ and v ∈ V, and ‖w + v‖ ≤ ‖w‖ + ‖v‖ for each w,v ∈ V (triangular inequality). A norm on V is a seminorm satisfying the additional property that ‖v‖ = 0 if and only if v = 0. Two norms ‖·‖ and |||·||| on V are equivalent if there exist two positive constants M₁ and M₂ such that

M₁ ‖v‖ ≤ |||v||| ≤ M₂ ‖v‖   for each v ∈ V .

It is readily verified that to any scalar product there is associated a norm through the following definition: ‖v‖ := (v,v)^{1/2}. Moreover, to any norm we can associate a distance: d(w,v) := ‖w − v‖. A linear space V endowed with a scalar product (respectively, a norm) is called a pre-Hilbertian (respectively, normed) space. A sequence vₙ is a Cauchy sequence in a normed space V if it is a Cauchy sequence with respect to the distance d(w,v) = ‖w − v‖. If every Cauchy sequence in a pre-Hilbertian (normed) space V is convergent, the space V is called a Hilbert (respectively, Banach) space. In a Hilbert space the Schwarz inequality holds:

(1.2.1)   |(w,v)| ≤ ‖w‖ ‖v‖   for each w,v ∈ V .
(ii) Dual spaces

If (V, ‖·‖_V) and (W, ‖·‖_W) are normed spaces, we denote by ℒ(V;W) the set of linear continuous functionals from V into W, and for L ∈ ℒ(V;W) we define the norm

(1.2.2)   ‖L‖_{ℒ(V;W)} := sup_{v∈V, v≠0} ‖Lv‖_W / ‖v‖_V .

Thus ℒ(V;W) is a normed space; if W is a Banach space, then ℒ(V;W) is a Banach space, too. If W = ℝ, the space ℒ(V;ℝ) is called the dual space of V and is denoted by V′.
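In the finite-dimensional case the supremum in (1.2.2) can be approximated by sampling over unit vectors. A small sketch, with the arbitrary example map L(v) = (2v₁, 3v₂) on ℝ², whose operator norm is 3:

```python
import math

# Illustration of definition (1.2.2): for the linear map L(v) = (2*v0, 3*v1)
# on R^2 (an arbitrary example), sup_{v != 0} ||Lv|| / ||v|| equals 3,
# attained at v = (0, 1).
def L(v):
    return (2.0 * v[0], 3.0 * v[1])

def norm(v):
    return math.hypot(v[0], v[1])

# sample the supremum over unit vectors v = (cos t, sin t);
# by homogeneity of the ratio this loses no generality
n = 1000
ratios = []
for k in range(n):
    t = 2.0 * math.pi * k / n
    v = (math.cos(t), math.sin(t))
    ratios.append(norm(L(v)) / norm(v))

op_norm = max(ratios)
print(round(op_norm, 3))  # close to 3.0
```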
The bilinear form (·,·) from V′ × V into ℝ defined by (L,v) := L(v) is called the duality pairing between V′ and V. As a consequence of the Riesz representation theorem (see, e.g., Yosida (1974), p. 90), if V is a Hilbert space, the dual V′ is a Hilbert space which can be canonically identified with V.

(iii) Weak and weak* convergence

In a normed space V it is possible to introduce another type of convergence, which is called weak convergence. It is defined as follows: a sequence vₙ is called weakly convergent to v ∈ V if L(vₙ) converges to L(v) for each L ∈ V′. It can be proven that the weak limit v, if it exists, is unique. Clearly, if the sequence vₙ converges to v in V, it is also weakly convergent. The converse is not true unless V is finite dimensional.

In a dual space V′ a third type of convergence can be introduced, the weak* convergence. It is defined as follows: a sequence of functionals Lₙ ∈ V′ is called weakly* convergent to L ∈ V′ if Lₙ(v) converges to L(v) for each v ∈ V. Also the weak* limit L, if it exists, is unique. Moreover, it can be shown that the weak convergence in V′ implies the weak* convergence.

(iv) Lᵖ spaces

We now introduce some spaces of functions which are the basis for the modern theory of partial differential equations. Let Ω be an open set contained in ℝᵈ, d ≥ 1, and consider in Ω the Lebesgue measure. A very important family of Banach spaces is the following one. Let 1 ≤ p ≤ ∞, and consider the set of measurable functions v such that

(1.2.3)   ∫_Ω |v(x)|ᵖ dx < ∞ ,   1 ≤ p < ∞ ,

or, when p = ∞,

(1.2.4)   sup{ |v(x)| | x ∈ Ω } < ∞ .

These spaces are usually denoted by Lᵖ(Ω) and the associated norm is

(1.2.5)   ‖v‖_{Lᵖ(Ω)} := ( ∫_Ω |v(x)|ᵖ dx )^{1/p} ,

or, when p = ∞,

(1.2.6)   ‖v‖_{L^∞(Ω)} := sup{ |v(x)| | x ∈ Ω } .

More precisely, Lᵖ(Ω) is indeed the space of classes of equivalence of measurable functions satisfying (1.2.3) or (1.2.4), with respect to the equivalence relation: w ≡ v if w and v are different only on a subset having zero measure. In other words, in the space Lᵖ(Ω) two functions which are different on a subset of zero measure are identified with each other. Thus the definition of the space L^∞(Ω) in (1.2.4) and of its norm in (1.2.6) should be modified in the following way: v ∈ L^∞(Ω) if

inf{ M ≥ 0 | |v(x)| ≤ M almost everywhere in Ω } < ∞ ,

and

(1.2.7)   ‖v‖_{L^∞(Ω)} := inf{ M ≥ 0 | |v(x)| ≤ M almost everywhere in Ω } ,

where "almost everywhere in Ω" means "except on a subset of Ω having zero measure".

The space L²(Ω) is indeed a Hilbert space, endowed with the scalar product

(w,v)_{L²(Ω)} := ∫_Ω w(x) v(x) dx .

For reasons which will become clear in the sequel, the norm in L²(Ω) is denoted by ‖·‖_{0,Ω}, or simply ‖·‖₀ when no confusion about the domain Ω is possible. Moreover, the scalar product (·,·)_{L²(Ω)} is often indicated by (·,·)_{0,Ω} or simply by (·,·).

If 1 ≤ p < ∞, the dual space of Lᵖ(Ω) is given by L^{p′}(Ω), where (1/p) + (1/p′) = 1 (and p′ = ∞ if p = 1). Moreover, the Hölder inequality holds:

(1.2.8)   | ∫_Ω w(x) v(x) dx | ≤ ‖w‖_{Lᵖ(Ω)} ‖v‖_{L^{p′}(Ω)} .

Notice that for p = 2 the Hölder inequality is the Schwarz inequality (1.2.1) for the Hilbert space L²(Ω). Moreover, from (1.2.8) it easily follows that L^q(Ω) ⊂ Lᵖ(Ω) if p ≤ q and Ω has finite measure.
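The Hölder inequality (1.2.8) can be illustrated numerically. A sketch on Ω = (0,1) with p = 3 and arbitrary sample functions; the midpoint quadrature is our illustrative choice, not from the text:

```python
import math

# Numerical illustration of the Holder inequality (1.2.8) on Omega = (0, 1),
# with p = 3, p' = 3/2 and the arbitrary sample functions w(x) = x,
# v(x) = cos(x); integrals are approximated by the midpoint rule.
def midpoint(f, n=2000):
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

p = 3.0
q = p / (p - 1.0)           # conjugate exponent p': 1/p + 1/p' = 1
w = lambda x: x
v = lambda x: math.cos(x)

lhs = abs(midpoint(lambda x: w(x) * v(x)))               # |int w v dx|
norm_w_p = midpoint(lambda x: abs(w(x)) ** p) ** (1.0 / p)
norm_v_q = midpoint(lambda x: abs(v(x)) ** q) ** (1.0 / q)

print(lhs <= norm_w_p * norm_v_q)  # True
```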
Notice that for p = 2 the Holder inequality is the Schwarz inequality (1.2.1) for the Hilbert space L 2(D). Moreover, from (1.2.8) it easily follows that Lq(D) C LP(D) if P:S q and D has finite measure. (v) Distributions Let us recall that Cif'(D) (or V(D)) denotes the space of infinitely differentiable functions having compact support, i.e., vanishing outside a bounded open set D' C D which has a positive distance from the boundary aD of D). It is useful to define the concept of convergence for sequences of V(D). We say that V n E V(D) converges to v E V(D) if exists a closed bounded subset KED such that V n vanishes outside K for each n, and for every nonnegative multiindex a the derivative DOvn converges to DOv uniformly in D. We recall that if a = (a1' ... ,ad), ai nonnegative integers, then a10Iv
DOv
:=
aXl'" "" a«:
"'d
'
where lal := a1 + ... + ad is the length of a. The space of linear functionals on V(D) which are continuous with respect to the convergence introduced above is denoted by V'(D) and its elements are called distributions. If L E V'(D) and v E V(D), we usually denote L(v) by the duality pairing (L, v}. It is easily seen that each function w E LP(D) can be associated to the following distribution:
1.2 Preliminary Notation and Function Spaces
v
>
In
7
w(x) v(x) dx , v E D(il) .
However, the Dirac functional

v ↦ δ(v) := v(0) ,   v ∈ D(ℝ) ,

is a distribution which cannot be represented through any function belonging to Lᵖ(Ω). We are now in a position to introduce the derivative of a distribution. Let α be a nonnegative multi-index and L a distribution. Then DᵅL is the distribution defined as follows:

(1.2.9)   (DᵅL, v) := (−1)^{|α|} (L, Dᵅv)   for all v ∈ D(Ω) .
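Definition (1.2.9) can be checked numerically for |α| = 1 in one dimension: when L is a smooth function w, the distributional derivative must agree with the classical one when tested against any v ∈ D(Ω). A sketch, in which the classical bump test function and the choice w(x) = x³ are arbitrary illustrations:

```python
import math

# Check of definition (1.2.9) for |alpha| = 1: for the smooth function
# w(x) = x^3 regarded as a distribution, (Dw, v) := -(w, v') must equal
# the classical pairing (w', v) for every test function v.  Here v is the
# bump function exp(-1/(1-x^2)) supported in (-1, 1).
def v(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dv(x):
    # classical derivative of the bump function
    if abs(x) >= 1.0:
        return 0.0
    return v(x) * (-2.0 * x) / (1.0 - x * x) ** 2

def integrate(f, a=-1.0, b=1.0, n=4000):
    # midpoint rule; the integrand vanishes smoothly at the endpoints
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

w = lambda x: x ** 3
dw = lambda x: 3.0 * x * x

lhs = -integrate(lambda x: w(x) * dv(x))   # (Dw, v) := -(w, v')
rhs = integrate(lambda x: dw(x) * v(x))    # classical (w', v)
print(abs(lhs - rhs) < 1e-6)  # True
```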
Notice that, from this definition, a distribution turns out to be infinitely differentiable. On the other hand, when L is a smooth function, by integrating by parts it is easily verified that the derivative in the sense of distributions coincides with the usual derivative. Let us also recall that the Dirac distribution δ is the distributional derivative of the Heaviside function:

H(x) := 1 for x ≥ 0 ,   H(x) := 0 for x < 0 .

For 1 ≤ q < ∞ one also considers the space of functions v : (0,T) → W^{s,p}(Ω) such that v is measurable and

∫_0^T ‖v(t)‖^q_{s,p,Ω} dt < ∞ ,

endowed with the norm

( ∫_0^T ‖v(t)‖^q_{s,p,Ω} dt )^{1/q} .
We close this Section by considering some properties which are valid for space-time functions.

Theorem 1.3.8 Assume that Ω is an open set of ℝᵈ with a Lipschitz continuous boundary, and let s ≥ 0 and r > 1/2 be two real numbers. Then for each θ such that 0 ≤ θ ≤ 1 the space

(1.3.5)

is continuously embedded into the space

where a₀ := (2rθ/(2r + 1)) s. If, furthermore, the set Ω is bounded and s > 0, then the space defined in (1.3.5) is compactly embedded into the space

for each s₁ ≥ 0, 0 ≤ r₁ < r(1 − θ) and 0 ≤ a₁ < a₀.

For additional references on interpolation spaces we refer, e.g., to Bergh and Löfström (1976).
1.4 Comparison Results In this Section we present two comparison results, which will be useful in the stability and convergence analysis of initialboundary value problems. Lemma 1.4.1 (Gronwall lemma). Let f E L1(to, T) be a nonnegative function, 9 and r.p be continuous functions on [to, T]. If r.p satisfies
(1.4.1) φ(t) ≤ g(t) + ∫_{t_0}^t f(τ) φ(τ) dτ  ∀ t ∈ [t_0, T] ,

then

(1.4.2) φ(t) ≤ g(t) + ∫_{t_0}^t f(s) g(s) exp( ∫_s^t f(τ) dτ ) ds  ∀ t ∈ [t_0, T] .

If moreover g is nondecreasing, then

(1.4.3) φ(t) ≤ g(t) exp( ∫_{t_0}^t f(τ) dτ )  ∀ t ∈ [t_0, T] .
Proof. Set R(t) := ∫_{t_0}^t f(τ) φ(τ) dτ. From (1.4.1) it satisfies

dR/dt (t) = f(t) φ(t) ≤ f(t) [g(t) + R(t)] ,

hence

(1.4.4) d/dt [ R(t) exp( −∫_{t_0}^t f(τ) dτ ) ] = [ dR/dt (t) − R(t) f(t) ] exp( −∫_{t_0}^t f(τ) dτ ) ≤ f(t) g(t) exp( −∫_{t_0}^t f(τ) dτ ) .

Integrating (1.4.4) over (t_0, t) we find

R(t) exp( −∫_{t_0}^t f(τ) dτ ) ≤ ∫_{t_0}^t f(s) g(s) exp( −∫_{t_0}^s f(τ) dτ ) ds ,

and (1.4.2) follows. If g is nondecreasing we find

φ(t) ≤ g(t) [ 1 + ∫_{t_0}^t f(s) exp( ∫_s^t f(τ) dτ ) ds ] ,

which yields (1.4.3). □

Often, the Gronwall lemma will be used in the special case in which
(1.4.5) g(t) = φ(0) + ∫_0^t ψ(s) ds ,  ψ(s) ≥ 0 .
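The estimate (1.4.3) can be checked numerically. The following Python sketch (ours, not part of the book; the function name and tolerance are our own choices) integrates the equality case of (1.4.1), i.e. φ(t) = g(t) + ∫ f φ, with the explicit Euler rule and verifies the exponential bound for a nondecreasing g:

```python
import math

def gronwall_check(f, g, t0, T, n=2000):
    """Integrate the worst case phi(t) = g(t) + int_{t0}^t f*phi with the
    explicit Euler rule and verify the Gronwall estimate (1.4.3):
    phi(t) <= g(t) * exp(int_{t0}^t f), for g nondecreasing."""
    h = (T - t0) / n
    t, phi, int_f = t0, g(t0), 0.0   # int_f accumulates int_{t0}^t f
    ok = True
    for _ in range(n):
        # phi' = f*phi + g' is the differentiated equality case of (1.4.1)
        phi = phi + h * f(t) * phi + (g(t + h) - g(t))
        int_f += h * f(t)
        t += h
        ok = ok and phi <= g(t) * math.exp(int_f) * (1 + 1e-6)
    return ok
```

For instance, with f ≡ 1 and g(t) = 1 + t the exact solution is φ(t) = 2e^t − 1, which indeed stays below (1 + t)e^t.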
Another comparison result, which is the discrete counterpart of the Gronwall lemma, is the following one.

Lemma 1.4.2 (Discrete Gronwall lemma). Assume that k_n is a nonnegative sequence, and that the sequence φ_n satisfies … for k = 1, …, n ,
and an upper triangular matrix R ∈ ℝ^{n×n} with r_{kk} = 1, k = 1, …, n, such that

(2.1.12) A = QR .

This decomposition is particularly successful for the solution of the least squares problem. Indeed, in view of (2.1.12) the normal equations

(2.1.13) AᵀAx = Aᵀb

lead to RᵀQᵀQRx = RᵀQᵀb, whence

(2.1.14) (QᵀQ) Rx = Qᵀb .

Thus, the computation of the solution of (2.1.13) simply requires the solution of two systems, one diagonal and the other upper triangular. In turn, the calculation of the factors Q and R can be performed by several algorithms. One is based on a modified Gram–Schmidt orthogonalization method, another (the Householder method) on the use of the reflection
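As an illustration (ours, not the book's), the following Python sketch computes a QR factorization by the modified Gram–Schmidt method and uses it to solve a least squares problem; here we adopt the more common normalization in which Q has orthonormal columns (the variant above instead takes r_kk = 1), so the diagonal system becomes trivial:

```python
def mgs_qr(A):
    """Modified Gram-Schmidt QR factorization; A is m x n (list of rows),
    m >= n, with full column rank. Returns Q (orthonormal columns) and R."""
    m, n = len(A), len(A[0])
    V = [[A[i][j] for j in range(n)] for i in range(m)]  # working copy
    Q = [[0.0] * n for _ in range(m)]
    R = [[0.0] * n for _ in range(n)]
    for k in range(n):
        R[k][k] = sum(V[i][k] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][k] = V[i][k] / R[k][k]
        for j in range(k + 1, n):
            R[k][j] = sum(Q[i][k] * V[i][j] for i in range(m))
            for i in range(m):
                V[i][j] -= R[k][j] * Q[i][k]   # immediate re-orthogonalization
    return Q, R

def lstsq_qr(A, b):
    """Solve min |Ax - b| via A = QR: back-substitute R x = Q^T b."""
    Q, R = mgs_qr(A)
    n = len(R)
    c = [sum(Q[i][k] * b[i] for i in range(len(b))) for k in range(n)]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (c[k] - sum(R[k][j] * x[j] for j in range(k + 1, n))) / R[k][k]
    return x
```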
22
2. Numerical Solution of Linear Systems
matrices, and a third one (the Givens method) on the use of rotation matrices, which requires twice as many operations as the Householder method (see, e.g., Isaacson and Keller (1966) and Golub and Van Loan (1989)).

2.1.1 Banded Systems
In many practical situations, when the system (2.1.1) arises from the discretization of a differential problem, the matrix A is banded, i.e., there is an integer q (the bandwidth), 1 ≤ q < n, such that

(2.1.15) a_{ij} = 0 if i, j are such that |i − j| > q .
In particular, A has at most 2q + 1 nonzero elements per row. When solving banded systems, the triangular factors of A are also banded, allowing for remarkable savings in terms of storage and operations. Indeed, if A has a band of width q and can be decomposed as LU, then both triangular factors L and U are banded with bandwidth q. The modification of the Gaussian elimination algorithm (2.1.3) is straightforward, and requires O(nq²) operations. The corresponding solution of a triangular system by forward or backward elimination requires O(nq) operations. In general, when the GEM is implemented with a pivoting strategy, the original band structure of A is not preserved for the factors. More precisely, the bandwidth of U can be bigger than that of A's upper triangle (= 2q in the case of partial pivoting). Even worse, nothing can be said in general about L's bandwidth, although it can be shown that L has at most q + 1 nonzeros per column. If A is symmetric, positive definite and satisfies (2.1.15), its Cholesky decomposition (2.1.9) reads

(2.1.16)
For k = 1, …, n
  s_k = max(1, k − q) ,  r_k = min(n, k + q)
  l_{kk} = ( a_{kk} − Σ_{p=s_k}^{k−1} l_{kp}² )^{1/2}
  For i = k + 1, …, r_k and k ≠ n
    l_{ik} = l_{kk}^{-1} ( a_{ik} − Σ_{p=s_k}^{k−1} l_{ip} l_{kp} )
  end i
end k ,

and requires O(nq²) operations (but about one half of those needed by the complete GEM) plus n square roots.

Remark 2.1.1 (Sparse systems). If A has a bandwidth q which is much smaller than its dimension n, then A is sparse. More generally, A is a sparse matrix if the number of its nonzero elements is O(n).
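For illustration, the banded Cholesky algorithm (2.1.16) can be transcribed directly (a Python sketch of ours, with 0-based indices; entries outside the band are never touched):

```python
def banded_cholesky(a, q):
    """Cholesky factor L (lower triangular) of an SPD matrix a with
    bandwidth q, following (2.1.16) with 0-based indices."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for k in range(n):
        sk = max(0, k - q)                       # s_k
        rk = min(n - 1, k + q)                   # r_k
        l[k][k] = (a[k][k] - sum(l[k][p] ** 2 for p in range(sk, k))) ** 0.5
        for i in range(k + 1, rk + 1):
            l[i][k] = (a[i][k]
                       - sum(l[i][p] * l[k][p] for p in range(sk, k))) / l[k][k]
    return l
```

For a tridiagonal SPD matrix (q = 1) the factor is bidiagonal, as claimed above.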
Unlike the case of banded matrices, quite often the topological structure of sparse matrices is not well defined. This is for instance the case of systems associated with partial differential equations over unstructured meshes, and/or with a careless ordering of gridpoints. A direct method for a sparse system should be capable of exploiting the sparsity of A. As a matter of fact, a decomposition method applied to a sparse matrix without caring about its structure can lead to a substantial fill-in, i.e., generate a considerable number of new nonzero elements. The fill-in can be reduced by rearranging rows and columns (which, for partial differential equations, amounts to renumbering the nodes) according to special criteria. Amongst these, we mention the method of nested dissection and its variants. Another approach may involve an intelligent use of data structures. For an analysis of special direct methods for the solution of sparse systems we refer to Bunch and Rose (1976), Björck, Plemmons and Schneider (1981), George and Liu (1981), Duff, Erisman and Reid (1986). □

Remark 2.1.2 (Special purpose techniques). Special implementations of the GEM are carried out in specific situations. We discuss some of them in Section 6.3.2. Here we will mention three remarkable instances: (i) the frontal method, which consists of carrying out the Gaussian elimination process for the finite element stiffness matrix in a suitable order (see the review paper by Liu (1992)); (ii) the fast Poisson solver for the finite element (or finite difference) system associated with the discretization of the Poisson equation on a structured mesh (see Buzbee, Golub and Nielson (1970), Dorr (1970); see also Vajtersic (1993)); (iii) the Haidvogel–Zang diagonalization of the matrix associated with the spectral Galerkin approximation of the Poisson problem (see Haidvogel and Zang (1979)) or with the spectral collocation approximation of the Helmholtz equation (see Haldenwang, Labrosse, Abboudi and Deville (1984)).
□

2.1.2 Error Analysis
To begin with, we will recall a few basic notions about matrices. A matrix norm is a function f : ℝ^{n×n} → ℝ that satisfies, for all matrices A, B ∈ ℝ^{n×n}:

f(A) ≥ 0  (f(A) = 0 if and only if A = 0)
f(A + B) ≤ f(A) + f(B)
f(αA) = |α| f(A) ,  α ∈ ℝ
f(AB) ≤ f(A) f(B) .
A double-bar notation with subscripts usually designates matrix norms, i.e., ‖A‖ = f(A). The Frobenius norm

‖A‖_F := ( Σ_{i,j=1}^n |a_{ij}|² )^{1/2}

and the p-norms

(2.1.18) ‖A‖_p := max_{|w|_p = 1} |Aw|_p ,  1 ≤ p ≤ ∞ ,

are the most frequently used matrix norms in numerical linear algebra. We recall that the p-norms for vectors of ℝⁿ are defined as

(2.1.19) |w|_p := ( Σ_{i=1}^n |w_i|^p )^{1/p} ,  1 ≤ p < ∞ ,  |w|_∞ := max_{1≤i≤n} |w_i| .

For each ε > 0 we can find a natural matrix norm ‖·‖_* such that ‖B‖_* ≤ ρ(B) + ε. From (2.2.4) we obtain

|e^k|_* ≤ [ρ(B) + ε]^k |e^0|_* ,

which yields convergence if ε is small enough. Conversely, if ρ(B) ≥ 1 one can choose x^0 such that e^0 = w*, the eigenvector corresponding to the eigenvalue λ* of maximum absolute value. For this choice (2.2.4) gives e^k = (λ*)^k w* ,
and convergence cannot hold, whatever vector norm |·| is considered. Clearly, the smaller ρ(B), the quicker the convergence. In the particular case in which B is diagonalizable, there exists a nonsingular matrix T such that B = TΛT^{-1}, with Λ := diag(λ_1, …, λ_n), λ_j being the eigenvalues of B. From the recurrence relation (2.2.4) it follows that e^k = TΛ^k T^{-1} e^0. Defining ε^k := T^{-1} e^k, we have immediately ε^k = Λ^k ε^0, i.e., ε_j^k = (λ_j)^k ε_j^0 for j = 1, …, n, k ≥ 0. Therefore, up to a change of basis, the effect of a single iteration amounts to damping each component ε_j^0 of the initial error by a factor that is given by the corresponding eigenvalue λ_j. Since Λ is a diagonal matrix, we have ‖Λ‖_p = ρ(Λ) = ρ(B) for each 1 ≤ p ≤ ∞. Therefore we have

|ε^k|_p ≤ [ρ(B)]^k |ε^0|_p ,  |e^k|_p ≤ χ_p(T) [ρ(B)]^k |e^0|_p ,

where χ_p(T) := ‖T‖_p ‖T^{-1}‖_p is the p-condition number of the diagonalization matrix T. The number σ := −log ρ(B) is called the asymptotic rate of convergence of the iterative procedure (2.2.2). Its inverse 1/σ represents a measure of the average number of iterations that are needed in order to reduce the norm of the initial error by a factor 1/e. Now let us present some sufficient conditions for convergence. A splitting (2.2.1) is said to be regular if (P^{-1})_{ij} ≥ 0 and N_{ij} ≥ 0 for i, j = 1, …, n.
As a consequence of the Perron–Frobenius theory on nonnegative matrices, it can be proven that if (2.2.1) is a regular splitting and (A^{-1})_{ij} ≥ 0 for i, j = 1, …, n, then ρ(P^{-1}N) < 1 (Varga (1962), p. 89). The following result allows for a comparison between regular splittings (Varga (1962), p. 90).

Theorem 2.2.1 Let A be such that (A^{-1})_{ij} > 0 for i, j = 1, …, n. Consider two regular splittings of A: A = P_1 − N_1 and A = P_2 − N_2. Assume moreover that

(2.2.6) (N_1)_{ij} ≤ (N_2)_{ij} for i, j = 1, …, n ,

and N_2 − N_1 is not the null matrix. Then

(2.2.7) ρ(P_1^{-1}N_1) < ρ(P_2^{-1}N_2) < 1 .

The inequality (2.2.7) states that the first splitting is virtually superior to the latter, provided (2.2.6) holds (notice that this condition is quite easy to check). Another interesting result is the following:

Theorem 2.2.2 (Householder–John). Let A be symmetric, positive definite and split as in (2.2.1). Suppose that the matrix P + Pᵀ − A is positive definite. Then ρ(P^{-1}N) < 1.
Proof. Let us write P^{-1}N = I − P^{-1}A, and let λ ∈ ℂ, z ∈ ℂⁿ, z ≠ 0, be an eigenvalue and the corresponding eigenvector of I − P^{-1}A. Thus we have

(2.2.8) (1 − λ) Pz = Az ,

and, in particular, it follows that λ ≠ 1. Take the scalar product in ℂⁿ of (2.2.8) by z = w + iy, w, y ∈ ℝⁿ, and consider also the complex conjugate of the result. As A = Aᵀ, we have (Az, z)_{ℂⁿ} ∈ ℝ and

(Az, z)_{ℂⁿ} = (1 − λ)(Pz, z)_{ℂⁿ} = (1 − λ̄)(Pᵀz, z)_{ℂⁿ} .

Thus

(2.2.9) ( 1/(1 − λ) + 1/(1 − λ̄) − 1 ) (Az, z)_{ℂⁿ} = ((P + Pᵀ − A)z, z)_{ℂⁿ} .

Taking the real part of (2.2.9) one easily obtains

(1 − |λ|²)/|1 − λ|² [ (Aw, w) + (Ay, y) ] = ((P + Pᵀ − A)w, w) + ((P + Pᵀ − A)y, y) ,

where (·,·) is the scalar product in ℝⁿ. Since both A and (P + Pᵀ − A) are positive definite, we can conclude that |λ| < 1, i.e., ρ(P^{-1}N) < 1. □
We conclude this Section with another convergence theorem. First, we recall that a matrix is said to be N-stable (or negative-stable) if all of its eigenvalues have positive real parts (see Young (1971)).

Theorem 2.2.3 The convergence condition (2.2.5) holds if, and only if, I − B is nonsingular and moreover

(2.2.10) H := (I − B)^{-1}(I + B) is N-stable .

Proof. Under the assumption that I − B is nonsingular, from (2.2.10) we have H + I = 2(I − B)^{-1}, H − I = 2(I − B)^{-1}B, then

B = (H + I)^{-1}(H − I) .

If λ is an eigenvalue of B, then μ := (1 + λ)/(1 − λ) is an eigenvalue of H. Moreover, we can write

λ = (μ − 1)/(μ + 1) = ((Re μ − 1) + i Im μ)/((Re μ + 1) + i Im μ) .

Thus |λ| < 1 if and only if Re μ > 0. On the other hand, if (2.2.5) holds, I − B is clearly nonsingular and, if μ is an eigenvalue of H, λ = (μ − 1)/(μ + 1) is an eigenvalue of B. The thesis thus follows by proceeding as before. □
2.3 Classical Iterative Methods

We review here the classical Jacobi, Gauss–Seidel and other relaxation iterative procedures.

2.3.1 Jacobi Method

It is perhaps the simplest iterative scheme, and it can be defined for matrices having nonzero diagonal elements. If the matrix A is represented as

(2.3.1) A = D_A + L_A + U_A ,

where D_A is the diagonal of A and L_A, U_A are its lower and upper triangular parts, respectively, and we set D := D_A, E := L_A, F := U_A, the Jacobi method is based on the splitting

(2.3.2) P := D ,  N := −(E + F) .

The Jacobi iteration matrix is therefore B_J := −D^{-1}(E + F). If we define the residual vector as

(2.3.3) r^k := b − Ax^k ,

the Jacobi iteration can be written as

x^{k+1} = x^k + D^{-1} r^k .

Componentwise, this algorithm reads

(2.3.4) x_i^{k+1} = (1/a_{ii}) ( b_i − Σ_{j≠i} a_{ij} x_j^k ) ,  i = 1, …, n .

The condition ρ(B_J) < 1 is satisfied if A is strictly diagonally dominant (see (2.1.4)). In fact,

ρ(B_J) ≤ ‖D^{-1}(E + F)‖_∞ = max_{1≤i≤n} Σ_{j≠i} |a_{ij}/a_{ii}| < 1 .

… ρ(B_J) > 1 .
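The componentwise Jacobi update (2.3.4) is a few lines of code. The sketch below (ours; starting guess x^0 = 0 and the iteration count are arbitrary choices) converges for any strictly diagonally dominant matrix:

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration (2.3.4); converges when A is strictly
    diagonally dominant. A is a list of rows, x^0 = 0."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        # each new component uses only the previous iterate x^k
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```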
Thus the Jacobi matrix and the GaussSeidel matrix are either both convergent, or both divergent.
Theorem 2.3.3 Let A be a tridiagonal matrix. If λ is an eigenvalue of B_J, then λ² is an eigenvalue of B_GS. Similarly, if μ is an eigenvalue of B_GS greater than zero, then the square roots of μ are eigenvalues of B_J. In particular, this yields that the Gauss–Seidel method converges if, and only if, the Jacobi iteration converges; moreover

(2.3.10) ρ(B_GS) = [ρ(B_J)]² .
2.3.3 Relaxation Methods (S.O.R. and S.S.O.R.)

The Successive Over-Relaxation (S.O.R.) method is an iteration procedure that accelerates the Gauss–Seidel method by the help of a positive parameter ω. Still assuming that a_{ii} ≠ 0 for i = 1, …, n, in order to get the new iterate x^{k+1} from x^k we set:

(2.3.11) x_i^{k+1} = (1 − ω) x_i^k + (ω/a_{ii}) ( b_i − Σ_{j<i} a_{ij} x_j^{k+1} − Σ_{j>i} a_{ij} x_j^k )

for k ≥ 0, i = 1, …, n. This amounts to using the splitting matrices

(2.3.12) P := (1/ω) D + E ,  N := ((1 − ω)/ω) D − F .

The corresponding iteration matrix is

(2.3.13) B_ω := (D + ωE)^{-1} [ (1 − ω) D − ωF ] ,
and it coincides with B_GS if ω = 1. Notice that in general neither P nor B_ω are symmetric matrices, even if A satisfies this property. As far as convergence is concerned, we have the following necessary condition for ω.

Theorem 2.3.4 (Kahan). The S.O.R. iteration matrix verifies

(2.3.14) ρ(B_ω) ≥ |ω − 1| ,

therefore a necessary condition for convergence is

(2.3.15) 0 < ω < 2 .

Proof. If {λ_i | i = 1, …, n} are the eigenvalues of B_ω, we easily get

Assume now that (Aw, w) > 0 for each w ∈ ℝⁿ, w ≠ 0, and suppose that A is split as

(2.3.26) A = A_1 + A_2 ,

where A_k are nonnegative definite matrices, i.e., (A_k w, w) ≥ 0 for each w ∈ ℝⁿ, k = 1, 2, and either A_1 or A_2 is positive definite. Let us consider the following scheme (see, e.g., Il'in (1966)):
(2.3.27)
γ(x^{k+1/2} − x^k) + A_1 x^{k+1/2} = b − A_2 x^k
γ(x^{k+1} − x^{k+1/2}) + A_2 x^{k+1} = A_2 x^k + ργ(x^{k+1/2} − x^k)

where γ > 0, (1 + ρ) ≠ 0. For ρ = 1 we have the Peaceman–Rachford scheme, and for ρ = 0 the Douglas–Rachford scheme. It can be easily seen that the matrices (γI + A_1) and (γI + A_2) are positive definite, hence nonsingular. From a continuity argument it follows that if x^k converges, also x^{k+1/2} is convergent. Further, the limit is the same; as a matter of fact, if we define

y := lim_k x^k ,  z := lim_k x^{k+1/2} ,

then from (2.3.27) we find

γ(z − y) + A_1 z = b − A_2 y ,  γ(1 + ρ)(y − z) = 0 ,

hence y = z = A^{-1}b. To prove the convergence of x^k the following result due to Kellogg (1963) will be useful.
Proposition 2.3.1 (Kellogg lemma). If G is a nonnegative definite matrix and α is a real nonnegative parameter, then

(2.3.28) ‖(I − αG)(I + αG)^{-1}‖_2 ≤ 1 .

If G is positive definite and α is positive, then

(2.3.29) ‖(I − αG)(I + αG)^{-1}‖_2 < 1 .

Here ‖·‖_2 denotes the matrix norm subordinated to the euclidean vector norm (see (2.1.18) and (2.1.19)).

Proof. We have

‖(I − αG)(I + αG)^{-1}‖_2² = sup_{w∈ℝⁿ, w≠0} ((I − αG)(I + αG)^{-1}w, (I − αG)(I + αG)^{-1}w)/(w, w)
= sup_{y∈ℝⁿ, y≠0} ((I − αG)y, (I − αG)y)/((I + αG)y, (I + αG)y)
= sup_{y∈ℝⁿ, y≠0} [ (y, y) − 2α(Gy, y) + α²(Gy, Gy) ] / [ (y, y) + 2α(Gy, y) + α²(Gy, Gy) ] .

Since α(Gy, y) ≥ 0, (2.3.28) follows at once. On the other hand, (2.3.29) holds if α(Gy, y) > 0 for each y ∈ ℝⁿ, y ≠ 0. □

Let us go back to the iterations (2.3.27). Eliminating x^{k+1/2} we find that the iteration matrix is given by

(2.3.30) B = (γI + A_2)^{-1} (γI + A_1)^{-1} (γ²I − ργA + A_1A_2) .

The spectral radius of B is thus the same as that of

(2.3.31) B* = (γI + A_1)^{-1} (γ²I − ργA + A_1A_2) (γI + A_2)^{-1} .

Let us take now ρ = 1, i.e., consider the Peaceman–Rachford scheme. It can be easily checked that

(γ²I − γA + A_1A_2) = (γI − A_1)(γI − A_2) .

Hence, by Proposition 2.3.1,

‖B*‖_2 ≤ ‖(I − γ^{-1}A_1)(I + γ^{-1}A_1)^{-1}‖_2 ‖(I − γ^{-1}A_2)(I + γ^{-1}A_2)^{-1}‖_2 < 1 ,

and this ensures the convergence of x^k. For an analysis of the Il'in and Douglas–Rachford schemes we refer, e.g., to Yanenko (1971), pp. 59–66. Moreover, several methods are known to accelerate the convergence of x^k, especially exploiting information on the spectrum of A, choosing a different parameter γ_k at each step and using them cyclically. A thorough discussion is given in Young (1971), Chap. 17.
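The Peaceman–Rachford scheme (2.3.27) with ρ = 1 can be tried on a small example. The Python sketch below is ours: for illustration it takes the splitting A_1 = A_2 = A/2 (an arbitrary admissible choice for SPD A, so both halves are positive definite) and uses a plain Gaussian elimination helper for the two solves per step:

```python
def solve(M, rhs):
    """Helper: dense Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented copy
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[piv] = A[piv], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

def peaceman_rachford(A1, A2, b, gamma, iters=50):
    """Scheme (2.3.27) with rho = 1:
    (gamma*I + A1) x^{k+1/2} = b - A2 x^k      + gamma x^k
    (gamma*I + A2) x^{k+1}   = b - A1 x^{k+1/2} + gamma x^{k+1/2}
    (the second right-hand side is the rho = 1 form, rewritten via the first)."""
    n = len(b)
    def plus_gamma(M):
        return [[M[i][j] + (gamma if i == j else 0.0) for j in range(n)]
                for i in range(n)]
    G1, G2 = plus_gamma(A1), plus_gamma(A2)
    x = [0.0] * n
    for _ in range(iters):
        r1 = [b[i] - sum(A2[i][j] * x[j] for j in range(n)) + gamma * x[i]
              for i in range(n)]
        xh = solve(G1, r1)
        r2 = [b[i] - sum(A1[i][j] * xh[j] for j in range(n)) + gamma * xh[i]
              for i in range(n)]
        x = solve(G2, r2)
    return x
```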
2.4 Modern Iterative Methods

In this Section we present some families of iterative methods whose acceleration parameters need not be determined on the basis of an eigenvalue analysis, but rather can be computed by a closed formula depending upon the available iterates. They include gradient and conjugate gradient methods with preconditioners. A thorough presentation of these and more general methods can be found, e.g., in Marchuk and Kuznetsov (1974), Hestenes (1980), Golub and Van Loan (1989).

2.4.1 Preconditioned Richardson Method

Iteration (2.2.2) can be restated equivalently as

P(x^{k+1} − x^k) = r^k ,  k ∈ ℕ ,

where r^k is the residual at the step k (see (2.3.3)). According to the terminology that will be used later, the matrix P in the splitting (2.2.1) can therefore be regarded as a preconditioner for the matrix A. Let us recall that P needs to be nonsingular and "easy" to invert. This process can be generalized as follows

(2.4.1) P(x^{k+1} − x^k) = α_k r^k ,

where, for each k, α_k ≠ 0 is a real acceleration parameter. The iteration (2.4.1) is called the preconditioned Richardson method: stationary if α_k = α is constant for all k, dynamical if α_k changes along the iterations. The stationary case can be formulated recursively as follows. Let x^0 be given, and r^0 := b − Ax^0. Subsequent iterations are made according to

(2.4.2)
P z^k := r^k
x^{k+1} := x^k + α z^k
r^{k+1} := r^k − α A z^k

for k ≥ 0. At each step, (2.4.2) requires the solution of a linear system with matrix P, along with a matrix–vector multiplication involving the original matrix A. The preconditioned Richardson iteration matrix is

(2.4.3) R_α := I − α P^{-1} A .
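The three steps of (2.4.2) translate directly into code. The following Python sketch (ours) takes, for simplicity, a diagonal preconditioner P, stored as the list of its diagonal entries, so that the solve P z^k = r^k is a division per component:

```python
def richardson(A, b, P_diag, alpha, iters=200):
    """Stationary preconditioned Richardson method (2.4.2) with a
    diagonal preconditioner P (given by its diagonal entries)."""
    n = len(b)
    x = [0.0] * n
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    for _ in range(iters):
        z = [r[i] / P_diag[i] for i in range(n)]        # P z^k = r^k
        x = [x[i] + alpha * z[i] for i in range(n)]     # x^{k+1}
        Az = [sum(A[i][j] * z[j] for j in range(n)) for i in range(n)]
        r = [r[i] - alpha * Az[i] for i in range(n)]    # r^{k+1}
    return x
```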
From now on we denote by η_i the eigenvalues of P^{-1}A.

Theorem 2.4.1 For any nonsingular matrix P, the stationary preconditioned Richardson method converges if, and only if,

(2.4.4) (2 Re η_i)/(α |η_i|²) > 1  ∀ i = 1, …, n .

Proof. Let us apply Theorem 2.2.3 to the current situation. Owing to (2.4.3) we have

(I − R_α)^{-1}(I + R_α) = (2/α) A^{-1}P − I .

Denoting by μ_i the eigenvalues of A^{-1}P, it follows from Theorem 2.2.3 that ρ(R_α) < 1 if, and only if, (2/α) Re μ_i > 1 for all i = 1, …, n. Condition (2.4.4) now follows, noting that η_i = 1/μ_i for all i = 1, …, n. □

Let us notice that, if the sign of the real parts of the eigenvalues of P^{-1}A is not constant, then the stationary preconditioned Richardson method cannot converge.

Corollary 2.4.1 If P^{-1}A has real eigenvalues, then the stationary preconditioned Richardson method converges if, and only if,

(2.4.5) 0 < α η_i < 2  ∀ i = 1, …, n .

Assuming now that A is symmetric and positive definite, we know that there exists an orthonormal basis given by eigenvectors of A, and that the eigenvalues λ_j of A are strictly positive. By expanding x − x^0 with respect to these eigenvectors, some simple calculations yield
We choose

p(x) = T_{k+1}( (λ_max + λ_min − 2x)/(λ_max − λ_min) ) / T_{k+1}( (λ_max + λ_min)/(λ_max − λ_min) ) ,

where T_{k+1}(x) is the Chebyshev polynomial of degree k + 1 introduced in Section 4.3.1. Since |T_{k+1}(y)| ≤ 1 for |y| ≤ 1, it follows that

max_{x∈[λ_min, λ_max]} |p(x)| = [ T_{k+1}( (λ_max + λ_min)/(λ_max − λ_min) ) ]^{-1} .
Then, instead of (2.1.1), we consider the equivalent system

(2.5.3) A_* x_* = b_* , where A_* := C^{-1}AC^{-1} ,  b_* := C^{-1}b ,  x_* := Cx .
The matrix A_* is symmetric and positive definite with respect to the usual euclidean scalar product, therefore the CG iteration (2.4.25)–(2.4.29) can be applied directly to (2.5.3) and produces a sequence of approximants {x_*^k}. It can easily be shown that the resulting algorithm, when expressed in terms of the iterates x^k := C^{-1}x_*^k, fits again under the form (2.4.36)–(2.4.41). In particular, only the preconditioner P is needed to define the iterative procedure; the matrix C turns out to be merely an auxiliary tool for the theoretical justification of the scheme. A third possible procedure is to write the preconditioner P as

(2.5.4) P = HHᵀ ,

where H is a nonsingular matrix, for instance the lower triangular matrix obtained by the Cholesky decomposition. Then one considers the equivalent system A^* x^* = b^*, where

A^* := H^{-1}AH^{-ᵀ} ,  b^* := H^{-1}b ,  x^* := Hᵀx
(here, H^{-ᵀ} denotes (H^{-1})ᵀ = (Hᵀ)^{-1}). Also the matrix A^* turns out to be symmetric and positive definite with respect to the usual euclidean scalar product, and the CG iteration (2.4.25)–(2.4.29), rewritten in terms of x^k, takes the form (2.4.36)–(2.4.41). Concerning the spectral properties of P^{-1}A, A_* and A^*, we have the following result:

(2.5.5) when P is given by (2.5.2), the matrices P^{-1}A and A_* have the same eigenvalues; when P is given by (2.5.4), the matrices P^{-1}A and A^* have the same eigenvalues.

This follows from the similarity transformations

C^{-1} A_* C = C^{-2}A = P^{-1}A (P = C²) ,  H^{-ᵀ} A^* Hᵀ = (HHᵀ)^{-1}A = P^{-1}A (P = HHᵀ) .

We conclude that the preconditioned matrix P^{-1}A has the same spectral condition number as A_* (when P = C²) and A^* (when P = HHᵀ). To estimate χ_sp(P^{-1}A) the following result is often quite useful.

Theorem 2.5.1 Let A and M be two symmetric and positive definite matrices such that

(2.5.6) K_1 (Mw, w) ≤ (Aw, w) ≤ K_2 (Mw, w) for each w ∈ ℝⁿ .

Then

(2.5.7) χ_sp(M^{-1}A) ≤ K_2/K_1 .
Proof. Since M^{-1}A is symmetric and positive definite with respect to (·,·)_M, it has positive eigenvalues η_i and real eigenvectors w_i, i = 1, …, n. Moreover, from the spectral relation

M^{-1}A w_i = η_i w_i

it follows that

(M^{-1}A w_i, w_i)_M = η_i (w_i, w_i)_M ,

thus η_i is given by the Rayleigh quotient

(2.5.8) η_i = (M^{-1}A w_i, w_i)_M / (w_i, w_i)_M .

From (2.5.6) we deduce

K_1 ≤ (M^{-1}Aw, w)_M / (w, w)_M ≤ K_2  ∀ w ∈ ℝⁿ , w ≠ 0 ,

and therefore

K_1 ≤ η_i ≤ K_2 for any eigenvalue η_i of M^{-1}A ,

from which (2.5.7) follows. □
Now the question is how to choose the preconditioner P so that P^{-1}A (or, equivalently, C^{-1}AC^{-1}, H^{-1}AH^{-ᵀ}) has an improved condition number. Clearly, for a preconditioned iteration method to be an effective procedure, we must be able to solve easily the linear systems associated with P. Hereafter, we always assume that A is symmetric and positive definite. Keeping in mind the decomposition (2.3.2), a simple preconditioner is provided by the diagonal matrix

(2.5.9) P := D = diag(a_{11}, …, a_{nn}) ,

i.e., the Jacobi preconditioner, or else by

(2.5.10)

Another way is to use the S.S.O.R. preconditioner

(2.5.11) P := (D + ωE) D^{-1} (D + ωE)ᵀ ,

for some ω ∈ (0, 2). In this case a matrix H that satisfies P = HHᵀ is given by

(2.5.12) H := (D + ωE)(D^{1/2})^{-1} ,  with D^{1/2} := diag(√a_{11}, …, √a_{nn}) .

With the choice of the S.S.O.R. preconditioner, if A is the matrix associated with the five-point finite difference discretization of the Poisson equation on a square based on line ordering and ω = ω* (see (2.3.19)), then χ_sp(P^{-1}A) = O(h^{-1}), whereas χ_sp(A) = O(h^{-2}), where h is the grid spacing. Another strategy involves computing an incomplete Cholesky decomposition of A, say H, and choosing (if H is nonsingular)

(2.5.13) P := HHᵀ .

The simplest way to compute H is to modify (2.1.10) as follows: we set h_{ij} = l_{ij} if a_{ij} ≠ 0, otherwise h_{ij} = 0. The corresponding algorithm reads:
(2.5.14)
For k = 1, …, n
  h_{kk} = ( a_{kk} − Σ_{p=1}^{k−1} h_{kp}² )^{1/2}
  For i = k + 1, …, n and k ≠ n
    if a_{ik} = 0 then h_{ik} = 0
    else h_{ik} = h_{kk}^{-1} ( a_{ik} − Σ_{p=1}^{k−1} h_{ip} h_{kp} )
  end i
end k .

(Here, it is understood that the sums do not apply for k = 1.) With this approach, P preserves the same sparsity pattern as A. Unfortunately, algorithm (2.5.14) is not always stable. Classes of matrices whose incomplete Cholesky decomposition is stable are identified in Manteuffel (1979). For a nonsymmetric matrix A, especially when A is banded, another preconditioning strategy can be based on the so-called incomplete LU (ILU) decomposition. We compute the LU decomposition of A but drop any fill-in in L or U outside of the original structure of A. Higher accuracy incomplete decompositions are also used: ILU(p) allows for p additional diagonals in L and U. We refer to Meijerink and van der Vorst (1981), Evans (1983) and van der Vorst (1989) for a survey. Other examples are provided by block preconditioners for block matrices (see Golub and Van Loan (1989) for a short presentation) or else polynomial preconditioners. Polynomial preconditioning consists of choosing a polynomial π and replacing the original system (2.1.1) by

π(A) A x = π(A) b ,
where π(A) is chosen according to some optimality criterion. The new system is then solved by an iterative procedure. A survey can be found in Saad (1989), where the issue of implementing such preconditioners on supercomputers is also addressed. We would also like to point out that in specific situations, other kinds of preconditioners can be successfully used. One instance is provided by preconditioners based on different numerical realizations, such as those given by finite element or finite difference discretizations of a differential operator that is being approximated by spectral methods (see Section 6.3.3). Other examples occur for those preconditioners keeping track of the structure of a differential operator the system matrix refers to. Among other cases, we mention the preconditioners of the pressure matrix arising from the approximation of the Stokes problem (see Section 9.6.1).
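A preconditioned CG iteration with the Jacobi preconditioner (2.5.9) takes only a few lines. The Python sketch below is ours and does not reproduce the book's pseudo-code (2.4.36)–(2.4.41) literally; the solve P z = r reduces to a division by the diagonal of A:

```python
def pcg(A, b, iters=50, tol=1e-12):
    """Conjugate gradient with the Jacobi preconditioner P = diag(A),
    see (2.5.9); a minimal sketch for SPD matrices."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # r^0 = b - A x^0 with x^0 = 0
    z = [r[i] / A[i][i] for i in range(n)]    # P z = r
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        if rz_new < tol:
            break
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```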
2.6 Conjugate Gradient and Lanczos-like Methods for Non-Symmetric Problems

This subject is currently receiving much attention, though a totally satisfactory iterative scheme is not yet available. The interested reader should keep abreast of the literature in this field. If A is a nonsingular matrix of order n, throughout this Section we set

(2.6.1) A_s := (A + Aᵀ)/2 ,  A_ss := (A − Aᵀ)/2 ;

A_s is the symmetric part of A, while A_ss is its skew-symmetric part. If A is positive definite so is A_s. When using a preconditioner P, in the definition (2.6.1) A should be replaced by P^{-1}A. In view of the good performance of the CG iteration for symmetric systems, a natural approach could be to convert any nonsingular system (2.1.1) into the symmetric and positive definite one (normal equations, such as those of the least squares method, see (2.1.13))

(2.6.2) AᵀAx = Aᵀb ,
and then apply the CG method to (2.6.2). This would lead to the following algorithm, known as the Conjugate Gradient Normal Residual (CGNR) method. Let x^0 be given, r^0 := b − Ax^0, p^0 := Aᵀr^0. For each k ≥ 0 the kth iteration becomes:

(2.6.3) α_k := |Aᵀr^k|² / |Ap^k|²
(2.6.4) x^{k+1} := x^k + α_k p^k
(2.6.5) r^{k+1} := r^k − α_k Ap^k
(2.6.6) β_{k+1} := |Aᵀr^{k+1}|² / |Aᵀr^k|²
(2.6.7) p^{k+1} := Aᵀr^{k+1} + β_{k+1} p^k
We recall that if A is symmetric and positive definite the CG method (2.4.25)(2.4.29) applied to (2.1.1) minimizes at each step the energy norm leklA
58
2. Numerical Solution of Linear Systems
(see (2.4.31)), by selecting direction vectors that are Aconjugate (i.e., Aorthogonal). The Conjugate Residual (CR) method is a variant of CG that minimizes the residual Irk I at each iteration, choosing direction vectors that are AT Aorthogonal. Both methods have the finite termination property (i.e., they converge in n steps to the solution in exact arithmetic). All generalizations of the CG and CR methods for nonsymmetric systems form a set of directions which are based on the space (2.6.8)
(2.6.8) K_m(r^0) := span{ r^0, Ar^0, A²r^0, …, A^{m−1}r^0 } ,

where m is a positive integer. We recall that K_m(r^0) is the Krylov space of order m generated from r^0. For a positive definite matrix A, Concus and Golub (1976) and Widlund (1978) derived the Generalized Conjugate Gradient (GCG) method, which requires at each iteration the solution of a symmetric system. Eisenstat, Elman and Schultz (1983) proposed the Generalized Conjugate Residual (GCR) method, which forms a new direction vector from the current residual by forcing the AᵀA-orthogonality to all preceding direction vectors. The initialization is: x^0 is a given vector, r^0 := b − Ax^0, p^0 := r^0. The kth iterate (k ≥ 0) is:
(2.6.9) α_k := (r^k, Ap^k) / |Ap^k|²
(2.6.10) x^{k+1} := x^k + α_k p^k
(2.6.11) r^{k+1} := r^k − α_k Ap^k
(2.6.12) β_j := −(Ar^{k+1}, Ap^j) / |Ap^j|² ,  k − k_0 + 1 ≤ j ≤ k
(2.6.13) p^{k+1} := r^{k+1} + Σ_{j=k−k_0+1}^{k} β_j p^j
(2.6.14) Ap^{k+1} = Ar^{k+1} + Σ_{j=k−k_0+1}^{k} β_j Ap^j
for k_0 = k + 1. If A is positive definite, then the GCR method converges and

(2.6.15) |r^k| ≤ [ 1 − λ_min(A_s)² / λ_max(AᵀA) ]^{k/2} |r^0| .
with h_{i,j} = 0 for i > j + 1. If A is symmetric, H_m is symmetric and tridiagonal. Otherwise, the Arnoldi method generalizes the symmetric Lanczos algorithm to nonsymmetric matrices (see, e.g., Golub and Van Loan (1989)). The Arnoldi method is used to approximate the solution of (2.1.1) in the Full Orthogonalization Method (FOM) (Saad (1981)). The approximate solution is x^j = x^0 + V_j y^j, where y^j := H_j^{-1} βe^1, β := |r^0| and e^1 = (1, 0, …, 0)ᵀ is the first unit vector in ℝ^m. FOM is theoretically equivalent to the Orthores method developed by Young and Jea (1980). The GMRES method uses the Arnoldi basis to compute a point x^m ∈ x^0 + K_m(r^0) whose residual norm |r^m| is minimum, m being an increasing index that will be upgraded until convergence is achieved. Its description is as follows. We start setting m = 1.

(i) Choose x^0 and compute r^0 := b − Ax^0, β := |r^0| and v^1 := r^0/|r^0|;
(ii) perform the Arnoldi process (2.6.19), then define H̄_m as the (m + 1) × m upper Hessenberg matrix whose nonzero entries are the coefficients h_{i,j};
(2.6.20)
(iii) form the approximate solution x^m := x^0 + V_m y^m, where y^m minimizes J(y) := |βe^1 − H̄_m y| among all vectors y ∈ ℝ^m;
(iv) restart: if satisfied then stop, or else set m ← m + 1 and go to (i).
If A is diagonalizable, A = TΛT^{-1}, after k steps the GMRES residual can be bounded as

|r^k| ≤ C_k ε^{(k)} |r^0| ,  ε^{(k)} := min_{p ∈ P_k, p(0)=1} max_{j=1,…,n} |p(λ_j)| ,

where C_k depends on the condition number of the eigenvector matrix T, associated with the euclidean vector norm |·|:

‖T‖_2 := max{ |Tv| : v ∈ ℂⁿ , |v| = 1 } .
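The Arnoldi process underlying FOM and GMRES is short enough to state in code. The Python sketch below (ours) builds the orthonormal Krylov basis and the rectangular Hessenberg matrix by modified Gram–Schmidt:

```python
def arnoldi(A, r0, m):
    """Arnoldi process: orthonormal basis v^1,...,v^{m+1} of Krylov
    spaces generated from r0, and the (m+1) x m Hessenberg matrix
    of orthogonalization coefficients h_{i,j}."""
    n = len(r0)
    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm0 = sum(v * v for v in r0) ** 0.5
    V = [[v / norm0 for v in r0]]                 # v^1 = r0 / |r0|
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(V[j])
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i][j] = sum(w[k] * V[i][k] for k in range(n))
            w = [w[k] - H[i][j] * V[i][k] for k in range(n)]
        H[j + 1][j] = sum(v * v for v in w) ** 0.5
        if H[j + 1][j] == 0.0:                    # lucky breakdown
            break
        V.append([v / H[j + 1][j] for v in w])
    return V, H
```

GMRES then minimizes |βe^1 − H̄_m y| over y, a small (m+1) × m least squares problem.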
Very often, C_k is of order one, hence k steps of GMRES reduce the residual norm to the order of ε^{(k)}, provided that the condition number of T is not too large. From a practical point of view, when m (the dimension of the Krylov space) increases, the number of vectors requiring storage increases like m, and the number of multiplications like (1/2)m²n. To remedy this difficulty, the GMRES method can be used iteratively, i.e., the algorithm can be restarted every k_0 steps, where k_0 is some fixed integer parameter. The corresponding algorithm, called GMRES(k_0), can be described as follows (as usual, P denotes the preconditioner):

(i) Choose x^0 and compute z^0 := P^{-1}(b − Ax^0) and v^1 := z^0/|z^0|;
(ii) for j = 1, …, k_0

h_{i,j} := (P^{-1}Av^j, v^i) for i = 1, …, j
ṽ^{j+1} := P^{-1}Av^j − Σ_{i=1}^{j} h_{i,j} v^i

(2.6.23) h_{j+1,j} := |ṽ^{j+1}| ,  v^{j+1} := ṽ^{j+1}/h_{j+1,j} .

Define H̄_{k_0} as the (k_0 + 1) × k_0 upper Hessenberg matrix whose nonzero entries are the coefficients h_{i,j};
(iii) form the approximate solution x^{k_0} := x^0 + V_{k_0} y^{k_0}, where y^{k_0} minimizes |βe^1 − H̄_{k_0} y| among all vectors y ∈ ℝ^{k_0} and β := |z^0|;
(iv) restart: compute r^{k_0} := b − Ax^{k_0}. If satisfied then stop, or else set x^0 ← x^{k_0} and go to (i).

This method is mathematically equivalent to Orthodir(k_0), thus its convergence is not guaranteed for nonsymmetric and indefinite matrices. In some cases, it might be convenient to use a variable preconditioner along the iterations. This amounts to replacing P by P_j in (ii).
2.6.3 BiCG, CGS and BiCGSTAB Iterations

Now we present three variants of the CG method for nonsymmetric matrices. The residual vectors r^k generated by the CG method satisfy a three-term recurrence. This property is lost when A is nonsymmetric. Also, when A is not positive definite, it does not make sense to minimize |e^k|_A at each step, as |·|_A does not define a norm. The Bi-Conjugate Gradient (BiCG) method, which has been introduced by Fletcher (1976), constructs a residual r^k orthogonal with respect to another sequence of vectors r̃^0, r̃^1, …, r̃^{k−1}, and, vice versa, r̃^k is orthogonal with respect to r^0, r^1, …, r^{k−1}. This method also terminates within n steps at most, but there is no minimization property as in CG (or in GMRES) for the intermediate steps. A description of the algorithm is as follows. Choose x^0 and set p^0 := r^0 := b − Ax^0, p̃^0 := r̃^0, where r̃^0 is a vector such that ρ_0 := (r̃^0, r^0) ≠ 0 (e.g., r̃^0 = r^0). Then for each k ≥ 0 do:

(2.6.24) α_k := ρ_k / (p̃^k, Ap^k)
(2.6.25) x^{k+1} := x^k + α_k p^k
(2.6.26) r^{k+1} := r^k − α_k Ap^k ,  r̃^{k+1} := r̃^k − α_k Aᵀp̃^k
(2.6.27) ρ_{k+1} := (r̃^{k+1}, r^{k+1})
(2.6.28) β_{k+1} := ρ_{k+1} / ρ_k
(2.6.29) p^{k+1} := r^{k+1} + β_{k+1} p^k
(2.6.30) p̃^{k+1} := r̃^{k+1} + β_{k+1} p̃^k

If A is symmetric, then BiCG coincides exactly with CG, but it does twice the work. In case of convergence, both {r^k} and {r̃^k} converge toward zero, but only the convergence of {r^k} is exploited. Based on this observation, Sonneveld (1989) proposed a modification that concentrates every effort on the r^k vectors. The algorithm, which is called Conjugate Gradient-Squared (CGS) method, can be represented by the following scheme. (We provide directly a version for the preconditioned system (2.5.1).) Choose x^0 and set w^0 := p^0 := r^0 := b − Ax^0. Choose moreover r̃^0, a vector such that ρ_0 := (r̃^0, r^0) ≠ 0. Then for each k ≥ 0 do:
(2.6.31)  P p̂^k = p^k
(2.6.32)  v^k := A p̂^k
(2.6.33)  α_k := ρ_k / (v^k, r̃⁰)
(2.6.34)  q^k := w^k − α_k v^k
(2.6.35)  P u^k = w^k + q^k
(2.6.36)  x^{k+1} := x^k + α_k u^k
(2.6.37)  r^{k+1} := r^k − α_k A u^k
(2.6.38)  ρ_{k+1} := (r^{k+1}, r̃⁰)
(2.6.39)  β_{k+1} := ρ_{k+1} / ρ_k
(2.6.40)  w^{k+1} := r^{k+1} + β_{k+1} q^k
(2.6.41)  p^{k+1} := w^{k+1} + β_{k+1} (q^k + β_{k+1} p^k)
It can be proven that, in the absence of round-off errors, both BiCG and CGS terminate after at most n steps by zero division, since ρ_k will be 0 for some k ≤ n. An essential requirement for convergence in considerably fewer than n steps is that A is an N-stable matrix (i.e., its eigenvalues have positive real parts). In this case a suitable vector r̃⁰ exists (though not easy to find in practice) such that both BiCG and CGS converge at a rate comparable to that of CG. It can be shown that CGS generates residual vectors r^k given by

r^k = P_k²(A) r⁰,

where P_k(A) is the k-th degree polynomial in A for which P_k(A) r⁰ is equal to the residual at the k-th iteration step obtained by means of BiCG. The residual reduction operator in CGS is therefore the square of that arising in BiCG. In particular this explains why Aᵀ is no longer needed in CGS. Neither BiCG nor CGS minimizes the residual, hence in practical calculations we may observe many local peaks in the convergence curve of both BiCG and CGS. Instead of simply squaring the BiCG polynomial, as in CGS, one could use a product of P_k(A) times a suitable polynomial of degree k. In the BiCGSTAB method, introduced by van der Vorst (1992), the residual vector turns out to be

r^k = Q_k(A) P_k(A) r⁰,

where Q_k(x) = ∏_{i=1}^k (1 − ω_i x) and the ω_i are suitable constants. The BiCGSTAB algorithm can be formulated as follows. Choose x⁰ and set p⁰ := r⁰ := b − Ax⁰. Choose moreover r̃⁰, a vector such that ρ₀ := (r̃⁰, r⁰) ≠ 0. Then for each k ≥ 0 do:
(2.6.42)  P p̂^k = p^k
(2.6.43)  v^k := A p̂^k
(2.6.44)  α_k := ρ_k / (v^k, r̃⁰)
(2.6.45)  s^k := r^k − α_k v^k
(2.6.46)  P ŝ^k = s^k
(2.6.47)  t^k := A ŝ^k
(2.6.48)  P t̂^k = t^k
(2.6.49)  ω_k := (t̂^k, ŝ^k) / (t̂^k, t̂^k)
(2.6.50)  x^{k+1} := x^k + α_k p̂^k + ω_k ŝ^k
(2.6.51)  r^{k+1} := s^k − ω_k t^k
(2.6.52)  ρ_{k+1} := (r^{k+1}, r̃⁰)
(2.6.53)  p^{k+1} := r^{k+1} + (ρ_{k+1} α_k)/(ρ_k ω_k) (p^k − ω_k v^k)
For an unfavourable choice of r̃⁰, ρ_k or (v^k, r̃⁰) can be 0 or very small. In this case one has to restart, e.g., with r̃⁰ and x⁰ given by the last available values of r^k and x^k. In exact arithmetic BiCGSTAB is also a finite termination method. The theoretical properties of BiCGSTAB are very much the same as those of CGS. The essential difference is that BiCGSTAB often converges more smoothly than CGS, i.e., its oscillations are less pronounced. The computational cost per iteration of the two methods, as well as the memory requirement, are comparable.
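The scheme (2.6.42)–(2.6.53) reduces, in the unpreconditioned case P = I (so that p̂^k = p^k, ŝ^k = s^k, t̂^k = t^k), to the following numpy sketch; the restart safeguard discussed above is omitted and r̃⁰ = r⁰ is assumed.

```python
import numpy as np

def bicgstab(A, b, x0, tol=1e-10, max_iter=200):
    """Unpreconditioned BiCGSTAB (van der Vorst, 1992), with P = I and r~0 = r0."""
    x = x0.astype(float).copy()
    r = b - A @ x
    r_tilde = r.copy()                      # fixed shadow residual r~0
    p = r.copy()
    rho = r_tilde @ r
    for _ in range(max_iter):
        v = A @ p
        alpha = rho / (v @ r_tilde)
        s = r - alpha * v                   # intermediate residual s^k
        t = A @ s
        omega = (t @ s) / (t @ t)           # stabilizing parameter omega_k
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho_new = r @ r_tilde
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)      # search direction update (2.6.53)
        rho = rho_new
    return x
```

A practical implementation would also monitor ρ_k and (v^k, r̃⁰) and restart when either becomes too small.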
2.7 The Multi-Grid Method

The classical iterative methods fail to be effective whenever the spectral radius of the iteration matrix B is close to one. A Fourier analysis shows that large eigenvalues are associated with errors of low frequency (we will discuss this issue on a simple example below). Therefore the smooth component (low frequencies) of the error holds back convergence, whereas high frequency errors are rapidly damped. The basic idea of the method is to change the grid, whenever the linear system at hand arises from a discretization of differential equations over a certain mesh. As a matter of fact, errors that are smooth on a grid of width h can be attacked on a coarser grid, while high frequency errors that are not visible on the coarse grid of width 2h can be resolved on the fine grid.
2.7.1 The Multi-Grid Cycles

The basic multigrid algorithm can be presented in an abstract fashion in the following form. We are concerned with a family of linear problems

(2.7.1)  A_j x^{(j)} = b^{(j)},  j = 0, …, m,

where A_j is an invertible linear operator on a finite dimensional space V_j, with dim V_j < dim V_{j+1}, j = 0, …, m − 1, and b^{(j)} is a given right hand side. The goal is to solve (2.7.1) for j = m. For this reason we occasionally omit the superscript m for the highest level, thus writing

V = V_m,  A = A_m,  b = b^{(m)},  x = x^{(m)}.

The operators A_j, j < m, are auxiliary and will be used in the multigrid algorithm. For each j, problem (2.7.1) should be regarded as the discretization of a differential equation on a grid whose characteristic mesh spacing is h_j. If we assume that h_{j−1} = 2h_j and set h = h_m, then (2.7.1) provides a family of discretizations with a refinement factor 2.
To solve each of the problems (2.7.1) we consider a basic iterative method

(2.7.2)  x^{k+1}_{(j)} = x^k_{(j)} + P_j (b^{(j)} − A_j x^k_{(j)}),

where P_j is a preconditioner for A_j. The associated error e^k_{(j)} is transformed according to

(2.7.3)  e^{k+1}_{(j)} = B_j e^k_{(j)},  B_j := I − P_j A_j.

The multigrid algorithm can be regarded as an acceleration of (2.7.2) for j = m, and can be formulated as follows.

1. Presmoothing on the fine grid
Being known x⁰_{(j)}, do n₁ times

(2.7.4)  x^k_{(j)} = x^{k−1}_{(j)} + P_j (b^{(j)} − A_j x^{k−1}_{(j)}),  k = 1, …, n₁.

2. Coarse grid correction
(i) Form the residual (or defect) r_{(j)} = b^{(j)} − A_j x^{n₁}_{(j)};
(ii) restrict the residual on the coarse grid setting r_{(j−1)} := R_j^{j−1} r_{(j)}, where R_j^{j−1} : V_j → V_{j−1} is a restriction operator;
(iii) solve the defect problem

(2.7.5)  A_{j−1} x_{(j−1)} = r_{(j−1)};

(iv) correct the solution in V_j by setting

x_{(j)} = x^{n₁}_{(j)} + Π_{j−1}^j x_{(j−1)},

where Π_{j−1}^j : V_{j−1} → V_j is a prolongation operator.

3. Postsmoothing on the fine grid
Set x⁰_{(j)} := x_{(j)} and do n₂ times

(2.7.6)  x^k_{(j)} = x^{k−1}_{(j)} + P_j (b^{(j)} − A_j x^{k−1}_{(j)}),  k = 1, …, n₂.

Both prolongation and restriction operators are full-rank linear mappings. One of the numbers of pre- or postsmoothing steps n₁ and n₂ may be 0. In the most general case, both n₁ and n₂ may depend on j and may be determined adaptively. Concerning the coarse grid problem (2.7.5), if the dimension of V_{j−1} is low it can be solved exactly by a direct method. In this case we have a two-grid algorithm. The basic multigrid cycles are obtained when the coarse grid
problem (2.7.5) is not solved exactly but recursively, by γ applications of the multigrid algorithm itself. The cases γ = 1 and γ = 2 are referred to as V-cycles and W-cycles, respectively. The structure of one iteration step (cycle) of a multigrid method is illustrated by a few pictures given in Fig. 2.7.1. The symbols ○, □, \ and / mean smoothing, solving exactly, fine-to-coarse and coarse-to-fine transfer, respectively.

Fig. 2.7.1. Structure of one multigrid cycle (two-grid, three-grid and four-grid methods) for different numbers of grids and values of γ
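The cycle described above can be sketched for the one-dimensional model problem of the next subsection. This is an illustrative, assumption-laden implementation (dense matrices, full-weighting restriction, linear-interpolation prolongation, and the relaxation with P_j = θ_j h_j² I and θ_j = 1/4 as smoother), with γ = 1 giving the V-cycle and γ = 2 the W-cycle.

```python
import numpy as np

def mg_cycle(j, x, b, n1=2, n2=2, gamma=1, theta=0.25):
    """One multigrid cycle for -u'' = f on (0,1); level j has 2**j - 1 interior points.
    gamma = 1 gives the V-cycle, gamma = 2 the W-cycle."""
    h = 2.0 ** (-j)
    A = (np.diag(2.0 * np.ones(2**j - 1)) - np.diag(np.ones(2**j - 2), 1)
         - np.diag(np.ones(2**j - 2), -1)) / h**2
    if j == 1:
        return np.linalg.solve(A, b)        # coarsest level: solve exactly
    for _ in range(n1):                     # presmoothing: x += theta*h^2*(b - A x)
        x = x + theta * h**2 * (b - A @ x)
    r = b - A @ x                           # defect
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    ec = np.zeros(2 ** (j - 1) - 1)
    for _ in range(gamma):                  # recursive coarse-grid (defect) solve
        ec = mg_cycle(j - 1, ec, rc, n1, n2, gamma, theta)
    e = np.zeros(2**j - 1)                  # linear-interpolation prolongation
    e[1::2] = ec
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    x = x + e                               # coarse grid correction
    for _ in range(n2):                     # postsmoothing
        x = x + theta * h**2 * (b - A @ x)
    return x
```

For f(x) = π² sin(πx) a handful of cycles reduces the residual by several orders of magnitude, independently of the finest level.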
2.7.2 A Simple Example

Consider the two-point boundary value problem

(2.7.7)  −u″(x) = f(x),  0 < x < 1,  u(0) = u(1) = 0.

A centered finite difference discretization on a grid g_j of width h_j = 2^{−j}, with l_j = 1/h_j subintervals, yields a system of the form (2.7.1); consider for its solution the relaxation scheme (2.7.9) given by (2.7.2) with P_j := θ_j h_j² I for some θ_j > 0. The iteration matrix is

B_j := I − θ_j h_j² A_j.

Its eigenvalues are

(2.7.10)  λ_{(j),i} = 1 − 4 θ_j sin²(i h_j π/2),  i = 1, …, l_j − 1.

Their behaviour, when θ_j = 1/4, is reported in Fig. 2.7.2. The abscissa refers to the value of i h_j. The corresponding eigenvectors w_{(j),i} read (componentwise)

(2.7.11)  (w_{(j),i})_s = √(2h_j) sin(i h_j s π),  i, s = 1, …, l_j − 1.

If we define the error at the k-th step by

(2.7.12)  e^k_{(j)} := x^k_{(j)} − x_{(j)} = Σ_{i=1}^{l_j−1} β_{i,k} w_{(j),i},  k ≥ 0,

by recursion we obtain that β_{i,k} = β_{i,0} (λ_{(j),i})^k, i = 1, …, l_j − 1. We can therefore deduce from (2.7.10) that a relaxation method like (2.7.9) is very efficient in smoothing the errors, i.e., in reducing the high frequency error components (see Fig. 2.7.3), whereas for low frequencies the amplitude reduction is very slight. High frequencies on the grid g_j are those with l_j/2 < i ≤ l_j, while low frequencies are those corresponding to 1 ≤ i < l_j/2.

Fig. 2.7.2. Behaviour of the eigenvalues λ_{(j),i} for θ_j = 1/4

Fig. 2.7.3. Typical error smoothing behaviour of relaxation methods. "Before" and "after" stand for "before relaxation" and "after relaxation", respectively

The smoothing properties of an iterative method are measured by a smoothing factor, the worst (largest) factor by which high frequency error components are reduced per relaxation step. It typically happens that the smoothing factor can be smaller than 1/2 independently of h_j. When passing to the coarser grid, whose subintervals have length h_{j−1} = 2^{−(j−1)} = 2h_j, the high frequency components on the fine grid are not visible on the coarse one, due to aliasing. Therefore, they cannot be modified by use of this grid. We illustrate this in Fig. 2.7.4 for h_j = 1/8 (corresponding to l_j = 8) and h_{j−1} = 1/4. Instead, defect corrections smooth out the error components that are associated with low frequencies on the fine grid.
Fig. 2.7.4. w_{(j),i}(x) = √(2h_j) sin(iπx): low (i = 1, 2, 3) and high (i = 4, 5, 6, 7) frequency components for h_j = 1/8 and h_{j−1} = 1/4. Low components are visible on the coarse grid g_{j−1}, whereas high frequencies are not
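The smoothing behaviour pictured above can be checked numerically: one relaxation sweep multiplies the i-th mode (2.7.11) exactly by the eigenvalue (2.7.10), so for θ_j = 1/4 the high-frequency amplitudes are at least halved while low frequencies are barely reduced. A small numpy check (the grid size l = 16 is an arbitrary choice for the illustration):

```python
import numpy as np

l, theta = 16, 0.25                 # l subintervals on the level, relaxation parameter
h = 1.0 / l
A = (np.diag(2.0 * np.ones(l - 1)) - np.diag(np.ones(l - 2), 1)
     - np.diag(np.ones(l - 2), -1)) / h**2
s = np.arange(1, l)

def damping(i):
    """Amplification of the i-th mode under one sweep e <- (I - theta*h^2*A) e."""
    w = np.sqrt(2 * h) * np.sin(i * h * s * np.pi)   # eigenvector of (2.7.11)
    w_new = w - theta * h**2 * (A @ w)               # one relaxation sweep on the error
    return np.max(np.abs(w_new)) / np.max(np.abs(w))

predicted = [1 - 4 * theta * np.sin(i * h * np.pi / 2)**2 for i in range(1, l)]
measured = [damping(i) for i in range(1, l)]
```

The measured per-sweep damping coincides with (2.7.10) to machine precision.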
2.7.3 Convergence

In the case of the two-grid cycle, the error is transformed according to

(2.7.13)  e^{k+1}_{(j)} = M_j e^k_{(j)},

where M_j is the two-grid iteration operator

(2.7.14)  M_j := B_j^{n₂} T_j B_j^{n₁},

and T_j is the coarse-grid correction operator

(2.7.15)  T_j := I − Π_{j−1}^j A_{j−1}^{−1} R_j^{j−1} A_j.

Convergence holds provided that ρ_j := ‖M_j‖ ≤ C < 1 for each j = 0, …, m, for a suitable natural matrix norm. Once an estimate of the two-grid convergence factor ρ_j has been obtained, one can also find an estimate of the
convergence factor for the multigrid V-cycle and W-cycle by a perturbation argument and recursion. Define the convergence factor as

(2.7.16)  C_j := max_{e⁰_{(j)} ≠ 0} ‖e^{MG}_{(j)}‖ / ‖e⁰_{(j)}‖,

where e⁰_{(j)} is the error before and e^{MG}_{(j)} the error after one step of the multigrid iteration. It can be shown that for the W-cycle the following recursion inequality holds:

C_j ≤ ρ_j + C C²_{j−1},  j = 1, …, m,

provided that the restriction and prolongation operators are bounded in suitable norms. If ρ_j is bounded from above, independently of j, by a small enough constant (this can be achieved by using sufficiently many smoothing steps), it follows that C_m ≤ Ĉ < 1. This implies that the multigrid convergence factor is bounded away from 1 independently of the number of levels m. If we assume that the cost of smoothing on V_j is proportional to N_j (the dimension of V_j), then the costs of both the V- and the W-cycle are proportional to N_m. Therefore the multigrid cycle has optimal asymptotic computational complexity. A historical remark on the multigrid method as well as a description of the basic algorithms are presented in Stüben and Trottenberg (1981). For a theoretical analysis see, among others, Brandt (1977), Hackbusch (1981a, 1985), Bank and Dupont (1981), Mandel, McCormick and Bank (1987), Bramble (1993), Douglas and Douglas (1993) and Yserentant (1993).
2.8 Complements

The availability of advanced-architecture computers with vector and parallel facilities is having a significant impact on algorithm development in numerical linear algebra. It is beyond the scope of this Chapter to address this issue. However, we mention the monographs by Golub and Van Loan (1989), Chap. 6, Dongarra, Duff, Sorensen and van der Vorst (1991) and Golub and Ortega (1993), and the review article by Demmel, Heath and van der Vorst (1993) as pioneering attempts towards a systematic approach to this very promising field.
3. Finite Element Approximation
In this Chapter we present the properties of the classical finite element approximation. We underline the three basic aspects of this method: the existence of a triangulation of Ω, the construction of a finite dimensional subspace consisting of piecewise polynomials, and the existence of a basis of functions having small support. Then we introduce the interpolation operator and estimate the interpolation error. Some final remarks are devoted to several projection operators upon finite element subspaces and their approximation properties. These results provide the basis for deriving error estimates for the finite element approximation of partial differential equations that will be considered in the forthcoming Chapters.
3.1 Triangulation

Let the set Ω ⊂ ℝ^d, d = 2, 3, be a polygonal domain, i.e., Ω is an open bounded connected subset such that Ω̄ is the union of a finite number of polyhedra. Here we consider a finite decomposition

(3.1.1)  Ω̄ = ⋃_{K ∈ T_h} K,

where:

(3.1.2)  each K is a polyhedron with interior K̊ ≠ ∅;
(3.1.3)  K̊₁ ∩ K̊₂ = ∅ for each pair of distinct K₁, K₂ ∈ T_h;
(3.1.4)  if F = K̄₁ ∩ K̄₂ ≠ ∅ (K₁ and K₂ distinct elements of T_h), then F is a common face, side, or vertex of K₁ and K₂;
(3.1.5)  diam(K) ≤ h for each K ∈ T_h.

T_h is called a triangulation of Ω (see Fig. 3.1.1 for a description of admissible and forbidden configurations). For simplicity, in the sequel we assume further that each element K of T_h can be obtained as K = T_K(K̂), where K̂ is a reference polyhedron and
Fig. 3.1.1. Admissible (left) and non-admissible (right) triangulations

T_K is a suitable invertible affine map, i.e., T_K(x̂) = B_K x̂ + b_K, B_K being a nonsingular matrix. We will confine ourselves to two different cases:

(3.1.6)  the reference polyhedron K̂ is the unit d-simplex, i.e., the triangle with vertices (0,0), (1,0), (0,1) (when d = 2), or the tetrahedron with vertices (0,0,0), (1,0,0), (0,1,0), (0,0,1) (when d = 3). As a consequence, each K = T_K(K̂) is a triangle or a tetrahedron.

(3.1.7)  the reference polyhedron K̂ is the unit d-cube [0,1]^d. As a consequence, each K = T_K(K̂) is a parallelogram (when d = 2) or a parallelepiped (when d = 3).

In the latter case, the triangulation is made of d-rectangles if for each K ∈ T_h the matrix B_K defining the affine transformation T_K is diagonal.

Notice that dealing with general quadrilaterals or hexahedra would require admitting that each component of the (invertible) transformation T_K is no longer an affine map but a linear polynomial with respect to each single variable x₁, …, x_d. We will not consider this case, and we refer the interested reader to Ciarlet (1978, 1991), where a general approximation theory for finite elements is presented.
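The affine maps T_K(x̂) = B_K x̂ + b_K are easy to construct in practice from the images of the reference vertices. The following numpy sketch (a standard construction; the example triangle is chosen here purely for illustration) builds B_K and b_K for a triangle in the case (3.1.6) and checks the mapping of the reference vertices:

```python
import numpy as np

# Reference 2-simplex with vertices (0,0), (1,0), (0,1), as in (3.1.6).
ref_vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def affine_map(vertices):
    """Return (B_K, b_K) such that T_K(x_hat) = B_K @ x_hat + b_K maps the
    reference vertices onto the rows of `vertices`."""
    v0, v1, v2 = np.asarray(vertices, dtype=float)
    b = v0                                   # image of the reference origin
    B = np.column_stack([v1 - v0, v2 - v0])  # images of the reference basis vectors
    return B, b

# Example: map onto the triangle K with vertices (1,1), (3,1), (1,2).
B, b = affine_map([[1.0, 1.0], [3.0, 1.0], [1.0, 2.0]])
mapped = ref_vertices @ B.T + b              # T_K applied to all reference vertices
```

Nonsingularity of B_K (nonzero determinant) corresponds to a non-degenerate element K.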
3.2 Piecewise-Polynomial Subspaces

A second basic aspect of the finite element method consists of determining a finite dimensional space X_h, which should result in a suitable approximation of the infinite dimensional space X under consideration from case to case. Here the point is that the functions v_h ∈ X_h are piecewise polynomials, i.e., for each K ∈ T_h the space

P_K := {v_h|_K | v_h ∈ X_h}

consists of algebraic polynomials. To be more precise, let us denote by P_k, k ≥ 0, the space of polynomials of degree less than or equal to k in the variables x₁, …, x_d, and by Q_k the space of polynomials that are of degree less than or equal to k with respect to each variable x₁, …, x_d. An easy computation shows that
(3.2.1)  dim Q_k = (k + 1)^d.

Moreover the inclusions P_k ⊂ Q_k ⊂ P_{dk} hold true.

We also introduce the following space of vector polynomials

(3.2.2)  D_k := (P_{k−1})^d ⊕ x P̃_{k−1},  k ≥ 1,

where x ∈ ℝ^d is the independent variable and P̃_{k−1} denotes the space of homogeneous polynomials of degree k − 1. One can verify that

(3.2.3)  dim D_k = (d + k) (d + k − 2)! / ((d − 1)! (k − 1)!)

and that (P_{k−1})^d ⊂ D_k ⊂ (P_k)^d (see, e.g., Roberts and Thomas (1991)).
3.2.1 The Scalar Case

We are now in a position to define the most commonly used spaces X_h. In case (3.1.6) we set

(3.2.4)  X_h^k := {v_h ∈ C⁰(Ω̄) | v_h|_K ∈ P_k for each K ∈ T_h},  k ≥ 1,

which will be called the space of triangular finite elements. In case (3.1.7) we define

(3.2.5)  X_h^k := {v_h ∈ C⁰(Ω̄) | v_h|_K ∘ T_K ∈ Q_k for each K ∈ T_h},  k ≥ 1,

which is called the space of parallelepipedal finite elements. Let us remark that for case (3.2.5) the finite dimensional space P_K = {v_h|_K | v_h ∈ X_h^k} is different from Q_k, except when K is a rectangle with sides parallel to the coordinate axes. In fact, only in this case is T_K represented by a diagonal matrix plus a translation, and Q_k is invariant under this type of transformation. In general, one has P_k ⊂ P_K ⊂ P_{dk}. In both cases, (3.2.4) and (3.2.5), it is worthwhile to notice that X_h^k ⊂ H¹(Ω). This is in fact a consequence of the following result (notice that, for the sake of simplicity, here and in the sequel, we write H^s(K) instead of H^s(K̊)):
Proposition 3.2.1 A function v : Ω → ℝ belongs to H¹(Ω) if and only if
a. v|_K ∈ H¹(K) for each K ∈ T_h;
b. for each common face F = K̄₁ ∩ K̄₂, K₁, K₂ ∈ T_h, the traces on F of v|_{K₁} and v|_{K₂} are the same.

Proof. Using a., define the functions w_j ∈ L²(Ω), j = 1, …, d, through

w_j|_K := D_j(v|_K)  ∀ K ∈ T_h.

In order to show that v ∈ H¹(Ω), we simply have to prove that w_j = D_j v. By using the Green formula (see (1.3.3)), we can write for each φ ∈ D(Ω)

∫_Ω w_j φ = …

For d = 3, we refer the interested reader to Roberts and Thomas (1991). Let us show that this choice of degrees of freedom permits us to identify a polynomial q ∈ D_k in a unique way.
3.3 Degrees of Freedom and Shape Functions
Fig. 3.3.5. d = 3 and, from left to right, k = 1, k = 2 and k = 3. The number of degrees of freedom is 8, 27 and 64 respectively. Only the nodes on the visible faces are pictured

Proposition 3.3.3 Let k = 1, 2 or 3. Assume that q ∈ D_k is such that n·q vanishes at k distinct points on each side of K. Assume moreover that

(3.3.2)  ∫_K q₁ = ∫_K q₂ = 0  (if k ≥ 2)

and

(3.3.3)  ∫_K x₁q₁ = ∫_K x₂q₁ = ∫_K x₁q₂ = ∫_K x₂q₂ = 0  (only if k = 3).

Then q ≡ 0.

Proof. First of all, (n·q)|_F ∈ P_{k−1}, hence it vanishes on each side F of K. Therefore, by the Green formula (1.3.4) and (3.3.2), (3.3.3), for each Ψ ∈ P_{k−1} we have

∫_K Ψ div q = −∫_K ∇Ψ · q + ∫_{∂K} Ψ n·q = 0,

since ∇Ψ ∈ (P_{k−2})². As div q ∈ P_{k−1}, it follows that div q = 0 in K. Remember now that we can write

q(x) = p_{k−1}(x) + x p̃_{k−1}(x),

where p_{k−1} ∈ (P_{k−1})² and the polynomial p̃_{k−1} is a homogeneous function of degree k − 1. It follows that

0 = div q = div p_{k−1} + 2 p̃_{k−1} + x · ∇p̃_{k−1} = div p_{k−1} + (2 + k − 1) p̃_{k−1},

where the Euler identity for homogeneous functions has been used. Thus p̃_{k−1} ∈ P_{k−2} and consequently q ∈ (P_{k−1})². Since div q = 0, we can find a polynomial w ∈ P_k (unique up to an additive constant) such that

q = (D₂w, −D₁w).

Moreover, since (n·q)|_F = 0, we can assume that w vanishes on each side F, and consequently

w = c₀ p₁(x) p₂(x) p₃(x),

where the p_i(x) are linear functions, each one vanishing on one side of K. This completes the proof if k = 1 or k = 2, since w ∈ P_k. When k = 3, using again (3.3.2) and (3.3.3) we obtain for each r ∈ (P₁)²

0 = ∫_K q · r = ∫_K [(D₂w) r₁ − (D₁w) r₂] = −∫_K w (D₂r₁ − D₁r₂).

Choosing r such that D₂r₁ − D₁r₂ = c₀, it follows that

c₀² ∫_K p₁ p₂ p₃ = 0,

i.e., c₀ = 0 and q ≡ 0. □
In particular we have proved that, if q ∈ D_k is such that n·q vanishes at k distinct points of a side of K, then n·q vanishes on that side. This implies that a piecewise polynomial v_h, identified in each K by the degrees of freedom described above, indeed belongs to H(div; Ω) in view of Proposition 3.2.2. The three-dimensional case can be treated in a similar way. The degrees of freedom are given by the values of n·q at k(k+1)/2 distinct points of each face, plus (if k ≥ 2) the integrals ∫_K w · q for each w ∈ (P_{k−2})³. The total number of degrees of freedom on each element K is given by (3.2.3), i.e., is equal to k(k+1)(k+3)/2. We are not going to enter into further details, but rather refer to Nédélec (1980), Roberts and Thomas (1991).

The construction of a basis of W_h^k is somewhat less evident than for X_h^k, since it is not true now that all degrees of freedom are point values of v_h. This remains true for n·v_h at the nodes chosen on the interelement boundaries; in addition, we also have to consider the values of the integrals over K described before (the so-called K-moments). Let us denote by m_j(v), j = 1, …, N_{1,h}, the values (n·v)(a_j), where {a_j} is the set of all nodes in Ω̄, and by m_l(v), l = N_{1,h}+1, …, N_h, the set of all K-moments of the function v. A basis of W_h^k is constructed by requiring that

(3.3.4)  m_s(φ_i) = δ_{is},  i, s = 1, …, N_h.
3.4 The Interpolation Operator

The identification of the degrees of freedom and of the shape functions easily leads to the definition of an interpolation operator, i.e., an operator defined on the space of continuous functions and valued in the finite element spaces X_h^k or W_h^k defined in (3.2.4)–(3.2.6). The simplest case is provided by X_h^k. In fact, for each v ∈ C⁰(Ω̄) we can set

(3.4.1)  π_h^k(v) := Σ_{i=1}^{N_h} v(a_i) φ_i.
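For the lowest-order triangular element (k = 1, d = 2), (3.4.1) on a single element amounts to combining the vertex values with the barycentric hat shape functions. A small numpy sketch (function names are illustrative, not from the text); since the shape functions span P₁, the interpolant reproduces affine functions exactly:

```python
import numpy as np

def p1_interpolate(vertices, values, x):
    """Evaluate the local P1 interpolant sum_i v(a_i) phi_i at the point x,
    where phi_i are the barycentric (hat) shape functions of the triangle."""
    v0, v1, v2 = [np.asarray(p, dtype=float) for p in vertices]
    B = np.column_stack([v1 - v0, v2 - v0])          # affine map of the element
    lam12 = np.linalg.solve(B, np.asarray(x, dtype=float) - v0)
    lam = np.array([1.0 - lam12.sum(), lam12[0], lam12[1]])  # barycentric coords
    return lam @ np.asarray(values, dtype=float)

tri = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
f = lambda p: 3.0 + 2.0 * p[0] - p[1]   # an affine function: reproduced exactly
vals = [f(p) for p in tri]
approx = p1_interpolate(tri, vals, (0.5, 0.25))
```

Exact reproduction of P₁ is the local mechanism behind the error estimates of the next theorem.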
We can finally prove an estimate for the global interpolation error.

Theorem 3.4.2 Let T_h be a regular family of triangulations, let v ∈ H^s(Ω), and assume that m = 0, 1 and l = min(k, s − 1) ≥ 1. Then there exists a constant C, independent of h, such that

(3.4.19)  |v − π_h^k(v)|_{m,Ω} ≤ C h^{l+1−m} |v|_{l+1,Ω}.

Proof. We localize the estimate over K:

|v − π_h^k(v)|²_{m,Ω} = Σ_{K ∈ T_h} |v − π_h^k(v)|²_{m,K}.

From (3.4.14), (3.4.17), (3.4.18) we can write

|v − π_h^k(v)|_{m,K} ≤ C h^{l+1−m} |v|_{l+1,K},

hence the result follows by summing up over K. □

Note that the restriction on the index m is due to the fact that the inclusion X_h^k ⊂ H^m(Ω) holds only if m ≤ 1. The construction of a finite dimensional space contained in H²(Ω) would require higher order continuity across the interelement boundaries.

3.4.2 Interpolation Error: the Vector Case

If we are considering W_h^k, to define the interpolation operator we must give a meaning to the point value of n·v at all nodes a_i ∈ Ω̄ and to all the K-moments m_l(v) (l = N_{1,h}+1, …, N_h). If v ∈ (C⁰(Ω̄))^d, this is, of course, easily doable.
But it will be useful to define the interpolation operator even in spaces of functions that are not necessarily continuous. To do this, let us first remark that for q ∈ D_k, instead of the point values of n·q on a face F_K of K, one can consider the following degrees of freedom

∫_{F_K} n·q ψ,  ψ ∈ P_{k−1}

(see Roberts and Thomas (1991)), which are called F_K-moments. Let us denote the global set of these F_K-moments relative to a function v in the same way as the degrees of freedom associated to point values of n·v have been indicated in Section 3.3.3, i.e., m_i(v), i = 1, …, N_{1,h}. Moreover, still denote the set of K-moments by m_l(v), l = N_{1,h}+1, …, N_h, and by φ_i the shape functions defined as in (3.3.4). We can see now that the interpolation operator ω_h^k is defined in (H¹(Ω))^d by setting

(3.4.20)  ω_h^k(v) := Σ_{i=1}^{N_h} m_i(v) φ_i.

We again recall that this is the only function in W_h^k satisfying m_i(ω_h^k(v)) = m_i(v) ∀ i = 1, …, N_h.
To introduce a local interpolation operator, let us denote by m_{i,K}(v), i = 1, …, M_K, the set of K-moments and F_K-moments relative to K, and define

(3.4.21)  ω_K^k(v) := Σ_{i=1}^{M_K} m_{i,K}(v) φ_i

in case (3.1.6), or (3.5.8) in case (3.1.7). If T_h is a regular family of triangulations, as a consequence of (3.5.2), (3.4.19) and (3.4.34) we obtain the following error estimates:

(3.5.9)  ‖v − P_h^k(v)‖_{0,Ω} ≤ C h^{l+1} |v|_{l+1,Ω},  1 ≤ l ≤ k,
(3.5.10)  ‖v − P_{1,h}^k(v)‖_{1,Ω} ≤ C h^l |v|_{l+1,Ω},  1 ≤ l ≤ k,

for each v ∈ H^{l+1}(Ω), and

(3.5.11)  ‖v − Q_h^k(v)‖_{H(div;Ω)} ≤ C h^l (|v|_{l,Ω} + |div v|_{l,Ω}),  1 ≤ l ≤ k,

for each v ∈ (H^l(Ω))^d with div v ∈ H^l(Ω). Moreover

(3.5.12)  ‖v − P_{1,h}^k(v)‖_{1,Ω} ≤ C |v|_{1,Ω},  ‖v − Q_h^k(v)‖_{H(div;Ω)} ≤ ‖v‖_{H(div;Ω)},
as P_{1,h}^k and Q_h^k are orthogonal projections. It can be noticed that if v ∈ H^{k+1}(Ω), then, under suitable assumptions, the L² norm of v − P_{1,h}^k(v) is in fact O(h^{k+1}). This can be proved by a duality argument (Aubin–Nitsche trick). In fact, let us assume that, given an arbitrary r ∈ L²(Ω), the function …

This yields the so-called inverse inequality (whose proof is furnished in Proposition 6.3.2), i.e., there exists a positive constant C₁ such that

(3.5.20)  |v_h|_{1,Ω} ≤ C₁ h^{−1} ‖v_h‖_{0,Ω}  ∀ v_h ∈ X_h^k.

Under this assumption we can prove that there exists a positive constant C₂ such that

(3.5.21)  ‖v − P_h^k(v)‖_{1,Ω} ≤ C₂ h^l |v|_{l+1,Ω},  1 ≤ l ≤ k.

Indeed,

‖v − P_h^k(v)‖_{1,Ω} ≤ ‖v − P_{1,h}^k(v)‖_{1,Ω} + ‖P_{1,h}^k(v) − P_h^k(v)‖_{1,Ω}
  = ‖v − P_{1,h}^k(v)‖_{1,Ω} + ‖P_h^k[v − P_{1,h}^k(v)]‖_{1,Ω}.

Using the inverse inequality (3.5.20) we have

‖P_h^k[v − P_{1,h}^k(v)]‖_{1,Ω} ≤ √(1 + C₁² h^{−2}) ‖v − P_{1,h}^k(v)‖_{0,Ω},

as P_h^k is the L²(Ω)-orthogonal projection onto X_h^k. Inequality (3.5.21) now follows from (3.5.16). In conclusion, we can summarize the properties of the projection operators P_h^k and P_{1,h}^k by the following inequality:

(3.5.22)  ‖v − P_h^k(v)‖_{0,Ω} + ‖v − P_{1,h}^k(v)‖_{0,Ω} + h(‖v − P_h^k(v)‖_{1,Ω} + ‖v − P_{1,h}^k(v)‖_{1,Ω}) ≤ C h^{l+1} |v|_{l+1,Ω}  ∀ v ∈ H^{l+1}(Ω), 0 ≤ l ≤ k.

Finally, let us consider the projection operator p_h^k defined in (3.5.5). It is easily verified that, if P̂_K is the L²(K)-orthogonal projection onto P_k, then
(3.5.23)  (p_h^k(v))|_K = P̂_K(v|_K)  ∀ K ∈ T_h,

i.e., p_h^k is a local operator like the interpolation operator π_h^k. Thus, proceeding as in the proof of Theorem 3.4.2, if T_h is a regular family of triangulations we obtain for each k ≥ 0

(3.5.24)  ‖v − p_h^k(v)‖_{0,Ω} ≤ C h^{l+1} |v|_{l+1,Ω},  0 ≤ l ≤ k,  ∀ v ∈ H^{l+1}(Ω).
3.6 Complements

In this presentation we have not considered domains with curved boundary, nor examples of nonconforming approximations (i.e., finite element spaces X_h^k or W_h^k such that X_h^k ⊄ H¹(Ω) or W_h^k ⊄ H(div; Ω)). The interested reader can refer to Strang and Fix (1973) and Ciarlet (1978, 1991). The approximation theory we have presented here is restricted to affine-equivalent families of finite elements. The same results hold in more general situations as well. A thorough analysis is provided in Ciarlet (1978, 1991), where several types of finite elements of triangular or parallelepipedal type are considered. Estimates of the approximation error in fractional order Sobolev spaces have been proven by Dupont and Scott (1980). An approximation theory based on averaging rather than interpolation is also possible. Clément (1975) has shown how to construct an operator r_h : H^l(Ω) → X_h^k, l ≥ 0, enjoying the same approximation properties as the interpolation operator π_h^k, and furthermore

lim_{h→0} |v − r_h(v)|₀ = 0  ∀ v ∈ L²(Ω),
|v − r_h(v)|₀ ≤ C h |v|₁,  lim_{h→0} |v − r_h(v)|₁ = 0  ∀ v ∈ H¹(Ω).

For a historical review on finite elements see Zienkiewicz (1973) and Oden (1991). Classical books on the theory and implementation of the finite element method are the ones by Zienkiewicz (1977) and Oden (1972). Early monographs on the mathematical foundations of finite elements are the books by Strang and Fix (1973) and Oden and Reddy (1976). In this Chapter we have limited our analysis to the so-called h-version of the finite element method, which is based on the assumption that the polynomial degree is fixed and the accuracy of the approximation is achieved by refining the mesh size h. An alternative approach is obtained by fixing the mesh and increasing the degree of the elements. This is called the p-version of the finite element method (p denoting the polynomial degree). It is also possible to refine the mesh and increase the degree of the polynomials at the same time. This leads to the hp-version of the finite element method. Early theoretical papers concerning the p and hp versions are the ones by Babuška, Szabó and
Katz (1981) and Babuška and Dorr (1981), respectively. We will not address these methods in this book, referring the interested reader to Babuška and Suri (1990) and Szabó and Babuška (1991).
4. Polynomial Approximation
This Chapter is devoted to the introduction of basic notions and working tools concerning orthogonal algebraic polynomials. More specifically, we will present some properties of both Chebyshev and Legendre polynomials concerning projection and interpolation processes. These will provide the background of the spectral methods for the approximation of partial differential equations that are considered throughout Parts II and III of this book.
4.1 Orthogonal Polynomials

Let P_N denote the space of all algebraic polynomials of degree less than or equal to N, and let w(x) denote a nonnegative, integrable function (i.e., a weight function) over the interval I = (−1, 1). Define

(4.1.1)  L²_w(I) := {v : I → ℝ | v is measurable and ‖v‖_{0,w} < ∞},

where

(4.1.2)  ‖v‖_{0,w} := (∫_{−1}^1 |v(x)|² w(x) dx)^{1/2}

is the norm induced by the scalar product

(4.1.3)  (u, v)_w := ∫_{−1}^1 u(x) v(x) w(x) dx.

Let {p_n}_{n≥0} denote a system of algebraic polynomials, with the degree of p_n equal to n, which are mutually orthogonal under (4.1.3), i.e.,

(4.1.4)  (p_n, p_m)_w = 0  for n ≠ m.

The Weierstrass approximation theorem (see, e.g., Yosida (1974), p. 8) implies that this system is complete in L²_w(−1, 1), i.e., for any function u ∈ L²_w(−1, 1) the following expansion holds
(4.1.5)  u(x) = Σ_{k=0}^∞ û_k p_k(x)  with  û_k := (u, p_k)_w / ‖p_k‖²_{0,w}.

The û_k's are the expansion coefficients associated with the family {p_k}. The series converges in the sense of L²_w(I). Precisely, denoting by

(4.1.6)  P_N u(x) := Σ_{k=0}^N û_k p_k(x)

the truncation of order N, then

(4.1.7)  ‖u − P_N u‖_{0,w} → 0  as  N → ∞.

The polynomial P_N u can also be regarded as the orthogonal projection of u upon P_N with respect to the scalar product (4.1.3), i.e.,

(4.1.8)  (u − P_N u, v_N)_w = 0  ∀ v_N ∈ P_N.

Finally, we recall the following Parseval identity that follows from (4.1.5):

(4.1.9)  ‖u‖²_{0,w} = Σ_{k=0}^∞ û_k² ‖p_k‖²_{0,w}.

Let us now set
(4.1.10)  w^{(α,β)}(x) := (1 − x)^α (1 + x)^β,

for −1 < α, β < 1, −1 ≤ x ≤ 1. The family of polynomials orthogonal under this weight function are the Jacobi polynomials, denoted by {J_n^{(α,β)}}_{n≥0}. We report hereafter some properties that will be used in the sequel (see also Szegő (1959), Courant and Hilbert (1953)). Under the normalization

J_n^{(α,β)}(1) = (n + α choose n) := Γ(n + α + 1) / (n! Γ(α + 1)),

Γ(·) being the gamma function, an explicit representation is

(4.1.11)  …

The following maximum principle holds for all α, β:

(4.1.12)  max_{−1≤x≤1} |J_n^{(α,β)}(x)| …

The Jacobi polynomials satisfy the Sturm–Liouville eigenvalue problem

(4.1.13)  L^{(α,β)} J_n^{(α,β)} = λ_n^{(α,β)} w^{(α,β)} J_n^{(α,β)},

where

L^{(α,β)} := −(d/dx) ((1 − x²) w^{(α,β)}(x) (d/dx))

and λ_n^{(α,β)} := n(n + α + β + 1) (see, e.g., Szegő (1959), Abramowitz and Stegun (1966)). The Jacobi expansion coefficients of a function u decay with a rate that depends solely on the smoothness degree of u. More precisely, if we consider the expansion (4.1.5) with p_k = J_k^{(α,β)}, we have (see, e.g., Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 9.2.2)

(4.1.14)  |û_k| ≤ C_m k^{−2m},  k ≥ 1.

This is valid under the assumption that, for some m ≥ 1, u^{(m)} ∈ L²_{w^{(α,β)}}(I) and lim_{x→±1} (1 − x²) w^{(α,β)}(x) (d/dx) u^{(j)}(x) = 0 for all j = 1, …, m, where we have set

u^{(0)} := u,  u^{(j)} := (1 / w^{(α,β)}) L^{(α,β)} u^{(j−1)}.

The constant C_m depends on u and m. Special cases of Jacobi polynomials are the Legendre polynomials, corresponding to the choice α = β = 0, and the Chebyshev polynomials of first kind, which are obtained when α = β = −1/2. Both families have great relevance in the approximation of boundary value problems by spectral methods, and will therefore be reconsidered in detail in Sections 4.3 and 4.4.
4.2 Gaussian Quadrature and Interpolation

For any fixed integer N ≥ 1, we denote by {x_j}_{j=0,…,N} the zeroes of the polynomial (1 − x²)(J_N^{(α,β)})′(x) (the prime means derivative). Correspondingly, we denote by {w_j}_{j=0,…,N} the real numbers that solve the linear system

(4.2.1)  Σ_{j=0}^N p(x_j) w_j = ∫_{−1}^1 p(x) w^{(α,β)}(x) dx  ∀ p ∈ P_N.

The quadrature formula (4.2.1) is the Jacobi Gauss–Lobatto formula; {x_j} and {w_j} are called, respectively, nodes and weights. A closed expression for them will be provided in the next Sections for both the Legendre and Chebyshev cases. Formula (4.2.1) is in fact exact up to degree 2N − 1, i.e.,

(4.2.2)  Σ_{j=0}^N p(x_j) w_j = ∫_{−1}^1 p(x) w^{(α,β)}(x) dx  ∀ p ∈ P_{2N−1}

(see, e.g., Davis and Rabinowitz (1984)). If u is any continuous function on Ī, we denote by I_N u ∈ P_N its Lagrangian interpolant at the nodes {x_j}_{j=0,…,N}, i.e.,

(4.2.3)  I_N u(x_j) = u(x_j),  j = 0, …, N.

For any continuous functions u and v on Ī, we define the discrete scalar product and norm respectively as

(4.2.4)  (u, v)_N := Σ_{j=0}^N u(x_j) v(x_j) w_j,  ‖v‖_N := (v, v)_N^{1/2}.

Let us note that

(4.2.5)  (I_N u, v)_N = (u, v)_N,

and also that, owing to (4.2.2),

(4.2.6)  (u, v)_N = (u, v)_{w^{(α,β)}}  ∀ u, v such that uv ∈ P_{2N−1}.

From (4.2.5) we obtain easily

(4.2.7)  I_N u(x) = Σ_{k=0}^N u*_k J_k^{(α,β)}(x)  with  u*_k := (u, J_k^{(α,β)})_N / γ_k^{(α,β)},

where γ_k^{(α,β)} := ‖J_k^{(α,β)}‖²_N. The {u*_k} are the discrete expansion coefficients of the function u. Note the formal analogy between (4.2.7) and (4.1.6). The previous formula produces

(4.2.8)  u*_k = Σ_{j=0}^N [J_k^{(α,β)}(x_j) w_j / γ_k^{(α,β)}] u(x_j),  0 ≤ k ≤ N,

which is a linear transformation, called the discrete transform, between the values of a function u at the nodes {x_j} and its discrete expansion coefficients. The discrete antitransform

(4.2.9)  u(x_j) = Σ_{k=0}^N J_k^{(α,β)}(x_j) u*_k,  j = 0, …, N,

follows easily from (4.2.3) and (4.2.7).
4.3 Chebyshev Expansion

Let us consider the special case of Chebyshev polynomials of the first kind. We briefly review their basic properties, then we define the interpolation and projection operators and illustrate their convergence properties.

4.3.1 Chebyshev Polynomials
They are defined as

$T_n(x) := \frac{J_n^{(-1/2,-1/2)}(x)}{J_n^{(-1/2,-1/2)}(1)}$ ,   $n \in \mathbb{N}$ ,

and, in view of (4.1.13), they satisfy the Sturm–Liouville differential equation

(4.3.1)    $\left( \sqrt{1-x^2}\; T_n'(x) \right)' + \frac{n^2}{\sqrt{1-x^2}}\, T_n(x) = 0$ .

For $x \in [-1,1]$ a remarkable characterization is given by

(4.3.2)    $T_n(x) = \cos n\theta$ ,   $\theta = \arccos x$ ,

while for $|x| \ge 1$, $T_n(x) = (\operatorname{sign} x)^n \cosh n\theta$, $\theta = \cosh^{-1} |x|$ (see, e.g., Rivlin (1974)). We deduce that, according to (4.1.12),
(4.3.3)    $|T_n(x)| \le 1$   for all $x \in [-1,1]$ ,

and that $T_n$ has the same parity as $n$. Using the trigonometric relation

$\cos(n+1)\theta + \cos(n-1)\theta = 2 \cos\theta\, \cos n\theta$

we deduce the recursion formula

(4.3.4)    $T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x)$ ,   $n \ge 1$ ,

with $T_0(x) = 1$ and $T_1(x) = x$. Let us set

(4.3.5)    $(u,v)_w := \int_{-1}^{1} u(x)\, v(x)\, \frac{dx}{\sqrt{1-x^2}}$ ,   $\|u\|_{0,w} := \left( \int_{-1}^{1} u^2(x)\, \frac{dx}{\sqrt{1-x^2}} \right)^{1/2}$

($w(x) := (1-x^2)^{-1/2}$ is called the Chebyshev weight function). Operating the change of variable $x = \cos\theta$, we obtain from elementary trigonometric properties the orthogonality property
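The recursion (4.3.4) and the characterization (4.3.2) can be checked against each other numerically. The following Python/NumPy sketch is our own illustration (the function name is ours, not from the text):

```python
import numpy as np

def chebyshev_T(n, x):
    """Evaluate T_n(x) by the three-term recursion (4.3.4):
    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), with T_0 = 1, T_1 = x."""
    t_prev = np.ones_like(np.asarray(x, dtype=float))
    t = np.asarray(x, dtype=float)
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# Check against (4.3.2): T_n(cos theta) = cos(n theta) on [-1, 1]
x = np.linspace(-1.0, 1.0, 201)
for n in range(8):
    assert np.allclose(chebyshev_T(n, x), np.cos(n * np.arccos(x)))
```

The recursion costs $O(n)$ operations per evaluation point and is numerically stable on $[-1,1]$.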
(4.3.6)    $(T_n, T_m)_w = \frac{\pi}{2}\, c_n\, \delta_{nm}$ ,   with   $c_n = \begin{cases} 2 & \text{if } n = 0 \\ 1 & \text{if } n \ge 1 . \end{cases}$

Therefore $\|T_n\|_{0,w} = \sqrt{c_n\, \pi / 2}$ .
If $u \in L^2_w(I)$, its Chebyshev series is

(4.3.7)    $u(x) = \sum_{k=0}^{\infty} \hat{u}_k\, T_k(x)$ .

If we assume further that $u' \in L^2_w(I)$, then

(4.3.8)    $u'(x) = \sum_{k=0}^{\infty} \hat{u}_k^{(1)}\, T_k(x)$ ,

with (see, e.g., Gottlieb and Orszag (1977), Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 2.4.2)

(4.3.9)    $\hat{u}_k^{(1)} = \frac{2}{c_k} \sum_{\substack{p = k+1 \\ p+k \text{ odd}}}^{\infty} p\, \hat{u}_p$ .
If $u$ is a polynomial of degree less than or equal to $N$, then the expansion coefficients of its first derivative can be computed recursively with $O(N)$ operations through the three-term backward relations

(4.3.10)    $\begin{cases} \hat{u}_N^{(1)} = \hat{u}_{N+1}^{(1)} = 0 \\ c_k\, \hat{u}_k^{(1)} = \hat{u}_{k+2}^{(1)} + 2(k+1)\, \hat{u}_{k+1} , & k = N-1, \dots, 0 . \end{cases}$

Going further, assuming that $u'' \in L^2_w(I)$ one can also infer that

(4.3.11)    $\hat{u}_k^{(2)} = \frac{1}{c_k} \sum_{\substack{p = k+2 \\ p+k \text{ even}}}^{\infty} p\,(p^2 - k^2)\, \hat{u}_p$ .

If $u$ is a polynomial of degree less than or equal to $N$, the coefficients $\hat{u}_k^{(2)}$ can still be computed recursively by iterating on the formula (4.3.10).
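The backward recursion (4.3.10) can be sketched in a few lines of Python (our own illustration); here we verify it against NumPy's reference implementation of Chebyshev differentiation:

```python
import numpy as np

def cheb_derivative_coeffs(u_hat):
    """Coefficients of u' from Chebyshev coefficients u_hat[0..N] of u,
    via the O(N) backward recursion (4.3.10):
    c_k u^(1)_k = u^(1)_{k+2} + 2(k+1) u_hat[k+1], k = N-1,...,0,
    with u^(1)_N = u^(1)_{N+1} = 0 and c_0 = 2, c_k = 1 for k >= 1."""
    N = len(u_hat) - 1
    d = np.zeros(N + 2)          # d[N] = d[N+1] = 0 start the recursion
    for k in range(N - 1, -1, -1):
        ck = 2.0 if k == 0 else 1.0
        d[k] = (d[k + 2] + 2.0 * (k + 1) * u_hat[k + 1]) / ck
    return d[:N]                 # u' has degree N - 1

rng = np.random.default_rng(0)
u_hat = rng.standard_normal(9)   # a polynomial of degree 8
assert np.allclose(cheb_derivative_coeffs(u_hat),
                   np.polynomial.chebyshev.chebder(u_hat))
```

Iterating the routine twice reproduces the second-derivative coefficients, as noted in the text.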
4.3.2 Chebyshev Interpolation

For any positive integer $N$, the $N+1$ Chebyshev Gauss–Lobatto nodes $x_j$ are the zeroes of the polynomial $(1 - x^2)\, T_N'(x)$. By differentiating (4.3.2) we obtain

(4.3.12)    $x_j = \cos \frac{\pi j}{N}$ ,   $j = 0, \dots, N$ .

These nodes are symmetrically distributed about $x = 0$. As $N$ increases, they cluster towards the endpoints of the interval; in particular $|x_N - x_{N-1}| = |x_1 - x_0| = O(N^{-2})$. This prevents the formation of the Runge instability appearing in those interpolation processes using equispaced nodes (e.g., Isaacson and Keller (1966), Atkinson (1978)). The weights of the Chebyshev Gauss–Lobatto integration formula are (see, e.g., Rivlin (1974))

(4.3.13)    $w_j = \frac{\pi}{d_j N}$ ,   with   $d_j = \begin{cases} 2 & \text{for } j = 0, N \\ 1 & \text{for } j = 1, \dots, N-1 . \end{cases}$
The discrete scalar product and the discrete norm take respectively the form:

(4.3.14)    $(u,v)_N = \frac{\pi}{N} \sum_{j=0}^{N} \frac{1}{d_j}\, u(x_j)\, v(x_j)$ ,   $\|u\|_N = (u,u)_N^{1/2}$ .

Owing to (4.2.2)

(4.3.15)    $\|T_k\|_N = \|T_k\|_{0,w}$ ,   $k = 0, \dots, N-1$ ,

while, writing $\theta_j = \arccos x_j = \pi j / N$,

(4.3.16)    $\|T_N\|_N^2 = \frac{\pi}{N} \sum_{j=0}^{N} \frac{1}{d_j} \cos^2(N \theta_j) = \frac{\pi}{N} \sum_{j=0}^{N} \frac{1}{d_j} = \pi = 2\, \|T_N\|_{0,w}^2$ .

Therefore

(4.3.17)    $\|T_k\|_N = \sqrt{\frac{\pi}{2}\, d_k}$ ,   $k = 0, \dots, N$ .
If we denote by $I_N u \in \mathbb{P}_N$ the interpolant of the function $u$ at the Chebyshev nodes (4.3.12), a direct consequence of (4.2.7) and (4.3.14), (4.3.17) is

(4.3.18)    $I_N u(x) = \sum_{k=0}^{N} \tilde{u}_k\, T_k(x)$ ,   $\tilde{u}_k = \frac{2}{N d_k} \sum_{j=0}^{N} \frac{1}{d_j} \cos\!\left( \frac{k j \pi}{N} \right) u(x_j)$ .
Due to its trigonometric structure, the discrete Chebyshev transform

$\{ u(x_j) \mid j = 0, \dots, N \} \;\longrightarrow\; \{ \tilde{u}_k \mid k = 0, \dots, N \}$

can be computed by the Fast Fourier Transform algorithm using $O(N \log_2 N)$ operations, whenever $N$ is a power of 2. The same clearly holds for the antitransform $\{ \tilde{u}_k \mid k = 0, \dots, N \} \to \{ u(x_j) \mid j = 0, \dots, N \}$, since

(4.3.19)    $u(x_j) = \sum_{k=0}^{N} \tilde{u}_k \cos\!\left( \frac{k j \pi}{N} \right)$ .
Algorithms for performing Fast Fourier and Chebyshev Transforms are reported in Appendix B of Canuto, Hussaini, Quarteroni and Zang (1988). A consequence of (4.3.15), (4.3.16) is that the discrete Chebyshev norm is uniformly equivalent to the continuous $L^2_w$-norm for all polynomials of degree less than or equal to $N$. Indeed, for all $u_N = \sum_{k=0}^{N} \hat{u}_k T_k$

$\|u_N\|_N^2 = \sum_{k=0}^{N} |\hat{u}_k|^2\, \|T_k\|_N^2 = \sum_{k=0}^{N-1} |\hat{u}_k|^2\, \|T_k\|_{0,w}^2 + 2\, |\hat{u}_N|^2\, \|T_N\|_{0,w}^2$ ,

and therefore

(4.3.20)    $\|u_N\|_{0,w} \le \|u_N\|_N \le \sqrt{2}\, \|u_N\|_{0,w} \qquad \forall\, u_N \in \mathbb{P}_N$ .
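The transform pair (4.3.18)–(4.3.19) is easy to realize directly; the following sketch (ours, using the direct $O(N^2)$ matrix form rather than the FFT realization mentioned above) checks that the antitransform recovers the nodal values exactly:

```python
import numpy as np

N = 16
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)                   # Chebyshev Gauss-Lobatto nodes
d = np.where((j == 0) | (j == N), 2.0, 1.0) # d_j of (4.3.13)

u = np.exp(x) * np.sin(2.0 * x)             # arbitrary smooth function

# Discrete transform (4.3.18): u~_k = 2/(N d_k) sum_j cos(kj pi/N) u(x_j)/d_j
C = np.cos(np.outer(j, j) * np.pi / N)
u_tilde = (2.0 / (N * d)) * (C @ (u / d))

# Antitransform (4.3.19): u(x_j) = sum_k u~_k cos(kj pi/N)
u_back = C.T @ u_tilde
assert np.allclose(u_back, u)
```

The matrix $C$ is the cosine matrix underlying the type-I discrete cosine transform, which is why an FFT-based $O(N \log_2 N)$ implementation exists.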
In view of the application to the Chebyshev collocation method it is worthwhile to introduce the Chebyshev pseudospectral derivative $G_N u$ of a continuous function $u$. It is a polynomial in $\mathbb{P}_{N-1}$ defined by

(4.3.21)    $G_N u := (I_N u)'$ ,

i.e., as the exact derivative of the interpolant of $u$ at the nodes (4.3.12). By resorting to the representation

(4.3.22)    $I_N u(x) = \sum_{j=0}^{N} u(x_j)\, \psi_j(x)$ ,

where the Lagrange functions $\psi_j \in \mathbb{P}_N$ are such that $\psi_j(x_k) = \delta_{jk}$ for each $k = 0, \dots, N$, we deduce

(4.3.23)    $(G_N u)(x_i) = \sum_{j=0}^{N} \psi_j'(x_i)\, u(x_j)$ ,   $i = 0, \dots, N$ .
The matrix $(D_N)_{ij} := \psi_j'(x_i)$ is named the Chebyshev pseudospectral matrix. Since

$\psi_j(x) = \frac{(-1)^{j+1}\, (1 - x^2)\, T_N'(x)}{d_j\, N^2\, (x - x_j)}$ ,

the entries of $D_N$ can be computed explicitly (see Voigt, Gottlieb and Hussaini (1984) or Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 2.4.2):

(4.3.24)    $(D_N)_{ij} = \begin{cases} \dfrac{d_i}{d_j}\, \dfrac{(-1)^{i+j}}{x_i - x_j} , & i \ne j \\[6pt] -\dfrac{x_j}{2\,(1 - x_j^2)} , & 1 \le i = j \le N-1 \\[6pt] \dfrac{2N^2 + 1}{6} , & i = j = 0 \\[6pt] -\dfrac{2N^2 + 1}{6} , & i = j = N . \end{cases}$

The matrix $D_N$ is not skew-symmetric; its only eigenvalue is $0$, with algebraic multiplicity $N + 1$.
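The explicit entries (4.3.24) translate directly into code. The sketch below (our own illustration) assembles $D_N$ and confirms that it differentiates polynomials of degree $\le N$ exactly, which is the defining property of the pseudospectral derivative:

```python
import numpy as np

def cheb_diff_matrix(N):
    """Chebyshev pseudospectral matrix (4.3.24) on the Gauss-Lobatto
    nodes x_j = cos(pi j / N), j = 0..N (decreasing from 1 to -1)."""
    k = np.arange(N + 1)
    x = np.cos(np.pi * k / N)
    d = np.where((k == 0) | (k == N), 2.0, 1.0)
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = (d[i] / d[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
    for i in range(1, N):                      # interior diagonal entries
        D[i, i] = -x[i] / (2.0 * (1.0 - x[i] ** 2))
    D[0, 0] = (2.0 * N**2 + 1.0) / 6.0         # corner entries
    D[N, N] = -(2.0 * N**2 + 1.0) / 6.0
    return x, D

x, D = cheb_diff_matrix(12)
u = x**5 - 3.0 * x**2
assert np.allclose(D @ u, 5.0 * x**4 - 6.0 * x)
```

Applying $D_N$ twice yields the pseudospectral second derivative used by collocation schemes for second-order problems.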
We conclude this section by giving several estimates of the Chebyshev interpolation error. Define the weighted Sobolev space

(4.3.25)    $H^s_w(I) := \{ v \in L^2_w(I) \mid v^{(k)} \in L^2_w(I) \text{ for } k = 1, \dots, s \}$ ,   $s \in \mathbb{N}$ ,

whose norm is

(4.3.26)    $\|v\|_{s,w} := \left( \sum_{k=0}^{s} \|v^{(k)}\|_{0,w}^2 \right)^{1/2}$ .

Here $v^{(k)}$ denotes the $k$-th order distributional derivative of $v$ (see (1.2.9)). Assuming that $u$ is a function of $H^s_w(I)$ for some $s \ge 1$, it holds

(4.3.27)    $\|u - I_N u\|_{0,w} \le C\, N^{-s}\, \|u\|_{s,w}$
(see Canuto and Quarteroni (1982)). This result is based on a similar one for the trigonometric interpolation, which is originally due to Kreiss and Oliger (1979). The proof we present here has been given by Pasciak (1980). Denoting by $H^s_p(0,2\pi)$, $s \ge 1$, the subspace of the Sobolev space $H^s(0,2\pi)$ consisting of functions whose first $s - 1$ derivatives are periodic, we have:

Lemma 4.3.1 Set $S_N := \operatorname{span}\{ e^{ik\theta} : [0,2\pi] \to \mathbb{C} \mid -N \le k \le N-1 \}$ and let $\mathcal{I}_N : C^0([0,2\pi]) \to S_N$ be the interpolation operator at the nodes $\theta_j := j\pi/N$, $j = 0, \dots, 2N-1$. Then for each $\phi \in H^s_p(0,2\pi)$ it holds

(4.3.28)    $\|\phi - \mathcal{I}_N \phi\|_{0,(0,2\pi)} \le C\, N^{-s}\, |\phi|_{s,(0,2\pi)}$ .

Proof. Let us start by defining the map $\phi \to \tilde{\phi}$ in the following way:
$\tilde{\phi}(\theta) := \phi\!\left( \frac{\theta}{N} \right)$ ,   $\theta \in [0, 2\pi N]$ ;

set moreover $\tilde{S}_N := \{ \tilde{\phi}_N : [0, 2\pi N] \to \mathbb{C} \mid \phi_N \in S_N \}$ and $\tilde{\theta}_j := N \theta_j = j\pi$, $j = 0, \dots, 2N-1$. Clearly, the interpolation operator $\tilde{\mathcal{I}}_N : C^0([0, 2\pi N]) \to \tilde{S}_N$ at the nodes $\tilde{\theta}_j$ satisfies

(4.3.29)    $\tilde{\mathcal{I}}_N \tilde{\phi} = (\mathcal{I}_N \phi)^{\sim}$ .

In addition, it is at once verified that for each $\phi \in H^s(0,2\pi)$ it holds

(4.3.30)    $|\tilde{\phi}|_{l,(0,2\pi N)} = N^{1/2 - l}\, |\phi|_{l,(0,2\pi)}$ ,   $0 \le l \le s$ .
Thus, by proceeding as in the Bramble–Hilbert lemma (see Proposition 3.4.3)

(4.3.31)    $\|\phi - \mathcal{I}_N \phi\|_{0,(0,2\pi)} = N^{-1/2}\, \|\tilde{\phi} - \tilde{\mathcal{I}}_N \tilde{\phi}\|_{0,(0,2\pi N)} \le N^{-1/2}\, \|I - \tilde{\mathcal{I}}_N\| \cdot \inf_{\tilde{\phi}_N \in \tilde{S}_N} \|\tilde{\phi} - \tilde{\phi}_N\|_{s,(0,2\pi N)}$

(the norm of $I - \tilde{\mathcal{I}}_N$ is the norm in $\mathcal{L}(H^s_p(0,2\pi N); L^2(0,2\pi N))$). Choosing $\tilde{\phi}_N = (P_N \phi)^{\sim}$, $P_N$ being the $L^2$-orthogonal projection on $S_N$, from (4.3.30) we obtain

(4.3.32)    $\|\tilde{\phi} - (P_N \phi)^{\sim}\|_{s,(0,2\pi N)} = \left( \sum_{l=0}^{s} |\tilde{\phi} - (P_N \phi)^{\sim}|^2_{l,(0,2\pi N)} \right)^{1/2} = \left( \sum_{l=0}^{s} N^{1-2l}\, |\phi - P_N \phi|^2_{l,(0,2\pi)} \right)^{1/2}$ .
The space $H^s_p(0,2\pi)$ consists of functions for which it is permissible to differentiate termwise the Fourier series $s$ times, provided the convergence is in $L^2(0,2\pi)$. Thus from the Parseval identity

(4.3.33)    $|\phi|^2_{l,(0,2\pi)} = \|\phi^{(l)}\|^2_{0,(0,2\pi)} = 2\pi \sum_{k=-\infty}^{\infty} |k|^{2l}\, |\phi_k|^2$ ,

it follows

(4.3.34)    $|\phi - P_N \phi|^2_{l,(0,2\pi)} = 2\pi \sum_{|k| \gtrsim N} |k|^{2l}\, |\phi_k|^2 \le C\, N^{2(l-s)}\, |\phi|^2_{s,(0,2\pi)}$ ,   $0 \le l \le s$ ,

where the symbol $\sum_{|k| \gtrsim N}$ means that the sum is taken for $k < -N$ and $k \ge N$. Finally, from (4.3.32) we obtain

(4.3.35)    $\|\tilde{\phi} - (P_N \phi)^{\sim}\|_{s,(0,2\pi N)} \le C\, N^{1/2 - s}\, |\phi|_{s,(0,2\pi)}$ .
It remains to show that

$\|I - \tilde{\mathcal{I}}_N\|_{\mathcal{L}(H^s_p(0,2\pi N);\, L^2(0,2\pi N))} \le C$ .

Clearly, it is enough to consider $\tilde{\mathcal{I}}_N$. From the orthogonality relation

$\frac{1}{2N} \sum_{j=0}^{2N-1} e^{i p \theta_j} = \begin{cases} 1 & \text{if } p = 0, \pm 2N, \pm 4N, \dots \\ 0 & \text{otherwise} \end{cases}$

(see, e.g., Kreiss and Oliger (1979)) it follows that for each ...

$w_j = \frac{2}{N(N+1)\, L_N^2(x_j)}$ ,   $j = 0, \dots, N$ .
It can be shown that

$\frac{2}{N(N+1)} \le w_j \le \frac{C}{N}$ ,   $j = 0, \dots, N$
(see, e.g., Bernardi and Maday (1992), p. 76). The discrete approximations to the scalar product of $L^2(I)$ and its associated norm are

(4.4.14)    $(u,v)_N := \sum_{j=0}^{N} u(x_j)\, v(x_j)\, w_j$ ,   $\|v\|_N := (v,v)_N^{1/2}$ ,

where the $\{x_j\}$ are the Legendre Gauss–Lobatto nodes. In view of (4.2.2) and (4.4.6) we have

(4.4.15)    $\|L_k\|_N = \|L_k\|_0$ ,   $k = 0, \dots, N-1$ ,   $\|L_N\|_N = \left( \frac{2}{N} \right)^{1/2} = \left( 2 + \frac{1}{N} \right)^{1/2} \|L_N\|_0$ .

In particular, these relations produce

(4.4.16)    $\|u_N\|_0 \le \|u_N\|_N \le \sqrt{3}\, \|u_N\|_0 \qquad \forall\, u_N \in \mathbb{P}_N$ .

It follows that for all polynomials of degree less than or equal to $N$ the discrete norm defined in (4.4.14) is uniformly equivalent (with respect to $N$) to the $L^2$-norm. For any continuous function $u$ we can now introduce its interpolant $I_N u \in \mathbb{P}_N$ that matches $u$ at the $N+1$ Legendre Gauss–Lobatto nodes $\{x_j\}_{j=0,\dots,N}$.
In accordance with (4.2.7), and owing to (4.4.6), (4.4.15) we have

(4.4.17)    $I_N u(x) = \sum_{k=0}^{N} \tilde{u}_k\, L_k(x)$

with

(4.4.18)    $\tilde{u}_k = \frac{1}{\|L_k\|_N^2} \sum_{j=0}^{N} L_k(x_j)\, u(x_j)\, w_j$ ,   $0 \le k \le N$ .

Formula (4.4.18) is the discrete Legendre transform. It follows from (4.4.17) that its inverse is given by

(4.4.19)    $u(x_j) = \sum_{k=0}^{N} L_k(x_j)\, \tilde{u}_k$ ,   $j = 0, \dots, N$ .
Similarly to what we have done for the Chebyshev interpolation, for any continuous function $u$ we can now define a Legendre pseudospectral derivative $G_N u$. It is precisely the polynomial in $\mathbb{P}_{N-1}$ which is formally defined as in (4.3.21), but now $I_N u$ is the Legendre interpolant of $u$ given by (4.4.17). We still have a representation like (4.3.22) with the Lagrangian functions $\psi_j$ now expressed by

$\psi_j(x) = \frac{-1}{N(N+1)}\, \frac{(1 - x^2)\, L_N'(x)}{(x - x_j)\, L_N(x_j)}$ .

The Legendre pseudospectral matrix $D_N$ associates to the $N+1$ values $\{u(x_j) \mid j = 0, \dots, N\}$ the $N+1$ values $\{(G_N u)(x_j) \mid j = 0, \dots, N\}$ of the pseudospectral derivative of $u$ at the same Legendre Gauss–Lobatto nodes. Using (4.3.23) and the definition of $\psi_j$ it can be deduced that (see Solomonoff and Turkel (1986) or Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 2.3.2):

(4.4.20)    $(D_N)_{ij} = \begin{cases} \dfrac{1}{x_i - x_j}\, \dfrac{L_N(x_i)}{L_N(x_j)} , & i \ne j \\[6pt] 0 , & 1 \le i = j \le N-1 \\[6pt] -\dfrac{N(N+1)}{4} , & i = j = 0 \\[6pt] \dfrac{N(N+1)}{4} , & i = j = N . \end{cases}$

The only eigenvalue is $0$, with algebraic multiplicity $N + 1$.
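As for the Chebyshev case, the entries (4.4.20) can be assembled and tested directly. In this sketch of ours the nodes are ordered increasingly from $-1$ to $1$, which is the ordering assumed by the corner entries $\mp N(N+1)/4$:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_diff_matrix(N):
    """Legendre pseudospectral matrix (4.4.20) on the Gauss-Lobatto
    nodes, ordered increasingly from x_0 = -1 to x_N = 1."""
    LN = np.zeros(N + 1); LN[N] = 1.0
    x = np.concatenate(([-1.0],
                        np.sort(legendre.legroots(legendre.legder(LN))),
                        [1.0]))
    L = legendre.legval(x, LN)                 # L_N at the nodes
    D = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i, j] = L[i] / (L[j] * (x[i] - x[j]))
    D[0, 0] = -N * (N + 1) / 4.0
    D[N, N] = N * (N + 1) / 4.0
    return x, D

x, D = legendre_diff_matrix(10)
u = 2.0 * x**7 - x**3
assert np.allclose(D @ u, 14.0 * x**6 - 3.0 * x**2)   # exact for P_N
```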
Concerning the interpolation error estimate, the following result holds (see Bernardi and Maday (1992), pp. 77–78). If $u \in H^s(I)$ for some $s \ge 1$, then
(4.4.21)    $\|u - I_N u\|_k \le C\, N^{k-s}\, \|u\|_s$ ,   $k = 0, 1$ ,

in analogy with the Chebyshev interpolation (see (4.3.27), (4.3.40)). Using the Gagliardo–Nirenberg inequality (see Theorem 1.3.6) we deduce, in particular, the following error behaviour in the maximum norm

(4.4.22)    $\|u - I_N u\|_{L^\infty(I)} \le C\, N^{1/2 - s}\, \|u\|_s$ .

Also, we obtain that the error induced by the pseudospectral derivative is

(4.4.23)    $\|u' - G_N u\|_0 \le C\, N^{1-s}\, \|u\|_s$ .

Concerning the Legendre Gauss–Lobatto integration (4.4.14), by proceeding as in the proof of (4.3.44) and using (4.4.21) we can show that for all $u \in H^s(I)$, $s \ge 1$, and $v_N \in \mathbb{P}_N$

(4.4.24)    $|(u, v_N) - (u, v_N)_N| \le C\, N^{-s}\, \|u\|_s\, \|v_N\|_0$ .

We also notice that taking $k = s = 1$ in (4.4.21) we obtain

(4.4.25)    $\|I_N u\|_1 \le C\, \|u\|_1$ ,

which means that $I_N$ is a continuous operator on $H^1(I)$.

Remark 4.4.1 An inequality like (4.4.16) or (4.3.20) can also be established for exact and discrete maximum norms. Precisely, let us define

(4.4.26)    $\|v\|_\infty := \max_{x \in \bar{I}} |v(x)|$ ,   $\|v\|_{\infty,N} := \max_{0 \le j \le N} |v(x_j)|$ ,

where $\{x_j\}$ are the Gauss–Lobatto nodes. Then

(4.4.27)    $\|v_N\|_{\infty,N} \le \|v_N\|_\infty \le b_N\, \|v_N\|_{\infty,N} \qquad \forall\, v_N \in \mathbb{P}_N$ .

Using the Chebyshev nodes, the function $b_N$ can grow at most logarithmically with $N$. Indeed (e.g., Natanson (1965))

$b_N < \frac{2}{\pi} \log N + 1$ .

In the Legendre case, the logarithmic growth is actually a lower bound, as it is possible to find a constant $\varepsilon > 0$ such that (Erdős (1961))

$b_N > \frac{2}{\pi} \log N - \varepsilon$ .

Inequalities (4.4.27) are useful whenever discrete maximum norms are adopted to measure the error behaviour of computed solutions. □
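The quantity $b_N$ is the Lebesgue constant of the node set, $b_N = \max_x \sum_j |\psi_j(x)|$, and the Natanson bound for the Chebyshev nodes can be probed numerically. The routine below is our own illustration (a brute-force sampling of the Lebesgue function):

```python
import numpy as np

def lebesgue_constant(nodes, x_eval):
    """Approximate max over x of sum_j |psi_j(x)| for the
    Lagrange basis {psi_j} on 'nodes', sampled on the grid x_eval."""
    total = np.zeros_like(x_eval)
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        psi = np.prod((x_eval[:, None] - others) / (xj - others), axis=1)
        total += np.abs(psi)
    return np.max(total)

x_eval = np.linspace(-1.0, 1.0, 4001)
for N in (4, 8, 16):
    cheb = np.cos(np.pi * np.arange(N + 1) / N)
    bN = lebesgue_constant(cheb, x_eval)
    assert bN <= 2.0 / np.pi * np.log(N) + 1.0   # the bound quoted above
```

The sampled value underestimates the true maximum slightly, but the logarithmic growth is already visible for these small values of $N$.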
4.4.3 Legendre Projections

For any function $u \in L^2(I)$, the truncation of order $N$ of its Legendre series (4.4.7) is

(4.4.28)    $P_N u(x) = \sum_{k=0}^{N} \hat{u}_k\, L_k(x)$ ,

and $\|u - P_N u\|_0 \to 0$ as $N \to \infty$. Further, it has been proven by Canuto and Quarteroni (1982) that if $u \in H^s(I)$ for some $s \ge 0$ then

(4.4.29)    $\|u - P_N u\|_0 \le C\, N^{-s}\, \|u\|_s$ ,

which is an optimal rate of convergence, since $P_N : L^2(I) \to \mathbb{P}_N$ is the orthogonal projection operator. Moreover

(4.4.30)    $\|u - P_N u\|_1 \le C\, N^{3/2 - s}\, \|u\|_s$ .

As for the Chebyshev case, this estimate is nonoptimal. We are therefore led to introduce the orthogonal projection $P_{1,N} : H^1(I) \to \mathbb{P}_N$, which is defined as follows

(4.4.31)    $(P_{1,N} u, v_N)_1 = (u, v_N)_1 \qquad \forall\, v_N \in \mathbb{P}_N$ ,

where
(4.4.32)    $(u,v)_1 := \int_{-1}^{1} (u'v' + uv)\, dx$

denotes the scalar product of $H^1(I)$. The above projection yields an approximation error which is optimal in both the $L^2$- and $H^1$-norms. As a matter of fact

(4.4.33)    $\|u - P_{1,N} u\|_k \le C\, N^{k-s}\, \|u\|_s$ ,   $k = 0, 1$ .

This follows from (4.4.21) when $k = 1$, and by a usual duality argument when $k = 0$ (see also Maday and Quarteroni (1981)). The definition of orthogonal projection can also be extended to account for the boundary behaviour of the function $u$. In fact, let us introduce the space

(4.4.34)    $H^1_0(I) := \{ v \in H^1(I) \mid v(\pm 1) = 0 \}$

and recall that, due to the Poincaré inequality (1.3.2), the scalar product

(4.4.35)    $(u,v)_1^0 := \int_{-1}^{1} u'v'\, dx$

induces upon $H^1_0(I)$ a norm equivalent to $\|\cdot\|_1$. We now define $P^0_{1,N} : H^1_0(I) \to \mathbb{P}^0_N$ through
(4.4.36)    $(P^0_{1,N} u, v_N)_1^0 = (u, v_N)_1^0 \qquad \forall\, v_N \in \mathbb{P}^0_N$ .

As in the case of the projection operator $P_{1,N}$, for all $s \ge 1$ and all $u \in H^1_0(I) \cap H^s(I)$ it holds

(4.4.37)    $\|u - P^0_{1,N} u\|_k \le C\, N^{k-s}\, \|u\|_s$ ,   $k = 0, 1$ .
4.5 Two-Dimensional Extensions

Here we show how the previous definitions and results extend to the two-dimensional domain $\Omega = (-1,1)^2$. For any $N \in \mathbb{N}$ we denote by $\mathbb{Q}_N$ the space of algebraic polynomials of degree less than or equal to $N$ with respect to each single variable $x_i$, $i = 1,2$, and by $\mathbb{Q}^0_N$ the subspace of those polynomials that vanish on $\partial\Omega$. We introduce the space (see Section 1.3)

(4.5.1)    $L^2(\Omega) := \left\{ v : \Omega \to \mathbb{R} \;\Big|\; \int_\Omega v^2(x)\, dx < \infty \right\}$ .

...

$(A\eta, \eta) > 0$ for all $\eta \ne 0$, where $(\cdot,\cdot)$ denotes the Euclidean scalar product. Indeed, let $\eta_h \in V_h$ be the function defined as
(5.2.9)    $\eta_h(x) = \sum_{j=1}^{N_h} \eta_j\, \varphi_j(x)$ .

Then

$(A\eta, \eta) = \sum_{i,j=1}^{N_h} \eta_i\, A(\varphi_j, \varphi_i)\, \eta_j = A(\eta_h, \eta_h) > 0$ ,

and the conclusion holds owing to (5.1.14). In particular, any eigenvalue of $A$ has positive real part. Thus, the existence and uniqueness of a solution to (5.2.3) can also be proven by a pure algebraic argument without resorting to the Lax–Milgram lemma. When the bilinear form $A(\cdot,\cdot)$ is symmetric, it follows immediately that the matrix $A$ is also symmetric. □
5.3 Petrov–Galerkin Method

This is the case in which the numerical problem reads

(5.3.1)    $u_h \in W_h : \quad A_h(u_h, v_h) = F_h(v_h) \qquad \forall\, v_h \in V_h$ ,

where $\{W_h \mid h > 0\}$ and $\{V_h \mid h > 0\}$ are two families of finite dimensional spaces such that $W_h \ne V_h$ but $\dim W_h = \dim V_h = N_h$, for all $h > 0$. Problem (5.3.1) can be regarded as an approximation to (5.1.3) provided $A_h : W_h \times V_h \to \mathbb{R}$ and $F_h : V_h \to \mathbb{R}$ are convenient approximations to $A$ and $F$, respectively (possibly coinciding with $A$ and $F$), while $W_h$ is a subspace of $W$ and $V_h$ one of $V$. Spaces $W$ and $V$ need not necessarily be different, though.

The algebraic restatement of (5.3.1) is accomplished in the following way. Let $\{\varphi_j \mid j = 1, \dots, N_h\}$ be a basis of $W_h$, and $\{\psi_i \mid i = 1, \dots, N_h\}$ one of $V_h$. Expanding again the solution $u_h$ of (5.3.1) as in (5.2.4) we obtain the following system of dimension $N_h$:

(5.3.2)    $\sum_{j=1}^{N_h} u_j\, A_h(\varphi_j, \psi_i) = F_h(\psi_i)$ ,   $i = 1, \dots, N_h$ .
For the analysis of stability and convergence of (5.3.1) we have the following theorem, which is due to Babuška (e.g., Babuška and Aziz (1972)).

Theorem 5.3.1 Under the assumptions of Theorem 5.1.2, suppose further that $F_h : V_h \to \mathbb{R}$ is a linear map and that $A_h : W_h \times V_h \to \mathbb{R}$ is a bilinear form satisfying the same properties (5.1.23), (5.1.24) of $A$ by replacing $W$ with $W_h$, $V$ with $V_h$, and the constant $\alpha$ by $\alpha_h$. Then, there exists a unique solution $u_h$ to (5.3.1) that satisfies

(5.3.3)    $|||u_h||| \le \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{F_h(v_h)}{\|v_h\|}$ .

Moreover, if $u$ is the solution of (5.1.3), it follows

(5.3.4)    $|||u - u_h||| \le \inf_{w_h \in W_h} \left[ \left( 1 + \frac{\gamma}{\alpha_h} \right) |||u - w_h||| + \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|A(w_h, v_h) - A_h(w_h, v_h)|}{\|v_h\|} \right] + \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F(v_h) - F_h(v_h)|}{\|v_h\|}$ .
Proof. For any fixed $h$, existence and uniqueness follow from Theorem 5.1.2. Moreover, (5.3.3) follows using (5.1.23) for $A_h$ and (5.3.1). Indeed,

$|||u_h||| \le \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{A_h(u_h, v_h)}{\|v_h\|} = \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{F_h(v_h)}{\|v_h\|}$ .

Concerning (5.3.4), for all $w_h \in W_h$ and $v_h \in V_h$ we have

$A_h(u_h - w_h, v_h) = A(u - w_h, v_h) + A(w_h, v_h) - A_h(w_h, v_h) + F_h(v_h) - F(v_h)$ .

Using (5.1.22) along with the discrete counterpart of (5.1.23) produces

$|||u_h - w_h||| \le \frac{\gamma}{\alpha_h}\, |||u - w_h||| + \frac{1}{\alpha_h} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|A(w_h, v_h) - A_h(w_h, v_h)| + |F(v_h) - F_h(v_h)|}{\|v_h\|}$ .

Owing to the triangle inequality

$|||u - u_h||| \le |||u - w_h||| + |||u_h - w_h|||$ ,

the result (5.3.4) follows easily. □
5. Galerkin, Collocation and Other Methods
Examples of Petrov–Galerkin approximations are furnished by the so-called tau-method (see, e.g., Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 10.4) and by the "upwind" treatment of advection-diffusion equations (see Section 8.2.2). Moreover, the Petrov–Galerkin approximation can sometimes take the form

(5.3.5)    $u_h \in W_h : \quad A(u_h, w_h + L_h w_h) = F(w_h + L_h w_h) \qquad \forall\, w_h \in W_h$ ,

where $L_h$ is a suitable operator related to $L$ (according to some criteria). Problem (5.3.5) can be written in the form (5.3.1) provided we set

$V_h := \{ w_h + L_h w_h \mid w_h \in W_h \}$ .

Examples of this kind can be found in Section 14.3, where the so-called GALS, SUPG and DWG methods are used to approximate pure advection equations (see Remark 14.3.1).
5.4 Collocation Method

Collocation methods are used in the framework of several kinds of finite dimensional approximations, remarkably for boundary element, finite element and spectral methods. For the sake of brevity, here we focus only on the latter case, where the collocation approach is by far more successful than any other one. We can therefore restrict our interest to the domain $\Omega = (-1,1)^d$, $d = 2, 3$, and suppose that we are looking for a discrete solution which is an algebraic polynomial of degree less than or equal to $N$. Also, we are given $(N+1)^d$ collocation points $x_i \in \bar{\Omega}$ which are the nodes of a Gauss–Lobatto integration formula (see (4.5.19) or (4.5.38)). The spectral collocation solution is a function $u_N \in \mathbb{Q}_N$ (the space of algebraic polynomials of degree less than or equal to $N$ with respect to each single variable $x_i$, $i = 1, \dots, d$) that satisfies

(5.4.1)    $\begin{cases} L_N u_N = f & \text{at } x_i \in \bar{\Omega} \setminus \partial\Omega^* \\ B_N u_N = 0 & \text{at } x_i \in \partial\Omega^* , \end{cases}$

where, typically, $L_N$ is an approximation to $L$ obtained by replacing any derivative by the pseudospectral derivative, i.e., the exact derivative of the interpolant at the nodes $\{x_i\}$ (see Sections 4.3.2 and 4.4.2). The same is true for $B_N$. Thus, the boundary conditions are satisfied at all nodes lying on $\partial\Omega^*$, whereas the differential equation is enforced at all remaining Gauss–Lobatto nodes. More precisely, we point out that when the condition $Bu = 0$ expresses
a natural boundary condition, at all nodes of $\partial\Omega^*$ it is not imposed exactly, but a suitable linear combination between it and the residual of the differential equation is enforced. This translates, at the discrete level, the fulfillment of the boundary conditions in a natural fashion. As we will see in Section 6.2.2, when the original problem takes the form (5.1.12), this approach leads to solving

(5.4.2)    $u_N \in V_N : \quad A_N(u_N, v_N) = F_N(v_N) \qquad \forall\, v_N \in V_N$ ,

where $V_N = \mathbb{Q}_N \cap V$ and $A_N$ is an approximation to the form $A$ in which all integrals are evaluated through the Gauss–Lobatto quadrature formula. The same kind of approximation is carried out for the right-hand side. In the form (5.4.2), which is referred to as the weak form of the spectral collocation method (whereas (5.4.1) will be called the strong form), this problem appears indeed as a generalized Galerkin method, and therefore falls within the stability and convergence analysis that is presented in the next Section.

From the algebraic point of view, let $\{\psi_j\}$ denote the Lagrangian basis of $V_N$, i.e.,

(5.4.3)    $\psi_j(x_i) = \delta_{ij}$ ,   $i, j = 1, \dots, (N+1)^d$ .

Setting

$u_N(x) = \sum_j \xi_j\, \psi_j(x)$ ,   where   $\xi_j = u_N(x_j)$ ,

it follows from (5.4.1) that

(5.4.4)    $\begin{cases} \displaystyle\sum_j (L_N \psi_j)(x_i)\, \xi_j = f(x_i) & \text{for all } i \text{ such that } x_i \in \bar{\Omega} \setminus \partial\Omega^* \\[6pt] \displaystyle\sum_j (B_N \psi_j)(x_i)\, \xi_j = 0 & \text{for all } i \text{ such that } x_i \in \partial\Omega^* . \end{cases}$

This produces a linear system for $\xi = (\xi_j)$, whose matrix can be expressed in terms of the pseudospectral derivative matrices. Therefore, the spectral collocation method is also called the pseudospectral method.
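The strong form (5.4.1) is stated here for $d = 2, 3$; its structure is already visible in a one-dimensional analogue. The following sketch of ours collocates the hypothetical model problem $-u'' = f$ on $(-1,1)$ with $u(\pm 1) = 0$, using the Chebyshev pseudospectral matrix of (4.3.24) so that the system matrix is built exactly as described above:

```python
import numpy as np

N = 24
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)                        # Gauss-Lobatto nodes (4.3.12)
d = np.where((j == 0) | (j == N), 2.0, 1.0)
D = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for k in range(N + 1):
        if i != k:
            D[i, k] = (d[i] / d[k]) * (-1.0) ** (i + k) / (x[i] - x[k])
D -= np.diag(D.sum(axis=1))       # diagonal via the "negative sum" identity

# Strong form (5.4.1) in 1D: L_N = -D^2, boundary rows collocate u = 0
L = -(D @ D)
u_exact = np.sin(np.pi * x)
rhs = np.pi**2 * np.sin(np.pi * x)
A = L.copy()
A[0, :] = 0.0;  A[0, 0] = 1.0;  rhs[0] = 0.0     # node x_0 = 1
A[N, :] = 0.0;  A[N, N] = 1.0;  rhs[N] = 0.0     # node x_N = -1
u = np.linalg.solve(A, rhs)
assert np.max(np.abs(u - u_exact)) < 1e-8        # spectral accuracy
```

Replacing the boundary rows by quadrature-weighted combinations of equation and boundary residuals would give the weak form (5.4.2) instead.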
5.5 Generalized Galerkin Method

In this case, the finite dimensional problem reads

(5.5.1)    $u_h \in V_h : \quad A_h(u_h, v_h) = F_h(v_h) \qquad \forall\, v_h \in V_h$ ,

where $\{V_h \mid h > 0\}$ is a family of finite dimensional subspaces of $V$. Here $A_h$ and $F_h$ denote convenient approximations to $A$ and $F$, respectively. This is a special subcase of (5.3.1) and includes the collocation method in its weak
form (5.4.2). We present, however, a separate analysis, as some differences in the convergence proof arise in the present situation. Other instances of the generalized Galerkin method are provided by the Galerkin finite element method that makes use of numerical integration (see Section 6.2.3) and by stabilization methods for advection-diffusion problems or for the Stokes problem (see Sections 8.3 and 9.4). We recall that $F_h(\cdot)$ is a linear form defined over $V_h$ and $A_h(\cdot,\cdot)$ is a bilinear form defined over $V_h \times V_h$, and they do not necessarily make sense when applied to elements of $V$. As a matter of fact, typically $A_h$ and $F_h$ involve point-values of $u_h$ or/and its derivatives, which do not necessarily exist for functions of $V$ (e.g., this is never the case for the three examples discussed above). The following result, known as the first Strang lemma, holds.

Theorem 5.5.1 Under the assumptions of Theorem 5.1.1, suppose further that $F_h(\cdot)$ is a linear map and the bilinear form $A_h(\cdot,\cdot)$ is uniformly coercive over $V_h \times V_h$. This means that there exists $\alpha^* > 0$ such that for all $h > 0$

(5.5.2)    $A_h(v_h, v_h) \ge \alpha^*\, \|v_h\|^2 \qquad \forall\, v_h \in V_h$ .

Then there exists a unique solution $u_h$ to (5.5.1), which satisfies

(5.5.3)    $\|u_h\| \le \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F_h(v_h)|}{\|v_h\|}$

and, if $u$ is the solution to (5.1.12),

(5.5.4)    $\|u - u_h\| \le \inf_{w_h \in V_h} \left[ \left( 1 + \frac{\gamma}{\alpha^*} \right) \|u - w_h\| + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|A(w_h, v_h) - A_h(w_h, v_h)|}{\|v_h\|} \right] + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F(v_h) - F_h(v_h)|}{\|v_h\|}$ .
Proof. As in the proof of Theorem 5.3.1, existence, uniqueness and inequality (5.5.3) follow from the Lax–Milgram lemma, owing to (5.5.2). The proof of (5.5.4), although not too different from that of (5.3.4), is reported for the reader's convenience. Let $w_h$ be an arbitrary element in the space $V_h$. Setting $\sigma_h := u_h - w_h$, and using (5.5.2) we obtain

$\alpha^* \|\sigma_h\|^2 \le A_h(\sigma_h, \sigma_h) = A(u - w_h, \sigma_h) + A(w_h, \sigma_h) - A_h(w_h, \sigma_h) + F_h(\sigma_h) - F(\sigma_h)$ .

Assuming $\sigma_h \ne 0$, due to (5.1.13) one has

$\|\sigma_h\| \le \frac{\gamma}{\alpha^*}\, \|u - w_h\| + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|A(w_h, v_h) - A_h(w_h, v_h)| + |F(v_h) - F_h(v_h)|}{\|v_h\|}$ .
The above inequality is clearly true also when $\sigma_h = 0$. Combining it with the triangular inequality $\|u - u_h\| \le \|u - w_h\| + \|\sigma_h\|$, and taking the infimum with respect to $w_h \in V_h$ we obtain the desired result. □

The following result will also be useful:

Proposition 5.5.1 Under the assumptions of Theorem 5.5.1, suppose further that the bilinear form $A_h(\cdot,\cdot)$ is defined at $(u, v_h)$, where $u$ is the solution to (5.1.12) and $v_h \in V_h$, and satisfies for a suitable $\gamma^* > 0$

(5.5.5)    $|A_h(u - w_h, v_h)| \le \gamma^*\, \|u - w_h\|\, \|v_h\| \qquad \forall\, w_h, v_h \in V_h$ ,

uniformly with respect to $h > 0$. Then the following convergence estimate holds:

(5.5.6)    $\|u - u_h\| \le \left( 1 + \frac{\gamma^*}{\alpha^*} \right) \inf_{w_h \in V_h} \|u - w_h\| + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F_h(v_h) - A_h(u, v_h)|}{\|v_h\|}$ .
Proof. Let $w_h$ be an arbitrary element in $V_h$. We can write:

$A_h(u_h - w_h, u_h - w_h) = A_h(u - w_h, u_h - w_h) + F_h(u_h - w_h) - A_h(u, u_h - w_h)$ .

Therefore, using (5.5.2) and (5.5.5) we find

$\|u_h - w_h\| \le \frac{\gamma^*}{\alpha^*}\, \|u - w_h\| + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F_h(v_h) - A_h(u, v_h)|}{\|v_h\|}$ .

The triangular inequality gives at once (5.5.6). □
From the algebraic point of view, if we expand $u_h$ as in (5.2.4), problem (5.5.1) produces a linear system like (5.2.5) where $F(\cdot)$ and $A(\cdot,\cdot)$ are replaced
by $F_h(\cdot)$ and $A_h(\cdot,\cdot)$, respectively. The coerciveness assumption (5.5.2) guarantees that $A$ is a positive definite matrix; moreover, it is also symmetric if $A_h(\cdot,\cdot)$ is a symmetric form.

Remark 5.5.1 (Nonconforming approximation). An extreme case of (5.5.1) is the one in which $V_h$ is not a subspace of the space $V$, thus the bilinear form $A$ is not necessarily defined on $V_h \times V_h$. We assume that a norm $\|\cdot\|_h$ and the approximate bilinear form $A_h(\cdot,\cdot)$ are defined on $(V + V_h)$, and that the approximate linear functional $F_h(\cdot)$ is defined on $V_h$. We require moreover that there exist constants $\alpha^* > 0$, $\gamma^* > 0$ such that for each $h > 0$

$A_h(v_h, v_h) \ge \alpha^*\, \|v_h\|_h^2 \qquad \forall\, v_h \in V_h$ ,
$|A_h(w, v_h)| \le \gamma^*\, \|w\|_h\, \|v_h\|_h \qquad \forall\, w \in (V + V_h)$ ,   $v_h \in V_h$ .

Then by the so-called second Strang lemma we find the following error estimate

$\|u - u_h\|_h \le \left( 1 + \frac{\gamma^*}{\alpha^*} \right) \inf_{w_h \in V_h} \|u - w_h\|_h + \frac{1}{\alpha^*} \sup_{v_h \in V_h,\, v_h \ne 0} \frac{|F_h(v_h) - A_h(u, v_h)|}{\|v_h\|_h}$ .

The proof is quite similar to that of Proposition 5.5.1. See also, e.g., Ciarlet (1991), pp. 212–213. □

A schematic illustration of the various formulations presented in the preceding Sections is reported in Fig. 5.5.1. There we have denoted with $\Omega_h$ and $\partial\Omega^*_h$ the set of collocation nodes in $\Omega$ and $\partial\Omega^*$, respectively.
5.6 Time-Advancing Methods for Time-Dependent Problems

In this Section we address the issue of time-discretization for initial-boundary value problems. This presentation is very short, and it only serves the purpose of giving the reader a flavour of the several possible ways to face this problem. Detailed discussions are provided all across Part III of this book for several instances. Keeping in mind the abstract framework of Section 5.1, we now define the following problem

(5.6.1)    $\begin{cases} \dfrac{\partial u}{\partial t} + L u = f & \text{in } Q_T := (0,T) \times \Omega \\[6pt] B u = 0 & \text{on } \Sigma_T := (0,T) \times \partial\Omega^* \\[6pt] u = u_0 & \text{on } \Omega, \text{ for } t = 0 , \end{cases}$
(P)    $u \in X$ :  $Lu = f$ in $\Omega$ ,  $Bu = 0$ on $\partial\Omega^*$

$u \in V$ : $\forall\, v \in V$  $A(u,v) = F(v)$        $u \in W$ : $\forall\, v \in V$  $A(u,v) = F(v)$

(G)    $u_h \in V_h$ : $\forall\, v_h \in V_h$  $A(u_h, v_h) = F(v_h)$

(GG)   $u_h \in V_h$ : $\forall\, v_h \in V_h$  $A_h(u_h, v_h) = F_h(v_h)$

(C)    $u_h \in X_h$ :  $L_h u_h = f_h$ in $\Omega_h \setminus \partial\Omega^*_h$ ;  $B_h u_h = 0$ on $\partial\Omega^*_h$

(PG)   $u_h \in W_h$ : $\forall\, v_h \in V_h$  $A_h(u_h, v_h) = F_h(v_h)$

Fig. 5.5.1. Galerkin (G), Petrov–Galerkin (PG), Generalized Galerkin (GG) and Collocation (C) approximations to the boundary value problem (P)
where $T > 0$ is a prescribed time-level, $\partial/\partial t$ denotes time-differentiation, $u$ (the unknown) and $f$ are functions of $t \in (0,T)$ and $x \in \Omega$, and $u_0 = u_0(x)$ is the assigned initial datum. The differential operator $L$, as well as the boundary operator $B$, can now depend on $t$. When $u$ is a vector-valued function, the time differentiation need not necessarily apply to all the components of $u$. This is, for instance, the case of the Stokes problem that is introduced below. The corresponding initial-boundary value problems are referred to as Differential-Algebraic Equations (see Brenan, Campbell and Petzold (1989)).

We are not going to discuss the general assumptions that ensure existence and uniqueness of a solution to (5.6.1). Nonetheless, we can suppose that a weak formulation of the above problem can be stated. For this, we assume that there exist three Hilbert spaces $V$, $W$, $H$ such that $W$ and $V$ are contained into $H$ with dense, continuous inclusion. (If not otherwise specified, $H = L^2(\Omega)$.) The scalar product of $H$ is denoted by $(\cdot,\cdot)$. We assume that $u_0 \in H$ and $f \in L^2(0,T;H)$. Furthermore, we assume that there exists a bilinear form $A(\cdot,\cdot)$ continuous on $W \times V$. The weak formulation of (5.6.1) reads: find $u \in L^2(0,T;W) \cap C^0([0,T];H)$ (see Section 1.3 for the definition of these spaces) such that

(5.6.2)    $\frac{d}{dt}\,(u(t), v) + A(u(t), v) = F(t, v) \qquad \forall\, v \in V$
Example 1. The heat equation The problem we consider reads
au at

(5.6.3)
{
Llu
=f
in QT
u=o
on ET := (0, T) x
u=
on
Uo
n, for t =
0
an
,
and will be extensively addressed in Chapter 11. It describes the time evolution of the temperature u of an isotropic and homogeneous medium contained in .f2 under an external heat source t, UQ is the initial temperature. Its weak formulation is: find u : QT + IE. such that u(t,·) E HJ(.f2) and
(5.6.4)
fa ~~ (t, x) v(x) dx + fa 'Vu(t, x) . 'Vv(x) dx = fa f(t,x)v(x)dx v 'Ii
E HJ(.f2)
for almost every t E (0, T); moreover, u = Uo at t = O. It is therefore a special case of (5.6.2), with symbols' explanation provided in (5.1.6).
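A standard way to advance (5.6.4) in time is to discretize in space by Galerkin and in time by a finite difference scheme. The sketch below (our own one-dimensional illustration on $\Omega = (0,1)$, using P1 elements and backward Euler, a choice discussed in Part III) checks the result against the exact separable solution:

```python
import numpy as np

# Backward Euler in time, P1 Galerkin in space, for u_t - u_xx = 0 on (0,1),
# u = 0 at x = 0, 1, u0(x) = sin(pi x); exact: u(t,x) = exp(-pi^2 t) sin(pi x).
n, T, steps = 60, 0.05, 200
h, dt = 1.0 / (n + 1), T / 200
xg = np.linspace(h, 1.0 - h, n)                        # interior nodes
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h   # stiffness
M = h * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0  # mass

u = np.sin(np.pi * xg)
for _ in range(steps):
    u = np.linalg.solve(M + dt * K, M @ u)             # (M + dt K) u^{n+1} = M u^n

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * xg)
assert np.max(np.abs(u - exact)) < 1e-2
```

Each step solves a linear system with the matrix $M + \Delta t\, K$, the discrete counterpart of the two integrals in (5.6.4).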
Example 2. The unsteady Stokes problem. The time-dependent version of problem (5.1.7) is

(5.6.5)    $\begin{cases} \dfrac{\partial u}{\partial t} - \nu \Delta u + \nabla p = f & \text{in } Q_T \\[4pt] \operatorname{div} u = 0 & \text{in } Q_T \\[4pt] u = 0 & \text{on } \Sigma_T \\[4pt] u = u_0 & \text{on } \Omega, \text{ for } t = 0 . \end{cases}$

These equations describe the unsteady motion of a viscous incompressible fluid confined into $\Omega$, subject to a volumic density $f$ of external forces and having an initial velocity $u_0$, under the assumption that this motion is slow. Their derivation within the framework of time-dependent Navier–Stokes equations is presented in Chapter 13. The weak form of (5.6.5) reads: find $u : Q_T \to \mathbb{R}^d$ and $p : Q_T \to \mathbb{R}$ such that $u(t,\cdot) \in (H^1_0(\Omega))^d$, $p(t,\cdot) \in L^2_0(\Omega)$ and

(5.6.6)    $\begin{aligned} &\int_\Omega \frac{\partial u}{\partial t}(t,x) \cdot v(x)\, dx + \nu \int_\Omega \nabla u(t,x) \cdot \nabla v(x)\, dx - \int_\Omega p(t,x)\, \operatorname{div} v(x)\, dx = \int_\Omega f(t,x) \cdot v(x)\, dx \qquad \forall\, v \in (H^1_0(\Omega))^d \\ &\int_\Omega q(x)\, \operatorname{div} u(t,x)\, dx = 0 \qquad \forall\, q \in L^2_0(\Omega) \end{aligned}$

for almost every $t \in (0,T)$; moreover, $u = u_0$ at $t = 0$. Again, (5.6.6) can be written in the abstract form (5.6.2) by adopting the notations (5.1.9).

Example 3. A linear transport problem. Under the notation adopted in (5.1.10), the transport initial-boundary value problem is (see also Chapter 14):
(5.6.7)    $\begin{cases} \dfrac{\partial u}{\partial t} + a \cdot \nabla u = f & \text{in } Q_T \\ u = \dots \end{cases}$

...

6. Elliptic Problems: Galerkin and Collocation Approximation

$J(u + v) > J(u) \qquad \forall\, v \ne 0$ .

Conversely, if the functional $J$ attains its minimum at $u$, it follows that its Gateaux derivative must be vanishing, i.e.,

$\left[ \frac{d}{d\tau}\, J(u + \tau v) \right]_{\tau = 0} = 0 \qquad \forall\, v \in V$ .

As a consequence

$0 = \frac{d}{d\tau}\, J(u + \tau v) \Big|_{\tau = 0} = \lim_{\tau \to 0} \frac{J(u + \tau v) - J(u)}{\tau} = \lim_{\tau \to 0} \left[ a(u,v) + \frac{\tau}{2}\, a(v,v) - (f,v) \right] = a(u,v) - (f,v) \qquad \forall\, v \in V$ ,

hence (6.1.8) holds. If the problem considered is the equilibrium of an elastic membrane fixed on the boundary and subject to a load of intensity $f$, the functional $J(v)$ represents the total potential energy associated with the displacement $v$. When considering the other boundary value problems, the corresponding definition of $J(v)$ is easily obtained from (6.1.11), (6.1.14) and (6.1.16). Precisely, we find

$J(v) := \frac{1}{2}\, a(v,v) - (f,v) - (g,v)_{\partial\Omega}$

for problem (6.1.11),

$J(v) := \frac{1}{2}\, a(v,v) - (f,v) - (g,v)_{\Gamma_N}$

for problem (6.1.14), and

$J(v) := \frac{1}{2} \left[ a(v,v) + (\kappa v, v)_{\partial\Omega} \right] - (f,v) - (g,v)_{\partial\Omega}$

for problem (6.1.16). □
6.1.2 Existence, Uniqueness and A-Priori Estimates

Here we review some of the main mathematical properties of the preceding boundary value problems. The basic ingredient for proving the existence of a solution is the Lax–Milgram lemma (see Theorem 5.1.1), which in turn is a generalization of the Riesz representation theorem. With the aim of applying this result, we are going to check that, under suitable assumptions on the data, conditions (5.1.13) and (5.1.14) are satisfied.

We start from the Dirichlet problem (6.1.8). First of all, any $f \in L^2(\Omega)$ defines a continuous linear functional $v \to (f,v)$ on $H^1_0(\Omega)$. Thus we have only to show that $a(\cdot,\cdot)$ is continuous and coercive. Under assumption (6.1.4), the continuity is easily verified. Moreover, the ellipticity assumption gives

(6.1.17)    $\int_\Omega \sum_{i,j=1}^{d} a_{ij}\, D_i v\, D_j v \ge \alpha_0 \int_\Omega |\nabla v|^2 \qquad \forall\, v \in H^1_0(\Omega)$ ,

as choosing $\xi = \nabla v(x)$ in Definition 6.1.1 it follows

$\sum_{i,j=1}^{d} a_{ij}(x)\, D_i v(x)\, D_j v(x) \ge \alpha_0\, |\nabla v(x)|^2 \qquad \text{a.e. in } \Omega$ .
Here and in the sequel a.e. means almost everywhere. Consider now the remaining term in $a(v,v)$. Assuming that $\operatorname{div}(b - c) \in L^\infty(\Omega)$, we can write

$\int_\Omega \left[ \sum_{i=1}^{d} (b_i - c_i)\, v\, D_i v + a_0\, v^2 \right] = \int_\Omega \left[ \sum_{i=1}^{d} \frac{1}{2}\, (b_i - c_i)\, D_i(v^2) + a_0\, v^2 \right] = \int_\Omega \left[ -\frac{1}{2} \operatorname{div}(b - c) + a_0 \right] v^2$
for each $v \in H^1_0(\Omega)$. If $C_\Omega$ is the constant of the Poincaré inequality (see (1.3.2)), i.e.,

(6.1.18)    $\int_\Omega v^2 \le C_\Omega \int_\Omega |\nabla v|^2 \qquad \forall\, v \in H^1_0(\Omega)$ ,
we can conclude that $a(\cdot,\cdot)$ is coercive provided we assume that for almost every $x \in \Omega$

(6.1.19)    $-\frac{1}{2} \operatorname{div}[b(x) - c(x)] + a_0(x) \ge \eta \ge 0$ .

6.1 Problem Formulation and Mathematical Properties

Assuming instead

$-\frac{1}{2} \operatorname{div}[b(x) - c(x)] + a_0(x) \ge 0$   a.e. in $\Omega$ ,

and further

(6.1.21)    $[b(x) - c(x)] \cdot n(x) \le 0$   a.e. on $\partial\Omega$ ,

we can still conclude that $a(\cdot,\cdot)$ is coercive. In alternative to (6.1.21), a condition on the magnitude of $[b - c]$ could be assumed, for instance

(6.1.22)    $\|b - c\|_{L^\infty(\partial\Omega)} \le \varepsilon_0$ ,   $0 \le \varepsilon_0 \dots$
7r is the measure of the angle of a concave corner of ail, then it turn out that locally near that corner (u  'l/J) is an H2function. Here 'l/J is a suitable function having in the corner a singularity of the type r!«, r being the distance from the corner itself. In particular, one can always conclude that u E H3/2(il). When considering either the Neumann or the Robin problems, the regularity result for a smooth domain il is similar to the one for the Dirichlet case. The assumptions on ail, the coefficients aij, bi, c, and aD and the datum f are in fact the same, and requiring 9 E H k+!/2(ail) and K E k+! (ail) produces a solution u E H k +2 (il ), k 2: o. Let il be a plane polygonal domain and consider, for the sake of simplicity, the homogeneous Neumann problem for the Laplace operator:
(6.1.29)   $-\Delta u = f$  in $\Omega$ ,   $\dfrac{\partial u}{\partial n} = 0$  on $\partial\Omega$ .

In this case, assuming $f \in L^2(\Omega)$ and the compatibility condition $\int_\Omega f = 0$, one still obtains $u \in H^2(\Omega)$ for a convex domain, and $u \in H^{3/2}(\Omega)$ in the general case. On the contrary, the solution of the mixed problem in general is not regular. More precisely, there exist examples in which the data and the boundary are smooth, while the solution belongs to $H^s(\Omega)$ for any $s < 3/2$, but not to $H^{3/2}(\Omega)$.

6.1.4 On the Degeneracy of the Constants in Stability and Error Estimates
Let us consider now the issue of the a-priori bounds for the solution of the elliptic problems introduced above. We have already seen from Theorem 5.1.1 that the solution satisfies

$\|u\|_1 \le \dfrac{1}{\alpha}\, \|F\|_{V'}$ ,
where $\alpha$ is the coerciveness constant related to the bilinear form $a(u,v)$ (or $a(u,v) + (\kappa u, v)_{\partial\Omega}$ in the case of the Robin problem). To fix the ideas, from now on we will refer to the Dirichlet boundary value problem (6.1.8). If we assume $a_{ij} = \varepsilon\,\delta_{ij}$ ($\varepsilon > 0$ a constant), $\mathrm{div}\,\mathbf{b} = 0$, $c_i = 0$ and $a_0 \ge 0$, the existence of a unique solution is guaranteed since (6.1.19) is satisfied. Moreover, the coerciveness constant is given by $\varepsilon$, and we have the a-priori estimate

$\|\nabla u\|_0 \le \dfrac{C_\Omega^{1/2}}{\varepsilon}\, \|f\|_0$ ,

where $C_\Omega$ is the constant that appears in the Poincaré inequality (6.1.18). The control on the gradient of $u$ can be very poor if the constant $\varepsilon$ is very small. More important, the error estimate for the Galerkin method reads (see (5.2.7))
(6.1.30)   $\|u - u_h\|_1 \le \dfrac{\gamma}{\varepsilon}\, \inf_{v_h \in V_h} \|u - v_h\|_1$ .

Here $\gamma$ is the continuity constant related to the bilinear form $a(u,v)$ and can be expressed by

$\gamma = \varepsilon + C_\Omega^{1/2}\, \|\mathbf{b}\|_{L^\infty(\Omega)} + C_\Omega\, \|a_0\|_{L^\infty(\Omega)}$ .

Again, $\gamma/\varepsilon$ is a large number if $\varepsilon$ is small in comparison with $\|\mathbf{b}\|_{L^\infty(\Omega)}$ and/or $\|a_0\|_{L^\infty(\Omega)}$. In this situation, the performance of the pure Galerkin method can be quite poor. This is revealed by the onset of oscillations that may appear in the numerical solution near the boundary of $\Omega$ whenever the exact solution $u$ exhibits boundary layers. In such cases, the numerical method needs to be "stabilized" by resorting to a generalized Galerkin approach that damps the numerical oscillations. This issue will be discussed thoroughly in Chapter 8.
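The onset of these oscillations can be reproduced with a few lines of code. The sketch below assumes the 1D model problem $-\varepsilon u'' + u' = 0$ on $(0,1)$, $u(0)=0$, $u(1)=1$, whose solution has a boundary layer of width $O(\varepsilon)$ at $x = 1$; piecewise linear Galerkin elements then coincide with central differences, and oscillate as soon as the local Péclet number $h/(2\varepsilon)$ exceeds 1. The upwinded variant (extra diffusion $h/2$) is a crude anticipation of the stabilized methods of Chapter 8:

```python
import numpy as np

def advection_diffusion(eps, n, upwind=False):
    """Solve -eps*u'' + u' = 0 on (0,1), u(0)=0, u(1)=1, n interior nodes."""
    h = 1.0 / (n + 1)
    # Full upwinding of u' is equivalent to adding artificial diffusion h/2
    e = eps + 0.5 * h if upwind else eps
    main = 2.0 * e / h**2 * np.ones(n)
    lower = (-e / h**2 - 0.5 / h) * np.ones(n - 1)   # coefficient of u_{i-1}
    upper = (-e / h**2 + 0.5 / h) * np.ones(n - 1)   # coefficient of u_{i+1}
    A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
    b = np.zeros(n)
    b[-1] = -(-e / h**2 + 0.5 / h)                   # boundary value u(1) = 1
    return np.linalg.solve(A, b)

u_gal = advection_diffusion(eps=1e-3, n=50)              # mesh Peclet ~ 10
u_up = advection_diffusion(eps=1e-3, n=50, upwind=True)  # stabilized

print(u_gal.min())   # large negative undershoots: the Galerkin oscillations
print(u_up.min())    # the upwinded solution stays (essentially) nonnegative
```

The exact solution is monotone and nonnegative, so the negative minimum of the pure Galerkin solution is entirely a numerical artifact.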
6.2 Numerical Methods: Construction and Analysis

In this Section we introduce numerical methods to approximate any elliptic boundary value problem that can be expressed in the form (6.1.5). For the reasons we have outlined in Section 6.1.4, the effectiveness of the approximation methods we are going to describe depends on the relative magnitude of the ellipticity constant $\alpha_0$, the advective vector fields $\mathbf{b}$ and $\mathbf{c}$, and the coefficient $a_0$. The methods we present in this Chapter are well suited for the diffusion-dominated case, i.e., when the ellipticity constant $\alpha_0$ is large enough in comparison with the coefficients $b_i$, $c_i$, $i = 1, \dots, d$, and $a_0$. In Chapter 8 we will consider other methods which can be applied to the advection-dominated case. Among the numerical methods presented in Chapter 5, we are going to consider the Galerkin, collocation and generalized Galerkin methods.
6. Elliptic Problems: Galerkin and Collocation Approximation
6.2.1 Galerkin Method: Finite Element and Spectral Approximations

As pointed out in (5.2.3), the Galerkin approximation to (6.1.5) reads:

(6.2.1)   find $u_h \in V_h$ :  $\mathcal{A}(u_h, v_h) = \mathcal{F}(v_h) \qquad \forall\, v_h \in V_h$ ,

where $V_h$ is a suitable finite dimensional subspace of $V$. To ensure the existence and uniqueness of the solutions $u$ and $u_h$ to (6.1.5) and (6.2.1), respectively, we are always assuming that the bilinear form $\mathcal{A}(\cdot,\cdot)$ is continuous and coercive, and the linear functional $\mathcal{F}(\cdot)$ is continuous. Moreover, as a consequence of these assumptions, we know that the error estimate (5.2.7) holds. Let us recall that sufficient conditions implying continuity of $\mathcal{A}$ and $\mathcal{F}$ and coerciveness of $\mathcal{A}$ have been made precise in Section 6.1.2 for all boundary value problems considered in Section 6.1.1. The main point toward proving the convergence of $u_h$ to $u$ is thus to verify that (5.2.2) holds, i.e.,

(6.2.2)   $\inf_{v_h \in V_h} \|v - v_h\| \to 0$  as $h \to 0$,  $\forall\, v \in V$ ,

where we are denoting by $\|\cdot\|$ the norm of $V$. We then prove a general result which requires an apparently weaker version of (6.2.2) (indeed, the following condition (6.2.3) turns out to be equivalent to (6.2.2)).
Proposition 6.2.1 Assume that the bilinear form $\mathcal{A}(\cdot,\cdot)$ is continuous and coercive in $V$, and the linear functional $\mathcal{F}(\cdot)$ is continuous in $V$. Let $V_h$ be a family of finite dimensional subspaces of $V$. Assume that there exists a subset $\mathcal{V}$ dense in $V$ such that

(6.2.3)   $\inf_{v_h \in V_h} \|v - v_h\| \to 0$  as $h \to 0$,  $\forall\, v \in \mathcal{V}$ .

Then the Galerkin method is convergent, i.e., the solution $u_h$ of (6.2.1) converges in $V$ to the solution $u$ of (6.1.5) with respect to the norm $\|\cdot\|$.

Proof. Since $\mathcal{V}$ is dense in $V$, for each $\varepsilon > 0$ we can find $v \in \mathcal{V}$ such that

$\|u - v\| < \varepsilon$ .

Moreover, due to (6.2.3) there exist $h_0(\varepsilon) > 0$ and, for any positive $h < h_0(\varepsilon)$, $v_h \in V_h$ such that

$\|v - v_h\| < \varepsilon$ .

Hence, using the error estimate (5.2.7),

$\|u - u_h\| \le \dfrac{\gamma}{\alpha}\, \|u - v_h\| \le \dfrac{\gamma}{\alpha}\, \big( \|u - v\| + \|v - v_h\| \big)$ ,

and the thesis follows. □
We intend to specify the above results in the framework of both finite element and spectral methods. We start with the finite element method, and assume that $\Omega \subset \mathbb{R}^d$, $d = 2,3$, is a polygonal domain with Lipschitz boundary and $\mathcal{T}_h$ is a family of triangulations of $\Omega$. At first, we want to make precise what is $V_h$ for any one of the boundary value problems we have considered so far.

(i) The Dirichlet problem. In this case, we choose

(6.2.4)   $V_h = X_h^k \cap H_0^1(\Omega)$ ,

where $X_h^k$ is defined in (3.2.4) if the reference polyhedron $\hat K$ is the unit $d$-simplex. On the other hand, when $\hat K = [0,1]^d$ the space $X_h^k$ is defined by (3.2.5).

(ii) The Neumann problem. We take

(6.2.5)   $V_h = X_h^k$ .

(iii) The mixed problem. We choose

(6.2.6)   $V_h = X_h^k \cap H_{\Gamma_D}^1(\Omega)$ .

(Here each triangulation $\mathcal{T}_h$ has been chosen in such a way that no element $K \in \mathcal{T}_h$ intersects both $\Gamma_D$ and $\Gamma_N$.)

(iv) The Robin problem. In this case

(6.2.7)   $V_h = X_h^k$ .

Furthermore, in all cases we assume that the degrees of freedom and the shape functions are those described in Section 3.3. Consequently, for each $v \in C^0(\bar\Omega)$ the interpolation function $\pi_h^k(v)$ is the one defined in (3.4.1). We are now in a position to show the convergence of the finite element Galerkin method. To verify that (6.2.2) or (6.2.3) hold, we make use of the approximation result obtained in Chapter 3. The main theorem reads as follows:

Theorem 6.2.1 Let $\Omega$ be a polygonal domain of $\mathbb{R}^d$, $d = 2,3$, with Lipschitz boundary, and $\mathcal{T}_h$ be a regular family of triangulations of $\Omega$ associated to a reference polyhedron $\hat K$ which is the unit $d$-simplex or $[0,1]^d$. Suppose that the
bilinear form $\mathcal{A}(\cdot,\cdot)$ is continuous and coercive in $V$ and the linear functional $\mathcal{F}(\cdot)$ is continuous in $V$. Let $V_h$ be defined as stated in (6.2.4)-(6.2.7). Under these assumptions the finite element Galerkin method is convergent. If moreover the exact solution $u \in H^s(\Omega)$ for some $s \ge 2$, the following error estimate holds:

(6.2.8)   $\|u - u_h\|_1 \le C\, h^l\, |u|_{l+1,\Omega}$ ,  where $l = \min(k, s-1)$ .

Proof. We apply Proposition 6.2.1. Since $C^\infty(\bar\Omega)$ is dense in $H^1(\Omega)$, we can choose $\mathcal{V} = C^\infty(\bar\Omega)$ for both the Neumann and Robin problems, $\mathcal{V} = C^\infty(\bar\Omega) \cap H_0^1(\Omega)$ for the Dirichlet problem and $\mathcal{V} = C^\infty(\bar\Omega) \cap H_{\Gamma_D}^1(\Omega)$ for the mixed problem. Furthermore, for each $v \in \mathcal{V}$ the best approximation error in $V_h$ is bounded by the interpolation error, hence it converges to zero due to (3.4.19). We now prove (6.2.8). Under the assumption that $u \in H^s(\Omega)$, $s \ge 2$, from the Sobolev embedding theorem we have $u \in C^0(\bar\Omega)$, hence the interpolation function $\pi_h^k(u)$ is well-defined. Additionally, $\pi_h^k(u) \in V_h$, since it is easily verified that $\pi_h^k(u) \in H_0^1(\Omega)$ in the Dirichlet case, and $\pi_h^k(u) \in H_{\Gamma_D}^1(\Omega)$ in the mixed case (we can always choose the triangulation $\mathcal{T}_h$ so that its restriction on $\Gamma_D$ provides a triangulation of $\Gamma_D$). From Theorem 3.4.2 we thus have

(6.2.9)   $\|u - \pi_h^k(u)\|_1 \le C\, h^l\, |u|_{l+1,\Omega}$ .

Moreover, the error estimate (5.2.7) holds, i.e.,

(6.2.10)   $\|u - u_h\|_1 \le \dfrac{\gamma}{\alpha}\, \inf_{v_h \in V_h} \|u - v_h\|_1$ .

Now (6.2.8) follows from (6.2.9) and (6.2.10). □
The convergence result (6.2.8) is optimal in the $H^1(\Omega)$-norm, i.e., it provides the highest possible rate of convergence in the $H^1(\Omega)$-norm allowed by the polynomial degree $k$. However, looking at the interpolation estimate (3.4.19) for $m = 0$, one could expect that the error in the $L^2(\Omega)$-norm is in fact $O(h^{l+1})$. Indeed, this is true under suitable assumptions. To clarify this assertion we consider the auxiliary (adjoint) problem: given $r \in L^2(\Omega)$,

(6.2.11)   find $\varphi(r) \in V$ :  $\mathcal{A}(w, \varphi(r)) = (r, w) \qquad \forall\, w \in V$ ,

and we assume that its solution satisfies the regularity estimate

(6.2.12)   $\|\varphi(r)\|_2 \le C\, \|r\|_0$ .

Proposition 6.2.2 If $u \in H^s(\Omega)$, $s \ge 2$, the following error estimate holds:

(6.2.13)   $\|u - u_h\|_0 \le C\, h^{l+1}\, |u|_{l+1,\Omega}$ ,  $l = \min(k, s-1)$ .
Proof. By proceeding as in Section 3.5 and using (6.2.11), we can write

$(r, u - u_h) = \mathcal{A}(u - u_h, \varphi(r))$ .

For any arbitrary $\varphi_h \in V_h$, one has

$\mathcal{A}(u - u_h, \varphi(r)) = \mathcal{A}(u - u_h, \varphi(r) - \varphi_h)$ ,

thus

$(r, u - u_h) \le \gamma\, \|u - u_h\|_1\, \|\varphi(r) - \varphi_h\|_1$ .

Since $\varphi(r) \in H^2(\Omega) \subset C^0(\bar\Omega)$, we can take $\varphi_h = \pi_h^1(\varphi(r))$, and from (3.4.19)

$(r, u - u_h) \le C\gamma\, \|u - u_h\|_1\, h\, \|\varphi(r)\|_2$ .

Using now (6.2.12) one obtains

$\|u - u_h\|_0 \le C\, h\, \|u - u_h\|_1$ ,

thus the thesis follows from (6.2.8). □
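Both rates, $O(h^l)$ in $H^1(\Omega)$ from (6.2.8) and $O(h^{l+1})$ in $L^2(\Omega)$ from (6.2.13), can be observed numerically. The sketch below is an illustration only, assuming the 1D model problem $-u'' = \pi^2\sin(\pi x)$ on $(0,1)$ with $u(0) = u(1) = 0$ and piecewise linear elements ($k = 1$, so $l = 1$): halving $h$ halves the $H^1$ error and quarters the $L^2$ error.

```python
import numpy as np

def p1_errors(n):
    """P1 Galerkin for -u'' = pi^2 sin(pi x) on (0,1), u(0)=u(1)=0.
    Returns (H1-seminorm error, L2 error); the exact solution is sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h  # stiffness
    b = h * np.pi**2 * np.sin(np.pi * x[1:-1])    # load, nodal quadrature
    uh = np.zeros(n + 2)
    uh[1:-1] = np.linalg.solve(A, b)

    # errors computed elementwise on an 11-point grid per element
    q = np.linspace(0.0, 1.0, 11)[None, :]
    xs = x[:-1, None] + h * q
    uhs = uh[:-1, None] * (1.0 - q) + uh[1:, None] * q   # P1 interpolant
    dus = ((uh[1:] - uh[:-1]) / h)[:, None] * np.ones_like(q)
    e2 = (np.sin(np.pi * xs) - uhs) ** 2
    de2 = (np.pi * np.cos(np.pi * xs) - dus) ** 2
    trap = lambda v: np.sum((v[:, :-1] + v[:, 1:]) / 2.0 * (h / 10.0))
    return np.sqrt(trap(de2)), np.sqrt(trap(e2))

errs = [p1_errors(n) for n in (15, 31, 63)]
for n, (eH1, eL2) in zip((15, 31, 63), errs):
    print(n, eH1, eL2)
```

Successive error ratios approach 2 for the $H^1$ seminorm and 4 for the $L^2$ norm, in agreement with (6.2.8) and (6.2.13).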
Remark 6.2.1 Inequality (6.2.12) is a regularity assumption on the solution of the adjoint problem (6.2.11). The solution to (6.2.11) enjoys the same regularity property as the one of the original problem (6.1.5) (see Section 6.1.3). In particular, if $\Omega$ is a polygonal domain the solution $\varphi(r)$ belongs to $H^2(\Omega)$ and satisfies (6.2.12), provided that $\Omega$ is convex, $a_{ij} \in C^1(\bar\Omega)$, and $\kappa \in C^1(\partial\Omega)$ (see Grisvard (1976)). This is true for all but the mixed boundary value problem. In fact, we have already noticed in Section 6.1.3 that the solution of the mixed problem belongs to $H^s(\Omega)$ for any $s < 3/2$ but in general not to $H^{3/2}(\Omega)$, even for smooth data. □

Remark 6.2.2 (The non-homogeneous Dirichlet problem). When considering a non-vanishing boundary datum $u = \varphi$ on $\partial\Omega$, as pointed out in Remark 6.1.2 the variational formulation of the Dirichlet problem reads as follows:

find $\mathring{u} \in H_0^1(\Omega)$ :  $a(\mathring{u}, v) = (f, v) - a(\tilde\varphi, v) \qquad \forall\, v \in H_0^1(\Omega)$ ,
where $\tilde\varphi \in H^1(\Omega)$ is a suitable extension of $\varphi \in H^{1/2}(\partial\Omega)$. Taking $V_h$ as in (6.2.4), a possible finite element approximation is readily obtained. However, this is not a practical way to define an approximate solution, since the construction of the extension operator $\varphi \to \tilde\varphi$ is not easily performed. Assuming that $\varphi$ belongs not only to $H^{1/2}(\partial\Omega)$ but also to $C^0(\partial\Omega)$, we can alternatively proceed in the following way. Denote by $\{x_s \mid s = 1, \dots, M_h\}$ the nodes on $\partial\Omega$ and by $\{a_i \mid i = 1, \dots, N_h\}$ the internal nodes, and set

$V_h^\varphi := \{ v_h \in X_h^k \mid v_h(x_s) = \varphi(x_s) \quad \forall\, s = 1, \dots, M_h \}$ .
The approximate problem reads:

find $u_h \in V_h^\varphi$ :  $a(u_h, v_h) = (f, v_h) \qquad \forall\, v_h \in V_h$ .

We can write any $u_h \in V_h^\varphi$ as

$u_h(x) = \sum_{i=1}^{N_h} u_h(a_i)\, \varphi_i(x) + \sum_{s=1}^{M_h} \varphi(x_s)\, \bar\varphi_s(x) =: z_h(x) + \varphi_h(x)$ ,

where $\varphi_i$ and $\bar\varphi_s$ are the basis functions of $X_h^k$ relative to the internal and boundary nodes, respectively. We note that $z_h \in V_h$ and $\varphi_h \in V_h^\varphi$. Assuming that the family of triangulations $\mathcal{T}_h$ is quasi-uniform (see Definition 6.3.1), it can also be noticed that $\|\varphi_h\|_1 = O(h^{-1/2})$. Now the problem can be rewritten as

find $z_h \in V_h$ :  $a(z_h, v_h) = (f, v_h) - a(\varphi_h, v_h) \qquad \forall\, v_h \in V_h$ ,

therefore it is solvable if $a(\cdot,\cdot)$ is coercive in $V \times V$. Concerning the error estimate, one verifies at once that the Galerkin orthogonality $a(u - u_h, v_h) = 0$ holds for each $v_h \in V_h$; hence, by proceeding as in Theorem 5.2.1 (Céa lemma),

$\|u - u_h\|_1 \le \dfrac{\gamma}{\alpha}\, \inf_{v_h^\varphi \in V_h^\varphi} \|u - v_h^\varphi\|_1$ .

If $u \in H^s(\Omega)$, $s \ge 2$, the interpolant $\pi_h^k(u)$ belongs to $V_h^\varphi$. Hence, proceeding as in the homogeneous Dirichlet case, we conclude that $\|u - u_h\|_1 = O(h^l)$, $l = \min(k, s-1)$. Stability follows from convergence. Alternative approaches to the non-homogeneous Dirichlet problem, based on Lagrange multipliers and penalty techniques, have been proposed by Babuška (1973a, 1973b) (see also Section 7.1). □

Remark 6.2.3 ($L^\infty$-error estimate). We are now going to provide an error estimate in $L^\infty(\Omega)$. To begin, we write
$\|u - u_h\|_{L^\infty(\Omega)} \le \|u - \pi_h^k(u)\|_{L^\infty(\Omega)} + \|\pi_h^k(u) - u_h\|_{L^\infty(\Omega)}$ .

In Remark 3.4.2 we have shown that for $1 \le l \le k$

$\|u - \pi_h^k(u)\|_{L^\infty(K)} \le C\, h_K^{l+1}\, [\mathrm{meas}(K)]^{-1/2}\, |u|_{l+1,K} \qquad \forall\, u \in H^{l+1}(K)$ .

If the family of triangulations is a regular one, we have $[\mathrm{meas}(K)]^{-1/2} \le C\, h_K^{-d/2}$, hence

$\|u - \pi_h^k(u)\|_{L^\infty(K)} \le C\, h_K^{l+1-d/2}\, |u|_{l+1,K}$ .

On the other hand, since in a finite dimensional space all norms are equivalent, we can infer that

$\|u_h - \pi_h^k(u)\|_{L^\infty(\hat K)} \le C\, \|u_h - \pi_h^k(u)\|_{0,\hat K}$ .

Then by (3.4.4) it follows

$\|u_h - \pi_h^k(u)\|_{L^\infty(K)} \le C\, |\det B_K|^{-1/2}\, \|u_h - \pi_h^k(u)\|_{0,K} \le C\, [\mathrm{meas}(K)]^{-1/2}\, \|u_h - \pi_h^k(u)\|_{0,K}$ .

Let us now assume that the family of triangulations $\mathcal{T}_h$ is quasi-uniform, i.e., it is regular and there exists a constant $\tau > 0$ such that

$\min_{K \in \mathcal{T}_h} h_K \ge \tau\, h \qquad \forall\, h > 0$

(see Definition 6.3.1). Then

$[\mathrm{meas}(K)]^{-1/2} \le C\, h^{-d/2} \qquad \forall\, K \in \mathcal{T}_h,\ \forall\, h > 0$ ,

hence

$\|u_h - \pi_h^k(u)\|_{L^\infty(\Omega)} \le C\, h^{-d/2}\, \|u_h - \pi_h^k(u)\|_{0,\Omega}$ .

Finally,

$\|u_h - \pi_h^k(u)\|_{0,\Omega} \le \|u - u_h\|_{0,\Omega} + \|u - \pi_h^k(u)\|_{0,\Omega}$ ,

and by Theorem 3.4.2 and Proposition 6.2.2 we have

$\|u_h - \pi_h^k(u)\|_{0,\Omega} \le C\, h^{l+1}\, |u|_{l+1,\Omega} \qquad \forall\, u \in H^{l+1}(\Omega)$ .

Summing up, we have thus obtained the error estimate

$\|u - u_h\|_{L^\infty(\Omega)} \le C\, h^{l+1-d/2}\, |u|_{l+1,\Omega}$ .

A better order of convergence can be obtained by a more technical approach, which makes use of weighted norms and seminorms as suggested by Nitsche (1977) (for a different approach, see also Scott (1976)). Under suitable assumptions on the domain $\Omega \subset \mathbb{R}^2$ and the operator $L$, and taking
$V_h = X_h^1 \cap H_0^1(\Omega)$, where $X_h^1$ is defined as in (3.2.4), the following error estimate holds:

$\|u - u_h\|_{L^\infty(\Omega)} + h\, \|\nabla u - \nabla u_h\|_{L^\infty(\Omega)} \le C\, h^2\, |\log h|\, \|u\|_{W^{2,\infty}(\Omega)} \qquad \forall\, u \in W^{2,\infty}(\Omega)$ .

Choosing $V_h = X_h^k \cap H_0^1(\Omega)$, $k \ge 2$, the factor $|\log h|$ can be dropped, even if $\Omega \subset \mathbb{R}^d$, $d \ge 3$. On the contrary, the presence of the $|\log h|$ term cannot be avoided for "linear" triangles (Haverkamp (1984)). Several other results and comments concerning $L^\infty$-error estimates can be found in Ciarlet (1978, 1991). □

Remark 6.2.4 (Non-coercive bilinear forms). We have limited our analysis to coercive bilinear forms, so that existence and uniqueness of the exact solution is a consequence of the Lax-Milgram lemma. Based on a different approach (see Remark 6.1.3), an approximation theory can also be developed for non-coercive forms. An example is provided by Schatz (1974), where the bilinear form is assumed to satisfy the Gårding inequality (see (11.1.6)). □

Now we return to the general Galerkin problem (6.2.1). We need to point out that the evaluation of the right hand side of (6.2.1) can be done exactly only in very simple cases. The same occurs in the evaluation of the left hand integrals in (6.2.1), whenever the differential boundary value problem has variable coefficients $a_{ij}$, $b_i$, $c_i$ and $a_0$. In these cases, numerical integration formulae should be used. The resulting scheme is a modified (or generalized) Galerkin method and is analyzed in Section 6.2.3. We now consider the spectral method. We assume from now on that $\Omega = (-1,1)^2$, and that $\Gamma_D$ is the union (possibly empty) of edges of $\partial\Omega$. As usual, we denote by $\mathbb{Q}_N$ the space of algebraic polynomials of degree less than or equal to $N$ in each variable $x_i$, $i = 1,2$. The choice of the family of finite dimensional subspaces is given now by

(6.2.14)
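A characteristic price of such global polynomial subspaces, quantified by the inverse inequalities and eigenvalue estimates of Section 6.3, is the rapid growth of derivative norms and condition numbers with $N$. A small experiment illustrates this; it assumes the standard Chebyshev Gauss-Lobatto differentiation matrix (cf. Canuto, Hussaini, Quarteroni and Zang (1988)), which is not constructed in the text, and shows the extreme eigenvalue of the second-derivative matrix with homogeneous Dirichlet data growing like $N^4$:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))      # diagonal via the "negative sum" trick
    return D, x

growth = []
for N in (16, 32):
    D, x = cheb(N)
    D2 = (D @ D)[1:-1, 1:-1]         # delete boundary rows/cols: u(+-1) = 0
    lam = np.linalg.eigvals(D2)
    growth.append(np.max(np.abs(lam)))

print(growth[1] / growth[0])         # close to 2**4 = 16: extreme eigenvalue ~ N^4
```

Doubling $N$ multiplies the extreme eigenvalue by roughly $16$, the $O(N^4)$ growth discussed for the collocation matrices below.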
... is straightforward. The spectral counterpart of (6.3.21) is provided by the following proposition, proven in Canuto and Quarteroni (1982):

Proposition 6.3.4 (Inverse inequality for algebraic polynomials). There exists a positive constant $C_3$ such that for each $v_N \in \mathbb{Q}_N$

(6.3.27)   $\|\nabla v_N\|_0 \le C_3\, N^2\, \|v_N\|_0$ .

From (6.3.12) we have $A_{sp} = D A_{sp}^*$, where $D = \mathrm{diag}\{(N/\pi)^d\}$ in the Chebyshev case corresponding to homogeneous Dirichlet data. This has several consequences. At first, since $A_{sp}^*$ has real and positive eigenvalues (see Gottlieb and Lustman (1983), Canuto, Hussaini, Quarteroni and Zang (1988), Sect.
II.4.1), the same occurs to $A_{sp}$. Furthermore, if $\lambda(A_{sp})$ and $\mu(A_{sp}^*)$ denote eigenvalues of $A_{sp}$ and $A_{sp}^*$, respectively, we have

(6.3.28)   $\lambda(A_{sp}) = (N/\pi)^d\, \mu(A_{sp}^*)$ .

Now let $\lambda$ be an eigenvalue of $A_{sp}$ and $\xi$ a corresponding eigenvector. In the current case instead of (6.3.23) we have

(6.3.29)   $\lambda = \dfrac{(A_{sp}\xi, \xi)}{|\xi|^2} = \dfrac{a_N(v_N, v_N)}{|\xi|^2}$ ,

where $v_N = \sum_i \xi_i\, \psi_i$, and therefore, owing to (6.3.26),

(6.3.30)   $\dfrac{a_N(v_N, v_N)}{|\xi|^2} \le C^* N^d \dots$

Lemma 7.2.1 Assume that there exists $\beta^* > 0$ such that the compatibility condition
7. Elliptic Problems: Approximation by Mixed and Hybrid Methods
(7.2.22)   $\forall\, v \in L^2(\Omega)\ \ \exists\, q \in H(\mathrm{div};\Omega),\ q \neq 0:\quad \int_\Omega v\, \mathrm{div}\, q \ge \beta^*\, \|q\|_{H(\mathrm{div};\Omega)}\, \|v\|_0$

is satisfied, and, moreover, that there exists an operator $\tau_h : H(\mathrm{div};\Omega) \to W_h$ such that

(i)   $\int_\Omega w_h\, \mathrm{div}(q - \tau_h(q)) = 0 \qquad \forall\, q \in H(\mathrm{div};\Omega),\ \forall\, w_h \in Y_h$ ,

(ii)  $\|\tau_h(q)\|_{H(\mathrm{div};\Omega)} \le C^*\, \|q\|_{H(\mathrm{div};\Omega)} \qquad \forall\, q \in H(\mathrm{div};\Omega),\ \forall\, h > 0$ ,

where $C^* > 0$ doesn't depend on $h$. Then, the compatibility condition (7.2.4) is satisfied with $\beta = \beta^*/C^*$.

Proof. From (7.2.22), for any $v_h \in Y_h$ there exists $q^* \in H(\mathrm{div};\Omega)$, $q^* \neq 0$, such that

(7.2.23)   $\int_\Omega v_h\, \mathrm{div}\, q^* \ge \beta^*\, \|q^*\|_{H(\mathrm{div};\Omega)}\, \|v_h\|_0$ .

As we can assume $v_h \neq 0$, (7.2.23) gives in particular $\int_\Omega v_h\, \mathrm{div}\, q^* \neq 0$, and consequently (i) yields $\tau_h(q^*) \neq 0$. On the other hand, from (i), (ii) and (7.2.23) we find

$\int_\Omega v_h\, \mathrm{div}(\tau_h(q^*)) = \int_\Omega v_h\, \mathrm{div}\, q^* \ge \beta^*\, \|q^*\|_{H(\mathrm{div};\Omega)}\, \|v_h\|_0 \ge \dfrac{\beta^*}{C^*}\, \|\tau_h(q^*)\|_{H(\mathrm{div};\Omega)}\, \|v_h\|_0$ ,

and the thesis follows. □

The compatibility condition (7.2.22) is the infinite dimensional counterpart of (7.2.4). Notice also that condition (i) in particular implies that there are no spurious zero-energy modes, i.e., elements $v_h^* \in Y_h$ such that $\int_\Omega v_h^*\, \mathrm{div}\, q_h = 0$ for all $q_h \in W_h$ but satisfying $\int_\Omega v_h^*\, \mathrm{div}\, q \neq 0$ for some $q \in W$.
Let us now verify that the assumptions of Lemma 7.2.1 hold for $W_h^k$ and $Y_h^{k-1}$ as in (7.2.19), (7.2.20). First of all, given $v \in L^2(\Omega)$, $v \neq 0$, one can find $q \in H(\mathrm{div};\Omega)$, $q \neq 0$, such that $\mathrm{div}\, q = v$. This can be done, for instance, by choosing $q = \nabla\psi$, where $\psi$ is the solution to

$\Delta\psi = v$  in $\Omega$ ,   $\psi = 0$  on $\partial\Omega$ .

With this choice, (7.2.22) is satisfied, as $\|q\|_0 = \|\nabla\psi\|_0 \le C_\Omega^{1/2}\, \|v\|_0$ and $\|\mathrm{div}\, q\|_0 = \|v\|_0$, $C_\Omega$ being the constant appearing in the Poincaré inequality (1.3.2). To construct the operator $\tau_h$, let $q \in H(\mathrm{div};\Omega)$ and define
7.2 Approximation by Mixed Methods
$\phi := \begin{cases} \mathrm{div}\, q & \text{in } \Omega \\ 0 & \text{in } B \setminus \bar\Omega \end{cases}$

where $B$ is an open ball containing $\bar\Omega$. Clearly $\phi \in L^2(B)$, therefore we can solve

$\Delta\psi_* = \phi$  in $B$ ,   $\psi_* = 0$  on $\partial B$ ,

and we obtain $\psi_* \in H^2(B)$, $\|\psi_*\|_{2,B} \le C_B\, \|\phi\|_{0,B}$. Thus, defining $q_* := (\nabla\psi_*)|_\Omega$, we have $q_* \in (H^1(\Omega))^d$, $\mathrm{div}\, q_* = \mathrm{div}\, q$ in $\Omega$ and $\|q_*\|_{1,\Omega} \le C_B\, \|\mathrm{div}\, q\|_{0,\Omega}$. We can now consider the interpolant of $q_*$ in $W_h$, i.e., we define

(7.2.24)   $\tau_h(q) := \omega_h^k(q_*)$ ,

where $\omega_h^k$ is the interpolation operator introduced in (3.4.20). Notice that $\omega_h^k$ is defined in $(H^1(\Omega))^d$ and not in $H(\mathrm{div};\Omega)$, and this is the reason for using the "regularizing" procedure above. The operator $\tau_h$ satisfies

$\|\tau_h(q)\|_{H(\mathrm{div};\Omega)} = \|\omega_h^k(q_*)\|_{H(\mathrm{div};\Omega)} \le C\, \|q_*\|_{1,\Omega} \le C\, \|\mathrm{div}\, q\|_{0,\Omega}$ ,

hence (ii) holds. Property (i) is a straightforward consequence of (3.4.23), i.e.,

(7.2.25)   $\mathrm{div}(\omega_h^k(q_*)) = P_h^{k-1}(\mathrm{div}\, q_*) = P_h^{k-1}(\mathrm{div}\, q)$ ,

where $P_h^{k-1}$ is the $L^2(\Omega)$-orthogonal projection onto $Y_h^{k-1}$.
where p~l is the L 2(D)orthogonal projection onto y/:l. We have thus proven that, when considering the RaviartThomas finite elements, the compatibility condition (7.2.4) is satisfied with a constant f3 independent of h. Hence, all assumptions of Theorem 7.2.2 hold true and the error estimate (7.2.13) is valid. On the other hand, we already know from Theorem 3.4.4 that, if Th is a regular family of triangulations, inwf k lip  qhIIH(div;S?) :S Ghl(lpll,S? + Idivpll,S?), 1:S l :S k qhE
h
and that (see (3.5.24)) int,
Ilu 
vhllo :S Ghllull,S?,
1:S l :S k
vhEYh
The following error estimate therefore holds for 1 :S l :S k:
Remark 7.2.1 Following Falk and Osborn (1980) it can also be proven that lip  Phllo:S Ghlllplll,S?,
1:S l:S k
Moreover,

$\|u - u_h\|_0 \le C\, h^{l^*}\, (\|p\|_{l,\Omega} + \|u\|_{l,\Omega})$ ,  $l^* = \max(2, l)$ ,  $1 \le l \le k$ ,

provided the regularity condition (6.2.12) holds. □
Remark 7.2.2 Since $(C^\infty(\bar\Omega))^d$ is dense in $H(\mathrm{div};\Omega)$ (see, e.g., Girault and Raviart (1986), p. 27) and $C_0^\infty(\Omega)$ is dense in $L^2(\Omega)$, one can repeat the argument adopted in Theorem 6.2.1 to prove that $p_h \to p$ in $H(\mathrm{div};\Omega)$ and $u_h \to u$ in $L^2(\Omega)$ under the only assumption $p \in H(\mathrm{div};\Omega)$, $u \in L^2(\Omega)$ (or, equivalently, $u \in H^1(\Omega)$). □

Remark 7.2.3 By proceeding in the same way as in the construction of the operator $\tau_h$, it is easily seen that for each $v_h \in Y_h^{k-1}$ there exists $q_h \in W_h^k$ such that $\mathrm{div}\, q_h = v_h$. (Just take $v_h$ instead of $\mathrm{div}\, q$.) It is thus proven that $\mathrm{div}\, W_h^k = Y_h^{k-1}$. □

Remark 7.2.4 Let us emphasise that the commutativity property (7.2.25) is of great importance in the analysis of the Raviart-Thomas finite elements. In fact, it at once yields $\mathrm{div}\, W_h^k \subset Y_h^{k-1}$, so that $W_h^0 \subset W^0$; moreover, it is helpful in the construction of the operator $\tau_h$ in Lemma 7.2.1. At the end of this Section we present another family of finite elements enjoying property (7.2.25). □
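In one space dimension the commutativity property (7.2.25) is elementary but instructive (a sketch under assumptions not made in the text: a uniform 1D mesh, nodal interpolation of the flux by continuous piecewise linears, and $L^2$-projection onto piecewise constants). The derivative of the interpolant on each cell is $(q(x_{i+1}) - q(x_i))/h$, which by the fundamental theorem of calculus is exactly the cell average of $q'$, i.e. the $L^2$-projection of $\mathrm{div}\, q$:

```python
import numpy as np

n = 20
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

q = lambda t: np.sin(3.0 * t) + t**2            # a smooth model "flux"
dq = lambda t: 3.0 * np.cos(3.0 * t) + 2.0 * t  # its divergence (derivative)

# derivative of the piecewise linear nodal interpolant (piecewise constant)
div_interp = (q(x[1:]) - q(x[:-1])) / h

# L2-projection of dq onto piecewise constants = cell averages of dq,
# computed with a fine composite trapezoidal rule per cell
avg = np.empty(n)
for i in range(n):
    s = np.linspace(x[i], x[i + 1], 201)
    v = dq(s)
    avg[i] = np.sum((v[:-1] + v[1:]) / 2.0) * (s[1] - s[0]) / h

print(np.max(np.abs(div_interp - avg)))  # agree up to quadrature error
```

The two piecewise constant functions coincide up to the accuracy of the quadrature, which is the 1D face of the commuting diagram div ∘ ω_h = P_h ∘ div.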
Similar results hold for the other boundary value problems we have considered so far. As we noticed in Section 7.1.2, the Neumann problem can be written in the form (7.1.20), (7.1.21), and the basic spaces are now $H_0(\mathrm{div};\Omega)$ and $L_0^2(\Omega)$. The compatibility condition (7.2.22) is satisfied, as for each $v \in L_0^2(\Omega)$ we can construct $q \in H_0(\mathrm{div};\Omega)$ such that $\mathrm{div}\, q = v$ and $\|q\|_{H(\mathrm{div};\Omega)} \le C\, \|v\|_0$ by solving the Neumann problem

$\Delta\psi = v$  in $\Omega$ ,   $\dfrac{\partial\psi}{\partial n} = 0$  on $\partial\Omega$ ,

and setting then $q = \nabla\psi$. To show that the discrete compatibility condition (7.2.4) holds uniformly with respect to $h$ is a more delicate matter. However, it can be shown that the Raviart-Thomas spaces

$W_{0,h}^k := \{ q_h \in H_0(\mathrm{div};\Omega) \mid q_h|_K \in \mathbb{D}_k \quad \forall\, K \in \mathcal{T}_h \}$

$Y_{0,h}^{k-1} := \{ v_h \in L_0^2(\Omega) \mid v_h|_K \in \mathbb{P}_{k-1} \quad \forall\, K \in \mathcal{T}_h \}$

satisfy (7.2.4). A proof is reported in Roberts and Thomas (1991), and it is valid also for the problem with mixed boundary conditions.
When considering the Robin problem (7.1.22), (7.1.23), the only difference from the Dirichlet case stems from the use of the space $\mathcal{H}(\mathrm{div};\Omega)$ defined in (7.1.24) instead of $H(\mathrm{div};\Omega)$. Nonetheless, by proceeding as in the construction of the operator $\tau_h$ in (7.2.24), it is easily seen that for each $v \in L^2(\Omega)$ there exists $q \in \mathcal{H}(\mathrm{div};\Omega)$ such that $\mathrm{div}\, q = v$ and $\|q\|_{\mathcal{H}(\mathrm{div};\Omega)} \le C\, \|v\|_0$. The compatibility condition (7.2.22) is thus satisfied, and as before one can take as $W_h$ and $Y_h$ the Raviart-Thomas finite element spaces $W_h^k$ and $Y_h^{k-1}$. (Just notice that $W_h^k$ is indeed contained in $\mathcal{H}(\mathrm{div};\Omega)$.)
We are now going to introduce another family of finite element spaces which have been proposed for problem (7.2.1) by Brezzi, Douglas and Marini (1985) in dimension $d = 2$ and by Brezzi, Douglas, Durán and Fortin (1987) in dimension $d = 3$. In this case the choice of $W_h$ and $Y_h$ is given by:

(7.2.27)   $W_h^{*k} := \{ q_h \in H(\mathrm{div};\Omega) \mid q_h|_K \in (\mathbb{P}_{k-1})^d \quad \forall\, K \in \mathcal{T}_h \}$

(7.2.28)   $Y_h^{k-2} := \{ v_h \in L^2(\Omega) \mid v_h|_K \in \mathbb{P}_{k-2} \quad \forall\, K \in \mathcal{T}_h \}$ ,

where $k \ge 2$. The degrees of freedom related to $W_h^{*k}$ in the two-dimensional case are given on each triangle $K$ by

$\int_{F_K} n \cdot q\, \psi$ ,   $\psi \in \mathbb{P}_{k-1}$ ,

$\int_K q \cdot \nabla\chi$ ,   $\chi \in \mathbb{P}_{k-2}$  $(k \ge 3)$ ,

$\int_K q \cdot w$ ,   $w \in (\mathbb{P}_{k-1})^2$,  $\mathrm{div}\, w = 0$ in $K$,  $w \cdot n = 0$ on $\partial K$  $(k \ge 3)$ ,

where $q \in (\mathbb{P}_{k-1})^2$ and $F_K$ is any face of $K$. Hence, on each $K$ there are $2k$ degrees of freedom less than for the Raviart-Thomas elements. It has been proven that the commutativity property (7.2.25) still holds with $Y_h^{k-1}$ replaced by $Y_h^{k-2}$, and the following estimates for the interpolation error are valid:

$\|q - \omega_h^{*k}(q)\|_{0,\Omega} \le C\, h^l\, \|q\|_{l,\Omega}$ ,  $1 \le l \le k$ ,

for each $q \in (H^l(\Omega))^d$, and

$\|\mathrm{div}(q - \omega_h^{*k}(q))\|_{0,\Omega} \le C\, h^l\, \|\mathrm{div}\, q\|_{l,\Omega}$ ,  $1 \le l \le k-1$ ,

for each $q \in (H^1(\Omega))^d$ such that $\mathrm{div}\, q \in H^l(\Omega)$. Therefore, the following error estimate follows from Theorem 7.2.2 for $1 \le l \le k-1$:

(7.2.29)   $\|p - p_h\|_{H(\mathrm{div};\Omega)} + \|u - u_h\|_0 \le C\, h^l\, (|p|_{l,\Omega} + |\mathrm{div}\, p|_{l,\Omega} + |u|_{l,\Omega})$ .
Notice that the interpolation errors for the Raviart-Thomas and the Brezzi-Douglas-Marini elements are of the same order when measured in the $L^2(\Omega)$-norm. If one is mainly interested in the approximation of the vector $p$, these latter elements are a viable choice. With the aim of illustrating the accuracy of mixed methods, let us consider the Neumann problem

(7.2.30)   $-\mathrm{div}(\alpha\nabla u) = f$  in $\Omega = (0,1)^2$ ,   $\alpha\, \dfrac{\partial u}{\partial n} = 0$  on $\partial\Omega$ ,

where the datum $f$ satisfies the compatibility condition $\int_\Omega f = 0$. We triangulate $\Omega$ by a uniform grid using $2N^2$ triangles of equal size (the finite element grid spacing is $h = \sqrt{2}/N$). In Tables 7.2.1, 7.2.2 and 7.2.3 we report the relative error in the $L^2(\Omega)$-norm of the vector function $p = \alpha\nabla u$. Table 7.2.1 refers to the case in which $\alpha = 1$ and $f(x_1, x_2) = 2(1 - x_1 - x_2)$, and compares the errors obtained by using the mixed Raviart-Thomas finite elements based on the spaces (7.2.19), (7.2.20) with $k = 1$ (RT(1)) and the mixed Brezzi-Douglas-Marini finite elements based on the spaces (7.2.27), (7.2.28) with $k = 2$ (BDM(2)). We observe linear convergence for the Raviart-Thomas elements and quadratic convergence for the Brezzi-Douglas-Marini elements.

Table 7.2.1. Comparison between two different types of mixed finite element approximations for $\alpha = 1$
N (= √2 h⁻¹)    RT(1)      BDM(2)
10              0.93e-1    0.45e-2
20              0.47e-1    0.11e-2
40              0.23e-1    0.28e-3
60              0.16e-1    0.12e-3
80              0.12e-1    0.71e-4
100             0.93e-2    0.45e-4
Table 7.2.2 refers to a similar comparison between RT(1) and BDM(2), this time when the coefficient $\alpha = \alpha(x)$ is constant only with respect to $x_2$ and exhibits a variation with respect to $x_1$ (i.e., $\max_{x_1}\alpha(x)/\min_{x_1}\alpha(x)$) of the order of $10^4$. In Table 7.2.3 we report a calculation made using different values of $N$ for a coefficient $\alpha(x)$ which behaves as before, with $\max_{x_1}\alpha(x)/\min_{x_1}\alpha(x)$ of the order of $10^i$ for $i = 0, 1, 2, 4, 8$. At the end of the following Section we will compare the accuracy of these mixed finite elements versus the computational cost (for a suitable solution algorithm).
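The structure of a mixed approximation can be sketched in one space dimension, where the lowest-order Raviart-Thomas pair reduces to continuous piecewise linears for the flux $p$ and piecewise constants for $u$. The code below is an illustrative 1D stand-in (not the 2D method used for the tables above): it solves $-u'' = f$ on $(0,1)$, $u(0)=u(1)=0$, in the mixed form $p = u'$, assembles the saddle-point block system, and observes the $O(h)$ convergence of $u$ in $L^2$:

```python
import numpy as np

def mixed_solve(n):
    """Lowest-order mixed method in 1D for -u'' = pi^2 sin(pi x), u(0)=u(1)=0:
    flux p in continuous P1 (nodal values), u in P0 (one value per cell)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)

    # P1 mass matrix A and elementwise difference matrix B with
    # B[l, j] = phi_j(x_{l+1}) - phi_j(x_l) = integral over K_l of phi_j'
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        A[i:i+2, i:i+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    B = np.zeros((n, n + 1))
    B[np.arange(n), np.arange(n)] = -1.0
    B[np.arange(n), np.arange(n) + 1] = 1.0

    # saddle-point system: A p + B^T u = 0,  B p = -F  (F = cell load integrals)
    F = h * np.pi**2 * np.sin(np.pi * (x[:-1] + x[1:]) / 2.0)
    S = np.block([[A, B.T], [B, np.zeros((n, n))]])
    sol = np.linalg.solve(S, np.concatenate([np.zeros(n + 1), -F]))
    u = sol[n + 1:]

    # L2 error of the piecewise constant u against the exact sin(pi x)
    q = np.linspace(0.0, 1.0, 11)[None, :]
    xs = x[:-1, None] + h * q
    sq = (np.sin(np.pi * xs) - u[:, None]) ** 2
    return np.sqrt(np.sum((sq[:, :-1] + sq[:, 1:]) / 2.0 * (h / 10.0)))

e1, e2 = mixed_solve(20), mixed_solve(40)
print(e1, e2, e1 / e2)   # ratio close to 2: O(h) convergence of u in L2
```

The block matrix is exactly of the form $S$ discussed in the following Section; its nonsingularity reflects the discrete compatibility (inf-sup) condition.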
Table 7.2.2. Comparison between two different types of mixed finite element approximations when α exhibits a gradient of the order of 10⁴

N (= √2 h⁻¹)    RT(1)      BDM(2)
10              0.21e-0    0.40e-1
20              0.93e-1    0.11e-1
40              0.45e-1    0.27e-2
60              0.30e-1    0.12e-2
80              0.22e-1    0.67e-3
100             0.18e-1    0.43e-3
Table 7.2.3. Comparison between two different types of mixed finite element approximations when α exhibits a large gradient

i    RT(1) (N=40)    BDM(2) (N=40)    BDM(2) (N=60)    BDM(2) (N=100)
0    0.23e-1         0.11e-2          0.51e-3          0.20e-3
1    0.41e-1         0.23e-2          0.10e-2          0.37e-3
2    0.44e-1         0.26e-2          0.11e-2          0.41e-3
4    0.45e-1         0.27e-2          0.12e-2          0.43e-3
8    0.45e-1         0.28e-2          0.12e-2          0.43e-3
7.3 Some Remarks on the Algorithmic Aspects

In order to formulate the algebraic problem associated to (7.2.1), let us start by setting

(7.3.1)   $a(r,q) := \int_\Omega \sum_{s,t=1}^d \alpha_{st}\, r_t\, q_s$ ,   $b(q,v) := \int_\Omega v\, \mathrm{div}\, q$ ,

and by choosing two bases $\{\varphi_j \mid j = 1, \dots, N_h\}$ and $\{\psi_l \mid l = 1, \dots, K_h\}$ of $W_h$ and $Y_h$, respectively. Writing the solutions $p_h$ and $u_h$ as $p_h(x) = \sum_{j=1}^{N_h} p_j\, \varphi_j(x)$ and $u_h(x) = \sum_{l=1}^{K_h} u_l\, \psi_l(x)$, (7.2.1) can be formulated algebraically as follows:

(7.3.2)   $\begin{cases} A\mathbf{p} + B^T\mathbf{u} = 0 \\ B\mathbf{p} = \mathbf{f} \end{cases}$

where

(7.3.3)   $A_{ij} := a(\varphi_j, \varphi_i)$ ,  $B_{lj} := b(\varphi_j, \psi_l)$ ,  $i,j = 1, \dots, N_h$ ,  $l = 1, \dots, K_h$ ,

and $\mathbf f$ is the vector of the values of the datum functional at the basis functions $\psi_l$.
The matrix

(7.3.4)   $S := \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}$

is nonsingular if $\ker B^T = \{0\}$, which is equivalent to requiring that the compatibility condition (7.2.4) holds. Since $A$ is positive definite, under this assumption one could solve (7.3.2) by eliminating $\mathbf p$ from the first equation,

(7.3.5)   $\mathbf p = -A^{-1} B^T \mathbf u$ ,

and then finding the solution $\mathbf u$ of

(7.3.6)   $B A^{-1} B^T \mathbf u = -\mathbf f$ .

This procedure will be given the name of mixed-Schur complement algorithm. It resembles the one used in the framework of the Stokes problem for eliminating the velocity field from the momentum equation (see Section 9.6.1). For the sake of simplicity, let us assume from now on that the bilinear form $a(\cdot,\cdot)$ is symmetric. Then the matrix $R := B A^{-1} B^T$ is symmetric and positive definite but full (due to the presence of the inverse of $A$), hence (7.3.6) can be solved, e.g., by the conjugate gradient method described in Section 2.4.2. This procedure requires at each step the solution of the linear system associated to the matrix $A$, which is sparse but of large dimension $N_h$. Since the condition number of $R$ can grow like $h^{-2}$, a preconditioning procedure is compulsory. In the numerical test that will be reported in Table 7.3.1, the matrix $BB^T$ is used as a preconditioner of $R$. As such, it enjoys optimal spectral properties. However, it may be very demanding in terms of memory when $K_h$ is large. A different solution algorithm is based on the observation that inverting the matrix $A$ would be economical if the vector functions in $W_h$ were not required to belong to $H(\mathrm{div};\Omega)$, therefore allowing their normal components to be discontinuous across the interelement boundaries. In such a case $A$ would be a block-diagonal matrix, each block (corresponding to a triangle) being a small nonsingular matrix. The idea is therefore to resort to a mixed-Lagrangian formulation, introducing suitable Lagrange multipliers with the aim of relaxing the continuity assumption for the normal components at the interelement boundaries.
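The mixed-Schur complement procedure can be sketched in a few lines. The code below uses a small 1D stand-in saddle-point system (a lowest-order mixed discretization of $-u'' = f$, $u(0)=u(1)=0$; the matrices are illustrative, not the 2D finite element matrices of the text) and a plain, unpreconditioned conjugate gradient, whereas the computations reported below are preconditioned:

```python
import numpy as np

# 1D stand-in for the saddle-point system: A p + B^T u = 0,  B p = -F.
# Eliminating p gives the Schur complement system  (B A^{-1} B^T) u = F.
n = 50
h = 1.0 / n
A = np.zeros((n + 1, n + 1))            # P1 mass matrix: SPD, sparse
for i in range(n):
    A[i:i+2, i:i+2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.zeros((n, n + 1))                # elementwise difference ("divergence")
B[np.arange(n), np.arange(n)] = -1.0
B[np.arange(n), np.arange(n) + 1] = 1.0
xm = (np.arange(n) + 0.5) * h
F = h * np.pi**2 * np.sin(np.pi * xm)   # load integrals for f = pi^2 sin(pi x)

Ainv = np.linalg.inv(A)                 # affordable here; a sparse solve in practice
R = B @ Ainv @ B.T                      # Schur complement: SPD but full

def cg(M, b, tol=1e-10, maxit=500):
    """Plain (unpreconditioned) conjugate gradient for an SPD matrix M."""
    x = np.zeros_like(b)
    r = b.copy()
    d = r.copy()
    for it in range(maxit):
        Md = M @ d
        alpha = (r @ r) / (d @ Md)
        x = x + alpha * d
        r_new = r - alpha * Md
        if np.linalg.norm(r_new) < tol:
            return x, it + 1
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x, maxit

u, iters = cg(R, F)                     # solve R u = F ...
p = -Ainv @ (B.T @ u)                   # ... then recover p = -A^{-1} B^T u
print(iters, np.linalg.cond(R))         # cond(R) grows like h^{-2}
```

Printing `np.linalg.cond(R)` for several values of $n$ confirms the $h^{-2}$ growth of the condition number mentioned above, which is what makes preconditioning compulsory in practice.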
To give an example, let us introduce

(7.3.7)   $\widetilde W_h := \{ q_h \in (L^2(\Omega))^d \mid q_h|_K \in \mathbb{D}_1 \quad \forall\, K \in \mathcal{T}_h \}$ ,

which corresponds to the discontinuous first order Raviart-Thomas finite element space. Notice that $W_h \subset \widetilde W_h$, and that an element $q_h$ of $\widetilde W_h$ belongs to $W_h$ if and only if $q_h \in H(\mathrm{div};\Omega)$.
Setting

(7.3.8)   $\Gamma_h := \bigcup_{K \in \mathcal{T}_h} \partial K$ ,   $\Gamma_h^0 := \Gamma_h \cap \Omega$ ,

and denoting by $S$ a side of the triangle $K$, the space of multipliers is defined as

(7.3.9)   $\Lambda_h := \{ \mu_h : \Gamma_h^0 \to \mathbb{R} \mid \mu_h|_S \in \mathbb{P}_0 \quad \forall\, S \subset \Gamma_h^0 \}$ .

Furthermore, we introduce the bilinear form

(7.3.10)   $d(q_h, \mu_h) := \sum_{K \in \mathcal{T}_h} \int_{\partial K} (q_h \cdot n_K)\, \mu_h = \sum_{S \subset \Gamma_h^0} \int_S [q_h \cdot n]\, \mu_h$

for $q_h \in \widetilde W_h$, $\mu_h \in \Lambda_h$, where $n_K$ is the unit outward normal vector on $\partial K$ and $[q_h \cdot n]$ denotes the jump of the normal component of $q_h$ across the side $S$. Notice that an element $q_h \in \widetilde W_h$ satisfies $d(q_h, \mu_h) = 0$ for each $\mu_h \in \Lambda_h$ if and only if $q_h \in H(\mathrm{div};\Omega)$, i.e., $q_h \in W_h$. Since $b(q_h, v_h)$ is not defined for $q_h \in \widetilde W_h$, we also set

(7.3.11)   $b_h(q_h, v_h) := \sum_{K \in \mathcal{T}_h} \int_K v_h\, \mathrm{div}\, q_h$
for each $q_h \in \widetilde W_h$, $v_h \in Y_h$. Instead of (7.3.2), we are led to consider the following problem: find $\tilde p_h \in \widetilde W_h$, $\tilde u_h \in Y_h$, $\lambda_h \in \Lambda_h$ such that

(7.3.12)   $\begin{cases} a(\tilde p_h, q_h) + b_h(q_h, \tilde u_h) + d(q_h, \lambda_h) = 0 & \forall\, q_h \in \widetilde W_h \\ b_h(\tilde p_h, v_h) = (F, v_h) & \forall\, v_h \in Y_h \\ d(\tilde p_h, \mu_h) = 0 & \forall\, \mu_h \in \Lambda_h \,, \end{cases}$

whose algebraic counterpart reads

(7.3.13)   $\begin{cases} \tilde A \tilde{\mathbf p} + \tilde B^T \mathbf u + D^T \boldsymbol\lambda = 0 \\ \tilde B \tilde{\mathbf p} = \mathbf f \\ D \tilde{\mathbf p} = 0 \,. \end{cases}$

Since $\tilde A$ is block-diagonal, $\tilde{\mathbf p}$ can be cheaply eliminated from the first equation,

(7.3.14)   $\tilde{\mathbf p} = -\tilde A^{-1} (\tilde B^T \mathbf u + D^T \boldsymbol\lambda)$ ,

and we are left with

(7.3.15)   $\begin{cases} \tilde B \tilde A^{-1} \tilde B^T \mathbf u + \tilde B \tilde A^{-1} D^T \boldsymbol\lambda = -\mathbf f \\ D \tilde A^{-1} \tilde B^T \mathbf u + D \tilde A^{-1} D^T \boldsymbol\lambda = 0 \,, \end{cases}$

i.e., a linear system associated to a symmetric and positive definite matrix. The unknown $\tilde u_h$ is not continuous, therefore another simple elimination procedure leads to a system where the only unknown is $\boldsymbol\lambda$. It reads

(7.3.16)   $R\, \boldsymbol\lambda = \mathbf g$ ,

where

(7.3.17)   $R := D \tilde A^{-1} D^T - D \tilde A^{-1} \tilde B^T (\tilde B \tilde A^{-1} \tilde B^T)^{-1} \tilde B \tilde A^{-1} D^T$ ,   $\mathbf g := D \tilde A^{-1} \tilde B^T (\tilde B \tilde A^{-1} \tilde B^T)^{-1} \mathbf f$ .
The symmetric and positive definite matrix $R$ has dimension $L_h \times L_h$, where $L_h$ is the number of internal sides of the triangulation, whereas the matrix $BA^{-1}B^T$ of (7.3.6) has dimension $K_h \times K_h$, $K_h$ being the number of the triangles. Although in general $L_h$ is larger than $K_h$, it is easier to apply a conjugate gradient procedure to the linear system associated to the matrix $R$ of (7.3.16) than to the one associated to $BA^{-1}B^T$, as $\tilde A$ is a block-diagonal matrix with $3 \times 3$ blocks and $\tilde B \tilde A^{-1} \tilde B^T$ is diagonal. Having determined the solution of system (7.3.16) for the Lagrange multiplier $\boldsymbol\lambda$, the mixed-Lagrangian algorithm furnishes $\mathbf u$ through (7.3.15)$_1$ and $\mathbf p$ through (7.3.13)$_1$. Let us present now some numerical results. Table 7.3.1 refers to the solution of the boundary value problem (7.2.30) (with $\alpha = 1$ and $f(x_1, x_2) = 2(1 - x_1 - x_2)$), using the BDM(2) mixed finite elements. Here, the purpose is to compare the performance of the mixed-Schur complement (MSC) algorithm with the one of the mixed-Lagrangian (ML). For several values of the gridsize we report both the number of CG-iterations (ITER) and the global CPU-time (in seconds) for solving system (7.3.6) (with preconditioner $BB^T$) in the MSC approach, or system (7.3.16)
Table 7.3.1. Comparison between mixed-Lagrangian and mixed-Schur complement algorithms for the solution of problem (7.2.30) by the BDM(2) mixed finite elements

                Mixed-Lagrangian        Mixed-Schur complement
N (= √2 h⁻¹)    ITER    CPU-time        ITER    CPU-time
10              30      0.08            7       0.23
20              38      0.42            7       0.84
40              69      3.03            8       3.59
60              85      8.32            7       7.58
80              111     16.18           7       12.96
100             139     37.36           7       22.35
(with preconditioner the incomplete Cholesky decomposition of $R$) in the ML approach. For the same problem we report in Fig. 7.3.1 the CPU-time (in seconds) versus the accuracy (still for the $L^2(\Omega)$-norm of the relative error $\|p - p_h\|_0 / \|p\|_0$) when using the RT(1) and the BDM(2) mixed finite elements. Computations for both elements have been performed by the MSC algorithm.
Fig. 7.3.1. CPU-time versus accuracy for both RT(1) (solid line) and BDM(2) (dashed line) mixed finite elements
7.4 The Approximation of More General Constrained Problems

In this Section we want to present an abstract theory, due to Babuška (1971) and Brezzi (1974), concerning existence and uniqueness of the solution to problems like (7.1.15), (7.1.16), as well as their internal approximation (7.2.1). As we already noticed, if the coefficient matrix $\{a_{ij}\}$ is symmetric, these problems are equivalent to the determination of the unique saddle point of a Lagrangian functional related to a constrained problem. However, this symmetry assumption is not necessary for the analysis we are going to develop, and we are thus using the expression "constrained problems" in a wider meaning, as the results presented here are also valid for more general problems. In our presentation we will follow the theory developed by Brezzi (1974). This theory extends and embodies the one presented in Section 7.2, which was limited to the second order elliptic equation (7.1.1). Indeed, the analysis of the Galerkin approximation to the Stokes problem in the primitive variables $u$ (the velocity) and $p$ (the pressure) (see Chapter 9) will rely upon the abstract theory presented hereafter. More generally, the numerical analysis of mixed approximations to several kinds of boundary value problems can be brought back to this abstract theory. An example is provided by boundary value problems for the biharmonic operator $\Delta^2$ and its applications to plate bending problems, or to the Stokes and Navier-Stokes problems in the stream function formulation (see Section 10.4; see also Brezzi and Fortin (1991), Chap. IV). Another example arises from the mixed approximation of linear elasticity problems (see Brezzi and Fortin (1991), Chap. VII, and the references therein).

7.4.1 Abstract Formulation

Let us introduce the general framework for this analysis. Let $X$ and $M$ be two (real) Hilbert spaces, with norms $\|\cdot\|_X$ and $\|\cdot\|_M$, respectively.
Let X' and M' be their dual spaces (the spaces of linear and continuous functionals on X and M, respectively), and introduce two bilinear forms

(7.4.1)   a(·,·) : X × X → ℝ ,   b(·,·) : X × M → ℝ

such that

(7.4.2)   |a(w,v)| ≤ γ ‖w‖_X ‖v‖_X ,   |b(w,μ)| ≤ δ ‖w‖_X ‖μ‖_M

for each w, v ∈ X and μ ∈ M. We consider the following constrained problem:
find (u, η) ∈ X × M :

(7.4.3)   a(u,v) + b(v,η) = ⟨l,v⟩  ∀ v ∈ X ,
          b(u,μ) = ⟨σ,μ⟩  ∀ μ ∈ M ,

where l ∈ X' and σ ∈ M', and ⟨·,·⟩ denotes the duality pairing between X' and X or M' and M. It is useful to rewrite problem (7.4.3) in operator form. We associate to a(·,·) and b(·,·) the operators A ∈ 𝓛(X;X') and B ∈ 𝓛(X;M') defined by

(7.4.4)   ⟨Aw,v⟩ = a(w,v)   ∀ w, v ∈ X ,

(7.4.5)   ⟨Bv,μ⟩ = b(v,μ)   ∀ v ∈ X, μ ∈ M .
We denote by Bᵀ ∈ 𝓛(M;X') the adjoint operator of B, i.e., the operator defined by

(7.4.6)   ⟨Bᵀμ, v⟩ = ⟨Bv, μ⟩ = b(v,μ)   ∀ v ∈ X, μ ∈ M .

Thus we can write (7.4.3) as: find (u, η) ∈ X × M such that

(7.4.7)   Au + Bᵀη = l  in X' ,
          Bu = σ  in M' .
Now define the affine manifold

X^σ := {v ∈ X | b(v,μ) = ⟨σ,μ⟩ ∀ μ ∈ M} .

Clearly X⁰ = ker B, and this is a closed subspace of X. We can now associate to problem (7.4.3) the following problem:
(7.4.8)   find u ∈ X^σ :  a(u,v) = ⟨l,v⟩  ∀ v ∈ X⁰ .
It is readily seen that if (u, η) is a solution to (7.4.3), then u is a solution to (7.4.8). We will introduce suitable conditions ensuring that the converse is also true, and that the solution to (7.4.8) exists and is unique, thus constructing a solution to (7.4.3). Before doing this, we need to prove an equivalence result concerning the bilinear form b(·,·) and the operator B. Denote by (X⁰)° the polar set of X⁰, i.e.,

(7.4.9)   (X⁰)° := {g ∈ X' | ⟨g,v⟩ = 0 ∀ v ∈ X⁰} .

Since X⁰ = ker B, we can also write (X⁰)° = (ker B)°. Let us decompose X as follows:

X = X⁰ ⊕ (X⁰)^⊥ .

Clearly, B is not an isomorphism from X onto M', as in general ker B = X⁰ ≠ {0}. We are going to introduce a condition which is equivalent to the fact
that B is indeed an isomorphism from (X⁰)^⊥ onto M' (and moreover Bᵀ is an isomorphism from M onto (X⁰)°).

Proposition 7.4.1 The following statements are equivalent:

a. there exists a constant β* > 0 such that the compatibility condition

(7.4.10)   ∀ μ ∈ M ∃ v ∈ X, v ≠ 0 :  b(v,μ) ≥ β* ‖v‖_X ‖μ‖_M

holds;

b. the operator Bᵀ is an isomorphism from M onto (X⁰)° and

(7.4.11)   ‖Bᵀμ‖_{X'} ≥ β* ‖μ‖_M   ∀ μ ∈ M ;

c. the operator B is an isomorphism from (X⁰)^⊥ onto M' and

(7.4.12)   ‖Bv‖_{M'} ≥ β* ‖v‖_X   ∀ v ∈ (X⁰)^⊥ .
Proof. First of all we show that a. and b. are equivalent. From the definition of Bᵀ (see (7.4.6)) it is clear that (7.4.10) and (7.4.11) are equivalent. Thus, we only have to prove that Bᵀ is an isomorphism from M onto (X⁰)°. Clearly, (7.4.11) shows that Bᵀ is an injective operator from M onto its range R(Bᵀ), with a continuous inverse. Thus, R(Bᵀ) is a closed subspace of X'. It remains to be proven that R(Bᵀ) = (X⁰)°. The application of the Closed Range theorem (see, e.g., Yosida (1974), p. 205) gives at once

R(Bᵀ) = (ker B)° = (X⁰)° .

Let us now prove that b. and c. are equivalent. We first see that (X⁰)° can be identified with the dual of (X⁰)^⊥. In fact, to each g̃ ∈ ((X⁰)^⊥)' we associate g ∈ X' satisfying ⟨g,v⟩ = ⟨g̃, P^⊥v⟩ ∀ v ∈ X, where P^⊥ is the orthogonal projection onto (X⁰)^⊥. Clearly g ∈ (X⁰)°, and one verifies that g̃ ↦ g is an isometric bijection from ((X⁰)^⊥)' onto (X⁰)°. As a consequence, Bᵀ is an isomorphism from M onto ((X⁰)^⊥)' satisfying

‖(Bᵀ)⁻¹‖_{𝓛((X⁰)°; M)} ≤ 1/β*

if, and only if, B is an isomorphism from (X⁰)^⊥ onto M' satisfying

‖B⁻¹‖_{𝓛(M'; (X⁰)^⊥)} ≤ 1/β* .

The proof is now complete. □
We are now in a position to prove the main theorem of this Section.

Theorem 7.4.1 Assume that the bilinear form a(·,·) satisfies (7.4.2) and is coercive on X⁰, i.e., there exists a constant α > 0 such that

(7.4.13)   a(v,v) ≥ α ‖v‖²_X   ∀ v ∈ X⁰ .

Assume, moreover, that the bilinear form b(·,·) satisfies (7.4.2) and the compatibility condition (7.4.10) holds true. Then, for each l ∈ X', σ ∈ M' there exists a unique solution u to (7.4.8), and a unique η ∈ M such that (u, η) is the (unique) solution to (7.4.3). Furthermore, the map (l, σ) ↦ (u, η) is an isomorphism from X' × M' onto X × M, and

(7.4.14)   ‖u‖_X ≤ (1/α) ‖l‖_{X'} + (1/β*)(1 + γ/α) ‖σ‖_{M'} ,

(7.4.15)   ‖η‖_M ≤ (1/β*)(1 + γ/α) ‖l‖_{X'} + (γ/β*²)(1 + γ/α) ‖σ‖_{M'} .
Proof. Uniqueness of the solution to problem (7.4.8) is a straightforward consequence of the coerciveness assumption (7.4.13). Let us now prove the existence. As condition (7.4.10) is satisfied, from c. in Proposition 7.4.1 there exists a unique u^σ ∈ (X⁰)^⊥ such that Bu^σ = σ and

(7.4.16)   ‖u^σ‖_X ≤ (1/β*) ‖σ‖_{M'} .

Thus, we rewrite (7.4.8) as:

(7.4.17)   find ů ∈ X⁰ :  a(ů, v) = ⟨l,v⟩ − a(u^σ, v)   ∀ v ∈ X⁰ ,

and define the solution u to (7.4.8) as u = ů + u^σ. The existence of a unique solution ů to (7.4.17) is assured by the Lax–Milgram lemma (see Theorem 5.1.1), and, moreover,

(7.4.18)   ‖ů‖_X ≤ (1/α)(‖l‖_{X'} + γ ‖u^σ‖_X) .

Let us now consider problem (7.4.3). As (7.4.17) can be written in the form

⟨Au − l, v⟩ = 0   ∀ v ∈ X⁰ ,

we have (Au − l) ∈ (X⁰)°. Moreover, from b. in Proposition 7.4.1 we can find a unique η ∈ M such that Au − l = −Bᵀη, i.e., (u, η) is a solution to (7.4.3), and it satisfies

(7.4.19)   ‖η‖_M ≤ (1/β*)(‖l‖_{X'} + γ ‖u‖_X) .
We have already noticed that each solution (u, η) to (7.4.3) gives a solution to (7.4.8): thus uniqueness also holds for problem (7.4.3). Finally, summing up inequalities (7.4.16), (7.4.18) and (7.4.19), the thesis follows at once. □
We are now in a position to consider the approximation of the abstract constrained problem (7.4.3). Let X_h and M_h be finite dimensional subspaces of X and M, respectively. Given l ∈ X' and σ ∈ M', we look for the solution to the following problem:

(7.4.20)   find (u_h, η_h) ∈ X_h × M_h :
           a(u_h, v_h) + b(v_h, η_h) = ⟨l, v_h⟩  ∀ v_h ∈ X_h ,
           b(u_h, μ_h) = ⟨σ, μ_h⟩  ∀ μ_h ∈ M_h .

As in the infinite dimensional case, we define the space

X^σ_h := {v_h ∈ X_h | b(v_h, μ_h) = ⟨σ, μ_h⟩ ∀ μ_h ∈ M_h} .

Since M_h is, in general, a proper subspace of M, we notice that X^σ_h ⊄ X^σ. The finite dimensional problem corresponding to (7.4.8) reads as follows:

(7.4.21)   find u_h ∈ X^σ_h :  a(u_h, v_h) = ⟨l, v_h⟩  ∀ v_h ∈ X⁰_h .

Clearly, each solution (u_h, η_h) to (7.4.20) provides a solution u_h to (7.4.21). We are going to present suitable conditions to ensure that the converse is also true. Moreover, a sound stability and convergence analysis will follow.
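In matrix terms, once bases for X_h and M_h are fixed, (7.4.20) is an indefinite block linear system coupling u_h and the multiplier η_h. The following sketch (a hypothetical toy example with randomly generated matrices A and B, not taken from the book) assembles and solves such a system and checks that the discrete constraint is satisfied:

```python
import numpy as np

# Hypothetical toy instance of the discrete saddle-point system (7.4.20):
#   A u + B^T eta = l,   B u = sigma,
# with A the matrix of a(.,.) on X_h and B the matrix of b(.,.).
rng = np.random.default_rng(0)
n, m = 6, 2                              # dim X_h = 6, dim M_h = 2
A = np.diag(rng.uniform(1.0, 2.0, n))    # coercive (here SPD) on X_h
B = rng.standard_normal((m, n))          # assumed full rank (inf-sup)
l = rng.standard_normal(n)
sigma = rng.standard_normal(m)

# Assemble the indefinite block matrix [A, B^T; B, 0] and solve
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([l, sigma]))
u, eta = sol[:n], sol[n:]

print(np.allclose(B @ u, sigma))         # True: constraint equation holds
print(np.allclose(A @ u + B.T @ eta, l)) # True: first equation holds
```

The zero (2,2)-block is what makes the system indefinite even when A is symmetric positive definite; this is why specialized solvers (Uzawa-type iterations, null-space methods) are often preferred for mixed discretizations in practice.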
7.4.2 Analysis of Stability and Convergence

We first state the finite dimensional counterpart of Theorem 7.4.1, which shows at the same time the stability of the approximate solutions (u_h, η_h).
Theorem 7.4.2 (Stability). Assume that the bilinear form a(·,·) satisfies (7.4.2) and is coercive on X⁰_h, i.e., there exists a constant α_h > 0 such that

(7.4.22)   a(v_h, v_h) ≥ α_h ‖v_h‖²_X   ∀ v_h ∈ X⁰_h .

Assume, moreover, that the bilinear form b(·,·) satisfies (7.4.2) and that the following compatibility condition holds: there exists β_h > 0 such that

(7.4.23)   ∀ μ_h ∈ M_h ∃ v_h ∈ X_h, v_h ≠ 0 :  b(v_h, μ_h) ≥ β_h ‖v_h‖_X ‖μ_h‖_M .

Then for each l ∈ X', σ ∈ M' there exists a unique solution (u_h, η_h) to (7.4.20), which satisfies (7.4.14) and (7.4.15) with α_h instead of α and β_h instead of β*. In those cases in which both α_h and β_h in (7.4.22) and (7.4.23) are independent of h, this is a stability result for (u_h, η_h).
The proof of the preceding Theorem follows straightforwardly by taking X_h and M_h instead of X and M in Theorem 7.4.1, and by noticing that the norms of the restrictions of l and σ to X_h and M_h are bounded by ‖l‖_{X'} and ‖σ‖_{M'}, respectively.

It must be observed, however, that (7.4.13) does not imply (7.4.22), as X⁰_h ⊄ X⁰, and (7.4.10) does not imply (7.4.23), since, in general, X_h is a proper subspace of X. As we have already noticed in Section 7.2.1, the compatibility condition (7.4.23) is also called the inf-sup or Ladyzhenskaya–Babuška–Brezzi condition. Let us now come to the convergence theorem, whose proof will be reported here even though it is very similar to that of Theorem 7.2.2.
Theorem 7.4.3 (Convergence). Let the assumptions of Theorems 7.4.1 and 7.4.2 be satisfied. Then the solutions (u, η) and (u_h, η_h) to (7.4.3) and (7.4.20), respectively, satisfy the following error estimates:

(7.4.24)   ‖u − u_h‖_X ≤ (1 + γ/α_h) inf_{v*_h ∈ X^σ_h} ‖u − v*_h‖_X + (δ/α_h) inf_{μ_h ∈ M_h} ‖η − μ_h‖_M ,

(7.4.25)   ‖η − η_h‖_M ≤ (γ/β_h)(1 + γ/α_h) inf_{v*_h ∈ X^σ_h} ‖u − v*_h‖_X + (1 + δ/β_h + γδ/(α_h β_h)) inf_{μ_h ∈ M_h} ‖η − μ_h‖_M .

Moreover, the following estimate holds:

(7.4.26)   inf_{v*_h ∈ X^σ_h} ‖u − v*_h‖_X ≤ (1 + δ/β_h) inf_{v_h ∈ X_h} ‖u − v_h‖_X .

From (7.4.24)–(7.4.26) it follows that convergence is optimal, provided that (7.4.22) and (7.4.23) hold with constants independent of h.

Proof. Take v_h ∈ X_h, v*_h ∈ X^σ_h and μ_h ∈ M_h. By subtracting (7.4.20)₁ from (7.4.3)₁ it follows that

a(u_h − v*_h, v_h) + b(v_h, η_h − μ_h) = a(u − v*_h, v_h) + b(v_h, η − μ_h)   ∀ v_h ∈ X_h .

Choosing v_h = (u_h − v*_h) ∈ X⁰_h, from (7.4.22) and (7.4.2) we have

‖u_h − v*_h‖_X ≤ (1/α_h)(γ ‖u − v*_h‖_X + δ ‖η − μ_h‖_M) ,

and consequently (7.4.24) holds true. To obtain (7.4.25), we start from the compatibility condition (7.4.23). For each μ_h ∈ M_h we find

β_h ‖η_h − μ_h‖_M ≤ sup_{v_h ∈ X_h, v_h ≠ 0} b(v_h, η_h − μ_h)/‖v_h‖_X .
On the other hand, by subtracting (7.4.20)₁ from (7.4.3)₁ it follows that

b(v_h, η_h − μ_h) = a(u − u_h, v_h) + b(v_h, η − μ_h)   ∀ v_h ∈ X_h .

From (7.4.2) we obtain

‖η_h − μ_h‖_M ≤ (1/β_h)(γ ‖u − u_h‖_X + δ ‖η − μ_h‖_M) ,

which, together with (7.4.24), yields (7.4.25). Finally, let us prove (7.4.26). For each v_h ∈ X_h, owing to (7.4.23) and Proposition 7.4.1 there exists a unique z_h ∈ (X⁰_h)^⊥ such that

b(z_h, μ_h) = b(u − v_h, μ_h)   ∀ μ_h ∈ M_h

and

‖z_h‖_X ≤ (δ/β_h) ‖u − v_h‖_X .

Setting v*_h := z_h + v_h, we readily see that v*_h ∈ X^σ_h, as b(u, μ_h) = ⟨σ, μ_h⟩ for each μ_h ∈ M_h. Furthermore,

‖u − v*_h‖_X ≤ ‖u − v_h‖_X + ‖z_h‖_X ≤ (1 + δ/β_h) ‖u − v_h‖_X ,

and (7.4.26) follows. □
Remark 7.4.1 Looking at the proof of Theorem 7.2.2 (see in particular (7.2.17)), we notice that, under the condition X⁰_h ⊂ X⁰, it is possible to estimate ‖u − u_h‖_X in terms of inf_{v_h ∈ X_h} ‖u − v_h‖_X solely. Also, error estimate (7.4.24) holds even if the compatibility conditions (7.4.10) and (7.4.23) are not satisfied. □

Remark 7.4.2 (Spurious modes). The compatibility condition (7.4.23) is necessary to achieve uniqueness of η_h. Actually, it can be written as:

if μ_h ∈ M_h and b(v_h, μ_h) = 0 for each v_h ∈ X_h, then μ_h = 0 .

Thus, if (7.4.23) is not satisfied, there exists μ*_h ∈ M_h, μ*_h ≠ 0, such that

b(v_h, μ*_h) = 0   ∀ v_h ∈ X_h .

As a consequence, if (u_h, η_h) solves (7.4.20), then (u_h, η_h + τμ*_h), τ ∈ ℝ, is also a solution to the same problem. Any such element μ*_h is called a spurious (or parasitic) mode, as it cannot be detected by the numerical problem (7.4.20). It might thus generate instabilities for the method (see also Section 9.2.2). □
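Condition (7.4.23) can also be checked numerically for a given pair of discrete spaces. In the following hypothetical algebraic sketch (not from the book) the Gram matrices of X_h and M_h are taken to be identities, so that β_h reduces to the smallest singular value of the matrix of b(·,·); β_h = 0 signals the presence of a spurious mode in the sense of Remark 7.4.2:

```python
import numpy as np

# Hypothetical algebraic check of the discrete inf-sup condition (7.4.23):
# if the Gram (norm) matrices of X_h and M_h are identities, beta_h is the
# smallest singular value of the matrix B of b(.,.); beta_h = 0 signals a
# spurious mode in the sense of Remark 7.4.2.
rng = np.random.default_rng(1)
n, m = 8, 3                              # dim X_h = 8, dim M_h = 3
B = rng.standard_normal((m, n))

beta_h = np.sqrt(max(np.linalg.eigvalsh(B @ B.T).min(), 0.0))
print(beta_h > 0.0)                      # True: (7.4.23) holds for this B

# If M_h is too rich relative to X_h, B B^T is singular: beta_h vanishes
B_bad = rng.standard_normal((n + 1, n))
lam_bad = np.linalg.eigvalsh(B_bad @ B_bad.T).min()
print(abs(lam_bad) < 1e-10)              # True: a spurious mode exists
```

With non-identity Gram matrices the same computation becomes a generalized eigenvalue problem for the Schur complement; the qualitative conclusion (β_h = 0 exactly when a spurious mode exists) is unchanged.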
7.4.3 How to Verify the Uniform Compatibility Condition
In most applications, it is not difficult to verify that (7.4.22) holds uniformly with respect to h (see, e.g., Section 7.2.1 and Chapter 9). It is more difficult to prove that the compatibility condition (7.4.23) holds uniformly in h. A first criterion in this direction has been proposed by Fortin (1977) (see Lemma 7.2.1). Here we report the general abstract formulation, whose proof follows the same guidelines as the one presented before.

Lemma 7.4.1 (Fortin). Assume that the compatibility condition (7.4.10) is satisfied, and, moreover, that there exists an operator π_h : X → X_h such that

(i) b(v − π_h(v), μ_h) = 0 for each v ∈ X, μ_h ∈ M_h ;
(ii) ‖π_h(v)‖_X ≤ C* ‖v‖_X for each v ∈ X ,

where C* > 0 doesn't depend on h. Then, the compatibility condition (7.4.23) is satisfied with β_h = β*/C*.

Another interesting case is related to the choice X = (H¹₀(Ω))^d, M = L²₀(Ω), a(w,v) = ν(∇w,∇v), b(v,q) = (q, div v), ν > 0, which are the functional spaces and the bilinear forms appearing in the variational formulation of the Stokes problem (see Chapter 9). In this case, Verfürth (1984b) has proven the following result:

Lemma 7.4.2 (Verfürth). Let 𝒯_h be a quasi-uniform family of triangulations of Ω. Assume that M_h ⊂ H¹(Ω) ∩ L²₀(Ω) and that there exists β̃ > 0 such that

(7.4.27)   ∀ q_h ∈ M_h ∃ v_h ∈ X_h, v_h ≠ 0 :  b(v_h, q_h) ≥ β̃ ‖v_h‖₀ ‖q_h‖₁ .

Assume, moreover, that there exist an operator R_h : X → X_h and a constant K > 0 such that

(7.4.28)   ‖v − R_h(v)‖₀ + h ‖v − R_h(v)‖₁ ≤ K h ‖v‖₁   ∀ v ∈ X .

Then the compatibility condition (7.4.23) is satisfied.

Proof. First of all, recalling that from the inverse inequality (6.3.21) for each v_h ∈ X_h we have ‖v_h‖₁ ≤ C h⁻¹ ‖v_h‖₀, condition (7.4.27) implies that

(7.4.29)   ∀ q_h ∈ M_h ∃ v_h ∈ X_h, v_h ≠ 0 :  b(v_h, q_h) ≥ K₁ h ‖v_h‖₁ ‖q_h‖₁ ,  K₁ := β̃/C .

Let us now recall that for each q_h ∈ M_h there exists w ∈ X such that div w = q_h and

(7.4.30)   ‖w‖₁ ≤ K₂ ‖q_h‖₀

(see, e.g., Girault and Raviart (1986), p. 24). Thus from (7.4.28) we find

(7.4.31)   ‖R_h(w)‖₁ ≤ ‖R_h(w) − w‖₁ + ‖w‖₁ ≤ (1 + K) ‖w‖₁ ≤ K₂(1 + K) ‖q_h‖₀ .

If q_h is such that

(7.4.32)   h ‖q_h‖₁ ≥ ε

for some ε > 0, then (7.4.29) yields

(7.4.33)   b(v_h, q_h) ≥ K₁ ε ‖v_h‖₁ .

On the contrary, if

(7.4.34)   h ‖q_h‖₁ < ε ,

from (7.4.28), (7.4.30) and (7.4.31) we have

(7.4.35)   b(R_h(w), q_h) = b(w, q_h) + b(R_h(w) − w, q_h)
           ≥ ‖q_h‖₀² − ‖R_h(w) − w‖₀ ‖q_h‖₁ ≥ ‖q_h‖₀² − K K₂ h ‖q_h‖₀ ‖q_h‖₁
           ≥ [1/(K₂(1 + K))] (‖q_h‖₀ − K K₂ ε) ‖R_h(w)‖₁ ,

where the term b(R_h(w) − w, q_h) has been integrated by parts. Putting together (7.4.33) and (7.4.35), it is readily proven that for each q_h ∈ M_h there exists z_h ∈ X_h, z_h ≠ 0, such that

(7.4.36)   b(z_h, q_h) ≥ Q(ε) ‖z_h‖₁ ,

where

Q(ε) := min { K₁ ε , [1/(K₂(1 + K))] (‖q_h‖₀ − K K₂ ε) } ,  ε > 0 .

Finally, choosing ε so as to maximize Q(ε) over {ε > 0}, from (7.4.33) and (7.4.36) it easily follows that (7.4.23) holds with β_h = K₁ K₂⁻¹ [K₁(1 + K) + K]⁻¹. □

Notice that (7.4.27) is different from (7.4.23), as the "wrong" norms occur at the right hand side. However, (7.4.27) has been proven to hold for some finite element spaces frequently used in the approximation of the Stokes problem (see for instance the Taylor–Hood or Bercovier–Pironneau elements in Section 9.3.2; the proof is reported, e.g., in Fortin (1993) and Bercovier and Pironneau (1979), respectively). Furthermore, under suitable assumptions, the approximation property (7.4.28) is true for several choices of the space X_h (see for instance (3.5.22), or Clément (1975)).
Other viable criteria for checking the uniform compatibility condition (7.4.23) when considering the Stokes problem have been proposed by Boland and Nicolaides (1983) and Stenberg (1984) (see also Gunzburger (1989), pp. 16–21, and Girault and Raviart (1986), pp. 129–132). They essentially require subdividing Ω into macroelements, and verifying the compatibility condition (7.4.23) only locally, over each macroelement.
7.5 Complements

For the mixed method, error estimates of u − u_h in the L^∞(Ω)-norm have been proven, e.g., by Scholz (1977), Douglas and Roberts (1985), Gastaldi and Nochetto (1987). The extension of the results presented in this Chapter to the case of a general elliptic operator

Lw := − Σ_{i,j=1}^d D_i(a_ij D_j w) + Σ_{i=1}^d c_i D_i w + a₀ w

can give rise to some difficulties. The case of the dual mixed formulation has been analyzed by Douglas and Roberts (1985). A different generalization is concerned with problems that can be written in the form

find (u, η) ∈ X × M :  a(u,v) + b(v,η) = ⟨l,v⟩  ∀ v ∈ X ,
                       b(u,μ) − d(η,μ) = ⟨σ,μ⟩  ∀ μ ∈ M ,

where d(·,·) is a continuous and non-negative bilinear form on M × M. For the analysis of this problem we refer to Roberts and Thomas (1991) (see also Section 9.6.7). Another extension of the theory of Brezzi is related to the problem

find (u, η) ∈ X × M :  a(u,v) + b₁(v,η) = ⟨l,v⟩  ∀ v ∈ X ,
                       b₂(u,μ) = ⟨σ,μ⟩  ∀ μ ∈ M ,

where the bilinear forms b₁ and b₂ are distinct (see Nicolaides (1982) and Bernardi, Canuto and Maday (1988), the latter being motivated by the Chebyshev approximation of the Stokes problem).
8. Steady Advection-Diffusion Problems
In this Chapter we consider second order linear elliptic equations having an advective term which is much stronger than the diffusive one. At first, the difficulties stemming from the use of the Galerkin method are outlined on a simple one-dimensional finite difference scheme. The first remedy is to resort to "upwind" schemes, which introduce a "numerical viscosity" that has the effect of stabilizing the computational algorithm. However, this procedure is only first order accurate. More general and higher order accurate stabilization methods are then described and analyzed in the two- and three-dimensional cases, for both the finite element and the spectral collocation approximation. Another Section is devoted to the presentation of some numerical results obtained by the use of the proposed methods. Finally, a heterogeneous approach, based on a domain decomposition technique, is briefly illustrated. The search for optimal stabilization methods for advection-dominated problems is still an active research area. The interested reader is advised to keep abreast of the latest literature.
8.1 Mathematical Formulation

Let us now consider the second order linear elliptic operator

(8.1.1)   Lz := − Σ_{i,j=1}^d D_i(a_ij D_j z) + Σ_{i=1}^d D_i(b_i z) + a₀ z ,

and its associated bilinear form

(8.1.2)   a(z,v) := ∫_Ω [ Σ_{i,j=1}^d a_ij D_j z D_i v − Σ_{i=1}^d b_i z D_i v + a₀ z v ] ,

that have been introduced in Section 6.1. We assume that a(·,·) is continuous and coercive in V × V, V being a closed subspace of H¹(Ω), i.e., there exist constants γ > 0 and α > 0 such that
(8.1.3)   |a(z,v)| ≤ γ ‖z‖₁ ‖v‖₁   ∀ z, v ∈ V ,

(8.1.4)   a(v,v) ≥ α ‖v‖₁²   ∀ v ∈ V .
As reported in Section 6.1.2, suitable assumptions on the coefficients a_ij, b_i and a₀ ensure that (8.1.3), (8.1.4) hold. For instance, if V = H¹₀(Ω), assuming that (6.1.19) holds with η = α₀ we can choose α in the following way:

(8.1.5)   α = α₀/(1 + C_Ω) ,

α₀ and C_Ω being the ellipticity and Poincaré constants, respectively (see Definition 6.1.1 and (1.3.2)). Moreover, γ can be expressed as
(8.1.6)   γ = C_{d,Ω} ( max_{i,j} ‖a_ij‖_{L∞(Ω)} + ‖b‖_{L∞(Ω)} + ‖a₀‖_{L∞(Ω)} ) ,

where C_{d,Ω} is a suitable constant depending upon the physical dimension d and the domain Ω. We recall that under the above assumptions (8.1.3) and (8.1.4), the existence of a unique solution

(8.1.7)   u ∈ V :  a(u,v) = F(v)   ∀ v ∈ V ,

F(·) being a continuous linear functional on the Hilbert space V, is assured by the Lax–Milgram lemma (see Theorem 5.1.1). In Section 6.2.1 we have introduced the Galerkin method as a possible way to approximate the solution u to (8.1.7). However, in Section 6.1.4 we noticed that the Galerkin method can perform poorly if the coerciveness constant α is small in comparison with the continuity constant γ. In particular, error estimate (6.2.8) can have, in front, a very large multiplicative constant C if the ellipticity constant α₀ is small with respect to ‖b‖_{L∞(Ω)} and/or to ‖a₀‖_{L∞(Ω)}. The aim of the following Sections is to show that this kind of difficulty may really occur for the Galerkin method. Next, we describe some alternative ways of overcoming it. In Section 8.2 we start from a simple one-dimensional example which already shows where instabilities come from. Then, in Section 8.3 we provide a description of some of the most common remedies, especially in the framework of finite element and spectral approximation.
8.2 A One-Dimensional Example

Let us consider the one-dimensional advection-diffusion equation

(8.2.1)   −ε D²u + b Du = 0 ,  0 < x < 1 ,  u(0) = 0 ,  u(1) = 1 ,

where ε > 0 and b > 0 are constant. Its exact solution, u(x) = (e^{bx/ε} − 1)/(e^{b/ε} − 1), is monotonically increasing and, when ε/b is small, exhibits a boundary layer of width O(ε/b) at x = 1.

8.2.1 Galerkin Approximation

The Galerkin method with piecewise-linear finite elements on the uniform grid x_j = jh, j = 0, …, M, h = 1/M, leads to a linear system (8.2.2) for the nodal values ξ_j = u_h(x_j), whose solution turns out to be

(8.2.6)   ξ_j = [ ((1+Pe)/(1−Pe))^j − 1 ] / [ ((1+Pe)/(1−Pe))^M − 1 ] ,  j = 1, …, M−1 ,

where Pe := bh/(2ε) is the so-called (local) Péclet number. If Pe > 1 then (1+Pe)/(1−Pe) < 0, and consequently the solution u_h exhibits an oscillatory behaviour. Clearly, for fixed ε and b, in principle it is always possible to choose the grid-size h small enough that Pe ≤ 1, thus avoiding oscillations (see Fig. 8.2.1). However, this is very
often unpractical if ε is very small with respect to b, since one would obtain linear systems with too many unknowns (and it may actually be unfeasible in the higher dimensional case).
Fig. 8.2.1. Exact solution, centered and upwind approximations to (8.2.1), with ε = 1/50 and b = 1: exact solution; centered approximation with h = 1/15 (Pe = 5/3) and with h = 1/40 (Pe = 5/8); upwind approximation with h = 1/40 (Pe = 5/8)
Therefore, a first conclusion could be: if an advection-diffusion problem has a boundary layer and the mesh-size h is larger than ε/b, the Galerkin solution may be affected by oscillations. We will see in Section 8.3 how to overcome this problem. The linear system (8.2.2) is equivalent to the one obtained by approximating (8.2.1) by centered finite differences. (This coincidence occurs as we are considering a uniform grid, and a constant advective velocity b.) In fact, the latter scheme reads:
(8.2.8)   −ε (u_{j+1} − 2u_j + u_{j−1})/h² + b (u_{j+1} − u_{j−1})/(2h) = 0 ,  j = 1, …, M−1 ,
          u₀ = 0 ,  u_M = 1 ,

where u_j is the approximation of u(x_j). This is a system of linear difference equations whose solution is u_j = ξ_j, with ξ_j given by (8.2.6) (e.g., Isaacson and Keller (1966), Sect. 8.4). Therefore the Galerkin method with piecewise-linear polynomials is equivalent to the approximation by means of centered finite differences.

Remark 8.2.1 Instead of problem (8.2.1) let us consider, for a while, the reaction-diffusion problem
−ε D²u + a₀ u = 0 ,  0 < x < 1 ,  u(0) = 0 ,  u(1) = 1 ,

where ε > 0 and a₀ > 0. Its solution is u(x) = sinh(√a₀ x/√ε)/sinh(√a₀/√ε), and has a boundary layer of width O(√ε/√a₀) at x = 1 whenever ε/a₀ is small enough. The centered finite difference scheme would approximate the exact solution u(x_j) by

(8.2.9)   ξ_j = (τ₂^j − τ₁^j)/(τ₂^M − τ₁^M) ,  j = 1, …, M−1 ,

with τ₁ = 1 + σ − √(σ(2+σ)) and τ₂ = 1 + σ + √(σ(2+σ)), σ := a₀h²ε⁻¹/2. Since 0 < τ₁ < 1 < τ₂ for all values of the grid-size h, the sequence ξ_j is monotonically increasing; thus the finite difference scheme is oscillation-free for all values of h.

On the other hand, approximating this problem by the Galerkin method using piecewise-linear finite elements would provide the following set of equations:

ε Σ_{i=1}^{M−1} ξ_i ∫₀¹ Dφ_i Dφ_j + a₀ Σ_{i=1}^{M−1} ξ_i ∫₀¹ φ_i φ_j = 0 ,  j = 1, …, M−1 ,

where φ_i are the basis functions of V_h. After computing the integrals we obtain the following algebraic equations:

−ε (ξ_{j+1} − 2ξ_j + ξ_{j−1})/h + (a₀h/6)(ξ_{j+1} + 4ξ_j + ξ_{j−1}) = 0 ,  j = 1, …, M−1 ,
ξ₀ = 0 ,  ξ_M = 1 .

Correspondingly, for σ ≠ 3 we obtain a solution like (8.2.9), but with

τ₁ = (3 + 2σ − √(3σ(σ+6)))/(3 − σ) ,   τ₂ = (3 + 2σ + √(3σ(σ+6)))/(3 − σ) .

These characteristic roots have the same sign as 3 − σ; thus the finite element solution is affected by oscillations unless σ < 3, i.e., h < (6ε/a₀)^{1/2}. This time the "responsible" for the numerical oscillation is the zero-order term a₀u. As a matter of fact, if we replace the mass matrix M_ij := ∫₀¹ φ_i φ_j (i.e., the finite element representation of the identity operator) by its lumped form M̃_ij := (Σ_{k=1}^{M−1} M_ik) δ_ij, we verify at once that M̃ = hI, and therefore we reobtain exactly the finite difference scheme, thus an unconditionally stable solution. (For a discussion on the mass lumping procedure for time-dependent problems, see Section 11.4.) □
8.2.2 Upwind Finite Differences and Numerical Diffusion

By noticing that in problem (8.2.1) transport occurs from left to right (as the advective coefficient b has a positive sign), one is led to consider backward (upwind) finite differences to approximate the term bDu. The resulting scheme is

(8.2.10)   −ε (u_{j+1} − 2u_j + u_{j−1})/h² + b (u_j − u_{j−1})/h = 0 ,  j = 1, …, M−1 ,
           u₀ = 0 ,  u_M = 1 ,

and has the solution

(8.2.11)   u_j = [(1 + 2Pe)^j − 1]/[(1 + 2Pe)^M − 1] ,  j = 1, …, M−1 .

This time u_j doesn't oscillate any longer. Indeed, its value increases with j, as 1 + 2Pe > 0 for each value of ε, b and h (see Fig. 8.2.1). It can be noticed that this upwind scheme introduces a local truncation error of order O(h), whereas the centered difference is of order O(h²). (We warn the reader that the error arising from piecewise-linear finite elements decays like O(h) in the H¹-norm (see (6.2.8)), but like O(h²) in the maximum norm (see Ciarlet (1968)). This is consistent with the fact that piecewise-linear finite elements coincide with centered finite differences on a uniform grid in one dimension, provided the boundary value problem at hand has constant coefficients.) Despite this lack of accuracy, the upwind scheme is indeed better suited for the approximation of the solution to (8.2.1) when the Péclet number Pe is larger than 1.

This fact can also be explained in a different way. The upwind approximation of Du can be written as

(u_j − u_{j−1})/h = (u_{j+1} − u_{j−1})/(2h) − (h/2)(u_{j+1} − 2u_j + u_{j−1})/h² ;

therefore it corresponds to a centered difference approximation of the regularized operator D − (h/2)D². In other words, it introduces a numerical dissipation that can be regarded as a direct discretization of the artificial viscous term −(h/2)D²u. This is frequently called numerical diffusion (or numerical viscosity). This also explains from a different point of view why the upwind difference scheme is only first order accurate.

It is possible to interpret the upwind difference scheme as a Petrov–Galerkin approximation of (8.2.1). This method is based on the fact that the spaces of trial and test functions are different, and usually denoted by W_h and V_h, respectively. In the approximation of (8.2.1), the space of trial
functions can be chosen as W_h = X¹_h ∩ H¹₀(0,1); the corresponding affine subspace W*_h = {v_h ∈ X¹_h | v_h(0) = 0, v_h(1) = 1} accounts for the non-homogeneous Dirichlet condition. To define the space of test functions V_h, consider the piecewise-quadratic function

(8.2.12)   σ₂(x) := 3x(1 + x) for −1 ≤ x ≤ 0 ,  3x(1 − x) for 0 ≤ x ≤ 1 ,  0 for |x| ≥ 1 .

We can then introduce the finite dimensional space V_h as

(8.2.13)   V_h := span{ψ_j | j = 1, …, M−1} ,

where

(8.2.14)   ψ_j(x) := φ_j(x) + σ₂((x − x_j)/h) ,

φ_j being the basis function of W_h corresponding to the node x_j. It is easily verified (see, e.g., Mitchell and Griffiths (1980), pp. 221–223) that the linear system associated to (8.2.10) is equivalent to the one associated to the Petrov–Galerkin problem

find u_h ∈ W*_h :  ∫₀¹ (ε Du_h Dv_h + b Du_h v_h) = 0   ∀ v_h ∈ V_h .
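The one-dimensional comparison of Sections 8.2.1 and 8.2.2 can be reproduced numerically. The sketch below (using the parameter values of Fig. 8.2.1) solves the centered scheme (8.2.8) and the upwind scheme (8.2.10) and checks the behaviour discussed above:

```python
import numpy as np

# Problem (8.2.1): -eps u'' + b u' = 0, u(0)=0, u(1)=1, with the parameter
# values of Fig. 8.2.1: eps = 1/50, b = 1, h = 1/15, so Pe = b*h/(2*eps) > 1.
eps, b, M = 1/50, 1.0, 15
h = 1.0 / M
Pe = b * h / (2 * eps)                   # = 5/3

def solve_scheme(lo, di, up):
    """Interior unknowns u_1..u_{M-1}; boundary data u_0 = 0, u_M = 1."""
    n = M - 1
    A = (np.diag(np.full(n, di)) + np.diag(np.full(n - 1, lo), -1)
         + np.diag(np.full(n - 1, up), 1))
    rhs = np.zeros(n)
    rhs[-1] = -up
    return np.linalg.solve(A, rhs)

# Centered scheme (8.2.8) and upwind scheme (8.2.10)
u_c  = solve_scheme(-eps/h**2 - b/(2*h), 2*eps/h**2,       -eps/h**2 + b/(2*h))
u_up = solve_scheme(-eps/h**2 - b/h,     2*eps/h**2 + b/h, -eps/h**2)

print(np.any(np.diff(u_c) < 0))          # True: centered oscillates (Pe > 1)
print(np.all(np.diff(u_up) > 0))         # True: upwind is monotone

# The upwind solution reproduces the closed form (8.2.11)
j = np.arange(1, M)
u_ex = ((1 + 2*Pe)**j - 1) / ((1 + 2*Pe)**M - 1)
print(np.allclose(u_up, u_ex))           # True
```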
8.2.3 Spectral Approximation

In this Section we consider a Chebyshev collocation approximation of the advection-diffusion problem (8.2.1). We will see that, although oscillations are indeed present if the advective term is dominant, a sort of maximum principle holds for the spectral solution. These results are due to Canuto (1988). For the sake of simplicity, let us deal with the same Dirichlet problem as before in the interval (−1,1), i.e.,

(8.2.15)   −ε D²u + b Du = 0 ,  −1 < x < 1 ,  u(−1) = 0 ,  u(1) = 1 .

Writing the Chebyshev collocation solution u_N in terms of its Chebyshev expansion, one can prove that its coefficients satisfy û_m > 0 for 1 ≤ m ≤ N (T_m(x) being the Chebyshev polynomials introduced in (4.3.2)). A remarkable consequence of this fact is that

(8.2.19)   u_N(x) = Σ_{m=0}^N û_m T_m(x) ≤ û₀ + Σ_{m=1}^N û_m |T_m(x)|
           ≤ Σ_{m=0}^N û_m = Σ_{m=0}^N û_m T_m(1) = u_N(1) = 1

for any x ∈ [−1,1]; thus u_N(x) is uniformly bounded from above by the boundary data. A more precise result can be derived if ε/b is small enough compared to 1/N², i.e., if

(8.2.20)   ε/b < C₀ N⁻²

holds for a small constant C₀ > 0. Notice that under this condition a Gibbs phenomenon indeed occurs at x = 1. In this case, if N is odd, one obtains û_m > 0 for 0 ≤ m ≤ N, and consequently, by proceeding as in the proof of (8.2.19), it follows that |u_N(x)| ≤ 1 for −1 ≤ x ≤ 1. On the other hand, if N is even, one finds û₀ < 0 and û_m > 0 for 1 ≤ m ≤ N, thus 1 − 2|û₀| ≤ u_N(x) ≤ 1 for −1 ≤ x ≤ 1. Note, however, that in this case |û₀| depends on ε, b and N (precisely, it behaves like N⁻²b/ε), hence u_N(x) is not uniformly bounded from below. In particular, when ε/b is very small compared to 1/N, this result suggests that the spectral approximation of advection-dominated problems may exhibit a strong sensitivity to the parity of N.
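The upper bound (8.2.19) can be observed numerically. The following sketch (an illustration under assumed parameter values, not from the book; the differentiation matrix is the standard Chebyshev collocation matrix) solves (8.2.15) and checks that the discrete solution never exceeds the boundary datum:

```python
import numpy as np

def cheb(N):
    """Standard Chebyshev collocation differentiation matrix on the
    Gauss-Lobatto nodes x_j = cos(pi*j/N) (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Collocation of (8.2.15): -eps u'' + b u' = 0 on (-1,1), u(-1)=0, u(1)=1
# (assumed illustrative values eps = 1/50, b = 1, N = 16).
eps, b, N = 1/50, 1.0, 16
D, x = cheb(N)
A = -eps * (D @ D) + b * D
A[0, :] = 0.0; A[0, 0] = 1.0             # node x_0 = +1: impose u = 1
A[N, :] = 0.0; A[N, N] = 1.0             # node x_N = -1: impose u = 0
rhs = np.zeros(N + 1); rhs[0] = 1.0
uN = np.linalg.solve(A, rhs)

print(uN.max() <= 1.0 + 1e-8)            # the upper bound (8.2.19) in action
```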
8.3 Stabilization Methods

We now return to the general d-dimensional problem

(8.3.1)   Lu = f  in Ω ,   u = 0  on ∂Ω ,

where d = 2, 3, L is defined in (8.1.1) and, for the sake of simplicity, Ω is a polygonal domain. The Galerkin approximation of (8.3.1) (stated in the form (8.1.7)) reads:

(8.3.2)   find u_h ∈ V_h :  a(u_h, v_h) = F(v_h)   ∀ v_h ∈ V_h ,

where V_h is a suitable finite dimensional subspace of V = H¹₀(Ω). As seen in the preceding Section, this method can fail to furnish a satisfactory approximate solution if the ellipticity constant α₀ is small with respect to ‖b‖_{L∞(Ω)} and the mesh size h.
An instance is provided in Fig. 8.3.1, where we report the Galerkin approximation based on piecewise-linear finite elements for the advection-diffusion equation

(8.3.3)   −ε Δu + div(bu) = 0  in Ω = (0,1)² ,

where ε = 10⁻³, b = (0,1) and the triangulation is made by 30 × 30 uniformly distributed nodes. It is clearly seen that severe oscillations arise across the layer.
Fig. 8.3.1. Numerical results obtained by the Galerkin method
Notice that when considering problem (8.3.3) the advective term does not play any role in the stability estimate. Therefore nothing can be controlled a priori independently of the parameter ε. We have shown before that a possible cure is to resort to a Petrov–Galerkin approximation, which, in the one-dimensional case, turns out to be equivalent to using an upwind finite difference scheme. Unfortunately, the accuracy of this method is affected by the introduction of a numerical viscosity of order O(h). In the following Sections, we review some of the methods that have been proposed for approximating the d-dimensional problem (8.3.1): at first, we present the artificial diffusion method, which is first order accurate; then we focus on some stabilization methods which are strongly consistent (see (8.3.6)) and high order accurate.
8.3.1 The Artificial Diffusion Method

The classical artificial diffusion method is a d-dimensional generalization of the upwind finite difference scheme, and introduces a numerical diffusion of order O(h) acting in all directions. The idea behind this method is easily explained: as the oscillations in the Galerkin method appear when the Péclet number Pe is large, i.e., the ellipticity constant α₀ is small compared with ‖b‖_{L∞(Ω)} and h, to avoid these difficulties one is led to consider a modified problem with a larger ellipticity constant. For instance, adding to L the operator L̃ such that

L̃z := −hQ Δz ,   Q := ‖b‖_{L∞(Ω)} ,

the ellipticity constant becomes α̃₀ := α₀ + hQ, and the Galerkin method applied to this new operator leads to an approximate solution which is no longer affected by oscillations. We notice that the new solution ũ_h satisfies:

(8.3.4)   find ũ_h ∈ V_h :  a_h(ũ_h, v_h) = F(v_h)   ∀ v_h ∈ V_h ,

where

a_h(z,v) := a(z,v) + hQ (∇z, ∇v)   ∀ z, v ∈ V ,

i.e., a generalized Galerkin problem (see Section 5.5). This procedure has a clear drawback. Since the additional term −hQ Δz is O(h), we obtain in particular that

(8.3.5)   a_h(u, v_h) − F(v_h) = hQ (∇u, ∇v_h) = O(h)   ∀ v_h ∈ V_h ,

u being the solution to (8.3.1). This means that the scheme is consistent, but with first order accuracy only. As a consequence, the artificial diffusion method cannot be better than first order accurate, no matter how large the polynomial degree k of the finite element space is. In the lowest degree case of piecewise-linear finite elements (k = 1), the above stabilization does not prevent keeping first order accuracy for the H¹-norm of the error. However, it damps the maximum norm of the error from O(h²|log h|) (see Remark 6.2.3) to O(h). The stabilizing term is therefore non-compatible with the optimal order guaranteed by the polynomial approximation. This loss of accuracy is even more dramatic when spectral methods are used.

The error estimate for the artificial diffusion method can be obtained by applying Proposition 5.5.1. Precisely, we have

‖u − ũ_h‖₁ ≤ C ( inf_{v_h ∈ V_h} ‖u − v_h‖₁ + h ‖u‖₁ ) ,

where C = C(α, γ, Q) (α and γ being respectively the coerciveness and continuity constants of the bilinear form a(·,·)). As in the one-dimensional upwind finite difference case, stability is thus achieved at the expense of accuracy. Typical results exhibit an excessively diffusive behaviour, as already shown in Fig. 8.2.1 for the one-dimensional case. In particular, in the two-dimensional case one can notice a diffusion not only along the streamlines (i.e., the lines parallel to the direction b(x)), but also in the direction perpendicular to b(x). This undesirable behavior is usually called crosswind diffusion, and it is witnessed in Fig. 8.3.2.
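In one dimension the connection between artificial diffusion and upwinding can be verified directly: augmenting the viscosity by bh/2 makes the centered scheme (8.2.8) coincide, entry by entry, with the upwind scheme (8.2.10). (Note that the method above adds the slightly larger term hQ; the factor 1/2 used in this sketch is precisely what turns centering into upwinding.) A small hypothetical check:

```python
import numpy as np

# 1D check (hypothetical parameters): centered differences applied to the
# augmented viscosity eps_a = eps + b*h/2 reproduce the upwind scheme (8.2.10)
# entry by entry.
eps, b, M = 1/50, 1.0, 15
h = 1.0 / M
eps_a = eps + b * h / 2                  # augmented (artificial) viscosity
n = M - 1

def tridiag(lo, di, up):
    return (np.diag(np.full(n, di)) + np.diag(np.full(n - 1, lo), -1)
            + np.diag(np.full(n - 1, up), 1))

# centered differences with viscosity eps_a vs. upwind with viscosity eps
A_art = tridiag(-eps_a/h**2 - b/(2*h), 2*eps_a/h**2, -eps_a/h**2 + b/(2*h))
A_up  = tridiag(-eps/h**2 - b/h,       2*eps/h**2 + b/h, -eps/h**2)

print(np.allclose(A_art, A_up))          # True: the two matrices coincide
```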
Fig. 8.3.2. Numerical results obtained by the artificial diffusion method
One could avoid it by adding to L the operator L̃ such that

L̃z := −(h/Q) div[(b·∇z) b] ,

which is not coercive in H¹₀(Ω) but satisfies

∫_Ω (L̃v) v = (h/Q) ∫_Ω (b·∇v)² ≥ 0   ∀ v ∈ H¹₀(Ω) .

This procedure, which was proposed by Raithby (1976) for finite difference schemes and by Hughes and Brooks (1979) in a finite element context, is called the streamline upwind method and generates an artificial diffusion which pollutes the approximate solution only in the direction of the streamlines.
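The nonnegativity property above has a discrete analogue, sketched below with a hypothetical one-dimensional periodic centered-difference operator (not from the book): since the centered difference matrix is skew-symmetric, the discrete streamline term is positive semidefinite (the h/Q scaling is omitted here, as it does not affect the sign):

```python
import numpy as np

# 1D periodic sketch: the centered difference matrix Dc is skew-symmetric,
# so the discrete analogue of -div[(b.grad v) b] with constant scalar b,
# namely S = -b^2 Dc^2 = (b Dc)^T (b Dc), satisfies v.S v = |b Dc v|^2 >= 0.
n, h, b = 32, 1.0 / 32, 0.7
Dc = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
Dc[0, -1], Dc[-1, 0] = -1.0, 1.0         # periodic wrap-around
Dc /= 2 * h

S = -(b ** 2) * (Dc @ Dc)                # discrete streamline diffusion term
rng = np.random.default_rng(2)
v = rng.standard_normal(n)
print(np.isclose(v @ S @ v, np.dot(b * (Dc @ v), b * (Dc @ v))))  # True
print(v @ S @ v >= 0)                    # True: positive semidefinite
```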
8.3 Stabilization Methods
However, it still introduces an additional term of first order, hence it is only O(h)-accurate.

Remark 8.3.1 Several other upwind-type finite element methods have been proposed in the literature. In the following Section we are going to present a family of stabilization methods recently proposed by different authors. For a theoretical presentation and an analysis of the computational efficiency of earlier schemes, we refer to Ikeda (1983), Tabata (1986) and the references therein. □

8.3.2 Strongly Consistent Stabilization Methods for Finite Elements

In this Section we describe some stabilization methods which don't deteriorate the inherent accuracy of the polynomial subspace. All these methods fit under the generalized Galerkin form (5.5.1), and share the desirable property that

(8.3.6)   a_h(u, v_h) = F_h(v_h)   ∀ v_h ∈ V_h,

provided u is the exact solution of problem (8.3.1). This property, which strengthens the consistency property (8.3.5), will be referred to as strong consistency, and allows the stabilized method to maintain the optimal accuracy which is inherent to the choice of the finite dimensional subspace. We will see that these methods provide non-oscillatory solutions even when the Péclet number is large; some numerical diffusion is still present, but it mainly affects the solution in the streamline direction, appreciably reducing the crosswind diffusion. Moreover, a suitable version of these methods can also be applied to different problems in order to cure different kinds of pathologies. For instance, when considering the approximation of the Stokes problem, the onset of instability is not due to large advection but rather to the presence of spurious pressure modes. The latter may arise when a suitable compatibility condition between the approximating subspaces for velocity and pressure is not fulfilled (see Sections 9.2 and 9.4).
Let us consider the operator L introduced in (8.1.1) and split it into its symmetric and skew-symmetric parts (with respect to the scalar product in L²(Ω)), i.e.,

(8.3.7)   L = L_S + L_SS ,

where

(8.3.8)   L_S z := −Σ_{i,j=1}^d D_i(a_ij^s D_j z) + (½ div b + a₀) z ,

(8.3.9)   L_SS z := −Σ_{i,j=1}^d D_i(a_ij^ss D_j z) + ½ div(bz) + ½ b·∇z ,

and

(8.3.10)   a_ij^s := (a_ij + a_ji)/2 ,   a_ij^ss := (a_ij − a_ji)/2 ,   i, j = 1, ..., d.
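The algebraic analogue of this splitting is worth keeping in mind: for a matrix standing in for a discretized operator, the symmetric part alone carries the quadratic form. A small numpy check (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((5, 5))        # stand-in for a discretized operator

L_s  = 0.5 * (L + L.T)                 # analogue of a_ij^s  = (a_ij + a_ji)/2
L_ss = 0.5 * (L - L.T)                 # analogue of a_ij^ss = (a_ij - a_ji)/2

v = rng.standard_normal(5)
assert np.allclose(L_s + L_ss, L)      # L = L_S + L_SS, cf. (8.3.7)
assert abs(v @ L_ss @ v) < 1e-12       # skew-symmetric part: (L_SS v, v) = 0
assert np.isclose(v @ L @ v, v @ L_s @ v)   # (L v, v) = (L_S v, v)
print("splitting verified")
```

The identity (L_SS v, v) = 0 is the discrete counterpart of the fact, used repeatedly below, that only the symmetric part contributes to coerciveness.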
Then we introduce a regular family of triangulations T_h of Ω̄ (see Definition 3.4.1) and we consider a finite dimensional subspace V_h ⊂ V = H₀¹(Ω) which is made of piecewise-smooth functions. As an example, one can choose V_h = X_h^k ∩ H₀¹(Ω), k ≥ 1, where X_h^k is the finite element space defined in (3.2.4) or (3.2.5). In this Chapter we deal with the advection-dominated case, i.e., we assume that for each K ∈ T_h the local Péclet function Pe_K(x) is larger than 1, i.e.,

(8.3.11)   Pe_K(x) := |b(x)| h_K / (2 α₀(x)) > 1   ∀ x ∈ K ,

where α₀(x) is the ellipticity function of the matrix (a_ij(x)) (see Definition 6.1.1), i.e.,

(8.3.12)   α₀(x) := inf_{ξ∈ℝ^d, |ξ|=1} Σ_{i,j=1}^d a_ij(x) ξ_i ξ_j .

Here, and in the sequel, the coefficients a_ij(x) and b(x) are supposed to be continuous in Ω̄. Notice that from the ellipticity assumption (see Definition 6.1.1) it follows α₀(x) ≥ α₀ > 0 for every x ∈ Ω̄. Moreover, (8.3.11) implies, in particular, that inf_Ω |b(x)| > 0. We are thus in a position to introduce the stabilized approximation problem to (8.3.1). Define
𝓛_h^(ρ)(z_h, v_h) := Σ_{K∈T_h} δ (L z_h, (h_K/|b|)(L_SS + ρ L_S) v_h)_K ,
φ_h^(ρ)(v_h) := Σ_{K∈T_h} δ (f, (h_K/|b|)(L_SS + ρ L_S) v_h)_K ,

where z_h, v_h ∈ V_h, (·,·)_K denotes the L²(K)-scalar product, and the parameters δ > 0 and ρ ∈ ℝ have to be chosen. Then we consider the problem:

(8.3.13)   find u_h ∈ V_h : a(u_h, v_h) + 𝓛_h^(ρ)(u_h, v_h) = (f, v_h) + φ_h^(ρ)(v_h)   ∀ v_h ∈ V_h .
We further denote by a_h^(ρ)(·,·) the stabilized bilinear form above, i.e.,

(8.3.14)   a_h^(ρ)(w, v) := a(w, v) + 𝓛_h^(ρ)(w, v) ,

and by F_h^(ρ)(·) the perturbed continuous linear functional F_h^(ρ)(v) := (f, v) + φ_h^(ρ)(v).

A first important remark is in order. Since the exact solution u satisfies Lu = f ∈ L²(Ω), a_h^(ρ)(u, v_h) is defined for each v_h ∈ V_h. Moreover, strong consistency (in the sense of (8.3.6)) holds for problem (8.3.13) for any choice of δ and ρ. The problem now is to find suitable values of these parameters in such a way that also stability and convergence are achieved, with respect to a suitable norm. In Section 8.4 we will analyze the stabilized problem (8.3.13) for three different choices of the parameter ρ. These methods have been proposed by different researchers; however, following Baiocchi and Brezzi (1992), we have preferred to present them in a unified way. We consider: ρ = 0, which corresponds to the SUPG (Streamline Upwind/Petrov-Galerkin) method proposed in Hughes and Brooks (1982) and Brooks and Hughes (1982) (this method has also been called the Streamline Diffusion method, see, e.g., Johnson (1987)); ρ = 1, which gives the GALS (Galerkin/Least-Squares) method introduced by Hughes, Franca and Hulbert (1989); ρ = −1, which leads to the method proposed by Franca, Frey and Hughes (1992), extending to advection-diffusion equations an idea used by Douglas and Wang (1989) in the approximation of the Stokes problem. We call this last choice the DWG (Douglas-Wang/Galerkin) method. Let us notice however that the three cases above do coincide when ½ div b + a₀ = 0 and piecewise-linear polynomials are considered as test functions.

Remark 8.3.2 As we said above, the acronym SUPG stands for Streamline Upwind/Petrov-Galerkin. However, as pointed out by Hughes (1987), this name doesn't seem to be really appropriate. Indeed, SUPG is a special case of (8.3.13), which is a generalized Galerkin method. □

Remark 8.3.3 The diffusion-dominated case, i.e., Pe_K(x) ≤ 1 for each x ∈ K and for each K ∈ T_h, can be satisfactorily treated by the standard Galerkin method.
Nonetheless, we can also use the stabilization approach defined in (8.3.13), provided that the stabilization parameter δ is chosen as a function of Pe_K, namely δ = O(Pe_K) (see also Remark 8.4.2). □
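To fix ideas, here is a minimal one-dimensional realization of (8.3.13) with ρ = 0 (SUPG) on piecewise-linear elements: there L_S u_h vanishes elementwise, so the stabilizing term reduces to Σ_K δ (b u_h′, (h/|b|) b v_h′)_K. This is our own sketch under simplifying assumptions (constant b and f, uniform mesh); the function name is made up:

```python
import numpy as np

def supg_1d(eps, b, f, n, delta=0.5):
    """P1 finite elements with SUPG (rho = 0) for -eps*u'' + b*u' = f on (0,1),
    u(0) = u(1) = 0.  On linear elements the stabilizing term amounts to the
    extra elementwise diffusion tau*b^2 with tau = delta*h/|b| (a sketch of
    (8.3.13), not the book's own code)."""
    h = 1.0 / n
    tau = delta * h / abs(b)
    m = n - 1
    A = np.zeros((m, m)); rhs = np.zeros(m)
    kd = (eps + tau * b * b) / h            # diffusion + streamline stabilization
    for e in range(n):                      # element e joins nodes e and e+1
        idx = [e - 1, e]                    # interior-unknown indices of those nodes
        ke = np.array([[kd, -kd], [-kd, kd]]) + (b / 2) * np.array([[-1, 1], [-1, 1]])
        fe = f * h / 2 + f * tau * b * np.array([-1.0, 1.0])   # (f, v + tau*b*v')_K
        for a_ in range(2):
            i = idx[a_]
            if i < 0 or i >= m: continue
            rhs[i] += fe[a_]
            for b_ in range(2):
                j = idx[b_]
                if 0 <= j < m: A[i, j] += ke[a_, b_]
    return np.concatenate(([0.0], np.linalg.solve(A, rhs), [0.0]))

u_supg = supg_1d(eps=1e-4, b=1.0, f=1.0, n=32)            # mesh Peclet ~ 156
u_gal  = supg_1d(eps=1e-4, b=1.0, f=1.0, n=32, delta=0.0) # delta = 0: plain Galerkin
print(u_supg.min(), u_supg.max())   # stays within physical bounds
print(u_gal.max())                  # large spurious overshoot without stabilization
```

With δ = 0 the scheme reduces to the unstable centered Galerkin method; with δ = 0.5 the added streamline diffusion suppresses the oscillations while keeping the formulation strongly consistent.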
We conclude this Section by showing some numerical results. First we consider an approximation based on the stabilized GALS method, using piecewise-quadratic finite elements for the advection-diffusion equation (8.3.3) (with ε = 10⁻⁵). The triangulation is made of 30 × 30 uniformly distributed nodes. Different choices of the advective field b are made, and different boundary values for u are considered. From case to case, the values of
b and of the boundary data can be directly deduced from the illustrations we are going to show. Fig. 8.3.3 (a) and (b) show that numerical dissipation can only be added in the direction of the streamlines: there is no dissipation at all in (a), while case (b) is diffusive, since the streamlines cross the discontinuity line. We also notice the presence of some under- and overshooting in the latter case, a situation that often arises for these stabilized methods (see also Fig. 8.3.3 (c)). The test problem referred to in Fig. 8.3.3 (c) and (d) is virtually the same. However, (d) shows that there might be a severe grid-orientation effect that induces numerical diffusion even when b is parallel to the discontinuity line. The non-monotonicity of the scheme, which is highlighted by the under- and overshooting phenomena near the layer, can be cured at the expense of introducing a further stabilizing term in (8.3.13). This latter term is a shock-capturing nonlinear viscosity term (see Hughes, Mallet and Mizukami (1986), Galeão and Dutra do Carmo (1988), Shakib (1988)).
Fig. 8.3.3 (a). Numerical results obtained by the GALS method
Fig. 8.3.3 (b). Numerical results obtained by the GALS method
8.3.3 Stabilization by Bubble Functions
Before analyzing stability and convergence of the methods presented in the previous Section, we want to propose a different interpretation of them, due originally to Brezzi, Bristeau, Franca, Mallet and Rogé (1992). Further remarks and improvements can be found in Baiocchi, Brezzi and Franca (1993). For a similar approach in a different context, see also Remark 9.4.2. Let us restrict ourselves, for simplicity, to the two-dimensional case, i.e., d = 2. We start by introducing the finite dimensional space

(8.3.15)   V_h^b := (X_h^k ∩ H₀¹(Ω)) ⊕ B ,

where X_h^k is defined in (3.2.4), and B is a (finite dimensional) space of bubble functions having exactly one degree of freedom in each element K ∈ T_h. For instance, we can choose the cubic bubble functions, namely

(8.3.16)   b_K := λ₁ λ₂ λ₃   on each K ∈ T_h (so that B = span{b_K : K ∈ T_h}),

where λ_i, i = 1, 2, 3, are linear polynomials, each one vanishing on one side of K and taking value 1 at the opposite vertex. The function b_K, which is an example of a bubble function, is thus positive in the interior of K and vanishes on ∂K. Now consider the Galerkin approximation of (8.3.1) in V_h^b:
Fig. 8.3.3 (c). Numerical results obtained by the GALS method
(8.3.17)   find (u_h + u_B) ∈ V_h^b : a(u_h + u_B, v_h + v_B) = (f, v_h + v_B)   ∀ (v_h + v_B) ∈ V_h^b .
We want to reframe (8.3.17) as a stabilized Galerkin method in X_h^k ∩ H₀¹(Ω), solving with respect to u_B. First of all, we can write u_B on each K as

(8.3.18)   u_B|_K = c_{b,K} b_K .

Choosing as test function in (8.3.17) v_h = 0 and v_B ∈ B such that

v_B = b_K in K ,   v_B = 0 on Ω \ K ,

we find a(u_h + u_B, v_B) = a_K(u_h + c_{b,K} b_K, b_K), having defined by a_K(·,·) the restriction of a(·,·) to the triangle K. Thus (8.3.17) becomes

a_K(u_h, b_K) + c_{b,K} a_K(b_K, b_K) = (f, b_K)_K   ∀ K ∈ T_h ,

and, since u_h|_K ∈ X_h^k and b_K vanishes on ∂K, we can integrate by parts in the first term, obtaining

(8.3.19)   c_{b,K} = (f − L u_h, b_K)_K / a_K(b_K, b_K) .
Fig. 8.3.3 (d). Numerical results obtained by the GALS method

We now choose as test function in (8.3.17) any v_h ∈ X_h^k ∩ H₀¹(Ω) and v_B = 0, finding

a(u_h, v_h) + Σ_{K∈T_h} c_{b,K} a_K(b_K, v_h) = (f, v_h) .
After we integrate by parts, the term a_K(b_K, v_h) produces

a_K(b_K, v_h) = (b_K, L_S v_h − L_SS v_h)_K

(see (8.3.8) and (8.3.9) for the notation), and similarly

a_K(b_K, b_K) = (L_S b_K, b_K)_K .

Using (8.3.19) we can finally write (8.3.17) as

(8.3.20)   a(u_h, v_h) + a_B(f; u_h, v_h) = (f, v_h)   ∀ v_h ∈ X_h^k ∩ H₀¹(Ω) ,

where

(8.3.21)   a_B(f; u_h, v_h) := Σ_{K∈T_h} [(L u_h − f, b_K)_K / (L_S b_K, b_K)_K] (b_K, L_SS v_h − L_S v_h)_K .

This finite dimensional approximation to (8.3.1) is strongly consistent, and differs from the usual Galerkin one due to the presence of the additional term a_B(f; u_h, v_h). Moreover, it resembles the DWG method introduced in (8.3.13) for ρ = −1. In fact, there the additional term takes the form
(8.3.22)   𝓛_h^(−1)(u_h, v_h) − φ_h^(−1)(v_h) = Σ_{K∈T_h} δ (L u_h − f, (h_K/|b|)(L_SS − L_S) v_h)_K .

We now want to show that, at least in a simplified case, (8.3.21) can indeed be presented in the form (8.3.22). Consider piecewise-linear finite elements (i.e., k = 1 in (3.2.4)) and suppose that b(x) is a constant vector, f(x) is constant, a₀(x) = 0 and a_ij(x) = ε δ_ij, where ε > 0 is a constant and δ_ij is the Kronecker symbol. In such a case we have

L_S z = −ε Δz ,   L_SS z = b·∇z ,
and the SUPG, GALS and DWG methods are all coincident. Choose, moreover, the space B of bubble functions as in (8.3.16). Note that the bubble function b_K satisfies

(8.3.23)   ∫_K b_K = C_{1,K} h_K² ,   ‖∇b_K‖²_{0,K} = C_{2,K} ,

where the constants C_{1,K} and C_{2,K} are independent of h_K. Begin by computing (L u_h − f, b_K)_K. Recalling that u_h|_K ∈ P₁, from (8.3.23) we have

(L u_h − f, b_K)_K = (b·∇u_h − f, b_K)_K = (b·∇u_h|_K − f) ∫_K b_K = (b·∇u_h|_K − f) C_{1,K} h_K² .

Analogously,

(L_SS v_h − L_S v_h, b_K)_K = (b·∇v_h, b_K)_K = b·∇v_h|_K C_{1,K} h_K² .

Finally, from (8.3.23) it holds

(L_S b_K, b_K)_K = −ε (Δb_K, b_K)_K = ε ‖∇b_K‖²_{0,K} = ε C_{2,K} .

Therefore, we can write

a_B(f; u_h, v_h) = Σ_{K∈T_h} (C_{1,K}² h_K⁴ / (ε C_{2,K})) (b·∇u_h|_K − f) (b·∇v_h|_K) .

From definition (8.3.11) we can finally obtain

(8.3.24)   a_B(f; u_h, v_h) = Σ_{K∈T_h} δ_K (L u_h − f, (h_K/|b|)(L_SS v_h − L_S v_h))_K ,   δ_K := 2 C_{1,K}² h_K² Pe_K / (C_{2,K} meas(K)) .
Comparing this with (8.3.13), we find that the additional term stemming from the presence of the bubble functions is nothing but the stabilizing term of the DWG method, for a suitable choice of the constant δ = δ_K on each element K ∈ T_h. (Recall again that, under our assumptions, we have L_S u_h = L_S v_h = 0; hence the SUPG, GALS and DWG methods coincide.) Indeed, (8.3.24) is not yet satisfactory for the approximation of an advection-dominated problem, as the constant δ_K in (8.3.24) is proportional to the local Péclet number Pe_K. However, by proceeding in a slightly different way, one could choose other types of bubble functions in order to recover (8.3.13) with a parameter δ_K which is independent of Pe_K (see Brezzi, Bristeau, Franca, Mallet and Rogé (1992)).

8.3.4 Stabilization Methods for Spectral Approximation
In this Section we consider two examples of stabilization methods for spectral collocation approximations. The former shows that a standard spectral collocation scheme can be stabilized by adding extra trial/test functions with local support (typically, bubble functions). The latter modifies the usual Galerkin approach by introducing an additional term as in the SUPG, GALS or DWG methods, adapting the procedures presented in Section 8.3.2 to the spectral collocation case. Let us focus, for simplicity, on the two-dimensional homogeneous Dirichlet problem

(8.3.25)   Lu := −ε Δu + div(bu) + a₀ u = f in Ω ,   u = 0 on ∂Ω ,
where f ∈ L²(Ω) and Ω = (−1, 1)². The bilinear form a(·,·) associated with (8.3.25) is defined in (8.1.2) (with a_ij = ε δ_ij), and is assumed to be continuous and coercive, i.e., to satisfy (8.1.3) and (8.1.4). The associated variational problem reads:

(8.3.26)   find u ∈ H₀¹(Ω) : a(u, v) = (f, v)   ∀ v ∈ H₀¹(Ω) .
For each integer N ≥ 2, the Legendre Gauss-Lobatto nodes x_ij = (x_i, x_j), i, j = 0, ..., N, induce a triangulation T_h on Ω̄, given by the N² rectangular cells K_ij := I_i × I_j whose vertices are four neighbouring nodes of the Gauss-Lobatto grid. Here we have set I_i := [x_{i+1}, x_i]; recall that the Legendre nodes are usually ordered from right to left, see Section 4.4.2. We have moreover denoted by w_ij the weights of the Legendre Gauss-Lobatto quadrature formula (see (4.5.38)). Let V_N := Q_N⁰ be the space of polynomials of degree less than or equal to N in each variable x and y, vanishing on ∂Ω. Similarly, let V_h be the finite element space X_h¹ ∩ H₀¹(Ω), where X_h¹ is defined in (3.2.5). The space of bubble functions is denoted by
(8.3.27)   B := {v_B ∈ H₀¹(Ω) | v_B|_{K_ij} = c_ij b_ij , c_ij ∈ ℝ , i, j = 0, ..., N−1} ,

where, for each i, j, b_ij is a uniquely determined function vanishing at the boundary of K_ij. The easiest example is provided by b_ij(x, y) = p_i(x) p_j(y), p_i and p_j being two parabolas vanishing at the endpoints of the intervals I_i and I_j, respectively, and taking value 1 at the midpoints. A more useful choice is the one outlined in (8.3.33) below. Following Canuto (1994) we define

(8.3.28)   V_N^b := V_N ⊕ B ,
and we introduce the Galerkin approximation to (8.3.26) in V_N^b:

(8.3.29)   find u_N^b ∈ V_N^b : a(u_N^b, v_N^b) = (f, v_N^b)   ∀ v_N^b ∈ V_N^b .

This problem clearly has a unique solution, which satisfies

(8.3.30)   ‖u_N^b‖₁ ≤ (1/α) ‖f‖₀   ∀ N ≥ 1 ,

(8.3.31)   ‖u − u_N^b‖₁ ≤ (γ/α) inf_{v_N^b ∈ V_N^b} ‖u − v_N^b‖₁ ≤ (γ/α) inf_{v_N ∈ V_N} ‖u − v_N‖₁ ,

as V_N ⊂ V_N^b. This shows that the accuracy of this Galerkin approximation is at least as good as that of the standard Galerkin method in V_N. However, we have already noticed that the spectral Galerkin method is difficult to implement (see Section 6.2.1). Therefore, we are more interested in introducing a spectral collocation approximation to (8.3.26). For this purpose, let us assume that f ∈ C⁰(Ω̄), and for simplicity let b be a constant vector and a₀ = a₀(x) ≥ 0. The following spectral collocation method with bubble correction is proposed in Canuto (1994): find u_N^b = u_N + u_b ∈ V_N^b such that
(8.3.32)   (L u_N, v_N)_N + a(u_b, π_h¹(v_N)) = (f, v_N)_N   ∀ v_N ∈ V_N ,
           (π_h¹(L u_N), v_b) + a(u_b, v_b) = (π_h¹(f), v_b)   ∀ v_b ∈ B .

Here (·,·)_N is the discrete scalar product introduced in (4.5.39), and π_h¹ : C⁰(Ω̄) → V_h is the finite element interpolation operator defined in (3.4.1). The derivation of (8.3.32) can be motivated as follows: split (8.3.29) into two sets of equations (one for u_N with the test function v_N, the other for u_b with the test function v_b), according to the decomposition (8.3.28). Then, replace the bilinear form a(u_N, v_N) and the term (f, v_N) by their discrete counterparts (involving (·,·)_N instead of (·,·)). As for the remaining terms, the algebraic polynomials, as well as f, are approximated by their finite element interpolants. The introduction of these interpolation corrections aims at obtaining a sparser global matrix. In fact, the block a(u_b, π_h¹(v_N)) is banded; the
other block (π_h¹(L u_N), v_b) is still full, since L u_N depends on all the grid-values of u_N. However, it depends on the grid-values of L u_N in a banded way, since π_h¹ is a local operator. It follows that the bubble correction u_b can be expressed element-by-element in terms of the neighbouring grid-values of L u_N, using (8.3.32)₂, and thus eliminated from (8.3.32)₁. It remains to indicate how to construct the space of bubble functions B. Let b*(x) be a one-dimensional bubble function on the interval (0,1); by this we mean any non-negative function of H₀¹(0,1) (not identically vanishing). Then, on each rectangular cell K_ij = (x_{i+1}, x_i) × (x_{j+1}, x_j) let us set

(8.3.33)   b_ij(x, y) := b*((x − x_{i+1})/(x_i − x_{i+1})) b*((y − x_{j+1})/(x_j − x_{j+1})) .

The space B is now defined according to (8.3.27). Setting

c* := ‖b*‖₀ / ‖Db*‖₀ ,

problem (8.3.32) has a unique solution provided c* is small enough (independently of N). In that case, one also has

‖u_N^b‖₁ ≤ C ‖f‖_{C⁰(Ω̄)}

and

‖u − u_N^b‖₁ ≤ C (N^{1−s} ‖u‖_s + N^{−r} ‖f‖_r) ,

assuming that the exact solution u belongs to H^s(Ω) and f belongs to H^r(Ω) (s ≥ 1, r ≥ 2).
A different approach, more germane to those introduced in Section 8.3.2 for finite elements, has been proposed by Pasquarelli and Quarteroni (1994). The approximate problem now reads: find u_N ∈ V_N such that

(8.3.34)   (L_N u_N, v_N + δ(L_{N,SS} + ρ L_{N,S}) v_N)_N = (f, v_N + δ(L_{N,SS} + ρ L_{N,S}) v_N)_N   ∀ v_N ∈ V_N ,

where δ > 0 is a parameter, and ρ may take the value ρ = 0 (SUPG method), ρ = 1 (GALS method) or ρ = −1 (DWG method). Here, L_N denotes the following pseudo-spectral approximation of the operator L:

(8.3.35)   L_N z := −ε Δz + ½ [div I_N(bz) + b·∇z] + (½ div b + a₀) z .

(Clearly, L_N z = Lz if b is constant and z ∈ Q_N.) Moreover, L_{N,S} and L_{N,SS} are its symmetric and skew-symmetric parts with respect to the scalar product (·,·)_N, respectively, i.e.,
(8.3.36)   L_{N,S} z := −ε Δz + (½ div b + a₀) z ,

(8.3.37)   L_{N,SS} z := ½ [div I_N(bz) + b·∇z] .
As before, let us denote by x_ij = (x_i, x_j) and w_ij = w_i w_j, i, j = 0, ..., N, the Legendre Gauss-Lobatto nodes and weights, respectively. The collocation interpretation of (8.3.34) reads:

(8.3.38)   L_N u_N(x_ij) + δ Σ_{l,k=0}^N L_N u_N(x_lk) [(L_{N,SS} + ρ L_{N,S}) ψ_ij](x_lk) w_lk / w_ij
           = f(x_ij) + δ Σ_{l,k=0}^N f(x_lk) [(L_{N,SS} + ρ L_{N,S}) ψ_ij](x_lk) w_lk / w_ij   ∀ i, j such that x_ij ∈ Ω ,
           u_N(x_ij) = 0   ∀ i, j such that x_ij ∈ ∂Ω ,

where ψ_ij ∈ Q_N is the Lagrangian basis function associated with the internal node x_ij. For all i, j, the function (L_{N,SS} + ρ L_{N,S}) ψ_ij is a polynomial that is different from 0 at all collocation points. This entails that all equations (except those at the boundary) depend on the values of the polynomial L_N u_N at all collocation nodes. In particular, one obtains a full matrix instead of the usual block matrix (see Sections 6.3.1 and 6.3.3). With regard to the parameter δ, it can either vary at each collocation node or be taken constant. In the latter case, a suitable scaling is

δ = c / (ε N⁴) ,

where c > 0 is a constant to be fixed independently of ε and N.
8.4 Analysis of Strongly Consistent Stabilization Methods

In this Section we analyze the stabilization methods defined in (8.3.13). Although they can be regarded as generalized Galerkin methods, they do not fall into the abstract framework of Proposition 5.5.1, as the associated bilinear form a_h^(ρ)(·,·) does not satisfy the continuity requirement (5.5.5). On the other hand, the results proven in Theorem 5.5.1 do not imply convergence in the present situation.
For the sake of simplicity, in the sequel we assume that there exist constants μ₀ and μ₁ such that:

(8.4.1)   0 < μ₀ ≤ μ(x) := ½ div b(x) + a₀(x) ≤ μ₁ ,   x ∈ Ω̄ .

Let us notice however that this condition can be relaxed if the flow field b(x) has no closed streamlines. In that case a weaker condition suffices, and it disappears in the limit a_ij → 0 (see, e.g., Johnson, Nävert and Pitkäranta (1984)). Assume moreover that the coefficients a_ij take the simple form

a_ij = ε δ_ij ,   i, j = 1, ..., d ,   x ∈ Ω̄ ,

ε > 0 being a constant. Consequently,

L_S z = −ε Δz + μz ,   L_SS z = ½ div(bz) + ½ b·∇z ,

and we find at once that

(8.4.2)   a(v_h, v_h) = ε ‖∇v_h‖²_{0,Ω} + ‖μ^{1/2} v_h‖²_{0,Ω}   ∀ v_h ∈ V_h .

Now, let us start with the stability analysis for the SUPG method. First of all, we want an estimate of a_h^(0)(v_h, v_h) from below.

Proposition 8.4.1 Assume that we are dealing with the advection-dominated case, i.e.,

(8.4.3)   Pe_K(x) = |b(x)| h_K / (2ε) > 1   ∀ x ∈ K , ∀ K ∈ T_h .
If the parameter δ and the mesh-size h_K satisfy

(8.4.4)   0 < δ ≤ min{1, C₀⁻²} ,   h_K ≤ |b(x)| / (2 μ(x))   ∀ x ∈ K ,

where C₀ is the constant appearing in the inverse inequality (8.4.7) below, then the bilinear form a_h^(0)(·,·) associated to the SUPG method satisfies the coerciveness inequality

(8.4.5)   a_h^(0)(v_h, v_h) ≥ (ε/2) ‖∇v_h‖²_{0,Ω} + ½ ‖μ^{1/2} v_h‖²_{0,Ω} + ½ Σ_{K∈T_h} δ ((h_K/|b|) L_SS v_h, L_SS v_h)_K   ∀ v_h ∈ V_h .
Proof. Since (8.4.2) is satisfied, we concentrate our attention on the stabilizing term, which can be rewritten as

(8.4.6)   𝓛_h^(0)(v_h, v_h) = Σ_{K∈T_h} δ (−ε Δv_h, (h_K/|b|) L_SS v_h)_K + Σ_{K∈T_h} δ (μ v_h, (h_K/|b|) L_SS v_h)_K + Σ_{K∈T_h} δ (L_SS v_h, (h_K/|b|) L_SS v_h)_K .

Consider the first term. The following inverse inequality holds (it can be proven by proceeding as in Proposition 6.3.2):

(8.4.7)   ‖Δv_h‖_{0,K} ≤ C₀ h_K⁻¹ ‖∇v_h‖_{0,K}   ∀ v_h ∈ V_h .

Furthermore, from (8.4.3) it follows that

(8.4.8)   ε ≤ ½ |b(x)| h_K   ∀ x ∈ K .

Therefore,

(8.4.9)   |Σ_{K∈T_h} δ (ε Δv_h, (h_K/|b|) L_SS v_h)_K| ≤ (C₀² δ ε / 2) ‖∇v_h‖²_{0,Ω} + ¼ Σ_{K∈T_h} δ ((h_K/|b|) L_SS v_h, L_SS v_h)_K .

As for the second term in (8.4.6), using (8.4.1) it can be estimated as follows:

(8.4.10)   |Σ_{K∈T_h} δ (μ v_h, (h_K/|b|) L_SS v_h)_K| ≤ ¼ Σ_{K∈T_h} δ ((h_K/|b|) L_SS v_h, L_SS v_h)_K + Σ_{K∈T_h} δ ((h_K/|b|) μ² v_h, v_h)_K .

From (8.4.2), (8.4.6), (8.4.9) and (8.4.10) we can conclude that
(8.4.11)   a_h^(0)(v_h, v_h) ≥ ε (1 − C₀² δ / 2) ‖∇v_h‖²_{0,Ω} + ½ Σ_{K∈T_h} δ ((h_K/|b|) L_SS v_h, L_SS v_h)_K + Σ_{K∈T_h} (μ [1 − δ (h_K/|b|) μ] v_h, v_h)_K .
If we now choose δ and h_K as indicated in (8.4.4), the stability inequality (8.4.5) follows. □

Notice that the condition (8.4.4) on h_K is actually not restrictive, as we are considering advection-dominated problems, for which |b(x)| is large enough. From (8.4.5) we can deduce the stability result for the solution to problem (8.3.13). In fact, let us define
(8.4.12)   ‖v_h‖²_SUPG := ε ‖∇v_h‖²_{0,Ω} + ‖μ^{1/2} v_h‖²_{0,Ω} + Σ_{K∈T_h} δ ((h_K/|b|) L_SS v_h, L_SS v_h)_K .
If u_h is a solution to (8.3.13) with ρ = 0 we find

½ ‖u_h‖²_SUPG ≤ a_h^(0)(u_h, u_h) = (f, u_h) + φ_h^(0)(u_h) .

We conclude that there exists C, independent of h, such that

(8.4.13)   ‖u_h‖_SUPG ≤ C ‖f‖_{0,Ω} .

Let us further stress that this stability estimate improves the one that can be obtained for the Galerkin method, owing to the presence at the left hand side of the additional term

Σ_{K∈T_h} δ ((h_K/|b|) L_SS u_h, L_SS u_h)_K .
In particular, this provides a control of the derivative of u_h in the streamline direction b(x). We now consider the GALS method. From (8.4.2) we easily obtain:

Proposition 8.4.2 For any δ > 0, the bilinear form a_h^(1)(·,·) associated to the GALS method satisfies

(8.4.14)   a_h^(1)(v_h, v_h) = ε ‖∇v_h‖²_{0,Ω} + ‖μ^{1/2} v_h‖²_{0,Ω} + Σ_{K∈T_h} δ ((h_K/|b|) L v_h, L v_h)_K =: ‖v_h‖²_GALS   ∀ v_h ∈ V_h .
This property of coerciveness leads to a stability estimate. In fact, if u_h is a solution to (8.3.13) with ρ = 1 we have

‖u_h‖²_GALS = a_h^(1)(u_h, u_h) = (f, u_h) + Σ_{K∈T_h} δ (f, (h_K/|b|) L u_h)_K
            ≤ ‖μ^{−1/2} f‖²_{0,Ω} + Σ_{K∈T_h} δ ((h_K/|b|) f, f)_K + ½ ‖u_h‖²_GALS ,

hence

(8.4.15)   ‖u_h‖_GALS ≤ C ‖f‖_{0,Ω} .
Notice that (8.4.15) holds for any choice of the parameter δ > 0. For the sake of completeness, we also present the proof of the stability for the DWG method.

Proposition 8.4.3 Assume that (8.4.3) holds, and that the parameter δ and the mesh-size h_K satisfy
Assume that there exist γ > 0 and δ > 0 such that

(9.2.7)   |a(w, v)| ≤ γ ‖w‖ ‖v‖   ∀ w, v ∈ V ,

i.e., the bilinear form a(·,·) is continuous over V × V, and

(9.2.8)   |(q, div v)| ≤ δ ‖v‖ ‖q‖₀   ∀ v ∈ V , q ∈ Q ,

which in turn expresses the continuity of the bilinear form b(·,·) over V × Q. Moreover, let us assume that the spaces V_h and Q_h enjoy the following property: there exists β > 0 such that

(9.2.9)   inf_{q_h∈Q_h, q_h≠0} sup_{v_h∈V_h, v_h≠0} (q_h, div v_h) / (‖v_h‖ ‖q_h‖₀) ≥ β .
9. The Stokes Problem
This is called the compatibility (or inf-sup, or Ladyzhenskaya-Babuška-Brezzi) condition. Under these assumptions, Theorem 7.4.2 yields existence and uniqueness for the solution to (9.2.4). Further, this solution satisfies the stability estimates (9.2.10). These estimates state that the solution is stable if β is independent of h. When the latter condition is not true or, even worse, (9.2.9) doesn't hold, the approximation (9.2.4) is said to be unstable. The abstract convergence result given by Theorem 7.4.3 yields in the present case the error estimates (9.2.11) and (9.2.12). Because

inf_{v_h∈Z_h} ‖u − v_h‖ ≤ (1 + δ/β) inf_{v_h∈V_h} ‖u − v_h‖ ,

due to (7.4.26), the convergence estimate is optimal provided β is independent of h.

Remark 9.2.1 (Nonconforming approximation). With the aim of removing the difficulty of constructing subspaces of V_div,h, we could consider, instead of (9.2.1), the problem

(9.2.13)   find u_h ∈ Z_h : a(u_h, v_h) = (f, v_h)   ∀ v_h ∈ Z_h

(see (9.2.5)). Since, generally, Z_h is not a subspace of V_div (as a matter of fact, the elements of Z_h are not necessarily divergence-free), (9.2.13) would provide a nonconforming approximation to (9.1.6). The main interest of using a nonconforming finite dimensional space Z_h is that it might be simpler to construct a set of basis functions for it. Some examples are provided in Hecht (1981). A proof of convergence can still be given (as indicated in Remark 5.5.1), if we assume that the consistency property (9.2.2) holds with Z_h in place of V_div,h. □
9.2 Galerkin Approximation
9.2.1 Algebraic Form of the Stokes Problem

Let us begin with problem (9.2.1). For any fixed h, let N_h⁰ denote the dimension of the subspace V_div,h and let {φ_j⁰ | j = 1, ..., N_h⁰} be a basis for V_div,h. Setting

u_h(x) = Σ_{j=1}^{N_h⁰} ξ_j φ_j⁰(x) ,

the linear system associated with (9.2.1) becomes

A⁰ ξ = f ,   with A⁰_ij := a(φ_j⁰, φ_i⁰) ,   f_i := (f, φ_i⁰) .

The matrix A⁰ is symmetric and positive definite. A less simple situation occurs for problem (9.2.4). Now we denote by N_h and K_h the dimensions of V_h and Q_h, respectively, and by {φ_j | j = 1, ..., N_h} and {ψ_l | l = 1, ..., K_h} the corresponding bases. Writing u_h = Σ_j ξ_j φ_j and p_h = Σ_l η_l ψ_l, problem (9.2.4) leads to the block linear system

A ξ + Bᵀ η = f ,   B ξ = 0 ,   with A_ij := a(φ_j, φ_i) ,   B_lj := b(φ_j, ψ_l) ,

whose matrix is symmetric but indefinite. Discrete pressures q_h ∈ Q_h satisfying b(v_h, q_h) = 0 for all v_h ∈ V_h (i.e., vectors η with Bᵀη = 0) are called spurious pressure modes: they are invisible to the discrete momentum equation, so the computed pressure is determined only up to such modes.

A critical question can be raised now: what happens to the numerical scheme if the finite dimensional spaces don't satisfy the compatibility condition? We deduce from the previous analysis that the velocity field can be obtained in a stable and convergent way, as both (9.2.11) and the first inequality in (9.2.10) do not depend on the constant β. As a matter of fact, only the pressure field can be contaminated by the spurious modes. When a good discrete pressure p_h is also desired, the obvious remedy is to operate
a choice of Q_h and V_h that satisfies (9.2.9). If this is not the case, however, several kinds of remedies can be introduced. A canonical approach is to operate a stabilization of the finite dimensional system (9.2.4), which amounts to relaxing the incompressibility constraint on u_h. This can be accomplished in various ways that can be rigorously motivated by a thorough mathematical analysis. This issue is addressed in Section 9.4. Another approach consists of filtering the spurious modes out of the computed pressure. This is typically accomplished within an iterative procedure after each step, and requires a priori knowledge of the structure of the spurious mode vector space. We refer for that to Section 9.5. Finally, other approaches are also workable within specific frameworks of approximation. We mention, for instance, the use of macroelements in the framework of finite element approximations. Such a technique is illustrated in Section 9.3.

9.2.3 Divergence-Free Property and Locking Phenomena
In general, the discrete continuity condition (9.2.22) doesn't necessarily imply that

(9.2.23)   div u_h = 0 .

As a matter of fact, (9.2.22) simply entails that u_h ∈ Z_h (see (9.2.5)), but this is not sufficient to conclude with (9.2.23), as we have pointed out in Remark 9.2.1. However, (9.2.23) holds if

(9.2.24)   div V_h ⊂ Q_h .

In fact, (9.2.23) follows at once by taking q_h = div u_h in (9.2.22). In the finite element approximations of the Stokes problem the property (9.2.24) is almost never verified. The richer Q_h is, the more likely (9.2.24) will hold true. However, going in this direction may conflict with the fulfillment of the compatibility condition (9.2.9). In this respect, an extreme situation may occur when Q_h is exceedingly large compared with V_h, so that the condition (9.2.22) overconstrains u_h to such a level that not only (9.2.23) holds, but even

u_h = 0 .

This characteristic behaviour is known as the locking phenomenon for the velocity field. We will encounter an example in Section 9.3.1.
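In algebraic terms (cf. Section 9.2.1), both spurious pressure modes and instability can be read off from the velocity-pressure coupling matrix B: pressures in ker Bᵀ do not influence the discrete momentum equation at all. A toy illustration with a made-up, rank-deficient B (not an actual finite element assembly):

```python
import numpy as np

B = np.array([[1.0, 0.0, -1.0,  0.0],    # 3 pressure dofs x 4 velocity dofs
              [0.0, 1.0,  0.0, -1.0],
              [1.0, 1.0, -1.0, -1.0]])   # row 2 = row 0 + row 1: rank deficient

U, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-12))
spurious = U[:, rank:]                   # left null space: B^T q = 0
print(rank, spurious.T)                  # here: exactly one spurious pressure mode
assert np.allclose(B.T @ spurious, 0.0)  # invisible to the momentum equation
# The smallest nonzero singular value of B (suitably norm-weighted) plays the
# role of the inf-sup constant beta; if it tends to 0 as h -> 0, the pair is
# unstable even without an exact null vector.
```

For the Q₁-Q₀ pair on a uniform rectangular grid, the checkerboard pressure discussed in Section 9.3.1 is precisely such a null vector of Bᵀ.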
9.3 Finite Element Approximation

Here we adopt the notation of Chapter 3. In particular, we restrict ourselves to the case of a polygonal domain Ω, and T_h denotes a triangulation of Ω into polyhedra K, that for brevity are called elements, whose diameter is at most h. Let us define

(9.3.1)   Y_h^m := {v ∈ L²(Ω) | v|_K ∈ P_m ∀ K ∈ T_h} ,   m ≥ 0 ,

when each K is either a triangle or a tetrahedron, or

(9.3.2)   Y_h^m := {v ∈ L²(Ω) | v|_K ∈ Q_m ∀ K ∈ T_h} ,   m ≥ 0 ,

if each K is a parallelogram or a parallelepiped. (One could also consider general quadrilaterals or hexahedra; however, similarly to what we have done in Chapter 3, we limit ourselves to the simplest case.) For both cases we set (accordingly with (3.2.4), (3.2.5))

(9.3.3)   X_h^k := Y_h^k ∩ C⁰(Ω̄) ,   k ≥ 1 .

Clearly, a space like Y_h^m ∩ L₀²(Ω) for some m ≥ 0, or X_h^m ∩ L₀²(Ω) for m ≥ 1, is a good candidate for the pressure space Q_h; on the other hand, (X_h^k ∩ H₀¹(Ω))^d can be used for V_h, for some k ≥ 1. Here we present some possible combinations of these spaces, and for each one we will indicate the degrees of freedom for each velocity component and for the pressure on the master element. We point out that for the velocity spaces the degrees of freedom need to be frozen on the boundary, in order to match the homogeneous Dirichlet condition on ∂Ω. Concerning the pressure, the fact that we are dealing with functions whose average is 0, or that vanish at a given point x* of Ω, should result in a reduction by one of the global number of degrees of freedom. When the latter option is adopted, choosing the discontinuous space Y_h^m has the nice feature that on each element K (indeed, excepting the one K* containing x*) the divergence of u_h has vanishing average, i.e., the mass is conserved there. This property follows immediately from (9.2.4)₂, as in this case Q_h contains all functions that are piecewise constant in Ω \ K* and vanish in K*.

9.3.1 Discontinuous Pressure Finite Elements
In this Section, we confine ourselves to the two-dimensional case. We begin by considering

(9.3.4)   V_h := (X_h^k ∩ H₀¹(Ω))² ,   Q_h := Y_h^m ∩ L₀²(Ω) ,   k ≥ 1 , m ≥ 0 .
The simplest situation occurs when m = 0 and k = 1 (piecewise-constant pressures and linear, or bilinear, velocities). The degrees of freedom are those indicated in Fig. 9.3.1. The symbol • denotes the values of the velocity components, whilst ○ refers to the value of the pressure.

Fig. 9.3.1. Piecewise-constant pressure and piecewise-linear (or bilinear) velocities

These kinds of elements don't satisfy the compatibility condition (9.2.9); further, a locking phenomenon may occur. This is, for instance, the case when the triangulation is made of N triangles having an edge on ∂Ω and a vertex that is common to each of them (see Fig. 9.3.2). Since u_h has only two degrees of freedom (the two components at the only internal vertex), whereas p_h has N − 1 degrees of freedom (one less than the number of elements, since p_h has vanishing mean value), the condition (9.2.22) yields u_h ≡ 0.
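The counting behind this locking example is easy to script (a sketch of the argument only; `dof_count` is a hypothetical helper):

```python
def dof_count(N):
    """P1 velocity / P0 pressure dofs for the 'fan' of Fig. 9.3.2:
    N boundary triangles sharing one interior vertex."""
    velocity = 2          # two components at the single interior vertex
    pressure = N - 1      # N piecewise constants minus the zero-mean constraint
    return velocity, pressure

for N in (3, 6, 12):
    v, p = dof_count(N)
    # once p >= v, the N-1 independent constraints (9.2.22) leave only u_h = 0
    print(N, v, p, "locking" if p >= v else "ok")
```

The velocity dof count stays at 2 however large N is, while the number of pressure constraints grows, which is why this mesh locks for every N ≥ 3.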
Fig. 9.3.2. Triangulation yielding the locking phenomenon for u_h

A partial remedy to this situation is to adopt the so-called cross-grid triangulation. This is obtained by subdividing each triangle K (a macroelement) into three subtriangles having the center of gravity as common vertex, then using a constant pressure over K and piecewise-linear velocities upon each subtriangle that are continuous over Ω (see Fig. 9.3.3). Such a choice eliminates the locking phenomenon, although it doesn't fulfill the compatibility condition (9.2.9) (see, e.g., Gunzburger (1989), p. 28). Also, the piecewise Q₁-Q₀ elements displayed at the right hand of Fig. 9.3.1 don't pass the compatibility test. If, for instance, Ω is a rectangle, with sides parallel to the cartesian frame and subdivided into rectangles, then it can be shown that the checkerboard mode p_h ∈ Q_h, satisfying p_h = 1 or
Fig. 9.3.3. (Crossgrid lP1)lPo elements
Ph
= 1 alternatively on adjacent elements, is in fact a spurious mode for the pressure (see Girault and Raviart (1986), p.160). This mode can be filtered out resorting to the crossgrid triangulation illustrated in the following Fig. 9.3.4, left. From any parallelogram K of Th we obtain four triangles by means ofthe two diagonals of K (the macroelement). The associated spaces are stable. Moreover, the following error estimate holds (see Brezzi and Fortin (1991), Sect. VI.5.4)
(9.3.5)  ‖u − u_h‖ + ‖p − p_h‖_0 ≤ C h (‖u‖_2 + ‖p‖_1) ,
where ‖·‖_s is the Sobolev norm of order s (see Section 1.2) and h is the maximum diameter of the macroelements. The error estimate (9.3.5) follows from (9.2.11), (9.2.12), Theorem 3.4.2 and estimate (3.5.24). The implicit assumption is that both norms at the right-hand side make sense.
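The checkerboard mode described above can be observed on a minimal example: on a 2 × 2 patch of unit square elements, the element averages of div v_h, weighted with alternating ±1 signs, cancel for every bilinear velocity basis function. The sketch below (our own illustration, not code from the book) verifies this for the single interior velocity node by Gauss quadrature.

```python
import numpy as np

# 2x2 patch of unit squares covering [0,2]x[0,2]; the only interior Q1
# velocity node sits at (1,1).  hat = one coordinate factor of its
# piecewise-bilinear nodal shape function.
hat  = lambda t: np.maximum(0.0, 1.0 - np.abs(t - 1.0))
dhat = lambda t: np.where(t < 1.0, 1.0, -1.0)            # derivative of hat on [0,2]

gx, gw = np.polynomial.legendre.leggauss(4)              # 1D Gauss rule on [-1,1]

def integrate(f, a, b, c, d):
    """Tensor-product Gauss quadrature of f(x, y) over the box [a,b] x [c,d]."""
    x = 0.5*(b - a)*gx + 0.5*(a + b)
    y = 0.5*(d - c)*gx + 0.5*(c + d)
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = np.outer(gw, gw) * 0.25*(b - a)*(d - c)
    return float(np.sum(W * f(X, Y)))

# b(v_h, p_h) for v_h = (hat(x)hat(y), 0) and the checkerboard Q0 pressure:
# each element contributes sign * integral of d/dx [hat(x)hat(y)] over it.
total = 0.0
for i in range(2):
    for j in range(2):
        sign = (-1)**(i + j)
        total += sign * integrate(lambda X, Y: dhat(X)*hat(Y), i, i+1, j, j+1)

print(abs(total))   # the checkerboard mode is invisible to the divergence form
```

The same cancellation occurs for the second velocity component by symmetry, which is exactly why this pressure mode goes undetected by the discrete problem.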
Fig. 9.3.4. (Cross-grid P_1)-Q_0 elements (left) and (cross-grid Q_1)-Q_0 elements (right)
Another stabilization can be carried out using the macroelements of Fig. 9.3.4, right. Each parallelogram is subdivided into four parallelograms sharing the center of gravity as common vertex, then Q_1 polynomials are used over each subelement. This approximation is stable and linearly convergent (see Gunzburger (1989), p. 30). Another remedy to instability is the use of a richer space for the velocity approximation. In this respect, let us notice that the elements P_2-P_0 and Q_2-Q_0 are stable and both converge linearly (Fortin (1972)). To obtain quadratic accuracy, one has to approximate the pressure with piecewise-linear polynomials. However, the P_2-P_1 element (which corresponds to taking k = 2 and m = 1 in (9.3.4)) is still unstable (see, e.g., Brezzi and Fortin (1991), p. 231). It can be stabilized by resorting to the elements introduced by Crouzeix and Raviart (1973), based on discontinuous piecewise P_1
pressures and piecewise P_2^b(K) velocities, where
(9.3.6)  P_k^b(K) := {v = p_k + b | p_k ∈ P_k , b is a cubic bubble function on K} ,
k ≥ 1. A cubic bubble function b on a triangle K is a cubic polynomial that vanishes on the edges of K. Any such function can therefore be represented as a multiple of
(9.3.7)  b_K := p_1 p_2 p_3 ,
where {p_i | i = 1, 2, 3} are linear polynomials, each one vanishing on one side of K and taking the value 1 at the opposite vertex; the bubble is determined by the value that it attains at the center of gravity of K. The Crouzeix-Raviart elements are stable and converge quadratically. We report in Fig. 9.3.5 (left) their degrees of freedom. A higher-order generalization of the Crouzeix-Raviart elements is based upon discontinuous piecewise P_{k−1} pressures, and piecewise P_k ⊕ B_{k+1}(K) velocities, where on each triangle K
(9.3.8)  B_{k+1}(K) := {v = q b_K | q ∈ P_{k−2}} ,
and b_K is defined in (9.3.7). B_{k+1}(K) is therefore the space of bubble functions of degree less than or equal to k + 1 on K. These elements satisfy the compatibility condition and converge as O(h^k). (For the proofs, see, e.g., Girault and Raviart (1986), p. 139.)
Fig. 9.3.5. Crouzeix-Raviart elements (left) and Boland-Nicolaides elements (right, k = 2)
Another good example is that based on parallelograms, with discontinuous piecewise P_{k−1} pressures (k ≥ 1) and piecewise P_k velocities on both triangles which are obtained by dividing any parallelogram through one of its diagonals. See Fig. 9.3.5 (right) for an example with k = 2. These elements, introduced by Boland and Nicolaides (1983), satisfy the compatibility condition, and converge as O(h^k) (see also Gunzburger (1989), p. 36). Among other stable elements, we mention the Q_2-P_1 element on parallelepipedal decompositions (pressures are therefore discontinuous). See Girault and Raviart (1986), p. 156.
9.3.2 Continuous Pressure Finite Elements
For simplicity, also in this Section we consider the two-dimensional case. When the finite element pressure is asked to be a continuous function, it is natural to consider the spaces
(9.3.9)  V_h := (X_h^k ∩ H_0^1(Ω))^d , Q_h := X_h^m ∩ L_0^2(Ω) , k, m ≥ 1 .
To start with, let us mention that the so-called equal interpolation choice, the one based on the same polynomial degree for both velocity and pressure, i.e., m = k, yields an unstable approximation (see Sani, Gresho, Lee and Griffiths (1981), Sani, Gresho, Lee, Griffiths and Engelman (1981) and Brezzi and Fortin (1991), p. 210). On the contrary, the elements proposed by Taylor and Hood (1973), corresponding to the choice m = 1, k = 2 in (9.3.9), are stable, and converge with the optimal rate, i.e.,
(9.3.10)  ‖u − u_h‖ + ‖p − p_h‖_0 ≤ C h^s (‖u‖_{s+1} + ‖p‖_s) , s = 1, 2 ,
provided the solution is regular enough so that the norms at the right-hand side make sense (see, e.g., Girault and Raviart (1986), p. 176). This holds for both triangular and parallelepipedal elements. Higher-order Taylor-Hood elements (corresponding to the choice m = k − 1, k ≥ 3, in (9.3.9)) have been analyzed by Brezzi and Falk (1991). We show in Fig. 9.3.6 the associated degrees of freedom for the case m = 1, k = 2.
Fig. 9.3.6. Low-order Taylor-Hood triangles (left) and parallelograms (right)
Another possibility is to start with a triangulation T_h made of triangles K, take piecewise-linear pressures on each K, and piecewise-linear velocities over each of the three subtriangles of K sharing the center of gravity of K as common vertex (see Fig. 9.3.7, left). This choice, which for the sake of brevity will be indicated as (cross-grid P_1)-P_1, satisfies the compatibility condition, and the corresponding finite element solution converges linearly with respect to h (see Pironneau (1988), p. 107). A similar idea forms the basis of the so-called (P_1 iso P_2)-P_1 elements (due to Bercovier and Pironneau (1979); see Fig. 9.3.7, right). The pressure space is still the same, while velocities are linear over each of the four subtriangles of K, obtained by joining the midpoints of the edges of K. This discretization is stable and linearly convergent. The reason for its name relies on the
Fig. 9.3.7. (Cross-grid P_1)-P_1 (left) and (P_1 iso P_2)-P_1 (right) elements
observation that the degrees of freedom are the same as those of the P_2-P_1 Taylor-Hood element (see Fig. 9.3.6, left). In a similar manner, one can consider the (Q_1 iso Q_2)-Q_1 elements when T_h is made of parallelograms. We conclude with the so-called mini-element (introduced by Arnold, Brezzi and Fortin (1984)): pressures are still piecewise P_1, while velocities are piecewise P_1^b (see (9.3.6)). This choice is stable, and leads to linear convergence (see Fig. 9.3.8). The mini-element is very similar to the (cross-grid P_1)-P_1 element of Fig. 9.3.7, left. Indeed, in the latter element the bubble function b on K, which is a cubic polynomial vanishing over ∂K, is replaced by a function ψ ∈ C^0(K), still vanishing on ∂K and linear on each one of the three subtriangles merging at the center of gravity of K. The two elements enjoy the same stability and convergence properties.
Fig. 9.3.8. The mini-element
Many other finite element spaces have been used in the applications, such as, e.g., elements with non-conforming velocities and those based on reduced integration. We refer the interested reader to Girault and Raviart (1986) and Brezzi and Fortin (1991).
9.4 Stabilization Procedures
When the approximation (9.2.4) of the saddle-point problem (9.1.11) is based on a couple of spaces V_h, Q_h that do not satisfy the compatibility condition (9.2.9), a procedure aiming at stabilizing the discrete solution (which otherwise may be affected by spurious pressure modes) can be accomplished according to several criteria.
Throughout Section 9.3 we have already seen that the use of cross-grid triangulations can sometimes stabilize finite elements that are a priori unstable on general triangulations. Another way of stabilizing, without necessarily resorting to any special triangulation, consists of slightly modifying (9.2.4), and consequently the associated algebraic system (9.2.14). Often, these methods aim at relaxing the incompressibility constraint, thus modifying the second set of equations of (9.2.4). In abstract form, the stabilized problem reads: find u_h ∈ V_h , p_h ∈ Q_h :
(9.4.1)  a(u_h, v_h) + b(v_h, p_h) = (f, v_h)   ∀ v_h ∈ V_h
         b(u_h, q_h) = G_h(q_h)                 ∀ q_h ∈ Q_h ,
where G_h is the stabilization term. For ρ = 0, this method coincides with the SUPG method (9.4.4). When ρ = −1 this method has been given the name of Galerkin/Least-Squares (GALS) (Hughes and Franca (1987)), and it leads to a symmetric system. The case ρ = 1 has been proposed by Douglas and Wang (1989): it yields a system which is no longer symmetric; however, it is non-negative definite and stable under more general conditions. As a matter of fact, the following result holds (Franca and Stenberg (1991)). In (9.4.10), ρ is a parameter (typically, ρ takes the values −1, 0, 1) and δ is a suitable constant, Γ_h is the set of all edges σ of the triangulation except for those belonging to the boundary ∂Ω, and h_σ is the length
of σ. Finally, for any q_h ∈ Q_h, [q_h]_σ denotes its jump across σ. This method yields a stable solution for all δ > 0, which converges linearly, i.e.,
‖u − u_h‖ + ‖p − p_h‖_0 ≤ C h (‖u‖_2 + ‖p‖_1)
(Kechkar and Silvester (1992)). The algebraic restatement of (9.4.10) yields the symmetric non-singular system
(9.4.11)  [ A   B^T ] [ u ]   [ f ]
          [ B  −δD  ] [ p ] = [ 0 ] ,
where D is a symmetric and non-negative definite matrix whose entries are
(9.4.12)  d_lm = Σ_{σ ∈ Γ_h} h_σ ∫_σ [ψ_l]_σ [ψ_m]_σ .
Algebraically, the matrix in (9.4.11), say S_δ, can be decomposed as
S_δ = [ A  0 ] [ A^{−1}          0              ] [ A  B^T ]
      [ B  I ] [ 0    −(B A^{−1} B^T + δD)      ] [ 0   I  ] .
Therefore, by the Sylvester law of inertia the number of positive and negative eigenvalues of S_δ coincides with that of its middle factor: the first N_h are positive and the remaining K_h negative. Let us remember that if ker B^T ≠ {0} the matrix S in (9.2.16) has N_h positive eigenvalues, while the remaining K_h can be either negative or null. A simplification in (9.4.10) occurs if we consider a triangulation made of macroelements, each of them being the union of four triangles or quadrilaterals (see Fig. 9.4.2). Then the sum over σ ∈ Γ_h in (9.4.10) can be restricted to those edges that are internal to each macroelement. The reduced scheme enjoys the same stability and convergence properties as (9.4.10). The advantage is that now D has a narrower band.
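The inertia count stated above can be verified directly on a small random instance of the matrix in (9.4.11); the matrices below are arbitrary stand-ins (not an actual finite element assembly), with A symmetric positive definite, B of full rank and D symmetric non-negative definite.

```python
import numpy as np

rng = np.random.default_rng(0)
Nh, Kh, delta = 8, 3, 0.1

G = rng.standard_normal((Nh, Nh))
A = G @ G.T + Nh*np.eye(Nh)                  # symmetric positive definite
B = rng.standard_normal((Kh, Nh))            # full row rank (generic random matrix)
E = rng.standard_normal((Kh, Kh))
D = E @ E.T                                  # symmetric non-negative definite

S_delta = np.block([[A, B.T], [B, -delta*D]])
eig = np.linalg.eigvalsh(S_delta)
n_pos, n_neg = int(np.sum(eig > 0)), int(np.sum(eig < 0))
print(n_pos, n_neg)                          # -> 8 3: N_h positive, K_h negative

# Sylvester's law: the Schur block of the middle factor is negative definite
schur = np.linalg.eigvalsh(-(B @ np.linalg.solve(A, B.T) + delta*D))
print(int(np.sum(schur < 0)))                # -> 3
```

With delta = 0 and a rank-deficient B, some of the trailing K_h eigenvalues would become null instead of negative, matching the remark on ker B^T above.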
Fig. 9.4.2. A macroelement partition of Ω
Remark 9.4.2 (Stabilization and bubble functions). It has been pointed out in Baiocchi, Brezzi and Franca (1993) that the enrichment of the finite element space by the addition of bubble functions results in a stabilization approach that is formally similar to those we have investigated above (see also
Pierre (1988, 1989), Bank and Welfert (1990)). A similar remark also applies to the stabilization methods introduced in Section 8.3.2 for advection-diffusion problems (see Section 8.3.3). In an abstract fashion, we can formulate a finite dimensional approach that makes use of a space of bubble functions as follows. Let us set
V_h^b := U_h ⊕ B ,
where U_h is a standard finite element space, while B is a space of bubble functions. For instance, for the mini-element of Fig. 9.3.8 we have
(9.4.13)  U_h = (X_h^1 ∩ H_0^1(Ω))^2 × (X_h^1 ∩ L_0^2(Ω)) ,  B|_K = (B_3(K))^2 × {0}
(see (9.3.8)). Then, let us denote by A(·,·) the bilinear form associated with the boundary value problem. The finite dimensional problem can be formulated as follows:
(9.4.14)  find u_h + u_B ∈ V_h^b : A(u_h + u_B, v_h + v_B) = F(v_h + v_B)   ∀ v_h ∈ U_h , v_B ∈ B .
In particular, taking v_h = 0 yields an equation for u_B alone; by computing u_B (element by element) we obtain (9.4.15), for a suitable functional Φ. Now, taking v_B = 0 in (9.4.14), it follows (9.4.16). This problem is nothing but a Galerkin approximation formulated over the original space U_h. The second term at the right-hand side can be regarded as a perturbation that aims at stabilizing the solution u_h. As a particular case, the effect produced by the use of bubble functions for the mini-element can be regarded as a GALS approximation that makes use of the space U_h.
□
Remark 9.4.3 The stabilized problem (9.4.6) can be cast in a form similar to the one introduced in Section 8.3.2 for advection-diffusion equations. In fact, if we write the Stokes operator and the right-hand side as
(9.4.17)  L(w, r) := ( a_0 w − ν Δw + ∇r , div w ) ,  f* := ( f , 0 ) ,
the symmetric and skew-symmetric parts of L with respect to the scalar product in L^2(Ω) are given by
(9.4.18)  L_S(w, r) := ( a_0 w − ν Δw , 0 ) ,  L_SS(w, r) := ( ∇r , div w ) .
Then we can introduce the following stabilized problem: find (u_h, p_h) ∈ V_h × Q_h such that
(9.4.19)  a(u_h, v_h) + b(v_h, p_h) − b(u_h, q_h) + δ Σ_{K ∈ T_h} (L(u_h, p_h) − f*, h_K^2 (L_SS + ρ L_S)(v_h, q_h))_K = (f, v_h)
for each (v_h, q_h) ∈ V_h × Q_h. This essentially corresponds to (9.4.6) (indeed, only an additional stabilizing term
δ Σ_{K ∈ T_h} h_K^2 (div u_h, div v_h)_K
has been introduced at the left-hand side of (9.4.19)). Notice that for advection-diffusion equations the GALS method is the one stemming from the choice ρ = 1, and not from ρ = −1. The reason for this name "switching" when passing to the Stokes problem is that the GALS method was originally introduced starting from the Stokes operator written in the symmetric form
L*(w, r) := ( a_0 w − ν Δw + ∇r , −div w )
(see Hughes and Franca (1987)), then subtracting at the left-hand side the "least-squares" term
δ Σ_{K ∈ T_h} h_K^2 (a_0 u_h − ν Δu_h + ∇p_h − f, a_0 v_h − ν Δv_h + ∇q_h)_K .
This corresponds to the choice ρ = −1 in (9.4.19). Indeed, for the Stokes problem it would be more appropriate to give the name of "least-squares method" to the Douglas-Wang method (9.4.6) with ρ = 1.
□
9.5 Approximation by Spectral Methods
These approximations are characterized by the fact that both test and trial functions are invariably algebraic polynomials of high degree. For the sake of simplicity, we assume that Ω is the domain (−1, 1)^d, d = 2, 3; the case of a more general domain demands a domain decomposition approach (see, e.g., Section 6.4). Let us begin with the approximation of problem (9.1.6), taking
(9.5.1)  V_N := (ℚ_N^0)^d ,  V_div,N := V_N ∩ V_div ,
where, as usual, ℚ_N^0 denotes the space of algebraic polynomials of degree less than or equal to N in each variable x_i, i = 1, ..., d, vanishing over ∂Ω. In the current case, the Galerkin approximation (9.2.1) reads: find u_N ∈ V_div,N such that
(9.5.2)  a(u_N, v_N) = (f, v_N)   ∀ v_N ∈ V_div,N .
This problem has a unique solution, which is stable, and satisfies the convergence result (9.2.3) (with the obvious change of notation). In order to deduce a rate of convergence with respect to N^{−1}, we need to estimate the infimum at the right-hand side of (9.2.3). For this, we have the following result, which was proven by Sacchi Landriani and Vandeven (1989).
Theorem 9.5.1 For any s ≥ 1 there exists a constant C independent of N such that for all v ∈ (H^s(Ω))^d ∩ V_div
(9.5.3)  inf_{v_N ∈ V_div,N} ‖v − v_N‖ ≤ C N^{1−s} ‖v‖_s .
Proof. Let us confine ourselves to the two-dimensional case (d = 2). First of all, we notice that, for any v ∈ V_div, there exists a stream function ψ ∈ H^2(Ω) such that
(9.5.4)  v = rot ψ := ( ∂ψ/∂x_2 , −∂ψ/∂x_1 )   in Ω .
Further, if v ∈ (H^s(Ω))^2 for some s ≥ 1, then ψ ∈ H^{s+1}(Ω) and
(9.5.5)  ‖ψ‖_{s+1} ≤ C ‖v‖_s .
Since v = 0 on ∂Ω, ∇ψ = 0 on ∂Ω; moreover, since ψ is determined up to an additive constant, we can enforce that it vanishes at one boundary point, so that we can infer that it vanishes identically on ∂Ω. Thus ψ ∈ H_0^2(Ω), where
(9.5.6)  H_0^2(Ω) = {φ ∈ H^2(Ω) | φ = ∂φ/∂n = 0 on ∂Ω} .
On the latter space it is possible to define an operator P_N^{2,0}, by associating to any function of H_0^2(Ω) its orthogonal projection, with respect to the scalar product of (9.5.6), upon the polynomials of degree less than or equal to N that vanish over ∂Ω together with their normal derivatives. By an argument similar to that used to derive error estimates for P_N^{1,0} (see Section 4.5.2), it can be proven that
(9.5.7)  ‖ψ − P_N^{2,0} ψ‖_2 ≤ C N^{1−s} ‖ψ‖_{s+1} ≤ C N^{1−s} ‖v‖_s
(the latter inequality follows from (9.5.5)). Taking v_N = rot(P_N^{2,0} ψ), and noticing that v_N ∈ V_div,N, we conclude with (9.5.3). □
It now follows from (9.2.3) and (9.5.3) that
(9.5.8)  ‖u − u_N‖ ≤ C N^{1−s} ‖u‖_s ,
provided u ∈ (H^s(Ω))^d ∩ V_div for some s ≥ 1. Once u_N is available, the problem is how to recover a pressure field p_N with the same optimal accuracy. This is not an easy task; it can, however, be accomplished in some cases. We refer, in particular, to the method of Moser, Moin and Leonard (1983) and its extensions (see Pasquarelli, Quarteroni and Sacchi Landriani (1987) and Pasquarelli (1991)). We should bear in mind, however, that this approach entails the use of a basis for V_div,N made of linear combinations of Chebyshev polynomials (see Section 4.5.1). As a consequence, in (9.5.2) every integral is replaced by a weighted one using the Chebyshev weight function (see Section 4.5.4). In general, the difficulty of generating a basis for V_div,N and that of recovering the pressure field suggest approximating the saddle-point problem (9.1.11) directly. In turn, the latter can be solved by a spectral Galerkin method, or else by a generalized Galerkin method, which is often equivalent to a spectral collocation method.
9.5.1 Spectral Galerkin Approximation
Let Q_N and V_N be two polynomial subspaces of L_0^2(Ω) and (H_0^1(Ω))^d, respectively, and then consider the following spectral approximation to (9.1.11): find u_N ∈ V_N, p_N ∈ Q_N :
(9.5.9)  a(u_N, v_N) + b(v_N, p_N) = (f, v_N)   ∀ v_N ∈ V_N
         b(u_N, q_N) = 0                        ∀ q_N ∈ Q_N .
Typically, the space V_N is the one defined in (9.5.1); this guarantees an error estimate like (9.5.8) for the velocity field. Correspondingly, we would like to take Q_N = M_N, with M_N := ℚ_N ∩ L_0^2(Ω). Unfortunately, this choice is unsuitable because the couple V_N, M_N would not satisfy the compatibility condition (9.2.9). For this reason, we need to take a proper subspace of M_N as Q_N. To begin with, let us define
(9.5.10)  S_N := {q_N ∈ M_N | b(v_N, q_N) = 0 ∀ v_N ∈ V_N} ,
i.e., the subspace of M_N made of all spurious pressure modes (see (9.2.21)). Also, let us define
(9.5.11)  D_N := {div v_N | v_N ∈ V_N} ,
which is the image of V_N through the divergence operator. Clearly, D_N is the orthogonal subspace of S_N with respect to the L^2(Ω) scalar product. The
space S_N can be fully characterized. For the time being, notice that we can take, as pressure space Q_N, any supplementary space to S_N in M_N, i.e., any Q_N ⊂ M_N such that
(9.5.12)  dim S_N + dim Q_N = dim M_N ,  S_N ∩ Q_N = {0} .
The following result (proven, e.g., in Bernardi and Maday (1992), pp. 126-127) characterizes the space of spurious modes for two-dimensional approximations.
Theorem 9.5.2 Assume that Ω = (−1, 1)^2. The space S_N has dimension 8, and a basis for it is given by the polynomials
(9.5.13)  1 , L_N(x_1) , L_N(x_2) , L_N(x_1)L_N(x_2) , L'_N(x_1)L'_N(x_2) ,
          L'_N(x_1)L'_{N+1}(x_2) , L'_{N+1}(x_1)L'_N(x_2) , L'_{N+1}(x_1)L'_{N+1}(x_2) ,
where {L_k | k = 0, 1, ...} are the Legendre polynomials introduced in Section
4.4.1.
Proof. It is easy to show that any function in (9.5.13) belongs to S_N. Conversely, let us prove that any function p*_N ∈ S_N can be obtained as a linear combination of the above polynomials. We notice that p*_N must satisfy
(9.5.14)  ∫_Ω p*_N ( ∂v_{N,1}/∂x_1 + ∂v_{N,2}/∂x_2 ) dx = 0   ∀ v_N ∈ V_N .
First taking v_N = (φ_mn, 0) and then v_N = (0, φ_mn) with
φ_mn(x_1, x_2) = (1 − x_1^2)(1 − x_2^2) L'_m(x_1) L'_n(x_2) ,  1 ≤ m, n ≤ N − 1 ,
we deduce from (9.5.14)
(9.5.15)  ∫_Ω p*_N(x) L_m(x_1)(1 − x_2^2)L'_n(x_2) dx = ∫_Ω p*_N(x)(1 − x_1^2)L'_m(x_1) L_n(x_2) dx = 0 .
We have used the Sturm-Liouville formula
(9.5.16)  ((1 − ξ^2) L'_k(ξ))' = −k(k + 1) L_k(ξ) .

Λ_N := P_{N−1} L_{N+1} − P_{[λN]} L_{N+1} .
For any 0 < λ < 1, [λN] denotes the integral part of λN, and P_{[λN]} is the orthogonal projection operator upon the space of one-dimensional polynomials of degree less than or equal to [λN] with respect to the scalar product defined in (4.4.5). This choice looks rather complicated; however, it is suitable from the theoretical point of view, as the compatibility condition is satisfied with a constant β that behaves as in (9.5.20). Further, the space Q_N chosen with the above criterion is rich enough to include all polynomials of degree less than or equal to [λN] with zero average. Thus Q_N enjoys an optimal convergence property. Indeed, using (9.2.12) and keeping in mind (9.5.20), we deduce the error estimate (9.5.21) for the pressure, provided the exact solution (u, p) has the smoothness required from the right-hand side. Let us notice that (9.5.21) is optimal up to a factor N, which is precisely the inverse of the constant β.
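The orthogonality relations (9.5.15) behind Theorem 9.5.2 can be spot-checked with one-dimensional Gauss quadrature, since the two-dimensional integrals factorize. The sketch below (our own illustration; N = 6 is an arbitrary choice) verifies that the candidate mode p* = L_N(x_1)L_N(x_2) from (9.5.13) gives a vanishing value of the divergence form for every test velocity of the form v = ((1 − x_1^2)(1 − x_2^2) x_1^a x_2^b, 0), a, b ≤ N − 2; the second velocity component behaves identically by symmetry.

```python
import numpy as np
from numpy.polynomial import legendre as leg
from numpy.polynomial import Polynomial as P

N = 6
nodes, weights = leg.leggauss(2*N + 4)      # exact for polynomials of degree <= 4N+7

cN = np.zeros(N + 1); cN[N] = 1.0
LN = lambda t: leg.legval(t, cN)            # Legendre polynomial L_N

quad = lambda f: float(np.sum(weights * f(nodes)))

x = P([0.0, 1.0])
w = 1 - x**2                                # boundary factor of the test velocities

# The 2D integral of p* div v factorizes into two 1D integrals; the factor
# carrying the derivative has degree <= N-1 and is hence orthogonal to L_N.
max_err = 0.0
for a in range(N - 1):
    d1 = (w * x**a).deriv()                 # d/dx1 of the x1-factor
    f1 = quad(lambda t: LN(t) * d1(t))
    for b in range(N - 1):
        f2 = quad(lambda t: LN(t) * w(t) * t**b)
        max_err = max(max_err, abs(f1 * f2))

print(max_err)   # vanishes up to roundoff: p* is invisible to div(V_N)
```

The same factorization argument underlies the proof above: the Sturm-Liouville formula (9.5.16) converts the derivative factor into −m(m+1)L_m, whose product against L_N integrates to zero whenever m < N.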
9.5.2 Spectral Collocation Approximation
Most often, collocation methods are preferred to the Galerkin method for the spectral approximation of the Stokes equations. Besides other reasons, let us mention the difficulty of calculating the right-hand side of (9.5.9), which is overcome with the collocation approach, as all integrals are replaced by Gaussian numerical integration. Set Ω = (−1, 1)^2, let N be a positive integer, and denote by Ω_N the set of the (N + 1)^2 Legendre Gauss-Lobatto nodes (see (4.5.38)), by Ξ_N the subset of Ω_N of the (N − 1)^2 nodes belonging to the interior of Ω, and by ∂Ξ_N the subset of the 4N nodes lying on the boundary ∂Ω. The collocation problem that approximates (9.1) when Ω = (−1, 1)^2 reads
(9.5.22)  a_0 u_N − ν Δu_N + ∇p_N = f   at x ∈ Ξ_N
          div u_N = 0                    at x ∈ Ω_N \ R_N
          u_N = 0                        at x ∈ ∂Ξ_N ,
where R_N is a set of eight nodes. Four of them are the corners of Ω; the others, say {x^1, x^2, x^3, x^4}, can be taken in any way such that the Gram matrix (φ_i(x^j)), i, j = 1, ..., 4, is non-singular, where φ_1(x) = 1, φ_2(x) = L_N(x_1), φ_3(x) = L_N(x_2), φ_4(x) = L_N(x_1)L_N(x_2). Using the discrete scalar product (4.5.39) and property (4.5.40), the collocation problem (9.5.22) can be equivalently restated as: find u_N ∈ V_N, p_N ∈ Q_N :
(9.5.23)  a_N(u_N, v_N) + b_N(v_N, p_N) = (f, v_N)_N   ∀ v_N ∈ V_N
          b_N(u_N, q_N) = 0                            ∀ q_N ∈ Q_N .
The symbols are as follows. V_N is the space defined in (9.5.1), while Q_N is a space of dimension (N + 1)^2 − 8 which is supplementary to the space S*_N spanned by the functions
(9.5.24)  1 , L_N(x_1) , L_N(x_2) , L_N(x_1)L_N(x_2) , L'_N(x_1)L'_N(x_2) ,
          L'_N(x_1) x_2 L'_N(x_2) , x_1 L'_N(x_1) L'_N(x_2) , x_1 L'_N(x_1) x_2 L'_N(x_2) .
Moreover,
(9.5.25)  a_N(w_N, v_N) := a_0 (w_N, v_N)_N + ν (∇w_N, ∇v_N)_N ,
          b_N(v_N, q_N) := (q_N, div v_N)_N ,
where (·,·)_N is the discrete scalar product defined in (4.4.14) and w_N, v_N ∈ V_N, q_N ∈ Q_N. Problem (9.5.23) therefore appears as a generalized Galerkin approximation to (9.1.11). As in the Galerkin case, Q_N contains the polynomials of ℚ_{[λN]} with zero average, for 0 < λ < 1.
For such an approach, a theorem of stability and convergence that generalizes Theorems 7.4.2 and 7.4.3 can be proven in a straightforward way. In particular, in the current case, the compatibility condition reads
(9.5.26)  ∀ q_N ∈ Q_N  ∃ v_N ∈ V_N : b_N(v_N, q_N) ≥ β ‖v_N‖ ‖q_N‖_0 .
With the above choice of finite dimensional subspaces, (9.5.23) is free of spurious pressure modes, i.e., of functions p_N such that
(9.5.27)  b_N(v_N, p_N) = 0   ∀ v_N ∈ V_N .
By analogy with the Galerkin case, it can be shown that (9.5.26) is satisfied with a constant β that behaves as in (9.5.20). Thus, an error estimate similar to (9.5.8) and (9.5.21) can also be obtained in this case. The difference comes from the presence of an extra term involving the right-hand side f. Precisely, the new estimates read (9.5.28) and (9.5.29), for some s ≥ 2 and r ≥ 2 for which the norms at the right-hand side make sense. Also this approximation can be shown to enjoy the property that div u_N = 0 identically. The algorithmic aspects of this method are discussed in Bernardi and Maday (1992), pp. 153-155.
9.5.3 Spectral Generalized Galerkin Approximation
This method, which is due to Maday, Meiron, Patera and Rønquist (1993), is no longer equivalent to a collocation method. As usual, we take V_N = (ℚ_N^0)^2, while this time Q_N = ℚ_{N−2} ∩ L_0^2(Ω). The spectral problem reads: find u_N ∈ V_N, p_N ∈ Q_N :
(9.5.30)  a_N(u_N, v_N) + b(v_N, p_N) = (f, v_N)_N   ∀ v_N ∈ V_N
          b(u_N, q_N) = 0                            ∀ q_N ∈ Q_N ,
where a_N(·,·) is defined in (9.5.25) and b(·,·) in (9.1.10). Owing to the exactness of the Legendre Gauss-Lobatto integration formula, and the fact that the degree of the polynomials of Q_N is now less than or equal to N − 2, we deduce
b(v_N, q_N) = (div v_N, q_N) = b_N(v_N, q_N)   ∀ v_N ∈ V_N , q_N ∈ Q_N .
Owing to this property we obtain from (9.5.30) that:
a_0 u_N − ν Δu_N + ∇p_N = f   at the interior Legendre Gauss-Lobatto nodes
u_N = 0                        at the boundary nodes .
Moreover, for any interior node x = (ξ_i, ξ_j) we obtain that div u_N(x) turns out to be a suitable linear combination of the values of div u_N at the four points (−1, ξ_j), (1, ξ_j), (ξ_i, −1), (ξ_i, 1). This explains why (9.5.30) fails to be a complete collocation method. The interest of this method is that the pressure space Q_N is now easy to characterize, and, most of all, it is free of spurious modes. Furthermore, it can be proven that
(9.5.31)  ∀ q_N ∈ Q_N  ∃ v_N ∈ V_N : b(v_N, q_N) ≥ C N^{−1/2} ‖v_N‖ ‖q_N‖_0 ,
i.e., the compatibility condition holds with a constant β that behaves like N^{−1/2}. Due to this fact, under the usual smoothness assumptions the error estimate takes the following form
(9.5.32)  ‖u − u_N‖ + N^{−1/2} ‖p − p_N‖_0 ≤ C [ N^{1−s} (‖u‖_s + ‖p‖_{s−1}) + N^{−r} ‖f‖_r ] ,
which is preferable to (9.5.29).
9.6 Solving the Stokes System
Here we deal with the solution of the Stokes system (9.2.14). Some of the methods we are going to consider are actually based on a restatement of the differential system. The latter is then accordingly approximated, therefore yielding a numerical solution that does not necessarily coincide with that of (9.2.14). We recall that the matrix S is symmetric but indefinite; it is non-singular if ker B^T = {0}, which is the case if the compatibility condition (9.2.9) holds. We report hereafter those solution techniques that have received broad attention in the literature. For a more complete survey see, e.g., Pironneau (1988), Gunzburger (1989), Quartapelle (1993) and Fortin (1993) for finite element approximations, and Canuto, Hussaini, Quarteroni and Zang (1988), Boyd (1989) and Rønquist (1988) for spectral approximations. We warn the reader that in this Section we are using the same notation for the vector function u = u(x), the solution to the infinite dimensional problem (9.1), and the finite dimensional vector u ∈ ℝ^{N_h}, the solution to the linear system (9.2.14).
9.6.1 The Pressure-Matrix Method
This method is based on the elimination procedure (9.2.17)-(9.2.18), and consists of generating an independent linear system for the pressure after elimination of the velocity vector. The resulting problem reads
(9.6.1)  R p = g ,
where R is the K_h × K_h pressure-matrix given in (9.2.19) and g = B A^{−1} f. We have already noticed that R is symmetric and, when ker B^T = {0}, positive definite. If a_0 = 0 in (9.1), then R is well-conditioned. In fact, νR is an approximation of the operator div Δ^{−1} ∇, which behaves like the identity operator. Thus νR is close to the variational equivalent of the identity, i.e., the pressure mass matrix J := ((ψ_l, ψ_m))_{lm} (see (9.2.15)). More precisely, denoting by χ_sp(R) the spectral condition number of R (see (2.4.8)), it can be proven that there exist two constants C_0 and C_1 such that (9.6.2) holds, where β is the constant of the compatibility condition (9.2.9) (see Fortin and Pierre (1992)). If β is independent of h, then χ_sp(R) is independent of h too. In any such case, the conjugate gradient method can be efficiently used for the solution of (9.6.1), as its convergence rate is independent of the dimension of R (see Section 2.4.2). For each conjugate gradient iteration, the key step is the residual evaluation r^k = g − R p^k, which can be accomplished as follows: (i) compute first q^k = B^T p^k; (ii) then solve A y^k = q^k; (iii) finally compute r^k = g − B y^k. Points (i) and (iii) are trivial, as they simply require matrix-vector multiplications. Instead, point (ii) entails the solution of two linear systems with the same matrix, each of them being a Poisson problem for one velocity component. Since A is symmetric and positive definite, step (ii) can be faced by a direct method (through a Cholesky decomposition of A) or again by conjugate gradient inner iterations (see Chapter 2). In the latter case, a preconditioner for A ought to be used, as the condition number of A grows with the dimension N_h of A (see (6.3.17) and (6.3.25)). Further, let us also notice that this second way to solve step (ii) can be fairly expensive, since at each step of the outer iteration on R it would be necessary to iterate until convergence for computing the vector y^k.
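The residual recipe (i)-(iii) amounts to a matrix-free conjugate gradient solver for (9.6.1): R = B A^{−1} B^T is never assembled, and each application of R costs one solve with A. A minimal sketch with random stand-in matrices (not an assembled Stokes problem, and with a direct solve playing the role of step (ii)):

```python
import numpy as np

rng = np.random.default_rng(1)
Nh, Kh = 40, 10
G = rng.standard_normal((Nh, Nh))
A = G @ G.T + Nh*np.eye(Nh)                # SPD stand-in for the velocity matrix
B = rng.standard_normal((Kh, Nh))          # full-rank stand-in for the divergence matrix
f = rng.standard_normal(Nh)

solveA = lambda r: np.linalg.solve(A, r)   # in practice: Cholesky factors or inner CG

def apply_R(p):
    # (i) q = B^T p; (ii) solve A y = q; (iii) return B y  -- R is never formed
    return B @ solveA(B.T @ p)

g = B @ solveA(f)                          # right-hand side of (9.6.1)

# plain conjugate gradient on R p = g
p = np.zeros(Kh)
r = g - apply_R(p)
d = r.copy()
rr = r @ r
for _ in range(5*Kh):
    Rd = apply_R(d)
    alpha = rr / (d @ Rd)
    p += alpha*d
    r -= alpha*Rd
    rr_new = r @ r
    if np.sqrt(rr_new) < 1e-12:
        break
    d = r + (rr_new/rr)*d
    rr = rr_new

u = solveA(f - B.T @ p)                    # velocity recovered by back-substitution
print(np.linalg.norm(B @ u))               # discrete continuity residual, near zero
```

In exact arithmetic CG terminates in at most K_h iterations; the point of the text's discussion is that each iteration hides a full velocity solve, which motivates the preconditioners reviewed next.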
We will see in Section 9.6.7 a different approach which does not present this drawback. For approximations based on spectral methods, the constant β of the compatibility condition may in some cases depend on N, the degree of the
polynomial solution (see (9.5.20) and (9.5.31)). However, even in these cases the system (9.6.1) is faced by conjugate gradient outer iterations without preconditioning (or with the matrix J/ν as a preconditioner). When a_0 is different from 0 in (9.1) (let us recall that this case occurs, e.g., for time-dependent problems), the spectral condition number of R is no longer independent of the matrix dimension. Indeed, in this case χ_sp(R) is proportional to the spectral condition number of A, which grows like N_h in the finite element case and like N^3 in the case of spectral approximations. Using a preconditioner P for system (9.6.1) is therefore in order. Here, let us review some examples that have been suggested in this respect. To start with, let us notice that from (9.1) it follows (formally!) that the pressure p satisfies
−Δp = −div f                             in Ω
∂p/∂n = f · n + ν Δu · n − a_0 u · n     on ∂Ω .
Since u = 0 on ∂Ω, by neglecting ν Δu · n when ν is very small we see that p satisfies a Poisson problem with Neumann boundary conditions on ∂Ω. Thus it looks reasonable to define P as the K_h × K_h matrix associated with the Laplace operator with Neumann condition. This choice, which was proposed by Benque, Ibler, Keramsi and Labadie (1980), has been generalized later on by Cahouet and Chabard (1988), who suggested taking P as a suitable approximation of the operator (ν I − a_0 Δ^{−1})^{−1}, still with Neumann boundary condition. At the discrete level, this corresponds to taking as a preconditioner the matrix
P = (ν J^{−1} + a_0 H^{−1})^{−1} ,
where H := B M^{−1} B^T and M_ij := (φ_i, φ_j) is the velocity mass matrix (see (9.2.15)). With this choice, P^{−1}R turns out to be an approximation to the operator
(ν I − a_0 Δ^{−1}) div (a_0 I − ν Δ)^{−1} ∇ ,
which behaves like the identity when ν = 0 or a_0 = 0. Experimental results show that the spectral condition number of P^{−1}R is independent of the finite element mesh size, at least for those finite element approximations that satisfy the compatibility condition (9.2.9) (see Cahouet and Chabard (1988)). A slightly more general approach is to take an approximation to (μ_1 I − μ_2 Δ^{−1})^{−1}, where μ_1, μ_2 are chosen according to the kind of boundary conditions that are prescribed for the Stokes problem.
9.6.2 The Uzawa Method
This method is inspired by the following iteration procedure on the original problem (9.1). Let p^0 be given; for any k ≥ 0 solve
(9.6.3)  a_0 u^{k+1} − ν Δu^{k+1} = f − ∇p^k ,
with u^{k+1} = 0 on ∂Ω, and then
(9.6.4)  p^{k+1} = p^k − ρ div u^{k+1} .
The positive real number ρ is an acceleration parameter. If ρ is chosen suitably, this procedure is nothing but the gradient method applied to the dual of the saddle-point functional (9.1.12). Also, notice that (9.6.4) can be regarded as a backward Euler approximation of the pseudo-evolutionary continuity equation
∂p/∂t + div u = 0 ,
with ρ playing the role of a time-step Δt. At the discrete level, the Uzawa scheme is indeed the preconditioned Richardson method (see Section 2.4.1) applied to equation (9.6.1), with the pressure mass matrix J as a preconditioner. The convergence of (9.6.3)-(9.6.4) is achieved for 0 < ρ < 2ν (for a proof, see, e.g., Temam (1984)). However, it is generally slow, and can be accelerated using in (9.6.4) a preconditioner P for the pressure-matrix R of the same type as those considered throughout the previous section. The algebraic restatement of the preconditioned procedure (9.6.3)-(9.6.4) reads
(9.6.5)  A u^{k+1} = f − B^T p^k
         P (p^{k+1} − p^k) = ρ B u^{k+1} .
The convergence rate of this iterative method is usually independent of the mesh size h.
9.6.3 The Arrow-Hurwicz Method
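A minimal algebraic realization of the preconditioned Uzawa iteration (9.6.5), with the identity in place of the preconditioner P and random stand-in matrices (not an assembled Stokes problem), reads:

```python
import numpy as np

rng = np.random.default_rng(2)
Nh, Kh = 30, 8
G = rng.standard_normal((Nh, Nh))
A = G @ G.T + Nh*np.eye(Nh)                     # SPD stand-in
B = rng.standard_normal((Kh, Nh))               # full-rank stand-in
f = rng.standard_normal(Nh)

# reference solution of the block saddle-point system
S = np.block([[A, B.T], [B, np.zeros((Kh, Kh))]])
ref = np.linalg.solve(S, np.concatenate([f, np.zeros(Kh)]))
u_ref, p_ref = ref[:Nh], ref[Nh:]

# Richardson step length chosen here from the spectrum of R = B A^{-1} B^T
# (for this experiment only; the text gives the convergence bounds to use in practice)
lams = np.linalg.eigvalsh(B @ np.linalg.solve(A, B.T))
rho = 2.0/(lams[0] + lams[-1])

p = np.zeros(Kh)
for k in range(1000):
    u = np.linalg.solve(A, f - B.T @ p)         # first equation of (9.6.5)
    r = B @ u
    if np.linalg.norm(r) < 1e-10:
        break
    p = p + rho*r                               # second equation of (9.6.5), P = I

print(np.linalg.norm(u - u_ref), np.linalg.norm(p - p_ref))
```

Each sweep is exactly one Richardson step on (9.6.1); replacing `np.eye`-like preconditioning by the pressure mass matrix J, or the operators of Section 9.6.1, accelerates the iteration as discussed in the text.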
A more complicated scheme (but requiring the same computational effort as the Uzawa method for each iteration) is defined as follows. Let p^0 and u^0 be given; for k ≥ 0 solve
(9.6.6)  ϑ_0 u^{k+1} − Δu^{k+1} = ϑ_0 u^k − Δu^k + ρ (f − ∇p^k − a_0 u^k + ν Δu^k) ,
with u^{k+1} = 0 on ∂Ω, and then
(9.6.7)  p^{k+1} = p^k − σ div u^{k+1} .
The parameters ϑ_0 ≥ 0, ρ > 0 and σ > 0 are chosen to ensure convergence. A simple choice is often given by ϑ_0 = a_0/ν. A proof of convergence in the simplified case ϑ_0 = a_0 = 0 can be found, e.g., in Temam (1984). The proof in the general case follows a similar procedure, and convergence is achieved for 0 < ρ < min(2ϑ_0/a_0, 2σν/(1 + σν^2)).
9.6 Solving the Stokes System

Let us also notice that other types of symmetric and coercive preconditioning operators could be used instead of $\sigma_0 I - \Delta$ (typically, at the discrete level, suitable symmetric and positive definite preconditioners $P$ for the matrix $A/\nu$).

9.6.4 Penalty Methods
A different approach is based on the so-called penalty method, where the functional to be minimized is modified in the following way:

(9.6.8) $\quad \mathcal{T}_\varepsilon(v) := \dfrac{a_0}{2}\int_\Omega |v|^2 + \dfrac{\nu}{2}\int_\Omega |\nabla v|^2 + \dfrac{1}{2\varepsilon}\int_\Omega |\operatorname{div} v|^2 - (f, v)\,,$

$\varepsilon > 0$, and the minimum is now taken over $(H_0^1(\Omega))^d$. This corresponds to solving

(9.6.9) $\quad a_0 u_\varepsilon - \nu \Delta u_\varepsilon - \dfrac{1}{\varepsilon}\,\nabla \operatorname{div} u_\varepsilon = f \quad$ in $\Omega\,, \qquad u_\varepsilon = 0 \quad$ on $\partial\Omega\,,$

or, equivalently,

(9.6.10) $\quad a_0 u_\varepsilon - \nu \Delta u_\varepsilon + \nabla p_\varepsilon = f \quad$ in $\Omega\,, \qquad u_\varepsilon = 0 \quad$ on $\partial\Omega\,, \qquad \varepsilon\, p_\varepsilon + \operatorname{div} u_\varepsilon = 0 \quad$ in $\Omega\,.$

The constraint $\operatorname{div} u = 0$ is no longer satisfied, but it can be proven that $u_\varepsilon$ and $p_\varepsilon$ converge as $\varepsilon \to 0$ to the solution of (9.1), with first order accuracy (see, e.g., Bercovier (1978), Temam (1984)). It must be noticed, however, that the spectral condition number of the matrix corresponding to (9.6.9) behaves like $1/\varepsilon$, and that recovering the pressure $p_\varepsilon$ requires the additional solution of a linear system associated with the pressure mass matrix $J$. A different penalty method reads as follows: for each $\varepsilon > 0$ find the solution $(u_\varepsilon, p_\varepsilon)$ to

(9.6.11) $\quad a_0 u_\varepsilon - \nu \Delta u_\varepsilon + \nabla p_\varepsilon = f \quad$ in $\Omega\,, \qquad u_\varepsilon = 0 \quad$ on $\partial\Omega\,, \qquad -\varepsilon\, \Delta p_\varepsilon + \operatorname{div} u_\varepsilon = 0 \quad$ in $\Omega\,, \qquad \dfrac{\partial p_\varepsilon}{\partial n} = 0 \quad$ on $\partial\Omega\,.$

A Galerkin finite element approximation of (9.6.11) would lead to the problem (9.4.1), (9.4.2) with $h_K^2$ replaced by $\varepsilon$ in (9.4.2). Its analysis can be found in Brezzi and Pitkäranta (1984), and yields the error estimate
$\quad \|u - u_h\|_1 + \|p - p_h\|_0 = O(h)\,,$

having chosen $V_h$ and $Q_h$ as in (9.3.9) with $k = m = 1$. We underline again that penalty methods can indeed be seen as suitable stabilization procedures for the Stokes system (see Section 9.4). However, these schemes are not strongly consistent (in the sense made precise in (8.3.6)), since the exact solution satisfies $\operatorname{div} u = 0$.
9.6.5 The Augmented-Lagrangian Method

This is a combination of the Uzawa method and the penalty method (9.6.10). Still in differential form, the augmented-Lagrangian iterations read: being given $p^0$, for any $k \ge 0$ solve the system

(9.6.12) $\quad a_0 u^{k+1} - \nu \Delta u^{k+1} + \nabla p^{k+1} = f\,,$

(9.6.13) $\quad p^{k+1} - p^k = -\rho\, \operatorname{div} u^{k+1}\,,$

with homogeneous boundary conditions on $u^{k+1}$; $\rho$ is still an acceleration parameter. The algebraic form of (9.6.13) reads

(9.6.14) $\quad J(p^{k+1} - p^k) = \rho\, B u^{k+1}\,,$

where $J$ is the pressure mass matrix, i.e., $J_{lm} = (\psi_l, \psi_m)$ (see (9.2.15)) for finite element or spectral Galerkin approximations, while $J$ is the identity matrix for the spectral collocation method. Since $J$ is in any case nonsingular, $p^{k+1}$ can formally be eliminated and substituted into (9.6.12) to get the system

(9.6.15) $\quad (A + \rho\, B^T J^{-1} B)\, u^{k+1} = f - B^T p^k\,.$

This is a symmetric and positive definite system. Once it has been solved (e.g., by a conjugate gradient method with preconditioning), the pressure $p^{k+1}$ can be recovered from (9.6.14). The convergence of (9.6.12), (9.6.13) is achieved for any $\rho > 0$, and is very fast if $\rho$ is very large (see Fortin and Pierre (1992)). However, it must be noticed that the spectral condition number of the matrix $(A + \rho B^T J^{-1} B)$ in (9.6.15) increases as $\rho$ gets large (see Fortin and Glowinski (1983), p. 15). (Here it is assumed that $\ker B \neq \{0\}$, which is a necessary condition for finding a nonvanishing velocity approximation.) Sometimes, the augmented Lagrangian is used as a preconditioner for the conjugate gradient method applied to problem (9.2.14) (see Fortin (1989)). For a thorough presentation of the augmented-Lagrangian method we refer the reader to Glowinski and Le Tallec (1989).
9.6.6 Methods Based on Pressure Solvers

If the divergence operator is applied to the momentum equation in (9.1) we obtain

(9.6.16) $\quad \Delta p = \operatorname{div} f \quad$ in $\Omega\,,$

owing to the fact that $u$ is divergence free. The Poisson problem (9.6.16) demands a boundary condition on $p$ or on its normal derivative. In a formal way, taking the normal component of the momentum equation on $\partial\Omega$ yields

(9.6.17) $\quad \dfrac{\partial p}{\partial n} = f \cdot n + \nu\, \Delta u \cdot n \quad$ on $\partial\Omega\,,$

provided these terms make sense on $\partial\Omega$. The above relation furnishes a Neumann condition for problem (9.6.16), which unfortunately couples $u$ and $p$ to one another. A possible way of decoupling $u$ and $p$ on $\partial\Omega$ is to resort to the following iterative procedure: $p^0$ is given, and for each $k \ge 0$ solve

(9.6.18) $\quad a_0 u^{k+1} - \nu \Delta u^{k+1} = f - \nabla p^k \quad$ in $\Omega\,, \qquad u^{k+1} = 0 \quad$ on $\partial\Omega\,,$

then

(9.6.19) $\quad -\Delta p^{k+1} = -\operatorname{div} f \quad$ in $\Omega\,, \qquad \dfrac{\partial p^{k+1}}{\partial n} = f \cdot n + \nu\, \Delta u^{k+1} \cdot n \quad$ on $\partial\Omega\,.$

Another possibility is to endow (9.6.16) with a Dirichlet condition on the pressure. First of all, let us remark that if $(u, p)$ is a classical solution to (9.1), then both conditions (9.6.16) and

(9.6.20) $\quad \operatorname{div} u = 0 \quad$ on $\partial\Omega$

are satisfied. Conversely, if $(u, p)$ is a classical solution of the momentum equation $(9.1)_1$ and of $(9.1)_3$, (9.6.16) and (9.6.20), then it is at once verified that

$\quad a_0\, \operatorname{div} u - \nu\, \Delta \operatorname{div} u = 0 \quad$ in $\Omega\,, \qquad \operatorname{div} u = 0 \quad$ on $\partial\Omega\,,$

and therefore necessarily $\operatorname{div} u = 0$ in $\Omega$. Thus $(u, p)$ would be a classical solution to the Stokes problem (9.1). Unfortunately, (9.6.20) is not an admissible boundary condition for the Poisson problem (9.6.16). One may overcome this difficulty by a different approach that we are going to describe.
The idea is to look for an unknown value $\lambda$ on $\partial\Omega$ such that, if $p = p(\lambda)$ is the solution to the problem (9.6.16) with boundary condition

(9.6.21) $\quad p = \lambda \quad$ on $\partial\Omega\,,$

then the solution $u = u(\lambda)$ to the Dirichlet problem

(9.6.22) $\quad a_0 u(\lambda) - \nu \Delta u(\lambda) = f - \nabla p(\lambda) \quad$ in $\Omega\,, \qquad u(\lambda) = 0 \quad$ on $\partial\Omega$

satisfies (9.6.20). If this happens, in view of the previous derivation we would conclude that $(u, p)$ is actually a solution to (9.1). This procedure can be implemented in several ways; we present two of them, the influence matrix method and the integral condition method. Let us begin with the former. Let $H(\mu)$ be the harmonic extension of the boundary value $\mu$, i.e., the solution to

(9.6.23) $\quad \Delta H(\mu) = 0 \quad$ in $\Omega\,, \qquad H(\mu) = \mu \quad$ on $\partial\Omega\,.$
The solution $p(\lambda)$ to (9.6.16), (9.6.21) can be written as $p(\lambda) = H(\lambda) + p_0$, where $p_0$ solves

(9.6.24) $\quad \Delta p_0 = \operatorname{div} f \quad$ in $\Omega\,, \qquad p_0 = 0 \quad$ on $\partial\Omega\,.$

Similarly, the solution $u(\lambda)$ to (9.6.22) can be split as $u(\lambda) = \tilde u(\lambda) + u_0$, where

(9.6.25) $\quad a_0 \tilde u(\lambda) - \nu \Delta \tilde u(\lambda) = -\nabla H(\lambda) \quad$ in $\Omega\,, \qquad \tilde u(\lambda) = 0 \quad$ on $\partial\Omega\,,$

and

(9.6.26) $\quad a_0 u_0 - \nu \Delta u_0 = f - \nabla p_0 \quad$ in $\Omega\,, \qquad u_0 = 0 \quad$ on $\partial\Omega\,.$

Let us further introduce the function $\psi(\lambda)$, solution to

(9.6.27) $\quad \Delta \psi(\lambda) = \operatorname{div} u(\lambda) \quad$ in $\Omega\,, \qquad \psi(\lambda) = 0 \quad$ on $\partial\Omega\,.$

Applying the divergence operator to the momentum equation, from (9.6.16) and (9.6.27) it easily follows that

$\quad a_0\, \Delta\psi(\lambda) - \nu\, \Delta^2 \psi(\lambda) = 0 \quad$ in $\Omega\,.$

As a consequence, the condition $\partial\psi(\lambda)/\partial n = 0$ on $\partial\Omega$, if satisfied, would imply $\psi(\lambda) = 0$, hence $\operatorname{div} u(\lambda) = 0$ in $\Omega$.
By proceeding formally, using the Green formula (1.3.3) and (9.6.23) and (9.6.27), for each boundary function $\mu$ in a suitable class $\Lambda$ we have

$\quad \displaystyle \int_{\partial\Omega} \frac{\partial \psi(\lambda)}{\partial n}\, \mu = \int_\Omega \operatorname{div}\big[\nabla \psi(\lambda)\, H(\mu)\big] = \int_\Omega \Delta \psi(\lambda)\, H(\mu) + \int_\Omega \nabla \psi(\lambda) \cdot \nabla H(\mu) = \int_\Omega \operatorname{div} u(\lambda)\, H(\mu) - \int_\Omega \psi(\lambda)\, \Delta H(\mu) = \int_\Omega \operatorname{div} u(\lambda)\, H(\mu)\,.$

We are then led to looking for that function $\lambda \in \Lambda$ such that

(9.6.28) $\quad \displaystyle \int_\Omega \operatorname{div} u(\lambda)\, H(\mu) = 0 \qquad \forall\, \mu \in \Lambda\,.$

Using again the Green formula (1.3.3), equation (9.6.28) can be equivalently reformulated as

(9.6.29) $\quad$ find $\lambda \in \Lambda$ : $\quad \mathcal{A}(\lambda, \mu) = \mathcal{F}(\mu) \qquad \forall\, \mu \in \Lambda\,,$

where

$\quad \mathcal{A}(\eta, \mu) := a_0(\tilde u(\eta), \tilde u(\mu)) + \nu(\nabla \tilde u(\eta), \nabla \tilde u(\mu))\,, \qquad \mathcal{F}(\mu) := -a_0(u_0, \tilde u(\mu)) - \nu(\nabla u_0, \nabla \tilde u(\mu))\,.$

The form $\mathcal{A}(\cdot,\cdot)$ is symmetric. Moreover, if $\Lambda$ is suitably chosen, $\mathcal{A}(\cdot,\cdot)$ can be proven to be continuous and coercive on $\Lambda \times \Lambda$ (see Glowinski and Pironneau (1979), or Girault and Raviart (1986), p. 183). Hence the Lax-Milgram lemma ensures the existence of a unique solution $\lambda$ of (9.6.29). From the numerical point of view, the size of problem (9.6.29) is the number of gridpoints lying on $\partial\Omega$. This problem can be solved, for instance, by the conjugate gradient method. Once $\lambda$ is available on $\partial\Omega$, the Poisson problem (9.6.16) can be approximated, and finally $u$ can be recovered by solving problem (9.6.22), which, from the algebraic point of view, corresponds to two linear systems with the same matrix, one for each velocity component. A detailed analysis and an efficient implementation of this method can be found in Glowinski and Pironneau (1979) (see also Chinosi and Comodi (1991)) for finite element approximations, and in Kleiser and Schumann (1980), Canuto and Sacchi Landriani (1986) and Sacchi Landriani (1987) in the framework of spectral approximations. The name influence matrix method is borrowed from the name of the matrix transforming the vector of the values of $\lambda$ at the gridpoints on $\partial\Omega$ into that of $\operatorname{div} u(\lambda)$ at the same points.
The integral condition method has been proposed by Quartapelle and Napolitano (1986). The name originates from the fact that in this formulation the pressure is required to satisfy a condition of integral character, which, to some extent, plays the role of a pressure boundary condition. Its derivation is as follows (for definiteness, let us consider the three-dimensional case). Introducing the operator

(9.6.30) $\quad \operatorname{Curl} v := \nabla \times v\,,$

one easily verifies that $\Delta v = -\operatorname{Curl}\operatorname{Curl} v + \nabla \operatorname{div} v$. Therefore the following Green formula holds:

(9.6.31) $\quad \displaystyle -\int_\Omega \Delta w \cdot v = \int_\Omega \big(\operatorname{div} w\, \operatorname{div} v + \operatorname{Curl} w \cdot \operatorname{Curl} v\big) - \int_{\partial\Omega} \big[(\operatorname{div} w)\, v \cdot n + (\operatorname{Curl} w) \cdot n \times v\big]\,.$

Let $K(\mu)$ be the solution to

(9.6.32) $\quad a_0 K(\mu) - \nu \Delta K(\mu) = 0 \quad$ in $\Omega\,, \qquad K(\mu) \cdot n = \mu \quad$ on $\partial\Omega\,, \qquad K(\mu) \times n = 0 \quad$ on $\partial\Omega\,.$

Using (9.6.31) and (9.6.32), the solution $(u, p)$ to (9.1) satisfies

(9.6.33) $\quad \displaystyle \int_\Omega \nabla p \cdot K(\mu) = \int_\Omega (f - a_0 u + \nu \Delta u) \cdot K(\mu) = \int_\Omega f \cdot K(\mu) \qquad \forall\, \mu \in \Lambda\,.$

As this integral relation has to be satisfied for each boundary function $\mu$, it can be viewed as a boundary condition for the pressure. Starting from (9.6.33) we can formulate a variational problem whose solution provides the correct boundary value for the pressure. Having split $p(\lambda)$ as before, we seek $\lambda \in \Lambda$ such that

(9.6.34) $\quad \displaystyle \int_\Omega \nabla H(\lambda) \cdot K(\mu) = \int_\Omega (f - \nabla p_0) \cdot K(\mu) \qquad \forall\, \mu \in \Lambda\,.$

This problem replaces (9.6.29); notice that the bilinear form on the left hand side is not symmetric. Using (9.6.25) and (9.6.26) to rewrite $\nabla H(\lambda)$ and $\nabla p_0$, and integrating by parts the terms containing $\tilde u(\lambda)$ and $u_0$, from (9.6.31) it follows that the vector field $u(\lambda)$ corresponding to the solution $\lambda$ of (9.6.34) satisfies

(9.6.35) $\quad \displaystyle \nu \int_{\partial\Omega} \operatorname{div} u(\lambda)\, \mu = 0 \qquad \forall\, \mu \in \Lambda\,,$

i.e., $\operatorname{div} u(\lambda) = 0$ on $\partial\Omega$. An extensive analysis of the integral condition method, of its effective implementation, as well as a comparison with the influence matrix method, can be found in Quartapelle (1993), Sects. 5.3-5.5.

9.6.7 A Global Preconditioning Technique
A drawback of conjugate gradient iterations applied to the pressure-matrix $R$ (or else of the Uzawa method, which corresponds to preconditioned Richardson iterations for $R$) is that at each step of the iterative procedure one needs to compute the action of $R$, thus, in particular, the action of $A^{-1}$. If a double iteration is used, i.e., an outer conjugate gradient procedure is applied to (9.6.1) and an inner iteration is employed to solve the system related to $A$, a preconditioner for $A$ (say, $P_0$) ought to be used, as $A$ is ill-conditioned. Following this procedure, the overall solution process can be somewhat costly, as at each step of the outer iteration it is necessary to iterate the inner one until convergence. It is possible, however, to transform problem (9.2.14) into an equivalent one, still based on the unknowns $(u, p)$, which turns out to be symmetric and positive definite with respect to a suitable scalar product, and which can thus be tackled by the conjugate gradient method. The advantage now is that we only have to consider a single-level iteration procedure; moreover, it can be shown that each step just requires the action of the preconditioner $P_0^{-1}$. In many applications, the amount of work to solve (9.2.14) with this single-level iteration is comparable to that required for one evaluation of $A^{-1}$ in the double iteration procedure. The use of a preconditioner $P_0$ for the matrix $A$ is also a characteristic feature of Arrow-Hurwicz-like methods (see Section 9.6.3). However, these would require the selection of suitable iteration parameters $\rho$ and $\sigma$, and it is not clear how to choose them in an optimal way. In contrast, the application of the conjugate gradient method to the reformulated problem (9.6.38) (that we are going to introduce below) does not require criteria for the determination of any parameter, and furnishes in a natural way an optimally converging scheme.
Here we report this technique, which has been proposed by Bramble and Pasciak (1988), and which also applies to the system

(9.6.36) $\quad A u + B^T p = f\,, \qquad B u - D p = g\,,$

under the assumptions that $A$ is symmetric and positive definite, $\ker B^T = \{0\}$ and $D$ is symmetric and nonnegative definite. System (9.6.36) is slightly more general than (9.2.14), and it includes the case of the stabilization methods that we have discussed in Section 9.4.
Let $P_0$ be a convenient symmetric and positive definite preconditioner of $A$ (e.g., its incomplete Cholesky decomposition, or a finite element preconditioner if $A$ is obtained by a spectral approximation), satisfying moreover

(9.6.37) $\quad K_1\, (P_0 v, v) \le (A v, v) \le K_2\, (P_0 v, v) \qquad \forall\, v \in \mathbb{R}^{N_h}\,,$

with $K_1 > 1$. As a consequence, $((A - P_0)v, v) \ge (K_1 - 1)(P_0 v, v)$ for each $v \in \mathbb{R}^{N_h}$. From (9.6.36) we deduce

$\quad P_0^{-1} A u + P_0^{-1} B^T p = P_0^{-1} f$

and also

$\quad 0 = B P_0^{-1}(A u + B^T p - f) = B P_0^{-1}(A - P_0) u + B u + B P_0^{-1} B^T p - B P_0^{-1} f\,.$

Replacing $B u$ by $D p + g$ we find

(9.6.38) $\quad S^* \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} P_0^{-1} f \\ B P_0^{-1} f - g \end{pmatrix}\,,$

where

$\quad S^* := \begin{pmatrix} P_0^{-1} A & P_0^{-1} B^T \\ B P_0^{-1}(A - P_0) & D + B P_0^{-1} B^T \end{pmatrix}\,.$

Introducing the scalar product

$\quad \left[ \begin{pmatrix} v \\ q \end{pmatrix}, \begin{pmatrix} w \\ r \end{pmatrix} \right] := ((A - P_0)w, v) + (r, q)\,,$

it can be seen that $S^*$ is symmetric and positive definite with respect to $[\cdot\,,\cdot]$; more precisely, one finds $[S^* x, x] \ge C_0\, [x, x]$ for a suitable constant $C_0 > 0$. Moreover, the spectral condition number $\chi_{sp}(S^*)$ satisfies

$\quad \dfrac{1}{K_3}\, \chi_{sp}(\tilde S) \le \chi_{sp}(S^*) \le K_3\, \chi_{sp}(\tilde S)\,,$

where $K_3 > 0$ is related to the constants $K_1$ and $K_2$ in (9.6.37), and

$\quad \tilde S := \begin{pmatrix} I & 0 \\ 0 & D + B A^{-1} B^T \end{pmatrix}\,.$

Thus the conjugate gradient method can be successfully applied to (9.6.38), yielding at each step the solution of linear systems associated with the matrix $P_0$, which can be carried out by direct methods. Notice finally that, in applications, the selected preconditioner $P_0$ may not satisfy condition (9.6.37) with $K_1 > 1$. A scaling procedure is thus in order: the optimal scaling factor $\eta^*$ can be determined, for instance, by estimating the lowest eigenvalue $\eta_1$ of $P_0^{-1} A$ (recall that, by Theorem 2.5.1, the largest possible constant $K_1$ in (9.6.37) is indeed given by $\eta_1$), and then choosing $\eta^*$ smaller than, but close enough to, $\eta_1$.
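The structure of the transformed system (9.6.38) can be checked numerically on a toy problem. Below, all matrices are random stand-ins with $D = 0$ and $g = 0$, and the preconditioner is a scaled identity chosen so that (9.6.37) holds with $K_1 = 2 > 1$; the sketch verifies that $S^*$ is symmetric and positive definite with respect to the scalar product induced by $\mathrm{diag}(A - P_0, I)$, and that it reproduces the saddle-point solution:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 8, 5

X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)                      # SPD stand-in
B = rng.standard_normal((m, n))                  # full row rank, so ker B^T = {0}
f = rng.standard_normal(n)
g = np.zeros(m)

# Scaled preconditioner: P0 = 0.5 * lambda_min(A) * I gives K1 = 2 > 1 in (9.6.37)
P0 = 0.5 * np.linalg.eigvalsh(A).min() * np.eye(n)
P0inv = np.linalg.inv(P0)

S_star = np.block([
    [P0inv @ A,            P0inv @ B.T],
    [B @ P0inv @ (A - P0), B @ P0inv @ B.T],     # D = 0 here
])
rhs = np.concatenate([P0inv @ f, B @ P0inv @ f - g])

# Matrix of the scalar product [(v,q),(w,r)] = ((A - P0) w, v) + (r, q)
H = np.block([
    [A - P0,           np.zeros((n, m))],
    [np.zeros((m, n)), np.eye(m)],
])
HS = H @ S_star
print(np.linalg.norm(HS - HS.T))                            # symmetry w.r.t. [.,.]
print(np.linalg.eigvalsh(0.5 * (HS + HS.T)).min())          # positivity w.r.t. [.,.]

# The transformed system reproduces the saddle-point solution
x = np.linalg.solve(S_star, rhs)
R = B @ np.linalg.solve(A, B.T)
p_ex = np.linalg.solve(R, B @ np.linalg.solve(A, f))
u_ex = np.linalg.solve(A, f - B.T @ p_ex)
print(np.linalg.norm(x - np.concatenate([u_ex, p_ex])))
```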
9.7 Complements

In recent years, the stabilization methods described in Section 9.4 have been intensively used for numerical computations. An account is given in Tezduyar (1992). In the spectral collocation case, another approach can be based on the use of a staggered grid: the usual Gauss-Lobatto nodes are employed to represent velocities, while Gauss nodes (internal to $\Omega$) are associated with the pressure field (see, e.g., Bernardi and Maday (1992)). When the Chebyshev (rather than the Legendre) collocation points are used, in the matrix $S$ associated with the Stokes operator the non-diagonal blocks are no longer the transpose of one another. This is due to the fact that the $\nabla$ operator is not the adjoint of $(-\operatorname{div})$ when taken with respect to the Chebyshev discrete scalar product. Therefore, in this case the analysis of the collocation approximation of the Stokes problem has to be performed using the theory developed by Bernardi, Canuto and Maday (1988) (see also Section 7.5). A global preconditioning technique (different from the one presented in Section 9.6.7) consists of using the following block-preconditioner for system (9.6.36):

$\quad P = \begin{pmatrix} P_0 & 0 \\ 0 & Q_0 \end{pmatrix}\,,$

where $P_0$ is still a preconditioner of $A$, while $Q_0$ is a preconditioner of the pressure mass matrix $J$. If $P_0$ is an optimal preconditioner of $A$ and $Q_0$ of $J$, then $P$ turns out to be an optimal preconditioner for (9.6.36), provided that there exists $\kappa > 0$, independent of $h$, such that $Q_0$ is spectrally equivalent to $J$ with equivalence constant $\kappa$ (see Wathen and Silvester (1993), Silvester and Wathen (1994)). We conclude by mentioning the possibility of tackling the Stokes system (9.2.14) directly by a multigrid method. The interested reader is referred to Verfürth (1984a) and Wittum (1987, 1988) for finite element approximations, and to Heinrichs (1992, 1993) for discretizations based on spectral methods.
10. The Steady Navier-Stokes Problem

This Chapter addresses the steady Navier-Stokes equations, which describe the motion, independent of time, of a homogeneous incompressible fluid. A complete derivation of these equations will be provided in Section 13.1. After introducing the weak form of the problem, we focus on the approximation of branches of nonsingular solutions by finite elements and spectral methods. Next, we analyze several iterative procedures to solve the system of nonlinear equations. We finish by considering the formulation of the Navier-Stokes equations in terms of the stream function and vorticity variables.
10.1 Mathematical Formulation

The Navier-Stokes equations provide a model of the flow motion of a homogeneous incompressible Newtonian fluid. In the steady case they read

(10.1.1) $\quad -\nu \Delta u + (u \cdot \nabla) u + \nabla p = f \quad$ in $\Omega\,,$

(10.1.2) $\quad \operatorname{div} u = 0 \quad$ in $\Omega\,,$

where we have set

$\quad (u \cdot \nabla) u := \sum_{i=1}^d u_i D_i u\,.$

Moreover, $\Omega$ is a bounded domain of $\mathbb{R}^d$, $d = 2, 3$, with a Lipschitz continuous boundary $\partial\Omega$, and we retain the notations of Chapter 9. Therefore, $u$ denotes the velocity of the fluid, $p$ the ratio between its pressure and density, $f$ is the external force field per unit mass and $\nu > 0$ is the constant kinematic viscosity. The derivation of equations (10.1.1), (10.1.2) is presented in Section 13.1. The above equations need to be supplemented by some boundary conditions. For the sake of simplicity, we consider the following homogeneous Dirichlet condition:

(10.1.3) $\quad u = 0 \quad$ on $\partial\Omega\,,$

which describes a fluid confined in a domain $\Omega$ whose boundary is fixed. Other boundary conditions are admissible as well, and account for different
kinds of physical situations. A short description is given in Section 10.1.1 below. The Navier-Stokes system differs from the Stokes one by the presence of the nonlinear convective term $(u \cdot \nabla) u$. If we consider the two Hilbert spaces $V = (H_0^1(\Omega))^d$ and $Q = L_0^2(\Omega)$ as in Chapter 9, the weak formulation of (10.1.1)-(10.1.3) can be stated as follows: given $f \in (L^2(\Omega))^d$,

(10.1.4) $\quad$ find $u \in V$, $p \in Q$ :
$\quad a(u, v) + c(u; u, v) + b(v, p) = (f, v) \qquad \forall\, v \in V\,,$
$\quad b(u, q) = 0 \qquad \forall\, q \in Q\,.$

We recall that $a : V \times V \to \mathbb{R}$ and $b : V \times Q \to \mathbb{R}$ are the bilinear forms $a(w, v) := \nu(\nabla w, \nabla v)$ and $b(v, q) := -(q, \operatorname{div} v)$, where $(\cdot\,,\cdot)$ denotes the scalar product in $L^2(\Omega)$ or $(L^2(\Omega))^d$. Besides, $c : V \times V \times V \to \mathbb{R}$, defined by

(10.1.5) $\quad c(w; z, v) := \displaystyle \int_\Omega [(w \cdot \nabla) z] \cdot v = \sum_{i,j=1}^d \left( w_j \frac{\partial z_i}{\partial x_j},\, v_i \right)\,,$

is the trilinear form associated with the nonlinear convective term. At first, we notice that the form $c(\cdot\,; \cdot\,, \cdot)$ is continuous on $(H^1(\Omega))^d$. Indeed, by using the Hölder inequality it is easy to see that

$\quad \left| \displaystyle \int_\Omega w_j \frac{\partial z_i}{\partial x_j}\, v_i\, dx \right| \le \|w_j\|_{L^4(\Omega)} \left\| \frac{\partial z_i}{\partial x_j} \right\|_0 \|v_i\|_{L^4(\Omega)}\,.$

Since $H^1(\Omega) \subset L^4(\Omega)$ for $d = 2, 3$ (see Theorem 1.3.4), owing to the Poincaré inequality (1.3.2) we conclude that there exists a constant $\hat C > 0$ such that

(10.1.6) $\quad |c(w; z, v)| \le \hat C\, |w|_1\, |z|_1\, |v|_1 \qquad \forall\, w, z, v \in (H_0^1(\Omega))^d\,.$

(See Section 1.2 for the definition of norms and seminorms in Sobolev spaces.) In particular, for any fixed $w \in V$, the map $v \to c(w; w, v)$ is linear and continuous on $V$, i.e., it is an element of $V'$. Alternatively to (10.1.4), we can formulate the Navier-Stokes problem as

(10.1.7) $\quad$ find $u \in V_{\mathrm{div}}$ : $\quad a(u, v) + c(u; u, v) = (f, v) \qquad \forall\, v \in V_{\mathrm{div}}\,,$

where $V_{\mathrm{div}}$ is the subspace of $V$ of divergence-free functions introduced in (9.1.3). Clearly, problems (10.1.4) and (10.1.7) provide the nonlinear generalizations of problems (9.1.11) and (9.1.6), respectively. If $(u, p)$ is a solution to (10.1.4), then $u$ is a solution to (10.1.7). The converse is also true, in the sense stated by the following result.
Lemma 10.1.1 Let $u$ be a solution to problem (10.1.7). Then there exists a unique $p \in Q$ such that $(u, p)$ is a solution of problem (10.1.4).

Proof. The map $v \to a(u, v) + c(u; u, v) - (f, v)$ belongs to $V'$, and vanishes on $V_{\mathrm{div}}$. Therefore, the thesis follows from Lemma 9.1.1. $\square$

Let us come now to the proof of the existence and uniqueness of a solution to (10.1.7). At first define the space

(10.1.8) $\quad H_{\mathrm{div}} := \{ v \in (L^2(\Omega))^d \mid \operatorname{div} v = 0 \text{ in } \Omega\,,\ v \cdot n = 0 \text{ on } \partial\Omega \}\,,$

where $n$ is the unit outward normal vector on $\partial\Omega$. Further, as norm in $V_{\mathrm{div}}$ choose the $H^1(\Omega)$-seminorm $|\cdot|_1$.

Theorem 10.1.1 Let $f \in H_{\mathrm{div}}$ with

(10.1.9) $\quad \dfrac{\|f\|_0}{\nu^2} < \dfrac{1}{\hat C\, C_\Omega^{1/2}}\,,$

where $\hat C > 0$ is the constant appearing in (10.1.6) and $C_\Omega > 0$ is the constant of the Poincaré inequality (1.3.2). Then there exists a unique solution $u \in V_{\mathrm{div}}$ to problem (10.1.7).

Proof. For each $w \in V_{\mathrm{div}}$ let us define the bilinear form

(10.1.10) $\quad A_w(z, v) := a(z, v) + c(w; z, v)\,,$

which is clearly continuous on $V_{\mathrm{div}} \times V_{\mathrm{div}}$. Besides, it is also coercive, as integrating by parts one obtains for each $v \in V_{\mathrm{div}}$

(10.1.11) $\quad A_w(v, v) = \nu |v|_1^2 + \displaystyle \int_\Omega \sum_{i,j=1}^d w_i v_j D_i v_j = \nu |v|_1^2 + \frac{1}{2} \int_\Omega \sum_{i=1}^d w_i D_i(|v|^2) = \nu |v|_1^2 - \frac{1}{2} \int_\Omega \operatorname{div} w\, |v|^2 + \frac{1}{2} \int_{\partial\Omega} w \cdot n\, |v|^2 = \nu |v|_1^2\,.$

Applying the Lax-Milgram lemma (see Theorem 5.1.1), for each $w \in V_{\mathrm{div}}$ there exists a unique solution $z \in V_{\mathrm{div}}$ to

(10.1.12) $\quad A_w(z, v) = (f, v) \qquad \forall\, v \in V_{\mathrm{div}}\,.$

A fixed point of the nonlinear map
$0\}$. We first propose a general approximation method for solving (10.2.1), and illustrate its properties in an abstract fashion. This is done in Section 10.2.1. Next, we consider two different approximation methods that are specifically devised for the Navier-Stokes equations, and show how they fit into the general framework described in Section 10.2.1. In particular, in Section 10.2.2 we investigate mixed approximation methods, which consist in discretizing directly both momentum and continuity equations. We also make a few comments about approximations based on projection upon divergence-free subspaces. In Section 10.2.3 we address approximations based on spectral collocation methods.

10.2 Finite Dimensional Approximation

We say that $\{(\lambda, w(\lambda)) \mid \lambda \in \Lambda\}$ is a branch of solutions of (10.2.1) if $\lambda \to w(\lambda)$ is a continuous function from $\Lambda$ into $W$ and $F(\lambda, w(\lambda)) = 0$. Our analysis will be confined to the approximation of a branch of nonsingular solutions of (10.2.1). By that we mean that on the "curve" $\{(\lambda, w(\lambda)) \mid \lambda \in \Lambda\}$ the Fréchet derivative $D_w F$ of the map $F$ with respect to $w$ is an isomorphism of $W$. This happens, for instance, if there exists a constant $\alpha > 0$ such that

$\quad \|D_w F(\lambda, w(\lambda))\, v\|_W = \|v + T\, D_w G(\lambda, w(\lambda))\, v\|_W \ge \alpha\, \|v\|_W \qquad \text{for all } v \in W\,,\ \lambda \in \Lambda\,.$

The symbol $D_w F(\lambda_0, w_0)$ denotes the Fréchet derivative of $F$ with respect to the variable $w$, computed at the point $(\lambda_0, w_0)$. In particular, we notice that if the Reynolds number is small enough (namely, the viscosity $\nu$ satisfies (10.1.9)) and $(u, p)$ is the unique solution of problem (10.1.4), then $\lambda = 1/\nu$, $w = (u, p/\nu)$ is a nonsingular solution of (10.1.27) (for a proof, see, e.g., Girault and Raviart (1986), p. 300). The Reynolds number $Re$ is a nondimensional number, defined as follows (see Landau and Lifshitz (1959), Sect. 19):

(10.2.2) $\quad Re := \dfrac{l\, v^*}{\nu}\,,$

where $l$ is a characteristic length of the domain $\Omega$ and $v^*$ a typical velocity of the flow. Other definitions are also possible, involving different measures of the data. Looking back at Theorem 10.1.1, we could for instance define the Reynolds number as

$\quad Re := \dfrac{l^{3/2 - d/4}\, \|f\|_0^{1/2}}{\nu}\,.$

The Reynolds number in the form (10.2.2) expresses the ratio of convection to diffusion. It has no absolute meaning: it enables problems with the same geometry and similar data (boundary and source terms) to be compared. In practice, the larger $Re$ is, the more difficult the problem is to handle.

10.2.1 An Abstract Approximate Problem
Let $\{W_h \mid h > 0\}$ be a family of finite dimensional subspaces of $W$. The approximation method for problem (10.2.1) that we are going to consider has the following abstract form: given $\lambda \in \Lambda$,

(10.2.3) $\quad$ find $w_h \in W_h$ : $\quad F_h(\lambda, w_h) := w_h + T_h\, G(\lambda, w_h) = 0\,.$

The discrete linear operator $T_h : Y \to W_h$ is an approximation of the linear operator $T$. We will make a set of assumptions on $T_h$ that guarantee the existence of a branch of solutions of (10.2.3) and its convergence as $h \to 0$ to a branch of nonsingular solutions of (10.2.1).
To begin with, we make some additional assumptions on $G$ and $T$. We will denote by $\||\cdot\||$ the norm of a bilinear map $F : X_1 \times X_2 \to X_3$, where $X_1$, $X_2$ and $X_3$ are Banach spaces. We suppose that

(10.2.4) $\quad$ there exists a Banach space $H \subset Y$, with continuous imbedding, such that $G$ is a $C^2$ map from $\Lambda \times W$ into $H$.

Moreover, we require that $D^2 G$ is bounded on every bounded subset of $\Lambda \times W$. This means that there exists a locally bounded function $P : \mathbb{R}_+ \to \mathbb{R}_+$ such that

(10.2.5) $\quad \||D^2 G(\lambda, w)\|| \le P(|\lambda| + \|w\|_W) \qquad \forall\, (\lambda, w) \in \Lambda \times W\,.$

Finally, we assume that

(10.2.6) $\quad T$ is a compact operator from $H$ into $W$

(for the definition of a compact operator between Banach spaces we refer to Section 1.3). Since we have supposed that $T \in \mathcal{L}(Y; W)$, (10.2.6) is satisfied when the inclusion $H \subset Y$ is compact. The following result states the conditions under which problem (10.2.3) has solutions that are stable and converge to those of problem (10.2.1).

Theorem 10.2.1 Suppose that (10.2.4)-(10.2.6) are satisfied. Assume moreover that:

a. the operators $T_h \in \mathcal{L}(Y; W)$ satisfy

(10.2.7) $\quad \lim_{h \to 0} \|(T - T_h)\psi\|_W = 0 \quad \forall\, \psi \in Y\,, \qquad \lim_{h \to 0} \|T - T_h\|_{\mathcal{L}(H; W)} = 0\,;$

b. there exists a linear operator $\Pi_h : W \to W_h$ satisfying

(10.2.8) $\quad \lim_{h \to 0} \|v - \Pi_h v\|_W = 0 \qquad \forall\, v \in W\,.$

Then there exists a neighbourhood $\Theta$ of the origin in $W$ and, for $h$ small enough, a unique $C^2$-mapping $\lambda \in \Lambda \to w_h(\lambda) \in W_h$ such that, for all $\lambda \in \Lambda$,

(10.2.9) $\quad F_h(\lambda, w_h(\lambda)) = 0 \qquad \text{and} \qquad w_h(\lambda) - w(\lambda) \in \Theta\,.$

Further, the following convergence inequality holds:

(10.2.10) $\quad \|w(\lambda) - w_h(\lambda)\|_W \le C\, \{ \|w(\lambda) - \Pi_h w(\lambda)\|_W + \|(T - T_h)\, G(\lambda, w(\lambda))\|_W \}\,,$

where $C > 0$ is independent of both $h$ and $\lambda$.

Notice that $(10.2.7)_2$ follows from $(10.2.7)_1$ provided that the inclusion $H \subset Y$ is compact. A proof of Theorem 10.2.1 is given in Girault and Raviart (1986), pp. 307-308. A slightly different version was formerly proven in Brezzi, Rappaz and Raviart (1980). A more general result, in which also the nonlinear map $G$ is approximated by a suitable $G_h$, has been obtained by Maday and Quarteroni (1982b). The convergence estimate (10.2.10) involves all the ingredients of the finite dimensional approximation. As a matter of fact, the first term on the right hand side measures the distance of the subspace $W_h$ from the solution $w(\lambda)$, and the second term depends on how well the linear operator $T$ is approximated by $T_h$.
10.2.2 Approximation by Mixed Finite Element Methods

An approximation to the Navier-Stokes equations (10.1.1)-(10.1.3) can be devised by applying a Galerkin method to (10.1.4). With this aim, let $\{V_h \mid h > 0\}$ be a family of subspaces of $V$, and $\{Q_h \mid h > 0\}$ a family of subspaces of $Q$. For the sake of exposition, we may think of $Q_h$ and $V_h$ as finite element spaces, although what we are developing can easily be adapted to the case of spectral Galerkin approximations as well. We therefore assume that $Q_h$ and $V_h$ are defined as in Section 9.3 and that they are compatible spaces, i.e., satisfy the compatibility condition (9.2.9). Besides, the following approximation properties are supposed to hold: there exist two operators $r_h : (H^1(\Omega))^d \to V_h$ and $s_h : L^2(\Omega) \to Q_h$ such that

(10.2.11) $\quad \|v - r_h(v)\|_1 \le C\, h^{l_1}\, \|v\|_{l_1+1} \quad \forall\, v \in (H^{l_1+1}(\Omega))^d\,, \qquad \|q - s_h(q)\|_0 \le C\, h^{l_2+1}\, \|q\|_{l_2+1} \quad \forall\, q \in H^{l_2+1}(\Omega)\,,$

for certain integers $l_1 \ge 1$, $l_2 \ge 0$. For instance, if the finite dimensional subspaces are given by $V_h = (X_h^k \cap H_0^1(\Omega))^d$ and $Q_h = Y_h^m \cap L_0^2(\Omega)$ as in (9.3.4), we can choose $r_h = \pi_h^k$, the interpolation operator over $V_h$, and $s_h = P_h^m$, the $L^2$-orthogonal projection over $Q_h$, respectively. With this choice, (10.2.11) holds with $l_1 = k$ and $l_2 = m$. Then for each $h > 0$ we consider the problem:

(10.2.12) $\quad$ find $u_h \in V_h$, $p_h \in Q_h$ :
$\quad a(u_h, v_h) + c(u_h; u_h, v_h) + b(v_h, p_h) = (f, v_h) \qquad \forall\, v_h \in V_h\,,$
$\quad b(u_h, q_h) = 0 \qquad \forall\, q_h \in Q_h\,,$

which is nothing but the nonlinear generalization of the finite dimensional Stokes problem (9.2.4).

Problem (10.2.12) can be represented in the form (10.2.3). Indeed, define $W = V \times Q$, $Y = V' = (H^{-1}(\Omega))^d$ and $\Lambda = \mathbb{R}_+$ as in (10.1.28), $T$ as in (10.1.29), (10.1.30) and $G$ as in (10.1.31). Furthermore, let us set $W_h = V_h \times Q_h$ and define $T_h : (H^{-1}(\Omega))^d \to W_h$ as follows: for any $f^* \in (H^{-1}(\Omega))^d$, $T_h f^* := (u_h^*, p_h^*) \in V_h \times Q_h$ is such that

(10.2.13) $\quad (\nabla u_h^*, \nabla v_h) + b(v_h, p_h^*) = (f^*, v_h) \quad \forall\, v_h \in V_h\,, \qquad b(u_h^*, q_h) = 0 \quad \forall\, q_h \in Q_h\,.$

In other words, (10.2.13) is the finite dimensional approximation to the Stokes problem (10.1.30). If we set $w_h := (u_h, p_h/\nu)$, we deduce from (10.2.12) that $F_h(\lambda, w_h) = 0$ with $\lambda = 1/\nu$. Hereafter, we briefly outline how this approximation matches the requirements of Theorem 10.2.1. A complete proof can be found, e.g., in Girault and Raviart (1986), Chap. IV, Sects. 4.1 and 4.2. First of all, by applying the convergence theory presented in Section 9.2 and proceeding as in Proposition 6.2.1, it can be concluded that

$\quad \lim_{h \to 0}\, \{ \|u^* - u_h^*\|_1 + \|p^* - p_h^*\|_0 \} = 0\,,$

i.e., for all $f^* \in (H^{-1}(\Omega))^d$ we have

(10.2.14) $\quad \lim_{h \to 0} \|(T - T_h) f^*\|_W = 0\,.$

Therefore, $(10.2.7)_1$ is satisfied. Now, we wish to find a space $H$ verifying (10.2.4), (10.2.5) and, moreover, compactly imbedded in $Y$ (so that (10.2.6) and $(10.2.7)_2$ would also be satisfied). We claim that a good candidate is, for instance, $H = (L^{3/2}(\Omega))^d$. As a matter of fact, from Theorem 1.3.5 we know that the inclusion $L^{3/2}(\Omega) \subset H^{-1}(\Omega)$ is compact, hence $H$ is compactly imbedded into $Y$. Moreover, we notice that, when $d \le 3$, owing to Theorem 1.3.4 the space $H_0^1(\Omega)$ is continuously imbedded into $L^6(\Omega)$. Therefore, using the Hölder inequality (1.2.8), for each $v, w \in V$ we have

$\quad \displaystyle \sum_{j=1}^d \left( v_j \frac{\partial w}{\partial x_j} + w_j \frac{\partial v}{\partial x_j} \right) \in (L^{3/2}(\Omega))^d\,,$

and (10.2.4), (10.2.5) hold, provided we assume $f \in (L^{3/2}(\Omega))^d$. Finally, let us define for each $v = (v, q) \in V \times Q$

$\quad \Pi_h v := (P_{V_h}(v),\, P_{Q_h}(q))\,,$

where $P_{V_h}$ and $P_{Q_h}$ are the orthogonal projections over $V_h$ and $Q_h$ with respect to the scalar products of $(H^1(\Omega))^d$ and $L^2(\Omega)$, respectively. By proceeding as in Proposition 6.2.1, using (10.2.11) we can conclude that property (10.2.8) is satisfied. In view of Theorem 10.2.1 we conclude that, for $h$ small enough, there exists a unique branch of nonsingular solutions of (10.2.12). Moreover, for what concerns the convergence estimate (10.2.10), in the present situation we obtain

(10.2.15) $\quad \|u(\lambda) - u_h(\lambda)\|_1 + \|p(\lambda) - p_h(\lambda)\|_0 \le C\, \big( h^{l_1}\, \|u(\lambda)\|_{l_1+1} + h^{l_2+1}\, \|p(\lambda)\|_{l_2+1} \big)\,.$

As a matter of fact,

$\quad \|w(\lambda) - \Pi_h w(\lambda)\|_W^2 = \|u(\lambda) - P_{V_h}(u(\lambda))\|_1^2 + \|p(\lambda) - P_{Q_h}(p(\lambda))\|_0^2 \le \|u(\lambda) - r_h(u(\lambda))\|_1^2 + \|p(\lambda) - s_h(p(\lambda))\|_0^2\,,$

and the right hand side can be bounded making use of (10.2.11). Moreover, $\|(T - T_h)\, G(\lambda, w(\lambda))\|_W$ is nothing but the error arising from the finite element approximation of a Stokes problem whose right hand side is $G(\lambda, w(\lambda))$. It can therefore be bounded by the same right hand side of (10.2.15), owing to the results of Section 9.3 on the Stokes problem. In the case in which the finite dimensional subspace is $V_{\mathrm{div},h} \subset V_{\mathrm{div}}$, by generalizing what has been done in Section 9.2 for the Stokes problem we can find the following Galerkin approximation to problem (10.1.7):

$\quad$ find $u_h \in V_{\mathrm{div},h}$ : $\quad a(u_h, v_h) + c(u_h; u_h, v_h) = (f, v_h) \qquad \forall\, v_h \in V_{\mathrm{div},h}\,.$

It is easy to see that this problem can be formulated as (10.2.3), and (10.1.7) as (10.1.27). The analysis can therefore be carried out using Theorem 10.2.1. The interested reader can refer to Girault and Raviart (1986), Chap. IV, Sect. 4.3.
10.2.3 Approximation by Spectral Collocation Methods

When the Navier-Stokes equations are approximated by the spectral collocation method, or else by finite elements with numerical integration, the result is a system like (10.2.12) in which all integrals are a priori replaced by convenient quadrature formulae. As an example, we suppose that $\Omega = (-1, 1)^2$ and adopt the $\mathbb{Q}_N \times \mathbb{Q}_{N-2}$ representation considered in Section 9.5.3, therefore $V_N :=$

$\quad \displaystyle \int_0^1 \big[ D_w F(\lambda, w^n + t(w - w^n)) - D_w F(\lambda, w^0) \big](w - w^n)\, dt\,.$
Assuming that $w^n$ belongs to $S(w; \delta'')$ we deduce

$\quad \|w^{n+1} - w\|_W \le 2\gamma K\, (\|w^n - w^0\|_W + \|w^n - w\|_W)\, \|w^n - w\|_W \le 6\gamma K \delta''\, \|w^n - w\|_W\,.$

Taking $\delta'' < 1/(6\gamma K)$ we find that $w^{n+1}$ belongs to $S(w; \delta'')$ and, further, that the sequence $\{w^n\}$ generated by (10.3.3) converges linearly to $w$. When applied to the Navier-Stokes problem (10.1.4), the Newton method (10.3.2) reads (remember that $\lambda = 1/\nu$ in this case): find $(u^{n+1}, p^{n+1}) \in V \times Q$ such that
a(un+1, v) + c(u n+1 ju", v) + c(u n; u n+1 , v) + b(v,pn+l) (10.3.8)
= c(unjUn,v) +(f,v)
{
b(u n +1 , q) = 0
'VvEV 'V q E Q
Likewise, the variant (10.3.3) reads: find (uⁿ⁺¹, pⁿ⁺¹) ∈ V × Q such that

$$ (10.3.9) \qquad \begin{cases} a(u^{n+1}, v) + c(u^{n+1}; u^0, v) + c(u^0; u^{n+1}, v) + b(v, p^{n+1}) = c(u^0 - u^n; u^n, v) + c(u^n; u^0, v) + (f, v) & \forall\, v \in V \\ b(u^{n+1}, q) = 0 & \forall\, q \in Q . \end{cases} $$
A few comments are in order. First of all, the new iterate (uⁿ⁺¹, pⁿ⁺¹) is independent of pⁿ. Moreover, since D²_wG(λ, w) is constant, D_wF(λ, w) is obviously Lipschitz continuous. Therefore, if (u, p) is a nonsingular solution of (10.1.4) (equivalently, of (10.1.1)–(10.1.3)), for any u⁰ sufficiently close to
10. The Steady Navier–Stokes Problem
u, the scheme (10.3.8) (respectively, (10.3.9)) determines a unique sequence that converges quadratically (respectively, linearly) to (u, p). A major drawback of both algorithms (10.3.2) and (10.3.3) is that the initial guess needs to be near the exact solution. When this solution is a point of a branch of nonsingular solutions, i.e., w = w(λ) for a fixed λ, and a neighbouring solution is known, say w(λ − Δλ) for some small increment Δλ, then the latter value can be used to provide the initial guess for the iterative procedure. The resulting procedure is known as the continuation method. We describe it here. By formally differentiating (10.1.27) we obtain
$$ D_wF(\lambda, w(\lambda))\,\frac{dw(\lambda)}{d\lambda} + D_\lambda F(\lambda, w(\lambda)) = 0 \qquad \forall\, \lambda \in \Lambda , $$

and therefore

$$ (10.3.10) \qquad \frac{dw(\lambda)}{d\lambda} = -\psi(\lambda) , \quad \text{where} \quad \psi(\lambda) := [D_wF(\lambda, w(\lambda))]^{-1} D_\lambda F(\lambda, w(\lambda)) . $$

The ordinary differential equation (10.3.10) can be solved by any single-step method (such as, e.g., Euler, Runge–Kutta) or by an explicit multistep method. If, for the sake of simplicity, the forward Euler method is used with a step-length Δλ, assuming that w(λ − Δλ) is known we obtain

$$ (10.3.11) \qquad \widetilde w(\lambda) = w(\lambda - \Delta\lambda) - \psi(\lambda - \Delta\lambda)\,\Delta\lambda . $$

This means that w̃(λ) is defined by

$$ D_wF(\lambda - \Delta\lambda, w(\lambda - \Delta\lambda))\,(\widetilde w(\lambda) - w(\lambda - \Delta\lambda)) = -D_\lambda F(\lambda - \Delta\lambda, w(\lambda - \Delta\lambda))\,\Delta\lambda . $$

Since the exact value is
$$ w(\lambda) = w(\lambda - \Delta\lambda) - \int_{\lambda - \Delta\lambda}^{\lambda} \psi(\mu)\,d\mu , $$
subtracting (10.3.11) from the previous equation gives
$$ w(\lambda) - \widetilde w(\lambda) = -\int_{\lambda - \Delta\lambda}^{\lambda} \big[ \psi(\mu) - \psi(\lambda - \Delta\lambda) \big]\,d\mu = -\int_{\lambda - \Delta\lambda}^{\lambda} \psi'(\xi(\mu))\,(\mu - \lambda + \Delta\lambda)\,d\mu . $$
Therefore,
$$ \|w(\lambda) - \widetilde w(\lambda)\|_W \le \frac{(\Delta\lambda)^2}{2}\, \max_{\lambda - \Delta\lambda \le \eta \le \lambda} \|\psi'(\eta)\|_W . $$
10.3 Numerical Algorithms
If Δλ is small enough, w̃(λ) belongs to S(w(λ); δ′) and can therefore serve as initial guess w⁰ for the Newton iterates. If we apply the above continuation method to the Navier–Stokes problem, and set for ease of notation

$$ \lambda := 1/\nu , \quad \lambda^* := \lambda - \Delta\lambda , \quad u := u(\lambda) , \quad p := p(\lambda) , \quad u^* := u(\lambda - \Delta\lambda) , \quad p^* := p(\lambda - \Delta\lambda) , $$
$$ \delta u := \widetilde u(\lambda) - u^* , \qquad \delta p := \widetilde p(\lambda) - p^* , $$

we obtain the following problem: find δu ∈ V, δp ∈ Q such that

$$ (10.3.12) \qquad \begin{cases} \dfrac{1}{\lambda^*}(\nabla \delta u, \nabla v) + c(u^*; \delta u, v) + c(\delta u; u^*, v) + \dfrac{1}{\lambda^*}\, b(v, \delta p) = \dfrac{\Delta\lambda}{\lambda^*}\big[ (f, v) - c(u^*; u^*, v) \big] & \forall\, v \in V \\ b(\delta u, q) = 0 & \forall\, q \in Q . \end{cases} $$
This problem has the same complexity as one iteration of the Newton algorithm.

Remark 10.3.1 For solving (10.1.27), instead of (10.3.2) or (10.3.3) one could use a more general quasi-Newton method. This would read:

$$ H(\lambda, w^n)(w^{n+1} - w^n) = -F(\lambda, w^n) , $$

where H is a convenient approximation of the Jacobian D_wF. In this respect, the algorithm (10.3.3) falls under this category. Another way is to start from the Newton method but at each step n solve the linear problem (10.3.2) by GMRES iterations (see Section 2.6.2). More precisely, (10.3.2) first needs to be reformulated as (10.3.8), then approximated in space for each n to yield a linear system of the form

$$ (10.3.13) \qquad J_n\, x^{n+1} = s^n , $$

where J_n is the matrix associated to the Jacobian D_wF(λ, wⁿ) and sⁿ is the vector associated to the right hand side of the discretized momentum equation. With obvious notation, system (10.3.13) can be given the abstract form Ax = b; then the GMRES iterates (2.6.20) can be started, taking x⁰ = (uⁿ, pⁿ)ᵀ. At convergence they provide the solution of (10.3.13). For a detailed description see Brown and Saad (1990). □
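As an illustration of the Newton–GMRES idea (a sketch on invented data, not the Brown–Saad algorithm itself), the following solves a small nonlinear system F(x) = Ax + x³ − b by an outer Newton loop whose linear step is handled by a plain unrestarted GMRES written with NumPy:

```python
import numpy as np

def gmres(A, b, x0, m=40, tol=1e-12):
    """Unrestarted GMRES via the Arnoldi process (a minimal sketch)."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    n = len(b)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        rhs = np.zeros(j + 2); rhs[0] = beta
        # least-squares solve of min || H y - beta e_1 ||
        y = np.linalg.lstsq(H[:j + 2, :j + 1], rhs, rcond=None)[0]
        if H[j + 1, j] < 1e-14 or np.linalg.norm(H[:j + 2, :j + 1] @ y - rhs) < tol:
            break
        Q[:, j + 1] = v / H[j + 1, j]
    return x0 + Q[:, :len(y)] @ y

# Outer Newton loop for the model system F(x) = A x + x^3 - b (invented data).
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model matrix
b = np.full(n, 0.05)
x = np.zeros(n)
for _ in range(30):
    Fx = A @ x + x**3 - b                    # nonlinear residual
    if np.linalg.norm(Fx) < 1e-12:
        break
    J = A + np.diag(3.0 * x**2)              # Jacobian D_x F
    x = x + gmres(J, -Fx, np.zeros(n))       # inner GMRES solve of Newton step
print(np.linalg.norm(A @ x + x**3 - b))
```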
10.3.2 An Operator-Splitting Algorithm

Another iterative approach consists in seeking the solution to (10.1.1), (10.1.2) as the limit of a pseudo-time advancing scheme. For instance, at each step the equations can be split into a linear Stokes problem and a nonlinear convection–diffusion equation for the velocity field, following a procedure similar to the one used for the Peaceman–Rachford scheme (5.7.7). In differential form, starting from (u⁰, p⁰) we construct a sequence (uⁿ, pⁿ) by

$$ (10.3.14) \qquad \begin{cases} -\eta\nu\Delta u^{n+1/2} + \alpha_n u^{n+1/2} + \nabla p^{n+1/2} = f - (u^n\cdot\nabla)u^n + (1-\eta)\nu\Delta u^n + \alpha_n u^n & \text{in } \Omega \\ \operatorname{div} u^{n+1/2} = 0 & \text{in } \Omega \\ u^{n+1/2} = 0 & \text{on } \partial\Omega \end{cases} $$

$$ (10.3.15) \qquad \begin{cases} -(1-\eta)\nu\Delta u^{n+1} + (u^{n+1}\cdot\nabla)u^{n+1} + \alpha_n u^{n+1} = f - \nabla p^{n+1/2} + \eta\nu\Delta u^{n+1/2} + \alpha_n u^{n+1/2} & \text{in } \Omega \\ u^{n+1} = 0 & \text{on } \partial\Omega , \end{cases} $$

where 0 < η < 1. The parameters αₙ need to be chosen conveniently in order to speed up the convergence. In general, the proof of convergence of such a scheme is not an easy matter. In this respect, knowing that the exact stationary solution to be approximated is stable can provide a justification of the method (see, e.g., Heywood and Rannacher (1986a)). The approach (10.3.14), (10.3.15) was introduced by Glowinski (1984), p. 255. It has then been extended to a three-step algorithm of Strang type (see (5.7.11)), in which the first two steps reproduce (10.3.14) and (10.3.15), while the third one is again based on a Stokes problem like (10.3.14) (see Bristeau, Glowinski and Periaux (1987)). The Stokes problem (10.3.14) can be treated as discussed in Chapter 9. On the other hand, (10.3.15) is a nonlinear convection–diffusion system of the form

$$ (10.3.16) \qquad \begin{cases} -\nu\Delta v + (v\cdot\nabla)v + \sigma v = g & \text{in } \Omega \\ v = 0 & \text{on } \partial\Omega . \end{cases} $$

It is easy to see that this problem can also be given the form F(ν, v) = 0, and analyzed as done in the abstract framework of Sections 10.1 and 10.2. Its solution can be accomplished in several ways, e.g., by a gradient algorithm (see Girault and Raviart (1986), p. 361) or a conjugate gradient algorithm (see Glowinski, Mantel, Periaux, Perrier and Pironneau (1982)). In both cases, each iteration would require solving four Dirichlet problems for the vector operator −νΔv + σv.
10.4 Stream Function–Vorticity Formulation of the Navier–Stokes Equations

The velocity u and the pressure p are usually called the primitive variables for the Navier–Stokes equations. It is possible to rewrite the problem introducing different unknowns (non-primitive variables). A typical example is the following one. If Ω ⊂ ℝ² is a simply-connected domain, a well-known result states that a vector function u ∈ (L²(Ω))² satisfies div u = 0 if and only if there exists a function ψ ∈ H¹(Ω), called the stream function, such that

$$ (10.4.1) \qquad u = \mathbf{curl}\,\psi := \Big( \frac{\partial\psi}{\partial x_2},\, -\frac{\partial\psi}{\partial x_1} \Big) . $$
Clearly, ψ is defined only up to an additive constant. For the proof see, e.g., Girault and Raviart (1986), p. 37. Meanwhile, we define the vorticity associated with u ∈ (H¹(Ω))² to be the scalar field ω ∈ L²(Ω) such that

$$ (10.4.2) \qquad \omega := \operatorname{curl} u = \frac{\partial u_2}{\partial x_1} - \frac{\partial u_1}{\partial x_2} . $$
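The relations (10.4.1)–(10.4.2) are easy to check numerically; in the sketch below (with an invented stream function ψ = sin(πx₁)sin(πx₂)) the velocity u = curl ψ is divergence-free up to rounding, and ω = curl u agrees with −Δψ = 2π²ψ up to discretization error:

```python
import numpy as np

# Illustration of (10.4.1)-(10.4.2) with psi = sin(pi x1) sin(pi x2):
# u = curl psi is divergence-free, and omega = curl u = -Laplacian(psi).
n = 201
s = np.linspace(0.0, 1.0, n)
X1, X2 = np.meshgrid(s, s, indexing="ij")
h = s[1] - s[0]

psi = np.sin(np.pi * X1) * np.sin(np.pi * X2)
u1 = np.gradient(psi, h, axis=1)     #  d psi / d x2
u2 = -np.gradient(psi, h, axis=0)    # -d psi / d x1

div = np.gradient(u1, h, axis=0) + np.gradient(u2, h, axis=1)
omega = np.gradient(u2, h, axis=0) - np.gradient(u1, h, axis=1)

print(np.abs(div).max())             # zero up to floating-point rounding
err = np.abs(omega - 2.0 * np.pi**2 * psi)
print(err[5:-5, 5:-5].max())         # small away from the boundary
```

The divergence vanishes to machine precision because the two discrete partial derivatives act on different array axes and therefore commute exactly; the vorticity identity holds only up to the truncation error of the finite differences (larger near the boundary, where one-sided formulas are used).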
The two operators curl and curl are formally adjoint to each other, in the sense that, for smooth functions vanishing on ∂Ω,

$$ \int_\Omega v \cdot \mathbf{curl}\,\varphi = \int_\Omega \operatorname{curl} v \;\varphi . $$

$$ L^2(0,T;V) := \Big\{ u : (0,T) \to V \ \Big|\ u \text{ is measurable and } \int_0^T \|u(t)\|_1^2\, dt < \infty \Big\} , $$
and similarly we define C⁰([0,T]; L²(Ω)) and other functional spaces for space–time functions (see also Section 1.2). The weak formulation of (11.1.4) for the homogeneous Dirichlet boundary condition reads as follows: given f ∈ L²(Q_T) and u₀ ∈ L²(Ω), find u ∈ L²(0,T;V) ∩ C⁰([0,T];L²(Ω)) such that

$$ (11.1.5) \qquad \begin{cases} \dfrac{d}{dt}(u(t), v) + a(u(t), v) = (f(t), v) & \forall\, v \in V \\ u(0) = u_0 , \end{cases} $$
11.1 Initial-Boundary Value Problems and Weak Formulation
where V = H₀¹(Ω), (·,·) denotes the scalar product in L²(Ω), the bilinear form a(·,·) is defined in (6.1.2), and the above equation has to be understood in the sense of distributions in (0,T). For the other boundary conditions, one has to modify the choice of V and both the left and right hand sides of (11.1.5) according to (6.1.11), (6.1.14), (6.1.16).

11.1.1 Mathematical Analysis of Initial-Boundary Value Problems
Several methods can be employed to prove the existence and uniqueness of a solution to (11.1.5). We are going to present one of them, based on the Faedo–Galerkin method and suitable energy estimates. Let us first assume that the bilinear form a(·,·) is continuous and, moreover, weakly coercive in V, i.e., there exist two constants α > 0 and λ ≥ 0 such that

$$ (11.1.6) \qquad a(v,v) + \lambda \|v\|_0^2 \ge \alpha \|v\|_1^2 \qquad \forall\, v \in V . $$
This is also called the Gårding inequality. Very often it is satisfied with λ = 0, namely the bilinear form a(·,·) is coercive. This is, e.g., the case of the heat equation ∂u/∂t − Δu = f with homogeneous Dirichlet boundary condition. In its general form, the inequality (11.1.6) is satisfied for all the boundary value problems we are dealing with, provided that, for each i, j = 1, …, d, the coefficients a_ij, b_i, c_i and a₀ of the operator L belong to L^∞(Ω). In fact, using (11.1.2) one finds

$$ a(v,v) = \int_\Omega \Big[ \sum_{i,j} a_{ij}\, D_i v\, D_j v - \sum_i (b_i - c_i)\, v\, D_i v + a_0 v^2 \Big] \ge \alpha_0 \|Dv\|_0^2 - \|b - c\|_{L^\infty(\Omega)} \|Dv\|_0 \|v\|_0 - \|a_0\|_{L^\infty(\Omega)} \|v\|_0^2 . $$

Recalling that for each ε > 0

$$ \|Dv\|_0 \|v\|_0 \le \varepsilon \|Dv\|_0^2 + \frac{1}{4\varepsilon} \|v\|_0^2 , $$

it follows that (11.1.6) holds, choosing for instance

$$ \lambda > C \Big( \frac{1}{\alpha_0} \|b - c\|_{L^\infty(\Omega)}^2 + \|a_0\|_{L^\infty(\Omega)} \Big) , $$

where C = C(d, Ω) is a suitable constant.
where C = C(d,.f?) is a suitable constant. When considering the Robin problem (6.1.15), the bilinear form is given by a( w, v) + (K:W, V)I:Hl (see (6.1.16)). Assumption (11.1.6) is still valid for this bilinear form. As a matter of fact, by the trace theorem and an interpolation result (see Theorems 1.3.1 and 1.3.7) for each 10 > one has
°
Il vl16,eSl ~ cllDvl16 + C llvl16 , E
11. Parabolic Problems
thus

$$ (\kappa v, v)_{\partial\Omega} \ge -\frac{\alpha_0}{2} \|Dv\|_0^2 - \frac{C}{\alpha_0} \|\kappa\|_{L^\infty(\partial\Omega)}^2 \|v\|_0^2 . $$

In this case it is sufficient to choose λ large enough.
Notice, moreover, that the continuity of the bilinear form a(·,·) is also always easily verified under the above assumptions on the coefficients. Before coming to the existence and uniqueness theorem, let us notice that, if we introduce the change of variable u_λ(t,x) := e^{−λt} u(t,x), where u is the solution of (11.1.4), the new unknown u_λ satisfies

$$ \frac{\partial u_\lambda}{\partial t} + L u_\lambda + \lambda u_\lambda = e^{-\lambda t} f \qquad \text{in } Q_T . $$
If (11.1.6) holds for a(·,·), the bilinear form a_λ(w,v) := a(w,v) + λ(w,v) associated to this last problem is coercive, i.e., it satisfies (11.1.6) with λ = 0. Therefore, if we replace f with e^{−λt} f and L with L + λI, I being the identity operator, without losing generality we can assume that the bilinear form associated to the initial-boundary value problem (11.1.4) satisfies (11.1.6) with λ = 0. This will always be assumed in the sequel of this Chapter. However, it is worth noticing that the estimates we will prove are valid for the auxiliary unknown u_λ(t,x) (or its approximations), and that the corresponding estimates for the solution u(t,x) contain an extra multiplicative factor e^{λT}. Let us now prove the existence theorem. We notice that hereafter all norms refer to the space variables, i.e., ‖·‖_k is the norm in the Sobolev space H^k(Ω) for k ≥ 0.

Theorem 11.1.1 Assume that the bilinear form a(·,·) is continuous in V × V and that (11.1.6) is satisfied with λ = 0. Then, given f ∈ L²(Q_T) and u₀ ∈ L²(Ω), there exists a unique solution u ∈ L²(0,T;V) ∩ C⁰([0,T];L²(Ω)) to (11.1.5). Moreover, ∂u/∂t ∈ L²(0,T;V′) and the energy estimate

$$ (11.1.7) \qquad \|u(t)\|_0^2 + \alpha \int_0^t \|u(\tau)\|_1^2\, d\tau \le \|u_0\|_0^2 + \frac{1}{\alpha} \int_0^t \|f(\tau)\|_0^2\, d\tau $$

holds for each t ∈ [0,T].

Proof. We employ the so-called Faedo–Galerkin method, and construct an approximate sequence solving suitable finite dimensional problems. Since V is a closed subspace of H¹(Ω), it is a separable Hilbert space. Let {φⱼ}_{j≥1} be a complete orthonormal basis in V and define V_N := span{φ₁, …, φ_N}. Consider the approximate problem: for each t ∈ [0,T] find u^N(t) ∈ V_N such that
$$ (11.1.8) \qquad \begin{cases} \dfrac{d}{dt}(u^N(t), \phi_j) + a(u^N(t), \phi_j) = (f(t), \phi_j) , & j = 1, \dots, N ,\ t \in (0,T) \\[4pt] u^N(0) = u_0^N := P_N(u_0) = \displaystyle\sum_{s=1}^N p_s\, \phi_s , \end{cases} $$

where P_N is the orthogonal projection in L²(Ω) onto V_N; hence the vector p is the solution of the linear system (Mp)ⱼ = (u₀, φⱼ), M being the mass matrix M_{js} := (φⱼ, φ_s). Since {φⱼ}, j = 1, …, N, is a basis for V_N, the equation in (11.1.8) is indeed satisfied for each v^N ∈ V_N. Writing

$$ u^N(t) = \sum_{s=1}^N c_s^N(t)\, \phi_s , $$

(11.1.8) is equivalent to solving

$$ (11.1.9) \qquad \begin{cases} M \dfrac{dc^N}{dt}(t) + A\, c^N(t) = F(t) \\[4pt] M c^N(0) = c_0 , \end{cases} $$

where for i, j = 1, …, N

$$ A_{ij} := a(\phi_j, \phi_i) , \quad F_i(t) := (f(t), \phi_i) , \quad c_{0,i} := (u_0, \phi_i) . $$

Since M is positive definite, we find a unique solution c^N to (11.1.9). As F ∈ L²(0,T), it follows that c^N ∈ H¹(0,T), i.e., u^N ∈ H¹(0,T;V). Choosing u^N(t) in (11.1.8) as a test function, we have
$$ \Big( \frac{du^N}{dt}(t), u^N(t) \Big) + a(u^N(t), u^N(t)) = (f(t), u^N(t)) , $$

and therefore, owing to (11.1.6),

$$ \frac{1}{2}\frac{d}{dt}\|u^N(t)\|_0^2 + \alpha \|u^N(t)\|_1^2 \le \|f(t)\|_0 \|u^N(t)\|_0 \le \frac{\alpha}{2}\|u^N(t)\|_1^2 + \frac{1}{2\alpha}\|f(t)\|_0^2 . $$

Integrating over (0, τ), τ ∈ (0,T], we obtain

$$ (11.1.10) \qquad \|u^N(\tau)\|_0^2 + \alpha \int_0^\tau \|u^N(t)\|_1^2\, dt \le \|u_0\|_0^2 + \frac{1}{\alpha}\int_0^\tau \|f(t)\|_0^2\, dt . $$

The sequence u^N is thus bounded in L^∞(0,T;L²(Ω)) ∩ L²(0,T;V). Hence, we can select a subsequence (still denoted by u^N) which converges in the weak* topology of L^∞(0,T;L²(Ω)) and weakly in L²(0,T;V) (see, e.g., Yosida (1974), pp. 137 and 126). This means that there exists u ∈ L^∞(0,T;L²(Ω)) ∩ L²(0,T;V) such that
$$ \int_0^T (u^N(t), \varphi(t)) \to \int_0^T (u(t), \varphi(t)) \qquad \text{as } N \to \infty $$

for each φ ∈ L¹(0,T;L²(Ω)), and

$$ \int_0^T (\nabla u^N(t), \psi(t)) \to \int_0^T (\nabla u(t), \psi(t)) \qquad \text{as } N \to \infty $$

for each ψ ∈ L²(0,T;L²(Ω)). In order to pass to the limit in (11.1.8), take φ ∈ C¹([0,T]) with φ(T) = 0. By multiplying (11.1.8) by φ and integrating by parts over (0,T) the first term at the left hand side, we find that, since u₀^N converges in L²(Ω) to u₀, by choosing N₀ arbitrarily and passing to the limit in (11.1.8) we finally obtain
$$ -\int_0^T (u(t), \phi_j)\,\varphi'(t)\,dt + \int_0^T a(u(t), \phi_j)\,\varphi(t)\,dt = \int_0^T (f(t), \phi_j)\,\varphi(t)\,dt + (u_0, \phi_j)\,\varphi(0) \qquad \forall\, j \le N_0 . $$

[…]

$$ (11.1.15) \qquad \|u(t)\|_1^2 + \int_0^t \Big\| \frac{\partial u}{\partial \tau}(\tau) \Big\|_0^2\, d\tau \le C \Big( \|u_0\|_1^2 + \int_0^t \|f(\tau)\|_0^2\, d\tau \Big) , $$

where C > 0 is a constant independent of T.
Proof. We will proceed in a formal way, because a rigorous proof would require considering the approximate problem (11.1.8), finding the estimate (11.1.15) for the sequence u^N, and passing to the limit. The limit function u obtained is obviously the solution to (11.1.5), and satisfies (11.1.15) due to the properties of weak convergence. Notice moreover that, when a_ij ∈ C¹(Ω̄), it is not restrictive to assume a_ij = a_ji for each i, j = 1, …, d. In fact, one can write

$$ \sum_{i,j} D_j(a_{ij}\, D_i u) = \sum_{i,j} D_j\Big( \frac{a_{ij}+a_{ji}}{2}\, D_i u \Big) - \sum_j c_j^s\, D_j u . $$

Clearly, the coefficients a^s_ij := (a_ij + a_ji)/2 still satisfy the ellipticity assumption (11.1.2), with the same constant α₀ > 0. Further, the vector c^s defined by

$$ c_j^s := \sum_{i=1}^d D_i \Big( \frac{a_{ij} - a_{ji}}{2} \Big) , \qquad j = 1, \dots, d , $$

belongs to C⁰(Ω̄) (and satisfies div c^s = 0). We can therefore assume in the sequel that a_ij = a_ji.
Multiply now (11.1.4)₁ by ∂u/∂t and integrate on Ω. Since ∂u/∂t = 0 on Σ_T, for almost each t ∈ [0,T] we have
$$ \int_\Omega \Big[ \Big(\frac{\partial u}{\partial t}\Big)^2 + Lu\, \frac{\partial u}{\partial t} \Big] = \int_\Omega f\, \frac{\partial u}{\partial t} , $$

i.e.,

$$ (11.1.16) \qquad \int_\Omega \Big[ \Big(\frac{\partial u}{\partial t}\Big)^2 + \sum_{i,j} a_{ij}\, D_i u\, D_j \frac{\partial u}{\partial t} - \sum_i \Big( b_i u\, D_i \frac{\partial u}{\partial t} - c_i\, D_i u\, \frac{\partial u}{\partial t} \Big) + a_0\, u\, \frac{\partial u}{\partial t} \Big] = \int_\Omega f\, \frac{\partial u}{\partial t} . $$
Integrating by parts one finds

$$ \int_\Omega \sum_{i,j} a_{ij}\, D_i u\, D_j \frac{\partial u}{\partial t} = \frac{1}{2} \frac{d}{dt} \int_\Omega \sum_{i,j} a_{ij}\, D_i u\, D_j u $$

and

$$ \int_\Omega \sum_i b_i u\, D_i \frac{\partial u}{\partial t} = -\int_\Omega \frac{\partial u}{\partial t}\, \operatorname{div}(bu) . $$
Integrating (11.1.16) in time on (0,t), from the ellipticity assumption we have

$$ \|u(t)\|_1^2 + \int_0^t \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_0^2\, d\tau \le C \Big[ \|u_0\|_1^2 + \int_0^t \big( \|u(\tau)\|_1 + \|f(\tau)\|_0 \big)\, \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_0\, d\tau \Big] . $$

This yields

$$ (11.1.17) \qquad \|u(t)\|_1^2 + \int_0^t \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_0^2\, d\tau \le C \Big[ \|u_0\|_1^2 + \int_0^t \big( \|u(\tau)\|_1^2 + \|f(\tau)\|_0^2 \big)\, d\tau \Big] , $$

hence, by employing (11.1.7), the energy estimate (11.1.15) follows at once.
□

Remark 11.1.4 Let us show how to modify Proposition 11.1.1 when the boundary condition is the (homogeneous) Neumann one. In such a case we have V = H¹(Ω). As before, let us assume that a_ij = a_ji for each i, j = 1, …, d. Estimate (11.1.17) is obtained in the same way, only noting that

$$ \int_\Omega \sum_i b_i u\, D_i \frac{\partial u}{\partial t} = -\int_\Omega \operatorname{div}(bu)\, \frac{\partial u}{\partial t} + \int_{\partial\Omega} b\cdot n\, u\, \frac{\partial u}{\partial t} = -\int_\Omega \operatorname{div}(bu)\, \frac{\partial u}{\partial t} + \frac{1}{2}\frac{d}{dt}\int_{\partial\Omega} b\cdot n\, u^2 . $$
By integrating on (0,t), the trace theorem and an interpolation result (see Theorems 1.3.1 and 1.3.7) yield

$$ \Big| \int_{\partial\Omega} b\cdot n\, u^2(t) \Big| \le \varepsilon \|u(t)\|_1^2 + C_\varepsilon \|u(t)\|_0^2 , $$

thus (11.1.17) holds with the additional term C‖u(t)‖₀² at the right hand side. This last term can be estimated by means of (11.1.7), so that (11.1.15) is obtained. A similar procedure can be employed when considering the (homogeneous) Robin case. □

Remark 11.1.5 Proposition 11.1.1 still holds (with suitable modifications) when considering the non-homogeneous case for the Neumann and Robin problems. In fact, assuming ∂Ω ∈ C² and g ∈ L²(0,T;H^{1/2}(∂Ω)) ∩ H^{1/4}(0,T;L²(∂Ω)) (and moreover κ ∈ C¹(∂Ω) for the Robin problem), a proof of this result can be found, e.g., in Lions and Magenes (1968b), p. 34. Clearly, one has also to add the norm of the boundary datum g to the norms of u₀ and f at the right hand side of (11.1.15). □
A simple consequence of Proposition 11.1.1 is given by

Corollary 11.1.1 Assume that the solution u obtained in Proposition 11.1.1 satisfies

$$ (11.1.18) \qquad \|u(t)\|_2^2 \le C \big( \|Lu(t)\|_0^2 + \|u(t)\|_1^2 \big) \qquad \text{a.e. in } [0,T] . $$

Then u ∈ L²(0,T;H²(Ω)) ∩ H¹(0,T;L²(Ω)) ∩ C⁰([0,T];V) and satisfies the estimate

$$ (11.1.19) \qquad \max_{t\in[0,T]} \|u(t)\|_1^2 + \int_0^T \Big( \Big\|\frac{\partial u}{\partial t}(t)\Big\|_0^2 + \|u(t)\|_2^2 \Big)\, dt \le C_1 \Big( \|u_0\|_1^2 + \int_0^T \|f(t)\|_0^2\, dt \Big) . $$

Proof. Estimate (11.1.19) follows from (11.1.18), (11.1.15) and (11.1.7), since Lu = f − ∂u/∂t. Moreover, by interpolation one has that

$$ L^2(0,T;H^2(\Omega)) \cap H^1(0,T;L^2(\Omega)) \subset C^0([0,T];H^1(\Omega)) $$

(see Theorem 1.3.8 or Lions and Magenes (1968a), p. 23). □
Remark 11.1.6 Assumption (11.1.18) is satisfied for the homogeneous Dirichlet problem if ∂Ω ∈ C² (see, e.g., Lions and Magenes (1968a), p. 166) or if Ω is a plane convex polygonal domain (see Grisvard (1976)). Indeed, under these assumptions, (11.1.18) is also satisfied for the (homogeneous) Neumann and Robin problems (assuming moreover, in this last case, κ ∈ C¹(∂Ω)). On the contrary, (11.1.18) in general does not hold for the mixed problem. □
11.2 Semi-Discrete Approximation

A first step towards the approximation of the solution to (11.1.5) entails the discretization of the space variable only. This leads to a system of ordinary differential equations, whose solution u_h(t) is an approximation of the exact solution for each t ∈ [0,T]. All methods described in Chapters 5 and 6 can be employed to fulfill this aim. Here, as in the sequel of this Chapter, we focus on the finite element method (Section 11.2.1) and the spectral collocation method (Section 11.2.2). In Section 11.2.1 we will prove an error estimate with respect to the norms of C⁰([0,T];L²(Ω)) and L²(0,T;H¹(Ω)) for piecewise-linear polynomials (see Proposition 11.2.1). Moreover, assuming that the solution u is smooth enough, an optimal estimate for higher order elements will be provided (see Proposition 11.2.2), and in Remark 11.2.3 the error estimate for a "rough" initial datum u₀ ∈ L²(Ω) will be presented. All these results are obtained by means of the energy method. Similar results are also presented in Section 11.2.2 for the spectral collocation case.
11.2.1 The Finite Element Case

The variational formulation (11.1.5) naturally leads to a semi-discrete problem by approximating the space V by a finite dimensional space V_h. The proof of Theorem 11.1.1 provides an instance of such a procedure: the so-called Faedo–Galerkin method, consisting in choosing V_h = V_N := span{φ₁, …, φ_N}, {φⱼ} being a complete orthonormal basis of V. Now we want to present other choices of V_h, based on the finite element method. The semi-discrete approximate problem reads as follows: given f ∈ L²(Q_T) and u_{0,h} ∈ V_h, a suitable approximation of the initial datum u₀ ∈ L²(Ω), for each t ∈ [0,T] find u_h(t) ∈ V_h such that

$$ (11.2.1) \qquad \begin{cases} \dfrac{d}{dt}(u_h(t), v_h) + a(u_h(t), v_h) = (f(t), v_h) & \forall\, v_h \in V_h ,\ t \in (0,T) \\ u_h(0) = u_{0,h} . \end{cases} $$

Here we have assumed that Ω is a polygonal domain with Lipschitz boundary, and we are considering, for simplicity, the homogeneous Dirichlet boundary condition, i.e., V_h is chosen as in (6.2.4). When dealing with other boundary
conditions, the space V_h must be chosen as in (6.2.5)–(6.2.7); moreover, the bilinear form and the linear functional at the right hand side must be modified as indicated in (6.1.10), (6.1.13) and (6.1.16). Problem (11.2.1) is a system of ordinary differential equations. Writing u_h(t) = Σⱼ ξⱼ(t)φⱼ, where {φⱼ}, j = 1, …, N_h, is a basis of V_h, and u_{0,h} = Σⱼ ξ_{0,j}φⱼ, problem (11.2.1) can be written as

$$ (11.2.2) \qquad \begin{cases} M \dfrac{d\xi}{dt}(t) + A\, \xi(t) = F(t) \\[4pt] \xi(0) = \xi_0 , \end{cases} $$

where

$$ M_{ij} := (\varphi_i, \varphi_j) , \quad A_{ij} := a(\varphi_j, \varphi_i) , \quad F_i(t) := (f(t), \varphi_i) , \qquad i, j = 1, \dots, N_h . $$
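As a concrete sketch (one space dimension, invented data, not from the text), the matrices of (11.2.2) can be assembled for piecewise-linear elements applied to the heat equation, and the resulting ODE system advanced in time; backward Euler is used here merely as a convenient integrator, anticipating Section 11.3:

```python
import numpy as np

# P1 finite elements for u_t - u_xx = 0 on (0,1), u(0)=u(1)=0,
# u0 = sin(pi x); the exact solution is exp(-pi^2 t) sin(pi x).
ne = 50                                    # number of elements
h = 1.0 / ne
x = np.linspace(0.0, 1.0, ne + 1)[1:-1]    # interior nodes
m = ne - 1

# tridiagonal mass and stiffness matrices of piecewise-linear elements
M = h * (np.diag(np.full(m, 2.0 / 3.0)) + np.diag(np.full(m - 1, 1.0 / 6.0), 1)
         + np.diag(np.full(m - 1, 1.0 / 6.0), -1))
A = (1.0 / h) * (np.diag(np.full(m, 2.0)) + np.diag(np.full(m - 1, -1.0), 1)
                 + np.diag(np.full(m - 1, -1.0), -1))

xi = np.sin(np.pi * x)                     # nodal values of u0
dt, T = 1e-3, 0.1
K = M + dt * A                             # backward Euler: K xi^{n+1} = M xi^n
for _ in range(int(round(T / dt))):
    xi = np.linalg.solve(K, M @ xi)

err = np.abs(xi - np.exp(-np.pi**2 * T) * np.sin(np.pi * x)).max()
print(err)
```

The observed error combines the O(h²) spatial accuracy of piecewise-linear elements with the O(Δt) accuracy of the time integrator.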
Since M is positive definite, there exists a unique solution ξ(t) of (11.2.2). Repeating the proof of Theorem 11.1.1, we see that the solution u_h satisfies an energy estimate like (11.1.7), provided u_{0,h} converges to u₀ in L²(Ω). This proves the stability of the method. We are now going to prove the convergence of u_h to u in a suitable topology, and give an estimate of the order of convergence.

Proposition 11.2.1 Let 𝒯_h be a regular family of triangulations and assume that piecewise-linear or bilinear finite elements are used. Assume moreover that (11.1.6) (with λ = 0) and (11.1.18) are satisfied and that f ∈ L²(Q_T), u₀ ∈ V, a_ij, b_i ∈ C¹(Ω̄), c_i, a₀ ∈ L^∞(Ω). Then the solutions u and u_h to (11.1.5) and (11.2.1), respectively, satisfy
$$ (11.2.3) \qquad \|u(t) - u_h(t)\|_0^2 + \alpha \int_0^t \|u(\tau) - u_h(\tau)\|_1^2\, d\tau \le \|u_0 - u_{0,h}\|_0^2 + C_{\alpha,\gamma}\, h^2 \Big( \|u_{0,h}\|_1^2 + \|u_0\|_1^2 + \int_0^t \|f(\tau)\|_0^2\, d\tau \Big) $$

for each t ∈ [0,T]. Here α is the coerciveness constant in (11.1.6), γ is the continuity constant of the bilinear form a(·,·), and C_{α,γ} > 0 is a suitable constant independent of h.

Proof. For each t ∈ [0,T] define e_h(t) := u(t) − u_h(t); from (11.1.5) and (11.2.1) one finds

$$ (11.2.4) \qquad \Big( \frac{\partial e_h}{\partial t}(t), v_h \Big) + a(e_h(t), v_h) = 0 \qquad \forall\, v_h \in V_h . $$
For almost any fixed t, choose v_h = u_h(t) − w_h, w_h ∈ V_h, in (11.2.4). For each ε > 0 and for almost any t ∈ [0,T] we find

$$ (11.2.5) \qquad \begin{aligned} \frac{1}{2}\frac{d}{dt}(e_h(t), e_h(t)) + a(e_h(t), e_h(t)) &= \Big( \frac{\partial e_h}{\partial t}(t), u(t) - w_h \Big) + a(e_h(t), u(t) - w_h) \\ &\le \Big\| \frac{\partial e_h}{\partial t}(t) \Big\|_0 \|u(t) - w_h\|_0 + \gamma \|e_h(t)\|_1 \|u(t) - w_h\|_1 \\ &\le \Big\| \frac{\partial e_h}{\partial t}(t) \Big\|_0 \|u(t) - w_h\|_0 + \frac{\gamma^2}{4\varepsilon} \|u(t) - w_h\|_1^2 + \varepsilon \|e_h(t)\|_1^2 . \end{aligned} $$

From Corollary 11.1.1 it follows that u ∈ L²(0,T;H²(Ω)). Hence, choosing for almost any t ∈ [0,T] w_h = π_h¹(u(t)) (see (3.4.1)), we find

$$ \|u(t) - \pi_h^1(u(t))\|_0^2 + h^2 \|u(t) - \pi_h^1(u(t))\|_1^2 \le C h^4 \|u(t)\|_2^2 . $$

Integrating (11.2.5) in (0,t) and choosing ε = α/2 yields

$$ \|e_h(t)\|_0^2 + \alpha \int_0^t \|e_h(\tau)\|_1^2\, d\tau \le \|u_0 - u_{0,h}\|_0^2 + C_{\alpha,\gamma}\, h^2 \int_0^t \Big( \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_0^2 + \Big\|\frac{\partial u_h}{\partial \tau}(\tau)\Big\|_0^2 + \|u(\tau)\|_2^2 \Big)\, d\tau . $$

As in (11.1.15), we have

$$ \int_0^t \Big\|\frac{\partial u_h}{\partial \tau}(\tau)\Big\|_0^2\, d\tau \le C \Big( \|u_{0,h}\|_1^2 + \int_0^t \|f(\tau)\|_0^2\, d\tau \Big) ; $$

therefore the thesis follows from (11.1.19). □
Let us notice that the procedure employed in the proof above leads to an optimal error estimate with respect to the norm of L²(0,T;H¹(Ω)) and for piecewise-linear polynomials only. A different method is presented in the following Proposition 11.2.2, yielding an optimal error estimate in the norm of C⁰([0,T];L²(Ω)) for any degree of the approximating piecewise-polynomials, provided that the solution u is smooth enough.

Remark 11.2.1 If u_{0,h} ∈ V_h is chosen in such a way that

$$ (11.2.6) \qquad \|u_0 - u_{0,h}\|_0 \le C h \|u_0\|_1 , $$

then the error u − u_h is O(h) in the space C⁰([0,T];L²(Ω)) ∩ L²(0,T;H¹(Ω)). For instance, we can take u_{0,h} = P_{1,h}(u₀), where P_{1,h} is the projection on V_h with respect to the scalar product of H¹(Ω). Assuming that 𝒯_h is quasi-uniform, another possible choice is given by u_{0,h} = P_h^k(u₀), where P_h^k is the
L²(Ω)-projection onto V_h (see (3.5.3)). With both these choices (11.2.6) is satisfied, provided that Ω is a plane convex polygonal domain (see Section 3.5 and in particular (3.5.22)). However, in practice these choices are not the easiest to implement. If we know that u₀ ∈ H²(Ω), it is better to take u_{0,h} = π_h(u₀), where π_h is the finite element interpolation operator. □

Assuming more regularity on the data and, furthermore, that suitable compatibility conditions between the initial datum and the boundary data are satisfied at t = 0 on ∂Ω, the solution u to (11.1.5) is indeed more regular. This in principle implies that the convergence of u_h to u is of higher order. As an example, let us just prove the following error estimate, first obtained by Wheeler (1973):

Proposition 11.2.2 Let 𝒯_h be a regular family of triangulations. Assume that (11.1.6) is satisfied with λ = 0 and that the solution φ(r) of the adjoint
problem

$$ \varphi(r) \in V : \quad a(v, \varphi(r)) = (r, v) \qquad \forall\, v \in V $$

satisfies φ(r) ∈ H²(Ω) when r ∈ L²(Ω). Assume moreover that u₀ ∈ H^{k+1}(Ω), k ≥ 1, and that the solution u to (11.1.5) is such that ∂u/∂t ∈ L¹(0,T;H^{k+1}(Ω)). Then, using piecewise-polynomials of degree less than or equal to k in the definition of the finite element space V_h in (6.2.4), for each t ∈ [0,T] the solution u_h to (11.2.1) satisfies

$$ (11.2.7) \qquad \|u(t) - u_h(t)\|_0 \le \|u_0 - u_{0,h}\|_0 + C h^{k+1} \Big( \|u_0\|_{k+1} + \int_0^t \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_{k+1}\, d\tau \Big) , $$

where C > 0 is a suitable constant independent of h.

Proof. We make use of the elliptic "projection" operator Π_{1,h}, which is defined as follows for each v ∈ V:

$$ (11.2.8) \qquad \Pi_{1,h}(v) \in V_h : \quad a(\Pi_{1,h}(v) - v, v_h) = 0 \qquad \forall\, v_h \in V_h . $$

As (11.1.6) is satisfied with λ = 0, the existence of such a solution is ensured by the Lax–Milgram lemma. If a(·,·) is symmetric, then Π_{1,h} is really an orthogonal projection operator onto V_h with respect to the scalar product a(·,·). If 𝒯_h is a regular family of triangulations and for each r ∈ L²(Ω) the solution φ(r) of the adjoint problem belongs to H²(Ω), by proceeding as in Section 3.5 one has

$$ (11.2.9) \qquad \|v - \Pi_{1,h}(v)\|_0 + h \|v - \Pi_{1,h}(v)\|_1 \le C h^{k+1} \|v\|_{k+1} \qquad \forall\, v \in H^{k+1}(\Omega) \cap V . $$

For any fixed t ∈ [0,T] let us write u_h(t) − u(t) = w₁(t) + w₂(t), where

$$ (11.2.10) \qquad w_1(t) := u_h(t) - \Pi_{1,h}(u(t)) , \qquad w_2(t) := \Pi_{1,h}(u(t)) - u(t) . $$
From (11.2.9), w₂ is easily estimated in the following way:

$$ (11.2.11) \qquad \|w_2(t)\|_0 \le C h^{k+1} \|u(t)\|_{k+1} \le C h^{k+1} \Big( \|u_0\|_{k+1} + \int_0^t \Big\|\frac{\partial u}{\partial \tau}(\tau)\Big\|_{k+1}\, d\tau \Big) . $$

On the other hand, from (11.1.5), (11.2.1) and (11.2.8), for each v_h ∈ V_h we have

$$ \Big( \frac{\partial w_1}{\partial t}(t), v_h \Big) + a(w_1(t), v_h) = -\Big( \frac{\partial w_2}{\partial t}(t), v_h \Big) . $$

Choosing v_h = w₁(t) it follows
$$ \Big( \frac{\partial w_1}{\partial t}(t), w_1(t) \Big) + a(w_1(t), w_1(t)) = -\Big( \frac{\partial w_2}{\partial t}(t), w_1(t) \Big) . $$

Using again (11.1.6), we have

$$ (11.2.12) \qquad \frac{1}{2}\frac{d}{dt}\|w_1(t)\|_0^2 + \alpha \|w_1(t)\|_1^2 \le \Big\| \frac{\partial w_2}{\partial t}(t) \Big\|_0 \|w_1(t)\|_0 . $$

Thus, integrating over (0,t),

$$ \|w_1(t)\|_0 \le \|w_1(0)\|_0 + \int_0^t \Big\| \frac{\partial w_2}{\partial t}(\tau) \Big\|_0\, d\tau . $$

Since ∂/∂t commutes with Π_{1,h}, the thesis follows from (11.2.9) and (11.2.11). □
In a similar way one can obtain an error estimate with respect to the norm of L²(0,T;H¹(Ω)). Precisely, assuming that u₀ ∈ H^k(Ω), u ∈ L²(0,T;H^{k+1}(Ω)) and ∂u/∂t ∈ L²(0,T;H^k(Ω)), it is easily shown that

$$ \Big( \alpha \int_0^T \|u(\tau) - u_h(\tau)\|_1^2\, d\tau \Big)^{1/2} = O(h^k) . $$
Remark 11.2.2 If we had taken advantage of the presence of the term α‖w₁(t)‖₁² at the left hand side of (11.2.12), we would have obtained

$$ \frac{d}{dt}\|w_1(t)\|_0 + \alpha \|w_1(t)\|_0 \le \Big\| \frac{\partial w_2}{\partial t}(t) \Big\|_0 $$

(notice that we have bounded ‖w₁(t)‖₁ from below by ‖w₁(t)‖₀), i.e.,

$$ \frac{d}{dt}\big[ \exp(\alpha t)\, \|w_1(t)\|_0 \big] \le \exp(\alpha t)\, \Big\| \frac{\partial w_2}{\partial t}(t) \Big\|_0 . $$

Integrating over (0,t) we find

$$ (11.2.13) \qquad \|w_1(t)\|_0 \le \|w_1(0)\|_0 \exp(-\alpha t) + \int_0^t \exp[-\alpha(t-\tau)]\, \Big\| \frac{\partial w_2}{\partial t}(\tau) \Big\|_0\, d\tau , $$

thus for each t ∈ [0,T]

$$ \|u(t) - u_h(t)\|_0 \le \|u_0 - u_{0,h}\|_0 \exp(-\alpha t) + C h^{k+1} \Big\{ \|u_0\|_{k+1} \exp(-\alpha t) + \|u(t)\|_{k+1} + \int_0^t \exp[-\alpha(t-\tau)]\, \Big\| \frac{\partial u}{\partial \tau}(\tau) \Big\|_{k+1}\, d\tau \Big\} . $$
This shows, in particular, that the effect of the initial error u₀ − u_{0,h} tends exponentially to 0 as t goes to ∞. For further results of this type, the interested reader can refer to Thomée (1984) and Fujita and Suzuki (1991). □

Remark 11.2.3 If the initial datum is not regular, it is possible to prove optimal error estimates depending on a negative power of t. Assuming, for simplicity, that f = 0, λ = 0 in (11.1.6), and that a_ij, b_i, c_i, a₀ in (11.1.1) are smooth enough, i, j = 1, …, d, a typical result reads as follows:

$$ (11.2.14) \qquad \|u(t) - u_h(t)\|_0 \le C\, h^{k+1}\, t^{-(k+1)/2}\, \|u_0\|_0 . $$

As usual, k ≥ 1 refers to the degree of the polynomials ℙ_k used in defining V_h, and u_{0,h} = P_h^k(u₀) is the L²(Ω)-orthogonal projection of u₀ onto V_h. For a proof of (11.2.14) and of other sharp error estimates for a non-smooth initial datum see, e.g., Bramble, Schatz, Thomée and Wahlbin (1977), Huang and Thomée (1981), Sammon (1982), Luskin and Rannacher (1982a) and Lasiecka (1984). See also Thomée (1984), Fujita and Suzuki (1991). In particular, estimate (11.2.14) shows that, being f = 0, the error u(t) − u_h(t) decays to zero as t goes to infinity even if u₀ is not smooth. An exponential decay can be recovered by proceeding as in Remark 11.2.2.
□

Remark 11.2.4 (L^∞ error estimate) It is possible to obtain a result similar to (11.2.7) also when considering the error estimate in the L^∞(Ω)-norm. In fact, assume that the bilinear form a(·,·) is symmetric and coercive (i.e., (11.1.6) holds with λ = 0). In the case of a two-dimensional domain and piecewise-linear finite elements, setting u_{0,h} = Π_{1,h}(u₀), in Schatz, Thomée and Wahlbin (1980) the following error estimate is proven for each t ∈ [0,T]:

$$ \|u(t) - u_h(t)\|_{L^\infty(\Omega)} \le C\, h^2 |\log h|^2 \Big[ \|u_0\|_{W^{2,\infty}(\Omega)} + \int_0^t \Big\| \frac{\partial u}{\partial \tau}(\tau) \Big\|_{W^{2,\infty}(\Omega)}\, d\tau \Big] . $$
Let us notice that a logarithmic factor also appears in the analogous estimate for elliptic equations (see Remark 6.2.3). □

11.2.2 The Case of Spectral Methods

Now we consider the semi-discrete approximation of problem (11.1.5), based upon algebraic polynomial approximations in space. As done in the finite element context, we confine ourselves to the case in which u(t) = 0 on ∂Ω for all t ≥ 0 (homogeneous Dirichlet boundary condition); moreover, for simplicity we only consider the two-dimensional case. Therefore we set Ω = (−1,1)² and denote by V_N the space of algebraic polynomials of degree less than or equal to N with respect to each coordinate x_i, i = 1,2, that vanish on ∂Ω. This space was already introduced in (6.2.14). Instead of (11.2.1) we consider the following problem. Let u_{0,N} ∈ V_N be a convenient approximation of u₀. For each t ∈ [0,T], we look for a function u_N(t) ∈ V_N that satisfies

$$ (11.2.15) \qquad \begin{cases} \dfrac{d}{dt}(u_N(t), v_N) + a(u_N(t), v_N) = (f(t), v_N) & \forall\, v_N \in V_N ,\ t \in (0,T) \\ u_N(0) = u_{0,N} . \end{cases} $$

This is the spectral Galerkin semi-discrete approximation to (11.1.5). A more effective (and flexible) approach is the one based on the spectral collocation approximation. Let us consider, for simplicity, the Legendre case, referring to Section 6.2.2 for the modifications necessary to deal with the Chebyshev one. Formally speaking, this can be easily formulated by replacing (11.2.15) with

$$ (11.2.16) \qquad \begin{cases} \dfrac{d}{dt}(u_N(t), v_N)_N + a_N(u_N(t), v_N) = (f(t), v_N)_N & \forall\, v_N \in V_N ,\ t \in (0,T) \\ u_N(0) = u_{0,N} . \end{cases} $$
As explained in detail in Section 6.2.2, this method is based on the idea of splitting the operator L into its symmetric and skew-symmetric parts, replacing any exact scalar product (φ,ψ) = ∫_Ω φψ with numerical integration based upon Legendre Gauss–Lobatto formulae, i.e. (following (4.5.38), (4.5.39), but using a single subindex for both the nodes xᵢ and the weights wᵢ), with

$$ (\varphi, \psi)_N := \sum_i \varphi(x_i)\, \psi(x_i)\, w_i . $$

[…]

$$ (11.2.19) \qquad \|u_N(t)\|_0^2 + \alpha \int_0^t \|u_N(\tau)\|_1^2\, d\tau \le \|u_{0,N}\|_0^2 + \frac{1}{\alpha} \int_0^t \|f(\tau)\|_0^2\, d\tau $$
for each t ∈ [0,T]. If, for instance, u_{0,N} is the L²-projection of u₀ upon V_N, then ‖u_{0,N}‖₀ ≤ ‖u₀‖₀; thus (11.2.19) provides stability. In the spectral collocation case the procedure is about the same; however, it yields:
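The Legendre Gauss–Lobatto construction behind (·,·)_N can be sketched with NumPy's Legendre utilities (the nodes are ±1 together with the zeros of P_N′, and w_j = 2/(N(N+1)P_N(x_j)²)); the quadrature is exact for polynomials of degree up to 2N − 1, while for v = P_N one gets ‖P_N‖_N² = 2/N against the exact ∫P_N² = 2/(2N+1), a ratio below 3, consistent with the equivalence of the discrete and continuous norms:

```python
import numpy as np
from numpy.polynomial import Polynomial
from numpy.polynomial import legendre as leg

N = 8
PN = leg.Legendre.basis(N)                        # Legendre polynomial P_N
nodes = np.concatenate(([-1.0], np.sort(PN.deriv().roots()), [1.0]))
weights = 2.0 / (N * (N + 1) * PN(nodes) ** 2)    # Gauss-Lobatto weights

def dot_N(f, g):
    """Discrete scalar product (f, g)_N on [-1, 1]."""
    return np.sum(f(nodes) * g(nodes) * weights)

# exactness for polynomials of degree <= 2N - 1 (here the product has degree 5)
p = Polynomial([0.5, -1.0, 0.0, 2.0])             # degree 3
q = Polynomial([1.0, 2.0, 3.0])                   # degree 2
exact = (p * q).integ()(1.0) - (p * q).integ()(-1.0)
print(dot_N(p, q) - exact)

# ||P_N||_N^2 = 2/N, while the exact integral of P_N^2 is 2/(2N+1)
print(dot_N(PN, PN), 2.0 / (2 * N + 1))
```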
Let us consider, for simplicity, the Legendre collocation case. If we assume that there exists a constant α_N > 0 such that

$$ (11.2.20) \qquad a_N(v_N, v_N) \ge \alpha_N \|v_N\|_1^2 \qquad \forall\, v_N \in V_N , $$

then, from the equality above and (4.5.41), we obtain

$$ (11.2.21) \qquad \|u_N(t)\|_N^2 + \alpha_N \int_0^t \|u_N(\tau)\|_1^2\, d\tau \le \|u_{0,N}\|_N^2 + \frac{1}{\alpha_N} \int_0^t \|f(\tau)\|_N^2\, d\tau $$

for each t ∈ [0,T], where ‖v‖_N := \sqrt{(v,v)_N} is the discrete norm. Now, we remember that (see (6.2.27))

$$ \|v\|_N \le 2 \|v\|_{C^0(\bar\Omega)} \qquad \forall\, v \in C^0(\bar\Omega) . $$
Then, if we assume u₀ ∈ C⁰(Ω̄) and we take u_{0,N} = I_N u₀ (the interpolant of u₀ at the collocation nodes {xᵢ}), we deduce from (11.2.21) that
$$ (11.2.22) \qquad \|u_N(t)\|_N^2 + \alpha_N \int_0^t \|u_N(\tau)\|_1^2\, d\tau \le 4 \Big( \|u_0\|_{C^0(\bar\Omega)}^2 + \frac{1}{\alpha_N} \int_0^t \|f(\tau)\|_{C^0(\bar\Omega)}^2\, d\tau \Big) . $$

If it happens that α_N is bounded from below uniformly with respect to N, i.e.,

$$ (11.2.23) \qquad \exists\, \alpha > 0 \ \text{such that} \ \alpha_N \ge \alpha \qquad \forall\, N , $$

then from (11.2.22) it follows that

$$ (11.2.24) \qquad \|u_N(t)\|_N^2 + \alpha \int_0^t \|u_N(\tau)\|_1^2\, d\tau \le 4 \Big( \|u_0\|_{C^0(\bar\Omega)}^2 + \frac{1}{\alpha} \int_0^t \|f(\tau)\|_{C^0(\bar\Omega)}^2\, d\tau \Big) . $$
Since ‖u_N(t)‖_N ≥ ‖u_N(t)‖₀ (see (4.5.41)), the above inequality implies uniform stability for the spectral collocation solution. Before concluding, let us notice that assumptions (11.2.20), (11.2.23) are satisfied for the bilinear form associated to the operator L + λI, λ a suitable positive constant, if the coefficients a_ij, b_i and a₀, and moreover div b, are continuous functions in Ω̄. (For simplicity, we have assumed here that c_i = 0 for i = 1,2.) As a matter of fact, in such a case
α₀ being the ellipticity constant of L (see (11.1.2)). Therefore, setting λ := ½‖div b‖_{C⁰(Ω̄)} + ‖a₀‖_{C⁰(Ω̄)}, the modified bilinear form a_N(z,v) + λ(z,v)_N (which is associated to the operator L + λI) satisfies (11.2.20), (11.2.23) with α depending solely on α₀ and the Poincaré constant (see (1.3.2)). About the proof of convergence, we restrict ourselves to the spectral Galerkin case (11.2.15). In order to exploit the full potential accuracy of polynomial approximation, we assume that the data and the solution to problem (11.1.5) are as smooth as needed. In particular, we suppose that (11.2.25) holds. We then follow the guidelines of the proof of Proposition 11.2.2, adapting the notation (and concepts) to the spectral Galerkin framework. First, for any v ∈ V = H₀¹(Ω), let us denote by Π_{1,N}v ∈ V_N its elliptic "projection", which is defined as follows:

$$ (11.2.26) \qquad a(\Pi_{1,N} v - v, v_N) = 0 \qquad \forall\, v_N \in V_N . $$
11.2 SemiDiscrete Approximation
383
If $a(\cdot,\cdot)$ is symmetric, $\Pi_{1,N}$ is indeed an orthogonal projection operator with respect to the scalar product $a(\cdot,\cdot)$. In any case, the existence and uniqueness of such a $\Pi_{1,N}v$ is ensured by the Lax-Milgram lemma, as $a(\cdot,\cdot)$ is coercive and continuous on $V$. Let us observe that if $a(z,v) = (\nabla z, \nabla v)$ (i.e., we are dealing with the heat equation, where $Lz = -\Delta z$), then $\Pi_{1,N}$ coincides with the orthogonal projection operator $P_N^{1,0}$ that has been introduced in (4.5.33). In any case, a proof similar to the one leading to (4.5.37) yields the estimate

(11.2.27)
$N\,\|u(t) - \Pi_{1,N}u(t)\|_0 + \|u(t) - \Pi_{1,N}u(t)\|_1 \le C N^{1-s}\, \|u(t)\|_s \qquad \forall\, t \in [0,T] \ .$
We now set $u_N(t) - u(t) = w_1(t) + w_2(t)$, where $w_1(t) := u_N(t) - \Pi_{1,N}u(t)$ and $w_2(t) := \Pi_{1,N}u(t) - u(t)$. Owing to (11.2.27) we have for each $t \in [0,T]$:

(11.2.28)
$\|w_2(t)\|_0 \le C N^{-s}\, \|u(t)\|_s \ , \qquad \Big\|\frac{\partial w_2}{\partial t}(t)\Big\|_0 \le C N^{-s}\, \Big\|\frac{\partial u}{\partial t}(t)\Big\|_s \ .$

On the other hand, also in the spectral case $w_1$ satisfies an inequality like (11.2.13). From (11.2.28) and (11.2.13), and proceeding as done in Remark 11.2.2, we obtain for any $t \in [0,T]$:

(11.2.29)
$\|u(t) - u_N(t)\|_0 \le \|u_0 - u_{0,N}\|_0 \exp(-\alpha t) + C N^{-s} \Big\{ \|u_0\|_s \exp(-\alpha t) + \|u(t)\|_s + \int_0^t \exp[-\alpha(t-\tau)]\, \Big\|\frac{\partial u}{\partial t}(\tau)\Big\|_s\, d\tau \Big\} \ .$

As in the finite element case, the influence of the initial error (that here behaves as $CN^{-s}\|u_0\|_s$) decays exponentially to $0$ as $t$ increases. A convergence in the norm of $L^2(0,T;H^1(\Omega))$ for the error $u - u_N$ can also be proven. Indeed, taking advantage of the presence of the term $\alpha\|w_1(t)\|_1^2$ in (11.2.12), one can deduce that

$\Big( \int_0^T \|u(\tau) - u_N(\tau)\|_1^2\, d\tau \Big)^{1/2} = O(N^{1-s}) \ ,$

under the assumptions $u_0 \in H^{s-1}(\Omega)$, $u \in L^2(0,T;H^s(\Omega))$ and $\frac{\partial u}{\partial t} \in L^2(0,T;H^{s-1}(\Omega))$.
11.3 Time-Advancing by Finite Differences

In order to obtain a full discretization of (11.1.5), we consider a uniform mesh for the time variable $t$ and define

(11.3.1)  $t^n := n\Delta t \ , \qquad n = 0,1,\dots,N \ ,$

$\Delta t > 0$ being the time-step, and $N := [T/\Delta t]$, the integral part of $T/\Delta t$. Next, we replace the time derivative by means of suitable difference quotients, thus constructing a sequence $u_h^n(x)$ that approximates the exact solution $u(t^n, x)$. Let us describe this procedure on a general system of ordinary differential equations

(11.3.2)
$\begin{cases} y'(t) = \Phi(t, y(t)) \ , \quad t \in (0,T) \\ y(0) = y_0 \ . \end{cases}$
Several approaches can be used, such as multistep methods, Runge-Kutta methods or methods based on rational approximations of the exponential (see, e.g., Gear (1971), Crouzeix and Mignot (1984), Thomee (1984), Butcher (1987), Lambert (1991)). For simplicity we confine ourselves to the so-called $\theta$-method (see Section 5.6.2), which consists in replacing (11.3.2) by the following scheme: find $y^{n+1}$ such that

(11.3.3)
$\begin{cases} \dfrac{1}{\Delta t}(y^{n+1} - y^n) = \theta\, \Phi(t^{n+1}, y^{n+1}) + (1-\theta)\, \Phi(t^n, y^n) \ , \quad n = 0,1,\dots,N-1 \ , \\ y^0 = y_0 \ . \end{cases}$

Here, $\theta \in [0,1]$ is a parameter. When $\theta = 0$ or $\theta = 1$, this scheme is called forward Euler method or backward Euler method, respectively; for $\theta = 1/2$ it is called Crank-Nicolson method. For $\theta \ge 1/2$ the scheme is A-stable (see, e.g., Lambert (1991), p. 244); for $\theta = 1/2$, however, it may suffer from unexpected instabilities especially when rough perturbations are introduced in the data. In the following Sections we analyze the $\theta$-scheme for both the finite element and the spectral case. For finite elements, a stability and convergence result is first presented in the general situation (see Theorems 11.3.1 and 11.3.2), and then an alternative proof is provided in the case of a symmetric problem (see Theorems 11.3.3 and 11.3.4). Analogous results for the spectral method will be stated in Section 11.3.2.
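For readers who wish to experiment, the $\theta$-scheme (11.3.3) applied to a linear system $y'(t) = -Ay(t) + f(t)$ can be sketched in a few lines. The code below is an illustration we add (NumPy-based; all names are our own), not part of the book: the $A$-part is treated with weight $\theta$ implicitly and $1-\theta$ explicitly, so each step requires one linear solve.

```python
import numpy as np

def theta_scheme(A, f, y0, T, dt, theta):
    """Integrate y'(t) = -A y(t) + f(t), y(0) = y0, with the theta-method:
    theta = 0 forward Euler, theta = 1 backward Euler, 1/2 Crank-Nicolson.
    Each step solves (I + theta*dt*A) y^{n+1}
        = (I - (1-theta)*dt*A) y^n + dt*(theta*f(t^{n+1}) + (1-theta)*f(t^n))."""
    Nt = int(T / dt)                       # integral part of T/dt
    I = np.eye(len(y0))
    E_impl = I + theta * dt * A            # implicit part (could be factored once)
    E_expl = I - (1.0 - theta) * dt * A    # explicit part
    y = np.array(y0, dtype=float)
    for n in range(Nt):
        rhs = E_expl @ y + dt * (theta * f((n + 1) * dt) + (1.0 - theta) * f(n * dt))
        y = np.linalg.solve(E_impl, rhs)   # one linear solve per step
    return y
```

For the scalar test problem $y' = -y$, $y(0) = 1$, Crank-Nicolson with a small time-step reproduces $e^{-T}$ to second-order accuracy, while backward Euler stays bounded even for a time-step far beyond any explicit stability limit.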
11.3.1 The Finite Element Case

Applying the $\theta$-scheme to the semidiscrete approximation (11.2.1), one is left with the following problem: find $u_h^{n+1} \in V_h$ such that

(11.3.4)
$\begin{cases} \dfrac{1}{\Delta t}(u_h^{n+1} - u_h^n,\, v_h) + a(\theta u_h^{n+1} + (1-\theta)u_h^n,\, v_h) = (\theta f(t^{n+1}) + (1-\theta)f(t^n),\, v_h) \quad \forall\, v_h \in V_h \\ u_h^0 = u_{0,h} \end{cases}$

for each $n = 0,1,\dots,N-1$, having assumed that $f$ is everywhere defined on $[0,T]$. At each time step, one has to solve the linear system

(11.3.5)  $(M + \theta\Delta t\, A)\, \xi^{n+1} = F^n \ ,$

where the right-hand side $F^n$ is known from the previous steps, the matrices $M$ and $A$ are defined in (11.2.2) and

$u_h^{n+1} = \sum_{j=1}^{N_h} \xi_j^{n+1} \varphi_j \ ,$

$\varphi_j$ being the basis functions of $V_h$.
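A minimal concrete instance of (11.3.4)-(11.3.5) is the heat equation on $(0,1)$ discretized with piecewise-linear elements on a uniform mesh, where $M$ and $A$ are tridiagonal. The sketch below is our own illustration (a 1D analog; all function names are hypothetical): it assembles the two matrices and advances one $\theta$-step by solving the system with matrix $M + \theta\Delta t A$.

```python
import numpy as np

def p1_matrices(n_el):
    """Mass and stiffness matrices for piecewise-linear (P1) elements on a
    uniform mesh of (0,1), homogeneous Dirichlet conditions; only the
    n_el - 1 interior nodes are kept.  M_ij = (phi_i, phi_j),
    A_ij = (phi_i', phi_j')."""
    h = 1.0 / n_el
    n = n_el - 1
    M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    A = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    return M, A

def theta_step(M, A, xi, F_old, F_new, dt, theta):
    """One step of the fully discrete scheme: solve
       (M + theta*dt*A) xi_new = (M - (1-theta)*dt*A) xi
                                 + dt*(theta*F_new + (1-theta)*F_old)."""
    lhs = M + theta * dt * A
    rhs = (M - (1.0 - theta) * dt * A) @ xi \
        + dt * (theta * F_new + (1.0 - theta) * F_old)
    return np.linalg.solve(lhs, rhs)
```

With the eigenfunction initial datum $u_0(x) = \sin(\pi x)$ and $f = 0$, the computed solution decays essentially like $e^{-\pi^2 t}$, which gives an easy sanity check of the implementation.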
Assuming that (11.1.6) is satisfied with $\lambda = 0$, the matrix $(M + \theta\Delta t A)$ is positive definite. In such a case (11.3.5) has a unique solution. Moreover, $(M + \theta\Delta t A)$ is symmetric if the bilinear form is symmetric, i.e., $a(z,v) = a(v,z)$ for each $z,v \in V$. Let us also notice that in general (11.3.5) is a genuine linear system, namely, the matrix $(M + \theta\Delta t A)$ is not diagonal. We will see in Section 11.4 how to resort to a diagonal system when considering the forward Euler method (i.e., $\theta = 0$). Now let us turn to the proof of stability of $u_h^n$. It is useful to introduce the following notation: for any function $\phi \in L^2(\Omega)$, define

(11.3.6)  $\|\phi\|_{-1,h} := \sup_{v_h \in V_h,\, v_h \ne 0} \frac{(\phi, v_h)}{\|v_h\|_1} \ ,$

which is a norm on $V_h$ (clearly, $\|\phi\|_{-1,h} \le \|\phi\|_0$ for each $\phi \in L^2(\Omega)$). We are going to prove that the $\theta$-scheme (11.3.4) is unconditionally stable with respect to the $L^2(\Omega)$-norm provided that $1/2 \le \theta \le 1$. On the contrary, in the case $0 \le \theta < 1/2$ we have to assume that a stability condition is satisfied (see (11.3.7) here below).

Theorem 11.3.1 (Stability). Assume that (11.1.6) is satisfied with $\lambda = 0$ and that the map $t \to \|f(t)\|_0$ is bounded in $[0,T]$. When $0 \le \theta < 1/2$, assume, moreover, that $\mathcal{T}_h$ is a quasi-uniform family of triangulations and that the following restriction on the time-step
(11.3.7)  $\Delta t\,(1 + C_3 h^{-2}) < \frac{2\alpha}{(1-2\theta)\gamma^2}$
is satisfied. Here, $C_3$ is the constant appearing in the inverse inequality (6.3.21), while $\alpha$ and $\gamma$ are the coerciveness and continuity constants, respectively. Then $u_h^n$ defined in (11.3.4) satisfies

(11.3.8)  $\|u_h^n\|_0 \le C_0 \Big( \|u_{0,h}\|_0 + \sup_{t \in [0,T]} \|f(t)\|_0 \Big) \ , \qquad n = 0,1,\dots,N \ ,$

where the constant $C_0 > 0$ is a nondecreasing function of $\alpha^{-1}$, $\gamma$ and $T$, and is independent of $N$, $\Delta t$ and $h$.

Proof. Take $v_h = \theta u_h^{n+1} + (1-\theta)u_h^n$ in (11.3.4). It is easily verified that

$\frac12\|u_h^{n+1}\|_0^2 - \frac12\|u_h^n\|_0^2 + \Big(\theta - \frac12\Big)\|u_h^{n+1} - u_h^n\|_0^2 + \Delta t\, a(\theta u_h^{n+1} + (1-\theta)u_h^n,\, \theta u_h^{n+1} + (1-\theta)u_h^n)$
$= \Delta t\,(\theta f(t^{n+1}) + (1-\theta)f(t^n),\, \theta u_h^{n+1} + (1-\theta)u_h^n) \ .$

By the coerciveness assumption (11.1.6), for each $0 < \varepsilon \le 1$ we find

(11.3.9)
$\|u_h^{n+1}\|_0^2 - \|u_h^n\|_0^2 + (2\theta - 1)\|u_h^{n+1} - u_h^n\|_0^2 + 2(1-\varepsilon)\alpha\Delta t\, \|\theta u_h^{n+1} + (1-\theta)u_h^n\|_1^2 \le \frac{\Delta t}{2\varepsilon\alpha}\, \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h}^2 \ .$

When $1/2 \le \theta \le 1$, the left hand side is larger than $\|u_h^{n+1}\|_0^2 - \|u_h^n\|_0^2$, and in particular we can set $\varepsilon = 1$. On the contrary, when $0 \le \theta < 1/2$ we proceed as follows: choosing $v_h = u_h^{n+1} - u_h^n$ in (11.3.4) we find

(11.3.10)
$\|u_h^{n+1} - u_h^n\|_0^2 = -\Delta t\, a(\theta u_h^{n+1} + (1-\theta)u_h^n,\, u_h^{n+1} - u_h^n) + \Delta t\,(\theta f(t^{n+1}) + (1-\theta)f(t^n),\, u_h^{n+1} - u_h^n)$
$\le \gamma\Delta t\, \|\theta u_h^{n+1} + (1-\theta)u_h^n\|_1\, \|u_h^{n+1} - u_h^n\|_1 + \Delta t\, \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h}\, \|u_h^{n+1} - u_h^n\|_1 \ .$

By means of the inverse inequality (6.3.21), we finally have

(11.3.11)
$\|u_h^{n+1} - u_h^n\|_0 \le \Delta t\,(1 + C_3 h^{-2})^{1/2}\, \big[ \gamma\|\theta u_h^{n+1} + (1-\theta)u_h^n\|_1 + \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h} \big] \ .$

Setting for each $\eta > 0$

$\kappa_{\varepsilon,\eta} := 2(1-\varepsilon)\alpha - (1-2\theta)\gamma(\gamma + \eta)\,\Delta t\,(1 + C_3 h^{-2}) \ ,$

it follows

$\|u_h^{n+1}\|_0^2 - \|u_h^n\|_0^2 + \Delta t\, \kappa_{\varepsilon,\eta}\, \|\theta u_h^{n+1} + (1-\theta)u_h^n\|_1^2 \le C_{\varepsilon,\eta}\, \Delta t\,(1 + \Delta t\, h^{-2})\, \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h}^2 \ .$

Choosing $\varepsilon$ and $\eta$ small enough, due to (11.3.7) we have $\kappa_{\varepsilon,\eta} > 0$ and moreover $1 + \Delta t\, h^{-2} \le C_*$ for a suitable $C_* > 0$, therefore

(11.3.12)
$\|u_h^{n+1}\|_0^2 - \|u_h^n\|_0^2 \le C_{\varepsilon,\eta}\, \Delta t\, \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h}^2 \ .$

Now let $m$ be a fixed index, $1 \le m \le N$. In both cases we have considered above ($0 \le \theta < 1/2$ or $1/2 \le \theta \le 1$), summing up from $n = 0$ to $n = m-1$ we find

$\|u_h^m\|_0^2 \le \|u_{0,h}\|_0^2 + C\Delta t \sum_{n=0}^{m-1} \|\theta f(t^{n+1}) + (1-\theta)f(t^n)\|_{-1,h}^2$

and the thesis follows. $\square$
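The role of a restriction like (11.3.7) for $\theta < 1/2$ can be observed numerically. As a hedged illustration (ours, not from the book) we use the simplest stand-in: forward Euler ($\theta = 0$) with a one-dimensional finite difference Laplacian, whose classical stability limit is $\Delta t \le h^2/2$. Below this threshold the discrete solution decays; above it, the highest mode is amplified at every step.

```python
import numpy as np

def forward_euler_heat(n, dt, n_steps):
    """Forward Euler (theta = 0) for u_t = u_xx on (0,1) with u = 0 at both
    ends, second-order finite differences on n interior points.  The initial
    datum deliberately contains a small high-frequency component so that an
    unstable mode, if any, is excited.  Returns the history of max-norms."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    u = np.sin(np.pi * x) + 0.01 * np.sin(n * np.pi * x)
    hist = []
    for _ in range(n_steps):
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - 2.0 * u[0]        # ghost value at x = 0 is zero
        lap[-1] = u[-2] - 2.0 * u[-1]     # ghost value at x = 1 is zero
        u = u + (dt / h**2) * lap
        hist.append(np.abs(u).max())
    return hist
```

With $n = 49$ interior points ($h = 0.02$, so $h^2/2 = 2\cdot 10^{-4}$), a time-step of $10^{-4}$ yields a decaying solution while $3\cdot 10^{-4}$ produces explosive growth of the highest mode.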
A slightly different stability result is proven in Section 12.2. Now we turn to the convergence analysis. We start proving an error estimate between the semidiscrete solution $u_h(t^n)$ and the fully-discrete one $u_h^n$ for any fixed $h$.

Theorem 11.3.2 (Convergence). Assume that (11.1.6) is satisfied with $\lambda = 0$ and that $\frac{\partial u_h}{\partial t}(0) \in L^2(\Omega)$, $f \in L^2(Q_T)$ with $\frac{\partial f}{\partial t} \in L^2(Q_T)$. When $0 \le \theta < 1/2$, assume, moreover, that $\mathcal{T}_h$ is a quasi-uniform family of triangulations and that the time-step restriction (11.3.7) is satisfied. Then the functions $u_h^n$ and $u_h(t)$ defined in (11.3.4) and in (11.2.1), respectively, satisfy

(11.3.13)
$\|u_h^n - u_h(t^n)\|_0 \le C_0\, \Delta t\, \Big( \Big\|\frac{\partial u_h}{\partial t}(0)\Big\|_0^2 + \int_0^T \Big\|\frac{\partial f}{\partial t}(t)\Big\|_0^2\, dt \Big)^{1/2}$

for each $n = 0,1,\dots,N$. When $\theta = 1/2$, under the additional assumptions $\frac{\partial^2 f}{\partial t^2} \in L^2(Q_T)$ and $\frac{\partial^2 u_h}{\partial t^2}(0) \in L^2(\Omega)$, the following estimate also holds:

(11.3.14)
$\|u_h^n - u_h(t^n)\|_0 \le C\, (\Delta t)^2\, \Big( \Big\|\frac{\partial^2 u_h}{\partial t^2}(0)\Big\|_0^2 + \int_0^T \Big\|\frac{\partial^2 f}{\partial t^2}(t)\Big\|_0^2\, dt \Big)^{1/2}$

for each $n = 0,1,\dots,N$. The constants $C_0 > 0$ and $C > 0$ are nondecreasing functions of $\alpha^{-1}$, $\gamma$ and $T$, and are independent of $N$, $\Delta t$ and $h$.

Proof. The semidiscrete solution $u_h(t)$ satisfies (11.2.1), which can be rewritten as
(11.3.15)
$\frac{1}{\Delta t}(u_h(t^{n+1}) - u_h(t^n),\, v_h) + a(\theta u_h(t^{n+1}) + (1-\theta)u_h(t^n),\, v_h)$
$= \Big( \frac{u_h(t^{n+1}) - u_h(t^n)}{\Delta t} - \theta\, \frac{\partial u_h}{\partial t}(t^{n+1}) - (1-\theta)\frac{\partial u_h}{\partial t}(t^n),\ v_h \Big) + (\theta f(t^{n+1}) + (1-\theta)f(t^n),\, v_h) \ .$

The first term at the right hand side of (11.3.15) is equal to

$-\frac{1}{\Delta t}\Big( \int_{t^n}^{t^{n+1}} \big[s - (1-\theta)t^{n+1} - \theta t^n\big]\, \frac{\partial^2 u_h}{\partial t^2}(s)\, ds,\ v_h \Big) \ .$

Hence, defining $e_h^n := u_h^n - u_h(t^n)$, from (11.3.4) and (11.3.15) it follows

(11.3.16)
$\frac{1}{\Delta t}(e_h^{n+1} - e_h^n,\, v_h) + a(\theta e_h^{n+1} + (1-\theta)e_h^n,\, v_h) = \frac{1}{\Delta t}\Big( \int_{t^n}^{t^{n+1}} \big[s - (1-\theta)t^{n+1} - \theta t^n\big]\, \frac{\partial^2 u_h}{\partial t^2}(s)\, ds,\ v_h \Big)$

for each $v_h \in V_h$. Now taking $v_h = \theta e_h^{n+1} + (1-\theta)e_h^n$ in (11.3.16), by proceeding as in the proof of Theorem 11.3.1, for each $0 < \varepsilon \le 1$ we find

$\|e_h^{n+1}\|_0^2 - \|e_h^n\|_0^2 + (2\theta - 1)\|e_h^{n+1} - e_h^n\|_0^2 + 2(1-\varepsilon)\alpha\Delta t\, \|\theta e_h^{n+1} + (1-\theta)e_h^n\|_1^2 \le \frac{(\Delta t)^2}{2\varepsilon\alpha} \int_{t^n}^{t^{n+1}} \Big\|\frac{\partial^2 u_h}{\partial t^2}(s)\Big\|_{-1,h}^2\, ds \ .$

It is now necessary to distinguish between the cases $0 \le \theta < 1/2$ and $1/2 \le \theta \le 1$. In the latter, the left hand side is larger than $\|e_h^{n+1}\|_0^2 - \|e_h^n\|_0^2$, while in the former one has to proceed as in the proof of Theorem 11.3.1 for evaluating $\|e_h^{n+1} - e_h^n\|_0^2$. Taking into account (11.3.7), one finally obtains

$\|e_h^{n+1}\|_0^2 - \|e_h^n\|_0^2 \le C(\Delta t)^2 \int_{t^n}^{t^{n+1}} \Big\|\frac{\partial^2 u_h}{\partial t^2}(s)\Big\|_{-1,h}^2\, ds \ .$

Therefore, by fixing an index $m$, $1 \le m \le N$, and summing up from $n = 0$ to $n = m-1$, as $e_h^0 = 0$ we finally obtain

$\|e_h^m\|_0^2 \le C(\Delta t)^2 \int_0^{t^m} \Big\|\frac{\partial^2 u_h}{\partial t^2}(s)\Big\|_{-1,h}^2\, ds \ .$
Differentiating (11.2.1) with respect to $t$, we have

(11.3.17)  $\frac{d}{dt}\Big(\frac{\partial u_h}{\partial t}(t),\, v_h\Big) + a\Big(\frac{\partial u_h}{\partial t}(t),\, v_h\Big) = \Big(\frac{\partial f}{\partial t}(t),\, v_h\Big) \qquad \forall\, v_h \in V_h \ ,$

hence

$\Big\|\frac{\partial^2 u_h}{\partial t^2}(t)\Big\|_{-1,h} \le \gamma\, \Big\|\frac{\partial u_h}{\partial t}(t)\Big\|_1 + \Big\|\frac{\partial f}{\partial t}(t)\Big\|_0 \ .$

Moreover, choosing for each fixed $t \in [0,T]$ $v_h = \frac{\partial u_h}{\partial t}(t)$ in (11.3.17) and proceeding as in the proof of Theorem 11.1.1, it follows

$\Big\|\frac{\partial u_h}{\partial t}(t)\Big\|_0^2 + \alpha \int_0^t \Big\|\frac{\partial u_h}{\partial t}(\tau)\Big\|_1^2\, d\tau \le \Big\|\frac{\partial u_h}{\partial t}(0)\Big\|_0^2 + \frac{1}{\alpha} \int_0^t \Big\|\frac{\partial f}{\partial t}(\tau)\Big\|_0^2\, d\tau \ ,$

hence (11.3.13) holds. Now take $\theta = 1/2$. One easily verifies that the first term at the right hand side of (11.3.15) is equal to
$\frac{1}{2\Delta t}\Big( \int_{t^n}^{t^{n+1}} (t^{n+1} - s)(t^n - s)\, \frac{\partial^3 u_h}{\partial t^3}(s)\, ds,\ v_h \Big) \ .$

By proceeding as before, one finds

$\|e_h^m\|_0^2 \le C(\Delta t)^4 \int_0^{t^m} \Big\|\frac{\partial^3 u_h}{\partial t^3}(s)\Big\|_{-1,h}^2\, ds \ .$
Differentiating (11.3.17) with respect to $t$ and applying the previous argument to the equation thus obtained, estimate (11.3.14) easily follows. $\square$

Remark 11.3.1 The term $\|\frac{\partial u_h}{\partial t}(0)\|_0^2$ appearing in (11.3.13) can be estimated with respect to the data of the problem. For instance, take $u_{0,h} = \Pi_{1,h}(u_0)$ and assume that $u_0 \in H^2(\Omega) \cap V$ (the elliptic "projection" $\Pi_{1,h}$ is defined in (11.2.8)). Then, choosing $v_h = \frac{\partial u_h}{\partial t}(0)$ in (11.2.1) we find

$\Big\|\frac{\partial u_h}{\partial t}(0)\Big\|_0^2 = -a\Big(u_{0,h},\, \frac{\partial u_h}{\partial t}(0)\Big) + \Big(f(0),\, \frac{\partial u_h}{\partial t}(0)\Big) = -a\Big(u_0,\, \frac{\partial u_h}{\partial t}(0)\Big) + \Big(f(0),\, \frac{\partial u_h}{\partial t}(0)\Big) = \Big(-Lu_0 + f(0),\, \frac{\partial u_h}{\partial t}(0)\Big) \ .$

Hence,

$\Big\|\frac{\partial u_h}{\partial t}(0)\Big\|_0 \le C(\|u_0\|_2 + \|f(0)\|_0) \ ,$

as $\|\Pi_{1,h}(u_0)\|_1 \le C\|u_0\|_1$ by (11.2.8). If $\mathcal{T}_h$ is a quasi-uniform family of triangulations, for any initial datum $u_{0,h} \in V_h$ using the inverse inequality (6.3.21) yields

$\Big\|\frac{\partial u_h}{\partial t}(0)\Big\|_0 \le C\big( h^{-1}\|u_{0,h} - u_0\|_1 + \|u_0\|_2 + \|f(0)\|_0 \big) \ .$
Taking for instance $u_{0,h} = \pi_h^k(u_0)$, the finite element interpolant of $u_0$, again produces an estimate uniform with respect to $h$. $\square$

A proof of convergence that doesn't make use of the error estimate for the semidiscrete approximation can be performed as follows (notice that a similar procedure is also employed in the proof of Theorem 12.2.2). Setting $\eta_h^n := u_h^n - \Pi_{1,h}(u(t^n))$, one verifies that it satisfies the scheme (11.3.32) below. Taking there the test function $v_h = \theta\eta_h^{n+1} + (1-\theta)\eta_h^n$ and proceeding as in the proof of Theorem 11.3.1, one easily finds

$\|u_h^n - \Pi_{1,h}(u(t^n))\|_0^2 \le \|u_{0,h} - \Pi_{1,h}(u_0)\|_0^2 + C_0 \int_0^{t^n} \Big\| (I - \Pi_{1,h})\frac{\partial u}{\partial t}(s) \Big\|_{-1,h}^2 ds + C_0\,(\Delta t)^2 \int_0^{t^n} \Big\| \frac{\partial^2 u}{\partial t^2}(s) \Big\|_{-1,h}^2 ds \ ,$

or, when $\theta = 1/2$,

$\|u_h^n - \Pi_{1,h}(u(t^n))\|_0^2 \le \|u_{0,h} - \Pi_{1,h}(u_0)\|_0^2 + C \int_0^{t^n} \Big\| (I - \Pi_{1,h})\frac{\partial u}{\partial t}(s) \Big\|_{-1,h}^2 ds + C\,(\Delta t)^4 \int_0^{t^n} \Big\| \frac{\partial^3 u}{\partial t^3}(s) \Big\|_{-1,h}^2 ds \ .$
From these results an optimal error estimate follows, under suitable assumptions, by proceeding as in the following Corollary 11.3.1.

Now we consider the symmetric case, i.e., $a(z,v) = a(v,z)$ for each $z,v \in V$. (Notice that this is never the case if either $b_i \ne 0$, or $c_i \ne 0$, or $a_{ij} \ne a_{ji}$ for some index $i$ and $j$.) Under this assumption, it is possible to analyze stability and convergence taking advantage of the structure of the eigenvalues of the bilinear form $a(\cdot,\cdot)$. We still assume that (11.1.6) holds with $\lambda = 0$, and we recall that $V$ is compactly embedded in $L^2(\Omega)$ since $\Omega$ is bounded (see Theorem 1.3.5). Thus, there exists a nondecreasing sequence of eigenvalues $\alpha \le \mu_1 \le \mu_2 \le \dots$ for the bilinear form $a(\cdot,\cdot)$, i.e.,

(11.3.18)  $a(w_i,\, v) = \mu_i\,(w_i,\, v) \qquad \forall\, v \in V \ .$

The corresponding eigenfunctions $\{w_i\}$ form a complete orthonormal basis in $L^2(\Omega)$ (see Remark 11.1.3). In an analogous way, when considering the finite dimensional problem in $V_h$, we find a sequence of eigenvalues $\alpha \le \mu_{1,h} \le \mu_{2,h} \le \dots \le \mu_{N_h,h}$ and an $L^2(\Omega)$-orthonormal basis of eigenvectors $w_{i,h} \in V_h$, $i = 1,\dots,N_h$. Any function $v_h$ in $V_h$ can thus be expanded with respect to the system $w_{i,h}$ as
(11.3.19)  $v_h = \sum_{i=1}^{N_h} (v_h,\, w_{i,h})\, w_{i,h} \ ,$

and

$\|v_h\|_0^2 = \sum_{i=1}^{N_h} (v_h,\, w_{i,h})^2 \ .$

In particular, we have

(11.3.20)  $u_h^n = \sum_{i=1}^{N_h} u_i^n\, w_{i,h} \ , \qquad u_i^n := (u_h^n,\, w_{i,h}) \ .$

Moreover, let $f_h^n$ be the $L^2(\Omega)$-orthogonal projection of $\theta f(t^{n+1}) + (1-\theta)f(t^n)$ onto $V_h$, i.e., $f_h^n \in V_h$ and

(11.3.21)  $(f_h^n,\, v_h) = (\theta f(t^{n+1}) + (1-\theta)f(t^n),\, v_h) \qquad \forall\, v_h \in V_h \ ,$

and set

(11.3.22)  $f_h^n = \sum_{i=1}^{N_h} f_i^n\, w_{i,h} \ , \qquad f_i^n := (f_h^n,\, w_{i,h}) \ .$
We are now in a position to prove the stability result.

Theorem 11.3.3 (Stability). Let the bilinear form $a(\cdot,\cdot)$ be symmetric and coercive, i.e., (11.1.6) is satisfied with $\lambda = 0$. Assume that the map $t \to \|f(t)\|_0$ is bounded in $[0,T]$. Moreover, when $0 \le \theta < 1/2$ let the time-step $\Delta t$ satisfy

(11.3.23)  $\Delta t\, \mu_{N_h,h} < \frac{2}{1 - 2\theta} \ .$

Then, the solution $u_h^n$ of the fully-discrete problem (11.3.4) satisfies

(11.3.24)  $\|u_h^n\|_0 \le \|u_{0,h}\|_0 + T \sup_{t \in [0,T]} \|f(t)\|_0 \ , \qquad n = 0,1,\dots,N \ .$
Proof. Equation (11.3.4) is equivalent to

(11.3.25)  $\frac{1}{\Delta t}(u_i^{n+1} - u_i^n) + \mu_{i,h}\,\big[\theta u_i^{n+1} + (1-\theta)u_i^n\big] = f_i^n \ , \qquad i = 1,\dots,N_h \ ,$

for each $n = 0,1,\dots,N-1$. We can rewrite (11.3.25) as

(11.3.26)  $u_i^{n+1} = \frac{1 - (1-\theta)\Delta t\,\mu_{i,h}}{1 + \theta\Delta t\,\mu_{i,h}}\, u_i^n + \frac{\Delta t}{1 + \theta\Delta t\,\mu_{i,h}}\, f_i^n \ .$

The stability condition (11.3.23) yields

(11.3.27)  $\Big| \frac{1 - (1-\theta)\Delta t\,\mu_{i,h}}{1 + \theta\Delta t\,\mu_{i,h}} \Big| \le 1 \ .$

Notice that this condition is always satisfied if $1/2 \le \theta \le 1$. Hence, taking the absolute value of (11.3.26) we have

(11.3.28)  $|u_i^m| \le |u_i^0| + \frac{\Delta t}{1 + \theta\Delta t\,\mu_{i,h}} \sum_{n=0}^{m-1} |f_i^n| \ , \qquad m \ge 1 \ .$

Recalling that $\|u_h^n\|_0^2 = \sum_i |u_i^n|^2$, the thesis thus follows from (11.3.28) and the Minkowski inequality. $\square$
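The amplification factor appearing in (11.3.26)-(11.3.27) is easy to check by direct computation; the helper below is our own illustration. For $\theta \ge 1/2$ its modulus never exceeds $1$, whatever $\Delta t\,\mu > 0$, while for $\theta < 1/2$ it exceeds $1$ exactly when $\Delta t\,\mu$ passes the threshold $2/(1-2\theta)$ of (11.3.23).

```python
def amplification(theta, dt, mu):
    """Amplification factor of one theta-step on a single eigenmode,
    cf. (11.3.26): u_i^{n+1} = g * u_i^n + dt/(1 + theta*dt*mu) * f_i^n."""
    return (1.0 - (1.0 - theta) * dt * mu) / (1.0 + theta * dt * mu)
```

For instance, for $\theta = 1/4$ the threshold is $2/(1 - 1/2) = 4$: with $\Delta t\,\mu = 3.9$ the factor has modulus below $1$, with $\Delta t\,\mu = 4.1$ it is above $1$.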
The stability condition (11.3.23) entails a relation between the mesh size $h$ and the time step $\Delta t$. In fact, when $\mathcal{T}_h$ is a quasi-uniform family of triangulations, the greatest eigenvalue $\mu_{N_h,h}$ of the bilinear form $a(\cdot,\cdot)$ is $O(h^{-2})$ (see (6.3.24)), hence (11.3.23) amounts to a restriction of the form $\Delta t \le C h^2/(1-2\theta)$.

The constant $C_0 > 0$ has the same parameter dependence as the one that appears in (11.3.51). As pointed out in Remark 11.3.1, in order for (11.3.52) to be effective, the norm of the time derivative of $u_N$ at $t = 0$ needs to be bounded uniformly with respect to $N$. Indeed, this is easily proven if $u_{0,N}$ is set equal to $\Pi_{1,N}u_0$, the elliptic projection upon $V_N$ of the exact initial value $u_0$ (see (11.2.26)). In such a case, taking $v_N = \frac{\partial u_N}{\partial t}(0)$ in (11.2.15), and repeating the arguments used in Remark 11.3.1, we obtain

$\Big\|\frac{\partial u_N}{\partial t}(0)\Big\|_0 \le C(\|u_0\|_2 + \|f(0)\|_0) \ ,$

provided the data at the initial time $u_0$ and $f(0)$ have the required smoothness. The usual modifications occur for the analysis of the spectral collocation problem. A direct estimate of $\|u_N^n - \Pi_{1,N}u(t^n)\|_0$ can also be found in Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 12.2.2. As done for finite elements in Theorems 11.3.3 and 11.3.4, a different approach can also be followed when the bilinear form $a(\cdot,\cdot)$ is coercive and symmetric. Denote now by $\mu_{j,N} \in \mathbb{R}$ (respectively, $w_{j,N} \in V_N$, $w_{j,N} \ne 0$) the eigenvalues (respectively, eigenvectors) of the finite dimensional eigenvalue problem

(11.3.53)  $a(w_{j,N},\, v_N) = \mu_{j,N}\,(w_{j,N},\, v_N) \qquad \forall\, v_N \in V_N \ ,$
$j = 1,\dots,N_h := (N-1)^2$. Clearly, $\alpha \le \mu_{1,N} \le \dots \le \mu_{N_h,N}$, and $(w_{j,N},\, w_{k,N}) = \delta_{kj}$ for $j,k = 1,\dots,N_h$. In a similar manner, in the Legendre collocation case we can define $\bar\mu_{j,N} \in \mathbb{R}$ and $\bar w_{j,N} \in V_N$, $\bar w_{j,N} \ne 0$, by

(11.3.54)  $a_N(\bar w_{j,N},\, v_N) = \bar\mu_{j,N}\,(\bar w_{j,N},\, v_N)_N \qquad \forall\, v_N \in V_N \ .$

In this regard, we observe that in the Legendre case $a_N(\cdot,\cdot)$ is symmetric if $a(\cdot,\cdot)$ is symmetric. Actually, $a_N(\cdot,\cdot)$ is obtained from $a(\cdot,\cdot)$ barely by replacing any scalar product $(\cdot,\cdot)$ with $(\cdot,\cdot)_N$, and this operation is symmetry-preserving. Assuming that (11.2.20) and (11.2.23) hold, from (4.4.16) it clearly follows that $\alpha \le \bar\mu_{1,N} \le \dots \le \bar\mu_{N_h,N}$; further, orthogonality now holds for the discrete scalar product, namely, $(\bar w_{j,N},\, \bar w_{k,N})_N = \delta_{kj}$ for $j,k = 1,\dots,N_h$. The spectral analysis carried out in the proof of Theorem 11.3.3 can be repeated (with straightforward modifications) for both the spectral Galerkin and the spectral collocation problems introduced in this Section. The stability result reads

(11.3.55)  $|||u_N^n||| \le |||u_{0,N}||| + T \sup_{t \in [0,T]} |||f(t)||| \ , \qquad n = 0,1,\dots,N \ ,$

provided

(11.3.56)  $\Delta t\, \hat\mu_N < \frac{2}{1 - 2\theta}$

when $0 \le \theta < 1/2$. Here, for the sake of brevity, we have denoted by $\hat\mu_N$ the eigenvalue $\mu_{N_h,N}$ in the Galerkin case, and $\bar\mu_{N_h,N}$ in the collocation case. Similarly, $|||\cdot||| = \|\cdot\|_0$ for the Galerkin problem, whereas $|||\cdot||| = \|\cdot\|_N$ for the collocation problem. Both cases share the (unpleasant) property that $\hat\mu_N = O(N^4)$. (This is shown in Section 6.3.3 for the collocation case and can be proven by proceeding as in Section 6.3.2 for the Galerkin case.) Therefore inequality (11.3.56) takes the form

(11.3.57)  $\Delta t\, N^4 \le \frac{2\delta}{1 - 2\theta} \ ,$

where $\delta$ is a suitable constant (precisely, $\delta := \sup\{ N^4/\hat\mu_N \mid N \ge 1 \}$). This stability limit for the time-step $\Delta t$ is much more severe than its finite-element counterpart. This makes explicit time-advancing schemes (such as the forward Euler one, corresponding to $\theta = 0$) often unsuitable for the time discretization of parabolic equations when using spectral methods for space variables. As a matter of fact, for advection-diffusion equations a fairly common approach is to adopt a semi-implicit scheme, which consists of evaluating implicitly the second order term (diffusion) and explicitly the lower order ones (advection and reaction). This procedure weakens the stability
limit (11.3.57) at the expense of solving a linear diffusion system at each step. An account is given in Canuto, Hussaini, Quarteroni and Zang (1988); see also the following Section 12.2. The convergence analysis for spectral approximations in the symmetric case quite closely resembles the one developed in the finite element case, and yields the same kind of results. For brevity, we don't report it here.

Remark 11.3.3 A fully spectral discretization of initial-boundary value problems can be accomplished if a spectral (rather than a finite difference) technique is applied for the discretization of the time derivative in (11.2.15) or (11.2.16). This amounts to looking for a function $u_N$ which is not only an algebraic polynomial with respect to the space variables, but for any fixed $x \in \Omega$ is an algebraic polynomial also with respect to the time variable $t$. This approach has the advantage of yielding spectral accuracy both in space and time. The disadvantage is that the discrete problem is now fully coupled, hence advancing stepwise along the temporal direction as in the finite difference approach is no longer possible in this context. Among references addressing the issue of spectral approximations in time we mention Morchoisne (1979), Tal-Ezer (1986, 1989) and Ierley, Spencer and Worthing (1992). $\square$

Remark 11.3.4 For the spatial approximation, Chebyshev (rather than Legendre) expansions are frequently used, as pointed out, e.g., in Chapter 6. In the spectral Galerkin case, this amounts to replacing the scalar product $(\cdot,\cdot)$ with the weighted one $(\cdot,\cdot)_w$. In the collocation case, the difference with the Legendre case is that now the Chebyshev Gauss-Lobatto points are used as collocation nodes. Obviously, the stability and convergence analysis is harder in the Chebyshev case, due to the presence of the Chebyshev weight function $w(x) = [(1-x_1^2)(1-x_2^2)]^{-1/2}$. In particular, the latter entails the lack of symmetry of the bilinear form even in the case of a self-adjoint spatial operator $L$. An in-depth analysis can be found in Gottlieb and Orszag (1977), Canuto, Hussaini, Quarteroni and Zang (1988), Boyd (1989) and Funaro (1992). $\square$

Remark 11.3.5 A sharp temporal stability analysis for spectral approximations based on the concept of pseudo-eigenvalues is provided in Reddy and Trefethen (1990). $\square$
11.4 Some Remarks on the Algorithmic Aspects

Whenever an implicit time-advancing method is adopted, at each time-level a linear system of the form (11.3.5) needs to be solved. This occurs for both finite element and spectral collocation approximation. When $A$ is symmetric and positive definite, so is the matrix $E := M + \theta\Delta t A$. However, the spectral condition number of $E$ is generally much smaller than that of $A$, especially when the time-step $\Delta t$ is small. In Table 11.4.1 we report the spectral condition number of $E_{fe} := M_{fe} + \theta\Delta t A_{fe}$ for several values of $h$ (the grid-size) and $\theta\Delta t$. (The finite element stiffness matrix $A_{fe}$ and mass matrix $M_{fe}$ are defined in (11.2.2).) Results refer to the discretization of the heat equation

(11.4.1)
$\begin{cases} \dfrac{\partial u}{\partial t} - \Delta u = 0 & \text{in } Q_T := (0,T) \times \Omega \\ u = 0 & \text{on } \Sigma_T := (0,T) \times \partial\Omega \\ u|_{t=0} = u_0 & \text{on } \Omega \ , \end{cases}$
with $\Omega = (0,1)^2$, and space discretization carried out with piecewise-bilinear finite elements on a uniform mesh.

Table 11.4.1. The spectral condition number of the finite element matrix $E_{fe}$

            θΔt = .5      .1       .05      .01     .005    .001
  N =  4      2.92       2.28     1.84     1.50    2.13    3.57
       8     11.69       8.67     6.59     2.44    1.55    3.17
      12     26.40      19.42    14.63     5.10    2.94    1.96
      16     47.00      34.48    25.90     8.83    4.97    1.68
      20     73.50      53.84    40.39    13.64    7.59    1.90
      24    105.88      77.51    58.11    19.52   10.79    2.59
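The trend displayed in Table 11.4.1 can be reproduced qualitatively with a one-dimensional analog (our own sketch, using piecewise-linear instead of bilinear elements, so the numbers differ from the table): the condition number of $E = M + \theta\Delta t A$ interpolates between that of the well-conditioned mass matrix (as $\theta\Delta t \to 0$) and that of the stiffness matrix.

```python
import numpy as np

def cond_evolution_matrix(n_el, theta_dt):
    """Spectral condition number of E = M + (theta*dt)*A for 1D
    piecewise-linear elements on a uniform mesh of (0,1)."""
    h = 1.0 / n_el
    n = n_el - 1
    M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    A = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    w = np.linalg.eigvalsh(M + theta_dt * A)   # E is symmetric positive definite
    return w.max() / w.min()
```

For a fixed mesh, decreasing $\theta\Delta t$ drives the condition number monotonically down toward that of the mass matrix alone (which in this 1D setting is below 3).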
We recall that in the spectral collocation case $M$ is diagonal. As a matter of fact, following the discussion carried out throughout Section 6.3.1, we can implement the collocation case in two fashions. In the former we have (see (11.2.18))

(11.4.2)  $M_{ij} = (M_{sp})_{ij} := (\psi_i,\, \psi_j)_N \ , \qquad A_{ij} = (A_{sp})_{ij} := a_N(\psi_j,\, \psi_i) \ ,$

thus $M = D := \mathrm{diag}\,(w_i)$. In the latter case we have

(11.4.3)  $M = \tilde M_{sp} = I \ , \qquad A = \tilde A_{sp} = D^{-1} A_{sp} \ .$

This is obtained by multiplying to the left $M$ and $A$ in (11.4.2) by $M_{sp}^{-1}$, resorting therefore to the classical (differential) form of the collocation method. We recall, moreover, that $\tilde A_{sp}$ is not symmetric (see (6.3.12)), but has positive and real eigenvalues. We report in Table 11.4.2 the condition number $\chi_1(\tilde E_{sp})$ (see (2.1.25)) of the Legendre spectral evolution matrix $\tilde E_{sp} = I + \theta\Delta t\, \tilde A_{sp}$ for several values of $N$ and $\theta\Delta t$.

Table 11.4.2. The condition number $\chi_1(\tilde E_{sp})$ of the spectral collocation matrix $\tilde E_{sp}$

            θΔt = .5       .1        .05       .01      .005     .001
  N =  4      11.44       8.46      3.95      2.64     1.85     1.17
       8     110.54      77.20     28.38     15.23     8.24     2.58
      12     489.49     340.38    120.47     63.60    31.95     7.79
      16    1458.93    1012.49    361.17    187.08    92.675   20.55
      20    3444.97    2389.97    851.16    440.09   217.25    45.34
      24    6993.40    4851.04   1726.54    892.05   439.33    89.46
When $\theta = 0$, the solution $\xi^{n+1}$ is explicitly given in terms of $\xi^n$ through (11.3.5). This is a highly desirable feature of the spectral collocation method, which unfortunately is not shared by the finite element method, since $M_{ij} = (M_{fe})_{ij} = (\varphi_i,\, \varphi_j)$, $\{\varphi_j\}$ being the finite element basis such that $\varphi_j(a_i) = \delta_{ij}$ for any grid-point $a_i$. For this reason, in common practice it is usual to approximate $M_{fe}$ by a diagonal matrix. This is the so-called lumping process. We illustrate this method when considering the finite dimensional subspace $V_h = X_h^1 \cap H_0^1(\Omega)$, $X_h^1$ defined in (3.2.4). A first way to introduce the lumping process is to substitute the mass matrix $M$ by the matrix $\bar M$ given by

(11.4.4)  $\bar M_{ij} := \delta_{ij} \sum_{k=1}^{N_h} M_{ik} \ ,$

i.e., $\bar M$ is the diagonal matrix having the element $\bar M_{ii}$ equal to the sum of the elements of $M$ on the $i$-th row. In order to verify that $\bar M$ is indeed nonsingular and to establish suitable error estimates it is useful to interpret this procedure from another point of view. Let us introduce the quadrature formula (trapezoidal rule)
(11.4.5)  $\int_K \psi \approx \frac13\, \mathrm{meas}(K) \sum_{j=1}^{3} \psi(b_{j,K}) \ ,$
where $K$ is any element of the triangulation $\mathcal{T}_h$ and $b_{j,K}$, $j = 1,2,3$, are the vertices of $K$. We can define the approximate scalar product $(\cdot,\cdot)_h$ as

(11.4.6)  $(\varphi,\, \psi)_h := \sum_{K \in \mathcal{T}_h} \Big[ \frac13\, \mathrm{meas}(K) \sum_{j=1}^{3} \varphi(b_{j,K})\, \psi(b_{j,K}) \Big] \ .$
Now we are in a position to formulate the problem: find $u_h(t) \in V_h$ such that

(11.4.7)
$\begin{cases} \dfrac{d}{dt}(u_h(t),\, v_h)_h + a(u_h(t),\, v_h) = (f(t),\, v_h) \quad \forall\, v_h \in V_h \ , \ t \in [0,T] \\ u_h(0) = u_{0,h} \ , \end{cases}$

where $V_h = X_h^1 \cap H_0^1(\Omega)$ is the space of piecewise-linear functions vanishing on $\partial\Omega$. If $u_h$ is written as

$u_h(t) = \sum_{j=1}^{N_h} \xi_j(t)\, \varphi_j \ ,$

where $\varphi_j$ is the Lagrangian basis function associated to the node $a_j$, system (11.4.7) reads as follows

$\sum_{j=1}^{N_h} (\varphi_i,\, \varphi_j)_h\, \frac{d\xi_j}{dt}(t) + \sum_{j=1}^{N_h} a(\varphi_j,\, \varphi_i)\, \xi_j(t) = (f(t),\, \varphi_i) \qquad \forall\, i = 1,\dots,N_h \ .$
The corresponding mass matrix is given by $(\varphi_i,\, \varphi_j)_h$, whereas the stiffness matrix is unchanged, i.e., $A_{ij} = a(\varphi_j,\, \varphi_i)$. We claim that

(11.4.8)  $(\varphi_i,\, \varphi_j)_h = \bar M_{ij}$

(thus, in particular, $\bar M$ is nonsingular). To prove this fact, notice at first that trivially $(\varphi_i,\, \varphi_j)_h = 0$ if $i \ne j$, as in that case $\varphi_i \varphi_j$ vanishes at each node of $\mathcal{T}_h$. Moreover, one easily verifies that $(\varphi_i,\, \varphi_k)$ is nonzero only if the nodes $a_i$ and $a_k$ belong to the same triangle $K$. In this case, a simple calculation shows that

$\int_K \varphi_i \varphi_k = \begin{cases} \frac{1}{12}\, \mathrm{meas}(K) \ , & i \ne k \ , \\ \frac{1}{6}\, \mathrm{meas}(K) \ , & i = k \ . \end{cases}$

For each pair $a_i$, $a_k$, $i \ne k$, there are exactly two triangles $K$ containing $a_i$ and $a_k$; thus, denoting by $D_i$ the union of the triangles having $a_i$ as a vertex, it follows

$\sum_{k \ne i} (\varphi_i,\, \varphi_k) = \sum_{k \ne i} \sum_{K} \int_K \varphi_i \varphi_k = \frac16\, \mathrm{meas}(D_i)$

and

$\|\varphi_i\|_0^2 = \sum_{K \in \mathcal{T}_h} \int_K \varphi_i^2 = \frac16\, \mathrm{meas}(D_i) \ .$

Hence

$\bar M_{ii} := \sum_{k=1}^{N_h} (\varphi_i,\, \varphi_k) = \frac13\, \mathrm{meas}(D_i) \ ,$

and trivially we also have

$(\varphi_i,\, \varphi_i)_h = \frac13 \sum_{K \in \mathcal{T}_h} \Big[ \mathrm{meas}(K) \sum_{j=1}^{3} \varphi_i^2(b_{j,K}) \Big] = \frac13\, \mathrm{meas}(D_i) \ ,$

therefore the proof of (11.4.8) is complete. It is worthwhile to notice that optimal error estimates like those proven, for instance, in (11.2.7) and in Corollary 11.3.1 also hold for the "lumped" problem (11.4.7) (see Raviart (1973), and also Thomee (1984), pp. 170-179). For the solution of the system (11.3.5) we refer to the algorithms described throughout Chapter 2. It is useful to point out that a Cholesky or LU decomposition (when used) can be carried out only once, as $E$ doesn't change along the time. Also, let us stress the fact that iterative procedures are fast to converge as $E$ is nicely conditioned, and, at each time-level, an accurate initial guess is provided by the numerical solution available from the previous step.
11.5 Complements

Besides the paper by Wheeler (1973), other early results on the Galerkin finite element approximation of parabolic equations are the ones by Douglas and Dupont (1970), Bramble and Thomee (1974) and Zlamal (1974). Another approach, based on high order accurate rational approximations of the exponential function, is the one by Baker, Bramble and Thomee (1977). Optimal error estimates in non-isotropic Sobolev spaces of fractional order for the Galerkin approximation of parabolic problems have been obtained by Hackbusch (1981b), Baiocchi and Brezzi (1983) and Tomarelli (1984). The error estimate (11.3.36) is not well suited for being used in choosing the time-step adaptively. For a detailed analysis of this important issue, we refer to Johnson (1987), Sect. 8.4, Eriksson, Estep, Hansbo and Johnson (1994), and the references therein. The approximation of parabolic equations based on the mixed finite element approach has been proposed and analyzed by Johnson and Thomee (1981).
12. Unsteady Advection-Diffusion Problems
In this Chapter we consider linear second-order parabolic equations with advective terms dominating over the diffusive ones. The corresponding steady case was considered in Chapter 8. At first we recall the mathematical formulation of parabolic problems, and underline the difficulties stemming from strong advection (and/or small diffusion). Next we introduce a family of time-advancing finite difference schemes (explicit, implicit or semi-implicit), and analyze their stability properties, improving some results obtained in Chapter 11. Then we consider the so-called discontinuous Galerkin method, which accounts for the stabilization procedures for space discretization that have been introduced in Section 8.3.2. We continue with operator-splitting methods, which yield a decoupling between advection and diffusion. We finish with the so-called characteristic Galerkin method, which is based on the replacement of the advective part of the equation by total differentiation along characteristics.
12.1 Mathematical Formulation

We assume that $\Omega$ is a bounded domain in $\mathbb{R}^d$, $d = 2,3$, with Lipschitz boundary, and consider the parabolic initial-boundary value problem: for each $t \in [0,T]$ find $u(t)$ such that

(12.1.1)
$\begin{cases} \dfrac{\partial u}{\partial t} + Lu = f & \text{in } Q_T := (0,T) \times \Omega \\ Bu = 0 & \text{on } \Sigma_T := (0,T) \times \partial\Omega \\ u = u_0 & \text{on } \Omega, \text{ for } t = 0 \ , \end{cases}$

where $L$ is the second-order elliptic operator

(12.1.2)  $Lw := -\varepsilon \Delta w + \sum_{i=1}^{d} D_i(b_i w) + a_0 w$
and the boundary operator $B$ denotes any one of the boundary conditions considered in Chapter 6 (Dirichlet, Neumann, mixed, Robin).
For the purpose of our analysis, we are considering here the case in which $\varepsilon \ll \|\mathbf{b}\|_{L^\infty(\Omega)}$. In practice this situation is more difficult to treat, giving rise to instability phenomena if the spatial grid-size $h$ is not small enough. Similar problems also appear in the steady case, and in Chapter 8 we have considered some methods to overcome these difficulties. The "simplified" form considered in (12.1.2) is a perfect alias of the most general situation in which $L$ is given by (8.1.1), whenever the diffusion coefficients $a_{ij}$ are significantly smaller than the advective ones $b_i$, $i,j = 1,\dots,d$. With no loss of generality, we further suppose that $\mathbf{b}$ is normalized to $\|\mathbf{b}\|_{L^\infty(\Omega)} = 1$. We assume here that there exist two positive constants $\mu_0$ and $\mu_1$ such that

(12.1.3)  $0 < \mu_0 \le \mu(x) := \frac12\, \mathrm{div}\, \mathbf{b}(x) + a_0(x) \le \mu_1$

for almost every $x \in \Omega$. With this condition the bilinear form

(12.1.4)  $a(w,v) := \int_\Omega \Big[ \varepsilon \nabla w \cdot \nabla v - \sum_{i=1}^{d} b_i w\, D_i v + a_0 w v \Big]$

is coercive in the Hilbert space $H_0^1(\Omega)$, i.e., it satisfies (5.1.14) with a constant $\alpha = \min\{\varepsilon,\, (\varepsilon + C_\Omega \mu_0)/(1 + C_\Omega)\}$, where $C_\Omega$ is the constant in the Poincaré inequality (1.3.2). Condition (12.1.3) is not restrictive, as, by means of the change of variable $u_\lambda(t,x) := e^{-\lambda t} u(t,x)$, we can deal with the auxiliary bilinear form $a_\lambda(w,v) := a(w,v) + \lambda(w,v)$, where $(\cdot,\cdot)$ denotes the scalar product in $L^2(\Omega)$. Its coefficients always satisfy (12.1.3) provided that both $a_0$ and $\mathrm{div}\, \mathbf{b}$ belong to $L^\infty(\Omega)$ and $\lambda$ is large enough. In the sequel we will consider, for simplicity, the homogeneous Dirichlet boundary condition. The parabolic advection-diffusion problem (12.1.1) can be reformulated in a weak fashion as follows: given $f \in L^2(Q_T)$ and $u_0 \in L^2(\Omega)$, find $u \in L^2(0,T;V) \cap C^0([0,T];L^2(\Omega))$ such that

(12.1.5)
$\begin{cases} \dfrac{d}{dt}(u(t),\, v) + a(u(t),\, v) = (f(t),\, v) \quad \forall\, v \in V \\ u(0) = u_0 \ , \end{cases}$
,2
12.1 Mathematical Formulation
407
the continuity constant of the form a(·, .). After rewriting the bilinear form as
it can be easily seen that a(·,·) satisfies (5.1.13) and the continuity constant I can be estimated by
I
:s E + 1 + ILl
.
We conclude that the error estimate (11.2.3) (or else (11.2.7)) for the semidiscrete approximation deteriorates if E is small. A similar situation was already observed in the steady case (see Section 8.1). Clearly, this difficulties also arise for the fullydiscrete approximation given by the escheme (11.3.4). In fact, it is easily seen that also in this case the multiplicative constants appearing in the error estimates obtained in Theorem 11.3.2 and Corollary 11.3.1 depend increasingly on E l. A final remark concerns the stability condition (11.3.7) obtained for the escheme in Theorem 11.3.1. It reads as follows: Lit
For a small
E
Ch 2 a.
< :::::: (1 2e)/2
this becomes
(12.1.7) We notice that this restriction, obtained by an energy analysis, is not fully satisfactory. As a matter of fact, from a von Neumann stability analysis one would expect a condition on Lit which, for fixed h, becomes more favourable as far as E decreases. The aim of this Chapter is twofold. First, improve some results obtained in Chapter 11, enlightening the dependence of stability and convergence estimates on the small diffusion coefficient E. Indeed, we will begin the following Section by showing via a sharper analysis that stability holds even if the timestep Lit satisfies a condition weaker than (12.1.7). We also introduce a semiimplicit scheme which is not subjected to timestep restrictions. Secondly, we introduce other timeadvancing methods that were not considered in Chapter 11 and that are better suited for the approximation of parabolic problems where advection is dominant.
12. Unsteady Advection-Diffusion Problems
12.2 Time-Advancing by Finite Differences

As done in (11.2.1), we consider the following semi-discrete (continuous in time) approximation of the advection-diffusion initial-boundary value problem (12.1.5):
(12.2.1)   d/dt (u_h(t), v_h) + a(u_h(t), v_h) = (f(t), v_h)   ∀ v_h ∈ V_h , t ∈ (0,T)
           u_h(0) = u_{0,h} .
Here V_h ⊂ H_0^1(Ω) is a suitable finite dimensional space and u_{0,h} ∈ V_h an approximation of the initial datum u_0. In this respect, we have noticed in Chapter 8 that the Galerkin approximation for the steady problem using the classical finite element space (6.2.4) is not satisfactory when advection is much stronger than diffusion, as the approximate solution is highly oscillatory unless the mesh-size h is comparable with ε. In order to avoid these difficulties, in the rest of this Section we will focus on the case in which diffusion is dominant. More appropriate approaches for the advection-dominated case will be presented and discussed in Sections 12.3, 12.4 and 12.5.

12.2.1 A Sharp Stability Result for the θ-scheme
In Section 11.3 we have analyzed a family of time-advancing methods based on the so-called θ-scheme:

(12.2.2)   (1/Δt)(u_h^{n+1} − u_h^n, v_h) + a(θu_h^{n+1} + (1 − θ)u_h^n, v_h)
               = (θf(t_{n+1}) + (1 − θ)f(t_n), v_h)   ∀ v_h ∈ V_h
           u_h^0 = u_{0,h}

for each n = 0,1,…,N−1, where 0 ≤ θ ≤ 1, Δt := T/N is the time step, N is a positive integer, and u_{0,h} ∈ V_h is a suitable approximation of the initial datum u_0. We remind that this includes the schemes: forward Euler (θ = 0), backward Euler (θ = 1) and Crank–Nicolson (θ = 1/2). At each time step one has to solve a linear system associated to the matrix M + θΔtA, where the stiffness matrix A and the mass matrix M are defined as in (11.2.2), i.e.,

(12.2.3)   A_ij := a(φ_j, φ_i) ,   M_ij := (φ_j, φ_i) ,   i,j = 1,…,N_h ,

{φ_j | j = 1,…,N_h} being a basis of V_h. Notice that the forward Euler scheme is not explicit, unless the mass matrix M is lumped to a diagonal one (see Section 11.4).
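In algebraic form, one step of (12.2.2) amounts to solving a linear system with matrix M + θΔtA. The following is a hypothetical 1-D sketch (piecewise-linear elements on (0,1), homogeneous Dirichlet conditions, operator Lu = −εu″ + bu′, f = 0); all constants and names are illustrative choices of ours, not data from the book.

```python
import numpy as np

eps, b, T, theta = 0.01, 1.0, 1.0, 0.5     # theta = 1/2: Crank-Nicolson
Nh, N = 50, 200
h, dt = 1.0 / (Nh + 1), T / N
x = np.linspace(h, 1.0 - h, Nh)            # internal nodes

# Mass matrix M and matrix A = a(phi_j, phi_i) as in (12.2.3)
I = np.eye(Nh)
up = np.diag(np.ones(Nh - 1), 1)
M = h / 6.0 * (4.0 * I + up + up.T)        # consistent mass matrix
K = eps / h * (2.0 * I - up - up.T)        # diffusion part
C = b / 2.0 * (up - up.T)                  # advection part (central)
A = K + C

u = np.sin(np.pi * x)                      # coefficients of u_{0,h}
E = M + theta * dt * A                     # system matrix of (12.2.2)
for n in range(N):
    u = np.linalg.solve(E, (M - (1.0 - theta) * dt * A) @ u)
```

For θ = 0 the left-hand matrix reduces to M, which is why the forward Euler scheme is explicit only after mass lumping.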
Now we prove a new stability result for these schemes, which improves the one of Theorem 11.3.1.

Proposition 12.2.1 Assume that (12.1.3) holds and that the map t ↦ ‖f(t)‖_0 is bounded in [0,T]. When 0 ≤ θ < 1/2, assume, moreover, that T_h is a quasi-uniform family of triangulations and that the time-step restriction

(12.2.4)   Δt ≤ 2/[(1 − 2θ)C*] min{ h²/ε , h²μ_0/(1 + h²μ_1²) }

is satisfied. Then the stability estimate (12.2.5) holds.

Proof. Proceeding as in the proof of Theorem 11.3.1, and setting u_{h,θ}^{n+1} := θu_h^{n+1} + (1 − θ)u_h^n and f_θ^{n+1} := θf(t_{n+1}) + (1 − θ)f(t_n), one shows that there exists a constant C_1 > 0, only dependent on Ω, such that

   Δt (f_θ^{n+1}, u_{h,θ}^{n+1}) ≤ (C_1/α) Δt ‖f_θ^{n+1}‖_{−1,h}² + (Δt α/2) ‖∇u_{h,θ}^{n+1}‖_0²

(the norm ‖·‖_{−1,h} has been defined in (11.3.6)). Finally, we know that

(12.2.9)   ‖u_h^{n+1} − u_h^n‖_0² = −Δt a(u_{h,θ}^{n+1}, u_h^{n+1} − u_h^n) + Δt (f_θ^{n+1}, u_h^{n+1} − u_h^n) =: A + B .

The first term at the right hand side can be estimated using (12.1.6) and providing a strict bound for each one of the terms. This yields

   A ≤ Δt ε ‖∇u_{h,θ}^{n+1}‖_0 ‖∇(u_h^{n+1} − u_h^n)‖_0 + Δt ‖u_{h,θ}^{n+1}‖_0 ‖∇(u_h^{n+1} − u_h^n)‖_0
       + Δt ( (1/2)‖∇u_{h,θ}^{n+1}‖_0 + μ_1 ‖u_{h,θ}^{n+1}‖_0 ) ‖u_h^{n+1} − u_h^n‖_0 .

By means of the inverse inequality (6.3.21) we finally have

   A ≤ Δt [ C_I^{1/2} h^{−1} ε ‖∇u_{h,θ}^{n+1}‖_0 + (C_I^{1/2} h^{−1} + μ_1) ‖u_{h,θ}^{n+1}‖_0 ] ‖u_h^{n+1} − u_h^n‖_0 .

Using again (6.3.21), the second term at the right hand side in (12.2.9) satisfies

   B ≤ C_2 Δt h^{−1} ‖f_θ^{n+1}‖_{−1,h} ‖u_h^{n+1} − u_h^n‖_0 .

In conclusion, from (12.2.9) we deduce

(12.2.10)   ‖u_h^{n+1} − u_h^n‖_0² ≤ C*(Δt)² [ h^{−2}ε² ‖∇u_{h,θ}^{n+1}‖_0² + h^{−2}(1 + h²μ_1²) ‖u_{h,θ}^{n+1}‖_0² + h^{−2} ‖f_θ^{n+1}‖_{−1,h}² ] ,

where C* > 0 only depends on Ω. If the time-step Δt satisfies (12.2.4) we can conclude that

   ‖u_h^{n+1}‖_0² − ‖u_h^n‖_0² ≤ C_3 Δt ‖f_θ^{n+1}‖_{−1,h}² ,

where C_3 > 0 only depends on Ω. Now the stability estimate (12.2.5) follows by proceeding as in Theorem 11.3.1. □

Stability condition (12.2.4) is less restrictive when ε becomes small, and it is mainly dictated by the advection coefficient b and by a_0 through the constants μ_0 and μ_1 in (12.1.3). Despite this fact, it might not be the best possible one yet, at least in some particular cases. As a matter of fact, a von Neumann stability analysis for the forward-in-time (θ = 0), backward-in-space finite difference scheme applied to the one-dimensional advection-diffusion operator Lu := −εu″ + bu′ + a_0u, with ε, b and a_0 positive constants, on the whole real line ℝ yields the stability condition

   Δt ≤ h² / (2ε + bh)

(see, e.g., Strikwerda (1989), pp. 129–132). For small ε, this time-step bound behaves like O(h), whereas in (12.2.4) it is O(h²) for each ε > 0.
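The different scaling of the two restrictions can be made concrete with a few numbers. The sketch below evaluates both bounds with placeholder constants (C* = μ_0 = μ_1 = 1, θ = 0, b = 1); the specific values are ours and only illustrate the O(h²) versus O(h) behaviour for small ε.

```python
# Illustrative comparison of the energy bound (12.2.4) with the von
# Neumann bound dt <= h^2/(2*eps + b*h); all constants are placeholders.
eps, b, mu0, mu1, Cstar, theta = 1e-4, 1.0, 1.0, 1.0, 1.0, 0.0
rows = []
for h in (0.1, 0.05, 0.025):
    dt_energy = 2.0 / ((1 - 2 * theta) * Cstar) \
        * min(h**2 / eps, h**2 * mu0 / (1 + h**2 * mu1**2))
    dt_von_neumann = h**2 / (2 * eps + b * h)
    rows.append((h, dt_energy, dt_von_neumann))
    print(f"h={h:.3f}  energy: {dt_energy:.2e}  von Neumann: {dt_von_neumann:.2e}")
```

Halving h divides the energy bound by about four but the von Neumann bound only by about two, so for advection-dominated problems the energy analysis is the more pessimistic of the two.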
12.2.2 A Semi-Implicit Scheme

Aiming at avoiding stability restrictions, without resorting to a fully implicit approximation, one is led to consider the so-called semi-implicit time-discretization approach. This consists in evaluating the principal part of the operator L (i.e., its second order term −εΔ) at the time level t_{n+1}, whereas the remaining parts are considered at the time t_n. More precisely, this scheme reads

(12.2.11)   (1/Δt)(u_h^{n+1} − u_h^n, v_h) + ε(∇u_h^{n+1}, ∇v_h) − Σ_{i=1}^d (b_i u_h^n, D_i v_h) + (a_0 u_h^n, v_h)
                = (f(t_{n+1}), v_h)   ∀ v_h ∈ V_h
            u_h^0 = u_{0,h}
for each n = 0,1,…,N−1. This scheme is expected to enjoy better stability properties than the one previously analyzed. In fact, we can prove:

Theorem 12.2.1 Assume that the map t ↦ ‖f(t)‖_0 is bounded in [0,T]. Then the semi-implicit approximation scheme (12.2.11) is unconditionally stable on any finite time interval [0,T], and the solution u_h^n satisfies

(12.2.12)   ‖u_h^n‖_0 ≤ C exp[ (CT/ε)(1 + ‖a_0‖_{L^∞(Ω)}²) ] ( ‖u_{0,h}‖_0 + √(T/ε) max_{t∈[0,T]} ‖f(t)‖_0 )

for each n = 0,1,…,N, where the constant C > 0 only depends on Ω.
Proof. By proceeding essentially as in the proof of Theorem 11.3.1, the choice v_h = u_h^{n+1} in (12.2.11) produces

   (1/2)‖u_h^{n+1}‖_0² − (1/2)‖u_h^n‖_0² + (1/2)‖u_h^{n+1} − u_h^n‖_0² + εΔt‖∇u_h^{n+1}‖_0²
      ≤ CΔt (1 + ‖a_0‖_{L^∞(Ω)}) ‖u_h^n‖_0 ‖∇u_h^{n+1}‖_0 + CΔt ‖f(t_{n+1})‖_{−1,h} ‖∇u_h^{n+1}‖_0 .

It follows easily that

   ‖u_h^{n+1}‖_0² − ‖u_h^n‖_0² ≤ (C/ε) Δt (1 + ‖a_0‖_{L^∞(Ω)}²) ‖u_h^n‖_0² + (C/ε) Δt ‖f(t_{n+1})‖_{−1,h}² .

Let now m be a fixed index, 1 ≤ m ≤ N. Summing over n from 0 to m−1 we find

   ‖u_h^m‖_0² ≤ ‖u_{0,h}‖_0² + (C/ε) Δt (1 + ‖a_0‖_{L^∞(Ω)}²) Σ_{n=0}^{m−1} ‖u_h^n‖_0² + (C/ε) Δt Σ_{n=0}^{m−1} ‖f(t_{n+1})‖_{−1,h}² .

Using the discrete Gronwall inequality (see Lemma 1.4.1) and recalling that ‖φ‖_{−1,h} ≤ ‖φ‖_0 for each φ ∈ L²(Ω) gives (12.2.12). □

The stability estimate (12.2.12) deteriorates exponentially with respect to ε⁻¹. However, numerical experiences show that, at least in some particular cases, the bound given in (12.2.12) is too pessimistic, and that the dependence on ε⁻¹ is linear rather than exponential (see Bressan and Quarteroni (1986)). Now we turn to the convergence analysis for the semi-implicit problem (12.2.11). We have the following
Theorem 12.2.2 Assume that u_0 ∈ H_0^1(Ω) and that the solution u to (11.1.5) is such that ∂u/∂t ∈ L²(0,T;H_0^1(Ω)) and ∂²u/∂t² ∈ L²(0,T;L²(Ω)). Then u_h^n defined in (12.2.11) satisfies, for each n = 0,1,…,N,

(12.2.13)   ‖u_h^n − u(t_n)‖_0 ≤ ‖(I − Π_{1,h}^k) u(t_n)‖_0 + exp(C_ε t_n)
               × { ‖u_{0,h} − Π_{1,h}^k(u_0)‖_0² + C_ε ∫_0^{t_n} ‖(I − Π_{1,h}^k) ∂u/∂t (s)‖_0² ds
                   + C_ε (Δt)² ∫_0^{t_n} ( ‖∂u/∂t (s)‖_1² + ‖∂²u/∂t² (s)‖_0² ) ds }^{1/2} ,

where Π_{1,h}^k is the elliptic "projection" operator defined in (11.2.8) and C_ε > 0 is a nondecreasing function of ε⁻¹.
Proof. From our assumptions it follows that u ∈ C⁰([0,T];H_0^1(Ω)); hence Π_{1,h}^k(u(t_n)) is defined for each n = 0,1,…,N. Set η_h^n := u_h^n − Π_{1,h}^k(u(t_n)). Writing the scheme (12.2.11) in terms of η_h^n, using the equation satisfied by the solution u, namely

   (∂u/∂t (t_{n+1}), v_h) = (f(t_{n+1}), v_h) − ε(∇u(t_{n+1}), ∇v_h) + Σ_{i=1}^d (b_i u(t_{n+1}), D_i v_h) − (a_0 u(t_{n+1}), v_h) ,

together with the definition (11.2.8) of Π_{1,h}^k, and recalling that ∂/∂t commutes with Π_{1,h}^k, one finds that for each v_h ∈ V_h and n = 0,1,…,N−1

(12.2.14)   (1/Δt)(η_h^{n+1} − η_h^n, v_h) + ε(∇η_h^{n+1}, ∇v_h) − Σ_{i=1}^d (b_i η_h^n, D_i v_h) + (a_0 η_h^n, v_h) = (ε_h^n, v_h) ,

where ε_h^n ∈ V_h is defined by the relation
(12.2.15)   (ε_h^n, v_h) = ( (u(t_{n+1}) − u(t_n))/Δt − ∂u/∂t (t_{n+1}), v_h )
              + (1/Δt) ( ∫_{t_n}^{t_{n+1}} (I − Π_{1,h}^k) ∂u/∂t (s) ds , v_h )
              + Σ_{i=1}^d (b_i Π_{1,h}^k(u(t_n) − u(t_{n+1})), D_i v_h)
              − (a_0 Π_{1,h}^k(u(t_n) − u(t_{n+1})), v_h) .

Therefore η_h^n satisfies a scheme like (12.2.11), and consequently

(12.2.16)   ‖η_h^n‖_0² ≤ ( ‖u_{0,h} − Π_{1,h}^k(u_0)‖_0² + (C/ε) Δt Σ_{k=0}^{n−1} ‖ε_h^k‖_{−1,h}² ) × exp[ (C t_n/ε)(1 + ‖a_0‖_{L^∞(Ω)}²) ] .

A suitable bound for ‖ε_h^n‖_{−1,h} is easily found. In fact, the first two terms at the right hand side of (12.2.15) can be estimated by proceeding as in the proof of Theorem 11.3.2. Moreover, we have

   Σ_{i=1}^d (b_i Π_{1,h}^k(u(t_n) − u(t_{n+1})), D_i v_h) ≤ C ‖Π_{1,h}^k(u(t_n) − u(t_{n+1}))‖_0 ‖v_h‖_1 .

Therefore we conclude

(12.2.17)   ‖ε_h^n‖_{−1,h}² ≤ C [ Δt ∫_{t_n}^{t_{n+1}} ‖∂²u/∂t² (s)‖_{−1,h}² ds + (1/Δt) ∫_{t_n}^{t_{n+1}} ‖(I − Π_{1,h}^k) ∂u/∂t (s)‖_{−1,h}² ds
              + (1 + ‖a_0‖_{L^∞(Ω)}²) ‖Π_{1,h}^k(u(t_{n+1}) − u(t_n))‖_0² ] .

Finally, since the operator Π_{1,h}^k is uniformly bounded in H_0^1(Ω), we obtain

(12.2.18)   ‖Π_{1,h}^k(u(t_{n+1}) − u(t_n))‖_0² ≤ Δt ∫_{t_n}^{t_{n+1}} ‖Π_{1,h}^k ∂u/∂t (s)‖_0² ds ≤ Δt (γ²/α²) ∫_{t_n}^{t_{n+1}} ‖∂u/∂t (s)‖_1² ds ,

α and γ being the coerciveness and continuity constants of the form a(·,·), respectively. The thesis follows now from (12.2.16)–(12.2.18). □
From (12.2.13) we can easily obtain an error estimate with respect to h and Δt by proceeding as in Corollary 11.3.1. Under the additional assumptions u_0 ∈ H^{k+1}(Ω) and ∂u/∂t ∈ L²(0,T;H^{k+1}(Ω)), the final result reads

   ‖u_h^n − u(t_n)‖_0 = O(h^{k+1} + Δt) .

The main motivation for dealing with the diffusive part implicitly and the advective one explicitly can be ascribed to the following facts. As proven in Theorem 12.2.1, for finite time integration the scheme is unconditionally stable. Besides, at each step one needs to solve a linear system with a symmetric matrix, although the original problem was not self-adjoint. Further, when applied to problems with nonlinear advection, the semi-implicit scheme provides an effective linearization procedure, as it still yields a system with a symmetric matrix, which moreover is the same at each time level.
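These practical advantages can be seen in a hypothetical 1-D sketch of (12.2.11) with piecewise-linear elements: the matrix M + Δt ε K is symmetric, positive definite and the same at every time level, so in practice it would be factored once; all names and constants below are our illustrative choices, not data from the book.

```python
import numpy as np

eps, b, a0, T = 1e-3, 1.0, 0.5, 0.5
Nh, N = 80, 400
h, dt = 1.0 / (Nh + 1), T / N
x = np.linspace(h, 1.0 - h, Nh)

I = np.eye(Nh)
up = np.diag(np.ones(Nh - 1), 1)
M = h / 6.0 * (4.0 * I + up + up.T)      # mass matrix
K = 1.0 / h * (2.0 * I - up - up.T)      # discrete -u'' (Dirichlet conditions)
C = b / 2.0 * (up - up.T) + a0 * M       # advection + reaction, treated explicitly

E = M + dt * eps * K                     # SPD and independent of n
u = np.exp(-100.0 * (x - 0.3) ** 2)      # coefficients of u_{0,h}
for n in range(N):
    # implicit diffusion, explicit advection/reaction as in (12.2.11)
    u = np.linalg.solve(E, M @ u - dt * (C @ u))
```

Only the symmetric matrix E ever has to be inverted, even though the advection operator itself is not self-adjoint.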
12.3 The Discontinuous Galerkin Method for Stabilized Problems

We have already shown in Chapter 8 that a pure Galerkin approach for space discretization can perform poorly when dealing with advection-dominated problems. In that case a stabilization procedure is therefore in order, and, if we choose to operate on the space variables solely, it can be devised on the grounds of what has been done in Chapter 8 for steady advection-diffusion problems. In particular, one can resort to using in (12.2.1) the space V_h introduced in (8.3.15), which is obtained from X_h^k ∩ H_0^1(Ω) by adding to it a finite dimensional space B of bubble functions, one bubble in each element K ∈ T_h (see, for instance, (8.3.16)). Unfortunately, the resulting scheme tends to be over-diffusive across sharp layers. We could then decide to follow a different approach, resorting to stabilization techniques like those presented in Section 8.3.2, thus choosing V_h = X_h^k ∩ H_0^1(Ω), k ≥ 1, where X_h^k is the finite element space defined in (3.2.4) or (3.2.5), but replacing in (12.2.1) the bilinear form a(·,·) and the right hand side by more involved ones. For instance, we could consider the following "stabilized" semi-discrete problem:
(12.3.1)   d/dt (u_h(t), v_h) + a(u_h(t), v_h) + …

The parameter δ > 0 witnesses the better properties of this method when compared with the pure Galerkin one (i.e., when δ = 0). The convergence analysis is presented in Hughes, Franca and Mallet (1987) for p = 0 and in Hughes, Franca and Hulbert (1989) for p = 1. The method is O(h^{k+1/2})-accurate with respect to the norm defined at the left hand side of (12.3.6) (however, when p = 0 the operator L has to be replaced by its skew-symmetric part L_ss). Notice that one could also choose a different finite dimensional space V_h and/or a different polynomial degree q on each interval [t_n, t_{n+1}]. An even more general approach, proposed by Johnson, Nävert and Pitkäranta (1984), is based on a space-time triangulation of each strip S_n := [t_n, t_{n+1}] × Ω. In this case, for each n one chooses a finite element subspace V_h^{(n)} based on space-time elements of size less than or equal to h. The triangulations relative to adjoining strips do not need to match on the common time level. The problem still reads as (12.3.4), but now both the solution u_h and the test function v belong to V_h^{(n)}, and T_h denotes a family of triangulations of S_n.
12.4 Operator-Splitting Methods

Another effective approach for finding the approximate solution of a non-stationary advection-diffusion equation is based on an operator-splitting strategy. Let us still consider problem (12.1.1). The operator L can be written as

(12.4.1)   L = L_1 + L_2 ,

where L_i, i = 1,2, are suitable operators. For instance, a typical splitting for the advection-diffusion case is given by

(12.4.2)   L_1 u = −εΔu + ημu ,   L_2 u = (1/2) div(bu) + (1/2) b·∇u + (1 − η)μu ,

where μ is defined in (12.1.3) and 0 ≤ η ≤ 1. With this choice, we have separated diffusion from advection. In particular, if η = 1 we have L_1 = L_s and L_2 = L_ss, the symmetric and skew-symmetric parts of the operator L with respect to the scalar product in L²(Ω) (see (12.3.2) and (12.3.3)). The basic idea of an operator-splitting procedure is as follows: starting from u^n(x), an approximation of u(t_n,x), construct u^{n+1}(x) through two or more intermediate values, each one obtained by solving a boundary value problem related to only one of the operators L_i (see Section 5.7 for a general presentation of these methods). We start focussing on the Peaceman–Rachford scheme: given u^0 := u_0, for each n = 0,1,…,N−1 solve at first

(12.4.3)   (u^{n+1/2} − u^n)/τ + L_1 u^{n+1/2} + L_2 u^n = f(t_{n+1/2})   in Ω ,
then

(12.4.4)   (u^{n+1} − u^{n+1/2})/τ + L_1 u^{n+1/2} + L_2 u^{n+1} = f(t_{n+1/2})   in Ω ,

where τ = Δt/2 and t_{n+1/2} = t_n + Δt/2. Clearly, suitable boundary conditions have to be added to both (12.4.3) and (12.4.4). In general, the choice of which conditions to impose depends heavily on the properties of the operators L_1 and L_2, as well as on the prescribed boundary operator B. Considering, for instance, the homogeneous Dirichlet case (i.e., Bu = u in (12.1.1)), and assuming that the operators L_1 and L_2 are those introduced in (12.4.2), the boundary condition to be prescribed for (12.4.3) is clearly u = 0 on ∂Ω. On the other hand, (12.4.4) is a steady advection problem (see Section 5.1; see also Chapter 14). Therefore, the homogeneous Dirichlet condition has to be imposed only on the inflow boundary, i.e., on ∂Ω_in := {x ∈ ∂Ω | b(x)·n(x) < 0}, where n is the unit outward normal vector on ∂Ω. With this choice problem (12.4.4) has a unique solution. In this way, still assuming that (12.1.3) is satisfied, we have (12.4.5), with α := min{ε, ημ_0}, and

(12.4.6)   (L_2 v, v) = ∫_Ω { (1/2)[div(bv) v + (b·∇v) v] + (1 − η)μv² }
                      = (1/2) ∫_{∂Ω\∂Ω_in} b·n v² + (1 − η) ∫_Ω μv² ≥ (1 − η)μ_0 ‖v‖_0²   ∀ v ∈ H¹_{∂Ω_in}(Ω) ,

where, for a subset Σ ⊂ ∂Ω, we have defined H¹_Σ(Ω) := {v ∈ H¹(Ω) | v = 0 on Σ} (see Section 1.3). For splittings based on nonnegative operators, stability and convergence results as Δt → 0 are proven, e.g., in Yanenko (1971) and Marchuk (1990). For nonlinear operators, we refer to Lions and Mercier (1979). Notice that any other fractional-step scheme introduced in Section 5.7 can be used instead of the Peaceman–Rachford one. In view of the effective resolution of the problem, for each n one is led to approximate both (12.4.3) and (12.4.4) by a finite dimensional approximation. The convergence of the discrete fractional-step scheme is ensured provided the approximate operators, say L_{1,h} and L_{2,h}, satisfy properties (12.4.5) and (12.4.6) on all functions v belonging to the appropriate finite dimensional spaces. The splitting procedure described above can also be used at an algebraic level, i.e., when applied to a semi-discrete approximation like (12.2.1). In this case, the problem reads
(12.4.7)   M dξ/dt (t) + Aξ(t) = F(t) ,   t ∈ (0,T)
           ξ(0) = ξ_0 ,

where the matrices A and M are defined in (12.2.3), F_i(t) := (f(t), φ_i), and we have set

   u_h(t,x) = Σ_{j=1}^{N_h} ξ_j(t) φ_j(x) ,   u_{0,h}(x) = Σ_{j=1}^{N_h} ξ_{0,j} φ_j(x) ,
{φ_j | j = 1,…,N_h} being a basis of V_h. Let us recall that the matrix M is symmetric and positive definite. The operator-splitting (12.4.1), (12.4.2) induces the matrix-splitting A = A_1 + A_2, where

(12.4.8)   (A_l)_{ij} := a_l(φ_j, φ_i) ,   l = 1,2 ,

for i,j = 1,…,N_h, and

(12.4.9)    a_1(w,v) := ε ∫_Ω ∇w·∇v + η ∫_Ω μwv ,

(12.4.10)   a_2(w,v) := −(1/2) ∫_Ω Σ_{k=1}^d b_k (w D_k v − v D_k w) + ∫_Ω (1 − η)μwv .

We have to notice that the boundary treatment is straightforward in this framework, as the original matrix A already accounts for the boundary conditions. This approach can be called algebraic splitting, and provides the ground for implementing fractional-step schemes. For instance, the Peaceman–Rachford method in this case reads: take ξ^0 := ξ_0, then solve for n = 0,1,…,N−1

(12.4.11)   M (ξ^{n+1/2} − ξ^n)/τ + A_1 ξ^{n+1/2} + A_2 ξ^n = F(t_{n+1/2})
            M (ξ^{n+1} − ξ^{n+1/2})/τ + A_1 ξ^{n+1/2} + A_2 ξ^{n+1} = F(t_{n+1/2}) ,

where τ = Δt/2. (We recall that u_h^n(x) = Σ_{j=1}^{N_h} ξ_j^n φ_j(x) is the function approximating u(t_n,x).) It is easily seen that A_1 is positive definite for each 0 ≤ η ≤ 1, while A_2 is positive definite for 0 ≤ η < 1 and nonnegative definite for η = 1. Hence both ξ^{n+1/2} and ξ^{n+1} can be uniquely determined from (12.4.11). Now we can prove a stability result for the Peaceman–Rachford scheme.
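The linear algebra of the two half-steps of (12.4.11) can be sketched as follows. Here M, A_1 and A_2 are random stand-ins with the right structure (M symmetric positive definite, A_1 symmetric positive definite, A_2 skew-symmetric, i.e., the case η = 1), not an actual finite element assembly; every name and constant is an illustrative assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
Nh, dt = 30, 0.01
tau = dt / 2.0

G = rng.standard_normal((Nh, Nh))
M = G @ G.T + Nh * np.eye(Nh)          # SPD stand-in for the mass matrix
A1 = np.eye(Nh)                        # stand-in for the SPD diffusion part
S = rng.standard_normal((Nh, Nh))
A2 = S - S.T                           # skew-symmetric stand-in for advection

xi = rng.standard_normal(Nh)           # coefficient vector xi^n
F = np.zeros(Nh)                       # F(t_{n+1/2})

# first half-step: implicit in A1, explicit in A2
xi_half = np.linalg.solve(M + tau * A1, M @ xi - tau * (A2 @ xi) + tau * F)
# second half-step: implicit in A2, explicit in A1
xi_new = np.linalg.solve(M + tau * A2, M @ xi_half - tau * (A1 @ xi_half) + tau * F)
```

Each half-step requires the solution of one system with matrix M + τA_1 or M + τA_2, never with the full matrix M + τA.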
Proposition 12.4.1 Let M be the symmetric and positive definite matrix in (12.4.11), and introduce the scalar product and the associated norm

(12.4.12)   (η, ξ)_M := (Mη)·ξ ,   |ξ|_M := (ξ, ξ)_M^{1/2}

for η, ξ ∈ ℝ^{N_h}. Assume that M⁻¹A_1 and M⁻¹A_2 commute, i.e., that

(12.4.13)   M⁻¹A_1 M⁻¹A_2 = M⁻¹A_2 M⁻¹A_1 ,

and set u_h^n(x) = Σ_{j=1}^{N_h} ξ_j^n φ_j(x), where ξ^n is the Peaceman–Rachford sequence constructed in (12.4.11). Then

(12.4.14)   ‖u_h^n‖_0 ≤ ‖u_{0,h}‖_0 + t_n max_{t∈[0,T]} ‖f(t)‖_0 .
Proof. Let us start from the recurrence relation

(12.4.15)   ξ^{n+1} = Bξ^n + Δt Q M⁻¹ F(t_{n+1/2}) ,

where

(12.4.16)   B := (I + τM⁻¹A_2)⁻¹ (I − τM⁻¹A_1)(I + τM⁻¹A_1)⁻¹ (I − τM⁻¹A_2)

and

(12.4.17)   Q := (I + τM⁻¹A_2)⁻¹ (I + τM⁻¹A_1)⁻¹ .

From (12.4.15) we derive

(12.4.18)   ξ^n = B^n ξ_0 + Δt Σ_{k=1}^n B^{n−k} Q M⁻¹ F(t_{k−1/2}) ,

therefore

(12.4.19)   |ξ^n|_M ≤ |B^n ξ_0|_M + Δt Σ_{k=1}^n |B^{n−k} Q M⁻¹ F(t_{k−1/2})|_M .

From (12.4.13) we can write

   B = (I − τM⁻¹A_1)(I + τM⁻¹A_1)⁻¹ (I − τM⁻¹A_2)(I + τM⁻¹A_2)⁻¹ .

We notice now that for each 0 ≤ η ≤ 1 the matrix M⁻¹A_1 is positive definite with respect to the scalar product (·,·)_M, whereas M⁻¹A_2 is positive definite for 0 ≤ η < 1 and nonnegative definite for η = 1, still with respect to (·,·)_M. Therefore, by proceeding as in Proposition 2.3.1 we obtain ‖B‖_M ≤ 1. In a similar way we prove that ‖Q‖_M ≤ 1, and we conclude that

(12.4.20)   |ξ^n|_M ≤ |ξ_0|_M + Δt Σ_{k=1}^n |M⁻¹F(t_{k−1/2})|_M .

Finally, it is easily seen that

   |ξ^n|_M = ‖u_h^n‖_0 ,   |M⁻¹F(t)|_M = ‖P_h^k(f(t))‖_0 ≤ ‖f(t)‖_0 ,

where P_h^k is the orthogonal projection operator on V_h with respect to the L²(Ω) scalar product. □

The commutativity relation (12.4.13) is rather restrictive, and in general it is not satisfied when A_1 and A_2 are the matrices given in (12.4.8)–(12.4.10). However, if we introduce the auxiliary unknown

(12.4.21)   ξ̂^n := (I + τM⁻¹A_2) ξ^n ,

relation (12.4.15) becomes

(12.4.22)   ξ̂^{n+1} = B̂ ξ̂^n + Δt (I + τM⁻¹A_1)⁻¹ M⁻¹ F(t_{n+1/2}) ,

with B̂ := (I − τM⁻¹A_1)(I + τM⁻¹A_1)⁻¹ (I − τM⁻¹A_2)(I + τM⁻¹A_2)⁻¹. We can thus repeat the procedure above, obtaining (12.4.20) for ξ̂^n instead of ξ^n. This corresponds to a stability estimate for ξ^n with respect to the vector norm associated to the symmetric and positive definite matrix

   M̂ := M + τ(A_2 + A_2^T) + τ² A_2^T M⁻¹ A_2 .
For the convergence analysis of the Peaceman–Rachford method we refer to Yanenko (1971) and Marchuk (1990). Notice that the matrix A_1 defined in (12.4.8), (12.4.9) is the one corresponding to the discretization of the homogeneous Dirichlet problem for the operator L_1 defined in (12.4.2). On the contrary, the matrix A_2 defined in (12.4.8), (12.4.10), which has dimension N_h, is not the discrete counterpart of the operator L_2 with Dirichlet condition on ∂Ω_in. In fact, in the latter case the number of degrees of freedom is equal to the number of internal nodes plus the number of nodes on ∂Ω \ ∂Ω_in, as we are allowed to assign the boundary condition on ∂Ω_in solely. On the basis of this remark, we can conclude that, if we apply first the fractional-step procedure (12.4.3), (12.4.4) and then approximate the two boundary value problems thus obtained, the resulting scheme doesn't necessarily coincide with the one that we would obtain by taking first the semi-discretization (12.4.7) of the original problem and then advancing in time by the fractional-step scheme (12.4.11). Furthermore, we have to warn the reader that the treatment of more general boundary conditions in the framework of operator-splitting procedures needs to be done carefully, in order to ensure consistency with the original boundary value problem. In some cases also the boundary conditions have to be split. For instance, when considering the Neumann problem

   Lu = f   in Ω
   ε ∂u/∂n − b·n u = g   on ∂Ω ,

one could choose a_1(·,·) as in (12.4.9) and

   a_2(w,v) := −(1/2) ∫_Ω Σ_{k=1}^d b_k (w D_k v − v D_k w) + ∫_Ω (1 − η)μwv − (1/2) ∫_{∂Ω} b·n wv .

The matrix A_1 now corresponds to the discretization of the differential operator L_1 associated to the boundary operator ε ∂/∂n on ∂Ω. On the other hand, A_2 corresponds to the discretization of L_2 with Dirichlet boundary condition on ∂Ω_in (imposed in a weak sense).
12.5 A Characteristic Galerkin Method

This method stems from considering the non-stationary advection-diffusion equation from a Lagrangian (instead of Eulerian) point of view, and can be traced back to Pironneau (1982), Douglas and Russell (1982), and Ewing, Russell and Wheeler (1984). At first we define the characteristic lines associated to a vector field b = b(t,x). Given x ∈ Ω and s ∈ [0,T], they are the vector functions X = X(t; s,x) such that

(12.5.1)   dX/dt (t; s,x) = b(t, X(t; s,x)) ,   t ∈ (0,T)
           X(s; s,x) = x .

The existence and uniqueness of the characteristic lines for each choice of s and x holds under suitable assumptions on b, for instance b continuous in [0,T] × Ω̄ and Lipschitz continuous in Ω̄, uniformly with respect to t ∈ [0,T] (see, e.g., Hartman (1973)). From a geometric point of view, X(t; s,x) provides the position at time t of a particle which has been driven by the field b and which occupied the position x at the time s. The uniqueness result gives in particular that

(12.5.2)   X(t; s, X(s; τ,x)) = X(t; τ,x)

for each t, s, τ ∈ [0,T] and x ∈ Ω. Hence X(t; s, X(s; t,x)) = X(t; t,x) = x, i.e., for fixed t and s, the inverse function of x ↦ X(s; t,x) is given by y ↦ X(t; s,y). Therefore, defining

(12.5.3)   ū(t,y) := u(t, X(t; 0,y))

(or, equivalently, u(t,x) = ū(t, X(0; t,x))), from (12.5.1) it follows that
(12.5.4)   ∂ū/∂t (t,y) = ∂u/∂t (t, X(t; 0,y)) + Σ_{i=1}^d D_i u(t, X(t; 0,y)) dX_i/dt (t; 0,y)
                       = ( ∂u/∂t + b·∇u )(t, X(t; 0,y)) .

Following the same notation introduced in (12.5.3), we can rewrite the non-stationary advection-diffusion equation as

(12.5.5)   ∂ū/∂t − εΔu + (div b + a_0)u = f   in Q_T .

We are thus led to discretize (12.5.5). The time derivative is approximated by the backward Euler scheme, i.e.,

(12.5.6)   ∂ū/∂t (t_{n+1}, y) ≈ ( ū(t_{n+1}, y) − ū(t_n, y) ) / Δt .

If we set y = X(0; t_{n+1}, x), from (12.5.3) we obtain

   ∂ū/∂t (t_{n+1}, X(0; t_{n+1}, x)) ≈ ( u(t_{n+1}, x) − u(t_n, X(t_n; t_{n+1}, x)) ) / Δt .

Denoting by X^n(x) a suitable approximation of X(t_n; t_{n+1}, x), n = 0,1,…,N−1, we can finally write the following implicit discretization scheme for problem (12.5.5): set u^0 := u_0, then for n = 0,1,…,N−1 solve

(12.5.7)   ( u^{n+1} − u^n ∘ X^n ) / Δt − εΔu^{n+1} + [div b(t_{n+1}) + a_0] u^{n+1} = f(t_{n+1})   in Ω .

Clearly, a boundary condition has to be imposed on ∂Ω: for simplicity, let us consider the homogeneous Dirichlet condition u^{n+1}|_{∂Ω} = 0. One can typically choose a backward Euler scheme also for discretizing the system of characteristics (12.5.8). This produces the following approximation of X(t_n; t_{n+1}, x):

(12.5.9)   X_{(1)}(x) := x − Δt b(t_{n+1}, x) .

Notice that X_{(1)} is a second order approximation of X(t_n; t_{n+1}, x), since we are integrating (12.5.8) on the time interval (t_n, t_{n+1}), which has length Δt. A more accurate scheme is provided by the second order Runge–Kutta scheme (12.5.10), which gives a third order approximation of X(t_n; t_{n+1}, x).
It is necessary to verify that X_{(i)}(x) ∈ Ω̄ for each x ∈ Ω, i = 1,2, so that we can compute u^n ∘ X_{(i)}. For simplicity, let us assume that b(t,x) = 0 for each t ∈ [0,T] and x ∈ ∂Ω. As a consequence, X_{(i)}(x) = x for x ∈ ∂Ω, i = 1,2. If we denote by x' ∈ ∂Ω the point having minimal distance from x ∈ Ω, we have

   |X_{(1)}(x) − x| = |b(t_{n+1}, x)| Δt = |b(t_{n+1}, x) − b(t_{n+1}, x')| Δt ≤ |b(t_{n+1})|_{Lip(Ω)} |x − x'| Δt ,

where |·|_{Lip(Ω)} denotes the Lipschitz seminorm. Assuming that

(12.5.11)   max_{t∈[0,T]} |b(t)|_{Lip(Ω)} Δt < 1 ,

it follows at once that X_{(1)}(x) ∈ Ω for each x ∈ Ω and each n = 0,1,…,N−1. A similar result holds for X_{(2)}(x). From now on we will consider the second order approximation (12.5.9), referring to Pironneau (1988), pp. 86–90, for higher order schemes based on (12.5.10). If we suppose that

(12.5.12)   div b(t,x) + a_0(x) ≥ 0

for each t ∈ [0,T] and almost every x ∈ Ω, stability is easily proven. In fact, multiplying (12.5.7) by u^{n+1} and integrating over Ω one obtains

(12.5.13)   ‖u^{n+1}‖_0² + εΔt ‖∇u^{n+1}‖_0² ≤ ( ‖u^n ∘ X_{(1)}‖_0 + Δt ‖f(t_{n+1})‖_0 ) ‖u^{n+1}‖_0 .

From (12.5.11) it also follows that the map X_{(1)} is injective. Therefore, we can introduce the change of variable y = X_{(1)}(x), and setting Y_{(1)} := (X_{(1)})⁻¹ we have

(12.5.14)   ‖u^n ∘ X_{(1)}‖_0² = ∫_{X_{(1)}(Ω)} u^n(y)² |det(Jac X_{(1)})(Y_{(1)}(y))|⁻¹ dy .

On the other hand,

   |det(Jac X_{(1)})(x)| ≥ 1 − Δt C_1 ‖Jac b(t_{n+1})‖_{L^∞(Ω)} > 0

for almost every x ∈ Ω, provided that

(12.5.15)   Δt μ_b ≤ C_2 ,

where

   μ_b := max_{t∈[0,T]} ‖Jac b(t)‖_{L^∞(Ω)}

and C_1 > 0, 0 < C_2 < C_1⁻¹ are suitable constants. Therefore, possibly choosing a smaller constant C_2 in (12.5.15), from (12.5.14) one has

(12.5.16)   ‖u^n ∘ X_{(1)}‖_0² ≤ (1 + Δt C_3 μ_b) ‖u^n‖_0² .

Notice that condition (12.5.15) implies (12.5.11) (if C_2 is small enough). From (12.5.13) we finally obtain for each n = 0,1,…,N−1

(12.5.17)   ( ‖u^{n+1}‖_0² + εΔt ‖∇u^{n+1}‖_0² )^{1/2} ≤ (1 + C_3 μ_b Δt)^{1/2} ‖u^n‖_0 + Δt ‖f(t_{n+1})‖_0
            ≤ (1 + C_3 μ_b Δt)^{(n+1)/2} ‖u_0‖_0 + Δt Σ_{k=1}^{n+1} (1 + C_3 μ_b Δt)^{(n+1−k)/2} ‖f(t_k)‖_0
            ≤ ( ‖u_0‖_0 + t_{n+1} max_{t∈[0,T]} ‖f(t)‖_0 ) exp( C_3 μ_b t_{n+1} / 2 ) .

Notice that the L²(Ω)-stability holds independently of ε. The convergence of u^n to u(t_n) is proven in a similar way. Defining the error function E^n := u(t_n) − u^n, from (12.5.7) we obtain for n = 0,1,…,N−1

(12.5.18)   ( E^{n+1} − E^n ∘ X_{(1)}^n ) / Δt − εΔE^{n+1} + [div b(t_{n+1}) + a_0] E^{n+1}
            = ( u(t_{n+1}) − u(t_n) ∘ X_{(1)}^n ) / Δt − ∂u/∂t (t_{n+1}) − b(t_{n+1})·∇u(t_{n+1})   in Ω .

Since X_{(1)}(x) − X(t_n; t_{n+1}, x) = O((Δt)²), it is easily shown that the right hand side in (12.5.18) is O(Δt). Therefore convergence follows from (12.5.17) applied to E^n, recalling that E^0 = 0. The method we have described above also applies to the fully-discretized problem obtained from (12.5.7) by using, for example, the finite element method. The resulting scheme would read: given u_h^0 := u_{0,h} ∈ V_h, for each n = 0,1,…,N−1 find u_h^{n+1} ∈ V_h such that

(12.5.19)   (1/Δt)( u_h^{n+1} − u_h^n ∘ X^n , v_h ) + ε( ∇u_h^{n+1}, ∇v_h ) + ( [div b(t_{n+1}) + a_0] u_h^{n+1}, v_h ) = ( f(t_{n+1}), v_h )   ∀ v_h ∈ V_h ,

where (·,·) denotes the L²(Ω) scalar product and V_h is a suitable finite element subspace of V = H_0^1(Ω) (the usual changes occur for boundary conditions different from the homogeneous Dirichlet one). Assuming that (12.5.12) and (12.5.15) hold and that b(t)|_{∂Ω} = 0 for each t ∈ [0,T], stability can be proven exactly as before. Further, if V_h is given by X_h^1 ∩ H_0^1(Ω), the space of continuous and piecewise-linear polynomials vanishing on ∂Ω, it can be shown that ‖u(t_n) − u_h^n‖_0 = O(h + Δt + h²/Δt) (see Pironneau (1988), pp. 86–89). We conclude this Section by noting that some additional problems have to be faced while implementing this method. In fact, one has to compute the integrals (u_h^n ∘ X^n, v_h), and this is usually accomplished by means of a quadrature formula. It turns out that the effects of this quadrature procedure can give rise to instability phenomena (see Morton, Priestley and Süli (1988)). Moreover, numerical integration requires the knowledge of the value of u_h^n ∘ X^n at some nodal points. This means that, for any fixed node x_k, it is necessary to know which triangle K ∈ T_h contains the point X^n(x_k). Several remarks on the implementation of the method can be found in Pironneau (1988), pp. 91–93 (see also Priestley (1993)). The implementation of this idea in the framework of spectral methods is described, e.g., in Ho, Maday, Patera and Rønquist (1990) and Süli and Ware (1992) for the incompressible Navier–Stokes equations, and in Süli and Ware (1991) for advection-dominated elliptic problems and hyperbolic problems.
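The element-location problem just mentioned is trivial on a structured mesh. The sketch below shows the uniform-grid 1-D case, where a single floor division identifies the interval containing each foot; on an unstructured triangulation a genuine search structure (a tree or a neighbour-walking algorithm) is needed instead. Mesh and foot coordinates are made-up illustrative values.

```python
import numpy as np

h = 0.1
nodes = np.arange(0.0, 1.0 + h / 2.0, h)       # uniform mesh of [0, 1]
feet = np.array([0.03, 0.47, 0.99])            # feet X^n(x_k), assumed to lie in [0, 1]

# index of the element containing each foot, clamped to the last element
elems = np.minimum((feet // h).astype(int), len(nodes) - 2)
# local coordinate of each foot inside its element, in [0, 1]
local = (feet - nodes[elems]) / h
```

Once the element and the local coordinate are known, u_h^n ∘ X^n at the quadrature points is obtained by evaluating the local basis functions there.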
13. The Unsteady Navier–Stokes Problem

In this Chapter we turn our attention towards unsteady viscous flows, especially in the incompressible case. We therefore consider the time-dependent counterpart of the Navier–Stokes problem (10.1.1)–(10.1.3), which reads

(13.1)   ∂u/∂t − νΔu + (u·∇)u + ∇p = f   in Q_T := (0,T) × Ω
         div u = 0                        in Q_T
         u = 0                            on Σ_T := (0,T) × ∂Ω
         u|_{t=0} = u_0                   in Ω ,

where f = f(t,x) and u_0 = u_0(x) are given data, Ω is an open bounded domain of ℝ^d, with d = 2,3, and ∂Ω is its boundary. One remarkable feature of (13.1) is the absence of an equation containing ∂p/∂t. Indeed, in (13.1) the pressure p appears as a Lagrange multiplier associated to the divergence-free constraint div u = 0. To begin with, we derive in Section 13.1 the equations describing the motion of a general fluid, either incompressible or compressible. In Section 13.2 we introduce the weak formulation of problem (13.1), and comment on the existence, uniqueness and behaviour of solutions. Next we turn to the numerical approximation. Most effective numerical methods can be defined through a separation between temporal and spatial discretization. For this reason, most arguments we have developed in Chapter 10 for the stationary problem are relevant to the time-dependent case. Concerning the temporal discretization, we review some of the most popular finite differencing schemes and operator-splitting algorithms. We restrict our attention to the case of laminar flow problems. For specific monographs on the subject we refer to Ladyzhenskaya (1969), Lions (1969), Temam (1983, 1984) and Kreiss and Lorenz (1989) for the mathematical analysis of the Navier–Stokes equations, and to Temam (1984), Thomasset (1981), Peyret and Taylor (1983), Fletcher (1988a, 1988b), Canuto, Hussaini, Quarteroni and Zang (1988), Gunzburger (1989), Pironneau (1988) and Quartapelle (1993) for the analysis of numerical approximation methods.
13.1 The NavierStokes Equations for Compressible and Incompressible Flows The equations describing the motion of a general fluid are derived from three conservation principles: momentum, mass and energy. Denoting by u the velocity of the fluid, p > 0 its density and e its internal energy per unit mass, these equations read (see, e.g., Landau and Lifshitz (1959), Sects. 15, 1, 49):
(13.1.1)  ∂(ρu)/∂t + div(ρu ⊗ u) = div T + ρf

(13.1.2)  ∂ρ/∂t + div(ρu) = 0

(13.1.3)  ∂(ρe + ½ρ|u|²)/∂t + div[(ρe + ½ρ|u|²)u] = div(T·u) − div q + ρf·u + ρr .
Here we have denoted by u ⊗ u the tensor {u_i u_j}, i, j = 1, ..., d, by T = T(u, p) the stress tensor, p the pressure, q = q(ϑ) the heat flux, ϑ > 0 the (absolute) temperature, and finally by f and r the external force field per unit mass and the heat supply per unit mass per unit time, respectively. Moreover, we have used the notation

(T·u)_i := Σ_{j=1}^d T_ij u_j ,  (div T)_i := Σ_{j=1}^d D_j T_ij ,  i = 1, ..., d .
By using (13.1.2) we can rewrite (13.1.1) and (13.1.3) in the nonconservative form:

(13.1.4)  ρ[∂u/∂t + (u·∇)u] = div T + ρf

(13.1.5)  ρ[∂e/∂t + u·∇e] = T : D − div q + ρr ,

where D = D(u) is the deformation tensor, D_ij(u) := ½(D_j u_i + D_i u_j), and we have used the notation T : D := Σ_{i,j=1}^d T_ij D_ij .
Having written these conservation laws, we are now in a position to introduce the constitutive equations which characterize the motion of any particular fluid. We will limit our presentation to those physical situations in which the following constitutive equations hold:

(13.1.6)  T_ij = [−p + (ζ − 2μ/d) div u] δ_ij + 2μ D_ij

(13.1.7)  q = −χ ∇ϑ ,

μ and ζ being the shear and bulk viscosity coefficients, respectively, and χ the heat conductivity coefficient. The symbol δ_ij denotes the Kronecker tensor, i.e., δ_ii = 1 and δ_ij = 0 for i ≠ j, i, j = 1, ..., d.
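For a concrete reading of (13.1.6), the sketch below evaluates the stress tensor for a given velocity gradient; the numerical values of p, μ and ζ are hypothetical. For a divergence-free field the viscous dilatation term drops out and T reduces to −pI + 2μD:

```python
import numpy as np

def stress_tensor(grad_u, p, mu, zeta):
    """Newtonian stress (13.1.6): T = [-p + (zeta - 2 mu/d) div u] I + 2 mu D."""
    d = grad_u.shape[0]
    D = 0.5 * (grad_u + grad_u.T)          # deformation tensor D(u)
    div_u = np.trace(D)                    # div u = trace of grad u
    return (-p + (zeta - 2.0 * mu / d) * div_u) * np.eye(d) + 2.0 * mu * D

# hypothetical 2D velocity gradient with zero trace (divergence-free field)
G = np.array([[1.0, 2.0],
              [0.5, -1.0]])
T = stress_tensor(G, p=3.0, mu=0.1, zeta=0.4)
print(T)    # symmetric, equal to -p I + 2 mu D since div u = 0
```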
To close the system some other relations have to be imposed, furnished by thermodynamic principles. At this stage we have to distinguish between compressible and incompressible flows, as the thermodynamic assumptions characterizing these types of flows are different.

13.1.1 Compressible Flows
Choosing as thermodynamic unknowns the density ρ and the temperature ϑ, in order to close the system we add the following state equations:

(13.1.8)   p = P(ρ, ϑ)
(13.1.9)   e = E(ρ, ϑ)
(13.1.10)  μ = μ*(ρ, ϑ) ,  ζ = ζ*(ρ, ϑ) ,  χ = χ*(ρ, ϑ) ,

where P, E, μ*, ζ* and χ* are known functions subjected to the thermodynamic restrictions (Clausius-Duhem inequalities)

(13.1.11)  μ* ≥ 0 ,  ζ* ≥ 0 ,  χ* ≥ 0 .
For physical reasons it holds E_ϑ > 0 (this latter condition is a general attribute of all real materials), and P_ρ > 0 (this condition is generally true for one-phase fluids). Moreover, from the well-known relation (see, e.g., Landau and Lifshitz (1959), Sect. 6)

(13.1.12)  dE = ϑ dS − P d(ρ⁻¹) ,  S specific entropy ,

E and P must satisfy the compatibility condition

(13.1.13)  E_ρ = ρ⁻² (P − ϑ P_ϑ) .
We can finally rewrite equations (13.1.4) and (13.1.5) as:

(13.1.14)  ρ[∂u/∂t + (u·∇)u] = −∇P + Σ_{j=1}^d D_j(μ* D_j u + μ* ∇u_j) + ∇[(ζ* − 2μ*/d) div u] + ρf

(13.1.15)  ρE_ϑ (∂ϑ/∂t + u·∇ϑ) = −ϑP_ϑ div u + (μ*/2) Σ_{i,j=1}^d (D_i u_j + D_j u_i)² + (ζ* − 2μ*/d)(div u)² + div(χ* ∇ϑ) + ρr .
When the viscosity μ* is greater than 0, equations (13.1.14), (13.1.2) and (13.1.15) are called the Navier-Stokes equations for compressible flows. The case μ* = 0, ζ* = 0 gives the Euler equations for compressible flows.

13.1.2 Incompressible Flows
In this case the pressure p is no longer related to thermodynamic unknowns, and must be determined through the momentum equation. The state equation (13.1.8) is replaced by the requirement that any given amount of fluid does not change its volume along the motion. This physical assumption reads

(13.1.16)  div u = 0 .

Moreover, now the thermodynamic relations (13.1.9), (13.1.10) reduce to

(13.1.17)  e = E(ϑ) ,  μ = μ*(ϑ) ,  χ = χ*(ϑ) .

Notice that, owing to (13.1.6), the bulk viscosity ζ no longer appears in the stress tensor. Finally, the specific entropy S is related to E and ϑ through

(13.1.18)  dE = ϑ dS ,
which substitutes (13.1.12). Therefore we rewrite equations (13.1.4), (13.1.2) and (13.1.5) as:

(13.1.19)  ρ[∂u/∂t + (u·∇)u] = −∇p + Σ_{j=1}^d D_j(μ* D_j u + μ* ∇u_j) + ρf

(13.1.20)  ∂ρ/∂t + u·∇ρ = 0

(13.1.21)  ρE_ϑ (∂ϑ/∂t + u·∇ϑ) = (μ*/2) Σ_{i,j=1}^d (D_i u_j + D_j u_i)² + div(χ* ∇ϑ) + ρr .

The Navier-Stokes system for incompressible flows is given by equations (13.1.19)-(13.1.21), (13.1.16) when μ* > 0, whereas the Euler system for incompressible flows is obtained as a particular case by setting μ* = 0. The
latter case will not be addressed in this book. The interested reader can refer to Marchioro and Pulvirenti (1993).
A flow for which ρ = ρ(t) (i.e., the density does not depend on the space variable x) is called homogeneous. It is easily verified that a flow is incompressible and homogeneous if and only if its density ρ is constant. Further, if an incompressible flow is homogeneous at the initial time, i.e., the initial density ρ₀ is constant, say ρ₀(x) = ρ̄ > 0 for each x ∈ Ω, owing to (13.1.20) it follows that ρ(t, x) = ρ̄ for each (t, x) ∈ Q_T. Therefore equation (13.1.20) can be eliminated from the system governing the motion of a homogeneous incompressible flow. Moreover, if μ* is constant the velocity and the pressure are independent of ϑ, and equation (13.1.21) can be solved separately after (13.1.19), (13.1.16). Under both these assumptions the Navier-Stokes system takes the well-known form (13.1), with ν := μ*/ρ̄ and p := P/ρ̄.
13.2 Mathematical Formulation and Behaviour of Solutions

As in Chapter 10 we introduce the Hilbert spaces H_div, V_div, V = (H₀¹(Ω))^d and Q = L₀²(Ω) (see (10.1.8), (9.1.3) and (9.1.9)). By generalizing (10.1.7) we easily see that the weak formulation of (13.1) reads: given f ∈ L²(0, T; H_div) and u₀ ∈ H_div, find u ∈ L²(0, T; V_div) ∩ L^∞(0, T; H_div) such that

(13.2.1)  d/dt (u(t), v) + a(u(t), v) + c(u(t); u(t), v) = (f(t), v)   ∀ v ∈ V_div
          u(0) = u₀ .

The above equation has to be intended in the sense of distributions in (0, T). The symbol (·,·) denotes the scalar product in (L²(Ω))^d; for the definition of the bilinear forms a(·,·) and b(·,·) and of the trilinear form c(·;·,·) we refer to (9.1.5), (9.1.10) and (10.1.5), respectively.
The existence of a solution to (13.2.1) has been proven by Leray (1934) and Hopf (1951). Uniqueness is still an open problem in the three-dimensional case, whereas for d = 2 the solution u has been shown to belong to C⁰([0, T]; H_div) and to be unique (Ladyzhenskaya (1958, 1959), Lions and Prodi (1959)). Any solution to (13.2.1) also satisfies the following energy estimate:
(13.2.2)  sup_{t∈(0,T)} ‖u(t)‖₀² + ν ∫₀ᵀ ‖∇u(t)‖₀² dt ≤ ‖u₀‖₀² + (C₀²/ν) ∫₀ᵀ ‖f(t)‖₀² dt ,

where C₀ is the constant of the Poincaré inequality (1.3.2).
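The estimate (13.2.2) follows by the energy method; a sketch of the derivation:

```latex
% Take v = u(t) in (13.2.1); since c(u;u,u) = 0 and a(u,u) = \nu\,\|\nabla u\|_0^2,
\tfrac12 \tfrac{d}{dt}\|u(t)\|_0^2 + \nu\|\nabla u(t)\|_0^2
  = (f(t),u(t))
  \le C_0 \|f(t)\|_0 \,\|\nabla u(t)\|_0
  \le \tfrac{C_0^2}{2\nu}\|f(t)\|_0^2 + \tfrac{\nu}{2}\|\nabla u(t)\|_0^2 ,
% by the Poincare and Young inequalities. Absorb the last term on the left
% and integrate over (0,t):
\|u(t)\|_0^2 + \nu\int_0^t \|\nabla u\|_0^2
  \le \|u_0\|_0^2 + \tfrac{C_0^2}{\nu}\int_0^t \|f\|_0^2 .
```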
Under additional assumptions it is also possible to prove the existence of more regular solutions. Precisely, in the two-dimensional case, assuming that the boundary ∂Ω is a regular manifold and u₀ ∈ V_div, the solution u to (13.2.1) belongs to L²(0, T; (H²(Ω))²) ∩ C⁰([0, T]; V_div). In the three-dimensional case an analogous result holds, although only locally in time. More precisely, there exists T* ∈ (0, T] (but not necessarily T* = T) such that the solution u to (13.2.1) belongs to L²(0, T*; (H²(Ω))³) ∩ C⁰([0, T*]; V_div), and is unique in that class (see Prodi (1962)). It must be noticed, however, that the existence of a global-in-time unique solution is assured also in the three-dimensional case, provided the data f and u₀ are small compared to the viscosity ν. The proofs of these results can be found, e.g., in Temam (1984), Chap. III. Other results concerning the existence of more regular solutions in Q_T and the necessity of non-local compatibility conditions for the data on ∂Ω at t = 0 can be found in Heywood (1980) and Temam (1982).
An alternative weak formulation of (13.1), which is related to (10.1.4) instead of (10.1.7), is as follows: find u(t) ∈ V and p(t) ∈ Q such that for almost every t ∈ (0, T)

(13.2.3)  d/dt (u(t), v) + a(u(t), v) + c(u(t); u(t), v) + b(v, p(t)) = (f(t), v)   ∀ v ∈ V
          b(u(t), q) = 0   ∀ q ∈ Q
          u(0) = u₀ .

Clearly, any solution of this problem is also a solution to (13.2.1). The converse is true, provided the solution to (13.2.1) is regular enough to apply Lemma 9.1.1. For instance, when u ∈ L²(0, T; (H²(Ω))²) ∩ C⁰([0, T]; V_div) it is possible to determine a pressure p which belongs to L²(0, T; H¹(Ω)).
13.3 Semi-Discrete Approximation

The space discretization of (13.2.1) can be accomplished exactly as described in Chapter 10 for the stationary case. At first, we choose a finite dimensional subspace of the divergence-free subspace V_div (see (9.1.3)). Denoting it by V_div,h, we have the reduced, one-field problem: for each t ∈ [0, T] seek u_h(t, ·) ∈ V_div,h such that

(13.3.1)  d/dt (u_h(t), v_h) + a(u_h(t), v_h) + c(u_h(t); u_h(t), v_h) = (f(t), v_h)   ∀ v_h ∈ V_div,h , t ∈ (0, T)
          u_h(0) = u_{0,h} ,
where u_{0,h} ∈ V_div,h is an approximation to the initial datum u₀. This is therefore a Galerkin approximation to (13.2.1). In turn, when approximating problem (13.2.3) we consider two subspaces V_h ⊂ V and Q_h ⊂ Q, and for each t ∈ [0, T] we seek u_h(t, ·) ∈ V_h and p_h(t, ·) ∈ Q_h such that

(13.3.2)  d/dt (u_h(t), v_h) + a(u_h(t), v_h) + c(u_h(t); u_h(t), v_h) + b(v_h, p_h(t)) = (f(t), v_h)   ∀ v_h ∈ V_h , t ∈ (0, T)
          b(u_h(t), q_h) = 0   ∀ q_h ∈ Q_h , t ∈ (0, T)
          u_h(0) = u_{0,h} ,

with u_{0,h} ∈ V_h. The analysis of the semi-discrete scheme (13.3.2) has been provided in a series of papers by Heywood and Rannacher (1982, 1986a, 1988, 1990) (see also Okamoto (1982) and Bernardi and Raugel (1985)). In these papers, the scheme (13.3.2) has been modified into another one in which the trilinear form c(w; z, v) is replaced by

c̃(w; z, v) := ½ [c(w; z, v) − c(w; v, z)] .

This is done for stability purposes. As a matter of fact, one sees at once that the new trilinear form c̃(·;·,·) is skew-symmetric, i.e., c̃(w; v, v) = 0 for each v ∈ V. This enables us to use the energy method: for each fixed t, taking v_h = u_h(t) in (13.3.2) (with c̃ in place of c) provides the stability estimate

(13.3.3)  sup_{t∈(0,T)} ‖u_h(t)‖₀² + ν ∫₀ᵀ ‖∇u_h(t)‖₀² dt ≤ ‖u_{0,h}‖₀² + (C₀²/ν) ∫₀ᵀ ‖f(t)‖₀² dt ,

which is the discrete counterpart of (13.2.2). At the continuous level, the introduction of the new trilinear form c̃(·;·,·) corresponds to the addition of the term ½(div u)u at the left hand side of (13.1)₁. However, this perturbation is consistent with the Navier-Stokes equations, as the exact velocity field is divergence-free. The price we pay with the modified scheme derives from the necessity of approximating the extra convective term, too.
Let us come now to the convergence analysis for (13.3.2). Assume that V_h and Q_h are a couple of finite element spaces that satisfy the compatibility condition (9.2.9). Further, we make the assumption that for all v ∈ V and q ∈ Q

inf_{v_h∈V_h} ‖v − v_h‖₁ + inf_{q_h∈Q_h} ‖q − q_h‖₀ = O(h) .

Let us notice that all the choices of spaces V_h and Q_h described in Section 9.3 satisfy these assumptions. In this context, the following error estimate can be proven:
If the data satisfy appropriate regularity assumptions, then over any time interval (0, T) in which the Dirichlet integral ‖∇u(t)‖₀ remains uniformly bounded one has

(13.3.5)

with τ(t) := min(t, 1). Due to the presence of the exponential, these estimates are virtually meaningless for large values of the time variable. However, if the data of the problem are small enough it can be proven that C₁ and C₂ are indeed uniformly bounded as t → ∞. Finally, in the particular case u₀ = f(0, ·) = 0 the function C₂ remains bounded as t → 0. On the other hand, notice that, if the exact solution is not supposed to be stable, the exponential growth of C₁ and C₂ has to be expected, since there can be exponential growth in the difference between initially neighbouring exact solutions.
The error estimate (13.3.4) is obtained without assuming any special regularity of the solution (u, p) as t → 0, except that u₀ ∈ V_div. This is an important point, since the regularity of the solution of an initial-boundary value problem up to t = 0 holds only if suitable compatibility conditions are satisfied. While for parabolic equations these compatibility conditions for the data on ∂Ω at t = 0 are of local type, in the case of the Navier-Stokes problem they read as follows. If the H³(Ω)-norm of u(t) (or, equivalently, the H¹(Ω)-norm of ∂u/∂t(t)) remains bounded up to t = 0, the external force field f at t = 0 and the initial velocity u₀ must be such that

(13.3.6)  ∇p₀ = f(0, ·) + ν Δu₀   on ∂Ω ,

where p₀ is the solution (defined up to an additive constant) to the Neumann problem

  Δp₀ = −div[(u₀·∇)u₀]   in Ω
  ∂p₀/∂n = ν Δu₀ · n     on ∂Ω
(recall that we have assumed f(0, ·) ∈ H_div). Because of its non-local nature, condition (13.3.6) is virtually uncheckable for general data (except when u₀ = f(0, ·) = 0).
Still requiring the compatibility condition u₀ ∈ V_div (and nothing more), the error estimate (13.3.4) can be improved to

(13.3.7)

for k = 2, ..., 5, provided the finite dimensional spaces V_h and Q_h have been chosen with the corresponding approximability property. Here the functions C₁ and C₂ behave like

(13.3.8)
A further improvement is obtained assuming that the exact solution is stable in a suitable sense. In this case, it is reasonable to expect a uniform bound for C₁(t) and C₂(t) as t → ∞. In fact, this result can be proven under the assumption that the solution u is exponentially stable. A precise definition of this concept is provided in Heywood and Rannacher (1986a, 1986b). Roughly, it can be described by saying that there is a fixed length of time during which any sufficiently small perturbation, starting at any time t₀ ≥ 0, will decay to half of its original size. It must be noticed that exponential stability is a weaker requirement than "universal energy" stability, i.e., that all perturbations decay monotonically and exponentially in the L²(Ω)-norm, regardless of their initial size. In particular, the Reynolds number of the flow (see (10.2.2)) is not required to be so small as to guarantee this latter property. In the papers quoted above, more general definitions of stability are also considered, and they appear sufficiently general to apply to such phenomena as Taylor cells and von Kármán vortex shedding. Correspondingly, several error estimates are provided.
Results of this type are also valid for the fully-discrete approximation of (13.2.3) obtained by time-advancing schemes based on a finite difference approximation of the time-derivative. The analysis of the forward Euler, backward Euler and Crank-Nicolson schemes is reported in Rannacher (1982), Heywood and Rannacher (1986a) and Heywood and Rannacher (1990), respectively. See also Section 13.4 below.
An approach slightly more general than (13.3.2) can be formulated as follows: for each t ∈ [0, T] find u_h(t, ·) ∈ V_h and p_h(t, ·) ∈ Q_h such that

(13.3.9)  d/dt (u_h(t), v_h)_h + a_h(u_h(t), v_h) + c_h(u_h(t); u_h(t), v_h) + b_{1,h}(v_h, p_h(t)) = (f(t), v_h)_h   ∀ v_h ∈ V_h , t ∈ (0, T)
          b_{2,h}(u_h(t), q_h) = 0   ∀ q_h ∈ Q_h , t ∈ (0, T)
          u_h(0) = u_{0,h} ,

where (·,·)_h is an approximation of (·,·), a_h(·,·) of a(·,·), c_h(·;·,·) of c(·;·,·), and both b_{1,h}(·,·) and b_{2,h}(·,·) are approximations of b(·,·). The generalized Galerkin method (13.3.9) is encountered, e.g., whenever numerical integration is used in the framework of finite elements or spectral methods. In the latter case, the spectral collocation method, which has been discussed in Sections 9.5.2 and 10.2.3 in the framework of the steady problem, deserves a special mention.

Remark 13.3.1 The theory previously developed is suitable for moderately low Reynolds numbers (see (10.2.2)). For high Reynolds numbers the convective term might induce numerical oscillations if not properly treated (see Sections 8.2 and 8.3). It should therefore be approximated by an upwind procedure (see, e.g., Girault and Raviart (1986), pp. 336-352) or by resorting to
stabilization methods like those of Section 8.3 (see, e.g., Brooks and Hughes (1982), Johnson and Saranen (1986), Hansbo and Szepessy (1990), Franca and Frey (1992)). □

The algebraic form of problem (13.3.2) is easily derived, proceeding as done in Section 9.2.1. Denoting by {φ_j | j = 1, ..., N_h} and {ψ_l | l = 1, ..., K_h} the bases of V_h and Q_h, respectively, and considering the expansions

u_h(t, x) = Σ_{j=1}^{N_h} u_j(t) φ_j(x) ,  p_h(t, x) = Σ_{l=1}^{K_h} p_l(t) ψ_l(x) ,

we obtain the following system of nonlinear equations:

(13.3.10)  M du/dt (t) + A u(t) + C(u(t)) u(t) + Bᵀ p(t) = f(t) ,  t ∈ (0, T)
           B u(t) = 0 ,  t ∈ (0, T)
           u(0) = u₀ .

Here we have set:

(13.3.11)  M_ij := (φ_i, φ_j) ,  A_ij := a(φ_j, φ_i) ,  (C(w))_ij := Σ_{m=1}^{N_h} w_m c(φ_m; φ_j, φ_i) ,  B_lj := b(φ_j, ψ_l) .

The above is a system of differential-algebraic equations (see Brenan, Campbell and Petzold (1989)). We recall that the mass matrix M is in diagonal form, e.g., in the case of the spectral collocation method, or when the finite element method is used with a lumping procedure (see Section 11.4).
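The differential-algebraic structure of (13.3.10) can be illustrated on a toy problem. In the sketch below, random matrices stand in for the finite element ones (all sizes and values are hypothetical); one backward Euler step is solved through a saddle-point system, with a Picard (fixed-point) iteration on the convective matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2                                # toy velocity / pressure dimensions

# SPD stand-ins for the mass and stiffness matrices, random full-rank constraint B
Q = rng.standard_normal((n, n)); M = Q @ Q.T + n * np.eye(n)
R = rng.standard_normal((n, n)); A = R @ R.T + n * np.eye(n)
B = rng.standard_normal((m, n))

# a toy convective matrix, linear in w: (C(w))_{ik} = sum_j G[i,j,k] w_j
G = 0.1 * rng.standard_normal((n, n, n))
def C(w):
    return np.einsum('ijk,j->ik', G, w)

def step(u, f, dt, iters=50):
    """One backward Euler step of M u' + A u + C(u)u + B^T p = f, B u = 0."""
    v = u.copy()
    for _ in range(iters):                 # Picard iteration on C(v)
        K = M / dt + A + C(v)
        S = np.block([[K, B.T], [B, np.zeros((m, m))]])   # saddle-point matrix
        rhs = np.concatenate([M @ u / dt + f, np.zeros(m)])
        sol = np.linalg.solve(S, rhs)
        v, p = sol[:n], sol[n:]
    return v, p

u0 = rng.standard_normal(n)
u0 -= B.T @ np.linalg.solve(B @ B.T, B @ u0)   # enforce B u0 = 0
u1, p1 = step(u0, f=np.ones(n), dt=0.01)
print(np.max(np.abs(B @ u1)))                  # the constraint is satisfied
```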
13.4 Time-Advancing by Finite Differences

Problem (13.3.10) can be advanced in time by suitable finite difference schemes. For instance, if we use the single-step θ-scheme (introduced in Section 5.6.2) we obtain at each time level t^{n+1} = (n+1)Δt, n = 0, 1, ..., N−1, the system:

(13.4.1)  M (u^{n+1} − u^n)/Δt + A u_θ^{n+1} + C(u_θ^{n+1}) u_θ^{n+1} + Bᵀ p_θ^{n+1} = f_θ^{n+1}
          B u^{n+1} = 0 ,

with u_θ^{n+1} := θu^{n+1} + (1−θ)u^n, p_θ^{n+1} := θp^{n+1} + (1−θ)p^n, and f_θ^{n+1} := f(θt^{n+1} + (1−θ)t^n) for 0 ≤ θ ≤ 1, where M, A, C(w) and B are defined in (13.3.11). This scheme is second order accurate with respect to Δt if θ = 1/2, while it is only first order accurate for all other values of θ. (Of course, accuracy is measured in the H¹-norm for each velocity component and in the L²-norm for the pressure.) A convenient way of solving the problem is to resort to the θ-indexed variables. To this aim, we rewrite for θ ≠ 0 the continuity equation as
B u_θ^{n+1} = (1−θ) B u⁰   if n = 0
B u_θ^{n+1} = 0             if n ≥ 1
and, similarly, we proceed for the remaining θ-indexed variables. Then we solve directly for u_θ^{n+1} and p_θ^{n+1}, and set

u^{n+1} = θ⁻¹ [u_θ^{n+1} − (1−θ) u^n] ,  p^{n+1} = θ⁻¹ [p_θ^{n+1} − (1−θ) p^n] .
We have to notice that whenever θ ≠ 1 this scheme requires a value p⁰ for the pressure at the initial time t⁰ = 0, which is not prescribed for problem (13.1). When θ = 1 (backward Euler case) at each time-level we obtain a problem that reads

(13.4.2)  (1/Δt) M u^{n+1} + A u^{n+1} + C(u^{n+1}) u^{n+1} + Bᵀ p^{n+1} = G ,  B u^{n+1} = 0 ,
where G is known. This is like the problem encountered in Chapter 10 for the discretization of the steady Navier-Stokes problem, and can therefore be faced by a Newton method or any one of the methods illustrated in Section 10.4. If Δt is small, a good starting guess is the solution provided by the previous time step.
A second order backward differentiation scheme is obtained by setting θ = 1 in (13.4.1) but replacing the first-order backward difference (u^{n+1} − u^n)/Δt by the two-step, second order one (3u^{n+1} − 4u^n + u^{n−1})/(2Δt) (see (5.6.17)). This scheme, however, requires larger storage compared with the single-step, second order Crank-Nicolson method. Moreover, it needs an additional second order initialization u¹.
The fully implicit scheme requires the solution of a nonlinear system at every time step. One way to avoid it is to resort to semi-implicit methods. For example, linearizing (13.4.1) by the Newton method and performing only one iteration at each time step provides a semi-implicit method. A similar method is given by:
(13.4.3)  M (u^{n+1} − u^n)/Δt + A u_θ^{n+1} + C(u^n) u_θ^{n+1} + Bᵀ p_θ^{n+1} = f_θ^{n+1}
          B u_θ^{n+1} = (1−θ) B u⁰  if n = 0 ,  B u_θ^{n+1} = 0  if n ≥ 1 .
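The accuracy claims for the θ-scheme can be checked on a scalar model problem u′ = −u (a stand-in for (13.3.10) without constraint and convection); halving Δt and taking the ratio of the errors exposes the order:

```python
import numpy as np

def theta_scheme(theta, dt, T=1.0):
    """Integrate u' = -u, u(0) = 1 with the single-step theta-scheme:
    (u^{n+1} - u^n)/dt = -(theta u^{n+1} + (1-theta) u^n)."""
    u = 1.0
    for _ in range(round(T / dt)):
        u = u * (1.0 - (1.0 - theta) * dt) / (1.0 + theta * dt)
    return u

exact = np.exp(-1.0)
for theta in (0.5, 1.0):
    e1 = abs(theta_scheme(theta, 1e-2) - exact)
    e2 = abs(theta_scheme(theta, 5e-3) - exact)
    # ratio ~ 4 for theta = 1/2 (second order), ~ 2 otherwise (first order)
    print(theta, e1 / e2)
```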
Other semi-implicit approaches entail a fully explicit treatment of the nonlinear term. An instance is provided by the following second order scheme, which approximates the linear terms by the Crank-Nicolson method and the nonlinear (convective) one by the explicit Adams-Bashforth method. It reads (see (5.6.16)):

(13.4.4)  (2/Δt) M u^{n+1} + A u^{n+1} + Bᵀ p^{n+1} = (2/Δt) M u^n − A u^n − Bᵀ p^n + f^{n+1} + f^n − 3C(u^n)u^n + C(u^{n−1})u^{n−1}
          B u^{n+1} = 0 ,
for n = 1, 2, ..., N−1, having chosen a suitable second order initialization u¹. This discretization is second order accurate with respect to Δt, provided that the data and the solution are smooth enough with respect to the time variable. The substantial difference between (13.4.3) and (13.4.4) is that in the latter case we obtain a linear system whose associated matrix does not change with n. Indeed we have

(13.4.5)  [(2/Δt) M + A] u + Bᵀ p = H ,  B u = 0 ,

for u = u^{n+1}, p = p^{n+1} and a suitable right hand side H that depends on known values at times t^n and t^{n−1}. This system is like the one (9.2.14) associated with the Stokes problem (9.1), and can therefore be solved by the algorithms illustrated in Section 9.6.
The semi-implicit scheme (13.4.4) is very often used when the spatial approximation is based on the spectral collocation method, as the latter can take advantage of the explicit evaluation of the nonlinear terms. In this context, the method is stable under the condition Δt = O(N⁻²), N being the polynomial degree of the velocity field.
Another semi-implicit approach that enjoys the same kind of property (a linear system that doesn't change at each step) can be formulated as follows (see Gunzburger (1989), p. 128):
(13.4.6)  (3/(2Δt)) M u^{n+1} + A u^{n+1} + Bᵀ p^{n+1} = f^{n+1} − C(u*)u* + (2/Δt) M u^n − (1/(2Δt)) M u^{n−1}
          B u^{n+1} = 0 ,
where u* := 2u^n − u^{n−1} and u^{−1} := u⁰. This scheme is based on a backward difference formula which is second order accurate in time. However, it is only conditionally stable; for instance, in the finite element case it has been proven in Baker, Dougalis and Karakashian (1982) that it must be subjected to the restriction Δt = O(h^{4/5}).
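As an illustration of the Crank-Nicolson/Adams-Bashforth idea behind (13.4.4), the sketch below applies it to the 1D periodic viscous Burgers equation — a stand-in model, not the Navier-Stokes system itself; the constant left-hand side matrix is inverted once. A crude explicit first step stands in for the second order initialization required above:

```python
import numpy as np

# u_t + u u_x = nu u_xx on a periodic grid; CN for diffusion, AB2 for convection
N, nu, dt = 128, 0.1, 1e-3
x = 2 * np.pi * np.arange(N) / N
h = x[1] - x[0]

e = np.ones(N)
L = np.diag(-2 * e) + np.diag(e[:-1], 1) + np.diag(e[:-1], -1)
L[0, -1] = L[-1, 0] = 1.0                  # periodic wrap-around
L *= nu / h**2                             # discrete nu * d^2/dx^2

def conv(u):                               # central difference of u u_x
    return u * (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

lhs_inv = np.linalg.inv(np.eye(N) / dt - 0.5 * L)   # fixed for all n

u_old = np.sin(x)
u = u_old + dt * (L @ u_old - conv(u_old))          # crude first-order start
for n in range(100):
    rhs = u / dt + 0.5 * (L @ u) - 1.5 * conv(u) + 0.5 * conv(u_old)
    u, u_old = lhs_inv @ rhs, u
print(float(np.abs(u).max()))              # slowly decaying viscous profile
```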
13.5 Operator-Splitting Methods

A very classical fractional-step method for solving problem (13.2.1) was introduced by Chorin (1967, 1968) and Temam (1969). We confine ourselves to the two-dimensional case (d = 2) and for w, z, v ∈ V = (H₀¹(Ω))² we define

a_j(w, v) := ν ∫_Ω Σ_{i=1}^2 D_j w_i D_j v_i ,  c_j(w; z, v) := ½ Σ_{i=1}^2 ∫_Ω w_j [v_i D_j z_i − z_i D_j v_i] ,

where j = 1, 2. Moreover, we split the external force field as f = Σ_{j=1}^2 f_j.
The fractional-step method reads: for each n = 0, 1, ..., N−1, find u_h^{n+1/3} ∈ V_h such that

(13.5.1)  (1/Δt)(u_h^{n+1/3} − u_h^n, v_h) + a₁(u_h^{n+1/3}, v_h) + c₁(u_h^n; u_h^{n+1/3}, v_h) = (f_{1,*}^{n+1/2}, v_h)   ∀ v_h ∈ V_h ,

where (·,·) is the scalar product in (L²(Ω))² and for each vector function g ∈ (L²(Q_T))² we have set

g_*^{n+1/2} := (1/Δt) ∫_{t^n}^{t^{n+1}} g(t) dt .
Then find u_h^{n+2/3} ∈ V_h such that

(13.5.2)  (1/Δt)(u_h^{n+2/3} − u_h^{n+1/3}, v_h) + a₂(u_h^{n+2/3}, v_h) + c₂(u_h^{n+1/3}; u_h^{n+2/3}, v_h) = (f_{2,*}^{n+1/2}, v_h)   ∀ v_h ∈ V_h .
442
13. The Unsteady NavierStokes Problem
(13.5.3) where W h is a subspace of Vh which constitutes a suitable approximation of Vdiv (see (9.1.3)). Notice that in general Wh is not required to be a subspace of Vdiv' Existence and uniqueness for both (13.5.1) and (13.5.2) follow from positiveness, as Cj(z; v, v) = 0 for each z, v E V, j = 1,2. On the other hand, (13.5.3) states that U~+l is the L2orthogonal projection of U~+2/3 onto Who In other words, (13.5.3) means that n+l =
Uh
n
n+2/3
.LhUh
with Ph : Vh + Wh is the orthogonal projection operator with respect to the scalar product of (L 2(D))2. For such a reason the fractionalstep scheme (13.5.1)(13.5.3) is known as the projection method. This scheme is unconditionally stable in the norm of (L 2(D ))2. Moreover, if the time steprestriction ..1t = O(h 2) is assumed to hold, one has Nl
..1t
L
Ilu~+llli ~
c .
n=O
Under the latter restriction the scheme is also convergent (see Temam (1969, 1984) ).
A slightly different approach, that makes use of two (rather than three) steps, consists in solving: find u~+1/2 E Vh solution of the nonlinear elliptic convectiondiffusion problem (13.5.4)
where C := C1
+ C2,
and then find U~+l E
Wh
such that
(13.5.5) i.e., u~+l = Ph u~+1/2. The existence of a unique solution to (13.5.4) can be proven as done in Sections 10.1 and 10.2. (For a solution of this problem based on a least squares approach see Glowinski (1984), Chap. VII.)
In its continuous version this method amounts to defining two sequences of vector functions u n+1/2, u n +1 and a sequence of scalar functions qn+l recursively given as: u'' := Un, and for n = 0,1, ... ,N 1
(13.5.6) {
..!...(un+1/2 _ u") _ v..1u n+1/2 + (u n+l/ 2 . \7)U n +1/2 ..1t + ~(divun+1/2)un+1/2 = C+1 / 2 in D 2
un+l/2 = 0
on
aD
13.5 OperatorSplitting Methods 2 (u n+l _
dt
(13.5.7)
Un+ 1/ 2)
+ 'V'qn+l =
443
in D
0
j
divu n+1 = 0
in D
u n +1 . n = 0
on aD
where n is the unit outward normal vector on aD. As we have already pointed out in Section 13.3, the presence of the term ~(div un+ 1/ 2)u n +l / 2 in (13.5.6) is due to stability purposes. Equations (13.5.7) stem from the fact that u n +1 is the L 2ort hogonal projection of U n+ 1/ 2 upon the space Hdiv (defined in (10.1.8)). The last two equations follow straightforwardly, while the existence of a scalar qn+l is a consequence of the socalled Helmholtz decomposition principle. This states that any function v E (L 2(D))2 can be uniquely represented as v = W + 'V' q, where WE Hdiv and q E HI(D). Clearly, z 'V'q = 0 for each Z E Hdiv, and therefore W = PdivV, where Pdiv is the orthogonal projection operator from (L 2 (D))2 onto Hdiv. In turn, since W E Hdiv, q turns out to be the solution to the Neumann problem:
In
.:1q = div u
in D
aq =u·n an
on aD
{ 
which defines q up to an additive constant. A question that naturally arises is: how to recover the pressure function if one is interested in approximating the pressure too? In Temam (1969) it is proven that the scalar function qn+l(x) does approximate the pressure P(tn+l, x), although in a very weak sense. As a matter offact, from (13.5.6) and (13.5.7) one can easily infer that
(13.5.8)
a an
qn+l _ _ ='V'qn+l· n = O on aD ,
since both u n+ 1 . nand we deduce
(13.5.9)
U
n
+1/ 2 . n vanish on aD. Besides, still from (13.5.7)
.:1qn+1 = 2 divu n+ I / 2 .:1t _ di (fn+I/2 + VLlU A n+l/2) _ ""' D.JU n+I/2 D.,U n+l/2 IV * ~ i
j
i,j
 !(divun+1 / 2) 2
2

~un+l/2 2
. 'V'div u n+l/ 2
in D
the latter equality being obtained from (13.5.6). Therefore, the function qn+1 satisfies a Poisson problem with a homogeneous Neumann condition. On the other hand, it is known that the exact pressure p satisfies the nonhomogeneous Neumann boundary value problem
444
13. The Unsteady NavierStokes Problem L1p = divf  'L;DjUiDiUj ',J ap { an = (f + vL1u) . n
(13.5.10)
in fl on {)fl
The discrepancy between the righthand sides of equations (13.5.8)(13.5.9) and (13.5.10) is responsible for the poor convergence of qn+1 (x) to P(t n+1, x) which is experimented in practical computations (for additional comments on this topic, see Temam (1991a)). Nevertheless, the method represents correctly the velocity field in many flow problems of physical interest (see, e.g., Gresho and Chan (1990)). If one wishes to approximate both the pressure and the velocity satisfactorily in the whole domain fl, the projection method needs to be modified. Examples are provided, e.g., in Kim and Moin (1985), Orszag, Israeli and Deville (1986), Gresho (1990), Gresho and Chan (1990). A different interpretation, which is based on performing a blockLU decomposition of the algebraic problem resulting from a fullydiscrete approximation of (13.1), has been proposed by Perot (1993). A thorough analysis of the projection method and of its relation with stabilization methods like (9.6.11) (with e = L1t) is given in Rannacher (1992). For n = 0,1, ... , N  1 the scheme is rewritten in the equivalent form J...(un+1/2 _ u n 1/2) _ vL1u n+1/2 + (u n+1/2 . \7)Un+1/2 L1t + ~(divun+1/2)un+l/2 + \7qn = £;,+1/2 in fl
(13.5.11)
2
{ un+l/2 = 0
L1qn+l = (13.5.12)
{ aqn+1
on afl
J... div u n+ 1/2 L1t
~ =0
in fl onafl
(here, the initializations u^{−1/2} := u₀ and q⁰ = 0 are also needed). The velocity and the pressure are proven to be convergent at the first order with respect to Δt in the norms of (L²(Ω))² and of the dual space of H¹(Ω) ∩ L⁵(Ω), respectively. Further, with respect to the norms of (H¹(Ω))² and L²(Ω) the convergence is of order (Δt)^{1/2}. It is worth noticing that the same convergence results are also obtained when the nonlinear convection-diffusion equation (13.5.11)₁ is linearized, substituting the term (u^{n+1/2}·∇)u^{n+1/2} with (ũ^n·∇)u^{n+1/2}, where ũ^n := P_div u^{n−1/2}. Moreover, the stabilizing term ½(div u^{n+1/2})u^{n+1/2} can be omitted without affecting the validity of the result. The analysis of Rannacher also indicates that in the interior of the domain Ω the pressure q^n is indeed a reasonable approximation of the exact pressure p(t^n, ·), as the effects of the non-physical Neumann boundary condition decay exponentially with respect to dist(x, ∂Ω)/√(νΔt).
Convergence results similar to those described above have been obtained by Shen (1992) for another modification of the projection method.
A different operator-splitting method is the one presented in Temam (1983), p. 94. As usual, we denote by V_h a finite dimensional subspace of V and by W_h a subspace of V_h which constitutes a suitable approximation of V_div. We advance from t^n to t^{n+1} by first finding u_h^{n+1/2} ∈ W_h such that

(13.5.13)  (1/Δt)(u_h^{n+1/2} − u_h^n, w_h) + ½ a(u_h^{n+1/2}, w_h) = (f_*^{n+1/2}, w_h)   ∀ w_h ∈ W_h ,
and then seeking u_h^{n+1} ∈ V_h that satisfies

(13.5.14)  (1/Δt)(u_h^{n+1} − u_h^{n+1/2}, v_h) + ½ a(u_h^{n+1}, v_h) + c(u_h^{n+1}; u_h^{n+1}, v_h) = 0   ∀ v_h ∈ V_h .
For each n, problem (13.5.13) can be regarded as a nonconforming approximation of the linear problem (9.1.6). The existence and uniqueness of a solution u_h^{n+1/2} follow from the Lax-Milgram lemma. On the other hand, (13.5.14) is a nonlinear elliptic convection-diffusion problem, whose analysis can be carried out as done in Sections 10.1 and 10.2. This operator-splitting method allows the separate treatment of the difficulties related to the nonlinearity and to the incompressibility constraint. Unconditional stability and convergence are proven in Temam (1983), Sect. 13. This approach, however, doesn't furnish an approximation of the pressure.
Another approach is based on the following Peaceman-Rachford type formulae (see (5.7.7); see also Glowinski (1984), Chap. VII) applied to (13.2.3). We first solve the Stokes problem: find (u_h^{n+1/2}, p_h^{n+1/2}) ∈ V_h × Q_h such that

(13.5.15)  (2/Δt)(u_h^{n+1/2} − u_h^n, v_h) + η a(u_h^{n+1/2}, v_h) + b(v_h, p_h^{n+1/2}) = (f^{n+1/2}, v_h) − (1−η) a(u_h^n, v_h) − c(u_h^n; u_h^n, v_h)   ∀ v_h ∈ V_h
           b(u_h^{n+1/2}, q_h) = 0   ∀ q_h ∈ Q_h ,
where 0 < η < 1, u_h⁰ := u_{0,h} ∈ V_h and f^{n+θ} := f((n+θ)Δt), 0 ≤ θ ≤ 1. Then we consider the nonlinear elliptic convection-diffusion problem: find u_h^{n+1} ∈ V_h such that

(13.5.16)  (2/Δt)(u_h^{n+1} − u_h^{n+1/2}, v_h) + (1−η) a(u_h^{n+1}, v_h) + c(u_h^{n+1}; u_h^{n+1}, v_h) = (f^{n+1/2}, v_h) − η a(u_h^{n+1/2}, v_h) − b(v_h, p_h^{n+1/2})   ∀ v_h ∈ V_h .
13. The Unsteady NavierStokes Problem
A threestep variant of (13.5.15), (13.5.16), which is based on a Strang type splitting (see (5.7.11)), is defined as follows. Set u~ := UO,h E Vh and take 0 < fJ < 1/2; we obtain (u~H,p~H) E V h x Qh by solving the Stokes problem:
(13.5.17)
Then, we look for u~+lt? E Vh solution of the nonlinear elliptic convectiondiffusion problem: (un+1t? _ un+t? v ) + (1 1J) a(u n +1 t? v ) 1 (1 _ 2fJ)L1t h h' h h' h
(13.5.18)
+ c(u~+lt?; U~+lt?, vs) = (fnH,Vh) 1Ja(u~+19,Vh)
 b(Vh,P~H)
V Vh E Vh
Finally, we find (u~+l,p~+l) E Vh x Qh by solving another Stokes problem:
(13.5.19)
This scheme has been analyzed by Kloucek and Rys (1994); under suitable assumptions on the exact solution, the timestep L1t and the meshsize h, it is stable and convergent. Other splitting methods are presented in Yanenko (1971) and Marchuk (1990). Fractionalstep methods for spectral approximations can be found in Canuto, Hussaini, Quarteroni and Zang (1988).
13.6 Other Approaches The characteristic Galerkin method extends to the NavierStokes equations (13.1) the idea that was implemented for advectiondiffusion equations in
13.6 Other Approaches Section 12.5. Being given x E flow X = X(t; s, x) through
.f}
dd~(tis,X) =
(13.6.1)
{
and
S
447
E [0, T], we define the characteristic
u(t,X(tis,X)) , t E (O,T)
X(SiS,X) = x .
The equations (13.1) are then discretized along this characteristic flow direction. In the simplest onestep case we go from t« to tn+l by facing the Stokes problem:
un+l _ v Llt Llun+l (13.6.2)
{
div u'r" ' =
a
+ Llt \7pn+l = u" 0 x + Llt fn+l
in f2 in f2
where X" is a suitable approximation of X(tn; tn+l, x). The first term on the right hand side must be traced back to the preceding time level t n through solving, for each gridpoint Xk, system (13.6.1) starting at time t = tn+l (see Section 12.5). If used in the framework of piecewise linear finite elements in space, the error behaviour of the characteristic scheme is
uniformly with respect to ν. For references on this approach see Pironneau (1982), Süli (1988). A high order method of this type has been proposed by Ho, Maday, Patera and Rønquist (1990), Maday, Patera and Rønquist (1990) and Süli and Ware (1992) in the framework of spectral methods.

Recently, for the approximation of turbulent flows, Marion and Temam (1989) have introduced the so-called nonlinear Galerkin method, with the purpose of approximating the inertial manifold. The latter is an analytical tool for the description of the mechanism through which energy is shifted from low into high frequency modes. This method is quite natural to implement in a spectral context, taking as basis functions the eigenfunctions of the Stokes operator. The momentum equation is projected first onto the subspace of low modes, then onto its orthogonal complement. The nonlinear interaction between the high frequency component u_high and the low frequency component u_low of the solution is driven by the convective term (see Foias, Manley and Temam (1988)). Assuming the existence of an analytic relation u_high = M(u_low) (the "inertial manifold") and eliminating u_high from the low frequency equation, one obtains a nonlinear momentum equation for the low frequency solution u_low. In the latter equation, after replacing M by a finite dimensional approximation M_N, one gets a finite dimensional problem (the nonlinear Galerkin problem).
An analysis of the stability and convergence of the nonlinear spectral Galerkin method for the Navier-Stokes equations has been given by Jauberteau, Rosier and Temam (1990) and Devulder, Marion and Titi (1993) (see also the references therein). Other related results have been provided by Shen (1990), Titi (1991), Foias, Jolly, Kevrekidis and Titi (1991) and Temam (1991b). The use of the finite element method in this context has been advocated in Marion and Temam (1990).
13.7 Complements

Stabilization methods like those introduced in Sections 8.3 and 9.4 have been used in flow computations, sometimes combined with suitable fractional-step methods (see, e.g., Hughes and Brooks (1982), Hansbo and Szepessy (1990), Franca and Frey (1992), Tezduyar, Mittal and Shih (1991), Tezduyar, Mittal, Ray and Shih (1992)). An ad-hoc approach for the simulation of vorticity is the one based on the so-called vortex method. Among the first contributions to this subject, let us quote the papers by Chorin (1973, 1980), Hald (1979), Leonard (1980) and Beale and Majda (1982). For more recent results, see Anderson and Greengard (1991) and the references therein.
14. Hyperbolic Problems
The numerical approximation of hyperbolic equations is a very active area of research. The main distinguishing feature of these initial-boundary value problems is that perturbations propagate with finite speed. Another characterizing aspect is that the boundary treatment is not as simple as for elliptic or parabolic equations. According to the sign of the equation coefficients, the inflow and outflow boundary regions determine, case by case, where boundary conditions have to be prescribed. The situation becomes more complex for systems of hyperbolic equations, where the boundary treatment must rely on a local characteristic analysis. If not implemented carefully, the numerical realization of boundary conditions is a potential source of spurious instabilities.

Hyperbolic problems also feature discontinuous solutions, arising in nonlinear equations as well as in linear problems with discontinuous initial data. In order to account for nonsmooth solutions, the problem is not set in differential form but rather in a weak form, in which spatial derivatives no longer act on the solution but only on smooth test functions. Roughly speaking, both finite difference and collocation approximations are derived directly from the differential form of the equation, while Galerkin methods (including finite elements and spectral methods) stem from the weak formulation. A third way, the finite volume method, is very popular especially for hyperbolic systems of conservation laws. It is derived from the integral form of the equations, by restricting the integration to subregions of the computational domain, called control volumes. This ensures conservation within each control volume.

Most of this chapter is devoted to finite difference schemes for both linear and nonlinear equations. This is covered in Section 14.2, after addressing some remarkable examples of hyperbolic equations and their principal mathematical properties in Section 14.1.
A less detailed description of both finite element and spectral approximations is presented in Sections 14.3 and 14.4, respectively. A short presentation of the wave equation and its finite element approximation is included in Section 14.5. Finally, Section 14.6 is concerned with the finite volume method.
14.1 Some Instances of Hyperbolic Equations

To begin with, let us consider some examples of hyperbolic initial value and initial-boundary value problems.

14.1.1 Linear Scalar Advection Equations
The simplest example of a hyperbolic equation is provided by the linear advection equation

(14.1.1)   ∂u/∂t + a ∂u/∂x = 0 ,   t > 0 , x ∈ ℝ ,
where a ∈ ℝ \ {0}. The solution to the Cauchy problem defined by (14.1.1) and the initial condition

(14.1.2)   u(0,x) = u_0(x) ,   x ∈ ℝ ,

is simply the wave travelling with speed a:

u(t,x) = u_0(x - at) ,   t ≥ 0 .
The solution is constant along the characteristics, i.e., the curves X(t) in the (t,x)-plane satisfying the ordinary differential equation

X'(t) = a ,   t > 0 ,
X(0) = x_0 .

In the more general case

(14.1.3)   ∂u/∂t + a ∂u/∂x + a_0 u = f ,   t > 0 , x ∈ ℝ ,
where a, a_0, f are given functions of (t,x), defining again the characteristic curves X(t) as the solution to

(14.1.4)   X'(t) = a(t, X(t)) ,   t > 0 ,
           X(0) = x_0 ,

it turns out that the solution to (14.1.3) satisfies

d/dt u(t, X(t)) = f(t, X(t)) - a_0(t, X(t)) u(t, X(t)) .

The solution along the characteristic curves is no longer constant, and it can no longer be determined in closed form; however, it can be obtained by solving two sets of ordinary differential equations: one for the characteristics X(t), and one for the evolution of u along them. Thus, a smooth solution exists for all time t > 0 provided a, a_0, f and u_0 are smooth.
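The two sets of ordinary differential equations mentioned above can be sketched in a few lines: one integrates (14.1.4) together with the ODE for U(t) := u(t, X(t)). The coefficient choices below (a = 2, a_0 = 1, f = 0, u_0 = sin) are illustrative assumptions, picked so that the result can be checked against the known constant-coefficient solution.

```python
import math

def solve_along_characteristic(a, a0, f, u0, x0, T, n=1000):
    # integrate the coupled system X'(t) = a(t, X), U'(t) = f(t, X) - a0(t, X)*U
    # with U(0) = u0(x0), so that U(t) = u(t, X(t)); one RK4 step per subinterval
    h = T / n
    X, U, t = x0, u0(x0), 0.0
    for _ in range(n):
        def rhs(t, X, U):
            return a(t, X), f(t, X) - a0(t, X) * U
        k1 = rhs(t, X, U)
        k2 = rhs(t + h/2, X + h/2 * k1[0], U + h/2 * k1[1])
        k3 = rhs(t + h/2, X + h/2 * k2[0], U + h/2 * k2[1])
        k4 = rhs(t + h, X + h * k3[0], U + h * k3[1])
        X += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        U += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        t += h
    return X, U

# constant-coefficient check, a = 2, a0 = 1, f = 0:
# X(t) = x0 + 2t and u(t, X(t)) = u0(x0) * exp(-t)
X, U = solve_along_characteristic(lambda t, x: 2.0, lambda t, x: 1.0,
                                  lambda t, x: 0.0, math.sin, 0.5, 1.0)
```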
Let us return to equation (14.1.1). Although we have assumed smoothness of u, so that first derivatives make sense in the previous argument, the case of nonsmooth initial data can be covered as well. Indeed, it is easily inferred from the previous derivation that, if u_0 has a discontinuity at a point x_0, then it is propagated along the characteristic issuing from x_0. Of course, this process can be given a precise mathematical sense after defining the concept of weak solution to the hyperbolic equation. The nonsmooth case can be reduced to a smooth one if, e.g., the initial datum u_0(x) is approximated by a sequence of smooth functions u_0^ε(x), ε > 0. The solution u_ε(t,x) of problem (14.1.1) corresponding to the smooth datum is u_ε(t,x) = u_0^ε(x - at), t ≥ 0, x ∈ ℝ. If ||u_0 - u_0^ε||_{L¹(ℝ)} ≤ ε, where

(14.1.5)   ||v||_{L¹(ℝ)} := ∫_{-∞}^{+∞} |v(x)| dx

is the L¹-norm, then lim_{ε→0} u_ε(t,x) = lim_{ε→0} u_0^ε(x - at) = u_0(x - at) .
if F'(u_l) > a > F'(u_r), or equivalently (again by convexity) that u_l > u_r. In the case of a nonconvex F, a more general statement is the following: a discontinuity propagating with speed a given by (14.1.30) satisfies the entropy condition if

(14.1.33)   ( F(u) - F(u_l) ) / ( u - u_l ) ≥ a ≥ ( F(u) - F(u_r) ) / ( u - u_r )
for all u between u_l and u_r.

Another form of the entropy condition entails the use of entropy functions and entropy fluxes. Let us start from the conservation law (14.1.20), and suppose that some functions η(u) and μ(u) satisfy the equation

(14.1.34)   ∂/∂t η(u) + ∂/∂x μ(u) = 0 .

The functions η(u) and μ(u) are called entropy function and entropy flux, respectively. The functions η, μ and F are related by

(14.1.35)   μ'(u) = η'(u) F'(u) .

Indeed, assuming that u is smooth, from (14.1.34) we have

η'(u) ∂u/∂t + μ'(u) ∂u/∂x = 0 ,

while from (14.1.20)
Fig. 14.1.3. Rarefaction-wave (entropic) solution to the Riemann problem
0 = η'(u) [ ∂u/∂t + F'(u) ∂u/∂x ] .

By comparison we obtain (14.1.35). The third statement of the entropy condition is given as follows:
The function u(t,x) is the entropy solution of (14.1.20) if, for all convex entropy functions η(·) and corresponding entropy fluxes μ(·), the entropy inequality

(14.1.36)   ∂/∂t η(u) + ∂/∂x μ(u) ≤ 0

is satisfied in the weak sense.

A heuristic explanation is as follows. Using (14.1.25) and (14.1.35) gives

∂/∂t η(u^ε) + ∂/∂x μ(u^ε) = η'(u^ε) [ ∂u^ε/∂t + F'(u^ε) ∂u^ε/∂x ] = ε η'(u^ε) ∂²u^ε/∂x²
   = ε ∂/∂x ( η'(u^ε) ∂u^ε/∂x ) - ε η''(u^ε) (∂u^ε/∂x)² ≤ ε ∂/∂x ( η'(u^ε) ∂u^ε/∂x ) ,

having used the convexity of η. Letting ε → 0 and setting u := lim_{ε→0} u^ε, one formally obtains (14.1.36). The weak form of (14.1.36) is
(14.1.37)   -∫_0^∞ ∫_{-∞}^{+∞} [ η(u(t,x)) ∂φ/∂t (t,x) + μ(u(t,x)) ∂φ/∂x (t,x) ] dx dt
               ≤ ∫_{-∞}^{+∞} η(u_0(x)) φ(0,x) dx

for all nonnegative smooth functions φ with compact support.
A scheme is stable if for each time T > 0 there exist a constant C_T > 0 (possibly depending on T) and a value δ_0 > 0 such that

(14.2.11)   ||u^n||_{Δ,1} ≤ C_T ||u^0||_{Δ,1}

for each nΔt ≤ T and 0 < Δt ≤ δ_0, 0 < Δx ≤ δ_0. Here,

(14.2.12)   ||v||_{Δ,1} := Δx Σ_{j=-∞}^{∞} |v_j|

is an approximation of the norm of L¹(ℝ) (see (14.1.5)). Since u^n = E^n_{Δt}(u^0), stability holds if there exists β ≥ 0 such that for each 0 < Δt ≤ δ_0 and 0 < Δx ≤ δ_0

||E_{Δt}(v)||_{Δ,1} ≤ (1 + βΔt) ||v||_{Δ,1} .

As a matter of fact, this yields

||u^n||_{Δ,1} ≤ (1 + βΔt)^n ||u^0||_{Δ,1} ≤ e^{βT} ||u^0||_{Δ,1}

for all Δt and n such that nΔt ≤ T, hence (14.2.11) would follow.

We now come to the issue of consistency. Let us define the local truncation error by

(τ_{Δt}(t))_j := (1/Δt) [ u(t + Δt, x_j) - E_{Δt}(u(t); j) ] ,

where E_{Δt}(u(t); j), depending on the two-level method at hand, is defined as in (14.2.9) with u^n_j replaced by u(t, x_j). A two-level method is said to be consistent if
(14.2.13)   lim_{Δt→0} ||τ_{Δt}(t)||_{Δ,1} = 0
for all t > 0 and 0 < Δx ≤ δ_0. In addition, the method is of order q_1 in time and q_2 in space if for all sufficiently smooth initial data there is some constant C > 0 such that

(14.2.14)   ||τ_{Δt}(t)||_{Δ,1} ≤ C ( (Δt)^{q_1} + (Δx)^{q_2} ) ,   t > 0 ,

for all 0 < Δt ≤ δ_0 and 0 < Δx ≤ δ_0. Quite often, in the discretization process Δt and Δx are proportional to one another, i.e., Δt = κΔx for some positive constant κ. In that case, the method is said to be of order q, where q = min(q_1, q_2). (Let us also point out that whenever method (14.2.8) is used to approximate a steady-state solution, what matters is the order in space, i.e., the exponent q_2 of Δx.) Let us finally recall that a scheme is said to be convergent if
||u(t^n) - u^n||_{Δ,1} → 0

as Δt, Δx → 0. The fundamental result of the theory of linear hyperbolic initial value problems is the Lax-Richtmyer equivalence theorem, which states that for a consistent method, stability is necessary and sufficient for convergence. A proof of the Lax-Richtmyer equivalence theorem may be found in Richtmyer and Morton (1967) (see also Strikwerda (1989)).

We illustrate some stability results here below. If we consider, e.g., the Lax-Friedrichs method we obtain

||u^{n+1}||_{Δ,1} = Δx Σ_j |u^{n+1}_j| ≤ (Δx/2) [ Σ_j |(1 - λa) u^n_{j+1}| + Σ_j |(1 + λa) u^n_{j-1}| ] .

Assuming that

(14.2.15)   |a| Δt/Δx ≤ 1 ,

then both 1 - λa and 1 + λa are nonnegative, and

||u^{n+1}||_{Δ,1} ≤ (Δx/2) [ (1 - aλ) Σ_j |u^n_{j+1}| + (1 + aλ) Σ_j |u^n_{j-1}| ]
   = (1/2)(1 - aλ) ||u^n||_{Δ,1} + (1/2)(1 + aλ) ||u^n||_{Δ,1} = ||u^n||_{Δ,1} .
Thus, (14.2.11) holds with C_T = 1 if the stability restriction (14.2.15) is satisfied.

With a similar procedure it can be shown that the upwind method is stable provided Δt and Δx satisfy condition (14.2.15). As a matter of fact, when a > 0 the upwind scheme (14.2.1) reads

u^{n+1}_j = u^n_j - λa ( u^n_j - u^n_{j-1} ) .

Then

||u^{n+1}||_{Δ,1} ≤ Δx Σ_j |(1 - λa) u^n_j| + Δx Σ_j |λa u^n_{j-1}| .
If (14.2.15) holds, then the coefficients of u^n_j and u^n_{j-1} are both nonnegative, and therefore ||u^{n+1}||_{Δ,1} ≤ ||u^n||_{Δ,1}. The same occurs when a < 0. On the contrary, the centered forward Euler method is unstable under a restriction like (14.2.15). However, it becomes stable if Δt = O((Δx)²) (see Strikwerda (1989), p. 49).

The same restriction (14.2.15) (but with the strict inequality) is sufficient to guarantee stability for the leap-frog scheme as well. Indeed, if we multiply the leap-frog equation by u^{n+1}_j + u^{n-1}_j and sum over j, we obtain

Σ_j (u^{n+1}_j)² = Σ_j [ (u^{n-1}_j)² - λa ( u^n_{j+1} - u^n_{j-1} ) ( u^{n+1}_j + u^{n-1}_j ) ] .

Adding Σ_j (u^n_j)² to both sides, and noticing that for all k ≥ 0

Σ_j u^k_{j-1} u^{k+1}_j = Σ_j u^k_j u^{k+1}_{j+1} ,

we obtain from the previous relation

Σ_j G^{n+1}_j = Σ_j G^n_j ,

with

G^{n+1}_j := (u^{n+1}_j)² + (u^n_j)² + λa ( u^n_{j+1} u^{n+1}_j - u^n_j u^{n+1}_{j+1} ) .

Therefore Σ_j G^{n+1}_j = Σ_j G^1_j, i.e.,

Σ_j [ (u^{n+1}_j)² + (u^n_j)² + λa ( u^n_{j+1} u^{n+1}_j - u^n_j u^{n+1}_{j+1} ) ]
   = Σ_j [ (u^1_j)² + (u^0_j)² + λa ( u^0_{j+1} u^1_j - u^0_j u^1_{j+1} ) ] .

Using the inequality -x² - y² ≤ 2xy ≤ x² + y², we can deduce

(1 - |λa|) Σ_j [ (u^{n+1}_j)² + (u^n_j)² ] ≤ (1 + |λa|) Σ_j [ (u^1_j)² + (u^0_j)² ] .

If we assume that |λa| < 1, we therefore conclude that for each time T there exists a constant C_T > 0 such that for all nΔt ≤ T
Σ_j [ (u^{n+1}_j)² + (u^n_j)² ] ≤ C_T Σ_j [ (u^1_j)² + (u^0_j)² ] .

This is a stability inequality that generalizes (14.2.11) to the case of three-level finite difference schemes. By a similar procedure it can be shown that the Lax-Wendroff method is stable under the same restriction (14.2.15).

Another suitable approach for proving stability is provided by the von Neumann analysis, which is based on Fourier transforming the finite difference equation. This approach is extensively pursued, e.g., in Richtmyer and Morton (1967) and Strikwerda (1989); see also Vichnevetsky and Bowles (1982). A proof of the stability of the Warming-Beam scheme (based on a von Neumann analysis) is given, e.g., in Hirsch (1988), Sect. 9.3. The stability condition is |aλ| ≤ 2. This restriction on λ is weaker than in (14.2.15). The reason is the following: recalling that the stencil of a given method is a graph connecting all gridpoints that are involved in the computation of u^{n+1}_j, the stencil of the Warming-Beam scheme uses two rather than one point on one side of the point x_j.

To be more precise, a geometrical interpretation of the stability condition can be given as follows. For a finite difference scheme having, for example, a stencil like that of Lax-Wendroff, the value u^{n+1}_j depends on the values of u^n at the three points x_{j+i}, i = -1, 0, 1. Continuing back to t = 0, we see that u^{n+1}_j depends only on the initial data at the points x_{j+i}, for i = -(n+1), ..., (n+1). Thus the numerical domain of dependence D_{Δt}(t^n, x_j) of u^n_j satisfies
D_{Δt}(t^n, x_j) ⊂ { x ∈ ℝ : |x - x_j| ≤ nΔx = t^n/λ } .

For any fixed point (t̄, x̄) we therefore have

D_{Δt}(t̄, x̄) ⊂ { x ∈ ℝ : |x - x̄| ≤ t̄/λ } .

In particular, when going to the limit Δt → 0 keeping λ fixed, the numerical domain of dependence becomes

D_0(t̄, x̄) = { x ∈ ℝ : |x - x̄| ≤ t̄/λ } .
Condition (14.2.15) is equivalent to

(14.2.16)   D(t̄, x̄) ⊂ D_0(t̄, x̄) ,

where D(t̄, x̄) is the domain of dependence defined in (14.1.11) (where p = 1 and λ_1 = a, as we are now considering the scalar case). This condition is necessary for stability. Indeed, if it were not satisfied, there would be points y* in the domain of dependence that would not belong to the numerical domain of dependence. Changing the initial data at y* would change the exact solution but not the numerical one. This would prevent convergence, and hence stability, in view of the Lax-Richtmyer equivalence theorem.

The inequality (14.2.15) is called the CFL condition, after Courant, Friedrichs and Lewy (1928), who showed that it is necessary in order for the numerical solution to be stable. In particular, this entails that there are no explicit, unconditionally stable, consistent finite difference schemes for hyperbolic initial value problems. In some of the examples above we have shown that the CFL condition can be not only necessary, but also sufficient for the stability of the numerical scheme. However, there exist methods for which the CFL condition is not sufficient for stability.

Similar stability conditions hold for the vector case as well. For example, any scheme having a stencil like the Lax-Wendroff one, when applied to system (14.1.6), needs the following restriction on the time-step:

(14.2.17)   |λ_k Δt/Δx| ≤ 1 ,   k = 1, ..., p ,

where {λ_k | k = 1, ..., p} are the eigenvalues of A. These conditions can also be written in the form (14.2.16). In other words, each straight line x = x̄ - λ_k(t̄ - t), k = 1, ..., p, must intersect the time-line t = t̄ - Δt at points x^(k) falling within the basis of the stencil (see Fig. 14.2.1).
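The sharp role of (14.2.15) is easy to observe numerically. The sketch below is illustrative: grid size, data and the two values of aλ are arbitrary choices, and periodic boundary conditions are assumed to sidestep the boundary issues discussed next. It advances the upwind scheme for a square wave: for aλ ≤ 1 the discrete L¹ norm (14.2.12) never grows, exactly as in the estimate above, while for aλ > 1 the solution blows up.

```python
def upwind_l1_history(lam_a, M=100, steps=60):
    # upwind scheme u_j^{n+1} = u_j^n - lam_a*(u_j^n - u_{j-1}^n), periodic in j
    dx = 1.0 / M
    u = [1.0 if M // 4 <= j < M // 2 else 0.0 for j in range(M)]  # square wave
    hist = [dx * sum(abs(v) for v in u)]                          # discrete L1 norm
    for _ in range(steps):
        u = [u[j] - lam_a * (u[j] - u[j - 1]) for j in range(M)]  # u[-1] wraps around
        hist.append(dx * sum(abs(v) for v in u))
    return hist

stable = upwind_l1_history(0.9)    # CFL satisfied: norms are non-increasing
unstable = upwind_l1_history(1.2)  # CFL violated: norms grow without bound
```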
Fig. 14.2.1. Geometrical interpretation of the CFL condition (14.2.17)

Remark 14.2.1 (Boundary treatment). Let us consider the initial-boundary value problem (for a > 0)
∂u/∂t + a ∂u/∂x = 0 ,   t > 0 , 0 < x < 1 ,
u(t,0) = φ(t) ,   t > 0 ,
u(0,x) = u_0(x) ,   0 < x < 1 .

Set x_j = jΔx for j = 0, ..., M, with Δx = 1/M. A finite difference scheme with a three-point stencil can be applied at the interior gridpoints x_j, j = 1, ..., M-1. Besides, it must enforce the prescribed boundary condition at the inflow point. Many finite difference schemes, however, also require additional boundary conditions in order to determine a unique solution. These extra conditions are usually referred to as numerical boundary conditions. For instance, when using the Lax-Wendroff (or the Lax-Friedrichs) method, the scheme cannot be applied at the outflow point x_M, as it would require the value u^n_{M+1} at the point x_{M+1}, which lies outside the computational interval [0,1]. Typically, the value u^{n+1}_M is recovered not from the scheme itself, but through an extrapolation formula that may take several different forms. For instance one can take u^{n+1}_M = u^{n+1}_{M-1} (constant extrapolation), or u^{n+1}_M = 2u^{n+1}_{M-1} - u^{n+1}_{M-2} (linear extrapolation), but also u^{n+1}_M = 2u^n_{M-1} - u^{n-1}_{M-2} (approximate linear extrapolation on characteristics). Another possibility consists in deriving numerical boundary conditions using one-sided difference schemes. For instance, for the Lax-Wendroff method we might use

u^{n+1}_M = u^n_M - λa ( u^n_M - u^n_{M-1} ) .
This formula is obtained by applying the Lax-Wendroff scheme at the outflow point x_M and evaluating u^n_{M+1} by the linear extrapolation formula u^n_{M+1} = 2u^n_M - u^n_{M-1}. We need to emphasize that numerical boundary conditions may sometimes induce instabilities in finite difference schemes that would be stable on the pure initial-value problem. For instance, for the leap-frog scheme both constant and linear extrapolations are unstable, whereas the extrapolation on characteristics is stable. These conclusions are reversed for the Crank-Nicolson (implicit) scheme. A thorough analysis of numerical boundary conditions is presented in Strikwerda (1989), Chap. 11. □
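The combination just described — the physical condition at the inflow and a numerical boundary condition at the outflow — can be sketched as follows. The data a, φ and u_0 are illustrative choices, and linear extrapolation is used at x_M:

```python
import math

a, M, T = 1.0, 100, 0.5
dx = 1.0 / M
dt = 0.8 * dx / a                      # CFL: a*dt/dx = 0.8 <= 1
lam = dt / dx

exact = lambda t, x: math.sin(2 * math.pi * (x - a * t))
phi = lambda t: exact(t, 0.0)          # inflow datum
u = [exact(0.0, j * dx) for j in range(M + 1)]

t = 0.0
while t < T - 1e-12:
    v = u[:]
    for j in range(1, M):              # interior points: Lax-Wendroff
        u[j] = (v[j] - 0.5 * lam * a * (v[j + 1] - v[j - 1])
                + 0.5 * (lam * a) ** 2 * (v[j + 1] - 2 * v[j] + v[j - 1]))
    t += dt
    u[0] = phi(t)                      # inflow: physical boundary condition
    u[M] = 2 * u[M - 1] - u[M - 2]     # outflow: numerical boundary condition

err = max(abs(u[j] - exact(t, j * dx)) for j in range(M + 1))
```

The run stays stable and accurate; replacing the outflow treatment by nothing at all would leave u_M undefined, which is precisely why a numerical boundary condition is needed.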
14.2.3 Nonlinear Scalar Equations

We consider the nonlinear scalar conservation law

(14.2.18)   ∂u/∂t + ∂/∂x F(u) = 0 ,   t > 0 , x ∈ ℝ ,
            u(0,x) = u_0(x) ,   x ∈ ℝ ,
and denote the characteristic speed by a(u) := F'(u). We focus on the case of discontinuous solutions. For smooth solutions, the methods we have seen in Section 14.2 generally apply successfully to the linearized equation. However, we know from Section 14.1.4 that discontinuous solutions may easily arise even when the data are smooth. Their numerical approximation is a challenging task, and represents a research area which is still very active nowadays. In this Section we simply aim at illustrating a few basic concepts and some examples of shock capturing schemes. For a systematic presentation we refer the reader to Yee (1989), LeVeque (1990), Godlewski and Raviart (1991), and the references therein.

As for the linear case, the general design principle for three-point, explicit finite difference schemes in conservation form is

(14.2.19)   u^{n+1}_j = u^n_j - λ ( h^n_{j+1/2} - h^n_{j-1/2} ) ,
where, as before, h^n_{j+1/2} := h(u^n_j, u^n_{j+1}); h(·,·) is the numerical flux function. In the most general case one would have

u^{n+1}_j = u^n_j - λ [ H(u^n_{j-p}, ..., u^n_{j+q}) - H(u^n_{j-p-1}, ..., u^n_{j+q-1}) ]

for some function H of p + q + 1 arguments, p ≥ 0, q ≥ 0. A heuristic explanation of (14.2.19) is provided by the following argument. From (14.2.18) we deduce

∫_{x_{j-1/2}}^{x_{j+1/2}} [ u(t^{n+1}, x) - u(t^n, x) ] dx = - ∫_{t^n}^{t^{n+1}} [ F(u(t, x_{j+1/2})) - F(u(t, x_{j-1/2})) ] dt .
Dividing by Δx and using the cell average ū_j defined in Section 14.2.3, the above relation gives

ū^{n+1}_j = ū^n_j - (1/Δx) [ ∫_{t^n}^{t^{n+1}} F(u(t, x_{j+1/2})) dt - ∫_{t^n}^{t^{n+1}} F(u(t, x_{j-1/2})) dt ] .
Comparing this to (14.2.19) shows that

h^n_{j+1/2} ≈ (1/Δt) ∫_{t^n}^{t^{n+1}} F(u(t, x_{j+1/2})) dt ,
i.e., the numerical flux function plays the role of an average flux through x_{j+1/2} over the time interval [t^n, t^{n+1}]. For consistency with the conservation law, the numerical flux function is required to reduce to the true flux F for constant solutions, i.e.,

(14.2.20)   h(u_0, u_0) = F(u_0)   if u(t,x) ≡ u_0 .
Besides, we must require that h be a Lipschitz continuous function of each variable.

Stability analysis is well established in the linear case, as seen in Section 14.2. However, local stability of the linearized equation is neither necessary nor sufficient for the nonlinear case, especially when strong discontinuities are present. A numerical method can be nonlinearly unstable, i.e., unstable on the nonlinear problem, even though its linearized versions are stable. Stabilization by adding "linear" numerical dissipation (or "artificial viscosity") has been a common practice (especially in numerical fluid dynamics). This approach alone guarantees neither convergence to a physically correct (entropic) solution in the nonlinear case, nor even convergence to a weak solution. However, according to the Lax-Wendroff theorem, the limit solution (provided it exists) of any finite difference scheme in conservation form which is consistent with the given conservation law is a weak solution of that conservation law (Lax and Wendroff (1960)). Unfortunately, it is not assured that the weak solutions obtained in this manner satisfy the entropy condition. As a matter of fact, there are examples of conservative and consistent numerical methods that converge to weak solutions violating the entropy condition.

As we have seen in Section 14.1, weak solutions are not uniquely determined by their initial values if they do not satisfy the entropy condition. Hence, at the finite dimensional level too, a discrete form of the entropy condition is needed in order to pick out the physically relevant solutions. After (14.1.36), the discrete form of the entropy inequality can read as follows:

η(u^{n+1}_j) ≤ η(u^n_j) - λ ( μ^Δ_{j+1/2} - μ^Δ_{j-1/2} ) .
In the above inequality the function μ^Δ is some discrete entropy flux function. It must be consistent with μ in the same way that the numerical flux function h_{j+1/2} is required to be consistent with F.

Let us now recall some definitions. First of all, rewrite (14.2.19) as

(14.2.21)   u^{n+1}_j = G(u^n_{j-1}, u^n_j, u^n_{j+1}) .

This numerical scheme is said to be monotone if G is a monotonically increasing function of each of its arguments. A numerical solution for a single conservation law obtained by a monotone scheme, with Δt/Δx fixed, always converges to the physically relevant solution as Δt → 0. For the proof see Harten, Hyman and Lax (1976), or Crandall and Majda (1980).
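The importance of the conservation form emphasized by the Lax-Wendroff theorem can be seen in a classical numerical experiment (an illustrative sketch with hypothetical grid data, not an example from the text): for Burgers' equation F(u) = u²/2 with states u_l = 1, u_r = 0, the Rankine-Hugoniot speed is 1/2. A conservative upwind scheme moves the shock at approximately this speed, while upwinding the non-conservative form u_t + u u_x = 0 leaves the discrete front frozen at its initial position.

```python
F = lambda u: 0.5 * u * u          # Burgers flux

M, steps = 200, 250
dx = 1.0 / M
dt = 0.4 * dx                      # CFL ok: |F'(u)|*dt/dx <= 0.4 since 0 <= u <= 1
lam = dt / dx

cons = [1.0 if j * dx < 0.25 else 0.0 for j in range(M)]   # jump initially at x = 0.25
nonc = cons[:]
for _ in range(steps):             # final time t = steps*dt = 0.5
    # conservation form (14.2.19) with upwind flux h_{j+1/2} = F(u_j) (F' >= 0 here)
    cons = [cons[0]] + [cons[j] - lam * (F(cons[j]) - F(cons[j - 1]))
                        for j in range(1, M)]
    # non-conservative form u_t + u*u_x = 0, upwinded: wrong shock speed
    nonc = [nonc[0]] + [nonc[j] - lam * nonc[j] * (nonc[j] - nonc[j - 1])
                        for j in range(1, M)]

front = lambda u: next(j * dx for j in range(M) if u[j] < 0.5)
# exact shock position at t = 0.5 is x = 0.25 + 0.5*0.5 = 0.5
```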
The numerical scheme (14.2.21) is said to be bounded if there exists C > 0 such that

sup_{j,n} |u^n_j| ≤ C .

The scheme is said to be stable if the finite difference solutions u^n and v^n, obtained starting from two different initial data u^0 and v^0, satisfy

||u^n - v^n||_{Δ,1} ≤ C_T ||u^0 - v^0||_{Δ,1}
for all n ≥ 0, where C_T > 0 and ||·||_{Δ,1} is the norm introduced in (14.2.12). (For linear problems this definition is equivalent to (14.2.11).) According to a result of Kuznetsov, monotone schemes are bounded, stable, convergent to the entropy solution, and at most first order accurate (Kuznetsov (1976), Harten, Hyman and Lax (1976)). They produce a smooth behaviour near discontinuities.

Below, we report several instances of numerical flux functions that generate, through (14.2.19), some well known finite difference schemes. Setting for simplicity F_j := F(u_j), the Lax-Friedrichs scheme corresponds to taking

(14.2.22)   h_{j+1/2} = (1/2) [ F_{j+1} + F_j - λ^{-1} ( u_{j+1} - u_j ) ]
and therefore reads

(14.2.23)   u^{n+1}_j = (1/2) ( u^n_{j+1} + u^n_{j-1} ) - (λ/2) ( F^n_{j+1} - F^n_{j-1} ) .

This method can be shown to be consistent and first order accurate. It is monotone provided the CFL condition

|F'(u^n_j)| Δt/Δx ≤ 1   for all j ∈ ℤ and all n ∈ ℕ

is satisfied. The Lax-Wendroff method is associated with the numerical flux

(14.2.24)   h_{j+1/2} = (1/2) [ F_{j+1} + F_j - λ a_{j+1/2} ( F_{j+1} - F_j ) ]
and therefore has the form

(14.2.25)   u^{n+1}_j = u^n_j - (λ/2) ( F(u^n_{j+1}) - F(u^n_{j-1}) )
               + (λ²/2) [ a_{j+1/2} ( F(u^n_{j+1}) - F(u^n_j) ) - a_{j-1/2} ( F(u^n_j) - F(u^n_{j-1}) ) ] .

Here, we have set

a_{j+1/2} := F'( (u_{j+1} + u_j)/2 ) .
This method is second order accurate (on smooth solutions), but not monotone. Unfortunately, it requires the use of the Jacobian F'(·), which may not be an easy task when facing systems of equations. An alternative, still second order accurate and conservative, that avoids using F'(·) is furnished by the following two-step modification of the Lax-Wendroff scheme:

(14.2.26)   u^{n+1/2}_{j+1/2} = (1/2) ( u^n_{j+1} + u^n_j ) - (λ/2) [ F(u^n_{j+1}) - F(u^n_j) ] ,
            u^{n+1}_j = u^n_j - λ [ F(u^{n+1/2}_{j+1/2}) - F(u^{n+1/2}_{j-1/2}) ] .
The numerical flux is given by

(14.2.27)   h_{j+1/2} = F( (1/2)(u_{j+1} + u_j) - (λ/2)(F_{j+1} - F_j) ) .
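A minimal realization of the conservation form (14.2.19) with the Lax-Friedrichs flux (14.2.22), applied to Burgers' equation F(u) = u²/2. All the data below (grid, time interval, Riemann states) are illustrative choices:

```python
F = lambda u: 0.5 * u * u                       # Burgers flux

def lf_flux(ul, ur, lam):
    # Lax-Friedrichs numerical flux (14.2.22)
    return 0.5 * (F(ur) + F(ul) - (ur - ul) / lam)

M, T = 200, 0.5
dx = 1.0 / M
dt = 0.4 * dx                                   # CFL: max|F'(u)|*dt/dx = 0.4 <= 1
lam = dt / dx
u = [1.0 if j * dx < 0.25 else 0.0 for j in range(M + 1)]

t = 0.0
while t < T - 1e-12:
    h = [lf_flux(u[j], u[j + 1], lam) for j in range(M)]           # h[j] ~ h_{j+1/2}
    u = [u[0]] + [u[j] - lam * (h[j] - h[j - 1]) for j in range(1, M)] + [u[-1]]
    t += dt

# being conservative, the scheme moves the shock at the Rankine-Hugoniot speed
# (F(1)-F(0))/(1-0) = 1/2, so at t = 0.5 the (smeared) front sits near x = 0.5
```

The scheme is monotone under this CFL restriction, so the computed values remain within the initial bounds [0, 1].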
Another alternative is the MacCormack method, which uses first forward and then backward differencing:

(14.2.28)   u*_j = u^n_j - λ [ F(u^n_{j+1}) - F(u^n_j) ] ,
            u^{n+1}_j = (1/2) ( u^n_j + u*_j ) - (λ/2) [ F(u*_j) - F(u*_{j-1}) ] .
The scheme is second order accurate and conservative, and therefore guarantees correct shock capturing, but it is not dissipative. It reduces to the Lax-Wendroff method for the linear scalar advection equation. The numerical flux is given by

(14.2.29)   h_{j+1/2} = (1/2) [ F(u_{j+1}) + F(u*_j) ] .

The Courant-Isaacson-Rees method reduces instead to the upwind method for the linear scalar advection equation. For the nonlinear problem (14.2.18), in which u is constant along characteristics, it consists in computing u^{n+1}_j from an approximation to u at the point x_j - F'(u^n_j)Δt, obtained by linear interpolation of the data u^n. The scheme reads (if F'(u^n_j) > 0)

u^{n+1}_j = u^n_j - λ F'(u^n_j) ( u^n_j - u^n_{j-1} ) .

In general, it can be written as:
(14.2.30)   u^{n+1}_j = u^n_j - λ [ ( |F'(u^n_j)| + F'(u^n_j) )/2 · ( u^n_j - u^n_{j-1} )
               + ( F'(u^n_j) - |F'(u^n_j)| )/2 · ( u^n_{j+1} - u^n_j ) ] .
Since it is not in conservation form, this method is unsuitable for approximating shock waves. The Godunov method makes use of characteristic information without compromising the conservation form. The breakthrough is that Riemann
problems are solved forward in time, rather than following characteristics backward in time. For a given n, the discrete solution u^n is used to define a piecewise-constant function v^n(x), having the value u^n_j on the whole interval x_{j-1/2} < x < x_{j+1/2}. Then v^n(x) is used as initial datum at time t^n for the conservation law considered on the time interval (t^n, t^{n+1}]. If Δt is small enough, the equation can in fact be solved exactly through a sequence of Riemann problems; indeed, Δt has to satisfy a CFL-like condition in order to prevent the interaction of neighbouring Riemann problems. The solution u*(t,x) defined on [t^n, t^{n+1}] × ℝ is obtained by simply collecting all these Riemann solutions. Finally, the discrete solution u^{n+1} at the new time-level t^{n+1} is defined by averaging u* as follows:

u^{n+1}_j := (1/Δx) ∫_{x_{j-1/2}}^{x_{j+1/2}} u*(t^{n+1}, x) dx .
Based on these values, a new piecewise-constant function v^{n+1}(x) is constructed and the process is repeated. The resulting scheme can be written in the conservation form (14.2.19), where the numerical flux function is

(14.2.31)   h_{j+1/2} = F( u*(t, x_{j+1/2}) ) ,   t ∈ (t^n, t^{n+1}) ,

and it turns out to be monotone and upwind, and satisfies the entropy condition. It can also be shown that the numerical flux function is given by

h_{j+1/2} = min { F(u) | u_j ≤ u ≤ u_{j+1} }   if u_j ≤ u_{j+1} ,
h_{j+1/2} = max { F(u) | u_{j+1} ≤ u ≤ u_j }   if u_j > u_{j+1} .
On a linear problem the Godunov scheme reduces to the upwind scheme. We also want to recall another monotone and upwind method in conservation form, the Engquist-Osher scheme, which corresponds to the numerical flux function

(14.2.32)   h_{j+1/2} = F⁺_j + F⁻_{j+1} ,

where

F⁺_j := F(max(u_j, ū)) ,   F⁻_j := F(min(u_j, ū)) ,

and ū is the sonic point of F(u), i.e., the value ū such that F'(ū) = 0.

14.2.4 High Order Shock Capturing Schemes
Monotone and first order upwind schemes are too dissipative; therefore they cannot produce accurate solutions for complex flow fields without using a very fine grid. For this purpose, higher order shock capturing schemes ought to be used, designed so as to include minimal numerical dissipation. (Clearly, some dissipation needs to be used in order to obtain non-oscillatory shocks and ensure convergence to the entropy solution.)

The basic idea underlying high order methods (say, second order at least) is to start from a potentially high order method and modify it by adding numerical dissipation in the neighbourhood of a discontinuity. The extra term can model a diffusive term proportional to ∂²u/∂x² (sometimes, ∂⁴u/∂x⁴). Its coefficient, say ε, must vanish quickly enough as Δt, Δx go to 0, so that the scheme remains consistent with the hyperbolic equation without affecting its high order of accuracy on smooth solutions.

There are two classes of such schemes. Classical methods make use of linear numerical dissipation (i.e., the same for any point); instances are given by the methods of Lax-Wendroff, MacCormack and Warming-Beam. Modern methods use nonlinear numerical dissipation: the diffusion coefficient ε depends on the behaviour of the solution itself, being larger near discontinuities than in smooth regions. Second and third (or higher) order space accuracy is achieved by these approaches. Their design principles are:
a. use the conservation form (this ensures the shock capturing property);
b. no spurious overshoots or wiggles are created near discontinuities, yet sharp shocks are provided;
c. high accuracy is achieved in smooth regions of the flow;
d. the entropy condition is satisfied by limit solutions.
Conventional methods (e.g., Lax-Wendroff type schemes) typically do not match requirements b. and d. Schemes are developed, at first, for a scalar nonlinear equation in one space dimension. Extensions to systems come through Riemann solvers (either exact or approximate), or flux vector splitting techniques. An in-depth analysis of these methods is beyond the scope of this book. Instead, we confine ourselves to illustrating the main concepts and presenting a few examples falling into this category.
We recall that the total variation of a smooth function u : ℝ → ℝ can be defined as the (finite) value

TV(u) := ∫_{-∞}^{+∞} |u'(x)| dx .
If u^n is a gridfunction, its total variation can be defined as follows:

(14.2.33)   TV(u^n) := Σ_j |u^n_{j+1} - u^n_j| .
A difference method is said to be Total Variation Diminishing (briefly, TVD) if

(14.2.34)   TV(u^{n+1}) ≤ TV(u^n)   ∀ n ≥ 0 , Δt > 0 .
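Definition (14.2.34) is easy to test numerically. In the sketch below (illustrative data and parameters; linear advection with a = 1 on a periodic grid) the monotone upwind scheme keeps TV(u^n) from growing, whereas the Lax-Wendroff scheme, which is not TVD, increases the total variation of a square wave by creating oscillations near the jumps.

```python
M, c, steps = 100, 0.8, 50   # c = a*dt/dx <= 1, periodic grid

def tv(u):
    # discrete total variation (14.2.33), periodic in j
    return sum(abs(u[(j + 1) % M] - u[j]) for j in range(M))

def run(step, u):
    for _ in range(steps):
        u = step(u)
    return u

upwind = lambda u: [u[j] - c * (u[j] - u[j - 1]) for j in range(M)]
lw = lambda u: [u[j] - 0.5 * c * (u[(j + 1) % M] - u[j - 1])
                + 0.5 * c * c * (u[(j + 1) % M] - 2 * u[j] + u[j - 1])
                for j in range(M)]

u0 = [1.0 if M // 4 <= j < M // 2 else 0.0 for j in range(M)]
tv0 = tv(u0)                       # = 2 for the square wave (two unit jumps)
tv_up = tv(run(upwind, u0))        # stays <= tv0: upwind is TVD (monotone)
tv_lw = tv(run(lw, u0))            # grows: Lax-Wendroff is not TVD
```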
Even if conservative and consistent with the conservation law, a TVD scheme is not necessarily consistent with an entropy inequality. Monotone methods for scalar conservation laws are TVD (for the proof see Crandall and Majda (1980)).

A family of five-point difference schemes in conservation form can be written as

(14.2.35)   u^{n+1}_j + θλ ( h^{n+1}_{j+1/2} - h^{n+1}_{j-1/2} ) = u^n_j - (1-θ)λ ( h^n_{j+1/2} - h^n_{j-1/2} ) ,
where h_{j+1/2} denotes the numerical flux function, which in this case reads

h_{j+1/2} = h(u_{j-1}, u_j, u_{j+1}, u_{j+2}) .

As usual, λ := Δt/Δx, while 0 ≤ θ ≤ 1 is a parameter. If θ = 0 the scheme is explicit, otherwise it is implicit. We rewrite (14.2.35) as

L(u^{n+1})_j = R(u^n)_j ,

where L and R are the finite difference operators

L(u)_j := u_j + θλ ( h_{j+1/2} - h_{j-1/2} ) ,
R(u)_j := u_j - (1-θ)λ ( h_{j+1/2} - h_{j-1/2} ) .

As noticed by Harten (1984), sufficient conditions for (14.2.35) to be TVD are

TV(R(u^n)) ≤ TV(u^n)   and   TV(L(u^{n+1})) ≥ TV(u^{n+1}) .

Let us define the new flux

(14.2.36)   h̄^n_{j+1/2} := (1-θ) h^n_{j+1/2} + θ h^{n+1}_{j+1/2} .

Thus h̄^n_{j+1/2} = h̄(u^n_{j-1}, u^n_j, u^n_{j+1}, u^n_{j+2}, u^{n+1}_{j-1}, u^{n+1}_j, u^{n+1}_{j+1}, u^{n+1}_{j+2}) for a suitable flux function h̄. Consistency with (14.1.20) is achieved provided the function h̄ satisfies

(14.2.37)   h̄(u, u, ..., u) = F(u)   (8 arguments) .

Then (14.2.35) becomes

(14.2.38)   u^{n+1}_j = u^n_j - λ ( h̄^n_{j+1/2} - h̄^n_{j-1/2} )
Fj Uj+l  Uj
._ Fj+l 
aj+l/2 .
For a numerical flux function of the form
478
14. Hyperbolic Problems
(14.2.39)
hj+l/2
= ~[Fj+l + Fj liij+l/21(uj+l 
Uj)]
(first order Roe upwind flux) a sufficient condition for being TVD is the (CFLlike) condition:
(see Harten (1984)). We now comment on the design principles for higher order TVD schemes. Most high order TVD schemes can be viewed as centereddifference schemes with an appropriate numerical dissipation or smoothing mechanism, i.e., an automatic feedback mechanism to control the amount of numerical dissipation (unlike the numerical dissipation used in linear theory). Generally speaking, given any high order flux function h(u;j) associated with the scheme (14.2.19), the addition of artificial dissipation amounts to replace this by the modified flux function (14.2.40)
h^art(u;j) := h(u;j) − Δx ε(u;j)(u_{j+1} − u_j) .
Sometimes, instead of (14.2.40), one considers the more general form
h^art(u;j) := h(u;j) + p(u;j) ,
so that the numerical flux function becomes
h_{j+1/2} = ½ (F_{j+1} + F_j + p_{j+1/2}) .
In other words, h_{j+1/2} is the sum of a second order centered flux and an artificial dissipation term (this is the TVD correction). The critical issue is how to determine an appropriate form of ε(·;·) (or p(·;·)) that introduces enough dissipation to preserve monotonicity without affecting the level of accuracy. This task is not easy to accomplish. For this reason, in practice, a more direct approach is often used. Precisely, the main mechanisms for satisfying higher order TVD sufficient conditions are limiting procedures called limiters. They impose constraints either on the gradient of the dependent variable u (slope limiters) or on the flux function F (flux limiters). For constant coefficients, the two types of limiters are equivalent. Let us notice that any high order flux, say h^high, can generally be expressed as a low order flux h^low plus a correction, i.e.,
h^high(u;j) = h^low(u;j) + [h^high(u;j) − h^low(u;j)] .
A flux limiter method is typically associated with a flux of the form
(14.2.41)   h(u;j) = h^low(u;j) + Φ_j [h^high(u;j) − h^low(u;j)] ,
where Φ_j is the limiter. The correction term is usually referred to as an antidiffusive flux, as it compensates for the high numerical diffusion introduced by the low order flux.
Based on this idea is the so-called Flux-Corrected-Transport (FCT) method of Boris and Book (1973), one of the earliest high resolution methods.
A scheme is said to be total variation bounded (TVB) if there exists a constant B > 0 such that
(14.2.44)   TV(u^n) ≤ B
for all n, Δt such that nΔt ≤ T. Obviously, a TVD scheme is also TVB. The advantages of TVB over TVD schemes can be outlined as follows:
a. TVB schemes can be uniformly higher order accurate in space, including at extremum points;
b. it is easier to combine interior and boundary schemes in order to obtain a global TVB scheme.
Shu (1987) has shown how to modify some existing TVD schemes (which can be at most second order accurate) so that the resulting schemes are TVB.
ENO, which stands for Essentially Non-Oscillatory, is a class of methods introduced by Harten (1986), Harten and Osher (1987). The idea is the following:
a. at the time level t^n, take ū_j^n as the cell average of u on the interval I_j = (x_{j−1/2}, x_{j+1/2});
b. reconstruct a polynomial approximation to u(t^n, x) within the cell I_j in a non-oscillatory fashion, up to the desired accuracy;
c. solve the differential equation using the flux given by this polynomial to advance the solution up to the level t^{n+1};
d. repeat the process at the time level t^{n+1}.
When accomplishing the interpolation procedure at step b., the idea is to take information from the smoother part of the flow and to choose the interpolant having smaller magnitude; this results in a nonlinear interpolant. It can also be proven that any TVD scheme is an ENO scheme. For an algorithmic description of this method, the reader is referred to Harten, Osher, Engquist and Chakravarthy (1987). An effective implementation based on reconstruction of fluxes is presented in Shu and Osher (1988, 1989); the latter papers extend to ENO schemes the result of Osher and Chakravarthy (1986) on reconstruction of fluxes for MUSCL schemes. All ENO schemes with moving stencil may permit oscillations up to the order of the truncation error (see Harten, Osher, Engquist and Chakravarthy (1987)). Furthermore, wider stencils are generally involved (e.g., a seven-point stencil is needed for a second order ENO scheme).
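Step b. of a second order ENO reconstruction can be sketched as follows (a minimal version of our own): in each interior cell the linear interpolant is built with the one-sided slope of smaller magnitude, i.e., taking information from the smoother side of the stencil.

```python
import numpy as np

def eno2_slopes(ubar, dx):
    # for each interior cell choose the one-sided slope of smaller magnitude
    sl = (ubar[1:-1] - ubar[:-2]) / dx           # slope from the left stencil
    sr = (ubar[2:] - ubar[1:-1]) / dx            # slope from the right stencil
    return np.where(np.abs(sl) < np.abs(sr), sl, sr)

dx = 0.1
x = dx * (np.arange(12) + 0.5)                   # cell centres
smooth = eno2_slopes(3.0 * x + 1.0, dx)          # linear data: slope 3 everywhere
step = eno2_slopes(np.where(np.arange(12) < 6, 0.0, 1.0), dx)
```

On linear data the reconstruction is exact, while across a discontinuity the selected slope is the flat one, so no new extrema are created by the reconstruction.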
14.3 Approximation by Finite Elements
For simplicity, until now in this Chapter we have only considered problems in one space dimension. However, when introducing a finite element approximation, there is no formal difference in presenting results for multidimensional problems. Therefore, below, Ω will denote a domain in ℝ^d, d = 2,3. Let us consider the linear scalar advection equation
(14.3.1)   ∂u/∂t + a·∇u + a₀u = f   in Q_T := (0,T) × Ω ,
With λ := Δt/Δx, the resulting scheme therefore reads
(14.3.27)   (1 − λ²a²)u_{j+1}^{n+1} + 2(2 + λ²a²)u_j^{n+1} + (1 − λ²a²)u_{j−1}^{n+1}
            = (1 − 3λa + 2λ²a²)u_{j+1}^n + 4(1 − λ²a²)u_j^n + (1 + 3λa + 2λ²a²)u_{j−1}^n .
The linear system to be solved at each time-step is tridiagonal and symmetric, similar to the one obtained by applying the procedure just described to the classical Lax-Wendroff scheme (in that case, the system is the one associated with the mass matrix M). Therefore, the advantage here is that, at a similar computational cost, we obtain third order accuracy, whereas the Lax-Wendroff scheme is only second order accurate. The Taylor-Galerkin scheme is proven to be stable under the usual CFL condition |aλ| ≤ 1. The analysis of the method (including the issue of imposing the boundary conditions) and a discussion of its relations with Petrov-Galerkin and characteristic Galerkin methods are given in Donea (1984) and Donea, Quartapelle and Selmin (1987), where numerical results for several linear test problems can also be found. The extension of the method to two-dimensional problems can be easily performed, requiring only some more involved manipulations.
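One time-step of (14.3.27) can be sketched as follows (the grid, data and homogeneous Dirichlet end conditions are our own choices); the tridiagonal system is solved with the Thomas algorithm.

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    # Thomas algorithm for a tridiagonal system; sub and sup have length n-1
    n = len(diag)
    c = np.zeros(n - 1)
    d = np.zeros(n)
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = sup[i] / den
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / den
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def taylor_galerkin_step(u, a, lam):
    # one step of (14.3.27); the end values u[0], u[-1] are kept fixed at 0
    off = 1.0 - lam**2 * a**2                    # off-diagonal coefficient
    dia = 2.0 * (2.0 + lam**2 * a**2)            # diagonal coefficient
    rhs = ((1.0 - 3.0*a*lam + 2.0*lam**2*a**2) * u[2:]
           + 4.0 * (1.0 - lam**2*a**2) * u[1:-1]
           + (1.0 + 3.0*a*lam + 2.0*lam**2*a**2) * u[:-2])
    n = len(u) - 2
    unew = np.zeros_like(u)
    unew[1:-1] = thomas(np.full(n - 1, off), np.full(n, dia),
                        np.full(n - 1, off), rhs)
    return unew
```

For a well-resolved Gaussian profile advected with |aλ| = 0.5, the computed solution stays very close to the exact translate, in agreement with the third order accuracy of the scheme.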
14.4 Approximation by Spectral Methods
For simplicity, in this Section we return to the one-dimensional case, though a similar approach could be followed in higher spatial dimensions. Spectral approximations to hyperbolic equations can take the form of Galerkin methods or collocation methods. The Galerkin case falls under the general framework that has been discussed in Section 14.3; precisely, the spectral Galerkin method can be cast in the form (14.3.4), provided V_h now denotes a space of global (rather than piecewise) polynomials of degree less than or equal to N. We will not comment further on it here. Instead, we present, in detail, the case of the spectral collocation method. To begin with, we consider the initial-boundary value problem for the simple one-dimensional linear scalar advection equation:
(14.4.1)   ∂u/∂t + a ∂u/∂x = f ,  t > 0 ,  x ∈ I := (−1,1) ,
(14.4.2)   u(t,−1) = φ(t) ,  t > 0 ,
(14.4.3)   u(0,x) = u₀(x) ,  x ∈ I ,
where f = f(t,x), u₀ = u₀(x) and φ = φ(t) are given continuous functions, and a ≠ 0 is a given coefficient, which, for simplicity, is assumed to be constant (in (14.4.2) we have supposed a > 0, so that x = −1 is the inflow point).
14.4.1 Spectral Collocation Method: the Scalar Case
Let {x_j | 0 ≤ j ≤ N} denote the N+1 nodes of the Gauss-Lobatto formula in [−1,1] (see Chapter 4). They are referred to as Legendre or Chebyshev nodes, according to the kind of Gaussian formula we are considering. In any case, we order them in a decreasing way, so that x₀ = 1, x_N = −1 and x_j > x_{j+1} for j = 0, ..., N−1. Correspondingly, we denote by {w_j | 0 ≤ j ≤ N} the associated weights (see Sections 4.3.2 and 4.4.2). Introducing the discrete scalar product
(14.4.4)   (z,v)_N := Σ_{j=0}^N z(x_j) v(x_j) w_j ,   z, v ∈ C⁰([−1,1]) ,
we recall that from (4.2.6) it follows that
(14.4.5)   (z,v)_N = ∫_{−1}^{1} z(x) v(x) (1 − x²)^α dx
provided zv ∈ P_{2N−1}. Here, α = 0 when we are using Legendre nodes, while α = −1/2 for Chebyshev nodes. The collocation approximation to (14.4.1)-(14.4.3) reads: for each t ≥ 0 find u_N = u_N(t) ∈ P_N such that
(14.4.6)   ∂u_N/∂t (t, x_j) + a ∂u_N/∂x (t, x_j) = f(t, x_j) ,  0 ≤ j ≤ N−1 ,  t > 0 ,
(14.4.7)   u_N(t, x_N) = φ(t) ,  t > 0 ,
(14.4.8)   u_N(0, x_j) = u_{0,N}(x_j) ,  0 ≤ j ≤ N ,
where u_{0,N} ∈ P_N is a suitable approximation of the initial datum u₀. The differential equation has been enforced at all collocation nodes except the inflow point x_N = −1, where the boundary condition (14.4.2) is prescribed. In particular, it is satisfied also at the outflow boundary point x₀ = 1. Similarly, at the time t = 0 the approximate solution u_N is asked to match the initial datum at all Gauss-Lobatto nodes. If a is negative, and therefore the boundary condition is prescribed at the point x₀ = 1, the collocation method is stated similarly, by simply reversing the roles of x₀ and x_N. In the case of Legendre collocation, stability can easily be derived for u_N. As a matter of fact, take for simplicity φ = 0 and f = 0, multiply (14.4.6) by u_N(t,x_j) w_j, then sum over j and use (14.4.5). We obtain for each t > 0
½ d/dt ‖u_N(t)‖²_N + (a/2) u_N²(t,1) = 0 ,
where ‖v‖²_N = (v,v)_N is a discrete norm, equivalent to the norm of L²(−1,1) on P_N (see (4.4.16)). Then
‖u_N(t)‖²_N + a ∫₀ᵗ u_N²(τ,1) dτ = ‖u_{0,N}‖²_N   ∀ t > 0 .
Taking u_{0,N} = I_N u₀, the interpolant of the initial datum u₀ (assumed to belong to C⁰([−1,1])), the L²(−1,1)-stability follows at once (see also (6.2.27)). The case of non-homogeneous data φ ≠ 0 can be dealt with by enforcing the inflow condition in a weak, penalized form, as stated in (14.4.9)-(14.4.10).
This is the relaxed form of the inflow boundary condition (14.4.7). Since w_N = 2/[N(N+1)], (14.4.10) states that the boundary datum is matched up to a small coefficient times the residual of the differential equation. This treatment of boundary data for hyperbolic equations has been introduced by Funaro and Gottlieb (1988, 1989), and can also be extended to the Chebyshev collocation method (see also Funaro (1992), p. 231). Stability analysis for (14.4.9) is straightforward: taking v_N = u_N(t), we can easily deduce a result like the one obtained for (14.4.6). The proof of convergence can be accomplished as follows. The exact solution u of (14.4.1) satisfies
(14.4.11)   d/dt (u(t), v_N) − (a u(t), ∂v_N/∂x) + a u(t,1) v_N(1) = (f(t), v_N)   ∀ v_N ∈ P_N ,  t > 0 .
Here, for each φ ∈ C⁰([−1,1]), we have set E(φ, v_N) := (φ, v_N) − (φ, v_N)_N. Notice that, by proceeding as in Section 4.4.2, it follows that
E(f(t), v_N) ≤ C (‖f(t) − I_N f(t)‖₀ + ‖f(t) − I_{N−1} f(t)‖₀) ‖v_N‖₀
and
E(∂u/∂t(t), v_N) ≤ C (‖∂u/∂t(t) − I_N ∂u/∂t(t)‖₀ + ‖∂u/∂t(t) − I_{N−1} ∂u/∂t(t)‖₀) ‖v_N‖₀ ,
where ‖·‖_k, k ≥ 0, denotes the norm of the Sobolev space H^k(I) (see Section 1.2). Hence, employing the interpolation results proven in Section 4.4.2, we find
(14.4.14)   E(f(t), v_N) − E(∂u/∂t(t), v_N) ≤ C N^{1−s} (‖f(t)‖_{s−1} + ‖∂u/∂t(t)‖_{s−1}) ‖v_N‖₀ .
Taking, for each fixed t, v_N = e_N(t) in (14.4.13) yields
(14.4.15)   ½ d/dt ‖e_N(t)‖₀² + (a/2) e_N²(t,1) + (a/2) e_N²(t,−1)
            ≤ [C N^{1−s} (‖f(t)‖_{s−1} + ‖∂u/∂t(t)‖_{s−1}) + ‖∂(ū − u)/∂t (t)‖₀ + a ‖(u − ū)(t)‖₁] ‖e_N(t)‖₀ .
Here, we have integrated by parts the term (a(u − ū)(t), ∂e_N/∂x(t)), taking into account that (u − ū)(t,±1) = 0. Now, using the Gronwall lemma (see Lemma 1.4.1) furnishes
(14.4.16)   ‖e_N(t)‖₀² + a ∫₀ᵗ [e_N²(τ,1) + e_N²(τ,−1)] dτ
            ≤ [‖I_N u₀ − u_{0,N}‖₀² + C N^{2−2s} ∫₀ᵗ (‖f(τ)‖²_{s−1} + ‖∂u/∂t(τ)‖²_{s−1} + ‖u(τ)‖²_s) dτ] exp(Ct) ,
provided that there exists a suitable s > 2 for which all the norms on the right-hand side make sense. The error estimate follows from the relation u − u_N = u − I_N u + e_N, and reads
(14.4.17)   ‖u(t) − u_N(t)‖₀² + a ∫₀ᵗ [(u − u_N)²(τ,1) + (u − u_N)²(τ,−1)] dτ
            ≤ C [‖u₀ − u_{0,N}‖₀² + N^{2−2s} ∫₀ᵗ (‖f(τ)‖²_{s−1} + ‖∂u/∂t(τ)‖²_{s−1} + ‖u(τ)‖²_s) dτ] exp(Ct) .
14.4.2 Spectral Collocation Method: the Vector Case
Let us now consider the spectral approximation of the linear hyperbolic system
(14.4.18)   ∂U/∂t + A ∂U/∂x = F ,  t > 0 ,  x ∈ I ,
with initial data
(14.4.19)   U(0,x) = U₀(x) ,  x ∈ I ,
where U : [0,+∞) × I → ℝ^p and U₀ : I → ℝ^p. Moreover, A ∈ ℝ^{p×p} is a constant matrix, which, for simplicity, is assumed to be nonsingular, so that each of its eigenvalues λ_i, i = 1, ..., p, is different from 0. Precisely, we indicate by λ₁, ..., λ_{p₀} the positive eigenvalues, and by λ_{p₀+1}, ..., λ_p the negative ones. We diagonalize A as done in (14.1.7), and introduce the characteristic variables W = T^{−1}U. The latter satisfy the equation
∂W/∂t + Λ ∂W/∂x = T^{−1} F ,  t > 0 ,  x ∈ I ,
where Λ = diag(λ₁, ..., λ_p). We split Λ and W as done in (14.1.17), and the matrix T^{−1} accordingly, setting
Here, (T^{−1})^+ is the p₀ × p matrix given by the first p₀ rows of T^{−1}, and (T^{−1})^− is the remaining (p − p₀) × p matrix. Notice that, if A is symmetric, then the matrix T turns out to be unitary, i.e., T^{−1} = T^T. In this case, (T^{−1})^+ is the p₀ × p matrix whose k-th row is given by the eigenvector ω^k, k = 1, ..., p₀, and analogously for (T^{−1})^−. As discussed in Section 14.1, the system (14.4.18) must be supplemented by p₀ boundary conditions at the left-hand point x_N = −1, and p − p₀ at the right-hand point x₀ = 1, for all t > 0. For instance, one assigns G^−U at x₀ = 1, G^− being a (p − p₀) × p matrix, and G^+U at x_N = −1, with G^+ a p₀ × p matrix. These matrices must satisfy the following additional condition:
the submatrix given by the first p₀ columns of G^+T is nonsingular,
the submatrix given by the last p − p₀ columns of G^−T is nonsingular
(see, e.g., Gastaldi and Quarteroni (1989)). The spectral collocation approximation to (14.4.18) can be defined as follows. For any t ≥ 0 we look for U_N(t) ∈ (P_N)^p satisfying the differential equations at each internal point, i.e.,
(14.4.20)   (∂U_N/∂t + A ∂U_N/∂x − F)(t, x_j) = 0 ,   1 ≤ j ≤ N−1 ,  t > 0 ,
and moreover the initial condition U_N(0) = U_{0,N} ∈ (P_N)^p, a suitable approximation of U₀. Besides, at each boundary point we have to enforce the differential equations associated with the outgoing characteristic variables. Precisely,
(14.4.21)   (T^{−1})^− (∂U_N/∂t + A ∂U_N/∂x − F)(t, x_N) = 0 ,  t > 0
(p − p₀ equations) and
(14.4.22)   (T^{−1})^+ (∂U_N/∂t + A ∂U_N/∂x − F)(t, x₀) = 0 ,  t > 0
(p₀ equations). Introducing the discrete characteristic variables W_N := T^{−1} U_N, and splitting W_N into W_N^+ (its first p₀ components) and W_N^− (the last p − p₀ components) (the same notation is adopted for the vector T^{−1}F), we can easily see that (14.4.21) and (14.4.22) are respectively equivalent to:
(14.4.23)   [∂W_N^−/∂t + Λ^− ∂W_N^−/∂x − (T^{−1})^− F](t, x_N) = 0 ,  t > 0
and
(14.4.24)   [∂W_N^+/∂t + Λ^+ ∂W_N^+/∂x − (T^{−1})^+ F](t, x₀) = 0 ,  t > 0 .
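The splitting into characteristic variables can be illustrated on a hypothetical 2×2 system (our own example, not from the text), for which p = 2 and p₀ = 1:

```python
import numpy as np

# the wave system U_t + A U_x = 0 with A = [[0,1],[1,0]]: eigenvalues +1, -1
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
lam, T = np.linalg.eig(A)
order = np.argsort(-lam)                  # positive eigenvalues first
lam, T = lam[order], T[:, order]
Tinv = np.linalg.inv(T)
p0 = int(np.sum(lam > 0))                 # p_0 = number of positive eigenvalues

Tinv_plus = Tinv[:p0, :]                  # (T^{-1})^+: first p_0 rows
Tinv_minus = Tinv[p0:, :]                 # (T^{-1})^-: remaining p - p_0 rows

U = np.array([1.0, 2.0])                  # a sample state vector
W_plus = Tinv_plus @ U                    # incoming at x_N = -1: boundary data
W_minus = Tinv_minus @ U                  # outgoing at x_N = -1: eq. (14.4.21)
```

Reassembling W = (W^+, W^−) and applying T recovers U, which is the content of the change of variables W = T^{−1}U.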
The system has to be closed by enforcing p₀ boundary conditions at x_N = −1, and p − p₀ at x₀ = 1 (for instance, by assigning G^+U_N at x_N = −1 and G^−U_N at x₀ = 1). This boundary treatment for hyperbolic systems was proposed in Gottlieb, Gunzburger and Turkel (1982) and later generalized in Canuto and Quarteroni (1987). It is stable, and needs to be implemented consistently in both explicit and implicit temporal discretizations. For its analysis see also Canuto, Hussaini, Quarteroni and Zang (1988), pp. 428-430.
14.4.3 Time-Advancing and Smoothing Procedures
Before concluding, we briefly address the issue of time-discretization. First of all, let us consider the eigenvalue problem: find w_N ∈ P_N, w_N ≠ 0, and λ ∈ ℂ such that
(14.4.25)   dw_N/dx (x_j) = λ w_N(x_j) ,  0 ≤ j ≤ N−1 ,
            w_N(−1) = 0 ,
which is associated with the scalar hyperbolic problem (14.4.1)-(14.4.3). For both Legendre and Chebyshev nodes, the real parts of the eigenvalues λ are strictly negative, while the modulus satisfies a bound of the form |λ| = O(N²) (see, e.g., Canuto, Hussaini, Quarteroni and Zang (1988), Sect. 11.4.3). As a consequence, explicit temporal discretizations of problems like (14.4.1) or (14.4.18) undergo a stability limit on the time-step of the form Δt = O(N^{−2}). In particular, under this restriction all Adams-Bashforth methods are asymptotically stable. On the other hand, implicit methods like backward Euler or Crank-Nicolson are unconditionally stable. They generate, however, a linear system
to be solved at each time-level, whose matrix has a condition number that behaves like O(N²). Finite difference based preconditioners can still be used; an account is given in Funaro (1992), pp. 172-174.
When spectral methods are used naively to approximate hyperbolic equations (or systems) with non-smooth data, oscillations grow uncontrolled due to the onset of the Gibbs phenomenon. Since the oscillations are due to the growth of the higher order modes, the cure relies on the use of a filtering mechanism. This requires transforming the collocation solution from physical space to frequency space, i.e., finding its Legendre (or Chebyshev) coefficients and applying to them a suitable cut-off function (e.g., Vandeven (1991) and references therein). By this global smoothing, the accuracy of the solution may be dramatically improved. However, in practical applications only a finite-order decay of the error is usually observed in the region near the discontinuities. An alternative consists in post-processing the collocation solution by a local smoothing, that is, one based on a convolution procedure. The idea has been developed, both theoretically and computationally, by Gottlieb and coworkers (see Gottlieb (1985), Gottlieb and Tadmor (1985), Abarbanel, Gottlieb and Tadmor (1986)). For nonlinear scalar conservation laws like (14.1.20) (with x ∈ (−1,1)), the so-called spectral viscosity method consists in adding to the standard spectral method (Galerkin or collocation) an extra term that has the meaning of a nonlinear viscosity. Besides, the resulting solution needs to be post-processed by a local smoothing procedure. This approach, which was first introduced for spectral Fourier approximations of scalar conservation laws with periodic solutions (Tadmor (1989), Maday and Tadmor (1989), Chen, Du and Tadmor (1992)), has recently been extended to the case of Legendre collocation approximation (Maday, Ould Kaber and Tadmor (1993)).
14.5 Second Order Linear Hyperbolic Problems
Let Ω ⊂ ℝ^d, d = 2,3, be a given domain, and consider the second order hyperbolic equation
(14.5.1)   ∂²u/∂t² + Lu = f   in Q_T := (0,T) × Ω
           u = 0   on Σ_{D,T} := (0,T) × Γ_D
           ∂u/∂n_L = 0   on Σ_{N,T} := (0,T) × Γ_N
           u|_{t=0} = u₀   on Ω
           ∂u/∂t|_{t=0} = u₁   on Ω .
Here ∂Ω = Γ_D ∪ Γ_N, Γ_D ∩ Γ_N = ∅, Lz = −Σ_{i,j=1}^d D_i(a_{ij} D_j z), and the coefficients a_{ij} satisfy Definition 6.1.1, so that L is an elliptic operator. Moreover, the conormal derivative has been denoted by ∂z/∂n_L := Σ_{i,j=1}^d a_{ij} D_j z n_i, where n = (n₁, ..., n_d) is the unit outward normal vector on ∂Ω. The variational formulation of problem (14.5.1) is as follows: find u ∈ C⁰([0,T]; V) ∩ C¹([0,T]; L²(Ω)) satisfying
(14.5.2)   d²/dt² (u(t), v) + a(u(t), v) = (f(t), v)   ∀ v ∈ V
           u(0) = u₀
           du/dt (0) = u₁
in the sense of distributions in (0,T), where V := H¹_{Γ_D}(Ω) (see (6.1.6)) and
a(z,v) := Σ_{i,j=1}^d ∫_Ω a_{ij} D_j z D_i v ,   (z,v) := ∫_Ω z v .
Assuming u₀ ∈ V, u₁ ∈ L²(Ω) and f ∈ L²(Q_T), the above problem has a unique solution (for a proof see, e.g., Lions and Magenes (1968b); also see Raviart and Thomas (1983)).
A semi-discrete approximation of (14.5.2) derived from a Galerkin method can be defined as follows. Let V_h be a finite dimensional subspace of V (whose dimension is N_h), and assume that u_{0,h}, u_{1,h} ∈ V_h are suitable approximations to u₀ and u₁, respectively. Then for each t ∈ [0,T] we look for a function u_h(t) ∈ V_h that satisfies
(14.5.3)   d²/dt² (u_h(t), v_h) + a(u_h(t), v_h) = (f(t), v_h)   ∀ v_h ∈ V_h ,  t ∈ (0,T)
           u_h(0) = u_{0,h}
           du_h/dt (0) = u_{1,h} .
Proceeding as in the case of the differential problem (14.5.2), one can show that problem (14.5.3) has a unique solution. Stability is easily proven by taking v_h = du_h/dt(t) at each t > 0 and integrating in time the resulting equation. If the bilinear form a(·,·) is coercive in V and symmetric, i.e., a_{ij} = a_{ji} for each i,j = 1, ..., d, it can be proven that there exists a sequence of eigenvalues 0 < γ_{1,h} ≤ ... ≤ γ_{N_h,h} and corresponding eigenfunctions {w_{i,h}} verifying
(14.5.4)   a(w_{i,h}, v_h) = γ_{i,h} (w_{i,h}, v_h)   ∀ v_h ∈ V_h .
These eigenfunctions are mutually orthogonal in L²(Ω), and provide a complete orthonormal basis of V_h. Setting μ_{i,h} = √γ_{i,h}, the solution of (14.5.3) takes the form
u_h(t) = Σ_{i=1}^{N_h} { (u_{0,h}, w_{i,h}) cos(μ_{i,h} t) + (1/μ_{i,h}) (u_{1,h}, w_{i,h}) sin(μ_{i,h} t)
         + (1/μ_{i,h}) ∫₀ᵗ (f(s), w_{i,h}) sin(μ_{i,h}(t − s)) ds } w_{i,h} .
For practical purposes the representation above is not useful. With notations similar to those of Section 5.6.1, (14.5.3) can be reformulated in algebraic form as follows:
(14.5.5)   M d²ξ/dt² (t) + A ξ(t) = F(t) ,  t ∈ (0,T)
           ξ(0) = ξ₀
           dξ/dt (0) = ξ₁ .
Precisely, having denoted by {φ_j}_{j=1,...,N_h} a basis of V_h, and
u_h(t) = Σ_{j=1}^{N_h} ξ_j(t) φ_j ,   u_{0,h} = Σ_{j=1}^{N_h} ξ_{0,j} φ_j ,   u_{1,h} = Σ_{j=1}^{N_h} ξ_{1,j} φ_j ,
it is
M_ij := (φ_i, φ_j) ,   A_ij := a(φ_j, φ_i) ,   F_i(t) := (f(t), φ_i) .
For the time discretization, let us consider the simple ordinary differential equation of second order in time
(14.5.6)   y''(t) = ψ(t, y(t), y'(t)) ,  t ∈ (0,T)
           y(0) = y₀
           y'(0) = z₀ ,
where ψ : [0,T] × ℝ × ℝ → ℝ is a continuous function. Let us divide [0,T] into subintervals [t^n, t^{n+1}], with t⁰ = 0, t^N = T, and t^{n+1} = t^n + Δt for n = 0, ..., N−1. A popular discretization of second order ordinary differential equations like (14.5.6) is the one based on the Newmark method. Denoting by y^n the approximation of y(t^n), the Newmark method generates the following sequence: set y⁰ = y₀, z⁰ = z₀ and then solve
(14.5.7)   y^{n+1} = y^n + Δt z^n + (Δt)² [ζ ψ^{n+1} + (½ − ζ) ψ^n]
           z^{n+1} = z^n + Δt [θ ψ^{n+1} + (1 − θ) ψ^n]
for n = 0, 1, ..., N−1, where ζ and θ are nonnegative parameters, and ψ^n := ψ(t^n, y^n, z^n) (here z^n is an approximation of y'(t^n)). The scheme is explicit if ζ = θ = 0; otherwise, for each n a system (which is nonlinear when ψ depends nonlinearly on y and/or y') needs to be solved in order to obtain y^{n+1}, z^{n+1} from y^n and z^n. When ψ doesn't depend on y', then z^{n+1} can be eliminated from the above equations, and we obtain for n = 0, 1, ..., N−2
(14.5.8)   y^{n+2} − 2y^{n+1} + y^n = (Δt)² [ζ ψ^{n+2} + (½ − 2ζ + θ) ψ^{n+1} + (½ + ζ − θ) ψ^n] .
If ζ = 0 this scheme is explicit (when θ = 1/2 it reduces to the well-known leap-frog scheme). By formal Taylor expansion it can be found that the Newmark method (14.5.7) is second order accurate if θ = 1/2, whereas it is first order accurate if θ ≠ 1/2. The condition θ ≥ 1/2 is necessary for stability. Moreover, if θ = 1/2 and ζ = 1/4, the Newmark method is unconditionally stable. This popular choice is, however, unsuitable for long time integration, as the discrete solution may be affected by parasitic oscillations that are not damped as t increases. For long time integration it is therefore preferable to use ζ ≥ (θ + 1/2)²/4 for a suitable θ > 1/2 (although in such a case the method downgrades to first order).
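A minimal sketch of (14.5.7) with ζ = 0 and θ = 1/2 (explicit, since ψ is taken independent of y'), applied to the harmonic oscillator y'' = −ω²y; the frequency, step and horizon are our own choices.

```python
import numpy as np

def newmark_explicit(psi, y0, z0, dt, nsteps, theta=0.5):
    # Newmark scheme (14.5.7) with zeta = 0; explicit provided psi = psi(t, y)
    y, z, t = y0, z0, 0.0
    for _ in range(nsteps):
        pn = psi(t, y)
        y = y + dt * z + 0.5 * dt**2 * pn       # (1/2 - zeta) = 1/2 here
        t = t + dt
        z = z + dt * (theta * psi(t, y) + (1.0 - theta) * pn)
    return y, z

omega = 2.0
psi = lambda t, y: -omega**2 * y                # harmonic oscillator y'' = -w^2 y
y, z = newmark_explicit(psi, 1.0, 0.0, dt=0.01, nsteps=100)
# exact solution at t = 1: y = cos(omega), y' = -omega*sin(omega)
```

With θ = 1/2 the computed solution matches the exact one to second order in Δt, consistently with the accuracy statement above.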
When applied to system (14.5.5), the Newmark method becomes:
(14.5.9)   M ξ^{n+1} = M ξ^n + Δt M σ^n + (Δt)² [ζ (F^{n+1} − A ξ^{n+1}) + (½ − ζ)(F^n − A ξ^n)]
           M σ^{n+1} = M σ^n + Δt [θ (F^{n+1} − A ξ^{n+1}) + (1 − θ)(F^n − A ξ^n)]
for n = 0, 1, ..., N−1, with ξ⁰ = ξ₀, σ⁰ = ξ₁ and F^n := F(t^n). At each step (14.5.9) entails the solution of a linear system
(14.5.10)   (M + ζ(Δt)² A) ξ^{n+1} = M ξ^n + Δt M σ^n + (Δt)² [ζ F^{n+1} + (½ − ζ)(F^n − A ξ^n)] ,
whose matrix is symmetric and positive definite. If ζ = 0 and M is diagonal (e.g., after a lumping procedure), (14.5.10) provides ξ^{n+1} explicitly. It can be shown that the scheme is unconditionally stable for θ = 1/2 and ζ = 1/4, whereas for the other values of θ ≥ 1/2 and ζ ≥ 0 the time-step restriction
Δt √γ_{N_h,h} ≤ C_{θ,ζ}
has to be assumed. Here γ_{N_h,h} is the maximum eigenvalue of the bilinear form a(·,·) (see (14.5.4)), and behaves like h^{−2} for finite element subspaces V_h (see (6.3.24)). When θ ≥ 1/2 and ζ ≥ (θ + 1/2)²/4 the constant C_{θ,ζ} > 0 can be arbitrarily large; otherwise, it is strictly less than [(θ + 1/2)²/4 − ζ]^{−1/2}. Proofs of these stability results are given in Raviart and Thomas (1983), where the convergence theory is also provided.
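The unconditionally stable choice θ = 1/2, ζ = 1/4 can be illustrated on (14.5.5) with the classical P1 mass and stiffness matrices in one space dimension (our own simplified setting, with F = 0 and homogeneous Dirichlet ends); note that Δt√γ_{N_h,h} is far larger than 1 here, and that for this choice the discrete energy is exactly conserved.

```python
import numpy as np

# 1D model of (14.5.5): P1 finite elements on (0,1) for u_tt = u_xx, u = 0 at
# the ends; mesh, data and parameters below are our own choices.
n = 49
h = 1.0 / (n + 1)
Id = np.eye(n)
S = np.eye(n, k=1)
M = h / 6.0 * (4.0 * Id + S + S.T)            # P1 mass matrix
A = (2.0 * Id - S - S.T) / h                  # P1 stiffness matrix

theta, zeta, dt = 0.5, 0.25, 0.05             # unconditionally stable choice
xg = h * np.arange(1, n + 1)
xi = np.sin(np.pi * xg)                       # xi^0
sg = np.zeros(n)                              # sigma^0

K = M + zeta * dt**2 * A                      # matrix of system (14.5.10)
def energy(xi, sg):
    return 0.5 * sg @ M @ sg + 0.5 * xi @ A @ xi

E0 = energy(xi, sg)
for _ in range(200):
    rhs = M @ xi + dt * (M @ sg) - dt**2 * (0.5 - zeta) * (A @ xi)
    xin = np.linalg.solve(K, rhs)             # solve (14.5.10) with F = 0
    sg = sg + dt * np.linalg.solve(M, -theta * (A @ xin) - (1.0 - theta) * (A @ xi))
    xi = xin
E1 = energy(xi, sg)
```

Despite the very large value of Δt√γ_{N_h,h}, the discrete energy ½σᵀMσ + ½ξᵀAξ is preserved up to round-off, which is the hallmark of the (θ, ζ) = (1/2, 1/4) scheme for linear undamped problems.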
14.6 The Finite Volume Method
The finite volume method is especially designed for the approximation of conservation laws. In a very general framework, a conservation law can be written in vector form as
(14.6.1)   d/dt ∫_Ω a(w) dx + ∫_{∂Ω}