THE SPLITTING EXTRAPOLATION METHOD
SERIES ON APPLIED MATHEMATICS Editor-in-Chief: Frank Hwang Associate Editors-in-Chief: Zhong-ci Shi and Kunio Tanabe
Vol. 1
International Conference on Scientific Computation eds. T. Chan and Z.-C. Shi
Vol. 2
Network Optimization Problems — Algorithms, Applications and Complexity eds. D.-Z. Du and P. M. Pardalos
Vol. 3
Combinatorial Group Testing and Its Applications by D.-Z. Du and F. K. Hwang
Vol. 4
Computation of Differential Equations and Dynamical Systems eds. K. Feng and Z.-C. Shi
Vol. 5
Numerical Mathematics eds. Z.-C. Shi and T. Ushijima
Vol. 6
Machine Proofs in Geometry by S.-C. Chou, X.-S. Gao and J.-Z. Zhang
Vol. 7
The Splitting Extrapolation Method by C. B. Liem, T. Lu and T. M. Shih
Series on Applied Mathematics Volume 7
THE SPLITTING EXTRAPOLATION METHOD
A New Technique in Numerical Solution of Multidimensional Problems

C. B. Liem
The Hong Kong Polytechnic University

T. Lü
Chengdu Institute of Computer Applications

T. M. Shih
The Hong Kong Polytechnic University

World Scientific
Singapore • New Jersey • London • Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Farrer Road, Singapore 9128 USA office: Suite IB, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
Library of Congress Cataloging-in-Publication Data
Tao, Lu, 1940-
Splitting extrapolation method / by Lü Tao, T. M. Shih, C. B. Liem.
p. cm. - (Series on applied mathematics; vol. 7)
Includes bibliographical references and index.
ISBN 9810222173
1. Splitting extrapolation method. I. Shih, T. M., 1937- . II. Liem, C. B., 1934- . III. Title. IV. Series.
QA297.5.T36 1995
519.4-dc20    95-18449 CIP
Copyright © 1995 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923 USA. This book is printed on acid-free paper. Printed in Singapore by Uto-Print
Preface
Large scale computation in science and engineering is one of the most challenging areas in computational mathematics. Despite the significant progress made in digital computers and computational methods over the last two decades, the solution of problems with complicated domains and high dimension remains difficult. For a high dimensional problem, the computational work and computer storage requirement increase exponentially with the dimension. This phenomenon, known as "the dimensional effect", is the major obstacle in the application of computational methods to high dimensional problems.

In order to overcome this obstacle, a computational method would, ideally, fulfil the following requirements:

(1) it should be a parallel computational method which can be implemented on multiprocessor computers with minimal communication among processors;
(2) it should possess a high order of accuracy, with a posteriori error estimation and adaptive grid refinement, so that it can evaluate the computational results in real time;
(3) the complexity of the algorithm should be independent, or almost independent, of the dimension of the problem.

This volume describes the splitting extrapolation method, which has all the above-mentioned characteristics. The idea of splitting extrapolation was first developed by two Chinese mathematicians, Qun Lin and Tao Lü, in 1983. The authors of this book conducted a detailed analysis of splitting extrapolation and its applications in 1990, and identified it as an extension of the classical Richardson extrapolation, that is, a multivariate Richardson extrapolation.

In recent years, promising progress has been made in parallel computational methods, multilevel methods and sparse grid combination techniques. The combination of these methods with splitting extrapolation has become an effective way of tackling "the curse of dimensionality". A comprehensive review of these methods can be found in U. Rüde (1991, 1993).
Much of the recent research on splitting extrapolation methods has been published in Chinese journals and has not drawn much attention overseas. I hope that this publication will help to promote interaction among interested readers outside China.
Beijing, December 1994
Zhong-ci Shi Member of Academia Sinica
Acknowledgements
First and foremost, we would like to acknowledge Professors Zhong-ci Shi and Qun Lin for their valuable suggestions and unfailing encouragement. We also express our sincere gratitude to Professor Zhong-ci Shi for writing the preface of this volume. We owe many thanks to our students at the Hong Kong Polytechnic University, Har-ming Chu, Yu-kin Yiu, Kin-fai Wong, Tung-kao Hung and Wai-lun Kwok, for performing many valuable numerical experiments, and to Mr. Yi-xun Wang for his help in editing the manuscript. The authors would also like to thank World Scientific Publishing Co Pte Ltd and Ms E. H. Chionh for their friendly cooperation. Finally, we thank the Hong Kong Polytechnic University Research Fund (340/284 and 340/181) and the National Science Foundation of China (19471075 and 863-306-05-01), without whose subsidies this work would not have been possible.
Hong Kong,
December 1994
Tao Lü, Chengdu Institute of Computer Applications, Academia Sinica
Tsi-min Shih,
Chin-bo Liem
Department of Applied Mathematics, The Hong Kong Polytechnic University
Introduction
Most mathematical models of scientific and engineering problems are described in continuous forms, such as integrals, integral equations, differential equations and integro-differential equations. In order to evaluate these models numerically, it is necessary to choose a discretization parameter h and a discretization method, so as to convert the continuous forms into algebraic equations, and then to obtain the numerical solution u_h. The accuracy of u_h therefore depends on h.

If the exact solution is regarded as the limit of numerical solutions, then by considering the dependence of the numerical solution on the discretization parameters, one arrives naturally at extrapolation, or extrapolation to the limit. In essence, one chooses several different discretization parameters and evaluates the corresponding approximate solutions; these solutions are then extrapolated to obtain another approximate solution with a higher order of accuracy.

In fact, the idea of extrapolation was used in ancient times for accelerating convergence. The Chinese mathematicians Liu Hui (A.D. 263) and Zu Chongzhi (429-500) used the idea to evaluate π, and Huygens (1654), a Dutch mathematician, proposed the extrapolation formula (4/3)T_h - (1/3)T_{2h} to evaluate π.

However, it was Richardson (1910) who first studied extrapolation as a general and effective algorithm and established polynomial extrapolation. This was followed by the ε-extrapolation of Wynn (1956), the rational extrapolation of Bulirsch and Stoer (1964), and the general extrapolation of C. Brezinski (1980).

In practice, extrapolation methods have become effective techniques for accelerating convergence in various branches of numerical mathematics, including the Romberg algorithm (1955) in numerical integration, the Stetter theorem (1965) and the Gragg-Bulirsch-Stoer algorithm (1964) in ordinary differential equations, the extrapolation in the finite difference method by G.
Marchuk and V. Shaidurov (1979), and the extrapolation in the finite element method by Q. Lin and T. Lü (1983), and by R. Rannacher (1988).
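Huygens' acceleration mentioned above is easy to reproduce. The sketch below is a modern illustration (not taken from this book; the function name is mine): it generates the semi-perimeters S_n of regular n-gons inscribed in the unit circle by the classical side-doubling recurrence, then applies (4 S_{2n} - S_n)/3.

```python
import math

def semiperimeters(levels):
    """Semi-perimeters S_n = (n/2)*s_n of regular n-gons inscribed in the
    unit circle, n = 6, 12, 24, ..., using the classical side-doubling
    recurrence s_{2n} = sqrt(2 - sqrt(4 - s_n^2))."""
    s, n = 1.0, 6                      # inscribed hexagon has side length 1
    vals = [(n, n * s / 2.0)]
    for _ in range(levels):
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
        vals.append((n, n * s / 2.0))
    return vals

vals = semiperimeters(5)               # up to n = 192
(_, Sn), (_, S2n) = vals[-2], vals[-1]
accelerated = (4.0 * S2n - Sn) / 3.0   # Huygens' extrapolation
```

S_192 by itself is accurate to roughly four digits, while the extrapolated value gains several more, because the leading n^{-2} term of the error is cancelled.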
The above-mentioned extrapolations, though varying in form, are all based on extrapolation with respect to a single discretization parameter. An extrapolation of this kind inevitably runs into difficulties for high dimensional problems because of the dimensional effect. Take the evaluation of an s-multiple integral by Richardson's extrapolation as an example: in order to obtain an algebraic precision of degree (2m+1), function values at 2^{ms} points are needed. This leads to an exponential increase in the computational complexity as the dimension increases.

The dimensional effect becomes more serious in solving partial differential equations: for an s-dimensional problem, one step of Richardson's extrapolation requires, apart from solving an algebraic system with N^s unknowns, the solution of an auxiliary problem with (2N)^s unknowns. However, problems arising from science and engineering, such as the exploration and exploitation of oil and gas, weather forecasting and pollution control, are often described by high dimensional mathematical models. It was on the grounds of such demands that Q. Lin and T. Lü (1983) proposed the Splitting Extrapolation Method (SEM).

The Richardson extrapolation is based on a single-variable asymptotic expansion of the error, while the SEM is based on a multivariate asymptotic expansion of the error. This simple change greatly improves the efficiency of the extrapolation. First, the computational complexity is almost optimal: for example, to evaluate an s-multiple integral by the SEM with algebraic precision of degree (2m+1), only O(2^m m^{s-1}) function values are needed. Second, the SEM converts a large scale problem into several mutually independent subproblems which can be computed in parallel on a multiprocessor computer with little communication among the processors.
Here, the degree of parallelism depends on the number of independent discretization parameters, which need not coincide with the dimension of the problem and can be set according to the scale of the problem and the available number of processors. Third, the SEM provides a posteriori error estimates and self-adaptive algorithms.

The latest developments of extrapolation methods are related to the multilevel methods. Multilevel methods (such as multigrid methods) require the solution of several discrete equations with different grid sizes. These results can be used in extrapolation methods to obtain a higher order of accuracy economically. There are two branches of this idea: A. Brandt (1983) and W. Hackbusch (1984) combined the Richardson extrapolation with the multigrid methods (the so-called τ-extrapolation), and A. Schüller and Q. Lin (1985) combined the SEM with multigrid methods. Further, based on finite element multilevel subspace splitting, C. Zenger (1990) proposed a sparse-grid method for solving multidimensional problems, and
proved that an s-dimensional problem needs only O(h^{-1} |ln h|^{s-1}) grid points, and an equivalent number of basis functions, to obtain accuracy of order O(h^2 |ln h|^{s-1}), whereas the standard finite element method requires O(h^{-s}) grid points for accuracy of order O(h^2). The former method therefore has a greatly reduced computational complexity.

M. Griebel, M. Schneider and C. Zenger (1990) proposed the combination techniques based on the sparse-grid method. In principle, this method is similar to the SEM, in that both rely on a certain form of multivariate asymptotic expansion. However, the aim of the combination techniques is to combine the solutions on coarse grids to obtain an approximate solution with the same order of accuracy as the solution on a finer grid. A comprehensive report on the relationship between the SEM and the combination techniques can be found in U. Rüde (1991, 1993).

Lastly, a new development of extrapolation, the implicit extrapolation, tries to obtain an approximate equation of higher order of accuracy by means of the idea of extrapolation. The developments of the implicit extrapolation methods are also described in U. Rüde (1991, 1993).

This volume concentrates on the theory and application of the splitting extrapolation method. Some of the material is extracted from the work of the authors and is presented here for the first time. Chapter I introduces the Richardson extrapolation and its application in numerical integration; this may be useful for readers who are not familiar with these topics. Chapters II to V cover the SEM and its applications in multidimensional numerical integration, integral equations, and differential equations respectively. Chapter VI discusses the combination methods related to extrapolation. Finally, Chapter VII introduces the combination techniques and the sparse-grid methods of C. Zenger.
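To make the splitting idea concrete, here is a minimal numerical sketch. It is my own toy example, not an algorithm from this book: for the two-dimensional composite trapezoidal rule the error has a two-parameter expansion T(h1, h2) ≈ I + a1 h1^2 + a2 h2^2 + ..., so combining the coarse solution with the two solutions obtained by halving each mesh parameter separately, T* = (4/3)[T(h1/2, h2) + T(h1, h2/2)] - (5/3) T(h1, h2), cancels both second-order terms. The two refined subproblems are mutually independent and could run in parallel.

```python
import math

def trap2d(f, nx, ny):
    """Composite trapezoidal rule on [0,1]^2 with mesh (1/nx, 1/ny)."""
    hx, hy = 1.0 / nx, 1.0 / ny
    total = 0.0
    for i in range(nx + 1):
        wi = 0.5 if i in (0, nx) else 1.0
        for j in range(ny + 1):
            wj = 0.5 if j in (0, ny) else 1.0
            total += wi * wj * f(i * hx, j * hy)
    return total * hx * hy

f = lambda x, y: math.exp(x + y)
exact = (math.e - 1.0) ** 2            # integral of e^(x+y) over [0,1]^2

n = 8
T00 = trap2d(f, n, n)                  # coarse solution, mesh (h, h)
T10 = trap2d(f, 2 * n, n)              # h1 halved only: independent subproblem
T01 = trap2d(f, n, 2 * n)              # h2 halved only: independent subproblem
T_se = (4.0 / 3.0) * (T10 + T01) - (5.0 / 3.0) * T00
```

The coefficients 4/3 and -5/3 follow from requiring the h1^2 and h2^2 error terms to cancel; the combined value is fourth-order accurate, although every subproblem refines the grid in only one direction.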
List of Symbols

Z            set of integers
m | n        n is divisible by m
m ∤ n        n is not divisible by m
R            set of real numbers
R^s          s-dimensional Euclidean space
x            = (x_1, …, x_s), a point in R^s
e_j          unit vector in the direction of the x_j-coordinate
Ω            open set of R^s
∂Ω           boundary of Ω
Ω̄            closure of Ω
∅            empty set
T_{h,l}      set of gridlines, see §5.2
T_h          set of grid points, see §5.2
Ω_h          set of regular grid points, see §5.2
Ω_{h,i}      set of irregular grid points, see §5.2
∂Ω_h         ∂Ω ∩ T_{h,l}, see §5.2
Ω̄_h          Ω_h ∪ Ω_{h,i} ∪ ∂Ω_h
M(x)         discrete neighbourhood of grid point x, see §5.2
h            = (h_1, …, h_s), multi-parameter mesh width
h_0          = max h_i
α            = (α_1, …, α_s), a multi-label
|α|          = α_1 + ⋯ + α_s
α!           abbreviation of α_1! α_2! ⋯ α_s!
h^α          abbreviation of h_1^{α_1} ⋯ h_s^{α_s}, see §2.1.1
h/2^α        abbreviation of (h_1/2^{α_1}, …, h_s/2^{α_s}), see §2.1.1
h/(1+α)      abbreviation of (h_1/(1+α_1), …, h_s/(1+α_s)), see §2.1.1
f(x)         abbreviation of function f(x_1, …, x_s)
I. Generalization and Application of Richardson's Extrapolation

1. Polynomial extrapolation

Suppose the approximate solution T(h) has an asymptotic expansion of the form

T(h) = a_0 + a_1 h^{p_1} + ⋯ + a_m h^{p_m} + O(h^{p_{m+1}}),    (1.1.2)

where a_0, a_1, ⋯, a_m are constants and 0 < p_1 < p_2 < ⋯ < p_{m+1}. In practice, some asymptotic expansions may also have terms of the form h^{p_i} ln h; it will be proved later that such terms can be eliminated by means of polynomial extrapolation. In general, for smooth problems, the expansion (1.1.2) contains only even powers of h.

An extrapolation based on the expansion (1.1.2) is called the polynomial extrapolation (also called Richardson's extrapolation or Romberg's extrapolation). In 1910, Richardson combined two approximate solutions T(h) and T(h/2) to get (4/3)T(h/2) - (1/3)T(h), a solution with order of accuracy O(h^4); he called this the h^2-extrapolation. In 1955, Romberg used the Euler-Maclaurin asymptotic expansion to successively eliminate h^4, h^6, ⋯, obtaining approximate solutions of higher and higher order of accuracy; this is the famous Romberg method for numerical integration. In fact, in the 3rd century the Chinese mathematician Liu Hui (A.D. 263) already possessed a primitive idea of extrapolation, and in Europe Huygens (1654) used (4 S_{2n} - S_n)/3 to evaluate π. At present, applications of extrapolation for improving the accuracy of approximations can be found almost everywhere in mathematical computation.
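A quick illustration of the h^2-extrapolation, in a modern sketch with names of my own choosing, using the composite trapezoidal rule, whose error expansion contains only even powers of h:

```python
import math

def trap(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + inner)

f, exact = math.sin, 1.0 - math.cos(1.0)   # integral of sin over [0, 1]
Th  = trap(f, 0.0, 1.0, 8)                 # T(h)
Th2 = trap(f, 0.0, 1.0, 16)                # T(h/2)
richardson = (4.0 * Th2 - Th) / 3.0        # (4/3)T(h/2) - (1/3)T(h)
```

The combined value is O(h^4) accurate; iterating the same elimination on h^4, h^6, … is exactly Romberg's method.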
1.1. Interpolation polynomials and extrapolation
If the approximate solution has an asymptotic expansion of the form (1.1.2), and m+1 approximate solutions T(h_i), i = 0, ⋯, m, with h_0 > h_1 > ⋯ > h_m > 0, are already available, then the m-th extrapolation value T_m^{(0)} is determined by the following linear system:

T(h_i) = T_m^{(0)} + a_1 h_i^{p_1} + ⋯ + a_m h_i^{p_m},    i = 0, ⋯, m,    (1.1.3)

where T_m^{(0)}, a_1, ⋯, a_m are the unknowns. The value T_m^{(0)} is of interest because the expansion (1.1.2) shows that its order of accuracy is O(h_0^{p_{m+1}}).

In fact, instead of solving the linear system (1.1.3) to find T_m^{(0)}, one can derive an interpolation polynomial f(x) = Σ_{i=0}^m b_i x^{p_i} (with p_0 = 0) satisfying the interpolation conditions f(h_i) = T(h_i), i = 0, 1, ⋯, m. Once the interpolation polynomial is obtained, f(0) = b_0 = T_m^{(0)} is the required extrapolation value. This relationship between extrapolation and interpolation polynomials makes it possible to use the properties of interpolation polynomials to study extrapolation algorithms.

For the sake of simplicity, the following asymptotic expansion with even powers is discussed:

T(h) = a_0 + a_1 h^2 + ⋯ + a_m h^{2m} + O(h^{2(m+1)}).    (1.1.4)

Let x = h^2; the corresponding interpolation problem becomes: find a polynomial P_m(x) = a_0 + a_1 x + ⋯ + a_m x^m satisfying

P_m(x_i) = T(h_i),    i = 0, ⋯, m,    (1.1.5)

where x_i = h_i^2 and P_m(x_i) = f(x_i). This problem is equivalent to finding coefficients a_0, ⋯, a_m satisfying the linear system

a_0 + a_1 x_i + ⋯ + a_m x_i^m = f(x_i),    i = 0, ⋯, m.    (1.1.6)
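Since m is rarely more than a handful, system (1.1.6) can simply be solved directly. Below is a self-contained sketch (the function name is mine) using Gaussian elimination with partial pivoting; the coefficient a_0 is then the extrapolated value when the data are y_i = T(h_i) at x_i = h_i^2.

```python
def solve_vandermonde(xs, ys):
    """Solve system (1.1.6): a0 + a1*x_i + ... + am*x_i^m = y_i,
    by Gaussian elimination with partial pivoting (fine for small m)."""
    m = len(xs)
    # augmented matrix, row i = [1, x_i, x_i^2, ..., x_i^m | y_i]
    A = [[x ** j for j in range(m)] + [y] for x, y in zip(xs, ys)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            factor = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= factor * A[col][c]
    a = [0.0] * m
    for r in range(m - 1, -1, -1):      # back substitution
        a[r] = (A[r][m] - sum(A[r][c] * a[c] for c in range(r + 1, m))) / A[r][r]
    return a

# sanity check on exact polynomial data: y = 2 + 3x + x^2
coeffs = solve_vandermonde([1.0, 2.0, 3.0], [6.0, 12.0, 20.0])
```

Because the Vandermonde determinant is nonzero for distinct x_i, the solve always succeeds, though for larger m the Newton form discussed below is numerically preferable.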
Since the coefficient determinant of (1.1.6) is the Vandermonde determinant, with value ∏_{k>j} (x_k - x_j) ≠ 0, the interpolation polynomial exists and is unique.
It is well known that there are different expressions for the interpolation polynomial, each with its own advantages and disadvantages.

A) Lagrange's interpolation formula

P_m(x) = Σ_{i=0}^m L_i(x) f(x_i),    (1.1.7)

where

L_i(x) = ∏_{j=0, j≠i}^m (x - x_j)/(x_i - x_j)

are called the interpolation basis functions, satisfying

L_i(x_j) = δ_{ij},    i, j = 0, ⋯, m,    (1.1.8a)

and

Σ_{i=0}^m L_i(x) = 1.    (1.1.8b)

The Lagrange interpolation polynomials are expressed explicitly; they are convenient for theoretical analysis but not for practical computation. For example, when the number of nodes increases, one has to start the computation again from the beginning.

B) Newton's interpolation formula

P_m(x) = f(x_0) + (x - x_0) f[x_0, x_1] + (x - x_0)(x - x_1) f[x_0, x_1, x_2] + ⋯ + (x - x_0) ⋯ (x - x_{m-1}) f[x_0, ⋯, x_m],    (1.1.9)

where f[x_0, ⋯, x_i] is the i-th Newton divided difference, which can be expressed as

f[x_0, ⋯, x_i] = Σ_{j=0}^i f(x_j) / ∏_{k=0, k≠j}^i (x_j - x_k).    (1.1.10)

Newton's interpolation formula has many advantages: one is that when the number of nodes increases by one, only one more term is added while the previous terms remain unchanged; another is that the formula can be generalized easily to multivariate interpolation. This property is very helpful in the discussion of the splitting extrapolation algorithm in Chapter II.

C) Neville's interpolation formula
P
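Returning to B) above: Newton's formula (1.1.9) gives a convenient way to compute the extrapolated value in practice: build the divided-difference coefficients and evaluate the Newton form at x = 0. A minimal sketch (function names are mine; the trapezoidal data are only for illustration):

```python
import math

def divided_differences(xs, ys):
    """Coefficients f[x0], f[x0,x1], ..., f[x0,...,xm] of Newton's
    formula (1.1.9), computed in place from the divided differences."""
    coef = list(ys)
    m = len(xs)
    for k in range(1, m):
        for i in range(m - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form at x by Horner-like nesting."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[i]) + coef[i]
    return p

def trap(f, n):
    """Trapezoidal rule for f on [0, 1] with n subintervals."""
    h = 1.0 / n
    return h * (0.5 * (f(0.0) + f(1.0)) + sum(f(i * h) for i in range(1, n)))

hs = [1.0 / 2, 1.0 / 4, 1.0 / 8]
xs = [h * h for h in hs]                  # x = h^2, as in (1.1.4)-(1.1.6)
ys = [trap(math.exp, round(1 / h)) for h in hs]
extrapolated = newton_eval(xs, divided_differences(xs, ys), 0.0)
```

Evaluating at x = 0 extrapolates to the limit h → 0; adding one more mesh width appends a single new coefficient without recomputing the old ones, which is exactly the advantage mentioned above.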