Lecture Notes in Computational Science and Engineering Editors Timothy J. Barth Michael Griebel David E. Keyes Risto M. Nieminen Dirk Roose Tamar Schlick
57
Michael Griebel Marc A. Schweitzer (Eds.)
Meshfree Methods for Partial Differential Equations III With 137 Figures and 16 Tables
Editors
Michael Griebel
Marc A. Schweitzer
Universität Bonn, Institut für Numerische Simulation
Wegelerstraße 6, 53115 Bonn, Germany
email: [email protected], [email protected]

Library of Congress Control Number: 2002030507
Mathematics Subject Classification: 65N99, 65M99, 65M12, 65Y99
ISBN-10 3-540-46214-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-46214-9 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media. springer.com

© Springer-Verlag Berlin Heidelberg 2007

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: by the authors and techbooks using a Springer LaTeX macro package
Cover design: design & production GmbH, Heidelberg
Printed on acid-free paper
Preface
Meshfree methods for the numerical solution of partial differential equations are becoming more and more mainstream in many areas of application. Their flexibility and wide applicability are attracting engineers, scientists, and mathematicians to this very dynamic research area. Hence, the organizers of the third international workshop on Meshfree Methods for Partial Differential Equations, held from September 12 to September 15, 2005 in Bonn, Germany, aimed at bringing together European, American and Asian researchers working in this exciting field of interdisciplinary research. To this end Ivo Babuška, Ted Belytschko, Michael Griebel, Wing Kam Liu, Helmut Neunzert, and Harry Yserentant invited scientists from twelve countries to Bonn, to strengthen the mathematical understanding and analysis of meshfree discretizations, but also to promote the exchange of ideas on their implementation and application. The workshop was again hosted by the Institut für Numerische Simulation at the Rheinische Friedrich-Wilhelms-Universität Bonn, with the financial support of the Sonderforschungsbereich 611 Singular Phenomena and Scaling in Mathematical Models, funded by the Deutsche Forschungsgemeinschaft. This volume of LNCSE now comprises selected contributions of attendees of the workshop. Their content ranges from applied mathematics to physics and engineering.
Bonn, June, 2006
Michael Griebel Marc Alexander Schweitzer
Contents
Local Maximum-Entropy Approximation Schemes
M. Arroyo, M. Ortiz . . . . . . 1

Genetic Algorithms for Meshfree Numerical Integration
S. BaniHani, S. De . . . . . . 17

An RBF Meshless Method for Injection Molding Modelling
F. Bernal, M. Kindelan . . . . . . 41

Strain Smoothing for Stabilization and Regularization of Galerkin Meshfree Methods
J.S. Chen, W. Hu, M.A. Puso, Y. Wu, X. Zhang . . . . . . 57

Fuzzy Grid Method for Lagrangian Gas Dynamics Equations
O.V. Diyankov, I.V. Krasnogorov . . . . . . 77

New Shape Functions for Arbitrary Discontinuities without Additional Unknowns
T.-P. Fries, T. Belytschko . . . . . . 87

A Meshless BEM for 2-D Stress Analysis in Linear Elastic FGMs
X. Gao, C. Zhang, J. Sladek, V. Sladek . . . . . . 105

A Particle-Partition of Unity Method Part VII: Adaptivity
M. Griebel, M.A. Schweitzer . . . . . . 121

Enriched Reproducing Kernel Particle Approximation for Simulating Problems Involving Moving Interfaces
P. Joyot, J. Trunzler, F. Chinesta . . . . . . 149

Deterministic Particle Methods for High Dimensional Fokker-Planck Equations
M. Junk, G. Venkiteswaran . . . . . . 165

Bridging Scale Method and Its Applications
W.K. Liu, H.S. Park, E.G. Karpov, D. Farrell . . . . . . 185

A New Stabilized Nodal Integration Approach
M.A. Puso, E. Zywicz, J.S. Chen . . . . . . 207

Multigrid and M-Matrices in the Finite Pointset Method for Incompressible Flows
B. Seibold . . . . . . 219

Assessment of Generalized Finite Elements in Nonlinear Analysis
Y. Tadano, H. Noguchi . . . . . . 235

A Meshfree Method for Simulations of Interactions between Fluids and Flexible Structures
S. Tiwari, S. Antonov, D. Hietel, J. Kuhnert, F. Olawsky, R. Wegener . . . . . . 249

Goal Oriented Error Estimation for the Element Free Galerkin Method
Y. Vidal, A. Huerta . . . . . . 265

Bubble and Hermite Natural Element Approximations
J. Yvonnet, P. Villon, F. Chinesta . . . . . . 283

Appendix. Color Plates . . . . . . 299
Local Maximum-Entropy Approximation Schemes

Marino Arroyo¹ and Michael Ortiz²

¹ Universitat Politècnica de Catalunya, Barcelona, Spain, [email protected]
² California Institute of Technology, Pasadena, USA, [email protected]

Summary. We present a new approach to construct approximation schemes from scattered data on a node set, i.e. in the spirit of meshfree methods. The rationale behind these methods is to harmonize the locality of the shape functions and the information-theoretical optimality (entropy maximization) of the scheme, in a sense made precise in the paper. As a result, a one-parameter family of methods is defined which smoothly and seamlessly bridges meshfree-style approximants and Delaunay approximants. Besides an appealing theoretical foundation, the method presents a number of practical advantages when it comes to solving partial differential equations. The non-negativity of the shape functions introduces the well-known monotonicity and variation-diminishing properties of the approximation scheme. Also, these methods satisfy ab initio a weak version of the Kronecker-delta property, which makes the imposition of essential boundary conditions straightforward. The calculation of the shape functions is both efficient and robust in any spatial dimension. The implementation of a Galerkin method based on local maximum-entropy approximants is illustrated by examples.
Key words: Maximum entropy, information theory, Delaunay triangulation, meshfree methods.
1 Introduction

Over the last decade, node-based approximation schemes have experienced a tremendous impetus in the field of numerical methods for PDEs, with the proliferation of meshfree methods (e.g. [14], [4], [13]; see also [11] for a review). The absence of a mesh stands out as a promise of greater flexibility and robustness. The actual realization of these potential advantages is yet to be fully accomplished. Possible reasons include difficulties in the numerical quadrature of the weak form, a topic of intense research (e.g. [7, 6]), and the limited availability of robust and efficient node-based approximants. The first meshfree approximants were based on the Shepard approximants, while
presently most approaches hinge upon Moving Least Squares (MLS) approximants. We present here a different family of approximation schemes that we call maximum-entropy approximation schemes. Here, we present the first-order method [1]. See [2] for a presentation of higher-order methods. In max-ent approximation methods, much in the tradition of computational geometric modelling, the shape functions are required to be non-negative. Consequently, owing to the 0th order consistency condition, the approximants can be viewed as discrete probability distributions. Furthermore, the 1st consistency condition renders this class of approximants generalized barycentric coordinates. The local max-ent approximation schemes are optimal compromises between two competing objectives: (1) maximum locality of the shape functions (maximum correlation between the approximation and the nodal value at the closest points) and (2) maximum entropy of the scheme. The second objective is a statement of information-theoretical optimality in the sense that it provides the least biased approximation scheme consistent with the reproducing conditions. We prove rigorously that these approximants smoothly and seamlessly bridge Delaunay shape functions and meshfree-style approximants, and that the approximants are smooth, exist and are uniquely defined within the convex hull of the node set. Furthermore, they satisfy ab initio a weak version of the Kronecker-delta property, which ensures that the approximation at each face depends only on the nodes on this particular face. This makes the imposition of essential boundary conditions in the numerical approximation of partial differential equations straightforward. The formulation of the approximants is presented in Section 2. This section also includes the practical calculation of the shape functions. Section 3 provides a summary of the properties of these approximants. 
Section 4 gives a number of insightful alternative interpretations of these approximants, and includes the notion of relative max-ent approximants. The application of the local max-ent approximants in a 3D nonlinear elasticity example is provided in Section 5, and the conclusions are collected in Section 6.
2 Formulation

Let u : Ω ⊂ R^d → R be a function whose values {u_a ; a = 1, . . . , N} are known on a node set X = {x_a , a = 1, . . . , N} ⊂ R^d. Without loss of generality, we assume that the affine hull of the node set is R^d. We wish to construct approximations to u of the form

    u^h(x) = Σ_{a=1}^N p_a(x) u_a,    (2.1)

where the functions p_a : Ω → R will be referred to as shape functions. A particular choice of shape functions defines an approximation scheme. We shall
require the shape functions to satisfy the zeroth and first-order consistency conditions:

    Σ_{a=1}^N p_a(x) = 1,    ∀x ∈ Ω,    (2.2a)
    Σ_{a=1}^N p_a(x) x_a = x,    ∀x ∈ Ω.    (2.2b)
These conditions guarantee that affine functions are exactly reproduced by the approximation scheme. In general, the shape functions are not uniquely determined by the consistency conditions when N > d + 1.

2.1 Convex Approximants

In addition, we shall require the shape functions to be non-negative, i.e.,

    p_a(x) ≥ 0,    ∀x ∈ Ω, a = 1, . . . , N.    (2.3)

The positivity of the shape functions, together with the partition of unity property and the 1st order consistency condition, allows us to interpret the shape functions as generalized barycentric coordinates. This viewpoint is common in geometric modelling, e.g., in Bézier and B-Spline techniques [16], natural neighbor approximations [19], and subdivision approximations [8]. Positive linearly consistent approximants have long been studied in the literature [10]. These methods often present a number of attractive features, such as the related properties of monotonicity, the variation diminishing property (the approximation is not more "wiggly" than the data), or smoothness preservation [9], of particular interest in the presence of shocks. Furthermore, they lead to well-behaved mass matrices. The positivity restriction is natural in problems where a maximum principle is in force, such as the heat conduction problem. In the present context, the non-negativity requirement is introduced primarily to enable the interpretation of the shape functions as probability distributions (or coefficients of convex combinations). It follows from (2.2a), (2.2b) and (2.3) that the shape functions at x ∈ convX define a convex combination of vertices which evaluates to x. In view of this property we shall refer to non-negative and first-order consistent approximation schemes as convex approximation schemes. Our approach to building approximation schemes is to choose selected elements amongst all convex approximation schemes at a point x, which we denote by

    P_x(X) = { p(x) ∈ R^N_+ | Σ_{a=1}^N p_a(x) x_a = x, Σ_{a=1}^N p_a(x) = 1 },    (2.4)

where p(x) denotes the vector of R^N whose components are {p_1(x), . . . , p_N(x)} and R^N_+ is the non-negative orthant.
By direct comparison between the above defined set of convex approximants and the convex hull of the node set

    convX = { x ∈ R^d | x = Σ_{a=1}^N λ_a x_a, λ_a ≥ 0, Σ_{a=1}^N λ_a = 1 },    (2.5)

it follows that

Theorem 1. The set of convex approximants P_x(X) is non-empty if and only if x ∈ convX.

Note carefully that this result does not preclude using convex approximants in non-convex domains, as long as the non-convex domain is a subset of the convex hull of the node set. This restriction does not seem limiting in any reasonable sense. In the following, for simplicity, we shall assume that Ω = convX. We next provide the criteria to select an approximation scheme amongst the convex schemes. One possible rational criterion, based on information-theoretical considerations, is to pick the least biased convex approximation scheme, i.e. that which maximizes the entropy. Another natural criterion is to select the most local convex scheme, since it most accurately respects the principle that the approximation scheme at a given point should be most influenced by the closest nodes in the node set. As we shall see, these are competing objectives.

2.2 Entropy Maximization

The entropy of a convex approximation scheme is a natural concept, since the positivity and partition of unity properties of convex approximants allow us to interpret these approximants at each point x as discrete probability distributions. We recall that the entropy

    H(p) = −Σ_{a=1}^N p_a log p_a

is a canonical measure of the uncertainty associated with the probabilities p_a, and measures the lack of information about the system, here the set of shape functions. Equivalently, the entropy measures the information gained when the random variable is realized. Invoking Jaynes' principle of maximum entropy results in the least biased possible choice of convex scheme, devoid of artifacts or hidden assumptions. In this view, the approximation of a function from scattered data becomes a problem of statistical inference, and is mathematically formulated through the convex program:
(ME)    Maximize H(p) = −Σ_{a=1}^N p_a log p_a
        subject to p_a ≥ 0, a = 1, . . . , N,
                   Σ_{a=1}^N p_a = 1,
                   Σ_{a=1}^N p_a x_a = x.
The solutions of (ME), denoted by p^0(x), are referred to as max-ent approximants. Owing to the fact that the entropy is strictly concave in the set of convex approximants, we have the following result:

Theorem 2. The program (ME) has a solution iff x ∈ convX, in which case the solution is unique.

The max-ent approximants, though optimal from an information-theoretic point of view, disregard completely the desirable spatial correlation between the approximation scheme at a given point and the nearby nodal values. Consequently, the shape functions are global. Indeed, the max-ent principle tries to find the most uniform distribution consistent with the constraints, here the 1st order consistency condition. Entropy maximization has been independently proposed in [18] as a means to construct C^0 approximants for arbitrary polygonal tessellations. However, the use of strict entropy maximization to define smooth meshfree-style approximants results in global shape functions and poor approximation properties, as illustrated in Figure 1.
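The tendency of the entropy toward uniformity is easy to see numerically: with no reproducing constraints at all, H(p) is maximized by the uniform weights p_a = 1/N, with maximum value log N. This is precisely why pure entropy maximization yields global shape functions. A quick illustration (a NumPy sketch for this text, not the authors' code):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_a p_a log p_a, with 0 log 0 := 0."""
    p = np.asarray(p, float)
    p = p[p > 0]                      # drop zero weights (0 log 0 = 0)
    return -np.sum(p * np.log(p))
```

The uniform distribution over N outcomes attains H = log N, and any concentration of weight lowers the entropy; a fully concentrated (interpolating) weight vector has zero entropy.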
Figure 1. Examples of max-ent approximation schemes in the plane. (a) Shape function for the vertex of a pentagon; (b) shape function for an interior node, illustrating the global character of max-ent approximation schemes; and (c) max-ent approximation, or inference, of a function from scattered data, illustrating the non-interpolating character of max-ent approximation schemes.
2.3 Locality Maximization: Delaunay Approximants

A different criterion to select a distinguished approximant in the set of convex approximation schemes is to maximize the locality, or minimize the width, of the shape functions. Define the width of shape function p_a as

    w[p_a] = ∫_Ω p_a(x) |x − x_a|² dx,    (2.6)

i.e. the second moment of p_a about x_a. Evidently, other measures of the width of a function can be used instead in order to define alternative approximation schemes [1]. The locality measure presented here is the most natural choice, and emanates from optimal mass transfer theory and the 2-Wasserstein distance [2]. The most local approximation scheme is now that which minimizes the total width

    W[p] = Σ_{a=1}^N w[p_a] = ∫_Ω Σ_{a=1}^N p_a(x) |x − x_a|² dx,    (2.7)

subject to the constraints (2.2a), (2.2b) and (2.3). Since the functional (2.7) does not involve shape function derivatives, its minimization can be performed pointwise. This results in the linear program:

(RAJ)    For fixed x, minimize U(x, p) = Σ_{a=1}^N p_a |x − x_a|²
         subject to p_a ≥ 0, a = 1, . . . , N,
                    Σ_{a=1}^N p_a = 1,
                    Σ_{a=1}^N p_a x_a = x.
A simple argument shows that the program (RAJ) has solutions if and only if x ∈ convX. However, the function U(x, ·) is not strictly convex (it is linear) and the solution is not unique in general. The relationship between the linear program (RAJ) and the Delaunay triangulation has been established. Rajan [17] showed that if the nodes are in general position (no (d + 2) nodes in X are cospherical), then (RAJ) has a unique solution, corresponding to the piecewise affine shape functions supported by the unique Delaunay triangulation associated with the node set X (a Delaunay triangulation verifies that the circumsphere of every simplex contains no point from X in its interior). We shall refer to the convex approximation schemes defined by the solutions p^∞(x) of (RAJ) as Rajan convex approximation schemes, and to the approximants corresponding to the piecewise affine shape functions supported by a Delaunay triangulation as Delaunay convex approximants. Thus, Rajan's result states that for nodes in general position, the Delaunay convex approximation scheme coincides with the unique Rajan convex approximation scheme, which is optimal in the sense of the width (2.6). When the nodes are not in general position, the situation is more subtle: the set of Delaunay approximants is obviously non-unique, and neither is the set of Rajan approximants. In addition, the latter contains elements that do not belong to the former. See [2] for a discussion.

2.4 Pareto Optimality between Entropy and Locality Maximization

From the preceding sections it is clear that entropy and locality maximization are competing objectives. A standard device to harmonize competing objectives in multi-criterion optimization is to seek Pareto optimal points [5]. In the present context, the Pareto set is the set of convex approximants such that no other convex approximant is better, or dominates. An approximant p is better than another approximant q if p meets or beats q on all objectives and beats it in at least one, i.e. H(p) ≥ H(q), U(x, p) ≤ U(x, q), and either H(p) > H(q) or U(x, p) < U(x, q). In convex multicriterion optimization, the Pareto set can be obtained through scalarization, i.e. considering the solutions of the one-parameter family of convex programs (LME)_β
         For fixed x, minimize f_β(x, p) = βU(x, p) − H(p)
         subject to p_a ≥ 0, a = 1, . . . , N,
                    Σ_{a=1}^N p_a = 1,
                    Σ_{a=1}^N p_a x_a = x.
The max-ent and the Rajan programs and approximants are recovered in the limits β = 0 and β = +∞. By the strict convexity of f_β(x, p) for β ∈ [0, +∞), the program (LME)_β has a solution, p^β(x), which is unique, if and only if x ∈ convX. We shall refer to this family of approximants as local max-ent approximation schemes. It is easily seen [5, 1] that the Pareto set is the set of approximants p^β(x) for β ∈ [0, +∞), plus an additional approximation scheme denoted by p^Pareto_∞(x). For non-degenerate cases, this additional scheme corresponds to the unique Delaunay triangulation of X. However, for degenerate cases, the selected approximant is the Rajan approximant with maximum entropy,

    p^Pareto_∞(x) = arg max_{p ∈ S_x^RAJ(X)} H(p),
which is unique by the strict concavity of the entropy. In the equation above, S_x^RAJ(X) denotes the set of solutions of (RAJ).

2.5 Calculation of the Shape Functions

The practical calculation of the local max-ent shape functions p^β(x) is described next. It relies on standard duality methods, and the proofs can be found in [1].

Proposition 1. Let β ∈ [0, ∞) and let x ∈ int(convX) be an interior point. Define the partition function Z : R^d × R^d → R associated with the node set X as

    Z(x, λ) = Σ_{a=1}^N exp( −β|x − x_a|² + λ · (x − x_a) ).    (2.8)

Then, the unique solution of the local max-ent program (LME)_β is given by

    p^β_a(x) = exp( −β|x − x_a|² + λ*(x) · (x − x_a) ) / Z(x, λ*(x)),    a = 1, . . . , N,    (2.9)

where

    λ*(x) = arg min_{λ ∈ R^d} log Z(x, λ).    (2.10)

Furthermore, the minimizer λ*(x) is unique.

Proposition 2. Let β ∈ [0, ∞) and let x ∈ bd(convX) be a boundary point. Let C(x) be the contact set of the point x with respect to convX, i.e. the smallest face of convX that contains x. Then, the local max-ent shape functions at x follow from the previous Proposition with the reduced node set X′ = X ∩ C(x) − x, and are formulated in the subspace of R^d given by the affine hull of the reduced node set, L = aff X′.

The role of the thermalization parameter β is clear from Eq. (2.9): it controls the decay, or locality, of the shape functions. This is expected, since a larger value gives more weight to the locality measure in f_β(x, p). It is also clear that the shape functions, strictly speaking, have global support. From a numerical perspective, however, the Gaussian decay leads for all practical purposes to compactly supported functions. This is supported by a detailed numerical study in [1]. We note that in the absence of the 1st order consistency condition, the Lagrange multiplier λ*(x) is absent, and the local max-ent approximants reduce to the Shepard approximants with Gaussian weight function. In this sense, the Lagrange multipliers introduce a correction so that the approximants satisfy the 1st order consistency condition. We will return to this discussion later. The above Propositions provide a practical means of calculating the local max-ent shape functions at any given point x. More schematically, for fixed x, the shape functions are computed as follows:
1. Find the proximity index set (relative to a tolerance parameter Tol)

       I_Tol = { a : |x − x_a| ≤ sqrt(−log(Tol)/β) }.

2. Solve the minimization problem (2.10) in R^d using the Newton-Raphson method. In all evaluations of the partition function, the residual and the Jacobian matrix, the sum is performed only over I_Tol.

3. Compute the shape functions p^β_a(x), a ∈ I_Tol, according to Eq. (2.9), and set all other shape functions to zero.

The above algorithm is efficient and robust. The smooth minimization problem being solved is guaranteed to have a unique solution by the Kuhn-Tucker theorem and the strict convexity of the function being minimized. The number of unknowns is the space dimension. The expressions for the gradient and the Hessian matrix of the function being minimized are

    r(x, λ) = ∂_λ log Z(x, λ) = Σ_{a=1}^N p_a(x, λ)(x − x_a),    (2.11)

and

    J(x, λ) = ∂_λ ∂_λ log Z(x, λ) = Σ_{a=1}^N p_a(x, λ)(x − x_a) ⊗ (x − x_a) − r(x, λ) ⊗ r(x, λ).    (2.12)
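The three steps above, together with the gradient (2.11) and Hessian (2.12), can be sketched in a few lines of Python. This is an illustrative NumPy sketch under simplifying assumptions (interior evaluation point, β > 0, constant β, no Proposition 2 boundary handling), not the authors' implementation:

```python
import numpy as np

def local_maxent(x, nodes, beta, tol=1e-12, Tol=1e-6, maxit=50):
    """Local max-ent shape functions p_a^beta(x), Eq. (2.9).

    Newton-Raphson solve of the dual problem (2.10), using the
    gradient (2.11) and Hessian (2.12). `nodes` has shape (N, d);
    x must lie in the interior of the convex hull of the nodes.
    """
    x = np.asarray(x, float)
    nodes = np.asarray(nodes, float)
    N = len(nodes)
    # Step 1: proximity index set I_Tol (all other p_a are set to zero)
    dist2 = np.sum((x - nodes) ** 2, axis=1)
    idx = np.flatnonzero(dist2 <= -np.log(Tol) / beta)
    dx = x - nodes[idx]
    # Step 2: Newton-Raphson iteration on the multiplier lambda
    lam = np.zeros_like(x)
    for _ in range(maxit):
        f = -beta * np.sum(dx ** 2, axis=1) + dx @ lam
        p = np.exp(f - f.max())          # shift guards against overflow
        p /= p.sum()                     # Eq. (2.9) for the current lambda
        r = p @ dx                       # gradient (2.11)
        if np.linalg.norm(r) < tol:
            break
        J = (p[:, None] * dx).T @ dx - np.outer(r, r)   # Hessian (2.12)
        lam -= np.linalg.solve(J, r)     # Newton step
    # Step 3: scatter the pruned weights back to the full node set
    shape = np.zeros(N)
    shape[idx] = p
    return shape
```

The returned weights are non-negative, sum to one, and reproduce x exactly, i.e. they satisfy (2.2a), (2.2b) and (2.3); the pruning in step 1 requires β > 0.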
The Newton-Raphson iterations typically converge to reasonable tolerances within 2 or 3 iterations. Note that only the nodes xa , a ∈ ITol contribute noticeably to the partition function. Restricting all index summations to ITol greatly reduces the computational cost in the above calculation. For practical applications, Tol = 10−6 is amply sufficient. Changing this tolerance to machine precision does not change the numerical solutions in Galerkin methods based on local max-ent approximants. 2.6 Examples Figure 2 shows the local max-ent shape function and its partial derivatives for a node in a two-dimensional node set as a function of the dimensionless parameter γ = βh2 , where h is a measure of the nodal spacing and β is constant over the domain. It can be seen from this figure that the shape functions are smooth and their degree of locality is controlled by the parameter γ. For the maximum value of γ = 6.8 shown in the figure the shape function ostensibly coincides with the Delaunay shape function. The parameter β can be allowed to depend on position and that dependence can be adjusted adaptively in order to achieve varying degrees of locality [1].
Figure 2. Local max-ent shape functions for a two-dimensional arrangement of nodes, and spatial derivatives (arbitrary scale), for several values of γ = βh²: γ = 0.8, 1.8, 2.8, and 6.8.
3 Properties

This section summarizes the basic properties of the above defined family of approximants. Proofs can be found in [1].

3.1 Behavior of Convex Approximants at the Boundary

Property 1. Let p be a convex scheme with node set X = {x_1, ..., x_N}. Let F be a face of the convex polytope convX. Then x_a ∉ F ⇒ p_a = 0 on F.

This property is a weak version of the Kronecker-delta property of interpolating approximants, i.e. that p_a(x_b) = δ_ab. It states that for all convex approximants, and in particular for the local max-ent approximants, the approximation at a given face of the boundary depends only on the shape functions of the nodes on that particular face. Thus, a face is autonomous with regard to approximation, which makes the imposition of Dirichlet boundary conditions in Galerkin methods trivial. If the Dirichlet boundary conditions are affine on a face, then they are exactly imposed by constraining the nodal degrees of freedom to the affine function. If the boundary data is more general, then the imposition of the boundary condition is approximate, though convergent. Also, the shape functions of interior nodes vanish at the boundary, and the extreme points of X verify the strong Kronecker-delta property, in sharp contrast with MLS-based approximation schemes (see Figure 3 for an illustration).

Figure 3. Illustration of the behavior of the local max-ent shape functions at the boundary of the domain.

We note that, if the domain is a non-convex subset of convX, this property does not hold in non-convex parts of the boundary, and the method behaves similarly to MLS approximants. There is an intrinsic difficulty in dealing with non-convex domains in approximation methods that rely in their definition on a notion of distance, such as MLS approximants or the method presented here. The effective treatment of non-convex domains has been extensively studied in the context of MLS-based meshfree methods [15, 3, 12], and these methods are directly applicable in the present context.

3.2 Spatial Smoothness of the Shape Functions

We note that the local max-ent shape functions are defined point-wise, so their spatial smoothness must be established. Also, the thermalization parameter can take different values in space; in general we consider it given by a function β(x).

Property 2. Let β : convX → [0, ∞) be C^r in int(convX). Then the local max-ent shape functions are of class C^r in int(convX). In particular, when β is constant, the shape functions are C^∞ and the derivative is given by

    ∇p^β_a(x) = −p^β_a(x) J(x, λ*(x))^{-1} (x − x_a).    (3.13)
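Formula (3.13) lends itself to a direct numerical check. The sketch below (NumPy assumed, not the authors' code) recomputes the shape functions via the Newton solve of (2.10) and evaluates (3.13); differentiating the consistency conditions (2.2a) and (2.2b) gives the identities Σ_a ∇p_a = 0 and Σ_a x_a ⊗ ∇p_a = I, which the computed gradients must satisfy:

```python
import numpy as np

def maxent_with_grad(x, nodes, beta, tol=1e-12, maxit=50):
    """Local max-ent shape functions and their gradients, Eq. (3.13).

    Assumes constant beta and an interior evaluation point; the
    pruning step (I_Tol) is omitted for clarity.
    """
    x = np.asarray(x, float)
    nodes = np.asarray(nodes, float)
    dx = x - nodes
    lam = np.zeros_like(x)
    for _ in range(maxit):                 # Newton solve of (2.10)
        f = -beta * np.sum(dx ** 2, axis=1) + dx @ lam
        p = np.exp(f - f.max())
        p /= p.sum()
        r = p @ dx                         # gradient (2.11)
        J = (p[:, None] * dx).T @ dx - np.outer(r, r)   # Hessian (2.12)
        if np.linalg.norm(r) < tol:
            break
        lam -= np.linalg.solve(J, r)
    # Eq. (3.13): grad p_a = -p_a J^{-1} (x - x_a), rows of `grads`
    grads = -p[:, None] * np.linalg.solve(J, dx.T).T
    return p, grads
```

A one-dimensional sanity check: with two nodes {0, 1}, the unique convex scheme is p_2 = x, and (3.13) indeed gives dp_2/dx = −x (x − 1)/(x(1 − x)) = 1.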
3.3 Smoothness and Limits with Respect to the Thermalization

Property 3. Let x ∈ convX. Then p^β(x) is a C^∞ function of β in (0, +∞). Furthermore, the limits at 0 and +∞ exist and are the max-ent approximants and the Pareto optimal Rajan approximants, respectively:

    lim_{β→0} p^β(x) = p^0(x)    and    lim_{β→+∞} p^β(x) = p^Pareto_∞(x).
This result is not surprising, the less obvious part being the limit at +∞ when the node set is degenerate. In this case, the max-ent regularization of the Rajan program (RAJ) selects a distinguished element amongst the set of solutions, which does not coincide with any of the multiple Delaunay approximants but rather constitutes a generalized Delaunay approximation scheme for degenerate cases [1].
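The β → +∞ end of the family can be probed directly. Since every basic feasible point of the linear program (RAJ) has at most d + 1 nonzero weights, a brute-force scan over simplices solves the program exactly for small 2-D configurations, recovering the barycentric coordinates on the Delaunay triangle containing x. This is an illustration only (the node set in the test is hypothetical and chosen non-cocircular, so Rajan's uniqueness result applies), not an efficient solver:

```python
import numpy as np
from itertools import combinations

def rajan_lp(x, nodes):
    """Solve the linear program (RAJ) by enumerating basic feasible points.

    Any vertex of the feasible set of (RAJ) has at most d + 1 nonzero
    weights, so for a small 2-D node set we can scan all triples: each
    triple whose simplex contains x yields feasible barycentric weights,
    and the one of least width U(x, p) = sum_a p_a |x - x_a|^2 wins.
    Returns None if x lies outside convX.
    """
    x = np.asarray(x, float)
    nodes = np.asarray(nodes, float)
    N, d = nodes.shape
    c = np.sum((nodes - x) ** 2, axis=1)      # costs |x - x_a|^2
    best, best_cost = None, np.inf
    for idx in combinations(range(N), d + 1):
        V = nodes[list(idx)]
        A = np.vstack([np.ones(d + 1), V.T])  # constraints (2.2a), (2.2b)
        try:
            lam = np.linalg.solve(A, np.concatenate([[1.0], x]))
        except np.linalg.LinAlgError:
            continue                          # degenerate (flat) simplex
        if np.all(lam >= -1e-12):             # x inside this simplex
            cost = lam @ c[list(idx)]
            if cost < best_cost:
                best_cost = cost
                best = np.zeros(N)
                best[list(idx)] = lam
    return best
```

In the test configuration the fourth node lies outside the circumcircle of the first three, so the triangle spanned by the first three nodes is Delaunay, and (RAJ) returns its piecewise affine hat-function values, consistent with Rajan's theorem.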
4 Other Interpretations: Relative Entropy

As noted in [1], the local max-ent approximants admit a number of interpretations, which include viewing the new approximants as a regularization of the Delaunay shape functions. A statistical mechanics interpretation is also quite appealing. The regularizing effect of the entropy can also be highlighted by studying the dual problem. In this case, the dual of the Rajan program reduces to the minimization of a polyhedral (non-smooth) convex function, while the dual of the local max-ent program is a smooth, strictly convex minimization problem. This regularization can be understood through the approximation of the max function by a family of functions based on the log-sum-exp function [1, 5]. Figure 4 illustrates this view, and shows how the regularization provides an optimal path that selects a distinguished solution in cases of non-uniqueness, in the same spirit as viscosity solutions of variational problems.
Figure 4. Illustration of the regularization provided by the entropy in the dual problem.
We next develop yet another reformulation of the local max-ent approximants, which allows for generalizations of max-ent approximation schemes and relies on the concept of relative entropy. The formulation of the relative max-ent approximants requires the definition of the so-called Kullback-Leibler distance between two discrete probability distributions p and q,

    D_{p|q} = Σ_a p_a log(p_a / q_a).
Its negative is also referred to as mutual entropy, cross entropy, or relative entropy. Although D_{p|p} = 0, it is clear that this function is not symmetric in its arguments, and hence it is not strictly a distance. This quantity is used in information theory to measure the amount of information needed to change the description of the system from q to p. Often, the probability distribution q is viewed as prior information about the system. Then, some new information may become available, not accounted for by q. A natural question in statistical inference is then to determine a new distribution p consistent with the new information, but which is in some sense as close as possible to the prior information. The maximization of the relative entropy between p and q, subject to the new information made available, provides a means to find such a distribution. In the present context, suppose that we have a non-negative approximation scheme associated with the node set X which satisfies the partition of unity property (required for the statistical interpretation) but does not satisfy the 1st order consistency condition. A simple example that comes to mind is a Shepard approximant with weight function w(·) ≥ 0,

    q_a(x) = w(‖x_a − x‖) / Σ_b w(‖x_b − x‖).

If we regard this approximant as a prior, and wish to construct the closest approximation scheme in the sense of information theory that satisfies the 1st order consistency condition, i.e. the closest convex scheme, we face the convex program:

(RME)_q    For fixed x, maximize −Σ_{a=1}^N p_a log(p_a / q_a)
           subject to p_a ≥ 0, a = 1, . . . , N,
                      Σ_{a=1}^N p_a = 1,
                      Σ_{a=1}^N p_a x_a = x.
The above function could as well be combined with a locality measure, but this is not needed here. It is straightforward to verify that, defining the partition function as

Z(x, λ) = \sum_{b=1}^N q_b(x) \exp[λ · (x - x_b)],

the relative max-ent shape functions follow from

p_a(x) = q_a(x) \frac{\exp[λ^*(x) · (x - x_a)]}{Z(x, λ^*(x))},   (4.14)

where

λ^*(x) = \arg\min_{λ ∈ R^d} \log Z(x, λ).
This procedure transforms any given nonnegative approximant, which in general does not satisfy the 1st order consistency conditions and possesses no desirable properties at the boundary, into a convex approximant that is 1st order consistent and satisfies the weak Kronecker-delta property at the boundary. From Eq. (4.14) it is apparent that the relative max-ent shape functions inherit the smoothness and support of the prior approximants. Thus, defining relative max-ent approximants with compact support is straightforward: it is sufficient to start, for instance, from Shepard approximants based on a weight function with compact support. One should carefully note that the existence and uniqueness properties of the dual problem for the Lagrange multiplier are not guaranteed for all priors; in particular, there should be enough overlap between the supports of the prior shape functions. The relationship with the local max-ent approximants is elucidated by considering Shepard approximants with weight w(d) = exp(-βd²) as prior. In this case, the relative and the local max-ent approximants coincide.
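Since the dual problem for λ*(x) is smooth and convex, it can be solved by a few Newton iterations on log Z. The following is a minimal numerical sketch of Eq. (4.14), not the authors' implementation; the function name and the choice of Newton's method are illustrative:

```python
import numpy as np

def relative_maxent(x, nodes, prior_weight, tol=1e-12, max_iter=50):
    """Relative max-ent shape functions p_a(x) of Eq. (4.14).

    nodes: (N, d) array of node positions x_a.
    prior_weight: callable w(r) >= 0 defining the Shepard prior
        q_a(x) = w(|x_a - x|) / sum_b w(|x_b - x|).
    """
    x = np.atleast_1d(np.asarray(x, float))
    nodes = np.asarray(nodes, float)
    r = np.linalg.norm(nodes - x, axis=1)
    w = prior_weight(r)
    q = w / w.sum()                    # Shepard prior q_a(x)
    diff = x - nodes                   # rows are x - x_a
    lam = np.zeros(nodes.shape[1])
    for _ in range(max_iter):
        e = q * np.exp(diff @ lam)
        p = e / e.sum()                # candidate shape functions
        g = diff.T @ p                 # gradient of log Z
        if np.linalg.norm(g) < tol:
            break
        # Hessian of log Z: covariance of (x - x_a) under p, SPD if the
        # prior supports overlap sufficiently
        H = (diff * p[:, None]).T @ diff - np.outer(g, g)
        lam -= np.linalg.solve(H, g)
    return p
```

With the Gaussian prior w(d) = exp(-βd²) this construction reproduces the local max-ent approximants, as noted above.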
5 Applications in Galerkin Methods

We briefly present a 3D example of nonlinear elasticity which illustrates the benefits of using smooth shape functions, such as the local max-ent shape functions, in problems with smooth solutions. A Galerkin (or Rayleigh-Ritz in this setting) method is implemented. More details, as well as other benchmark tests in linear elasticity, can be found in [1]. Although the calculation of the local max-ent shape functions is computationally more expensive than evaluating the Delaunay shape functions, this example shows how overall, for a given accuracy, remarkable computational savings can be achieved by using the meshfree method. We consider a hyperelastic compressible Neo-Hookean block subject to a 100% tensile deformation. The calculation is performed with seven uniform node sets of variable resolution. For each node set, two numerical solutions are obtained, one with the local max-ent approximants, and another one with the Delaunay approximants (linear simplicial finite elements). Figure 5 shows the dependence of a normalized signed relative error in strain energy (relative to an overkill numerical solution) on the nodal spacing. It is observed from that figure that the accuracy of the local max-ent solution is vastly superior to that of the finite element solution. The finest finite element solution has a comparable, albeit slightly larger, error than the second-coarsest local max-ent solution. By contrast, the CPU time incurred by the local max-ent solution is over a hundred times shorter than that of the finite element solution. This
Figure 5. Final deformation for the finest FE mesh and second-coarsest local max-ent discretization (left) and signed relative error in strain energy with respect to a reference numerical solution for ν0 = 0.333 (right).
difference in performance is more pronounced in the nearly incompressible case.
6 Conclusions

We have presented a new approach for building approximation schemes from scattered data using the concept of maximum entropy. This concept not only provides a natural link between approximation and information theory, but also offers a practical computational means to construct a family of approximants that seamlessly and smoothly bridges meshfree-style shape functions and Delaunay shape functions. The theory, computation and properties of these approximants have been presented. Numerical examples emphasize the notable computational savings afforded by smooth approximation schemes in problems with smooth solutions. The method presented here allows for extensions that have not been pursued: these include approximation in high-dimensional problems, the ability to perform variational adaptivity in a very flexible way, not only of the node positions but also of the thermalization parameter, and the development of higher-order max-ent schemes. The latter provide one of the few unstructured non-negative high-order approximants.
References

1. M. Arroyo and M. Ortiz, Local maximum-entropy approximation schemes: a seamless bridge between finite elements and meshfree methods, International Journal for Numerical Methods in Engineering, in press (2005).
2. M. Arroyo and M. Ortiz, Maximum-entropy approximation schemes: higher order methods, International Journal for Numerical Methods in Engineering, in preparation (2005).
3. T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl, Meshless methods: An overview and recent developments, Computer Methods in Applied Mechanics and Engineering 139 (1996), no. 1-4, 3–47.
4. T. Belytschko, Y.Y. Lu, and L. Gu, Element-free Galerkin methods, International Journal for Numerical Methods in Engineering 37 (1994), no. 2, 229–256.
5. S. Boyd and L. Vandenberghe, Convex optimization, Cambridge University Press, Cambridge, UK, 2004.
6. P. Breitkopf, A. Rassineux, J.M. Savignat, and P. Villon, Integration constraint in diffuse element method, Computer Methods in Applied Mechanics and Engineering 193 (2004), no. 12-14, 1203–1220.
7. J.S. Chen, C.T. Wu, S. Yoon, and Y. You, Stabilized conforming nodal integration for Galerkin meshfree methods, International Journal for Numerical Methods in Engineering 50 (2001), 435–466.
8. F. Cirak, M. Ortiz, and P. Schröder, Subdivision surfaces: a new paradigm for thin-shell finite-element analysis, International Journal for Numerical Methods in Engineering 47 (2000), no. 12, 2039–2072.
9. C. Cottin, I. Gavrea, H.H. Gonska, D.P. Kacsó, and D.X. Zhou, Global smoothness preservation and variation-diminishing property, Journal of Inequalities and Applications 4 (1999), no. 2, 91–114.
10. R.A. DeVore, The approximation of continuous functions by positive linear operators, Springer-Verlag, Berlin, 1972.
11. A. Huerta, T. Belytschko, S. Fernández-Méndez, and T. Rabczuk, Encyclopedia of computational mechanics, vol. 1, ch. Meshfree methods, pp. 279–309, Wiley, Chichester, 2004.
12. P. Krysl and T. Belytschko, Element-free Galerkin method: Convergence of the continuous and discontinuous shape functions, Computer Methods in Applied Mechanics and Engineering 148 (1997), no. 3-4, 257–277.
13. W.K. Liu, S. Li, and T. Belytschko, Moving least square reproducing kernel methods Part I: Methodology and convergence, Computer Methods in Applied Mechanics and Engineering 143 (1997), no. 1-2, 113–154.
14. B. Nayroles, G. Touzot, and P. Villon, Generalizing the finite element method: diffuse approximation and diffuse elements, Computational Mechanics 10 (1992), no. 5, 307–318.
15. D. Organ, M. Fleming, T. Terry, and T. Belytschko, Continuous meshless approximations for nonconvex bodies by diffraction and transparency, Computational Mechanics 18 (1996), no. 3, 225–235.
16. H. Prautzsch, W. Boehm, and M. Paluszny, Bézier and B-spline techniques, Springer-Verlag, Berlin, 2002.
17. V.T. Rajan, Optimality of the Delaunay triangulation in R^d, Discrete and Computational Geometry 12 (1994), no. 2, 189–202.
18. N. Sukumar, Construction of polygonal interpolants: A maximum entropy approach, International Journal for Numerical Methods in Engineering 61 (2004), no. 12, 2159–2181.
19. N. Sukumar, B. Moran, and T. Belytschko, The natural element method in solid mechanics, International Journal for Numerical Methods in Engineering 43 (1998), no. 5, 839–887.
Genetic Algorithms for Meshfree Numerical Integration

Suleiman BaniHani and Suvranu De

Rensselaer Polytechnic Institute, 110 8th St., Troy, NY, USA
{banihs,des}@rpi.edu
Summary. In this paper we present the application of the meshfree method of finite spheres to the solution of thin and thick plates composed of isotropic as well as functionally graded materials. For the solution of such problems it is observed that Gaussian and adaptive quadrature schemes are computationally inefficient. In this paper a new technique, presented in [26, 21], in which the integration points and weights are generated using genetic algorithms and stored in a lookup table using normalized coordinates as part of an offline preprocessing step, is shown to provide a significant reduction of computational time without sacrificing accuracy.
Key words: Meshfree methods, method of finite spheres (MFS), plates, functionally graded material (FGM), genetic algorithms (GA), Gaussian quadrature.
1 Introduction Due to several advantages over traditional finite element methods, meshfree computational schemes, reviewed in [1], have been proposed. A major advantage of these techniques is that approximation spaces with higher order continuity may be easily generated. Hence it is useful to apply them to the solution of higher order differential equations. In this paper we present an application of the method of finite spheres [2] to the solution of plate problems. Several examples involving thin and thick elastic and functionally graded material plate problems are discussed. In [4] the moving least squares interpolants were used for the solution of thin elastic plate problems with simply supported and clamped edges. A moving least squares differential quadrature scheme was used in [5] for analyzing thick plates with shear deformation. In [6] radial basis functions were used for solving Kirchhoff plate bending problem with the Hermite collocation approach being used to obtain a system of symmetric and non-singular linear equations.
The hp-clouds method was used in [3] for the analysis of Kirchhoff plates with enrichment being provided by Trefftz functions. Lagrange multipliers were used to apply the essential boundary conditions. The MLPG method was used in [7] for solving the problem of a thin plate in bending with the penalty formulation being used to apply the essential boundary conditions. In [8] the element free Galerkin method was used to solve Kirchhoff plate problems. In [22] spline functions were used together with a displacement based Galerkin method for the solution of Mindlin-Reissner plate problems. A uniform nodal arrangement was used with nodes lying outside the computational domain. In [23] the hp-clouds method was employed for the solution of thick plates and it was noticed that using hp-clouds approximations of sufficiently high polynomial degree could control shear locking. However, such approximations are computationally expensive to generate. In [24] a mesh-independent p-orthotropic enrichment in the generalized finite element method (GFEM) was presented for the solution of the Reissner-Mindlin plate model. In [25] an extended meshfree method was used for solving plate problems involving shear deformations, where the problem was reduced to a homogeneous one. The homogeneous equation was subsequently solved using the reproducing kernel particle method (RKPM). Functionally graded materials (FGMs) are composites containing volume fractions of two or more materials varying continuously as functions of position along the structure dimensions. In [29] a theoretical formulation, Navier's solutions of rectangular plates, and finite element models based on the third-order shear deformation plate theory were presented for the analysis of through-thickness functionally graded plates. Numerical results of the linear third-order theory and non-linear first-order theory were presented.
In the MLPG method the interpolation of choice is the moving least squares method, which is computationally expensive [9]. Furthermore, to embed derivative information in the interpolation, a "generalized moving least squares" scheme is used, analogous to Hermite interpolations, which results in computationally expensive shape functions. It is interesting to note that while many meshfree methods have been developed and some have been applied to higher order differential equations, the issue of numerical efficiency has not been given due attention. Most Galerkin-based meshfree methods are notoriously computationally inefficient since rational, nonpolynomial interpolation functions are used and the integration domains are much more complex than in finite element techniques. For the solution of higher order differential equations one faces the challenge of efficient numerical integration of the terms in the Galerkin weak form, where the integrands are highly peaked in the regions of overlap of the supports of the interpolation functions and many integration points are required unless proper care is taken.
With the goal of achieving computational efficiency in meshfree methods, the method of finite spheres was developed [9]. In this technique, the interpolation functions generated using the partition of unity paradigm [10, 11] are compactly supported on intersecting and overlapping spheres. In [26] we showed that using piece-wise midpoint quadrature for the higher order differential equations arising in the solution of thin beam and plate problems is inaccurate and will not converge if h-refinement is performed. Adaptive numerical integration methods [13, 14], on the other hand, provide much better results but are computationally expensive. In this paper we employ the genetic algorithm-based lookup table approach, originally proposed in [26, 21], for efficient numerical integration of plate problems. In this technique, the integration points are computed offline for a reference nodal configuration using genetic algorithms. The locations of these integration points as well as the weights are then nondimensionalized and stored in a lookup table. During computations, the integration points and weights are retrieved from this lookup table. This scheme is used for the solution of several plate problems to demonstrate its accuracy. In Section 2 we formulate the governing equations for the different plate models, followed by a brief review of the MFS discretization scheme. In Section 3 we discuss the genetic algorithm-based lookup table approach. In Section 4 we provide some numerical results demonstrating the effectiveness of the integration scheme.
2 Problem Formulation and Discretization

2.1 Governing Equations

Kirchhoff Plate

We consider a thin isotropic plate as shown in Figure 1, of thickness h, midsurface Ω ⊂ R², elastic modulus E and Poisson's ratio ν. The boundary of the plate is denoted by Γ. The plate is acted upon by a transverse loading q(x, y). The corresponding potential energy functional for this problem is [16]

π_{KP}(w) = U_{int}(w) - W_{ext}(w)   (2.1)

where U_{int}(w) is the internal energy given by

U_{int}(w) = \frac{1}{2} \int_Ω (Pw)^T D (Pw) \, dx dy   (2.2)

where P = [∂²/∂x² \; ∂²/∂y² \; 2∂²/∂x∂y]^T and

D = \frac{E h^3}{12(1-ν^2)} \begin{bmatrix} 1 & ν & 0 \\ ν & 1 & 0 \\ 0 & 0 & (1-ν)/2 \end{bmatrix}.

Γ = Γ_w ∪ Γ_θ ∪ Γ_M ∪ Γ_{Q_{eff}} are the portions of the boundary where the displacement (ŵ), rotation (θ̂), moment (M̂), and effective shear force (Q̂_{eff})
Figure 1. A flat plate in bending. (i,j) are unit vectors in the global x- and ydirections, respectively. (s, n) are unit vectors along the tangential and normal directions to the plate boundary, respectively.
are prescribed, respectively. The boundary condition sets (w and Q_{eff}) and (θ and M_n) are disjoint. The external work W_{ext}(w) has three components due to the applied lateral loads, applied moments and transverse shear, and corner loads:

W_{ext}(w) = W_{load}(w) + W_{bending}(w) + W_{corners}(w).   (2.3)

The first two components are

W_{load}(w) = \int_Ω q(x, y) \, w \, dx dy   (2.4)

W_{bending}(w) = \int_{Γ_{Q_{eff}}} \hat{Q}_{eff} \, w \, dΓ - \int_{Γ_M} \hat{M}_n \frac{∂w}{∂n} \, dΓ   (2.5)
Wcorners (w) exists when there are N corners where the displacements wj at the corners are not prescribed
W_{corners}(w) = \sum_{j=1}^N (\hat{M}_{ns}^+ - \hat{M}_{ns}^-) w_j   (2.6)

where \hat{M}_{ns}^+ and \hat{M}_{ns}^- are the twisting moments approaching the edge from the right and the left, respectively, and w_j is the displacement at the j-th corner. The essential boundary conditions are imposed by defining the augmented functional

π^*_{KP}(w) = π_{KP}(w) + \frac{γ_1}{2} \int_{Γ_w} (w - \hat{w})^2 \, dΓ + \frac{γ_2}{2} \int_{Γ_θ} \left( \frac{∂w}{∂n} - \hat{θ} \right)^2 dΓ   (2.7)
where γ_1 and γ_2 are penalty parameters.

Mindlin-Reissner Plate

We consider a thick isotropic plate as shown in Figure 1 of thickness h. The corresponding potential energy functional for this problem is [16]

π_{MP}(w, θ) = U_{int}(w, θ) - W_{ext}(w, θ)   (2.8)

where U_{int}(w, θ) is the internal energy given by

U_{int}(w, θ) = \frac{1}{2} \int_Ω (Lθ)^T D (Lθ) \, dx dy + \frac{1}{2} \int_Ω (∇w - θ)^T α (∇w - θ) \, dx dy   (2.9)

where

L = \begin{bmatrix} ∂/∂x & 0 \\ 0 & ∂/∂y \\ ∂/∂y & ∂/∂x \end{bmatrix}, \quad
θ = \begin{bmatrix} φ_x \\ φ_y \end{bmatrix}, \quad
D = \frac{E h^3}{12(1-ν^2)} \begin{bmatrix} 1 & ν & 0 \\ ν & 1 & 0 \\ 0 & 0 & (1-ν)/2 \end{bmatrix}, \quad
∇ = \begin{bmatrix} ∂/∂x \\ ∂/∂y \end{bmatrix}, \quad
α = \begin{bmatrix} kGh & 0 \\ 0 & kGh \end{bmatrix}.

D is the bending rigidity. Γ_w, Γ_{φ_x}, Γ_{φ_y}, Γ_{M_n}, Γ_{M_{ns}}, and Γ_{Q_n} are the boundaries where the displacement (ŵ), rotations (φ̂_x and φ̂_y), moments (M̂_n and M̂_{ns}), and shear force (Q̂_n) are prescribed, respectively. The boundary condition sets (w and Q_n), (φ_n and M_n), and (φ_s and M_{ns}) are disjoint. The external work W_{ext}(w, θ) has four components due to the applied lateral loads, applied moments and transverse shear:

W_{ext}(w, θ) = \int_Ω q(x, y) \, w \, dx dy + \int_{Γ_{M_n}} \hat{M}_n φ_n \, dΓ + \int_{Γ_{M_{ns}}} \hat{M}_{ns} φ_s \, dΓ + \int_{Γ_{Q_n}} \hat{Q}_n w \, dΓ.   (2.10)

The essential boundary conditions are imposed by defining the augmented functional
π^*_{MP}(w, θ) = π_{MP}(w, θ) + \frac{γ_1}{2} \int_{Γ_w} (w - \hat{w})^2 \, dΓ + \frac{γ_2}{2} \int_{Γ_{φ_x}} (φ_x - \hat{φ}_x)^2 \, dΓ + \frac{γ_3}{2} \int_{Γ_{φ_y}} (φ_y - \hat{φ}_y)^2 \, dΓ   (2.11)

where γ_1, γ_2, and γ_3 are penalty parameters.

Functionally Graded Material Plate Model

We assume that the material property gradation is through the thickness and that the volume fraction is given by the following power law:

V(z) = (V_t - V_b) \left( \frac{z}{h} + \frac{1}{2} \right)^n + V_b   (2.12)

where V is the material property, V_t and V_b are the values at the top and bottom faces of the plate, respectively, h is the thickness, and n is a constant that dictates the volume fraction profile through the thickness. The material properties are homogenized using the Mori-Tanaka scheme described in [30]. Here we assume that the modulus E, density ρ, and thermal coefficient of expansion α vary through the thickness, while ν is assumed constant. The constitutive relations are [29]

\begin{bmatrix} σ_{xx} \\ σ_{yy} \\ σ_{xy} \end{bmatrix} = \frac{E}{1-ν^2} \begin{bmatrix} 1 & ν & 0 \\ ν & 1 & 0 \\ 0 & 0 & \frac{1-ν}{2} \end{bmatrix} \left( \begin{bmatrix} ε_{xx} \\ ε_{yy} \\ γ_{xy} \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} α ΔT \right)   (2.13)

and

\begin{bmatrix} σ_{yz} \\ σ_{xz} \end{bmatrix} = \frac{E}{1-ν^2} \begin{bmatrix} \frac{1-ν}{2} & 0 \\ 0 & \frac{1-ν}{2} \end{bmatrix} \begin{bmatrix} γ_{yz} \\ γ_{xz} \end{bmatrix}   (2.14)

where ΔT is the temperature change, and

\begin{bmatrix} ε_{xx} \\ ε_{yy} \\ γ_{xy} \end{bmatrix} = z \begin{bmatrix} ∂φ_x/∂x \\ ∂φ_y/∂y \\ ∂φ_x/∂y + ∂φ_y/∂x \end{bmatrix}, \quad
\begin{bmatrix} γ_{yz} \\ γ_{xz} \end{bmatrix} = \begin{bmatrix} φ_y + ∂w/∂y \\ φ_x + ∂w/∂x \end{bmatrix}.
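The power-law gradation (2.12) is straightforward to evaluate for any material property; a minimal sketch (the helper name is illustrative, not from the paper):

```python
def volume_fraction(z, h, Vt, Vb, n):
    """Through-thickness power-law gradation V(z) of Eq. (2.12).

    z: through-thickness coordinate in [-h/2, h/2]
    Vt, Vb: property values at the top and bottom faces
    n: exponent controlling the gradation profile
    """
    return (Vt - Vb) * (z / h + 0.5) ** n + Vb
```

For instance, with Vt and Vb taken as the elastic moduli of the two constituents, V(h/2) = Vt and V(-h/2) = Vb, and n = 1 gives a linear variation.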
The potential energy functional corresponding to this problem can be written as

Π = U_{int} - W_{ext}   (2.15)

where U_{int} is the internal energy given by

U_{int} = U_b + U_s.   (2.16)

U_b and U_s are the bending and shear energy, respectively, given by

U_b = \frac{1}{2} \int_Ω [ε_{xx} \; ε_{yy} \; γ_{xy}] \begin{bmatrix} σ_{xx} \\ σ_{yy} \\ σ_{xy} \end{bmatrix} dΩ   (2.17)

and

U_s = \frac{k}{2} \int_Ω [γ_{xz} \; γ_{yz}] \begin{bmatrix} σ_{xz} \\ σ_{yz} \end{bmatrix} dΩ   (2.18)

and k is the shear correction factor. W_{ext} is the external work done by the applied force, given by

W_{ext} = \int_Ω q w \, dΩ   (2.19)

where q is the external load applied to the upper surface of the plate. The corresponding potential energy functional in equation (2.15) can be expressed in terms of the displacements as

Π = \frac{C_1}{2} \int_Ω \left[ \left( \frac{∂φ_x}{∂x} \right)^2 + \left( \frac{∂φ_y}{∂y} \right)^2 + 2ν \frac{∂φ_x}{∂x} \frac{∂φ_y}{∂y} + \frac{1-ν}{2} \left( \frac{∂φ_x}{∂y} + \frac{∂φ_y}{∂x} \right)^2 \right] dΩ
+ \frac{C_2 k}{2} \int_Ω \left[ \left( \frac{∂w}{∂x} + φ_x \right)^2 + \left( \frac{∂w}{∂y} + φ_y \right)^2 \right] dΩ
- \frac{1}{2} \int_Ω \left( \frac{∂φ_x}{∂x} M^t + \frac{∂φ_y}{∂y} M^t \right) dΩ - \int_Ω q w \, dΩ   (2.20)

where

C_1 = \int_{-h/2}^{h/2} \frac{E}{1-ν^2} z^2 \, dz \quad and \quad C_2 = \int_{-h/2}^{h/2} \frac{E}{2(1+ν)} \, dz.

M^t is the thermal moment resultant, given by

M^t = \frac{1}{1-ν} \int_{-h/2}^{h/2} α E T z \, dz.   (2.21)

The essential boundary conditions are imposed by defining the augmented functional

Π^* = Π + \frac{γ_1}{2} \int_{Γ_w} (w - \hat{w})^2 \, dΓ + \frac{γ_2}{2} \int_{Γ_{φ_x}} (φ_x - \hat{φ}_x)^2 \, dΓ + \frac{γ_3}{2} \int_{Γ_{φ_y}} (φ_y - \hat{φ}_y)^2 \, dΓ   (2.22)

where γ_1, γ_2, and γ_3 are penalty parameters. Γ_w, Γ_{φ_x}, and Γ_{φ_y} are the boundaries where ŵ, φ̂_x, and φ̂_y are prescribed, respectively.
Thermal Analysis

In this section we discuss the thermal analysis of functionally graded plates. It is assumed that the top and bottom surfaces of the plate are subjected to constant temperature, so the temperature field within the plate depends only on the z-coordinate. The temperature distribution can be obtained by solving the one-dimensional steady-state heat transfer equation

- \frac{d}{dz} \left( λ(z) \frac{dT}{dz} \right) = 0   (2.23)

with the boundary conditions

T = T_t \; at \; z = h/2, \quad T = T_b \; at \; z = -h/2   (2.24)

where λ is the thermal conductivity, which varies according to the power-law distribution

λ(z) = (λ_t - λ_b) \left( \frac{2z + h}{2h} \right)^n + λ_b.   (2.25)

The solution of equation (2.23) with the prescribed boundary conditions is

T(z) = T_t - \frac{T_t - T_b}{\int_{-h/2}^{h/2} \frac{dz}{λ(z)}} \int_z^{h/2} \frac{dη}{λ(η)}.   (2.26)
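The two thickness integrals in (2.26) are easy to evaluate with simple quadrature; a sketch using the power-law conductivity (2.25) and a trapezoidal rule (the helper names are illustrative, not the authors' code):

```python
import numpy as np

def temperature_profile(z, h, lam_t, lam_b, T_t, T_b, n, m=2001):
    """Evaluate T(z) of Eq. (2.26) for the conductivity law (2.25)."""
    lam = lambda s: (lam_t - lam_b) * ((2.0 * s + h) / (2.0 * h)) ** n + lam_b

    def integral(a, b):
        # trapezoidal rule for the integral of 1/lam over [a, b]
        s = np.linspace(a, b, m)
        g = 1.0 / lam(s)
        return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))

    return T_t - (T_t - T_b) / integral(-h / 2, h / 2) * integral(z, h / 2)
```

As a sanity check, for constant conductivity (λ_t = λ_b) the profile reduces to the expected linear variation between T_b and T_t.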
This equation is used for the evaluation of M^t in equation (2.21).

2.2 The Method of Finite Spheres Approximation Scheme

In the method of finite spheres the approximation functions are generated using the partition of unity paradigm [11] and are supported on spheres. In this section we briefly describe this method. Let Ω ⊂ R^d (d = 1, 2, or 3) be an open bounded domain with Γ as its boundary (Figure 2), and let a set of open spheres {B(x_I, r_I); I = 1, 2, ..., N} cover the entire domain Ω, where x_I and r_I are the center and radius of sphere I, respectively. Each node is placed at the geometric center of a sphere, and the surface of sphere I is denoted by S(x_I, r_I). Since we are interested in solving fourth-order differential equations, we define at each node I a positive radial weight function W_I(x) = W(s_I) ∈ C_0^s(B(x_I, r_I)), s ≥ 1, with s_I = \|x - x_I\|_0 / r_I, which is compactly supported on the sphere at node I. In our work we used the 7th-order spline weight function

W_I = \begin{cases} 1 - 35s^4 + 84s^5 - 70s^6 + 20s^7 & 0 ≤ s < 1 \\ 0 & s ≥ 1 \end{cases}   (2.27)
Figure 2. Schematic of the MFS. Discretization of a domain Ω in R2 by the method of finite spheres, using a set of nodes, for each node I there is a sphere ΩI . A sphere that lies completely in the domain is an interior sphere, while if the sphere has an intersection with the boundary, Γ , then it is a boundary sphere. The natural boundary conditions are defined on Γf , and the essential boundary conditions are defined on Γu ;Γ = Γu ∪ Γf and Γu ∩ Γf = 0.
These functions are utilized to generate the Shepard partition of unity functions

φ_I^0(x) = \frac{W_I}{\sum_{J=1}^N W_J}, \quad I = 1, 2, ..., N   (2.28)

which satisfy

1. \sum_{I=1}^N φ_I^0(x) = 1 \quad ∀ x ∈ Ω
2. φ_I^0(x) ∈ C_0^s(R^d), \; s ≥ 1.

The Shepard functions {φ_I^0(x)} satisfy zeroth-order consistency. To ensure higher order consistency, a local approximation space V_I^h = span_{m ∈ \mathcal{I}} \{P_m(x)\} is defined at each node I, where P_m(x) is a polynomial, \mathcal{I} is an index set, and h is a measure of the size of the spheres. For instance, V_I^h = \{1, (x - x_I)/r_I, (y - y_I)/r_I\} may be used to generate a linearly accurate displacement field. The global approximation space V_h is generated by multiplying the partition of unity functions with the local basis functions at each node:

V_h = \sum_{I=1}^N φ_I^0 V_I^h.   (2.29)
Therefore, any function w^h ∈ V_h can be written as

w^h(x) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x) α_{Im}   (2.30)

where h_{Im} = φ_I^0(x) P_m(x) is the shape function associated with the m-th degree of freedom α_{Im} of node I. It is important to note that h_{Im}(x) ∈ C_0^s(R^d), s ≥ 1.

2.3 Discretized Equations

Kirchhoff Plate

For the Kirchhoff plate, using the discretization

w^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x, y) α_{Im} = \mathbf{H}(x, y) α   (2.31)

the augmented potential energy functional (2.7) is approximated by

π^*_{KP}(α) = \frac{1}{2} α^T K α - α^T f + \frac{γ_1}{2} \int_{Γ_w} (Hα - \hat{w})^T (Hα - \hat{w}) \, dΓ + \frac{γ_2}{2} \int_{Γ_θ} \left( \frac{∂H}{∂n} α - \hat{θ} \right)^T \left( \frac{∂H}{∂n} α - \hat{θ} \right) dΓ   (2.32)
where

K = \int_Ω B^T D B \, dx dy   (2.33)

f = \int_Ω H^T q(x, y) \, dx dy - \int_{Γ_M} \hat{M}_n \frac{∂H^T}{∂n} \, dΓ + \int_{Γ_{Q_{eff}}} \hat{Q}_{eff} H^T \, dΓ + \sum_{j=1}^N R_c H_j^T   (2.34)

where B = PH. Minimizing the augmented potential energy functional with respect to the nodal unknowns results in the following set of linear algebraic equations:

(K + G_1 + G_2) α = f + f_{s1} + f_{s2}   (2.35)

where

G_1 = γ_1 \int_{Γ_w} H^T H \, dΓ, \quad G_2 = γ_2 \int_{Γ_θ} \frac{∂H^T}{∂n} \frac{∂H}{∂n} \, dΓ,

f_{s1} = γ_1 \int_{Γ_w} \hat{w} H^T \, dΓ, \quad f_{s2} = γ_2 \int_{Γ_θ} \hat{θ} \frac{∂H^T}{∂n} \, dΓ.
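The structure of the penalty-augmented system (2.35) can be illustrated with a discrete constraint row standing in for the boundary integrals (an assumption for illustration; names are not from the paper):

```python
import numpy as np

def penalty_solve(K, f, H, wbar, gamma=1e8):
    """Solve (K + gamma * H^T H) a = f + gamma * wbar * H^T,
    i.e. the generic form of the penalty system (2.35) with a single
    constraint row H enforcing H a = wbar."""
    K = np.asarray(K, float)
    f = np.asarray(f, float)
    H = np.atleast_2d(np.asarray(H, float))
    G = gamma * H.T @ H                      # penalty stiffness contribution
    fs = gamma * H.T @ np.atleast_1d(wbar)   # penalty load contribution
    return np.linalg.solve(K + G, f + fs)
```

For a large penalty parameter the constrained degree of freedom approaches the prescribed value, which mirrors how γ_1 and γ_2 enforce the essential boundary conditions in (2.35).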
Mindlin-Reissner Plate

For the Mindlin-Reissner plate, using the method of finite spheres discretization

w^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x, y) \tilde{w}_{Im} = H_w(x, y) \tilde{w},
θ^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} N_{Im}(x, y) \tilde{θ}_{Im} = H_θ(x, y) \tilde{θ}   (2.36)

where H_θ = \begin{bmatrix} H_{φ_x} & 0 \\ 0 & H_{φ_y} \end{bmatrix} and \tilde{θ}^T = [\tilde{φ}_x, \tilde{φ}_y], we may discretize the augmented potential energy functional (2.11) as

π^*_{MP}(a) = \frac{1}{2} a^T K a - a^T f + \frac{γ_1}{2} \int_{Γ_w} (H_w \tilde{w} - \hat{w})^T (H_w \tilde{w} - \hat{w}) \, dΓ
+ \frac{γ_2}{2} \int_{Γ_{φ_x}} (H_{φ_x} \tilde{φ}_x - \hat{φ}_x)^T (H_{φ_x} \tilde{φ}_x - \hat{φ}_x) \, dΓ
+ \frac{γ_3}{2} \int_{Γ_{φ_y}} (H_{φ_y} \tilde{φ}_y - \hat{φ}_y)^T (H_{φ_y} \tilde{φ}_y - \hat{φ}_y) \, dΓ   (2.37)

where

K = K_b + K_s, \quad a = [\tilde{w}, \tilde{θ}]^T   (2.38)

and

f = \begin{bmatrix} f_w \\ f_θ \end{bmatrix}.   (2.39)

The matrices are defined as

K_b = \begin{bmatrix} 0 & 0 \\ 0 & K_{bθθ} \end{bmatrix}, \quad K_s = \begin{bmatrix} K_{sww} & K_{swθ} \\ K_{sθw} & K_{sθθ} \end{bmatrix}
with

K_{sww} = \int_Ω (∇H_w)^T α ∇H_w \, dx dy
K_{sθw} = - \int_Ω H_θ^T α ∇H_w \, dx dy = (K_{swθ})^T
K_{sθθ} = \int_Ω H_θ^T α H_θ \, dx dy
K_{bθθ} = \int_Ω (L H_θ)^T D L H_θ \, dx dy
f_w = \int_Ω H_w^T q(x, y) \, dx dy + \int_{Γ_{Q_n}} \hat{Q}_n H_w^T \, dΓ
f_θ = \int_{Γ_M} \hat{M} H_θ^T \, dΓ

where \hat{M}^T = [\hat{M}_n, \hat{M}_{ns}]. Minimizing the augmented potential energy functional with respect to the nodal unknowns results in the following set of linear algebraic equations:

(K + G) a = f + f_s   (2.40)

where

G = \begin{bmatrix} G_w & 0 \\ 0 & G_θ \end{bmatrix}, \quad G_θ = \begin{bmatrix} G_{φ_x} & 0 \\ 0 & G_{φ_y} \end{bmatrix}, \quad f_s = \begin{bmatrix} f_{sw} \\ f_{sθ} \end{bmatrix}, \quad f_{sθ} = \begin{bmatrix} f_{sφ_x} \\ f_{sφ_y} \end{bmatrix}   (2.41)

with

G_w = γ_1 \int_{Γ_w} H_w^T H_w \, dΓ, \quad G_{φ_x} = γ_2 \int_{Γ_{φ_x}} H_{φ_x}^T H_{φ_x} \, dΓ, \quad G_{φ_y} = γ_3 \int_{Γ_{φ_y}} H_{φ_y}^T H_{φ_y} \, dΓ,

f_{sw} = γ_1 \int_{Γ_w} \hat{w} H_w^T \, dΓ, \quad f_{sφ_x} = γ_2 \int_{Γ_{φ_x}} \hat{φ}_x H_{φ_x}^T \, dΓ, \quad f_{sφ_y} = γ_3 \int_{Γ_{φ_y}} \hat{φ}_y H_{φ_y}^T \, dΓ.   (2.42)
Functionally Graded Material Plate Model

The method of finite spheres gives the following discretized equations:

φ_x^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x, y) \tilde{a}_{Im} = H a   (2.43a)
φ_y^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x, y) \tilde{b}_{Im} = H b   (2.43b)
w^h(x, y) = \sum_{I=1}^N \sum_{m ∈ \mathcal{I}} h_{Im}(x, y) \tilde{c}_{Im} = H c.   (2.43c)

Minimizing the potential functional (2.22) with respect to the nodal unknowns results in the following set of linear algebraic equations:

K d = F   (2.44)

where K, d, and F are the stiffness matrix, nodal unknowns, and force vector, respectively. d and F are defined as

d = \begin{bmatrix} a \\ b \\ c \end{bmatrix}, \quad F = \begin{bmatrix} f_x \\ f_y \\ f_z \end{bmatrix}   (2.45)

where

f_x = \int_Ω B_x M^t \, dΩ, \quad f_y = \int_Ω B_y M^t \, dΩ, \quad f_z = \int_Ω H q \, dΩ,

and B_x and B_y are the derivatives of H with respect to x and y, respectively. The stiffness matrix K is defined as

K = \begin{bmatrix} D \left( A_{xx} + \frac{1-ν}{2} A_{yy} \right) + Q A & D \left( ν A_{xy} + \frac{1-ν}{2} A_{yx} \right) & Q A_x \\ & D \left( A_{yy} + \frac{1-ν}{2} A_{xx} \right) + Q A & Q A_y \\ \text{symmetric} & & Q (A_{xx} + A_{yy}) \end{bmatrix}   (2.46)

where

A = \int_Ω H^T H \, dΩ, \quad A_x = \int_Ω H^T B_x \, dΩ, \quad A_y = \int_Ω H^T B_y \, dΩ,
A_{xx} = \int_Ω B_x^T B_x \, dΩ, \quad A_{yy} = \int_Ω B_y^T B_y \, dΩ,
A_{xy} = \int_Ω B_x^T B_y \, dΩ, \quad A_{yx} = \int_Ω B_y^T B_x \, dΩ,   (2.47)

and

D = \int_{-h/2}^{h/2} \frac{E}{1-ν^2} z^2 \, dz, \quad Q = \int_{-h/2}^{h/2} \frac{E}{2(1+ν)} \, dz.   (2.48)
3 Numerical Integration As in most other meshfree methods, the method of finite spheres requires specialized numerical integration techniques to evaluate the integrals in the weak form where the integrands are nonpolynomial rational functions and the integration domains are much more complex than in traditional finite elements. In [12, 1] several computationally efficient integration schemes have been proposed for the method of finite spheres applied to two-dimensional elasto-static problems and it has been reported that piece-wise midpoint quadrature rules are more effective than higher order Gaussian quadrature. However, for the solution of fourth order problems such as plates, the integrands in the weak form contain higher order derivatives that are more ill behaved than the integrands encountered in second order problems. Hence, in [26, 21] we have developed a novel integration technique based on a lookup table generated using genetic algorithms. This method has much promise since it provides accurate solutions with a reasonable number of integration points. In this section we will briefly describe how genetic algorithms may be used in generating numerical integration schemes and then we will describe the lookup table approach. Details can be found in [26, 21].
Figure 3. Interior (filled), lens (cross-hatched) and boundary disks (striped) on a two-dimensional domain.
In what follows we will consider the following integral:

Q_{2D} = \int_{Ω_I} f(x, y) \, dx dy   (3.49)
where ΩI is a general subdomain in R2 . For the method of finite spheres we will consider three types of subdomains (see Figure 3):
1. Interior disk: a disk that does not intersect the domain boundary.
2. Lens: the region of intersection of two spheres.
3. Boundary disk: a disk with a nonzero intersection with the domain boundary.

3.1 Numerical Integration using Genetic Algorithms

Genetic algorithms [17, 18, 19] are a class of biologically inspired optimization algorithms in which a population of potential solutions is chosen at random. Individuals (population members) compete according to their fitness for selection as parents to the next generation. The genetic material of the parents is modified to produce offspring (the new population) using genetic operators such as mutation and crossover. The process is continued until the maximum fitness in the population reaches a certain preset value or the number of generations reaches the maximum limit. This general algorithm is adapted to numerical quadrature; the two-dimensional case is described below. The algorithm starts by choosing µ partitions (β) randomly; the approximate integral Q of each partition is evaluated, as well as the relative error E, using two different quadrature rules, one more accurate than the other (e.g., we have used one- and three-point Gauss quadrature rules). For each subinterval the difference between the two integrals is the relative error E_i, while the integration value Q_i corresponds to the more accurate rule. In the two-dimensional case each subdomain type (interior, lens or boundary disk) has a different partition. An interior disk (of radius R) is divided into m subdomains using concentric circles of radii {0 = r_0 < r_1 < ... < r_m = R}, and each subregion bounded by r_i and r_{i+1} is then divided by n radial lines {θ_{i,0}, ..., θ_{i,(n-1)} | θ_{i,j} ∈ [0, 2π]} with θ_{i,j} < θ_{i,j+1}; see Figure 4. The partition β for this case is a matrix of the form

β = \begin{bmatrix} 0 = r_0 & < r_1 & < \dots & < r_m = R \\ θ_{0,0} & \dots & & θ_{m,0} \\ \vdots & & & \vdots \\ θ_{0,(n-1)} & \dots & & θ_{m,(n-1)} \end{bmatrix}, \quad \text{such that } r_0 = 0 \text{ and } r_i < r_{i+1}.   (3.50)
The integral in equation (3.49) may be written as

Q = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} Q_{i,j}

with

Q_{i,j} = \int_{r = r_i}^{r_{i+1}} \int_{θ = θ_{i,j}}^{θ_{i,(j+1)}} f(r, θ) \, r \, dr \, dθ ≈ \sum_{k=1}^{N_r} \sum_{l=1}^{N_θ} W_{kl} \, f(\bar{r}_k, \bar{θ}_l)   (3.51)
32
S. BaniHani and S. De
Figure 4. Partition of (a) an interior disk and (b) a lens into integration subdomains.
where N_r and N_θ are the number of radii and angles used in the approximation rule and W_{kl} are the weights; e.g., for a mid-point Gauss quadrature

W_{kl} = \frac{1}{2} (θ_{l+1} - θ_l)(r_k^2 - r_{k-1}^2),

and (\bar{r}_k, \bar{θ}_l) is the centroid of the subdomain. The relative error is the sum of the relative errors for each subdomain, i.e.,

E = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} |E_{i,j}|.
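The piecewise midpoint rule (3.51) over a polar partition of an interior disk can be sketched as follows; this is an illustration only (the genetic algorithm that evolves the partition β, and the two-rule error estimate used as its fitness, are omitted), and the radial midpoint is used as an approximate centroid:

```python
import math

def disk_midpoint_quadrature(f, radii, thetas):
    """Piecewise midpoint rule (3.51) on a polar partition of a disk.

    radii: 0 = r_0 < ... < r_m = R.
    thetas[k]: sorted angle breakpoints for ring k; the ring is closed by
    wrapping around to thetas[k][0] + 2*pi.
    Weights follow W_kl = (theta_{l+1} - theta_l) * (r_{k+1}^2 - r_k^2) / 2.
    """
    Q = 0.0
    for k in range(len(radii) - 1):
        r0, r1 = radii[k], radii[k + 1]
        th = list(thetas[k]) + [thetas[k][0] + 2.0 * math.pi]
        for l in range(len(th) - 1):
            w = 0.5 * (th[l + 1] - th[l]) * (r1 ** 2 - r0 ** 2)
            rbar = 0.5 * (r0 + r1)            # radial midpoint of the cell
            tbar = 0.5 * (th[l] + th[l + 1])  # angular midpoint of the cell
            Q += w * f(rbar * math.cos(tbar), rbar * math.sin(tbar))
    return Q
```

Because the weights sum exactly to the cell areas, the rule integrates constants exactly regardless of the partition, which is a convenient sanity check.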
A lens (Figure 4(b)), on the other hand, is partitioned by first randomly generating m concentric lenses, the area between any two of which is then partitioned by n randomly generated straight lines passing through the point O. The partitions have the form

β = \begin{bmatrix} 0 = χ_0, & \dots, & χ_m = Ξ(θ) \\ θ_{0,0}, & \dots, & θ_{m,0} \\ \vdots & & \vdots \\ θ_{0,(n-1)}, & \dots, & θ_{m,(n-1)} \end{bmatrix}   (3.52)

such that χ ∈ [0, Ξ(θ)], χ_0 = 0, χ_i < χ_{i+1}, and

Ξ(θ) = \begin{cases} κ_I(θ) & π/2 < θ ≤ 3π/2 \\ κ_{I+1}(θ) & \text{otherwise.} \end{cases}

M(f_1, f_2, ..., f_n) = \begin{cases} \min_j \{f_j\}, & f_j > 0 \; ∀ j ∈ 1, ..., n \\ \max_j \{f_j\}, & f_j < 0 \; ∀ j ∈ 1, ..., n \\ 0, & \text{otherwise.} \end{cases}   (2.7)

Thus, the time evolution scheme (2.3) can be rewritten

F_i^{n+1} = F_i^n + τ · [(Q_i - M_i) - G_i]   (2.8)

where

M_i = \frac{1}{2} \sum_{j ∈ Ω_i} \text{sign}(x_j - x_i) · M(∇F_j^n, ∇F_i^n) · \max\left( \frac{c_i}{V_i} |\tilde{W}_{ij}|, \frac{c_j}{V_j} |\tilde{W}_{ji}| \right).   (2.9)
3 The Unsymmetrical Approach to Modify Weights

We assume that

\tilde{W}_{ij} = \begin{cases} α_i^+ W_{ij}, & x_i - x_j < 0 \\ α_i^- W_{ij}, & \text{otherwise.} \end{cases}   (3.10)

Our goal is to achieve two equalities:

1. Approximation law: \sum_{j ∈ Ω_i} \tilde{W}_{ij} = 1.
2. Conservation law:
\sum_{j ∈ Ω_i} \tilde{W}_{ij} · \text{sign}(x_i - x_j) - \sum_{j ∈ Ω_i} \tilde{W}_{ji} · \text{sign}(x_j - x_i) = 0.   (3.11)

The approximation equation follows from the unity condition that assures the zeroth-order consistency of the integral-form representation of a continuum function [2]. The conservation equation follows from the derivative representation (2.6), which guarantees the absence of inner artificial sources. To prove this fact, look at the following computations.
Fuzzy Grid Method for Lagrangian Gas Dynamics Equations
Let us consider the 1D case, assume that each particle has 4 neighbour particles (2 left and 2 right), except, of course, the boundary ones, and expand the expression for the gradient (2.6):

\[
\begin{aligned}
\nabla g_1 &= \widetilde W_{12} (g_2 - g_1) + \widetilde W_{13} (g_3 - g_1), \\
\nabla g_2 &= \widetilde W_{21} (g_2 - g_1) + \widetilde W_{23} (g_3 - g_2) + \widetilde W_{24} (g_4 - g_2), \\
\nabla g_3 &= \widetilde W_{31} (g_3 - g_1) + \widetilde W_{32} (g_3 - g_2) + \widetilde W_{34} (g_4 - g_3) + \widetilde W_{35} (g_5 - g_3), \\
\nabla g_4 &= \widetilde W_{42} (g_4 - g_2) + \widetilde W_{43} (g_4 - g_3) + \widetilde W_{45} (g_5 - g_4), \\
\nabla g_5 &= \widetilde W_{53} (g_5 - g_3) + \widetilde W_{54} (g_5 - g_4).
\end{aligned}
\tag{3.12}
\]

Extract the coefficients for each g_i, i ∈ 1, ..., 5:

\[
\begin{aligned}
g_1 &: -\widetilde W_{21} + \widetilde W_{12} - \widetilde W_{13} - \widetilde W_{31}, \\
g_2 &: \widetilde W_{12} + \widetilde W_{21} - \widetilde W_{23} - \widetilde W_{24} - \widetilde W_{32} + \widetilde W_{42}, \\
g_3 &: \widetilde W_{13} - \widetilde W_{31} + \widetilde W_{32} - \widetilde W_{34} - \widetilde W_{35} - \widetilde W_{23} + \widetilde W_{43} + \widetilde W_{53}, \\
g_4 &: \widetilde W_{24} - \widetilde W_{42} + \widetilde W_{43} - \widetilde W_{45} - \widetilde W_{34} + \widetilde W_{54}, \\
g_5 &: \widetilde W_{35} - \widetilde W_{53} + \widetilde W_{54} - \widetilde W_{45}.
\end{aligned}
\tag{3.13}
\]

Thus, in order to achieve the conservation requirement, each expression for g_i must be equal to zero, which was to be proved. We have (2n − 2) unknowns α^+ and α^- on the one side, and we can write (2n − 2) equations from the system (3.11) on the other, where n is the number of particles. Thus we can solve a system of linear equations for the unknowns α^+ and α^-. The problem is the huge magnitude of the resulting α: a large α significantly reduces the time step τ (2.5) and leads to extra numerical viscosity Q (2.4) and smoothing. This is clearly seen in Figures 7–10.
4 The Symmetrical Approach to Modify Weights

We assume that

\[
\widetilde W_{ij} = \alpha_{ij} \cdot W_{ij}, \qquad \widetilde W_{ij} = \widetilde W_{ji}.
\tag{4.14}
\]

Our goal is to achieve two equalities:

1. Approximation law: \(\sum_{j \in \Omega_i} \widetilde W_{ij} = 1\).
2. Conservation law:

\[
\sum_{j \in \Omega_i} \widetilde W_{ij} \cdot \operatorname{sign}(x_i - x_j) = 0.
\tag{4.15}
\]
O.V. Diyankov and I.V. Krasnogorov
The conservation law equation was simplified because of the symmetry W̃_ij = W̃_ji. First we build the matrix Θ consisting of the coefficients of the system (4.15) with respect to the unknowns α_ij; the number of unknowns equals the number of the W_ij with j > i. The system (4.15) is thus transformed into the equation

\[
\Theta \cdot \alpha = B,
\tag{4.16}
\]

where B is the right-hand side of the system (4.15). In order to obtain an optimal α we pose the following minimization problem:

\[
\min_{\alpha} \left( \alpha^T \alpha - \lambda^T \left( \Theta \alpha - B \right) \right),
\tag{4.17}
\]

where λ is the vector of Lagrange multipliers. We can minimize (4.17) by setting the partial derivatives with respect to α to zero, which is a necessary condition for a minimum:

\[
2 \alpha^T - \lambda^T \Theta = 0
\quad \Longrightarrow \quad
\alpha = \frac{1}{2} \Theta^T \lambda.
\tag{4.18}
\]

Substituting (4.18) into (4.16), we can find λ:

\[
\frac{1}{2} \Theta \Theta^T \lambda = B,
\qquad
\lambda = 2 \left( \Theta \Theta^T \right)^{-1} B.
\tag{4.19}
\]

And finally we compute α:

\[
\alpha = \Theta^T \left( \Theta \Theta^T \right)^{-1} B.
\tag{4.20}
\]

The corresponding results are presented in Figures 3–6 and 11–18. With the anti-diffusion term the depression wave is closer to the exact solution. For a more accurate simulation of the contact discontinuity, a special algorithm for solving the Riemann problem would be needed.
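Equation (4.20) is the familiar minimum-norm solution of the underdetermined system (4.16); a small NumPy check, with a random matrix standing in for the weight-coefficient matrix Θ of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = rng.standard_normal((4, 10))   # 4 constraints, 10 unknowns
B = rng.standard_normal(4)

# Minimum-norm solution alpha = Theta^T (Theta Theta^T)^{-1} B of (4.20);
# it satisfies the constraints (4.16) while keeping alpha^T alpha minimal.
alpha = Theta.T @ np.linalg.solve(Theta @ Theta.T, B)
residual = np.linalg.norm(Theta @ alpha - B)
```

For a full-row-rank Θ this coincides with the Moore-Penrose pseudoinverse solution, which is one way to keep the magnitude of α under control.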
5 Numerical Examples

In this section we present numerical examples which demonstrate the performance of our Fuzzy Grid approach. We consider the approximate solution of the Lagrangian equations of hydrodynamic flow (2.2):

\[
\frac{d}{dt}
\begin{pmatrix} V \\ mU \\ mE \end{pmatrix}
+ V \cdot \frac{\partial}{\partial x}
\begin{pmatrix} -U \\ P \\ PU \end{pmatrix}
= 0,
\qquad
P = (\gamma - 1)\,\rho \left( E - \frac{U^2}{2} \right),
\quad
\rho = \frac{m}{V}.
\tag{5.21}
\]

We experiment with the following algorithms:
Figure 1. Shock wave - density. t = 1.3
Figure 2. Shock wave - pressure. t = 1.3
Figure 3. Shock wave - density. t = 1.3
Figure 4. Shock wave - pressure. t = 1.3
Figure 5. Shock wave - density. t = 1.3
Figure 6. Shock wave - pressure. t = 1.3
1. The unsymmetrical modification, see (3.10). It is referred to as W1. 2. The symmetrical modification, see (4.14). It is referred to as W2. 3. The symmetrical modification with MinMod anti-diffusion terms, see (4.14), (2.8). It is referred to as W3. We solve the system (5.21) with two sets of initial conditions.
Figure 7. Sod problem - density. t = 1.3
Figure 8. Sod problem - pressure. t = 1.3
Figure 9. Sod problem - velocity. t = 1.3
Figure 10. Sod problem - energy. t = 1.3
Figure 11. Sod problem - density. t = 1.3
Figure 12. Sod problem - pressure. t = 1.3
5.1 Example 1. Strong Shock Wave

The first example is the strong shock wave propagation problem, which has the following initial data:

\[
\begin{cases}
\rho = 4, \ U = 1, \ E = 1, & x < 0.5, \\
\rho = 1, \ U = 0, \ E = 0, & x > 0.5.
\end{cases}
\]

The number of particles is equal to 110.
Figure 13. Sod problem - velocity. t = 1.3
Figure 14. Sod problem - energy. t = 1.3
Figure 15. Sod problem - density. t = 1.3
Figure 16. Sod problem - pressure. t = 1.3
Figure 17. Sod problem - velocity. t = 1.3
Figure 18. Sod problem - energy. t = 1.3
As we can see, the best result is obtained with the W3 method. The only disadvantage is a small oscillation at the shock wave front.

5.2 Example 2. Sod Problem

The second example is the Riemann problem proposed by Sod [4], which has the following initial data:
\[
\begin{cases}
\rho = 1, \ U = 0, \ E = 2.5, & x < 2.5, \\
\rho = 0.125, \ U = 0, \ E = 2, & x > 2.5,
\end{cases}
\qquad \gamma = 1.4.
\]

The number of particles is equal to 300.
6 Conclusion

We have presented a new approach to modifying the weight coefficients of a meshfree method. The unsymmetrical modification method displays excessive dissipation for the shock wave problem (8 points on the shock wave front) and nonphysical behaviour for the Sod problem. The symmetrical modification method shows good spatial resolution for the shock wave problem (5–6 points on the shock wave front) and good agreement with the exact solution for the Sod problem. Its only disadvantage is the slightly broad transition at the contact discontinuity. With the anti-diffusion correction procedure the discontinuity profiles are sharper and the rarefaction wave is more accurate.
References

1. J. J. Monaghan, An introduction to SPH, Computer Physics Communications, vol. 48, 1988, pp. 89–96.
2. G. R. Liu, Mesh Free Methods, CRC Press, 2003.
3. A. Kurganov, E. Tadmor, New High-Resolution Central Schemes for Nonlinear Conservation Laws and Convection-Diffusion Equations, J. Comput. Phys., vol. 160, 2000, pp. 241–282.
4. G. A. Sod, A Survey of Several Finite Difference Methods for Systems of Nonlinear Hyperbolic Conservation Laws, J. Comput. Phys., vol. 27, 1978.
New Shape Functions for Arbitrary Discontinuities without Additional Unknowns Thomas-Peter Fries and Ted Belytschko Dep. of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, 60208 Evanston, Illinois, USA
[email protected] Summary. A method is proposed for arbitrary discontinuities, without the need for a mesh that aligns with the interfaces, and without introducing additional unknowns as in the extended finite element method. The approximation space is built by special shape functions that are able to represent the discontinuity, which is described by the level-set method. The shape functions are constructed by means of the moving least-squares technique. This technique employs special mesh-based weight functions such that the resulting shape functions are discontinuous along the interface. The new shape functions are used only near the interface, and are coupled with standard finite elements, which are employed in the rest of the domain for efficiency. The coupled set of shape functions builds a linear partition of unity that represents the discontinuity. The method is illustrated for linear elastic examples involving strong and weak discontinuities.
Key words: Discontinuity, partition of unity, moving least-squares
1 Introduction

A discontinuous change of field quantities or their gradients along certain interfaces is frequently observed in the real world. For example, this may be found in structures in the presence of cracks, pores, and inclusions. In fluids, it occurs on interfaces between two different fluids. The mathematical model and the numerical methods for their approximation have to consider these interfaces appropriately. Discontinuities may be classified as strong and weak. The former involve discontinuous changes in the dependent variable of a model, i.e. of the function itself, whereas weak discontinuities describe discontinuous changes of the derivative of the function. For structural models, typical examples of strong and weak discontinuities are cracks and interfaces between different materials, respectively. In the approximation of discontinuous fields, a suitable treatment of the interfaces is required. In standard finite element analysis [4, 22], this may be
achieved by constructing a mesh whose element edges align with the interface. For moving interfaces, this requires a frequent remeshing, which often restricts this approach to problems where the interface topology does not change significantly during the simulation [18]. The extended finite element method (XFEM) [6, 16, 21] overcomes the need for aligning the elements with the discontinuity. The approximation space resulting from the standard finite element method is enriched by special functions via a partition of unity so that the discontinuity may be considered appropriately. However, additional unknowns for the enriched nodes are needed for this method. With the XFEM, arbitrary discontinuities may be treated implicitly on a fixed mesh. Often, the description of the discontinuities is realized by means of the level-set method [18, 21]. An interesting method that does not require a partition of unity is given by Hansbo and Hansbo [11], though it can be shown to have the same basis functions as the XFEM [1]. Furthermore, meshfree methods have been successfully used for arbitrary discontinuities, see e.g. the overview in [19]. In this paper a new method, which constructs the shape functions from the beginning so that they are able to represent discontinuities, is proposed. No additional unknowns are introduced. The moving least-squares (MLS) method [14] is used for the construction of the shape functions. This method is frequently used in the context of meshfree methods [3, 10], where it is often the underlying principle for the construction of meshfree shape functions. However, the functions are only meshfree if the weight functions which are involved in the MLS technique are mesh-independent. In this paper, the weight functions are defined on a standard finite element mesh, and the resulting shape functions are mesh-based. For each nodal weight function, the support consists of the neighboring and next-neighboring elements of that node. 
The support of the weight function is truncated along the discontinuity, consequently, a node has no influence across the interface. The visibility criterion of [17] is employed for this purpose. Thereby, the weight functions are designed so that they lead to shape functions which build a linear partition of unity and are able to represent the discontinuity. The new shape functions have larger supports than standard finite element shape functions. The final system of equation, which results from the use of these shape functions in a weighted residual setting, is less sparse due to the increased connectivity. Therefore, it is desirable to employ the new shape functions only in the proximity of the discontinuity. In all other parts of the domain, standard finite element shape functions are used. The coupling of the two different types of shape functions is realized by a ramp function according to the approach in [7]. The set of coupled shape functions still builds a linear partition of unity with the ability to consider discontinuous changes of the sought functions or its derivatives along the interface. An outline of the paper is as follows: In section 2, the level-set method for the description of the interface is briefly discussed. Throughout this work, the interfaces are static. Section 3 gives an outline of the moving least-squares method, which constructs shape functions based on locally defined weight
functions. Shape functions, which are able to consider a discontinuity, result for specially designed weight functions. This is worked out in section 4. The coupling of the new shape functions with standard finite element shape functions is described in section 5, and enables an efficient assembly of the final system of equations. Section 6 shows numerical results with the coupled shape functions for linear elastic problems including strong and weak discontinuities. The numerical results exhibit the optimal rate of convergence. The paper ends in section 7 with a summary and conclusions.
2 Level-Set Method

The level-set method is a numerical technique for the implicit tracking of moving interfaces [18]. Throughout this work, only static interfaces Γ_disc in a d-dimensional domain Ω ⊂ R^d are considered. The signed distance function [18] is used for the representation of the interface position,

\[
\psi(x) = \pm \min \left\| x - x_{\Gamma_{\mathrm{disc}}} \right\|, \qquad \forall x_{\Gamma_{\mathrm{disc}}} \in \Gamma_{\mathrm{disc}}, \ \forall x \in \Omega,
\tag{2.1}
\]

where the sign is different on the two sides of a closed interface and ‖·‖ denotes the Euclidean norm. It follows directly from (2.1) that the zero-level of this scalar function is a representation of the discontinuity, i.e.

\[
\psi(x) = 0 \quad \forall x \in \Gamma_{\mathrm{disc}}.
\tag{2.2}
\]
If the discontinuity only partially cuts the body, it is necessary to construct another level-set function ξ such that ξ (x) > 0 on the cut part, and ξ (x) < 0 on the uncut part. Consider a discretization of the domain by a mesh. The values of the level-set function are only computed at nodes ψ = ψ (xi ), and the level-set function ψ h (x) = M T (x) ψ is an approximation of ψ (x) using the interpolation functions M (x). Then, also the representation of the discontinuity as the zero-level of ψ h (x) is only an approximation of the real interface position, which improves with mesh refinement. In this work, the interpolation functions M (x) are standard bilinear finite element (FE) functions [4, 22].
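The sign-change behaviour of the nodal level-set values can be illustrated with a small sketch (our own example: a circular interface of radius 1, sampled along a line of nodes):

```python
import numpy as np

def psi(x, y, R=1.0):
    """Signed distance (2.1) to a circle of radius R centered at the origin:
    negative inside, positive outside; the zero level is the interface."""
    return np.hypot(x, y) - R

# Nodal level-set values along a line of nodes; a sign change between two
# neighboring nodes indicates that the interface crosses that edge.
xs = np.linspace(-2.0, 2.0, 6)
vals = psi(xs, 0.0)
cut_edges = [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)
             if vals[i] * vals[i + 1] < 0]
```

The circle crosses the sampled line twice, so exactly two edges are flagged; interpolating ψ^h between the flagged nodes gives the approximate interface position, which improves with refinement.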
3 Moving Least-Squares Method

The method is discussed here following [14, 15]. For a function u(x), defined on an open set Ω ⊂ R^d and sufficiently smooth, i.e. at least u(x) ∈ C^0(Ω), one can define a "local" approximation around a fixed point x̄ ∈ Ω as

\[
u^h_{\mathrm{local}}(x, \bar x) = p^T(x)\, a(\bar x),
\tag{3.3}
\]

where p(x) forms a basis of the approximation subspace, which generally consists of monomials. Throughout this paper, a linear basis is used,
\[
p^T(x) = \left[ 1, \ x, \ y \right].
\tag{3.4}
\]
The coefficient vector a(x̄) is obtained by minimizing the weighted least-squares discrete L2-error norm

\[
J_{\bar x}(a(\bar x)) = \sum_{i=1}^{r} \phi_i(\bar x) \left[ p^T(x_i)\, a(\bar x) - u_i \right]^2,
\tag{3.5}
\]
where φ_i(x̄) are weight functions. Thereby, a relation between the unknowns a(x̄) and the nodal values u is found. The vectors x_i refer to the positions of the r nodes within the domain. The weight function φ_i has a small support Ω̃_i around each node, thereby ensuring the locality of the approximation. It plays an important role in the context of the MLS method. A mesh-independent definition of the weight functions leads to the class of meshfree methods, where the MLS is often the underlying principle for the construction of meshfree shape functions, see e.g. [3, 10]. Throughout this work, however, the weight functions are defined on a mesh, and the new shape functions, to be derived in section 4, are mesh-based.

Minimization of (3.5) with respect to a(x̄) results in the system of equations

\[
\sum_{i=1}^{r} \phi_i(\bar x)\, p(x_i)\, p^T(x_i)\, a(\bar x) = \sum_{i=1}^{r} \phi_i(\bar x)\, p(x_i)\, u_i.
\tag{3.6}
\]

Solving this for a(x̄) and then replacing a(x̄) in the local approximation (3.3) leads to

\[
u^h_{\mathrm{local}}(x, \bar x) = p^T(x) \left[ \sum_{i=1}^{r} \phi_i(\bar x)\, p(x_i)\, p^T(x_i) \right]^{-1} \sum_{i=1}^{r} \phi_i(\bar x)\, p(x_i)\, u_i.
\tag{3.7}
\]
Since the point x̄ can be chosen arbitrarily, one can let it "move" over the entire domain, x̄ → x, which leads to the global approximation of u(x) [15]. It should be noted that the concept of a "moving" approximation is not needed to construct the MLS functions; one can simply start with (3.5) as the definition and proceed as in [5]. Finally, the MLS approximation may be written as

\[
u^h(x) = p^T(x)\, [M(x)]^{-1}\, B(x)\, u,
\tag{3.8}
\]

where

\[
M(x) = \sum_{i=1}^{r} \phi_i(x)\, p(x_i)\, p^T(x_i)
\tag{3.9}
\]

and

\[
B(x) = \left[ \phi_1(x)\, p(x_1) \ \ \phi_2(x)\, p(x_2) \ \ \ldots \ \ \phi_r(x)\, p(x_r) \right].
\tag{3.10}
\]
The matrix M(x) is of size k × k, with k being the number of components in p(x). This matrix has to be inverted wherever the MLS functions are to be evaluated. Using these MLS functions as shape functions in an approximation of the form u^h(x) = N^T(x) u, one can immediately write a specific shape function N_i at a point x:

\[
N_i(x) = p^T(x)\, [M(x)]^{-1}\, \phi_i(x)\, p(x_i).
\tag{3.11}
\]
The set of r MLS functions {N (x)} builds a partition of unity (PU) of order n over the d-dimensional domain Ω [3].
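The construction (3.8)-(3.11) is compact enough to sketch directly; the following Python fragment (an illustration, not the authors' implementation) evaluates the MLS shape functions for the linear basis (3.4) and checks the partition-of-unity and linear-reproduction properties:

```python
import numpy as np

def mls_shape_functions(x, nodes, weights):
    """Evaluate the MLS shape functions N_i(x) of (3.11) at a 2D point x for
    the linear basis p = [1, x, y] and given weight values phi_i(x)."""
    p = lambda pt: np.array([1.0, pt[0], pt[1]])
    # Moment matrix M(x) of (3.9); k x k with k = 3 for the linear basis.
    M = sum(w * np.outer(p(xi), p(xi)) for xi, w in zip(nodes, weights))
    Minv = np.linalg.inv(M)
    return np.array([w * p(x) @ Minv @ p(xi)
                     for xi, w in zip(nodes, weights)])

# Four nodes of a unit square, all weights equal to one at the sample point.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.3, 0.6])
N = mls_shape_functions(x, nodes, np.ones(4))
```

For any point where M(x) is invertible, the resulting functions sum to one and reproduce the coordinates exactly, which is the linear PU property used throughout the paper.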
4 Design of Special Shape Functions

The aim is to develop shape functions that are able to model arbitrary discontinuities without losing their interpolation properties and without additional degrees of freedom at nodes around the discontinuity. This aim is achieved by specially designed weight functions, which, through the MLS procedure, guarantee shape functions that build a linear PU in the entire domain, taking the discontinuity into account. These functions are C^0-continuous everywhere in the domain except along the interfaces, where they are constructed to be discontinuous.

4.1 Special Weight Functions

The weight functions φ_i(x) in the MLS procedure determine some important properties of the resulting shape functions. The support and the continuity of the shape functions are identical to those of the weight functions, that is, ∀i = 1, ..., r: N_i = 0 where φ_i = 0, and N_i ∈ C^l(Ω) if φ_i ∈ C^l(Ω) (assuming that p(x) is sufficiently smooth). For the new weight functions, the supports consist of the elements contiguous to a node and their neighboring elements. This is shown in Fig. 1 for quadrilateral elements.
Figure 1. The weight function corresponding to the center node has a support which includes the neighboring elements of that node (dark-grey area) and the next-neighboring elements (light-grey area).
92
T.-P. Fries and T. Belytschko
The following definition of the weight functions has been found useful: It is assumed that a domain Ω ⊂ R^2 is subdivided into n_el elements, and each element is defined by a set I_k^el ∈ (N^+)^m, k = 1, ..., n_el, of m element nodes. The set of neighboring nodes of a particular node ℓ is defined as

\[
I = \bigcup_{i:\ \ell \in I_i^{el}} I_i^{el} \setminus \ell.
\tag{4.12}
\]

The weight function of node ℓ is defined as

\[
\phi_\ell(x) = 2 \cdot N_\ell^{\mathrm{FEM}}(x) + \sum_{i \in I} N_i^{\mathrm{FEM}}(x),
\tag{4.13}
\]

where N_i^FEM(x) is a standard finite element shape function. This weight function is depicted in Fig. 2 for a node in a structured and unstructured quadrilateral element setting. One may use the definition of the new weight functions (4.13) for both triangular and quadrilateral elements. However, in this work, without loss of generality, only quadrilateral elements with corresponding bilinear shape functions are considered. The shape functions used for the construction of the approximation follow from the MLS procedure, as described in section 3, based on these weight functions. It is noted that standard FE shape functions are employed for the definition of the special weight functions, which are then used to obtain C^0-continuous shape functions by the MLS technique.

Figure 2. The proposed weight function of node ℓ in a structured and unstructured element situation when no discontinuity is present.
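The enlarged support implied by (4.13) is easy to verify in a 1D analogue with linear hat functions (an illustrative reduction of our own, not the paper's bilinear 2D setting):

```python
def hat(x, i, nodes):
    """Standard 1D linear FE hat function of node i on a sorted node list."""
    if i > 0 and nodes[i - 1] <= x <= nodes[i]:
        return (x - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
    if i < len(nodes) - 1 and nodes[i] <= x <= nodes[i + 1]:
        return (nodes[i + 1] - x) / (nodes[i + 1] - nodes[i])
    return 0.0

def special_weight(x, l, nodes):
    """1D analogue of (4.13): twice the hat of node l plus the hats of its
    neighboring nodes, extending the support by one element on each side."""
    neighbors = [i for i in (l - 1, l + 1) if 0 <= i < len(nodes)]
    return 2.0 * hat(x, l, nodes) + sum(hat(x, i, nodes) for i in neighbors)

nodes = [0.0, 1.0, 2.0, 3.0, 4.0]
# phi_2 is nonzero on the next-neighboring element [0, 1] as well,
# and attains the value 2 at its own node, as in Fig. 2.
w = special_weight(0.5, 2, nodes)
```

The support of φ_2 covers the neighboring and next-neighboring elements of node 2, mirroring the 2D picture in Fig. 1.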
4.2 Modifying the Weight Function in the Presence of a Discontinuity To introduce the discontinuity we use the visibility method [17], in which all nodes not visible from a point x, when the discontinuity is considered opaque,
are omitted. That is, the support of the weight function is truncated on the other side of the discontinuity, see Fig. 3. This modification of the support of the weight functions is a standard treatment of a discontinuity in the field of meshfree methods [5, 17].
Figure 3. The modified supports of the new weight functions, as a consequence of the discontinuity, are shown for two selected nodes.
Mathematically, this is expressed as

\[
\tilde\phi_\ell(x) =
\begin{cases}
\phi_\ell(x), & \text{for } x \text{ visible from } x_\ell, \\
0, & \text{otherwise,}
\end{cases}
\tag{4.14}
\]

where φ_ℓ(x) is defined in (4.13). Whether a point is visible from x_ℓ can be determined from the signs of the level-set functions, see section 2. The point x' is visible from x if

\[
\psi(x) \cdot \psi(x') > 0
\quad \text{and} \quad
\xi(\bar x) > 0,
\tag{4.15}
\]

where x̄ is the intersection of the line going from x to x' with ψ(x) = 0.

It may be seen in Fig. 3 that the truncation of the weight function supports results in a reduced overlap of weight functions. However, due to the large supports of the new weight functions, there is still sufficient overlap of the weight functions in the cut elements, so that a linear PU may be constructed through the MLS procedure. It is noted that for weight functions with supports of the same size as the standard bilinear FE shape functions, the MLS matrix M(x) in Eq. (3.9) would become singular near the discontinuity, and a PU of the same order as in uncut elements could not be constructed.

For open discontinuities, it should be noted that the visibility criterion introduces discontinuous shape functions not only along the interface. Close to the tip of the discontinuity (e.g. a crack tip), artificial discontinuities result in the domain, see Fig. 4. However, these artificial discontinuities do not inhibit convergence [13], and may be avoided by using approaches as in [17].

It may thus be found that the new shape functions resulting from the proposed definition of the weight functions share the following properties:
Figure 4. (a) Weight function, and (b) shape function of a node close to the tip of an open discontinuity.
• They build a linear PU in the entire domain which is able to represent the discontinuity, because no shape function has influence across the discontinuity.
• Their supports are larger than those of standard FE shape functions. This leads to an increase in the computational effort. Therefore, it is desirable to use these shape functions only near the discontinuity, i.e., where they are needed, and standard FE shape functions in all other parts of the domain. This is discussed in section 5.
• They are C^0-continuous throughout the domain except along the interface Γ_disc. For open discontinuities, some artificial discontinuities in the shape functions are introduced by the visibility criterion near the tip of the interface. This can be avoided by using approaches as described in [17].
• The resulting shape functions do not have the Kronecker-delta property, that is, N_i(x_j) ≠ δ_ij. If the new shape functions are employed near the boundary, special treatment of the boundary conditions is necessary. This is well-known in the context of meshfree methods; see [3, 10] for an overview of different techniques to apply boundary conditions there.

The proposed truncation of the supports is directly appropriate only in the case of strong discontinuities, where the function u(x) itself is discontinuous. In the case of weak discontinuities, where u(x) is continuous but its derivatives are not (e.g., wherever the coefficients of the underlying partial differential equation change), continuity of the function has to be enforced. One may, for example, enforce continuity by a penalty method or Lagrangian multipliers [4].
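For a fully cutting discontinuity, the visibility test (4.15) reduces to sign checks on the level-set function. A minimal sketch (our own simplified signature; the additional test on ξ needed for open discontinuities is only indicated in a comment):

```python
import numpy as np

def visible(x, y, psi):
    """Visibility check of (4.15) for a fully cutting discontinuity: y is
    visible from x iff both points lie on the same side of psi = 0. For an
    open discontinuity one would additionally require xi(xbar) > 0 at the
    intersection xbar of the segment x-y with the zero level of psi."""
    return psi(x) * psi(y) > 0

psi = lambda p: p[1]          # straight interface along the x-axis
a = np.array([0.5, 1.0])
b = np.array([2.0, 0.5])      # same side as a: visible
c = np.array([1.0, -1.0])     # opposite side: weight support truncated
```

A node whose position fails this test against an evaluation point simply contributes a zero weight there, which is exactly the support truncation of (4.14).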
5 Coupling

In order to mitigate the increased computational work which results from the larger supports, it is desirable to use the new shape functions
as little as possible. They are only needed near the discontinuity, because there the standard FE shape functions lose their favorable approximation properties. In all other parts, standard bilinear FE shape functions may be used. The approach of [7] is used for the coupling of the two different shape function classes; alternatively, the method of [12] could be used. The domain is decomposed into several subdomains as shown in Fig. 5. The set of cut elements is defined as

\[
Q = \left\{ k \mid \exists\, i, j \in I_k^{el} : \psi(x_i) \cdot \psi(x_j) < 0 \right\},
\tag{5.16}
\]

where I_k^el ∈ (N^+)^m, k = 1, ..., n_el, are the sets of m element nodes belonging to the n_el elements, and i, j are any two nodes of an element. The union of the elements in Q is called Ω^MLS. The set of neighboring elements is

the relative errors ‖u − u^h‖_E / ‖u‖_E and ‖u − u^h‖_{L2} / ‖u‖_{L2}, respectively.

For standard bilinear shape functions, the optimal rate of convergence in the L2-norm is of order 2, and in the energy-norm of order 1, as long as the discontinuities align with the element edges and the exact solution is suitable for a polynomial approximation. It is found that this order of convergence is also obtained with the new shape functions if the discontinuities cut arbitrarily through the elements. However, in the case of an approximation of a solution that contains a singularity, as in a crack problem, a polynomial basis only leads to a reduced convergence order [20] for both finite element and new shape functions.

6.3 Edge-Crack Problem

The first test case considers a square domain of size L × L with an edge-crack of length a, see Fig. 6 for a sketch. Along the boundary of the square domain, displacements are prescribed such that the well-known analytic solution of a near-tip crack field is the exact solution in the entire domain. The material is defined by E = 10000 and ν = 0.3; no Neumann boundary is present. The exact solution of this problem may be found e.g. in [9]. It is given in polar coordinates as

\[
\sigma_{11}(r, \theta) = \frac{k_1}{\sqrt{2 \pi r}} \cos\frac{\theta}{2} \left( 1 - \sin\frac{\theta}{2} \sin\frac{3\theta}{2} \right),
\tag{6.28}
\]
\[
\sigma_{22}(r, \theta) = \frac{k_1}{\sqrt{2 \pi r}} \cos\frac{\theta}{2} \left( 1 + \sin\frac{\theta}{2} \sin\frac{3\theta}{2} \right),
\tag{6.29}
\]
\[
\sigma_{12}(r, \theta) = \frac{k_1}{\sqrt{2 \pi r}} \cos\frac{\theta}{2} \sin\frac{\theta}{2} \cos\frac{3\theta}{2},
\tag{6.30}
\]

for the stress components, and

\[
u_1(r, \theta) = \frac{k_1}{2\mu} \sqrt{\frac{r}{2\pi}} \cos\frac{\theta}{2} \left( \kappa - 1 + 2 \sin^2\frac{\theta}{2} \right),
\tag{6.31}
\]
\[
u_2(r, \theta) = \frac{k_1}{2\mu} \sqrt{\frac{r}{2\pi}} \sin\frac{\theta}{2} \left( \kappa + 1 - 2 \cos^2\frac{\theta}{2} \right),
\tag{6.32}
\]
Figure 6. (a) Problem statement of the edge-crack problem and the exact displacement solution (grey) enlarged by a factor of 1000; (b) structured mesh with 9 × 9 elements and the discontinuity.
for the displacements. The Kolosov constant κ is defined as

\[
\kappa = 3 - 4\nu \quad \text{(plane strain)},
\qquad
\kappa = \frac{3 - \nu}{1 + \nu} \quad \text{(plane stress)},
\tag{6.33}
\]

and µ is the shear modulus. The parameter k_1 is called the stress intensity factor, where the index 1 refers to the present case of a mode-1 crack [9].

For the numerical computation, we choose L = 2 and a = 1, and k_1 = 1 is prescribed for the displacements along the Dirichlet boundary. Plane stress conditions are assumed. Only structured meshes have been used, with n_el^d elements per dimension, see Fig. 6b, where also the discontinuity is shown. For the convergence study, n_el^d is 9, 19, 29, 39, 49, 69, 99; consequently, the discontinuity never aligns with the elements.

At the crack tip, the consideration of the visibility criterion requires special attention. It is practically impossible to divide the elements near the crack tip for integration purposes such that they align with the modified supports of the weight functions resulting from the visibility criterion. This is only relevant in the element containing the crack tip and its neighboring elements. In these elements, instead of a decomposition into subelements for integration, the trapezoidal rule is used with a large number of integration points (n_Q = 20 × 20). As the element size of the affected elements decreases for higher element numbers, this does not degrade the convergence of the method [13].

Figure 7 shows the rate of convergence obtained with the new shape functions for this test case with a strong discontinuity. It is found that, due to the singularity at the crack tip, the order 2 in the L2-norm and 1 in the energy-norm cannot be obtained. However, comparing the results with a standard finite element computation with bilinear elements, where the crack aligns with the elements and a node is placed at the crack tip, it may be seen that the
Figure 7. Convergence result in the L2 -norm and energy-norm, and convergence of the approximated stress intensity factor k1 .
same convergence order is obtained. For the present test case, the obtained convergence rate is the best possible for a linear basis. In the presence of a singularity, identical rates of convergence were found for meshfree methods in [2]. Higher-order convergence may only be obtained with methods that enrich this basis by appropriate terms.

The stress intensity factor k_1 has been evaluated numerically in different integration domains of size b × b around the crack tip. The interaction integral is evaluated for this purpose, see [16]; k_1 should be constant, independent of the integration domain. The results may be seen in the right part of Fig. 7. A convergence towards the exact value k_1 = 1 may be observed, and the dependence on the size of the integration domain becomes virtually zero for increasing node numbers.

6.4 Bi-material Problem

This test case includes a weak discontinuity. Inside a circular plate of radius b, whose material is defined by E_1 = 10 and ν_1 = 0.3, a circular inclusion of radius a with a different material, E_2 = 1 and ν_2 = 0.25, is considered. The loading of the structure results from a linear displacement of the outer boundary: u_r(b, θ) = r and u_θ(b, θ) = 0. The situation is depicted in Fig. 8. The exact solution may be found in [21]. The stresses are given as

\[
\sigma_{rr}(r, \theta) = 2\mu\,\varepsilon_{rr} + \lambda\,(\varepsilon_{rr} + \varepsilon_{\theta\theta}),
\tag{6.34}
\]
\[
\sigma_{\theta\theta}(r, \theta) = 2\mu\,\varepsilon_{\theta\theta} + \lambda\,(\varepsilon_{rr} + \varepsilon_{\theta\theta}),
\tag{6.35}
\]

where the Lamé constants λ(x) and µ(x) are piecewise constant functions with a discontinuity at r = a. The strains are
Figure 8. (a) Problem statement of the bi-material problem (the grey area is the numerical domain), (b) the exact displacement solution.
\[
\varepsilon_{rr}(r, \theta) =
\begin{cases}
\left( 1 - \dfrac{b^2}{a^2} \right) \alpha + \dfrac{b^2}{a^2}, & 0 \le r \le a, \\[1ex]
\left( 1 + \dfrac{b^2}{r^2} \right) \alpha - \dfrac{b^2}{r^2}, & a < r \le b,
\end{cases}
\tag{6.36}
\]
\[
\varepsilon_{\theta\theta}(r, \theta) =
\begin{cases}
\left( 1 - \dfrac{b^2}{a^2} \right) \alpha + \dfrac{b^2}{a^2}, & 0 \le r \le a, \\[1ex]
\left( 1 - \dfrac{b^2}{r^2} \right) \alpha + \dfrac{b^2}{r^2}, & a < r \le b,
\end{cases}
\tag{6.37}
\]

and the displacements

\[
u_r(r, \theta) =
\begin{cases}
\left[ \left( 1 - \dfrac{b^2}{a^2} \right) \alpha + \dfrac{b^2}{a^2} \right] r, & 0 \le r \le a, \\[1ex]
\left( r - \dfrac{b^2}{r} \right) \alpha + \dfrac{b^2}{r}, & a < r \le b,
\end{cases}
\tag{6.38}
\]
\[
u_\theta(r, \theta) = 0.
\tag{6.39}
\]

The parameter α involved in these definitions is

\[
\alpha = \frac{(\lambda_1 + \mu_1 + \mu_2)\, b^2}{(\lambda_2 + \mu_2)\, a^2 + (\lambda_1 + \mu_1)(b^2 - a^2) + \mu_2\, b^2}.
\tag{6.40}
\]
For the numerical model, the domain is a square of size L × L with L = 2, the outer radius is chosen to be b = 2, and the inner radius a = 0.4 + ε. The parameter ε is set to 10^{-3}, and avoids, for the meshes used, that the level-set function is exactly zero at a node (in that case, the discontinuity would cut directly through that node). The exact stresses are prescribed along the boundaries of the square domain, and displacements are prescribed as u_1(0, ±1) = 0 and u_2(±1, 0) = 0. Plane strain conditions are assumed. In this test case a weak discontinuity is present, and the displacement field is continuous with discontinuous strains. The continuity information is considered by introducing a penalty term in the weak form (6.25):
Figure 9. (a) Structured mesh with 20 × 20 elements and the discontinuity, (b) convergence result for the bi-material problem.
γ ∫_{Γdisc} w^h ( u^h|_{Γdisc⁺} − u^h|_{Γdisc⁻} ) dΓ,   (6.41)
where Γdisc⁺ and Γdisc⁻ represent the two sides of the discontinuity. The penalty parameter is set to γ = 10⁵; this value enforces the continuity appropriately without increasing the condition number of the stiffness matrix too much. Structured meshes with n_el,d elements per dimension have been used, see Fig. 9a, where the discontinuity is also shown. For the convergence study, n_el,d is 10, 20, 30, 40, 50, 70, 100, 200. Fig. 9b shows the rates of convergence of the different methods employed for this test case with a weak discontinuity. The standard finite element result is obtained on a mesh aligned with the discontinuity. The XFEM results are displayed for two different extended bases; the results are taken from [21], see this reference for details. It may be seen that these XFEM results have convergence orders of 0.75 and 0.91 in the energy norm, respectively [21]. It is noted that the XFEM achieves optimal convergence for the same test case in [8] by employing special blending elements. With the new shape functions, the order is 2 in the L2-norm and 1 in the energy norm, which is the optimal convergence for shape functions that build a linear PU.
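The convergence orders quoted above are the slopes of the error curves in a log-log plot of error versus mesh size h. A minimal sketch of such a fit; the (h, error) pairs below are synthetic second-order data, not the actual values behind Fig. 9:

```python
import math

# Observed convergence order p from (h, error) pairs:
# fit log(e) ~ log(C) + p*log(h) by least squares and report the slope p.

def observed_order(hs, errors):
    """Least-squares slope of log(error) versus log(h)."""
    xs = [math.log(h) for h in hs]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

hs = [2.0 / n for n in (10, 20, 40, 80)]      # mesh sizes h = L / n_el,d
errors = [0.5 * h ** 2 for h in hs]           # synthetic second-order data
print(round(observed_order(hs, errors), 3))   # -> 2.0
```

With measured error norms in place of the synthetic data, the same slope computation yields the reported orders.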
7 Conclusion

A method is proposed which constructs special shape functions with the ability to represent discontinuous changes of field quantities along arbitrary interfaces. The set of shape functions builds a linear partition of unity. The shape functions are C⁰-continuous in the domain except along the interfaces, where they are C⁻¹-continuous (discontinuous). Close to the interfaces, the moving least-squares technique is employed for the construction of the shape functions, and specially designed mesh-based
weight functions are involved. In all other parts of the domain, standard FE shape functions are used. The coupling of the two types of shape functions is realized in transition areas by a ramp function. The transition areas depend directly on the interface position, which itself is defined by the level-set method. The resulting method shows favorable numerical properties in a weighted residual setting for the approximation of continua with strong and weak discontinuities. In the case of weak discontinuities, the continuity information of the primal variable has to be introduced into the weak form by a penalty or Lagrange multiplier method. It will be a matter of further investigation whether shape functions can be found which are also able to represent weak discontinuities from the outset, without the need for an indirect way to introduce the continuity information of the primal variable. Furthermore, it would be desirable to extend the linear partition of unity by appropriate terms such that singularities as those occurring in crack problems can be considered appropriately.

Acknowledgement. The support of the Office of Naval Research under grant N00014-98-1-0578 and the Army Research Office under grant W911NF-05-1-0049 is gratefully acknowledged.
References

1. P.M.A. Areias and T. Belytschko, Letter to the editor, Comp. Methods Appl. Mech. Engrg. 195 (2004), 1275–1276.
2. T. Belytschko, L. Gu, and Y.Y. Lu, Fracture and crack growth by element-free Galerkin methods, Modelling Simul. Material Science Eng. 2 (1994), 519–534.
3. T. Belytschko, Y. Krongauz, D. Organ, M. Fleming, and P. Krysl, Meshless methods: An overview and recent developments, Comp. Methods Appl. Mech. Engrg. 139 (1996), 3–47.
4. T. Belytschko, W.K. Liu, and B. Moran, Nonlinear finite elements for continua and structures, John Wiley & Sons, Chichester, 2000.
5. T. Belytschko, Y.Y. Lu, and L. Gu, Element-free Galerkin methods, Internat. J. Numer. Methods Engrg. 37 (1994), 229–256.
6. T. Belytschko, N. Moës, S. Usui, and C. Parimi, Arbitrary discontinuities in finite elements, Internat. J. Numer. Methods Engrg. 50 (2001), 993–1013.
7. T. Belytschko, D. Organ, and Y. Krongauz, A coupled finite element–element-free Galerkin method, Comput. Mech. 17 (1995), 186–195.
8. J. Chessa, H. Wang, and T. Belytschko, On the construction of blending elements for local partition of unity enriched finite elements, Internat. J. Numer. Methods Engrg. 57 (2003), 1015–1038.
9. H. Ewalds and R. Wanhill, Fracture mechanics, Edward Arnold, New York, 1989.
10. T.P. Fries and H.G. Matthies, Classification and overview of meshfree methods, Informatikbericht-Nr. 2003-03, Technical University Braunschweig, (http://opus.tu-bs.de/opus/volltexte/2003/418/), Brunswick, 2003.
11. A. Hansbo and P. Hansbo, A finite element method for the simulation of strong and weak discontinuities in solid mechanics, Comp. Methods Appl. Mech. Engrg. 193 (2004), 3523–3540.
12. A. Huerta and S. Fernández-Méndez, Enrichment and coupling of the finite element and meshless methods, Internat. J. Numer. Methods Engrg. 48 (2000), 1615–1636.
13. P. Krysl and T. Belytschko, Element-free Galerkin method: Convergence of the continuous and discontinuous shape functions, Comp. Methods Appl. Mech. Engrg. 148 (1997), 257–277.
14. P. Lancaster and K. Salkauskas, Surfaces generated by moving least squares methods, Math. Comput. 37 (1981), 141–158.
15. W.K. Liu, S. Li, and T. Belytschko, Moving least square reproducing kernel methods (I): Methodology and convergence, Comp. Methods Appl. Mech. Engrg. 143 (1997), 113–154.
16. N. Moës, J. Dolbow, and T. Belytschko, A finite element method for crack growth without remeshing, Internat. J. Numer. Methods Engrg. 46 (1999), 131–150.
17. D. Organ, M. Fleming, T. Terry, and T. Belytschko, Continuous meshless approximations for nonconvex bodies by diffraction and transparency, Comput. Mech. 18 (1996), 225–235.
18. S. Osher and R.P. Fedkiw, Level set methods and dynamic implicit surfaces, Springer Verlag, Berlin, 2003.
19. T. Rabczuk, T. Belytschko, S. Fernández-Méndez, and A. Huerta, Meshfree methods, Encyclopedia of Computational Mechanics (E. Stein, R. de Borst, T.J.R. Hughes, eds.), vol. 1, John Wiley & Sons, Chichester, 2004.
20. G. Strang and G. Fix, An analysis of the finite element method, Prentice-Hall, Englewood Cliffs, NJ, 1973.
21. N. Sukumar, D.L. Chopp, N. Moës, and T. Belytschko, Modeling holes and inclusions by level sets in the extended finite-element method, Comp. Methods Appl. Mech. Engrg. 190 (2001), 6183–6200.
22. O.C. Zienkiewicz and R.L. Taylor, The finite element method, vol. 1–3, Butterworth-Heinemann, Oxford, 2000.
A Meshless BEM for 2-D Stress Analysis in Linear Elastic FGMs

Xiaowei Gao¹, Chuanzeng Zhang², Jan Sladek³, and Vladimir Sladek³

¹ Department of Engineering Mechanics, Southeast University, Nanjing, 210096, P.R. China, [email protected]
² Department of Civil Engineering, University of Siegen, D-57068 Siegen, Germany, [email protected]
³ Institute of Construction and Architecture, Slovak Academy of Sciences, 84503 Bratislava, Slovakia, {jan.sladek, vladimir.sladek}@savba.sk
Summary. A meshless boundary element method (BEM) for stress analysis in two-dimensional (2-D), isotropic, continuously non-homogeneous, and linear elastic functionally graded materials (FGMs) is presented in this paper. It is assumed that Young’s modulus has an exponential variation, while Poisson’s ratio is taken to be constant. Since no fundamental solutions are yet available for general FGMs, fundamental solutions for isotropic, homogeneous, and linear elastic solids are applied, which results in a boundary-domain integral formulation. Normalized displacements are introduced in the formulation, which avoids displacement gradients in the domain-integrals. The radial integration method (RIM) is used to transform the domain-integrals into boundary integrals along the global boundary. The normalized displacements appearing in the domain-integrals are approximated by a series of prescribed basis functions, which are taken as a combination of radial basis functions and polynomials in terms of global coordinates. Numerical examples are presented to verify the accuracy and the efficiency of the present meshless BEM.
Key words: Meshless boundary element method (BEM), Radial integration method, Partial differential equations (PDEs) with variable coefficients, Functionally graded materials (FGMs).
1 Introduction

Meshfree or meshless methods have certain advantages over many domain-type discretization methods like the Finite Element Method (FEM), the Finite-Difference Method (FDM) and the Finite Volume Method (FVM). Among the many meshless methods, the global and the local weak-form formulations are often applied ([1]-[4]). The global approach uses a weak-form
formulation for the global domain, which requires background meshes for the integration of the weak form. In the local approach, the weak-form formulation is applied to local sub-domains, which does not require any background meshes ([2, 4, 5]). The meshless local Petrov-Galerkin (MLPG) method [2, 4] is a representative example of the local approach, where trial and test functions can be selected from different functional spaces. If the unit step function is chosen as the test function, then a local boundary-domain integral equation formulation for the sub-domains can be obtained. An alternative way to avoid domain-type discretizations is the boundary element method (BEM) or boundary integral equation method (BIEM). In the classical BEM, the problem dimension is reduced by one, which reduces the computational effort, especially for problems with complicated geometries and moving boundary value problems, where cumbersome mesh generation and re-meshing are needed in FEM and FDM. Unfortunately, the classical BEM with a boundary-only discretization is limited to problems where the fundamental solutions of the governing partial differential equations can be obtained in closed or simple forms. For isotropic, continuously non-homogeneous and linear elastic solids, the governing partial differential equations possess variable coefficients. Thus, the corresponding fundamental solutions in this case are either not available or too complicated. This fact brings significant difficulties to the extension and the application of the classical BEM to non-homogeneous linear elastic solids. This difficulty can be circumvented by using both the global and the local boundary-domain integral equation formulations. The global BEM uses fundamental solutions for homogeneous linear elastic solids, which contains a domain-integral due to the material non-homogeneity.
To transform the domain-integral into boundary integrals over the global boundary of the analyzed domain, the dual reciprocity method (DRM) [6] can be used, where radial basis functions (RBF) are applied. Another novel transform technique is the so-called radial integration method (RIM), which has been developed by Gao ([7, 8, 9]). In the local BEM, the analyzed domain is divided into sub-domains, for which local boundary-domain integral equations are formulated. Recent applications of the local BEM based on the meshless local Petrov-Galerkin (MLPG) method can be found for instance in references [10, 11, 12]. A mesh-free method based on the global weak-form formulation for elastostatic crack analysis in isotropic linear elastic FGMs has been presented by Rao and Rahman [13]. In this paper, a meshless BEM for 2-D stress analysis of isotropic, continuously non-homogeneous, and linear elastic FGMs is presented. The method uses a global boundary-domain integral equation formulation, where fundamental solutions for isotropic, homogeneous, and linear elastic solids are utilized. Normalized displacements are introduced in the formulation to avoid the appearance of displacement gradients in the domain-integrals. An exponential variation is assumed for Young’s modulus, while Poisson’s ratio is taken to be constant. To transform the domain-integral into boundary-integrals along the global boundary of the analyzed domain, the radial integration method
(RIM) developed by Gao [7, 8] is applied. The unknown normalized displacements are approximated by a series of prescribed basis functions, which are taken as a combination of radial basis functions and polynomials in terms of global coordinates [14]. The present meshless BEM uses interior nodes instead of domain-type meshes and it is therefore a meshfree or meshless method. To verify the accuracy and the efficiency of the present meshless BEM, numerical results are presented and discussed.
2 Formulation of Boundary-Domain Integral Equations

Let us consider an isotropic, continuously non-homogeneous and linear elastic solid with variable Young's modulus E(x) and constant Poisson's ratio ν. In this case, the elasticity tensor can be written as

c_ijkl(x) = µ(x) c⁰_ijkl,   (2.1)

where

µ(x) = E(x) / (2(1 + ν)),   c⁰_ijkl = 2ν/(1 − 2ν) δ_ij δ_kl + δ_ik δ_jl + δ_il δ_jk.   (2.2)
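A quick way to verify a tensor expression like (2.2) is to assemble c⁰_ijkl explicitly and test its symmetries numerically. The sketch below is our own check, not part of the paper; the value of ν is illustrative:

```python
import numpy as np

# Constant tensor c0_ijkl of Eq. (2.2), assembled entry by entry so the minor
# and major symmetries of the elasticity tensor can be checked.

def c0(nu):
    d = np.eye(3)
    c = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    c[i, j, k, l] = (2 * nu / (1 - 2 * nu) * d[i, j] * d[k, l]
                                     + d[i, k] * d[j, l] + d[i, l] * d[j, k])
    return c

c = c0(0.25)
print(np.allclose(c, c.transpose(1, 0, 2, 3)))  # c_ijkl = c_jikl  -> True
print(np.allclose(c, c.transpose(0, 1, 3, 2)))  # c_ijkl = c_ijlk  -> True
print(np.allclose(c, c.transpose(2, 3, 0, 1)))  # c_ijkl = c_klij  -> True
```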
In Eq. (2.2), ν is Poisson's ratio, µ(x) is the shear modulus, and δ_ij denotes the Kronecker delta. The stress tensor σ_ij and the displacement gradients u_i,j are related by Hooke's law

σ_ij = µ c⁰_ijkl u_k,l.   (2.3)

Here and in the following analysis, a comma after a quantity represents spatial derivatives and the conventional summation rule over repeated subscripts is applied. The traction vector t_i on the boundary of the considered domain is related to the stress components by

t_i = σ_ij n_j,   (2.4)
where n_j is the unit outward normal vector to the boundary Γ of the considered domain Ω. In the absence of body forces, the equilibrium equations σ_ij,j = 0 can be written in the weak form as

∫_Ω U_ij σ_jk,k dA = 0,   (2.5)

where U_ij is the weight function. Substitution of Eq. (2.3) into Eq. (2.5) and application of Gauss's divergence theorem yield

∫_Γ U_ij t_j ds − ∫_Γ T_ij µ u_j ds + ∫_Ω U_ir,sl c⁰_rsjl µ u_j dA + ∫_Ω U_ir,s c⁰_rsjl µ_,l u_j dA = 0,   (2.6)
where

T_ij = Σ_ijl n_l,   Σ_ijl = c⁰_rsjl U_ir,s = 2ν/(1 − 2ν) U_ik,k δ_jl + U_ij,l + U_il,j.   (2.7)
As weight function, the fundamental solution of the following governing equations is chosen:

c⁰_rsjl U_ir,sl = −δ_ij δ(x − y),   (2.8)

where δ(x − y) is the Dirac delta function. Substitution of Eq. (2.8) into Eq. (2.6) leads to

ũ_i(y) = ∫_Γ U_ij(x, y) t_j(x) ds − ∫_Γ T_ij(x, y) ũ_j(x) ds + ∫_Ω V_ij(x, y) ũ_j(x) dA,   (2.9)

where

ũ_i(x) = µ(x) u_i(x),   µ̃(x) = log µ(x).   (2.10)

The solution U_ij(x, y) of Eq. (2.8) is the Kelvin displacement fundamental solution for an isotropic, homogeneous and linear elastic solid with µ = 1, which, as well as the corresponding traction fundamental solution T_ij(x, y) and the stress fundamental solution Σ_ijl(x, y), can be found for instance in [15]. The fundamental solution V_ij in the domain-integral of Eq. (2.9) can be expressed as

V_ij(x, y) = Σ_ijl(x, y) µ̃_,l(x) = −1/(4π(1 − ν)r) { µ̃_,k r_,k [(1 − 2ν)δ_ij + 2 r_,i r_,j] + (1 − 2ν)(µ̃_,i r_,j − µ̃_,j r_,i) }.   (2.11)
It should be noted here that Eq. (2.9) is a representation integral for the displacement components at an arbitrary internal point. By taking the limit process y → Γ, boundary integral equations for boundary points can be obtained (e.g., [15]). Unlike many previous BEM formulations for isotropic, non-homogeneous and linear elastic solids (e.g., [10, 16]), Eq. (2.9) is formulated in terms of the tractions t_j and the normalized displacements ũ_j. The domain-integral contains only the normalized displacements instead of the displacement gradients. This feature not only facilitates the numerical implementation, but also results in highly accurate numerical results. For an exponential variation of Young's modulus or shear modulus such as that used in [10] and in this analysis, it can be seen from Eq. (2.10) that µ̃_,j is constant, and Eq. (2.11) thus becomes much simpler to integrate.
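That µ̃_,j is constant for an exponential grading can be checked in two lines: µ̃ = log µ is then linear, so its gradient is the grading exponent. The values of µ0 and β below are illustrative, not taken from the paper:

```python
import math

# For mu(x) = mu0 * exp(beta * y), mu_tilde = log(mu) = log(mu0) + beta * y,
# so the gradient mu_tilde_,y equals beta everywhere.

mu0, beta = 4000.0, 2.3

def mu_tilde(y):
    return math.log(mu0 * math.exp(beta * y))

# Central finite-difference gradient at several points: all equal beta.
h = 1e-6
grads = [(mu_tilde(y + h) - mu_tilde(y - h)) / (2 * h) for y in (0.0, 0.1, 0.25)]
print(all(abs(g - beta) < 1e-6 for g in grads))  # True
```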
3 Transformation of Domain-Integrals into Boundary Integrals

In this analysis, the radial integration method (RIM) developed by Gao [7, 8, 9] is used to transform the domain-integral of Eq. (2.9) into boundary integrals. For this purpose, the normalized displacements in the domain-integral of Eq. (2.9) are approximated by a series of prescribed basis functions as commonly used in the dual reciprocity method (DRM) [6]. As shown by many previous investigations (e.g., [14]), the combination of radial basis functions and polynomials in terms of global coordinates can give satisfactory results. Therefore, the normalized displacements ũ_i(x) are approximated by

ũ_i(x) = Σ_A α^A_i φ^A(R) + a^k_i x_k + a^0_i,   (3.12)
Σ_A α^A_i = 0,   (3.13)
Σ_A α^A_i x^A_j = 0,   (3.14)

where R = |x − x^A| is the distance from the application point A to the field point x, α^A_i and a^k_i are unknown coefficients to be determined, and x^A denotes the coordinates of the application point A; the application points consist of all boundary nodes and some internal nodes. The commonly used radial basis functions φ^A(R) can be found in many references (e.g., [7], [14]). In this analysis, we use the following 4-th order spline-type RBF [2]:

φ^A(R) = { 1 − 6(R/d_A)² + 8(R/d_A)³ − 3(R/d_A)⁴,   0 ≤ R ≤ d_A,
         { 0,                                        d_A ≤ R,       (3.15)

where d_A is the support size for the application point A. The unknown coefficients α^A_i and a^k_i can be determined by applying Eqs. (3.12)-(3.14) at every node. This leads to a set of linear algebraic equations, which can be written in matrix form as

ũ = φ · α,   (3.16)

where α is a vector consisting of the coefficients α^A_i for all application points together with a^k_i and a^0_i. If no two nodes coincide, i.e., share the same coordinates, the matrix φ is invertible and thus

α = φ⁻¹ · ũ.   (3.17)
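The collocation system (3.12)-(3.14) can be sketched for a scalar field on a one-dimensional node set; the nodes, support size d_A, and sampled field below are illustrative choices of ours, not data from the paper:

```python
import numpy as np

# Collocation system (3.12)-(3.14) with the 4th-order spline RBF of Eq. (3.15).

def phi(R, dA):
    t = R / dA
    return np.where(R <= dA, 1 - 6 * t**2 + 8 * t**3 - 3 * t**4, 0.0)

nodes = np.linspace(0.0, 1.0, 9)
dA = 0.5
f = 2.0 * nodes + 1.0                       # field to approximate (linear here)

n = len(nodes)
A = np.zeros((n + 2, n + 2))
for i, x in enumerate(nodes):
    A[i, :n] = phi(np.abs(x - nodes), dA)   # sum_A alpha_A phi^A(R)
    A[i, n:] = [x, 1.0]                     # + a^1 x + a^0        (3.12)
A[n, :n] = 1.0                              # sum_A alpha_A = 0    (3.13)
A[n + 1, :n] = nodes                        # sum_A alpha_A x_A = 0 (3.14)
rhs = np.concatenate([f, [0.0, 0.0]])

# The system is consistent by construction (alpha = 0, a^1 = 2, a^0 = 1 is a
# solution), so a least-squares solve recovers an exact interpolant.
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(np.allclose(A[:n] @ coef, f))  # interpolation at the nodes -> True
```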
Substitution of Eq. (3.12) into the domain-integral of Eq. (2.9) yields

∫_Ω V_ij ũ_j dA = α^A_j ∫_Ω V_ij φ^A dA + a^k_j ∫_Ω V_ij x_k dA + a^0_j ∫_Ω V_ij dA.   (3.18)
By using the RIM [7, 8, 9], the domain-integrals on the right-hand side of Eq. (3.18) can be transformed into boundary integrals as

∫_Ω V_ij ũ_j dA = α^A_j ∫_Γ (1/r)(∂r/∂n) F^A_ij ds + a^k_j ∫_Γ (r_,k/r)(∂r/∂n) F¹_ij ds + (a^k_j y_k + a^0_j) ∫_Γ (1/r)(∂r/∂n) F⁰_ij ds,   (3.19)

where

F^A_ij = ∫₀^r r V_ij φ^A dr,   (3.20)
F¹_ij = ∫₀^r r² V_ij dr,   (3.21)
F⁰_ij = ∫₀^r r V_ij dr.   (3.22)

Note here that in the radial integrals (3.20)-(3.22) the term r_,i is constant [8], and the following relation is used for the transformation from x to r:

x_i = y_i + r_,i r.   (3.23)
The radial integrals (3.20)-(3.22) are regular and can be computed numerically by a standard Gaussian quadrature formula for every field point [9].
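Since the radial integrals are regular one-dimensional integrals, a standard Gauss-Legendre rule mapped to [0, r] suffices. The sketch below uses a polynomial stand-in integrand instead of the actual kernel V_ij:

```python
import numpy as np

# Evaluate int_0^r f(s) ds with Gauss-Legendre quadrature mapped to [0, r],
# the kind of rule used for the radial integrals (3.20)-(3.22).

def radial_integral(f, r, npts=8):
    """Approximate int_0^r f(s) ds with an npts-point Gauss-Legendre rule."""
    xi, w = np.polynomial.legendre.leggauss(npts)   # nodes/weights on [-1, 1]
    s = 0.5 * r * (xi + 1.0)                        # map to [0, r]
    return 0.5 * r * np.dot(w, f(s))

r = 1.7
approx = radial_integral(lambda s: s**2, r)
print(abs(approx - r**3 / 3.0) < 1e-12)  # True: polynomial integrated exactly
```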
4 System of Linear Algebraic Equations

After numerical integration and substitution of Eq. (3.17) into Eq. (3.19), the domain-integral can be expressed in terms of the normalized displacement vector ũ at all nodes. If the BEM model consists of N_b boundary nodes and N_i internal nodes, then, after invoking the boundary conditions, Eq. (2.9) leads to the following system of linear algebraic equations:

A_b x_b = y_b + V_b ũ   (4.24)

for boundary nodes, and

ũ_i = A_i x_b + y_i + V_i ũ   (4.25)

for internal nodes. In Eqs. (4.24) and (4.25), the sizes of the matrices A_b and A_i are 2N_b × 2N_b and 2N_i × 2N_b, while V_b and V_i are 2N_b × 2N_t and 2N_i × 2N_t with N_t = N_b + N_i, respectively. The vector x_b with a size of 2N_b × 1 contains the unknown normalized boundary displacements or the unknown boundary tractions. The vector ũ with a size of 2N_t × 1 consists of the unknown normalized boundary displacements and all normalized internal
displacements. It should be noted here that, after invoking the boundary conditions, the columns of the matrices V_b and V_i corresponding to the known boundary displacement nodes should be zero. By combining Eqs. (4.24) and (4.25) we obtain

( [ A_b    0 ]     [ V_b ] )  { x_b }   { y_b }
( [ −A_i   I ]  −  [ V_i ] )  { ũ_i } = { y_i },   (4.26)

where I is the identity matrix. By solving Eq. (4.26) numerically, the boundary unknowns x_b and the normalized internal displacements ũ_i can be obtained. Subsequently, the true displacements can be computed by using the first equation of Eq. (2.10).
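The block structure of Eq. (4.26) can be assembled directly; the sketch below uses random stand-in matrices of the correct sizes only to illustrate the assembly and the solve, not actual BEM data:

```python
import numpy as np

# Assemble and solve a system with the block shape of Eq. (4.26):
# ([[Ab, 0], [-Ai, I]] - [[Vb], [Vi]]) {xb; ui} = {yb; yi}.

rng = np.random.default_rng(0)
Nb2, Ni2 = 6, 4                  # 2*Nb and 2*Ni degrees of freedom
Nt2 = Nb2 + Ni2                  # 2*Nt

Ab = rng.standard_normal((Nb2, Nb2)) + 5.0 * np.eye(Nb2)  # kept well-conditioned
Ai = rng.standard_normal((Ni2, Nb2))
Vb = 0.1 * rng.standard_normal((Nb2, Nt2))
Vi = 0.1 * rng.standard_normal((Ni2, Nt2))
yb = rng.standard_normal(Nb2)
yi = rng.standard_normal(Ni2)

lhs = np.block([[Ab, np.zeros((Nb2, Ni2))],
                [-Ai, np.eye(Ni2)]]) - np.vstack([Vb, Vi])
rhs = np.concatenate([yb, yi])
sol = np.linalg.solve(lhs, rhs)
xb, ui = sol[:Nb2], sol[Nb2:]    # boundary unknowns and internal displacements

print(np.allclose(lhs @ sol, rhs))  # True
```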
5 Computation of Stresses

Once the unknown boundary data and the internal normalized displacements are obtained from Eq. (4.26), stress components at boundary and internal nodes can be computed by using these quantities. By taking Eq. (2.10) into account, the generalized Hooke's law (2.3) can be rewritten as

σ_ij = µ c⁰_ijkl ∂u_k/∂x_l = µ c⁰_ijkl ∂/∂x_l (ũ_k/µ) = c⁰_ijkl ∂ũ_k/∂x_l − c⁰_ijkl ũ_k µ̃_,l.   (5.27)

From Eq. (2.9) we obtain

∂ũ_k/∂y_l = ∫_Γ ∂U_kj(x, y)/∂y_l t_j(x) ds − ∫_Γ ∂T_kj(x, y)/∂y_l ũ_j(x) ds + ∫_Ω ∂V_kj(x, y)/∂y_l ũ_j(x) dA,   (5.28)
where all integrals exist in the sense of Cauchy principal values except the last one, which will be discussed later. Substituting Eqs. (2.7) and (2.11) into Eq. (5.28) and using the relation ∂(·)/∂y_l = −∂(·)/∂x_l, the following integral representation for the stress components can be obtained from Eq. (5.27):

σ_ij(y) = ∫_Γ U_ijk(x, y) t_k(x) ds − ∫_Γ T_ijk(x, y) ũ_k(x) ds + ∫_Ω V_ijk(x, y) ũ_k(x) dA − c⁰_ijkl ũ_k µ̃_,l,   (5.29)

where the kernels U_ijk and T_ijk have the same expressions as in the conventional BEM formulation with µ = 1 (e.g. in [15]), and

V_ijk(x, y) = −c⁰_ijmn ∂V_mk(x, y)/∂x_n.   (5.30)
By using the second equation of Eq. (2.2), the last term in Eq. (5.27) or (5.29) can be written as

c⁰_ijkl ũ_k µ̃_,l = ( 2ν/(1 − 2ν) δ_ij µ̃_,k + δ_ik µ̃_,j + δ_jk µ̃_,i ) ũ_k.   (5.31)

Since the differentiation of V_mk(x, y) causes a strong singularity in the domain-integral of Eq. (5.29), a jump term exists. Following the procedure described in [15], the jump term can be obtained by cutting out an infinitesimal circle around the source point y; finally, after putting the jump term and the term given in Eq. (5.31) together, equation (5.29) can be rewritten as

σ_ij(y) = ∫_Γ U_ijk(x, y) t_k(x) ds − ∫_Γ T_ijk(x, y) ũ_k(x) ds + ∫_Ω V_ijk(x, y) ũ_k(x) dA + F_ijk(y) ũ_k(y),   (5.32)

where

V_ijk = 1/(2π(1 − ν)r²) { 2µ̃_,m r_,m [(1 − 2ν)δ_ij r_,k + ν(δ_ik r_,j + δ_jk r_,i) − 4 r_,i r_,j r_,k] + 2ν(µ̃_,i r_,j + µ̃_,j r_,i) r_,k − (1 − 4ν) µ̃_,k δ_ij + (1 − 2ν)(2 µ̃_,k r_,i r_,j + µ̃_,j δ_ik + µ̃_,i δ_jk) },   (5.33)

F_ijk = −1/(4(1 − ν)) (δ_ij µ̃_,k + δ_ik µ̃_,j + δ_jk µ̃_,i).   (5.34)
Now the domain-integral in Eq. (5.32) exists in the sense of Cauchy principal values. This means that cutting out an infinitesimal circle around the source point y does not change the integration result. Based on this property, the conventional singularity-separation technique [15] can be applied to regularize this strongly singular domain-integral. The domain-integral in Eq. (5.32) can be computed by using the radial integration method as described in Section 3. Note here that the procedure is exactly the same as in the treatment of domain-integrals with initial stresses arising in plasticity problems [8]. It should be remarked that the stress integral representation (5.32) is only applicable to the computation of stresses at internal points. For boundary points, when the source point y approaches the field point x, hypersingularities arise. Although the hypersingular integrals can be evaluated directly by using the methods suggested in [9] and [17], the Fortran subroutines presented in [15], based on the 'traction-recovery' technique, are adopted in this analysis to compute the stresses at the boundary points.
6 Numerical Example

We consider an isotropic, continuously non-homogeneous and linear elastic rectangular plate with the dimensions L × W as depicted in Fig. 6.1. The FGM plate is subjected to a uniform tensile stress loading σ = 1. The plate is discretized into 48 equally spaced linear boundary elements: 20 along the longitudinal and 4 along the transversal direction, with a total of 48 boundary nodes. An exponential variation of Young's modulus in the transversal direction is considered, which is described by

E(x) = E_0 e^{βy},   β = (1/W) log(E_w/E_0),   (6.35)

where E_0 = 10000 and E_w = 20000. Plane stress conditions are assumed and Poisson's ratio is taken as ν = 0.25.

Figure 1. A rectangular FGM plate (L = 1, W = 0.3) subjected to a uniform tensile loading σ = 1.
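The grading (6.35) is fixed by the two endpoint values: β is chosen so that E(0) = E_0 and E(W) = E_w. A two-line check with the stated values:

```python
import math

# Exponential grading of Eq. (6.35) with the values stated in the text.
E0, Ew, W = 10000.0, 20000.0, 0.3
beta = math.log(Ew / E0) / W

def E(y):
    return E0 * math.exp(beta * y)

print(abs(E(0.0) - E0) < 1e-9)   # True
print(abs(E(W) - Ew) < 1e-9)     # True
```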
Table 6.1 and Fig. 6.3 show a comparison of the displacement component u_x at y = 0.3 computed by the present meshless BEM with the results obtained by Sladek et al. [10], who used a meshless local Petrov-Galerkin method. To investigate the influence of the number and the distribution of internal nodes on the numerical results, 48 boundary nodes are used, while the number of internal nodes is varied from 57 to 0 as shown in Fig. 6.2. The comparison shows that our numerical results agree very well with those of Sladek et al. [10]. In addition, it is seen that the present meshless BEM is quite insensitive to the selected node number and distribution, at least for the case considered here. Even with very few or no internal nodes, the present meshless BEM can still yield very accurate numerical results. In Tabs. 6.2 and 6.3 as well as Figs. 6.4 and 6.5, numerical results for the stress component σ_11 at x = 0.0 and x = 0.75 obtained by the present meshless BEM are presented and compared with those of Sladek et al. [10]. Numerical calculations are carried out by using 48 boundary nodes and 57 internal nodes (see Fig. 6.2a). The present numerical results for the stress component σ_11 show very good agreement with those obtained by Sladek et al. [10].
Figure 2. Boundary and internal nodes.
Table 1. Comparison of computed displacement component u_x at y = 0.3.

x     homog.   Ref. [10]   Ni = 57     Ni = 30     Ni = 9      Ni = 3      Ni = 0
0.00  0        0           0           0           0           0           0
0.05  5.0e-6   3.44353e-6  3.46979e-6  3.46983e-6  3.46897e-6  3.47958e-6  3.47949e-6
0.10  1.0e-5   6.88585e-6  6.94299e-6  6.94292e-6  6.94249e-6  6.96360e-6  6.96383e-6
0.15  1.5e-5   1.03293e-5  1.04200e-5  1.04196e-5  1.04200e-5  1.04513e-5  1.04530e-5
0.20  2.0e-5   1.37755e-5  1.38985e-5  1.38983e-5  1.38987e-5  1.39398e-5  1.39447e-5
0.25  2.5e-5   1.72253e-5  1.73781e-5  1.73783e-5  1.73779e-5  1.74287e-5  1.74386e-5
0.30  3.0e-5   2.06784e-5  2.08579e-5  2.08585e-5  2.08579e-5  2.09175e-5  2.09338e-5
0.35  3.5e-5   2.41341e-5  2.43360e-5  2.43367e-5  2.43372e-5  2.44036e-5  2.44280e-5
0.40  4.0e-5   2.75896e-5  2.78096e-5  2.78103e-5  2.78121e-5  2.78825e-5  2.79179e-5
0.45  4.5e-5   3.10426e-5  3.12738e-5  3.12747e-5  3.12766e-5  3.13487e-5  3.13981e-5
0.50  5.0e-5   3.44827e-5  3.47214e-5  3.47222e-5  3.47222e-5  3.47947e-5  3.48605e-5
0.55  5.5e-5   3.78967e-5  3.81418e-5  3.81422e-5  3.81386e-5  3.82104e-5  3.82942e-5
0.60  6.0e-5   4.12715e-5  4.15206e-5  4.15210e-5  4.15125e-5  4.15815e-5  4.16840e-5
0.65  6.5e-5   4.45876e-5  4.48380e-5  4.48389e-5  4.48249e-5  4.48888e-5  4.50109e-5
0.70  7.0e-5   4.78189e-5  4.80700e-5  4.80710e-5  4.80511e-5  4.81082e-5  4.82512e-5
0.75  7.5e-5   5.09342e-5  5.11884e-5  5.11897e-5  5.11634e-5  5.12123e-5  5.13781e-5
0.80  8.0e-5   5.39030e-5  5.41644e-5  5.41663e-5  5.41338e-5  5.41747e-5  5.43644e-5
0.85  8.5e-5   5.67056e-5  5.69748e-5  5.69774e-5  5.69430e-5  5.69776e-5  5.71886e-5
0.90  9.0e-5   5.93457e-5  5.96176e-5  5.96197e-5  5.95901e-5  5.96209e-5  5.98480e-5
0.95  9.5e-5   6.18648e-5  6.21251e-5  6.21267e-5  6.21042e-5  6.21333e-5  6.23695e-5
1.00  1.0e-4   6.43231e-5  6.45682e-5  6.45699e-5  6.45503e-5  6.45787e-5  6.48155e-5
x Figure 3. Comparison of computed displacement component ux at y = 0.3.
Table 2. Comparison of numerical results (x = 0.0, 57 internal nodes).

y      Sladek et al. [10]  This work
0.000  0.685000            0.693971
0.075  0.814000            0.821044
0.150  0.969000            0.977980
0.225  1.155000            1.167289
0.300  1.375000            1.383252
Figure 4. Comparison of numerical results (x = 0.0, 57 internal nodes).
Table 3. Comparison of numerical results (x = 0.75, 57 internal nodes).

y      Sladek et al. [10]  This work
0.000  0.758000            0.772665
0.075  0.884000            0.897074
0.150  0.995000            1.004405
0.225  1.102000            1.105088
0.300  1.220000            1.218766
Figure 5. Comparison of numerical results (x = 0.75, 57 internal nodes).
The effects of the number and the distribution of the internal nodes used on the computed stress component σ_11 are shown in Tabs. 6.4 and 6.5 as well as Figs. 6.6 and 6.7. Numerical calculations are performed for 48 fixed boundary nodes and the different sets of internal nodes shown in Fig. 6.2. Here again, it can be concluded that the present meshless BEM for the stress computation is quite insensitive to the selected node number and distribution. This confirms the accuracy, the efficiency, and the robustness of the present meshless BEM.

Table 4. Numerical results for different internal nodes (x = 0.0).

y      Ni = 57   Ni = 30   Ni = 9    Ni = 3    Ni = 0
0.000  0.693971  0.693672  0.692579  0.686051  0.686164
0.075  0.821044  0.821500  0.821216  0.816922  0.817061
0.150  0.977980  0.978044  0.978779  0.979832  0.979900
0.225  1.167289  1.166979  1.167411  1.171966  1.171949
0.300  1.383252  1.383287  1.382814  1.387026  1.386973
Table 5. Numerical results for different internal nodes (x = 0.75).

y      Ni = 57   Ni = 30   Ni = 9    Ni = 3
0.000  0.772665  0.772592  0.773472  0.776418
0.075  0.897074  0.897191  0.898036  0.900220
0.150  1.004405  1.004311  1.004399  1.004362
0.225  1.105088  1.105005  1.104611  1.102561
0.300  1.218766  1.219057  1.216538  1.213336
Figure 6. Numerical results for different internal nodes (x = 0.0).
7 Conclusions

In this paper, a meshless BEM for stress analysis in 2-D, isotropic, non-homogeneous and linear elastic solids is presented. Normalized displacements are introduced in the boundary-domain integral formulation, which avoids displacement gradients in the domain-integrals. To transform domain-integrals into boundary integrals along the global boundary of the analyzed domain, the radial integration method of Gao [7, 8, 9] is applied, which results in a meshless scheme. The normalized displacements in the domain-integrals are approximated by a series of prescribed basis functions, which are taken as a combination of radial basis functions and polynomials in terms of global coordinates. A 4-th order spline-type radial basis function is chosen in the present analysis. Numerical results are presented to show the accuracy and efficiency of the present meshless BEM. The present meshless BEM is easy to implement, very accurate, and quite insensitive to the selected node number and distribution. For simple boundary value problems as shown in this analysis, very few or even no internal nodes are required to achieve sufficient accuracy in the stress computation.
Figure 7. Numerical results for different internal nodes (x = 0.75).
Acknowledgement. The support by the German Academic Exchange Service (DAAD) and the Ministry of Education of Slovak Republic under the project number D/04/25722 is gratefully acknowledged.
References

1. T. Belytschko, Y. Krongauz, D. Organ, M. Fleming and P. Krysl, Meshless Methods: An Overview and Recent Developments, Comp. Meth. Appl. Mech. Eng. 139 (1996), 3–47.
2. S. N. Atluri and S. Shen, The Meshless Local Petrov-Galerkin (MLPG) Method, Tech Science Press, 2002.
3. G. R. Liu, Mesh Free Methods: Moving Beyond the Finite Element Method, CRC Press, 2003.
4. S. N. Atluri, The Meshless Method (MLPG) for Domain & BIE Discretizations, Tech Science Press, 2004.
5. S. E. Mikhailov, Localized Boundary-Domain Integral Formulations for Problems With Variable Coefficients, Eng. Anal. Bound. Elem. 26 (2002), 681–690.
6. D. Nardini and C. A. Brebbia, A New Approach for Free Vibration Analysis Using Boundary Elements, Boundary Element Methods in Engineering (C. A. Brebbia, ed.), Springer, Berlin, 1982, pp. 312–326.
7. X. W. Gao, The Radial Integration Method for Evaluation of Domain Integrals With Boundary-Only Discretization, Eng. Anal. Bound. Elem. 26 (2002), 905–916.
8. X. W. Gao, A Boundary Element Method Without Internal Cells for Two-Dimensional and Three-Dimensional Elastoplastic Problems, ASME Journal of Applied Mechanics 69 (2002), 154–160.
9. X. W. Gao, Evaluation of Regular and Singular Domain Integrals With Boundary-Only Discretization - Theory and Fortran Code, Journal of Computational and Applied Mathematics 175 (2005), 265–290.
A Meshless BEM for 2-D Stress Analysis in Linear Elastic FGMs
10. J. Sladek, V. Sladek and S. N. Atluri, Local Boundary Integral Equation (LBIE) Method for Solving Problems of Elasticity With Nonhomogeneous Material Properties, Computational Mechanics 24 (2000), 456–462.
11. J. Sladek, V. Sladek and Ch. Zhang, Application of Meshless Local Petrov-Galerkin (MLPG) Method to Elastodynamic Problems in Continuously Nonhomogeneous Solids, Computer Modeling in Eng. & Sciences 4 (2003), 637–648.
12. V. Sladek, J. Sladek and Ch. Zhang, Local Integro-Differential Equations With Domain Elements for Numerical Solution of PDE With Variable Coefficients, J. Eng. Math. 51 (2005), 261–282.
13. B. N. Rao and S. Rahman, Mesh-Free Analysis of Cracks in Isotropic Functionally Graded Materials, Eng. Fract. Mech. 70 (2003), 1–27.
14. M. A. Golberg, C. S. Chen and H. Bowman, Some Recent Results and Proposals for the Use of Radial Basis Functions in the BEM, Eng. Anal. Bound. Elem. 23 (1999), 285–296.
15. X. W. Gao and T. G. Davies, Boundary Element Programming in Mechanics, Cambridge University Press, Cambridge, 2002.
16. V. Sladek, J. Sladek and I. Markechova, An Advanced Boundary Element Method for Elasticity Problems in Nonhomogeneous Media, Acta Mechanica 97 (1993), 71–90.
17. X. W. Gao, Numerical Evaluation of Two-Dimensional Singular Boundary Integrals - Theory and Fortran Code, Journal of Computational and Applied Mathematics 188 (2006), 44–64.
A Particle-Partition of Unity Method Part VII: Adaptivity

Michael Griebel and Marc Alexander Schweitzer

Institut für Numerische Simulation, Universität Bonn, Wegelerstr. 6, D-53115 Bonn, Germany
{griebel, schweitzer}@ins.uni-bonn.de

Summary. This paper is concerned with the adaptive multilevel solution of elliptic partial differential equations using the partition of unity method. While much of the work on meshfree methods is concerned with convergence studies, the issues of fast solution techniques for the discrete systems of equations and the construction of optimal-order algorithms are rarely addressed. However, the treatment of large-scale real-world problems by meshfree techniques will become feasible only with the availability of fast adaptive solvers. The adaptive multilevel solver proposed in this paper is a main step toward this goal. In particular, we present an h-adaptive multilevel solver for the partition of unity method which employs a subdomain-type error indicator to control the refinement and an efficient multilevel solver within a nested iteration approach. The results of our numerical experiments in two and three space dimensions clearly show the efficiency of the proposed scheme.
Key words: Meshfree method, partition of unity method, adaptive refinement, multilevel method.
1 Introduction

One main purpose of this paper is to investigate adaptive h-type refinement strategies for meshfree methods and their interplay with multilevel solution techniques; in particular, we address these issues for the partition of unity method (PUM) [1, 15]. To this end, we employ a classical a posteriori error estimation technique due to Babuška and Rheinboldt [2]. The resulting error indicator is used to steer the local refinement procedure in our tree-based cover construction. To obtain an adaptive solver with optimal complexity we combine the multilevel techniques developed in [7, 9, 15] for the PUM with the nested iteration approach [12]. The results of our numerical experiments in two and three space dimensions demonstrate the effectiveness of the proposed approach and its overall efficiency.
In this paper we restrict ourselves to the study of a scalar elliptic partial differential equation, namely we consider the diffusion problem

  −∆u = f      in Ω ⊂ R^d,
     u = g_D   on Γ_D ⊂ ∂Ω,                  (1.1)
  ∂u/∂n = g_N  on Γ_N = ∂Ω \ Γ_D.
The remainder of this paper is organized as follows. In section 2 we give a short overview of the PUM and its convergence properties. Furthermore, we outline the implementation of essential boundary conditions using Nitsche’s method and the Galerkin discretization of the arising variational problem. The main theme of this paper, the adaptive meshfree multilevel solution of an elliptic PDE, is presented in section 3. There, we introduce our refinement algorithm and show that it leads to point sets and covers that are consistent with the multilevel construction of [9]. Moreover, we present the construction of our error indicator and discuss how we obtain an adaptive multilevel solver with optimal complexity using the nested iteration approach. Then, we present the results of our numerical experiments in two and three space dimensions in section 4. These results clearly demonstrate the efficiency of the proposed scheme. Finally, we conclude with some remarks in section 5.
2 Partition of Unity Method

In the following, we shortly review the construction of a partition of unity space V^PU and the Galerkin discretization of an elliptic partial differential equation using V^PU as trial and test space; see [15] for details.

2.1 Construction of a Partition of Unity Space

In a PUM, we define a global approximation u^PU simply as a weighted sum of local approximations u_i,

  u^PU(x) := Σ_{i=1}^N φ_i(x) u_i(x).                  (2.2)
These local approximations u_i are completely independent of each other, i.e., the local supports ω_i := supp(u_i), the local basis {ψ_i^n} and the order of approximation p_i for every single u_i := Σ_n u_i^n ψ_i^n ∈ V_i^{p_i} can be chosen independently of all other u_j. Here, the functions φ_i form a partition of unity (PU). They are used to splice the local approximations u_i together in such a way that the global approximation u^PU benefits from the local approximation orders p_i yet still fulfills global regularity conditions. Hence, the global approximation space on Ω is defined as
  V^PU := Σ_i φ_i V_i^{p_i} = Σ_i φ_i span{ψ_i^n} = span{φ_i ψ_i^n}.                  (2.3)
The starting point in the implementation of a PUM approximation space V^PU is the construction of an appropriate PU, see Definition 1 and Definition 2.

Definition 1 (Partition of Unity). Let Ω ⊂ R^d be an open set. Let {φ_i} be a collection of Lipschitz functions with

  0 ≤ φ_i(x) ≤ 1,   Σ_i φ_i ≡ 1 on Ω,
  ‖φ_i‖_{L^∞(R^d)} ≤ C_∞,   ‖∇φ_i‖_{L^∞(R^d)} ≤ C_∇ / diam(ω_i),

where ω_i := supp(φ_i), and C_∞ and C_∇ are two positive constants. The sets ω_i are called patches and their collection is referred to as a cover C_Ω := {ω_i} of the domain Ω.

For PUM spaces (2.3) which employ such a PU {φ_i} there hold the following error estimates due to [1].

Theorem 1. Let Ω ⊂ R^d be given. Let {φ_i} be a partition of unity according to Definition 1. Let us further introduce the covering index λ_{C_Ω} : Ω → N such that

  λ_{C_Ω}(x) = card({i | x ∈ ω_i})                  (2.4)

and let us assume that λ_{C_Ω}(x) ≤ M ∈ N for all x ∈ Ω. Let a collection of local approximation spaces V_i^{p_i} = span{ψ_i^n} ⊂ H^1(Ω ∩ ω_i) be given. Let u ∈ H^1(Ω) be the function to be approximated. Assume that the local approximation spaces V_i^{p_i} have the following approximation properties: On each patch Ω ∩ ω_i, the function u can be approximated by a function u_i ∈ V_i^{p_i} such that

  ‖u − u_i‖_{L^2(Ω∩ω_i)} ≤ ε̂_i   and   ‖∇(u − u_i)‖_{L^2(Ω∩ω_i)} ≤ ε̃_i                  (2.5)

hold for all i. Then the function

  u^PU := Σ_{ω_i∈C_Ω} φ_i u_i ∈ V^PU ⊂ H^1(Ω)

satisfies the global estimates

  ‖u − u^PU‖_{L^2(Ω)} ≤ √M C_∞ ( Σ_{ω_i∈C_Ω} ε̂_i² )^{1/2},                  (2.6)

  ‖∇(u − u^PU)‖_{L^2(Ω)} ≤ √(2M) ( Σ_{ω_i∈C_Ω} [ (C_∇/diam(ω_i))² ε̂_i² + C_∞² ε̃_i² ] )^{1/2}.                  (2.7)
The estimates (2.6) and (2.7) show that the global error is of the same order as the local errors provided that the covering index is bounded independent of the size of the cover, i.e. M = O(1). Note that we need to assume a slightly stronger condition to obtain a sparse linear system by the Galerkin approach. To this end, we introduce the notion of a local neighborhood or local cover C_{Ω,i} ⊂ C_Ω of a particular cover patch ω_i ∈ C_Ω by

  C_{Ω,i} := {ω_j ∈ C_Ω | ω_j ∩ ω_i ≠ ∅}                  (2.8)
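For axis-aligned (tensor product) patches, the neighborhoods (2.8) and the sparsity requirement below can be checked directly. A minimal sketch, assuming open patches so that merely touching boundaries do not count as overlap:

```python
def overlaps(p, q):
    # p, q are axis-aligned open patches given as ((lo, hi), ...) per dimension
    return all(alo < bhi and blo < ahi for (alo, ahi), (blo, bhi) in zip(p, q))

def neighborhoods(cover):
    # C_{Omega,i} = { omega_j in C_Omega : omega_j intersects omega_i }, cf. (2.8);
    # every patch belongs to its own neighborhood
    return [[j for j, q in enumerate(cover) if overlaps(p, q)]
            for p in cover]
```

The Galerkin sparsity condition then amounts to `max(len(nb) for nb in neighborhoods(cover))` staying bounded as the cover is refined.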
and require max_{ω_i∈C_Ω} card(C_{Ω,i}) = O(1). Note furthermore that the conditions imposed on the PU in Definition 1 do not ensure that the product functions φ_i ψ_i^n of (2.3) are linearly independent. However, to obtain the linear independence of the product functions φ_i ψ_i^n it is sufficient to require that the PU has the following property.

Definition 2 (Flat top property). Let {φ_i} be a partition of unity according to Definition 1. Let us define the sub-patches ω_{FT,i} := {x | λ_{C_Ω}(x) = 1} such that φ_i|_{ω_{FT,i}} ≡ 1. Then, the PU is said to have the flat top property if there exists a constant C_FT such that for all patches ω_i

  µ(ω_i) ≤ C_FT µ(ω_{FT,i})                  (2.9)
where µ(A) denotes the Lebesgue measure of A ⊂ R^d. We have C_∞ = 1 for a PU with the flat top property. Obviously the product functions φ_i ψ_i^n are linearly independent if we assume that the PU has the flat top property and that each of the local bases {ψ_i^n} is locally linearly independent on the sub-patches ω_{FT,i} ⊂ ω_i.¹

The PU concept is employed in many meshfree methods. However, in most cases very smooth PU functions φ_i ∈ C^k(Ω) with k ≥ 2 are used and the functions φ_i have rather large supports ω_i which overlap substantially. Hence in most meshfree methods card(C_{Ω,i}) is large and the employed PU does not have the flat top property. This makes it easier to control ‖∇φ_i‖_{L^∞}, compare Definition 1 and (2.7), but it can lead to ill-conditioned and even singular stiffness matrices. For a flat top PU we obviously have ∇φ_i|_{ω_{FT,i}} ≡ 0 so that it is sufficient to bound ∇φ_i on the complement ω_i \ ω_{FT,i}, which requires some additional properties, compare (2.13) and (2.14). Hence, the cover construction for a flat top PU is somewhat more challenging.

A PU can for instance be constructed by simple averaging, often referred to as Shepard's method. Let us assume that we have a cover C_Ω = {ω_i} of the domain Ω such that 1 ≤ λ_{C_Ω}(x) ≤ M for all x ∈ Ω. With the help of non-negative weight functions W_k defined on these cover patches ω_k, i.e. W_k(x) > 0 for all x ∈ ω_k \ ∂ω_k, we can easily generate a partition of unity by

  φ_i(x) := W_i(x) / S_i(x),   where   S_i(x) := Σ_{ω_j∈C_{Ω,i}} W_j(x).                  (2.10)
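Shepard averaging (2.10) can be sketched in one dimension as follows. This is only an illustration: a quadratic bump stands in for the normalized spline weight functions used later, and the evaluation points are assumed to lie in the interior of the covered region so that S_i > 0.

```python
import numpy as np

def weight(x, lo, hi):
    # nonnegative weight on the open patch (lo, hi), vanishing on its boundary;
    # a quadratic bump stands in for a normalized spline weight here
    t = (x - lo) / (hi - lo)
    return np.where((t > 0) & (t < 1), 4.0 * t * (1.0 - t), 0.0)

def shepard(x, cover):
    # phi_i(x) = W_i(x) / S_i(x) with S_i(x) = sum_j W_j(x), cf. (2.10);
    # x must be covered by at least one patch so that the sum is positive
    W = np.array([weight(x, lo, hi) for lo, hi in cover])
    return W / W.sum(axis=0)
```

Where only one patch covers a point, its PU function equals 1 there, which is exactly the flat top behavior of Definition 2.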
¹ Note that the flat top property is a sufficient condition only. It is not a necessary requirement. In practice we already obtain a linearly independent set of shape functions if the flat top property is satisfied by most but not necessarily all patches ω_i of the cover C_Ω.
Obviously, the smoothness of the resulting PU functions φ_i is determined entirely by the smoothness of the employed weight functions. Hence, on a cover with tensor product patches ω_i we can easily construct partitions of unity of any regularity, for instance by using tensor products of splines with the desired regularity as weight functions.² Hence, let us assume that the weight functions W_i are all given as linear transformations of a generating normalized spline weight function W : R^d → R with supp(W) = [0, 1]^d, i.e.,

  W_i(x) = W ∘ T_i(x),   T_i : ω_i → [0, 1]^d,   ‖DT_i‖_∞ ≤ C_T / diam(ω_i)                  (2.11)
and ‖W‖_∞ = 1, ‖∇W‖_∞ ≤ C_{W,U}. To show that the PU arising from (2.10) is valid according to Definition 1 it is sufficient to make the following additional assumptions:

• Comparability of neighboring patches: There exist absolute constants C_L and C_U such that for all local neighborhoods C_{Ω,i} there holds the implication

  ω_j ∈ C_{Ω,i}  ⇒  C_L diam(ω_i) ≤ diam(ω_j) ≤ C_U diam(ω_i).                  (2.12)

• Sufficient overlap: There exists a constant K > 0 such that for any x ∈ Ω there is at least one cover patch ω_i with the property

  x ∈ ω_i,   dist(x, ∂ω_i) ≥ K diam(ω_i).                  (2.13)

• Weight function and cover are compatible: There exists a constant C_{W,L} such that for all cover patches ω_i

  |∇W_i(x)| > C_{W,L} / diam(ω_i)   for all x ∈ Ω with λ_{C_Ω}(x) > 1,                  (2.14)
compare Figure 1.

Lemma 1. The PU defined by (2.10) with weights (2.11) is valid according to Definition 1 under the assumptions (2.12), (2.13), and (2.14).

Proof. For x ∈ Ω with λ_{C_Ω}(x) = 1 we have ∇φ_i(x) = 0. Note that we have |S_i(x)| ≥ |W_l(x)| = |W_l(x) − W_l(y)|, where ω_l denotes the cover patch with property (2.13) for x ∈ Ω and y ∈ ∂ω_l is arbitrary. For any x ∈ Ω with λ_{C_Ω}(x) > 1 we therefore obtain with the mean value theorem, (2.13) and (2.14)
² Other shapes of the cover patches ω_i ∈ C_Ω are of course possible, e.g. balls or ellipsoids, but the resulting partition of unity functions φ_i are more challenging to integrate numerically. For instance a subdivision scheme based on the piecewise constant covering index λ_{C_Ω} leads to integration cells with very complicated geometry.
Figure 1. A one-dimensional weight function Wi on a patch ωi = (C − h, C + h) with ωFT,i = (a, b) that does not satisfy (left) the compatibility condition (2.14), and one that does (right).
  |S_i(x)| ≥ C_{W,L} K.

Together with (2.11) and (2.12) this yields the point-wise estimate

  |∇φ_i(x)| = |W_i(x)∇S_i(x) − ∇W_i(x)S_i(x)| / S_i²(x)
            ≤ ( |∇W ∘ T_i(x) DT_i(x) S_i(x)| + |W_i(x) Σ_k ∇W ∘ T_k(x) DT_k(x)| ) / |S_i²(x)|
            ≤ (C_{W,L} K)^{−2} · 2M C_T C_{W,U} / diam(ω_i),

which gives the asserted bound

  ‖∇φ_i‖_{L^∞(R^d)} ≤ C_∇ / diam(ω_i)   with   C_∇ ≥ 2M C_T C_{W,U} (C_{W,L} K)^{−2}.
In general any local space which provides some approximation property such as (2.5) can be used in a PUM. Furthermore, the local approximation spaces are independent of each other. Hence, if there is a priori knowledge about the (local) behavior of the solution u available, it can be utilized to choose operator-dependent approximation spaces. For instance, the a priori information can be used to enrich a space of polynomials by certain singularities, or it may be used to choose systems of eigenfunctions of (parts of) the considered differential operator as local approximation spaces. Such specialized local function spaces may be given analytically or numerically. If no a priori knowledge about the solution is available, classical multipurpose expansion systems like polynomials are used. In this paper we employ products of univariate Legendre polynomials throughout, i.e., we use

  V_i^{p_i}(ω_i) := P^{p_i} ∘ T̃_i,   T̃_i : ω_i → (−1, 1)^d,

  P^{p_i}((−1, 1)^d) = span{ ψ^n̂ | ψ^n̂ = Π_{l=1}^d L^{n̂_l},  ‖n̂‖₁ = Σ_{l=1}^d n̂_l ≤ p_i },

where L^k denotes the Legendre polynomial of degree k.
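The tensor-product Legendre basis above can be sketched as follows; a minimal illustration, assuming axis-aligned patches and using NumPy's Legendre evaluation for the univariate factors.

```python
import itertools
import numpy as np
from numpy.polynomial import legendre

def multi_indices(d, p):
    # all multi-indices n with |n|_1 <= p (total degree at most p)
    return [n for n in itertools.product(range(p + 1), repeat=d)
            if sum(n) <= p]

def local_basis(points, patch, p):
    """Evaluate the local basis {psi^n} at `points` on an axis-aligned
    patch ((lo, hi) per dimension): products of univariate Legendre
    polynomials after the affine map onto (-1, 1)^d."""
    points = np.atleast_2d(np.asarray(points, float))
    t = np.stack([2.0 * (points[:, l] - lo) / (hi - lo) - 1.0
                  for l, (lo, hi) in enumerate(patch)], axis=1)
    basis = []
    for n in multi_indices(len(patch), p):
        vals = np.ones(len(points))
        for l, k in enumerate(n):
            vals *= legendre.legval(t[:, l], [0.0] * k + [1.0])  # L^k(t_l)
        basis.append(vals)
    return np.array(basis)   # shape: (dim of local space, number of points)
```

For d = 2 and p_i = 2 this gives the six basis functions with total degree at most two.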
2.2 Essential Boundary Conditions and Galerkin Discretization

The treatment of essential boundary conditions in meshfree methods is not straightforward and a number of different approaches have been suggested. In [10] we have presented how Nitsche's method [13] can be applied successfully in the meshfree context. In order to formulate the weak formulation of (1.1) arising from Nitsche's approach, we introduce some additional notation. Let ∂_n u := ∂u/∂n denote the normal derivative, let Γ_{D,i} := ω_i ∩ Γ_D, and let C_{Γ_D} := {ω_i ∈ C_Ω | Γ_{D,i} ≠ ∅} denote the cover of the Dirichlet boundary. Furthermore, we define the cover-dependent norm

  ‖∂_n u‖²_{−1/2,C_Ω} := Σ_{ω_i∈C_{Γ_D}} diam(Γ_{D,i}) ‖∂_n u‖²_{L²(Γ_{D,i})}.
With these conventions we obtain the weak formulation

  a_β(u, v) = l_β(v)   for all v ∈ V^PU                  (2.15)
with the cover-dependent bilinear form

  a_β(u, v) := ∫_Ω ∇u·∇v − ∫_{Γ_D} (∂_n u v + u ∂_n v) + β Σ_{ω_i∈C_{Γ_D}} diam(Γ_{D,i})^{−1} ∫_{Γ_{D,i}} uv                  (2.16)

and the corresponding linear form

  l_β(v) := ∫_Ω f v − ∫_{Γ_D} g_D ∂_n v + ∫_{Γ_N} g_N v + β Σ_{ω_i∈C_{Γ_D}} diam(Γ_{D,i})^{−1} ∫_{Γ_{D,i}} g_D v                  (2.17)
from the minimization of the functional

  J_β(w) := ∫_Ω |∇w|² − 2 ∫_{Γ_D} ∂_n w w + β Σ_{ω_i∈C_{Γ_D}} diam(Γ_{D,i})^{−1} ∫_{Γ_{D,i}} |w|².                  (2.18)
Note that this minimization is carried out for the error in V^PU, i.e., we are looking for min_{u^PU∈V^PU} J_β(u − u^PU). There is a unique solution u^PU if the regularization parameter β is chosen large enough; i.e., the regularization parameter depends on the discretization space V^PU. The solution u^PU of (2.15) satisfies an optimal error estimate if the space V^PU admits the inverse estimate

  ‖∂_n v‖_{−1/2,C_Ω} ≤ C_inv ‖∇v‖_{L²(Ω)}   for all v ∈ V^PU                  (2.19)

with an absolute constant C_inv depending on the cover C_Ω, the generating weight function W and the employed local bases {ψ_i^n}. If C_inv is known, the
regularization parameter β can be chosen as β > 2C_inv². Hence, the main task is the automatic computation of the constant C_inv. Fortunately, C_inv² can be approximated very efficiently, see [10]. To this end, we consider the inverse assumption (2.19) as a generalized eigenvalue problem locally on each patch ω_i ∈ C_{Γ_D} which intersects the Dirichlet boundary and solve for the largest eigenvalue to obtain an approximation to C_inv².

Note that this (overlapping) variant of Nitsche's approach is slightly different from the one employed in [10, 15]; e.g., there the inverse assumption (2.19) was formulated using a cover-independent norm. To attain a convergent scheme from (2.18) it is essential that the covering index λ_{C_{Γ_D}}(x) < M is bounded. The implementation of (2.16) and (2.17) is somewhat more involved since the numerical integration scheme must be capable of handling the overlap region correctly. The main advantage of this overlapping variant is that the regularization parameter β depends only on the employed local approximation spaces, i.e., on the employed polynomial degrees, and on the maximal level difference L close to the boundary. It is not dependent on diam(ω_i). Hence, it is sufficient to pre-compute β for the maximal allowable value of L and the maximal polynomial degree. This value can then be used for all patches on all levels.

For the Galerkin discretization of (2.15), which yields the linear system
  Aũ = f̂,   with   A_{(i,k),(j,n)} = a_β(φ_j ψ_j^n, φ_i ψ_i^k)   and   f̂_{(i,k)} = l_β(φ_i ψ_i^k),
we need to employ numerical integration since the PU functions are in general piecewise rational functions. Note that the flat top property is also beneficial to the numerical integration since all PU functions are constant on each ω_{FT,i}, so that the integration on a large part of the domain, ∪_i ω_{FT,i} ⊂ Ω, involves only the local basis functions ψ_i^n. Therefore, a subdivision scheme based on the covering index λ_{C_Ω} which employs sparse grid numerical integration rules of higher order on the cover-dependent integration cells seems to be the best approach, see [8] for details. Note that the use of an automatic construction procedure for the numerical integration scheme is a must for adaptive computations since an a priori prescribed background integration scheme can hardly account for the (possibly) huge variation in the support sizes diam(ω_i) and may lead to stability problems.

With respect to the assembly of the system matrix A for a refined PUM space it is important to note that we do not need to compute all its entries A_{(i,k),(j,n)}. We can re-use the entries A_{(i,k),(j,n)} which stem from a patch ω_i with the property that none of its neighbors ω_j ∈ C_{Ω,i} have been refined. Hence, there are a number of complete block-rows A_{(i,·),(·,·)} that do not need to be computed for the refined space and we need to compute only a minimal number of matrix entries A_{(i,k),(j,n)} from level to level.
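The patch-local generalized eigenvalue problem described above for approximating C_inv² can be sketched as follows; the matrices B and A, discretizing the two sides of (2.19) on a single boundary patch, are assumed given, with A symmetric positive definite.

```python
import numpy as np

def largest_generalized_eigenvalue(B, A):
    # largest lambda with B x = lambda A x; for (2.19) this approximates
    # C_inv^2 on one boundary patch (A assumed symmetric positive definite)
    lam = np.linalg.eigvals(np.linalg.solve(A, B))
    return float(np.max(lam.real))
```

The regularization parameter is then chosen as β > 2·C_inv² from the largest of these patch-local eigenvalues.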
3 Adaptive Multilevel Solution

In [8, 9] we have developed a tree-based cover construction scheme that gives a sequence of covers {C_Ω^k} based on a given point set P = {x_i}. The fundamental construction principle employed in [8] is a d-binary tree. Based on the given point data P, we sub-divide a bounding box C_Ω ⊃ Ω until each of the tree cells

  C_i = Π_{l=1}^d (c_i^l − h_i^l, c_i^l + h_i^l)
contains at most a single point x_i ∈ P, see Figure 2 (left). We obtain a valid cover from this tree by choosing

  ω_i = Π_{l=1}^d (c_i^l − αh_i^l, c_i^l + αh_i^l),   with α > 1.                  (3.20)
Note that we define a cover patch ω_i and a corresponding PU function φ_i for cells that contain a point x_i ∈ P as well as for empty cells that do not contain any point from P.³ This procedure increases the dimension of the resulting PUM space, yet (under some assumptions) only by a constant factor [5, 8]. The main benefit of using a larger number of cover patches is that the resulting neighborhoods C_{Ω,i} are smaller and that we therefore obtain a smaller number of entries in the stiffness matrix. The coarser covers C_Ω^k are defined considering coarser versions of the constructed tree, i.e., by removing the complete set of leaves of the tree. For details of this construction see [8]. The constructed covers C_Ω^k all satisfy the conditions of the previous section.
Figure 2. Subdivision corresponding to an initial cover (left). Subdivision with cells of neighborhood CΩ,i (light gray) and cell corresponding to patch ωi (dark gray) (center). Subdivision with cells of subset RΩ,i (gray) (right).
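The tree-based cover construction can be sketched as follows; a minimal 2-D illustration, assuming distinct points inside a half-open bounding box, with empty cells kept as patches and every cell stretched by α as in (3.20).

```python
import numpy as np

def build_cells(points, center, h, cells):
    # subdivide the cell (center - h, center + h) until it holds at most
    # one point; empty cells are kept as patches too (cf. footnote 3)
    if len(points) <= 1:
        cells.append((center, h))
        return
    for signs in np.ndindex(*(2,) * len(center)):
        c = center + (np.array(signs) - 0.5) * h
        inside = np.all((points >= c - h / 2) & (points < c + h / 2), axis=1)
        build_cells(points[inside], c, h / 2, cells)

def cover_from_tree(points, center, h, alpha=1.3):
    """d-binary tree cover with cells stretched by alpha > 1, cf. (3.20);
    assumes distinct points inside the half-open box [center-h, center+h)."""
    cells = []
    build_cells(np.asarray(points, float), np.asarray(center, float),
                np.asarray(h, float), cells)
    return [(c - alpha * h, c + alpha * h) for c, h in cells]
```

The half-open inclusion test assigns each point to exactly one child cell, so the recursion terminates for any set of distinct points.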
³ This approach can be interpreted as a saturation technique for our d-binary tree. To this end, we can define an additional point set P̃ = {ξ_i} such that each cell of the tree now contains exactly one point of the union P̃ ∪ P, compare [8, 9].
Lemma 2. A cover C_Ω = {ω_i} arising from the stretching of a d-binary tree cell decomposition {C_i} according to (3.20) with α > 1 satisfies conditions (2.11), (2.12) and (2.13).

Proof. For ease of notation let us assume h_i = h_i^k for k = 1, . . . , d. Then, we have h_i ∼ 2^{−l_i} diam(Ω), where l_i refers to the tree-level of the cell C_i. Obviously, we have

  C_T = 1,   C_U = max_{ω_i∈C_Ω} max_{ω_j∈C_{Ω,i}} 2^{|l_i−l_j|},   and C_L = C_U^{−1}.

Due to the stretching of the tree cells we can find for any x ∈ Ω at least one cover patch ω_i such that x ∈ ω_i and that the inequality

  dist(x, ∂ω_i) ≥ (α − 1)/2 · min_{ω_j∈C_{Ω,i}} 2^{−l_j}

holds. Hence, with the maximal difference in tree levels of two overlapping cover patches L := max_{ω_i∈C_Ω} max_{ω_j∈C_{Ω,i}} |l_i − l_j| we obtain C_U = 2^L and K = (α − 1)2^{−L−1}.

Therefore, the resulting PU defined by (2.10) using a tensor product B-spline as generating weight function satisfies the assumptions of Definition 1, and the error estimates (2.6) and (2.7) hold for our multilevel PUM on each level k. Furthermore, we can easily enforce that each PU of the resulting sequence {φ_{i,k}}_k has the flat top property.

Corollary 1. The PU resulting from (2.10) based on a cover C_Ω = {ω_i} arising from the stretching of a d-binary tree cell decomposition {C_i} with α > 1 according to (3.20) has the flat top property if α ∈ (1, 1 + 2^{−L}).

3.1 Particle Refinement

In the following we consider the refinement of the given point set P and the respective sequence of covers C_Ω^k obtained from the tree construction reviewed above. One of the properties this refinement procedure should have is that we are able to bound the maximal level difference L of the resulting tree. Only then will we obtain a sequence of covers {C_Ω^k} and a sequence of PUs {φ_{i,k}} which satisfy the conditions given above with uniform constants.

In general any local refinement procedure employs a Boolean refinement indicator function r : C_Ω → {true, false} which identifies or marks regions which should be refined. Often the refinement indicator is based on thresholding of local error estimates η_i ≈ ‖u − u^PU‖_{H¹(Ω∩ω_i)}, e.g.

  r(ω_i) = { true if 2η_i ≥ η_max, false else },   with η_max := max_{ω_i∈C_Ω} η_i.                  (3.21)

Based on this refinement indicator we then employ certain refinement rules to improve the resolution on a patch ω_i. Since we are interested in the refinement
of a particle set, these refinement rules must essentially create new particles in the regions {ω_i ∈ C_Ω | r(ω_i) = true}. Furthermore, this refinement process should be consistent with our tree construction.

Before we consider our refinement rules for particles, let us first consider whether a simple refinement indicator function like (3.21) is suitable for the PUM. To this end let us assume that we have an estimate η_i ≈ ‖u − u^PU‖_{H¹(Ω∩ω_i)} of the error u − u^PU locally on each patch ω_i, see section 3.3 for the construction of such an error estimator. Note that the η_i are (overlapping) subdomain estimators which are fundamentally different from the more common (disjoint) element estimators in the FEM. Recall that the local error is given by

  ‖u − u^PU‖_{H¹(Ω∩ω_i)} = ‖ Σ_{ω_j∈C_{Ω,i}} φ_j (u − u_j) ‖_{H¹(Ω∩ω_i)}                  (3.22)
since Σ_{ω_j∈C_{Ω,i}} φ_j(x) ≡ 1 for all x ∈ ω_i. Thus, using a simple indicator like (3.21) which would just mark the patch ω_i for refinement may not be sufficient to reduce the error on ω_i. It seems necessary that at least some of the neighbors are also marked for refinement to achieve a sufficient error reduction. Another reason why the highly local indicator (3.21) is not suitable for our PUM is the fact that we need to bound the maximal level difference L of neighboring patches. This can hardly be achieved using (3.21).

A simple solution to this issue could be to refine all patches ω_j ∈ C_{Ω,i} since they all contribute to the error on ω_i. This will certainly ensure that the error on ω_i is reduced; however, it may lead to a substantial yet unnecessary increase in computational work and storage. Taking a closer look at (3.22) we find that the contribution of a patch ω_j to the error on ω_i lives on the intersection ω_j ∩ ω_i only. Hence, it is a promising approach to select an appropriate subset of neighbors ω_j ∈ C_{Ω,i} via the (relative) size of these intersections. This approach is furthermore in perfect agreement with our constraint of bounding the maximal level difference since the intersection ω_j ∩ ω_i will be (relatively) large if l_j < l_i, i.e., if ω_j is a coarser patch than ω_i. Hence, we introduce the sets

  R_{Ω,i} := {ω_i} ∪ {ω_j ∈ C_{Ω,i} | l_j < l_i}

of patches with a large contribution to the local error on ω_i. With the help of these sets we define our refinement indicator function as

  r(ω_j) = { true if ω_j ∈ R_{Ω,i} and 2η_i ≥ η_max, false else },                  (3.23)

see Figure 2. Note however that this does not guarantee that the maximal level difference L stays constant. But it ensures that L increases somewhat slower and only in regions where the error is already small. Hence, the adverse effect of larger constants in the error bound (2.7) is almost negligible. Also note that the presented procedure can be interpreted as a refinement with
implicit smoothing of the cover due to the selection of a subset of neighbors. This strategy gave very favorable results in all our numerical studies and it is employed in the numerical experiments presented in this paper.

Now that we have identified the refinement region, we need to consider the refinement rules for our particle set P based on the patch-wise refinement indicator function (3.23). Our goal is to define a set of refinement rules which create new points x_n and a respective cover C̃_Ω, so that our original cover construction algorithm with the input P ∪ {x_n} will give the refined cover C̃_Ω. Hence, the locations of the created points are constrained to certain cells of our tree. In this paper we employ a very simple and numerically cheap positioning scheme⁴ for the new points based on a local center of gravity, which is defined as

  g_i = (1 / card(G_{Ω,i})) Σ_{x_k∈G_{Ω,i}} x_k,   G_{Ω,i} = {x_k ∈ P | x_k ∈ ω_k ∈ C_{Ω,i}}.
Note that g_i is well-defined for all patches. Due to our tree-based cover construction we can always find at least one given point x_i in the local neighborhood C_{Ω,e} even for empty patches ω_e, i.e. ω_e ∩ P = ∅. Besides the local centers of gravity g_i we furthermore use the geometric centers c_i of the tree-cell C_i associated with the considered patch ω_i and the centers c_{i,q} of the refined tree-cells C_{i,q} with q = 1, . . . , 2^d for our positioning scheme. The overall refinement scheme for a patch ω_i reads as follows.

Algorithm 1 (Particle Refinement).
1. Set counter w = 0.
2. If there is x_i ∈ P with x_i ∈ C_i ⊂ ω_i, then determine the sub-cell C_{i,q̃} ⊂ ω_{i,q̃} with x_i ∈ C_{i,q̃}.
3. If g_i ∈ C_i ⊂ ω_i, then determine the sub-cell C_{i,q̂} ⊂ ω_{i,q̂} with g_i ∈ C_{i,q̂}. Set P = P ∪ {g_i} and w = w + 1. If w ≥ 2^{d−1}, then stop.
4. If q̂ ≠ q̃, then for q = 1, . . . , 2^d compute the projection p_{i,q} of the sub-cell center c_{i,q} on the line x_i g_i and the projection p̃_{i,q} of c_{i,q} on the line x_i c_i.
   If p_{i,q} ∈ C_{i,q}, then set P = P ∪ {p_{i,q}} and w = w + 1. If w ≥ 2^{d−1}, then stop.
   If p_{i,q} ∉ C_{i,q} and p̃_{i,q} ∈ C_{i,q}, then set P = P ∪ {p̃_{i,q}} and w = w + 1. If w ≥ 2^{d−1}, then stop.
5. If q̂ = q̃ and g_i = c_i, then assume that the data is gridded and set P = P ∪ {c_{i,q}} with q = 1, . . . , 2^d.
⁴ Note that many other approaches to the construction of new points are possible. For instance we can minimize the local fill distance or the separation radius under the constraint of positioning the new points within the sub-cells of the tree construction. Such approaches, however, involve the solution of a linear system and hence are computationally more expensive.
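The marking step of the refinement procedure, indicator (3.23) with the sets R_{Ω,i}, can be sketched as follows; a minimal illustration with patch data given as plain lists and neighborhoods as index lists.

```python
def mark_for_refinement(eta, levels, neighborhoods):
    """Refinement indicator (3.23): whenever 2*eta_i >= eta_max, mark the
    patches of R_{Omega,i}, i.e. omega_i itself plus its coarser neighbors
    (those with level l_j < l_i, whose intersection with omega_i is large)."""
    eta_max = max(eta)
    marked = set()
    for i, eta_i in enumerate(eta):
        if 2.0 * eta_i >= eta_max:
            marked.add(i)
            marked.update(j for j in neighborhoods[i] if levels[j] < levels[i])
    return marked
```

Only coarser neighbors are pulled in, which is what keeps the growth of the level difference L slow while still refining where the error contributions live.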
Now that we have our refined point set P, let us consider the question of how to define the respective cover patches ω_i. The refinement of a cover is straightforward since a d-binary tree is an adaptive data structure. Here, it is sufficient to use a single subdivision step to split the tree-cell C_i into 2^d sub-cells C_{i,q} if the refinement indicator function r(ω_i) = true for the associated cover patch ω_i. Then, we insert the created particles in the respective cells and set the patches ω_{i,q} on all refined (sub-)cells C_{i,q} with q = 1, . . . , 2^d using (3.20) with the same overlap parameter α. Due to our careful selection of the positions of the new points {x_n} in our refinement scheme we ensure that the refined cover C̃_Ω is identical to the cover obtained by our original cover construction algorithm using the refined point set P ∪ {x_n} as input. Hence, a refined cover is guaranteed to have the same properties as the covers obtained from the original algorithm.

Recall that our tree-based cover construction algorithm provides a complete sequence of covers C_Ω^k. Hence, we must deal with the question how to introduce a refined cover into an existing sequence of covers C_Ω^k such that the resulting refined sequence is consistent with our multilevel construction [9, 15].

3.2 Iterative Solution

In [9] we have constructed a sequence of covers {C_Ω^k = {ω_{i,k}}} with k = 0, . . . , J, where J denotes the maximal subdivision level of the tree, that is, all covers C_Ω^k have the property
  k = max_{ω_i∈C_Ω^k} l_i.                  (3.24)
The respective PUM spaces V_k^PU are defined as

  V_k^PU := Σ_{ω_{i,k}∈C_Ω^k} φ_{i,k} V_{i,k}^{p_{i,k}}
with the PU functions (2.10) based on the cover C_Ω^k and local approximation spaces V_{i,k}^{p_{i,k}} of degree p_{i,k}. Note that property (3.24) ensures that we have a minimal number of levels J + 1 and thus minimal work and storage in the iterative multilevel solver. Hence, the covers of a refined sequence must also satisfy (3.24).

Let us assume that we have a sequence of covers {C_Ω^k} with k = 0, . . . , J satisfying the level property (3.24) and that we refine the cover C_Ω^J. To obtain a sequence {C_Ω^k} with this property by the refinement scheme presented above we need to distinguish two cases. First we consider the simple case where we refine the cover C_Ω^J in such a way that at least one patch ω_i with l_i = J is marked for refinement. Then, the resulting cover R(C_Ω^J) has at least one element ω_j with l_j = J + 1 and we can extend our sequence of covers {C_Ω^k} with k = 0, . . . , J + 1 where
C_Ω^{J+1} = R(C_Ω^J). In the case where we refine only patches ω_i with l_i < J, we obtain a refined cover R(C_Ω^J) for which max_{ω_i∈R(C_Ω^J)} l_i = J holds and we cannot extend the existing sequence of covers by R(C_Ω^J). We rather need to replace the cover C_Ω^J by its refined version R(C_Ω^J) to obtain a consistent sequence of covers. Thus, we end up with the modified sequence {C_Ω^k} with k = 0, . . . , J where we assign C_Ω^J = R(C_Ω^J).

With these conventions it is clear that our refinement scheme leads to a sequence of covers {C_Ω^k} that satisfies all assumptions of our multilevel construction, and our iterative multilevel solver is applicable also in adaptive computations. To reduce the computational work even further, we couple our multilevel solver with the nested iteration technique [12], see Algorithm 2.
Algorithm 2 (Nested Iteration).
1. If l > 0, then set initial guess ũ_l^0 = P_{l-1}^l ũ_{l-1}^{k_{l-1}}. Else, set initial guess ũ_l^0 = 0.
2. Set ũ_l^{k_l} ← IS_l^{k_l}(ũ_l^0, f̂_l, A_l).

The ingredients of a nested iteration are the basic iterative solution procedure IS_l (in our case IS_l will be a multilevel iteration MG_l(0, f̂_J)) defined on each level l and the prolongation operators P_{l-1}^l. One key observation which led to the development of Algorithm 2 is that the approximate solution ũ_{l-1}^{k_{l-1}} obtained on level l − 1 is a good initial guess ũ_l^0 for the iterative solution on level l. To this end, we need the prolongation operator P_{l-1}^l to transfer a coarse solution on level l − 1 to the next finer level l, see step 1 of Algorithm 2. Another property that is exploited in our nested iteration solver is that there is nothing to gain from solving the linear system of equations (almost) exactly since its solution describes an approximate solution of the considered PDE only. The iterative solution process on level l can be stopped once the error of the iteration is of the same order as the discretization error on level l. Thus, if the employed iterative solver IS_l has a constant error reduction rate, as is the case for an optimal multilevel iteration MG_l, then a (very small) constant number of iterations k_l that is independent of l in step 2 is sufficient to obtain an approximate solution ũ_l on each level l within discretization accuracy. The overall iterative process is also referred to as full multigrid [6, 11]. In all our numerical studies no more than 2 applications of a V(1,1)-cycle with block-Gauss-Seidel smoothing (compare [9]) were necessary to obtain an approximate solution within discretization accuracy.

3.3 Error Estimation

The final ingredient of our adaptive multilevel PUM is the local error estimator η_i ≈ ‖u − u^PU‖_{H^1(Ω∩ω_i)} which steers our particle refinement process and can
A Particle-Partition of Unity Method Part VII: Adaptivity
be used to assess the quality of the computed global approximation [2, 3, 16, 17]. In this section we now construct an error estimator η_i for the PUM based on the subdomain approach due to [2]. We employ an a posteriori error estimation technique based on the solution of local Dirichlet problems defined on (overlapping) subdomains introduced in [2], which is very natural to the PUM. To this end let us consider the additional local problems

    −Δw_i = f            in Ω ∩ ω_i,
       w_i = u^PU        on ∂(Ω ∩ ω_i) \ Γ_N,        (3.25)
    ∂w_i/∂n = g_N        on Γ_N ∩ ∂(Ω ∩ ω_i)

to approximate the error u − u^PU on Ω ∩ ω_i by w_i − u^PU ∈ H^1(Ω ∩ ω_i), see [2]. This leads to the local error estimator η_i := ‖w_i − u^PU‖_{H^1(Ω∩ω_i)}.

Note that the local problems (3.25) employ inhomogeneous Dirichlet boundary values. As discussed in section 2.2, the implementation of essential boundary conditions is somewhat more involved in the PUM. There we have presented a non-conforming approach due to Nitsche to realize the global Dirichlet conditions of our model problem (1.1). Of course this technique can also be pursued here; however, since we consider (3.25) on very special subdomains, i.e., on the support of a PU function φ_i, there is a much simpler and conforming approach. Enforcing homogeneous boundary conditions on the boundary ∂ω_i is trivial since φ_i|_{∂ω_i} ≡ 0. For patches close to the boundary we can easily enforce homogeneous boundary values on ∂(Ω ∩ ω_i) \ Γ_N \ Γ_D. Hence, if we reformulate (3.25) in such a way that we have to deal with vanishing boundary data on ∂(Ω ∩ ω_i) \ Γ_N \ Γ_D only, we can realize the (artificial) boundary conditions in a conforming way. Only for the global boundary data on Γ_D we need to employ the non-conforming Nitsche technique. Therefore, we employ a discrete version of the following equivalent formulation of (3.25)

    −Δw̃_i = f − f^PU              in Ω ∩ ω_i,
        w̃_i = 0                   on ∂(Ω ∩ ω_i) \ Γ_N \ Γ_D,
        w̃_i = g_D − u^PU          on ∂(Ω ∩ ω_i) ∩ Γ_D,        (3.26)
    ∂w̃_i/∂n = g_N − ∂u^PU/∂n      on Γ_N ∩ ∂(Ω ∩ ω_i),

where f^PU denotes the best approximation of f in V^PU, with mostly homogeneous boundary conditions within our implementation. We approximate (3.26) using the trial and test spaces V_{i,*}(Ω ∩ ω_i) := φ_i V_i^{p_i+q_i} with q_i > 0. Obviously, the functions w_i ∈ V_{i,*}(Ω ∩ ω_i) satisfy the homogeneous boundary conditions on ∂(Ω ∩ ω_i) \ Γ_N \ Γ_D due to the multiplication with the partition of unity function φ_i. Note that these local problems fit very well with the global Nitsche formulation (2.18) since the solution of (3.26) coincides with the minimizer of

    J_{γ_i}(u − u^PU − w̃_i) → min{w̃_i ∈ V_{i,*}(Ω ∩ ω_i)}
where the parameter γ_i now depends on the local discretization space V_{i,*}(Ω ∩ ω_i) ⊂ H^1(Ω ∩ ω_i) and not on V^PU ⊂ H^1(Ω).^5 Note that the utilization of the global Nitsche functional is possible due to the use of a conforming approach for the additional boundary ∂(Ω ∩ ω_i) \ Γ_N \ Γ_D only. We obtain our local approximate error estimator

    η_i := ‖w̃_i‖_{H^1(Ω∩ω_i)}    (3.27)

from the approximate solution w̃_i ∈ V_{i,*}(Ω ∩ ω_i) of the local problem (3.26). The global error is then estimated by

    η := ( Σ_{ω_i ∈ C_Ω} (η_i)^2 )^{1/2} = ( Σ_{ω_i ∈ C_Ω} ‖w̃_i‖^2_{H^1(Ω∩ω_i)} )^{1/2}.    (3.28)
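In code, the aggregation (3.28) of the local indicators into the global estimate is a simple l²-sum. The marking routine shown alongside it is only an illustrative maximum-based stand-in (our assumption), not the paper's refinement indicator (3.23) from section 3.1.

```python
import math

def global_estimate(local_etas):
    # eta = ( sum_i eta_i^2 )^(1/2), cf. (3.28)
    return math.sqrt(sum(e * e for e in local_etas))

def mark_for_refinement(local_etas, sigma=0.5):
    # Illustrative maximum-based marking (an assumption, not (3.23)):
    # refine patch i if eta_i >= sigma * max_j eta_j.
    m = max(local_etas)
    return [e >= sigma * m for e in local_etas]
```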
Note that we solve (3.26) in the complete space V_{i,*}(Ω ∩ ω_i) = φ_i V_i^{p_i+q_i} and not just the space φ_i V_i^{p_i+q_i \ p_i}, where V_i^{p_i+q_i \ p_i} denotes the hierarchical complement of V_i^{p_i} in V_i^{p_i+q_i}.

This subdomain error estimation approach was already analyzed in the PUM context in [1]. There it was shown that the subdomain estimator is efficient and reliable, i.e., there holds the equivalence

    C^{-1} Σ_{ω_i ∈ C_Ω} ‖w_i‖^2_{H^1(Ω∩ω_i)} ≤ ‖u − u^PU‖^2_{H^1(Ω)} ≤ C Σ_{ω_i ∈ C_Ω} ‖w_i‖^2_{H^1(Ω∩ω_i)}.    (3.29)
Yet, it was assumed that the variational problem is globally positive definite and that a globally conforming implementation of essential boundary conditions is employed. However, both of these assumptions are not satisfied in our PUM due to the Nitsche approach. The analysis of the presented estimator is an open issue. Also note that there are other a posteriori error estimation techniques based on the strong residual, e.g. for Mortar finite element methods based on Nitsche's approach [4], which can be used in the PUM context.

Finally, let us point out an interesting property of the PUM which might be beneficial in the construction of error estimators based on the strong residual. Recall from Theorem 1 that the global error is essentially given as an overlapping sum of the local errors with respect to the local approximation spaces. The properties of the PU required by Definition 1 enter in the constants of the estimates (2.6) and (2.7) only. They do not affect the attained approximation order. Hence, the global approximation error in a PUM is essentially invariant of the employed PU, provided the PU is based on the same cover C_Ω.

Corollary 2. Let Ω ⊂ R^d be given. Let {φ_i^1} and {φ_i^2} be partitions of unity according to Definition 1 employing the same cover C_Ω = {ω_i}, i.e. for all i assume that ω_i^1 = ω_i^2 = ω_i. Let us assume that λ_{C_Ω}(x) ≤ M ∈ N for all x ∈ Ω.
^5 We may also pre-compute the Nitsche regularization parameter β_max for the maximal total degree max p_i + q_i and employ β_max on all levels and for all local problems.
Let a collection of local approximation spaces V_i^{p_i} = span{ψ_i^n} ⊂ H^1(Ω ∩ ω_i) be given as in Theorem 1. Let u ∈ H^1(Ω) be the function to be approximated. Then there hold the global equivalencies

    C_{1,2}^{-1} ‖u^{PU,1} − u‖_{L^2(Ω)} ≤ ‖u^{PU,2} − u‖_{L^2(Ω)} ≤ C_{2,1} ‖u^{PU,1} − u‖_{L^2(Ω)}    (3.30)

and

    C_{1,2,∇}^{-1} ‖∇(u^{PU,1} − u)‖_{L^2(Ω)} ≤ ‖∇(u^{PU,2} − u)‖_{L^2(Ω)} ≤ C_{2,1,∇} ‖∇(u^{PU,1} − u)‖_{L^2(Ω)}    (3.31)

for the functions

    u^{PU,1} := Σ_{ω_i ∈ C_Ω} φ_i^1 u_i ∈ V^{PU,1}   and   u^{PU,2} := Σ_{ω_i ∈ C_Ω} φ_i^2 u_i ∈ V^{PU,2}
with constants C_{1,2}, C_{2,1}, C_{1,2,∇}, C_{2,1,∇} depending on the partitions of unity only.

Due to this equivalence it is easily possible to obtain an approximation ũ^PU with higher regularity k > 0 from a C^0 approximation u^PU in our PUM simply by changing the employed generating weight function W. The smoother approximation ũ^PU can for instance be used to evaluate/approximate higher order derivatives without the need to consider jumps or other discontinuities explicitly.

3.4 Overall Algorithm

Let us shortly summarize our overall adaptive multilevel algorithm, which employs three user-defined parameters: ε > 0 a global error tolerance, q > 0 the increment in the polynomial degree for the estimation of the error, and k > 0 the number of multilevel iterations employed in the nested iteration.

Algorithm 3 (Adaptive Multilevel PUM).
1. Let us assume that we are given an initial point set P and that we have a sequence of PUM spaces V_k^PU = Σ_{ω_{i,k} ∈ C_{Ω,k}} φ_{i,k} V_{i,k}^{p_{i,k}} with k = 0, …, J based on a respective sequence of covers C_{Ω,k} = {ω_{i,k}} arising from a d-binary tree construction using the point set P, see [8, 15] for details. Let P_{l-1}^l : V_{l-1}^PU → V_l^PU and R_l^{l-1} : V_l^PU → V_{l-1}^PU denote transfer operators and S_l : V_l^PU × V_l^PU → V_l^PU appropriate smoothing schemes so that we can define a multilevel iteration MG_J : V_J^PU × V_J^PU → V_J^PU, see [7, 9, 15] for details. Set ũ_J = MG_J^{k_init}(0, f̂_J) where the number of iterations k_init is assumed to be large enough.
2. Compute the local error estimates η_i from (3.27) using the local spaces φ_{J,i} V_{J,i}^{p_{J,i}+q}. Estimate the global error by (3.28).
3. If the global estimate satisfies η < ε: STOP.
4. Define the refinement indicator function (3.23) on the cover C_{Ω,J} based on the local estimates η_i.
5. Using the refinement rules of section 3.1 define a refined point set P, a refined cover R(C_{Ω,J}) and its associated PUM space R(V_J^PU).
6. If R(C_{Ω,J}) satisfies the level property (3.24) with J:
   a) Delete the transfer operators P_{J-1}^J and R_J^{J-1}.
   b) Compute an intermediate transfer operator P_J : V_J^PU → R(V_J^PU).
   c) Set ṽ_J = P_J ũ_J.
   d) Delete the intermediate transfer operator P_J.
   e) Remove the cover C_{Ω,J} and its associated PUM space V_J^PU from the respective sequences.
   f) Set C_{Ω,J} := R(C_{Ω,J}) and V_J^PU := R(V_J^PU).
7. If R(C_{Ω,J}) satisfies the level property (3.24) with J + 1:
   a) Extend the sequence of covers by C_{Ω,J+1} := R(C_{Ω,J}) and the sequence of PUM spaces by V_{J+1}^PU := R(V_J^PU).
   b) Set ṽ_J = 0.
   c) Set J = J + 1.
8. Set up the stiffness matrix A_J and right-hand side f̂_J using an appropriate numerical integration scheme.
9. Compute transfer operators P_{J-1}^J : V_{J-1}^PU → V_J^PU and R_J^{J-1} : V_J^PU → V_{J-1}^PU and define an appropriate smoother S_J : V_J^PU × V_J^PU → V_J^PU on level J.
10. If ṽ_J = 0, set ṽ_J = P_{J-1}^J ũ_{J-1}.
11. Apply k > 0 iterations and set ũ_J = MG_J^k(ṽ_J, f̂_J).
12. GOTO 2.
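At a high level, Algorithm 3 reduces to a solve-estimate-mark-refine loop. The skeleton below is a sketch under assumed placeholder callables for the PUM components (solver, estimator, refinement machinery); it mirrors the control flow of steps 2-12 only and is not the paper's implementation.

```python
def adaptive_solve(solve, estimate, refine, eps, max_cycles=100):
    """Skeleton of the adaptive solve/estimate/refine cycle of Algorithm 3."""
    u, eta = None, float("inf")
    for _ in range(max_cycles):
        u = solve()                # nested multilevel iteration on level J
        eta, local = estimate(u)   # local estimates (3.27), global (3.28)
        if eta < eps:              # step 3: global tolerance reached
            break
        refine(local)              # steps 4-9: refine point set, cover, spaces
    return u, eta
```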
4 Numerical Results

In this section we present some results of our numerical experiments using the adaptive PUM discussed above. To this end, we introduce some shorthand notation for various error norms, i.e., we define

    e_{L^∞} := ‖u − u^PU‖_{L^∞} / ‖u‖_{L^∞},   e_{L^2} := ‖u − u^PU‖_{L^2} / ‖u‖_{L^2},   and   e_{H^1} := ‖(u − u^PU)‖_{H^1} / ‖u‖_{H^1}.    (4.32)

Analogously, we introduce the notion

    e*_{H^1} := η / ‖u‖_{H^1} = ( Σ_{ω_i ∈ C_Ω} η_i^2 )^{1/2} / ‖u‖_{H^1}

for the estimated (relative) error using (3.27) and (3.28). These norms are approximated using a numerical integration scheme with very fine resolution, see [15]. For each of these error norms we can compute the respective convergence rate ρ by considering the error norms of two consecutive levels l − 1 and l
    ρ := − log( ‖u − u^PU‖_l / ‖u − u^PU‖_{l−1} ) / log( dof_l / dof_{l−1} ),   dof_k := Σ_{ω_{i,k} ∈ C_Ω^k} dim(V_{i,k}^{p_{i,k}}).    (4.33)

Figure 3. Surface plot of approximate solution u^PU on level J = 11.

Figure 4. Refined point set P on level J = 11 for Example 1 using quadratic polynomials for error estimation.
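The algebraic convergence rate (4.33) between two consecutive levels can be computed directly from two error values and the corresponding numbers of degrees of freedom; a small sketch (the function name is ours):

```python
import math

def convergence_rate(e_prev, e_cur, dof_prev, dof_cur):
    # rho = -log(e_l / e_{l-1}) / log(dof_l / dof_{l-1}), cf. (4.33)
    return -math.log(e_cur / e_prev) / math.log(dof_cur / dof_prev)
```

For an error behaving like dof^{-1/2}, quadrupling the degrees of freedom halves the error, and the computed rate is 0.5.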
To assess the quality of our error estimator we give its effectivity index with respect to the H^1-norm

    ε*_{H^1} := e*_{H^1} / e_{H^1} = η / ‖(u − u^PU)‖_{H^1}

in the tables. We also give the maximal subdivision level J of our tree for the cover construction, and the total number of degrees of freedom dof of the constructed PUM space V^PU on level J.

Example 1. In our first example we consider the standard test case of an L-shaped domain in two space dimensions with homogeneous boundary conditions at the re-entrant corner. That is, we discretize the problem

    −Δu = f in Ω = (−1, 1)^2 \ [0, 1)^2,   u = g_D on ∂Ω

with our adaptive PUM, where we choose f and g_D such that the solution u ∈ H^{3/2}(Ω) in polar coordinates is given by u(r, θ) = r^{2/3} sin((2θ − π)/3), see Figure 3. We employ linear Legendre polynomials as local approximation spaces V_i^1 and estimate the local errors once with quartic (Table 1 and Figure 5) and once with quadratic (Table 2 and Figure 6) Legendre polynomials, i.e. V_{i,*} = φ_i V_i^4 and V_{i,*} = φ_i V_i^2 respectively.

It is well-known that in this two-dimensional example uniform refinement will yield a convergence rate of ρ_{H^1} = 1/3 only instead of the optimal ρ_{H^1} = 1/2.
Table 1. Relative errors e (4.32) and convergence rates ρ (4.33) for Example 1 using quartic Legendre polynomials for error estimation.

J   dof      e_L∞     ρ_L∞  e_L2     ρ_L2  e_H1     ρ_H1  e*_H1    ρ*_H1  ε*_H1
0   3        3.303−1  1.01  2.469−1  1.27  5.272−1  0.58  4.150−1  0.80   0.79
1   9        1.162−1  0.95  5.301−2  1.40  2.153−1  0.82  2.327−1  0.53   1.08
2   36       7.431−2  0.32  2.084−2  0.67  1.572−1  0.23  1.344−1  0.40   0.85
3   144      4.628−2  0.34  8.947−3  0.61  1.034−1  0.30  8.702−2  0.31   0.84
4   252      2.908−2  0.83  4.207−3  1.35  7.291−2  0.62  6.030−2  0.66   0.83
5   360      1.830−2  1.30  2.774−3  1.17  5.632−2  0.72  4.566−2  0.78   0.81
6   468      1.152−2  1.76  2.389−3  0.57  4.821−2  0.59  3.836−2  0.66   0.80
7   576      7.252−3  2.23  2.277−3  0.23  4.457−2  0.38  3.505−2  0.43   0.79
8   1008     4.564−3  0.83  1.164−3  1.20  3.244−2  0.57  2.554−2  0.57   0.79
9   1566     2.874−3  1.05  6.580−4  1.29  2.524−2  0.57  1.981−2  0.58   0.78
10  2124     1.811−3  1.52  5.282−4  0.72  2.174−2  0.49  1.701−2  0.50   0.78
11  3636     1.140−3  0.86  2.669−4  1.27  1.635−2  0.53  1.280−2  0.53   0.78
12  5418     7.183−4  1.16  1.834−4  0.94  1.306−2  0.56  1.023−2  0.56   0.78
13  8226     4.525−4  1.11  1.230−4  0.96  1.063−2  0.49  8.324−3  0.49   0.78
14  13491    2.850−4  0.93  6.996−5  1.14  8.298−3  0.50  6.493−3  0.50   0.78
15  20412    1.796−4  1.12  4.731−5  0.94  6.618−3  0.55  5.191−3  0.54   0.78
16  30438    1.131−4  1.16  3.305−5  0.90  5.455−3  0.48  4.277−3  0.48   0.78
17  49842    7.125−5  0.94  1.926−5  1.09  4.288−3  0.49  3.359−3  0.49   0.78
18  77256    4.489−5  1.05  1.225−5  1.03  3.385−3  0.54  2.657−3  0.53   0.79
19  115326   2.828−5  1.15  8.611−6  0.88  2.786−3  0.49  2.187−3  0.49   0.78
20  189585   1.781−5  0.93  5.119−6  1.05  2.193−3  0.48  1.720−3  0.48   0.78
21  298440   1.122−5  1.02  3.129−6  1.09  1.719−3  0.54  1.350−3  0.53   0.79
22  446850   7.069−6  1.14  2.201−6  0.87  1.411−3  0.49  1.109−3  0.49   0.79
23  737478   4.453−6  0.92  1.321−6  1.02  1.110−3  0.48  8.715−4  0.48   0.78
24  1171548  2.805−6  1.00  7.915−7  1.11  8.672−4  0.53  6.814−4  0.53   0.79
25  1756818  1.767−6  1.14  5.570−7  0.87  7.109−4  0.49  5.585−4  0.49   0.79
Figure 5. Convergence history for Example 1 using quartic polynomials for error estimation.
Figure 6. Convergence history for Example 1 using quadratic polynomials for error estimation.
Table 2. Relative errors e (4.32) and convergence rates ρ (4.33) for Example 1 using quadratic Legendre polynomials for error estimation.

J   dof       e_L∞     ρ_L∞  e_L2     ρ_L2  e_H1     ρ_H1  e*_H1    ρ*_H1  ε*_H1
0   3         3.303−1  1.01  2.469−1  1.27  5.272−1  0.58  4.287−1  0.77   0.81
1   9         1.162−1  0.95  5.301−2  1.40  2.153−1  0.82  1.496−1  0.96   0.69
2   36        7.431−2  0.32  2.084−2  0.67  1.572−1  0.23  9.882−2  0.30   0.63
3   144       4.628−2  0.34  8.947−3  0.61  1.034−1  0.30  6.502−2  0.30   0.63
4   252       2.908−2  0.83  4.207−3  1.35  7.291−2  0.62  4.696−2  0.58   0.64
5   360       1.830−2  1.30  2.774−3  1.17  5.632−2  0.72  3.753−2  0.63   0.67
6   468       1.152−2  1.76  2.389−3  0.57  4.821−2  0.59  3.306−2  0.48   0.69
7   873       7.250−3  0.74  1.181−3  1.13  3.484−2  0.52  2.418−2  0.50   0.69
8   1278      4.566−3  1.21  7.443−4  1.21  2.778−2  0.59  1.950−2  0.56   0.70
9   1854      2.875−3  1.24  5.693−4  0.72  2.298−2  0.51  1.615−2  0.51   0.70
10  3123      1.811−3  0.89  3.101−4  1.16  1.765−2  0.51  1.243−2  0.50   0.70
11  4842      1.140−3  1.05  1.963−4  1.04  1.380−2  0.56  9.784−3  0.55   0.71
12  7146      7.184−4  1.19  1.408−4  0.85  1.136−2  0.50  8.054−3  0.50   0.71
13  11385     4.525−4  0.99  8.684−5  1.04  9.076−3  0.48  6.422−3  0.49   0.71
14  17937     2.851−4  1.02  5.109−5  1.17  7.102−3  0.54  5.047−3  0.53   0.71
15  26235     1.796−4  1.22  3.806−5  0.77  5.863−3  0.50  4.173−3  0.50   0.71
16  41598     1.131−4  1.00  2.404−5  1.00  4.710−3  0.47  3.347−3  0.48   0.71
17  67266     7.126−5  0.96  1.344−5  1.21  3.654−3  0.53  2.602−3  0.52   0.71
18  99162     4.489−5  1.19  9.895−6  0.79  2.999−3  0.51  2.138−3  0.51   0.71
19  157779    2.828−5  0.99  6.324−6  0.96  2.410−3  0.47  1.714−3  0.48   0.71
20  259047    1.781−5  0.93  3.465−6  1.21  1.861−3  0.52  1.325−3  0.52   0.71
21  383805    1.122−5  1.18  2.532−6  0.80  1.521−3  0.51  1.085−3  0.51   0.71
22  612792    7.070−6  0.99  1.621−6  0.95  1.220−3  0.47  8.686−4  0.47   0.71
23  1014804   4.454−6  0.92  8.828−7  1.20  9.396−4  0.52  6.695−4  0.52   0.71
24  1509102   2.806−6  1.16  6.403−7  0.81  7.659−4  0.51  5.465−4  0.51   0.71
25  2412603   1.767−6  0.98  4.109−7  0.95  6.143−4  0.47  4.375−4  0.47   0.71
26  4014459   1.113−6  0.91  2.230−7  1.20  4.723−4  0.52  3.366−4  0.51   0.71
27  5983155   7.014−7  1.16  1.610−7  0.82  3.844−4  0.52  2.743−4  0.51   0.71
28  9575469   4.419−7  0.98  1.034−7  0.94  3.082−4  0.47  2.195−4  0.47   0.71
29  15969915  2.784−7  0.90  5.600−8  1.20  2.368−4  0.52  —        —      —
An efficient self-adaptive method, however, must achieve the optimal rate ρ_{H^1} = 1/2 and show a very sharp local refinement near the re-entrant corner,
compare Figure 4. From the numbers displayed in Table 1 and the graphs depicted in Figure 5 we can clearly observe that our adaptive PUM achieves this optimal value of ρ_{H^1} ≈ 1/2. The corresponding L^2-convergence rate ρ_{L^2} and L^∞-convergence rate ρ_{L^∞} are also optimal with a value close to 1. We can also observe the convergence of the effectivity index ε*_{H^1} to 0.79 from Table 1. Hence, we see that our approximation to the local error using quartic Legendre polynomials is rather accurate. The convergence of ε*_{H^1} is clear numerical evidence that the subdomain estimator described in section 3.3 satisfies an equivalence such as (3.29) also for the non-conforming Nitsche approach and solutions with less than full elliptic regularity.
Of course, an approximation of the error estimator using quartic polynomials is rather expensive. We have to solve a local problem of dimension 15 on each patch. If we are interested in steering the refinement only, then this amount of computational work might be too expensive. Hence, we carried out the same experiment using quadratic Legendre polynomials for the approximation of (3.26) only. Here, we need to solve local problems of dimension 6. The measured errors and convergence rates are displayed in Table 2 and in Figure 6. From these numbers we can clearly observe that we retain the optimal rates of ρ_{H^1} ≈ 1/2 and ρ_{L^2} ≈ 1 also with this coarser approximation. However, we also see that the quality of our approximate estimate is slightly reduced, i.e., the effectivity index converges to the smaller value 0.71. Furthermore, we find that our refinement scheme based on the quadratic approximation selects more patches for refinement than in the quartic case; e.g., on level 9 we have dof = 1854 for the quadratic polynomials and only dof = 1566 for the quartic polynomials. However, we obtain covers with a maximal level difference of L = 1 for both approximations to the local errors. Obviously, the use of the quadratic approximation for the error estimation leads to an unnecessary increase in the total number of degrees of freedom; however, since we attain optimal convergence rates with both approximations, this cheaper approximation for the error may well pay off with respect to the total compute time.

The solution of the arising linear systems using our nested iteration multilevel solver required an almost negligible amount of compute time. In this experiment it was sufficient to employ a single V(1,1)-cycle with block-Gauss-Seidel smoothing (compare [9]) within the nested iteration solver to obtain an approximate solution within discretization accuracy.
Finally, let us point out that the obtained numerical approximations u^PU to the considered singular solution are highly accurate with e_{L^∞} ≈ 10^{−7}. Such quality requires an accurate and stable numerical integration scheme which can account for the sharp localization in the adaptive refinement and the singular character of the solution u automatically. Our subdivision sparse grid integration scheme meets these requirements.

In summary we can say that the results of this numerical experiment indicate that our adaptive PUM can handle problems with singular solutions with optimal complexity. We obtain a stable approximation with optimal convergence rates already with a relatively cheap approximation to the local errors. The application of a nested iteration with our multilevel method as inner iteration yields approximate solutions of very high quality with a minimal amount of computational work.

Example 2. In our second example we consider our model problem (1.1) with Dirichlet boundary conditions on the cube (0, 1)^3 in three space dimensions. We choose f and g_D such that the solution is given by u(x) = |x|^{1/3}. Again, we use linear Legendre polynomials for the approximation and estimate the local errors using quadratic Legendre polynomials. The optimal convergence
Figure 7. Refined point set P on level J = 9 for Example 2.
Figure 8. Convergence history for Example 2.
Table 4.3. Relative errors e (4.32) and convergence rates ρ (4.33) for Example 2.

J   dof    e_L∞     ρ_L∞  e_L2     ρ_L2  e_H1     ρ_H1  e*_H1    ρ*_H1  ε*_H1
0   4      5.204−1  0.47  3.635−2  2.39  2.001−1  1.16  1.157−1  1.56   0.58
1   32     3.605−1  0.18  1.323−2  0.49  1.619−1  0.10  1.043−1  0.05   0.64
2   256    2.804−1  0.12  4.285−3  0.54  9.998−2  0.23  6.530−2  0.23   0.65
3   480    2.173−1  0.41  2.061−3  1.16  6.908−2  0.59  4.728−2  0.51   0.68
4   704    1.673−1  0.68  1.733−3  0.45  5.615−2  0.54  4.003−2  0.44   0.71
5   928    1.276−1  0.98  1.703−3  0.06  5.159−2  0.31  3.745−2  0.24   0.73
6   2440   9.678−2  0.29  9.047−4  0.65  3.600−2  0.37  2.673−2  0.35   0.74
7   4540   7.116−2  0.50  5.046−4  0.94  2.774−2  0.42  2.108−2  0.38   0.76
8   7424   5.140−2  0.66  4.099−4  0.42  2.386−2  0.31  1.822−2  0.30   0.76
9   15964  3.580−2  0.47  2.673−4  0.56  1.941−2  0.27  1.412−2  0.33   0.73
10  30076  2.355−2  0.66  1.472−4  0.94  1.613−2  0.29  1.112−2  0.38   0.69
rate with respect to the H^1-norm in three dimensions is ρ_{H^1} = 1/3. From the numbers given in Table 4.3 we can clearly observe this optimal convergence behavior of our adaptive PUM. The rates ρ_{L^2} and ρ_{L^∞} obtained for the L^2-norm and L^∞-norm respectively are comparable to the optimal value of 2/3. The effectivity index of our error estimator converges to a value of ε*_{H^1} ≈ 0.64. Hence, the quality of the quadratic approximation to the error estimator in three dimensions is comparable to that in two dimensions. The maximal level difference in this example was L = 1 as in the previous example, see also Figure 7. Again, it was sufficient to use a single V(1,1)-cycle within the nested iteration to obtain an approximate solution within discretization accuracy.
Figure 9. Refined point set P on level J = 8 for Example 3 with p = 1.
Figure 10. Zoom of isolines of the approximate solution computed on level l = 10 with p = 1.
Table 4.4. Relative errors e (4.32) and convergence rates ρ (4.33) for Example 3 using linear Legendre polynomials.

J   dof       e_L∞     ρ_L∞  e_L2     ρ_L2  e_H1     ρ_H1  e*_H1    ρ*_H1  ε*_H1
3   84        5.853−1  0.92  4.360−1  1.24  6.941−1  0.52  5.591−1  0.45   0.81
4   156       2.940−1  1.11  1.371−1  1.87  4.764−1  0.61  3.117−1  0.94   0.65
5   291       1.041−1  1.67  5.489−2  1.47  3.281−1  0.60  2.332−1  0.47   0.71
6   822       3.772−2  0.98  1.944−2  1.00  2.042−1  0.46  1.566−1  0.38   0.77
7   1524      2.102−2  0.95  1.428−2  0.50  1.508−1  0.49  1.178−1  0.46   0.78
8   5619      4.824−3  1.13  4.716−3  0.85  7.710−2  0.51  5.963−2  0.52   0.77
9   13332     2.648−3  0.69  1.805−3  1.11  4.947−2  0.51  3.841−2  0.51   0.78
10  74838     3.061−4  1.25  2.875−4  1.06  2.106−2  0.50  1.649−2  0.49   0.78
11  275997    1.168−4  0.74  8.154−5  0.97  1.090−2  0.50  8.520−3  0.51   0.78
12  899823    2.819−5  1.20  2.872−5  0.88  6.002−3  0.51  4.667−3  0.51   0.78
13  2885646   9.056−6  0.97  1.025−5  0.88  3.358−3  0.50  2.612−3  0.50   0.78
14  13579752  1.841−6  1.03  2.062−6  1.04  1.546−3  0.50  1.201−3  0.50   0.78
Example 3. In our last example we consider the Poisson problem (1.1) with Dirichlet boundary conditions where we choose f and g_D such that the solution [14] is given by

    u(x) = (1/2000) ∏_{l=1}^{2} (x^l)^2 (1 − x^l)^2 (exp(10 (x^l)^2) − 1),
see also Figure 10. Here, we now consider not only a linear approximation but also a higher order approach with quadratic polynomials since the solution is smooth enough. First, we approximate the solution using linear Legendre polynomials and estimate the error with quadratic Legendre polynomials as before. Then, we
Table 4.5. Relative errors e (4.32) and convergence rates ρ (4.33) for Example 3 using quadratic Legendre polynomials.

J   dof       e_L∞     ρ_L∞  e_L2     ρ_L2  e_H1     ρ_H1  e*_H1    ρ*_H1  ε*_H1
3   168       2.073−1  1.49  9.459−2  2.48  4.184−1  0.58  2.375−1  1.36   0.57
4   240       4.879−2  4.06  3.147−2  3.09  2.504−1  1.44  1.527−1  1.24   0.61
5   600       1.608−2  1.21  1.100−2  1.15  9.248−2  1.09  6.380−2  0.95   0.69
6   1824      2.492−3  1.68  1.742−3  1.66  2.523−2  1.17  1.628−2  1.23   0.65
7   4974      4.917−4  1.62  2.885−4  1.79  7.416−3  1.22  4.774−3  1.22   0.64
8   18492     7.648−5  1.42  6.033−5  1.19  1.854−3  1.06  1.158−3  1.08   0.62
9   61134     9.078−6  1.78  6.262−6  1.89  5.518−4  1.01  3.351−4  1.04   0.61
10  222414    1.392−6  1.45  1.216−6  1.27  1.512−4  1.00  9.025−5  1.02   0.60
11  959100    1.359−7  1.59  1.163−7  1.61  3.444−5  1.01  2.028−5  1.02   0.59
12  3580440   1.952−8  1.47  1.608−8  1.50  9.302−6  0.99  5.451−6  1.00   0.59
13  13422120  2.766−9  1.48  2.321−9  1.46  2.507−6  0.99  1.467−6  0.99   0.59
consider the case when we approximate the solution with quadratic polynomials and use cubic Legendre polynomials to estimate the errors locally. With respect to the measured convergence rates we expect to find the optimal rates ρ_{H^1} ≈ 1/2 for the linear approximation, see Table 4.4 and Figure 11, and ρ_{H^1} ≈ 1 for the quadratic approximation, see Table 4.5 and Figure 12.^6 Our adaptive PUM achieves this anticipated optimal convergence behavior with respect to the H^1-norm as well as in the L^2-norm, for which we find the optimal rates ρ_{L^2} ≈ 1 and ρ_{L^2} ≈ 3/2 respectively.

The cover refinement carried out for a linear approximation (compare Figure 9) is in fact different from the one attained for the quadratic approximation. For instance we find a maximal level difference of L = 3 for the linear approximation and L = 1 for the quadratic approximation, i.e., the point sets and covers obtained for the higher order approximation are somewhat smoother.

The quality of the quadratic approximation of the error estimator is again similar to that obtained in the previous examples, i.e., we observe ε*_{H^1} ≈ 0.78. Since the relative increase in the number of degrees of freedom going from quadratic to cubic polynomials is smaller than when we use quadratic polynomials to estimate the error of a linear approximation, we can expect to find a smaller value of ε*_{H^1} in Table 4.5. In fact the effectivity index converges to 0.59 only. Note that the results of further numerical experiments confirm that the quality of the estimator is essentially influenced by the relative increase of the polynomial degree. For instance we found a value of ε*_{H^1} ≈ 0.8 again when we use polynomials of order 6 to approximate the error of a quadratic approximation. Hence, this example demonstrates that using only a polynomial of degree p + 1 to estimate the error of an approximation of order p may
^6 Note that not all refinement steps are given in the tables and graphs for Example 3. Due to the smoothness of the solution our refinement scheme constructs several refined covers R(C_Ω^J) with J = max_{ω_i ∈ R(C_Ω^J)} l_i. For better readability we only give the final results on the respective level J.
Figure 11. Convergence history for Example 3 using linear Legendre polynomials.
Figure 12. Convergence history for Example 3 using quadratic Legendre polynomials.
not yield very accurate estimates for large p. Nonetheless, our experiments also indicate that the actual refinement is only very slightly affected by this issue.
5 Concluding Remarks

In this paper we have considered the adaptive multilevel solution of a scalar elliptic PDE by the PUM. We have presented a particle refinement scheme for h-type adaptivity in the PUM which is steered by a local subdomain-type error estimator, which is in turn approximated by local p-type enrichment. The results of our numerical experiments in two and three space dimensions are strong numerical evidence that the estimator is efficient and reliable. Note that the adaptively constructed point sets may provide a novel way to approximate density distributions which can be related to solutions of PDEs. Also note that a local hp-type refinement is (in principle) straightforward in the PUM due to the independence of the local approximation spaces. The extension of the presented scheme to hp-type adaptivity, however, is the subject of current research. The nested multilevel iteration developed in this paper provides a highly efficient solver with optimal computational complexity. In summary, the presented meshfree scheme is a main step toward the availability of efficient adaptive meshfree methods which will enable us to tackle large scale complicated problems.
Acknowledgement. This work was supported in part by the Sonderforschungsbereich 611 Singular phenomena and scaling in mathematical models funded by the Deutsche Forschungsgemeinschaft.
References

1. I. Babuška and J. M. Melenk, The Partition of Unity Method, Int. J. Numer. Meth. Engrg., 40 (1997), pp. 727–758.
2. I. Babuška and W. C. Rheinboldt, Error Estimates for Adaptive Finite Element Computations, SIAM J. Numer. Anal., 15 (1978), pp. 736–754.
3. R. E. Bank and A. Weiser, Some A Posteriori Error Estimators for Elliptic Partial Differential Equations, Math. Comp., 44 (1985).
4. R. Becker, P. Hansbo, and R. Stenberg, A Finite Element Method for Domain Decomposition with Non-Matching Grids, Math. Modell. Numer. Anal., 37 (2003), pp. 209–225.
5. M. Bern, D. Eppstein, and J. Gilbert, Provably Good Mesh Generation, J. Comput. Sys. Sci., 48 (1994), pp. 384–409.
6. A. Brandt, Multigrid Techniques: 1984 Guide with Applications to Fluid Dynamics, tech. rep., GMD, 1984.
7. M. Griebel, P. Oswald, and M. A. Schweitzer, A Particle-Partition of Unity Method—Part VI: A p-robust Multilevel Preconditioner, in Meshfree Methods for Partial Differential Equations II, M. Griebel and M. A. Schweitzer, eds., vol. 43 of Lecture Notes in Computational Science and Engineering, Springer, 2005, pp. 71–92.
8. M. Griebel and M. A. Schweitzer, A Particle-Partition of Unity Method—Part II: Efficient Cover Construction and Reliable Integration, SIAM J. Sci. Comput., 23 (2002), pp. 1655–1682.
9. M. Griebel and M. A. Schweitzer, A Particle-Partition of Unity Method—Part III: A Multilevel Solver, SIAM J. Sci. Comput., 24 (2002), pp. 377–409.
10. M. Griebel and M. A. Schweitzer, A Particle-Partition of Unity Method—Part V: Boundary Conditions, in Geometric Analysis and Nonlinear Partial Differential Equations, S. Hildebrandt and H. Karcher, eds., Springer, 2002, pp. 517–540.
11. W. Hackbusch, Multi-Grid Methods and Applications, vol. 4 of Springer Series in Computational Mathematics, Springer, 1985.
12. L. Kronsjö and G. Dahlquist, On the Design of Nested Iterations for Elliptic Difference Equations, BIT, 12 (1972), pp. 63–71.
13. J. Nitsche, Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind, Abh. Math. Sem. Univ. Hamburg, 36 (1970–1971), pp. 9–15.
14. N. Parés, P. Díez, and A. Huerta, Subdomain-based flux-free a posteriori error estimators, Comput. Meth. Appl. Mech. Engrg., 195 (2006), pp. 297–323.
15. M. A. Schweitzer, A Parallel Multilevel Partition of Unity Method for Elliptic Partial Differential Equations, vol. 29 of Lecture Notes in Computational Science and Engineering, Springer, 2003.
16. R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques, Wiley Teubner, 1996.
17. O. C. Zienkiewicz and J. Z. Zhu, A Simple Error Estimator and Adaptive Procedure for Practical Engineering Analysis, Int. J. Numer. Meth. Engrg., (1987).
Enriched Reproducing Kernel Particle Approximation for Simulating Problems Involving Moving Interfaces

Pierre Joyot¹, Jean Trunzler¹, and Francisco Chinesta²

¹ LIPSI-ESTIA, Technopole Izarbel, 64210 Bidart, France
{j.trunzler,p.joyot}@estia.fr
² LMSP, 151 Bd. de l'Hôpital, 75013 Paris, France
[email protected]

Summary. In this paper we propose a new approximation technique, within the context of meshless methods, that is able to reproduce functions with discontinuous derivatives. The approach builds on concepts of the reproducing kernel particle method (RKPM), which are extended in order to reproduce functions with discontinuous derivatives. This strategy will be referred to as the Enriched Reproducing Kernel Particle Approximation (E-RKPA). The accuracy of the proposed technique is compared with that of standard RKP approximations (which only reproduce polynomials).
Key words: Meshless methods, discontinuous derivatives, enriched approximation, reproducing kernel particle method
1 Introduction

Meshless methods are an appealing choice for constructing functional approximations (with different degrees of consistency and continuity) without a supporting mesh. This kind of technique therefore seems especially appropriate for treating 3D problems involving large displacements, because the approximation is constructed only from the cloud of nodes, whose positions evolve during the material deformation. In this manner neither remeshing nor field projections are a priori required. Another important point is the ease with which known information about the problem solution can be introduced into the approximation's functional basis. For this purpose, different reproduction conditions are enforced in the construction of the approximation functions. This approach has been widely used in the context of the moving least squares approximations involved in diffuse meshless techniques [1] as well as in the element free Galerkin method [3]. Very accurate results were obtained, for example, in fracture mechanics by introducing the crack tip behavior into the approximation basis [4].
In this work we propose a numerical strategy, based on reproducing kernel particle techniques, that is able to construct approximation functions with discontinuous derivatives on fixed or moving interfaces. This problem was treated in the context of the partition of unity by Krongauz et al. [7]. In our approach the size of the discrete system of equations remains unchanged, because no additional degrees of freedom related to the enrichment are introduced. However, enriching the approximation implies a bigger moment matrix, which can become ill-conditioned when the enrichment is applied in the whole domain. To circumvent this difficulty, local enrichments seem more appropriate. This paper focuses on local enrichments with particular reproduction conditions. The starting point of our development is the reproducing kernel particle approximation (RKPA). The RKP approximation was introduced by Liu et al. [10] to enforce some degree of consistency on standard smooth particle approximations; i.e., they proved that, starting from an SPH (smooth particle hydrodynamics) approximation [5], it is possible to enhance the kernel function so as to reproduce polynomials up to a certain degree. We have extended, or generalized, this procedure in order to reproduce any function, and more concretely, functions involving discontinuous derivatives. The question of local enrichment is then addressed.
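As a concrete illustration (the specific form below is our choice for this sketch, not a form prescribed by the paper), a typical enrichment function with a discontinuous derivative across an interface at x_Γ is the absolute distance to the interface, χ(x) = |x − x_Γ|: χ itself is continuous, but its derivative jumps from −1 to +1 at x_Γ.

```python
# Illustrative enrichment function with a discontinuous derivative across
# an interface at x_gamma. The distance-type form chi(x) = |x - x_gamma|
# is an assumption for illustration only.
X_GAMMA = 0.5

def chi(x):
    return abs(x - X_GAMMA)

def dchi(x):
    # One-sided derivative: -1 left of the interface, +1 right of it.
    return -1.0 if x < X_GAMMA else 1.0

eps = 1e-9
jump = dchi(X_GAMMA + eps) - dchi(X_GAMMA - eps)  # derivative jumps by 2
```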
2 Enriched Functional Approximations

2.1 Reproducing Conditions for Enriched Shape Function

The approximation of a function u(x) is defined by

u(x) = Σ_{I=1}^{NP} ψ_I(x) u(x_I)    (2.1)
where u(x_I) are the nodal values of the approximated function and NP the number of nodes used to discretize the domain Ω. ψ_I(x) is the shape function, which can be written in the general form:

ψ_I(x) = C φ(x − x_I)    (2.2)

where φ is the kernel function, which has a compact support. Consequently ψ_I(x) will be nonzero only for a small set of nodes. We represent by Λ(x) the set of nodes whose supports include the point x. Thus, we can write Eq. (2.1) as

u(x) = Σ_{λ∈Λ(x)} C φ(x − x_λ) u(x_λ)    (2.3)
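For instance (a sketch; the cubic-spline form and the support radius h below are our illustrative choices, not fixed by the text), a compactly supported kernel φ and the neighbor set Λ(x) could look like:

```python
H = 0.3  # support radius (illustrative assumption)

def phi(r, h=H):
    """Cubic-spline kernel with compact support: phi(r) = 0 for |r| >= h."""
    q = abs(r) / h
    if q >= 1.0:
        return 0.0
    if q <= 0.5:
        return 1.0 - 6.0 * q**2 + 6.0 * q**3
    return 2.0 * (1.0 - q)**3

def Lambda(x, nodes, h=H):
    """Lambda(x): indices of nodes whose kernel support contains the point x."""
    return [I for I, xI in enumerate(nodes) if abs(x - xI) < h]

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
lam = Lambda(0.4, nodes)  # only the nodes at 0.25 and 0.5 contribute at x = 0.4
```

Because of the compact support, the sums over Λ(x) in Eqs. (2.3) and beyond involve only a few terms at any evaluation point.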
In the following, we use the simplified notation:

u(x) = Σ_λ C φ_λ u(x_λ)    (2.4)
In the RKPM context Liu et al. [10] define C as the correction function used to ensure the reproduction conditions. Thus, any linear combination of the functions used to define the reproduction conditions can be approximated exactly. Usually the reproduction conditions are imposed to ensure that the approximation can reproduce polynomials up to a specified degree, which represents the order of consistency of the approximation. In the present work, we want to reproduce a function consisting of a polynomial part and other additional functions used to include known information about the approximated field, such as discontinuous normal derivatives across fixed or moving interfaces. Let u(x) be the function to be reproduced:

u(x) = Σ_α a_α x^α + Σ_{j=1}^{ne} e_j χ^j(x)    (2.5)
where α is a multi-index used to represent the polynomial part of u, χ is the enrichment function, and j a simple index that refers to the power of χ (we want to reproduce the enrichment function and its powers up to the degree ne). Multi-index notation is described in detail in [9]. First we consider the reproducing conditions for the polynomial part of u. When |α| = 0, we obtain the partition of unity

Σ_λ C φ_λ · 1 = 1    (2.6)

and for |α| ≤ m, |α| ≠ 0 (where m is the degree of the polynomial part)

Σ_λ C φ_λ x_λ^α = x^α    (2.7)
Now, we consider the non-polynomial reproducing conditions:

Σ_λ C φ_λ χ^i(x_λ) = χ^i(x),  1 ≤ i ≤ ne    (2.8)

All these reproducing conditions can be written in the matrix form:

Σ_λ C φ_λ R(x_λ) = R(x)    (2.9)

where R denotes the reproducing vector, which consists of the polynomial part R_p and the non-polynomial one R_e, i.e. R^T(x) = [R_p^T(x)  R_e^T(x)].
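In one spatial dimension, the reproducing vector of Eq. (2.9) simply stacks the monomials up to degree m (the polynomial part R_p) on top of the powers of the enrichment function up to ne (the non-polynomial part R_e). A minimal sketch, with m, ne, and χ chosen here purely for illustration:

```python
def reproducing_vector(x, m=1, ne=1, chi=lambda t: abs(t - 0.5)):
    """R(x) = [Rp(x); Re(x)] in 1D: monomials 1, x, ..., x^m followed by
    chi(x), chi(x)^2, ..., chi(x)^ne (illustrative m, ne and chi)."""
    Rp = [x**a for a in range(m + 1)]          # polynomial part
    Re = [chi(x)**j for j in range(1, ne + 1)] # enrichment part
    return Rp + Re

r = reproducing_vector(0.2)  # [1, x, chi(x)] evaluated at x = 0.2
```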
2.2 Direct Formulation of the Shape Functions

To construct the approximation, the correction function must be defined. The choice of C leads to different formulations. In the direct formulation, the correction function is defined by

C_d = H_d^T(x, x_λ, x_λ − x) b_d(x)    (2.10)

where H_d consists of a polynomial part of degree m, P_m(x_λ − x), and the non-polynomial one H_{ed}^T = [χ(x_λ) − χ(x)  ⋯  (χ(x_λ) − χ(x))^{ne}]:

H_d^T(x, x_λ, x_λ − x) = [P_m^T(x_λ − x)  χ(x_λ) − χ(x)  ⋯  (χ(x_λ) − χ(x))^{ne}]    (2.11)

From the definition of the correction function (2.10), the reproducing conditions (2.9) become

[Σ_λ R(x_λ) H_d^T(x, x_λ, x_λ − x) φ_λ] b_d(x) = R(x)    (2.12)
which can be written in the matrix form:

M_d(x) b_d(x) = R(x)    (2.13)

where the moment matrix M_d(x) is defined by

M_d(x) = Σ_λ R(x_λ) H_d^T(x, x_λ, x_λ − x) φ_λ    (2.14)
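The direct construction can be sketched numerically. The following 1D example (an illustrative setup of our own: m = 1, ne = 1, a cubic-spline kernel of radius H, nodes on a uniform grid, and χ(x) = |x − 1/2|) assembles the moment matrix of Eq. (2.14), solves Eq. (2.13) for b_d, and checks through Eq. (2.4) that any combination of the reproduced functions {1, x, χ} is recovered exactly:

```python
import numpy as np

H, XG = 0.35, 0.5                      # support radius and interface (assumed)
nodes = np.linspace(0.0, 1.0, 11)
chi = lambda t: abs(t - XG)            # enrichment with a kink at XG

def phi(r):
    # Cubic-spline kernel with compact support of radius H (illustrative).
    q = abs(r) / H
    if q >= 1.0:
        return 0.0
    return 1 - 6 * q**2 + 6 * q**3 if q <= 0.5 else 2 * (1 - q)**3

def Hd(x, xl):
    # H_d(x, x_l, x_l - x) for m = 1, ne = 1: [1, x_l - x, chi(x_l) - chi(x)].
    return np.array([1.0, xl - x, chi(xl) - chi(x)])

def Rvec(x):
    # Reproducing vector R(x) = [1, x, chi(x)].
    return np.array([1.0, x, chi(x)])

def shape_values(x):
    # Assemble M_d(x) = sum_l R(x_l) H_d^T phi_l (Eq. 2.14), solve
    # M_d b_d = R(x) (Eq. 2.13), and return C_d phi_l for every node.
    M = sum(np.outer(Rvec(xl), Hd(x, xl)) * phi(xl - x) for xl in nodes)
    bd = np.linalg.solve(M, Rvec(x))
    return np.array([Hd(x, xl) @ bd * phi(xl - x) for xl in nodes])

x = 0.43
w = shape_values(x)
u = lambda t: 2.0 + 3.0 * t + 0.7 * chi(t)      # lies in the reproduced space
approx = w @ np.array([u(xl) for xl in nodes])  # Eq. (2.4); exact up to round-off
```

Since u lies in the span of the reproduced functions, the reproducing conditions guarantee that `approx` equals u(x) up to round-off; in particular Σ_λ ψ_λ(x) = 1 (partition of unity).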
Finally, from Eq. (2.2) the shape function in the direct formulation is defined by

ψ_{dλ}(x) = H_d^T(x, x_λ, x_λ − x) M_d^{-1}(x) R(x) φ_λ    (2.15)

2.3 Direct and RKPM Shape Function Equivalence

In [9] Liu et al. show that the RKPM formulation satisfies the polynomial reproducing conditions. Here, we prove that this result can be extended to non-polynomial reproducing conditions. For this purpose, we first derive the enriched RKPM shape functions from the reproducing conditions. The polynomial part can be written as:

Σ_λ C φ_λ · 1 = 1,  |α| = 0    (2.16)

Σ_λ C φ_λ (x_λ − x)^α = 0,  |α| ≤ m, |α| ≠ 0    (2.17)

The proof can be found, for example, in [9].
Using the same procedure, we can also write

Σ_λ C φ_λ (χ(x_λ) − χ(x))^i = 0,  1 ≤ i ≤ ne    (2.18)
According to Eqs. (2.17) and (2.18), the reproducing conditions can be written as

Σ_λ C φ_λ R(x, x_λ, x_λ − x) = R(0)    (2.19)

where R^T(x, x_λ, x_λ − x) = [P_m^T(x_λ − x)  χ(x_λ) − χ(x)  ⋯  (χ(x_λ) − χ(x))^{ne}] and R^T(0) = [1  0  ⋯  0]. Now we choose for the correction function the same expression as in the direct formulation (2.11), i.e. H_r = H_d:

C_r = H_r^T(x, x_λ, x_λ − x) b_r    (2.20)
Introducing this expression into Eq. (2.19) results in:

[Σ_λ R(x, x_λ, x_λ − x) H_r^T(x, x_λ, x_λ − x) φ_λ] b_r = R(0)    (2.21)

or, since R ≡ H_r ≡ H,

[Σ_λ H(x, x_λ, x_λ − x) H^T(x, x_λ, x_λ − x) φ_λ] b_r = R(0)    (2.22)

whose matrix form is

M_r(x) b_r(x) = R(0)    (2.23)

where M_r(x) is the enriched RKPM moment matrix defined by

M_r(x) = Σ_λ H(x, x_λ, x_λ − x) H^T(x, x_λ, x_λ − x) φ_λ    (2.24)
Since Eqs. (2.9) and (2.19) are equivalent, it directly follows that the vectors b_d and b_r are the same; given that H_r = H_d, we can conclude that the direct and the RKPM shape functions are the same.

2.4 MLS Shape Function

In this section we prove that the MLS formulation [3] can be obtained by choosing the following form of the correction function:

C = H_m^T(x_λ) b_m(x)    (2.25)
where H_m = R. The reproducing conditions (2.9) can be rewritten as

Σ_λ R(x_λ) H_m^T(x_λ) b_m(x) φ_λ = R(x)    (2.26)

or

M_m(x) b_m(x) = R(x)    (2.27)

where M_m(x) is the MLS form of the moment matrix, defined by

M_m(x) = Σ_λ H_m(x_λ) H_m^T(x_λ) φ_λ    (2.28)
To derive the standard MLS shape function we consider Eq. (2.4), from which we obtain

u(x) = Σ_λ H_m^T(x_λ) M_m^{-1}(x) R(x) φ_λ u(x_λ)    (2.29)

Since H_m = R, the moment matrix is symmetric, and Eq. (2.29) results in

u(x) = H_m^T(x) M_m^{-1}(x) Σ_λ H_m(x_λ) φ_λ u(x_λ)    (2.30)

or

u(x) = H_m^T(x) a_m    (2.31)

where a_m is computed from

M_m(x) a_m = Σ_λ H_m(x_λ) φ_λ u(x_λ)    (2.32)
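Under the same illustrative 1D setup as before (m = 1, ne = 1, χ(x) = |x − 1/2|, a cubic-spline kernel; all assumptions of this sketch, not the paper's), the MLS route of Eqs. (2.25)–(2.29) can be sketched as follows. Since H_m(x_λ) = R(x_λ), only the moment-matrix assembly changes with respect to the direct formulation:

```python
import numpy as np

H, XG = 0.35, 0.5                      # support radius and interface (assumed)
nodes = np.linspace(0.0, 1.0, 11)
chi = lambda t: abs(t - XG)

def phi(r):
    # Cubic-spline kernel with compact support of radius H (illustrative).
    q = abs(r) / H
    if q >= 1.0:
        return 0.0
    return 1 - 6 * q**2 + 6 * q**3 if q <= 0.5 else 2 * (1 - q)**3

def R(x):
    # H_m(x) = R(x) = [1, x, chi(x)] for m = 1, ne = 1.
    return np.array([1.0, x, chi(x)])

def mls_shape_values(x):
    # M_m(x) = sum_l H_m(x_l) H_m^T(x_l) phi_l (Eq. 2.28, symmetric);
    # psi_l(x) = H_m^T(x_l) M_m^{-1}(x) R(x) phi_l  (Eq. 2.29).
    M = sum(np.outer(R(xl), R(xl)) * phi(xl - x) for xl in nodes)
    b = np.linalg.solve(M, R(x))
    return np.array([R(xl) @ b * phi(xl - x) for xl in nodes])

x = 0.43
psi = mls_shape_values(x)
u = lambda t: 1.0 - 2.0 * t + 0.5 * chi(t)       # in the reproduced space
val = psi @ np.array([u(xl) for xl in nodes])    # reproduces u(x) exactly
```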
Now, we can prove that the MLS shape functions coincide with the ones related to the direct procedure.

2.5 Direct and MLS Shape Function Equivalence

To prove the equivalence between both formulations, we first consider the expression of the i-th component of the vector M_m b_m:

[M_m b_m]_i = b_{mp0} Σ_λ R_i(x_λ) φ_λ + Σ_{|α|≤m, |α|≠0} b_{mpα} Σ_λ R_i(x_λ) (x_λ)^α φ_λ + Σ_{j=1}^{ne} b_{mej} Σ_λ R_i(x_λ) χ^j(x_λ) φ_λ