Figure 1: Venn diagrams of L(E), L(E, P), K(E, P) and K(E) depending on E = ℓp(Zn, X).
Note the beauty and simplicity of Figure 1 in the case 1 < p < ∞, which is called "the perfect case" in [70]. If, in addition, dim X < ∞, then K(E, P) = K(E) and consequently, L(E, P) = L(E), which essentially simplifies many things.⁴

⁴We will make this statement much more precise later, in Sections 1.6.3 and 2.3.
Chapter 1. Preliminaries
1.3.5 Matrix Representation

Given an arbitrary Banach space X and some p ∈ [1, ∞], with every bounded linear operator A on E := ℓp(Zn, X) we will associate a matrix [A] = [a_αβ]_{α,β∈Zn}, the entries of which are bounded linear operators on X. To this end, for every α, β ∈ Zn, identify im P_{α} with X in the natural way, and let a_αβ ∈ L(X) be the operator P_{α} A P_{β} |_{im P_{β}}, regarded as acting from im P_{β} ≅ X to im P_{α} ≅ X. In this sense, we write

\[ a_{\alpha\beta} = P_{\{\alpha\}} A P_{\{\beta\}}, \qquad \alpha, \beta \in \mathbb{Z}^n. \tag{1.16} \]

Consequently, if u ∈ E has finite support, we get that v := Au is given by

\[ v_\alpha = \sum_{\beta \in \mathbb{Z}^n} a_{\alpha\beta} u_\beta, \qquad \alpha \in \mathbb{Z}^n, \]

that is,

\[ Au = [A]u, \tag{1.17} \]
by identifying sequences with vectors. The matrix [A] is referred to as the matrix representation of A.

Remark 1.27. a) If X = C, then the entries a_αβ are just complex numbers. If also p < ∞, then [A] is the usual matrix representation of A with respect to the canonical standard basis in ℓp.

b) In the continuous case A ∈ L(Lp), the entry a_αβ of the matrix representation of the discretization (1.3) of A can be identified with P_{Hα} A P_{Hβ} |_{im P_{Hβ}}, regarded as acting from im P_{Hβ} ≅ Lp(H) = X to im P_{Hα} ≅ Lp(H) = X. In this sense, also in the continuous case, we can think of an operator A on Lp as a matrix of "little" operators a_αβ on Lp(H). We abbreviate this matrix [A_G] by [A] and call it the matrix representation of A.

Example 1.28. – Integral Operators. If p ∈ [1, ∞], n = 1, and A ∈ L(Lp) is an integral operator on the real line, that is,

\[ (Au)(x) = \int_{-\infty}^{\infty} k(x, y)\, u(y)\, dy, \qquad x \in \mathbb{R}, \]

with an appropriate scalar-valued function k on R², referred to as the kernel function of A, then, for all i, j ∈ Z,

\[ (P_{H_i} A P_{H_j} u)(x) = \int_{j}^{j+1} k(x, y)\, u(y)\, dy, \qquad x \in [i, i+1], \]

is an integral operator acting from Lp([j, j+1]) to Lp([i, i+1]). By shifting x and y to [0, 1], we get that the entries of [A] = [a_ij]_{i,j∈Z} are the "little" integral operators

\[ (a_{ij} u)(x) = \int_{0}^{1} k(x+i,\, y+j)\, u(y)\, dy, \qquad x \in [0, 1], \]

on the interval [0, 1].
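The shift of x and y to [0, 1] can be checked numerically. The following sketch (not from the book; it uses a sample Gaussian kernel and a simple midpoint quadrature) discretizes the integral operator on a grid and verifies that the block of [A] in position (i, j) coincides with the "little" operator with kernel k(x + i, y + j) on [0, 1]:

```python
import numpy as np

def little_operator(k, i, j, m=50):
    """Quadrature matrix of (a_ij u)(x) = ∫_0^1 k(x+i, y+j) u(y) dy."""
    x = (np.arange(m) + 0.5) / m            # midpoints of [0, 1]
    return k(x[:, None] + i, x[None, :] + j) / m

k = lambda x, y: np.exp(-(x - y) ** 2)      # a sample kernel (assumption)

# Big quadrature matrix on [-2, 3), then slice out the block for
# x ∈ [0, 1), y ∈ [1, 2), i.e. position (i, j) = (0, 1).
m = 50
grid = np.arange(-2, 3, 1.0 / m) + 0.5 / m
A = k(grid[:, None], grid[None, :]) / m
i0, j0 = 2 * m, 3 * m                        # start indices of cells [0,1) and [1,2)
block = A[i0:i0 + m, j0:j0 + m]

assert np.allclose(block, little_operator(k, 0, 1, m))
```

The agreement is exact here because both sides evaluate the same kernel at the same quadrature points; for the true operators the identification is the change of variables above.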
Proposition 1.29. a) For arbitrary operators A, B ∈ L(E) with [A] = [a_αβ]_{α,β∈Zn} and [B] = [b_αβ]_{α,β∈Zn}, one has

\[ [A + B] = [A] + [B] \qquad \text{and} \qquad [AB] = [A][B], \]

where [A] + [B] := [a_αβ + b_αβ]_{α,β∈Zn} and [A][B] := [Σ_{γ∈Zn} a_αγ b_γβ]_{α,β∈Zn}.

b) If A ∈ L(E) with p < ∞ and [A] = [a_αβ], then [A*] = [a*_βα].

c) Take a sequence (A_m) of operators on E and let [A_m] = [a^{(m)}_αβ] denote their matrix representations. If A_m ⇒ 0, then ‖a^{(m)}_αβ‖ ⇒ 0 uniformly with respect to α and β as m → ∞. If A_m → 0, then a^{(m)}_αβ → 0 for all α and β as m → ∞.

Proof. a) From (1.17) we get that, for every finitely supported u ∈ E,

\[ [A + B]u = (A + B)u = Au + Bu = [A]u + [B]u \quad \text{and} \quad [AB]u = ABu = A(Bu) = [A](Bu) = [A][B]u \]

hold, which proves the claim.

b) From P_{α} A* P_{β} = P*_{α} A* P*_{β} = (P_{β} A P_{α})* and (1.16) we get that the entry in position (α, β) of [A*] equals a*_βα.

c) If A_m ⇒ 0, then, by (1.16), ‖a^{(m)}_αβ‖ = ‖P_{α} A_m P_{β}‖ ≤ ‖A_m‖ → 0 as m → ∞. Now suppose A_m → 0 as m → ∞. Choose arbitrary α, β ∈ Zn and an x ∈ X. Define u ∈ ℓp(Zn, X) by u_β = x and u_γ = 0 for γ ≠ β. Since A_m → 0, we get v^{(m)} := A_m u → 0 as m → ∞. From ‖v^{(m)}_α‖_X ≤ ‖v^{(m)}‖ we get that also the αth component v^{(m)}_α = a^{(m)}_αβ x tends to zero in X as m → ∞. Since α, β and x were chosen arbitrarily, we have proven the claim.

From (1.16) it is clear that, for every A ∈ L(E), there is exactly one matrix representation [A]. A much more delicate question is whether [A] also uniquely determines the operator A or not. From Proposition 1.29 a) we conclude that A → [A] is a linear mapping from L(E) to the space of all L(X)-valued matrices over Zn × Zn. Now the question is about the kernel of that mapping. But first we insert an auxiliary result.
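Proposition 1.29 a) says that composition of operators is exactly block-matrix multiplication of their representations. A minimal sketch (assumptions: n = 1, a finite truncation, and X = C^d so that entries become d×d blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 2                                  # indices α ∈ {0,…,N−1}, dim X = d
A = rng.standard_normal((N * d, N * d))      # a "big" operator on ℓp({0,…,N−1}, C^d)
B = rng.standard_normal((N * d, N * d))

def block(M, a, b):
    """Entry a_αβ = P_{α} M P_{β}, an operator on X = C^d."""
    return M[a * d:(a + 1) * d, b * d:(b + 1) * d]

# The (α, β) entry of [AB] equals Σ_γ a_αγ b_γβ:
a, b = 1, 4
lhs = block(A @ B, a, b)
rhs = sum(block(A, a, g) @ block(B, g, b) for g in range(N))
assert np.allclose(lhs, rhs)
```

In the infinite-dimensional setting the sum over γ runs over all of Zn, but for a fixed finitely supported u only finitely many terms contribute.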
Lemma 1.30. a) P_{α} A = 0 ∀α ∈ Zn ⇐⇒ P_m A = 0 ∀m ∈ N ⇐⇒ A = 0.

b) A P_{β} = 0 ∀β ∈ Zn ⇐⇒ A P_m = 0 ∀m ∈ N ⇐= A = 0.

Proof. a) If P_{α} A = 0 ∀α ∈ Zn, then also P_m A = Σ_{‖α‖≤m} P_{α} A = 0 ∀m ∈ N. Now suppose A ≠ 0. Then there is a u ∈ E such that Au ≠ 0, i.e. (Au)_α ≠ 0 for some α ∈ Zn. But then P_m A u ≠ 0 and hence P_m A ≠ 0 for all m ≥ ‖α‖ – a contradiction. Finally, A = 0 obviously implies the other two conditions.

b) If A P_{β} = 0 ∀β ∈ Zn, then also A P_m = Σ_{‖β‖≤m} A P_{β} = 0 ∀m ∈ N. Conversely, if A P_m = 0, then also A P_{β} = A P_m P_{β} = 0 if ‖β‖ ≤ m. Finally, A = 0 obviously implies the other conditions.
The first two conditions in Lemma 1.30 b) turn out to be equivalent to [A] = 0.

Proposition 1.31. a) The kernel of the mapping [·] : A → [A] consists of the operators A for which A P_{β} = 0 for all β ∈ Zn or, which is equivalent, A P_m = 0 for all m ∈ N.

b) If p < ∞, then the mapping [·] : A → [A] is one-to-one, i.e. ker[·] = {0}.

c) For p = ∞, ker[·] is an infinite-dimensional subspace of L(E) which intersects with L(E, P) and with S in 0 only.

Proof. a) [A] = 0 is equivalent to P_{α} A P_{β} = 0 for all α, β ∈ Zn. From Lemma 1.30 a) we get that this is equivalent to A P_{β} = 0 for all β ∈ Zn, which is moreover equivalent to A P_m = 0 for all m ∈ N by Lemma 1.30 b).

b) If p < ∞, then P_m → I strongly as m → ∞. If A ∈ ker[·], then we know from a) that A P_m = 0 for all m ∈ N. Passing to the strong limit as m → ∞, we get A = 0.

c) For p = ∞, there are indeed operators A ≠ 0 with [A] = 0. A standard example is the operator u → f(u)v in Example 1.26 c). If this functional f is fixed and v runs through an infinite set of linearly independent elements in E, then we get an infinite variety of linearly independent operators in ker[·].

Now suppose that A ∈ L(E, P) is in ker[·]. From A P_m = 0 we get that A Q_m = A for all m ∈ N. The first condition in (1.12) then shows that, for every k ∈ N, P_k A = P_k A Q_m ⇒ 0 as m → ∞, i.e. P_k A = 0 for every k ∈ N. From Lemma 1.30 a) we get A = 0.

Finally, suppose that A ∈ S is in ker[·], and let B be its preadjoint, i.e. A = B*. From Proposition 1.29 b) we get that all entries of [B] are zero as well. But since B is an operator on ℓ1(Zn, Y) with Y* = X, we know from b) that B = 0, and consequently A = 0.
So in the case p = ∞, we have two completely different settings which both guarantee that for a given matrix [A] there is only one operator A within that setting:

(i) A ∈ L(E, P),
(ii) A ∈ S.

Both conditions (i) and (ii) are sufficient but not necessary for the uniqueness of the operator A behind [A]. Moreover, neither of the two conditions implies the other one. To see this,

• firstly, think of a generalized multiplication operator M̂_b where some component of the sequence b has no preadjoint operator. Then A = M̂_b is subject to condition (i) but not to (ii).

• Secondly, let (Au)_α := u_0 for all α ∈ Zn. Then A is subject to condition (ii) since A is the adjoint operator of Example 1.26 a), but A is not subject to (i) since the second condition in (1.12) is violated.

For a given space E = ℓp(Zn, X), we put

\[ M(E) := \begin{cases} L(E) & \text{if } p < \infty, \\ L(E, \mathcal{P}) \cup S & \text{if } p = \infty, \end{cases} \]

which makes M(E) some fairly large set of operators for which we know that no two elements have the same matrix representation. By restricting ourselves to operators in M(E), we will gently walk around certain difficulties that can arise if p = ∞. The uniqueness of the operator behind a given matrix will turn out to be only one example of such a problem.

Although the entries of our matrices [A] are indexed in a slightly more complicated way than in the matrices that one usually has in mind, we will speak about rows, columns and diagonals of [A] in the usual way:

Definition 1.32. For an operator A ∈ L(E) with matrix representation [A] = [a_αβ], we refer to the sequence (a_αβ)_{β∈Zn} as the αth row, to (a_αβ)_{α∈Zn} as the βth column, and to (a_{β+γ,β})_{β∈Zn} as the γth diagonal of [A] – or simply, of A. As usual, the 0th diagonal is also called the main diagonal. Moreover, we will refer to (a_{α,−α})_{α∈Zn} as the cross diagonal of A.

Example 1.33. a) To discuss some simple examples, first let A = M̂_b be a generalized multiplication operator. Then [A] is a diagonal matrix, i.e. it is supported on the main diagonal only, with a_αα = b_α, where b = (b_α) is the symbol of A. For operators in M(E), also the reverse statement is true: A is a generalized multiplication operator if and only if [A] is a diagonal matrix.

b) As a second example, put A = V_γ with γ ∈ Zn. Then the matrix [A] is supported on the γth diagonal only. Precisely, a_αβ is I_X, the identity operator on X, if α − β = γ, and it is 0 otherwise.
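A minimal sketch of a) and b) (assumptions: n = 1, X = C, a finite truncation): a multiplication operator is a diagonal matrix, the shift V_γ sits on the γth diagonal, and by Proposition 1.29 a) their product sits on the γth diagonal with entries b_α:

```python
import numpy as np

N = 8
b = np.arange(1.0, N + 1)                 # symbol of the multiplication operator
M_b = np.diag(b)                          # [M_b] is a diagonal matrix

gamma = 2
V = np.eye(N, k=-gamma)                   # (V_γ u)_α = u_{α−γ}: ones where α − β = γ

# [M_b][V_γ] is supported on the γth diagonal with entries b_α:
MV = M_b @ V
assert np.allclose(np.diag(MV, k=-gamma), b[gamma:])
assert np.count_nonzero(MV) == N - gamma
```

Here `np.diag(MV, k=-gamma)` reads off the γth diagonal in the book's convention a_{β+γ,β} (row index exceeds column index by γ).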
c) Now we combine the two previous examples and study A = M̂_b V_γ. From Proposition 1.29 a) and Examples 1.33 a) and b), we get that [A] = [M̂_b][V_γ] is supported on the γth diagonal only, where a_αβ = b_α if α − β = γ, and a_αβ = 0 otherwise. In analogy to a), also here the reverse statement is true for operators in M(E): A is of the form M̂_b V_γ if and only if [A] is only supported on the γth diagonal.

d) As a final example, put A = P_U with U ⊂ Zn. Then A is a very simple example of a generalized multiplication operator. So [A] is a diagonal matrix again, where a_αβ = I_X if α = β ∈ U, and a_αβ = 0 otherwise.

1.3.6 Band and Band-dominated Operators

One usually regards a matrix [a_ij], with i, j running through {1, …, m} or N or even through Z, as a band matrix if its entries a_ij vanish whenever |i − j| is larger than some number, which is then called the bandwidth of the matrix. As introduced in Section 1.3.5, for the investigation of operators on E = ℓp(Zn, X), we are interested in the study of matrices [a_αβ] which are indexed over Zn × Zn. We call [a_αβ]_{α,β∈Zn} a band matrix of bandwidth w if all entries a_αβ with ‖α − β‖ > w vanish, or, what is equivalent, if the matrix is only supported on the γth diagonals with ‖γ‖ ≤ w.

Now we are looking for operators A ∈ L(E) with such a matrix representation. From Example 1.33 c) we know that for operators of the form

\[ A = \sum_{\|\gamma\| \le w} \hat M_{b^{(\gamma)}} V_\gamma \tag{1.18} \]

with b^{(γ)} ∈ ℓ∞(Zn, L(X)) for every γ ∈ Zn involved in the summation, the matrix representation [A] is a band matrix with bandwidth w. Example 1.33 c) also tells us that the operators A ∈ M(E) with a band matrix representation [A] of bandwidth w are exactly those of the form (1.18). We will therefore use equation (1.18) to define one of the most important classes of operators in this book:

Definition 1.34. An operator A is called a band operator of bandwidth w if it is of the form (1.18). The set of all band operators is denoted by BO.

Remark 1.35. A band operator acts boundedly on all spaces E = ℓp(Zn, X) with p ∈ [1, ∞]. Also note that it is automatically contained in M(E). There are operators A outside of M(E) for which [A] is a band matrix as well but which we do not regard as band operators, for instance the operator in Example 1.26 c).

Proposition 1.36. For a bounded linear operator A on E = ℓp(Zn, X) with matrix representation [A] = [a_αβ], the following three properties are equivalent:

(i) A is a band operator with bandwidth w,
(ii) A ∈ M(E) and a_αβ = 0 for ‖α − β‖ > w,
(iii) P_V A P_U = 0 for all sets U, V ⊂ Zn with dist(U, V) > w.
Proof. (i)⇒(iii). Suppose that A is of the form (1.18) and that U, V ⊂ Zn have a distance larger than w. Then

\[ P_V A P_U = \sum_{\|\gamma\| \le w} P_V \hat M_{b^{(\gamma)}} V_\gamma P_U = \sum_{\|\gamma\| \le w} \hat M_{b^{(\gamma)}} P_V P_{\gamma + U} V_\gamma = 0 \]

since P_V P_{γ+U} = P_{V ∩ (γ+U)} = P_∅ = 0 if ‖γ‖ ≤ w < dist(U, V).

(iii)⇒(ii). Take arbitrary α, β ∈ Zn with ‖α − β‖ > w. Put U := {β} and V := {α}, and remember (1.16) to see that a_αβ = P_{α} A P_{β} = P_V A P_U = 0. Moreover, (iii) implies P_k A Q_m = 0 = Q_m A P_k for all m > k + w, which shows that A ∈ L(E, P) ⊂ M(E).

(ii)⇒(i) follows immediately from the discussion in Example 1.33 c).

It is easily seen that BO is a subalgebra of L(E), as sums and products of band matrices are band matrices again. Unfortunately, this subalgebra is not closed! So it is a very natural desire to pass to the closure of BO, which is a Banach algebra.

Definition 1.37. For every p ∈ [1, ∞], let BDOp refer to the closure, with respect to the operator norm in L(ℓp(Zn, X)), of the algebra BO of band operators. The elements of BDOp are called band-dominated operators. We will moreover refer to the collection {b^{(γ)}}_{‖γ‖≤w} as the diagonals or the coefficients of the band operator A in (1.18). If F is a subset of ℓ∞(Zn, L(X)) and A ∈ BDOp is the norm limit of a sequence of band operators with coefficients in F, then we will say that A is a band-dominated operator with coefficients in F.

Remark 1.38. BO is the smallest algebra that contains all generalized multiplication operators and shift operators. Its elements are finite sums of products of these objects. BDOp is the smallest Banach algebra that contains all generalized multiplication operators and shifts. In short:

\[ BO = \operatorname{alg}_{L(\ell^p(\mathbb{Z}^n, X))} \bigl\{ \hat M_b,\, V_\alpha : b \in \ell^\infty(\mathbb{Z}^n, L(X)),\ \alpha \in \mathbb{Z}^n \bigr\}, \]
\[ BDO^p = \operatorname{closalg}_{L(\ell^p(\mathbb{Z}^n, X))} \bigl\{ \hat M_b,\, V_\alpha : b \in \ell^\infty(\mathbb{Z}^n, L(X)),\ \alpha \in \mathbb{Z}^n \bigr\}. \]

Sometimes BO and BDOp are also referred to as the algebra and the Banach algebra, respectively, that are generated by generalized multiplications and shifts. In that sense, we wrote about M̂_b and V_α as the basic building stones or atoms of the class BDOp in Section 1.3.1, and we will do so in what follows.
Note that, as the notation already suggests, the class BO does not depend on the Lebesgue exponent p ∈ [1, ∞] of the underlying space, while BDOp heavily does!
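The defining form (1.18) and the cut-off characterization (iii) of Proposition 1.36 can be illustrated together. A sketch (assumptions: n = 1, X = C, a finite truncation, random coefficients):

```python
import numpy as np

N, w = 12, 2
rng = np.random.default_rng(1)

# A band operator of the form (1.18): A = Σ_{|γ|≤w} M_{b^(γ)} V_γ.
A = np.zeros((N, N))
for gamma in range(-w, w + 1):
    b = rng.standard_normal(N)               # coefficients b^(γ)
    A += np.diag(b) @ np.eye(N, k=-gamma)    # M_b V_γ lives on the γth diagonal

def P(S):
    """Projection P_S onto the coordinates in S."""
    Q = np.zeros((N, N))
    idx = list(S)
    Q[idx, idx] = 1.0
    return Q

# Proposition 1.36 (iii): P_V A P_U = 0 whenever dist(U, V) > w.
assert np.allclose(P([6, 7]) @ A @ P([0, 1]), 0)       # dist = 5 > w
assert not np.allclose(P([2, 3]) @ A @ P([0, 1]), 0)   # dist = 1 ≤ w
```

The first check succeeds for every choice of coefficients; the second only fails for degenerate (here: measure-zero) coefficient choices.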
Example 1.39. – Laurent Operators. We have a short intermezzo on Laurent operators. These are the operators whose matrix representation is constant along every diagonal. Let p = 2, n = 1, X = C, and fix a function a ∈ L∞(T). The operator

\[ A = \sum_{k=-\infty}^{\infty} a_k V_k, \quad \text{i.e.} \quad [A] = \begin{pmatrix} \ddots & \ddots & \ddots & \ddots & \\ \ddots & a_0 & a_{-1} & a_{-2} & \ddots \\ \ddots & a_1 & a_0 & a_{-1} & \ddots \\ \ddots & a_2 & a_1 & a_0 & \ddots \\ & \ddots & \ddots & \ddots & \ddots \end{pmatrix} \]

on E = ℓ2 is called a Laurent operator, where a_k ∈ C are the Fourier coefficients

\[ a_k = \frac{1}{2\pi} \int_0^{2\pi} a(e^{i\theta})\, e^{-ik\theta}\, d\theta, \qquad k \in \mathbb{Z}, \]

of the function a on the unit circle T, which is referred to as the symbol of A =: L(a). It is readily shown that, via the Fourier isomorphism between ℓ2 and L2(T), the Laurent operator L(a) on ℓ2 corresponds to the operator of multiplication by a in L2(T). As a consequence, one gets that

\[ L(a) \in L(\ell^2) \iff a \in L^\infty(\mathbb{T}), \qquad \|L(a)\| = \|a\|_\infty, \quad a \in L^\infty(\mathbb{T}), \tag{1.19} \]
\[ L(a) + L(b) = L(a + b), \qquad L(a)L(b) = L(ab), \quad a, b \in L^\infty(\mathbb{T}), \tag{1.20} \]
\[ L(a) \text{ is invertible in } L(\ell^2) \iff a \text{ is invertible in } L^\infty(\mathbb{T}). \]

As a consequence of (1.19) and (1.20), we moreover get that

\[ L(a) \in BO \iff \#\{k : a_k \ne 0\} < \infty \iff a \text{ is a trigonometric polynomial}, \]
\[ L(a) \in BDO^2 \iff a \in \operatorname{clos}_{L^\infty(\mathbb{T})}\{\text{trigonometric polynomials}\} = C(\mathbb{T}). \]
Note that the class M^p of all symbols a ∈ L∞(T) for which L(a) ∈ L(ℓp) holds decreases as soon as p ∈ [1, ∞] differs from 2. The same is true for the set of all a ∈ M^p for which L(a) ∈ BDOp. For every p ∈ [1, ∞], the set M^p is a Banach subalgebra of M² = L∞(T), equipped with the norm ‖a‖_{M^p} := ‖L(a)‖_{L(ℓ^p)}. Moreover, it holds that

\[ M^1 \subset M^{p_1} \subset M^{p_2} \subset M^2 = L^\infty(\mathbb{T}) \tag{1.21} \]
if 1 ≤ p1 ≤ p2 ≤ 2, and M^p = M^q if 1/p + 1/q = 1, including the equality M^1 = M^∞. From (1.21) it follows that

\[ L(a) \in BDO^p \iff a \in \operatorname{clos}_{M^p}\{\text{trigonometric polynomials}\} \subset C(\mathbb{T}). \]

It follows that L(a) ∈ BDO^{p₁} implies L(a) ∈ BDO^{p₂} for all 1 ≤ p1 ≤ p2 ≤ 2, and that L(a) ∈ BDOp if and only if L(a) ∈ BDOq with 1/p + 1/q = 1. In particular, the latter holds with p = 1 and q = ∞ if and only if a ∈ W(T); that is, Σ_k |a_k| < ∞, which is a proper subclass of C(T). So this is certainly an example of the dependence of BDOp on p.

Remark 1.40. The impression that, for every A ∈ BDOp, it holds that A_m ⇒ A, where [A_m] is just the restriction of [A] to a finite number of diagonals, is false in general! As a counterexample, take p = 2, n = 1, X = C, and A = L(a), where a ∈ C(T) is such that the sequence of partial Fourier sums

\[ t \mapsto \sum_{k=-m}^{m} a_k t^k, \qquad t \in \mathbb{T}, \]

is not uniformly convergent to a as m → ∞. Consequently, by (1.19),

\[ A_m = \sum_{k=-m}^{m} a_k V_k \not\Rightarrow A \quad \text{as} \quad m \to \infty. \]

By Fejér's theorem [39, Theorem 3.1] we know that, however, the Fejér–Cesàro means uniformly converge to a; that is,

\[ t \mapsto \sum_{k=-m}^{m} \Bigl( 1 - \frac{|k|}{m+1} \Bigr) a_k t^k \;\to\; a \]

in the norm of L∞(T) as m → ∞, showing that Ã_m ⇒ A with

\[ \tilde A_m = \sum_{k=-m}^{m} \Bigl( 1 - \frac{|k|}{m+1} \Bigr) a_k V_k \in BO. \]
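The Fejér weights 1 − |k|/(m+1) can be watched at work numerically. A sketch (assumptions: the sample symbol a(e^{iθ}) = |θ − π|, Fourier coefficients approximated by the FFT, sup-norm approximated on a grid); for this particular symbol even the partial sums converge uniformly, so the sketch only illustrates the uniform convergence of the Fejér–Cesàro means, not the counterexample itself:

```python
import numpy as np

M = 2048
theta = 2 * np.pi * np.arange(M) / M
a = np.abs(theta - np.pi)                  # a continuous function on T

ak = np.fft.fft(a) / M                     # a_k ≈ (1/2π) ∫ a(e^{iθ}) e^{-ikθ} dθ

def fejer_mean(m):
    """The mth Fejér–Cesàro mean Σ_{|k|≤m} (1 − |k|/(m+1)) a_k e^{ikθ}."""
    ks = np.arange(-m, m + 1)
    w = 1 - np.abs(ks) / (m + 1)
    return np.real(sum(wk * ak[k % M] * np.exp(1j * k * theta)
                       for k, wk in zip(ks, w)))

errs = [np.max(np.abs(fejer_mean(m) - a)) for m in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]         # sup-error decreases with m
```

Here `ak[k % M]` exploits that the FFT stores the coefficient for frequency −k at index M − k.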
We will now prepare Theorem 1.42, which gives some equivalent characterizations of BDOp. One of these characterizations corresponds to condition (iii) in Proposition 1.36. The other characterizations have something to do with the class of bounded and uniformly continuous functions:

Definition 1.41. By BC we denote the set of all bounded and continuous complex-valued functions on Rn. Moreover, let BUC denote the subset of all uniformly continuous functions among BC.
Given a function ϕ ∈ L∞ and a vector t = (t₁, …, t_n) ∈ Rn, we define the function ϕ_t at the point x = (x₁, …, x_n) ∈ Rn as

\[ \varphi_t(x) := \varphi(t_1 x_1, \ldots, t_n x_n). \tag{1.22} \]

Moreover, if r ∈ Rn, then we put ϕ_{t,r}(x) := ϕ_t(x − r) for all x ∈ Rn.

Now take an arbitrary function ϕ ∈ BUC and watch the functions ϕ_t as t → 0; that is, blowing up the argument of ϕ such that the graph becomes flatter and flatter. If we compare the function ϕ_t with a shifted copy V_κ ϕ_t for fixed κ ∈ Zn, then it clearly follows from ϕ ∈ BUC that the difference ϕ_t − V_κ ϕ_t tends to zero uniformly the more we blow up the argument, i.e. as t → 0. This observation shows that, for every ϕ ∈ BUC and every fixed κ ∈ Rn,

\[ \|M_{\varphi_t} V_\kappa - V_\kappa M_{\varphi_t}\| = \|M_{\varphi_t} - V_\kappa M_{\varphi_t} V_{-\kappa}\| = \|M_{\varphi_t - V_\kappa \varphi_t}\| = \|\varphi_t - V_\kappa \varphi_t\|_\infty \tag{1.23} \]

tends to zero as t → 0. We will now change our point of view and regard this convergence process as a property of the shift operator: A = V_κ is subject to

\[ [M_{\varphi_t}, A] \Rightarrow 0 \quad \text{as} \quad t \to 0 \tag{1.24} \]

for all ϕ ∈ BUC. Roughly speaking, property (1.24) says that A commutes increasingly better with M_{ϕ_t} the more we blow up the function ϕ_t.

Obviously, property (1.24) is written down for the continuous case, A ∈ L(Lp), because this is the natural playground for functions ϕ_t ∈ BUC and the operator M_{ϕ_t} of multiplication by such. But we will transport this property to the discrete case A ∈ L(ℓp(Zn, X)) by finding the discrete analogue of M_{ϕ_t}.

For f ∈ BC, let M̂_f denote the generalized multiplication operator M̂_f := M̂_b with b = (b_α), where b_α = f(α) I_X; i.e. M̂_f acts on u ∈ ℓp(Zn, X) by

\[ (\hat M_f u)_\alpha = f(\alpha)\, u_\alpha \qquad \forall \alpha \in \mathbb{Z}^n. \tag{1.25} \]

Note that the operator M̂_f evaluates the function f ∈ BC at the integer points α ∈ Zn only. Then we study the set of all operators A on ℓp(Zn, X) such that

\[ [\hat M_{\varphi_t}, A] \Rightarrow 0 \quad \text{as} \quad t \to 0 \tag{1.26} \]

holds for all ϕ ∈ BUC. Elementary computations show that this set is a Banach algebra. Above we have convinced ourselves that it contains all shift operators. Moreover, trivially, all generalized multiplication operators are subject to (1.26). Consequently, every operator A ∈ BDOp is so. Much more surprisingly, it turns out that the set of bounded linear operators which are subject to (1.26) equals BDOp. Here we go:
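The identity ‖[M_{ϕ_t}, V_κ]‖ = ‖ϕ_t − V_κ ϕ_t‖_∞ makes the commutator norm easy to compute. A sketch (assumptions: n = 1, the sample function ϕ(x) = sin(x)/(1 + |x|) ∈ BUC, sup-norm approximated on a large grid):

```python
import numpy as np

phi = lambda x: np.sin(x) / (1 + np.abs(x))   # a sample function in BUC
kappa = 1.0                                    # fixed shift distance

x = np.linspace(-200, 200, 400001)

def comm_norm(t):
    """‖ϕ_t − V_κ ϕ_t‖_∞ ≈ sup_x |ϕ(tx) − ϕ(t(x − κ))|."""
    return np.max(np.abs(phi(t * x) - phi(t * (x - kappa))))

# Blowing up the argument (t → 0) flattens the graph, so the commutator
# norm shrinks, as claimed after (1.23):
norms = [comm_norm(t) for t in (1.0, 0.1, 0.01)]
assert norms[0] > norms[1] > norms[2]
```

For a Lipschitz ϕ the decay is of order t·κ, which matches the heuristic that ϕ_t varies by at most Lip(ϕ)·t·κ over a shift of length κ.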
Theorem 1.42. For every p ∈ [1, ∞], every Banach space X and every bounded linear operator A on E = ℓp(Zn, X), the following properties are equivalent:

(i) A is a band-dominated operator.

(ii) P_V A P_U ⇒ 0 as dist(U, V) → ∞, in the following sense:

\[ \forall \varepsilon > 0\ \exists d > 0 : \quad \|P_V A P_U\| < \varepsilon \quad \text{for all } U, V \subset \mathbb{Z}^n \text{ with } \operatorname{dist}(U, V) > d. \]

(iii) [M̂_{ϕ_{t,r}}, A] ⇒ 0 as t → 0, uniformly with respect to r ∈ Rn, for all ϕ ∈ BUC.

(iv) (1.26), i.e. [M̂_{ϕ_t}, A] ⇒ 0 as t → 0, holds for all ϕ ∈ BUC.

(v) A ∈ M(E) and (1.26) holds for ϕ(x₁, …, x_n) = exp i(x₁ + · · · + x_n).

Here M(E) denotes the following subset of M(E):

\[ M(E) := \begin{cases} L(E) & \text{if } p < \infty, \\ L(E, \mathcal{P}) \cup S & \text{if } p = \infty \text{ and } X = L^\infty(H), \\ L(E, \mathcal{P}) & \text{otherwise}. \end{cases} \]

This theorem is a conglomeration of Theorem 2.1.6 in [70] and Proposition 1.16 in [50] (alias Proposition 2.13 in [47]), with the slight but convenient modification of moving the condition A ∈ M(E) from the top of the theorem to condition (v), since it turns out to be redundant in (i) – (iv). In [41], Kurbatov gives an example of an operator A on ℓ∞(Z) which shows that the condition A ∈ M(E) is not redundant in (v).

Proof. (i)⇒(ii)⇒(iii)⇒(iv) is shown in the proof of Theorem 2.1.6 in [70].

(iv)⇒(v). Put E = ℓp(Zn, X). In Proposition 2.4 of [68] it is shown that AT and TA are in K(E, P) for all T ∈ K(E, P) and all A subject to (iv). This clearly shows that (iv) implies A ∈ L(E, P) ⊂ M(E) by Definition 1.18. Obviously, (iv) also implies the validity of condition (1.26) for the particular function ϕ ∈ BUC mentioned in (v).

(v)⇒(i). This implication is proven in Theorem 2.1.6 of [70] for p < ∞ and for all A ∈ L(ℓ∞(Zn, X), P). For the missing case, p = ∞, X = L∞(H), A ∈ S, it is proven in [50], Proposition 1.16, and also in [47], Proposition 2.13. The key in the latter case is a combination of arguments from [41] and [80].

Finally, we define a fairly large and especially well-behaved subalgebra of BDOp – the so-called Wiener algebra. Again abbreviate ℓp(Zn, X) by E, where n and X are fixed, and only p shall vary in [1, ∞] if we say so. For a band operator A of the form (1.18), one clearly has

\[ \|A\|_{L(E)} = \Bigl\| \sum_{\|\gamma\| \le w} \hat M_{b^{(\gamma)}} V_\gamma \Bigr\|_{L(E)} \le \sum_{\gamma \in \mathbb{Z}^n} \|b^{(\gamma)}\|_\infty =: \|A\|_W \tag{1.27} \]

for all p ∈ [1, ∞], where we put b^{(γ)} = 0 if ‖γ‖ exceeds the bandwidth w of A.
Definition 1.43. By W we denote the closure of BO with respect to ‖·‖_W. Equipped with ‖·‖_W, this set turns out to be a Banach algebra, henceforth called the Wiener algebra.

From Definition 1.43 and inequality (1.27) we get that W is contained in BDOp for all p ∈ [1, ∞],

\[ BO \subset W \subset \bigcap_{1 \le p \le \infty} BDO^p. \tag{1.28} \]

In Example 1.49 d) and e) and in Figure 2 on page 28 we will see that both inclusions in (1.28) are proper. Like BO, the class W does not depend on p ∈ [1, ∞]. An operator A ∈ W can be thought of as acting on any of the spaces E = ℓp(Zn, X), p ∈ [1, ∞], where the inequality

\[ \|A\|_{L(E)} \le \|A\|_W \tag{1.29} \]

from (1.27) extends to all A ∈ W.

Remark 1.44. Suppose A ∈ W and [A] = [a_αβ]_{α,β∈Zn}. Unlike for arbitrary band-dominated operators A (see Remark 1.40), for operators A ∈ W, the sequence of band operators A_m with [A_m] = [a_αβ]_{‖α−β‖≤m} converges to A in the W-norm, and hence in the norm of L(E) for all p ∈ [1, ∞].

Example 1.45. – Convolution Operators on R. We will have another look at the continuous case A ∈ L(Lp) with n = 1 and p ∈ [1, ∞], where A is a so-called convolution operator on the real line; that is, an integral operator as in Example 1.28, the kernel function k(x, y) of which only depends on the difference of x and y. So let

\[ (Au)(x) = \int_{-\infty}^{\infty} \kappa(x - y)\, u(y)\, dy, \qquad x \in \mathbb{R}, \]

where κ ∈ L1. From Example 1.28 we know that the entries of [A] = [a_ij] are

\[ (a_{ij} u)(x) = \int_0^1 \kappa(x - y + i - j)\, u(y)\, dy, \qquad x \in [0, 1], \]

for all i, j ∈ Z, showing that also a_ij depends on the difference i − j only. An elementary calculation for p = 1 and p = ∞, together with Riesz–Thorin interpolation (see [39, IV.1.4] or [84, V.1]), shows that

\[ \|a_{ij}\| \le \sup_{c \in [d-1,\, d]} \|\kappa|_{[c,\, c+1]}\|_{L^1} \]

for all d = i − j ∈ Z. Together with κ ∈ L1, this clearly yields A_G ∈ W for the discretization (1.3) of A, showing that A is band-dominated for every choice of p ∈ [1, ∞], and that

\[ \|A\|_{L(L^p)} = \|A_G\|_{L(\ell^p(\mathbb{Z}, X))} \le \|A_G\|_W \le 2 \|\kappa\|_{L^1} \]

holds with X = Lp([0, 1]). Young's inequality [72] shows that the factor 2 can even be omitted in the inequality ‖A‖_{L(Lp)} ≤ 2‖κ‖_{L1}.

The following is a very deep and vital result in the theory of the operator classes introduced and studied here. As before, we abbreviate ℓp(Zn, X) by E.

Proposition 1.46.
a) L(E, P) is inverse closed in L(E).
b) BDOp is inverse closed in L(E).

c) W is inverse closed in L(E).

Proof. a) and c) are both very deep observations with very technical proofs, for which we frankly refer to [70], Theorem 1.1.9 and Theorem 2.5.2, respectively. (Note that our P is a uniform approximate projection in the sense of [70].)

b) This follows immediately from Theorem 1.42 (iv): If A ∈ BDOp is invertible and ϕ ∈ BUC, then

\[ [\hat M_{\varphi_t}, A^{-1}] = A^{-1} [A, \hat M_{\varphi_t}] A^{-1} \Rightarrow 0 \quad \text{as} \quad t \to 0, \]

which shows that also A⁻¹ ∈ BDOp.

We finally extend these notions to the continuous case.

Definition 1.47. An operator A on Lp is called a band operator, a band-dominated operator or an operator in the Wiener algebra if its discretization (1.3) is so.
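The Wiener-norm bound of Example 1.45 can be checked numerically. A sketch (assumptions: the sample kernel κ(t) = e^{−|t|}, quadrature on a uniform grid, and the spectral norm of each quadrature block as a stand-in for the operator norm of a_d):

```python
import numpy as np

m = 40                                        # grid points per unit cell
kappa = lambda t: np.exp(-np.abs(t))          # a sample L¹ kernel, ‖κ‖_{L¹} = 2

def a_block(d):
    """Quadrature matrix of (a_d u)(x) = ∫_0^1 κ(x − y + d) u(y) dy."""
    x = (np.arange(m) + 0.5) / m
    return kappa(x[:, None] - x[None, :] + d) / m

kappa_l1 = 2.0                                # ∫ e^{-|t|} dt = 2

# Approximate Wiener norm: sum over diagonals d of the block norms.
wiener = sum(np.linalg.norm(a_block(d), 2) for d in range(-30, 31))
assert wiener <= 2 * kappa_l1                 # ‖A_G‖_W ≤ 2‖κ‖_{L¹}
```

The truncation at |d| ≤ 30 is harmless here since the blocks decay like e^{−|d|}.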
1.3.7 Comparison

For simplicity of notation, again abbreviate the space ℓp(Zn, X) by E. We will finish this section with an illustrative and comprehensive comparison of the recently introduced operator classes BO, BDOp, K(E, P) and L(E, P). We will compare these classes from quite different points of view, which might turn out to be helpful for a better understanding of the different concepts.

Inclusions

Probably the first question that arises when comparing these classes is whether and how they are contained in each other. We already addressed this question for the classes K(E), K(E, P), L(E) and L(E, P) in Section 1.3.4. K(E) and L(E) shall not be considered here. Obviously,

\[ BO \subset W \subset BDO^p \qquad \text{and} \qquad K(E, \mathcal{P}) \subset L(E, \mathcal{P}). \]

We will see how these two chains are connected.

Proposition 1.48. The inclusion K(E, P) ⊂ BDOp ⊂ L(E, P) holds.
Proof. Let T ∈ K(E, P), take an ε > 0, and choose m ∈ N such that ‖Q_m T‖ < ε and ‖T Q_m‖ < ε. Then, for all sets U, V ⊂ Zn with dist(U, V) > d := 2m, by

\[ 2m < \operatorname{dist}(U, V) \le \operatorname{dist}(\{0\}, U) + \operatorname{dist}(\{0\}, V), \]

at least one of the two terms on the right is greater than m. Suppose, without loss of generality, it is the second one. Then P_V = P_V Q_m, and consequently,

\[ \|P_V T P_U\| \le \|P_V Q_m T P_U\| \le \|Q_m T\| < \varepsilon, \]

showing that T ∈ BDOp by Theorem 1.42.

Now suppose A ∈ BDOp and k, m ∈ N with k ≤ m. With U_m = Zn \ {−m, …, m}ⁿ and V_k = {−k, …, k}ⁿ, we have

\[ P_k A Q_m = P_{V_k} A P_{U_m} \Rightarrow 0 \qquad \text{and} \qquad Q_m A P_k = P_{U_m} A P_{V_k} \Rightarrow 0 \]

as dist(U_m, V_k) = m − k + 1 → ∞, by Theorem 1.42. Consequently, for every fixed k ∈ N, P_k A Q_m ⇒ 0 and Q_m A P_k ⇒ 0 as m → ∞, and hence A ∈ L(E, P) by Proposition 1.20.

As a consequence of Proposition 1.48, we get the following Venn diagram:

Figure 2: The relationship between BO, W, BDOp, K(E, P) and L(E, P).
The operators in the respective difference sets are as follows. Later in this section we will discuss some of these examples in more detail.

Example 1.49. a) We start with a first example in L(E, P) \ BDOp. Let J refer to the flip operator (Ju)_α = u_{−α}. Its matrix [J] is supported on the cross diagonal only, all entries of which are I_X.

b) Another example in L(E, P) \ BDOp is (A₁ u)_α = u_{2α}. The matrix entries a_{α,2α} are equal to I_X, and all other entries are zero.
c) A typical element of K(E, P) is a generalized multiplication operator A₂ = M̂_b where the sequence b = (b_α) decays towards infinity. For example, we might put b_α = 1/(‖α‖+1) I_X, i.e. A₂ = M̂_f with f(x) = 1/(‖x‖+1).

d) The composition A₂J is an operator which is still in K(E, P) (and consequently, in BDOp) but no longer in BO, in contrast to A₂. The matrix [A₂J] is only supported on the cross diagonal, where a_{α,−α} = 1/(‖α‖+1) I_X.

e) Another example, very similar to A₂J, is (A₃ u)_α = 2^{−‖α‖} u_{−α}, the matrix representation of which also lives on the cross diagonal only, with a_{α,−α} = 2^{−‖α‖} I_X. This time the cross diagonal decays fast enough for A₃ ∈ W, in contrast to the previous example A₂J.

f) For an example of a band-dominated operator A₄ which is neither in K(E, P) nor a band operator, we pass to n = 1 and look at the Laurent operator (see Example 1.39)

\[ A_4 = \sum_{k=-\infty}^{\infty} \frac{1}{2^{|k|}} V_k, \qquad \text{i.e.} \qquad [A_4] = \Bigl[ \frac{1}{2^{|i-j|}}\, I_X \Bigr]_{i,j \in \mathbb{Z}}. \]

Note that the choice of the coefficients 1/2^{|k|} implies A₄ ∈ W with ‖A₄‖_W = 3.
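The different decay rates in d), e) and f) can be made concrete. A sketch (assumptions: n = 1, X = C, a finite truncation; the Wiener norm is computed as the sum over diagonals of the largest entry modulus):

```python
import numpy as np

N = 200
i = np.arange(-N, N + 1)

def wiener_norm(M):
    """Σ_γ sup of |entries| on the γth diagonal (finite-matrix version)."""
    n = M.shape[0]
    return sum(np.max(np.abs(np.diag(M, k))) for k in range(-n + 1, n))

# f) Laurent operator A4: a_ij = 2^{-|i-j|}; Wiener norm Σ_k 2^{-|k|} = 3.
A4 = 2.0 ** (-np.abs(i[:, None] - i[None, :]))
assert abs(wiener_norm(A4) - 3.0) < 1e-10      # truncation error is tiny

# d), e) Cross-diagonal operators A2·J (harmonic decay) and A3 (geometric).
A2J = np.zeros((2 * N + 1, 2 * N + 1))
A3 = np.zeros_like(A2J)
for row, alpha in enumerate(i):
    col = np.where(i == -alpha)[0][0]           # position of index -α
    A2J[row, col] = 1.0 / (abs(alpha) + 1)
    A3[row, col] = 2.0 ** (-abs(alpha))

assert wiener_norm(A3) < 3.1                    # stays bounded: A3 ∈ W
assert wiener_norm(A2J) > np.log(N)             # grows like a harmonic sum: A2J ∉ W
```

The growing truncated Wiener norm of A₂J is exactly why its cross diagonal, though decaying, is not summable.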
Matrix Features

Here we will point out which properties of the matrix [A] correspond to the membership of A in one of the classes BO, BDOp, K(E, P) and L(E, P). Purely for illustration, and without any further explanation, we will insert some very intuitive pictures representing [A].

• BO is the most trivial class from this point of view. By definition of this class, the matrix representation of A ∈ BO is a band matrix – it is only supported on a finite number of diagonals. There is some number w, the bandwidth of A, such that all matrix entries a_αβ with ‖α − β‖ > w are zero.

• BDOp was derived from the previous class by passing to norm limits. For every A ∈ BDOp, the matrix [A] now can be supported on infinitely many diagonals – but the entries in the γth diagonal have to tend to zero as ‖γ‖ → ∞. We can easily derive this from Theorem 1.42, property (ii): For every ε > 0, there is a d > 0 such that, for all entries a_αβ with ‖α − β‖ > d,

\[ \|a_{\alpha\beta}\| = \|P_{\{\alpha\}} A P_{\{\beta\}}\| < \varepsilon. \]
[Matrix pictures: A ∈ BO and A ∈ BDOp.]
Note that A ∈ BDOp does not imply a particular decay rate of the norms ‖a_αβ‖ as ‖α − β‖ → ∞. In fact, for every positive null sequence δ₀, δ₁, … there is an operator A ∈ BDOp such that

\[ \sup_{\|\alpha - \beta\| = k} \|a_{\alpha\beta}\| = \delta_k \]

for all k ∈ N₀. For example, take (Au)_α = δ_{‖α‖} u_{−α}. Then [A] is only supported on the cross diagonal, where a_{α,−α} = δ_{‖α‖} I_X → 0 as ‖α‖ → ∞. On the other hand, an absolutely summable decay, that is,

\[ \delta_k := \sup_{\|\alpha - \beta\| = k} \|a_{\alpha\beta}\| \qquad \text{with} \qquad \sum_{k=0}^{\infty} \delta_k < \infty, \]

together with A ∈ M(E), implies A ∈ W ⊂ BDOp for all p ∈ [1, ∞].

• K(E, P) is the set of all operators A ∈ L(E) for which

\[ Q_m A \Rightarrow 0 \qquad \text{and} \qquad A Q_m \Rightarrow 0 \qquad \text{as} \qquad m \to \infty. \]

The first condition implies that the matrix entries in the αth row must tend to zero as ‖α‖ → ∞. The second condition implies that the same has to be true for columns. More precisely, for every ε > 0 there is an m ∈ N such that ‖a_αβ‖ < ε for all matrix entries a_αβ with ‖α‖ > m or ‖β‖ > m; that is, for all but finitely many entries. In short,

\[ \|a_{\alpha\beta}\| \Rightarrow 0 \qquad \text{as} \qquad \|(\alpha, \beta)\| \to \infty \text{ in } \mathbb{Z}^{2n}. \]

The finite submatrices [a_αβ]_{‖α‖,‖β‖≤m} already contain the "major part" of [A], which is nicely reflected by the property P_m A P_m ⇒ A as m → ∞.
• L(E, P) is the set of all operators A ∈ L(E) for which (1.12) holds. This is equivalent to

\[ P_{\{\alpha\}} A Q_m \Rightarrow 0 \qquad \text{and} \qquad Q_m A P_{\{\beta\}} \Rightarrow 0 \qquad \text{as} \qquad m \to \infty \]

for all fixed α, β ∈ Zn. The first condition says that every row of the matrix [A] decays at infinity. The second condition says that the same is true for every column.

[Matrix pictures: A ∈ K(E, P) and A ∈ L(E, P).]
With this knowledge we have a look back to the operators in Figure 2 on page 28, see Example 1.49 a) – f). The matrices [J] and [A1 ] have entries of norm 1 in an arbitrarily large distance from the main diagonal, hence, these operators are not banddominated. But every ﬁxed row and every ﬁxed column of [J] and [A1 ] decays at inﬁnity, so J, A1 ∈ L(E, P). Pm , 0, A2 and I are generalized multiplication operators. Consequently, they are band operators of bandwidth 0. Their matrix representations are supported on the main diagonal only. Those cases where this main diagonal decays at inﬁnity, namely Pm , 0 and A2 , are in K(E, P). A2 J and A3 are in K(E, P) as well. But they are not band operators since their matrix entries are spread over all diagonals. From Proposition 1.48 we know that A2 J and A3 are banddominated (for all choices of p ∈ [1, ∞]). Note that the diﬀerent decay rate of the entries of [A2 J] and [A3 ] towards inﬁnity implies that A3 is and A2 J is not contained in the WienerAlgebra W. The matrix representation of I + A2 J is supported on the main and cross diagonal. The operator is not in K(E, P) since the main diagonal does not vanish at inﬁnity, but it is in BDOp since the cross diagonal decays. However, this decay is not strong enough for membership in W. Finally, the Laurent operator A4 is banddominated. It is even in the Wiener algebra since k∈Z 1/2k < ∞. A4 is not a band operator since it is supported
32
Chapter 1. Preliminaries
on all diagonals, and it is not in K(E, P) since its entries aαβ do not tend to zero in norm as (α, β) → ∞. The operators in Example 1.26 a) and b) are not in L(E, P) since the 0th row does not decay at infinity. Example 1.26 c) is not in M(E), whence looking at its matrix representation is completely misleading. In this case, the matrix leads us to the operator 0 ∈ M(E), which clearly differs from the operator in Example 1.26 c).
Information Transport

Finally, we will look at our operator classes from a very different point of view, which might be named “information transport” – at least we will do so. Again let E stand for ℓp(Zn, X) with p ∈ [1, ∞] and n ∈ N. In (1.10), (1.11), (1.12), Proposition 1.36 and Theorem 1.42, the operator classes K(E, P), L(E, P), BO and BDOp are characterized by the behaviour of PV A PU for certain sets U, V ⊂ Zn – for instance, when U and V are drifting apart. For a moment we will refer to U as “the source domain”, to V as “the test domain”, and to nonzero function values as “information”. Then ‖PV A PU‖ measures the ability of the operator A to “transport” information from the source to the test domain. Indeed, starting with an arbitrary element u ∈ E with ‖u‖ = 1, the information under consideration is restricted to the source domain by applying PU to u. Then A maps PU u to an element v := A PU u, and PV v = PV A PU u checks how much information has arrived at the test domain V. Especially interesting is whether or not an operator A is able to transport information over large distances. Therefore, for every nonnegative integer d, we define

fA(d) := sup_{U,V} ‖PV A PU‖,    (1.30)
where the supremum is taken over all U, V ⊂ Zn with dist(U, V) ≥ d. The number fA(d) indicates A’s best information transport ability, shortly called information flow, over the distance d.

Proposition 1.50. For A ∈ L(E), the function fA : N0 → R is monotonically decreasing with fA(0) = ‖A‖.

Proof. The inclusion {(U, V) : dist(U, V) ≥ d1} ⊃ {(U, V) : dist(U, V) ≥ d2} shows that fA(d1) ≥ fA(d2) if 0 ≤ d1 < d2. With U = V = Zn we have d = 0 and fA(0) ≥ ‖PV A PU‖ = ‖A‖. On the other hand, for all sets U, V ⊂ Zn, ‖PV A PU‖ ≤ ‖PV‖ · ‖A‖ · ‖PU‖ = ‖A‖, which shows that fA(0) ≤ ‖A‖.

Since the function fA is monotonically decreasing and bounded from below by zero, the limit

fA(∞) := lim_{d→∞} fA(d) ∈ [0, ‖A‖]

always exists.
Remark 1.51. In the continuous case E = Lp, one might wish to consider d ∈ [0, ∞) and U, V ⊂ Rn in (1.30), and consequently fA : [0, ∞) → R. This can lead to a more detailed picture of the behaviour of A, although the results of this approach are compatible with those stated here.

From (1.16) we get that, for every matrix entry of an operator A ∈ L(E),

‖aαβ‖ ≤ fA(|α − β|)  for all  α, β ∈ Zn    (1.31)

holds. So fA(|γ|) is an upper bound on the matrix entries in the γth diagonal of A. Of course the function fA cannot reflect the details of the matrix [A] but, on the other hand, it detects some things that the matrix itself cannot ‘see’ – namely if, roughly speaking, something is going on at infinity only. For instance, if C is as in Example 1.26 c), then fC(d) = ‖C‖ for all d ∈ N0.

Proposition 1.52. If A ∈ L(E) and [A] = 0, then fA ≡ ‖A‖.

Proof. Let ε > 0 be arbitrary, and take u ∈ E with ‖u‖ = 1 and ‖Au‖ > ‖A‖ − ε/2. From (1.7) we get that there exists a k ∈ N such that ‖Pk Au‖ > ‖Au‖ − ε/2 > ‖A‖ − ε. Since [A] = 0, we know from Proposition 1.31 a) that A Pm = 0, i.e. A Qm = A for all m ∈ N. Combining these two results, we get ‖Pk A Qm u‖ > ‖A‖ − ε for all m ∈ N, showing that

fA(d) ≥ ‖Pk A Qk+d‖ ≥ ‖Pk A Qk+d u‖ > ‖A‖ − ε

for all d ≥ 0. Since ε was chosen arbitrarily and fA(d) ≤ ‖A‖ for all d ≥ 0, we are done.

Now Proposition 1.36 and Theorem 1.42 allow the following characterizations of the classes BO and BDOp in terms of the function fA.

• A ∈ BO with bandwidth w if and only if fA(w + 1) = 0.
• In particular, A is a generalized multiplication operator if and only if fA(1) = 0.
• A ∈ BDOp if and only if fA(∞) = 0.

In words: Band operators are exactly those operators that cannot transport information over distances greater than a certain number (their bandwidth), while band-dominated operators are those operators that increasingly fail to transport information over a distance d as d → ∞.

Example 1.53. As a very simple example, we look at the band Laurent operator A = 3V1 + 2V5 + V9. We start with some information u supported in {−1, 0, 1} and show the result Au as well as the information flow function fA.
[Figure: the input u, supported in {−1, 0, 1}; its image Au = 3 V1 u + 2 V5 u + V9 u, supported in {0, . . . , 10}; and the graph of the information flow function fA over d = 0, . . . , 10.]

The matrix [A] clearly has all entries equal to 3 on the 1st diagonal, equal to 2 on the 5th diagonal and equal to 1 on the 9th diagonal. The values and jumps of the function fA correspond to these numbers. The bandwidth of A is 9.

Note that, in general, as the following example shows, it is not possible to find out whether an operator belongs to the Wiener algebra or not by looking at its information flow only.

Example 1.54. Suppose n = 1, and consider the operators A, B ∈ BDOp, the matrices of which are only supported on the cross diagonal, where
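The plateaus and jumps of fA in this example can be checked numerically. In the sketch below, the truncation to a finite window and the choice p = 1 – where ‖PV A PU‖ is the largest ℓ1 column norm of the selected submatrix, so that the supremum is attained with a singleton source domain U – are assumptions of this illustration, not part of the text:

```python
# Sketch: information flow f_A(d) = sup{ ||P_V A P_U|| : dist(U, V) >= d }
# for the band Laurent operator A = 3 V_1 + 2 V_5 + V_9, truncated to the
# window {-N, ..., N} and measured in the l^1 operator norm.  On l^1 the
# norm of P_V A P_U is the largest column sum of the selected submatrix,
# so it suffices to take U = {j0} a singleton and V = {i : |i - j0| >= d}.
N = 30
idx = range(-N, N + 1)
coeff = {1: 3.0, 5: 2.0, 9: 1.0}          # (V_k u)(i) = u(i - k)
a = {(j + k, j): c for j in idx for k, c in coeff.items() if abs(j + k) <= N}

def info_flow(d):
    """Approximate f_A(d) on the finite window (exact for moderate d)."""
    return max(
        sum(c for (i, j), c in a.items() if j == j0 and abs(i - j0) >= d)
        for j0 in idx
    )

flow = [info_flow(d) for d in range(11)]
print(flow)   # plateaus: 6 for d <= 1, then 3 for 2 <= d <= 5, then 1, then 0
```

The output reproduces the values and jumps described above: fA equals 6 = 3 + 2 + 1 up to d = 1, drops to 3 = 2 + 1 at d = 2, to 1 at d = 6, and vanishes from d = 10 on, in accordance with bandwidth 9.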
a_{k,−k} = 1/2^m if k = 2^m (m = 0, 1, 2, . . .), and a_{k,−k} = 0 otherwise,
b_{k,−k} = 1/2^m if 2^{m−1} < k ≤ 2^m,

for k ∈ N, and all other entries are zero. Then

∑_{k=1}^∞ |a_{k,−k}| = 2,  while  ∑_{k=1}^∞ |b_{k,−k}| = ∞.

So A ∈ W and B ∈ BDOp \ W, although fA(d) = fB(d) = b_{d,−d} for all d ∈ N0. So there can be no condition on fA which is sufficient and necessary for A ∈ W. But at least we have the following result.

Proposition 1.55. For A ∈ L(E), the condition

∑_{d=1}^∞ d^{n−1} fA(d) < ∞    (1.32)

is sufficient, but not necessary, for A ∈ W.
Proof. Suppose A ∈ L(E) and (1.32) holds. Then, clearly, fA(d) → 0 as d → ∞, whence A ∈ BDOp ⊂ M(E). So A is uniquely determined by its matrix representation [A], and it remains to show that ∑_{γ∈Zn} ‖aγ‖ < ∞, where ‖aγ‖ is the supremum norm of the γth diagonal of [A]. From (1.31), we get that ‖aγ‖ ≤ fA(|γ|) for all γ ∈ Zn. Moreover, put md := #{γ ∈ Zn : |γ| = d} for every d ∈ N0. A little thought shows that

md = (2d + 1)^n − (2d − 1)^n ≤ 2n (2d + 1)^{n−1} ≤ 2n (3d)^{n−1} = c d^{n−1}

holds for every d ∈ N, where c = 2n · 3^{n−1} is independent of d. Consequently,

∑_{γ∈Zn} ‖aγ‖ ≤ ∑_{γ∈Zn} fA(|γ|) = ∑_{d=0}^∞ md fA(d) ≤ fA(0) + c ∑_{d=1}^∞ d^{n−1} fA(d) < ∞.

Finally, Example 1.54 shows that (1.32) is not necessary for A ∈ W.
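Both halves of this argument invite a quick numerical sanity check. The following pure-Python sketch (the truncation points 2^10 and the small test dimensions are choices made for this illustration) confirms that the two cross-diagonal sums from Example 1.54 behave as claimed, and that the lattice count md matches the formula (2d + 1)^n − (2d − 1)^n used in the proof:

```python
# Sketch: the two cross-diagonal sequences from Example 1.54 and the
# lattice-point count from the proof of Proposition 1.55.
from itertools import product

# a_{k,-k} = 1/2^m if k = 2^m (m = 0, 1, 2, ...), else 0: partial sums
# approach 2, so A belongs to the Wiener algebra W.
sum_a = sum(2.0 ** -m for m in range(11))            # k = 1, 2, 4, ..., 1024

# b_{k,-k} = 1/2^m for 2^{m-1} < k <= 2^m: each dyadic block adds 1/2,
# so the series diverges and B is not in W.
def b(k):
    m = (k - 1).bit_length()                         # smallest m with k <= 2^m
    return 2.0 ** -m

sum_b = sum(b(k) for k in range(1, 1025))            # partial sum up to k = 2^10

# m_d = #{gamma in Z^n : |gamma| = d} (max-norm), claimed (2d+1)^n - (2d-1)^n:
def m_d(n, d):
    return sum(1 for g in product(range(-d, d + 1), repeat=n)
               if max(abs(c) for c in g) == d)

print(sum_a, sum_b)                    # ~2 (converging) vs. 1 + 10/2 = 6 (diverging)
print(m_d(2, 3), (2*3+1)**2 - (2*3-1)**2)
```

The partial sum of the b-diagonal grows by 1/2 per dyadic block, which is exactly the divergence that keeps B out of W despite fA = fB.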
Looking at (1.10), (1.11) and (1.12), we also find rough descriptions of the classes L(E, P) and K(E, P), although these are not as precise as the characterizations of BO and BDOp:

• L(E, P) is the set of operators which draw no information from any bounded set to infinity and none from infinity to any bounded set.
• K(E, P) is the set of operators which do not draw information from or to infinity at all.

All operators in Example 1.26 obviously draw information from infinity to some finite place. This already indicates that they are not in L(E, P).

Example 1.56. a) If A is a generalized multiplication operator, A = M̂b, then clearly

‖PV A PU‖ = ‖b|U∩V‖∞,

which is zero as soon as dist(U, V) > 0. Multiplication operators are unable to transport information over any distance greater than zero. Their action is local – in the strictest sense of the word. So

fA(0) = ‖A‖ = ‖b‖∞  and  fA(d) = 0 if d ≥ 1.

b) For the shift operator A = Vα with α ∈ Zn, we have PV A PU = PV P_{α+U} Vα = P_{(α+U)∩V} Vα. Consequently,

fA(d) = 1 if 0 ≤ d ≤ |α|,  and  fA(d) = 0 if d > |α|.

Of course, this answer is not very surprising: The shift operator Vα is only able to transport information over distances d ≤ |α|.

c) The flip operator A = J from Example 1.49 a) is able to transport information over arbitrarily large distances (choose U = −V). The same is true for
the operator in Example 1.49 b) (choose U = {2α} and V = {α}). In both cases, fA ≡ 1. Note that both operators draw information from infinity to infinity again (in the opposite direction, in the case of J) but not to any other place. So both operators are in L(E, P). They are not in BDOp since fA(∞) = 1 ≠ 0.
1.4 Invertibility of Sets of Operators

Fix a Banach space X, and let T denote an arbitrary index set. Besides the invertibility of a single operator A on X, we will be concerned with the question whether a whole set {Aτ}τ∈T of such operators is collectively invertible in a certain sense. Distinction is made between the following qualities of collective invertibility:

Definition 1.57. We say that a bounded set {Aτ}τ∈T of operators Aτ ∈ L(X) is elementwise invertible if

Aτ is invertible for every τ ∈ T,    (E)

and we will call {Aτ}τ∈T uniformly invertible if

Aτ is invertible for every τ ∈ T  and  sup_{τ∈T} ‖Aτ⁻¹‖ < ∞.    (U)

Moreover, if T ⊂ R is measurable and f : T → L(X) denotes the mapping τ → Aτ, we will call {Aτ}τ∈T essentially invertible if

f is an invertible element of the Banach algebra L∞(T, L(X)).    (S)
A first application of (U) is the following property, which will be of great importance in the theory of approximation methods:

Definition 1.58. Let T be some subset of R which is unbounded towards plus infinity. A bounded operator sequence (Aτ)τ∈T is said to be stable if there is some τ∗ ∈ T such that the set {Aτ}τ∈T, τ>τ∗ is uniformly invertible.

In contrast to (U), property (S) only says that almost all operators Aτ are invertible, and that

ess sup_{τ∈T} ‖Aτ⁻¹‖ < ∞.

So there is a subset T̃ ⊂ T of measure zero such that an appropriate change of all Aτ with τ ∈ T̃ makes the set {Aτ}τ∈T uniformly invertible.
In Section 2.4 we have to deal with the question under which conditions on the set {Aτ}τ∈T property (S) implies (U). The following condition will prove to do the job.

Definition 1.59. We call the mapping f : T ⊂ R → L(X) with f : τ → Aτ sufficiently smooth if it has the following property: If (τk)∞k=1 is a sequence in T with τk → τ0 ∈ T as k → ∞ and (Aτk)∞k=1 is stable, then Aτ0 is invertible and ‖Aτ0⁻¹‖ ≤ sup_{k>k∗} ‖Aτk⁻¹‖.
A first indication of the strength of this property is given by the following lemma. For simplicity, we will put ‖A⁻¹‖ := ∞ if A is not invertible. Moreover, put UεT(τ) := (τ − ε, τ + ε) ∩ T for every T ⊂ R, τ ∈ T and ε > 0.

Lemma 1.60. Suppose τ0 ∈ T is an accumulation point of T, and τ → Aτ is sufficiently smooth. Then the following is true: If Aτ0 is not invertible, then, for every M > 0, there is a neighborhood UεT(τ0) such that ‖Aσ⁻¹‖ > M for all σ ∈ UεT(τ0).

Proof. Suppose, on the contrary, that there is some M > 0 such that in every neighbourhood U^T_{1/k}(τ0) with k ∈ N there is a τk with ‖Aτk⁻¹‖ ≤ M. Clearly, these τk form a sequence in T which converges to τ0 as k → ∞, and since τ → Aτ is sufficiently smooth and (Aτk) is obviously stable, we get that Aτ0 is invertible, which is a contradiction.

Lemma 1.61. Continuous functions f : T → L(X) are sufficiently smooth.

Proof. Suppose (τk)∞k=1 ⊂ T tends to τ0 ∈ T as k → ∞ and Aτk is invertible for all sufficiently large k, say k ≥ k∗, with s := sup_{k≥k∗} ‖Aτk⁻¹‖ < ∞. If f is continuous, then Aτk ⇒ Aτ0 as k → ∞, which is norm convergence in the Banach algebra L(X). So we just have to apply Lemma 1.3 to the sequence (Aτk)∞k=k∗ to get that Aτ0 is invertible and ‖Aτ0⁻¹‖ ≤ s.

Continuous functions are the most trivial examples of sufficiently smooth functions. In Section 2.4 we will study some more sophisticated cases, where τ → Aτ is sufficiently smooth under considerably weaker conditions.

We will say that T ⊂ R is a massive set if |UεT(τ)| > 0 for every τ ∈ T and every ε > 0. Note that every interval (of positive length) is massive and that massive sets have no isolated points. R \ Q is massive, Q is not.
Proposition 1.62. Denote by f : T → L(X) the mapping τ → Aτ. Then the interplay between the properties (E), (U) and (S) of the set {Aτ}τ∈T is as follows:

a) (U) implies (E).
b) If f is continuous and T is compact, then (E) implies (U).

In addition, suppose that T is a measurable subset of R. Then:

c) (U) implies (S).
d) If f is sufficiently smooth and T is massive, then (S) implies (U).

Proof. a) and c) are obvious.

b) If f : τ → Aτ is continuous and (E) is fulfilled, then also the mapping τ → Aτ⁻¹ is continuous. If, in addition, T is compact, then the latter mapping is bounded, and this implies (U).

d) Let (S) be fulfilled. Put

s := ess sup_{τ∈T} ‖Aτ⁻¹‖  and  T̃ := {τ ∈ T : ‖Aτ⁻¹‖ > s}.

Suppose τ0 ∈ T̃. Since T is massive, we have |U^T_{1/k}(τ0)| > 0 for every k ∈ N. Since, by definition of ess sup, |T̃| = 0, there is a τk ∈ U^T_{1/k}(τ0) \ T̃ for every k ∈ N. Then (τk)∞k=1 ⊂ T \ T̃ with τk → τ0 as k → ∞, and (Aτk)∞k=1 is stable, by the choice of T̃. But since f is sufficiently smooth, also Aτ0 is invertible with ‖Aτ0⁻¹‖ ≤ s, which contradicts τ0 ∈ T̃. Consequently, T̃ = ∅, and (U) holds.
1.5 Approximation Methods

Hagen, Roch and Silbermann begin [36] with the following neat overview:

• Functional Analysis: Solve equations in infinitely many variables.
• Linear Algebra: Solve equations in finitely many variables.
• Numerical Analysis: Build the bridge!

The latter is done by approximation methods.
1.5.1 Definition

Fix a Banach space E. If A ∈ L(E) is an invertible operator, then the equation

Au = b    (1.33)

has a unique solution u ∈ E for every right-hand side b ∈ E. We will deal with the approximate solution of this equation for dim E = ∞, where A ∈ L(E) and b ∈ E are given and u ∈ E is to be determined.
For this purpose, let T ⊂ R be some index set which is unbounded towards plus infinity, and let (Eτ)τ∈T refer to a sequence of Banach subspaces of E which are the images of projection operators Πτ : E → Eτ and which exhaust E in the sense that the projections Πτ converge⁵ to the identity operator I on E as τ → ∞. If one has to solve an equation of the form (1.33), one tries to approximate⁵ the operator A ∈ L(E) by a sequence of operators (Ãτ)τ∈T with Ãτ ∈ L(Eτ) and to solve the somewhat simpler equations

Ãτ ũτ = Πτ b    (1.34)

in Eτ instead, hoping these are uniquely solvable – at least for all sufficiently large τ – and that the solutions ũτ ∈ Eτ of (1.34) tend⁶ to the solution u ∈ E of (1.33) as τ goes to infinity. If this is the case for every right-hand side b ∈ E, then we say that the approximation method (Ãτ) is applicable to A. This is the idea of approximation methods – or, to be more precise, of projection methods.

It is somewhat unsatisfactory that every operator of the sequence (Ãτ) acts on a different space. To overcome these difficulties so that we may regard A and all operators of the approximation method as acting on the same space E, we will henceforth identify Ãτ ∈ L(Eτ) with Aτ := Ãτ + Θτ ∈ L(E), where Θτ := I − Πτ. Then, for every τ ∈ T, (1.34) is equivalent to

Aτ uτ = b,  alias  ( Ãτ  0  ) ( ũτ  )  =  ( Πτ b )    (1.35)
                   ( 0   Θτ ) ( Θτ b )     ( Θτ b )

with respect to the decomposition E = Eτ ⊕ im Θτ. The approximation method (1.34) is applicable to A if and only if (1.35) is so. Clearly, Ãτ and Aτ are simultaneously invertible, where ‖Aτ⁻¹‖ = max(‖Ãτ⁻¹‖, 1). Consequently, the sequence (Ãτ) is stable (where every operator Ãτ is acting on a different space Eτ) if and only if the sequence (Aτ) is.

A quite natural and very popular choice of the sequence (Aτ) is

Aτ = Πτ A Πτ + Θτ,  τ ∈ T,    (1.36)

which we call the natural projection method for the operator A and the sequence (Eτ) of Banach spaces. We will next make the index set T and the subspaces Eτ more precise for the spaces of our interest in this book, E = ℓp(Zn, X) and E = Lp.
⁵ The appropriate type of convergence heavily depends on the space E. A typical type for this purpose is strong convergence. See Sections 1.5.5 and 1.6 for the type that we have in mind here.
⁶ Again, the appropriate convergence type depends on E. This might be convergence in the norm of E or, as we have in mind, something else (also see Sections 1.5.5 and 1.6).

1.5.2 Discrete Case

Let E = ℓp(Zn, X) with p ∈ [1, ∞] and an arbitrary Banach space X.
In the discrete case, there is usually no need for approximation methods with a continuous index set T. So the typical choice here shall be T = N. For the definition of the subspaces Eτ with τ ∈ T = N, take a monotonically increasing sequence (Sτ)τ∈N of finite subsets of Zn which exhausts Zn in the sense that, for every α ∈ Zn, there is a τ0 ∈ N with α ∈ Sτ for all τ ≥ τ0. Then put Eτ := ℓp(Sτ, X), Πτ = PSτ and Θτ = QSτ. In this setting, we call (1.36) the finite section method for the discrete case and write Aτ for the operator (1.36).

Example 1.63. Let n = 1, and put Sτ = {−τ, . . . , τ} for every τ ∈ T = N. Then, in matrix language, the infinite system (1.33), that is

( ⋱        ⋮        ⋮       ⋮            ) ( ⋮     )   ( ⋮     )
( ⋯    a−1,−1   a−1,0   a−1,1   ⋯   ) ( u(−1) )   ( b(−1) )
( ⋯    a0,−1    a0,0    a0,1    ⋯   ) ( u(0)  ) = ( b(0)  )
( ⋯    a1,−1    a1,0    a1,1    ⋯   ) ( u(1)  )   ( b(1)  )
(          ⋮        ⋮       ⋮       ⋱   ) ( ⋮     )   ( ⋮     )

is replaced by the sequence of finite systems (1.34), namely the truncations

( a−τ,−τ   ⋯   a−τ,τ ) ( ũτ(−τ) )   ( b(−τ) )
(   ⋮       ⋱    ⋮   ) (   ⋮    ) = (  ⋮    )
( aτ,−τ    ⋯   aτ,τ  ) ( ũτ(τ)  )   ( b(τ)  )

for τ = 1, 2, . . .. This is where the name ‘finite section method’ comes from. During this procedure one keeps one’s fingers crossed that the latter systems are uniquely solvable once they are sufficiently large, and that ũτ(k) → u(k) as τ → ∞ for every k ∈ Z.
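To make Example 1.63 concrete, here is a small numerical sketch. The operator A = 2I + 0.5 V−1 + 0.5 V1, the right-hand side b = e0, and the use of the classical Thomas algorithm are choices made for this illustration, not taken from the text; for this A the truncated systems (1.34) are tridiagonal, and the values ũτ(0) settle down to u(0) = 1/√3 of the exact solution as τ grows:

```python
# Sketch: finite section method for A = 2I + 0.5 V_{-1} + 0.5 V_1 with
# right-hand side b = e_0.  Each truncation over {-tau, ..., tau} is a
# tridiagonal system, solved by the Thomas algorithm (forward elimination
# followed by back substitution).  The exact solution has u(0) = 1/sqrt(3).

def solve_tridiagonal(sub, diag, sup, rhs):
    """Thomas algorithm; row i reads sub[i]*u[i-1] + diag[i]*u[i] + sup[i]*u[i+1]."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = (sup[i] / m) if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def finite_section(tau):
    """Solve the truncation of Au = e_0 over {-tau, ..., tau}; return u_tau(0)."""
    n = 2 * tau + 1
    u = solve_tridiagonal([0.5] * n, [2.0] * n, [0.5] * n,
                          [1.0 if i == tau else 0.0 for i in range(n)])
    return u[tau]

for tau in (2, 5, 20):
    print(tau, finite_section(tau))     # approaches 1/sqrt(3) = 0.57735...
```

The fast convergence here reflects the stability of this particular sequence (the truncations are uniformly invertible by diagonal dominance); Section 1.7 below makes precise that such stability is exactly what the method needs.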
1.5.3 Continuous Case

Let E = Lp = Lp(Rn) = Lp(Rn, C) with p ∈ [1, ∞]. In the continuous case, it is often convenient to work with approximation methods with a continuous index set T. The typical choice here is T = R+ = (0, ∞). For the definition of the subspaces Eτ with τ ∈ T = R+, take a monotonically increasing family (Ωτ)τ>0 of compact subsets of Rn which exhausts Rn in the sense that for every x ∈ Rn there is a τ0 > 0 with x ∈ Ωτ for all τ ≥ τ0. Then put Eτ := Lp(Ωτ), Πτ = PΩτ and Θτ = QΩτ. In this setting, we call (1.36) the finite section method for the continuous case and write Aτ for the operator (1.36).
1.5.4 Additional Approximation Methods

At first glance, one might question the practicability of the finite section method in the continuous case. Clearly, in the discrete case, equation (1.33)
is an infinite linear system of equations. This is reduced to a finite one (for instance, see Example 1.63) by passing to (1.34), where the latter is readily solved numerically – whereas, in the continuous case, the equations (1.34) are still too complicated to be solved directly. What one usually does for the numerical solution of the equations (1.34) in Lp(Ωτ) is to “superimpose” another approximation method onto them in order to discretize the operators Ãτ. The choice of this method heavily depends on the space E and the operator A. For example, if E = BC ⊂ L∞ (see Subsection 4.2.3), then a number of discretization methods for the finitely truncated equation are known. But this shall not be our topic here. However, note that by passing from (1.33) to (1.34), a major step towards the approximate solution of (1.33) is taken. Therefore, we will restrict ourselves to the study of methods of the form (1.34), like the finite section method.
1.5.5 Which Type of Convergence is Appropriate?

Up to now we have not clearly specified in which way we expect the operators Aτ (and Ãτ) and the solutions uτ (and ũτ) to approximate A and u, respectively. Therefore, let us fix a p ∈ [1, ∞] and a Banach space X, and let E denote one of the spaces ℓp(Zn, X) and Lp. It is easily seen that, for p = ∞, it is not appropriate to expect strong convergence of Aτ to A – as one usually does if p < ∞ – since not even the finite section projectors Πτ = PSτ or Πτ = PΩτ, which are the identity operators on Eτ, converge strongly to the identity I on E as τ → ∞. The same argument shows that, for the finite section method, the solutions uτ will also not converge to u in the norm of E if p = ∞ – as is the case for p < ∞. We will introduce the appropriate type of convergence of Aτ to A and uτ to u in Definition 1.64 in the following section.
1.6 P-convergence

Let T ⊂ R be an index set which is unbounded towards plus infinity. If we say that the final part of a sequence (Aτ)τ∈T has a certain property, then we mean that there exists a τ∗ ∈ T such that the subsequence (Aτ)τ∈T, τ>τ∗ has this property. For example, a sequence is stable if and only if its final part is uniformly invertible. For convergence issues, clearly, only the final part of a sequence is of interest.

Now fix a p ∈ [1, ∞] and a Banach space X. In this section, E denotes one of the spaces ℓp(Zn, X) and Lp, and P = (P1, P2, . . .) is the corresponding approximate identity. In Section 1.3.4 we have seen that, if 1 < p < ∞, then K(E) determines ∗-strong convergence in terms of (1.8) and (1.9). In the same sense, K(E, P) determines the convergence in which our approximate identity P approximates
the identity operator I on E. But this is exactly the convergence that we need in Section 1.5.
1.6.1 Definition and Equivalent Characterization

K(E, P) was introduced in order to characterize the convergence of Pm to I as m → ∞. Now we will study this convergence type in detail.

Definition 1.64. A sequence (Aτ)τ∈T ⊂ L(E) is P-convergent to A ∈ L(E) if

‖K(Aτ − A)‖ → 0  and  ‖(Aτ − A)K‖ → 0  as  τ → ∞    (1.37)

for every K ∈ K(E, P), and a sequence (uτ)τ∈T ⊂ E is called P-convergent to u ∈ E if

‖K(uτ − u)‖ → 0  as  τ → ∞    (1.38)

for every K ∈ K(E, P). We will write Aτ →P A or A = P-lim Aτ for (1.37), and uτ →P u or u = P-lim uτ for (1.38).

It is readily seen that a P-limit is unique if it exists. Indeed, suppose that, for every operator K ∈ K(E, P), we have K(Aτ − A) ⇒ 0 and K(Aτ − B) ⇒ 0; then ‖K(A − B)‖ ≤ ‖K(Aτ − A)‖ + ‖K(Aτ − B)‖ → 0 as τ → ∞, i.e. K(A − B) = 0. Then clearly, Pm(A − B) = 0 for all m ∈ N, whence A − B = 0 by Lemma 1.30 a). Analogously, we see that also P-lim uτ is unique.

If we want to check whether a sequence (Aτ) ⊂ L(E) or (uτ) ⊂ E is P-convergent, it is not necessary to check (1.37) or (1.38) for all K ∈ K(E, P), as the following proposition shows.

Proposition 1.65.
a) Let (Aτ)τ∈T ⊂ L(E) and A ∈ L(E) be arbitrary. Then Aτ →P A if and only if the final part of the sequence (Aτ) is bounded, and

‖Pm(Aτ − A)‖ → 0  and  ‖(Aτ − A)Pm‖ → 0  as  τ → ∞    (1.39)

for every fixed m ∈ N.

b) Let (uτ)τ∈T ⊂ E and u ∈ E be arbitrary. Then uτ →P u if and only if the final part of the sequence (uτ) is bounded, and

‖Pm(uτ − u)‖ → 0  as  τ → ∞    (1.40)

for every fixed m ∈ N.

Remark 1.66. a) If {τ ∈ T : τ ≤ τ∗} is finite for every τ∗ ∈ T, then the boundedness of the final part of a sequence is, of course, equivalent to the boundedness of the whole sequence.

b) If we replace L(E) by L(E, P) in a), then we get a well-known fact, which is already proven in [70] and [50]. The new fact here is that (1.37) implies
the boundedness condition on (Aτ) also for arbitrary operators in L(E)! (For operators in L(E, P), this is a simple consequence of Proposition 1.69 and the Banach–Steinhaus theorem.) Note that, for A and Aτ in L(E), already the first property in (1.37) is sufficient for the boundedness of the final part of (Aτ). The second property in (1.37) does not imply the boundedness of (Aτ)τ∈T, τ>τ∗ for any τ∗ > 0, as we can see if we put Aτ = τC, where C is the operator from Example 1.26 c).

c) Part b) of Proposition 1.65 shows that P-convergence of sequences (uτ) ⊂ E is equivalent to strict convergence in the sense of Buck [12].

Proof. a) Suppose (Aτ)τ∈T, τ>τ∗ is bounded for some τ∗ > 0 and (1.39) holds. Then, for all m ∈ N and all K ∈ K(E, P), one has

‖K(Aτ − A)‖ ≤ ‖K‖ ‖Pm(Aτ − A)‖ + ‖KQm‖ ‖Aτ − A‖,

where the first term tends to zero as τ → ∞, and the second one is as small as desired if m is large enough. The second property of (1.37) is shown absolutely analogously.

Conversely, if (1.37) holds for all K ∈ K(E, P), then (1.39) holds for all m ∈ N since P ⊂ K(E, P). It remains to show that (Aτ)τ∈T, τ>τ∗ is bounded for some τ∗ > 0. Suppose the converse is true. Without loss of generality, we can suppose that A = 0. Now we will successively define two sequences: (mk)∞k=1 ⊂ N and (τk)∞k=0 ⊂ T ∪ {0}. We start with m1 := 1 and τ0 := 0. For every k ∈ N, choose τk ∈ T such that

τk > τk−1,  ‖Aτk‖ > k² + 3  and  ‖Pmk Aτk‖ < 1,

the latter being possible since Pmk ∈ K(E, P) and Aτ →P 0. Then

‖Qmk Aτk‖ ≥ ‖Aτk‖ − ‖Pmk Aτk‖ > k² + 3 − 1 = k² + 2.

Take uk ∈ E with ‖uk‖ = 1 and ‖Qmk Aτk uk‖ > k² + 1, and choose mk+1 > mk such that ‖Pmk+1 Qmk Aτk uk‖ > k², which is possible by (1.7). Consequently,

‖Pmk+1 Qmk Aτk‖ > k²  for all  k ∈ N.

Now consider the generalized multiplication operator

K := ∑_{j=1}^∞ (1/j²) Pmj+1 Qmj.

Then it is easily seen that K ∈ K(E, P). But on the other hand, from

Pmk+1 Qmk Pmj+1 Qmj = Pmk+1 Qmk if j = k, and = 0 if j ≠ k,

we get that

‖KAτk‖ ≥ ‖Pmk+1 Qmk KAτk‖ = (1/k²) ‖Pmk+1 Qmk Aτk‖ > k²/k² = 1

for every k ∈ N, which contradicts ‖KAτ‖ → 0 as τ → ∞.

b) can be verified in an analogous way.
Corollary 1.67. For bounded sequences (Aτ) ⊂ L(E), P-convergence to A ∈ L(E) is equivalent to (1.37) for all K ∈ P.

The notion of P-convergence also yields a nice alternative characterization of the class L(E, P): An operator A ∈ L(E) is in L(E, P) if and only if both sequences

(AQm)∞m=1  and  (QmA)∞m=1

P-converge to zero as m → ∞. If this is even norm convergence, then A ∈ K(E, P). We will see that the study of P-convergence in L(E, P) leads to especially nice results, which might be seen as an indication that L(E, P) is the natural playground for P-convergence.
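Proposition 1.65 can be watched in action on a small example. In the sketch below, the concrete operator, the finite window {−N, . . . , N} and the use of the ℓ∞ norm (the maximal absolute row sum) are assumptions made for this illustration: for the finite sections Aτ of the shift V1, the quantities ‖Pm(Aτ − A)‖ and ‖(Aτ − A)Pm‖ vanish for fixed m once τ is large enough, while ‖Aτ − A‖ stays equal to 2 – P-convergence without norm convergence.

```python
# Sketch: the criterion of Proposition 1.65 for the finite sections
# A_tau = P_tau A P_tau + Q_tau of the shift A = V_1, truncated to the
# window {-N, ..., N}.  Operators are stored as sparse dicts {(i, j): value};
# multiplying by P_m from the left restricts rows, from the right columns.
N = 12
idx = list(range(-N, N + 1))
a = {(i, j): 1.0 for i in idx for j in idx if i - j == 1}   # matrix of V_1

def finite_section(mat, tau):
    sec = {(i, j): v for (i, j), v in mat.items()
           if abs(i) <= tau and abs(j) <= tau}              # P_tau A P_tau
    for i in idx:
        if abs(i) > tau:
            sec[(i, i)] = sec.get((i, i), 0.0) + 1.0        # + Q_tau
    return sec

def norm_inf(mat, rows=idx, cols=idx):
    """l^infinity operator norm: maximal absolute row sum over given rows/cols."""
    return max(sum(abs(mat.get((i, j), 0.0)) for j in cols) for i in rows)

m = 2
rows_m = [i for i in idx if abs(i) <= m]
for tau in (3, 5, 8):
    sec = finite_section(a, tau)
    d = {k: sec.get(k, 0.0) - a.get(k, 0.0) for k in set(a) | set(sec)}
    print(tau,
          norm_inf(d, rows=rows_m),    # ||P_m (A_tau - A)||   -> 0.0
          norm_inf(d, cols=rows_m),    # ||(A_tau - A) P_m||   -> 0.0
          norm_inf(d))                 # ||A_tau - A||          stays 2.0
```

The persistent value 2 of ‖Aτ − A‖ comes from the rows just outside the window {−τ, . . . , τ}, where the section replaces the shifted entry by the identity.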
1.6.2 P-convergence in L(E, P)

Proposition 1.68. L(E, P) is sequentially closed with respect to P-convergence.

Proof. Take (Aτ) ⊂ L(E, P) with Aτ →P A. We will show that A ∈ L(E, P) as well. Fix a k ∈ N and let ε > 0. Take τ0 ∈ T large enough that ‖Pk(Aτ0 − A)‖ < ε/2, and choose m0 ∈ N large enough that ‖Pk Aτ0 Qm‖ < ε/2 for all m > m0. Then

‖Pk A Qm‖ ≤ ‖Pk(Aτ0 − A)‖ ‖Qm‖ + ‖Pk Aτ0 Qm‖ < ε/2 + ε/2 = ε

for all m > m0, which shows that Pk A Qm ⇒ 0 as m → ∞. The symmetric property Qm A Pk ⇒ 0 as m → ∞ can be verified analogously.

Since K(E, P) is an ideal in L(E, P), one can associate with every A ∈ L(E, P) the two operators of left and right multiplication, acting on K(E, P) by

Al : K → AK  and  Ar : K → KA

for every K ∈ K(E, P). This observation and the following two propositions are due to Roch, Silbermann and Rabinovich [70].

Proposition 1.69.

a) ‖Al‖L(K(E,P)) = ‖A‖ = ‖Ar‖L(K(E,P)) for all A ∈ L(E, P).

b) A sequence (Aτ) ⊂ L(E, P) is P-convergent to some A ∈ L(E, P) if and only if Alτ → Al and Arτ → Ar strongly in L(K(E, P)).
Proof. a) Take an arbitrary A ∈ L(E, P). Clearly, ‖Al‖ ≤ ‖A‖ and ‖Ar‖ ≤ ‖A‖. For the reverse inequalities, take an arbitrary ε > 0, and choose u ∈ E with ‖u‖ = 1 such that ‖Au‖ ≥ ‖A‖ − ε. Moreover, choose k ∈ N with ‖Au − Pk Au‖ ≤ ε, which is possible by (1.7). Then ‖Pk Au‖ ≥ ‖A‖ − 2ε. Since Pk ∈ K(E, P), we have

‖Ar‖ ≥ ‖Ar(Pk)‖ = ‖Pk A‖ ≥ ‖Pk Au‖ ≥ ‖A‖ − 2ε.    (1.41)

Choosing m ∈ N large enough that ‖Pk A Qm‖ ≤ ε and taking the inequality ‖Pk A‖ ≥ ‖A‖ − 2ε from (1.41), we conclude from Pm ∈ K(E, P) that

‖Al‖ ≥ ‖Al(Pm)‖ = ‖A Pm‖ ≥ ‖Pk A Pm‖ ≥ ‖Pk A‖ − ‖Pk A Qm‖ ≥ ‖A‖ − 3ε.    (1.42)

Since (1.41) and (1.42) hold for every ε > 0, we are done with a).

b) By a), L(E, P) is isometrically embedded into L(K(E, P)) by each one of the mappings A → Al and A → Ar. Consequently, ‖K(Aτ − A)‖ → 0 for all K ∈ K(E, P) is equivalent to the strong convergence of Arτ to Ar on K(E, P), and the second property in (1.37) is the same for Alτ and Al.

Proposition 1.70. For sequences (Aτ), (Bτ) ⊂ L(E, P) with P-limits A and B, respectively, we have

a) ‖A‖ ≤ lim inf ‖Aτ‖ ≤ sup ‖Aτ‖ < ∞,
b) P-lim(Aτ + Bτ) = A + B,
c) P-lim(Aτ Bτ) = AB.

Proof. From Proposition 1.68 we get that A, B ∈ L(E, P). Using Proposition 1.69, we only have to recall the Banach–Steinhaus theorem as well as the compatibility of strong limits with addition and composition.

Alternatively, there are elementary proofs for b) and c) using Proposition 1.65. Equality b) is trivial anyway, and c) can be seen as follows. For all k, m ∈ N,

‖Pk(Aτ Bτ − AB)‖ ≤ ‖Pk(Aτ − A)Bτ‖ + ‖Pk A(Bτ − B)‖
                 ≤ ‖Pk(Aτ − A)‖ ‖Bτ‖ + ‖Pk A‖ ‖Pm(Bτ − B)‖ + ‖Pk A Qm‖ ‖Bτ − B‖

holds, where the first two terms tend to zero as τ → ∞, and the third one becomes as small as desired if we choose m large enough (note that, by Proposition 1.65, the P-convergent sequence (Bτ) is bounded). That (Aτ Bτ − AB)Pk ⇒ 0 is shown completely symmetrically, using the boundedness of (Aτ).

Remark 1.71. The previous two propositions show that L(E, P) is indeed a very nice playground for the study of P-convergence. Originally we even conjectured that Aτ →P A implies that Aτ − A = Bτ + Cτ with a sequence (Bτ) ⊂ L(E, P) and Cτ ⇒ 0, which would make the study of P-convergence outside of L(E, P) completely ridiculous. But the following example reveals that this is not true.
Example 1.72. Take E = ℓ∞(Z) and T = N as index set. Then the sequence (Ak)∞k=1 with

Ak u = Qk( . . . , uk, uk, uk, . . . )  for all u ∈ E,

is bounded and clearly subject to (1.39) with A = 0, whence it is P-convergent to zero. On the other hand, the kth column of Ak has all but finitely many entries equal to 1. This shows that Ak ∉ L(E, P) – more precisely, dist(Ak, L(E, P)) = 1 for all k ∈ N. So obviously, Am − A = Am is more than a null sequence Cm away from all Bm ∈ L(E, P).
1.6.3 P-convergence vs. ∗-strong Convergence

From the discussion at the beginning of Section 1.6 we see that the relation between K(E) and K(E, P) essentially determines the relation between P-convergence and ∗-strong convergence. In analogy to Figure 1 on page 15, we will briefly study this relation here, depending on the space E = ℓp(Zn, X).

From (1.8), (1.9) and Definition 1.64 we get that Aτ →P A implies Aτ →K A if K(E) ⊂ K(E, P), i.e. if 1 < p < ∞, and that Aτ →K A implies Aτ →P A if K(E, P) ⊂ K(E), i.e. if dim X < ∞. Moreover, by Corollary 1.16, Aτ →∗ A always implies Aτ →K A, and both are equivalent if E is reflexive; that is, if X is reflexive (recall that every finite-dimensional space X is reflexive) and 1 < p < ∞. Consequently, we get that

Aτ →P A  ⟹  Aτ →K A  ⟺  Aτ →∗ A   if 1 < p < ∞ and X reflexive,
Aτ →∗ A  ⟹  Aτ →K A  ⟹  Aτ →P A   if dim X < ∞.

The following table shows the sign that the star ★ can be substituted with, depending on p and X. The implication ‘⇐’ in the right column of the second row holds under the assumption that X is reflexive, for example X = Lp(H).

  Aτ →∗ A  ★  Aτ →P A | dim X < ∞ | dim X = ∞
  p = 1               |    ⟹     |
  1 < p < ∞           |    ⟺     |    ⟸
  p = ∞               |    ⟹     |

Figure 3: P-convergence versus ∗-strong convergence, depending on the space E = ℓp(Zn, X).

An example indicating why ‘⇒’ is not true in the right column of the second row is the sequence (Aτ) = (P[0,1/τ])∞τ=1 on Lp with 1 < p < ∞.

For p = 1 and p = ∞, we have Pm →P I but Pm ↛∗ I as m → ∞, showing that ‘⇐’ is not true in the first and third rows of the table. But for p = 1, 50% of Pm →∗ I is true since we have the strong convergence Pm → I here. For p = ∞, only the strong convergence P′m → I of the preadjoints on E′ is true, provided X′, and hence E′, exists. The following two propositions show that this behaviour is a rule.
Proposition 1.73. If p = 1, then Aτ →P A implies the strong convergence Aτ → A.

Proof. If Aτ →P A, then we know from Proposition 1.65 that the final part of (Aτ) is bounded and that (Aτ − A)Pm ⇒ 0 as τ → ∞ for every m ∈ N. Consequently, for every K ∈ K(E) and every m ∈ N,

‖(Aτ − A)K‖ ≤ ‖(Aτ − A)Pm‖ ‖K‖ + ‖Aτ − A‖ ‖Qm K‖,

where the first term tends to zero as τ → ∞, and the second term can be made as small as desired if we choose m large enough since K is compact and Qm → 0 strongly as m → ∞. By Proposition 1.13, this implies the strong convergence of Aτ to A as τ → ∞.

Proposition 1.74. If p = ∞ and X′ as well as the preadjoint operators A′τ, A′ exist, then Aτ →P A implies the strong convergence A′τ → A′ on E′.

Proof. If E = ℓ∞(Zn, X) and X′ exists, then E′ = ℓ1(Zn, X′). Now proceed as in the proof of Proposition 1.73 to get

‖(A′τ − A′)K‖ ≤ ‖(A′τ − A′)Pm‖ ‖K‖ + ‖A′τ − A′‖ ‖Qm K‖

for all K ∈ K(E′). Now ‖(A′τ − A′)Pm‖ = ‖Pm(Aτ − A)‖ → 0 as τ → ∞, and ‖Qm K‖ can be made arbitrarily small by choosing m ∈ N large enough since Qm → 0 strongly on E′. Again, application of Proposition 1.13 finally proves the claim.
1.7 Applicability vs. Stability

Let E = ℓp(Zn, X) or E = Lp with p ∈ [1, ∞], and let T ⊂ R be an index set which is unbounded towards plus infinity. We will see that the question whether an approximation method (Aτ)τ∈T is applicable or not heavily depends on the stability (cf. Definition 1.58) of the sequence (Aτ). The classic Polski theorem [36] says that a strongly convergent approximation method (Aτ) is applicable to A – with convergence of the solutions uτ to u in the norm of E – if and only if the operator A is invertible and the sequence (Aτ) is stable. It was proven by Roch and Silbermann in [76] (also see Theorem 6.1.3 in [70]) that the same is true for the applicability of the methods that we have in mind – P-convergent methods (Aτ) with P-convergent solutions uτ – provided that the sequence (Aτ) is subject to the following condition: We write (Aτ) ∈ F(E, P) if the sequence (Aτ) is bounded and, for every k ∈ N,

sup_{τ∈T} ‖Pk Aτ Qm‖ → 0  and  sup_{τ∈T} ‖Qm Aτ Pk‖ → 0  as m → ∞.    (1.43)
Of course, this requires that Aτ ∈ L(E, P) for every τ ∈ T, and moreover A ∈ L(E, P) by Proposition 1.68 if Aτ →P A.

Theorem 1.75. If (Aτ) ∈ F(E, P) is P-convergent to A, then the approximation method (Aτ) is applicable to A if and only if A is invertible and (Aτ) is stable.
Proof. See Theorem 6.1.3 of [70] or [76] or 4.41f in [58].
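As a toy illustration of Theorem 1.75 (not part of the book; the operator, grid sizes and tolerances below are chosen for demonstration), consider the finite sections of A = 3I − V − V−1 on ℓ2(Z). A is invertible, the sections are stable, and the Galerkin solutions converge:

```python
import numpy as np

# Finite sections of A = 3I - V - V^{-1} on l^2(Z), restricted to the
# coordinates {-n, ..., n}; A is invertible (diagonally dominant), and the
# sections are stable since all eigenvalues of the sections lie in (1, 5).
def section(n):
    N = 2 * n + 1
    return 3 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

def solve(n):
    b = np.zeros(2 * n + 1)
    b[n] = 1.0                       # right-hand side: delta sequence at 0
    return np.linalg.solve(section(n), b)

# stability: the inverses stay uniformly bounded
inv_norms = [np.linalg.norm(np.linalg.inv(section(n)), 2) for n in (5, 10, 20)]
# applicability: the solutions converge (here: their value at coordinate 0,
# whose limit is 1/sqrt(5) by a Fourier computation)
center_vals = [solve(n)[n] for n in (5, 10, 20, 40)]
```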
In the case of a discrete index set T, the uniformity condition (1.43) is redundant if (Aτ)τ∈T ⊂ L(E, P). For simplicity, we state and prove this result for T = N.

Lemma 1.76. If (Aτ)τ∈N ⊂ L(E, P) is P-convergent to A, then (Aτ) ∈ F(E, P).

Proof. The boundedness of the sequence follows from Proposition 1.65. It remains to prove (1.43). Therefore fix an arbitrary k ∈ N. From (Aτ) ⊂ L(E, P) we conclude that for every ε > 0 and every τ ∈ N there is an m(τ, ε) ∈ N such that ‖Pk Aτ Qm‖ < ε for all m > m(τ, ε). What we have to show is that

∀ε > 0 ∃m(ε) :  ‖Pk Aτ Qm‖ < ε  ∀m > m(ε)  ∀τ ∈ N.    (1.44)
So take an arbitrary ε > 0, and choose m0 ∈ N large enough that ‖Pk A Qm‖ < ε/2 for all m > m0, which is possible since A ∈ L(E, P) by Proposition 1.68. Moreover, choose τ0 ∈ N large enough that ‖Pk (Aτ − A)‖ < ε/2 for all τ > τ0. For m > m0 and τ > τ0 we conclude

‖Pk Aτ Qm‖ ≤ ‖Pk A Qm‖ + ‖Pk (Aτ − A)‖ ‖Qm‖ < ε/2 + ε/2 = ε.

Now choose m(ε) := max( m(1, ε), ..., m(τ0, ε), m0 ) to ensure (1.44). Analogously, we prove the second property in (1.43).

From Theorem 1.75 and Lemma 1.76 we immediately get the following.

Corollary 1.77. If (Aτ)τ∈N ⊂ L(E, P) is P-convergent to A, then the approximation method (Aτ) is applicable to A if and only if A is invertible and (Aτ) is stable.

Also for arbitrary index sets T, the property (Aτ) ∈ F(E, P) is often automatically implied by the nature of the approximation method. We will check this for the finite section method in E = Lp.

Proposition 1.78. The finite section method (Aτ)τ∈T is applicable to A ∈ L(E, P) if and only if A is invertible and the finite section sequence (Aτ) is stable.

Proof. Let (Ωτ)τ∈T denote the increasing and exhausting (in the sense of Subsection 1.5.3) sequence of compact subsets of Rn. Then the sequence (Aτ), defined by Aτ = PΩτ A PΩτ + QΩτ, is automatically contained in L(E, P) if A ∈ L(E, P); it P-converges to A as τ → ∞ by Proposition 1.70, and it is obviously bounded, with ‖Aτ‖ ≤ ‖A‖ + 1 for all τ ∈ T.
By Theorem 1.75, it remains to prove (1.43). Therefore, take an arbitrary k ∈ N, and note that, for all τ ∈ T and m ∈ N,

‖Pk Aτ Qm‖ ≤ ‖Pk PΩτ A PΩτ Qm‖ + ‖Pk QΩτ Qm‖
           = ‖PΩτ (Pk A Qm) PΩτ‖ + ‖Pk Qm QΩτ‖
           ≤ ‖Pk A Qm‖ + ‖Pk Qm‖,

which shows that sup_{τ∈T} ‖Pk Aτ Qm‖ ≤ ‖Pk A Qm‖ + ‖Pk Qm‖ → 0 as m → ∞, since A ∈ L(E, P) and Pk Qm = 0 if m ≥ k. Analogously, one checks the second property in (1.43).
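The estimate above can be repeated numerically in a matrix toy model (the size, bandwidth and the index convention 0, ..., N−1 are ad hoc, not from the book): for a band operator, the supremum over the sections of ‖Pk Aτ Qm‖ is exactly zero once m exceeds k plus the bandwidth.

```python
import numpy as np

# A band operator of bandwidth 1 on the coordinates {0, ..., N-1} and its
# finite sections A_t = P_t A P_t + Q_t; then sup_t ||P_k A_t Q_m||
# vanishes (here: is exactly zero) once m exceeds k plus the bandwidth.
N = 60
rng = np.random.default_rng(1)
A = (np.diag(rng.standard_normal(N))
     + np.diag(rng.standard_normal(N - 1), 1)
     + np.diag(rng.standard_normal(N - 1), -1))

def P(upto):                         # P_m: keep the coordinates < upto
    return np.diag((np.arange(N) < upto).astype(float))

def finite_section(t):
    return P(t) @ A @ P(t) + (np.eye(N) - P(t))

k = 5
def sup_norm(m):
    Qm = np.eye(N) - P(m)
    return max(np.linalg.norm(P(k) @ finite_section(t) @ Qm, 2)
               for t in range(1, N))

sup_norms = {m: sup_norm(m) for m in (5, 6, 10)}
```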
1.8 Comments and References

The study of concrete classes of band and band-dominated operators (such as convolution, Wiener–Hopf and Toeplitz operators) goes back to the 1950s, starting with [40] and [30] by Gohberg and Krein, and culminated in the 1970s/80s in the huge monographs [29] by Gohberg/Feldman and [9] by Böttcher and Silbermann. The study of band-dominated operators as a general operator class was initiated by Simonenko [82], [83]. More recent work along the lines of this book can be found in [43], [67], [58] and [41], to mention only some examples. In [58] band-dominated operators are called "operators of local type" and in [41] "operators with uniformly fading memory". The history of Theorem 1.42 goes back to [43], [67], [50] and [70].
Most of the theory presented in this chapter has been done before, in [41], [36], [50] and [70], for example. Some results in this chapter are probably new. The most interesting of these might be Propositions 1.24 (with consequences for Figure 1, of course) and 1.65 and Lemma 1.76.
The idea to study ℓp sequences with values in a Banach space X and to identify Lp with such a space is not new, of course. It can be found in [41], [8], [36], [68] and [70], for example.
Approximate identities are introduced as special approximate projections in [70]. Their applications go far beyond the finite section projectors in ℓp and Lp presented here. For more examples see [36], [70] and [17].
Generalized compactness notions were introduced in [8], [76], [36], [68] and [70], for example. We adapt the main ideas and the notations K(E, P) and L(E, P) from [70]. Operators in L(E, P) are called "operators with locally fading memory" in [41]. Proposition 1.24, which is an important ingredient in the clarification of the relations between K(E) and K(E, P), goes back to Chandler-Wilde and the author [17].
P-convergence of operator sequences was studied in [76], [58], [68] and [70], for example. Theorem 1.75 goes back to [76], [58] and [70].
Chapter 2

Invertibility at Infinity

In this chapter we discuss the property that will play a central role throughout this book: invertibility at infinity.
2.1 Fredholm Operators

Let E denote an arbitrary Banach space. By a theorem of Banach, A ∈ L(E) is invertible if and only if it is injective and surjective as a mapping A : E → E. Thus, if A is not invertible, then ker A ≠ {0} or im A ≠ E, or both. As an indication of how badly injectivity and surjectivity of A are violated, one defines the two numbers

α := dim ker A  and  β := dim coker A,    (2.1)
where coker A := E/im A, provided that the image of A is closed. Recall that A ∈ L(E) is referred to as a Fredholm operator if its image is closed and both numbers α and β are finite. In that case, their difference ind A := α − β is called the index of A. It turns out that A ∈ L(E) is a Fredholm operator if and only if there are operators B, C ∈ L(E) such that

AB = I + T1  and  CA = I + T2    (2.2)
hold with some T1 , T2 ∈ K(E). The operators B and C are called right and left Fredholm regularizers of A, respectively.
By evaluating the term CAB, it becomes evident that B and C only differ by an operator in K(E). Consequently, B (as well as C) is a regularizer from both sides, showing that A is Fredholm if and only if there is an operator B ∈ L(E) such that

AB = I + T1  and  BA = I + T2    (2.3)
hold with some T1, T2 ∈ K(E). Obviously, (2.3) is equivalent to the invertibility of the coset A + K(E) in the factor algebra L(E)/K(E), where (A + K(E))−1 = B + K(E). In slightly more fluent language, we will reflect this by saying that A is invertible modulo K(E) and B is an inverse of A modulo K(E). Although this is very well known, we will emphasize this algebraic characterization of the Fredholm property as a proposition because it underlines an important connection between operator theory and the theory of Banach algebras. For a more detailed coverage of the theory of Fredholm operators, including proofs, see e.g. [31].

Proposition 2.1. A ∈ L(E) is a Fredholm operator if and only if A + K(E) is invertible in the factor algebra L(E)/K(E).

In analogy to the spectrum of an operator as an element of L(E), we introduce the Fredholm spectrum, or so-called essential spectrum, of A ∈ L(E) as

spess A = {λ ∈ C : A − λI is not Fredholm}.

By Proposition 2.1 we have that spess A = spL(E)/K(E)(A + K(E)). The factor algebra L(E)/K(E) is the so-called Calkin algebra. It is well known that, if a coset A + K(E) is invertible in the Calkin algebra, then all elements of A + K(E) are Fredholm operators with the same index. For instance, all operators in I + K(E) are Fredholm and have index zero.
The theory of Fredholm operators is a rigorous generalization of probably the best-known statement connected with the name of Erik Ivar Fredholm: Fredholm's alternative. In terms of (2.1), it says that, for an operator A = I + K with K ∈ K(E), either
α = 0 and β = 0,  or alternatively,  α ≠ 0 and β ≠ 0.
This statement is clearly true for all Fredholm operators A ∈ L(E) with α = β, that is, for the class {A ∈ L(E) : A Fredholm, ind A = 0}. This class is strictly larger than I + K(E), for which Fredholm's alternative is usually formulated. It contains all operators of the form A = C + K where C ∈ L(E) is invertible and K ∈ K(E). A simple argument (see [31, Theorem 6.2]) shows that it actually coincides with this class.
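In a finite-dimensional model every operator is compact and every A has index zero, so Fredholm's alternative can be watched directly for A = I + K with a rank-one K (the dimension and seed below are illustrative choices):

```python
import numpy as np

# Fredholm's alternative for A = I + K with K of rank one, in a model
# where E is finite dimensional (so ind A = 0 automatically): either
# alpha = beta = 0, or alpha = beta > 0.
rng = np.random.default_rng(0)
n = 6
v = rng.standard_normal(n)

def alpha_beta(c):
    A = np.eye(n) + c * np.outer(v, v)
    r = np.linalg.matrix_rank(A)
    return n - r, n - r        # dim ker A and dim coker A = n - rank A

regular = alpha_beta(1.0)                 # I + vv^T is invertible
singular = alpha_beta(-1.0 / (v @ v))     # now ker A = span{v}
```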
2.2 Invertibility at Infinity

Fix a Banach space X and let p ∈ [1, ∞]. As in Chapter 1, we abbreviate the Banach space ℓp(Zn, X) by E.
Recall that we substituted the pair (L(E), K(E)) by (L(E, P), K(E, P)) in Section 1.3.4. In analogy to Fredholmness of A ∈ L(E), which corresponds to an invertibility problem in L(E)/K(E), this naturally leads to the study of the property of an operator A ∈ L(E, P) that corresponds to the invertibility of A + K(E, P) in L(E, P)/K(E, P). We will prove that this invertibility is equivalent to the existence of operators B, C ∈ L(E, P) and an integer m ∈ N such that

Qm A B = Qm = C A Qm    (2.4)

holds.

Proposition 2.2. For all A ∈ L(E, P), the following properties are equivalent:
(i) The coset A + K(E, P) is invertible in L(E, P)/K(E, P).
(ii) There exist B, C ∈ L(E, P) with (2.2) for some T1, T2 ∈ K(E, P).
(iii) There exists a B ∈ L(E, P) with (2.3) for some T1, T2 ∈ K(E, P).
(iv) There exist B, C ∈ L(E, P) and an m ∈ N such that (2.4) holds.

Proof. Obviously, (i) is equivalent to both (ii) and (iii).
(iii)⇒(iv). Take m ∈ N large enough that ‖Qm T1‖ < 1 and ‖T2 Qm‖ < 1, and put D := (I + Qm T1)−1. From the Neumann series (or, less elementary, from Proposition 1.46), we get that D ∈ L(E, P). Then, by (2.3) and Qm = Q²m,

Qm A B = Qm + Qm T1 = Qm (I + Qm T1),

and consequently, Qm A B̃ = Qm with B̃ := BD ∈ L(E, P). The second equality in (2.4) follows from a symmetric argument using BA = I + T2.
(iv)⇒(iii). If (iv) holds, then

A B = Qm A B + Pm A B = Qm + Pm A B = I − Pm + Pm A B =: I + T,

where T = Pm A B − Pm ∈ K(E, P) since A, B ∈ L(E, P). The second claim in (2.3) follows analogously.

Remark 2.3. The operators B and C from (ii) only differ by an operator in K(E, P). Consequently, both can be used as the operator B in (iii). Also note that m in (2.4) can clearly be replaced by any m′ > m.

In accordance with [70], we will call A ∈ L(E, P) a P-Fredholm operator if property (i) holds. Since this notion is not defined for operators outside of L(E, P), we will also study a very similar property in the somewhat larger setting A ∈ L(E), which enables us to compare the new property with usual Fredholmness on equal territory:
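The Neumann-series step (iii)⇒(iv) can be checked with matrices; in the sketch below (sizes, seed and the choice of A and B are illustrative assumptions) Pm keeps the first m coordinates and T1 is simply defined as AB − I:

```python
import numpy as np

# Step (iii) => (iv) of Proposition 2.2 with matrices: P_m keeps the
# first m coordinates, T1 := AB - I, and the corrected operator
# B~ = B (I + Q_m T1)^{-1} satisfies Q_m A B~ = Q_m exactly.
rng = np.random.default_rng(2)
N, m = 12, 4
Pm = np.diag((np.arange(N) < m).astype(float))
Qm = np.eye(N) - Pm

A = np.eye(N) + 0.1 * rng.standard_normal((N, N))   # invertible for this seed
B = np.linalg.inv(A) + 0.02 * rng.standard_normal((N, N))
T1 = A @ B - np.eye(N)                              # then AB = I + T1 exactly

D = np.linalg.inv(np.eye(N) + Qm @ T1)
B_tilde = B @ D
residual = np.linalg.norm(Qm @ A @ B_tilde - Qm, 2)
```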
Definition 2.4. An operator A ∈ L(E) is called invertible at infinity if there exists an operator B ∈ L(E) such that (2.3) holds for some T1, T2 ∈ K(E, P). Moreover, A ∈ L(E) is called weakly invertible at infinity if there exist B, C ∈ L(E) and an m ∈ N such that (2.4) holds.

Remark 2.5. a) Of course, A is invertible at infinity if it is invertible, as we see by putting B = A−1 and T1 = T2 = 0 in (2.3).
b) Note that, if A ∈ L(E, P) is invertible at infinity, we do not know whether it is even P-Fredholm, since we cannot guarantee¹ that B from Definition 2.4 can be chosen from L(E, P) if A ∈ L(E, P). But the reverse implication is of course true: If A ∈ L(E, P) is P-Fredholm, then it is invertible at infinity. Moreover, we do have the following.

Proposition 2.6. If A ∈ L(E) is invertible at infinity, then it is weakly invertible at infinity.

Proof. The proof of Proposition 2.2 (iii)⇒(iv) works as well (it is even simpler) if we replace L(E, P) by L(E).

We would not have chosen two different names if the two properties were equivalent. Indeed, here we give four examples of operators which are only weakly invertible at infinity.

Example 2.7. a) Let p = 1, n = 1, and let K denote the first operator in Example 1.26 a). Our example here is A := I − K (note that A := I + K is no good choice since this operator is even invertible, with A−1 = I − K/2). From Qm K = 0 we conclude Qm A = Qm for all m ∈ N0, and consequently, (2.4) holds with B = C = Q1 and m = 1, which shows that A is weakly invertible at infinity.
Concerning invertibility at infinity, note that the second equality in (2.3) holds with B = Q1 and T2 = −P1 ∈ K(E, P), for example. But we will show that the first equality in (2.3) cannot be fulfilled with B ∈ L(E) and T1 ∈ K(E, P). Therefore suppose that there exist such B and T1 for which AB = I + T1 holds. From

Qm B − Qm = Qm A B − Qm = Qm (A B − I) = Qm T1

we conclude

‖Qm B − Qm‖ ⇒ 0  as  m → ∞.
Secondly, without loss of generality, we can suppose that (Bu)0 = 0 for all u ∈ E, since A ignores the 0th component of its operand. Consequently, P{0} B Qm = 0 for all m ∈ N. Moreover, for all i ∈ Z \ {0},

P{i} B Qm = P{i} A B Qm + P{i} K B Qm ⇒ 0  as  m → ∞

since AB = I + T1 ∈ L(E, P) and P{i} K = 0 if i ≠ 0. Choosing an arbitrary k ∈ N and summing up P{i} B Qm over all i = −k, ..., k, we get

‖Pk B Qm‖ ⇒ 0  as  m → ∞.

¹ An appropriate modification of the proof of Theorem 1.1.9 in [70] was not successful – or not appropriate.
Now let ε > 0 and choose k ∈ N large enough that D := Qk − Qk B has norm ‖D‖ < ε/‖A‖. Then, for all m ≥ k,

‖P0 A Qm‖ = ‖P0 A Qk Qm‖ ≤ ‖P0 A Qk B Qm‖ + ‖P0 A D Qm‖
          ≤ ‖P0 A B Qm‖ + ‖P0 A Pk B Qm‖ + ‖P0 A D Qm‖
          ≤ ‖P0 A B Qm‖ + ‖A‖ · ‖Pk B Qm‖ + ‖A‖ · ε/‖A‖,
which shows that ‖P0 A Qm‖ → 0 as m → ∞, since AB = I + T1 ∈ L(E, P) and ‖Pk B Qm‖ → 0 as m → ∞. Having a look at our operator A, we see that this is wrong, as ‖P0 A Qm‖ = 1 for all m ∈ N. Contradiction.
b) Again let p = 1 and n = 1, but now substitute K in a) by the second operator (denoted by Ã) of Example 1.26 a). The rest is analogous.
c) and d) Let p = ∞ and consider the adjoint operators A∗ of a) and b), respectively. By duality arguments we see that A∗ is also only weakly invertible at infinity.

From the definition of a P-Fredholm operator it is immediately clear that this property is robust under perturbations in K(E, P) and under perturbations with sufficiently small norm, provided the latter are in L(E, P). While, for invertibility at infinity in L(E), we conjecture that this is not true, at least the following can be shown.

Proposition 2.8. Weak invertibility at infinity in L(E) is robust a) under perturbations in K(E, P), and b) under small perturbations in L(E).

Proof. Suppose A ∈ L(E), and there exist m ∈ N and B, C ∈ L(E) such that (2.4) holds.
a) Let T ∈ K(E, P), and choose m ∈ N such that, in addition to (2.4), also ‖Qm T‖ < 1/‖B‖ holds. Then from

Qm (A + T) B = Qm A B + Qm T B = Qm + Q²m T B = Qm (I + Qm T B)

and the invertibility of I + Qm T B, we get that Qm (A + T) B̃ = Qm holds, where B̃ = B (I + Qm T B)−1. The second equality in (2.4) is checked in a symmetric way.
b) Let S ∈ L(E) be subject to ‖S‖ < 1/‖B‖. Then from

Qm (A + S) B = Qm A B + Qm S B = Qm + Qm S B = Qm (I + S B)

and the invertibility of I + S B, we get Qm (A + S) B̃ = Qm with B̃ = B (I + S B)−1. Again, the second equality in (2.4) is checked in a symmetric way.
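Part b) of Proposition 2.8 is easy to test on matrices; the construction below (sizes, seed and the scaling factor 0.5) is purely illustrative:

```python
import numpy as np

# Proposition 2.8 b) with matrices: B is built so that Q_m A B = Q_m
# holds exactly; then for any S with ||S|| < 1/||B||, the operator
# B~ = B (I + S B)^{-1} satisfies Q_m (A + S) B~ = Q_m.
rng = np.random.default_rng(3)
N, m = 12, 4
Pm = np.diag((np.arange(N) < m).astype(float))
Qm = np.eye(N) - Pm

A = np.eye(N) + 0.1 * rng.standard_normal((N, N))   # invertible for this seed
C = rng.standard_normal((N, N))
B = np.linalg.inv(A) @ (np.eye(N) + Pm @ C)          # AB = I + P_m C

S = rng.standard_normal((N, N))
S *= 0.5 / (np.linalg.norm(S, 2) * np.linalg.norm(B, 2))  # ||S|| ||B|| = 0.5

B_tilde = B @ np.linalg.inv(np.eye(N) + S @ B)
residual = np.linalg.norm(Qm @ (A + S) @ B_tilde - Qm, 2)
```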
2.2.1 Invertibility at Infinity in BDOp

Since we are especially interested in band-dominated operators, we insert some results on the invertibility at infinity of operators in BDOp.

Proposition 2.9. K(E, P) is an ideal in BDOp.

Proof. This is trivial since K(E, P) ⊂ BDOp ⊂ L(E, P) by Proposition 1.48, and K(E, P) is an ideal in L(E, P) by Proposition 1.19.

Proposition 2.9 makes it possible to study the factor algebra BDOp/K(E, P). We will now prove that, if A ∈ BDOp is invertible at infinity, then the operator B in (2.3) is automatically in BDOp as well.

Proposition 2.10. For A ∈ BDOp, the following properties are equivalent.
(i) A is invertible at infinity.
(ii) A + K(E, P) is invertible in the factor algebra BDOp/K(E, P).
(iii) A is P-Fredholm.

Proof. Basically, (i), (ii) and (iii) all say that there exists an operator B such that (2.3) holds for some T1, T2 ∈ K(E, P). The difference is where B can be found: (i) says B ∈ L(E), (ii) says B ∈ BDOp, and (iii) says B ∈ L(E, P). So clearly, (ii)⇒(iii)⇒(i) holds.
(i)⇒(ii). Suppose B ∈ L(E), and (2.3) holds with T1, T2 ∈ K(E, P). Take some φ ∈ BUC, and define ψ := φ − φ(0) ∈ BUC. Then, for every t ∈ Rn,

[M̂φt, B] = [M̂ψt, B] = B [A, M̂ψt] B + Ct    (2.5)

holds, where φt and ψt are defined as in (1.22), and Ct = B(M̂ψt T1) − (T2 M̂ψt)B, by (2.3). But for every m ∈ N,

‖M̂ψt T1‖ ≤ ‖(M̂ψt Pm) T1‖ + ‖M̂ψt (Qm T1)‖

holds, where the second term on the right can be made as small as desired by choosing m large enough, and the first term evidently tends to 0 as t → 0 since ψ(0) = 0. The same argument for T2 M̂ψt shows that ‖Ct‖ ⇒ 0 as t → 0. Together with (2.5) and Theorem 1.42, we get B ∈ BDOp, and hence (ii).

Corollary 2.11. BDOp/K(E, P) is inverse closed in L(E, P)/K(E, P).
2.3 Invertibility at Infinity vs. Fredholmness

On page 15 we promised to come back to the diagrams in Figure 1. Now it is time to do so. The aim of this section is to study the dependencies between invertibility at infinity and Fredholmness on E = ℓp(Zn, X) in the six essential cases determined
by p and dim X. We will derive a table showing which of the two properties implies the other in which of the six cases. That table is proven to be complete by giving appropriate counterexamples for the "missing" implications.
The relation of the sets K(E) and K(E, P) essentially determines the relation of the properties of Fredholmness and invertibility at infinity: Clearly, looking at (2.2), if K(E) ⊂ K(E, P), then Fredholmness implies invertibility at infinity. Conversely, if K(E, P) ⊂ K(E), then invertibility at infinity implies Fredholmness. Consequently, Figure 1 on page 15 immediately implies the following table, which shows the sign that the star ⋆ in "invertibility at ∞ ⋆ Fredholmness" can be substituted with, depending on p and X:

                 dim X < ∞    dim X = ∞
  p = 1             =⇒
  1 < p < ∞         ⇐⇒           ⇐=
  p = ∞             =⇒

Figure 4: Invertibility at infinity versus Fredholmness, depending on the space E = ℓp(Zn, X).
As a justification for the missing implication arrows in this table, we will give some counterexamples. But first, there is one more addition to Figure 4: although '⇐' does not hold in the upper left corner of our table, the following proposition shows that it "almost" holds, in the following sense.

Proposition 2.12. Let p = 1 and dim X < ∞. If A ∈ L(E) is Fredholm, then it is weakly invertible at infinity.

Proof. For the proof of the first '=' sign in (2.4), take B ∈ L(E) and T ∈ K(E) with AB = I + T. Since Qm → 0 strongly and T is compact, we have ‖Qm T‖ ⇒ 0 as m → ∞. Choose m ∈ N large enough that ‖Qm T‖ < 1 and put D := I + Qm T, which is then invertible. Now

Qm A B = Qm (I + T) = Qm + Qm T = Qm (I + Qm T) = Qm D

shows that Qm A B̃ = Qm with B̃ = B D−1.
For the second '=' sign in (2.4), we first claim that, for all sufficiently large m ∈ N,

ker A Qm = ker Qm    (2.6)

holds. Clearly, '⊃' holds in (2.6). Suppose '⊂' does not hold. Then there is a sequence (mk)∞k=1 ⊂ N tending to infinity such that, for every k ∈ N, there is an xk ∈ ker A Qmk \ ker Qmk; that is,

yk := Qmk xk ≠ 0,  whereas  A yk = A Qmk xk = 0,  for all k ∈ N.

But these yk ∈ im Qmk clearly span an infinite-dimensional space, which contradicts dim ker A < ∞. Consequently, (2.6) holds.
Equality (2.6) with a sufficiently large m ∈ N shows that E decomposes into the direct sum of ker A Qm = ker Qm = im Pm and im Qm. From E = ker A Qm ⊕ im Qm we conclude that

A Qm |im Qm : im Qm → im A Qm    (2.7)

is a bijection. Since A and Qm are Fredholm (remember that dim X < ∞), we get that A Qm is Fredholm, and hence that E1 := im A Qm is closed and complementable. Now let E2 denote a complementary space of E1, and define an operator C ∈ L(E) which acts on E1 as the inverse of (2.7) and is zero on E2. With that construction, C A Qm = Qm holds.

To show that, in the first and third row of our table, the implication '⇐' cannot hold in general, we give four examples of Fredholm operators which are not invertible (although weakly invertible) at infinity. For this purpose we can reuse Example 2.7. If dim X < ∞, then all four operators a), b), c) and d) are Fredholm (since K is compact) and weakly invertible at infinity – but not invertible at infinity. For dim X = ∞, examples a) and c) are no longer Fredholm, but b) and d) still have this property as K is still compact then.
For an operator which is Fredholm but not even weakly invertible at infinity, we go to p = ∞. We draw some inspiration from Example 1.26 c).

Example 2.13. Put p = ∞, let E0 denote the space of all u ∈ E which decay at infinity, and fix an element v ∈ E \ E0. By the Hahn–Banach theorem there exists a bounded linear functional f on E which vanishes on E0 and satisfies f(v) = 1. Then define operators K and A on E by

K : u → f(u) · v  and  A := I − K.
Clearly, K is compact. Consequently, A is Fredholm with index zero. A simple computation shows that the kernel of A consists exactly of the multiples of v, whence α = 1. Then β = 1 follows as well. But how can we identify this one-dimensional cokernel? The fact that

f(Au) = f(u − f(u) v) = f(u) − f(u) f(v) = 0    (2.8)

for every u ∈ E is a strong indication that A is not surjective (and hence not right-invertible) at infinity. Roughly speaking, the cokernel of A must be located somewhere at infinity. Indeed, suppose that A were weakly (right-)invertible at infinity; that is, there exist B ∈ L(E) and m ∈ N such that Qm A B = Qm holds. If we rewrite this equality as A B − Pm A B = I − Pm, let both sides act on v and apply the functional f, we arrive at the contradiction

0 − 0 = f(A B v) − f(Pm A B v) = f(v) − f(Pm v) = 1 − 0

by (2.8), by im Pm ⊂ E0 and by the choice of f.
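Returning to the kernel identity (2.6) from the proof of Proposition 2.12: in a matrix model with a rank-one perturbation of the identity, one sees both regimes, ker A Qm strictly larger than ker Qm for small m and ker A Qm = ker Qm once m leaves the support of the kernel vector (all sizes below are ad hoc):

```python
import numpy as np

# A = I - vv^T/||v||^2 has ker A = span{v}; here v is supported in the
# coordinates {2, 3}. Q_m kills the first m coordinates, so
# dim ker Q_m = m, and ker(A Q_m) = ker Q_m exactly when v no longer
# lies in im Q_m, i.e. for m > 2.
N = 20
v = np.zeros(N)
v[2], v[3] = 1.0, 2.0
A = np.eye(N) - np.outer(v, v) / (v @ v)

def dim_ker_AQ(m):
    Qm = np.diag((np.arange(N) >= m).astype(float))
    return N - np.linalg.matrix_rank(A @ Qm)

# m = 2: v is in im Q_2, so the kernel is one dimension bigger than m;
# m = 4, 6: the kernels coincide with ker Q_m, of dimension m.
dims = {m: dim_ker_AQ(m) for m in (2, 4, 6)}
```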
Moreover, if dim X = ∞, then Qm is an example of an operator which is invertible at infinity but not Fredholm, where m ∈ N is arbitrary. Also note that a multiplication operator Mb on Lp is invertible at infinity if and only if its symbol b ∈ L∞ is essentially bounded away from zero in a neighborhood of infinity – while Mb is Fredholm if and only if this is true on the whole of Rn. This shows that, in the right column of our table, the implication '⇒' cannot hold in general. This example finishes the completeness discussion for our table in Figure 4.
Later, in Section 4.2, we will encounter a class of operators for which we can guarantee the implication '⇒' also in the right column of the table:

Definition 2.14. We say that an operator W ∈ L(E) is locally compact if PU W and W PU are compact for all bounded sets U ⊂ Zn.

Proposition 2.15. Let A = S + W ∈ L(E), where S is invertible and W is locally compact. If A is invertible at infinity, then A is Fredholm.

Proof. Suppose there are B ∈ L(E) and T1, T2 ∈ K(E, P) such that (2.3) holds. Now put R := (I − B W) S−1. Then

A R = A (I − B W) S−1 = A S−1 − A B W S−1 = I + W S−1 − (I + T1) W S−1 = I − T1 W S−1.
From T1 W = T1 (Pm W ) + (T1 Qm )W we see that T1 W is the norm limit of the compact operators T1 (Pm W ) as m → ∞, and consequently, it is compact itself. But this shows that R is a right Fredholm regularizer for A. In a symmetric manner one can check that L := S −1 (I − W B) is a left Fredholm regularizer of A. Figure 1 on page 15 and Figure 4 on page 57 also nicely illustrate another undisputable fact. In the perfect case 1 < p < ∞ with dim X < ∞, where much of the theory of this book grew up (for instance, see [67]), everything is rather nice: K(E, P) ; L(E, P) ; invertibility at ∞ = K(E) ; L(E) ; Fredholmness In fact, no one of the left hand side items even appears in [67]. The bifurcation of the theory starts when we leave that case, and it culminates when p = 1 or p = ∞ and dim X = ∞. It turns out that the appropriate way to follow at this bifurcation point is the left one, and that the above equality is just a coincidence in the case 1 < p < ∞, dim X < ∞.
2.4 Invertibility at Infinity vs. Stability

In Section 1.7 we have seen that one of the main ingredients of the applicability of an approximation method is its stability. The aim of this section is to translate the stability problem for a given approximation method into the question whether or not an associated operator is invertible at infinity.
Throughout the following, let n ∈ N, p ∈ [1, ∞], and let E denote either the space Lp = Lp(Rn) or ℓp(Zn, X), where X is an arbitrary Banach space. Depending on the choice of E – that is, on whether we are in the continuous or in the discrete case – take D ∈ {R, Z} correspondingly, and define Ê = Lp(Rn+1) or Ê = ℓp(Zn+1, X), respectively. To make a distinction between operators on E and operators on Ê, we will write Î, P̂U, P̂k, Q̂U, Q̂k for the identity operator and the respective projection operators on Ê.
2.4.1 Stacked Operators

Definition 2.16. Given a u ∈ Ê, we define the sequence (uτ)τ∈D with uτ ∈ E by

uτ(x) := u(x, τ),  x ∈ Dn, τ ∈ D,

and regard uτ as the τth layer of u. In this sense we will henceforth think of elements u ∈ Ê as being composed of their layers uτ ∈ E, τ ∈ D, where we will treat the whole sequence (uτ)τ∈D as one object in

ℓp(D, E) ≅ ℓp(Zn+1, X) = Ê  or  Lp(D, E) ≅ Lp(Rn+1) = Ê.    (2.9)
Remark 2.17. Note that, for every τ ∈ D, the set {(x, τ) : x ∈ Dn} has measure zero in Dn+1 if D = R, but positive measure if D = Z. This leads to a slight bifurcation of the following theory for the continuous and the discrete case. In the continuous case, u ∈ Ê = Lp(Rn+1), it makes no sense to talk about a single layer uτ ∈ E of u, since uτ is defined on a set of measure zero in Rn+1, whereas in the discrete case, every single layer uτ ∈ E of u ∈ Ê is well-defined, since uτ is defined on a set of positive measure in Zn+1.
We will use the layer construction from Definition 2.16 to associate an operator on Ê with every bounded sequence (Aτ)τ∈D of operators on E, simply by "stacking" this operator sequence into a "pile".

Definition 2.18. If T ⊂ D and (Aτ)τ∈T is a bounded sequence of operators on E, then

(Bu)(x, τ) := (Aτ uτ)(x) if τ ∈ T,  and  (Bu)(x, τ) := uτ(x) if τ ∉ T,    x ∈ Dn, τ ∈ D,

defines an operator B on Ê, which acts as Aτ in the τth layer of u for τ ∈ T and as the identity operator otherwise. In this sense, we will regard Aτ as the τth layer of B, and we will refer to B as the stacked operator of the sequence (Aτ)τ∈T, denoted by

⊕τ∈T Aτ,

or just by ⊕Aτ if the index set T is fixed.
By its definition, ⊕Aτ acts on every layer of u ∈ Ê independently of the other layers, like a generalized multiplication operator. If we identify u and (uτ) in the sense of (2.9), then we can identify ⊕Aτ with the operator of multiplication by a function in

ℓ∞(D, L(E)) = ℓ∞(Z, L(E))  or  L∞(D, L(E)) = L∞(R, L(E)),    (2.10)

respectively. In the discrete case, this is a generalized multiplication operator in the sense of Definition 1.4. Identification (2.10) is the reason why, when we write about an 'essential supremum', about 'essential boundedness' or 'essential invertibility' in the following, for the discrete case this simply translates to the usual supremum, to 'uniform boundedness' and 'uniform invertibility', respectively. By the identification (2.10), ⊕Aτ is a bounded linear operator on Ê with

‖⊕τ∈T Aτ‖L(Ê) = s if T = D, and = max(s, 1) if T ≠ D, where s = ess sup_{τ∈T} ‖Aτ‖L(E).

Figure 5: A rough illustration of the mapping (Aτ)τ∈T → ⊕τ∈T Aτ for n = 1, D = R, T = R+.
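For p = 2 and finitely many layers, the stacked operator is simply a block diagonal matrix, and the norm formula becomes ‖⊕Aτ‖ = max ‖Aτ‖; a small numerical check (layer sizes and count are arbitrary choices):

```python
import numpy as np

# Stacking finitely many layers (p = 2) yields a block diagonal matrix;
# its spectral norm is the maximum of the layer norms.
rng = np.random.default_rng(5)
layers = [rng.standard_normal((4, 4)) for _ in range(5)]

def stack(mats):
    n = sum(M.shape[0] for M in mats)
    B = np.zeros((n, n))
    i = 0
    for M in mats:
        B[i:i + M.shape[0], i:i + M.shape[1]] = M
        i += M.shape[0]
    return B

lhs = np.linalg.norm(stack(layers), 2)
rhs = max(np.linalg.norm(M, 2) for M in layers)
```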
Remark 2.19. By identifying ⊕Aτ with an operator of multiplication by a function in (2.10), it is clear that, in the continuous case, roughly speaking, the stacked operator ⊕Aτ does not even 'see' the behaviour of a single operator Aτ of the sequence. This is manifested by all the 'essential' notions in connection with ⊕Aτ.

Proposition 2.20. Let (Aτ)τ∈T ⊂ L(E) be a bounded sequence. The operator ⊕Aτ is invertible in L(Ê) if and only if the set {Aτ}τ∈T is essentially invertible.

Proof. This is an immediate consequence of Definition 1.57 and the identification of ⊕Aτ with an operator of multiplication by a function in (2.10).

In general, it is not true that ⊕Aτ ∈ BO or BDOp or L(Ê, P̂) whenever all operators Aτ are of that kind.

Example 2.21. a) Take T = N and Aτ = Vτ for all τ ∈ T. Then even (Aτ) ⊂ BO, but it is readily seen that ⊕Aτ is not even in L(Ê, P̂) ⊃ BDOp ⊃ BO.
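Proposition 2.20 can be watched in the same finite spirit: each layer below is invertible, but without uniform invertibility the inverse of the stack blows up (block sizes and the sequence 1/t are illustrative choices):

```python
import numpy as np

# Each layer (1/t) I is invertible, but the layers are not *uniformly*
# invertible, and the inverse of the stacked (block diagonal) operator
# blows up accordingly; with uniformly invertible layers it stays small.
def stack_diag(values, blk=3):
    return np.diag(np.repeat(np.asarray(values, dtype=float), blk))

good = stack_diag([1.0, 0.9, 1.1, 1.2])             # uniformly invertible
bad = stack_diag([1.0 / t for t in range(1, 30)])   # infimum of 1/t is 0

good_inv_norm = np.linalg.norm(np.linalg.inv(good), 2)
bad_inv_norm = np.linalg.norm(np.linalg.inv(bad), 2)
```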
b) Take T = R+ and Aτ = V1/τ for all τ ∈ T. Also in this case (Aτ) ⊂ BO, and even Aτ →P I as τ → ∞ if p < ∞. But still ⊕Aτ ∉ L(Ê, P̂).

One can prove, however, that ⊕Aτ is in BO or BDOp or L(Ê, P̂) if all Aτ are in the respective class and, in addition, some uniformity condition is fulfilled. Recall that by fA(d) we denote the information flow (1.30) of A over the distance d.

Proposition 2.22. Suppose (Aτ)τ∈T ⊂ L(E) is a bounded sequence.
a) If (Aτ)τ∈T ⊂ BO and the bandwidths of the Aτ are essentially bounded, then ⊕Aτ is a band operator as well.
b) If (Aτ)τ∈T ⊂ BDOp and the condition

ess sup_{τ∈T} fAτ(d) → 0  as  d → ∞

holds, then ⊕Aτ is band-dominated as well.
c) If (Aτ)τ∈T ⊂ L(E, P) and condition (1.43) holds, that is, (Aτ) ∈ F(E, P), then ⊕Aτ ∈ L(Ê, P̂).

Proof. a) and b) Take some d ∈ N and two (measurable) sets U, V ⊂ Dn+1 with dist(U, V) > d. Then P̂V (⊕Aτ) P̂U = ⊕Cτ, where

Cτ = P_{V^τ} Aτ P_{U^τ} if τ ∈ T,  and  Cτ = P_{V^τ} P_{U^τ} if τ ∉ T,

with U^τ = {x ∈ Dn : (x, τ) ∈ U}, V^τ = {x ∈ Dn : (x, τ) ∈ V} for τ ∈ D, and P∅ = 0. But from

dist(U^τ, V^τ) ≥ dist(U, V) > d,  τ ∈ D,

and ‖P̂V (⊕Aτ) P̂U‖ = ess sup ‖Cτ‖, we get the claim in a) and b), respectively.
c) Take k, m ∈ N, and note that P̂k (⊕Aτ) Q̂m = ⊕Cτ with

Cτ = Pk Aτ Qm if τ ∈ T and |τ| ≤ k,  and  Cτ = 0 if τ ∉ T or |τ| > k,

whenever m > k. Together with condition (1.43) we get that, for all k ∈ N, ‖P̂k (⊕Aτ) Q̂m‖ = ess sup ‖Cτ‖ → 0 as m → ∞. Of course, the symmetric property, ‖Q̂m (⊕Aτ) P̂k‖ → 0 as m → ∞, is shown analogously.

Remark 2.23. Note that, for the usage in this proposition, the supremum in (1.43) can be reduced to an essential supremum.
2.4.2 Stability and Stacked Operators

Our intuition now tells us that the stability of an operator sequence is very closely related to the stacked operator's invertibility at infinity. But we shall see that, in the continuous case, these two properties are not the same. For the following, let T ⊂ D be bounded below by 0 but unbounded towards plus infinity, where, in the continuous case, we additionally suppose that T is measurable and has positive measure.

Proposition 2.24. Consider a bounded sequence (Aτ)τ∈T in L(E), almost every operator of which is invertible at infinity; that is,

Aτ Bτ = I + Kτ  and  Bτ Aτ = I + Lτ    (2.11)

for almost all τ ∈ T, with some Bτ ∈ L(E) and Kτ, Lτ ∈ K(E, P), where, in addition, (Bτ) is bounded and, for every fixed τ∗ ∈ T,

ess sup_{τ≤τ∗} max( ‖Qm Kτ‖, ‖Kτ Qm‖, ‖Qm Lτ‖, ‖Lτ Qm‖ ) → 0  as  m → ∞.    (2.12)

Then the stacked operator ⊕Aτ is invertible at infinity if and only if there is some τ∗ ∈ T such that the set {Aτ}τ∈T, τ>τ∗ is essentially invertible.

Proof. Suppose there is a τ∗ ∈ T such that the set {Aτ}τ∈T, τ>τ∗ is essentially invertible. Then, for every τ ∈ T, put

Rτ := Aτ−1 if Aτ−1 exists,  and  Rτ := Bτ otherwise.

By the essential boundedness of the sets {Aτ−1}τ∈T, τ>τ∗ and {Bτ}τ∈T, the operator ⊕Rτ is bounded. We show that K := (⊕Aτ)(⊕Rτ) − Î is in K(Ê, P̂). Clearly, K = ⊕Kτ, where

Kτ = Aτ Rτ − I if τ ∈ T,  and  Kτ = 0 if τ ∉ T.

Take an arbitrary ε > 0, and choose m ∈ N such that m > τ∗ and the essential supremum in (2.12) is less than ε. Then Q̂m K = ⊕Lτ, where

Lτ = Aτ Rτ − I        if τ ∈ T and τ > m,
Lτ = Qm (Aτ Rτ − I)   if τ ∈ T and τ ≤ m,
Lτ = 0                if τ ∉ T,

showing that, by the definition of Rτ, almost all layers Lτ with τ > τ∗ are zero, whence ‖Q̂m K‖ = ess sup_{τ≤τ∗} ‖Qm (Aτ Rτ − I)‖ < ε. Analogously, we see that also ‖K Q̂m‖ < ε, showing that K = (⊕Aτ)(⊕Rτ) − Î ∈ K(Ê, P̂). By the same argument one easily checks that also (⊕Rτ)(⊕Aτ) − Î ∈ K(Ê, P̂), and hence that ⊕Aτ is invertible at infinity.
Chapter 2. Invertibility at Inﬁnity
Now suppose that ⊕Aτ is invertible at infinity. Then, by Proposition 2.6, ⊕Aτ is weakly invertible at infinity, which means that there is an m ∈ N such that Qm (⊕Aτ) B = Qm = C (⊕Aτ) Qm holds with some B, C ∈ L(𝐄). If we put U := {(x, τ) ∈ D^{n+1} : x ∈ D^n, τ > m} and multiply the left equality by PU from the left and the right equality by PU from the right, we get PU (⊕Aτ) B = PU = C (⊕Aτ) PU since PU Qm = PU = Qm PU. But this equality clearly shows that ⊕_{τ>m} Aτ is invertible, and hence the set {Aτ}_{τ>m} is essentially invertible, by Proposition 2.20. □
Figure 6: Illustration of the proof of Proposition 2.24 for the finite section method. If {Aτ}_{τ>τ∗} is essentially invertible, then everything outside a sufficiently large cube is (locally) invertible.
For example, if we consider the finite section method

Aτ = P_{Ωτ} A P_{Ωτ} + Q_{Ωτ},   τ ∈ T,   (2.13)

for A ∈ L(E) with a monotonously increasing sequence (Ωτ) of bounded sets in D^n, then the conditions of Proposition 2.24 are clearly fulfilled with Bτ = I, and the essential supremum in (2.12) is zero as soon as m ≥ sup_{x ∈ Ωτ∗} ‖x‖.
Corollary 2.25. If (Aτ) is the finite section sequence (2.13) for an operator A ∈ L(E), then the stacked operator ⊕Aτ is invertible at infinity if and only if there is some τ∗ ∈ T such that the set {Aτ}_{τ∈T, τ>τ∗} is essentially invertible.

What we have learnt from Proposition 2.24 and Corollary 2.25 is that, for very common classes of approximation methods (Aτ), the invertibility at infinity
of the stacked operator ⊕Aτ is equivalent to the essential invertibility of the final part of the method. On the other hand, remember that the stability of a method is equivalent to the uniform invertibility of its final part.

Although it may be tempting, there is absolutely no point in studying an approximation method whose final part is only essentially invertible. Let's call this property "essential stability" for a moment. The person who confronted us with the equation Au = b (1.33) will choose a subsequence (Aτk)_{k=1}^∞ out of (Aτ)_{τ∈T}, hoping to approximate the solution u of (1.33) by the solutions uk of Aτk uk = b (1.35) as k → ∞. However, if (Aτ) is only what we called "essentially stable", we cannot guarantee that this works for every subsequence (Aτk) – only for almost every one. For instance, (Ãτ) from Example 2.26 below is "essentially stable". But the final part of (τk) must avoid the integers in order to provide us with an applicable approximation method (Ãτk). So we better forget about "essential stability" and come back to our problem: stability and stacked operators.

Example 2.26. The sequence (Aτ)_{τ∈R+} with Aτ = I for all τ ∈ R+ is obviously stable, and ⊕Aτ = I is clearly invertible at infinity. But perturbing this sequence in the integer components only,
Ãτ = 0  if τ ∈ N,
     I  otherwise,

completely destroys the stability of the sequence, while it does not change the stacked operator's invertibility at infinity – not even the stacked operator itself, which is still the identity I on 𝐄.

The remainder of this chapter is devoted to the question under which conditions essential and uniform invertibility coincide, and hence under which additional conditions Proposition 2.24 and Corollary 2.25 tell us that the sequence is stable if and only if the stacked operator is invertible at infinity.
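A scalar toy model of Example 2.26 (each "layer" acting on a one-dimensional space, an illustrative simplification rather than operators on E) shows why "essential stability" is useless for subsequences that hit the perturbed indices:

```python
import numpy as np

def A_tilde(tau):
    """Layer of the perturbed sequence: 0 at integer tau, the identity otherwise."""
    return 0.0 if float(tau).is_integer() else 1.0  # scalars model operators on a 1-dim space

# On a subsequence avoiding the integers, the inverses are uniformly bounded ...
taus_good = [0.5 + k for k in range(10)]
assert all(abs(1.0 / A_tilde(t)) <= 1.0 for t in taus_good)

# ... but any subsequence hitting an integer contains a non-invertible layer,
# so the sequence is not stable, although almost every layer is the identity.
assert A_tilde(7) == 0.0
```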
The Discrete Case

In the discrete case E = ℓ^p(Z^n, X), where we agreed to use a discrete index set T ⊂ N, essential and uniform invertibility are clearly equivalent, as we remember from the discussion after (2.10). This fact allows us to state the following versions of Proposition 2.24 and Corollary 2.25 for the discrete case.

Corollary 2.27. Suppose E = ℓ^p(Z^n, X), T ⊂ N, and (Aτ)_{τ∈T} is a bounded sequence in L(E), every operator of which is invertible at infinity; that is, (2.11) holds with some Bτ ∈ L(E) and Kτ, Lτ ∈ K(E, P), and, in addition, (Bτ) is bounded. Then the sequence (Aτ) is stable if and only if the stacked operator ⊕Aτ is invertible at infinity.

Note that the set {τ ∈ T : τ ≤ τ∗} is finite for every τ∗ ∈ T, whence condition (2.12) automatically follows from Kτ, Lτ ∈ K(E, P).
Theorem 2.28. If E = ℓ^p(Z^n, X), T ⊂ N, and (Aτ) is the finite section sequence (2.13) for an operator A ∈ L(E), then the sequence (Aτ) is stable if and only if the stacked operator ⊕Aτ is invertible at infinity.
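As a numerical illustration of the stability half of Theorem 2.28 (with a hypothetical concrete operator, not one from the text), take A = I + (V1 + V−1)/3 on ℓ²(Z): its finite sections are tridiagonal Toeplitz matrices whose inverses stay uniformly bounded, i.e. the method is stable.

```python
import numpy as np

def finite_section(m):
    """The P_m A P_m block of A = I + (V_1 + V_{-1})/3 on l^2(Z), as a (2m+1)x(2m+1) matrix."""
    n = 2 * m + 1
    return np.eye(n) + (np.eye(n, k=1) + np.eye(n, k=-1)) / 3.0

# The sections are uniformly invertible: the eigenvalues of this symmetric
# tridiagonal Toeplitz matrix are 1 + (2/3)cos(k*pi/(n+1)) > 1/3, so
# ||A_m^{-1}|| stays below 3 for every section size.
inv_norms = [np.linalg.norm(np.linalg.inv(finite_section(m)), 2) for m in range(1, 30)]
assert max(inv_norms) < 3.0
```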
The Continuous Case

Now suppose E = L^p with 1 ≤ p ≤ ∞. If we restricted ourselves to sequences (Aτ)_{τ∈T} with a discrete index set T, then Corollary 2.27 and Theorem 2.28 would immediately apply to this case as well. But we will now study the case of an index set T ⊂ R+ with positive measure in R, which is often desirable for approximation methods in the continuous case, as we agreed before. In Section 1.4 we have seen that, under several continuity assumptions on the mapping τ → Aτ, essential and uniform invertibility coincide. By Proposition 1.62 c) and d), we might want to expect the operators Aτ to depend sufficiently smoothly on τ. We will present three different approaches to this idea.
1st approach. In Lemma 1.61 we have shown that continuous dependence of Aτ on τ is sufficiently smooth. But this is one of the strongest forms of sufficient smoothness, and it is indeed not very fruitful to expect the mapping τ → Aτ to be continuous from R+ to L(E) since already very simple sequences like (Pτ) or the finite section sequence (Aτ) for A = 2I do not enjoy this property. The problem is that ‖PΩ − PΩ′‖ = 1 as soon as Ω and Ω′ ⊂ R^n differ in a set of positive measure.

A workaround for this problem, in connection with the finite section method, is as follows. Suppose the choice of the sets Ωτ in (2.13) is such that there is a set Ω ⊂ R^n and, for every τ ∈ T, a bijective deformation ϕτ of R^n which sends Ωτ to Ω. For example, if Ωτ = τΩ, then this deformation is just shrinking R^n by the factor τ. Based on the deformations ϕτ, define invertible isometries Φτ on E. For example, if ϕτ(x) = x/τ is the shrinking map mentioned above, then one can put (Φτ u)(x) = ρτ u(x/τ), where

ρτ = τ^{-n/p},  p < ∞,
     1,         p = ∞

is a norm correction factor. Now transform all operators Aτ in (2.13) from operators which essentially act on L^p(Ωτ) to operators, say A^Ω_τ := Φτ^{-1} Aτ Φτ, all essentially acting on the same space L^p(Ω), and then require continuous dependence of A^Ω_τ on τ.

With this construction we get that (Aτ) and (A^Ω_τ) are simultaneously stable, and ⊕Aτ and ⊕A^Ω_τ are simultaneously invertible at infinity. So, under the assumption that τ → A^Ω_τ is continuous, we have that the sequence (Aτ) is stable if and only if ⊕Aτ is invertible at infinity.
Figure 7: This figure illustrates the idea of the 1st approach. It shows the sequences (Aτ), (A^Ω_τ) and their stacked operators for Ωτ = τΩ and T = R+.
The problem of this approach is that this continuity assumption on τ → A^Ω_τ is still a very strong restriction on the sequence (Aτ). For example, if A ∈ L(E) is composed – via addition and composition – of a multiplication operator Mb and some other operators, and (Aτ) is the finite section sequence (2.13) of A, then this continuity condition requires the function b ∈ L^∞ to be uniformly continuous (with a possible exception at the origin), which is rather restrictive!
2nd approach. Since the assumption that Aτ ⇒ Aτ0 as τ → τ0 turned out to be a bit restrictive, we consider a weaker type of continuity now. From Proposition 1.1.21 in [70] and the discussion afterwards, we conclude that, for (Aτ) ∈ F(E, P), the mapping τ → Aτ is sufficiently smooth if Aτ → Aτ0 in the sense of P-convergence as τ → τ0. But it is readily seen that this is, in general, not the case since, again, the condition already fails for (Aτ) = (PΩτ), and hence for the finite section method.

Again, there is a workaround. For every τ0 ∈ T, one can define an approximate identity P̃ = (P̃m)_{m=1}^∞ such that PΩτ → PΩτ0 with respect to P̃ as τ → τ0. For example, if Ω is compact and convex and Ωτ = τΩ, one can choose

P̃m = I − P_{(1+1/m)τ0Ω} + P_{(1−1/m)τ0Ω},   m ∈ N.
However, due to this new approximate identity, the class F(E, P̃), for which Proposition 1.1.21 of [70] is applicable, completely differs from F(E, P), and, moreover, this class depends on τ0.

3rd approach. Instead of Aτ ⇒ Aτ0 or P-convergence Aτ → Aτ0, we will now, depending on E = L^p, either suppose ∗-strong convergence Aτ → Aτ0 or pre-∗-strong convergence as τ → τ0. Precisely, for every τ0 ∈ T, we expect the strong convergence

Aτ → Aτ0   (2.14)

on E and the strong convergence of the (pre)adjoints

Aτ∗ → Aτ0∗  if p < ∞    or    Aτ′ → Aτ0′  if p = ∞   (2.15)
on the (pre)dual space of E as τ → τ0. This type of continuous dependence of Aτ on τ is sufficiently smooth, as we will prove in the following lemma.

Lemma 2.29. If (Aτ) ⊂ L(E) is subject to conditions (2.14) and (2.15), then the mapping τ → Aτ, from T ⊂ R to L(E), is sufficiently smooth.

Proof. Take an arbitrary τ0 ∈ T and some sequence τk → τ0 in T as k → ∞, and suppose the sequence (Aτk)_{k=1}^∞ is stable. What we have to show is that then Aτ0 is invertible with ‖Aτ0^{-1}‖ ≤ s := sup_{k>k∗} ‖Aτk^{-1}‖.

If p < ∞, then, by (2.14) and (2.15), we have ∗-strong convergence Aτk → Aτ0 as k → ∞. We will (independently of this lemma) show in Corollary 2.37 that, under these conditions, Aτ0 is invertible, and ‖Aτ0^{-1}‖ ≤ s.

If p = ∞, then, with (Aτk), also the sequence of preadjoints (Aτk′) is stable. By (2.14) and (2.15), we have ∗-strong convergence Aτk′ → Aτ0′ as k → ∞. Now apply the same Corollary 2.37 to (Aτk′), showing that Aτ0′ is invertible, and ‖(Aτ0′)^{-1}‖ ≤ s. But then also Aτ0 is invertible, and ‖Aτ0^{-1}‖ ≤ s. □
Under some conditions on the sets Ωτ, the finite section method (2.13) has both properties (2.14) and (2.15), and hence it is sufficiently smooth, showing that it is stable if and only if its stacked operator is invertible at infinity. We will henceforth study such a situation.

Let Ω be a compact and convex subset of R^n, containing the origin in its interior. In what follows, we will choose the index set T := R+, and the sequence of sets Ωτ from Section 1.5.3 shall be chosen in the following convenient way:

Ωτ := τΩ,   τ > 0.

Then Ωτ1 ⊂ Ωτ2 if 0 < τ1 < τ2, and for every x ∈ R^n, there exists a τ > 0 such that x ∈ Ωτ.
The case 1 < p < ∞. Consider the finite section method (2.13) on E = L^p, where 1 < p < ∞. It is easily seen that this method is subject to (2.14) and (2.15).

Proposition 2.30. Let E = L^p with 1 < p < ∞. For every A ∈ L(E), the finite sections Aτ depend sufficiently smoothly on τ ∈ R+.

Proof. For 1 < p < ∞, the sequence of finite section projectors PτΩ is subject to (2.14) and (2.15), where PτΩ∗ is equal to PτΩ, acting on L^q ≅ E∗ with 1/p + 1/q = 1. Since ∗-strong convergence is compatible with addition and multiplication, we get that also (Aτ) is subject to (2.14) and (2.15), and it remains to apply Lemma 2.29. □

The cases L^1 and L^∞. Now we study the finite section method (2.13) on E = L^p for p = 1 and for p = ∞. Our aim is to prove some analogue of Proposition 2.30. But clearly, for p = 1, we only have (2.14), and for p = ∞, we only have (2.15). So, recalling the proof of Lemma 2.29, if τk → τ0 > 0 and (Aτk) is stable, then in either case we can, roughly speaking, only prove 50% of the invertibility of Aτ0. Fortunately, the operators A that we have in mind in Section 4.2 have the property that a finite section Aτ0 is either invertible or not even "semi-invertible". This is the theory we are going to develop now. Firstly, here is what we mean by "semi-invertible".

Definition 2.31. We say that an operator A ∈ L(E) is bounded below if the quotient ‖Ax‖/‖x‖ is bounded below by a positive constant for all x ∈ E \ {0}. In that case, the best possible lower bound

ν(A) := inf_{x∈E\{0}} ‖Ax‖/‖x‖ = inf_{x∈E, ‖x‖=1} ‖Ax‖ > 0

is referred to as the lower norm of A.

Lemma 2.32. An operator A ∈ L(E) is bounded below if and only if A is injective and has a closed image.

Proof. If A is bounded below, then, clearly, ker A = {0}, and every Cauchy sequence in im A converges in im A. Conversely, if A is injective and im A is closed, then Banach's theorem on the inverse operator says that the operator Ax → x from im A to E is bounded. □

In fact, being bounded below is closely related to being one-sided invertible.

Lemma 2.33. An operator A ∈ L(E) is left invertible, that is, BA = I for some B ∈ L(E), if and only if A is bounded below and im A is complementable in E.

Proof. Suppose A is bounded below and im A is complementable. From Lemma 2.32 we know that A is injective and im A is closed. Let C : im A → E act by Ax → x. By Banach's theorem on the inverse operator, C is bounded. Let B act
like C on im A and as the zero operator on its complement. Then B ∈ L(E) and BA = I.

If there is a B ∈ L(E) with BA = I, then from ‖x‖ = ‖BAx‖ ≤ ‖B‖ ‖Ax‖ we see that ‖Ax‖/‖x‖ is bounded below by 1/‖B‖. Moreover, it turns out that ker B is a direct complement of im A in E. Indeed, every x ∈ E can be written as x = ABx + (x − ABx), where ABx ∈ im A and x − ABx ∈ ker B since B(x − ABx) = Bx − (BA)Bx = 0. Finally, suppose y ∈ im A ∩ ker B. Then y = Ax for some x ∈ E and By = 0. Consequently, 0 = By = BAx = x, which shows that y = Ax = 0. □

Remark 2.34. If E is a Hilbert space, then im A is automatically complementable in E if A is bounded below. This is the case since every closed subspace of a Hilbert space has a complement (its orthogonal complement, for example). Consequently, if E is a Hilbert space, then A ∈ L(E) is left invertible if and only if it is bounded below.

Lemma 2.35. A ∈ L(E) is invertible if and only if A and A∗ are bounded below. In that case, ν(A) = 1/‖A^{-1}‖.

Proof. If A ∈ L(E) is invertible, then also A∗ ∈ L(E∗) is invertible, and, by Lemma 2.33, both are bounded below. Conversely, if A and A∗ are bounded below, then, by Lemma 2.32, A is injective and has a closed image. Since, by Lemma 2.32 again, also A∗ is injective, the image of A is dense in E, and hence it is all of E. Consequently, A is invertible. If x ∈ E, then ‖x‖ = ‖A^{-1}Ax‖ ≤ ‖A^{-1}‖ ‖Ax‖ shows ‖Ax‖/‖x‖ ≥ 1/‖A^{-1}‖ if x ≠ 0. This inequality can be made as tight as desired by the choice of x ∈ E. □

Proposition 2.36. Let (Am)_{m=1}^∞ ⊂ L(E) be a stable sequence with strong limit A ∈ L(E). Then A is bounded below, and ν(A) ≥ 1/sup_{m≥m∗} ‖Am^{-1}‖.

Proof. Take m∗ ∈ N large enough that Am is invertible for all m ≥ m∗, and put s := sup_{m≥m∗} ‖Am^{-1}‖ < ∞. Then, for every x ∈ E and m ≥ m∗, we have

‖x‖ = ‖Am^{-1} Am x‖ ≤ ‖Am^{-1}‖ ‖Am x‖ ≤ s ‖Am x‖ → s ‖Ax‖   as m → ∞.

So ‖Ax‖/‖x‖ is bounded below by 1/s. □
Corollary 2.37. If (Am) ⊂ L(E) is stable and Am → A ∗-strongly, then A is invertible, and ‖A^{-1}‖ ≤ sup_{m≥m∗} ‖Am^{-1}‖.
Proof. If (Am) ⊂ L(E) is stable, then also (Am∗) ⊂ L(E∗) is stable. Now apply Proposition 2.36 and Lemma 2.35. □

Lemma 2.38. The map A → ν(A) is continuous from L(E) to R; precisely, for all A, B ∈ L(E), it holds that |ν(A) − ν(B)| ≤ ‖A − B‖.

Proof. Let A, B ∈ L(E). Then, for all x ∈ E with ‖x‖ = 1,

‖A − B‖ ≥ ‖Ax − Bx‖ ≥ ‖Ax‖ − ‖Bx‖ ≥ ν(A) − ‖Bx‖
holds. But the right-hand side comes arbitrarily close to ν(A) − ν(B) by choosing x accordingly. The other inequality, ‖A − B‖ ≥ ν(B) − ν(A), holds as well, by reasons of symmetry. □

Corollary 2.39. The set of operators A ∈ L(E) that are bounded below is an open subset of L(E); precisely, if A ∈ L(E) is bounded below, then every operator B ∈ L(E) with d := ‖A − B‖ < ν(A) is bounded below, and ν(B) ≥ ν(A) − d > 0.

Definition 2.40. Let Φ+(E) denote the set of all operators A ∈ L(E) which have a finite-dimensional kernel and a closed image. Operators in Φ+(E) are called semi-Fredholm operators.

Note that, by Lemma 2.32, if A ∈ L(E) is bounded below, then A ∈ Φ+(E). In analogy to Fredholmness, also semi-Fredholmness has the following properties.

Lemma 2.41. Semi-Fredholmness is invariant a) under small perturbations, and b) under compact perturbations.
Proof. See [9], for example. □
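On a finite-dimensional ℓ² space, the lower norm of Definition 2.31 is simply the smallest singular value, so Lemma 2.35 (ν(A) = 1/‖A⁻¹‖) and Lemma 2.38 (Lipschitz continuity of ν) can be verified numerically; the random matrices below are arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(1)

def lower_norm(A):
    """On finite-dimensional l^2, nu(A) = min ||Ax||/||x|| is the smallest singular value."""
    return np.linalg.svd(A, compute_uv=False)[-1]

A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

# Lemma 2.35: for invertible A, nu(A) = 1 / ||A^{-1}||.
assert abs(lower_norm(A) - 1.0 / np.linalg.norm(np.linalg.inv(A), 2)) < 1e-10

# Lemma 2.38: |nu(A) - nu(B)| <= ||A - B||  (Weyl's inequality for singular values).
assert abs(lower_norm(A) - lower_norm(B)) <= np.linalg.norm(A - B, 2) + 1e-12
```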
For the remainder of this section, let us decompose the set L(E) into the three distinct subsets i(E), b(E) and n(E), namely the sets of operators that are

• invertible,
• bounded below but not invertible, and
• not bounded below,

respectively. Following our objective to study operators the finite sections of which are either invertible or not even "semi-invertible", we will define a new set of operators by taking away from L(E) all the operators that are bounded below but not invertible. So let

r(E) := L(E) \ b(E) = i(E) ∪ n(E).

Then r(E) is the set of all operators A ∈ L(E) with the property

A is invertible or A is not even bounded below,

or, in an equivalent formulation,

If A is bounded below, then A is even invertible.   (2.16)
For example, it is readily seen that all multiplication operators are subject to (2.16):

Lemma 2.42. Take an arbitrary b ∈ L^∞. If Mb ∈ Φ+(E), then Mb is invertible.
Proof. If Mb ∈ Φ+(E), then, by Lemma 2.41 a), there is an ε0 > 0 such that all operators A with ‖Mb − A‖ < ε0 are in Φ+(E) as well. Suppose Mb were not invertible. Then, for every ε > 0, there exists a set Sε ⊂ R^n with positive measure such that |b(x)| < ε for all x ∈ Sε. Now put

c(x) := b(x),  x ∉ S_{ε0/2},
        0,     x ∈ S_{ε0/2}.

Then ‖b − c‖∞ < ε0, whence Mc ∈ Φ+(E). But this contradicts the fact that Mc has an infinite-dimensional kernel (c vanishes on all of S_{ε0/2}). □

Corollary 2.43. The subalgebra {Mb : b ∈ L^∞} of all multiplication operators in L(E) is contained in r(E).

Proof. Take an arbitrary function b ∈ L^∞. We show that Mb is subject to (2.16). Suppose Mb is bounded below. Then, by Lemma 2.32, Mb ∈ Φ+(E). But now Lemma 2.42 shows that Mb is invertible. □

Proposition 2.44. The subset r(E) of L(E) is closed under norm convergence and multiplication.

Proof. Norm convergence: Take a sequence (Am)_{m=1}^∞ ⊂ r(E) with Am ⇒ A as m → ∞. We will show that also A ∈ r(E), i.e. A has property (2.16). If A is bounded below, then M := ν(A) > 0. We choose some m ∈ N such that d := ‖A − Am‖ < M/3. From Corollary 2.39 we know that then Am is bounded below with
‖Am x‖ ≥ (M − d)‖x‖ > (M − M/3)‖x‖ = (2M/3)‖x‖,   x ∈ E.

Since Am ∈ r(E), we can, by (2.16), conclude that Am is even invertible, and Lemma 2.35 tells us that

‖Am^{-1}‖ ≤ 3/(2M).

Consequently, we know that

‖Am^{-1}(A − Am)‖ ≤ ‖Am^{-1}‖ ‖A − Am‖ ≤ (3/(2M)) · (M/3) = 1/2,

and hence,

A = Am (I + Am^{-1}(A − Am))

is invertible. So the norm limit A has property (2.16) as well.

Multiplication: It is readily seen that, for all Ai, Bi ∈ i(E) and An, Bn ∈ n(E), we have AiBi ∈ i(E), AiBn ∈ n(E), AnBi ∈ n(E) and AnBn ∈ n(E), and hence AB ∈ r(E) = i(E) ∪ n(E) whenever A, B ∈ r(E) = i(E) ∪ n(E). □
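A finite-dimensional discrete model of multiplication operators – diagonal matrices diag(b(j)), an illustrative stand-in for Mb on L^p rather than the operators themselves – makes the dichotomy behind Lemma 2.42 and Corollary 2.43 visible:

```python
import numpy as np

def nu_diag(b_vals):
    """Lower norm of a diagonal (multiplication-type) operator: min |b(j)|."""
    return np.min(np.abs(b_vals))

j = np.arange(-500, 501)

# inf |b| > 0: the operator is bounded below AND invertible, matching (2.16).
b_good = 2.0 + np.sin(j)
assert nu_diag(b_good) > 0.0

# inf |b| = 0 although b(j) != 0 everywhere: on larger and larger truncations the
# lower norm tends to 0, so M_b is not bounded below (it lands in n(E), not in b(E)).
b_bad = 1.0 / (1.0 + np.abs(j))
assert nu_diag(b_bad) < 1e-2 and np.all(b_bad > 0.0)
```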
Remark 2.45. Note that r(E) is not closed under addition. For example, take n = 1, p ∈ [1, ∞], and

A = V1 P_{[0,+∞)}   and   B = P_{(−∞,0)}.

Then A, B ∈ n(E) but A + B ∈ b(E).
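A discrete analogue of Remark 2.45 – shift and half-line projections on a finite index window, an illustrative model rather than the operators on L^p themselves – can be tested via smallest singular values: A and B each kill half of the basis vectors, while A + B acts as an isometry (and is not onto, since nothing is mapped to index 0):

```python
import numpy as np

N = 20
dom = np.arange(-N, N + 1)      # domain indices  -N..N
ran = np.arange(-N, N + 2)      # range indices   -N..N+1 (room for the shift)

def basis_matrix(image_index):
    """Matrix whose column j sends e_j to e_{image_index(j)}, or to 0 if None."""
    M = np.zeros((len(ran), len(dom)))
    for c, j in enumerate(dom):
        target = image_index(j)
        if target is not None:
            M[np.where(ran == target)[0][0], c] = 1.0
    return M

A = basis_matrix(lambda j: j + 1 if j >= 0 else None)  # V_1 P_[0,inf): shift the right half, kill the left
B = basis_matrix(lambda j: j if j < 0 else None)       # P_(-inf,0): keep the left half, kill the right

smin = lambda M: np.linalg.svd(M, compute_uv=False)[-1]
assert smin(A) < 1e-12 and smin(B) < 1e-12   # neither A nor B is bounded below
assert abs(smin(A + B) - 1.0) < 1e-12        # but A + B is (here even an isometry)
```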
In accordance with our objective to study operators whose finite sections have property (2.16), we define the following two classes, where we abbreviate (Aτ)′ = (A′)τ by Aτ′:

R1 := {A ∈ L(L^1) : Aτ ∈ r(L^1) ∀τ > 0} ⊂ L(L^1),
R∞ := {A ∈ S : Aτ′ ∈ r(L^1) ∀τ > 0} ⊂ L(L^∞).

Now, finally, we have found the right framework for our analogous formulation of Proposition 2.30 concerning the finite section method in L^1 and L^∞.

Proposition 2.46. Let p ∈ {1, ∞}, E = L^p, and A be an arbitrary operator in Rp. Then the finite sections Aτ depend sufficiently smoothly on τ ∈ R+.

Proof. Take an arbitrary τ0 > 0 and a sequence τk → τ0 as k → ∞, and suppose (Aτk) is stable. We have to show that Aτ0 is invertible in L(E) with ‖Aτ0^{-1}‖ ≤ s := sup ‖Aτk^{-1}‖.

Start with p = 1. Here we have strong convergence PτkΩ → Pτ0Ω, and hence

Aτk = PτkΩ A PτkΩ + QτkΩ → Pτ0Ω A Pτ0Ω + Qτ0Ω = Aτ0

strongly as k → ∞. Applying Proposition 2.36 to the stable sequence (Aτk), we get that Aτ0 is bounded below and ν(Aτ0) ≥ 1/s. The condition A ∈ R1 now yields that Aτ0 ∈ r(E), and hence we can pass from Aτ0 being bounded below to its being invertible. The inequality ‖Aτ0^{-1}‖ = 1/ν(Aτ0) ≤ s follows.

For p = ∞, we pass to the sequence of preadjoints (Aτk′), which is stable as well, and apply the result for p = 1, yielding that Aτ0′, and hence Aτ0, is invertible and the desired norm inequality holds. □

So – under somewhat stronger assumptions on A – in L^1 and L^∞, we have the same result as Proposition 2.30 in L^p with 1 < p < ∞. In Section 4.2.2 we will see that R1 and R∞ still contain sufficiently many interesting and practically relevant operators.

Now here is the outcome of our investigations on the finite section method in the continuous case which, together with Theorem 2.28 for the discrete case, completes our studies on the finite section method for this chapter. We will come back to the finite section method in Chapter 4.

Theorem 2.47. If A is an operator in L(L^p) with 1 < p < ∞ or in R1 or R∞, then the finite section method (Aτ) is stable if and only if ⊕Aτ is invertible at infinity.
Proof. By Corollary 2.25, the stacked operator ⊕Aτ is invertible at infinity if and only if there is some τ∗ > 0 such that {Aτ}_{τ>τ∗} is essentially invertible. Propositions 2.30 and 2.46 show that, under the conditions of this theorem, the mapping τ → Aτ is sufficiently smooth. Moreover, T = R+ is massive, and Proposition 1.62 c) and d) shows that in this case the essential invertibility of {Aτ}_{τ>τ∗} is equivalent to its uniform invertibility, which is nothing but the stability of (Aτ). □
2.5 Comments and References

The theory of Fredholm operators can be found in a huge number of books on operator theory, for example [31]. This theory essentially generalizes E. I. Fredholm's classical results for operators of the form I + K with K ∈ K(E). In this connection, the name "Fredholm operator" originated. Sometimes these operators are also referred to as Noether or Φ-operators. F. Noether was the first who studied the more general situation of operators with a closed image and finite-dimensional kernel and cokernel [56]. The usage of the letter Φ, which is standard today, started with a mistranslation of the cyrillic letter "F" (for "Fredholm") from a publication of Gohberg and Krein.

The study of P-Fredholmness and invertibility at infinity goes back to [68] and [70]. Proposition 2.2 is basically from [70]. The main part of Proposition 2.10 goes back to [68]. Proposition 2.12 is the product of personal communication with Bernd Silbermann. Proposition 2.15 generalizes an argument from the proof of Proposition 3.3.4 in [70]. An even stronger version of Proposition 2.15 can be found in Section 3.3 of [17].

The idea to identify the stability of a sequence of operators with properties of an associated block-diagonal operator can already be found in Gorodetski [32]. The results presented here for the discrete case are well-known (see [68] and [70], for example). The treatment of methods with a continuous index set, as presented here, goes back to [68] (the "1st approach") and [48], [50] (the "3rd approach"). Operators that are bounded below are well-studied in operator theory.
Chapter 3
Limit Operators

In Chapter 2 we introduced and discussed the property of invertibility at infinity. In particular, we made very detailed studies of its intimate connections with Fredholmness in Section 2.3, and we explored the possibility to study approximation methods and their stability as a special invertibility problem at infinity in Section 2.4, which opens the door to a variety of applications. So invertibility at infinity turns out to be the key property to be studied in this book. It is the aim of this chapter to introduce a tool for the study of this key property: limit operators.

Let E = ℓ^p(Z^n, X) with p ∈ [1, ∞] and a Banach space X, and take an arbitrary operator A ∈ L(E). In Example 2.7 a)–d) we have seen operators A ∈ L(E) which are not surjective or not injective at infinity, respectively. Is there a generic way to detect such behaviour?
A First Glimpse at Localization

If A is a band-dominated operator, then Proposition 2.10 relates the question whether or not A is invertible at infinity to an invertibility problem in the Banach algebra BDO^p/K(E, P). For the study of invertibility in a Banach algebra B, it is often a good idea to look at so-called unital homomorphisms; that is, mappings ϕ from B into another Banach algebra A which are subject to

ϕ(a + b) = ϕ(a) + ϕ(b),   ϕ(ab) = ϕ(a)ϕ(b)   and   ϕ(eB) = eA

for all a, b ∈ B. Clearly, if an element a ∈ B is invertible in B, then also ϕ(a) is necessarily invertible in A, and ϕ(a)^{-1} = ϕ(a^{-1}). In short,

a is invertible   ⟹   ϕ(a) is invertible,   ∀ unital homomorphisms ϕ.   (3.1)
The following example is about the mother of all commutative Banach algebras.
Example 3.1. Let B = C[0, 1] be the Banach algebra of all continuous functions on the interval [0, 1], equipped with pointwise operations and the maximum norm. Moreover, let A = C, and fix a point x ∈ [0, 1]. We look at the mapping ϕx that assigns to every continuous function a on [0, 1] its function value a(x) at x. So we have

ϕx : B → A,   ϕx : a → a(x).

Clearly, ϕx is a unital homomorphism. It is compatible with addition and multiplication, and it maps the function that is equal to 1 on all of [0, 1] to 1 ∈ C. And indeed, if a is invertible as a function in B, that is, a ∈ B and 1/a ∈ B, then its function value at x is necessarily nonzero; that is, ϕx(a) is invertible in A = C.

If we no longer fix the point x ∈ [0, 1] in Example 3.1, then (3.1) tells us that

a is invertible   ⟹   ϕx(a) is invertible,   ∀ x ∈ [0, 1].   (3.2)
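Example 3.1 can be played through numerically; the functions a and b below are arbitrary illustrative choices:

```python
import numpy as np

# Elements of B = C[0,1], represented as Python callables; phi_x is evaluation at x.
a = lambda t: 2.0 + np.cos(t)   # invertible in C[0,1]: it never vanishes
b = lambda t: t - 0.5           # NOT invertible: it vanishes at t = 0.5

x = 0.3
phi = lambda f: f(x)            # the unital homomorphism phi_x : C[0,1] -> C

# phi_x respects the algebra operations and the unit:
assert phi(lambda t: a(t) + b(t)) == phi(a) + phi(b)
assert phi(lambda t: a(t) * b(t)) == phi(a) * phi(b)
assert phi(lambda t: 1.0) == 1.0

# (3.1): invertibility of a in B forces invertibility of phi_x(a) in C ...
assert phi(a) != 0.0
# ... but the converse needs the WHOLE family {phi_x}: phi_{0.3}(b) != 0 although b is not invertible.
assert phi(b) != 0.0 and b(0.5) == 0.0
```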
Our knowledge about continuous functions on compact sets tells us that the invertibility of all elements ϕx(a) is not only necessary for the invertibility of a in B, as (3.1) and (3.2) say, but also sufficient! That is what we call a sufficient family of unital homomorphisms. Our family of unital homomorphisms {ϕx}_{x∈[0,1]} contains enough elements to even yield the other implication in (3.2) because, roughly speaking, it simply looks everywhere.

Despite its simplicity, Example 3.1 not only led us to the notion of a sufficient family, it also nicely illustrates the main idea of local principles in Banach algebras: Replace a difficult invertibility problem in B by a family of simpler invertibility problems. Very often this is done by, again speaking roughly, looking at a ∈ B through a family of microscopes. In Example 3.1 we might think of a little microscope attached to every point x ∈ [0, 1], and the process of looking at the separate elements {ϕx(a)}_{x∈[0,1]} ⊂ A rather than at the whole element a ∈ B at once is referred to as localization over [0, 1].

If we want to investigate the invertibility problem that is connected with invertibility at infinity of an operator A, there is clearly one place we have to look at: infinity¹. The somewhat simpler objects that we find there via application of an appropriate family of unital homomorphisms each represent the local behaviour of A in a certain location at infinity. We could call them "infinite local representatives" or "samples at infinity" of A, but in a minute we will start calling them the limit operators of A.

¹ In that case, we might prefer telescopes instead of microscopes.
How to Pick Samples from Infinity – A Rough Guide

Given a point ϑ ∈ Z^n, one might ask what the action of our operator A on u ∈ E looks like from another point of view, say with ϑ taking the role of the origin. Well, for finite points ϑ, the answer is clearly V−ϑ A Vϑ, which is still a very close relative of the operator A itself. But what if we ask for the point ϑ = ∞? Then ϑ can only be approached as the limit of a sequence h of finite points h1, h2, ... in Z^n, and by carrying out the above process for every one of these finite points hm, the answer for ϑ = ∞ can only be understood in the sense of some sort of limit of the sequence of operators V−hm A Vhm as m goes to infinity. This limit is what we will call a limit operator. Obviously, there are many different paths h = (h1, h2, ...) leading to infinity which, in general, produce many different limit operators of A. The behaviour of A at ϑ = ∞ will be reflected by the collection of all these limit operators of A.
3.1 Definition and Basic Properties

Let E = ℓ^p(Z^n, X) with p ∈ [1, ∞] and a Banach space X. In order to derive a family of local representatives of A ∈ L(E) at infinity, we need something like a partition of infinity into different locations. For n = 1, a rough partition of ∞ that we are very familiar with is {−∞, +∞}. We will generalize this idea of partitioning ∞ (with respect to the different directions in which it can be approached) to R^n.

Definition 3.2. Let S^{n−1} denote the unit sphere in R^n with respect to the Euclidean norm ‖·‖2. If R > 0, s ∈ S^{n−1} and V ⊂ S^{n−1} is a neighbourhood of s in S^{n−1}, then

U_R^∞ := {x ∈ R^n : ‖x‖2 > R}   (3.3)

is called a neighbourhood of ∞, and

U_{R,V}^∞ := {x ∈ R^n : ‖x‖2 > R and x/‖x‖2 ∈ V}   (3.4)

is a neighbourhood of ∞s. We say that a sequence (xm)_{m=1}^∞ ⊂ R^n tends to infinity or tends to infinity in direction s, and we write xm → ∞ or xm → ∞s, if, for every neighbourhood U of ∞ or ∞s, respectively, all but finitely many elements of the sequence (xm) are in U.
If n = 1, we will, of course, use the familiar notations −∞ and +∞ instead of ∞−1 and ∞+1. Since the Euclidean norm ‖·‖2 and the maximum norm ‖·‖∞ on R^n are equivalent, we have that xm tends to infinity if and only if ‖xm‖∞ → ∞ as m → ∞. Now, finally, here is the definition of a limit operator.
Definition 3.3. Let A ∈ L(E), and let h = (hm) be a sequence in Z^n tending to infinity. If the sequence of operators

V−hm A Vhm,   m = 1, 2, ...,   (3.5)

P-converges to an operator B ∈ L(E) as m → ∞, then we call B the limit operator of A with respect to the sequence h, and denote it by Ah. In this case, we will say that the sequence h leads to a limit operator of A.

From the uniqueness of the P-limit we know that a limit operator Ah is unique if it exists. If h ⊂ Z^n is a sequence tending to infinity such that Ah exists, and if g is an infinite subsequence of h, then also g leads to a limit operator of A, and Ag = Ah. The passage to a limit operator is compatible with addition, composition, passing to the norm limit and to the adjoint operator.

Proposition 3.4. Let A, B, A^{(1)}, A^{(2)}, ... be arbitrary operators in L(E, P), and let h ⊂ Z^n be a sequence tending to infinity.

a) If Ah exists, then ‖Ah‖ ≤ ‖A‖.
b) If Ah and Bh exist, then (A + B)h exists and is equal to Ah + Bh.
c) If Ah and Bh exist, then (AB)h exists and is equal to Ah Bh.
d) If A^{(m)} ⇒ A as m → ∞, and the limit operators (A^{(m)})h exist for all sufficiently large m, then Ah exists, and (A^{(m)})h ⇒ Ah as m → ∞.
e) If Ah exists and p < ∞, then (A∗)h exists and is equal to (Ah)∗.

Proof. a) – d) immediately follow from Definition 3.3 and Corollary 1.70.

e) First note that, for every k ∈ N and α ∈ Z^n, Pk∗ and Vα∗ can be identified with Pk and V−α on E∗ = ℓ^q(Z^n, X∗) with 1/p + 1/q = 1, respectively. Then the fact that

‖Pk (V−hm A∗ Vhm − (Ah)∗)‖ = ‖((V−hm A Vhm − Ah) Pk)∗‖ = ‖(V−hm A Vhm − Ah) Pk‖ → 0

as m → ∞ for every k ∈ N, together with its symmetric analogue, proves that V−hm A∗ Vhm P-converges to (Ah)∗ as m → ∞. □

Lemma 3.5. Let A ∈ L(E, P), and let h ⊂ Z^n lead to a limit operator of A. Then Ah cannot have a higher information flow than A; that is, f_{Ah}(d) ≤ f_A(d) for all d ∈ N0, where the information flow f_A(d) is defined by (1.30).

Proof. Let d ∈ N0 be arbitrary, and take two sets U, V ⊂ Z^n with dist(U, V) ≥ d. Then, from PV Ah PU = P-lim_{m→∞} PV V−hm A Vhm PU and Corollary 1.70 a), we get
≤ sup PV V−hm AVhm PU = sup V−hm Phm +V APhm +U Vhm m
m
= sup Phm +V APhm +U ≤ fA (d) m
since Vα is an isometry for all α ∈ Zn , and dist (hm + U, hm + V ) = dist (U, V ) ≥ d for all m ∈ N.
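As a small numerical aside (our own toy example, not from the text): for a diagonal operator A on ℓ²(Z) with diagonal a(k) = 2 + 1/(1 + |k|), conjugation by shifts simply shifts the diagonal, so the P-convergence of the sequence (3.5) can be watched entrywise on finite windows. Along hm = m → +∞ the limit operator is 2I. The function names and the window size below are our own choices.

```python
import numpy as np

# Diagonal of the toy operator A on l^2(Z).
def a(k):
    return 2.0 + 1.0 / (1.0 + abs(k))

def shifted_window(h, K):
    # diagonal entries of V_{-h} A V_h restricted to the window {-K, ..., K}
    return np.array([a(k + h) for k in range(-K, K + 1)])

K = 5
errs = [np.max(np.abs(shifted_window(h, K) - 2.0)) for h in (10, 100, 1000)]
print(errs)  # decreasing towards 0: the limit operator along h_m = m is 2I
```

The window stays fixed while the shift grows, which is exactly the meaning of P-convergence here: convergence on every finite window, not in the operator norm.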
3.1. Deﬁnition and Basic Properties
79
Proposition 3.6. Take an A ∈ L(E) and a sequence h ⊂ Zn that leads to a limit operator of A.

a) If A ∈ S, then also Ah ∈ S, and (Ah)′ = (A′)h =: A′h.
b) If A ∈ L(E, P), then also Ah ∈ L(E, P).
c) If A ∈ K(E, P), then Ah = 0 for any sequence h tending to infinity.
d) If A is a generalized multiplication operator, then so is Ah.
e) If A ∈ BO, then also Ah ∈ BO.
f) If A ∈ BDOp, then also Ah ∈ BDOp.

Proof. a) Repeat the proof of Proposition 3.4 e) with A′ ∈ L(E′) in place of A∗.

b) follows from {Vα : α ∈ Zn} ⊂ L(E, P) and Proposition 1.68.

c) For every k ∈ N, put Uk := {−k, . . . , k}n. From Vα Pk = Vα PUk = P_{α+Uk} Vα for all α ∈ Zn, k ∈ N and from Vα being an isometry for every α ∈ Zn, we conclude that, for a sequence h = (hm) tending to infinity and an operator A ∈ L(E), the limit operator Ah equals 0 if and only if both

‖Pk (V−hm A Vhm − 0)‖ = ‖V−hm P_{hm+Uk} A Vhm‖ = ‖P_{hm+Uk} A‖ ≤ ‖Q_{|hm|−k−1} A‖,
‖(V−hm A Vhm − 0) Pk‖ = ‖V−hm A P_{hm+Uk} Vhm‖ = ‖A P_{hm+Uk}‖ ≤ ‖A Q_{|hm|−k−1}‖

tend to zero as m → ∞ for every fixed k ∈ N. But if A ∈ K(E, P) and hm → ∞, this is clearly the case.

d) – f) follow from Lemma 3.5 and the characterizations of the respective operator classes in terms of the function fA in Subsection 1.3.7.

For the study of the invertibility at infinity of A, a single limit operator Ah in general cannot tell the whole story. That is why we collect all “samples” Ah that we can get from A.

Definition 3.7. The set of all limit operators of A ∈ L(E) is denoted by σop(A), and we refer to it as the operator spectrum of A.

The operator spectrum includes all limit operators Ah of A, regardless of the direction in which h tends to infinity. But sometimes this direction is of importance, and so we will split σop(A) into many sets – the so-called local operator spectra.

Definition 3.8. For every direction s ∈ S^{n−1}, the local operator spectrum σop_s(A) is defined as the set of all limit operators Ah with h = (hm) ⊂ Zn tending to infinity in the direction of s; that is, hm/‖hm‖2 → s.

For n = 1, we will abbreviate σop_{−1}(A) and σop_{+1}(A) by σop_−(A) and σop_+(A), respectively. The following result is certainly not surprising.
Proposition 3.9. For every operator A ∈ L(E), the identity

σop(A) = ∪_{s ∈ S^{n−1}} σop_s(A)

holds.
80
Chapter 3. Limit Operators
Proof. Clearly, every local operator spectrum is contained in the operator spectrum σop(A). For the reverse inclusion, take some Ah ∈ σop(A). The sequence hm/‖hm‖2 need not converge to a point s in S^{n−1}; but since the unit sphere S^{n−1} is compact, there is a subsequence g of h that has this property, and hence g tends to infinity in some direction s ∈ S^{n−1}. But then Ah = Ag ∈ σop_s(A).

As so often, we have to make sure that we have picked a sufficient amount of samples before we can make a reliable statement. Remember Example 3.1 and our little discussion afterwards. In order to fully characterize the invertibility of the object a ∈ B, we needed a sufficient family of unital homomorphisms; this is why we had to look at a ∈ B everywhere – well, at least in a dense subset – of [0, 1]. The same problem will occur in connection with limit operators. In order to actually tell the whole story about A being invertible at infinity or not, its operator spectrum σop(A) has to contain sufficiently many elements in the following sense.

Definition 3.10. We will say that A ∈ L(E) is a rich operator if every sequence h ⊂ Zn that tends to infinity possesses a subsequence g which leads to a limit operator of A. The set of all rich operators A ∈ L(E) is denoted by L$(E).²

As a rule, if we want to say anything about the operator A that is based on knowledge about its operator spectrum only, then A had better be rich in order to allow such a statement. In this connection we introduce the abbreviations

L$(E, P) := L(E, P) ∩ L$(E),
BO$ := BO ∩ L$(E),
BDOp$ := BDOp ∩ L$(E),
BDOpS,$ := BDOpS ∩ L$(E),

where BDOpS := BDOp if p < ∞, and BDOpS := BDO∞ ∩ S if p = ∞.

Proposition 3.11. a) L$(E, P), BDOp$ and BDOpS,$ are Banach subalgebras of L(E).
b) The algebra BO$ is dense in BDOp$.

Proof. a) Let A, B ∈ L$(E, P), and take an arbitrary sequence h = (hm) ⊂ Zn tending to infinity. Since A is rich, there is a subsequence g of h which leads to a limit operator of A. But since also B is rich, we can once more choose a subsequence f of g for which Bf exists, and by Proposition 3.4 b) and c), f leads to a limit operator of A + B and of AB.

The closedness of L$(E, P) is verified by an analogous construction: Take a sequence (A(m)) in L$(E, P) with A(m) ⇒ A as m → ∞ and an arbitrary sequence h ⊂ Zn tending to infinity. From h we choose a subsequence h(1) leading to a limit operator of A(1). From h(1) we choose a subsequence h(2) leading to a limit operator of A(2), and so on. Putting g := (h(m)_m), the diagonal sequence, and finally applying Proposition 3.4 d), we get A ∈ L$(E, P).

The claim for BDOp$ follows from the fact that BDOp is a Banach subalgebra of L(E, P), and for BDOpS,$ it suffices to recall Proposition 1.10.

b) As the intersection of a (non-closed) algebra and a Banach algebra, BO$ = BO ∩ L$(E, P) is a (non-closed) algebra. That it is dense in BDOp$ goes back to [50, Proposition 2.9] (also see [70, Theorem 2.1.18]). The proof uses details of the proof of Theorem 1.42.

We will say that A ∈ L(E) is translation or shift invariant if A commutes with every shift operator Vα with α ∈ Zn. If A ∈ L(E) is shift invariant, then the sequence (3.5) is constant, and Ah exists and is equal to the operator A itself for every sequence h ⊂ Zn that tends to infinity.

² Feel free to replace the dollar sign by any other currency of your choice.
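The last remark can be checked in a finite window. In the toy model below (our own illustration, not from the text), a band operator with entries depending only on j − k, the discrete Laplacian band, is shift invariant, so all conjugated operators V−h A Vh in (3.5) have the same matrix on every window.

```python
import numpy as np

# Entries a_{jk} = f(j - k) of a shift-invariant band operator on l^p(Z):
# here the discrete Laplacian band (our choice of example).
def entry(j, k):
    return {0: 2.0, 1: -1.0, -1: -1.0}.get(j - k, 0.0)

def window_matrix(h, K):
    # matrix of V_{-h} A V_h on the window {-K, ..., K}:
    # its (j, k) entry is a_{j+h, k+h}
    return np.array([[entry(j + h, k + h) for k in range(-K, K + 1)]
                     for j in range(-K, K + 1)])

A0 = window_matrix(0, 4)
same = all(np.array_equal(window_matrix(h, 4), A0) for h in (7, 50, -13))
print(same)  # True: the sequence (3.5) is constant, so A_h = A for every h
```

For a genuinely non-shift-invariant operator the window matrices would change with h, and richness asks whether they settle down along some subsequence of any given h.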
3.2 Limit Operators Versus Invertibility at Infinity

The aim of this section is to discuss and prove the connection between an operator’s invertibility at infinity and its operator spectrum. Clearly, invertibility at infinity is a property that only depends on an operator’s behaviour in a neighbourhood of infinity. For a rich operator, it turns out that all necessary information about this behaviour is accurately stored in the operator spectrum σop(A).
3.2.1 Some Questions Around Theorem 1

For operators A ∈ BDOpS,$, we are able to prove Theorem 1 from the introduction of this book, saying that A is invertible at infinity if and only if its operator spectrum σop(A) is uniformly invertible. This is the central result which basically justifies the study of limit operators – the ‘aorta’ of the whole business. Before we come to its proof, we will discuss its practicability by looking at the following questions.

Q1 I have an operator A ∈ L(E). How can I know whether it is in BDOpS,$ or not?

Q2 Theorem 1 tells me I have to look at the limit operators of A. How do I compute these?

Q3 Are the limit operators Ah always of reasonably “simpler” structure than the operator A itself?
We do not know an eﬃcient algorithm that answers question Q1 in general. We have certainly found some characterizations for the property A ∈ BDOp in
Subsection 1.3.6, and we have at least Proposition 1.9 for the existence of a preadjoint, but how can we know whether A is rich or not? By Proposition 3.11, BDOpS,$ is a Banach algebra. So we can at least try to decompose our operator A into some basic components from which it is assembled via addition, composition and passing to the norm limit, and find out whether or not these components are rich. If they are, then A is rich as well; if they are not, then A is probably not. The natural candidates for these basic components are shift operators and generalized multiplication operators, which are the atoms of every band-dominated operator, as we pointed out in Remark 1.38. In this connection recall that we referred to the symbols of the generalized multiplication operators arising from the decomposition of a band operator as its coefficients, and that we say that A ∈ BDOp has coefficients in a set F if it is the norm limit of a sequence of band operators with coefficients in F. In Chapter 4 we will moreover study a class of integral operators A ∈ L(Lp) which are composed of convolution operators (remember Example 1.45) and usual multiplication operators on Lp. In this case, we will clearly think of these two classes of operators as the atoms of A. In either case, the problem can be reduced to the study of generalized and usual multiplication operators, since the other components, namely shift operators and convolution operators, are shift-invariant and hence rich. The remaining question, whether or not a (generalized) multiplication operator is rich, is studied in Section 3.4.

Concerning question Q2, we make use of the facts we proved in Proposition 3.4. So computing a limit operator of A can be done by decomposing A into its atoms, computing their limit operators, and puzzling these together again as A was assembled by its components.
Again, limit operators of a shift-invariant operator are equal to the operator itself, and hence the problem can be reduced to the computation of the limit operators of (generalized) multiplication operators. Also this issue is discussed in Section 3.4.

Question Q3 is also a very tough one because it is not at all clear what it means for an operator to be “simpler” than another. We will address some aspects of the apparent simplification A → Ah in Section 3.8.
3.2.2 Proof of Theorem 1

In this subsection we will prove Theorem 1. We divide the proof into several propositions and lemmas. The reader who is not interested in this proof might simply want to skip the rest of this section and continue at Section 3.3 on page 86.

For the rest of this section we will fix the following set of continuous functions. Also remember that we associated a generalized multiplication operator M̂f with every bounded and continuous function f on Rn by (1.25).

By ϑ : R → [0, 1] we denote the continuous hat-shaped function

ϑ(x) := 3x + 2 if x ∈ [−2/3, −1/3),
        1 if x ∈ [−1/3, 1/3],
        −3x + 2 if x ∈ (1/3, 2/3],
        0 otherwise,   (3.6)

with support in [−2/3, 2/3]. Let ω : Rn → [0, 1] be the continuous function with support in [−2/3, 2/3]^n which is given by

ω(x1, . . . , xn) := √(ϑ(x1) · · · ϑ(xn)),   (x1, . . . , xn) ∈ Rn.

For every α ∈ Zn, we put

ωα(x) := ω(x − α),   x ∈ Rn.

Note that ω is chosen such that

Σ_{α∈Zn} ωα(x)² = 1,   x ∈ Rn

holds. In this sense we say that {ωα²}α∈Zn is a partition of unity. Also note that, for every x ∈ Rn, the latter sum is actually a finite sum. For every R > 0, we moreover define the function

ωα,R(x) := ωα(x/R) = ω(x/R − α),   x ∈ Rn.

Obviously, {ω²α,R}α∈Zn is a partition of unity as well for every R > 0. Finally, let ψ : Rn → [0, 1] be a continuous function supported in [−4/5, 4/5]^n which is equal to 1 on [−3/4, 3/4]^n (and hence on the whole support of ω). In analogy to ωα and ωα,R, we define ψα and ψα,R, respectively.
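The partition-of-unity property of {ωα²} can be checked numerically; in one dimension, ωα(x)² = ϑ(x − α), so the claim reduces to Σ_α ϑ(x − α) = 1. The following sketch (our own check; only the function ϑ from (3.6) is taken from the text) verifies this on a grid.

```python
import numpy as np

# The hat function theta from (3.6), vectorized.
def theta(x):
    x = np.asarray(x, dtype=float)
    return np.select(
        [(x >= -2/3) & (x < -1/3),
         (x >= -1/3) & (x <= 1/3),
         (x > 1/3) & (x <= 2/3)],
        [3 * x + 2, np.ones_like(x), -3 * x + 2],
        default=0.0,
    )

xs = np.linspace(-3, 3, 1201)
# omega_alpha(x)^2 = theta(x - alpha); only finitely many alpha contribute,
# since theta is supported in [-2/3, 2/3].
total = sum(theta(xs - alpha) for alpha in range(-5, 6))
print(float(np.max(np.abs(total - 1.0))))  # ~0 up to rounding
```

In n dimensions the sum factorizes into a product of such one-dimensional sums, which is exactly why ω is defined with the square root of the product.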
Proposition 3.12. If A ∈ BDOpS is invertible at infinity, then its operator spectrum σop(A) is uniformly invertible.

Proof. We demonstrate the proof for the case p = ∞, where we make use of the existence of the predual E′ and the preadjoint operator A′. For p < ∞, the proof is almost completely analogous, with E′ and A′ replaced by E∗ and A∗, respectively.

Let E = ℓ∞(Zn, X), and suppose A ∈ BDOpS is invertible at infinity. Then there is a B ∈ L(E) such that I = BA + T1 and I = AB + T2 with T1, T2 ∈ K(E, P). For arbitrary elements u ∈ E and v ∈ E∗, by passing to adjoints in the second equation, this yields

‖u‖E ≤ ‖B‖ ‖Au‖E + ‖T1 u‖E   and   ‖v‖E∗ ≤ ‖B∗‖ ‖A∗ v‖E∗ + ‖T2∗ v‖E∗.

Restricting v to E′ ⊂ E∗ and putting s := max(‖B‖, 1), we get that the inequalities

‖u‖E ≤ s (‖Au‖E + ‖T1 u‖E)   (3.7)
‖v‖E′ ≤ s (‖A′ v‖E′ + ‖T2∗ v‖E∗)   (3.8)

hold for all u ∈ E and v ∈ E′. Let h = (hm) ⊂ Zn be an arbitrary sequence tending to infinity and leading to a limit operator Ah of A. We will show that Ah is invertible. Since Vα is an isometry for every α ∈ Zn, we get from (3.7) that

‖u‖E = ‖Vhm u‖E ≤ s (‖A Vhm u‖E + ‖T1 Vhm u‖E) = s (‖V−hm A Vhm u‖E + ‖V−hm T1 Vhm u‖E),   u ∈ E.

Remember the functions ψα,R, defined above for all α ∈ Zn and R > 0. Replacing u in the last inequality by M̂ψ0,R u and doing all the same steps with inequality (3.8), we get that

‖M̂ψ0,R u‖E ≤ s (‖V−hm A Vhm M̂ψ0,R u‖E + ‖V−hm T1 Vhm M̂ψ0,R u‖E),
‖M̂ψ0,R v‖E′ ≤ s (‖V−hm A′ Vhm M̂ψ0,R v‖E′ + ‖V−hm T2∗ Vhm M̂ψ0,R v‖E∗)

are true for all R > 0, all u ∈ E and all v ∈ E′.

From (V−hm A Vhm − Ah) M̂ψ0,R ⇒ 0 and Proposition 3.6 b), from V−hm T1 Vhm M̂ψ0,R ⇒ 0, and from V−hm T2∗ Vhm M̂ψ0,R = (M̂ψ0,R V−hm T2 Vhm)∗ going to zero as m → ∞, we conclude, for all R > 0, all u ∈ E and all v ∈ E′,

‖M̂ψ0,R u‖E ≤ s ‖Ah M̂ψ0,R u‖E ≤ s (‖M̂ψ0,R Ah u‖E + ‖[Ah, M̂ψ0,R] u‖E),
‖M̂ψ0,R v‖E′ ≤ s ‖A′h M̂ψ0,R v‖E′ ≤ s (‖M̂ψ0,R A′h v‖E′ + ‖[A′h, M̂ψ0,R] v‖E′).

Since Ah ∈ BDOp and [A′h, M̂ψ0,R] = [M̂ψ0,R, Ah]′ holds, letting R → ∞ leads to

‖u‖E ≤ s ‖Ah u‖E,   u ∈ E,   and   ‖v‖E′ ≤ s ‖A′h v‖E′,   v ∈ E′.

So both Ah and A′h are bounded below with a lower norm not less than 1/s. From Lemma 2.35 we conclude that A′h is invertible with its inverse bounded by s. Consequently, also its adjoint Ah is invertible, and ‖(Ah)−1‖ ≤ s, where s does not depend on the sequence h.

The rest of this subsection is devoted to the proof of the sufficiency part of Theorem 1. Note that the condition A ∈ S (for p = ∞) was only needed for the necessity part, whereas the condition that A is a rich operator is only needed in the sufficiency part.
Proposition 3.13. Let A ∈ BDOp, and suppose a limit operator Ah of A is invertible. Then, for every function ξ ∈ L∞ with bounded support, there exists an m0 ∈ N such that, for every m > m0, there are operators Bm, Cm ∈ BDOp with ‖Bm‖, ‖Cm‖ < 2 ‖(Ah)−1‖ and

Bm A M̂_{Vhm ξ} = M̂_{Vhm ξ} = M̂_{Vhm ξ} A Cm.

Proof. The proof of this proposition is very straightforward. It can be found in [67, Proposition 15], [70, Proposition 2.2.4], [47, Proposition 3.3] or [50, Proposition 3.2].

Lemma 3.14. If {Aα}α∈Zn ⊂ L(E, P) is bounded, then, for every R > 0, the operator

AR := Σ_{α∈Zn} M̂ωα,R Aα M̂ψα,R

is well-defined, and ‖AR‖ ≤ 2^n sup_{α∈Zn} ‖Aα‖ holds. For p < ∞, the series is P-convergent, and for p = ∞, the operator can still be defined pointwise, via (AR u)(x) for every u ∈ E and every x ∈ Zn.

Proof. For a proof for p < ∞, see [67, Proposition 13] or [70, Proposition 2.2.2], and for p = ∞ this follows from [47, Lemma 2.6] or [50, Lemma 1.8].

Proposition 3.15. Let A be some operator in BDOp. If there is an M > 0 such that, for every R > 0, there exists a ρ(R) such that, for every α ∈ Zn with ‖α‖ > ρ(R), there are two operators Bα,R, Cα,R ∈ L(E, P) with ‖Bα,R‖, ‖Cα,R‖ < M and

Bα,R A M̂ωα,R = M̂ωα,R = M̂ωα,R A Cα,R,   (3.9)

then A is invertible at infinity.

Proof. If A ∈ BO, then it can be verified that, for a sufficiently large R > 0, the operators

BR := Σ_{‖α‖>ρ(R)} M̂ωα,R Bα,R M̂ωα,R   and   CR := Σ_{‖α‖>ρ(R)} M̂ωα,R Cα,R M̂ωα,R

are subject to

BR A = I − Σ_{‖α‖≤ρ(R)} M̂ω²α,R + T1   and   A CR = I − Σ_{‖α‖≤ρ(R)} M̂ω²α,R + T2,   (3.10)

where T1, T2 ∈ L(E, P) and ‖T1‖, ‖T2‖ < 1/2. The operators BR and CR are defined by Lemma 3.14. Then I + T1 and I + T2 are invertible, their inverses are in L(E, P) again, and multiplying the first equality in (3.10) from the left by (I + T1)−1 and the second one from the right by (I + T2)−1 shows that A is P-Fredholm, and hence invertible at infinity. Using this fact and Lemma 1.3, the result can be extended to all A ∈ BDOp. For further details of the proof, see [70, Proposition 2.2.3].
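The final step, invertibility of I + T1 and I + T2 from ‖Ti‖ < 1/2, is the classical Neumann series: (I + T)−1 = Σ_k (−T)^k with ‖(I + T)−1‖ ≤ 1/(1 − ‖T‖) ≤ 2. A finite-dimensional sanity check (the matrix and its size are our own toy choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((6, 6))
T *= 0.4 / np.linalg.norm(T, 2)          # scale so that ||T||_2 = 0.4 < 1/2

inv = np.linalg.inv(np.eye(6) + T)
# Neumann series (I + T)^{-1} = sum_k (-T)^k, truncated after 60 terms
neumann = sum(np.linalg.matrix_power(-T, k) for k in range(60))

print(np.linalg.norm(inv - neumann, 2))  # ~0: the series converges to the inverse
print(np.linalg.norm(inv, 2) <= 2.0)     # True, since 1/(1 - 0.4) <= 2
```

The same geometric-series bound is what makes the ‖Ti‖ < 1/2 threshold in the proof convenient, though any bound strictly below 1 would do.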
Now we are ready for the proof of the sufficiency part of Theorem 1, which, together with Proposition 3.12, completes the proof of Theorem 1.

Proposition 3.16. If A ∈ BDOp$ and its operator spectrum σop(A) is uniformly invertible, then A is invertible at infinity.

Proof. Let σop(A) be uniformly invertible, and suppose A were not invertible at infinity. Then, by Proposition 3.15, for every M > 0, and hence for

M := 2 sup_{Ah ∈ σop(A)} ‖(Ah)−1‖,

there is an R > 0 such that, for all ρ > 0, there exists an α ∈ Zn with ‖α‖ > ρ such that

B A M̂ωα,R ≠ M̂ωα,R

for all operators B ∈ L(E, P) with ‖B‖ < M. Consequently, there is a sequence (αk) ⊂ Zn tending to infinity such that, for no k ∈ N, there is an operator Bk ∈ L(E, P) with ‖Bk‖ < M for which

Bk A M̂_{ωαk,R} = M̂_{ωαk,R}   (3.11)

holds. Since A is rich, there is a subsequence g of (αk) such that Ag exists. But Ag ∈ σop(A) is invertible by our premise. And so Proposition 3.13 (putting ξ := ω0,R) says that, for infinitely many k ∈ N, there is an operator Bk ∈ BDOp ⊂ L(E, P) with ‖Bk‖ < M that satisfies equation (3.11). This is clearly a contradiction.
3.3 Fredholmness, Lower Norms and Pseudospectra

3.3.1 Fredholmness vs. Limit Operators

From Subsection 2.3 we know that Fredholmness is closely related to, and sometimes even coincident with, invertibility at infinity; the latter can be studied by looking at limit operators. Here is a first simple result along these lines.

Proposition 3.17. Let E = ℓp(Zn, X) with p ∈ [1, ∞] and a Banach space X.

a) If 1 < p < ∞ and A ∈ BDOp is Fredholm, then all limit operators of A are invertible, and their inverses are uniformly bounded. Moreover,

∪_{B ∈ σop(A)} sp B ⊂ spess A.   (3.12)

b) Now suppose A ∈ BDOp$, and either A is the sum of an invertible and a locally compact operator or dim X < ∞. Then, if σop(A) is uniformly invertible, A is Fredholm.
Proof. a) From 1 < p < ∞ and Figure 4 we get that A is invertible at infinity if it is Fredholm. The uniform invertibility of σop(A) now follows from Proposition 3.12, where the condition A ∈ S is redundant since p ≠ ∞. The inclusion (3.12) follows analogously: Suppose B − λI is not invertible for λ ∈ C and some limit operator B of A. From B − λI ∈ σop(A − λI) and Proposition 3.12 we get that A − λI is not invertible at infinity and hence, by Figure 4, not Fredholm if 1 < p < ∞.

b) This is immediate from Propositions 3.16, 2.15 and Figure 4.

For the reverse implication of (3.12) – as in (3.52), for example – we need that the elementwise invertibility of σop(A) implies its uniform invertibility. For details about this, see Section 3.9, in particular Subsection 3.9.4.

For the remainder of this section, suppose n = 1, 1 < p < ∞ and X = C. Consequently, Fredholmness coincides with invertibility at infinity, and Theorem 1 says that A ∈ BDOp$ is Fredholm if and only if its operator spectrum is uniformly invertible. Interestingly, even the Fredholm index of A can be restored from its operator spectrum σop(A)!

We abbreviate the projector onto the positive half axis and its complementary projector by P := PN and Q := I − P, respectively. Now let us suppose that A ∈ BDOp = BDOp$ (see Corollary 3.24) on E = ℓp = ℓp(Z, C). Via the evident decomposition E = im P ⊕ im Q, we can write A as the 2 × 2 operator matrix

A = ( QAQ  QAP
      PAQ  PAP ).

If A ∈ BO, then the two blocks PAQ and QAP are finite-rank operators, whence they are compact if A is band-dominated. Consequently, A = (P + Q)A(P + Q) = PAP + PAQ + QAP + QAQ is Fredholm if and only if

A − PAQ − QAP = PAP + QAQ = ( QAQ  0
                               0    PAP )   (3.13)
is Fredholm, where the two Fredholm indices coincide in this case. For the study of the operator (3.13), we put

A+ := PAP + Q   and   A− := QAQ + P,   (3.14)

which gives us that (3.13) equals the product A+A− = A−A+. From this equality it follows that (3.13), and hence A, is Fredholm if and only if A+ and A− are Fredholm, and that

ind A = ind A+ + ind A−   (3.15)

holds. Clearly,

σop(A+) = σop_+(A) ∪ {I}   and   σop(A−) = σop_−(A) ∪ {I},

whence the Fredholmness of A± is determined by the local operator spectrum σop_±(A), respectively. But also the index of A± is hidden, in an astonishingly simple way, in σop_±(A). Indeed, if we call

ind+ A := ind A+ = ind(PAP + Q)   and   ind− A := ind A− = ind(QAQ + P)

the plus- and the minus-index of the band-dominated operator A, then the following result holds.

Proposition 3.18. Let A ∈ BDOp be Fredholm. Then all operators in σop_+(A) have the same plus-index, and this number coincides with the plus-index of A. Analogously, all operators in σop_−(A) have the same minus-index, and this number coincides with the minus-index of A.
This remarkable result was derived in [66] via computations of the K-group of the C∗-algebra BDO2 for p = 2, and it was generalized to 1 < p < ∞ in [74]. It should be mentioned that Roch’s results in [74] easily extend to operators in BDO1 and BDO∞ that belong to the Wiener algebra W.

Corollary 3.19. Let A ∈ BDOp be Fredholm. Then, for any two limit operators B ∈ σop_+(A) and C ∈ σop_−(A) of A, the identities

ind+ A = ind+ B = −ind− B,
ind− A = ind− C = −ind+ C,
ind A = ind+ B + ind− C = ind(PBP + Q) + ind(QCQ + P)

hold.

Proof. If A is Fredholm, then B and C are invertible by Theorem 1, whence ind B and ind C are both equal to zero. The rest is immediate from Proposition 3.18 and (3.15).
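For a concrete band operator the plus-index can be evaluated numerically. It is classical (Gohberg/Krein) that for a Laurent operator with continuous, nowhere-vanishing symbol a, the compression PAP + Q is Fredholm with index −wind(a), the negative winding number of a around 0; the shift V1 has symbol a(t) = t, so ind+ V1 = −1. The helper below is our own sketch of that winding-number computation, not an algorithm from the text.

```python
import numpy as np

def winding_number(symbol, n=4096):
    # sample the symbol on the unit circle and accumulate the change of
    # argument; the result is the winding number around 0
    t = np.exp(2j * np.pi * np.arange(n) / n)
    vals = symbol(t)
    dphase = np.angle(vals[np.r_[1:n, 0]] / vals)
    return int(round(dphase.sum() / (2 * np.pi)))

print(-winding_number(lambda t: t))           # plus-index of the shift V_1: -1
print(-winding_number(lambda t: t**3 + 0.1))  # another banded symbol: -3
```

The step-wise phase differences stay small because the sampling is fine relative to the symbol's variation; for wilder symbols, n would have to grow accordingly.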
3.3.2 Pseudospectra vs. Limit Operators

Besides close connections between the essential spectrum of a band-dominated operator A and the spectra of its limit operators, as indicated in the previous subsection, one can also say something about the pseudospectrum [85] of A in terms of the pseudospectra of its limit operators. Again, put E = ℓp(Zn, X) with p ∈ [1, ∞], n ∈ N and an arbitrary Banach space X.

As an auxiliary result, we first say something about lower norms. Therefore recall Definition 2.31 and equation (1.13).

Lemma 3.20. a) Let A ∈ L(E) and B ∈ σop(A). Then ‖Bu‖ ≥ ν(A) ‖u‖ for all u ∈ E0. In particular, ν(B) ≥ ν(A) if E0 = E; that is, if p < ∞.
b) If A ∈ L(E, P) and B ∈ σop(A) is invertible, then ν(B) ≥ ν(A).

Proof. a) If B ∈ σop(A), then B = Ah for some sequence h = (hm) ⊂ Zn tending to infinity. For k ∈ N and every u ∈ E0, we have that

‖V−hm A Vhm Pk u‖ = ‖A Vhm Pk u‖ ≥ ν(A) ‖Vhm Pk u‖ = ν(A) ‖Pk u‖.

Since V−hm A Vhm P-converges to B, taking the limit as m → ∞ we get ‖B Pk u‖ ≥ ν(A) ‖Pk u‖. As u ∈ E0, we have Pk u → u as k → ∞, so the result follows.

b) For k, l ∈ N and u ∈ E,

‖Pk B−1 u‖ ≤ ‖Pk B−1 Ql u‖ + ‖Pk B−1 Pl u‖ ≤ ‖Pk B−1 Ql u‖ + ‖B−1 Pl u‖.   (3.16)

As L(E, P) is inverse closed, by Proposition 1.46, we have that B−1 ∈ L(E, P), so that ‖Pk B−1 Qr‖ → 0 and ‖Qr B−1 Pl‖ → 0 as r → ∞, the latter implying that B−1 Pl u ∈ E0. Thus from (3.16) and a) we have that

ν(A) ‖Pk B−1 u‖ ≤ ν(A) ‖Pk B−1 Ql u‖ + ‖Pl u‖.

Taking the limit first as l → ∞ and then as k → ∞, we get that ν(A) ‖B−1 u‖ ≤ ‖u‖. So we have shown that ν(A) ‖v‖ ≤ ‖Bv‖ for all v ∈ E, as required.

For all A ∈ L(E) and ε > 0, we define the ε-pseudospectrum of A by

spε A := {λ ∈ C : ‖(λI − A)−1‖ ≥ 1/ε},

where we again agree to put ‖B−1‖ := ∞ if B is not invertible. In this sense, if we moreover put 1/0 := ∞, then sp0 A = sp A for all A ∈ L(E).

Proposition 3.21. For all A ∈ BDOpS and all ε > 0, one has

∪_{B ∈ σop(A)} spε B ⊂ spε A.
Proof. From Proposition 3.12 we know that, if B ∈ σop(A) and λI − B is not invertible, then λI − A is not invertible. Now suppose λI − B is invertible. If λI − A is not invertible, then there is nothing to prove. If also λI − A is invertible, then, by Lemma 2.35 and Lemma 3.20 b) (the latter applies since B ∈ σop(A) ⊂ L(E, P), as A ∈ BDOp ⊂ L(E, P)), it follows that

‖(λI − B)−1‖ = 1/ν(λI − B) ≤ 1/ν(λI − A) = ‖(λI − A)−1‖.

For a stronger result than Proposition 3.21 in the case when L(E) is a C∗-algebra, that is, when p = 2 and X is a Hilbert space, see [63] or Section 6.3 in [70].
3.4 Limit Operators of a Multiplication Operator

As discussed in Subsection 3.2.1, the practicability of Theorem 1 and of all its applications essentially rests on our knowledge about generalized and usual multiplication operators; more precisely, on whether or not such an operator is rich, and what its limit operators look like.
3.4.1 Rich Functions

We start with generalized multiplication operators. So let X be a Banach space.

Proposition 3.22. For b = (bα)α∈Zn ∈ ℓ∞(Zn, L(X)), the generalized multiplication operator M̂b is rich if and only if the set {bα}α∈Zn is relatively compact in L(X).

Proof. This is Theorem 2.1.16 in [70].
Note that, if dim X < ∞, the boundedness of the set {bα}α∈Zn automatically implies its relative compactness in L(X).

Corollary 3.23. If dim X < ∞, then every generalized multiplication operator on E = ℓp(Zn, X) is rich.

Taking this together with Proposition 3.11, we even get the following.

Corollary 3.24. If dim X < ∞, then BDOp$ = BDOp. So every band-dominated operator on E = ℓp(Zn, X) is rich.

From this point on we will mainly focus on multiplication operators on E = Lp. Remember that, by (1.2), Lp = Lp(Rn) can be identified with ℓp(Zn, X) where X = Lp(H) and H = [0, 1)^n. Via this identification, by (1.3), the multiplication operator Mb with symbol b ∈ L∞ corresponds to the generalized multiplication operator with symbol (bα)α∈Zn, where bα is the operator of multiplication by the restriction of b to the cube α + H, shifted to H. From Proposition 3.22 we deduce the following.
Corollary 3.25. For b ∈ L∞, the multiplication operator Mb is rich if and only if the set {b|α+H}α∈Zn is relatively compact in L∞(H).

Remark 3.26. Note that we wrote b|α+H as a lazy abbreviation for the much clumsier notations (V−α b)|H or V−α P_{α+H} b in the previous corollary. Also note that the norm limit of a sequence of multiplication operators on Lp(H), if it exists, is clearly a multiplication operator on Lp(H) again.

For multiplication operators on Lp, we have the following result, which is some sort of a continuous analogue of Proposition 3.6 d). It says that the limit operator of a multiplication operator Mb is always a multiplication operator Mc, where the symbol c is the P-limit of a sequence of shifts of the symbol b.

Proposition 3.27. Let A = Mb with b ∈ L∞, and let h = (hm) ⊂ Zn tend to infinity. The following properties are equivalent, and, if they hold, then Ah = Mc.

(i) Ah exists.
(ii) There is a function c ∈ L∞ with

ess sup_{x∈U} |b(hm + x) − c(x)| → 0   as m → ∞   (3.17)

for all bounded and measurable sets U ⊂ Rn.
(iii) There is a function c ∈ L∞ such that (3.17) holds with U = [−k, k]^n for every k ∈ N.
(iv) The P-limit c := P-lim V−hm b exists for m → ∞.

Proof. Since [−k, k]^n is bounded and measurable for every k ∈ N, and since every bounded and measurable set U ⊂ Rn is contained in a set [−k, k]^n with k ∈ N, properties (ii) and (iii) are equivalent. Moreover, (iii) and (iv) are equivalent by Proposition 1.65 b).

(i)⇒(iii). If Ah exists, then from M_{V−hm b} = V−hm Mb Vhm P-converging to Ah as m → ∞, we get that

M_{Pk V−hm b} = Pk M_{V−hm b} ⇒ Pk Ah   as m → ∞

for every k ∈ N. Consequently, (Pk V−hm b)∞_{m=1} is a Cauchy sequence, and hence convergent, in im Pk = L∞([−k, k]^n) for every k ∈ N. By Proposition 1.65 b), the sequence (V−hm b)∞_{m=1} is P-convergent in L∞. Denote its P-limit by c ∈ L∞. Then, for every k ∈ N,

ess sup_{x∈[−k,k]^n} |b(hm + x) − c(x)| = ‖Pk (V−hm b − c)‖∞ → 0   as m → ∞,

which is (iii).
(iv)⇒(i). Suppose Pk V−hm b → Pk c as m → ∞ for every k ∈ N. So we have

Pk M_{V−hm b} ⇒ Pk Mc   and   M_{V−hm b} Pk ⇒ Mc Pk   as m → ∞

for every k ∈ N, and hence V−hm Mb Vhm = M_{V−hm b} P-converges to Mc as m → ∞, whence Ah exists and is equal to Mc.

The function c = P-lim V−hm b from Proposition 3.27 will henceforth be denoted by b(h). So h leads to a limit operator of Mb if and only if b(h) = P-lim V−hm b exists, in which case (Mb)h = M_{b(h)}.

Remark 3.28. a) We will sometimes write (3.17) in the form

b|hm+U → c|U   as m → ∞,   (3.18)

where the functions b|hm+U are again identified with functions on U via V−hm.

b) Since the limit operator method grew up in the discrete case, the sequences h = (hm) were naturally restricted to Zn. In the continuous case we could easily drop this restriction and pass to h = (hm) ⊂ Rn. We will however resist this temptation and stay in the integers, which will result in some technical efforts in Subsections 3.4.2, 3.4.8 and 3.4.9. But afterwards, in Subsection 3.4.13, we will state the reason for doing so.

In addition to the cube H = [0, 1)^n, we fix one more notation for a cube that we are using frequently in the following, which is Q := [−1, 1]^n. Moreover, recall that, for x = (x1, . . . , xn) ∈ Rn, we denote by [x] ∈ Zn the vector ([x1], . . . , [xn]) of the componentwise integer parts of x.

Definition 3.29. For a function f ∈ L∞, a point x ∈ Rn and a bounded and measurable set U ⊂ Rn, we define

oscU(f) := ess sup_{u,v∈U} |f(u) − f(v)|   and   oscx(f) := osc_{x+Q}(f),
the latter being referred to as the local oscillation of f at x.

Lemma 3.30. If b ∈ L∞ and h = (hm) ⊂ Zn leads to a limit operator Mc of Mb, then, for every bounded and measurable set U ⊂ Rn,

osc_{hm+U}(b) → oscU(c)   as m → ∞.

Proof. Take an arbitrary ε > 0 and a bounded and measurable U ⊂ Rn. By (3.17), there is some m0 such that, for every m > m0,

|b(hm + x) − c(x)| < ε/2

holds for almost all x ∈ U. For almost all u, v ∈ U we then have

|b(hm + u) − b(hm + v)| ≤ |b(hm + u) − c(u)| + |c(u) − c(v)| + |c(v) − b(hm + v)| ≤ ε/2 + oscU(c) + ε/2,
and consequently, osc_{hm+U}(b) ≤ oscU(c) + ε. Completely analogously, we derive oscU(c) ≤ osc_{hm+U}(b) + ε, and taking this together we see that

|osc_{hm+U}(b) − oscU(c)| < ε   for all m > m0,

which proves our claim.

Definition 3.31. We call a function b ∈ L∞ rich if Mb is a rich operator. Otherwise we call b ordinary. There are even functions b ∈ L∞ for which no sequence h → ∞ in Zn at all leads to a limit operator of Mb. Such functions will be referred to as poor functions. We denote the set of rich functions by L∞$.

Proposition 3.32. L∞$ is a Banach subalgebra of L∞.

Proof. From Proposition 3.11 a) we get that

{Mb : b ∈ L∞$} = {Mb : b ∈ L∞} ∩ L$(E, P)

is a Banach subalgebra of L(E) and hence of {Mb : b ∈ L∞}, which implies our claim.

Example 3.33. a) Let n = 1. The function

b(x) = (−1)^[x/π],   x ∈ R,

where [y] denotes the integer part of y ∈ R, is a poor function. b is 2π-periodic and has a jump at every multiple of π. Since π is irrational, the sequence (b|hm+U)m cannot be a Cauchy sequence in L∞(U) for any sequence of integers (hm) if U ⊃ 2Q. The reason for b being poor is clearly the condition that all hm in Definition 3.3 have to be integers.

b) Again let n = 1. Another example of a poor function is

b(x) = sin(x²),   x ∈ R.

One can easily show that no sequence (of integers or reals) tending to infinity leads to a limit operator of Mb. The reason is that b ∈ BC is not uniformly continuous. We will show in Proposition 3.38 that functions in BC \ BUC cannot be rich.
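The mechanism behind Example 3.33 b) is easy to see numerically: b(x) = sin(x²) oscillates faster and faster, so its oscillation over a window of fixed width stays maximal as the window moves out to infinity, and shifted copies of b can never form a Cauchy sequence in L∞ on a window. The grid and window width below are our own choices.

```python
import numpy as np

def osc(f, lo, hi, n=20001):
    # oscillation sup - inf of f over [lo, hi], approximated on a grid
    xs = np.linspace(lo, hi, n)
    v = f(xs)
    return v.max() - v.min()

b = lambda x: np.sin(x ** 2)
oscs = [osc(b, h, h + 1.0) for h in (10.0, 100.0, 1000.0)]
print(oscs)  # all close to 2, the full range of sin, no matter how far out
```

Compare this with a uniformly continuous symbol, whose oscillation over windows of shrinking width tends to zero uniformly in the window position; that is exactly the dividing line drawn by Proposition 3.38.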
3.4.2 Step Functions

Roughly speaking, step functions are piecewise constant functions on a lattice of hypercubes.

Definition 3.34. A function f ∈ L∞ is called a step function with steps of size s > 0 if there is an x0 ∈ Rn such that f is constant on all cubes

Hα := x0 + s(α + H),   α ∈ Zn.   (3.19)

The set of these functions will be denoted by Ts. Finally, put

TQ := ∪_{p,q∈N} Tp/q.
Functions in T1 can be identified with elements of ℓ∞(Zn, C) in the natural way. The function b in Example 3.33 a) is obviously in Tπ, and it has proven to be ordinary (even poor). We will see that this follows from the irrationality of π.

Proposition 3.35. Step functions with rational step size are rich: TQ ⊂ L∞$.

Proof. Pick some arbitrary p, q ∈ N. Since Tp/q ⊂ T1/q, it remains to show that all functions in T1/q are rich. So take b ∈ Ts with s = 1/q, and let x0 ∈ Rn be such that b is constant on every cube (3.19). The function b can easily be identified with a sequence (bα) ∈ ℓ∞(Zn, X) with X = C^{q^n}, where, for every α ∈ Zn, the vector bα ∈ C^{q^n} contains the function values of b at the q^n steps inside the size-1 cube x0 + α + H. By this simple construction, we can identify Mb with a generalized multiplication operator acting on ℓp(Zn, X), whose symbol at α ∈ Zn is the operator of componentwise multiplication by bα on X. Since X = C^{q^n} is finite-dimensional, our claim follows directly from Corollary 3.23.
3.4.3 Bounded and Uniformly Continuous Functions
Remember that BC and BUC denote the Banach algebras of all continuous and all uniformly continuous functions in L∞, respectively.
Proposition 3.36. Every function b ∈ BUC is rich. Moreover, if Mc is a limit operator of Mb, then c ∈ BUC as well.
Proof. Take an arbitrary b ∈ BUC and some ε > 0. Then there is a δ > 0 such that
osc_{x+δQ}(b) ≤ ε for all x ∈ Rn. (3.20)
So this is true for every δ′ ∈ Q with 0 < δ′ ≤ δ as well. Consequently, there is a step function f ∈ T_{δ′} with ‖b − f‖∞ < ε, which tells us that TQ is dense in BUC. Propositions 3.35 and 3.32 show that BUC ⊂ L∞_$.
Now suppose Mc is the limit operator of Mb with respect to some sequence h = (hm) tending to infinity. Take an arbitrary ε > 0, and choose δ > 0 such that (3.20) holds. Then we have
osc_{hm+x+δQ}(b) ≤ ε,  x ∈ Rn, m ∈ N,
and Lemma 3.30 shows that osc_{x+δQ}(c) ≤ ε for all x ∈ Rn, i.e. c ∈ BUC.
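The density step of the proof can be checked numerically. This sketch (my own illustration) approximates the uniformly continuous function sin by a step function with rational step 1/q and verifies that the sup-norm error on a sample stays below 1/q, the modulus-of-continuity bound for a Lipschitz-1 function.

```python
import math

b = math.sin                  # a BUC function (Lipschitz constant 1)
q = 100                       # rational step size 1/q

def f(x):
    # step function in T_{1/q}: constant on every interval [k/q, (k+1)/q)
    return b(math.floor(x * q) / q)

# |sin s - sin t| <= |s - t| and |x - floor(qx)/q| < 1/q, so the error is < 1/q
err = max(abs(b(x / 7.0) - f(x / 7.0)) for x in range(-7000, 7000))
```

Taking q large makes the step approximant arbitrarily close in the sup norm, which is exactly the density of TQ in BUC used above.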
So all uniformly continuous BC-functions are rich. But are there any more rich functions among the others in BC? To answer this question, we first have a look at an example of such a function in BC \ BUC.
Example 3.37. Take a continuous function f : R → [0, 1] which is only supported in the intervals m² + (−1/m, 1/m) and is subject to f(m²) = 1 for all m ∈ N. Mf has no limit operator with respect to any subsequence of h = (m²). But there are sequences like (m² + m) which lead to a limit operator of Mf. So f is not poor. However, f is just ordinary.
3.4. Limit Operators of a Multiplication Operator
95
Indeed, one can show that every function in BC \ BUC is a little bit like f and thus ordinary.
Proposition 3.38. There are no rich functions in BC \ BUC.
Proof. Take a bounded and – not uniformly – continuous function b. Since b is uniformly continuous on every compactum, the reason for b not being in BUC lies at infinity, i.e. there is an ε0 > 0 and a sequence (xm) ⊂ Rn with xm → ∞ and
osc_{xm+(1/m)Q}(b) ≥ ε0 (3.21)
for all m = 1, 2, .... Now choose the sequence h = (hm) := ([xm]) ⊂ Zn. We will see that Mb does not possess a limit operator with respect to any subsequence of h.
Suppose Mc is the limit operator of the subsequence (h_{mk}) of h = (hm). Then b is uniformly continuous on every compactum h_{mk} + 2Q, k ∈ N, and (3.18) shows that c is uniformly continuous on 2Q. But then, there is a δ ∈ (0, 1) such that
osc_{x+δQ}(c) ≤ ε0/2,  x ∈ Q, (3.22)
since x + δQ ⊂ 2Q. For every m > 1/δ and x := xm − hm ∈ Q, inequalities (3.21) and (3.22) show that b|_{hm+2Q} and c|_{2Q} differ by at least ε0/4 on a subset of 2Q with positive measure since
osc_{hm+x+δQ}(b) ≥ ε0  and  osc_{x+δQ}(c) ≤ ε0/2.
But this is a contradiction to b|_{h_{mk}+2Q} → c|_{2Q} as k → ∞.
From Propositions 3.36 and 3.38 we conclude the following characterization of the class BUC.
Proposition 3.39. A function f ∈ BC is rich if and only if f ∈ BUC. In short, BUC = BC ∩ L∞_$.
3.4.4 Intermezzo: Essential Cluster Points at Infinity
In the previous two subsections we dealt with piecewise constant and with continuous functions. The evaluation of an arbitrary function f ∈ L∞ requires some more care.
Take a function f ∈ L∞ and a measurable set U ⊂ Rn. How can we know whether or not a given number c ∈ C is attained by f in U? Since we do not distinguish between two functions in L∞ which differ on a set of measure zero only, it makes sense to accept only those values c ∈ C as actually attained by f in U, and write c ∈ f(U), for which the set
{x ∈ U : |f(x) − c| < ε}
has positive measure for every ε > 0. A more elegant way to say the same thing is that f(U) is the spectrum of f|U in the Banach algebra L∞(U). One refers to f(U) as the essential range of f in U.
For the study of multiplication operators with symbol f ∈ L∞ and their limit operators, it is sufficient to evaluate f in a neighbourhood of infinity. Therefore, if s ∈ S^{n−1}, we denote by U the set of all neighbourhoods U_R^∞ of infinity (3.3) with R > 0, and by Us we denote the set of all neighbourhoods U_{R,V}^∞ of infinity (3.4) with R > 0 and a neighbourhood V of s in S^{n−1}. Then put
f(∞) := ∩_{U∈U} f(U)  and  f(∞s) := ∩_{U∈Us} f(U). (3.23)
We call these sets the local essential ranges and their elements the essential cluster points of f at ∞ and ∞s, respectively.
Lemma 3.40. A number c ∈ C is contained in f(∞s) for a given function f ∈ L∞ and a direction s ∈ S^{n−1} if and only if there is a sequence (xm)_{m=1}^∞ ⊂ Rn tending to ∞s such that
dist(c, f(xm + Q)) → 0 as m → ∞. (3.24)
The same statement is true if we replace ∞s by ∞.
Proof. Let f ∈ L∞ and fix a direction s ∈ S^{n−1}. The proof for ∞ is analogous.
First suppose there is a sequence (xm) ⊂ Rn tending to ∞s with (3.24). For every neighbourhood U ∈ Us, there is an m0 ∈ N such that xm + Q ⊂ U for all m ≥ m0, and hence W := ∪_{m≥m0} f(xm + Q) ⊂ f(U). Since spectra are closed, the set f(U) is closed, and from (3.24) we get that c ∈ clos W ⊂ f(U).
Now suppose c ∈ f(∞s). For every m ∈ N, put Vm(s) = {x ∈ S^{n−1} : ‖x − s‖₂ < 1/m}. By (3.23), for every m ∈ N, there exists a set Wm ⊂ U^∞_{m,Vm(s)} with |Wm| > 0 and ‖f|_{Wm} − c‖∞ < 1/m. If we choose xm ∈ Wm such that |(xm + Q) ∩ Wm| > 0, then xm → ∞s and (3.24) holds since dist(c, f(xm + Q)) ≤ ‖f|_{Wm} − c‖∞ < 1/m.
Proposition 3.41. For every f ∈ L∞, the identity
f(∞) = ∪_{s∈S^{n−1}} f(∞s)
holds.
Proof. Clearly, f(∞s) ⊂ f(∞) for every s ∈ S^{n−1}. Conversely, if c ∈ f(∞), then, by Lemma 3.40, there exists a sequence (xm) ⊂ Rn tending to ∞ such that (3.24) holds. But since S^{n−1} is compact, there is a subsequence (x_{mk}) of (xm) for which x_{mk}/‖x_{mk}‖₂ converges to some s ∈ S^{n−1}. But then Lemma 3.40 again, now applied to the subsequence with x_{mk} → ∞s, shows that c ∈ f(∞s).
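A crude sample-based surrogate for these local essential ranges (an illustration under obvious discretization assumptions, not a faithful implementation of (3.23)) already shows the effect of Proposition 3.41 for f = tanh on the line: the ranges at the two directions of infinity are {1} and {−1}, and their union is the range at ∞.

```python
import math

def approx_range(f, region, tol=0.05):
    # cluster the sampled values of f on `region`; a crude stand-in for f(region)
    vals = sorted(f(x) for x in region)
    reps = []
    for v in vals:
        if not reps or v - reps[-1] > tol:
            reps.append(v)
    return reps

f = math.tanh
far_right = [100.0 + 0.01 * k for k in range(1000)]   # deep inside x > 0
far_left = [-110.0 + 0.01 * k for k in range(1000)]   # deep inside x < 0

r_plus = approx_range(f, far_right)               # ~ f(infinity_{+1}) = {1}
r_minus = approx_range(f, far_left)               # ~ f(infinity_{-1}) = {-1}
r_both = approx_range(f, far_left + far_right)    # ~ f(infinity) = {-1, 1}
```

Pushing the sample regions further out corresponds to intersecting over smaller neighbourhoods of infinity in (3.23).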
3.4.5 Slowly Oscillating Functions
Here we will study another interesting class of functions which have the property that limit operators of their multiplication operators have an especially simple structure.
Definition 3.42. We say that a function f ∈ L∞ is slowly oscillating and write f ∈ SO if osc_x(f) → 0 as x → ∞. Moreover, for every s ∈ S^{n−1}, we will say that f ∈ L∞ is slowly oscillating towards ∞s and write f ∈ SOs if osc_x(f) → 0 as x → ∞s.
Similar to Propositions 3.9 and 3.41, we can prove the following result.
Proposition 3.43. The identity
SO = ∩_{s∈S^{n−1}} SOs
holds.
Proof. If f ∈ SO and s ∈ S^{n−1}, then clearly osc_x(f) → 0 if x → ∞s, whence f ∈ SOs. Now suppose f ∉ SO. Then there exists a sequence (xm) ⊂ Rn with xm → ∞ and an ε0 > 0 with
osc_{xm}(f) ≥ ε0 (3.25)
for all m ∈ N. Since S^{n−1} is compact, we can choose a subsequence (x_{mk}) from (xm) which tends to ∞s for some s ∈ S^{n−1}. But then (3.25) shows that f ∉ SOs.
Proposition 3.44. SO and SOs are Banach subalgebras of L∞ for every s ∈ S^{n−1}.
Proof. This proof is very straightforward. Just note that
osc_x(f + g) ≤ osc_x(f) + osc_x(g),
osc_x(fg) ≤ osc_x(f) ‖g‖ + ‖f‖ osc_x(g)  and
osc_x(f) ≤ osc_x(fm) + 2 ‖f − fm‖
hold for all f, g, fm ∈ L∞ and x ∈ Rn.
Lemma 3.45. Let f be an arbitrary function in L∞.
a) For every s ∈ S^{n−1}, the following three conditions are equivalent.
(i) f ∈ SOs,
(ii) lim_{x→∞s} osc_x(f) = 0,
(iii) lim_{x→∞s} osc_{x+U}(f) = 0 for all bounded and measurable U ⊂ Rn.
b) If f is differentiable in some neighbourhood of ∞s, then the convergence of its gradient ∇f(x) → 0 as x → ∞s is sufficient for f ∈ SOs – but not necessary.
Proof. a) (i) ⇔ (ii) holds by Definition 3.42. (iii)⇒(ii) is trivial since (ii) is just
lim_{x→∞s} osc_{x+U}(f) = 0 (3.26)
with U = Q.
(ii)⇒(iii): If (ii) holds, then we have (3.26) for U = Q and all subsets of Q. If (3.26) holds for a set U, then it also holds for all sets of the form t + U, t ∈ Rn, in place of U. Finally, if it holds for U = U1 and U = U2 with |U1 ∩ U2| > 0, then it clearly holds for U = U1 ∪ U2 since osc_{x+(U1∪U2)}(f) ≤ osc_{x+U1}(f) + osc_{x+U2}(f). Taking all this together, it is clear that (3.26) holds for all bounded and measurable U ⊂ Rn.
b) Take an ε > 0. If ∇f(x) → 0 as x → ∞s, there is some neighbourhood Vε of ∞s such that ‖∇f(x)‖ < ε/(2n) for all x ∈ Vε. But from
|f(x + u) − f(x + v)| = |∇f(ξ_{x,u,v}) · (u − v)| < ε,  u, v ∈ Q, ξ_{x,u,v} ∈ x + Q,
we conclude that osc_x(f) ≤ ε if x ∈ Vε. This is (ii). Check the function f in Example 3.46 d) to see that ∇f(x) need not tend to zero as x → ∞ if f ∈ SO.
In what follows, we will often use property (iii) of Lemma 3.45 to characterize the sets SOs.
Example 3.46. a) The simplest examples of slowly oscillating functions are functions which converge at ∞. If f ∈ L∞ only converges towards ∞s for some s ∈ S^{n−1}, then at least f ∈ SOs.
b) Take n = 2 and
f(x, y) = sin(x)/(y² + 1),  x, y ∈ R.
Then f(x) converges as x → ∞s, and hence f ∈ SOs, if s ∈ S¹ \ {(−1, 0), (1, 0)}. But f ∉ SOs for s ∈ {(−1, 0), (1, 0)}.
c) Take n = 1 and f(x) = sin √x. Then f ∈ SO since f′(x) → 0 as x → ∞.
d) Take n = 1 and
f(x) = sin(x²)/x,  x ∈ R.
Then f(x) converges as x → ∞, whence f ∈ SO. But note that f′(x) does not tend to zero as x → ∞ although f ∈ SO.
Slowly oscillating functions need not even be continuous. For instance, look at
g(x) = (−1)^[x]/x,  x ∈ R,
which converges for x → ∞ but has a jump at every x ∈ Z.
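Example 3.46 d) can be checked numerically; this is a quick sketch of mine: over a far-out unit interval, the oscillation of f(x) = sin(x²)/x is tiny although the derivative f′(x) = 2cos(x²) − sin(x²)/x² is still of size about 2 there.

```python
import math

f = lambda x: math.sin(x * x) / x
df = lambda x: 2 * math.cos(x * x) - math.sin(x * x) / (x * x)   # f'

def osc(g, a, n=400):
    # approximate oscillation of g over the interval a + [0, 1]
    vals = [g(a + k / n) for k in range(n + 1)]
    return max(vals) - min(vals)

osc_far = osc(f, 1000.0)                                        # tiny
dmax_far = max(abs(df(1000.0 + k / 400)) for k in range(401))   # still large
```

So the sufficient gradient condition of Lemma 3.45 b) is indeed not necessary for slow oscillation.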
We will now study the set of limit operators of Mb when b is slowly oscillating.
Proposition 3.47. If s ∈ S^{n−1} and b ∈ SOs, then the local operator spectrum of Mb is
σs^op(Mb) = { cI : c ∈ b(∞s) }.
Proof. Pick some c ∈ b(∞s), and take an ε > 0. By Lemma 3.40, there is a sequence (xm) ⊂ Rn tending to ∞s and a sequence (cm) ⊂ C with cm ∈ b(xm + Q) and cm → c as m → ∞. Since b ∈ SOs, there is an m0 ∈ N such that, for every m > m0, we have osc_{xm+2Q}(b) < ε/2. If m0 is taken large enough that, in addition, |cm − c| < ε/2 holds, then
|b(xm + x) − c| ≤ |b(xm + x) − cm| + |cm − c| ≤ osc_{xm+2Q}(b) + |cm − c| < ε
for almost all x ∈ 2Q, i.e. ‖b|_{xm+2Q} − c‖∞ < ε if m > m0. Now define the sequence h = (hm) ⊂ Zn by hm := [xm]. Then hm + Q is contained in xm + 2Q, and consequently, ‖b|_{hm+Q} − c‖∞ < ε for all m > m0. From Proposition 3.27 we get that cI is the limit operator of Mb with respect to the sequence h.
Conversely, by Lemma 3.30, it is clear that, for all limit operators Mc of Mb towards ∞s, the local oscillation osc_x(c) has to be zero at every x ∈ Rn, i.e. c is a constant. From Proposition 3.27 and Lemma 3.40 we get that c ∈ b(∞s).
Propositions 3.9 and 3.41 allow us to glue these local results together and get the following³.
Corollary 3.48. If b ∈ SO, then σ^op(Mb) = {cI : c ∈ b(∞)}.
Proof. If b ∈ SO, then b ∈ SOs for all s ∈ S^{n−1} by Proposition 3.43. By Propositions 3.9, 3.47 and 3.41, we have
σ^op(Mb) = ∪_{s∈S^{n−1}} σs^op(Mb) = ∪_{s∈S^{n−1}} {cI : c ∈ b(∞s)} = {cI : c ∈ b(∞)}.
Proposition 3.49. Slowly oscillating functions are rich, SO ⊂ L∞_$.
Proof. Take a function b ∈ SO and an arbitrary sequence h = (hm) ⊂ Zn with hm → ∞. Now pick a sequence (cm) of complex numbers such that cm ∈ b(hm + Q) for every m ∈ N. Since (cm) ⊂ C is bounded, there is a subsequence g of h such that the corresponding subsequence of (cm) converges to some c ∈ C. Now proceed as in the proof of Proposition 3.47, showing that cI is the limit operator of Mb with respect to g.
The following proposition shows that, in a sense, even the reverse of Corollary 3.48 is true.
³ Of course, Corollary 3.48 can also be shown directly, exactly like Proposition 3.47.
Proposition 3.50. If b ∈ L∞_$ and every limit operator of Mb is a multiple of the identity operator I, then b ∈ SO.
Proof. Let the conditions of the proposition be fulfilled, and suppose that b ∉ SO. Then there exist an ε0 > 0 and a sequence of points (xm) ⊂ Rn tending to infinity such that, for every m ∈ N, we have osc_{xm+Q}(b) ≥ ε0. Now put h = (hm) ⊂ Zn with hm := [xm]. Since xm + Q ⊂ hm + 2Q, we have
osc_{hm+2Q}(b) ≥ osc_{xm+Q}(b) ≥ ε0
for all m ∈ N. Since b ∈ L∞_$, there is a subsequence g of h such that the limit operator of Mb exists with respect to g. Denote this limit operator by Mc. By Lemma 3.30, we then conclude that osc_{2Q}(c) ≥ ε0, and hence, c is certainly not constant on 2Q. So at least one limit operator of Mb is not a multiple of I, which contradicts our assumption.
These results motivate the following definition.
Definition 3.51. We denote the set of functions b ∈ L∞, for which every limit operator of Mb is a multiple of the identity, by CL; that is
CL := {b ∈ L∞ : σ^op(Mb) ⊂ {cI : c ∈ C}}.
Now we can summarize Corollary 3.48 and Propositions 3.49, 3.50 by the following result.
Proposition 3.52. A function b ∈ L∞ is slowly oscillating if and only if it is rich and every limit operator of Mb is a multiple of the identity. In short, SO = CL ∩ L∞_$.
Remark 3.53. The notion of a slowly oscillating function is easily generalized to elements of ℓ∞(Zn, L(X)) with an arbitrary Banach space X. We say that b = (bα)_{α∈Zn} is slowly oscillating if
‖b_{α+q} − b_α‖_{L(X)} → 0 as α → ∞
for all q ∈ Q ∩ Zn or, equivalently, for all q ∈ Zn. Analogously, one can study slow oscillation towards ∞s for s ∈ S^{n−1}. An obviously equivalent description of this fact is that, for all q ∈ Zn, the sequence Vq b − b vanishes at ∞ or ∞s, respectively.
It is readily checked that, in the case X = C, all results of this subsection carry over to generalized multiplication operators with a slowly oscillating symbol. As a consequence, in this case we get that, for A ∈ BDO^p_$, every limit operator of A is shift-invariant if and only if A has slowly oscillating coefficients!
Note that, if X is infinite-dimensional, then M̂b no longer needs to be rich if b is slowly oscillating. However, even then it holds that all limit operators of M̂b are generalized multiplication operators with a constant symbol.
3.4.6 Admissible Additive Perturbations
We know that neither the operator spectrum of Mb nor the property whether or not Mb is rich is affected by additive perturbations T ∈ K(E, P). Since our focus is on multiplication operators, we will have a look at the set of all functions f ∈ L∞ for which T = Mf ∈ K(E, P). We denote this set by
L∞_0 := {f ∈ L∞ : Mf ∈ K(E, P)} = {f ∈ L∞ : ‖Qm f‖ → 0 as m → ∞}.
Equivalently, L∞_0 is the set of all functions f ∈ L∞ with local essential range f(∞) = {0}; that is, lim_{x→∞} f(x) = 0. Also note that L∞_0 = (L∞)_0 in the notation of (1.13). Moreover, it is well known that (L∞_0)∗ = L1, and hence (L∞_0)∗∗ = L∞.
There is another characterization of L∞_0 in terms of limit operators.
Proposition 3.54. f ∈ L∞_0 if and only if f is rich and σ^op(Mf) = {0}.
Proof. If f ∈ L∞_0, then clearly, f is rich with all limit operators of Mf being 0. The reverse implication follows from Proposition 3.50 and Corollary 3.48.
It is immediate that L∞_0 is a closed ideal in L∞, and that the limit operators of Mb and its property of being rich only depend on the coset b + L∞_0 in L∞/L∞_0.
Definition 3.55. If F is a subset of L∞, then we refer to F + L∞_0 as the relaxation of F.
Suppose, for a set F ⊂ L∞, we know something about the limit operators of Mb for all b ∈ F. Then we still have the same knowledge (and the same limit operators) if we relax the condition b ∈ F to b ∈ F + L∞_0, which can enlarge the class of functions quite considerably – think of F = BUC, for example.
3.4.7 Slowly Oscillating and Continuous Functions
Sometimes, the class SOC := SO ∩ BC of slowly oscillating and continuous functions is of interest.
Proposition 3.56. SOC ⊂ BUC.
Proof. Although there is a direct proof (using some ε's and δ's), we will do something different: By Proposition 3.49, we have SO ⊂ L∞_$. Consequently,
SOC = SO ∩ BC ⊂ L∞_$ ∩ BC = BUC,
by Proposition 3.39.
By definition, SOC comes from SO by taking the intersection with BC. Conversely, SO can be derived from SOC by adding L∞_0.
Proposition 3.57. SOC + L∞_0 = SO.
Proof. If f ∈ SOC and g ∈ L∞_0, then both are in SO, and hence, f + g ∈ SO.
For the reverse inclusion, take an arbitrary f ∈ SO. Now define a function g as follows: In the integer points x ∈ Zn, let g(x) be some value from the essential range f(x + H), and then use an interpolation idea by setting g(x + h) a convex combination of the function values of g in the 2^n corners of the hypercube x + H:
g(x + h) := Σ_{v∈{0,1}^n} Π_{i=1}^n (1 − |h_i − v_i|) g(x + v),
where h = (h_i) ∈ H and v = (v_i) ∈ {0,1}^n. Then g is continuous, and
g(x + H) ⊂ conv f(x + 2H) for all x ∈ Zn,
by our construction. Consequently, osc_{x+H}(g) ≤ osc_{x+2H}(f) → 0 as x → ∞, whence g ∈ SOC. Finally, from ‖(f − g)|_{x+H}‖∞ ≤ osc_{x+2H}(f) → 0 as x → ∞, we get f − g ∈ L∞_0, whence f ∈ SOC + L∞_0.
Proposition 3.57 shows that SO is the relaxation of SOC. From Example 3.46 a) we know that the following class is an especially simple subset of SOC. By C(Rn) we denote the set of all continuous functions f : Rn → C for which the limits
lim_{x→∞s} f(x)
exist for all s ∈ S^{n−1}, and by C(Ṙn) we denote its subset of functions f for which this limit does not depend on s. Clearly,
C(Ṙn) ⊂ C(Rn) ⊂ SOC ⊂ BUC
are all Banach subalgebras of L∞. We will come back to SO and SOC later in Subsection 3.4.11.
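The convex-combination step in the proof of Proposition 3.57 is plain multilinear interpolation over the corners of a hypercube. Here is a minimal sketch (n = 2, with hypothetical corner values) showing that the weights sum to 1, reproduce the corners, and keep every value inside the convex hull of the corner values.

```python
from itertools import product

def interp(g_corner, h):
    """Multilinear interpolation: sum over corners v in {0,1}^n of
    prod_i (1 - |h_i - v_i|) * g(v) -- the formula from the proof."""
    total = 0.0
    for v in product((0, 1), repeat=len(h)):
        w = 1.0
        for hi, vi in zip(h, v):
            w *= 1.0 - abs(hi - vi)   # weight 1-h_i for v_i = 0, h_i for v_i = 1
        total += w * g_corner[v]
    return total

# hypothetical corner values of g on {0,1}^2
g = {(0, 0): 1.0, (1, 0): 3.0, (0, 1): 5.0, (1, 1): 7.0}
```

Since the interpolated values stay in the convex hull of the corner values, the inclusion g(x + H) ⊂ conv f(x + 2H) used in the proof follows directly from this construction.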
3.4.8 Almost Periodic Functions
This time we will start in the discrete case, and we pass to the continuous case towards the end of the subsection. Suppose X and Y are arbitrary Banach spaces. For a generalized multiplication operator M̂b on ℓp(Zn, X) with b ∈ ℓ∞(Zn, L(X)), it clearly holds that
V_{−γ} M̂b V_γ = M̂_{V_{−γ}b},  γ ∈ Zn. (3.27)
We say that b ∈ ℓ∞(Zn, Y) is periodic if there are linearly independent vectors ω1, ..., ωn ∈ Zn such that
V_{ωk} b = b,  k = 1, ..., n.
If we put M := [ω1, ..., ωn] ∈ Z^{n×n}, then S := M(H) ∩ Zn is the discrete parallelotope spanned by ω1, ..., ωn, and b is completely determined by the finitely many values bα with α ∈ S. Consequently, the set
{Vγ b}_{γ∈Zn} = {Vγ b}_{γ∈S}
is finite, and, by (3.27), so is the set {V_{−γ} M̂b V_γ}_{γ∈Zn}. Both sets have (at most) #S = |det M| elements.
A slightly weaker property than periodicity is the following.
Definition 3.58. An element b ∈ ℓ∞(Zn, Y) is called almost periodic if the set {Vγ b}_{γ∈Zn} is relatively compact in ℓ∞(Zn, Y).
This relative compactness is still a very strong property. It ensures that, if A = M̂b with an almost periodic symbol b, and h = (hm) ⊂ Zn tends to infinity, then there is a subsequence g of h for which the sequence V_{−gm} A V_{gm} converges as m → ∞ – even in the operator norm! So if b is almost periodic, then A = M̂b is rich, and the P-convergence of the sequence (3.5) is actually norm-convergence. This fact carries over to BDO^p with almost periodic coefficients.
Corollary 3.59. If A is a band-dominated operator with almost periodic coefficients, then A is rich, and the invertibility of any limit operator Ah of A already implies the invertibility of A.
Proof. That A is rich follows from the above and Proposition 3.11. If h = (hm) goes to infinity and Ah is invertible, then, by Lemma 1.2 b) and by what was said above, V_{−hm} A V_{hm} is invertible for sufficiently large m ∈ N, and hence A itself is invertible.
For ε > 0, we say that ω ∈ Zn is an ε-almost period of b ∈ ℓ∞(Zn, Y) if ‖Vω b − b‖∞ < ε, and we denote the set of all ε-almost periods of b by Ωε(b).
Proposition 3.60. b ∈ ℓ∞(Zn, Y) is almost periodic if and only if, for every ε > 0, the set Ωε(b) is relatively dense in Zn; that is, there exists a bounded set Sε ⊂ Zn such that every translate of Sε in Zn contains an ε-almost period of b.
Proof. The proof for n = 1 can be found in [13] (Theorem 1.26), for example. The result is easily carried over to arbitrary n ∈ N.
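The relative density of the ε-almost periods Ωε(b) can be probed numerically. This sketch (my own illustration) uses the almost periodic sequence b(k) = sin(2π√2 k): ω = 70 is the denominator of the continued-fraction convergent 99/70 of √2, so √2·70 is nearly an integer and ω = 70 is an ε-almost period for a small ε, while ω = 1 is not.

```python
import math

def b(k: int) -> float:
    # an almost periodic sequence: samples of a trigonometric function
    return math.sin(2 * math.pi * math.sqrt(2) * k)

def shift_defect(omega: int, window=range(-500, 500)) -> float:
    # sup over the window of |b(k + omega) - b(k)|, probing ||V_omega b - b||
    return max(abs(b(k + omega) - b(k)) for k in window)

d70 = shift_defect(70)   # sqrt(2)*70 = 98.9949... is nearly the integer 99
d1 = shift_defect(1)     # a generic shift is no almost period at all
```

Successive continued-fraction denominators of √2 (2, 5, 12, 29, 70, ...) supply ε-almost periods for smaller and smaller ε, which is how the relatively dense sets of Proposition 3.60 arise in this example.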
As a consequence of Proposition 3.60, we get the following result.
Proposition 3.61. If b ∈ ℓ∞(Zn, L(X)) is almost periodic, then
σs^op(M̂b) = σ^op(M̂b)
for all s ∈ S^{n−1}.
Proof. Let b ∈ ℓ∞(Zn, L(X)) be almost periodic, take arbitrary s, t ∈ S^{n−1}, and suppose A ∈ σs^op(M̂b). We will show that A ∈ σt^op(M̂b) as well, which, by Proposition 3.9, proves this proposition.
From A ∈ σs^op(M̂b) and Proposition 3.6 d) we get that A = M̂c and that there exists a sequence h = (hm) ⊂ Zn with hm → ∞s and V_{−hm} b → c P-convergently as m → ∞, by (3.27). Since b is almost periodic, we can pass to a subsequence of h for which this P-convergence is even convergence in ℓ∞. Suppose h is already this subsequence. Now choose a sequence g = (gm) ⊂ Zn such that gm − hm ∈ Ω_{1/m}(b) and gm → ∞t. Then, by Proposition 3.60,
‖V_{−gm} b − c‖∞ ≤ ‖V_{−gm} b − V_{−hm} b‖∞ + ‖V_{−hm} b − c‖∞
= ‖V_{−gm}(b − V_{gm−hm} b)‖∞ + ‖V_{−hm} b − c‖∞
= ‖b − V_{gm−hm} b‖∞ + ‖V_{−hm} b − c‖∞
< 1/m + ‖V_{−hm} b − c‖∞ → 0
as m → ∞, and consequently A = M̂c is the limit operator of M̂b with respect to g = (gm) as well.
Now we come to the continuous case. If we identify a multiplication operator on Lp with its discretization (1.3), that is, a generalized multiplication operator on ℓp(Zn, X) with X = Lp(H), we can carry over the notion of almost periodicity to L∞.
Definition 3.62. We say that b ∈ L∞ is Z-almost periodic if the set {Vγ b}_{γ∈Zn} is relatively compact in L∞, and it is almost periodic if the set {Vγ b}_{γ∈Rn} is relatively compact in L∞. The sets of all Z-almost and almost periodic functions in L∞ will be denoted by APZ and AP, respectively.
By what we discussed earlier, it follows that AP ⊂ APZ ⊂ L∞_$. The following example shows that AP is in fact a proper subset of APZ.
Example 3.63. If n = 1 and s ∈ R+, then
f(x) = (−1)^{[x/s]},  x ∈ R,
is a step function with step size s. It is easily seen that f ∈ APZ if and only if s ∈ Q. Using Proposition 3.35 and a little thought, it also follows that f is rich if and only if s ∈ Q. If we allow real-valued shift distances γ in {Vγ f}, then, for example, the sequence (V_{k√2 s} f)_{k∈N} does not contain a convergent subsequence, regardless of the choice of s. So the set {Vγ f}_{γ∈Rn} is not relatively compact, whence f ∉ AP.
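Example 3.63 can be probed by counting distinct integer shifts of f over a sample grid (a rough finite-window sketch of mine, n = 1): for the rational step s = 1/3 the family {Vγ f}_{γ∈Z} is finite, while for s = √2 the shifted copies keep being new.

```python
import math

def f(x: float, s: float) -> int:
    return (-1) ** math.floor(x / s)

pts = [0.05 + 0.1 * j for j in range(200)]   # sample grid on [0, 20)

def n_distinct(s: float, shifts=range(50)) -> int:
    # number of distinct restrictions of V_gamma f seen on the grid
    return len({tuple(f(x + g, s) for x in pts) for g in shifts})

rational = n_distinct(1.0 / 3.0)       # only finitely many shifted copies
irrational = n_distinct(math.sqrt(2))  # ever new copies
```

For s = 1/3 an integer shift γ multiplies f by (−1)^{3γ} = (−1)^γ, so exactly two shifted copies occur; for s = √2 the jump positions drift modulo 1 and the shifts never settle into a finite family.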
Lemma 3.64. A sequence b = (bα) ∈ ℓ∞ is almost periodic if and only if there is a function f ∈ AP with bα = f(α) for all α ∈ Zn.
Proof. For n = 1, this is shown in Theorem 1.27 of [13]. But this construction easily generalizes to n ≥ 1.
We prepare an equivalent characterization for functions in AP. Therefore, for every a ∈ Rn, put
f_a(x) := exp(i ⟨a, x⟩),  x ∈ Rn,
where ⟨·, ·⟩ refers to the standard inner product in Rn. We denote by Π the set of all complex linear combinations of functions f_a with a ∈ Rn, and we refer to elements of Π as trigonometric polynomials. Then the following holds.
Proposition 3.65. For f ∈ L∞ = L∞(Rn), the following conditions are equivalent.
(i) The set {Vτ f}_{τ∈Rn} is relatively compact in L∞.
(ii) f is continuous, and, for every ε > 0, there is a cube S ⊂ Rn such that every translate of S contains a τ ∈ Rn with ‖Vτ f − f‖∞ < ε.
(iii) There is a sequence in Π which converges to f in the norm of L∞.
Proof. For n = 1 this is a well-known result by Bohr and Bochner which can be found in any textbook on almost periodic functions, for example [13] or [39, Chapter VI]. In fact, the equivalence of (i) and (iii) holds for locally compact abelian groups in place of Rn (see [39], page 192). The equivalence of (i) and (ii) is written down for n = 1 in Chapter VI, Theorem 5.5 of [39], which literally applies to n ≥ 1 as well.
As in the discrete case, the vectors τ in (ii) are called the ε-almost periods of f. From (iii) we get that AP is the smallest Banach subalgebra of L∞ that contains all functions f_a with a ∈ Rn,
AP = clos_{L∞} Π = closalg_{L∞} {f_a : a ∈ Rn}.
From this fact and f_a ∈ BUC for all a ∈ Rn, we get that
AP ⊂ BUC. (3.28)
Finally, refer to the Banach subalgebra of L∞ that is generated by AP and C(Rn) as
SAP := closalg_{L∞} {AP, C(Rn)}.
The elements of SAP are called semi-almost periodic functions. For n = 1, if b+ ∈ C(R) with b+(+∞) = 1 and b+(−∞) = 0, and b− := 1 − b+ are fixed, then a famous result by Sarason [78] says that every function f ∈ SAP has a unique representation of the form
f = b− f− + b0 + b+ f+ (3.29)
with f+, f− ∈ AP and b0 ∈ C(R) with b0(±∞) = 0. In contrast to almost periodic functions, semi-almost periodic functions can have different almost periodic behaviour towards different directions of infinity. For dozens of very beautiful pictures of (semi-)almost periodic functions, see [7].
From (3.28) and C(Rn) ⊂ BUC, we get that SAP ⊂ BUC ⊂ L∞_$. From (3.29) and Proposition 3.61, we see that, for n = 1 and every f ∈ SAP,
σ−^op(Mf) = σ−^op(Mf−) = σ^op(Mf−)  and  σ+^op(Mf) = σ+^op(Mf+) = σ^op(Mf+)
with f+, f− ∈ AP uniquely determined by (3.29).
3.4.9 Oscillating Functions
In this subsection, we restrict ourselves to functions b ∈ L∞ on the axis; that is, n = 1. Remember that T denotes the complex unit circle, and suppose f : T → C is a bounded and nonconstant function. Then, for every ω > 0, by
b(x) := f(exp(2πi x/ω)),  x ∈ R,
we get a periodic function in L∞ with b(x + ω) = b(x) for all x ∈ R. In this case we will say that b oscillates with a constant frequency.
Furthermore, we will study functions whose frequency increases or decreases as x goes to infinity. Therefore, let g : R → R be an unbounded, odd and strictly monotonously increasing, differentiable function. Then put
b(x) := f(exp(2πi g(x))),  x ∈ R. (3.30)
The three cases under consideration are
➀ g′(x) → +∞ as x → ∞ (frequency tends to infinity),
➁ g′(x) = 1/ω for all x ∈ R (constant frequency – the periodic case),
➂ g′(x) → 0 as x → ∞ (frequency tends to zero).
Proposition 3.66. If f is continuous on T, g is subject to one of the cases ➀, ➁, ➂, and b is as in (3.30), then
• in case ➀, b is always poor;
• in case ➁, b is always rich with σ^op(Mb) = {M_{Vc b} : c ∈ C}, where
C = {0, 1/m, ..., (k−1)/m} if p = k/m ∈ Q with gcd(k, m) = 1, and C = [0, p) if p ∈ R \ Q; (3.31)
• and in case ➂, b is always rich with σ^op(Mb) = {cI : c ∈ f(T)}.
Proof. The proof is a bit lengthy but rather straightforward. Firstly, it is shown that b is in BC \ BUC, AP and SO in case ➀, ➁, ➂, respectively, and secondly, the operator spectra are derived from previous results on these function classes plus some additional arguments. The interested reader is referred to [49, Proposition 3.18] or [50, Proposition 2.25].
So if f is continuous, all answers in cases ➀, ➁ and ➂ are given – including an explicit description of the operator spectra. The situation changes a bit as soon as f has a single discontinuity, say a jump at 1 ∈ T:
• Case ➀ remains poor.
• Case ➁ is rich if and only if the period ω is rational.
• Most interesting is case ➂. Here one cannot give a statement independently from the knowledge of the function g. This is demonstrated in Example 3.67.
Example 3.67. Consider the function
g(x) = log_a(x) if x ≥ e,  x/(e ln a) if −e < x < e,  −log_a(−x) if x ≤ −e,
where a ≥ e is fixed. It is easily seen that g is unbounded, odd, strictly monotonously increasing and differentiable, and g′(x) → 0 as x → ∞. Then the function b from (3.30) jumps at x = 0 as well as at x = ±a^k for k ∈ N, and it is continuous at every other point. The local oscillation of b, apart from near the jumps, goes to zero as x goes to infinity.
• Suppose the base a is an integer. Then all jumps of b are at integer points. Now it is almost immediate that there is a step function bs with step length 1 and a function b0 ∈ L∞_0 such that b = bs + b0. As the sum of two rich functions, b is rich.
• Suppose a = √2. Then the jumps x = ±a^k of b are integers for k even and multiples of √2 for k odd. But then it is easily seen that the sequence h = ([a^{2m+1}])_{m∈N} has no subsequence leading to a limit operator of Mb, while the sequence h = (a^{2m})_{m∈N} = (2^m)_{m∈N} leads to a limit operator. So here, b is ordinary – but not poor.
• If a is a transcendental number, then the jumps of b simply don't fit together modulo 1 because otherwise we would have integers k1 and k2 such that a^{k1} − a^{k2} is an integer c, i.e. a solves the equation x^{k1} − x^{k2} − c = 0. So no subsequence of h = ([a^m])_{m∈N} leads to a limit operator, and hence, b is just ordinary.
For completeness, and this is independent from the explicit structure of g, we remark that b is never poor in case ➂ since there are always sequences leading to a limit operator, for example, h = ([g^{−1}(t + m)])_{m∈N}, where g ∘ g^{−1} = id = g^{−1} ∘ g and t ∈ (0, 1) is fixed.
It is not hard to see that the results from f having one jump can be extended to f having ﬁnitely many jumps on T.
3.4.10 Pseudoergodic Functions
If we referred to slowly oscillating functions as those functions with especially simple associated limit operators, then the functions we consider now, in some sense, show the opposite behaviour.
Fix a Banach space Y and a bounded and closed subset D of Y. In accordance with Davies [26], we say that b = (bα) ∈ ℓ∞(Zn, Y) is pseudoergodic with respect to D if, for every finite set S ⊂ Zn, every function c : S → D and every ε > 0, there is a vector γ ∈ Zn such that
sup_{α∈S} ‖b_{γ+α} − c_α‖_Y < ε. (3.32)
Large classes of random potentials have this property almost surely. A little thought reveals that there are in fact infinitely many vectors γ ∈ Zn with (3.32) since this property also holds for all continuations of c to sets S′ ⊂ Zn containing S. Also it follows immediately from this definition that, if b is pseudoergodic with respect to D, then b is pseudoergodic with respect to every closed subset of D.
Roughly speaking, (bα) is pseudoergodic with respect to D if one can find, up to a given precision ε > 0, any finite pattern of elements from D somewhere in (bα). For example, it is conjectured⁴ that the decimal expansion of π = 3.1415926535... forms a pseudoergodic sequence N → D = {0, ..., 9}.
It is easily seen that, if b is pseudoergodic with respect to D, then the operator spectrum of M̂b contains every generalized multiplication operator M̂c one can think of with a D-valued function c. But also the reverse implication is true.
Proposition 3.68. Let X be a Banach space, D be a bounded and closed subset of L(X), and suppose b ∈ ℓ∞(Zn, L(X)). Then b is pseudoergodic with respect to D if and only if
σ^op(M̂b) ⊃ {M̂c : c = (cα)_{α∈Zn} ⊂ D}. (3.33)
Proof. In analogy to Proposition 3.27, it is readily seen that, if A = M̂b with b = (bα) ∈ ℓ∞(Zn, L(X)) and h = (hm) ⊂ Zn tends to infinity, then Ah exists and is equal to M̂c with c = (cα)_{α∈Zn} ⊂ D if and only if
sup_{α∈S} ‖b_{hm+α} − cα‖ → 0 as m → ∞ (3.34)
for all finite sets S ⊂ Zn.
Let b be pseudoergodic w.r.t. D, and take an arbitrary c = (cα)_{α∈Zn} ⊂ D. For every m ∈ N, define hm ∈ Zn as the value of γ in (3.32), where we put
⁴ Go to http://pi.nersc.gov/ to check if your name is coded in the first 4 billion digits of π.
Y = L(X), S = {−m, …, m}^n and ε = 1/m. Since there are infinitely many choices for this vector γ, we can moreover suppose that |h_m| > m. Then h = (h_m) converges to infinity, and it is easily seen that (3.34) holds for every bounded S ⊂ Zn, showing that M̂_c ∈ σ^op(M̂_b).

Conversely, if M̂_c ∈ σ^op(M̂_b) for every c = (c_α)_{α∈Zn} ⊂ D, then (3.34) holds for every finite set S ⊂ Zn. But this clearly implies that b is pseudoergodic with respect to D.

Clearly, for b to be pseudoergodic with respect to a set D, it is necessary that D is contained in the closure of the set of all components of b.

Lemma 3.69. If b = (b_α) ∈ ℓ∞(Zn, Y) is pseudoergodic with respect to D ⊂ Y, then D ⊂ clos_Y {b_α}_{α∈Zn}.

Proof. Let b be pseudoergodic w.r.t. D, and let a ∈ D be arbitrary. Put S = {0} and c_0 = a to see that, for every ε > 0, there is a γ ∈ Zn such that ‖b_γ − a‖_Y < ε, by (3.32).

We will say that b = (b_α) ∈ ℓ∞(Zn, Y) is pseudoergodic if it is pseudoergodic with respect to D = clos_Y {b_α}_{α∈Zn}, which is the largest possible choice for D.

Corollary 3.70. Let X be a Banach space, and suppose b ∈ ℓ∞(Zn, L(X)). Then b is pseudoergodic if and only if

σ^op(M̂_b) = { M̂_c : c = (c_α)_{α∈Zn} ⊂ D }    (3.35)

with D = clos_Y {b_α}_{α∈Zn}.

Proof. The only thing that remains to be shown is that the reverse inclusion holds as well in (3.33). But this follows from Proposition 3.6 d) and the choice of D.

Corollary 3.71. If b ∈ ℓ∞(Zn, L(X)) is pseudoergodic, then M̂_b is among its own limit operators.
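The pattern-occurrence property behind pseudoergodicity can be made concrete in a small numerical sketch (our own illustration, not part of the formal development; all helper names are ours). An i.i.d. coin-flip sequence over D = {0, 1} is pseudoergodic with probability 1, so every finite 0/1 pattern should occur somewhere in a long enough sample:

```python
import itertools
import random

random.seed(0)
D = (0, 1)
# An i.i.d. sample over D; such random sequences are pseudoergodic almost surely.
b = [random.choice(D) for _ in range(200_000)]

k = 10
# Collect every length-k pattern that actually occurs in b ...
seen = {tuple(b[i:i + k]) for i in range(len(b) - k + 1)}
# ... and compare with the set of all 2^k = 1024 possible patterns.
all_words = set(itertools.product(D, repeat=k))
print(all_words <= seen)   # every 0/1 pattern of length 10 occurs somewhere in b
```

Here the precision ε plays no role since D is finite and discrete; for a continuum D one would ask for patterns up to precision ε instead of exact matches.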
3.4.11 Interplay with Convolution Operators

Remember that the commutator of two operators A, B ∈ L(E) is the operator AB − BA, which is why the products AB and BA are sometimes referred to as semi-commutators of A and B. Moreover, recall from Example 1.45 that convolution with an L1 function is a bounded linear operator on every space Lp with 1 ≤ p ≤ ∞.
110
Chapter 3. Limit Operators
The Case 1 < p < ∞

In [81], Shteinberg studied the classes QSC and QC of all functions b ∈ L∞ for which the semi-commutator or the commutator, respectively, of M_b with an arbitrary L1 convolution is compact in all spaces Lp with 1 < p < ∞. This question arises since

• firstly, multiplication and convolution operators are the basic building blocks for many interesting band-dominated operators on Lp, as we will see in Chapter 4, for example,

• and secondly, K(E) ⊂ K(E, P) with E = Lp if 1 < p < ∞, whence all compact operators on Lp are rich and their operator spectrum is equal to {0}, by Proposition 3.6 c).

So if b ∈ L∞ and C is an L1 convolution operator on Lp, then

σ^op(M_b C)  and  σ^op(C M_b)    (3.36)

are invariant under perturbing b by a function in QSC, and σ^op(M_b C − C M_b) is invariant under perturbing b by a function in QC. The same invariance holds for the answer to the question whether or not the operators M_b C, C M_b and M_b C − C M_b are rich, respectively.

It turns out (see [81] or [70, Theorem 3.2.2]) that a function b ∈ L∞ is in QSC if and only if, for each compactum U ⊂ Rn,

‖b|_{x+U}‖₁ → 0  as  x → ∞,    (3.37)

where ‖b|_{x+U}‖₁ denotes the L1 norm of the restriction of b to the compact set x + U. This clearly shows that

L∞₀ ⊂ QSC ⊂ QC.    (3.38)
Moreover (still citing [81] and [70]), QC is a Banach subalgebra of L∞, QSC is a closed ideal in L∞, and hence in QC, and both algebras are related by the equality

QC = QSC + SOC.    (3.39)

Actually, we can refine (3.39) a little bit.

Proposition 3.72. QC = QSC + SO.

Proof. From L∞₀ ⊂ QSC and both being linear spaces, we get QSC = QSC + L∞₀. Taking this together with (3.39) and Proposition 3.57, we conclude

QC = QSC + SOC = QSC + L∞₀ + SOC = QSC + SO.
Before we can give some more equivalent characterizations of the algebras QSC and QC, we will have a look at the limit operators of M_b for b ∈ QSC.

Proposition 3.73. If b ∈ QSC, then σ^op(M_b) is either {0} or ∅.

Proof. Suppose M_b has a limit operator (M_b)_h = M_c. We have to show that c = 0. For every compactum U ⊂ Rn, we have ‖d_m‖_∞ → 0 as m → ∞ by (3.18), where d_m := b|_{h_m+U} − c|_U. Consequently,

‖c|_U‖₁ ≤ ‖b|_{h_m+U}‖₁ + ‖d_m‖₁ ≤ ‖b|_{h_m+U}‖₁ + |U| · ‖d_m‖_∞ → 0

as m → ∞ by (3.37). So c|_U = 0 for every compact set U ⊂ Rn, i.e. c = 0.
Now we are in the position to describe the set of rich functions in QSC and QC, respectively.

Proposition 3.74.

a) QSC ∩ L∞$ = L∞₀.

b) QC ∩ L∞$ = SO.

Proof. If b ∈ QSC and b ∈ L∞$, then σ^op(M_b) = {0} by Proposition 3.73, and Proposition 3.54 tells us that b ∈ L∞₀. The reverse inclusion is trivial.

If f ∈ QC ∩ L∞$, then, by Proposition 3.72, we get f = g + h with g ∈ QSC and h ∈ SO ⊂ L∞$, by Proposition 3.49. Consequently, f ∈ L∞$ implies g = f − h ∈ L∞$, and from a), we get g ∈ L∞₀ ⊂ SO. So g, h ∈ SO, whence f = g + h ∈ SO. For the reverse inclusion, recall Propositions 3.72 and 3.49.
Having this result, we will try to find alternative descriptions of QSC and QC. We already noticed that QSC contains L∞₀. But it is strictly larger since it also contains functions b with property (3.37) and

‖b|_{x+U}‖_∞ ↛ 0  as  x → ∞.    (3.40)

We will refer to functions which are subject to (3.37) and (3.40) as noise. A typical example of a noise function is the characteristic function of the set

⋃_{m=1}^∞ [m, m + 1/m] ⊂ R

for n = 1. The set of all noise functions shall be denoted by N. If A ∪· B denotes the union of two disjoint sets A, B, then the following holds.

Lemma 3.75. QSC = L∞₀ ∪· N.

Proof. This is trivial: the set of functions subject to (3.37) decomposes into those with (3.37) and (3.40), which is N, and those with (3.37) but not (3.40), which is L∞₀.

Corollary 3.76. Noise is never rich. Even shorter, N ∩ L∞$ = ∅.
Proof 1. Use Lemma 3.75 and Proposition 3.74 a).
Proof 2. Suppose, contrarily, that b ∈ N is rich. By (3.40), take an integer sequence h = (h_m) tending to infinity with ‖b|_{h_m+Q}‖_∞ bounded away from zero. But from Proposition 3.73 we know that (M_b)_h = 0, which clearly contradicts (3.18).

For a concise description of QSC and QC, put N₀ := N ∪ {0}.

Proposition 3.77. Both QSC and QC decompose into a rich and a noisy part, by ∪· and by +:

a) QSC = L∞₀ ∪· N  and  b) QC = SO ∪· (SO + N);

c) QSC = L∞₀ + N₀  and  d) QC = SO + N₀.

Proof. a) is a repetition of Lemma 3.75. Considering b), we recall Proposition 3.72, part a) and L∞₀ ⊂ SO to get

QC = QSC + SO = (L∞₀ ∪ N) + SO = (L∞₀ + SO) ∪ (N + SO) = SO ∪ (N + SO).

Clearly, SO and SO + N are disjoint since the former functions are always rich by Proposition 3.49, and the latter are always ordinary by Proposition 3.49 and Corollary 3.76. Now c) follows from a) and N = N + L∞₀ by

QSC = L∞₀ ∪ N = L∞₀ ∪ (L∞₀ + N) = L∞₀ + (N ∪ {0}),

and d) follows from b) by

QC = (SO + N) ∪ SO = SO + (N ∪ {0}).
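The typical noise function above — the characteristic function of ⋃_{m≥1} [m, m + 1/m] — can be probed numerically. The sketch below (our own, using a simple midpoint rule; all names are ours) exhibits the two defining properties at once: the L1 mass on a shifted window dies out, as in (3.37), while the sup norm on those windows stays at 1, as in (3.40).

```python
import math

def noise(t):
    """Characteristic function of the union of the intervals [m, m + 1/m], m >= 1."""
    m = math.floor(t)
    return 1.0 if m >= 1 and t <= m + 1.0 / m else 0.0

def l1_on_window(x, width=1.0, n=10_000):
    """Midpoint-rule approximation of the L1 norm of noise restricted to [x, x + width]."""
    h = width / n
    return sum(noise(x + (i + 0.5) * h) for i in range(n)) * h

shifts = (10.0, 100.0, 1000.0, 10000.0)
norms = [l1_on_window(x) for x in shifts]
print(norms)                                    # L1 mass decays like 1/x: property (3.37)
print(max(noise(x + 0.5 / x) for x in shifts))  # sup norm on the windows stays 1: (3.40)
```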
We conclude this journey with a simple corollary.

Corollary 3.78.
a) If b ∈ QSC and (Mb )h = Mc exists, then c = 0.
b) If b ∈ QC and (Mb )h = Mc exists, then c = const. Proof. a) is a repetition of Proposition 3.73, and b) follows from Proposition 3.77 b) and Corollary 3.48.
The Cases p = 1 and p = ∞

We will see that, if p = 1 or p = ∞, one of the two operator spectra (3.36), as well as the richness of the corresponding operator, is still invariant under perturbing b by a function in QSC. Recall that, for b ∈ L∞, the multiplication operator M_b is rich if every sequence in Zn tending to infinity has an infinite subsequence h = (h_m) such that there exists a function c ∈ L∞ with

‖b|_{h_m+U} − c|_U‖_∞ → 0  as  m → ∞    (3.41)
for every compactum U ⊂ Rn, using the lazy notation as in Remark 3.26. We will show that the semi-commutator of M_b with a convolution operator is already rich if the above holds with the much weaker condition

‖b|_{h_m+U} − c|_U‖₁ → 0  as  m → ∞    (3.42)

instead of (3.41). We denote the set of all b ∈ L∞ with this property by L∞_{SC$}, and write b̃^(h) for the function c with property (3.42) for all compact sets U. Recall that L∞$ is the set of all b ∈ L∞ with the first property, and that we write b^(h) for the function c in (3.41).

Proposition 3.79. Let C be the operator of convolution on Lp by a function κ ∈ L1, let b ∈ L∞_{SC$}, and let h ⊂ Zn be a sequence tending to infinity.

a) If p = 1, then M_b C is rich and (M_b C)_h = M_{b̃^(h)} C if b̃^(h) exists.

b) If p = ∞, then C M_b is rich and (C M_b)_h = C M_{b̃^(h)} if b̃^(h) exists.

Proof. Since κ can be approximated in the norm of L1 as closely as desired by a continuous function with compact support, it is sufficient to prove the proposition for this case. Choose ℓ > 0 large enough that supp κ ⊂ [−ℓ, ℓ]^n.

We prove statement b); statement a) then follows by duality. To see that C M_b is rich, take an arbitrary sequence g ⊂ Zn tending to infinity. Since b ∈ L∞_{SC$}, there is an infinite subsequence h = (h_m) of g and a function c = b̃^(h) ∈ L∞ such that (3.42) holds for all compact sets U ⊂ Rn. Then, for every τ > 0, putting U := [−τ−ℓ, τ+ℓ]^n,

‖P_τ (V_{−h_m} C M_b V_{h_m} − C M_{b̃^(h)})‖ ≤ ess sup_{|x|<τ} ∫_{Rn} |κ(x−y)| |b(y+h_m) − b̃^(h)(y)| dy
 = ess sup_{|x|<τ} ∫_U |κ(x−y)| |b(y+h_m) − b̃^(h)(y)| dy
 ≤ ‖κ‖_∞ ‖b|_{h_m+U} − b̃^(h)|_U‖₁ → 0
an arbitrary sequence tending to infinity. Since c ∈ L∞$, there is a subsequence h = (h_m) ⊂ g and a function c̃^(h) such that ‖c|_{h_m+U} − c̃^(h)|_U‖_∞ → 0 as m → ∞ for every compact set U in Rn. But then

‖b|_{h_m+U} − c̃^(h)|_U‖₁ ≤ ‖c|_{h_m+U} − c̃^(h)|_U‖₁ + ‖d|_{h_m+U}‖₁ ≤ |U| · ‖c|_{h_m+U} − c̃^(h)|_U‖_∞ + ‖d|_{h_m+U}‖₁ → 0

as m → ∞ for every compact set U in Rn.
Corollary 3.81. Let C be the operator of convolution on Lp by a function in L1 , and let b ∈ L∞ . If p = 1, then the richness and the operator spectrum of Mb C are invariant under perturbations of b in QSC . If p = ∞, then the richness and the operator spectrum of CMb are invariant under perturbations of b in QSC .
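On the discrete level, the mechanism behind these commutator results is easy to display. For a convolution C by a finitely supported kernel κ, the commutator has matrix entries (M_b C − C M_b)_{jk} = (b(j) − b(k)) κ(j − k), so for a slowly oscillating b the entries die out along the diagonal. The sketch below is our own toy model on ℓ²(Z) with an invented kernel and sample function, not a construction from the text:

```python
import math

def b(k):
    # A slowly oscillating sample: the local oscillation of sin(sqrt|k|) dies out at infinity.
    return math.sin(math.sqrt(abs(k)))

kappa = {-1: 0.25, 0: 0.5, 1: 0.25}   # an invented, finitely supported kernel

def commutator_entry(j, k):
    """Matrix entry (M_b C - C M_b)_{jk} = (b(j) - b(k)) * kappa(j - k)."""
    return (b(j) - b(k)) * kappa.get(j - k, 0.0)

def window_max(center, width=50):
    """Largest commutator entry in a window around `center` on the main diagonal."""
    js = range(center - width, center + width)
    return max(abs(commutator_entry(j, k)) for j in js for k in js)

print([window_max(c) for c in (10**2, 10**4, 10**6)])   # entries decay towards infinity
```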
3.4.12 The Big Picture

We haven't had a picture for quite a long time. So let us look at another Venn diagram that shows the relation of most of the classes of functions studied in the previous subsections. Here is a short reminder and a legend to the picture.

L∞$ … the rich functions;
BC … the bounded and continuous functions;
BUC = BC ∩ L∞$, the bounded and uniformly continuous functions;
CL … the functions b ∈ L∞ for which σ^op(M_b) ⊂ {cI : c ∈ C};
SO = CL ∩ L∞$, the slowly oscillating functions;
SOC = SO ∩ BC, the slowly oscillating continuous functions;
QC … the functions b for which [M_b, C] ∈ K(Lp) for all p ∈ (1, ∞) and all convolutions C by an L1 function;
QSC … the functions b for which M_b C, C M_b ∈ K(Lp) for all p ∈ (1, ∞) and all convolutions C by an L1 function;
L∞₀ = QSC ∩ L∞$, the functions vanishing at infinity;
N = QSC ∩ (L∞ \ L∞$), the noise functions;
CL1 … the functions b ∈ CL for which σ_s^op(M_b) has at most one element per direction s ∈ S^{n−1};
AP … the almost periodic functions;
SAP … the semi-almost periodic functions.

Moreover, we have the equalities

C(Rn) + L∞₀ = CL1 ∩ L∞$,
C(Rn) = CL1 ∩ L∞$ ∩ BC,
SO = QC ∩ L∞$.
Venn Diagram for Some Subclasses of L∞

[Figure: a Venn diagram partitioning L∞ into L∞$ and L∞ \ L∞$, displaying the nested classes BC, BUC, SAP, AP, SOC, C(Rn), C(Rn) + L∞₀, L∞₀, N, QSC, CL1, SO, CL and QC.]
Figure 8: The relationship between the subclasses of L∞ studied in Section 3.4.
Note that, for the sake of simplicity of the picture, we have excluded the constant functions from AP and SAP for a moment.
3.4.13 Why Choose Integer Sequences?

In Subsections 3.4.2, 3.4.8 and 3.4.9 we have experienced the consequences of the restriction of h = (h_m) to integer sequences, as already discussed in Remark 3.28 b). We have seen that step functions with rational step length and periodic functions with rational period are always rich, while their irrational counterparts are ordinary or even poor in general. This seems a bit unnatural since rescaling the axes a little bit will change the rich-or-ordinary situation completely. The reason why we stick to this somewhat unnatural constraint becomes apparent when we look at what it means that an operator A ∈ L(E) is rich:

A: Every sequence (3.5) with (h_m) ⊂ Zn has a P-convergent subsequence.

B: Every sequence (3.5) with (h_m) ⊂ Rn has a P-convergent subsequence.

Clearly, B implies A but, as Example 3.63 already shows, not the other way round. By passing from Zn to Rn, we would lose some basic classes of rich functions frequently used in applications!
              non-convergent step functions       non-continuous periodic functions
              s ∈ Q          s ∉ Q                ω ∈ Q          ω ∉ Q
h ⊂ Z         all rich       ordinary/poor        all rich       all poor
h ⊂ R         all ordinary                        all ordinary
Indeed, the unnatural distinction between rational and irrational parameters would vanish. But instead of these distinct cases, everything would turn into one case of ordinary functions.
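The rational/irrational dichotomy of the table can be made tangible with a small experiment (our own, with invented helper names; the counts are grid-resolution dependent, so this is an illustration, not a proof). For a step function with step length s, the integer shifts b(· + h) run through only finitely many distinct patterns when s is rational — so every sequence of shifts has a constant, hence P-convergent, subsequence — while for irrational s the shifted patterns essentially never repeat:

```python
import math

def step(s, x):
    return math.floor(x / s) % 2               # 0/1 step function with step length s

def window(s, h, pts=64):
    # Sample the shifted function b(x + h) for x on a grid in [0, 1).
    return tuple(step(s, x / pts + h) for x in range(pts))

def distinct_windows(s, shifts=200):
    return len({window(s, h) for h in range(shifts)})

print(distinct_windows(1.0 / 3.0))     # rational step length: finitely many patterns
print(distinct_windows(math.sqrt(2)))  # irrational step length: patterns keep changing
```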
3.5 Alternative Views on the Operator Spectrum In this section we will look at the limit operator concept from one or two alternative perspectives, which is certainly interesting in its own right but is also helpful for a better understanding of the whole concept.
3.5.1 The Matrix Point of View

Before we come to something much more sophisticated in the next subsection, we first have to think about the notion of a limit operator from the matrix point of view, as it was motivated in the introduction. Therefore take some p ∈ [1, ∞], an n ∈ N and a Banach space X, and remember that with every bounded linear operator A on E = ℓp(Zn, X) we associated a matrix [A] with entries a_{αβ} ∈ L(X), indexed over Zn × Zn. If A is band-dominated, then the matrix [A] also uniquely determines the operator A. An elementary computation shows that, for every c ∈ Zn, the matrix [b_{αβ}] of the shifted operator V_{−c} A V_c is connected with [A] = [a_{αβ}] by

b_{α,β} = a_{α+c,β+c},  α, β ∈ Zn;
that is, roughly speaking, [V_{−c} A V_c] is what you see when you look at [A] from the point that is c positions further down the main diagonal.

Now we know what the matrices [V_{−h_m} A V_{h_m}] look like when [A] is given and h = (h_m) ⊂ Zn is a sequence that tends to infinity. To go one step further and get an idea what [A_h] looks like, provided the limit operator A_h exists, we have to understand P-convergence from the matrix point of view. Therefore take a bounded sequence (A_m) ⊂ L(E) and an A ∈ L(E). From Proposition 1.65 we easily derive that A_m →^P A if and only if, for all γ ∈ Zn,

P_γ A_m ⇒ P_γ A  and  A_m P_γ ⇒ A P_γ  as  m → ∞;

that is, every row and every column of A_m converges uniformly to the respective row and column of A. In this connection also recall that every row and every column of a matrix decays towards infinity if the underlying operator is in L(E, P).

Summarizing, this is how the process of passing to the limit operator for a given sequence h = (h₁, h₂, …) looks from the matrix point of view:

• Travel down the main diagonal of [A] along the sites h₁, h₂, …, where, at each site, you look at the matrix through an infinite cross-shaped window that only allows you to view a finite number k of rows and columns.

• If, during this journey down the diagonal, uniform convergence happens in your window, and if this is the case for all widths k of the cross bars, then h = (h_m) leads to a limit operator of A, and the rows and columns of [A_h] are the uniform limits of our respective outlooks.

[Figure: an infinite matrix of digits with a cross-shaped window of k = 3 rows and columns centered at the diagonal sites h_{m−2}, h_{m−1}, h_m, h_{m+1}.]
Figure 9: The outlook on a journey down the diagonal while looking through a cross-shaped window with k = 3.
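The relation b_{α,β} = a_{α+c,β+c} is easy to check in code. The following sketch (our own toy model, with matrix entries stored in a dictionary) performs the "look c positions further down the diagonal" operation and verifies it on a sample band matrix:

```python
# Model [A] as a dict {(alpha, beta): entry}. The shifted operator V_{-c} A V_c
# has matrix [b_{alpha,beta}] with b_{alpha,beta} = a_{alpha+c, beta+c}.

def shift_matrix(a, c):
    # b[(alpha, beta)] = a[(alpha + c, beta + c)], i.e. move every key by -c.
    return {(al - c, be - c): v for (al, be), v in a.items()}

A_mat = {(i, j): 1.0 / (1 + abs(i)) + (0.5 if i == j else 0.1)   # a sample band matrix
         for i in range(-5, 6) for j in range(-5, 6) if abs(i - j) <= 1}

B_mat = shift_matrix(A_mat, 3)
print(B_mat[(0, 0)] == A_mat[(3, 3)])    # the entry at the origin now shows a_{3,3}
print(B_mat[(-1, 0)] == A_mat[(2, 3)])
```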
3.5.2 Another Parametrization of the Operator Spectrum Instead of the long journey down the diagonal, waiting for convergence, one can give a much more straightforward deﬁnition for every matrix entry of a limit operator. Before we can give this deﬁnition, we need some more knowledge on commutative Banach algebras.
Maximal Ideals and Characters

Let B be a commutative Banach algebra with unit e. A function f : B → C which satisfies f(a + b) = f(a) + f(b) and f(ab) = f(a) f(b) for all a, b ∈ B and which is not the zero function on B is called a character of B. Characters are sometimes also called states, nonzero scalar-valued homomorphisms or nonzero multiplicative functionals. Moreover, an ideal M ⊂ B is a proper ideal of B if M ≠ B, and it is called a maximal ideal of B if it is a proper ideal of B that is not properly contained in another proper ideal of B. The family of all maximal ideals M of a commutative Banach algebra B is denoted by M(B).

There is a one-to-one correspondence between characters and maximal ideals of a commutative Banach algebra B: the kernel of every character is a maximal ideal, and conversely, every maximal ideal is the kernel of a uniquely determined character. Therefore, one often thinks of the elements of M(B) as characters and writes f ∈ M(B) if f is a character of B.

It is clear that every character of B maps the unit e to 1 and, by (3.1), every invertible element in B to a nonzero complex number. As a consequence, every character is automatically bounded as a linear functional, and its norm is equal to 1. Indeed, let f ∈ M(B). From f(e) = 1 we get that ‖f‖ ≥ 1. Suppose ‖f‖ > 1. Then there is an a ∈ B with ‖a‖ < 1 and f(a) = 1. But this implies f(e − a) = f(e) − f(a) = 0, although e − a is invertible in B by Lemma 1.2.

We already mentioned that from (3.1) we get f(a) ≠ 0 for all characters f ∈ M(B) if a ∈ B is invertible. In fact, Gelfand's theorem says that the reverse implication is true as well; that is, the family M(B) turns out to be sufficient.

Proposition 3.82. An element a ∈ B is invertible in B if and only if it is not contained in any maximal ideal of B; that is, equivalently, if no character of B vanishes on a.

This observation yields the finest⁵ possible localization of an element in B:

a ∈ B  ←→  {f(a)}_{f∈M(B)} ⊂ C
The invertibility of a in B is reduced to the elementwise invertibility of the family of numbers {f(a)}_{f∈M(B)} in C. The function â : M(B) → C with â(f) = f(a) is referred to as the Gelfand transform of a, and the map a ↦ â is the Gelfand transformation on B.

⁵ One clearly cannot 'decompose' an a ∈ B into something 'finer' than complex numbers.
The set M(B) is a compact Hausdorff space under the Gelfand topology – the coarsest topology on M(B) that makes all Gelfand transforms â : M(B) → C with a ∈ B continuous.

Example 3.83. In Example 3.1, where B = C[0, 1], the evaluation functionals

ϕ_x : a ↦ a(x),  a ∈ B,

with x ∈ [0, 1] are exactly the characters of B, so that M(B) can be identified with [0, 1] by identifying ϕ_x with x. After this identification, the Gelfand transform â : M(B) → C coincides with the function a : [0, 1] → C itself, and the Gelfand topology on M(B) coincides with the usual topology on [0, 1].
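As a quick sanity check of Example 3.83 (an informal sketch of ours; elements of C[0, 1] are modelled as Python callables), point evaluation is indeed additive, multiplicative and unital:

```python
def phi(x):
    """The evaluation character phi_x on C[0,1]: phi_x(a) = a(x)."""
    return lambda a: a(x)

a = lambda t: t * t + 1.0
b = lambda t: 3.0 * t - 2.0
f = phi(0.7)

print(f(lambda t: a(t) + b(t)) == f(a) + f(b))   # additive
print(f(lambda t: a(t) * b(t)) == f(a) * f(b))   # multiplicative
print(f(lambda t: 1.0))                          # f(e) = 1
```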
Characters of ℓ∞

For our purposes, we are especially interested in the characters of the commutative Banach algebra B = ℓ∞ = ℓ∞(Zn, C) with unit element e = (1)_{α∈Zn}. The set M(ℓ∞) naturally subdivides into two classes:

• characters that evaluate a ∈ ℓ∞ at finite points, and

• characters that evaluate a ∈ ℓ∞ at infinity.

To see this, introduce the subspace ℓ∞₀ of all elements a ∈ ℓ∞ that decay at infinity. A character f ∈ M(ℓ∞) is in the class M∞(ℓ∞) if f vanishes on ℓ∞₀; otherwise it is in the class M_{Zn}(ℓ∞). Equivalently, a maximal ideal M of ℓ∞ is in M∞(ℓ∞) if and only if it contains ℓ∞₀. As a closed subset of M(ℓ∞), the set M∞(ℓ∞) is compact in the Gelfand topology as well.

It is an elementary exercise to show that f ∈ M_{Zn}(ℓ∞) if and only if f is an evaluation functional; that is,

f(a) = a_β,  a = (a_α)_{α∈Zn} ∈ ℓ∞,

for some fixed β ∈ Zn. In this sense, the class M_{Zn}(ℓ∞) can be identified with the set Zn.

The much more interesting class is M∞(ℓ∞). In contrast to characters in M_{Zn}(ℓ∞), it is convenient to think of elements of M∞(ℓ∞) as evaluation functionals at infinity. General elements of ℓ∞ may behave pretty arbitrarily at infinity, which makes it a hard task to actually think of a character f ∈ M∞(ℓ∞) that evaluates a particular aspect of this behaviour with a single number. If we pass to the subalgebra ℓ∞_c of ℓ∞ that consists of all convergent sequences a = (a_α) ∈ ℓ∞ with a_α → a_∞ as α → ∞, a character of ℓ∞_c acting at infinity is certainly the functional f_lim that assigns to every element a ∈ ℓ∞_c its limit a_∞ at infinity.

Although this limit functional f_lim cannot be defined on all of ℓ∞, it is nice to see that every character f of ℓ∞ acting at infinity is just a continuation⁶ of

⁶ At this point you might also want to recall the functional in Example 1.26 c). But remember that the characters f ∈ M∞(ℓ∞), on top of that, also have to be multiplicative.
f_lim from ℓ∞_c to all of ℓ∞. Or, the other way round, any character of ℓ∞ acting at infinity, applied to a convergent sequence a ∈ ℓ∞_c, returns nothing but the obvious value f_lim(a) = a_∞. Indeed, if a ∈ ℓ∞_c, then a can be written as a = a_∞ e + c with c ∈ ℓ∞₀, whence

f(a) = a_∞ f(e) + f(c) = a_∞ · 1 + 0 = f_lim(a)  for all f ∈ M∞(ℓ∞).
The Stone–Čech Boundary

A slight generalization of the previous notions is the following. If G is an infinite discrete group, then the set

βG := M(ℓ∞(G))

is called the Stone–Čech compactification of G. As in the previous case G = Zn, the set βG subdivides into a set of characters acting at infinity and another set of characters acting at the points of G. The latter set can again be identified with G itself, and the first set is called the Stone–Čech boundary of G, denoted by

∂G := βG \ G.

So in our case, G = Zn, we have βZn = M(ℓ∞) and ∂Zn = M∞(ℓ∞). The Stone–Čech compactification βG has the so-called universal property: every map from G to a compact Hausdorff space K can be uniquely extended to a continuous map βG → K, and hence to the boundary ∂G.
Rearranging the Operator Spectrum

Remember that we motivated the introduction of limit operators by the idea of studying an invertibility problem via a family of unital homomorphisms. But although we finally proved Theorem 1, our operator spectrum has a little defect: for a fixed sequence h ⊂ Zn tending to infinity, the mapping A ↦ A_h cannot be defined on all of L(E) – not even on L(E, P) or BDOp. The best one can do is to study the restriction of A ↦ A_h to the set

{A ∈ BDOp : A_h exists},

which is a Banach algebra by Proposition 3.4, on which it is indeed a homomorphism, by the same proposition. But this is not very satisfactory.

Alternatively, instead of fixing h, we could first fix A, denote the set of all sequences h = (h_m) ⊂ Zn with |h_m| → ∞ for which A_h exists by H_A, and then study the map

A ↦ σ^op(A) = {A_h}_{h∈H_A},

hoping that this is a homomorphism from L(E, P) or BDOp to an algebra of operator-valued functions on H_A. But of course there is the same problem as
above, since H_A heavily depends on A. This defect prevents us from writing things like

σ^op(A + B) = σ^op(A) + σ^op(B)  and  σ^op(AB) = σ^op(A) σ^op(B)    (3.43)

in the pointwise sense, since σ^op(A) and σ^op(B) are defined on the, in general different, sets H_A and H_B. And even if H_A = H_B, the sets H_{A+B} and H_{AB} can be very different from those, as simple examples like B = −A show.

Summarizing this little dilemma, Theorem 1 shows that our operator spectrum σ^op(A) contains the right elements but, as the above problems unveil, with a slightly weird enumeration. The solution to these problems is a rearrangement of the operator spectrum as

σ^op(A) = {A_f}_{f∈F}

for every A ∈ BDOp in a way that the index set F is no longer dependent on A. The answer, presented in [67], is to choose F = M∞(ℓ∞) and to define A_f in an appropriate way for every f ∈ M∞(ℓ∞). Therefore we restrict ourselves to E = ℓp = ℓp(Zn) with p ∈ [1, ∞]; that is, we put X = C.

First suppose A ∈ L(E) is a band operator, so we can write A in the form (1.18),

A = Σ_{|γ|≤w} M̂_{b^(γ)} V_γ,

with w ∈ N₀ and b^(γ) ∈ ℓ∞ for every γ under consideration. Instead of taking the long journey down the diagonal of A and waiting for convergence as described in Subsection 3.5.1, we will immediately evaluate the diagonals b^(γ) ∈ ℓ∞ at infinity, using a character f ∈ M∞(ℓ∞). This gives us one number for every diagonal b^(γ) of A, which will be an entry of the respective diagonal c^(γ) of the limit operator A_f. The other entries of c^(γ) are derived in the same way by considering shifts of A and hence shifts of the diagonal b^(γ). So put

A_f := Σ_{|γ|≤w} M̂_{c^(γ)} V_γ  with  c^(γ) = ( f(V_{−α} b^(γ)) )_{α∈Zn} ∈ ℓ∞

for every γ ∈ Zn with |γ| ≤ w; that is,

[A_f] =
⎛  ⋱        ⋱             ⋱             ⋱            0  ⎞
⎜  ⋱   f(V₁b⁽⁰⁾)    f(V₁b⁽⁻¹⁾)    f(V₁b⁽⁻²⁾)         ⋱  ⎟
⎜  ⋱   f(b⁽¹⁾)      f(b⁽⁰⁾)       f(b⁽⁻¹⁾)           ⋱  ⎟
⎜  ⋱   f(V₋₁b⁽²⁾)   f(V₋₁b⁽¹⁾)    f(V₋₁b⁽⁰⁾)         ⋱  ⎟
⎝  0        ⋱             ⋱             ⋱            ⋱  ⎠

as an illustration for the case n = 1.
Remark 3.84. As a trivial consequence of the formula for c^(γ), we see that, if a diagonal b^(γ) of A converges to a value b_∞ at infinity, then the respective diagonal c^(γ) of every limit operator A_f is constantly equal to b_∞.

In Proposition 20 of [67] it is shown that, for every f ∈ M∞(ℓ∞), the map A ↦ A_f is a bounded homomorphism on all of BO, and hence it can be extended to a bounded homomorphism on BDOp. Moreover, in Theorem 5 of [67] it is shown that

{A_f}_{f∈M∞(ℓ∞)} = σ^op(A) = {A_h}_{h∈H_A}

holds for every A ∈ BDOp. With this new enumeration, we can identify σ^op(A) with a function M∞(ℓ∞) → BDOp mapping f ↦ A_f, for which (3.43) holds by the results stated above.

In [73], Roe reformulates this idea in terms of the Stone–Čech boundary ∂Zn for the C*-algebra case p = 2. Recall from Figures 1, 3 and 4 on pages 15, 46 and 57, respectively, that, in this case, K(E, P) = K(E), P-convergence equals ∗-strong convergence, and invertibility at infinity equals Fredholmness. The elegant approach of [73] even allows the substitution of Zn by any finitely generated discrete group. The plot is basically as follows. Fix a band-dominated operator A on E = ℓ². By the universal property, there is a uniquely determined ∗-strongly continuous extension of the function

Zn → BDO²,  γ ↦ V_{−γ} A V_γ,

to the Stone–Čech compactification βZn. The operator spectrum σ^op(A) is then defined as the restriction of this function to the Stone–Čech boundary ∂Zn. Consequently, σ^op(A) is an element of the C*-algebra C_s of all ∗-strongly continuous functions ∂Zn → BDO², equipped with the supremum norm. Finally, the map σ^op : A ↦ σ^op(A) is a ∗-homomorphism from BDO² to this algebra C_s, the kernel of which coincides with the set of compact operators K(E) = K(E, P). These facts immediately show that BDO²/K(E, P) is ∗-isomorphic to the image of the ∗-homomorphism σ^op in C_s, proving Theorem 1 for the case p = 2, X = C.
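Remark 3.84 has a direct numerical counterpart (a sketch of our own; the operator and its diagonals are invented sample data). Consider the band operator A = M̂_b + M̂_d V₁ on ℓ²(Z) with main diagonal b_k → 2 and first superdiagonal d_k → 0. Shifting a fixed finite window along h_m = m, the diagonals of V_{−h_m} A V_{h_m} approach the constants 2 and 0, so the only limit operator along this sequence is 2I:

```python
import math

def b(k):
    return 2.0 + 1.0 / (1 + abs(k))        # main diagonal, converges to 2

def d(k):
    return math.sin(1.0 / (1 + abs(k)))    # first superdiagonal, converges to 0

def shifted_window(h, width=3):
    """Diagonals of V_{-h} A V_h restricted to the window {-width, ..., width}."""
    main = [b(a + h) for a in range(-width, width + 1)]
    upper = [d(a + h) for a in range(-width, width + 1)]
    return main, upper

for h in (10, 1000, 100_000):
    main, upper = shifted_window(h)
    print(max(abs(v - 2.0) for v in main), max(abs(v) for v in upper))
```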
The Local Principle by Allan and Douglas

For commutative Banach algebras B, the Gelfand transform â yields a very fine localization of an element a ∈ B. A useful localization technique for noncommutative Banach algebras is the local principle by Allan and Douglas, also known as central localization.

The set of all a ∈ B that commute with every element of B is referred to as the center of B, denoted by cen B. Clearly, cen B is a commutative Banach subalgebra of B which coincides with B if and only if B is commutative. In either case, cen B contains at least the multiples of the unit; that is, Ce = {ce : c ∈ C}. In
the case cen B = Ce, we say that B has only the trivial center. So, in a sense, the center of B measures the amount of commutativity in B.

The central localization works as follows. Fix a Banach subalgebra A of cen B containing the unit e, which makes A a commutative Banach algebra. For every maximal ideal M ∈ M(A), denote by J_M := clos id_B M the smallest closed ideal in B that contains M. Moreover, abbreviate the so-called local algebra B/J_M by B_M, and write b_M for the so-called local representative b + J_M ∈ B_M of b ∈ B. The local principle of Allan [1] and Douglas [27] then says the following.

Proposition 3.85. An element b ∈ B is invertible in B if and only if, for all M ∈ M(A), the local representative b_M is invertible in its local algebra B_M.

For a commutative Banach algebra B, one can choose A = cen B = B. But then, for every M ∈ M(A) = M(B), we get that J_M = M is the kernel of a character f_M of B. So the local algebra B_M = B/J_M = B/ker f_M is isomorphic to im f_M = C. In this case, Proposition 3.85 says that b is invertible in B if and only if f_M(b) ≠ 0 for all M ∈ M(B), which is a repetition of Proposition 3.82.

In the other extreme case, a Banach algebra B with trivial center, the only remaining choice is A = Ce. But then M = {0} is the only element of M(A) and, consequently, J_M = {0} as well, which leaves us with B_M = B/{0} ≅ B and b_M = b + {0} for all b ∈ B. The statement of Proposition 3.85 is less breathtaking in that case: "b is invertible in B if and only if b is invertible in B."

Remark 3.86. a) It should be mentioned that, for the application of Proposition 3.85, the algebra A does not have to be all of the center of B. Hence, it is not even necessary to know cen B to apply central localization. However, by the choice of A, one can control the simplicity of the local algebras on a scale from C to B, as the discussion of the two extreme cases above nicely illustrates.
As a rule of thumb, the larger we choose A, the larger are the maximal ideals M and the local ideals J_M, and the finer are the local algebras B_M.

b) If one does not know the whole maximal ideal space of a commutative Banach algebra B, one can still pass to a unital subalgebra A of B for which M(A) is known, and proceed as above instead of applying Gelfand theory.
Allan–Douglas Localization vs. Limit Operators

It was first pointed out in [67] that the limit operator concept is compatible with the local principle by Allan and Douglas. We will restrict ourselves to saying a few words about that. For details see Section 4 of [67] or Section 2.3.5 of [70].

Let A be a band-dominated operator on E = ℓp(Zn, X). What we will do is study the invertibility at infinity of A, that is, invertibility in the noncommutative Banach algebra BDOp/K(E, P), by means of Allan–Douglas localization, without even thinking about our limit operator approach. But we will soon realize that what we automatically end up with, if we follow this road, are limit operators.
To use the local principle for B = BDOp/K(E, P), we have to fix a Banach subalgebra A of the center of B. For this subalgebra we will make the following choice. Put

A := { M̂_f + K(E, P) : f ∈ C(Rn) }.

It is readily seen that M̂_f, with f ∈ C(Rn), commutes with the generators of BDOp modulo K(E, P), so that A is indeed contained in the center of B. Moreover, M(A) can be identified with S^{n−1}, where we identify s ∈ S^{n−1} with the character

M̂_f + K(E, P) ↦ lim_{x→∞s} f(x)

of A. Following the local principle, we let J_s denote the smallest ideal of B containing the maximal ideal of A that is associated with s ∈ S^{n−1}, and abbreviate the local algebra B/J_s = (BDOp/K(E, P))/J_s by B_s and the local representative A + K(E, P) + J_s by A_s for every s ∈ S^{n−1}. Then Proposition 3.85 involves the invertibility of A_s in B_s for every s ∈ S^{n−1}. But regarding this, we have the following very nice coincidence, as shown in Theorem 6 of [67].

Proposition 3.87. Let A ∈ BDOp$ and s ∈ S^{n−1}. Then A_s is invertible in B_s if and only if the local operator spectrum σ_s^op(A) is uniformly invertible.

Invoking the local principle, Proposition 3.85, we then get the following.

Corollary 3.88. An operator A ∈ BDOp$ is invertible at infinity if and only if σ_s^op(A) is uniformly invertible for every s ∈ S^{n−1}.

Note that this is an even stronger result than Theorem 1 since it does not require the uniform boundedness of the suprema

sup_{B ∈ σ_s^op(A)} ‖B^{−1}‖

with respect to s ∈ S^{n−1}! Therefore, Corollary 3.88 was an important step towards getting rid of the uniform boundedness condition in Theorem 1 – as far as possible. Section 3.9 below is devoted to this question. Another step in this direction is shown in what follows.

Our choice of the central subalgebra A and the rather simple behaviour of a function f ∈ C(Rn) towards ∞s for every s ∈ S^{n−1} had the effect that every local representative A_s ∈ B_s represents the whole local operator spectrum σ_s^op(A), which can still contain quite a lot of operators. We could get a finer localization of A + K(E, P) if we found a larger subalgebra of cen B. Remember from Remark 3.53 that b ∈ ℓ∞(Zn, CI) is slowly oscillating if and only if V_α b − b ∈ ℓ∞₀ for all α ∈ Zn. But this is clearly equivalent to

[V_α, M̂_b] = (V_α M̂_b V_{−α} − M̂_b) V_α = M̂_{V_α b − b} V_α ∈ K(E, P)

for all α ∈ Zn. Let us denote the set of all slowly oscillating b ∈ ℓ∞(Zn, CI) by SO(Zn). Since M̂_b clearly commutes with every generalized multiplication operator
if b ∈ SO(Zn), we get that

\[
\mathcal{A} \;:=\; \{\, \hat{M}_b + K(E, \mathcal{P}) \;:\; b \in SO(\mathbb{Z}^n) \,\} \tag{3.44}
\]

is contained in the center of B = BDOp$/K(E, P). In fact, it follows (see [61] or Theorem 2.4.2 of [70]) that (3.44) coincides with the center of BDOp$/K(E, P). This observation gives rise to a much finer localization of this factor algebra, culminating in an analogue of Proposition 3.87 where S^{n−1} ≅ M∞(C(Rn)) is replaced by M∞(SO(Zn)). To this end, one has to redefine the local operator spectra σs^op(A) for s ∈ M∞(SO(Zn)) appropriately, using nets instead of sequences, due to the topological nature of M(SO(Zn)). As a consequence, the analogue of Corollary 3.88 also holds with S^{n−1} replaced by M∞(SO(Zn)). Note that this is another, even stronger, improvement of Theorem 1. As a corollary of this result we get Proposition 3.113, since in that case the localization is fine enough that σs^op(A) turns out to consist of a single operator only, for every s ∈ M∞(SO(Zn)).
3.6 Generalizations of the Limit Operator Concept

There are a number of possible generalizations of the limit operator concept as introduced here. Subject to possible modifications in the definition

\[
A_h \;:=\; \mathcal{P}\text{-}\lim_{m\to\infty} U_{h_m}^{-1} A\, U_{h_m}
\]

are
• the family {Ur} of invertible isometries,
• the approximate identity P, and
• the space E,
where a certain compatibility between these three items is needed, of course. We will briefly present one sensible choice of the family {Ur} for the case E = Lp, which differs from the setting presented in the rest of this book.

For every r ∈ R+, put Ur := Vϑ Φr⁻¹, where Vϑ is our usual shift operator on Lp by a fixed vector ϑ ∈ Rn, and

\[
(\Phi_r u)(x) \;=\; \rho_r\, u\!\left(\frac{x}{r}\right)
\]

blows the argument of a function u ∈ E up by the factor r ∈ R+, with
\[
\rho_r = \begin{cases} r^{-n/p}, & p < \infty,\\ 1, & p = \infty, \end{cases}
\]

being a correction factor that makes Φr an isometry on E = Lp = Lp(Rn). With this construction, the isometry Ur⁻¹ = Φr V−ϑ magnifies and shifts the local behaviour of a function u ∈ E at the fixed point ϑ ∈ Rn to the origin.
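As a quick numerical sanity check (not from the book; the test function and grid are ad hoc assumptions), one can verify the isometry property of Φr on Lp(R) for n = 1, where ρr = r^{−1/p}, by approximating the norm integrals with Riemann sums:

```python
import numpy as np

def dilation(u, x, r, p):
    """Apply (Phi_r u)(x) = rho_r * u(x/r) with rho_r = r**(-1/p) (n = 1)."""
    return r ** (-1.0 / p) * u(x / r)

# Sample function and a Riemann-sum approximation of the L^p norm.
u = lambda x: np.exp(-x**2)          # smooth, rapidly decaying test function
x = np.linspace(-50, 50, 2_000_001)  # wide grid capturing both u and Phi_r u
dx = x[1] - x[0]

for p in (1.0, 2.0, 4.0):
    for r in (0.5, 3.0):
        norm_u  = (np.sum(np.abs(u(x)) ** p) * dx) ** (1 / p)
        norm_ru = (np.sum(np.abs(dilation(u, x, r, p)) ** p) * dx) ** (1 / p)
        # substituting y = x/r shows both integrals coincide exactly
        assert abs(norm_u - norm_ru) < 1e-6
```

The substitution y = x/r turns the integral of |Φr u|^p into r^{−1} · r times the integral of |u|^p, which is exactly what the correction factor ρr is for.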
Chapter 3. Limit Operators

[Figure 10: The behaviour of u in a ball of radius 1/r around ϑ is magnified and shifted to the unit ball centered at the origin.]
Now the operator Ur⁻¹ A Ur = Φr V−ϑ A Vϑ Φr⁻¹ (shrink, shift, apply A, shift back, blow up) studies the behaviour of A in a neighbourhood of the point ϑ, which is of interest, for example, if a lot of oscillation is going on at ϑ. Limit operators can be defined here by letting the zoom ratio r go to infinity in a prescribed way r1, r2, . . .. For an application of limit operators in this zooming context see [6], where the local behaviour of a singular integral operator with slowly oscillating coefficients at the endpoint ϑ of a Carleson curve in the plane is studied. Of course, in this connection one has to replace our approximate identity P by another one which is more appropriate for this process of zooming into ϑ instead of shifting towards infinity.
3.7 Examples

It is time to look at some important examples of band-dominated operators and their operator spectra.
3.7.1 Characteristic Functions of Half Spaces

Let ⟨·, ·⟩ refer to the standard inner product in Rn, fix a nonzero vector a ∈ Zn, and denote by

\[
U \;=\; \{x \in \mathbb{R}^n : \langle a, x \rangle \ge 0\} \tag{3.45}
\]

the half space in Rn whose boundary ∂U passes through the origin and is orthogonal to the vector a, and which contains a. Then the multiplication
operator PU = MχU is a band-dominated operator on E = Lp for every p ∈ [1, ∞]. Recall from basic geometry that ⟨a, x⟩ = ‖a‖₂ dist₂(x, ∂U) for every x ∈ Rn.

Proposition 3.89. The operator PU is rich, and its operator spectrum consists of
(i) the identity operator I,
(ii) the zero operator 0, and
(iii) all shifts Pc+U of PU with c ∈ Zn.

Proof. Put A = PU and take an arbitrary sequence in Zn tending to infinity. As in the proof of Proposition 3.9, choose a subsequence h = (hm) with hm → ∞s for some direction s ∈ S^{n−1}. Depending on ⟨a, s⟩, we have the following three cases.
(i) If ⟨a, s⟩ > 0, then ⟨a, hm⟩ → +∞, which shows that Ah = I.
(ii) If ⟨a, s⟩ < 0, then ⟨a, hm⟩ → −∞, which shows that Ah = 0.
(iii) If ⟨a, s⟩ = 0, then ⟨a, hm⟩ ∈ Z either remains bounded for all m or not. In the first case, there is a subsequence g = (gm) of h such that (⟨a, gm⟩) ⊂ Z is a constant sequence. But then V−gm AVgm = P−gm+U is constant since c + U = d + U if and only if c − d ∈ ∂U; that is, ⟨a, c⟩ = ⟨a, d⟩. Consequently, Ag = Pc+U with some c ∈ Zn. In the second case, we can choose a subsequence g = (gm) of h such that ⟨a, gm⟩ tends to plus or minus infinity, which brings us to either (i) or (ii).
Conversely, it is easily seen that every operator of the form Pc+U with c ∈ Zn is in the operator spectrum of A, by choosing hm = mb − c for example, with a vector b ∈ ∂U ∩ Zn.

As a corollary we get that PU = PU1 · · · PUk is rich if U = U1 ∩ · · · ∩ Uk is a polyhedron with half spaces U1, . . . , Uk ⊂ Rn. We will briefly discuss the simple case k = 2 in R2. To this end, let U1 and U2 denote two half planes in R2 of the form (3.45) such that ∂U1 ≠ ∂U2, and put U = U1 ∩ U2.

Corollary 3.90. The operator PU is rich, and its operator spectrum consists of
(i) the identity operator I,
(ii) the zero operator 0,
(iii) all shifts Pc+U1 of PU1 with c ∈ Zn, and
(iv) all shifts Pc+U2 of PU2 with c ∈ Zn.

Proof.
From A := PU = PU1 PU2 and Propositions 3.11 a) and 3.4 c) we get that A is rich and that every limit operator of A is the product of two limit operators (towards the same direction) of PU1 and PU2 , which covers the cases (i)–(iv) by Proposition 3.89. Conversely, as in the proof of Proposition 3.89, for every operator (i)–(iv), we can easily ﬁnd a sequence h ⊂ Zn leading to the corresponding limit operator of A.
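The three cases in the proof of Proposition 3.89 can be illustrated numerically in the scalar case. In the sketch below (function names are ours, and evaluating at one large m stands in for the actual limit, so this is an illustration, not a proof), we compute the pointwise limit of the shifted symbols χ_{−hm+U}(x) = χ_U(x + hm) on a finite window of Z²:

```python
import numpy as np

def chi_U(x, a):
    """Characteristic function of the half space U = {x : <a, x> >= 0}."""
    return (x @ a >= 0).astype(int)

def shifted_symbol_limit(a, h, window, m_max=200):
    """Pointwise limit of chi_U(. + h_m) on a finite window, approximated
    by evaluating at one large m (a sketch of the limit-operator symbol)."""
    hm = h(m_max)
    return chi_U(window + hm, np.asarray(a))

a = np.array([1, 0])
window = np.array([(i, j) for i in range(-3, 4) for j in range(-3, 4)])

# (i) <a, s> > 0: h_m = (m, 0)  -> the limit operator is the identity (symbol 1)
assert (shifted_symbol_limit(a, lambda m: np.array([m, 0]), window) == 1).all()
# (ii) <a, s> < 0: h_m = (-m, 0) -> the limit operator is 0 (symbol 0)
assert (shifted_symbol_limit(a, lambda m: np.array([-m, 0]), window) == 0).all()
# (iii) <a, s> = 0: h_m = (0, m) -> the limit operator is P_U itself (c = 0)
assert (shifted_symbol_limit(a, lambda m: np.array([0, m]), window)
        == chi_U(window, a)).all()
```

The three assertions mirror cases (i)–(iii) of the proposition for the direction vectors s = (1, 0), (−1, 0) and (0, 1).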
We will later apply this result to the quarter planes U = U1 ∩ U2 and V = V1 ∩ V2 with U1 = {(x, y) : x ≥ 0}, U2 = {(x, y) : y ≥ 0}, V1 = {(x, y) : y ≥ x} and V2 = {(x, y) : y ≤ −x}.
3.7.2 Wiener-Hopf and Toeplitz Operators

Let A be a shift invariant operator on E = Lp(R2), and put U = U1 ∩ U2 with two half spaces U1 and U2 of the form (3.45). We will study the compression PU APU of A. At one point, we extend the operator PU APU by the complementary projector QU = I − PU in order to give it a chance to be invertible at infinity.

Lemma 3.91. Let A ∈ BDOpS be shift invariant. Then the following holds.
a) A is invertible at infinity if and only if A is invertible.
b) If PU APU + QU is invertible at infinity, then A is invertible.
c) ‖PU APU‖ = ‖A‖.

Proof. a) The “if” part is trivial (see Remark 2.5), and the “only if” part follows from Proposition 3.12 and A ∈ σ^op(A) = {A}.
b) This also follows from Proposition 3.12 and A ∈ σ^op(PU APU + QU).
c) Clearly, ‖PU APU‖ ≤ ‖PU‖ ‖A‖ ‖PU‖ = ‖A‖. On the other hand, we have ‖A‖ ≤ ‖PU APU‖ by Proposition 3.4 a) since A ∈ σ^op(PU APU).

Since A is band-dominated and shift invariant, we get that PU APU is band-dominated and rich, and its operator spectrum consists of A, 0 and compressions of A to all integer shifts of U1 and U2, by Corollary 3.90.
[Figure 11: The Wiener-Hopf operator PU APU is extended to the whole space by adding the complementary projector QU of PU. In this simple example, U is the intersection of the upper and the right half plane U1 and U2 in R2.]
Proposition 3.92. The operator PU APU + QU is invertible at inﬁnity if and only if PU1 APU1 and PU2 APU2 are invertible on Lp (U1 ) and Lp (U2 ), respectively.
Proof. From Theorem 1 we know that PU APU + QU is invertible at infinity if and only if its limit operators

\[
A, \quad I, \quad P_{c+U_1} A P_{c+U_1} + Q_{c+U_1} \quad\text{and}\quad P_{c+U_2} A P_{c+U_2} + Q_{c+U_2} \tag{3.46}
\]

are invertible for all c ∈ Z2 and their inverses are uniformly bounded. This clearly proves the “only if” part.
For the “if” part, suppose the two half plane compressions of A to U1 and U2 are invertible on Lp(U1) and Lp(U2), respectively. Then their extensions

\[
A_1 = P_{U_1} A P_{U_1} + Q_{U_1} \quad\text{and}\quad A_2 = P_{U_2} A P_{U_2} + Q_{U_2}
\]

and all shifts of these are invertible on Lp. But since A is shift invariant, the shifts Vc A1 V−c and Vc A2 V−c coincide with the operators (3.46) above. Finally, the invertibility of A follows from Lemma 3.91 b), and the inverses of all these limit operators of PU APU + QU are bounded by max{1, ‖A⁻¹‖, ‖A1⁻¹‖, ‖A2⁻¹‖}.

We will now look at some shift invariant operators A on Lp and on ℓp. If A is a convolution operator⁷ on Lp(R2), then the compression PU APU is referred to as a two-dimensional Wiener-Hopf operator [87]. If the convolution kernel of A is in L1(R2), then A is band-dominated (even A ∈ W), and we have Proposition 3.92 to study whether or not this Wiener-Hopf operator is invertible at infinity.

The discrete analogue of a convolution operator on Lp is a band-dominated operator on ℓp with constant coefficients. Such operators are also known as Laurent operators⁸. The compression of a Laurent operator A to a proper subset U of Zn is typically referred to as a discrete Wiener-Hopf operator or a Toeplitz operator. Note that the results of Subsection 3.7.1 literally translate to the discrete case ℓp if we replace the half spaces (3.45) by their intersections with Zn. Consequently, Proposition 3.92 also applies to the discrete case, with half spaces replaced by discrete half spaces.

To be able to draw a matrix, we pass to n = 1, and hence to one-dimensional Laurent operators as in Example 1.39. Put U = N0 and T(a) = PU L(a)PU, where a ∈ L∞(T) and L(a) is defined as in Example 1.39. The one-dimensional Toeplitz operator T(a) is typically studied on ℓp(N0), but we will embed it into ℓp(Z) by studying T(a) + QN0 in order to use our techniques.

Since 0 and I are the only limit operators of PU, and since L(a) is shift invariant, T(a) + QU has only L(a) and I as its limit operators. This is nicely

⁷ This is the analogue of Example 1.45 in two dimensions. Also see Subsection 4.2.1.
⁸ Remember Example 1.39 for the one-dimensional case n = 1.
confirmed by a look at the following matrix.

\[
[T(a) + Q_U] \;=\;
\begin{pmatrix}
\ddots & & & & & & \\
 & 1 & & & & 0 & \\
 & & 1 & & & & \\
 & & & 1 & & & \\
 & & & a_0 & a_{-1} & a_{-2} & \\
 & & & a_1 & a_0 & a_{-1} & a_{-2} \\
 & 0 & & a_2 & a_1 & a_0 & a_{-1} \\
 & & & \vdots & a_2 & a_1 & a_0 & \ddots \\
 & & & & \vdots & \ddots & \ddots & \ddots
\end{pmatrix} \tag{3.47}
\]
If we travel towards +∞, that is, down the diagonal, we arrive at L(a); travelling towards −∞, that is, up the diagonal, we arrive at I. From Theorem 1 we consequently get that T(a) + QU is invertible at infinity if and only if L(a) is invertible.
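The "travelling along the diagonal" picture can be checked on finite windows of the matrix (3.47). In the sketch below (a banded sample symbol; the helper names are ours), the window of T(a) + QU around a point far down the diagonal coincides with a window of L(a), while far up the diagonal it is the identity:

```python
import numpy as np

def laurent(coeff, idx):
    """Finite window of the Laurent matrix L(a): entry (i, j) = a_{i-j}.
    coeff maps k -> a_k (zero outside a finite band)."""
    return np.array([[coeff(i - j) for j in idx] for i in idx])

def toeplitz_plus_q(coeff, idx):
    """Finite window of T(a) + Q_{N_0}: Toeplitz part for i, j >= 0,
    identity part elsewhere."""
    def entry(i, j):
        if i >= 0 and j >= 0:
            return coeff(i - j)
        return 1.0 if i == j else 0.0
    return np.array([[entry(i, j) for j in idx] for i in idx])

a = {-1: 2.0, 0: 5.0, 1: 3.0}              # a sample trigonometric polynomial
coeff = lambda k: a.get(k, 0.0)

win = np.arange(-3, 4)                     # index window -3..3
B = lambda shift: toeplitz_plus_q(coeff, win + shift)  # window around "shift"

# Travelling down the diagonal (towards +infinity) we see L(a) ...
assert np.allclose(B(100), laurent(coeff, win))
# ... and up the diagonal (towards -infinity) we see the identity I.
assert np.allclose(B(-100), np.eye(len(win)))
```

Shifting the window by h corresponds exactly to looking at V₋h (T(a) + QU) Vh, so the two assertions are finite-window versions of the two limit operators L(a) and I.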
3.7.3 Singular Integral Operators

By evaluating the following integral in the sense of the Cauchy principal value, the singular integral operator

\[
(Su)(t) \;=\; \frac{1}{\pi i} \int_{\mathbb{T}} \frac{u(s)}{s - t}\, ds, \qquad t \in \mathbb{T},
\]

is well-defined and bounded on a dense subspace of L2(T), whence it can be extended to a bounded linear operator on L2(T). It is readily seen that, via the Fourier isomorphism between L2(T) and ℓ2 = ℓ2(Z), the singular integral operator S corresponds to the operator

\[
S_2 \;=\; \hat{M}_{\mathrm{sgn}} \;=\; P - Q \tag{3.48}
\]

with

\[
\mathrm{sgn}(m) = \begin{cases} 1, & m \ge 0,\\ -1, & m < 0, \end{cases} \qquad m \in \mathbb{Z},
\]

P = PN0 and Q = I − P. Recall from Example 1.39 that the operator Mc of multiplication by a function c ∈ L∞(T) on L2(T) corresponds to the Laurent operator L(c) on ℓ2 via the Fourier isomorphism. Now suppose c, d are continuous functions on T, and we will study the bounded linear operator

\[
A = M_c + M_d\, S \tag{3.49}
\]
on L2(T). From (3.48) we get that A can be identified with the operator

\[
A_2 \;=\; L(c) + L(d)\,S_2 \;=\; L(c)(P + Q) + L(d)(P - Q) \;=\; L(c + d)\,P + L(c - d)\,Q \;=\; L(a)\,P + L(b)\,Q \tag{3.50}
\]

on ℓ2, where a = c + d, b = c − d ∈ C(T). From Example 1.39 we know that A2 is band-dominated, and

\[
[A_2] \;=\;
\begin{pmatrix}
\ddots & \vdots & \vdots & & & \\
\cdots & b_0 & b_{-1} & a_{-2} & & \\
\cdots & b_1 & b_0 & a_{-1} & a_{-2} & \\
 & b_2 & b_1 & a_0 & a_{-1} & \cdots \\
 & \vdots & b_2 & a_1 & a_0 & \cdots \\
 & & \vdots & a_2 & a_1 & \ddots
\end{pmatrix}. \tag{3.51}
\]
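The identity L(c) + L(d)S₂ = L(a)P + L(b)Q is an entrywise statement about matrices and can be verified exactly on finite windows, since S₂, P and Q act diagonally. A minimal sketch (the sample coefficients are ours):

```python
import numpy as np

# Entrywise check (on a finite window) that
#   L(c) + L(d) S_2 = L(a) P + L(b) Q   with a = c + d, b = c - d,
# where S_2 = P - Q, P = P_{N_0}, Q = I - P.
c = {-1: 1.0, 0: 2.0, 1: -1.0}   # sample Fourier coefficients of c and d
d = {-1: 0.5, 0: 1.0, 1: 4.0}

def L(coeff, idx):
    """Finite window of the Laurent matrix with entries coeff_{i-j}."""
    return np.array([[coeff.get(i - j, 0.0) for j in idx] for i in idx])

idx = np.arange(-4, 5)
P = np.diag((idx >= 0).astype(float))    # projection onto indices >= 0
Q = np.eye(len(idx)) - P
S2 = P - Q

a = {k: c.get(k, 0.0) + d.get(k, 0.0) for k in set(c) | set(d)}
b = {k: c.get(k, 0.0) - d.get(k, 0.0) for k in set(c) | set(d)}

lhs = L(c, idx) + L(d, idx) @ S2
rhs = L(a, idx) @ P + L(b, idx) @ Q
assert np.allclose(lhs, rhs)
```

Because S₂ is diagonal, the truncated products equal the truncations of the infinite products, so the check is exact rather than approximate.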
The obvious limit operators of P, Q, L(a) and L(b), or just a look at this matrix, immediately show that σ^op(A2) = {L(a), L(b)}, with L(a) in the local operator spectrum at +∞ and L(b) at −∞. So A2, and consequently A, is Fredholm⁹ if and only if L(a) and L(b) are invertible. With the same argument for A − λI and another look back at Example 1.39, we get that

\[
\mathrm{sp}_{\mathrm{ess}}\, A \;=\; \mathrm{sp}_{\mathrm{ess}}\, A_2 \;=\; a(\mathbb{T}) \cup b(\mathbb{T})
\]

since A2 − λI = L(a − λ)P + L(b − λ)Q is Fredholm if and only if a − λ and b − λ are invertible in L∞(T). Moreover, by Corollary 3.19, the Fredholm index of A is equal to ind+ L(a) + ind− L(b) which, by basic results on Toeplitz operators [10], is the difference of the winding numbers of b and a with respect to the origin.

If b ∈ L∞(T) is invertible, we can restrict ourselves to the study of L(a)P + Q instead of A2, where we denote the function b⁻¹a by a again. For this operator we have

\[
L(a)P + Q \;=\; \bigl(P L(a)P + Q\bigr)\bigl(I + QL(a)P\bigr) \;=\; \bigl(T(a) + Q\bigr)\bigl(I + QL(a)P\bigr),
\]

where the second factor I + QL(a)P is always invertible, with inverse I − QL(a)P. This equality shows that, in this case, the study of the operator (3.49) is equivalent to the study of the Toeplitz operator (3.47) from Subsection 3.7.2.

⁹ Recall Figure 4 on page 57 and Theorem 1.
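The factorization and the explicit inverse I − QL(a)P rest only on the algebra of the projections — in particular PQ = 0, which makes QL(a)P nilpotent — so they survive truncation to a finite window and can be checked numerically (the sample symbol is ours):

```python
import numpy as np

# Check the factorization  L(a)P + Q = (T(a) + Q)(I + QL(a)P)  and that
# (I + QL(a)P)^{-1} = I - QL(a)P, which rests on (QL(a)P)^2 = 0 (since PQ = 0).
a = {-2: 1.0, -1: -2.0, 0: 3.0, 1: 0.5}   # sample banded symbol

idx = np.arange(-5, 6)
La = np.array([[a.get(i - j, 0.0) for j in idx] for i in idx])
P = np.diag((idx >= 0).astype(float))
Q = np.eye(len(idx)) - P
I = np.eye(len(idx))

K = Q @ La @ P                       # the cross term QL(a)P
assert np.allclose(K @ K, 0)         # nilpotent, since P Q = 0
assert np.allclose((I + K) @ (I - K), I)                     # explicit inverse
assert np.allclose((P @ La @ P + Q) @ (I + K), La @ P + Q)   # the factorization
```

All three identities use only P + Q = I and PQ = QP = 0, which hold exactly for the truncated projections, so no approximation error enters.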
3.7.4 Discrete Schrödinger Operators

In this subsection we discuss the discrete Schrödinger operator

\[
A \;=\; \sum_{k=1}^{n} \bigl(V_{e_k} + V_{-e_k}\bigr) \;+\; \hat{M}_c
\]
on ℓp, where e1, . . . , en are the unit vectors in Rn, and c ∈ ℓ∞(Zn, L(X)) is called the potential of A. This operator is the discrete analogue of the second order differential operator −∆ + Mb, where ∆ is the Laplacian and b ∈ L∞ is a bounded potential, both on Rn. A is a band operator with bandwidth 1. For n = 1, its matrix representation is

\[
[A] \;=\;
\begin{pmatrix}
\ddots & \ddots & & & & & \\
\ddots & c_{-2} & 1 & & & 0 & \\
 & 1 & c_{-1} & 1 & & & \\
 & & 1 & c_0 & 1 & & \\
 & & & 1 & c_1 & 1 & \\
 & 0 & & & 1 & c_2 & \ddots \\
 & & & & & \ddots & \ddots
\end{pmatrix}.
\]
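A finite window of this matrix is easy to generate; the following sketch (the sample potential and window are our choices) just confirms the symmetric, bandwidth-1 structure for n = 1 and a scalar potential:

```python
import numpy as np

# Finite window of the one-dimensional discrete Schroedinger matrix
# [A] = tridiag(1, c_alpha, 1) for a sample scalar potential c (a sketch).
def schroedinger_window(c, idx):
    n = len(idx)
    A = np.zeros((n, n))
    for r, i in enumerate(idx):
        A[r, r] = c(i)                     # potential on the diagonal
        if r + 1 < n:
            A[r, r + 1] = A[r + 1, r] = 1  # V_{e_1} + V_{-e_1}: off-diagonals
    return A

c = lambda i: np.sin(np.sqrt(abs(i)))      # a slowly oscillating sample potential
A = schroedinger_window(c, np.arange(-4, 5))

assert np.allclose(A, A.T)                              # symmetric for real c
assert (np.abs(A[np.triu_indices(9, k=2)]) == 0).all()  # bandwidth 1
```

For an operator-valued potential c ∈ ℓ∞(Zn, L(X)), the scalar entries above would be replaced by blocks, but the band structure is the same.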
Let 1 < p < ∞ and dim X < ∞. Then X is reflexive and A ∈ BO ⊂ W is rich by Corollary 3.24. In this case, A is Fredholm if and only if it is invertible at infinity, which, on the other hand, is equivalent to the elementwise invertibility of σ^op(A) by Proposition 3.112. This argument with A − λI in place of A shows that we can improve (3.12) to

\[
\mathrm{sp}_{\mathrm{ess}}\, A \;=\; \bigcup_{B \in \sigma^{\mathrm{op}}(A)} \mathrm{sp}\, B. \tag{3.52}
\]
Depending on the function c, one can give a more or less explicit description of the operator spectrum σ^op(A) as well as of sp B for B ∈ σ^op(A), and hence of the essential spectrum of A.
• If c is almost periodic, then the P-convergence in Definition 3.3 is automatically convergence in the norm of L(E), and the invertibility of a limit operator of A immediately implies that of A. Repeating this argument with A − λI in place of A, we get that sp B = sp A for all B ∈ σ^op(A), whence (3.52) translates to sp_ess A = sp A. In particular, A is Fredholm if and only if all B ∈ σ^op(A) are invertible, which is the case if and only if A is invertible.
• If c is slowly oscillating, then σ^op(M̂c) = {aI : a ∈ c(∞)}. But then all limit operators of A are Laurent operators. For these operators the spectrum is well known (see [9], for example). Together with (3.52), this leads to

\[
\mathrm{sp}_{\mathrm{ess}}\, A \;=\; \bigcup_{B \in \sigma^{\mathrm{op}}(A)} \mathrm{sp}\, B \;=\; c(\infty) + [-2N, 2N],
\]

which is Theorem 2.6.25 in [70].
• Finally, if c is pseudoergodic, then σ^op(M̂c) is the set of all operators M̂d with a function d : Zn → clos c(Zn), and

\[
\sigma^{\mathrm{op}}(A) \;=\; \Bigl\{\, \sum_{k=1}^{n} \bigl(V_{e_k} + V_{-e_k}\bigr) + \hat{M}_d \;:\; d : \mathbb{Z}^n \to \mathrm{clos}\, c(\mathbb{Z}^n) \,\Bigr\}.
\]

Consequently, A itself is among its own limit operators, which shows that A is Fredholm if and only if it is invertible, and

\[
\mathrm{sp}_{\mathrm{ess}}\, A \;=\; \bigcup_{B \in \sigma^{\mathrm{op}}(A)} \mathrm{sp}\, B \;=\; \mathrm{sp}\, A.
\]
For the limit operator method in the treatment of the essential spectrum of discrete Schrödinger operators for some further classes of potentials, see [64].
3.8 Limit Operators – Everything Simple Out There? In the introduction to this chapter we motivated the limit operator method as a tool to study the invertibility at inﬁnity of A via localization “over inﬁnity”, A → σ op (A) = {Ah }.
(3.53)
We also mentioned that the main idea of localization is to substitute the original problem by a family of simpler invertibility problems. Of course, the image of a homomorphism (remember Subsection 3.5.2) is an algebra with a simpler structure, and hence the invertibility problem there cannot be harder than the original one. But the question that naturally comes up in connection with the map (3.53) is whether, and in which sense, the limit operators Ah are simpler objects than the operator A itself. If we think of generalized or usual multiplication operators A with a slowly oscillating symbol, then the limit operators of A are multiples of the identity, which are certainly very simple objects. On the other hand we have the results on generalized multiplication operators A with almost periodic or pseudoergodic symbol. In the ﬁrst case, the invertibility of any limit operator of A implies the invertibility of A, which shows that every single limit operator represents quite
a lot of the behaviour of A. In the second case, A itself is even among its limit operators.
Here are a few quick-and-dirty arguments, ordered from true to true-but-shallow, why Ah is a simpler – or at least not more complicated – object than A:
• A ∈ BDOpS being invertible at infinity is sufficient for Ah to be invertible.
• Every submatrix of [Ah] is (approximately equal to) a submatrix of [A].
• A “sample” of A cannot possess more complexity than A itself.
To find out a bit more about the aspect of simplification when passing from A to Ah, we will address each of the following two questions in a little subsection.
• Is this step of simplification ultimate in the sense that things are not going to become simpler when doing this step again?
• Is it a “privilege” for an operator to be a limit operator of another one? Do limit operators form some subclass of “simple” operators?
3.8.1 Limit Operators of Limit Operators of . . .

We have shown that some interesting properties (including invertibility at infinity, Fredholmness and applicability of approximation methods) of A are closely connected with all limit operators of A (or an associated operator) being invertible. But if verifying the latter property is easier than looking at A itself, then perhaps things become even easier when we pass to limit operators again. And again and again. . . ?
In this subsection we will show that every limit operator of a limit operator of A is already a limit operator of A. We will do this in two steps:
• Show that V−α BVα ∈ σ^op(A) if α ∈ Zn and B ∈ σ^op(A).
• Show that the P-limit of a sequence in σ^op(A) is in σ^op(A).
To give us a bit more fluent language, we will add one more definition.

Definition 3.93. Fix an arbitrary operator A ∈ L(E). Operators of the form V−α AVα with α ∈ Zn are called shifts of A, and the set of all shifts of A will be denoted by VA,

\[
V_A \;:=\; \{\, V_{-\alpha} A V_\alpha \,\}_{\alpha \in \mathbb{Z}^n}.
\]

We will generalize this notation to sets L ⊂ L(E) of operators by putting

\[
V_L \;:=\; \bigcup_{A \in L} V_A \;=\; \{\, V_{-\alpha} A V_\alpha \;:\; \alpha \in \mathbb{Z}^n,\ A \in L \,\}.
\]
So, by what we deﬁned earlier, A ∈ L(E) is a shift invariant operator if and only if VA = {A}. We will moreover call a set L ⊂ L(E) shift invariant if VL = L.
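On the level of matrix representations, [V−α A Vα]_{ij} = [A]_{i+α, j+α}; that is, shifting slides the matrix along its main diagonal. The following sketch (the entry functions are our samples) contrasts a Laurent operator, which is shift invariant, with a non-constant multiplication operator, which is not:

```python
import numpy as np

# [V_{-alpha} A V_alpha]_{ij} = [A]_{i+alpha, j+alpha}: shifting slides the
# matrix along its diagonal. Entries are modelled as functions of (i, j).
def entry_shift(A_entry, alpha):
    return lambda i, j: A_entry(i + alpha, j + alpha)

laurent = lambda i, j: 3.0 if i == j else (1.0 if abs(i - j) == 1 else 0.0)
mult    = lambda i, j: float(i) if i == j else 0.0   # diagonal, non-constant

win = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
sample = lambda e: [e(i, j) for (i, j) in win]

# A Laurent operator is shift invariant: V_A = {A} ...
assert sample(entry_shift(laurent, 5)) == sample(laurent)
# ... while a non-constant multiplication operator is not.
assert sample(entry_shift(mult, 5)) != sample(mult)
```

A Laurent matrix depends only on i − j, which is unchanged under the substitution (i, j) → (i + α, j + α); the diagonal of a multiplication operator is moved instead.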
Proposition 3.94. For A ∈ L(E), the operator spectrum σ^op(A) as well as all local operator spectra σs^op(A) with s ∈ S^{n−1} are shift invariant sets.

Proof. Take A ∈ L(E) and let h = (hm) ⊂ Zn be a sequence tending to infinity such that Ah ∈ σ^op(A) exists. Now take an arbitrary α ∈ Zn and verify that the sequence h + α := (hm + α) ⊂ Zn leads to

\[
A_{h+\alpha} \;=\; \mathcal{P}\text{-}\lim_{m\to\infty} V_{-\alpha} V_{-h_m} A V_{h_m} V_\alpha \;=\; V_{-\alpha} \Bigl(\mathcal{P}\text{-}\lim_{m\to\infty} V_{-h_m} A V_{h_m}\Bigr) V_\alpha \;=\; V_{-\alpha} A_h V_\alpha.
\]

Clearly, h + α tends to infinity in the same direction as h does.
One task down, still one to be done:

Proposition 3.95. For A ∈ L(E), the global operator spectrum as well as all local operator spectra are sequentially closed with respect to P-convergence.

Proof. We will give the proof for the global operator spectrum. It is completely analogous for local operator spectra.
Given a sequence A^(1), A^(2), . . . ⊂ σ^op(A) which P-converges to B ∈ L(E), we will show that also B ∈ σ^op(A). For k = 1, 2, . . ., let h^(k) = (h_m^(k)) ⊂ Zn denote a sequence tending to infinity such that A^(k) = A_{h^(k)}, and choose a subsequence g^(k) of h^(k) such that, for all m ∈ N,

\[
\bigl\| P_m \bigl( V_{-g_m^{(k)}} A V_{g_m^{(k)}} - A^{(k)} \bigr) \bigr\| \;<\; \frac{1}{m}.
\]

Now we put g = (gm) := (g_m^{(m)}) and show that B = Ag. To this end, take an r ∈ N. Then, for all m > r,

\[
\bigl\| P_r \bigl( V_{-g_m} A V_{g_m} - B \bigr) \bigr\|
\;\le\; \bigl\| P_r P_m \bigl( V_{-g_m} A V_{g_m} - A^{(m)} \bigr) \bigr\| + \bigl\| P_r \bigl( A^{(m)} - B \bigr) \bigr\|
\;\le\; 1 \cdot \frac{1}{m} + \bigl\| P_r \bigl( A^{(m)} - B \bigr) \bigr\|
\]
holds, which clearly tends to zero as m → ∞. The same can be shown for Pr coming from the right, and so we have B = Ag ∈ σ^op(A).

Corollary 3.96. For A ∈ L(E), the global operator spectrum as well as all local operator spectra are closed subsets of L(E).

Proof. Every norm-convergent sequence in σ^op(A) is, of course, P-convergent, and, by Proposition 3.95, its limit is in σ^op(A) again.

If we take Propositions 3.94 and 3.95 together, we get that the operator spectrum is closed under shifting and under passing to P-limits. The same is true for local operator spectra. Consequently, we have:
Corollary 3.97. Every limit operator of a limit operator of A ∈ L(E) is already a limit operator of A itself. So no further operators – and hence no further simpliﬁcation – occurs when repeatedly passing to the set of limit operators.
3.8.2 Everyone is Just a Limit Operator!

Another question which is closely related to the mentioned effect of simplification is whether or not the set of all possible limit operators,

\[
\bigcup_{A \in L(E, \mathcal{P})} \sigma^{\mathrm{op}}(A),
\]
is some proper subset of L(E, P) containing the “simple” operators only. Here the answer is “No”. The main result of this subsection essentially says that no operator is “complicated” enough not to be the limit operator of an even more “complicated” one.
So take some p ∈ [1, ∞] and a Banach space X, put E = ℓp(Zn, X), and take an arbitrary operator A ∈ L(E, P). The following construction of an “even more complicated operator than A” was suggested by Roch. The basic idea is very similar to the explicit construction of a sequence s with a prescribed set {a1, a2, . . .} of partial limits by putting s := (a1, a1, a2, a1, a2, a3, . . .).
We still need some preparations. Firstly, choose and fix a sequence h = (hm) ⊂ Zn that tends to infinity fast enough that all sets

\[
U_m \;:=\; h_m + \{-m, \ldots, m\}^n, \qquad m = 1, 2, \ldots \tag{3.54}
\]

are pairwise disjoint (for instance, put hm = ((m + 1)², . . . , (m + 1)²)). Given a bounded sequence (Am)∞_{m=1} with Am ∈ L(im Pm) for every m ∈ N, we let S(A1, A2, . . .) refer to the operator on E for which every space im PUm is an invariant subspace of E, and which acts on im PUm as Vhm Am V−hm, i.e. as the operator Am, shifted to im PUm. Otherwise, S(A1, A2, . . .) acts as the zero operator; that is,

\[
\bigl( S(A_1, A_2, \ldots)\, u \bigr)_\alpha \;=\; \begin{cases} \bigl( V_{h_m} A_m V_{-h_m} u \bigr)_\alpha & \text{if } \alpha \in U_m,\ m \in \mathbb{N},\\ 0 & \text{otherwise} \end{cases} \tag{3.55}
\]

for every u ∈ E. In this sense, one can also write

\[
S(A_1, A_2, \ldots) \;=\; \sum_{m=1}^{\infty} V_{h_m} A_m V_{-h_m},
\]

the sum in (Su)_α actually being finite for every α ∈ Zn by the choice of h = (hm). Since S(A1, A2, . . .) acts as a block-diagonal operator, we immediately have that S(A1, A2, . . .) ∈ L(E) with

\[
\| S(A_1, A_2, \ldots) \| \;=\; \sup_{m \in \mathbb{N}} \| A_m \|. \tag{3.56}
\]
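A finite analogue of the block construction makes (3.56) plausible: for a block-diagonal matrix, the ℓ² operator norm is the largest of the block norms. A sketch (the sample blocks are random and ours):

```python
import numpy as np

# A finite analogue of S(A_1, A_2, ...): place blocks A_m on disjoint index
# windows of a big matrix; the operator 2-norm is then the largest block
# norm, mirroring ||S(A_1, A_2, ...)|| = sup_m ||A_m|| from (3.56).
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((m, m)) for m in (2, 3, 4)]

N = sum(b.shape[0] for b in blocks)
S = np.zeros((N, N))
pos = 0
for b in blocks:                      # disjoint windows -> block diagonal
    m = b.shape[0]
    S[pos:pos + m, pos:pos + m] = b
    pos += m

norm = lambda M: np.linalg.norm(M, 2)
assert np.isclose(norm(S), max(norm(b) for b in blocks))
```

The singular values of a block-diagonal matrix are the union of the blocks' singular values, so the spectral norm is exactly the maximum of the block norms.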
From (3.55) we easily get that, for every α ∈ Zn, P{α} S(A1, A2, . . .)Qm ⇒ 0 and Qm S(A1, A2, . . .)P{α} ⇒ 0 as m → ∞, which clearly implies S(A1, A2, . . .) ∈ L(E, P).

Lemma 3.98. The map (Am) → S(A1, A2, . . .) is bounded and linear from the space of all bounded sequences (Am)∞_{m=1} with Am ∈ L(im Pm) to L(E, P).

Proof. The equalities S(A1 + B1, A2 + B2, . . .) = S(A1, A2, . . .) + S(B1, B2, . . .) and S(λA1, λA2, . . .) = λ S(A1, A2, . . .) for all such sequences (Am) and (Bm) and all λ ∈ C follow immediately from the definition (3.55), and the boundedness follows from (3.56).

Now remember our operator A ∈ L(E, P), put Am := Pm APm for every m ∈ N, and define SA := S(P1 AP1, P2 AP2, . . .). Then we can say even more about SA than in (3.56). Since A ∈ L(E, P), we get from Proposition 1.70 that Am → A in the P-sense and that ‖A‖ ≤ sup ‖Am‖. Together with (3.56) and ‖Am‖ = ‖Pm APm‖ ≤ ‖A‖, this shows that

\[
\| S_A \| \;=\; \sup_{m \in \mathbb{N}} \| A_m \| \;=\; \| A \|. \tag{3.57}
\]

Lemma 3.99. If A is a band operator, then SA is a band operator of the same bandwidth, and if A is band-dominated, then so is SA.

Proof. If A is a band operator, then so is each Vhm Am V−hm, with the same bandwidth, and hence SA has this property as well. If A is band-dominated, then we can take a sequence of band operators with A^(k) ⇒ A. Now the S_{A^(k)} are band operators too, and Lemma 3.98 and (3.57) show that

\[
\| S_{A^{(k)}} - S_A \| \;=\; \| S_{A^{(k)} - A} \| \;=\; \| A^{(k)} - A \| \;\to\; 0
\]

as k → ∞, showing that SA is band-dominated as well.
Now we are ready for the main result of this subsection, saying that every A ∈ L(E, P) is a limit operator – even with respect to a prescribed direction towards infinity.

Proposition 3.100. For every A ∈ L(E, P) and every sequence g = (gm) ⊂ Zn tending to infinity, there is an operator B ∈ L(E, P) and a subsequence h of g such that A = Bh.
Proof. From the sequence g choose a subsequence h = (hm) such that the sets Um, defined by (3.54), are pairwise disjoint, and put B = SA. From the definition of SA we know that, for every m ∈ N,

\[
P_{U_m} B \;=\; V_{h_m} P_m A P_m V_{-h_m} \;=\; B P_{U_m} \tag{3.58}
\]

and consequently

\[
P_m V_{-h_m} B V_{h_m} \;=\; V_{-h_m} P_{U_m} B V_{h_m} \;=\; P_m A P_m
\]

holds. Now fix k ∈ N. Then, for every m > k,

\[
P_k V_{-h_m} B V_{h_m} \;=\; P_k P_m V_{-h_m} B V_{h_m} \;=\; P_k P_m A P_m \;=\; P_k A P_m,
\]

and hence

\[
P_k \bigl( A - V_{-h_m} B V_{h_m} \bigr) \;=\; P_k A - P_k A P_m \;=\; P_k A Q_m \;\Rightarrow\; 0
\]

as m → ∞ since A ∈ L(E, P). The symmetric property (A − V−hm B Vhm)Pk ⇒ 0 as m → ∞ is derived analogously from the second equality in (3.58). This shows that V−hm B Vhm P-converges to A as m → ∞ and completes the proof.
3.9 Big Question: Uniformly or Elementwise? It is the biggest question of the whole limit operator business whether or not in Theorem 1, and consequently in all its applications, uniform invertibility of σ op (A) can be replaced by elementwise invertibility. In other words: Big question:
Is the operator spectrum of a rich operator automatically uniformly invertible if it is elementwise invertible?
The fact that no one has found a counterexample yet does not, of course, mean that there is no such example. On the other hand, the whole theory would immediately gain a lot of simplification and beauty if one were able to give a positive answer to the big question in general.
Our strategy to tackle this problem is as follows. Sets with the word “spectrum” in their name surprisingly often enjoy some sort of compactness property. If the operator spectrum σ^op(A) were a compact subset of L(E), we could immediately answer the big question.
Indeed, suppose the answer were “No”. Then there would be a rich operator A ∈ L(E) whose operator spectrum is elementwise but not uniformly invertible. Choose a sequence A1, A2, . . . from σ^op(A) such that ‖Am⁻¹‖ → ∞ as m → ∞. From the compactness of σ^op(A) we conclude that (Am) has a subsequence with norm-limit B in σ^op(A) again. But since B is invertible, by our premise, Lemma 1.2 b) contradicts the growth of the inverses of Am. So the answer would have to be “Yes”.
Unfortunately, the operator spectrum σ^op(A) is in general not compact with respect to the norm topology in L(E). But we will follow this road a little bit further and discover that σ^op(A) nevertheless enjoys a weaker form of compactness.
The content of this section should be seen as an appetizer for the big question of the limit operator business and for some of the problems and strategies to tackle it, finally leading to a positive answer for E = ℓp(Zn, X) in the cases p = 1 and p = ∞. Maybe this section can stimulate someone to tackle the problem for the remaining cases p ∈ (1, ∞), where it seems most promising to start with the C*-case ℓ². Also note that the uniformity condition in Theorem 1 was already considerably weakened in [67], [61] and [70], using central localization techniques – remember Subsection 3.5.2 of this book or look at Sections 2.3 and 2.4 of [70].
3.9.1 Reformulating Richness

We start our investigations with a slight but very natural reformulation of an operator’s rich property. As usual, we will say that a set L ⊂ L(E) is relatively P-sequentially compact if every infinite sequence from L has a P-convergent subsequence, and we will say that L is P-sequentially compact if this P-limit is in L again.

Proposition 3.102. An operator A ∈ L(E) is rich if and only if its set of shifts VA = {V−α AVα}_{α∈Zn} is relatively P-sequentially compact.

Proof. If VA is relatively P-sequentially compact, then all infinite sequences (V−αm AVαm)∞_{m=1}, including those with αm tending to infinity as m → ∞, possess a P-convergent subsequence. So A is rich.
If, conversely, A is rich, then every sequence (V−αm AVαm) with αm → ∞ has a P-convergent subsequence. All other sequences (V−αm AVαm) ⊂ VA have an infinite subsequence such that all αk involved are contained in some bounded set U ⊂ Zn. Since U has only finitely many elements, there is even an infinite constant subsequence of (V−αm AVαm) which, clearly, P-converges. Summarizing, every sequence (V−αm AVαm) in VA (with αk tending to infinity or not) has a P-convergent subsequence.

Remark 3.103. In Subsection 3.4.13 we gave some arguments why we restrict ourselves to integer shift distances when approaching limit operators. Note that
another nice fact that we would lose by passing to real-valued shift distances is the only-if part of Proposition 3.102. For instance, the operator A := P[0,∞) on Lp(R) is rich, but {V−α AVα}_{α∈R} contains many sequences – choose αm := [m√2], for example – without a P-convergent subsequence. In this connection we may also recall the operator of multiplication by the function f in Example 3.63 with rational step length s.
As trivial as Proposition 3.102 seems, it opens the door to seeing rich operators and their operator spectra in a somewhat different light. For instance, we just have to recall Proposition 3.95 to find and prove the following interesting fact.

Proposition 3.104. For a rich operator A ∈ L(E), every local as well as the global operator spectrum of A is P-sequentially compact.

Proof. If A ∈ L(E) is rich, then, by Proposition 3.102, VA is relatively P-sequentially compact. Consequently, P-clos VA is P-sequentially compact. Now clearly, σ^op(A), as well as every local operator spectrum of A, is a subset of the latter set. But as P-sequentially closed subsets (cf. Proposition 3.95) of a P-sequentially compact set, σ^op(A) and σs^op(A) are P-sequentially compact themselves.

So once more, the word “spectrum” has turned out to be a good indicator for a compactness property.

Remark 3.105. Although this is not true for Proposition 3.102 (see Remark 3.103), Proposition 3.104 remains valid, with some small changes in its proof, when we define limit operators using real shift distances.
3.9.2 Turning Back to the Big Question The discovery that operator spectra of rich operators are Psequentially compact encourages us to think about the big question with a bit more optimism. It is a ﬁrst clue towards believing that the answer might be “Yes” – the more beautiful but less expected one of the two possible answers. However, for ordinary operators the answer has to be “No”, as the following example shows. Example 3.106. Let (am )∞ m=1 ⊂ [0, 1) be a sequence of pairwise distinct numbers, and put Um := am + [0, 1] for every m ∈ N, so that all these intervals Um diﬀer from each other. Now put Am :=
1 PU + QUm , m m
m = 1, 2, . . .
and let A denote the operator S from Remark 3.101, having all operators A₁, A₂, ... in its operator spectrum. It turns out that the operator spectrum of A, besides A₁, A₂, ... (and integer shifts of those), only contains the identity operator I, which is due to the odd choice of the intervals U_m. This set is elementwise invertible – but not uniformly, since $\|A_m^{-1}\| = m$.
3.9. Big Question: Uniformly or Elementwise?
If this operator A were rich, then, by Proposition 3.104, the sequence (A_m) would have some P-convergent subsequence, which is clearly not possible due to the incompatibility of the intervals U_m.
1st Try

So suppose A ∈ L(E) is rich, and recall the situation from the beginning of this section; that is, σ^op(A) is elementwise but not uniformly invertible. As before, choose a sequence A₁, A₂, ... from σ^op(A) such that $\|A_m^{-1}\| \to \infty$ as m → ∞. This time, indeed, Proposition 3.104 says that (A_m) has a P-convergent subsequence with P-limit B in σ^op(A). The question jumping to our mind now is whether this operator B can be invertible or not. Answer: Although the norm limit of a sequence whose inverses tend to infinity cannot be invertible, the P-limit B can:

Example 3.107. The sequence

$$A_m := P_m + \frac{1}{m}\,Q_m, \qquad m = 1, 2, \ldots \tag{3.59}$$

P-converges to the identity I as m → ∞, although its inverses grow unboundedly, with $\|A_m^{-1}\| = m$.
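Example 3.107 is easy to check numerically. The sketch below is an ad-hoc finite-window model of ℓ²(Z) (all names and the window size are illustrative, not the book's notation), with P_m realized as the projection onto the coordinates |k| ≤ m. It confirms both halves of the claim: the inverses blow up, so norm convergence to I is impossible, yet A_m − I annihilates the range of each fixed P_k once m is large.

```python
import numpy as np

N = 40                                   # work on the finite window {-N, ..., N} of l^2(Z)
idx = np.arange(-N, N + 1)

def P(m):
    """Projection onto the coordinates with |k| <= m (as a matrix on the window)."""
    return np.diag((np.abs(idx) <= m).astype(float))

I = np.eye(len(idx))

def A(m):
    """A_m = P_m + (1/m) Q_m from (3.59), restricted to the window."""
    return P(m) + (1.0 / m) * (I - P(m))

# The inverses blow up: ||A_m^{-1}|| = m ...
for m in (2, 5, 10):
    assert np.isclose(np.linalg.norm(np.linalg.inv(A(m)), 2), m)

# ... so (A_m) cannot converge to I in norm ...
assert np.isclose(np.linalg.norm(A(10) - I, 2), 1 - 1 / 10)

# ... yet A_m -> I in the P-sense: for a fixed k, both P_k (A_m - I) and
# (A_m - I) P_k vanish once m >= k, since A_m - I lives outside of im P_m.
k = 3
for m in (4, 10, 20):
    assert np.linalg.norm(P(k) @ (A(m) - I), 2) == 0.0
    assert np.linalg.norm((A(m) - I) @ P(k), 2) == 0.0
```

The last two assertions are exactly the two conditions ‖P_k(A_m − I)‖ → 0 and ‖(A_m − I)P_k‖ → 0 that make up P-convergence here.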
Learning from this Setback

The reason why the sequence (3.59), despite its badly growing inverses, nevertheless has an invertible P-limit is that, roughly speaking, those "parts" of A_m which are responsible for the growing inverses "run away" towards infinity as m → ∞, and so they make no contribution to the P-limit B. Luckily, we are dealing with operator spectra here, and those have one more nice feature, stated in Proposition 3.94: We stay in σ^op(A) if we pass to appropriate shifts $V_{-\alpha_m}A_mV_{\alpha_m}$ instead of A_m, where every $\alpha_m \in \mathbb{Z}^n$ shall be chosen such that the "bad parts" of $V_{-\alpha_m}A_mV_{\alpha_m}$ cannot run away to infinity any more and consequently must have some "bad impact" on the P-limit B as well! That is the idea. In order to write it down in a more precise way, we extend the notion from Definition 2.31. Recall that an operator A ∈ L(E) is said to be bounded below if

$$\nu(A) := \inf_{\|x\|=1} \|Ax\| > 0.$$
We will moreover say that a set L ⊂ L(E) is uniformly bounded below if every A ∈ L is bounded below and if there is a ν > 0 such that ν(A) ≥ ν for all A ∈ L; that is,

$$\|Ax\| \ge \nu\|x\|, \qquad A \in L,\ x \in E.$$

Now take a Banach space X and put E = ℓ^p(Z^n, X). The following proposition gives a positive answer to the big question for operators in $BDO^p_\$$ with p = ∞. In fact, it is an even stronger statement than what we need.
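For finite matrices with the Euclidean norm, the lower bound ν(A) is simply the smallest singular value, which makes the distinction between "elementwise" and "uniformly" bounded below easy to experiment with. The diagonal family below is a hypothetical illustration, not one of the operator spectra discussed in the text:

```python
import numpy as np

def nu(A):
    """nu(A) = inf_{||x||=1} ||Ax||: for a matrix, the smallest singular value."""
    return np.linalg.svd(A, compute_uv=False).min()

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
x = rng.standard_normal(6)
x /= np.linalg.norm(x)
assert np.linalg.norm(A @ x) >= nu(A) - 1e-12        # ||Ax|| >= nu(A) ||x||

# Elementwise bounded below, but not uniformly: nu of the m-th member is 1/m.
family = [np.diag([1.0, 1.0 / m]) for m in range(1, 50)]
assert all(nu(B) > 0 for B in family)                # each member is bounded below ...
assert min(nu(B) for B in family) < 0.05             # ... but no uniform nu > 0 exists
```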
Chapter 3. Limit Operators
Proposition 3.108. If p = ∞, $A \in BDO^p_\$$ and all limit operators of A are bounded below, then σ^op(A) is uniformly bounded below.

Proof. Let p = ∞, $A \in BDO^p_\$$, and suppose σ^op(A) is elementwise but not uniformly bounded below. Then there is a sequence (A_m) ⊂ σ^op(A) such that, for every m ∈ N, there is an x_m ∈ E with

$$\|x_m\| = 1 \quad\text{and}\quad \|A_m x_m\| \le \frac{1}{m}.$$
Now fix some nonempty bounded set U ⊂ Z^n. For our purposes, meaning p = ∞, we may in fact put U := {0}. Clearly, there are translation vectors $c_m \in \mathbb{Z}^n$ such that $\|P_{c_m+U}\,x_m\|_\infty > \|x_m\|_\infty/2 = 1/2$ for every¹⁰ m ∈ N. Now put

$$y_m := V_{-c_m}x_m \quad\text{and}\quad B_m := V_{-c_m}A_mV_{c_m}, \tag{3.60}$$

which leaves us with

$$\|y_m\| = \|V_{-c_m}x_m\| = \|x_m\| = 1, \tag{3.61}$$

$$\|P_U\,y_m\| = \|P_U V_{-c_m}x_m\| = \|P_{c_m+U}\,x_m\| > \frac{1}{2}\|y_m\| \tag{3.62}$$

and

$$\|B_m y_m\| = \|V_{-c_m}A_mV_{c_m}V_{-c_m}x_m\| = \|A_m x_m\| \le \frac{1}{m} \tag{3.63}$$
for every m ∈ N. Moreover, by Proposition 3.94, the sequence (B_m) is in σ^op(A) again. So, by Proposition 3.104, we can pass to a subsequence of (B_m) with P-limit B ∈ σ^op(A). For simplicity of notation, suppose (B_m) itself is already this subsequence. Now some of the "bad parts" of B_m are located inside U, and hence they cannot "run away" as m → ∞. In fact, we can now prove that the P-limit B has inherited some "bad parts" inside U – bad enough to make B not bounded below: As an element of σ^op(A), the operator B is bounded below by our premise. So there is a constant a := ν(B) > 0 such that

$$\|Bx\| \ge a\|x\| \quad\text{for all}\quad x \in E. \tag{3.64}$$
Now fix some continuous function ϕ : R^n → [0, 1] which is identically 1 on H = [0, 1)^n and vanishes outside of 2H. By $\varphi_r$ denote the function

$$\varphi_r(t) := \varphi(t/r), \qquad t \in \mathbb{R}^n,$$
¹⁰ And this is a typical ℓ^∞ argument! ℓ^p is much more sophisticated here since U has to be chosen sufficiently large, and it is not clear whether the same U works for all x_m.
for every r > 0, so that $\varphi_r$ is supported in 2rH, while on rH it is equal to 1. From $A \in BDO^p_\$ \subset BDO^p$, Proposition 3.6 b), Theorem 1.42 and ϕ ∈ BUC we know that $[B, \hat{M}_{\varphi_r}] \Rightarrow 0$ as r → ∞. Choose r ∈ N large enough that U + H ⊂ rH and

$$\|[B, \hat{M}_{\varphi_r}]\| < \frac{a}{6}. \tag{3.65}$$
Since $B_m \overset{\mathcal{P}}{\to} B$, there is some m₀ ∈ N such that

$$\|P_{2rH}(B - B_m)\| < \frac{a}{6} \quad\text{for all}\quad m > m_0. \tag{3.66}$$
From $\hat{M}_{\varphi_r} = \hat{M}_{\varphi_r}P_{2rH}$ and inequalities (3.65) and (3.66) we conclude that

$$\|B\hat{M}_{\varphi_r} - \hat{M}_{\varphi_r}B_m\| \le \|B\hat{M}_{\varphi_r} - \hat{M}_{\varphi_r}B\| + \|\hat{M}_{\varphi_r}(B - B_m)\| \le \|B\hat{M}_{\varphi_r} - \hat{M}_{\varphi_r}B\| + \|\hat{M}_{\varphi_r}\|\cdot\|P_{2rH}(B - B_m)\| < \frac{a}{6} + 1\cdot\frac{a}{6} = \frac{a}{3}$$

for all m > m₀. The latter shows that, for every x ∈ E,

$$\|B\hat{M}_{\varphi_r}x\| \le \|\hat{M}_{\varphi_r}B_mx\| + \|B\hat{M}_{\varphi_r}x - \hat{M}_{\varphi_r}B_mx\| \le \|\hat{M}_{\varphi_r}B_mx\| + \|B\hat{M}_{\varphi_r} - \hat{M}_{\varphi_r}B_m\|\cdot\|x\| \le \|\hat{M}_{\varphi_r}B_mx\| + \frac{a}{3}\|x\| \tag{3.67}$$
holds if m > m₀. Taking everything together, keeping in mind that $P_U = P_U\hat{M}_{\varphi_r}$ since U + H ⊂ rH and $\varphi_r$ is equal to 1 on rH, we get

$$\frac{a}{2} \overset{(3.61)}{=} \frac{a}{2}\|y_m\| \overset{(3.62)}{<} a\|P_U y_m\| = a\|P_U\hat{M}_{\varphi_r}y_m\| \le a\|\hat{M}_{\varphi_r}y_m\| \overset{(3.64)}{\le} \|B\hat{M}_{\varphi_r}y_m\| \overset{(3.67)}{\le} \|\hat{M}_{\varphi_r}B_my_m\| + \frac{a}{3}\|y_m\| \le \|B_my_m\| + \frac{a}{3} \overset{(3.63)}{\le} \frac{1}{m} + \frac{a}{3}$$

(using $\|P_U\| \le 1$ and $\|\hat{M}_{\varphi_r}\| \le 1$)
for all m > m₀. If we finally subtract a/3 at both ends of the chain, we arrive at

$$\frac{a}{6} \le \frac{1}{m} \quad\text{for all}\quad m > m_0,$$

which, of course, contradicts a > 0. □
This result implies a positive answer to the big question for p = 1 and p = ∞:

Theorem 3.109. If p ∈ {1, ∞}, then the operator spectrum of a rich band-dominated operator is uniformly invertible as soon as it is elementwise invertible.
Proof. Let p = ∞, and suppose $A \in BDO^\infty_\$$ has an elementwise invertible operator spectrum σ^op(A). Then, by Lemma 2.35, all limit operators of A are bounded below, and Proposition 3.108 shows that they are even uniformly bounded below. But then, by Lemma 2.35 again, the norms of their inverses are bounded from above, which proves the theorem for p = ∞. Now let p = 1, and suppose $A \in BDO^1_\$$. Then $A^* \in BDO^\infty_\$$, and the operator spectra σ^op(A) and σ^op(A*) can be identified elementwise by taking (pre)adjoints. Since this identification clearly preserves norms and invertibility, we are done with p = 1 as well. □

Well, that was quite something. However, speaking with Sir Edmund Hillary, the big "bastard" still to be "knocked off" is the big question for the case p ∈ (1, ∞), which is a bit ironic since this used to be the nice and handsome case, as we remember from Figures 1, 3 and 4 (see pages 15, 46 and 57), for example.
3.9.3 Alternative Proofs for ℓ^∞

After the proof presented in the previous subsection was found in 2003, it turned out that a positive answer to the big question in the case ℓ^∞ = ℓ^∞(Z^n) was already hidden in the middle of the proof of Theorem 8 in [67]. Although this is a theorem on the Wiener algebra W, a closer look shows that this particular step works for all band-dominated operators on ℓ^∞. Meanwhile, a third proof for E = ℓ^∞ has been found by Chandler-Wilde and the author in [17]. One of the first messages of this paper is that the notions of convergence studied by Chandler-Wilde and Zhang in [24] are compatible with many of the concepts presented in this book. For example:
• The strict convergence $u_m \overset{\beta}{\to} u$ in E of [24] is equivalent to $u_m \overset{\mathcal{P}}{\to} u$ in E.
• The class of operators A ∈ L(E) which map strictly convergent to strictly convergent sequences in E is the superset of L(E, P) consisting of the operators A ∈ L(E) which are subject to the first constraint of (1.12).
• The class of operators A ∈ L(E) which map strictly convergent to norm convergent sequences in E is the superset of K(E, P) consisting of the operators A ∈ L(E) which are subject to the constraint (1.10).

Then [17] goes on to apply the collective compactness approach of [24] to the operator spectrum of a rich band-dominated operator. Therefore, we briefly introduce the following notions of [24]: A set L ⊂ L(E) is called collectively sequentially compact on (E, s) if, for every bounded set B ⊂ E, the image of B under the set L is relatively sequentially compact in E with respect to strict convergence.
For a sequence (A_m) ⊂ L(E) and an operator A ∈ L(E), one writes $A_m \overset{\beta}{\to} A$ if $u_m \overset{\beta}{\to} u$ implies $A_m u_m \overset{\beta}{\to} Au$. A subset K ⊂ L is called β-dense in L ⊂ L(E) if, for every A ∈ L, there is a sequence (A_m) ⊂ K with $A_m \overset{\beta}{\to} A$. Finally, a set V of isometric isomorphisms on E is called sufficient if, for some m ∈ N, it holds that, for every u ∈ E, there exists V ∈ V such that $2\|P_m Vu\| \ge \|u\|$. Then Theorem 4.5 in [24] says the following.

Proposition 3.110. Suppose that V is a sufficient set of isometric isomorphisms on E, and L ⊂ L(E) has the following properties:

(i) L is collectively sequentially compact on (E, s);
(ii) L is β-sequentially compact;
(iii) V⁻¹BV ∈ L for all B ∈ L and V ∈ V;
(iv) I − B is injective for all B ∈ L.

Then the set I − L is uniformly bounded below. If, moreover, in I − L there is an s-dense subset of surjective operators, then all operators in I − L are surjective.

This result can be applied to

$$E = \ell^\infty, \qquad \mathcal{V} = \{V_\alpha\}_{\alpha\in\mathbb{Z}^n} \qquad\text{and}\qquad L = \sigma^{op}(I - A)$$
with a rich band-dominated operator A on E. Indeed:

(o) V is sufficient if E = ℓ^∞ since every u ∈ ℓ^∞ can be shifted so that at least 50% of its norm is attained in the 0th component.
(i) Since P ⊂ K(E) if dim X < ∞, a set L ⊂ L(E) is collectively sequentially compact on (E, s) if and only if L is bounded. The latter is true for the operator spectrum by Proposition 3.4 a).
(ii) It is readily shown that $A_m \overset{\mathcal{P}}{\to} A$ implies $A_m \overset{\beta}{\to} A$. Therefore, and by Proposition 3.104, we know that the operator spectrum of a rich operator is β-sequentially compact.
(iii) From Proposition 3.94 we get that V⁻¹BV is a limit operator of I − A if B is one and if V ∈ V.

Now Proposition 3.110 yields the result of Proposition 3.108 and hence the desired answer to the big question for ℓ^∞.
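The sufficiency argument in (o) is nothing but the observation that a bounded sequence attains more than half of its supremum norm somewhere, so a suitable shift moves that position to the 0th component. A tiny numerical sketch (finite window and random data, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(101)                 # a sample of u in l^inf on the window {-50, ..., 50}

j = int(np.argmax(np.abs(u)))                # a position where the sup-norm is attained
v = np.roll(u, 50 - j)                       # window model of the shift moving entry j to index 0
assert 2 * abs(v[50]) >= np.max(np.abs(u))   # 2 |(Vu)_0| >= ||u||_inf, i.e. >= 50% of the norm
```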
3.9.4 Passing to Subclasses

Although, apart from p ∈ {1, ∞}, we are not yet able to definitively answer the big question in general, there are some subclasses of $BDO^p_\$$ for which a positive
answer can be given, independently of p. If the set A + CI is contained in one of these classes, then, for dim X < ∞, we have the opposite inclusion to (3.12),

$$\bigcup_{B \in \sigma^{op}(A)} \operatorname{sp} B \supset \operatorname{sp}_{ess} A, \tag{3.68}$$

which shows, together with (3.12), that

$$\bigcup_{B \in \sigma^{op}(A)} \operatorname{sp} B = \operatorname{sp}_{ess} A \tag{3.69}$$
if we are in the so-called perfect case, 1 < p < ∞ and dim X < ∞, and if, for the set A + CI, elementwise invertibility of operator spectra implies their uniform invertibility. A first but very simple situation in which elementwise invertibility implies uniform invertibility of σ^op(A) is when σ^op(A) is a compact subset of L(E), as we already found out earlier. For example, this is the case when σ^op(A) is contained in a finite-dimensional subspace of L(E) since, by Proposition 3.4 a) and Corollary 3.96, the operator spectrum is always a bounded and closed subset of L(E). Some more sophisticated subclasses with a positive answer to the big question are studied in Sections 2.3–2.5 of [70]. Here we will basically cite two of the highlights of this theory and refer to [70] for the proofs.
Operators in the Wiener Algebra

Fix a Banach space X, and put E = ℓ^p(Z^n, X), where p ∈ [1, ∞] may vary if we say so. The definition of a limit operator of A ∈ L(E) is based on the notion of P-convergence which, in general, depends on the norm of the underlying space E and hence on the value of p ∈ [1, ∞]. So even if A ∈ BDO^p for several values of p, the operator spectrum σ^op(A) can differ from p to p. But for operators A in the Wiener algebra W ⊂ BDO^p for all p ∈ [1, ∞] (recall Definition 1.43), one can show that σ^op(A) does not depend on p.

Lemma 3.111. If A ∈ W is rich and h ⊂ Z^n tends to infinity, then there is a subsequence g of h such that the limit operator A_g exists with respect to all spaces E with p ∈ [1, ∞]. This limit operator again belongs to W, and $\|A_g\|_W \le \|A\|_W$.

Proof. This is Proposition 2.5.6 in [70]. □
As a consequence of Proposition 1.46 c) we get that, if A ∈ W is invertible on one space E with p ∈ [1, ∞], then it is invertible on all these spaces E, and $\|A^{-1}\|_{L(E)} \le \|A^{-1}\|_W$. Moreover, a consequence of Lemma 3.111 is that the operator spectrum σ^op(A) is contained in the Wiener algebra W and does not depend on the underlying space E if A ∈ W. So for A ∈ W, the statement of Theorem 1 holds independently of the underlying space E. Moreover, by Theorem 3.109, also
the uniform boundedness condition on the inverses is redundant, since this is true for p ∈ {1, ∞} and consequently also for p ∈ (1, ∞) by Riesz–Thorin interpolation (see [39, IV.1.4] or [84, V.1]). So, for operators in the Wiener algebra, we are in the following nice situation.

Proposition 3.112. If X is reflexive, E = ℓ^p(Z^n, X) with some p ∈ [1, ∞], and A ∈ W is rich, then the following statements are equivalent.

(i) A is invertible at infinity on one of the spaces E.
(ii) All limit operators of A are invertible on one of the spaces E.
(iii) A is invertible at infinity on all spaces E.
(iv) All limit operators of A are invertible on all spaces E.

Proof. This is Theorem 2.5.7 of [70]. □
So, in particular, the big question has a positive answer for every operator A ∈ W, which includes all band operators.
Band-dominated Operators with Slowly Oscillating Coefficients

Again fix a Banach space X and some p ∈ [1, ∞], and put E = ℓ^p(Z^n, X). Remember from Remark 3.53 that a sequence b ∈ ℓ^∞(Z^n, L(X)) is called slowly oscillating if, for every q ∈ Z^n, the sequence V_q b − b vanishes at ∞. From the discrete analogue of Corollary 3.48 we know, as already formulated in Remark 3.53, that if A is a band-dominated operator on E with slowly oscillating coefficients, then A is rich and all limit operators of A have constant coefficients; that is, they are shift-invariant. In addition to this simple structure of each limit operator of A, we have the following result on the operator spectrum of A, which says that elementwise invertibility of σ^op(A) implies its uniform invertibility.

Proposition 3.113. If $A \in BDO^p_{\mathcal{S}}$ has slowly oscillating coefficients, then A is invertible at infinity if and only if all limit operators of A are invertible.

Proof. This is Theorem 2.4.27 in [70]. □
So also in this case, the uniform boundedness of the inverses turns out to be redundant.
3.10 Comments and References

What is probably the first application of limit operators goes back to 1927, when Favard [28] used them to study ordinary differential equations with almost-periodic coefficients. Since that time, limit operators have been used in the context of partial differential and pseudodifferential operators and in many other
fields of numerical analysis (for instance, see [35] and [36]). The Fredholmness of pseudodifferential operators in spaces of functions on R^n has been studied by Muhamadiev in [54], [55], and by Lange and Rabinovich in [45] and [59, 60]. The first time limit operator techniques were applied to the general case of band-dominated operators was in 1985 by Lange and Rabinovich [43, 44]. After the breakthrough [67], Rabinovich, Roch and Silbermann and a small number of their coauthors pushed the research in this field forward to the current standard, which is manifested in the monograph [70], the current 'bible' of the limit operator method. The results in Section 3.1 are standard (see [67], [68], etc.). Part b) of Proposition 3.11 goes back to [50]. The history of Theorem 1 starts with [67] for the 'perfect case' 1 < p < ∞ with dim X < ∞ and [68] for L². The cases dim X = ∞ and p ∈ {1, ∞} were then covered by [70], [47] and [50]. Lemma 3.14 basically goes back to Simonenko [83]. The index results in Subsection 3.3.1 go back to [66], via computations of the K-group of the C*-algebra BDO² for p = 2, and [74] for the generalization to 1 < p < ∞. It should be mentioned that the results in [74] easily extend to operators in BDO¹ and BDO^∞ that belong to the Wiener algebra W. Subsection 3.3.2 is from [17]. Note that even stronger results are presented in [63] and in Section 6.3 of [70] for the C*-algebra case; that is, when p = 2 and X is a Hilbert space. Much more can be said about the approximation of spectra and pseudospectra for particular classes of band-dominated operators; for example, see [5] for banded Toeplitz operators (compressions of band operators with constant coefficients, see Subsection 3.7.2, in particular (3.47)). Section 3.4 is an extended version of [49]. Operators with pseudoergodic coefficients have been studied by Davies in [26]. The compatibility of the limit operator method and central localization, as indicated in Subsection 3.5.2, was first shown in [67].
The observation that the center of $BDO^p_\$/K(E,\mathcal{P})$ coincides with (3.44) and the localization of $BDO^p_\$/K(E,\mathcal{P})$ over $M^\infty(SO(\mathbb{Z}^n))$ goes back to [61]. Generalizations of the limit operator approach, as indicated in Section 3.6, can be found in [62] and Chapter 7 of [70]. Proposition 3.95 is essentially shown in [67] already. Corollary 3.97 is Corollary 1.2.4 in [70]. The results of Subsection 3.8.2 can be found in Subsection 2.1.5 of [70]. The 'big question' that is discussed in Section 3.9 is as old as Theorem 1. In his review of the article [61], Albrecht Böttcher writes about the uniform boundedness condition in Theorem 1: "Condition (∗) is nasty to work with." We have got nothing to add to this. Propositions 3.102, 3.104 and 3.108 go back to the author [50]. Subsection 3.9.3 is due to Chandler-Wilde and the author [17], and the results of Subsection 3.9.4 are from [67] and [61].
Chapter 4
Stability of the Finite Section Method

4.1 The Finite Section Method: Stability Versus Limit Operators

The two preceding chapters suggest an approach to the applicability question of an approximation method in terms of limit operators since the following three conditions are equivalent for a fairly large class of methods (A_τ):

(A) The approximation method (A_τ) is stable.
(B) The stacked operator ⊕A_τ is invertible at infinity.
(C) The operator spectrum of ⊕A_τ is uniformly invertible.
We will demonstrate this approach when (A_τ) is the finite section method for operators on the axis, where we are able to derive pretty explicit criteria for the applicability of the method. So in what follows, n equals 1, and the operator sequence (A_τ) is

$$(A_\tau) = (P_\tau A P_\tau + Q_\tau), \qquad \tau \in T, \tag{4.1}$$

where T = N or R₊. Note that this is the method (2.13) with Ω_τ = {−τ, ..., τ} or [−τ, τ], respectively, depending on whether we are in the discrete or continuous case.
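For the discrete case D = Z, the sequence (4.1) is easy to set up explicitly. The following sketch models E on a finite window and takes for A the band operator 2I + V₁ – an arbitrary choice made only for illustration, not an example from the text; for this particular A the sections even turn out to be uniformly invertible, which is the stability property (A) above.

```python
import numpy as np

# Finite-window sketch of (4.1): E = l^p(Z) modeled on the window {-N, ..., N}.
N = 30
idx = np.arange(-N, N + 1)
n = len(idx)
A = 2.0 * np.eye(n) + np.diag(np.ones(n - 1), k=-1)     # 2I + V_1 on the window

def finite_section(A, tau):
    """A_tau = P_tau A P_tau + Q_tau, with P_tau the projection onto |k| <= tau."""
    P = np.diag((np.abs(idx) <= tau).astype(float))
    return P @ A @ P + (np.eye(n) - P)

# Outside of the section, A_tau acts as the identity:
A5 = finite_section(A, 5)
out = np.abs(idx) > 5
assert np.allclose(A5[out][:, out], np.eye(out.sum()))

# Here the sections are uniformly invertible (the symbol 2 + z of A has no
# zeros on the unit circle and winding number 0):
inv_norms = [np.linalg.norm(np.linalg.inv(finite_section(A, t)), 2) for t in range(3, 15)]
assert max(inv_norms) <= 1.0 + 1e-9
```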
4.1.1 Limit Operators of Stacked Operators

For the study of the stacked operator ⊕A_τ, as introduced in Definition 2.18, we have to jump back and forth between operators on the axis and operators on the
plane. Moreover, for reasons of analogy, we will study the discrete and the continuous case at once. To establish this polymorphism, we will reuse the notations of Section 2.4. Throughout the following, let p ∈ [1, ∞], and let E denote either the space L^p = L^p(R) or ℓ^p(Z, X), where X is an arbitrary Banach space. Depending on the choice of E, that is, on whether we are in the continuous or in the discrete case, take D ∈ {R, Z} correspondingly, and define E′ = L^p(R²) or E′ = ℓ^p(Z², X), respectively. To make a distinction between operators on E and operators on E′, we will write I′, P′_U, P′_k, Q′_U, Q′_k for the identity operator and the respective projection operators on E′. Finally, put D₊ = {τ ∈ D : τ > 0}; that is, D₊ ∈ {R₊, N}, in accordance with the choice of D. Now suppose A is a rich band-dominated operator on E, with a preadjoint in case p = ∞; that is, $A \in BDO^p_{\mathcal{S},\$\$}$. We will show that ⊕A_τ, stacked over the index set T = D₊, as an operator on E′, has all these properties as well, and we will give a description of its operator spectrum σ^op(⊕A_τ). The following formula, which is immediate from (4.1), will turn out to be helpful:

$$\oplus A_\tau = P_V(\oplus A)P_V + Q_V, \tag{4.2}$$

where the quarter plane V ⊂ D² is the intersection of the two half planes

$$H_{NE} = \{(x,\tau) \in D^2 : \tau \ge -x\} \quad\text{and}\quad H_{NW} = \{(x,\tau) \in D^2 : \tau \ge x\}.$$
Note that in (4.2) we may suppose that ⊕A is stacked over T = D. Moreover, we abbreviate

$$P_{NE} := P_{H_{NE}}, \quad Q_{NE} := I' - P_{NE}, \quad P_{NW} := P_{H_{NW}} \quad\text{and}\quad Q_{NW} := I' - P_{NW},$$

and put

$$\sigma^{op}_M(B) := \bigcup_{\varphi \in M} \sigma^{op}_{(\cos\varphi,\,\sin\varphi)}(B)$$

for all B ∈ L(E′) and all M ∈ {N, S, NE, NW}, where

$$N = \left(\frac{\pi}{4}, \frac{3\pi}{4}\right), \quad S = \left(\frac{3\pi}{4}, \frac{9\pi}{4}\right), \quad NE = \left\{\frac{\pi}{4}\right\} \quad\text{and}\quad NW = \left\{\frac{3\pi}{4}\right\}.$$
Figure 12: The ﬁnite section sequence (Aτ ) and its stacked operator ⊕Aτ for n = 1.
4.1. The FSM: Stability vs. Limit Operators
Now here is the promised result.

Proposition 4.1. If $A \in BDO^p_{\mathcal{S},\$\$}$ on E, then $\oplus A_\tau \in BDO^p_{\mathcal{S},\$\$}$ on E′, and the operator spectrum σ^op(⊕A_τ) consists of the following operators:

(i) the identity operator I′,
(ii) all operators ⊕(V_{−c}AV_c) with c ∈ Z,
(iii) all operators ⊕A_h with A_h ∈ σ^op(A),

and all shifts of

(iv) the operators $P_{NW}(\oplus A_h)P_{NW} + Q_{NW}$ with $A_h \in \sigma_+^{op}(A)$, and
(v) the operators $P_{NE}(\oplus A_h)P_{NE} + Q_{NE}$ with $A_h \in \sigma_-^{op}(A)$,

where all the ⊕ in (ii)–(v) are stacked over the index set D.
where all the ⊕ in (ii)–(v) are stacked over the index set D. Proof. Let A ∈ BDOpS,$$ . To show that ⊕Aτ ∈ BDOpS,$$ , it is suﬃcient, by equation (4.2) and Proposition 3.11 a), to show that ⊕A, PV and hence QV = I − PV are in BDOpS,$$ . Clearly, the (generalized) multiplication operator PV is banddominated and has a preadjoint. Moreover, by Corollary 3.90, it is rich, and we know everything about its operator spectrum. So it remains to check ⊕A. From Proposition 2.22 b) we get that ⊕A ∈ BDOp . Moreover, (⊕A) = ⊕(A ) shows that ⊕A ∈ S. To see that ⊕A is rich and to ﬁnd out about its operator spectrum, we take (1) (2) an arbitrary sequence h = (hm ) with hm = (hm , hm ) ∈ Z2 tending to inﬁnity. (2) Clearly, the second component h of h has no inﬂuence at all on the operator V−hm (⊕A)Vhm . So we just have to look at the ﬁrst component of h. • If h(1) is bounded, then there is an inﬁnite subsequence g of h and a c ∈ Z such that g (1) is the constant sequence c, and, consequently, (⊕A)g exists and is equal to ⊕(V−c AVc ). • If h(1) is unbounded, then h has a subsequence g the ﬁrst component of which tends to plus or minus inﬁnity. Since A is rich, we can choose a subsequence f from g such that f (1) leads to a limit operator of A. Hence (⊕A)f exists and is equal to ⊕(Af (1) ). From Corollary 3.90, equation (4.1) and from what we just found out about the limit operators of ⊕A, we immediately get the inclusion “⊂” in the four equalities σSop (⊕Aτ ) = σNop (⊕Aτ ) = op (⊕Aτ ) = σNE op σNW (⊕Aτ ) =
{I}, ⊕ (V−c AVc ) , ⊕Ah : c ∈ Z, Ah ∈ σ op (A) , op PU (⊕Ah ) PU + QU : U = c + HNW , c ∈ Z2 , Ah ∈ σ+ (A) , op PU (⊕Ah ) PU + QU : U = c + HNE , c ∈ Z2 , Ah ∈ σ− (A) .
The reverse inclusion "⊃" is checked in a way analogous to the proof of Proposition 3.89, using the fact that A is rich. The operators in $\sigma_{NE}^{op}(\oplus A_\tau)$ and $\sigma_{NW}^{op}(\oplus A_\tau)$ are exactly the shifts $V_cBV_{-c}$ of the operators B in (iv) and (v) with A_h replaced by the operator $A_{c^{(1)}+h}$, respectively, which is in the same local operator spectrum as A_h, by Proposition 3.94. □

This proposition, in combination with the equivalence of properties (A) and (C) above, leaves us with a stability criterion for the finite section method for a certain class of operators A on E.
4.1.2 The Main Theorem on the Finite Section Method

To give us a more concise notation, abbreviate

$$P := P_{D_+} \quad\text{and}\quad Q := I - P,$$

which are the projection onto the positive half axis and its complementary projection, respectively.

Theorem 4.2. If $A \in BDO^p_{\mathcal{S},\$\$}$, and either

(i) E = ℓ^p(Z, X) with p ∈ [1, ∞], a Banach space X, and D = Z, or
(ii) E = L^p with p ∈ (1, ∞) and D = R, or
(iii) E = L^p with p ∈ {1, ∞}, A ∈ R^p, and D = R,

then the following statements are equivalent.

➀ The finite section method (4.1) is applicable to A.
➁ The finite section sequence (4.1) is stable.
➂ A itself is invertible, and the set consisting of all operators $\oplus_{\tau\in D}(QV_\tau A_hV_{-\tau}Q + P)$ with $A_h \in \sigma_+^{op}(A)$ and all $\oplus_{\tau\in D}(PV_\tau A_hV_{-\tau}P + Q)$ with $A_h \in \sigma_-^{op}(A)$ is uniformly invertible.
➃ A is invertible, all sets $\{QV_\tau A_hV_{-\tau}Q + P\}_{\tau\in D}$ with $A_h \in \sigma_+^{op}(A)$ and $\{PV_\tau A_hV_{-\tau}P + Q\}_{\tau\in D}$ with $A_h \in \sigma_-^{op}(A)$ are essentially invertible, and the essential suprema of the norms of all these inverses are uniformly bounded.
In case (i), properties ➀, ➁, ➂ and ➃ are moreover equivalent to

➄ A is invertible, and the set consisting of all operators

$$QA_hQ + P \quad\text{with}\quad A_h \in \sigma_+^{op}(A) \tag{4.3}$$

as well as all

$$PA_hP + Q \quad\text{with}\quad A_h \in \sigma_-^{op}(A) \tag{4.4}$$

is uniformly invertible.
Proof. We start with the proof of the equivalence of ➁ and ➂. By Theorem 2.28 or 2.47, depending on whether case (i), (ii) or (iii) applies, we get that (Aτ ) is stable if and only if ⊕Aτ is invertible at inﬁnity, which, by Proposition 4.1 and Theorem 1, is equivalent to the uniform invertibility of σ op (⊕Aτ ). By Proposition 4.1, this is equivalent to the uniform invertibility of the sets
$$\{\oplus(V_{-c}AV_c) : c \in D\}, \tag{4.5}$$

$$\{\oplus A_h : A_h \in \sigma^{op}(A)\}, \tag{4.6}$$

$$\{P_{NE}(\oplus A_h)P_{NE} + Q_{NE} : A_h \in \sigma_-^{op}(A)\} \tag{4.7}$$

and

$$\{P_{NW}(\oplus A_h)P_{NW} + Q_{NW} : A_h \in \sigma_+^{op}(A)\}, \tag{4.8}$$
where all four ⊕ signs in (4.5)–(4.8) are over τ ∈ D, meaning that each of these four stacked operators is the same in every layer. Firstly, the uniform invertibility of (4.5) implies the invertibility of A, which, secondly, implies the uniform invertibility of the sets (4.5) and (4.6), by Proposition 2.20 and Theorem 1. From

$$P_{NE} = \oplus_{\tau\in D}(V_{-\tau}PV_\tau) \quad\text{and}\quad Q_{NE} = \oplus_{\tau\in D}(V_{-\tau}QV_\tau)$$

we get that the operators in (4.7) are of the form

$$\oplus_{\tau\in D}\,V_{-\tau}(PV_\tau A_hV_{-\tau}P + Q)V_\tau.$$

By Proposition 2.20, and since $V_\tau$ and $V_{-\tau}$ are invertible isometries, this operator is invertible if and only if

$$\oplus_{\tau\in D}\,(PV_\tau A_hV_{-\tau}P + Q)$$
is invertible, where the norms of their inverses coincide. Consequently, the uniform invertibility of (4.7) is equivalent to the uniform invertibility of the ﬁrst set in ➂. Analogously, one checks that (4.8) corresponds to the second set in ➂, and we are ﬁnished with the equivalence of ➁ and ➂. The equivalence of ➂ and ➃ is immediate from Proposition 2.20. Since ➁ is equivalent to ➂, it implies the invertibility of A. But then, by Proposition 1.78, ➀ and ➁ are equivalent. Finally, let D = Z. Then the word “essential” becomes “uniform” in ➃, by the discussion after (2.10), and the union of all sets discussed in ➃ is exactly the set of operators in (4.3) and (4.4), by Proposition 3.94, which proves the equivalence of ➃ and ➄ if D = Z. Remark 4.3. a) If p ∈ {1, ∞} or A ∈ W, we know from Section 3.9 that the uniform invertibility of σ op (⊕Aτ ) is guaranteed by its elementwise invertibility. That is
why, in these cases, the word "uniformly" can be replaced by "elementwise" in ➂, and the uniformity condition in ➃ is redundant as well.
b) In ➂ and ➃ it is obviously not necessary to distinguish between limit operators $A_h$ and $A_{h'}$ if $h'$ is of the form h + d for some d ∈ Z.
c) The operators in (4.3) and (4.4) are the operators $(A_h)_-$ and $(A_h)_+$ in the notation of (3.14). We will make use of this observation in the next subsection.
d) Note that ➄ is in general not equivalent to ➀–➃ if D = R, as the following example shows. However, note that ➄ would be equivalent to ➀–➃ if we had defined limit operators of A on L^p with respect to sequences h = (h_m) ⊂ R^n. (See Subsection 3.4.13 for why we did not.)

Example 4.4. For every k ∈ Z, let J_k denote the flip on the interval (k, k+1); that is,
$$(J_ku)(x) = \begin{cases} u(2k+1-x), & x \in (k, k+1),\\ 0, & \text{otherwise}, \end{cases}$$

and put

$$A := \sum_{k\in\mathbb{Z}} J_k,$$

the sum in (Au)(x) actually being finite for every x ∈ R. Then a finite section A_τ is invertible if and only if τ is an integer, and hence the finite section sequence (A_τ) is far from stable. But A is (integer) translation invariant and thus coincides with all of its limit operators A_h. So it is easy to see that property ➄ is fulfilled!
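A discretized version of Example 4.4 makes the integer/non-integer dichotomy visible: sampling each interval (k, k+1) by N points turns J_k into the anti-diagonal permutation of a block, and cutting mid-block leaves the section with a nontrivial kernel. The grid model below is only an illustration of this effect, with an arbitrary window and sampling rate:

```python
import numpy as np

N = 10                                          # samples per unit interval
blocks = 6                                      # model the window (-3, 3)
n = N * blocks
J = np.fliplr(np.eye(N))                        # the flip of one interval
A = np.kron(np.eye(blocks), J)                  # A = sum_k J_k on the window

x = np.linspace(-3, 3, n, endpoint=False) + 0.5 / N   # midpoints of the grid cells

def finite_section(tau):
    """P_tau A P_tau + Q_tau on the grid, cutting at |x| <= tau."""
    P = np.diag((np.abs(x) <= tau).astype(float))
    return P @ A @ P + (np.eye(n) - P)

# Integer cut: only whole blocks survive, so A_tau is a permutation (invertible).
assert np.linalg.svd(finite_section(2.0), compute_uv=False).min() > 0.5
# Non-integer cut: a block is cut in half, and the section becomes singular.
assert np.linalg.svd(finite_section(1.5), compute_uv=False).min() < 1e-12
```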
4.1.3 Two Baby Versions of Theorem 4.2

In this short subsection, before we apply our theorem in a more sophisticated setting in the next section, we will study two cases in which Theorem 4.2 takes an especially simple form.
Translation Invariant Operators

A very simple case is that of a D-translation invariant operator; that is, an operator $A \in BDO^p_{\mathcal{S},\$\$}$ with $V_{-\tau}AV_\tau = A$ for all τ ∈ D.

Corollary 4.5. Let the conditions of Theorem 4.2 be fulfilled, and suppose $A \in BDO^p_{\mathcal{S},\$\$}$ is D-translation invariant. Then the finite section method (4.1) is applicable to A if and only if A itself and the two operators A₊ and A₋ are invertible, where, as agreed in (3.14),

$$A_+ = PAP + Q \quad\text{and}\quad A_- = QAQ + P.$$

Proof. This follows immediately from the equivalence of ➀ and ➃ in Theorem 4.2 since A coincides with all of its limit operators A_h and their shifts $V_\tau A_hV_{-\tau}$ with τ ∈ D if A is D-translation invariant. □
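A window sketch of the classical failure case for Corollary 4.5: the shift A = V₁ is D-translation invariant and invertible (it is an isometry with inverse V₋₁), yet A₊ = PAP + Q is not invertible, since PAP is a unilateral shift – injective but not surjective. So the FSM is not applicable, and indeed every single section is singular. (Finite-window model for illustration only; on the window the non-invertibility of A₊ shows up as an exactly zero singular value.)

```python
import numpy as np

N = 25
idx = np.arange(-N, N + 1)
n = len(idx)
A = np.diag(np.ones(n - 1), k=-1)                 # the shift V_1 on the window

P = np.diag((idx >= 0).astype(float))             # projection onto the positive half axis
Aplus = P @ A @ P + (np.eye(n) - P)               # A_+ = PAP + Q

# e_0 is not in the range of A_+, so A_+ is singular on the window:
assert np.linalg.svd(Aplus, compute_uv=False).min() < 1e-12

# Accordingly, every finite section P_tau A P_tau + Q_tau is singular as well:
for tau in (3, 7, 11):
    Pt = np.diag((np.abs(idx) <= tau).astype(float))
    At = Pt @ A @ Pt + (np.eye(n) - Pt)
    assert np.linalg.svd(At, compute_uv=False).min() < 1e-12
```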
The Discrete Case with Slowly Oscillating Coefficients

Suppose E = ℓ^p = ℓ^p(Z, C) with 1 < p < ∞; that is, n = 1, X = C and D = Z. We will study the finite section method for band-dominated operators A on E with slowly oscillating coefficients. For simplicity, we start with a band operator A on E with slowly oscillating coefficients. In that case, we have the following simplification of Theorem 4.2.

Corollary 4.6. If A ∈ BO has slowly oscillating coefficients and D = Z, then the finite section method (4.1) is applicable to A if and only if A and all operators (4.3) and (4.4) are invertible.

Proof. Indeed, recalling Remark 4.3 a), the uniform boundedness condition in ➃ is redundant since ⊕A_τ ∈ BO ⊂ W by Proposition 2.22 a). Moreover, the sets in ➃ are all singletons since the limit operators of A are translation invariant by Remark 3.53. With these modifications, ➃ is equal to ➄ with "uniformly" replaced by "elementwise", and our proof is finished. □

Remark 4.7. a) Note that Proposition 3.113 is not applicable here to remove the uniform boundedness condition in ➃ of Theorem 4.2 since, although A ∈ BDO^p has slowly oscillating coefficients, the same is in general not true for ⊕A_τ.
b) In fact, the statement of Corollary 4.6 is even true for arbitrary A ∈ W without the premise of slowly oscillating coefficients. This can be shown by replacing the stacked operator ⊕A_τ with the block diagonal operator
$$\operatorname{diag}(P_1AP_1,\ P_2AP_2,\ \ldots),$$

acting on the same space E as A does. For example, see [69] for the adjustment of the proofs to this setting. The reason why we didn't present this idea here is that it is limited to D = Z, while our aim was to present a unified approach for continuous and discrete approximation methods.

So, besides the invertibility of A, which is of course an indispensable ingredient for the applicability of any approximation method to A, all we need for the applicability of the finite section method is the invertibility of all operators

$$B_- \ \text{with}\ B \in \sigma_+^{op}(A) \quad\text{and}\quad C_+ \ \text{with}\ C \in \sigma_-^{op}(A), \tag{4.9}$$
where B₋ and C₊ are defined by (3.14). From Corollary 3.19 we know that the invertibility of A implies that B₋ and C₊ are Fredholm, and

$$\operatorname{ind} B_- = -\operatorname{ind}_+ A \quad\text{and}\quad \operatorname{ind} C_+ = -\operatorname{ind}_- A = \operatorname{ind}_+ A \tag{4.10}$$

for all $B \in \sigma_+^{op}(A)$ and $C \in \sigma_-^{op}(A)$. Since all coefficients of A are slowly oscillating, the limit operators B and C are Laurent operators, by Remark 3.53. Consequently, B₋ and C₊ are Toeplitz operators on the respective half axes, and for those we have Coburn's theorem (Theorem 2.38 in [9] or Theorem 1.10 in [10]), saying that ind B₋ = 0 and ind C₊ = 0 already imply the invertibility of B₋ and C₊.
Summarizing this with (4.10) and the fact that (4.1) is applicable if and only if A and all operators (4.9) are invertible, we get:

Proposition 4.8. If A ∈ BO has slowly oscillating coefficients and D = Z, then the finite section method (4.1) is applicable to A if and only if A is invertible and ind₊ A = 0.

In [74], Roch generalized this result to band-dominated operators with slowly oscillating coefficients, so that Proposition 4.8 is still true if we replace BO by BDO^p. Moreover, Roch's result extends from 1 < p < ∞ to A ∈ BDO¹ and A ∈ BDO^∞, provided A ∈ W. Note that this is a generalization of the classical result on the stability of the finite section method for band-dominated Laurent operators, which is the case of constant coefficients.
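In the constant-coefficient case the index condition of Proposition 4.8 is easy to test numerically, since there (up to the usual sign conventions) the plus-index of a Laurent operator with continuous nonvanishing symbol a is minus the winding number of a about the origin. The helper below is a generic numerical sketch, not code from any of the cited sources:

```python
import numpy as np

def wind(symbol, m=4096):
    """Winding number about 0 of the curve t -> symbol(e^{it}), t in [0, 2*pi)."""
    z = np.exp(2j * np.pi * np.arange(m) / m)
    vals = symbol(z)
    # sum of the (small) angle increments between consecutive sample points
    return int(round(np.sum(np.angle(vals / np.roll(vals, 1))) / (2 * np.pi)))

# symbol of the band operator 2I + V_1: no zeros, winding 0 -> FSM applicable
assert wind(lambda z: 2 + z) == 0
# symbol of the shift V_1 itself: winding 1 -> the FSM must fail
assert wind(lambda z: z) == 1
# a tridiagonal Laurent example 3 + z + 1/z: symbol in [1, 5], winding 0
assert wind(lambda z: 3 + z + 1 / z) == 0
```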
4.2 The FSM for a Class of Integral Operators

For the remainder of this book we dive into the continuous case, where we study the finite section method in an algebra containing a large number of operators emerging from practical problems.
4.2.1 An Algebra of Convolution and Multiplication Operators

Let E = L^p = L^p(R^n) with some p ∈ [1, ∞] and n ∈ N. With every function κ ∈ L^1, we associate the operator that maps a function u ∈ E to the convolution κ ∗ u; that is, the function

    (κ ∗ u)(x) = ∫_{R^n} κ(x − y) u(y) dy,   x ∈ R^n.
From Young's inequality [72] we know that κ ∗ u ∈ E and ‖κ ∗ u‖ ≤ ‖κ‖_1 ‖u‖, so that u ↦ κ ∗ u is a bounded linear operator on E. For reasons which will become apparent in a minute, it is more convenient to associate the operator u ↦ κ ∗ u with the Fourier transform

    (Fκ)(x) = ∫_{R^n} κ(y) e^{i⟨x,y⟩} dy,   x ∈ R^n
of κ rather than with κ itself, where ⟨·,·⟩ is the standard scalar product in R^n. The convolution operator u ↦ κ ∗ u is henceforth denoted by C_a, where a = Fκ is the Fourier transform of κ, which is also referred to as the symbol of C_a. We denote the set of functions {Fκ : κ ∈ L^1} by FL^1. From what we said earlier, we know that C_a ∈ L(E) and ‖C_a‖ ≤ ‖κ‖_1 for every a = Fκ ∈ FL^1. From the associativity κ_1 ∗ (κ_2 ∗ u) = (κ_1 ∗ κ_2) ∗ u and the basic convolution theorem F(κ_1 ∗ κ_2) = (Fκ_1)(Fκ_2) we get that

    C_{a_1} C_{a_2} = C_{F(κ_1 ∗ κ_2)} = C_{(Fκ_1)(Fκ_2)} = C_{a_1 a_2},

where a_1 = Fκ_1 and a_2 = Fκ_2.
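The symbol calculus C_{a_1} C_{a_2} = C_{a_1 a_2} is easy to observe numerically: on a periodic grid, the discrete Fourier transform diagonalizes convolution, so composing two discretized convolution operators multiplies their symbols. The following sketch (Python with NumPy; the kernels κ_1, κ_2, the function u and the periodic grid standing in for R are arbitrary illustrative choices) also checks the bound ‖κ ∗ u‖ ≤ ‖κ‖_1 ‖u‖ coming from Young's inequality.

```python
import numpy as np

n = 256
x = np.linspace(-10, 10, n, endpoint=False)
h = x[1] - x[0]

kappa1 = np.exp(-x**2)         # a Gaussian kernel, kappa1 in L^1
kappa2 = np.exp(-np.abs(x))    # a Laplace kernel, kappa2 in L^1
u = np.cos(x) * np.exp(-x**2 / 4)

def conv(kappa, v):
    """Periodic approximation of (kappa * v)(x) = int kappa(x - y) v(y) dy."""
    return np.real(np.fft.ifft(np.fft.fft(kappa) * np.fft.fft(v))) * h

# C_{a1} (C_{a2} u) versus C_{a1 a2} u, where a1 a2 = F(kappa1 * kappa2):
lhs = conv(kappa1, conv(kappa2, u))
rhs = conv(conv(kappa1, kappa2), u)
assert np.allclose(lhs, rhs)

# Young's inequality in the sup norm: ||kappa * u|| <= ||kappa||_1 ||u||
assert np.max(np.abs(conv(kappa1, u))) <= np.sum(np.abs(kappa1)) * h * np.max(np.abs(u)) + 1e-12
```

On the grid, the discrete symbol values are exactly the eigenvalues of the circulant convolution matrix, a finite-dimensional shadow of the relation sp C_a = a(Ṙ^n) below.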
Moreover, it can be shown that the spectrum sp C_a coincides with the range a(Ṙ^n) of the symbol a ∈ FL^1, where one easily verifies that FL^1 ⊂ C(Ṙ^n) with a(∞) = 0 for every a ∈ FL^1. Therefore, C_a is never invertible if a ∈ FL^1, while an operator λI − C_a is invertible if and only if λ ∉ a(Ṙ^n). We will widen our class of convolution operators by saying that λI + C_a is the convolution operator with symbol b = λ + a = λ + Fκ and denote this operator by C_b; that is,

    (C_b u)(x) = λ u(x) + ∫_{R^n} κ(x − y) u(y) dy,   x ∈ R^n
for all λ ∈ C and κ ∈ L^1. Moreover, we write C + FL^1 as an abbreviation for the set {λ + Fκ : λ ∈ C, κ ∈ L^1}. Note that this enlargement of FL^1 is equivalent to the enlargement of L^1 by the delta distribution and its multiples λδ with λ ∈ C. It follows immediately from what we said earlier that C_{a_1} C_{a_2} = C_{a_1 a_2} and sp C_a = a(Ṙ^n) also hold for all symbols a_1, a_2 and a in C + FL^1, where a(∞) = λ if a = λ + Fκ ∈ C + FL^1 ⊂ C(Ṙ^n).

The study of convolution operators can be pretty interesting in its own right, but the class of operators to be studied enjoys a lot more variety if multiplication operators also enter the scene. Depending on the application that we have in mind, choose some Banach subalgebra Y of L^∞, and let A^o_Y and A_Y denote the smallest subalgebra and Banach subalgebra, respectively, of L(E) containing all convolution operators C_a with a ∈ FL^1 and all multiplication operators M_b with b ∈ Y; that is,

    A^o_Y := alg_{L(E)} { C_a, M_b : a ∈ FL^1, b ∈ Y },
    A_Y := clos_{L(E)} A^o_Y.
If Y = L^∞, then the subscript Y will be omitted.

We start our studies of the Banach algebra A_Y with a simple observation, where we say that an operator A ∈ L(E) is locally compact if P_U A and A P_U are compact operators for all bounded and measurable sets U ⊂ R^n, which is clearly equivalent to the discretization (1.3) being locally compact in the sense of Definition 2.14.

Lemma 4.9. Convolution operators C_a with symbol a ∈ FL^1 are locally compact.

Proof. Let a = Fκ with κ ∈ L^1, and let U ⊂ R^n be bounded and measurable. Since κ can be approximated in the norm of L^1 as well as desired by some compactly supported continuous function κ′ and since

    ‖C_{Fκ} − C_{Fκ′}‖ = ‖C_{F(κ−κ′)}‖ ≤ ‖κ − κ′‖_1,        (4.11)

it suffices to prove our claim with κ′ in place of κ. Therefore, let V ⊂ R^n denote a compact set which contains U as well as the support of κ′. Then P_V C_{Fκ′} is an integral operator with a continuous kernel
function supported in (2V) × V. But since such operators are compact, also P_U C_{Fκ′} = P_U (P_V C_{Fκ′}) is compact. The proof of the second claim is completely symmetric.
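Lemma 4.9 has a tangible numerical counterpart: since P_U C_a is compact, its discretization over a bounded window U is well approximated by matrices of low rank, i.e., its singular values decay rapidly. A small sketch (Python with NumPy; the Gaussian kernel and the grid size are arbitrary illustrative choices):

```python
import numpy as np

# Discretize the kernel kappa(x - y) of P_U C_a P_U on the window U = [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
K = np.exp(-(x[:, None] - x[None, :])**2) * h   # Nystrom matrix of the operator

# Rapid singular value decay means good finite-rank approximation on this
# window -- compactness made quantitative.
s = np.linalg.svd(K, compute_uv=False)
assert s[20] / s[0] < 1e-6
```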
This fact has remarkable consequences, as we will see. By its definition, the algebra A^o_Y consists of all finite sums of the form

    A = Σ_{i=1}^k A_i,

where the A_i are finite products of convolution and multiplication operators. Some of these A_i might be pure multiplication operators, say A_1, ..., A_j, and all other terms, A_{j+1}, ..., A_k, involve at least one convolution operator. The sum A_1 + · · · + A_j is again a multiplication operator, whose symbol shall be denoted by b_A ∈ Y. So

    A = M_{b_A} + Σ_{i=j+1}^k A_i.        (4.12)
This additive decomposition into a pure multiplication operator and a remaining term involving convolution operators is uniquely determined. We will show that, in a sense, this is even true for arbitrary A ∈ A_Y. Therefore, recall that J_Y := clos id_{A_Y} {C_a : a ∈ FL^1} is the smallest closed ideal in A_Y containing all convolution operators with symbol in FL^1. Again, we omit the subscript Y if Y = L^∞. It is an elementary exercise to show that

    J_Y = clos_{L(E)} { Σ_i A_i C_{a_i} B_i : A_i, B_i ∈ A^o_Y, a_i ∈ FL^1 },        (4.13)

the finite summation being over all terms of the given form.

Lemma 4.10. All operators in J_Y are locally compact.

Proof. Take an arbitrary bounded and measurable set U ⊂ R^n. From Lemma 4.9 and the fact that P_U commutes with every multiplication operator, we get that P_U A C_a and C_a B P_U are compact for all A, B ∈ A^o_Y and all a ∈ FL^1. The rest follows from formula (4.13).

Proposition 4.11. The decomposition A_Y = {M_b : b ∈ Y} ∔ J_Y holds.

Proof. Clearly, on the right we have two closed subalgebras of A_Y. Suppose A is contained in both. Then, for every bounded and measurable set U ⊂ R^n, the operator P_U A is compact by Lemma 4.10. This implies b = 0, and hence A = 0.
Let us now show that the sum on the right is all of A_Y. Every operator in the (dense) subset A^o_Y can be written as such a sum, by (4.12). It remains to show that the (linear) mapping A ↦ M_{b_A} is bounded on A^o_Y, and hence can be continuously extended to A_Y. To see this, take some A ∈ A^o_Y, an ε > 0 and a bounded and measurable set U ⊂ R^n with positive measure such that ‖P_U b_A‖_∞ > ‖b_A‖_∞ − ε. By (4.12) and Lemma 4.10, we have P_U A − P_U M_{b_A} ∈ K(E). Now

    ‖b_A‖_∞ − ε ≤ ‖P_U M_{b_A}‖ = ‖P_U M_{b_A} + K(E)‖_{L(E)/K(E)}
                = ‖P_U A + K(E)‖_{L(E)/K(E)} ≤ ‖P_U A‖ ≤ ‖A‖

for every ε > 0, and hence ‖M_{b_A}‖ = ‖b_A‖_∞ ≤ ‖A‖.
The continuous extension of the mapping A ↦ M_{b_A} to all of A_Y shall be denoted by µ. The bounded linear mapping µ : A_Y → A_Y is obviously a projection from A_Y onto {M_b : b ∈ Y}. The complementary projection A ↦ A − M_{b_A} from A_Y onto J_Y shall be denoted by γ. The decomposition

    A = µ(A) + γ(A)        (4.14)

of A ∈ A_Y into a multiplication part and a locally compact part is the generalization of the simple idea (4.12) to the Banach algebra A_Y. Proposition 4.11 allows us to identify the factor algebra A_Y / J_Y with the algebra {M_b : b ∈ Y}, and hence with Y ⊂ L^∞, where the multiplication operator µ(A) is a representative of the coset A + J_Y for every A ∈ A_Y.

Proposition 4.12. a) If A ∈ A is invertible at infinity and the multiplication operator µ(A) is invertible, then A is Fredholm.

b) If A ∈ A is Fredholm and 1 < p < ∞, then A is invertible at infinity.

Proof. a) is an immediate consequence of the decomposition (4.14), Lemma 4.10 and Proposition 2.15, and b) follows from Figure 4 on page 57.

The condition κ ∈ L^1 is necessary and sufficient for C_{Fκ} being in the Wiener algebra. Consequently, we have the following.

Proposition 4.13. Every operator A ∈ A^o is in the Wiener algebra W.

Proof. Of course, every multiplication operator is in W, and it is verified in almost complete analogy to Example 1.45 that C_a ∈ W for all a ∈ FL^1. The claim then follows from the closedness of W under finite sums and products.
4.2.2 The Finite Section Method in A_$

We will now study the applicability of the finite section method for operators in the smallest closed subalgebra

    A_$ := A_{L^∞_$}
of L(E) containing all convolutions C_a with a ∈ FL^1 and all multiplications M_b by a rich function b ∈ L^∞_$. To be able to apply our Theorem 4.2, we put n = 1, and hence E = L^p := L^p(R) with some p ∈ [1, ∞].

Proposition 4.14. Every operator A ∈ A_$ is subject to the conditions of Theorem 4.2; that is, A ∈ BDO^p_{S,$} and, if p ∈ {1, ∞}, then A ∈ R_p.

Proof. By Proposition 3.11 a), for the proof of A_$ ⊂ BDO^p_{S,$} it is sufficient to show that the generators C_a and M_b of our algebra A_$ are in BDO^p_{S,$}. Let a = Fκ with κ ∈ L^1. Since ‖P_m κ − κ‖_1 → 0 as m → ∞, we get that ‖C_{a_m} − C_a‖ → 0 as m → ∞, where a_m := F(P_m κ); that is, we replace the convolution kernel κ by its truncation to the interval [−m, m]. But from C_{a_m} ∈ BO for all m ∈ N, we get that C_a ∈ BDO^p. Alternatively, Proposition 4.13 also immediately shows that C_a ∈ BDO^p. Obviously, also M_b ∈ BDO^p for all b ∈ L^∞_$. If p = ∞, taking into account that the preadjoint operators of C_a and M_b are C_ã and M_b, respectively, where ã(t) = a(−t), we clearly have C_a, M_b ∈ S. Convolution operators C_a are translation invariant, and hence rich, and our multiplication operators M_b are rich by our premise b ∈ L^∞_$.

It remains to show that A_$ ⊂ R_p if p ∈ {1, ∞}. We start with p = 1 and take an arbitrary A ∈ A_$. We have to show that A_τ ∈ r(L^1) for every τ > 0. Therefore write A = M_b + J with M_b = µ(A) and J = γ(A) ∈ J_{L^∞_$} as in (4.14). Then we have

    A_τ = P_τ A P_τ + Q_τ = P_τ M_b P_τ + P_τ J P_τ + Q_τ = M_{b_τ} + K,

where b_τ = P_τ b + Q_τ 1 and K = P_τ J P_τ ∈ K(E) by Lemma 4.10. What we have to show is that this operator is subject to (2.16). So suppose A_τ is bounded below. Then, by Lemma 2.32, it is a Φ_+ operator, and this set is closed under compact perturbations, by Lemma 2.41. So M_{b_τ} is in Φ_+(L^1) as well. Lemma 2.42 shows that then M_{b_τ} is invertible on L^1. Consequently, A_τ = M_{b_τ} + K, as a compact perturbation of an invertible operator, is a Fredholm operator of index zero. On the other hand, it has kernel dimension zero, by Lemma 2.32, and so it is invertible.

For the proof of A_$ ⊂ R_p for p = ∞, we just note that the preadjoint of A belongs to A_$ over L^1 if A ∈ A_$ over L^∞, since the preadjoint operators of C_a and M_b are C_ã and M_b, respectively, where ã(t) = a(−t).
The Slowly Oscillating Case

We can now apply Theorem 4.2 to operators in our algebra A_$ to get sufficient and necessary criteria for the applicability of their finite section method. We demonstrate this for the case A^o_{SO}, which is contained in A_$ by Proposition 3.49, where we get a nice and very explicit result.

By Proposition 3.47, the limit operator of an operator of multiplication by a slowly oscillating function is just a multiple of the identity operator. Applying
Proposition 3.4, we get that every limit operator of A ∈ A_{SO} is in

    clos alg_{L(E)} { C_a, I : a ∈ FL^1 } = { C_a : a ∈ C + FL^1 }.

We collect all these symbols a ∈ C + FL^1 of limit operators A_h towards +∞ in a set R_+ and those towards −∞ in R_−. So we have R_± ⊂ C + FL^1 with

    σ_±^op(A) = { C_a : a ∈ R_± },
respectively. Finally, put R := R_+ ∪ R_−.

Proposition 4.15. For A ∈ A^o_{SO}, the following statements are equivalent.

(i) The finite section method (4.1) is applicable to A.

(ii) The finite section sequence (4.1) is stable.

(iii) The operator A itself is invertible, and for all functions a ∈ R, the range a(Ṙ) is a closed curve in the complex plane which does not contain the origin and has winding number zero.

Proof. By their definition, all functions a ∈ R are in C + FL^1, and hence they are continuous on the one-point compactification Ṙ; i.e., each range a(Ṙ) is some closed curve in the complex plane. Proposition 4.14 and Theorem 4.2 yield that (i) and (ii) are equivalent to each other and to the uniform invertibility of the set consisting of

(1) the operator A itself,

(2) all ⊕_{τ∈R} (Q V_τ A_h V_{−τ} Q + P) with A_h running through σ_+^op(A), and

(3) all ⊕_{τ∈R} (P V_τ A_h V_{−τ} P + Q) with A_h running through σ_−^op(A).
By Propositions 4.13 and 3.112, the word "uniform" can be replaced by "elementwise" in the previous sentence.

We start by examining the operators in (3). Since A_h ∈ σ_−^op(A), we have A_h = C_a with some a ∈ R_−. Hence, A_h is translation invariant and

    P V_τ A_h V_{−τ} P + Q = P C_a P + Q =: W_a

for every τ ∈ R. The operators W_a are Wiener-Hopf operators (recall Subsection 3.7.2). Such an operator W_a is invertible if and only if the image of its symbol a does not contain the origin and has winding number zero with respect to the origin (see, for instance, [29]). Analogously, we get that (2) is equivalent to the invertibility of all Q C_a Q + P with a ∈ R_+. But Q C_a Q + P is invertible if and only if W_a = P C_a P + Q is invertible. So all operators in (2) and (3) are invertible if and only if the ranges of all functions a ∈ R stay away from the origin and have winding number zero.
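For a concrete symbol, the winding number condition can be verified by sampling the curve a(Ṙ) and accumulating the change of argument. The sketch below (Python with NumPy; λ = 2 and κ(t) = e^{−t²} are arbitrary illustrative choices, giving a = λ + Fκ ∈ C + FL^1) finds winding number zero about the origin, so the corresponding Wiener-Hopf operator W_a is invertible.

```python
import numpy as np

lam = 2.0
t = np.linspace(-40, 40, 4001)   # quadrature grid for the Fourier transform
kappa = np.exp(-t**2)
dt = t[1] - t[0]

def symbol(xi):
    """a(xi) = lam + (F kappa)(xi) = lam + int kappa(t) e^{i xi t} dt."""
    return lam + np.sum(kappa * np.exp(1j * xi * t)) * dt

xi_grid = np.linspace(-50, 50, 2000)   # a(+-inf) = lam closes the curve
curve = np.array([symbol(xi) for xi in xi_grid])

# winding number about 0 = (total change of arg a) / (2 pi)
wind = np.round(np.sum(np.angle(curve[1:] / curve[:-1])) / (2 * np.pi))

assert np.all(np.abs(curve) > 0)   # the curve avoids the origin ...
assert wind == 0                   # ... with winding number zero
```

Here (Fκ)(ξ) = √π e^{−ξ²/4} > 0, so the sampled curve stays on the positive real axis; a symbol winding around the origin would make the finite section method fail even for invertible A.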
Remark 4.16. Remember that every function b ∈ L^∞ which converges at +∞ and at −∞ is contained in SO. If all multiplication operators involved in A have this property, then the sets R_+ and R_− are singletons, their union R has at most two elements, and the number of curves to be examined in property (iii) of Proposition 4.15 is (at most) two. So the criterion for the applicability of the FSM is indeed very effective in this case. This special case was already studied, using slightly different methods, in [46]. Note that, in this setting, Proposition 4.15 also yields the classical results of Gohberg and Feldman [29] about the applicability of projection methods to paired equations; that is,

    A = P C_a + Q C_b    or    A = C_a P + C_b Q
with a, b ∈ C + F L1 , the discrete analogues of which we already encountered in (3.50) and (3.51).
4.2.3 A Special Finite Section Method for BC

In this section we will study an approximation method for operators on the space of bounded and continuous functions BC ⊂ L^∞. The operators of interest to us will be of the form A = I + K, where K shall be bounded and linear on L^∞ with the condition Ku ∈ BC for all u ∈ L^∞. Typically, K will be some integral operator. One of the simplest examples is a convolution operator K = C_a with some a ∈ FL^1. In this simple case, the validity of the above condition can be seen as follows.

Lemma 4.17. If a ∈ FL^1, then C_a u is a continuous function for every u ∈ L^∞.

Proof. Let a = Fκ with some κ ∈ L^1. Since κ can be approximated in the norm of L^1 as closely as desired by a continuous function with compact support, we may, by (4.11), suppose that κ already is such a function. But in this case,

    |(C_a u)(x_1) − (C_a u)(x_2)| = | ∫_{R^n} (κ(x_1 − y) − κ(x_2 − y)) u(y) dy |
                                  ≤ ‖u‖_∞ ∫_{R^n} |κ(t + x) − κ(t)| dt
clearly tends to zero as x = x1 − x2 → 0.
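The integral appearing at the end of this proof is the L^1 modulus of continuity of κ, and the fact that it vanishes with the shift is precisely the continuity of translation in L^1. A quick numerical sketch (Python with NumPy; the hat kernel and the grid are arbitrary illustrative choices):

```python
import numpy as np

t = np.linspace(-5, 5, 20001)
dt = t[1] - t[0]
kappa = np.maximum(0.0, 1.0 - np.abs(t))   # continuous, compactly supported

def omega(h):
    """Approximates int |kappa(t + h) - kappa(t)| dt, the L^1 modulus of continuity."""
    shifted = np.maximum(0.0, 1.0 - np.abs(t + h))
    return np.sum(np.abs(shifted - kappa)) * dt

assert omega(0.01) < omega(0.5)   # the modulus shrinks with the shift
assert omega(0.001) < 1e-2        # ... and tends to zero
```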
As a slightly more sophisticated example, one could look at an operator of the following form, or at the norm limit of a sequence of such operators.

Example 4.18. Put

    K := Σ_{i=1}^j M_{b_i} C_{a_i} M_{c_i},        (4.15)
where b_i ∈ BC, a_i ∈ FL^1, c_i ∈ L^∞ and j ∈ N. For the condition that K maps L^∞ into BC, it is sufficient to impose continuity of the functions b_i in (4.15), whereas the functions c_i need not be continuous, since their action is smoothed by the convolution thereafter.

The following auxiliary result is essentially Lemma 3.4 of [4]. Since we will apply this result in different situations in what follows, we pass to different notations for a moment. Let L be bounded and linear on L^∞ with im L ⊂ BC, and put B = I + L. Moreover, abbreviate the restriction B|_{BC} by B_0.

Lemma 4.19.
a) Bu ∈ BC if and only if u ∈ BC.
b) B is invertible on L^∞ if and only if B_0 is invertible on BC. In this case,

    ‖B_0^{-1}‖_{L(BC)} ≤ ‖B^{-1}‖_{L(L^∞)} ≤ 1 + ‖B_0^{-1}‖_{L(BC)} ‖L‖_{L(L^∞)}.        (4.16)
Proof. a) This is immediate from Bu = u + Lu and Lu ∈ BC for all u ∈ L^∞.

b) If B is invertible on L^∞, then the invertibility of B_0 on BC and the first inequality in (4.16) follow from a). Now let B_0 be invertible on BC. To see that B is injective on L^∞, suppose Bu = 0 for u ∈ L^∞. From 0 ∈ BC and a) we get that u ∈ BC, and hence u = 0, since B is injective on BC. Surjectivity of B on L^∞: since B_0 is surjective on BC, for every v ∈ L^∞ there is a u ∈ BC such that B_0 u = Lv ∈ BC. Consequently,

    B(v − u) = Bv − B_0 u = v + Lv − Lv = v        (4.17)

holds, showing the surjectivity of B on L^∞. So B is invertible on L^∞ and, by (4.17), B^{-1} v = v − u = v − B_0^{-1} L v for all v ∈ L^∞; hence B^{-1} = I − B_0^{-1} L. This proves the second inequality in (4.16).

Remark 4.20. a) The previous lemma clearly holds for arbitrary Banach spaces, one of them contained in the other, in place of BC and L^∞.

b) If, moreover, L has a preadjoint operator on L^1, then an approximation argument as in the proof of Lemma 3.5 of [4] even shows that, in fact, (4.16) can be improved to the equality ‖B_0^{-1}‖_{L(BC)} = ‖B^{-1}‖_{L(L^∞)}.

We are looking for solutions u of the equation Au = b, which typically is some integral equation

    u(x) + ∫_{R^n} k(x, y) u(y) dy = b(x),   x ∈ R^n,        (4.18)
for any right-hand side b ∈ BC. By Lemma 4.19 a) with L = K, we are looking for u in BC only.

In this setting, a popular approximation method is simply to reduce the range of integration from R^n to [−τ, τ]^n, which is clearly different from the FSM. We
call this procedure the finite section method for BC, short: BC-FSM. We are now looking for solutions u_τ ∈ BC of

    u_τ(x) + ∫_{[−τ,τ]^n} k(x, y) u_τ(y) dy = b(x),   x ∈ R^n        (4.19)
with τ > 0, and hope that the sequence (u_τ) of solutions of (4.19) P-converges to the solution u of (4.18) as τ → ∞. Note that, unlike in our previous methods of the form (1.34), this time the right-hand side b remains unchanged, and the support of u_τ is not limited to some cube! The modified finite section method (4.19) can be written as A_τ u_τ = b with

    A_τ = I + K P_τ.        (4.20)
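Since the integration in (4.19) runs over the compact cube only, a Nyström-type discretization turns the BC-FSM into a finite linear system. The sketch below (Python with NumPy, n = 1; the kernel k, the right-hand side b and the grid parameters are illustrative choices, with ‖K‖ < 1 so that A = I + K is invertible) solves (4.19) for growing τ and watches the value u_τ(0) settle down.

```python
import numpy as np

def k(x, y):
    return 0.25 * np.exp(-(x - y)**2)   # small norm: I + K is invertible

def b(x):
    return 1.0 / (1.0 + x**2)

def solve_bc_fsm(tau, pts_per_unit=40):
    """Nystrom discretization of u(x) + int_{-tau}^{tau} k(x,y) u(y) dy = b(x)."""
    n = int(2 * tau * pts_per_unit) + 1
    y, h = np.linspace(-tau, tau, n, retstep=True)
    A = np.eye(n) + k(y[:, None], y[None, :]) * h   # I + K P_tau on the grid
    return y, np.linalg.solve(A, b(y))

vals = []
for tau in (2.0, 8.0, 16.0):
    y, u = solve_bc_fsm(tau)
    vals.append(u[np.argmin(np.abs(y))])   # u_tau at x = 0

# successive truncations agree better and better at x = 0
assert abs(vals[2] - vals[1]) < abs(vals[1] - vals[0])
```

Outside the grid, u_τ(x) = b(x) − ∫_{[−τ,τ]} k(x, y) u_τ(y) dy is available from the computed values, reflecting that the support of u_τ is not confined to the cube.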
As a consequence of Lemma 4.19 a) with L = K P_τ, one also has

Corollary 4.21. For every τ > 0, it holds that A_τ u_τ ∈ BC if and only if u_τ ∈ BC.

In contrast to the usual finite section method (4.1) on L^∞, the method (4.20) is tailor-made for operators of the form A = I + K acting on BC. We can regard A_τ (or better, its restriction to BC) as an element of L(BC), and we have u_τ ∈ BC for all τ > 0 if b ∈ BC, whereas another projector P_τ from the left in (4.20) would obviously spoil this setting.

In accordance with the machinery presented up to this point, our strategy for studying equation (4.18) and the stability of its approximation by (4.19) is to embed these into L^∞, where we can relate the BC-FSM (4.19) to our usual FSM (4.1) on L^∞. Indeed, the applicabilities of these different methods turn out to be equivalent.

Proposition 4.22. For the operator A = I + K with im K ⊂ BC and

    A_τ = I + K P_τ    and    Ã_τ = P_τ A P_τ + Q_τ,   τ > 0,

the following statements are equivalent.

(i) The BC-FSM (A_τ) alias (4.20) is applicable in BC.

(ii) The BC-FSM (A_τ) alias (4.20) is applicable in L^∞.

(iii) The finite section method (Ã_τ) is applicable in L^∞.

(iv) (A_τ) is stable on BC.

(v) (A_τ) is stable on L^∞.

(vi) (Ã_τ) is stable on L^∞.

Proof. The implication (i)⇒(iv) is standard. The equivalence of (iv) and (v) follows from Lemma 4.19 b) with L = K P_τ. The equivalence of (v) and (vi) comes
from the following observation. Aτ = I + KPτ
= =
Pτ + Pτ KPτ + Qτ + Qτ KPτ Pτ APτ + Qτ (I + Qτ KPτ )
= =
(Pτ APτ + Qτ )(I + Qτ KPτ ) Aτ (I + Qτ KPτ ),
where the second factor I + Q_τ K P_τ is always invertible, with inverse I − Q_τ K P_τ, and hence ‖(I + Q_τ K P_τ)^{-1}‖ ≤ 1 + ‖K‖ for all τ > 0.

(v)⇒(ii): Since (v) implies (vi), it also implies the invertibility of A on L^∞ by Theorem 4.2. But this, together with (v), implies (ii) by Theorem 1.75. Finally, the implication (ii)⇒(i) is trivial if we keep in mind Lemma 4.19 a) and Corollary 4.21, and the equivalence of (iii) and (vi) follows from Theorem 4.2.

For the study of property (iii) in Proposition 4.22 we have Theorem 4.2, involving limit operators of A, provided that, in addition, A is a rich operator. In Example 4.18 we therefore require the functions b_i and c_i to be rich. By Proposition 3.39, this is equivalent to b_i ∈ BUC = BC ∩ L^∞_$ and c_i ∈ L^∞_$. In the particularly simple case when n = 1 and all functions b_i and c_i have limits b_i^±, c_i^± ∈ C at plus and minus infinity, respectively, this gives us that the statements of Proposition 4.22 are equivalent to the operator A and two associated Wiener-Hopf operators being invertible. These two Wiener-Hopf operators are the compression of I + C_{a^+} to the negative half axis and that of I + C_{a^−} to the positive half axis, where

    a^± = Σ_{i=1}^j b_i^± c_i^± a_i ∈ FL^1,
respectively. They are invertible if and only if the closed curves a^+(Ṙ) and a^−(Ṙ) do not contain the point −1 ∈ C and have winding number zero with respect to it. In Subsection 4.3.4 we will study similar problems when A = I + K is a boundary integral operator in so-called rough surface problems.

Remark 4.23. – Other approximation methods. Although we have restricted ourselves to the FSM (4.1) and the BC-FSM (4.20), there are some other approximation methods in BC which are "close enough" to the FSM to handle the question of their applicability in terms of our results on the finite section method. For example, instead of the finite section projectors P_τ one can consider projections P^k, k ∈ N, onto spaces of polynomial splines. These splines shall be defined on a uniform mesh on the cube [−τ(k), τ(k)]^n with increasing density as k increases and with τ(k) → ∞ as k → ∞. As in Chapter 5 of [58], it can be seen that, after a sensible choice of these spline subspaces, the resulting Galerkin method (P^k A P^k)_{k=1}^∞ is close enough to the finite section method (P_{τ(k)} A P_{τ(k)})_{k=1}^∞ for its applicability to be equivalent to that of the latter method.
4.3 Boundary Integral Equations on Unbounded Surfaces

A large number of important problems in the physical sciences and engineering can be modelled by a strongly elliptic boundary value problem on a domain D; that is, a strongly elliptic PDE in the domain, coupled with an additional condition at the boundary ∂D. A common technique for studying such problems is to reformulate the boundary value problem as a boundary integral equation [53, 79]. The theory of the boundary integral equation method when the boundary ∂D is compact and smooth enough (at least Lipschitz) is very well developed; for example, see [53, 79, 34, 25]. The theory is much less well developed in the case when ∂D is unbounded; clearly, additional issues arise in the numerical solution, in particular the issue of truncation of ∂D (involving the finite section method).

In this section we consider the case where

    D = {(x, z) ∈ R^{n+1} : x ∈ R^n, z > f(x)}        (4.21)

is the epigraph of a given bounded and continuous function f : R^n → R. Let

    f_+ = sup_{x ∈ R^n} f(x)   and   f_− = inf_{x ∈ R^n} f(x)

denote the highest and the lowest elevation of the infinite surface ∂D. It is convenient to assume, without loss of generality, that f_− > 0, so that D is entirely contained in the half space H = {(x, z) ∈ R^{n+1} : x ∈ R^n, z > 0}.

There is a lot of ongoing research on problems posed on such domains. A number of specific problems of this type have been rigorously formulated as boundary value problems and reformulated as second kind integral equations; for example, see [22, 21, 20, 16] for acoustic problems, [2, 3] for problems involving elastic waves, and [57] for a boundary value problem for Laplace's equation arising in the study of unsteady water waves. A small selection of the literature on the numerical analysis of these problems is [15, 16, 19, 20, 42].
4.3.1 The Structure of the Integral Operators Involved

The integral operators arising in all these reformulations are of the form

    (Ku)(x) = ∫_{R^n} k(x, y) u(y) dy,   x ∈ R^n        (4.22)

for u ∈ BC with

    k(x, y) = Σ_{i=1}^j b_i(x) k_i(x − y, f(x), f(y)) c_i(y),        (4.23)

where

    b_i ∈ BC,   k_i ∈ C((R^n \ {0}) × [f_−, f_+]^2)   and   c_i ∈ L^∞        (4.24)
for i = 1, ..., j, and

    |k(x, y)| ≤ κ(x − y),   x, y ∈ R^n,        (4.25)
for some κ ∈ L^1. We denote the set of all operators K satisfying (4.22)-(4.25) for a particular function f ∈ BC by K_f.

Remark 4.24. a) Inequality (4.25) ensures that K acts boundedly on L^∞ and, moreover, that it is band-dominated, and even contained in the Wiener algebra. This is reflected by the methods used in the engineering literature [14, 86].

b) Note that, as is the case in the following example, inequality (4.25) need not hold for k_i(x − y, f(x), f(y)) in place of k(x, y).

Example 4.25. – Unsteady water waves. In [57], Preston, Chamberlain and Chandler-Wilde consider the two-dimensional Dirichlet boundary value problem: Given ϕ_0 ∈ BC(∂D), find ϕ ∈ C^2(D) ∩ BC(D̄) such that

    Δϕ = 0   in D,
    ϕ = ϕ_0   on ∂D,

which arises in the theory of classical free surface water wave problems. In this case, n = 1, and the surface function f shall in addition be differentiable with an α-Hölder continuous first derivative for a fixed α ∈ (0, 1]; precisely, for some constant C > 0,

    |f′(x) − f′(y)| ≤ C |x − y|^α

holds for all x, y ∈ R^n. Now let

    G(x, y) = Φ(x, y) − Φ(x_r, y),   x, y ∈ R^2,

denote the Green's function in the half plane H, where

    Φ(x, y) = −(1/2π) ln ‖x − y‖_2,   x, y ∈ R^2,

with ‖·‖_2 denoting the Euclidean norm in R^2, is the fundamental solution of Laplace's equation in two dimensions, and x_r = (x_1, −x_2) is the reflection of x = (x_1, x_2) with respect to ∂H. For the solution of the above boundary value problem we make the following double layer potential ansatz:

    ϕ(x) = ∫_{∂D} (∂G(x, y)/∂n(y)) ũ(y) ds(y),   x ∈ D,

where n(y) = (f′(y), −1) denotes the normal vector of ∂D at y = (y, f(y)), and we look for the corresponding density function ũ ∈ BC(∂D). In [57] it is shown that ϕ satisfies the above Dirichlet boundary value problem if and only if

    (I − K)ũ = −2ϕ_0,        (4.26)

where

    (Kũ)(x) = 2 ∫_{∂D} (∂G(x, y)/∂n(y)) ũ(y) ds(y),   x ∈ ∂D.
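A useful sanity check is the flat case f ≡ c > 0: there n(y) = (0, −1), the term Φ(x, y) contributes nothing to ∂G/∂n(y) for x, y both on the surface, and (up to the sign convention for the normal) the double layer kernel reduces to the Poisson kernel of the half plane at height 2c,

    P(x − y) = (1/π) · 2c / ((x − y)² + 4c²),

which integrates to 1 over the real line. A short numerical confirmation (Python with NumPy/SciPy; c = 0.7 is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

c = 0.7   # height of the flat surface f == c

def poisson(t):
    """Half-plane Poisson kernel at height 2c, the flat-case double layer kernel."""
    return (2.0 * c / np.pi) / (t**2 + 4.0 * c**2)

total, err = quad(poisson, -np.inf, np.inf)
assert abs(total - 1.0) < 1e-8   # unit mass for every c > 0
```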
In accordance with the parametrization x = (x, f(x)) of ∂D, we define

    u(x) := ũ(x, f(x))   and   b(x) := −2ϕ_0(x, f(x)),   x ∈ R,

and rewrite equation (4.26) as the equation

    u(x) − ∫_{−∞}^{+∞} k(x, y) u(y) dy = b(x),   x ∈ R        (4.27)

on the real axis for the unknown function u ∈ BC(R), where (writing, by abuse of notation, x and y also for the surface points (x, f(x)) and (y, f(y)))

    k(x, y) = 2 (∂G(x, y)/∂n(y)) √(1 + f′(y)²)
            = −(1/π) [ ((x − y)·n(y))/‖x − y‖_2² − ((x_r − y)·n(y))/‖x_r − y‖_2² ]
            = −(1/π) [ ((x − y) f′(y) − f(x) + f(y)) / ((x − y)² + (f(x) − f(y))²)
                       − ((x − y) f′(y) + f(x) + f(y)) / ((x − y)² + (f(x) + f(y))²) ]
            = −(1/π) f′(y) [ (x − y) / ((x − y)² + (f(x) − f(y))²) − (x − y) / ((x − y)² + (f(x) + f(y))²) ]
              + (1/π) [ (f(x) − f(y)) / ((x − y)² + (f(x) − f(y))²) + (f(x) + f(y)) / ((x − y)² + (f(x) + f(y))²) ]

is clearly of the form (4.23) with j = 2 and property (4.24) satisfied. From Lemma 2.1 and inequality (5) in [57] we moreover get that inequality (4.25) holds with
    κ(x) = c |x|^{α−1}   if 0 < |x| ≤ 1,
    κ(x) = c |x|^{−2}    if |x| > 1,

where α ∈ (0, 1] is the Hölder exponent of f, and c is some positive constant.

Example 4.26. – Wave scattering on an unbounded rough surface. In [21], Chandler-Wilde, Ross and Zhang consider the problem corresponding to the preceding Example 4.25 for the Helmholtz equation in two dimensions: Given ϕ_0 ∈ BC(∂D), they seek ϕ ∈ C^2(D) ∩ BC(D̄) such that

    Δϕ + k²ϕ = 0   in D,
    ϕ = ϕ_0   on ∂D,

and such that ϕ satisfies an appropriate radiation condition and constraints on growth at infinity. Again, n = 1, and the surface function f shall be differentiable with an α-Hölder continuous first derivative for some α ∈ (0, 1]. This problem models the scattering of acoustic waves by a sound-soft rough surface; the same problem arises in time-harmonic electromagnetic scattering by a perfectly conducting rough surface. The constant k in the Helmholtz equation is the wave number. The authors reformulate this problem as a boundary integral equation which has exactly the form (4.26), where G(x, y) is now defined to be the Green's function
for Helmholtz's instead of Laplace's equation in the half plane H, which satisfies the impedance condition ∂G/∂x_2 + ikG = 0 on ∂H. As in Example 4.25, this boundary integral equation can be written in the form (4.27) with k(x, y) of the form (4.23) with j = 2 and property (4.24) satisfied, and also here inequality (4.25) holds with

    κ(x) = c |x|^{α−1}    if 0 < |x| ≤ 1,
    κ(x) = c |x|^{−3/2}   if |x| > 1,

where α ∈ (0, 1] is the Hölder exponent of f, and c is some positive constant.

Example 4.27. – Wave propagation over a flat inhomogeneous surface. The propagation of monofrequency acoustic or electromagnetic waves over flat inhomogeneous terrain has been modelled in two dimensions by the Helmholtz equation Δϕ + k²ϕ = 0 in the upper half plane D = H (so f ≡ 0 in (4.21)) with a Robin (or impedance) condition

    ∂ϕ/∂x_2 + ikβϕ = ϕ_0

on the boundary line ∂D. Here k, the wave number, is constant, β ∈ L^∞(∂D) is the surface admittance describing the local properties of the ground surface ∂D, and the inhomogeneous term ϕ_0 is in L^∞(∂D) as well. Similarly to Example 4.26, in fact using the same Green's function G(x, y) of the Helmholtz equation, Chandler-Wilde, Rahman and Ross in [20] reformulate this problem as a boundary integral equation on the real line,

    u(x) − ∫_{−∞}^{+∞} κ̃(x − y) z(y) u(y) dy = ψ(x),   x ∈ R,        (4.28)
where ψ ∈ BC is given and u ∈ BC is to be determined. The function κ̃ is in L^1, and z ∈ L^∞ is closely connected with the surface admittance β by z = i(1 − β). Note that the kernel function of the integral operator in (4.28) is of the form (4.23) with j = 1. The validity of (4.24) and (4.25) is trivial in this case.

For technical reasons we find it convenient to embed the class K_f of integral operators (4.22) with a kernel k(·,·) subject to (4.23), (4.24) and (4.25) into a somewhat larger Banach algebra of integral operators¹. Therefore, given f ∈ BC, put f_− := inf f, f_+ := sup f, and let R_f denote the set of all operators of the form

    (Bu)(x) = ∫_{R^n} k(x − y, f(x), f(y)) u(y) dy,   x ∈ R^n        (4.29)

¹ It will turn out that this Banach algebra actually is not that much larger than our class K_f.
with k ∈ C(R^n × [f_−, f_+]^2) compactly supported, and put

    B̂ := clos span { M_b B M_c : b ∈ BC, B ∈ R_f, c ∈ L^∞ },
    B  := clos alg  { M_b B M_c : b ∈ BC, B ∈ R_f, c ∈ L^∞ },
    Ĉ := clos span { M_b C_a M_c : b ∈ BC, a ∈ FL^1, c ∈ L^∞ },
    C  := clos alg  { M_b C_a M_c : b ∈ BC, a ∈ FL^1, c ∈ L^∞ }.
Remark 4.28. a) Here, clos span M denotes the closure in L(BC) of the set of all finite sums of elements of M ⊂ L(BC), and, as agreed before, clos alg M denotes the closure in L(BC) of the set of all finite sums of products of elements of M. So clos span M is the smallest closed subspace and clos alg M the smallest (not necessarily unital) Banach subalgebra of L(BC) containing M. In both cases we say they are generated by M.

b) The following proposition shows that B̂ and B do not depend on the function f ∈ BC, which is why we omit f in their notation.

c) It is easily seen that all operators in Ĉ map arbitrary elements of L^∞ into BC. Consequently, every K ∈ Ĉ is subject to the condition on K in Subsection 4.2.3.

d) The linear space Ĉ is the closure of the set of operators covered by Example 4.18. The following proposition shows that this set already contains all of K_f. More precisely, it coincides with the closure of K_f in the norm of L(BC) and with the other spaces and algebras introduced above.

Proposition 4.29. The identity clos K_f = B̂ = Ĉ = B = C ⊂ A holds.

Proof. Clearly, Ĉ ⊂ B̂, since C_a with a ∈ FL^1 can be approximated in the operator norm by convolution operators B with a continuous and compactly supported kernel, and these operators B are clearly in R_f.

For the reverse inclusion, B̂ ⊂ Ĉ, it is sufficient to show that the generators of B̂ are contained in Ĉ. We will prove this by showing that B ∈ Ĉ for all B ∈ R_f. So let k ∈ C(R^n × [f_−, f_+]^2) be compactly supported, and put B as in (4.29). To see that B ∈ Ĉ, take L ∈ N, choose f_− = s_1 < s_2 < · · · < s_{L−1} < s_L = f_+ equidistant in [f_−, f_+], and let ϕ_ξ denote the standard Courant hat function for this mesh that is centered at s_ξ. Then, since k is uniformly continuous, its piecewise linear interpolations (with respect to the variables s and t),
L ξ,η=1
k(r, sξ , sη ) ϕξ (s) ϕη (t),
r ∈ Rn , s, t ∈ [f− , f+ ],
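For concreteness, the Courant hat functions on this equidistant mesh can be written down explicitly; the formula below is the standard choice and is an assumption on my part, since the text only names the functions:

```latex
% Equidistant mesh: s_\xi = f_- + (\xi-1)\,\Delta with \Delta = (f_+ - f_-)/(L-1).
% Standard Courant hat centered at s_\xi (explicit form assumed, not from the text):
\varphi_\xi(s) \;=\; \max\!\Bigl(0,\; 1 - \tfrac{|s - s_\xi|}{\Delta}\Bigr),
\qquad s \in [f_-, f_+].
% These satisfy \varphi_\xi(s_\eta) = \delta_{\xi\eta} and
% \sum_{\xi=1}^{L} \varphi_\xi \equiv 1 on [f_-, f_+], so that
% k^{(L)}(r,\cdot,\cdot) is the piecewise bilinear interpolant of k(r,\cdot,\cdot).
```

The partition-of-unity property is what makes k^(L) an interpolation that converges uniformly to k as the mesh is refined.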
4.3. Boundary Integral Equations on Unbounded Surfaces
uniformly approximate k as L → ∞, whence the corresponding integral operators, with k replaced by k^(L) in (4.29),

   (B^(L) u)(x) = ∫_{Rn} Σ_{ξ,η=1}^{L} k(x − y, sξ, sη) ϕξ(f(x)) ϕη(f(y)) u(y) dy,    x ∈ Rn,    (4.30)
converge to B in the operator norm as L → ∞. But it is obvious from (4.30) that B^(L) ∈ Cˆ, which proves that also B ∈ Cˆ.

To see that B = C, it is sufficient to show that the generators of each of the two algebras are contained in the other algebra. But this follows from Bˆ = Cˆ, which is already proven.

That C is contained in the Banach algebra A generated by L1 convolutions and L∞ multiplications is obvious.

For the inclusion C ⊂ Cˆ it is sufficient to show that Ca Mb Cc ∈ Cˆ for all a, c ∈ F L1 and b ∈ L∞. So take an arbitrary b ∈ L∞ and let a = F κ and c = F λ with κ, λ ∈ L1. Without loss of generality, by (4.11), we may suppose that κ and λ are continuous and compactly supported, say κ(x) = λ(x) = 0 if ‖x‖ > ℓ. It is now easily checked that

   (Ca Mb Cc u)(x) = ∫_{Rn} k(x, y) u(y) dy,    x ∈ Rn,

with

   k(x, y) = ∫_{Rn} κ(x − z) b(z) λ(z − y) dz = ∫_{‖t‖≤ℓ} κ(t) b(x − t) λ(x − t − y) dt.

By taking a sufficiently fine partition {T1, . . . , TN} of [−ℓ, ℓ]^n and fixing tm ∈ Tm for m = 1, . . . , N, we can approximate the above arbitrarily closely by

   k(x, y) = Σ_{m=1}^{N} ∫_{Tm} κ(t) b(x − t) λ(x − t − y) dt
           ≈ Σ_{m=1}^{N} κ(tm) λ(x − tm − y) ∫_{Tm} b(x − t) dt
           = Σ_{m=1}^{N} κm λm(x − y) bm(x),    x, y ∈ Rn,    (4.31)

where κm = κ(tm), λm(x) = λ(x − tm) and bm(x) = ∫_{Tm} b(x − t) dt, the latter depending continuously on x. But this clearly shows that Ca Mb Cc ∈ Cˆ.

The inclusion B ⊂ clos Kf is also obvious since (4.24) and (4.25) hold if bi ∈ BC, ci ∈ L∞ and ki is compactly supported and continuous on all of Rn × [f−, f+]².

So it remains to show that clos Kf ⊂ Bˆ. This clearly follows if we show that Kf ⊂ Bˆ. So let K ∈ Kf be arbitrary; that means K is an integral operator
Chapter 4. Stability of the Finite Section Method
of the form (4.22) with a kernel k(·, ·) subject to (4.23), (4.24) and (4.25). For every ℓ ∈ N, let pℓ : [0, ∞) → [0, 1] denote a continuous function with support in [1/(2ℓ), 2ℓ] which is identically equal to 1 on [1/ℓ, ℓ]. Then, for i = 1, . . . , j,

   ki^(ℓ)(r, s, t) := pℓ(‖r‖) ki(r, s, t),    r ∈ Rn, s, t ∈ [f−, f+],

is compactly supported and continuous on Rn × [f−, f+]², whence Bi^(ℓ) ∈ Rf with

   (Bi^(ℓ) u)(x) := ∫_{Rn} ki^(ℓ)(x − y, f(x), f(y)) u(y) dy,    x ∈ Rn,

for all u ∈ BC. Now put

   k^(ℓ)(x, y) := Σ_{i=1}^{j} bi(x) ki^(ℓ)(x − y, f(x), f(y)) ci(y) = pℓ(‖x − y‖) k(x, y),

and let K^(ℓ) denote the operator (4.22) with k replaced by k^(ℓ); that is,

   K^(ℓ) = Σ_{i=1}^{j} M_{bi} Bi^(ℓ) M_{ci},    (4.32)

which is clearly in Bˆ. It remains to show that K^(ℓ) ⇒ K as ℓ → ∞. Therefore, note that

   ‖K − K^(ℓ)‖ ≤ sup_{x∈Rn} ∫_{Rn} |k(x, y) − k^(ℓ)(x, y)| dy
               = sup_{x∈Rn} ∫_{Rn} |(1 − pℓ(‖x − y‖)) k(x, y)| dy
               ≤ sup_{x∈Rn} ∫_{‖x−y‖<1/ℓ} |k(x, y)| dy + sup_{x∈Rn} ∫_{‖x−y‖>ℓ} |k(x, y)| dy
               ≤ sup_{x∈Rn} ∫_{‖x−y‖<1/ℓ} κ(x − y) dy + sup_{x∈Rn} ∫_{‖x−y‖>ℓ} κ(x − y) dy
               = ∫_{‖z‖<1/ℓ} κ(z) dz + ∫_{‖z‖>ℓ} κ(z) dz

with κ ∈ L1 from (4.25). But clearly, this goes to zero as ℓ → ∞. □
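To make the decay of this bound concrete, here is the tail computation for one illustrative choice of dominating kernel; the choice n = 1 and κ(z) = e^{−|z|}/2 is an assumption of mine, not taken from the text, but it is a valid L1 majorant of the type required in (4.25):

```latex
% Illustrative tails for n = 1, \kappa(z) = \tfrac{1}{2} e^{-|z|} (assumed example):
\int_{|z| < 1/\ell} \tfrac{1}{2} e^{-|z|}\, dz
   \;=\; 1 - e^{-1/\ell} \;\le\; \tfrac{1}{\ell},
\qquad
\int_{|z| > \ell} \tfrac{1}{2} e^{-|z|}\, dz \;=\; e^{-\ell}.
```

For this κ the estimate above gives ‖K − K^(ℓ)‖ ≤ 1/ℓ + e^{−ℓ}, so the convergence K^(ℓ) ⇒ K is of order 1/ℓ here; in general the rate is governed by how fast κ integrates to zero near the origin and at infinity.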
4.3.2 Limit Operators of these Integral Operators

In order to apply our previous results on the finite section method to A = I + K, we need to know about the limit operators of A, which clearly reduces to finding the limit operators of K ∈ Kf. But before we start looking for these limit operators,
we single out a subclass Kf$ of Kf all elements of which are rich operators. So, this time given f ∈ BUC, let

   Kf$ := { K ∈ Kf : bi ∈ BUC, ci ∈ L∞_{SC,$} for i = 1, . . . , j },
   Bˆ$ := closspan { Mb B Mc : b ∈ BUC, B ∈ Rf , c ∈ L∞_{SC,$} },
   B$  := closalg  { Mb B Mc : b ∈ BUC, B ∈ Rf , c ∈ L∞_{SC,$} },
   Cˆ$ := closspan { Mb Ca Mc : b ∈ BUC, a ∈ F L1 , c ∈ L∞_{SC,$} },
   C$  := closalg  { Mb Ca Mc : b ∈ BUC, a ∈ F L1 , c ∈ L∞_{SC,$} }

denote the rich counterparts of Kf, Bˆ, B, Cˆ and C, and put

   A$ := closalg { Mb , Ca Mc : b ∈ L∞_$ , a ∈ F L1 , c ∈ L∞_{SC,$} },

so that A ⊃ A$. Recall that, by Proposition 3.39, BC ∩ L∞_$ = BUC, and that Ca Mc is rich for all a ∈ F L1 and c ∈ L∞_{SC,$} by Proposition 3.79, whence every operator in A$ is rich. Then the following "rich version" of Proposition 4.29 holds.

Proposition 4.30. If f ∈ BUC, then it holds that clos Kf$ = Bˆ$ = Cˆ$ = B$ = C$ ⊂ A$. In particular, every K ∈ Kf$ is rich.

Proof. All we have to check is that the arguments made in the proof of Proposition 4.29 preserve membership of b and c in BUC and L∞_{SC,$}, respectively. Only two of these arguments involve multiplications by b and c at all.

The first one is the proof of the inclusion Bˆ ⊂ Cˆ, in which we show that every B ∈ Rf is contained in Cˆ. In fact, this construction even yields B ∈ Cˆ$, which can be seen as follows. B ∈ Rf is approximated in the operator norm by the operators B^(L) from (4.30). Since the Courant hats ϕξ and ϕη are in BUC and also f ∈ BUC, we get ϕξ ◦ f ∈ BUC and ϕη ◦ f ∈ BUC ⊂ L∞_{SC,$}. So B^(L) ∈ Cˆ$, and hence B ∈ Cˆ$.

The second argument involving multiplication operators is the proof of the inclusion C ⊂ Cˆ. Also at this point it is easily seen that the functions bm(x) = ∫_{Tm} b(x − t) dt that are invoked in (4.31) are in fact in BUC, whence C$ ⊂ Cˆ$. □

Now we are ready to say something about the limit operators of K ∈ Kf$. Not surprisingly, the key to these operators is the behaviour of the surface function f and of the multipliers bi and ci at infinity. We will show that every limit operator Kh of K is of the same form (4.22) but with f, bi and ci replaced by f^(h), bi^(h) and c̃i^(h), respectively, in (4.23). We will even formulate and prove the analogous result for operators in B$. The key step to this result is the following lemma.
Lemma 4.31. Let B ∈ Rf; that is, B is of the form (4.29) with a compactly supported kernel function k ∈ C(Rn × [f−, f+]²), and let c ∈ L∞_{SC,$}. If a sequence h = (hm) ⊂ Zn tends to infinity and the functions f^(h) and c̃^(h) exist, then the limit operator (B Mc)h exists and is the integral operator

   ((B Mc)h u)(x) = ∫_{Rn} k(x − y, f^(h)(x), f^(h)(y)) c̃^(h)(y) u(y) dy,    x ∈ Rn.    (4.33)

Proof. Choose ℓ > 0 large enough that k(r, s, t) = 0 for all r ∈ Rn with ‖r‖ ≥ ℓ and all s, t ∈ [f−, f+]. Now take a sequence h = (hm) ⊂ Zn such that the functions f^(h) and c̃^(h) exist. By definition of f^(h) and c̃^(h), this is equivalent to

   ‖ f|_{hm+U} − f^(h)|_U ‖_∞ → 0    and    ‖ c|_{hm+U} − c̃^(h)|_U ‖_1 → 0    (4.34)

as m → ∞ for every compactum U ⊂ Rn. Moreover, it is easily seen that

   (V_{−hm} B Mc V_{hm} u)(x) = ∫_{Rn} k(x − y, f(x + hm), f(y + hm)) c(y + hm) u(y) dy

for all x ∈ Rn and u ∈ BC. Abbreviating Am := V_{−hm} B Mc V_{hm} − (B Mc)h, we get that (Am u)(x) = ∫_{Rn} dm(x, y) u(y) dy, where

   |dm(x, y)| = | k(x − y, f(x + hm), f(y + hm)) c(y + hm) − k(x − y, f^(h)(x), f^(h)(y)) c̃^(h)(y) |
             ≤ | k(x − y, f(x + hm), f(y + hm)) − k(x − y, f^(h)(x), f^(h)(y)) | · ‖c‖_∞
                + ‖k‖_∞ · | c(y + hm) − c̃^(h)(y) |    (4.35)

for all x, y ∈ Rn and m ∈ N. Moreover, dm(x, y) = 0 if ‖x − y‖ ≥ ℓ. Now take an arbitrary τ > 0 and put U := [−τ − ℓ, τ + ℓ]^n and V := [−τ, τ]^n. Then, by (4.35),

   ‖Pτ Am‖ = ess sup_{x∈V} ∫_{Rn} |dm(x, y)| dy = ess sup_{x∈V} ∫_U |dm(x, y)| dy → 0

as m → ∞ since (4.34) holds and k is uniformly continuous. Analogously,

   ‖Am Pτ‖ = ess sup_{x∈Rn} ∫_V |dm(x, y)| dy = ess sup_{x∈U} ∫_V |dm(x, y)| dy → 0

as m → ∞. This proves that (B Mc)h from (4.33) is indeed the limit operator of B Mc with respect to the sequence h = (hm). □
Proposition 4.32. a) Let K = Mb B Mc with b ∈ BUC, B ∈ Rf and c ∈ L∞_{SC,$}. If h = (hm) ⊂ Zn tends to infinity and all functions b^(h), f^(h) and c̃^(h) exist, then the limit operator Kh exists and is the integral operator

   (Kh u)(x) = b^(h)(x) ∫_{Rn} k(x − y, f^(h)(x), f^(h)(y)) c̃^(h)(y) u(y) dy,    x ∈ Rn.    (4.36)

b) Every limit operator of K = Mb B Mc with b ∈ BUC, B ∈ Rf and c ∈ L∞_{SC,$} is of this form (4.36).

c) The mapping K → Kh acting on {Mb B Mc : b ∈ BUC, B ∈ Rf, c ∈ L∞_{SC,$}} as given in (4.36) extends to a continuous Banach algebra homomorphism on all of B$ by passing to an appropriate subsequence of h if required. In particular, all limit operators Kh of K ∈ Kf ⊂ B are of the form (4.22) with k replaced by

   k̂^(h)(x, y) = Σ_{i=1}^{j} bi^(h)(x) ki(x − y, f^(h)(x), f^(h)(y)) c̃i^(h)(y).    (4.37)

Proof. a) From Proposition 3.4 we get that Kh exists and is equal to (Mb)h (B Mc)h, which is exactly (4.36) by Lemma 4.31.

b) Suppose g ⊂ Zn is a sequence tending to infinity that leads to a limit operator Kg of K. Since b, f ∈ L∞_$ and c ∈ L∞_{SC,$}, there is a subsequence h of g such that the functions b^(h), f^(h) and c̃^(h) exist. But then we are in the situation of a), and the limit operator Kh of K exists and is equal to (4.36). Since h is a subsequence of g, we have Kg = Kh.

c) The extension to B$ follows from Proposition 3.4. The formula for the limit operators of K ∈ Kf follows from the approximation of K by (4.32), for which we explicitly know the limit operators. □

Example 4.33. Suppose K ∈ Kf where the surface function f and the functions bi and ci are all slowly oscillating. Let h ⊂ Zn be a sequence tending to infinity such that bi^(h), f^(h) and c̃i^(h) exist; otherwise pass to a subsequence of h with this property, which is always possible. From Corollary 3.48 we know that all of bi^(h), f^(h) and c̃i^(h) = ci^(h) are constant. Then, by Proposition 4.32 c), the limit operator Kh is the integral operator with kernel function

   k̂^(h)(x, y) = Σ_{i=1}^{j} bi^(h) c̃i^(h) ki(x − y, f^(h), f^(h)),    x, y ∈ Rn,    (4.38)

which is just a pure operator of convolution by κ̂^(h) ∈ L1 with

   κ̂^(h)(x − y) = k̂^(h)(x, y)    for all x, y ∈ Rn.    (4.39)
4.3.3 A Fredholm Criterion in I + Kf

Having studied integral operators K ∈ Kf$ a bit, we now come back to our original problem: a second kind integral equation u + Ku = b on BC. We write this equation as Au = b with A = I + K. From Propositions 4.30 and 4.32 we know that A ∈ A$, and all its limit operators are of the form Ah = I + Kh; that is,

   (Ah u)(x) = u(x) + ∫_{Rn} Σ_{i=1}^{j} bi^(h)(x) ki(x − y, f^(h)(x), f^(h)(y)) c̃i^(h)(y) u(y) dy    (4.40)

for all u ∈ BC and x ∈ Rn.

It is easily seen that C is contained in the ideal J (recall (4.13)) in A that is generated by convolution operators. Consequently, the equality A = I + K with K ∈ Kf ⊂ C is exactly the decomposition (4.14) of A into a multiplication operator and a locally compact part. Proposition 4.12 a) and our knowledge of the operator spectrum of A yield the following sufficient criterion for Fredholmness of A.

Proposition 4.34. Let A = I + K with K ∈ Kf$. If all limit operators (4.40) of A are invertible on L∞, then A is Fredholm on L∞ and hence on BC.

Proof. Since Kf$ ⊂ A$, A is rich and band-dominated. Now Theorem 3.109 and Theorem 1 show that the invertibility of all limit operators of A on L∞ implies that A is invertible at infinity. Finally, from µ(A) = I and Proposition 4.12 a) we get that A is Fredholm on L∞, which clearly implies its Fredholmness on BC as well. □

A much weaker result along these lines that, however, might be of interest for proving non-invertibility of A is the following: if A = I + K with K ∈ Kf is invertible on L∞, then all limit operators of A are invertible on L∞.
4.3.4 The BC-FSM in I + Kf

Since K ∈ Kf maps L∞ into BC, we can, by Proposition 4.22, study the applicability of the BC-FSM (4.19) for A = I + K by passing to its FSM (4.1) instead. In order to apply Theorem 4.2 on the finite section method, we restrict ourselves to operators on the axis, n = 1. By Theorem 4.2 we have to look at all operators of the form

   Q V_{−τ} Ah V_τ Q + P    with    Ah ∈ σ+^op(A)    (4.41)

and

   P V_{−τ} Ah V_τ P + Q    with    Ah ∈ σ−^op(A)    (4.42)
with τ ∈ R. The operator (4.41) is invertible on L∞ if and only if the operator that maps u ∈ L∞ to

   u(x) + ∫_{−∞}^{0} Σ_{i=1}^{j} bi^(h)(x − τ) ki(x − y, f^(h)(x − τ), f^(h)(y − τ)) c̃i^(h)(y − τ) u(y) dy    (4.43)

with x < 0 is invertible on the negative half axis, or, equivalently,

   u(x) + ∫_{−∞}^{τ} Σ_{i=1}^{j} bi^(h)(x) ki(x − y, f^(h)(x), f^(h)(y)) c̃i^(h)(y) u(y) dy,    x < τ,

is invertible on the half axis (−∞, τ) for the corresponding sequence h = (h1, h2, . . .) leading to a limit operator at plus infinity. Analogously, the operator (4.42) is invertible if and only if the operator that maps u to

   u(x) + ∫_{0}^{+∞} Σ_{i=1}^{j} bi^(h)(x − τ) ki(x − y, f^(h)(x − τ), f^(h)(y − τ)) c̃i^(h)(y − τ) u(y) dy    (4.44)

with x > 0 is invertible on the positive half axis, or, equivalently,

   u(x) + ∫_{τ}^{+∞} Σ_{i=1}^{j} bi^(h)(x) ki(x − y, f^(h)(x), f^(h)(y)) c̃i^(h)(y) u(y) dy,    x > τ,

is invertible on the half axis (τ, +∞) for the corresponding sequence h = (h1, h2, . . .) leading to a limit operator at minus infinity.

By Proposition 4.22, Theorem 4.2 and Remark 4.3 a), the modified finite section method is applicable to A = I + K if and only if

• A is invertible on L∞,
• for every sequence h leading to a limit operator at plus infinity, the set {(4.43)}_{τ∈R} is essentially invertible on L∞(−∞, 0), and
• for every sequence h leading to a limit operator at minus infinity, the set {(4.44)}_{τ∈R} is essentially invertible on L∞(0, +∞).

Remark 4.35. a) If ci ∈ BUC for all i = 1, . . . , j, then both the operators (4.43) and (4.44) depend continuously on τ ∈ R, which is why 'essentially invertible' is the same as 'uniformly invertible' in the two statements above in this case.

b) If, as in Example 4.33, all of f, bi and ci are slowly oscillating, then we have Ah = I + C_{Fκ̂^(h)} with κ̂^(h) ∈ L1 as introduced in (4.39) in Example 4.33. In this case, by Proposition 4.34, A is Fredholm if −1 is not in the spectrum of any C_{Fκ̂^(h)}; that is, all the (closed, connected) curves Fκ̂^(h)(Ṙ) ⊂ C stay away
from the point −1. Moreover, the BC-FSM is applicable to A if and only if A is invertible and all curves Fκ̂^(h)(Ṙ), on top of staying away from −1, have winding number zero with respect to this point.

c) In some cases (see Example 4.36 below) the functions k̂^(h)(x, y) from (4.38) in Example 4.33 even depend on x − y only, which shows that the same is true for κ̂^(h)(x − y) := k̂^(h)(x, y) then. If we then look at the applicability of the BC-FSM for n = 1, we get the following interesting result: the invertibility of A already implies the applicability of the BC-FSM. Indeed, if A is invertible, then all limit operators Ah are invertible, which shows that all functions Fκ̂^(h) stay away from the point −1. But from Fκ̂^(h)(z) = Fκ̂^(h)(−z) for all z ∈ R we get that the point Fκ̂^(h)(z) traces the same curve (just in opposite directions) for z < 0 and for z > 0. So the winding number of the curve Fκ̂^(h)(Ṙ) around −1 is automatically zero then.

Example 4.36. Let us come back to Example 4.25 where, as we found out earlier, n = 1, j = 2, b1 ≡ −1/π, c1 = f′, b2 ≡ 1/π, c2 ≡ 1,

   k1(r, s, t) = r / (r² + (s − t)²) − r / (r² + (s + t)²)

and

   k2(r, s, t) = (s − t) / (r² + (s − t)²) + (s + t) / (r² + (s + t)²).

In addition, suppose that f′(x) → 0 as |x| → ∞. Then, by Lemma 3.45 b), all of b1, b2, c1, c2 and f are slowly oscillating, and, for every sequence h leading to infinity such that the strict limit f^(h) exists, we have that b1^(h) ≡ −1/π, c1^(h) ≡ 0, b2^(h) ≡ 1/π, c2^(h) ≡ 1, and f^(h) ≥ f− > 0 is a constant function, whence

   k̂^(h)(x, y) = (1/π) [ (f^(h) − f^(h)) / ((x − y)² + (f^(h) − f^(h))²)
                        + (f^(h) + f^(h)) / ((x − y)² + (f^(h) + f^(h))²) ]
               = (1/π) · 2 f^(h) / ((x − y)² + 4 (f^(h))²)
               =: κ̂^(h)(x − y),    x, y ∈ Rn,

where f^(h) is an accumulation value of f at infinity.

Now it remains to check the function values of the Fourier transform Fκ̂^(h). A little exercise in contour integration shows that Fκ̂^(h)(z) = exp(−2 f^(h) |z|) for z ∈ R (cf. Remark 4.35 c)). So Fκ̂^(h)(Ṙ) stays away from −1 and has winding number zero. Consequently, by our criteria derived earlier, we get that A is Fredholm and that the BC-FSM is applicable if and only if A is invertible.

As discussed in [57], by other, somewhat related arguments it can in fact be shown that A is invertible, even when f is not slowly oscillating. Precisely, injectivity of A can be established via applications of the maximum principle to the associated BVP, and then limit operator-type arguments can be used to establish surjectivity.
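The contour integration step can be checked against the classical Fourier pair for the Poisson kernel; the sketch below assumes the convention (Fκ)(z) = ∫ κ(x) e^{−ixz} dx, which the text does not fix:

```latex
% Classical Poisson-kernel pair: for a > 0,
%   \int_{-\infty}^{\infty} \frac{1}{\pi}\,\frac{a}{x^2 + a^2}\, e^{-ixz}\, dx
%     \;=\; e^{-a|z|}, \qquad z \in \mathbb{R},
% obtained by closing the contour in the half plane where e^{-ixz} decays and
% picking up the residue at the pole x = \pm i a.
% With a = 2 f^{(h)} this gives exactly
\mathcal{F}\hat{\kappa}^{(h)}(z)
  \;=\; \int_{-\infty}^{\infty} \frac{1}{\pi}\,
        \frac{2 f^{(h)}}{x^{2} + 4 (f^{(h)})^{2}}\, e^{-i x z}\, dx
  \;=\; e^{-2 f^{(h)} |z|}.
```

Since f^(h) > 0, these values lie in (0, 1], so the curve Fκ̂^(h)(Ṙ) is contained in the positive real axis and can neither meet −1 nor wind around it.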
4.4. Comments and References
179
We note also that the modified version of the BC-FSM proposed in [19] could be applied in this case. (This method first approximates the actual surface function f by an f̃ whose derivative f̃′ is compactly supported before applying the finite section.) For this modified version, the arguments of [19] and the invertibility of A establish applicability even when f is not slowly oscillating.
4.4 Comments and References

There is a lot of ongoing research on the finite section method for band-dominated operators in general, as well as for special coefficients. Steffen Roch has written a very nice paper [75] on this topic that reviews the state of the art for the case p = 2, X = C, including some new contributions to the theory. The finite section method for band-dominated operators with almost periodic coefficients has been studied by Rabinovich, Roch and Silbermann in [71].

The method that is called BC-FSM here is usually referred to as "the finite section method" in the literature dealing with operators A = I + K on BC.

The first versions of Theorem 4.2 appeared in [67], [68] and [69]. The results for the continuous case and D = R go back to [50] and [48]. Proposition 4.8 is the main result of [51] by Rabinovich, Roch and the author. Later, Roch [74] generalized this result to band-dominated operators with slowly oscillating symbol. Most of Section 4.2 goes back to [50]. Parts of this, for example Subsection 4.2.3, were essentially already contained in [46]. Section 4.3 is recent work by Chandler-Wilde and the author [18]. For some particular problems of the discussed type, Fredholmness and even invertibility have been established before [2, 3, 4, 21, 22, 23]. Also note that a sufficient criterion for the applicability of a modified form of the finite section method for rough surface scattering was derived in [19]. We would, however, like to underline the generality of the approach presented here, as well as its very nice symbiosis with the limit operator concept.
Index — symbols — → ∞, convergence . . . . . . . . . . . . . . . 77 → ∞s , convergence . . . . . . . . . . . . . . 77 ⇒, norm convergence . . . . . . . . . . . . . 3 →, strong convergence . . . . . . . . . . . . 3 P → , Pconvergence . . . . . . . . . . . . . 42 ∗ → , ∗strong convergence . . . . . . . 10 K → , Kstrong convergence . . . . . . 10 → , pre∗strong convergence . . . . 10 [A], matrix representation . . . . . . . . 16 [A , B] = AB − BA, commutator . . 3  U , the Lebesgue measure of U . . 1 #U , the counting measure of U . . . 1  x , maximum norm of vector x . . 1  x 2 , Euclidian norm of vector x. 77 [x], integer part of x ∈ R or Rn . . . .1 ⊕Aτ , stacked operator . . . . . . . . . . . 60 A+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 A− . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 A, AY , algebra of operators . . . . . 157 Ao , AoY , algebra of operators . . . . 157 A$ , algebra of operators . . . . . . . . 159 AG , discretization of A . . . . . . . . . . . . 5 AP, almost periodic functions . . . 104 APZ , Zalmost periodic func’s. . .104 A∗ , the adjoint operator . . . . . . . . . . .7 A , the preadjoint operator . . . . . . . 7 Ah , limit operator of A w.r.t. h . . 78 Aτ , ﬁnite section of A . . . . . . . . . . . 40 B, linear space of operators . . . . . 170 B$ , linear space of operators . . . . 173 ˆ algebra of operators . . . . . . . . . . 170 B, Bˆ$ , algebra of operators . . . . . . . . .173 BC, bdd&continuous functions . . . 23 BDOp , the banddominated op’s . 21
BDOpS , in BDOp with preadjoint 80 BDOpS,$$ , rich operators in BDOpS .80 BDOp$ , rich operators in BDOp . . . 80 BO, class of band operators . . . . . . 20 BO$ , rich operators in BO . . . . . . . 80 BUC, bdd&unif continuous f’s . . . 23 C, the complex numbers . . . . . . . . . . 1 C, linear space of operators . . . . . 170 C$ , linear space of operators . . . . 173 ˆ algebra of operators . . . . . . . . . . 170 C, Cˆ$ , algebra of operators . . . . . . . . . 173 Ca , convolution operator . . . . . . . . 156 C(R˙ n ), function space . . . . . . . . . . 102 C(Rn ), function space . . . . . . . . . . 102 CL, function space . . . . . . . . . . . . . . 100 E0 , subspace of E . . . . . . . . . . . . . . . . 12 F (E, P), space of sequences . . . . . . 47 F L1 , Fourier transforms of L1 . . 156 F κ, Fourier transform of κ . . . . . . 156 G, discretization operator . . . . . . . . . 5 H, the hypercube [0, 1)n . . . . . . . . . . 1 J , JY , ideal of operators . . . . . . . . 158 Kf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167 Kf$ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173 K(E), compact operators on E . . . .9 K(E, P), operator ideal . . . . . . . . . . 11 Lp , function space. . . . . . . . . . . . . . . . .4 L∞ 0 , decaying functions . . . . . . . . . 101 L∞ SC$ $ . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 L∞ , rich functions . . . . . . . . . . . . . . . 93 $ L(E), bdd lin operators on E . . . . . 3 L(E, P), operator algebra . . . . . . . . 11 L$ (E), rich operators . . . . . . . . . . . . 80 L$ (E, P), rich op’s in L(E, P) . . . 80
182 M(B), max ideal space of B . . . . 118 M∞ (∞ ), characters at ∞ . . . . . . 119 MZn (∞ ), characters at Zn . . . . . 119 M (E). . . . . . . . . . . . . . . . . . . . . . . . . . . .19 M (E) . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Mb , multiplication operator . . . . . . . 6 ˆ b , generalized multiplicator . . . . . . 6 M ˆ f , generalized multiplicator . . . . 24 M N, the natural numbers 1, 2, · · · . . . 1 N0 , the nonnegatives 0, 1, · · · . . . . . . 1 N , noise functions . . . . . . . . . . . . . . 111 P , projector . . . . . . . . . . . . . . . . . 87, 152 PU , cutoﬀ projection . . . . . . . . . . . . . 8 Pk , ﬁnite section projection . . . . . . . 8 Pτ , ﬁnite section projection . . . . . . . 8 P, approximate identity . . . . . . . . . . . 8 P−lim, the Plimit . . . . . . . . . . . . . . . 42 Φ+ (E), semiFredholm op’s . . . . . . 71 Q, projector . . . . . . . . . . . . . . . . .87, 152 Q, the hypercube [−1, 1]n . . . . . . . . 92 Q, the rational numbers . . . . . . . . . . . 1 QC , function space. . . . . . . . . . . . . .110 QSC , function space. . . . . . . . . . . . .110 QU , cutoﬀ projection . . . . . . . . . . . . . 8 Qk , ﬁnite section projection . . . . . . . 8 Qτ , ﬁnite section projection . . . . . . . 8 R, the real numbers . . . . . . . . . . . . . . . 1 R+ , positive real numbers . . . . . . . . . 1 Rp , set of operators . . . . . . . . . . . . . . 73 S, operators with a preadjoint . . . . 8 S n−1 , unit sphere . . . . . . . . . . . . . . . . 77 SAP, semialmost periodic func’s105 SO, slowly oscillating functions. . .97 SOs , slowly oscillating functions . .97 SOC, slowly osc.&cont’s func’s . . 101 T, the complex unit circle . . . . . . . . . 1 Ts , set of step functions . . . . . . . . . . 93 VA , set of shifts of A . . . . . . . . . . . . 134 Vα , shift operator . . . . . . . . . . . . . . . . . 6 W, the Wiener algebra . . . . . . . . . . . 26 X, some Banach space . . . . . . . . . . . . 2 X∗ , the dual space . . . . . . . . . . 
. . . . . . 6 X , the predual space . . . . . . . . . . . . 6 Z, the integers . . . . . . . . . . . . . . . . . . . . 1
Index aαβ , matrix entry . . . . . . . . . . . . . . . . 16 algB M . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 ˇ βG, StoneCech compactiﬁcation120 (h) b , the symbol of (Mb )h . . . . . . . . 92 ˜b(h) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 cen B, center of B . . . . . . . . . . . . . . 122 closalgB M . . . . . . . . . . . . . . . . . . . . . . . . 2 closidB M . . . . . . . . . . . . . . . . . . . . . . . . . 2 ˇ ∂G, StoneCech boundary. . . . . . .120 dist (U, V ), distance of sets . . . . . . . . 1 e, the unit element . . . . . . . . . . . . . . . . 2 fA (d), information ﬂow . . . . . . . . . . 32 f (U ), essential range . . . . . . . . . . . . . 96 f (∞), local essential range at ∞ . 96 f (∞s ), local ess range at ∞s . . . . . 96 im A, image of A . . . . . . . . . . . . . . . . . . 3 ind A, index of A. . . . . . . . . . . . . . . . .51 ind+ A, plusindex of A . . . . . . . . . . 88 ind− A, minusindex of A . . . . . . . . 88 ker A, kernel of A . . . . . . . . . . . . . . . . . 3 p , space of sequences . . . . . . . . . . . . . 4 µ, projector on AY . . . . . . . . . . . . . 159 n, dimension, see Rn , Zn . . . . . . . . . . 1 ν(A), lower norm of A . . . . . . . . . . . 69 ω, ωα , ωα,R , functions . . . . . . . . . . . . 83 oscU (f ), oscillation of f at U . . . . 92 oscx (f ), oscillation of f at x . . . . . 92 p, Lebesgue exponent . . . . . . . . . . . . . 4 ψ, ψα , ψα,R , functions . . . . . . . . . . . . 83 r(X), set of operators . . . . . . . . . . . . 71 , projector on AY . . . . . . . . . . . . . .159 spB x, spectrum of x in B . . . . . . . . . 2 spε A, εpseudospectrum of A . . . . 89 spess A, essential spectrum of A. . .52 σ op (A), operator spectrum of A . . 79 σsop (A), local operator spectrum . . 79 —A— adjoint operator . . . . . . . . . . . . . . . . . . .7 Zalmost periodic . . . . . . . . . . . . . . . 104 almost periodic . . . . . . . . . . . . 103, 104 applicability . . . . . . . . . . . . . . . . . . . . . 
39 approximate identity . . . . . . . . . . . . . . 9 approximation method . . . . . . . . . . . 38
Index —B— Banach algebra . . . . . . . . . . . . . . . . . . . 2 band matrix . . . . . . . . . . . . . . . . . . . . . 20 band operator . . . . . . . . . . . . . . . . . . . 20 band width . . . . . . . . . . . . . . . . . . . . . . 20 banddominated operator . . . . . . . . 21 BCFSM . . . . . . . . . . . . . . . . . . . . . . . .164 bounded below . . . . . . . . . . . . . . . . . . . 69 —C— center . . . . . . . . . . . . . . . . . . . . . . . . . . 122 central localization . . . . . . . . . . . . . . 122 character . . . . . . . . . . . . . . . . . . . . . . . 118 coeﬃcients of a band operator. . . .21 coeﬃcients of a banddom op . . . . 21 column . . . . . . . . . . . . . . . . . . . . . . . . . . 19 commutator . . . . . . . . . . . . . . . . . . . . . . . 3 convergence ∗strong . . . . . . . . . . . . . . . . . . . . . 10 Kstrong . . . . . . . . . . . . . . . . . . . . 10 P. . . . . . . . . . . . . . . . . . . . . . . . . . .42 norm . . . . . . . . . . . . . . . . . . . . . . . . 3 pre∗strong . . . . . . . . . . . . . . . . . . 10 strong . . . . . . . . . . . . . . . . . . . . . . . . 3 to ∞ . . . . . . . . . . . . . . . . . . . . . . . . 77 to ∞s . . . . . . . . . . . . . . . . . . . . . . . 77 convolution . . . . . . . . . . . . . . . . . . . . . 156 convolution operator . . . . . . . . 26, 156 cross diagonal . . . . . . . . . . . . . . . . . . . . 19 —D— diagonal . . . . . . . . . . . . . . . . . . . . . . . . . 19 diagonals of A . . . . . . . . . . . . . . . . . . . 21 dual space . . . . . . . . . . . . . . . . . . . . . . . . 6 —E— elementwise invertible . . . . . . . . . . . . 36 essential cluster point . . . . . . . . . . . . 96 essential range . . . . . . . . . . . . . . . . . . . 96 essential spectrum . . . . . . . . . . . . . . . 52 essentially invertible . . . . . . . . . . . . . 36 —F— factor algebra . . . . . . . . . . . . . . . . . . . . . 2
183 ﬁnal part . . . . . . . . . . . . . . . . . . . . . . . . 41 ﬁnite section method for BC . . . . 164 ﬁnite section method, FSM . . . . . . 40 ﬁnite section projector . . . . . . . . . . . . 9 Fourier transform . . . . . . . . . . . . . . . 156 Fredholm operator . . . . . . . . . . . . . . . 51 Fredholm’s alternative . . . . . . . . . . . 52 —G— Gelfand transform . . . . . . . . . . . . . . 118 Gelfand transformation . . . . . . . . . 118 generalized multiplicator . . . . . . . . . . 6 —H— homomorphism . . . . . . . . . . . . . . . . . . 75 —I— ideal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 information ﬂow . . . . . . . . . . . . . . . . . 32 integral operator . . . . . . . . . . . . . . . . . 16 singular . . . . . . . . . . . . . . . . . . . . 130 inverse closed . . . . . . . . . . . . . . . . . . . . . 2 invertible . . . . . . . . . . . . . . . . . . . . . . . 2, 3 at inﬁnity . . . . . . . . . . . . . . . . . . . 54 at inﬁnity, weakly . . . . . . . . . . . 54 elementwise . . . . . . . . . . . . . . . . . 36 essentially . . . . . . . . . . . . . . . . . . . 36 uniformly . . . . . . . . . . . . . . . . . . . 36 invertibility at inﬁnity . . . . . . . . . . . 54 —K— kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 kernel function . . . . . . . . . . . . . . . . . . . 16 —L— Laurent operator . . . . . . . . . . . . . . . . . 22 layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 limit operator . . . . . . . . . . . . . . . . . . . . 78 local algebra . . . . . . . . . . . . . . . . . . . . 123 local essential range . . . . . . . . . . . . . . 96 local operator spectrum . . . . . . . . . . 79 local oscillation . . . . . . . . . . . . . . . . . . 92 local principle
184 by Allan and Douglas . . . 122 main idea . . . . . . . . . . . . . . . . . . . 76 local representatives . . . . . . . . . . . . 123 locally compact . . . . . . . . . . . . . 59, 157 lower norm . . . . . . . . . . . . . . . . . . . . . . 69 —M— main diagonal. . . . . . . . . . . . . . . . . . . .19 massive set . . . . . . . . . . . . . . . . . . . . . . 37 matrix entry . . . . . . . . . . . . . . . . . . . . . 16 matrix representation . . . . . . . . . . . . 16 maximal ideal. . . . . . . . . . . . . . . . . . .118 minusindex . . . . . . . . . . . . . . . . . . . . . .88 multiplicator . . . . . . . . . . . . . . . . . . . . . . 6 —N— neighbourhood of ∞ . . . . . . . . . . . . . 77 neighbourhood of ∞s . . . . . . . . . . . . 77 noise. . . . . . . . . . . . . . . . . . . . . . . . . . . .111 norm convergence . . . . . . . . . . . . . . . . . 3 —O— operator norm . . . . . . . . . . . . . . . . . . . . 3 operator spectrum . . . . . . . . . . . . . . . 79 ordinary function . . . . . . . . . . . . . . . . 93 —P— Pconvergence . . . . . . . . . . . . . . . . . . . 42 PFredholm operator. . . . . . . . . . . . .53 Plimit . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Psequentially compact . . . . . . . . . 139 plusindex . . . . . . . . . . . . . . . . . . . . . . . 88 poor function . . . . . . . . . . . . . . . . . . . . 93 preadjoint operator . . . . . . . . . . . . . . . 7 predual space . . . . . . . . . . . . . . . . . . . . 6 pseudoergodic . . . . . . . . . . . . . . . . . . 109 pseudoergodic w.r.t. D . . . . . . . . . 108 pseudospectrum . . . . . . . . . . . . . . . . . .89 —Q— quasidiagonal operator . . . . . . . . . . . 12 —R— reﬂexive. . . . . . . . . . . . . . . . . . . . . . . . . . .7 relaxation. . . . . . . . . . . . . . . . . . . . . . .101
Index rich function . . . . . . . . . . . . . . . . . . . . . 93 rich operator . . . . . . . . . . . . . . . . . . . . . 80 row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 —S— Schr¨ odinger operator . . . . . . . . . . . .132 semialmost periodic function . . . 105 semiFredholm operator . . . . . . . . . . 71 shift invariant . . . . . . . . . . . . . . . . . . . . 81 shift operator . . . . . . . . . . . . . . . . . . . . . 6 shifts of A . . . . . . . . . . . . . . . . . . . . . . 134 singular integral operator . . . . . . . 130 slowly oscillating . . . . . . . . . . . . 97, 100 spectrum. . . . . . . . . . . . . . . . . . . . . . . . . .2 stable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 stacked operator . . . . . . . . . . . . . . . . . 60 step function . . . . . . . . . . . . . . . . . . . . . 93 ˇ StoneCech boundary . . . . . . . . . . . 120 ˇ StoneCech compactiﬁcation . . . . 120 strong convergence . . . . . . . . . . . . . . . . 3 ∗strong convergence . . . . . . . . . . . . . 10 suﬃcient family . . . . . . . . . . . . . . . . . . 76 suﬃciently smooth . . . . . . . . . . . . . . . 37 —T— Toeplitz operator . . . . . . . . . . . . . . . 129 translation invariant . . . . . . . . . . . . . 81 —U— uniformly bounded below . . . . . . . 141 uniformly invertible . . . . . . . . . . . . . . 36 unit element . . . . . . . . . . . . . . . . . . . . . . 2 unital homomorphism . . . . . . . . . . . . 75 —W— weakly invertible at inﬁnity . . . . . . 54 Wiener algebra . . . . . . . . . . . . . . . . . . 26 WienerHopf operator . . . . . . . . . . .129
Bibliography

[1] G. R. Allan: Ideals of vector-valued functions, Proc. London Math. Soc. 18 (1968), 193–216.

[2] T. Arens: Uniqueness for Elastic Wave Scattering by Rough Surfaces, SIAM J. Math. Anal. 33 (2001), 461–476.

[3] T. Arens: Existence of Solution in Elastic Wave Scattering by Unbounded Rough Surfaces, Math. Meth. Appl. Sci. 25 (2002), 507–528.

[4] T. Arens, S. N. Chandler-Wilde and K. Haseloh: Solvability and spectral properties of integral equations on the real line. II. Lp spaces and applications, J. Integral Equations Appl. 15 (2003), no. 1, 1–35.

[5] A. Böttcher and S. M. Grudsky: Spectral Properties of Banded Toeplitz Matrices, SIAM, Philadelphia 2005.

[6] A. Böttcher, Yu. I. Karlovich and V. S. Rabinovich: The method of limit operators for one-dimensional singular integrals with slowly oscillating data, J. Operator Theory 43 (2000), 171–198.

[7] A. Böttcher, Yu. I. Karlovich and I. Spitkovski: Convolution Operators and Factorization of Almost Periodic Matrix Functions, Birkhäuser Verlag, Basel 2002.

[8] A. Böttcher and B. Silbermann: Infinite Toeplitz and Hankel matrices with operator-valued entries, SIAM J. Math. Anal. 27 (1996), no. 3, 805–822.

[9] A. Böttcher and B. Silbermann: Analysis of Toeplitz Operators, Akademie-Verlag, Berlin, 1989 and Springer Verlag, Berlin, Heidelberg, New York 1990; 2nd edition: Springer Verlag, Berlin, Heidelberg, New York 2006.

[10] A. Böttcher and B. Silbermann: Introduction to Large Truncated Toeplitz Matrices, Springer Verlag, New York 1999.

[11] P. C. Bressloff and S. Coombes: Mathematical reduction techniques for modelling biophysical neural networks, in: Biophysical Neural Networks, ed. Roman R. Poznanski, Gordon and Breach Scientific Publishers, 2001.
[12] R. C. Buck: Bounded Continuous Functions on a Locally Compact Space, Michigan Math. J. 5 (1958), 95–104.

[13] C. Corduneanu: Almost Periodic Functions, Chelsea Publishing Company, New York, 1989.

[14] C. H. Chan, K. Pak and H. Sangani: Monte-Carlo simulations of large-scale problems of random rough surface scattering and applications to grazing incidence, IEEE Trans. Anten. Prop. 43, 851–859.

[15] S. N. Chandler-Wilde: Some uniform stability and convergence results for integral equations on the real line and projection methods for their solution, IMA J. Numer. Anal. 13 (1993), no. 4, 509–535.

[16] S. N. Chandler-Wilde, S. Langdon and L. Ritter: A high-wavenumber boundary-element method for an acoustic scattering problem, Phil. Trans. R. Soc. Lond. A 362 (2004), 647–671.

[17] S. N. Chandler-Wilde and M. Lindner: Generalized Collective Compactness and Limit Operators, submitted to Integral Equations Operator Theory (2006).

[18] S. N. Chandler-Wilde and M. Lindner: Boundary integral equations on unbounded rough surfaces: Fredholmness and the Finite Section Method, submitted for publication (2006).

[19] S. N. Chandler-Wilde and A. Meier: On the stability and convergence of the finite section method for integral equation formulations of rough surface scattering, Math. Methods Appl. Sci. 24 (2001), no. 4, 209–232.

[20] S. N. Chandler-Wilde, M. Rahman and C. R. Ross: A fast two-grid and finite section method for a class of integral equations on the real line with application to an acoustic scattering problem in the half-plane, Numerische Mathematik 93 (2002), no. 1, 1–51.

[21] S. N. Chandler-Wilde, C. R. Ross and B. Zhang: Scattering by infinite one-dimensional rough surfaces, Proc. R. Soc. Lond. A 455 (1999), 3767–3787.

[22] S. N. Chandler-Wilde and C. R. Ross: Scattering by rough surfaces: The Dirichlet problem for the Helmholtz equation in a non-locally perturbed half-plane, Math. Meth. Appl. Sci. 19 (1996), 959–976.

[23] S. N. Chandler-Wilde and B. Zhang: On the solvability of a class of second kind integral equations on unbounded domains, J. Math. Anal. Appl. 214 (1997), no. 2, 482–502.

[24] S. N. Chandler-Wilde and B. Zhang: A generalised collectively compact operator theory with an application to second kind integral equations on unbounded domains, J. Integral Equations Appl. 14 (2002), 11–52.
[25] D. Colton and R. Kress: Integral Equation Methods in Scattering Theory, Wiley, New York 1983.

[26] E. B. Davies: Spectral Theory of Pseudo-Ergodic Operators, Commun. Math. Phys. 216 (2001), 687–704.

[27] R. G. Douglas: Banach Algebra Techniques in Operator Theory, Academic Press, New York, London 1972.

[28] J. A. Favard: Sur les équations différentielles à coefficients presque-périodiques, Acta Math. 51 (1927), 31–81.

[29] I. Gohberg and I. A. Feldman: Convolution Equations and Projection Methods for Their Solutions, Nauka, Moskva 1971 (Russian; Engl. translation: Amer. Math. Soc. Transl. of Math. Monographs 41, Providence, R.I. 1974).

[30] I. Gohberg and M. G. Krein: Systems of integral equations on the semi-axis with kernels depending on the difference of arguments, Usp. Mat. Nauk 13 (1958), no. 5, 3–72 (Russian).

[31] I. Gohberg and N. Krupnik: One-dimensional Linear Singular Integral Operators, Vol. I, Birkhäuser Verlag, Basel, Boston, Stuttgart 1992.

[32] M. B. Gorodetski: On the Fredholm theory and the finite section method for multidimensional discrete convolutions, Sov. Math. 25 (1981), no. 4, 9–12.

[33] A. Grothendieck: Une Caractérisation Vectorielle-Métrique des Espaces L1, Canad. Math. J. 7 (1955), 552–561.

[34] W. Hackbusch: Integral Equations, Birkhäuser Verlag, 1995.

[35] R. Hagen, S. Roch and B. Silbermann: Spectral Theory of Approximation Methods for Convolution Equations, Birkhäuser Verlag, 1995.

[36] R. Hagen, S. Roch and B. Silbermann: C*-Algebras and Numerical Analysis, Marcel Dekker, Inc., New York, Basel, 2001.

[37] P. R. Halmos: Ten problems in Hilbert space, Bull. Amer. Math. Soc. 76 (1970), 887–933.

[38] N. Hatano and D. R. Nelson: Vortex pinning and non-Hermitian quantum mechanics, Phys. Rev. B 77 (1996), 8651–8673.

[39] Y. Katznelson: An Introduction to Harmonic Analysis, John Wiley 1968, Dover Publications 1976 and Cambridge University Press 2004.

[40] M. G. Krein: Integral equations on the semi-axis with kernels depending on the difference of arguments, Usp. Mat. Nauk 13 (1958), no. 2, 3–120 (Russian).

[41] V. G. Kurbatov: Functional Differential Operators and Equations, Kluwer Academic Publishers, Dordrecht, Boston, London 1999.
[42] S. Langdon and S. N. Chandler-Wilde: A wavenumber independent boundary element method for an acoustic scattering problem, Isaac Newton Institute for Mathematical Sciences, Preprint NI03049-CPD, to appear in SIAM J. Numer. Anal.

[43] B. V. Lange and V. S. Rabinovich: On the Noether property of multidimensional discrete convolutions, Mat. Zam. 37 (1985), no. 3, 407–421.

[44] B. V. Lange and V. S. Rabinovich: On the Noether property of multidimensional operators of convolution type with measurable bounded coefficients, Izv. Vyssh. Uchebn. Zaved., Mat. 6 (1985), 22–30 (Russian).

[45] B. V. Lange and V. S. Rabinovich: Pseudodifferential operators on Rn and limit operators, Mat. Sbornik 129 (1986), no. 2, 175–185 (Russian; English transl. Math. USSR Sbornik 577 (1987), no. 1, 183–194).

[46] M. Lindner and B. Silbermann: Finite Sections in an Algebra of Convolution and Multiplication Operators on L∞(R), TU Chemnitz, Preprint 6 (2000).

[47] M. Lindner and B. Silbermann: Invertibility at infinity of band-dominated operators in the space of essentially bounded functions, in: Operator Theory – Advances and Applications 147, The Erhard Meister Memorial Volume, Birkhäuser 2003, 295–323.

[48] M. Lindner: The finite section method in the space of essentially bounded functions: An approach using limit operators, Numer. Func. Anal. & Optim. 24 (2003), no. 7&8, 863–893.

[49] M. Lindner: Classes of Multiplication Operators and Their Limit Operators, Journal for Analysis and its Applications 23 (2004), no. 1, 187–204.

[50] M. Lindner: Limit Operators and Applications on the Space of Essentially Bounded Functions, Dissertation, TU Chemnitz 2003.

[51] M. Lindner, V. S. Rabinovich and S. Roch: Finite sections of band operators with slowly oscillating coefficients, Linear Algebra and Applications 390 (2004), 19–26.

[52] X. Liu, G. Strang and S. Ott: Localized eigenvectors from widely spaced matrix modifications, SIAM J. Discr. Math. 16 (2003), no. 3, 479–498.

[53] W. C. H. McLean: Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University Press 2000.

[54] E. M. Muhamadiev: On normal solvability and Noether property of elliptic operators in spaces of functions on Rn, Part I: Zapiski nauchnih sem. LOMI 110 (1981), 120–140 (Russian).

[55] E. M. Muhamadiev: On normal solvability and Noether property of elliptic operators in spaces of functions on Rn, Part II: Zapiski nauchnih sem. LOMI 138 (1985), 108–126 (Russian).
[56] F. Noether: Über eine Klasse singulärer Integralgleichungen, Math. Ann. 82 (1921), 42–63.

[57] M. D. Preston, P. G. Chamberlain and S. N. Chandler-Wilde: An integral equation method for a boundary value problem arising in unsteady water wave problems, Proceedings of the UK BIM5, Liverpool, September 2005.

[58] S. Prössdorf and B. Silbermann: Numerical Analysis for Integral and Related Operator Equations, Akademie-Verlag, Berlin, 1991 and Birkhäuser Verlag, Basel, Boston, Berlin 1991.

[59] V. S. Rabinovich: Fredholmness of pseudodifferential operators on Rn in the scale of Lp,q spaces, Sib. Mat. Zh. 29 (1988), no. 4, 635–646 (Russian; English transl. Sib. Math. J. 29 (1998), no. 4, 635–646).

[60] V. S. Rabinovich: Criterion for local invertibility of pseudodifferential operators with operator symbols and some applications, in: Proceedings of the St. Petersburg Mathematical Society, V, AMS Transl. Series 2, 193 (1998), 239–260.

[61] V. S. Rabinovich and S. Roch: Local theory of the Fredholmness of band-dominated operators with slowly oscillating coefficients, in: Toeplitz Matrices and Singular Integral Equations (Pobershau, 2001), 267–291, Oper. Theory Adv. Appl., 135, Birkhäuser, Basel, 2002.

[62] V. S. Rabinovich and S. Roch: An axiomatic approach to the limit operators method, in: Singular Integral Operators, Factorization and Applications, Operator Theory: Advances and Applications, 142, 263–285, Birkhäuser, Basel, 2003.

[63] V. S. Rabinovich and S. Roch: Algebras of approximation sequences: Spectral and pseudospectral approximation of band-dominated operators, in: Linear Algebra, Numerical Functional Analysis and Wavelet Analysis, 167–188, Allied Publ., New Delhi, 2003.

[64] V. S. Rabinovich and S. Roch: The essential spectrum of Schrödinger operators on lattices, Preprint Nr. 2443, Technical University Darmstadt, 2006 (to appear in Journal of Physics, Ser. A, 2006).

[65] V. S. Rabinovich and S. Roch: Reconstruction of input signals in time-varying filters, Preprint Nr. 2418, Technical University Darmstadt, 2006 (to appear in Numer. Func. Anal. & Optim., 2006).

[66] V. S. Rabinovich, S. Roch and J. Roe: Fredholm indices of band-dominated operators, Integral Equations Operator Theory 49 (2004), no. 2, 221–238.

[67] V. S. Rabinovich, S. Roch and B. Silbermann: Fredholm Theory and Finite Section Method for Band-dominated Operators, Integral Equations Operator Theory 30 (1998), no. 4, 452–495.
[68] V. S. Rabinovich, S. Roch and B. Silbermann: Band-dominated operators with operator-valued coefficients, their Fredholm properties and finite sections, Integral Equations Operator Theory 40 (2001), no. 3, 342–381.

[69] V. S. Rabinovich, S. Roch and B. Silbermann: Algebras of approximation sequences: Finite sections of band-dominated operators, Acta Appl. Math. 65 (2001), 315–332.

[70] V. S. Rabinovich, S. Roch and B. Silbermann: Limit Operators and Their Applications in Operator Theory, Operator Theory: Advances and Applications, 150, Birkhäuser, Basel 2004.

[71] V. S. Rabinovich, S. Roch and B. Silbermann: Finite sections of band-dominated operators with almost periodic coefficients, Preprint Nr. 2398, Technical University Darmstadt, 2005 (to appear in Modern Operator Theory and Applications, 170, The Simonenko Anniversary Volume, Birkhäuser 2006).

[72] M. Reed and B. Simon: Methods of Modern Mathematical Physics, II: Fourier Analysis, Self-Adjointness, Academic Press, New York, San Francisco, London 1975.

[73] J. Roe: Band-dominated Fredholm operators on discrete groups, Integral Equations Operator Theory 51 (2005), no. 3, 411–416.

[74] S. Roch: Band-dominated operators on ℓp-spaces: Fredholm indices and finite sections, Acta Sci. Math. 70 (2004), no. 3–4, 783–797.

[75] S. Roch: Finite sections of band-dominated operators, Preprint Nr. 2355, Technical University Darmstadt, 2004 (to appear in Memoirs AMS, 2006).

[76] S. Roch and B. Silbermann: Non-strongly converging approximation methods, Demonstratio Math. 22 (1989), no. 3, 651–676.

[77] S. Sakai: C*-algebras and W*-algebras, Springer, Berlin, Heidelberg, New York 1971.

[78] D. Sarason: Toeplitz operators with semi-almost periodic symbols, Duke Math. J. 44 (1977), no. 2, 357–364.

[79] S. Sauter and C. Schwab: Randelementmethoden, Teubner 2004.

[80] H. H. Schaefer: Topological Vector Spaces, MacMillan Company, New York, Collier-MacMillan Ltd., London 1966.

[81] B. Ya. Shteinberg: Operators of convolution type on locally compact groups, Rostov-na-Donu, Dep. at VINITI (1979), 715–780 (Russian).

[82] I. B. Simonenko: Operators of convolution type in cones, Mat. Sb. 74 (116) (1967), 298–313 (Russian).
[83] I. B. Simonenko: On multidimensional discrete convolutions, Mat. Issled. 3 (1968), no. 1, 108–127 (Russian).

[84] E. M. Stein and G. Weiss: Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, 1971.

[85] L. N. Trefethen and M. Embree: Spectra and Pseudospectra, Princeton University Press, 2005.

[86] L. Tsang, C. H. Chan, H. Sangani, A. Ishimaru and P. Phu: A banded matrix iterative approach to Monte-Carlo simulations of large-scale random rough surface scattering, TE case, J. Electromagnetic Waves Appl. 7 (1993), 1185–1200.

[87] N. Wiener and E. Hopf: Über eine Klasse singulärer Integralgleichungen, Sitzungsberichte Akad. Wiss. Berlin (1931), 696–706.

[88] K. Yosida: Functional Analysis, Springer 1995.