T. Define F and D on $[0,\infty)$ by asking that F(t) = F(T) and D(t) = D(T) if t > T. Check the details to see if the variation of parameters formula will work on [0, T].

Exercise 2.3.3. Continue the reasoning of Exercise 2.3.2 and suppose that it is known that $\int_0^\infty |Z(t)|\,dt < \infty$. If D is not of exponential order but F is bounded, can one still conclude that solutions of (2.3.1) are bounded?

We return to

$$x(t) = f(t) + \int_0^t B(t-s)x(s)\,ds \qquad (2.3.6)$$

with $f : [0,\infty) \to R^n$ continuous and B a continuous $n \times n$ matrix, both f and B of exponential order.

Theorem 2.3.3. If H is defined by (2.3.7) and if H is differentiable, then the unique solution of (2.3.6) is given by

$$x(t) = f(t) + \int_0^t H'(t-s)f(s)\,ds. \qquad (2.3.9)$$
2. LINEAR EQUATIONS
Proof. We have

$$L(H) = \{s[I - L(B)]\}^{-1}$$

and $[I - L(B)]L(x) = L(f)$, whose product yields $s^{-1}L(x) = L(H)L(f)$, or $L(1)L(x) = L(H)L(f)$, so that

$$L\Big(\int_0^t x(t-s)\,ds\Big) = L\Big(\int_0^t H(t-s)f(s)\,ds\Big),$$

which implies

$$\int_0^t x(s)\,ds = \int_0^t H(t-s)f(s)\,ds.$$

We differentiate this to obtain (2.3.9), because H(0) = I. This completes the proof.

The matrices Z and H are also called resolvents, which will be discussed in Chapter 7 in some detail.
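Equation (2.3.6) is also convenient for a quick numerical experiment. The sketch below is our own illustration, not from the text: it marches the scalar convolution equation with a trapezoid rule, and with the (assumed) choices f(t) = 1 and B ≡ 1 the equation is equivalent to x' = x, x(0) = 1, so the computed solution should track $e^t$.

```python
import numpy as np

def solve_volterra(f, B, T=1.0, h=0.01):
    """March x(t) = f(t) + integral_0^t B(t-s) x(s) ds with the trapezoid rule.

    The unknown x(t_k) enters the quadrature with weight (h/2) B(0),
    so each step solves a scalar linear equation for it.
    """
    n = int(round(T / h))
    t = np.linspace(0.0, T, n + 1)
    x = np.zeros(n + 1)
    x[0] = f(t[0])
    for k in range(1, n + 1):
        hist = 0.5 * B(t[k] - t[0]) * x[0]
        hist += sum(B(t[k] - t[j]) * x[j] for j in range(1, k))
        x[k] = (f(t[k]) + h * hist) / (1.0 - 0.5 * h * B(0.0))
    return t, x

# f(t) = 1, B = 1:  x(t) = 1 + int_0^t x(s) ds, i.e., x' = x with x(0) = 1.
t, x = solve_volterra(lambda s: 1.0, lambda s: 1.0)
```

At t = 1 the computed value agrees with e to a few decimal places; the same marching scheme applies once the resolvent H of (2.3.9) is in hand.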
2.4
Stability
Consider the system

$$x' = A(t)x + \int_0^t C(t,s)x(s)\,ds \qquad (2.4.1)$$

with A an $n \times n$ matrix of functions continuous for $0 \le t < \infty$ and C(t,s) an $n \times n$ matrix of functions continuous for $0 \le s \le t < \infty$. If $\phi : [0,t_0] \to R^n$ is a continuous initial function, then $x(t,\phi)$ will denote the solution on $[t_0,\infty)$. If the information is needed, we may denote the solution by $x(t,t_0,\phi)$. Frequently, it suffices to write x(t). Notice that x(t) = 0 is a solution of (2.4.1), and it is called the zero solution.
Definition 2.4.1. The zero solution of (2.4.1) is (Liapunov) stable if, for each $\varepsilon > 0$ and each $t_0 \ge 0$, there exists $\delta > 0$ such that $|\phi(t)| < \delta$ on $[0,t_0]$ and $t \ge t_0$ imply $|x(t,\phi)| < \varepsilon$.

Definition 2.4.2. The zero solution of (2.4.1) is uniformly stable if, for each $\varepsilon > 0$, there exists $\delta > 0$ such that $t_0 \ge 0$, $|\phi(t)| < \delta$ on $[0,t_0]$, and $t \ge t_0$ imply $|x(t,\phi)| < \varepsilon$.

Definition 2.4.3. The zero solution of (2.4.1) is asymptotically stable if it is stable and if for each $t_0 \ge 0$ there exists $\delta > 0$ such that $|\phi(t)| < \delta$ on $[0,t_0]$ implies $|x(t,\phi)| \to 0$ as $t \to \infty$.

Definition 2.4.4. The zero solution of (2.4.1) is uniformly asymptotically stable (U.A.S.) if it is uniformly stable and if there exists $\eta > 0$ such that, for each $\varepsilon > 0$, there is a T > 0 such that $t_0 \ge 0$, $|\phi(t)| < \eta$ on $[0,t_0]$, and $t \ge t_0 + T$ imply $|x(t,\phi)| < \varepsilon$.

We begin with a brief reminder of Liapunov theory for ordinary differential equations. The basic idea is particularly simple. Consider a system of ordinary differential equations

$$x' = G(t,x), \qquad (2.4.2)$$

with $G : [0,\infty) \times R^n \to R^n$ continuous and G(t,0) = 0, so that x = 0 is a solution. The stability definitions apply to (2.4.2) with $\phi(t) = x(t_0)$ on $[0,t_0]$. Suppose first that there is a scalar function $V : [0,\infty) \times R^n \to [0,\infty)$ having continuous first partial derivatives with respect to all variables. Suppose also that $V(t,x) \to \infty$ as $|x| \to \infty$ uniformly for $0 \le t < \infty$; for example, suppose there is a continuous function $W : R^n \to [0,\infty)$ with $W(x) \to \infty$ as $|x| \to \infty$ and $V(t,x) \ge W(x)$.
Notice that if x(t) is any solution of (2.4.2) on $[0,\infty)$, then V(t,x(t)) is a scalar function of t and, even if x(t) is not explicitly known, it is possible to compute $V'(t,x(t))$ using the chain rule and (2.4.2). We have

$$V'(t,x(t)) = \frac{\partial V}{\partial x_1}x_1' + \cdots + \frac{\partial V}{\partial x_n}x_n' + \frac{\partial V}{\partial t}. \qquad \text{(a)}$$

But $G(t,x) = (dx_1/dt,\ldots,dx_n/dt)^T$, and so (a) is actually

$$V'(t,x(t)) = \operatorname{grad} V \cdot G + \partial V/\partial t. \qquad \text{(b)}$$

The right-hand side of (b) consists of known functions of t and x. If V is shrewdly chosen, many conclusions may be drawn from the properties of $V'$. For example, if $V'(t,x(t)) \le 0$, then $t \ge t_0$ implies $V(t,x(t)) \le V(t_0,x(t_0))$, and because $V(t,x) \to \infty$ as $|x| \to \infty$ uniformly for $0 \le t < \infty$, x(t) is bounded. The object is to find a suitable V function.

We now illustrate how V may be constructed in the linear constant-coefficient case. Let A be an $n \times n$ constant matrix all of whose characteristic roots have negative real parts, and consider the system

$$x' = Ax. \qquad (2.4.3)$$
All solutions tend to zero exponentially, so that the matrix

$$B = \int_0^\infty [\exp At]^T[\exp At]\,dt \qquad (2.4.4)$$

is well defined, symmetric, and positive definite. Furthermore,

$$A^TB + BA = -I \qquad (2.4.5)$$

because

$$-I = [\exp At]^T[\exp At]\Big|_0^\infty = \int_0^\infty (d/dt)\big([\exp At]^T[\exp At]\big)\,dt = \int_0^\infty \big(A^T[\exp At]^T[\exp At] + [\exp At]^T[\exp At]A\big)\,dt = A^TB + BA.$$

Thus, if we select V as a function of x alone, say

$$V(x) = x^TBx, \qquad (2.4.6)$$
then for x(t) a solution of (2.4.3) we have

$$V'(x(t)) = (x^T)'Bx + x^TBx' = (x')^TBx + x^TBx' = x^TA^TBx + x^TBAx = x^T(A^TB + BA)x = -x^Tx.$$
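The construction of B can be checked numerically by turning $A^TB + BA = -I$ into an ordinary linear system. This is a hedged sketch: the 2×2 matrix A and the helper name `liapunov_B` are our choices, and we use the row-major identities $\mathrm{vec}(MX) = (M \otimes I)\,\mathrm{vec}(X)$ and $\mathrm{vec}(XM) = (I \otimes M^T)\,\mathrm{vec}(X)$.

```python
import numpy as np

def liapunov_B(A):
    """Solve A^T B + B A = -I by vectorizing to a linear system (row-major vec)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)   # acts on vec(B)
    B = np.linalg.solve(M, (-I).ravel()).reshape(n, n)
    return B

# characteristic roots of A are -1 and -3, both with negative real part
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
B = liapunov_B(A)
```

B comes out symmetric and positive definite, and along x' = Ax one then has $(x^TBx)' = -x^Tx$.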
The matrix B will be used extensively throughout the following discussions.

In some of the most elementary problems, asking V(t,x) to have continuous first partial derivatives is too severe. Instead, it suffices to ask that $V : [0,\infty) \times R^n \to [0,\infty)$ is continuous and

V satisfies a local Lipschitz condition in x. $\qquad (2.4.7)$

Definition 2.4.5. A function V(t,x) satisfies a local Lipschitz condition in x on a subset D of $[0,\infty) \times R^n$ if, for each compact subset L of D, there is a constant K = K(L) such that $(t,x_1)$ and $(t,x_2)$ in L imply that $|V(t,x_1) - V(t,x_2)| \le K|x_1 - x_2|$.

… $\le 2k[x^TBx]^{1/2}, \qquad (2.5.3)$
$|Bx| \le$ … $F : [0,\infty) \to R^n$ be bounded and continuous. Suppose B satisfies $A^TB + BA = -I$ and $\alpha$ and $\beta$ are the smallest and largest eigenvalues of B, respectively. If $\int_0^t |BC(t,s)|\,ds \le M$ for $0 \le t < \infty$ and $2\beta M/\alpha < 1$, then all solutions of (2.5.14) are bounded.

Proof. If the theorem is false, there is a solution x(t) with $\limsup_{t\to\infty} x^T(t)Bx(t) = +\infty$. Thus, there are values of t with |x(t)| as large as we please and $[x^T(t)Bx(t)]' \ge 0$, say, at t = S, and $x^T(t)Bx(t) \le x^T(S)Bx(S)$ if $t \le S$. Hence, at t = S we have

$$[x^T(t)Bx(t)]' = -x^T(t)x(t) + \int_0^t 2x^T(s)C^T(t,s)Bx(t)\,ds + 2F^T(t)Bx(t) \ge 0$$

or $x^T(S)x(S) \le$
… $\lambda(t)\int_t^\infty |C(u,s)|\,du, \qquad (2.5.21)$

for $0 \le s \le t < \infty$. For, in that case, if $\bar K > K$ and $k_1$ …, and $\beta c_1/h < \alpha$ for some $\alpha < 1$.
In the convolution case there is a natural way to search for $\eta(t)$.

Exercise 2.5.5. Consider the vector system

$$x' = Ax + \int_0^t D(t-s)x(s)\,ds \qquad (2.5.23)$$

in which the characteristic roots of A have negative real parts and $D(t) > 0$ on $[0,\infty)$. Let B, k, and K be defined as before and suppose that there is a d > K and $k_1 > 0$ with

$$k \ge k_1 \ge d\int_t^\infty |D(u-t)|\,du.$$

Prove the following result.
… If

$$|D(v)| \ge \lambda(v)\int_v^\infty |D(u)|\,du,$$

then there is a constant q > 0 such that, for x(t) a solution of (2.5.23) and

$$V(t,x(\cdot)) = [x^TBx]^{1/2} + d\int_0^t \int_t^\infty |D(u-s)|\,du\,|x(s)|\,ds,$$
we have $V'(t,x(\cdot)) \le$ …

… such that

$$\int_{t_1}^t \int_0^u B(u-s)\,ds\,du \to -\infty \quad\text{as } t \to \infty,$$

then there exists $t_1 > 0$ such that if x(0) = 1, then $x(t_1) = 0$.

Proof. If the theorem is false, then x(t) has a positive minimum, say $x_1$, on $[0,t_1]$. Then for $t \ge t_1$ we have

$$x'(t) \le \ldots$$

so that $x(t) \to -\infty$ as $t \to \infty$, a contradiction. This completes the proof.
2.6
Uniform Asymptotic Stability
We noticed in Theorem 2.5.1 that every solution x(t) of (2.5.1) may satisfy $\int_0^\infty |x(t)|\,dt < \infty$. …

$$x(t) = Z(t)x(t_0) + \int_0^t Z(t-s)F(s)\,ds$$

or

$$x(t+t_0,t_0,\phi) = Z(t)\phi(t_0) + \int_0^t Z(t-s)\Big\{\int_0^{t_0} D(s+t_0-u)\phi(u)\,du\Big\}\,ds,$$

so that

$$x(t+t_0,t_0,\phi) = Z(t)\phi(t_0) + \int_0^t Z(t-s)\Big\{\int_0^{t_0} D(s+u)\phi(t_0-u)\,du\Big\}\,ds. \qquad (2.6.5)$$

Next, notice that, because A is constant and Z(t) is $L^1[0,\infty)$, then AZ(t) is $L^1[0,\infty)$. Also, the convolution of two functions in $L^1[0,\infty)$ is
$L^1[0,\infty)$, as may be seen by Fubini's theorem [see Rudin (1966, p. 156)]. Thus $\int_0^t D(t-s)Z(s)\,ds$ is $L^1[0,\infty)$, and hence $Z'(t)$ is $L^1[0,\infty)$. Now, because $Z'(t)$ is $L^1[0,\infty)$, it follows that Z(t) has a limit as $t \to \infty$. But, because Z(t) is $L^1[0,\infty)$, the limit is zero. Moreover, the convolution of an $L^1[0,\infty)$ function with a function tending to zero as $t \to \infty$ yields a function tending to zero as $t \to \infty$. (Hint: Use the dominated convergence theorem.) Thus

$$Z'(t) = AZ(t) + \int_0^t D(t-s)Z(s)\,ds \to 0$$

as $t \to \infty$.

Examine (2.6.5) again and review the definition of uniform asymptotic stability (Definition 2.4.4). We must show that $|\phi(t)| < \eta$ on $[0,t_0]$ implies that $x(t+t_0,t_0,\phi) \to 0$ independently of $t_0$. Now in (2.6.5) we see that $Z(t)\phi(t_0) \to 0$ independently of $t_0$. The second term is bounded by

$$\eta \int_0^t |Z(t-s)|\int_0^\infty |D(s+u)|\,du\,ds,$$

which is the convolution of an $L^1$ function with a function tending to zero as $t \to \infty$ and, hence, is a (bounded) function tending to zero as $t \to \infty$. Thus, $x(t+t_0,t_0,\phi) \to 0$ as $t \to \infty$ independently of $t_0$. The proof is complete.

Corollary 1. If the conditions of Theorem 2.5.1(b) hold and if C(t,s) is of convolution type, then the zero solution of (2.5.1) is uniformly asymptotically stable.

Proof. Under the stated conditions, we saw that solutions of (2.5.1) were $L^1[0,\infty)$.

Corollary 2. Consider

$$x' = Ax + \int_0^t D(t-s)x(s)\,ds \qquad (2.6.1)$$

with A being an $n \times n$ constant matrix and D continuous on $[0,\infty)$. Suppose that each solution of (2.6.1) with initial condition $x(0) = x_0$ tends to zero as $t \to \infty$. If there is a function $\lambda(s) \in L^1[0,\infty)$ with $\int_0^{t_0} |D(s+u)|\,du \le \lambda(s)$ for $0 \le t_0 < \infty$ and $0 \le s < \infty$, then the zero solution of (2.6.1) is uniformly asymptotically stable.
Proof. We see that $Z(t) \to 0$ as $t \to \infty$, and in (2.6.5), then, we have $|x(t+t_0,t_0,\phi)| \to 0$ as $t \to \infty$ uniformly in $t_0$. This completes the proof.

Example 2.6.1. Let $D(t) = (t+1)^{-n}$ for n > 2. Then

$$\int_0^{t_0} D(s+u)\,du = \int_0^{t_0} (s+u+1)^{-n}\,du = \frac{-(s+u+1)^{-n+1}}{n-1}\Big|_0^{t_0} \le \frac{(s+1)^{-n+1}}{n-1},$$

which is $L^1$.

Recall that for a linear system

$$x' = A(t)x \qquad (2.6.6)$$

with A(t) an $n \times n$ matrix continuous on $[0,\infty)$, the following are equivalent: (i) All solutions of (2.6.6) are bounded. (ii) The zero solution is stable. The following are also equivalent under the same conditions: (i) All solutions of (2.6.6) tend to zero. (ii) The zero solution is asymptotically stable. However, when A(t) is T-periodic, then the following are equivalent: (i) All solutions of (2.6.6) are bounded. (ii) The zero solution is uniformly stable. Also, A(t) periodic implies the equivalence of: (i) All solutions of (2.6.6) tend to zero. (ii) The zero solution is uniformly asymptotically stable. (iii) All solutions of

$$x' = A(t)x + F(t) \qquad (2.6.7)$$

are bounded for each bounded and continuous $F : [0,\infty) \to R^n$.
Property (iii) is closely related to Theorem 2.6.1. Also, the result is true with |A(t)| bounded instead of periodic. But with A periodic, the result is simple because, from Floquet theory, there is a nonsingular T-periodic matrix P and a constant matrix R with $Z(t) = P(t)e^{Rt}$ being an $n \times n$ matrix satisfying (2.6.6). By the variation of parameters formula each solution x(t) of (2.6.7) on $[0,\infty)$ may be expressed as

$$x(t) = Z(t)x(0) + \int_0^t Z(t)Z^{-1}(s)F(s)\,ds.$$

In particular, when x(0) = 0, then

$$x(t) = \int_0^t P(t)e^{R(t-s)}P^{-1}(s)F(s)\,ds.$$

Now P(t) and $P^{-1}(s)$ are continuous and bounded. One argues that if x(t) is bounded for each bounded F, then the characteristic roots of R have negative real parts; but it is more to the point that

$$\int_0^\infty |P(t)e^{Rt}|\,dt < \infty.$$

Thus, one argues from (iii) that solutions of (2.6.6) are $L^1[0,\infty)$ and then that the zero solution of (2.6.6) is uniformly asymptotically stable. We shall shortly (proof of Theorem 2.6.6) see a parallel argument for (2.6.1). The preceding discussion is a special case of a result by Perron for |A(t)| bounded. A proof may be found in Hale (1969, p. 152).

Problem 2.6.1. Examine (2.6.5) and decide if: (a) boundedness of all solutions of (2.6.1) implies that x = 0 is stable; (b) whenever all solutions of (2.6.1) tend to zero, then the zero solution of (2.6.1) is asymptotically stable.

We next present a set of equivalent statements for a scalar Volterra equation of convolution type in which A is constant and D(t) is positive. An n-dimensional counterpart is given in Theorem 2.6.6.

Theorem 2.6.2. Let A be a positive real number, $D : [0,\infty) \to (0,\infty)$ continuous, $\int_0^\infty D(t)\,dt < \infty$, $-A + \int_0^\infty D(t)\,dt \ne 0$, and

$$x' = -Ax + \int_0^t D(t-s)x(s)\,ds. \qquad (2.6.8)$$

The following statements are equivalent.
(a) All solutions tend to zero.
(b) $-A + \int_0^\infty D(t)\,dt < 0$.
(c) Each solution is $L^1[0,\infty)$.
(d) The zero solution is uniformly asymptotically stable.
(e) The zero solution is asymptotically stable.

Proof. We show that each statement implies the succeeding one and, of course, (e) implies (a). Suppose (a) holds, but $-A + \int_0^\infty D(t)\,dt > 0$. Choose $t_0$ so large that $\int_0^{t_0} D(t)\,dt > A$ and let $\phi(t) = 2$ on $[0,t_0]$. Then we claim that $x(t,\phi) > 1$ on $[t_0,\infty)$. If not, then there is a first $t_1$ with $x(t_1) = 1$ and, therefore, $x'(t_1) \le 0$. But

$$x'(t_1) = -Ax(t_1) + \int_0^{t_1} D(t_1-s)x(s)\,ds = -A + \int_0^{t_1} D(s)x(t_1-s)\,ds \ge -A + \int_0^{t_1} D(s)\,ds \ge -A + \int_0^{t_0} D(s)\,ds > 0,$$

a contradiction. Thus (a) implies (b).

Let (b) hold and define

$$V(t,x(\cdot)) = |x| + \int_0^t \int_t^\infty D(u-s)\,du\,|x(s)|\,ds,$$

so that if x(t) is a solution of (2.6.8), then

$$V'(t,x(\cdot)) \le -A|x| + \int_0^t D(t-s)|x(s)|\,ds + \int_t^\infty D(u-t)\,du\,|x| - \int_0^t D(t-s)|x(s)|\,ds = \Big[-A + \int_0^\infty D(v)\,dv\Big]|x| = -\alpha|x|, \quad \text{where } \alpha := A - \int_0^\infty D(v)\,dv > 0.$$
An integration yields

$$0 \le V(t,x(\cdot)) \le V(t_0,\phi) - \alpha\int_{t_0}^t |x(s)|\,ds,$$

as required. Thus, (b) implies (c). Now Theorem 2.6.1 shows that (c) implies (d). Clearly (d) implies (e), and the proof is complete.

To this point we have depended on the strength of A to overcome the effects of D(t) in

$$x' = Ax + \int_0^t D(t-s)x(s)\,ds \qquad (2.6.1)$$
to produce boundedness and stability. We now turn from that view and consider a system

$$x' = A(t)x + \int_0^t C(t,s)x(s)\,ds + F(t), \qquad (2.6.9)$$

with A an $n \times n$ matrix continuous on $[0,\infty)$, C(t,s) an $n \times n$ matrix continuous for $0 \le s \le t < \infty$, and $F : [0,\infty) \to R^n$ bounded and continuous. Suppose that

$$G(t,s) = -\int_t^\infty C(u,s)\,du \qquad (2.6.10)$$

is defined and continuous for $0 \le s \le t < \infty$. Define a matrix Q on $[0,\infty)$ by

$$Q(t) = A(t) - G(t,t) \qquad (2.6.11)$$

and require that

$$Q \text{ commutes with its integral} \qquad (2.6.12)$$

(as would be the case if A were constant and C of convolution type) and that

$$\Big|\exp\int_u^t Q(s)\,ds\Big| \le M\exp[-\alpha(t-u)] \qquad (2.6.13)$$

for $0 \le u \le t$ and some positive constants M and $\alpha$.
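For constant Q, condition (2.6.13) reads $|e^{Q(t-u)}| \le Me^{-\alpha(t-u)}$ and can be checked directly. A sketch (the diagonal Q with characteristic roots −1 and −2 is our example; the small scaling-and-squaring exponential below is a stand-in that avoids external dependencies):

```python
import numpy as np

def expm(A, s=10, terms=20):
    """Matrix exponential by scaling (2**-s), truncated series, and repeated squaring."""
    B = A / (2.0 ** s)
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        X = X + term
    for _ in range(s):
        X = X @ X
    return X

Q = np.array([[-1.0, 0.0],
              [0.0, -2.0]])   # characteristic roots -1 and -2

def bound_holds(M=1.0, alpha=1.0):
    """Check |expm(Q r)| <= M exp(-alpha r) in the spectral norm for r = t - u in [0, 5]."""
    return all(np.linalg.norm(expm(Q * r), 2) <= M * np.exp(-alpha * r) + 1e-9
               for r in np.linspace(0.0, 5.0, 26))
```

Here M = 1 and $\alpha = 1$ work because $-1$ is the largest characteristic root of Q.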
Here, when L is a square matrix, $e^L$ is defined by the usual power series (of matrices). Also, when Q(t) commutes with its integral, then $\exp\int_{t_0}^t Q(s)\,ds$ is a solution matrix of

$$x' = Q(t)x.$$

Moreover,

$$Q(t)\exp\Big[\int_{t_0}^t Q(s)\,ds\Big] = \Big\{\exp\Big[\int_{t_0}^t Q(s)\,ds\Big]\Big\}Q(t).$$

Notice that (2.6.9) may be written as

$$x' = [A(t) - G(t,t)]x + F(t) + (d/dt)\int_0^t G(t,s)x(s)\,ds. \qquad (2.6.14)$$

If we subtract Qx from both sides, left multiply by $\exp[-\int_0^t Q(s)\,ds]$, and group terms, then we obtain

$$\Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]x(t)\Big\}' = \Big\{\exp\Big[-\int_0^t Q(s)\,ds\Big]\Big\}\Big\{(d/dt)\int_0^t G(t,s)x(s)\,ds + F(t)\Big\}.$$

Let $\phi$ be a given continuous initial function on $[0,t_0]$. Integrate the last equation from $t_0$ to t (integrating the G term by parts) and obtain

$$\exp\Big[-\int_0^t Q(s)\,ds\Big]x(t) = \exp\Big[-\int_0^{t_0} Q(s)\,ds\Big]x(t_0) + \int_{t_0}^t \exp\Big[-\int_0^u Q(s)\,ds\Big]F(u)\,du$$
$$+ \exp\Big[-\int_0^t Q(s)\,ds\Big]\int_0^t G(t,s)x(s)\,ds - \exp\Big[-\int_0^{t_0} Q(s)\,ds\Big]\int_0^{t_0} G(t_0,s)x(s)\,ds$$
$$+ \int_{t_0}^t Q(u)\Big\{\exp\Big[-\int_0^u Q(s)\,ds\Big]\Big\}\int_0^u G(u,s)x(s)\,ds\,du.$$

Left multiply by $\exp[\int_0^t Q(s)\,ds]$, take norms, and use (2.6.13) to obtain

$$|x(t)| \le M\Big[|x(t_0)| + \int_0^{t_0} |G(t_0,s)||\phi(s)|\,ds\Big]\exp[-\alpha(t-t_0)]$$
$$+ \int_{t_0}^t Me^{-\alpha(t-u)}|F(u)|\,du + \int_0^t |G(t,s)||x(s)|\,ds \qquad (2.6.15)$$
$$+ \int_{t_0}^t |Q(u)|Me^{-\alpha(t-u)}\int_0^u |G(u,s)||x(s)|\,ds\,du.$$
Theorem 2.6.3. If x(t) is a solution of (2.6.9), if $|Q(t)| \le D$ on $[0,\infty)$ for some D > 0, and if $\sup_{0\le t<\infty}\int_0^t |G(t,s)|\,ds \le \beta$, then for $\beta$ sufficiently small, x(t) is bounded.

Proof. For the given $t_0$ and $\phi$, because F is bounded there is a $K_1 > 0$ with

$$M\Big[|x(t_0)| + \int_0^{t_0} |G(t_0,s)||\phi(s)|\,ds\Big] + \sup_{t_0\le t<\infty}\int_{t_0}^t M\{\exp[-\alpha(t-u)]\}|F(u)|\,du \le K_1.$$
… for each $\varepsilon > 0$ there exists $\delta > 0$ such that [n a positive integer, $t_1 \in [a,b]$, $t_2 \in [a,b]$, and $|t_1 - t_2| < \delta$] imply $|f_n(t_1) - f_n(t_2)| < \varepsilon$.
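Part (b) can be made concrete numerically. In the sketch below (the two families are our choice, not the text's), $f_n(t) = \sin(nt)/n$ has $|f_n'| \le 1$ for every n, so one $\delta = \varepsilon$ serves the whole sequence, while $f_n(t) = \sin(nt)$ is uniformly bounded but fails equicontinuity.

```python
import numpy as np

GRID = np.linspace(0.0, np.pi, 400)

def modulus(f, n, delta):
    """Largest |f(n,t1) - f(n,t2)| over grid pairs with |t1 - t2| < delta."""
    t1, t2 = np.meshgrid(GRID, GRID)
    close = np.abs(t1 - t2) < delta
    return np.max(np.abs(f(n, t1) - f(n, t2))[close])

equi = lambda n, t: np.sin(n * t) / n   # derivative cos(nt) is bounded by 1, uniformly in n
wild = lambda n, t: np.sin(n * t)       # derivative n cos(nt) grows with n

delta = 0.01
worst_equi = max(modulus(equi, n, delta) for n in range(1, 60))
worst_wild = max(modulus(wild, n, delta) for n in range(1, 60))
```

By the mean value theorem, `worst_equi` stays below δ for every n, while `worst_wild` is of order one; only the first family satisfies (b).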
3. EXISTENCE PROPERTIES
Part (b) is sometimes called uniform equicontinuity. Also, some writers consider a family of functions (possibly uncountable) instead of a sequence. Presumably, one uses the axiom of choice to obtain a sequence from the family.

Lemma 3.1.1. (Ascoli-Arzela) If $\{f_n(t)\}$ is a uniformly bounded and equicontinuous sequence of real functions on an interval [a,b], then there is a subsequence that converges uniformly on [a,b] to a continuous function.

Proof. Because the rational numbers are countable, we may let $t_1, t_2, \ldots$ be a sequence of all rational numbers on [a,b] taken in any fixed order. Consider the sequence $\{f_n(t_1)\}$. This sequence is bounded, so it contains a convergent subsequence, say $\{f_n^1(t_1)\}$, with limit $\phi(t_1)$. The sequence $\{f_n^1(t_2)\}$ also has a convergent subsequence, say $\{f_n^2(t_2)\}$, with limit $\phi(t_2)$. If we continue in this way, we obtain a sequence of sequences (there will be one sequence for each value of m):

$$f_n^m(t), \qquad m = 1,2,\ldots;\quad n = 1,2,\ldots,$$

each of which is a subsequence of all the preceding ones, such that for each m we have

$$f_n^m(t_m) \to \phi(t_m) \quad\text{as } n \to \infty.$$

We select the diagonal. That is, consider the sequence of functions

$$F_k(t) = f_k^k(t).$$

It is a subsequence of the given sequence and is, in fact, a subsequence of each of the sequences $\{f_n^m(t)\}$ for n large. As $f_n^m(t_m) \to \phi(t_m)$, it follows that $F_k(t_m) \to \phi(t_m)$ as $k \to \infty$ for each m.

We now show that $\{F_k(t)\}$ converges uniformly on [a,b]. Let $\varepsilon_1 > 0$ be given, and let $\varepsilon = \varepsilon_1/3$. Choose $\delta$ with the property described in the definition of equicontinuity for the number $\varepsilon$. Now divide the interval [a,b] into p equal parts, where p is any integer larger than $(b-a)/\delta$. Let $\xi_j$ be a rational number in the jth part ($j = 1,\ldots,p$); then $\{F_k(t)\}$ converges at each of these points. Hence, for each j there exists an integer $M_j$ such that $|F_r(\xi_j) - F_s(\xi_j)| < \varepsilon$ if $r > M_j$ and $s > M_j$. Let M be the largest of the numbers $M_j$.
3.1. DEFINITIONS, BACKGROUND, AND REVIEW
If t is in the interval [a,b], it is in one of the p parts, say the jth, so $|t - \xi_j| < \delta$ and $|F_k(t) - F_k(\xi_j)| < \varepsilon$ for every k. Also, if $r > M \ge M_j$ and $s > M$, then $|F_r(\xi_j) - F_s(\xi_j)| < \varepsilon$. Hence, if r > M and s > M, then

$$|F_r(t) - F_s(t)| = |(F_r(t) - F_r(\xi_j)) + (F_r(\xi_j) - F_s(\xi_j)) - (F_s(t) - F_s(\xi_j))| \le |F_r(t) - F_r(\xi_j)| + |F_r(\xi_j) - F_s(\xi_j)| + |F_s(t) - F_s(\xi_j)| < 3\varepsilon = \varepsilon_1.$$

By the Cauchy criterion for uniform convergence, the sequence $\{F_k(t)\}$ converges uniformly to some function $\phi(t)$. As each $F_k(t)$ is continuous, so is $\phi(t)$. This completes the proof.

The lemma is, of course, also true for vector functions. Suppose that $\{F_n(t)\}$ is a sequence of functions from [a,b] to $R^p$, say $F_n(t) = (f_n(t)_1, \ldots, f_n(t)_p)$. [The sequence $\{F_n(t)\}$ is uniformly bounded and equicontinuous if all the $\{f_n(t)_j\}$ are.] Pick a uniformly convergent subsequence $\{f_{k_j}(t)_1\}$ using the lemma. Consider $\{f_{k_j}(t)_2\}$ and use the lemma to obtain a uniformly convergent subsequence $\{f_{k_{j_r}}(t)_2\}$. Continue and conclude that a final diagonal subsequence of $\{F_n(t)\}$ is uniformly convergent.

We are now in a position to state the fundamental existence theorem for the initial-value problem for ordinary differential equations.

Theorem 3.1.1. Let $(t_0,x_0) \in R^{n+1}$ and suppose there are positive constants a, b, and M such that $D = \{(t,x) : |t-t_0| \le a,\ |x-x_0| \le b\}$, $G : D \to R^n$ is continuous, and $|G(t,x)| \le M$ if $(t,x) \in D$. Then there is at least one solution x(t) of

$$x' = G(t,x), \qquad x(t_0) = x_0,$$

and x(t) is defined for $|t - t_0| \le \min[a, b/M]$.

… a metric space consists of a set S and a function $\rho : S \times S \to [0,\infty)$ such that when y, z, and u are in S, then (a) $\rho(y,z) \ge 0$, $\rho(y,y) = 0$, and $\rho(y,z) = 0$ implies y = z; (b) $\rho(y,z) = \rho(z,y)$; and (c) $\rho(y,z) \le \rho(y,u) + \rho(u,z)$. The metric space is complete if every Cauchy sequence in $(S,\rho)$ has a limit in that space.

Definition 3.1.4. Let $(S,\rho)$ be a metric space and $A : S \to S$. The operator A is a contraction operator if there is an $\alpha \in (0,1)$ such that $x \in S$ and $y \in S$ imply

$$\rho[A(x),A(y)] \le \alpha\rho(x,y).$$

… Let $(S,\rho)$ be a complete metric space and $A : S \to S$ a contraction operator. Then there is a unique $\phi \in S$ with $A(\phi) = \phi$. Furthermore, if $\psi \in S$, $\psi_1 = A(\psi)$, and $\psi_{n+1} = A(\psi_n)$, then $\psi_n \to \phi$, the unique fixed point. That is, the equation $A(\phi) = \phi$ has one and only one solution.
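The contraction-mapping iteration $\psi_{n+1} = A(\psi_n)$ is easy to watch numerically. A hedged sketch (the map A(x) = cos x is our example: it sends R into [−1, 1], where $|\cos'x| = |\sin x| \le \sin 1 < 1$, so it is a contraction there):

```python
import math

def fixed_point(A, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_{n+1} = A(x_n); converges geometrically for a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = A(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

phi = fixed_point(math.cos, 0.5)     # the unique fixed point of cos
psi = fixed_point(math.cos, -3.0)    # any starting point gives the same limit
```

Both runs converge to the same value (about 0.739085), illustrating both existence and uniqueness of the fixed point.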
Proof. Let $x_0 \in S$ and define a sequence $\{x_n\}$ in S by $x_1 = A(x_0)$, $x_2 = A(x_1) = A^2x_0, \ldots, x_n = A(x_{n-1}) = A^nx_0$. To see that $\{x_n\}$ is a Cauchy sequence, note that if m > n, then $\rho(x_n,x_m) = \rho(A^nx_0, A^mx_0) \le \ldots$

… Let $f : [0,a] \to R^n$ and $g : U \to R^n$ both be continuous, where

$$U = \{(t,s,x) : 0 \le s \le t \le a \text{ and } |x - f(t)| \le b\}.$$

Then there is a continuous solution of

$$x(t) = f(t) + \int_0^t g(t,s,x(s))\,ds \qquad (3.2.1)$$

on [0,T], where $T = \min[a, b/M]$ and $M = \max_U |g(t,s,x)|$.

Proof. We construct a sequence of continuous functions on [0,T] such that $x_1(t) = f(t)$ and, if j is a fixed integer, j > 1, then $x_j(t) = f(t)$ when $t \in [0,T/j]$ and

$$x_j(t) = f(t) + \int_0^{t-(T/j)} g(t-(T/j), s, x_j(s))\,ds$$

for $T/j \le t \le T$. … is independent of all other $x_k(t)$. Notice that

$$|x_j(t) - f(t)| \le \int_0^{t-(T/j)} |g(t-(T/j),s,x_j(s))|\,ds \le M(t-(T/j)) \le M(b/M) = b.$$

This sequence $\{x_j(t)\}$ is uniformly bounded because $|x_j(t)| \le |f(t)| + b$ … Let $\varepsilon > 0$ be given, let n be an arbitrary integer, let t and v be in [0,T] with t > v, and consider $|x_n(t) - x_n(v)| \le \ldots$

… $R^n$ is continuous and q is continuous for $0 \le s \le t < \infty$ and $y \in R^n$. Thus, the existence theorem may be applied to

$$y(t) = h(t) + \int_0^t q(t,s,y(s))\,ds.$$

One may take $a = b = \pi$ again, but q is a translation of g, so a new M, say $M_1$, will be obtained and an interval $[0,T_1]$, with $T_1 = \min[\pi, \pi/M_1]$, will result. Any solution y(t) on $[0,T_1]$, translated to the right by $y(t-T)$, so as to be defined on $[T, T+T_1]$, will be called a continuation of $\phi$. Naturally, there may be many continuations of $\phi$ on intervals $T, T_1, T_2, \ldots$. With slight modifications the process can be used if f is defined on $[0,\infty)$ and g is defined and continuous for $0 \le s \le t < \infty$ and $x \in D$, where D is an open and connected subset of $R^n$. The number b will change each time, becoming the distance from the new f(t) to the boundary of D. Now, there are two possibilities. The intervals $T, T_1, T_2, \ldots$, when translated, may exhaust $[0,\infty)$. This would happen if, for example, g(t,s,x) is bounded and $D = R^n$, because M and $M_1$ would be the same. [Is it also true if $|g(t,s,x)| \le \alpha + \beta|x|$?] If g is continuous for $0 \le s \le t < \infty$ and $x \in D$, then x(t) may approach the boundary of D and $T_n$ may approach zero too quickly. If $D = R^n$, then x(t) may become unbounded as t approaches some number L from the left. One thing is clear, however, from our translation argument: if $D = R^n$, then bounded solutions can be continued to $t = \infty$, because the bounds $M_1$ on g are then bounded on any interval [0, nT]. The next result formalizes this.
Theorem 3.3.1. Let $f : [0,\infty) \to R^n$ and $g : U \to R^n$ be continuous, where $U = \{(t,s,x) : 0 \le s \le t < \infty,\ x \in R^n\}$. If x(t) is a solution of

$$x(t) = f(t) + \int_0^t g(t,s,x(s))\,ds \qquad (3.3.1)$$

on an interval [0,T) and if there is a constant P with $|x(t) - f(t)| \le P$ on [0,T), then there is a $\bar T > T$ such that x(t) can be continued to $[0,\bar T]$.

Proof. If we can show that $\lim_{t\to T^-} x(t)$ exists, then an application of the existence theorem starting at t = T will complete the proof. Let $\{t_n\}$ be a monotonic increasing sequence with limit T. We shall show that $\{x(t_n)\}$ is a Cauchy sequence. Now

$$|x(t_m) - x(t_n)| \le |f(t_m) - f(t_n)| + \Big|\int_0^{t_m} g(t_m,s,x(s))\,ds - \int_0^{t_n} g(t_n,s,x(s))\,ds\Big|,$$

and, because f is continuous and $t_n \to T$, the first term on the right tends to zero as $n, m \to \infty$. Also, if $t_m > t_n$, then

$$\Big|\int_0^{t_m} g(t_m,s,x(s))\,ds - \int_0^{t_n} g(t_n,s,x(s))\,ds\Big| \le \ldots$$

… If

$$V(t,x) \to \infty \ \text{as}\ |x| \to \infty \ \text{uniformly for}\ 0 \le t \le T \qquad (3.3.4)$$

holds, then every solution of (3.3.2) can be continued to $+\infty$.

Proof. If the theorem is false, then there is a solution x(t) of (3.3.2) defined on some interval $[t_0,T)$ with

$$\lim_{t\to T^-} |x(t)| = +\infty. \qquad (3.3.5)$$

According to (3.3.3), $V'(t,x(t)) \le 0$, so $V(t,x(t)) \le V(t_0,x(t_0))$ on $[t_0,T)$. But by (3.3.4) and (3.3.5) we have $V(t,x(t)) \to \infty$ as $t \to T^-$, a contradiction. This completes the proof.
3.3. CONTINUATION OF SOLUTIONS
The drawback of this result is, of course, the necessity of finding the function V. We illustrate such a function in the proof of the following classical result, known as the Conti-Wintner theorem.

Theorem 3.3.3. Let $G : [0,\infty) \times R^n \to R^n$ be continuous and suppose there are continuous functions $\lambda : [0,\infty) \to [0,\infty)$ and $\omega : [0,\infty) \to [1,\infty)$ with $\int_0^\infty [ds/\omega(s)] = +\infty$. If $|G(t,x)| \le \lambda(t)\omega(|x|)$, then any solution of

$$x' = G(t,x), \qquad x(t_0) = x_0 \qquad (3.3.2)'$$

can be continued to $[t_0,\infty)$.

Proof. Let x(t) be a solution of (3.3.2)' and define

$$V(t,x) = \Big[\int_0^{|x|} [ds/\omega(s)] + 1\Big]\exp\Big[-\int_0^t \lambda(s)\,ds\Big].$$

(A differentiable norm may be chosen for $x \ne 0$. Because we are concerned with $\lim_{t\to T^-}|x(t)| = +\infty$, we will not be bothered by the nondifferentiability of |x| at 0.) We find that

$$V'_{(3.3.2)}(t,x) \le -\lambda(t)V(t,x) + [|G(t,x)|/\omega(|x|)]\exp\Big[-\int_0^t \lambda(s)\,ds\Big] \le -\lambda(t)V(t,x) + \lambda(t)\exp\Big[-\int_0^t \lambda(s)\,ds\Big] \le 0.$$

The result now follows from Theorem 3.3.2. When $\omega$ is nondecreasing the result may be extended to (3.3.1).

Definition 3.3.1. Let $h : [0,\infty) \to (-\infty,\infty)$ and, for $U = \{(t,s,x) : 0 \le s \le t < \infty,\ x \in R^n\}$, … $x(t)$.
Of course, if solutions are unique, then the unique solution is both the maximal and the minimal solution. Much can be proved concerning maximal solutions, and we shall repeat little of it here. The interested reader may consult Hartman (1964, pp. 25-31) for some interesting properties of ordinary differential equations and integral equations. Extensive results of this type are also found in Lakshmikantham and Leela (1969, e.g., pp. 11-31).

Theorem 3.3.4. Let the maximal solution $\tilde x(t)$ of the scalar equation

$$\tilde x(t) = B + \int_0^t p(s,\tilde x(s))\,ds$$

exist on [0,A], where B is constant, and let $p : [0,A] \times R \to R$ be continuous and nondecreasing in x when $0 \le t \le A$. If y(t) is a continuous scalar function on [0,A] satisfying $y(t) \le$ …

… there is a $K(T) > 0$ and a continuous function $\omega : [0,\infty) \to [1,\infty)$ with $\omega$ nondecreasing and $|g(t,s,x)| \le K(T)\omega(|x|)$ if $0 \le s \le$ …

… Let $f : (-\infty,\infty)$ …, let C(t,s) be a scalar function defined for $0 \le s \le t < \infty$, and let $f'(t)$, $C_t(t,s)$, and C(t,s) be continuous on their domains. Suppose $g : (-\infty,\infty) \to (-\infty,\infty)$ with $xg(x) > 0$ if $x \ne 0$, and for each T > 0 we have

$$C(t,t) + \int_t^T |C_u(u,t)|\,du \le 0.$$

Then each solution of

$$x(t) = f(t) + \int_0^t C(t,s)g(x(s))\,ds \qquad (3.3.9)$$

can be continued for all future times.
Proof. We show that if a solution x(t) is defined on [0,a), then it is bounded. Let $|f'(t)| \le M$ on [0,a] and define

$$V(t,x(\cdot)) = e^{-Mt}\Big[1 + |x(t)| + \int_0^t \int_t^a |C_u(u,s)|\,du\,|g(x(s))|\,ds\Big],$$

so that

$$V'(t,x(\cdot)) \le e^{-Mt}\Big[-M - M|x| + |f'(t)| + C(t,t)|g(x)| + \int_0^t |C_t(t,s)||g(x(s))|\,ds + \int_t^a |C_u(u,t)|\,du\,|g(x)| - \int_0^t |C_t(t,s)||g(x(s))|\,ds\Big] \le 0,$$

because $|f'(t)| \le M$ and $C(t,t) + \int_t^a |C_u(u,t)|\,du \le 0$. Thus V, and with it |x(t)|, is bounded on [0,a), and the solution can be continued by Theorem 3.3.1. …

… $(-\infty,\infty)$, $f : (-\infty,\infty) \to (-\infty,\infty)$, both continuous, and $xf(x) > 0$ for $x \ne 0$. Write

$$F(x) = \int_0^x f(s)\,ds.$$
The following is a fundamental continuation result for (3.3.12) that we wish to generalize, in some sense, to encompass integral equations.

Theorem 3.3.8. Suppose $a(t_1) < 0$ for some $t_1 \ge 0$. If either

(a) $\int_0^{+\infty} [1+F(x)]^{-1/2}\,dx < \infty$ or (b) $\int_{-\infty}^0 [1+F(x)]^{-1/2}\,dx < \infty$,

then (3.3.12) has solutions not continuable to $+\infty$. Moreover, if a(t) < 0 on an interval $[t_1,t_2)$, then (3.3.12) has a solution x(t) defined at $t_1$ satisfying $\lim_{t\to T^-} |x(t)| = +\infty$ for some T satisfying $t_1 < T \le t_2$ if and only if either (a) or (b) holds.

Proof. Because $a(t_1) < 0$ and a is continuous, there are positive numbers …

$$x'(t) \ge [y^2(t_1) + 2mF(x(t))]^{1/2}$$

or

$$[y^2(t_1) + 2mF(x)]^{-1/2}\,dx \ge dt.$$

Integrating both sides from $t_1$ to t and recalling that $x(t_1) = 0$, we have

$$\int_0^{x(t)} [y^2(t_1) + 2mF(s)]^{-1/2}\,ds \ge t - t_1.$$

Because (a) holds, we may choose $y^2(t_1)$ so large that the integral is smaller than $\delta$ before t reaches $t_1 + \delta$. This completes the proof of the first part of the theorem when (a) holds. If (b) holds, then a similar proof is carried out in Quadrant III of the xy plane. The details showing that the integral can be made smaller than $\delta$ and the proof of the second part of the theorem can be found in Burton and Grimmer (1971). That paper also contains results on the uniqueness of the zero solution that may be extended to integral equations.

We return now to our integral equation and show that if g grows too fast and if C(t,s) becomes positive at one point, then there are solutions with finite escape time. It is convenient to introduce an initial function on an initial interval [0,a] and show that the solution generated by this initial function has finite escape time. As discussed in Chapter 1, it is possible to translate the equation by y(t) = x(t+a), so that the initial function becomes a forcing function.
Theorem 3.3.9. Consider the scalar equation

$$x(t) = x_0 + \int_0^t C(t,s)g(x(s))\,ds, \qquad (3.3.13)$$

where g is continuous and positive for x > 0, and C(t,s) and $C_t(t,s)$ are continuous on $0 \le s \le t \le t_1$ for $t_1 > 0$. Suppose also

(a) there exist $\varepsilon > 0$ and $c_0 > 0$ with $C(t,s) \ge c_0$ if $t_1 - \varepsilon \le s \le t \le t_1$,

(b) $g(x)/x \to \infty$ as $x \to \infty$, and

(c) $\int^\infty [dx/g(x)] < \infty$.

Then there is a $t_2 \in (0,t_1)$ and an initial function $\phi : [0,t_2] \to [0,\infty)$ such that a solution $x(t,\phi)$ has finite escape time.

Proof. Because $C(t,s) \ge c_0$ if $t_1 - \varepsilon \le s \le t \le t_1$, there is a K > 0 with $|C_t(t,s)| \le KC(t,s)$ for $t_1 - \varepsilon \le s \le t \le t_1$. … there is an R > 0 with

$$c_0g(x) - Kx =: h(x) > 0 \quad\text{for } x \ge R.$$

Note that $g(x)/x \to \infty$ as $x \to \infty$ implies that $h(x) \ge Mg(x)$ for some M > 0 and x large. Thus $\int^\infty [dx/h(x)] < \infty$, and we may choose $R_1 > R$ with $\int_{R_1}^\infty [dx/h(x)] < \varepsilon/2$. Now pick $x_0 = R_1$ and define an initial function … so that

$$x'(t) \ge C(t,t)g(x) - K[x - x_0] \ge c_0g(x) - Kx = h(x) > 0.$$

Thus $dx/h(x) \ge dt$, so

$$\varepsilon/2 > \int_{R_1}^\infty [ds/h(s)] = \int_{x_0}^\infty [ds/h(s)] \ge \int_{x_0}^{x(t)} [ds/h(s)] \ge t - (t_1 - (\varepsilon/2)).$$

Thus, if x(t) exists to $t = t_1$, then $\varepsilon/2 > \varepsilon/2$, a contradiction.

Roughly speaking, this theorem tells us that if C(t,t) > 0 at some $t = t_1$, if g(x) > 0 for x > 0, and if $\int^\infty [ds/g(s)] < \infty$, then
$$x(t) = f(t) + \int_0^t C(t,s)g(x(s))\,ds$$

has solutions with finite escape time.

Exercise 3.3.2. Study the statement and proof of Theorem 3.3.9.

(a) State and prove a similar result for

$$x(t) = f(t) + \int_0^t g(t,s,x(s))\,ds.$$

(b) Do the same for

$$x' = h(t,x) + \int_0^t g(t,s,x(s))\,ds.$$
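A numerical caricature of Theorem 3.3.9 (all choices ours: $x_0 = 1$, $C \equiv 1$, $g(x) = x^2$, so that $\int^\infty dx/g(x) < \infty$): the equation $x(t) = 1 + \int_0^t x^2(s)\,ds$ is equivalent to $x' = x^2$, x(0) = 1, whose solution $1/(1-t)$ escapes at t = 1.

```python
h = 1e-4
t_end = 0.99
x = 1.0
for _ in range(int(round(t_end / h))):
    x += h * x * x            # Euler step for x' = x**2, x(0) = 1
# exact solution 1/(1 - t) equals 100 at t = 0.99; Euler slightly undershoots
```

Pushing `t_end` toward 1 makes x exceed any bound, which is the numerical signature of finite escape time.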
3.4
Continuity of Solutions
In Chapter 1 we saw that the innocent-appearing f(t) in

$$x(t) = f(t) + \int_0^t g(t,s,x(s))\,ds \qquad (3.4.1)$$

may, in fact, be filled with complications. It may contain constants $x'(0), x''(0), \ldots, x^{(n)}(0)$, all of which are arbitrary, or (even worse) a continuous
initial function $\phi : [0,t_0] \to R^n$, where both $\phi$ and $t_0$ are arbitrary. Recall that for a given initial function $\phi$ we write

$$x(t) = f(t) + \int_0^{t_0} g(t,s,\phi(s))\,ds + \int_{t_0}^t g(t,s,x(s))\,ds, \qquad (3.4.2)$$

and ask for a solution of the latter equation for $t \ge t_0$. To change it into the form of (3.4.1) we let

$$y(t) = x(t+t_0) = f(t+t_0) + \int_0^{t_0} g(t+t_0,s,\phi(s))\,ds + \int_{t_0}^{t+t_0} g(t+t_0,s,x(s))\,ds,$$

and define

$$h(t) = f(t+t_0) + \int_0^{t_0} g(t+t_0,s,\phi(s))\,ds \ldots$$

… as $t \to \infty$, the term $\int_0^{t_0} g(t+t_0,s,\phi(s))\,ds$ may reasonably be expected to tend to zero and, frequently, even be $L^1[0,\infty)$.
Example 3.4.1. Consider the scalar equation

$$x(t) = 1 + \int_0^t q(t,s)e^{-(t-s)}x(s)\,ds,$$

where $|q(t,s)| \le 1$. Let $\phi : [0,1] \to [-L,L]$ for some L > 0. Then

$$|h(t) - 1| = \Big|\int_0^1 q(t+1,s)e^{-(t+1-s)}\phi(s)\,ds\Big| \le e^{-(t+1)}\int_0^1 |q(t+1,s)|e^s\,ds\,L \ldots$$

… $\psi(t) = \psi(t,\xi_0)$ by uniqueness. Thus $\psi(t,\xi_k) \Rightarrow \psi(t,\xi_0)$ on [0,T]. For, by way of contradiction, suppose there were a subsequence for which this were not true. Then by Theorem 3.4.2 there would be a subsequence of that one tending to a solution $\psi^*$ of

$$\xi' = f(\xi), \qquad \psi^*(0) = \xi_0,$$

with $\psi^*(t) \ne \psi(t,\xi_0)$. This contradicts uniqueness. Thus $\psi(t,\xi_k) \Rightarrow \psi(t,\xi_0)$ on [0,T], so $\psi(t_k,\xi_k) \to \psi(t_0,\xi_0)$ because $\psi(t_k,\xi_0) \to \psi(t_0,\xi_0)$ and $\psi(t,\xi_0)$ is continuous in t. This completes the proof.

When we set out to formulate a counterpart to Theorem 3.4.2 for

$$x(t) = f(t) + \int_0^t g(t,s,x(s))\,ds,$$

it is clear that we want a sequence $g_k(t,s,x) \Rightarrow g(t,s,x)$ on compact subsets of $[0,\infty) \times [0,\infty) \times R^n$. But f(t) contains the initial conditions, and we therefore desire a sequence of functions $f_k(t) \to f(t)$. However, the type of convergence needed is not very clear. The fact that $\xi_k \to \xi_0$ in Theorem 3.4.2 is of little help for functions $f_k(t)$. A simple solution is to ask for equicontinuity of $\{f_k\}$ and a form of equicontinuity of $\{g_k(t,s,x)\}$ in t. In particular, if there is a P > 0 with $|g_k(t,s,x) - g_k(t_1,s,x)| \le P|t-t_1|$ on compact sets, this works very well.
Theorem 3.4.4. Let {g_k} be a sequence of continuous functions with g_k : [0, a] × [0, a] × Rⁿ → Rⁿ satisfying |g_k(t, s, x)| ≤ K(1 + |x|) on its domain. Suppose that {f_k} is a sequence of uniformly bounded and equicontinuous functions with f_k : [0, a] → Rⁿ and f_k(t) ⇒ f(t) on [0, a]. Suppose also:

(a) for each compact subset B ⊂ Rⁿ, g_k(t, s, x) ⇒ g(t, s, x) on [0, a] × [0, a] × B;

(b) for each k, ψ_k(t) is a solution of

  ψ_k(t) = f_k(t) + ∫₀ᵗ g_k(t, s, ψ_k(s)) ds,  0 ≤ t ≤ a;

(c) for each ε > 0 and M > 0, there exists δ > 0 such that [k an integer, s ∈ [0, a], |t − t₁| < δ, t, t₁ ∈ [0, a], |x| ≤ M] imply

  |g_k(t, s, x) − g_k(t₁, s, x)| < ε|t − t₁|.

Then there is a subsequence k_j → ∞ with j such that ψ_{k_j}(t) ⇒ ψ(t) on [0, a] as j → ∞, and ψ satisfies

  ψ(t) = f(t) + ∫₀ᵗ g(t, s, ψ(s)) ds

on [0, a].

Proof. If |f_k(t)| ≤ J, then from (b) we have

  |ψ_k(t)| ≤ J + ∫₀ᵗ K(1 + |ψ_k(s)|) ds ≤ J + aK + K ∫₀ᵗ |ψ_k(s)| ds,

so that |ψ_k(t)| ≤ (J + aK)e^{Ka} = M, say, and there is a Q > 0 with |g_k(t, s, x)| ≤ Q on B_M = [0, a] × [0, a] × {|x| ≤ M} for all k. If t, t₁ ∈ [0, a] with t > t₁, as |ψ_k(s)| ≤ M we have

  |ψ_k(t) − ψ_k(t₁)| = |f_k(t) − f_k(t₁) + ∫₀ᵗ g_k(t, s, ψ_k(s)) ds − ∫₀^{t₁} g_k(t₁, s, ψ_k(s)) ds|
    ≤ |f_k(t) − f_k(t₁)| + ∫₀^{t₁} |g_k(t, s, ψ_k(s)) − g_k(t₁, s, ψ_k(s))| ds + |∫_{t₁}^{t} g_k(t, s, ψ_k(s)) ds|
    ≤ |f_k(t) − f_k(t₁)| + εa|t − t₁| + Q|t − t₁|.

Because the f_k are equicontinuous, so is {ψ_k}. Hence, there is a subsequence of the ψ_k, say ψ_k again, with ψ_k(t) ⇒ ψ(t) on [0, a]. We have

  ψ_k(t) = f_k(t) + ∫₀ᵗ g_k(t, s, ψ_k(s)) ds,

and as k → ∞ we obtain

  ψ(t) = f(t) + ∫₀ᵗ g(t, s, ψ(s)) ds,

as required.

Notice that if g_k(t, s, x) = g(t, s, x) and if solutions are unique, then the result states that as the initial functions f_k(t) converge to f(t), the solutions converge. That is, solutions depend continuously on initial functions. In short, uniqueness implies continuous dependence of solutions on initial conditions. Quite obviously, continuous dependence of solutions on initial conditions implies uniqueness.
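This continuous dependence on the forcing function can be observed numerically. The sketch below is illustrative only: the particular f, g, the grid solver, and the perturbations f_k(t) = f(t) + 1/k are assumptions chosen for the demonstration, not taken from the text. It solves the integral equation on a grid by Picard iteration and checks that the solutions approach the unperturbed one as f_k → f.

```python
import math

# Solve x(t) = f(t) + int_0^t g(t, s, x(s)) ds on a uniform grid by
# Picard iteration (the integral is approximated by a left-endpoint sum).
def solve(f, g, T=1.0, n=200, sweeps=60):
    h = T / n
    t = [i * h for i in range(n + 1)]
    x = [f(ti) for ti in t]                    # initial guess x_0 = f
    for _ in range(sweeps):                    # Picard iteration
        x = [f(t[i]) + h * sum(g(t[i], t[j], x[j]) for j in range(i))
             for i in range(n + 1)]
    return x

# Illustrative data: |dg/dx| <= 1/2, so the iteration is a contraction.
f = math.cos
g = lambda t, s, x: 0.5 * math.exp(-(t - s)) * math.sin(x)

base = solve(f, g)
# Perturb the forcing by 1/k and measure the distance to the base solution.
errs = [max(abs(a - b) for a, b in zip(solve(lambda t: f(t) + 1.0 / k, g), base))
        for k in (1, 10, 100)]
assert errs[0] > errs[1] > errs[2]             # solutions converge as f_k -> f
```

The perturbation of the solution shrinks roughly in proportion to 1/k, exactly the behavior the theorem describes when g_k = g.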
Chapter 4

History, Examples, and Motivation

4.0 Introduction

This chapter is devoted to a selection of problems and historical events that have affected the development of the subject. Many of the formulations are quite different from the traditional derivations seen in mathematical physics, which proceed from first principles. At least in the early development of the subject, problems were formulated from the descriptive point of view; a physical situation was observed and a mathematical model was constructed that described the observations. The aim was to discover properties from the mathematical model that had not been observed in the physical situation, which could assist the observer in better understanding the outside object. Much of mathematical biology has proceeded in this fashion. And though its critics abound, the successes have been marked and important. An authoritative case for proceeding in this way is made by the eminent biologist J. Maynard Smith (1974, p. 19) in a modern monograph on mathematical biology.

In this chapter we briefly discuss numerous problems related, in at least some way, to Volterra equations. In some cases we present substantial results; in other cases we formulate the problems so they may be solved using methods of later chapters; and finally, some problems are briefly introduced as examples to which the general theory applies. In all cases we attempt to provide references, so that the interested reader may pursue the topic in some depth.
4.1 Volterra and Mathematical Biology
In this section we study the work that went into the formulation of a pair of predator-prey equations

  x' = x[a − by − dx],
  y' = y[−c + kx + ∫₀ᵗ K(t − s) x(s) ds],   (4.1.1)

and then transform these equations into the form treated in the general theory discussed in Chapters 5 and 6. The study of Volterra's work on competing species is a fundamental example of the progressive improvement of a model to explain a description of a physical process. It shows, in particular, how a description of observable facts can lead to the suggestion of new information.

Fairly accurate records had been kept by Italian port authorities of the ratio of food fish to trash fish (rays, sharks, skates, etc.) netted by Italian fishermen from 1914 to 1923. The period spanned World War I and displayed a very curious and unexpected phenomenon. The proportion of food fish markedly decreased during the war years and then increased to the prewar levels. Fishing was much less intense during the war; it was hazardous, and many fishermen were otherwise occupied. Intuitively, one would think fishing would be much improved after the slow period. Indeed, rare is that person who has not dreamed of the glorious fishing to be had in some virgin mountain lake or stream.

The Italian biologist Umberto D'Ancona considered several possible explanations, rejected all of them, and in 1925 consulted the distinguished Italian mathematician Vito Volterra in search of a mathematical model explaining this fishing phenomenon. The problem interested Volterra for the remainder of his life and provided a new setting for his functionals. Moreover, his initial work inspired such widespread interest that by 1940 the literature on the problem was positively enormous. A brief description of his concern with it is quite worthwhile.

It was, to begin with, quite clearly a problem of predator and prey. The trash fish fed on the food fish. Moreover, the literature on descriptive growth of species was not at all empty.
In 1798 Thomas Robert Malthus, an English economist and historian, published a work (of inordinate title length) contending that a population increases geometrically (e.g., 3, 9, 27, 81, ...) whereas food production increases arithmetically (e.g., 3, 6, 9, 12, ...). He contended that population will always tend to a limit of subsistence at which it will be held by famine, disease, and war. (See Encyclopaedia Britannica, 1970, vol. 14 for a synopsis.) This contention, although far from ludicrous, has been under attack since its publication.

One proceeds as follows to formulate a mathematical model of Malthusian growth. Let p(t) denote the population size (or density) at a given time t. If there is unlimited space and food, while the environment does not change, then it seems plausible that the population will increase at a rate proportional to the number of individuals present at a given time. If p(t) is quite large, it may be fruitful to conceive of p(t) as being continuous or even differentiable. (Indeed, the science philosopher Charles S. Peirce (1957, pp. 57-60) contends that the "application of continuity to cases where it does not really exist ... illustrates ... the great utility which fictions sometimes have in science." He seems to consider it a cornerstone of scientific progress.) In that case we would say

  dp(t)/dt = kp(t),  p(t₀) = p₀,   (4.1.2)

where k is the constant of proportionality. As it is assumed that the population increases, k > 0. This problem has the unique solution

  p(t) = p₀ exp[k(t − t₀)].   (4.1.3)
Notice that when time is divided into equal intervals, say, years, this does yield a geometric increase. Obviously, no environment could sustain such growth, and by about 1842 the Belgian statistician L. A. J. Quetelet had noticed that a population able to reproduce freely with abundant space and resources tends to increase geometrically, while obstacles tend to slow the growth, causing the population to approach an upper limit, resulting in an S-shaped population curve with a limiting population L (see Fig. 4.1). Such curves had been observed by Edward Wright in 1599 and were called logistic curves, a term still in use. The history of the problem of modeling such a curve mathematically is an interesting one. A colleague of Quetelet, P. F. Verhulst, assumed that the population growth was inhibited by a factor proportional to the square of the population. Thus, the equation for Malthusian growth was modified to

  p'(t) = kp(t) − rp²(t)   (4.1.4)

for k and r positive. This has become known as the logistic equation and rp²(t) the logistic load. It is a simple Riccati equation, which is equivalent to a second-order, linear differential equation. Its solution is

  p(t) = m/[1 + M exp(−kt)].   (4.1.5)
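That (4.1.5) solves (4.1.4) is easy to check numerically; in the sketch below the values of k, r, and M are illustrative assumptions, not taken from the text.

```python
import math

# Check that (4.1.5), p(t) = m/[1 + M exp(-k t)] with m = k/r,
# satisfies the logistic equation (4.1.4): p'(t) = k p(t) - r p(t)^2.
k, r, M = 1.0, 0.25, 3.0
m = k / r                                  # carrying capacity m = k/r

def p(t):
    return m / (1.0 + M * math.exp(-k * t))

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    h = 1e-6
    dp = (p(t + h) - p(t - h)) / (2 * h)   # central-difference derivative
    assert abs(dp - (k * p(t) - r * p(t) ** 2)) < 1e-6
```

The same check with p(0) > k/r (that is, M negative and larger than −1) exhibits the declining curve mentioned below for an overstocked population.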
Equation (4.1.5) may be obtained by separation of variables and partial fractions. Its limiting population is m = k/r, called the carrying capacity of the environment. It describes the curve of Fig. 4.1 and, moreover, if p(t₀) > k/r, it describes a curve of negative slope approaching k/r as t → ∞. Thus, for example, if a fishpond is initially overstocked, the population declines to k/r. With the proper choice of k and r, (4.1.5) describes the growth of many simple populations, such as yeast [see Maynard Smith (1974, p. 18)].

Figure 4.1: S-shaped population curve with limit L.

Although the logistic equation is a descriptive statement, it has received several pseudo-derivations. The law of mass action states, roughly, that if m molecules of a substance x combine with n molecules of a substance y to form a new substance z, then the rate of reaction is proportional to [x]^m [y]^n, where [x] and [y] denote the concentrations of substances x and y, respectively. Thus, one might argue that for a population x(t) with density p(t), the members compete with one another for space and resources, and the rate of encounter is proportional to p(t)p(t). So one postulates that the population increases at a rate proportional to the density and decreases at a rate proportional to the square of the density:

  p'(t) = kp(t) − rp²(t).

Derivations based on the Taylor series may be found in Pielou (1969, p. 20). One may ask: What is the simplest series representation for

  p'(t) = f(p),

where f is some function of the population? To answer this question, write

  f(p) = a + bp + cp² + ⋯ .
First, we must agree that f(0) = 0, as a zero population does not change; hence, a = 0. Next, if the population is to grow for small p, then b must be positive. But if the population is to be self-limiting and if we wish to work with no more than a quadratic, then c must be negative. This yields (4.1.4).

Detractors have always argued that the growth of certain populations is S-shaped, and hence, any differential equation having S-shaped solutions with parameters that can be fitted to the situation could be advanced as an authoritative description.

Enter Volterra: Let x(t) denote the population of the prey (food fish) and y(t) the population of the predator (trash fish). Because the Mediterranean Sea (actually the upper Adriatic) is large, let us imagine unlimited resources, so that in the absence of predators,

  x' = ax,  a > 0,   (4.1.6)

which is Malthusian growth. But x(t) should decrease at a rate proportional to the encounter of prey with predator, yielding

  x' = ax − bxy,  a > 0 and b > 0.   (4.1.7)

Now imagine that, in the absence of prey, the predator population would decrease at a rate proportional to its size:

  y' = −cy,  c > 0.

But y should increase at a rate proportional to its encounters with the prey, yielding

  y' = −cy + kxy,  c > 0 and k > 0.   (4.1.8)

We now have the simplest predator-prey system

  x' = ax − bxy,
  y' = −cy + kxy,   (4.1.9)

and we readily reason that a, b, c, and k are positive, with b > k, because y does not have 100% efficiency in converting prey.
Incidentally, (4.1.9) had been independently derived and studied by Lotka (1924) and, hence, is usually called the Lotka-Volterra system. The system may be solved for a first integral as follows. We have

  dy/dx = (−c + kx)y / (a − by)x,   (4.1.10)

so that separation of variables and an integration yields

  (y^a / e^{by})(x^c / e^{kx}) = K,   (4.1.11)

K a constant. The solution curves are difficult to plot, but Volterra (1931, p. 29) [see Davis (1962, p. 103)] devised an ingenious graphical scheme for displaying them. The predator-prey system makes sense only for x > 0 and y > 0. Also, there is an equilibrium point (x' = y' = 0) in the open first quadrant at (x = c/k, y = a/b), which means that populations at that level remain there. May we say that the predator and prey would "live happily ever after" at that level? The entire open first quadrant is then filled with (noncrossing) simple closed curves (corresponding to periodic solutions), all of which encircle the equilibrium point (c/k, a/b) (see Fig. 4.2). We will not go into the details of this complex graph now, but a simplifying transformation presented later will enable the reader easily to see the form.
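That (4.1.11) is constant along solutions of (4.1.9) can be verified numerically. The following sketch integrates the system with a classical Runge-Kutta step and checks that the logarithm of the left side of (4.1.11) does not drift; the parameter values and step size are illustrative assumptions.

```python
import math

# Lotka-Volterra system (4.1.9): x' = a*x - b*x*y, y' = -c*y + k*x*y.
# Illustrative parameters; equilibrium is (c/k, a/b) = (3, 2).
a, b, c, k = 1.0, 0.5, 0.75, 0.25

def field(x, y):
    return a*x - b*x*y, -c*y + k*x*y

def rk4_step(x, y, h):
    """One classical fourth-order Runge-Kutta step for the planar system."""
    k1 = field(x, y)
    k2 = field(x + h/2*k1[0], y + h/2*k1[1])
    k3 = field(x + h/2*k2[0], y + h/2*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def first_integral(x, y):
    # log of (y^a / e^{b y})(x^c / e^{k x}); taking logs avoids overflow
    return a*math.log(y) - b*y + c*math.log(x) - k*x

x, y, h = 4.0, 1.0, 0.001
K0 = first_integral(x, y)
for _ in range(20000):                 # integrate to t = 20
    x, y = rk4_step(x, y, h)
assert abs(first_integral(x, y) - K0) < 1e-6
```

Because the first integral is conserved, the computed orbit closes on itself, which is the numerical face of the periodic solutions of Fig. 4.2.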
Figure 4.2: Periodic solutions of predatorprey systems.
We are unable to solve for x(t) and y(t) explicitly, but we may learn much from the paths of the solutions, called orbits, displayed in Fig. 4.2. An orbit that is closed and free of equilibrium points represents a periodic solution. Each of those in Fig. 4.2 may have a different period T. Let us interpret the action taking place during one period. We trace out a solution once in the counterclockwise direction starting near the point (0, 0). Because there are few predators, the prey population begins to increase rapidly. This is good for the predators, which now find ample food and begin to multiply; but as the predators increase, they devour the prey, which therefore diminish in number. As the prey decrease, the predators find themselves short on food and lose population rapidly. The cycle continues.

Although we cannot find x(t), y(t), or T, surprisingly, we can find the average value of the population densities over any cycle. The average of a periodic function f over a period T is

  f̄ = (1/T) ∫₀ᵀ f(t) dt.

From (4.1.9) we have

  (1/T) ∫₀ᵀ [x'(t)/x(t)] dt = (1/T) ∫₀ᵀ [a − by(t)] dt = (1/T)aT − (b/T) ∫₀ᵀ y(t) dt;

furthermore,

  (1/T) ∫₀ᵀ [x'(t)/x(t)] dt = (1/T) ln [x(T)/x(0)] = 0,

because x(T) = x(0). This yields

  ȳ = (1/T) ∫₀ᵀ y(t) dt = a/b.

A symmetric calculation shows x̄ = c/k. Thus, the coordinates of the equilibrium point (c/k, a/b) are the average populations of any cycle. Notice that statistics on catches would be averages, and those averages are the equilibrium populations.

To solve the problem presented to Volterra (in this simple model), we must take into account the effects of fishing. The fishing was by net, so the
densities of x and y are decreased by the same proportional factor, namely, −εx and −εy, respectively. The predator-prey fishing equations become

  x' = ax − bxy − εx,
  y' = −cy + kxy − εy.   (4.1.12)

[The reader should consider and understand why b ≠ k, but ε is the same in both equations.] The new equilibrium point (or average catch) is

  ( (c + ε)/k , (a − ε)/b ).

In other words, a moderate amount of fishing (a > ε) actually diminishes the proportion of predator and increases the proportion of prey. If one believes the model (and not even Volterra did; he continued to refine it), there are far-reaching implications. For example, spraying poison on insects tends to kill many kinds, in the way the net catches many kinds of fish. Would spraying fruit trees increase the average prey density and decrease the average predator density? Here, the prey are leaf and fruit eaters, and the predators are the friends of the orchard. The controversy rages, and we will, of course, settle nothing here. Let it be said, however, that elderly orchardists in southern Illinois claim that prior to 1940 they raised highly acceptable fruit crops without spraying. Chemical companies showed them that a little spraying would correct even their small problems. Now they are forced to spray every 3 to 10 days during the growing season to obtain marketable fruit. In a more scientific vein, there is hard evidence that the feared outcome of spraying did occur in the apple orchards of the Wenatchee area of Washington state. There, DDT was used to control the McDaniel spider mite, which attacked leaves, but the spraying more effectively controlled its predator [see Burton (1980b, p. 257)].

We return now to Volterra's problem and consider the effect of logistic loads. Thus we examine

  x' = ax − dx² − bxy,
  y' = −cy + kxy,   (4.1.13)

with equilibrium at

  (x̄, ȳ) = ( c/k , (ak − dc)/(bk) ),   (4.1.14)

requiring ak > dc, so that it is in the first quadrant.
It is easy to see that any solution (x(t), y(t)) in the open first quadrant is bounded, because

  kx' + by' = kax − kdx² − bcy

is negative for x² + y² large, x and y positive. Thus, an integration yields kx(t) + by(t) bounded. In fact, one may show that all of these solutions approach the equilibrium point of (4.1.14). To that end, define

  u = ln [x/x̄]  and  v = ln [y/ȳ],   (4.1.15)

so that

  x = x̄e^u  and  y = ȳe^v.   (4.1.16)

Then using (4.1.13)-(4.1.16) we obtain

  u' = dx̄(1 − e^u) + bȳ(1 − e^v),
  v' = kx̄(e^u − 1).   (4.1.17)

If we multiply the first equation in (4.1.17) by kx̄(e^u − 1) and the second by bȳ(e^v − 1), then adding we obtain

  kx̄(e^u − 1)u' + bȳ(e^v − 1)v' = −dkx̄²(e^u − 1)²

or

  (d/dt) [kx̄(e^u − u) + bȳ(e^v − v)] ≤ 0.

Thus the function

  V(u, v) = kx̄(e^u − u) + bȳ(e^v − v)

is a Liapunov function. It has a minimum at (0, 0) (by the usual derivative tests), and V(u, v) → ∞ as u² + v² → ∞. As V'_{(4.1.17)}(u, v) ≤ 0, all solutions are bounded. Moreover, if we examine the set in which V'(u, v) = 0, we have e^u − 1 = 0, or u = 0. Now, if u = 0, then v' = 0 and u' = −bȳ(e^v − 1), which is nonzero unless v = 0. Thus, a solution intersecting u = 0 will leave u = 0
unless v = 0 also. This situation is covered in the work of Barbashin and Krasovskii (see our Section 6.1, Theorems 6.1.4 and 6.1.5). We may conclude that all solutions of (4.1.17) tend to (0, 0). But, in view of (4.1.16), all solutions of (4.1.13) approach the equilibrium (x̄, ȳ) of (4.1.14). [Incidentally, transforming (4.1.9) by (4.1.15) will simplify the graphing problem.]

It seems appropriate now to summarize much of this work in the following result.

Theorem 4.1.1. Consider (4.1.13) and (4.1.14) with a, b, c, and k positive, ak > dc, and d ≥ 0.

(a) If d > 0, then all solutions in Quadrant I approach (x̄, ȳ).

(b) If d = 0, all solutions in Quadrant I are periodic. The mean value of any solution (x(t), y(t)) is (c/k, a/b).

The predator-prey-fishing equations become

  x' = ax − dx² − bxy − εx,
  y' = −cy + kxy − εy,   (4.1.18)

so that the new equilibrium point is

  ( (c + ε)/k , ((a − ε)k − d(c + ε))/(bk) ).

Thus, the asymptotic population of prey increases with moderate fishing and the predator population decreases. Much solid scientific work has gone into experimental verification of Volterra's model, with mixed results. A critical summary is given in Goel et al. (1971, pp. 121-124).

The next observation is that, although the prey population immediately decreases upon contact with the predator, denoted by −bxy, it is clear that the predator population does not immediately increase upon contact with the prey. There is surely a time delay, say T, required for the predator to utilize the prey. This suggests the system

  x' = ax − bxy,
  y' = −cy + kx(t − T)y(t − T),   (4.1.19)

which is a system of delay differential equations. Actually, (4.1.19) does not seem to have been studied by Volterra, but rather by Wangersky and Cunningham (1957). Yet, the system seems logically to belong here in the successive refinement of the problem given Volterra.
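The convergence asserted in Theorem 4.1.1(a) is easy to see numerically. This sketch integrates (4.1.13) from a point in the open first quadrant and checks the approach to the equilibrium (4.1.14); the parameter values are illustrative assumptions satisfying ak > dc with d > 0.

```python
# Logistic-load predator-prey system (4.1.13):
#   x' = a*x - d*x**2 - b*x*y,   y' = -c*y + k*x*y.
# Illustrative parameters with ak > dc and d > 0, so Theorem 4.1.1(a)
# predicts convergence to the equilibrium (4.1.14).
a, b, c, d, k = 2.0, 1.0, 1.0, 0.5, 1.0
xbar = c / k                          # equilibrium abscissa
ybar = (a * k - d * c) / (b * k)      # equilibrium ordinate

def f(x, y):
    return a*x - d*x*x - b*x*y, -c*y + k*x*y

def rk4(x, y, h):
    """One fourth-order Runge-Kutta step for (4.1.13)."""
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y = 3.0, 0.5                       # start in the open first quadrant
for _ in range(20000):                # integrate to t = 200
    x, y = rk4(x, y, 0.01)
assert abs(x - xbar) < 1e-3 and abs(y - ybar) < 1e-3
```

Setting d = 0 in the same sketch reproduces the closed orbits of case (b) instead of convergence, matching the dichotomy in the theorem.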
The initial condition for (4.1.19) needs to be a pair of continuous initial functions for x and y on an initial interval of length T. Later, Rosenzweig (1969) analyzed, in some detail, the biological significance of the shape of the prey isocline, the curve on which x' = 0.

Let us assume that, in the second species at least, the distribution by age of the individuals remains constant, and let φ(ξ) dξ be the ratio of the number of individuals with ages lying between ξ and ξ + dξ to the total, with φ ≥ 0, φ ≢ 0. Adding these supposedly independent effects on individuals of the second species during the interval (t, t + dt), one obtains

  N₂(t) dt ∫_{−∞}^{t} φ(t − τ) f(t − τ) N₁(τ) dτ.
We then replace the second equation in (4.1.27) by

  dN₂ = −ε₂N₂ dt + N₂ dt ∫_{−∞}^{t} F(t − τ) N₁(τ) dτ,  F ≥ 0, F ≢ 0.

We then have the system

  N₁' = N₁(t)[ε₁ − γ₁N₂(t)],
  N₂' = N₂(t)[−ε₂ + ∫_{−∞}^{t} F(t − τ) N₁(τ) dτ],   (4.1.28)

or the more symmetric system

  N₁' = [ε₁ − γ₁N₂(t) − ∫_{−∞}^{t} F₁(t − τ) N₂(τ) dτ] N₁(t),
  N₂' = [−ε₂ + γ₂N₁(t) + ∫_{−∞}^{t} F₂(t − τ) N₁(τ) dτ] N₂(t),   (4.1.29)

with ε₁, ε₂, γ₁, γ₂ > 0, F₁ ≥ 0, F₂ ≥ 0, and especially γ₁ > 0 and F₂ ≢ 0. Volterra emphasizes that these integrals may take the form ∫_{−∞}^{t} or ∫_{t−T}^{t}, depending on the duration of the heredity.

Although the complete stability analysis of the problems formulated by Volterra has not been given, it is enlightening to view some properties of equations of that general type. We might call

  x' = x[a − bx + ∫_{t₀}^{t} K(t, s) x(s) ds],   (4.1.30)
with a and b positive constants and K continuous for t₀ ≤ s ≤ t < ∞, a scalar Verhulst-Volterra equation. Here, t₀ may be −∞, in which case we would, of course, write −∞ < s ≤ t < ∞. Thus we are taking into account the entire past history of x. Frequently, K(t, s) is discontinuous and the integral is taken in the sense of Stieltjes as described, for example, by Cushing (1976). We shall suppose for the next two results that t₀ ≤ 0 and that we have an initial function on [t₀, 0]. Thus, we shall be discussing the solutions for t > 0. We could let t₀ be any value and let the initial function be given on any interval [t₀, t₁], but the historical setting of such problems tends to be of the type chosen here.
Theorem 4.1.2. Let r, R, and m be positive constants with

  ∫_{t₀}^{t} |K(t, s)| ds ≤ m for t ≥ t₀,

and suppose that for t₀ ≤ s ≤ 0 we have 0 < r ≤ x(s) ≤ R with a − bR + mR < 0 and a − br − Rm > 0. Then r < x(t) < R for t > 0.

Proof. If the conclusion fails, there is a first t₁ > 0 with x(t₁) = R or x(t₁) = r, while r ≤ x(s) ≤ R for s ≤ t₁. If x(t₁) = R, then

  x'(t₁)/R ≤ a − bR + mR < 0,

a contradiction because we must have x'(t₁) ≥ 0. Now suppose x(t₁) = r. Then x'(t₁)/r ≥ a − br − Rm > 0, a contradiction because we must have x'(t₁) ≤ 0. This completes the proof.

Roughly speaking, if m is quite small, then solutions are bounded and extinction does not occur. It would be very interesting to learn precisely how such solutions behave. For example, can carrying capacity be defined, and do solutions oscillate around that carrying capacity?

Next, we consider the system

  x' = x[a − bx − cy − ∫_{t₀}^{t} K₁(t, s) y(s) ds],
  y' = y[−α + βx + ∫_{t₀}^{t} K₂(t, s) x(s) ds].   (4.1.31)

Theorem 4.1.3. If K₁(t, s) and K₂(t, s) are nonnegative and continuous for t₀ ≤ s ≤ t < ∞, if a, b, c, α, and β are positive constants, and if there is an ε > 0 with

  α > (a/b) ∫_{t₀}^{t} K₂(t, s) ds + ε for t ≥ t₀,

then all solutions remaining in the first quadrant and satisfying x(t) ≤ a/b on the initial interval [t₀, 0] are bounded.
Proof. First notice that x' < 0 if x > a/b; hence, x(t) ≤ a/b if t ≥ t₀. Next,

  βx' + cy' ≤ βx(a − bx − cy) + cy[−α + βx + ∫_{t₀}^{t} K₂(t, s) x(s) ds]
    ≤ βx(a − bx) + cy[−α + (a/b) ∫_{t₀}^{t} K₂(t, s) ds]
    ≤ βx(a − bx) − cεy,

which is negative if cεy > aβx − bβx². Hence, if the line βx + cy = constant lies above the parabola cεy = aβx − bβx², then [βx(t) + cy(t)]' < 0, so that y(t) is bounded.

It would be very interesting to obtain information about the qualitative behavior of these bounded solutions. It is our view that one of the real deficiencies of the attempts to analyze (4.1.31) is the absence of a true equilibrium of any type. For example, Cushing (1976) considers the system

  x₁' = x₁(a₁ − c₁x₂),
  x₂' = x₂[−a₂ + ∫₀ᵗ k₂(t − s) x₁(s) ds]

and speaks of the equilibrium point (a₂/[b₁ + ∫₀^∞ k₂(s) ds], a₁/c₁), where b₁ comes from another equation. Clearly, x̄₂ = a₁/c₁, but that value in x₂' does not yield x₂' = 0 for any constant x₁. Similarly, (4.1.31) does not have a constant equilibrium solution. It seems that one needs to locate an asymptotic equilibrium and then work on perturbation problems.

Volterra's derivation suggests that we consider K₁ ≡ 0. Thus, let us consider

  x' = x[a − dx − by],
  y' = y[−c + kx + ∫₀ᵗ K(t − s) x(s) ds],   (4.1.32)

in which a, b, c, d, and k are positive constants, K is continuous with K(t) ≥ 0, and ∫₀^∞ K(s) ds = r < ∞.
To locate an asymptotic equilibrium we write

  ∫₀ᵗ K(t − s) ds = ∫₀ᵗ K(s) ds = ∫₀^∞ K(s) ds − ∫_t^∞ K(s) ds =: r − γ(t).

Then we write (4.1.32) as

  x' = x[a − dx − by],
  y' = y[−c + kx + rx̄ − x̄γ(t) + ∫₀ᵗ K(t − s)[x(s) − x̄] ds],   (4.1.33)

where x̄ is defined by −c + kx̄ + rx̄ = 0, or x̄ = c/(k + r). Then a − dx̄ − bȳ = 0 yields ȳ = (a − dx̄)/b. Let

  u = ln [x/x̄]  and  v = ln [y/ȳ],

so that (4.1.33) becomes

  u' = dx̄(1 − e^u) + bȳ(1 − e^v),
  v' = kx̄(e^u − 1) − x̄γ(t) + ∫₀ᵗ x̄K(t − s)(e^{u(s)} − 1) ds.   (4.1.34)

Now the linear approximation of this is

  u' = −dx̄u − bȳv,
  v' = kx̄u + ∫₀ᵗ x̄K(t − s)u(s) ds − x̄γ(t),   (4.1.35)
which, in matrix form, with

  A = [ −dx̄  −bȳ ; kx̄  0 ],  C(t) = [ 0  0 ; x̄K(t)  0 ],  T(t) = [ 0 ; −x̄γ(t) ],  X = [ u ; v ],

is

  X' = AX + ∫₀ᵗ C(t − s)X(s) ds + T(t),   (4.1.37)

where all characteristic roots of A have negative real parts, ∫₀^∞ |C(s)| ds < ∞, and T(t) → 0 as t → ∞. Moreover, it is consistent with the problem to ask that

  ∫₀^∞ |γ(t)| dt < ∞.   (4.1.38)
We then find a matrix B = Bᵀ satisfying AᵀB + BA = −I and form a Liapunov functional for (4.1.37) in the form

  V(t, X(·)) = { [XᵀBX]^{1/2} + k ∫₀ᵗ [∫_t^∞ |C(u − s)| du] |X(s)| ds + 1 } exp[ −∫₀ᵗ |T(s)| ds ].
These forms are precisely the ones considered in Chapter 6. See the perturbation result Theorem 6.4.5 for both (4.1.37) and (4.1.34). Now return to the nonlinear form (4.1.34) and consider

  u' = dx̄(1 − e^u) + bȳ(1 − e^v),
  v' = kx̄(e^u − 1).   (4.1.39)

The work leading to Theorem 4.1.1 yields uniform asymptotic stability in the large. Under these conditions we shall see in Chapter 6 (Theorem 6.1.6) that there is a Liapunov function W for (4.1.39) that is positive definite and satisfies W'_{(4.1.39)}(u, v) ≤ 0. We then show in Chapter 6 (see Theorems 6.4.1-6.4.3 and 6.4.5) how W may be used to show global stability for the nonlinear system (4.1.34).
In addition, Brauer (1978) has an interesting discussion of such equilibrium questions as raised here. He applies certain linearization techniques of Grossman and Miller to systems of the form of (4.1.34) with (4.1.38) holding.
4.2 Renewal Theory
The renewal equation is an example of an integral equation attracting interest in many areas over a long period of time. An excellent account, given by Bellman and Cooke (1963), contains 41 problems (solved and unsolved) of historical interest. Consider the scalar equation

  u(t) = g(t) + ∫₀ᵗ u(t − s) f(s) ds,   (4.2.1)

where f and g : [0, ∞) → [0, ∞) are continuous. Note that we are assuming that f and g are nonnegative. Our discussion here is brief and is taken from the excellent classical paper by Feller (1941), which appeared at an interesting time historically. The work of Volterra in Section 4.1 had been well circulated and had received much attention. Moreover, just two years earlier Lotka (1939) had published a paper containing 74 references to the general questions considered in Section 4.1. The work by Feller represents an attempt to synthesize, simplify, and correct much of the then current investigation. He gives very exact results concerning the behavior of solutions of (4.2.1). This is in contrast to the stability objective of this book. His work is important here in revealing the kinds of behavior one might attempt to prove in qualitative terms. Moreover, he provides two excellent formulations of concrete problems. The entire paper is strongly recommended to the interested reader.

Although we make no use of it here, Feller points out that (4.2.1) can be put into another form of particular interest when f is not continuous. We have frequently differentiated an integral equation in order to use the techniques of integrodifferential equations. By contrast, one can integrate (4.2.1) and obtain a new integral equation.
Define U, F, and G by

  U(t) = ∫₀ᵗ u(s) ds,  F(t) = ∫₀ᵗ f(s) ds,  and  G(t) = ∫₀ᵗ g(s) ds,   (4.2.2)

so that we may write

  U(t) = G(t) + ∫₀ᵗ U(t − s) dF(s).   (4.2.3)

The main objective is to study the mean value of u(t), namely,

  u*(t) = (1/t) ∫₀ᵗ u(s) ds.
Equation (4.2.1) has at least two practical applications. The first is Lotka's formulation. In the abstract theory of industrial replacement, each time an individual drops out, that individual is replaced by a new one of age zero. (One may think, at times, of light bulbs, for example.) Here f(t) denotes the probability density at the moment of replacement that an individual of age t will drop out. Now let η(s) denote the age distribution of the population at time t = 0. Thus the number of individuals between ages s and s + δs is η(s)(δs) + o(δs). Then g(t) defined by

  g(t) = ∫₀^∞ η(s) f(t + s) ds   (4.2.4)

represents the rate of removal at time t of individuals belonging to the parent population. The function u(t) gives the removal rate at time t of individuals of the total population. Note that each individual dropping out at time t either belonged to the parent population or entered the population by the process of replacement at some time t − s for 0 < s < t. Hence, u(t) satisfies (4.2.1). Because f is a probability density function, we have

  ∫₀^∞ f(t) dt = 1.   (4.2.5)
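Equation (4.2.1) is also easy to solve numerically once f and g are given, since the unknown at each time level depends only on earlier values. The sketch below uses a trapezoid-rule discretization; the particular data f(s) = e^{−s} and g(t) = e^{−t} are illustrative assumptions for which u(t) ≡ 1 is the exact solution.

```python
import math

# Numerically solve the renewal equation (4.2.1):
#   u(t) = g(t) + int_0^t u(t - s) f(s) ds
# by the trapezoid rule on a uniform grid, marching forward in t.
def solve_renewal(f, g, T, n):
    h = T / n
    fv = [f(j * h) for j in range(n + 1)]
    u = [g(0.0)]                       # u(0) = g(0)
    for i in range(1, n + 1):
        # trapezoid: h*(u_i*f_0/2 + sum_{j=1}^{i-1} u_{i-j}*f_j + u_0*f_i/2)
        conv = 0.5 * u[0] * fv[i]
        for j in range(1, i):
            conv += u[i - j] * fv[j]
        # the unknown u_i appears once in the sum; solve for it
        ui = (g(i * h) + h * conv) / (1.0 - 0.5 * h * fv[0])
        u.append(ui)
    return u

u = solve_renewal(lambda s: math.exp(-s), lambda t: math.exp(-t), T=10.0, n=1000)
assert max(abs(v - 1.0) for v in u) < 1e-3
```

With these data, 1 = e^{−t} + ∫₀ᵗ e^{−s} ds for every t, so the computed values should stay near 1; the trapezoid rule is second-order accurate in the step h = T/n.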
The next formulation is for a single species and is akin to Volterra's own derivation of the predator-prey system of Section 4.1. Let f(t) denote the reproduction rate of females of a certain species at age t. In particular, the average number of females born during a time interval (t, t + δt) from a female of age t is f(t)(δt) + o(δt). If η(s) denotes the age distribution of the parent population at t = 0, then Eq. (4.2.4) yields the rate of production of females at time t by members of the parent population. Then u(t) in (4.2.1) measures the rate of female births at time t > 0. This time f is not a probability density function, and we have

  ∫₀^∞ f(t) dt being any nonnegative number.   (4.2.6)

This integral is a measure of the population's tendency to increase or decrease. We list without proof some sample results by Feller.

Theorem 4.2.1. Suppose that ∫₀^∞ f(t) dt = a and ∫₀^∞ g(t) dt = b are both finite, f ≥ 0, and g ≥ 0.

(a) In order that

  u*(t) = (1/t) ∫₀ᵗ u(s) ds → c

as t → ∞, where c is a positive constant, it is necessary and sufficient that a = 1 and that ∫₀^∞ tf(t) dt = m, a finite number. In this case, c = b/m.

(b) If a < 1, then ∫₀^∞ u(t) dt = b/(1 − a).

Notice that according to Theorem 2.6.1, this result concerns uniform asymptotic stability.

Theorem 4.2.2. Let ∫₀^∞ f(t) dt = 1 and ∫₀^∞ g(t) dt = b < ∞. Suppose there is an integer n ≥ 2 with

  ∫₀^∞ t^k f(t) dt = m_k for k = 1, 2, ..., n

all being finite and that the functions f(t), tf(t), ..., t^{n−2}f(t) are of bounded total variation over (0, ∞). Suppose also that

  lim_{t→∞} t^{n−2} g(t) = 0  and  lim_{t→∞} t^{n−2} ∫_t^∞ g(s) ds = 0.

Then lim_{t→∞} u(t) = b/m₁ and lim_{t→∞} t^{n−2}[u(t) − (b/m₁)] = 0.
4.3 Examples
In this section, we give a number of examples of physical processes that give rise to integrodifferential or integral equations. In most cases the examples are very brief and are accompanied by references, so that the interested reader may pursue them in depth. The main point here is that applications of the general theory are everywhere.

If f(t, x) is smooth, then the problem

  x' = f(t, x),  x(t₀) = x₀

has one and only one solution. If that problem is thought to model a given physical situation, then we are postulating that the future behavior depends only on the object's position at time t₀. Frequently this position is extreme. Physical processes tend to depend very strongly on their past history. The point was made by Picard (1907), in his study of celestial mechanics, that the future of a body in motion depends on its present state (including velocity) and the previous state taken back in time infinitely far. He calls this heredity and points out that students of classical mechanics claim that this is only apparent because too few variables are being considered.

A. Torsion of a Wire
In the same vein, Volterra (1913, pp. 138-139, 150) considered the first-approximation relation between the couple of torsion P and the angle of torsion W as W = kP. He claimed that the elastic body had inherited characteristics from the past because of fatigue from previously experienced distortions. His argument was that hereditary effects could be represented by an integral summing the contributions from some t₀ to t, so that the approximation W = kP could be replaced by

  W(t) = kP(t) + ∫_{t₀}^{t} K(t, s) P(s) ds.   (4.3.1)

He called K(t, s) the coefficient of heredity.
124
4. HISTORY, EXAMPLES, AND MOTIVATION
The expression of W is a function of a function, and Volterra had named such expressions "functions of lines." Hadamard suggested the name "functionals," and that name persists. [This problem is also discussed by Davis (1962, p. 112) and by Volterra (1959, p. 147).]
B.
Dynamics
Lagrange's form for the general equations of dynamics is
d or TtW* where qt,
d(Tn) d^—=Q*> n
(4 3 2)
'
are the independent coordinates,
i
s
the kinetic energy,
~n= ~ 2 5Z Yl bisqiqs the potential energy, and Qi, , Qn the external forces. See Rutherford (1960) or Volterra (1959, pp. 191192) for details. When aiS and b{S are constants, then the equations take the linear form s
s
Volterra (1928) shows that in the case of hereditary effects (4.3.3) becomes
J2a^
+
J2bisqs+J2
s
s
s
J
*is(t,r)qa(r)dr = Qi.
(4.3.4)
°°
If the system has only one degree of freedom, if $ is of convolution type, and if the duration of heredity is T, then the system becomes the single equation
q" + bq+ f
&{s)q(ts)ds = Q.
(4.3.5)
Jo If we suppose that «&(s) is continuous, nonpositive, increasing, and zero for s > T and if b > 0, then b + fQ 3>(s) ds = m > 0. In this way we may write (4.3.5) as q" + mqf
&(s)[q(t)  q(t  s)] ds = Q . Jo
(4.3.6)
4.3. EXAMPLES
125
Then \mq2  \ f z
z
*(s) [q(t)  q(t  s)]2 ds
(4.3.7)
Jo
is called the potential of all forces. Potentials are always important in studying the stability of motion. Frequently a potential function can be used directly as a Liapunov function, an idea going back to Lagrange (well before Liapunov). See Chapter 6 and the discussion surrounding Eq. (6.2.4) for such construction. A suggestion by Volterra (1928) concerning energy enabled Levin (1963) to construct a very superior Liapunov functional. C.
Viscoelasticity
We consider a onedimensional viscoelastic problem in which the material lies on the interval 0 < x < L and is subjected to a displacement given by u(t,x)
= f(t,x)x,
(4.3.8)
where / : [0, oo) x [0, L] —> R. If po : [0, L] —> [0, oo) is the initial density function, then, from Newton's law of F = ma, we have ax(t,x)
= [p0(x)}[ftt(t,x)},
(4.3.9)
where a is the stress. For linear viscoelasticity the stress is given by
r*
a(t,x)= / G(ts,x)uxt(s,x)ds,
(4.3.10)
Jo
where G : [0, oo) x [0,L] —> [0,oo) is the relaxation function and satisfies Gt < 0, Gtt > 0. If we integrate (4.3.10) by parts we obtain a(t,x) = G(0,x)ux(t,x) — G(t,x)ux(0,x) + / Gt(ts,x)ux(s,x)ds. Jo Because po(x)fu(t,x)
= ax(t,x)
r po(x)ua = \G(0,x)ux(t,x) L
(4.3.11)
it follows that 
* Gt(t s,x)ux(s,x)ds\ Jo
i . ix
(4.3.12)
126
4. HISTORY, EXAMPLES, AND MOTIVATION
If the material is homogeneous in a certain sense, then we take po(x) = 1 and G to be independent of re, say, G(t,x) = G(t). This yields utt = G{0)uxx(t,
x)+ [ Gt{tJo
s)uxx(s,
x) ds .
(4.3.13)
With wellfounded trepidation, one separates variables
u(t,x)
=g(t)h(x)
and obtains (where the overdot indicates d/dt and the prime indicates d/dx for this section only) g(t)h(x) = G(0)g(t)h"(x) + f G(t  s)g(s)h"(x) ds ,
(4.3.14)
Jo
so that h(x)/h"{x)
= \G(0)g(t) + f G(t
s)g(s) ds 1 /g(t)
(4.3.15)
K a constant (which may need to be negative to satisfy boundary conditions). This yields g(t) = KG{0)g{t) + K
ft . G{ts)g{s)ds Jo
(4.3.16)
Let g = y, g = z, and obtain
(y\_( \z)
o
A(y\+ft(
\KG{Q) 0) \z)
+
.o
°)(y^)ds
Jo \KG(t  s) o) {z(s)J
as
'
which we write as
X' = AX+ /
Jo
C(ts)X{s)ds.
If K < 0, then the characteristic roots of A have zero real parts and the stability theory developed in Chapter 2 fails to apply. A detailed discussion of the problem may be found in Bloom (1981, Chapter II, especially pp. 2931, 7375). Stability analysis was performed by MacCamy (1977b).
4.3. EXAMPLES
D.
127
Electricity
Even the very simplest RLC circuits lead to integrodifferential equations. For if a singleloop circuit contains resistance R, capacitance C, and inductance L with impressed voltage E(t), then Kirchhoff's second law yields LI' + RI+(l/C)Q = E(t),
(4.3.17)
with Q = j t I(s) ds. Although this is usually thought to be a trivial integrodifferential equation, if E is too rough to be differentiated, then the equation must be treated in its present form, perhaps by Laplace transforms. At the other end of the spectrum, Hopkinson (1877) considers an electromagnetic field in a nonconducting material, where E = (Ei,E2,Es) is the electric field and D = (Di,D2,Ds) the electric displacement. He uses Maxwell's equations (indeed, the problem was suggested by Maxwell) to write /"* D(i) = eE(t) +
(t s)E(s) ds,
(4.3.18)
J — oo
where e is constant and
0, and m'{'(t) < 0. Also, (4.3.23) is linear, so that we can consider the homogeneous form and then use the variation of parameters theorem. F.
Heat Flow
In many of the applications we begin with a partial differential equation and, through simplifying assumptions, arrive at an integral or integrodifferential equation. If one casts the problem in a Hilbert space with unbounded nonlinear operators, then these problems appear to merge into one and the same thing. A particularly pleasing example of the merging of many problems and concepts occurs in the work of MacCamy (1977b) who considers the problem of onedimensional heat flow in materials with "memory" modeled by ft
ut(t,x) = / a(t  s)ax(ux(s,x))ds °
u(t,0)
+ f(t,x),
0 < x < 1 t>0
= u(t,l)
(4.3.24)
= 0 ,
and
u(0,x) = uo(x). Now (4.3.24) is an example of an integrodifferential equation ,t
u'(t) = 
a(ts)g(u(s))ds
+ f(t),
Jo
(4.3.25)
M(0) = M0
on a Hilbert space with g a nonlinear unbounded operator. Moreover, (4.3.25) is equivalent to u"(t) + k(0)u'(t)+g(u(t))+
ft Jo
k'{ts)u'(s)ds
=4>{t)
(4.3.26)
130
4. HISTORY, EXAMPLES, AND MOTIVATION
for some kernel k, and the damped wave equation Utt + OlUt — Cx(ux) u(0,x)
= UQ(X) ,
= 0,
—OO < X < OO , t > 0 ,
(4.3.27) ut(O,x)
= u\(x),
a>0
is a special case of (4.3.26). Finally, the problem of nonlinear viscoelasticity is formally the same as (4.3.26). Thus, we see a merging of the wave equation, the heat equation, viscoelasticity, partial differential equations, and integrodifferential equations. The literature is replete with such merging. In Burton (1991) there is a lengthy, detailed, and elementary presentation of the damped wave equation as a Lienard equation. The classical Liapunov functionals for the Lienard equation are parlayed into Liapunov functions for the damped wave equation and corresponding stability results are obtained.
G.
Chemical Oscillations
The LotkaVolterra equations of Section 4.1 are closely related to certain problems in chemical kinetics. The problem discussed here was also discussed by Prigogine (1967), who gives a linear stability analysis of the resulting equations. Consider an autocatalytic reaction represented by A + X ^ IX , h
X + Y^2Y,
(4.3.28)
h 1
Y^B, h
where the concentrations of A and B do not vary with time. Here, all kinetic constants for the forward reactions are taken as unity and the reverse as h. The reaction rates, Vl=AXV2
hX2 ,
= XY  hY2 ,
v3 = Y hB
(4.3.29)
4.3. EXAMPLES
131
are based on the law of mass action [see the material in Section 4.1 following Eq. (4.1.5)]. Thus, the differential equations are X' = AXXY
 hX2 + KY2 , (4.3.30)
Y' = XY  Y  hY2 + hRA.
Note that as h —> 0 we obtain the LotkaVolterra equations (4.1.9) with a = b = c= k = 1. The total affinity of the reaction is A =  log h3R with
R = B/A .
Although it is difficult to find even the equilibrium point in the open first quadrant for (4.3.30), much can be said about the system. Solutions starting in the open Quadrant I remain there, according to our uniqueness theory. Also, X' + Y' = AX  Y  hX2 + hRA < 0
(4.3.31)
if X2 + Y2 is large. Hence, all solutions starting in the open Quadrant I are bounded. Moreover, if we write (4.3.31) as X' = P and Y' = Q, then (dP/dX) + (dQ/dY) = AY
 2hX  1  2hY < 0
(4.3.32)
in Quadrant I provided that h >
a n d A [0, oo) lias continuous Erst partial derivatives, W : D —> [0, oo) is continuous with W(0) = 0, W(x) > 0 if x ^ 0, V(t,x) > W(x) on D, and V(t,O) = 0. If VL 1 5s(i,x) < 0, then the zero solution of (5.1.5) is stable. Proof. Let e > 0 and to > 0 be given. Assume e so small that x < e implies x G D. Because W is continuous on the compact set L = {x G Rn : x = e}, W has a positive minimum, say, a, on L. Because V is continuous and V(t,O) = 0, there is a 5 > 0 such that xo < 5 implies V(to,xo) oo is a contradiction to F(z(i)) being bounded. This completes the proof. Remark 5.1.1. Barbashin(1968) notes that for any given matrix C = CT, the equation ATB + BA = C may be uniquely solved for B = BT provided that the characteristic roots of A are such that Aj + Xk does not vanish for any i and k. To give a unified exposition, we have consistently taken C = —I. However, any negative definite C would suit our purpose, and when A, + \k = 0 , we sometimes must take C ^ —/. That is, if Aj + A^ = 0, then we may still be able to solve ATB + BA = C with C being negative definite; but the solution may not be unique. The Barbashin result is an interesting one. We give three examples and a general construction idea. Example 5.1.1. Let
A=(1
°)
B=(h
b
A
and solve ATB + BA = I for B. We have
,R, f^h o \ /  i o\ A B + B A = ^ Q _ 2 h ) = { 0 _ : J, / B
so that b\ = — j , 63 = 5, and 62 is not determined. Any choice for 62 will produce a matrix B such that V(x) = x T Bx will satisfy Theorem 5.1.3 for x' = Ax. Thus, B exists for C = —I, but B is not unique.
138
5. INSTABILITY, STABILITY, AND PERTURBATIONS
When B can be determined, but not uniquely, it usually best serves our purpose to make \B\ a minimum. See, for example, Theorems 5.3.1 and 5.3.3. The next example has B unique, and A an unstable matrix. Example 5.1.2. Let AA=
1 h b f*\ BB= f {  1 2)> [ b 2 6 3AJ '
and solve for B in ATB + BA = I. We have (2{b1+b2) [b1+b2b3
h + b2b3\ 262 + 463 )
(1 \Q
0\ l) '
so that 61 + b2 = \ , hi + 62 = £*3 , and 262 + 463 =  1 . The determinant of coefficients is 1 1 0
1 0 1  1 = 4  [2 + 4] ^ 0 , 2 4
so the solution &3 = \ ,
b2 =  § ,
and
61 = 2
is unique. Because Aj + Xj ^ 0, this was predicted by the Barbashin result. If we write x = {x\, x2)T, we find V(x) = xTBx = 2xj ~ 3.T1.X2 + I x\. Along x\ = x2 we have V(x) = — \x\, so that the conditions of Theorem 5.1.3 are satisfied. Our next example shows that there are cases when we must select C other then —/.
5.1. THE MATRIX ATB + BA
139
Example 5.1.3. Consider A
{0
l)
'
B
[b2
bs) '
and try to solve ATB + BA = I. We obtain ATB + BA(2bl
h
)
which cannot be —/; however, if b\ = — 1, b2 = 0, and 63 = 1, then
— i.j r x2 '
and
V = 2(x21+x1x2 +xl) S {Xi
+ x2)
The conditions of Theorem 5.1.3 are satisfied when x2 = (l/n,0). The Barbashin result, stated in Remark 5.1.1, takes care of all matrices A with Xi + \k ^ 0. We know that there is no possible B when A has a characteristic root with a zero real part. However, we do wish to find some V satisfying Theorem 5.1.3 when no root of A has a zero real part. The following procedure shows that it is possible to do so. Some of the details were provided by Prof. L. Hatvani. Let x' = Ax with A real and having no characteristic root with zero real part. Transform A into its Jordan form as follows. Let x = Qy so that x1 = Qy1 = AQy or y' = Q~lAQy. Now
where Pi and P2 are blocks in which all characteristic roots of Pi have positive real parts and all roots of P2 have negative real parts. Let M* denote the conjugatetranspose of a matrix M and define ,0
Bi =  I
(exp P*t)(exp Pit) dt
J — GO
and O
B2 = +
Jo
(exp P*t) (exp P2t) dt.
140
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Notice that /=(expP*t)(expP 1 t)° o o =  f
(d/dt) [(exp P1*t)(exp P^)] dt
J — OO
= /
P*(expP*t)(expPii)dt
./ — OO
 /
(expP1*t)(expPit)Pidt
Jco
= PfB, + BiPi . In the same way,  7 = +(expP2*t)(expP2t)~ O
= / Jo
(d/dt) [(exp P2*t) (exp P 2 t)] dt OO
P 2 *(expP 2 *i)(expP 2 i)dt
/
O
+ / Jo
(expP 2 *t)(expP 2 t)P 2 dt
= P2*B2 + B2P2 Next, form the matrices „
p =
(Pi
0\
(Bx
and B =
( o ftj
0\
(o B J '
and notice that
(PIB1 ^ 0 =
0 \ P1B2)
fP1*B1 + B1P1 ^
0
+
(BXPX ^ 0 0
0 \ BsPs^ A
P2*B2 + B2P2J
5.1. THE MATRIX ATB + BA
141
Thus y' = QlAQy = Py, and so V(y) = yTBy yields V'{y) = yTy. We then have {QlAQ)*B + B{Q~lAQ) = I or Q*AT(Q1)*B + BQ1AQ = I. Now, left multiply by (Q*)^1 and right multiply by Q^1 obtaining AT(Q~1)*BQ1
+ {Q*)lBQ~lA
=
{Q*)lQ~l
and, because (Q^1)* = ( 0 for each i ^ O . Proof. If V(x) = xTBx > 0 for each x ^ 0, then V is positive definite and V'(x) = xTCx < 0, so that stability readily follows. Indeed, x = 0 is uniformly asymptotically stable. Suppose there is some xo ^ 0 with x^Bxo < 0. If X^BXQ = 0, let x(t) = x(i,0,x 0 ) and consider V(x(t)) = xT(t)Bx(t) with V'(x(t)) = xT(t)Cx{t) < 0 at t = 0. Thus, V decreases, so if tx > 0, V(x.{ti)) = xT(ti)Bx(ti) < 0. Thus, we may suppose X^BXQ < 0. If xo = nyn defines yn, then X^BXQ = n2y^Byn < 0, so {yn} converges to zero and V(yn) < 0. All parts of Theorem 5.1.3 are satisfied and x = 0 is unstable. This completes the proof.
5.2
The Scalar Equation
The concept in Theorem 5.1.6 is a key one, and it will be extended to systems of Volterra equations after we lay some groundwork with scalar equations.
5.2. THE SCALAR EQUATION
143
Consider the scalar equation x' = A(t)x +
ft
Jo
C(t,s)x(s)ds
(5.2.1)
with A{t) continuous on [0, oo) and C(t, s) continuous for 0 < s < t < oo. Select a continuous function G(t,s) with dG/dt = C(t,s), so that (5.2.1) may also be written as /"* x'= Q(t)x + (d/dt)
G(t,s)x(s)ds
(5.2.2)
Jo with Q(t)+G(t,t) = A(t). Note that (5.2.1) and (5.2.2) are, in fact, the same equation. For reference we repeat the definition of stability of the zero solution of (5.2.1) and then negate it. Definition 5.2.1. The zero solution (5.2.1) is stable if for each e > 0 and each to > 0 there exists a 5 > 0 such that 4> : [O,io] — R, 4> continuous, \ to imply x(t, to, <j>) < £. Definition 5.2.2. The zero solution of (5.2.1) is unstable if there exists an e > 0 and there exists a to > 0 such that, for each 5 > 0, there is a continuous function cj> : [O,to] —> R with \(j>(t)\ < 5 on [O,io] and with \x(ti,to,(p)\ > s for some ti > toTheorem 5.2.1. Suppose there are constants Mi and Mi, such that for 0 < t < oo we have rt
/>oo
/ \C(t,s)\ds + Jo Jt
\C(u,t)\du
<M1<M22A(t)x22
I
\C(t,s)\\x(s)x(t)\ds
Jo rt
OO
/
\C{u,t)\dux2 + / Jo
\C(t,s)\x2(s)ds
ft
> 2A(t)x2  / \C(t,s)\ [x2{t) +x2(s)] ds Jo oo
/
it
\C(u,t)\dux2+
/ Jo
\C(t,s)\x2(s)ds
5.2. THE SCALAR EQUATION
[
rt
2A(t)
145
rCO
/ \C(t,s)\ds Jo
 / Jt
\C(u,t)\du
x2
> [2A(t)  M J ] I 2 > \M2  Mx\x2 d
^ax2,
a>0.
Now, if x(t) = x(t,to,V2(t,x())>V2(t0A())+a
/
x2(s)ds.
Jtu
Given any to > 0 and any 5 > 0 we can pick a continuous initial function <j) : [0,t0] ^ R with V^io, ) > 0. Thus, x2(t) > V2(t0, so that x2(t)>V2(t,x()) >V2(t0, = ^(io,
+a /
F2(t0,
) ds
) + aV2(to, ())(t 
t0),
and so \x(t)\ —> oo as i —> oo. This completes the proof.
Corollary 1. Let (5.2.4) hold, let A(t) be bounded, and let A{t) < 0. Then x = 0 is asymptotically stable. Proof. We showed that Vr1'/521j(t,2;()) < —ax2, so we have x2(t) in L1[0, oo) and x2(t) bounded. Note also that x'(t) is bounded. Thus, x(t) —> 0 as t —> oo.
Exercise 5.2.1. Try to eliminate the requirement that J4(£) be bounded from the corollary. In Chapter 6 we develop three ways of doing this. Corollary 2. Let (5.2.4) hold and let A{t) > 0. Then the unbounded solution x(t) produced in the proof of Theorem 5.2.1 satisfies \x(t)\ > c\ + ci(t — to) for c\ and c2 positive.
146
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Theorem 5.2.2. ROO
\C(t,s)\ds + Jt
\C(u,t)\du < J ,
(5.2.6)
and [ Jt
\G(u,t)\du + f \G(t,s)\ds [0, oo) with
\G(t, s)\ < h(t  s) and h(u)  ^ O a s m o o .
(5.2.8)
Then the solution of (5.2.2) is stable if and only if Q(t) < 0. Proof. First suppose that Q(i) < 0 and define V3(t,x())=
x f G{t,s)x(s)ds L Jo
+Q2 [ [ Jo Jt
\G(u,s)\dux2{s)ds,
so that along a solution x(t) of (5.2.2) we have *3(5.2.2) (*> ^(O) =2(X
G
J
^
S
)
X
^ dS ) Q®X
G(u,t)dua; 2 Q2 / G(t,s)2;2(s) ds Jo
+ Q2 / Jt
rt
< 2Q{t)x2 + Q2 / \G(t, s)\ [x2(s) + x2(t)] ds Jo pt
oo
/
\G{u,t)\dux2 Q2
= \2Q(t) + Q2(l < [2Qi + RQi]x2 d
=f3x2,
/3>0.
\G(t,s)\ds+J
/ \G(t,s)\x2(s)ds Jo
\G(u,t)\du)]x2
5.2. THE SCALAR EQUATION
147
Recall that (5.2.1) and (5.2.2) are the same and consider V\{t, x{)) once more. It is certainly true that VL52 1 j(t,x()) = V[,52 2Jt,x()). Hence, Vj'/g 2 2) m a Y be obtained by taking V;^2A)(t,x()) 0 and define \G(u,s)\dux2(s)ds,
V4(t,x())= (x I G(t,s)x(s)ds) Q2 I I V Jo J Jo Jt so that *4(5.2.2) (* 2Q(t)x2 Q2
\G(u,t)\dux2
I \G(t, s)\ [x2(s) + x2(t)] ds Jo
ft
f'OO
+ Q 2 / \G{t,s)\x2{s)ds Jo
Q2
/ Jt
= \2Q(t)  Q2( I \G(t,s)\ds+f > [2Q(t)  RQ^x2 > [2Qi  RQi]x2 = 72; ,
7>0.
\G{u,t)\dux2 \G(u,t)\du\\x2
148
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Now, by way of contradiction, we suppose x = 0 is stable. Thus, given £ > 0 and 0, there is a 5 > 0 such that for any continuous 4> : [0, to] —> R with \ t0. For this t0 and this S, we may choose such a cf) with V 4 (io,0())>O and let
x{t)=x{t,to,4>). As J o G(£, s) ds is bounded and x(t) is bounded, so is JQ G(t, s)x(s) ds. If x2{t) is not in £ 1 [0, oo), then x(t) is unbounded, and we have
[ * ( * )  / G(t, 3)1(3) ds
>F4(t,x()) >F 4 (t o ,0(O)+7 / a;2(s)ds. o
1
2
Hence, we suppose x' {t) is in X [0, 00). Next, note that
< / G(t,s)ds / Jo Jo
\G(t,s)\x2(s)ds
by the Schwartz inequality. Moreover, \G(t, s)\ < h(t — s) and h(u) ^ 0 as M ^ 00. Thus,
/ \G{t,s)\x2{s)ds
[V4(t0,(P())]1/2.
As the integral tends to zero, it follows that \x(t)\ > a for large t and some a > 0. This contradicts x2(t) in L1, thereby completing the proof.
5.2. THE SCALAR EQUATION
149
Corollary 1. Suppose that A is constant, C(t,s) = C(t — s), G(t,s) = G(t  s), Jo°° \C(t)\dt < oo, and G(t) =  / t °° C(s) ds. Also suppose that O
\G(u)\duCO
OO
/
\C(ut)\du=
/ Jo
C(v) 0, and let P = sup / Ci(t,s)ds, t>o Jo
(5.2.20)
J = s u p f \H{t,s)\ds,
(5.2.21)
t>0 Jo
and agree that 0 x P = 0. Theorem 5.2.6. and
Suppose that J < 1, J  i ( i )  < ^Q, for some Q > 0,
/ [10^,3)1+Q\H(t,s)\]ds Jo + / [(l + J)\C1(u,t)\ + (Q + P)\H(u,t)\]du Jt
2\L(t)\ 0 .
(5.2.22)
In addition, suppose there is a continuous function h : [0, oo) —> [0, oo) such that \H(t, s)\ < h(t — s) and h(u) —> 0 as u —> oo. Tien tie zero solution of (5.2.19) is stable if and only if L(i) < 0.
154
5. INSTABILITY, STABILITY, AND PERTURBATIONS
The proof is left as an exercise. It is very similar to earlier ones, except that when using the Schwartz inequality one needs to shift certain functions from one integral to the other. Details may be found in Burton and Mahfoud (1983, 1985). Numerous examples and more exact qualitative behavior are also found in those papers.
5.3
The Vector Equation
We now extend the results of Section 5.2 to systems of Volterra equations and present certain perturbation results. Owing to the greater complexity of systems over scalars, it seems preferable to reduce the generality of A and G. Consider the system x' = ^ x + / C(t,s)x(s)ds,
(5.3.1)
Jo in which A is a constant n x n matrix and C an n x n matrix of functions continuous for 0 < s < t < oo. We suppose there is a symmetric matrix B with ATB = BA= I.
(5.3.2)
Moreover, we refer the reader to Remark 5.1.2 and to Theorem 5.1.6, which show that (5.3.2) can be replaced by the more general condition that ATB + BA = C has a solution B for some positive definite matrix C. T h e o r e m 5.3.1. Let (5.3.2) hold and suppose there is a constant M > 0 with \B\[
\ \C(t,s)\ds
\ Jo
+
\C(u,t)\du)
Jt
<M < 1 .
(5.3.3)
J
Then the zero solution of (5.3.1) is stable if and only ifxTBx
> 0 for each
5.3. THE VECTOR EQUATION
155
Proof. We define V1(t,x())=xTBx+\B\
/
/
\C(u,s)\dux2(s)ds,
Jo Jt
where x 2 = x T x. Then VH5.3.i)(tM)) = ^TAT
j\T(s)CT(t,s)ds^BX
+
+ xTB\Ax + / C(t,s)x(s)ds I Jo \C(u,t)\dux2  I \B\\C{t,s)\x2{s)ds Jo rt
=  x 2 + 2x T B / C(t, s)x{s) ds Jo ft
OO
/
\C(u,t)\dux2 \B\ / \C(t,s)\x2ds Jo <  x 2 + \B\ I \C(t, s)\ [x2(s) + x2(i)] ds Jo
nt
OO
/
\C{u,t)\duy? \B\ / \C(t,s)\yi2{s)ds
= 1 + \B\ ( f \C(t,s)\ds + < [l+M]x2=fax2, T
Jo r\C(u,t)\du\]x2
a>0.
Now, if x Bx > 0 for all x / 0 , then V\ is positive definite and V{ is negative definite, so x = 0 is stable. Suppose there is an xo ^ 0 with x^Sxo < 0. Argue as in the proof of Theorem 5.1.6 that there is also an xo with xj£?xo < 0. By way of contradiction, we suppose that x = 0 is stable. Thus, for e = 1 and to = 0, there is a S > 0 such that xo < 5 and t > 0 implies x(i,0,xo) < 1. We may choose xo with xo < S and x^Bxo < 0. Let x(t) = x(i,0, xo) and have Vi(0,x0) < 0 and V{(t,x()) < ax2, so that xT(t)Bx(t) 0, then xT (tn)Bx{tn) —> 0, and this would contradict xT(t)Bx(t) < x^Bx0 < 0. Thus, there is a 7 > 0 with x2(t) > 7, so that xT(t)Bx(t)
< X^BXQ  ccyt,
which implies that x(i) —> 00 as t —> 00. This contradicts x(i) < 1 and completes the proof. In Eq. (5.3.1) we suppose C is of convolution type, C(t, s) = C(t — s), and select a matrix G with G'(t) = C(t). Then write (5.3.1) as ft
x' = Qx+{d/dt)
Jo
G{ts)x(s)ds,
(5.3.4)
where Q + G(0) = A. Note that (5.3.1) and (5.3.4) are the same under the convolution assumption. We suppose there is a constant matrix D with DT = D and QTD + DQ = I.
(5.3.5)
Refer also to Remark 5.1.2. Theorem 5.3.2. Let (5.3.5) hold and suppose G(t) —> 0 as t —> 00. Suppose also that there are constants N and P with 2\DQ\
\G(t)\dt 0 there is a 7 > 0 such that x < 7 implies i?j(x) < 77. Let 77 > 0 be given and find 7 such that as long as x(i) < 7 we have \C(u,t)\dux2 D
V'(t, x()) < j  1 + B \2rn2 + 2V+ f \C(t, s)\ (1 + mi + Jrj) ds OO
/
N
\C(u,t)\du  x 2
rt
+ \B\ \C(t,s)\(l+m1 + Jr])x2(s)ds Jo D
it
Jo
\C(t,s)\x2(s)ds.
If r = B(mi + Jrj), then
f
r
r*
V"(i, x()) < 0 for all x ^ 0, and let e > 0 and to > 0 be given. Assume £ < 7. Because B is positive definite, we may pick d > 0 with dx2 < x T Bx < F(t,x()). Then, for £ > 0 and t0 > 0, we can find 5 > 0 such that  oo .
(5.3.16)
Theorem 5.3.4. Suppose that (5.3.2), (5.3.3), (5.3.15), and (5.3.16) hold. All solutions of (5.3.14) are bounded if and only ifxTBx > 0 for each x ^ 0 . Proof. In the proof of Theorem 5.3.1 we found V[,5 a > 0. Select L > 0 so that
3 1)(i,x())
< —ax2,
 a x 2 + 2S x (x + l)A(i)  L\(t) < ax2 , a > 0, for all x when t is large enough, say, t > S. Next, define V(t,x()) r
=
xTBx + l + 5 /
L
rt /.oo
/
Jo Jt
C(M,s)dMx2(s)ds
I f /
J
1
*
e x p \L
L
1
/ A(s)ds
io
J
,
5.4. COMPLETE INSTABILITY
163
so that V(' 5 .3.14)(*>X())
< L\(t)V
+ exp [  J* LX(s) ds ] [V;(531) (t, x()) +21511x1 f (i, x)]
0 if t > S. Suppose that x^Bxo > 0 for all xo =/= 0. If x(i) is any solution of (5.3.14), then, by the growth condition on f, it can be continued for all future time. Hence, for t > S we have V(t,x.(t)) < V(S,x.()), so that x(i) is bounded. Suppose that x^Bxo < 0 for some xo =/= 0. Because B is independent of f, one may argue that x^Bxo < 0 for some xo. Pick to = S and select cj> on [0,t0] with V{to,(f>) < 0. Then F'(t,x()) < 0 implies V(t,x()) < V(to, 0()) One may argue that x(t) is bounded strictly away from zero, say, x(i) > /i, for some fi > 0. As X(t) * 0, if t > S, then Vr/(t,x()) < /3x 2 , so y'(i,x()) <  / V for large t. Thus, V(t,x()) > oo and x(i) is unbounded. This completes the proof. Exercise 5.3.1. Review the proof and notice that when B is positive definite one may take L so large that the condition A(i) —> 0 is not needed. Exercise 5.3.2. Formulate and prove Theorems 5.3.3 and 5.3.4 for Eq. (5.3.4)
5.4
Complete Instability
In this section we focus on three facts relative to instability of Volterra equations. First, when all characteristic roots of A have positive real parts, then an instability result analogous to Theorem 5.3.1 may be obtained by integrating only one coordinate of C(t,s), as opposed to integrating both coordinates in (5.3.3). Next, we point out the existence of complete instability in this case. Indeed, this also could have been done in Section 5.2 or 5.3. Finally, we note that Volterra equations have a solution space that is far simpler than one might expect. Generally, complete instability is impossible for functional differential equations.
164
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Consider the system /"* x' = Dx+ / C{t,s)x{s)ds, (5.4.1) Jo where D is an n x n constant matrix whose characteristic roots all have positive real parts. Find the unique matrix L = LT that is positive definite and satisfies DTL + LD = I,
(5.4.2)
and find positive constants m and M with x > 2m(x T Lx) 1 / 2
(5.4.3)
and ix < M(xT£x)1/2
.
(5.4.4)
Theorem 5.4.1. Let (5.4.2)(5.4.4) hold and let f™ \C(u,s)\du be continuous. Suppose there is an H > M and 7 > 0 with m — H/f°° \C(u,t)\du > 7. Then each solution x(i) of (5.4.1) on [0,oo) with x(0) 7^ 0 satisfies x(t) > ci + C2t for 0 < t < 00, where ci and C2 are positive constants depending on x(0). AJso, if to > 0, then, for each S > 0, there is a continuous initial function cf) : [0,to] —> Rn with \4>{t)\ < 8 and x(t, to, 0 )  > c\ + C2(t — to) for to < t < 00, where c\ and ci are positive constants depending on (p and to Proof. Let H > 0, define V(t,x{)) = (x T Lx) 1 / 2 i7 / / Jo Jt
\C(u,s)\du\x(s)\ds,
and for x / 0 , obtain V('5.4.1)(*.X())
= I \*TDT + f xT(s)CT(t, s) dsj i x + x T i ii)x + T C(t, s)x(s) dsj I /{2(xTix)1/2} ,00
iJ/ it
,t
C(u,t)dux +iJ / C(t,s)x(s)rfs io
5.4. COMPLETE INSTABILITY
165
> {x T x/2(x T Lx) 1/2 } {Xr(t)LC(t,s)X(s)/[xr(t)LX{t)}1/2}ds
+ [ Jo
/ Jo
\C(u,t)\du\x\+H
\C(t,s)\\x(s)\ds
roo
> m  x  HI
\C(u,t)\du\x. Jt
+ H f \C(t,s)\\x(s)\ds Jo r
= \mH I Jt I
 M f Jo
\C(t,s)\\X(s)\ds
i
*
\C{u,t)\du\\x\ + (HM) J Jo
\C(t,s)\\x(s)\ds
> 7  x  + ( f f  M ) / C(M)x(s)ds. Jo Hence, there is a /i > 0 with V('5.4.i)(*.x())>M[x(t)l + x'(t)l](54.5) From the form of V and an integration of (5.4.5), for some a > 0, we have aX(£)>[xT(i)Lx(£)]1/2>^x()) + / Mx(s)ds, Jto
where x(i) is any solution of (5.4.1) on [to,t) with to > 0. If i 0 = 0, then
x(t) > j ^ ^ L x W l ^ + jT Mx(s)ds / a > [x^OjixfO)] 1 / 2 /^ so that x(i) > {[x T (O)ix(O)] 1/2 +t/i[x T (O)ix(O)] 1/2 /a}/a def
= Ci + C2t . If io > 0, select on [0, io] with /"to
[<j>T{to)Lct>{to)\1/2>
/"OO
/ JO
C(u,s)du^(s)ds
Jto
and draw a conclusion, as before, to complete the proof.
166
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Roughly speaking, a functional differential equation is one in which x'(t) depends explicitly on part or all of the past history of x(t). Such dependence is clear in (5.4.1). Explicit dependence is absent in x'(t)=f(t,x(t)), although it may become implicit through continual dependence of solutions on initial conditions. Conceptually, one of the most elementary functional differential equations is a scalar delay equation of the form x'{t) = ax(t) + bx(t  1),
(5.4.6)
where a and b are constants with 6 ^ 0 . Recall that we encountered a system of such equations in Section 4.1. To specify a solution we need a continuous initial function R. We may then integrate x'[t)
= ax{t) + b{t  1 ) ,
x{t0) = (*o)
on the interval [to, io + 1] to obtain a solution, say, tp(t). Now on the interval [to, to + 1] the function ip becomes the initial function. We then solve x'[t) = ax{t) + 6V(* " 1),
x(t0 + 1) = V(*o + 1),
on the interval [to + 1, to+2] We may, in theory, continue this process to any point T > to This is called the method of steps and it immediately yields existence and continuation of solutions. We can say, with much justice, that (5.4.6) is a completely elementary problem whose solution is within the reach of a college sophomore. Indeed, letting a and b be functions of t does not put the problem beyond our grasp. By contrast, (5.4.1) is exceedingly complicated. Unless C(t,s) is of such a special type that (5.4.1) can be reduced to an ordinary differential equation, there is virtually no hope of displaying a solution in terms of integrals, even on the interval [0,1 . Yet, it turns out that the solution space of (5.4.6) is enormously complicated. With a and b constant, try for a solution of the form x = ert with r constant. Thus, x' = rert, so rert =aert + ber(tl) or r = a + ber,
(5.4.7)
which is called the characteristic quasipolynomial. It is known that there is an infinite sequence {rn} of solutions of (5.4.7) [see El'sgol'ts(1966)].
5.5. NONEXPONENTIAL DECAY
167
Moreover, R e r n —> —oo as n —> oo. Each function x(i) = cer"* is a solution for each constant c. Because we may let c be arbitrarily small, the zero solution cannot be completely unstable. As simple as (5.4.6) may be, its solution space on [0, oo) is infinitedimensional, whereas that of (5.4.1) on [0, oo) is finitedimensional. This contributes to the contrast in degree of instability. The infinitedimensionality would appear to have a stabilizing effect. Roughly speaking, any ndimensional linear and homogeneous, functional differential equation whose delay at some to reduces to a single point and that enjoys unique solutions will have a finitedimensional solution space starting at to For example, the delay equation x'(t) = ax(t) + bx[t  r(t)}
(5.4.8)
with r(t) continuous, r(t) > 0, and r(io) = 0 for some to, should have exactly one linearly independent solution starting at to
5.5
Nonexponential Decay
In this section we discuss work of J. Appleby and D. Reynolds on a linear scalar equation z'(t) = az(t) +
ft
Jo
k(ts)z(s)ds,
t >0,
z(0) = 1,
(5.5.1)
whose solutions decay slower than exponential. We make the assumption that k is a continuously differentiable, integrable function with k(t) > 0 for all t > 0. Then (5.5.1) has a unique continuous solution on [0, oo). It is known that z e L 1 (0, oo) if and only if a > /0°° k(s) ds, and that in this case z(t) —> 0 as t —> oo. On the other hand, if z(t) —> 0 as t —> oo, then a > /0°° k(s) ds > 0. We ask that the kernel further satisfy
lim ffl = 0 ,
(5.5.2)
V t^oo k(t) ' which forces k(t) —> 0 as t —> oo more slowly than any decaying exponential. To see this, put p(t) = k'(t)/k(t), and let e > 0. Then there is a T > 0 such that pit) >  e / 2 for all t > T. Since
k(t) =
k(T)ef'rp{s)ds,
it follows by multiplying both sides by e£t, that e£tk(t) > k(T)e£(tJ">/2 > oo as t —> oo. Hence (5.5.2) implies that lim e£tk(t) = oo , t—>oo
for every
e > 0.
(5.5.3)
168
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Suppose that the solution of (5.5.1) obeys z(t) —> 0 as t —> oo. Then it satisfies the ordinary differential equation z'(t)
= az(t)
+ f(t),
t>0,
with the forcing term given by f(t)=
/ k(ts)z(s)ds Jo
> 0
as
t > oo .
Since a > 0 and the solution can be represented using the variation of parameters formula, tt
z(t) = e~at+
ea(ts)f(s)ds,
t>0,
(5.5.4)
Jo the asymptotic behaviour of f(t) as t —> oo influences the rate at which z(t) —> 0 as £ —> oo. This is brought out in the proof of the following result. Theorem 5.5.1. Suppose that k is an integrable and continuously differentiable function on [0, oo), with k(t) > 0 as t —> oo. Moreover assume that k'(t)/k(t) —> 0 as t —> oo. If the solution of (5.5.1) obeys z(£) —> 0 as i —> oo, then
liminf^>4t^oo
fe(t)
(55.5)
az
Consequently limj^oo e6tz(t) = oo for every e > 0. Proof. Firstly note that z(t) > e~a* for all t > 0. Since z(0) = 1, t0 = inf {t e [0, oo) : z(t) = 0} . Since k(t) > 0 and z(t) > 0 for all 0 < t < t0, f{t) > 0 for all 0 < t < t0. If to is finite, it follows from (5.5.4) that o
0 = z(t0) = eat" + / ea^'s)f{s) Jo
ds > e~at" > 0 ,
giving a contradiction. Therefore z(i) > 0 for all t > 0. Employing the positivity of k, f(t) > 0 for all t > 0, and hence (5.5.4) implies that z(t) > e~at for all t > 0. Consequently
/ ( * ) = f k{t  s)z(s) ds > [ k{t  s)e~as ds . Jo Jo
5.5. NONEXPONENTIAL DECAY
169
Thus /( g(t) for all t > 0, where g(t) = e~a* /„* easfc(s) ds is independent of z. Hence using (5.5.4) again, rt
z{t)>eat
rt
easf(s)ds>eat
Jo
easg(s)ds,
t>0,
Jo
and consequently, using the positivity of k(t), \teasg(s)ds
z(t)
7T > ——T^A—>
i > 0

at
k(t) ~ e k(t) By L'Hopital's rule, (5.5.2) and (5.5.3),
(556) v
;
g(t) S* ea'k(s) ds 1 1 hm —H = hm — —— = hm ,,,,, =  . too k(t) too e°*fc(t) too ( ^ + f f l ) a Using L'Hopital's rule again,
This and (5.5.6) establish that (5.5.5) holds. Due to (5.5.3) and (5.5.5),

z(t) e^{εt} = (z(t)/k(t)) k(t) e^{εt} → ∞

as t → ∞ if ε > 0, completing the proof.

We conclude with some remarks.

Remark 5.5.1. (5.5.3) implies that, for each T > 0,

k(t-s)/k(t) → 1 as t → ∞, uniformly for 0 ≤ s ≤ T.    (5.5.7)

If this condition is assumed instead of (5.5.3) and a > ∫_0^∞ k(s) ds, the lower bound in (5.5.5) can be improved to

liminf_{t→∞} z(t)/k(t) ≥ 1 / ( a (a - ∫_0^∞ k(s) ds) ),

where the right-hand side is interpreted as infinity if a = ∫_0^∞ k(s) ds [see Appleby and Reynolds (2004)]. It turns out that (5.5.7) also implies (5.5.3).
170
5. INSTABILITY, STABILITY, AND PERTURBATIONS
Remark 5.5.2. Theorem 5.5.1 asserts that the solution z does not decay to zero faster than the kernel k. Positive, integrable, continuous functions satisfying (5.5.7) and

(∫_0^t k(t-s) k(s) ds) / k(t) → 2 ∫_0^∞ k(s) ds  as  t → ∞    (5.5.8)

are called subexponential in Appleby and Reynolds (2002). It is shown in Appleby and Reynolds (2002, 2003) that if the kernel k is subexponential and a > ∫_0^∞ k(s) ds, then z and k decay at exactly the same rate: indeed

lim_{t→∞} z(t)/k(t) = 1 / (a - ∫_0^∞ k(s) ds)².

Remark 5.5.3. At first glance the conditions (5.5.7) and (5.5.8) seem very restrictive. However, if k is a positive, continuous and integrable function which obeys k(λt)k(t)^{-1} → λ^α as t → ∞ for all λ > 0, for some α < -1, then k is subexponential. An example is k(t) = (1 + t²)^{-1}. Another example, outside this class, is k(t) = exp(-(t + 1)^β) with 0 < β < 1.
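The slow decay described by Theorem 5.5.1 and Remark 5.5.2 can be observed numerically. The following minimal sketch (the kernel k(t) = (1+t)^{-2}, the value a = 2, and the discretization are illustrative choices, not taken from the text) integrates z' = -a z + ∫_0^t k(t-s) z(s) ds, z(0) = 1, and shows that z stays positive and decays far more slowly than e^{-at}:

```python
import math

# Numerical sketch of Theorem 5.5.1 / Remark 5.5.2 (illustrative choices:
# kernel k(t) = (1+t)^-2 is subexponential with total mass 1, and a = 2 > 1).
# We solve  z'(t) = -a z(t) + int_0^t k(t-s) z(s) ds,  z(0) = 1.

a, h, n = 2.0, 0.05, 1200            # horizon T = n*h = 60
k = lambda t: 1.0 / (1.0 + t) ** 2

z = [1.0]
for i in range(n):
    t = i * h
    f = 0.0
    if i > 0:                         # trapezoidal rule for the convolution
        f = 0.5 * (k(t) * z[0] + k(0.0) * z[i])
        f += sum(k(t - j * h) * z[j] for j in range(1, i))
        f *= h
    z.append(z[i] + h * (-a * z[i] + f))   # Euler step

T = n * h
print(z[-1], k(T), math.exp(-a * T))  # z decays like k, far slower than e^{-at}
```

The printed values illustrate the theorem: z(T) is comparable to k(T), while e^{-aT} is smaller by dozens of orders of magnitude.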
Chapter 6
Stability and Boundedness

6.1

Stability Theory for Ordinary Differential Equations
Consider a system of ordinary differential equations

x'(t) = G(t, x(t)),    G(t, 0) = 0,    (6.1.1)

in which G : [0, ∞) × D → R^n is continuous and D is an open set in R^n with 0 in D. We review the basic definitions for stability.

Definition 6.1.1. The solution x(t) = 0 of (6.1.1) is

(a) stable if, for each ε > 0 and t0 ≥ 0, there is a δ > 0 such that |x0| < δ and t ≥ t0 imply |x(t, t0, x0)| < ε,

(b) uniformly stable if it is stable and δ is independent of t0 ≥ 0,

(c) asymptotically stable if it is stable and if, for each t0 ≥ 0, there is an η > 0 such that |x0| < η implies x(t, t0, x0) → 0 as t → ∞. (If, in addition, all solutions tend to zero, then x = 0 is asymptotically stable in the large or is globally asymptotically stable.)

(d) uniformly asymptotically stable if it is uniformly stable and if there is an η > 0 such that, for each γ > 0, there is a T > 0 such that |x0| < η and t ≥ t0 + T imply |x(t, t0, x0)| < γ. (If η may be arbitrarily large, then x = 0 is uniformly asymptotically stable in the large.)
Under suitable smoothness conditions on G, all of the stability properties except (c) have been characterized by Liapunov functions.

Definition 6.1.2. A continuous function W : [0, ∞) → [0, ∞) with W(0) = 0, W(s) > 0 if s > 0, and W strictly increasing is called a wedge. (In this book wedges are always denoted by W or Wi, where i is an integer.)

Definition 6.1.3. A function U : [0, ∞) × D → [0, ∞) is called

(a) positive definite if U(t, 0) = 0 and if there is a wedge W1 with U(t, x) ≥ W1(|x|),

(b) decrescent if there is a wedge W2 with U(t, x) ≤ W2(|x|),

(c) negative definite if -U(t, x) is positive definite,

(d) radially unbounded if D = R^n and there is a wedge W3 with W3(|x|) ≤ U(t, x) and W3(r) → ∞ as r → ∞, and

(e) mildly unbounded if D = R^n and if, for each T > 0, U(t, x) → ∞ as |x| → ∞ uniformly for 0 ≤ t ≤ T.

Definition 6.1.4. A continuous function V : [0, ∞) × D → [0, ∞) that is locally Lipschitz in x and satisfies

V'_{(6.1.1)}(t, x) = limsup_{h→0+} [V(t + h, x + hG(t, x)) - V(t, x)]/h
≤ 0, with V positive definite, is called a Liapunov function for (6.1.1); V' is the derivative of V along solutions of (6.1.1).

Theorem 6.1.1. Let V be a Liapunov function for (6.1.1).
(a) Then the zero solution of (6.1.1) is stable.
(b) If V is also decrescent, then x = 0 is uniformly stable.
(c) If, in addition, V' is negative definite, then x = 0 is uniformly asymptotically stable.
(d) If V is radially unbounded, then all solutions of (6.1.1) are bounded.
(e) If V is mildly unbounded, then every solution can be continued for all future time.

Proof. (a) Let ε > 0 and t0 ≥ 0 be given. We must find δ > 0 such that |x0| < δ and t ≥ t0 imply |x(t, t0, x0)| < ε. (Throughout these proofs we assume ε so small that |x| < ε implies x ∈ D.) As V(t0, x) is continuous and V(t0, 0) = 0, there is a δ > 0 such that |x| < δ implies V(t0, x) < W1(ε). Thus, if t ≥ t0, then V' ≤ 0 implies that, for |x0| < δ and x(t) = x(t, t0, x0), we have W1(|x(t)|) ≤ V(t, x(t)) ≤ V(t0, x0) < W1(ε), so that |x(t)| < ε.

(b) Pick δ > 0 such that W2(δ) < W1(ε), where W1(|x|) ≤ V(t, x) ≤ W2(|x|). Now, if t0 ≥ 0 and |x0| < δ, then, for x(t) = x(t, t0, x0) and t ≥ t0, we have W1(|x(t)|) ≤ V(t, x(t)) ≤ V(t0, x0) ≤ W2(|x0|) < W1(ε), so that |x(t)| < ε; as δ does not depend on t0, the stability is uniform.

(c) By (b) there is an η > 0, independent of t0, such that |x0| < η implies |x(t)| < ε for t ≥ t0, where W1(|x|) ≤ V(t, x) ≤ W2(|x|) and V'(t, x) ≤ -W3(|x|). Let γ > 0 be given; we must find T > 0 such that |x0| < η and t ≥ t0 + T imply |x(t, t0, x0)| < γ. Set x(t) = x(t, t0, x0). Pick μ > 0 with W2(μ) < W1(γ), so that if there is a t1 ≥ t0 with |x(t1)| < μ, then, for t ≥ t1, we have W1(|x(t)|) ≤ V(t, x(t)) ≤ V(t1, x(t1)) ≤ W2(μ) < W1(γ), and so |x(t)| < γ. As long as |x(t)| ≥ μ we have V'(t, x(t)) ≤ -W3(μ), so that V(t, x(t)) ≤ W2(η) - W3(μ)(t - t0). Take T = W2(η)/W3(μ). If t > t0 + T, then |x(t)| ≥ μ fails, and we have |x(t)| < γ for all t ≥ t0 + T. This proves U.A.S. The proof for U.A.S. in the large is accomplished in the same way.

(d) Because V is radially unbounded, we have V(t, x) ≥ W1(|x|) → ∞ as |x| → ∞. Thus, given t0 ≥ 0 and x0, there is an r > 0 with W1(r) > V(t0, x0). Hence, if t ≥ t0 and x(t) = x(t, t0, x0), then W1(|x(t)|) ≤ V(t, x(t)) ≤ V(t0, x0) < W1(r), or |x(t)| < r.

(e) To prove continuation of solutions it will suffice to show that if x(t) is a solution on any interval [t0, T), then there is an M with |x(t)| < M on [t0, T). Now V(t, x) → ∞ as |x| → ∞ uniformly for 0 ≤ t ≤ T. Thus, there is an M > 0 with V(t, x) > V(t0, x0) if 0 ≤ t ≤ T and |x| ≥ M. Hence, for t0 ≤ t < T we have V(t, x(t)) ≤ V(t0, x0), so that |x(t)| < M. The proof of Theorem 6.1.1 is complete.

Theorem 6.1.1 is an attempt to bring the book into focus as we look back at earlier chapters and forward to the later chapters. The continuation question treated in Theorem 6.1.1(e) was considered in detail in Section 3.3 and will be seen throughout the remainder of the book. Next, in Chapters 2 and 5 we have seen examples of (a) and (b) extended to Volterra equations. For ordinary differential equations the concept of a Liapunov function being decrescent is simple and natural, and we can readily find examples. However, it is still not known what type of decrescent condition might be necessary and sufficient for asymptotic stability in this scheme. Suppose we have a Liapunov function which is positive definite and decrescent and whose derivative is nonpositive. The decrescent condition does two things. First, it allows us to give an argument yielding uniform stability, as we have seen several times in Chapters 2 and 5.
Involved in this is the fact that it allows us to show that a solution which gets close to zero will then stay close to zero; that property does not hold for functional differential equations. For Volterra equations and general functional differential equations the decrescent concept becomes elusive and, to this day, a uniformly satisfactory form is unknown. The bound on V' in (2.5.11) involving both |x| and |x'| was a major step in avoiding the decrescent question, as was an upcoming Marachkov condition which regulates the speed of a solution. Sections 8.3 and 8.7 are mainly devoted to additional steps in that direction.
We have chosen our wedges to simplify proofs. But that choice makes examples more difficult. One can define a wedge as a continuous function W : D → [0, ∞) with W(0) = 0 and W(x) > 0 if x ≠ 0. That choice makes examples easier, but proofs more difficult. The following device is helpful in constructing a wedge W1(|x|) from a function W(x). Suppose W : D → [0, ∞), D = {x ∈ R^n : |x| ≤ 1}, W(0) = 0, and W(x) > 0 if x ≠ 0. We suppose there is a function V(t, x) ≥ W(x) and we wish to construct a wedge W1(|x|) ≤ V(t, x). First, define

a(r) = min_{r ≤ |x| ≤ 1} W(x),

so that a : [0, 1] → [0, ∞) and a is nondecreasing. Next, define W1(r) = ∫_0^r a(s) ds and note that W1(0) = 0, W1'(r) = a(r) > 0 if r > 0, and W1(r) ≤ r a(r) ≤ a(r). Thus, if |x1| ≤ 1, then

V(t, x1) ≥ W(x1) ≥ min_{|x1| ≤ |x| ≤ 1} W(x) = a(|x1|) ≥ W1(|x1|).
The next example shows that V(t, x) positive definite and V' negative definite do not imply that solutions tend to zero when V fails to be decrescent. Let g : [0, ∞) → (0, 1] be a continuously differentiable function in L²[0, ∞) that does not tend to zero, and consider the scalar equation

x' = [g'(t)/g(t)] x,    (6.1.3)

so that x(t) = g(t) is a solution of (6.1.3). With V(t, x) = a(t)x² we compute

V'_{(6.1.3)}(t, x) = a'(t)x² + 2a(t)[g'(t)/g(t)]x²

and set V' = -x². That yields a'(t) + 2a(t)[g'(t)/g(t)] = -1, or

a'(t) = -2a(t)[g'(t)/g(t)] - 1,

with the solution

a(t) = [ a(0)g²(0) - ∫_0^t g²(s) ds ] / g²(t).

Because 0 < g(t) ≤ 1 and g is in L²[0, ∞), we may pick a(0) so large that a(t) ≥ 1 on [0, ∞). Notice that V(t, x) = a(t)x² is positive definite and V' is negative definite; however, V is not decrescent, and the solution x(t) = g(t) does not tend to zero.

The first real progress on the problem of asymptotic stability was made by Marachkov [see Antosiewicz (1958, Theorem 7, p. 149)].

Theorem 6.1.3. (Marachkov) If G(t, x) is bounded for |x| bounded and if there is a positive definite Liapunov function for (6.1.1) with negative definite derivative, then the zero solution of (6.1.1) is asymptotically stable.

Proof. There is a function V : [0, ∞) × D → [0, ∞) with W1(|x|) ≤ V(t, x) and V'_{(6.1.1)}(t, x) ≤ -W2(|x|) for wedges W1 and W2. Also, there is a
constant P with |G(t, x)| ≤ P if |x| ≤ m, where m is chosen so that |x| ≤ m implies x is in D. Because V is positive definite and V' ≤ 0, x = 0 is stable. To show asymptotic stability, let t0 ≥ 0 be given and let W1(m) = α > 0. Because V(t0, x) is continuous and V(t0, 0) = 0, there is an η > 0 such that |x0| < η implies V(t0, x0) < α. Now for x(t) = x(t, t0, x0), we have V'(t, x(t)) ≤ 0, so W1(|x(t)|) ≤ V(t, x(t)) ≤ V(t0, x0) < W1(m), implying |x(t)| < m if t ≥ t0. Notice that V'(t, x(t)) ≤ -W2(|x(t)|), so that

0 ≤ V(t, x(t)) ≤ V(t0, x0) - ∫_{t0}^t W2(|x(s)|) ds,

and hence ∫_{t0}^∞ W2(|x(s)|) ds < ∞; in particular there is a sequence {tn} → ∞ with x(tn) → 0. If x(t) does not tend to zero, there is an ε > 0 and a sequence {sn} → ∞ with |x(sn)| ≥ ε. But because x(tn) → 0 and x(t) is continuous, there is a pair of sequences {Un} and {Jn} with Un < Jn < U_{n+1}, |x(Un)| = ε/2, |x(Jn)| = ε, and ε/2 ≤ |x(t)| ≤ ε if Un ≤ t ≤ Jn. Integrating (6.1.1) from Un to Jn we have

x(Jn) = x(Un) + ∫_{Un}^{Jn} G(s, x(s)) ds,

so that ε/2 ≤ |x(Jn) - x(Un)| ≤ P(Jn - Un), or Jn - Un ≥ ε/2P. Also, if t ≥ Jn, then
V(t, x(t)) ≤ V(t0, x0) - Σ_{k ≤ n} ∫_{Uk}^{Jk} W2(|x(s)|) ds ≤ V(t0, x0) - n W2(ε/2)(ε/2P) → -∞ as n → ∞, contradicting V ≥ 0. Thus x(t) → 0, and the zero solution is asymptotically stable. This completes the proof.

Theorem 6.1.4. Let x(t) be a bounded solution of (6.1.1). If there is a continuous function V : [0, ∞) × R^n → [0, ∞) that is locally Lipschitz in x, if there is a continuous function W : R^n → [0, ∞) that is positive definite with respect to a closed set Ω, and if V'_{(6.1.1)}(t, x) ≤ -W(x), then every bounded solution of (6.1.1) approaches Ω as t → ∞.

Proof. Consider a solution x(t) on [t0, ∞) that, being bounded, remains in some compact set Q for t ≥ t0. If x(t) does not approach Ω, then there is an ε > 0 and a sequence {tn} → ∞ with x(tn) ∈ U(Ω, ε)^c ∩ Q, where U(Ω, ε) denotes the ε-neighborhood of Ω. Because G(t, x) is bounded for x in Q, there is a K with |G(t, x(t))| ≤ K. Thus, there is a T > 0 with x(t) ∈ U(Ω, ε/2)^c ∩ Q for tn ≤ t ≤ tn + T. By taking a subsequence, if necessary, we may suppose these intervals disjoint. Now, for this ε/2 there is a δ > 0 with

V'(t, x) ≤ -δ  for  x ∈ U(Ω, ε/2)^c ∩ Q,

because W is positive definite with respect to Ω. Hence, for t ≥ tn + T we have

0 ≤ V(t, x(t)) ≤ V(t0, x(t0)) - ∫_{t0}^t W(x(s)) ds ≤ V(t0, x(t0)) - n δ T → -∞

as n → ∞, a contradiction. This completes the proof.

Consider now the autonomous system

x' = F(x),    (6.1.5)

with F continuous and solutions uniquely determined by initial conditions. A point y is an ω-limit point of a solution x(t) if there is a sequence {tn} → ∞ with x(tn) → y. The set of ω-limit points of a solution of (6.1.5) is called the ω-limit set. By uniqueness, if y is in the ω-limit set of x(t), then the orbit through y, say, {z ∈ R^n : z = x(t, 0, y), t ≥ 0}, is also in the ω-limit set. (Actually, this follows from continual dependence on initial conditions, which, in turn, follows from uniqueness.) A set A is positively invariant if y ∈ A implies x(t, 0, y) ∈ A for t ≥ 0.

Theorem 6.1.5. Let the conditions of Theorem 6.1.4 hold for (6.1.5) and let V = V(x). Also, let M be the largest invariant set in Ω. Then every bounded solution of (6.1.5) approaches M as t → ∞.

Proof. If x(t) is a bounded solution of (6.1.5), then it approaches Ω. Suppose there is a point y in the ω-limit set of x(t) not in M. Certainly, y ∈ Ω, and as y is not in M, there is a t1 ≥ 0 with x(t1, 0, y) not in Ω. Also, there is a sequence {tn} → ∞ with x(tn) → x(t1, 0, y), a contradiction to x(t) → Ω as t → ∞. This completes the proof.
The result can be refined further by noticing that V(x(t)) → c, so the set M is restricted still more by satisfying V(x) = c for some c ≥ 0. The ideas in the last two theorems were extended by Hale to autonomous functional differential equations using Liapunov functionals and by Haddock and Terjeki using a Razumikhin technique. These will be discussed in Chapter 8. They were also extended to certain classes of partial differential equations by Dafermos.

Example 6.1.3. Consider Example 6.1.2 once more with Ω being the x axis. Notice that if a solution starts in Ω with x1 ≠ 0, then y' = -x1 ≠ 0, so the solution leaves Ω. Hence, M = {(0, 0)}.

Theorems 6.1.4 and 6.1.5 frequently enable us to conclude asymptotic stability (locally or in the large) using a "poor" Liapunov function. But when (6.1.1) is perturbed, we need a superior Liapunov function so we can analyze the behavior of solutions. For example, suppose D = R^n and there is a continuous function V : [0, ∞) × R^n → [0, ∞) with

(a) |V(t, x1) - V(t, x2)| ≤ K|x1 - x2| on [0, ∞) × R^n with K constant,

(b) V'_{(6.1.1)}(t, x) ≤ -cV(t, x), c > 0, and

(c) V(t, x) ≥ W1(|x|) → ∞ as |x| → ∞.

Then for a perturbed form of (6.1.1), say,

x' = G(t, x) + F(t, x)    (6.1.6)

with G, F : [0, ∞) × R^n → R^n being continuous, we have

V'_{(6.1.6)}(t, x) ≤ V'_{(6.1.1)}(t, x) + K|F(t, x)| ≤ -cV(t, x) + K|F(t, x)|.

In the same spirit, for V(t, x) = [x^T Sx]^{1/2} with S = S^T positive definite, there is a constant k > 0 with

|x| / { 2[x^T Sx]^{1/2} } ≥ k > 0.    (6.1.13)
These ideas have been developed extensively by Erhart (1973), Haddock (1977a,b), Hatvani (1978) and Burton (1977). They were employed in the proof of Theorem 2.5.1. In many results, such as Theorem 6.1.3, we can weaken the condition V'(t, x) ≤ -W(|x|). Here is a typical way.

Definition 6.1.6. A scalar function p : [0, ∞) → [0, ∞) is said to be integrally positive if for each δ > 0 and each sequence {tn} → ∞ monotonically,

liminf_{n→∞} ∫_{tn}^{tn+δ} p(t) dt > 0.
Thus, we are allowed to have p(t) = sin² t, but p(t) = |sin t| + sin t would not qualify as being integrally positive, since it vanishes identically on the intervals [(2n+1)π, (2n+2)π].

Exercise 6.1.1. In Theorem 6.1.3, drop the condition that the derivative of V is negative definite. Instead, ask that V'(t, x) ≤ -p(t)W(|x|) where p is integrally positive. Show that the zero solution is still asymptotically stable.
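Definition 6.1.6 can be illustrated numerically. In the sketch below (window length δ = 1 and the sampled sequences tn are illustrative choices), every window integral of sin² t is bounded below by (1 - sin 1)/2 ≈ 0.079, while windows of |sin t| + sin t starting at odd multiples of π integrate to zero:

```python
import math

# Numerical illustration of Definition 6.1.6 (integrally positive functions).
# The window delta = 1 and the sample sequences t_n are illustrative choices.

def integral(p, lo, hi, m=2000):
    """Composite trapezoidal rule for int_lo^hi p(t) dt."""
    h = (hi - lo) / m
    s = 0.5 * (p(lo) + p(hi)) + sum(p(lo + i * h) for i in range(1, m))
    return s * h

p = lambda t: math.sin(t) ** 2                 # integrally positive
q = lambda t: abs(math.sin(t)) + math.sin(t)   # zero on [(2n+1)pi, (2n+2)pi]

delta = 1.0
# every length-1 window of p carries at least (1 - sin 1)/2 > 0
vals = [integral(p, n * math.pi + 0.3, n * math.pi + 0.3 + delta) for n in range(1, 40)]
# a window of q starting at t_n = 3*pi is identically zero
q_int = integral(q, 3 * math.pi, 3 * math.pi + delta)
print(min(vals), q_int)
```

The uniformly positive minimum for p against the vanishing window for q is exactly the distinction the definition draws.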
6.2
Construction of Liapunov Functions
Beyond any doubt, construction of Liapunov functions is an art. But like any other art, there are guidelines and there are masters to emulate. Whereas the previous section concentrated on formal theorems concerning consequences of Liapunov functions, this section contains a detailed account of the construction of somewhat special Liapunov functions. Such constructions are fundamental in the construction of Liapunov functionals for Volterra equations.

A.  x' = Ax

As we have seen, given

x' = Ax    (6.2.1)
we try V(x) = x^T Bx with B = B^T and

A^T B + BA = -I    (6.2.2)

when no characteristic root of A has a zero real part. [In Chapter 5 we explored the possibility of solving (6.2.2).] Also, if all characteristic roots of A have a zero real part and if the elementary divisors are simple, then the equation

A^T B + BA = 0    (6.2.3)

may be solved for B = B^T. The reader may wish to try this for a simple example. Equation (6.2.3) has important consequences for perturbed forms of (6.2.1).
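Equation (6.2.2) is linear in the entries of B and can be solved directly. A minimal sketch (the stable matrix A below is an illustrative choice, not from the text) uses the Kronecker-product identity vec(A^T B + BA) = (I ⊗ A^T + A^T ⊗ I) vec(B) with column-stacked vec:

```python
import numpy as np

# Sketch of step A: solve the Liapunov equation A^T B + B A = -I for B = B^T
# via vec(A^T B + B A) = (I kron A^T + A^T kron I) vec(B), column-stacking.
# The matrix A below is an illustrative stable example.

def liapunov_B(A):
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)        # acts on vec(B), order="F"
    vecB = np.linalg.solve(M, (-I).flatten(order="F"))
    return vecB.reshape((n, n), order="F")

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])                      # eigenvalues -1, -3
B = liapunov_B(A)
print(B)
# B is symmetric positive definite, so V(x) = x^T B x is a Liapunov function:
# V'(x) = x^T (A^T B + B A) x = -|x|^2 along solutions of x' = Ax.
```

For this A one finds B = [[1/2, 1/4], [1/4, 1/3]], which is positive definite, consistent with A having no characteristic root with nonnegative real part.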
B.  x' = y,  y' = -f(x, y)y - g(x)

Long before Liapunov, the mathematician Lagrange noted that equilibrium was stable when the total energy of the system was at a minimum. That idea, applied to the scalar equation

x'' + f(x, x')x' + g(x) = 0    (6.2.4)

with f(x, x') ≥ 0 and xg(x) > 0 for x ≠ 0, produced the Liapunov function

V(x, y) = (1/2)y² + ∫_0^x g(s) ds    (6.2.5)

for the system of the form

x' = y,    y' = -f(x, y)y - g(x).    (6.2.6)

The result is

V'_{(6.2.6)}(x, y) = -f(x, y)y² ≤ 0,

so that the zero solution of (6.2.6) is stable. A long line of literature has grown out of this example, through the series

x'' + f(x)x' + g(x) = 0,    f(x) ≥ 0,

x'' + h(x, x')x' + g(x) = 0,    h(x, x') ≥ 0,

x'' + h(x, x')x' + g(x) = e(t),    e(t + T) = e(t),

and

x'' + k(t, x, x')x' + a(t)g(x) = e(t, x, x'),

e bounded, k ≥ 0, and a(t) > 0. For bibliographies see Graef (1972), Sansone-Conti (1964), and Reissig et al. (1963).
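The energy argument can be watched numerically. In the sketch below, the damping f and restoring force g are illustrative choices satisfying f ≥ 0 and xg(x) > 0 for x ≠ 0, and V from (6.2.5) is evaluated along an Euler-integrated trajectory of (6.2.6):

```python
# Sketch for B: along solutions of x' = y, y' = -f(x,y) y - g(x), the energy
# V = y^2/2 + int_0^x g(s) ds is non-increasing, since V' = -f(x,y) y^2 <= 0.
# The damping f and restoring force g are illustrative choices.

f = lambda x, y: 1.0 + x * x          # f(x, y) >= 0
g = lambda x: x + x ** 3              # x g(x) > 0 for x != 0

def V(x, y):
    # int_0^x g(s) ds = x^2/2 + x^4/4 in closed form for this g
    return 0.5 * y * y + 0.5 * x * x + 0.25 * x ** 4

x, y, h = 1.0, 0.0, 1e-3
vals = []
for _ in range(8000):
    x, y = x + h * y, y + h * (-f(x, y) * y - g(x))   # Euler step
    vals.append(V(x, y))

print(vals[0], vals[-1])   # energy decreases toward zero
```

The printed energies show the Lagrange picture: the trajectory slides down the energy surface toward the equilibrium at the origin.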
G.  x' = P(x, y),  y' = Q(x, y)

Nor is one restricted to a first integral of a given system. From the point of view of subsequent perturbations the very best Liapunov functions are obtained as follows. Consider a pair of first-order scalar equations

x' = P(x, y),    y' = Q(x, y),    (6.2.15)

so that dy/dx = Q(x, y)/P(x, y). Then the orthogonal trajectories are obtained from dy/dx = -P(x, y)/Q(x, y), or P(x, y) dx + Q(x, y) dy = 0. If we can find an integrating factor μ(x, y) so that

μ(x, y)P(x, y) dx + μ(x, y)Q(x, y) dy = 0
is exact, then there is a function V(x, y) with ∂V/∂x = μP and ∂V/∂y = μQ, so that

V'_{(6.2.15)}(x, y) = μ(x, y)[P²(x, y) + Q²(x, y)].    (6.2.16)

If V and μ are each of one sign and Vμ < 0, then V is a Liapunov function for (6.2.15). Moreover, if we review Eqs. (6.1.8)-(6.1.10), we have

V'_{(6.2.15)}(x, y) = |grad V(x, y)| |(P(x, y), Q(x, y))| cos θ,    (6.2.17)

and because V is obtained from the orthogonal trajectories, we have

cos θ = ±1.    (6.2.18)

For this reason, (6.2.15) can be perturbed with comparatively large functions without disturbing stability properties of the zero solution.
H.  x' = Ax + b f(σ),  σ' = c^T x - r f(σ)

In view of (6.2.2) and (6.2.12), one can quickly see how to proceed with the (n+1)-dimensional control problem

x' = Ax + b f(σ),    σ' = c^T x - r f(σ),    (6.2.19)

in which A is an n × n matrix of constants whose characteristic roots all have negative real parts, b and c are constant vectors, r is a positive constant, σ and f are scalars, and σf(σ) > 0 if σ ≠ 0. This is called the problem of Lurie and it concerns automatic control devices. The book by Lefschetz (1965) is devoted entirely to it and considers several interesting Liapunov functions. Lurie used the Liapunov function

V(x, σ) = x^T Bx + ∫_0^σ f(s) ds,    (6.2.20)

in which B = B^T and A^T B + BA = -D, where D = D^T is positive definite. Then we have a derivative whose negativity can be arranged through the proper choice of D.
I.  x' = A(t)x
It is natural to attempt to investigate

x' = A(t)x    (6.2.22)

in the same way that (6.2.1) was treated. Suppose that A is an n × n matrix of functions continuous for 0 ≤ t < ∞. A common procedure may be described as follows. If all characteristic roots of A(t) have negative real parts for every value of t ≥ 0, then for each t the equation

A^T(t)B(t) + B(t)A(t) = -I    (6.2.23)

may be uniquely solved for a positive definite matrix B(t) = B^T(t). For brevity, let us suppose B(t) is differentiable on [0, ∞). We then seek a differentiable function b : [0, ∞) → (0, ∞) such that

V(t, x) = b(t) x^T B(t)x    (6.2.24)

will be a Liapunov function for (6.2.22). Thus,

V'_{(6.2.22)}(t, x) = b'(t) x^T B(t)x + b(t) x^T [A^T B + BA + B']x
  = x^T [ b(t)(A^T B + BA + B') + b'(t)B ] x
  = b(t) x^T H(t)x,

where H(t) = -I + B'(t) + [b'(t)/b(t)]B(t). If we take α(t) to be the largest root of the equation

det [ -I + B'(t) + α(t)B(t) ] = 0,

and

β(t) = b(0) exp( -∫_0^t α(s) ds ),

then the condition [b'(t)/b(t)] < [β'(t)/β(t)] = -α(t) is necessary and sufficient for H(t) to have only negative characteristic roots. In that case, stability and asymptotic stability may be determined from V and V'. For more details see Lebedev (1957), Hahn (1963, pp. 29-32), and Krasovskii (1963, pp. 56-62).
J.  x' = F(x)

The most common method of attack on a nonlinear system

x' = F(x),    F(0) = 0,    (6.2.25)

is by way of the linear approximation. If F is differentiable at x = 0, then it may be approximated by a linear function there. One may write (6.2.25) as

x' = Ax + G(x),    (6.2.26)

in which A is the Jacobian matrix of F at x = 0 and lim_{|x|→0} |G(x)|/|x| = 0. For example, if f(x) = f(x1, ..., xn) is a differentiable scalar function at x = 0, then

f(x) = f(0) + (∂f/∂x1)x1 + (∂f/∂x2)x2 + ··· + (∂f/∂xn)xn + higher-order terms,

where the partials are evaluated at x = 0. One expands each component of F in this way and selects the matrix A from the coefficients of the xj. It is more efficient to consider

x' = Ax + H(t, x),    (6.2.27)
where A is a constant n × n matrix, H : [0, ∞) × D → R^n is continuous, D is an open set in R^n with 0 in D, and

lim_{|x|→0} |H(t, x)|/|x| = 0  uniformly for 0 ≤ t < ∞.    (6.2.28)

Theorem. (Liapunov) If (6.2.27) and (6.2.28) hold and if all characteristic roots of A have negative real parts, then the zero solution of (6.2.27) is uniformly asymptotically stable.

Proof. By our assumption on A we can solve A^T B + BA = -I for a unique positive definite matrix B = B^T. We form V(x) = x^T Bx and obtain

V'_{(6.2.27)}(x) = (x^T A^T + H^T)Bx + x^T B(Ax + H)
  = -x^T x + 2H^T Bx ≤ -|x|² + 2|H| |B| |x|,
so that for x ≠ 0 we have V'(x)/|x|² ≤ -1 + 2|B| |H(t, x)|/|x| ≤ -1/2, if |x| is small enough, in consequence of (6.2.28). The conditions of Theorem 6.1.1(c) are satisfied and x = 0 is U.A.S.
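The proof can be watched numerically. In the sketch below, the stable matrix A, its Liapunov solution B, the o(|x|) perturbation H, and the initial point are all illustrative choices, not taken from the text; V(x) = x^T Bx decreases monotonically along the Euler-integrated solution of (6.2.27):

```python
# Numerical sketch of the Liapunov linearization theorem: for
# x' = Ax + H(t,x) with |H(t,x)|/|x| -> 0 as |x| -> 0, V(x) = x^T B x
# decreases along solutions near the origin. A, B, H and the initial
# point below are illustrative choices.

A = ((-1.0, 2.0),
     (0.0, -3.0))
B = ((0.5, 0.25),
     (0.25, 1.0 / 3.0))            # solves A^T B + B A = -I for this A

def H(t, x):
    return (0.1 * x[0] ** 2, 0.1 * x[0] * x[1])   # o(|x|) near 0

def V(x):
    return B[0][0]*x[0]*x[0] + 2*B[0][1]*x[0]*x[1] + B[1][1]*x[1]*x[1]

x, h, vals = (0.2, -0.1), 1e-3, []
for i in range(5000):
    t = i * h
    hx = H(t, x)
    dx0 = A[0][0]*x[0] + A[0][1]*x[1] + hx[0]
    dx1 = A[1][0]*x[0] + A[1][1]*x[1] + hx[1]
    x = (x[0] + h * dx0, x[1] + h * dx1)          # Euler step
    vals.append(V(x))

print(vals[0], vals[-1])   # V decreases monotonically toward 0
```

Because the quadratic perturbation is dominated by -|x|² near the origin, the recorded values of V form a decreasing sequence, exactly as the inequality above predicts.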
K.  A(x) = ∫_0^1 J(sx) ds

Much may be lost by evaluating the Jacobian of F in (6.2.25) only at x = 0. If we write the Jacobian of F as J(x) = (∂Fi/∂xk), evaluated at x, then for

A(x) = ∫_0^1 J(sx) ds

we have F(x) = A(x)x. Investigators have discovered many simple Liapunov functions from A(x) yielding global stability. A summary may be found in Hartman (1964, pp. 537-555). Excellent collections of Liapunov functions for specific equations are found in the work of Reissig et al. (1963) and Barbashin (1968).
6.3

A First Integral Liapunov Functional

We consider a system of Volterra equations

x' = A(t)x + ∫_0^t C(t, s)x(s) ds,    (6.3.1)

with A and C being n × n matrices continuous on [0, ∞) and 0 ≤ s ≤ t < ∞, respectively. To arrive at a Liapunov functional for (6.3.1), integrate it from 0 to t and interchange the order of integration to obtain

x(t) = x(0) + ∫_0^t A(s)x(s) ds + ∫_0^t ∫_s^t C(u, s) du x(s) ds.

We then have

h(t, x(·)) = x(t) - ∫_0^t [ A(s) + ∫_s^t C(u, s) du ] x(s) ds,    (6.3.2)

which is identically equal to x(0). Hence, the derivative of h along a solution of (6.3.1) is zero. It is reasonable to think of h as a first integral
functional for (6.3.1). Compare this with Eqs. (6.2.8)-(6.2.12) for constructing Liapunov functions. Now h may serve as a suitable Liapunov functional for (6.3.1) as it stands. Moreover, the changes necessary to convert h to an outstanding Liapunov functional are quite minimal. Suppose that (6.3.1) is scalar,

A(t) < 0,    C(t, s) ≥ 0,

and

-A(s) - ∫_s^t C(u, s) du ≥ 0

for 0 ≤ s ≤ t < ∞. Consider solutions of (6.3.1) on the entire interval [0, ∞) (as opposed to solutions on some [t0, ∞) with t0 > 0). Because -x(t) is a solution whenever x(t) is a solution, we need only consider solutions x(t) with x(0) > 0. Notice that when x(0) > 0 and C(t, s) ≥ 0, the solutions all remain positive. Hence, along these solutions the scalar functional

h(t, x(·)) = x(t) + ∫_0^t [ -A(s) - ∫_s^t C(u, s) du ] x(s) ds

is positive definite. In fact, we may write it as

H(t, x(·)) = |x(t)| + ∫_0^t ( |A(s)| - ∫_s^t |C(u, s)| du ) |x(s)| ds,    (6.3.3)

and the derivative of H along these solutions of (6.3.1) is zero. Under the conditions of this paragraph, we see that solutions of (6.3.1) are bounded. However, much more can be said. Notice that if

|A(s)| - ∫_s^t |C(u, s)| du ≥ α > 0,
then boundedness of H implies that x(t) must be in L¹[0, ∞).

Definition 6.3.1. A scalar functional H(t, x(·)) expands relative to zero if there is a t1 ≥ 0 and an α > 0 such that if |x(t)| ≥ α on [t2, ∞) with t2 ≥ t1, then H(t, x(·)) → ∞ as t → ∞.

We formally state and prove these observations.

Theorem 6.3.1. Let (6.3.1) be a scalar equation with A(s) < 0 and |A(s)| - ∫_s^t |C(u, s)| du ≥ 0 for 0 ≤ s ≤ t < ∞. Then the zero solution of (6.3.1) is stable. If, in addition, there is a t2 ≥ 0 and an α > 0 with |A(s)| - ∫_s^t |C(u, s)| du ≥ α for t2 ≤ s ≤ t < ∞, and if ∫_0^t |C(t, s)| ds and A(t) are bounded, then the zero solution is asymptotically stable.

Proof. Let x(t) be any solution of (6.3.1). We compute H'_{(6.3.1)}(t, x(·)) ≤ 0 along x(t), so that H(t, x(·)) ≤ H(t0, x(·)) for t ≥ t0. (The computation does not require that x(0) > 0, nor that only solutions on [0, ∞) be considered.) The stability is now clear; for if we are given ε > 0 and t0 ≥ 0, we let φ : [0, t0] → R be continuous and satisfy |φ(s)| < δ on [0, t0], where δ is to be determined. Then for x(t) = x(t, t0, φ) we have |x(t)| ≤ H(t, x(·)) ≤ H(t0, φ(·)), and the right-hand side is small with δ. For asymptotic stability, the condition |A(s)| - ∫_s^t |C(u, s)| du ≥ α for s ≥ t2 yields

α ∫_{t2}^t |x(s)| ds ≤ H(t, x(·)) ≤ H(t0, φ(·))

for t ≥ t2, so that x is in L¹[0, ∞). As ∫_0^t |C(t, s)| ds, A(t), and x(t) are bounded, it follows that x'(t) is bounded. Hence, x(t) → 0. This completes the proof.

We recall from Section 6.1 that there are two alternatives to asking x'(t) bounded. Whereas the requirement that ∫_0^t |C(t, s)| ds be bounded is consistent with the other assumptions, the requirement that A(t) be bounded is not only severe but it conflicts with the intuition that the more negative A(t) is, the more stable (6.3.1) should be. Let us return to the vector equation (6.3.2). If we wish to pass from (6.3.2) to a scalar functional analogous to (6.3.3), we have several options for the norms, and each option will yield different results.
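That h in (6.3.2) really is a first integral can be checked numerically. In the sketch below the scalar coefficients A(t) = -2 and C(t, s) = e^{-(t-s)} are illustrative choices satisfying A < 0, C ≥ 0, and -A(s) - ∫_s^t C(u, s) du ≥ 1 > 0; the discrete analogue of h stays near x(0) along an Euler-integrated solution:

```python
import math

# Numerical check that h(t, x(.)) in (6.3.2) stays equal to x(0) along a
# solution of (6.3.1). A(t) = -2 and C(t,s) = e^{-(t-s)} are illustrative.

A = lambda t: -2.0
C = lambda t, s: math.exp(-(t - s))

dt, n = 0.01, 800                 # integrate on [0, 8]
x = [1.0]                         # x(0) = 1
for i in range(n):
    t = i * dt
    conv = dt * sum(C(t, j * dt) * x[j] for j in range(i))   # Riemann sum
    x.append(x[i] + dt * (A(t) * x[i] + conv))               # Euler step

def first_integral(i):
    """h = x(t) - int_0^t [A(s) + int_s^t C(u,s) du] x(s) ds."""
    t = i * dt
    total = x[i]
    for j in range(i):
        s = j * dt
        inner = 1.0 - math.exp(-(t - s))      # int_s^t e^{-(u-s)} du in closed form
        total -= dt * (A(s) + inner) * x[j]
    return total

print(first_integral(400), first_integral(800))   # both stay near x(0) = 1
```

The solution stays positive, as claimed for x(0) > 0 and C ≥ 0, and the first integral is conserved up to discretization error.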
Let us suppose there is a constant positive definite matrix D = D^T and a continuous scalar function μ : [0, ∞) → [0, ∞) with

x^T [A^T D + DA]x ≤ -μ(t) x^T x.    (6.3.4)

The norm we will take on the solution x(t) will be [x^T Dx]^{1/2}, and bounds will be needed. There are positive constants γ, k, and K with

|x| ≥ 2k [x^T Dx]^{1/2},    (6.3.5)

|Dx| ≤ K [x^T Dx]^{1/2},    (6.3.6)

γ|x| ≤ [x^T Dx]^{1/2}.    (6.3.7)

Define the functional

P(t, x(·)) = [x^T Dx]^{1/2} + ∫_0^t [ kμ(s) - K ∫_s^t |C(u, s)| du ] |x(s)| ds    (6.3.8)

and suppose that

kμ(s) - K ∫_s^t |C(u, s)| du ≥ 0  for  0 ≤ s ≤ t < ∞.    (6.3.9)

Theorem 6.3.2. Let (6.3.4)-(6.3.9) hold.

(a) The zero solution of (6.3.1) is stable.

(b) If P(t, x(·)) expands relative to zero and if x' is bounded for x bounded, then x = 0 is asymptotically stable.

(c) If there is an M > 0 with

∫_0^t [ kμ(s) - K ∫_s^t |C(u, s)| du ] ds ≤ M    (6.3.10)

for 0 ≤ t < ∞, then x = 0 is uniformly stable.
Proof. For x ≠ 0 we compute

P'_{(6.3.1)}(t, x(·)) = { x^T [A^T D + DA]x + 2(∫_0^t C(t, s)x(s) ds)^T Dx } / (2[x^T Dx]^{1/2})
    + kμ(t)|x| - K ∫_0^t |C(t, s)| |x(s)| ds
  ≤ -μ(t) x^T x / (2[x^T Dx]^{1/2}) + { |Dx| / [x^T Dx]^{1/2} } ∫_0^t |C(t, s)| |x(s)| ds
    + kμ(t)|x| - K ∫_0^t |C(t, s)| |x(s)| ds
  ≤ -kμ(t)|x| + K ∫_0^t |C(t, s)| |x(s)| ds + kμ(t)|x| - K ∫_0^t |C(t, s)| |x(s)| ds = 0,

using (6.3.4), (6.3.5), and (6.3.6). Hence P is nonincreasing along solutions.

(a) Let ε > 0 be given and δ > 0 still to be determined. If t0 ≥ 0 and φ : [0, t0] → R^n is continuous with |φ(s)| < δ on [0, t0], then for t ≥ t0 we have

γ|x(t)| ≤ [x^T Dx]^{1/2} ≤ P(t, x(·)) ≤ P(t0, φ(·)),

and the right-hand side is small when δ is small; this proves stability.

(b) If P expands relative to zero and x' is bounded for x bounded, then the argument used for Theorem 6.3.1 yields asymptotic stability.

(c) Under (6.3.10), P(t0, φ(·)) ≤ [φ^T(t0)Dφ(t0)]^{1/2} + M max_{0 ≤ s ≤ t0} |φ(s)|, a bound independent of t0, so the δ of (a) can be chosen independently of t0 and the stability is uniform.

If the constant K in (6.3.8) is replaced by a strictly larger constant, then we may be able to drop the requirement that x' be bounded for x bounded and perform the simplified annulus argument as noted following Eq. (6.1.10). However, with A(t) variable there may still be problems with the annulus argument. Those problems will evaporate when we consider the one-sided Lipschitz conditions introduced in (6.1.16) and (6.1.17) and to be developed in Definition 6.4.1.

The foregoing explanation shows in detail how we arrive at the Liapunov functional used to prove Theorem 2.5.1. The reader is urged to review Theorem 2.5.1 and its proof carefully at this time. Moreover, the functional

∫_0^t ∫_t^∞ |C(u, s)| du |x(s)| ds

of (6.3.11) turns out to be a fundamental part of each Liapunov functional, with, at most, minimal changes needed. The method outlined for constructing a Liapunov functional for the linear system can be extended without difficulty to nonlinear equations.
Consider the system

x' = g(t, x) + ∫_0^t p(t, s, x(s)) ds,    (6.3.13)

in which g and p are continuous, g : [0, ∞) × U → R^n, p : [0, ∞) × [0, ∞) × U → R^n, and U = {x ∈ R^n : |x| < ε, ε > 0}. We integrate (6.3.13) from 0 to t and interchange the order of integration to obtain

x(t) = x(0) + ∫_0^t g(s, x(s)) ds + ∫_0^t ∫_0^u p(u, s, x(s)) ds du
  = x(0) + ∫_0^t g(s, x(s)) ds + ∫_0^t ∫_s^t p(u, s, x(s)) du ds
  = x(0) + ∫_0^t [ g(s, x(s)) + ∫_s^t p(u, s, x(s)) du ] ds,

so that

r(t, x(·)) = x(t) - ∫_0^t [ g(s, x(s)) + ∫_s^t p(u, s, x(s)) du ] ds = x(0),

and hence r'_{(6.3.13)}(t, x(·)) = 0. The same sequence following Eq. (6.3.2) may be repeated. Briefly, in the scalar case we write

R(t, x(·)) = |x| + ∫_0^t [ |g(s, x(s))| - ∫_s^t |p(u, s, x(s))| du ] ds.    (6.3.14)

If xg(t, x) ≤ 0 and

|g(s, x(s))| ≥ ∫_s^t |p(u, s, x(s))| du

for 0 ≤ s ≤ t < ∞ and x an arbitrary continuous function x : [0, ∞) → U, then R'_{(6.3.13)}(t, x(·)) ≤ 0 and the zero solution of (6.3.13) is stable. If, in addition, the functional expands relative to zero, then asymptotic stability may be obtained. Finally, under proper convergence assumptions we write

V(t, x(·)) = |x| + ∫_0^t ∫_t^∞ |p(u, s, x(s))| du ds.

The situation becomes much more interesting in the vector case. We then interpret the norm of x in (6.3.14) as a norm of solutions of
(6.3.15)
That is, we seek a Liapunov function W(t,y) for (6.3.15) with Wl6.3A5)(t,y) 0 and if (s) is sufficiently small on [0,i0], then
y(to,0(O)) where 0(i) < 5 on [0,i0]We suppose x(i) ^ 0 as t —> 00. Then there is a /i > 0 and a sequence {in} — 00 with x(i n ) > fi. To be definite, we suppose (6.4.5) holds.
Now determine α > 0 so that W1(μ) > 2W2(α). Because V'(t, x(·)) ≤ -W3(|x|), there is a sequence {Tn} → ∞ with |x(Tn)| ≤ α. In fact, we may suppose |x(Tn)| = α, |x(tn)| = μ, and α ≤ |x(t)| ≤ μ if tn ≤ t ≤ Tn, by renaming tn and Tn if necessary. Now

P(Tn, x(·)) - P(tn, x(·)) ≤ L(Tn - tn).    (6.4.9)

Also, V'(t, x(·)) ≤ 0 implies that V(t, x(·)) → c, a nonnegative constant, as t → ∞. Thus, |V(tn, x(·)) - V(Tn, x(·))| may be made arbitrarily small by taking n large. But

V(tn, x(·)) - V(Tn, x(·)) = W(tn, x(tn)) - W(Tn, x(Tn)) + P(tn, x(·)) - P(Tn, x(·))
  ≥ W1(μ) - W2(α) - [P(Tn, x(·)) - P(tn, x(·))]
  ≥ 2W2(α) - W2(α) - L(Tn - tn),

or V(tn, x(·)) - V(Tn, x(·)) ≥ W2(α) - L(Tn - tn). As the left side tends to zero, for each η > 0 there exists N such that n ≥ N implies

η > V(tn, x(·)) - V(Tn, x(·)) ≥ W2(α) - L(Tn - tn),

or η + L(Tn - tn) > W2(α), so that

L(Tn - tn) > W2(α) - η ≥ W2(α)/2    if η < W2(α)/2.

Hence, for n ≥ N, we have

Tn - tn > W2(α)/2L =: T.

6.4. NONLINEARITIES AND AN ANNULUS ARGUMENT

Because V'(t, x(·)) ≤ -W3(|x(t)|) ≤ -W3(α) on each interval [tn, Tn] with n ≥ N, and these intervals have length at least T, V(t, x(·)) decreases without bound, a contradiction to V ≥ 0. Hence x(t) → 0. Moreover, for t ≥ t0,

W1(|x(t)|) ≤ V(t, x(·)) ≤ W2(δ) + W4(δ) < W1(ε),
implying |x(t)| < ε for t ≥ t0. This completes the proof.

Exercise 6.4.4. In Theorem 6.4.2 replace (6.4.7) by the condition W1(|x|) ≤ W(t, x) and check that the proof can still be completed.

Theorem 6.4.3. Let L > 0, and suppose W satisfies (6.4.10) for some c > 0. Suppose also that ∫_t^∞ |p(u, s, x(s))| du is defined for 0 ≤ t < ∞ whenever x : [0, ∞) → U is continuous.

(a) If there is a wedge W3 with
-cW(t, x) + L ∫_t^∞ |p(u, t, x(t))| du ≤ -W3(|x|),

and if

V(t, x(·)) = W(t, x) + L ∫_0^t ∫_t^∞ |p(u, s, x(s))| du ds

satisfies Definition 6.4.2, then x = 0 is asymptotically stable.

(b) If
cW(t,x) + L
Jt
\p(u,t,x(t))\dut
/OO
/ / p(w, s,x(s)) duds Jo Jt satisfies Definition 6.4.3, then x = 0 is uniformly stable. Proof. Define ft
V(t,x()) = W(t,x)+L
/.OO
/ / io it
p(u,s,x(s))duds,
so that V(64i)(t,x()) Rn, p : [0, oo) x [0, oo) x U —> Rn, and U = {x e Rn : x < e, e > 0}. Let W : [0, oo) x U > [0, oo) with \W(t,yn)  W(t,x2)\ < £xi  x 2  on [0,oo) x U for L > 0, Wi(x) < W(t,x), W(t,0) = 0, W[6A_2)(t,y) < Z(t,y), where Z : [0,oo) x U > [0, oo) is continuous. Theorem 6.4.4. Let the conditions of the preceding paragraph hold. Suppose there is a continuous function q : [0, oo) x U —> [0, oo) with \p(t, s,x(s))/W / (s,x(s)) < q(t,s) if x() is any continuous function in U and if 0 < s < t < oo. Also suppose that Z(t,x) > cW(t,x) for some c > 0 and that there are constants c\ and C2 with 0 < c\ < c and C2 > L, so that jQ q(t, s) ds < C1/C2 if 0 < t < oo. Then for each to > 0 and each e > 0 there exists 5 > 0 such that if \(t)\ < 5 on [0, to] and x(i, to, ) is a solution of (6.4.4), then \x(t,to,4>)\ < e f°r t > to
208
6. STABILITY AND BOUNDEDNESS
Proof. Define V(t,x()) = W(t,x)+ f \ClW(u,x(u))c2
I
\p(u,s,x(s))\ds]du,
so that along a solution x(t) of (6.4.4) we have V'(t,x())o.
\C(u,s)\du\x(s)\ds
210
6. STABILITY AND BOUNDEDNESS
Because V is Lipschitz in x(t), V will also be negative along solutions of nt
ft
x' = , 4 x + / C(t,s)x(s)ds + D(t,s)r(x(s))ds Jo Jo + q(4, x) + H(i, x()) + p(t, x)
(6.4.11)
under the following assumptions: (i) H(i,x()) < /3x(t) \f*E(t,s)m(x(s))ds\ m defined below.
where (5 > 0, with E and
(ii) D and E are continuous, n x n matrices on [0, oo) x [0, oo) with
\D(t,s)\ < a\C(t,s)\ and \E(t,s)\ < a\C(t,s)\ for some a > 0 and 0 < s < t < oo. (iii) q : [0, oo) x U > Rn is continuous, U = {x G Rn : x < e, £ > 0}, and q(i,x) / x —> 0 as x —> 0 uniformly for 0 < t < oo. (iv) r and m : U —> i?" are continuous, r(x)/x —> 0 as x ^ 0, and m(x) < u>\x\ for some u> > 0. (v) p : [0, oo) x U —> i?" is continuous and p(t,x) < A(t)x where A : [0, oo) —> [0, oo) is continuous and Jo°° \(t) dt < oo. (vi) A is an n x n constant matrix whose characteristic roots all have negative real parts and there is a matrix B = BT satisfying ATB + BA = I, x > 2fc[x r Bx] 1 / 2 , Bx < i f j x ^ x ] 1 / 2 , and 7  x  < [x^Bx] 1 / 2 for 7, k, and K positive. (vii) C(t,s) is continuous for 0 < s < t < oo and Jt°° \C(u,s)\du is continuous for 0 < s < t < oo. (viii) There exists K > K and k > 0 with k < k  K f™ \C(u, t)\ du. Theorem 6.4.5. Consider (6.4.11) and suppose that (i)(viii) hold. Then for each e > 0 and each to >0 there exists 5 > 0 such that if \(f>(t)\ < S on [0, to]j then x(t, to, 0 as i —> oo. Proof. Define F(i,x()) = j ^ S x ]
1
/^^
r x exp \ — L
L
/* /* *
C(u,s)dux(s)ds
i \(s) ds ,
^o
J
6.5. A FUNCTIONAL IN THE UNSTABLE CASE where L satisfies [xTBx]x/2L V('6.4.11) f
211
> K\x\. Then
) r
< \  LX(t)[xTBx}1/2
iKK) +
_
roo
 \kK
i
\C(u,t)\du
f \C(t,s)\\x(s)\ds + K f
Jo tfq(i,x)+*:H(i,x(.))
+ K\p(t,x)\\exp\L < \ k\x\(KK)
I
Jo
Jo
x
\D(t,s)v(x(s))ds
f X(s)ds] \C(t,s)\\x(s)\ds
+ K*α ∫_0^t |C(t, s)| |r(x(s))| ds + K|p(t, x)|, and conditions (i)-(viii) make this expression nonpositive for |x| small, which yields the conclusion of Theorem 6.4.5.

For each s ≥ 0 there is a unique n × n matrix Z(t, s) such that Z(s, s) = I which satisfies (7.1.12). A particular solution of (7.1.7) will then be expressed in terms of this matrix and f. In fact, it will be true that Z(t, s) = R(t, s). We will use a contraction mapping argument here. A proof of the contractive mapping principle was given in Section 3.1.

Proposition 7.1.1. The solution x(t) of
x'(t) = A(t)x(t) +
B(t,u)x(u)du, s
is unique and exists on [s, oo).
x(s)=x 0 ,
(7.1.13)
7. THE RESOLVENT

Proof. Write (7.1.13) as

    x(t) = x_0 + ∫_s^t [ A(v)x(v) + ∫_s^v B(v,u)x(u) du ] dv
         = x_0 + ∫_s^t A(u)x(u) du + ∫_s^t ∫_u^t B(v,u) dv x(u) du
         = x_0 + ∫_s^t [ A(u) + ∫_u^t B(v,u) dv ] x(u) du ,

where we have changed the order of integration. For a given T > s and an n×n matrix C(t,u) defined and continuous for s ≤ u ≤ t ≤ T, we define a matrix norm by taking |C| to be sup_{s≤u≤t≤T, |x|≤1} |C(t,u)x|. Find a number r with

    | A(u) + ∫_u^t B(v,u) dv | ≤ r − 1 ,   s ≤ u ≤ t ≤ T .

Let (M, |·|_r) be the complete metric space of continuous functions φ : [s,T] → R^n with φ(s) = x_0 and with the metric induced by the norm

    |φ|_r := sup_{s≤t≤T} |φ(t)| e^{−rt} ,

and define P on M by

    (Pφ)(t) = x_0 + ∫_s^t [ A(u) + ∫_u^t B(v,u) dv ] φ(u) du .

For φ, ψ ∈ M we have

    |(Pφ)(t) − (Pψ)(t)| e^{−rt} ≤ |φ − ψ|_r ∫_s^t (r − 1) e^{−r(t−u)} du ≤ [(r − 1)/r] |φ − ψ|_r ,

so P is a contraction and its unique fixed point is the unique solution of (7.1.13) on [s,T]. Since T > s was arbitrary, the solution exists on [s,∞). This completes the proof.

Applied to each column of the identity matrix, Proposition 7.1.1 yields the unique n×n matrix Z(t,s) satisfying

    ∂Z(t,s)/∂t = A(t)Z(t,s) + ∫_s^t B(t,u)Z(u,s) du ,   Z(s,s) = I .      (7.1.12)
Theorem 7.1.1. The solution of (7.1.7) such that x(0) = x_0 is given by the variation of parameters formula

    x(t) = Z(t,0)x_0 + ∫_0^t Z(t,s)f(s) ds .      (7.1.14)

Proof. Define y : [0,T] → R^n by y(t) = ∫_0^t Z(t,s)f(s) ds. Differentiating and using (7.1.12), we have

    y'(t) = Z(t,t)f(t) + ∫_0^t [∂Z(t,s)/∂t] f(s) ds
          = f(t) + ∫_0^t ( A(t)Z(t,s) + ∫_s^t B(t,u)Z(u,s) du ) f(s) ds
          = f(t) + A(t) ∫_0^t Z(t,s)f(s) ds + ∫_0^t ∫_s^t B(t,u)Z(u,s)f(s) du ds
          = f(t) + A(t)y(t) + ∫_0^t B(t,u) ∫_0^u Z(u,s)f(s) ds du
          = A(t)y(t) + ∫_0^t B(t,u)y(u) du + f(t) .

Thus, y(t) is a solution of (7.1.7) for 0 ≤ t ≤ T. Since T is arbitrary, it follows that y is a solution for all t ≥ 0. Moreover, since Z(t,0)x_0 satisfies the homogeneous equation, Z(t,0)x_0 + y(t) is the desired solution of the nonhomogeneous equation. This completes the proof.

If the lower limit of integration in (7.1.7) is replaced by τ, then the solution with x(τ) = x_0 is given by

    x(t) = Z(t,τ)x_0 + ∫_τ^t Z(t,s)f(s) ds .
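The variation of parameters formula (7.1.14) can be checked numerically. The sketch below uses illustrative scalar data, not from the text: a(t) = -1, b(t,s) = 0.5 e^{-(t-s)}, f = 1, x(0) = 1. It advances Becker's resolvent Z(t,s) and the full equation by Euler's method and compares x(T) with Z(T,0)x_0 + the integral term.

```python
import numpy as np

# Illustrative scalar data (not from the text): a(t) = -1, b(t,s) = 0.5 e^{-(t-s)},
# f(t) = 1, x(0) = 1.  Both schemes are plain forward Euler, so the two
# answers agree only to O(h).
a = lambda t: -1.0
b = lambda t, s: 0.5 * np.exp(-(t - s))

h, T = 0.01, 2.0
N = int(T / h)
t = h * np.arange(N + 1)

# Becker's resolvent (7.1.12): dZ(t,s)/dt = a(t)Z(t,s) + int_s^t b(t,u)Z(u,s) du,
# Z(s,s) = 1, stored as Z[i, j] ~ Z(t_i, t_j) for j <= i.
Z = np.zeros((N + 1, N + 1))
for j in range(N + 1):
    Z[j, j] = 1.0
for i in range(N):
    for j in range(i + 1):
        memory = h * np.sum(b(t[i], t[j:i + 1]) * Z[j:i + 1, j])
        Z[i + 1, j] = Z[i, j] + h * (a(t[i]) * Z[i, j] + memory)

# Direct Euler solution of x' = a(t)x + int_0^t b(t,s)x(s) ds + 1.
x = np.zeros(N + 1)
x[0] = 1.0
for i in range(N):
    memory = h * np.sum(b(t[i], t[:i + 1]) * x[:i + 1])
    x[i + 1] = x[i] + h * (a(t[i]) * x[i] + memory + 1.0)

# Variation of parameters (7.1.14): x(T) ~ Z(T,0)x_0 + int_0^T Z(T,s)f(s) ds.
x_vp = Z[N, 0] * x[0] + h * np.sum(Z[N, :])
print(x[N], x_vp)  # the two values agree to O(h)
```

The triangular array Z[i, j] realizes the two-parameter resolvent; only the last row enters the final formula, which is the practical appeal of (7.1.14).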
7.2
A Floquet Theory
In Section 2.6 we had a brief look at Floquet theory for ordinary differential equations. Suppose that Z(t) is the principal matrix solution of

    x' = A(t)x      (7.2.1)

so that Z'(t) = A(t)Z(t) and Z(0) = I. Then the variation of parameters formula for

    x' = A(t)x + f(t)      (7.2.2)

is given by

    x(t,0,x_0) = Z(t)x_0 + ∫_0^t Z(t)Z^{−1}(s)f(s) ds .      (7.2.3)

Suppose that f(t) is bounded and we want to show that solutions of (7.2.2) are bounded. Even if we know that Z(t) → 0 and that Z ∈ L^1[0,∞), Z^{−1}(s) has terms of Z divided by the determinant of Z, so that Z^{−1}(s) can be very large. It requires Draconian conditions on A(t) to ensure boundedness of solutions. But if A(t + T) = A(t) for all t and some T > 0, it is possible to find a constant matrix R and a periodic matrix P with

    Z(t) = P(t)e^{Rt} ,   P(0) = I .      (7.2.4)

Now the variation of parameters formula becomes

    x(t) = P(t)e^{Rt}x_0 + ∫_0^t P(t)e^{R(t−s)}P^{−1}(s)f(s) ds .      (7.2.5)

The critical term e^{Rt} is preserved. The matrix P is periodic and nonsingular, so those terms are bounded. Thus, for bounded f we are asking that

    ∫_0^∞ |e^{Rt}| dt < ∞ .

If g : [a,b] → R^n has a continuous derivative, then

    ∫_a^b ( |g(u)| + (b − a)|g'(u)| ) du ≥ (b − a) max_{a≤u≤b} |g(u)| .      (7.2.16)

To see this, let |g(u_0)| = min_{a≤u≤b} |g(u)| =: m and |g(u_1)| = max_{a≤u≤b} |g(u)| =: M. Then the left side is at least

    (b − a)m + (b − a)|g(u_1) − g(u_0)| ≥ (b − a)m + (b − a)( |g(u_1)| − |g(u_0)| ) = (b − a)M .

One may verify that (7.2.16) holds for n×n matrices using the induced matrix norm.
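The factorization (7.2.4) can be computed directly: integrate Z' = A(t)Z over two periods, take the monodromy matrix M = Z(T), set R = (1/T) log M, and form P(t) = Z(t)e^{−Rt}. The matrix A(t) below is an illustrative choice, not from the text (upper triangular, so the matrix logarithm is real); scipy supplies the matrix exponential and logarithm.

```python
import numpy as np
from scipy.linalg import expm, logm

# Periodic coefficient matrix, period T = 2*pi; upper triangular is an
# illustrative choice (not from the text) keeping the monodromy log real.
T = 2 * np.pi
def A(t):
    return np.array([[-1.0 + 0.5 * np.sin(t), 1.0],
                     [0.0, -2.0 + 0.5 * np.cos(t)]])

n = 2000
h = T / n
Zs = [np.eye(2)]                       # principal matrix solution of Z' = A(t)Z
for k in range(2 * n):                 # integrate over two periods with RK4
    t, Z = k * h, Zs[-1]
    k1 = A(t) @ Z
    k2 = A(t + h / 2) @ (Z + h / 2 * k1)
    k3 = A(t + h / 2) @ (Z + h / 2 * k2)
    k4 = A(t + h) @ (Z + h * k3)
    Zs.append(Z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))

M = Zs[n]                              # monodromy matrix Z(T)
R = logm(M).real / T                   # constant matrix with e^{RT} = Z(T)
P = [Zs[k] @ expm(-R * (k * h)) for k in range(2 * n + 1)]

# (7.2.4): P(0) = I and P is T-periodic, so Z(t) = P(t) e^{Rt}.
print(np.linalg.norm(P[n // 3 + n] - P[n // 3]))
```

The periodicity check rests on Z(t + T) = Z(t)Z(T) for a T-periodic coefficient matrix, which is exactly what makes the constant matrix R well defined.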
Theorem 7.2.1. Let A and B be continuous, let R(t,s) satisfy (7.2.13), and let

    B(t + T, s + T) = B(t,s)   and   A(t + T) = A(t) .      (7.2.17)

(i) If there is a J > 0 such that

    ∫_s^t |B(u,s)| du ≤ J   for 0 ≤ s ≤ t < ∞ ,      (7.2.18)

and if there is a J* > 0 such that

    ∫_0^t |B(t,s)| ds ≤ J*   for t ≥ 0 ,      (7.2.21)

then (7.2.20) implies (7.2.19).

Proof. If we integrate (7.2.13) from s to t, s ≤ t, we obtain

    ∫_s^t |∂R(u,s)/∂u| du ≤ ( ||A|| + J ) ∫_s^t |R(u,s)| du .

Then for t_2 > t_1 ≥ T we have

    r(t_2) − r(t_1) = ∫_0^{t_1} ( |R(t_2,s)| − |R(t_1,s)| ) ds
        ≤ ∫_0^{t_1} |R(t_2,s) − R(t_1,s)| ds
        = ∫_0^{t_1} | ∫_{t_1}^{t_2} [∂R(t,s)/∂t] dt | ds
        ≤ ∫_0^{t_1} ∫_{t_1}^{t_2} |∂R(t,s)/∂t| dt ds .

Changing the order of integration yields

    r(t_2) − r(t_1) ≤ ∫_{t_1}^{t_2} ∫_0^{t_1} |∂R(t,s)/∂t| ds dt ,

and the inner integral is estimated from (7.2.13), ∂R(t,s)/∂t = A(t)R(t,s) + ∫_s^t B(t,u)R(u,s) du, using ||A|| and J.
Lemma. If there is an E > 0 such that

    ∫_s^∞ |R(t,s)| dt ≤ E   for all s ∈ [0,T] ,      (7.2.22)

then

    sup_{t≥0} ∫_0^t |R(t,s)| ds = M   for some M > 0      (7.2.20)

is satisfied.

Proof. If we integrate (7.2.22) from 0 to T, the value is bounded by ET.
/ Jo
\R{t,0)\dt 0 as i > oo.
7.2. A FLOQUET THEORY
231
Proof. We showed in the proof of Theorem 7.2.1 that o
roo
I \dR(u,0)/du\du< (\\A\\ + J) I \R(u,0)\du. Jo Jo A similar result holds for each jth column of R(t,0), say z(t,O,ej). If the theorem is false, there is a j , an e > 0, and a sequence {tn} —> oo with z(i ra ,0,ej) > e . Also, ft z(t,0,ej) = ej + / z'(u,0,ej)du
Jo
so that tn < t < tn + 1 implies that z(i,0,ej)^(in,0,ej) < f \z'(u,0,ej)\du<e/2 Jtn, for large n. Hence ^:(t, 0, e^)  > e/2 for tn < t < tn + 1, contradicting z(t,0,ej) G L1. This completes the proof. Example 7.2.1. Consider the scalar equation
r* z' = A(t)z+
B(t,u)z(u)du Js
in which (7.2.17) and (7.2.18) hold with A and B continuous. If there is an a > 0 and s* G [0, T] with \B(ts
+ s*,s*)\>\B(t,s)\
for
0 < s < t < oo
and with A(t)+
\B(u + s*,s*)\du 0 as t > oo, and that i(t + T) = f(i). Then there exists a sequence of positive integers {fij} such that the function y defined in (7.2.12) satisfies y(t + n / r , 0 , y o )  » I J — oo
R{t, s)f(s) ds := x(t),
j > oo ,
(7.2.25)
7.3. UAS AND INTEGRABILITY OF THE RESOLVENT
233
where x(t) is a T—periodic solution of d
^=A(t)x+f B(t,s)x(s)ds + f(t) dt J_00
(7.2.25)
on (—oo, oo).
7.3
UAS and Integrability of the Resolvent
The basic perturbation result in the previous chapter, Theorem 6.4.5, concerns the equation ,t
,t
x ' = , 4 x + / C(t,s)x(s)ds Jo
+ / Jo
D(t,s)r(x(s))ds
+ q(t,x) + H(i,x()) + p(t,x).
(6.4.11)
It depends on the Liapunov functional ("i /co
V(t,x()) = [xTBx]1/2+K
/ / \C(u,s)\du\x(s)\ds Jo Jt
for the system r* x' = ^ix + / C(t, .s)x(.s) ds , Jo which is required to have nice properties. Other perturbation results are found in Chapters 2 and 5. In this section we present and discuss the work of Hino and Murakami (1996) and Zhang (1997) who consider a system /"* x'(t) = A(t)x(t) +
B(t,s)x(s)ds (7.3.1) Jo where A, B are nx n matrix functions, A(t) is continuous on [0, +oo), B(t, s) is continuous for 0 < s < t < oo. The classical resolvent equation of (7.3.1) is 8R(t,s) os
=_fl(t
)^( s )_ f R(t,u)B(u,s)du, Js
R(t,t)=I,
(7.3.2)
for t > s > 0, while Becker's resolvent is
^p^=A(t)R(t,s)+ ot
f B(t,u)R{u,s)du
Js
R(s,s)=I,
(7.3.3)
234
7. THE RESOLVENT
for t > s > 0, where / is the n x n identity matrix. When A is a constant matrix and B{t,s) = B(t — s) is of convolution type, equation (7.3.1) becomes /"* x'(t) = Ax{t) +
B{t s)x(s) ds
(7.3.4)
Jo and the resolvent equation (7.3.3) is * Z'(t)=AZ(t)+ B(ts)Z(s)ds, Z(0) = I. Jo If we apply the standard variation of parameters formula to ft x'(t) = A(t)x(t) + / B(t, s)x(s) ds + p(t) then a solution x(i) = x(t,to,4>) °f the perturbed equation may be expressed as
x(Mo,o Jo
(7.3.6)
The proof of this theorem is based on a series of results of Hino and Murakami (1996) and Zhang (1997) on uniform asymptotic stability and total stability of (7.3.1). Lemma 7.3.1. Zhang (1997) If (Hi) and (7.3.6) hold, then there exists a constant K such that \R(t, s)\ < K for all 0 < s < t < 00.
236
7. THE RESOLVENT
Proof. Since R(t, s) is a solution of (7.3.2), we obtain R(t,s)=I
+
R(t,u)A(u)du + I
I
Js
Jv
Js
R(t,u)B(u,v)dudv.
Interchange the order of integration in the last term to obtain R(t,u)A(u)du +
R(t,u) Js
B(u,v)dvdu Js
and \R(t,s)\ < 1+ / \R(t,u)\\A(u)\du Js
,t
,u
+ / \R(t,u)\ / Js Js
\B(u,v)\dvdu.
(7.3.7)
By (Hi) and (7.3.6), there are positive constants M and L such that sup(^(t)+ / \B(t,s)\ds\ <M t>o [ J o ) and sup //"* \R(t,s)\ ds < L . t>o Jo
It then follows from (7.3.7) that \R(t, s) < 1 + ML =: K. This completes the proof. Lemma 7.3.2. Zhang (1997) The matrix functions A(t) and B(t, t + s) can be continuously extended to (t, s) e R x R~ with A(t) = A(t) and B(t,t + s) = B(t,t + s) on Q = {(t,s) G R+ x R~ \  t < s < 0}. Moreover, if (Hi)(Hs) hold for A(t) and B(t,t + s), then the extensions A{t) and B(t, t + s) satisfy the following conditions: (Hi) sup \\A(t)\+ f \B(t,s)\ds\=:M ten I Joo J
0, there exists an S = S(cr) > 0 such that /
\B(t,u)\du e C([0,i0]) with \\<j>\\ < 5(1), S() is the one given for the TS of the zero solution of (7.1.1). Then \x(t,to, )\ < 1 for all t > to Now for any s > 0 (0 < s < 1), a > 0, we set
*
u(t) = u(t,a,e) = < l+sat [l
ift>0 if t < 0,
define y(t) by y(t)
teR+,
= u(t  to)x(t),
and p(t) by ft
p(i) = u'(t  io)x(t) + / B(t,s)x.(s)[u(tto)u(sto)]ds Jo
(7.3.8)
for t > to. One may verify that y(t) satisfies (7.3.5) for t > to with p(i) denned in (7.3.8). Notice also that for t > 0 w
eV
2 + 2£at)
and ,. .
a(2e)
^=(TT^FThis yields 1 < u(t) < 2/e, \u(t)  u(s)\ < 2a\t  s\ for t,s e R. It follows
from (H2) that for any 77 > 0, there exists an S = S(r]) > 0 such that / Jo
\B(t,u)\du S(r]). By (Hi), there exists a constant M* > 0 such that sup / \B(t,s)\ds<M* . t>o Jo
238
7. THE RESOLVENT
Let t > to Without loss of generality, we may assume that to > S(rj). By (7.3.8) we have /"* p(i) 0 and a = a(e) so small that p(t) < t0. Hence if t > t0 + (1  e)/(ea), then x(t,t o ,0) = y(t)/u(tt o ) < [l+ea(tto)]/[l
+ 2a(tto)] <e
which proves the theorem. Next we shall discuss the converse of the above theorem. To do this, we assume that (Hi)(Hs) holds and study the limiting equation of (7.3.1) with A(t) and B(t, s) replaced by A(t) and B(t, s). By the AscoliArzela's theorem, (S3) implies that for any sequence {t'k} with t'k —> 00 as k —> 00, there exists a subsequence {tk} of {t'k} and functions D(t) and E(t, s) such that A(t + tk) * D(t) and B(t + tk, t + tk + s) —> E{t, t + s) as k —> 00 uniformly on J x K for any compact sets J G R and if C R~. We denote by F(A,i3) the set all pairs (D,E) which satisfy the above situation for some sequence {tk} with tk —> 00 as k —> CXD. We can easily see that each (D,E) £ T(A,B) also satisfies (Hi)(Hs) with the same number M and S(a). In particular, (A,B) G T(A,B) whenever A(t) and B(t,t + s) are almost periodic in t £ i? uniformly for s £ i?~ [see Hino and Murakami (1991)]. If (D,E) e T(A,B), then the equation /"* x'(t)=D(t)x(t)+
E(t,sMs)ds
(Loo)
Jco
is called a limiting equation of (7.3.1). [See (7.2.23). In a similar way, one can also define the stability of the zero solution of (Loo) by taking (OO.T] m the place of ) is UAS with the same common triple (5o,S(),T()). We now claim that for any e > 0, there exists an a(e) > 0 and 8(e) > 0 such that T > a(s), r . Then the zero solution of (7.3.1) is TS since it is unique. We prove the claim by the method of contradiction. Suppose there exists an e, 0 < e < 5Q/2, and sequences {rfe} G R+, Tfc —> oo as k —> oo, {rk},rk > 0, and functions G C([0, rfe]), pfe G C([rfe, oo)) and the solution xfe(i) = x(t,Tk, (pk,pk) of (7.3.5) with p = pfc through (Tk,4> ) such that p fe [r fc ,oo)k (u + rk + rk T)du + pk(t + Tk+rkT) for t > T—rk. In this case we may assume that {yk} converges to a function y uniformly on any compact set in (—00, T]. Moreover, y is a solution of
240
7. THE RESOLVENT
(Loo) on [0,T]. Letting k > oo in (7.3.9), we have y(t) < a on (oo,T] and y(T) = e. This is a contradiction since y(_oo.o] < £ < r as k —> oo for some r G i? + and set xk(t) = xk(t + rk) for £ > — Tfe. Then xfc(t) satisfies
d — dt
k
(t) = A(t + Tk)xk(t)+ + /
fl / B(t + Tk,u + Tk)x.k(u)du Jo
B(t + Tk,U + Tk)(pk(u + Tk)du +Pk(t + Tk)
for i > 0. Again, we may assume that the sequence {xfe} converges to a function x uniformly on any compact subset (—oo,r]. By the same reasoning as for y, we see that x is a solution of some limiting equation of (7.3.1). On the other hand, it follows from (7.3.9) that x(i) = 0 on R~ and x(r) = e. This is again a contradiction since we must have x(t) = 0 on R by the uniqueness of solutions of (.Loo) with respect to initial functions. This shows that the zero solution of (7.3.1) is TS if it is UAS. We are now ready to prove Theorem 7.3.1 by applying Perron's theorem [Perron (1930)] and using the properties of the resolvent R(t, s) defined in (7.3.2). It is also verified in Hino and Murakami (1996) that resolvent equations (7.3.2) and Becker's resolvent (7.3.3) are equivalent. Proof of Theorem 7.3.1. First we suppose that the zero solution of (7.3.1) is UAS. By Theorem 7.3.3, it is TS. Let p e C(R+) be bounded and x p G C(R+) satisfy ft x'(t) = A(t)x(t) + / B(t,s)x(s)ds Jo
+p(t)
for t > 0 with xp(0) = 0. By the variation of parameters formula, we obtain xp(i) = / R(t, s)p(s) ds for t > 0 . Jo Since the zero solution of (7.3.1) is TS, we see that x p is bounded on R+. This implies that J o R(t, s)p(s) ds is bounded on R+ whenever p G C(R+) is bounded. Applying Perron's theorem, we obtain that sup tefl + J o \R(t, s)\ds < oo, and hence (7.3.6) holds. Conversely, suppose that (7.3.6) holds with sup teij .+ J o \R(t, s)\ds = L for some L > 0. By Lemma 7.3.1, there exists a constant K > 0 such that
7.3. UAS AND INTEGRABILITY OF THE RESOLVENT
241
\R(t,s)\ < K for alH > s > 0. Let x(i) = x(t,i o ,^,p) be a solution of (7.3.5). By the variation of parameters formula again, we obtain »
*
x(t,t o ,0) = R(t,to)(u)duds Jo
R(t,s)p{s)ds.
Jtu
This implies that x(i)< \R(t,to)\\$(to)\ + f M > 0, then what more is needed to conclude boundedness, uniform boundedness, or uniform ultimate boundedness?
8.1
Existence and Uniqueness
We consider a system of Volterra functional differential equations x'^t) = fi(t,xi(s),... ,xn(s); a < s < t) for t > to, a > — oo, a < to, and i = 1,..., n. These equations are written as x'(t) =F(t,x()),
t>t0,
(8.1.1)
where x() represents the function x on the interval a, t] with the value of t always determined by the first coordinate of F in (8.1.1). Thus, (8.1.1) is a delay differential equation. This section and part of the next will closely follow the excellent paper by Driver (1962), which remains the leading authority on the subject of fundamental theory for (8.1.1). As Driver notes, much of his material is found elsewhere in varying forms; in particular, the early work is from Krasovskii, EPsgol'ts, Myshkis, Corduneanu, Lakshmikantham, and Razumikhin. But important formulations, corrections, and general synthesis are by Driver. Notation. (a) If x G Rn, then x = maxj = i,... jn \xi\. (b) Hip : [a,b] > Rn, then \\iP\\^=
sup   ^ ( s )   . a<s D) denotes the class of continuous functions ip : [a, b] —> D. Because a can be —oo, one accepts the following. Convention. If a = —oo, then intervals [a,t] and [a, 7) mean (—00, t] and (—00,7), respectively, and ip G C([a,t] —> D) means that there is a compact set L^, C D such that ip G C(( —oo,t] —> L,/,). This implies that ip G C[[a,t] —> D) with t0 < t < 7 means ip G C([a, t] —> L^) for some
compact set L^, C D, regardless of whether a is finite or not.
8.1. EXISTENCE AND UNIQUENESS
245
Definition 8.1.1. The functional F(t,x()) will be called (a) continuous in t if F(i, x()) is a continuous function oft for to < t < 7 whenever x G C([a, 7) —» Z)), (b) locally Lipschitz with respect to x if, for every 7 G [£0,7) and every compact set L C D, there exists a constant K^_L such that F(i,x(.))F(£,y(.)) i ) . Definition 8.1.2. Given an initial function G C([a,to] —> Z)), a solution is a function x G C([a,/?) —> £)), wiere to < /3 < 7, such that x(i) = ). A solution is unique if every other solution y(t,to,4>) agrees with x(i, to, 4>) as long as both are defined. Theorem 8.1.1, Theorem 8.1.5, and the first paragraph of Theorem 8.1.6 are the basic results on nondifferentiable Liapunov functions and functionals. Theorem 8.1.1. Driver (1962) Let ui(t,r) be a continuous, nonnegative function of t and r for to < t < (i, r > 0. Let v(t) be any continuous, nonnegative function for a < t < (3 such that limsup
v(t V + At);
v(t) —
max a < s < 4() v(s) be given, and suppose that the maximal continuous solution r(t), of r'(t) = u>(t,r(t)) for t > to with r(to) = ro exists for to 0 and all t £ [to,P), so v(t) < r(t) for all t £ \to,(5). This completes the proof. Theorem 8.1.2. Driver (1962) Let the functional F(i,x()) be continuous in t and locally Lipschitz in x. Let x(t) = x(t,to,(f>) and x(t) = x(t, to, and 4>e C([a,t0] ^ D). Then, for any (3 G (to,/3), x(t) and x(t) both map [a, J3\ into some compact set H c D, and x(t)  x(i) < U  WaM
exp[^; L (t  to)]
for to oo. Thus, T(x) = x or x(.
=
U(t)
for
[^(t o ) + / t *,F(s,x())ds
a < t < t0 for i 0 < i < i o + /i
This completes the proof. The next result indicates what must happen if a solution cannot be continued beyond some value of t. Again we see the contrast between ordinary differential equations and functional differential equations, as we saw in Section 3.3. Theorem 8.1.4. Driver (1962) Let F(t,x()) be continuous in t and locally Lipschitz in x and let (3 such that x(tfe) G D — H for k = 1,2,.... Proof. Uniqueness has already been proved. Now, suppose x(t) is a solution for a < t < P, where to < (3 < 7. Let G be a compact subset of D with ^GC([a,to]^G).
8.1. EXISTENCE AND UNIQUENESS
249
Suppose there is a compact set H C D such that x(i) G H for to < t < P. Let G1 = GUH. Then, as in the proof of Theorem 8.1.3, F(i,{t) = x(t) for a < t < /3. Because ^ G C([a,P] > Gi), Theorem 8.1.3 yields a solution x(t, P, cf>) on a < t < (3 + h, some h > 0. This completes the proof. Remark 8.1.2. Stability definitions for (8.1.1) are identical to those for Volterra integrodifferential equations. To speak of the zero solution, we must assume that 7 = +00, that D = BH = {x e Rn : x < H, 0 < H < 00}, and that F(t, 0) is zero. Moreover, one refers to stability to the right of some fixed to, which we shall always take to be zero. Definition 8.1.3. Let V(t, ip()) be a scalarvalued functional defined for t > 0 and tp G C([a, t] —> BH) Then the derivative ofV with respect to (8.1.1) is ^(8.i.i)(*»^(0) = l i m s u p ^
^
i
K
^,
where V
U(S) [V(0+ F (i>V'())(si)
fora<S(t,r)
is stable. Naturally, we mean to > 0 and r$ > 0. We are now ready to state a stability result, and in preparation, we offer an outline as a rough summary.
8.1. EXISTENCE AND UNIQUENESS
251
(i) Theorem 8.1.1 says that if lim sup
v(t v + At) v(t) / 5LZ < w ( i ) v(t)) 5
then v(t) < r(t), where r' = u>(t,r). (ii) Definition 8.1.3 allows us to take a derivative of V, say VL 1 jw along a solution of (8.1.1) without knowing the solution, (iii) Theorem 8.1.5 says that if V is continuous in t and locally Lipschitz in x, then the derivative in (ii) really is the derivative lim8up^ +
A t ,x(.,y)V(^(.))
At^0+
At
(iv) Theorem 8.1.6 will tell us to let v(t) = V(t,x()), apply (iii), and accept the conclusion of (i). Theorem 8.1.6. Driver (1962) ip e C([a,t] > BH) with
IfV(t,ip())
is defined for t > 0 and
(a) n*,0) = 0, (b) V continuous in t and Lipschitz in if), (c) V(t,ip()) > W(\ip(t)\), W a wedge, and if
(d) VfeAA)(tM))
W(\ip(t)\), W a wedge, (d) F ( ' 811) (i,x(.)) < WMx(£)), W1 a wedge, then the zero solution of (8.1.1) is asymptotically stable. Proof. Stability follows from Theorem 8.1.6. The asymptotic stability is almost identical to Marachkov's theorem (Theorem 6.1.3). Let x(i) by any solution of (8.1.1) on an interval [to,oo) with x(t) < Hi. If x(i) ^ 0, then there is an e > 0 and a sequence {tn} —> oo with
8.2. ASYMPTOTIC STABILITY
255
x(i n ) > e. Because x'(t) < M, there is a T > 0 with x(i) > e/2 for tn oo as t —> oo. If there is a function V : [a, oo) x BH —> [0, oo) such that (a) (b) (c) (d)
V(t,x) < W(\x\), W a wedge, V continuous in t and locally Lipschitz in x, V(t,x) > VKi(x), W1 a wedge, and there is a continuous, nondecreasing function f : [0, oo) —> [0, oo) with f(r) > r for all r > 0 and a wedge Wi with V(a,2A)(t,x(),g(t))
t0, x e C([a,t] > BH), and V(s,x(s)) for all s G [g(t),t],
t — h for t > 0, where h > 0 is a constant, then the zero solution is U.A.S. Proof. The proof is broken into three parts. (i) Uniform, stability and definition of 5\. Let e £ (0,H) be given. Find S G (0,£) with W(S) < Wi(£). Then for any t 0 > 0 and 0 G C([a,t] > Bs), we have VL2 i)(t>x{'>to, 0)i