Lecture Notes in Mathematics · A collection of informal reports and seminars · Edited by A. Dold, Heidelberg and B. Eckmann, Zürich

104
George H. Pimbley, Jr. University of California Los Alamos Scientific Laboratory, Los Alamos, New Mexico
Eigenfunction Branches of Nonlinear Operators, and their Bifurcations
Springer-Verlag Berlin · Heidelberg · New York 1969

Work performed under the auspices of the U. S. Atomic Energy Commission

All rights reserved. No part of this book may be translated or reproduced in any form without written permission from Springer-Verlag. © by Springer-Verlag Berlin · Heidelberg 1969. Library of Congress Catalog Card Number 70-97958. Printed in Germany. Title No. 3710.
TABLE OF CONTENTS

Introduction . . . 2
1. An Example . . . 4
2. The Extension of Branches of Solutions for Nonlinear Equations in Banach Spaces . . . 11
3. Development of Branches of Solutions for Nonlinear Equations near an Exceptional Point. Bifurcation Theory . . . 18
4. Solution of the Bifurcation Equation in the Case n = 1; Bifurcation at the Origin . . . 29
5. The Eigenvalue Problem; Hammerstein Operators; Sublinear and Superlinear Operators; Oscillation Kernels . . . 43
6. On the Extension of Branches of Eigenfunctions; Conditions Preventing Secondary Bifurcation of Branches . . . 58
7. Extension of Branches of Eigenfunctions of Hammerstein Operators . . . 80
8. The Example of Section 1, Reconsidered . . .
9. A Two-Point Boundary Value Problem . . .
10. Summary; Collection of Hypotheses; Unsettled Questions . . . 102
Bibliography . . . 114
Additional References . . . 116
Appendix: Another Bifurcation Method; the Example of Section 1, Reconsidered Again . . . 120
INTRODUCTION

The series of lectures on nonlinear operators covered by these lecture notes was given at the Battelle Memorial Institute Advanced Studies Center in Geneva, Switzerland during the period June 27 - August 5, 1968, at the invitation of Dr. Norman W. Bazley of the Battelle Research Center in Geneva. The material is taken from the results of approximately seven years of work on the part of the author at the Los Alamos Scientific Laboratory of the University of California, Los Alamos, New Mexico. Much of this material had previously been published in the open literature (see the Bibliography). This effort was generated by the need for a nonlinear theory observed in connection with actual problems in physics at Los Alamos.

In deriving nonlinear theory, abstract formulation is perhaps a desired end; but in the newer parts of the theory, as with secondary bifurcation in these notes, progress seems to be made more easily with concrete assumptions, as with our preoccupation with Hammerstein operators with oscillation kernels.

The entire lecture series had to do with the eigenvalue problem λx = T(x), where T(x) is a bounded nonlinear operator. Other authors, with a view to applications in nonlinear differential equations, with appropriate use of Sobolev spaces to render the operators bounded, have preferred to study eigenvalue problems of the form (L_1 + N_1)u = λ(L_2 + N_2)u, where L_1, L_2 are linear and N_1, N_2 are nonlinear. Such is the case with M. S. Berger [ref. 4]. In these notes we had the less ambitious goal of understanding nonlinear integral equations, whence we concentrated on the simpler problem λx = T(x).
1. An Example
So as to illustrate the type of problems considered in these notes, we present an eigenvalue problem for a nonlinear operator which can be attacked by elementary methods. Namely, we solve the following integral equation,

  λφ(s) = (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][φ(t) + φ^3(t)] dt,    (1.1)

which has a second-rank kernel. We suppose that 0 < b < a. Because of the form of the kernel, any solution of eq. (1.1) is necessarily of the form φ(s) = A sin s + B sin 2s with undetermined constants A, B (which will turn out to be functions of the real parameter λ).
Substituting in eq. (1.1), we have

  λ[A sin s + B sin 2s] = (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][(A sin t + B sin 2t) + (A sin t + B sin 2t)^3] dt

  = (2/π) a sin s [A ∫_0^π sin^2 t dt + A^3 ∫_0^π sin^4 t dt + 3AB^2 ∫_0^π sin^2 t sin^2 2t dt]
  + (2/π) b sin 2s [B ∫_0^π sin^2 2t dt + 3A^2 B ∫_0^π sin^2 2t sin^2 t dt + B^3 ∫_0^π sin^4 2t dt]

  = a sin s [A + (3/4)A^3 + (3/2)AB^2] + b sin 2s [B + (3/2)A^2 B + (3/4)B^3],

where use has been made of the following values of integrals:

  ∫_0^π sin^2 t dt = ∫_0^π sin^2 2t dt = π/2,   ∫_0^π sin^4 t dt = ∫_0^π sin^4 2t dt = 3π/8,
  ∫_0^π sin^2 t sin^2 2t dt = π/4,
  ∫_0^π sin t sin 2t dt = ∫_0^π sin^3 t sin 2t dt = ∫_0^π sin t sin^3 2t dt = 0.
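These integral values are easily confirmed by quadrature. The following sketch (our own check, not part of the original text; `simpson` is a routine composite Simpson rule) verifies each of them:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

pi = math.pi
checks = {
    "sin^2 t":          (simpson(lambda t: math.sin(t)**2, 0, pi), pi / 2),
    "sin^4 t":          (simpson(lambda t: math.sin(t)**4, 0, pi), 3 * pi / 8),
    "sin^2 t sin^2 2t": (simpson(lambda t: math.sin(t)**2 * math.sin(2*t)**2, 0, pi), pi / 4),
    "sin t sin 2t":     (simpson(lambda t: math.sin(t) * math.sin(2*t), 0, pi), 0.0),
    "sin^3 t sin 2t":   (simpson(lambda t: math.sin(t)**3 * math.sin(2*t), 0, pi), 0.0),
    "sin t sin^3 2t":   (simpson(lambda t: math.sin(t) * math.sin(2*t)**3, 0, pi), 0.0),
}
for name, (numeric, exact) in checks.items():
    assert abs(numeric - exact) < 1e-9, name
print("all integral values confirmed")
```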
Equating coefficients of sin s and sin 2s, we obtain a pair of nonlinear simultaneous algebraic equations:

  λA = aA + (3/4) a A^3 + (3/2) a A B^2,
  λB = bB + (3/2) b A^2 B + (3/4) b B^3.    (1.2)

There are four kinds of solutions of equations (1.2):

1) A = B = 0; this gives the trivial solution of eq. (1.1).

2) A ≠ 0, B = 0; only the first equation is nontrivial. We cancel A ≠ 0 to obtain λ = a + (3/4) a A^2, whence A = ±(2/√3)√(λ/a − 1). The corresponding solution of eq. (1.1) is

  φ_1(s,λ) = ±(2/√3)√(λ/a − 1) sin s,

defined and real for λ ≥ a.

3) A = 0, B ≠ 0; only the second equation is nontrivial. We cancel B ≠ 0 to obtain λ = b + (3/4) b B^2, whence B = ±(2/√3)√(λ/b − 1). The corresponding solution of eq. (1.1) is

  φ_2(s,λ) = ±(2/√3)√(λ/b − 1) sin 2s,

defined and real for λ ≥ b, where we recall that b < a.

4) A ≠ 0, B ≠ 0; here both A and B may be cancelled in eq. (1.2). We obtain the two ellipses:

  (3/4) A^2 + (3/2) B^2 = λ/a − 1,
  (3/2) A^2 + (3/4) B^2 = λ/b − 1.    (1.3)
Solutions of eq. (1.2) are given by intersections of these ellipses. Solving, we get

  A^2 = (4/9)[(2a−b)λ/(ab) − 1],   B^2 = (4/9)[(2b−a)λ/(ab) − 1],

so that we have the following solutions of eq. (1.1):

  φ_3(s,λ) = ±(2/3)√((2a−b)λ/(ab) − 1) sin s ± (2/3)√((2b−a)λ/(ab) − 1) sin 2s.    (1.4)

Clearly 2a−b > 0 since we assumed that b < a. Hence the question of whether or not solutions of the form (1.4) can be real hinges upon whether or not 2b−a > 0, i.e., b > a/2. We have the following cases:

Case I: b/a ≤ 1/2; φ_3(s,λ) is real for no real λ.

Case II: b/a > 1/2; φ_3(s,λ) is real for λ ≥ max(ab/(2a−b), ab/(2b−a)). Since a > b, this means that φ_3(s,λ) is real when λ ≥ ab/(2b−a).

Under Case I above, i.e., when b/a ≤ 1/2, the only real solutions of eq. (1.1) are the trivial solution φ(s,λ) ≡ 0, and the two main branches:

  φ_1(s,λ) = ±(2/√3)√(λ/a − 1) sin s,
  φ_2(s,λ) = ±(2/√3)√(λ/b − 1) sin 2s.
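The four families can be verified by substituting them back into system (1.2). A small check of our own follows (a, b, λ are arbitrary sample values with 0 < b < a and λ large enough that every branch is real):

```python
import math

a, b, lam = 1.0, 0.75, 3.0         # sample values: 0 < b < a, b > a/2

def residual(A, B):
    # residuals of the two equations of system (1.2)
    r1 = lam*A - (a*A + 0.75*a*A**3 + 1.5*a*A*B**2)
    r2 = lam*B - (b*B + 1.5*b*A**2*B + 0.75*b*B**3)
    return max(abs(r1), abs(r2))

A1 = (2/math.sqrt(3))*math.sqrt(lam/a - 1)          # branch 2): B = 0
assert residual(A1, 0.0) < 1e-12

B2 = (2/math.sqrt(3))*math.sqrt(lam/b - 1)          # branch 3): A = 0
assert residual(0.0, B2) < 1e-12

A3 = (2/3)*math.sqrt((2*a - b)*lam/(a*b) - 1)       # branch 4), eq. (1.4)
B3 = (2/3)*math.sqrt((2*b - a)*lam/(a*b) - 1)
assert residual(A3, B3) < 1e-12
print("all nontrivial branches satisfy system (1.2)")
```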
The solutions φ_1 and φ_2 branch away from the trivial solution φ ≡ 0 at the eigenvalues a, b of the linearization of eq. (1.1) at the origin:

  λh(s) = (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t] h(t) dt.    (1.5)

We can represent this situation pictorially in two ways. [Figures: the branches plotted against the axes sin s and sin 2s, and their norms plotted against λ, with bifurcation points at λ = a and λ = b.]
If b/a > 1/2, a third type of solution branch appears, namely that in eq. (1.4). Note that as λ → ab/(2b−a), λ > ab/(2b−a), the coefficient ±(2/3)√((2b−a)λ/(ab) − 1) → 0, while ±(2/3)√((2a−b)λ/(ab) − 1) → ±(2/√3)√((a−b)/(2b−a)). On the other hand note that

  φ_1(s, ab/(2b−a)) = ±(2/√3)√((a−b)/(2b−a)) sin s.

Therefore at λ = ab/(2b−a), the sub-branch (twig)

  φ_3^+(s,λ) = (2/3)√((2a−b)λ/(ab) − 1) sin s ± (2/3)√((2b−a)λ/(ab) − 1) sin 2s

joins the positive main branch, i.e., φ_3^+(s, ab/(2b−a)) = φ_1^+(s, ab/(2b−a)), while the sub-branch (twig)

  φ_3^-(s,λ) = −(2/3)√((2a−b)λ/(ab) − 1) sin s ± (2/3)√((2b−a)λ/(ab) − 1) sin 2s

joins the negative main branch, i.e., φ_3^-(s, ab/(2b−a)) = φ_1^-(s, ab/(2b−a)).
We have here, under Case II, when b/a > 1/2, the phenomenon of "secondary bifurcation," i.e., the formation of sub-branches or twigs which bifurcate from the main branches. The main branches bifurcate from the trivial solution at the eigenvalues a, b of the linearization (1.5), while the twigs bifurcate from the main branches. We can represent the situation again in two ways:
[FIG. 1.2a: the branches and twigs plotted against the axes sin s and sin 2s for b/a > 1/2. FIG. 1.2b: √(A^2 + B^2) plotted against λ, showing the twigs leaving the main branch.]
Thus solutions of the nonlinear equation (1.1) exist as continuous loci in (λ, sin s, sin 2s) space. There are two main branches: φ_1(s,λ) splits off from the trivial solution φ ≡ 0 at λ = a, and its two parts φ_1^+, φ_1^− differ only in sign; φ_2(s,λ) joins the trivial solution at λ = b, and its two parts φ_2^+, φ_2^− differ only in sign. a and b on the λ axis are the primary bifurcation points for the main branches. If b/a > 1/2, i.e., Case II, two sub-branches or twigs split away from φ_1(s,λ) at λ = ab/(2b−a), which is known as a secondary bifurcation point.

The question of whether or not secondary bifurcation of the eigensolutions of eq. (1.1) takes place therefore hinges on whether we have b/a > 1/2 or b/a ≤ 1/2. The condition b/a ≤ 1/2 in this simple problem is a "condition preventing secondary bifurcation." Much interest attaches generally to the question of whether we have secondary bifurcation of a given branch of eigensolutions, or of any branch of eigensolutions of a nonlinear eigenvalue problem, and to the derivation of conditions preventing or allowing secondary bifurcation. The occurrence of secondary bifurcation clearly has a marked effect on the matter of multiplicity of solutions, over considerable ranges of the real parameter λ, as this simple example shows.

The example of this section is such that the solutions can be completely worked out by elementary methods. In the next sections we present the methods of nonlinear functional analysis which must be employed to study bifurcations and solution branches in the general theory of nonlinear eigenvalue problems. There happens to be, however, much qualitative similarity between the structure of solutions of problem (1.1) of this section and more general cases.
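The effect of secondary bifurcation on multiplicity is easy to tabulate from the closed-form branches (our own illustration; the sample values take b/a > 1/2 so that the twigs exist):

```python
a, b = 1.0, 0.75                    # Case II sample values: b/a > 1/2
lam_star = a*b/(2*b - a)            # secondary bifurcation point (= 1.5 here)

def count_real_solutions(lam):
    # count the real solutions of system (1.2), using the closed-form branches
    n = 1                           # trivial solution A = B = 0
    if lam > a:
        n += 2                      # phi_1^± (sin s main branch)
    if lam > b:
        n += 2                      # phi_2^± (sin 2s main branch)
    if b > a/2 and lam > lam_star:
        n += 4                      # the four twigs of eq. (1.4)
    return n

counts = [count_real_solutions(l) for l in (0.5, 0.9, 1.2, 2.0)]
assert counts == [1, 3, 5, 9]      # multiplicity jumps past each bifurcation point
print("number of real solutions:", counts)
```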
2. The Extension of Branches of Solutions for Nonlinear Equations in Banach Spaces.

In this section we consider general bounded continuously Fréchet-differentiable transformations T(x) of a real Banach space X into itself: x ∈ X, T(x) ∈ X. We assume that T(θ) = θ, where θ is the null element.
Let us suppose that the equation

  λx = T(x) + f,    (2.1)

where λ is a real parameter and f ∈ X is a fixed element, has a solution x_0 ∈ X for a value λ_0 of the parameter; i.e., suppose that λ_0 x_0 = T(x_0) + f. We pose the problem of finding a nearby solution x_0 + h for a nearby value λ = λ_0 + δ. Thus we solve the following equation for h, δ:

  T(x_0 + h) + f = (λ_0 + δ)(x_0 + h).    (2.2)

Using the definition of the Fréchet derivative T'(x_0) [ref. 15, p. 183], we can write eq. (2.2) in the form

  T(x_0) + T'(x_0)h + R_1(x_0, h) + f = λ_0 x_0 + λ_0 h + δ x_0 + δ h,

where ‖R_1(x_0,h)‖/‖h‖ → 0 as ‖h‖ → 0. Using the assumption that x_0, λ_0 satisfy eq. (2.1), we have

  [λ_0 I − T'(x_0)]h = −δ x_0 − δ h + R_1(x_0, h).    (2.3)
Since T'(x_0) is a bounded linear transformation such that T'(x_0)h ∈ X if h ∈ X, let us assume that λ_0 ∈ ρ(T'(x_0)); other complementary assumptions will be discussed in the next section. Thus λ_0 I − T'(x_0) has a continuous inverse M. Then from eq. (2.3) we write

  h = [λ_0 I − T'(x_0)]^{-1} {−δ x_0 − δ h + R_1(x_0, h)} = M F_δ(h).    (2.4)
We now prove a preliminary result about F_δ(h) defined in eq. (2.4):

Lemma 2.1: The function F_δ(h) = −δ x_0 − δ h + R_1(x_0, h) satisfies a Lipschitz condition

  ‖F_δ(h_1) − F_δ(h_2)‖ ≤ A(δ, h_1, h_2) ‖h_1 − h_2‖,

with A(δ, h_1, h_2) > 0, and A(δ, h_1, h_2) → 0 as |δ| → 0, ‖h_1‖ → 0, ‖h_2‖ → 0.

Proof: By definition of the Fréchet derivative,

  R_1(x_0, h) = T(x_0 + h) − T(x_0) − T'(x_0)h.

Hence

  R_1(x_0, h_1) − R_1(x_0, h_2) = T(x_0 + h_1) − T(x_0 + h_2) − T'(x_0)(h_1 − h_2)
  = T(x_0 + h_2 + [h_1 − h_2]) − T(x_0 + h_2) − T'(x_0)(h_1 − h_2)
  = T'(x_0 + h_2)(h_1 − h_2) + R_1(x_0 + h_2, h_1 − h_2) − T'(x_0)(h_1 − h_2),

so that

  ‖R_1(x_0, h_1) − R_1(x_0, h_2)‖ ≤ {‖T'(x_0 + h_2) − T'(x_0)‖ + ‖R_1(x_0 + h_2, h_1 − h_2)‖/‖h_1 − h_2‖} ‖h_1 − h_2‖.

The quantity in braces tends to 0 as ‖h_1‖ → 0 and ‖h_2‖ → 0. Now we have

  ‖F_δ(h_1) − F_δ(h_2)‖ ≤ |δ| ‖h_1 − h_2‖ + ‖R_1(x_0, h_1) − R_1(x_0, h_2)‖,

and the lemma immediately follows.
The following result depends upon the previous lemma:

Theorem 2.2: There exist positive constants c, d such that for |δ| < c, the mapping h* = M F_δ(h) carries the ball ‖h‖ ≤ d into itself, and is contracting thereon.

Proof: We have

  ‖h*‖ ≤ ‖M‖ { |δ| ‖x_0‖ + |δ| ‖h‖ + (‖R_1(x_0,h)‖/‖h‖) ‖h‖ }.

First let us take d_1 > 0 small enough that
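The contraction of Theorem 2.2 is the engine for continuing a known solution to nearby λ. A minimal numerical sketch (entirely ours, not from the text) with X = ℝ and T(x) = x^3 shows the iteration h ← M F_δ(h) converging to the nearby solution:

```python
# Toy illustration of the contraction h* = M·F_delta(h) in one dimension:
# T(x) = x^3, so T'(x0) = 3 x0^2 and R1(x0, h) = 3 x0 h^2 + h^3.
x0, lam0 = 1.0, 2.0
f = lam0*x0 - x0**3              # choose f so that (x0, lam0) solves lam*x = T(x) + f
M = 1.0/(lam0 - 3*x0**2)         # [lam0 I - T'(x0)]^{-1}; lam0 lies in the resolvent set
delta = 0.1                      # small parameter shift

h = 0.0
for _ in range(200):             # fixed-point iteration h <- M F_delta(h)
    R1 = 3*x0*h**2 + h**3
    h = M*(-delta*x0 - delta*h + R1)

x = x0 + h                       # continued solution at lam = lam0 + delta
residual = (lam0 + delta)*x - x**3 - f
assert abs(residual) < 1e-12
print("continued solution x(lam0 + delta) =", round(x, 6))
```

The iteration converges here because, for small δ and small h, the map is a contraction exactly as the theorem asserts.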
  ‖Ax‖ ≥ γ ‖[x]‖ for any [x] ∈ X/N(A),

where γ is the minimum modulus of A, [ref. 10, p. 96]. Hence given y ∈ R(A) there exists an element x ∈ [x] with y = Ax such that ‖x‖ ≤ c‖Ax‖ = c‖y‖, where c = 2/γ.

Now define M on R(A) as follows: put My = (I−E)x where y ∈ R(A), y = Ax, and E projects on N(A). M is well defined; indeed if y = Ax_1 = Ax_2, then x_1 − x_2 ∈ N(A), whence (I−E)x_1 = (I−E)x_2. Also AM = I since AMy = A(I−E)x = Ax = y, y ∈ R(A), and M is bounded: ‖My‖ = ‖(I−E)x‖ ≤ K_1‖x‖ ≤ cK_1‖y‖, using a proper choice of x.

On the other hand, if a pseudo-inverse M is given, M is bounded by the Closed Graph theorem. Therefore E = I − MA is bounded. Since AEx = 0, R(E) ⊂ N(A). If x ∈ N(A) then Ex = (I − MA)x = x. Hence E is the projection on N(A), and the lemma is proven.
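The content of the lemma can be seen in the simplest possible setting (our own finite-dimensional sketch): a singular 2×2 matrix A, the projection E onto its null space, and a pseudo-inverse M with MA = I − E:

```python
# Illustration (ours) of the pseudo-inverse lemma on a singular 2x2 matrix.
def matmul(P, Q):
    return [[sum(P[i][k]*Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1.0, 0.0], [0.0, 0.0]]       # null space N(A) = span{e2}, range R(A) = span{e1}
E = [[0.0, 0.0], [0.0, 1.0]]       # projection onto N(A)
M = [[1.0, 0.0], [0.0, 0.0]]       # pseudo-inverse: My = (I-E)x on R(A), extended by 0

I = [[1.0, 0.0], [0.0, 1.0]]
IE = [[I[i][j] - E[i][j] for j in range(2)] for i in range(2)]

assert matmul(M, A) == IE                          # M A = I - E
assert matmul(A, matmul(M, A)) == A                # hence A M A = A
assert matmul(E, E) == E                           # E is idempotent
assert matmul(A, E) == [[0.0, 0.0], [0.0, 0.0]]    # R(E) is contained in N(A)
print("pseudo-inverse relations verified")
```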
Henceforth, let M(x_0) be the pseudo-inverse of λ_0 I − T'(x_0) given by the lemma. We have

  [λ_0 I − T'(x_0)] M(x_0) = I on R_1(x_0),
  M(x_0)[λ_0 I − T'(x_0)] = I − E.

We extend the pseudo-inverse M(x_0) to the entire space X by writing M̃(x_0) = M(x_0) E_1, where E_1 projects on R_1(x_0). Then

  [λ_0 I − T'(x_0)] M̃(x_0) = E_1,   M̃(x_0)[λ_0 I − T'(x_0)] = I − E.    (3.1)
With the aid of the extended pseudo-inverse, let us study the following equation to be solved for h:

  h = M̃(x_0) F_δ(h) + u,   u ∈ N_1(x_0),    (3.2)

where as before (see eq. (2.4))

  F_δ(h) = −δ x_0 − δ h + R_1(x_0, h).    (3.3)

If h ∈ X satisfies eq. (2.3) for given x_0, λ_0, δ, then F_δ(h) ∈ R_1(x_0). Using eq. (3.1) we see that u = h − M̃(x_0)F_δ(h) ∈ N_1(x_0), so that the same h satisfies eq. (3.2) with this u. Therefore we are motivated to prove an existence theorem for eq. (3.2):

Theorem 3.2: There exist positive constants c, d, e such that for |δ| < c and ‖u‖ < e, u ∈ N_1(x_0), eq. (3.2) has a solution h(δ, u) unique in the ball ‖h‖ ≤ d. The solution is continuous in δ.
Proof: We study the mapping h* = M̃(x_0)F_δ(h) + u of X into itself, u ∈ N_1(x_0). We have

  ‖h*‖ ≤ ‖M̃‖ { |δ| ‖x_0‖ + |δ| ‖h‖ + (‖R_1(x_0,h)‖/‖h‖) ‖h‖ } + ‖u‖,

according to our definition (3.3) of F_δ(h). First we can take d_1 so small that

  ‖R_1(x_0,h)‖/‖h‖ < 1/(3‖M̃‖)  for ‖h‖ < d_1.

With d_1 thus fixed, we can find δ_1 such that

  |δ| ‖x_0‖ + |δ| ‖h‖ ≤ |δ| (‖x_0‖ + d_1) ≤ d_1/(3‖M̃‖)  for |δ| < δ_1.

Next we require ‖u‖ ≤ d_1/3; then ‖h*‖ = ‖M̃ F_δ(h) + u‖ ≤ d_1 if |δ| < δ_1. Thus for |δ| < δ_1 and ‖u‖ ≤ d_1/3, the map carries the ball ‖h‖ ≤ d_1 into itself.

In view of Lemma 2.1 we can find d_2, δ_2 small enough in order to have

  ‖M̃‖ A(δ, h_1, h_2) < 1/2  for |δ| < δ_2, ‖h_1‖ ≤ d_2, ‖h_2‖ ≤ d_2.
n > 1, although attempts have been and are being made to study these cases [refs. 3, 12 and 48]. For the present, we here confine the discourse to the case n = 1.

If we assume the null spaces N_1(x_0) and N_1*(x_0) to be one dimensional, and to be spanned by the elements u_1 and u_1* respectively, i.e., T'(x_0)u_1 = λ_0 u_1, T'(x_0)* u_1* = λ_0 u_1*, then eq. (3.8) simplifies to the following single equation in the scalar unknown ξ_1 (note: u_1 ∈ X, u_1* ∈ X*):

  −δ u_1* x_0 − δ ξ_1 u_1*[u_1 + d^2 T(x_0; u_1, M̃x_0)] + (ξ_1^2/2) u_1* d^2 T(x_0; u_1, u_1)
  + (ξ_1^3/6) u_1* d^3 T(x_0; u_1, u_1, u_1) + Σ_{i+j≥1} ξ_1^i δ^j u_1* ω_ij(δ, ξ_1) = 0,    (4.1)

where the ω_ij(δ, ξ_1) are remainder terms tending to θ as δ, ξ_1 → 0, and ‖u_1‖ = ‖u_1*‖ = 1. For convenience we write eq. (4.1) as follows:
  δ[a_1 + ψ_1(δ, ξ_1)] + δ ξ_1[a_2 + ψ_2(δ, ξ_1)] + ξ_1^2[a_3 + ψ_3(δ, ξ_1)] + ξ_1^3[a_4 + ψ_4(δ, ξ_1)] = 0,    (4.2)

where

  a_1 = −u_1* x_0,
  a_2 = −u_1*[u_1 + d^2 T(x_0; u_1, M̃x_0)],
  a_3 = (1/2) u_1* d^2 T(x_0; u_1, u_1),
  a_4 = (1/6) u_1* d^3 T(x_0; u_1, u_1, u_1),

and where the ψ_i(δ, ξ_1) are combinations of the remainders ω_ij, with ψ_i(δ, ξ_1) → 0 as δ, ξ_1 → 0, i = 1, 2, 3, 4.

Eq. (4.2) is of the form

  Σ_{i=1}^m δ^{α_i} ξ_1^{β_i} [a_i + ψ_i(δ, ξ_1)] = 0    (4.3)

with m = 4; α_1 = 1, β_1 = 0; α_2 = 1, β_2 = 1; α_3 = 0, β_3 = 2; α_4 = 0, β_4 = 3. Equations such as eq. (4.3) were treated by the method of the Newton Polygon by J. Dieudonné [ref. 8] and R. G. Bartle [ref. 1, p. 376]. In each of these studies it was necessary to assume that, among those terms in eq. (4.3) with a_i ≠ 0, min_{1≤i≤m} α_i = min_{1≤i≤m} β_i = 0. Thus with the exponents listed for eq. (4.2) we should want a_1 ≠ 0, and a_3 ≠ 0 or a_4 ≠ 0, in order to apply the Newton Polygon method as developed by these two authors.
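The Newton Polygon for such an equation can be computed mechanically. The sketch below (ours; `lower_hull` is a generic convex-hull helper, not anything from the text) takes the exponent pairs (α_i, β_i) of eq. (4.2) and extracts the slopes of the descending sides of the lower boundary:

```python
def lower_hull(points):
    # lower boundary of the convex hull (Andrew's monotone chain)
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1)*(p[1] - y1) - (y2 - y1)*(p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

exponents = [(1, 0), (1, 1), (0, 2), (0, 3)]   # (alpha_i, beta_i) for eq. (4.2)

hull = lower_hull(exponents)
slopes = [(y2 - y1)/(x2 - x1)
          for (x1, y1), (x2, y2) in zip(hull, hull[1:]) if x2 != x1]
descending = [s for s in slopes if s < 0]
assert descending == [-2.0]    # a single side of slope -2, i.e. xi_1 ~ eta * delta^(1/2)
print("descending Newton Polygon sides have slopes:", descending)
```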
In eq. (2.1)
we t a k e f - 0 so that we consider now the eigenfunctions of the nonlinear
-
operator T(x).
81
-
In other words, from now on we interest ourselves in the
elgenvalue problem ~x = T(x), where T(e) = 8t the null element.
There
exists the trivial solution x = 8, and we have the problem of determining those real values of k such that nontrivial solutions exist.
The
pair (8,k o) is a solution pair of eq. (2.1); if it happens to be an exceptional point as defined in connection with Theorem 2.4, then we have the problem of bifurcation at the origin. It is convenient at this time also to assume that the nonlinear operator T(x) is odd: H-~:
T(-x) = - T(x).
Thus
T(x) is an odd, thrice differentiable operator.
With odd operators we have the following result:

Theorem 4.1: Let the nonlinear operator T(x) satisfy H-2. Then T'(x) is an even operator: T'(−x) = T'(x), and T''(x) is an odd operator: T''(−x) = −T''(x).

Proof: By a known result [ref. 15, p. 185] the weak derivative exists, and we have

  T'(−x)h = lim_{t→0} (1/t)[T(−x + th) − T(−x)] = −lim_{t→0} (1/t)[T(x − th) − T(x)] = T'(x)h.

This shows the evenness of T'(x). Again,

  T''(−x)h_1 h_2 = d^2 T(−x; h_1, h_2) = lim_{t→0} (1/t)[T'(−x + t h_1) − T'(−x)]h_2
  = lim_{t→0} (1/t)[T'(x − t h_1) − T'(x)]h_2 = −T''(x)h_1 h_2,

or T''(−x) = −T''(x), which shows the oddness of T''(x). Here, of course, x, h, h_1, h_2 ∈ X. This ends the proof.

Just as the oddness of T(x) implies that T(θ) = θ, so does the oddness of T''(x) imply that T''(θ)h_1 h_2 = d^2 T(θ; h_1, h_2) = θ, h_1, h_2 ∈ X. By Theorem 4.1, eq. (4.1) appears as follows:
  −δ ξ_1 u_1* u_1 + (ξ_1^3/6) u_1* d^3 T(θ; u_1, u_1, u_1) + Σ_{i+j≥1} ξ_1^i δ^j u_1* ω_ij(δ, ξ_1) = 0.

With x_0 = θ, we also discern that R_3(θ, h) = o(‖h‖^3); this, and other terms in the expansion of F_δ(V_δ(u)) which vanish, imply that ω_10 = ω_30 = ω_02 = θ. Hence the bifurcation equation in the case x_0 = θ can be written as follows:

  δ ξ_1[a_2 + ψ_2(δ, ξ_1)] + ξ_1^3[a_4 + ψ_4(δ, ξ_1)] = 0.    (4.4)

The coefficients a_2, a_4 and the functions ψ_2(δ, ξ_1), ψ_4(δ, ξ_1) are as defined in connection with eq. (4.2). It is seen that ξ_1 can be cancelled in eq. (4.4).
At this point we explain the manner in which J. Dieudonné treated equations such as (4.2), (4.3) and (4.4). Let f(δ, ξ_1) be the left-hand side of either eqs. (4.2), (4.3) or (4.4). If the function φ(δ) solves the equation f(δ, ξ_1) = 0, i.e., f(δ, φ(δ)) ≡ 0, in the neighborhood of (0,0) (note f(0,0) = 0), then φ(δ) ~ t δ^μ, where −1/μ is the slope of one of the sides of the Newton Polygon (the polygon of the exponent points (α_i, β_i), δ exponents vs. ξ_1 exponents; see FIG. 4.1), and t is a real root of the equation Σ_k a_k t^{β_k} = 0. Here k runs over the indices of the points on the side of the polygon of which −1/μ is the slope, [ref. 8, p. 90]. Conversely, to a given side and slope of the Newton Polygon, there may correspond a solution φ(δ) ~ t δ^μ of f(δ, ξ_1) = 0 in the small.
For simplicity let us take the case where there is only one side, of slope −1/μ, of the Newton Polygon. Put ξ_1 = η δ^μ in f(δ, ξ_1) = 0. After division by δ^{α_1 + μ β_1}, we have (cf. eq. (4.3)):

  Σ_{i=1}^m η^{β_i} δ^{α_i + μ β_i − α_1 − μ β_1} [a_i + ψ_i(δ, η δ^μ)] = 0.

Noting now that α_k + μ β_k − α_1 − μ β_1 = 0 for all points on the single side of the Newton Polygon, we have

  Σ_k η^{β_k}[a_k + ψ_k(δ, η δ^μ)] + Σ'_i δ^{γ_i} η^{β_i}[a_i + ψ_i(δ, η δ^μ)] = 0,   γ_i > 0,    (4.5)

where the first sum is over the points on the side and the second sum is over all the remaining points. Now let t_0 be a real root of the equation

  Σ_k a_k t^{β_k} = 0,

of multiplicity q. Then eq. (4.5) can be written as follows:

  (η − t_0)^q = δ^k F(δ, η),   k > 0,

where F(δ, η) is continuous and tends to b ≠ 0 as (δ, η) → (0, t_0). If the derivatives ∂ψ_i/∂ξ_1 exist and are continuous near (0,0), then ∂F/∂η exists and is continuous near (0, t_0). If q is even and b > 0, we may write

  η − t_0 = ± δ^{k/q} [F(δ, η)]^{1/q}.

Either branch may be solved for η in terms of δ by using the ordinary Implicit Function Theorem [ref. 11, p. 138], since the Jacobian is nonvanishing for small δ. If b < 0 there is no real solution. On the other hand, if the multiplicity q of t_0 is odd, we may write

  η − t_0 = δ^{k/q} [F(δ, η)]^{1/q}.
This one real branch can then be uniquely solved for any real b, again using the Implicit Function Theorem.

We now use the method of Dieudonné to prove the following result:

Theorem 4.2: Under H-1, H-2 and the supposition that (θ, λ_0) is an exceptional point of the nonlinear operator T(x) (i.e., λ_0 I − T'(θ) has no bounded inverse, or λ_0 ∈ Pσ(T'(θ))), there exist two nontrivial solution branches x^±(λ) of the equation T(x) = λx consisting of eigenfunctions of T(x), which bifurcate from the trivial solution x = θ at the bifurcation point λ = λ_0. The two branches differ only in sign. If a_2 a_4 < 0, the two branches exist only for λ > λ_0 and bifurcation is said to be to the right; if a_2 a_4 > 0, the two branches exist only for λ < λ_0 and the bifurcation is said to be to the left. These branches exist at least in a small neighborhood of λ_0, and ‖x^±(λ)‖ → 0 as λ → λ_0.
Proof: We start with eq. (4.4). Clearly ξ_1 = 0 is a solution of eq. (4.4) for δ > 0 or δ < 0. Thus u = ξ_1 u_1 = θ. Insertion of u = θ and ±δ ≠ 0 in eq. (3.2) leads to the trivial solution.

Next, if we suppose ξ_1 ≠ 0, it may be cancelled in eq. (4.4). There remains an equation in δ and ξ_1^2 which possesses a Newton Polygon with one side, of slope −2. Assuming at first that δ > 0, we put ξ_1 = η δ^{1/2}. After canceling δ, we get

  [a_2 + ψ_2(δ, η δ^{1/2})] + η^2 [a_4 + ψ_4(δ, η δ^{1/2})] = 0.    (4.6)

[Figure: the Newton Polygon for eq. (4.4).]
Solution of the leading part, a_2 + a_4 η^2 = 0, leads to

  η_{1,2} = ±√(−a_2/a_4).

This represents two real solutions of unit multiplicity if and only if a_2 a_4 < 0.

Since the left side of eq. (4.6) is differentiable with respect to η, we can solve eq. (4.6) near η = η_1 and near η = η_2 uniquely for η as a function of δ > 0, employing the Implicit Function Theorem for real functions [ref. 11, p. 138], in a sufficiently small neighborhood. We get two real functions η^±(δ) for small δ > 0, one tending to η_1 as δ → 0, the other to η_2. Through the relation ξ_1 = η δ^{1/2} there result two real curves ξ_1^±(δ) which, when substituted as ξ_1, δ pairs in eq. (3.2) with u = ξ_1 u_1, provide two real solutions x^±(λ) of T(x) = λx for λ near λ_0. Clearly, since ξ_1 = η δ^{1/2}, δ > 0, we see that ξ_1 → 0 as δ → 0, and thus ‖x^±(λ)‖ → 0 as λ → λ_0. Moreover, because the use of the Implicit Function Theorem above implies a uniqueness property, and because the Newton Polygon has only one side, there are no other solutions of T(x) = λx such that ‖x‖ → 0 as λ → λ_0, λ > λ_0, for a_2 a_4 < 0. [Figure: the two branches for a_2 a_4 < 0, δ > 0.]
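For the example of Section 1 at λ_0 = a, this theorem predicts a branch of the form ξ_1 ≈ t δ^{1/2}, bifurcating to the right. This is easily checked against the branch equation λA = aA + (3/4)aA^3 (our own numerical sketch; the coefficient t = 2/√(3a) follows from the closed form of φ_1):

```python
import math

a = 1.0                              # primary bifurcation point lam0 = a for phi_1
t = 2/math.sqrt(3*a)                 # predicted coefficient in xi_1 ~ t * delta^(1/2)

def branch_amplitude(delta):
    # positive root of the branch equation (a + delta)A = aA + (3/4) a A^3, by bisection
    g = lambda A: delta*A - 0.75*a*A**3
    lo, hi = 1e-12, 10.0             # g(lo) > 0 and g(hi) < 0 for small delta
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

for delta in (1e-2, 1e-3, 1e-4):
    A = branch_amplitude(delta)
    assert abs(A/math.sqrt(delta) - t) < 1e-6    # square-root growth to the right
print("bifurcation to the right with amplitude ~ %.6f * sqrt(delta)" % t)
```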
λ_1 as its only eigenvalue, corresponding to the eigenfunction sin s. Since λ_1, φ_1 could be any point on the continuous branch, we see that "secondary bifurcation" does not exist in this problem. There is no point of the branch where the considerations of Theorem 3.3 are applicable. This example is illustrative of the situation of Theorem 4.2. A similar problem, illustrative of Theorem 4.3, would be

  λφ(s) = (2/π) ∫_0^π sin s sin t [φ(t) + φ^2(t) + φ^3(t)] dt.
5. The Eigenvalue Problem; Hammerstein Operators; Sublinear and Superlinear Operators; Oscillation Kernels.

In the preceding development we have discussed the extension of branches of eigenelements of quite general nonlinear operators T(x) of a Banach space X into itself. Then we treated bifurcation of branches of eigenelements of these general operators under the assumption that T'(x_0) is compact at a bifurcation point (x_0, λ_0) (condition H-1); this can be accomplished if T(x) itself is assumed to be completely continuous, in which case T'(x) is compact everywhere. There resulted a set of simultaneous bifurcation equations of quite a general character, namely eq. (3.8).

Because of the difficulties in handling the general bifurcation equations, attention was then confined to the case where the null space N_1(x_0) of the operator λ_0 I − T'(x_0) is one dimensional, where (x_0, λ_0) is the bifurcation point. In this case eq. (3.8) becomes just one scalar equation in one scalar unknown. The Newton Polygon method was used to treat this case of "bifurcation at an eigenvalue of T'(x_0) of unit multiplicity." The treatment became very explicit in the case of an odd operator: T(−x) = −T(x), in which case T(θ) = θ, where θ ∈ X is the null element. We handled "bifurcation at the origin" in Theorem 4.2; i.e., we set x_0 = θ.

With odd operators T(x) and bifurcation at the origin x_0 = θ, we have a situation which may roughly be compared with the situation pertaining to compact linear operators on a real Banach space X, and the real eigenvalue problem for such operators. The eigenvalue problem for an odd completely continuous operator T(x) may be described as follows: find those values of the real parameter λ such that the equation
  λx = T(x)    (5.1)
has nontrivial solutions x ∈ X, and then explore the properties of these nontrivial solutions. Eq. (5.1) always has, of course, the trivial solution, by the oddness of T(x). Now if T(x) = Ax, x ∈ X, where A is a completely continuous linear operator, the study of eq. (5.1) leads along familiar lines: the eigenvalues form a discrete set of real numbers; these eigenvalues are of finite algebraic multiplicity, and thus of finite geometric multiplicity (ref. 21, p. 336). Associated with each eigenvalue therefore is a finite dimensional linear space, sometimes called an eigenspace; this space is spanned by the eigenelements associated with the eigenvalue. Since the eigenspace is a linear space, it contains elements of arbitrarily large norm. If we were to make a two-dimensional plot of the norm ‖x‖ of the eigenelement vs. the eigenvalue, we should have an array of vertical lines emanating from the λ axis and extending to infinity. [FIG. 5.1.] For linear operators, such a portrayal would not seem to have any conceptual advantages. If we think of a linear operator, however, as a type of nonlinear operator, we can regard these vertical lines as representing branches of eigenelements and the eigenvalues as being bifurcation points at the origin. If
the eigenvalues are of unit multiplicity, this description is an apt one. Indeed, there are nonlinear operators with linear eigenspaces bifurcating from the trivial solution x = θ, as exemplified by the first example at the end of Section 4.

In general, with nonlinear odd operators, the eigenelements x(λ) are not such trivial functions of the eigenvalues λ, and the branches of eigenelements which bifurcate from an eigenvalue λ^(0) of the linearized operator T'(θ) are nonlinear manifolds. On a norm vs. λ plot we do not in general have vertical straight lines. [FIG. 5.2.] The questions asked, though, are the same, i.e., those asked in connection with eq. (5.1).

The problem therefore in dealing with eq. (5.1), for an odd completely continuous operator T(x), is to take the information developed in Theorem 4.2 about the "primary bifurcation points" (i.e., x_0 = θ in that theorem), and about the corresponding solution branches in a small neighborhood of x = θ, and then to extend these branches into the large. In other words, it usually is not enough to know the behavior of the branches of eigenelements merely in a small neighborhood of the origin in the real Banach space X. We want to study eigenelements with large norms also.
When it comes to this question of extending branches of eigensolutions from the small to the large, we run out of general theory. Branches can be extended by stepwise application of the process of Theorem 2.3 provided there is a pair (x, λ) on a branch which is recognizable as an "ordinary point," i.e., where λI − T'(x) has a bounded inverse. The considerations of Theorem 2.4 are applicable, however, which means we cannot get past the "exceptional points" which may occur, i.e., pairs (x, λ) where λI − T'(x) has no inverse. In order to investigate this latter problem we must now considerably restrict the class of operators T(x). Namely, we consider the operators of Hammerstein (ref. 14, p. 46).

Generally, a Hammerstein operator consists of an operator of Nemytskii [ref. 14, p. 20], namely fx = f(s, x(s)), defined on some function space, premultiplied by a linear operator K; i.e., we let T(x) = Kfx.

In the sequel, Hammerstein operators will play the leading role. We admit at this point that we are not partial to Hammerstein operators on account of any great applicability in physical problems, though there are a few such applications, of course. One might mention the rotating chain problem [ref. 38] and the rotating rod problem [refs. 2, 20]. Rather, we like Hammerstein operators because they are amenable to the study of branches of eigenfunctions in the large. Assumptions can be made about the linear operator K which make the study presently possible. An extension of these results to other classes of nonlinear operators is to be desired, but seems difficult now.

In the present study, we let the Banach space X be the space C(0,1) of real continuous functions x(s) defined on the interval 0 ≤ s ≤ 1, with the sup norm.
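A discrete analog of T(x) = Kfx makes the construction tangible. In the sketch below (entirely ours; K is a sample matrix and f a sample odd nonlinearity applied entrywise), the derivative formula T'(x)h = K(f_x'(·, x)h) is compared with a central difference quotient of T:

```python
# Discrete Hammerstein operator T(x) = K f(x) on R^2, with sample f(s,x) = x + x^3.
K = [[2.0, 1.0], [1.0, 2.0]]                 # symmetric sample "kernel" matrix

def f(u):  return u + u**3                   # Nemytskii part, applied entrywise
def fp(u): return 1 + 3*u**2                 # f_x'

def T(x):
    return [sum(K[i][j]*f(x[j]) for j in range(2)) for i in range(2)]

def Tprime(x, h):
    # T'(x)h = K (f_x'(x) · h): discrete analog of the kernel formula
    return [sum(K[i][j]*fp(x[j])*h[j] for j in range(2)) for i in range(2)]

x, h, eps = [0.3, -0.5], [1.0, 2.0], 1e-6
num = [(Tp - Tm)/(2*eps) for Tp, Tm in
       zip(T([x[j] + eps*h[j] for j in range(2)]),
           T([x[j] - eps*h[j] for j in range(2)]))]
exact = Tprime(x, h)
assert all(abs(n - e) < 1e-6 for n, e in zip(num, exact))
print("derivative formula matches difference quotient:", [round(v, 6) for v in exact])
```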
The Nemytskii operator f is defined by a function f(s, x) which is continuous in the strip 0 ≤ s ≤ 1, −∞ < x < +∞, uniformly in x with respect to s. The linear operator K is generated by a bounded continuous kernel K(s,t), and thus is compact on C(0,1). Thus we let

  T(x) = Kfx = ∫_0^1 K(s,t) f(t, x(t)) dt.    (5.2)

The discrete Hammerstein operator is defined on a finite dimensional vector space by letting K be a square matrix and f a nonlinear vector function. It is interesting for examples, but is not fundamentally different from the operator of eq. (5.2).

We further assume that f(s,x) is differentiable with respect to x, uniformly in s, up to the third order, and we define the following Fréchet derivatives:

  T'(x)h = Kf_x'h = ∫_0^1 K(s,t) f_x'(t, x(t)) h(t) dt,
  T''(x)h_1 h_2 = Kf_x''h_1 h_2 = ∫_0^1 K(s,t) f_x''(t, x(t)) h_1(t) h_2(t) dt,
  T'''(x)h_1 h_2 h_3 = Kf_x'''h_1 h_2 h_3 = ∫_0^1 K(s,t) f_x'''(t, x(t)) h_1(t) h_2(t) h_3(t) dt,

which are defined everywhere on C(0,1) since C(0,1) is an algebra. All of the theory developed in Sections 2-4 holds for these operators. It is very useful in the study of branches of eigenfunctions in the large to distinguish two types of nonlinearity. We assume we are speaking of odd operators: T(−x) = −T(x), which for Hammerstein operators means
T(-x) = -T(x), which for H~mmerstein operators means
- 48
that f(s,-x) = - f(s,x).
-
Thus f(s,0) =- 0.
f '(s,x) > O, 0 m s ~ I, - ~ < x < X
+ ~.
We further assume that
Then for Hammersteln operators
we readily distinguish the following pure categories of nonlinearity: (i)
f
Sublinearity: Xfx" (s,x) < 0 O~
s~
I~
- ~ < x < + m
from which it follows that
f/
fx"(s,x) < O;
sublinear FIG. 5 - ~ -
(2)
Superlinearity:
(s,x) > o f
O~
sm
i, - ~
0 that a I = a 3 = O, a 2 < 0 and a 4
0.
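The two categories can be tested pointwise for concrete nonlinearities (our own examples: f(x) = arctan x, which is odd with f_x' > 0, and the cubic f(x) = x + x^3 from the example of Section 1):

```python
def fpp_atan(x):  return -2*x/(1 + x**2)**2     # f'' for f(x) = arctan(x)
def fpp_cubic(x): return 6*x                    # f'' for f(x) = x + x^3

xs = [x/10 for x in range(-50, 51) if x != 0]
assert all(x*fpp_atan(x) < 0 for x in xs)       # x f_x'' < 0 everywhere: sublinear
assert all(x*fpp_cubic(x) > 0 for x in xs)      # x f_x'' > 0 everywhere: superlinear
print("arctan x is sublinear; x + x^3 is superlinear")
```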
With reference to eq. (4.4) and Theorem 4.2, there are small real solutions for λ < λ_0, none if λ > λ_0. Thus bifurcation is to the left at λ = λ_0. Again, if λ_0 < 0, we have a_2 a_4 < 0 and bifurcation is to the right. [FIG.: the sublinear case, ‖x‖ vs. λ.]

In the superlinear case, if λ_0 > 0, we have a_1 = a_3 = 0, a_2 < 0 and a_4 > 0, so that a_2 a_4 < 0; there are small real solutions for λ > λ_0, none for λ < λ_0. Hence bifurcation is to the right at λ = λ_0. If λ_0 < 0, we have a_2 a_4 > 0, so that bifurcation is to the left. [FIG.: the superlinear case, ‖x‖ vs. λ.]
superllnearityhave an analog
with abstract operators.
Indeed let X = H, a real Hilbert space, and let
T(x) be an odd operator:
T(-x) = -T(x), with T(e) = e.
Further we sup-
pose that T(x) is completely continuous and of variational type [ref. 14, p. 300]; in this case T'(x) is compact and symmetric for given x 6 H. Suppose moreover that T~(x) is positive definite for given x E H.
-52
-
Such an operator T(x) is said to be sublinear if (dT'(x,x)h,h) < 0 for all h, x E H.
In other words dT'(x;x) = ~'(x)x is, for all x E H, a
negative definite linear transformation of H into itself.
Similarly T(x)
is said to be superlinear if (dT'(x,x)h,h) > O for all h, x E X. With a > 0 any number, and x E H, we have by definition of the Fr~chet differential [ref. 15, P. 183] and the fact that ~' (e)x = e,
dT' ( a x ; x ) ; ~ ( ~ x ) x
-- ~ '
( ~ x ) x -T" ( e ) x
(5 .~) = dT~(e;ax)x + R(e,ax)x,
where R ( e , a x ) = o (a!lxll)for all
h , x E H, i t
S i n c e in t h e s u b l i n e a r c a s e ( d T ' ( a x ~ x ) h , h ) < 0
can be seen from eq.
( 5 . 4 ) +.~hat f o r a s m a l l enough,
(dTn(e;ax)xh, h) = (d~'(e;ax, x)h,h) < O; this implies however that (d~'(e;x,x)h,h) < 0 for all h, x E H.
Similarly, for the su~erlinear
case (d~'(e;x,x)h,h) > 0 for all h, x q H. Then for x ° = 8, we have the following coefficients in the bifurcation equation, eq. (4.2), for the sublinear case: aI
0, % = - (ul,uI)---I, a 3 i (Ul,d3T(e ;Ul,Ul,Ul))
Here of course, (kol-T' (e))u I = e. and we have bifurcation to the left.
=0
= ~ (ul, d2T ' (e;Ul,Ul)U i) < O.
Thus in the sublinear case, a2a 4 > 0 On the other hand, if it were the
superlinear case, we should have a₂a₄ < 0 and bifurcation to the right (see Theorem 4.2).

Perhaps the chief reason for our selection of Hammerstein operators as an object of study is the fact that this type of concrete nonlinear operator possesses a separated kernel K(s,t) about which we can make further assumptions. Specifically, from an investigative standpoint, it is useful to assume that K(s,t) is an oscillation kernel [ref. 9, p. 236].
Definition: An n×n matrix A = (a_ik) is a completely non-negative matrix (or respectively completely positive matrix) if all its minors of any order are non-negative (or respectively positive).

Definition: An n×n matrix A = (a_ik) is an oscillation matrix if it is a completely non-negative matrix, and there exists a positive integer κ such that A^κ is a completely positive matrix.

Definition: A continuous kernel K(s,t), 0 ≤ s, t ≤ 1, is an oscillation kernel if for any set of n points x₁, x₂, ..., x_n, where 0 ≤ x_i ≤ 1, one of which is interior, the matrix (K(x_i,x_k)) is an oscillation matrix, n = 1, 2, 3, ....

With K(s,t) a symmetric oscillation kernel, we have the following properties for eigenvalues and eigenfunctions of the equation

φ(s) = k ∫_0^1 K(s,t) φ(t) dσ(t),   (5.5)

where σ(t) is a non-diminishing function with at least one point of growth in the open interval 0 < t < 1 [ref. 9, p. 262]:

(a) There is an infinite set of eigenvalues if σ(t) has an infinite number of growth points.

(b) All the eigenvalues are positive and simple: 0 < ... < k_n < k_{n−1} < ... < k₀.

(c) The eigenfunction φ₀(s) corresponding to k₀ has no zeros on the open interval 0 < s < 1.

(d) For each j = 1, 2, ..., the eigenfunction φ_j(s) corresponding to k_j has exactly j nodes (odd order zeros) in the interval 0 < s < 1, and no other zeros.

(e) Any nontrivial combination Σ_{j=k}^m c_j φ_j(s) has at least k and at most m zeros on 0 < s < 1; if the number of zeros is equal to m, these zeros are nodes.

(f) The nodes of the functions φ_j(s) and φ_{j+1}(s) alternate, j = 1, 2, ....

Our interest in the oscillation kernel in dealing with Hammerstein operators stems from the fact that with f_x'(s,x) > 0, 0 ≤ s ≤ 1, −∞ < x < +∞, as we have supposed, the Fréchet derivative

Kf_x'h = ∫_0^1 K(s,t) f_x'(t,x(t)) h(t) dt,   (5.6)

with K(s,t) an oscillation kernel, is a case of a linear operator such as that appearing in eq. (5.5), so that the properties (a)-(f) listed above are true for its eigenvalues and eigenfunctions. We wish to stress as very
We wish to stress as very
important for H~mmerstein operators that properties (a)-(f) hold for operator (5.6) whatever the continuous function x(t) used in the definition of the operator, if K(s,t) is an oscillation kernel. Properties (e), (f) are actually in excess of requirement as far as we know, as is also the statement in property (b) about the positivity of the eigenvalues. With K(s,t) an oscillation kernel, every eigenvalue ~p(o) p -- 0, i, 2, ... of the Frgchet derivative Kfoeh =
~0 1
K(s, t)fxl (t~o)h(t)dt at the
+
- 55 origin is of multiplicity unity, so that Theorem 4.2 or 4.3 is directly applicable to study primary bifurcation from the trivial solution. such eigenvalue ~p(o) is a bifurcation point.
Each
Moreover if Xo(S,k O) is an
exceptional point on a branch of eigensolutions, i .e. the Fr~chet derivaI" 1 tive Kfxolh = JO K(s't)fx'(t'x°(t))h(t)dt has a n eigenvalue AO~ or
Ao I-Kfxo' has no bounded inverse, then we know a priori that k ° is a simple eigenvalue, or the null space ~l(Xo) is one dimensional.
Hence our bifurca-
tion theory with the Newton Polygon method is applicable, in ~erticular eq.
(4.2). Another bene~tin assuming an oscillation kernel is illustrated in the following example for a discretized HAmmerstein operator: ~E~m_~!e:
Consider the discrete superlinear problem:

(a 0; 0 b)(u + u³; v + v³) = k (u; v),  a > b,   (5.7)

for which we have the following linearization:

(a 0; 0 b)((1 + 3u²)h₁; (1 + 3v²)h₂) = μ (h₁; h₂).   (5.8)

At the origin, u = v = 0, and we have primary bifurcation points a > b. A continuous branch of eigenvectors, namely (±√(k/a − 1); 0), bifurcates to the right at k = a from the trivial solution (0; 0), while another branch, (0; ±√(k/b − 1)), bifurcates to the right at k = b. Of interest is the behavior of the eigenvalues of the linearized problem, eq. (5.8), as the branches evolve.

FIG. 5.6 (the two branches, ||x|| against k).

Taking the second branch and letting u = 0 and v = ±√(k/b − 1) in eq. (5.8), we see that the linearization has two eigenvalues, μ₁ = a and μ₂ = 3k − 2b. The parameter k increases as the second branch evolves, however, and whereas initially we have μ₂ < μ₁, for k > (a + 2b)/3 we have μ₂ > μ₁. Moreover a situation is attained where k = μ₁ = a. At this point on the second branch, eq. (5.8) has k itself as an eigenvalue, and a secondary bifurcation can take place.

In this example, the kernel or matrix (a 0; 0 b) is not an oscillation matrix; two of its minors vanish. With an oscillation matrix, say one of the form (a ε; ε b) with 0 < ε.
H-4a: Sublinearity; i.e. f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_xx''(s,x) < 0, 0 ≤ s ≤ 1, −∞ < x < +∞.

H-4b: Superlinearity; i.e. f_x'(s,x) > 0, 0 ≤ s ≤ 1, and x f_xx''(s,x) > 0, 0 ≤ s ≤ 1, −∞ < x < +∞.

In the sublinear case (H-4a) the branching is to the left; i.e., there exist two small solutions if k < μ_p^(0), but none if k > μ_p^(0). In the superlinear case (H-4b) the branching is to the right; i.e., there exist two small solutions if k > μ_p^(0), but none if k < μ_p^(0). The two solutions of small norm which bifurcate at k = μ_p^(0), p = 0, 1, 2, ..., differ only in sign. We denote the two solutions bifurcating at k = μ_p^(0) by x_p^±(s,k), and note that lim_{k→μ_p^(0)} x_p^± = 0 in the norm of C(0,1). This is readily seen in
inspecting the proof of Theorem 4.2.

The following result on the zeros of x_p^±(s,k) will be useful:

Theorem 6.1: x_p^±(s,k), where defined for k ≠ μ_p^(0), has exactly p nodes and no other zeros on 0 < s < 1, p = 0, 1, 2, ....

Proof: Consider the problem

μ u(s) = ∫_0^1 K(s,t) [f(t,x_p^±(t,k)) / x_p^±(t,k)] u(t) dt,   (6.3)

which has the eigenvalue sequence {μ_n} and eigenfunction sequence {u_n(s)}, where, as indicated for oscillation kernels, u_p(s) has exactly p nodes on 0 < s < 1. To convert eq. (6.3) to a problem with a symmetric kernel with the same eigenvalues, we put v(s) = √(f(s,x_p^±(s,k))/x_p^±(s,k)) u(s), whence

μ v(s) = ∫_0^1 √(f(s,x_p^±(s,k))/x_p^±(s,k)) K(s,t) √(f(t,x_p^±(t,k))/x_p^±(t,k)) v(t) dt.   (6.4)

(We note that f(s,x)/x > 0.) By H-3, as k → μ_p^(0), the symmetric kernel tends uniformly to the symmetric kernel √(f_x'(s,0)) K(s,t) √(f_x'(t,0)). Therefore by a known result [ref. 7, p. 151], the eigenvalue μ_p of eq. (6.3) tends to μ_p^(0), p = 0, 1, 2, ..., and the normalized eigenfunction v_p(s) of eq. (6.4) tends uniformly to w_p(s), where w_p(s) is the p'th normalized eigenfunction of the problem

μ w(s) = ∫_0^1 √(f_x'(s,0)) K(s,t) √(f_x'(t,0)) w(t) dt,

which is associated with eigenvalue μ_p^(0). Equivalently we may write

μ_p^(0) [w_p(s)/√(f_x'(s,0))] = ∫_0^1 K(s,t) f_x'(t,0) [w_p(t)/√(f_x'(t,0))] dt,  p = 0, 1, 2, ....

But obviously we then have w_p(s)/√(f_x'(s,0)) = h_p^(0)(s), where h_p^(0)(s) is the p'th eigenfunction of eq. (6.1) with y(s) = 0. This is because the kernel K(s,t) f_x'(t,0) has eigenvalues of unit multiplicity only.

We happen to know a solution pair (u(s),μ) for eq. (6.3), however, namely u(s) = x_p^±(s,k)/||x_p^±(s,k)||, μ = k. We readily see this by inspection of eq. (6.2). Indeed k is one of the eigenvalues {μ_n}, and x_p^±/||x_p^±|| is among the normalized eigenfunctions {u_n} of eq. (6.3). As k → μ_p^(0), however, only one of the eigenvalues of eq. (6.3) tends to μ_p^(0), and this must be k itself. Hence k = μ_p. The corresponding eigenfunction u_p(s) is then a member of the one-dimensional eigenspace spanned by x_p^±(s,k)/||x_p^±(s,k)||. Since u_p(s) has p nodes and no other zeros on 0 < s < 1, the same is true for x_p^±(s,k). This concludes the proof.
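The oscillation-kernel facts used in this proof can be illustrated numerically. The kernel min(s,t)(1 − max(s,t)) (the Green's function of −u'' with zero boundary values) is a classical symmetric oscillation kernel; with an arbitrary continuous x(t) and the illustrative choice f(s,x) = x + x³ (an assumption for the sketch, not from the text), the eigenvalues of K f_x' come out positive and simple, and the p'th eigenfunction shows exactly p sign changes:

```python
import numpy as np

n = 400
t = np.linspace(0.0, 1.0, n + 2)[1:-1]          # interior grid points
dt = 1.0 / (n + 1)
K = np.minimum.outer(t, t) * (1.0 - np.maximum.outer(t, t))

x = np.sin(2 * np.pi * t)                        # an arbitrary continuous x(t)
weight = 1.0 + 3.0 * x**2                        # f_x'(t, x(t)) for f = x + x^3

# symmetrization as in the proof: sqrt(w) K sqrt(w) has the same spectrum as K*w
sw = np.sqrt(weight * dt)
S = sw[:, None] * K * sw[None, :]
vals, vecs = np.linalg.eigh(S)
vals, vecs = vals[::-1], vecs[:, ::-1]           # order eigenvalues decreasingly

nodes = [int(np.sum(vecs[:-1, p] * vecs[1:, p] < 0)) for p in range(4)]
print(vals[:4], nodes)                           # positive, simple; p nodes each
```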
We prove the following result for the sublinear case. The superlinear case is shown in the same way.

Theorem 6.2: Suppose hypothesis H-4a holds, i.e., we have the sublinear case. Let (x_p*, k*) be a solution pair for eq. (6.2) on the p'th branch x_p^±(s,k). Then μ_p* < k*, where μ_p* is the p'th eigenvalue of eq. (6.1) where we have put y(s) = x_p*.

Proof: Let K be the positive definite operator on L₂(0,1) generated by K(s,t). (K(s,t) is symmetric, and has positive eigenvalues since it is an oscillation kernel.) There exists a unique positive definite square root H, where K = H·H. Let us consider the following eigenvalue problem for a symmetric operator:

k ψ = H f_{x_p*}' H ψ,   (6.5)

with eigenvalue parameter k and eigenfunctions ψ to be determined. Eq. (6.5) has the same eigenvalues {μ_n*} as equation (6.1) with y(s) = x_p*. (To elucidate our notation in eq. (6.5), we state that if the operator H has a kernel H(s,t), then

H f_{x_p*}' H ψ = ∫_0^1 H(s,r) f_x'(r, x_p*(r,k*)) ∫_0^1 H(r,t) ψ(t) dt dr;

the operator H (f/x_p*) H below would have a corresponding expression.)

Likewise the problem

ζ z = H (f(x_p*)/x_p*) H z,   (6.6)

with a symmetric operator, has the same eigenvalues {μ_n} as problem (6.3) with x_p^±(s,k) = x_p*; the eigenfunctions of equation (6.6) are used later in the proof, and are denoted by {z_n}. We note now that for all u ∈ L₂(0,1) we have

(H f_x' H u, u) = (f_x' Hu, Hu) = ∫_0^1 [Hu(s)]² f_x'(s, x_p*(s,k*)) ds
≤ 1 + A_p', n = 1, 2, 3, ..., and

sup_{g ∈ E_K} Φ_p(g) = 1 + A_p'.

Since the unit sphere is weakly compact in L₂(0,1), there exists a subsequence {g_{n₁}}, ||g_{n₁}|| = 1, weakly convergent to some g* ∈ L₂(0,1). By passing to the weak limit in the inequality |(g_{n₁},g*)| ≤ ||g_{n₁}||·||g*|| as n₁ → ∞, we see that ||g*|| ≤ 1. Since K is compact, {Kg_{n₁}} converges strongly in L₂(0,1) to an element φ* = Kg*. Because F_p(φ) is a continuous function of φ ∈ C(0,1) in the L₂(0,1) norm by Lemma 6.3, we have F_p(Kg_{n₁}) → A_p' as n₁ → ∞. In the case g* ≠ 0, therefore, the maximum 1 + A_p' of Φ_p(g) on E_K is assumed by Φ_p(g) at g*. If ||g*|| < 1, we should then have Φ_p(g*) = (g*,g*) + F_p(Kg*) = 1 + A_p'. This is a contradiction, since Φ_p(g*) < Φ_p(g*/||g*||), which in turn is because F_p(Kg) is constant on radial lines, and g*/||g*|| ∈ E_K. Hence ||g*|| = 1, and both Φ_p(g) and F_p(Kg) assume their maximum values, 1 + A_p' and A_p' respectively, on E_K at the element g* ∈ E_K. The maximum value of F_p(φ) on S_K is therefore assumed at φ* = Kg* ≠ 0. Also, since K(s,t) is a continuous kernel, we have φ*(s) ∈ C(0,1), so that A_p' = A_p. In the case g* = 0, F_p(Kg*) is not defined, but the limiting value A_p' of Φ_p(g) as g_{n₁} → g* weakly in L₂(0,1) is less than certain discernible values assumed by Φ_p(g) on E_K, which is a contradiction.

The linear operator (6.8), with φ ≡ φ*, has eigenvalues μ_n(K,φ*), n = 0, 1, 2, ..., such that μ_n(K,φ*) < μ_{n−1}(K,φ*) (strict inequality); this is because K(s,t) is an oscillation kernel [ref. 9, pp. 254-273]. Hence A_p < 1, and the lemma is proven.

With these two preliminary results proven, we are now in a position to state our main results having to do with whether or not a given point
x_p^±(s,k*) on the p'th branch is an ordinary point. We put forth a couple of conditions which, together with the statement of Theorem 6.2, will be seen to guarantee that x_p* is an ordinary point. These may be considered as a priori conditions on either the kernel K(s,t) or on the function f(s,x). In the sublinear case, hypothesis H-4a, the condition is that

f(s,x) > A₁p,  0 ≤ s ≤ 1, −∞ < x < ∞,   (6.9a)
while in the superlinear case, hypothesis H-~b the condition is that xf ' (s,x) x 1 "f(S,x) ~p(0) . By the supposition of oddness (H-2)~ the two branches xp~(s,~,) differ only in sign. In order to employ considerations of Theorems 2.3 and 2.4 to extend the p'th branch xp~(s,~) from the small into the large~ we needed ance
assur-
that there existed some ordinary point (x:,k*) on that branch, i.e.
a point such that k*I-T t (xp*) has a bounded inverse.
This assurance is
given by Corollary 6.6 o_~rCorollary 6.8 under the assumption that either condition (6.9) o_~rcondition (6.15) holds~ whether we have sublinearity (H-b~) or superlinearity (H-4b).
Moreover~ we shall see that either of
these corollaries givesassura~ce that~ a priori, all points (xp2k) on the p'th branch Xp&(S,k) are indeed ordinary points.
Of course the
latter can be inferred also from Theorem 2.4 once a single ordinary point is found~ but there is no assurance that the branch cannot terminate at a singular point on the basis of Theorem 2.4. Accordingly we invoke Theorem 2.4 and state that there does exist a branch x~(s~k)s
or a "unique maximal sheet~" of eigenfunctions of problem
81
-
(6.2) emanating from the trivial solution at the primary bifurcation point k = μ_p^(0). The only finite boundary point such a sheet may have is a point (x_p*,k*) such that k*I − Kf_{x_p*}' has no bounded inverse.

Theorem 7.1: The branch x_p^±(s,k) has no finite boundary point apart from x = θ, and may therefore be continued indefinitely.

Proof: If there were such a boundary point (x_p*,k*), then x_p(s,k) → x_p* as k → k* in the C(0,1) norm. By Theorem 6.1, x_p*(s,k*) has exactly p nodes on 0 < s < 1. Accordingly, in view of Theorem 6.2 and either Theorem 6.5 or Theorem 6.7, we have μ_p* < k* < μ_{p−1}* in the sublinear case, and the corresponding reversed inequalities in the superlinear case, so that k* is not an eigenvalue of the linearization and (x_p*,k*) is in fact an ordinary point.
Theorem 7.2: There exists a number k̄_p ≥ 0 such that lim sup_{k→k̄_p, k>k̄_p} ||x_p^±(s,k)|| = ∞.

Proof: Suppose there exists M > 0 such that ||x_p(s,k)|| ≤ M, k ∈ Π_p. Then we show that Π_p is closed relative to (0, μ_p^(0)). Indeed let {k_k}, k_k ∈ Π_p, k = 0, 1, 2, ..., be a convergent sequence. Each function x_p^k = x_p(s,k_k) solves eq. (6.2) with k = k_k. Since K is compact and the functions f(s,x_p^k(s)) are uniformly bounded, there exists a subsequence {k_{k₁}} such that {Kf(x_p^{k₁})} converges in norm. We have that Kf(x_p^{k₁}) = k_{k₁} x_p^{k₁}, however, whence {x_p^{k₁}} converges in norm. (We may consider here that {k_{k₁}} is bounded away from 0; otherwise we should already be done.) Then x̄_p = lim_{k₁→∞} x_p^{k₁} is a solution of eq. (6.2) with k = k̄ = lim_{k₁→∞} k_{k₁}, i.e. k̄ x̄_p = Kf(x̄_p), and by Theorem 6.1 x̄_p has exactly p nodes in 0 < s < 1. Hence k̄ ∈ Π_p, which shows that Π_p is closed. On the other hand Π_p must be open relative to (0, μ_p^(0)) since, given k̄ ∈ Π_p, x_p(s,k̄) exists such that (k̄I − Kf_{x̄_p}')⁻¹ exists as a bounded inverse by Theorem 7.1, and Theorem 2.3 indicates that there is a neighborhood N_{k̄} of k̄ such that x_p(s,k) exists, k ∈ N_{k̄}; i.e. N_{k̄} ⊂ Π_p. A set such as Π_p which is both open and closed relative to (0, μ_p^(0)) must either be empty or be equal to (0, μ_p^(0)). We have seen however that Π_p is not empty, since it contains a left neighborhood of μ_p^(0). Hence Π_p = (0, μ_p^(0)) under the assumption that ||x_p(s,k)|| ≤ M, k ∈ Π_p.

But by Theorem 6.2, we must have μ_p < k for k ∈ Π_p, where μ_p is the p'th eigenvalue of linearized eq. (6.1) with y(s) = x_p(s,k). Now for functions x(s) ∈ C(0,1) with ||x|| ≤ M, we have f_x'(s,x) ≥ f_x'(s,M) by the sublinearity assumption (H-4a). Let μ_p^M be the p'th eigenvalue of the operator Kf_M' = ∫_0^1 K(s,t) f_x'(t,M)(·) dt. Using the symmetrized operators of Theorem 6.2, which are such that Hf_x'H has the same spectrum as Kf_x', we can write

μ_p^M = min_{v₁,...,v_{p−1}} max_{u⊥v₁,...,v_{p−1}} (Hf_M'Hu,u)/(u,u) ≤ max_{u⊥w₁,...,w_{p−1}} (Hf_M'Hu,u)/(u,u) ≤ max_{u⊥w₁,...,w_{p−1}} (Hf_x'Hu,u)/(u,u) = μ_p,

where we have indicated here by w₁,...,w_{p−1} the first p−1 eigenelements of the operator Hf_x'H. Hence for k ∈ Π_p we necessarily have 0 < μ_p^M ≤ μ_p < k under the assumption that ||x_p(s,k)|| ≤ M. This is a contradiction, since we also proved that Π_p = (0, μ_p^(0)), so that k may be taken arbitrarily small. Thus x_p(s,k) cannot remain bounded; there exists a number k̄_p ≥ 0 such that lim sup_{k→k̄_p, k>k̄_p} ||x_p(s,k)|| = ∞.

In the superlinear case (H-4b) the argument is the same except that the set Π_p, where x_p^±(s,k) exists and is continuous, lies to the right of the primary bifurcation point μ_p^(0), and is a subset of the interval (μ_p^(0), +∞). This proves the theorem.

Now
let us consider the linear eigenvalue problem

γ h(s) = ∫_0^1 K(s,t) A(t) h(t) dt   (7.1)

formed with the function A(s) ≥ 0 of Hypothesis H-5 of Section 6. Problem (7.1) possesses the positive sequence of simple eigenvalues {γ_n}. Finally, to prove the following result, we must strengthen H-5:

H-7: |f(s,ξ) − A(s)ξ| ≤ M₁, 0 ≤ s ≤ 1, −∞ < ξ < ∞, where M₁ is a constant.

Theorem 7.3: k̄_p = γ_p, where k̄_p ≥ 0 is the number appearing in the last result; i.e.,

lim sup_{k→γ_p} ||x_p^±(s,k)|| = ∞.
Proof: Let K_A be the operator of eq. (7.1). By Theorem 7.2 there exists a sequence k_k → k̄_p such that lim_{k→∞} ||x_p(s,k_k)|| = ∞, and x_p(s,k_k) satisfies eq. (6.2) with k = k_k. We subtract the element K_A x_p from each side of eq. (6.2) to obtain

(k_k I − K_A) x_p(s,k_k) = ∫_0^1 K(s,t)[f(t,x_p(t,k_k)) − A(t)x_p(t,k_k)] dt,

whence, using the sup norm of C(0,1),

||x_p(s,k_k)|| ≤ ||(k_k I − K_A)⁻¹||·||K||·||f(s,x_p(s,k_k)) − A(s)x_p(s,k_k)||.

Then by H-7, ||x_p(s,k_k)|| ≤ ||(k_k I − K_A)⁻¹||·||K||·M₁. Thus lim ||x_p(s,k_k)|| = ∞ implies that k̄_p ∈ {γ_n}, where {γ_n} are the eigenvalues of eq. (7.1).

Suppose now that k̄_p = γ_m > 0. We compare the functions h_m and x_p(s,k_k), where h_m(s) is the normalized eigenfunction associated with γ_m:

h_m(s) − x_p(s,k_k)/||x_p(s,k_k)||
 = (1/γ_m) ∫_0^1 K(s,t)A(t)h_m(t) dt − (1/(k_k||x_p(s,k_k)||)) ∫_0^1 K(s,t)f(t,x_p(t,k_k)) dt
 = (1/γ_m − 1/k_k) ∫_0^1 K(s,t)A(t)h_m(t) dt + (1/k_k) ∫_0^1 K(s,t)A(t)[h_m(t) − x_p(t,k_k)/||x_p(t,k_k)||] dt + (1/(k_k||x_p(s,k_k)||)) ∫_0^1 K(s,t)[A(t)x_p(t,k_k) − f(t,x_p(t,k_k))] dt.

Now the first term on the right tends to zero as k_k → γ_m, and likewise the last term tends to zero, since ||x_p(t,k_k)|| → ∞ while |A(t)x_p(t,k_k) − f(t,x_p(t,k_k))| ≤ M₁ by H-7.
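Theorems 7.2 and 7.3 have a transparent one-dimensional analogue; the sublinear model k·x = arctan x below is an illustrative stand-in, not from the text. Here f(x) − 0·x is bounded by π/2, so the analogue of the comparison problem (7.1) has γ = 0, and the branch must blow up as k decreases to γ = 0:

```python
import numpy as np
from scipy.optimize import brentq

# 1-D analogue: k*x = arctan(x)  (sublinear, bifurcation point k0 = 1).
# |arctan(x) - 0*x| <= pi/2, so the branch blows up as k -> gamma = 0+.
def x_of_k(k):
    return brentq(lambda x: np.arctan(x) - k * x, 1e-10, 1e12)

ks = [0.5, 0.1, 0.01, 0.001]
xs = [x_of_k(k) for k in ks]
print(xs)               # grows without bound as k -> 0+
print(xs[-1] * ks[-1])  # k*x -> pi/2, the bound on |f(x) - A*x|
```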
8. The Example of Section 1, Reconsidered.

Having developed some methods for treating more general cases, let us now reconsider the example of Section 1, namely eq. (1.1). This equation now appears to be an eigenvalue problem for a superlinear Hammerstein operator of odd type. In fact we find that hypotheses H-1 through H-3, H-4b, and H-5 with A(s) ≡ A are satisfied. The second rank kernel does not impress us as being an oscillation kernel, in that it is possible for it to assume negative values, but in a simple example we can live with whatever deficient properties a specific kernel does have, if it has any, and find out where we are led. Accordingly let us begin by treating eq. (1.1) in a fashion reminiscent of Section 2.
Namely, let h, δ be increments added to φ₀, k₀ respectively, where we assume that (φ₀,k₀) represents a pair which satisfies eq. (1.1). We have then

k₀φ₀ + k₀h + δφ₀ + δh = (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][(φ₀+h)(t) + (φ₀+h)³(t)] dt.   (8.1)

The fact that (φ₀,k₀) solves eq. (1.1) allows some cancellations in eq. (8.1); after rearrangement we get

{k₀I − (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][1 + 3φ₀²(t)](·) dt} h
 = −δφ₀ − δh + (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][3φ₀(t)h²(t) + h³(t)] dt ≡ F_δ(h).   (8.2)

At
the trivial solution φ₀(s) ≡ 0 we have the linearization eq. (1.5) with two eigenvalues a, b, with a > b as assumed, to which are associated the eigenspaces spanned respectively by sin s, sin 2s. We thus have two primary bifurcation points, a, b, where the operator on the left in eq. (8.2) has no inverse. Corresponding to eq. (3.2) we have the following equation to be solved for h at the bifurcation point k = a (with δ = k − a):

h = MF_δ(h) + ξ sin s = M(I−E){−δh + (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t] h³(t) dt} + ξ sin s,

where here E is the orthogonal projection on the null space spanned by sin s, and M is the pseudo-inverse. This gives

h = M{−δ(I−E)h + (2/π) ∫_0^π b sin 2s sin 2t · h³(t) dt} + ξ sin s.   (8.3)

Putting h₀ = ξ sin s in an iteration process, we find that the integral in eq. (8.3) vanishes, so that h₁ = ξ sin s. Likewise every succeeding iterate is equal to ξ sin s, and therefore h = V_δ(ξ sin s) = ξ sin s. Then the bifurcation equation, eq. (3.4), becomes

EF_δ(ξ sin s) = E{−δξ sin s + (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t] ξ³ sin³t dt} = {−δξ + (3/4)aξ³} sin s = 0.   (8.4)

Eq. (8.4) has the trivial solution ξ = 0 and the nontrivial solutions ξ = ±(2/√3)√(k/a − 1), and then the first branch of solutions, obtained by substituting this ξ into eq. (8.3), is h(s) = ±(2/√3)√(k/a − 1) sin s.
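That this branch satisfies the eigenvalue equation exactly can be confirmed by quadrature; the values of a, b, k below are arbitrary illustrative choices with a > b and k > a (a numerical sketch):

```python
import numpy as np

a, b, k = 2.0, 1.2, 3.7                  # illustrative values, k > a > b
t = np.linspace(0.0, np.pi, 4001)
dt = t[1] - t[0]

xi = (2 / np.sqrt(3)) * np.sqrt(k / a - 1)
phi = xi * np.sin(t)                     # the first branch
g = phi + phi**3

c1 = (2 / np.pi) * np.sum(np.sin(t) * g) * dt       # sin s component of T(phi)
c2 = (2 / np.pi) * np.sum(np.sin(2 * t) * g) * dt   # sin 2s component
print(a * c1, k * xi)    # equal: k*phi = T(phi) on the sin s component
print(c2)                # ~0: the sin 2s component vanishes identically
```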
Again, we write eq. (3.2) for the bifurcation point at k = b, with δ = k − b:

h = M(I−E){−δh + (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t] h³(t) dt} + ξ sin 2s,

where E is now the orthogonal projection onto the null space spanned by sin 2s. This gives

h = M{−δ(I−E)h + (2/π) ∫_0^π a sin s sin t · h³(t) dt} + ξ sin 2s.   (8.5)

Starting with the first iterate h₀ = ξ sin 2s and substituting this on the right in eq. (8.5), we again have the integral vanishing, whence h₁ = ξ sin 2s. We can then see that h_n = ξ sin 2s also, so that h = V_δ(ξ sin 2s) = ξ sin 2s. The bifurcation equation is now written

EF_δ(ξ sin 2s) = E{−δξ sin 2s + (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t] ξ³ sin³2t dt} = {−δξ + (3/4)bξ³} sin 2s = 0.   (8.6)

Eq. (8.6) has the trivial solution ξ = 0 and the nontrivial solutions ξ = ±(2/√3)√(k/b − 1). We solve eq. (8.5), with this solution of eq. (8.6), to get h(s) = ±(2/√3)√(k/b − 1) sin 2s for the second branch of eigensolutions.

These branches of eigensolutions are exactly the same as those obtained in Section 1 by more elementary means. Also, since the expansion (3.5) is trivial in this case, the expressions φ₁(s,k) = φ₀ + h = ±(2/√3)√(k/a − 1) sin s and φ₂(s,k) = φ₀ + h = ±(2/√3)√(k/b − 1) sin 2s are valid in the large. There is no need for the process of Theorems 2.3 and 2.4. Of course one could follow the steps. The uniqueness property of Theorem 2.3 would yield no other result.

With this example in Section 1, however, we had secondary bifurcation on the 1st branch if b/a > 1/2. Here, we study this possibility by learning how the eigenvalue μ₂ of the linearization
μ h(s) = (2/π) ∫_0^π [a sin s sin t + b sin 2s sin 2t][1 + 3φ₁²(t,k)] h(t) dt   (8.7)

behaves as the 1st branch φ₁(s,k) evolves. The eigenvalue μ₁ does not bother us, since μ₁ = a + 3(k−a) = k + 2(k−a) > k. Of course this is what Theorem 6.2 would tell us. For μ₂ we have the expression μ₂ = −b + (2b/a)k. Secondary bifurcation of the branch φ₁(s,k) occurs if ever μ₂ = k; this does happen if 2b/a > 1 but cannot happen if 2b/a ≤ 1. Hence we get the same condition as in Section 1. The secondary bifurcation in this example occurs then at k_sb = ab/(2b − a), with the solution φ_sb(s) = ±(2/√3)√(k_sb/a − 1) sin s.
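The two eigenvalues of (8.7) along the first branch, and the crossing μ₂ = k at k_sb, can be reproduced by direct quadrature; the values a = 2, b = 1.2 (for which 2b > a) are illustrative assumptions:

```python
import numpy as np

a, b = 2.0, 1.2                                   # 2b > a: secondary bifurcation
t = np.linspace(0.0, np.pi, 4001)
dt = t[1] - t[0]

def mus(k):
    """Diagonal entries of (8.7) restricted to span{sin s, sin 2s}."""
    phi1_sq = (4 / 3) * (k / a - 1) * np.sin(t)**2     # phi_1^2 on the branch
    w = 1 + 3 * phi1_sq
    m11 = a * (2 / np.pi) * np.sum(np.sin(t)**2 * w) * dt
    m22 = b * (2 / np.pi) * np.sum(np.sin(2 * t)**2 * w) * dt
    off = (2 / np.pi) * np.sum(np.sin(t) * np.sin(2 * t) * w) * dt
    return m11, m22, off                               # off-diagonal vanishes

m11, m22, off = mus(3.0)
print(m11, 3 * 3.0 - 2 * a)       # mu1 = 3k - 2a
print(m22, -b + 2 * b * 3.0 / a)  # mu2 = -b + (2b/a) k
k_sb = a * b / (2 * b - a)
print(mus(k_sb)[1], k_sb)         # mu2 equals k at the secondary bifurcation
```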
There is of course the question of the bifurcation analysis at the secondary bifurcation point (φ_sb,k_sb). In Section 1 it was found, using direct elementary methods, that the two sub-branches or twigs split away from the main branch φ₁(s,k) at this point and evolve to the right. When it comes to repeating this bifurcation analysis by use of the bifurcation equation, eq. (4.2), we find that difficulties arise. When we compute the coefficients using (5.3), we find that a₁ vanishes; the nonvanishing of a₁ is essential in the application of the Newton Polygon method as discussed by J. Dieudonné [ref. 8, p. 4]. In treating bifurcation at the origin as in Section 4, we were able to handle a case where a₁ vanished, since there it was clear that I₁(δ,ξ₁) in eq. (4.2) vanished also. In this example, where we have secondary bifurcation at (φ_sb,k_sb), we have yet to learn how to work out the sub-branches using the bifurcation equation.*

The vanishing of a₁ in eq. (4.2) at a secondary bifurcation point as above is a peculiarity of Hammerstein operators for which K(1−s,1−t) = K(s,t) and f(1−s,x) = f(s,x). The example of Section 1 is of this type. More general examples lead to the nonvanishing of a₁ in eq. (4.2) at a secondary bifurcation point, whence the Newton Polygon method is applicable to eq. (4.2) as it stands. It should be noted, however, that in a case of nonvanishing a₁ in eq. (4.2) one usually has a changing of direction of evolution of the branch of eigensolutions at the secondary bifurcation point, rather than a formation of sub-branches as in the problem of Section 1. Some writers prefer to call such a point a limit point of the branch of eigensolutions, thus preserving the term "secondary bifurcation" for the more intuitive idea of the splitting of a branch. In any case, however, the bifurcation analysis must be used in general.

We can compare and assess the two conditions against secondary bifurcation given in Section 6, namely (6.9b) and (6.15b) respectively. We found in Section 1 that a necessary and sufficient condition against secondary bifurcation in the example was that b/a ≤ 1/2. How do conditions (6.9b) and (6.15b) compare with this?

*See Appendix, however.
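As groundwork for this comparison, one can confirm numerically that for f(x) = x + x³ the ratio x f'(x)/f(x) = (1 + 3x²)/(1 + x²) runs from 1 up to (but never reaches) 3:

```python
import numpy as np

# For f(x) = x + x^3, the quantity x*f'(x)/f(x) equals (1+3x^2)/(1+x^2).
x = np.linspace(-50, 50, 200001)
x = x[np.abs(x) > 1e-9]          # avoid 0/0 at the origin; the limit there is 1
ratio = x * (1 + 3 * x**2) / (x + x**3)
print(ratio.min(), ratio.max())  # infimum 1 (x -> 0), supremum 3 (|x| -> inf)
```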
With respect to condition (6.9b), the quantity on the left, namely x f_x'(s,x)/f(s,x) = (1 + 3x²)/(1 + x²), varies between 1 and 3. For the condition to be satisfied, the eigenvalue ratio μ₂/μ₁ entering on the right must then be no higher than 1/3. Now in the present example, μ₂ can be given a most refined definition. In connection with eq. (8.7) we saw that the two eigenvalues, μ₁(K,1+3φ²) = 3k − 2a and μ₂(K,1+3φ²) = −b + (2b/a)k, evolved as the first branch φ₁(s,k) evolved. We know these expressions for μ₁ and μ₂ only because we know a priori, independently of μ₁ and μ₂, the expression for φ₁(s,k) in this example. Hence we can compute the maximum of μ₂(K,1+3φ²)/μ₁(K,1+3φ²) over this known branch only, rather than over the positive cone. The maximum of the ratio is b/a and is assumed at k = a, i.e. at the origin φ ≡ 0. This allows interpretation of condition (6.9b) in terms of eigenvalues of the linearization at the origin, namely a and b. Condition (6.9b) therefore requires b/a ≤ 1/3 as a condition for no secondary bifurcation in this example. Hence condition (6.9b), while being a sufficient condition, is far from being a necessary condition for no secondary bifurcation. Condition (6.15b) on the other hand requires that
the ratio (1 + 3φ₁²)/(1 + φ₁²) be dominated by the corresponding ratio of eigenvalues formed with μ₂(K,1+φ²), as a condition against secondary bifurcation; this means that the condition is satisfied if
K(s,t) = (2/π) Σ_{n=1}^∞ (1/n²) sin ns sin nt,  0 ≤ s, t ≤ π.   (9.1)
It is the zero'th branch of eigenfunctions which bifurcates from the trivial solution φ ≡ 0 at the zero'th bifurcation point k = 1, which is the zero'th eigenvalue of the linearized problem at the origin:

k h'' + h = 0,  h(0) = h(π) = 0.   (9.9)

Linear problem (9.9) has simple eigenvalues at k = 1/n², n = 1, 2, ....

In a similar fashion we match the n'th zero η_n of the solution of problem (9.6) with k^{−1/2}π, and so get the expression η_n(c) = n η₁(c). Solution of the equation η_n(c) = k^{−1/2}π yields a value c_n. Then we have

k = π²/η_n²(c_n), and k → 1/n² as c_n → 0.   (9.10)

Thus is yielded the (n−1)th branch of eigenfunctions of problem (9.5), that branch with n−1 interior zeros. It is parameterized by the initial slope c_n; eq. (9.10) gives the eigenvalue, while problem (9.6) with c = c_n gives the eigenfunctions. The primary bifurcation from φ ≡ 0 is at k = 1/n².

For future reference, we note that the n'th maximum of the solution of problem (9.6) (or the n'th zero of the function ψ below) occurs at

ν_n(c) = (2n − 1) η₁(c)/2.   (9.11)
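The zero-matching construction can be carried out numerically. The sketch below integrates φ'' + φ + φ³ = 0, φ(0) = 0, φ'(0) = c, locates the zeros η_n(c), and checks η_n = n·η₁, the eigenvalue limit k = π²/η_n² → 1/n² of eq. (9.10), and the amplitude ρ = √(√(1+2c²) − 1) that follows from the energy integral:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(xi, y):
    return [y[1], -(y[0] + y[0]**3)]

def run(c, count=4):
    """Zeros of phi and max amplitude for initial slope c (problem (9.6))."""
    xi = np.linspace(0.0, 40.0, 40001)
    sol = solve_ivp(rhs, [0.0, 40.0], [0.0, c], t_eval=xi,
                    rtol=1e-10, atol=1e-12)
    p = sol.y[0]
    i = np.where(p[:-1] * p[1:] < 0)[0]                       # sign changes
    zeros = xi[i] - p[i] * (xi[i + 1] - xi[i]) / (p[i + 1] - p[i])
    return zeros[:count], np.max(np.abs(p))

zeros, amp = run(0.5)
eta1 = zeros[0]
print(zeros / eta1)                              # ~ [1, 2, 3, 4]: eta_n = n*eta_1
print(amp, np.sqrt(np.sqrt(1 + 2 * 0.25) - 1))   # amplitude rho
print(np.pi**2 / run(1e-5)[0][0]**2)             # -> 1 as c -> 0
```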
Now let us consider again the solution of problem (9.6), which forms a trajectory ψ² + [φ² + (1/2)φ⁴] = c² in the φ,ψ phase plane (see Fig. 9.1). Here we have set ψ = φ'. Let

η_ν = successive zeros of φ, ν = 0, 1, 2, ...,
ν_ν = successive zeros of ψ, ν = 1, 2, 3, ...,

where we label η₀ = 0 arbitrarily. By inspection of Fig. 9.1 we note that

sgn φ(ξ,c) = (−1)^ν,  η_ν < ξ < η_{ν+1},
sgn ψ(ξ,c) = (−1)^ν,  ν_ν < ξ < ν_{ν+1}.

We consider also the linearized initial value problem

h'' + [1 + 3φ²]h = 0,  h(0) = 0,  h'(0) = 1.   (9.12)

Problem (9.12) has a trajectory which also revolves around the origin of an h,κ phase plane, where κ = h'. In Fig. 9.3 we superimpose the two phase planes. Define ω_ν = the successive zeros of h, ν = 0, 1, 2, .... By inspection we have

sgn h(ξ,c) = (−1)^ν,  ω_ν < ξ < ω_{ν+1}.

FIG. 9.3 (the (φ,ψ) and (h,κ) trajectories superimposed).

If we multiply the differential equation h'' + [1+3φ²]h = 0 through by φ, multiply the differential equation φ'' + [φ + φ³] = 0 through by h, and subtract the latter from the former, we get

(h'φ − hφ')' = −2φ³h.

Integration from η_ν to η_{ν+1} gives

h(η_{ν+1},c)ψ(η_{ν+1},c) − h(η_ν,c)ψ(η_ν,c) = 2 ∫_{η_ν}^{η_{ν+1}} φ³(ξ,c) h(ξ,c) dξ.   (9.13)

Lemma 9.1: ω_ν ≤ η_ν, ν = 1, 2, 3, ...; in other words the h,κ trajectory leads the φ,ψ trajectory in Fig. 9.3.

Proof:
We employ induction. Assume the lemma is true for ν = 1, 2, ..., m, but not for m+1. Then we have ω_m ≤ η_m < η_{m+1} < ω_{m+1}. The integrand in eq. (9.13) (in which we put ν = m) is positive, since h and φ have the same sign between η_m and η_{m+1} under our assumption; and since sgn h(η_m,c) = (−1)^m (or h(η_m,c) = 0) and sgn ψ(η_m,c) = (−1)^m, we also have h(η_m,c)ψ(η_m,c) ≥ 0. Thus by eq. (9.13) we should have h(η_{m+1},c)ψ(η_{m+1},c) > 0. The latter must be false, however, since either sgn h(η_{m+1},c) = (−1)^m or h(η_{m+1},c) = 0 by our assumption, but sgn ψ(η_{m+1},c) = (−1)^{m+1}. This contradiction shows that ω_{m+1} ≤ η_{m+1}, and proves the lemma.

A proof that φ_c(ξ,c) = h and ψ_c(ξ,c) = κ can be patterned after a very similar proof in a published paper of the author [ref. 17, p. 132, Lemma 1].

Lemma 9.2: ω_ν < ν_{ν+1}, and ν_{ν+1} < ω_{ν+1}, ν = 0, 1, 2, ...; in other words, the lead of the (h,κ) trajectory over the (φ,ψ) trajectory in Fig. 9.3 is less than 90°.

Proof: The first statement follows from the second, since by inspection of Fig. 9.3 it is clear that ω_ν < ω_{ν+1}, ν = 0, 1, 2, .... The second statement can be proved by showing that sgn h(ν_ν,c) = sgn φ(ν_ν,c), ν = 1, 2, 3, ... (with reference to Lemma 9.1). From the expression of the solution of problem (9.6) in terms of elliptic functions, we have

φ(ν_ν,c) = (−1)^{ν−1} ρ = (−1)^{ν−1} √(√(1+2c²) − 1).

But ψ(ν_ν(c),c) = 0. Therefore

(d/dc) φ(ν_ν(c),c) = φ_c(ν_ν,c) = h(ν_ν,c) = (−1)^{ν−1} (d/dc) √(√(1+2c²) − 1),

and the derivative on the right is positive. Also we have sgn φ(ν_ν,c) = (−1)^{ν−1}. Therefore sgn h(ν_ν,c) = sgn φ(ν_ν,c), ν = 1, 2, 3, ..., which proves the lemma.

Theorem 9.3: There is no secondary bifurcation on any branch of eigenfunctions of problem (9.4).

Proof: Solutions of problem (9.5), equivalent to problem (9.4), are given by solutions of problem (9.6) with c = c_n > 0, where c_n solves the equation
η_n(c) = k^{−1/2}π. By eq. (9.10) we had k = π²/η_n²(c_n) as the n'th branch eigenvalue of problem (9.5), expressed as a function of the initial slope c_n of the associated eigenfunction. In exactly the same way, the discrete eigenvalues {μ_ν} of the linearized boundary value problem

h'' + (1 + 3φ_n²)h = 0,  h(0) = 0,  h(k^{−1/2}π) = 0   (9.14)

are obtained by matching the zeros ω_ν of the solution of the initial value problem (9.12) with the value k^{−1/2}π; thus μ_ν = π²/ω_ν²(c_n). By Lemmas 9.1 and 9.2 we can then make the following comparison:

μ_{n+1}(c_n) = π²/ω_{n+1}²(c_n) < π²/ν_{n+1}²(c_n) < π²/η_n²(c_n) = k_n(c_n),  0 < c_n < ∞.

Hence the eigenvalue μ_{n+1} of the linearization never meets k along the n'th branch, and no secondary bifurcation can occur.
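The zero interlacing on which this comparison rests (Lemmas 9.1 and 9.2) is easy to confirm numerically; the initial slope c = 0.8 below is an arbitrary choice. A sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate phi'' + phi + phi^3 = 0 together with the variational equation
# h'' + (1 + 3 phi^2) h = 0, h(0) = 0, h'(0) = 1, and compare zeros.
c = 0.8

def rhs(xi, y):
    phi, psi, h, kap = y
    return [psi, -(phi + phi**3), kap, -(1 + 3 * phi**2) * h]

xi = np.linspace(0.0, 30.0, 30001)
sol = solve_ivp(rhs, [0.0, 30.0], [0.0, c, 0.0, 1.0], t_eval=xi,
                rtol=1e-10, atol=1e-12)

def crossings(v):
    i = np.where(v[:-1] * v[1:] < 0)[0]
    return xi[i] - v[i] * (xi[i + 1] - xi[i]) / (v[i + 1] - v[i])

eta = crossings(sol.y[0])      # eta_1, eta_2, ...  : zeros of phi
nu = crossings(sol.y[1])       # nu_1,  nu_2,  ...  : zeros of psi = phi'
omega = crossings(sol.y[2])    # omega_1, ...       : zeros of h

print(np.all(omega[:5] < eta[:5]))                             # Lemma 9.1
print(np.all(nu[:5] < omega[:5]), np.all(omega[:4] < nu[1:5])) # Lemma 9.2
```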
Indeed, the leading coefficient of eq. (4.2) vanishes in this example at a secondary bifurcation point, but not enough of the lumped terms seem to cancel as was the case in producing eq. (4.4).
Hence use of the Newton Polygon method as discussed by J. Dieudonné [ref. 8] and R. G. Bartle [ref. 1, p. 376] seems to founder on the requirement min_{1≤i≤n} α_i = min_{1≤i≤n} β_i = 0 imposed upon the exponents of the bifurcation equation. Actually this failure is not to be expected in secondary bifurcation for Hammerstein operators which are such that K(1−s,1−t) ≠ K(s,t), or f(1−s,x) ≠ f(s,x).*

In Section 9 we first note the equivalence between the eigenvalue problem T(x) = kx for the Hammerstein operator (5.2), where we assume

*Please see Appendix, however.
-
107
-
that K(s,t) has the form given in eq. (9.1), and a certain familiar two point boundary value problem, (9.2).
It is known that in the autonomous case, f(s,x(s)) ≡ f(x(s)), there is no secondary bifurcation of a branch of solutions of problem (9.2). This is shown for a particular two-point boundary value problem, namely (9.4), in a way which clearly relates this absence of secondary bifurcation to some of our considerations of section 6.
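The mechanism behind this equivalence is the familiar Green's-function identity; the sketch below assumes (as the form of (9.1) suggests) that K(s,t) is the Green's function of -d²/ds² with Dirichlet conditions on [0,1]:

```latex
\lambda x(s) = \int_0^1 K(s,t)\,f\bigl(t,x(t)\bigr)\,dt
\quad\Longleftrightarrow\quad
-\lambda x''(s) = f\bigl(s,x(s)\bigr),\qquad x(0) = x(1) = 0,
```

since applying -d²/ds² to the integral representation recovers the integrand at t = s, while the vanishing of K(s,t) at s = 0 and s = 1 enforces the Dirichlet conditions.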
Eigenvalue problem (9.14) with kernel (9.15) generalizes the problem of section 1 in that the kernel is complete (the kernel of section 1 had only the first two terms). But kernel (9.1) is that particular choice of kernel (9.15) in which we set μ_n^{(0)} = n²π²; also we note that the μ_n^{(0)}, n = 1, 2, 3, …, are primary bifurcation points for problem (9.14).
Hence by Theorem 9.3 there actually do exist sets of constants {μ_n^{(0)}} such that the eigenfunction branches arising at these primary bifurcation points undergo no secondary bifurcations at all. Indeed {μ_n^{(0)}} = {n²π²} is one such set.
Hence if we seek a condition on the primary bifurcation points {μ_n^{(0)}} of problem (9.14), with the complete kernel (9.15), such that there is no secondary bifurcation (in contrast with the problem of section 1, which used only the first two terms), the particular constants μ_n^{(0)} = n²π² would presumably have to satisfy it.

Thus, assumptions H-1 through H-6 have been made on the nonlinear operator T(x), mapping a real Banach space X into itself, with T(θ) = θ, and three times Fréchet differentiable.
For the convenience of the
reader we set down these cumulative hypotheses:
H-1:
T'(x_0) is a compact linear operator, where x_0 is a solution of the problem T(x) = λx. This statement is fulfilled conveniently if the nonlinear operator T(x) is completely continuous; then T'(x) is compact for any x ∈ X. Statement H-1 is used in section 3.
H-2:
T(x) is an odd operator, i.e., T(-x) = -T(x).
This is used in section 4.
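In the Hammerstein case, writing the operator in the form T(x)(s) = ∫₀¹ K(s,t) f(t,x(t)) dt (the form assumed here; cf. (5.2)), H-2 reduces to oddness of f in its second argument:

```latex
T(-x)(s) = \int_0^1 K(s,t)\,f\bigl(t,-x(t)\bigr)\,dt
         = -\int_0^1 K(s,t)\,f\bigl(t,x(t)\bigr)\,dt
         = -T(x)(s),
\qquad \text{provided } f(t,-\xi) = -f(t,\xi).
```

Setting ξ = 0 in the oddness relation gives f(t,0) = 0, the fact recorded under H-3 below.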
Further statements have to do with T(x) as a Hammerstein operator in C(0,1) (see eq. (9.2)). They are used in sections 6 and 7.

H-3:
f(s,x) is four times differentiable in x, with |f^{(iv)}(s,x)| bounded uniformly over 0 ≤ s ≤ 1, and lim_{x→0} f(s,x)/x = f_x'(s,0) uniformly on 0 ≤ s ≤ 1. (Statement H-2 already implies that f(s,0) = 0.)
H-4a: Sublinearity; i.e., f_x'(s,x) > 0 for 0 ≤ s ≤ 1, and x f_xx''(s,x) < 0 for 0 ≤ s ≤ 1, -∞ < x < ∞, x ≠ 0.
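As a concrete illustration (ours, not the author's): f(s,x) = arctan x is sublinear in the sense of H-4a, whereas the cubic nonlinearity x + x³ suggested by the linearized equation of section 9 reverses the second inequality and is superlinear:

```latex
f(s,x) = \arctan x:\qquad
f_x' = \frac{1}{1+x^2} > 0,\qquad
x\,f_{xx}'' = \frac{-2x^2}{(1+x^2)^2} < 0 \quad (x \neq 0);
\\[4pt]
f(s,x) = x + x^3:\qquad
f_x' = 1 + 3x^2 > 0,\qquad
x\,f_{xx}'' = 6x^2 > 0 \quad (x \neq 0).
```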