FERMAT DAYS 85: MATHEMATICS FOR OPTIMIZATION

NORTH-HOLLAND MATHEMATICS STUDIES 129

Edited by

J.-B. HIRIART-URRUTY
University Paul Sabatier, Toulouse, France

1986

NORTH-HOLLAND: AMSTERDAM, NEW YORK, OXFORD, TOKYO
© Elsevier Science Publishers B.V., 1986
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
ISBN: 0-444-70121-4
Publishers:
ELSEVIER SCIENCE PUBLISHERS B.V., P.O. Box 1991, 1000 BZ Amsterdam, The Netherlands
Sole distributors for the U.S.A. and Canada:
ELSEVIER SCIENCE PUBLISHING COMPANY, INC., 52 Vanderbilt Avenue, New York, N.Y. 10017, U.S.A.
Library of Congress Cataloging-in-Publication Data

FERMAT Days 85 (Toulouse, France)
FERMAT Days 85.
(North-Holland mathematics studies; 129)
Sponsored by the French Mathematical Society and the Society for Applied and Industrial Mathematics.
1. Mathematical optimization--Congresses. I. Hiriart-Urruty, Jean-Baptiste, 1949- . II. Société mathématique de France. III. Society for Applied and Industrial Mathematics. IV. Title. V. Title: Mathematics for optimization. VI. Series.
QA402.5.F47 1986   519   86-19877
ISBN 0-444-70121-4

PRINTED IN THE NETHERLANDS
Pierre de FERMAT and a Muse. Salle des Illustres, Town Hall of TOULOUSE.
PREFACE

FERMAT Days 85: "Mathematics for Optimization"
1. Description

More than 150 participants attended the FERMAT Days 85, which took place in Toulouse from May 6 to 10, 1985, and which were devoted to "Mathematics for Optimization." About fifteen foreign countries were represented, among them a good number of European countries, to such an extent that I would be tempted to speak of Europtimization... One also noticed that a fair number of participants came from the Mediterranean area, including Spain (16 participants), Italy (12) and the Maghreb (11). Some were attending this kind of conference for the first time, coming from countries which have been "opening themselves" to research in Applied Mathematics in recent years; others, who delivered lectures, had not visited France or even Europe for some time. Nevertheless, in spite of all the efforts of the organizer, three distinguished mathematicians from the Soviet Union, invited long in advance, were prevented from coming.

In addition to the lectures and communications, a list of which may be found below, numerous contacts were established on the occasion of this conference, and informal discussions were organized. This is the "non-measurable" part of the activities inherent in a scientific meeting, and it is as important as the communications themselves.
2. Main topics

Optimization, the main subject of FERMAT Days 85, should be understood in the broad sense of the word, since talks dealing with optimization in differential equations stood alongside communications on optimization problems arising in Mechanics or Statistics. Some "mainstreams" nevertheless clearly appeared: calculus of variations and nonlinear elasticity, optimal control, analysis and optimization in problems dealing with nondifferentiable data, duality techniques, and algorithms in mathematical programming and optimal control. Research in some of these areas is at a crossroads: new ideas and impulses are necessary to go forward. In some other areas, on the contrary, a pause seems to be required to allow an evaluation of the fall-out of the theoretical advances carried out recently, in the various fields where applications of such advances have been foreseen. Needless to say, this meeting was an opportunity for many to take stock of questions in Optimization.

3. List of lectures and communications

On the occasion of the FERMAT Days 85, twenty plenary talks were delivered and twenty-six communications presented. Here follows the list of titles:
H. ATTOUCH, D. A Z E and R. WETS: Convergence of convex-concave saddle functions and continuity of the partial Legendre-Fenchel transformation. Applications to the convergence of Lagrange multipliers.
J.-P. AUBIN: On the inverse function theorem for set-valued maps.
A. AUSLENDER: Two general methods for computing saddle points with applications for decomposing convex programming problems.
E. BALDER: Seminormality of integral functionals and their integrands.
R. BENACER and PHAM DINH TAO: Study of certain algorithms for solving a class of nonconvex optimization problems.
A. BENSOUSSAN: Bellman equations in duality.
J. BORWEIN: Boundaries of nonconvex sets in normed spaces.
R.W. CHANEY: Second-order conditions for optimality for semi-smooth problems.
P.G. CIARLET: Unilateral problems in three-dimensional nonlinear elasticity.
F.H. CLARKE: Regularity of solutions in the calculus of variations.
F. CONRAD and F. ISSARD-ROCH: Approximation of variational inequalities by differentiable penalizations. Application to bifurcation analysis.
B. DACOROGNA: Characterization of polyconvex, quasi-convex and rank-one convex envelopes in the calculus of variations.
F. DEMENGEL and R. TEMAM: Convex functions of a measure and applications.
I. EKELAND: Variational principles and nonconvex duality.
H. FRANKOWSKA: Local controllability and infinitesimal generators of semigroups of set-valued maps.
A. GAIVORONSKI: Some optimization problems in the space of measures.
E.A. GALPERIN: A new method in control theory.
M.A. GOBERNA and M.A. LOPEZ: On the representations of linear semi-infinite systems and applications.
J.-L. GOFFIN: The ellipsoid method: a variable subgradient optimization.
R. GONZALEZ and E. ROFMAN: Optimization of a short-term energy production model.
J.M. GUTIERREZ-DIEZ: Global and local Farkas-Minkowski properties.
A. HATIMY and M. ZOUHIR: Determination of the periodic extremal solutions of a second-order differential equation.
J.-B. HIRIART-URRUTY and A. SEEGER: Calculus rules on a new set-valued second-order derivative for convex functions.
A.D. IOFFE: On the theory of subdifferentials.
G. ISAC and M. THERA: Complementarity problem and post-critical equilibrium state of thin elastic plates.
A. KOZEK: Minimum Lévy distance estimation of a translation parameter.
A.B. KURZHANSKI: On the evolution of controlled systems under state constraints.
C. LEMARECHAL: Toward constructing second-order methods for nonsmooth optimization.
L. McLINDEN: New results for problems involving monotone operators.
P. MARCELLINI: On the definition and the lower-semicontinuity of certain quasi-convex integrals.
K. MARTI: Rates of convergence of semi-stochastic approximation procedures for solving stochastic optimization problems.
J.E. MARTINEZ-LEGAZ: Level sets and the minimal time function of linear control processes.
J.-J. MOREAU: Some variational concepts in the finite-dimensional dynamics of unilateral constrained systems with possible friction.
J. MORGAN and P. LORIDAN: A general approach for the Stackelberg problem and applications.
J.-P. PENOT: Modified and augmented Lagrangian theory revisited and augmented.
PHAM DINH TAO: Algorithms for solving a class of nonconvex optimization problems. Subgradient methods.
J. RECOULES: Approximation, in the sense of information, of a stationary process.
R.T. ROCKAFELLAR: Second derivatives defined by epi-convergence of a second-order difference quotient.
A. SABRI: Determination of periodic extremals of the second kind of a second-order differential equation with periodic coefficients.
R.W.H. SARGENT: Optimal control for systems described by differential-algebraic equations.
J. SPINGARN: The proximal point algorithm revisited.
M. STUDNIARSKI: Sufficient conditions in Lipschitz optimization.
H. TUY: A general approach to global optimization problems.
J. WARGA: Controllability and necessary conditions in nonsmooth and highly smooth optimization problems.
L.C. YOUNG: Generalized curves (or surfaces) and elementary particles in Physics.
T. ZOLEZZI: Stability analysis and relaxation in optimal control.

This volume contains a collection of 13 papers. Some of them are written versions of talks presented at the conference; some are related works in Optimization proposed by lecturers.
4. Acknowledgments

The FERMAT Days 85 were sponsored by the French Mathematical Society (S.M.F.) and the Society for Applied and Industrial Mathematics (S.M.A.I.). Financial support was provided by: S.M.F., S.M.A.I., the Scientific Council of the "U.E.R. Mathématiques, Informatique, Gestion," the Direction de la Coopération et des Relations Internationales (Ministry of National Education), the National Scientific Research Council of France (C.N.R.S.), University Paul Sabatier, the European Research Office of the United States Army (under grant DAJA 45-85-M-0336), and the Regional Council of Midi-Pyrénées. We are grateful to these organizations for their interest and assistance.

I am pleased to express my gratitude to the authors for their contributions, and to the referees for their helpful comments. Last but not least, I would like to thank Christiane and André Lannes for their invaluable help during the preparation of this volume.

J.-B. Hiriart-Urruty
Organizer of the FERMAT Days 85
CONTENTS

Preface ......... vii

H. ATTOUCH, D. AZE and R. WETS
On continuity properties of the partial Legendre-Fenchel transform: convergence of sequences of augmented Lagrangian functions, Moreau-Yosida approximates and subdifferential operators ......... 1

E.J. BALDER
Seminormality of integral functionals and relaxed control theory ......... 43

R. BENACER and PHAM DINH TAO
Global maximization of a nondefinite quadratic function over a convex polyhedron ......... 65

F.H. CLARKE and R. VINTER
On connections between the maximum principle and the dynamic programming technique ......... 77

F. DEMENGEL and R. TEMAM
Convex function of a measure. The unbounded case ......... 103

R. GONZALEZ and E. ROFMAN
Computational methods in scheduling optimization ......... 135

J.-B. HIRIART-URRUTY
A new set-valued second order derivative for convex functions ......... 151

A.D. IOFFE
On the theory of subdifferentials ......... 183

C. LEMARECHAL
Constructing bundle methods for convex optimization ......... 201

P. MARCELLINI
Existence theorems in nonlinear elasticity ......... 241

PHAM DINH TAO and EL BERNOUSSI SOUAD
Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients ......... 249

H. TUY
A general deterministic approach to global optimization via d.c. programming ......... 273

T. ZOLEZZI
Well-posedness and stability analysis in optimization ......... 305
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
ON CONTINUITY PROPERTIES OF THE PARTIAL LEGENDRE-FENCHEL TRANSFORM: CONVERGENCE OF SEQUENCES OF AUGMENTED LAGRANGIAN FUNCTIONS, MOREAU-YOSIDA APPROXIMATES AND SUBDIFFERENTIAL OPERATORS

H. ATTOUCH, A.V.A.M.A.C., Département de Mathématiques, Université de Perpignan (France)
D. AZE, A.V.A.M.A.C., Département de Mathématiques, Université de Perpignan (France)
R. WETS, University of California, Davis (U.S.A.)
Abstract. In this article we consider the continuity properties of the partial Legendre-Fenchel transform which associates, with a bivariate convex function F: X × Y → ℝ ∪ {+∞}, its partial conjugate L: X × Y* → ℝ̄, i.e.

$$L(x, y^*) = \inf_{y \in Y}\{F(x, y) - \langle y^* \mid y\rangle\}.$$

Following [3], where this transformation was proved to be bicontinuous when the convex functions F are equipped with Mosco-epi-convergence and the convex-concave Lagrangian functions L with Mosco-epi/hypo-convergence, we now investigate the corresponding convergence notions for augmented Lagrangians, Moreau-Yosida approximates and subdifferential operators.
1. INTRODUCTION

In [4], [5] the authors introduced a new concept of convergence for bivariate functions, called epi/hypo-convergence, specifically designed to study the convergence of sequences of saddle value problems. A main feature of this convergence notion is, in the convex setting, to make the partial Legendre-Fenchel transform bicontinuous. We recall that, given a convex function F: X × Y → ℝ̄, its partial Legendre-Fenchel transform is the convex-concave function L: X × Y* → ℝ̄ defined by

$$L(x, y^*) = \inf_{y \in Y}\{F(x, y) - \langle y^* \mid y\rangle\}. \tag{1.1}$$

The transformation F ↦ L is one-to-one and bicontinuous when convex functions are equipped with epi-convergence and closed convex-concave functions (in the sense of R.T. Rockafellar [37]) with epi/hypo-convergence (see [5], [3]). When, following the classical duality scheme, the functions $F^n$ are perturbation functions attached to the primal problems

$$\inf_{x \in X} F^n(x, 0),$$

the above continuity property, combined with the variational properties of epi/hypo-convergence, is a key tool for studying the convergence of the
saddle points (that is, of the primal and dual solutions) of the corresponding Lagrangian functions {Lⁿ; n ∈ ℕ}. The problem is thus reduced to the study of the epi-convergence of the sequence of perturbation functions {Fⁿ; n ∈ ℕ}. This approach has been successfully applied to various situations in Convex Analysis (in Convex Programming see D. Azé [8]; for convergence problems in Mechanics, like homogenization of composite materials or reinforcement by thin structures, see [9], H. Chabi [17], ...). There are in fact many other mathematical objects attached to this classical duality scheme, and our main purpose in this article is to study, for each of them, the corresponding convergence notion. Particular attention is paid to the so-called augmented Lagrangian (especially the quadratically augmented one), whose definition is (compare with (1.1))

$$L_r(x, y^*) = \inf_{y \in Y}\Big\{F(x, y) - \langle y^* \mid y\rangle + \frac{r}{2}\,\|y\|^2\Big\}, \tag{1.2}$$
and which can be viewed as an "augmented" partial Legendre-Fenchel transform. In Theorem 4.2 we prove that the Mosco epi/hypo-convergence of the Lagrangian functions Lⁿ to L is equivalent to:

for every r > 0 and y* ∈ Y*, the sequence of convex functions {Lⁿ_r(·, y*); n ∈ ℕ} Mosco epi-converges to L_r(·, y*). (1.3)

Since −L_r(x, ·) can be written as an inf-convolution,

$$(-L_r)(x, \cdot) = (-L)(x, \cdot)\ \square\ \frac{1}{2r}\|\cdot\|^2, \tag{1.4}$$

we are led to study the two following basic properties of the inf-convolution operation, which explain the practical importance (especially from a numerical point of view) of the augmented Lagrangian:

- regularization effect;
- conservation of the infima and of the minimizing elements.

This is considered in Propositions 3.1 and 3.2 for general convolution kernels; see also M. Bougeard and J.P. Penot [14], M. Bougeard [13].
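To make (1.2) concrete, here is a standard convex-programming illustration of ours (not taken from this article): for an equality-constrained program, the partial transform (1.1) returns the ordinary Lagrangian and (1.2) the classical quadratic augmented Lagrangian.

```latex
% Perturbation function for   min f(x)  subject to  g(x) = 0 :
%   F(x,y) = f(x)  if g(x) + y = 0,   +\infty otherwise.
% The only admissible y is y = -g(x), hence
L(x,y^*)   \;=\; \inf_{y}\{F(x,y) - \langle y^* \mid y\rangle\}
           \;=\; f(x) + \langle y^*,\, g(x)\rangle ,
\\[6pt]
L_r(x,y^*) \;=\; \inf_{y}\Big\{F(x,y) - \langle y^* \mid y\rangle + \tfrac{r}{2}\|y\|^2\Big\}
           \;=\; f(x) + \langle y^*,\, g(x)\rangle + \tfrac{r}{2}\,\|g(x)\|^2 .
```

The extra term (r/2)‖g(x)‖² penalizes infeasibility without changing the infima or the minimizers, which is the "conservation" property referred to above.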
Iterating this regularization process, now on the x-variable, we obtain the so-called Moreau-Yosida approximate

$$L_{\lambda,\mu}(x, y^*) = \inf_{\xi \in X}\ \sup_{\eta^* \in Y^*}\Big\{L(\xi, \eta^*) + \frac{1}{2\lambda}\|x - \xi\|^2 - \frac{1}{2\mu}\|y^* - \eta^*\|^2\Big\},$$

the inf-sup being equal to the sup-inf for closed convex-concave functions (Theorem 5.1). Moreover, the Mosco epi/hypo-convergence of Lⁿ to L is equivalent to the pointwise convergence of the associated Moreau-Yosida approximates (Theorem 5.2), and $L_{\lambda,\mu}$ has the same saddle elements as L (Theorem 5.1(b)). Finally, we characterize the above notions in terms of graph convergence of the subdifferential operators,

$$\partial L^n \xrightarrow{\ G\ } \partial L$$

(Theorem 6.1), and summarize in a diagram all these equivalent convergence properties.

2. CONVERGENCE OF CONVEX-CONCAVE SADDLE FUNCTIONS AND CONTINUITY OF THE PARTIAL LEGENDRE-FENCHEL TRANSFORMATION
2.1. Duality scheme

Let us first briefly review the main features of Rockafellar's duality scheme (cf. [37], [38], [39]). Let X, Y, X*, Y* be linear spaces such that X (resp. Y) is in separate duality with X* (resp. Y*) via pairings denoted by ⟨· | ·⟩. Let us consider L: X × Y* → ℝ̄, convex in the x variable and concave in the y* variable. Let us define F: X × Y → ℝ̄ and G: X* × Y* → ℝ̄ by:

$$F(x, y) = \sup_{y^* \in Y^*}\{L(x, y^*) + \langle y^* \mid y\rangle\}, \tag{2.1}$$

$$G(x^*, y^*) = \inf_{x \in X}\{L(x, y^*) - \langle x^* \mid x\rangle\}. \tag{2.2}$$

F (resp. G) is the convex (resp. concave) parent of the convex-concave function L. Two convex-concave functions are said to be equivalent if they have the same parents. A function L is said to be closed if its parents are conjugate to each other, i.e.,

$$-G = F^* \quad\text{and}\quad (-G)^* = F. \tag{2.3}$$

For closed convex-concave functions L, the associated equivalence class is an interval, denoted by $[\underline{L}, \overline{L}]$, with

$$\underline{L}(x, y^*) = \inf_{y \in Y}\{F(x, y) - \langle y^* \mid y\rangle\}, \tag{2.4}$$

$$\overline{L}(x, y^*) = \sup_{x^* \in X^*}\{G(x^*, y^*) + \langle x^* \mid x\rangle\}. \tag{2.5}$$

Let us observe that $\underline{L} = -F^{*_y}$ and $\overline{L} = (-G)^{*_{x^*}}$, where $*_y$ (resp. $*_{x^*}$) denotes the partial conjugation with respect to the y (resp. x*) variable.

If we denote by Γ(X × Y) the class of all convex l.s.c. functions defined on X × Y with values in ℝ̄, we have the following result ([37]).
Proposition 2.1. The map $L \mapsto F$ establishes a one-to-one correspondence between closed convex-concave equivalence classes and Γ(X × Y).

In the sequel, closed convex-concave functions will be assumed to be proper, i.e., the convex parent F is neither the function ≡ +∞ nor the function ≡ −∞.

In the classical theory of convex duality (see [19], [39]), the Lagrangian associated with the proper closed convex perturbation function F is the convex-concave function $\underline{L}$ defined in (2.4). The search for a primal and a dual solution is then equivalent to the search for a saddle point of the equivalence class which contains $\underline{L}$.
2.2. Mosco epi-convergence

For further results see [1], [25], [34].

Definition 2.2. Let X be a reflexive Banach space. A sequence {Fⁿ: X → ℝ̄} is said to Mosco-epi-converge to F: X → ℝ̄ if

(i) for every x ∈ X and every sequence $x_n \rightharpoonup x$ weakly, $\liminf_n F^n(x_n) \ge F(x)$;

(ii) for every x ∈ X, there exists a sequence $x_n \to x$ strongly such that $\limsup_n F^n(x_n) \le F(x)$;

where ⇀ and → denote convergence in the weak and the strong topology of X respectively. We then write $F = M\text{-}\lim_e F^n$.
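A simple example of ours (not from the article) may help fix ideas: on a reflexive Banach space, the sequence $F^n(x) = \frac{n}{2}\|x\|^2$ Mosco-epi-converges to the indicator function $F = \delta_{\{0\}}$ (equal to 0 at x = 0 and +∞ elsewhere).

```latex
\text{(i)}\ \ x_n \rightharpoonup x \neq 0:\ \
  \liminf_n \tfrac{n}{2}\|x_n\|^2 = +\infty = F(x),
  \ \text{since the norm is weakly l.s.c., so } \liminf_n \|x_n\| \ge \|x\| > 0;
\\[4pt]
\phantom{\text{(i)}}\ \ x_n \rightharpoonup 0:\ \
  \liminf_n \tfrac{n}{2}\|x_n\|^2 \ \ge\ 0 \ =\ F(0).
\\[6pt]
\text{(ii)}\ \ x = 0:\ \text{take } x_n \equiv 0,\ \text{so } \limsup_n F^n(x_n) = 0 = F(0);
\ \text{for } x \neq 0,\ F(x) = +\infty \ \text{and (ii) is automatic.}
```

Consistently with Theorem 2.2 below, the conjugates $(F^n)^*(x^*) = \frac{1}{2n}\|x^*\|_*^2$ Mosco-converge to $0 = F^* = (\delta_{\{0\}})^*$.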
A basic property of Mosco-convergence is the following (cf. [33]).

Theorem 2.2. Let X be a reflexive Banach space and {Fⁿ, F: X → ℝ ∪ {+∞}} a collection of closed convex proper functions. Then

$$F = M\text{-}\lim\nolimits_e F^n \iff F^* = M\text{-}\lim\nolimits_e (F^n)^*.$$

Comment. The above result establishes that the conjugacy operation is bicontinuous with respect to Mosco-convergence. In fact, as proved in [6], this operation is an isometry for a suitable choice of metrics on Γ₀(X) and Γ₀(X*).
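As a numerical aside of ours (not part of the article): Mosco convergence of convex functions can also be read off from the pointwise convergence of their Moreau-Yosida approximates, a characterization invoked later in the article via (4.8). The brute-force sketch below, with illustrative grid sizes and test points of our choosing, checks this for fⁿ(x) = |x − 1/n| → |x| on ℝ.

```python
import numpy as np

# Moreau-Yosida approximate f_lam(x) = inf_u { f(u) + (x - u)^2 / (2*lam) },
# approximated by minimizing over a fine grid (a crude but sufficient sketch).
def moreau_envelope(f, x, lam, grid):
    return float(np.min(f(grid) + (x - grid) ** 2 / (2.0 * lam)))

grid = np.linspace(-5.0, 5.0, 20001)   # grid and parameters are illustrative
lam, x0 = 0.5, 0.7

# Approximates of f^n(x) = |x - 1/n| versus those of the limit f(x) = |x|.
target = moreau_envelope(np.abs, x0, lam, grid)
errors = [abs(moreau_envelope(lambda u, n=n: np.abs(u - 1.0 / n),
                              x0, lam, grid) - target)
          for n in (1, 10, 100, 1000)]
print(errors)   # shrinks toward 0 as n grows
```

Here the envelopes are actually available in closed form (they are Huber functions), so the grid minimization is only a convenience.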
2.3. Extended Mosco-epi/hypo-convergence

Let (E, τ) and (F, σ) be topological spaces and {Lⁿ: E × F → ℝ̄} a sequence of bivariate functions; we define, for every (x, y) ∈ E × F:

$$(e_\tau/h_\sigma\text{-li}\ L^n)(x, y) = \sup_{y_n \xrightarrow{\sigma} y}\ \inf_{x_n \xrightarrow{\tau} x}\ \liminf_n L^n(x_n, y_n),$$

$$(e_\tau/h_\sigma\text{-ls}\ L^n)(x, y) = \inf_{x_n \xrightarrow{\tau} x}\ \sup_{y_n \xrightarrow{\sigma} y}\ \limsup_n L^n(x_n, y_n).$$

Definition 2.3 (see [4], [5], [3]). Let X and Y be reflexive Banach spaces and {Lⁿ, L: X × Y* → ℝ̄} a collection of bivariate functions. We say that Lⁿ Mosco epi/hypo-converges to L in the extended sense if

$$\underline{\mathrm{cl}}\,(e_s/h_w\text{-ls}\ L^n)\ \le\ L\ \le\ \overline{\mathrm{cl}}\,(h_s/e_w\text{-li}\ L^n), \tag{2.11}$$

where $\underline{\mathrm{cl}}$ and $\overline{\mathrm{cl}}$ respectively denote the extended lower closure and the extended upper closure, i.e., for any function F: (X, τ) → ℝ̄,

$$\underline{\mathrm{cl}}\,F = \begin{cases} \mathrm{cl}\,F & \text{if } \mathrm{cl}\,F > -\infty,\\ -\infty & \text{otherwise}, \end{cases}$$

cl F denoting the l.s.c. regularization of F, and $\overline{\mathrm{cl}}\,F = -\,\underline{\mathrm{cl}}(-F)$.

For a convex function F, it is well known that $\underline{\mathrm{cl}}\,F = F^{**}$ (let us observe that if $e_s/h_w$-ls Lⁿ is convex in x and $h_s/e_w$-li Lⁿ is concave in y*, then in definition (2.11) the extended closure operations reduce to biconjugation). The following result ([3]) establishes that the partial conjugation defined in (2.4) and (2.5) is bicontinuous when Γ₀(X × Y) is endowed with Mosco convergence and the class of closed convex-concave functions is endowed with Mosco epi/hypo-convergence.
Theorem 2.4 ([3], Theorem 3.2). Let X and Y be reflexive Banach spaces, and {Fⁿ, F: X × Y → ℝ̄} a collection of closed proper convex functions, with associated equivalence classes of closed convex-concave functions denoted by Lⁿ, L. Then the following are equivalent:

(i) $F^n \xrightarrow{\ M\ } F$;

(ii) $L^n \xrightarrow{\ M\text{-}e/h\ } L$ (extended Mosco epi/hypo-convergence).

The extended Mosco epi/hypo-convergence is a variational convergence, in a sense made precise by

Theorem 2.5 ([3], Theorem 2.6). Let (X, τ) and (Y, σ) be two general topological spaces and {Kⁿ, K: X × Y → ℝ̄} a sequence of bivariate functions such that
$$\mathrm{cl}\,(e_\tau/h_\sigma\text{-ls}\ K^n)\ \le\ K\ \le\ \ldots$$

For every r > 0, we derive from Theorem 2.4 and from (4.7) that

$$(F^n)^*_r\ \to\ (F^*)_r.$$

Using the resolvent equation (4.10), we obtain from (4.8) and (4.10) that

$$\lim_{n \to +\infty}\ (F^n)^*_\rho(x^*) = (F^*)_\rho(x^*) \quad \text{for every } \rho > 0 \text{ and } x^* \in X^*.$$

Using again (4.8), in fact a slightly weakened version (see [1]), we derive

$$(F^n)^* \xrightarrow{\ M\ } F^*$$
and, by (4.7),

$$F^n \xrightarrow{\ M\ } F.$$
(i) ⟺ (iv). We observe that the convex function

$$x \mapsto L_r(x, y^*),$$

which is not identically equal to +∞ since F is proper, does not take the value −∞ and is l.s.c., since for every x ∈ X the function

$$y \mapsto F(x, y) - \langle y^* \mid y\rangle + \frac{r}{2}\|y\|^2$$

is uniformly coercive when x remains bounded. Let us define (in the following argument y* is fixed)

$$\Psi^n(x) = L^n_r(x, y^*), \qquad \Psi(x) = L_r(x, y^*).$$

Let us now consider ρ > 0 and $(\Psi^n)^*_\rho$, the Moreau-Yosida approximate of $(\Psi^n)^*$ of parameter ρ. We derive an explicit expression (4.11) for $(\Psi^n)^*_\rho(x^*)$, whose last term is $\frac{1}{2r}\|y^* - \eta^*\|^2$; the same calculation holds for Ψ and yields the analogous expression (4.12).
Let us return to the proof of the equivalence (i) ⟺ (iv). By the definitions of Ψ and Ψⁿ:

(iv) ⟺ ∀r > 0, ∀y* ∈ Y*: $\Psi^n \xrightarrow{\ M\ } \Psi$

⟺ ∀r > 0, ∀y* ∈ Y*: $(\Psi^n)^* \xrightarrow{\ M\ } \Psi^*$

⟺ ∀r > 0, ∀ρ > 0, ∀(x*, y*) ∈ X* × Y*: $\lim_{n \to +\infty} (\Psi^n)^*_\rho(x^*) = \Psi^*_\rho(x^*)$

⟺ ∀r > 0, ∀(x*, y*) ∈ X* × Y*: ...

⟺ $(F^n)^* \xrightarrow{\ M\ } F^*$

⟺ $F^n \xrightarrow{\ M\ } F$,

which ends the proof of Theorem 4.2. ∎
Comments. (1) Theorem 4.2 can be viewed as a continuity result for a generalized partial duality transform in which ⟨y* | y⟩ is replaced by a non-bilinear coupling (cf. the papers of S. Dolecki [18] and M. Volle [41]).

(2) One can give an equivalent expression of the augmented Lagrangian in Hilbert spaces by using Theorem 2.9 of [6].
5. Moreau-Yosida approximates of closed convex-concave functions. Equivalence between extended Mosco epi/hypo-convergence and pointwise limits of Moreau-Yosida approximates

In [4], H. Attouch and R. Wets have defined the upper and lower Moreau-Yosida approximates of a general bivariate function L by means of the formulas

$$L^+_{\lambda,\mu}(x, y^*) = \inf_{\xi \in X}\ \sup_{\eta^* \in Y^*}\Big\{L(\xi, \eta^*) + \frac{1}{2\lambda}\|x - \xi\|^2 - \frac{1}{2\mu}\|y^* - \eta^*\|^2\Big\},$$

$$L^-_{\lambda,\mu}(x, y^*) = \sup_{\eta^* \in Y^*}\ \inf_{\xi \in X}\Big\{L(\xi, \eta^*) + \frac{1}{2\lambda}\|x - \xi\|^2 - \frac{1}{2\mu}\|y^* - \eta^*\|^2\Big\}.$$

When L is a closed convex-concave function, these two quantities prove to be equal, as made precise by the following

Theorem 5.1. Let X, Y be reflexive Banach spaces (renormed as in (4.1)) and L: X × Y* → ℝ̄ a closed convex-concave function.

(a) For all λ > 0, μ > 0,

$$L^+_{\lambda,\mu} = L^-_{\lambda,\mu} := L_{\lambda,\mu}; \tag{5.1}$$

$L_{\lambda,\mu}$ is called the Moreau-Yosida approximate of index λ, μ of L.

(b) L and $L_{\lambda,\mu}$ have the same saddle value and the same saddle points.

(c) For all (x, y*) ∈ X × Y*, the function

$$\tilde{L}(\xi, \eta^*) = L(\xi, \eta^*) + \frac{1}{2\lambda}\|x - \xi\|^2 - \frac{1}{2\mu}\|y^* - \eta^*\|^2$$

has a unique saddle point $(x_{\lambda,\mu},\, y^*_{\lambda,\mu})$, characterized by

$$\Big(H\Big(\frac{x - x_{\lambda,\mu}}{\lambda}\Big),\ -H^*\Big(\frac{y^* - y^*_{\lambda,\mu}}{\mu}\Big)\Big) \in \partial L(x_{\lambda,\mu},\, y^*_{\lambda,\mu}), \tag{5.2}$$

where H: X → X* and H*: Y* → Y are the duality maps defined in (4.3) and $\partial L = \partial_x L \times (-\partial_{y^*}(-L))$.

(d) $L_{\lambda,\mu}$ is a locally Lipschitz convex-concave function of class C¹ on X × Y*, with derivative

$$DL_{\lambda,\mu}(x, y^*) = \Big(H\Big(\frac{x - x_{\lambda,\mu}}{\lambda}\Big),\ -H^*\Big(\frac{y^* - y^*_{\lambda,\mu}}{\mu}\Big)\Big).$$
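A worked special case of ours (not from the article), in the Hilbert setting where the duality maps H and H* reduce to the identity: for the bilinear saddle function L(x, y*) = x·y* on ℝ × ℝ, with λ = μ, the approximating problem of part (c) can be solved by hand.

```latex
\tilde L(\xi,\eta^*) = \xi\,\eta^* + \tfrac{1}{2\lambda}(x-\xi)^2 - \tfrac{1}{2\lambda}(y^*-\eta^*)^2 ;
\\[6pt]
\partial_\xi \tilde L = 0:\ \ \eta^* + \tfrac{\xi - x}{\lambda} = 0, \qquad
\partial_{\eta^*} \tilde L = 0:\ \ \xi - \tfrac{\eta^* - y^*}{\lambda} = 0,
\\[6pt]
\Longrightarrow\quad
x_{\lambda,\lambda} = \frac{x - \lambda y^*}{1+\lambda^2}, \qquad
y^*_{\lambda,\lambda} = \frac{y^* + \lambda x}{1+\lambda^2}.
```

In particular the unique saddle point (0, 0) of L is fixed by this map, in accordance with part (b), and the derivative of part (d) becomes $\big(\frac{\lambda x + y^*}{1+\lambda^2},\ \frac{x - \lambda y^*}{1+\lambda^2}\big) \to (y^*, x)$ as λ → 0, i.e. it recovers $(\partial L/\partial x,\ \partial L/\partial y^*)$.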
Proof. (a) and (c). We shall use the inf-sup theorem of J.J. Moreau [30]; let us recall this result. Under the assumptions

$$\begin{cases} U, V \text{ are locally convex t.v.s.},\\ K: U \times V \to \bar{\mathbb{R}} \text{ is convex-concave},\\ K(\cdot, v) \in \Gamma(U) \text{ for all } v \in V,\\ \text{there exist } v_0 \in V \text{ and } k_0 > \inf_{u \in U} K(u, v_0) \text{ such that } \{u \in U : K(u, v_0) \le k_0\} \text{ is weakly compact,}\end{cases} \tag{5.3}$$

then

$$\inf_{u \in U}\ \sup_{v \in V} K(u, v) = \sup_{v \in V}\ \inf_{u \in U} K(u, v), \tag{5.4}$$

and, moreover, the infimum over u is attained. (5.5)

Let us define

$$K(\xi, \eta^*) = L(\xi, \eta^*) + \frac{1}{2\lambda}\|\xi - x\|^2 - \frac{1}{2\mu}\|\eta^* - y^*\|^2.$$

K is a closed convex-concave function which clearly verifies the assumptions (5.3); we derive that

$$\sup_{\eta^* \in Y^*}\ \inf_{\xi \in X} K(\xi, \eta^*) = \min_{\xi \in X}\ \sup_{\eta^* \in Y^*} K(\xi, \eta^*) \quad (\text{from (5.4), (5.5)}).$$

The same argument applied to (−K) shows that

$$\inf_{\xi \in X}\ \sup_{\eta^* \in Y^*} K(\xi, \eta^*) = \max_{\eta^* \in Y^*}\ \inf_{\xi \in X} K(\xi, \eta^*).$$

It follows that

$$\max_{\eta^* \in Y^*}\ \inf_{\xi \in X} K(\xi, \eta^*) = \min_{\xi \in X}\ \sup_{\eta^* \in Y^*} K(\xi, \eta^*),$$

which ensures the existence of a saddle point, unique thanks to the strict convexity-concavity of K; the characterization (5.2) of this saddle point is then straightforward.
(b) Let us consider the quadratic augmented Lagrangian

$$L_\mu(x, y^*) = \sup_{\eta^* \in Y^*}\Big\{L(x, \eta^*) - \frac{1}{2\mu}\|y^* - \eta^*\|^2\Big\}.$$

From Proposition 4.1', $L_\mu$ and L have the same saddle values and saddle points. Exchanging the roles played by the variables and taking the augmented Lagrangian of parameter λ of $(-L_\mu)$, we obtain a closed concave-convex function which coincides with $-L_{\lambda,\mu}$. Using again Proposition 4.1', part (b) of Theorem 5.1 follows.
(d) We claim that the operator

$$J_{\lambda,\mu}:\ (x, y^*) \mapsto (x_{\lambda,\mu},\ y^*_{\lambda,\mu})$$

is strongly continuous and bounded on bounded sets. Indeed, consider x₀ ∈ X and y₀* ∈ Y*; one deduces the existence of a positive constant c for which the corresponding subdifferential inequalities hold at $(x_{\lambda,\mu}, y^*_{\lambda,\mu})$ and at (x₀, y₀*). Adding these two inequalities and using (5.8) together with the properties of H and H*, it follows that

$$\|x_{\lambda,\mu}\| + \|y^*_{\lambda,\mu}\|\ \le\ M,$$

with M depending only on (‖x₀‖, ‖y₀*‖, c, λ, μ); in particular the operator $J_{\lambda,\mu}$ is bounded on bounded sets.
Let us now prove the continuity of $J_{\lambda,\mu}$. Consider $x^n \to x$ and $y^{*n} \to y^*$ strongly; we claim that $x^n_{\lambda,\mu} \to x_{\lambda,\mu}$ and $y^{*n}_{\lambda,\mu} \to y^*_{\lambda,\mu}$ strongly. Indeed, define Kⁿ and K as in the proof of (a), centered at $(x^n, y^{*n})$ and (x, y*) respectively. It is clear that

$$K^n \xrightarrow{\ M\text{-}e/h\ } K, \qquad\text{and}\qquad \forall\, \eta^*_n \to \eta^*:\ \ K(\xi, \eta^*)\ \ge\ \limsup_n K^n(\xi, \eta^*_n).$$

From Theorem 2.5 it follows that the sequence $(x^n_{\lambda,\mu}, y^{*n}_{\lambda,\mu})$, which is bounded, converges weakly to $(x_{\lambda,\mu}, y^*_{\lambda,\mu})$, since the saddle point of K is unique. Moreover we obtain, for every $K^n \in [\underline{K}^n, \overline{K}^n]$ and $K \in [\underline{K}, \overline{K}]$, the convergence of the values

$$K^n(x^n_{\lambda,\mu}, y^{*n}_{\lambda,\mu})\ \to\ K(x_{\lambda,\mu}, y^*_{\lambda,\mu}). \tag{5.11}$$
From the saddle point property of $(x^n_{\lambda,\mu}, y^{*n}_{\lambda,\mu})$, we derive the corresponding extremality inequality; passing to the limit superior in it and using (5.11), we obtain the convergence of the norms $\|y^{*n}_{\lambda,\mu}\| \to \|y^*_{\lambda,\mu}\|$. So $y^{*n}_{\lambda,\mu} \to y^*_{\lambda,\mu}$ strongly, since weak convergence together with convergence of the norms implies strong convergence, thanks to assumption (4.1). The strong convergence $x^n_{\lambda,\mu} \to x_{\lambda,\mu}$ is then obtained by a similar method.
Let us now calculate the Fréchet derivative of $L_{\lambda,\mu}$. Let $L_\mu$ be the quadratic augmented Lagrangian defined in (4.4); its convex parent $F_\mu$ is the function $F_\mu(x, y) = F(x, y) + \frac{\mu}{2}\|y\|^2$. Using formula (4.5) we derive

$$(u^*, v) \in \partial L_\mu(x, y^*) \iff (u^*, y^*) \in \partial F_\mu(x, -v) \iff (u^*, y^*) \in \partial F(x, -v) + (0,\, -\mu H^{*-1}(v)) \iff (u^*, v) \in \partial L(x,\ y^* + \mu H^{*-1}(v)).$$

An analogous calculation, after a regularization of parameter λ on the first variable, provides the expression of $\partial L_{\lambda,\mu}$, and then

$$\partial L_{\lambda,\mu}(x, y^*) = \Big(H\Big(\frac{x - x_{\lambda,\mu}}{\lambda}\Big),\ -H^*\Big(\frac{y^* - y^*_{\lambda,\mu}}{\mu}\Big)\Big),$$

thanks to the uniqueness of $(x_{\lambda,\mu}, y^*_{\lambda,\mu})$.

Let us fix y* ∈ Y*. The function $L_{\lambda,\mu}(\cdot, y^*)$ is then convex, continuous (in fact locally Lipschitz) and its subdifferential at x reduces to the single element $H\big(\frac{x - x_{\lambda,\mu}}{\lambda}\big)$. It follows ([19], Chap. I, Proposition 5.3) that $L_{\lambda,\mu}(\cdot, y^*)$ is Gâteaux differentiable at x, and then Fréchet differentiable since its derivative is continuous, as seen above. In the same way, the function $L_{\lambda,\mu}(x, \cdot)$ has a continuous Fréchet derivative, namely $-H^*\big(\frac{y^* - y^*_{\lambda,\mu}}{\mu}\big)$. It follows that $L_{\lambda,\mu}$ is a C¹ function, and it is locally Lipschitz since its derivative is bounded on bounded subsets of X × Y*, which ends the proof of Theorem 5.1. ∎

We can now prove
Theorem 5.2. There is equivalence between:

(i) $L^n \xrightarrow{\ M\text{-}e/h\ } L$;

(ii) for every λ > 0, μ > 0 and every (x, y*) ∈ X × Y*, $L^n_{\lambda,\mu}(x, y^*) \to L_{\lambda,\mu}(x, y^*)$.

Proof. Let us consider the augmented Lagrangians $L^n_\mu$ and $L_\mu$. From Theorem 3.2 we know that (i) is equivalent to the Mosco convergence of the convex functions $L^n_\mu(\cdot, y^*)$ to $L_\mu(\cdot, y^*)$. Computing, for λ > 0, the Moreau-Yosida approximates of these functions yields precisely $L^n_{\lambda,\mu}(\cdot, y^*)$ and $L_{\lambda,\mu}(\cdot, y^*)$. Using the characterization of Mosco convergence in terms of pointwise convergence of the Moreau-Yosida approximates (4.8), we derive that (ii) is equivalent to (i). ∎
Comments. 1) In the Hilbert case, an easy computation based on the formula of Theorem 2.9 of [6] gives an explicit expression of $L^n_{\lambda,\mu}$ in terms of the approximates of the convex parents; it follows in an evident way that the convergence of the Moreau-Yosida approximates is locally uniform, which is stronger than the equivalence (i) ⟺ (ii) in Theorem 5.2.

2) An interesting open question is to know whether the equivalence still holds when the approximates are required to converge for a single index only. If this were the case, the class of maximal monotone operators (see [38] or [22]) associated with closed proper convex-concave functions L would verify, for every sequence (Aⁿ) and every (x, y*) ∈ X × Y*,

$$A^n_\lambda(x, y^*) \to A_\lambda(x, y^*) \quad\text{implies}\quad A^n_\mu(x, y^*) \to A_\mu(x, y^*) \ \text{for every } \mu > 0, \tag{5.13}$$

where $A^n_\lambda$ and $A_\lambda$ are the Yosida approximates of the operators Aⁿ and A. In [1], Remark 3.30, H. Attouch has proved that (5.13) is true for the subdifferentials of convex functions.
6. Equivalence between Mosco-epi/hypo-convergence of closed convex-concave saddle functions and graph convergence of their subdifferentials

In [1], H. Attouch has established the following equivalence for sequences of closed convex proper functions defined on a reflexive Banach space with values in ℝ ∪ {+∞} (see also [29]):

$$F^n \xrightarrow{\ M\ } F \tag{6.1}$$

is equivalent to

$$\partial F^n \xrightarrow{\ G\ } \partial F\ \ +\ \text{some normalization condition}, \tag{6.2}$$

where ∂F is the subdifferential of the closed convex proper function F. The normalization condition comes from the fact that F is determined by ∂F up to an additive constant; it is described as follows:

there exist (x, x*) ∈ ∂F and $(x_n, x^*_n) \in \partial F^n$ for every n ∈ ℕ such that $x_n \to x$, $x^*_n \to x^*$ strongly and $F^n(x_n) \to F(x)$. (N.C)

The code letter G means graph convergence, that is:

(i) for every (x, x*) ∈ ∂F there exist $x_n \to x$, $x^*_n \to x^*$ strongly with $(x_n, x^*_n) \in \partial F^n$ for every n ∈ ℕ;

(ii) for every sequence $(x_k, x^*_k) \in \partial F^{n_k}$ such that $x_k \to x$ and $x^*_k \to x^*$ strongly, we have (x, x*) ∈ ∂F.

In fact (ii) is implied by (i), thanks to the maximal monotonicity of the subdifferential operator. Moreover (ii) can be replaced by a weaker assumption in which one of the two strong limits is only a weak limit (see [1], 3.7).

Let us return to convex-concave functions. In [38], R.T. Rockafellar has introduced the notion of subdifferential of a closed convex-concave function L by the formula
$$\partial L(x, y^*) = \partial_1 L(x, y^*) \times \big({-}\partial_2(-L)(x, y^*)\big), \tag{6.3}$$

where $\partial_1 L$ and $\partial_2(-L)$ denote the partial convex subdifferentials with respect to the first and second variable. He proved that dom(∂L) and ∂L itself are independent of $L \in [\underline{L}, \overline{L}]$, and that the graph of ∂L is related to the graph of the subdifferential ∂F of the convex parent F via the relation

$$(u^*, v) \in \partial L(x, y^*) \iff (u^*, y^*) \in \partial F(x, -v). \tag{6.4}$$

It is clear that

$$(x, y^*) \text{ is a saddle point of } L \iff (0, 0) \in \partial L(x, y^*). \tag{6.5}$$

From (6.4) and the definition of graph convergence, it follows that

$$\partial L^n \xrightarrow{\ G\ } \partial L \iff \partial F^n \xrightarrow{\ G\ } \partial F. \tag{6.6}$$

Putting together (6.6), the equivalence between (6.1) and (6.2), and Theorem 2.4, we obtain
Theorem 6.1. Let {Lⁿ: X × Y* → ℝ̄} be a sequence of closed convex-concave functions (X and Y reflexive Banach spaces) whose convex parents (Fⁿ) verify (N.C). Then the following are equivalent:

(i) $L^n \xrightarrow{\ M\text{-}e/h\ } L$;

(ii) $\partial L^n \xrightarrow{\ G\ } \partial L$.

Theorem 6.1 points out the fact that extended Mosco epi/hypo-convergence is the convergence notion for classes of closed convex-concave functions associated with graph convergence of subdifferentials. This graph convergence makes precise the variational properties of extended Mosco epi/hypo-convergence, in order to obtain strong stability of saddle points (see [42] and [8] for applications in Convex Programming).
Theorem 6.2. Let {Lⁿ, L: X × Y* → ℝ̄} be a collection of closed convex-concave functions (X and Y reflexive Banach spaces) such that $L^n \xrightarrow{\ M\text{-}e/h\ } L$. Then:

for every sequence $(x_n, y^*_n)$ of saddle points of Lⁿ which is (w × w) relatively compact, each (w × w) limit (x, y*) of a subsequence is a saddle point of L; (6.7)

for every saddle point (x, y*) of L, there exist sequences $x_n \to x$, $y^*_n \to y^*$, $u^*_n \to 0$, $v_n \to 0$ (strongly) such that $(u^*_n, v_n) \in \partial L^n(x_n, y^*_n)$. (6.8)

The sequence $(x_n, y^*_n)$ is then a sequence of saddle points of the convex-concave functions Kⁿ whose convex parents are $\Phi^n(x, y) = F^n(x, y - v_n) - \langle u^*_n \mid x\rangle$.
Proof. Since L^n →^{M-e/h} L, we derive from Theorem 6.1 that ∂L^n →^G ∂L. If (x̄, ȳ*) is a saddle point of L, we derive (0, 0) ∈ ∂L(x̄, ȳ*). Using the definition of graph convergence, (6.8) follows, the calculation of K^n and Φ^n being straightforward.
In order to prove (6.7), let us consider (ξ, η) ∈ X × Y and sequences ξ_n → ξ, η_n → η such that F(ξ, η) = lim_{n→+∞} F^n(ξ_n, η_n); such sequences exist since F^n →^M F, thanks to the assumption that L^n →^{M-e/h} L (see Theorem 2.4). From the fact that (x_n, y_n*) is a saddle point of L^n, we derive
(0, 0) ∈ ∂L^n(x_n, y_n*), hence (0, y_n*) ∈ ∂F^n(x_n, 0) (see (6.4) and (6.5)).
It follows that
F^n(ξ_n, η_n) − F^n(x_n, 0) ≥ ⟨y_n* | η_n⟩.
Taking the lim sup on both sides, and still denoting by x_n and y_n* the subsequences such that x_n ⇀ x and y_n* ⇀ y*, we obtain
F(ξ, η) − F(x, 0) ≥ ⟨y* | η⟩ for all (ξ, η) ∈ X × Y,
i.e. (0, y*) ∈ ∂F(x, 0), and (x, y*) is a saddle point of L, which proves (6.7). ∎
Let us conclude this work by giving another characterization of the extended Mosco epi/hypo-convergence in terms of the resolvents and Yosida approximates of the maximal monotone operator
A(x, y*) = {(u*, v) ∈ X* × Y : (u*, −v) ∈ ∂L(x, y*)}
associated with every closed convex-concave function L. Let us return to (4.2) and consider x_λ and y_λ* defined in (4.2); this yields the following formulae:
(x_λ, y_λ*) = J_λ(x, y*)   (resolvent of index λ),
((x − x_λ)/λ, (y* − y_λ*)/λ) = A_λ(x, y*)   (Yosida approximate of index λ).
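For orientation, these are the standard resolvent and Yosida constructions for a maximal monotone operator, here applied to the operator A above; a sketch in this notation (cf. [15]):

```latex
% Resolvent and Yosida approximate of the maximal monotone operator A:
\[
  J_\lambda \;=\; (I + \lambda A)^{-1}, \qquad
  A_\lambda \;=\; \tfrac{1}{\lambda}\,(I - J_\lambda), \qquad \lambda > 0 .
\]
% Writing (x_\lambda, y^*_\lambda) = J_\lambda(x, y^*), this gives
\[
  A_\lambda(x,y^*)
  \;=\; \Bigl( \frac{x - x_\lambda}{\lambda},\;
               \frac{y^* - y^*_\lambda}{\lambda} \Bigr)
  \;\in\; A\bigl(x_\lambda, y^*_\lambda\bigr),
\]
% so A_\lambda is an everywhere-defined approximation of A.
```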
Theorem 6.3. Let X, Y be two reflexive Banach spaces which verify (4.1), and let {L^n, L: X × Y* → R̄} be a collection of closed convex-concave functions. Then the following are equivalent:
(i) L^n →^{M-e/h} L;
(ii) J_λ^n(x, y*) → J_λ(x, y*) strongly as n → +∞, for every λ > 0 and every (x, y*) ∈ X × Y*;
(iii) A_λ^n(x, y*) → A_λ(x, y*) strongly as n → +∞, for every λ > 0 and every (x, y*) ∈ X × Y*.
Proof. From Theorem 6.1, we obtain
L^n →^{M-e/h} L  ⟺  ∂L^n →^G ∂L,
and from the definitions of A^n and A we derive
∂L^n →^G ∂L  ⟺  A^n →^G A.
Then apply Proposition 3.60 of [1], and Theorem 6.3 follows. ∎
Let us summarize the preceding results with the following diagram.
Left-hand side:
F_λ^n → F_λ pointwise  ⟺ (Thm. 5.2)  F^n →^{Mosco-epi} F  ⟺  ∂F^n →^{graph conv.} ∂F
⟺ (Thm. 2.4)  L^n →^{Mosco-epi/hypo} L  ⟺ (Thm. 6.1)  ∂L^n →^{graph conv.} ∂L  ⟺ (Thm. 6.3)  J_λ^n → J_λ pointwise.

Right-hand side:
L_λ^n → L_λ pointwise (λ > 0)  ⟺ (Thm. 5.2)  L^n →^{Mosco-epi/hypo} L  ⟺ (Thm. 6.1)  ∂L^n →^{graph conv.} ∂L
⟺ (Thm. 6.3)  J_λ^n → J_λ pointwise  ⟺ (Thm. 4.2)  L^n(·, y*) →^{Mosco-epi} L(·, y*) for every y* ∈ Y*.
REFERENCES

[1] H. ATTOUCH, Variational convergence for functions and operators, Applicable Mathematics Series, Pitman (1984).
[2] A. AUSLENDER, Optimisation - Méthodes numériques, Masson, Paris (1976).
[3] H. ATTOUCH, D. AZÉ, R. WETS, Convergence of convex-concave saddle functions; continuity properties of the Legendre-Fenchel transform with applications to convex programming and mechanics, Techn. Report A.V.A.M.A.C. Perpignan no. 85-08, submitted (1985).
[4] H. ATTOUCH, R. WETS, A convergence theory for saddle functions, Trans. A.M.S., 280, no. 1 (1983), 1-41.
[5] H. ATTOUCH, R. WETS, A convergence for bivariate functions aimed at the convergence of saddle values, Mathematical theories of optimization, ed. by J.P. Cecconi and T. Zolezzi, Springer-Verlag Lecture Notes in Mathematics, no. 979 (1981).
[6] H. ATTOUCH, R. WETS, Isometries for the Legendre-Fenchel transform, to appear in Trans. of the A.M.S.
[7] H. ATTOUCH, R. WETS, Approximation and convergence in nonlinear optimization, Nonlinear Programming 4, Academic Press (1981), 367-394.
[8] D. AZÉ, Stability results in convex programming, Techn. Report no. 85-04, A.V.A.M.A.C., Perpignan (1985).
[9] D. AZÉ, Convergence des variables duales dans des problèmes de transmission à travers des couches minces par des méthodes d'épi-convergence, Techn. Report no. 85-06, A.V.A.M.A.C., Perpignan, submitted (1985).
[10] B. BANK, J. GUDDAT, D. KLATTE, B. KUMMER, K. TAMMER, Nonlinear parametric optimization, Birkhäuser Verlag (1983).
[11] D.P. BERTSEKAS, Constrained optimization and Lagrange multipliers methods, Academic Press (1982).
[12] D.P. BERTSEKAS, Convexification procedures and decomposition methods for nonconvex optimization problems, J.O.T.A., 29, no. 2 (1979), 169-197.
[13] M. BOUGEARD, Contribution à la théorie de Morse, Cahiers du CEREMADE, Univ. Paris IX Dauphine, no. 7911 (1979).
[14] M. BOUGEARD, J.P. PENOT, Approximation and decomposition properties of some classes of locally d.c. functions, to appear.
[15] H. BREZIS, Opérateurs maximaux monotones et semi-groupes de contraction dans les espaces de Hilbert, North-Holland (1973).
[16] E. CAVAZUTTI, Γ-convergenza multipla, convergenza di punti di sella e di max-min, Boll. Un. Mat. Ital., 6, 1-B (1982), 251-274.
[17] E.H. CHABI, Convergence des variables duales dans des problèmes d'homogénéisation et de renforcement, Techn. Report, A.V.A.M.A.C., Perpignan, to appear (1985).
[18] S. DOLECKI, On perspective of abstract Lagrangian, Proc. of the meeting on Generalized Lagrangians in systems and economic theory, I.I.A.S.A., Laxenburg, Austria (1979).
[19] I. EKELAND, R. TEMAM, Analyse convexe et problèmes variationnels, Dunod (1974).
[20] M. FORTIN, Leçons sur l'analyse convexe et l'approximation des problèmes de point-selle, Publications mathématiques d'Orsay, no. 78.03 (1978).
[21] A. FOUGERES, A. TRUFFERT, Régularisation s.c.i. et Γ-convergence. Approximations inf-convolutives associées à un référentiel, to appear in Ann. di Mat. Pura ed Appl. (1986).
[22] J.P. GOSSEZ, On the subdifferential of a saddle function, Jour. of Functional Anal., 11 (1972), 220-230.
[23] J.-B. HIRIART-URRUTY, Extension of Lipschitz functions, J. Math. Anal. Appl., 77 (1980), 539-554.
[24] R. JANIN, Sur la dualité et la sensibilité dans les problèmes de programmes mathématiques, Thèse d'État, Paris VI (1974).
[25] J.L. JOLY, Une famille de topologies sur l'ensemble des fonctions convexes pour lesquelles la polarité est bicontinue, J. Math. Pures et Appl., 52 (1973), 421-441.
[26] P.J. LAURENT, Approximation et Optimisation, Hermann (1972).
[27] L. Mc LINDEN, Dual operation on saddle functions, Trans. A.M.S., 179 (1973), 363-381.
[28] L. Mc LINDEN, An extension of Fenchel duality theorem to saddle functions and dual minimax problems, Pacific Jour. of Maths., 50 (1974), 135-158.
[29] M. MATZEU, Su un tipo di continuità dell'operatore subdifferenziale, Boll. Un. Mat. Ital., 5, 14-B (1977), 480-490.
[30] J.J. MOREAU, Théorèmes "inf-sup", C.R.A.S., Paris, 258 (1964), 2720-2722.
[31] J.J. MOREAU, Inf-convolution de fonctions numériques sur un espace vectoriel, C.R.A.S., Paris, 256 (1963), 5047-5049.
[32] J.J. MOREAU, Fonctionnelles convexes, Séminaire du Collège de France (1966).
[33] U. MOSCO, On the continuity of the Young-Fenchel transform, J. Math. Anal. Appl., 35 (1971), 518-535.
[34] U. MOSCO, Convergence of convex sets and of solutions of variational inequalities, Advances in Maths., 3 (1969), 510-585.
[35] R.T. ROCKAFELLAR, Augmented Lagrangian and applications of the proximal point algorithms in convex programming, Math. of Operations Research, 1, no. 2 (1976), 97-116.
[36] R.T. ROCKAFELLAR, Augmented Lagrange multiplier functions and duality in nonconvex programming, SIAM Jour. on Control, 12, no. 2 (1974), 268-285.
[37] R.T. ROCKAFELLAR, A general correspondence between dual minimax problems and convex programs, Pacific Jour. of Maths., 25, no. 3 (1968), 597-611.
[38] R.T. ROCKAFELLAR, Monotone operators associated with saddle functions and minimax problems, Proc. Symp. Pure Maths. A.M.S., 18 (1970), 241-250.
[39] R.T. ROCKAFELLAR, Conjugate duality and optimization, Regional Conference Series in Applied Mathematics 16, SIAM Publications, Philadelphia (1974).
[40] Y. SONNTAG, Convergence au sens de Mosco; théorie et applications à l'approximation des solutions d'inéquations, Thèse d'État, Marseille (1982).
[41] M. VOLLE, Conjugaison par tranches, Ann. di Mat. Pura ed Appl., Vol. IV, CXXXIX (1985), 279-312.
[42] T. ZOLEZZI, On stability analysis in mathematical programming, Math. Programming Study, 21 (1984), 227-242.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986

43
SEMINORMALITY OF INTEGRAL FUNCTIONALS AND RELAXED CONTROL THEORY
E.J. BALDER
Mathematical Institute
University of Utrecht
Budapestlaan 6
Utrecht (The Netherlands)
Abstract. Recent results by the author on seminormality of integral functionals are used to generalize his earlier results on relaxed control theory.
Keywords. Integral functionals. Seminormality. Relaxed control theory.
1. INTRODUCTION

This paper unifies recent results by the author on the seminormality of integral functionals [1g] with his earlier, separate results on relaxed control theory [1a-f]. More particularly, it explains how the relative character of the set of relaxed control functions, as a convex subset of a certain vector space, calls for a special kind of Nagumo tightness [1g], which is provided by the notion of tightness introduced in [1a, e] (the latter notion extends the classical tightness notion in measure theory). Also, it shows how the usual lower semicontinuity statements of relaxed control theory can be generalized into seminormality statements for integral functionals at no extra cost. The specialization of the results of [1g] to the usual framework for relaxation has
44
E.J. Balder
been made more understandable (hopefully) by the creation of an intermediate abstract framework for relaxation. Others, notably L. Cesari and his students, have also established certain connections between seminormality and relaxation [7b, c]. However, they do not consider seminormality of integral functionals and tightness (both notions were introduced recently by the present author). In view of the great generality of the results reached in [1g], which are obtained using Nagumo tightness, the exact relation between the two notions of tightness would seem to be of some interest. Needless to say, these factors cause the present study to be a great deal more general than the ones mentioned above. The plan of this paper is as follows. In Section 2 seminormality is recapitulated in general, and in Section 3 the seminormality of integral functionals and their integrands. In Section 4 we study an abstract framework for relaxation. In particular, the role of tightness as a sufficient condition for Nagumo tightness is explained there. Finally, the results of Section 4 are specialized to the usual setting of relaxed control theory (Section 5). Let us note already that the term "relaxed control theory" has the meaning of [1e, f]. That is to say, relaxation is considered for measurable functions in general, and is not restricted to the control functions of an optimal control problem. Hence it is not reasonable to suppose that the space of "control points" is compact (which would make Sections 4 and 5 far less interesting specializations of Section 3).
2. SEMINORMALITY
This section recapitulates the salient features of seminormality; we refer to [1g], [7c] for more details. Let (X, d) be a metric space and (V, P, ⟨·|·⟩) a pair of locally convex vector spaces, paired by a strict duality ⟨·|·⟩ [8].
Seminormality of integral functionals
45

A. Definition.
(a) A function e: X × V → R̄ is simple seminormal (on X × V) if
e(x, v) = f(x) + ⟨v | p⟩
for some l.s.c. function f: X → R̄ and some p ∈ P.
(b) A function a: X × V → R̄ is seminormal (on X × V) if a is the pointwise supremum of simple seminormal functions on X × V.
onxxv. This definition goes back to L. Tonelli [19] (cf. [ll]for a recent related concept). T h e general form of the above definition brings out the fact that seminormality is a generalized convexity property. Notably, arbitrary pointwise suprema and finite, well-defined sums of seminormal functions are again seminormal. This motivates the followhg localizations of the concept.
Definition.
(c) A function a: X × V → R̄ is seminormal on a subset D of X × V if there exists a seminormal function a′ on X × V, a′ ≤ a, such that
a′(x, v) = a(x, v) for all (x, v) ∈ D.   (2.1)
(d) A function a: X × V → R̄ is seminormal at a point (x₀, v₀) of X × V if a is seminormal on the singleton {(x₀, v₀)}.
(e) A function a: X × V → R̄ is relatively seminormal on a subset D of X × V if there exists a seminormal function a′ on X × V such that (2.1) holds.
Let us note that a is seminormal on D if and only if a is seminormal at every point of D; the same is not true for relative seminormality! Observe also that if a is relatively seminormal on D, then a is relatively l.s.c. on D, since seminormal functions are a fortiori l.s.c. on X × V for the usual product topology.
B. Note that the marginals on V of a seminormal function are l.s.c. proper convex functions (or identically equal to −∞). Thus it is not surprising that seminormality can be expressed in terms of Fenchel conjugate functions [12], [1g, Theorem 2.4]. Let a: X × V → R̄ be a given function; we define
b(x, p) ≡ sup_{v ∈ V} [⟨v | p⟩ − a(x, v)],  (x, p) ∈ X × P,
and denote by b̂(·, p) the u.s.c. hull of b(·, p) on X.
Proposition. The function a is seminormal at (x₀, v₀) ∈ X × V if and only if
a(x₀, v₀) = sup_{p ∈ P} [⟨v₀ | p⟩ − b̂(x₀, p)].   (2.2)
C. Even with this more explicit criterion at our disposal, it could still be quite tedious to verify the seminormality of a function on X × V. Fortunately, seminormality can frequently be obtained by adding a coercive component; this type of result also goes back to Tonelli.
Definition. A function h: V → (−∞, +∞] is coercive (or inf-compact on V for every slope) if for every p ∈ P and β ∈ R the set
{v ∈ V : h(v) − ⟨v | p⟩ ≤ β} is compact.
Proposition. Suppose that a: X × V → R̄ satisfies the following conditions:
a is l.s.c. on X × V,
a(x, ·) is convex on V for every x ∈ X,
a(x, v) ≥ f₀(x) + ⟨v | p₀⟩ for all (x, v) ∈ X × V,
for some l.s.c. function f₀: X → R̄ and some p₀ ∈ P. Suppose also that h: V → (−∞, +∞] is a coercive convex function. Then for every ε > 0,
a + εh is seminormal on {x ∈ X : f₀(x) > −∞} × V.
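A minimal finite-dimensional illustration of this coercivity notion (our example: V = P = ℝⁿ paired by the usual inner product):

```latex
% For h(v) = |v|^2, completing the square shows every tilted sublevel
% set is a closed ball, hence compact:
\[
  \{\, v \in \mathbb{R}^n : |v|^2 - \langle v, p\rangle \le \beta \,\}
  \;=\;
  \Bigl\{\, v \in \mathbb{R}^n :
     \bigl| v - \tfrac{p}{2} \bigr|^2 \le \beta + \tfrac{|p|^2}{4} \,\Bigr\},
\]
% a closed ball (empty when beta + |p|^2/4 < 0), so h is inf-compact
% on V for every slope p, i.e. coercive.
```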
Note that the three conditions above are necessary for seminormality of a. This result slightly generalizes [1g, Corollary 2.9], but its proof runs almost exactly the same: by a version of Berge's theorem of the maximum [16, p. 358], b(x₀, ·) = b̂(x₀, ·) if f₀(x₀) > −∞; hence (2.2) follows by standard facts from convex analysis [6, I.4]. A related, more involved result can be found in [1g, Theorem 2.13].
D. Suppose that the topological space P is separable and that σ(V, P) is the topology on V. By Šmulian's theorem [10, Theorem 3.2], this entails the equivalence of (relative) compactness and (relative) sequential compactness on V. Then it can be seen directly that the first condition of the above proposition can be weakened into: a is sequentially l.s.c. on X × V. The point to be noted here is that a(x₀, ·) + εh will still be l.s.c. on V for every x₀ ∈ X with f₀(x₀) > −∞.
E. We briefly discuss the connection between Cesari's property (Q) and seminormality [1g], [7a, c].
Definition. A multifunction Q: X → V has property (Q) at x₀ ∈ X if
Q(x₀) = ⋂_{δ>0} cl co ⋃_{d(x,x₀)<δ} Q(x).
Recall that the indicator function χ_Q: X × V → {0, +∞} of Q: X → V is defined by setting χ_Q ≡ 0 on the graph of Q, and χ_Q ≡ +∞ elsewhere. The following result is almost elementary [1g, Proposition 2.15]:
Proposition. The multifunction Q: X → V has property (Q) at x₀ ∈ X if and only if χ_Q is seminormal on {x₀} × V.
In the opposite sense, one can show that a finite-valued function on X × V is seminormal on {x₀} × V if and only if its epigraphic multifunction has property (Q) at x₀ ∈ X [1g, Proposition 2.20]; this result is due to Cesari [7c, 17.5], [7d, p. 133].

3. SEMINORMALITY OF INTEGRAL FUNCTIONALS
Let (X, d) be a Suslin metric space, and (V, P, ⟨·|·⟩) Suslin locally convex vector spaces, paired by a strict duality. Let (T, 𝒯, μ) be a σ-finite measure space, and let (𝒳, d̃) be a decomposable class [6, VII-3] of (equivalence classes(¹) of) (𝒯, B(X))-measurable functions from T into X, which we equip with the essential supremum metric d̃, i.e.,
d̃(x, y) ≡ ess sup_{t∈T} d(x(t), y(t)).
Let (𝒱, 𝒫, ⟨·|·⟩) be a pair of locally convex vector spaces, where 𝒱 (𝒫) is a decomposable class of (equivalence classes of) scalarly μ-integrable functions from T into V (P), such that the integral
⟨v | p⟩ = ∫_T ⟨v(t) | p(t)⟩ μ(dt)
is finite for every v ∈ 𝒱, p ∈ 𝒫. Note that the duality ⟨·|·⟩ is strict [6, VII-6]; we shall equip 𝒱 and 𝒫 with topologies for which they are paired with respect to it.
A. Let l: T × X × V → R̄ be a given function. The associated outer integral functional I_l: 𝒳 × 𝒱 → R̄ is defined by
I_l(x, v) ≡ ∫*_T l(t, x(t), v(t)) μ(dt),
where outer integration has been employed, so that l need not be measurable [1e-g].
Theorem. Suppose that for some p₀ ∈ 𝒫 and φ₀ ∈ L¹_ℝ(T, 𝒯, μ)
l(t, x, v) ≥ ⟨v | p₀(t)⟩ + φ₀(t) for all (x, v) ∈ X × V, μ-a.e.   (3.1)
(¹) With respect to the equivalence relation "equality μ-a.e.".
Consider the following two seminormality properties of l and I_l:
l(t, ·, ·) is seminormal on X × V, μ-a.e.;   (3.2)
I_l is seminormal on 𝒳 × 𝒱.   (3.3)
Then (3.2) implies (3.3), and if also the following hold:
l is 𝒯 × B(X × V)-measurable,   (3.4)
I_l(x₀, v₀) < +∞ for some (x₀, v₀) ∈ 𝒳 × 𝒱,   (3.5)
then (3.2) and (3.3) are equivalent.
The proof of this result on the relationship between seminormality in the small (3.2) and seminormality in the large (3.3) [1g, Theorem 3.1] is elaborate; to a large extent, it depends on the criterion (2.2) and on measurable selection results for the Fenchel conjugation of integral functionals, due to R.T. Rockafellar [18].
B. Even when (3.2) does not hold, the integral functional I_l may still be relatively seminormal on suitable subsets 𝒳 × 𝒱₀ of 𝒳 × 𝒱. Let ℋ_N(T;V) be the set of all 𝒯 × B(V)-measurable functions h: T × V → [0, +∞] such that h(t, ·) is coercive and convex on V for every t ∈ T.
Definition.
(a) A subset 𝒱₀ of 𝒱 is Nagumo tight if for some h ∈ ℋ_N(T;V)
sup_{v∈𝒱₀} ∫*_T h(t, v(t)) μ(dt) < +∞.
(b) A subset 𝒱₀ of 𝒱 is almost Nagumo tight if there is a nonincreasing sequence {B_i}₁^∞ in 𝒯, with μ(⋂_{i=1}^∞ B_i) = 0, such that for every i
sup_{v∈𝒱₀} ∫*_{T∖B_i} h_i(t, v(t)) μ(dt) < +∞
for some h_i ∈ ℋ_N(T;V).
Theorem. Suppose that l: T × X × V → R̄ satisfies (3.1) and
l(t, ·, ·) is sequentially l.s.c. on X × V, μ-a.e.;   (3.6)
l(t, x, ·) is convex on V for every x ∈ X, μ-a.e..   (3.7)
Then for every almost Nagumo tight subset 𝒱₀ of 𝒱,
I_l is relatively seminormal on 𝒳 × 𝒱₀.
This result follows immediately from combining Theorem 3.A and Proposition 2.C. A somewhat more general result was given in [1g, Theorem 4.4]. The following corollary is not without interest, since it shows that the values of the limit function of a convergent sequence in 𝒱 always satisfy a property (Q)-like relation, provided that the sequence as a whole is (almost) Nagumo tight; cf. the examples below.
Corollary. Suppose that {v_k}₁^∞ is almost Nagumo tight in 𝒱 and that v_k → v₀. Then
v₀(t) ∈ ⋂_{p=1}^∞ cl co {v_k(t) : k ≥ p}, μ-a.e.
Proof. We apply the above theorem to the case where X ≡ ℕ ∪ {∞}. Define l: T × X × V → {0, +∞} by
l(t, p, v) = 0 if v ∈ cl co {v_k(t) : k ≥ p}, and l(t, p, v) = +∞ otherwise.
Then l satisfies (3.1) and the other conditions of the theorem. Thus we find
I_l(∞, v₀) ≤ lim inf_{k→∞} I_l(k, v_k) = 0,
from which the desired result follows. ∎
C. The following examples have been worked out in
Examples.
(a) Suppose that for 𝒱₀ ⊂ 𝒱 there exists a scalarly 𝒯-measurable multifunction Γ: T → V with compact convex values such that, for every v ∈ 𝒱₀,
v(t) ∈ Γ(t), μ-a.e.
Then 𝒱₀ is Nagumo tight, as is seen by working with h ≡ χ_Γ.
(b) Suppose that μ(T) < +∞ [...]

[...] for every ε > 0 and every i the function l_{ε,i} defined by setting
l_{ε,i} ≡ l + εh̃_i on (T∖B_i) × X × V₁,
l_{ε,i} ≡ +∞ on (T∖B_i) × X × (V∖V₁),
l_{ε,i} ≡ 0 on B_i × X × V.
Just as in the lemma above, it follows that l_{ε,i} satisfies (3.4); also, it trivially satisfies (3.1) and (3.5). By Theorem 3.B the integral functionals I_{l_{ε,i}} are seminormal on 𝒳 × 𝒱. Hence J: 𝒳 × 𝒱 → R̄, defined by
J ≡ sup_{ε>0, i∈ℕ} I_{l_{ε,i}},
is seminormal on 𝒳 × 𝒱. Also, J coincides with I_l on the set 𝒳 × 𝒱₀. ∎
Observe how the conditions and statements of the above result are strictly relative to V₁ and 𝒱₁, and how the special nature of the functions h̃_i of the preceding lemma plays a crucial role in obtaining the proper extensions to the general framework; cf. [1a, b] for more instances of this.
Corollary. Suppose that {v_k}₁^∞ ⊂ 𝒱₁ is tight. Then there exist a subsequence {k′} of {k} and v_* in 𝒱₁ such that for every function g: T × V₁ → [0, +∞] satisfying
g(t, ·) is l.s.c. and convex on V₁, μ-a.e.,
we have
lim inf_{k′→∞} ∫*_T g(t, v_{k′}(t)) μ(dt) ≥ ∫*_T g(t, v_*(t)) μ(dt).
Moreover
v_*(t) ∈ ⋂_{p=1}^∞ cl co {v_{k′}(t) : k′ ≥ p}, μ-a.e.
Proof. Apply the previous theorem and Corollary 3.B. ∎
Note that in the above result we continue to use outer integration. As explained in Section 5, this corollary generalizes [1e, Theorem 1], which lies at the base of several applications given in [1e].

5. RELAXATION
In this section we shall see how the general framework of Section 4 actually generalizes the traditional framework for relaxed control theory, invented by L.C. Young, E.J. McShane, J. Warga and others [20], [21]. Unlike most authors, we shall allow the space of control points to be noncompact. The tightness instrument can then compensate for any loss of compactness. This leniency has great advantages, since it makes trajectories and their derivatives also accessible to an approach by relaxation [1e]. (Ironically, the control functions themselves are usually pushed off the scene by a deparametrization, so that the expressions "relaxed control theory" and "relaxed controls" have to be taken with a grain of salt; Young's original term "generalized curve" would
have been appropriate here.) In actual control problems such tightness arises from suitable growth conditions, conditions on the dynamics, etc.; see e.g. [1c, d], [7c], [20], [21].
Let U be a metrizable Lusin (alias standard Borel) space; i.e., U is homeomorphic to a Borel subset of a compact metric space (Ũ, ρ̃) [9]. Without loss of generality we may identify U with this Borel subset of Ũ, letting U inherit the relative topology from Ũ. Note that by this procedure compact subsets of U are also compact subsets of Ũ. Let V₁ ≡ M¹(U) be the set of all probability measures on (U, B(U)). As a consequence of our embedding, M¹(U) is a subset of V ≡ M(Ũ), the set of all bounded signed measures on Ũ; more precisely, M¹(U) is the set of all probability measures in M(Ũ) that are carried by the subset U. By Riesz's theorem, M(Ũ) is the topological dual of P ≡ C(Ũ), the set of all continuous functions on Ũ [9]. P is well known to be a separable Banach space for the usual supremum norm ‖·‖ [13]. Also, it is well known that the usual narrow (alias weak) topology on M¹(U) is the relative vague topology σ(M(Ũ), C(Ũ)), for which M¹(U) is metrizable and Borel in M(Ũ) [8, III.58-60]. The set 𝒱₁ of relaxed control functions can be identified with the set of all δ ∈ 𝒱 ≡ L^∞ with δ(t) ∈ V₁ ≡ M¹(U) μ-a.e. (Hence, every equivalence class in 𝒱₁ has a representant which is a transition probability from (T, 𝒯) into (U, B(U)); cf. [1g, Appendix A].) The elements of 𝒫 ≡ L¹ are known as Carathéodory functions [20]. The topology on 𝒱₁ is the relative σ(𝒱, 𝒫)-topology.

A. Let ℋ(T;U) be the set of all 𝒯 × B(U)-measurable functions h: T × U → [0, +∞] such that h(t, ·) is inf-compact on U for every t ∈ T. The following concept, inspired by [2], was given in [1a, e] (only part (a)).
Definition.
(a) A subset 𝒱₀ of 𝒱₁ is tight if for some h ∈ ℋ(T;U)
sup_{δ∈𝒱₀} ∫_T [∫_U h(t, u) δ(t)(du)] μ(dt) < +∞.
(b) A subset 𝒱₀ of 𝒱₁ is almost tight if there is a nonincreasing sequence {B_i}₁^∞ in 𝒯, with μ(⋂_{i=1}^∞ B_i) = 0, such that for every i
sup_{δ∈𝒱₀} ∫_{T∖B_i} [∫_U h_i(t, u) δ(t)(du)] μ(dt) < +∞   (5.1)
for some h_i ∈ ℋ(T;U).
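For orientation (our specialization, not the author's): when T is a singleton carrying a unit mass, the condition in (a) reduces to a classical Prohorov-type bound. With U = ℝ^m and h(u) = |u| (inf-compact, since its sublevel sets are closed balls):

```latex
% A set M_0 of probability measures on R^m with
\[
  \sup_{\nu \in M_0} \int_{\mathbb{R}^m} |u|\,\nu(\mathrm{d}u)
  \;=\; C \;<\; +\infty
\]
% is uniformly tight in the classical sense: Chebyshev's inequality gives
\[
  \nu\bigl(\{\, u : |u| > R \,\}\bigr) \;\le\; C/R
  \qquad \text{for every } \nu \in M_0,\ R > 0,
\]
% so every measure in M_0 puts mass at least 1 - C/R on the compact
% ball of radius R.
```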
Lemma.
(a) For h ∈ ℋ(T;U) the function h̃: T × M¹(U) → [0, +∞], defined by
h̃(t, ν) ≡ ∫_U h(t, u) ν(du),
is equivalent to a function in ℋ_N(T;V₁).
(b) Suppose that g: T × U → [0, +∞] is a 𝒯 × B(U)-measurable function with
g(t, ·) l.s.c. on U for every t ∈ T.
Then g̃ (defined analogously) is equivalent to a 𝒯 × B(V₁)-measurable function.
Proof.
(a) For arbitrary t ∈ T, h̃(t, ·) is trivially convex on V₁; also, it is inf-compact by [1g, Remark 2.6] (the latter fact also follows simply from our proof of Theorem 4.B by taking T to be a singleton). It remains to prove that there exists a function h′ ∈ ℋ_N(T;V₁) with h′(t, ·) = h̃(t, ·) μ-a.e., and this will follow from our proof of (b).
(b) By [1g, Appendix A] there exists a nondecreasing sequence {g_n}₁^∞ of Carathéodory functions such that for μ-almost every t ∈ T
lim_n ↑ g_n(t, u) = g(t, u) for every u ∈ U.
By the monotone convergence theorem, we have
g̃(t, ν) = lim_n ↑ ∫_U g_n(t, u) ν(du) for all ν ∈ V₁, μ-a.e.
Since the functions t ↦ ∫_U g_n(t, u) ν(du) are 𝒯-measurable for every fixed ν ∈ V₁, and the functions ν ↦ ∫_U g_n(t, u) ν(du) are continuous on the separable metrizable space V₁, it follows from [6, III-14] that g̃ is 𝒯 × B(V₁)-measurable. ∎
Since X is a metrizable Suslin space, part (b) of this lemma can be extended to functions on T × X × U; cf. the proof of [1e, Lemma A.1].
B. From 4.B we obtain immediately the following version of Prohorov's theorem for relaxed control functions.
Theorem. Suppose that 𝒱₀ is an almost tight subset of 𝒱₁. Then the following hold.
(a) 𝒱₀ is relatively weakly (sequentially) compact in 𝒱₁.
(b) For every 𝒯 × B(X × U)-measurable function g: T × X × U → R̄ with
g(t, x, u) ≥ φ₀(t) for all (x, u) ∈ X × U,
for some φ₀ ∈ L¹_ℝ, and such that g(t, ·, ·) is l.s.c. on X × U, we have that
I_g̃ is relatively seminormal on 𝒳 × 𝒱₀,
where g̃(t, x, ν) ≡ ∫_U g(t, x, u) ν(du).
Proof. Let {B_i}₁^∞ and {h_i}₁^∞ be as in (5.1). Define for every h_i the function h̃_i as in the preceding lemma (part (a)). Then it follows that 𝒱₀ is almost tight in the sense of (4.2). Now apply Theorem 4.B. ∎
We shall refrain from rephrasing Corollary 4.B in the present context. Instead, we state another, closely related consequence of that result. Recall that the relaxation of (an equivalence class of) a (𝒯, B(U))-measurable function u: T → U is the element ε_u in 𝒱₁, defined by
ε_u(t) ≡ the probability measure in M¹(U) concentrated at u(t), μ-a.e.
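The point of the embedding u ↦ ε_u is that integration against ε_u(t) is simply evaluation at u(t), so relaxed integral functionals extend the ordinary ones; in the notation above (a sketch):

```latex
% For the Dirac-type relaxed control \varepsilon_u of an ordinary
% control u and any integrand g,
\[
  \int_U g(t,w)\;\varepsilon_u(t)(\mathrm{d}w) \;=\; g\bigl(t, u(t)\bigr)
  \qquad \mu\text{-a.e.},
\]
% hence, with \tilde g(t,\nu) = \int_U g(t,u)\,\nu(\mathrm{d}u) as in
% the preceding lemma,
\[
  I_{\tilde g}(\varepsilon_u)
  \;=\; \int_T g\bigl(t, u(t)\bigr)\,\mu(\mathrm{d}t).
\]
```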
Corollary. Suppose that {u_k}₁^∞ is a sequence of (𝒯, B(U))-measurable functions such that {ε_{u_k}}₁^∞ ⊂ 𝒱₁ is tight. Then there exist a subsequence {k′} of {k} and a relaxed control function δ_* ∈ 𝒱₁ such that for every 𝒯 × B(U)-measurable function g: T × U → [0, +∞] satisfying
g(t, ·) l.s.c. on U, μ-a.e.,
we have
lim inf_{k′→∞} ∫*_T g(t, u_{k′}(t)) μ(dt) ≥ ∫*_T [∫_U g(t, u) δ_*(t)(du)] μ(dt).
Moreover
δ_*(t) is carried by ⋂_{p=1}^∞ cl {u_{k′}(t) : k′ ≥ p}, μ-a.e.
Proof. Apply Corollary 4.B. Only the last statement requires clarification. It is enough to point out that for every p and t ∈ T, any probability measure in the narrow closure of the convex hull of {ε_{u_{k′}}(t) : k′ ≥ p} must be carried by the set cl{u_{k′}(t) : k′ ≥ p}. ∎
This corollary coincides with [1e, Theorem 1] (as mentioned before, uniformly integrable negative parts can be handled by the trick of [14]). This result of [1e, Theorem 1] was extended in [1f] to the case where U is the countable union of metrizable Lusin spaces (such as the space V of Example 3.b). In this form, this result implies Theorem 3.C directly. It requires a modification of the tightness condition given in this section. To the present author it is an open question how this kind of result can be fitted into the abstract framework of Section 4 (and Section 3).
Acknowledgment. I wish to thank both referees for their helpful remarks.
REFERENCES

[1a] E.J. BALDER, On a useful compactification for optimal control problems, J. Math. Anal. Appl., 72 (1979), 391-398.
[1b] E.J. BALDER, Lower closure problems with weak convergence conditions in a new perspective, SIAM J. Control Optim., 20 (1982), 198-210.
[1c] E.J. BALDER, Prohorov's theorem for transition probabilities and its applications to optimal control, in: Proceedings 22nd IEEE Conference on Decision and Control, San Antonio (1983), 166-170.
[1d] E.J. BALDER, Existence results without convexity conditions for general problems of optimal control with singular components, J. Math. Anal. Appl., 101 (1984), 527-539.
[1e] E.J. BALDER, A general approach to lower semicontinuity and lower closure in optimal control theory, SIAM J. Control Optim., 22 (1984), 570-598.
[1f] E.J. BALDER, An extension of Prohorov's theorem for transition probabilities with applications to infinite-dimensional lower closure problems, Rend. Circ. Mat. Palermo, 34, no. 3 (1985).
[1g] E.J. BALDER, On seminormality of integral functionals and their integrands, SIAM J. Control Optim., 24 (1986), 95-121.
[1h] E.J. BALDER, Necessary and sufficient conditions for L¹-strong-weak lower semicontinuity of integral functionals, submitted for publication.
[2] H. BERLIOCCHI, J.M. LASRY, Intégrandes normales et mesures paramétrées en calcul des variations, Bull. Soc. Math. France, 101 (1973), 129-184.
[3] G. BOTTARO, P. OPPEZZI, Semicontinuità inferiore di un funzionale integrale dipendente da funzioni a valori in uno spazio di Banach, Boll. Un. Mat. Ital., 17-B (1980), 1290-1307.
[4] J.K. BROOKS, R.V. CHACON, Continuity and compactness of measures, Adv. Math., 37 (1980), 16-26.
[5] C. CASTAING, P. CLAUZURE, Semicontinuité des fonctionnelles intégrales, Travaux du Séminaire d'Analyse Convexe, Université des Sciences et Techniques du Languedoc, Montpellier (1981), 15.1-15.45.
[6] C. CASTAING, M. VALADIER, Convex Analysis and Measurable Multifunctions, Springer Lecture Notes in Mathematics, no. 580, Berlin (1977).
[7a] L. CESARI, Existence theorems for weak and usual optimal solutions in Lagrange problems with unilateral constraints I, Trans. Amer. Math. Soc., 124 (1966), 369-412.
[7b] L. CESARI, same title, II, ibidem, 413-429.
[7c] L. CESARI, Optimization - Theory and Applications, Springer-Verlag, Berlin (1983).
[7d] L. CESARI, Seminormality and upper semicontinuity in optimal control, J. Optim. Theory Appl., 6 (1970), 114-137.
[8] G. CHOQUET, Lectures on Analysis, Benjamin, Reading, Mass. (1969).
[9] C. DELLACHERIE, P.A. MEYER, Probabilités et Potentiel, Hermann, Paris (1975).
[10] K. FLORET, Weakly Compact Sets, Springer Lecture Notes in Mathematics, no. 801, Berlin (1980).
[11] A. GAVIOLI, A lower semicontinuity theorem for the integral of the calculus of variations, Atti Sem. Mat. Fis. Univ. Modena, 31 (1982), 268-284.
[12] G.S. GOODMAN, The duality of convex functions and Cesari's property (Q), J. Optim. Theory Appl., 19 (1976), 17-23.
[13] R.B. HOLMES, Geometric Functional Analysis and its Applications, Springer-Verlag, Berlin (1975).
[14] A.D. IOFFE, On lower semicontinuity of integral functionals, SIAM J. Control Optim., 15 (1977), 521-538.
[15] A. and C. IONESCU-TULCEA, Topics in the Theory of Lifting, Springer-Verlag, Berlin (1969).
[16] P.-J. LAURENT, Approximation et Optimisation, Hermann, Paris (1972).
[17] C. OLECH, A characterization of L¹-weak lower semicontinuity of integral functionals, Bull. Acad. Polon. Sci., 25 (1977), 135-142.
[18] R.T. ROCKAFELLAR, Integrals which are convex functionals I; II, Pacific J. Math., 24 (1968), 525-539; 39 (1971), 439-469.
[19] L. TONELLI, Sugli integrali del calcolo delle variazioni in forma ordinaria, Ann. Scuola Norm. Sup. Pisa, (2), 3 (1934), 401-450.
[20] J. WARGA, Optimal Control of Differential and Functional Equations, Academic Press, New York (1972).
[21] L.C. YOUNG, Generalized curves and the existence of an attained absolute minimum in the calculus of variations, C.R. Sci. Lettres Varsovie, C III, 30 (1937), 212-234.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986

GLOBAL MAXIMIZATION OF A NONDEFINITE QUADRATIC FUNCTION OVER A CONVEX POLYHEDRON

R. BENACER and PHAM DINH TAO
Laboratoire TIM3, Institut de Mathématiques Appliquées de Grenoble, B.P. 68, 38402 Saint Martin d'Hères Cedex (France)
Abstract. The problem of globally maximizing a nondefinite quadratic function is reduced to a linear program with an additional reverse convex constraint, and a finite pivoting algorithm is then proposed for finding globally maximizing points. A remarkable advantage of this procedure is that the set of constraints is not cumulative and the method requires only pivoting operations. Numerical examples are given to illustrate how the algorithm works in practice.

Keywords. Linear programs with reverse convex constraints. Indefinite quadratic programming. Global maximization of convex functions.
1. INTRODUCTION

Maximizing a negative definite quadratic function over a convex polyhedron has been studied by a number of authors as a typical concave programming problem. Significant results have been obtained in recent years ([1, 8, 9], see also the bibliography given by H. Tuy in [15]) for the problem of global maximization of a convex quadratic function, which is a special convex maximization problem. Fewer papers have been devoted to the problem of maximizing (or minimizing) an indefinite quadratic function ([1, 13, 14, 15]), which is a typical case of maximizing (or minimizing) a difference of two convex functions subject to linear constraints.

In this paper we present a new finite method for maximizing a nondefinite quadratic function, which needs only pivoting operations, does not involve cuts of the feasible region, and is guaranteed to terminate at a global solution. The method can be considered as a variant of E. Balas' algorithm [1] for nonconvex quadratic programming, which deals with generalized polar sets. Our algorithm is based on the reduction of this problem to a linear program with an additional reverse convex constraint. The difference is that in our algorithm, we do not need to solve a sequence of linear subproblems (see [1]).

In Section 2, by applying the Kuhn-Tucker theorem to this problem, it is shown that the problem of maximizing a quadratic function is equivalent to a linear program with an additional reverse convex constraint, in the sense that solving one implies the resolution of the other. In Section 3, a finite algorithm is proposed for solving the resulting equivalent problem; this is done by starting with a simplex defined by a subset of the set of constraints of this problem and containing the feasible region. We then proceed with a successive underestimation of the polytope defining the feasible region until another feasible solution better than the current one is found. We prove that the feasible solution generated by the algorithm defines a Kuhn-Tucker point of a globally optimal solution of (P). In Section 4, we illustrate the procedure with two numerical examples.
2. REDUCTION OF A GENERAL QUADRATIC OPTIMIZATION PROBLEM

Consider the general linearly constrained quadratic programming problem

(P)    Maximize f(x) = (1/2)(x | Cx) + (c | x) + α
       subject to x ∈ D = {x ∈ Rⁿ : Ax ≤ b, x ≥ 0},

where α is a real number, c and x are n-vectors, b is an m-vector and C is a symmetric n × n matrix. The Kuhn-Tucker theorem (see for example [10]) gives necessary conditions for optimality; applied to problem (P), it can be stated as follows:

If x* is a local (or global) solution of the problem (P), then there exist u*, y*, v* such that the point (u*, x*, y*, v*) is a Kuhn-Tucker point, i.e., satisfies the Kuhn-Tucker relations (here Aᵗ denotes the transpose of A):

    Ax + y = b,   −Aᵗu + Cx + v = −c,   u, x, y, v ≥ 0,        (2.1)

and

    (u | y) + (v | x) = 0.        (2.2)

Further, from (2.1), (2.2) we have, for any Kuhn-Tucker point (u, x, y, v),

    f(x) = (1/2)(x | Cx) + (c | x) + α
         = (1/2)(−(c | x) + (u | Ax) − (v | x)) + (c | x) + α
         = (1/2)((c | x) + (b | u) − (u | y) − (v | x)) + α
         = (1/2)((c | x) + (b | u)) + α.

It has been shown in [16] that the nonlinear constraint (2.2), with u, x, y, v ≥ 0, can be expressed in the form

    g(u, x, y, v) = Σ_{i=1}^m min(uᵢ, yᵢ) + Σ_{i=1}^n min(xᵢ, vᵢ) ≤ 0.        (2.3)
Since the function g(u, x, y, v) is concave, −g(u, x, y, v) is convex and the constraint −g(u, x, y, v) ≥ 0 is a reverse convex constraint in R^{2(m+n)}.
Let us denote

    z = (u, x),   w = (y, v),   q = (b, −c),   p = (1/2)(b, c),

    M = [  0    A ]
        [ −Aᵗ   C ],        (2.4)

where M is an h × h square matrix, while q, p, z and w are vectors in R^h, with h = m + n. Finally, from conditions (2.1), (2.3) and the notations (2.4), the problem (P) can be reduced to the following equivalent one:

(P)*   Maximize (p | z) + α
       subject to (z, w) ∈ F = {t = (z, w) ∈ R^{2h} : Mz + w = q, z, w ≥ 0},
                  (z, w) ∈ G = {t = (z, w) ∈ R^{2h} : g(z, w) ≤ 0}.

Clearly a point x* is an optimal solution of (P) if and only if there exist y*, u*, v* such that t* = (z*, w*) = (u*, x*, y*, v*) is an optimal solution of (P)*. The main difficulty with (P)* is due to the presence of the reverse convex constraint, which makes it impossible, in general, to solve the problem with the standard simplex method. The problem (P)* is known as a linear program with an additional reverse convex constraint (or with an additional complementarity condition).

3. CHARACTERIZATION OF AN OPTIMAL SOLUTION OF (P)
AND AN ALGORITHM

In what follows, for any nonempty convex polyhedral set S ⊂ Rⁿ, we denote the set of vertices and the set of edges of S by V(S) and E(S) respectively. For any nonempty set S ⊂ Rⁿ, ∂S denotes the boundary of S. We assume that (P) is feasible and has a finite optimum. Let us consider the general linear program with an additional reverse convex constraint

(LRCP)    Minimize (d | x) subject to x ∈ S ∩ H,
where S is an arbitrary convex polytope and H = {x ∈ Rⁿ : h(x) ≥ 0}, h being an arbitrary convex function. The algorithms developed in [6, 3] for solving this problem are based on the following characterization of a subset of the feasible region S ∩ H which can contain a solution of (LRCP).

Theorem 1 [3]. Let x̄ be an optimal solution of min{(d | x) : x ∈ S}. If h(x̄) < 0, then an optimal solution of (LRCP) lies in the set E(S) ∩ ∂H.

Now, from the structure of the function g in (P)*, we derive the following characterization of an optimal solution of (P).
Theorem 2. x* is an optimal solution of (P) if and only if there exist vectors u*, y*, v* such that

    (p | z*) + α = max{(p | z) + α : (z, w) ∈ V(F) ∩ ∂G}        (3.1)

(i.e., the set of solutions of (P)* is reduced to V(F) ∩ ∂G).

Proof. By the definitions of the set G and of the convex polyhedron F, we have ∂G ⊂ V(F). Since E(F) ∩ V(F) = V(F), the set of optimal solutions in Theorem 1 is reduced to V(F) ∩ ∂G. ∎
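The reduction of Section 2 can be checked numerically. The sketch below is our own illustration, not part of the paper: it assembles M, q and p of (2.4) from data (A, b, C, c, α) taken from Example 2 of Section 4, and verifies that the reported Kuhn-Tucker point satisfies Mz + w = q, the complementarity g(z, w) = 0, and f(x) = (p | z) + α.

```python
import numpy as np

# Data of Example 2 (Section 4); the variable names are our own.
A = np.array([[1.0, 1.0], [1.5, 1.0]])
b = np.array([11.0, 11.4])
C = np.array([[4.0, 1.0], [1.0, 0.0]])   # symmetric and indefinite
c = np.array([-10.0, 2.0])
alpha = -20.0
m, n = A.shape
h = m + n

# Notation (2.4): M z + w = q with z = (u, x), w = (y, v); rows are (2.1).
M = np.block([[np.zeros((m, m)), A], [-A.T, C]])
q = np.concatenate([b, -c])
p = 0.5 * np.concatenate([b, c])         # so that (p | z) + alpha = f(x)

def g(z, w):
    # Concave function (2.3) whose vanishing encodes complementarity (2.2).
    return np.minimum(z, w).sum()

# Kuhn-Tucker point of Example 2: u = (0, 13.6), x = (7.6, 0), y = (3.4, 0), v = (0, 4).
z = np.array([0.0, 13.6, 7.6, 0.0])
w = np.array([3.4, 0.0, 0.0, 4.0])
assert np.allclose(M @ z + w, q) and abs(g(z, w)) < 1e-12

x = z[m:]
f = 0.5 * x @ C @ x + c @ x + alpha
print(f, p @ z + alpha)                  # both equal 19.52
```

The identity f(x) = (p | z) + α holds only at Kuhn-Tucker points, which is exactly why (P) and (P)* share their optimal solutions.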
ALGORITHM. We assume that (P)* (or (P)) is feasible and has an optimal solution, and let β be a real number greater than the optimal value of (P) (i.e., β ≥ max{f(x) : x ∈ D}).

Initialization.
Step 0: Find a basic solution t⁰ = (z⁰, w⁰) of the system Mz + w = q, (z | w) = 0, z, w ≥ 0 (i.e., a Kuhn-Tucker point of (P)).
Step 1: Set F' = {s = (t, r) = ((z, w), (r₁, r₂)) ∈ R₊^{2(h+1)} : t ∈ F, −(p | z) + r₁ = −(p | z⁰), (p | z) + r₂ = β − α}; find a simplex S¹ of dimension h and of vertex s⁰ = (t⁰, r⁰) such that F' ⊂ S¹. Set k = 1 and go to iteration k.

Iteration k.
Step 2: If for every s ∈ V(Sᵏ) we have g(t) > 0, or g(t) = 0 implies (p | z) = (p | z⁰), then stop: x⁰ is a global optimal solution of (P), and t⁰ = (u⁰, x⁰, y⁰, v⁰) is a Kuhn-Tucker point of the optimal solution x⁰. Otherwise go to Step 3.
Step 3: Select s* ∈ V(Sᵏ) such that

    s* = argmax{(p | z) : s ∈ V(Sᵏ), g(t) ≤ 0}.

If t* ∈ V(F) ∩ ∂G, then stop: t* is an optimal solution of (P)*. Otherwise, find a variable sᵢ* which is negative and go to Step 4.
Step 4: 1) Determine the vertices of the set S̄ᵏ = {s ∈ Sᵏ : sᵢ = 0} and set Sᵏ⁺¹ = {s ∈ Sᵏ : sᵢ ≥ 0};
2) Delete all the s ∈ V(Sᵏ) such that s ∉ V(Sᵏ⁺¹) and set V(Sᵏ⁺¹) = V(Sᵏ) ∪ V(S̄ᵏ);
3) Set k = k + 1 and go to Step 2.
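Steps 2 and 3 amount to a scan of the current vertex list. The following sketch is a hypothetical helper of our own, not the authors' code; it makes the stopping test and the selection of s* explicit.

```python
def scan_vertices(vertices, g, p_dot_z, incumbent, tol=1e-9):
    """Steps 2-3: among the vertices of S^k, look for a point feasible for
    the reverse convex set G (g(t) <= 0) whose value (p | z) strictly
    improves on the incumbent Kuhn-Tucker value (p | z0)."""
    best, best_val = None, incumbent
    for t in vertices:
        if g(t) <= tol and p_dot_z(t) > best_val + tol:
            best, best_val = t, p_dot_z(t)
    # best is None: the stopping test of Step 2 holds and z0 is optimal;
    # otherwise best plays the role of s* in Step 3.
    return best, best_val

# Toy illustration with made-up vertices t = (z, w).
g = lambda t: sum(min(a, b) for a, b in zip(*t))   # complementarity gap
val = lambda t: sum(t[0])                          # stand-in for (p | z)
verts = [((1, 0), (0, 1)), ((2, 0), (0, 0)), ((0, 3), (1, 0))]
best, best_val = scan_vertices(verts, g, val, incumbent=1.0)
print(best, best_val)   # ((0, 3), (1, 0)) 3
```

In the algorithm proper, a returned vertex is then tested for membership in V(F) ∩ ∂G before either stopping or cutting the simplex at Step 4.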
Construction of the simplex S¹.

To formalize the construction of the h-simplex S¹ with vertex s⁰ such that the convex polyhedron F' ⊂ S¹, we use the following notations. As is well known, the convex polyhedron F' can be defined as follows:

    F' = {s ∈ R^{2(h+1)} : s_J = q* − M* s_{J'}, (s_J, s_{J'}) ≥ 0},

where card(J) + card(J') = 2(m + n + 1) = 2(h + 1), M* is an (h + 2) × h matrix, q* is an (h + 2)-vector, and s_J, s_{J'} are the basic and nonbasic variables respectively. The sets J and J' = {1, ..., 2(h + 1)} − J define the indices of the basic and nonbasic variables respectively; then card(J) = h + 2 and card(J') = h. The vertex s⁰ = (s_J, s_{J'}) = (q*, 0) is then a feasible solution of the system

(*)    s_J = q* − M* s_{J'},   s = (s_J, s_{J'}) ≥ 0.

Now, consider the polyhedral cone C with vertex s⁰ defined as

    C = {s ∈ R^{2(h+1)} : s_J = q* − M* s_{J'}, s_{J'} ≥ 0}.

Then S¹ is defined as follows:

    S¹ = {s ∈ C : Σ_{j∈J'} sⱼ ≤ λ},   where λ ≥ max{Σ_{j∈J'} sⱼ : s ∈ F'}.

Proposition [3]. Let I be the identity h × h matrix, and let eᵢ be the ith column of I; let each element j of the set J' be indexed by i ∈ {1, ..., h}, i.e., J' = {jᵢ, i = 1, ..., h}. Then:
1) F' ⊂ S¹;
2) S¹ is an h-dimensional simplex and V(S¹) = {sⁱ, i = 0, ..., h}, where, for i = 1, ..., h and jᵢ ∈ J',

    sⁱ_J = q* − λ M* eᵢ,   sⁱ_{jᵢ} = λ,   sⁱ_{J'−{jᵢ}} = 0.
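In the nonbasic coordinates s_{J'}, the simplex S¹ is just {s_{J'} ≥ 0 : Σ s_{J'} ≤ λ}, whose vertices are the origin and the points λeᵢ; the basic part is then recovered through s_J = q* − M* s_{J'}. The sketch below is our own, with illustrative values of M*, q* and λ.

```python
import numpy as np

def simplex_vertices(M_star, q_star, lam):
    """Vertices s^0, ..., s^h of S^1, stored as concatenations (s_J, s_J')."""
    hp2, h = M_star.shape                               # M* is (h + 2) x h
    verts = [np.concatenate([q_star, np.zeros(h)])]     # s^0 = (q*, 0)
    for i in range(h):
        sJp = lam * np.eye(h)[i]                        # s_J' = lam * e_i
        verts.append(np.concatenate([q_star - lam * M_star[:, i], sJp]))
    return verts

M_star = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
q_star = np.array([4.0, 4.0, 6.0, 1.0])
lam = 3.0
V1 = simplex_vertices(M_star, q_star, lam)
for s in V1:
    sJ, sJp = s[:4], s[4:]
    assert np.allclose(sJ, q_star - M_star @ sJp)       # lies in the cone C
    assert sJp.min() >= 0 and sJp.sum() <= lam + 1e-12  # and under the cut
print(len(V1))   # h + 1 = 3 vertices
```

Note that basic components of a vertex sⁱ may well be negative (here V1[1] has basic part (1, 4, 3, −0.5)); this is precisely the situation exploited at Step 3 of the algorithm.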
Remarks on the implementation.

1) The vertex s⁰ is degenerate in F', but in practice it is important to put the slack variable r₁ of the constraint (p | z) ≥ (p | z⁰) in the set J', so that we have (p | z) ≥ (p | z⁰) for every s ∈ V(Sᵏ).

2) If all the coefficients of the ith row of the matrix M* are positive and qᵢ* > 0, this constraint may replace the redundant constraint Σ_{j∈J'} sⱼ ≤ λ in the construction of the polytope S¹.

3) The initialization step can be carried out by several methods which provide sequences converging to a local maximum of (P) (see, e.g., [11, 16]).

4) For generating the vertices of Sᵏ at Step 4, see for example [1, 2, 5, 15].

5) In practice, if there is a feasible vertex s of V(Sᵏ) such that (p | z) > (p | z⁰) while other elements of V(Sᵏ) are not feasible for the set G, then we can choose the new Kuhn-Tucker point and restart the algorithm.
Convergence result.

Theorem 3. The algorithm finds an optimal solution of (P) in a finite number of steps.

Proof. As described above, owing to the structure of the collection of polytopes Sᵏ, k ≥ 1, and since the number of constraints in F' is finite, the algorithm stops at an iteration k with k ≤ h + 2. From the characterization (3.1) of optimal solutions of (P) and the inclusion F' ⊂ Sᵏ for all k ≥ 1, we deduce that the feasible solution obtained by the algorithm is a Kuhn-Tucker point of (P) which corresponds to an optimal solution of (P). ∎
4. NUMERICAL EXAMPLES

In order to illustrate the algorithm, we apply it to the following examples.

Example 1 [1].

    Maximize f(x) = −x₁ + (1/2)x₂ + (2/3)x₁² − (1/2)x₂²
    subject to 2x₁ − x₂ ≤ 3,   x₁, x₂ ≥ 0.

Example 2 [18]. This example is derived from the well-known Zwart counterexample to Ritter's method.

    Maximize f(x) = 2x₁² + x₁x₂ − 10x₁ + 2x₂ − 20
    subject to x₁ + x₂ ≤ 11,   1.5x₁ + x₂ ≤ 11.4,   x₁, x₂ ≥ 0.
By applying the algorithm, we obtain the following results.

a) In Example 1, the optimal solution is

    t* = (u*, x*, y*, v*) = (1, 2.25, 1.5, 0.0, 0.0, 0.0)

with the optimal value 0.75. For the starting Kuhn-Tucker point

    t¹ = (u¹, x¹, y¹, v¹) = (0, (0, 0.5), 3.5, (1, 0))

and β = 6 ≥ max{f(x) : x ∈ D}, we obtain the results presented in table (a).

b) The optimal solution of (P) in Example 2 is

    t* = (u*, x*, y*, v*) = (0.0, 13.6, 7.6, 0.0, 3.4, 0.0, 0.0, 4.0)

with the optimal value 19.52. For the starting Kuhn-Tucker point

    t¹ = (u¹, x¹, y¹, v¹) = (2, 0, 0, 11, 0, 0.4, 1, 0)

and β = 80, we obtain the results presented in table (b).
Table (a): the vertices s = (t, r) of V(Sᵏ) examined at iterations k = 1, 2, 3 for Example 1, with the corresponding values (p | z); entries marked (*) are infeasible for G. The optimal vertex, with value 0.75, is reached at iteration k = 3. (The numerical entries of the table are garbled in the source and are not reproduced here.)

Table (b): the vertices s* = (u*, x*, y*, v*, r₁*, r₂*) = (z*, w*, r*) = (t*, r*) selected at iterations k = 1, ..., 5 for Example 2, with the corresponding values (p | z). The optimal vertex, with value 19.52, is reached at iteration k = 5. (The numerical entries of the table are garbled in the source and are not reproduced here.)
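As an independent check of Example 2 (our own, not the authors'), one can sample the feasible region on a grid and compare with the reported optimum f(7.6, 0) = 19.52.

```python
import numpy as np

def f(x1, x2):
    # objective of Example 2
    return 2 * x1**2 + x1 * x2 - 10 * x1 + 2 * x2 - 20

xs = np.linspace(0.0, 7.6, 401)          # 1.5 * 7.6 = 11.4: the binding corner
ys = np.linspace(0.0, 11.0, 401)
X, Y = np.meshgrid(xs, ys)
feasible = (X + Y <= 11 + 1e-9) & (1.5 * X + Y <= 11.4 + 1e-9)
best = f(X, Y)[feasible].max()
print(best)                              # 19.52, attained at (7.6, 0)
assert abs(best - f(7.6, 0)) < 1e-9
```

The grid never beats the reported vertex, consistent with (7.6, 0) being the global maximizer despite the indefiniteness of the quadratic form.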
REFERENCES

[1] E. BALAS, Nonconvex quadratic programming via the generalized polars, SIAM J. Appl. Math., 28 (1975), 325-349.
[2] M. BALINSKI, An algorithm for finding all the vertices of convex polyhedral sets, J. Soc. Indust. Appl. Math., 9 (1961), 72-88.
[3] R. BENACER, PHAM DINH TAO, Two general algorithms for solving a linear program with an additional reverse convex constraint, submitted to Math. Programming (1985).
[4] R. BENACER, PHAM DINH TAO, Linear programming with reverse convex constraints, submitted to Math. Programming (1985).
[5] M.E. DYER, L.G. PROLL, An algorithm for determining all extreme points of a convex polytope, Math. Programming, 12 (1977), 81-96.
[6] R.J. HILLESTAD, S.E. JACOBSEN, Linear programs with an additional reverse convex constraint, Appl. Math. Optim., 6 (1980), 257-269.
[7] E.L. KELLER, The general quadratic optimization problem, Math. Programming, 5 (1973), 311-337.
[8] H. KONNO, Maximization of a convex quadratic function under linear constraints, Math. Programming, 11 (1976), 117-127.
[9] A. MAJTHAY, A. WHINSTON, Quasi-concave minimization subject to linear constraints, Discrete Math., 9 (1974), 35-59.
[10] O.L. MANGASARIAN, Nonlinear Programming, McGraw-Hill, New York (1969).
[11] PHAM DINH TAO, Algorithms for solving a class of nonconvex optimization problems. Methods of subgradients, submitted to Math. Programming (1985).
[12] PHAM DINH TAO, Algorithmes de calcul du maximum d'une forme quadratique sur la boule unité de la norme du maximum, Numer. Math., 45 (1984), 163-183.
[13] K. RITTER, A method for solving maximum problems with a nonconcave quadratic objective function, Z. Wahrscheinlichkeitstheor. Verw. Geb., 4 (1966), 340-351.
[14] P.P. ROUGH, The indefinite quadratic programming problem, Oper. Res., 27 (1979), 516-533.
[15] H. TUY, Global maximization of a difference of two convex functions, submitted to Math. Programming Study (1984).
[16] H. TUY, A general deterministic approach to global optimization via d.c. programming, Preprint, Institute of Mathematics, Hanoi (1985).
[17] R. WERNER, R. WETZEL, Complementary pivoting algorithms involving extreme rays, Math. Oper. Res., 10 (1985), 195-206.
[18] P. ZWART, Nonlinear programming: counterexamples to two global optimization algorithms, Oper. Res., 21 (1973), 1260-1266.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986

ON CONNECTIONS BETWEEN THE MAXIMUM PRINCIPLE AND THE DYNAMIC PROGRAMMING TECHNIQUE

F.H. CLARKE
Centre de recherches mathématiques, Université de Montréal, C.P. 6128, Succ. A, Montréal, Québec, H3C 3J7, Canada

and

R. VINTER
Department of Electrical Engineering, Imperial College, London SW7 2BT, England
Abstract.(¹) Let V(t, x) be the infimum cost of an optimal control problem, viewed as a function of the initial time and state (t, x). Dynamic programming is concerned with the properties of V(·, ·) and, in particular, with its characterization as a solution to a partial differential equation, the Hamilton-Jacobi-Bellman equation. Heuristic arguments have long been advanced to suggest that the costate function p(·) appearing in the Maximum Principle is given by

(*)    p(t) = −V_x(t, x₀(t)),

where x₀(·) is the minimizing state function of interest. In this paper we examine the validity of such claims, and find that (*), interpreted as a differential inclusion involving the partial generalized gradient, is indeed true, almost everywhere and at the endpoints, for a very large class of nonsmooth optimal control problems.

(¹) This report was published with the support of a grant from the Fonds FCAR pour l'aide et le soutien à la recherche.
1. INTRODUCTION

The issues addressed in this paper concern the relationship between the costate function that appears in the Maximum Principle on the one hand, and the value function associated with perturbations in the initial time and state on the other. For the most part, our framework is that of the following free final state optimal control problem:

    Minimize ∫₀¹ L(t, x(t), u(t)) dt + h(x(1))
    subject to
        ẋ(t) = f(t, x(t), u(t)),  a.e. t ∈ [0, 1],        (1.1)
        x(0) = x₀,
        u(t) ∈ U_t,  a.e. t ∈ [0, 1],                     (1.2)
        x(t) ∈ R_t,  all t ∈ [0, 1].                      (1.3)

The data for the problem are functions

    f: [0, 1] × Rⁿ × Rᵐ → Rⁿ,   L: [0, 1] × Rⁿ × Rᵐ → R,   h: Rⁿ → R,

a vector x₀ ∈ Rⁿ, and sets U ⊂ [0, 1] × Rᵐ, R ⊂ [0, 1] × Rⁿ. (Given a set A ⊂ [0, 1] × Rⁿ and a point t ∈ [0, 1], we denote by A_t the set {x : (t, x) ∈ A}.)

Given a subinterval I ⊂ [0, 1], a control function (on I) is a (Lebesgue) measurable function u(·): I → Rᵐ which satisfies (1.2) (with I replacing [0, 1]). A process (on I) is a pair of functions (x(·), u(·)) on I of which u(·) is a control function and x(·): I → Rⁿ is an absolutely continuous function which satisfies the differential equation (1.1) and the constraint (1.3) (with I replacing the interval [0, 1]), and for which s → L(s, x(s), u(s)) is integrable. If x(·) is the first component of some process, x(·) is called a state function. A process
(x(·), u(·)) on I is admissible if x(0) = x₀. The process is minimizing if it minimizes the value of the cost function ∫ L dt + h over all admissible processes.
A tube T is a subset of [0, 1] × Rⁿ of the form

    T = {(t, x) : t ∈ [0, 1], |x − z(t)| ≤ δ}

for some δ > 0 and some continuous function z(·). We also refer to this set as the δ-tube about z(·). We loosely refer to "state functions in a tube T about z(·)" when we mean state functions having graphs in a tube about z(·).
Let (x₀(·), u₀(·)) be a minimizing process contained in some tube. Then under certain conditions (the precise nature of which is unimportant for the moment), the Maximum Principle asserts that there exists a costate function p(·): [0, 1] → Rⁿ such that

    −ṗ(t) = p(t) · f_x(t, x₀(t), u₀(t)) − L_x(t, x₀(t), u₀(t)),  a.e. t ∈ [0, 1],        (1.4)

    −p(1) = h_x(x₀(1)),        (1.5)

and

    p(t) · f(t, x₀(t), u₀(t)) − L(t, x₀(t), u₀(t)) = max_{u∈U_t} {p(t) · f(t, x₀(t), u) − L(t, x₀(t), u)},  a.e. t ∈ [0, 1].        (1.6)
Now define the value function V(·, ·): [0, 1] × Rⁿ → R ∪ {−∞} ∪ {+∞}:

    V(t, x) = inf { ∫ₜ¹ L(s, x(s), u(s)) ds + h(x(1)) }.        (1.7)

Here the infimum is taken over processes (x(·), u(·)) on [t, 1] for which x(t) = x. When no such processes exist the value is +∞.

If V(·, ·) happens to be continuously differentiable on the interior of some tube T about x₀(·), we find under mild assumptions that V(·, ·) satisfies the Hamilton-Jacobi equation there:

    V_t(t, x) + min_{u∈U_t} {V_x(t, x) · f(t, x, u) + L(t, x, u)} = 0,  (t, x) ∈ T,        (1.8)
    V(1, x) = h(x),   x ∈ R₁.        (1.9)
Solving these equations for V(·, ·) is the basis of the Dynamic Programming technique. In elementary treatments of optimal control, the tie-up between the Maximum Principle and Dynamic Programming is typically described in the following terms (here we quote from [2, p. 229 et seq.], one of many possible sources): the relation is that "the adjoint (i.e. costate) variables ... equal the negative of the rate of change of the performance index with respect to the corresponding state variables." This simply means that the function

    q(t) := −V_x(t, x₀(t)),  t ∈ [0, 1],        (1.10)

serves as a costate:

    p(t) = −V_x(t, x₀(t)).        (1.11)

See also [1] and [6]. The grounds for this assertion are that (1.8) and (1.10) imply the following: for each t ∈ [0, 1] and x in a neighbourhood of x₀(t),

    0 ≤ V_t(t, x) + V_x(t, x) · f(t, x, u₀(t)) + L(t, x, u₀(t)),  with equality at x = x₀(t),        (1.12)

and for each t and u ∈ U_t,

    0 ≤ V_t(t, x₀(t)) + V_x(t, x₀(t)) · f(t, x₀(t), u) + L(t, x₀(t), u),  with equality at u = u₀(t).        (1.13)

Then (1.13) amounts to the "maximization of the Hamiltonian" condition (1.6) (when p(·) replaces q(·)). If, for each t, the right-hand side of (1.12) is a continuously differentiable function of x, we deduce from this inequality that

    (∂/∂x)[V_t(t, x) + V_x(t, x) · f(t, x, u₀(t)) + L(t, x, u₀(t))] = 0  at x = x₀(t).

If we also assume V(·, ·) is twice continuously differentiable, then we obtain

    (V_tx + V_xx · f + V_x · f_x + L_x)(t, x₀(t), u₀(t)) = 0.

But

    (d/dt) V_x(t, x₀(t)) = (V_tx + V_xx · f)(t, x₀(t), u₀(t)),

whence q(t) = −V_x(t, x₀(t)) satisfies the costate differential equation (1.4) for p(·). Finally, (1.9) implies

    −q(1) = h_x(x₀(1)),

which is the transversality condition (1.5).

This would appear to be the end of the story. But unfortunately this is not so, because it is difficult to justify these heuristic arguments in a setting of any generality. They are based (among other things) on the assumption that V(·, ·) is twice continuously differentiable and, to quote Pontryagin et al. [7, p. 7], "the assumption on the continuous differentiability of the functional does not hold in the simplest cases." This leads to our central preoccupation in this paper: the extent to which the relationship (1.11) bridging the Maximum Principle and the Dynamic Programming technique is valid, under more or less the hypotheses typically introduced to prove the Maximum Principle.
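The smooth heuristic can be checked on a toy linear-quadratic problem of our own (not from the paper): minimize ∫₀¹ (1/2)u² dt + (1/2)x(1)² with ẋ = u and x(0) = x₀. Here V(t, x) = x²/(2(2 − t)), the optimal state is x₀(t) = x₀(2 − t)/2, and the costate predicted by p(t) = −V_x(t, x₀(t)) is the constant −x₀/2.

```python
import numpy as np

x0 = 1.3                                  # arbitrary initial state

V  = lambda t, x: x**2 / (2 * (2 - t))    # value function of the toy problem
Vx = lambda t, x: x / (2 - t)
Vt = lambda t, x: x**2 / (2 * (2 - t)**2)
xopt = lambda t: x0 * (2 - t) / 2         # optimal state trajectory

p = -x0 / 2                               # candidate costate, -V_x along xopt
for t in np.linspace(0.0, 1.0, 11):
    # Hamilton-Jacobi equation: V_t + min_u { V_x u + u^2 / 2 } = 0,
    # the minimum over u being attained at u = -V_x.
    residual = Vt(t, xopt(t)) - 0.5 * Vx(t, xopt(t))**2
    assert abs(residual) < 1e-12
    # the claimed costate relation along the optimal trajectory:
    assert abs(p + Vx(t, xopt(t))) < 1e-12
# transversality: -p(1) = h'(x(1)) with h(x) = x^2 / 2
assert abs(-p - xopt(1.0)) < 1e-12
print("costate p(t) =", p)                # constant in t
```

Because f_x = 0 and L_x = 0 in this toy problem, the costate equation forces ṗ = 0, and indeed −V_x(t, x₀(t)) turns out to be constant in t.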
Our main result is that, for a very large class of nonsmooth problems, we can indeed choose a costate function which satisfies the inclusion

    −p(t) ∈ ∂_x V(t, x₀(t))        (1.14)

for t = 0 and 1, and on a subset of (0, 1) of full measure. We are able to show also that the inclusion (1.14) is still valid for some costate function when a terminal constraint of the form "x(1) ∈ C₁" is present, provided some verification function is substituted for V(·, ·). Here we take advantage of recent advances in our understanding of verification functions [4], which permit us to replace the terminally constrained problem by an equivalent free endpoint problem involving a nonsmooth penalty term, and thereby to reduce the problem to one we already know how to deal with.

We conclude this introduction by describing the basic idea behind our methods. For any measurable function α(·): [0, 1] → Rⁿ and control function u(·) on [0, 1], let x(·) be a solution to

    ẋ(t) = f(t, x(t), u(t)) + α(t),  a.e. t ∈ [0, 1].        (1.15)
F.H. Clarke and R. Vinter
82
If (1.8) were valid, we could write

    (d/dt) V(t, x(t)) = V_t(t, x(t)) + V_x(t, x(t)) · (f(t, x(t), u(t)) + α(t))
                      ≥ −L(t, x(t), u(t)) + V_x(t, x(t)) · α(t)

for all t ∈ [0, 1]. Integrating over [0, 1] we obtain

    V(1, x(1)) − V(0, x(0)) ≥ −∫₀¹ L(t, x(t), u(t)) dt + ∫₀¹ V_x(t, x(t)) · α(t) dt,

i.e.,

    ∫₀¹ L(t, x(t), u(t)) dt + h(x(1)) − V(0, x(0)) − ∫₀¹ V_x(t, x(t)) · α(t) dt ≥ 0.        (1.16)

We now treat the left-hand side of (1.16) as a cost function for an auxiliary optimal control problem with dynamics (1.15), where α(·) is treated as an additional control variable and where the initial and terminal values of the state function are unconstrained. Since

    ∫₀¹ L(t, x₀(t), u₀(t)) dt + h(x₀(1)) − V(0, x₀(0)) = 0

by definition of the value function, it follows from (1.16) that (x(·), u(·), α(·)) = (x₀(·), u₀(·), 0) is a solution to the auxiliary optimal control problem. This leads to the conclusion that there is an absolutely continuous function p(·): [0, 1] → Rⁿ satisfying the usual conditions in the Maximum Principle for the original problem (i.e., p(·) is a costate function). But the presence of the terms involving V(·, ·) in the cost and the α(·) control in the dynamics implies in addition

    −p(t) = V_x(t, x₀(t)),  a.e. t ∈ [0, 1]

(we get this from the maximization of the Hamiltonian condition) and

    −p(0) = V_x(0, x₀(0)),   −p(1) = V_x(1, x₀(1))

(from the transversality conditions). These are the desired relations.
Of course we are not in general justified in using (1.8), because we cannot expect V(·, ·) to be continuously differentiable. However, a kind of nonsmooth version of inequality (1.16) can be proved, and this provides an auxiliary problem, application of the Maximum Principle to which gives a costate function with the hoped-for properties. Derivation of the nonsmooth inequality involves consideration of state functions, in some extended sense, which are discontinuous, and some rather delicate approximations. Full details appear in [5].

2. HYPOTHESES
Throughout the paper, (x₀(·), u₀(·)) is taken to be a fixed minimizing process for the optimal control problem. Define

    f̃(t, x, u) = (f(t, x, u), L(t, x, u)).

The following hypotheses remain always in force.

H1: (t, u) → f̃(t, x, u) is L × Bᵐ measurable for each fixed x ∈ Rⁿ, where L × Bᵐ denotes the σ-algebra generated by product sets the first element of which is a Lebesgue subset of [0, 1] and the second of which is a Borel set in Rᵐ.

H2: U is a Borel measurable set.

H3: For each t ∈ [0, 1] and u ∈ U_t, f̃(t, ·, u) is Lipschitz continuous on R_t, with rank at most k(t) (k(t) does not depend on u), and k(·) ∈ L¹(0, 1).

H4: There exists a function c(·) ∈ L¹(0, 1) such that

    |f̃(t, x, u)| ≤ c(t) for all t ∈ [0, 1], x ∈ R_t, u ∈ U_t.

H5: h(·) is a locally Lipschitz continuous function.

As in the introduction ((1.7)), we take the value function
V(·, ·): R → R ∪ {−∞} ∪ {+∞} to be

    V(t, x) = inf [ ∫ₜ¹ L(s, x(s), u(s)) ds + h(x(1)) ],

where the infimum is taken over processes on [t, 1] which satisfy x(t) = x, and as before we set the value to +∞ when no such processes exist.
Concerning V(·, ·) we impose:

H6: There exists a tube R' about x₀(·), with R' ⊂ R, and a constant r such that for each t ∈ [0, 1], V(t, ·) is Lipschitz continuous on R'_t with rank at most r.

Hypothesis H6 on the value function is very mild. In fact it is almost superfluous, since one is most often interested in examples of the optimal control problem in which R = [0, 1] × Rⁿ; for such examples H6 is implied by the other hypotheses. See [5].

3. THE MAIN RESULT
Define the pseudo-Hamiltonian function H(·, ·, ·, ·):

    H(t, x, p, u) = p · f(t, x, u) − L(t, x, u).

Theorem 3.1. There exist an absolutely continuous function p(·): [0, 1] → Rⁿ and a subset Σ ⊂ [0, 1] of full Lebesgue measure such that p(·) satisfies the conditions of the Maximum Principle (the adjoint equation, in generalized-gradient form, and the maximization of the pseudo-Hamiltonian) and, in addition,

    −p(t) ∈ ∂_x V(t, x₀(t)) for all t ∈ Σ ∪ {0, 1}.        (3.1)

Here ∂_x(·) denotes the partial generalized gradient in the x-variable [3, p. 63], e.g.

    ∂_x V(t, x) = co {lim V_x(t, xᵢ) : xᵢ → x, V(t, ·) is differentiable at xᵢ, i = 1, 2, ...}.
Notice that the theorem incorporates the Maximum Principle, since the new condition (3.1) on the costate function implies the transversality condition

    −p(1) ∈ ∂h(x₀(1)),

the only component of the Maximum Principle not explicitly present. This follows from the fact that V(1, ·) = h(·).

4. SOME SPECIAL CASES
The assertions of Theorem 3.1 do not exclude the possibility that −p(t) ∉ ∂_x V(t, x₀(t)) for all t's in some null set in (0, 1). This is probably unavoidable under the hypotheses considered. However, the question arises whether this null set can be eliminated in special circumstances. We seek then additional hypotheses under which (3.1) can be strengthened to

    −p(t) ∈ ∂_x V(t, x₀(t)) for all t ∈ [0, 1].        (4.1)

One direction in which we can proceed is to introduce regularity hypotheses on the multifunction

    t → ∂_x V(t, x₀(t)).
Proposition 4.1. Suppose that the multifunction t → ∂_x V(t, x₀(t)) is upper semicontinuous on [0, 1]. Then condition (3.1) can be strengthened to (4.1).

Proof. Take any t ∈ [0, 1]. Let {tᵢ} be a sequence such that tᵢ → t as i → ∞ and −p(tᵢ) ∈ ∂_x V(tᵢ, x₀(tᵢ)) for i = 1, 2, ... (such a sequence exists since Σ ∪ {0, 1} has full measure). Since −p(tᵢ) → −p(t) and tᵢ → t as i → ∞, it follows from upper semicontinuity that −p(t) ∈ ∂_x V(t, x₀(t)). ∎
Corollary 4.2. Suppose that for every t ∈ [0, 1], V(t, ·) is convex. Then condition (3.1) can be strengthened to (4.1).

Proof. It is shown in [5] that V(·, ·) is continuous on some tube T about x₀(·). Take any t ∈ [0, 1], p ∈ Rⁿ and sequences {tᵢ}, {pᵢ} such that tᵢ → t and pᵢ → p, and suppose that pᵢ ∈ ∂_x V(tᵢ, x₀(tᵢ)). We must show that

    p ∈ ∂_x V(t, x₀(t)).        (4.2)

Let δ > 0 be such that (t, x₀(t)) + δB ⊂ T; B here is the unit ball in R^{n+1}. Choose any ξ ∈ (1/2)δB₁ (B₁ is the unit ball in Rⁿ). Since V(tᵢ, ·) is convex,

    V(tᵢ, ξ + x₀(tᵢ)) − V(tᵢ, x₀(tᵢ)) ≥ pᵢ · ξ.

For i sufficiently large, (tᵢ, ξ + x₀(tᵢ)) and (tᵢ, x₀(tᵢ)) both lie in T. By continuity we can pass to the limit:

    V(t, ξ + x₀(t)) − V(t, x₀(t)) ≥ p · ξ.

This inequality holds for all ξ ∈ (1/2)δB₁. Since V(t, ·) is convex, the inequality extends to all of Rⁿ. We have shown (4.2). ∎

There is at least one important case where the hypotheses of the corollary are satisfied, as we shall see later in this section.

Our next special case concerns control problems with smooth data. Our results involve the concept of strict differentiability: a function ψ(·): Rᵏ → R admits a strict derivative at x ∈ Rᵏ, a matrix denoted by D_s ψ(x), if, for all v ∈ Rᵏ,

    lim_{x' → x, λ ↓ 0} [ψ(x' + λv) − ψ(x')]/λ = D_s ψ(x) · v.

An important property of a function ψ which is strictly differentiable at a point x is that the function is Lipschitz continuous in a neighbourhood of x, and

    ∂ψ(x) = {D_s ψ(x)},

where ∂ψ(x) here refers to the generalized Jacobian

    ∂ψ(x) = co {lim ψ_x(xᵢ) : xᵢ → x, ψ(·) is differentiable at xᵢ, i = 1, 2, ...}.

This follows from [3, Propositions 2.2.1 and 2.6.2(e)].
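A one-dimensional illustration of our own: ψ(x) = |x| is differentiable away from 0 with ψ'(x) = sign(x), so the generalized gradient at 0 is co{−1, +1} = [−1, 1], and ψ is not strictly differentiable at 0, since the strict difference quotients have different limits along base points approaching 0 from either side.

```python
psi = abs

def quotient(x_base, step):
    # strict difference quotient (psi(x' + h v) - psi(x')) / h with v = 1
    return (psi(x_base + step) - psi(x_base)) / step

step = 1e-8
right = quotient(1e-4, step)    # base points x' -> 0 with x' > 0: limit +1
left = quotient(-1e-4, step)    # base points x' -> 0 with x' < 0: limit -1
print(right, left)
# the two cluster values +1 and -1 are the endpoints of the generalized
# gradient of |.| at 0; strict differentiability would require a single limit
assert abs(right - 1.0) < 1e-6 and abs(left + 1.0) < 1e-6
```

This is why the smooth hypotheses of the next proposition genuinely restrict the class of problems covered.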
Proposition 4.3. Suppose that for almost every t ∈ [0, 1] the functions

    x → f(t, x, u),   x → L(t, x, u)   (u ∈ U_t)

are strictly differentiable at x₀(t). Suppose also that

    x → h(x) is strictly differentiable at x₀(1).        (4.3)

Then condition (3.1) can be strengthened to (4.1).

Proof. Let δ > 0 be such that V(t, ·) is Lipschitz continuous on x₀(t) + δB for all t ∈ [0, 1]. For any t ∈ [0, 1] consider the auxiliary control problem (P_t):

    Minimize ∫ₜ¹ L(s, x(s), u(s)) ds + h(x(1))
    over processes (x(·), u(·)) on [t, 1] with x(t) = x₀(t).

By definition of V(·, ·) and the principle of optimality, a minimizing process for (P_t) is (x₀(·), u₀(·)) restricted to [t, 1]. The Maximum Principle says that there exists p^t(·): [t, 1] → Rⁿ such that

    H(s, x₀(s), p^t(s), u₀(s)) = max_{u∈U_s} H(s, x₀(s), p^t(s), u),  a.e. s ∈ [t, 1],

    −p^t(1) = D_s h(x₀(1)),

and

    −p^t(t) ∈ ∂_x V(t, x₀(t)).        (4.4)

(D_s(·) denotes the strict derivative in the x-variable. Note that H is strictly differentiable in x since f and L are.) Now define p(·) := p⁰(·). Choose any t ∈ [0, 1]; then p^t(·) and p(·) must coincide on [t, 1] (and in particular at t), since they are both solutions there to the differential equation in q(·):

    −q̇(s) = q(s) · D_s f(s, x₀(s), u₀(s)) − D_s L(s, x₀(s), u₀(s))
with terminal condition −q(1) = D_s h(x₀(1)), and this differential equation has a unique solution. But now (4.1) follows from (4.4). ∎

This covers the smooth case, since continuous differentiability implies strict differentiability. The proof of Proposition 4.3 is very much more straightforward than that available for Theorem 3.1, and readers with interest only in smooth problems might be tempted to think that Proposition 4.3 and its proof were adequate for their purposes, although of course the nonsmoothness of V persists. This however is not the case, to the extent that when we try to prove results analogous to (4.1) in the presence of a terminal constraint

    x(1) ∈ C₁        (4.5)

(see Section 5), we need to apply Theorem 3.1 to an auxiliary problem with a penalty term to accommodate the constraint (4.5). This penalty term must in general be nondifferentiable and so, even if the data is smooth, the uniqueness argument above does not automatically provide an easy way to obtain the desired inclusions. We can weaken slightly the hypothesis (4.3) in Proposition 4.3 to admit some degree of nonsmoothness:
Proposition 4.4.
The conclusions of Proposition 4.2 remain
valid if hypothesis (4.3) is replaced by either (a) h ( . ) is expressible as a sum of functions
in which hl(.) is strictly differentiable a t ~ ( 1 and ) h2(.) is concave a t zO(I),
or
89
The maximum principle and the dynamic programming technique
(b) V(O,-) is expressible as a sum of functions
in which
v1(*)
convex a t
is strictly differentiable at zo and
v2(.)
is
50.
Proof. We prove just (b); (a) is proved in a similar way. Let ( z ( . ) , u ( . ) )be any
, be any sequence of admissible admissible process on [ O , t ] . Let (zi(.)ui(.)) processes on [t,1] such that
But for each i, by definition of V ( 0 ,
e),
Passing to the limit we obtain rt
L ( s , z ( s ) , u ( s )d) s + V ( t , z ( t ) ) V(O,z(O)) 2 0. J O
Now select any g E dvz(z0). Then
However by the principle of optimality
It follows that, for any t E [0,1], ( z O ( - ) , u o ( . )restricted ) to [ O , t ] is minimizing for the control problem [Minimize l L d s + V ( t , z ( t ) )- g . ( z ( O )
-20)
Lover control processes ( z ( - > , u ( . )on ) [0,t1.
-vl(z(O))
F.H. Clarke and R. Vinter
90
We now invoke the Maximum Principle for each t E [0,1]. There results a costate p t ( . ) with the usual properties, and notably
- p ( t ) E W ( t ,z o ( t ) ) . -P"o)(=
+ 9) E dzV(O,zo).
D&(zo)
Let p ( . ) = pl(.). Then since, for each t , p t ( . ) and p ( . ) are both solutions t o
[
-q(s)
= Q ( S ) * Dsf(s, z O ( S ) , U O ( s ) ) - DsL(s,zO(s),u O ( s ) )
4(0)= -Dsvl(zo) - g,
and this equation has only one solution, (4.6) implies (4-1). I There is one case, of some importance, when additional hypothesis (b) of Proposition 4.4 is directly verifiable. This is t h e case of.optima1 control
problems involving linear dynamics and a convex terminal cost. Proposition 4.5. Assume that R = [0,1] x Rn,h(.) is convex and that we have
a ( t )* % + b ( t ) * u A ( t ) z B(t)u
+
1
for integrable vector and matrix-valued functions a ( . ) , b ( . ) , A ( * ) and B ( - ) .Suppose also that
Ut is compact, a.e. t E [ O , 11 and ess sup IlUtll < 00. t€[O,ll
Then condition (3.1) can be strengthened to (4.1). Proof. In view of Proposition 4.3 it suffices t o show that V ( 0 , is convex. In a )
fact V ( t , . )is convex for all t E [0,1] as we shall see. Define the (n matrix
by
+ 1) x ( n + 1) block matrix 2 and the (n + 1) x m block
-
[
] , B(6) - =
A(s)= 0 a(t) 0 A(t)
['z)]
The maximum principle and the dynamic programming technique
91
and let @(., .) be the transition matrix associated with the time-varying differential equation
-
y = A(t)y.
Now for each t E [0,1] define the mapping Mt from control functions in
according to
1
R"+'
1
Mt(U('))
=
@(l, S ) i J ( S ) U ( S ) ds
t
and the linear mapping A t : R"+' + R" by
[ :].
Atx = @ ( l , t ) Write
Rt = range{Mt}. By Aumann's theorem (see, e.g., [3, p. 256]), Rt is a closed convex set. Finally define
x:Rn+'
--+
R to be
Using the standard representation of the solution to inhomogeneous linear differential equations in terms of the transition matrix, we easily confirm that
-
V ( t , s )= min h ( A t s <ERt
Take arbitrary
21, x 2
E Rn and
c
E [0,1]. Let
+ 5). $1
(4.7)
(52) be a minimizing element
in the expression on the right-hand side of (4.7) when x takes value z 1 (q). N
We now deduce from the convexity of the function h and the convexity of the set Rt that
cV(t,xl)
+ (1 - E ) V ( ~ ,=Z c~x ()A t z i + + (1 - c ) x ( A t +~ 52) $1)
2 x ( A t ( c x 1 + (1 - € ) Z 2 ) , 5 ) where 5 = el1
+ (1 - c ) c 2 ,
+
2 V ( t , € Z l (1 - 4 2 2 ) .
F.H. Clarke and R. Vinter
92
So V ( t , . )is convex, as claimed. I We note incidentally that the problem which Proposition 4.5 addresses is an instance when the hypotheses of Corollary 4.2 are verifiable. Indeed, as we point out in the proof of Proposition 4.5, for the “linear-convex” problem
V ( t , . )is convex for all t E (0,1]. Thus Corollary 4.2 affords an independent proof of Proposition 4.5.
-Minimize h(z(1)) s.t. i ( t ) = f ( t , z ( t ) , u ( t ) )
x(0) = 50 41)E
c 1
u ( t ) E Ut, a.e.
-
x(t) E
Rt,
t E [O, 11
a.e. t E [O, 11
A process (on [ O , 11)will be called minimizing if it minimizes h(z(1)) over processes (x(.),.(.)) satisfying the constraints of the problem (which now include “x(1) E C*”). Let (xo(.),uo(.))be a minimizing process such that some tube about xo(*) is contained in R. Define the multifunction
F(-, .):
The maximum principle and the dynamic programming technique
93
In addition t o the hypotheses of Section 2 henceforth we impose the following . H7: F(., -) takes values compact, convex sets in Rn,and is continuous in the sense that dist(F(t’,x’),F(t,x))
-t
0 if (t’,x’)
+
(t,x) in R
(dist(., .) is the Hausdorff distance function); H8: there is a constant, k such that dist(F(t, x’),F(t, x)) 5 klx’ - 21, t E [0,1], x, x’ E R,; H9: F ( - ,-) is uniformly bounded on C2; H10: C1 is closed. The usual Maximum Principle now asserts existence of a real number X E ( 0 , l ) and an absolutely continuous function p ( . ) :[0,1] -, R” (A, p(.) not both 0) such that
and
Let us recall in passing that a minimizing process is customarily called “normal” if the number A in such conditions as above cannot be taken zero, i.e., the conditions are non-degenerate in the sense that they make some reference t o the cost function. The precise notion of normality required for later purposes is the following.
F.H. Clarke and R. Vinter
94
Definition 5.1. We say the minimizing process (zo(.), u o ( . ) ) is normal if the only absolutely continuous function p”(.) : [0,1] --t
Rn which satisfies
is
F(.) 3 0. Here H ( t , z , p ) , the “true Hamiltonian”, is
q t , z7i-q = SUP F *f(t,5,). UEUt
and c3H denotes the generalized gradient of ( z , p )
-+
H ( t , z , p ) . Normality
as here defined relates t o non-degeneracy of first order optimality conditions
expressed in terms of generalized gradients of the true hamiltonian (see [3, p. 1471). Even in the presence of a terminal constraint we may define the value function V ( . ,.): [0,1] x Rn
-+
R U {-a} U {+m}:
V ( t ,z) = inf(h(z(1))) where the infimum is over processes (z(*),u ( . ) ) on [t, 11 which satisfy the constraints of the problem, modified t o involve the time-interval [ t , l ] and the initial condition s ( t ) = z ( V ( . , . )take values
$00
if no such processes exist),
and we might hope to show in certain circumstances that p ( . ) can be chosen to satisfy the inclusion
-At)
E &V(t, z o ( t ) ) ,
appropriately interpreted. When we add the terminal constraint z(1) E C1 there is however the very real possibility t h a t for all t E [0,1], V ( t ,.) will not be Lipschitz continuous
95
The maximum principle and the dynamic programming technique
on a neighbourhood of ~ ( tand, ) worse, might take finite values only on a set having empty interior. It is not clear how our methods could be modified t o overcome these difficulties; indeed establishing such a relationship between some costate function and V ( . ,.) remains a challenging open problem. (Note that the inclusion, if it applies in situations having any degree of generality, must involve generalized gradients of possibly non-Lipschitz continuous, extended valued functions.) However the approach of earlier sections does provide a n answer to a related question. To pose this we need to bring verification functions into the discussion.
Definition 5.2. Let ( z ( . ) , u ( . ) ) be some process on [0,1] which satisfies the constraints. Then a Lipschitz continuous function
W ( - , . )defined on some 6-tube T6 about z(.) is said to be a verification function (for (z(.), .(.))I
if
min {{a - H ( t , z , - p ) } = 0, for all ( t ,z) E int{T6}} (a,P)€aw( t J )
(5.1)
The significance of there being a verification function W ( . ,.) for an admissible process ( z ( . ) , u ( . ) ) is that, if the graph of z(-) is contained in some tube T
c R , then
(.(.),.(a))
is a minimizing process among admissible pro-
cesses whose state functions lie in some (possibly narrover) tube about z(*); i.e., W ( . , . ) verifies the local optimality of ( z ( . ) , u ( . ) ) . This is shown in [3]. Of course there is an intimate relationship between verification functions for (zo(.), uo(-)) and the value function V ( - -): ,
V ( - .), satisfies conditions (5.2)
and (5.3) in the definition of a verification function V ( . , . ) ,and we can expect V ( . , . )to satisfy Eq. (5.1),which will be recognized as a version of the Hamilton-Jacobi equation, at least on neighbourhoods where V ( . , is Lipsa )
chitz continuous. The difference is that there can exist a (Lipschitz continuous)
96
F.H. Clarke and R. Vinter
verification function for (zo(-), uo(.)) even when V ( . , is very ill-behaved (has a )
effective domain with empty interior, say). Existence of verification functions has been established under quite mild conditions, as is illustrated by the following result, proved in [3] and [4].
Proposition 5.3. Suppose the minimizing process (zo(.), ? L O ( . ) ) is normal. Then there is a verification function for ( z o ( . ) , u ~ ( . ) ) .
After this long detour we are ready to return to our modified question: when can we choose some costate function p ( . ) and some verification function
W ( . , . )for ( z o ( - ) , u o ( - )such ) that
An answer is provided by the following theorem.
Theorem 5.4.
Suppose that
(zo(.), u o ( . ) ) is normal.
the minimizing process
Then there exists an absolutely conti-
nuous function p ( . ) : [0,1] + R",a verification function W ( . ,.)
for
(ZO(-), uo(-)) and
a subset C
c [0,1] of full measure such that
- P ( t ) E a z H ( t , 5 o ( t ) , p ( t ) , u o ( t )a-e. ) t E [0,1], H ( t , s o ( t ) , p ( t ) , u o ( t )= ) $ j y t , s o ( t ) , p ( t ) , u ) a-e. t E [0,11, - P ( l ) E NCl ( Z O ( 1 ) )
+ ah(Zo(1))
and - p ( t ) E a,W(t,zo(t)) for all t E {0} u {I}
1
-Minimize K1
max{lz(t)
- r(t)l - ~ , O } d + t K z d c l ( s ( l ) ) + h(z(1));
s.t. x = f ( t , z , u , ) , z(0) = 50, u ( t ) E ut; m
z(t) E
xt.
u C.
The maximum principle and the dynamic programming technique
91
Here X is the 2e-tube about zo(*).d c , (.) denotes the euclidean distance function. It is shown further that the value function for this problem is Lipschitz continuous on some tube about zo(.) and is a verification function relative to the original terminally contrained problem for
(20
( a ) ,
uo
(a)).
Now apply The-
orem 3.1 to this free endpoint problem. This is permissible since, as we easily check, the free endpoint problem satisfies the necessary hypotheses. There result the assertions of Theorem 5.1. 6. OUTLINE OF THE PROOF OF THEOREM 3.1
Fix 6 > 0 such that the &tube about zo(.) is contained in the set R’ of hypothesis H6.
We shall say a triple (z(.), u ( . ) ,a ( . ) )is a perturbated process (on [0,1]) if u(.) is a
measurable Rm-valued function such that u ( t ) = Rt, a.e., a(.) is a
measurable Rm-valued function such that
and
.(a)
is an absolutely continuous Rn-valued function such that
Let ( z ( . ) , u ( - )a, ( . ) )be any perturbated process such that z(.) is contained in the 6-tube about zo(-). Define
Lemma 6.1. The function t
H aa(t,a ) is
a bounded measurable
function and
This is the promised inequality replacing inequality (1.16) of the introduction. It is the technical crux of the proof.
98
F.H. Clarke and R. Vinter
To establish it we need to extend the notion of perturbated process t o admit a(.)%which belong to some class of generalized functions, and use some intricate approximation arguments. Full details are given in [5]. We denote by [Minimize
l1
the following control problem
L ( t , z ( t ) , u ( t )dt )
+
1
~ a ( t , - a ( t )dt)
+ h ( z ( 1 ) )- V(O,s(O))
+
i ( t ) = f ( t , z ( t ) , u ( t ) ) a ( t ) a.e. t E [O,1]; u(.)E
Ut a.e. t E [O,I];
a ( t )E B a.e. t E [O,1]; -s(t) E
X t all t E [O,I].
Here X 6 is the &-tube about zo(.). (B here is the open unit ball in R".) In
Pa, the control variable is the pair of vectors ( ~ ( t a) (, t ) ) .Lemma 6.1 can now be re-expressed in the following terms:
L e m m a 6.2. The extended process ( z o ( ~ ) , u o ( . ) = , a 0) solves
p6 * It is a straightforward task now to check that the hypotheses are satisfied under which the Maximum Principle [3, Theorem 5.2.11 applies, at the min). imizing process ( z ( . ) , u ( . ) , a ( - ) Bearing in mind that right endpoint problem, we conclude the following: let
is a free left and
The maximum principle and the dynamic programming technique
99
and H(t,zo(t)9p(t)9uo(t))= maxH(t,zo(t)9p(t),u)9 a’e* UEUt
[O,
l ] S
max{p(t) . cr - 0 6 ( t , - a ) } = 0, a.e. t E [O, I]. aEB
Take a point t E [091]at which (6.1) is true. then
where
S 6 ( t ) = co
u{azv(t,s) 11s :
- zo(t)ll
5 6)
for otherwise - p ( t ) and the convex set S,(t) can be strictly separated: i.e,9 there exists an n-vector a’ E B , -p(t)
- a‘ > max { - r . a’ : r E s6(t)}= a s ( t ,-a’)
in contradiction of (6.2). Up to this point 6 > 0 has been fixed. Let now {&}, 6, 1 0, be a sequence of numbers such that the &-tube about zo(.) is contained in R’ for all i. Replace
6 by &,and write
pi(.)
in place of the costate p ( - ) satifying equations (6.1)
and seq.. We deduce from the costate differential inclusion that
where k(.) is the function of hypothesis H3, and
for all i, where K is the Lipschitz rank of h ( . ) on some suitable neighbourhood of zo(1). Application of Gronwall’s inequality tells us that the p j ( . ) ’ s are uniformly bounded and the pi(.)’s are dominated by a common integrable function. The hypotheses are met then under which [3, Theorem 3.1.71 applies to the inclusions
h ( t ) E G ( t , p i ( t ) ) a.e. , t E [O, 11
F.H. Clarke and R. Vinter
100
where G ( t ,2) = d,H(t, z o ( t ) , t ,u g ( t ) ) . We conclude that, following extraction of a suitable subsequence,
for some absolutely continuous function p ( . ) :[0,1] --+
R" satisfying
We deduce
from the upper semicontinuity of the generalized gradient of a locally Lipschitz function. A simple contradiction argument, in which we employ hypotheses H3 and H4, leads to the conclusion
for all t E
niMi, where Mi is the set on which the Hamiltonian is maximized
for problem Psi. So (6.3) is true a.e.. Finally we examine the implication of "-pi(t) E
s,(t)a.e.."
Evidently
for all t belonging t o some subset S
c [0,1] of full measure. We claim t h a t ,
for any t E S , - p ( t ) E dZV(4 s o p ) ) .
(6.5)
For otherwise we can strictly separate the point - p ( t ) and the closed convex set d,V(t,zo(t)), i.e., there exists q E Rn and 7
-p(t)
*
-
'
max
-
> 0 such that
s q = D",(t;zo(t);q).
sEazV(t,zo(t))
The maximum principle and the dynamic programming technique
101
Here DO,V denotes the generalized directional derivative with respect to the z variable. (We have used the fact that D;V is the support function of the set
a,V.) Since the generalized directional derivative is upper semicontinuous in its arguments [3, Proposition 2.1.11
whenever
$
E {zo(t)}
+ E ~ Bfor, some e l > 0. But then
But this means
in contradiction of (6.4). (6.5) is true then on a subset of full measure. This completes our sketch of the proof of Theorem 3.1.
REFERENCES [l]A.E. BRYSON, Y.-C. HO, Applied Optimal Control, Blaisdell Pub-
lishing CO., Waltham Mass. (1969).
[2] S.J. CITRON, Elements of Optimal Control, Holt, Rinehart and Winston, New York (1969).
[3] F.H. CLARKE, Optimization and Nonsmooth Analysis, John Wiley, New York (1983). [4] F.H. CLARKE, R.B. VINTER, Local Optimality Conditions and Lips-
chitzian Solutions to the Hamilton- Jacobi Equation, SIAM J. Control and Opt., 21, 6 (1983), 856-870. [5] F.H. CLARKE, R.B. VINTER, The Relationship between the Mazimum
Principle and Dynamic Programming, Rapport CRM 1300, Universitd de Montreal (aoiit 1985).
F . R Clarke and R . Vinter
I02
[6] O.L.R. JACOB§, Introduction to Control Theory, Clarenden Press, Oxford (1974)
[7] L.S. PONTRYAGIN et al., The Mathematical Theory of Optimal
Processes, Wiley Interscience, New York (1962).
FERMAT Days 85: Mathematics for Optimization J.-B. Hiriart-Urruty (editor) 0 Elsevier Science Publishers B.V. (North-Holland), 1986
1 03
CONVEX FUNCTION OF A MEASURE THE UNBOUNDED CASE
F . DEMENGEL and R. TEMAM Laboratoire d'Analyse NumCrique UniversitC de Paris Sud - BSt. 425 91405 Orsay Cedex (France)
INTRODUCTION Some mechanical problems - in soil mechanics for instance, or for elastoplastic materials obeying to the Prandtl-Reuss Law - lead t o variational problems of the type
where .J, is a convex lower semi-continuous function such that is conjugate 1c,* has a domain B which is unbounded and convex. In order t o try t o solve this type of problems, we are led to study the possibility of extending the convex function .J, to an appropriate convex subset M+ of the space M'(R) of bounded measures on St. More precisely, we will study the case where $* is as follows
and B is a convex unbounded set of the space E of symmetric tensors of order 2
F. Demengel and R. Temam
104
on RN. If do is its asymptotic cone, A the polar cone of do, and if l l ~ ( B( l)) is bounded, we may extend $ to the convex subset of M'(R):
= { p E M1(R),p = h d s
M#)
+ ps,
ll,oh E L2(R,ds),((ps E A))
(2) }
After giving the definition of $(/A), we follow the study carried out in [3], were B was a bounded set. In fact, most of the results proved in [3] are still valid in the unbouded case, under the previous assumptions. The main result is a duality formula, and an approximation result by smooth functions for the metric:
1. ASSUMPTIONS 'AND GEOMETRIC PROPERTIES
Let X be a finite dimensional euclidean space and $ a function from X into R. We assume that $ is convex, 1.s.c. and proper (i.e., its range does not include -00,
and $ is not identically equal t o
+00.
We recall that the conjugate (or
Legendre transform) $* of $ is defined on X' = X by:
$* is also convex, 1.s.c. and proper, and $** coincides with $ (see for instance
J. Moreau [7], R.T. Rockafellar [8], and I. Ekeland
-
R. Temam [4]). Let us
B = {s E x : $*(s) < +w},
(1.1)
recall that the domain B of $* is defined as
We will suppose that
B is closed. The restriction of $* to B is continuous. OEDom$
('I
l l denotes ~ the projection on A in If.
(2) This notation is explained in the text; it means ( p s I p) 5 0, for any cp lying in
CF(n,Ao).
105
Convex function of a measure
(this last condition is equivalent t o saying t h a t $* is bounded from below on
B). Since $ is convex, [ $ ( t f ) - $ ( O ) ] / t is a nondecreasing function of t and then the limit
exists in
R u {+co}; $,is
called the asymptotic function of
and is lower
semi-continuous when $ is, since it may also be written as
Let us observe that under the hypothesis (1.4), $ ,
is nothing else than
the conjugate of the indicator function of B: $,oo(F)
= SUP F * 7 llEB
This result may be obtained by Theorems 8.5 and 13.3 of Rockafellar (cf. [8] .~
p. 66 and 116), or by the following simple argument. Let us assume that q belongs to B ; we have $(tE)
2 t E . rl - $ * ( r l ) ,
and then
q being arbitrary in B , we obtain that:
For the reverse inequality let us suppose t h a t [ belongs t o the domain of
4,.
Then $ ( t < ) / t is bounded when t goes to infinity. In particular, for all c > 0 and for all t , there exists qt E B such that
Dividing by t , we obtain
F. Demengel and R. Temam
106
for some constant c such that $* 2 c, which implies, letting t tend to infinity:
This proves the reverse inequality when
E belongs t o the domain of $m.
Now
if $m(() = $00, then for all A > 0, there exists T such that for t 2 T
Then there exists qt in B such t h a t
which implies: qt * t t 2 At
+c
and then Xk(€)
L
rlt€>
A+
C
t
*
This shows, since A is arbitrary, t h a t
X x 9 = +0O. According to the definition of
I
4,, we see t h a t $m is convex and positively
homogeneous (and 1.s.c. since $ is 1.s.c.). Its domain, denoted by A, is then a convex cone with vertex 0. We denote by A' its polar cone, or equivalently
Let us make more precise the link between A' and the set of asymptotic directions of B , which is defined as the set of t h e [ 's for which there exists q E B such that q
+ R+( C B.
(l)
~~
( l ) We recall that this condition does not depend on q
p. 6 3 ) .
EB
(see Rockafellar 181, Th. 8 . 3 ,
Convex function o f a measure
107
We now state
Proposition 1.1. Under the above assumptions Ao is the asymptotic cone of B , i.e., AO
Proof. Let
E
= { E :\d4 E B , 4 + ~ + (c B } .
be in A’, 4 in B and X in R’,
and let us assume t h a t 4
+ A(
is
not in B. Then using the geometric form of t h e Hahn-Banach Theorem, there exists (40,P) in X x R such that
and 4o.b L P for every b in B . Then 40 E A; in fact 4 * 40
I SUP 40 b€B
*
b = 1clo0(40) 5
P < +w.
We then have A v o . [ 5 0, and
P < 40 (X + 4 ) *
< 40 4, which contradicts (1.8). For the converse conclusion, let us suppose t h a t
Q
+ R+E C B (with 4 in
B ) , and let us show t h a t ( belongs t o A’. Taking f’ in A, and writing t h a t
llcl0(f’)< +a, we obtain:
and the supremum on the right-hand side cannot be finite unless equivalently
E E A’.
I
(‘ . ( 5 0, or
F. Demengel and R. Temam
108
Examples We now give some examples of convex sets appearing in mechanics, and which will be studied in this paper. For each of them, we will give the asymptotic cone and the polar cone A' (see further examples after Proposition 1.2): a) The bounded convex sets: when B is bounded, and using for instance
Proposition 1.1, we see that A' = {0}, A = X. The study which follows has already be made in that case (cf. Demengel-Temam [3]). b) Let E = X denote the space of symmetric tensors of order two on R", E D be the space of symmetric tensors with null trace, and let B = B D x RId, with B D bounded in E D . Convex sets of this type are studied in perfect plasticity (cf. Temam [S]). We have in such a case:
A' = 0 x RId,
A = E D x 0.
c) When B is a cone with vertex 0, then B = A' and
+w
= 0 on A. This case
will often occur in the following, and will be used for the Prandtl Reuss law in plasticity. We now deduce from the properties of B some results for
$w.
Proposition 1.2. The following statements (i), (ii) and (iii) are
equivalent: (i) There exists r < 00 such that
B c B(0,r) + A'
(1.9)
(ii) There exists r > 0 such that, for every ( in A, (1.10)
(This implies that A is closed.) (iii)
+ is at most linear at infinity on A:
Convex function of a measure
109
Proof. Let us first show that (i) is equivalent t o (ii). Suppose that (i) holds, then for ( in A
Using the lower semi-continuity of
4w, it is then easy t o see that A is closed.
On the other hand let us suppose that
$m
is at most linear at infinity. For all
q in B we write q = I I A ~ + I I A o ~where , we denote by IIAand IIAo respectively
the projection operators in X on A and A'. Then we have, using Proposition 2, p. 24, of Aubin-Cellina [l]:
which implies that
5 r,
InAql
and consequently that
B
C
+
B ( 0 , r ) A'.
Let us now see that (iii) is equivalent to (i) or (ii). Using the definition of
4,and the fact that $I*
is bounded from below on B (by (1.4)) we have:
5 sup
€
0
7
7
-c
tl EX
= $(,) which implies that
- c for € in X ,
4 is at most linear at infinity on A when (i) or (ii) is fulfilled.
For the reverse implication, when 1c, is at most linear on A, we get for every in A:
E
110
or
F. Dernengel and R. Ternarn
$w
is at most linear at infinity on A. I
Let us give two other examples of convex sets; the first one appears in mechanics and satisfies (i), the second does not. d) Let B be the convex set of R2 (cf. Fremond-Friaii [5]), defined as follows: (z,y)ER2 : - l < y < + l , z > - -
1 - y2
2}.
(1.11)
B contains 0 in its interior. Moreover, it is easy t o calculate A' and A; we obtain
A'={Y=O:X>O}
and n ^ B C (-2 5 X 5 0, -1 5 Y 5 +1} is bounded. e) Let us now consider a convex set for which (i) is not fulfilled:
B = { (X, Y ) : Y 2 X 2
+ m}
A' = { X = 0,Y A = { Y < 0) U (0,O) (A is not closed)
(with m E R+)
> 0)
n = A''
(1.12) (1.13)
= {Y 5 0}
IIxB = (anB ) U {Y = 0) which is not bounded. I
(1.14) (1.15)
The following Proposition 1.3 gives us further informations on the link between the behaviour of qh and its restrictions to A and A' respectively. We assume t h a t A is closed. Then: Proposition 1.3.
(i) The following inequality is valid for all ( in X:
Convex function of a measure
111
(ii) Let us assume that 0 belongs to B and that (1.17) where g is an even convex continuous function defined on
R,g* its conjugate. Then there exist
c1,
c3
2 0,
> 0, such that for every
cg
in X :
'$(El 2 and
c1
{cl161
+ c24(nA0E) - c3},
(1.18)
is strictly positive if 0 is in the interior of B .
(i) Proposition 1.1 gives that A' c B whenever 0 belongs to B and then :
W) = sup{E
rl - $ * ( r l ) }
*
sEB
2 SUP { E rl - s*(lrll)} *
11EAO
= sup{A sup x>o
( *
7 - g*(A)}
qEAo 111151
= suP{A
lnAoEI
- g*(A)}
X>O
= g((nAO[() since g = g**. Now, using (1.17), we note that
and we have obtained (1.18) in this case:
F. Demengel and R . Temarn
112
Now if 0 lies in the interior of B, assume t h a t B(0,r) C B for some r > 0. We then have:
Icl(
let us define
d b ,P) = &P, with P = (01 - 9 2 , P1 1 0,
P2
Pl)- d ( p ,(02)
(2.19)
1 0.
The additivity property allows us t o show that (2.19) does not depend on the choice of
p1
and
'p2
such that p =
'p1
- p 2 . T h e positivity of
& ( p , - ) easily implies t h a t J ( p , . ) is a bounded measure when $ ( p , 1) is
finite. Let us suppose secondly that the last term on the right of (2.18) is
+w; then for every A > 0 there exists vo in D,,j(L1p) such t h a t
Using once more Lemma 2.2 there exists
vj
-,
VO, vj
E
CF(R,B) such
t h a t for j sufficiently large, we have:
which implies t h a t the first term on the left-hand side of (2.18) is +oo too. (ii) Let us now suppose t h a t p belongs t o M+(R). Then $ ( p ) is a bounded measure on R,
11 o h is in L'(R, d z ) and we have for all v
that v(z) E B almost everywhere and for all
(o
in L'(R, dp) such
2 0 in &(R):
F. Demengel and R. Temam
122
and ps almost everywhere, ps = hS Ips 1,
This inequality shows t h a t $ ( p , ’p) is finite for all
’p
2 0 in Co(n), and
N
implies t h a t $ ( p , .) is a bounded measure when $ ( p ) also is, i.e., when p belongs to M+(R). We want now t o show t h a t q ( p , .) and $ ( p ) coincide in that case. We will show in a first step t h a t
and then t h a t
+ o h is d s integrable when $ ( h d z ,1) is finite, $ w ( h s ) is
ps integrable when & w ( h S I p S / , l ) is finite and that we have in addition
the inequalities
J,$oh 5 4(hdz,l)
(2.21)
J, $M(hS)dIPSl 5 J w ( hs IPs I, 1).
(2.22)
Let us show t o begin with equality (2.20). We proceed like in (31. Let 6
> 0 be given and
(v1,vZ)E
Co(R,B) such t h a t
Convex function of a measure
L $m(PS,P) -
hSV2PIPSI
Let now 0 be a set of Lebesgue measure 6, Ips I that ps is supported by 0. Since 11.’ such that
s,
IP(V1
123
+ h dx measurable, such
is singular, we may choose 6
>0
- vz)h - ( + * ( V l ) - +*(w2))1 dx c
Let us then consider the function
v = { v1
on n\0, on 0 ;
212
v is Ipl
+ dx integrable, with values in B ; moreover:
The left-hand side may be written in the form
We finally have obtained:
5 4 ( P , P)+ 3€, Y
which ends the proof of equality (2.20). Let us now suppose that $ ( p , - ) is a bounded measure; then there exists a constant c
> 0 such that
124
F. Demengel and R. Temarn
We then show that t,b o h E L 1 ( R , d x ) . For
E
> 0 and given 6 > 0, we
consider a disjointed covering of R by universally measurable sets A:, mes(Af) < 6, and we define t h e vectors X f :
Since II, = $I**, there exists
vE
ef
in B , such that
L'(R,dz), v(z) E B for almost every z in R and using $ ( O ) = 0:
11,(h6)ds 5
1
h6v - $*(v)
+ E mes R
R
5 & ( h d z , l )+ c mes R. Since h6
4
h in L'(R) when 6
--f
0, there exists a subsequence 6'
+
0
such that h 6 ' ( z )-, h ( s )everywhere on R. Using t h e lower semi-continuity of 11, and Fatou's Lemma, we may write $ o h ( z )dz 5
lirninf 4 0 h 6 ( z )dz, 6+0
5 lim inf 6-0
$ 0 h6(z)ds,
5 & ( h d z ,1)+ E mes R , which implies t h a t 11, 0 h is Lebesgue integrable. We could use the same argument of approximation by piecewise linear functions, replacing t,b by
t,bw, d x by lpsl and h by h S , t o obtain the analagous inequality
L+m
o h s IPs I
5 i ~ (s IP h s 1,1)+ E mes 0.
125
Convex function of a measure
And this ends the proof of Theorem 2.1. I We conclude this section by the proof of Lemmas 2.1 and 2.2:
Proof of Lemma 2.1. From the definition of L ' ( R , v , B ) , there exists a sequence wj E Co(R,X) which converges to v in
L1(R,v , X ) . Then, since the projector IIB in X onto
B is Lipschitzian, the sequence I I B w ~ belongs t o
CO(L?, B ) and
converges to v
in L ' ( R , v , B ) . Now assume that v E Co(R,B). By a mollification procedure we can contruct a sequence v j E Cr(R,B)which converges to v uniformly as j Such a sequence is obtained by setting only satisfy the conditions
pj
1 0 and
vj
--* 00.
= p j * v where the mollifiers p i must
JRN
pj(s)
dz = 1.
By combining these two remarks, we obtain for every v E L1(R,p,X),a sequence vi E Cr(R,B) which converges t o v p-a.e., and the result follows. Proof of Lemma 2.2.
(i) In a first step we approximate v by functions in L"(R, v,B ) , v = lpl+ dx (note that B is not necessarily bounded). For that purpose we set for every integer p,
We easily have Ivp(s)l5 p, Ivp(x)l 5 Iv(z)I and up(.) E B v-a.e. From the Lebesgue dominated convergence theorem, u p converges to v in L'(Q, v,B ) as p
--f
00.
Then, due to the convexity of $*, and since + * ( O ) = 0, $* 2 0,
Since $* is continuous on B , $*(v,(x)) p
--f
00,
+
$*(v(z)),ds-a.e.
in R , as.
and it follows from the Lebesgue dominated convergence theorem
that Jn $* o u p -+
sn $*
0
v.
F. Demengel and R. Temam
126
(ii) We are now reduced to the case where furthermore v takes its values in
B, for some p. Lemma 2.1, applied with
Y,B
replaced by 1p1
+ dx, B,,
provides a sequence v j E Cr(n,B,) which converges t o v in L1(R,v,B,) and u-a.e. continuity of
Then v j converges t o v dx-almost everywhere and by the
4*,
We also observe that
and then the Lebesgue dominated convergence theorem applies and shows that
ln 4* o v j converges to ln +* o v The proof is complete. I
3. APPROXIMATION RESULTS
We recall the hypothesis on $ of Section 2:
$J
is supposed to be convex,
B = dom $* is closed,
11 1 0 , WJ)= 0,
(34
A is closed and IIAB is bounded.
(3.2)
We may endow M + ( n ) with the distance defined by:
and Lemma 3.1 below proves that from every bounded sequence in M+(R) we can extract a subsequence which is vaguely convergent towards a measure in
M+ (0). L e m m a 3.1. Let pj be a sequence in MJ,(R) weakly convergent towards a bounded measure p . Then if $ ( p j ) is bounded in
M'(R), p belongs to MJ,(R)and we have
Convex function of a measure
Proof. Let us suppose by contradiction that p
127
4 M+(R). By
($(/A,1)) = +w,and for every real A > 0 there exists w in that
Theorem 2.1,
Cr(n,B ) such
n
Since pj is vaguely convergent to p , there exists J > 0 such that for j > J :
Therefore for all j > J :
and
sn $ ( p j ) cannot be bounded; this contradicts the hypothesis.
We thus
have shown that p E M+(R),and we may conclude for every 6 > 0 and p 2 0 in Co(n) the existence of w in C F ( 0 , B )such that
P
-
5 liminf
($(pj)
J-+W
s,
ll*(v)p
+
I $4+ 6,
which ends the proof of Lemma 3.1. I Let us now define a topology on M$(Rj which is less finer than that defined by the metric (3.3). This second topology may be defined by the family of semi-dist ances: (3.5)
( l ) This means
s,
+ ( p ) p 5 liminfj,=
s,
+(/A,) p, V p E C o ( n ) , p 2 0.
F. Demengel and R. Temam
128
for all v in
C(fi),
v 2 0. It is immediate that this topology is stronger than
the weak topology which can be defined by the family of semi-norms:
where v is in
C,(n)
We are now going t o show the following approximation result:
Lemma 3.2. For every p in M @ ( n )there , exists a sequence o f functions z c j in
C?(n),such that
Proof. Let $\theta_j$ be an increasing sequence of functions in $C_c^\infty(\Omega)$, $0 \le \theta_j \le 1$, $\theta_j$ being pointwise convergent to $\mathbf{1}_\Omega$; let $\varepsilon_j$ be a sequence of positive real numbers converging to zero, $\rho$ be a function in $D(\mathbf{R}^N)$, $\rho$ being even, and $u_j = \rho_{\varepsilon_j} * (\theta_j \mu)$. We are going to show that

By the convexity of $\psi^*$ we have, for all $v$ in $C_c^\infty(\Omega)$, $v(x) \in B$:

$$\psi^*(\rho_{\varepsilon_j} * v) \le \rho_{\varepsilon_j} * \psi^*(v),$$

and then, using $\int_{\mathbf{R}^N} \rho = 1$, this implies that
since $\rho_{\varepsilon_j} * v \in C_c^\infty(\Omega, B)$.$^{(1)}$ Using Lemma 3.3 just below, we have

On the other hand, since $\psi(\rho_{\varepsilon_j} * (\theta_j\mu))$ is bounded in $M^1$, we may extract from it a subsequence which is vaguely convergent towards $\nu \in M^1(\Omega)$. According to Lemma 3.1, we have

and therefore
Lemma 3.3. Let $\varphi_1$ and $\varphi_2$ be in $C(\bar\Omega)$, with $0 \le \varphi_1 \le \varphi_2$. Then for every $\mu$ in $M^1(\Omega, X)$

The proof of this lemma follows that of Lemma 2.4 established in [3], and consists in the use of the duality formula (2.16).
Remark 3.1. The proof of Lemma 3.2 contains the inequality in the sense of measures:
for all $\theta$ in $C_c(\Omega)$, $\theta \ge 0$. Indeed, let $u_j$ be in $C_c^\infty(\Omega)$; $u_j$ tends to $\mu$ vaguely on $\Omega$, $\psi(u_j) \to \psi(\mu)$ vaguely on $\Omega$. Let $\varepsilon < d(\operatorname{supp}\theta, \partial\Omega)$, and $\rho$ a regularizing function as before. We have, by the convexity of $\psi$ and using $\psi(0) = 0$:

Now the convergence of $u_j$ towards $\mu$ implies that of $\rho_\varepsilon * \theta u_j$ towards $\rho_\varepsilon * \theta\mu$; using Lemma 3.1, we then have

$$\le \liminf_{j\to\infty} \psi(\rho_\varepsilon * \theta(u_j)).$$

It is clear that the convergence of $\psi(u_j)$ towards $\psi(\mu)$ implies that of $\rho_\varepsilon * \theta\psi(u_j)$ towards $\rho_\varepsilon * \theta\psi(\mu)$. We finally have obtained
APPENDIX

To begin with, let us recall the definition of a Carathéodory function:

Definition A.1. A function $f(s, u)$ of two variables, $s \in \Omega$, $u \in \mathbf{R}$, possibly taking the values $\pm\infty$, is a Carathéodory function if it is measurable in $s$ for every $u$ and continuous in $u$ for almost every $s$.

Definition A.2. $u_n \to u$ in measure if $\forall \varepsilon > 0,\ \exists N,\ \forall n \ge N$: $\operatorname{mes}\{x : |(u_n - u)(x)| > \varepsilon\} < \varepsilon$.
In fact, it is easy to see that Definition A.2 is equivalent to the following:

Definition A.3. $u_n \to u$ in measure if

$$\forall \varepsilon > 0,\ \forall \delta > 0,\ \exists N,\ \forall n \ge N:\quad \operatorname{mes}\{x : |(u_n - u)(x)| > \delta\} < \varepsilon.$$
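Definition A.3 can be exercised on a concrete sequence. The sketch below uses the hypothetical sequence $u_n = \mathbf{1}_{[0,1/n]}$ on $\Omega = [0,1]$, which converges to $u = 0$ in measure; the grid approximation is an illustrative assumption.

```python
# Numerical illustration of Definition A.3 for the hypothetical sequence
# u_n = indicator of [0, 1/n] on Omega = [0, 1], which converges to u = 0
# in measure: mes{x : |u_n(x) - 0| > delta} = 1/n < eps once n >= N > 1/eps.

def measure_exceeding(n, delta, grid=100000):
    """Approximate mes{x in [0,1] : |u_n(x)| > delta} on a uniform grid."""
    # u_n(x) = 1 for x <= 1/n and 0 otherwise, so the set is [0, 1/n]
    # whenever delta < 1.
    hits = sum(1 for i in range(grid) if i / grid <= 1.0 / n and 1.0 > delta)
    return hits / grid

eps, delta = 1e-2, 0.5
N = int(1 / eps) + 1                      # the N required by Definition A.3
for n in (N, 2 * N, 10 * N):
    assert measure_exceeding(n, delta) < eps
```

The same check with any smaller $\delta$ gives the same sets, which is exactly why the $\forall\delta$ quantifier of Definition A.3 costs nothing here.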
Let us recall Lemma 2.1 of [5]:

Lemma A.1. Let $\Omega$ be a set of finite measure. Then the operator $F$ transforms every sequence of functions that converges in measure into a sequence of functions that also converges in measure.

We can now prove the theorem; we may suppose that $F(0) = 0$ and show that $F$ is continuous at zero. Indeed, let us introduce, for a fixed $u$ in $L^p(\Omega, A)$, the operator $H$ defined on the convex cone $A - u$:$^{(1)}$

$$H(v) = F(v + u) - F(u).$$

Suppose by contradiction that there exists a sequence of functions $\varphi_n \in L^p(\Omega, A - u)$ such that $|\varphi_n|_{L^p(\Omega, E)} \to 0$,

$^{(1)}$ $A - u = \{w - u : w \in A\}$.
and such that

We may assume, by extracting a subsequence, that

and suppose for a while that we have proved the existence of sequences $\varepsilon_k$, $\varphi_{n_k}$, and sets $G_k \subset \Omega$, such that the following conditions are fulfilled:

Let us then consider the sets

$$D_k = G_k \setminus \bigcup_{i=k+1}^{\infty} G_i.$$

We have:

Let us now define the function $\psi$ by

We then have

$$> \frac{2a}{3} - \frac{a}{3} = \frac{a}{3};$$

$\psi$ lies in $L^p$ and so $H\psi \in L^1$. On the other hand we have

$$\to 0,$$

which is a contradiction. It remains to prove conditions (a), (b), (c) and (d) by induction. Suppose that $\varepsilon_1 = \operatorname{mes}\Omega$, $\varphi_{n_1} = \varphi_1$, $G_1 = G$, and suppose that $\varepsilon_k$, $G_k$, $\varphi_{n_k}$ have been constructed. Since $H(\varphi_{n_k}) \in L^1$ we may choose $\varepsilon_{k+1}$ such that for all $S$ with $\operatorname{mes} S < \varepsilon_{k+1}$:

$$\int_S |H\varphi_{n_k}|$$
A new set-valued second order derivative

An approximation $f''_\varepsilon(x_0; \cdot)$ of $f''(x_0; \cdot)$ is defined as

The advantage of this concept is that it is defined for all $\varepsilon > 0$, whether $f''(x_0; d)$ is defined or not. $f''_\varepsilon(x_0; \cdot)$ shares the properties of $f''(x_0; \cdot)$ (positively homogeneous of degree 2, positive) and is, moreover, l.s.c. So, it is easy to imagine that variants of the concepts below could be developed with $f''_\varepsilon(x_0; \cdot)$ instead of $f''(x_0; \cdot)$. The limit of $f''_\varepsilon(x_0; d)$ when $\varepsilon \to 0^+$ is precisely $f''(x_0; d)$ ([7]), which corroborates the role of "second order concept" expected from $f''_\varepsilon(x_0; \cdot)$.
2.2. Let $x_0^* \in \partial f(x_0)$. We define $f''(x_0, x_0^*; d)$ as follows:

$f''(x_0, x_0^*; \cdot)$ offers no interest when $x_0^* \in \operatorname{int} \partial f(x_0)$, since $f''(x_0, x_0^*; \cdot) = \psi_{\{0\}}$ (the indicator function of $\{0\}$) in such a case. When $f$ is differentiable at $x_0$, $f''(x_0, \nabla f(x_0); \cdot)$ is nothing else than $f''(x_0; \cdot)$. In short, $f''(x_0, x_0^*; \cdot)$ can be expressed as

$$f''(x_0, x_0^*; \cdot) = f''(x_0; \cdot) + \psi_{N_{\partial f(x_0)}(x_0^*)}(\cdot). \tag{2.8}$$

The philosophy in defining $f''(x_0, x_0^*; \cdot)$, when $x_0^*$ is chosen, consists in considering $f''(x_0; \cdot)$ only in the directions in which $x_0^*$ is active. In doing so, one takes a "convex part" of $f''(x_0; \cdot)$ into account.
Proposition 1. (i) $f''(x_0, x_0^*; 0) = 0$ and $f''(x_0, x_0^*; d) \ge 0$ for all $d$; (ii) $f''(x_0, x_0^*; \cdot)$ is convex and positively homogeneous of degree 2.

Proof. Only the convexity of $f''(x_0, x_0^*; \cdot)$ needs to be shown. When $d \in N_{\partial f(x_0)}(x_0^*)$ (which is the domain of $f''(x_0, x_0^*; \cdot)$), $f'(x_0; d) = (x_0^* \mid d)$, so that $f''(x_0, x_0^*; \cdot)$ can be viewed, on $N_{\partial f(x_0)}(x_0^*)$, as a limit of convex functions. Whence the convexity of $f''(x_0, x_0^*; \cdot)$. ∎
J.-B. Hiriart-Urruty
Example 1 with $x_0 = (0,0)$ and $x_0^* = (0,0) \in \partial f(x_0)$ serves for showing that $f''(x_0, x_0^*; \cdot)$ is not necessarily l.s.c. We have, for all $d = (d_1, d_2)$: $f''(x_0, x_0^*; d)$ takes the values $0$, $2d_1^2$ or $+\infty$, according to the position of $d_1$ with respect to $0$.

As a general rule, $f''(x_0, x_0^*; \cdot)$ is indeed continuous on the interior of $N_{\partial f(x_0)}(x_0^*)$, but a discrepancy between $f''(x_0, x_0^*; \cdot)$ and its l.s.c. hull may occur at points lying on the boundary of $N_{\partial f(x_0)}(x_0^*)$. The l.s.c. hull of $f''(x_0, x_0^*; \cdot)$, denoted by $\operatorname{cl} f''(x_0, x_0^*; \cdot)$, is positive, l.s.c., convex and positively homogeneous of degree 2. Thus $\operatorname{cl} f''(x_0, x_0^*; \cdot)$ is the square of a certain gauge function ([12], Corollary 15.3.1) or, equivalently here, the square of a support function ([12], Theorem 14.5). This motivates the next definition.
Definition 2. We set

$$\partial^2 f(x_0, x_0^*) = \left\{ x^{**} : (x^{**} \mid d) \le \sqrt{f''(x_0, x_0^*; d)} \ \text{for all } d \right\}, \tag{2.9}$$

$$\partial^2 f(x_0) = \bigcap_{x_0^* \in \partial f(x_0)} \partial^2 f(x_0, x_0^*). \tag{2.10}$$

$\partial^2 f(x_0, x_0^*)$ is called the second differential of $f$ at $(x_0, x_0^*)$ and $\partial^2 f(x_0)$ the second differential of $f$ at $x_0$.

$\partial^2 f(x_0, x_0^*)$ is the whole space whenever $x_0^*$ lies in $\operatorname{int} \partial f(x_0)$, so that only the $x_0^* \in \operatorname{bd} \partial f(x_0)$ are relevant in the definition of $\partial^2 f(x_0)$. The distinction between $\partial^2 f(x_0, x_0^*)$ and $\partial^2 f(x_0)$ may be of importance for second order expansions (see below) as well as for the calculus rules ([10]). We thus have defined two set-valued mappings: the first one

$$\partial^2 f(\cdot, \cdot): \operatorname{gr} \partial f \to \mathbf{R}^n, \quad (x_0, x_0^*) \mapsto \partial^2 f(x_0, x_0^*),$$

which is defined on the graph $\operatorname{gr} \partial f$ of the subdifferential of $f$; the second one
which is defined on the whole $\mathbf{R}^n$.

Proposition 2. $\partial^2 f(x_0, x_0^*)$ is a closed convex set containing the tangent cone $T_{\partial f(x_0)}(x_0^*)$ to $\partial f(x_0)$ at $x_0^*$. Its support function is the l.s.c. hull of $\sqrt{f''(x_0, x_0^*; \cdot)}$. $\partial^2 f(x_0)$ is a compact convex set, containing $0$, whose support function is the biconjugate of $\sqrt{f''(x_0; \cdot)}$.

Proof. We know that $\sqrt{\operatorname{cl} f''(x_0, x_0^*; \cdot)}$, which is also $\operatorname{cl} \sqrt{f''(x_0, x_0^*; \cdot)}$, is a support function. Relation (2.9) shows that $\operatorname{cl}\sqrt{f''(x_0, x_0^*; \cdot)}$ is precisely the support function of $\partial^2 f(x_0, x_0^*)$. Since

Whence the announced result follows. ∎
Remark 1. When $\partial f(x_0)$ is of full dimension, the affine hull of $\partial f(x_0)$ is the whole space. In such a case, "most" of the points lying on the boundary of $\partial f(x_0)$ are "smooth," in the following sense: for all $x_0^*$ belonging to a set dense in $\operatorname{bd} \partial f(x_0)$, the normal cone to $\partial f(x_0)$ at $x_0^*$ is a half-line $\mathbf{R}_+ d(x_0^*)$ directed by $d(x_0^*)$. The second differential of $f$ at $(x_0, x_0^*)$ is then the half-space

$$\left\{ x^{**} : (x^{**} \mid d(x_0^*)) \le \sqrt{f''(x_0, x_0^*; d(x_0^*))} \right\}.$$
Remark 2. Due to the relationship (2.8), we have that

Equality holds whenever $f''(x_0; \cdot)$ is convex. Also note that the $\partial^2 f(x_0, x_0^*)$, $x_0^* \in \partial f(x_0)$, are unbounded if and only if $f$ is not differentiable at $x_0$.

To enlighten the meaning of $\partial^2 f(x_0, x_0^*)$ and $\partial^2 f(x_0)$ along with their calculus, let us consider two examples. The first one deals with the function considered in Example 1. For $x_0 = (0,0)$ and $x_0^* \in \partial f(x_0)$, $\partial^2 f(x_0, x_0^*)$ looks like this:

Figure 3. $\partial^2 f(x_0, x_0^*)$ for $x_0^* = (0,0)$, $x_0^* = (2,0)$, and $x_0^* = (a, 0)$ with $a \in \,]0,2[$.

Consequently, $\partial^2 f(x_0)$ is the segment $[0,4] \times \{0\}$. A second example we look at is that of the maximum of $C^2$ functions. Although calculus rules ([10]) allow us to give inner and outer estimates of the second differential in such a case, it is interesting to go through the calculus for simple cases.
Example 2. Let $f: \mathbf{R}^n \to \mathbf{R}$ be a polyhedral function, that is, the maximum of a finite number of affine forms:

$$f: x \mapsto \max_{i=1,\dots,k} \{ (a_i \mid x) + b_i \}.$$

Given $x_0^* \in \partial f(x_0)$, $f''(x_0, x_0^*; \cdot)$ is the indicator function of $N_{\partial f(x_0)}(x_0^*)$; whence $\partial^2 f(x_0, x_0^*)$ reduces to the tangent cone $T_{\partial f(x_0)}(x_0^*)$ of $\partial f(x_0)$ at $x_0^*$. $f''(x_0; d)$ is null for all $x_0$. Hence $\partial^2 f(x_0) = \{0\}$ for all $x_0$. This is not surprising, since all the information about $f$ is obtained through first order expansions.
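The vanishing of all second order information for polyhedral functions can be watched numerically: along any direction, the second-order difference quotient of $f$ at $x_0$ is zero for small steps. The affine pieces $(a_i, b_i)$ below are hypothetical illustrations, not data from the text.

```python
# Sketch: for a polyhedral f(x) = max_i ((a_i | x) + b_i), the second-order
# difference quotient 2*[f(x0 + t*d) - f(x0) - t*f'(x0; d)] / t**2 vanishes
# for small t, i.e. f''(x0; d) = 0.  The pieces (a_i, b_i) are hypothetical.

def f(x, pieces):
    return max(sum(ai * xi for ai, xi in zip(a, x)) + b for a, b in pieces)

def fprime(x0, d, pieces, t=1e-6):
    """One-sided directional derivative f'(x0; d), by a small forward step."""
    xt = [xi + t * di for xi, di in zip(x0, d)]
    return (f(xt, pieces) - f(x0, pieces)) / t

pieces = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0), ((-1.0, -1.0), 0.0)]
x0, d = (0.0, 0.0), (0.3, -0.7)

t = 1e-4
xt = [xi + t * di for xi, di in zip(x0, d)]
quot2 = 2 * (f(xt, pieces) - f(x0, pieces) - t * fprime(x0, d, pieces)) / t**2
assert abs(quot2) < 1e-4          # second-order information is null
```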
Example 3. Let $f_1, f_2: \mathbf{R}^2 \to \mathbf{R}$ be defined as

We set $f = \max(f_1, f_2)$. What is $\partial^2 f(x_0)$ at the given point $x_0 = (0,0)$? $f_1$, $f_2$ are quadratic forms, so that $\partial^2 f_1(x_0)$ and $\partial^2 f_2(x_0)$ are elliptic sets:

For calculating $\partial^2 f(x_0, x_0^*)$, $x_0^* \in \partial f(x_0) = \operatorname{co}\{(2,0), (0,2)\}$, we have to consider three cases: $x_0^* = (2,0)$, $x_0^* = (0,2)$ and $x_0^* = (a, 2-a)$ with $a \in \,]0,2[$. The second differentials $\partial^2 f(x_0, x_0^*)$ look as follows:
Observe in this example that $\partial^2 f(x_0)$ contains $\partial^2 f_1(x_0) \cap \partial^2 f_2(x_0)$ and is contained in the convex hull of $\partial^2 f_1(x_0) \cup \partial^2 f_2(x_0)$. This is precisely the kind of estimates obtained for $\partial^2(\max_i f_i)(x_0)$ (cf. [10]). Second order estimates of $f(x_0 + \lambda d)$ in terms of $\partial^2 f(x_0, x_0^*)$ or $\partial^2 f(x_0)$ are now easy consequences of what has been laid out in §2.1 and §2.2.
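A minimal numeric sketch of Example 3, assuming (hypothetically) $f_1(x) = 2x_1 + x_1^2 + x_2^2$ and $f_2(x) = 2x_2 + x_1^2 + x_2^2$, choices consistent with $\partial f(x_0) = \operatorname{co}\{(2,0),(0,2)\}$ but not taken from the text: it checks that the directional derivative of $f = \max(f_1, f_2)$ at the origin is the support function of that segment.

```python
# Hypothetical realization of Example 3 (the exact f1, f2 are assumptions):
# f1(x) = 2*x1 + x1**2 + x2**2, f2(x) = 2*x2 + x1**2 + x2**2, giving
# grad f1(0,0) = (2, 0), grad f2(0,0) = (0, 2), df(0,0) = co{(2,0), (0,2)}.

def f(x1, x2):
    return max(2*x1 + x1**2 + x2**2, 2*x2 + x1**2 + x2**2)

def fprime(d1, d2, t=1e-7):
    """Directional derivative of f = max(f1, f2) at the origin."""
    return (f(t*d1, t*d2) - f(0.0, 0.0)) / t

# f'(0; d) equals the support function of co{(2,0),(0,2)}: max(2*d1, 2*d2).
for d1, d2 in [(1, 0), (0, 1), (1, 1), (-1, 2), (0.5, -0.3)]:
    assert abs(fprime(d1, d2) - max(2*d1, 2*d2)) < 1e-5
```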
Proposition 3. We have for all $\lambda > 0$:

for all $x_0^* \in \partial f(x_0)$ associated with $d$. Equality holds whenever $f''(x_0, x_0^*; \cdot)$ is l.s.c. at $d$.

Equality holds if $f''(x_0; \cdot)$ is convex.
What can be said about the second differentials of $f$ and $g$ when $f$ and $g$ are comparable in a neighbourhood of $x_0$? Let $f$ and $g$ be such that $f(x_0) = g(x_0)$ and $f(x) \le g(x)$ in a neighbourhood of $x_0$. If both $f$ and $g$ are twice differentiable at $x_0$, we know that $\nabla f(x_0) = \nabla g(x_0)$ and $(\nabla^2 f(x_0) d \mid d) \le (\nabla^2 g(x_0) d \mid d)$ for all $d$. Suppose now $f$ and $g$ are not differentiable at $x_0$. It is clear that $\partial f(x_0) \subset \partial g(x_0)$. The next proposition states what can be claimed about the second differentials of $f$ and $g$ at $x_0$.

Proposition 4. We have that
$$\partial^2 f(x_0, x_0^*) \subset \partial^2 g(x_0, x_0^*). \tag{2.13}$$

Hence

$$\partial^2 f(x_0) \subset \bigcap_{x_0^* \in \operatorname{bd}\partial f(x_0) \cap \operatorname{bd}\partial g(x_0)} \partial^2 g(x_0, x_0^*), \tag{2.14}$$

with the convention that $\bigcap_\emptyset \partial^2 g(x_0, x_0^*)$ is the whole space $\mathbf{R}^n$.

Proof. Let $x_0^* \in \partial f(x_0)$. Since $\partial f(x_0) \subset \partial g(x_0)$, we have that

Let $d$ be in $N_{\partial g(x_0)}(x_0^*)$, i.e., a direction in which $x_0^*$ is active for $g$. According to the inclusion above, $x_0^*$ is also active in $d$ for $f$. Thus
Consequently

Whence $f''(x_0, x_0^*; d) \le g''(x_0, x_0^*; d)$ for all $d$, and the inclusion (2.13) follows. Inclusion (2.14) comes from the definition itself of $\partial^2 f(x_0)$. ∎
2.3. Before going into the definition of what we call inner and outer second derivatives of $f$ at $x_0$, we need to recall some basic results concerning the geometry of ellipsoids in $\mathbf{R}^n$. The geometric concept in $\mathbf{R}^n$ associated with a symmetric positive semi-definite $(n,n)$ matrix $M$ is the ellipsoid $E = M^{1/2}(B)$, where $B$ denotes the closed unit ball in $\mathbf{R}^n$. The support function of $E$ is then

$$\sigma_E(d) = \sqrt{(Md \mid d)}. \tag{2.15}$$

Conversely, an ellipsoid $E$ with center $0$ in $\mathbf{R}^n$ can be represented as $E = N(B)$ for a certain matrix $N$. Two matrices $N_1$ and $N_2$ yielding the same ellipsoid $E$ are such that $N_1 \cdot {}^t N_1 = N_2 \cdot {}^t N_2$. Whence we have $E = M^{1/2}(B)$, where $M = N \cdot {}^t N$ is uniquely determined. Thus, there is a one-to-one correspondence between the set $\mathcal{E}$ of ellipsoids in $\mathbf{R}^n$ and the set $P_n$ of symmetric positive semi-definite $(n,n)$ matrices:

$$\mathcal{E} \ni E = N(B) \ \longleftrightarrow\ M = N \cdot {}^t N \in P_n, \qquad E = M^{1/2}(B).$$

When $M$ is positive definite, $E$ can also be represented as

$$E = \{x : (M^{-1}x \mid x) \le 1\}.$$
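Formula (2.15) and the correspondence $E = M^{1/2}(B) \leftrightarrow M$ can be checked numerically. The $2\times 2$ matrix $M$ below is a hypothetical example; the matrix square root uses the closed Cayley-Hamilton formula for symmetric positive definite $2\times 2$ matrices.

```python
# Check of (2.15): for E = M^{1/2}(B), sigma_E(d) = sqrt((M d | d)).
# M is a hypothetical symmetric positive definite 2x2 matrix; its square
# root is computed by the closed 2x2 formula (from Cayley-Hamilton):
# M^{1/2} = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M)).
import math

M = [[2.0, 1.0], [1.0, 3.0]]

a, b, c = M[0][0], M[0][1], M[1][1]
s = math.sqrt(a * c - b * b)                 # sqrt(det M)
t = math.sqrt(a + c + 2 * s)                 # trace of M^{1/2}
R = [[(a + s) / t, b / t], [b / t, (c + s) / t]]   # R = M^{1/2}, symmetric

d = (0.6, -0.8)
sigma_exact = math.sqrt(a*d[0]**2 + 2*b*d[0]*d[1] + c*d[1]**2)

# sigma_E(d) = sup over the boundary of E = {R u : |u| = 1} of (e | d):
sigma_num = max((R[0][0]*math.cos(th) + R[0][1]*math.sin(th)) * d[0]
                + (R[1][0]*math.cos(th) + R[1][1]*math.sin(th)) * d[1]
                for th in (2*math.pi*k/20000 for k in range(20000)))
assert abs(sigma_exact - sigma_num) < 1e-6
```

The brute-force maximum agrees with $\sqrt{(Md \mid d)}$ because $(M^{1/2}u \mid d) = (u \mid M^{1/2}d)$ is maximized at $u = M^{1/2}d / |M^{1/2}d|$.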
An old result, due to Alexandroff (1939), says that a convex function $f$ defined on $\mathbf{R}^n$ is "twice differentiable almost everywhere." The precise meaning of this statement is as follows: at almost every $x_0 \in \mathbf{R}^n$, $f$ has a second order expansion, in the sense that there exists a symmetric positive semi-definite matrix $\nabla^2 f(x_0)$ verifying

$$f(x) = f(x_0) + (\nabla f(x_0) \mid x - x_0) + \tfrac{1}{2}\,(\nabla^2 f(x_0)(x - x_0) \mid x - x_0) + o(\|x - x_0\|^2).$$

Clearly $f''(x_0; d) = (\nabla^2 f(x_0) d \mid d)$ at such $x_0$; whence $\partial^2 f(x_0)$ is the ellipsoid associated with $\nabla^2 f(x_0)$.

When we are at a point $x_0$ where $\partial^2 f(x_0)$ is not an ellipsoid, there is a priori no way of associating a unique positive semi-definite matrix with $\partial^2 f(x_0)$ which would play the role of $\nabla^2 f(x_0)$. One may however consider the set $\mathcal{F}$ of ellipsoids centered at $0$ containing $\partial^2 f(x_0)$ and the set $\mathcal{G}$ of ellipsoids centered at $0$ contained in $\partial^2 f(x_0)$. A criterion one can consider to take the "maximal" element in $\mathcal{G}$ and the "minimal" element in $\mathcal{F}$ is that of volume. Circumscribed and inscribed ellipsoids of extremal volume have been a question under investigation by mathematicians some decades ago; the papers [4, §6] and [5] contain the material in that respect and the appropriate references. For our concern, things are a little bit easier since we are imposing that the ellipsoids in $\mathcal{G}$ or $\mathcal{F}$ be centered at $0$.
Proposition 5. There is one and only one largest volume ellipsoid centered at $0$ contained in $\partial^2 f(x_0)$. There is one and only one smallest volume ellipsoid centered at $0$ containing $\partial^2 f(x_0)$.

For the sake of simplicity, we assume that $\partial^2 f(x_0)$ is of full dimension. If not, the same reasoning as the one we are going to develop should be carried out in the affine hull of $\partial^2 f(x_0)$.
Proof. The volume of an ellipsoid $E = M^{1/2}(B)$ is proportional to $(\det M)^{1/2}$. The ellipsoids $\underline{E}$ and $\overline{E}$ we are looking for correspond to (symmetric) positive definite matrices. We therefore are in the presence of two optimization problems:

$(\underline{P})$ Maximize $(\det M)^{1/2}$ subject to $M \in \mathring{P}_n$ and $\{x : (M^{-1}x \mid x) \le 1\} \subset \partial^2 f(x_0)$;

$(\overline{P})$ Minimize $(\det M)^{1/2}$ subject to $M \in \mathring{P}_n$ and $\partial^2 f(x_0) \subset \{x : (M^{-1}x \mid x) \le 1\}$.

Here $\mathring{P}_n$ denotes the set of symmetric positive definite $(n,n)$ matrices.

Consider firstly $(\underline{P})$. $\mathring{P}_n$ and $\partial^2 f(x_0)$ are convex sets. Moreover, if

$$\{x : (M^{-1}x \mid x) \le 1\} = M^{1/2}(B) \subset \partial^2 f(x_0) \quad\text{and}\quad \{x : (N^{-1}x \mid x) \le 1\} = N^{1/2}(B) \subset \partial^2 f(x_0),$$

then

$$\{x : ([\alpha M + (1-\alpha)N]^{-1}x \mid x) \le 1\} = [\alpha M + (1-\alpha)N]^{1/2}(B) \subset \partial^2 f(x_0)$$

for all $\alpha \in [0,1]$. This is easy to check by considering support functions. Therefore the constraint set (on $M$) in $(\underline{P})$ is convex.
$(\underline{P})$ is equivalent to maximizing $\log(\det M)$. The function $M \mapsto \log(\det M)$ is known to be strictly concave on $\mathring{P}_n$ ([2, p. 63]). Whence the uniqueness of $\underline{E}$; the existence of $\underline{E}$ offers no difficulty.

Consider now $(\overline{P})$. The existence of $\overline{E}$ is not difficult to observe either. The uniqueness of $\overline{E}$ is more tricky, since we are led to minimizing a
(strictly) concave function. Suppose $M$ and $N$ are two distinct solutions of $(\overline{P})$ and consider the following "combination" of $M$ and $N$:

$$[\alpha M^{-1} + (1-\alpha) N^{-1}]^{-1} \quad\text{for some } \alpha \in \,]0,1[.$$

Obviously $[\alpha M^{-1} + (1-\alpha) N^{-1}]^{-1}$ satisfies the constraints imposed on the matrices in $(\overline{P})$. Moreover

$$\log\{\det[\alpha M^{-1} + (1-\alpha) N^{-1}]^{-1}\} < \alpha \log(\det M) + (1-\alpha) \log(\det N).$$

Whence a contradiction with the fact that both $M$ and $N$ are solutions of $(\overline{P})$. The uniqueness of $\overline{E}$ is therefore proved. ∎
The preceding results lead to the following definitions.

Definition 3. We call inner second derivative (resp. outer second derivative) of $f$ at $x_0$ the unique symmetric positive semi-definite matrix, denoted by $\underline{\nabla}^2 f(x_0)$ (resp. $\overline{\nabla}^2 f(x_0)$), associated with the unique largest volume (resp. smallest volume) ellipsoid centered at $0$ and contained in (resp. containing) $\partial^2 f(x_0)$.

Obviously $\underline{\nabla}^2 f(x_0) = \overline{\nabla}^2 f(x_0) = \nabla^2 f(x_0)$ at those points $x_0$ where $f$ is twice differentiable. As a general rule, given $x_0^* \in \partial f(x_0)$, the quadratic form

plays the role of a quadratic (almost) lower bound, since

for all $\lambda > 0$.
3. A GLIMPSE OF FURTHER DEVELOPMENTS

We mention in this section some problems related to second differentials which have not been investigated in this paper, as well as a comparison result with the so-called generalized Hessian matrix for $C^{1,1}$ functions.
3.1. Some unsolved questions

Estimates of $\partial^2 f(x_0, x_0^*)$ can be obtained via calculus rules when $f$ is built up from other functions $f_i$ whose second differentials are easier to determine (cf. [10]). However, the exact evaluation of second differentials of $f$ may be difficult even if $f$ is a "simple" convex function.

Consider the case of support functions $\sigma: \mathbf{R}^n \to \mathbf{R}$; $\sigma$ is a positively homogeneous convex function, which can also be written as $\sigma_C$ with $C = \partial\sigma(0)$. It is immediate to observe that $\partial^2\sigma(0) = \{0\}$. What about $\partial^2\sigma(x_0)$ when $x_0 \ne 0$? We have that $\sigma''(x_0; d) = 0$ whenever $d = \alpha x_0$, $\alpha \ge 0$. For other $d$, the calculus of $\sigma''(x_0; d)$ requires us to know

Due to the particular structure of $\partial\sigma(x_0 + \lambda d)$ (supporting set $C_{x_0+\lambda d}$ of $C$ in the direction $x_0 + \lambda d$), the limit in (3.1) reminds us of some "directional curvature" of $C$ in the $d$ direction. A better insight into $\partial^2\sigma$ would give us a better understanding of the geometric structure of the boundary of $C$. Similar questions may be posed for the distance function $d_C$; estimates of $\partial^2 d_C(x_0, x_0^*)$ or $\partial^2 d_C(x_0)$ can however be obtained by a direct computation or using calculus rules ([14]).
Another problem which is worth mentioning is that of the relationship between the second differential of $f$ and that of its conjugate $f^*$. It is expected that, under suitable conditions, $\partial^2 f^*(x_0^*, x_0)$ is the polar set of $\partial^2 f(x_0, x_0^*)$.
3.2. Relationship with the generalized Hessian matrix of a $C^{1,1}$ convex function

We have defined in [9, §2] the generalized Hessian matrix for a $C^{1,1}$ function, that is, a function $f$ which is differentiable and whose gradient mapping $\nabla f$ is locally Lipschitz. Given such a function, the generalized Hessian matrix of $f$ at $x_0$ in Clarke's sense, denoted by $\partial^2_H f(x_0)$, is the set of matrices defined as the convex hull of the set

$$\{ M : \exists\, x_i \to x_0 \text{ with } f \text{ twice differentiable at } x_i \text{ and } \nabla^2 f(x_i) \to M \}.$$

$\partial^2_H f(x_0)$ is a (nonempty) compact convex set of symmetric matrices which reduces to $\{\nabla^2 f(x_0)\}$ whenever $f$ is twice continuously differentiable at $x_0$. A fundamental relationship between $\partial^2_H f(x_0)$ and some kind of generalized second order directional derivative $f^{\circ\circ}(x_0; \cdot)$ of $f$ at $x_0$ is as follows:

Assume now $f$ is $C^{1,1}$ and convex. $\partial^2_H f(x_0)$ consists of symmetric positive semi-definite matrices. How to compare $\partial^2_H f(x_0)$ and $\partial^2 f(x_0)$? As we said earlier, an ellipsoid $E = M^{1/2}(B)$ whose support function is $\sqrt{(Md \mid d)}$ is associated with each $M \in \partial^2_H f(x_0)$. We infer from (2.3) that

Thus, $\partial^2 f(x_0)$ is contained in the convex hull of the ellipsoids associated with the matrices of $\partial^2_H f(x_0)$:

Since the right-hand side of this inclusion is symmetric with respect to the origin, we actually have that

$$\operatorname{co}\{\partial^2 f(x_0) \cup -\partial^2 f(x_0)\} \subset \operatorname{co}\Big\{ \bigcup_{M \in \partial^2_H f(x_0)} M^{1/2}(B) \Big\}. \tag{3.4}$$
We show that equality holds for a large class of $C^{1,1}$ convex functions. An important construction giving rise to $C^{1,1}$ convex functions is the following:

where $g$ is twice continuously differentiable and convex. Assume there is $\bar{x}$ such that $g(\bar{x}) < 0$ [if not, $f$ would be twice continuously differentiable everywhere], and let $x_0$ be such that $g(x_0) = 0$. The gradient of $g$ is not null at such a point, and we have that

It results from this that

The ellipsoid $E_\alpha$ associated with $M_\alpha = \alpha\,\nabla g(x_0) \cdot {}^t\nabla g(x_0) \in \partial^2_H f(x_0)$ is nothing else than $\operatorname{co}(-\sqrt{\alpha}\,\nabla g(x_0), \sqrt{\alpha}\,\nabla g(x_0))$. Equality in (3.4) is therefore easy to check. Also observe that, in this example, the inner and outer second derivatives are as follows:
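The defining limit of the generalized Hessian can be watched on a one-dimensional instance. The sketch below assumes $g(x) = x$ and $f(x) = \frac{1}{2}\max(g(x), 0)^2$, one plausible reading of the construction above but an assumption, not necessarily the author's formula: the Hessians of $f$ near $0$ take the two values $0$ and $1$, and their convex hull $[0,1]$ is the generalized Hessian at $0$.

```python
# One-dimensional sketch of the generalized Hessian, on the assumed C^{1,1}
# convex function f(x) = (1/2) * max(x, 0)**2 (i.e. g(x) = x, a hypothetical
# choice).  grad f(x) = max(x, 0) is Lipschitz; f''(x) = 0 for x < 0 and
# f''(x) = 1 for x > 0, so the limits of Hessians at 0 are {0, 1}, and the
# generalized Hessian at 0 is their convex hull [0, 1].

def grad(x):
    return max(x, 0.0)

def hess(x, h=1e-6):
    """Central difference of the gradient (valid away from the kink at 0)."""
    return (grad(x + h) - grad(x - h)) / (2 * h)

left_limit = hess(-1e-3)      # Hessians from the left  -> 0
right_limit = hess(1e-3)      # Hessians from the right -> 1
assert abs(left_limit) < 1e-9
assert abs(right_limit - 1.0) < 1e-9
```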
REFERENCES

[1] J.-P. AUBIN, Lipschitz behaviour of solutions to convex minimization problems, Math. of Operations Research 9, no. 1 (1984), 87-111.

[2] E.F. BECKENBACH, R. BELLMAN, Inequalities, Springer-Verlag, Berlin (1965).

[3] H. BUSEMANN, Convex surfaces, Interscience Tracts in Pure and Applied Mathematics 6, Interscience Publishers, Inc., New York (1958).

[4] J.-L. GOFFIN, Variable metric relaxation methods, part I: A conceptual algorithm, Technical Report SOL 81-86, Systems Optimization Laboratory, Operations Research Department, Stanford University (Stanford, California, 1981).

[5] J.-L. GOFFIN, Variable metric relaxation methods, part II: The ellipsoid method, Mathematical Programming 30 (1984), 147-162.

[6] J.-B. HIRIART-URRUTY, Approximating a second-order directional derivative for nonsmooth convex functions, SIAM J. on Control and Optimization 20 (1982), 783-807.

[7] J.-B. HIRIART-URRUTY, Limiting behaviour of the approximate first-order and second-order directional derivatives for a convex function, Nonlinear Analysis: Theory, Methods and Applications 6 (1982), 1309-1326.

[8] J.-B. HIRIART-URRUTY, The approximate first-order and second-order directional derivatives for a convex function, in "Mathematical Theories of Optimization", Lecture Notes in Mathematics 979 (1983), 144-177.

[9] J.-B. HIRIART-URRUTY, J.-J. STRODIOT and V. HIEN NGUYEN, Generalized Hessian matrix and second-order optimality conditions for problems with $C^{1,1}$ data, Applied Mathematics and Optimization 11 (1984), 43-56.

[10] J.-B. HIRIART-URRUTY and A. SEEGER, Calculus rules on a new set-valued second-order derivative for convex functions, preprint Laboratoire d'Analyse Numérique, Université Paul Sabatier (Toulouse, 1985).

[11] C. LEMARECHAL and J. ZOWE, Some remarks on the construction of higher-order algorithms in convex optimization, Applied Mathematics and Optimization 10 (1983), 51-68.

[12] R.T. ROCKAFELLAR, Convex Analysis, Princeton University Press (1970).

[13] R.T. ROCKAFELLAR, Maximal monotone relations and the second derivative of nonsmooth functions, preprint University of Washington (Seattle, 1984).

[14] A. SEEGER, Doctoral Thesis, in preparation.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986

ON THE THEORY OF SUBDIFFERENTIAL

A.D. IOFFE
Profsoyusnaya 85-1-203
Moscow 117279 (U.S.S.R.)
Keywords: Generalized subdifferentials. Tangent cones. Nonsmooth analysis and optimization.
INTRODUCTION

The theory that will be briefly described consists of two parts: the general one, in which subdifferentials are studied in terms of their properties rather than constructions, and the special part, with emphasis on specific constructions.

In the first part we try to single out some basic properties of subdifferentials, the simplest ones that may serve as a definition, and (as few as possible of) additional properties that are crucial in applications. And, indeed, very few properties are enough to cover a big portion of possible applications.

The second part contains:

(a) a description of all tangent cones that can be obtained by means of the relations $x' + t'h' \in C$, $x' \to x$, $t' \to 0$, $h' \to h$. There are 40 such cones plus 35 of their recession cones (for 5 out of 40 are convex); many of them are new but, fortunately, almost all interesting cones have been known before;

(b) a description of all "tangentially generated" subdifferentials (i.e., supports of directional derivatives). It turns out that not every tangent cone gives rise to a subdifferential, and very few among the latter have reasonably good calculus. Namely, Clarke's generalized gradient is, in a certain sense, the only tangentially generated subdifferential having "full" calculus. There are also three more subdifferentials similar to that used by Frankowska (which is among the three) having "restricted" calculus, and that is all.

A similar work is still to be done for subdifferentials that are not tangentially generated. Among them there are subdifferentials with "full" calculus. Moreover, there is a reason to conjecture that one of them, called the (approximate) G-subdifferential, is absolutely minimal among those having full calculus.
Notations:

$\mathcal{N}_X$ is a base of neighbourhoods of the origin in $X$;
$X^*$ is the dual space, always considered with the weak* topology;
$(\cdot \mid \cdot)$ is the canonical bilinear form on $X^* \times X$.
1. GENERAL THEORY
1.1. Basic properties and the definition
Speaking of a subdifferential, we always mean a set-valued map that associates a set $\partial f(x) \subset X^*$ with every function $f$ (of a certain class) and every $x$ at which $f$ is finite. We set $\partial f(x) = \emptyset$ if $|f(x)| = \infty$.

This automatically excludes from our discussion such objects as Warga's derivate containers [23] and quasidifferentials of Dem'yanov and Rubinov [5]. With every subdifferential $\partial$ we associate a concept of normality: if $C \subset X$ and $\delta_C$ is the indicator function of $C$, we then set

$$N(C, x) = \partial \delta_C(x).$$

So let $\partial$ be a set-valued mapping as described above. The following seven properties will be referred to as the definition of subdifferential:

(d$_1$) $\partial f(x) = \partial g(x)$ if $f$ and $g$ coincide in a neighbourhood of $x$;

(d$_2$) $\partial(\lambda f)(x) = \lambda\, \partial f(x)$ if $\lambda > 0$;

(d$_3$) if $g(x, y) = f(x)$, then $\partial g(x, y) = \partial f(x) \times \{0\}$;

(d$_4$) $\partial f(x)$ is the subdifferential in the sense of convex analysis if $f$ is a convex continuous function;

(d$_5$) if a linear function $f(u) = (x^* \mid u)$ attains a local minimum on $C$ at $x$, then $-x^* \in N(C, x)$;

(d$_6$) if there are a cone $K \subset X$ and a $U \in \mathcal{N}_X$ such that

then $N(C, x) \subset K^\circ$, the polar cone of $K$;

(d$_7$) $\partial f(x) = \{x^* : (-1, x^*) \in N(\operatorname{epi} f, (f(x), x))\}$.
These properties are shared by all known subdifferentials, and their verification is easy, if not trivial. The meaning of the last two is that subdifferentials
should not be too big and must have a two-way relationship with normal cones.
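Axiom (d$_4$) anchors the abstract definition to convex analysis: for instance, for $f(y) = |y|$ the convex-analysis subdifferential at $0$ is the set of slopes satisfying the subgradient inequality, namely $[-1, 1]$. A minimal numeric sketch (the test function and grid are illustrative assumptions):

```python
# Illustration of (d4): for the convex function f(y) = |y|, the subdifferential
# at 0 in the sense of convex analysis is {s : |y| >= s*y for all y} = [-1, 1].

def is_subgradient_at_0(s):
    ys = [k / 100.0 for k in range(-300, 301)]
    # Subgradient inequality f(y) >= f(0) + s*(y - 0), with a tiny tolerance.
    return all(abs(y) >= s * y - 1e-12 for y in ys)

assert is_subgradient_at_0(-1.0) and is_subgradient_at_0(0.3) \
       and is_subgradient_at_0(1.0)
assert not is_subgradient_at_0(1.01) and not is_subgradient_at_0(-1.5)
```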
It seems that no trivial subdifferential may satisfy all the properties but, on the other hand, they are not sufficient for a meaningful calculus. We therefore add a few more properties that subdifferentials may have or not:

(a$_1$) $N(C \cap D, x) \subset N(C, x) + N(D, x)$ if certain conditions are satisfied. Usually we suppose that (a$_1$) holds under the following condition: one of the sets is epi-Lipschitz and $\operatorname{cl} N(C, x) \cap (-\operatorname{cl} N(D, x)) = \{0\}$. Similar conditions were independently offered by Mordukhovich [18] and Rockafellar [21] (for $X = \mathbf{R}^n$ in both cases), so we refer to it as the MR-condition.

(a$_2$) Let $L$ be a subspace; then

$B$ being the unit ball.

(a$_3$) Suppose that $f(u) = \inf_v g(u, v)$ and $f(x) = g(x, y)$. Suppose also that for any $V \in \mathcal{N}_Y$ there is a $U \in \mathcal{N}_X$ such that for any $u \in x + U$ there is a $v \in y + V$ such that $f(u) = g(u, v)$. Then

(In other words, the inclusion holds if the minimization problem admits a solution continuously depending on the parameter at the given point $x$.)

(a$_4$) ($X$ normed) $\partial p(C, x) \subset N(C, x)$ if $x \in C$; here $p$ is the distance function.

The list of additional properties can be extended if necessary (i.e., if a certain property cannot be obtained from these ones).
1.2. Restricted calculus (definition and (a$_1$))

The results stated in this section are consequences of the definition and (a$_1$). We give them without proofs. We of course suppose, here and in what follows, that the functions we consider belong to the class on which the subdifferential is defined.

To begin with, we recall the notions of singular subdifferential (à la Rockafellar [22]) and coderivative (associated with the given subdifferential). We set

$$\partial^\infty f(x) = \{x^* : (0, x^*) \in N(\operatorname{epi} f, (f(x), x))\},$$

and if $F$ is a set-valued map from $X$ into $Y$ and $y \in F(x)$, then

(If $F$ is single-valued, we write $D^* F(x)(y^*)$.)

Theorem 1. Suppose the functions $f_1, \dots, f_n$ are directionally Lipschitz at $x$ except at most one of them, and

$$x_i^* \in \partial^\infty f_i(x),\ i = 1, \dots, n,\ \text{and}\ x_1^* + \dots + x_n^* = 0 \ \Longrightarrow\ x_1^* = \dots = x_n^* = 0.$$

Suppose further that $f_1 + \dots + f_n$ attains a local minimum at $x$. Then

$$0 \in \partial f_1(x) + \dots + \partial f_n(x).$$

Further results of this section are more or less direct corollaries of the theorem.
Proposition 1. If $f$ is Lipschitz near $x$, then $\partial f(x)$ is nonempty and precompact, and $\partial^\infty f(x) = \{0\}$.

Proposition 2. If $f$ is strictly differentiable at $x$, then
Proposition 3 (mean-value theorem). Suppose $f$ is locally Lipschitz near the segment $[a, b] \subset X$. Then there exist $z \in [a, b]$ and $z^* \in \partial f(z) \cup \partial(-f)(z)$ such that

Let $F$ be a set-valued map from $X$ into $Y$, and $g$ be a function on $Y$. We set

$$(g \circ F)(x) = \inf\{g(y) : y \in F(x)\}.$$

Proposition 4. Suppose that $f = g \circ F$ attains a local minimum at $x$. Suppose further that $f(x) = g(y)$ for some $y \in F(x)$, $g$ is directionally Lipschitz at $y$, and $-y^* \in \partial^\infty g(y)$ and $0 \in D^* F(x, y)(y^*)$ imply $y^* = 0$. Then

Suppose $X$ and $Y$ are Banach spaces and $F$ a set-valued map from $X$ into $Y$ with closed graph. We set

$$\operatorname{sur}(F, x, y) = \liminf_{t \to +0} t^{-1} \sup\{r \ge 0 : B(y, r) \subset F(B(x, t))\}.$$

($B(w, a)$ stands for the ball of radius $a$ around $w$ in the corresponding space.) We set further

The first quantity is the "constant of surjection" of $F$ at $(x, y)$ and the second "the dual Banach constant" of $D^* F(x, y)$ (see [11] for more details).
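For a single-valued linear map $F(x) = Ax$ between Euclidean spaces, the constant of surjection reduces to the smallest singular value of $A$: the image of a ball of radius $t$ covers a ball of radius $\sigma_{\min}(A)\,t$ and no more. This reduction is a standard fact, not stated in the text; the matrix $A$ below is a hypothetical example.

```python
# For the single-valued linear map F(x) = A x on R^2, sur(F, x, Ax) is the
# smallest singular value of A.  A is a hypothetical invertible matrix.
import math

A = [[3.0, 1.0], [0.0, 2.0]]

# Singular values from the eigenvalues of A^T A (2x2 symmetric).
g11 = A[0][0]**2 + A[1][0]**2
g12 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
g22 = A[0][1]**2 + A[1][1]**2
tr, det = g11 + g22, g11 * g22 - g12 * g12
sigma_min = math.sqrt((tr - math.sqrt(tr*tr - 4*det)) / 2)

# Radius of the largest ball inside A(B): min over unit vectors v of |A v|.
radius = min(math.hypot(A[0][0]*math.cos(t) + A[0][1]*math.sin(t),
                        A[1][0]*math.cos(t) + A[1][1]*math.sin(t))
             for t in (2*math.pi*k/100000 for k in range(100000)))
assert abs(radius - sigma_min) < 1e-6
```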
Proposition 5 (a surjection theorem [11]). Suppose that there exists an open $U \subset X \times Y$ such that for any $(x, y) \in U \cap \operatorname{graph} F$ we have $c_k(x, y) \ge m$. Then $\operatorname{sur}(F, x, y) \ge m$, $\forall (x, y) \in U \cap \operatorname{graph} F$.

The next result follows directly from (a$_1$) rather than from Theorem 1.

Proposition 6. Suppose that $f$ and $g$ are Lipschitz near $x$ and $f(x) = g(x)$. Then

The list of results can be extended but, hopefully, it is enough to imagine the range of the restricted calculus and to compare it with the "full" calculus discussed in the next section.

1.3. Full calculus

In this section we suppose that the subdifferential satisfies the properties (a$_1$) through (a$_4$), in which case we say that the subdifferential has full calculus.

Theorem 2. Suppose that one of the functions $f$ and $g$ is directionally Lipschitz at $x$ and $\partial^\infty f(x) \cap (-\partial^\infty g(x)) = \{0\}$. Then

Note that the theorem is based only on (a$_1$) and (a$_2$). Unlike Theorem 1, Theorem 2 can be applied recursively, so it is enough to consider two functions. As in the previous section, most of the further results are connected with the theorem.
Proposition 7. If $f$ is strictly differentiable at $x$, then

Proposition 8 (chain rule). Suppose $F: X \to Y$ is continuous and $f(x) = g(F(x))$. Suppose $g$ is directionally Lipschitz at $y = F(x)$ and $y^* \in \partial^\infty g(y)$, $0 \in D^* F(x)(y^*)$ imply $y^* = 0$. Then

For this result (a$_3$) must be used.

Proposition 9. If $F: X \to Y$ is Lipschitz near $x$, then

$$s(y^*, h) = \sup\{(x^* \mid h) : x^* \in D^* F(x)(y^*)\}$$

is the support function of a bounded fan which is a prederivative of $F$ at $x$ [12].

The two last results of this section follow from (a$_1$) and (a$_4$). We say that $\partial$ is upper semicontinuous on a class $P$ if the set-valued map $x \mapsto \partial f(x)$ is upper semicontinuous for any $f \in P$ (recall that $X^*$ is supplied with the $\sigma(X^*, X)$-topology).

Proposition 10 (cf. Prop. 4). Suppose $\partial$ is u.s.c. on locally Lipschitz functions. Let $F$ be a set-valued map from $X$ into $\mathbf{R}^n$ with closed graph. Then

Let us finally consider the problem

$$(P)\qquad \text{minimize } f(x) \ \text{ subject to } \ 0 \in F(x).$$
Proposition 11. Suppose that $\bar{x}$ is a local solution to (P). Suppose further that $f$ is Lipschitz near $\bar{x}$, $\partial$ is u.s.c. on Lipschitz functions, and $F$ is a set-valued mapping into $\mathbf{R}^n$ with closed graph. There then exist $\lambda \ge 0$ and $y^*$ such that $\lambda + \|y^*\| > 0$ and

$$0 \in \lambda\, \partial f(\bar{x}) + D^* F(\bar{x}, 0)(y^*).$$
2. SPECIAL THEORY

In this part we assume that all spaces are Banach. Although the results are valid without this assumption after suitable reinterpretation, their formulation may take too much time, for the interpretation requires nonstandard analysis, by means of which some of the results were first obtained.
A. Tangent cones
A.1. A description

Throughout this section, a set $C \subset X$ and a point $x \in C$ are fixed. We shall use the following notations:

$$x' := \{x_n\} \subset C, \quad x_n \to x$$

(so that, say, $\forall x'$ means "for any sequence of elements of $C$ converging to $x$", etc.); $Qx'$ stands either for $\forall x'$ or for $\exists x'$ or for $x' = x$ (which means that all elements of the sequence coincide with $x$; in this case we write $Q = \cdot$). Likewise $Rt'$ stands for $\exists t'$ or for $\forall t'$, and $Sh'$ is either $\forall h'$ or $\exists h'$ or $h' = h$. We denote by $T_{QRS}(C, x)$ the collection of all $h \in X$ such that

$$Qx'\ Rt'\ Sh' \quad (x' + t'h') \in C.$$
For example, h ∈ T_{=∀∃}(C, x) if and only if for any t_n ↓ 0 there exists h_n → h such that x + t_n h_n ∈ C for all n (or for all large n).
The series of tangent cones so obtained consists of 18 elements (for Q and S have three possible values each, and R two). We shall call it the principal series of tangent cones. We observe that:
T_{=∃∃}(C, x) is the contingent [1], [2] (also called tangent [19]) cone;

T_{=∀∀}(C, x) is the interior tangent cone [5], [19]; T_{=∀∃}(C, x) is the cone introduced by Dubovitzkii-Miljutin [8] (called tangent in [7]); T_{=∃=}(C, x) is the radial tangent cone and T_{=∀=}(C, x) is the interior radial tangent cone [19];

T_{∀∀∃}(C, x) is the tangent cone introduced by Clarke [3] (tangent in [10], peritangent in [19], hypertangent in [7]);

T_{∀∀∀}(C, x) is the hypertangent cone (that is how it was called in [4], which seems to be a suitable name);

T_{∀∀=}(C, x) is the radial hypertangent cone (hypertangent in [22]).

All the statements can be verified directly. For the general (non-Banach) case the characterization was obtained by Kutateladze [16] in terms of nonstandard analysis. A different principle of classification was earlier used by Dolecki [7].
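The defining quantifier strings can be tested directly on a concrete set. The sketch below is our own illustration, not from the paper: it checks membership in the radial tangent cone T_{=∃=}(C, x), i.e. the existence of a sequence t_n ↓ 0 with x + t_n h ∈ C, h fixed, for the set C = {(a, b) : a ≥ |b|} at the origin, where this cone coincides with C itself.

```python
def in_C(point):
    """Membership test for C = {(a, b) : a >= |b|} (the epigraph of |.|)."""
    a, b = point
    return a >= abs(b)

def in_radial_tangent_cone(h, x=(0.0, 0.0), n_terms=50):
    """Check h in T_{= exists =}(C, x): some sequence t_n -> 0+ with
    x + t_n * h in C.  Since C here is a convex cone containing x = 0,
    it suffices to test a single shrinking sequence t_n = 2**(-n);
    for a general C this one-sequence test is only a heuristic."""
    for n in range(1, n_terms + 1):
        t = 2.0 ** (-n)
        if not in_C((x[0] + t * h[0], x[1] + t * h[1])):
            return False
    return True
```

Since C is itself a cone, x + t h ∈ C for all t > 0 exactly when h ∈ C, and the sampled check reproduces this: (1, 0.5) is accepted, (0, 1) is rejected.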
In all cones of the principal series, the first quantifier relates to x, the second to t and the third to h. More cones can be obtained by changing the order of quantifiers. For example, we can consider the cone T_{∃h'∀t'}(C, x) containing the h with the following property: there exists a sequence h_n → h such that for any t_n ↓ 0 we have x + t_n h_n ∈ C for all n (starting with a certain number), etc.
Taking into account coincidences between some of these combinations, we find that altogether there exist 40 different combinations, which obviously embrace all possible tangent cones that can be expressed in terms of the displacement limit relation x' + t'h' ∈ C.
A.2. Calculus

We shall call derivative cones those obtained from elements of the principal series by changing the order of quantifiers.
Proposition 1. The ∀∀S-cones and their derivatives, and only these, are always convex.

Thus there exist five convex cones, of which T_{∀∀∃}, Clarke's tangent cone, is of course the largest. The next two propositions demonstrate the exceptional role of the hypertangent cone, which is obviously the smallest among the 40.
Proposition 2.
Here, and in what follows, we denote by K̂ the recession cone of K:

K̂ = {x : x + K ⊂ K}.

Proposition 3. Suppose Q ≠ ∃. Then,
If Q = ∀, it follows from Proposition 2 that T_{∀∀∀} = int T_{∀∀∃}. The following is a consequence of Propositions 1 and 3.
Proposition 4. If T_{∀∀∀}(D, x) ≠ ∅, then
provided that
provided that
We remark that, in view of what was said in the previous section, T̂_{=∀∃}(C, x) is the asymptotic tangent cone used by Frankowska [9].(1)
For certain classes of sets, some of the cones may coincide. The following is an example.

Proposition 5. If C is closed, then
(1) I am indebted to the referee for the following remark: the asymptotic tangent cone T̂_{=∀∃}(C, x) has been introduced in [20] and, in unpublished form, by E. Giner in 1981. The first appearance of asymptotic (or star-difference) cones in connection with nonsmooth analysis seems to be in [6].
B. Subdifferentials
B.1. Tangentially generated subdifferentials

We set

N_QRS(C, x) = {x* : ⟨x* | h⟩ ≤ 0, ∀h ∈ T_QRS(C, x)};
∂_QRS f(x) = {x* : (−1, x*) ∈ N_QRS(epi f, (f(x), x))}.
In the same way we define N̂_QRS and ∂̂_QRS using recession cones, and similar objects correspond to all the derivative cones. We shall call tangentially generated those of the ∂_QRS, ∂̂_QRS, etc. which are subdifferentials in the sense of the definition of Section 1.1. We note that

∂_{∀∀∃}f(x) is the generalized gradient of Clarke [3], [4];

∂_{=∃∃}f(x) is the Dini subdifferential [13];

∂̂_{=∀∃}f(x) is the asymptotic subdifferential used by Frankowska [9].
Proposition 6.
(i) ∂_{∃∃S}, ∂_{∀∃S} and ∂_{∃∀S}, and their recession counterparts, are not subdifferentials (for they do not satisfy (d4));
(ii) ∂_{QR∀} and their derivative and recession counterparts are subdifferentials on the class of directionally Lipschitz functions but not on the class of all lower semicontinuous functions;
(iii) all the others are subdifferentials on the class of lower semicontinuous functions.

Thus we have altogether 32 tangentially generated subdifferentials on the class of l.s.c. functions. The following theorem shows that very few of them have calculus. Given two subdifferentials ∂ and ∂', we say that they are Lipschitz equivalent if ∂f(x) = ∂'f(x) whenever f is Lipschitz near x. We say that ∂ is smaller than ∂' if ∂f(x) ⊂ ∂'f(x) for all f and x.
Theorem 2.
(i) The ∂_{∀∀S} and their derivative counterparts have full calculus, and ∂_{∀∀∃}, the generalized gradient of Clarke, is the smallest among them;
(ii) the ∂_{=∀S} have restricted calculus (they do not satisfy condition (a2)). They are all Lipschitz equivalent, but none of them is minimal;
(iii) no other tangentially generated subdifferential satisfies condition (a1).
Thus, no other tangentially generated subdifferential may have the applications described in the first part. This does not mean, however, that other subdifferentials deserve no attention at all. For instance, the "fuzzy" calculus of the Dini subdifferential [13] allows us to use it for the construction of approximate subdifferentials (having full calculus) [14], [15]. We conclude this section with two examples demonstrating the negative statements in (ii).
Example 1. Consider the functions

the sets

C = {(α, β, x) : α ≥ f(x)},  D = {(α, β, x) : β ≥ g(x)},

and the subspace

L = {(α, β, x) : α + β = 0}.

It is not difficult to verify that N̂_{=∀S}(C ∩ D, 0) does not contain N̂_{=∀S}(C ∩ D + L, 0)
(we have C ∩ D + L = {(α, β, x) : α + β ≥ f(x) + g(x)}) and

Both functions are Lipschitz continuous.
Example 2. Consider the functions

and

g(x, y) = 0 if y = x + x³, +∞ otherwise.
We have

[−1, 1] × {0} = ∂̂_{=∀=}f(0) ⊋ ∂̂_{=∀∃}f(0)

and

{(t, −t) : t ∈ R} = ∂̂_{=∀=}g(0) ⊊ ∂̂_{=∀∃}g(0) = R².

So neither of ∂̂_{=∀∃} and ∂̂_{=∀=} is smaller than the other.
B.2. Other subdifferentials

Although no similar study has been undertaken yet for other subdifferentials, some remarks are to be made. First of all, we know a number of subdifferentials with good properties that are not tangentially generated. Among them there are:

(a) the proximal or Fréchet subdifferential, which in many respects is similar to the Dini subdifferential;

(b) limit subdifferentials, including the one introduced by Kruger and Mordukhovich [17] and the approximate subdifferentials introduced by the author [14], [15]. Every subdifferential of this second group is obtained as a result of a limit process involving either proximal or tangentially generated subdifferentials. All of them have full calculus.
(c) "geometric" subdifferentials, obtained by means of the distance function approximately in the same way as in Clarke's original definition of generalized gradients for l.s.c. functions. One of the geometric subdifferentials, ∂_G f(x), introduced in [15] under the name "broad geometric subdifferential" and mentioned in [11] under the name of G-subdifferential, has the property that for any closed set C,

N_{∀∀∃}(C, x) = cl conv N_G(C, x).

(Recall that the cone on the left-hand side is the Clarke normal cone.)
Conjecture. The G-subdifferential is absolutely the smallest among all subdifferentials with full calculus on the class of l.s.c. functions on Banach spaces.
REFERENCES
[1] J.-P. AUBIN, Contingent derivatives of set-valued maps and existence of solutions to differential inclusions, Adv. in Math. Suppl. Studies, 7A (1981), 159-229.
[2] G. BOULIGAND, Introduction à la géométrie infinitésimale directe, Vuibert, Paris (1932).
[3] F.H. CLARKE, Generalized gradients and applications, Trans. Amer. Math. Soc., 205 (1975), 247-262.
[4] F.H. CLARKE, Optimization and Nonsmooth Analysis, Wiley (1983).
[5] V.F. DEM'YANOV, A.M. RUBINOV, On quasidifferentiable functionals, Soviet Math. Doklady, 21 (1980), 14-17.
[6] S. DOLECKI, Hypertangent cones for a special class of sets, in Optimization: Theory and Algorithms, J.-B. Hiriart-Urruty et al., editors, Marcel Dekker, N.Y. (1983), 3-12.
[7] S. DOLECKI, Tangency and differentiation: some applications of convergence theory, Ann. Mat. Pura Appl., 130 (1982), 235-255.
[8] A.Ya. DUBOVITZKII, A.A. MILJUTIN, Extremal problems in the presence of constraints, Ž. Vyčisl. Matem. i Mat. Fiz., 5 (1965), 395-453 (in Russian).
[9] H. FRANKOWSKA, Inclusions adjointes associées aux trajectoires minimales, C.R. Acad. Sci. Paris, 297 (1983), 461-464.
[10] J.-B. HIRIART-URRUTY, Tangent cones, generalized gradients and mathematical programming in Banach spaces, Math. Oper. Res., 4 (1979), 79-97.
[11] A.D. IOFFE, On the local surjection property, to appear in Nonlinear Analysis, Theory, Methods, Appl.
[12] A.D. IOFFE, Nonsmooth analysis: differential calculus of nondifferentiable mappings, Trans. Amer. Math. Soc., 266 (1981), 1-56.
[13] A.D. IOFFE, Calculus of Dini subdifferentials, Nonlinear Anal. Theory, Methods and Appl., 8 (1984), 517-539.
[14] A.D. IOFFE, Approximate subdifferentials of nonconvex functions, Cahier #8120, Univ. Paris IX "Dauphine" (1981).
[15] A.D. IOFFE, Approximate subdifferentials and applications I. The finite dimensional theory, Trans. Amer. Math. Soc., 281 (1984), 289-316.
[16] S.S. KUTATELADZE, Infinitesimal tangent cones, Siberian Math. J., 26:6 (1985).
[17] A.Ya. KRUGER, B.Sh. MORDUKHOVICH, Extreme points and the Euler equation in nondifferentiable optimization problems, Dokl. Akad. Nauk BSSR, 24 (1980), 684-687.
[18] B.Sh. MORDUKHOVICH, Nonsmooth analysis with nonconvex generalized differentials and conjugate mappings, Dokl. Akad. Nauk BSSR, 28 (1984), 976-979.
[19] J.-P. PENOT, The use of generalized subdifferential calculus in optimization theory, Proc. 3rd Symp. Oper. Research, Mannheim (1978).
[20] J.-P. PENOT, P. TERPOLILLI, Cônes tangents et singularités, C.R. Acad. Sci. Paris, 296 (1983), 721-724.
[21] R.T. ROCKAFELLAR, Extension of subgradient calculus with application to optimization, Nonlinear Anal. Theory, Methods and Appl., to appear.
[22] R.T. ROCKAFELLAR, Directionally Lipschitzian functions and subdifferential calculus, Proc. London Math. Soc., 39 (1979), 331-355.
[23] J. WARGA, Derivate containers, inverse functions and controllability, in Calculus of Variations and Control Theory (D.L. Russell, editor), Academic Press, New York (1976).
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
CONSTRUCTING BUNDLE METHODS FOR CONVEX OPTIMIZATION
C. LEMARECHAL
INRIA, Rocquencourt, 78153 Le Chesnay (France)
Abstract. This paper overviews the theory necessary to construct optimization algorithms based on convex analysis, namely on the use of approximate subdifferentials. Three approaches are considered: one is already classical (the conjugate subgradient method and the like); the other two are being developed (the σ-Newton method, differentiation of a multi-valued mapping). After a general introduction to the necessary concepts from convex analysis (support functions, approximate subgradients), the ideas underlying each of the three approaches are explained. Finally, because the approximate subdifferential is not a computable set, a section is devoted to implementable approximations to it. The paper is as non-technical as possible. In the theoretical part, stress is laid on geometry and intuitive concepts; as for the computational part, no algorithmic detail is given.
Keywords. Convex analysis. Nondifferentiable optimization. Optimization algorithms.
1. INTRODUCTION
Our aim is to construct minimization methods by using tools from convex analysis (subgradients, support functions, ...), rather than the classical ones from differential calculus (gradients, Hessians, ...). Of course, the resulting algorithms can be expected to behave well when applied to functions which are just convex, but we do not particularly insist on these functions being actually nonsmooth, and we will be perfectly happy if some more assumptions must finally be made. We believe that this view provides possibilities to construct alternatives to classical methods (conjugate gradient, quasi-Newton, ...), and this paper reviews some of these possibilities.

We consider, therefore, a convex function f mapping Rⁿ into R. Given x ∈ Rⁿ (our current approximation to a minimizer of f) we want to find h ∈ Rⁿ, giving the next approximation x + h.
The tools that we want to use from convex analysis are the so-called approximate directional derivative f'_ε(x, p) and its associated approximate subdifferential ∂_εf(x); these objects will be defined and studied in Section 2; motivation for their use will be the subject of Sections 3 through 6. Before constructing an algorithm, the following question must be addressed: "In what way is ∂_εf or f'_ε good for defining h?" A first answer, given in [3], is the subject of Section 3. In Section 4, a totally different answer is considered, based on a second-order development of f along a direction [21], [15]. Based on the same kind of motivation, Section 5 contains the preliminary theory to what might become a third answer [19], [1], [7], [14].

Unfortunately, the above question is purely theoretical, and answering it is not enough, because neither ∂_εf(x) nor f'_ε(x, p) can be computed in practice. One must, therefore, also answer the practical question: "How can ∂_εf(x) or f'_ε(x, p) be approximated?" Section 6, where the concept of bundling is introduced, is devoted to this, and some conclusions are drawn. In our terminology, a bundle method is one that is constructed as follows:
(i) take a conceptual method given by an answer to the theoretical question, and (ii) make it implementable by using an approximation scheme answering the practical question.

Nothing in this paper is really new. We insist rather on explaining things that already exist, and on pointing out open problems. We believe that the field of nondifferentiable optimization, after a period of stagnation, is now close to a restart, and our aim will be reached if this paper stimulates new research. The discussion is restricted to the case when f is defined on the whole space, mainly to help the reader with geometric intuition. Everything can essentially be generalized (mutatis mutandis) to functions that take on the value +∞, and to infinite-dimensional Hilbert spaces.
2. SUBDERIVATIVES, SUBLINEARITY AND SUBDIFFERENTIALS

The material of this section can mainly be found in [12], [13]; see also [18, II.1], [28, p. 219]. We give and explain the basic definitions and properties, in a language that we hope is as clear as possible for non-specialists.

2.1 The approximate directional derivative and its basic properties
Let ε ≥ 0 and p ∈ Rⁿ be given, with p ≠ 0. The ε-directional derivative of f at x is the number:

f'_ε(x, p) := inf {[f(x + tp) − f(x) + ε] / t : t > 0}.   (2.1)

This corresponds to the geometric definition given in Figure 1, representing the 2-dimensional (half-)space of the graph of t ↦ f(x + tp) for t ≥ 0. By convexity, there is a unique half-line passing through (0, f(x) − ε) and "touching" the graph of f (the contact can be a point, a segment, a half-line, or it can be "at infinity"); the slope of this half-line is f'_ε(x, p).
Figure 1. The approximate directional derivative

As suggested in Figure 1, letting ε tend to zero gives the (half-)tangent at (0, f(x)), whose slope is classically denoted f'(x, p):

lim {f'_ε(x, p) : ε ↓ 0} = lim {[f(x + tp) − f(x)] / t : t ↓ 0} =: f'(x, p).   (2.2)
Denote for the moment v(ε) := f'_ε(x, p). The function v is continuous, increasing and concave on [0, +∞[. Furthermore, suppose ε > 0 and let

T(ε) := {t > 0 : v(ε) = [f(x + tp) − f(x) + ε] / t}   (2.3)

be the (possibly empty) contact set in Figure 1. This set has some useful properties: for t ∈ T(ε),

v(δ) ≤ v(ε) + (1/t)(δ − ε)  ∀δ ≥ 0;   (2.4)

v'₋(ε) = 1 / min {t : t ∈ T(ε)},  v'₊(ε) = 1 / sup {t : t ∈ T(ε)}.   (2.5)

In particular, when T(ε) reduces to a singleton, its inverse is just the derivative of the function v at ε; this could be felt intuitively by differentiating the term in (2.1) with respect to ε.
Figure 1 suggests that, when ε ↓ 0, T(ε) normally shrinks to {0}, in which case (2.5) says that v has an infinite slope at 0. Note also that, for 0 < ε < ε₀, to say that T(ε) contains a fixed t₀ > 0 (see Figure 1) is to say that f(x + tp) is linear on [0, t₀]; once again, (2.5) gives v'(ε) = 1/t₀ for ε < ε₀.

As a mapping from [0, +∞[ into [0, +∞[, T is "generalized increasing" (the proper term is "maximal monotone") as illustrated on Figure 2. Horizontal segments correspond to kinks of f; vertical segments correspond to linear portions of f.
Figure 2. A typical maximal monotone mapping
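The shape of T can be traced numerically. In the sketch below (our own illustration, with f(x) = |x|, x = 1, p = −1, so that f(x + tp) is linear on [0, 1]) the minimizing t of the quotient in (2.1) is located on a grid; the linear portion of f shows up as a contact point that does not move when ε varies, i.e. a vertical segment of T.

```python
def t_of_eps(f, x, p, eps):
    """Grid argmin of t -> (f(x + t*p) - f(x) + eps) / t, i.e. an
    approximation of (a point of) the contact set T(eps) of (2.3)."""
    ts = [k / 1000.0 for k in range(1, 3001)]   # t in (0, 3]
    return min(ts, key=lambda t: (f(x + t * p) - f(x) + eps) / t)

f = lambda x: abs(x)
t_small = t_of_eps(f, 1.0, -1.0, 0.1)   # both minimizers land at t = 1,
t_large = t_of_eps(f, 1.0, -1.0, 1.0)   # the endpoint of the linear piece
```

For every ε in (0, 2) the quotient is decreasing on (0, 1] and increasing afterwards, so the contact point stays at t = 1, in accordance with the "vertical segment" behaviour described above.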
The maximal monotone mapping T can be inverted to

E(t) := {ε > 0 : t ∈ T(ε)},   (2.6)

which is also maximal monotone. This allows the following important remark: the function v: R₊ → R is characterized through (2.1) by the function t ↦ f(x + tp); the above inversion suggests that f(x + tp) is in turn characterized by v. More specifically, because v(ε) is the slope of f at the contact points T(ε), the function v reproduces the differential behaviour of f on ]0, +∞[. This observation, which will be exploited in Section 4, is quantified by the following result (fairly clear from Figure 1): for given t > 0,

f(x + tp) = f(x) + sup {t f'_ε(x, p) − ε : ε > 0}
and ε is optimal precisely on E(t). Here we finally observe that, as a function of p, f'_ε(x, p) is positively homogeneous, convex (and finite); functions having these properties are called sublinear, and form the subject of the next section.
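Definition (2.1) is directly computable by brute force. The following sketch (our own illustration; the function and the grid of t are our choices) minimizes the difference quotient over a geometric grid for f(x) = |x| at x = 1: in the direction p = −1 the infimum is attained at t = 1 and equals ε − 1, while letting ε ↓ 0 recovers the ordinary directional derivative −1, in accordance with (2.2).

```python
def eps_directional_derivative(f, x, p, eps):
    """(2.1): f'_eps(x, p) = inf over t > 0 of (f(x + t*p) - f(x) + eps) / t,
    approximated by a minimum over a geometric grid of t."""
    ts = [10.0 ** (k / 10.0) for k in range(-50, 21)]   # t from ~1e-5 to 1e2
    return min((f(x + t * p) - f(x) + eps) / t for t in ts)

f = lambda x: abs(x)
d_eps = eps_directional_derivative(f, 1.0, -1.0, 0.01)   # exact value: eps - 1
d_0   = eps_directional_derivative(f, 1.0, -1.0, 1e-12)  # ~ f'(1, -1) = -1
```

Note that d_eps < 0 already announces the ε-descent property of Section 3: moving from x = 1 toward the minimizer gains more than ε.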
2.2. Support functions
It is known that any vector g ∈ Rⁿ defines the linear form l_g(p) := gᵀp from Rⁿ into R. Conversely, any linear form l on Rⁿ defines a unique vector, say g_l ∈ Rⁿ, such that l(p) = g_lᵀp, ∀p. This triviality in Rⁿ is called the Riesz theorem in more general spaces and is illustrated in Figure 3: to say that the algebraic length l(p) varies linearly in p is to say that, for each p, the hyperplane H_p := {z ∈ Rⁿ : zᵀp = l(p)} passes through a fixed point g_l.

Figure 3. Riesz theorem

Thus there is a one-to-one correspondence between linear forms and vectors in Rⁿ.
More generally, given a convex, closed and bounded set G, one can associate with it the function

σ_G(p) := max {gᵀp : g ∈ G},   (2.7)

called the support function of G. Functions of this type are no longer linear, but sublinear, i.e.: (i) convex and (ii) positively homogeneous (linear on 1-dimensional half-spaces: σ_G(tp) = t σ_G(p), ∀t ≥ 0); as such, they are also continuous (while they would become only lower semi-continuous, possibly taking the value +∞, if we were allowing unbounded G's). Conversely, any sublinear function σ induces the convex, closed and bounded set

G_σ := ∂σ(0) := {g ∈ Rⁿ : gᵀp ≤ σ(p)  ∀p ∈ Rⁿ};   (2.8)

(the notation ∂σ(0) will be explained later). Geometrically, if one draws Figure 3 for a sublinear function, one finds that, instead of passing through a fixed point, H_p envelopes a convex set (which is precisely G_σ) located in the half-space opposite to p. The would-be vector g_l (representing now the contact between H_p and G_σ) becomes dependent on p and may not be unique for a given p.

To sum up this section: there is a one-to-one correspondence between closed, convex and bounded sets G in Rⁿ, and functions σ from Rⁿ into R which are convex and positively homogeneous (and finite), i.e., sublinear. For G and σ related by this correspondence, (2.7) and (2.8) hold as equivalent statements. Note that (2.7) adds little to (2.8), namely that the ≤ in (2.8) is sharp.
2.3. The approximate subdifferential

A sublinear function σ(·) := f'_ε(x, ·) has been defined by (2.1); therefore it induces its own ∂σ(0) = [∂f'_ε(x, p)]_{p=0}, called the approximate subdifferential and denoted ∂_εf(x); its elements g are the approximate subgradients of f at x. With these notations, (2.7) and (2.8) read:

f'_ε(x, p) = max {gᵀp : g ∈ ∂_εf(x)},   (2.7)'
∂_εf(x) = {g : gᵀp ≤ f'_ε(x, p)  ∀p}.   (2.8)'

Using (2.1) in these relations gives the characterization:

∂_εf(x) = {g : f(y) ≥ f(x) + gᵀ(y − x) − ε  ∀y}   (2.9)

(easily seen if one sets y := x + tp).
Note that, symmetrically, one could consider (2.9) as the definition of ∂_εf(x); then (2.1) would result (not obviously!) from the definition (2.7)' of a support function.

For ε = 0, one normally denotes ∂f(x) := ∂₀f(x), the usual subdifferential of convex analysis, associated with the usual directional derivative (2.2). This explains the notation in (2.8): a support function σ has a subdifferential (like any other convex function) which, for x = 0, is precisely the convex set G_σ (compare (2.8) with (2.9) for ε = 0, and use σ(0) = 0). When ε ↓ 0, the nested sets ∂_εf(x) shrink to ∂f(x), just because of (2.2).
Another asymptotic property is that ∂f(x) can be recovered from ∂f in the neighbourhood of x, namely:

∂f(x) = cl conv {lim ∇f(y_k) : y_k → x, ∇f(y_k) exists}.   (2.10)

This property is somewhat conserved for ε > 0; very roughly speaking (see [18, II.1.2] for details), ∂_εf(x) is (the convex hull of) the image under ∂f of a certain neighborhood of x. This property is the basis for Section 6, and is also helpful for extensions to nonconvex situations.
We illustrate these concepts with three examples.

(i) First let f(x) := ½xᵀAx + bᵀx be a convex quadratic form, with A a symmetric positive definite matrix. Here, (2.1) reads

f'_ε(x, p) = inf {[t ∇f(x)ᵀp + ½ t² pᵀAp + ε] / t : t > 0},

and straightforward calculations yield

f'_ε(x, p) = ∇f(x)ᵀp + (2ε pᵀAp)^{1/2},   (2.11)

T(ε) = {(2ε / pᵀAp)^{1/2}}.   (2.12)

One can compute ∂_εf(x) through (2.8)', or more conveniently (2.9):

∂_εf(x) = {g : min_d [½(x + d)ᵀA(x + d) + bᵀ(x + d) − gᵀd] ≥ ½xᵀAx + bᵀx − ε};

one can also use property (2.10): setting F(p) := f'_ε(x, p), F has a gradient

∇F(p) = ∇f(x) + (2ε)^{1/2} Ap / (pᵀAp)^{1/2}

for p ≠ 0; when p → 0, the cluster points of Ap/(pᵀAp)^{1/2} describe the surface {g : gᵀA⁻¹g = 1}; in summary,

∂_εf(x) = {g : (g − ∇f(x))ᵀ A⁻¹ (g − ∇f(x)) ≤ 2ε},   (2.13)

i.e., ∂_εf(x) is an ellipsoid centered at ∇f(x); in the above, if we set g := ∇f(y), we find that

∂_εf(x) = {∇f(y) : ½(y − x)ᵀA(y − x) ≤ ε},   (2.14)

i.e., ∂_εf(x) is the image under ∇f of an ellipsoid centered at x. A further expression is:
(ii) Now let f(x) := max {aᵢ + gᵢᵀx : i = 1, ..., k} be a piecewise linear function. To compute ∂_εf at a given x, it helps to take this x as the origin, setting

f(y) = max {f(x) − qᵢ + gᵢᵀ(y − x) : i = 1, ..., k},   (2.15)

where qᵢ := f(x) − aᵢ − gᵢᵀx ≥ 0, and qᵢ = 0 means that the i-th piece is active at x. Setting y = x + p in (2.9), we obtain

∂_εf(x) = {g : ∀p ∈ Rⁿ, ∃i such that gᵀp + qᵢ ≤ gᵢᵀp + ε},

and some manipulations (related to the theory of linear programming) yield:

∂_εf(x) = {Σᵢ λᵢgᵢ : λᵢ ≥ 0, Σᵢ λᵢ = 1, Σᵢ λᵢqᵢ ≤ ε}.   (2.16)

Computing f'_ε(x, p) from (2.1) gives a linear program with 2 variables, z and u := 1/t:

min z  subject to  z ≥ gᵢᵀp + u(ε − qᵢ), i = 1, ..., k; u ≥ 0,   (2.17)

while (2.7)' with (2.16) gives the dual of (2.17):

max Σᵢ λᵢ gᵢᵀp  subject to  λᵢ ≥ 0; Σᵢ λᵢ = 1; Σᵢ λᵢqᵢ ≤ ε.   (2.18)

When ε is small enough, namely when ε < min {qᵢ : qᵢ > 0}, explicit calculations can be completed, which are facilitated by particularizing Figure 1 to this case. One obtains
(note that, in view of (2.19), the only active i's in (2.20) have qᵢ > 0). Thus T(ε) of (2.3) is in this situation a singleton which does not depend on ε, but rather on p; this is why we prefer to call it t(p). Finally

Keeping ε just as small and assuming that there is only one zero qᵢ, say q_k = 0 (which means that g_k = ∇f(x)), we obtain

f'_ε(x, p) = max {(g_k + (ε/qᵢ)(gᵢ − g_k))ᵀ p : i = 1, ..., k − 1}.   (2.22)

Setting

rᵢ(ε) := g_k + (ε/qᵢ)(gᵢ − g_k),  i = 1, ..., k − 1,

and r_k(ε) := g_k, we see that

∂_εf(x) = conv {rᵢ(ε) : i = 1, ..., k}.

When ε ↓ 0, ∂_εf(x) shrinks to {g_k} like a spider's web with at most k − 1 extreme points, and diameter proportional to ε. Finally, compare (2.11) with (2.21) to see a fundamental difference between quadratic and piecewise linear functions, concerning the speed of convergence of f'_ε(x, p) to f'(x, p).

(iii) As a last example in this section, let us come back to v(ε) := f'_ε(x, p) of (2.1). The notations are those of Section 2.1. Because v is an inf, we clearly have, for t ∈ T(ε):
v(δ) ≤ [f(x + tp) − f(x) + δ] / t = [f(x + tp) − f(x) + ε] / t + (δ − ε)/t = v(ε) + (1/t)(δ − ε),  ∀δ ≥ 0.

This shows that v is concave (−v is convex); the above inequality means that 1/t ∈ ∂v(ε) (where ∂v should now be called a "superdifferential"), while
(2.4) just says that equality does hold. The derivative of v at ε in the direction +1, i.e., v'₊(ε), given by the general formula min {gᵀp : g ∈ ∂v(ε)} (see (2.9)'; the min comes from concavity), gives here the second half of (2.5); in the direction −1, we have the general relation v'(ε, −1) = −v'₋(ε), which gives the first half of (2.5).

To finish this section, we mention a proposal of [10] for nonconvex f. Exploiting (2.10) or (2.14), it consists in replacing ∂_εf(x) by the closed convex hull of the image under ∇f of the ε-ball around x; unfortunately, this set does not coincide with the classical ∂_εf for convex f (unless one takes some clever metric).
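Before leaving this section, the quadratic example (i) is easy to cross-check by computer: for f(x) = ½ a x² + b x in one dimension, the closed form of (2.11) reads f'_ε(x, p) = (a x + b) p + (2 ε a p²)^{1/2}. A sketch (the data a = 2, b = −1 are our choice) compares it with a brute-force evaluation of the infimum in (2.1):

```python
import math

def f(x, a=2.0, b=-1.0):
    """One-dimensional convex quadratic: f(x) = a*x**2/2 + b*x."""
    return 0.5 * a * x * x + b * x

def f_eps_numeric(x, p, eps, a=2.0, b=-1.0):
    """Brute-force minimum over a geometric grid of t of the quotient in (2.1)."""
    ts = [10.0 ** (k / 20.0) for k in range(-120, 41)]   # t from ~1e-6 to 1e2
    return min((f(x + t * p, a, b) - f(x, a, b) + eps) / t for t in ts)

def f_eps_closed(x, p, eps, a=2.0, b=-1.0):
    """One-dimensional instance of (2.11): grad*p + sqrt(2*eps*a*p*p)."""
    return (a * x + b) * p + math.sqrt(2.0 * eps * a * p * p)
```

For ε ↓ 0 the closed form collapses to the ordinary derivative (a x + b) p, and the square-root growth in ε is precisely the "speed of convergence" contrast with the piecewise linear case noted after (2.22).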
3. THEORETICAL ANSWER 1: ε-DESCENT DIRECTIONS
3.1. Motivation
Suppose that 0 ∈ ∂_εf(x). Then (2.9) shows that x minimizes f within ε. Conversely, let p ≠ 0 be such that f'_ε(x, p) < 0 (i.e., p separates the closed convex sets {0} and ∂_εf(x), which is possible if and only if 0 ∉ ∂_εf(x)). Then (2.1) shows that p can be called an ε-descent direction, in the sense that:

∃t > 0 such that f(x + tp) < f(x) − ε.   (3.1)
Thus, the following strategy to compute the next iterate x + h suggests itself. Choose ε > 0.

(i) If 0 ∈ ∂_εf(x), then stop (if ε is small enough) or diminish ε until (ii);

(ii) when 0 ∉ ∂_εf(x), find p such that f'_ε(x, p) < 0; then take h := tp, with t given by a line search along this p.

Analysing the convergence of such an algorithm is rather meaningless: it
is essentially a finite algorithm; the real point is an implementable characterization of ∂_εf(x), which will be the subject of Section 6.
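The strategy (i)-(ii) can be written as a conceptual loop. The sketch below is purely illustrative: the oracle returning an ε-descent direction (or reporting 0 ∈ ∂_εf(x)) is idealized and hard-wired here for f(x) = |x| in one dimension, and the "line search" is a crude doubling scheme realizing (3.1).

```python
def eps_descent(f, x, eps, direction_oracle, max_iter=100):
    """Conceptual eps-descent loop of Section 3.1: stop when the oracle
    reports 0 in the eps-subdifferential (x is then eps-optimal);
    otherwise move along an eps-descent direction, gaining more than
    eps per step as guaranteed by (3.1)."""
    for _ in range(max_iter):
        p = direction_oracle(x, eps)
        if p is None:            # 0 in d_eps f(x): x minimizes f within eps
            return x
        t = 1.0                  # find t with f(x + t*p) < f(x) - eps
        while f(x + t * p) >= f(x) - eps:
            t *= 2.0
            if t > 1e9:          # safeguard; cannot happen when p is
                return x         # a genuine eps-descent direction
        x = x + t * p
    return x

# idealized oracle for f(x) = |x|: 0 in d_eps f(x) iff |x| <= eps
def abs_oracle(x, eps):
    if abs(x) <= eps:
        return None
    return -1.0 if x > 0 else 1.0

x_star = eps_descent(abs, 10.0, 0.5, abs_oracle)
```

Each pass through the loop decreases f by more than ε, so the loop is indeed "essentially finite" on functions bounded below, as the text remarks.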
The ε-descent directions form an (open) convex cone: issuing from x, they point toward the level set {y : f(y) < f(x) − ε}. Suppose, for example, that f has a unique minimum x* (see Figure 4); when ε increases toward the critical value f(x) − f(x*), this cone shrinks to the direction x* − x, while ∂_εf(x) eventually catches 0 on its boundary.
Figure 4. The cone of ε-descent directions.
Dashed lines are in the dual space
3.2. The specific answer

Among all the possible ε-descent directions, a natural choice is one that separates 0 and ∂_εf(x) "best", i.e., one that solves

min {f'_ε(x, p) : |p| = 1},   (3.2)

where the normalization constraint is due to the positive homogeneity of f'_ε(x, ·). There are two cases. Either (i) the minimal value is strictly negative; then (3.2) is equivalent to:

min {f'_ε(x, p) : |p| ≤ 1},   (3.3)

because the constraint is certainly active in (3.3). Or (ii) the minimal value in (3.2) is non-negative, i.e., 0 ∈ ∂_εf(x); then the optimal value in (3.3) is 0, obtained for example by p = 0; this is not a direction, but none is needed. In summary, it does not hurt to replace (3.2) by the convex problem (3.3), whose solutions can be called steepest ε-descent directions. Now, in view of (2.7)', (3.3) reads

min {max {gᵀp : g ∈ ∂_εf(x)} : |p| ≤ 1}.   (3.4)
This convex-compact minimax problem has a saddle point, which means that it is equivalent to:

max {min {gᵀp : |p| ≤ 1} : g ∈ ∂_εf(x)}.   (3.5)

Suppose for example that we choose the norm |p| := (½ pᵀNp)^{1/2}, with N a symmetric positive definite preconditioner. Then, for each g, the minimizing p in (3.4) is (colinear to) −N⁻¹g, and the optimal g solves

min {gᵀN⁻¹g : g ∈ ∂_εf(x)}.

We see that, when ∂_εf(x) is a polyhedron (f piecewise linear), computing an ε-descent direction amounts to minimizing a quadratic function under linear
constraints; this will be the case in Section 6. For illustration, consider here a quadratic f; from (2.13), the steepest ε-descent direction solves

min {½ pᵀNp : ½ (Np + ∇f(x))ᵀ A⁻¹ (Np + ∇f(x)) ≤ ε}.   (3.6)

When ε = 0, p = −N⁻¹∇f(x) is the steepest descent direction (only feasible point in (3.6)). When ε tends to f(x) − f(x*), p tends to the optimal direction (see Figure 4). When ε is in between, we note the vague similarity of the solutions of (3.6) with the trust region trajectory (see [25]). When N = A, p is colinear to the Newton direction.
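In the polyhedral case the computation just described can be sketched by brute force: minimize |g|² (taking N = identity) over the convex combinations g = Σλᵢgᵢ with Σλᵢqᵢ ≤ ε of example (ii), and return p = −g. Below is an illustrative two-piece version; the data encode f(x) = |x| at x = 1, a choice of ours for which the ε-subdifferential is the interval [1 − ε, 1] when ε ≤ 2.

```python
def steepest_eps_descent(g_pieces, q_pieces, eps, grid=2000):
    """Minimize |g|^2 over g = lam*g1 + (1-lam)*g2 with
    lam*q1 + (1-lam)*q2 <= eps and 0 <= lam <= 1 (norm given by N = I),
    by scanning a grid of lam.  Returns p = -g_opt, a steepest
    eps-descent direction when g_opt != 0, and 0.0 when 0 belongs
    to the eps-subdifferential (no direction is needed)."""
    (g1, g2), (q1, q2) = g_pieces, q_pieces
    best = None
    for k in range(grid + 1):
        lam = k / grid
        if lam * q1 + (1.0 - lam) * q2 <= eps + 1e-12:
            g = lam * g1 + (1.0 - lam) * g2
            if best is None or g * g < best * best:
                best = g
    return None if best is None else -best

# f(x) = max(x, -x) at x = 1: g1 = 1 with slack q1 = 0, g2 = -1 with q2 = 2
p_star = steepest_eps_descent((1.0, -1.0), (0.0, 2.0), 0.5)
```

Here p_star = −0.5, i.e. −g with g the point of [0.5, 1] closest to the origin; with ε large enough that 0 enters the ε-subdifferential, the routine returns 0, matching case (ii) of the discussion above. In practice this grid scan would of course be replaced by a proper quadratic programming solver.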
This illustrates a major drawback of the present theoretical answer: to obtain some Newtonian algorithm is impossible, unless the normalization is cleverly chosen; in other words, the present way of using f'_ε does not bring anything new concerning second order. Sections 4 and 5 are precisely motivated by this deficiency. This approach was given first in [3]; see also Section 3.2 in [1]; a non-convex extension is given in [10].
4. THEORETICAL ANSWER 2: σ-NEWTON DIRECTIONS

In this section, we consider a first approach to introducing second order. This approach is directional in the following sense: the step h is decomposed as h = tp, with t > 0; first we fix p and let t ↓ 0 to define a (second-order) directional derivative; then we let p be free to minimize (in the whole space) the approximation of f thus obtained.
4.1. Motivation

The whole business of establishing (directional) derivatives consists in replacing f(x + tp) − f(x) by a function which is (i) a linear function of t ≥ 0, i.e., a positively homogeneous function of p, and (ii) an approximation of f(x + tp) − f(x) valid for small t. We simply mean that (2.2) can be written:

f(x + tp) = f(x) + t f'(x, p) + t δ(t),   (4.1)

where δ(t) tends to 0 if t ↓ 0. Likewise, a second-order (directional) derivative approximates, for small t, the function f(x + tp) − f(x) − t f'(x, p) by a function, say ½ f''(x, tp), which is positively homogeneous of degree 2: f''(x, tp) = t² f''(x, p); we want

f(x + tp) = f(x) + t f'(x, p) + ½ t² f''(x, p) + t² δ(t),   (4.2)

and this defines

f''(x, p) := lim {2 [f(x + tp) − f(x) − t f'(x, p)] / t² : t ↓ 0}.   (4.3)

Approximating f to second order requires first the existence of the limit in (4.3), which, in contrast to first order, is not implied by convexity. However, this is not the only difficulty: even when the limit exists, it must be actually computed, in such a way that something new is obtained with respect to the classical Taylor approach (see Section 1). This is one occasion to use the remark at the end of Section 2.1. A classical result says that, from convexity, (4.3) is equivalent to

f''(x, p) = lim {[f'(x + tp, p) − f'(x, p)] / t : t ↓ 0},   (4.4)
in the sense that the limit in (4.3) or (4.4) exists if and only if the other exists (and then, they coincide). Now, for t > 0, define

σ_t = σ := f(x) − [f(x + tp) − t f'(x + tp, p)].

Figure 5 shows that this σ and this t are in correspondence via T and E of (2.3), (2.6); in fact f'_σ(x, p) = f'(x + tp, p). Instead of comparing the increment [f'(x + tp, p) − f'(x, p)] to the horizontal t (which is done in (4.4)), compare the same [f'_σ(x, p) − f'(x, p)] to the vertical σ and take this σ as the independent variable; what happens when σ ↓ 0? The answer is that (4.3) and (4.4) are further equivalent to (see [21])

f''(x, p) = lim {[f'_σ(x, p) − f'(x, p)]² / (2σ) : σ ↓ 0}   (4.5)

(provided the limit in question is strictly positive; the question of whether this equivalence still holds in case of a zero limit is still open).

Figure 5. Connection between [f'_σ(x, p) − f'(x, p)] and [f'(x + tp, p) − f'(x, p)]
Finally, suppose that we further approximate the limit in (4.5) by the value of the right-hand side for a (small but) fixed σ > 0. This amounts to replacing the expansion (4.2) by:

where M_σ is the σ-Newton model (we set h = tp and use positive homogeneity):
4.2. +Newton iterates
A σ-Newton iterate is now logically defined as a minimizer of the model (4.7). It may sound a curious idea not to pass to the limit in (4.5). However, note that this makes the model computable (and minimizable, see below) with the sole knowledge of f′_σ for some σ > 0, and this is in accordance with the situation of Section 3. Along this line we make two more remarks: (i) the additional term δ(σ) can be controlled (through an appropriate choice of σ if necessary), unlike the first term δ(t), whose magnitude is eventually fixed by the minimization of M_σ; (ii) adding such an approximation is by no means new, and the same idea was already used in the classical quasi-Newton methods: in the update, H (the inverse of the Hessian) is supposed to satisfy

H[∇f(y) − ∇f(x)] = y − x,  (4.8)

and these methods consist in computing H to satisfy the equivalent of (4.8)
for some fixed y, conveniently chosen.

In anticipation of Section 6, we illustrate this point by a one-dimensional example. To minimize f: R → R, suppose we have on hand two iterates x and x̄, together with their function values f and f̄, and derivatives g and ḡ. The quasi-Newton method, known here as the secant method, consists in taking the iteration x ← x + h with

h := −g (x̄ − x)/(ḡ − g).  (4.9)

To implement a σ-Newton method, let us choose the specific value σ := f − [f̄ + ḡ(x − x̄)] (i.e., take z + tp = x̄ in Figure 5). For this σ and h̄ := x̄ − x, there holds f′_σ(x, h̄) = ḡ h̄, so (assuming that f has the derivative g at x):

(4.10)
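As a concrete illustration of the secant iteration (4.9), here is a minimal numerical sketch; the test function and the two starting points are our own choices, not from the text. The derivative of a smooth convex function is driven to zero by repeated secant steps.

```python
import math

def secant_minimize(fprime, x_prev, x, tol=1e-10, max_iter=50):
    """Secant method for minimizing a smooth 1-D function:
    iterate x <- x + h with h = -g (x_bar - x)/(g_bar - g), cf. (4.9)."""
    g_prev, g = fprime(x_prev), fprime(x)
    for _ in range(max_iter):
        if abs(g) < tol or g == g_prev:
            break
        h = -g * (x_prev - x) / (g_prev - g)
        x_prev, g_prev = x, g
        x = x + h
        g = fprime(x)
    return x

# Example: f(x) = x^2 + exp(x) is convex; f'(x) = 2x + exp(x)
xmin = secant_minimize(lambda x: 2 * x + math.exp(x), -1.0, 0.0)
```

For a smooth strictly convex function this converges superlinearly to the point where f′ vanishes.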
If we assume that (4.10) holds not only for h̄ but also for all h ∈ R, the corresponding σ-Newton iteration boils down to
(4.11)

However, numerical experimentation did not suggest any superiority of (4.11) over (4.9). Actually, passing from h̄ to all h in (4.10) amounts to replacing f by a quadratic, which is a bit coarse in the nonsmooth case.

It was shown in [21] that minimizing M_σ of (4.7) can conveniently be done in two steps: first, find an optimal direction p* by minimizing M_σ on the unit ball; then find the stepsize by minimizing M_σ(tp*) with respect to t > 0 (this stepsize is given directly by a linear equation, thanks to positive homogeneity). Finding the optimal p* can be illustrated when ∂f(z) = {∇f(z)} happens to be a singleton, and is also decomposed in two steps, illustrated by Figure 6: first find λ_σ ∈ ]0, 1] solving min{λ : λ∇f(z) ∈ ∂_σ f(z)}; then take a hyperplane separating ∂_σ f(z) from the origin at λ_σ ∇f(z). The normal to such a hyperplane is a p* (observe incidentally that it is a σ-descent direction).
Figure 6. A σ-Newton direction

The above calculation can be carried out when ∂_σ f(z) is an ellipsoid, or a polyhedron. Suppose first that f is quadratic. Either by plugging (2.11)
into (4.7), or by applying the above construction to (2.13), straightforward calculations show that p* points to the minimum of f, independently of σ. This is due to the fact that the differential quotient in (4.5) is constant, just as in (4.4) or (4.3), and that M_σ ≡ f.
Suppose now that f is piecewise linear and make the same assumptions that imply (2.22), (2.23). Particularizing Figure 6 to this case shows that, once again, p* is independent of the small σ; as shown in [20], this p* points to the minimizer of f on the polyhedron enclosing z, over which f(z + h) = f(z) + gᵀh. However, the stepsize does not give that minimizer; it is equal to −2gᵀp* t²(p*)/σ (compare with (2.20), (2.22) and (4.6)), which diverges when σ ↓ 0 (its numerator does not depend on σ!): the curvature of a piecewise linear function is 0 everywhere.
Figure 7. Absence of gradients implies discontinuity of second order
To finish this section, we mention that a σ-Newton direction can still be computed when ∂_σ f(z) is an ellipsoid and ∂f(z) is a polyhedron characterized by its extreme points (apply the construction of Figure 6 to each of the extreme points and take one that gives the best M_σ-value). This remark is related to what we consider a serious deficiency of the present approach: M_σ(·) may not be convex; there may be several isolated σ-Newton iterates, which means some instability. This last problem has its roots in the very beginning of the approach: f″ defined by (4.3), (4.4) or (4.5) is not a continuous function of p; see a counter-example in [15]. Indeed, suppose f = max{f₁, f₂} with each fᵢ smooth (for example quadratic). Take z on the critical surface f₁ = f₂; when p crosses the tangent plane to this (smooth) surface, f″(z, p) jumps between the (most likely different) values pᵀ∇²fᵢ(z)p, i = 1, 2 (see Figure 7). This trouble is very serious because the tangent plane is most likely to contain the direction we are looking for (for example, the steepest descent direction is such). A possibility to avoid it might be to use the second order derivative of [15]. It would consist in replacing f′_σ(z, ·) − f′(z, ·) by its convex hull in the definition (4.7) of M_σ(·). Then M_σ would become convex and would remain continuous when σ ↓ 0. However, (4.6) would become an inequality, limiting the validity of the model.
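The discontinuity just described is easy to exhibit numerically. In the sketch below, the two smooth pieces, the kink point, and the tangent direction are our own choices; the one-sided second difference quotient of f = max{f₁, f₂} jumps between pᵀ∇²f₁ p and pᵀ∇²f₂ p as p crosses the tangent plane.

```python
import numpy as np

f1 = lambda x: x[0]**2              # both pieces convex, so f = max(f1, f2) is convex
f2 = lambda x: x[0] + x[1]**2
f  = lambda x: max(f1(x), f2(x))

z = np.array([2.0, np.sqrt(2.0)])   # f1(z) = f2(z) = 4: z lies on the critical surface
# grad(f1 - f2)(z) = (3, -2*sqrt(2)); tangent directions satisfy 3 p1 = 2*sqrt(2) p2
p_tan = np.array([2.0 * np.sqrt(2.0), 3.0])

def second_dd(p, t=1e-4):
    """One-sided second difference quotient of f at z along p."""
    return (f(z + 2 * t * p) - 2 * f(z + t * p) + f(z)) / t**2

plus  = second_dd(p_tan + np.array([0.01, 0.0]))   # f1 active: ~ p^T Hess(f1) p = 2 p1^2
minus = second_dd(p_tan - np.array([0.01, 0.0]))   # f2 active: ~ p^T Hess(f2) p = 2 p2^2
```

Here `plus` is near 2·(2√2 + 0.01)² ≈ 16.1 while `minus` is near 2·3² = 18: an O(1) jump across the tangent plane, exactly the instability discussed in the text.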
5. THEORETICAL ANSWER 3: DIFFERENTIATING A CONVEX-VALUED MAPPING

The elements characterizing the approach followed in Section 4 were essentially: (i) to consider a second order development of f; (ii) to make this development along one single direction (i.e., if there were a Hessian ∇²f(·, ·), operating on Rⁿ × Rⁿ, only its diagonal trace ∇²f(p, p) would be used); (iii) to use the development with respect to σ ↓ 0, instead of h → 0. In this section, the 3 points above will be treated the opposite way.
5.1. Motivation

Minimizing f amounts to solving the infinite system of inequalities

f′(z, p) ≥ 0  ∀p,

or, equivalently, to solving the inclusion 0 ∈ ∂f(z). Applying the Newton idea to either of these problems consists in approximating f′(z + h, p) (resp. ∂f(z + h)) for small h by a "simple" function (resp. multi-function) and solving for h the resulting system (resp. inclusion). However, such approximations are unlikely to exist, since they are supposed to approximate items which are not even continuous (take n = 1, f(z) = |z|, z = 0, p = 1, h ↓ 0; the situation is different from that of Section 4, where the function f′(z + tp, p), continuous for t ↓ 0, was used). Thus, it is time to replace f′ and ∂f by f′_ε and ∂_ε f which, for given ε > 0, vary in a
Lipschitzian way when z varies.

Approximating f′_ε(z + h, p) amounts to finding a derivative of f′_ε(·, p). The result wanted is given in [19], [1], namely that, for fixed d ∈ Rⁿ, p ∈ Rⁿ, one can define

lim { (1/s)[f′_ε(z + sd, p) − f′_ε(z, p)] : s ↓ 0 } =: f″_ε(z, p; d).  (5.1)
Furthermore, this limit has an expression, which can be illustrated as follows: suppose p is such that T(ε) of (2.3) is a singleton t_ε(p), and also that the max in (2.7)′ is attained at a single g_ε(p) ∈ ∂_ε f(z); finally, suppose f has the gradient g₀ at z. Then

(5.2)

It also holds that g_ε(p) ∈ ∂f[z + t_ε(p)p], and this sheds some light on (5.2) (see Figure 5); for example, assuming f ∈ C², we obtain

(5.3)

where A(ε, p) is the mean value of ∇²f between z and z + t_ε(p)p.
From (5.1), we have the approximation

f′_ε(z + h, p) ≈ f′_ε(z, p) + f″_ε(z, p; h),  (5.4)

so, if we solve for h (standing for sd) the infinite system

f′_ε(z, p) + f″_ε(z, p; h) ≥ 0  ∀p,  (5.5)

we obtain nothing but a Newton iterate to find an ε-minimizer of f.

To illustrate this development, suppose first that f is quadratic. Then (5.2) gives directly f″_ε(z, p; d) = pᵀAd (which is independent of ε) while, using (2.11), (5.5) reads

[∇f(z) + Ah]ᵀp + (2ε pᵀAp)^{1/2} ≥ 0  ∀p.  (5.6)
This system can be solved: for example, minimize with respect to p the convex function on the left-hand side and obtain the relation

Ap = −(pᵀAp/(2ε))^{1/2} [∇f(z) + Ah],

which, plugged back into (5.6), gives after some manipulations the equivalent condition

(1/2)[h + A⁻¹∇f(z)]ᵀ A [h + A⁻¹∇f(z)] ≤ ε.  (5.7)
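The claim that follows, namely that for a quadratic f the ellipsoid (5.7) coincides exactly with the set of ε-minimizers, is easy to check numerically. In this small sketch the matrix, base point, and random trial steps are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])    # positive definite Hessian
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x + b @ x
z = np.array([0.5, 0.5])                  # current point
x_star = np.linalg.solve(A, -b)           # unconstrained minimizer
eps = 0.1

for _ in range(1000):
    h = rng.normal(size=2)                # random trial step
    v = h + np.linalg.solve(A, A @ z + b) # v = h + A^{-1} grad f(z) = (z + h) - x*
    in_ellipsoid = 0.5 * v @ A @ v <= eps       # condition (5.7)
    is_eps_min = f(z + h) <= f(x_star) + eps    # z + h is an eps-minimizer of f
    assert in_ellipsoid == is_eps_min
```

The identity behind the test: ½ vᵀAv = f(z + h) − f(x*), since v = (z + h) − x* for a quadratic f.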
Thus, the solutions of (5.6) form an ellipsoid around the Newton step. Further calculations show that they cover exactly the ε-minimizers of f, and this was to be expected since the approximation (5.4) is exact if f is quadratic.

Another illustration is when f is piecewise linear, with the assumptions leading to (2.22), (2.23). Assume further that there is a single maximal i(p) in (2.22) (or (2.20)). Then g_ε(p) of (5.2) is the g_{i(p)}(ε) of (2.23). From (2.20), there follows

(5.8)
Using (2.22) and (5.8) in (5.5), we see that the resulting Newton iteration consists in solving for h:

where v(p) := [g_{i(p)} − g_k]/q_{i(p)}. This approach will of course remain a dead end until an efficient way is found to solve such a system.

5.2. Differentiating a multivalued mapping
It is not clear at all whether the development of Section 5.1 will ever be of any use in numerical optimization. Several basic questions will have to be answered first, such as: actual computation of f″_ε(z, p; d) (a lot more complex than that of f′_ε(z, p)); continuity, convexity, symmetry, etc. of f″_ε(z, ·; ·); actual solution of (5.5); properties of the resulting Newton steps. However, this development raises a question which has its own interest from a purely mathematical point of view: in view of the correspondence (see Section 2.2) between the sublinear function f′_ε(z, ·) and the convex set ∂_ε f(z), does there correspond to the derivative of f′_ε(·, p) a "derivative" of the mapping
∂_ε f(·)?

Thus, forgetting that we are minimizing f, we generalize the framework and use notations in the spirit of Section 2.2: we are given a finite and sublinear function s(t, p) (standing for f′_ε(z + td, p)) and a function denoted by ṡ(0, p) (standing for f″_ε(z, p; d)) defined by

ṡ(0, p) := lim { (1/t)[s(t, p) − s(0, p)] : t ↓ 0 }

(note: ṡ(0, ·) is positively homogeneous but probably not convex!). Likewise, for t ↓ 0, we are given the convex compact set G(t), related to s(t, ·) through

G(t) = {g ∈ Rⁿ : gᵀp ≤ s(t, p) ∀p ∈ Rⁿ};  s(t, p) = max {gᵀp : g ∈ G(t)}.
The question is: does there exist a set, H(t) say, which is also convex and compact, which approximates G(t) to first order, and such that the mapping t → H(t) is "simpler" than G? The first order approximation property means (B is the unit ball in Rⁿ):

H(t) ⊂ G(t) + t ε(t) B and G(t) ⊂ H(t) + t ε(t) B, with ε(t) → 0.

In terms of the support function of (2.8)′, this can also be written

(5.9)
Assume that the δ_p(t) in (5.9) has the form δ(t)|p|, i.e., that the approximation (5.9) is uniform with respect to p ∈ B. Then we see that s(0, ·) + t ṡ(0, ·) must also be a first-order approximation to s(t, ·). Thus we choose

H(t) := {g ∈ Rⁿ : gᵀp ≤ s(0, p) + t ṡ(0, p) ∀p ∈ Rⁿ},  (5.10)

and it can be shown that this H(t) approximates G(t) when, in addition to uniformity, the Slater assumption int G(0) ≠ ∅ holds. For details, we refer
For details, we refer
to [7].

The question of differentiating a multi-valued mapping, although not new, has not received a definitive answer yet (see [6]), even in the present convex-compact situation. Note in particular that, with H defined by (5.10), there is no set G′(0) for which H(t) = G(0) + tG′(0) (contrary to the single-valued case); such a linearity is hopeless. Indeed, it seems that the relevant property to be satisfied by a derivative such as H is to have a maximal convex graph, see [22].

6. ANSWERING THE PRACTICAL QUESTION: BUNDLE METHODS

In this section, we address the problem of how to approximate the approximate subdifferential, in order to make implementable the algorithms of Sections 3 to 5.
6.1. The rules of the game

It is reasonable to consider that the only information computable at a given z is the value f(z) and one subgradient g(z) ∈ ∂f(z); the notation g(z) simply means that there is a deterministic process (a computer subprogram) that chooses the particular subgradient; it also suggests that the situation is just the same as in classical smooth optimization, where one accepts to compute the gradient. On the other hand, in the course of a minimization algorithm, one can store the information accumulated during the previous iterations. So, from now on, we denote by x_k (the current iterate) the point z of Sections 2-5 where h must be computed, and we assume that we have at our disposal the finite set of triples

{(x_i, f(x_i), g_i) : i = 1, ..., k};  (6.1)

this is what we call the bundle of information. With these data, we can compute the non-negative numbers

q_i := f(x_k) − f(x_i) − g_iᵀ(x_k − x_i),  (6.2)

whose meaning is illustrated by Figure 8, in the space of the graph. It shows directly from (2.9) that g_i ∈ ∂_{q_i} f(x_k), i = 1, ..., k, so, by convex combination:

G_k(ε) := { Σ λ_i g_i : λ_i ≥ 0, Σ λ_i = 1, Σ λ_i q_i ≤ ε } ⊂ ∂_ε f(x_k).  (6.4)
Comparing (6.4) with (2.16) shows that G_k(ε) = ∂_ε f_k(x_k), where

f_k(y) := f(x_k) + max_i {−q_i + g_iᵀ(y − x_k)} = max_i {f(x_i) + g_iᵀ(y − x_i)}  (6.5)

is the cutting plane approximation of f. Inclusion (6.4) simply means that f_k is the lowest convex function compatible with the information contained in the bundle (6.1).

Figure 8. Supporting plane at x_i
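The linearization errors (6.2) and the cutting plane model (6.5) are straightforward to realize in code. The following sketch (function and bundle points chosen by us) checks that f_k interpolates f at the bundle points and underestimates it everywhere, as a lowest compatible convex function must:

```python
import numpy as np

f = lambda x: x**2                      # smooth convex test function (our choice)
grad = lambda x: 2 * x

xs = np.array([-2.0, -0.5, 1.0, 3.0])   # bundle points x_1 .. x_k
gs = grad(xs)                           # (sub)gradients g_i
xk = xs[-1]                             # current iterate x_k

# linearization errors (6.2): q_i = f(x_k) - f(x_i) - g_i (x_k - x_i)
q = f(xk) - f(xs) - gs * (xk - xs)
assert np.all(q >= -1e-12)              # non-negative, by convexity of f

# cutting plane model (6.5): f_k(y) = max_i { f(x_i) + g_i (y - x_i) }
fk = lambda y: np.max(f(xs) + gs * (y - xs))

for y in np.linspace(-4, 4, 81):
    assert fk(y) <= f(y) + 1e-12        # f_k underestimates f everywhere
for xi in xs:
    assert abs(fk(xi) - f(xi)) < 1e-12  # ... and interpolates at the bundle points
```

The two expressions of f_k in (6.5) are algebraically identical; the second one is used here because it needs no reference point.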
Comparing Figures 1 and 8 also shows that, for p = x_i − x_k:

(6.6)

and the T(q_i) of (2.3) contains 1; therefore, from (2.4):

(6.7)

From (2.8)′, this implies

(6.8)

in which G′_k(ε) is the ε-subdifferential of the convex hull of the f(x_i)'s, i.e., of the largest convex function compatible with the bundle (6.1); note that this inclusion has killed all the differential information contained in (6.1): G′_k(ε) does not depend on the g_i's.
6.2. Inner approximations and ε-descent methods

Up to now, all the known bundle methods have been based on a more or less sophisticated combination of Section 3 and (6.4). The first instance was the old cutting plane method itself [5], [16]. For the time being, there exist essentially two best implementations following this approach: one consists in solving

min { ½ |Σ λ_i g_i|² : λ_i ≥ 0, Σ λ_i = 1, Σ λ_i q_i ≤ ε }  (6.9)

for a suitably chosen ε ≥ 0. The resulting direction points inside the level line f(x_k) − ε of the cutting plane function f_k of (6.5). If this f_k has a minimum, then this direction goes to the minimum closest to x_k when ε goes to f_k(x_k) − f_k(x*) (see Figure 4 again, with f replaced by f_k).
The other implementation consists in solving (note the abstract equivalence between (6.9) and (6.10))

min { (1/(2u)) |Σ λ_i g_i|² + Σ λ_i q_i : λ_i ≥ 0, Σ λ_i = 1 }  (6.10)

for a suitably chosen u ≥ 0. The resulting step h = −Σ λ_i g_i/u minimizes f_k(x_k + h) + ½ u |h|², with f_k defined by (6.5); to see this, consider the problem

min { z + ½ u |h|² : z ≥ f(x_k) − q_i + g_iᵀh, i = 1, ..., k },

and apply duality theory with dual variables λ_i. This shows that the present approach is a particular form of the prox algorithm,

x_{k+1} := Arg min { f(y) + ½ u |y − x_k|² },

implemented through a particular form of the cutting plane algorithm; see [2]. For a general and extensive study of this kind of method, we refer to the monograph [17]. Here, we mention some facts which are little known.
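The primal-dual equivalence around (6.10) can be checked on a toy instance (all numbers below are our own choices): brute-force minimization of f_k(x_k + h) + ½u|h|² recovers the step h = −Σ λ_i g_i/u obtained from the dual problem.

```python
import numpy as np

# Two cutting planes at x_k: slopes g_i, linearization errors q_i (our values)
g = np.array([-2.0, 1.0])
q = np.array([0.5, 0.0])
u = 1.0

# Primal: minimize f_k(x_k + h) + (u/2) h^2 by brute force (f(x_k) set to 0)
h_grid = np.linspace(-3, 3, 60001)
planes = -q[:, None] + g[:, None] * h_grid
primal = planes.max(axis=0) + 0.5 * u * h_grid**2
h_primal = h_grid[primal.argmin()]

# Dual (6.10): minimize (1/2u)|sum l_i g_i|^2 + sum l_i q_i over the simplex
lam = np.linspace(0, 1, 10001)
p = lam * g[0] + (1 - lam) * g[1]
dual = p**2 / (2 * u) + lam * q[0] + (1 - lam) * q[1]
l_star = lam[dual.argmin()]
h_dual = -(l_star * g[0] + (1 - l_star) * g[1]) / u
```

Both computations land on the same step (h ≈ −1/6 for these numbers), up to the grid resolution; in a real implementation (6.10) is of course solved by a specialized QP routine rather than by enumeration.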
(i) In (6.9) and (6.10), |·| is the Euclidean norm, for want of a better idea. Yet, it has been suggested at the end of Section 3 that some kind of Hessian would probably be desirable, and in [9] an object has been defined which could be called a Hessian, and which is constructed by the ellipsoid algorithm. J.L. Goffin has conducted experiments which show that dramatic improvement could probably be obtained through a proper combination of bundle methods and ellipsoid updates. Along this line, we also refer to some theoretical work being done by K.C. Kiwiel.

(ii) Little is known concerning rates of convergence. Reduced to its essentials, the question of rates of convergence is the following (see [29]): construct two sequences
{g_k} and {p_k} as follows:

p_k := Proj 0 / conv[g_1, ..., g_k], i.e., p_k solves (6.10) with u = 0;
g_{k+1} arbitrary satisfying |g_{k+1}| ≤ 1 and g_{k+1}ᵀ p_k ≤ 0.  (6.11)

Then p_k → 0 and the question is: at what speed? The only known indication is given by the following argument: construct two other sequences {g′_k} and {p′_k}:

p′_k := Proj 0 / conv[p′_{k−1}, g′_k];
g′_{k+1} arbitrary satisfying |g′_{k+1}| ≤ 1 and g′_{k+1}ᵀ p′_k ≤ 0.  (6.12)
Elementary calculations result in the asymptotic estimate

|p′_k|² ~ 1/k.  (6.13)

Because it obviously holds, at each iteration (mutatis mutandis), that

|p_k|² ≤ |p′_k|²,  (6.14)

we obtain that bundle methods have at least the sublinear rate 1/k. However, there are good reasons to think that (6.14) is very pessimistic for large k (or a rich bundle), one of those reasons being the observed behaviour of the algorithm.
We mention that (6.14) becomes an equality when the sequence {g_k} is orthogonal, i.e., g_iᵀg_j = 0 for i ≠ j; this means that, in infinite dimension, bundle methods may have a definitely sublinear rate of convergence (unless the function f is smooth; (6.13) improves largely if g_k → 0).
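The estimate (6.13) can be observed directly: run (6.12) with mutually orthogonal unit vectors g′_k, the worst case just described. In this short computation (setup ours), the squared norm |p′_k|² equals 1/k exactly at every step.

```python
import numpy as np

K = 50
p = np.zeros(K); p[0] = 1.0                  # p'_1 = g'_1 = first unit vector
for k in range(2, K + 1):
    g = np.zeros(K); g[k - 1] = 1.0          # g'_k orthogonal to p'_{k-1}, |g'_k| = 1
    # p'_k := projection of the origin onto the segment [p'_{k-1}, g'_k]
    d = g - p
    t = np.clip(-p @ d / (d @ d), 0.0, 1.0)
    p = p + t * d
    assert abs(p @ p - 1.0 / k) < 1e-10      # |p'_k|^2 = 1/k, cf. (6.13)
```

The recursion behind the assertion: with g orthogonal to p and b = |p|², the projection gives |p_new|² = b/(b + 1), and b₁ = 1 yields b_k = 1/k.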
(iii) The sequence constructed by (6.12) suggests one of the simplest forms of bundle methods, which is very easy to state if f is C¹ (if not, the need for null steps makes the exposition more involved).

Choose ε > 0, δ > 0. Initialize z ∈ Rⁿ, g := ∇f(z), p := g, q := 0 ∈ R.
By a line-search, find z⁺ = z − tp, g⁺ = ∇f(z⁺). Set p⁺ := λp + (1 − λ)g⁺ and q⁺ := λq, where λ = max{0, [(g⁺ − p)ᵀg⁺ − q]/|p − g⁺|²}.
If |p⁺| ≤ δ and q⁺ ≤ ε, stop: for all y, f(y) ≥ f(z⁺) − δ|y − z⁺| − ε; otherwise loop with z⁺, p⁺, q⁺ replacing z, p, q.

This algorithm is the adaptation of (6.12) to (6.10) with u = 1. Note its similarity with a conjugate gradient method, in which the direction is a combination of the gradient with the previous direction. When applied to a nonsmooth function, it converges with a sublinear rate. For a smooth function, a linear rate probably holds, related to a Lipschitz constant for ∇f.
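A minimal realization of this scheme for a smooth convex function is sketched below. The test problem is our own; since gradients are exact, the error term q is dropped, so that λ reduces to projecting the origin onto the segment [p, g⁺], and the line search is taken exact (valid for quadratics). This is a simplified illustration, not the full method with null steps.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 2.0]])    # our strictly convex quadratic test problem
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

z = np.array([3.0, -2.0])
p = grad(z)

for _ in range(100):
    if p @ p < 1e-20:
        break
    t = (p @ grad(z)) / (p @ A @ p)       # exact line search along -p
    z = z - t * p
    g = grad(z)
    d = p - g
    # combine the new gradient with the previous direction, cf. (6.12):
    lam = np.clip(((g - p) @ g) / (d @ d), 0.0, 1.0) if d @ d > 0 else 0.0
    p = lam * p + (1 - lam) * g
```

With exact line searches on a quadratic, the combined direction is proportional to the conjugate gradient direction, so the iteration terminates at the minimizer in n steps; this is the "similarity with a conjugate gradient method" noted above.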
(iv) Almost exactly the same algorithms can be applied to nonconvex f; the only essential change concerns the definition of the q_i's, which cannot be simply (6.2), but something like |x_i − x_k| (see the end of Section 2). However, a practical difficulty is to have a new definition that coincides with (6.2) when f happens to be convex; the only proposal for this is [17, Chap. 4], which in turn suffers from some practical difficulties concerning the stopping criterion. We consider that obtaining a fully satisfactory extension to general functions is not easy (although many real-life problems, differentiable or not, convex or not, are being solved through the present approach, and in general satisfactorily).

For general f, convexity must be replaced by some other hypothesis; at the moment, it does not seem possible to minimize a general locally Lipschitz function. We mention [4], which gives the weakest known assumption, probably the minimal one.
6.3. Other combinations

The next idea to construct an algorithm is of course to combine (6.4) with the other two theoretical answers. Concerning Section 5, too many fundamental questions are still open; one is that solving (5.5) for piecewise linear f is a mystery; another one is that, if f is piecewise linear, then the uniform convergence mentioned at the end of Section 5.2 does not hold (along these lines, we mention some theoretical work of E. Panier: given a piecewise linear function such as f_k of (6.5), the necessary hypotheses for approximating ∂_ε f_k are met if one perturbs f_k to

for small u and large v, where ∇ denotes the inf-convolution).

On the other hand, combining (6.4) with Section 4, more precisely with (4.6), can be considered more concretely. Consider the situation leading
to (2.19)-(2.23), i.e., assume

(6.15)

(Note that, in the framework of an algorithm, this assumption is not restrictive at all.) Then f_k is differentiable at x_k, the extreme points of G_k(σ) are known, so the results of [21, Section 5] can be used.
In [20], it was shown that, in this situation, the σ-Newton idea ends up with a variant of the cutting plane algorithm, which is in practice so close to the original version that the whole approach is worthless. The reason for this unfortunate result is that the piecewise linear character imposed by the form of (6.4) is so strong that it kills all the second-order information contained in the quadratic-like model M_σ of (4.6). Thus, the approximation of ∂_σ f should be based more on quadratic approximations of f; in particular, it should shrink to {g_k} at speed σ^{1/2}, i.e., infinitely slower than
the pessimistic G_k(σ).

Together with J.J. Strodiot, we have made an attempt in this direction, which can be described as follows: assume

f[x_k + t(x_i − x_k)] is a quadratic function of t, for i = 1, ..., k − 1.  (6.16)

We deduce first from (2.11) that there is some C_i such that

The value of C_i can be constructed from (6.6), giving

Thus, somewhere on the hyperplane {g : gᵀ(x_i − x_k) = a_i}, there is an extreme point of ∂_σ f(x_k), say h_i(σ). To place h_i, use (6.16) again and assume that the gradient varies linearly in t, starting from g_k for t = 0 to g_i for t = 1. Direct computations give:

Then we can approximate ∂_σ f(x_k) by conv{h_1(σ), ..., h_{k−1}(σ), g_k}; the resulting σ-Newton direction is just that of [20] but with each q_i replaced by q_i^{1/2}.

Preliminary experiments with this approach have been rather inconclusive, but they have clearly indicated the following difficulty: in the early stages of any algorithm, in particular when k ≤ n, the t_σ of [20] is 1. Then the σ-Newton direction is essentially the solution of (6.10) with u = 0, except that λ_k is unconstrained; but, in practice, λ_k is always positive in (6.10). In other words, the algorithm in this situation becomes the original conjugate subgradient method of [29], which is not very good. Any kind of σ-Newton algorithm will need a special algorithm to get started.

Because (6.15) imposes small values for σ when k is large, the development in [20] is tied to the inescapably bad rate of decrease of G_k(σ) when σ ↓ 0. Another way round this difficulty would be to drop condition (6.15) (yet preserve differentiability of f_k at x_k) by fixing σ independently of k. We do not know what the resulting algorithm would be. In this case, G_k(σ) would have a priori O(k²) extreme points, formed by taking each pair (i, j) with q_i ≤ σ ≤ q_j.
6.4. Conclusions

Admitting that methods based on Section 3 need to be improved and updated, we consider that Section 4 brings interesting proposals. For these to materialize, it has become crucial to approximate the approximate subdifferential in a much more daring way than was done before. Little has been done in this direction, and several paths still remain unexplored. For example, we can suggest that:

(i) properties (6.6)-(6.8) have never been really used, including the fact that 1 is a good approximation of the derivative of f′_ε(x_k, x_i − x_k) at ε = q_i (see (2.5));

(ii) in the framework of a polyhedral approximation, it might be a good idea not to trust the bundle too literally, but to consider that g_i ∈ ∂f(x_k) if q_i is really small, as evoked in [8];

(iii) for large σ, ellipsoids might be the best suited approximations to ∂_σ f; in this case, a strategy "à la" quasi-Newton might be a good idea, in which an initial ellipsoid is modified minimally to fit with the new information brought at each iteration, while forgetting the older information.
Finally, we mention two more approaches which seem more advanced than the above somewhat fuzzy ideas.

(iv) Methods of Section 6.2, in which the direction solves (6.10), consist in approximating f near x_k by f_k + ½u|· − x_k|² (f_k given by (6.5)), a function made up of k quadratics, all with the same Hessian uI. More sophisticated approximations can be imagined, in which the bundle (6.1) also includes a set of matrices {H_i}, ideally representing the Hessian of f at x_i (if it exists), so that each function f_k^i approximates f near x_i and F_k := max{f_k^i : i = 1, ..., k} approximates f more globally. The kth direction-finding subproblem is then to minimize F_k, a (smooth) programming problem with k quadratic constraints which can be treated by classical means [27, Sections III.6, 7], [11]: linearize the constraints and use the Hessian of a Lagrange function. With notations in the spirit of Section 6.2, we are led to choosing a matrix H and solving

min { w + ½hᵀHh : f(x_k) − q_i + g_iᵀh ≤ w, i = 1, ..., k }  (6.17)

where (compare (6.2)):

The dual of (6.17) is

(6.18)

whose analogy with (6.10) is fairly clear.
To obtain H knowing the H_i's, an estimate of the multipliers is needed; it can be obtained from the solution of (6.18) at iteration k − 1, or from solving (6.18) a first time with H replaced by I, just as in the standard NLP situation. On the other hand, the problem of obtaining the H_i's is new because the constraints are introduced somewhat artificially via the bundle, and all kind of second order information is definitely out of reach. The following idea, due to R. Mifflin, is an interesting starting point.

Call I_k the set of active constraints in (6.17). It is generally the case in bundle methods that, after a transient period, g_k is always in I_k, and |I_k| is constant. This means that, when solving (6.17) at iteration k + 1, g_{k+1} will enter the basis and some g_{i(k)} will leave the basis. The proposal is then to update H_{i(k)} by a secant formula involving x_{k+1} − x_{i(k)} and g_{k+1} − g_{i(k)}.
To motivate this strategy, suppose f is the maximum of a finite number of smooth functions. We can hope that, when k is large enough, I_k includes all smooth pieces that make up f near the optimum. Then the new quadratic piece k + 1 approximates the same smooth function as some older piece (call it f_{j(k)}), but more accurately (because x_{k+1} is closer to the optimum than x_{j(k)}). Admitting that x_{j(k)} − x_{k+1} is small, the information contained in q_{k+1}, g_{k+1} and q_{j(k)}, g_{j(k)} is similar, so k + 1 and j(k) should not coexist in the same active set I_{k+1}. If this intuition is correct, i(k) = j(k): both i(k) and k + 1 belong to the same smooth piece.
For more details along these lines, we refer to [24].

(v) The second approach we outline is given by E.A. Nurminskii [26]. Let
a ∈ Rⁿ be fixed and denote by G ⊂ Rⁿ⁺¹ the graph of ε → ∂_ε f(a):

G = {(ε, g) : ε ≥ 0, g ∈ Rⁿ, g ∈ ∂_ε f(a)}.

Given (−1, h) ∈ Rⁿ⁺¹, compute the support function σ_G(−1, h), i.e., solve

max {−ε + gᵀh : (ε, g) ∈ G}  (6.19)

to obtain a solution ε(h), g(h). Solving (6.19) merely reduces to computing f(a + h): a way to see this is to solve (6.19) for g first, obtaining max_ε {−ε + f′_ε(a, h)}, and then apply the end of Section 2.1. Clearly, we wish to choose h so that g(h) = 0, but we may also regard this construction as an approximation scheme: it yields an inner information about G, namely (ε(h), g(h)) ∈ G, as well as an outer information, namely the half-space

H(h) := {(ε, g) : −ε + gᵀh ≤ −ε(h) + g(h)ᵀh}.

Thus if a pair of sets G⁻ ⊂ G ⊂ G⁺ is already on hand, computing h in some suitable way improves the approximation to

G⁻ ∪ {(ε(h), g(h))} ⊂ G ⊂ G⁺ ∩ H(h).  (6.20)

Most algorithms for nonsmooth optimization can be rediscovered this way, depending on how h is computed and on how a is updated.
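For a concrete f (our choice f(x) = x², for which ∂_ε f(a) has the closed form [2a − 2√ε, 2a + 2√ε]), the support-function computation (6.19) can be carried out by brute force over ε, confirming that it reduces to computing f(a + h): σ_G(−1, h) = f(a + h) − f(a).

```python
import numpy as np

# f(x) = x^2: g is in the eps-subdifferential at a iff |g - 2a| <= 2*sqrt(eps),
# since f*(g) = g^2/4 and f(a) + f*(g) - g a = (g/2 - a)^2.
a, h = 0.7, 1.3

best = -np.inf
for eps in np.linspace(0.0, 10.0, 50001):
    r = 2.0 * np.sqrt(eps)
    g = 2.0 * a + r if h > 0 else 2.0 * a - r   # endpoint of the interval maximizing g*h
    best = max(best, -eps + g * h)              # inner maximization of (6.19)

target = (a + h)**2 - a**2                      # f(a + h) - f(a)
```

Analytically, the maximum is attained at ε = h², with value 2ah + h² = f(a + h) − f(a), which the grid search reproduces to high accuracy.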
A convex analyst will recognize that G is the epigraph of the (convex) function

α(g) := min{ε : g ∈ ∂_ε f(a)} = min{ε : f(a) + f*(g) − gᵀa ≤ ε} = f(a) + f*(g) − gᵀa,

and that α*(h) = f(a + h) − f(a), which explains the above development.
We believe that the "sandwiching" technique (6.20) is an interesting new view of nonsmooth optimization, which could suggest original ways to compute h, i.e., to define an optimization algorithm. Sandwiching the graph of f* between G⁻ and G⁺ exactly corresponds to sandwiching the graph of f between (6.5) and (6.8).
REFERENCES [l] A. AUSLENDER, O n the differential properties of the support function of the subdifferential of a conwex function. Math. Prog. 2 4 , 3 (1982) 257-268. (21 A. AUSLENDER, Numerical methods f o r nondifferentiable conwex optimization problems. In : Mathematical Programing Study; B. Cornet,
V.H. Hien, J . Ph. Vial (Eds.), (1986). [3] D.P. BERTSEKAS, S.K. MITTER, A descent numerical method f o r optimization problems with nondifferentiable cost functionals. SIAM Journal
on Control 11,4 (1973) 637-652. [4] A.BIHAIN, Optimization of upper semi-differentiable functions. J. Opt.
Theory & Appl. 44 (1984) 545-568. [5] E.W. CHENEY, A.A.GOLDSTEIN, Newton's method for conwex programming and Tchebycheff approximation . Numerische Mathematik 1 (1959)
253-268. [6] V.F. DEMJANOV, C. LEMARECHAL, J. ZOWE, Attempts t o approxi-
mate a set-valued mapping. In: V.F. Demjanov, D. Pallaschke (Eds.) Non
differentiable Optimization: Motivations and Applications. Lecture Notes in Economics and Mathematical Systems 255, Springer Verlag (1985), 3-7.
[7] V.F. DEMJANOV, C. LEMARECHAL, J. ZOWE, Approximation t o a set-valued mapping. I a proposal. To appear in Applied Mathematics and
Optimization. [8] M. GAUDIOSO, An algorithm for conwex N D O based o n properties of the contour lines of conwex quadratic functions. In: V.F.Demjanov, D. Pal-
laschke (Eds.) Nondifferentiable Optimization: Motivations and Applications. Lectures notes in Economics and Mathematical Systems 255, Springer Verlag (1985), 190-196.
C.Lemarechal
238
191 J.L. GOFFIN, Variable metric relazation methods. Part 11: The ellipsoid
method. Math. Prog. 30.2 (1984), 147-162.
[ 101 A.A. GOLDSTEIN, Optimization of Lipschitz continuous functions. Math. Prog. 13,l (1977), 14-22. Ill] S.P. HAN, Variable metric methods for minimizing a class of nondifferentiable functions. Mathematical Programming 20, 1 (1981), 1-13.
[12] J.-B. HIRIART-URRUTY, Limiting behaviour of the approzimate firstorder and second-order directional derivatives for a convex function, Non-
linear Analysis: Theory, Methods & Applications, 6, 12 (1982), 1309-1326. [13] J.-B. HIRIART-URRUTY, €-differential calculus. In: “Convex Analysis
and Optimization”, Res. Notes in Math. 57 (Pitman, 1982), 43-92. [14] J.-B. HIRIART-URRUTY, The approzimate first-order and second-order directional derivatives for a convex function. in “Mathematical Theories of
Optimisation” J.P. Cecconi, T. Zolezzi Eds, Lecture notes in Mathematics 979. Springer-Verlag (1983).
[ 151 J.-B. HIRIART-URRUTY, A new set-valued second-order derivative for convex functions, this volume.
[16] J.E. KELLEY, The cutting plane method for solving convex programs.
Journal of the SIAM 8 (1960), 703-712. [17] K.C. KIWIEL, Methods of descent for non differentiable optimiza-
tion. Lecture notes in Mathematics 1133, Springer-Verlag (1985). [18] C. LEMARECHAL, Eztensions diverses des methodes de gradient et applications. Dissertation Thesis, Univerity of Paris IX (1980).
[19] C. LEMARECHAL, E.A. NURMINSKII, Sur la diffirentiabilite‘ d e la fonction d ’appui du sous-diffirential approche. Comptes Rendus Acad. Sc.
Paris 290 A (1980), 855-858.
Constructing bundle methods for convex optimization
23 9
[20] C. LEMARECHAL, J.J. STRODIOT, Bundle methods, cutting-plane algorithms and σ-Newton directions. In: V.F. Demjanov, D. Pallaschke (Eds.), "Nondifferentiable Optimization: Motivations and Applications". Lecture Notes in Economics and Mathematical Systems 255, Springer-Verlag (1985), 25-33.
[21] C. LEMARECHAL, J. ZOWE, Some remarks on the construction of higher-order algorithms in convex optimization. J. Appl. Math. & Opt. 10 (1983), 51-68.
[22] C. LEMARECHAL, J. ZOWE, Approximation to a multi-valued mapping. II: existence and unicity. In preparation.
[23] R. MIFFLIN, Stationarity and superlinear convergence of an algorithm for univariate locally Lipschitz constrained minimization. Math. Prog. 28, 1 (1984).
[24] R. MIFFLIN, Better than linear convergence and safeguarding in nonsmooth minimization. In: "System Modelling and Optimization", P. Thoft-Christensen (Ed.), Springer-Verlag (1984), 321-330.
[25] J.J. MORE, Recent developments in algorithms and software for trust region methods. In: "Mathematical Programming, The State of the Art - Bonn 1982", A. Bachem, M. Grotschel, B. Korte (Eds.), Springer-Verlag (1983), 259-287.
[26] E.A. NURMINSKII, ε-subgradient image and problem of convex optimization. Kibernetika 6 (1985), 61-63 (in Russian; English translation to appear in Cybernetics; see also one more paper to appear in Kibernetika (1986) and one more in USSR Computational Mathematics and Mathematical Physics).
[27] B.N. PSHENICHNYI, YU.M. DANILIN, Numerical Methods for Extremal Problems. Mir (1978).
[28] R.T. ROCKAFELLAR, Convex Analysis. Princeton University Press (1970).
[29] P. WOLFE, A method of conjugate subgradients for minimizing nondifferentiable functions. In: "Nondifferentiable Optimization", M.L. Balinski & P. Wolfe (Eds.), Math. Prog. Study 3 (1975), 145-173.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
EXISTENCE THEOREMS IN NONLINEAR ELASTICITY
P. MARCELLINI
Istituto Matematico "U. Dini"
Viale Morgagni 67/A
50134 Firenze (Italy)
Keywords. Nonlinear elasticity. Variational problems. Quasiconvex integral functionals.

Following the existence theory in nonlinear elasticity introduced by J. Ball [1], we are interested in minimizing, on a fixed class of functions, an energy integral of the type

    ∫_Ω g(x, ∇u, adj ∇u, det ∇u) dx − ∫_Ω ψ(x)·u dx.    (1)

Here Ω is a bounded open connected set in R³, and u is a vector-valued function defined in Ω, of class C¹(Ω; R³). Moreover, ∇u is the 3 × 3 matrix of the deformation gradient, det ∇u is the determinant of the matrix ∇u, which we assume to be positive, and, as usual, adj ∇u is the matrix (det ∇u)(∇u)⁻¹. The functions g and ψ satisfy: g = g(x, ξ, η, δ) is defined and continuous on

    Ω × R⁹ × R⁹ × R,    (2)

with values in the closed interval [0, +∞]. Moreover, g is convex with respect to (ξ, η, δ) and it is finite if and only if δ > 0 (since g is continuous, in particular g → +∞ as δ → 0⁺).
There exist a positive constant c and exponents p, q > 1 such that

    g(x, ξ, η, δ) ≥ c(|ξ|^p + |η|^q),  ∀ x, ξ, η, δ.    (3)

Moreover, ψ ∈ L^r(Ω; R³) for some r ≥ 1 such that

    (3 − p)/3p + 1/r ≤ 1  (i.e., r ≥ 3p/(4p − 3)).    (4)
We of course look for a minimum of the integral (1) on a proper class of functions of the Sobolev class H^{1,p}(Ω; R³). We fix a boundary condition, natural in the context of nonlinear elasticity, like in [1], [4]. We mention that we could also consider different boundary conditions, for example, like in Ciarlet and Nečas [5]. Let us assume that the boundary ∂Ω of Ω is Lipschitz, and let Γ be a subset of ∂Ω with positive 2-dimensional measure. Let u₀ ∈ C¹(Ω̄; R³) be such that det ∇u₀ > 0 in Ω̄. We consider the class of functions:

    A = {u ∈ H^{1,p}(Ω; R³) : u = u₀ on Γ}.    (5)
If u ∈ H^{1,p}(Ω; R³), adj ∇u and det ∇u may not be well defined if p < 2 or if p < 3, respectively. Here we follow the ideas of Ball [1]. We consider the following relation, valid for a C²(Ω; R³) function u = (u¹, u², u³) (similarly we can consider (adj ∇u)^α_β for other values of the indices α, β):

    (adj ∇u)¹₁ = u²_{x₂} u³_{x₃} − u²_{x₃} u³_{x₂} = (u² u³_{x₃})_{x₂} − (u² u³_{x₂})_{x₃}.    (6)

If u ∈ H^{1,p}(Ω; R³) for some p < 3, then u ∈ L^{3p/(3−p)}(Ω; R³). Thus u² ∇u³ ∈ L¹(Ω; R³) if (3 − p)/3p + 1/p ≤ 1, that is if p ≥ 3/2. Therefore, if p ≥ 3/2, we can define the distribution (Adj ∇u)¹₁ in the usual way:

    ⟨(Adj ∇u)¹₁ | φ⟩ = −∫_Ω {(u² u³_{x₃}) φ_{x₂} − (u² u³_{x₂}) φ_{x₃}} dx    (7)

for every test function φ ∈ C₀^∞(Ω). We operate similarly to define the other components of Adj ∇u.
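The pointwise identities behind this construction can be checked symbolically: the cofactor formula (6), the divergence-free property of the cofactor rows used later in the proof of Lemma 2, and the row expansion (8) of the determinant. The sketch below is illustrative only — the map u is an arbitrary smooth example of our own choosing, not from the text — and it assumes the sympy library is available.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)

# An arbitrary smooth map u: R^3 -> R^3 (hypothetical example).
u = sp.Matrix([x1 * x2 + x3**2, sp.sin(x2) * x3, x1 + x2 * x3])

# Deformation gradient F = grad u, with F[i, j] = d u^i / d x_j.
F = u.jacobian(sp.Matrix(xs))

# Cofactor matrix: cof[i, j] plays the role of (adj grad u)^i_j in (6).
cof = sp.Matrix(3, 3, lambda i, j: F.cofactor(i, j))

# (6): (adj grad u)^1_1 = u^2_{x2} u^3_{x3} - u^2_{x3} u^3_{x2}.
assert sp.simplify(cof[0, 0] - (u[1].diff(x2) * u[2].diff(x3)
                                - u[1].diff(x3) * u[2].diff(x2))) == 0

# Piola identity: sum_a d/dx_a (adj grad u)^1_a = 0 (used in (11)).
assert sp.simplify(sum(cof[0, a].diff(xs[a]) for a in range(3))) == 0

# (8): det grad u = sum_a u^1_{x_a} (adj grad u)^1_a.
assert sp.simplify(F.det() - sum(F[0, a] * cof[0, a] for a in range(3))) == 0

print("identities (6), (8) and the Piola identity verified")
```

The distributional definitions (7) and (9) are exactly these identities moved onto test functions by integration by parts.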
In the same way we can start from the formal relation:

    det ∇u = Σ_{α=1}^3 u¹_{x_α} (adj ∇u)¹_α = Σ_{α=1}^3 {u¹ (adj ∇u)¹_α}_{x_α}.    (8)

Let us assume that Adj ∇u ∈ L^q(Ω; R⁹). For every α, the product u¹ (Adj ∇u)¹_α is summable if (3 − p)/3p + 1/q ≤ 1, that is if 1/p + 1/q ≤ 4/3. Therefore the distribution Det ∇u is well defined by

    ⟨Det ∇u | φ⟩ = −∫_Ω Σ_{α=1}^3 u¹ (Adj ∇u)¹_α φ_{x_α} dx    (9)

for every test function φ ∈ C₀^∞(Ω). Like in Lemma 6.2 of [1] we have obtained the following:
Lemma 1 (Ball). Let us assume that u ∈ H^{1,p}(Ω; R³), with p ≥ 3/2. Then Adj ∇u is well defined as a distribution through (7). If Adj ∇u ∈ L^q(Ω; R⁹), with 1/p + 1/q ≤ 4/3, then Det ∇u is well defined as a distribution through (9).

The next result goes in the same direction as Lemma 3.3(b) of Ball [2], where the assumption p ≥ 2 is made.

Lemma 2. Let us assume that u ∈ H^{1,p}(Ω; R³), with p > 3/2, and that Adj ∇u ∈ L^q(Ω; R⁹), with 1/p + 1/q ≤ 1. Then Det ∇u ∈ L¹(Ω) and

    Det ∇u = Σ_{α=1}^3 u¹_{x_α} (Adj ∇u)¹_α.    (10)
Proof. For u ∈ C²(Ω; R³) and φ ∈ C₀^∞(Ω), by integrating by parts twice, we get

    Σ_{α=1}^3 ⟨(Adj ∇u)¹_α | φ_{x_α}⟩ = ∫_Ω Σ_{α=1}^3 (adj ∇u)¹_α φ_{x_α} dx = −∫_Ω Σ_{α=1}^3 (∂/∂x_α)(adj ∇u)¹_α φ dx = 0;    (11)

the last equality holds since the integrand is identically equal to zero. From definition (7), Adj ∇u is continuous on H^{1,p}(Ω; R³) for p > 3/2. Therefore

    Σ_{α=1}^3 ⟨(Adj ∇u)¹_α | φ_{x_α}⟩ = 0,    (12)

for every u ∈ H^{1,p}(Ω; R³), if p > 3/2. If Adj ∇u ∈ L^q(Ω; R⁹), (12) means:

    ∫_Ω Σ_{α=1}^3 (Adj ∇u)¹_α φ_{x_α} dx = 0    (13)

for every φ ∈ C₀^∞(Ω). By proceeding like in Lemma 3.3(b) of [2], we obtain formula (10). Since 1/p + 1/q ≤ 1, we also have Det ∇u ∈ L¹(Ω). ■
The next existence theorem is a variant of a result by Ball ([1], Theorem 7.6, where the coercivity of g(x, ξ, η, δ) with respect to δ is also assumed), and of a result by Ball and Murat ([4], Theorem 6.1, obtained for p ≥ 2).

Theorem 1. Let g satisfy (2), (3) and let ψ satisfy (4). If p > 3/2 and 1/p + 1/q ≤ 1, then there exists an absolute minimum in the class A of the integral

    I(u) = ∫_Ω g(x, ∇u, Adj ∇u, Det ∇u) dx − ∫_Ω ψ(x)·u dx.    (14)
Proof. Let u_k be a minimizing sequence for the integral (14) in the class A. By (3) we can extract a subsequence, which we still denote by u_k, such that, as k → +∞,

    u_k converges weakly to u in H^{1,p}(Ω; R³);    (15)
    Adj ∇u_k converges weakly to v in L^q(Ω; R⁹).    (16)

The vector-valued function u is in the class A. Since p > 3/2, by Rellich's compactness theorem, the sequence u_k converges to u in the norm topology of L³(Ω; R³). From definition (7), we see that Adj ∇u_k converges to Adj ∇u in the sense of distributions. From (16) we get

    Adj ∇u = v ∈ L^q(Ω; R⁹).    (17)

Since Adj ∇u_k converges to Adj ∇u in the weak topology of L^q(Ω; R⁹), we obtain from the definition (9) for Det ∇u and Det ∇u_k that, as k → ∞, Det ∇u_k converges to Det ∇u in the sense of distributions. Moreover, by Lemma 2, Det ∇u_k and Det ∇u are L¹-functions, and the sequence Det ∇u_k is bounded in L¹(Ω). We can apply the lower semicontinuity theorem (6.3)(iii) of Marcellini [8] to obtain that u minimizes the integral (14) in the class A. ■

Let us note that, since the integral (14) is finite at the minimum u, Det ∇u is positive almost everywhere in Ω. It is interesting to consider admissible functions u whose Det ∇u is a distribution that is not an L¹-function, for example, in the study of the phenomenon of cavitation (see [3]). Since the integrand g is finite only when Det ∇u is positive, we are interested in positive distributions Det ∇u; we therefore consider admissible functions u ∈ H^{1,p}(Ω; R³) whose Det ∇u is a positive measure.

We assume p > 3/2 and 1/p + 1/q ≤ 4/3. For every u ∈ H^{1,p}(Ω; R³) such that Adj ∇u ∈ L^q(Ω; R⁹) and Det ∇u ∈ L¹(Ω), we have

    F(u) = ∫_Ω g(∇u, Adj ∇u, Det ∇u) dx.    (21)

(In other words, F is an extension of I.)

Proof. Let u ∈ H^{1,p}(Ω; R³) with Adj ∇u ∈ L^q(Ω; R⁹) and Det ∇u ∈ L¹(Ω). By definition F(u) ≤ I(u). Conversely, let u_k be a sequence that converges to u in the weak topology of H^{1,p}(Ω; R³), such that Adj ∇u_k ∈ L^q(Ω; R⁹) and Det ∇u_k ∈ L¹(Ω) for every k. Let us assume that

    F(u) = lim inf_k I(u_k) < +∞.    (22)

Then by the coercivity condition (3), like in Theorem 1, we prove that Adj ∇u_k converges to Adj ∇u in the weak topology of L^q(Ω; R⁹), and Det ∇u_k converges to Det ∇u in the sense of distributions. We then apply the semicontinuity theorem 6.1 of [8] to obtain that I(u) ≤ F(u). ■
REFERENCES
[1] J.M. BALL, Convexity conditions and existence theorems in nonlinear elasticity, Arch. Rat. Mech. Anal., 63 (1977), 337-403.
[2] J.M. BALL, Constitutive inequalities and existence theorems in nonlinear elasticity, Nonlinear Analysis and Mechanics, Heriot-Watt Symposium, 1 (1977), 187-241.
[3] J.M. BALL, Discontinuous equilibrium solutions and cavitation in nonlinear elasticity, Phil. Trans. R. Soc. London, 306 (1982), 557-611.
[4] J.M. BALL, F. MURAT, W^{1,p}-quasiconvexity and variational problems for multiple integrals, J. Functional Analysis, 58 (1984), 225-253.
[5] P.G. CIARLET, J. NECAS, Unilateral problems in nonlinear three-dimensional elasticity, Arch. Rat. Mech. Anal., 87 (1985), 319-338.
[6] B. DACOROGNA, Weak continuity and weak lower semicontinuity of nonlinear functionals, Lecture Notes in Math., 922, Springer-Verlag, Berlin (1982).
[7] P. MARCELLINI, Approximation of quasiconvex functions and lower semicontinuity of multiple integrals, Manuscripta Math., 51 (1985), 1-28.
[8] P. MARCELLINI, On the definition and the lower semicontinuity of certain quasiconvex integrals, Ann. Inst. Henri Poincaré, Analyse non Linéaire, 3 (1986), to appear.
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
ALGORITHMS FOR SOLVING A CLASS OF NONCONVEX OPTIMIZATION PROBLEMS. METHODS OF SUBGRADIENTS
PHAM DINH TAO
Laboratoire TIM3, Institut IMAG
B.P. 68, 38402 Saint Martin d'Hères Cedex (France)

EL BERNOUSSI SOUAD
Laboratoire TIM3, Institut IMAG
B.P. 68, 38402 Saint Martin d'Hères Cedex (France)
1. INTRODUCTION
In nonconvex optimization the following typical problems constitute a large class, which is important because of its applications; at the present time, active research is being done in this area.

(1) sup {f(x) : x ∈ C}, where f: Rⁿ → R is convex and C is a nonempty closed convex set.

(2) inf {f(x) − g(x) : x ∈ C}, where f, g: Rⁿ → R are convex and C is a nonempty closed convex set.

(3) inf {f(x) : x ∈ C, g(x) ≥ 0} (f, g, C as above).

These problems are equivalent in the sense that solving one of them implies the resolution of the others. Problem (1) was first studied by Hoang
Tuy (1964) [10], with C a polyhedron. He proposed a direct method of "cutting plane" type which uses hyperplanes determined by n extreme points of C for localizing an extreme point solution of (1). Theoretically this method leads to a global maximum of (1); however, it is applied only to problems of small size because of its complexity and its cumbersomeness concerning the effective programming. For a long time, people have been interested in this problem and have developed Hoang Tuy's idea under different forms. We must admit nevertheless that, in spite of a large number of publications on this subject, an efficient implementation of this algorithm for concrete problems of reasonable size does not yet exist. In the period 1964-1980, among the important works relative to the development of this method and its variants, we mention:
- E. Balas, for maximization of a convex quadratic form over a polyhedron [1, 2].
- A. Majthay and Whinston, for introducing a process of accumulating constraints in this method (making it finite), and also for proposing a treatment of the case where C is degenerate [16].
- Finally, the recent paper of Hoang Tuy and Ng.V. Thoai, which contains interesting improvements of an earlier work [11].

In [12] Hoang Tuy proves that problem (2) is a particular case (in Rⁿ × R) of problem (3), and solving this last problem amounts to solving problem (1). The theory of duality in nonconvex optimization, as studied by J.F. Toland [31], generalizes our previous results concerning the problem of convex maximization over convex compact sets [17-19]. J.F. Toland is able to establish duality results in the case of minimization of d.c. (difference of convex) functions. However, with his results, the resolution of problem (1) does not amount to solving problem (2). The works quoted above belong to the quantitative study of this class of nonconvex optimization problems, that is to say, the elaboration of algorithms
for solving these problems. In general, these algorithms do not make use of the analytical characterizations of optimal solutions, which resort to the qualitative study of this class of problems, to which the works of Pham Dinh Tao [17-25], J.F. Toland [31] and J.-B. Hiriart-Urruty [9], among others, belong. Our work is at the crossroads of these (quantitative and qualitative) studies:

(i) Our first step consists of an effective realization of simple and finite algorithms for solving problem (3) in the following particular case [3-6]:

    inf {⟨c | x⟩ : x ∈ C, g(x) ≥ 0},

where C is a polytope and g is a convex function. These algorithms can be generalized to the resolution of the problem [3, 4, 6]:

    inf {⟨c | x⟩ : x ∈ C, g_i(x) ≥ 0, i = 1, ..., m}

(C is a polytope, and the g_i, i = 1, ..., m, are convex functions).

(ii) We adapt afterwards the algorithms to solve problem (3), which can be written under the following form:

    (P₃)  α = inf {f(x) : x ∈ C, g(x) ≥ 0}.

It is equivalent, in Rⁿ × R, to:

    (P̃₃)  inf {t : (x, t) ∈ C × [α₁, α₂], f(x) ≤ t, h(x, t) ≥ 0},

where α₁ ≤ α ≤ α₂ and h(x, t) = g(x).
It is clear that: if x* is a solution of (P₃), then (x*, t*) with t* = f(x*) is a solution of (P̃₃), and g(x*) = 0. Conversely, if (x*, t*) is a solution of (P̃₃) with g(x*) = 0 and t* = f(x*), then x* is a solution of (P₃). One can see that [α₁, α₂] provides a localization of α; it is then important to decrease α₂ during the execution of these algorithms in order to obtain a smaller and smaller interval. We note that:

    C polyhedron (nondegenerate) ⟹ C × [α₁, α₂] polyhedron (nondegenerate).

In the general case where C is an arbitrary convex set, we draw inspiration from the methods of accumulating constraints [15, 30] for reducing the resolution of problem (P₃) to that of a sequence of subproblems of type (i) [3-6].

(iii) Following this approach, we remark that problem (1) can be treated in the simpler manner:
    sup {t : (x, t) ∈ C × [α₁, α₂], h(x, t) ≥ 0},

where α₁ ≤ α ≤ α₂, α = sup {f(x) : x ∈ C} and h(x, t) = f(x) − t. As mentioned above, we can decrease α₂ during the execution of our algorithms.

Problem (i) has been studied by Hillestad and Jacobsen [7, 8]. These authors have proposed two algorithms for solving it: the first one uses the Tuy cutting plane, the second is a finite algorithm based on the simplex method. Their algorithms, like ours, are completely different from those of Hoang Tuy and his colleagues. A comparative study of these methods is currently taking place. Our first step proceeds in the direction of the elaboration of algorithms for solving this important class of nonconvex optimization problems, without really going to the analytical characterizations of the optimal solutions [3-6].

(iv) Our second approach deals essentially with problem (1). We apply directly the duality in nonconvex optimization for defining the dual problem (Q₁). This is a generalization of our previous works on the problem of computing the bound norms of matrices [17-23]. We can establish complete
relationships between the solutions of the problems (P₁) and (Q₁). We then study the characterizations of the optimal solutions of (P₁) with the aid of subgradients of f and of χ_C (the indicator function of C). These analytical characterizations (of local type) are the basis of our subgradient methods for solving problem (P₁). One can say that these methods are the analogues of the Frank-Wolfe method and of the projected-gradient method in convex optimization. The methods of subgradients are very easy to program and are actually the only ones which enable us to treat practical problems of large size; their only drawback is that one cannot say, in general, whether the local maxima obtained are effectively global maxima, except if an initial vector x⁰ is close enough to a global maximum. In the latter case this is possible because of the duality in convex optimization used above for establishing the relations between the solutions of the primal and dual problems. Finally, for a complete resolution of (P₁), these methods of subgradients would need new and more refined characterizations of the optimal solutions. This paper is devoted to duality schemes for problem (P₁) and to methods of subgradients for solving this problem.
2. DUALITY IN CONVEX MAXIMIZATION OVER A CLOSED CONVEX SET

Let f be a convex function on X = Rⁿ, and let C be a closed convex set in X. Let us consider the problem (P), whose solution set is denoted by P:

    (P)  λ = sup {f(x) : x ∈ C},  P = {x* ∈ C : f(x*) = λ}.

By using the conjugate f* of f, we can write (Y = Rⁿ):

    f(x) = sup {⟨x | y⟩ − f*(y) : y ∈ Y}.

Hence,

    λ = sup {f(x) : x ∈ C} = sup_{x∈C} sup_{y∈Y} {⟨x | y⟩ − f*(y)}
      = sup_{y∈Y} sup_{x∈C} {⟨x | y⟩ − f*(y)}
      = sup_{y∈∂f(C)} sup_{x∈C} {⟨x | y⟩ − f*(y)}.

The dual problem (Q) is then defined by:

    (Q)  λ = sup {χ*_C(y) − f*(y) : y ∈ ∂f(C)}.

We shall denote by Q the solution set of (Q):

    Q = {y* ∈ ∂f(C) : χ*_C(y*) − f*(y*) = λ}.

For each y ∈ ∂f(C), let us consider the subproblem (P_y):

    (P_y)  χ*_C(y) = sup {⟨x | y⟩ : x ∈ C},

whose solution set P_y is exactly ∂χ*_C(y). We shall see below, in the description of the methods of subgradients, the roles of the subproblems (P_y) and, at the same time, a possible justification of the convergence of these methods towards an optimal solution of (P) when the initial vector x⁰ is close enough to an element of P.
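As a one-dimensional illustration of this duality scheme (our own numerical example, not from the text): take f(x) = x² and C = [−1, 2], so that λ = 4, attained at x* = 2. Then f*(y) = y²/4, the support function is χ*_C(y) = max(2y, −y), and ∂f(C) = [−2, 4]; the dual value sup {χ*_C(y) − f*(y)} is again 4, attained at y* = 4 ∈ ∂f(2). A grid check with numpy:

```python
import numpy as np

# Primal: lambda = sup { f(x) : x in C }, with f(x) = x^2, C = [-1, 2].
f = lambda x: x ** 2
xs = np.linspace(-1.0, 2.0, 200001)
primal = f(xs).max()                       # = 4, at x* = 2

# Dual data: f*(y) = y^2/4 (conjugate of x^2), chi*_C = support function of C.
f_conj = lambda y: y ** 2 / 4.0
support_C = lambda y: np.maximum(2.0 * y, -1.0 * y)

# Dual: sup { chi*_C(y) - f*(y) } over y in df(C) = [-2, 4].
ys = np.linspace(-2.0, 4.0, 200001)
dual = (support_C(ys) - f_conj(ys)).max()  # = 4, at y* = 4

assert abs(primal - 4.0) < 1e-8
assert abs(dual - 4.0) < 1e-8
print(primal, dual)
```

Note also that ∂χ*_C(4) = {2} = P and ∂f(2) = {4} = Q on this example, as the Proposition below asserts in general.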
Proposition. If f is convex on X and if C is a compact convex set in X, then one has the following properties:
(1) P = ∪_{y*∈Q} P_{y*} = ∪_{y*∈Q} ∂χ*_C(y*);
(2) Q = ∪_{x*∈P} ∂f(x*);
(3) ∂χ*_C(y*) ⊂ ∂f*(y*), ∀ y* ∈ Q.

Proof. If y* ∈ Q, then, for every x* ∈ ∂χ*_C(y*), we have

    f(x*) ≥ ⟨x* | y*⟩ − f*(y*) = χ*_C(y*) − f*(y*) = λ ≥ f(x*).

It follows that f(x*) = λ and x* ∈ ∂f*(y*). Consequently ∂χ*_C(y*) ⊂ P and ∪_{y*∈Q} ∂χ*_C(y*) ⊂ P. Conversely, if x* ∈ P, then for every y* ∈ ∂f(x*) (which is nonempty since f is continuous), we have f(x*) = ⟨x* | y*⟩ − f*(y*) = λ. Therefore x* ∈ ∂χ*_C(y*) and χ*_C(y*) − f*(y*) = λ, i.e., y* ∈ Q. Thus we have proved (1), (3) and the inclusion ∪_{x*∈P} ∂f(x*) ⊂ Q. Now let y* ∈ Q; then

    χ*_C(y*) − f*(y*) = λ and ⟨x* | y*⟩ − f*(y*) = λ, ∀ x* ∈ ∂χ*_C(y*).

It follows that, for every x* ∈ ∂χ*_C(y*), we have

    f(x*) = ⟨x* | y*⟩ − f*(y*) = λ;

this is equivalent to x* ∈ P and y* ∈ ∂f(x*). Now it is clear that property (2) is proved whenever ∂χ*_C(y*) is nonempty; this is always the case if C is a nonempty compact convex set in X. ■

Remark. Properties (1) and (3) remain valid if f is a proper convex, lower semicontinuous function and if C is a compact convex set. In this case we only have that ∪_{x*∈P} ∂f(x*) ⊂ Q.

Corollary 1. Let f be proper convex, lower semicontinuous on X, and let C be a compact convex set in X. If f is essentially strictly convex (i.e., f is strictly convex on every convex subset of {x : ∂f(x) ≠ ∅} = dom ∂f), or if f* is smooth on ∂f(C), then ∂χ*_C(y*) = {∇f*(y*)} for every y* ∈ Q. In this case one has P = ∪_{y*∈Q} {∇f*(y*)}.

Proof. This is immediate from the previous proposition. ■

Corollary 2. P is nonempty if and only if Q is nonempty. P is nonempty if C is a nonempty compact convex set in X.

Proof. This is a simple consequence of the previous proposition. ■
3. ANALYTICAL CHARACTERIZATIONS OF OPTIMAL SOLUTIONS OF (P)

Let f be a proper convex and lower semicontinuous function on X and let C be a compact convex set in X. We suppose in the sequel that f is not constant on C. We shall describe below further analytical characterizations of optimal solutions of (P), in addition to those of the Proposition of Section 2.
Proposition 1.
(1) P ⊂ rbd C (the relative boundary of C);
(2) ∂f(x*) ⊂ ∂χ_C(x*), ∀ x* ∈ P;
(3) {x*} = P_C(x* + ρ ∂f(x*)), ∀ x* ∈ P, ∀ ρ > 0, where P_C denotes the projection onto C.

Proof. (1) is immediate from the definition of the relative boundary of C and the convexity of f [20, 26]. It is clear that if x* ∈ P, then

    f(x) ≤ f(x*), ∀ x ∈ C,

i.e., analytically [20, 26]: ∂f(x*) ⊂ ∂χ_C(x*). Since (3) is a mere translation of (2), the proof is complete. ■

We denote by Γ₀(X) the set of proper convex, lower semicontinuous functions on X. Let f ∈ Γ₀(X) be a function which does not take the value +∞. If the upper bound of f on C is finite, then problem (1) is a particular case (g = χ_C) of the general problem:

    (P)  λ = inf {g(x) − f(x) : x ∈ X},

whose set of solutions is denoted by P. By introducing the conjugate functions f* and g*, we naturally define the dual problem

    (Q)  λ = inf {f*(y) − g*(y) : y ∈ Y},

whose set of solutions is denoted by Q. The following proposition, which generalizes the preceding ones, completes Toland's work on this subject.
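A one-dimensional illustration of this d.c. duality (our own example, not from the text): take g(x) = x² and f(x) = |x|. Then λ = inf {g − f} = −1/4, attained at x* = ±1/2; dually, f* is the indicator of [−1, 1] and g*(y) = y²/4, so inf {f*(y) − g*(y)} = inf_{|y|≤1} (−y²/4) = −1/4, attained at y* = ±1 (and indeed ∂f(1/2) = {1} ⊂ ∂g(1/2), as Proposition 2 below asserts). A grid check with numpy:

```python
import numpy as np

# Primal d.c. problem: inf { g(x) - f(x) }, with g(x) = x^2 and f(x) = |x|.
xs = np.linspace(-2.0, 2.0, 400001)
primal = (xs ** 2 - np.abs(xs)).min()     # = -1/4, at x = +/- 1/2

# Dual (Toland): inf { f*(y) - g*(y) }; f* = indicator of [-1, 1], g*(y) = y^2/4,
# so the dual reduces to inf { -y^2/4 : |y| <= 1 }.
ys = np.linspace(-1.0, 1.0, 200001)
dual = (-(ys ** 2) / 4.0).min()           # = -1/4, at y = +/- 1

assert abs(primal + 0.25) < 1e-8
assert abs(dual + 0.25) < 1e-8
print(primal, dual)
```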
Proposition 2. Let f, g ∈ Γ₀(X); then we have:
(1) ∂f(x*) ⊂ ∂g(x*), ∀ x* ∈ P;
(2) ∂g*(y*) ⊂ ∂f*(y*), ∀ y* ∈ Q;
(3) ∪_{y*∈Q} ∂g*(y*) ⊂ P, and the equality holds if f is subdifferentiable on X (for instance if f is continuous on X);
(4) ∪_{x*∈P} ∂f(x*) ⊂ Q, and the equality holds if g* is subdifferentiable on Y (for instance if g* is continuous on Y).

Proof. We only need to show property (1); the other properties are then simple consequences. This proof is completely different from Toland's approach [31]. It is very close to the reasoning used in the proof of Propositions 2 and 3, and is based on the definitions of the subdifferential and the epigraph of a convex function. If x* ∈ P, then

    h(x) := f(x) − f(x*) ≤ g(x) − g(x*) =: k(x), ∀ x ∈ X.

In other words, epi k ⊂ epi h, and these sets meet at (x*, 0) (h(x*) = k(x*) = 0). This implies that

    N(epi h; (x*, 0)) ⊂ N(epi k; (x*, 0)),    (*)

where N(C; u*), for any u* in a closed convex set C, denotes the normal cone to C at u*. But, for any f ∈ Γ₀(X) and any x₀ ∈ dom f, we have [26]:

    y₀ ∈ ∂f(x₀) if and only if (y₀, −1) ∈ N(epi f; (x₀, f(x₀)));

hence the preceding inclusion (*) is equivalent to ∂h(x*) ⊂ ∂k(x*), in other words: ∂f(x*) ⊂ ∂g(x*). ■
It is worth noting that, when f is convex continuous on X and g = χ_C, this latter inclusion can be expressed as follows.
Proposition 3. Let f be any convex continuous function on X; then we have:
(1) sup_{x∈C} f′(x*; x − x*) = sup_{y∈∂f(x*)} {χ*_C(y) − ⟨x* | y⟩};
(2) ∂f(x*) ⊂ ∂χ_C(x*) if and only if sup_{x∈C} f′(x*; x − x*) = 0.

Proof. First of all, let us recall that f′(x*; d) denotes the directional derivative of f at x* in the direction d. We know that [26]:

    f′(x*; x − x*) = sup_{y∈∂f(x*)} ⟨x − x* | y⟩;

hence

    sup_{x∈C} f′(x*; x − x*) = sup_{y∈∂f(x*)} sup_{x∈C} {⟨x | y⟩ − ⟨x* | y⟩} = sup_{y∈∂f(x*)} {χ*_C(y) − ⟨x* | y⟩}.

It is clear that ∂f(x*) ⊂ ∂χ_C(x*) if and only if

    ⟨x − x* | y⟩ ≤ 0, ∀ x ∈ C and ∀ y ∈ ∂f(x*).

This is equivalent to

    sup_{x∈C} sup_{y∈∂f(x*)} ⟨x − x* | y⟩ = sup_{x∈C} f′(x*; x − x*) ≤ 0.

This latter inequality is in fact an equality, so the proof is complete. ■

For each u ∈ C, let us now consider the following problem R(u):

    R(u)  α(u) = sup {f′(u; x − u) : x ∈ C},

whose solution set is denoted by R(u), and its dual S(u) (in the sense of the duality which has been introduced above):

    S(u)  α(u) = sup {χ*_C(y) − ⟨u | y⟩ : y ∈ ∂f(u)},

whose solution set is denoted by S(u).
Proposition 4. Let f be any convex function on X and let C be a compact convex set in X. Then
(1) R(u) = ∪_{y*∈S(u)} ∂χ*_C(y*);
(2) S(u) = ∪_{x*∈R(u)} ∂χ*_{∂f(u)}(x* − u).

Proof. Apply Proposition 3 after having rewritten problems R(u) and S(u) under suitable forms. ■

Remark that the problem S(u) can be put under a form similar to (Q). The preceding results will be used in designing our methods of subgradients for solving the problem (P) of Section 2.

4. METHODS OF SUBGRADIENTS FOR MAXIMIZING A CONVEX FUNCTION OVER A COMPACT CONVEX SET

Let us consider the problem (P) of Section 2 with the following assumptions: f is continuous and not constant on C, and C is a nonempty compact convex set. We shall now present two types of methods of subgradients for solving (P).
A. Methods of type A

These methods are based on the characterizations of the elements of P given in Propositions 1 and 3, namely P ⊂ rbd C and ∂f(x*) ⊂ ∂χ_C(x*), ∀ x* ∈ P.

(i) First form of methods of type A

In practice, we begin by determining the points x* ∈ rbd C such that ∂f(x*) ∩ ∂χ_C(x*) ≠ ∅. This amounts to finding (x*, y*) ∈ rbd C × ∂f(C) such that y* ∈ ∂f(x*) and x* ∈ ∂χ*_C(y*). Then we can describe the simplified form of methods of type A. Starting with x⁰ ∈ rbd C, we construct two sequences (xᵏ), (yᵏ) in the following manner:

    yᵏ ∈ ∂f(xᵏ);  xᵏ⁺¹ ∈ ∂χ*_C(yᵏ),  k = 0, 1, ...
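A minimal numerical sketch of this first form, on toy data of our own choosing (not from the paper): f(x) = ½‖x‖², whose unique subgradient at x is x itself, and C a box, for which an element of ∂χ*_C(y) — a maximizer of ⟨x | y⟩ over C — can be picked coordinatewise:

```python
import numpy as np

lo = np.array([-1.0, -1.0])   # box C = [-1, 2] x [-1, 3] (hypothetical data)
hi = np.array([2.0, 3.0])

def subgrad_f(x):
    # f(x) = 0.5 ||x||^2 is convex; its (unique) subgradient at x is x itself.
    return x

def linear_max_oracle(y):
    # One element of the solution set of (P_y): a maximizer of <x | y> over the
    # box, i.e. a point of dchi*_C(y), chosen coordinate by coordinate.
    return np.where(y >= 0.0, hi, lo)

x = np.array([0.5, 0.5])      # starting point
values = [0.5 * x @ x]
for _ in range(20):
    y = subgrad_f(x)          # y^k in df(x^k)
    x = linear_max_oracle(y)  # x^{k+1} in dchi*_C(y^k)
    values.append(0.5 * x @ x)

# f(x^k) is nondecreasing; on this example the method reaches the vertex
# (2, 3), which happens to be the global maximizer of f over C.
assert all(a <= b + 1e-12 for a, b in zip(values, values[1:]))
print(x, values[-1])
```

In general the method only guarantees the monotone behavior established in Proposition 1 below, not global optimality.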
Proposition 1 (Convergence of the method).
(1) f(xᵏ) ≤ f(xᵏ⁺¹). Equality holds if and only if xᵏ ∈ ∂χ*_C(yᵏ) and yᵏ ∈ ∂f(xᵏ⁺¹). In this event, we have ∂f(xᵏ) ∩ ∂χ_C(xᵏ) ≠ ∅.
(2) The sequence f(xᵏ) converges monotonically (from below) to β ≤ λ. This value β depends on x⁰ and on the choice of subgradients for constructing the sequences (xᵏ) and (yᵏ).
(3) lim_{k→∞} {χ*_C(yᵏ) − ⟨xᵏ | yᵏ⟩} = 0.
(4) The sets of cluster points of (xᵏ) and of (yᵏ) are nonempty. Moreover, for every cluster point x* of (xᵏ) (respectively y* of (yᵏ)), there exists a cluster point y* of (yᵏ) (respectively x* of (xᵏ)) such that:
(4.1) x* ∈ ∂χ*_C(y*) and y* ∈ ∂f(x*);
(4.2) lim_{k→∞} f(xᵏ) = f(x*) = β;
(4.3) lim_{k→∞} {χ*_C(yᵏ) − f*(yᵏ)} = χ*_C(y*) − f*(y*) = f(x*) = β.

Proof. The continuity of f (by hypothesis) and of χ*_C (because C is supposed compact convex and nonempty) implies that the sequences (xᵏ) and (yᵏ) are well defined. We have

    f(xᵏ⁺¹) ≥ f(xᵏ) + ⟨xᵏ⁺¹ − xᵏ | yᵏ⟩,

but

    χ*_C(yᵏ) = ⟨xᵏ⁺¹ | yᵏ⟩ ≥ ⟨xᵏ | yᵏ⟩;

thus f(xᵏ⁺¹) ≥ f(xᵏ). If f(xᵏ⁺¹) = f(xᵏ), then χ*_C(yᵏ) = ⟨xᵏ | yᵏ⟩, which is equivalent to xᵏ ∈ ∂χ*_C(yᵏ).
Moreover, we have f(xᵏ⁺¹) + f*(yᵏ) = ⟨xᵏ⁺¹ | yᵏ⟩ (because f(xᵏ) + f*(yᵏ) = ⟨xᵏ | yᵏ⟩ and ⟨xᵏ⁺¹ | yᵏ⟩ = ⟨xᵏ | yᵏ⟩ for all such k, by construction of (xᵏ) and (yᵏ)); in other words, yᵏ ∈ ∂f(xᵏ⁺¹). Conversely, xᵏ ∈ ∂χ*_C(yᵏ) and yᵏ ∈ ∂f(xᵏ⁺¹) imply respectively ⟨xᵏ⁺¹ | yᵏ⟩ = ⟨xᵏ | yᵏ⟩ and f(xᵏ⁺¹) + f*(yᵏ) = ⟨xᵏ⁺¹ | yᵏ⟩. It follows that f(xᵏ⁺¹) = f(xᵏ).

Property (2) is obvious because λ is finite. Property (3) is then immediate. The sequence (xᵏ) is contained in the nonempty compact convex set C; the set of its cluster points is thus nonempty. The same conclusion follows for the sequence (yᵏ), since the continuity of f and the compactness of C imply the compactness of ∂f(C) [26].

Now let x* be a cluster point of (xᵏ); for the sake of simplicity in notations, we shall write lim_{k→∞} xᵏ = x*. We can suppose (extracting subsequences if necessary) that the sequence (yᵏ) converges to a point y* ∈ ∂f(C). Property (3) then gives x* ∈ ∂χ*_C(y*). As the multivalued mapping ∂f is upper semicontinuous, xᵏ → x*, yᵏ → y* and yᵏ ∈ ∂f(xᵏ) imply that y* ∈ ∂f(x*).

Property (4.2) is a consequence of the continuity of f. It follows from property (4.1) that χ*_C(y*) = ⟨x* | y*⟩ = f(x*) + f*(y*), whence property (4.3). ■
Remarks. We shall now give some indications about the choice of subgradients in this form of the methods of type A.

1) Choice of yᵏ ∈ ∂f(xᵏ). When f(xᵏ) = f(xᵏ⁺¹), we can stop the algorithm if ∂f(xᵏ) ⊂ ∂χ_C(xᵏ). Otherwise, choose a new yᵏ ∈ ∂f(xᵏ) \ ∂χ_C(xᵏ) and start the algorithm again. We proceed likewise with x*. Note that if f(xᵏ) = f(xᵏ⁺¹), then yᵏ ∈ ∂f(xᵏ⁺¹) ∩ ∂χ_C(xᵏ⁺¹) and we can proceed as above with xᵏ⁺¹.

2) Choice of xᵏ⁺¹ ∈ ∂χ*_C(yᵏ). It is clear that the best choice of xᵏ⁺¹ is one that maximizes f over ∂χ*_C(yᵏ). Following this choice, the algorithm leads us to a decomposition of the problem (P) (see Sections 2 and 3).
(ii) Second form of methods of type A

The preceding indications about the choice of subgradients in the first form of the methods of type A lead us to consider the next form, which is more elaborate and thus more efficient. Starting with x⁰ ∈ rbd C, we construct two sequences (xᵏ) and (yᵏ) as follows:

    x⁰ ∈ rbd C, y⁰ ∈ S(x⁰);  x¹ ∈ T(y⁰), y¹ ∈ S(x¹);  ...;  xᵏ⁺¹ ∈ T(yᵏ), yᵏ⁺¹ ∈ S(xᵏ⁺¹),

where T(y) is the solution set of the problem T(y) of maximizing f over ∂χ*_C(y). The preceding results of Sections 2 and 3, as well as Proposition 1, enable us to formulate the following proposition.
Proposition 2 (Convergence of the method).
(1) f(xᵏ) ≤ f(xᵏ⁺¹). Equality holds if and only if ∂f(xᵏ) ⊂ ∂χ_C(xᵏ) and yᵏ ∈ ∂f(xᵏ⁺¹).
(2) The sequence f(xᵏ) converges monotonically (from below) to β ≤ λ. This value β depends only on x⁰.
(3) lim_{k→∞} {χ*_C(yᵏ) − ⟨xᵏ | yᵏ⟩} = 0.
(4) The sets of cluster points of (xᵏ) and of (yᵏ) are nonempty. Moreover, for every cluster point x* of (xᵏ) (respectively y* of (yᵏ)), there exists a cluster point y* of (yᵏ) (respectively x* of (xᵏ)) such that:
(4.1) ∂f(x*) ⊂ ∂χ_C(x*); y* ∈ S(x*);
(4.2) ∂χ*_C(y*) ⊂ ∂f*(y*); x* ∈ T(y*);
(4.3) lim_{k→∞} f(xᵏ) = f(x*) = β;
(4.4) lim_{k→∞} {χ*_C(yᵏ) − f*(yᵏ)} = χ*_C(y*) − f*(y*) = f(x*) = β.

Proof. Using the preceding results and taking into account that:
(i) the multivalued mapping S is upper semicontinuous on {x* ∈ C : ∂f(x*) ⊂ ∂χ_C(x*)}, where S(x*) = ∂f(x*);
(ii) the multivalued mapping T is upper semicontinuous on {y* ∈ ∂f(C) : ∂χ*_C(y*) ⊂ ∂f*(y*)}, where T(y*) = ∂χ*_C(y*);
the assertions of Proposition 2 are easy to prove. ■
Remarks
1 - With the second form of the methods of type A, we are sure to obtain x* ∈ rbd C such that ∂f(x*) ⊂ ∂χ_C(x*).
2 - If f is strongly convex on C (i.e., there exists ρ > 0 such that

    f(x₂) ≥ f(x₁) + ⟨x₂ − x₁ | y₁⟩ + ρ ‖x₂ − x₁‖²

for every x₁, x₂ in C and every y₁ ∈ ∂f(x₁)), we then have:
2.1 - f(xᵏ⁺¹) ≥ f(xᵏ) + ρ ‖xᵏ⁺¹ − xᵏ‖². Equality holds if and only if xᵏ⁺¹ = xᵏ.
2.2 - lim_{k→∞} ‖xᵏ⁺¹ − xᵏ‖ = 0. Moreover, if the set of cluster points of (xᵏ) is finite, then the whole sequence (xᵏ) converges.
3 - Algorithms of type A are finite if C is polyhedral.
4 - From the results of Section 2, if x⁰ is close enough to an optimal solution, then the algorithm converges to a solution of (P). Such an x⁰ can be obtained with the aid of the direct methods quoted at the beginning.
B. Methods of type B

These methods are based on the following characterization (Proposition 1 (3)):

    {x*} = P_C(x* + ρ ∂f(x*)),  ∀ x* ∈ P, ∀ ρ > 0.

They can be described as follows: starting with x⁰ ∈ rbd C, we construct (xᵏ) and (yᵏ) in the following manner:

    yᵏ ∈ ∂f(xᵏ);  xᵏ⁺¹ ∈ P_C(xᵏ + ρₖ yᵏ),

where ρₖ = ρₖ(xᵏ) > 0 is continuous in xᵏ and must satisfy ρₖ ≥ ρ₀ > 0; the constant ρ₀ > 0 can be chosen in such a way that xᵏ + ρₖ yᵏ ∉ C, ∀ k. The effect of this choice is to accelerate the convergence of the algorithm.
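A minimal sketch of the type B iteration on toy data of our own choosing (not from the paper): f(x) = ½‖x‖², so ∂f(x) = {x}; C a box, so that the projection P_C is a coordinatewise clipping; and a constant step ρₖ ≡ 1 instead of the adaptive ρₖ(xᵏ) described above:

```python
import numpy as np

lo = np.array([-1.0, -1.0])   # box C = [-1, 2] x [-1, 3] (hypothetical data)
hi = np.array([2.0, 3.0])

def project_C(x):
    # Projection onto a box acts coordinate by coordinate.
    return np.clip(x, lo, hi)

rho = 1.0                     # constant step; the text allows rho_k(x^k) >= rho_0 > 0
x = np.array([0.5, 0.5])
values = [0.5 * x @ x]
for _ in range(30):
    y = x                     # y^k in df(x^k) for f = 0.5 ||.||^2
    x = project_C(x + rho * y)
    values.append(0.5 * x @ x)

# f(x^k) increases, and the iterates stop at a point satisfying the fixed-point
# characterization x* = P_C(x* + rho * y*) of Proposition 1 (3) of Section 3.
assert all(a <= b + 1e-12 for a, b in zip(values, values[1:]))
assert np.allclose(x, project_C(x + rho * x))
print(x, values[-1])
```

On this example the iterates reach the vertex (2, 3) of the box; in general only a local characterization of the limit is guaranteed, as the Proposition below shows.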
Proposition (Convergence of the method).
(1) f(xᵏ⁺¹) ≥ f(xᵏ) + ⟨xᵏ⁺¹ − xᵏ | yᵏ⟩ ≥ f(xᵏ) + (1/ρₖ) ‖xᵏ⁺¹ − xᵏ‖². Equality holds if and only if xᵏ = xᵏ⁺¹ ∈ P_C(xᵏ + ρₖ yᵏ).
(2) The increasing sequence (f(xᵏ)) is convergent to β ≤ λ. This value β depends on x⁰ and on the choice of the subgradients yᵏ in the construction of the sequences (xᵏ) and (yᵏ). Moreover, lim_{k→∞} ‖xᵏ⁺¹ − xᵏ‖ = 0.
(3) The sets of cluster points of (xᵏ) and of (yᵏ) are nonempty. Moreover, for every cluster point x* of (xᵏ) (respectively y* of (yᵏ)), there exists a cluster point y* of (yᵏ) (respectively x* of (xᵏ)) such that:
(3.1) x* ∈ P_C(x* + ρ y*), where y* ∈ ∂f(x*) and ρ > 0;
(3.2) lim_{k→∞} f(xᵏ) = f(x*) = β.

Proof. By definition of the projection, we have

    ⟨xᵏ + ρₖ yᵏ − xᵏ⁺¹ | x − xᵏ⁺¹⟩ ≤ 0,  ∀ x ∈ C.

Hence, taking x = xᵏ,

    ⟨xᵏ⁺¹ − xᵏ | yᵏ⟩ ≥ (1/ρₖ) ‖xᵏ⁺¹ − xᵏ‖².

Property (1) is then immediate. The proof of property (2) is easy and therefore omitted. Since C and ∂f(C) are compact and nonempty, the sets of cluster points of (xᵏ) and (yᵏ) are nonempty. Now, let x* be a cluster point of (xᵏ). For the sake of simplicity in the notations, we shall write lim_{k→∞} xᵏ = x*. We may suppose (extracting subsequences if necessary) that the sequence (yᵏ) converges to a point y* ∈ ∂f(C). The continuity of ρₖ(xᵏ) in xᵏ and that of P_C, combined with the fact that lim_{k→∞} ‖xᵏ⁺¹ − xᵏ‖ = 0, imply

    x* ∈ P_C(x* + ρ y*), where y* ∈ ∂f(x*) and ρ = lim_{k→∞} ρₖ ≥ ρ₀.

The proof is then complete. ■
Remark. In the methods of type B, we have only the choice of subgradients yᵏ ∈ ∂f(xᵏ). When f(xᵏ) = f(xᵏ⁺¹), i.e., xᵏ⁺¹ = xᵏ = P_C(xᵏ + ρₖ yᵏ), we can stop the algorithm if xᵏ = P_C(xᵏ + ρ y) for all y ∈ ∂f(xᵏ) and all ρ > 0 (i.e., ∂f(xᵏ) ⊂ ∂χ_C(xᵏ)). Otherwise we choose a new yᵏ ∈ ∂f(xᵏ) such that xᵏ ≠ P_C(xᵏ + ρ yᵏ), ρ > 0, and we start the algorithm again. We proceed likewise with x*.

REFERENCES
[1] E. BALAS, Intersection cuts. A new type of cutting plane for integer programming, Oper. Res., 19 (1971), 19-39.
[2] E. BALAS, C.A. BURDET, Mazimizing a convez quadratic function subject t o linear constraints, Management Science Report No 299, GSIA
Carnegie-Mellon University, Pittsburgh, Pa (1973). [3] R. BENACER, PHAM DINH TAO, Etude de certains algorithmes pour la re‘solution d ’une classe de probldmes d’optimisation non convexe, Journkes
Fermat: MathCmatiques pour l’optimisation, Toulouse (6-10 Mai 1985). [4] R. BENACER, PHAM DINH TAO, Linear programs with reverse convex
constraints, Submitted to Math. Programming (1985).
[5] R. BENACER, PHAM DINH TAO, T w o general algorithms for solving linear programs with an additional reverse constraint, Submitted t o Math.
Programming (1985). [6] R. BENACER, Contribution A
1’Qtude des algorithmes de l’optimisation non convexe et non differentiable, These de Doctorat de Mathbmatiques, Universitk de Grenoble, t o appear (1986).
[7] R.J. HILLESTAD, S.E. JACOBSEN, Reverse convez programming, Appl.
Math. Optim., 6 (1980), 63-78. [8] R.J. HILLESTAD, S.E. JACOBSEN, Linear programs with a n additional reverse convex constraint, Appl. Math. Optim., 6 (1980), 257-269.
Solving a class of noneonvex optimization problems
269
[9] J.-B. HIRIART-URRUTY, Generalized differentiability, duality and optimization for problems dealing with differences of convex functions, Lectures
Notes in Economics and Math. Systems, 256 (1985), 37-70. [lo] HOANG TUY, Concave programming under linear constraints, Dokl. Akad. Nauk. SSSR, 159 (1964), 32-35 Translated Soviet Math. (1964), 1437-1440.
[ll] HOANG TUY, NGUYEN VAN THOAI, Convergent algorithms for minimizing a concave function, Oper. Res., 5 (1980), No 4.
[12] HOANG TUY, Global optimization of a diference of two convex functions,
Submitted t o Math. Programming Study (1985). [13] HOANG TUY, Ng.Q. THAI, A conical algorithm for globally minimizing a concave function over a closed convex set, To appear in Math. Oper.
Res. (1985). [14] HOANG TUY, Convex programs with a n additional reverse convex constraint, Preprint (1985). [15] J.E. KELLEY, The cutting plane method for solving convex programs,
SIAM, 8 (1960), 703-12. [16] A. MAJTHAY, A. WHINSTON, Quasiconcave minimization subject to linear constraints, Discrete Math., 9 (1974), 35-59.
[17] PHAM DINH TAO, Ele‘ments homoduaux d’une matrice relatifs Ci un Applications au calcul de S,,(A), SCminaire couple de normes (9,s).
d’Analyse NumCrique, USMG Labo. IMAG, Grenoble (1975). [18] PHAM DINH TAO, Calcul du m a x i m u m d’une forme quadratique de’finie positive sur la b o d e unite‘ de la norme du maximum, SCminaire d’Analyse
NumCrique, USMG Labo. IMAG, Grenoble (1975).
270
Pham Dinh Tao and El Bernoussi Souad
[19] PHAM DINH TAO, Me‘thodes directes et indirectes pour le calcul du maxi m u m d’une forme quadratique de‘finie positive sur la boule unite‘ de la norme du maximum, Colloque national d’Analyse NumCrique, Port Bail
(1976). [20] PHAM DINH TAO, Contribution Q la thhorie de normes et ses applications
Q l’analyse numhrique, Thkse de Doctorat d’Etat
ks
Sci-
ences, USMG, Grenoble (1981). [21] PHAM DINH TAO, Algorithmes du calcul du maximurn d’une forme quadratique sur la boule unite‘ de la norme du maximum, Numer. Math.,
45 (1984), 377-340. [22] PHAM DINH TAO, Convergence of subgradient methods for computing the bound-norm of matrices, Linear Alg. and its Appl., 62 (1984), 163-182.
[23] PHAM DINH TAO, Me‘thodes ite‘ratives pour le calcul des normes d’ope‘rateurs de matrices, A paraitre dans Linear Alg. and its Appl.
[24] PHAM DINH TAO, Subgradient methods j o r maximizing convex functions over compact convex sets, Submitted t o J.O.T.A. [25] PHAM DINH TAO, Algorithmes pour la re‘solution d’une classe de problimes d ’optimisation non convexe.
Me‘thodes de sous-gradients,
JournCes Fermat : Mathkmatiques pour l’optimisation, Toulouse (6-10 Mai 1985). [26] R.T. ROCKAFELLAR, Convex analysis, Princeton University Press (1970). [27] I. SINGER, A Fenchel-Rockafellar type duality theorem for maximization,
Bull. of the Austral. Math. SOC.,20 (1979), 193-198. [28] I. SINGER, Maximization of lower semicontinuous convex functionals on bounded subsets of locally convex spaces, Result. Math., 3 (1980), 235-248.
Solving a class of nonconvex optimization problems
27 1
[29] I. SINGER, Optimization b y level set methods: duality formulae in Opti-
mization: Theory and Algorithms, Lectures notes in pure and applied mathematics, 86, edited by J.-B. Hiriart-Urruty, W. Oettli, J. Stoer (1983). [30] D.M. TOPKIS, Cutting plane methods without nested constraints set, Oper. Res., 18 (1970), 404-413. [31] J.F. TOLAND, O n subdifferential calculus and duality in nonconvex opti-
mization, Bull. SOC.Math. France, Mkmoire 60 (1979), 173-180.
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
273
A GENERAL DETERMINISTIC APPROACH TO GLOBAL OPTIMIZATION VIA D.C. PROGRAMMING
H. TUY
Institute of Mathematics
P.O. Box 631, Bo Ho, Hanoi (Vietnam)
Abstract. A d.c. function is a function which can be represented as a difference of two convex functions. A d.c. programming problem is a mathematical programming problem involving a d.c. objective function and (or) d.c. constraints. We present a general approach to global optimization based on a solution method for d.c. programming problems.
Keywords. Global optimization. Deterministic approach. D.c. Programming. Canonical d.c. program. Convex maximization. Outer approximation method.
1. INTRODUCTION
The general problem we are concerned with in this paper is to find the global minimum of a function F: Rⁿ → R over a constraint set given by a system of the form
x ∈ Ω, G_i(x) ≥ 0 (i = 1, 2, ..., m),
where Ω is a closed convex subset of Rⁿ and F, G_i (i = 1, 2, ..., m) are real-valued functions on Rⁿ, continuous relative to an open neighbourhood of Ω. This is an inherently difficult problem. Even in the special case where F is concave and the G_i are affine, the problem is known to be NP-hard. In the classical approach to nonlinear optimization problems, we use information on the local behaviour of the functions F and G_i to decide whether a given feasible point is the best among all neighbouring feasible points. If so, we terminate; if not, the same local information indicates the way to proceed to a better solution. When the problem is convex, the optimal solution provided by this approach will surely be a global optimum. But, in the absence of convexity, this approach generally yields a local optimum only. Therefore, if a truly global optimum is desired rather than an arbitrary local minimum, the conventional methods of local optimization are almost useless. This intrinsic difficulty accounts for the small number of papers devoted to global optimization, in comparison with the huge literature on local optimization. Yet many applications require global optimization, and the practical need for efficient and reliable methods for solving this problem has very much increased in recent times. From a deterministic point of view, the global optimization problem, in the general case when no further structure is given on the data, is essentially intractable, since it would require, for example, evaluating the functions over a prohibitively dense grid of feasible points. It seems, then, that the only
available recourse in many cases is to use stochastic methods, which guarantee the results only with some probability and very often are not able to reconcile reliability with efficiency. However, for a particular class of global optimization problems, namely for problems of global minimization of concave functions (or, equivalently, global maximization of convex functions) over convex sets, significant results have been obtained in the last decade; see [36], [8], [15], [16], [27], [32], [31], [37], [38], [42], [45]. On the basis of these results and also the tremendous progress in computing techniques (parallel processing and the like), it has now become realistic to hope for a general deterministic approach to a very wide class of global optimization problems whose data are given in terms of convex and concave functions only. In view of the usually very high cost of the computation of an exact global solution, it seems appropriate to slightly modify the formulation of the global optimization problem as follows: find a globally α-optimal solution, i.e., a feasible point x̄ such that there is no feasible point x satisfying F(x) ≤ F(x̄) − α, where α is some prescribed positive number. The core of the problem consists of two basic questions:
1) Given a feasible point x⁰, check whether or not x⁰ is globally α-optimal.
2) Knowing that x⁰ is not globally α-optimal, find a feasible point x such that F(x) ≤ F(x⁰) − α.
If these two questions are answered successfully, the solution scheme suggests itself: start from a feasible point (preferably a local minimum) x⁰; check whether or not x⁰ is globally α-optimal; if not, find a feasible point (preferably a local minimum) x¹ such that F(x¹) ≤ F(x⁰) − α; repeat the procedure as long as needed, each new iteration starting from the point produced by the previous one. In actual practice, the choice of α is guided by computational considerations. In many cases α can be taken fairly small, but in other cases
one must be content with a comparatively coarse value of α, or perhaps with only one or two iterations of the procedure. Well, one cannot cry for the moon, and, for certain extremely hard problems, it would be unreasonable to try to reach the absolute minimum, simply because it is unreachable by the means currently available. Put in this framework, deterministic methods for global optimization may be combined with conventional local optimization methods to yield reasonably efficient procedures for locating a sufficiently good local minimum. The purpose of this paper is to present a general approach to global optimization problems in the above setting, via the class of so-called d.c. programming problems. It turns out that this class includes the great majority of optimization problems potentially encountered in the applications. Every d.c. program can in turn be reduced to a canonical form which is a convex program with just one additional reverse convex constraint (i.e., a constraint of the form g(x) ≥ 0 with g convex). Several alternative algorithms can then be proposed for this canonical d.c. program, each consisting basically of two alternating phases: a local phase, where local searches are used to find a local minimum better than the current feasible solution, and a global phase, where global methods are called for to test the current local minimum for global α-optimality and to find a new starting point for the next local phase. This two-phase structure is also typical for most of the stochastic methods. Thus the difference between our deterministic approach and the stochastic one is that in the global phase stochastic methods evaluate the functions at a number of random sample points, while our methods choose these points on the basis of the information collected so far about the structure of the problem. The paper consists of five sections. After the introduction, we discuss in Section 2 d.c. functions and d.c. programs. Here we introduce canonical d.c. programs and show how any d.c. program can be, by simple manipulations, reduced to this canonical form. In Section 3 we demonstrate the practical importance of d.c. programs by showing how a number of problems of interest
can be recognized as d.c. programs. Finally, Sections 4 and 5 are devoted to developing solution methods for canonical d.c. programs.
2. D.C. FUNCTIONS AND D.C. PROGRAMS
We first recall the definition and some basic properties of d.c. functions. For a comprehensive discussion of d.c. functions the reader is referred to the article of Hiriart-Urruty [14] and the recent work of R. Ellaia [7].
Definition 1. Let Ω be a convex closed subset of Rⁿ. A continuous function f: Ω → R is called a d.c. function if it can be represented as a difference of two convex functions on Ω:
f(x) = f₁(x) − f₂(x),
with f₁, f₂ convex on Ω.
Examples of d.c. functions (see e.g. [14] or [7]):
(1) Any convex or concave function on Ω.
(2) Any continuous piecewise affine function.
(3) Any C²-function and, in particular, any quadratic function.
(4) Any lower-C² function, i.e., any function f: Rⁿ → R such that, for each point u ∈ Rⁿ, there is for some open neighbourhood X of u a representation
f(x) = max_{s∈S} F(x, s), x ∈ X,
where S is a compact topological space and F: X × S → R is a function which has partial derivatives up to order 2 with respect to x and which, along with all these derivatives, is jointly continuous in (x, s) ∈ X × S [25].
The usefulness of d.c. functions in optimization theory stems from the following properties (see e.g. [40] for a proof).
Proposition 1. The class of d.c. functions on Ω is a linear space which is stable under the operations (f, g) ↦ max(f, g), (f, g) ↦ min(f, g) and f ↦ |f|.
It follows, in particular, that the pointwise minimum of a finite family of convex functions g₁(x), ..., g_m(x) is a d.c. function. In fact,
min_{j=1,...,m} g_j(x) = Σ_{i=1}^m g_i(x) − max_{j=1,...,m} Σ_{i≠j} g_i(x),
where the second term is a convex function, since it is the pointwise maximum of the m convex functions Σ_{i≠j} g_i (j = 1, ..., m).
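The decomposition above can be checked numerically. In the sketch below, the three convex functions are our own toy choices; the point is that (Σ_i g_i) − (max_j Σ_{i≠j} g_i) reproduces min_j g_j at every point:

```python
import numpy as np

# Three convex functions on R (toy choices).
gs = [lambda x: x ** 2, lambda x: abs(x - 1.0), lambda x: float(np.exp(x)) - 1.0]

def dc_min(x):
    """min_j g_j(x) written as the d.c. difference
    (sum_i g_i(x)) - (max_j sum_{i != j} g_i(x)):
    both terms are convex, the second being a max of sums of convex functions."""
    vals = [g(x) for g in gs]
    total = sum(vals)
    return total - max(total - v for v in vals)

# the decomposition reproduces the pointwise minimum everywhere on a test grid
for x in np.linspace(-2.0, 2.0, 41):
    assert abs(dc_min(x) - min(g(x) for g in gs)) < 1e-9
```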
Proposition 2. If Ω is compact, the family of d.c. functions on Ω is dense in C(Ω).
In other words, any continuous function on Ω can be approximated, as closely as desired, by a d.c. function. In particular, one way to approximate a continuous function on Ω by a d.c. function is to construct a piecewise affine approximation to it over Ω (Example 2 above), using e.g. a triangulation of Ω. Thus the class of d.c. functions includes a large majority of functions of practical interest.
Definition 2. A d.c. programming problem is a global optimization problem of the form
minimize f(x), s.t. x ∈ Ω, g_i(x) ≥ 0 (i = 1, ..., m),   (2)
where Ω is a convex closed subset of Rⁿ, and f, g₁, ..., g_m are d.c. functions on Rⁿ.
From Proposition 2 it follows that any global optimization problem as stated at the beginning of the Introduction, with Ω compact, can be approximated by a d.c. program (2) in which the d.c. functions f, g_i approximate F, G_i on Ω (¹).
(¹) It is beyond the scope of this paper to discuss the conditions under which an optimal solution of the approximate d.c. program actually gives an approximate optimal solution of the original problem.
D.c. programs form a very broad class of optimization problems including, of course, convex programs (where f is convex and the g_i are concave). In the next section we shall discuss some examples of the most typical d.c. programs encountered in the applications. A striking feature of d.c. programs is that, despite their great variety, they can all be reduced to a canonical form which we are now going to describe.
Definition 3. An inequality of the form g(x) ≥ 0, where g: Rⁿ → R is a convex function, is called a reverse convex (sometimes also complementary convex) inequality.
Obviously, such an inequality determines a nonconvex set which is the complement of a convex open set (and for this reason is called a complementary convex set). Optimization problems involving reverse convex constraints have been considered by Rosen [26], Avriel and Williams [2], Meyer [21], Ueing [44], and more recently by Hillestad and Jacobsen [12], [13], Tuy [39], [43], Bulatov [4], Bohringer and Jacobsen [3], Thuong and Tuy [34], [41], Thach [28], and others.
Definition 4. A mathematical programming problem is called a canonical d.c. program if its objective function is linear, and all its constraints are convex, except exactly one which is reverse convex.
In other words, a canonical d.c. program is an optimization problem of the form:
(P)  minimize cᵀx, s.t. x ∈ Ω, g(x) ≥ 0,   (3)
where Ω is a convex closed set and g: Rⁿ → R is a convex function. Clearly such a problem can be regarded as the result of adjoining to the convex program
minimize cᵀx, s.t. x ∈ Ω,   (4)
the additional reverse convex constraint g(x) ≥ 0. Thus, in a canonical d.c. program, all the nonconvexity of the problem is confined to the reverse convex constraint. It seems, then, that a canonical d.c. program is a rather special form of d.c. programs. Nevertheless:
Proposition 3. Any d.c. program (2) can be converted into an equivalent canonical d.c. program.
Proof. First we note that problem (2) is equivalent to the following one:
minimize z, s.t. x ∈ Ω, g_i(x) ≥ 0 (i = 1, ..., m), z − f(x) ≥ 0.
Therefore, by changing the notation, we can assume that the objective function in (2) is linear. Next, on the basis of Proposition 1, a system of d.c. constraints g_i(x) ≥ 0 (i = 1, ..., m) can always be replaced by a single d.c. constraint
g(x) = min_{i=1,...,m} g_i(x) ≥ 0.
Finally, a d.c. constraint
p(x) − q(x) ≥ 0   (5)
(p, q convex) is equivalent, upon introducing an additional variable t, to the system
q(x) + t ≤ 0, p(x) + t ≥ 0,   (6)
where the first constraint is convex, while the second is reverse convex. We have thus proved that, by means of some simple manipulations, any d.c. program (2) can be transformed into an equivalent minimization problem with a linear objective function and with all constraints convex, except one reverse convex. ∎
As a corollary of Proposition 3, it follows, in particular, that any convex program with several additional reverse convex constraints,
minimize cᵀx, s.t. x ∈ Ω, g_i(x) ≥ 0 (i = 1, ..., m),
where all the g_i are convex, can be converted into one with a single additional reverse convex constraint (i.e., a canonical d.c. program). This is achieved by
performing the transformation (1), followed by the manipulations from (5) to (6).
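The splitting of a d.c. constraint into a convex constraint plus a reverse convex constraint, as in (5)-(6), is easy to illustrate numerically. In the sketch below, p and q are toy convex functions of our own choosing; the witness t = −q(x) shows that the two feasible sets coincide:

```python
import numpy as np

p = lambda x: x ** 2               # convex
q = lambda x: 2.0 * abs(x) - 0.5   # convex

def dc_feasible(x):
    """Original d.c. constraint p(x) - q(x) >= 0."""
    return p(x) - q(x) >= 0.0

def canonical_feasible(x):
    """Split form with an extra variable t:
    q(x) + t <= 0 (convex) and p(x) + t >= 0 (reverse convex).
    t = -q(x) is always an admissible witness when the split is feasible."""
    t = -q(x)
    return (q(x) + t <= 0.0) and (p(x) + t >= 0.0)

# the two feasible sets agree at every point of a test grid
checks = [dc_feasible(x) == canonical_feasible(x) for x in np.linspace(-2.0, 2.0, 81)]
```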
3. EXAMPLES OF D.C. PROGRAMS
It is well known to economists that reverse convex constraints and concave objective functions to be minimized (or convex objective functions to be maximized) arise in situations where economies of scale (or increasing returns) are present (see e.g. [48], [49] for a discussion of nonconvexity in economics). Typically, suppose that an activity program x has to be selected from a certain convex set Ω ⊂ Rⁿ (the set of all technologically feasible programs), and the selection must be made so as to minimize the cost, f(x), under the conditions that u_i(x) ≥ c_i (i = 1, ..., m), where u_i(x) represents some kind of "effect" or "utility" resulting from the program x, and c_i is the minimal level required for the ith effect. Then the problem to be solved is a problem (P), with g_i(x) = u_i(x) − c_i, and will generally be a d.c. program, since the functions f and u_i can very often be assumed to be d.c. functions (for example, f is concave, while the u_i are convex; in many cases u_i(x) = Σ_{j=1}^n u_{ij}(x_j), where certain functions u_{ij}(·) are convex, others are concave or S-shaped functions, so that u_i(x) is actually a d.c. function).
D.c. programs encountered in industry and other applied fields (electrical network design, water distribution network design, engineering design, mechanics, physics) have been described, e.g., in [9], [10], [11], [19], [26], [46]. Let us discuss here in more detail some examples of optimization problems whose d.c. structure is not readily apparent.
1) Design centering problem. Random variations inherent in any fabrication process may result in very low production yield. To help the designer minimize the influence of these random variations, one method consists in maximizing yield by centering the nominal value of the design parameters in the so-called region of acceptability ([46]). This problem, called the design centering problem, can be formulated as a problem of the following form.
Given in Rⁿ a convex closed set Ω with int Ω ≠ ∅ and m complementary convex sets D_i = Rⁿ\C_i (i = 1, ..., m; C_i being convex open), and given a convex compact set B₀ with 0 ∈ int B₀, find the largest convex body B homothetic to a translate of B₀ that is contained in
S = Ω ∩ D₁ ∩ ... ∩ D_m.
If p denotes the gauge (Minkowski functional) of B₀, so that B₀ = {y : p(y) ≤ 1}, then the problem is to find
max{r : B_x(r) = {y : p(y − x) ≤ r} ⊂ S, x ∈ S}.
In this formulation, the problem is a kind of two-level optimization problem, namely: for every fixed x ∈ S find the maximum r(x) of all r satisfying B_x(r) ⊂ S; then maximize r(x) over all x ∈ S. A more efficient approach, however, is to formulate this problem as a d.c. program, in the following way [29].
For any closed subset M of Rⁿ, define d_M(x) = inf{p(y − x) : y ∉ M}. Then it can be proved (see [29]) that every d_{D_i} is a finite convex function on Rⁿ, while d_Ω is a finite concave function on Ω which can be extended to a finite concave function d̄_Ω on Rⁿ. If
f(x) = min{d̄_Ω(x), d_{D_1}(x), ..., d_{D_m}(x)},
then it follows from Proposition 1 that f is a d.c. function, and it is easily seen that the design centering problem is just equivalent to the d.c. program
maximize f(x), s.t. x ∈ S
(if D_i = {x : g_i(x) ≥ 0}, with g_i convex, then the problem has just the form (P)). Note that problems very near to this are encountered in computer-aided coordinate measurement technique (see [9]).
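A toy instance of the design centering problem can make the d.c. objective f = min(d̄_Ω, d_{D_1}, ...) concrete. In the sketch below, all data are our own choices (p is the Euclidean norm, Ω a disk of radius 2, and there is one circular "hole"); for this geometry the optimum is a ball of radius 1.25 centered at (−0.75, 0), which a brute-force search recovers:

```python
import numpy as np

R, c, s = 2.0, np.array([1.0, 0.0]), 0.5   # toy data: Omega = ball(0, R), hole = open ball(c, s)

def f(x):
    """Radius of the largest Euclidean ball centered at x contained in
    S = Omega \\ int ball(c, s):  f = min(d_Omega, d_D), where
    d_Omega(x) = R - |x| is concave and d_D(x) = |x - c| - s is convex
    (negative values simply mark invalid centers)."""
    d_omega = R - float(np.linalg.norm(x))
    d_D = float(np.linalg.norm(x - c)) - s
    return min(d_omega, d_D)

# crude grid search for the d.c. program  max { f(x) : x in S }
grid = np.linspace(-2.0, 2.0, 161)
best_val, best_x = max(((f(np.array([u, v])), (u, v)) for u in grid for v in grid))
```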
2) Linear programs with an additional complementarity condition. The following problem has been considered in [10]. In offshore technology, a submarine pipeline is usually laid so that it rests freely on the sea bottom. Since the sea bed profile is usually irregularly hilly, its regularization is often carried out by trench excavations, both in order to avoid excessive bending moments in the pipe and to bury it for protection. Thus, the practical problem which arises is to minimize the total cost of the excavation, under the condition that the free contact equilibrium configuration of the pipe nowhere implies excessive bending. It has been shown in [10] that this problem can be expressed as a nonlinear program of the form:
Minimize cᵀx + dᵀy, s.t. Ax + By ≥ p, x ≥ 0, y ≥ 0, xᵀy = 0 (x, y ∈ Rⁿ),
where the only nonlinear constraint is the complementarity condition xᵀy = 0. But the latter condition is obviously equivalent to
Σ_{i=1}^n min(x_i, y_i) ≤ 0,
which is a reverse convex inequality, since the function (x, y) ↦ Σ_{i=1}^n min(x_i, y_i) is concave. Therefore, the problem in point is a canonical d.c. program, more precisely, a linear program with an additional reverse convex constraint.
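The reformulation of the complementarity condition can be checked numerically. The sketch below (our own test data) verifies both the equivalence with xᵀy = 0 on the nonnegative orthant and the midpoint concavity of h(x, y) = Σ_i min(x_i, y_i):

```python
import numpy as np

def h(x, y):
    """h(x, y) = sum_i min(x_i, y_i): concave in (x, y), being a sum of
    pointwise minima of pairs of linear functions."""
    return float(np.minimum(x, y).sum())

# On the nonnegative orthant, x^T y = 0  <=>  h(x, y) <= 0.
x, y = np.array([1.0, 0.0, 3.0]), np.array([0.0, 2.0, 0.0])
complementary = float(x @ y) == 0.0 and h(x, y) <= 0.0

# midpoint-concavity check of h on random points
rng = np.random.default_rng(0)
concave_ok = True
for _ in range(200):
    a, b = rng.standard_normal(6), rng.standard_normal(6)
    m = 0.5 * (a + b)
    lhs = h(m[:3], m[3:])
    rhs = 0.5 * (h(a[:3], a[3:]) + h(b[:3], b[3:]))
    concave_ok &= lhs >= rhs - 1e-12
```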
The usefulness of this approach from a computational point of view has been demonstrated e.g. in [33].
3) Jointly constrained biconvex programs. The following problem, which generalizes the well-known bilinear programming problem, has been considered by some authors (see e.g. [1]):
minimize f(x) + xᵀy + g(y), s.t. (x, y) ∈ S ⊂ Rⁿ × Rⁿ,
where f and g are convex on S, and S is a convex closed subset of Rⁿ × Rⁿ. Here the bilinear form xᵀy is a d.c. function, since 4xᵀy = |x + y|² − |x − y|². Therefore the objective function itself is a d.c. function. We can also convert the problem into a concave minimization problem.
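The polarization identity behind this d.c. decomposition of the bilinear form is immediate to verify on random data:

```python
import numpy as np

# polarization identity:  4 x^T y = |x + y|^2 - |x - y|^2,
# a difference of two convex quadratic functions of (x, y)
rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
lhs = 4.0 * float(x @ y)
rhs = float((x + y) @ (x + y)) - float((x - y) @ (x - y))
gap = abs(lhs - rhs)
```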
4) Global minimization of a Lipschitzian function. Let Ω be a convex compact set in Rⁿ, and f a Lipschitzian function on Ω. Let K be a constant such that
|f(x) − f(y)| ≤ K|x − y|, x, y ∈ Ω.
If we set g(x, y) = f(y) − K|x − y|, then
f(x) = max_{y∈Ω} g(x, y) = g(x, x),
and hence
min_{x∈Ω} f(x) = min_{x∈Ω} max_{y∈Ω} g(x, y).
On the basis of this property, in [6], [23] (see also [4]) and more recently in [20], the following method has been proposed to find the global minimum of f over Ω:
Initialization: Take x⁰ ∈ Ω, set R₁ = {x⁰}, set i = 1.
Step 1. Compute xⁱ, a solution of the problem
min_{x∈Ω} max_{y∈R_i} g(x, y).   (7)
Step 2. Set R_{i+1} = R_i ∪ {xⁱ}. Set i ← i + 1 and go back to Step 1.
The crucial step in this procedure is to solve the problem (7). However, strange as it might seem, neither in [6], [23], nor in [20] has it been indicated how to solve this problem. In fact, for every y ∈ R_i the function g(·, y) is concave, so the function φ_i(x) = max_{y∈R_i} g(x, y) is a d.c. function (Proposition 1), and each problem (7) is the minimization of a d.c. function over the compact convex set Ω. Thus, the above method involves the solution of a sequence of increasingly complicated d.c. programs.
A more efficient approach recently developed by P.T. Thach [30] requires solving only one d.c. program.
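A minimal one-dimensional sketch of the scheme above can be written down directly. Everything below is our own toy setup (test function, Lipschitz constant, and a brute-force grid solver for the inner d.c. problem (7), which in general is itself a nontrivial global problem):

```python
import numpy as np

def relaxation_min(f, K, a, b, iters=100, grid_pts=2001):
    """Sketch of the scheme: R_{i+1} = R_i + {x_i}, where x_i minimizes the
    envelope phi_i(x) = max_{y in R_i} (f(y) - K|x - y|), a lower bound of f.
    Each inner problem (7) is solved here by brute force on a fixed grid."""
    xs = np.linspace(a, b, grid_pts)
    R = [a, b]
    for _ in range(iters):
        phi = np.max([f(y) - K * np.abs(xs - y) for y in R], axis=0)
        R.append(float(xs[int(np.argmin(phi))]))
    lower = float(np.min(np.max([f(y) - K * np.abs(xs - y) for y in R], axis=0)))
    upper = min(f(y) for y in R)       # best function value sampled so far
    return lower, upper

f = lambda x: float(np.sin(3.0 * x) + 0.5 * np.cos(7.0 * x))   # multiextremal
K = 6.5                                                         # |f'| <= 3 + 3.5
lower, upper = relaxation_min(f, K, 0.0, 3.0)
grid_min = min(f(x) for x in np.linspace(0.0, 3.0, 2001))
```

The envelope gives a guaranteed lower bound, so the true minimum is bracketed between `lower` and `upper`, and the bracket tightens as points accumulate near the global minimizers.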
5) Continuous programming. We have seen that, in view of the density of d.c. functions in C(Ω) (when Ω is convex compact), any problem of the form
minimize F(x), s.t. x ∈ Ω, G_i(x) ≤ 0 (i = 1, ..., m),
where F, G_i (i = 1, ..., m) are continuous functions, can be approximated by a d.c. program. We now go further by showing that any continuous programming problem can actually be converted into a d.c. program. Indeed, assuming, as we may without loss of generality, that the objective function is linear, the problem has the form
minimize cᵀx, s.t. x ∈ M,
where M is a closed subset of Rⁿ. But it is known that for any nonempty closed set M, the function x ↦ d²_M(x) = inf{|x − y|² : y ∈ M}
is d.c.; more specifically,
d²_M(x) = |x|² − (|x|² − d²_M(x)),
where the function x ↦ |x|² − d²_M(x) is convex (see [14]). Therefore the problem is nothing else than the d.c. program
minimize cᵀx, s.t. d²_M(x) ≤ 0.
4. SOLUTION METHOD FOR D.C. PROGRAMS
As shown in Section 2, every d.c. program can be reduced to the canonical form
(P)  minimize cᵀx, s.t. x ∈ Ω, g(x) ≥ 0,
where Ω is a convex closed set and g: Rⁿ → R is a convex function. In this and the next section we shall present a method for solving this problem (P). For the sake of simplicity we shall assume that Ω is compact, so that an optimal solution of (P) exists. Also we shall only consider the case where
min{cᵀx : x ∈ Ω} < min{cᵀx : x ∈ Ω, g(x) ≥ 0}.   (8)
Otherwise, the reverse convex constraint g(x) ≥ 0 would be inessential, i.e., (P) would be equivalent to the convex program
minimize cᵀx, s.t. x ∈ Ω,   (9)
and could be solved by many available efficient algorithms. Thus, we shall assume that a point w is available such that
w ∈ Ω, g(w) < 0,   (10)
cᵀw < min{cᵀx : x ∈ Ω, g(x) ≥ 0}.   (11)
Let G = {x : g(x) ≤ 0}, and for any set M ⊂ Rⁿ denote by ∂M the boundary of M.
For any feasible point z let π(z) be the point where the line segment [w, z] meets ∂G. Since π(z) = θw + (1 − θ)z with 0 ≤ θ < 1, we have by virtue of (11):
cᵀ(π(z)) = θcᵀw + (1 − θ)cᵀz ≤ cᵀz,
and if g(z) > 0 (so that θ > 0), then cᵀ(π(z)) < cᵀz.
Proposition 4. Under the above stated assumptions, problem (P) always has an optimal solution lying on ∂Ω ∩ ∂G.
Proof. As noticed just above, if a feasible point z is such that g(z) > 0, then cᵀ(π(z)) < cᵀz. Therefore, problem (P) is equivalent to
minimize cᵀx, s.t. x ∈ Ω, g(x) = 0.   (12)
Consider now any solution x̄ of (12). Take a supporting hyperplane to the convex set G at the point x̄, and denote by S its intersection with Ω. If x′ is an extreme point of S where the linear function cᵀx achieves its minimum over S, then x′ ∈ ∂Ω, and since S ⊂ Ω\int G, x′ is also an optimal solution to (P); hence, from the above, g(x′) = 0, i.e. x′ ∈ ∂G. Thus, in solving (P) we can restrict ourselves to points on ∂Ω ∩ ∂G. ∎
The two basic questions to be examined, as mentioned in the Introduction, are the following:
1) Given a feasible solution x⁰ of (12), i.e., a point x⁰ ∈ Ω ∩ ∂G, check whether or not x⁰ is an α-optimal solution of (P).
2) Given a point x⁰ ∈ Ω ∩ ∂G which is not an α-optimal solution, find a point x¹ ∈ Ω ∩ ∂G such that cᵀx¹ ≤ cᵀx⁰ − α. (Here α denotes a prechosen positive number.)
Proposition 5. In order that a feasible solution x⁰ to (P) be an α-optimum it is necessary and sufficient that
max{g(x) : x ∈ Ω, cᵀx ≤ cᵀx⁰ − α} < 0.   (13)
Proof. Since Ω is compact, the maximum defined in (13) is always attained. Clearly (13) holds if and only if
{x : x ∈ Ω, g(x) ≥ 0, cᵀx ≤ cᵀx⁰ − α} = ∅,
i.e., if and only if x⁰ is an α-optimum. ∎
Thus, to check whether or not an α-optimum has been achieved at x⁰, we have to solve the subproblem
(Q(x⁰))  maximize g(x), s.t. x ∈ Ω, cᵀx ≤ cᵀx⁰ − α.
This is a convex maximization (i.e. concave minimization) problem and is still a difficult problem of global optimization. One should not wonder at it, though, since to decide the global optimality of x⁰ one cannot expect to use only local information. The point here is that the problem (Q(x⁰)) is more tractable than (P) and can be solved with reasonable efficiency by several available algorithms (see e.g. [37], [38], [15], [16]). Proposition 5 resolves at the same time the two questions 1) and 2). If the maximal value of g in (Q(x⁰)) is < 0, then x⁰ is an α-optimal solution of (P). Otherwise, we find an optimal solution z⁰ to (Q(x⁰)) such that
z⁰ ∈ Ω, g(z⁰) ≥ 0, cᵀz⁰ ≤ cᵀx⁰ − α.
If g(z⁰) = 0, we set x¹ = z⁰; if g(z⁰) > 0, we set x¹ = π(z⁰), the point where the line segment [w, z⁰] meets ∂G. These results suggest the following method for solving (P).
ALGORITHM 1 (conceptual)
Initialization: Compute a point x⁰ ∈ Ω ∩ ∂G (preferably a local minimum of cᵀx over Ω\int G). Set k = 0.
Phase 1: Solve the subproblem
(Q(xᵏ))  maximize g(x), s.t. x ∈ Ω, cᵀx ≤ cᵀxᵏ − α,
to give an optimal solution zᵏ. If g(zᵏ) < 0, terminate. Otherwise, go to Phase 2.
Phase 2: Starting from zᵏ, compute a point xᵏ⁺¹ ∈ Ω ∩ ∂G (preferably near to a local minimum) such that cᵀxᵏ⁺¹ ≤ cᵀzᵏ ≤ cᵀxᵏ − α. Set k ← k + 1 and go to Phase 1.
Proposition 6. The above algorithm terminates after finitely many iterations.
Proof. Obvious, since cᵀxᵏ⁺¹ ≤ cᵀxᵏ − α (k = 1, 2, ...) and cᵀx is bounded below on Ω\int G. ∎
Remark 1. Algorithm 1 consists of an alternating sequence of "global" and "local" searches. In Phase 1, a global search is carried out to test the current xᵏ for α-optimality and to find a better feasible point zᵏ, if such a point exists. In Phase 2, a local search is used to find a feasible point xᵏ⁺¹ on ∂G at least as good as zᵏ. The simplest way is to take xᵏ⁺¹ = π(zᵏ), the intersection point of ∂G with the line segment [w, zᵏ]. But in many circumstances it is advantageous to compute a point xᵏ⁺¹ close to a local minimum, by using any local minimization algorithm. For example, one can proceed as follows.
Step 0: Let u⁰ = π(zᵏ). Set i = 0.
Step 1: Take a supporting hyperplane Hⁱ to G at uⁱ:
Hⁱ = {x : ⟨pⁱ, x − uⁱ⟩ = 0}, with pⁱ ∈ ∂g(uⁱ).
If uⁱ is optimal for the convex program
minimize cᵀx, s.t. x ∈ Ω ∩ Hⁱ,
stop and set xᵏ⁺¹ = uⁱ. Otherwise compute an optimal solution vⁱ to this program.
Step 2: Let uⁱ⁺¹ = π(vⁱ). Set i ← i + 1 and return to Step 1.
It can be shown that if the convex function g is Gâteaux differentiable on ∂G (so that there is a unique supporting hyperplane to G at each point of ∂G), then the just described procedure converges to a "stationary" point on ∂G (see [43]).
Remark 2: When Ω is polyhedral, several finite algorithms are available for solving (Q(xᵏ)) ([8], [37], [27], [31]). In this case, zᵏ is a vertex of the polytope
x ∈ Ω, cᵀx ≤ cᵀxᵏ − α.
Therefore, if g(zᵏ) > 0, then in Phase 2, starting from the vertex zᵏ, we can apply the simplex algorithm for minimizing cᵀx over this polytope. After a number of pivots we cross the surface g(x) = 0: the crossing point will yield xᵏ⁺¹ ∈ Ω ∩ ∂G. Algorithm 1 for this case is essentially the same as the one developed earlier in [34].

5. IMPLEMENTABLE ALGORITHM
In the general case, where certain constraints defining Ω are convex nonlinear, the convex maximization problem (Q(xᵏ)) in Phase 1 cannot be solved exactly by a finite procedure. Therefore, to make Algorithm 1 implementable, we must organize Phase 1 in such a way that either it terminates after finitely many steps or, whenever infinite, it converges to some α-optimal solution. Let Ω = {x : h(x) ≤ 0}, where h: Rⁿ → R is a convex finite function (note that such a function is continuous and subdifferentiable everywhere; see e.g. [24]). We shall assume int Ω ≠ ∅. This condition, along with the assumptions already made in the previous section, entails the existence of a point w satisfying (10), (11) and such that
w ∈ int Ω, i.e., h(w) < 0.
A general deterministic approach to global optimization
ALGORITHM 2 (implementable)
Initialization: Compute a point x^0 ∈ Ω ∩ ∂G (preferably a local minimum of c^T x over Ω \ int G). Select a polytope S containing the convex compact set Ω, such that the set V of all vertices of S is known and |V| is small. Set k = 0.
I. Phase 1
Set S^1 = S ∩ {x : c^T x ≤ c^T x^k − α}. Compute the set V^1 of all vertices of S^1. Set i = 1.
Step 1. Compute
v^i = arg max {g(x) : x ∈ V^i}.   (14)
If g(v^i) < 0, terminate. Otherwise go to Step 2.
Step 2. Compute u^i = π(v^i), the intersection point of the line segment [w, v^i] with ∂G.
a) If u^i ∈ Ω, i.e., h(u^i) ≤ 0, set z^k = u^i. Reset S = S^i, V = V^i and go to Phase 2.
b) If u^i ∉ Ω, i.e., h(u^i) > 0, find the point y^i where the line segment [w, u^i] meets ∂Ω, select p^i ∈ ∂h(y^i) and generate the new constraint
l_i(x) = (p^i | x − y^i) ≤ 0.   (15)
Let S^{i+1} = S^i ∩ {x : l_i(x) ≤ 0}. Compute the set V^{i+1} of all vertices of S^{i+1}. Set i ← i + 1 and go back to Step 1.
II. Phase 2
Starting from z^k, compute a point x^{k+1} ∈ Ω ∩ ∂G (preferably close to a local minimum) such that c^T x^{k+1} ≤ c^T z^k. Set k ← k + 1 and go to Phase 1 (with S, V defined in Step 2a of the previous Phase 1).
Remark 3. Since S^{i+1} differs from S^i by just one additional linear constraint, the set V^{i+1} of all vertices of S^{i+1} can be computed from the knowledge of the set V^i, by using e.g. the procedure developed in [31] (see also [37]).
Lemma 1. In every iteration k, constraint (15) separates u^i and v^i strictly from
Ω_k = {x ∈ Ω : c^T x ≤ c^T x^k − α},   (16)
that is,
l_i(x) ≤ 0 for all x ∈ Ω_k, while l_i(u^i) > 0 and l_i(v^i) > 0.   (17)
Proof. Suppose k = 0. Clearly S^1 ⊃ Ω_0. From the definition of a subgradient we have l_i(x) ≤ h(x) − h(y^i) = h(x) (since h(y^i) = 0). Hence l_i(x) ≤ 0 for every x satisfying h(x) ≤ 0, i.e., for every x ∈ Ω. Further, l_i(y^i) = 0 and l_i(w) ≤ h(w) < 0; hence, noting that u^i = w + θ(y^i − w) with θ > 1, we deduce l_i(u^i) = (1 − θ) l_i(w) > 0. Then, since v^i = w + λ(u^i − w) with λ ≥ 1, we also have
l_i(v^i) = (1 − λ) l_i(w) + λ l_i(u^i) > 0.
Thus (17) holds for k = 0. By induction it is easily seen that S^1 ⊃ Ω_k, and hence (17) holds for every iteration k. ∎
Proposition 7. If Algorithm 2 terminates at some iteration k, then x^k is an α-optimal solution of (P).
Proof. Algorithm 2 terminates when g(v^i) < 0. But from (14) and the convexity of g it follows that
0 > max {g(x) : x ∈ S^i}.
On the other hand, as seen in the proof of the previous lemma, Ω_k ⊂ S^i. Hence
0 > max {g(x) : x ∈ Ω, c^T x ≤ c^T x^k − α}.
This implies the α-optimality of x^k by Proposition 5. ∎
Proposition 8. If for some iteration k Phase 1 is infinite, then any cluster point ū of the generated sequence {u^i} satisfies
ū ∈ ∂Ω ∩ ∂G, 0 = g(ū) = max {g(x) : x ∈ Ω_k}.   (18)
Proof. Observe that if Phase 1 is infinite, then Step 2a never occurs, and so h(u^i) > 0 for every i. Therefore u^i always lies between y^i and v^i on the line segment [w, v^i]. Consider now any cluster point ū of {u^i}, for example ū = lim_{ν→∞} u^{i_ν}. By taking a subsequence if necessary we may assume that the sequence {v^{i_ν}} converges to some v̄. But if Phase 1 is infinite, this amounts exactly to applying the outer approximation algorithm ([15], [37]) to the problem of maximizing g(x) over the convex set Ω_k defined by (16). Therefore, v̄ must be an optimal solution to the latter problem, i.e.,
v̄ ∈ Ω_k, g(v̄) = max {g(x) : x ∈ Ω_k}.
Since v^i ∉ Ω for every i, this implies v̄ ∈ ∂Ω, and hence v̄ = ū. On the other hand, since u^i ∈ ∂G for every i, it follows that ū ∈ ∂G, i.e., g(ū) = 0. This completes the proof of (18). ∎
It follows from Propositions 8 and 5 that x^k is not α-optimal. However, since c^T ū ≤ c^T x^k − α, (18) implies
0 = max {g(x) : x ∈ Ω, c^T x ≤ c^T ū}.   (19)
Observe that if instead of this equality we had
0 > max {g(x) : x ∈ Ω, c^T x ≤ c^T ū},
we would be able to conclude that ū is a globally optimal solution of (P), for this would simply mean the inconsistency of the system
x ∈ Ω, g(x) ≥ 0, c^T x ≤ c^T ū.
Since, unfortunately, we have only (19), in order to conclude on the global optimality of ū, we must make the following additional assumption about the problem (P).
Definition 5. Problem (P) is said to be stable if
min {c^T x : x ∈ Ω, g(x) ≥ 0} = lim_{ε↓0} min {c^T x : x ∈ Ω, g(x) ≥ ε}.
Proposition 9. If problem (P) is stable, then in order that a feasible solution x̄ of (P) be a global optimum it is necessary and sufficient that
0 = max {g(x) : x ∈ Ω, c^T x ≤ c^T x̄}.   (20)
Proof. If there is a point z ∈ Ω such that c^T z ≤ c^T x̄ and g(z) > 0, then π(z) will be a feasible point such that c^T π(z) < c^T z ≤ c^T x̄. Therefore, (20) must hold for any optimal solution x̄. Conversely, suppose we have (20). This means that there is no point z ∈ Ω satisfying c^T z ≤ c^T x̄, g(z) > 0. Hence, for ε > 0, if x^ε is an optimal solution of the perturbed problem
min {c^T x : x ∈ Ω, g(x) ≥ ε},
then c^T x^ε > c^T x̄. Letting ε ↓ 0 we obtain from the stability assumption γ ≥ c^T x̄, where γ is the optimal value in (P). Therefore, x̄ is optimal for (P). ∎
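As a toy numerical illustration of criterion (20) (a one-dimensional instance of our own, not taken from the text), let Ω = [0, 2], g(x) = x² − 1 and c = 1, so that (P) minimizes x over the feasible set [1, 2] and x̄ = 1 is the global optimum:

```python
import numpy as np

# Toy instance of ours: (P) minimizes c^T x = x over Omega = [0, 2] under
# the reverse convex constraint g(x) = x**2 - 1 >= 0; the feasible set is
# [1, 2] and the global optimum is x_bar = 1.
g = lambda x: x ** 2 - 1.0
xs = np.linspace(0.0, 2.0, 200001)          # dense grid over Omega

def criterion(x_bar):
    # grid approximation of max { g(x) : x in Omega, x <= x_bar }
    return g(xs[xs <= x_bar]).max()
```

Here criterion(1.0) is ≈ 0, so x̄ = 1 satisfies (20), while criterion(1.5) ≈ 1.25 > 0 exhibits a better feasible point below 1.5, so 1.5 is not optimal.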
Corollary. If problem (P) is stable, then, in the case where Phase 1 is infinite, any cluster point ū of {u^i} yields an optimal solution to (P).
Remark 4. It follows from the proof of Proposition 8 that, in the case where Phase 1 is infinite, g(v^i) → 0. If we stop this phase at some step where g(v^i) < ε, then according to (14), for every x ∈ Ω such that c^T x ≤ c^T x^k − α, we must have g(x) ≤ g(v^i) < ε. Hence, there is no x ∈ Ω satisfying g(x) ≥ ε and c^T x ≤ c^T x^k − α. This shows that x^k is an approximate α-optimal solution in the following sense:
c^T x^k − α < min {c^T x : x ∈ Ω, g(x) ≥ ε}.
This conclusion is independent of the stability assumption.
Remark 5. A first algorithm somewhat different from the algorithm presented above was given in [43]. There also exist solution methods for problem (P) which bypass the stability condition (see e.g. Thach [28]). Using certain ideas of the latter paper one can propose the following variant of Algorithm 2. Let h⁺(x) = max{0, h(x)}. Then h⁺ is also a convex function, and since h⁺(x) = 0 for x ∈ Ω, it is easily seen, by virtue of Proposition 5, that a feasible solution x^0 to (P) is α-optimal if and only if
0 > max {g(x) + h⁺(x) : x ∈ Ω, c^T x ≤ c^T x^0 − α}.
Let us now modify Phase 1 in Algorithm 2 as follows.
Phase 1
Set S^1 = S ∩ {x : c^T x ≤ c^T x^k − α}. Compute the set V^1 of all vertices of S^1. Set i = 1.
Step 1. Compute
v^i = arg max {g(x) + h⁺(x) : x ∈ V^i}.
If g(v^i) + h⁺(v^i) < 0, terminate. Otherwise, go to Step 2.
Step 2. We have g(v^i) + h⁺(v^i) ≥ 0.
a) If max{g(v^i), h(v^i)} = 0, then h⁺(v^i) = 0 and g(v^i) = 0, i.e., v^i ∈ Ω ∩ ∂G, so we set z^k = v^i, reset S = S^i, V = V^i and go to Phase 2.
b) If max{g(v^i), h(v^i)} > 0, then v^i ∉ Ω ∩ G and we can find the intersection point y^i of the line segment [w, v^i] with the boundary of Ω ∩ G. Let p^i be a subgradient of the convex function max{g, h} at y^i. Generate the new constraint
l_i(x) = (p^i | x − y^i) ≤ 0.
Let S^{i+1} = S^i ∩ {x : l_i(x) ≤ 0}. Compute the set V^{i+1} of all vertices of S^{i+1} (from the knowledge of V^i). Set i ← i + 1 and go back to Step 1.
By an argument analogous to that used previously we can then prove that:
1) If Phase 1 terminates at Step 1, then x^k is an α-optimal solution to (P);
2) If Phase 1 is infinite, any cluster point v̄ of {v^i} satisfies
v̄ ∈ ∂Ω ∩ ∂G, 0 = max {g(x) + h⁺(x) : x ∈ Ω_k}.
Furthermore, it turns out that
Proposition 10. Let
ṽ^i = arg min {c^T x : x ∈ V^i, g(x) ≥ 0},
and assume that g is strictly convex. If Phase 1 is infinite, then any cluster point ṽ of the sequence {ṽ^i} is an optimal solution of (P).
Proof. Since g(ṽ^i) ≥ 0, it follows that g(ṽ) ≥ 0. But
g(ṽ^i) + h⁺(ṽ^i) ≤ g(v^i) + h⁺(v^i)
by the definition of v^i. Therefore
g(ṽ) + h⁺(ṽ) ≤ g(v̄) + h⁺(v̄) = 0,
where v̄ is the corresponding cluster point of {v^i}. This implies g(ṽ) = 0, h⁺(ṽ) = 0, i.e., ṽ ∈ ∂Ω ∩ ∂G.
Take now any optimal solution x̄ of (P). Then by Proposition 4, x̄ ∈ ∂Ω ∩ ∂G. Let q ∈ ∂g(x̄). Since x̄ ∈ S^i (see Lemma 1), there is a vertex z^i of S^i in the half space
(q | x − x̄) ≥ 0.
Let z̄ be a cluster point of {z^i}. Then, since
g(z^i) ≥ g(x̄) + (q | z^i − x̄) ≥ 0,
it follows, in a manner analogous to ṽ, that g(z̄) = 0, h⁺(z̄) = 0, i.e., z̄ ∈ ∂Ω ∩ ∂G. But then
(q | z̄ − x̄) ≤ g(z̄) − g(x̄) = 0,
and since (q | z^i − x̄) ≥ 0 implies (q | z̄ − x̄) ≥ 0, it follows that (q | z̄ − x̄) = 0. This equality, along with the relation g(z̄) = 0, shows that z̄ is an intersection point of G and the supporting hyperplane
H = {x : (q | x − x̄) = 0}.
Since, however, g is strictly convex, we must have H ∩ G = {x̄}, hence z̄ = x̄. Noting that from the definition of ṽ^i
c^T ṽ^i ≤ c^T z^i,
we then conclude c^T ṽ ≤ c^T x̄, so ṽ is actually optimal for (P). ∎
6. CONCLUSION
We have presented a general approach to global optimization, the main points of which can be summarized as follows:
1) A large majority of mathematical programming problems of interest actually involve d.c. functions.
2) Any d.c. programming problem can be reduced to the canonical form, where all the nonconvexity of the problem is confined to a single reverse convex constraint.
3) Canonical d.c. programming problems can be solved by algorithms of the same complexity as outer approximation methods for convex maximization problems.
4) By restricting the problem to the search for an α-optimum, it is possible to devise a flexible solution method in which local searches alternate with global searches.
Hopefully, the above approach, properly combined with local and stochastic approaches, will help to handle a class of global optimization problems which otherwise would be very difficult to solve.
REFERENCES
[1] F.A. AL-KHAYYAL, J.E. FALK, Jointly constrained biconvex programming, Math. Oper. Res., 8 (1983), 273-286.
[2] M. AVRIEL, A.A. WILLIAMS, Complementary geometric programming, SIAM J. Appl. Math., 19 (1970), 125-141.
[3] M.C. BOHRINGER, S.E. JACOBSEN, Convergent cutting planes for linear programs with additional reverse convex constraints, Lecture Notes in Control and Information Science, 59, System Modelling and Optimization, Proc. 11th IFIP Conference, Copenhagen (1983), 263-272.
[4] V.P. BULATOV, Embedding methods in optimization problems, Nauka, Novosibirsk (1977), Russian.
[5] NGUYEN DINH DAN, On the characterization and the decomposition of d.c. functions, Preprint, Institute of Mathematics, Hanoi (1985).
[6] Yu.M. DANILIN, S.A. PIYAVSKII, On an algorithm for finding the absolute minimum, In: "Theory of Optimal Decisions", 2, Kiev, Institute of Cybernetics (1967), Russian.
[7] R. ELLAIA, Contribution à l'analyse et l'optimisation de différences de fonctions convexes, Thèse de Doctorat 3ème cycle, Université Paul Sabatier, Toulouse (1984).
[8] J.E. FALK, K.R. HOFFMAN, A successive underestimation method for concave minimization problems, Math. Oper. Res., 1 (1976), 251-259.
[9] W. FORST, Algorithms for optimization problems of computer aided coordinate measurement techniques, 9th Symposium on Operations Research, Osnabrück, August (1984).
[10] F. GIANNESSI, L. JURINA, G. MAIER, Optimal excavation profile for a pipeline freely resting on the sea floor, Eng. Struct., 1 (1979), 81-91.
[11] B. HERON, M. SERMANGE, Nonconvex methods for computing free boundary equilibria of axially symmetric plasmas, Appl. Math. Optim., 8 (1982), 351-382.
[12] R.J. HILLESTAD, S.E. JACOBSEN, Reverse convex programming, Appl. Math. Optim., 6 (1980), 63-78.
[13] R.J. HILLESTAD, S.E. JACOBSEN, Linear programs with an additional reverse convex constraint, Appl. Math. Optim., 6 (1980), 257-269.
[14] J.-B. HIRIART-URRUTY, Generalized differentiability, duality and optimization for problems dealing with differences of convex functions, to appear in Lecture Notes in Mathematics, Springer-Verlag (1985).
[15] K.L. HOFFMAN, A method for globally minimizing concave functions over convex sets, Math. Programming, 20 (1981), 22-32.
[16] R. HORST, An algorithm for nonconvex programming problems, Math. Programming, 10 (1976), 312-321.
[17] L.A. ISTOMIN, A modification of Hoang Tuy's method for minimizing a concave function over a polytope, Z. Vycisl. Mat. i Mat. Fiz., 17 (1977), 1592-1597, (Russian).
[18] R. KLEMPOUS, J. KOTOWSKI, J. LUASIEWICZ, Algorytm wyznaczania optymalnej strategii wspoldzialania zbiornikow sieciowych z systemem wodociagowym, Zeszyty Naukowe Politechniki Slaskiej, Seria Automatyka, 69 (1983), 27-35.
[19] Z. MAHJOUB, Contribution à l'étude de l'optimisation des réseaux maillés, Thèse d'Etat, Institut National Polytechnique de Toulouse (1983).
[20] D.Q. MAYNE, E. POLAK, Outer approximation algorithm for nondifferentiable optimization problems, Journal of Optimization Theory and Applications, 42 (1984), 19-30.
[21] R. MEYER, The validity of a family of optimization methods, SIAM J. Control, 8 (1970), 41-54.
[22] B.M. MUKHAMEDIEV, Approximate methods for solving the concave programming problem, Z. Vycisl. Mat. i Mat. Fiz., 22 (1982), 727-731, (Russian).
[23] S.A. PIYAVSKII, Algorithms for finding the absolute minimum of a function, In: "Theory of Optimal Decisions", 2, Kiev, Institute of Cybernetics (1964), Russian.
[24] R.T. ROCKAFELLAR, Convex Analysis, Princeton Univ. Press, Princeton, New Jersey (1970).
[25] R.T. ROCKAFELLAR, Favorable classes of Lipschitz continuous functions in subgradient optimization, Working paper, IIASA (1981).
[26] J.B. ROSEN, Iterative solution of nonlinear optimal control problems, SIAM J. Control, 4 (1966), 223-244.
[27] J.B. ROSEN, Global minimization of a linearly constrained concave function by partition of feasible domain, Math. Oper. Res., 8 (1983), 215-230.
[28] P.T. THACH, Convex programs with several additional reverse convex constraints, Preprint, Institute of Mathematics, Hanoi (1984).
[29] P.T. THACH, The design centering problem as a d.c. program, Preprint, Institute of Mathematics, Hanoi (1985).
[30] P.T. THACH, H. TUY, Global optimization under Lipschitzian constraints, Preprint, Institute of Mathematics, Hanoi (1985).
[31] T.V. THIEU, B.T. TAM, V.T. BAN, An outer approximation method for globally minimizing a concave function over a compact convex set, IFIP Working Conference on Recent Advances on System Modeling and Optimization, Hanoi (1983).
[32] Ng.V. THOAI, H. TUY, Convergent algorithms for minimizing a concave function, Math. Oper. Res., 5 (1980), 556-566.
[33] Ng.V. THOAI, On convex programming problems with additional constraints of complementarity type, CORE discussion paper no. 8508 (1985).
[34] Ng.V. THUONG, H. TUY, A finite algorithm for solving linear programs with an additional reverse convex constraint, Proc. Conference on Nondifferentiable Optimization, IIASA (1984).
[35] J.F. TOLAND, A duality principle for nonconvex optimization and the calculus of variations, Arch. Rat. Mech. Anal., 71 (1979), 41-61.
[36] H. TUY, Concave programming under linear constraints, Dokl. Akad. Nauk SSSR, 159 (1964), 32-35; English translation in Soviet Mathematics, 5 (1964), 1437-1440.
[37] H. TUY, On outer approximation methods for solving concave minimization problems, Report no. 108 (1983), Forschungsschwerpunkt Dynamische Systeme, Univ. Bremen; Acta Math. Vietnamica, 8, 2 (1983), 3-34.
[38] H. TUY, T.V. THIEU, Ng.Q. THAI, A conical algorithm for globally minimizing a concave function over a closed convex set, Math. Oper. Res., forthcoming.
[39] H. TUY, Global minimization of a concave function subject to mixed linear and reverse convex constraints, IFIP Working Conference on Recent Advances on System Modeling and Optimization, Hanoi (1983).
[40] H. TUY, Global minimization of a difference of two convex functions, Selected Topics in Oper. Res. and Math. Economics, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, 226 (1984), 98-118.
[41] H. TUY, Ng.V. THUONG, Minimizing a convex function over the complement of a convex set, Proc. 9th Symposium on Operations Research, Osnabrück (1984).
[42] H. TUY, Concave minimization under linear constraints with special structure, Optimization, 16 (1985), 2-18.
[43] H. TUY, Convex programs with an additional reverse convex constraint, Journal of Optimization Theory and Applications, forthcoming.
[44] U. UEING, A combinatorial method to compute a global solution of certain nonconvex optimization problems, in: Numerical Methods for Nonlinear Optimization, Ed. F.A. Lootsma, Academic Press, New York (1972), 223-230.
[45] N.S. VASILIEV, Active computing method for finding the global minimum of a concave function, Z. Vycisl. Mat. i Mat. Fiz., 23 (1983), 152-156, (Russian).
[46] L.M. VIDIGAL, S.W. DIRECTOR, A design centering algorithm for nonconvex regions of acceptability, IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, CAD-1 (1982), 13-24.
[47] Z.Q. XIA, J.-J. STRODIOT, V.H. NGUYEN, Some optimality conditions for the CM-embedded problem with Euclidean norm, IIASA Workshop on Nondifferentiable Optimization: Motivation and Applications, Sopron (1984).
[48] A.B. ZALESSKY, Non-convexity of admittable areas and optimization of economic decisions, Ekonomika i Mat. Metody, XVI (1980), 1069-1080 (Russian).
[49] A.B. ZALESSKY, On optimal assessments under non-convex feasible solutions areas, Ekonomika i Mat. Metody, XVII (1981), 651-667 (Russian).
FERMAT Days 85: Mathematics for Optimization
J.-B. Hiriart-Urruty (editor)
© Elsevier Science Publishers B.V. (North-Holland), 1986
WELL-POSEDNESS AND STABILITY ANALYSIS IN OPTIMIZATION
T. ZOLEZZI
Istituto di Matematica
Via L.B. Alberti 4
16132 Genova (Italy)
Keywords. Stability. Well-posed problems. Variational convergence.
INTRODUCTION
This work synthesizes some results about well-posedness and stability analysis in abstract minimum problems and in the optimal control of ordinary differential inclusions. Some of these results may be formulated in an abstract form which shows the connection between stability analysis and variational extensions of minimum problems in the sense of Ioffe-Tikhomirov. In the first section we describe (Hadamard or Tikhonov) well-posedness of convex minimum problems in Banach spaces by using their optimal value functions. A (slight) extension of some results in [1] is thereby obtained. In the second section we deal with optimal control problems for differential inclusions. Neither convexity nor existence of optimal solutions is assumed. Extending results of [2], we relate the continuous behaviour of the optimal
value function with respect to perturbations acting on the data, with its stable behaviour when passing to the (unperturbed) relaxed problem. This gives both relaxability criteria (extending those in [3], [4]) and variational stability results in optimal control. The approach behind these theorems is based on variational convergence [5]. In the third section we show the abstract meaning of the results of the second one by interpreting the variational extensions [6] with the aid of a suitable variational convergence. The relations between variational convergence (including epi-convergence) and some of the results reported here are considered in [7], with an eye on mathematical programming problems. Further convergence results in optimal control are considered in [8]. A detailed version of the results given here in the last two sections will be published elsewhere.
1. WELL-POSEDNESS AND THE VALUE FUNCTION IN CONVEX OPTIMIZATION
In the sequel we shall use the following known definitions. (The arrows ⇀, → denote weak and strong convergence, respectively. Subsequences are denoted as the original sequences.)
Definition 1. A real Banach space X is called an E-space if and only if X is reflexive, strictly convex and, for any sequence (x_n),
(x_n ⇀ x, ‖x_n‖ → ‖x‖) ⇒ x_n → x.
For a discussion about the variational meaning of Definition 1 see [9].
Definition 2. Let K_n, n = 0, 1, 2, ..., be a sequence of nonempty subsets of the real Banach space X. Then K_n Mosco converges towards K_0, written K_n →^M K_0, if and only if:
for every subsequence of K_n, x_n ∈ K_n for all n, (x_n ⇀ x) ⇒ x ∈ K_0;   (1)
given y ∈ K_0 there exists y_n ∈ K_n such that y_n → y.   (2)
For a discussion of Definition 2 (with properties of Mosco convergence) see [10].
Given a nonempty subset K of the real Banach space X and f: K → R, we denote by (f, K) the problem of minimizing f over K; moreover v(f, K) = inf f(K) denotes the optimal value, and arg min (f, K) the (perhaps empty) set of the global optimal solutions to (f, K).
Definition 3. (a) (f, K) is Tikhonov well-posed if and only if arg min (f, K) is a singleton to which every minimizing sequence converges strongly.
(b) Assume f has a unique global minimum point on every nonempty closed convex subset of X. Then f is Hadamard well-posed with respect to a given set convergence → if and only if, for every sequence K_n of nonempty closed convex subsets of X,
(K_n → K_0) ⇒ arg min(f, K_n) ⇀ arg min(f, K_0).
For a discussion of Definition 3 see [1] and the references thereof.
For a given nonempty closed convex subset K of an E-space X and a given x ∈ X, we denote by p(x, K) the nearest point in K to x, and by dist(x, K) its distance from K. A basic result about well-posedness is the following theorem ([10], Theorem 3.33), due to Y. Sonntag.
Theorem 1. Let X*, X be E-spaces, and K_n a sequence of nonempty closed convex subsets of X. The following are equivalent:
K_n →^M K_0;   (3)
p(x, K_n) → p(x, K_0) for all x ∈ X;   (4)
dist(x, K_n) → dist(x, K_0) for all x ∈ X.   (5)
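As a concrete finite-dimensional illustration (a toy example of our own, not from the text): R² with the Euclidean norm is a Hilbert space, hence an E-space with an E-space dual, and for the boxes K_n = [0, 1 + 1/n]², which Mosco converge to K_0 = [0, 1]², both the nearest points in (4) and the distances in (5) converge:

```python
import numpy as np

def proj_box(x, hi):
    """Nearest point in the box [0, hi]^2 (coordinatewise clamping gives
    the metric projection for the Euclidean norm)."""
    return np.clip(x, 0.0, hi)

x = np.array([2.0, -1.0])
p0 = proj_box(x, 1.0)                   # p(x, K_0) = (1, 0)
d0 = np.linalg.norm(x - p0)             # dist(x, K_0) = sqrt(2)
for n in (1, 10, 100, 1000):
    pn = proj_box(x, 1.0 + 1.0 / n)     # p(x, K_n) = (1 + 1/n, 0)
    print(n, np.linalg.norm(pn - p0), abs(np.linalg.norm(x - pn) - d0))
```

Both printed gaps shrink to 0 as n grows, matching (4) and (5).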
As far as well-posedness is concerned, the meaning of Theorem 1 relies on the following variational characterization of E-spaces (see [9]). A convex best approximation problem in the Banach space X is defined by giving a point x_0 ∈ X and a nonempty closed convex subset K of X, and by minimizing over K the distance x ↦ ‖x − x_0‖.
Theorem 2. The Banach space X is an E-space if and only if every convex best approximation problem in X is Tikhonov well-posed.
From Theorems 1 and 2 we see that for convex best approximation problems, their Tikhonov well-posedness implies the equivalence between Hadamard well-posedness with respect to Mosco convergence (4), and continuity (5) of the optimal value function with respect to the constraints. The continuity properties of the value function therefore afford a complete picture of well-posedness for such a class of optimization problems. One may wonder to what extent the above results may be generalized to more general classes of minimum problems. After all, best approximation problems seem to involve in an essential way the underlying geometrical structure of the given space X.
As a matter of fact, the equivalence between Tikhonov and Hadamard well-posedness is a deep and far-reaching property which holds for broader classes of minimum problems and variational inequalities in an abstract setting. One gets in this way, as a byproduct, the Hadamard well-posedness of linear operator equations (from a new point of view), which are natural extensions of the classical boundary value problems in Mathematical Physics, to which the original Hadamard well-posedness definition applies. See the results of [1], [11], [12] and [13] in that respect.
Let τ denote the collection of all nonempty closed convex subsets of the given Banach space X, and denote by →^H the Hausdorff set convergence in X. As a corollary of the results of [1] we get
Theorem 3. Let X be a real reflexive Banach space, f: X → R convex and lower semicontinuous. Assume arg min (f, K) is a singleton for every K ∈ τ. If f is uniformly continuous on every bounded set, then (f, K) Tikhonov well-posed for every K ∈ τ implies
v(f, K_n) → v(f, K_0) whenever K_n →^H K_0, K_n ∈ τ;   (6)
moreover, f is Hadamard well-posed with respect to Mosco convergence:
(v(f, K_n) → v(f, K_0) whenever K_n →^M K_0, K_n ∈ τ) ⇒ arg min(f, K_n) ⇀ arg min(f, K_0).   (7)
Proof. By Theorem 3.1 of [1] we get Hadamard well-posedness; moreover,
arg min(f, K_n) → arg min(f, K_0).
Thus (6) is obtained since f is continuous. Let now (K_n) be as in (7) and set
x_n = arg min(f, K_n), n = 0, 1, 2, ...
Consider y_n ∈ K_n such that y_n → x_0 by (2). Then
lim sup f(x_n) ≤ lim sup f(y_n) = f(x_0),
therefore x_n is bounded by Lemma 2.5 of [1]. Fix any subsequence of x_n. For a further subsequence we get by (1)
x_n ⇀ x̄ ∈ K_0.
Thus
f(x̄) ≤ lim inf f(x_n) ≤ lim inf inf f(K_n) = inf f(K_0),
which shows
f(x̄) = inf f(K_0) = f(x_0).
By the uniqueness property we get x̄ = x_0, thereby obtaining (7). ∎
Remarks. (a) It is unknown (but true in some significant examples) whether (7) could be strengthened to strong convergence of the optimal solutions.
(b) An explicit characterization of the functions f verifying the uniqueness assumption of Theorem 3 is available; see Remark 3.4 in [1].

2. RELAXATION AND OPTIMAL VALUE STABILITY IN THE OPTIMAL CONTROL OF DIFFERENTIAL INCLUSIONS
Here we consider optimal control problems which may lack (global) optimal solutions. The behaviour of the optimal value function is studied as related to perturbations acting on the data of the problem. Such a stability analysis reveals an interesting link with relaxability (in the usual sense of [3], [4]). Such a connection was first discovered in [2].
The optimal control problems we shall discuss are defined as follows:
minimize f(x(b), p)   (8)
subject to
ẋ(t) ∈ G(t, x(t), p), a.e. in (a, b); x(a) = x_0;   (9)
x(t) ∈ H(t, p), a ≤ t ≤ b.   (10)
Here a, b are fixed; the perturbation parameter p belongs to a given metric space T. The nonempty R^n-valued multifunctions G, H are given;
x_0 ∈ H(a, p) for every p. A fixed point 0 ∈ T is given, which defines the unperturbed problem (i.e., (8), (9), (10) with p = 0). We are interested in the stability analysis of the above problems; in particular we shall consider the behaviour as p → 0 of the optimal value function
v(p) = inf{f(x(b), p) : x is absolutely continuous in [a, b] and solves (9), (10)}.   (11)
Of course the behaviour as p → 0 of the approximate solutions set is of interest too. These are defined as those (absolutely continuous) solutions x to (9), (10) such that, for some ε > 0 (assuming v(p) ∈ R),
f(x(b), p) ≤ v(p) + ε.
Following the terminology of [2] we shall say that performance stability holds for (8), (9), (10) if and only if v(p) → v(0) as p → 0.
q n : X + R , n = 0 , 1 , 2,..., be a sequence of extended real-valued functions. Then qn converges variation-
ally towards qo, written as var-lim qn = qo
T. Zolezzi
312
if and only if
for every u E X and a
> 0,
there exists a sequence un E X such that lim SUP qn(un) 5 u
+ qo(u).
Given q: T x X -,R, we write var-lim q ( p , = q(0, a)
if and only if for every sequence pn
+0
a)
in T,
The role of Mosco convergence as related to well-posedness has been shown in section 1. Mosco convergence (l),(2) may be defined (in a coherent way ) for sequences of extended real-valued functions. This turns out to be a particular case of epi-convergence (with respect to both strong and weak convergence) on a Banach space (see
[lo]). Epi-convergence
is a particular instance of vari-
ational convergence, which in turn is particularly well suited t o handle some convergence problems for minimum problems possibly lacking solutions, as (8),
(9), (10) do (see [7]). It is therefore not surprising (and perhaps only natural) that variational convergence is strongly related to performance stability in the present setting. The following heuristic discussion shows a link between stability analysis for (8), (9),(10) and relaxation. Let p , -, 0 in T be given with v ( p , ) E R,z, be a solution t o (9), (10) corresponding to pn such that for every n,
313
Well-posedness and stability analysis in optimization
Under standard smoothness assumptions, x n has weakly convergent subsequences in L ' ( a , b ) . Then an absolutely continuous yo exists such that by (14),
The main point is that yo does not necessarily solve (9) with p = 0. As an example, we may well have
G ( t , z , p )= { - l , l } , Ik,(t)l = 1 a.e., ~ ~ (=00,)
x,
-
0 in L ' ( a , b ) .
In general we get i o ( t ) E co G(t,yo(t),O),a.e. in ( a ,b),
where co denotes the convex hull. So we are naturally led t o relate the stability analysis of (8),(9), (10) with the unperturbed relaxed problem defined as follows: t o minimize f(y(b),O)
(16)
y(t) E co G ( t , y ( t ) , O )a.e. , in ( a , b ) ;
(17)
subject to
y(a) = SO; y ( t )
E H(t,O), a 5 t 5 b.
Let us denote by v* the optimal value for (16), (17). Of course v* 5 v(0). Relaxation stability holds if and only if v(0) = v*. By (15) we see that
lim inf v(p,) 2 v*. So relaxation stability is related t o performance stability.
Owing to con-
straint ( lo ) , the relaxation stability problem is highly nontrivial. For unconstrained problems, the Filippov- Wasewski relaxation theorem ([141, Theorem 2 p. 124) answers the question. But here there is no guarantee that (16), (17)
T. Zolezzi
314
is a correct representation of the lower semicontinuous hull in the appropriate topology (relaxed problem) of the unperturbed problem (and in fact, this conclusion is often false). An abstract version of this problem will be considered in section 3. To get a complete relation between relaxation stability and performance stability, some conditions of the type (13) is required in order to connect limsupv(p) with w(0) (it is a necessary condition too). This may be obtained as follows.
We assume some a priori estimate, known in the form of a closed W c Rn such that { z ( b ) ; zsolves (9) with p = 0} c W.
For problems (8), (9) and (16), (17), consider the following assumptions (terminology as in [IS]):
f is continuous;
G ( t , z , p ) is measurable in t , upper semicontinuous in
(2,p ) ,
integrably bounded;
G(., .,0) is measurably Lipschitz;
H ( t , -) is upper semicontinuous by inclusion at p = 0 for all t ; for every 3 E W
n H ( b , O ) and every small 6 > 0,
there exist solutions z to (9), (10) corresponding to p sufficiently small, such that f(Z(b),P)5
f(%P)
+ t*
Theorem 4. Under assumptions (18), (19), relaxation stability and performance stability are equivalent facts.
Theorem 4 generalizes some of the results in [2], and gives a partial extension of the relaxability results in [4] (which, strictly speaking, are independent of Theorem 4; see Example 2 below).
Example 1. We minimize ∫₀¹ (x² − u²) dt subject to
ẋ = u, x(0) = 0, |u| ≤ 1, x(t) = 0 for all t.
This is the unperturbed problem, with optimal value 0. The perturbed problems are defined as above, except that the constraint x(t) = 0 for all t is replaced by
|x(t)| ≤ p, 0 ≤ t ≤ 1, p > 0.
It is easily seen that v(p) → −1 as p → 0. The lack of performance stability is explained by Theorem 4: relaxation stability fails since v* ≤ −1 < v(0) = 0. The assumptions of Theorem 4 are easily seen to be verified, including (19) with the a priori estimate W = [−1, 1] × [−1, 1].
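Example 1 can be checked numerically (a sketch of our own, not from the text): the chattering control u = ±1 that switches whenever |x| reaches the band edge p yields a triangular trajectory with cost ∫₀¹ (x² − u²) dt = p²/3 − 1, which tends to −1 as p → 0:

```python
def chatter_cost(p, n=100000):
    """Euler simulation of the bang-bang control u = +/-1 switching at the
    band edges |x| = p; returns the cost integral of (x^2 - u^2) over [0, 1],
    which is approximately p**2 / 3 - 1."""
    dt = 1.0 / n
    x, u, cost = 0.0, 1.0, 0.0
    for _ in range(n):
        cost += (x * x - u * u) * dt
        x += u * dt
        if x >= p:
            u = -1.0                  # hit the upper band edge: steer down
        elif x <= -p:
            u = 1.0                   # hit the lower band edge: steer up
    return cost
```

For instance, chatter_cost(0.05) ≈ −0.999, consistent with v(p) → −1, while the unperturbed value is v(0) = 0.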
Example 2. Let
g(x) = x + 1 if x ≤ 0,  g(x) = (x − 1)² if x ≥ 0.
The unperturbed problem is defined by minimizing, subject to
ẋ₁ = u, x₁(0) = 0,
ẋ₂ = u, x₂(0) = 0,
g(x₁(1)) ≤ x₂(1), |x₁(1)| ≤ 2, |x₂(1)| ≤ 1.
This problem is not calm (as defined in [4]). Relaxation stability obviously holds. We consider the perturbed problem obtained by replacing x₂(0) = 0 by
T. Zolezzi
As is easily checked, v(p) → v(0) = −1 as p → 0. Theorem 4 applies, but Theorem 2 of [4, p. 570] does not.

3. VARIATIONAL LIMIT AND VARIATIONAL EXTENSIONS
An abstract approach to the problem considered in Section 2 allows us to connect the performance stability results, as related to relaxation stability, with suitable variational limits, which give the variational extensions of minimum problems in [6], a natural abstract version of relaxation procedures. For the sake of simplicity, we shall modify the original definition of [6] as follows (Definition 5). Let E, F be subspaces of a given convergence space X, and let
I: E → ℝ, J: F → ℝ
be two given functions.
Definition 4. (J, F) is a weak variational extension of (I, E) if and only if the following conditions hold:
there exists a continuous mapping θ: E → F such that
J[θ(x)] ≤ I(x) for every x ∈ E;   (20)
for every y ∈ F and ε > 0 there exists a sequence xₙ ∈ E such that
lim sup I(xₙ) ≤ ε + J(y).   (21)
Definition 5. (J, F) is a strong variational extension of (I, E) if and only if it is a weak one and, moreover, the sequence xₙ in (21) may be found in such a way that θ(xₙ) → y.
As shown in Lemma 1.2 of [6], strong extensions preserve all local minima. Since we are interested in the behaviour of the global minima only (as in the above sections), we need a weaker concept, as in Definition 4.
A variational extension is called regular whenever the relevant function J is lower semicontinuous. We remark that F = cl θ(E) as a consequence of Definition 5. The proof of the following result may be obtained in a straightforward way.
Relaxation Theorem (see [6], [15]).
(a) Let (J, F) be a regular weak variational extension of (I, E), and let
v = inf I(E), v* = inf J(F).
Then v = v*. Moreover, if (xₙ) is a minimizing sequence for (I, E) and y is a cluster point of (θ(xₙ)), then y ∈ arg min(J, F).
(b) Let (J, F) be a regular strong variational extension of (I, E). Assume that for some t > v the set
{x ∈ E : I(x) ≤ t}
is sequentially compact. Then arg min(J, F) is the set of all cluster points of (θ(xₙ)), where (xₙ) is any minimizing sequence for (I, E).
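The straightforward proof of part (a) can be sketched directly from (20), (21) and regularity; the following derivation is our reconstruction, not from the original text:

```latex
% v* <= v: by (20), J[\theta(x)] \le I(x) for each x in E, hence
\[
  v^* = \inf J(F) \;\le\; \inf_{x\in E} J[\theta(x)] \;\le\; \inf_{x\in E} I(x) = v.
\]
% v <= v*: fix y in F and eps > 0; by (21) choose (x_n) in E with
% \limsup_n I(x_n) \le eps + J(y), so
\[
  v \;\le\; \limsup_n I(x_n) \;\le\; \varepsilon + J(y);
\]
% let eps -> 0 and take the infimum over y in F to get v <= v*.
% Cluster points: if (x_n) minimizes (I,E) and \theta(x_{n_k}) -> y,
% lower semicontinuity of J (regularity) gives
\[
  J(y) \;\le\; \liminf_k J[\theta(x_{n_k})]
       \;\le\; \liminf_k I(x_{n_k}) \;=\; v \;=\; v^*,
\]
% hence y minimizes J over F, i.e. y \in \arg\min(J,F).
```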
The following is an abstract version of the problem considered in Section 2. Let H be a closed set in X which meets E, and let (J, F) be a variational extension of (I, E) with θ = identity. Does (J, F ∩ H) then realize a variational extension of (I, E ∩ H)?
Answers may be obtained by employing a stability analysis approach, to get generalizations of Theorem 4. A sample result is as follows:
Intersection Theorem. Let X be a linear metric space, let I: X → ℝ, and let (J, F) be a strong variational extension of (I, E) with θ = identity. Let
v = inf I(E), v(p) = inf {I(x) : x ∈ E ∩ (H + p)}.
Then
lim inf v(p) ≥ v as p → 0
implies that (J, F ∩ H) is a weak variational extension of (I, E ∩ H). (Compare with calmness as defined in [4].)

The role of variational convergence, hidden in the approach followed in Section 2, is clearly shown in the following theorem. We shall use the definition of epi-convergence (see [10]). Given E, F as in Definition 4, write xₙ → y if and only if xₙ ∈ E, y ∈ F, and θ(xₙ) → y. We consider E as equipped with this convergence.
Theorem 6. If (J, F) is a regular strong (respectively, weak) variational extension of (I, E), then for the constant sequence Iₙ = I for all n, epi-lim Iₙ = J (respectively, var-lim Iₙ = J).
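For convenience, the standard sequential characterization of epi-convergence (as in [10]) can be recalled; in the constant case Iₙ = I of Theorem 6, with E carrying the convergence xₙ → y just introduced, it reads (our paraphrase):

```latex
% epi-lim I_n = J means: for every y in F,
\[
  \text{(i)}\ \ x_n \to y \ \Longrightarrow\ J(y) \le \liminf_n I(x_n),
  \qquad
  \text{(ii)}\ \ \exists\, x_n \to y \ \text{with}\ \limsup_n I(x_n) \le J(y).
\]
% Condition (ii) is essentially the strong form of (21), with the
% tolerance \varepsilon removed in the limit, while (i) corresponds
% to regularity (lower semicontinuity of J).
```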
Remark. See [10], Corollary 2.3, for a related known result.
REFERENCES
[1] R. LUCCHETTI-F. PATRONE, Hadamard and Tikhonov well-posedness of a certain class of convex functions, J. Math. Anal. Appl., 88 (1982), 204-215.
[2] A. DONTCHEV-B. MORDUHOVIC, Relaxation and well-posedness of nonlinear optimal processes, Systems Control Letters, 3 (1983), 177-179.
[3] J. WARGA, Relaxed variational problems, J. Math. Anal. Appl., 4 (1962), 111-128.
[4] F. CLARKE, Admissible relaxation in variational and control problems, J. Math. Anal. Appl., 51 (1975), 557-576.
[5] T. ZOLEZZI, On convergence of minima, Boll. Un. Mat. Ital., 8 (1973), 246-257.
[6] A. IOFFE-V. TIKHOMIROV, Extension of variational problems, Trans. Moscow Math. Soc., 18 (1968), 207-273.
[7] T. ZOLEZZI, Stability analysis in optimization, Proceedings International School of Mathematics "G. Stampacchia", Optimization and Related Fields, Erice (1984), edited by R. Conti, E. De Giorgi, F. Giannessi, to appear.
[8] T. ZOLEZZI, Some convergence results in optimal control and mathematical programming, Proceedings Workshop on differential equations and their control, Iasi (1982), 218-232; Iasi (1983).
[9] R. HOLMES, A course on optimization and best approximation, Lecture Notes in Math. 257, Springer-Verlag (1972).
[10] H. ATTOUCH, Variational convergence for functions and operators, Pitman (1984).
[11] R. LUCCHETTI-F. PATRONE, A characterization of Tikhonov well-posedness for minimum problems, with applications to variational inequalities, Numer. Funct. Anal. Optimization, 3 (1981), 461-476.
[12] R. LUCCHETTI-F. PATRONE, Some properties of well-posed variational inequalities governed by linear operators, Numer. Funct. Anal. Optimization, 5 (1982-83), 349-361.
[13] R. LUCCHETTI, Some aspects of the connections between Hadamard and Tikhonov well-posedness of convex programs, Boll. Un. Mat. Ital., Analisi Funzionale e Applicazioni, Serie VI, 1, vol. C (1982), 337-347.
[14] J.-P. AUBIN-A. CELLINA, Differential inclusions, Springer (1984).
[15] M. VALADIER, Sur un théorème de relaxation d'Ekeland-Temam, Séminaire d'Analyse Convexe, Montpellier, exposé no 5 (1981).
[16] F. CLARKE, Optimization and nonsmooth analysis, Wiley-Interscience (1983).