Systems Optimization Methodology, Part II
Systems Optimization Methodology, Part II
Translated from Russian by Y. M. Donets

V. V. Kolbin
St. Petersburg State University

World Scientific
Singapore • New Jersey • London • Hong Kong
Published by World Scientific Publishing Co. Pte. Ltd. P O Box 128, Farrer Road, Singapore 912805 USA office: Suite IB, 1060 Main Street, River Edge, NJ 07661 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
SYSTEMS OPTIMIZATION METHODOLOGY PART 2 Copyright © 1999 by World Scientific Publishing Co. Pte. Ltd. All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-02-3303-5
This book is printed on acid-free paper.
Printed in Singapore by Uto-Print
Dedicated to Galina Kolbina, A.Sc., my wife and friend.
CONTENTS

Introduction

Chapter 1. RISK AND UNCERTAINTY IN THE COMPLEX SYSTEMS
1.1 Uncertainty and Probability in the Planning and Management Problems for Complex Systems
1.2 Probabilistic Approaches to Complex Systems
1.3 Basic Indications for Classification of Stochastic Programming Problems

Chapter 2. CHANCE-CONSTRAINED STOCHASTIC PROGRAMMING PROBLEMS
2.1 Statement and Qualitative Analysis of Chance-Constrained Stochastic Programming Problems
2.2 Charnes-Cooper Deterministic Equivalents
2.3 Deterministic Equivalents to Chance-Constrained Stochastic Programming Problems
2.4 Examples of Chance-Constrained Stochastic Programming Problems

Chapter 3. TWO-STAGE STOCHASTIC PROGRAMMING PROBLEMS
3.1 Statement of a Two-Stage Stochastic Programming Problem
3.2 Analysis of a Two-Stage Stochastic Programming Problem
3.3 Some Partial Models of Two-Stage Stochastic Programming Problems
3.4 The Two-Stage Nonlinear Stochastic Programming Problem
3.5 Methods for the Solution of Two-Stage Stochastic Programming Problems: Examples
3.6 Applications of Two-Stage Stochastic Programming Problems: Examples

Chapter 4. MULTISTAGE STOCHASTIC PROGRAMMING PROBLEMS
4.1 Formulations of Dynamic Stochastic Programming Problems
4.2 Qualitative Analysis of Multistage Stochastic Problems with a Posteriori Decision Rules
4.3 A Priori Decision Rules in Multistage Stochastic Programming Problems
4.4 Duality in Multistage Stochastic Programming
4.5 Applications of Multistage Stochastic Programming Problems

Chapter 5. GAME APPROACH TO STOCHASTIC PROGRAMMING PROBLEMS
5.1 Game Formulation of Stochastic Programming Problems
5.2 Special Cases of the Game G(E+, F, g)

Chapter 6. EXISTENCE OF SOLUTION AND ITS OPTIMALITY IN STOCHASTIC PROGRAMMING PROBLEMS
6.1 Dual Stochastic Linear Programming Problems
6.2 Optimality and Existence of the Solution in Stochastic Programming Problems
6.3 Investigation of One Stochastic Programming Problem
6.4 Definition of the Set of Feasible Solutions in Hanson's Problem

Chapter 7. STABILITY OF SOLUTIONS IN STOCHASTIC PROGRAMMING PROBLEMS
7.1 Stability of Solutions in Stochastic Linear Programming Problems
7.2 ε-Stability of Solutions in the Mean
7.3 Stability of Solutions to Stochastic Nonlinear Programming Problems
7.4 Feasible Solution and Function Stability with Respect to the i-th Constraint
7.5 Investigation of Absolute Solution Stability
7.6 Stability in Probability Measure

Chapter 8. METHODS FOR SOLVING INFINITE AND SEMI-INFINITE PROGRAMMING PROBLEMS
8.1 Statement of Semi-Infinite Programming Problems
8.2 Duality of Semi-Infinite Problems
8.3 Optimality Conditions for Semi-Infinite Programming Problems
8.4 Existence and Uniqueness of Semi-Infinite Programming Problems
8.5 Methods and Algorithms for Solving Semi-Infinite Programming Problems
8.6 Statement of the Infinite Programming Problem
8.7 Duality of Infinite Programming Problems
8.8 Optimality Conditions for Infinite Programming Problems
8.9 Existence and Uniqueness of a Solution to Infinite Programming Problems
8.10 Methods and Algorithms for Solving Infinite Programming Problems

Chapter 9. OPTIMIZATION ON FUZZY SETS
9.1 Optimization Problems in Large and Complex Systems
9.2 Optimization of Problems with Nonuniquely Defined Parameters
9.3 Basic Notions
9.4 Solution Optimization Algorithms for Linear Programming Problems with Variations in Constraint Coefficients
9.5 Optimization of Integer LP Problems

Chapter 10. OPTIMIZATION OF NONLINEAR PROGRAMMING PROBLEMS WITH NONUNIQUELY DEFINED VARIABLES
10.1 Optimization of Problems with Nonuniquely Defined Variables and Their Solution Methods
10.2 Generalization in Nonuniquely Defined Functional Optimization Problems
10.3 Optimization of Nonlinear Convex Functions with Nonuniquely Defined Coefficients
10.4 Solution Algorithms for Nonlinear Programming Problems with Nonuniquely Defined Variables

Chapter 11. OPTIMIZATION PROBLEMS IN FUNCTION SPACES
11.1 Topological Spaces
11.2 Linear Topological Spaces. Fundamentals of Convex Analysis
11.3 Measure Spaces. Probability Spaces. Modeling of Random Variables
11.4 Ordered Spaces
11.5 Linear Optimization in Conditionally Complete Vector Lattices
11.6 Matrix Games in Conditionally Complete Vector Lattices
11.7 Optimization Problems on Vector Lattices
11.8 Generalized Parametric Programming Problem

Conclusion

Bibliography
Introduction

The methodology of investigation and optimization of goal-oriented systems employs nondeterministic devices like those used in natural science. Here stochastic programming, taken as the class of chance-constrained optimization models and methods, takes on great significance. Our major concern is how to formulate those problems that are generally ill-defined and give rise to more significant difficulties than those involved in the investigation and optimization of their models. Existence and uniqueness (in a sense) conditions are provided for solutions (optimization) of suitable models. Implementation procedures are proposed and their convergence statements are proved. Consideration is given to the properties of solutions and objective functionals in probability models, with emphasis on stability problems. Specific models are provided for the optimization of systems under incomplete information. The concluding part of this monograph deals with optimization models on fuzzy sets and in function spaces. This monograph builds on the courses of lectures the author has delivered to students and post-graduates of the State University in Leningrad (St. Petersburg), from 1966 ("Stochastic Programming") to this date ("Mathematical Decision Theory").
Chapter 1. Risk and Uncertainty in the Complex Systems

1.1 Uncertainty and Probability in the Planning and Management Problems for Complex Systems
Nowadays the deterministic approach prevails in investigation and optimization for purposes of planning, designing and managing complex systems in economy, technology and defense. Assuming that all classes of initial data are unambiguous, the deterministic approach ignores the duality of real complex systems which, on the one hand, develop under objective laws and thus can be viewed as deterministic, and on the other hand accomplish their development in the form of a tendency constantly disturbed by random factors, i.e. they possess probability properties in that their further development cannot be predicted with certainty. Ignoring the probability properties of complex systems may lead to extremes in applications of the deterministic approach, such as a tendency to completely formalize the process of optimal planning and management, to provide the maximum detail in mathematical models, and to maximize on this basis the accuracy of an optimal solution. These factors may give rise to misconceptions of optimal planning, design and management of complex systems and of rational directions of their development.

We shall use the best-known linear programming model to illustrate the miscalculations which may arise where the probability properties of complex systems are ignored. Suppose we have a complex system described by $n$ linear technological methods, where $A = (a_{ij})$ is an $m \times n$ matrix of technological methods, $C$ is an $n$-dimensional vector of profits from the use of technological methods with unit intensity, and $b$ is an $m$-dimensional vector of demand and supply. If in the conditions of the problem the values of the parameters $a_{ij}$, $b_i$, $c_j$, $i = 1, 2, \dots, m$, $j = 1, 2, \dots, n$ are defined in one way or another, the current optimal planning problem becomes
$$\sum_{j=1}^{n} c_j x_j \to \max, \quad \sum_{j=1}^{n} a_{ij} x_j = b_i, \quad x_j \ge 0, \quad i = 1, 2, \dots, m;\ j = 1, 2, \dots, n. \tag{1.1}$$
The solution of problem (1.1) is designated $\bar X = (\bar x_1, \dots, \bar x_n)$. In the overwhelming majority of cases the values of the parameters $a_{ij}$, $b_i$, $c_j$ do not coincide with the true values of the parameters $\tilde a_{ij}$, $\tilde b_i$, $\tilde c_j$ in the planning period. Let $I_1$ be the index set of products produced by an economic system, and let $I_2$ be the index set of ingredients consumed by this system. In the general case, we assume that some products $i \in I_1$ are overproduced by the economic system, with the resulting shortage of some kinds of ingredients $i \in I_2$, $\sum_{j=1}^{n} \tilde a_{ij}\bar x_j > \tilde b_i$, whereas the other products $i \in I_1$ are underproduced, with the resulting excess of the other kinds of ingredients $i \in I_2$, $\sum_{j=1}^{n} \tilde a_{ij}\bar x_j < \tilde b_i$.

Further, $q_i^-$ denotes the losses, $i \in I_1$, due to storage and sale at reduced prices of the $i$-th product and the losses, $i \in I_2$, due to a shortage of the $i$-th ingredient; $q_i^+$ denotes the losses, $i \in I_1$, due to a shortage of the $i$-th product and the losses, $i \in I_2$, due to surpluses of the $i$-th ingredient. Then we have
$$y_i^+ - y_i^- = \tilde b_i - \sum_{j=1}^{n} \tilde a_{ij}\bar x_j, \quad y_i^+, y_i^- \ge 0, \quad y_i^+ y_i^- = 0, \quad i = 1, 2, \dots, m. \tag{1.2}$$
If the losses due to the above discrepancy are linearly dependent on its magnitude, the actual profit is
$$\sum_{j=1}^{n} \tilde c_j \bar x_j - \sum_{i=1}^{m} q_i^+ y_i^+ - \sum_{i=1}^{m} q_i^- y_i^-. \tag{1.3}$$
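The bookkeeping in (1.2)-(1.3) is easy to reproduce numerically. The sketch below uses invented two-activity data (none of the numbers come from the text) to compute the discrepancies $y_i^+, y_i^-$ and the resulting actual profit for a fixed plan.

```python
import numpy as np

# Hypothetical realized data (not from the book): 2 constraints, 2 activities.
A_true = np.array([[2.0, 1.0],   # realized technological coefficients a~_ij
                   [1.0, 3.0]])
b_true = np.array([10.0, 12.0])  # realized right-hand sides b~_i
c_true = np.array([4.0, 5.0])    # realized unit profits c~_j
q_plus = np.array([1.5, 2.0])    # unit losses q_i^+
q_minus = np.array([0.5, 1.0])   # unit losses q_i^-

x_plan = np.array([3.0, 2.0])    # plan computed earlier from the nominal data

# Discrepancy (1.2): y+ - y- = b~ - A~ x,  y+, y- >= 0,  y+ * y- = 0
d = b_true - A_true @ x_plan
y_plus = np.maximum(d, 0.0)
y_minus = np.maximum(-d, 0.0)

# Actual profit (1.3): realized revenue minus the losses caused by the discrepancy
profit = c_true @ x_plan - q_plus @ y_plus - q_minus @ y_minus
print(y_plus, y_minus, profit)
```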
If the values of $q_i^+$, $q_i^-$, $i = 1, 2, \dots, m$ are great and $\tilde c_j$ differs significantly from $c_j$, $j = 1, 2, \dots, n$, the plan $\bar X = (\bar x_1, \dots, \bar x_n)$ must be replaced by an optimal one which takes into account the uncertainty of the values of the parameters in the problem. Such a plan can be found in terms of a two-person game in which nature chooses its strategies $(A, b, c) = \omega$, $\omega \in \Omega$, from the set of feasible states $\Omega$ and a decision-maker seeks a strategy $X$. Such a game against nature is considered as a zero-sum two-person game and thus corresponds to the choice of a strategy by the decision-maker based on the maximin criterion. This criterion takes into account the worst conditions for the development of the system, whose realization is usually unlikely, and takes no account of the behavior of the system in more favorable conditions. The probabilistic approach seems to be the most consistent approach to the problem of allowing for risk and uncertainty in the solution of optimization problems. Much recognition has been granted to the frequency concept of probability (the objective probability concept), which is applicable to mass events and is based on the assumption that mass events have an objective characteristic that is called probability and can
be approximately measured by frequency. Sequential application of this concept to the optimization of complex systems requires that the solution obtained should be applied universally in conditions of the same type. The concept of mathematical probability (a nonnegative, additive, normed measure) can and must be used to construct optimization models for complex systems in a significantly wider class of cases than that which provides grounds for application of the concept of objective probability. In the solution of practical problems, probability characteristics are determined in non-mass cases either by experts or by some induction rules (e.g., the principle of maximum entropy, the invariance principle).
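To make the contrast between the maximin criterion discussed above and a probability-weighted choice concrete, here is a minimal sketch with a hypothetical payoff table over three states of nature; the plans, payoffs and state probabilities are all assumptions, not data from the text.

```python
import numpy as np

# Rows: candidate plans, columns: states of nature (hypothetical payoffs).
payoff = np.array([[8.0, 6.0, 1.0],
                   [5.0, 5.0, 4.0]])
probs = np.array([0.5, 0.3, 0.2])   # assumed state probabilities

maximin_choice = np.argmax(payoff.min(axis=1))   # guards against the worst state
expected_choice = np.argmax(payoff @ probs)      # uses the probabilistic information
print(maximin_choice, expected_choice)           # here: plan 1 vs plan 0
```

The two criteria pick different plans: the maximin rule sacrifices expected profit to protect against the unlikely worst state, which is exactly the drawback noted in the text.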
1.2 Probabilistic Approaches to Complex Systems
In modeling complex systems, one has to allow for uncertainty if it is interpreted to mean ignorance of the actual values of system parameters. The main instrument devised to allow for uncertainty is induction. Historically, although elements of inductive logic can be found in Aristotle's works, Bacon and Mill are considered to be the pioneers of classical inductive logic. The mathematization of inductive logic began in the twentieth century on the basis of the mathematical probability concept. Probabilistic logic associates statements not only with true or false values, but also with intermediate values, i.e. probabilities that statements or hypotheses are true. This, however, does not contradict the fact that a hypothesis can be either true or false as it stands, since the value of probability characterizes the relation of this hypothesis to reality not directly, but through the other statements reflecting the knowledge we have available at a given time. The quantitative definition of the probability of some statements with respect to other statements presupposes two elements: the rule for obtaining the initial distribution of probabilities of statements with respect to available knowledge (the rule of induction), and the rules for calculating the probabilities of composite hypotheses from the probabilities of their constituent statements. Induction rules are still the subject of discussion and are treated in different ways by representatives of different schools of probabilistic logic, whereas the rules for calculating probabilities of composite hypotheses are enforced in all systems of probabilistic logic by employing the calculus of probabilities. The rules for calculating probabilities derive their source from Kolmogorov's system of axioms [23]. The diversity of probabilistic logic systems stems from the diversity of induction rules. At present, probabilistic logic remains too immature to adequately allow for uncertainty in the parameter values of complex systems. The concept of subjective probability is progressing within the framework of decision theory, which is crucial for the theory and practice of modeling complex systems. De Finetti was the first to develop the subjective probability concept both consistently and comprehensively. In the middle of the twentieth century mathematical methods and, specifically, methods of mathematical statistics found wide application
in production planning and management. The theory of statistical decisions is concerned not only with the classical criteria of mathematical statistics (minimum variance, maximum likelihood), but also with economic criteria. A. Wald considered two types of decisions: minimax and Bayesian. Decision theory was initially developed to provide a framework for the Bayesian theory of statistical decisions. The supporters of the classical Neyman-Pearson approach defined mathematical statistics as the science of methods for the treatment of experimental data with a view to studying the laws of random mass events. On the other hand, defenders of the Bayesian theory of statistical decisions defined it as the science of decision-making under uncertainty on the basis of experimental data. In his works on decision theory, Savage provided rationales for the use of prior probability distributions in statistical decision problems. More recently it became clear that decision theory can be viewed not only as the basis for statistics, but also as a normative theory for decision-making under uncertainty, as a logic of such decisions. Decision theory derives its sources from the theory of games, the theory of statistical decisions, and the development of the subjective probability concept. The Bayesian approach to modeling complex systems has led to a new understanding of the organization of work in management science and operations research. Now it is possible for OR scientists to formulate and solve a problem in cooperation with decision makers. Such cooperation provides an example of how scientists and managers can unite their efforts. The concepts of logical and subjective probability provide a framework for investigating the probability of an individual event. This problem, however, is not investigated in terms of objective probability. Based on Kolmogorov's axiomatics (the concept of objective probability), the modern calculus of probabilities is generally applicable to all three probability concepts. Instead of uncertainty, as defined above, only statistical certainty is studied in terms of objective probability. In the case of uncertainty we are faced with non-predictability, whereas in the case of statistical determinism we have theoretically complete predictability, which in practice appears to be approximate. In fact, the objective probability concept is concerned not with uncertainty but with certainty, though only statistical. The concept of mathematical probability provides the requisite means of quantitatively allowing for uncertainty, and extremal probability models are adequate models for complex systems under uncertainty. The probabilistic characteristics of models for complex systems under uncertainty can be determined with the help of experts using some inductive principles and statistical methods. In this case, the methods of the Bayesian statistical decision theory allow the use of an optimality principle via sequential decision-making.
1.3 Basic Indications for Classification of Stochastic Programming Problems
Almost any problem in applied mathematics can be assigned to one of the following two classes. One class comprises "descriptive" problems, in which mathematical methods are used to treat information on the phenomenon studied and some laws of that phenomenon are deduced from other laws. The other class comprises "optimization" problems, in which an optimal solution is selected from the set of feasible solutions. Operations research (OR) problems more often than not belong to the latter class. In applied mathematics, problems can also be classified according to other indications. In particular, it is convenient to distinguish between deterministic and stochastic problems. The solution of the latter gave rise to a special division of mathematics, the theory of probability. Probabilistic methods have long been used to solve descriptive problems. Stochastic optimization problems were developed over the last 30 or 35 years. This also applies to stochastic versions of optimal programming problems. Optimal stochastic programming, however, is an important branch of applied mathematics, because decisions are actually made under uncertainty. It is also clear that stochastic programming problems are much more challenging than the corresponding deterministic variants. Therefore, in examining optimal stochastic programming problems, one cannot expect to quickly obtain sufficiently general and effective results. From the preceding it may be seen that further developments in stochastic programming require systematization of the previously obtained results and a consistent exposition of their content in the form of a unified mathematical theory. The chapters that follow make an attempt in this direction. In the majority of practical mathematical programming problems, the values of the parameters are subject to variability. The division of mathematical programming concerned with the theory and methods of solving conditional extremal problems under incomplete information on the parameters of the conditions of the problem is called stochastic programming. Also, we sometimes find under this heading conditional extremal problems with deterministic conditions, where it is worthwhile to seek a solution in the form of a random vector distribution. Such problems often arise in recurring situations, where constraints must be satisfied "in the mean" (in a sense) and only the "mean" effect of the decisions taken is of interest. From the above discussion it follows that the subject matter of stochastic programming is constituted by conditional extremal problems in which the parameters of the conditions and/or their constituent solutions are random variables. In stochastic programming, serious difficulties arise even at the stage of setting up a problem, where one has to allow for the subtle situations arising in prediction, design and management under risk or uncertainty. In actual practice, stochastic programming is used to solve problems of two types. A special feature of problems of one type is that statistical characteristics are predicted
for the behavior of a set of extremal systems that are identical in one way or another. The part of stochastic programming concerned with such problems is called "passive stochastic programming". Models of the other type ensure that methods and algorithms are developed for planning and management under risk or uncertainty. The corresponding section of stochastic programming is called "active stochastic programming". The informational structure, defined largely by the sequence of observations and decisions, is crucial for the construction of management models under risk or uncertainty. Conceptually, the setting of a problem determines whether its solution is in pure or mixed strategies. Solutions in pure strategies are called decision rules and solutions in mixed strategies are called decision distributions. The solution of a stochastic programming problem is governed either by prior information, i.e. distribution characteristics or a sampling of feasible values of the parameters in the conditions, or by posterior information, i.e. observations of a specific realization of the parameters in the conditions of the problem. In stochastic programming problems, the distribution characteristics of the random variables can be specified (risk) or unknown (uncertainty). The models of stochastic programming problems are distinguished by the solution quality index and by the definition of the concept of a feasible solution to the problem. The chapters that follow deal with various models of stochastic programming problems and methods for solving such problems. The existence, optimality and stability of a solution are the focus of much research.
Chapter 2. Chance-Constrained Stochastic Programming Problems

2.1 Statement and Qualitative Analysis of Chance-Constrained Stochastic Programming Problems
This chapter deals with chance-constrained stochastic programming problems. Such problems are addressed where the situation to be modeled (planning or management issues) becomes meaningful only if the conditions of the problem are allowed to be violated on a set of elementary events (states of nature). It is convenient to classify chance-constrained stochastic programming problems according to the method of specifying solutions, the form of the objective functional in the problem, and the nature of the specification of the conditions in the problem. In chance-constrained stochastic programming problems, the solution can be defined as a deterministic or a random vector (in pure or mixed strategies) depending on the deterministic or probabilistic characteristics of the parameters of the problem. The objective functions are usually taken to be the following functionals (linear problems):
— the mean value of a linear form,
$$\overline{CX} = E\bigl(C(\omega), X\bigr),$$
for the E-model;
— the variance of a linear form, $E\bigl(C(\omega)X - \overline{CX}\bigr)^2$ or $E\bigl(C(\omega)X - C^0X^0\bigr)^2$, where in general $\overline{CX} \ne C^0X^0$, for the V-model;
— the probability that the linear form will exceed some previously specified threshold,
$$P\{C(\omega)X > C^0X^0\}, \qquad \text{or} \qquad f \to \min, \quad P\{C(\omega)X < f\} = \beta,$$
for the P-model.

In the general case, chance constraints in stochastic problems are of the form
$$P\{g_i(X) \le 0\} \ge \alpha_i, \quad 0 < \alpha_i < 1, \quad i = \overline{1,m},$$
where $g_i(X)$ are some random functions restricting the domain of definition of the objective function of the problem. In formalizing a particular problem, probabilistic conditions can be imposed on one or more constraints. For example, for linear stochastic programming problems with chance constraints, the domains of definition of the objective functions can be given as follows (a simulation check of a condition of type 2) is sketched after the list):

1) $P\bigl\{\sum_{j} a_{ij} x_j \ge b_i\bigr\} \ge \alpha_i$, $0 < \alpha_i < 1$, $i = 1, 2, \dots, m$;
2) $P\{AX \ge B\} \ge \alpha$, $0 < \alpha < 1$;
3) $P\bigl\{\sum_{j} a_{i_k j} x_j \ge b_{i_k},\ i_k \subset I_k\bigr\} \ge \alpha(k)$, $0 < \alpha(k) < 1$, $k = 1, 2, \dots, s$, $\bigcup_{k=1}^{s} I_k = \{1, 2, \dots, m\}$.
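For a fixed plan, a joint condition of type 2) can be checked approximately by simulation. The sketch below assumes independent normal perturbations of the technology matrix; the matrix, right-hand side and reliability level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([2.0, 1.0])                 # candidate plan
B = np.array([3.0, 2.5])                 # right-hand side
alpha = 0.9

A_mean = np.array([[1.5, 1.0],
                   [1.0, 1.5]])
A_std = 0.2                              # assumed i.i.d. noise on every coefficient

n_samples = 100_000
A_samples = A_mean + A_std * rng.standard_normal((n_samples, 2, 2))
lhs = A_samples @ x                      # shape (n_samples, 2)
joint_ok = np.all(lhs >= B, axis=1)      # both rows satisfied simultaneously
print(joint_ok.mean() >= alpha, joint_ok.mean())
```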
We will look more closely at a chance-constrained stochastic LP problem of the form
$$\max Z = \max cx, \tag{2.1}$$
$$P\Bigl\{\sum_{j=1}^{n} a_{ij} x_j \le b_i\Bigr\} \ge \alpha_i, \quad 0 < \alpha_i < 1, \quad i = \overline{1,m}, \tag{2.2}$$
$$x_j \ge 0, \quad j = \overline{1,n}, \tag{2.3}$$
where $(a_{ij})$ is a deterministic matrix, and the vectors $b = (b_i)$ and $c = (c_j)$ are random. Problem (2.1)-(2.3) can be reduced to a deterministic LP problem $\max Z = \max \bar c x$, $Ax \le \bar b$, $x \ge 0$, where $\bar c$ is the mean value of the vector $c$, while the vector $\bar b$ can be determined as follows. Let $\varphi(b_1, b_2, \dots, b_m)$ be the joint distribution density of the components of the random vector $b(\omega)$ and let $\varphi_i(b_i)$ be the distribution density of the $i$-th component of the vector $b(\omega)$, i.e.
$$\varphi_i(b_i) = \int_{-\infty}^{+\infty}\!\!\cdots\!\int_{-\infty}^{+\infty} \varphi(b_1, \dots, b_m)\, db_1 \cdots db_{i-1}\, db_{i+1} \cdots db_m.$$

The variance of the linear form $C(\omega)X$ is
$$E\bigl(C(\omega)X - \bar C X\bigr)^2 = \sum_{j,k=1}^{n} E(c_j - \bar c_j)(c_k - \bar c_k)\, x_j x_k = \sum_{j,k=1}^{n} V_{jk}\, x_j x_k = X^T V X.$$
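The identity $\mathrm{Var}(C(\omega)X) = X^T V X$ just obtained is easy to confirm by sampling; the mean vector, covariance matrix and plan below are arbitrary illustrative choices, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
c_mean = np.array([3.0, 1.0, 2.0])
V = np.array([[0.5, 0.1, 0.0],
              [0.1, 0.4, 0.2],
              [0.0, 0.2, 0.6]])          # covariance matrix of C(omega)
x = np.array([1.0, 2.0, 0.5])

c_samples = rng.multivariate_normal(c_mean, V, size=200_000)
empirical_var = (c_samples @ x).var()
print(empirical_var, x @ V @ x)          # the two numbers agree up to sampling error
```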
Assumption 2.2. The vector $C(\omega)$ has a normal distribution with mean $\bar C$ and covariance matrix $V$. Then $C(\omega)X$ has a normal distribution with mean $\bar C X$ and variance $X^T V X$. Hence
$$\Phi_X(Y(X)) = \Phi_X(AX - b) = \alpha, \qquad Y(X) = AX - b.$$
If the components $b_i$ of the vector $b$ are independent random variables, we have
$$P\{b \ge \bar b\} = \prod_{i=1}^{m} P\{b_i \ge \bar b_i\} = \prod_{i=1}^{m}\bigl[1 - P(b_i < \bar b_i)\bigr] = \prod_{i=1}^{m}\bigl[1 - \Phi_{b_i}(\bar b_i)\bigr], \tag{2.58}$$
where $\Phi_{b_i}$ is the distribution function of $b_i$; the constraints are completed by the requirement
$$X \ge 0. \tag{2.59}$$
Problem (2.56)-(2.59) becomes a convex programming problem when the distribution density of each constraint vector component $b_i$ obeys the normal law, the Weibull distribution, the uniform distribution or the gamma distribution (following the above procedure).
But the left-hand sides in (2.58) are not concave functions. Following [13], one may replace the conditions (2.58) by the equivalent inequalities $\sum_i \ln\bigl[1 - \Phi_{b_i}(\bar b_i)\bigr] \ge \ln\alpha$, which, for all of the above distributions, allows one to pass to equivalent deterministic problems in which the left-hand sides of the constraints are linear or concave functions. In chance-constrained stochastic programming problems, the probability that the conditions of the problem will be satisfied is assumed to be known in advance. In practical problems, however, one does not always have adequate information to determine such a probability a priori. In this respect, one may be interested in the model studied in [13], [37], [38].
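A quick numerical confirmation of the replacement used above: because the logarithm is monotone, the product condition and its logarithmic form accept and reject exactly the same points. The normal marginals and the trial values below are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.8
means = np.array([10.0, 12.0])
stds = np.array([1.0, 2.0])
t = np.array([8.5, 9.0])                 # trial values of sum_j a_ij x_j

tails = 1.0 - norm.cdf(t, loc=means, scale=stds)   # P{b_i >= t_i}
prod_form = np.prod(tails) >= alpha
log_form = np.sum(np.log(tails)) >= np.log(alpha)
print(prod_form, log_form)               # always identical (log is monotone)
```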
We shall consider a chance-constrained stochastic programming problem of the form
$$\max M h(X), \tag{2.60}$$
$$P\Bigl\{\sum_{j=1}^{n} a_{ij} x_j \le b_i\Bigr\} \ge \beta_i, \quad i = \overline{1,m}, \tag{2.61}$$
$$X \in H, \tag{2.62}$$
where $X = (x_1, x_2, \dots, x_n)$ is a random vector-plan whose components $x_j$ are independent and normally distributed random variables; $b_i$ is an arbitrary random variable with distribution function $F_i(t)$; $A = \{a_{ij}\}$ is a fixed $m \times n$ matrix; $\beta_i$ is a fixed number, $\tfrac{1}{2} \le \beta_i \le 1$, $i = \overline{1,m}$; $h(X)$ is some functional defined over the space $\mathcal{X}$ of random vectors; $H$ is a subset of $\mathcal{X}$ to which there corresponds a set $\bar H$ in the space of the parameters $(m_j, \sigma_j)$. According to the central limit theorem, the distribution function of the properly normed sum
$$\frac{\sum_{k=1}^{n} (\xi_k - E\xi_k)}{\sqrt{\sum_{k=1}^{n} D\xi_k}}$$
of random variables $\xi_k$ converges (under some conditions) to the normal distribution function
$$\Phi(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-z^2/2}\, dz.$$
More precisely, if, for example, $|\xi_k| \le C < \infty$ and $D\xi_k \ge \sigma > 0$, then the convergence holds.
The problem of minimizing the functional $\bar C X$ subject to $P\{A(\omega)X \le b(\omega)\} = 1$, $X \ge 0$, is equivalent to the problem
$$\min_{X \in S} \bar C X,$$
where $S$ is the set of plans $X \ge 0$ satisfying $A(\omega)X \le b(\omega)$ for all realizations $\omega$. In some cases the convex polyhedron $S$ can be given by a system of linear inequalities. If the number of realizations $\omega$ is finite, then $S$ is determined by the inequality system $X \ge 0$, $A(\omega_r)X \le b(\omega_r)$, $r = 1, 2, \dots, R$, where $R$ is the number of feasible realizations. Because of the large number of constraints, the solution of such a problem involves significant computational difficulties. If only the vector $b(\omega)$ is random, then the deterministic equivalent problem is obtained as follows. Let $\underline{b}$ be the vector whose $i$-th coordinate is determined from the relation $\underline{b}_i = \inf_\omega b_i(\omega)$. Then the set $\underline{S} = \{X : AX \le \underline{b},\ X \ge 0\}$ is a subset of the set $S$.

It is standard practice for rigorously stated problems to take as a realization the mean value of the random parameters. To establish the permanence of the solution in the mean $\bar X$, use is made of the equivalence of an LP problem and a zero-sum two-person game. If the game has the payoff matrix
$$M = \begin{pmatrix} 0 & A & -b \\ -A^T & 0 & C \\ b^T & -C^T & 0 \end{pmatrix}_{(m+n+1)\times(m+n+1)}$$
and there is an optimal mixed strategy $(Y_0, X_0, t)$ such that $t > 0$, then $X^* = X_0/t$ and $Y^* = Y_0/t$ provide a solution to the pair of dual linear programming problems.
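The displayed game matrix is straightforward to assemble for a concrete LP. The sketch below builds it for a hypothetical 2x2 problem and checks the skew-symmetry that makes the game symmetric; the recovery rule $X^* = X_0/t$, $Y^* = Y_0/t$ is quoted from the text, not re-derived here.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])               # m x n technology matrix (hypothetical)
b = np.array([4.0, 5.0])
C = np.array([2.0, 3.0])
m, n = A.shape

M = np.block([
    [np.zeros((m, m)),  A,                -b.reshape(m, 1)],
    [-A.T,              np.zeros((n, n)),  C.reshape(n, 1)],
    [b.reshape(1, m),  -C.reshape(1, n),   np.zeros((1, 1))],
])
print(M.shape)                            # (m+n+1, m+n+1)
print(np.allclose(M, -M.T))               # True: the game matrix is skew-symmetric
# If (Y0, X0, t) is an optimal mixed strategy of this symmetric game with t > 0,
# then X0/t and Y0/t solve the primal and dual LPs, as stated in the text.
```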
Let $EM$ be the matrix of the game corresponding to the problem in the mean and let $\bar Z = (\bar Y, \bar X, \bar t)$ be a solution of this game such that $\bar t > 0$. Then the solution in the mean $\bar X^*$ is derived from the relation $\bar X^* = \bar X / \bar t$. For $\bar X^*$ to be permanent, it is necessary and sufficient that $A(\omega)\bar X^* \le b(\omega)$ for any $\omega \in \Omega$, which is equivalent to
$$A(\omega)\bar X - b(\omega)\bar t \le 0.$$
In [25] one finds a computational procedure to verify the last condition for any $\omega \in \Omega$. Let $g(M) = M\bar Z$. The first $m$ components of the vector $g(M)$ are of the form $a_i(\omega)\bar X - b_i(\omega)\bar t$, $i = 1, 2, \dots, m$. Suppose that for each $i$ the set $\Omega_i$ of realizations $\omega(a_i(\omega), b_i(\omega))$ forms a convex polyhedron. Solving $m$ problems of the form $\max_{\omega \in \Omega_i} g_i(M) = \bar g_i$, we obtain a sufficient condition for the permanence of $\bar X^*$, namely $\max_i \bar g_i \le 0$.

All of the above chance-constrained stochastic programming models provided conditions for constructing a deterministic equivalent of the problem. Below we give a general method of constructing a deterministic equivalent for a wide class of chance-constrained stochastic programming problems. Suppose that in the general case we have the following notations for the probability conditions:

1) $P\{f_i(X) \le 0\} \ge \alpha_i$, $i = \overline{1,m}$;
2) $P\{f(X) \le 0\} \ge \alpha$;
3) $P\{f_{i_k}(X) \le 0,\ i_k \subset I_k\} \ge \alpha_k$, $k = 1, \dots, s$, $\bigcup_{k} I_k = \{1, \dots, m\}$.
Notation 1) admits dependence only between random parameters belonging to the same row; in notation 2), all random parameters can be dependent; in notation 3), the random parameters corresponding to the functions $f_{i_k}$ for distinct $k$ are mutually independent. Suppose $F_{iX}(t)$ is the distribution function of the random variable $f_i(X)$ for a given $X$, and $F_X(t_1, \dots, t_m)$ is the joint distribution function of the random values $f_i(X)$ of the components of the vector $f(X)$ for the given $X$:
$$F_{iX}(t) = \int_{-\infty}^{+\infty} \cdots \int_{-\infty}^{t} \cdots \int_{-\infty}^{+\infty} dF_X(f_1, \dots, f_i, \dots, f_m).$$
Let $g(X) = \{g_1(X), \dots, g_m(X)\}$ be a deterministic vector whose component range is bounded by the range of the corresponding random variable $f_i(X)$: $g_i(X) \in [\inf f_i(X), \sup f_i(X)]$. For each $0 \le F_{iX}(g_i(X)) = P\{f_i(X) \le g_i(X)\} \le 1$, we have
$$F_X(g(X)) = P\{f_1(X) \le g_1(X), \dots, f_m(X) \le g_m(X)\} = P\{f(X) \le g(X)\}.$$
Using the above concepts, we formulate deterministic mathematical programming problems whose solutions coincide with the solutions of the corresponding chance-constrained stochastic problems. Such problems are conventionally called the deterministic equivalents of stochastic programming problems.
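Before turning to the general constructions, here is a minimal sketch of the idea for a single linear constraint with only the right-hand side random and normally distributed: the chance constraint and its quantile (deterministic-equivalent) form accept the same plans. All numbers below are assumed.

```python
import numpy as np
from scipy.stats import norm

alpha_i = 0.95
mu_b, sigma_b = 20.0, 3.0                 # assumed distribution of b_i
a_i = np.array([2.0, 1.5])
x = np.array([4.0, 3.0])

lhs = a_i @ x                             # deterministic left-hand side
threshold = norm.ppf(1.0 - alpha_i, loc=mu_b, scale=sigma_b)   # F_b^{-1}(1 - alpha)

deterministic_ok = lhs <= threshold
probabilistic_ok = 1.0 - norm.cdf(lhs, loc=mu_b, scale=sigma_b) >= alpha_i
print(deterministic_ok, probabilistic_ok)  # the two tests always agree
```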
Let us consider a chance-constrained stochastic programming problem in notation 1):
$$\max h(X), \tag{2.67}$$
$$P\{f_i(X) \le 0\} \ge \alpha_i, \quad i = \overline{1,m}. \tag{2.68}$$
The deterministic equivalent of problem (2.67)-(2.68) is
$$\max h(X), \tag{2.69}$$
$$F_{iX}(g_i(X)) = \alpha_i, \quad i = \overline{1,m}, \tag{2.70}$$
$$g_i(X) \le 0, \quad i = \overline{1,m}, \tag{2.71}$$
where we are required to define the (deterministic) vectors $X$ and $g(X) = \{g_1(X), \dots, g_m(X)\}$. When we have continuous distribution functions, the deterministic equivalent of problem (2.67)-(2.68) becomes
$$\max h(X), \tag{2.72}$$
$$F_{iX}^{-1}(\alpha_i) \le 0, \quad i = \overline{1,m}. \tag{2.73}$$
Consider now the problem in notation 2):
$$\max h(X), \tag{2.74}$$
$$P\{f(X) \le 0\} \ge \alpha. \tag{2.75}$$
(2.77)
g(X) < 0,
(2.78)
where we are required to calculate deterministic vectors, X and g(X). T h e o r e m 2.1. [35] If the joint distribution function F(X) of components of a, random vector f(X) = {fi(X),..,, fm(X)}, is continuous for each X, then problem (2.76)—(2.78) is a deterministic equivalent of the stochastic problem (2.74)—(2.75). Proof. If we prove that both problems have the same domain of definition, we prove the theorem, since the objective functions of both problems coincide. Let g(X) satisfy the conditions of problem (2.76)—(2.78). Then P{f(X) a.
2.4. EXAMPLES
OF STOCHASTIC
PROGRAMMING
PROBLEMS
23
For each X, we divide the numbers of the vector f(X) components into three sets: ! x'i Ix^fpi where i G I$] if f,{X) is a deterministic variable; i 6 1$ if f)(X) is a random variable with a negative upper bound, /,(A') < J,{X) < 0; finally, i E / f if fi(X) is a random variable for which P{fi(X) > 0} > 0. The plan of problem (2.74)-(2.75) defines the deterministic vector
g(X) =
fi(X)
,ieq(1)
f,W
,i€l(2) x ,ielx
(3)
We have P{f(X)) — coal ash content at t h e i-th mine operating according to t h e j - t h project when UJ random conditions of t h e problem are realized; fcy(w) — annual wages fund for t h e i-th mine operating according to t h e j - t h project when w random natural factors are realized; d — t h e annual coal o u t p u t for t h e whole association as required by a higher level plan (or as determined by demand); 3 — admissible coal ash content for t h e mining association as a whole; k — t h e total annual wages fund for t h e association; ad,ot3,ak -probabilities t h a t t h e restrictions placed by a higher organization on satisfaction of demand, coal ash content and wages fund are observed. Using t h e above notation and terminology, t h e coal o u t p u t planning model for the mining association can be represented as [ m n,
\
£|£Ece(w)*tf|->min> P
p
(
m
n,
i
J E E 4 > H z , ; , > 4 >ad, E E ^ » u < s
(2-79)
>«.,
(2.80) (2.81)
2.4. EXAMPLES
OF STOCHASTIC [|
PROGRAMMING
m 77i n, n,
PROBLEMS
1
PJ££M")*o «*. (.. = 1 . 7 = 1 J P lT.ll
25
kij{^)xhak,
(2.82)
n, 71,
.,m, = l1,2,. £ z < , = l>t = 1 2,...,m,
(2.83)
1=1
{ 0 , l } , j = 1,l ,...,n,; .., n t ; i = 1,1,. = {0, ...,m. xs,j ., m. v = The solution of this problem is a random vector-plan for coal mining in the subsequent period. In this case, the future development planning of the mining association incorporates actual realizations of natural factors. Suppose the parameters dt},st),kij are random variables that are mutually independent and normally distributed. Let c,j, dij, 5,-j, ktJ, a2(ctJ), a2(dtj), w4>rf.
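A candidate 0-1 plan for a model of the form (2.79)-(2.83) can be screened against the chance constraints by simulation before any exact method is attempted. The two-mine, two-project data below are invented for illustration, and the ash constraint is taken in the simple summed form used in (2.81).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 2, 2                                      # mines and projects per mine (hypothetical)
d_mean = np.array([[5.0, 7.0], [6.0, 8.0]])      # mean annual output d_ij
s_mean = np.array([[0.20, 0.25], [0.22, 0.18]])  # mean ash content s_ij
k_mean = np.array([[3.0, 4.0], [3.5, 4.5]])      # mean wages fund k_ij
noise = 0.05                                     # common relative standard deviation, assumed

x = np.array([[1, 0], [0, 1]])                   # candidate plan: one project per mine
d_req, s_adm, k_tot = 12.0, 0.45, 8.5            # d, s-bar, k-bar (hypothetical)
alpha_d = alpha_s = alpha_k = 0.9

N = 50_000
def sample(mean):
    """Draw N independent normal realizations around the given mean table."""
    return mean * (1.0 + noise * rng.standard_normal((N, m, n)))

output = (sample(d_mean) * x).sum(axis=(1, 2))
ash = (sample(s_mean) * x).sum(axis=(1, 2))
wages = (sample(k_mean) * x).sum(axis=(1, 2))

print(np.mean(output >= d_req) >= alpha_d,   # (2.80)
      np.mean(ash <= s_adm) >= alpha_s,      # (2.81)
      np.mean(wages <= k_tot) >= alpha_k)    # (2.82)
```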
where
$$\limsup_{j\to\infty} s(\xi^{(j)}, \alpha^{(j)}) = \{x : x_{j_k} \to x,\ x_{j_k} \in s(\xi^{(j_k)}, \alpha^{(j_k)})\},$$
$$\liminf_{j\to\infty} s(\xi^{(j)}, \alpha^{(j)}) = \{x : x_j \to x,\ x_j \in s(\xi^{(j)}, \alpha^{(j)}) \text{ except perhaps for a finite number of points } x_j\}.$$
Let $F_{g(x,\xi)}(t)$ be the distribution function of $g(x,\xi)$; then $P\{g(x,\xi) \le 0\} = F_{g(x,\xi)}(0) = G(x)$, and the condition $P\{g(x,\xi^{(j)}) \le 0\} \ge \alpha^{(j)}$ is equivalent to $x \in s(\xi^{(j)}, \alpha^{(j)}) = \{x \mid G^{(j)}(x) \ge \alpha^{(j)}\}$. The following results were obtained for the model studied.

Theorem 2.2. Let $g(x,\omega)$ be continuous and suppose that for any $\xi$ the vector $g(x,\xi)$ has a continuous distribution function. Then $\lim_{j\to\infty} G^{(j)}(x) = G(x)$ when $\xi^{(j)} \to \xi$.

Theorem 2.3. Suppose that: 1) the conditions of Theorem 2.2 are satisfied; 2) there exists a sequence $\{x_j\}$ with $G^{(j)}(x_j) \ge \alpha^{(j)}$ and $x_j \to x$; 3) $\xi^{(j)} \to \xi$, $\alpha^{(j)} \to \alpha$. Then $x$ satisfies $G(x) \ge \alpha$.

From Theorem 2.3 we immediately have the inclusion
$$\limsup_{j\to\infty} s(\xi^{(j)}, \alpha^{(j)}) \subset s(\xi, \alpha).$$
For the other inclusion, it suffices to impose an auxiliary weak condition on $G(x)$.

Theorem 2.4. For any $x$ with $G(x) = \alpha$, let there be sequences $\{x_k\}$ and $\{\varepsilon_k\}$ such that $G(x_k) = \alpha + \varepsilon_k$, $\varepsilon_k > 0$, $\varepsilon_k \to 0$, $x_k \to x$. Under the conditions of Theorem 2.2, for any $x$ from $s(\xi,\alpha)$ there then exists a sequence $\{x_j\}$ with $G^{(j)}(x_j) \ge \alpha_j$ and $x_j \to x$, or, equivalently,
$$\liminf_{j\to\infty} s(\xi^{(j)}, \alpha^{(j)}) \supset s(\xi, \alpha).$$

The constructive proof (i.e., the method of constructing $x_j$ in terms of $x_k$) is also provided. Combining Theorems 2.3 and 2.4, we obtain the continuity theorem for the set of feasible solutions to problem (1).

Theorem 2.5. Let $\xi^{(j)}$ be such that $\xi^{(j)} \to \xi$, $\alpha^{(j)} \to \alpha$, and 1) $g(x,\xi)$ is continuously distributed for all $x$;
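The convergence $G^{(j)}(x) \to G(x)$ in Theorem 2.2 is easy to visualize for a simple choice of $g$. The sketch below takes $g(x,\xi) = \xi^T x - 1$ with normal $\xi^{(j)}$ whose parameters approach those of $\xi$; both the function $g$ and the distributions are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import norm

x = np.array([0.6, 0.8])
mu, sigma = np.array([0.5, 0.5]), 0.4      # limiting distribution of xi (assumed)

def G(mu_j, sigma_j):
    # g(x, xi) = xi @ x - 1 is normal, so P{g <= 0} is available in closed form.
    mean_g = mu_j @ x - 1.0
    std_g = sigma_j * np.linalg.norm(x)
    return norm.cdf(-mean_g / std_g)

limit = G(mu, sigma)
for j in range(1, 6):
    mu_j = mu + 0.5 / j                    # xi^(j) -> xi as j grows
    sigma_j = sigma * (1.0 + 1.0 / j)
    print(j, G(mu_j, sigma_j), limit)      # G^(j)(x) tends to G(x)
```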