INTEGER PROGRAMMING Naval Postgraduate School Monterey, California
HAROLD GREENBERG
ACADEMIC PRESS
@
New Y o r k . ...
152 downloads
1818 Views
2MB Size
Report
This content was uploaded by our users and we assume good faith they have the permission to share this book. If you own the copyright to this book and it is wrongfully on our website, we offer a simple DMCA procedure to remove your content from our site. Start by pressing the button below!
Report copyright / DMCA form
INTEGER PROGRAMMING Naval Postgraduate School Monterey, California
HAROLD GREENBERG
ACADEMIC PRESS
@
New Y o r k . London
COPYRIGHT 0 1971, BY ACADEMIC PRESS,INC.
ALL RIGHTS RESERVED NO PART OF THIS BOOK MAY BE REPRODUCED IN ANY FORM, BY PHOTOSTAT, MICROFILM, RETRIEVAL SYSTEM, OR ANY OTHER MEANS, WITHOUT WRITTEN PERMISSION FROM THE PUBLISHERS.
ACADEMIC PRESS, INC.
111 Fifth Avenue, New York,New York 10003
United Kingdom Edition published by ACADEMIC PRESS, INC. (LONDON) LTD.
Berkeley Square House, London W l X 6BA
LIBRARY OF CONGRESS CATALOG
CARD
NUMBER: 73 - 137596
AMS (MOS) 1970 Subject Ciassification: 90C10 PRINTED IN THE UNITED STATES OF AMERICA
Contents
xi
PREFACE
Chapter 1 Introduction,to integer Programming 1 Presentation of the Problem 2 Pilot Scheduling 3 A Quadratic Assignment Problem 4 The Knapsack Problem 5 The Traveling Salesman Problem 6 The Fixed-Charge Problem 7 Nonlinear Approximation 8 Dichotomies
Chapter 2
1
2 4 5 6 8 9 10
Linear Programming 1 The General Linear Program 2 Recognition of Optimality 3 The Simplex Method 4 Tableau Form 5 The Inverse Matrix Method 6 Variables with Upper Bounds 7 The Lexicographic Dual Simplex Method vii
13 14 21 27 28 33 38
viii
CONTENTS
Chapter 3 All-Integer Methods 1 Optimality Theory for Integer Programming
2 3 4 5 6 7 8
Chapter 4
Improving a Nonoptimal Solution Equivalent Integer Programs Convergence to Optimality Bounded Variable Problems Negative c, Values The Use of Bounding Forms A Primal Integer Method
Solving Integer Programs by Enumeration
1 A Direct Enumeration Method 2 3 4 5
Chapter 5
Solution to the Integer Program An Accelerated Enumeration A Dynamic Programming Method Knapsack Functions
81 86 91 95 97
Continuous Solution Methods 1 A Continuous Solution Method 2 Improving a Nonoptimal Solution 3 Convergence in the Algorithm 4 Reducing the Continuous Solution to an AllInteger Format 5 Bounding the Determinant Value 6 Bounded Variable Problems 7 The Mixed Integer Problem
Chapter 6
48 49 55 59 62 66 67 72
104 106 111 115 121 123 128
Number Theory Results 1 The Euclidean Algorithm 2 Linear Diophantine Equations 3 Linear Congruences 4 The Solution of a System of Linear Congruences
135 141 145 151
CONTENTS
Chapter 7
Dynamic Programming Solutions 1 A Dynamic Programming Solution 2 Reducing the Number of Congruences 3 An Accelerated Dynamic Programming Solution
Chapter 8
ix
156 162 169
Branch and Bound Procedures 1 A Branch and Bound Method 2 Tightening the Bounds 3 The Mixed Inteyer Problem
177 187 188
AUTHOR INDEX SUBJECT INDEX
193 194
Preface
This book is designed to serve as an introduction to the study of integer programming. The first two chapters contain applications of the subject and a review of linear programming. The integer programming problem is then placed in a parametric context. From that perspective, an understanding of the integer programming process is acquired and the convergence proofs and algorithmic results are developed. The approach is to express the variables of the integer program in terms of parameters and then to vary the parameters until the problem is solved. This flexible framework permits the handling of such previously difficult problems as those in which the variables have upper bounds and helps to establish the relationship between the all-integer and the continuobs solution methods. The book also offers an enumerative procedure for the solution of integer programs. The procedure is developed by means of a dynamic xi
xii
PREFACE
programming format that makes it possible for many solutions to be generated at one time. The enumeration is then accelerated by a method that establishes the variable values leading to the solution of the problem. The book concludes with a branch and bound scheme that eliminates the necessity for large computer storage capacity-a drawback in the early branch and bound methods. I am very grateful to ProfessoI Richard Bellman, who proposed the writing of the book, and to my wife, Eleanor, for her editorial assistance and typing of the manuscript.
INTRODUCTION TO INTEGER PROGRAMMING
1
PRESENTATION OF THE PROBLEM
A great many problems in mathematical programming are solved as linear programs. The usual linear programming problem is: Find xj 2 0 f o r j = 1,2, . . . , n that minimize z when n
1
CjXj
j= 1
= z,
iaijxj=bi,
j = 1
i = l , 2 ,..., m.
The solution to the problem usually results in fractional xi values. The integer programming problem arises when the xj variables are restricted to integer values. To illustrate the range of integer programming, we present examples of practical problems that are amenable to solution within an integer programming format. We also describe mathematical programming problems that can be handled as integer linear programs. (Most of the examples presented were first collected by Dantzig [6].)
2
2
1
INTRODUCTION TO INTEGER PROGRAMMING
PILOT SCHEDULING
A major concern of many airlines is how to schedule their flight crews. A crew member is assigned to a particular flight. For example, Captain Bursting is pilot of Flight 92. It leaves New York City at 112 0 a.m. and arrives in Boston at 12.02 p.m. After arriving in Boston, Captain Bursting goes to the flight operations office to report on his trip and to learn of any changes in his forthcoming flight. He must allow 30 min for this period. Bursting then has 50 min for lunch. He reports to the flight operations office 40 min before the scheduled departure time of his next flight to receive the flight plan, weather reports, air traffic reports, etc. He is scheduled for a flight to leave Boston after 2.02 p.m. The flight could be any one of those shown in the accompanying tabulation. Flight Number
Departure Time
Arrival Time (p.m.)
Arrival Station
121 171 437 759 103
2.30 2.35 3.OO 3.00 3.35
3.25 3.59 4.03 5.44 4.59
New York City Chicago New York City Miami Chicago
(m.)
Meanwhile, Captain Masters is pilot of Flight 230. It leaves Chicago at 8.55 a.m. and arrives in Boston at 11.48 a.m. He is also eligible to fly the same listed flights out of Boston. How should both Bursting and Masters be scheduled ? What we have begun to describe is a combinatorial problem of enormous size. The airline company has 100 flights into and out of Boston every day. How should the incoming and outgoing flights be linked as part of a pilot’s continuous flight schedule ? Before we can even answer the question, we must view the problem in a broader context. Boston connects directly to 25 other cities. A total of 75 cities are in the
2
PILOT S C H E D U L I N G
3
overall network that comprises the flight schedule for the airline company. What is the complete flight schedule for all pilots when all the other cities are included ? Fortunately, the problem can be expressed as an integer program : Find x j = 0 or 1 f o r I = 1, 2 , . . . , n that minimize z when
i:c j x j
= z,
j = 1
2 a i j x j 2 1,
where
j= 1
i = l , 2 ,..., rn,
i stands for a particular flight (number) between two cities,
m is the total number of flights,
jstands for a set of flights that describe a path, n is the total number of possible paths, aij = 1 means flight i appears in pathj, aij = 0 means flight i doesnot appear in path j, cj is the cost of path j , x j = 1 means path j i s flown, and x j = 0 means p a t h j is not flown. The units on the right side of the inequalities cause all flights t o be flown. The greater than or equal sign permits pilots to ride as passengers to other cities to meet their next flights. The costs cj may include per diem rates for pilots, hotel costs, meals, premium pay, etc. If all cj = 1, we minimize the number of pilots needed to fly the schedule. Otherwise, we minimize the total cost. The flights that make up each path (i.e., the aij values) are found by enumerating all combinations of flights that can physically connect and that comply with the airline’s flight regulations. Typical flight regulations for pilots are : (1) The pilot must check in at flight operations 45 min before the
scheduled departure of his first flight of the day. (2) He must allow 30 min check-out time after his plane lands. (3) The minimum stop between flights is 40 min.
4
1
INTRODUCTION TO INTEGER P R O G R A M M I N G
(4) The maximum ground duty time is 5 hr. (5) The minimum stop for food is 50 min. ( 6 ) The maximum duty time is 12 hr. (7) The maximum number of landings allowed in any 24-hr period is 4. (8) Only paths that return to home base within 3 days are permitted.
The total number of combinations of flight paths has been found quite rapidly with the use of computer calculations. Even though a typical schedule often has several thousand paths possible, the problem can be solved as an integer program. 3
A QUADRATIC ASSIGNMENT PROBLEM
The dean at a large western university has heard of mathematical programming. He naturally thinks of using this powerful tool for scheduling classes at the university. He calls in several members of the Department of Operations Research and presents them with the problem. After several minutes of thought, the members suggest that the problem be solved as follows : Find x i j = 0 or 1 that minimize z when
n
where
C x i j = 1 , j=1
i = 1 , 2,..., m,
z is the total scheduling cost, xij = 1 means course iis assigned at time period j , x i j = 0 means course i is not assigned at time j , C i k is the cost of assigning courses i and k at the same time, m is the number of courses, and n is the number of time periods. For a nontrivial solution, m > n. The costs C i k occur when two courses are scheduled for the same time period and students wish to take both courses. If C i k is taken as the number of students who desire to enroll in
4
THE KNAPSACK PROBLEM
5
both course i and course k , then z is the total number of students that cannot be accomodated. The mathematical model is a variation of the quadratic assignment problem [ I 7 ] ;it was developed in the form given above by Carlson and Nemhauser [ 4 ] ,who found local minima. The same model was obtained by Freeman et al. [S] in a slightly different context. We showed in [ I ] ] how to convert the problem to an integer program by making the change of variables n
Yik
=
j= 1
xijxkj
to obtain the equivalent problem: Find yik= 0 or 1 that minimize z when
mi1 5 i=l k=i+l
m- 1
aikyik
5
i=l k=i+l
Yik
=z,
2
nd(d - 1) + dm,, 2
+ Y i j - Y k j 5 1, Y i k + Y k j - Y i j 5 1, yik
yij
Ykj
- Y i k 5 1,
all i < k < j ,
where d = [m/n]and m, 3 m mod n ; [XI being the greatest integer less than or equal to x. When the optimal Y i k are found, the xij are obtained by inspection. For example, if y i k = 1, then xij = 1 and xkj= I for some arbitrary j value. Reference [11] shows how to find the solution to the integer program by enumeration. Any of the methods presented in this book can be used for the same purpose.
4
THE KNAPSACK PROBLEM
We can think of the knapsack problem in two senses. In the first, a given space is to be packed with items of differing value and volume. The problem is to select the most valuable packing. In the second sense,
6
1
INTRODUCTION TO INTEGER PROGRAMMING
a given item is to be divided into portions of differing value and our aim is to find the most valuable division of the item. We can express the knapsack problem as: Find integers xj 2 0 that maximize z when
c 2 n
ZjXj
j= 1
= z,
ajxj I
j=l
L,
where the nj are positive numbers, the aj are positive integers, and L is an integer. The most interesting application of the knapsack problem occurs in the solution of a large-size linear programming problem. In solving a one-dimensional cutting stock problem as a linear program, Gilmore and Gomory [9]use the knapsack solution to generate the pivot column in the simplex method. They employ the inverse matrix method of Chapter 2; the n iare the simplex multipliers in the inverse matrix. The xj in the knapsack solution correspond to the aij values of the original linear programming problem. If maximum z is small enough, then the current basic solution is optimal. Otherwise, the optimal xj values are used to generate the pivot column and a new basic solution is found. There are other applications for the knapsack problem: Hanssmann [13] for capital investment, Kolesar [16] for network reliability, Cord [5] for capital budgeting, and Kadane [15] for the Neyman-Pearson lemma of statistics. The knapsack problem was formulated by Bellman and Dreyfus [ S ] as a dynamic programming model. Efficient algorithms for the solution are contained in Gilmore and Gomory [ I O ] , Shapiro and Wagner [21], and the author [12,this volume].
5
THE TRAVELING SALESMAN PROBLEM
The traveling salesman starts in one city, goes to n other cities once, and then returns to the first city. How can he determine the order in which to visit each city so that the total distance he covers is at a minimum?
5
THE TRAVELING SALESMAN PROBLEM
7
The traveling salesman problem is formulated by Miller et al. [ZO] as an integer program: Find x i j = 0 or 1 that minimize z when
n
1
i=O
= 1,
xij
= 1,
n
1
j=O
ui
j=O,1,2
xij
- u j + n x i j In
- 1,
,..., n,
i = 1,2, ..., n , 1I i # j I n,
where x i j = 1 means that the salesman travels from city i to city j , x i j = 0 means that the salesman does not travel from city i to cityj, dijis the distance between cities i a n d j (dii= a), and u i are arbitrary real numbers.
The first j 2 0 equalities insure that the salesman enters each city once from one of the n other cities. The next n equalities insure that the salesman leaves each city i 2 1 once for one of the n other cities. A circuit is a tour of k 5 n + 1 cities that starts and ends in one city. The last group of inequalities in the problem insures that every circuit contains city 0. Suppose there was a circuit that did not contain city 0 for some integer solution satisfying the equality constraints. We have xij = 1 around the circuit: thus
u i - u j 5 -1 for cities i and j in the circuit. By adding all such inequalities, we obtain 0 5 - 1, a result which contradicts our supposition. It is clear, then, that if we have any circuit at all, it must be a possible salesman’s tour. What we must demonstrate now is the existence of ui values that satisfy the inequalities. Take ui = t if city i is the tth city visited. If x i j = 0, then ui - uj + nxij s n - 1 for all ui and uj values. If x i j = 1, then U i - U, + nxij = t - ( t + 1) + n = n - 1. In either case, the inequalities are satisfied. Any feasible solution of the constraints produces a possible tour; the minimization achieves the smallest length tour.
8
1
INTRODUCTION TO INTEGER PROGRAMMING
There are other problems in the traveling salesman category. For example, there is the case of sequencing jobs on a machine. We require that a number ofjobs be performed successively. After job i is completed, the machine is set up for job j at a cost cij. The problem is to order the jobs in such a way that the total cost is at a minimum.
6
THE FIXED-CHARGE PROBLEM
Another problem which can be represented in integer programming terms is the fixed-charge problem. For example, a factory has a fixed charge a > 0 whenever x > 0 items are shipped. It incurs a cost C given by a + cx, if x > 0, C=( 0, if x = 0. The objective is to minimize the total shipping cost. The problem is generally formulated as: Find xi 2 0 that minimize z when
c n
(ajyj
j= 1
+ CjXj) = z , n
j= 1
aijxj = bi,
yj=
(1:0
i
=
1,2, ..., m,
if x j = 0, if xi > 0,
where aj > 0. Our method of solving the problem is to convert it to a mixed integer format by determining some upper bound dj for each x j , and replacing the last group of constraints by xj 0, then y j must be one, while if x j = 0, then y j must be zero since y j = 1 produces a higher z value. If we restrict the xj to be integers, the problem becomes an integer program.
7
NONLINEAR APPROXIMATION
9
The fixed-charge problem is discussed by Hirsch and Dantzig [14]. Other applications are by Balinski [ I ] for transportation problems, Balinski and Wolfe [2]and Manne [I81 for plant location problems, and Weaver and Hess [22]in apportionment cases.
7
NONLINEAR APPROXI MATlON
Nonlinear functions of a single variable may be approximated in a mixed integer format. Let the function bef(x); the curve off(x) passes through the points P o ,P , , . . . , Pk, where the coordinates of points P i are ( a i , f (aij) = ( a i ,bJwith ai < The linear interpolation between points P i and P i + , is used to approximate f(x) for ai I xI Thus we replace x andf(x) by x = A*ffo
f(x) = A0 bo 1= A 0
+ A,a,+ . . . +&a,,
+ Aib1 + . . + A, bk, '
+A1
+"'+Ak,
where Ai 2 0. This approximation is the same one used in separable programming. For example, see Miller [19]. Since the interpolation is between points P i and P i + l , we need to impose the conditions that all Aj = 0 except for one pair Ai and Ai+,. This is accomplished by defining integer variables hi = 0 or 1. If d i = 1, the interval between Piand P i + , is considered. If d i = 0, some other region is considered. Hence, if we take the k = 4 case, we have A0
I
A, I A,
0 in(35). Thus
index s is chosen by
1
.
as
j,J+
-us = l-mln
I
-c l j , aj
where the minimum is defined in the lexicographic sense. When (35) is developed, we designate the p’ and uj’ values to be the current and aj values. Hence, (35) is of the same form as (33). If one or more components of p are negative again, we select a row with negative component and repeat the process begun with (34). Eventually a form is attained where fl has no negative components and x = p is the minimal solution. With aj > 0 at every iteration, p is lexicographically increasing as seen by (36). The minimal solution is reached in a finite number of steps since the same set of basic variables cannot be repeated. Example
Find x1 2 0, x2 2 0, x3 2 0 that minimize z when 2x1 t 3x, 2x1 - 3x2 x1
+
x2
+ 3x3 = + X 3 2 4, Z,
+ 2x3 2 3.
We introduce surplus variables x4 2 0 and x5 2 0 and put the problem in the format of (32) in the following tableau. 1 0 0 0 0 -4 - 3
2
3
2 3 1 0 0 1 0 0 2 -3 1 1
3 0 0 1 1 2
PROBLEMS
43
ITERATION 1.
The basic variables are x4 and x5.The x4 row is selected ; J + s = 1 and we develop the tableau below using (36).
-
4 2 0 0 0
1
2
1 f 0 0 1 1 +
6 L2 1 0 0 +
= (1,3).
3 2
-4
0 1 0
p
ITERATION 2.
The basic variables are x1_and x5.The xs row is selected; J' 3). s = 3 and we develop the following tableau. 1
3 -4 0
0
2
3
-$
3
0
= (1,
2,
1
ITERA TION 3.
The minimal solution is reached; z =?, x1 =$, x2 == 0, and x3 =
3.
Problems
1.
If any of the xj in ( I ) are unrestricted in sign, show that each such xi can be replaced by x J. = x.' - xO J
44
2
LINEAR PROGRAMMING
with xj’ 2 0 and xo 2 0. Thus, we need to introduce only one new variable to obtain the general linear programming format. 2. Find xj 2 0 that mininize zwhen 3x, - x, XI
+
x2
3x, - 3x,
3.
+ 12x, + x q = z , + 3x3-3x4=4, + Ilx, + x4 = 2.
Use the lexicographic dual simplex method to solve:
+ 3x2 = z , 3x, + 2x2 2 9,
2x1 2x1 XI
4.
20,
x220.
Use the bounded variable technique to find nonnegative x1 2 1,
x2 5 1,x, 5 1,x4 5 1 andminzfor 3x2 x1 f 5X,
5.
+ 5x2 2 8,
x2
+
+
x3
+ 2x4 = z ,
x3
= 2,
- 3x2 + 3x3 - 4x4 = 4.
Find nonnegative xi and min z in
Use the inverse matrix method.
REFERENCES
45
References Dantzig, G. B., “ Linear Programming and Extensions.” Princeton Univ. Press, New Brunswick, New Jersey, 1963. 2. Hadley, G. “Linear Programming.” Addison-Wesley, New York, 1962. 3. Simonnard, M. “Linear Programming” (W. S . Jewell, trans.). Prentice-Hall, Englewood Cliffs, New Jersey, 1966. 4. Wolfe, P., Experiments in linear programming, in “Recent Advances in Mathematical Programming” (R. L. Graves and P. Wolfe, eds.), pp. 177-206. Mc-Graw-Hill, New York, 1963. 1.
3
ALL-INTEG ER METHODS
We begin our study of integer linear programs with a presentation of all-integer methods. In an ;all-integer method the problem is stated with given integer coefficients; all calculations result in integer coefficients at each iteration. We give the theory of all-integer methods using a parametric approach. Two main all-integer algorithms are developed: The first is similar to one by Gomory [3]but has different convergence properties; the second is used to solve bounded variable integer programs. We then examine the interesting bounding form method developed by Glover [ I ] . The method may not produce rapid convergence in the solution of integer programs, and may not, therefore, seem practical ; it is valuable, however, for the insights it offers into the integer programming process. We will use the method again in Chapter 6 to solve Some of the classical problems of number theory. These first methods are all based on the dual simplex algorithm. No feasible integer solutions are available until the optimal solution is achieved. We conclude the chapter with a simple primal algorithm, i.e., one that produces feasible integer solutions. We can offer no assurance that this algorithm will produce the optimal solutions, but modifications by Young [4] and Glover [2] insure convergence.
48
1
3
ALL-INTEGER METHODS
OPTIMALITY THEORY FOR INTEGER PROGRAMMING
We are interested in solving the integer programming problem : Find integer xi 2 0 f o r j = 1, 2, . . . , n that minimize z when CjXj
= z,
j=1
"
2 a i j x j 2 bi,
j= 1
i = 1, 2 , . . ., m,
and the aij, bi, and cj are given integer constants. The method we use to find the optimal solution to Eq. (1) consists of making a series of changes of variables to achieve the transformation
The integer constants d j , djk are developed in an iterative procedure during the solution of the problem. The initial transformation is established by writing Eq. (1) in parametric form as n
(3)
j = l , 2 ,..., n,
x J. = y .I >- o ,
The variables x,, for i = 1, 2, . . . , m are surplus variables. Eliminating xi from ( I ) , using (2), we obtain the equivalent program: Find integer y j 2 0 f o r j = 1,2, . . . , n that minimize z when z
=z, +
c cjyj, n
j= 1
(4)
The constants Zo , C j , 6,, and Zij are developed as a result of the transformation with
2
n
cj
(5)
=
ckdkj k= 1
h i = bi -
9
n j=l
aijdj,
n
61J. . =
1
aikdkj, k= 1
IMPROVING A NONOPTIMAL SOLUTION
49
j = l , 2 ,..., n, i = 1,2, ..., m,
i = 1 , 2,..., m ; j = 1 , 2,..., n.
We have Theorem 1
If the constants are such that Cj 2 0, dj 2 ,O, and bi I 0 for all i and j , then the minimal solution to Eq. (4) is given by z = Z,, y j = 0 for j = 1,2, ..., n. Proof
If dj 2 0 and 6 iI 0, then y j = 0 for j = 1, 2, . . . , n satisfy the constraints of (4) and produce z = 5,. In addition, since Cj 2 0, a positive value for any y j can only increase z. We shall prove in Section 3 that Eq. (4) is an equivalent problem to (l), The minimal solution to (1) is then given by z = Z,, x j = dj for j = 1, 2, . . ., n. We see from ( 5 ) that when dj 2 0 and bi 0, then xj = dj satisfies the constraints of (1) with objective value 2,.
2
IMPROVING A NONOPTIMAL SOLUTION
We demonstrate here how to find the transformation, Eq. (2), that yields an equivalent problem, Eq. (4), and leads in a finite number of steps to the optimality conditions C j 2 0, dj 2 0, and 6, 5 0.
50
3 ALL-INTEGER METHODS
Consider the case where all cj 2 0. We can write (4) as n
where x is a column vector with components z, x i , x , , . . . , x,,+,, p is a columnvector with components 5, , d,,d, , . . . , dn, - 6,, - 6,, . . . , - 6,, and aj is a column vector with components C j , dlj,dZj,. . . , dnj, Z z j , . . . ,i i n r j Initially . (3) is also in the form given by (6) with p components 0, 0, . . . , 0, -b,, -b, , . .., -b, and ajcomponents c j , 0, 0, . . . , 0, 1, 0, . . . ,0, a l j ,a Z j , .. . ,amj.The one appears as the ( j -t 1) component of aj . To insure a finite algorithm we use lexicographic ordering in considering the aj and p vectors. A vector R is defined as being lexicographically greater than zero, or lexicopositive, if R has at least one nonzero component, the first of which is positive. A vector R is less than vector S, R 4 S , in the lexicographic sense, if the vector S minus R is lexicopositive, S - R > 0. Define the symbols “” to mean less than and greater than, respectively, in the lexicographic sense. If (3) is written in the form of (6), we see that aj > 0 because all cj 2 0. Suppose at some iteration we have achieved a form like (6) where aj > 0 and one or more components of p are negative; the optimality conditions of Theorem 1 are not fulfilled. Select a row from (6) with negative p component. Let the inequality be n
(7)
j = 1
a j y j 2 bo,
where - b, is the negative component of p. For the inequality to have a solution, at least one of the aj is positive. Taking A a positive number, any value aj/Amay be written as
where ( a ] denotes the smallest integer greater than or equal to a. Thus, after dividing by A in (7) and using (8), we obtain
2
wherepi
=
IMPROVING A NONOPTIMAL SOLUTION
51
{ a j / l > Note . that
1 "
- C rjyj 2 0; 1j = , hence we have
(9) Since the left side of (9) can have only an integer value, then n
where q = { bo/A}. It is desirable to make a change of variables for y , , where a, > 0; then if I is chosen so that I 2 as,from (lO),we obtain Ys
= 4 - j + sP j Y j
+ y,',
where integer y,' 2 0 repres-ts the surplus variable in (10). If y , from (1 1 ) is substituted into (6),we have x = p'
+ 1U j ' Y j j = 1
The coefficients of (12) are
,=
&.f
(13)
@.
- p . &,,
jZs
ffs' = a,,
B '= B + 4@,;
also, y,' has been replaced by new variable y , in (12). We require that the uj' remain lexicopositive. Thus (14)
clj-pjas>O,
j f s .
Condition (14) enables us to determine the index s and a value for 1, which produces the p j values. If J + is defined as the set of indices j where czj > 0 from (7), the index s is chosen by the rule a, = I-min ctj; je Jf
52
3
ALL-INTEGER METHODS
a, is the lexicographically smallest of the aj for j E J + . We define I-min to be the lexicographic minimum. Define integer value p j as the largest integer that maintains aj - p j a , lexicopositive for j € J + ; also take p , = 1. Condition (14) is fulfilled if p j 2 p j . Now we can determine A. If A = ak/pk for some index k E J’, then pk = p k . If A 2 ak/pk, then pk can only be reduced and condition (14) holds. The requirement for all j is that A 2 a j / p j .In addition, to reduce the number of iterations, q is made as large as possible to affect the greatest change in p. Since /3 is bounded from above, as seen in Section 4, we make p approach the bound rapidly. For example, the first component of p is Z,, which is bounded by the optimal value. By making q large we might produce an objective value that is close to optimum. Thus, to produce large q value, A is made as small as possible; we take 1 by the rule a.
1.= max I. jsJ+
Note that A may be fractional and that 1 2 a, so that p , is unity in (10). When (12) is developed, we designate the fi’ and aj’ values to be the current p and aj values. Hence, (12) is of the same form as (6). If one or more components of p are negative again, we select a row with negative fi component and repeat the process begun with inequality (7). Eventually a form is developed where p has no negative components. At this point the form (6) is like (4) and the conditions of Theorem 1 are fulfilled. The optimal solution to (1) is then produced by the first (n + l) components of p. The solution of program (1) is obtained by using Algorithm 1 1 . Develop a tableau by listing the columns /?,a l , a 2 , . . . , a,. Before any iteration p has components Z, d,, d 2 ,. . . ,d,, -&, - 6 , , . . . , -6m; column x j has components C j , d l j , d Z j , ..., d n j , Z L j , Z Z j , ..., Z m j . Initially Zo = 0, dj = 0, 6 , = b,; Cj = cj 2 0, dii = 1, dij = 0 , for i # j ; 0.. -,J = a ., J . Go to 2. 2 . If d j 2 0 f o r , j = 1, 2, . . . , n and h i < O for i = I , 2 , ..., m, the
2
IMPROVING A NONOPTIMAL SOLUTION
53
minimal solution is z = Z, xi = dj f o r j = 1, 2, . . . ,n ; stop. Otherwise, select the nonobjective row with the smallest p component. Suppose the row is -bo, a,, u 2 , . . . , a,. Define J + as the set of indices j where aj > 0. If all uj I 0, the problem has no solution; stop. Otherwise, go to 3. 3. Determine index s from us = f-minjEJ+uj. Find the largest integer pj that maintains uj - pjuS> 0 f o r j e J + ,j # s. If L Y ~and LY,begin with an unequal number of zeroes, take pj = 00. Otherwise, suppose the first nonzero terms are ej and e , . If e, does not divide e j , take pj = [ej/e,],where [ a ] denotes the greatest integer less than or equal to a. If e, does divide e j , then pj = ej/e, if aj - (ej/e,)a,> 0 and pj = ej/e, - 1 otherwise. Also p, = 1. Take A = maxjeJ+ a j / p j . Go to 4. 4. Calculate q = {b,/A}, pi = {aj/A} and new column values B’ = p + qa, and aj’ = uj - p i us for j # s. Designate p’ and aj’ to be the current /3 and uj . Return to 2. Example
Find integers x1 2 0, x2 2 0, x3 2 0 that minimize z when 2x1 X1
+ 6x2 + 3x3 = Z ,
+ 3x2 +
X3
2 5,
+ 5x2 - 3x3 2 6, 2x1 + 3x, + 2x, 2 4.
2x1
We follow the steps as they occur in Algorithm 1. 1. The problem is listed in the following tableau.
-
0 0 0 0
5
-6 4
1
2
3
2 1 0 0
6 0 1 0
3 0 0 1
1 2 2
3
5 3
1 -3
2
The integer surplus variables are x4, x 5 , and x6
54
3
ALL-INTEGER METHODS
ITERATION 7.
p,
2. The x5 row is selected.J + = (1,2). 3. s = 1, p i = 1, p, = 2; A = max(+, 3)= 3. 4. q = 3. p1 = 1, pz = 2, p3 = - 1. We form the following tableau.
-20 0 2
1
2
3
0
0
2
2 1 - 1 2 - 1 4
ITERATION 2.
2. The x4 row is selected. J + = (1,2,3). 3. s = 2.p1 = l , p 2 = l , p 3 = 2;A = max(+,+,+) = 1. 4. q = 2,p1 = l,p, = l , p 3 = 2. We form the tableau below. 1
2
3
1 0 0 2 1 -1 3 -2 5 2 -1 1 -2 0 0 0 1 0 0 1 0 2 1 1 - 3 0 3 - 1 6 ITERATION 3.
2. The x1 row is selected. J + = ( 1, 3) 3. s = 1 . p1 = 1 , p3 = 03; A = 3.
EQUIVALENT INTEGER PROGRAMS
3
4. q = 1, p 1 = 1, p z
= 0, p 3
55
= 2. We form the following tableau.
1
2
3
1 0 0 2 1 2 3 -2 -1 1 - 1 1 0 0 0 0 1 0 0 1 0 3 1 1 - 5 3 3 - 1 0 ITERATION 4.
2. The minimal solution is reached in the preceding tableau. It is z = 10, XI = 2, x2 = 1, x3 = 0. In step 2 of Algorithm 1,the nonobjective row with smallest fl component is chosen for inequality (7). This choice rule serves two purposes. First, it may make q = {bo/A} large, thereby increasing p the most; secondly at the end of the iteration, the corresponding p component has value -b, + qa, > -bo , which may be positive and more nearly fulfill the optimality conditions. The choice rule leads to a finite algorithm. Other choice rules may be effective, such as the selection of
1. 2. 3. 4. 3
the row that makes the greatest lexicographic change in p, the first nonobjective row with negative p component, the rows in a cyclic manner, the rows at random.
EQUIVALENT INTEGER PROGRAMS
We define two integer programs as equivalent if the minimal solution to one produces the minimal solution to the other and vice versa. We now show that the transformation (2), developed in Section 2, causes Programs (1) and (4)to be equivalent.
56
3
ALL-INTEGER METHODS
Consider program (1) in matrix form, with relaxed integer restriction: Find xi 2 0 for j = 1,2, . . . ,n that minimize z when cx = z ,
A x L b,
where x , c, and b are vectors with components xj ,cj ,and bi,respectively; A is a matrix with elements a i j .Write the transformation (2) as
(16)
x
=d
+ Dy,
where d and y are vectors with components dj and y j respectively; D is a matrix with elements d i j . Eliminate x from (15) using (16) and consider the program : Minimize z when CDY = z - cd, D y 2 -d, ADy 2 b - Ad. We have Theorem 2
If the inverse matrix D-' exists and y o is a minimal solution to (17), then xo = d + Dyo is a minimal solution to (1 5). Proof
Suppose X satisfies the constraints of (15) with objective value < zo = cd + c o y o . Since D-' exists, then y = D-'X - D-'d satisfies the constraints of (17) with objective value 5 ; thus, the assumption that y o is a minimal solution is contradicted. Hence, xo = d + Dyo is a minimal solution to (15). Similarly, the minimal solution to (15) produces the minimal solution to (17).
Z = cX
In addition, if d, D, and D-' have only integer elements, then (15) and (17) are equivalent programs for integer xi 2 0 and integer y j . Now we must show that our transformation ( I 1) leads to the proper D matrix and that the equivalence of (15) and (17) then holds for nonnegative integer y j .
3
EQUIVALENT INTEGER PROGRAMS
57
Transformation (2), as contained in (4), is obtained by repeated use of transformations like (1 1). Transformation (1 1) may be written as y(') = - Q, + p , y ( r - 1) ; (18)
y(') is the y vector after iteration r [i.e., the rth use of forms like (1 l)], Q, is a vector with IZ - 1 zero components and some component s, with value qr ,and P,is an n x I? elementary matrix' 1 P, =
1
Plr
~
2
r
. . . 1
.
. .
Pnr
1 where P, differs from the identity matrix in row s,. The values q,, pjr , and s, are the q, pi,and s values obtained from using (1 1) in iteration r. The transformation arises by writing (1 1) in the form
yi"= &)
&-I),
= &-I),
yz)
= $1)
where the r subscripts on s, q, and the p , have been neglected ;ps = I . The inverse to P, is
An elementary matrix differs from the identity matrix in just one row or column.
58
3
ALL-INTEGER METHODS
and the inverse transformation to (1 8) exists and is (22)
p-1 ) = Q , + p ; 1 p ) .
We use (22) recursively. Initially, y"' is achieved after some iteration t as
=x
and transformation (2)
1 Tr-,Qr + T,y'", t
x =
r= 1
where To = I , T, = P;'P;'
. . . p;'.
Since T, is the product of elementary matrices of the form (21), the determinant of Tt is one; thus, T;' exists. Also define d and D for (2), from (23), as
D=T, ClearIy, the conditions of Theorem 2 are fulfilled. Programs (15) and (17) are equivalent for D given by (24). Since the determinant of D is unity and D is composed of integer elements, D-' is also composed of integer elements. Hence, (15) and (17) are equivalent problems for integer xi 2 0 and integer y j when the transformation is given by (23). We proceed to show that (15) and (17) are also equivalent problems for integer x j 2 0 and integer y j 2 0. In working out the development of (17) we are really considering a new problem for each iteration r. The successive use of (22) leads to the constraints of each new problem (17). Program r of (17) is defined as (17) with y j = y y ) . We prove inductively that any integer yj!) values that satisfy the constraints of problem r must be nonnegative. We show that integer y y ) 2 0 for all r. Take r = 1 ; since y:.') = x,, we have yj') = x j for j # s and y y = -4
+ c pjxj n
j = 1
from (20). Equation (25) is developed, as in the formation of (1 l), by
4 CONVERGENCE TO OPTIMALITY
59
requiring that xj = yjo) 2 0 be integers in (3) for the first iteration. Following (lo), integers x j 2 0 must satisfy (26)
2
pjxj
j= 1
2 q.
Program( 15)with integer xi 2 0 and program one of (17) with integer ys') are equivalent for transformation (20) with r = 1. Thus feasible integers y;') to (17) will produce feasible integers x j 2 0 to (15). These feasible integers must satisfy (26). Hence, ys') 2 0 from (25) and yl" = xj 2 0,j # s. Program (1 5) and program one of (1 7) are equivalent for integer x j 2 0 and integer yj') 2 0. Assume now that problems r - 1 and r of (17) are equivalent for integer y y - ' ) 2 0 and integer y y ) 2 0 for r = 1, 2, . . . , t (the r = 0 problem of (17) is (15) for integer xi 2 0). We shall prove that integer y:!") 2 0. The equations in (20) hold for r = t + 1, where the s, q, and pi may be different than in (25). The equations for ySt+') in (20) result from the requirement that y j = y?) 2 0 be integers in (4).Thus, following (lo), integers y:!) L O must satisfy. (27)
i:p j y y 2
j = 1
q.
Problem t of (17) with integer y;" 2 0 and problem t + 1 with integer y?+') are equivalent for transformation (20) with r = t . Thus, feasible integers yj!") to problem t + 1 of (17) will produce feasible integers y?) 2 0 to problem t of (17). These feasible integers must satisfy (27). Hence, yj!'') 2 0 from (20). Problems t and t + 1 are equivalent for integer )('I, 2 0 and integer y j ' + l f 2 0. Therefore, by induction, (15) and problem t of (17) are equivalent for integer xi 2 0 and integer yjt) 2 0 for any t value. 4
CONVERGENCE TO OPTIMALITY
The integer programming problem appears to be readily solvable by Algorithm 1. We must show that the algorithm produces the minimal solution in a finite number of iterations. The convergence of the algorithm is a result of the lexicographic ordering or the aj columns of (6).
60
3
ALL-INTEGER METHODS
The cxj columns are lexicopositive in each step of the algorithm. Since /?,from one iteration to the next, is given by
P'
=
P+
W
S
,
then p' > p ; /3 is always increased lexicographically. Suppose the initial value of p obtained in (3) is Po. After iteration t of the algorithm, the /3 value is pt. A P sequence is produced with successively larger lexicographic values as Po < p1 4 p2 < . Also, since the components of P are integers, the sequence changes by integer quantities. Assume there exists a finite minimal solution xo with objective component z o ; i.e., xo = ( z o , xl0, x20, . . . , x,'). Then xo must satisfy (6) with constants for p and cxj that result after any iteration. There must exist y j 2 0 that satisfy a .
2CLjyj
j = I
= xo - p.
The convergence of the algorithm can now be proven. If the components of /3 are bounded from above by finite values, the algorithm requires a finite number of iterations. Consider the firstcomponent of P, the objective value Zo. It can increase only for a finite number of iterations and then must remain at some fixed value Zo 5 zo . If 5, > z,, after some iteration, then write the objective row from (28) as n
1 E j J J j = zo - Zo < 0 ;
j= 1
since F j 2 0, no y j 2 0 values can possibly produce z = zo . Thus, Zo, the first component of /I, is bounded from above by zo . An infinite number of iterations occur if some component of becomes unbounded. Suppose the (r + 1) component, I I r 5 n, takes an arbitrarily large value; then d,, the value of variable x,., becomes large. Let d, > x,," after some finite number of iterations. Write the corresponding row from (6)
C d , j y j = x,," - d , < 0;
j = 1
3
CONVERGENCE TO OPTlMALlTY
61
if all drj 2 0, then no y j 2 0 exist and d, must be bounded by.:x If some drj < 0, then the inequality (29)
C
jsJ-
(-‘rj)yj
2 dr - x r
0
must hold. We define J - as the set of indicesj where drj < 0. The y j are restrained by (29), which is of the form given by (7). We proceed as follows :
1. Obtain the index s by selecting a, as the lexicographic smallest oftheorjforjEJ-. 2. Find pj as the largest integer that keeps aj - p j a S> 0 for ~ E J ’ - ; also, ps = 1. Take 2 = maxjsJ - - d r j / p j . 3. Calculate q = {(d, - xro)/A}and p j = { -drj/A} for ~ E J; -pi = 0 otherwise. This leads to forms like (10) and (1 I ) ; y , is eliminated from (6) using (1 1) and (6) becomes
x
=p+
+ 1 ff;yj, n
qa,
j = 1
asin(12). The convergence becomes apparent by induction. Suppose dl > xIo after some iteration. The objective value from (30) is
Zo’ = To + qZs. (31) At least one d,, must be negative and the orj are lexicopositive, causing Cs to be positive. Since 2’, 5 zo , then q 5 (zo - Zo)/C, produces a bound ford, given by
d, 5 x , o +
4zo
- 20)
-
CS
Any larger value of d, can produce only a value of z larger than z o , which is impossible. Suppose, further, that d,, d,, . . . , d r - l are bounded by 6,, 6,, . . ., d r - , , respectively, and that dr > x: after some iteration. The objective value again is of the form of (31) In addition, the next ( r - 1) components of are
di’=di+qdis,
i = l , 2 ,..., r - 1 .
62
3 ALL-INTEGER METHODS
From the lexicopositive property of us and from the fact that d,, < 0, one or more of the values C,, d,, ,d 2 , , . . . , d,is positive. Thus -
x r o + L Tzo, - zo cs
x,
+ /I6i - di , -
dis
if
E,>O
if di,>O,
i = 1 , 2 ,..., r - 1 ,
In any case d, is bounded. By induction d,, d 2 , . . . , d,, the first n components of fl, have finite bounds. Also, the next m components of B, representing the surplus variables, must be bounded. Since fl is bounded and is lexicographically increasing by integer steps, the algorithm must take a finite number of iterations to achieve the optimal solution.
5
BOUNDED VARIABLE PROBLEMS
We are interested in solving the integer programming problem when the variables have upper bound restraints. We seek to find the solution to (1) with the additional constraints xi I mi f o r j = 1, 2, . . . , n, where the mi are given integer values. A possible solution to the bounded variable problem might be obtained by including the inequalities -x, 2 - mj with the constraints of (1) and using Algorithm 1 to handle the considerably enlarged problem. We choose a different approach. We shall use Algorithm 1 and disregard the upper bounds on the variables until an iteration is reached, in developing (4), with dj > mj for some indexj. Naturally, if all dj 5 mi for every iteration, Algorithm 1 solves the problem. Suppose at some iteration we have achieved a form like (6), where a, > 0 and one or more of the dj components of p have value dj > mj . Select one such row from (6). For that row we have
5
BOUNDED VARIABLE P R O B L E M S
63
They, must then satisfy n
- 1 djkyk 2 d j - mi
(32)
k= 1
> 0. If we define ukand b, by
k = 1, 2, , . ., n, ak = - d j k , bo = d, - m j , then (32) is exactly of the form given by (7). Using (32) then as (7), the analysis follows as before; we develop transformation (1 1) and the new form of (6) given by (12). The lexicographic property of the aj is maintained. The selection of index s and the /z calculation remain the same. Furthermore, the theory presented in Sections 3 and 4 holds; the bounded variable problem is simply solved by the use of (32). For upper bound problems we have Algorithm 2
1. Same as step 1 of Algorithm 1 with the additional listing of m,, m, ,
..., m , . 2. (a) If 0 I dj 5 mj f o r j = 1,2, . . . ,n and hi 5 0 for i = 1,2, . . . , m, the minimal solution is z = Z,, x j = dj for j = 1, 2, . . . , n ; stop. Otherwise, go to 2 (b). (b) If dj 5 mj f o r j = 1, 2, . . . , n, go to 2(c). Otherwise, select the row with dj component of p that produces the largest value of dj - mj > 0. Take bo = dj - m,, uk = -djk for k = 1, 2, ..., n as the row picked. Go to 2(d). (c) Select the nonobjective row with the smallest p component. Suppose the row is - 6 , , a,, u 2 ,. . . ,a,, . Go to 2(d). (d) Define J + as the set of indices j where uj > 0. If all u j 5 0, the problem has no solution; stop. Otherwise, go to 3. 3. Same as step 3 of Algorithm 1. 4. Same as step 4 of Algorithm 1. The choice rules of steps 2(b) and 2(c) may be changed just as in the previous algorithm. See the discussion immediately following Algorithm 1.
64
3
ALL-INTEGER METHODS
Example
Find nonnegative integers when 3x, 3x1 3x1 2x,
x1 5 1, x 2 I 1, x3 I 1 that minimize z
+ 2x2 + 5x3 = z , + 5x2 + 4x3 2 3, + 2x2 + 2x3 2 3, + 2x2 + 7x3 2 4.
We follow the steps as they occur in the algorithm. I. The problem is listed in the following tableau. 1 0 0 0 0 -
3 1 0 0 3 3 3 3 4 2
2
3
2 0 1 0 5 2 2
5 0 0 1 4 2 7
ITERATION 1.
2. (c) The x6 row is selected. (d) J + = (1,2,3). 3. s = 2 . p 1 = l , p 2 = l , p 3 = 2 ; 1 = max(+,+,I) 4. q = 2, p I = 1, p 2 = 1, p3 = 2. We form the tableau below.
=3.
1
2 ~~
4 0 2 0 7 1 0
1 1 -1 0 -2 1 0
3 ~
2 1 0 0 1 -2 0 1 5 -6 2 - 2 2 3
5
BOUNDED VARIABLE PROBLEMS
65
ITERATION 2.
2. (a) d2 = 2, m 2 = 1 ; d2 > m 2 . (b) b, = 1,ai = I , a 2 = - ] , a 3 = 2 . (d) J + = (1,3). 3. s = 3 . p 1 = 1 , p 3 = l ; I = m a x ( + , + ) = 2 . 4. q = 1, p1 = 1, p 2 = 0,p 3 = 1. We form the following tableau. 1
2
3
5 0 2 1 0 1 0 0 0 1 1 - 2 1 - 1 0 1 1 4 5 - 6 2 -2 -1 3 3 - 3 2 3 ITERATION 3.
2. (c) The x5row is selected. (d) J + = (1,2). 3. s = 1 . = 1, p2 = m; R = 3. 4. q = I , p ,
=
l , p 2 = 1,p 3
= 0. We form
1
2
the tableau below. 3
5 0 2 1 1 1 - 1 0 1 1 0 - 2 0 - 1 1 1 5 4 1 - 6 2 3 -1 -2 0 - 3 5 3 ITERATION 4.
2. (a) The minimal solution is reached in the tableau of the previous iteration. It is z = 5 , x1 = I, x2 = 1. x3 = 0.
66
3
ALL-INTEGER METHODS
6 NEGATI‘JE
C,
VALUES
The methods discussed so far have required that cj 2 0 for j
=
1, 2,
..., n. These methods can be modified, however, to include the case
where any of the cj are negative. We examine such a case here. Let us define J - as the set of indicesj where cj < 0. If the problem is to be of interest, the sum of xj f o r j E J - must have a finite upper bound. This upper bound is either apparent or readily obtained by solving the linear programming problem : Find xi 2 0 for j = 1 , 2, . . . ,n that maximize z’ when
1 x j = z’,
jeJ-
(33)
n
i = l , 2 ,..., m.
Caijxj2bi,
j= 1
In either case, we can assume that
where integer do is initially given or obtained from (33) as the integer part of optimal 5’. We express Eq. (I) in the form given by (6). The inequality with xi replaced by y j is
(34) we find index s from as = 1-min a j . jeJ-
We introduce a slack variable y,’ 2 0 and solve for y , in (34) to obtain
(35) Using (35), we replace y , in (6) to form (12) where a,‘ = a J. - a J ff,’
= -asr
j?’ = j?
S’
+ doas.
7
THE USE OF BOUNDING FORMS
Designate p’ and the aj’ to be the current p and a j . Observe that the become lexicopositive. We begin Algorithm 1 in step 2.
67
tii
Example
Find integers xj 2 0 that maximize z when X I - 2x2 + 3x3 = 2 , -3x, 3x2 x3 5 4, 2x, + 2x2 - x3 5 4, 3x, - 2x, 5 1.
+
+
Note that this is a maximization problem. We maximize z’ = x1 + x 3 subject to the constraints and obtain max z’= We write the problem in Tableau El ; s = 3, do = 5, and we form Tableau E2.
y.
1 0 0 0 0 -4 -4 -1
2 2
-1
1
3
2
3
‘3
1
0
0
0 0 3 -2 -3
1 0 -3 -2 2
0 1
-1
1 0
Tableau El
-3
-2
-1
Tableau E2
Tableau E2 is the starting point for the algorithm.
7 THE USE OF BOUNDING FORMS
There is another dual simplex method in addition to the method described in Section 1, one that is formulated by Glover [2] and known as the bound escalation method. It is based on the observation that y , 2 q = {bo/a,} if an inequality of (4), namely u j y j 2 6 , > 0, has the property that a, > 0 for some index s and uj 2 0 for j # s.
cj”=l
68
3
ALL-INTEGER METHODS
The inequality is then a bounding form because it provides a positive lower bound to variable y,. We shall show how bounding forms are produced and how they lead to the solution of integer programs. When a bounding form does not exist, it can always be developed through a single transformation. Let the inequality n
C a j y j 2 bo
(36)
j= 1
>O
have two or more positive coefficients uj . Suppose us > 0 ; we desire a bounding form that maintains a, > 0. If we write the inequality as (37) where J + is the index set for aj > 0 and J - is the index set for uj < 0, then
C V j Y j + Y , = y,'
jeJ
2 0,
+
J#S
where vj = {uj/u,} for j E J + . Eliminating y s from (37) using (38), we obtain (39) where
a.'=a.-v.a
s7
jEJ+, j + s ,
a,' = a,
and y,' is replaced by new variable y,. Since uj' 5 0 for j E J+, j # s, then (39) is a bounding form with y , 2 q = {bo/us). Take the integer programming problem as given by (6) where aj > 0 and one or more components of B are negative. Select a row with negative fl component. Let the inequality be (36) and define index s from a, = /-minjeJ+ a j . If we use (38) to eliminate y , from (6), we achieve (1 2) where g.'=ff.-v.a s, jEJ+? j # s us' = a,, aj' = u j , p' = p.
j
E
J-,
7
THE USE OF BOUNDING FORMS
69
We now make an additional change of variable Ys = 4
and replace p' by (41)
p'
=p
+ Ys'
+ gas.
When a sequence of lexicographically increasing p values are produced, the method solves the integer programming problem. The increasing p values occur if the aj' remain lexicopositive in (40). To insure aj' > 0, we require that vj < p j , where ,uj is the largest integer that keeps aj - p j u s > OforjEJ'. We have seen that if any vj > p j , then the bounding form may not lead to a finite algorithm, To maintain aj' > 0 we change (38) to
2 0,
where pi = min(vj, pj). We use (42) to eliminate y, from (6) and achieve (12) with vj replaced by pj in (40). Hence, if any vj > p j , then (39) is not a bounding form and the p vector is not changed by (41). We repeat the process in an effort to obtain the bounding form; the fl vector remains the same and we select the row that gave us (36). Eventually, we obtain a bounding form and a new b given by (41). Thus, we maintain uj > 0 and produce an increasing p. We sum up the procedure in Algorithm 3
Steps 1 and 2 are identical with those in Algorithm 1. 3. Determine index s from or, = I-minjEJ+ a j . Find the largest integer p j that maintains aj - p j a s > 0 for jeJ'; p s = 1. Calculate vj = {uj/u,} f0rje.l'. Go to 4. 4. Take pj = min(vj, p j ) and calculate new column values aj' = aj - p j cis f o r j E J + , ,j # s. Designate the aj' to be the current a j . If all v j -< p j , calculate q = {b,/u,) and the new column 8' = p + qa,; designate b' to be the current p and return to 2. Otherwise, some vj > p j .
70
3 ALL-INTEGER METHODS
The row selected in step 2 has new values -b, , a,, u 2 , . . . ,a,,. Define J + as the set of indices j where uj > 0 and return to 3. We have presented an algorithm in which a bounding form is developed, but a search procedure may be included to determine whether or not a bounding form already exists. Glover [2]presents methods to force faster solution. Example
Find integers x1 2 0, x2 2 0, x3 2 0 that minimize z when
+ + +
5x, 7x2 4x, 5x2 x, x2 5x1 + 3x2
+ llx,
= z, 5x3 2 6, 3x3 2 7, 2x3 2 5.
+ + +
We write the steps as they appear in the algorithm. 1. The problem is listed in the following tableau. 0 0 0 0
1
2
3
5 1 0 0
7 1 1 0 0 1 0 0 1
-
6
4
5
5
-
5
5
3
2
-
7
1
1
3
The integer surplus variables are x4, x5,x6 ITERATION 1.
2. The x5 row is selected. J + = ( I , 2,3). 3. s = 1.p1 = 1, p2 = 1, p3 = 2; v 1 = I , v 2 = 1, v 3 = 3. 4. p1 = I , p 2 = I,p3 = 2 ; v3 > p 3 , We form the tableau below. For the x5 r o w J + = ( 1 , 3 ) .
7
THE USE OF BOUNDING FORMS
1
2
3
x2 x3
x4 x5
4
-6 -7
1 -3 0
-5
-2
5
-8
3. s = 3 . p , = 5 , p 3 = 1 ; v 1 = 1 , v 3 = 1 . 4. p1 = 1, p3 = 1 ;q = 7. We form the following tableau. 1
2
3
7 4 2 1 -14 3 -1 -2 0 0 1 0 0 1 7 - 1 -27 7 1 -3 0 0 0 1 -61 13 -2 - 8 ITERATION 2.
2. The x6 row is selected. .I+ = (1). 3. s = l . B l = I ; v 1 = 1. 4. p1 = 1 ;q = 5. We form the following tableau.
1
2
3 ~
2
7
4
2
1 3 -1 0 0 1 2 - 1 0 8 7 1 0 0 0 4 13 -2
1 -2 0 1 - 3 1 -8
71
72
3
ALL-INTEGER METHODS
ITERATION 3.
The minimal solution is reached in the tableau in Iteration 2. It is z =2 7 , = ~ 1~, x 2 = 0 , x 3 = 2.
8
A PRIMAL INTEGER METHOD
A primal integer programming method uses the primal simplex algorithm so that feasible values of the basic variables are present at each iteration. The advantages of the primal method are: ( I ) a feasible solution obtained by any means may be used as a starting point for the algorithm, and ( 2 ) a feasible solution is available at any time in the calculations. The simple all-integer primal algorithm which we present has been known for a considerable time and may be attributed to Gomory. Assuming that a feasible canonical form is available initially, the problem is: Find integer x j f o r j = 1, 2, . . ,, n, II + 1, ..., n + m that minimize z when n
-z
+ 1 c j x j = 0, j= 1
(43) x , + i +
1 uijxj = b i ,
j= 1
i = 1, 2,
..., m,
j = l , 2 ,..., n , n + l , ..., n + m ,
xj20,
and the a i j , b i , and cj are given integer constants. In addition, bi 2 0 so that a feasible solution x , + ~= bi exists. To find the optimal solution, we make the transformation n
(4.4)
x J. = d J. - C d j k y k , k= 1
j = l , 2 ,..., n .
We develop the integer constants d j , djkin an iterative process during the
8
A PRIMAL INTEGER METHOD
73
solution of the problem. The initial transformation is established by writing (43) as
-z
+ c cjyj = 0, n
j = 1
xj-yj=O, x,+,
+ j 2= aijyj = b,, 1
j = l , 2 ,..., n, i = 1, 2,
.. ., m.
Eliminating xi from (43), using (44), we obtain the equivalent problem: Find integer y j 2 0 f o r j = 1,2, . . ., II that minimize z when
-z
+ 2 zjyj = -zo, j= 1
(45)
The constants, .To, C j , a i j ,and 6, are devcloped from the transformation. If the constants are such that all C j 2 0, 6, 2 0, and dj 2 0, then the minimal solution to (43) is given by z = T o , xj =dj for j = 1, 2, . . . , n and xn+,= 6, for i = 1,2, . . . ,m. Equation (45) can be written as n
(46)
X + CajYj=P, j= 1
where x is a column vector with components -z, xl,x2,. . . , xn+,, ctj is a column vector with components E j , d l j ,d Z j , . . , d n j ,ZIj, a2,, . . . , Zmj, and p is a column vector with components -Zo, d,, d , , . .., d,,, 6,, 6,, . , . , 6,. Initially, Ej = c j , dij = 0 for i # j , djj = - 1, Zo = 0, dj = 0, 6, = b i . After some iteration, suppose equations like those in (45) are obtained where all 6 , r 0, dj 2 0, and some C j are negative; we develop the transformation that converts (46) to an optimal format. We determine index s from C, = min Cj < 0 ; if all a, < 0 and dj, < 0, the solution
74
3
ALL-INTEGER METHODS
is unbounded. Otherwise, find 0 = min(6,/Zi,, dj/djs),where the minimum is over indices i a n d j so that C i s > 0 and dj, > 0. Select some equation of (45) with [6i/Zis]I0 or [dj/dj,] s 8, where if,, > 0 and dj, > 0. Let the selected equality be
x, +
(47)
1a j y j = bo.
j = 1
Dividing through by a, > 0 and noting that aj/a, = [aj/as]+ rj with 0 I r j < 1, (47) appears as
(48)
b0 Y,+ X p j ~ j = - a,
j#s
1
CrjYj--xr, j+s a,
wherepj = [aj/as].From (48) the inequality (49) results. Since the left side of (49) can take on only integer values, a more restricted relation holds, i.e.,
where q
= [ bo/as].Inequality (50) then
leads to
where integer y,’ represents the slack variable in (50). If y , from (51) is substituted for they, in (46), the latter becomes
+ c Clj’Yj = p: n
x
j= 1
where aJ. t = U . - pJ. as ? J
Cl,’ =
B’
=
iZs,
-us,
P - qas,
and ys‘ is redefined as y , . Designate aj‘ and
p’ as the current uj and f i ;
8
A PRIMAL INTEGER METHOD
75
(52) is again of the form given by (46). The same process can be repeated until an iteration is reached in which the optimality conditions hold. We are now ready to state a primal simplex algorithm for the solution to (43). Algorithm 4 1. Develop a tableau by listing the columns a, , s 1 2 ,
. . . , sl, , p. Before
any iteration slj has components C j , d l j ,d 2 j , . . , dnj, ZIj, . . ., Z m j ; ColumnP has components - Z o , d, , d 2 , . . . , d,, 6,,6,,. . . ,6,n. Initially, c. = c j , dJJ. .= - 1 , dij = 0 for i # j , Zij = a i j ; Zo = 0, dj = 0, 6,= 6,. Go to 2. 2. I f C j i O f o r j = 1,2, . . . , n,theoptimalsolutionisz=Z,,xj=dj, f o r j = 1, 2, . . . , n and x,+, = b,, for i = 1, 2, . . ., m ; stop. Otherwise, select index s by C, = min C j < 0. Go to 3. 3. If all dj, I 0 and Zi, S O , the solution is unbounded; stop. Otherwise, calculate 8 = min(6JZif, dj/dj,) for a,, > 0 and dj, > 0. Select a row with the property that [dj/djs]I 8 or [6i/Zis] 5 0, where dj, > 0 and ZiS > 0. Suppose the row is a, , a 2 , . . . ,a,, b, . Calculate p j = [aj/a,] for j = I , 2, . . . , n and q = [bo/as].Calculate new column j values, j # s, by multiplying the values of column s by pi and subtracting the result from columnj. Calculate a new constant value column by multiplying the values of column s by q and subtracting the result from the constant value column. Finally, change the sign of each element of column s. Go to 2. Example
Find integers x, 2 0, x2 2 0,x3 2 0 that minimize z when -2X1 - 3x2
+
X3
= Z,
4x, - x2 - 3x3 5 5,
-2x,
+ 2x2 + 3x3 I7.
We introduce slack variables x4 and x5 to achieve the feasible solution x4 = 5 , x5 = 7. We write the steps as they occur in the algorithm.
76
3
ALL-INTEGER METHODS
1. The problem is listed in the tableau below.
1
2
3
-2 -3 1 - 1 0 0 0 - 1 0 0 0 - 1 4 -1 -3 - 2 2 3
0 0 0 0 5 7
ITERATION 1.
2. s = 2. 3. The x 5 row is selected; a, = 2, p1 = - 1, p2 = 1 , p 3 = 1, 9 = 3. We form the following tableau. 1
2
x2
0 -1 1 -2
x3
x4 x5
3
0
0
-2
ITERATION 2.
2. s = l . 3. The x4 row is selected; a, We form the tableau below. 1
=
3, p1 = 1, p2 2
3
5 3 -1 1 0 - 1 1 1 0 0 0 - 1 - 3 1 1 1 0 - 2
19 2 5 0 2 1
= 0, p 3 =
- 1, 9 = 2.
PROBLEMS
77
ITERATION 3.
2. s = 3. 3. The x5 row is selected; a, = 1, p 1 = 0, p z We form the following tableau. 1
2
=
- 2 , p 3 = 1, q = 1.
3
5 1 1 2 1 - 2 1 1 1 0 0 - 2 1 -3 3 -1 0 0 - 1
0 3 5 1 1 0
ITERATION 4.
The minimal solution is reached in the tableau in Iteration 3. It is 2 = -20, XI = 3, x2 = 5 , x 3 = I , xq = 1, x g = 0. Algorithm 4 is not very useful in computational work because it generally requires many iterations even in very small problems. Since it is possible that q = [bo/as] may be zero for each iteration, the algorithm may not produce an optimal solution in a finite number of iterations. Young and Glover present means for preventing this occurrence, but the resulting algorithms converge very slowly. Problems
1 . Show that them equalities
zaijxj=bi,
j = 1
and m
i = l , 2 ,..., m,
+ 1 inequalities n
C a i j x j 2 bi,
j = 1 n
j=l
hjxj 2 h
i = 1,2, ..., m ,
78
3
ALL-INTEGER METHODS
are equivalent where
hi
=
-
m
aij,
i= 1 m
h= -1bi. i=l
Thus equality constraint integer programming problems can be converted to inequality constraint problems. 2. Minimize z in the followjng problem by means of the all-integer algorithm:
+ 2x2 4- 6x3 = Z, 2x1 - 3x2 + 4x3 2 5, + 4x2 - 5x3 2 8, 3x1
XI
XI
3x1 - 2x2 - x3 2 4, 20, x220, x320.
3. Find min z and integers x1 2 0, x2 2 0, x3 2 0 for 2x,
+ 3x2 + 2x3 = z,
3x1 - 2x2 - X3 2 4, -2x1 - 2x2 x3 2 5, -xi 3x2 - x3 2 3.
+
+
4. Solve the negative cj value problem
+ x3 = z , + - 2 6, -x1 + 2x2 - x3 2 2, x1 - x2 + 2x3 2 4. 2x1 - x2
XI2x2
~3
5. Solve by the bounding form method
3x1 + 2x2 -k 4x3 == Z, x1 - 2x2
+
+ 5x3 2 3, + 3x3 2 4,
6x1 ~2 4x1 + 7x2 - 2x3 2 2.
PROBLEMS
6 . Minimize z when xi
= 0 or
79
1 f o r j = 1,2,3
+ 2x2 + 2x1 + 2x2 + x1
+
x3 = z , x3 2 3,
2x, - 2x2 x3 2 1, x1 2x2 - 2x3 2 1.
+
7. Soive by the primal integer method: min z for integers x1 2 0 and x2 2 0 when -xl
3x,
- x2 = 2,
+ 2x2 5 7.
80
3
ALL-INTEGER METHODS
References 1. Glover, F., A bound escalation method for the solution of integer programs, Centre D’EtudesRech. Oper.6(3),131 (1964). 2. Glover, F., A new foundation for a simplified primal integer programming algorithm, Operutions Res. 16 (4),727 (1968). 3. Gomory, R. E., An all-integer programming algorithm, in “Industrial Scheduling” (J. F. Muth and G. L. Thompson, eds.), pp. 193-206. Prentice-Hall, Englewood Cliffs, New Jersey, 1963. 4. Young, R. D., A simplified primal (all-integer) integer programming algorithm Operations Res., 16 (4),750 (1968).
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
An enumeration method for solving integer programs often has advantages over other methods. Obviously, since the variables take on only discrete values, they can be listed easily, although they may be numerous. For the procedure to be manageable, the enumeration should be ordered in such a way that solutions are obtained with a minimal amount of calculation. We enumerate all possible values of the objective function for an integer program and demonstrate how to use the enumeration in finding the solution to the program. We also provide rules to accelerate the enumerative process. The enumeration is given by a dynamic programming formulation of the problem. (See Bellman [3]for a general presentation of dynamic programming.) In conclusion we give an enumerative solution to the knapsack problem. 1
A DIRECT ENUMERATION METHOD
We shall find the solution to the integer programming problem: Find integer values of x j 2 0 for j = 1,2, . . . ,n that minimize z when
c n
c j x j = z,
j=1 n
xaijxj2bi,
j= 1
i = l , 2 ,..., m.
82
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
To solve (1) directly by enumeration, we begin by finding all the values of z from z
(2)
= ClXl
+ c 2 x 2 + . * .+ c,x,
that are produced by nonnegative integer values of xi where the cj are positive numbers. Equation ( 2 ) is the objective function of the integer program (1). We find all feasible values of z as a monotonic increasing sequence. For a feasible value of z, say zo , we also find the x j values that produce zo . In the process of developing the monotonic sequence of the z, we obtain the solution to (1) when the smallest z value in the sequence has corresponding x j values that satisfy the constraints. Since the x j values produce the smallest objective function value consistent with the constraints, the solution must be optimal. Thus the enumeration of (2) is performed in order of increasing values and stopped when the constraints are satisfied. The method for generating feasible values of z from ( 2 ) is contained in Theorem 1
c j x j that is produced by integer If zo is a feasible value for z = values xio f o r j = 1,2, . . . , n, then other feasible values of z are produced b y z o + c j f o r j = l , 2 ,..., n. Proof
cjxjo, we can form another feasible z value by Since zo = increasing any xio value, say xt0, by one. The result is yet another feasible z value given by z = cjxjo + c, = zo + c,. Other feasible z values then can be formed by zo + ck for k = 1, 2, . . . ,n.
c3=1
In Theorem 1 we see how an enumeration of z values can be performed when a feasible z already exists: Given a list of feasible z, additional feasible z are found by using a z value, such as zo ,as generator and forming zo + cj for j = 1, 2, . . . , n. We then add the new z values
1
A DIRECT ENUMERATION METHOD
83
to the list and select another z value as generator. If we select the smallest unused z , we insure that every z value will act as a generator. The only problem is to find the first generator. Since each feasible z value has corresponding x j values, the first generator is z = 0 with x j = 0. We give the formal algorithm for generating all feasible z values and then prove that no z values are omitted using the algorithm. Algorithm 1
1. List the values of the problem as 1
2
3
...
n
c1
c2
cj
...
Cll
xj = 0
Go to 2. 2. Given the list, find c, = min cj for all unmarked columns in all sections. Take c = c, and go to 3. 3. Add a new section of columns to the list as follows: (a) Calculate cj’ = c + cj for the indicesjof the unmarked columns in the section containing column r. The values of cj used are from the first section. (b) Mark the r column. (c) Add columns headed by the prescribed indices in a new section with values cj’.Consider the cj’ as new cj values. (d) Underneath the section added, write the xi values from the section containing the newly marked r column. Increase x, by one for the new section. Go to 2. This concludes the algorithm. There are no feasible z values omitted by the algorithm as seen by Theorem 2
Every feasible value of z for z listed by Algorithm 1.
=
cy=,
c j x j with integer
xj 2 0 is
84
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
Proof
The values of z generated by z = 0 are cj f o r j = 1, 2, . . . , n and are listed in the algorithm. Suppose z* is a feasible value that is not listed; we have z* = c j x j * . If z* > 0 is feasible, then some xi*, say xk*, is positive. Another feasible z exists for xk = xk* - 1, xj = xi*, j # k , and has value z‘ = cj xj* - ck and we have z’ = z* - ck. Thus, z* would be generated by z using Theorem 1. Furthermore, z’ is not listed by the algorithm; if it were, it would generate the value z* which would be listed. If z’ > 0, then it can be generated by some smaller value of z which is not on the list. Each value of z > 0 not on the list can be generated by a smaller value not on the list. We can find smaller and smaller feasible z values that are not produced by the algorithm and which will generate z values that lead to z*. Since we can always find a smaller feasible z as generator and because there are only a finite number of z values smaller than z*,we see that z = 0 must be a generator and thus must generate all values of z that lead to z*. Moreover, z = 0 cannot generate values of z on the list that lead to z*. All possible feasible values of z generated by z = 0, however, are on the list; these are the cj . Thus we have a contradiction and all feasible values of z are produced by Algorithm 1.
cy=l c;=
We have just shown that Theorem 1 leads to Algorithm 1 for the cj xi enumeration of z and corresponding xi values that satisfy z = with integer xj 2 0 and cj > 0. Theorem 2 proves that every feasible z value is enumerated by the algorithm. Example
Find feasible z that satisfy z
= 3x,
+ 4x, + 7x3
for xl,x 2 , x 3 2 0 and integer. We list
i 1
(3)
2
3
1
A DIRECT ENUMERATION METHOD
85
Note that min cj is the 3 of column 1. We mark the 3 and form 1
2
3
6*
7
10
.
ri.
(4)
The minimum unmarked element is the 4 in column 2 of (3). We mark the 4 and form
(5)
x2
=
1
We do not have a column headed by 1 in (5) because it would duplicate the column headed by 2 in (4). Both would require that x1 = 1, x2 = 1. This storage reduction is handled by step 3(a) of the algorithm. The 6 from (4)generates the next values 1 1
2
3
9
10
13
x1 = 2
As the method continues, the list in subsequent steps becomes
14 11 14 12 15 12 13 16 x3=1
x1 = 1 x2=1 x2=2
x,=3
17
14 17
18
15 18
~~~~
x1 = 1 x1 = 2 x2 = 1 X I = 1 x 3 = 1 x 2 = 1 x,=1 x2=1
86
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
Feasible values of z are listed in order as 6 7 7
10 10 11 11
2 0 1
-___
2 0 0 1
1 2 0 1
--_
0 0 1
2
13 13 14 1L
____
---
-___
-___
15 15 15
--_
0 1 5
--_
___
-_--
0 1 1 2
0 2 0 1
___-
--_
0 1 0
1 0 1 0
1 1 2 1
1 0 0
2 3 0
SOLUTION TO THE INTEGER PROGRAM
We are now ready to solve Eq. (1) by considering the problem: Find xi 2 0 f o r j = 1,2, . . . , n that minimize z when
(6)
ClXl tl1x1
+ c2xz + ... + c n x n= z , + tlzxz + . . * + = a, UnXn
where the t l j are column vectors with elements aij and CI is a feasible variable column vector. Take tloas a column vector with elements bi. As in Section 1, we enumerate values for z in (6) as a monotonic increasing function. This time, however, the corresponding xi values produce values for tl in the constraints. We achieve a solution to (1) for the smallest z value and corresponding x j values that produce a vector a with the property that tl 2 a. (each component of a is greater than or equal to the corresponding component of u0). We can remove the cj > 0 restriction in solving (6). If Algorithm 1 is used where c, = min cj < 0, the enumeration will continue by increasing only x, without limit. In a particular problem, however, x, may be bounded either implicity through the constraints of (1) or by having an upper bound initially. In either case we will assume that (7)
xj<mj,
j = l , 2 ,..., n;
mj are bounds on the variables (any mi value may be infinite). For those
2 SOLUTION TO THE INTEGER PROGRAM
87
indicesj where cj 5 0, we must first attempt to find a finite mj for variable xj. If a variable is not initially bounded, then a bound, if it exists, may be obtained by solving the h e a r program: Maximize xk subject to the constraints of (1). The variable xk is simply one of the variables from (1) for which it is desirable to achieve a bound. If xkois the resultant value of max x k , then we have x k < [Xk"] where [xko] is the integer part ofxko. While it is most useful to bound each variable, it may not be practical because of the need to solve a linear programming problem for each variable. An alternate strategy is to solve the single linear program: Maximize c j E J x j subject to the constraints of (l), J being the set of integers j where cj 5 0. We take the resultant bound as integer do and havexjSJxj 5 do. For the solution of (1) with the additional constraints xj mi for j = 1, 2, . . . , n, we present Algorithm 2. If cj > 0, then mj may be infinite; if cj < 0, then mj must be finite for the method to solve every problem. In any case, mj may also be prescribed by the problem. (We see, for example, that mi = 1 restricts the variable xj to being zero or one, which constitutes an important class of integer programs.) For each cj < 0 we make the change xi' = mj - x j . We can then assume all cj 2 0. Thus we have Algorithm 2
1. Define solution vector S(z*, xl*, x 2 * , . . ., xn*), where xj* for j = I , 2, . . . ,n is a feasible integer solution to the problem with objective value z*. If no feasible solution is apparent take 2. List the values of problem (1) as
tll
a2
ci3
Z*
= 00. GOto
2.
...
x0 and the mj are also listed. Component i of z j is a i j , j # 0. Component i of is b i . Go to 3.
88
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
3. In the newly listed section, find index r from c, = min cj for indices j with ci, 2 a,,. Mark each such column. If no such column exists or c, 2 z*, go to 4. Otherwise take z* = c, and form S(z*, xl*, xz*,. . . , x,*) where the xj* are the nonzero xi values found below the new section. The other xj* values equal zero. Increase x,* by one. G o to 4. 4. Given the list, the solution S(z*, xl*,xz*, . . . , x,*) is the minimal integer solution if there are no unmarked columns with cj < z*. Stop. Otherwise, find index r from c, = min cj for all unmarked columns in all sections. Take c = c, anc ci = a,. G o to 5. 5. Add a new section of columns to the list, if possible, as follows: (a) Calculate cj' = c + c, and cij' = tl + cij for the indices j of the unmarked columns in the section containing column r . The c j , ci, values are taken from the list in step 2. Mark the r column. (b) Add columns with values cj' and cij' headed by the prescribed indices J if cj < z*. Include a column headed by r only if x, + 1 < m, for the x, value found below the section containing the newly marked r column. Designate the cj' and cij' values as cj and t l j values respectively. (c) Underneath the section added write the x j values from the section containing the newly marked r column. The nonappearance of a variable means it has zero value. Increase x, by one for the new section. Go to 3. If no new section is added, return to 4. While enumerating z values for z = C c j x j in the algorithm, we are simultaneously enumerating M values for tl = C cij x i . The M enumeration is performed with a, as generator. We form new values of ci from M, aj for j = 1, 2, . . . , n. As in the z enumeration, the method succeeds in enumerating all feasible ci values. The order of enumeration of the z values directs the tl enumeration and permits all feasible ci to be produced. The solution vector S(z, x l , xz, . . . , x,) is used in the algorithm to reduce the amount of computation. If z and corresponding xi values are listed and feasible, then there is no need to list the same or larger z
+
2
89
SOLUTION TO THE INTEGER PROGRAM
values. In addition, if we have a bound for the variables given by
F1.
CjEJxj5 do,wemodifyAlgorithm2bythevariablechangexj’ = do - xi
for j E J , \Ve can then assume all cj 2 0. List the values of the problem in step 2 as
u3
El
u2
a,
a2 a3
...
En
... a,
c,l=l
Consider the bound inequality as a j x j 5 d o . Thus we have aj = 1 i f j E J a n d aj = 0 otherwise. In step 5(a) of the algorithm, calculate cj‘ = c, c j , uj‘ = CY, mi, uj‘ = a, + aj for the indices j of the unmarked columns in the section containing column r . The c,, a , , and a, values are from the section containing column r. The c j , aj,;and uj values are taken from the list in step 2. Mark the r column. In step 5(b), add columns with values cj’, uj‘, and uj’ headed by the prescribed indicesj. Include t h e j column only if aj‘ < do. Also, include the r column only if x, + 1 < rn, for the x, value found below the section containing the newly marked r column. Designate the cj‘, aj’, and uj’ values as cj , uj,and uj values, respectively. Algorithm 2 succeeds in solving the integer program with a finite number of steps when a finite minimal solution exists. We enumerate all objective values z < zo where zo is the optimal one. We also find all x j values that produce each z. Since there are only a finite number of integer values less than or equal to z o , the minimal solution must be listed in a finite number of steps. We do not claim any efficiency for the direct enumeration method, although it works well when the optimal xi values are small. Because the two operations performed are additions and comparisons, the method is particularly amenable to computer calculation and can readily be coded to produce extremely rapid solutions to some integer programs. In Section 3 we show how to accelerate the enumeration.
+
+
90
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
Example
Consider the problem: Find integers x1 2 0, x 2 2 0 , x3 2 0 that minimize z when 3x, + 4x2 7x3 = z , 3x1 + 2x2 3x3 2 8, 4x, - x2 - 2x3 2 6, - X 1 + 3x2 4x3 2 2.
+ +
+
We list the steps as they occur in the algorithm.
1. z* = 00. 2. The problem is listed in Tableau E l ; a0 = (8, 6, 2), all mi = I*
2* 3*1
I*
2* 3
1
2
3
1
2
I
3#
00.
2
2
11 7
12 6 -3 9
~~
3 4 7 3 2 3 4 - 1 -2 - 1 3 4 -
6 7 1 0 6 5 6 8 3 2 2 2 3
x1
=
I
8 1 1 9 1 4 5 9 -2 -3 12 6 7 - 3
x2 = 1
x,
0 1 3 8 9 7 6 1 2 =2
2 5
-~ x,=1x2=: x2 = I
5. We form Tableau E2. Mark column 1 of Tableau El (the marking is shown by the *). 4. r = 2 in Tableau E l , c, = 4. 5. We form Tableau E3. Mark column 2 of Tableau E l . 4. r = 1 in Tableau E2, c, = 6. 5. We form Tableau E4. Mark column I of Tableau E2. 3. A feasible solution is apparent in Tableau E4. r = 3, CL,2 a,,, where CL,= (9, 6, 2). z* = 13, xl* = 2, x2* = 0, x3*= 1. Mark column 3. 4. r = 1 in Tableau E l , c, = 7. 5. No new tableau is formed. Mark column 3 of Tableau E l . 4. r = 2 in Tableau E2, c, = 7.
3
AN ACCELERATED ENUMERATION
91
5. We form Tableau E5. Mark column 2 of Tableau E2. 4. r = 2 in Tableau E3, c, = 8. 5. We form Tableau E6. Mark column 2 of Tableau E3. The method continues easily. No new tableaus are formed. The feasible solution z = 13, x1 = 2, x2 = 0, x j = 1 is minimal. 3 AN ACCELERATED ENUMERATION
As we indicated in Section 2, it is sometimes desirable to accelerate the enumeration. This can be done once the variables become evaluated. We present several rules for making the variables known. In (1) it may occur that a positive value for one of the xj variables may not allow a feasible solution to be found; alternatively, some of the xj must have positive values in order for the solution to be found. Similar conditions may exist when Algorithm 2 is applied in the search for the solution to (1). Consider (1) as listed in step 2 of Algorithm 2 . We form an index set Kwith initial membersi = I , 2, . . . , n. We then form (8)
Ui= Caikyk, k s K
i = 1,2, ..., m ;
yk for k e K is a nonnegative integer that represents the subsequent increase in xk . Thus y k Sm,' where mk' = mk - xk'.Initially all xk' = 0.
We require that Ui 2 bi. Hence, we may be able to determine from (8) whether no y , can produce feasibility or whether some y , must have a value to produce feasibility. Take
(9)
ui*= C
k EX,+
aikmkt,
i = 1,2, ..., m,
Where Ki+is the set of indices in Kwith aik> 0 and Ui*is the maximum lf Kif has no members, then Ui* = 0. The following rules value of U i . apply where initially c = 0 and di= 0 for i = 1, 2, . . . , m. (These rules are extensions of the ones developed by Le Garff and Malgrange [6] and Balas [ I ] in the mi = 1 case. For a concise presentation of the mi = I case, see Beale [2].) A I . If any Ui* < bi - d i , then no y k values can produce feasibility. No feasible solution to the problem can be found. Stop.
92
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
+
A2. If Ui* aij < b, - d, for j E K, then y j = 0 since y j 2 1 produces infeasibility. Mark column j and remove index j from K. Write a new form for the U iin (8) and calculate the Ui*in (9). Return to rule A l . A3. (a) I f j is the only index in Ki+and di < b,, then a i j y j > bi - di requires that y j 2 0 where 0 = {(b,- di)/aij}.' (b) If Ui*- aijmj' < b, - d, f o r j E K, then y j 2 8, where 8 is the smallest integer with the property that Ui*- aijmj' aij8 > b, - di.Any y j < 0 produces infeasibility. (c) When 8 is found in (a) or (b), increase xi' by 8, decrease mj' by 0, replace c by c + cj 0 and d, by d, + aij8. If mif = 0, remove index j from K. Write a new form for the U iand calculate Ui*. Return to rule A l .
+
If, as a result of adhering to these rules, all xj' = 0, continue in the algorithm with step 3. If, on the other hand, some xj' is positive, problem (1) can be solved only with x j > xi' for allj. We then take value di as component i of a. If a > C I ~ ,the problem is solved with z = c and xj = xj' for allj. Otherwise, add a new section of columns to the list. Each column is headed by an index k E K. The column values are given by ck' = c ck and ak' = a + ah.The ck and cik values are taken from the list in step 2. Underneath the section write the xi = xj' values. Mark all unmarked columns on the list from step 2 and go to 3. We are able, further, to evaluate the variables within Algorithm 2. In the enumeration to the solution of (6) we have a feasible solution to (1)whenevera = a, isachievedwiththepropertythata, 2 a o .Eachcolumn k developed in Algorithm 2 is a listing of z and a that results when the x j values appearing below the column section, with xk increased by unity, are substituted in (6). It may be that these values of xj d o not allow a feasible ar to be produced or that some of the x j must be increased to obtain values of r r > s ~ We ~ .can therefore modify Algorithm 2 to ascertain whether a current stage of the enumeration will lead t o a feasible c( or whether some of the x j require specific values in order to produce a feasible a. When index r is found from c, = min cj in step 4 (with c = c,.),
+
This rule is useful if m,
=
co. If mj is finite, then rule (b) holds as well.
3
AN ACCELERATED ENUMERATION
93
we define a set of columns K as follows : (a) Form set K’ where column k E K’ if k is an index of one of the unmarked columns in the section containing column r. (b) Remove k from K‘if ck 2 z* (the ck value in the section). (c) Remove r from K’ if x, + 1 = m, for the x, value found below the section. (d) Remove k from K’ if c + ck 2 z* where the ck value is taken from the list in step 2. (e) The remaining set K‘is taken as set K. Define the column values for a, and d,, d, , .. . , d,,,.The possible values of z and component Vi of a that can be enumerated at this stage are given by
(10)
zZc+
x C k y k ,
keK
vi = di + ui,
i = l , 2 ,..., m,
where Ui is given by (8) for the present set K ; y, is a nonnegative integer that represents the subsequent increase in xk. Thus, yk I mk’ where m,’= m, - x,’. The x,’ are the x, values that appear below the section containing column r with x,’ = x, + 1. In terms of Eq. ( I ) , we require that V i 2 b i . This means that we may be able to determine from (lo) whether no yk can produce feasibility or whether some y , must have a value to produce feasibility. Consider
Vi*= di + Ui*,
i = 1, 2 ,
. . . , m,
where Ui*is given by (9) with Ki+ as the set of indices in current K with aik > 0. Then Vi* is the maximum value that V i can attain when we perform the enumeration starting with the section containing column r . The following rules then apply. B1. If any Vi*< b i ,then no yn values can produce feasibility. Mark column ran d continue the algorithm in step 4. €32. If Vi*+ aij < bi for j s K , then y j = 0 since y j 2 I produces infeasibility. Remove i n d e x j from K and write new forms for the U i . Calculate the Vi*and return to rule B1.
94
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
B3. (a) I f j i s the only element in Ki+and di< bi,thend, + a i j y j 2 b, requires that y j 2 8 where 8 = {(bi - di)/aij}. (b) If Vi*- aijmj' < bi for j E K, then y j 2 8 where 8 is the smallest integer with the property that Vi*- uijmj'+ a i j 8 2 b i.Any y j < 8 produces infeasibility. (c) When 8 is found in (a) or (b) and if c + c j 8 2 z*, then mark column r and continue the algorithm in step 4. Otherwise, increase xj' by 8, decrease mi' by 8, replace c by c + c j 8 and di by d , + a,O. If mj' = 0, remove j from K. If c + ck 2 z*, remove k from K. Write new forms for the (Ii; calculate Vi* and return to rule B1. When the application of the rules is completed, we return to the algorithm in a modified step 5. The modifications consist of the following :
5. (a) Mark column r. Take value di as component i of CI. If t( 2 u0 take z* = c and form S(z*, xl', x2', . . . , x,,'); go to 4. Otherwise, go to 5 (b). (b) If set K has no elements, go to 4. Otherwise, add a new section of columns to the list. Each column is headed by an index k E K. The column values are given by c,' = c + c, and a,' = c( + ak. The c,, CI, values are taken from the list in step 2. Underneath the section write the xi = x j' values. Go to 3.
Example
Find nonnegative integers xi 5 1 that minimize z when
+ 8x2 + 10x3 4-2x4 4= z, - 3x2 5x3 + X 4 - 3x5 - 6x6 2 2, - 3x4 + 2x5 + 2x6 2 2, - 2 ~ 1+ 6x2 -xl - x2 + 2x3 - x4 + x 6 2 1. 5x1
Xg
X1
~3
We list the problem values in Tableau El with x 0 = (2, 2, 1) and all mi = 1. We obtain U,* = 7, U2* = 10, U3* = 3 with K = (1, 2, . . . , 6). For rule A2, U,* + a16= 1 < 2; we mark column 6 and remove the 6 from K. Hence, U,* = 7, U,* = 8, U3* = 2. For rule A3, y 3 2 8 = 1 ;
4
95
A DYNAMIC PROGRAMMING METHOD
= 10, d, = 5, d, = - 1 , d3 = 2. We remove 3 from K ; no further rules apply and we form Tableau E2. All unmarked columns in tableau El are marked.
xg' = 1 , c
1
-2 -1
2
-1
6
3
4
-1
2
5
6
*
/
1 15 6 -3 1
-3 -1
2
*
4
5
18 13 2 6 5 -4 1 1
11 2 1 2
xj = 1
Tableau E l
Tableau E2 A feasible solution is apparent in column 2 of Tableau E2. We have x2* = 1, xj* = 1, xl* = 0, x4* = 0, x5* = 0, x6* = 0; mark column 2. In step 4 of Algorithm 2, r = 5 in Tableau E2 with c, = 11. We obtain K = (1, 4) and = 4, Vz* = 1 < 2, V3* = 2 for rule B1. We mark column 5 and continue the algorithm. No other column can pass rule B1. The current feasible solution is minimal.
z* = 18,
vl*
4
A DYNAMIC PROGRAMMING METHOD
The enumeration in Section 2 may be interpreted as a dynamic programming procedure. In this section we proceed to formalize the method in terms of the dynamic programming equations. We define the function
where 2 is a feasible variable vector. Problem (6) is equivalent to the functional relationship given by (11). Note the the zero argument of F(0) is a vector with zero elements. We are able to convert (1 1) to a dynamic programming recursion in
96
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
Theorem 3
The functional relationship (1 1) leads t o the recursion
F(cc) = min(cj
(12)
+ F(cc - aj)),
j
F ( 0 ) = 0, where z is feasible. (The recursion given in (1 2 ) was first formulated by Gilmore and Gomory [4]for knapsack functions.) Proof xk
Consider (1 1) with cc # 0; then some x j is positive. Suppose we have positive; then
f;(a) = min
( +c ck
CjXj
j f k
+ c k ( X k - 1) 1a j x j +
c(k(Xk
- 1) = @
- Mk,
lj+k
Thus
(13)
F(u) = ck + min
crjxj' = cr - ak: integer xi' 2 0
where the change of variables xi' = x j , ! # k , x k ' = xk - 1 is made; note that the min term is F(cc - a k )from definition (1 1). We have
+
F ( a ) = Ck F(cc - crk). (14) Equation (14) holds in the case where xG> 0; hence, we have
+
j = 1, 2, . . . , n , F(U) I cj F(cc - cc,), (15) in all cases. Inequality (15) is another way of writing (12) provided that the equality holds for at least one j , as in (14). Since at least one x j is positive for cc # 0, we have achieved (12) and proven the theorem.
The recursion in (12) may be solved as a direct enumeration because F(xc,)= c,, where c, = min c j . Thus we produce one solution value for F(z). Replacing a by ci - LY, in (12), we form F(cr - cr,) = min(cj j
+ F(a - crj - cr,)),
5
KNAPSACK FUNCTIONS
97
which we substitute for the F(LX- r,) term on the right side of (12). We then obtain
(16)
F ( a ) = min(min(cj j
+ F ( r - a;)), min(cj + F(a - x i ) ) ) , j t r
where cj‘ = cj + c, and L X ~=‘ mj + LX, for j = 1, 2, . . . , n . Designate cj’ and r j ’ as cj and aj values. Note that (16) is of the same form as (12) except that the original c, + F(cc - r,) term is missing and other terms are included. Since we have the same form as (12), we obtain the solution value F ( q ) = c, by finding a new c, = min c j . In this way we find F(Lx) for all feasible LX.By keeping track of the indices j that produce each F(a,),we obtain the corresponding x j values. The method of solution just outlined is the same as given in Section 2. Algorithm 2 conveniently lists the values of F(Lx)at any stage of the enumeration.
5
KNAPSACK FUNCTIONS
The knapsack problem and its applications were introduced in Chapter 1. Following the author’s [5] approach, we shall present a dynamic programming solution to the problem. We define the knapsack problem as: Find integers x j 2 0 that maximize z when n
1c j x j = z,
j= 1
c n
ajxj
j= 1
I L,
where each cj is a positive number, each uj is a positive integer, and L is a positive integer. We express the one-dimensional knapsack function as
(17)
F ( x ) = max
1 c j x j 1a j x j = x,
(j:1
lj:1
1
integer x j 2 0
98
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
for feasible values of x. (See Gilmore and Gomory [4] for higher dimensional knapsack functions.) The knapsack problem is solved when we find the maximum of F(x) for x 5 L. In a manner like that used in Section 4, the knapsack function can be written as the dynamic programming recursion
F ( x ) = max ( c j
(18)
j:al s x
+ F(x - aj)),
F ( 0 ) = 0. The maximum in (18) is for indices j with the property that aj 5 x. This occurs because the x j values are zero when the corresponding aj values are greater than x in (1 7). We can obtain an immediate solution to (18) by finding a, = min aj for all j. If ak = a, for k # r , then index r is chosen where c, 2 c k . We take x = a, and produce F(a,) = c,. We then replace x by x - a, in (1 8) and have
F ( x - a,)
=
max ( c j +a , 5 x
j:a,
+ F ( x - a, - a j ) ) ,
which we substitute for the F(x - a,) term on the right side of (18). Hence,
F ( x ) = max( max ( c j j#r:a, c x
+ F(x - aj)),
max (cj’
j:a,’cx
+ F ( x - a;)),
(19)
+
+
where cj‘ = c, cj and aj’ = a, cj for allj. If all cj’ and aj’ are defined as additional cj and aj values, then (19) is of the form given by (18) and another immediate solution can be obtained by seeking a new a, = min a j ; again F(a,) = c,. We continue this way until F(L) is found or until no other F(x) for x 5 L can be produced. At the same time, we remember t h e j = r that produces the solution so that we can determine the optimal x j at the end of the procedure. We also consider only distinct aj values. If aj = akand cj 2 ck f o r j # k , then take xk = 0. The method for solving the knapsack problem is contained in
5
KNAPSACK FUNCTIONS
99
Algorithm 3 1. List the values of the problem as follows:
I ;;
c2
a,
c3
u3
- * -
...
2
1;
L is also listed. G o to 2. 2. Given the list, find index r from a, = min uj 5 L for all unmarked columns in all sections. I f a, = L or if no a, exists, go to 4. Otherwise, take c = c,, u = a, and go t o 3. 3. Add a new section of columns to the list, if possible. Calculate cj‘ = c + cj and uj’ = a + uj for the indices j of the unmarked columns in the section containing column r . Mark the r column. The cj ,ajvalues are taken from the list in step 1 . Add a column with values cj’ and aj’ if:
(a) uj’ I L. (b) uj’ is not on the list. (c) aj‘ is on the list and has corresponding cj value that is smaller than cj‘. Underneath the section added, write the x j values from the section containing the newly marked r column. lncrease x, by one for the new section. Designate the cj’, uj’ as c j , aj values and go t o 2. 4. The problem is solved with solution c, = max cj for a, I L for columns in all sections. The values of the variables are found below the section where c, appears; increase x, by one. Algorithm 3 is a single pass algorithm in the sense that the x j values are always available. If desirable, the x j values need not be maintained but may be calculated in a back-tracking procedure. We would then define the function l ( q )= r in step 2. When index s is found in step 4, we calculate the xi values as follows: (a) Initially all x, = 0.
100
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
(b) Take a = a, in step 4. (c) Increase x, by one. (d) Calculate m = a - a,, where a, is the value from the list in step I . If m = 0, the current xj values are optimal. Otherwise, take s = I(m) and return to (c). Example
Find integers xJ 2 0 that maximize z when
3x, 44x, 9x, + 7x,
+ 3x3 = 2, + 6x, i 15,
We list the values in Tableau El with L = 15. We have r = 3, c = 3, a = 6 in Tableau E l . We mark the 3 column and form Tableau E2. We have r = 2, c = 4, a = 7 in Tableau E l . We mark the 2 column and form Tableau E3. No other columns are added. The maximum solution is in column 2 of Tableau E3. It is z = 8 with x2 = 2, x1 = 0, x3 = 0.
1
2* 3*
3 9
4 7
1
2
3
6 7 6 15 13 12
3 6
El
E2
2
I
8 14
I
E3
Tableaus Problems
1 . The map coloring problem was described in Chapter 1. Show how a feasible solution may be found by enumerating values of a,, for t,
- t, = a,, .
When the problem is tabulated as in Algorithm 1 , what is a good column selection criterion that leads to all a,, # 0.
PROBLEMS
101
2. The traveling salesman problem was discussed in Chapter 1 . Show how to enumerate solutions to the objective function until the constraints are satisfied. 3. Solve the example on page 90 by the accelerated enumeration of Section 3. 4. Find integers xi 2 0 and min z for 2x1 3x1 XI
+ 5x2 + 4x3 =
+ 2x2 + 3x3 2
Z,
8,
+ 3x2 + 5x3 2 11.
5. Minimize z when 0 I x1 I 1, 0 I x2 I 2, 0 I x3 I 3, 0 I x4 5 1 in 2x, -7x,
2x,
+ + 3x3 + 5x, = z , - x2 + x3 + 2x4 2 3, + x2 + x3 + 2x4 2 6. x2
102
4
SOLVING INTEGER PROGRAMS BY ENUMERATION
References 1. Balas, E., An additive algorithm for solving linear programs with zero-one variables, Operations Res. 13 (4), 517 (1965). 2. Beale, E. M. L., Mathematical Programming in Practice.” Pitman, London, 1968. 3. Bellman, R., Dynamic Programming.” Princeton Univ. Press, New Brunswick, New Jersey, 1957. 4. Gilmore, P. C . , and Gomory, R. E., The theory and computation of knapsack functions, Operations Res. 14 (6), 1045 (1 966). 5. Greenberg, H., An algorithm for the computation of knapsack functions, J. Mafh.AnaI.App1.,26(1), 159(1969). 6. Le Garff, A,, and Malgrange, Y.,Resolution des programmes lineaires a valeurs entieres par une methode Booleienne “ compacte ”, Proc. 3rd Infer. Conf.Operational Res., Dunod, Paris, 1964, pp. 695-701. ‘I
5
CONTINUOUS SOLUTION METHODS
We turn our attention to a study of continuous solution methods for solving integer programs. The simplex algorithm is an efficient method for solving a linear program for continuous variables and can be used to advantage for the integer problem. First, we present the theory for continuous solution methods in which the integer restriction is temporarily relaxed and the problem solved as a linear program. If the continuous solution is also integer, then the integer problem is solved. If the solution values are fractional, we obtain the optimal integer values from the continuous solution. The method is analogous to the all-integer dual simplex method of Chapter 3. The resulting algorithm is comparable to one by Gomory [2],but has different convergence properties. We then show how to combine the continuous solution and allinteger methods; this process tends to improve the convergence. The approach taken in Sections 4 and 5 was indicated by Gomory [3]. The chapter concludes with a new algorithm for problems having variables with upper bounds, and a method to solve the mixed integer problem.
104
1
5
CONTINUOUS SOLUTION METHODS
A CONTINUOUS SOLUTION METHOD
The integer programming problem may be written as: Find integer x j 2 0 f o r j = 1,2, . . . , n that minimize z when
c n
c j x j = z,
j= 1
(1)
and the a i j , b , , and cj are given integer constants. Suppose the integer constraint on the xj is relaxed, i.e., the xj may take on fractional values. The resulting continuous problem may be solved by the simplex method which converts the equations of (1) to an optimality format
z (2)
+ c cjxj, I
= To
j = 1
where the first t = n - m and the last m variables have been arbitrarily selected as the nonbasic and basic variables, respectively. Since (2) is an optimal canonical form, Ej 2 0 and 6, 2 0; the optimal continuous solution to program (1) is = 6, for the basic variables and xi = 0 for the nonbasic variables. In addition, if the continuous solution has all 6, as integer, program (1) is solved. When any of the 6, are fractional, the equations of (2) are used to obtain the integer solution. Program ( I ) is equivalent to: Find integer xj 2 0 for j = 1, 2, . . . , n that minimize z when the objective function and constraints are given by (2). We shall follow closely the approach of Chapter 3. The method to find the optimal integer solution from (2) consists of making a series of changes of variables to achieve the transformation
(3)
1
A CONTINUOUS SOLUTION METHOD
105
The integer constants d j , djk are developed in an iterative procedure during the solution of the problem. The initial transformation is established by writing (2) in parametric form as
z=z,+
ccjyj, j = 1
(4)
j = l , 2 ,..., t,
xJ- = y J- -> 0 t
x t + i = 6,
+ 1a i j y j , j= I
i = 1,2, ..., m.
Eliminating xi from (2) using (3), we obtain the equivalent problem: Find integer y j 2 0 f o r j = 1,2, . . . ,t that minimize z when
+2
z = zo’
CjtYj> j = 1
f
xt+, = bi’
+ 1I a i j y j , j=
i = 1 , 2 , . . ., m.
The constants zo’, cj’, bi’, and u,!~are developed as a result of the transformation with zo’ = Zo
r
+1
Cj
dj,
j= 1
r
Cj‘
=
1
k=l
ckdkj,
bi’= bi + t
a 1J! . =
t
1aijdj,
j = 1
1iiikdkj,
k= 1
j = l , 2 ,..., t , i = 1, 2 , . . ., m,
all
i
and j .
If the constants are such that cj‘ 2 0, dj and bi’ are nonnegative integers, then the minimal solution to (5) is given by z = zo’, y j = 0 f o r j = 1 , 2, . . ., t . In addition, the minimal solution to (1) and (2) is given by xi = dj for j = 1, 2, . . . , I and x , + ~= bi‘ for i = 1 , 2, . . . , m.
106
5
CONTINUOUS SOLUTION METHODS
In Section 2 we develop transformation (3). The proof of optimality and the equivalence of programs (2) and ( 5 ) then follow by adapting the methods and proofs of Chapter 3. We leave the details to the reader.
2
IMPROVING A NONOPTIMAL SOLUTION
We demonstrate here how to find the transformation (3) that yields an equivalent problem, Eq. (9,and leads in a finite number of steps to the optimality conditions cj‘ 2 0, integer dj 2 0, and integer bi’2 0. We can write (5) as
where x is a column vector with components z, x,, x2, . . ., x,. p i s a column vector with components zo’, d,, d 2 , . . . , d,, bl’, b2‘,. . . ,b,’ and aj is a column vector with components cjf, d l j , d 2 j ,. . . , d t j ,a i j , aLj, . . . , ahj. Initially (4) is also in the form given by (6) with j3 components Z,, 0, 0 , . . . , 0 , 6,, b 2 , . . . , b,,, and aj components C j , o,o,. . . ,o, 1 , 0, . . . , 0, Z l j , Z Z j , . . . , Z m j . The one appears as the ( j + 1) component Of M j . We insure a finite algorithm by maintaining aj lexicopositive. If (4) is written in the form of (6), we see that aj > 0 because all Cj 2 0. Suppose at some iteration we have achieved a form like (6) with aj > 0, all nonobjective function components of p nonnegative and at least one component of j3 fractional. Select a row with fractional p component. Suppose the corresponding equation is given by
+c f
(7)
x, = b,
ajyj. j= 1
Define q = {b,} - b, > 0 andpj = uj - [aj] 2 0. Thus(7)maybewritten as
2
IMPROVING A NONOPTIMAL SOLUTION
107
Defining integer y as the left side of (8), we have y 2 {b,} or y - y’ = {b,} for integer y’ 2 0. From (8) we obtain (9)
2 0. Equation (9) is a new restriction that the y j must satisfy to produce an integer value for x,. Note that 6, may be negative so that the objective function may be used to generate a new restriction. To produce a finite algorithm, we will use the topmost fractional component of p to select the row for (7). Thus x, in (7) represents z or some x i . Next find index s from 1
. 1
- as = 2-min - c c j , Ps
j d f
p j
where J + is the set of indicesj where pi > 0. The constraint (9) may then be written as (10)
~ s = ’
-4
+ j C# sP j y j + ~
s
~
s
.
If y s is eliminated from (6) using (lo), we obtain
+ 1a j t y j ,
x = p’
j= 1
where 4
B’=P+-a,, P S
1
a,‘ = - a,, Ps
and y,‘ has been redefined by new variable y , . If in addition p’ and uj’ are redefined as B and aj ,then (1 1) is again of the form of (6) with aj > 0. The dual lexicographic method is then used in conjunction with (1 1) to find a new continuous solution. In effect, we employ a continuous simplex method in an attempt to achieve an integer solution. As in the lexicographic dual method of Chapter 2, we determine the equation of
108
CONTINUOUS SOLUTION METHODS
(6) [after redefining (1 1) as (6)] with most negative p component, excluding the objective function. If all such components are nonnegative, the continuous solution is given by x = p. Otherwise, let the equation determined by a negative component be
+ C ajyj, f
x, = -b,
j = 1
where b, > 0. Define J + as the set of indices j where aj > 0. Define index s from
. 1
1
- us = I-min - a j a, j s J f aj
and write (12) as (13)
x,= -b,+
Cajyj+a,y,
j#s
= Ys’
2 0,
where y,‘ is a nonnegative integer defined equal to x,. Eliminate y, from (6) using (13) with y,’ and obtain the form of (1 1) where
B’
=p
+ -b,a,a s ,
and y,’ has been redefined as y s . Note that the constant term in the x, row becomes zero and the equation becomes x, = y,. Thus x, is considered nonbasic. This enables us t o make a lexicographic argument to show that optimality for the continuous solution is reached in a finite number of steps. When the form of (1 1) is achieved, it is redefined as (6). If any nonobjective components of fl are negative, we repeat the procedure that led to the selection of (12) and the new form of (6). Otherwise, we have
2
IMPROVING A NONOPTIMAL SOLUTION
109
achieved a new continuous solution. If the continuous solution is fractional, then (9) is developed and the method repeated until the optimal continuous solution is all-integer. The entire procedure is contained in Algorithm 1 1. Solve (1) by the simplex algorithm of Chapter 2 to obtain a form equivalent to (2). If the basic continuous solution has all integer values, the integer programming problem is solved ; stop. Otherwise, go to 2. 2. Develop a tableau by listing the columns x I p, a,, a z , . . . , CI,. Column x is a listing of the variables. The first element of x is z ; the next t are the nonbasic variables in order and the next m are the basic variables in order of equation appearance. Assuming the basic and nonbasic variables are indexed as in (4), the B and uj values are assigned accordingly. Initially, then, z‘, = Z, , cj’ = C j , dj = 0,djj = 1, dij = 0 for i Z j , bi’ = tii and a t = iiij where the unprimed values are from (2). Use the corresponding values when the problem format is different; the basic and nonbasic variables are indexed according to the problem. Go to 3. 3. Select the topmost row with fractional /3 component. The row values are b, , a,, a2, . . . , a,. Calculate q = {b,} - b, and p j = aj - [ a j ] , f o r j = 1,2, . . ., t . Go to 4 (a). 4. (a) Define J + as the index set where pi > 0. Determine index s from 1 . 1 - mS = l-min - a j . Ps
j s J + Pj
Calculate new column values /3’ = p
+ -4 CIS, Ps
,
1
us = - a,. PS
Designate p’, aj‘ as the current p, a j , and go to 4 (b).
110
5
CONTINUOUS SOLUTION METHODS
(b) If no component of p beyond the first is negative, the optimal continuous solution is achieved. If the optimal solution is fractional, go to 3. If the solution is all-integer, the problem is solved with values x = p. Stop. Otherwise, select the nonobjective row with most negative p component. The row values are -bo, a,, a 2 , ..., a , . T a k e q = b o , p j = a j , j = 1,2, ..., t and go to 4(a). Example
Find integers xi 2 0 that minimize z when
3x1 - x2 + 12x3 + x 4 = Z , XI + x 2 + 3x3 - 3x4 =4, 3x1 - 3 x 2 + 1 1 x 3 + X4 = 2.
1 16 -
3
0 0
7 3 5
3
-5
- 10
1 0
0 1
3
1
2
10 -_ 3 -1
Z
3
4 -
3 r
1
7 - 9 3
47 6 5 6
1 3 -1 2 2 0 0 1 -1 - 5 3 2 + +
1
2
2 3
-5
0 3
0 3
2
x3
x3
x4
x4
X1
X1
x2
x2
Tableau E5
3
+4x4 = 5 + +x3 + +x4.
x2
X3
6
6
1
3
1 +
2
5 5 12 0 1 - 1 2 1 2 5 -2 6 5 2 3 Tableau E6
1. Using the simplex algorithm, we transform the equations to z = ' d3 + i -3 x 3 + 9 x 4
- S3
5
-1
1 Z
XI = f
2
Tableau E3
Z
Tableau E4
20 3
1 3
Tableau E2
Tableau El 1
2
3
CONVERGENCE IN THE ALGORITHM
111
2. Tableau El is formed. 3. The zrowis selected. q = *,pi = 4 , p 2 = +. 4. (a) J + = (1,2); s = 1. We form Tableau E2. (b) Reminimize. The x1row is selected. q = 1 , p 1 = - 5 , p 2 = 3. (a) J + = (2); s = 2. We form Tableau E3. 3. The z row is selected. q = g, p , = 5,p 2 = 6. 4. (a) J f = (1,2); s = 2. We form Tableau E4. 3. The x3row is selected. q = +,p 1 = 4, p 2 = 4. 4. (a) J + = (1,2); s = 2. We form Tableau E5. 1 3. The z row is selected. q = $ , p l = 0, p 2 = 5. 4. (a) J + = (2); s = 2. We form Tableau E6. (b) The minimal solution is z = 12, x3 = 0, x4 = 2, x1 = 5, x2 = 5.
3
CONVERGENCE I N THE ALGORITHM
Since aj > 0, every utilization of step 4(a) of Algorithm I causes to increase lexicographically. In an argument similar to that in Chapter 3, we show that the components of p have finite upper bounds. The problem of producing a finite algorithm arises because of the fact that although p is lexicographically increasing, the components of p may change only by minute amounts over an infinite number of iterations. To prevent these infinitessimal changes and obtain a finite algorithm, we select the topmost row with fractional p component in step 3 of the algorithm. We shall prove that the algorithm takes a finite number of iterations with this row selection strategy. There are two types of convergent processes occurring during the use of the algorithm. One of these occurs during the reoptimization phase. Each minimization cycle takes only a finite number of iterations because the number of basic variables is fixed during the cycle. Since is lexicographically increasing from iteration to iteration, the same set of basic variables may not recur and since there exist a finite number of sets of basic variables, the optimal continuous solution must be arrived at in a finite number of iterations.
112
5
CONTINUOUS SOLUTION METHODS
The second convergent process occurs at the end of each minimization phase. At the end of some cycle the continuous solution must eventually be all-integer. The number of minimization cycles must remain finite. To achieve a finite number of cycles, we first show that the components of /3 have finite upper bounds when a finite optimal integer solution exists. Assume there exists a finite optimal solution xo with objective component zo, i.e., xo = (zo, xIo,xzo, . . ., x,,'). Thus xo must satisfy (6); there must exist y j 2 0 that satisfy
C t
(14)
ajYj
j= 1
= xo - p.
Consider the first component of p, the objective value zo'. Although monotonically increasing, zo' I zo holds for all minimization cycles. This is readily apparent because if zo' > zo after some cycle, the objective row from (14) is
2 cityj
i= 1
= zo
- zo'
< 0;
since cj' 2 0, then, as seen from (15), no y j 2 0 values can possibly produce z = zo. Thus zo', the first component of p, is bounded from above by zo. The other components of p remain bounded. Suppose, on the contrary, that the ( r + 1) component, 1 I r s n, takes on arbitrarily large values. Take the (r + 1) component as d, with d, > x r 0 (for r > 1, take d, = 6,' and drj = aij). The corresponding equation from (14) is t
(16)
C d , j y j = x,O - d ,
j= 1
< 0.
If all drj 2 0, then no y j 2 0 exist that satisfy (16) and d, would be bounded by xr0. Thus if (1 6 ) exists, some d,j < 0 and
3
CONVERGENCE I N THE ALGORITHM
113
must hold ;J - is the set of indices j where drj < 0. We perform the following steps. 1. Obtain index s from 1
-a, -d r s
.
1 -drj
= 1-min -a j . jeJ-
2. Write (17) in the equality form
C drjyj + Y,'
(18)
jEJ-
= xr"
- dr,
where y,' 2 0. Use (18) to eliminate y, from (6) and obtain the form of (1 1) where
The boundedness of p follows easily by induction. Suppose dl > x l 0 . The objective value from (19) becomes zg = zo'
+ XlO-dI dl,
c,
I .
3
and d,, y , we employ either the determinant reduction procedure of Algorithm 3 or the all-integer algorithm. We show this in Algorithm 4
1. Assume that the integer program is in the nonoptimal format of (6) with clj > 0 and available D value. Develop a tableau by listing the columns x I p, C I ~ ,a 2 , . . . , CI,.Assign the upper bound value y . Go to 2. 2 . If all components of p and aj are integers, set D = 1. If D 2 y go to 3. Otherwise, go to 4.
122
5
CONTINUOUS SOLUTION METHODS
3. If no component of p beyond the first is negative, the optimal continuous solution is achieved; if the optimal solution is fractional, go to 4. If the solution is all integer, the problem is solved with values x = p. Stop. Otherwise, select the nonobjective row with most negative p component. The row values are -b,, al, a 2 , . . . , a,. Take q = b,, pi = aj f o r j = 1, 2, . . . , t . Define J + as the set of indices j where pi > 0. Determine index s from 1
. 1
- us = 1-min - a j . Ps jsJ+ pj If D = 1, and p s > y, go to 6. If D > 1 and p s D > y, go to 4. Otherwise,
D’ = p , D. Designate D‘ as the current D and go to 5. 4. Select the topmost row with fractional p component, if possible. If all components of p are integers, select a row with fractional aj component. The row values are b, ,a,, a 2 , . . . ,a,. Calculate q = {b,} - bo and pi = aj - [aj] for j = 1, 2, . . . , t. Define J + as the set of indices j where pi > 0. Determine index s from . 1
1
- a, = 1-min - mi. Ps jeJ+ pj Calculate D‘ = p s D; designate D’ as the current D and go to 5. 5. Calculate new column values
p’
4
= p +-a,, Ps
1
asr= - a, P S
Designate p‘, aj’ as the current p, a j . If D = 1, go to 3.Otherwise,go to 2. 6. Select the nonobjective row with most negative p component. The row selected is - b, , a,, a 2 , . . . ,a,, . Define J as the set of indices j where uj > 0. Determine index s from +
a, = 1-min a j jcJ+
6
BOUNDED VARIABLE PROBLEMS
123
the largest integer that maintains x j - p j a s > 0 for ~ E J ' , 1 . Take I = maxjsJ+ a j / p i . Calculate pi = {aj/A} and Go to 5. Example
We shall solve the example problem for Algorithm 1. Take y = 5. Tableaus El and E2 are identical with those of the first algorithm. We start with Tableau E2; D = 2. 3. The x1 row is selected. q = 1, p I = -5, p2 = 3 ; J + = (2), and s = 2.p2L) = 6 > y . 4. The z row is selected. q = 0, p1 = 3,p2 = 3;J + = (1, 2), and s = 2 . D = 1. 5. We form Tableau E3. 3. The x1 row is selected. q = 1, p1 = -8, p2 = 6; J + = (2), and ~=2.p= 2 6 > 7. 6. J + = (2); s = 2. I = 6 , = 1 , p1 = - 1 , p2 = 1. 5. We form Tableau E4. ' 3. The minimal solution is reached in Tableau E4. z = 12, x 3 = 0, x4 = 2, x1 = 5, x2 = 5. 1
2
1
2
1
2
5 -2 5 2
6 3
Z
x3
-1 Tableau E2 6
-8 2 -1
Tableau E3
x4 X1
x2
Tableau E4
BOUNDED VARIABLE PROBLEMS
We can use the approach developed in Chapter 3 for the bounded variable problem. We seek to find the solution to (1) with the additional constraints xi I mj for j = 1, 2, . . . , n, where the mi are given integer values.
124
5
CONTINUOUS SOLUTION METHODS
The integer constraint on the x j is relaxed and (1) is solved as a linear program by the bounded variable technique of Chapter 2. The canonical form of (2) is then achieved with feasible xi values. The optimality conditions are: C j 2 0 for nonbasic variables at their lower bound zero and F j I 0 for nonbasic variables at their upper bound mi. If the feasible solution is fractional, we again find the optimal integer solution from (2) through transformation (3). The initial transformation is given in (4). If (4) is then written in vector format as in (6), we may not have all M~ lexicopositive. If F j < 0, then we make the change of variables y J. = mJ. - y.’ J ’
where integer yj’ 2 0. Thus we obtain the form of (1 I ) with
ff
.‘ = - u .J ’
for j E J - ,
= ff. J’
otherwise ;
ff.’ J
J - is the set of indices j where Cj < 0. If p‘ and 0 ~ ~are ’ redefined as and a j , then we have the form of (6) with uj > 0. Nonobjective components of /3 are infeasible in the continuous minimization phase whenever they are negative or whenever they exceed their upper bound values. When a component of /3 is negative, the lexicographic dual simplex method is used as in Algorithm 1. When a component of /3 exceeds its upper bound value, we can modify the dual simplex method. Suppose at some iteration we have achieved a form like (6) where uj > 0 and one or more of the components of /3 exceed their upper bound values. Select one such row from (6). For that row we must have
+ C a j y j 5 m,, f
x, = bo
j = 1
where bo > m, . The y j then satisfy
C ajyj 2 0 f
(24)
y’ = m, - bo -
j=l
6
BOUNDED VARIABLE PROBLEMS
125
for integer y'. If we define q = bo - m,
> 0, j = l , 2 ) . . . )t ,
pi = - a j ,
then (24) is exactly of the form given by (9).' Using (24) as (9), the analysis follows as before; the selection of index s remains the same and we develop transformation (10) and the new form of (6) given by (1 1). The lexicographic property of the aj is maintained. For upper bound problems, then, we have Algorithm 5
1. List mi,m 2 , , . ., m, and solve (1) by a bounded variable simplex algorithm to obtain a form equivalent to (2). If the feasible continuous solution has all-integer values, the integer programming problem is solved ; stop. Otherwise, go to 2.
2. Same as step 2 of Algorithm 1. 3. DefineJ- as the set of indicesjwhere Cj < 0. I f J - has no members, go to 4(a). Otherwise, calculate new column values as
B'=B+ C mjaj, jcJ-
for j E J - ,
aj' = - a j , Uj'
otherwise.
=aj,
Designate p', aj' as the current p, aj and go to 4(a). 4. (a) If any component of p exceeds the corresponding upper bound value, go to 5(a). Otherwise, go to q b ) . (b) If any nonobjective component of p is negative, go to 5(b). Otherwise, go to 4(c).
* If (24) is handled the same way as (7), a constraint stronger than (24) may be found by defining q = {bo - m,} - bo m,,p = -a, - [-a,] for (9).
+
126
5
CONTINUOUS SOLUTION METHODS
(c) The optimal continuous solution is achieved. If the solution is also all-integer, the problem is solved with values given by x = p. Stop. Otherwise, the solution is fractional; go to 5(c). 5. (a) Select the row with component that exceeds the upper bound by the greatest amount. The corresponding variable and row values are x,., b,, a,, a,, . . a,. Takeq = b, - m r , p j = - a j , and go to 6. (b) Select the nonobjective row with most negative p component. The row values are - b , a,, a 2 , .. . , a , . Take q = b, , pi = aj and go to 6 . (c) Select the topmost row with fractional p component. The row values are b,, a,, a 2 , . . . , a , . Take q = {b,} - b,, pi = aj - [aj] and go to 6. 6. Define J + as the set of indices j where p j > 0. Determine index s from 1 . 1 - cis = 1-min - c i j . Ps j d + pj . 7
Calculate new column values
B’ = p + - u4 s , Ps
1
Us’ = - a s PS
Designate p’, uj’as the current p, aj and go to 4.
6
BOUNDED VARIABLE PROBLEMS
127
1. Using the bounded variable technique of Chapter 2, we transform the equations to
z , s3 - l 3x1
+!fx4*
x 3 = $ - 4 x 1 + +x4> x2 = 3 +xl - 3x4;
+
the minimal continuous solution is z = 3, x1 = I , x2 = 3, x3 = +, x4 = 0. 2. Tableau El is formed. 3. J - = (1). We form Tableau E2. 5 . (c) The z row is selected. q = 3, p 1 = +, p z = 3. 1
8 3
0 0
_ _34
3
3
1
2 3
1 0
3
5
2
_ _13
1
1 7
z
3
0 1
XI
x
4
3
x
3
3
x2
_. 2
-2
Tableau El 1
3
1 -1 0 0 1
4
5
1
3
0 1 2
3
-T
2
3
2
-7
1 Z
x3
x2
6. 5. 6. 4.
2
3
Tableau E2
x x
Tableau E3
2
1
l 4
3
2
1 0 l l 2 + 1 0 -1 0 -1 0 l 1
Tableau E4
J + = ( 1,2) ;s = 1. We form Tableau E3. (a) The x3row is selected. q = 2 , p , = - 4 , ~=~2. J + = (2); s = 2. We form Tableau E4. (b) The minimal solution is achieved. z = 3, x1 = 1, x4 = 1, x3 = 1, x2 = 0.
128
7
5
CONTINUOUS SOLUTION METHODS
T H E MIXED INTEGER PROBLEM
The mixed integer problem is expressed as in (1) with the difference that not all of the xi variables are restricted to be integers. We find the continuous solution and follow the procedure developed in Section 1. Hence, we assume that at any iteration we have the form (6). We follow Gomory’s [ I ] method. If a component of p is fractional while the corresponding variable is an integer variable, then we have not achieved the minimal solution. Select such a row. Suppose the corresponding equation is given by xo = bo
+
t
ajyj;
j= 1
b, is fractional-valued and xo is one of the integer restricted variables. We then write (25) in the form x, = b,
+ C ajyj+ C ajyj; jeJ-
jeJ+
define J + as the set of indices j where aj > 0 and J - as the set of indices j where aj < 0. If ajyj+ C ajyjIO, jeJ
jeJ-
then x, s b,; also, x, I {b,} - 1. Hence, we obtain
whereq
=
{b,} - 6, > 0. Therefore,
and we can write
We then certainly have
cajyjIq-l
jsJ-
7
If, on the other hand
C ajYj +
jsJ
+
THE MIXED INTEGER PROBLEM
jsJ-
ajyj
129
20,
then x o 2 b,; also, xo 2 {bo).Hence, we obtain Therefore,
C
jsJ
+
ajyj
2 4.
We then certainly have
which is the. same as (26). Thus, (26) holds in either case. In the mixed integer problem, we replace Eq. (9) by
y'=-q+
Cajyj-
jsJ+
2 0.
C
jsJ-
4
-a j Y j 1-4
The analysis then is the same as after (9), where we define
We develop the new form of (6) given by (1 1). We can also produce a stronger constraint than (27) if some of the vj variables are integer. The integers y j are apparent when we have = y j as one of the equations of (6) for integer x i . Otherwise, y j is fractional. We write (25) as xo = bo
+ C a j y j+ C a j y j+ j.I
jeF+
ajYj, j EF-
where we define : I as the set of indices j where y j is integer, F' as the set of indices j where y j is fractional and aj > 0, and F - as the set ofindices jwhere y j is fractional and aj < 0.
130
For j
5
CONTINUOUS SOLUTION METHODS
E Z,let
(29)
aj = [aj] + f j ,
if
aj = {aj> - g j ,
if g j < q.
f j
_< 4%
or else (30)
Define I + as the set of indices j E I when (29) holds an I- as the set of indices j Elwhen (30) holds. Thus, we can write (28) as
where integer y is
Following the previous analysis, we obtain f
Y' = - 4
+ j1 PjYj = 1
20 to replace (9) where
pj=-
1
fj??
4 gj, 1-q
[-&
aj,
for j
EI+
for j
E F+
for j E 1 for j E F - .
The new constraint is stronger than (27) in the sense that p' may be lexicographically larger with the possibly smaller p s value. The convergence in the mixed integer case follows exactly as in the integer case if z is constrained to be integer and the row selected for (25) is for the topmost eligible component of p.
PROBLEMS
131
Example
Consider the example of page 110 with only z, xl, and x3 as integers. 2. Tableau El is formed. 3. The z row is selected. y , is integer. q = 3, p1 = +, p 2 = IF. 4. (a) J + = (1,2); s = 2. We form Tableau E2. 3. The x1 row is selected. y , is integer. q = -$,pl= *,p2 = f . 4. (a) J + = (2); s = 2. We form Tableau E3. (b) The minimal solution is z = 7, x3 = 0, x4 = x1 = 3, x2 = +.
+,
1
1
2
2 Z x3 10 2
5
13
18
5
2
x4 X1
x2
Tableau E2
Tableau El
7 0
1
2
0 1
% 0
L
-1
3
-4
5 2
2
--1 2
Tableau E3
Problems
1. Find integers xi 2 0 that minimize z for 16x1 + 15x2
+ 2X1 + Sx,
+ 12x3
+ 3x2 + 5x,
= z, 2x3 - x4 = 4, 3x3 - X 4 = 3.
2. Find min z in the following integer programming problem : x2- 5x3 + x q = z , X I - x2 + 3 x 3 + x 4 = 8 , x1 + 2x2 - 2x3 - 3x4 = 4.
-3x1
3. Minimize z when
+
z = 3 + +xl + y x,=*-+x,
x3,
- +x3,
x4 = 3 - +xl - + x 3 .
1 4
1
2 4
132
5
CONTINUOUS SOLUTION METHODS
4. By the method in this chapter, minimize z when xi j = l,2,3:
+
+ + +
x1 2x2 x3 = z , 2 x , + 2x2 x3 2 3 , 2 x , - 2x, x3 2 1, x1 + 2 x 2 - 2 x 3 2 1 . 5. Find integers x j 2 0 and min z for
2 x 1 + 3 x , - x3
+
xq
= z,
+
x4
= 7.
x1 - 2 x 2 + 2 x 3 - x4 = 4,
2x,
+
x2 - xj
=0
or 1 for
REFERENCES
133
References Gomory, R. E., An algorithm for the mixed integer problem, RAND Corp., Paper P-1885,February 22,1960. 2. Gomory, R. E., An algorithm for integer solutions to linear programs, in “Recent Advances in Mathematical Programming” (R. L. Graves and P. Wolfe, eds.), pp. 269-302. McGraw-Hill, New York, 1963. 3. Gomory, R. E., An all-integer integer programming algorithm, in “Industrial Scheduling” (J. F. Muth and G. L. Thompson, eds.), pp. 193-206. Prentice-Hall, Englewood Cliffs, New Jersey, 1963. 4. Martin, G. T., An accelerated Euclidean algorithm for integer linear programming, in “Recent Advances in Mathematical Programming” (R. L. Graves and P. Wolfe, eds.), pp. 311-317. McGraw-Hill, New York, 1963. 1.
N U M B E R THEORY RESULTS
It is instructive to regard various concepts of number theory in the context of integer programming. Since some of the number theory results are needed in the next chapter, it is appropriate to offer the applicable material at this point. We shall also present algorithms using integer programming methods for some of the classical problems of number theory.
1
THE EUCLIDEAN ALGORITHM
The Euclidean algorithm provides a method for computing the greatest common divisor (gcd) of a set of positive integers a,, a 2 , . . . , a,. We can assume positive integers since the greatest common divisor of a set of integers is unchanged if any ai of the set is replaced by -ai. The problem can also be solved as an integer program: Find integers XI, x 2 , . . . , x, that minimize (1)
where z 2 1.
z
=
c
j=l
ajxj,
136
6
NUMBER THEORY RESULTS
Proof that the optimal value of z is the greatest common divisor of the aj is contained in the following four theorems. (See Griffin [ I ] ) . Theorem 1
The minimum positive integer in the set of integers L=
n
C ajxj
j= 1
is the greatest common divisor of the set, where the aj are given integers and the xi are all possible integers. Proof
There is a finite number of integers between zero and any positive integer. The set L contains at least one positive integer; hence, the set has a minimum positive integer. Denote the minimum positive integer by
d=
ajxj‘.
j=l
By Euclid’s theorem, for any integer s and positive d, there exist integers p and q such that
(3)
~=pd+q,
O ~ q < d .
For any s in L we also have n
s= Cajx;.
(4)
j= 1
From (2), (3), and (4) we obtain
or
2 aj(xy - p x
j=1
= q.
1
T H E EUCLIDEAN ALGORITHM
137
Since x; - pj xj' is an integer for each j , then q is an integer in L. Also, since q < d and d is the minimum positive integer in L, q must equal zero. From (3) we see that d divides s and thus divides every member of L;' d is a common divisor of L. No integer greater than d is a common divisor of L since d is in L and it would have to be a divisor of d. Therefore. dis the greatest common divisor of L. Theorem 2
The greatest common divisor of the uj exists and is the minimum positive integer in the set L. Proof
Each integer aj is in L. From Theorem 1 the minimum positive integer d i n the set is a common divisor of all the aj . Therefore, there exist integers xi' where
consider d' any common divisor of the a j . We have aj = aj'd', where the uj' are integers. Hence,
any common divisor of the aj divides d. The greatest common divisor of the uj then exists and is d. Theorem 3
If d is the greatest common divisor of the u j , then d is the greatest common divisor of the set of integers L. An integer b divides an integer a if there exists an integer c such that a = bc.
138
6
NUMBER THEORY RESULTS
Proof
If d = gcd(a,, a 2 , . . . , a,,), then aj = aj'd, where the aj' are integers. Also, d is a common divisor of set L since L =d
airxi. j= 1
Furthermore, any common divisor of the set divides the aj because the aj are in L. Any common divisor of the aj is also a divisor of d (as seen
in the proof of Theorem 2); hence, a common divisor of the set L must divide d. Thus d is the greatest common divisor of the set.
Theorem 4 The greatest common divisor of the set L is unique. Proof
Suppose dl and d2 are greatest common divisors of L. Since d, and d2 are in L, dl must divide d2 and d2 must divide d l . Consequently, dl I d2 and d2 I dl. Therefore, dl = d 2 . From Theorems 1-4, it is clear that the greatest common divisor of a,, a2, . . . , a,, may be obtained by solving the integer program (1). The optimal value of z is then the greatest common divisor. The application of Algorithm 3 of Chapter 3, properly modified for variables unrestricted in sign, provides a means for solving (1) and determining gcd(a,, a 2 , , . . , an).Write problem (1) as n
(5)
j= 1
x1. = y 1' .
j = l , 2)...)n.
We make a series of transformations of the y j in ( 5 ) until an integer solution is apparent. At the start of any iteration we have
1
THE EUCLIDEAN ALGORITHM
139
where x is a vector with components z , xl, x 2 , . . . , x, and aj is a column vector. Initially, the first component of aj is u j , the ( j + 1) component is unity and all other components are zero. Suppose the first component of aj is taken as u j . At any iteration define J + as the set of indices j where aj > 0. If J + contains more than one member, find index s from a, = min a j jeJ+
and calculate vj = [uj/u,] for all j E J'. Make a change of variables for y , given by
j # s
where y,' is an integer. Eliminating y , from (6) using (7), we obtain x=
c
j #s
Uj'Yj
+ a,y,'
where
(9)
,- a,v .
a.' = a .
a,' J = a J. '
J ,
j E J + ,
j # s
j $ J'.
Designating aj' and y,' as the current & j and y , , respectively, we obtain the form (6). The first component of aj is again called a j . We repeat the process. It is clear that in a finite number of iterations a form must develop with only one positive uj value. The equivalent optimization problem becomes: Find integer y , that minimizes z when
where s is the only index in J'. The other equations of (6) are not restricting and can be temporarily disregarded. The obvious solution of (10) is y , = 1 with optimal z = u s .(Again, the a, value need not be one of the original uj values; it represents the result of (9) over possibly many ilerations.) The optimal z value obtained is the greatest common divisor
140
6
NUMBER THEORY RESULTS
of the original aj values. The complete solution is then
+ cCLjyj,
x = CL,
jZs
where the yj, f o r j # s, act as parameters. We use (1 1) to find all possible xj values that produce the gcd(a,, u 2 , . . . , a,). If the greatest common divisor is desired and the xj values not needed, the method is performed with only the first components of the CLj
.
15 1 0 0
36 0 1 0
81 0 0 1
15
3 6 0 5 -2 -3 -2 1 -1 0 0 1
6
6 -5 1 0 0 1
1 -2
0 0
3 0 0 5 - 1 2 -3 -2 5 -1 0 0 1
and obtain z = 3 as the greatest common divisor with x1 = 5 - 12y2 -3y3,x2= -2+5y,-y3,andx3=y3. Theorem 5
The set of integers L consists of all and only multiples of d,the greatest common divisor of a,, u 2 ,. . . ,a,. Proof
Every integer of the set L has been shown to be a multiple of d. Since there exist integer xi* values with
d = Cajxj*, j = 1
then kd
C aj(kxj*) n
=
j=l
is obviously in L. Thus, all, and only, multiples of d are in L.
LINEAR DIOPHANTINE EQUATIONS
2
141
Theorem 6
If d, is the greatest common divisor of a,, a,, . . . , a,, then the greatest common divisor of a,, a,, . . . , a,, a,,, is the greatest common divisor of d, and a, + 1. Proof
There exist integers xi so that n
d, = C a j x i . j= 1
If d2 = gcd(dl, a, +
then there exist integers y,, y 2 so that ~ Z = ~ I Y+ Ia n + l Y z .
We have
n
d, =
C ai(y,xj) + an+
1 ~ 2 .
If d3 = gcd(a,, a 2 , . . . , a,, a,,,), we see that d3 divides d , . Furthermore, since dl divides aj , for j = 1,2, .. . , n, and since d2divides d, and a,,,, we see that d2is acommondivisor ofaj, for j = 1, 2, . . . , n, n + 1 ; hence, d, divides d, from the proof of Theorem 2. Therefore d2 = d3.
2 LINEAR DIOPHANTINE EQUATIONS
A Diophantine equation is a rational integral equation in which the constants are integers and the solutions, or values of the variables that satisfy the equations, are integers. Theorem 7
The linear Diophantine equation n
aJ. xJ . = b
j = 1
142
6
NUMBER THEORY RESULTS
has a solution if and only if the greatest common divisor of a,, a,, . . . , a, divides b. Proof
c:=,
In the previous theorems, we showed that d = gcd(a,, a,, . . . , a,) divides L = a j x j for all integral values xi. If (12) has a solution, then d divides b. Alternatively, if d divides b, let b = b, d. Since d is in L there exist integers xj’so that
b = b, C a j x j ’ j= 1
Thus, xj = b, xj‘ solves Eq. (12). Integers a, and a, are relatively prime if the greatest common divisor of a, and a, is unity. Thus, a necessary and sufficient condition that a, and a, are relatively prime is that there is a solution of the linear Diophantine equation
a,x,
+ a,x,
= 1.
Theorem 8
If m divides ab, and m and a are relatively prime, then m divides b. Proof
Since m and a are relatively prime,
mx,
+ ax, = 1
has a solution xi’, x,‘. Then and
+ ax,’) = b mbx,’ + abx,‘ = b. b(mx,’
Since m divides ab, then ab = mr for some integer r and
m(bx,’ Hence, m divides b.
+ rx,‘) = b.
2
LINEAR DIOPHANTINE EQUATIONS
143
The system of linear Diophantine equations
may be solved by modifying Algorithm 3 of Chapter 3 for variables unrestricted in sign. Write (13) as
(14)
iaijyj=bi,
j= 1
yj=xj,
i = 1 , 2,..., m,
j = l , 2 ,..., n .
We make a series of transformations of the y j in (14) until an integer solution is apparent. At the beginning of any iteration we have
2 u j y j = x. n
j= 1
At the first iteration x is a vector with components b,, b, , . . . , b,, , xl, x2, . . ., x,, and N~ is a column vector. Also, initially, component i, 1 I i Im of aj is uij;component m + j is unity and all other components are zero. Consider the first equation of (1 5) as
Define J as the set of indices j where uj # 0. If J contains more than one member, find index s from
where I a I means the absolute value of a. For each j E J ,j # s, determine an integer vj value that produces the minimum of I aj - vju,l. Note that vj may be positive or negative. Make a change of variables for y s given by
144
6
NUMBER THEORY RESULTS
where y,' is an integer. Eliminating y s from (16) using (17), we obtain Mj'yj -k M,ys' = X,
(18)
j#s
where
,= a . -8 V.
M.'
J ,
a.' J = M J. '
jEJ, jZs, j#J.
Designating M ~ ' and ys' as the current M~ and ys respectively, we obtain the form (15). We repeat the process. It is clear that in a finite number of iterations a form must develop with only one nonzero term in (16). The equation then reads (19)
asYs=bo.
To have a solution, usmust divide b, . If this is the case, we eliminate y, from (15) using (19) and obtain where /3 = ~,b,/u,. If p is subtracted from both sides of (20), we again have the form (IS) with x replaced by x - p. Note that in this new form of (1 5) the first row has all zero values on both sides. The s column has been eliminated. Next the first row is discarded. The new first row is then changed like (16) until only one nonzero element is left. We obtain (19) and (20) once more. Another column and row are eliminated and the process is continued until only the last n rows of (15) remain; at this point the solution is (21)
p'
-k
1 Mjyj =
X,
where p' is the result of m applications of (19) and (20) and the sum in (21) is over the columns remaining. The yj variables in (21) represent parameters. Exampfe
Find integers xj that satisfy 4x, - 4x, 5x1 - 7x2
+ 3x3 = 3, + 6x3 = 4.
3
We list successively with x
= (3,4, xi,
LINEAR CONGRUENCES
x2 , x3)
1
0 -2
0 9 1 -3 1 0 0 4
-1 0 0
0
0
145
1 0
0
0
-1
-1
We obtain p = (3, -3, 3, 0, -3); x is replaced by (0, 7, x1 - 3, x 2 , xg + 3). We discard the first row and first coIumn and list successively
-2 1 1
0 We obtain fl
9 -3 0
4
-2 1
1 1
1 0
4 4
0 3 9 8
1 1 4 4
= (7,7,28,28). The solution is
x, = 10
+ 3y,
x2 = 28 -t- 9y, x3 = 25 -i-8 y .
3
as
LINEAR CONGRUENCES
For any nonzero integer m an integer a can be expressed uniquely a = qm
+ r,
where 0 5 r < I rn I. All integers can be separated into m classes according to the remainder r developed after division by m. Two integers, a and b, are congruent modulo m if and only if division of each by m results in the same remainder r. Thus (22)
+ r, b = q2m + r,
a = qlm
146
6
NUMBER THEORY RESULTS
where 0 5 r < 1 m 1 ; a and b are considered to be in the same residue class. The integers congruent to a given integer for the modulus m make up a residue class modulo m. The notation
a=b
mod
m
means that a is congruent to b modulo m or that a is congruent to b for the modulus m. We have Theorem 9
Two integers are congruent modulo m if and only if their difference is divisible by nonzero m. Proof
Suppose a is congruent to b modulo m. From (22) a-
b = m(q1 - 92)
and m divides a - b. Conversely, if a - b = mk, than a = b mod m, since otherwise
+ rl, b = q2m + r 2 ,
a =qlm
with 0 5 r l , r , < I ml. We have a - b = m(q, - q 2 )
+ r l - r,;
hence, m divides rl - r , , which can occur only if r , = r 2 .Thus we have proved that a and b are in the same residue class. The following properties of congruences can be stated : ac
1. I f a G b m o d m a n d c = d m o d m , t h e n a + c - b k d m o d m a n d G bd mod m. 2. If d divides m and a = b mod m ,then a G b mod d.
LINEAR C O N G R U E N C E S
3
147
3. If a = b mod m, and a = b mod m,, then a = b mod m3 where m3 is the least common multiple of m, and rn, . 2 4. If ac = bc mod m and gcd(c, m) = 1, then (I = b mod m. 5. If ac 3 bc mod m and gcd(c, m) = d, then a 3 b mod m,, where m =mod. We need t o solve the linear congruence
mod m.
ax= b
We have first Theorem 10
When a is prime to m, the congruence
(23)
a x = 1 mod rn
has one and only one solution modulo m. This solution is also prime to m. Proof
The congruence (23) is equivalent to the equation
(24)
ax
+ my = 1,
which has an integer solution, from Theorem 7, when a is prime to m. If both x1 and x2 satisfy (23), then ax, fax,
mod
in
and
x1 = x 2
mod
m.
Furthermore, if x , satisfies (23), then the equation xlu
+ mu = 1
If m = 9,a = q2b, then m is a common multiple of a and b. If, in addition, every common multiple of a and b is a multiple of m , then m is the least common multiple of a and 6 .
148
6
NUMBER THEORY RESULTS
has a solution for u and u, namely, u x1and rn are relatively prime.
=a
and u
=y
from (24). Thus
Theorem 11
When a is prime to rn, the congruence (25)
a x = b mod
m
has one and only one solution modulo rn. Proof
Since a is prime to rn, from Theorem 10, the congruence a x - 1 mod
rn
x = x l - mod
rn.
has a unique solution Clearly, x , b satisfies (25). As proved in Theorem 10, there can be but one solution modulo rn; the solution, however, may not be prime to m. Theorem 12
If d is the greatest common divisor of a and rn, then
(26)
ax
=b
mod
rn
has a solution if and only if d divides b. There are d solutions modulo rn when d divides b. Proof
Since d = gcd(a, rn), then a = a, d and rn = rn, d. If the congruence has a solution, there exist integers x’ and y’ so that Thus
ax‘
+ my‘ = b .
d(a, x’
+ m, y ’ ) = b
149
LINEAR CONGRUENCES
3
and d divides b. Conversely, if d divides b, let b = cd and reduce the congruence to (27)
c mod
a,x-
m,.
Furthermore, there exist integers x* and y* so that ax*
+ my* = d ;
hence, aox*
+ moy* = 1.
Thus a, and mo are relatively prime and (27) has one solution x1 modulo m , . Consider the class of integers of the form x1 km,. All of the integers in the class also solve (26). Some of the integers are congruent to each other modulo m and would represent the same solution to (26). For these integers there would be values kl and k , with
+
x1
+ k l m o = x1 + k 2 m o
mod
m.
Hence,
mo(kl - k,) z 0 mod
m
and
k,
= k,
mod
d.
+
Therefore, when k is 0, 1, . . . , d - 1 the integers x, km, are exactly d solutions of (26) that are incongruent for the modulus m .
Example
Solve 9x
= 15 mod 24.
Since gcd (9, 24) = 3 and 3 divides 15, the congruence is reduced to 3x = 5 mod 8, which has one solution x = 7 mod 8. Hence, the solutions of the original congruence are x = 7, 15, 23 mod 24. All other solutions are in one of these 3 residue classes modulo 24.
150
6
NUMBER THEORY RESULTS
We are now interested in a method for solving ax- b
mod
m.
The equivalent problem is: Find integers x = x 1 and x2 that satisfy ax,
+ m x 2 = b.
We solve this equation by the method of Section 2. We list
and reduce the first row to a form having zero and nonzero elements. The second row then contains the solution for x . Example
Solve 3x E 5 mod 8. We list
and form successively
1
3 1
8 0
I
3
-1
1
-3
-y,
=5
0 1-8
-1
-3
The first row produces
and the solution is then apparent in the second row as x = 15 - 8 k .
Thus x
= 15 mod 8 ;x s 7 mod 8 solves the congruence.
1
A SYSTEM OF LINEAR CONGRUENCES
4
4
151
THE SOLUTION OF A SYSTEM OF LINEAR CONGRUENCES
The system of h e a r congruences n
2a i j x j = bi
j = 1
mod
i = 1, 2, ..., t
mi,
may be solved by writing the equivalent system of linear Diophantine equations n
C a i j x j + m i y i = b,,
i
= 1,2,.
j = 1
.., t .
The theorems and method of solution presented in Section 2 may then be applied to the system of equations. Example
The Chinese remainder thebrem states that if m , , m 2 , . . . , m, are relatively prime in pairs, then the system x=a2
mod mod
m,, In,,
x-a,
mod
rn,
X = U ,
has a unique solution modulo m, where m = mlm2 . . * m , . Solve x = 3 mod 14, x E 4 mod 9, x E 6 mod 11. We write the equivalent equations X
x x
+ 14y1 = 3,
+
9y,
+ lly,
= 4,
= 6.
Using the method in Section 2, we write the tableau 3 4 6 x
1 1 1 1
1
4 0 0 0
0
9
0 0
0
0 11 0
152
NUMBER THEORY RESULTS
6
We do not need to keep track of the y i variables. Thus we obtain successively
6
-14 -14
X
1 1-14 3 -14 X-3 -14
-25 31
X -
Thus x
9 01 4 9 0 11 -14 0 0 0 -14 0
-126 -126 = 6331 -
11 0 1386k or x
-
0 0
01 4 1 01 0 1 11 -14 28 11 -126 28 0 -14 28 0 -126 28 0 -1386
787 mod 1386.
Problems
1. Find integers xj that minimize z = 42x1
+ 63x2 + 78x3 + lO2x4
for z 2 1. 2. Find the greatest common divisor of 117, 108, 54, and 135. 3. Solve the simultaneous Diophantine equations
+ 3x3 - 2x4 = 12, + 2x, + 3x, - 7x4 = 4, - x2 + 5x3 + x 4 = 9.
7x, - 3x2 3x, 2x,
4. Find integers that satisfy 33x1 - 7x2
+ 5x3 - 4x4 = 54.
5. Solve the system of congruences x=6 x-1 x-3 x ~
mod mod mod mod 9
7
5 4 11.
01 11 0
1 -252 ’
REFERENCE
153
Reference 1.
Griffin, H., “Elementary Theory of Numbers.” McGraw-Hill, New York, 1954.
DYNAMIC PROGRAMMING SOLUTIONS
We solve the integer programming problem by means of a dynamic programming enumeration. The method can be used to solve the important class of problems in which the variables have upper bound values. This includes the problems where the variables are restrained t o zero or one. In Chapter 5, we relaxed the integer constraint on the variables and used the simplex method to find the continuous solution. If the continuous solution was fractional, we used additional constraints and minimization cycles to produce the optimal integer solution. In this chapter, we again relax the integer constraint on the variables and use the simplex method to find the continuous solution. We go on t o develop linear congruences that the nonbasic variables must satisfy. We then obtain the integer solution by means of a dynamic programming enumeration analogous t o the one presented in Chapter 4. This approach was developed in [2].The remainder of the chapter contains rules for accelerating the enumeration.
156
1
7
D Y N A M I C PROGRAMMING SOLUTIONS
A DYNAMIC PROGRAMMING SOLUTION
Once again, consider the integer programming problem : Find integers x j 2 0 f o r j = 1 , 2 , . . . , n that minimize z when
i:
j = 1
c j x j = z,
where the u i j , b i , and cj are given integer constants. Equation ( I ) may be solvcd as a linear program by relaxing the integer constraints to obtain the optimal canonical format z
c +c +
c j x j = -5,,
j = I t
x,+i
ZijXj =
j = 1
i = l , 2 ,..., m,
6,,
where the first t = n - m and the last m variables have been arbitrarily selected as the nonbasic and basic variables, respectively. Since (2) is an optimal format, T j 2 0 and bi 2 0; the optimal continuous solution to (1) is x,+ = 6i for the basic variables and x, = 0 for the nonbasic variables. If the continuous solution has all 6, as integer, the integer program is solved. If any of the bi are fractional, however, the integer variables xj must still satisfy (2), which are then used for obtaining the integer solution. Select some equation of (2) with bias fractional. Suppose it is (3) Definef, > 0 from 6,
x,
r
+1
UjXj
j= 1
=
[b,]
11 u j x j -fo
j =
b,.
+yo;Eq. (3) may be written as
r
(4)
=
=
[b,] - x,.
A DYNAMIC PROGRAMMING SOLUTION
1
157
The right side of (4) is an integer; thus t
1a j x j - f o
j = 1
mod
1.
In (5) we have developed a congruence, which means that Cj=lu j x j -fo is an integer. Furthermore, define the fractional parts fj 2 0 from uj = [aj] +A; Eq. (5) may be written as t
~ f j x j - f o mod
j = 1
1.
Congruence (6) arises because [ a j ] x j 3 0 mod 1 for integer x i . We have now developed a congruence that the nonbasic integer variables must satisfy to produce an integer value for basic variable x,. If 6, is integer and some uj are fractional, then (6) still holds withf, = 0. If 6, and the uj are all integers, then the trivial congruence 0 E 0 mod 1 results. Thus we may write a congruence similar to (6) for every equation of (2). We need not develop a congruence from the objective equation of (2) since it is apparent that if all solutions to the congruences like (6) produce integers x j , then z must have an integer value from ( 1 ) . Let us write (2) in the form z
+
c
FJXJ
= -To,
j = I
x + p i j x j =a,, j= 1
where x is a vector with components x , + ~each , aj f o r j # 0 is a column vector with components iij, and a, is a column vector with components 6,. The congruences developed from (7), as in ( 6 ) ,may be written as t
c p j x j = Po mod 1
j= 1
where the Pj are the columns of fractional parts of the a j . In addition, ~ 0, the constraints of (7) may appear as since x , + 2 I
158
7
DYNAMIC PROGRAMMING SOLUTIONS
It is clear that the equivalent problem to (1) is: Find integers
x j 2 0 that minimize z when
c c f
j = 1
c j x j = z - I,,
f
(9)
j= 1
ajxj
I a0,
t
cpjxj-po
j = 1
mod
1.
When the optimal solution x j O of (9) is determined, then the basic variables x of (7) are given by
We can solve (9) by writing the function
where a and /3 are variable vectors and the inequalities of (8) are changed to equalities.' Since some x j > 0, we can use the same argument as in Chapter 4 to develop the dynamic programming recursion
F(a, p) = minj(Tj
(10)
+ F(a - a j , p - p,)).
Note that the second argument of F is taken modulo I . The recursion in (10) may be solved as a simple enumeration. Observe that F(ar, p,) = T,, where T, = min C j . We then form F(a - a l , 0 - /I, by )replacing a and p by 3 - a, and p - 0, respectively in (10) as
( I 1) F(a - a,, p - p,)
' Goniory [ I ]
= minj(Tj
+ F(a - a j - a, , p - pi - /3,)).
initiated this type of round-off procedure by considering F@) = min x, mod 1, integer x j 2 0); (9) is solved by his method if x' emerges as nonnegative.
Pj - p (x:= 1 x:f=, xj
1
A DYNAMIC PROGRAMMING SOLUTION
We substitute (11) for the F(cr - cr,, (10). Thus we obtain (12) F(a,/3) = min(min(ci j
159
/3 - PI) term on the right side of
+ F(cc - aj’, /3 - Pj’)>, min(Tj + F ( a - a j , /3 - P j ) ) > j t r
+
T , + T j , crj’ = u j cr,, and Pj’ 3 Pj + /3, mod 1. Designate cj’ as the current T j , a j , and p j values; observe that (12) is of the same form as (lo), except that the C, + F(a - a r , P - p,) term ismissing and other terms are included. Thus from (12) we can produce the solution value F(a,,/3,) = T, by finding a new C, = min T j . Continuing in this way, we find F ( a , /3) for all feasible CI and /3. We can also obtain the corresponding x j values by keeping track of the indices j that produce each F(cc, P). The procedure just outlined is an enumeration of all possible values of Fixj in order of increasing value. To solve the integer program given by (9), we stop the prockdure when F(a, Po) is calculated for some a 5 a. (i.e., when each component of u is less than or equal to the corresponding component of go). The method for the solution to (10) may fail in case any T j are zero; the same r value may be picked an infinite number of times and the constraints may never be satisfied. Thus, to insure a finite algorithm, we must assume that all xj values have finite upper bounds m j . Let x j Imj for ,j = 1, 2, . . . , n be an additional constraint in (1). The bounded variable linear programming technique is used to transform (1) to (2). Some of the nonbasic variables may be at their upper bounds in the continuous solution of (1). These variables have corresponding Cj values that are nonpositive. We also have C j 2 0 for nonbasic variables at the zero value. For ease of computation, we make the transformation xj‘ = mj - xj for those nonbasic variables at their upper bound. When the optimal solution is obtained, we use the reverse transformation to calculate the xi value. Thus we may assume a form like ( 2 ) with all Ti 2 0 and all nonbasic variables at zero value. When the variables are bounded, the dynamic programming enumeration is performed so that the upper bounds mj are not violated.
where cj’
aj’, and
3/’,
=
160
7
DYNAMIC PROGRAMMING SOLUTIONS
If we define M as a vector with ith component m,,, ,we stop the procedure when F(a,Po) is calculated with 0 < zo - a < M . Thus we are led to
Algorithm 1
1 . Define a solution vector S(z*, xl*, x2*, . . . , xr*),where xi* for 1,2, . . . , t is a feasible integer solution to the problem with objective value z*. If no feasible solution is apparent, take z* = co. G o to 2. 2. List the values of the problem as
j
=
1
_ c1 a1 P1
2
-
cp
3
c3
c12
a3
P2
P3
.*. ... ...
t c, a,
... P,
~~
Go to 3. 3. In the newly listed section, define J as the set of indices j where pi = Po and 0 I a. - aj I M . If J has no members, go to 4. Otherwise, findindex rfrom T, =minjE,?j.If?, 2 z*,goto4.Otherwise,takez*= C, and form S(z*, xl*, xz*,. . . , x,*), where the xi* are the nonzero xj values found below the section. The other xj* values equal zero. Increase x,* by one and go to 4. 4. Given the list, the solution S(z*, xl*, x2*,. . . , x,*) is the minimal integer solution if there are no unmarked columns with Cj < z*. G o to 6 . Otherwise, find index r from C, = min C j for all unmarked columns in all sections. G o to 5. 5. Add a new section of columns to the list as follows, if possible: (a) Calculate cj’ = ?, + C j , aj‘ = a, + a j , and Pi’ = Pr pi mod 1 for the indices j of the unmarked columns in the section containing column r. Mark the r column. The C j , u j , and Pi values are taken from the list in step 2. (b) Add columns with values cj’, aj’, and pi’ headed by indices j if cj’ < z*. Include a column headed by r only if x, 1 < m, for the x, value found below the section containing the newly
+
+
1
161
A DYNAMIC PROGRAMMING SOLUTION
marked r column. If no new columns are added, go to 3. Designate the cj‘, aj’,and Pi’ values as new C j , aj , and pj values, respectively. (c) Underneath the section added, write the xi values from the section containing the newly marked r column. The nonappearance of a variable means that it has zero value. Increase x, by one for the new section. Go to 3. 6 . The minimal solution to the integer program is z = Z o + z*, x = a0 a j x j * . End.
xf=l
Example
Consider the problem first given on page 110: Find integers xj 2 0 that minimize z when 3x1 - x2 + 12x3
+
+
x4=z,
+ 3x3 - 3 x 4 =4, 3x1 - 3x2 + l l x , + x4 = 2. x1
x2
Using the simplex algorithm, we transform the equations to
-x
+ I3 x 3
-z
3
x1 + q x 3 +x4= x2 - 1x I x = 3 3 3 4
9
1 3 9
r
3 7
and develop the congruences
+ $x4 = 3 $x3 + +x4 = $ 1x 3 3
mod mod
1, 1.
1. z* = 00. 2. The problem is listed in Tableau E l ; Zo P o = (f,
3.
3. J has no members.
=
9,a.
4. r = 3 in Tableau E l , 2,. = 3. 5. We form Tableau E2. Mark column 3 of Tableau El. 3. J has no members.
= ($,
3,
and
162
7
D Y N A M I C PROGRAMMING SOLUTIONS
3*
4
4
I 0 3 20 -
5
-_230 _ -38
2
3
;
_ _
-2
2 3
1 3 2 3
0 0
1 3
x3 =
El
- --130
1
E2
x4
=1
E3
3
5 10 -1
0 0 x3 =
2
E4
4. r = 4 in Tableau E l , C, = q. 5. We form Tableau E3. Mark column 4 of Tableau E l . 3. J = (4) in Tableau E3. r = 4 and C, = 29; we have a feasible solution z* = ?$, x3* = 0, x4* = 2. 4. r = 3 in Tableau E2, C, = J$. 5. We form Tableau E4. Mark column 3 of Tableau E2. No further tableaus are formed acd the current feasible solution is minimal. 6. The solution to the original problem is then z = 12, xI = 5, x2 = 5, x3 = 0, x4 = 2.
2
REDUCING THE NUMBER OF CONGRUENCES
Algorithm 1 required that calculations be made for m congruences in (8). Ordinarily, it is not necessary to include all the congruences in a problem; all solutions of one congruence may satisfy a second congruence, which, then, need not be included. We now show how to determine the congruences to be included. On the basis of results in Chapter 5 , we know that the constants of the optimal format ( 2 ) are in the form i / D . Numerator i is an integer and denominator D is the absolute value of the determinant of the coefficient matrix of basic variables x,+,, x , + ~ ., . . , x, in the original problem Eq. (1). Furthermore, the constants in congruence (6) are also
2
R E D U C I N G THE N U M B E R OF C O N G R U E N C E S
163
in the form i / D , since they are the fractional parts of the constants in an equation in (2). Hence, congruence (6) may be written as
hj h, 5xj=D
C
j = 1
1,
mod
where fj = h j / D and hi is an integer. Other congruences developed from (2) may have the same integer solution as in (1 3). A congruence that has the same integer solution as in (13) may be generated by forming (14)
igjxj-g,
j = 1
mod
1,
where
kh.
g . = L J-D
(15)
mod
1,
j = O , l , ..., t ;
k is a positive integer. The fact that all integer solutions of (13) also satisfy (14) is seen from Theorem 1
If integer xj* solves (13), then xj* solves (14). Proof
Ifxj* solves (13), we have
where w is an integer. From (1 5) we obtain
where y, is an integer. Multiplying (16) through by positive integer k and using ( I 7 ) ,we obtain
j = I
+ 1 .vjxj*. 1
t
1 g j x j * - g,
=
kw - yo
j = 1
164
7
DYNAMIC P R O G R A M M I N G SOLUTIONS
Since the right side of (18) is an integer, it follows that t
Cgjxj* =go j = 1
mod
1.
Thus xj* solves (14), thereby proving the theorem. Since any solution of congruence (13) satisfies (14), we need not include the latter in (8). Observe that some solutions to congruence (14) may not satisfy (1 3); care must be taken in picking the congruence to keep when both are developed from ( 2 ) . Sometimes it may happen that the congruences are equivalent. Two congruences are equiualent congruences if they have exactly the same solutions. If k and D are relatively prime, then (1 3) and (14) are equivalent; either congruence may be used. Thus Theorem 2
If gcd ( k , D)= I , then ( I 3) and (14) are equivalent congruences. Proof
Since the converse was proven in Theorem 1, we need prove only thatifxj*solves(14), thenit solves(l3). Let xj* satisfy(l4); then
where w is an integer. Using (17), we obtain j = 1
where integer y is given by y=w-
xyjxj*+y,.
j = 1
Looking at (19), we see that k must divide y because gcd(k, 0)= I . Thus, y = ky*, where y* is an integer and
C h j x j * - ho = Dy*. f
j = 1
2
REDUCING THE NUMBER OF CONGRUENCES
165
Dividing both sides of (20) by D , we see that xj* satisfies (13), thereby proving the theorem. Tn a given problem it is possible for a single congruence to generate all the other congruences. It is therefore desirable to find the congruence of (8) that generates the largest number of congruences. We have Theorem 3
If h, and D are relatively prime, then k h j / D is incongruent to h j / D mod 1 f o r k = 2 , 3 , . . . , D. Proof
Suppose
'
kh, - h, D D
mod
1.
There exists an integer y so that
kh, D
h, -Y D
and
hj(k - I )
= yD.
Since hi and D are relatively prime, h, must divide y , or y integer r. Thus
= h,r
for some
k-1=rD or k must satisfy k - 1 mod Therefore, k
= 2,3,
D.
. . . , D produces k h j / D noncongruent to h,lD mod 1.
From Theorem 3 we know that if a congruence of (8) has any constant with numerator and denominator D relatively prime, then the congruence can generate D - 1 other congruences. These other congruences, if they appear in (8), are not needed and may be eliminated.
166
7
DYNAMIC PROGRAMMING SOLUTIONS
It should be noted that a congruence with the desired property may not necessarily generate every other congruence. More significantly, a congruence without a constant having a numerator and denominator relatively prime cannot generate a congruence with a constant having that property. Instead of developing all the congruences of (8) first and then searching for a congruence having a constant with the required property, it is better, from a computational standpoint, to find an equation of (2) having a constant with relatively prime numerator and denominator. The congruence developed from the equation will then have a constant with the required property. In hand-solved problems, the constants of (2) can be inspected easily to determine the equation having a constant with numerator and denominator relatively prime. But when an electronic computer is used to solve extremely large problems, it is not practical to evaluate each constant until one is found having the relatively prime property. To offset this difficulty, we offer a method to facilitate the computations. In solving Eq. (1) as a linear program, we shall assume that the inverse of the basic variables is available. Following Section 5 of Chapter 2, the inverse is n, b,, b,,
0
bm, ...
1
B-’
=
... ... ...
0 0
-
Thus, C j , Z,, Z i j , and bi in (2) are given by
cj = c j -
m
1 n iu ..’ i= LJ
1
m
Zo =
1n i b i ,
i= 1 m
hi =
m k= 1
bik b k .
z
REDUCING THE NUMBER OF CONGRUENCES
167
Let us examine, for example, the relationship for a,. Since each Zij = h,/D and b , = f i k / D, where the h i j and fik are integers, we have
(21) and arrive at Theorem 4
If hij and D are relatively prime for some i and j , then gcd(fil, . . J i m , D) = 1.
fi2,.
Proof
If hij and D are relatively prime, then there exist integers x and y so that h i j x + Dy = 1. Using (21), we obtain m
X
1 Akakj + D y = 1.
k= 1
For the givenjvalue we have y k = xukjand y k is an integer. Thus, m
k= 1
and gcd(f,1,f,2
9
. . . ,s,,
3
fik Y k
+ DY
=
m = 1.
The converse of Theorem 4 is not true. We shall use the theorem, however, in Algorithm 2 to develop the necessary congruences in (8). We must first solve (1) as a linear program by the inverse matrix method. The value D is calculated during the computation as the product of the pivot elements. When (2) is achieved and B-' is available, we use the Euclidean algorithm of Chapter 6 to calculate the values Di = gcd(fi1,
fr2
7
. . ., f i m , D>,
and determine index s from
D,
= min
D,;
i
=
1> 2,
. . ., f i 1 ,
168
7
DYNAMIC PROGRAMMING SOLUTIONS
in case of ties we arbitrarily select the smallest index. We then form a single congruence, as in (6), from constraint s of (2). The requisite procedure is contained in Algorithm 2
1. Same as Algorithm 1. 2. Same as Algorithm 1, except that, initially, only the congruence from constraint s makes up the P j . 3. (a) In the newly listed section, define J as the set of indices j where P j = Po and 0 5 uo - clj 5 M . If J has no members, go to 4. Otherwise, go to 3(b). (b) Find index r from F, = minj., F j . If F , 2 z*, go to 4. Otherwise, calculate x = uo - u, . If the components of x are integers, take z* = C, and form S(z*, x l * , xz*, . . . , x,*), where the xi* are the nonzero x j values found below the section. The other xj* values equal zero. Increase x,* by one and go to 4. If any component of x is fractional, define Z as the set of indices i where component i of x is fractional. Find index s from D,= minisI Di. Form a new congruence f
j= 1
fjxj
=fo
mod
1
from constraint s of (2). Each Pj for j = 0, 1, . . . , t is 'enlarged by the additional component f,. Modify the list in step 2 accordingly. In addition, calculate a new Pj component in each section having unmarked columns by first calculating f
=
C &xi' t
j = 1
mod
1,
where the xj' are the nonzero xi values found below the section. The other xj' values equal zero. The additional component of Pj for the unmarkedjcolumn of the section is then fj'
=f +f j
Go to 3(a). 4, 5, and 6. Same as Algorithm 1.
mod
1.
3
A N ACCELERATED D Y N A M I C PROGRAMMING SOLUTION
169
Example
Consider the problem on page 161. If it is solved as a linear program by the inverse matrix method, we obtain a value of D = 6 and inverse matrix
;
1 -1
B-‘=
-!I.
-2
From the second row of B-’ we calculate D , = gcd(3, 1, 6) = 1 ; from the third row D , = 1. We develop the congruence
+x3 + 3x4 = + mod
1.
The problem is then solved with this congruence only.
3 AN ACCELERATED DYNAVIC PROGRAMMING SOLUTION
In Eq. (2) it may occur that a positive value for one of the variables may not allow a feasible solution to be found. It may also be that some of the variables must have positive values to obtain a feasible solution. Similar conditions may exist when Algorithm 1 is applied in the search for the solution to (2). When these situations arise, we can accelerate the dynamic programming enumeration, just as we did in Chapter 4. Consider (9) with x, I mi as listed in step 2 of Algorithm 1. Since x,, I m, + i , we have from (2) that
,
f
C 2ijxj 2 6, - m,,,, j= 1
i = 1 , 2 , . . ., m.
Form an index set K with initial members j
y , for ~
=
1, 2, . . . , 1. Then form
E isKa nonnegative integer that represents the subsequent increase in x,. Thus, y , I m,’,where m,’ = mk - xk’. Initially all xk’ = 0.
170
7
DYNAMIC P R O G R A M M I N G SOLUTIONS
,
We require that U i2 6, - m,+ . Hence, we may be able to determine from (22) whether a positive y , value produces infeasibility or whether Y k must have a positive value to produce feasibility. Take
where K,' is the set of indices in K with i i i k > 0. The development of Ui* in (23) is similar to the development of the Ui*in (9) of Chapter 4. Therefore, rules Al-A3 of Chapter 4 apply here with aij replaced by iiijand bi replaced by hi- m,+ . We can now augment the rules of Chapter 4 by calculating
,
(24) where Ki- is the set of indices in K with iii, < 0 and U,' is the minimum value of U , . If Ki-has no members, then Ui'= 0. Since x , + ~2 0, we require that U,' 16,. The following rules apply where K, c, the mk',d , , and xi' are the result of Al-A3:
A4. If any Ui' > 6, - d i , then no y k values can produce feasibility. No feasible solution to the problem can be found. Stop. A5. If U,' + iiij > 6, - di for j E K , then y j = 0 since y j 2 1 produces infeasibility. Mark column j and remove index j from K. Write a new form for the U iin (22) and calculate the Ui*in (23). Return to rule A l . A6. (a) I f j is the only index in Ki- and d, > 6,, then iiijyj 5 6, - di requires that y . > 0 = ((6, - di)/iiij}. -_ (b) If U,' - iiijmj' > bi - d, f o r j E K, then y j 2 8, where 0 is the smallest integer with the property that Ui' - iiijmj' a i j 0 5 6, - d,. (c) When 8 is found in (a) or (b), increase xj' by 0, decrease mj' by 0, replace c by c + C j O and di by d, + i i i j O . If mj' = 0, remove index j from K . Write a new form for the U iand calculate Ui*. Return to rule A l . J
+
Each time rule A4 is applied, the Ui' calculation from (24) must be done again. If as a result of adhering to rules Al,-A6 all xi' = 0, continue Algorithm 1 in step 3. If, on the other hand, some xj' is positive,
3
A N ACCELERATED D Y N A M I C PROGRAMMING SOLUTION
171
we define vector c( with dias component i and calculate
p-
f
1 pixj’
mod
j=1
1.
If p = Po and 0 5 a0 - a 5 M , then program(2) is solved with z x j = xi’ f o r j = 1 , 2 , . . . , t and
= 5,
+ c,
r
x = a0 -
1
tljXj‘
j = 1
for the basic variables. Otherwise, add a new section of columns to the list. Each column is headed by an index ~ E KThe . column values are CI and P k ’ K p pkmod 1. The f k , given by c k ‘ = C C k , ah, and Pkvalues are taken from the list in step 2. Underneath the section write the xi = xj‘ values. Mark all unmarked columns on the list from step 2 and go to 3. We are also able t o modify the algorithm to ascertain whether a current stage of the enumeration will lead to a feasible a or whether some of the x j require specific values in order t o produce a feasible a. When index r is found from C, = min C j in step 4 (with c = ?,), we define the set of columns K as on page 93. Define the column values for a, as d , , d , , . . . , d,. The possible values of z and components V i of a that can be enumerated at this stage are given by
+
+
vi = d i + ui,
+
i=1,2
,..., ni,
where U iis given by (22) for the present set K ;yk 5 mk’,where mk’ = m k - xk‘. The xk’ are the xkvalues that appear below the section containing column r with x,’ = x, + 1. Since x,+ < m t +i , we require that V i2 hi - m,+ i.Consider
Vi*= di + Ui*,
i = 1,2,
..., m,
where Ui*is given by (23) with K i + as the set of indices in current K with S i k > 0. Then Vi* is the maximum value V ican attain when we perform the enumeration starting with the section containing column r. The development of Vi* is similar to the development of the Vi*
172
7
DYNAMIC P R O G R A M M I N G SOLUTIONS
of Chapter 4. Rules Bl-B3 of Chapter 4 apply here with aij replaced by
6, and b, replaced by 6, - m, +
,.
We can add other rules following rule B3. Since x,+, 2 0, we require that V i I 6,. Consider
Vi‘ = dj
+ Ui’,
where U,‘ is given by (24) and K j - is the set of indices in current K with Z i k < 0. Then, Vj’ is the minimum value that Vi can attain when we perform the enumeration starting with the section containing column r. The following rules then apply:
B4. If any Vi’ > h i , then no Y k value can produce feasibility. Mark column rand continue the algorithm in step 4. B5. If Vi’ + Z i j > b, for j e K, then y j = 0 since y j 2 1 produces infeasibility. Remove index j from K and write new forms for the U iin (22). Calculate the V j *and return to rule B1. B6. (a) l f j is the only index 1.n K i - and d j > 6,, then di + Z i j y j I bi requires that y j 2 0 = {(tii - di)/Zij}. (b) If Vi’ - Zijmj’> 6, for j e K , then y j 2 6 where 8 is the smallest integer with the property that Vi’ - Zijmj‘ + Z i j 6 < 6,. (c) When 8 is found in (a) or (b), increase xj‘ by 8, decrease mj‘ by 0, replace c by c + Cj8 and di by di + Z i j 6 . If mj‘ = 0, remove index j from K. If c + C k 2 z*, remove k from K . Write new forms for the Ui and calculate the Vi*. Return to rule B1. Each time that rule B4 is applied, the V j ’ calculation must be done again. When the application of the rules is completed, we return to the algorithm in a modified step 5. The modifications consist of the following: 5. (a) Mark column r. Take value di as component i of 6.Calculate f
/3z
C pixj’
j= 1
mod
1.
fl = D o and 0 I a. - a 5 M , take z* = c and form S(z*, xl‘, x2’, . . . , x,,’); go to 4. Otherwise, go to 5(b). If
(b) If set K has no elements, go to 4. Otherwise, add a new section
3
AN ACCELERATED DYNAMIC PROGRAMMING SOLUTION
173
of columns to the list. Each column is headed by an index k E K . The column values are given by ck' = c + C k , ak' = a + ak and pk' = p pk mod 1. The ? k , ak, p k values are taken from the list in step 2. Underneath the section write the xi = xj'values. G o to 3.
+
Example
Find nonnegative integers xi < 1 that minimize z when
+ 3x3
-z XI x2
+
1x
9
+"x3 3
+ y x 5 + x6 = + zx -1x 4 9 5 3 6 -
+ q-x3 - 3x4 -
$xg
-+,
8
9'
+ 4x6 =
$.
The equations are in an optimal canonical format. The continuous solution is z = x1 = 8, xz = 4, xj = 0, x4 = 0, x5 = 0, x6 = 0. We develop the congruences
+,
+ + $X5 -k 5x6 E 8 $x3 + 3x4 + $x5 + +x6 = $ $X3
mod mod
5x4
1. z* = co. 2. The problem is listed in Tableau El : Zo 4)and all mj = I .
I
3*
4*
5
6
1, 1.
= +, a. = ($, +),
1
6
6
3 0 1 0 0
24 9
_ -91 -_92
8 9 7 -
9
-~
x3
=
x5 =
1 1
x5
=
E3 Tableaus
1
Po = (9,
174
7
DYNAMIC PROGRAMMING SOLUTIONS
-++
3. J has no members. We obtain U l ’ = -+, U 2 ’ = with K 4, 5, 6). For rule A5, U,’ + C14 = $ > $; we mark column 4 and remove 4 from K . No other rules apply. 4. r = 3 in Tableau E l , C, = 4; dl = -;, d2 = J$, and V,’ = -$, V 2 ’ = $ with K = (5, 6). Since V 2 ’ - Z 2 5 =J$- >z 9, then y 5 = 1 and K = (6). No other rules apply. We mark column 3 and form Tableau E2. 3. J has no members. 4. r = 5 in Tableau E l , C, = ++. No rules apply. We mark column 5 and form Tableau E3. 3. J = (6). The optimal solution becomes apparent from Tableau E3; it is z = 3, x5 = 1, x6 = 1 , x3 = 0, x4 = 0, x1 = 1, x2 = 1. = (3,
Problems
1 . Solve the problem on page 161 by the accelerated enumeration procedure. 2. Use the inverse matrix method to develop a single congruence in:
+ 15x2 + 12x, = z, 8x, + 5x2 + 2x3 - x 4 = 4 , 2x1 + 3x2 + 3x3 - X 4 = 3.
16x,
Solve the problem. 3. Use the accelerated enumeration method to find nonnegative xj 5 2 and min z for + Lx + f i x+ L X 9 3 9 4 3 5 -
-z x1
- +x3 + $x4 + + x 5 = x2 + y - x 3 - ;x4 + +x5 =
-2
3,
5, f.
4. Discuss means of saving computer storage space by calculating aj and /Ij values only when needed. Show how a backtracking operation can be used to obtain the xj values.
REFERENCES
175
References Gomory, R. E., On the relation between integer and non-integer solutions to linear programs, Proc. Nat. Acad. Sci. 53,260 (1965). 2. Greenberg, H., A dynamic programming solution to integer linear programs, J . Math. Anal. Appl. 26 (2), 454 (1969). 1.
8
BRANCHANDBOUND PROCEDURES
The branch and bound procedure for the solution to integer programs, as originated by Land and Doig [3],works well on problems having few variables. For problems with many variables, however, it requires extremely large computer storage capacity. We shall present an extension of the method into a branch search scheme that overcomes these large storage requirements. We also provide a method for solving the mixed integer problem.
1
A BRANCH AND BOUND METHOD
We wish to solve the integer programming problem: Find integers xi 2 0 for j = 1,2, . . . , n that minimize z when c j x j = z, j = 1
n
(1)
j= I
a ,,, . . x . = b.,,
xjImj,
i=l,2,...,m,
j = l , 2 ,..., n,
where the cj , a i j ,bi,and mj are given integers. Any or all of the mi may be infinite.
178
8
BRANCH AND BOUND PROCEDURES
We are able to solve (1) by enumerating solutions in a tree search procedure. Consider (1) as a linear program by relaxing the integer constraints on the x i . The resulting continuous problem may be solved by the simplex method, which converts the equations of (1) to an optimality format and equivalent problem: Find nonnegative integers xi 5 mj that minimize z when
.z +
xi +
c tixi= c a i j x j = hi, n
-,To,
j=m+ 1
n
j=m+ 1
i = l , 2 ,..., m,
where the first rn variables and the last n - rn variables have been arbitrarily selected as the basic and nonbasic variables, respectively. In addition, Cj 2 0 for variables at their lower bound zero and Cj 0 for variables at their upper bound mj . The values of the objective function and basic variables are given by z,'=.T,+
CCjmj,
jsll
b,'
= 6, -
1i i i j m j ;
jell
the objective value is zo', the ith basic variable has nonnegative value bi' m i ,and U is the index set of nonbasic variables at their upper bounds. If thecontinuous solution has allb,' as integer, program (1) is solved. Ifany of the b,' are fractional, we start the treeenumeration. Weconsider the zero node of a tree as corresponding to the fractional solution with objective value zo'. Integer xi must satisfy x i [bill or x i2 [bi'] + 1. We branch t o level one of the tree by adding either the upper or the lower bound to the constraints of (2); then we find a new continuous minimum for z. Whichever bound we choose, we pursue the single branch until it becomes necessary to backtrack. Then we follow the other branch. The minimal integer solution occurs for one of the branches. This type of single-branch procedure originally appears in Dakin 111.
1
A BRANCH AND BOUND METHOD
179
Let us assume that after some minimization we are at level r of the tree. We have a canonical form like ( 2 ) with the additional constraints 0 < j j j 5 xi Iiiij; j j j and iiij are lower and upper bounds, respectively. Initially Pi = 0 and iiij = m j . The values of the objective function and basic variables are given by z,’=Z,+
CZjiiij+ jsU
(3)
bi’ = 6, -
CEjPj,
jsL
xa..e. - C a . . j Jj ’. * J
jcU
IJ
jsL
IJ
the objective value is z,‘, the ith basic variable has value bi’ where pi bi’ 5 mi, and L is the index set of nonbasic variables at their lower bounds. We consider the node at level r as corresponding to the current fractional solution. The optimality conditions for (2) with the current bounds are given in Theorem 1
Any values of x j for j = 1, 2, . . . , n satisfying the constraint equations in ( 2 ) and j j j 5 x j 5 iiij are a minimal solution for z if Cj 2 0 for variables at their lower bound P j and Cj 5 0 for variables at their upper bound mi. Proof
We have optimality by the conditions of the theorem, since any increase in the variables at their lower bound or any decrease in the variables at their upper bound can only increase z. Suppose some of the bi’are fractional. We select some fractional bit and branch to level r + 1 of the tree. We accomplish this by adding either x i < m i= [bi’] or xi 2 p i = [bi’]+ 1 to the constraints of (2) and then reminimizing. Whichever bound we choose, we pursue the single branch until it becomes necessary to follow the other branch. We must decide which fractional-valued basic variable to use and, at the same time, whether to use an upper or lower bound for the variable.
180
8
B R A N C H A N D B O U N D PROCEDURES
The choice of either an upper or a lower bound will cause the objective value to be greater than or equal to z,,’ because another constraint has been added to the problem. Thus, z,,’ is a lower bound to all objective values for nodes that lead from the current one. We can calculate the change in the objective function for the newly constrained problem. Let us suppose that x i , a basic variable, has a fractional value after the optimal canonical form ( 2 ) has been achieved. We determine
Di
(4)
= min j
?. aij
2 0, where D iis taken as infinite if no proper Zij exists to produce a finite value. We also calculate -
Ii = min j
-C.
T-J aij
2 0, where I i is taken as infinite if no proper G i j exists to produce a finite value.’ We can now determine the effect on the objective function of either a decrease or an increase in the value of x i . Equations (4) and (5) each define a variable with index j that can be eliminated from the objective function of ( 2 ) and produce an optimality format for z. Thus, we are able to calculate the smallest change in the objective value that results when xi is decreased or increased.2 If xi is to be decreased, we use (4) with the z and xi equations of (2) to form
if we take x i 5 mi, we must have z 2 U i ,where
Ui
= 20’
+ Di(6j’ - mi).
The index i o n D ior Ii,will more generally be s i , the index of the basic variable in equation i. This is the cost ranging procedure of linear programming. (See Driebeck [2]for its original application in integer programming.)
1
A BRANCH A N D B O U N D METHOD
181
If x i is to be increased, we use ( 5 ) to form n
z
(7)
= 2 , - I i bi -tI i xi
+ 1
(Cj
j=m+ 1
+ I ; iiij)xj;
if we take x i 2 p i , then we have z 2 V , ,where
v, = 20’ + Ii(yi - hi’).
Note that if no iiij with the proper sign exists to produce a finite value of D i , then it is not possible t o reduce the value of x i ; we take x i 2 pi. Similarly, if no proper Z i j exists to produce a finite value of I i , it is not possible to increase thevalue of x i and we take x i < m i . If U , 1 > z*, we take x i 2 p i , while if V i I > z*, we take x i 2 mi, where z* is the objective value of a feasible integer solution. At this point we can decide which basic variable to pick for branching. If U i is large for basic variable x i , we may not need the x i < mi branch if we use x i 2 p i . There is a chance of obtaining a feasible integer solution with objective z* with U i + 1 > z*. (This type of strategy was developed by Little, and co-workers [ 4 ] for solving the traveling salesman problem.) Similar possibilities exist if V iis large. The branching variable xk is obtained by selecting the index k from
+
+
(8)
w k
= max(Ui ,
q)
for indices i where xi is fractional. Therefore, if the maximum occurs for u k , we use xk 2 j j k = [bill 1 ; if it occurs for V k ,we use x, I m k [bk’]. Suppose now that a feasible integer solution to (1) is available with objective value z*. This solution is an upper bound to the optimal one. There is no need to branch from node level r to r + 1 if:
+
1. zo’
+ 1 > z*,
2 . all the bi’ are feasible integers and zo’ E,.We replace (9) in (2) by (10)
xk
+ 1
akjxj
j=m+l
+
xn+k
=
6k,
where x,+,,2 0 is an artificial variable. The variable x k is given the value E , and x , , + ~has value x ; + , = b,’ - E,.The simplex method is initiated again by adding the w-objective function
to (2). The w-objective value is wo’ then b,‘ < P , . We replace (9) in (2) by
= b,‘ - %,
. If we use xk 2 Pk,
where x,+k 2 0 is an artificial variable. The variable xk is given the value j k and X , + k has value x ; + = ~ Pk - b,‘. The simplex method is initiated again by adding the w-objective function n
(13)
-w
+ xk + 1
j=m+ I
akjxj=
6,
to (2). The w-objective value is wo’ = Pk - bk‘. At the end of the minimization phase we have the canonical form for level r 1. To convert the canonical form at level r to a canonical form at level t + 1, we first remove the bounds along the path from the r level to the t level node. The bounds for the t level are given by the branchings from the zero level node to the t level node. Let us impose these bounds as the P j and E j . All current variable values at level r satisfy Pj 5 x j E j ;the
+
A BRANCH AND BOUND METHOD
1
183
current canonical form is feasible. To obtain the canonical form for level t , we simply reminimize as a Phase I1 simplex problem. When the canonical form is achieved, we must branch with the xk variable. If xk is not a basic variable, due to a nonunique solution, we make xk basic (the corresponding Ck is zero). We then achieve the form given in (9). We impose the appropriate bound to obtain (10) and (1 1) or (12) and (1 3), and reminimize to get the canonical form for level t + 1. The two phases of the simplex method for obtaining a new continuous solution follow from the methods of Chapter 2. In any case, assume that the w equation from (1 1) or (1 2) is of the form n
-w + Σ_{j=1}^{n} d̄_j x_j = -w_0'.
The index s is obtained from

(14)    d_s' = min { d̄_j, for x_j not at its upper bound;  -d̄_j, for x_j not at its lower bound },
in Phase I. Let us assume that d̄_s is negative and that nonbasic x_s has value x_s'. The effect of increasing x_s by θ is seen by changing w_0', z_0', x'_{n+k}, and the b_i' for i ≠ k to the new values
(15)    w_0'' = w_0' + d̄_s θ,
        z_0'' = z_0' + c̄_s θ,
        x''_{n+k} = x'_{n+k} - ā_ks θ,
        b_i'' = b_i' - ā_is θ.
The greatest amount by which x_s can be increased while still maintaining feasibility is

(16)    θ* = min( min over ā_is > 0 of (b_i' - p̄_i)/ā_is,  min over ā_is < 0 of (b_i' - m̄_i)/ā_is,  m̄_s - x_s' ).
Suppose now that d̄_s is positive and that nonbasic x_s has value x_s'. The effect of decreasing x_s by γ is seen by changing w_0', z_0', x'_{n+k}, and the b_i' for i ≠ k to the new values

(17)    w_0'' = w_0' - d̄_s γ,
        z_0'' = z_0' - c̄_s γ,
        x''_{n+k} = x'_{n+k} + ā_ks γ,
        b_i'' = b_i' + ā_is γ.
The greatest amount by which x_s can be reduced while still maintaining feasibility is

(18)    γ* = min( min over ā_is < 0 of (b_i' - p̄_i)/(-ā_is),  min over ā_is > 0 of (m̄_i - b_i')/ā_is,  x_s' - p̄_s ).
After selecting the nonbasic variable x_s by the choice rule of (14), we have Case I.
Here x, = xs’< Z,and ds < 0. In the next iteration x, = x,’ + B*. The variable values are given in (15) with 8 = B*: all other nonbasic variables retain their same values. If B* = 5,- x,’, then x, assumes its upper bound value and no pivot operation is performed. Otherwise, x, is made basic; we use choice rule (1 6) to select some basic variable x, to become nonbasic. The usual pivot operation is then performed with a,, as pivot element. Case It.
Here x, = x,’> P, and d, > 0. In the next iteration x, = x,’ - y*. The variable values are given in (17) with y = y*; all other nonbasic variables retain their same values. If y* = xs’- P,, then x, assumes its
lower-bound value and no pivot operation is performed. Otherwise, x_s is made basic; we use choice rule (18) to select some basic variable x_r to become nonbasic. Pivoting then occurs. If w = w_0'' > 0, we repeat the procedure starting with choice rule (14). If w cannot be reduced to zero, the branching is infeasible. When w = 0, we initiate Phase II of the simplex algorithm by finding the index s from
(19)    c_s' = min { c̄_j, for x_j not at its upper bound;  -c̄_j, for x_j not at its lower bound }.
The analysis follows as in Eqs. (15)-(18), where the w and x_{n+k} rows do not appear. Case I is the same for x_s = x_s' < m̄_s and c̄_s < 0. Case II is the same for x_s = x_s' > p̄_s and c̄_s > 0. We have the newest continuous solution and canonical form when c_s' ≥ 0 in (19).
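As a small illustration of choice rule (19) for the bounded-variable Phase II step, the sketch below assumes the nonbasic data are kept in plain Python dictionaries keyed by variable index; the names are illustrative. A nonbasic variable at its upper bound may only be decreased, and one at its lower bound may only be increased.

```python
def phase2_entering_variable(cbar, value, lower, upper, eps=1e-9):
    """Rule (19): pick the entering nonbasic x_s, or return None when the
    canonical form is already optimal (c_s' >= 0)."""
    s, cs = None, 0.0
    for j, c in cbar.items():
        if value[j] < upper[j] - eps and c < cs - eps:     # x_j below its upper bound
            s, cs = j, c                                   # candidate score c_j
        if value[j] > lower[j] + eps and -c < cs - eps:    # x_j above its lower bound
            s, cs = j, -c                                  # candidate score -c_j
    return s
```

Case I then applies when the selected x_s is free to increase (c̄_s < 0), and Case II when it is free to decrease (c̄_s > 0).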
The branch search procedure is summarized in Algorithm 1.

1. Define the solution vector S(z*, x_1*, x_2*, ..., x_n*), where x_j* for j = 1, 2, ..., n is a feasible integer solution to (1) with objective value z*. If no feasible solution is apparent, take z* = ∞. Let r refer to the rth branching variable, where Z(r) is the index of the variable, N(r) is the number of branchings [initially N(r) = 0], B(r) is the bound for the variable (this bound will usually be a temporary one), and G(r) = 0 indicates that x_k ≤ B(r) for k = Z(r), while G(r) = 1 indicates that x_k ≥ B(r) for k = Z(r). Initially all p_j = p̄_j = 0 and m̄_j = m_j. Solve (1) as a linear program. If the solution is all integer, the problem is solved. If not, set r = 0 and go to 2, maintaining the canonical form and variable values of the solution to (1).

2. Set r = r + 1. Select the index k and determine W_k from (8). If W_k = U_k in the calculation, set B(r) = [b_k'] + 1, G(r) = 1, and p̄_k = B(r). Otherwise, W_k = V_k; set B(r) = [b_k'], G(r) = 0, and m̄_k = B(r). In any case, set Z(r) = k, N(r) = 1, W(r) = W_k, and go to 3.

3. Solve the linear program from the current canonical form with the new bounds on the variables. We do one of the following:
(a) If the solution produces an objective value z with z + 1 > z*, go to 4.
(b) If the current problem has an infeasible solution, go to 4.
(c) If the solution produces an objective value z with z < z* and the solution x_1*, x_2*, ..., x_n* is all integer, redefine S(z*, x_1*, x_2*, ..., x_n*) as a new feasible integer solution, where z* = z. Go to 4.
(d) If the solution produces an objective value z with z + 1 ≤ z* and the solution is fractional, go to 2.

4. Backtrack. Set m̄_j = m_j and p̄_j = p_j for j = 1, 2, ..., n. Go to 4(a).
(a) If W(r) > z* - 1, go to 4(f). Otherwise, go to 4(b).
(b) If N(r) = 1 and r = 1, go to 4(e). If N(r) = 1 and r > 1, let t = 1 and go to 4(c). Otherwise, N(r) = 2; go to 4(f).
(c) If G(t) = 0, let m̄_k = B(t) for k = Z(t). Otherwise, G(t) = 1; let p̄_k = B(t) for k = Z(t). Let t = t + 1 and go to 4(d).
(d) If t < r, go to 4(c). Otherwise, t = r; solve the linear program from the current canonical form with the new bounds on the variables. Go to 4(e).
(e) If G(r) = 0, set B(r) = B(r) + 1 and N(r) = 2; thus p̄_k = B(r) for k = Z(r). Go to 3. Otherwise, G(r) = 1; set B(r) = B(r) - 1 and N(r) = 2; thus m̄_k = B(r) for k = Z(r). Go to 3.
(f) If r = 1, the feasible solution S(z*, x_1*, x_2*, ..., x_n*) is optimal. Stop. Otherwise, set N(r) = 0, r = r - 1, and go to 4(a).
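The bookkeeping of Algorithm 1 (the arrays Z, N, B, G and the reuse of the canonical form) is what makes it efficient, but the underlying search can be sketched compactly if each relaxation is simply re-solved from scratch. The sketch below assumes SciPy is available and uses its HiGHS linear programming interface; it keeps the depth-first search over temporary bounds and the z + 1 > z* test for integer data, but uses the simplest branching rule (first fractional variable) rather than rule (8). It is an illustration, not the algorithm of the text.

```python
from math import floor, inf
from scipy.optimize import linprog

def branch_and_bound(c, A_eq, b_eq, tol=1e-6):
    """Minimize c @ x subject to A_eq @ x = b_eq, x >= 0 and integer."""
    n = len(c)
    best = {"z": inf, "x": None}

    def search(bounds):
        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        if not res.success:                       # infeasible branch: backtrack
            return
        if res.fun + 1 > best["z"] + tol:         # bound test z + 1 > z* (integer data)
            return
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:                              # all-integer: record new incumbent
            best["z"], best["x"] = res.fun, [int(round(v)) for v in res.x]
            return
        k = frac[0]                               # simplest rule; the text uses (8)
        lo, hi = bounds[k]
        down, up = list(bounds), list(bounds)
        down[k] = (lo, floor(res.x[k]))           # branch x_k <= [b_k']
        up[k] = (floor(res.x[k]) + 1, hi)         # branch x_k >= [b_k'] + 1
        search(down)
        search(up)

    search([(0, None)] * n)
    return best["z"], best["x"]
```

Applied to the example that follows, branch_and_bound([3, 4, 0, 0], [[2, 1, -1, 0], [1, 3, 0, -1]], [1, 4]) should return (7.0, [1, 1, 2, 0]), in agreement with the solution found below.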
Example

Find integers x_j ≥ 0 that minimize z when

3x_1 + 4x_2            = z,
2x_1 +  x_2 - x_3      = 1,
 x_1 + 3x_2      - x_4 = 4.
1. We solve the problem as a linear program to obtain

-z + (5/3)x_1 + (4/3)x_4 = -16/3,
x_2 + (1/3)x_1 - (1/3)x_4 = 4/3,
x_3 - (5/3)x_1 - (1/3)x_4 = 1/3.

The solution is z = 16/3, x_1 = 0, x_2 = 4/3, x_3 = 1/3, x_4 = 0.
2. r = 1. D_2 = 5, D_3 = ∞. We set x_3 ≥ 1 and W(1) = ∞.

3. The new canonical form is

-z + x_3 + x_4 = -5,
x_2 + (1/5)x_3 - (2/5)x_4 = 7/5,
x_1 - (3/5)x_3 + (1/5)x_4 = -1/5,

where we have the solution z = 6, x_1 = 2/5, x_2 = 6/5, x_3 = 1, x_4 = 0.

2. r = 2. D_1 = 5, D_2 = 5, I_2 = 5/2, I_1 = 5/3; U_2 = 7, U_1 = 8, V_2 = 8, V_1 = 7. We take W_2 = V_2 = 8. We set x_2 ≤ 1 and W(2) = 8.

3. The new canonical form is

-z - 5x_2 + 3x_4 = -12,
5x_2 + x_3 - 2x_4 = 7,
x_1 + 3x_2 - x_4 = 4,

where we have the solution z* = 7, x_1* = 1, x_2* = 1, x_3* = 2, x_4* = 0.

4. Backtrack. W(2) > 6 and W(1) > 6, where 6 = z* - 1. The current feasible solution is minimal.
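The result can be cross-checked by handing the same data directly to a mixed-integer solver. The sketch below assumes SciPy 1.9 or later, whose linprog accepts an integrality argument and passes the problem to HiGHS.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3, 4, 0, 0])                   # minimize 3x1 + 4x2
A_eq = np.array([[2, 1, -1, 0],              # 2x1 +  x2 - x3 = 1
                 [1, 3,  0, -1]])            #  x1 + 3x2 - x4 = 4
b_eq = np.array([1, 4])

res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4,
              integrality=[1, 1, 1, 1],      # all four variables integer
              method="highs")
print(res.x, res.fun)                        # expected: [1. 1. 2. 0.] and 7.0
```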
2  TIGHTENING THE BOUNDS
In Algorithm 1 the upper bounds m_j and the lower bounds p_j = 0 remain fixed. The m̄_j and p̄_j are regarded as temporary bounds. Whenever a continuous linear programming solution is found in steps 1, 3, or 4, further bounds may be placed on the variables. This tightening of the bounds will tend to improve the efficiency of the algorithm. The improvement is possible when a feasible integer solution is available with objective value z*. Thus, if we are interested in obtaining a smaller objective value, we need consider only values of the objective function given by

z = z_0' + Σ_{j=m+1}^{n} c̄_j x_j ≤ z* - 1.
We can obtain bounds on the nonbasic variables x_s where c̄_s ≠ 0.
If c̄_s < 0, we achieve x_s ≥ {u_s}, where

u_s = m_s + (z* - 1 - z_0')/c̄_s.

If c̄_s > 0, we have x_s ≤ [u_s], where

u_s = p_s + (z* - 1 - z_0')/c̄_s.
Thus, the lower bound on the variable x_s can be increased to {u_s} if the present bound is smaller (when c̄_s < 0). Also, the upper bound on x_s can be reduced to [u_s] if the present bound is larger (when c̄_s > 0). Similar results hold for the bounds m̄_j and p̄_j during the branching process, if we define u_s by replacing m_s and p_s by m̄_s and p̄_s, respectively. In addition, we can use (6) and (7) to tighten the bounds on the x_i by requiring that z ≤ z* - 1. Using (6), we obtain
zO’
+ Dibi’ - Dixi;
thus when D i> 0 we have x i 2 {ui}where 0.I
= b.’ 1
Using (7), we obtain

z ≥ z_0' - I_i b_i' + I_i x_i;

thus, when I_i > 0, we have x_i ≤ [u_i], where

u_i = b_i' + (z* - 1 - z_0')/I_i.
We replace the bounds m̄_i and p̄_i by these new bounds if the latter are tighter.
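The four tightening formulas can be applied mechanically after each continuous solution. The following is a minimal sketch, assuming the reduced costs, bounds, and (optionally) the D_i, I_i, and b_i' of the basic variables are available as plain Python dictionaries and that all bounds are numbers (math.inf is acceptable for a missing upper bound); the function and argument names are illustrative.

```python
import math

def tighten_bounds(z0, z_star, cbar, at_upper, lower, upper,
                   D=None, I=None, b=None):
    """Tighten variable bounds using z <= z* - 1 (integer objective data).

    cbar     -- {j: reduced cost of nonbasic x_j}
    at_upper -- set of nonbasic indices currently at their upper bound
    lower, upper -- current bounds, modified in place
    D, I, b  -- optional {i: D_i}, {i: I_i}, {i: b_i'} for basic variables
    """
    slack = z_star - 1 - z0                      # room left below the incumbent
    for j, c in cbar.items():
        if c < 0 and j in at_upper:              # x_j sits at its upper bound m_j
            u = upper[j] + slack / c             # u_s = m_s + (z* - 1 - z0')/cbar_s
            lower[j] = max(lower[j], math.ceil(u))
        elif c > 0 and j not in at_upper:        # x_j sits at its lower bound p_j
            u = lower[j] + slack / c             # u_s = p_s + (z* - 1 - z0')/cbar_s
            upper[j] = min(upper[j], math.floor(u))
    if D and I and b:
        for i in b:
            if D.get(i, 0) > 0 and math.isfinite(D[i]):
                lower[i] = max(lower[i], math.ceil(b[i] - slack / D[i]))
            if I.get(i, 0) > 0 and math.isfinite(I[i]):
                upper[i] = min(upper[i], math.floor(b[i] + slack / I[i]))
    return lower, upper
```

In practice such a routine would be called whenever a new incumbent value z* becomes available, immediately shrinking the region the remaining branchings have to explore.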
3  THE MIXED INTEGER PROBLEM
The algorithm in Section 1 is modified slightly to handle the mixed integer case where only some of the xi variables are restricted to be integers. The remaining variables can be integers or fractions and are
never picked to be the branching variables. The enumeration is performed for the integer-restricted variables only; eventually a feasible solution results for both the integer and fractional variables. If the objective value is also constrained to be an integer, it appears simplest to replace the objective function in (1) by
and include the additional constraint
where both new variables are nonnegative integers. If the objective value z can be fractional, we replace steps 3(a) and 4(a) of the algorithm by:

3. (a) If the solution produces an objective value z with z ≥ z*, go to 4.
4. (a) If W(r) ≥ z*, go to 4(f). Otherwise, go to 4(b).
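The only structural change to the search itself is the pool of branching candidates; a short illustrative helper (the names are not from the text) makes the point.

```python
def fractional_candidates(x, integer_vars, tol=1e-6):
    """Branching candidates in the mixed case: only the integer-restricted
    variables whose current LP value is fractional are eligible."""
    return [i for i in integer_vars if abs(x[i] - round(x[i])) > tol]
```

When this list is empty, the continuous variables may still be fractional, but the solution is feasible for the mixed integer problem.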
Example

Find x_1 ≥ 0, integer x_2 ≥ 0, x_3 ≥ 0, and integer x_4 ≥ 0 that minimize z when

5x_1 -  x_2 -  x_3 + 2x_4 = z,
9x_1 - 2x_2 - 5x_3 + 4x_4 = 2,
        x_2 +  x_3 +  x_4 = 4.
1. We solve the problem as a linear program to obtain

-z + (1/2)x_1 + (3/2)x_3 = -1,
x_2 - (3/2)x_1 + (3/2)x_3 = 7/3,
x_4 + (3/2)x_1 - (1/2)x_3 = 5/3.

The solution is z = 1, x_1 = 0, x_2 = 7/3, x_3 = 0, x_4 = 5/3.

2. r = 1. D_2 = 1, D_4 = 1/3, I_2 = 1/3, I_4 = 3; U_2 = 4/3, U_4 = 11/9, V_2 = 11/9, V_4 = 2. W_4 = V_4 = 2. We set x_4 ≤ 1 and W(1) = 2.
3. The new canonical form is

-z + (5/3)x_3 - (1/3)x_4 = -14/9,
x_2 + x_3 + x_4 = 4,
x_1 - (1/3)x_3 + (2/3)x_4 = 10/9,

where we have the solution z* = 11/9, x_1* = 4/9, x_2* = 3, x_3* = 0, x_4* = 1.

4. Backtrack. W(1) = 2 ≥ 11/9 = z*. The current feasible solution is minimal.
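As with the pure integer example, the mixed problem can be cross-checked with a solver; the sketch assumes SciPy 1.9 or later and marks only x_2 and x_4 as integer through the integrality mask.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([5, -1, -1, 2])                     # 5x1 - x2 - x3 + 2x4
A_eq = np.array([[9, -2, -5, 4],                 # 9x1 - 2x2 - 5x3 + 4x4 = 2
                 [0,  1,  1, 1]])                #        x2 +  x3 +  x4 = 4
b_eq = np.array([2, 4])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4,
              integrality=[0, 1, 0, 1],          # only x2 and x4 integer
              method="highs")
print(res.x, res.fun)    # expected: x ~ [4/9, 3, 0, 1], z = 11/9
```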
Problems
1. Use the branch and bound technique to solve the integer problem: + 5x +'ox 3 3 3
-z
- -3x 9
4 -
+ 3-x3 - 4x4 = xz - +x3 - i3 x 4 = 10
XI
-2 37
s3 '
2. Find the minimum z for the integer problem : -2 - I x +'x 3 1 3 4=-+, x j + $XI - +x4 = x2
3,
- +Xl + 5x4 = f ;
0 ≤ x_j ≤ 1 for j = 1, 2, 3, 4.
3. Solve the mixed integer problem on page 189 where z is constrained to be an integer.

4. Minimize z for integers x_j ≥ 0 in
-z
+ $xl + -yx3 = -3, x2 + i X 1 + +x3 = 1 x 4 + + x 1+ 9x3 = 4. 29
References

1. Dakin, R. J., A tree-search algorithm for mixed integer programming problems, Comput. J. 8 (3), 250 (1965).
2. Driebeck, N. J., An algorithm for the solution of mixed integer programming problems, Management Sci. 12, 576 (1966).
3. Land, A. H., and Doig, A. G., An automatic method of solving discrete programming problems, Econometrica 28, 497 (1960).
4. Little, J. D. C., Murty, K. G., Sweeney, D. W., and Karel, C., An algorithm for the traveling salesman problem, Operations Res. 11, 972 (1963).
AUTHOR INDEX
Numbers in parentheses are reference numbers and indicate that an author's work is referred to although his name is not cited in the text. Numbers in italics show the page on which the complete reference is listed.

Balas, E., 91, 102
Balinski, M. L., 9, 12
Beale, E. M. L., 91, 102
Bellman, R., 6, 12, 81, 102
Brooks, R. B. S., 5(8), 12
Carlson, R. C., 5, 12
Cord, J., 6, 12
Dakin, R. J., 178, 191
Dantzig, G. B., 1, 9, 10, 12, 13, 39, 45
Doig, A. G., 177, 191
Dreyfus, S. E., 6, 12
Driebeck, N. J., 180, 191
Freeman, R. J., 5, 12
Gilmore, P. C., 6, 12, 96, 98, 102
Glover, F., 47, 67, 70, 80
Gogerty, D. C., 5(8), 12
Gomory, R. E., 6, 12, 47, 80, 96, 98, 102, 103, 128, 133, 158, 175
Graves, G. W., 5(8), 12
Greenberg, H., 5(11), 6, 12, 97, 102, 155(2), 175
Griffin, H., 136, 153
Hadley, G., 39, 45
Hanssmann, F., 6, 12
Hess, S. W., 9, 12
Hirsch, W. M., 9, 12
Kadane, J. B., 6, 12
Karel, C., 181(4), 191
Kolesar, P. J., 6, 12
Land, A. H., 177, 191
Lawler, E. L., 5(17), 12
Le Garff, A., 91, 102
Little, J. D. C., 181, 191
Malgrange, Y., 91, 102
Manne, A. S., 9, 12
Martin, G. T., 121, 133
Miller, C. E., 7, 9, 12
Murty, K. G., 181(4), 191
Nemhauser, G. L., 5, 12
Shapiro, J. F., 6, 12
Simonnard, M., 40, 45
Sweeney, D. W., 181(4), 191
Tucker, A. W., 7(20), 12
Wagner, H. M., 6, 12
Weaver, J. B., 9, 12
Wolfe, P., 9, 12, 21(4), 45
Young, R. D., 47, 80
Zemlin, R. A., 7(20), 12
SUBJECT INDEX
A All-integer algorithm, 52-53 example, 53-55 All-integer methods, 47-80 bounds on solution, 60-62 convergence to optimality, 59-62 equivalent programs, 48, 49, 55-59 improving nonoptimal solution, 49-55 negative objective constants, 66-67 example, 67 Apportionment, 9 Artificial variables, 22
B Basic feasible solution, 15 Basic variables, 15 Bounded variable all-integer algorithm, 63 example, 64-65 Bounded variable continuous solution algorithm, 125-126 example, 126-127 Bound escalation method, see Bounding forms Bounding forms, 67-72 algorithm, 69-70 example, 70-72 Branch and bound algorithm, 185-186 example, 186-187 Branch and bound procedures, 177-191 tightening bounds, 187-188
C Canonical form, 15 Capital budgeting, 6 Capital investment, 6 Chinese remainder theorem, 151 Common multiple, 147 Congruent integers, 145-147 Continuous solution algorithm, 109-110 convergence to optimality, 111-115 example, 110-111 Continuous solution methods, 103-132 bounding determinant value, 121-123 algorithm, 121-123 example, 123 improving nonoptimal solution, 106-111 reduction to all-integer format, 115-121 algorithms, 117-120 examples, 118-121 Cost ranging, 180 Cutting stock problem, 6
D Dichotomies, 10-11 Diophantine equations, 141-145 example, 144-145 Dual simplex method, 39, 40 Duality, 38 Dynamic programming enumeration, 95-100 for integer programs, 95-97 for knapsack problems, 97-100 Dynamic programming solution algorithm, 160-161 example, 161-162 Dynamic programming solutions, 155-175 accelerated solution, 169-174 example, 173-174 reducing number of congruences, 162-169 algorithm, 168 example, 169
E Elementary matrix, 57, 116 Enumeration all-integer, 81-102 accelerated solution, 91-95 example, 94-95 dynamic programming formulation, 95-97 for integer programs, 86-91 algorithm, 87-88 example, 90-91 for objective function, 81-86 algorithm, 83 example, 84-86 continuous solution, see Dynamic programming solutions Euclidean algorithm, 135-141 example, 140
F Feasible canonical form, 15 Fixed-charge problem, 8
G Greatest common divisor, 135-142
I Infeasibility form, 22 Inverse matrix method, 28-33 example, 32-33
K Knapsack problem, 5-6 algorithm, 99-100 dynamic programming formulation, 97-100 example, 100
L Least common multiple, 147 Lexicographic dual simplex method, 38-44, 107 example, 42-43 Lexicographic ordering, 40, 50 Lexicopositive vectors, 50 Linear congruences, 145-150 examples, 149-150 Linear interpolation, 9 Linear programming, 1, 13-45, see also Simplex method degeneracy, 20 dual problem, 39 general format, 13-14
inconsistencies, 24 nonunique solutions, 16 primal problem, 39 recognition of optimality, 14-21 redundancies, 24 variables with upper bounds, 33-38 example, 37-38
M Map coloring, 10-11 Matrix form, 115 Mixed integer problem, 128-131, 188-189 examples, 131, 189-190
N Network reliability, 6 Neyman-Pearson lemma, 6 Nonbasic variables, 15 Nonlinear approximation, 9-10 Nonlinear programming, 10 Number theory, 135-153
O Objective function, 14 Optimality theory, 48-49, 106, see also All-integer methods; Continuous solution methods
P Pivot column, 20 Pivot element, 20 Pivoting, 20 Plant location, 9 Primal integer programming, 72-77 algorithm, 75 example, 75-77 Product form of inverse, 116
Q Quadratic assignment problem, 4-5
R Relatively prime integers, 142 Residue classes, 146 Revised simplex method, see Inverse matrix method
S Scheduling, 2-5 Separable programming, 9 Sequencing of jobs, 8 Simplex algorithm, 22-24 example, 26-27 Phase I, 23, 24, 25 Phase II, 23, 25 tableau form, 27-28 Simplex method, 21-27 Simplex multipliers, 30, 39 Slack variables, 14 Surplus variables, 14 System of linear congruences, 151-152 example, 151-152
T Transportation problems, 9 Traveling salesman problem, 6-8 Tree search, 178
U Upper bound variable problems, 62-65, 123-127