Computational Optimization and Applications, 21, 277–300, 2002
© 2002 Kluwer Academic Publishers. Manufactured in The Netherlands.
A Center Cutting Plane Algorithm for a Likelihood Estimate Problem

FERNANDA RAUPP [email protected]
National Laboratory for Scientific Computing, RJ, Brazil

CLÓVIS GONZAGA [email protected]
Federal University of Santa Catarina, Florianópolis, SC, Brazil

Received July 10, 2000; Revised October 20, 2000; Accepted May 15, 2001
Abstract. We describe a method for solving the maximum likelihood estimate problem of a mixing distribution, based on an interior cutting plane algorithm with cuts through analytic centers. From increasingly refined discretized models of the statistical problem we construct a sequence of inner non-linear problems, which we solve approximately by applying a primal-dual algorithm to the dual formulation. Refining the statistical problem is equivalent to adding cuts to the inner problems.

Keywords: maximum likelihood estimation, cutting planes, analytic center
1. Introduction
We propose a new interior point algorithm that uses the cutting plane method to find the maximum likelihood estimate of a mixing distribution. Like the algorithm developed by Lesperance and Kalbfleisch [8], our algorithm follows the geometric interpretation proposed by Lindsay [9], which uses the concept of directional derivatives to reformulate the statistical problem as an optimization problem. We use the same approach as Coope and Watson [1], who solve the dual of the optimization problem in the context of semi-infinite programming.

The likelihood maximization problem can be modeled as the maximization of a nonlinear concave function over a convex region, which in turn is formed by an infinite number of linear constraints of which we do not have complete knowledge. Cutting plane algorithms do not need prior knowledge of this full linear system: they add one or more constraints per iteration, constructing an increasingly precise description of the feasible set during the process. Thus, the cutting plane algorithm deals with a sequence of linearly constrained convex problems, each one derived from the former by adding one or more constraints. This method for solving convex optimization problems becomes very competitive when it uses as test points the analytic centers proposed by Sonnevend [13]. Interior point optimization algorithms basically follow a trajectory in the interior of the feasible region of the optimization problem, known as the central path. They have shown
great efficiency in practice, especially for solving large-scale linearly constrained convex optimization problems, like the ones we deal with in this paper. The evolution of research on interior point methods is described in the surveys by Gonzaga [5], Den Hertog [6] and Todd [15]. Terlaky and Vial [14] developed interior point algorithms with primal and primal-dual approaches that solve for the maximum likelihood estimate of a probability density function without using the cutting plane method.

Our algorithm uses approximations of analytic centers to cut the feasible region. We describe in detail the modeling of the statistical problem as a convex optimization problem, and show how the linear approximations of the feasible set correspond to discretized statistical problems. Increasingly precise linear models correspond to increasingly refined discretizations of the statistical problem. The algorithm is composed of a sequence of outer iterations, each of which refines a linearly constrained problem by adding new constraints and calls an inner algorithm to find a central point for this problem. For the inner algorithm we use the primal-dual path following approach introduced by Kojima et al. [7], which has proved to be the most effective up to now.

Cutting plane methods based on analytic centers are surveyed in [3, 4]. These algorithms are polynomial when used for solving convex feasibility problems, as proved by Goffin et al. in [2], and Ye [17] proved that this complexity is preserved when multiple cuts are introduced in each iteration. Recently, Goffin and Vial [3, 4] showed the same complexity result when using shallow, deep or very deep cuts in the analytic center cutting plane method. Since the convex feasibility problem is equivalent to the maximization of a concave function in a convex set, the same complexity bound should be associated with the analytic center cutting plane algorithm that finds the maximum likelihood estimate of a mixing distribution. Nevertheless, we do not examine this complexity issue in this paper. An alternative way of dealing with the problem uses localization sets bounded by a nonlinear constraint given by the objective function. This approach, proposed by Mokhtarian and Goffin [12], allows computing a complexity estimate, a stronger result than global convergence. We used cutting planes because the specific objective function in our problem has a simple format, which adapts very well to this approach, as we shall see below.

1.1. Notation

In the statistical context, we use upper case letters to denote random variables and lower case ones to denote possible values assumed by them. Given a vector represented by a lower case letter (x, s, t, y, z will appear in the text), the corresponding upper case letter denotes the diagonal matrix whose diagonal elements are its components.

2. The likelihood problem
Consider an n-dimensional vector V of independent continuous random variables. Assume that:
(i) All variables V_i, i = 1, …, n, have the same given probability density f_{V_i|Θ}(v_i | θ), conditioned on a populational parameter θ ∈ Ω, where Ω ⊂ IR is a closed (possibly unbounded) interval. All other populational parameters are fixed and known.

(ii) The parameter is given by a random variable Θ defined on Ω. We have no information on the distribution of Θ: the purpose of this work will be to find a maximum likelihood distribution F_Θ for it. If Θ has this distribution, then the density of each V_i, i = 1, …, n, is given by

f_{V_i}(v_i) = ∫_Ω f_{V_i|Θ}(v_i | θ) dF_Θ(θ),

and since they are all assumed independent, that of V is

f_V(v) = ∏_{i=1}^n ∫_Ω f_{V_i|Θ}(v_i | θ) dF_Θ(θ).   (1)

If in fact Θ has a density function f_Θ with support Ω, i.e., such that f_Θ(θ) = 0 for θ ∈ IR\Ω, then we can write F_Θ(θ) = ∫_{−∞}^θ f_Θ(λ) dλ, and in the expressions above dF_Θ(θ) can be replaced by f_Θ(θ) dθ.

(iii) An observation v of V was made (hence v is fixed from now on). Assume that m ≤ n of the v_i are distinct, so that we have values v_i, i = 1, …, m, with multiplicities t_i ≥ 1, Σ_{i=1}^m t_i = n. Now the expression (1) depends only on F_Θ and can be written, with the new indexing and taking logarithms,

l(F_Θ) = Σ_{i=1}^m t_i log ∫_Ω a_i(θ) dF_Θ(θ),   (2)

where θ ∈ Ω ↦ a_i(θ) = f_{V_i|Θ}(v_i | θ) are known functions for i = 1, …, m.

Assumptions: We assume that for i = 1, …, m, the a_i(·) are bounded, continuous, and either Ω is compact or lim_{|θ|→∞} a_i(θ) = 0.

2.1. The problem
Let ℱ be the family of all probability distributions defined on Ω. The value in (1), or equivalently in (2), represents the likelihood associated with the distribution F_Θ. The maximum likelihood will be given by any solution of the problem

maximize_{F_Θ ∈ ℱ}  l(F_Θ) = Σ_{i=1}^m t_i log ∫_Ω a_i(θ) dF_Θ(θ),   (3)

where t_i ≥ 1, i = 1, …, m, are known integers and the m functions a_i(·) are known.
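For a distribution with finite support, the integral in (3) reduces to a weighted sum, so the log-likelihood is easy to evaluate numerically. The sketch below (Python; the normal-density kernels and all numbers are our own illustrative assumptions, not data from the paper) evaluates l(F_Θ) for a discrete mixing distribution:

```python
import numpy as np

def log_likelihood(t, A, x):
    """l(F) = sum_i t_i * log( sum_j x_j * a_i(theta_j) ) for a mixing
    distribution with finite support; A[i, j] = a_i(theta_j)."""
    z = A @ x                      # z_i = integral of a_i with respect to dF
    return float(t @ np.log(z))

# Hypothetical instance: a_i(theta) = N(theta, 1) density at observation v_i.
v = np.array([0.0, 1.0, 2.0])      # distinct observed values
t = np.array([2.0, 1.0, 1.0])      # multiplicities, sum = n = 4
support = np.array([0.5, 1.5])     # candidate support points theta_j
A = np.exp(-0.5 * (v[:, None] - support[None, :])**2) / np.sqrt(2.0 * np.pi)
x = np.array([0.6, 0.4])           # weights on the support, summing to one

ll = log_likelihood(t, A, x)
```

Since every density value here is below one, the resulting log-likelihood is negative; maximizing it over the weights x (and the support) is exactly problem (3) restricted to discrete distributions.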
Remarks on ℱ: The one-dimensional case (m = 1) is trivial: we must maximize ∫_Ω a(θ) dF_Θ(θ) = ∫_Ω a(θ) f_Θ(θ) dθ. Choosing a point θ̂ ∈ argmax_{θ∈Ω} a(θ) and any distribution F_Θ ∈ ℱ, we have ∫_Ω a(θ) dF_Θ(θ) ≤ a(θ̂) = ∫_Ω a(θ) δ(θ̂) dθ, where δ(θ̂) is the Dirac δ "function" or unit impulse at θ̂. Equivalently, F̂_Θ = U(θ̂), the step function at θ̂, gives l(F̂_Θ) = log a(θ̂). Hence U(θ̂) is an optimal solution for any maximizer θ̂ of a(θ) on Ω. Since the case m = 1 is trivial, we shall assume henceforth that n ≥ m > 1.

A similar reasoning will be used for the n-dimensional case. Our experiment gives only m distinct data values. It will be possible to find a maximum likelihood distribution with at most m discrete values, or, equivalently, a density function given by the summation of m impulses. This is illustrated by the computational tests in Section 6: the parameter θ is the mean of a normal distribution, and the samples are taken from a mixture of data points with three normal distributions. The resulting maximum likelihood distributions for θ are composed of a small number of step functions, at points near the original means. Note that the resulting number of step functions may be much smaller than m.

2.2. The solution set
The problem (3) will be simplified following the geometrical approach proposed by Lindsay [9]. Let U(θ) denote the step function at θ: U(θ)(λ) = 0 for λ ∈ (−∞, θ), U(θ)(λ) = 1 for λ ≥ θ. A distribution F_Θ ∈ ℱ with finite support {θ_1, θ_2, …, θ_p} is given by

F_Θ = Σ_{j=1}^p x_j U(θ_j),   Σ_{j=1}^p x_j = 1,   x ∈ IR^p_+,   (4)

where for j = 1, …, p, θ_j ∈ Ω and U(θ_j) ∈ ℱ is a trivial distribution. The integrals in (3) now become, for i = 1, …, m,

z_i(F_Θ) = ∫_Ω a_i(θ) dF_Θ(θ) = Σ_{j=1}^p x_j a_i(θ_j).   (5)
So, the distribution F_Θ with finite support {θ_1, θ_2, …, θ_p} is associated with a convex combination of the vectors a(θ_j), j = 1, …, p. Define C = {a(θ) | θ ∈ Ω}. We have conv(C) = {z(F_Θ) | F_Θ ∈ ℱ with finite support}.

Lemma 1. The set C is bounded, and either C is closed or cl(C) = C ∪ {0}.

Proof: C is bounded because the a_i(·) are bounded. Consider a sequence {a(θ_i)}_{i∈IN} such that a(θ_i) → ā. Then either {θ_i}_{i∈IN} is unbounded and a(θ_i) → 0 by hypothesis, or {θ_i}_{i∈IN} is bounded and has an accumulation point θ̄. In this case ā = a(θ̄), due to the continuity of a(·), completing the proof. ✷
Define C̄ = cl(C). Since this is a compact set, conv(C̄) is compact. Due to the integration in (5), any distribution F_Θ can be approximated by a sequence of step functions. Hence

{z(F_Θ) | F_Θ ∈ ℱ} = cl{z(F_Θ) | F_Θ ∈ ℱ, F_Θ with finite support} = conv(C̄),

and the problem (3) becomes

maximize { Σ_{i=1}^m t_i log z_i | z ∈ conv(C̄) }.   (6)
This problem has a strictly concave monotonic objective function and a compact convex feasible set. Hence it has a unique optimal solution ẑ > 0, ẑ ∈ conv(C̄). We now show that actually ẑ ∈ conv(C):

Lemma 2. The problem (6) has a unique optimal solution ẑ > 0 such that ẑ ∈ conv(C), and hence the problem can be written as

ẑ = argmax { Σ_{i=1}^m t_i log z_i | z ∈ conv(C) }.   (7)
Proof: The problem (6) has a strictly concave monotonic objective function and a compact feasible set. Hence it has a unique optimal solution ẑ ∈ conv(C̄). We must prove that ẑ ∈ conv(C). Assume that ẑ ∉ conv(C). Using Lemma 1, C̄ = C ∪ {0}, and hence

ẑ = Σ_{j=1}^{p−1} x_j z_j + x_p · 0,   with z_j ∈ C for j = 1, …, p−1

and α = Σ_{j=1}^{p−1} x_j < 1. Setting y_j = x_j/α > x_j, j = 1, …, p−1, it follows that z̃ = Σ_{j=1}^{p−1} y_j z_j ∈ conv(C), z̃ > ẑ. From the monotonicity of the objective, Σ_{i=1}^m t_i log z̃_i > Σ_{i=1}^m t_i log ẑ_i, contradicting the optimality of ẑ and completing the proof. ✷

2.3. Discretization
As we saw above, the feasible solutions are convex combinations of vectors in C. Notice that C is a continuous curve in IR^m, given by θ ∈ Ω ↦ a(θ). The unique optimal solution ẑ belongs to the boundary of conv(C), because of the monotonicity of the objective function. Using Carathéodory's theorem (specialized for the boundary of conv(C)), ẑ will be a convex combination of no more than m elements of C, which corresponds to a distribution F̂_Θ with
support of size m or less. Note that although ẑ is unique, there may exist more than one distribution F_Θ with z(F_Θ) = z(F̂_Θ).

The first method developed used distributions of fixed support size m and iteratively updated the multipliers x_j, j = 1, …, m. The following methods, more competitive than the first one, add one or more points to the support and then adjust x_j, j = 1, …, p_k, with p_k growing with k (see [8] for more details). Our method will follow this scheme with a new approach to update the multipliers x_j, j = 1, …, p_k, as we shall see in the next section.

2.4. Optimality
Consider the problem (7). Denote the objective function¹

z ∈ IR^m_{++} ↦ f(z) = −Σ_{i=1}^m t_i log z_i,   ∇f(z) = −Z^{−1} t,

where each feasible z ∈ conv(C) corresponds to one (or more) probability distribution F_Θ ∈ ℱ, with l(F_Θ) = −f(z).

Given z ∈ IR^m_{++} and h ∈ IR^m, the directional derivative of f(·) along h is f′(z, h) = −t^T Z^{−1} h. Taking in particular h = a(θ̄) − z, θ̄ ∈ Ω, we obtain

g(θ̄, F_Θ) = f′(z, a(θ̄) − z) = −Σ_{i=1}^m t_i (a_i(θ̄)/z_i − 1).   (8)
Now g(θ̄, F_Θ) corresponds to the variation in f(·) when z is moved in the direction of a point a(θ̄) ∈ C, or equivalently, when the distribution F_Θ is changed to accommodate one more supporting point θ̄.

Since f(·) is differentiable in IR^m_{++}, ẑ minimizes f(·) on conv(C) if, and only if, ∇f(ẑ)^T (z − ẑ) ≥ 0 for all z ∈ conv(C), or, equivalently, g(θ, F_Θ) ≥ 0 for all θ ∈ Ω. We reproduce the following results from [10], whose proofs follow easily from the statements above:

Theorem 1.
(i) ẑ ∈ conv(C) is an optimal solution for the problem (6) if, and only if, ẑ > 0 and sup_{θ∈Ω} Σ_{i=1}^m t_i a_i(θ)/ẑ_i = n.
(ii) F̂_Θ maximizes l(F_Θ) if, and only if, inf_{θ∈Ω} g(θ, F̂_Θ) = 0.
(iii) If F̂_Θ maximizes l(F_Θ), then the support of F̂_Θ lies in the set {θ | g(θ, F̂_Θ) = 0}.
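Expression (8) is cheap to evaluate, and Theorem 1 turns it into a practical optimality test: scan θ and check whether g(θ, F_Θ) ever goes below zero. A minimal sketch (Python; the numeric values are our own illustration, not data from the paper):

```python
import numpy as np

def g(t, z, a_theta):
    """Directional derivative (8): change of f when z moves toward a(theta).
    By Theorem 1, z is optimal iff g(theta, F) >= 0 for every theta."""
    return -float(np.sum(t * (a_theta / z - 1.0)))

t = np.array([2.0, 1.0, 1.0])      # multiplicities, n = 4
z = np.array([0.30, 0.20, 0.25])   # a candidate feasible point

g_same = g(t, z, z)                # moving z toward itself: derivative 0
g_desc = g(t, z, 2.0 * z)          # a(theta) componentwise larger: descent
```

In the second call each ratio a_i(θ)/z_i equals 2, so g = −Σ t_i = −n = −4: moving toward such a point would strictly decrease f, i.e. the candidate z is not optimal.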
3. The problem model
Consider a given discretization {θ_1, θ_2, …, θ_p} of Ω. The problem (6) restricted to this support is

minimize   −Σ_{i=1}^m t_i log z_i
subject to z_i = Σ_{j=1}^p a_i(θ_j) x_j,  i = 1, …, m
           Σ_{j=1}^p x_j = 1
           x_j ≥ 0,  j = 1, …, p
           z_i > 0,  i = 1, …, m.   (9)

If the support {θ_1, θ_2, …, θ_p} is optimal, then a maximum likelihood estimate can be obtained by finding an optimal solution x̂ ∈ IR^p for this problem and setting F_Θ = Σ_{j=1}^p x̂_j U(θ_j).

The algorithm to be described generates a sequence of problems like (9) with support sets of size p_k, which increases with the iteration k. Each external iteration adds one or more points to the support set by adding cuts to the dual of (9). If we solved the internal problems exactly, then the method would be Benders-like. Our approach will be based on central points: each iteration will find an approximate central point associated with a barrier parameter µ^k for (9), and we shall make µ^k tend to zero. In this section we describe the external algorithm. The remaining sections will study the internal problem in detail.

Each iteration k of the external algorithm starts with a support {θ_1, θ_2, …, θ_{p_k}} of size p_k and constructs the problem (P^k) as follows: let A^k ∈ IR^{m×p_k} be the matrix with columns a(θ_j) ∈ IR^m_+, j = 1, …, p_k, and set

(P^k)  minimize   −Σ_{i=1}^m t_i log z_i
       subject to A^k x − z = 0
                  e^T x = 1
                  x ≥ 0
                  z > 0.

The feasible set for (P^k) is

X^k = {(x, z) ∈ IR^{m+p_k} | A^k x − z = 0, e^T x = 1, x ≥ 0, z > 0},

and the set of interior points is

X̊^k = {(x, z) ∈ X^k | x > 0}.   (10)
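For a fixed support, (9) can be solved by many methods. The paper's choice is a primal-dual interior point algorithm; purely as an illustrative baseline (not the paper's method), the classical EM-type fixed-point update x_j ← x_j · (1/n) Σ_i t_i a_i(θ_j)/z_i, which preserves the simplex constraint and monotonically increases the likelihood, is a few lines of Python (data hypothetical):

```python
import numpy as np

def solve_discretized(A, t, iters=500):
    """Approximately solve (9) for a fixed support by the classical EM-type
    fixed-point update (NOT the paper's interior point inner algorithm):
    x_j <- x_j * (1/n) * sum_i t_i a_i(theta_j) / z_i,  with z = A x."""
    n = t.sum()
    x = np.full(A.shape[1], 1.0 / A.shape[1])  # start at the simplex center
    for _ in range(iters):
        z = A @ x
        x = x * (A.T @ (t / z)) / n            # update stays in the simplex
    return x

A = np.array([[0.8, 0.1],                      # A[i, j] = a_i(theta_j)
              [0.2, 0.7]])
t = np.array([1.0, 1.0])
x = solve_discretized(A, t)
```

For this 2×2 instance the stationarity condition Σ_i t_i a_i(θ_j)/z_i = n at both support points gives x_1 = 0.44/0.7 ≈ 0.629, and the iteration converges to it.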
Assumption: We assume that X̊^k ≠ ∅. This assumption will be ensured by construction by the algorithm.
3.1. Duality and optimality conditions
Let us study the problem (P^k), which can be rewritten as

(P^k)  minimize   −Σ_{i=1}^m t_i log A_i^k x
       subject to e^T x = 1
                  x ≥ 0,   (11)

where A_i^k, i = 1, …, m, are the rows of A^k. This problem is convex, has a compact feasible set and satisfies the Slater qualification condition because (1/p_k) e > 0 is feasible. Hence it admits a dual problem satisfying strong duality. Consider the following problem in dual format:

(D^k)  maximize   Σ_{i=1}^m t_i log y_i − Σ_{i=1}^m t_i log t_i
       subject to (A^k)^T y + s = ne
                  s ≥ 0
                  y > 0
                  Y^{−1} t ∈ R(A^k),   (12)

where R(A^k) is the column space of A^k.

Lemma 3. The problems (P^k) and (D^k) above satisfy a strong duality relationship.
Proof: For simplicity, let us drop the index k. Let A_i, i = 1, …, m, be the rows of A. The Lagrangean function for (P) is

λ ∈ IR, s ∈ IR^p, x ∈ IR^p ↦ l(x, λ, s) = −Σ_{i=1}^m t_i log A_i x + λ e^T x − s^T x − λ

if Ax > 0, and l(x, λ, s) = +∞ otherwise. At a minimizer of l(·, λ, s),

−Σ_{i=1}^m t_i A_i^T / (A_i x) + λe − s = 0.

Multiplying by x^T, we obtain λ e^T x − s^T x = Σ_{i=1}^m t_i = n. The dual problem (actually the Wolfe dual of (P)) is

maximize   −Σ_{i=1}^m t_i log A_i x + n − λ
subject to Σ_{i=1}^m t_i A_i^T / (A_i x) + s = λe
           s ≥ 0
           Ax > 0.

Since strong duality is guaranteed, at an optimal primal-dual solution x, s, λ the primal and the dual objective values coincide, which implies n − λ = 0. Finally, defining y_i = t_i / A_i x for i = 1, …, m, and substituting into the problem above, we obtain (D). Note that the two last constraints are equivalent to y_i = t_i / A_i x and Ax > 0, completing the proof. ✷

Optimality conditions. Since (P^k) is a convex problem which satisfies a Slater condition, x is an optimal solution if, and only if, x satisfies the Karush-Kuhn-Tucker conditions with appropriate multipliers. Using the same arguments as in the construction of (D^k), it is easy to show that the KKT conditions are the following:
(PD^k)  X s = 0,   (13a)
        Y z = t,   (13b)
        A^k x − z = 0,   (13c)
        (A^k)^T y + s = ne,   (13d)
        x, s ≥ 0,  y, z > 0,   (13e)

where Σ_{i=1}^m t_i = n. Note that the primal constraint e^T x = 1 does not appear above, but it follows from the conditions:

0 = x^T s = x^T (ne − (A^k)^T y) = (ne)^T x − (A^k x)^T y = n e^T x − y^T z = n e^T x − n = n(e^T x − 1).

Since n ≠ 0, we have e^T x = 1.

We shall say that (x, z, y, s) is a feasible solution for (PD^k) if it satisfies (13c)–(13e). It is an interior solution if besides this x, s > 0. When referring to a feasible solution, we shall freely indicate it by the quadruple (x, z, y, s) or by the pair (x, y), since z and s can be obtained from the feasibility conditions.

The internal problem consists in finding a good primal-dual solution. In the next section we shall study the central trajectory for this problem, and characterize these solutions as nearly central points. For the time being we simply state that we shall be interested in primal-dual solutions satisfying (approximately) the relaxed KKT conditions, which coincide with
(PD^k) with the first equation replaced by X s = µe, with µ ∈ IR_{++}. Such points are called central points. Instead of trying to compute exact central points, we shall be content with nearly central points, which are interior points satisfying, instead of (13a), (13b), the following proximity criterion:

‖X s − µe‖ ≤ αµ,   ‖Y z − t‖ ≤ αµ,

where α ∈ (0, 1) is a fixed number, usually taken as 0.5.

Remark. The last constraint in (D^k) is not needed if A^k has full rank. But it is quite possible for the curve θ ∈ Ω ↦ a(θ) to be contained in a subspace of dimension smaller than m, and then the constraint is needed in the dual problem. In our treatment this constraint will be irrelevant, as we shall remark in the next section: at central points it is always satisfied. So, we state in advance the feasible set for (D^k) as

Γ^k = {y ∈ IR^m_{++} | (A^k)^T y ≤ ne}

and its set of interior points as

Γ̊^k = {y ∈ IR^m_{++} | (A^k)^T y < ne}.
Duality gap. Given x ∈ IR^{p_k}_{++} and y ∈ IR^m_{++}, the primal-dual gap between x and y is

Δ(x, y) = −Σ_{i=1}^m t_i log A_i^k x − Σ_{i=1}^m t_i log y_i + Σ_{i=1}^m t_i log t_i.
If (x, y) is a feasible primal-dual solution for (PD^k), then Δ(x, y) ≥ 0, with Δ(x, y) = 0 if and only if x and y are respectively optimal primal and dual solutions.

3.2. The main algorithm
The algorithm will construct a sequence of problems by a cutting plane method. Each external iteration increases the support set by adding constraints to the dual problem (D^k) and variables to the primal problem (P^k). The internal algorithm finds a nearly central point associated with µ^k. The generation of cuts and the choice of the new value of the parameter µ are done by an oracle. We state the algorithm now, and then explain each of its steps in detail.
Algorithm 1. The main algorithm

k = 0
Initialization: compute an initial matrix A^0 = [a(θ_1) a(θ_2) ⋯ a(θ_{p_0})] such that the feasible set Γ^0 = {y ∈ IR^m_{++} | (A^0)^T y ≤ ne} for (D^0) is bounded; choose an initial parameter µ^0 and an initial interior solution (x^0, y^0).
REPEAT
    Find a nearly central point (x^k, z^k, y^k, s^k) for (PD^k) and µ^k (internal Newton centering algorithm).
    Call the oracle at (x^k, y^k), which replies with one of:
        STOP: the required precision has been attained.
        add one or more cuts: add these cuts to Γ^k, obtaining A^{k+1} and new problems (P^{k+1}) and (D^{k+1}). Choose µ^{k+1} ∈ (0, µ^k).
        no cut: (P^{k+1}), (D^{k+1}) coincide with (P^k), (D^k). Choose µ^{k+1} ∈ (0, µ^k).
    k = k + 1
END OF REPEAT
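Under two simplifying assumptions of ours, namely that Ω is replaced by a finite grid and that the inner centering step is replaced by a simple fixed-point solver, the outer loop above can be sketched in a few lines (Python; all names and the test instance are illustrative, and the inner solver is only a stand-in for the paper's primal-dual Newton centering):

```python
import numpy as np

def cutting_plane_mle(a_of_theta, grid, t, eps1=1e-6, max_cuts=50):
    """Sketch of Algorithm 1's outer loop on a finite grid standing in for
    Omega. Inner problems are solved by an EM-type fixed point (stand-in
    for Newton centering); the oracle adds the grid point maximizing
    a(theta)^T y, in the spirit of (14)-(15)."""
    n = t.sum()
    Agrid = a_of_theta(np.asarray(grid))           # column j is a(grid[j])
    # initial cuts: one maximizer per row, so A^0 has no null row (Lemma 4)
    support = sorted({int(np.argmax(Agrid[i])) for i in range(len(t))})
    while True:
        A = Agrid[:, support]
        x = np.full(len(support), 1.0 / len(support))
        for _ in range(2000):                      # inner solve (stand-in)
            x = x * (A.T @ (t / (A @ x))) / n
        y = t / (A @ x)                            # dual estimate y_i = t_i/z_i
        scores = Agrid.T @ y                       # oracle values a(theta)^T y
        j = int(np.argmax(scores))
        done = scores[j] - n <= eps1               # no violated dual constraint
        if done or j in support or len(support) >= max_cuts:
            return np.asarray(grid)[support], x
        support.append(j)                          # add the cut / support point
        support.sort()

# hypothetical instance: normal kernels, two well-separated clusters
v = np.array([0.0, 0.1, 3.0])                      # observations
t = np.array([1.0, 1.0, 1.0])
kern = lambda g: np.exp(-0.5 * (v[:, None] - g[None, :])**2) / np.sqrt(2*np.pi)
thetas, weights = cutting_plane_mle(kern, np.linspace(-2.0, 5.0, 141), t)
```

The returned support stays small, illustrating the remark of Section 2.1 that the maximum likelihood mixing distribution needs at most m atoms.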
From now on we study in detail each step of the algorithm.

Initialization. Now we show how to obtain an initial matrix A^0 leading to a bounded feasible set. We explain how to choose an initial set of cuts and then show that they define a bounded feasible set. The functions θ ∈ Ω ↦ a_i(θ) are continuous and bounded, and either Ω is compact or lim_{|θ|→∞} a_i(θ) = 0, i = 1, …, m, by assumption. Hence they have maximizers. The following fact is true: it is possible to choose a support θ_1, …, θ_p in Ω with p ≤ m such that all rows of A^0 = [a(θ_1) a(θ_2) ⋯ a(θ_p)] have at least one positive entry. In fact, it is enough to choose, for i = 1, …, m, θ_i ∈ argmax_{θ∈Ω} a_i(θ): the diagonal elements a_i(θ_i) will be positive.

Lemma 4. Consider the feasible set for (D^0)

Γ^0 = {y ∈ IR^m_{++} | (A^0)^T y ≤ ne},

where A^0 ∈ IR^{m×p_0}, p_0 ≤ m, has no null rows. Then Γ^0 is bounded and has non-empty interior Γ̊^0 = {y ∈ IR^m_{++} | (A^0)^T y < ne}.

Proof: For δ > 0 sufficiently small, y = δe satisfies (A^0)^T y < ne. It follows that {y ∈ IR^m_{++} | (A^0)^T y < ne} ≠ ∅, and this is the interior of Γ^0 by standard results in convexity. We must prove that Γ^0 is bounded.
Consider y ≥ 0 such that (A^0)^T y ≤ ne, or equivalently, a(θ_j)^T y ≤ n for j = 1, …, p_0. Since a(θ_j) ≥ 0 and y ≥ 0, we deduce that a_i(θ_j) y_i ≤ n for j = 1, …, p_0, i = 1, …, m. For any i = 1, …, m, choose j such that a_i(θ_j) > 0. We have

0 ≤ y_i ≤ n / a_i(θ_j).

This defines a compact box which contains Γ^0, completing the proof. ✷
We should remark that in general we do not need m initial cuts: all that is needed is a set of cuts (maybe just one) leading to a bounded set Γ^0, or equivalently to a matrix A^0 with no null row.

Choice of an initial point: set y = δe ∈ IR^m with δ > 0 such that (A^0)^T y < ne, and x = e/p_0, x ∈ IR^{p_0}. A simple way of choosing δ is to compute the maximum value of λ such that λe is feasible (this is a ratio test), and to take δ = λ/2. A good choice for the value of µ^0 is µ^0 = x^T s / p_0, where s is the slack associated with y by (A^0)^T y + s = ne.

The oracle. Two fixed positive constants ε_1, ε_2 will be used by the oracle to define a stopping rule. At iteration k of the algorithm, assume that the oracle is called from a point (x^k, z^k, y^k, s^k). Let A ∈ IR^{m×p} be the constraint matrix and {θ_1, …, θ_p} be the support.

Oracle: Compute

θ̄ ∈ argmax_{θ∈Ω} a(θ)^T y^k.   (14)

If a(θ̄)^T y^k − n > ε_1, then reply with the new dual constraint

a(θ̄)^T y + s_{p+1} = n.   (15)

Otherwise reply with no cut. In this case, stop the algorithm if µ^k ≤ ε_2.

It may be convenient to generate several cuts simultaneously by choosing a set {θ̄_1, …, θ̄_η} of local maximizers of a(θ)^T y^k in (14). The points θ̄_j, j = 1, …, η, such that a(θ̄_j)^T y^k > n generate new cuts as in (15).

The stopping rule. Assume initially that the equation Y^k z^k = t is satisfied. Then by construction θ̄ minimizes the directional derivative (8), with g(θ̄, F_Θ) = −a(θ̄)^T y + n, and the direction from z^k to a(θ̄) is a steepest descent direction for the primal objective function. New cuts are generated when the value of this directional derivative satisfies g(θ̄, F_Θ) < −ε_1. If µ^k is very small, then the equation Y^k z^k = t is satisfied with high precision, and this gives our stopping criterion. The following lemma shows the limiting situation, for µ tending to zero, for a fixed discretization and an oracle with ε_1 = 0.
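When Ω is sampled on a finite grid, the oracle (14)–(15) reduces to an argmax and a threshold test. A minimal sketch (Python; the grid, the kernels and both dual points y are our own illustrative assumptions):

```python
import numpy as np

def oracle(Agrid, grid, y, n, eps1):
    """Grid-search stand-in for (14)-(15): maximize a(theta)^T y over the
    sampled thetas; return a violated support point, or None for 'no cut'."""
    scores = Agrid.T @ y               # a(theta)^T y for each grid point
    j = int(np.argmax(scores))
    if scores[j] - n > eps1:
        return float(grid[j])          # new dual cut a(theta_bar)^T y <= n
    return None

v = np.array([0.0, 1.0])               # observations (hypothetical)
t = np.array([1.0, 1.0])               # n = 2
grid = np.linspace(-3.0, 4.0, 71)      # finite sample of Omega
Agrid = np.exp(-0.5 * (v[:, None] - grid[None, :])**2) / np.sqrt(2*np.pi)

cut = oracle(Agrid, grid, np.array([6.0, 6.0]), n=t.sum(), eps1=0.0)
no_cut = oracle(Agrid, grid, np.array([1.0, 1.0]), n=t.sum(), eps1=0.0)
```

For the large dual point the maximum of a(θ)^T y exceeds n and a cut near θ = 0.5 (the midpoint of the two kernels) is returned; for the small one, no dual constraint is violated and the oracle answers "no cut".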
Lemma 5. Consider an optimal solution (x, z, y, s) for the discretized problems (P^k) and (D^k), and the corresponding distribution F_Θ. The oracle with ε_1 = 0 generates no cuts from this solution if and only if F_Θ is an optimal solution for the problem (3).

Proof: Assume that (x, z, y, s) solves (P^k) and (D^k). For any θ ∈ Ω, the expression (8) gives

g(θ, F_Θ) = −Σ_{i=1}^m (y_i a_i(θ) − t_i) = −a(θ)^T y + n.

The oracle generates no cuts if and only if g(θ, F_Θ) ≥ 0 for all θ ∈ Ω. This fact and Theorem 1 complete the proof. ✷

4. The outer algorithm: Central points
Each iteration of the algorithm constructs the problems (P^k) and (D^k) and finds an approximate central point associated with the parameter value µ^k. We now describe central points and study their properties. Our treatment privileges the dual problems (D^k) and perturbs the primal problems, as we see below.

4.1. Central points
Consider a problem (D^k) and a penalty parameter µ > 0. The central point (y, s) associated with µ is the unique optimal solution of the problem

(D_µ^k)  maximize   Σ_{i=1}^m t_i log y_i − Σ_{i=1}^m t_i log t_i + µ Σ_{i=1}^{p_k} log s_i
         subject to (A^k)^T y + s = ne
                    y, s > 0.

The central point is well-defined: by construction the feasible set Γ^k is bounded with non-empty interior. The objective function is strictly concave and decreases infinitely near the boundary of Γ^k. Hence there is a unique maximizer in Γ̊^k.

Optimality conditions: a straightforward derivation of the Karush-Kuhn-Tucker optimality conditions at an optimal solution (y, s) > 0 with multipliers x ∈ IR^{p_k} and z ∈ IR^m gives

(PD_µ^k)  X s = µe
          Y z = t
          A^k x − z = 0
          (A^k)^T y + s = ne
          x, s, y, z > 0.   (16)
Note that at a central point (y, z) we have Y^{−1} t = z = A^k x, and hence the constraint Y^{−1} t ∈ R(A^k) in (12) is satisfied. The primal constraint e^T x = 1 is not satisfied by the multipliers x in (16): they satisfy

e^T x = 1 + p_k µ/n.   (17)

In fact, we have from the first equation in (16) x^T s = p_k µ. Using the other equations,

p_k µ = x^T s = x^T (ne − (A^k)^T y) = n e^T x − y^T z = n(e^T x − 1).

Let the perturbed primal problem be stated as

(P_µ^k)  minimize   −Σ_{i=1}^m t_i log z_i − µ Σ_{j=1}^{p_k} log x_j
         subject to A^k x − z = 0
                    e^T x = 1 + p_k µ/n
                    x, z > 0.

Again a straightforward derivation shows that (16) are the Karush-Kuhn-Tucker conditions at an optimal solution (x, z) > 0 of this problem. These results are summarized in the following lemma:

Lemma 6. Consider the problems (P_µ^k), (D_µ^k), the conditions (16), x, s ∈ IR^{p_k}_{++} and y, z ∈ IR^m_{++}. Then (x, z, y, s) satisfies (16) if, and only if, (x, z) solves (P_µ^k) and (y, s) solves (D_µ^k).
Given a feasible solution (x, z) for (P_µ^k), we can compute a primal-feasible solution (x̃, z̃) for (P^k) by

x̃ = x / e^T x,   z̃ = z / e^T x.

In fact, the constraint A^k x̃ − z̃ = 0 is trivially satisfied, and e^T x̃ = 1.

Lemma 7. Consider a central point (x, z, y, s) for (PD_µ^k), µ > 0. Then

Δ(x, y) = 0,   Δ(x̃, y) ≤ p_k µ.

Proof: From the conditions (16), y_i z_i = t_i, i = 1, …, m, or −log t_i + log y_i + log z_i = 0. Adding these equations multiplied by t_i (taking all summations from i = 1 to m),

0 = −Σ t_i log t_i + Σ t_i log y_i + Σ t_i log z_i = −Δ(x, y),
proving the first relation. With x̃ = x/e^T x, we obtain z̃ = z/e^T x and Σ t_i log z̃_i = Σ t_i log z_i − Σ t_i log e^T x. Hence

Δ(x̃, y) = −Σ t_i log z̃_i − Σ t_i log y_i + Σ t_i log t_i
         = −Σ t_i log z_i + Σ t_i log e^T x − Σ t_i log y_i + Σ t_i log t_i
         = Δ(x, y) + Σ t_i log e^T x
         = 0 + n log e^T x = n log(1 + p_k µ/n) ≤ p_k µ,

because log(1 + p_k µ/n) ≤ p_k µ/n, due to the concavity of the logarithm function at 1 and p_k µ/n ≥ 0, completing the proof. ✷

4.2. Nearly central points
Let α ∈ (0, 1) be a fixed number (usually taken as α = 0.5). Given µ > 0, a primal-dual pair (x, y) is nearly central for (PD_µ^k) if

‖X s − µe‖ ≤ αµ
‖Y z − t‖ ≤ αµ
A^k x − z = 0
(A^k)^T y + s = ne
x, s, y, z > 0.   (18)

We shall usually (but not necessarily) impose the primal constraint (17). Comparing (18) with (16), we see that central points correspond to α = 0, and then the primal constraint (17) is automatically satisfied. Given a feasible primal-dual solution (x, z, y, s) for (PD_µ^k), its proximity to the central point is measured by

δ(x, z, y, s) = ‖X s − µe‖ + ‖Y z − t‖.

Lemma 8. Let (x, z, y, s) be a nearly central point for (PD_µ^k), µ > 0. Then
(i) |x^T s − p_k µ| ≤ √p_k µ;
(ii) |y^T z − n| ≤ √m µ;
(iii) |y_i z_i − t_i| ≤ µ for i = 1, …, m;
(iv) |e^T x − 1| ≤ (1 + 2p_k/n)µ.

Proof:
Let δ = X s − µe, ‖δ‖ ≤ αµ. We have e^T δ = x^T s − µ p_k and hence

|x^T s − p_k µ| = |e^T δ| ≤ √p_k ‖δ‖ ≤ √p_k αµ,   (19)

proving (i). (ii) is proved by the same process as (i).
From (18), ‖Y z − t‖ ≤ αµ < µ. But |y_i z_i − t_i| ≤ ‖Y z − t‖_∞ ≤ ‖Y z − t‖ < µ for i = 1, …, m, proving (iii). Using (18), x^T s = x^T (ne − (A^k)^T y) = n e^T x − y^T z. Taking absolute values and using (i) and (ii),

|n e^T x − n| ≤ |x^T s| + |y^T z − n| ≤ p_k µ + √p_k µ + √m µ ≤ (n + 2 p_k)µ,

completing the proof. ✷
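The proximity test (18) is just a pair of norm checks, so it is one short function in practice. A direct transcription (Python; the sample point is constructed by us to be exactly central, purely for illustration):

```python
import numpy as np

def nearly_central(x, s, y, z, t, mu, alpha=0.5):
    """Criterion (18): ||Xs - mu e|| <= alpha*mu and ||Yz - t|| <= alpha*mu
    (the linear feasibility conditions are assumed to hold)."""
    return bool(np.linalg.norm(x * s - mu) <= alpha * mu
                and np.linalg.norm(y * z - t) <= alpha * mu)

mu = 0.1
x = np.array([0.2, 0.5])
s = mu / x                          # X s = mu e exactly
t = np.array([2.0, 1.0])
z = np.array([0.4, 0.3])
y = t / z                           # Y z = t exactly

ok = nearly_central(x, s, y, z, t, mu)           # exactly central: accepted
bad = nearly_central(x, 1.5 * s, y, z, t, mu)    # slacks off by 50%: rejected
```

In the second call ‖Xs − µe‖ = 0.5µ√2 ≈ 0.71µ, which exceeds αµ = 0.5µ, so the point fails the criterion.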
The next theorem shows that the algorithm indeed converges to the optimal solution of the original problem. Note that by Lemma 2, we know that (6) has a unique optimizer.

Theorem 2. Assume that the algorithm uses precision ε_1 = 0, and generates the sequences (x^k, z^k, y^k, s^k), p_k, A^k, µ^k for k ∈ IN, with p_k µ^k → 0. Then z^k → ẑ, where ẑ is the optimal solution of (6).

Proof: We begin by showing that the sequence (z^k, y^k) is bounded. In fact, (y^k) is bounded by construction. By Lemma 8, |e^T x^k − 1| ≤ (1 + 2p_k/n)µ^k. Since p_k µ^k → 0 by construction, e^T x^k → 1. Hence e^T x^k is bounded. Since a(·) is bounded, say, by a constant M > 0, we have z^k = A^k x^k ≤ M e^T x^k e componentwise, proving that (z^k) (and hence (z^k, y^k)) is bounded and has an accumulation point. Taking subsequences if necessary, assume that (z^k, y^k) → (z*, y*). We must prove that z* is an optimal solution for (6) (and hence z* is the unique optimal solution ẑ and the whole sequence converges).

From Lemma 8, |y_i^k z_i^k − t_i| ≤ µ^k for k ∈ IN, i = 1, …, m. Since y^k is bounded and y_i^k z_i^k − t_i → 0, z_i^k is bounded away from zero, and it follows that

y*_i = t_i / z*_i.

Assume by contradiction that z* is not optimal. Using Theorem 1, we have

max_{θ∈Ω} a(θ)^T y* = max_{θ∈Ω} Σ_{i=1}^m t_i a_i(θ)/z*_i > n,

and hence there exists ν > 0 such that max_{θ∈Ω} a(θ)^T y* ≥ n + 2ν. Since y^k → y* and the function y ∈ IR^m ↦ max_{θ∈Ω} a(θ)^T y is continuous, for k sufficiently large (say, k > K),

max_{θ∈Ω} a(θ)^T y^k ≥ n + ν.
This implies that at all iterations with k > K the oracle generates a cut (a^k)^T y ≤ n, where (a^k)^T y^k ≥ n + ν and a^k = a(θ̄), θ̄ ∈ argmax_{θ∈Ω} a(θ)^T y^k.

For any k ∈ IN and l ∈ IN, l > k, y^l is feasible in (D^k) by construction. Since y* is a limit point of (y^l), it follows that y* is feasible for (D^k) for every k ∈ IN. Thus for k > K,

(a^k)^T y* ≤ n,   (a^k)^T y^k ≥ n + ν,

which implies (a^k)^T (y^k − y*) ≥ ν > 0 for all k > K, and as y^k → y*, ‖a^k‖ → +∞, contradicting the fact that a(·) is bounded, and completing the proof. ✷

Finally, the next lemma shows that when the algorithm stops with a small value of µ the duality gap is small.

Lemma 9. Let (x, z, y, s) be nearly central for (P_µ^k), (D_µ^k), with µ > 0 such that αµ ≤ 0.4, and let x̃ = x/e^T x, z̃ = z/e^T x. Then Δ(x, y) ≤ 1.4 √m µ and Δ(x̃, y) ≤ 2(n + p_k)µ.

Proof:
The duality gap for (x, z, y, s) is given by (with summations from i = 1 to m)

Δ(x, y) = −Σ t_i log z_i − Σ t_i log y_i + Σ t_i log t_i = −Σ t_i log (y_i z_i / t_i).

From (18), Y z = t + δ, with ‖δ‖ ≤ αµ ≤ 0.4, and hence

Δ(x, y) = −Σ t_i log(1 + δ_i/t_i).

Now we use the following known inequality for |λ| < 1: log(1 + λ) ≥ λ − λ²/(2(1 − |λ|)). With |λ| ≤ 0.4, we get log(1 + λ) ≥ λ − λ² ≥ −1.4|λ|. Since ‖δ‖ ≤ 0.4 and t_i ≥ 1, it follows for i = 1, …, m that

log(1 + δ_i/t_i) ≥ −1.4 |δ_i|/t_i,

and hence

Δ(x, y) ≤ 1.4 Σ |δ_i| ≤ 1.4 √m ‖δ‖ ≤ 1.4 √m αµ.
This proves the first inequality. To prove the second one, consider the gap for x̃ = x/e^T x. We get

    Δ(x̃, y) = −Σ t_i log z̃_i − Σ t_i log y_i + Σ t_i log t_i
             = −Σ t_i log z_i + Σ t_i log(e^T x) − Σ t_i log y_i + Σ t_i log t_i
             = Δ(x, y) + n log(e^T x).

From Lemma 8, e^T x ≤ 1 + (n + 2p_k)µ/n, and it follows from the concavity of the logarithm function at 1 that log(e^T x) ≤ (n + 2p_k)µ/n. Using the first inequality in this lemma, we conclude that

    Δ(x̃, y) ≤ 1.4 √m αµ + (n + 2p_k)µ.

For m > 1, 1.4√m α < n, and hence Δ(x̃, y) ≤ 2(n + p_k)µ, completing the proof.
✷
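As an illustrative sanity check (not part of the original analysis), the first bound of Lemma 9 can be verified numerically on a small instance; the values of t, δ, α and µ below are invented choices satisfying the lemma's hypotheses:

```python
import math

# Invented data satisfying the hypotheses of Lemma 9:
# t_i >= 1 and Yz = t + delta with ||delta|| <= alpha*mu <= 0.4.
t = [1.0, 2.0, 3.0, 1.5]
delta = [0.1, -0.2, 0.15, -0.05]
alpha, mu = 4.0, 0.08                  # alpha * mu = 0.32 <= 0.4
m = len(t)

norm_delta = math.sqrt(sum(d * d for d in delta))
assert norm_delta <= alpha * mu <= 0.4

# Duality gap: Delta(x, y) = -sum_i t_i * log(1 + delta_i / t_i).
gap = -sum(ti * math.log(1.0 + di / ti) for ti, di in zip(t, delta))

# First bound of Lemma 9: Delta(x, y) <= 1.4 * sqrt(m) * alpha * mu.
bound = 1.4 * math.sqrt(m) * alpha * mu
```

The bound is loose here, as expected: the gap is dominated by the first-order terms in δ, which largely cancel.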
5. Computation of nearly central points
In this section we study the computation of a nearly central point, which is the first step of the loop of the algorithm. This computation starts by finding an interior point, if one is not already available, followed by an application of the primal-dual Newton centering algorithm discussed below. We simplify the notation as follows: each iteration starts from a point denoted simply (x, z, y, s), which may be the point given by the initialization of the algorithm in the first iteration or, for k ≥ 1, (x, z, y, s) = (x^{k−1}, z^{k−1}, y^{k−1}, s^{k−1}). In the latter case, the point was nearly central for the penalty parameter µ_{k−1}, but lost its centrality because the parameter changed or new cuts were introduced. Let us study these two cases separately.

5.1. First case: No new cuts
Denote the constraint matrix by A ∈ IR^{m×p}. Since no new cuts were generated, the dual vectors are already feasible. We must change the primal variables to satisfy the constraint e^T x = β = 1 + pµ_k/n. This is done by defining the initial point (x̄, z̄, y, s), where

    x̄ = βx/e^T x,    z̄ = βz/e^T x.
This point is primal-dual feasible and interior, and will be used as the initial point for the centering algorithm. The primal-dual Newton centering algorithm is described in many texts on interior-point methods; the textbook [16] gives a complete treatment, with computational details. The algorithm consists of a sequence of relaxed Newton steps for solving the system (16). Each step starts from a point (x, z, y, s) and computes a direction (dx, dz, dy, ds)
by solving the linear system

    X ds + S dx = −Xs + µ_k e
    Y dz + Z dy = −Yz + t
    A^k dx − dz = 0
    (A^k)^T dy + ds = 0.

The next point in Newton's method is

    (x^+, z^+, y^+, s^+) = (x, z, y, s) + λ(dx, dz, dy, ds),

where the steplength λ is computed by minimizing (approximately) the proximity to the central point defined in (19). The algorithm stops when the conditions (18) are satisfied. This algorithm is known to be very efficient, converging quadratically to the central point with a large region of quadratic convergence.
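The Newton step above can be sketched in pure Python. This is an illustrative re-implementation, not the authors' C code; the function names and the tiny instance data are invented for demonstration:

```python
def solve_linear(M, b):
    """Solve M d = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    T = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[piv] = T[piv], T[c]
        for r in range(c + 1, n):
            f = T[r][c] / T[c][c]
            for j in range(c, n + 1):
                T[r][j] -= f * T[c][j]
    d = [0.0] * n
    for r in range(n - 1, -1, -1):
        d[r] = (T[r][n] - sum(T[r][j] * d[j] for j in range(r + 1, n))) / T[r][r]
    return d

def newton_direction(A, x, z, y, s, t, mu):
    """Assemble and solve the centering system
         X ds + S dx = -X s + mu e,   Y dz + Z dy = -Y z + t,
         A dx - dz = 0,               A^T dy + ds = 0,
       with unknowns ordered (dx, dz, dy, ds)."""
    m, p = len(A), len(A[0])
    N = 2 * p + 2 * m
    ix, iz, iy, iw = 0, p, p + m, p + 2 * m
    M = [[0.0] * N for _ in range(N)]
    b = [0.0] * N
    for i in range(p):                 # X ds + S dx = -X s + mu e
        M[i][iw + i], M[i][ix + i] = x[i], s[i]
        b[i] = -x[i] * s[i] + mu
    for i in range(m):                 # Y dz + Z dy = -Y z + t
        M[p + i][iz + i], M[p + i][iy + i] = y[i], z[i]
        b[p + i] = -y[i] * z[i] + t[i]
    for i in range(m):                 # A dx - dz = 0
        for j in range(p):
            M[p + m + i][ix + j] = A[i][j]
        M[p + m + i][iz + i] = -1.0
    for j in range(p):                 # A^T dy + ds = 0
        for i in range(m):
            M[iw + j][iy + i] = A[i][j]
        M[iw + j][iw + j] = 1.0
    d = solve_linear(M, b)
    return d[ix:iz], d[iz:iy], d[iy:iw], d[iw:]

# Invented tiny instance (m = 2 observations, p = 2 support points).
A = [[1.0, 2.0], [3.0, 1.0]]
x, s = [0.5, 0.5], [1.0, 2.0]
z = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]   # z = A x
y, t, mu = [0.4, 0.3], [1.0, 1.0], 0.1
dx, dz, dy, ds = newton_direction(A, x, z, y, s, t, mu)
```

In practice a structured factorization (eliminating ds and dz first) is cheaper than solving the full system, but the dense solve keeps the sketch short.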
5.2. Second case: New cuts were introduced

In this case, assume that η new cuts were introduced. Let A ∈ IR^{m×p} be the constraint matrix before introducing the cuts, so that now

    A^k = [A  A_η] ∈ IR^{m×p_k},    p_k = p + η.
Since new cuts were introduced, we no longer have a feasible initial point to start the Newton centering algorithm. Obtaining an interior feasible point is usually difficult in center cutting plane methods (see [3, 4]), but is easy in our case, as we now describe.

Primal feasible point. Starting from (x, z) from the former iteration, we need a vector x̄ ∈ IR^{p_k}_{++}, where p_k = p + η. A trivial solution is x̄ = (x, λe) with λ > 0, but we want to satisfy the constraint e^T x̄ = β = 1 + p_k µ_k/n. This is achieved by taking

    x̄_λ = β((1 − λ)x̃, λe/η) = (1 + (p + η)µ_k/n)((1 − λ)x/e^T x, λe/η).    (20)

To understand this direction well, notice what is happening on the unit simplex:

    x̄_λ/β = (1 − λ)(x̃, 0) + λ(0, e/η).

As λ grows from 0 to 1, we move from the point (x̃, 0) towards (0, e/η), both on the boundary of the feasible set. This is a steepest ascent direction for the new components, and is similar to the approach taken by Mitchell and Todd [11].
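A minimal sketch of this primal restart, in illustrative Python rather than the paper's C implementation (the function name and arguments are our own):

```python
def primal_restart(x, eta, lam, mu_k, n):
    """Initial primal point after adding eta cuts, following (20):
       x_bar = beta * ((1 - lam) * x / e^T x, lam * e / eta),
       with beta = 1 + (p + eta) * mu_k / n, so that e^T x_bar = beta."""
    p = len(x)
    beta = 1.0 + (p + eta) * mu_k / n
    ex = sum(x)                                  # e^T x
    old = [beta * (1.0 - lam) * xi / ex for xi in x]
    new = [beta * lam / eta] * eta               # equal weight on each new cut
    return old + new

# Invented data: p = 3 old components, eta = 2 new cuts.
x_bar = primal_restart([0.2, 0.5, 0.3], eta=2, lam=0.25, mu_k=0.01, n=12)
```

By construction e^T x̄_λ = β for every λ, so the line search over λ stays on the constraint.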
When only one cut is added, multiplying this by A^k we get

    z̄_λ/β = (1 − λ)z̃ + λ a^{p+1},

where a^{p+1} = a(θ^{p+1}) for a given θ^{p+1} ∈ Θ, which is precisely the steepest descent direction for the primal objective function generated by the oracle and discussed at the end of Section 3. We conclude that the initialization of the centering step corresponds to a move along the primal steepest descent direction. The choice of λ ∈ (0, 1) needs a line search to minimize approximately the primal penalized function (P_{µ_k}).

Dual feasible point. Starting from y from the former iteration, η new cuts are introduced,

    a_j^T y ≤ n,    j = p + 1, ..., p + η,

where a_j = a(θ_j) for given values θ_j. These constraints are not satisfied at y, by definition of a cut, but they are satisfied at

    y_λ = λy,    λ ∈ (0, λ̄),

where λ̄ < 1 is defined by

    1/λ̄ = max_{j = p+1, ..., p+η} a_j^T y / n > 1.

In fact, for such a point and j = p + 1, ..., p + η, λ ≤ n/a_j^T y and hence a_j^T y_λ = λ a_j^T y ≤ n. Since a_j ≥ 0 for all j = 1, ..., p, because their components are values of probability density functions, A^T y_λ ≤ A^T y ≤ ne for λ ∈ (0, 1). Finally,

    [A  A_η]^T y_λ ≤ ne    for λ ∈ (0, λ̄).

A dual interior point is obtained by choosing ȳ_λ = λy, λ ∈ (0, λ̄). The choice is made by a line search to maximize approximately the dual penalized function (D_{µ_k}).

Centering. After an interior primal-dual feasible point (x̄_λ, ȳ_λ) is obtained by the process above, centering is done by Newton's method as discussed in the case of no cuts, with the appropriate constraint matrix.
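The dual restart described above admits a short sketch (illustrative Python; names and data are ours, and a fixed fraction stands in for the line search):

```python
def dual_restart(new_cuts, y, n, frac=0.5):
    """Scale the old dual point y back into the feasible region of the
       enlarged dual: y_lam = lam * y with lam in (0, lam_bar), where
       1 / lam_bar = max_j a_j^T y / n > 1 over the new cuts a_j.
       `frac` is a hypothetical fixed fraction replacing the line search."""
    worst = max(sum(aj * yi for aj, yi in zip(a, y)) for a in new_cuts)
    lam_bar = n / worst          # < 1 because every new cut is violated at y
    lam = frac * lam_bar
    return [lam * yi for yi in y], lam_bar

# Invented data: two new cuts violated at y (a_j^T y > n for n = 3).
new_cuts = [[1.0, 2.0, 0.5], [0.5, 1.5, 1.0]]
y = [1.0, 1.2, 0.8]
y_lam, lam_bar = dual_restart(new_cuts, y, n=3)
```

Because the old cuts have nonnegative coefficients, shrinking y towards the origin preserves their feasibility, so only the new cuts determine λ̄.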
6. Computational experiments
In this section we report two computational experiments with the proposed algorithm. These are typical examples of mixing distribution estimation, possibly the main application of this approach.
The experimental data v are taken from a population composed of a mixture of points from three classes with different normal distributions. The parameter to be estimated is the mean, and we want to know how the population is distributed among these three classes. The correct response for an experiment with a large number of points should be a distribution composed of three steps U(θ_i), at the means θ_i of the three classes, with intensities corresponding to the respective percentages in the population.

The algorithm was coded in the C language and executed on a Power2 IBM computer. For both experiments, we fixed the stopping rule constants at ε_1 = 10^{−4} and ε_2 = 10^{−8}, and generated data sets whose points are independently and identically distributed as a mixture of normal distributions. Assuming that F_θ with finite support {θ_1, ..., θ_p} is an estimate of the true F̂_θ with finite support {θ̂_1, ..., θ̂_p̂}, we can check our results by computing the following error:

    τ = [ (1/|λ_M − λ_m|) ∫_{λ_m}^{λ_M} (F_θ(λ) − F̂_θ(λ))² dλ ]^{1/2},
where λ_m = min{θ_i, θ̂_j} and λ_M = max{θ_i, θ̂_j}, over i = 1, ..., p and j = 1, ..., p̂.

Test 1: In order to show the progress of the algorithm in finding an estimate of F̂_θ, we generated the first sample with 12 data points from 3 normal distributions having the same variance, fixed at 0.0064. The sample set has 50% of the data generated from N(−2, 0.0064) (the first number in the pair is the mean and the second one the variance), 33% of the data from N(0, 0.0064) and the remaining from N(2, 0.0064). The generated data set is {−1.9221, −1.8454, −2.2246, −2.0495, −1.9768, −1.9566, 0.1261, 0.1256, −0.2568, 0.2135, 1.9733, 1.9940}. After 11 iterations and 5 additional cuts, the algorithm obtained {−1.9967, −0.1297, 0.0778, 1.9913} as the estimate of the support set, with respective probabilities {0.5000, 0.0794, 0.2539, 0.1667}. The initial support set had 11 points. The algorithm stopped with µ = 7.8948 × 10^{−8} and validation error τ = 0.0415. We observed that this error decreases as the size of the data set increases. Figures 1–3 show the progress of the algorithm in obtaining the estimate of F̂_θ at iterations k = 0, 2 and 5.

Test 2: For the second experiment, we generated a sample with 450 data points. Again, we have three distributions, but now the variances are also different: 33.33% of the data are generated from N(−3, 0.0016), 33.33% from N(0, 0.0036) and the remaining from N(3, 0.0025). The three variances were randomly drawn from (0, 0.0050). In 38 iterations, the algorithm found {−2.9976, −0.0379, 0.0196, 0.0202, 2.9711, 3.0054} as the estimate of the support set, with respective probabilities {0.3331, 0.0756, 0.1545, 0.1027, 0.0540, 0.2793}. The number of initial points in the support set was 301, and the algorithm added 13 more points. The algorithm stopped with µ = 8.827471 × 10^{−8} and validation error τ = 0.0195.
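The validation error τ can be sketched as follows: an illustrative pure-Python midpoint quadrature (function names are ours), with the step CDFs built from the support points and their probabilities. The true weights 1/2, 1/3, 1/6 in the usage example are inferred from the Test 1 sample proportions:

```python
import math

def step_cdf(support, probs):
    """CDF of a discrete distribution with the given support and weights."""
    pairs = sorted(zip(support, probs))
    return lambda lam: sum(w for theta, w in pairs if theta <= lam)

def tau_error(support, probs, support_hat, probs_hat, grid=20000):
    """Validation error
       tau = [ (1/|lam_M - lam_m|) * integral (F - F_hat)^2 d lam ]^(1/2),
       approximated by a midpoint rule on `grid` subintervals."""
    F, F_hat = step_cdf(support, probs), step_cdf(support_hat, probs_hat)
    lam_m = min(min(support), min(support_hat))
    lam_M = max(max(support), max(support_hat))
    h = (lam_M - lam_m) / grid
    integral = sum((F(lam_m + (i + 0.5) * h) - F_hat(lam_m + (i + 0.5) * h)) ** 2
                   for i in range(grid)) * h
    return math.sqrt(integral / abs(lam_M - lam_m))

# Example: the Test 1 estimate compared against the assumed true mixture.
tau = tau_error([-1.9967, -0.1297, 0.0778, 1.9913],
                [0.5000, 0.0794, 0.2539, 0.1667],
                [-2.0, 0.0, 2.0],
                [0.5, 1.0 / 3.0, 1.0 / 6.0])
```

Since both CDFs are step functions, the integrand is piecewise constant and the midpoint rule is accurate up to an O(h) error at each jump.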
Figure 1. Distribution estimate at iteration k = 0.

Figure 2. Distribution estimate at iteration k = 2.

Figure 3. Distribution estimate at iteration k = 5.
Acknowledgments

This research was proposed and partly supervised by Yinyu Ye, while the first author was visiting him at the University of Iowa. The authors are very grateful to him for sharing his ideas and insights. Much help was also given by an extremely careful and friendly referee.
Note 1. The minus sign is added to make this “primal” problem convex.
References

1. I.D. Coope and G.A. Watson, "A projected Lagrangian algorithm for semi-infinite programming," Mathematical Programming, vol. 32, pp. 337–356, 1985.
2. J.-L. Goffin, Z.Q. Luo, and Y. Ye, "Complexity analysis of an interior cutting plane method for convex feasibility problems," SIAM J. Optim., vol. 6, pp. 638–652, 1996.
3. J.-L. Goffin and J.-P. Vial, "Shallow, deep and very deep cuts in the analytic center cutting plane method," Mathematical Programming, vol. 84, pp. 89–103, 1999.
4. J.-L. Goffin and J.-P. Vial, "Multiple cuts in the analytic center cutting plane method," SIAM J. Optim., vol. 11, pp. 266–288, 1999.
5. C.C. Gonzaga, "Path following methods for linear programming," SIAM Review, vol. 34, pp. 167–224, 1992.
6. D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming: Algorithms and Complexity, Kluwer Academic Publishers: Dordrecht, The Netherlands, 1994.
7. M. Kojima, S. Mizuno, and A. Yoshise, "A primal-dual interior point algorithm for linear programming," in Progress in Mathematical Programming: Interior Point and Related Methods, N. Megiddo (Ed.), Springer Verlag: New York, 1989, pp. 29–47.
8. M.L. Lesperance and J.D. Kalbfleisch, "An algorithm for computing the nonparametric MLE of a mixing distribution," Journal of the American Statistical Association, vol. 87, no. 417, pp. 120–126, 1992.
9. B.G. Lindsay, "The geometry of mixture likelihoods: A general theory," The Annals of Statistics, vol. 11, no. 1, pp. 86–94, 1983.
10. B.G. Lindsay, Mixture Models: Theory, Geometry and Applications, Institute of Mathematical Statistics: Hayward, CA, and American Statistical Association: Alexandria, VA, 1995.
11. J.E. Mitchell and M.J. Todd, "Solving combinatorial optimization problems using Karmarkar's algorithm," Mathematical Programming, vol. 56, pp. 245–284, 1992.
12. F.S. Mokhtarian and J.-L. Goffin, "A nonlinear analytic center cutting plane method for a class of convex programming problems," SIAM J. Optim., vol. 8, pp. 1108–1131, 1998.
13. G. Sonnevend, "An analytic center for polyhedra and new classes of global algorithms for linear (smooth, convex) programming," in Lecture Notes in Control and Information Sciences, Springer Verlag: New York, 1985, pp. 866–876.
14. T. Terlaky and J.-Ph. Vial, "Computing maximum likelihood estimators of convex density functions," SIAM J. Sci. Comput., vol. 19, no. 2, pp. 675–694, 1998.
15. M.J. Todd, "Potential-reduction methods in mathematical programming," Mathematical Programming, vol. 76, pp. 3–45, 1996.
16. S. Wright, Primal-Dual Interior-Point Methods, SIAM Publications: Philadelphia, 1997.
17. Y. Ye, "Complexity analysis of the analytic center cutting plane method that uses multiple cuts," Mathematical Programming, vol. 78, pp. 85–104, 1997.