
Journal of Optimization Theory and Applications: Vol. 127, No. 1, pp. 71–88, October 2005 (© 2005)

DOI: 10.1007/s10957-005-6393-4

K-Convexity in ℝⁿ¹

G. Gallego² and S. P. Sethi³

Communicated by G. Leitmann

Abstract. We generalize the concept of K-convexity to an n-dimensional Euclidean space. The resulting concept of K-convexity is useful in addressing production and inventory problems where there are individual product setup costs and/or joint setup costs. We derive some basic properties of K-convex functions. We conclude the paper with some suggestions for future research. Key Words. K-convexity, inventory models, (s, S) policy, supermodular functions.

1. Introduction

One of the most important results in inventory theory is the proof of the optimality of a so-called (s, S) policy when there is a fixed setup or ordering cost in a single-product inventory problem. The policy is characterized by two numbers s and S, S ≥ s, such that, when the inventory position falls below the level s, an order is issued for a quantity that brings the inventory position up to level S; nothing is ordered otherwise. It is customary to say that an (s, S) policy is optimal even when the two parameters vary from period to period, as when the problem is either finite horizon or nonstationary or both. The idea to defer orders until the inventory level has dropped to a low enough level s so that the setup cost will be incurred when a large

¹Support from Columbia University and the University of Texas at Dallas is gratefully acknowledged. Helpful comments from Qi Feng are appreciated.
²Professor, Department of Industrial Engineering and Operations Research, Columbia University, New York City, NY.
³Ashbel Smith Professor, School of Management, University of Texas at Dallas, Richardson, Texas.

0022-3239/05/1000-0071/0 © 2005 Springer Science+Business Media, Inc.


enough amount (at least S − s) is ordered had been appealing to researchers in the 1950s. However, its proof eluded them because the value function in the dynamic programming formulation of the problem was neither convex nor concave. Finally, Scarf (Ref. 1) provided a proof by introducing a new class of functions, called K-convex functions, defined on ℝ¹. While Scarf (Ref. 1) had assumed the holding/backlog cost to be convex, Veinott (Ref. 2), under somewhat different conditions, supplied a new proof of the optimality of the (s, S) policy. Veinott (Ref. 2) assumed the negative of the one-period expected holding/backlog cost to be unimodal and (nearly) rising over time; therefore, his conditions do not imply and are not implied by the conditions assumed by Scarf (Ref. 1).

Since these classical works of Scarf (Ref. 1) and Veinott (Ref. 2), there have been few attempts to study multiproduct extensions of the problem. For our purpose, we will review only Johnson (Ref. 3), Kalin (Ref. 4), Ohno and Ishigaki (Ref. 5), and Liu and Esogbue (Ref. 6). Other works that we do not review are discussed by these authors.

Johnson (Ref. 3) considers an n-product problem with a joint setup cost, which is incurred if one or more of the products is ordered. He uses the policy improvement method in Markov decision processes to show that the optimal policy in the stationary case is a (σ, S) policy, where σ ⊂ ℝⁿ and S ∈ ℝⁿ; one orders up to the level S if the inventory position x ∈ σ and x ≤ S, and one does not order if x ∉ σ. Since nothing is specified when x ∈ σ, x ≰ S, the policy is proved to be optimal only when the initial inventory level is less than or equal to S. Kalin (Ref. 4) shows that, in addition, when x ∈ σ and x ≰ S, there is S̄(x) ≥ x such that the optimal policy is to order S̄(x) − x. Such a policy can be termed a (σ, S̄(·)) policy. Kalin (Ref. 4) also characterizes the nonordering set σᶜ, the complement of σ in ℝⁿ. His proof assumes a number of conditions and uses a concept called (K, η)-quasiconvexity.

Finally, Ohno and Ishigaki (Ref. 5) consider a continuous-time problem with Poisson demands. They use a policy improvement method to show that the (σ, S̄(·)) policy is optimal for their problem. They also compute the optimal policy in some cases and compare it with three well-known heuristic policies. Among all the multiproduct models with a fixed joint setup cost, the paper by Liu and Esogbue (Ref. 6) is the only one that builds on Scarf's proof. To accomplish this, they generalize the concept of K-convexity to ℝⁿ. They use this concept to prove the optimality of an (s_t, S_t) policy in the finite-horizon case, where t denotes the time period, under the conditions that the initial inventory level x₀ ≤ S₀ and that S_t increases with t. Since this second condition cannot be verified a priori, their proof cannot be considered complete.


While we are not able to complete their proof, we propose a fairly general definition of K-convexity in ℝⁿ, which includes the cases of joint setups as well as individual setups. Then, we develop some properties of K-convex functions that we hope will lead to the solution of a general multiproduct inventory problem with different kinds of setup costs.

The plan of the paper is as follows. In Section 2, we provide our definition of K-convex functions defined on ℝⁿ and derive some properties of such functions. In Section 3, we consider the individual setup case and show that the sum of independent K-convex functions is K-convex. In Section 4, we establish a result for supermodular K-convex functions. In Section 5, we consider the joint setup case and discuss the results obtained by Liu and Esogbue (Ref. 6) and Kalin (Ref. 4) concerning the optimality of a (σ, S) policy. We conclude the paper in Section 6 by discussing some important open problems for research.

2. Definitions and Some Properties

In this section, we introduce some definitions of K-convexity and derive some properties of K-convex functions. We begin with the classical definition of K-convexity in the one-dimensional space given by Scarf (Ref. 1).

Scarf's Definition in ℝ¹. A function g: ℝ¹ → ℝ¹ is K-convex if

g(u) + z[(g(u) − g(u − b))/b] ≤ g(u + z) + K,  (1)

for any u, z ≥ 0, and b > 0.

Next, we propose a fairly general extension of the above concept to ℝⁿ. Define K = (K₀, K₁, …, Kₙ) to be a vector of n + 1 nonnegative constants. Let us define a function K(·): ℝⁿ₊ → ℝ¹₊ as follows:

K(x) = K₀δ(e′x) + ∑_{i=1}^n Kᵢδ(xᵢ),  (2)

where e = (1, 1, …, 1) ∈ ℝⁿ, ℝⁿ₊ = {x ∈ ℝⁿ | x ≥ 0}, and

δ(0) = 0,  δ(z) = 1 for all z > 0.


This function K(·) satisfies the triangle inequality and is a homogeneous function of degree zero, as derived in the following Lemmas 2.1 and 2.2.

Lemma 2.1. The function K(·) satisfies the triangle inequality; i.e., for all x ≥ 0, y ≥ 0, we have

K(x + y) ≤ K(x) + K(y).  (3)

Proof. It follows from (2) and the fact that δ(u + v) ≤ δ(u) + δ(v) for u ≥ 0, v ≥ 0.

Lemma 2.2. The function K(·) is homogeneous of degree 0; i.e., for all x ≥ 0 and any constant b > 0, we have

K(bx) = K(x).  (4)

Proof. It follows from the fact that δ(bu) = δ(u) for u ≥ 0.
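As a quick numerical sanity check of Lemmas 2.1 and 2.2, the setup-cost function (2) can be coded directly. The sketch below is our own illustration (the function names `delta` and `K`, the tolerance, and the sampled constants are not from the paper):

```python
import random

def delta(z, tol=1e-12):
    """Indicator of (2): delta(0) = 0, delta(z) = 1 for z > 0."""
    return 1.0 if z > tol else 0.0

def K(x, K0, Ks):
    """Setup-cost function (2): K(x) = K0*delta(e'x) + sum_i Ki*delta(x_i)."""
    return K0 * delta(sum(x)) + sum(Ki * delta(xi) for Ki, xi in zip(Ks, x))

random.seed(0)
K0, Ks = 5.0, [2.0, 3.0, 1.0]
for _ in range(1000):
    x = [random.choice([0.0, random.random()]) for _ in range(3)]
    y = [random.choice([0.0, random.random()]) for _ in range(3)]
    xy = [a + b for a, b in zip(x, y)]
    # Lemma 2.1: triangle inequality K(x + y) <= K(x) + K(y)
    assert K(xy, K0, Ks) <= K(x, K0, Ks) + K(y, K0, Ks) + 1e-9
    # Lemma 2.2: homogeneity of degree zero, K(b*x) = K(x) for b > 0
    b = 0.5 + random.random()
    assert K([b * xi for xi in x], K0, Ks) == K(x, K0, Ks)
print("Lemmas 2.1 and 2.2 hold on all sampled points")
```

The checks mirror the proofs: both lemmas reduce to the corresponding one-dimensional properties of the indicator δ.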

Definition 2.1. A function g: ℝⁿ → ℝ¹ is (K₀, K₁, …, Kₙ)-convex, or simply K-convex, if

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K(y − x)],  (5)

for all x, y with y ≥ x and all λ ∈ [0, 1], where as usual λ̄ ≡ 1 − λ.

This definition is motivated by the joint replenishment problem, where a setup cost K₀ is incurred whenever an order is placed and individual setup costs are incurred for each item included in the order. There are a number of important special cases that we note in what follows. The simplest is the case of one product, n = 1, where K₀ + K₁ can be considered to be the setup cost and (5) can be written as

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K·δ(y − x)],  (6)

where K = K₀ + K₁. In this case, (6) is equivalent to the concept of K-convexity in ℝ¹ defined by Scarf (Ref. 1); see also Denardo (Ref. 7) and Porteus (Ref. 8).
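The one-dimensional case (6) lends itself to a brute-force check. The sketch below is our own construction (the grid, tolerance, and the step-function example are illustrative assumptions, not from the paper): it verifies that a convex function is K-convex for any K ≥ 0 and that a function with a downward jump of size 2 is 2-convex but not 1-convex, consistent with the classical theory.

```python
def is_K_convex_1d(g, K, xs, lams):
    """Check inequality (6), g(l*x + (1-l)*y) <= l*g(x) + (1-l)*(g(y) + K*delta(y-x)),
    over all sampled x <= y and lambdas; return the first violation found, or None."""
    for x in xs:
        for y in xs:
            if y < x:
                continue
            d = K if y > x else 0.0   # K * delta(y - x)
            for l in lams:
                lhs = g(l * x + (1 - l) * y)
                rhs = l * g(x) + (1 - l) * (g(y) + d)
                if lhs > rhs + 1e-9:
                    return (x, y, l)
    return None

xs = [i / 10 - 5 for i in range(101)]      # grid on [-5, 5]
lams = [i / 10 for i in range(11)]

# A convex function is K-convex for any K >= 0 (see Property 2.1 below).
assert is_K_convex_1d(lambda u: u * u, 0.0, xs, lams) is None

# A downward step of size 2 at the origin is 2-convex but not 1-convex.
step = lambda u: 2.0 if u < 0 else 0.0
assert is_K_convex_1d(step, 2.0, xs, lams) is None
assert is_K_convex_1d(step, 1.0, xs, lams) is not None
```

The step example illustrates the intuition behind K-convexity: downward jumps of the value function are tolerated provided their size does not exceed the setup cost K.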


The next special case arises when Kᵢ = 0, i = 1, 2, …, n, i.e., K = (K₀, 0, 0, …, 0). In this case, a setup cost K₀ is incurred whenever any one or more of the products are ordered. Here,

K(x) = K₀δ(e′x);  (7)

this case is referred to as the joint setup cost case. Finally, there is the case in which K₀ = 0. Here, there is no joint setup, but there are individual setups. Thus,

K(x) = ∑_{i=1}^n Kᵢδ(xᵢ).  (8)

The individual setup case will be discussed further in Section 3. The joint setup case will be treated in Section 5.

Definition 2.1 admits a simple geometric interpretation related to the concept of visibility; see for example Kolmogorov and Fomin (Ref. 9). Let a ≥ 0. A point (x, f(x)) is said to be visible from (y, f(y) + a) if all intermediate points (λx + λ̄y, f(λx + λ̄y)), 0 ≤ λ ≤ 1, lie below the line segment joining (x, f(x)) and (y, f(y) + a). We can now obtain the following geometric characterization of K-convexity.

Theorem 2.1. A function g is K-convex if and only if (x, g(x)) is visible from (y, g(y) + K(y − x)) for all y ≥ x.

Proof. By K-convexity, the function g over the segment λx + (1 − λ)y, λ ∈ [0, 1], lies below the line segment joining (x, g(x)) and (y, g(y) + K(y − x)). Since y ≥ x, K(y − x) ≥ 0. This completes the proof.

Definition 2.2. A function g: ℝⁿ → ℝ¹ is K-convex if

g(u) + (1/µ)[g(u) − g(u − µh)] ≤ g(u + h) + K(h),  (9)

for every u ∈ ℝⁿ, h ∈ ℝⁿ, h > 0, and µ ∈ ℝ¹, µ > 0. Here, h > 0 means h ≥ 0 and hᵢ > 0 for at least one i ∈ {1, 2, …, n}. Note that, when h = 0, (9) is satisfied trivially. Note also that, in ℝ¹, this definition reduces to the Scarf definition by using the transformation h = z and µ = z/b.

Theorem 2.2. Definitions 2.1 and 2.2 are equivalent.


Proof. It is sufficient to show that (5) can be reduced to (9) and vice versa. This can be seen by using the transformation

λ = 1/(1 + µ),  x = u − µh,  y = u + h,

and, on account of Lemma 2.2,

K(y − x) = K((1 + µ)h) = K(h).

The following usual properties of K-convex functions on ℝ¹ can be extended easily to ℝⁿ.

Property 2.1. If g: ℝⁿ → ℝ¹ is K-convex, then it is L-convex for any L ≥ K, where L = (L₀, L₁, …, Lₙ). In particular, if g is convex, then it is also K-convex for any K ≥ 0.

Proof. Since L ≥ K, it follows from (5) that

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K(y − x)] ≤ λg(x) + λ̄[g(y) + L(y − x)],

for all x ≤ y and all λ ∈ [0, 1], where the function L(·): ℝⁿ₊ → ℝ¹₊ is given by

L(x) = L₀δ(e′x) + ∑_{i=1}^n Lᵢδ(xᵢ).

Property 2.2. If g¹: ℝⁿ → ℝ¹ is K-convex and g²: ℝⁿ → ℝ¹ is L-convex, then, for α ≥ 0, β ≥ 0, g = αg¹ + βg² is (αK + βL)-convex.

Proof. By definition, we have

g¹(λx + λ̄y) ≤ λg¹(x) + λ̄[g¹(y) + K(y − x)],
g²(λx + λ̄y) ≤ λg²(x) + λ̄[g²(y) + L(y − x)],

for all x ≤ y and all λ ∈ [0, 1]. Then,

g(λx + λ̄y) = αg¹(λx + λ̄y) + βg²(λx + λ̄y)
  ≤ λ[αg¹(x) + βg²(x)] + λ̄[αg¹(y) + βg²(y) + αK(y − x) + βL(y − x)]
  ≤ λg(x) + λ̄[g(y) + (αK + βL)(y − x)],
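The change of variables used in the proof of Theorem 2.2 can be verified numerically. In the sketch below (our own test function g; the constant `Kh` stands for K(h) = K((1 + µ)h), which Lemma 2.2 equates), the slack of inequality (5) at (x, y, λ) and the slack of inequality (9) at (u, h, µ) are positive multiples of one another, so each inequality holds exactly when the other does.

```python
def slacks(g, u, h, mu, Kh):
    """Slacks of (5) and (9) under lambda = 1/(1+mu), x = u - mu*h, y = u + h."""
    lam = 1.0 / (1.0 + mu)
    x = [ui - mu * hi for ui, hi in zip(u, h)]
    y = [ui + hi for ui, hi in zip(u, h)]
    mid = [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]
    # lam*x + (1-lam)*y recovers u, so (5) is evaluated at the point u
    assert all(abs(mi - ui) < 1e-9 for mi, ui in zip(mid, u))
    slack5 = lam * g(x) + (1 - lam) * (g(y) + Kh) - g(mid)   # RHS - LHS of (5)
    slack9 = g(y) + Kh - g(u) - (1.0 / mu) * (g(u) - g(x))   # RHS - LHS of (9)
    return slack5, slack9

g = lambda v: sum(vi * vi for vi in v)
s5, s9 = slacks(g, [1.0, -0.5], [0.3, 0.8], 1.4, 2.0)
# Algebraically, slack9 = ((1+mu)/mu) * slack5; check it on this instance.
assert abs(s9 - (1 + 1.4) / 1.4 * s5) < 1e-9
```

Multiplying (5) by (1 + µ)/µ and rearranging yields (9), which is exactly the proportionality the assertion checks.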


where the function (αK + βL)(·): ℝⁿ₊ → ℝ¹₊ is defined analogously to (2).

Property 2.3. If g: ℝⁿ → ℝ¹ is K-convex and ξ = (ξ₁, ξ₂, …, ξₙ) is a random vector such that E|g(x − ξ)| < ∞ for all x, then Eg(x − ξ) is also K-convex.

Proof. Since g is K-convex, we have that, for any z ∈ ℝⁿ,

g(λ(x − z) + λ̄(y − z)) ≤ λg(x − z) + λ̄[g(y − z) + K(y − x)],  (10)

for all x ≤ y and all λ ∈ [0, 1]. Since E|g(x − ξ)| < ∞, we can take expectations on both sides of (10) to obtain

Eg(λx + λ̄y − ξ) ≤ λEg(x − ξ) + λ̄[Eg(y − ξ) + K(y − x)].

Therefore, Eg(x − ξ) is K-convex.

In addition, the following property follows immediately from the definition of K-convexity.

Property 2.4. Let g: ℝⁿ → ℝ¹. Fix x, y ∈ ℝⁿ with y ≥ x and let f(θ) = g(x + θ(y − x)). Then, f: ℝ¹ → ℝ¹ is K(y − x)-convex in the sense of Scarf (Ref. 1) if and only if g is K-convex.

Proof. Assume that f is not K(y − x)-convex. Then, there exist θ₁ < θ₂ and λ ∈ [0, 1] such that

f(λθ₁ + λ̄θ₂) > λf(θ₁) + λ̄[f(θ₂) + K(y − x)].

This implies that

g(λx̃ + λ̄ỹ) > λg(x̃) + λ̄[g(ỹ) + K(y − x)],

where

x̃ = x + θ₁(y − x) and ỹ = x + θ₂(y − x) > x̃,

which contradicts the K-convexity of g. Assume now that g is not K-convex. Then, there exist x, y with x ≤ y and λ ∈ [0, 1] such that

g(λx + λ̄y) > λg(x) + λ̄(g(y) + K(y − x)).


Let θ₁ = 0 and θ₂ = 1. Then, the above inequality implies that

f(λθ₁ + λ̄θ₂) > λf(θ₁) + λ̄[f(θ₂) + K(y − x)],

which contradicts the K(y − x)-convexity of f.

Theorem 2.3. Let g: ℝⁿ → ℝ¹ be K-convex. Let S ∈ ℝⁿ be a finite global minimizer of g. Let x ≤ S and define f(θ) = g(x + θ(S − x)). Let θ₁ < 1 be such that f(θ₁) = f(1) + K(S − x). Then, f(θ) is nonincreasing over θ < θ₁; therefore, f(θ) ≥ f(1) + K(S − x) for all θ ≤ θ₁.

Proof. From Property 2.4, f(θ) is K(S − x)-convex. Note from Lemma 2.2 that K(S − x) depends only on the direction of the line joining S and x and not on the magnitude of S − x. Then, the result follows from the standard one-dimensional case.

Corollary 2.1. Let θ₁ be as defined in Theorem 2.3. Then, for any w = x + θ(S − x), θ ≤ θ₁,

g(w) ≥ g(S) + K(S − w).

In Section 3, we study the special case of individual setups.

3. Individual Setup Case

The individual setup case is characterized by

K(x) = ∑_{i=1}^n Kᵢδ(xᵢ).

In this case, we show that the sum of independent K-convex functions is K-convex.

Theorem 3.1. Let gᵢ: ℝ¹ → ℝ¹ be Kᵢ-convex for i = 1, …, n. Then, g(x₁, …, xₙ) = ∑_{i=1}^n gᵢ(xᵢ): ℝⁿ → ℝ¹ is (0, K₁, …, Kₙ)-convex, i.e., K-convex with K = (0, K₁, …, Kₙ).

Proof. Let x = (x₁, …, xₙ) and y = (y₁, …, yₙ), with y > x, be any two points. Define

ψ = λx + λ̄y.

See Figure 1 for an illustration of these points in ℝ².

Fig. 1. Grid in ℝ².

By definition, we have

gᵢ(ψᵢ) ≤ λgᵢ(xᵢ) + λ̄(gᵢ(yᵢ) + Kᵢ·δ(yᵢ − xᵢ));  (11)

adding over i results in

g(ψ) ≤ λg(x) + λ̄(g(y) + K(y − x)),

completing the proof.
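Theorem 3.1 can be exercised numerically. In the sketch below (our own test functions, not from the paper), each gᵢ is a convex function plus a downward jump of size Kᵢ, which makes it Kᵢ-convex in ℝ¹, and inequality (5) with K = (0, K₁, K₂) is checked at random pairs y ≥ x.

```python
import random

def delta(z):
    return 1.0 if z > 1e-12 else 0.0

def K_vec(d, Ks):
    """K(y - x) for K = (0, K1, ..., Kn): individual setups only, eq. (8)."""
    return sum(Ki * delta(di) for Ki, di in zip(Ks, d))

def make_gi(Ki):
    """A convex part plus a downward jump of size Ki: Ki-convex on R^1."""
    return lambda u: u * u + (Ki if u < 0 else 0.0)

random.seed(1)
Ks = [1.5, 0.7]
gs = [make_gi(Ki) for Ki in Ks]
g = lambda x: sum(gi(xi) for gi, xi in zip(gs, x))

for _ in range(2000):
    x = [random.uniform(-2, 2) for _ in range(2)]
    y = [xi + random.choice([0.0, random.uniform(0.1, 2.0)]) for xi in x]  # y >= x
    lam = random.random()
    mid = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
    d = [b - a for a, b in zip(x, y)]
    # Inequality (5) with K = (0, K1, K2): only moved components pay a setup.
    assert g(mid) <= lam * g(x) + (1 - lam) * (g(y) + K_vec(d, Ks)) + 1e-9
print("Theorem 3.1 holds on all sampled points")
```

Note that components with yᵢ = xᵢ contribute no setup cost, exactly as K(y − x) in (8) prescribes.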

4. K-Convexity and Supermodularity

The separability assumed in Theorem 3.1 may be quite restrictive. We extend the result by relaxing separability to a diagonal property. A function g: ℝⁿ → ℝ¹ is supermodular if, for any two points x and y,

g(x) + g(y) ≤ g(x ∧ y) + g(x ∨ y),

where the pointwise minimum of x and y, called the meet of x and y, is denoted by

x ∧ y = (x₁ ∧ y₁, x₂ ∧ y₂, …, xₙ ∧ yₙ),

and where the pointwise maximum of x and y, called the join of x and y, is denoted by

x ∨ y = (x₁ ∨ y₁, x₂ ∨ y₂, …, xₙ ∨ yₙ).

The reader is referred to Topkis (Ref. 10) for a general definition of a supermodular function defined on a lattice and to the teaching notes of Veinott (Ref. 11).

Let g: ℝⁿ → ℝ¹. For each z ∈ ℝⁿ and for each subset I ⊂ N ≡ {1, …, n}, let g_I(·; z): ℝ^{|I|} → ℝ¹ be the function that results from freezing the components of z that are not in I. The domain of the function g_I consists of vectors of the form ∑_{i∈I} xᵢeᵢ, where eᵢ is the ith unit vector. Sometimes, we will abuse notation and write g_I(z) as a shorthand for g_I(·; z) and gᵢ(z) instead of the more cumbersome g_{i}(·; z). Then, g_∅(z) is just the constant g(z), while gᵢ(z) = g(z₁, …, z_{i−1}, ·, z_{i+1}, …, zₙ).

Let K(·): ℝⁿ₊ → ℝ¹₊ be a function with the property that

K_{I∪k}(x_{I∪k}) ≥ K_I(x_I) + K_k(x_k),  (12)

for all k ∉ I, I ≠ N. A function g is said to be K_I-convex if, for all x, y ∈ ℝ^{|I|} with x ≤ y, all z ∈ ℝⁿ, and all λ ∈ (0, 1), we have

g_I(λx + λ̄y; z) ≤ λg_I(x; z) + λ̄(g_I(y; z) + K_I(y − x)).

Note that, for I = {1, …, n}, K_I-convexity is the same as K-convexity.

Theorem 4.1. If g is supermodular and g is Kᵢ-convex for all i ∈ {1, …, n}, then g is K_I-convex for all I ⊂ {1, …, n}.

Proof. The proof is by induction on the cardinality of I. By hypothesis, the result holds for |I| = 1. Assume that g is K_I-convex for all |I| ≤ m for some m < n. We will show that the result holds for m + 1. For this purpose, consider a subset of {1, …, n} of m + 1 distinct components. We can write this set as J = I ∪ k, for some k ∉ I and a set I of cardinality m. To show that g is K_J-convex, we need to show that

g_J(λx + λ̄y; z) ≤ λg_J(x; z) + λ̄(g_J(y; z) + K_J(y − x)),

for all x, y ∈ ℝ^{m+1} such that x ≤ y. Let

ψ = λx + λ̄y and Δ̃ = y − x,

so we can write ψ = x + Δ, where Δ = λ̄Δ̃. Let Δ_I be the vector that results from Δ by setting to zero the component corresponding to item k, and let Δ_k be the vector that results from Δ by setting to zero the components corresponding to the items in I. Similar definitions apply to the vectors Δ̃_I and Δ̃_k.

Consider the vectors x + Δ_I and x + Δ_I + Δ̃_k. Clearly,

ψ = x + Δ = x + Δ_I + Δ_k = x + Δ_I + λ̄Δ̃_k = λ(x + Δ_I) + λ̄(x + Δ_I + Δ̃_k).

Consequently, by K_k-convexity,

g_J(ψ; z) ≤ λg_J(x + Δ_I; z) + λ̄[g_J(x + Δ_I + Δ̃_k; z) + K_k(Δ̃_k)].  (13)

Now, consider the vectors x + Δ_k and x + Δ_k + Δ̃_I. Once again,

ψ = x + Δ = x + Δ_k + λ̄Δ̃_I = λ(x + Δ_k) + λ̄(x + Δ_k + Δ̃_I).

Consequently, by K_I-convexity,

g_J(ψ; z) ≤ λg_J(x + Δ_k; z) + λ̄[g_J(x + Δ_k + Δ̃_I; z) + K_I(Δ̃_I)].  (14)

Adding inequalities (13) and (14) and using the supermodularity inequalities

g_J(x + Δ_I; z) + g_J(x + Δ_k; z) ≤ g_J(x; z) + g_J(x + Δ; z) = g_J(x; z) + g_J(ψ; z),
g_J(x + Δ_I + Δ̃_k; z) + g_J(x + Δ_k + Δ̃_I; z) ≤ g_J(ψ; z) + g_J(y; z),

we obtain

2g_J(ψ; z) ≤ λ[g_J(x; z) + g_J(ψ; z)] + λ̄[g_J(ψ; z) + g_J(y; z) + K_k(Δ̃_k) + K_I(Δ̃_I)],

or equivalently,

g_J(ψ; z) ≤ λg_J(x; z) + λ̄[g_J(y; z) + K_k(Δ̃_k) + K_I(Δ̃_I)].

Finally, using the fact that, by (12),

K_k(Δ̃_k) + K_I(Δ̃_I) ≤ K_{I∪k}(Δ̃_{I∪k}) = K_J(y − x)

establishes that g is K_J-convex. Since I and k were arbitrary, this implies that g is K_I-convex for all subsets I of cardinality |I| = m + 1, and therefore for all subsets I of {1, …, n}.

In Section 4.1, we provide two examples of K(·) that satisfy property (12).

4.1. Examples of Functions K(·) Satisfying (12). One trivial example is the case where

K(x) = ∑_{j=1}^n Kⱼδ(xⱼ),

with δ denoting an indicator function such that

δ(xⱼ) = 1 if xⱼ > 0, δ(xⱼ) = 0 otherwise.

Thus, the result here implies the result that we obtained before, under the weaker assumption of pairwise supermodularity. The result is actually somewhat stronger because it holds for all subsets I, not only for the full set I = {1, …, n}.
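Both ingredients of Theorem 4.1 are easy to test numerically. The sketch below uses our own illustrative choices (the constants `Ks`, the test function g(x) = x₁x₂, and the name `K_I` for the restriction of K to the components in I are assumptions, not from the paper): it checks that the separable K satisfies (12), here with equality, and that g(x) = x₁x₂ satisfies the meet/join inequality.

```python
import itertools
import random

def delta(z):
    return 1.0 if z > 1e-12 else 0.0

def K_I(x, Ks, I):
    """Restriction of the separable K to the components in I."""
    return sum(Ks[j] * delta(x[j]) for j in I)

random.seed(2)
n, Ks = 4, [1.0, 2.0, 0.5, 3.0]
for _ in range(200):
    x = [random.choice([0.0, random.random()]) for _ in range(n)]
    for m in range(n):
        for I in itertools.combinations(range(n), m):
            for k in range(n):
                if k in I:
                    continue
                J = set(I) | {k}
                # Property (12): K_{I u k}(x) >= K_I(x) + K_k(x); equality here.
                assert abs(K_I(x, Ks, J) - (K_I(x, Ks, I) + K_I(x, Ks, [k]))) < 1e-12

# Supermodularity of g(x) = x1*x2 via the meet/join inequality.
def meet(a, b): return [min(u, v) for u, v in zip(a, b)]
def join(a, b): return [max(u, v) for u, v in zip(a, b)]
g = lambda v: v[0] * v[1]
for _ in range(1000):
    a = [random.uniform(-3, 3), random.uniform(-3, 3)]
    b = [random.uniform(-3, 3), random.uniform(-3, 3)]
    assert g(a) + g(b) <= g(meet(a, b)) + g(join(a, b)) + 1e-9
```

The product x₁x₂ is the standard smooth example of supermodularity, since its mixed second partial derivative is nonnegative.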


While the definition of K(·) satisfying (12) does not allow for the traditional joint replenishment function

K(x) = ∑_{j=1}^n Kⱼδ(xⱼ) + K₀δ(∑_j xⱼ),

it does allow other forms of joint setup costs. For example, let

K(x) = ∑_{j=1}^n Kⱼδ(xⱼ)

if at most one of the components is positive, and let

K(x) = ∑_{j=1}^n Kⱼδ(xⱼ) + K₀

if two or more of the components are ordered.

5. Joint Setup Case

In the joint setup case, we have K(x) as defined in (7). Thus, we can rewrite (5) as

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K₀δ(e′(y − x))],  (15)

in our definition of K-convexity in the joint setup case. For ease of exposition, we shall refer to this special case as K₀-convexity, where K₀ = (K₀, 0, 0, …, 0) ∈ ℝ^{n+1}. We note also that Definition 2.2 reduces to the definition of K₀-convexity proposed by Liu and Esogbue (Ref. 6).

Consider a continuous K₀-convex function g which is coercive, i.e.,

lim_{‖x‖→∞} g(x) = ∞.

Then, it has a global minimum point S. Define the set ∂σ as follows:

∂σ = {x ≤ S | g(x) = g(S) + K₀}.  (16)

Clearly, ∂σ is nonempty and bounded because of the coercivity of g. Now, define two regions. The first region is

Ω = {x ≤ S | ∃s ∈ ∂σ such that x ∈ sS},  (17)


where sS is the line segment joining s and S. Clearly, ∂σ ⊂ Ω. The second region is

σ = {x ≤ S | x ∉ Ω}.  (18)

Note that σ ∩ Ω = σ ∩ ∂σ = ∅. See Figure 2.

Lemma 5.1. If g is K₀-convex, continuous, and coercive, then

(i) x ∈ Ω ⇒ g(x) ≤ K₀ + g(S);
(ii) x ∈ σ ⇒ g(x) > K₀ + g(S).

Proof. (i) Suppose that there is an x ∈ Ω such that g(x) > K₀ + g(S). By the definition of Ω, there is an s ∈ ∂σ such that x ∈ sS and g(s) = K₀ + g(S). Thus, g(x) > g(s), so (s, g(s)) is not visible from (S, g(S) + K₀), contradicting the K₀-convexity of g. (ii) This follows from the fact that σ ∩ Ω = ∅.

Fig. 2. Minimum point S, Ω, ∂σ, and σ.


Lemma 5.2. If g is continuous and K₀-convex, then

f(x) = inf_{y≥x} {g(y) + K₀δ(e′(y − x))} = K₀ + g(S), if x ∈ σ; = g(x), if x ∈ Ω,  (19)

where S is the global minimizer of g. Furthermore, f is continuous and K₀-convex on {x ≤ S}.

Proof. For x ∈ σ, we have g(x) > K₀ + g(S). Since y = S is feasible and since

K₀ + g(S) ≤ K₀ + g(y), for all y ≥ x,

we can conclude that the minimizer is y*(x) = S; therefore, f(x) = K₀ + g(S). For x ∈ Ω, g(x) ≤ K₀ + g(S). Then, for any y > x,

g(x) ≤ K₀ + g(S) ≤ K₀ + g(y).

Thus, y*(x) = x is the minimizer and f(x) = g(x).

To prove that f is continuous, it is sufficient to prove its continuity on any compact set Θ ⊂ ℝⁿ. By the assumption on g, the set Λ = {y*(x) | x ∈ Θ} of minimizers is compact. Then, g is uniformly continuous on the compact set Θ ∪ Λ; that is, for any ε > 0, there is a δ > 0 such that

|g(y₁) − g(y₂)| < ε, if y₁, y₂ ∈ Θ ∪ Λ and ‖y₁ − y₂‖ < δ.

Now, take any two points x₁, x₂ ∈ Θ such that ‖x₁ − x₂‖ < δ, and let y*(x₁) and y*(x₂) denote the corresponding minimizers.


Consider the two cases y*(x₁) > x₁ and y*(x₁) = x₁. In the first case, y*(x₁) = S and there exists y₂ ≥ x₂ such that ‖S − y₂‖ < δ. By this and (19), we have

f(x₁) = K₀ + g(S) ≥ K₀ + g(y₂) − ε ≥ f(x₂) − ε.  (20)

Likewise, in the second case, we have

f(x₁) = g(x₁) ≥ g(x₂) − ε ≥ f(x₂) − ε.  (21)

Thus, f(x₁) ≥ f(x₂) − ε in both cases. Similarly, we can prove that f(x₂) ≥ f(x₁) − ε. This completes the proof of the continuity of f.

Next, we prove that f is K₀-convex on {x ≤ S} by using Theorem 2.1. We need to examine three cases of (x, y), y ≥ x: (i) x ∈ σ, y ∈ σ; (ii) x ∈ Ω, y ∈ Ω; (iii) x ∈ σ, y ∈ Ω. In case (i), f(x) = f(y) = K₀ + g(S), so f is K₀-convex. In case (ii), f = g and g is K₀-convex, so f is K₀-convex. In case (iii), K₀ + g(y) ≥ K₀ + g(S) = f(x) and, from Lemma 5.1 and (19),

f(z) ≤ K₀ + g(S),  (22)

for every z ∈ [x, y]. So (x, f(x)) is visible from (y, f(y) + K₀). This completes the proof of the K₀-convexity of f.

Lemma 5.2 provides the essential induction step in a proof of an (s, S)-type policy. The following result, which has been proved by Kalin (Ref. 4) without using K-convexity, is expected to be derivable with the use of Lemma 5.2.
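A one-dimensional picture of Lemma 5.2 can be computed directly. The sketch below is our own illustration (the choices g(x) = x², K₀ = 4, and a finite grid standing in for the infimum are assumptions): here S = 0, the boundary point is s = −2 since g(−2) = g(S) + K₀, and the grid evaluation of f recovers the piecewise form (19), ordering up to S on σ = {x < s} and not ordering on Ω = [s, S].

```python
# 1D illustration of Lemma 5.2: g convex (hence K0-convex), coercive, min at S = 0.
K0 = 4.0
g = lambda x: x * x
S = 0.0
s = -2.0   # g(s) = g(S) + K0 = 4, so the boundary set is {s} = {-2}

def f(x):
    """f(x) = inf_{y >= x} { g(y) + K0*delta(y - x) }, approximated on a fine grid."""
    best = g(x)                       # y = x incurs no setup cost
    for j in range(1, 601):
        y = x + j / 100               # candidates y > x pay the setup cost K0
        best = min(best, g(y) + K0)
    return best

for i in range(-500, 1):              # points x <= S
    x = i / 100
    if x < s:                         # x in sigma: order up to S
        assert abs(f(x) - (K0 + g(S))) < 1e-9, x
    else:                             # x in Omega = [s, S]: do not order
        assert abs(f(x) - g(x)) < 1e-9, x
print("Piecewise form (19) verified on the grid")
```

The same computation hints at the open question of Section 6: in ℝⁿ the infimum must be taken over a multidimensional region, and preservation of K-convexity is no longer automatic.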


Theorem 5.1. Assume that g(x), representing the cost of inventory/backlog, is convex, continuous, and coercive. Let K₀ represent the joint setup cost of ordering. Then, there is a value S and a partition of the region {x ≤ S} into disjoint sets σ and Ω such that, given an initial inventory x₀ ≤ S, the optimal ordering policy for the infinite-horizon inventory problem is

y*(x) = S, if x ∈ σ;  y*(x) = x, if x ∈ Ω.

A finite-horizon version of Theorem 5.1 was proved by Liu and Esogbue (Ref. 6) under the condition that the order-up-to levels in different periods increase over time. But this condition is not convenient, as it cannot be verified a priori.

6. Future Research and Concluding Remarks

In applications, it will be important to find conditions under which K-convexity persists after a dynamic programming iteration. The key question is the following. If g is K-convex and

f(x) = inf_{z≥x} {g(z) + K(z − x)},

then is f a K-convex function? This is certainly true in ℝ¹. For ℝⁿ, n > 1, things get complicated unless we impose additional conditions such as monotone order-up-to levels. One direction for future research is to find conditions that would guarantee the K-convexity of f. Another direction would be to qualify K-convexity further in some appropriate way and then prove that this qualified K-convexity is preserved after a dynamic programming iteration.

In this paper, we have defined the notion of K-convexity in ℝⁿ and have developed the properties of K-convex functions. In the individual setup case, we have shown that g is K-convex in ℝⁿ with K = (0, K₁, …, Kₙ) if g = ∑_i gᵢ and each gᵢ is Kᵢ-convex in ℝ¹. In Section 4, we have shown that, if g is supermodular and Kᵢ-convex for each i ∈ {1, …, n}, then g is K_I-convex for each I ⊂ {1, …, n}; in particular, g is K-convex. We have also reviewed the literature on multiproduct inventory problems with fixed costs. Finally, in the joint setup case, we discussed the results obtained by Liu and Esogbue (Ref. 6) on K₀-convexity and its possible application in proving the optimality of a (σ, S) policy, shown by Kalin (Ref. 4) by other means.


References

1. Scarf, H., The Optimality of (s, S) Policies in the Dynamic Inventory Problem, Mathematical Methods in the Social Sciences, Edited by K. J. Arrow, S. Karlin, and P. Suppes, Stanford University Press, Stanford, California, pp. 196–202, 1960.
2. Veinott, A. F., Jr., On the Optimality of (s, S) Inventory Policies: New Conditions and a New Proof, SIAM Journal on Applied Mathematics, Vol. 14, pp. 1067–1083, 1966.
3. Johnson, E. L., Optimality and Computation of (σ, S) Policies in the Multi-Item Infinite-Horizon Inventory Problem, Management Science, Vol. 13, pp. 475–491, 1967.
4. Kalin, D., On the Optimality of (σ, S) Policies, Mathematics of Operations Research, Vol. 5, pp. 293–307, 1980.
5. Ohno, K., and Ishigaki, T., Multi-Item Continuous Review Inventory System with Compound Poisson Demands, Mathematical Methods of Operations Research, Vol. 53, pp. 147–165, 2001.
6. Liu, B., and Esogbue, A. O., Decision Criteria and Optimal Inventory Processes, Kluwer, Boston, Massachusetts, 1999.
7. Denardo, E. V., Dynamic Programming: Models and Applications, Prentice Hall, Englewood Cliffs, New Jersey, 1982.
8. Porteus, E., On the Optimality of Generalized (s, S) Policies, Management Science, Vol. 17, pp. 411–426, 1971.
9. Kolmogorov, A. N., and Fomin, S. V., Introductory Real Analysis, Dover Publications, New York, NY, 1970.
10. Topkis, D. M., Minimizing a Submodular Function on a Lattice, Operations Research, Vol. 26, pp. 305–321, 1978.
11. Veinott, A. F., Jr., Teaching Notes in Inventory Theory, Stanford University, Stanford, California, 1975.

Vol. 127, No. 1, pp. 71–88, October 2005 (© 2005)

DOI: 10.1007/s10957-005-6393-4

K -Convexity1 in n G. Gallego2 and S. P. Sethi3 Communicated by G. Leitmann

Abstract. We generalize the concept of K-convexity to an n-dimensional Euclidean space. The resulting concept of K-convexity is useful in addressing production and inventory problems where there are individual product setup costs and/or joint setup costs. We derive some basic properties of K-convex functions. We conclude the paper with some suggestions for future research. Key Words. K-convexity, inventory models, (s, S) policy, supermodular functions.

1. Introduction One of the most important results in inventory theory is the proof of the optimality of a so-called (s, S) policy when there is a ﬁxed setup or ordering cost in a single-product inventory problem. The policy is characterized by two numbers s and S, S ≥ s, such that, when the inventory position falls below the level s; an order is issued for a quantity that brings the inventory position up to level S; nothing is ordered otherwise. It is customary to say that an (s, S) policy is optimal even when the two parameters vary from period to period when the problem is either ﬁnite horizon or nonstationary or both. The idea to defer orders until the inventory level has dropped to a low enough level s so that the setup cost will be incurred when a large 1 Support

from Columbia University and University of Texas at Dallas is gratefully acknowledged. Helpful comments from Qi Feng are appreciated. 2 Professor, Department of Industrial Engineering and Operations Research, Columbia University, New York City, NY. 3 Ashbel Smith Professor, School of Management, University of Texas at Dallas, Richardson, Texas.

71 0022-3239/05/1000-0071/0 © 2005 Springer Science+Business Media, Inc.

72

JOTA: VOL. 127, NO. 1, OCTOBER 2005

enough amount (at least S − s) is ordered had been appealing to researchers in the 1950s. However, its proof eluded them because the value function in the dynamic programming formulation of the problem was neither convex nor concave. Finally, Scarf (Ref. 1) provided a proof by introducing a new class of functions called K-convex functions deﬁned on 1 . While Scarf (Ref. 1) had assumed holding/backlog cost to be convex, Veinott (Ref. 2), under somewhat different conditions, supplied a new proof of the optimality of the (s, S) policy. Veinott (Ref. 2) assumed the negative of the one-period expected holding/backlog cost to be unimodal and (nearly) rising over time; therefore, his conditions do not imply and are not implied by the conditions assumed by Scarf (Ref. 1). Since these classical works of Scarf (Ref. 1) and Veinott (Ref. 2), there have been few attempts to study multiproduct extensions of the problem. For our purpose, we will review only Johnson (Ref. 3), Kalin (Ref. 4), Ohno and Ishigaki (Ref. 5), and Liu and Esobgue (Ref. 6). Other works that we do not review are discussed by these authors. Johnson (Ref. 3) considers an n-product problem with a joint setup cost, which is incurred if one or more of the products is ordered. He uses the policy improvement method in Markov decision processes to show that the optimal policy in the stationary case is a (σ, S) policy, where σ ⊂ n and S ∈ n ; one orders up to the level S if the inventory position x ∈ σ and x ≤ S and one does not order if x ∈ / σ . Since nothing is speciﬁed when x ∈ σ, x S, the policy is proved to be optimal only when the initial inventory level is less than or equal to S. Kalin (Ref. 4) shows that, in addition, when x ∈ σ and x S, then ¯ ¯ there is S(x) ≥ x such that the optimal policy is to order S(x) − x. Such a policy can be termed a (σ, S(·)) policy. Kalin (Ref. 4) characterizes also the nonordering set σ c , the complement of σ in n . 
His proof assumes a number of conditions and uses the concept called (K, η)-quasiconvexity. Finally, Ohno and Ishigaki (Ref. 5) consider a continuous-time problem with Poisson demands. They use a policy improvement method to show that the (σ, S(·)) policy is optimal for their problem. They compute also the optimal policy in some cases and compare it with three well-known heuristic policies. Among all the multiproduct models with a ﬁxed joint setup cost, the paper by Liu and Esobgue (Ref. 6) is the only one that builds on Scarf’s proof. In order to accomplish this, they generalize the concept of K-convexity to n . They use this concept to prove the optimality of an (st , St ) policy in a ﬁnite-horizon case, where t denotes the time period, under the condition that the initial inventory level x0 ≤ S0 and that St increases with t. Since this second condition cannot be veriﬁed a priori, their proof cannot be considered complete.

JOTA: VOL. 127, NO. 1, OCTOBER 2005

73

While we are not able to complete their proof, we propose a fairly general definition of K-convexity in ℝⁿ, which includes the cases of joint setups as well as individual setups. Then, we develop some properties of K-convex functions that we hope will lead to the solution of a general multiproduct inventory problem with different kinds of setup costs.

The plan of the paper is as follows. In Section 2, we provide our definition of K-convex functions defined on ℝⁿ and derive some properties of such functions. In Section 3, we consider the individual setup case and show that the sum of independent K-convex functions is K-convex. In Section 4, we establish a result for supermodular K-convex functions. In Section 5, we consider the joint setup case and discuss the results obtained by Liu and Esogbue (Ref. 6) and Kalin (Ref. 4) concerning the optimality of a (σ, S) policy. We conclude the paper in Section 6 by discussing some important open problems for research.

2. Definitions and Some Properties

In this section, we introduce some definitions of K-convexity and derive some properties of K-convex functions. We begin with the classical definition of K-convexity in the one-dimensional space given by Scarf (Ref. 1).

Scarf's Definition in ℝ¹. A function g : ℝ¹ → ℝ¹ is K-convex if

g(u) + z[(g(u) − g(u − b))/b] ≤ g(u + z) + K,    (1)

for any u, z ≥ 0, and b > 0.

Next, we propose a fairly general extension of the above concept to ℝⁿ. Define K = (K0, K1, ..., Kn) to be a vector of n + 1 nonnegative constants. Let us define a function K(·) : ℝⁿ₊ → ℝ¹₊ as follows:

K(x) = K0 δ(e'x) + Σ_{i=1}^{n} Ki δ(xi),    (2)

where e = (1, 1, ..., 1) ∈ ℝⁿ, ℝⁿ₊ = {x ∈ ℝⁿ | x ≥ 0}, δ(0) = 0, and δ(z) = 1 for all z > 0.
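To fix ideas, the setup-cost function (2) can be written out directly. The following sketch is ours (the function names are illustrative, not from the paper):

```python
import numpy as np

def delta(z: float) -> int:
    """Indicator from (2): delta(0) = 0 and delta(z) = 1 for z > 0."""
    return 1 if z > 0 else 0

def setup_cost(x: np.ndarray, K: np.ndarray) -> float:
    """K(x) = K0*delta(e'x) + sum_i Ki*delta(x_i), for x >= 0,
    where K = (K0, K1, ..., Kn) holds the n+1 nonnegative setup costs."""
    joint = K[0] * delta(float(np.sum(x)))              # K0 if anything is ordered
    individual = sum(K[i + 1] * delta(float(xi)) for i, xi in enumerate(x))
    return float(joint + individual)

# Example with joint setup K0 = 10 and individual setups K1 = 2, K2 = 3.
K = np.array([10.0, 2.0, 3.0])
print(setup_cost(np.array([0.0, 0.0]), K))  # 0.0: nothing ordered
print(setup_cost(np.array([1.0, 0.0]), K))  # 12.0 = K0 + K1
print(setup_cost(np.array([1.0, 5.0]), K))  # 15.0 = K0 + K1 + K2
```

The triangle inequality (3) and the zero-degree homogeneity (4) of Lemmas 2.1 and 2.2 below can be checked on sample points with the same helper.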


This function K(·) satisfies the triangle inequality and is homogeneous of degree zero, as shown in Lemmas 2.1 and 2.2 below.

Lemma 2.1. The function K(·) satisfies the triangle inequality; i.e., for all x ≥ 0, y ≥ 0, we have

K(x + y) ≤ K(x) + K(y).    (3)

Proof. It follows from (2) and the fact that δ(u + v) ≤ δ(u) + δ(v) for u ≥ 0, v ≥ 0.

Lemma 2.2. The function K(·) is homogeneous of degree 0; i.e., for all x ≥ 0 and any constant b > 0, we have

K(bx) = K(x).    (4)

Proof. It follows from the fact that δ(bu) = δ(u) for u ≥ 0.

Definition 2.1. A function g : ℝⁿ → ℝ¹ is (K0, K1, ..., Kn)-convex, or simply K-convex, if

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K(y − x)],    (5)

for all x, y with y ≥ x and all λ ∈ [0, 1], where as usual λ̄ ≡ 1 − λ.

This definition is motivated by the joint replenishment problem, where a setup cost K0 is incurred whenever an order is placed and an individual setup cost Ki is incurred for each item i included in the order. There are a number of important special cases that we note in what follows. The simplest is the case of one product, or n = 1, where K0 + K1 can be considered to be the setup cost and (5) can be written as

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K · δ(y − x)],    (6)

where K = K0 + K1. In this case, (6) is equivalent to the concept of K-convexity in ℝ¹ defined by Scarf (Ref. 1); see also Denardo (Ref. 7) and Porteus (Ref. 8).
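Scarf's one-dimensional inequality (1) can be probed numerically. The sketch below (the helper names are ours) samples (1) for a downward unit step, the standard example of a function that is K-convex for K = 1 but not convex:

```python
import numpy as np

def scarf_violations(g, K, us, zs, bs, tol=1e-9):
    """Count sampled violations of Scarf's inequality (1):
    g(u) + z*(g(u) - g(u - b))/b <= g(u + z) + K, for z >= 0, b > 0."""
    bad = 0
    for u in us:
        for z in zs:
            for b in bs:
                lhs = g(u) + z * (g(u) - g(u - b)) / b
                if lhs > g(u + z) + K + tol:
                    bad += 1
    return bad

# A downward step of height 1 at the origin: 1-convex but clearly not convex.
step = lambda x: 1.0 if x < 0 else 0.0

us = np.linspace(-3, 3, 25)
zs = np.linspace(0, 3, 13)
bs = np.linspace(0.25, 3, 12)
print(scarf_violations(step, 1.0, us, zs, bs))      # 0: the step is 1-convex
print(scarf_violations(step, 0.4, us, zs, bs) > 0)  # True: it is not 0.4-convex
```

The second call shows that the setup-cost constant matters: a downward jump larger than K destroys K-convexity.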


The next special case arises when Ki = 0, i = 1, 2, ..., n, i.e., K = (K0, 0, 0, ..., 0). In this case, a setup cost K0 is incurred whenever any one or more of the products are ordered. Here,

K(x) = K0 δ(e'x);    (7)

this case is referred to as the joint setup cost case. Finally, there is the case in which K0 = 0. Here, there is no joint setup, but there are individual setups. Thus,

K(x) = Σ_{i=1}^{n} Ki δ(xi).    (8)

The individual setup case will be discussed further in Section 3. The joint setup case will be treated in Section 5.

Definition 2.1 admits a simple geometric interpretation related to the concept of visibility; see, for example, Kolmogorov and Fomin (Ref. 9). Let a ≥ 0. A point (x, f(x)) is said to be visible from (y, f(y) + a) if all intermediate points (λx + λ̄y, f(λx + λ̄y)), 0 ≤ λ ≤ 1, lie below the line segment joining (x, f(x)) and (y, f(y) + a). We can now obtain the following geometric characterization of K-convexity.

Theorem 2.1. A function g is K-convex if and only if (x, g(x)) is visible from (y, g(y) + K(y − x)) for all y ≥ x.

Proof. By K-convexity, the function g over the segment λx + (1 − λ)y, λ ∈ [0, 1], lies below the line segment joining (x, g(x)) and (y, g(y) + K(y − x)). Since y ≥ x, K(y − x) ≥ 0. This completes the proof.

Definition 2.2. A function g : ℝⁿ → ℝ¹ is K-convex if

g(u) + (1/µ)[g(u) − g(u − µh)] ≤ g(u + h) + K(h),    (9)

for every u ∈ ℝⁿ, h ∈ ℝⁿ, h > 0, and µ ∈ ℝ¹, µ > 0. Here, h > 0 means h ≥ 0 and hi > 0 for at least one i ∈ {1, 2, ..., n}. Note that, when h = 0, (9) is satisfied trivially. Note also that, in ℝ¹, this definition reduces to the Scarf definition by using the transformation h = z and µ = z/b.

Theorem 2.2. Definitions 2.1 and 2.2 are equivalent.


Proof. It is sufficient to show that (5) can be reduced to (9) and vice versa. This can be seen by using the transformation

λ = 1/(1 + µ), x = u − µh, y = u + h,

and noting that, on account of Lemma 2.2,

K(y − x) = K((1 + µ)h) = K(h).

The following usual properties of K-convex functions on ℝ¹ extend easily to ℝⁿ.

Property 2.1. If g : ℝⁿ → ℝ¹ is K-convex, then it is L-convex for any L ≥ K, where L = (L0, L1, ..., Ln). In particular, if g is convex, then it is also K-convex for any K ≥ 0.

Proof. Since L ≥ K, it follows from (5) that

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K(y − x)] ≤ λg(x) + λ̄[g(y) + L(y − x)],

for all x ≤ y and all λ ∈ [0, 1], where the function L(·) : ℝⁿ₊ → ℝ¹₊ is given by

L(x) = L0 δ(e'x) + Σ_{i=1}^{n} Li δ(xi).

Property 2.2. If g¹ : ℝⁿ → ℝ¹ is K-convex and g² : ℝⁿ → ℝ¹ is L-convex, then, for α ≥ 0 and β ≥ 0, g = αg¹ + βg² is (αK + βL)-convex.

Proof. By definition, we have

g¹(λx + λ̄y) ≤ λg¹(x) + λ̄[g¹(y) + K(y − x)],
g²(λx + λ̄y) ≤ λg²(x) + λ̄[g²(y) + L(y − x)],

for all x ≤ y and all λ ∈ [0, 1]. Then,

g(λx + λ̄y) = αg¹(λx + λ̄y) + βg²(λx + λ̄y)
  ≤ λ[αg¹(x) + βg²(x)] + λ̄[αg¹(y) + βg²(y) + αK(y − x) + βL(y − x)]
  = λg(x) + λ̄[g(y) + (αK + βL)(y − x)],


where the function (αK + βL)(·) : ℝⁿ₊ → ℝ¹₊ is defined analogously to (2).

Property 2.3. If g : ℝⁿ → ℝ¹ is K-convex and ξ = (ξ1, ξ2, ..., ξn) is a random vector such that E|g(x − ξ)| < ∞ for all x, then Eg(x − ξ) is also K-convex.

Proof. Since g is K-convex, we have, for any z ∈ ℝⁿ,

g(λ(x − z) + λ̄(y − z)) ≤ λg(x − z) + λ̄[g(y − z) + K(y − x)],    (10)

for all x ≤ y and all λ ∈ [0, 1]. Since E|g(x − ξ)| < ∞, we can take expectations on both sides of (10) to obtain

Eg(λx + λ̄y − ξ) ≤ λEg(x − ξ) + λ̄[Eg(y − ξ) + K(y − x)].

Therefore, Eg(x − ξ) is K-convex.

In addition, the following property follows immediately from the definition of K-convexity.

Property 2.4. Let g : ℝⁿ → ℝ¹. For x, y ∈ ℝⁿ with y ≥ x, let f(θ) = g(x + θ(y − x)). Then, g is K-convex if and only if, for every such pair x ≤ y, f is K(y − x)-convex in the sense of Scarf (Ref. 1).

Proof. Assume that f is not K(y − x)-convex. Then, there exist θ1 < θ2 and λ ∈ [0, 1] such that

f(λθ1 + λ̄θ2) > λf(θ1) + λ̄[f(θ2) + K(y − x)].

This implies that

g(λx̃ + λ̄ỹ) > λg(x̃) + λ̄[g(ỹ) + K(y − x)],

where

x̃ = x + θ1(y − x) and ỹ = x + θ2(y − x) > x̃,

which, since K(ỹ − x̃) = K(y − x) by Lemma 2.2, contradicts the K-convexity of g.

Assume now that g is not K-convex. Then, there exist x, y with x ≤ y and λ ∈ [0, 1] such that

g(λx + λ̄y) > λg(x) + λ̄[g(y) + K(y − x)].


Let θ1 = 0 and θ2 = 1. Then, the above inequality implies that

f(λθ1 + λ̄θ2) > λf(θ1) + λ̄[f(θ2) + K(y − x)],

which contradicts the K(y − x)-convexity of f.

Theorem 2.3. Let g : ℝⁿ → ℝ¹ be K-convex. Let S ∈ ℝⁿ be a finite global minimizer of g. Let x ≤ S and define f(θ) = g(x + θ(S − x)). Let θ1 < 1 be such that f(θ1) = f(1) + K(S − x). Then, f(θ) is nonincreasing over θ < θ1; therefore, f(θ) ≥ f(1) + K(S − x) for all θ ≤ θ1.

Proof. From Property 2.4, f(θ) is K(S − x)-convex. Note from Lemma 2.2 that K(S − x) depends only on the direction of the line joining S and x, and not on the magnitude of S − x. The result then follows from the standard one-dimensional case.

Corollary 2.1. Let θ1 be as defined in Theorem 2.3. Then, for any w = x + θ(S − x), θ ≤ θ1,

g(w) ≥ g(S) + K(S − w).

In Section 3, we study the special case of individual setups.

3. Individual Setup Case

The individual setup case is characterized by

K(x) = Σ_{i=1}^{n} Ki δ(xi).

In this case, we show that the sum of independent Ki-convex functions is K-convex.

Theorem 3.1. Let gi : ℝ¹ → ℝ¹ be Ki-convex for i = 1, ..., n. Then, g(x1, ..., xn) = Σ_{i=1}^{n} gi(xi) : ℝⁿ → ℝ¹ is (0, K1, ..., Kn)-convex, i.e., K-convex with K = (0, K1, ..., Kn).

Proof. Let x = (x1, ..., xn) and y = (y1, ..., yn), with y ≥ x, be any two points. Define

ψ = λx + λ̄y.

See Figure 1 for an illustration of these points in ℝ².

Fig. 1. Grid in ℝ².

By definition, we have

gi(ψi) ≤ λgi(xi) + λ̄[gi(yi) + Ki δ(yi − xi)];    (11)

adding over i results in

g(ψ) ≤ λg(x) + λ̄[g(y) + K(y − x)],

completing the proof.
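Theorem 3.1 can be sanity-checked on a grid. The sketch below (our illustrative helper names, not the paper's) tests inequality (5) for the separable sum of a 1-convex downward step and a convex quadratic, with K = (0, 1, 0):

```python
import numpy as np

def check_def21(g, K_fn, pts, lams, tol=1e-9):
    """Count sampled violations of Definition 2.1:
    g(l*x + (1-l)*y) <= l*g(x) + (1-l)*(g(y) + K(y - x)) for y >= x."""
    bad = 0
    for x in pts:
        for y in pts:
            if np.all(y >= x):
                for lam in lams:
                    lhs = g(lam * x + (1 - lam) * y)
                    rhs = lam * g(x) + (1 - lam) * (g(y) + K_fn(y - x))
                    if lhs > rhs + tol:
                        bad += 1
    return bad

# g1: downward unit step at 1 (1-convex but not convex); g2: convex quadratic.
g1 = lambda t: 1.0 if t < 1.0 else 0.0
g2 = lambda t: t * t
g = lambda v: g1(v[0]) + g2(v[1])   # separable sum, as in Theorem 3.1

# K = (0, K1, K2) = (0, 1, 0): no joint setup, individual setups only.
K_fn = lambda d: 1.0 * (d[0] > 0) + 0.0 * (d[1] > 0)

pts = [np.array([a, b]) for a in np.linspace(-2, 3, 6) for b in np.linspace(-2, 2, 5)]
lams = np.linspace(0.0, 1.0, 11)
print(check_def21(g, K_fn, pts, lams))  # 0: no sampled violation
```

Replacing K_fn by the zero function produces violations, since the step component g1 is not convex; the individual setup cost K1 = 1 is what makes the sum K-convex.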

4. K-Convexity and Supermodularity

The separability assumed in Theorem 3.1 may be quite restrictive. We extend the result by relaxing separability to a diagonal property. A function g : ℝⁿ → ℝ¹ is supermodular if, for any two points x and y,

g(x) + g(y) ≤ g(x ∧ y) + g(x ∨ y),

where the componentwise minimum of x and y, called the meet of x and y, is denoted by

x ∧ y = (x1 ∧ y1, x2 ∧ y2, ..., xn ∧ yn),

and the componentwise maximum of x and y, called the join of x and y, is denoted by

x ∨ y = (x1 ∨ y1, x2 ∨ y2, ..., xn ∨ yn).

The reader is referred to Topkis (Ref. 10) for a general definition of a supermodular function on a lattice and to the teaching notes of Veinott (Ref. 11).

Let g : ℝⁿ → ℝ¹. For each z ∈ ℝⁿ and each subset I ⊂ N ≡ {1, ..., n}, let gI(·; z) : ℝ^|I| → ℝ¹ be the function that results from freezing the components of z that are not in I. The domain of gI consists of vectors of the form Σ_{i∈I} xi ei, where ei is the ith unit vector. Sometimes we will abuse notation and write gI(z) as shorthand for gI(·; z), and gi(z) instead of the more cumbersome g{i}(·; z). Then, g∅(z) is just the constant g(z), while gi(z) = g(z1, ..., z_{i−1}, ·, z_{i+1}, ..., zn).

Let K(·) : ℝⁿ₊ → ℝ¹₊ be a function with the property that

KI∪k(xI∪k) ≥ KI(xI) + Kk(xk),    (12)

for all k ∉ I, I ≠ N. A function g is said to be KI-convex if, for all x, y ∈ ℝ^|I| with x ≤ y, all z ∈ ℝⁿ, and all λ ∈ (0, 1), we have

gI(λx + λ̄y; z) ≤ λgI(x; z) + λ̄[gI(y; z) + KI(y − x)].

Note that, for I = {1, ..., n}, KI-convexity is the same as K-convexity.

Theorem 4.1. If g is supermodular and g is Ki-convex for all i ∈ {1, ..., n}, then g is KI-convex for all I ⊂ {1, ..., n}.

Proof. The proof is by induction on the cardinality of I. By hypothesis, the result holds for |I| = 1. Assume that g is KI-convex for all |I| ≤ m for some m < n. We will show that the result holds for m + 1. For this purpose, consider a subset of {1, ..., n} of m + 1 distinct components. We can write this set as J = I ∪ k, for some k ∉ I and a set I of cardinality m. To show that g is KJ-convex, we need to show that

gJ(λx + λ̄y; z) ≤ λgJ(x; z) + λ̄[gJ(y; z) + KJ(y − x)],

for all x, y ∈ ℝ^{m+1} such that x ≤ y.


Let

ψ = λx + λ̄y, Δ̃ = y − x, Δ = λ̄Δ̃,

so that ψ = x + Δ. Let ΔI be the vector that results from Δ by setting to zero the component corresponding to item k, and let Δk be the vector that results from Δ by setting to zero the components corresponding to the items in I. Similar definitions apply to the vectors Δ̃I and Δ̃k.

Consider the vectors x + ΔI and x + ΔI + Δ̃k. Clearly,

ψ = x + Δ = x + ΔI + λ̄Δ̃k = λ(x + ΔI) + λ̄(x + ΔI + Δ̃k).

Consequently,

gJ(ψ; z) ≤ λgJ(x + ΔI; z) + λ̄[gJ(x + ΔI + Δ̃k; z) + Kk(Δ̃k)].    (13)

Now, consider the vectors x + Δk and x + Δk + Δ̃I. Once again,

ψ = x + Δ = x + Δk + λ̄Δ̃I = λ(x + Δk) + λ̄(x + Δk + Δ̃I).

Consequently,

gJ(ψ; z) ≤ λgJ(x + Δk; z) + λ̄[gJ(x + Δk + Δ̃I; z) + KI(Δ̃I)].    (14)

By the supermodularity of g, we have the inequalities

gJ(x + ΔI; z) + gJ(x + Δk; z) ≤ gJ(x; z) + gJ(x + Δ; z) = gJ(x; z) + gJ(ψ; z),
gJ(x + ΔI + Δ̃k; z) + gJ(x + Δk + Δ̃I; z) ≤ gJ(ψ; z) + gJ(y; z).

Adding inequalities (13) and (14) and using these two inequalities, we obtain

2gJ(ψ; z) ≤ λ[gJ(x; z) + gJ(ψ; z)] + λ̄[gJ(ψ; z) + gJ(y; z) + Kk(Δ̃k) + KI(Δ̃I)],

or equivalently,

gJ(ψ; z) ≤ λgJ(x; z) + λ̄[gJ(y; z) + Kk(Δ̃k) + KI(Δ̃I)].

Finally, using the fact that

Kk(Δ̃k) + KI(Δ̃I) = Kk(Δ̃) + KI(Δ̃) ≤ KJ(Δ̃)

establishes that g is KJ-convex. Since I and k were arbitrary, g is KI-convex for all subsets I of cardinality |I| = m + 1, and therefore for all subsets I of {1, ..., n}.

In Section 4.1, we provide two examples of K(·) that satisfy property (12).

4.1. Examples of Functions K(·) Satisfying (12). One trivial example is the case where

K(x) = Σ_{j=1}^{n} Kj δ(xj),

with δ denoting the indicator function such that δ(xj) = 1 if xj > 0 and δ(xj) = 0 otherwise. Thus, the result here implies the result that we obtained before, under the weaker assumption of supermodularity (every separable function is supermodular). The result is actually somewhat stronger, because it holds for all subsets I, not only for the full set I = {1, ..., n}.


While the definition of K(·) satisfying (12) does not allow for the traditional joint replenishment function

K(x) = Σ_{j=1}^{n} Kj δ(xj) + K0 δ(Σ_j xj),

it does allow other forms of joint setup costs. For example, let

K(x) = Σ_{j=1}^{n} Kj δ(xj)

if at most one of the components is positive, and let

K(x) = Σ_{j=1}^{n} Kj δ(xj) + K0

if two or more of the components are ordered.

5. Joint Setup Case

In the joint setup case, we have K(x) as defined in (7). Thus, we can rewrite (5) as

g(λx + λ̄y) ≤ λg(x) + λ̄[g(y) + K0 δ(e'(y − x))]    (15)

in our definition of K-convexity in the joint setup case. For ease of exposition, we shall refer to this special case as K0-convexity, where K0 = (K0, 0, 0, ..., 0) ∈ ℝⁿ⁺¹. We note also that Definition 2.2 reduces to the definition of K0-convexity proposed by Liu and Esogbue (Ref. 6).

Consider a continuous K-convex function g which is coercive, i.e.,

lim_{||x||→∞} g(x) = ∞.

Then, it has a global minimum point S. Define the set ∂σ as follows:

∂σ = {x ≤ S | g(x) = g(S) + K0}.    (16)

Clearly, ∂σ is nonempty and bounded because of the coercivity of g. Now, define two regions. The first region is

Ω = {x ≤ S | ∃s ∈ ∂σ such that x ∈ sS},    (17)

where sS is the line segment joining s and S. Clearly, ∂σ ⊂ Ω. The second region is

σ = {x ≤ S | x ∉ Ω}.    (18)

Note that σ ∩ Ω = σ ∩ ∂σ = ∅. See Figure 2.

Lemma 5.1. If g is K0-convex, continuous, and coercive, then
(i) x ∈ Ω ⇒ g(x) ≤ K0 + g(S),
(ii) x ∈ σ ⇒ g(x) > K0 + g(S).

Proof. (i) Suppose there is an x ∈ Ω such that g(x) > K0 + g(S). By the definition of Ω, there is an s ∈ ∂σ such that x ∈ sS and g(s) = K0 + g(S). Thus, g(x) > g(s), so the point (s, g(s)) is not visible from (S, g(S) + K0) = (s + (S − s), g(S) + K0 δ(e'(S − s))), contradicting the K0-convexity of g by Theorem 2.1. (ii) This follows from the fact that σ ∩ Ω = ∅.

Fig. 2. Minimum point S, Ω, ∂σ, and σ.


Lemma 5.2. If g is continuous, coercive, and K0-convex, then

f(x) = inf_{y ≥ x} {g(y) + K0 δ(e'(y − x))}
     = K0 + g(S), if x ∈ σ,
     = g(x), if x ∈ Ω,    (19)

where S is the global minimum of g. Furthermore, f is continuous and K0-convex on {x ≤ S}.

Proof. For x ∈ σ, we have g(x) > K0 + g(S) by Lemma 5.1. Since y = S is feasible and since

K0 + g(S) ≤ K0 + g(y), for all y ≥ x,

we conclude that the minimum is attained at y*(x) = S; therefore, f(x) = K0 + g(S). For x ∈ Ω, g(x) ≤ K0 + g(S). Then, for any y > x,

g(x) ≤ K0 + g(S) ≤ K0 + g(y).

Thus, y*(x) = x attains the minimum and f(x) = g(x).

To prove that f is continuous, it is sufficient to prove its continuity on any compact set Γ ⊂ ℝⁿ. By the assumptions on g, the set Γ* = {y*(x) | x ∈ Γ} of optimal solutions is bounded, so g is uniformly continuous on Γ ∪ Γ*. That is, for any ε > 0, there is a δ > 0 such that

|g(y1) − g(y2)| < ε, if y1, y2 ∈ Γ ∪ Γ* and ||y1 − y2|| < δ.

Now, take any two points x1, x2 ∈ Γ such that ||x1 − x2|| < δ, and let y*(x1) and y*(x2) denote the corresponding solutions. Consider the two cases y*(x1) > x1 and y*(x1) = x1. In the first case, y*(x1) = S and there exists y2 ≥ x2 such that ||S − y2|| < δ. By this and (19), we have

f(x1) = K0 + g(S) ≥ K0 + g(y2) − ε ≥ f(x2) − ε.    (20)

Likewise, in the second case, we have

f(x1) = g(x1) ≥ g(x2) − ε ≥ f(x2) − ε.    (21)

Thus, f(x1) ≥ f(x2) − ε in both cases. Similarly, we can prove that f(x2) ≥ f(x1) − ε. This completes the proof of the continuity of f.

Next, we prove that f is K0-convex on {x ≤ S} by using Theorem 2.1. We need to examine three cases of (x, y), y ≥ x: (i) x ∈ σ, y ∈ σ; (ii) x ∈ Ω, y ∈ Ω; (iii) x ∈ σ, y ∈ Ω. In case (i), f(x) = f(y) = K0 + g(S), and f(z) ≤ K0 + g(S) for all z, so f is K0-convex. In case (ii), f = g and g is K0-convex, so f is K0-convex. In case (iii), K0 + g(y) ≥ K0 + g(S) = f(x) and, from Lemma 5.1,

f(z) ≤ K0 + g(S),    (22)

for every z ∈ [x, y], since f(z) = K0 + g(S) for z ∈ σ and f(z) = g(z) ≤ K0 + g(S) for z ∈ Ω. So every (z, f(z)) lies below the line segment joining (x, f(x)) and (y, f(y) + K0). This completes the proof of the K0-convexity of f.

Lemma 5.2 provides the essential induction step in a proof of an (s, S)-type policy. The following result, which has been proved by Kalin (Ref. 4) without using K-convexity, is expected to be derivable with the use of Lemma 5.2.


Theorem 5.1. Assume that the function g(x) representing the cost of inventory/backlog is convex, continuous, and coercive. Let K0 represent the joint setup cost of ordering. Then, there is a value S and a partition of the region {x ≤ S} into disjoint sets σ and Ω such that, given an initial inventory x0 ≤ S, the optimal ordering policy for the infinite-horizon inventory problem is

y*(x) = S, if x ∈ σ;  y*(x) = x, if x ∈ Ω.

A finite-horizon version of Theorem 5.1 was proved by Liu and Esogbue (Ref. 6) under the condition that the order-up-to levels in different periods increase over time. But this condition is not convenient, as it cannot be verified a priori.

6. Future Research and Concluding Remarks

In applications, it will be important to find conditions under which K-convexity persists after a dynamic programming iteration. The key question is the following. If g is K-convex and

f(x) = inf_{z ≥ x} {g(z) + K(z − x)},

is f a K-convex function? This is certainly true in ℝ¹. For ℝⁿ, n > 1, things get complicated unless we impose additional conditions such as monotone order-up-to levels. One direction for future research is to find conditions that would guarantee the K-convexity of f. Another direction would be to further qualify K-convexity in some appropriate way and then prove that this qualified K-convexity is preserved after a dynamic programming iteration.

In this paper, we have defined the notion of K-convexity in ℝⁿ and have developed the properties of K-convex functions. In the individual setup case, we have shown that g is K-convex in ℝⁿ with K = (0, K1, ..., Kn) if g = Σᵢ gᵢ and each gᵢ is Kᵢ-convex in ℝ¹. In Section 4, we have shown that, if g is supermodular and Ki-convex for each i ∈ {1, ..., n}, then g is KI-convex for each I ⊂ {1, ..., n}; in particular, g is K-convex. We have also reviewed the literature on multiproduct inventory problems with fixed costs. Finally, in the joint setup case, we have discussed the results obtained by Liu and Esogbue (Ref. 6) on K0-convexity and its possible application in proving the optimality of the (σ, S) policy shown by Kalin (Ref. 4) by other means.


References

1. Scarf, H., The Optimality of (s, S) Policies in the Dynamic Inventory Problem, Mathematical Methods in the Social Sciences, Edited by K. J. Arrow, S. Karlin, and P. Suppes, Stanford University Press, Stanford, California, pp. 196–202, 1960.
2. Veinott, A. F., Jr., On the Optimality of (s, S) Inventory Policies: New Conditions and a New Proof, SIAM Journal on Applied Mathematics, Vol. 14, pp. 1067–1083, 1966.
3. Johnson, E. L., Optimality and Computation of (σ, S) Policies in the Multi-Item Infinite-Horizon Inventory Problem, Management Science, Vol. 13, pp. 475–491, 1967.
4. Kalin, D., On the Optimality of (σ, S) Policies, Mathematics of Operations Research, Vol. 5, pp. 293–307, 1980.
5. Ohno, K., and Ishigaki, T., Multi-Item Continuous Review Inventory System with Compound Poisson Demands, Mathematical Methods of Operations Research, Vol. 53, pp. 147–165, 2001.
6. Liu, B., and Esogbue, A. O., Decision Criteria and Optimal Inventory Processes, Kluwer, Boston, Massachusetts, 1999.
7. Denardo, E. V., Dynamic Programming: Models and Applications, Prentice Hall, Englewood Cliffs, New Jersey, 1982.
8. Porteus, E. L., On the Optimality of Generalized (s, S) Policies, Management Science, Vol. 17, pp. 411–426, 1971.
9. Kolmogorov, A. N., and Fomin, S. V., Introductory Real Analysis, Dover Publications, New York, NY, 1970.
10. Topkis, D. M., Minimizing a Submodular Function on a Lattice, Operations Research, Vol. 26, pp. 305–321, 1978.
11. Veinott, A. F., Jr., Teaching Notes in Inventory Theory, Stanford University, Stanford, California, 1975.
